\section{Notation overview}
\label{sec:notation}

\begin{center}
\begin{tabular}{cl}
\toprule
\textbf{Symbol} & \textbf{Description}\\
\midrule
	$h(\bm{x}_i,\bm{\theta})$ & The model, with a softmax output \\
	$h_k(\bm{x}_i,\bm{\theta})$ & The model's $k$th scaled logit \\
	$D_{\bm{j}}\left( f \right)$ & The directional derivative of $f$ along $\bm{j}$ \\
	$\mathbb{P}_{\text{data}}$ & Probability distribution of the original data \\
	$\bm{x}_i$ & An input data sample, where $\bm{x}_i\sim \mathbb{P}_{\text{data}}$ \\
	$\bm{y}_i$ & The label that corresponds to the sample $\bm{x}_i$ \\
	$\eta$ & Learning rate \\
	$n$ & Number of classes \\
	$\bm{\theta}$ & A model's trainable parameters \\
	$\bm{\lambda}$ & The loss function's parameters \\
	$\mathcal{L}(\bm{x}_i,\bm{y}_i,\bm{\theta})$ & The loss function \\
\bottomrule
\end{tabular}
\end{center}


\section{Third-order TaylorGLO loss function in the zero training error regime}
\label{sec:notation_tayzeroerr}
\begin{equation}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = a + b y_{ik} + c y_{ik}^2 ,
\end{equation}
where
\begin{eqnarray}
a &=& \lambda_2 - 2 \lambda_1 \lambda_3 - \lambda_5 \lambda_0 + 2\lambda_1\lambda_6\lambda_0 + \lambda_7 \lambda_0^2 + 3 \lambda_4 \lambda_1^2 \\
b &=& 2\lambda_3 - 2\lambda_6 \lambda_0 - 2\lambda_1\lambda_6 + \lambda_5 - 2\lambda_7\lambda_0 - 6\lambda_4\lambda_1 \\
c &=& 2\lambda_6 + \lambda_7 + 3\lambda_4 .
\end{eqnarray}


\section{Behavior at the null epoch}
\label{sec:nullepoch}
Consider the first epoch of training.
Assume all weights are randomly initialized:
\begin{equation}
\forall k \in [1,n], \text{where}\; n\geq 2:\; \mathop{\mathbb{E}}_i \left[ h_k(\bm{x}_i,\bm{\theta}) \right] = \frac{1}{n} .
\end{equation}
That is, logits are distributed with high entropy. Behavior at the null epoch can then be defined piecewise for target vs.\ non-target logits for each loss function.

In the case of {\bf Mean squared error (MSE),}
\begin{equation}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = \left\{
    \begin{array}{rl}
     - 2n^{-1} & \quad y_{ik} = 0 \\
     2 - 2n^{-1} & \quad y_{ik} = 1 .
    \end{array}
  \right.
\end{equation}
Since $n\geq2$, the $y_{ik} = 1$ case will always be positive, while the $y_{ik} = 0$ case will always be negative. Thus, target scaled logits will be maximized and non-target scaled logits minimized.

In the case of {\bf Cross-entropy loss,}
\begin{equation}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = \left\{
    \begin{array}{rl}
     0 & \quad y_{ik} = 0 \\
     n & \quad y_{ik} = 1 .
    \end{array}
  \right.
\end{equation}
Target scaled logits are maximized and, consequently, non-target scaled logits minimized as a result of the softmax function.

Similarly, in the case of {\bf Baikal loss,}
\begin{equation}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = \left\{
    \begin{array}{rl}
     n & \quad y_{ik} = 0 \\
     n + n^2 & \quad y_{ik} = 1.
    \end{array}
  \right.
\end{equation}
Target scaled logits are maximized and, consequently, non-target scaled logits minimized as a result of the softmax function (since the $y_{ik} = 1$ case dominates).


In the case of the {\bf Third-order TaylorGLO loss,} since behavior is highly dependent on $\bm{\lambda}$, consider the concrete loss function used above:
\begin{equation}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = \left\{
    \begin{array}{rl}
     -373.917 - 130.264 \;h_k(\bm{x}_i,\bm{\theta}) - 11.2188 \;h_k(\bm{x}_i,\bm{\theta})^2 & \quad y_{ik} = 0 \\
     -372.470735 - 131.47 \;h_k(\bm{x}_i,\bm{\theta}) - 11.2188 \;h_k(\bm{x}_i,\bm{\theta})^2 & \quad y_{ik} = 1 .
    \end{array}
  \right.
\end{equation}
Note that Equation~\ref{eqn:tayk3_zeroerrorclassrule} is a special case of this behavior where $h_k(\bm{x}_i,\bm{\theta}) = y_{ik}$. Let us substitute $h_k(\bm{x}_i,\bm{\theta}) = \frac{1}{n}$ (i.e., the expected value of a logit at the null epoch):
\begin{equation}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = \left\{
    \begin{array}{rl}
     -373.917 - 130.264 \;n^{-1} - 11.2188 \;n^{-2} & \quad y_{ik} = 0 \\
     -372.470735 - 131.47 \;n^{-1} - 11.2188 \;n^{-2} & \quad y_{ik} = 1 .
    \end{array}
  \right.
\end{equation}
Since this loss function was found on CIFAR-10, a 10-class image classification task, $n=10$:
\begin{equation}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = \left\{
    \begin{array}{rl}
     -387.055588 & \quad y_{ik} = 0 \\
     -385.729923 & \quad y_{ik} = 1 .
    \end{array}
  \right.
\end{equation}
Since both cases of $\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta})$ are negative, this behavior implies that all scaled logits will be minimized.
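As a quick numerical sanity check, the two cases can be evaluated directly. The following sketch (illustrative Python; the coefficients are copied from the concrete loss function above) confirms that both coefficients are negative at the null epoch, with the non-target case more strongly negative:

```python
# Null-epoch evaluation of the concrete third-order TaylorGLO loss.
# Coefficients are copied from the piecewise definition above.
def gamma(h, target):
    """Return gamma_k for a scaled logit value h."""
    if target:  # y_ik = 1 case
        return -372.470735 - 131.47 * h - 11.2188 * h ** 2
    return -373.917 - 130.264 * h - 11.2188 * h ** 2  # y_ik = 0 case

n = 10  # CIFAR-10
h_null = 1.0 / n  # expected scaled logit at the null epoch

g_nontarget = gamma(h_null, target=False)
g_target = gamma(h_null, target=True)

# Both cases are negative, and the non-target case is more strongly
# negative, so softmax normalization raises the target scaled logit.
assert g_nontarget < g_target < 0
```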
However, since the scaled logits are the output of a softmax function, and the $y_{ik} = 0$ case is more strongly negative, the non-target scaled logits will be minimized more than the target scaled logits, resulting in a maximization of the target scaled logits.




\section{Proofs and derivations}
\label{ap:proofs}

Proofs and derivations for the theorems in the paper are presented below.



\subsection{Zero Training Error is not an Attractor of Baikal}
\label{sec:baikalzeroerrnotattractor}
Given that Baikal does tend to minimize training error to a large degree (otherwise it would be useless as a loss function, since we are effectively assuming that the training data is in-distribution), we can observe what happens as we approach a point in parameter space that is arbitrarily close to zero training error. Assume, without loss of generality, that all non-target scaled logits have the same value.
\begin{equation}
\theta_j \leftarrow \theta_j + \eta \frac{1}{n} \sum^n_{k=1} \left\{
    \begin{array}{rl}
     {\displaystyle \lim_{h_k(\bm{x}_i,\bm{\theta})\to \frac{\epsilon}{n-1}}}\; \gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 0 \\
     {\displaystyle \lim_{h_k(\bm{x}_i,\bm{\theta})\to 1-\epsilon}}\; \gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 1
    \end{array}
  \right.
\end{equation}
\begin{equation}
= \theta_j + \eta \frac{1}{n} \sum^n_{k=1} \left\{
    \begin{array}{rl}
     {\displaystyle \lim_{h_k(\bm{x}_i,\bm{\theta})\to \frac{\epsilon}{n-1}}}\;\; \left(\dfrac{1}{h_k(\bm{x}_i,\bm{\theta})} + \dfrac{0}{h_k(\bm{x}_i,\bm{\theta})^2}\right) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 0 \\
     {\displaystyle \lim_{h_k(\bm{x}_i,\bm{\theta})\to 1-\epsilon}}\; \left(\dfrac{1}{h_k(\bm{x}_i,\bm{\theta})} + \dfrac{1}{h_k(\bm{x}_i,\bm{\theta})^2}\right) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 1
    \end{array}
  \right.
\end{equation}
\begin{equation}
= \theta_j + \eta \frac{1}{n} \sum^n_{k=1} \left\{
  \renewcommand{\arraystretch}{2}
    \begin{array}{rl}
     \dfrac{n-1}{\epsilon} D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 0 \\
     \left(\dfrac{1}{1-\epsilon} + \dfrac{1}{\left( 1-\epsilon \right)^2}\right) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 1
    \end{array}
  \right.
\end{equation}
\begin{equation}
= \theta_j + \eta \frac{1}{n} \sum^n_{k=1} \left\{
  \renewcommand{\arraystretch}{2}
    \begin{array}{rl}
     \dfrac{n-1}{\epsilon} D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 0 \\
     \dfrac{2 - \epsilon}{\epsilon^2 - 2\epsilon + 1} D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 1
    \end{array}
  \right.
\end{equation}
The behavior in the $y_{ik} = 0$ case will dominate for small values of $\epsilon$. Both cases have a positive range for small values of $\epsilon$, ultimately resulting in non-target scaled logits becoming maximized, and subsequently the target scaled logit becoming minimized. This is equivalent, in expectation, to saying that $\epsilon$ will become larger after applying the learning rule. A larger $\epsilon$ clearly implies a move away from a zero training error area of the parameter space.
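These limiting coefficients are easy to check numerically. The sketch below (illustrative Python) evaluates both cases for shrinking $\epsilon$ and confirms that both are positive and that the non-target case dominates:

```python
# Limiting Baikal update coefficients near zero training error,
# taken from the final piecewise expression above.
def baikal_coefficients(n, eps):
    non_target = (n - 1) / eps                      # y_ik = 0 case
    target = (2 - eps) / (eps ** 2 - 2 * eps + 1)   # y_ik = 1 case
    return non_target, target

n = 10
for eps in (1e-2, 1e-4, 1e-6):
    non_target, target = baikal_coefficients(n, eps)
    # Both coefficients are positive, and the non-target one blows up as
    # eps -> 0, so non-target scaled logits are pushed up and eps grows.
    assert non_target > 0 and target > 0
    assert non_target > target
```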
Thus, zero training error is not an attractor for the Baikal loss function.


\subsection{Label smoothing in TaylorGLO}
\label{sec:labelsmoothinggeneraltaylor}
Consider a basic setup with standard label smoothing, controlled by a hyperparameter $\alpha \in (0,1)$, such that the target value in any $\bm{y}_i$ is $1-\alpha\frac{n-1}{n}$, rather than $1$, and non-target values are $\frac{\alpha}{n}$, rather than $0$. The learning rule changes in the general case as follows:
\begin{equation}
 \begin{aligned}
\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = \left\{
	\renewcommand{\arraystretch}{2}
    \begin{array}{rl}
	 c_1 + c_h h_k(\bm{x}_i,\bm{\theta}) + c_{hh} h_k(\bm{x}_i,\bm{\theta})^2 \\+ c_{hy} h_k(\bm{x}_i,\bm{\theta}) \dfrac{\alpha}{n} + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} & \;\; y_{ik} = 0 \\\\
	 c_1 + c_h h_k(\bm{x}_i,\bm{\theta}) + c_{hh} h_k(\bm{x}_i,\bm{\theta})^2 + c_{hy} h_k(\bm{x}_i,\bm{\theta}) \left(1-\alpha\dfrac{n-1}{n}\right) \\+ c_y \left(1-\alpha\dfrac{n-1}{n}\right) + c_{yy} \left(1-\alpha\dfrac{n-1}{n}\right)^2 & \;\; y_{ik} = 1
    \end{array}
  \right.
 \end{aligned}
\end{equation}
Let $\hat{c}_1, \hat{c}_h, \hat{c}_{hh}, \hat{c}_{hy}, \hat{c}_y, \hat{c}_{yy}$ represent settings for $c_1, c_h, c_{hh}, c_{hy}, c_y, c_{yy}$ in the non-label-smoothed case that implicitly apply label smoothing within the TaylorGLO parameterization. Given the two cases in the label-smoothed and non-label-smoothed definitions of $\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta})$, there are two equations that must be satisfiable by settings of $\hat{c}$ constants for any $c$ constants, with shared terms highlighted in blue and red:
\begin{equation}
	\label{eqn:lstaygeneralzerocase}
	\begin{aligned}
{\color{blue} c_1 + c_h h_k(\bm{x}_i,\bm{\theta}) + c_{hh} h_k(\bm{x}_i,\bm{\theta})^2 }+ c_{hy} h_k(\bm{x}_i,\bm{\theta}) \dfrac{\alpha}{n} + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} \\= {\color{red} \hat{c}_1 + \hat{c}_h h_k(\bm{x}_i,\bm{\theta}) + \hat{c}_{hh} h_k(\bm{x}_i,\bm{\theta})^2 }
	\end{aligned}
\end{equation}
\begin{equation}
  \label{eqn:lstaygeneralonecase}
	\begin{aligned}
{\color{blue} c_1 + c_h h_k(\bm{x}_i,\bm{\theta}) + c_{hh} h_k(\bm{x}_i,\bm{\theta})^2 }+ c_{hy} h_k(\bm{x}_i,\bm{\theta}) \left(1-\alpha\dfrac{n-1}{n}\right) \\+ c_y \left(1-\alpha\dfrac{n-1}{n}\right) + c_{yy} \left(1-\alpha\dfrac{n-1}{n}\right)^2 \\= {\color{red} \hat{c}_1 + \hat{c}_h h_k(\bm{x}_i,\bm{\theta}) + \hat{c}_{hh} h_k(\bm{x}_i,\bm{\theta})^2 }+ \hat{c}_{hy} h_k(\bm{x}_i,\bm{\theta}) + \hat{c}_y + \hat{c}_{yy}
	\end{aligned}
\end{equation}
Let us then factor the left-hand side of Equation~\ref{eqn:lstaygeneralzerocase} in terms of different powers of $h_k(\bm{x}_i,\bm{\theta})$:
\begin{equation}
  \label{eqn:lstaygeneralc1chchhdefs}
	\begin{aligned}
\underbrace{\left(c_1 + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} \right)}_{\hat{c}_1} + \underbrace{\left(c_h + c_{hy} \dfrac{\alpha}{n} \right)}_{\hat{c}_h} h_k(\bm{x}_i,\bm{\theta}) + \underbrace{c_{hh}}_{\hat{c}_{hh}} h_k(\bm{x}_i,\bm{\theta})^2
	\end{aligned}
\end{equation}
This results in definitions for $\hat{c}_1$, $\hat{c}_h$, and $\hat{c}_{hh}$.
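This factorization can be checked numerically. The sketch below (illustrative Python; the concrete $c$ values and $\alpha$ are arbitrary, chosen only for illustration) verifies that the label-smoothed $y_{ik}=0$ case collapses to a plain quadratic in $h_k$ with the hatted constants:

```python
# Verify that the label-smoothed y_ik = 0 case equals a plain quadratic
# in h_k with the hatted constants defined above. The constants below
# are arbitrary, for illustration only.
c1, ch, chh, chy, cy, cyy = 0.3, -1.2, 0.7, 2.1, -0.5, 0.9
n, alpha = 10, 0.1
s = alpha / n  # smoothed non-target label value

c1_hat = c1 + cy * s + cyy * s ** 2
ch_hat = ch + chy * s
chh_hat = chh

for h in (0.0, 0.13, 0.5, 0.99):
    smoothed = (c1 + ch * h + chh * h ** 2
                + chy * h * s + cy * s + cyy * s ** 2)
    plain = c1_hat + ch_hat * h + chh_hat * h ** 2
    assert abs(smoothed - plain) < 1e-12
```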
Let us then add the following form of zero to the left-hand side of Equation~\ref{eqn:lstaygeneralonecase}:
\begin{equation}
	\begin{aligned}
\left( c_{hy} h_k(\bm{x}_i,\bm{\theta}) \dfrac{\alpha}{n} + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} \right) - \left( c_{hy} h_k(\bm{x}_i,\bm{\theta}) \dfrac{\alpha}{n} + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} \right)
	\end{aligned}
\end{equation}
This allows us to substitute the definitions for $\hat{c}_1, \hat{c}_h, \hat{c}_{hh}$ from Equation~\ref{eqn:lstaygeneralc1chchhdefs} into Equation~\ref{eqn:lstaygeneralonecase}:
\begin{equation}
	\begin{aligned}
{\color{red}\hat{c}_1 + \hat{c}_h h_k(\bm{x}_i,\bm{\theta}) + \hat{c}_{hh} h_k(\bm{x}_i,\bm{\theta})^2} - \left( c_{hy} h_k(\bm{x}_i,\bm{\theta}) \dfrac{\alpha}{n} + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} \right) \\+ c_{hy} h_k(\bm{x}_i,\bm{\theta}) \left(1-\alpha\dfrac{n-1}{n}\right) + c_y \left(1-\alpha\dfrac{n-1}{n}\right) + c_{yy} \left(1-\alpha\dfrac{n-1}{n}\right)^2 \\= {\color{red} \hat{c}_1 + \hat{c}_h h_k(\bm{x}_i,\bm{\theta}) + \hat{c}_{hh} h_k(\bm{x}_i,\bm{\theta})^2 }+ \hat{c}_{hy} h_k(\bm{x}_i,\bm{\theta}) + \hat{c}_y + \hat{c}_{yy}
	\end{aligned}
\end{equation}
This simplifies to:
\begin{equation}
  \label{eqn:lstaygeneralalmostthereonecase}
	\begin{aligned}
c_{hy} h_k(\bm{x}_i,\bm{\theta}) \left(1-\alpha\dfrac{n-1}{n}\right) + c_y \left(1-\alpha\dfrac{n-1}{n}\right) + c_{yy} \left(1-\alpha\dfrac{n-1}{n}\right)^2 \\- \left( c_{hy} h_k(\bm{x}_i,\bm{\theta}) \dfrac{\alpha}{n} + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} \right) \\= \hat{c}_{hy} h_k(\bm{x}_i,\bm{\theta}) + \hat{c}_y + \hat{c}_{yy}
	\end{aligned}
\end{equation}
Finally, factor the left-hand side of Equation~\ref{eqn:lstaygeneralalmostthereonecase} in terms of $h_k(\bm{x}_i,\bm{\theta})$, $1$, and $1^2$:
\begin{equation}
	\begin{aligned}
\underbrace{\left( c_{hy} \left(1-\alpha\dfrac{n-1}{n}\right) - c_{hy} \dfrac{\alpha}{n} \right)}_{\hat{c}_{hy}} h_k(\bm{x}_i,\bm{\theta}) \\+ \underbrace{\left( c_{y} \left(1-\alpha\dfrac{n-1}{n}\right) - c_{y} \dfrac{\alpha}{n} \right)}_{\hat{c}_{y}} + \underbrace{\left( c_{yy} \left(1-\alpha\dfrac{n-1}{n}\right)^2 - c_{yy} \dfrac{\alpha^2}{n^2} \right)}_{\hat{c}_{yy}}
	\end{aligned}
\end{equation}

Thus, the in-parameterization constants with implicit label smoothing can be defined for any desired label-smoothed constants as follows:
\begin{eqnarray}
\hat{c}_{1} &=& c_1 + c_y \dfrac{\alpha}{n} + c_{yy} \dfrac{\alpha^2}{n^2} \\
\hat{c}_{h} &=& c_h + c_{hy} \dfrac{\alpha}{n} \\
\hat{c}_{hh} &=& c_{hh} \\
\hat{c}_{hy} &=& c_{hy} \left(1-\alpha\dfrac{n-1}{n}\right) - c_{hy} \dfrac{\alpha}{n} \\
\hat{c}_{y} &=& c_{y} \left(1-\alpha\dfrac{n-1}{n}\right) - c_{y} \dfrac{\alpha}{n} \\
\hat{c}_{yy} &=& c_{yy} \left(1-\alpha\dfrac{n-1}{n}\right)^2 - c_{yy} \dfrac{\alpha^2}{n^2}
\end{eqnarray}
So for any $\bm{\lambda}$ and any $\alpha \in (0,1)$, there exists a $\bm{\hat{\lambda}}$ such that the behavior imposed by $\bm{\hat{\lambda}}$ without explicit label smoothing is identical to the behavior imposed by $\bm{\lambda}$ \emph{with} explicit label smoothing. That is, any degree of label smoothing can be implicitly represented for any TaylorGLO loss function. Thus, TaylorGLO may discover and utilize label smoothing as part of discovering loss functions, increasing their ability to regularize further.




\subsection{Softmax entropy criticality}
\label{sec:softmaxentropyderivation}
Let us analyze the case where all non-target scaled logits have the same value, $\frac{\epsilon}{n-1}$, and the target scaled logit has the value $1-\epsilon$. That is, all non-target classes have equal probabilities.

A model's scaled logit for an input $\bm{x}_i$ can be represented as:
\begin{equation}
h_k(\bm{x}_i,\bm{\theta}) = \sigma_k(f(\bm{x}_i,\bm{\theta})) = \frac{\mathrm{e}^{f_k(\bm{x}_i,\bm{\theta})}}{\sum_{j=1}^{n} \mathrm{e}^{f_j(\bm{x}_i,\bm{\theta})}}
\end{equation}
where $f_k(\bm{x}_i,\bm{\theta})$ is a raw output logit from the model.

The $(k,j)$th entry of the Jacobian matrix for $h(\bm{x}_i,\bm{\theta})$ can be easily derived through application of the chain rule:
\begin{equation}
\bm{J}_{kj} h(\bm{x}_i,\bm{\theta}) = \frac{\partial h_k(\bm{x}_i,\bm{\theta})}{\partial f_j(\bm{x}_i,\bm{\theta})} =\left\{
    \begin{array}{rl}
     h_j(\bm{x}_i,\bm{\theta})\; (1 - h_k(\bm{x}_i,\bm{\theta})) & \quad k=j \\
     - h_j(\bm{x}_i,\bm{\theta}) \;h_k(\bm{x}_i,\bm{\theta}) & \quad k\neq j
    \end{array}
  \right.
\end{equation}
Consider an SGD learning rule of the form:
\begin{equation}
\theta_j \leftarrow \theta_j + \eta \frac{1}{n} \sum^n_{k=1}\left[ \gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) \right]
\end{equation}
Let us freeze a network at any specific point during the training process for any specific sample. Now treat all $f_j(\bm{x}_i,\bm{\theta})$, $j\in [1,n]$, as free parameters with unit derivatives, rather than as functions; that is, $\theta_j = f_j(\bm{x}_i,\bm{\theta})$.
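These softmax derivatives, which the update equations below rely on, can be confirmed with a finite-difference check (a minimal Python sketch; the raw logit values are arbitrary):

```python
import math

def softmax(f):
    exps = [math.exp(v) for v in f]
    total = sum(exps)
    return [e / total for e in exps]

f = [0.2, -1.0, 0.5, 1.3]  # arbitrary raw logits
h = softmax(f)
eps = 1e-6

for j in range(len(f)):
    for k in range(len(f)):
        # Finite-difference estimate of d h_k / d f_j.
        bumped = list(f)
        bumped[j] += eps
        numeric = (softmax(bumped)[k] - h[k]) / eps
        # Analytic entry: h_k (1 - h_k) when k = j, and -h_j h_k otherwise.
        analytic = h[k] * (1 - h[k]) if k == j else -h[j] * h[k]
        assert abs(numeric - analytic) < 1e-5
```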
We observe that updates are as follows:
\begin{equation}
\Delta f_j \propto \sum^n_{k=1} \gamma_k \left\{
    \begin{array}{rl}
     h_j(\bm{x}_i,\bm{\theta})\; (1 - h_k(\bm{x}_i,\bm{\theta})) & \quad k=j \\
     - h_j(\bm{x}_i,\bm{\theta}) \;h_k(\bm{x}_i,\bm{\theta}) & \quad k\neq j
    \end{array}
  \right.
\end{equation}
For the downstream analysis, let $\gamma_{\neg T}$ denote the value of $\gamma_k$ for non-target logits, and $\gamma_T$ its value for the target logit.

This sum can be expanded and conceptually simplified by considering $j$ indices and $\neg j$ indices. The $\neg j$ indices, of which there are $n-1$, correspond either to all of the non-target logits, or, when $j$ is itself a non-target logit, to $n-2$ non-target logits plus the target logit. Let us consider both cases, while substituting the scaled logit values defined above:
\begin{equation}
\Delta f_j \propto \left\{
    \begin{array}{rl}
     \gamma_{\neg T} \;\bm{J}_{k=j} h(\bm{x}_i,\bm{\theta}) + (n-2) \gamma_{\neg T} \;\bm{J}_{k\neq j} h(\bm{x}_i,\bm{\theta}) + \gamma_{T} \;\bm{J}_{k\neq j} h(\bm{x}_i,\bm{\theta}) & \quad \text{non-target}\;j \\
     \gamma_T \;\bm{J}_{k=j} h(\bm{x}_i,\bm{\theta}) + (n-1) \gamma_{\neg T} \;\bm{J}_{k\neq j} h(\bm{x}_i,\bm{\theta}) & \quad \text{target}\;j
    \end{array}
  \right.
\end{equation}
\begin{equation}
\Delta f_j \propto \left\{
    \begin{array}{rl}
     \gamma_{\neg T} h_{\neg T}(\bm{x}_i,\bm{\theta})\; (1 - h_{\neg T}(\bm{x}_i,\bm{\theta})) \\+ (n-2) \gamma_{\neg T} \left(- h_{\neg T}(\bm{x}_i,\bm{\theta}) \;h_{\neg T}(\bm{x}_i,\bm{\theta})\right) \\+ \gamma_{T} \left(- h_{\neg T}(\bm{x}_i,\bm{\theta}) \;h_T(\bm{x}_i,\bm{\theta})\right) & \quad \text{non-target}\;j \\\\
     \gamma_T h_T(\bm{x}_i,\bm{\theta})\; (1 - h_T(\bm{x}_i,\bm{\theta})) \\+ (n-1) \gamma_{\neg T} \left(- h_{\neg T}(\bm{x}_i,\bm{\theta}) \;h_T(\bm{x}_i,\bm{\theta})\right) & \quad \text{target}\;j
    \end{array}
  \right.
\end{equation}
\begin{equation}
\text{where}\quad h_T(\bm{x}_i,\bm{\theta}) = 1-\epsilon , \quad h_{\neg T}(\bm{x}_i,\bm{\theta}) = \frac{\epsilon}{n-1}
\end{equation}
\begin{equation}
\Delta f_j \propto \left\{
    \begin{array}{rl}
     \gamma_{\neg T} \dfrac{\epsilon}{n-1} \bigg(1-\frac{\epsilon}{n-1}\bigg) - \gamma_{\neg T} (n-2)\dfrac{\epsilon^2}{n^2-2n+1} + \gamma_T (\epsilon-1)\dfrac{\epsilon}{n-1} & \quad \text{non-target}\;j \\
     \gamma_T \epsilon - \gamma_T \epsilon^2 + \gamma_{\neg T}(n-1)(\epsilon-1) \dfrac{\epsilon}{n-1} & \quad \text{target}\;j
    \end{array}
  \right.
\end{equation}
At this point, we have closed-form solutions for the changes to the softmax inputs. To characterize entropy, we must now derive the resulting softmax outputs given such changes to the inputs (with a slight abuse of notation, $\Delta \sigma_j$ denotes the post-update output):
\begin{equation}
\Delta \sigma_j(f(\bm{x}_i,\bm{\theta})) = \frac{\mathrm{e}^{f_j(\bm{x}_i,\bm{\theta}) + \Delta f_j}}{\sum_{k=1}^{n} \mathrm{e}^{f_k(\bm{x}_i,\bm{\theta})+\Delta f_k}}
\end{equation}
Due to the two cases in $\Delta f_j$, $\Delta \sigma_j(f(\bm{x}_i,\bm{\theta}))$ is likewise split into two cases for target and non-target logits:
\begin{equation}
\Delta \sigma_j(f(\bm{x}_i,\bm{\theta})) = \left\{
  \renewcommand{\arraystretch}{2}
    \begin{array}{rl}
     \dfrac{\mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta}) + \Delta f_{\neg T}}}{(n-1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta}) + \Delta f_{\neg T}} + \mathrm{e}^{f_T(\bm{x}_i,\bm{\theta}) + \Delta f_T}} & \quad \text{non-target}\;j \\
     \dfrac{\mathrm{e}^{f_{T}(\bm{x}_i,\bm{\theta}) + \Delta f_T}}{(n-1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta}) + \Delta f_{\neg T}} + \mathrm{e}^{f_T(\bm{x}_i,\bm{\theta}) + \Delta f_T}} & \quad \text{target}\;j
    \end{array}
  \right.
\end{equation}
Now, we can see that the scaled logits have a lower-entropy distribution when $\Delta \sigma_T(f(\bm{x}_i,\bm{\theta})) > 1-\epsilon$ and $\Delta \sigma_{\neg T}(f(\bm{x}_i,\bm{\theta})) < \frac{\epsilon}{n-1}$. Essentially, the target and non-target scaled logits are being repelled from each other. We need only consider one of these inequalities: if one is satisfied, then both are, in part because $|\bm{\sigma}(f(\bm{x}_i,\bm{\theta}))|_1 = 1$. The target-case constraint (i.e., the target scaled logit must grow) can be represented as:
\begin{equation}
	\label{eqn:taytargetconstraintforzeroerrattractor}
	\dfrac{\mathrm{e}^{f_{T}(\bm{x}_i,\bm{\theta}) + \Delta f_T}}{(n-1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta}) + \Delta f_{\neg T}} + \mathrm{e}^{f_T(\bm{x}_i,\bm{\theta}) + \Delta f_T}} > 1-\epsilon
\end{equation}
Consider the target logit case prior to changes:
\begin{equation}
	\dfrac{\mathrm{e}^{f_{T}(\bm{x}_i,\bm{\theta})}}{(n-1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta})} + \mathrm{e}^{f_T(\bm{x}_i,\bm{\theta})}} = 1-\epsilon
\end{equation}
Let us solve for $\mathrm{e}^{f_{T}(\bm{x}_i,\bm{\theta})}$:
\begin{eqnarray}
\mathrm{e}^{f_{T}(\bm{x}_i,\bm{\theta})} &=& (n - 1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta})} + \mathrm{e}^{f_{T}(\bm{x}_i,\bm{\theta})} - \epsilon (n - 1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta})} - \epsilon \mathrm{e}^{f_{T}(\bm{x}_i,\bm{\theta})} \\
&=& \left( \frac{n-1}{\epsilon} - n + 1 \right) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta})}
\end{eqnarray}
Substituting this definition into Equation~\ref{eqn:taytargetconstraintforzeroerrattractor}:
\begin{equation}
	\dfrac{ \mathrm{e}^{\Delta f_T} \left( \dfrac{n-1}{\epsilon} - n + 1 \right) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta})} }{(n-1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta}) + \Delta f_{\neg T}} + \mathrm{e}^{\Delta f_T} \left( \dfrac{n-1}{\epsilon} - n + 1 \right) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta})} } > 1-\epsilon
\end{equation}
Coalescing exponents:
\begin{equation}
	\dfrac{ \mathrm{e}^{\Delta f_T + f_{\neg T}(\bm{x}_i,\bm{\theta})} \left( \dfrac{n-1}{\epsilon} - n + 1 \right) }{(n-1) \mathrm{e}^{f_{\neg T}(\bm{x}_i,\bm{\theta}) + \Delta f_{\neg T}} + \mathrm{e}^{\Delta f_T + f_{\neg T}(\bm{x}_i,\bm{\theta})} \left( \dfrac{n-1}{\epsilon} - n + 1 \right) } + \epsilon - 1 > 0
\end{equation}
Substituting in the definitions for $\Delta f_T$ and $\Delta f_{\neg T}$ and simplifying with a computer algebra system (CAS) removes all instances of $f_{\neg T}$:
\begin{equation}
	\dfrac{ \epsilon (\epsilon - 1) \left( \mathrm{e}^{\epsilon (\epsilon - 1) (\gamma_{\neg T} - \gamma_T)} - \mathrm{e}^{\dfrac{\epsilon (\epsilon - 1) \gamma_T (n-1) + \epsilon \gamma_{\neg T} (\epsilon (n-3) + n - 1)}{(n-1)^2}} \right) }{ (\epsilon - 1) \mathrm{e}^{\epsilon (\epsilon - 1) (\gamma_{\neg T} - \gamma_T)} - \epsilon \mathrm{e}^{\dfrac{\epsilon (\epsilon - 1) \gamma_T (n-1) + \epsilon \gamma_{\neg T} (\epsilon (n-3) + n - 1)}{(n-1)^2}} } > 0
\end{equation}




\subsection{TaylorGLO parameter invariant at the null epoch}
\label{sec:tayinvariantderivation}
At the null epoch, a valid loss function aims, in expectation, to minimize non-target scaled logits while maximizing target scaled logits. Thus, we attempt to find the cases of $\bm{\lambda}$ for which these behaviors occur. Considering the representation for $\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta})$ in Equation~\ref{eqn:tayk3_gamma_ccomb}:
\begin{equation}
\theta_j \leftarrow \theta_j + \eta \frac{1}{n} \sum^n_{k=1} \left\{
		\renewcommand{\arraystretch}{1.3}
    \begin{array}{rl}
     \left( c_1 + c_h h_k(\bm{x}_i,\bm{\theta}) + c_{hh} h_k(\bm{x}_i,\bm{\theta})^2 \right) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 0 \\
     \big( c_1 + c_h h_k(\bm{x}_i,\bm{\theta}) + c_{hh} h_k(\bm{x}_i,\bm{\theta})^2 \\+ c_{hy} h_k(\bm{x}_i,\bm{\theta}) + c_y + c_{yy} \big) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 1
    \end{array}
  \right.
\end{equation}
Let us substitute $h_k(\bm{x}_i,\bm{\theta}) = \frac{1}{n}$ (i.e., the expected value of a logit at the null epoch):
\begin{equation}
\theta_j \leftarrow \theta_j + \eta \frac{1}{n} \sum^n_{k=1} \left\{
		\renewcommand{\arraystretch}{2}
    \begin{array}{rl}
     \left( c_1 + \dfrac{c_h}{n} + \dfrac{c_{hh}}{n^2} \right) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 0 \\
     \left( c_1 + c_y + c_{yy} + \dfrac{c_h+c_{hy}}{n} + \dfrac{c_{hh}}{n^2} \right) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 1
    \end{array}
  \right.
\end{equation}
For degenerate behavior to appear (i.e., for target scaled logits to be minimized), the directional derivative's coefficient in the $y_{ik} = 1$ case must be less than zero:
\begin{eqnarray}
c_1 + c_y + c_{yy} + \frac{c_h + c_{hy}}{n} + \frac{c_{hh}}{n^2} &<& 0
\end{eqnarray}
This finding can be made more general by asserting that the directional derivative's coefficient in the $y_{ik} = 1$ case be less than $(n-1)$ times the coefficient in the $y_{ik} = 0$ case.
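In practice, both null-epoch coefficients are cheap to evaluate for a candidate parameterization, which is how such a condition can serve as a filter during evolution. A minimal Python sketch (the $c$ values below are arbitrary illustrations, not from a metalearned loss):

```python
# Null-epoch directional-derivative coefficients from the equations above.
def null_epoch_coefficients(c1, ch, chh, chy, cy, cyy, n):
    nontarget = c1 + ch / n + chh / n ** 2                     # y_ik = 0
    target = c1 + cy + cyy + (ch + chy) / n + chh / n ** 2     # y_ik = 1
    return nontarget, target

# Arbitrary illustrative constants for a 10-class task.
nontarget, target = null_epoch_coefficients(
    c1=-0.5, ch=0.2, chh=0.1, chy=-0.3, cy=1.5, cyy=0.4, n=10)

# Degenerate behavior appears when the target coefficient falls below
# (n - 1) times the non-target coefficient; a trainable candidate
# should violate that inequality.
assert not target < (10 - 1) * nontarget
```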
Thus arriving at the following constraint on $\\bm{\\lambda}$:\n\\begin{eqnarray}\nc_1 + c_y + c_{yy} + \\dfrac{c_h+c_{hy}}{n} + \\dfrac{c_{hh}}{n^2} &<& \\left(n-1\\right)\\left(c_1 + \\dfrac{c_h}{n} + \\dfrac{c_{hh}}{n^2} \\right) \\\\\nc_y + c_{yy} + \\dfrac{c_{hy}}{n} &<& \\left(n-2\\right)\\left(c_1 + \\dfrac{c_h}{n} + \\dfrac{c_{hh}}{n^2} \\right)\n\\end{eqnarray}\nThe inverse of these constraints may be used as invariants during loss function evolution.\n\n\n\n\n\n\\iffalse\n\\section{Zero training error attractor constraint function examples}\n\n\\begin{figure}\n \\centering\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/MSE_null}\\hspace{0.1\\linewidth}\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/MSE_zerr}\\\\[1.5em]\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/CE_null}\n\\hspace{0.1\\linewidth}\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/CE_zerr}\\\\[1.5em]\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/Baikal_null}\n\\hspace{0.1\\linewidth}\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/Baikal_zerr}\\\\[1.5em]\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/TayAllCNN_null}\n\\hspace{0.1\\linewidth}\n\\includegraphics[width=0.35\\linewidth]{images\/constraint_plots\/TayAllCNN_zerr}\n \\vspace{-0.5em}\n \\caption{\\color{red} DO THIS HERE.}\n \\label{fig:basins}\n\\end{figure}\n\\fi\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\nRegularization is a key concept in deep learning: it guides learning towards configurations that are likely to perform robustly on unseen data. Different regularization approaches originate from intuitive understanding of the learning process and have been shown to be effective empirically. 
However, the understanding of the underlying mechanisms, the different types of regularization, and their interactions, is limited.\n\nRecently, loss function optimization has emerged as a new area of metalearning, and shown great potential in training better models. Experiments suggest that metalearned loss functions serve as regularizers in a surprising but transparent way: they prevent the network from learning too confident predictions \\citep[e.g.\\ Baikal loss;][]{gonzalez2019glo}. While it may be too early to develop a comprehensive theory of regularization, given the relatively nascent state of this area, it may be possible to make progress in understanding regularization of this specific type. That is the goal of this paper.\n\nSince metalearned loss functions are customized to a given architecture-task pair, there needs to be a shared framework under which loss functions can be analyzed and compared.\nThe TaylorGLO \\citep{taylorglo} technique for loss function metalearning lends itself well to such analysis: It represents loss functions as multivariate Taylor polynomials, and leverages evolution to optimize a fixed number of parameters in this representation. In this framework, the SGD learning rule is decomposed to coefficient expressions that can be defined for a wide range of loss functions. These expressions provide an intuitive understanding of the training dynamics in specific contexts. \n\nUsing this framework, mean squared error (MSE), cross-entropy, Baikal, and TaylorGLO loss functions are analyzed at the null epoch, when network weights are similarly distributed (Appendix~\\ref{sec:nullepoch}), and in a zero training error regime, where the training samples' labels have been perfectly memorized. 
For any intermediate point in the training process, the strength of the zero training error regime as an attractor is analyzed and a constraint on this property is derived on TaylorGLO parameters by characterizing how the output distribution's entropy changes. In a concrete TaylorGLO loss function that has been metalearned, these attraction dynamics are calculated for individual samples at every epoch in a real training run, and contrasted with those for the cross-entropy loss. This comparison provides clarity on how TaylorGLO avoids becoming overly confident in its predictions. Further, the analysis shows (in Appendix~\\ref{sec:labelsmoothinggeneraltaylor}) how label smoothing \\citep{inceptionv3}, a traditional type of regularization, can be implicitly encoded by TaylorGLO loss functions: Any representable loss function has label-smoothed variants that are also representable by the parameterization\n\nFrom these analyses, practical opportunities arise. First, at the null epoch, where the desired behavior can be characterized clearly, an invariant can be derived on a TaylorGLO loss function's parameters that must hold true for networks to be trainable. This constraint is then applied within the TaylorGLO algorithm to guide the search process towards good loss functions more efficiently. Second, loss-function-based regularization results in robustness that should e.g.\\ make them more resilient to adversarial attacks. This property is demonstrated experimentally by incorporating adversarial robustness as an objective within the TaylorGLO search process. Thus, loss-function metalearning can be seen as a well-founded and practical approach to effective regularization in deep learning.\n\n\n\n\\section{Background}\n\nRegularization traditionally refers to methods for encouraging smoother mappings by adding a regularizing term to the objective function, i.e., to the loss function in neural networks. 
It can be defined more broadly, however, e.g.\ as ``any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error'' \citep{goodfellow2015explaining}. To that end, many regularization techniques have been developed that aim to improve the training process in neural networks. These techniques can be architectural in nature, such as Dropout \citep{dropout} and Batch Normalization \citep{batchnorm}, or they can alter some aspect of the training process, such as label smoothing \citep{inceptionv3} or the minimization of a weight norm \citep{hanson1989comparingbiases}. These techniques are briefly reviewed in this section, providing context for loss-function metalearning.\n\n\subsection{Implicit biases in optimizers}\n\nIt may seem surprising that overparameterized neural networks are able to generalize at all, given that they have the capacity to memorize a training set perfectly, and in fact sometimes do (i.e., zero training error is reached). Different optimizers have different implicit biases that determine which solutions are ultimately found. These biases are helpful in providing implicit regularization to the optimization process \citep{neyshabur2015pathsgd}. Such implicit regularization is the result of a network norm---a measure of complexity---that is minimized as optimization progresses. This is why models continue to improve even after the training set has been memorized (i.e., the global optimum of the training error is reached) \citep{neyshabur2017geometry}.\n\nFor example, the process of stochastic gradient descent (SGD) itself has been found to provide regularization implicitly when learning on data with noisy labels \citep{blanc2020implicit}. In overparameterized networks, adaptive optimizers find very different solutions than basic SGD. 
These solutions tend to have worse generalization properties, even though they tend to have lower training errors \citep{wilson2017marginal}.\n\n\subsection{Regularization approaches}\n\nWhile optimizers may minimize a network norm implicitly, regularization approaches supplement this process and make it explicit. For example, a common way to restrict the parameter norm explicitly is through weight decay. This approach discourages network complexity by placing a cost on weights \citep{hanson1989comparingbiases}.\n\nGeneralization and regularization are often characterized at the end of training, i.e.\ as behavior that results from the optimization process. Various findings have influenced work in regularization. For example, flat landscapes have better generalization properties \citep{keskar2016large,losslandscape,chaudhari2019entropysgd}. In overparameterized cases, the solutions at the center of these landscapes may have zero training error (i.e., perfect memorization), and under certain conditions, zero training error empirically leads to lower generalization error \citep{belkin2019reconciling, nakkiran2019deep}. However, when a training loss of zero is reached, generalization suffers \citep{ishida2020we}. This behavior can be thought of as overtraining, and techniques have been developed to reduce it at the end of the training process, such as early stopping \citep{morgan1990generalization} and flooding \citep{ishida2020we}.\n\nBoth flooding and early stopping assume that overfitting happens at the end of training, which is not always true \citep{golatkar2019time}. In fact, the order in which easy-to-generalize and hard-to-generalize concepts are learned is important for the network's ultimate generalization. For instance, larger learning rates early in the training process often lead to better generalization in the final model \citep{li2019biglr}. 
Similarly, low-error solutions found by SGD in a relatively quick manner---such as through high learning rates---often have good generalization properties \citep{yao2007early}.\n\nOther techniques tackle overfitting by making it more difficult. Dropout \citep{dropout} randomly removes connections during training. Cutout \citep{cutout}, Mixup \citep{mixup}, and their composition, CutMix \citep{cutmix}, augment training data with a broader variation of examples.\n\nNotably, regularization is not a one-dimensional continuum. Different techniques regularize in different ways that often interact. For example, flooding invalidates performance gains from early stopping \citep{ishida2020we}. However, ultimately all regularization techniques alter the gradients that result from the training loss. This observation suggests loss-function optimization might be an effective way to regularize the training process.\n\n\n\n\n\subsection{Loss-function metalearning}\n\nLoss function metalearning for deep networks was introduced by \citet{gonzalez2019glo} as an automatic way to find customized loss functions that aim to optimize a performance metric for a model. The technique, a genetic programming approach named GLO, discovered one particular loss function, Baikal, that improves classification accuracy, training speed, and data utilization. Baikal appeared to achieve these properties through a form of regularization that ensured the model would not become overly confident in its predictions. That is, instead of monotonically decreasing the loss when the output gets closer to the correct value, Baikal loss increases rapidly when the output is almost correct, thus discouraging overconfidence.\n\nSubsequent techniques have advanced this new field further, for example by metalearning state-dependent loss functions for inverse dynamics models \citep{morse2020learning}, and using a trained network that is itself a metalearned loss function \citep{bechtle2019meta}. 
One particular technique, TaylorGLO \citep{taylorglo}, lends itself well to analyzing what makes loss-function metalearning effective. TaylorGLO represents loss functions as parameterizations of multivariate Taylor polynomials. These loss functions have a tunable complexity based on the order of the polynomial. This paper analyzes third-order TaylorGLO loss functions.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\section{Learning rule decomposition}\n\nThis section develops the framework for the analysis in this paper. By decomposing the learning rules under different loss functions, comparisons can be drawn at different stages of the training process. Consider the standard SGD update rule:\n\begin{equation}\n\bm{\theta} \leftarrow \bm{\theta} - \eta \nabla_{\bm{\theta}} \left( \mathcal{L}(\bm{x}_i,\bm{y}_i,\bm{\theta}) \right) ,\n\end{equation}\nwhere $\eta$ is the learning rate, $\mathcal{L}(\bm{x}_i,\bm{y}_i,\bm{\theta})$ is the loss function applied to the network $h(\bm{x}_i,\bm{\theta})$, $\bm{x}_i$ is an input data sample, $\bm{y}_i$ is the $i$th sample's corresponding label, and $\bm{\theta}$ is the set of trainable parameters in the model.\nThe update for a single weight $\theta_j$ is\n\begin{equation}\n\label{eqn:sgdsingleweight}\n\theta_j \leftarrow \theta_j - \eta D_{\bm{j}} \left( \mathcal{L}(\bm{x}_i,\bm{y}_i,\bm{\theta}) \right) = \theta_j - \eta \frac{\partial}{\partial s} \mathcal{L}(\bm{x}_i,\bm{y}_i,\bm{\theta} + s \bm{j}) \;\bigg\rvert_{s\rightarrow 0} ,\n\end{equation}\nwhere $\bm{j}$ is a basis vector for the $j$th weight. The following text illustrates decompositions of this general learning rule in a classification context for a variety of loss functions: mean squared error (MSE), the cross-entropy loss function, the general third-order TaylorGLO loss function, and the Baikal loss function. 
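Before specializing to particular losses, the single-weight rule in Equation~\ref{eqn:sgdsingleweight} can be sanity-checked numerically. The sketch below is illustrative only (the toy softmax model and all names are assumptions, not the paper's implementation): it verifies with finite differences that the directional derivative of an MSE loss equals the chain-rule sum over scaled logits, using the standard derivative $\partial \mathcal{L}/\partial h_k = 2(h_k - y_k)/n$.

```python
import math, random

random.seed(0)

n, d = 3, 4                      # classes, input dimension
W = [[random.gauss(0, 0.5) for _ in range(d)] for _ in range(n)]
x = [random.gauss(0, 1.0) for _ in range(d)]
y = [1.0, 0.0, 0.0]              # one-hot label

def h(W):
    """Softmax-scaled logits of a toy linear model (stand-in for the network)."""
    z = [sum(W[k][m] * x[m] for m in range(d)) for k in range(n)]
    e = [math.exp(v - max(z)) for v in z]
    s = sum(e)
    return [v / s for v in e]

def mse(W):
    out = h(W)
    return sum((out[k] - y[k]) ** 2 for k in range(n)) / n

def perturb(W, jr, jc, s):
    """Shift one weight by s, i.e. move along the j-th basis direction."""
    V = [row[:] for row in W]
    V[jr][jc] += s
    return V

# Coordinate-wise directional derivatives via central differences.
eps, jr, jc = 1e-6, 1, 2
dL = (mse(perturb(W, jr, jc, eps)) - mse(perturb(W, jr, jc, -eps))) / (2 * eps)
dh = [(h(perturb(W, jr, jc, eps))[k] - h(perturb(W, jr, jc, -eps))[k]) / (2 * eps)
      for k in range(n)]

# Chain rule: D_j(L) = (1/n) * sum_k 2 (h_k - y_k) D_j(h_k), so the update
# -eta * D_j(L) matches the gamma_k form with gamma_k = 2 (y_k - h_k).
out = h(W)
dL_chain = sum(2 * (out[k] - y[k]) * dh[k] for k in range(n)) / n
assert abs(dL - dL_chain) < 1e-6
```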
Each decomposition results in a learning rule of the form\n\\begin{equation}\n\\theta_j \\leftarrow \\theta_j + \\eta \\frac{1}{n} \\sum^n_{k=1}\\left[ \\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) D_{\\bm{j}} \\left( h_k(\\bm{x}_i,\\bm{\\theta}) \\right) \\right],\n\\end{equation}\nwhere $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta})$ is an expression that is specific to each loss function.\n\n\\newcommand{\\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) }{\\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\bm{\\theta} + s \\bm{j}) }\n\\newcommand{\\left(h_k(\\bm{x}_i,\\hlam + s \\bm{j}) - \\lambda_1 \\right) }{\\left(h_k(\\bm{x}_i,\\bm{\\theta} + s \\bm{j}) - \\lambda_1 \\right) }\n\nSubstituting the {\\bf Mean squared error (MSE)} loss into Equation~\\ref{eqn:sgdsingleweight},\n\\vspace{-0.5em}\n\\begin{equation}\n \\begin{aligned}\n\\theta_j \\leftarrow \\theta_j - \\eta \\frac{1}{n} \\sum^n_{k=1}\\bigg[ 2 \\left( h_k(\\bm{x}_i,\\bm{\\theta} + s \\bm{j}) - y_{ik} \\right) \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) \\bigg] \\;\\bigg\\rvert_{s\\rightarrow 0}\n \\end{aligned}\n\\end{equation}\n\\vspace{-0.5em}\nand breaking up the coefficient expressions into $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta})$ results in the weight update step\n\\begin{equation}\n \\begin{aligned}\n\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) = 2y_{ik} - 2 h_k(\\bm{x}_i,\\bm{\\theta}).\n \\end{aligned}\n\\end{equation}\nSubstituting the {\\bf Cross-entropy loss} into Equation~\\ref{eqn:sgdsingleweight}\n\\vspace{-0.5em}\n\\begin{equation}\n \\begin{aligned}\n\\theta_j \\leftarrow \\theta_j + \\eta \\frac{1}{n} \\sum^n_{k=1}\\bigg[ y_{ik} \\frac{1}{h_k(\\bm{x}_i,\\bm{\\theta} + s \\bm{j})} \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) \\bigg] \\;\\bigg\\rvert_{s\\rightarrow 0}\n \\end{aligned}\n\\end{equation}\n\\vspace{-0.5em}\nand breaking up the coefficient expressions into $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta})$ results in the 
weight update step\n\\begin{equation}\n \\begin{aligned}\n\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) = \\frac{y_{ik} }{h_k(\\bm{x}_i,\\bm{\\theta})}.\n \\end{aligned}\n\\end{equation}\nSubstituting the {\\bf Baikal loss} into Equation~\\ref{eqn:sgdsingleweight},\n\\vspace{-0.5em}\n\\begin{equation}\n \\begin{aligned}\n\\theta_j \\leftarrow \\theta_j + \\eta \\frac{1}{n} \\sum^n_{k=1}\\bigg[ \\bigg( \\frac{1}{h_k(\\bm{x}_i,\\bm{\\theta} + s \\bm{j})} + \\frac{y_{ik}}{h_k(\\bm{x}_i,\\bm{\\theta} + s \\bm{j})^2} \\bigg) \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) \\bigg] \\;\\bigg\\rvert_{s\\rightarrow 0}\n \\end{aligned}\n\\end{equation}\n\\vspace{-0.5em}\nand breaking up the coefficient expressions into $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta})$ results in the weight update step\n\\begin{equation}\n \\begin{aligned}\n\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) = \\frac{1}{h_k(\\bm{x}_i,\\bm{\\theta})} + \\frac{y_{ik}}{h_k(\\bm{x}_i,\\bm{\\theta})^2}.\n \\end{aligned}\n\\end{equation}\nSubstituting the {\\bf Third-order TaylorGLO loss} with parameters $\\bm{\\lambda}$ into Equation~\\ref{eqn:sgdsingleweight},\n\\vspace{-0.5em}\n\\begin{equation}\n \\begin{aligned}\n\\theta_j \\leftarrow \\theta_j + \\eta \\frac{1}{n} \\sum^n_{k=1}\\bigg[ \\lambda_2 \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) + \\lambda_3 2 \\left(h_k(\\bm{x}_i,\\hlam + s \\bm{j}) - \\lambda_1 \\right) \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) \\\\[-0.6em]+ \\lambda_4 3 \\left(h_k(\\bm{x}_i,\\hlam + s \\bm{j}) - \\lambda_1 \\right) ^2 \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) + \\lambda_5 (y_{ik} - \\lambda_0) \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) \\\\+ \\left( \\lambda_6 (y_{ik} - \\lambda_0) 2 \\left(h_k(\\bm{x}_i,\\hlam + s \\bm{j}) - \\lambda_1 \\right) + \\lambda_7 (y_{ik} - \\lambda_0)^2 \\right) \\frac{\\partial}{\\partial s} h_k(\\bm{x}_i,\\hlam + s \\bm{j}) \\bigg] 
\\;\\bigg\\rvert_{s\\rightarrow 0}\n \\end{aligned}\n\\end{equation}\n\\vspace{-0.5em}\nand breaking up the coefficient expressions into $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta})$ results in the weight update step\n\\begin{equation}\n \\begin{aligned}\n\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) = 2 \\lambda_3 h_k(\\bm{x}_i,\\bm{\\theta}) - 2 \\lambda_1 \\lambda_3 + 2 \\lambda_6 h_k(\\bm{x}_i,\\bm{\\theta}) y_{ik} - 2 \\lambda_6 \\lambda_0 h_k(\\bm{x}_i,\\bm{\\theta}) \\\\- 2\\lambda_1 \\lambda_6 y_{ik} + 2\\lambda_1 \\lambda_6 \\lambda_0 + \\lambda_2 + \\lambda_5 y_{ik} - \\lambda_5 \\lambda_0 + \\lambda_7 y_{ik}^2 - 2\\lambda_7 \\lambda_0 y_{ik} \\\\+ \\lambda_7 \\lambda_0^2 + 3 \\lambda_4 h_k(\\bm{x}_i,\\bm{\\theta})^2 - 6 \\lambda_1 \\lambda_4 h_k(\\bm{x}_i,\\bm{\\theta}) + 3 \\lambda_4 \\lambda_1^2 .\n \\end{aligned}\n\\end{equation}\nTo simplify analysis, $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta})$ can be decomposed into a linear combination of $[1, h_k(\\bm{x}_i,\\bm{\\theta}), h_k(\\bm{x}_i,\\bm{\\theta})^2, h_k(\\bm{x}_i,\\bm{\\theta})y_{ik}, y_{ik}, y_{ik}^2]$ with respective coefficients $[c_1, c_h, c_{hh}, c_{hy}, c_y, c_{yy} ]$ whose values are implicitly functions of $\\bm{\\lambda}$:\n\\vspace{-0.2em}\n\\begin{equation}\n\\label{eqn:tayk3_gamma_ccomb}\n\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) = c_1 + c_h h_k(\\bm{x}_i,\\bm{\\theta}) + c_{hh} h_k(\\bm{x}_i,\\bm{\\theta})^2 + c_{hy} h_k(\\bm{x}_i,\\bm{\\theta}) y_{ik} + c_y y_{ik} + c_{yy} y_{ik}^2.\n\\end{equation}\n\n\n\\section{Characterizing training dynamics}\n\nUsing the decomposition framework above, it is possible to characterize and compare training dynamics under different loss functions. In this section, the decompositions are first analyzed under a zero training error regime to identify optimization biases that lead to implicit regularization. Second, generalizing to the entire training process, a theoretical constraint is derived on the entropy of a network's outputs. 
Combined with experimental data, this constraint characterizes the data fitting and regularization processes that result from the TaylorGLO training process.\n\n\n\\subsection{Optimization biases in the zero training error regime}\nCertain biases in optimization imposed by a loss function can be best observed in the case where there is nothing new to learn from the training data. Consider the case where there is zero training error,\nthat is, $h_k(\\bm{x}_i,\\bm{\\theta}) - y_{ik} = 0$. In this case, all $h_k(\\bm{x}_i,\\bm{\\theta})$ can be substituted with $y_{ik}$ in $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta})$, as is done below for the different loss functions.\n\n\\textbf{Mean squared error (MSE):\\quad} In this case,\n\\begin{equation}\n \\begin{aligned}\n\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) = 2y_{ik} - 2 h_k(\\bm{x}_i,\\bm{\\theta}) = 0.\n \\end{aligned}\n\\end{equation}\nThus, there are no changes to the weights of the model once error reaches zero. This observation contrasts with the findings in \\citet{blanc2020implicit}, who discovered an implicit regularization effect when training with MSE loss \\emph{and} label noise. Notably, this null behavior is representable in a non-degenerate TaylorGLO parameterization, since MSE is itself representable by TaylorGLO with $\\bm{\\lambda} = [0,0,0,-1,0,2,0,0]$. Thus, this behavior can be leveraged in evolved loss functions.\n\n\\textbf{Cross-entropy loss:\\quad}\nSince $h_k(\\bm{x}_i,\\bm{\\theta}) = 0$ for non-target logits in a zero training error regime, $\\gamma_k(\\bm{x}_i,\\bm{y}_i,\\bm{\\theta}) = \\frac{0}{0}$, i.e.\\ an indeterminate form. Thus, an arbitrarily-close-to-zero training error regime is analyzed instead, such that $h_k(\\bm{x}_i,\\bm{\\theta}) = \\epsilon$ for non-target logits for an arbitrarily small $\\epsilon$. Since all scaled logits sum to $1$, $h_k(\\bm{x}_i,\\bm{\\theta}) = 1-(n-1)\\epsilon$ for the target logit. 
Let us analyze the learning rule as $\\epsilon$ tends towards $0$:\n\\vspace{-0.7em}\n\\begin{equation}\n\\theta_j \\leftarrow \\theta_j + \\lim_{\\epsilon\\to 0} \\eta \\frac{1}{n} \\sum^n_{k=1} \\left\\{\n \\begin{array}{rl}\n \\dfrac{y_{ik}}{\\epsilon} D_{\\bm{j}} \\left( h_k(\\bm{x}_i,\\bm{\\theta}) \\right) & \\quad y_{ik} = 0 \\\\\n \\dfrac{y_{ik}}{1-(n-1)\\epsilon} D_{\\bm{j}} \\left( h_k(\\bm{x}_i,\\bm{\\theta}) \\right) & \\quad y_{ik} = 1\n \\end{array}\n \\right.\n\\end{equation}\n\\begin{equation}\n= \\theta_j + \\eta \\frac{1}{n} \\sum^n_{k=1} \\left\\{\n \\begin{array}{rl}\n 0 & \\quad y_{ik} = 0 \\\\\n D_{\\bm{j}} \\left( h_k(\\bm{x}_i,\\bm{\\theta}) \\right) & \\quad y_{ik} = 1 .\n \\end{array}\n \\right.\n\\end{equation}\nIntuitively, this learning rule aims to increase the value of the target scaled logits. Since logits are scaled by a softmax function, increasing the value of one logit decreases the values of other logits. Thus, the fixed point of this bias will be to force non-target scaled logits to zero, and target scaled logits to one. In other words, this behavior aims to minimize the divergence between the predicted distribution and the training data's distribution.\n\nTaylorGLO can represent this behavior, and can thus be leveraged in evolved loss functions, through any case where $a=0$ and $b+c>0$. Any $\\bm{\\lambda}$ where $\\lambda_2 = 2 \\lambda_1 \\lambda_3 + \\lambda_5 \\lambda_0 - 2\\lambda_1\\lambda_6\\lambda_0 - \\lambda_7 \\lambda_0^2 - 3 \\lambda_4 \\lambda_1^2$ represents such a satisfying family of cases. Additionally, TaylorGLO allows for the strength of this bias to be tuned independently from $\\eta$ by adjusting the magnitude of $b+c$.\n\n\n\n\\textbf{Baikal loss:\\quad}\nNotably, the Baikal loss function results in infinite gradients at zero training error, rendering it unstable, even if using it to fine-tune from a previously trained network that already reached zero training error. 
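Both the cross-entropy limit above and Baikal's blow-up near zero error can be seen numerically. A minimal sketch (values illustrative):

```python
# Near-zero-error regime: n classes, non-target scaled logits equal to eps,
# target scaled logit equal to 1 - (n-1)*eps.
n = 10
prev = 0.0
for eps in [1e-2, 1e-4, 1e-6]:
    # Cross-entropy: gamma_k = y_ik / h_k.
    ce_non_target = 0.0 / eps                # y_ik = 0 removes the 1/eps term
    ce_target = 1.0 / (1.0 - (n - 1) * eps)  # tends to 1 as eps -> 0
    assert ce_non_target == 0.0
    assert abs(ce_target - 1.0) < 2 * n * eps / (1 - n * eps)
    # Baikal: gamma_k = 1/h_k + y_ik/h_k^2 diverges on non-target logits,
    # which is the source of the instability noted above.
    baikal_non_target = 1.0 / eps + 0.0 / eps ** 2
    assert baikal_non_target > prev          # grows without bound as eps -> 0
    prev = baikal_non_target
```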
However, the zero-error regime is irrelevant with Baikal because it cannot be reached in practice:\n\n\emph{Theorem 1: Zero training error regions of the weight space are not attractors for the Baikal loss function.}\n\nThe reason is that if a network reaches a training error that is arbitrarily close to zero, there is a repulsive effect that biases the model's weights away from zero training error. The proof of this theorem is in Appendix~\ref{sec:baikalzeroerrnotattractor}.\n\n\textbf{Third-order TaylorGLO loss:\quad}\nAccording to Equation~\ref{eqn:tayk3_gamma_ccomb}, in the zero-error regime\n$\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta})$ can be written as a linear combination of $[1, y_{ik}, y_{ik}^2]$ and $[a,b,c]$ (as defined in Appendix~\ref{sec:notation_tayzeroerr}), i.e.\ $\gamma_k(\bm{x}_i,\bm{y}_i,\bm{\theta}) = a + b y_{ik} + c y_{ik}^2$.\n\nNotably, in the basic classification case, $\forall w\in \mathbb{N}_1: y_{ik} = y_{ik}^w$, since $y_{ik} \in \{0,1 \}$. This observation provides an intuition for why higher-order TaylorGLO loss functions are not able to provide fundamentally different behavior (beyond a more overparameterized search space), and thus offer no improvement in performance over third-order loss functions. The learning rule thus becomes\n\begin{equation}\n\label{eqn:tayk3_zeroerrorclassrule}\n\theta_j \leftarrow \theta_j + \eta \frac{1}{n} \sum^n_{k=1} \left\{\n    \begin{array}{rl}\n        a D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 0 \\\n        (a+b+c) D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right) & \quad y_{ik} = 1 .\n    \end{array}\n  \right.\n\end{equation}\nAs a concrete example, consider the loss function TaylorGLO discovered for the AllCNN-C model on CIFAR-10 \citep{taylorglo}. It had $a=-373.917$, $b=-129.928$, $c=-11.3145$. 
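These concrete coefficients can be substituted directly into the piecewise rule in Equation~\ref{eqn:tayk3_zeroerrorclassrule}; a small sketch:

```python
# Zero-error coefficients of the concrete TaylorGLO loss cited above.
a, b, c = -373.917, -129.928, -11.3145

# Per the piecewise rule: non-target logits are scaled by a,
# the target logit by a + b + c.
non_target_coeff = a
target_coeff = a + b + c

assert non_target_coeff < 0 and target_coeff < 0  # both updates negatively scaled
assert target_coeff < non_target_coeff            # target logit scaled down more strongly
```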
Notably, all three coefficients are negative, i.e.\ all changes to $\theta_j$ are negatively scaled values of $D_{\bm{j}} \left( h_k(\bm{x}_i,\bm{\theta}) \right)$, as can be seen from Equation~\ref{eqn:tayk3_zeroerrorclassrule}. Thus, there are two competing processes in this learning rule: one that aims to minimize all non-target scaled logits (increasing the scaled logit distribution's entropy), and one that aims to minimize the target scaled logit (decreasing the scaled logit distribution's entropy). The processes conflict with each other since logits are scaled through a softmax function. These processes can shift weights in a particular way while maintaining zero training error, which results in implicit regularization. If, however, such shifts in this zero training error regime do lead to misclassifications on the training data, $h_k(\bm{x}_i,\bm{\theta})$ would no longer equal $y_{ik}$, and a non-zero error regime's learning rule would come into effect. It would strive to get back to zero training error with a different $\bm{\theta}$.\n\nSimilarly to Baikal loss, a training error of exactly zero is not an attractor for some third-order TaylorGLO loss functions (this property can be seen through an analysis similar to that in Appendix~\ref{sec:baikalzeroerrnotattractor}). The zero-error case would occur in practice only if this loss function were used to fine-tune a network that truly has zero training error. It is, however, a useful step in characterizing the behavior of TaylorGLO, as will be seen later.\n\n\n\n\n\n\n\n\n\subsection{Data fitting and regularization processes}\n\nUnder what gradient conditions does a network's softmax function transition from increasing the entropy in the output distribution to decreasing it? Let us analyze the case where all non-target logits have the same value, $\frac{\epsilon}{n-1}$, and the target logit has the value $1-\epsilon$. 
That is, all non-target classes have equal probabilities.\n\n\label{thm:softmaxentropy}\n\emph{Theorem 2.\nThe strength of entropy reduction is proportional to}\n\vspace{-0.5em}\n\begin{equation}\n\label{eqn:thmentropyreductionstrength}\n\t\dfrac{ \epsilon (\epsilon - 1) \left( \mathrm{e}^{\epsilon (\epsilon - 1) (\gamma_{\neg T} - \gamma_T)} - \mathrm{e}^{\frac{\epsilon (\epsilon - 1) \gamma_T (n-1) + \epsilon \gamma_{\neg T} (\epsilon (n-3) + n - 1)}{(n-1)^2}} \right) }{ (\epsilon - 1)\; \mathrm{e}^{\epsilon (\epsilon - 1) (\gamma_{\neg T} - \gamma_T)} - \epsilon\; \mathrm{e}^{\frac{\epsilon (\epsilon - 1) \gamma_T (n-1) + \epsilon \gamma_{\neg T} (\epsilon (n-3) + n - 1)}{(n-1)^2}} } .\n\end{equation}\nThus, values less than zero imply that entropy is increased, values greater than zero that it is decreased, and values equal to zero imply that there is no change. The proof is in Appendix~\ref{sec:softmaxentropyderivation}.\n\n\n\n\begin{figure}\n    \centering\n    \begin{minipage}{0.45\textwidth}\n        \centering\n        \includegraphics[width=\textwidth]{images\/zerr_attraction_ce_fullrange.png}\\\n        {\footnotesize $(a)$ Cross-Entropy Loss}\n    \end{minipage}\n    \hfill\n    \begin{minipage}{0.45\textwidth}\n        \centering\n        \includegraphics[width=\textwidth]{images\/zerr_attraction_tayallcnnc.png}\\\n        {\footnotesize $(b)$ TaylorGLO Loss}\n    \end{minipage}\n    \vspace{-0.5em}\n    \caption{Attraction towards zero training error with cross-entropy and TaylorGLO loss functions on CIFAR-10 AllCNN-C models. Each point represents an individual training sample (500 are randomly sampled per epoch); its $x$-location indicates the training epoch, and $y$-location the strength with which the loss function pulls the output towards the correct label, or pushes it away from it. With the cross-entropy loss, these values are always positive, indicating a constant pull towards the correct label for every single training sample. 
Interestingly, the TaylorGLO values span both the positives and the negatives; at the beginning of training there is a strong pull towards the correct label (seen as the dark area on top left), which then changes to a more prominent push away from it in later epochs. This plot shows how TaylorGLO regularizes by preventing overconfidence and biasing solutions towards different parts of the weight space.}\n    \label{fig:zeroerrforces}\n    \vspace{-1em}\n\end{figure}\n\nThe strength of the entropy reduction in Theorem~2 can also be thought of as a measure of the strength of the attraction towards zero training error regions of the parameter space. This strength can be calculated for individual training samples during any part of the training process, leading to the insight that the process results from competing ``push'' and ``pull'' forces. This theoretical insight, combined with empirical data from actual training sessions, explains how different loss functions balance data fitting and regularization.\n\nFigure~\ref{fig:zeroerrforces} provides one such example on AllCNN-C \citep{allcnn} models trained on CIFAR-10 \citep{krizhevsky2009learning} with cross-entropy and custom TaylorGLO loss functions. Scaled target and non-target logit values were logged for every sample at every epoch and used to calculate respective $\gamma_T$ and $\gamma_{\neg T}$ values. These values were then substituted into Equation~\ref{eqn:thmentropyreductionstrength} to get the strength of bias towards zero training error.\n\nThe cross-entropy loss exhibits a tendency towards zero training error for every single sample, as expected. 
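The strength expression of Theorem~2 can be evaluated directly; the sketch below (coefficient values chosen for illustration, not taken from the experiment) reproduces both sign regimes:

```python
import math

def entropy_reduction_strength(eps, n, g_T, g_nT):
    """Evaluate Theorem 2's expression (>0 reduces entropy, <0 increases it)."""
    e1 = math.exp(eps * (eps - 1) * (g_nT - g_T))
    e2 = math.exp((eps * (eps - 1) * g_T * (n - 1)
                   + eps * g_nT * (eps * (n - 3) + n - 1)) / (n - 1) ** 2)
    num = eps * (eps - 1) * (e1 - e2)
    den = (eps - 1) * e1 - eps * e2
    return num / den

# Cross-entropy-like coefficients (gamma_T = n, gamma_nT = 0): positive
# strength, i.e. a pull towards zero training error.
assert entropy_reduction_strength(0.1, 10, g_T=10.0, g_nT=0.0) > 0
# A repelled sample (gamma_nT >> gamma_T): negative strength, i.e. the
# output distribution's entropy is increased instead.
assert entropy_reduction_strength(0.1, 10, g_T=0.0, g_nT=10.0) < 0
```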
The TaylorGLO loss, however, has a much different behavior: initially, there is a much stronger pull towards zero training error for all samples---which leads to better generalization \\citep{yao2007early,li2019biglr}---after which a stratification occurs, where the majority of samples are repelled, and thus biased towards a different region of the weight space that happens to have better performance characteristics empirically.\n\n\n\n\n\n\n\\section{Invariant on TaylorGLO parameters}\n\nThere are many different instances of $\\bm{\\lambda}$ for which models are untrainable. One such case, albeit a degenerate one, is $\\bm{\\lambda} = \\bm{0}$ (i.e., a function with zero gradients everywhere). Given the training dynamics at the null epoch (characterized in Appendix~\\ref{sec:nullepoch}), more general constraints on $\\bm{\\lambda}$ can be derived (in Appendix~\\ref{sec:tayinvariantderivation}), resulting in the following theorem:\n\n\\emph{Theorem 3.\nA third-order TaylorGLO loss function is not trainable if the following constraints on $\\bm{\\lambda}$ are satisfied:}\n\\vspace{-1.5em}\n\\begin{eqnarray}\nc_1 + c_y + c_{yy} + \\dfrac{c_h+c_{hy}}{n} + \\dfrac{c_{hh}}{n^2} &<& \\left(n-1\\right)\\left(c_1 + \\dfrac{c_h}{n} + \\dfrac{c_{hh}}{n^2} \\right) \\\\\nc_y + c_{yy} + \\dfrac{c_{hy}}{n} &<& \\left(n-2\\right)\\left(c_1 + \\dfrac{c_h}{n} + \\dfrac{c_{hh}}{n^2} \\right).\n\\end{eqnarray}\nThe inverse of these constraints may be used as an invariant during loss function evolution. That is, they can be used to identify entire families of loss function parameters that are not usable, rule them out during search, and thereby make the search more effective. More specifically, before each candidate $\\bm{\\lambda}$ is evaluated, it is checked for conformance to the invariant. If the invariant is violated, the algorithm can skip that candidate's validation training and simply assign a fitness of zero. 
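Such a filtering step can be sketched as follows, assuming the $[c_1, c_h, c_{hh}, c_{hy}, c_y, c_{yy}]$ coefficients have already been computed from a candidate $\bm{\lambda}$ and that both inequalities of Theorem~3 must hold jointly (the helper name is illustrative):

```python
def violates_invariant(c, n):
    """True when Theorem 3's untrainability constraints both hold, so the
    candidate loss can be assigned zero fitness without any training."""
    c1, ch, chh, chy, cy, cyy = c
    base = c1 + ch / n + chh / n ** 2
    first = c1 + cy + cyy + (ch + chy) / n + chh / n ** 2 < (n - 1) * base
    second = cy + cyy + chy / n < (n - 2) * base
    return first and second

# MSE (gamma_k = 2 y_ik - 2 h_k, i.e. c = [0, -2, 0, 0, 2, 0]) passes the filter.
assert not violates_invariant([0, -2, 0, 0, 2, 0], n=10)
# A loss that strongly penalizes the target logit (illustrative coefficients)
# is flagged as untrainable and would be skipped during evolution.
assert violates_invariant([1, 0, 0, 0, -10, 0], n=10)
```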
However, due to the added complexity that the invariant imposes on the fitness landscape, a larger population size is needed for evolution within TaylorGLO to be more stable. Practically, a doubling of the population size from 20 to 40 works well.\n\nTable~\ref{tab:results} presents results from TaylorGLO runs with and without the invariant on the CIFAR-10 image classification benchmark dataset \citep{krizhevsky2009learning} with various architectures. Networks with Cutout \citep{cutout} were also evaluated to show that TaylorGLO provides a different approach to regularization. Standard training hyperparameters from the references were used for each architecture. Notably, the invariant allows TaylorGLO to discover loss functions that have statistically significantly better performance in many cases and never a detrimental effect. These results demonstrate that the theoretical invariant is useful in practice, and should become a standard in TaylorGLO applications.\n\n\begin{table}\n    \caption{Test-set accuracy of loss functions discovered by TaylorGLO with and without an invariant constraint on $\bm{\lambda}$. Models were trained on the loss function that had the highest validation accuracy during the TaylorGLO evolution. All averages are from ten separately trained models and $p$-values are from one-tailed Welch's $t$-Tests. Standard deviations are shown in parentheses. The invariant allows focusing metalearning on viable areas of the search space, resulting in better loss functions.}\n    \vspace{0.5em}\n    \label{tab:results}\n    \centering\n    {\n    \footnotesize\n    \begin{tabular}{lccc}\n        \toprule\n        Task and Model&Avg. TaylorGLO Acc.&\textbf{+ Invariant}&$p$-value\\\n        \midrule\n\t\tCIFAR-10 on AlexNet \footnotemark[1] & 0.7901 (0.0026) & \textbf{0.7933 (0.0026)} & 0.0092\\\n        CIFAR-10 on PreResNet-20 \footnotemark[2] & 0.9169 (0.0014) & 0.9164 (0.0019) & 0.2827\\\n        CIFAR-10 on AllCNN-C \footnotemark[3] & 0.9271 (0.0013) & \textbf{0.9290 (0.0014)} & 0.0004\\\n        CIFAR-10 on AllCNN-C \footnotemark[3] + Cutout \footnotemark[4] & 0.9329 (0.0022) & \textbf{0.9350 (0.0014)} & 0.0124\\\n        \bottomrule\n\end{tabular}\n{\scriptsize\n\\\n\footnotemark[1] \citet{NIPS2012_4824} \;\n\footnotemark[2] \citet{preresnet} \;\n\footnotemark[3] \citet{allcnn} \;\n\footnotemark[4] \citet{cutout}\n}\n}\n\vspace{-1em}\n\end{table}\n\n\n\n\n\n\section{Adversarial robustness}\n\nTaylorGLO loss functions discourage overconfidence, i.e.\ their activations are less extreme and vary more smoothly with input. Such encodings are likely to be more robust against noise, damage, and other imperfections in the data and in the network execution. In the extreme case, they may also be more robust against adversarial attacks. This hypothesis will be tested experimentally in this section.\n\nAdversarial attacks elicit incorrect predictions from a trained model by changing input samples in small ways that can even be imperceptible. They are generally classified as ``white-box'' or ``black-box'' attacks, depending on whether the attacker has access to the underlying model or not, respectively. Naturally, white-box attacks are more powerful at overwhelming a model. One such white-box attack is the Fast Gradient Sign Method \citep[FGSM;][]{goodfellow2015explaining}: following evaluation of a dataset, input gradients are taken from the network via a backward pass. Each individual gradient has its sign calculated and scaled by a factor $\epsilon$ that determines the attack strength. 
These perturbations are then added to subsequent network inputs, causing misclassifications.\n\n\n\begin{figure}\n\vspace{1em}\n    \centering\n    \begin{minipage}{0.4\textwidth}\n        \centering\n        \includegraphics[width=\textwidth]{images\/fgsm_allcnnc.pdf}\\\n        {\footnotesize $(a)$ AllCNN-C}\n    \end{minipage}\n    \hfill\n    \begin{minipage}{0.4\textwidth}\n        \centering\n        \includegraphics[width=\textwidth]{images\/fgsm_wrn285.pdf}\\\n        {\footnotesize $(b)$ Wide ResNet 28-5}\n    \end{minipage}\n    \vspace{-0.5em}\n    \caption{Comparing accuracies on CIFAR-10 at different FGSM adversarial attack strengths for AllCNN-C and WRN-28-5 network architectures. For each architecture, the blue bars represent accuracy achieved through training with the cross-entropy loss, green curves that with a TaylorGLO loss, and gray curves that with a TaylorGLO loss specifically evolved in the adversarial attack environment. The leftmost points on each plot represent evaluations without adversarial attacks. TaylorGLO regularization makes the networks more robust against adversarial attacks, and this property can be further enhanced by making it an explicit goal in evolution.}\n    \label{fig:adversarial}\n\end{figure}\n\n\nFigure~\ref{fig:adversarial} shows how robust networks with different loss functions are to FGSM attacks of various strengths. In this experiment, AllCNN-C and Wide ResNet 28-5 \citep{wideresnet} networks were trained on CIFAR-10 with TaylorGLO and cross-entropy loss; indeed, TaylorGLO outperforms the cross-entropy loss models significantly at all attack strengths. Note that in this case, loss functions were evolved simply to perform well, and adversarial robustness emerged as a side benefit. However, it is also possible to take adversarial attacks into account as an explicit objective in loss function evolution. 
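For reference, the FGSM perturbation itself can be sketched on a hypothetical logistic toy model (purely illustrative; not the experimental setup or architecture used here):

```python
import math, random
random.seed(0)

# Toy logistic "network": p(x) = sigmoid(w . x). FGSM perturbs the *input*:
# x_adv = x + eps * sign(grad_x L(x, y)).
d = 5
w = [random.gauss(0, 1) for _ in range(d)]
x = [random.gauss(0, 1) for _ in range(d)]
y = 1.0

def predict(x):
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def loss(x):
    p = predict(x)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# For cross-entropy on a logistic model, grad_x L = (p - y) * w.
p = predict(x)
grad_x = [(p - y) * wi for wi in w]

eps = 0.1
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

# Stepping along the gradient sign strictly increases this loss,
# which is exactly the attack's goal.
assert loss(x_adv) > loss(x)
```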
Since TaylorGLO can use non-differentiable metrics as objectives in its search process, the traditional validation accuracy objective can be replaced with validation accuracy at a particular FGSM attack strength. Remarkably, loss functions found with this objective outperform both the previous TaylorGLO loss functions and the cross-entropy loss. These results demonstrate that the TaylorGLO regularization leads to robust encoding, and such robustness can be further improved by making it an explicit goal in loss-function optimization.\n\n\n\n\n\n\n\n\n\\section{Conclusion}\n\nRegularization has long been a crucial aspect of training deep neural networks, and exists in many different flavors. This paper contributed an understanding of one recent and compelling family of regularization techniques: loss-function metalearning. A theoretical framework for representing different loss functions was first developed in order to analyze their training dynamics in various contexts. The results demonstrate that TaylorGLO loss functions implement a guard against overfitting, resulting in automatic regularization. Two practical opportunities emerged from this analysis: filtering based on an invariant was shown to improve the search process, and the robustness against overfitting was shown to make the networks more robust against adversarial attacks. The results thus extend the scope of metalearning, focusing it not just on finding optimal model configurations, but also on improving regularization, learning efficiency, and robustness directly.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{The CBM detector}\\label{cbmdet}\nCBM is a fixed-target experiment that can be configured for electron-hadron measurements as well as muon-hadron measurements. A microstrip-detector-based Silicon Tracking System (STS) \\cite{Heuser:2015zpa,Deveaux:2014cda}, which reconstructs the momenta and tracks of charged particles, is one of the key components of the CBM experiment. 
The STS comprises 8 equidistant planar detector stations placed 30-100 cm downstream of the target. The STS provides a single hit resolution of $\\sim$25~$\\mu m$ and a momentum resolution of $\\sim$1$\\%$. The CMOS pixel based Micro Vertex Detector (MVD) \\cite{Deveaux:2014cda} is designed to reconstruct open charm decays with a secondary vertex resolution of $\\sim$50~$\\mu m$. The MVD comprises 4 silicon pixel layers located 5-20 cm downstream of the target. The MVD together with the STS is placed in the gap of a dipole magnet with a magnetic field of $\\sim$1~Tm. A Ring Imaging CHerenkov \\cite{Adamczewski-Musch:2017pix} detector is used to identify electrons from low-mass vector meson decays, while high-energy electrons and positrons are identified using the Transition Radiation Detector. Resistive Plate Chambers based Time Of Flight (TOF) measurements are used to identify hadrons \\cite{herrmann2014technical}. \nThe aforementioned detector systems form the basis of the electron-hadron configuration, which is considered in this analysis. The collisions will produce up to 1000 charged particles at the maximum interaction rate of 10 MHz, producing $\\sim$1 Tbytes\/s of raw data. The data are then processed using a First Level Event Selector (FLES) \\cite{de2011first}, which performs online event building, reconstruction, tracking and event selection.\n It is interesting to note that a CBM full-system test setup named mCBM has been constructed at the SIS18 facility of GSI\/FAIR. As this setup offers additional high-rate detector tests in nucleus-nucleus collisions under realistic experimental conditions, it can be used to test the present analysis also at lower energies than the full CBM detector.\n\n\\section{Simulation and datasets}\\label{simdat}\nThe microscopic relativistic N-body hadron transport model UrQMD 3.4 \\cite{Bass:1998ca, Bleicher:1999xi} is selected as the event generator for the present study. 
UrQMD provides both a reasonable, physically well-motivated scenario for the primary nucleus-nucleus collision and a fast, robust N-body event-by-event output in the CBM energy range. These generated UrQMD events then serve as the input to the subsequent CbmRoot \\cite{root_url} detector simulation framework, which transports all particles of each event through the detector subsystems. The standard macros in CbmRoot are used to perform particle transport, detector response and event reconstruction. The default detector geometry for the electron-hadron configuration (sis100$\\_$electron) was simulated using the Geant3 \\cite{brun1987geant} software. Since UrQMD does not include any weak or electromagnetic decays of the produced hadrons, these are performed within the Geant3 package. The present analysis includes only those particles which produce hits in the two main silicon detectors (STS and MVD). Even though CbmRoot can perform the full detector simulation according to the experimental specifications, it does not include a realistic simulation of different backgrounds which may lead to additional noise. Such effects, and how DL may be able to reduce the impact of detector noise, will be studied in future work.\\\\\n\n\n \n With the current simulation setup, four different datasets, labelled as \\textit{Train} and \\textit{Test1}-\\textit{Test3}, of Au+Au collisions at 10 AGeV are generated for this study. \nThe DL models were trained using dataset \\textit{Train}, which contains 10$^{5}$ events with impact parameters in the range of 0 to 16 fm, sampled from a uniform $b$-distribution.\n\nDatasets \\textit{Test1}, \\textit{Test2} and \\textit{Test3} were used to quantify the performance of the trained models. \nThe first testing set \\textit{Test1} contains 18 subsets, each comprising 500 events with a different but fixed impact parameter from 0 to 16 fm. 
Datasets \\textit{Test2} and \\textit{Test3} contain 10$^{6}$ and 10$^{5}$ events respectively, with impact parameters sampled from a \\textit{bdb} distribution (i.e.\\ the probability of an impact parameter $b$ is proportional to $b$, for $b$ from 0 to 16 fm). Thus, \\textit{Test2} and \\textit{Test3} contain impact parameter distributions which are different from that of the training set, which is important for a meaningful validation of the models. Moreover, \\textit{Test3} uses a modified physics scenario which will be explained later in the paper.\n\nThe features of all the datasets are presented in table \\ref{table_data}.\n\n\\begin{table}[h]\n\\begin{tabular}{cccc}\n\\hline\n\\hline\nDataset & $\\#$ events & \\begin{tabular}[c]{@{}c@{}}Impact parameter\\\\ $[\\mathrm{fm}]$\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Impact parameter \\\\ distribution\\end{tabular} \\\\ \\hline\n\\textit{Train} & 10$^{5}$ & 0-16 & uniform \\\\ \n\\textit{Test1} & $18\\times 500$ & 0.5 - 16 & constant \\\\ \n\\textit{Test2} & 10$^{6}$ & 0-16 & $bdb$ \\\\ \n\\textit{Test3} & 10$^{5}$ & 0-16 & $bdb$ \\\\ \\hline\n\\hline\n\\end{tabular}\n\\caption{\\label{table_data} Datasets used in the study. The last column defines the impact parameter distribution of the events. The training dataset has a uniform distribution of impact parameter while a constant or $bdb$ distribution is used in the testing datasets.}\n\\end{table}\n\n\n\\section{Deep Learning models} \\label{dlmod}\n\nDeep Learning is a subset of Machine Learning which uses multi-layer neural networks that can capture deep correlations in the data \\cite{lecun2015deep}. This enables the computer to find solutions to complex problems that traditional ML techniques cannot find. PointNet is a deep learning architecture optimised to learn from point cloud data \\cite{qi2017pointnet}. 
Point clouds are collections of unordered points in space, where each point represents the $N$-dimensional attributes of an element that contributes to the collective structure of the cloud.\nOne of the important features of the PointNet model is that it can learn to be invariant to the order of input points. \n\nThe PointNet architecture can be extremely useful in nuclear and particle physics experiments, as most of the sensor or detector data has the geometrical structure of point clouds. PointNet can be used to train deep learning models which take raw experimental data as input. Here the predictions are independent of the ordering of the particle tracks or hits. In this study, we have developed four PointNet based models that use different types of detector outputs, such as hits and tracks of particles, as features to determine the impact parameter of each collision. A point in the point cloud is therefore defined by the attributes of a hit or a track.\nMore detailed information on the construction and training of the PointNet can be found in the supplemental material in \\ref{appendix}.\n\n Impact parameter regression is a supervised learning problem where the model learns to map the inputs to the impact parameter of the event upon being trained on labelled data. During training, the model goes through several samples of data to learn the correlations between the input data and the expected output. The loss function is used as a measure of how well the model has learned during the training stage. In this study, the models were trained using 75$\\%$ of events in dataset \\textit{Train} with the Mean Squared Error (MSE) as the loss function. The remaining 25$\\%$ of events were used for validation. Other metrics such as the Mean Absolute Error (MAE) and the coefficient of determination ($R^{2}$) were used to select the best model for further analyses. 
If $y_{true}$, $y_{pred}$ and $\\left< y_{true} \\right>$ are the true impact parameter, the DL prediction and the mean of the true values respectively, the coefficient of determination is calculated as\n\\begin{equation}\n R^{2}=1- \\frac{\\sum(y_{true} - y_{pred})^{2}}{\\sum(y_{true} - \\left< y_{true}\\right> )^{2}+\\epsilon}\n\\end{equation}\nwhere the second term is the fraction of variance unexplained by the predictions and $\\epsilon$ is a small positive number to prevent division by zero. The sums run over all validation events.\n \n Training the models requires the tuning of several hyperparameters to achieve the best performance. We started with network structures similar to the original PointNet implementation, and then tuned different hyperparameters using a trial-and-error method until an optimum performance, as defined by MSE, MAE and $R^2$, was observed. The models developed in this study are briefly described below.\\\\\n \n \\noindent\n\\textbf{Model-1 (M-hits)}:\\\\\n This model (\\textit{M-hits}) uses the x, y, z positions of the hits of particles in the MVD detector as input attributes. Since our inputs are just hits in the detector planes, this model can perform impact parameter determination before track finding and fitting. Since the PointNet architecture requires a fixed input size, the event with the maximum number of hits ($N_{max}=1995$) in the training dataset is used as a reference to fix the input dimensions (N$\\times$F) to be 1995$\\times$3. Any event with a smaller number of hits has the remaining rows filled with zeros. When the number of hits exceeded 1995 in the testing datasets, hits were dropped randomly to fit into the input dimensions. 
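The fixed-size input scheme just described (zero-padding short events, randomly dropping hits from overly long ones) can be sketched as follows. This is an illustrative NumPy sketch, not the analysis code; the function name and random seed are assumptions, while `n_max=1995` and the 3 features per hit are the M-hits values quoted above:

```python
import numpy as np

def to_fixed_size(hits, n_max=1995, n_features=3, seed=0):
    """Return an (n_max, n_features) array: zero-pad short events,
    randomly drop hits from events longer than n_max."""
    hits = np.asarray(hits, dtype=float)
    if len(hits) > n_max:
        keep = np.random.default_rng(seed).choice(len(hits), size=n_max, replace=False)
        return hits[keep]
    out = np.zeros((n_max, n_features))
    out[:len(hits)] = hits
    return out

event = [[1.0, 2.0, 30.0], [-0.5, 0.3, 45.0]]  # two (x, y, z) hits
cloud = to_fixed_size(event)                   # shape (1995, 3), rest zeros
```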
Note that in principle the input size could also be extended to take into account the exponential tail of the $N_{charge}$ distribution, but that would also increase the computational time.\\\\\n \n \\noindent\n\\textbf{Model-2 (S-hits)}:\\\\\nThis model uses the x, y, z coordinates of hits in the STS detector planes. Similar to the \\textit{M-hits} model, \\textit{S-hits} also does not require tracking to be performed before the impact parameter can be reconstructed.\nThe maximum number of hits present in an event in the training data was 9820. Therefore, the input dimensions (N$\\times$F) were fixed to be 9820$\\times$3, with provisions analogous to \\textit{M-hits} to handle smaller or larger numbers of hits in testing data.\\\\\n\n \\noindent \n\\textbf{Model-3 (MS-tracks)}:\\\\\nThe \\textit{MS-tracks} model uses the features of tracks reconstructed from both the hits in MVD and STS for predicting the impact parameter. Hence, this model can be used to estimate the impact parameter only after track reconstruction. In this model, the x, y, z coordinates, dx\/dz, dy\/dz and charge-to-momentum ratio (q\/p) of particle tracks at the first and last planes of the tracks are the attributes of a point in the 12-dimensional point cloud. Therefore, the input dimensions are 560$\\times$12 (N$\\times$F), where 560 is the maximum number of tracks present in an event from the training data. Events with fewer tracks are filled with rows of zeros to maintain the same input dimensionality.\\\\\n\n \\noindent\n\\textbf{Model-4 (HT-combi)}:\\\\\nThis model learns from the combination of the hit and track information used by \\textit{M-hits} and \\textit{MS-tracks}, respectively. It uses the hits from MVD together with tracks reconstructed from hits in MVD and STS to determine the impact parameter of an event. 
It takes the MVD hits with dimensions 1995$\\times$3 and MVD + STS tracks with dimensions 560$\\times$12.\\\\\n\n\\section{Performance of the models}\\label{perf}\nThe DL models were trained via backpropagation until the validation MSE (loss) started saturating or diverging from the training loss. The MAE and coefficient of determination of the validation dataset were also considered before choosing the final weights for the model. The trained models were then tested on datasets \\textit{Test1}, \\textit{Test2} and \\textit{Test3} to evaluate their performances. The details of the final models are tabulated in table \\ref{modeldet}. All models achieved an $R^{2}$ value of about 0.98 upon training. It can be seen that increasing the complexity ($\\#$ param.) increases the training duration required for the model to converge to an optimal solution. Nevertheless, all the models finally achieve similar scores for MSE, MAE and $R^{2}$, with \\textit{MS-tracks} and \\textit{HT-combi} achieving a marginally better $R^{2}$ value.\n\n\\begin{table}[b]\n\n\n\\begin{tabular}{p{1.48cm}p{1cm}p{1.3cm}p{0.9cm}p{0.9cm}p{0.8cm}p{0.8cm}}\n\\hline\n\\hline\nModel&Epochs & $\\# $ param. & MSE & MAE & $R^{2}$ & Events\/s \\\\\n\\hline\nM-hits & 128 &$ 3 \\cdot 10^{6}$ & 0.43 & 0.51 & 0.979 & 660 \\\\\nS-hits & 354 &$ 3 \\cdot 10^{6}$ & 0.47 & 0.54 & 0.976 & 159 \\\\\nMS-tracks & 372 & $ 6 \\cdot 10^{6}$ & 0.40 & 0.50 & 0.981 & 1092 \\\\\nHT-combi & 484 & $ 10 \\cdot 10^{6}$ & 0.39 & 0.49 & 0.981 & 435 \\\\\n\n\\hline\n\\hline \n\\end{tabular}\n\\caption{\\label{modeldet} Main features of the trained DL models. An epoch is defined as a single training pass through the entire training dataset. The number of parameters ($\\# $ param.) refers to the weights, biases and kernels of the model together with non-trainable parameters which define the structure of the network. This number roughly corresponds to the complexity of the model. The MSE, MAE and $R^{2}$ are for the validation data. 
The last column gives an estimate for the execution speed of the model on a GPU card.} \n\\end{table}\n\n\n To study the speed of the DL models, 10000 events from the dataset \\textit{Test2} were tested on an Nvidia GeForce RTX 2080 Ti with 12 GB of graphics memory. The \\textit{MS-tracks} model was found to be the fastest, with a prediction speed of about 1092 events\/second, while the \\textit{S-hits} model was the slowest, with a speed of about 159 events\/second. However, \\textit{MS-tracks} can only be deployed after track reconstruction, which means that some pre-processing is required, which takes computational time. It must also be noted that the models were not optimised for speed. It is possible to improve the model speed by reducing the model complexity, by modifying the input dimensions to make optimal use of the available resources, or by using more advanced GPUs. Nevertheless, the current speed is promising for an online analysis of data if predictions are performed in parallel on multiple GPUs. In addition, the advantage of a more complex model, as in our study, is that it can also be used for other analysis tasks which can then be performed at a similar speed.\n\n\n\n \\begin{figure}\n \\centering\n \\includegraphics[width=0.485\\textwidth,height=6.05cm]{2.eps}\n \\caption{(Color online) Histogram of the charged particle track multiplicity as a function of impact parameter. The distribution is generated using the 10$^{6}$ minimum bias events in \\textit{Test2}. }\n \\label{2}\n\\end{figure}\nWhile conventional methods of centrality determination, based on connecting the number of charged tracks in an event with its centrality \\cite{Klochkov:2017oxr}, can be useful for a broad grouping of events, they lack the ability to perform accurate impact parameter determination for individual events. This is evident from figure \\ref{2}, in which the charged particle track multiplicity is plotted as a function of impact parameter. 
For a given track multiplicity, there is a wide range of possible impact parameters. This spread in track multiplicity is the largest for the most interesting central events. Similarly, for the most peripheral events, a track multiplicity could correspond to a large range of impact parameters.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.485\\textwidth]{3c.eps}\n \\caption{(Color online) Relative precision of the DL models as a function of impact parameter. The results from the $\\textit{Polyfit}$ model (grey) are also plotted to benchmark the performance of DL models. The events used are from dataset $\\textit{Test1}$, and predictions are for a fixed impact parameter.}\n \\label{3}\n\\end{figure}\n\nAccurate impact parameter determination on an event-by-event basis is therefore not a trivial task that can be accomplished based only on a single variable like track multiplicity. It requires modelling of other known and unknown correlations of the experimental data with the impact parameter. Moreover, an online event analysis demands minimal pre-processing of the raw experimental data. This makes PointNet based DL models efficient candidates for event-by-event impact parameter determination. As a basic reference for the performance of our DL models, we will use a much simpler polynomial fit that can also perform event-by-event predictions from the track multiplicity of the event. This model (\\textit{Polyfit}) uses a third-order polynomial fit of the impact parameter as a function of the track multiplicity,\n\\begin{equation}\n b = a_0 + a_1\\times x + a_2 \\times x^{2} + a_3 \\times x^{3}\n\\end{equation}\nwhere $b$ and $x$ are the impact parameter and the number of charged tracks, respectively. 
The fit gives the following parameters:\\\\\n$a_0=14.28 $; $\\ a_1=-7.01 \\times 10^{-2}$; $\\ a_2=2.13 \\times 10^{-4}$; $ \\ a_3= -2.70 \\times 10^{-7}$.\\\\\nTo quantify the precision of the DL models, we first look at the spread of their predictions for a fixed input impact parameter. The relative precision in the predictions of the DL models can be calculated as $\\sigma_{err}\/b_{true}$, where $\\sigma_{err}$ is the standard deviation of the distribution of the prediction error $(true - predicted)$ and $b_{true}$ is the true impact parameter. The relative precision in predictions is plotted as a function of impact parameter for different DL models and the Polyfit model in figure \\ref{3}. It is evident that the simple model fails for the most central collisions ($b<2$~fm), with the relative precision increasing up to 200$\\%$, while the DL models have better precision in comparison. At 0.5~fm, the worst relative precision observed in DL models was about 79$\\%$, and this dropped below 50~$\\%$ for events with impact parameter 1~fm or above. For events from 3 - 16~fm, the spreads in the predictions of the DL models and the polynomial fit model are similar. 
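For illustration, the cubic Polyfit baseline can be evaluated directly with the coefficients quoted above (a minimal Python sketch, not the analysis code; the multiplicity values below are arbitrary examples):

```python
# Cubic Polyfit baseline: impact parameter b (fm) as a function
# of the charged track multiplicity x, using the quoted coefficients.
A0, A1, A2, A3 = 14.28, -7.01e-2, 2.13e-4, -2.70e-7

def polyfit_b(x):
    """Evaluate b = a0 + a1*x + a2*x^2 + a3*x^3 for multiplicity x."""
    return A0 + A1 * x + A2 * x**2 + A3 * x**3

# Low multiplicity maps to a peripheral (large-b) event,
# higher multiplicity to a more central (small-b) one.
b_peripheral = polyfit_b(10)
b_central = polyfit_b(300)
```

Since the mapping is a single deterministic curve, all events with the same track multiplicity receive the same predicted impact parameter, which is exactly why this baseline cannot resolve the spread shown in figure \ref{2}.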
The polynomial fit model has poor accuracy in comparison to the DL models, despite its comparable precision in mid-central and peripheral events. The DL models have a mean error between -0.33 and 0.22~fm for events with impact parameter 2-14~fm, while the mean for the \\textit{Polyfit} model fluctuates between -0.7 and 0.4~fm. For events in the range 5-14~fm, $\\textit{HT-combi}$ and \\textit{Polyfit} offer a relative precision of 4-9~$\\%$ and 2-8~$\\%$, respectively. Despite their similar precision (for 5-14~fm), $\\textit{HT-combi}$ yields more accurate predictions, with a mean error of -0.33 to 0.13~fm, while the polynomial fit exhibits mean errors varying between -0.7 and 0.4~fm. These results indicate that the DL models use more information than just the number of charged tracks to determine the impact parameter.\n\n\nIn an actual collision experiment, the probability of having events with impact parameter ($b$) is proportional to the impact parameter, which gives a different distribution of the impact parameters than the one used in the \\textit{Train} dataset: i.e.\\ peripheral events are more likely. To study the performance of the DL models in such a scenario, dataset \\textit{Test2} was used to predict the impact parameter for different centrality classes with a bin width of $5\\%$.\nThe mean of the prediction error is plotted as a function of centrality in figure \\ref{5}.\nThe DL models have a mean error close to zero for most of the centrality classes, while there are large fluctuations in the simple polynomial model. Another interesting factor is that the number of events which have at least 1 hit in the MVD detector but no reconstructed tracks (using MVD and STS hits) was about 10$\\%$ of \\textit{Test2}. These are ``empty'' events for the \ntrack multiplicity based method. 
However, the DL models can use hits to make predictions of the impact parameter of these events, though the error is large in comparison to their predictions for central and mid-central events.\n\n\n \\begin{figure}\n \\centering\n \\includegraphics[width=0.485\\textwidth]{5c.eps}\n \\caption{(Color online) Mean error in predictions as a function of centrality. Dataset \\textit{Test2} is used, in which peripheral events are more likely to occur. The track multiplicity is used for the centrality binning. The points at 90 $\\%$ centrality are results from events with no tracks reconstructed. Therefore the \\textit{Polyfit} and \\textit{MS-tracks} models do not have a data point at 90 $\\%$ centrality.}\n \\label{5}\n \\end{figure}\n\nThe accuracy of the reconstructed impact parameter of an event can depend on how accurately the simulation model can describe the outcome of single events. This introduces a bias on the predictions from the choice of the event generating model. The dependence of the DL predictions on the physics model is studied by applying the DL model trained on dataset \\textit{Train} to events from a separate dataset that introduces different physics (\\textit{Test3}). To generate \\textit{Test3}, the final charged particle multiplicity in the tested events was modified by an increase of the pion production cross section in UrQMD. To do so, the $\\Delta$-baryon absorption cross section in the UrQMD model was decreased by a factor of 2, resulting in an increased pion production, especially for central collisions. The increased number of pions is reflected in the difference of the mean charged track multiplicity ($\\Delta M$) for events in \\textit{Test3} and \\textit{Test2}, for a given centrality, as shown in the inset of figure \\ref{7}. There is a difference of about 14 tracks for the most central events, and it reduces to less than 3 for peripheral collisions. 
This change in physics translates into a shift in the mean of the error distributions ($\\mu_{err}^{shift}$) given by\n\\begin{equation}\n\\mu_{err}^{shift}= \\left| \\mu_{errT3}- \\mu_{errT2} \\right| \n\\end{equation}\nwhere $\\mu_{errT3}$ and $\\mu_{errT2}$ are the means of the prediction errors for datasets \\textit{Test3} and \\textit{Test2}, respectively. This shift in the mean is plotted as a function of centrality in figure \\ref{7}. It is observed that the DL models show a shift in the mean of up to 0.32~fm, while the polynomial fit shows a shift of up to 0.53~fm. The shift is more evident for central collisions, as expected. \nThis means that the DL network learns more information about the event features than \\textit{Polyfit}, independent of the event multiplicity, and thus is less model dependent than a simple fit.\nThe \\textit{MS-tracks} and \\textit{HT-combi} models show slightly better robustness to the physics modification compared to the \\textit{M-hits} and \\textit{S-hits} models. The track multiplicity of the event is definitely an important feature with a strong correlation with the impact parameter. However, as the DL models learn other information in the data in addition to the track multiplicity, they tend to be more robust than the polynomial fit model, which essentially depends only on the track multiplicity.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.485\\textwidth]{8c.eps}\n \\caption{(Color online) Inset: Difference of the mean track multiplicity for datasets \\textit{Test3} and \\textit{Test2} ($\\Delta M$) as a function of centrality. The change of the pion cross section in \\textit{Test3} is expected to be more visible in central collisions and leads to a larger number of charged tracks. Main figure: Difference of the mean of error distributions for datasets \\textit{Test3} and \\textit{Test2} as a function of centrality. 
The increased pion production in central events leads to a systematic under-prediction of the impact parameter in \\textit{Test3}. However, the DL models appear less model dependent than the polynomial fit.\n \\label{7}}\n\\end{figure}\n\n\\section{Conclusion and Discussion}\\label{conc}\nIn this study, we have shown that PointNet-based DL models can be used for an accurate determination of the impact parameter in the CBM experiment. The minimal preprocessing of input data and the high processing speed of the DL models make them ideal candidates for online event selection. It is also interesting to note that all four types of models (\\textit{M-hits}, \\textit{S-hits}, \\textit{MS-tracks} and \\textit{HT-combi}) lead essentially to comparable precision in the determination of the underlying impact parameter. Indeed, track-based modelling shows only marginally better performance in evaluating validation data. \n\n\nThe DL models are a reliable tool for impact parameter determination for impact parameters in the range 2-14~fm. Events with an impact parameter of less than 2~fm constitute only a very small fraction of the total events in an experiment. Nevertheless, the predictions are still better than those of the polynomial fit, which fails for the most central events. The deep learning models show a superior performance in comparison to a simple model which relies only on the track multiplicity. However, all methods to estimate the impact parameter will have a bias in the predictions inherited from the physics models used in data generation. This is true for Glauber-based estimation as well. In addition, the training data used in this study, generated with the UrQMD model and the CBM detector simulation, may not perfectly represent real data. This model bias can be estimated for DL models by comparing the predictions of a model on data from different event generators. This bias could also be minimised by using events from multiple event generators in the training samples. 
The use of these DL models in the experiment would also require more investigations into the robustness of the model against expected detector noise and efficiency. However, these are beyond the scope of this paper and are left for future investigations. The practical application of a DL-based event selection algorithm, however, requires further studies on the scalability of the prediction speed on multiple GPUs and on the possibilities of incorporating other selection criteria. It was also found that the model complexity can be further reduced without significant change in the performance. Therefore, the prediction speed can also be scaled up by reducing the number of model parameters. The results of an ablation study on the \\textit{M-hits} model to see the performance change with a reduced number of parameters are described in \\ref{appendixb}. \n\nThe PointNet based models presented in this study use information like tracks and hits of particles, which are available in every heavy-ion collision experiment immediately during data collection. It is intended to use the developed model architectures in other heavy-ion collision experiments, e.g.\\ ALICE at the Large Hadron Collider (LHC) or HADES at the SIS18. Here the model can be employed and studied with real data. Moreover, the models used in this paper can readily be generalised for tasks other than impact parameter determination. In the future it is worthwhile to study if a similar model can also be used for more complex tasks like the identification of rare physics processes, the determination of other observables and the detection of the QCD phase transition. \n\n\n\\section*{Acknowledgement}\nThe authors thank Volker Friese, Manuel Lorenz, Ilya Selyuzhenkov and Steffen Bass for helpful discussions and comments, and Dariusz Miskowiec, Dmytro Kresan and Florian Uhlig for help with the unigen code. 
We thank the CBM collaboration for the access to the CBM-ROOT simulation software package.\nMOK, JS, and KZ thank the Samson AG and the BMBF through the ErUM-Data project for funding. JS and HS thank the Walter Greiner Gesellschaft zur F\\\"{o}rderung der physikalischen Grundlagenforschung e.V. for its support. MOK thanks HGS-HiRe and GSI through a F$\\&$E grant for their support.\nComputational resources were provided by the NVIDIA Corporation with the donation of two NVIDIA TITAN Xp GPUs and the Frankfurt Center for Scientific Computing (Goethe-HLR).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nMetal-insulator transitions are a topic of intense activity in modern physics.\nIn general, there are three kinds of insulators.\nSystems in which the valence band is completely filled are called band insulators\\cite{PhysRevLett.83.2014,https:\/\/doi.org\/10.1002\/andp.19283921704}.\nThe staggered potential, in which the on-site energies are different, can also produce band insulators with a spectral gap in cold atom experiments\\cite{PhysRevLett.115.115303,PhysRevLett.119.230403}.\nIn real materials, disorder weakens the constructive interference and affects quantum transport.\nDisorder-induced localization, which was proposed more than half a century ago as the Anderson insulator,\nhas inspired numerous efforts to explore the metal-insulator transition\\cite{PhysRev.109.1492,RevModPhys.50.191,PhysRevLett.96.063904}.\nIn addition to these systems, when electron correlations are considered, a metallic system can become an insulator, induced by the competition between the energy gap and the kinetic energy;\nthe electrons in the narrow bands near the Fermi energy become localized,\nand the system becomes a Mott insulator\\cite{PhysRevB.75.193103,RC1977297}.\n\nIn past decades, the nature of the disorder-driven metal-insulator transition in two-dimensional (2D) interacting systems\nhas been 
discussed intensively\\cite{A.M.Finkelstein,finkel1984weak,PhysRevB.30.527,PhysRevB.57.R9381,PhysRevLett.88.016802,doi:10.1126\/science.1115660,app9061169}.\nThe existence of a metallic state at zero magnetic field was first predicted by Finkelstein\\cite{A.M.Finkelstein,finkel1984weak} and Castellani et al.\\cite{PhysRevB.30.527}, and the possibility of metallic behavior and a metal-insulator transition was later confirmed in Refs.\\cite{PhysRevLett.88.016802,PhysRevB.57.R9381}.\nBy perturbative renormalization group methods, the combined effects of interactions and disorder were studied, and a quantum critical point was identified that separates the metallic phase, stabilized by electronic correlations, from the insulating phase, where disorder prevails over the electronic interactions\\cite{doi:10.1126\/science.1115660}. For reviews, see Ref.\\cite{app9061169} and references therein. To understand the metal-insulator transition, it is now believed that we must treat\nelectronic correlation and disorder on an equal footing,\nbecause disorder and interactions are both present in real\nmaterials\\cite{Curro_2009,Dagotto257,PhysRevB.50.8039}. From a theoretical point of view, this is difficult.\nWhen both disorder and interactions are strong, perturbative approaches usually break down\\cite{RevModPhys.66.261,PhysRevB.30.527},\nand quantum Monte Carlo simulations may be affected by the `minus-sign problem'.\n\nIn the context of QMC simulations, various interesting metal-insulator transitions\nhave been reported in different physical systems\\cite{PhysRevLett.120.116601,PhysRevLett.109.026404,PhysRevB.101.155413}. 
By studying the disordered\nHubbard model on a square lattice at quarter filling, it was shown that repulsion between electrons can significantly enhance the conductivity, which provides evidence of a phase transition in a two-dimensional model containing both interactions and disorder\\cite{PhysRevLett.83.4610}.\nThe effects of a Zeeman magnetic field on the transport and thermodynamic properties have also been discussed\\cite{PhysRevLett.90.246401};\nit was argued that a magnetic field enhances localized behavior in the presence of interactions and disorder and induces a metal-insulator transition, in which the qualitative features of the magnetoconductance agree with experimental findings.\nIn a two-dimensional system on a honeycomb lattice, which\nfeatures a linearly vanishing density of states at the Fermi level, a novel disorder-induced nonmagnetic\ninsulating phase was found to emerge from the zero-temperature quantum critical point, separating a\nsemimetal from a Mott insulator\\cite{Singha1176}.\nThe authenticity of such insulating phases has also been studied, and `false insulating' behavior was shown to originate in closed-shell effects\\cite{PhysRevB.85.125127}.\n\nHowever, due to the limitation of the `minus-sign problem' in QMC simulations, most studies have focused on the half-filled case\\cite{PhysRevLett.101.086401,PhysRevB.81.075106} or on some fixed electronic filling\\cite{PhysRevB.77.075101,PhysRevB.67.205112}.\nExperimentally, transport measurements of effectively two-dimensional (2D) electron systems in\nsilicon metal-oxide-semiconductor field-effect transistors\nprovided evidence that a metal-insulator transition can occur,\nwhere the temperature dependence of the conductivity $\\sigma_{dc}$ changes\nfrom that typical of an insulator at lower density to that typical of a conductor as the density increases\nabove a critical density\\cite{PhysRevB.50.8039,PhysRevB.51.7038,PhysRevLett.77.4938}.\nA transition was also observed in a two-dimensional Mott insulator from an 
anomalous metal to a Fermi liquid by doping\\cite{doi:10.1126\/science.abe7165}.\nThus, doping is also an important physical parameter for tuning the phase transition,\nwhile determining the doping-dependent metal-insulator transition is a subtle and largely understudied problem.\nIt has also been reported that cold-atom-based quantum simulations offer a remarkable opportunity to investigate the doping problem\\cite{doi:10.1126\/science.aal3837}.\n\nIn this paper, we evaluate the doping-dependent sign problem and then select several doping levels to examine the doping-dependent metal-insulator transition of the disordered Hubbard model on a square lattice. We then\nexamine whether this model also has a universal value of the conductivity.\nIn the simulations, the sign problem is minimized by\nchoosing off-diagonal (hopping) rather than diagonal disorder because, at\nleast at half filling, there is no sign problem in the\nformer case, and consequently, simulations can be pushed\nto significantly lower temperatures.\nWe show that the sign problem worsens with increasing parameter strength, such as the on-site interaction; however, it is alleviated in the presence of bond disorder\\cite{PhysRevB.55.4149}.\nFor results away from half filling, we choose densities where the sign problem is\nless severe than at other densities, and present a phase diagram of the critical disorder strength determined by repulsion and doping in a disordered Hubbard model, going beyond previous results\\cite{PhysRevLett.83.4610}.\n\n\\section{Model and method}\n\\label{sec:model}\n\nThe Hamiltonian for a disordered Hubbard model on a square lattice is defined as\n\\begin{eqnarray}\n\\label{Hamiltonian}\n\\hat H=-\\sum_{{\\bf ij}\\sigma}t_{\\bf ij}\\hat c_{{\\bf i}\\sigma}^\\dagger \\hat c_{{\\bf j} \\sigma}^{\\phantom{\\dagger}}+U\\sum_{{\\bf i}}\\hat n_{{\\bf i}\\uparrow}\\hat n_{{\\bf i}\\downarrow}-\\mu \\sum_{{\\bf i}\\sigma} \\hat n_{{\\bf i}\\sigma}\n\\end{eqnarray}\nwhere 
$t_{\\bf ij}$ and $U$ represent the hopping amplitude between nearest-neighbor sites\nand the on-site repulsive interaction, respectively, and $\\mu$ denotes the chemical potential, which controls the electron density of the system.\n$\\hat c_{{\\bf i}\\sigma}^\\dagger(\\hat c_{{\\bf i}\\sigma}^{\\phantom{\\dagger}})$ is the creation (annihilation)\noperator with spin $\\sigma$ at site ${\\bf i}$, and\n$\\hat n_{{\\bf i}\\sigma}$=$\\hat c_{{\\bf i}\\sigma}^\\dagger \\hat c_{{\\bf i}\\sigma}^{\\phantom{\\dagger}}$ is the number operator.\nDisorder is introduced by drawing the hopping parameters $t_{\\bf ij}$ from the uniform distribution $P(t_{\\bf ij})=1\/\\Delta$\nfor $t_{\\bf ij}\\in[t-\\Delta\/2,t+\\Delta\/2]$ and zero otherwise, so that $\\Delta$ is a measure of the strength of the disorder\\cite{PhysRevLett.83.4610}. We set $t$=1 as the default energy scale.\nThe number of disorder realizations used in the present work is $20$, which is enough to obtain reliable results (see Appendix for details).\n\nWe use the DQMC method\\cite{PhysRevD.24.2278} to investigate the phase transitions in the model defined by Eq.(\\ref{Hamiltonian}) numerically. DQMC is a nonperturbative approach, providing an exact numerical method to study\nthe Hubbard model at finite temperature. 
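As a concrete illustration of the disorder-averaging loop described above, the following sketch (plain Python/NumPy; the function names are illustrative and not from the paper) draws bond-disorder realizations from the box distribution $P(t_{ij})=1/\Delta$ and averages a toy estimate of the dc conductivity, $\sigma_{dc}=(\beta^2/\pi)\,\Lambda_{xx}(\mathbf{q}=0,\tau=\beta/2)$, over them. A real DQMC measurement of $\Lambda_{xx}$ would replace the synthetic correlator used here.

```python
import numpy as np

def draw_hoppings(L, t=1.0, delta=2.0, rng=None):
    """One bond-disorder realization: every nearest-neighbor hopping t_ij is
    uniform on [t - delta/2, t + delta/2]; two bonds (+x, +y) per site on an
    L x L periodic square lattice."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(t - delta / 2, t + delta / 2, size=(2, L, L))

def sigma_dc(beta, tau_grid, lambda_xx):
    """Proxy sigma_dc = (beta^2 / pi) * Lambda_xx(q=0, tau=beta/2),
    with the imaginary-time grid interpolated at beta/2."""
    return beta**2 / np.pi * np.interp(beta / 2, tau_grid, lambda_xx)

def disorder_average(beta, L, delta, n_realizations=20, seed=0):
    """Average sigma_dc over independent disorder realizations (the paper
    uses 20).  The correlator below is synthetic stand-in data whose only
    purpose is to exercise the pipeline."""
    rng = np.random.default_rng(seed)
    tau = np.arange(0.0, beta + 1e-9, 0.1)          # Delta_tau = 0.1 grid
    vals = []
    for _ in range(n_realizations):
        t_ij = draw_hoppings(L, delta=delta, rng=rng)
        # synthetic Lambda_xx, symmetric about beta/2 as the true one is;
        # its midpoint value is tied (arbitrarily) to the mean hopping
        lam = t_ij.mean() * np.cosh(2.0 * (tau - beta / 2)) / np.cosh(beta)
        vals.append(sigma_dc(beta, tau, lam))
    return np.mean(vals), np.std(vals) / np.sqrt(n_realizations)
```

The second return value is the statistical error of the disorder average, which is the origin of the error bars quoted for $\sigma_{dc}$ below.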
First, the partition function $Z=\\mathrm{Tr}\\, e^{-\\beta \\hat H}$ is written as a path integral discretized\ninto slices of width $\\Delta \\tau$ over the imaginary-time interval $(0,\\beta)$.\nThe kinetic term is quadratic, and the on-site interaction term can be decoupled into a quadratic term by a discrete Hubbard-Stratonovich field; then, by analytically integrating out the quadratic fermionic terms, $Z$ can be converted into the product of two fermion determinants, one for spin up and the other for spin down.\nThe Metropolis algorithm is used to stochastically update the sample, and we set $\\Delta\\tau=0.1$, leading to sufficiently small errors in the Trotter approximation.\n\nTo study the phase transitions of the system, we compute the $T$-dependent dc conductivity, which can be obtained from the momentum $\\textbf{q}$- and imaginary-time $\\tau$-dependent current-current correlation function $\\Lambda_{xx}(\\textbf{q},\\tau)$\\cite{PhysRevB.54.R3756,PhysRevLett.75.312}:\n\\begin{eqnarray}\n\\label{DC}\n\\sigma_{dc}(T)=\\frac{\\beta^2}{\\pi}\\Lambda_{xx}(\\textbf{q}=0,\\tau=\\frac{\\beta}{2})\n\\end{eqnarray}\nHere, $\\Lambda_{xx}(\\textbf{q},\\tau)$=$\\left<\\hat{j}_x(\\textbf{q},\\tau)\\hat{j}_x(\\textbf{-q},0)\\right>$ and $\\beta$=$1\/T$, where $\\hat{j}_x(\\textbf{q},\\tau)$ is the Fourier transform of the imaginary-time-dependent current operator $\\hat{j}_x(\\textbf{r},\\tau)$ in the $x$ direction (setting $\\hbar=1$):\n\\begin{eqnarray}\n\\label{transform}\n\\hat{j}_x(\\textbf{r},\\tau) = e^{\\hat H\\tau}\\hat{j}_x(\\textbf{r})e^{-\\hat H\\tau}\n\\end{eqnarray}\nwhere $\\hat{j}_x(\\textbf{r})$ is the electronic current density operator, defined in Eq.(\\ref{J}):\n\\begin{eqnarray}\n\\label{J}\n\\hat{j}_x(\\textbf{r}) = i\\sum_{\\sigma}t_{i+\\hat{x},i}\\,(\\hat c_{i+\\hat{x},\\sigma}^{\\dagger}\\hat c_{i\\sigma}^{\\phantom{\\dagger}}-\\hat c_{i \\sigma}^{\\dagger}\\hat c_{i+\\hat{x},\\sigma}^{\\phantom{\\dagger}})\n\\end{eqnarray}\nThe validity of Eq.(\\ref{DC}) has been examined, and this equation has been used for metal-insulator transitions\nin the Hubbard model in many 
studies\\cite{PhysRevLett.75.312,PhysRevLett.83.4610,PhysRevLett.120.116601}.\n\n\\section{Results and discussion}\n\n\\label{sec:results}\n\nAt half filling, due to the particle-hole symmetry, the Hamiltonian is unchanged under the transformation $c_{\\bf{i}\\sigma}^{\\dagger} \\rightarrow (-1)^{\\bf{i}}c_{\\bf {i}\\sigma}$, and\nthe simulation can be performed without a sign problem\\cite{PhysRevLett.87.146401}.\nAway from half filling, the system may have a sign problem; thus, in a doped Hubbard model on a square lattice, the notorious sign problem prevents exact results at lower temperatures, at stronger interactions, or on larger lattices.\nTo ensure the reliability of the data in our simulation,\nwe first present the average sign in Fig.\\ref{Fig:sign},\nshown as a function of electron filling for (a) different temperatures, (b) different interactions, (c) different disorder strengths, and (d) different lattice sizes, with the Monte Carlo averages accumulated over 30,000 iterations.\nThe average sign decays exponentially with both increasing inverse temperature and lattice size\\cite{PhysRevB.55.4149}.\n\nThe average sign is given by the ratio of the integral of the product of the up- and down-spin determinants to the integral of the absolute\nvalue of the product\\cite{PhysRevB.92.045110}:\n\\begin{eqnarray}\n\\label{sign}\n\\langle S \\rangle &=&\n\\frac\n{\\sum_{\\cal X} \\,\\,\n{\\rm det} M_\\uparrow({\\cal X}) \\,\n{\\rm det} M_\\downarrow({\\cal X})\n}\n{\n\\sum_{\\cal X} \\,\\,\n| \\, {\\rm det} M_\\uparrow({\\cal X}) \\,\n{\\rm det} M_\\downarrow({\\cal X}) \\, |\n}\n\\end{eqnarray}\nwhere ${\\cal X}$ denotes the Hubbard-Stratonovich (HS) field configurations over the spatial sites and imaginary-time slices,\nand $M_\\sigma({\\cal X})$ is the fermion matrix for spin species $\\sigma$.\nAs shown in Fig.\\ref{Fig:sign} (a), we evaluate the variation of the sign problem with density for various inverse temperatures.\nThe average sign decreases quickly as the system 
is doped from $n=1.0$ to $n=0.9$.\nThe average sign is small in the intermediate filling region around $n\\approx0.68$, while at $n=1.0$ it equals unity due to the perfect nesting of the Fermi surface.\nPrevious studies demonstrated that when considering disorder at half filling for $U=4.0$, the insulating behavior at low temperatures persists to much larger bond disorder strengths\\cite{PhysRevLett.83.4610}.\nIn light of these studies, a basic question arises:\nOn a square lattice with repulsive interactions, in addition to half filling, how are the transport properties at other carrier concentrations affected by disorder?\nTo answer this question, we use the temperature-dependent dc conductivity $\\sigma_{dc}(T)$ to distinguish between an insulator and a metal.\nFig.\\ref{Fig:UD} shows $\\sigma_{dc}(T)$ measured\non the square lattice across several disorder strengths $\\Delta$ at the densities $n=0.3,0.4$,\nwhere the sign problem has little effect on the results.\nIn the low-temperature regime, the behavior of $\\sigma_{dc}$ shows that a transition from metallic to insulating behavior occurs with increasing disorder.\nFor example, when $L=8$, $n=0.3$, $T\\leqslant0.2$ and $\\Delta=0.0$, the dc conductivity grows as the temperature decreases (i.e., $d\\sigma_{dc}\/dT<0$), which indicates that the system is metallic; the error bars stem from the statistical fluctuation of the disorder sampling.\nConversely, at $\\Delta=4.0$, the dc conductivity falls with decreasing temperature (i.e.,\n$d\\sigma_{dc}\/dT>0$) and approaches zero as $T\\rightarrow0$, which is characteristic of insulating behavior.\nTherefore, it can be deduced from the figure that hopping disorder decreases the dc conductivity.\nThe transition from metallic to insulating behavior clearly occurs at $\\Delta_{c}=1.5\\thicksim2.0$.\nIn the same way, by changing the carrier density $n$, the critical disorder strength\nat $n=0.4$ is found to be about $\\Delta_{c}=2.5$,\nindicating the occurrence of the metal-insulator transition in the presence of disorder at other densities, 
which differs from the half-filling case.\nFig.\\ref{Fig:UD}(c), (d) show the results for $L=12$. Even though the values of the dc conductivity have not saturated at $L=12$, the values of the critical disorder strength are roughly the same for $L=8$ and $L=12$. Our further data in Fig. \\ref{Fig:L} (a) show that the dc conductivity itself tends to converge at $L=20$, although simulations on such lattices require substantial CPU time.\nThrough the shift of the maximum of the dc conductivity, one can also infer that the mobility gap increases as the bond disorder increases.\n\nIn addition, we ascertain that the occurrence of a phase transition results from the bond disorder rather than from the system size being smaller than a localization length.\nFig.\\ref{Fig:L} (a) shows that as the lattice size increases, the dc conductivity converges\nto a finite value under various conditions, although the convergence speed depends on the parameter set (for example, $\\sigma_{dc}$ converges faster in the insulating phase than in the metallic phase, and faster at $n=0.4$ than at $n=0.3$).\n\n\\begin{figure}[t]\n\\centerline {\\includegraphics*[width=3.5in]{Fig3}}\n\\caption{(Color online) Conductivity $\\sigma_{dc}$ as a function of temperature for different disorder strengths for $U=4.0$ on the $L=8,16,20$ lattices at (a) $n=0.3$ and (b) $n=0.4$.\nIn (a), the $\\sigma_{dc}$ curves for $L=20$ and $L=16$ are almost coincident, which indicates that the dc conductivity tends to converge for $L\\geq 20$, although lower-temperature calculations at $\\beta >6$ are constrained by the DQMC simulations.}\n\\label{Fig:L}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centerline {\\includegraphics*[width=3.2in]{Fig4}}\n\\caption{(Color online) Conductivity as a function of the disorder strength for four inverse temperatures $\\beta=6,8,10,12$\nat (a) $U=4, n=0.3$; (b) $U=4, n=0.4$; (c) $U=3, n=0.3$; and (d) $U=2, n=0.4$. 
The intersection determines the critical disorder strength, and the value of the conductivity at the critical disorder strength is approximately 0.30.}\n\\label{Fig:universal}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\centerline {\\includegraphics*[width=3.2in]{Fig5}}\n\\caption{(Color online) Top: conductivity $\\sigma_{dc}$ as a function of temperature at $U=4.0$ for (a) $\\Delta$=1.0 and (b) $\\Delta$=2.0 with different fillings. Bottom: critical disorder strength $\\Delta_{c}$ (c) as a function of $U$ at different $n$ and (d) as a function of $n$ at different $U$.}\n\\label{Fig:U}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centerline {\\includegraphics*[width=3.2in]{Fig6}}\n\\caption{(Color online) Spin susceptibility $\\chi$ as a function of temperature (a) at various interaction strengths $U=0.0,4.0$, disorder strengths $\\Delta=2.0,4.0$ and lattice sizes $L=8,16$ at fixed density $n=0.5$ and (b) at fixed interaction strength $U=4.0$ and disorder strength $\\Delta=4.0$ with different fillings on an $N=8\\times 8$ square lattice.}\n\\label{Fig:X}\n\\end{figure}\n\n\nOn the basis of Fig.\\ref{Fig:UD}, we plot $\\sigma_{dc}$ as a function of the disorder strength in Fig.\\ref{Fig:universal} to determine the critical point and the corresponding value of the dc conductivity accurately.\nThe intersection of the four curves marks the critical point for the metal-insulator transition.\nThe ordinate of this intersection gives the critical dc conductivity (i.e.,\nat $n= 0.3$, $\\sigma_{dc,crit}=0.30$, and at $n= 0.4$, $\\sigma_{dc,crit}=0.30$). Here, the value of the critical dc conductivity is determined to an accuracy of 0.01. 
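Reading off $\Delta_c$ and $\sigma_{dc,crit}$ from such a plot amounts to locating the common crossing of the $\sigma_{dc}(\Delta)$ curves measured at different $\beta$. A minimal sketch of that step (the curves below are synthetic, illustrative data, not the paper's):

```python
import numpy as np

def crossing(x, y1, y2):
    """Locate the crossing of two curves sampled on the same grid by
    linearly interpolating their difference in the bracketing interval."""
    d = np.asarray(y1) - np.asarray(y2)
    i = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0]  # first sign change
    return x[i] + (x[i + 1] - x[i]) * d[i] / (d[i] - d[i + 1])

# Synthetic sigma_dc(Delta) curves that cross at Delta_c = 2.0, sigma = 0.30
delta = np.linspace(1.0, 3.0, 9)
sig_b6  = 0.30 - 0.10 * (delta - 2.0)   # lower beta: flatter curve
sig_b12 = 0.30 - 0.25 * (delta - 2.0)   # higher beta: steeper curve
dc = crossing(delta, sig_b6, sig_b12)   # -> Delta_c = 2.0
```

Evaluating either interpolated curve at the crossing then gives the critical conductivity quoted in the text.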
\nComparing the results for these parameter sets ($U=4.0, n=0.3$ and $U=4.0, n=0.4$) suggests that the system may possess a universal value of the critical dc conductivity.\nTo further support these findings, we present the same plots for different interaction strengths ($U=2.0,3.0$) in Fig.\\ref{Fig:universal} (c) and (d): although the critical disorder strength varies,\nthe critical dc conductivity is still $\\sigma_{dc,crit}=0.30$.\nIn addition, we compute other parameter sets, such as $U=4.0$, $n= 0.5$, $\\Delta_{c}=2.77$, $\\sigma_{dc,crit}=0.30$; $U=3.0$, $n= 0.6$, $\\Delta_{c}=2.70$, $\\sigma_{dc,crit}=0.26$;\n$U=2.0$, $n= 0.5$, $\\Delta_{c}=2.91$, $\\sigma_{dc,crit}=0.29$; and $U=1.0$, $n= 0.6$, $\\Delta_{c}=2.42$, $\\sigma_{dc,crit}=0.32$. The standard deviation of $0.02$ is small enough to ensure the clustering of the dc conductivity values around the mean value,\nwhich confirms the existence of a universal conductivity ($\\sigma_{dc,crit}=0.30\\pm0.01$, where the error $0.01$ is the standard error of the arithmetic mean of the eight listed datasets)\\cite{PhysRevB.84.035121} and its independence of $n$, $U$, and $\\Delta_{c}$. \nThis property has also been realized in the quantum sigma model\\cite{Anissimova2007,Punnoose289}, and discussed in both\ngraphene\\cite{PhysRevLett.98.256801} and the integer quantum Hall effect\\cite{PhysRevLett.95.256805}.\n\n\nTo describe the role of doping in more detail, we investigate the change in $\\sigma_{dc}$\nwith different densities at fixed disorder strength,\nas shown in Fig.\\ref{Fig:U} (a) and (b). 
Increasing the electronic density enhances the dc conductivity: when $\\Delta=2.0$, the system behaves as an insulator at $n=0.3$,\nwhereas at $n=0.4$ and $n=0.5$ it behaves as a metal.\nThus, we deduce that doping can drive the metal-insulator transition.\nWe compile the results for $\\Delta_{c}$ in Fig.\\ref{Fig:U} (c), (d), showing the relationship between the critical disorder strength and the interaction strength $U$ (or density $n$).\nThe critical disorder strength first increases and then decreases as $U$ increases at a fixed density,\nas also reported for the ionic Hubbard model\\cite{PhysRevLett.98.046403,PhysRevB.99.014204}.\nThe Coulomb repulsion enhances metallicity when $U<3.0$, whereas a larger $U$ localizes electrons more effectively and decreases $\\sigma_{dc}$.\nOn the other hand, in our calculation, the effect of density on $\\Delta_{c}$ is non-monotonic:\n$\\Delta_{c}$ increases as the density increases from 0.3 to 0.5, and then decreases as the density increases to 0.6.\nAlthough the sign problem prevents us from calculating larger densities, the current results provide strong support for the conclusion that a doping-dependent metal-insulator transition occurs in the disordered Hubbard model.\n\nThe spin dynamics of electrons are often discussed together with the localization transition, and we examine the temperature dependence of the spin susceptibility through $\\chi=\\beta S(q=0)$, where $S(q=0)$ denotes the ferromagnetic structure factor\\cite{PhysRevB.59.3321}.\nFig.\\ref{Fig:X} (a) shows that the spin susceptibility $\\chi$ increases as the temperature decreases and as $U$ increases (for $U=0.0$ and $U=4.0$),\nmeaning that interaction can enhance the ferromagnetic susceptibility. 
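The estimator $\chi=\beta S(q=0)$ can be sketched directly from sampled spin configurations (illustrative code; in the paper $S(q=0)$ is measured in DQMC, while here synthetic uncorrelated spins are used, for which the Curie law $\chi=\beta/4$ must be recovered):

```python
import numpy as np

def ferro_structure_factor(spins):
    """Ferromagnetic structure factor S(q=0) = (1/N) sum_{ij} <S_i S_j>,
    estimated from sampled configurations; `spins` has shape
    (n_samples, N) with entries S_i^z (toy values +/- 1/2 here)."""
    m = spins.sum(axis=1)                    # total S^z of each sample
    return float((m * m).mean()) / spins.shape[1]

def susceptibility(beta, spins):
    """Uniform spin susceptibility chi = beta * S(q=0)."""
    return beta * ferro_structure_factor(spins)

# Sanity check of the estimator: free, uncorrelated spins-1/2 obey the
# Curie law S(q=0) = 1/4, i.e. chi = beta/4.
rng = np.random.default_rng(0)
free_spins = rng.choice([-0.5, 0.5], size=(20000, 64))
chi_free = susceptibility(4.0, free_spins)   # expect about beta/4 = 1.0
```

Interactions enhance $\chi$ precisely by building up the $\langle S_i S_j\rangle$ correlations that this estimator sums over.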
Additionally, the spin susceptibility\ndiverges as $T\\rightarrow0$, implying that magnetic order exists in both the metallic ($\\Delta=2.0$) and insulating ($\\Delta=4.0$) phases.\nThe ferromagnetic susceptibility is reduced with increasing disorder in the presence of interaction and hopping disorder, in accord with the Stoner criterion for ferromagnetism, $UN(E_{F})>1$, where\n$N(E_{F})$ represents the density of states at the Fermi level.\nThe Stoner criterion thus suggests that ferromagnetism is suppressed by increasing disorder due to the reduction of the spectral density at the Fermi level\\cite{PhysRevB.84.155123}.\nComparing the results for $L=8$ with those for $L=16$, the spin susceptibility is little affected by size effects.\nAdditionally, we find that the density plays a positive role in the ferromagnetic susceptibility, as shown in Fig.\\ref{Fig:X} (b).\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nIn summary, we have studied a disordered Hubbard model on a square lattice away from half filling by using the determinant quantum Monte Carlo method. 
We find that the sign problem emerges away from half filling, behaving non-monotonically as the density varies, and that adding hopping disorder can reduce the sign problem.\nUnlike at half filling, the system becomes metallic at finite $U$, and the metal-insulator transition is affected by disorder.\nAlthough the critical disorder strength varies non-monotonically with the electron density and repulsion,\nthe critical dc conductivity is independent of the parameter set, similar to the site-disorder case\\cite{PhysRevB.84.035121}.\nThe behavior of the spin susceptibility suggests that over a range of densities, the insulating phase is accompanied by local moments.\nThe ferromagnetic susceptibility tends to decrease with increasing bond disorder strength, in line with the Stoner criterion.\n\nAt fixed disorder, we also demonstrate that the carrier density $n$ can be used as a tuning parameter for the occurrence of the phase transition, which can be explained as follows:\nvarying the intensity of disorder $\\Delta$ at a fixed density $n$ can be regarded as adjusting the mobility boundary relative to the Fermi energy, and is similar to varying the carrier concentration $n$ at a fixed disorder strength $\\Delta$, which can be thought of as a shift in the Fermi energy\\cite{PhysRevLett.83.4610}.\n\n\\begin{acknowledgments}\nThis work is supported by NSFC (Nos. 11974049 and 11774033) and Beijing Natural Science Foundation (No. 1192011). The numerical simulations were performed at the HSCC of Beijing Normal University and on the Tianhe-2JK at the Beijing Computational Science Research Center.\n\\end{acknowledgments}\n\n\\begin{figure}[htb]\n\\centerline {\\includegraphics*[width=3in]{Fig7}}\n\\caption{(Color online) (a) Conductivity $\\sigma_{dc}$ as a function of the number of disorder realizations at $L=8, U=4.0$, and $\\Delta=2.0$. The error bars are derived from the DQMC simulations. 
(b) The corresponding variance of the data in the inset.\nInset: the averaged dc conductivity as a function of the number of groups, where N represents the number of disorder realizations in a group.}\n\\label{Fig:number}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{Intro}\n\nRichard Stanley has been a pioneer in modern combinatorics, and a key figure in the development of both enumerative combinatorics and algebraic combinatorics. \n\nIn enumerative combinatorics, two crucial building blocks are \\textbf{(a)} the generating series for a set of combinatorial objects, and \\textbf{(b)}~the relationship between algebraic operations on types of generating series and combinatorial operations on the set. The enumerative significance of a generating series is, of course, that the normalized coefficient of each of its monomials counts the objects in the set of combinatorial objects indexed by the monomial. Stanley contributed early to these important building blocks in his paper with Doubilet and Rota~\\cite{drs} -- part of Gian-Carlo Rota's seminal series \\emph{``On the foundations of combinatorial theory''}. Further early work appeared in the paper~\\cite{s1}. \n\nThe power of algebraic combinatorics often seems to depend on the efficacy of the analogue relationship between algebra and the combinatorics, in which methods from one may assist in solving questions raised in the other. Stanley has been particularly attracted by combinatorics that has made an impact in other branches of mathematics, and was himself an early developer of many of these analogue relationships.\n\nOur own work has been inspired by these early developments. 
In an essential way, they have influenced our work in enumerative combinatorics, both together and separately, which has, in its turn, contributed to the further study of the connection between combinatorial structure and algebraic structure, and its application to other parts of mathematics and the mathematical sciences. \n\n\\section{Transitive factorizations of permutations} \n\nIn this article, we describe our longtime work on transitive factorizations of permutations. The themes that it illustrates include:\n\\begin{itemize}\n\\item the fundamental underlying combinatorial problem is very simple to state;\n\\item the contexts in which instances of this combinatorial problem arise are diverse within mathematics and mathematical physics;\n\\item the interplay between algebra and combinatorics is exhibited in both directions, with methods from other parts of mathematics applied to combinatorial problems, as well as combinatorial methods applied within other parts of mathematics;\n\\item work in this area continues to be the subject of intense research activity both in algebraic combinatorics and in other parts of mathematics;\n\\item Stanley's work has made an important contribution in a number of places.\n\\end{itemize}\n\nWe now describe the fundamental factorization problem that we consider in this article, with two variations. The following notation will be used: $\\mathfrak{S}_n$ is the symmetric group acting on the symbols $\\{1,\\ldots,n\\}$; we write $\\alpha\\vdash n$, or equivalently $|\\alpha |=n$, to indicate that $\\alpha$ is a partition of $n$; the number of parts in $\\alpha$ is denoted by $l(\\alpha )$; $\\mathcal{C}_\\alpha$ is used to denote the conjugacy class of $\\mathfrak{S}_n$ with natural index $\\alpha$. 
If $m_i$ is the number of parts of $\\alpha$ equal to $i$, $i\\geq 1$, then $|\\mathrm{Aut}\\, \\alpha |=\\prod_{i\\geq 1} m_i!$.\n\n\\begin{problem}[{\\bf The Permutation Factorization Problem}]\\label{pfp}\nFor fixed partitions $\\alpha, \\beta_1, \\ldots, \\beta_m$ of $n$, find the number of permutations $\\rho\\in\\mathcal{C}_{\\alpha}$ and $\\pi_i\\in\\mathcal{C}_{\\beta_i}$, $i=1,\\ldots ,m$, such that\n\\begin{equation}\\label{permfactn}\n\\pi_1 \\pi_2 \\cdots \\pi_m = \\rho.\n\\end{equation}\nWe shall call $(\\pi_1,\\ldots, \\pi_m)$ a \\emph{factorization} of $\\rho$.\n\\end{problem}\n\n\\begin{problem}[{\\bf The Transitive Permutation Factorization Problem}]\\label{tpfp}\nFor fixed partitions $\\alpha, \\beta_1, \\ldots, \\beta_m$ of $n$, find the number of permutations $\\rho\\in\\mathcal{C}_{\\alpha}$ and $\\pi_i\\in\\mathcal{C}_{\\beta_i}$, $i=1,\\ldots ,m$, that satisfy equation~(\\ref{permfactn}), and such that $\\langle\\pi_1,\\ldots ,\\pi_m\\rangle$, the group generated by the factors $\\pi_1,\\ldots, \\pi_m$, acts transitively on the underlying symbols $\\{ 1,\\ldots ,n\\}$.\nIn this case we shall call $(\\pi_1,\\ldots, \\pi_m)$ a \\emph{transitive factorization} of $\\rho$.\n\\end{problem}\n\nThere were a number of papers in the combinatorics literature on permutation factorization problems in the 70's by various authors. These relied on elementary methods only; see, for example, Walkup~\\cite{wa}. In his 1981 paper~\\cite{s2}, Stanley applied the powerful mathematical methodology of symmetric group characters to solve the problem in the case in which all factors are $n$-cycles (in the conjugacy class $\\mathcal{C}_{(n)}$). As part of this, he was able to prove a conjecture from~\\cite{wa}.\n\nA convenient way of describing the method of symmetric group characters is to work in the centre of the group algebra of $\\mathfrak{S}_{n}$. 
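Before turning to the character-theoretic machinery, note that both problems can be settled by exhaustive search for small $n$, which is a useful sanity check on the formulas that follow. A plain-Python sketch (illustrative, not from the article; permutations are tuples acting on $\{0,\dots,n-1\}$):

```python
from itertools import permutations, product

def compose(p, q):                      # (p*q)(i) = p(q(i)): apply q, then p
    return tuple(p[i] for i in q)

def cycle_type(p):
    seen, parts = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; length += 1
            parts.append(length)
    return tuple(sorted(parts, reverse=True))

def is_transitive(factors, n):
    """Orbit of 0 under the group generated by the factors."""
    orbit, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for p in factors:
            for j in (p[i], p.index(i)):     # image under p and p^{-1}
                if j not in orbit:
                    orbit.add(j); frontier.append(j)
    return len(orbit) == n

def count_factorizations(n, alpha, betas, transitive=False):
    """Number of tuples (pi_1, ..., pi_m) with pi_i in C_{beta_i} whose
    product lies in C_alpha (Problem 2.1); transitive=True imposes the
    extra condition of Problem 2.2."""
    perms = list(permutations(range(n)))
    classes = {b: [p for p in perms if cycle_type(p) == b] for b in set(betas)}
    count = 0
    for factors in product(*(classes[b] for b in betas)):
        prod = factors[0]
        for f in factors[1:]:
            prod = compose(prod, f)
        if cycle_type(prod) == alpha and (not transitive or is_transitive(factors, n)):
            count += 1
    return count
```

For instance, a fixed $n$-cycle has $n^{n-2}$ factorizations into $n-1$ transpositions (Dénes' classical formula), which the sketch reproduces for $n=3,4$ after dividing out $|\mathcal{C}_{(n)}|$.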
One basis of the centre is the set \n$\\{\\mathsf{K}_{\\theta} = \\sum_{\\sigma\\in\\mathcal{C}_{\\theta}} \\sigma\\; \\colon \\theta\\vdash n\\}$ of classes, and another is the set $\\{\\mathsf{F}_\\alpha \\colon \\alpha\\vdash n\\}$ of orthogonal idempotents. These bases are related by the linear relations\n\\begin{equation}\\label{classidem}\n\\mathsf{F}_\\alpha = \\frac{\\chi^{\\alpha}(1^n)}{n!} \\sum_{\\theta\\vdash n} \\chi^{\\alpha}(\\theta) \\mathsf{K}_{\\theta},\\qquad\\qquad \\mathsf{K}_{\\theta} = |\\mathcal{C}_{\\theta}| \\sum_{\\alpha\\vdash n} \\frac{\\chi^\\alpha(\\theta)}{\\chi^{\\alpha}(1^n)} \\mathsf{F}_{\\alpha},\n\\end{equation}\nwhere $\\chi^\\alpha(\\theta)$ is the character $\\chi^\\alpha$ of the (ordinary) irreducible representation of $\\mathfrak{S}_{n}$ indexed by $\\alpha$, and evaluated on the class $\\mathcal{C}_\\theta$.\n\nEncoded in this way, the answer to the Permutation Factorization Problem is given by\n\\begin{equation}\\label{classsoln}\n|\\mathcal{C}_{\\alpha}|\\cdot\\Big( [\\mathsf{K}_{\\alpha}]\\mathsf{K}_{\\beta_1}\\cdots\\mathsf{K}_{\\beta_m}\\Big) ,\n\\end{equation}\nwhere the notation $[X]Y$ denotes the coefficient of $X$ in the expansion of $Y$. The factor $|\\mathcal{C}_{\\alpha}|$ appears in~(\\ref{classsoln}) since each element of the class $\\mathcal{C}_{\\alpha}$ is created in the product with the same frequency; the factor $|\\mathcal{C}_{\\alpha}|$ would be removed if in~(\\ref{permfactn}) we were considering permutation factorizations of a fixed and arbitrary element $\\rho$ of the class $\\mathcal{C}_{\\alpha}$. Of course, to apply~(\\ref{classsoln}), one simply applies~(\\ref{classidem}), and uses the fact that $\\mathsf{F}_{\\alpha}\\cdot\\mathsf{F}_{\\beta} = \\mathsf{F}_{\\alpha}$ if $\\alpha =\\beta$, and $\\mathsf{F}_{\\alpha}\\cdot\\mathsf{F}_{\\beta} = 0$ otherwise. Thus, one has changed bases to one in which multiplication is ``trivial'', before changing back to the basis of conjugacy classes. 
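These relations are easy to verify directly for small $n$. The sketch below (illustrative; exact arithmetic via `fractions`) builds the $\mathsf{F}_\alpha$ for $\mathfrak{S}_3$ from its ordinary character table and checks both the orthogonal-idempotent relations and the expansion of $\mathsf{K}_\theta$ in the $\mathsf{F}_\alpha$ basis:

```python
from itertools import permutations
from fractions import Fraction

def compose(p, q):                      # apply q first, then p
    return tuple(p[i] for i in q)

def cycle_type(p):
    seen, parts = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; c += 1
            parts.append(c)
    return tuple(sorted(parts, reverse=True))

S3 = list(permutations(range(3)))

# Ordinary character table of S_3: chi[alpha][theta] = chi^alpha on C_theta
chi = {(3,):      {(1, 1, 1): 1, (2, 1): 1,  (3,): 1},    # trivial
       (2, 1):    {(1, 1, 1): 2, (2, 1): 0,  (3,): -1},   # standard
       (1, 1, 1): {(1, 1, 1): 1, (2, 1): -1, (3,): 1}}    # sign

def F(alpha):
    """F_alpha = (chi^alpha(1^n)/n!) sum_theta chi^alpha(theta) K_theta."""
    dim = chi[alpha][(1, 1, 1)]
    return {s: Fraction(dim, 6) * chi[alpha][cycle_type(s)] for s in S3}

def K(theta):
    """Class sum K_theta as an element of the group algebra."""
    return {s: Fraction(1 if cycle_type(s) == theta else 0) for s in S3}

def conv(A, B):
    """Product in the group algebra: (A.B)[ab] += A[a] * B[b]."""
    C = {s: Fraction(0) for s in S3}
    for a, ca in A.items():
        for b, cb in B.items():
            C[compose(a, b)] += ca * cb
    return C
```

With these in hand, `conv(F(a), F(b))` is `F(a)` when `a == b` and identically zero otherwise, which is exactly why multiplication becomes ``trivial'' in the idempotent basis.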
In general, the resulting expression is a sum over partitions of $n$ involving arbitrary characters of the symmetric group. Such summations are generally regarded as intractable, but significant simplification occurs in the case considered by Stanley~\\cite{s2}, where all factors are $n$-cycles (so $\\beta_i=(n)$ for $i=1,\\ldots ,m$).\nIn this case, the characters have explicit evaluations, almost always equal to $0$.\n\nSince the group generated by any single $n$-cycle acts transitively on $\\{1,\\ldots,n\\}$, the factorizations that Stanley considered in that paper were in fact transitive, though this condition does not arise here for any particularly ``natural'' mathematical reason. The remainder of the paper deals with applications of transitive permutation factorization in which the transitivity condition is quite natural, and which involve factors in arbitrary conjugacy classes, not simply $n$-cycles.\n\n\\section{Maps in orientable surfaces}\\label{maporient} \n\nA \\emph{rooted map} is a graph embedded in a surface so that all faces are two-cells (homeomorphic to a disc). In the case of orientable surfaces, one vertex is distinguished, called the root vertex, and one edge incident with the root vertex is distinguished, called the root edge. In order to construct permutation factorizations corresponding to a rooted map in an orientable surface with $n$ edges, assign labels to the two ends of the edges with the integers $1,\\ldots ,2n$ subject only to the restriction that the end of the root edge incident with the root vertex is assigned the label $1$. We call the resulting object a \\emph{decorated rooted map}, and of course, there are $(2n-1)!$ decorated rooted maps corresponding to every rooted map with $n$ edges. An example with $9$ edges using the standard polygonal representation of the torus is given in Figure~\\ref{otblefig}. 
\n \n\\begin{figure}\n\\scalebox{.4}{\\includegraphics{otblemapGJ.pdf}}\n\\caption{A decorated rooted map embedded in the torus}\\label{otblefig}\n\\end{figure}\n \n \\begin{construction} \\label{mapperms}\n(see, \\textit{e.g.}, Tutte~\\cite{t} for full details) Given a decorated rooted map with $n$ edges, construct three permutations $\\nu ,\\varepsilon ,\\phi $ in $\\mathfrak{S}_{2n}$ as follows:\n\n\\begin{itemize}\n\\item\nthe disjoint cycles of $\\nu$, the \\emph{vertex permutation}, are the clockwise circular lists of end labels of edges incident with each vertex;\n\n\\item\nthe disjoint cycles of $\\varepsilon$, the \\emph{edge permutation}, are the pairs of labels on the two ends of each edge;\n\n\\item\nthe disjoint cycles of $\\phi$, the \\emph{face permutation}, are the counterclockwise circular lists of the second label on each edge encountered when traversing the interior of the faces. \n \n\\end{itemize}\n\\end{construction}\n\nAs an example of Construction~\\ref{mapperms}, the three permutations that we construct from the decorated rooted map given in Figure~\\ref{otblefig} are:\n\\begin{align*}\n\\nu &= (1\\, 8\\, 5\\, 15)(2\\, 12\\, 10\\, 14\\, 16\\, 11)(3\\, 18\\, 17\\, 7)(4\\, 9\\, 13)(6),\\\\\n\\varepsilon &= (1\\, 14)(2\\, 17)(3\\, 7)(4\\, 10)(5\\, 13)(6\\, 8)(9\\, 18)(11\\, 15)(12\\, 16),\\\\\n\\phi &= (1\\, 6\\, 8\\, 13\\, 10)(2\\, 16\\, 15\\, 14\\, 12\\, 4\\, 18)(3\\, 9\\, 5\\, 11\\, 17)(7).\n\\end{align*}\n\nFrom the description in Construction~\\ref{mapperms}, it is clear that in general, as in the above example,\n\\begin{itemize}\n\n\\item\nthe lengths of the cycles of $\\nu$ specify the vertex-degrees of the underlying rooted map,\n\n\\item\nall cycles of $\\varepsilon$ have length $2$,\n\n\\item\nthe lengths of the cycles of $\\phi$ specify the face-degrees of the underlying map.\n\n\\end{itemize}\n\n\\noindent\nMoreover, by construction we have $\\varepsilon\\nu = \\phi$, and the fact that $\\langle \\varepsilon , \\nu\\rangle$ acts 
transitively on the symbols $1,\\ldots ,2n$ follows immediately from the fact that the embedded graph is connected. Finally, the genus of the embedding surface can be obtained from $\\nu ,\\varepsilon ,\\phi$ by Euler's formula.\n\nConsequently, the enumeration of rooted maps embedded in orientable surfaces is, up to scaling, a special case of the Transitive Permutation Factorization Problem (Problem~\\ref{tpfp}), in which there are precisely two factors. When we solve this enumerative question in terms of group characters by means of~(\\ref{classsoln}), and form the generating series, we find that symmetric functions are introduced in a natural way because the linear relations~(\\ref{classidem}) are scale equivalent to the linear relations\n \\begin{equation*}\ns_{\\alpha}= \\sum_{\\theta\\vdash n}\\frac{|\\mathcal{C}_{\\theta}|}{n!} \\chi^{\\alpha}(\\theta) p_{\\theta}, \\qquad\\qquad p_{\\theta} =\\sum_{\\alpha\\vdash n} \\chi^{\\alpha}(\\theta) s_{\\alpha},\n\\end{equation*}\nbetween the Schur functions $s_{\\alpha}$ and power sums $p_{\\theta}$. 
Thus, for $\\lambda$, $\\mu$ partitions of $2n$, if $m^{\\lambda}_{\\mu}$ is the number of rooted maps in orientable surfaces with $n$ edges, vertex degrees given by the parts of $\\lambda$, and face degrees given by the parts of $\\mu$, then we obtain\n\\begin{equation*}\\label{coefforient}\nm^{\\lambda}_{\\mu} = [p_{\\lambda}(\\mathbf{x})p_{\\mu}(\\mathbf{y})p_{(2^n)}(\\mathbf{z})t^{2n}] H_O\\left( p(\\mathbf{x}), p(\\mathbf{y}), p(\\mathbf{z}), t \\right) ,\n\\end{equation*}\nwhere\n\\begin{equation}\\label{gseriesorient}\nH_O\\left( p(\\mathbf{x}), p(\\mathbf{y}), p(\\mathbf{z}), t \\right) = \nt\\frac{\\partial}{\\partial t} \\log \\left( \\sum_{\\theta\\in\\mathcal{P}} \n\\frac{|\\theta|!}{\\chi^\\theta (1^{|\\theta|})} \\,\ns_\\theta(\\mathbf{x}) s_\\theta(\\mathbf{y}) s_\\theta(\\mathbf{z}) \\, t^{|\\theta|} \\right),\n\\end{equation}\nand $\\mathcal{P}$ is the set of all (integer) partitions, $p(\\mathbf{x}) := (p_1(\\mathbf{x}), p_2(\\mathbf{x}), \\ldots)$, and $p_k(\\mathbf{x})$ is the degree $k$ power sum symmetric function in the indeterminates $\\mathbf{x} =(x_1, x_2, \\ldots)$. 
Full details of this were developed with Visentin in \\cite{jv1, jv2}, so we make only a few technical remarks here: \\textbf{(a)} the generating series $H_O$ is actually an \\emph{exponential} generating series in the indeterminate $t$, but for the number of \\emph{decorated} rooted maps, \\textbf{(b)} the ``$\\log$'' appears in~(\\ref{gseriesorient}) to restrict to the connected objects in the usual way for exponential generating series, \\textbf{(c)} the effect of $t\\partial\/\\partial t$ in~(\\ref{gseriesorient}) is to multiply the coefficient of $t^{2n}$ by $2n$, thus adjusting the exponential monomial $\\frac{t^{2n}}{(2n)!}$ to $\\frac{t^{2n}}{(2n-1)!}$; this division by $(2n-1)!$ is the correct scaling between decorated rooted maps and rooted maps, \\textbf{(d)} the coefficient of arbitrary monomials $p_{\\tau}(\\mathbf{z} )$ in $H_O$ also has combinatorial meaning; it accounts for rooted \\emph{hypermaps}.\n\n\\section{Maps in surfaces and Jack symmetric functions}\n\nThis enumerative approach to rooted maps was extended in~\\cite{gj4} from orientable surfaces to all surfaces (including non-orientable surfaces). For all surfaces, the class algebra of the symmetric group (products of conjugacy classes) was replaced by the Hecke algebra associated with the hyperoctahedral group (products of double cosets of the symmetric group, multiplied by the hyperoctahedral subgroup on both sides). Stanley's paper with Hanlon and Stembridge~\\cite{hss} was an essential source, describing completely the character theory of this algebra, and the relationship with symmetric functions, in this case the zonal polynomials $Z_{\\theta}$.\nFor the generating series, again with $\\lambda$, $\\mu$ partitions of $2n$, let $\\ell^{\\lambda}_{\\mu}$ be the number of rooted maps in locally orientable surfaces with $n$ edges, vertex degrees given by the parts of $\\lambda$, and face degrees given by the parts of $\\mu$. 
Then we obtain\n\\begin{equation*}\\label{coefflocorient}\n{\\ell}^{\\lambda}_{\\mu} = [p_{\\lambda}(\\mathbf{x})p_{\\mu}(\\mathbf{y})p_{(2^n)}(\\mathbf{z})t^{2n}] H\\left( p(\\mathbf{x}), p(\\mathbf{y}), p(\\mathbf{z}), t \\right) ,\n\\end{equation*}\nwhere\n\\begin{equation*}\\label{gserieslocorient}\nH\\left( p(\\mathbf{x}), p(\\mathbf{y}), p(\\mathbf{z}), t \\right) = \n2t\\frac{\\partial}{\\partial t} \\log \\left( \\sum_{\\theta\\in\\mathcal{P}} \n\\frac{\\chi^{2\\theta}(1^{|2\\theta|})}{|2\\theta|!} \\,\nZ_\\theta(\\mathbf{x}) Z_\\theta(\\mathbf{y}) Z_\\theta(\\mathbf{z}) \\, t^{|\\theta|} \\right) ,\n\\end{equation*}\nand $2\\theta:=(2\\theta_1, 2\\theta_2,\\ldots)$ for $\\theta=(\\theta_1, \\theta_2, \\ldots)$. Again, the coefficient of arbitrary monomials $p_{\\tau}(\\mathbf{z} )$ in $H$ accounts for rooted \\emph{hypermaps}.\n\nBut there is more that we can say. We showed in~\\cite{gj3} that $H_O$ and $H$ have a common generalization as the cases $\\alpha =1$ and $\\alpha=2$, respectively, of\n\\begin{equation}\\label{Jacksum}\n\\Psi(\\mathbf{x}, \\mathbf{y}, \\mathbf{z}; t,\\alpha )\n:= \\alpha \\, t \\frac{\\partial}{\\partial t} \\log \\sum_{\\theta\\in\\mathcal{P}} \\frac{1}{\\Jnorm{\\theta}{\\alpha }}\nJ_\\theta(\\mathbf{x}; \\alpha ) \\, J_\\theta(\\mathbf{y}; \\alpha ) \\, J_\\theta(\\mathbf{z}; \\alpha )\\, t^{|\\theta|} ,\n\\end{equation}\nwhere $J_\\theta(\\mathbf{x}; \\alpha )$ is the Jack symmetric function with parameter $\\alpha$, and $\\Jnorm{\\theta}{\\alpha} = \\langle J_\\theta, J_\\theta\\rangle_{\\alpha}$ is its squared norm with respect to the standard $\\alpha$-deformation $\\langle\\cdot,\\cdot\\rangle_{\\alpha}$ of the inner product on the ring of symmetric functions. \nIn this work, our source for the necessary results on Jack symmetric functions was again a paper of Stanley, in this case~\\cite{s3}. 
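For orientation, we note how~(\\ref{Jacksum}) specializes at $\\alpha =1$: by the results of~\\cite{s3}, $J_\\theta(\\mathbf{x};1) = H_\\theta \\, s_\\theta(\\mathbf{x})$ and $\\Jnorm{\\theta}{1} = H_\\theta^2$, where $H_\\theta$ denotes the product of the hook lengths of $\\theta$. Each summand in~(\\ref{Jacksum}) therefore becomes\n\\begin{equation*}\n\\frac{H_\\theta^3}{H_\\theta^2} \\, s_\\theta(\\mathbf{x}) s_\\theta(\\mathbf{y}) s_\\theta(\\mathbf{z}) \\, t^{|\\theta|}\n= \\frac{|\\theta |!}{\\chi^{\\theta}(1^{|\\theta |})} \\, s_\\theta(\\mathbf{x}) s_\\theta(\\mathbf{y}) s_\\theta(\\mathbf{z}) \\, t^{|\\theta|} ,\n\\end{equation*}\nsince $\\chi^{\\theta}(1^{|\\theta |}) = |\\theta |!\/H_\\theta$ by the hook length formula, recovering the summand of~(\\ref{gseriesorient}).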
\nFollowing extensive computer algebra computations with the generating series $\\Psi$, we conjectured the following: \n\\begin{conjecture}[{\\bf The $b$-conjecture}]\\label{bconjecture}\nThe series $\\Psi(\\mathbf{x}, \\mathbf{y}, \\mathbf{z}; t,1+b)$ has coefficients that are \\emph{polynomial} in $b$ with \\emph{non-negative integer coefficients}. In this polynomial, the constant term, obtained with $b=0$ (so $\\alpha =1$), accounts for rooted hypermaps embedded in orientable surfaces, and the sum of all terms, obtained with $b=1$ (so $\\alpha =2$), accounts for rooted hypermaps embedded in all surfaces. Accordingly, the indeterminate $b$ marks a \\emph{statistic of nonorientability} associated with rooted hypermaps.\n\\end{conjecture}\n\nThe $b$-conjecture has not yet been resolved, but some progress towards determining a suitable statistic of nonorientability has been made, and work is ongoing. Recent progress with La Croix~\\cite{JlaC} has involved providing combinatorial interpretations, in terms of maps and hypermaps (or of transformations of the series in terms of polynomial glueings), for sums of coefficients rather than for individual coefficients. Despite success with these marginal sums, a complete understanding of the $b$-conjecture for maps and hypermaps continues to elude us.\n \nThere is a closely related $b$-conjecture for \\emph{matchings} that we conjectured in \\cite{gj3}. Progress on the matching version has appeared in~Do{\\l}ega and F\\'{e}ray~\\cite{df} and Do{\\l}ega, F\\'{e}ray and \\'{S}niady~\\cite{dfs}.\n\n\\section{Maps, matrix integrals and virtual Euler characteristic}\n\nStanley's paper with Hanlon and Stembridge~\\cite{hss} also contained matrix integral results associated with symmetric functions. Especially due to the influence of mathematical physics, such results are important in algebraic combinatorics, and this continues to be an area of active research interest. 
The generating series for maps, whose symmetric function expressions have been discussed in the previous two sections, also have matrix integral forms, and we present a brief discussion of these in this section. \n\nFor $\\mathbf{i}=(i_1,i_2,\\ldots )$, let $m_O(\\mathbf{i},j,n)$ denote the number of rooted maps in orientable surfaces with $i_k$ vertices of degree $k$, $k\\ge 1$, $j$ faces, and $n$ edges, and let $m(\\mathbf{i},j,n)$ denote the corresponding number in all surfaces. Define the generating series\n$$\nM_O(\\mathbf{y},x,z) = \\sum_{\\mathbf{i}, j, n} m_O(\\mathbf{i},j,n) \\, \\mathbf{y}^\\mathbf{i} x^j z^n\n\\quad\\mbox{and}\\quad\nM(\\mathbf{y},x,z) = \\sum_{\\mathbf{i}, j, n} m(\\mathbf{i},j,n) \\, \\mathbf{y}^\\mathbf{i} x^j z^n\n$$\nwhere $\\mathbf{y}^\\mathbf{i} := \\prod_{k\\ge1} y_k^{i_k}$. A matrix integral, over Hermitian complex matrices, was given in \\cite{j2} for $M_O$. Using a different argument, a matrix integral over real symmetric matrices was given\nin \\cite{gj65} for $M$. 
These integrals can be transformed by the Weyl integration theorems\n(the diagonalizing groups are the unitary group and the orthogonal group, respectively; the measure may be factored into Haar measure for the manifold of the groups, and an $\\mathbb{R}^N$ integral over the spectra $\\lambda$), and with Harer~\\cite{ghj} we obtained a common generalization $M(\\mathbf{y},N,z; \\alpha )$ of the diagonalized integrals, where $M(\\mathbf{y},x,z; 1 )= M_O(\\mathbf{y},x,z)$ and $M(\\mathbf{y},x,z; 2 )= M(\\mathbf{y},x,z)$, with\n\\begin{equation}\\label{RNalphaintegral}\nM(\\mathbf{y},N,z; \\alpha ) \n=2\\alpha z\\frac{\\partial}{\\partial z}\\log \n\\left(\n\\frac\n{\\int_{\\mathbb{R}^N} \\left|V(\\lambda)\\right|^{\\frac{2}{\\alpha}}e^{\\sum_{k\\ge1} \\frac{1}{k} y_k \\sqrt{z}^k p_k(\\lambda)} \n\\cdot e^{- \\frac{1}{2\\alpha} p_2(\\lambda)} d\\lambda}\n{\\int_{\\mathbb{R}^N} \\left|V(\\lambda)\\right|^{\\frac{2}{\\alpha}} e^{- \\frac{1}{2\\alpha} p_2(\\lambda)} d\\lambda}\n\\right) ,\n\\end{equation}\nand $V$ is the Vandermonde determinant. For combinatorial reasons, the coefficients of $z^n$ are polynomials in $N$; we may formally replace $N$ by $x$ to obtain $M(\\mathbf{y},x,z;\\alpha )$ from $M(\\mathbf{y},N,z;\\alpha )$.\n\nNote that the parameter $\\alpha$ in~(\\ref{RNalphaintegral}) specializes in the same way as the Jack parameter in~(\\ref{Jacksum}) or Conjecture~\\ref{bconjecture}, but we do not have a matrix integral (undiagonalized) that involves the parameter $\\alpha$.\n\nAn immediate application in~\\cite{ghj} was to obtain $\\chi(\\mathcal{M}^s_g(\\tau))$, the virtual Euler characteristic of the moduli spaces of real curves of genus $g$, with $s$ marked points and a fixed topological type of orientation reversing involution $\\tau$. 
Harer and Zagier~\\cite{hz} had earlier obtained $\\chi(\\mathcal{M}^s_g)$, the virtual Euler characteristic for the case of complex curves, using the fact that it can be obtained from a sum over rooted \\emph{monopoles} -- maps with a single face. A further application in \\cite{ghj}, of the common generalization~(\\ref{RNalphaintegral}), was to determine a common generalization $\\xi^s_g(\\alpha )$ of these virtual Euler characteristics, which gave the complex case when $\\alpha =1$ and the real case when $\\alpha =2$. Comparing this with the $b$-conjecture (Conjecture~\\ref{bconjecture}) suggests that the coefficients of $b$ in the polynomial $\\xi^s_g(1+b)$ have a geometric interpretation in the context of the moduli spaces of curves, but this has not been resolved to date.\n\nThere are applications of $\\xi^s_g(\\alpha )$ to string theory. For example, the expressions for the virtual Euler characteristics for real and complex curves confirm the case $g=1$ determined by Ooguri and Vafa~\\cite{ov} associated with the $SO(N)$ and $Sp(N)$ gauge groups.\n\nWe conclude this section with the following observation. By comparing the symmetric function and matrix integral expressions for the map generating series, we conjectured in \\cite{gj65} that\n$$\n\\left\\langle J_\\theta(\\lambda;\\alpha)\\right\\rangle_{\\mathbb{R}^N} = J_\\theta(1_N;\\alpha) \\cdot [p_2^m]\\,J_\\theta, \\mbox{where}\\; \\left\\langle f(\\lambda)\\right\\rangle_{\\mathbb{R}^N} := \\frac {\\int_{\\mathbb{R}^N} |V(\\lambda)|^{\\frac{2}{\\alpha}} e^{-\\frac{1}{2\\alpha} p_2(\\lambda)} f(\\lambda) d\\lambda} {\\int_{\\mathbb{R}^N} |V(\\lambda)|^{\\frac{2}{\\alpha}} e^{-\\frac{1}{2\\alpha} p_2(\\lambda)} d\\lambda},\n$$\nand $\\theta\\vdash 2m$. 
This conjecture was subsequently proved by Okounkov~\\cite{ok}.\n\n\\section{Branched covers of the sphere and Hurwitz numbers} \\label{S:EncTranFact}\n\nIn Section~\\ref{maporient}, we showed that the special case of the Transitive Factorization Problem (Problem~\\ref{tpfp}) with two factors has a geometric interpretation in terms of rooted maps, or embedded graphs, in orientable surfaces. In that case the genus of the embedding surface could be determined from the factors by Euler's polyhedral formula. In this section, we consider a second geometric interpretation of the Transitive Factorization Problem, in this case in terms of \\emph{branched covers} from algebraic geometry. \n\nConsider branched covers of the sphere by an $n$-sheeted Riemann surface of genus $g$. Suppose that the branch points are $P_0,P_1,\\ldots, P_m$, with branching at $P_i$ specified by permutation $\\pi_i\\in\\mathfrak{S}_n$, for $i=0,1,\\ldots ,m$, where $\\pi_0\\in\\mathcal{C}_{\\alpha}$ and $\\pi_i\\in\\mathcal{C}_{\\beta_i}$, $i=1,\\ldots ,m$. (This means that if one walks in a small neighbourhood, counterclockwise, around $P_i$, starting at sheet $j$, then one ends at sheet $\\pi_i(j)$.) Hurwitz~\\cite{h} proved that, up to homeomorphism, each such tuple $(\\pi_0,\\pi_1,\\ldots ,\\pi_m)$ determines a unique branched cover precisely when, in the language of Problem~\\ref{tpfp}, $(\\pi_1,\\ldots ,\\pi_m)$ is a transitive factorization of $\\rho =\\pi_0^{-1}$. 
Note the following points:\n\n\\begin{itemize}\n\n\\item\nthe fact that the permutations form a factorization is a \\emph{monodromy} condition on the sheets;\n\n\\item\nthe transitivity condition on the factorization means that the cover is connected;\n\n\\item\nwe say that the \\emph{branching type} of $P_0$ is $\\alpha$, and of $P_i$ is $\\beta_i$, $i=1,\\ldots ,m$;\n\n\\item\nthe genus $g$ of the surface is obtained from the branching types of the permutations by the \\emph{Riemann-Hurwitz} formula, which gives\n\\begin{equation}\\label{RieHur}\n\\sum_{i=1}^m \\left( n-l(\\beta_i)\\right) = n+l(\\alpha)+2g-2;\n\\end{equation}\n\n\\item\nwhen all factors are transpositions, the number of factors in such a transitive factorization is, from~(\\ref{RieHur}), equal to $n+l(\\alpha)+2g-2$, so the minimum number of factors, $n+l(\\alpha)-2$, is obtained with genus $g=0$. We call such factorizations \\emph{minimal} transitive factorizations;\n\n\\item \nif branching at a branch point is a transposition then it is called \\emph{simple}.\n\n\\end{itemize}\n\nOur own work on the enumeration of branched covers was initiated through Richard Stanley. Arising from joint work with Crescimanno \\cite{ct}, Washington Taylor (Dept. of Physics, MIT) had asked Stanley about a particular transitive factorization problem for permutations, which turned out to be a special case of \\emph{Hurwitz numbers} in genus $0$. Stanley suggested he should contact me (DMJ). Taylor's e-mail languished unanswered for three months on an old main frame computer at Waterloo. It was only through Stanley's well-known and encyclop{\\ae}dic grasp of progress on active questions that I became aware of the oversight, after he e-mailed asking about progress.\n\nThe Hurwitz number $H_{\\alpha}^g$ is the number of topologically distinct branched covers in genus $g$, in which branching is of type $\\alpha$ at one specified branch point, and branching is simple at $r$ remaining branch points. 
\\emph{Topologically distinct} means that we divide the number of branched covers by $n!$, for geometric reasons. Thus $H_{\\alpha}^g$ equals $\\frac{1}{n!}$ times the number of transitive factorizations of an element of $\\mathcal{C}_{\\alpha}$ into $r$ transpositions, where:\n\\begin{itemize}\n\n\\item\nfrom~(\\ref{RieHur}), the number of transpositions is given by $r=n+l(\\alpha)+2g-2$;\n\n\\item\nthe group generated by the $r$ transpositions acts transitively on $\\{ 1,\\ldots ,n\\}$. Equivalently, the multigraph with vertex-set $\\{ 1,\\ldots ,n\\}$, and $r$ edges, one edge $\\{ a,b\\}$ for each transposition $(a\\, b)$, is connected.\n\\end{itemize}\n\nIn this language, Taylor was asking about the number of transitive factorizations of the identity permutation into $2n-2$ transpositions. These are minimal transitive factorizations, with genus $g=0$, and hence are given by the Hurwitz number $H_{(1^n)}^0$. \n\n\\section{The join-cut equation}\n\nIn our first attempts to solve Taylor's problem, we applied group characters to obtain a generating series in the form of a logarithm of a Schur function summation, analogous to the map generating series given in~(\\ref{gseriesorient}). We were not able to obtain an explicit formula for Taylor's problem from this symmetric function form of the generating series, so we moved on to the following more indirect analysis: Form the generating series\n\\begin{equation*}\nH^0=\\sum_{n=1}^{\\infty}z^n \\sum_{\\alpha\\vdash n} \\frac{H_{\\alpha}^0}{(n+l(\\alpha )-2)!} p_{\\alpha}\n\\end{equation*}\nin the indeterminates $z,p_1,p_2,\\ldots$, and suppose that the last transposition in the factorization is $(a\\, b)$. 
Then when we multiply a permutation by $(a\\, b)$, there are two possibilities: \n\\begin{itemize}\n\n\\item[\\textbf{Case 1:}] $a$ and $b$ occur on \\emph{different} cycles of lengths $i$ and $j$, and the cycles are \\emph{joined} to form a single cycle of length $i+j$;\n\n\\item[\\textbf{Case 2:}] $a$ and $b$ occur on the \\emph{same} cycle, of length $i+j$, and this cycle is \\emph{cut} into a cycle of length $i$ and a cycle of length $j$.\n\n\\end{itemize}\nThis analysis (which we call a join-cut analysis) leads immediately to the formal partial differential equation\n\\begin{equation}\\label{jcutpde}\n\\frac{1}{2}\\sum_{i,j\\ge 1} \\left( p_{i+j} \\, i\\frac{\\partial H^0}{\\partial p_i} j\\frac{\\partial H^0}{\\partial p_j} + p_i p_j (i+j)\\frac{\\partial H^0}{\\partial p_{i+j}} \\right)\n-z\\frac{\\partial H^0}{\\partial z} - \\sum_{i\\ge 1} p_i \\frac{\\partial H^0}{\\partial p_i} + 2H^0 = 0,\n\\end{equation}\nwhich we call the \\emph{join-cut equation} for the series $H^0$. Together with the initial condition $[z^0]H^0=0$, this uniquely determines $H^0$.\n\nIt turns out that working with equation~(\\ref{jcutpde}) is greatly simplified by changing variables from $z$ to $s$ by means of the functional equation\n\\begin{equation}\\label{szeqn}\ns=z\\exp\\left( \\sum_{i\\ge 1}\\frac{i^i}{i!}p_i\\, s^i \\right) .\n\\end{equation}\nThis change of variables is perfectly natural within algebraic combinatorics, and is quite tractable by means of Lagrange's Implicit Function Theorem. 
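For instance (a simple specialization, recorded here only for orientation), setting $p_1=1$ and $p_i=0$ for $i\\geq 2$ in~(\\ref{szeqn}) gives $s=ze^{s}$, so that, by Lagrange's Implicit Function Theorem,\n\\begin{equation*}\ns = \\sum_{n\\geq 1} n^{n-1}\\frac{z^n}{n!} ,\n\\end{equation*}\nthe exponential generating series for rooted labelled trees.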
For example, we can express the series $H^0$ in terms of $s$ in the simple form\n\\begin{equation*}\n\\left( z\\frac{\\partial}{\\partial z}\\right)^{2} H^0 = \\log \\left( \\frac{s}{z} \\right) ,\n\\end{equation*}\nand it follows immediately from Lagrange's Theorem that the Hurwitz number for genus $0$ has the explicit form\n\\begin{equation}\\label{H0alpha}\nH^0_{\\alpha}= \\frac{(n+\\ell -2)!}{|\\mathrm{Aut}\\,\\alpha |} \\, n^{\\ell -3} \\, \\prod_{j=1}^{\\ell } \\frac{\\alpha_j^{\\alpha_j}}{\\alpha_j!},\n\\end{equation}\nwhere $\\alpha\\vdash n$ and $\\ell = l(\\alpha )$ (see \\cite{gj1} for full details). In this notation, D\\'{e}nes~\\cite{d1} and Crescimanno and Taylor~\\cite{ct} had previously obtained the results for $\\alpha = (n)$ and $\\alpha=(1^n)$, respectively. We were unaware when writing the paper that this explicit form for all $\\alpha$ had been obtained much earlier by Hurwitz~\\cite{h}.\n\nThe join-cut analysis can be extended to Hurwitz numbers in arbitrary genus. Again the change of variables in~(\\ref{szeqn}) helps to simplify, and in~\\cite{gjvn} (see also \\cite{gj7} and \\cite{gjv1}) we were led to conjecture the existence of a polynomial $P_{g,\\ell}$, for each $g\\ge 0$ and $\\ell\\geq 1$, such that for all partitions $\\alpha \\vdash n$ with $\\ell=l(\\alpha )$ parts,\n\\begin{equation}\\label{Hgalpha} \n H^g_{\\alpha} = \\frac{(n+\\ell +2g-2)!}{|\\mathrm{Aut}\\,\\alpha |} \\, P_{g,\\ell}(\\alpha_1, \\dots, \\alpha_\\ell) \\, \\prod_{j=1}^\\ell \\frac{\\alpha_j^{\\alpha_j}}{\\alpha_j!}.\n\\end{equation}\nFor example, from~(\\ref{H0alpha}), since $\\alpha\\vdash n$, we have\n\\begin{equation*}\nP_{0,\\ell}(\\alpha_1,\\ldots ,\\alpha_{\\ell}) = \\left( \\alpha_1+\\cdots +\\alpha_{\\ell} \\right)^{\\ell -3}.\n\\end{equation*}\n\n\\section{Hodge integrals, the moduli space of curves, and integrable hierarchies}\n\nHurwitz numbers have been the subject of much research interest over the last couple of decades, with a variety of mathematical 
areas making substantial contributions, including mathematical physics, algebraic geometry and algebraic combinatorics. For example, soon after we conjectured the existence of the polynomial $P_{g,\\ell }$ in~(\\ref{Hgalpha}), Ekedahl, Lando, Shapiro and Vainshtein \\cite{elsv} proved it by constructing an explicit expression for the polynomial as a \\emph{Hodge integral}. The expression is the celebrated ELSV formula\n\\begin{equation}\\label{eq:elsvf}\n P_{g,\\ell}(\\alpha_1, \\dots, \\alpha_\\ell) = \\int_{{\\overline{\\mathcal{M}}}_{g,\\ell}} \\frac{1 - \\lambda_1 + \\cdots + (-1)^g \\lambda_g}{(1 - \\alpha_1 \\psi_1) \\cdots (1 - \\alpha_\\ell \\psi_\\ell)},\n\\end{equation}\nwhere ${\\overline{\\mathcal{M}}}_{g,\\ell}$ is the (compact) moduli space of stable $\\ell$-pointed genus $g$ curves, $\\psi_1$, $\\dots$, $\\psi_{\\ell}$ are (codimension $1$) classes corresponding to the $\\ell$ marked points, and $\\lambda_k$ is the (codimension $k$) $k$th Chern class of the Hodge bundle. Equation~(\\ref{eq:elsvf}) should be interpreted as follows: formally invert the denominator of the integrand as a geometric series; select the terms of codimension $\\dim {\\overline{\\mathcal{M}}}_{g,\\ell}=3g - 3 + \\ell$; and ``intersect'' these terms on ${\\overline{\\mathcal{M}}}_{g,\\ell}$. \n\nEarlier, and perhaps most notably, Witten~\\cite{wi} had initiated much of this work by his conjecture that a transform of the generating series for Hurwitz numbers is a $\\tau$-function for the KdV hierarchy from integrable systems. Witten's motivation for the conjecture was that two different models of two-dimensional quantum gravity have the same partition function. For one of these models, the partition function can be described in terms of intersection numbers on moduli space, but also in terms of Hurwitz numbers. 
Witten's Conjecture \\cite{wi} was proved soon after by Kontsevich \\cite{kon}, and a number of proofs have appeared since, for example by Kazarian and Lando \\cite{kl}.\n\nA variant of Hurwitz numbers called \\emph{double} Hurwitz numbers has also been the subject of recent research interest. They were introduced by Okounkov \\cite{ok2}, motivated by a conjecture of Pandharipande \\cite{p} in Gromov-Witten theory. Okounkov showed that a particular generating series for double Hurwitz numbers is a $\\tau$-function for the Toda lattice hierarchy from integrable systems.\n\nThe double Hurwitz number $H_{\\alpha ,\\beta}^g$ is the number of topologically distinct branched covers in genus $g$, where branching is of type $\\alpha$ at one specified branch point, type $\\beta$ at another specified branch point, and branching is simple at $r$ remaining branch points. Thus $H_{\\alpha ,\\beta}^g$ equals $|\\mathrm{Aut}\\,\\alpha |\\cdot |\\mathrm{Aut}\\, \\beta |\/n!$ (this factor is chosen for geometric reasons) times the number of transitive factorizations of an element of $\\mathcal{C}_{\\alpha}$ into an element of $\\mathcal{C}_{\\beta}$ together with $r$ transpositions, where:\n\\begin{itemize}\n\n\\item\nfrom~(\\ref{RieHur}), the number of transpositions is given by $r=l(\\alpha)+l(\\beta)+2g-2$;\n\n\\item\nthe group generated by the element of $\\mathcal{C}_{\\beta}$ and the $r$ transpositions acts transitively on $\\{ 1,\\ldots ,n\\}$.\n\\end{itemize}\n\nIn joint work with Vakil \\cite{gjv2}, we used both group characters and a join-cut analysis to obtain various results for double Hurwitz numbers. One of these was the following conjectured ELSV-type formula for double Hurwitz numbers where one of the partitions has a single part:\n\n$$\nH^g_{\\alpha ,(n)} = n\\, (\\ell +2g-1)! 
\\,\\int_{\\overline{\\mathsf{Pic}}_{g,\\ell }}\n\\frac{\\Lambda_0 -\\Lambda_2 + \\cdots \\pm\\Lambda_{2g}}\n{(1-\\alpha_1\\psi_1) \\cdots (1-\\alpha_{\\ell}\\psi_{\\ell})} ,\n$$\nwhere $\\alpha =(\\alpha_1,\\ldots ,\\alpha_{\\ell})$, $\\overline{\\mathsf{Pic}}_{g,\\ell}$ is a conjectural compactification of the universal Picard variety, and $\\Lambda_{2k}$ is a conjectural (codimension $2k$) class.\n\nA second result we obtained for double Hurwitz numbers, reminiscent of the polynomiality result given in~(\\ref{Hgalpha}) for Hurwitz numbers, was a \\emph{piecewise} polynomiality result. In particular, for fixed $g,\\ell ,k$, and for $\\alpha =(\\alpha_1,\\ldots ,\\alpha_{\\ell})$ with $\\ell$ parts and $\\beta =(\\beta_1,\\ldots ,\\beta_k)$ with $k$ parts, $H^g_{\\alpha ,\\beta}$ is piecewise polynomial (and \\emph{not} polynomial) in the parts $\\alpha_1,\\ldots ,\\alpha_{\\ell},\\beta_1,\\ldots ,\\beta_k$, of degree $4g-3+\\ell +k$. Our proof of piecewise polynomiality used ribbon graphs to interpret double Hurwitz numbers as counting lattice points in certain polytopes. We then required Ehrhart's Theorem and Ehrhart polynomials, whose properties have been studied extensively by Stanley; see for example \\cite{s4}. The piecewise polynomiality property of the double Hurwitz numbers has prompted further study of the chamber structure and wall crossings in these polytopes; see for example \\cite{cjm, ssv}. 
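As a minimal illustration of these definitions (not drawn from \\cite{gjv2}, but easily checked by hand): for $g=0$, $\\alpha =(a,b)$ with $a+b=n$, and $\\beta =(n)$, we have $r=l(\\alpha )+l(\\beta )-2=1$, and a fixed element of $\\mathcal{C}_{(a,b)}$ has exactly $ab$ factorizations into an $n$-cycle times a transposition, all of them transitive. Hence\n\\begin{equation*}\nH^0_{(a,b),(n)} = \\frac{|\\mathrm{Aut}\\,\\alpha |\\cdot |\\mathrm{Aut}\\,\\beta |}{n!} \\cdot |\\mathcal{C}_{(a,b)}| \\cdot ab = \\frac{|\\mathrm{Aut}\\,\\alpha |}{n!} \\cdot \\frac{n!}{ab\\, |\\mathrm{Aut}\\,\\alpha |} \\cdot ab = 1 ,\n\\end{equation*}\nindependently of $a$ and $b$.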
\n\n Finally, a substantially different but related geometric setting in which transitive permutation factorizations have been applied is given in Lando and Zvonkin \\cite{lz}, where they are called \\emph{constellations}.\n\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nIn this article, we establish a (special) \\(\\lambda\\)-ring structure on the Grothendieck-Witt ring of a scheme, and some basic properties of the associated \\(\\gamma\\)-filtration.\n\nAs far as Witt rings of fields are concerned, there is an unchallenged natural candidate for a good filtration: the ``fundamental filtration'', given by powers of the ``fundamental ideal''. Its claim to fame is that the associated graded ring is isomorphic to the mod-2 ^^e9tale cohomology ring, as predicted by Milnor \\cite{Milnor} and verified by Voevodsky et al.\\ \\citelist{\\cite{Voevodsky:Milnor}\\cite{OVV:Milnor}}. For the Witt ring of a more general variety \\(X\\), there is no candidate filtration of equal renown. The two most frequently encountered filtrations are:\n\\begin{compactitem}\n\\item A short filtration which we will refer to as the {\\bf classical filtration} \\(\\clasF{*}\\mathrm W} \\newcommand*{\\shW}{\\sheaf W(X)\\), given by the whole ring, the kernel of the rank homomorphism, the kernels of the first two Stiefel-Whitney classes. This filtration is used, for example, in \\citelist{\\cite{Fernandez}\\cite{Me:WCS}}.\n\\item The {\\bf unramified filtration} \\(\\urF{*}\\mathrm W} \\newcommand*{\\shW}{\\sheaf W(X)\\), given by the preimage of the fundamental filtration on the Witt ring of the function field \\(K\\) of \\(X\\) under the natural homomorphism \\(\\mathrm W} \\newcommand*{\\shW}{\\sheaf W(X)\\to \\mathrm W} \\newcommand*{\\shW}{\\sheaf W(K)\\). 
Said morphism is not generally injective (\\mbox{e.\\thinspace{}g.\\ } \\cite{Totaro:Witt}), at least not when \\(\\dim(X) > 3\\), and its kernel will clearly be contained in every piece of the filtration. Recent computations with this filtration include \\cite{FunkHoobler}.\n\\end{compactitem}\nClearly, the unramified filtration coincides with the fundamental filtration in the case of a field, and so does the classical filtration as far as it is defined.\nThe same will be true of the \\(\\gamma\\)-filtration introduced here. It may be thought of as an attempt to extend the classical filtration to higher degrees.\n\nIn general, in order to define a ``\\(\\gamma\\)-filtration'', we simply need to exhibit a pre-\\(\\lambda\\)-structure on the ring in question. However, the natural candidates for \\(\\lambda\\)-operations, the exterior powers, are not well-defined on the Witt ring \\(\\mathrm{W}(X)\\). We remedy this by passing to the Grothen\\-dieck-Witt ring \\(\\mathrm{GW}(X)\\). It is defined just like the Witt ring, except that we do not quotient out hyperbolic elements. Consequently, the two rings are related by an exact sequence\n\\[\n\\mathrm{K}(X) \\to \\mathrm{GW}(X) \\to \\mathrm{W}(X) \\to 0.\n\\]\nGiven the (pre-)\\(\\lambda\\)-structure on \\(\\mathrm{GW}(X)\\), we can formulate the following theorem concerning the associated \\(\\gamma\\)-filtration. 
Let \\(X\\) be an integral scheme over a field \\(k\\) of characteristic not two.\n\\needspace{2cm}\n\\begin{plainnumberthm}\\label{mainthm:filtration}\\hfill\n \\begin{compactenum}[(1)]\n \\item The \\(\\gamma\\)-filtration on \\(\\mathrm{GW}(k)\\) is the fundamental filtration.\n \\item\n The \\(\\gamma\\)-filtration on \\(\\mathrm{GW}(X)\\) is related to the classical filtration as follows:\n \\begin{align*}\n \\gammaF{1}\\mathrm{GW}(X) &= \\clasF{1}\\mathrm{GW}(X) := \\ker\\left(\\mathrm{GW}(X)\\xrightarrow{\\rank}\\mathbb{Z}\\right)\\\\\n \\gammaF{2}\\mathrm{GW}(X) &= \\clasF{2}\\mathrm{GW}(X) :=\\ker\\left(\\clasF{1}\\mathrm{GW}(X)\\xrightarrow{w_1} H^1_\\mathit{et}(X,\\mathbb{Z}\/2)\\right)\\\\\n \\gammaF{3}\\mathrm{GW}(X) &\\subseteq \\clasF{3}\\mathrm{GW}(X) :=\\ker\\left(\\clasF{2}\\mathrm{GW}(X)\\xrightarrow{w_2} H^2_\\mathit{et}(X,\\mathbb{Z}\/2)\\right)\n \\end{align*}\n However, the inclusion at the third step is not in general an equality.\n \\item\n The \\(\\gamma\\)-filtration on \\(\\mathrm{GW}(X)\\) is finer than the unramified filtration.\n \\end{compactenum}\n\\end{plainnumberthm}\n\nWe define the ``\\(\\gamma\\)-filtration'' on the Witt ring as the image of the above filtration under the canonical projection \\(\\mathrm{GW}(X)\\to\\mathrm{W}(X)\\).\nThus, each of the above statements easily implies an analogous statement for the Witt ring: the \\(\\gamma\\)-filtration on the Witt ring of a field is the fundamental filtration, \\(\\gammaF{i}\\mathrm{W}(X)\\) agrees with \\(\\clasF{i}\\mathrm{W}(X)\\) for \\(i<3\\), etc.\nThe same example as for the Grothen\\-dieck-Witt ring (\\Cref{eg:punctured-A^d}) will show that \\(\\gammaF{3}\\mathrm{W}(X) \\neq \\clasF{3}\\mathrm{W}(X)\\) in general.\n\nMost statements of \\Cref{mainthm:filtration} also hold under weaker hypotheses---see {\\it (1)} \\Cref{prop:local-filtration}, {\\it (2)} \\Cref{comparison:F2,comparison:F2F3} and {\\it (3)} \\Cref{comparison:finer-than-unramified}. On the other hand, under some additional restrictions, the relation with the unramified filtration can be made more precise. For example, if \\(X\\) is a regular variety of dimension at most three and \\(k\\) is infinite, the unramified filtration on the Witt ring agrees with the global sections of the sheafified \\(\\gamma\\)-filtration (\\Cref{sec:unramified-filtration}).\n\nThe crucial assertion is of course the equality of \\(F^2_\\gamma\\mathrm{GW}(X)\\) with the kernel of \\(w_1\\)---all other statements would hold similarly for the naive filtration of \\(\\mathrm{GW}(X)\\) by the powers of the ``fundamental ideal'' \\(F^1_\\gamma\\mathrm{GW}(X)\\). The equality follows from the fact that the exterior powers make \\(\\mathrm{GW}(X)\\) not only a pre-\\(\\lambda\\)-ring, but even a \\(\\lambda\\)-ring:\\footnote{\n In older terminology, pre-\\(\\lambda\\)-rings are called ``\\(\\lambda\\)-rings'', while \\(\\lambda\\)-rings are referred to as ``special \\(\\lambda\\)-rings''. 
See also the introduction to \\cite{Me:LambdaReps}.}\n\\begin{plainnumberthm}\\label{mainthm}\n For any scheme \\(X\\) over a field of characteristic not two, the exterior power operations give \\(\\mathrm{GW}(X)\\) the structure of a \\(\\lambda\\)-ring.\n\\end{plainnumberthm}\nIn the case when \\(X\\) is the spectrum of a field, this was established in \\cite{McGarraghy:exterior}. The underlying pre-\\(\\lambda\\)-structure for affine \\(X\\) has also recently been established independently in \\cite{Xie}, where it is used to study sums-of-squares formulas.\n\nAlthough in this article the \\(\\lambda\\)-structure is used mainly as a tool in proving \\Cref{mainthm:filtration}, it should be noted that \\(\\lambda\\)-rings and even pre-\\(\\lambda\\)-rings have strong structural properties. Much of the general structure of Witt rings of \\emph{fields}---for example, the fact that they contain no \\(p\\)-torsion for odd \\(p\\)---could be (re)derived using the pre-\\(\\lambda\\)-structure on the Grothen\\-dieck-Witt ring. Among the few results that generalize immediately to Grothendieck-Witt rings of schemes is the fact that torsion elements are nilpotent: this is true in any pre-\\(\\lambda\\)-ring. For \\(\\lambda\\)-rings, Clauwens has even found a sharp bound on the nilpotence degree \\cite{Clauwens}. In our situation, Clauwens' result reads:\n\\begin{cor*}\n Let \\(X\\) be as above. Suppose \\(x\\in\\mathrm{GW}(X)\\) is an element satisfying \\(p^e x = 0\\) for some prime \\(p\\) and some exponent \\(e>0\\). Then \\mbox{\\(x^{p^e+p^{e-1}} = 0\\)}.\n\\end{cor*}\nTo put the corollary into context, recall that for a field \\(k\\) of characteristic not two, an element \\(x\\in\\mathrm{GW}(k)\\) is nilpotent if and only if it is \\(2^n\\)-torsion for some \\(n\\) \\citelist{\\cite{Lam}*{VIII.8}\\cite{Me:App}}.
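For orientation, here is the smallest instance of the corollary (a direct specialization of the bound above, not an additional result): taking \\(p = 2\\) and \\(e = 1\\), any \\(2\\)-torsion element of the Grothen\\-dieck-Witt ring cubes to zero,\n\\[\n2x = 0 \\;\\Longrightarrow\\; x^{2^1 + 2^0} = x^3 = 0.\n\\]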
This equivalence may be generalized at least to connected semi-local rings in which two is invertible, using the pre-\\(\\lambda\\)-structure for one implication and \\cite{KRW72}*{Ex.~3.11} for the other. See \\cite{McGarraghy:exterior} for further applications of the \\(\\lambda\\)-ring structure on Grothen\\-dieck-Witt rings of fields and \\cite{Balmer:Nilpotence} for nilpotence results for Witt rings of regular schemes.\n\nFrom the \\(\\lambda\\)-theoretic point of view, the main complication in the Grothen\\-dieck-Witt ring of a general scheme as opposed to that of a field is that not all generators can be written as sums of line elements. In K-theory, this difficulty can often be overcome by embedding \\(\\mathrm{K}(X)\\) into the K-ring of some auxiliary scheme in which a given generator does have this property, but in our situation this is impossible: there is no splitting principle for Grothen\\-dieck-Witt rings (\\Cref{sec:no splitting}).\n\n\\subsection*{Acknowledgements}\nI thank Pierre Guillot for getting me started on these questions, and Kirsten Wickelgren for making me aware of the above-mentioned nilpotence results.\n\n\\section{Generalities}\n\\subsection{$\\lambda$-rings}\\label{sec:general-lambda}\nWe give a quick and informal introduction to \\(\\lambda\\)-rings, treading middle ground between the traditional definition in terms of exterior power operations \\cite{SGA6}*{Exposé~V} and the abstract definition of \\(\\lambda\\)-rings as coalgebras over a comonad \\cite{Borger:BasicI}*{1.17}. The main point we would like to get across is that a \\(\\lambda\\)-ring is ``a ring equipped with all possible symmetric operations'', not just ``a ring with exterior powers''.
This observation is not essential for anything that follows---we will later work exclusively with the traditional definition---but we hope that it provides some intrinsic motivation for considering this kind of structure.\n\nTo make our statement more precise, let \\(\\mathbb W\\) be the ring of symmetric functions. That is, \\(\\mathbb W\\) consists of all formal power series \\(\\phi(x_1,x_2,\\dots)\\) in countably many variables \\(x_1,x_2,\\dots\\) with coefficients in \\(\\mathbb{Z}\\) such that \\(\\phi\\) has bounded degree and such that the image \\(\\phi(x_1,\\dots,x_n,0,0,\\dots)\\) of \\(\\phi\\) under the projection to \\(\\mathbb{Z}[x_1,\\dots,x_n]\\) is a symmetric polynomial for all \\(n\\). For example, \\(\\mathbb W\\) contains~\\dots\n\\begin{compactitem}[\\dots]\n \\item \\eqparbox{egfunctions}{the {\\bf elementary symmetric functions}} \\(\\lambda_k := \\sum_{i_1<\\dots<i_k} x_{i_1}\\cdots x_{i_k}\\),\n \\item \\eqparbox{egfunctions}{the {\\bf complete symmetric functions}} \\(\\sigma_k := \\sum_{i_1\\leq\\dots\\leq i_k} x_{i_1}\\cdots x_{i_k}\\),\n \\item \\eqparbox{egfunctions}{the {\\bf power sums}} \\(\\psi_k := \\sum_i x_i^k\\).\n\\end{compactitem}\nTo any ring \\(A\\), one can associate a ``universal \\(\\lambda\\)-ring'' \\(\\mathbb W A\\), and a (pre-)\\(\\lambda\\)-structure on \\(A\\) can be described as a map \\(\\theta_A\\colon A\\to \\mathbb W A\\) subject to certain conditions. Every symmetric function \\(\\phi\\in\\mathbb W\\) defines an evaluation map \\(\\mathrm{ev}_\\phi\\colon \\mathbb W A\\to A\\), and hence an operation on \\(A\\), namely the composition\n\\[\n\\xymatrix{\n {A} \\ar[dr]\\ar[r]^{\\theta_A} & {\\mathbb W A} \\ar[d]^{\\mathrm{ev}_{\\phi}} \\\\\n & A\n }\n\\]\nIn particular, we have families of operations \\(\\lambda^k\\), \\(\\sigma^k\\) and \\(\\psi^k\\) corresponding to the symmetric functions specified above. They are referred to as {\\bf exterior power operations}, {\\bf symmetric power operations} and {\\bf Adams operations}, respectively.\n\nThe underlying additive group of the universal \\(\\lambda\\)-ring \\(\\mathbb W A\\) is isomorphic to the multiplicative group \\((1 + t A\\llbracket t \\rrbracket)^\\times\\) inside the ring of invertible power series over \\(A\\), and the isomorphism can be chosen such that the projection onto the coefficient of \\(t^i\\) corresponds to \\(\\mathrm{ev}_{\\lambda_i}\\) (e.g.~\\cite{Hesselholt:Big}*{Prop.~1.14, Rem.~1.21(2)}).
Thus, a pre-\\(\\lambda\\)-structure is completely determined by the operations \\(\\lambda^i\\), and conversely, any family of operations \\(\\lambda^i\\) for which the map\n\\begin{align*}\nA &\\xrightarrow{\\;\\;\\lambda_t\\;} (1 + t A\\llbracket t \\rrbracket)^\\times\\\\\na &\\;\\;\\mapsto\\;\\; 1 + \\lambda^1(a)t + \\lambda^2(a)t^2 + \\dots\n\\end{align*}\nis a group homomorphism, and for which \\(\\lambda^1(a) = a\\), defines a pre-\\(\\lambda\\)-structure. This recovers the traditional definition of a pre-\\(\\lambda\\)-structure as a family of operations \\(\\lambda^i\\) (with \\(\\lambda^0=1\\) and \\(\\lambda^1=\\id\\)) satisfying the relation \\(\\lambda^k(x+y) = \\sum_{i+j=k}\\lambda^i(x)\\lambda^j(y)\\) for all \\(k\\geq 0\\) and all \\(x,y\\in A\\).\n\nThe question whether the resulting pre-\\(\\lambda\\)-structure is a \\(\\lambda\\)-structure can similarly be reduced to certain polynomial identities, though these are more difficult to state and often also more difficult to verify in practice.
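As a simple illustration (the standard example on \\(\\mathbb{Z}\\), not specific to this paper): the operations \\(\\lambda^i(n) := \\binom{n}{i}\\) define a pre-\\(\\lambda\\)-structure on \\(\\mathbb{Z}\\), since here \\(\\lambda_t(n) = (1+t)^n\\) and the homomorphism property reduces to the identity\n\\[\n\\lambda_t(m+n) = (1+t)^{m+n} = (1+t)^m(1+t)^n = \\lambda_t(m)\\,\\lambda_t(n).\n\\]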
However, for pre-\\(\\lambda\\)-rings with some additional structure, there are certain standard criteria that make life easier.\n\\begin{defn}\n An \\define{augmented (pre-)\\(\\lambda\\)-ring} is a (pre-)\\(\\lambda\\)-ring \\(A\\) together with a \\(\\lambda\\)-morphism\n \\[\n d\\colon A\\to \\mathbb{Z},\n \\]\n where the (pre-)\\(\\lambda\\)-structure on \\(\\mathbb{Z}\\) is defined by \\(\\lambda^i(n) := \\binom{n}{i}\\).\n\n A \\define{(pre-)\\(\\lambda\\)-ring with positive structure} is an augmented (pre-)\\(\\lambda\\)-ring \\(A\\) together with a specified subset \\(A_{>0}\\subset A\\) on which \\(d\\) is positive and which generates \\(A\\) in the strong sense that any element of \\(A\\) can be written as a difference of elements in \\(A_{>0}\\); it is moreover required to satisfy a list of axioms for which we refer to \\cite{Me:LambdaReps}*{\\S3}.\n\\end{defn}\n\nFor example, one of the axioms for a positive structure is that for an element \\(e\\in A_{>0}\\), the exterior powers \\(\\lambda^k e\\) vanish for all \\(k>d(e)\\). We will refer to elements of \\(A_{>0}\\) as \\define{positive elements}, and to positive elements \\(l\\) of augmentation \\(d(l) = 1\\) as \\define{line elements}. The motivating example, the K-ring \\(\\mathrm{K}(X)\\) of a connected scheme \\(X\\), is augmented by the rank homomorphism, and a set of positive elements is given by the classes of vector bundles.
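To spell out these notions in the motivating example: for a line bundle \\(\\sheaf L\\) on \\(X\\), the class \\([\\sheaf L]\\in\\mathrm{K}(X)\\) is a line element, since \\(d([\\sheaf L]) = \\rank \\sheaf L = 1\\) and \\(\\Lambda^k \\sheaf L = 0\\) for \\(k>1\\), so that\n\\[\n\\lambda_t([\\sheaf L]) = 1 + [\\sheaf L]\\,t.\n\\]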
The situation for the Grothen\\-dieck-Witt ring will be analogous.\n\nHere are two simple criteria for showing that a pre-\\(\\lambda\\)-ring with positive structure is a \\(\\lambda\\)-ring:\n\\begin{description}\n\\item[Splitting Criterion] If all positive elements of \\(A\\) decompose into sums of line elements, then \\(A\\) is a \\(\\lambda\\)-ring.\n\n\\item[Detection Criterion] If for any pair of positive elements \\(e_1, e_2 \\in A_{>0}\\) we can find a \\(\\lambda\\)-ring \\(A'\\) and a \\(\\lambda\\)-morphism \\(A'\\to A\\) with both \\(e_1\\) and \\(e_2\\) in its image, then \\(A\\) is a \\(\\lambda\\)-ring.\n\\end{description}\nWe again refer to \\cite{Me:LambdaReps} for details.\n\n\\subsection{The $\\gamma$-filtration}\nThe \\define{\\(\\gamma\\)-operations} on a pre-\\(\\lambda\\)-ring \\(A\\) can be defined as \\(\\gamma^n(x) := \\lambda^n(x+n-1)\\). They again satisfy the identity \\(\\gamma^k(x+y) = \\sum_{i+j=k}\\gamma^i(x)\\gamma^j(y)\\).\n\\begin{defn}\n The \\(\\gamma\\)-filtration on an augmented pre-\\(\\lambda\\)-ring \\(A\\) is defined as follows:\n \\begin{align*}\n \\gammaF{0} A &:= A\\\\\n \\gammaF{1} A &:= \\ker(A\\xrightarrow{d} \\mathbb{Z})\\\\\n \\gammaF{i} A &:= \\left(\\ctext{10cm}{subgroup generated by all finite products\\\\ \\(\\textstyle\\prod_j \\gamma^{i_j}(a_j)\\) with \\(a_j\\in \\gammaF{1} A\\) and \\(\\textstyle\\sum_j{i_j} \\geq i\\)}\\right)\n &\\text{ for } i > 1\n \\end{align*}\n\\end{defn}\nThis is in fact a filtration by ideals, multiplicative in the sense that \\(\\gammaF{i} A \\cdot \\gammaF{j} A \\subset \\gammaF{i+j}A\\), hence we have an associated graded ring\n\\[\n\\graded^*_\\gamma A := \\bigoplus_i \\gammaF{i} A\/ \\gammaF{i+1} A.\n\\]\nSee \\cite{AtiyahTall}*{\\S4} or \\cite{FultonLang}*{III~\\S1} for details. 
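In low degrees, the \\(\\gamma\\)-operations are easily made explicit. For instance, if \\(1\\) is a line element---as for \\(1 = \\langle 1\\rangle\\) in the Grothen\\-dieck-Witt rings considered below---then \\(\\lambda^k(1) = 0\\) for \\(k>1\\), and the additivity relation for \\(\\lambda^2\\) gives\n\\begin{align*}\n\\gamma^1(x) &= \\lambda^1(x) = x,\\\\\n\\gamma^2(x) &= \\lambda^2(x+1) = \\lambda^2(x) + \\lambda^1(x)\\lambda^1(1) + \\lambda^2(1) = \\lambda^2(x) + x.\n\\end{align*}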
The following lemma is sometimes useful for concrete computations.\n\\begin{lem}\\label{lem:gamma-filtration-generators}\n If \\(A\\) is a pre-\\(\\lambda\\)-ring with positive structure such that every positive element in \\(A\\) can be written as a sum of line elements, then \\(\\gammaF{k} A = (\\gammaF{1} A)^k\\).\n\n More generally, suppose that \\(A\\) is an augmented pre-\\(\\lambda\\)-ring, and let \\(E\\subset A\\) be some set of additive generators of \\(\\gammaF{1} A\\).\n Then \\(\\gammaF{k} A\\) is additively generated by finite products of the form\n \\(\n \\prod_j \\gamma^{i_j}(e_j)\n \\)\n with \\(e_j\\in E\\) and \\(\\sum_j i_j \\geq k\\).\n\\end{lem}\n\\begin{proof}\n The first assertion may be found in \\cite{FultonLang}*{III~\\S1}.\n It also follows from the second, which we now prove.\n As each \\(x\\in \\gammaF{1} A\\) can be written as a linear combination of elements of \\(E\\), we can write any \\(\\gamma^{i}(x)\\) as a linear combination of products of the form \\(\\prod_j \\gamma^{i_j}(\\pm e_j)\\) with \\(e_j\\in E\\) and \\(\\sum_j i_j = i\\).\n Thus, \\(\\gammaF{k} A\\) can be generated by finite products of the form \\(\\prod_j \\gamma^{i_j}(\\pm e_j)\\), with \\(e_j\\in E\\) and \\(\\sum_j i_j\\geq k\\).\n Moreover, \\(\\gamma^i(-e)\\) is a linear combination of products of the form \\(\\prod_j \\gamma^{i_j}(e)\\) with \\(\\sum_j i_j = i\\): this follows from the above identity for \\(\\gamma^k(x+y)\\).\n Thus, \\(\\gammaF{k} A\\) is already generated by products of the form described.\n\\end{proof}\n\nFor \\(\\lambda\\)-rings with positive structure, we also have the following general fact:\n\\begin{lem}[\\cite{FultonLang}*{III, Thm~1.7}]\\label{lem:graded-degree-1}\n For any \\(\\lambda\\)-ring \\(A\\) with positive structure, the additive group \\(\\graded^1 A = \\gammaF{1} A\/\\gammaF{2} A\\) is isomorphic to the multiplicative group of line elements in \\(A\\).\n\\end{lem}\n\n\\section{The $\\lambda$-structure on the 
Grothendieck-Witt ring}\n\\subsection{The pre-$\\lambda$-structure}\n\\begin{prop}\\label{prop:lambda-on-GW}\n Let \\(X\\) be a scheme. The exterior power operations \\( \\lambda^k\\colon (\\vb M, \\mu) \\mapsto (\\Lambda^k \\vb M, \\Lambda^k \\mu) \\) induce well-defined maps on \\(\\mathrm{GW}(X)\\) which provide \\(\\mathrm{GW}(X)\\) with the structure of a pre-\\(\\lambda\\)-ring.\n\\end{prop}\nOur proof of the existence of a pre-\\(\\lambda\\)-structure will follow the same pattern as the proof for symmetric representation rings in \\cite{Me:LambdaReps}:\n\n\\paragraph{Step 1.} The assignment \\(\\lambda^i(\\vb M, \\mu) := (\\Lambda^i \\vb M, \\Lambda^i\\mu)\\) is well-defined on the set of isometry classes of symmetric vector bundles over \\(X\\), so that we have an induced map\n\\begin{align*}\n\\lambda_t\\colon \\left\\{\\ctext{5cm}{isometry classes of symmetric vector bundles over \\(X\\)}\\right\\} &\\longrightarrow (1 + t \\mathrm{GW}(X)\\llbracket t \\rrbracket)^\\times.\n\\intertext{%\nWe extend it linearly to a group homomorphism\n}\n\\bigoplus\\mathbb{Z}(\\vb M, \\mu) &\\longrightarrow (1 + t \\mathrm{GW}(X)\\llbracket t \\rrbracket)^\\times,\n\\end{align*}\nwhere the sum on the left is over all isometry classes of symmetric vector bundles over \\(X\\).\n\n\\paragraph{Step 2.} The map \\(\\lambda_t\\) is additive in the sense that\n\\[\n\\lambda_t((\\vb M, \\mu)\\perp(\\vb N,\\nu)) = \\lambda_t(\\vb M, \\mu)\\lambda_t(\\vb N,\\nu).\n\\]\nThus, it factors through the quotient of \\(\\bigoplus\\mathbb{Z}(\\vb M, \\mu)\\) by the ideal generated by the relations\n\\(\n((\\vb M, \\mu)\\perp(\\vb N,\\nu)) = (\\vb M, \\mu) + (\\vb N,\\nu)\n\\).\n\\paragraph{Step 3.} The homomorphism \\(\\lambda_t\\) respects the relation \\( (\\vb M,\\mu) = H(\\vb L)\\) for every metabolic vector bundle \\( (\\vb M,\\mu) \\) with
Lagrangian \\(\\vb L\\). Thus, we obtain the desired factorization\n\\[\n\\lambda_t\\colon \\mathrm{GW}(X)\\to (1 + t \\mathrm{GW}(X)\\llbracket t \\rrbracket)^\\times.\n\\]\n\nTo carry out these steps, we only need to replace the vector-space-level arguments of \\cite{Me:LambdaReps} with local arguments. We formulate the key lemma in detail and then sketch the remaining part of the proof.\n\\begin{lemFiltration}[\\mbox{cf.\\ }~\\cite{SGA6}*{Exposé~V, Lemme~2.2.1}]\\label{lemFiltration}\n Let \\(0\\to \\vb L\\to \\vb M\\to \\vb N\\to 0\\) be an extension of vector bundles over a scheme \\(X\\). Then we can find a filtration of \\(\\Lambda^n \\vb M\\) by sub-vector bundles\n \\(\n \\Lambda^n \\vb M = \\vb M^0 \\supset \\vb M^1 \\supset \\vb M^2 \\supset \\cdots\n \\)\n together with isomorphisms\n \\[\n \\pi_{\\vb M}^i\\colon \\factor{\\vb M^i}{\\vb M^{i+1}} \\cong \\Lambda^i \\vb L \\otimes \\Lambda^{n-i} \\vb N.\n \\]\n More precisely, there is a unique way of associating such filtrations and isomorphisms with extensions of vector bundles on schemes subject to the following conditions:\n \\begin{itemize}\n \\item [(1)] The filtrations are natural with respect to isomorphisms of extensions.
That is, given extensions \\(\\vb M\\) and \\(\\tilde{\\vb M}\\) of \\(\\vb N\\) by \\(\\vb L\\), any isomorphism \\(\\phi\\colon \\vb M\\to \\tilde{\\vb M}\\) for which\n \\[\\xymatrix{\n 0 \\ar[r] & \\vb L \\ar[r]\\ar@{=}[d] & \\vb M \\ar[r]\\ar[d]^{\\phi}_{\\cong} & \\vb N \\ar[r]\\ar@{=}[d] & 0 \\\\\n 0 \\ar[r] & \\vb L \\ar[r] & {\\tilde{\\vb M}} \\ar[r] & \\vb N \\ar[r] & 0\n }\\]\n commutes restricts to isomorphisms \\(\\vb M^i \\to \\tilde{\\vb M}^i\\) compatible with the isomorphisms \\(\\pi_{\\vb M}^i\\) and \\(\\tilde\\pi_{\\vb M}^i\\) in the sense that\n \\[\\xymatrix{\n {\\factor{\\vb M^i}{\\vb M^{i+1}}} \\ar[rd]^{\\cong}_{\\pi_{\\vb M}^i}\\ar[rr]_{\\cong}^{\\bar \\phi} && \\ar[ld]_{\\cong}^{\\pi_{\\tilde{\\vb M}}^i} {\\factor{\\tilde{\\vb M}^i}{\\tilde{\\vb M}^{i+1}}} \\\\\n & {\\Lambda^i \\vb L\\otimes \\Lambda^{n-i} \\vb N}\n }\\]\n commutes.\n\n \\item[(1')]\\label{i:filt-1}\n The filtration is natural with respect to morphisms of schemes \\mbox{\\(f\\colon Y\\to X\\)}: \\( (f^*\\vb M)^i = f^*(\\vb M^i)\\) and \\(f^*(\\pi_{\\vb M}^i) = \\pi_{f^*\\vb M}^i\\) under the identification of \\(\\Lambda^n(f^* \\vb M)\\) with \\(f^*(\\Lambda^n \\vb M)\\).\n\n \\item[(2)]\\label{i:filt-2}\n For the trivial extension, \\((\\vb L\\oplus \\vb N)^i \\subset \\Lambda^n(\\vb L\\oplus \\vb N)\\) corresponds to the submodule\n \\[\n \\textstyle\\bigoplus_{j\\geq i} \\Lambda^j \\vb L\\otimes \\Lambda^{n-j} \\vb N \\quad \\subset \\quad \\textstyle \\bigoplus_j \\Lambda^j \\vb L\\otimes \\Lambda^{n-j} \\vb N\n \\]\n under the canonical isomorphism \\(\\Lambda^n(\\vb L\\oplus \\vb N) \\cong \\bigoplus_j \\Lambda^j \\vb L\\otimes \\Lambda^{n-j} \\vb N\\), and the iso\\-morphisms\n \\[\n \\pi_{\\vb L \\oplus \\vb N}^i\\colon \\factor{(\\vb L\\oplus \\vb N)^i}{(\\vb L\\oplus \\vb N)^{i+1}} \\xrightarrow{\\cong} \\Lambda^i \\vb L\\otimes \\Lambda^{n-i} \\vb N\n \\]\n are induced by the canonical projections.\n \\end{itemize}\n\\end{lemFiltration}\n(The numbering {\\it 
(1), (1'), (2)} is chosen to be as close as possible to the Filtration Lemma in \\cite{Me:LambdaReps}. For a closer analogy, statement~{\\it (1)} there should also be split into two parts {\\it (1)} and {\\it (1')}: naturality with respect to isomorphisms of extensions, and naturality with respect to pullback along a group homomorphism. As stated there, only the pullback to the trivial group is covered.)\n\n\\begin{proof}[Proof of \\cref{lemFiltration}]\n Uniqueness is clear: if filtrations and isomorphisms satisfying the above conditions exist, they are determined locally by {\\it (1)} and {\\it (2)}, hence globally by {\\it (1')}.\n\n Existence may be proved via the following direct construction. Let \\(0\\to \\vb L\\xrightarrow{\\iota} \\vb M \\xrightarrow{\\pi} \\vb N \\to 0\\) be an arbitrary short exact sequence of vector bundles over \\(X\\). Consider the morphism \\(\\Lambda^i \\vb L\\otimes \\Lambda^{n-i} \\vb M \\to \\Lambda^n \\vb M\\) induced by \\(\\iota\\). Let \\(\\vb M_i\\) be its kernel and \\(\\vb M^i\\) its image, so that we have a short exact sequence of quasi\\-coherent sheaves:\n \\begin{equation*}\n \\xymatrix{\n 0 \\ar[r] & {\\vb M_i} \\ar[r] & {\\Lambda^i \\vb L\\otimes \\Lambda^{n-i} \\vb M} \\ar[r] & {\\vb M^i} \\ar[r] & 0\n }\n \\end{equation*}\n We claim {\\it (a)} that the sheaves \\(\\vb M_i\\) and \\(\\vb M^i\\) are again vector bundles, {\\it (b)} that the morphism \\(\\Lambda^i \\vb L\\otimes \\Lambda^{n-i} \\vb M \\to \\Lambda^i \\vb L\\otimes \\Lambda^{n-i} \\vb N\\) induced by \\(\\pi\\) factors through \\(\\vb M^i\\) and induces an isomorphism\n \\[\n \\pi_{\\vb M}^i\\colon \\factor{\\vb M^i}{\\vb M^{i+1}}\\to \\Lambda^i \\vb L\\otimes \\Lambda^{n-i} \\vb N,\n \\]\n and {\\it (c)} that this construction of subbundles \\(\\vb M^i\\) and isomorphisms \\(\\pi_{\\vb M}^i\\) satisfies the properties {\\it (1), (1'), (2)}.\n This can be checked in the following order: First, verify most of {\\it (c)}: the first half of statement~{\\it
(1)}, statement~{\\it (1')} and statement~{\\it (2)}. Then {\\it (a)} and {\\it (b)} follow because any extension of vector bundles is locally split. Lastly, the commutativity of the triangle in {\\it (1)} can also be checked locally.\n\\end{proof}\n\n\\begin{proof}[Proof of \\Cref{prop:lambda-on-GW}]\nFor Step~1, we note that the exterior power operation\n\\(\n \\Lambda^i\\colon \\cat Vec(X) \\to \\cat Vec(X)\n\\)\nis a duality functor in the sense that we have an isomorphism\n\\(\n \\eta_{\\vb M}\\colon \\Lambda^i(\\vb M^\\vee) \\xrightarrow{\\cong} (\\Lambda^i \\vb M)^\\vee\n\\)\nfor any vector bundle \\(\\vb M\\). Indeed, we can define \\(\\eta_{\\vb M}\\) on sections by\n\\[\\phi_1\\wedge\\dots\\wedge\\phi_i \\mapsto \\big(m_1\\wedge\\dots\\wedge m_i \\mapsto \\det(\\phi_\\alpha(m_\\beta))\\big). \\]\nThe fact that this is an isomorphism can be checked locally and follows from \\cite{Bourbaki:Algebre}*{Ch.~3, \\S\\,11.5, (30 bis)}.\nWe therefore obtain a well-defined operation on the set of isometry classes of symmetric vector bundles over \\(X\\) by defining\n\\[\n\\lambda^i(\\vb M,\\mu) := (\\Lambda^i \\vb M, \\eta_{\\vb M}\\circ \\Lambda^i(\\mu)).\n\\]\n\nStep~2 is completely analogous to the argument in \\cite{Me:LambdaReps}.\n\nFor Step~3, let \\( (\\vb M,\\mu) \\) be metabolic with Lagrangian \\(\\vb L\\), so that we have a short exact sequence\n \\begin{equation}\\label{eq:lambda-on-GW:ses}\n 0 \\to \\vb L \\xrightarrow{i} \\vb M \\xrightarrow{i^\\vee\\mu} \\vb L^\\vee \\to 0.\n \\end{equation}\n When \\(n\\) is odd, say \\(n=2k-1\\), we claim that \\(\\vb M^k\\) is a Lagrangian of \\(\\Lambda^n(\\vb M,\\mu)\\).\n When \\(n\\) is even, say \\(n=2k\\), we claim that \\(\\vb M^{k+1}\\) is an admissible sub-Lagrangian of \\(\\Lambda^n(\\vb M,\\mu)\\) with \\((\\vb M^{k+1})^\\perp = \\vb M^k\\), and that the composition of isomorphisms\n \\[\n \\factor{\\vb M^{k}}{\\vb M^{k+1}} \\cong \\Lambda^k \\vb L\\otimes \\Lambda^k(\\vb L^\\vee) \\cong \\factor{H(\\vb
L)^{k}}{H(\\vb L)^{k+1}}\n \\]\n of \\cref{lemFiltration} is even an isometry between \\((\\vb M^{k}\/\\vb M^{k+1},\\mu)\\) and \\((H(\\vb L)^{k}\/H(\\vb L)^{k+1}, \\bar{\\mm{0 & 1 \\\\ 1 & 0}})\\). All of these claims can be checked locally. Both the local arguments and the conclusions are analogous to those of \\cite{Me:LambdaReps}.\n\\end{proof}\n\n\\begin{rem}\n In many cases, Step~3 can be simplified. If \\(X\\) is affine, then Step~3 is redundant because any short exact sequence of vector bundles splits. If \\(X\\) is a regular quasiprojective variety over a field, we can reduce the argument to the affine case using Jouanolou's trick and homotopy invariance. Indeed, Jouanolou's trick yields an affine vector bundle torsor \\(\\pi\\colon E\\to X\\) \\cite{Weibel:KH}*{Prop.~4.3}, and as \\(X\\) is regular, \\(\\pi^*\\) is an isomorphism: it is an isomorphism on \\(\\mathrm{K}(-)\\) by \\cite{Weibel:KH}*{below Ex.~4.7}, an isomorphism on \\(\\mathrm{W}(-)\\) by \\cite{Gille:HomotopyInvariance}*{Cor.~4.2}, and hence an isomorphism on \\(\\mathrm{GW}(-)\\) by Karoubi induction.\n\\end{rem}\n\n\\subsection{The pre-$\\lambda$-structure is a $\\lambda$-structure}\nWe would now like to show that the pre-\\(\\lambda\\)-structure on \\(\\mathrm{GW}(X)\\) discussed above is an actual \\(\\lambda\\)-structure.
In some cases, this is easy:\n\n\\begin{prop}\\label{GW-R-special}\n For any connected semi-local commutative ring \\(R\\) in which two is invertible, the Grothen\\-dieck-Witt ring \\(\\mathrm{GW}(R)\\) is a \\(\\lambda\\)-ring.\n\\end{prop}\n\\begin{proof}\n Over such a ring, any symmetric space decomposes into a sum of line elements \\cite{Baeza}*{Prop.~I.3.4\\slash 3.5}, so the result follows from the splitting criterion (\\Cref{sec:general-lambda}).\n\\end{proof}\n\nThe following result is more interesting:\n\\begin{thm}\n For any connected scheme \\(X\\) over a field of characteristic not two, the pre-\\(\\lambda\\)-structure on \\(\\mathrm{GW}(X)\\) introduced above is a \\(\\lambda\\)-structure.\n\\end{thm}\n\nThe usual strategy for proving the analogous result in K-theory is via a ``geometric splitting principle'': see \\Cref{sec:no splitting}. However, as we will see there, no such principle is available for the Grothen\\-dieck-Witt ring. So instead, we follow an alternative strategy, which we recall from \\cite{SGA6}*{Exposé~VI, Thm~3.3}.\\footnote{\n The result of Serre invoked at the end of the proof in \\cite{SGA6} to verify (K1) is \\cite{Serre}*{Thm~4}.}\nLet \\(G\\) be a linear algebraic group scheme over \\(k\\). The principal components of this alternative strategy are:\n\\begin{itemize}\n\\item[(K1)] The representation ring \\(\\mathrm{K}(\\mathcal Rep(G))\\) is a \\(\\lambda\\)-ring.\n\\item[(K2)] For any \\(G\\)-torsor \\({\\sheaf S}\\) over \\(X\\), the map\n \\[\\mathrm{K}(\\mathcal Rep(G))\\to \\mathrm{K}(X)\\]\n sending a representation \\(V\\) of \\(G\\) to the vector bundle \\({\\sheaf S}\\times_{G}V\\) is a \\(\\lambda\\)-morphism.
(Here, the \\(V\\) in \\({\\sheaf S}\\times_{G}V\\) is to be interpreted as a trivial vector bundle over \\(X\\).)\n\\item[(K3)]\n For any pair of vector bundles \\(\\vb E\\) and \\(\\vb F\\) over \\(X\\), there exists a linear algebraic group scheme \\(G\\) and a \\(G\\)-torsor \\({\\sheaf S}\\) such that both \\(\\vb E\\) and \\(\\vb F\\) lie in the image of the morphism \\(\\mathrm{K}(\\mathcal Rep(G))\\to \\mathrm{K}(X)\\) defined by \\({\\sheaf S}\\). (\\(G\\) can be chosen to be a product of general linear groups.)\n\\end{itemize}\nFrom these three points, the fact that \\(\\mathrm{K}(X)\\) is a \\(\\lambda\\)-ring follows via the detection criterion (\\Cref{sec:general-lambda}).\nThe same argument will clearly work for \\(\\mathrm{GW}(X)\\) provided the following three analogous statements hold. (We now assume that \\(\\characteristic k \\neq 2\\).)\n\\begin{itemize}\n\\item[(GW1)] The symmetric representation ring \\(\\mathrm{GW}(\\mathcal Rep(G))\\) is a \\(\\lambda\\)-ring.\n\\item[(GW2)] For any \\(G\\)-torsor \\({\\sheaf S}\\) over \\(X\\), the map\n \\[\\mathrm{GW}(\\mathcal Rep(G))\\to \\mathrm{GW}(X)\\]\n sending a symmetric representation \\(V\\) of \\(G\\) to the symmetric vector bundle \\({\\sheaf S}\\times_{G} V\\) is a \\(\\lambda\\)-morphism.\n\\item[(GW3)]\n Any pair of symmetric vector bundles lies in the image of some common morphism \\(\\mathrm{GW}(\\mathcal Rep(G))\\to \\mathrm{GW}(X)\\) defined by a \\(G\\)-torsor as above. (\\(G\\) can be chosen to be a product of split orthogonal groups.)\n\\end{itemize}\nStatement (GW1) is the main result of \\cite{Me:LambdaReps}.
The remaining points (GW2) and (GW3) are discussed below: see \\Cref{lem:K2GW2} and \\Cref{lem:GW3}. Any reader to whom (K2) and (K3) are obvious will most likely consider (GW2) and (GW3) equally obvious---the only point of note is that for (GW3) we have to work in the étale topology rather than in the Zariski topology.\n\n\\subsubsection*{Twisting by torsors}\nLet \\(X\\) be a scheme with structure sheaf \\(\\O\\). We fix some Grothendieck topology on \\(X\\). (For (K2) and (GW2), the topology is irrelevant. For (K3) we can take any topology at least as fine as the Zariski topology, while for (GW3) we will need the étale topology.)\n\nGiven a sheaf of (not necessarily abelian) groups \\({\\sheaf G}\\) over \\(X\\), recall that a (right) \\define{\\({\\sheaf G}\\)-torsor} is a sheaf of sets \\({\\sheaf S}\\) over \\(X\\) with a right \\({\\sheaf G}\\)-action such that:\n\\begin{compactenum}[(1)]\n\\item There exists a cover \\(\\{U_i\\to X\\}_i\\) such that \\({\\sheaf S}(U_i)\\neq \\emptyset\\) for all \\(U_i\\). (Any such cover is said to \\define{split} \\({\\sheaf S}\\).)\n\\item For all open \\((U\\to X)\\), and for one (hence for all) \\(s\\in{\\sheaf S}(U)\\), the map\n \\begin{align*}\n {\\sheaf G}_{|U} &\\xrightarrow{} {\\sheaf S}_{|U}\\\\\n g &\\mapsto s.g\n \\end{align*}\n is an isomorphism.\n\\end{compactenum}\n\n\\newcommand{\\p}[2]{{_{#1}}{[#2]}}\n\n\\begin{defn}\\label{def1}\n Let \\({\\sheaf S}\\) be a \\({\\sheaf G}\\)-torsor as above.
For any presheaf \\(\\sheaf E\\) of \\(\\O\\)-modules with an \\(\\O\\)-linear left \\({\\sheaf G}\\)-action, we define a new presheaf of \\(\\O\\)-modules by\n \\begin{align*}\n {\\sheaf S} \\mathbin{\\hat\\times}_{{\\sheaf G}} \\sheaf E\n &:= \\coeq\\big( ({\\sheaf S}\\times {\\sheaf G})\\mathbin{\\otimes} \\sheaf E \\rightrightarrows {\\sheaf S}\\mathbin{\\otimes} \\sheaf E \\big)\\\\\n &\\,= \\coker\\big( \\textstyle\\bigoplus_{{\\sheaf S},{\\sheaf G}} \\sheaf E \\longrightarrow \\textstyle\\bigoplus_{{\\sheaf S}} \\sheaf E \\big),\n \\end{align*}\n where, on any open \\(U\\), the morphism in the second line has the form \\(\\p{s,g}{v}\\mapsto \\p{s.g}{v}-\\p{s}{g.v}\\) for \\(s\\in{\\sheaf S}(U)\\), \\(g\\in{\\sheaf G}(U)\\) and \\(v\\in\\sheaf E(U)\\).\\footnote{\n We use square brackets with a subscript on the left for an element of a direct sum that is concentrated in a single summand. A general element of \\(\\bigoplus_{s\\in S} X_s\\) is a finite sum of the form \\(\\displaystyle \\sum_{s\\in S} \\p{s}{x_s}\\) in this notation.}\n If \\(\\sheaf E\\) is a sheaf, we define \\({\\sheaf S}\\times_{{\\sheaf G}} \\sheaf E\\) as the sheafification of \\({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E\\). 
Equivalently, we may define \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E\\) by the same formula as \\({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E\\) provided we interpret the direct sum and cokernel as direct sum and cokernel in the category of sheaves of \\(\\O\\)-modules.\n\\end{defn}\n\n\\begin{rem}\n The sheaf of \\(\\O\\)-modules \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E\\) can alternatively be described as follows:\n Fix a cover \\(\\{U_i\\to X\\}_i\\) which splits \\({\\sheaf S}\\), and fix an element \\(s_i\\in S(U_i)\\) for each \\(U_i\\).\n Let \\(g_{ij}\\in{\\sheaf G}(U_i\\times_X U_j)\\) be the unique element satisfying \\(s_j = s_i.g_{ij}\\).\n Then \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E\\) is isomorphic to the sheaf given on any open \\((V\\to X)\\) by\n \\[\n \\{(v_i)_i \\in \\textstyle\\prod_i\\sheaf E(V_i) \\mid v_i = g_{ij}.v_j \\text{ on } V_{ij} \\},\n \\]\n where \\(V_i := V\\times_X U_i\\), \\(V_{ij} := V_i\\times_X V_j\\).\n (We will not use this description in the following.)\n\\end{rem}\n\nWe next recall the basic properties of \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E\\). 
We call two presheaves of \\(\\O\\)-modules {\\bf locally isomorphic} if \\(X\\) has a cover such that the restrictions to each open of the cover are isomorphic.\nA morphism of presheaves of \\(\\O\\)-modules is said to be {\\bf locally an isomorphism} if \\(X\\) has a cover such that the restriction of the morphism to each open of the cover is an isomorphism.\n\n\\begin{lem}\\label{lem:local-iso}\\mbox{}\\hfill\n \\begin{compactenum}[(i)]\n \\item For any presheaf \\(\\sheaf E\\) as above, \\({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E\\) is locally isomorphic to \\(\\sheaf E\\).\n \\item For any sheaf \\(\\sheaf E\\) as above, \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E\\) is locally isomorphic to \\(\\sheaf E\\).\n \\item The canonical morphism \\({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E\\to{\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E\\) is locally an isomorphism for any sheaf as above.\n \\end{compactenum}\n More precisely, the presheaves in {\\it (i) \\& (ii)} are isomorphic over any open \\((U\\to X)\\) such that \\({\\sheaf S}(U)\\neq \\emptyset\\), and likewise the morphism in {\\it (iii)} is an isomorphism over any such \\(U\\).\n\\end{lem}\n\n\\begin{proof}\n For {\\it (i)} of the lemma, let \\((U\\to X)\\) be an open such that \\({\\sheaf S}(U)\\neq\\emptyset\\). Fix any \\(s\\in {\\sheaf S}(U)\\). For each \\((V\\to U)\\) and each \\(t\\in{\\sheaf S}(V)\\), there exists a unique element \\(g_t\\in{\\sheaf G}(V)\\) such that \\(t = s.g_t\\). 
Therefore, the morphism \\(\\bigoplus_{{\\sheaf S}_{|U}} \\sheaf E_{|U} \\to \\sheaf E_{|U}\\) sending \\(\\p{t}{v}\\) to \\(g_t.v\\) describes the cokernel defining \\({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E\\) over \\(U\\).\n Statements {\\it (ii) \\& (iii)} of the lemma follow from {\\it (i)}.\n\\end{proof}\n\n\\begin{lem}\\label{lem:basic-properties}\\mbox{}\\hfill\n \\begin{compactenum}[(i)]\n \\item The functor \\({\\sheaf S}\\times_{{\\sheaf G}} -\\) is exact, \\mbox{i.\\thinspace{}e.\\ } it takes exact sequences of sheaves of \\(\\O\\)-modules with \\(\\O\\)-linear \\({\\sheaf G}\\)-action to exact sequences of sheaves of \\(\\O\\)-modules.\n \\item If \\(\\sheaf E\\) is a sheaf of \\(\\O\\)-modules with \\emph{trivial} \\({\\sheaf G}\\)-action, then \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E\\cong \\sheaf E\\).\n \\item Given arbitrary sheaves of \\(\\O\\)-modules \\(\\sheaf E\\) and \\(\\sheaf F\\) with \\(\\O\\)-linear \\({\\sheaf G}\\)-actions, consider \\(\\sheaf E\\oplus \\sheaf F\\), \\(\\sheaf E\\mathbin{\\otimes} \\sheaf F\\), \\(\\Lambda^k\\sheaf E\\) and \\(\\sheaf E^\\vee\\) with the induced \\({\\sheaf G}\\)-actions. 
Then we have the following isomorphisms of \\(\\O\\)-modules, natural in \\(\\sheaf E\\) and \\(\\sheaf F\\):\n \\begin{alignat*}{7}\n \\sigma\\colon&& {\\sheaf S}\\times_{{\\sheaf G}}(\\sheaf E\\oplus \\sheaf F) &\\cong ({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E) \\oplus ({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf F)\\\\\n \\theta\\colon&& {\\sheaf S}\\times_{{\\sheaf G}}(\\sheaf E\\mathbin{\\otimes} \\sheaf F) &\\cong ({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E) \\mathbin{\\otimes} ({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf F)\\\\\n \\lambda\\colon&& {\\sheaf S}\\times_{{\\sheaf G}}(\\Lambda^k\\sheaf E) &\\cong \\Lambda^k({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E)\\\\\n \\eta\\colon&& {\\sheaf S}\\times_{{\\sheaf G}}(\\sheaf E^\\vee) &\\cong ({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E)^\\vee\n \\end{alignat*}\n \\end{compactenum}\n\\end{lem}\n\n\\begin{proof}\n \\begin{asparaenum}[\\it (i)]\n \\item If we fix \\(s\\) and \\(U\\) as in the proof of \\Cref{lem:local-iso}, then the induced isomorphism \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E_{|U}\\to\\sheaf E_{|U}\\) is functorial for morphisms of \\(\\O\\)-modules with \\(\\O\\)-linear \\({\\sheaf G}\\)-action. The claim follows as exactness of a sequence of sheaves can be checked locally.\n\n \\item When \\({\\sheaf G}\\) acts trivially on \\(\\sheaf E\\), the local isomorphisms of \\Cref{lem:local-iso} do not depend on choices and glue to a global isomorphism.\n\n \\item It is immediate from \\Cref{lem:local-iso} that in each case the two sides are locally isomorphic, but we still need to construct global morphisms between them.\n\n For \\(\\oplus\\) everything is clear.\n\n For \\(\\mathbin{\\otimes}\\) and \\(\\Lambda^k\\), we first note that all constructions involved are compatible with sheafification, in the following sense: let \\(\\mathbin{\\hat\\tensor}\\) and \\(\\smash{\\hat\\Lambda}\\) denote the presheaf tensor product and the presheaf exterior power. 
Then, for arbitrary presheaves \\(\\sheaf E\\) and \\(\\sheaf F\\), the canonical morphisms\n \\begin{align*}\n (\\sheaf E\\mathbin{\\hat\\tensor}\\sheaf F)^+ &\\to (\\sheaf E^+)\\mathbin{\\otimes}(\\sheaf F^+)\\\\\n (\\smash{\\hat\\Lambda}^k\\sheaf E)^+ &\\to \\Lambda^k(\\sheaf E^+)\\\\\n ({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E)^+ &\\to {\\sheaf S}\\times_{{\\sheaf G}}(\\sheaf E^+)\n \\end{align*}\n are isomorphisms. (In the third case, this follows from \\Cref{lem:local-iso}.)\n\n The arguments for \\(\\mathbin{\\otimes}\\) and \\(\\Lambda^k\\) are very similar, so we only discuss the latter functor.\n Let \\(\\hat\\bigoplus\\) denote the (infinite) direct sum in the category of presheaves.\n We first check that the morphism\n \\begin{align*}\n \\textstyle\\hat\\bigoplus_{{\\sheaf S}}(\\smash{\\hat\\Lambda}^k\\sheaf E) \\to \\smash{\\hat\\Lambda}^k(\\textstyle\\hat\\bigoplus_{{\\sheaf S}}\\sheaf E)\n \\end{align*}\n which identifies the summand \\(\\p{s}{\\smash{\\hat\\Lambda}^k\\sheaf E}\\) on the left with \\(\\smash{\\hat\\Lambda}^k(\\p{s}{\\sheaf E})\\) on the right induces a well-defined morphism\n \\begin{align*}\n {\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}(\\smash{\\hat\\Lambda}^k\\sheaf E) \\xrightarrow{\\hat\\lambda} \\smash{\\hat\\Lambda}^k({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E).\n \\end{align*}\n Secondly, we claim that \\(\\hat\\lambda\\) is locally an isomorphism. 
For this, we only need to observe that over any \\(U\\) such that \\({\\sheaf S}(U)\\neq \\emptyset\\), we have a commutative triangle\n \\[\\xymatrix{\n {\\left({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}(\\smash{\\hat\\Lambda}^k\\sheaf E)\\right)(U)}\\ar[dr]_{\\cong} \\ar[rr]^{\\hat\\lambda}&& {\\left(\\smash{\\hat\\Lambda}^k({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E)\\right)(U)}\\ar[dl]^{\\cong}\\\\\n & {(\\smash{\\hat\\Lambda}^k\\sheaf E)(U)}\n }\\]\n where the diagonal arrows are induced by the isomorphisms of \\Cref{lem:local-iso}.\n\n For dualization, one of the sheafification morphisms goes in the wrong direction, so the argument is slightly different. Again, we first construct a morphism of presheaves \\(\\hat\\eta\\colon{\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E^\\vee \\to ({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E)^\\vee\\).\n Over opens \\(U\\) such that \\({\\sheaf S}(U)=\\emptyset\\), the left-hand side is zero, so we take the zero morphism.\n Over opens \\(U\\) with \\({\\sheaf S}(U)\\neq\\emptyset\\), we define\n \\begin{align*}\n ({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E^\\vee)(U) &\\xrightarrow{\\hat\\eta} ({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E)^\\vee(U)\\\\\n \\p{s}{\\phi} \\quad &\\mapsto \\left(\\p{s.g}{v} \\mapsto \\phi(g.v)\\right)\n \\end{align*}\n Over these \\(U\\) with \\({\\sheaf S}(U)\\neq\\emptyset\\), the morphism is in fact an isomorphism. 
To define a local inverse, pick an arbitrary \\(s\\in{\\sheaf S}(U)\\), and send \\(\\psi\\) on the right-hand side to \\(\\p{s}{v\\mapsto \\psi(\\p{s}{v})}\\) on the left.\n\n Finally, given the morphism \\(\\hat\\eta\\), we consider the following square in which \\(\\alpha\\) and \\(\\beta\\) are sheafification morphisms:\n \\[\\xymatrix{\n {{\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E^\\vee} \\ar[r]^{\\hat\\eta}\\ar[d]_{\\alpha}\\ar@{-->}[rd] & {({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\sheaf E)^\\vee} \\\\\n { {\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E^\\vee} \\ar@{-->}[r]_{\\eta} & {({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E)^\\vee} \\ar[u]_{\\beta^\\vee}\n }\\]\n By \\Cref{lem:local-iso}, both \\(\\alpha\\) and \\(\\beta\\) are locally isomorphisms, and it follows that \\(\\beta^\\vee\\) is likewise locally an isomorphism. The diagonal morphism is defined as follows: over any \\(U\\) with \\({\\sheaf S}(U)=\\emptyset\\), it is the zero morphism, and over all other \\(U\\), it is the composition of \\(\\hat\\eta\\) with the (local) inverse of \\(\\beta^\\vee\\). Thus, the dotted diagonal is a factorization of \\(\\hat\\eta\\) over \\(({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E)^\\vee\\). The latter being a sheaf, this factorization must further factor through \\({\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E^\\vee\\). 
We thus obtain the horizontal morphism of sheaves \\(\\eta\\) which, being locally an isomorphism, must be an isomorphism.\\qedhere\n \\end{asparaenum}\n\\end{proof}\n\n\\subsubsection*{Twisting symmetric bundles}\nRecall that a duality functor is a functor between categories with dualities\n\\[F\\colon (\\cat A, \\vee, \\omega) \\to (\\cat B, \\vee, \\omega)\\]\ntogether with a natural isomorphism\n\\(\\eta\\colon F(-^\\vee) \\xrightarrow{\\;\\cong\\;} F(-)^\\vee\\)\nsuch that\n\\[\\xymatrix{\n & {FA} \\ar[dl]_{F\\omega_A} \\ar[dr]^{\\omega_{FA}} & \\\\\n {F({A^\\vee}^\\vee)} \\ar[r]_{\\eta_{A^\\vee}} & {F(A^\\vee)^\\vee} & \\ar[l]^{\\eta_A^\\vee} {{(FA)^\\vee}^\\vee}\n}\\]\ncommutes \\cite{Balmer:Handbook}*{Def.~1.1.15}.\nSuch a functor induces a functor \\(F_\\text{sym}\\) on the category of symmetric spaces over \\(\\cat A\\): it sends a symmetric space \\((A,\\alpha)\\) over \\(\\cat A\\) to the symmetric space \\((FA,\\eta_AF\\alpha)\\) over \\(\\cat B\\). Moreover, we have isometries \\(H(FA)\\cong F_\\text{sym}(HA)\\), where \\(H\\) denotes the hyperbolic functor. We will sometimes simply write \\(F\\) for \\(F_\\text{sym}\\).\n\nNow consider the functor \\({\\sheaf S}\\times_{{\\sheaf G}}-\\). By \\Cref{lem:local-iso}, we can restrict \\({\\sheaf S}\\times_{{\\sheaf G}}-\\) to a functor from \\({\\sheaf G}\\)-equivariant vector bundles to vector bundles over \\(X\\). Moreover, by \\Cref{lem:basic-properties}, we have a natural isomorphism \\(\\eta\\colon {\\sheaf S}\\times_{{\\sheaf G}}(-^\\vee)\\cong({\\sheaf S}\\times_{{\\sheaf G}}(-))^\\vee\\). 
The commutativity of the triangle in the definition of a duality functor is easily checked, so we deduce:\n\\begin{lem}\n \\(({\\sheaf S}\\times_{{\\sheaf G}}-, \\eta)\\) is a duality functor \\({\\sheaf G}\\cat Vec(X) \\to \\cat Vec(X)\\).\n\\end{lem}\nFor any symmetric vector bundle \\((\\sheaf E,\\varepsilon)\\) with an \\(\\O\\)-linear left \\({\\sheaf G}\\)-action, we can now define\n\\[\n{\\sheaf S}\\times_{{\\sheaf G}}(\\sheaf E,\\varepsilon):=({\\sheaf S}\\times_{{\\sheaf G}}-)_\\text{sym} (\\sheaf E,\\varepsilon).\n\\]\nNo compatibility of the \\({\\sheaf G}\\)-action with the symmetric structure is required for this definition, but we will insist in the following that \\({\\sheaf G}\\) acts via isometries:\n\\begin{lem}\\label{lem:local-isometry}\n If \\({\\sheaf G}\\) acts on \\(\\sheaf E\\) via isometries, then \\({\\sheaf S}\\times_{{\\sheaf G}}(\\sheaf E,\\varepsilon)\\) is locally \\emph{isometric} to \\((\\sheaf E,\\varepsilon)\\).\n\\end{lem}\nMore precisely, the local isomorphisms of \\Cref{lem:local-iso} are isometries in this case.\nMoreover, the natural isomorphisms \\(\\sigma\\), \\(\\theta\\) and \\(\\lambda\\) of Lemma~\\ref{lem:basic-properties} respect the symmetric structures:\n\\begin{lem}\\label{lem:basic-isometries}\n For symmetric vector bundles \\((\\sheaf E,\\varepsilon)\\) and \\((\\sheaf F,\\phi)\\) on which \\({\\sheaf G}\\) acts through isometries, \\(\\sigma\\), \\(\\theta\\) and \\(\\lambda\\) respect the induced symmetries.\n\\end{lem}\n\\begin{proof}\n Temporarily writing \\(F\\) for the functor \\({\\sheaf S}\\times_{{\\sheaf G}}-\\), checking the claim for \\(\\theta\\) amounts to the following:\n For symmetric bundles \\((\\sheaf E,\\varepsilon)\\) and \\((\\sheaf F,\\phi)\\), the isomorphism \\(\\theta_{\\sheaf E,\\sheaf F}\\) is an isometry from \\(F(\\sheaf E\\mathbin{\\otimes} \\sheaf F)\\) to \\(F\\sheaf E\\mathbin{\\otimes} F\\sheaf F\\) with respect to the induced symmetries if and only if the outer square of the 
following diagram commutes:\n \\[\\xymatrix{\n {F(\\sheaf E\\mathbin{\\otimes} \\sheaf F)} \\ar[d]^{F(\\varepsilon\\mathbin{\\otimes}\\phi)} \\ar[rr]^{\\theta_{\\sheaf E,\\sheaf F}}\n &&{F\\sheaf E\\mathbin{\\otimes} F\\sheaf F} \\ar[d]^{F\\varepsilon\\mathbin{\\otimes} F\\phi} \\\\\n {F(\\sheaf E^\\vee\\mathbin{\\otimes} \\sheaf F^\\vee)} \\ar[d] \\ar[rr]^{\\theta_{\\sheaf E^\\vee,\\sheaf F^\\vee}}\n &&{F(\\sheaf E^\\vee)\\mathbin{\\otimes} F(\\sheaf F^\\vee)} \\ar[d]^{\\eta_\\sheaf E\\mathbin{\\otimes}\\eta_\\sheaf F} \\\\\n {F((\\sheaf E\\mathbin{\\otimes} \\sheaf F)^\\vee)} \\ar[d]^{\\eta_{\\sheaf E\\mathbin{\\otimes} \\sheaf F}}\n && {(F\\sheaf E)^\\vee\\mathbin{\\otimes} (F\\sheaf F)^\\vee} \\ar[d] \\\\\n {(F(\\sheaf E\\mathbin{\\otimes} \\sheaf F))^\\vee}\n &&\\ar[ll]^{\\theta^\\vee_{\\sheaf E,\\sheaf F}} {(F\\sheaf E\\mathbin{\\otimes} F\\sheaf F)^\\vee}\n }\\]\n\n Similarly, for a symmetric vector bundle \\((\\sheaf E,\\varepsilon)\\), the isomorphism \\(\\lambda_{\\sheaf E}\\) is an isometry from \\(F(\\Lambda^k\\sheaf E)\\) to \\(\\Lambda^k(F\\sheaf E)\\) if and only if the outer square of the following diagram commutes:\n \\[\\xymatrix{\n {F\\Lambda^k\\sheaf E} \\ar[d]^{F\\Lambda^k\\varepsilon} \\ar[rr]^{\\lambda_{\\sheaf E}}\n && {\\Lambda^k F\\sheaf E} \\ar[d]^{\\Lambda^k F\\varepsilon} \\\\\n {F\\Lambda^k(\\sheaf E^\\vee)} \\ar[d] \\ar[rr]^{\\lambda_{\\sheaf E^\\vee}}\n && {\\Lambda^kF(\\sheaf E^\\vee)} \\ar[d]^{\\Lambda^k\\eta_\\sheaf E} \\\\\n {F((\\Lambda^k\\sheaf E)^\\vee)} \\ar[d]^{\\eta_{\\Lambda^k\\sheaf E}}\n && {\\Lambda^k((F\\sheaf E)^\\vee)} \\ar[d] \\\\\n {(F\\Lambda^k\\sheaf E)^\\vee}\n &&\\ar[ll]^{\\lambda^\\vee_{\\sheaf E}} {(\\Lambda^k F\\sheaf E)^\\vee}\n }\\]\n\n In both cases, we already know that the upper square commutes for all \\(\\sheaf E\\) and \\(\\sheaf F\\), by naturality of \\(\\theta\\) and \\(\\lambda\\). So it suffices to verify that the lower square commutes. 
This can be checked locally, and follows easily from the descriptions of \\(\\eta\\), \\(\\theta\\) and \\(\\lambda\\) given in the proof of \\Cref{lem:local-iso}.\n\\end{proof}\n\n\\subsubsection*{Proofs of the statements}\n\\Cref{lem:basic-properties,lem:basic-isometries} immediately imply:\n\\begin{cor}\\label{lem:K2GW2} Let \\(X\\) be a scheme over some field \\(k\\), with structure morphism \\(\\pi\\colon X\\to \\Spec(k)\\). For any algebraic group scheme \\(G\\) over \\(k\\), and for any \\(G\\)-torsor \\({\\sheaf S}\\), the maps\n \\begin{align*}\n \\mathrm K(\\mathcal Rep_kG) &\\to \\mathrm K(X) & \\mathrm{GW}(\\mathcal Rep_kG)&\\to \\mathrm{GW}(X)\\\\\n V &\\mapsto {\\sheaf S}\\times_G \\pi^*V & (V,\\nu) &\\mapsto {\\sheaf S}\\times_{G}\\pi^*(V,\\nu)\n \\end{align*}\n are \\(\\lambda\\)-morphisms.\n\\end{cor}\nThis proves statements (K2) and (GW2).\nIt remains to prove (K3) and (GW3), which we now state in a more detailed form.\n\\begin{prop}\\label{lem:K3}\n Let \\(V_n\\) be the standard representation of \\(\\GL{n}\\).\n \\begin{compactenum}[(a)]\n \\item Any vector bundle \\(\\sheaf E\\) is isomorphic to \\({\\sheaf S}\\times_{\\GL{n}}\\pi^*V_n\\) for some Zariski \\(\\GL{n}\\)-torsor \\({\\sheaf S}\\).\n \\item For any two vector bundles \\(\\sheaf E\\) and \\(\\sheaf F\\), there exists a Zariski \\(\\GL{n}\\times\\GL{m}\\)-torsor \\({\\sheaf S}\\) such that\n \\begin{align*}\n \\sheaf E &\\cong {\\sheaf S}\\times_{\\GL{n}\\times\\GL{m}} \\pi^* V_n\\\\\n \\sheaf F&\\cong {\\sheaf S}\\times_{\\GL{n}\\times\\GL{m}} \\pi^* V_m\n \\end{align*}\n Here \\(\\GL{m}\\) is supposed to act trivially on \\(V_n\\),\\\\\n and \\(\\GL{n}\\) is supposed to act trivially on \\(V_m\\).\n \\end{compactenum}\n\\end{prop}\n\\begin{prop}\\label{lem:GW3}\n Let \\(X\\) be a scheme over a field \\(k\\) of characteristic not two.\n Let \\((V_n,q_n)\\) be the standard 
representation of \\(\\OO{n}\\) over \\(k\\), equipped with its standard symmetric form.\n \\begin{compactenum}[(a)]\n \\item Any symmetric vector bundle \\((\\sheaf E,\\varepsilon)\\) is isomorphic to \\({\\sheaf S}\\times_{\\OO{n}}\\pi^*(V_n,q_n)\\) for some étale \\(\\OO{n}\\)-torsor \\({\\sheaf S}\\).\n \\item For any two symmetric vector bundles \\((\\sheaf E,\\varepsilon)\\) and \\((\\sheaf F,\\phi)\\), there exists an étale \\(\\OO{n}\\times\\OO{m}\\)-torsor \\({\\sheaf S}\\) such that\n \\begin{align*}\n (\\sheaf E,\\varepsilon) &\\cong {\\sheaf S}\\times_{\\OO{n}\\times\\OO{m}} \\pi^*(V_n,q_n)\\\\\n (\\sheaf F,\\phi) &\\cong {\\sheaf S}\\times_{\\OO{n}\\times\\OO{m}} \\pi^*(V_m,q_m)\n \\end{align*}\n Here, \\(\\OO{m}\\) is supposed to act trivially on \\(V_n\\),\\\\\n and \\(\\OO{n}\\) is supposed to act trivially on \\(V_m\\).\n \\end{compactenum}\n\\end{prop}\n\\begin{proof}[Proof of \\Cref{lem:K3}]\\mbox{}\\hfill\n \\begin{asparaenum}[(a)]\n \\item Identify \\(\\pi^*V_n\\) with \\(\\O^{\\oplus n}\\), and let \\({\\sheaf S}\\) be the sheaf of isomorphisms \\({\\sheaf S}:=\\sheaf Iso(\\O^{\\oplus n},\\sheaf E)\\) with \\(\\GL{n} = \\sheaf Aut(\\O^{\\oplus n})\\) acting by precomposition.\n This is a \\(\\GL{n}\\)-torsor as \\(\\sheaf E\\) is locally isomorphic to \\(\\O^{\\oplus n}\\).\n Moreover, we have a well-defined morphism\n \\begin{align*}\n \\mathrm{ev}\\colon {\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\O^{\\oplus n}&\\to \\sheaf E\\\\\n \\p{f}{v}&\\mapsto f(v)\n \\end{align*}\n which is locally an isomorphism:\n for any \\(s\\in\\sheaf Iso(\\O^{\\oplus n}, \\sheaf E)(V)\\), the restriction \\(\\mathrm{ev}_{|V}\\) factors as\n \\[\n ({\\sheaf S}\\mathbin{\\hat\\times}_{{\\sheaf G}}\\O^{\\oplus n})_{|V} \\xrightarrow{\\cong} \\O^{\\oplus n}_{|V} \\xrightarrow[s]{\\cong} \\sheaf E_{|V},\n \\]\n where the first arrow is the isomorphism \\(\\p{f}{v}\\mapsto g_f(v)\\) of \\Cref{lem:local-iso} determined by \\(s\\).\n \\item Suppose 
\\({\\sheaf S}\\) is a \\({\\sheaf G}\\)-torsor, \\({\\sheaf S}'\\) is a \\({\\sheaf G}'\\)-torsor, and \\(\\sheaf E\\) is a sheaf of \\(\\O\\)-modules with \\(\\O\\)-linear actions by both \\({\\sheaf G}\\) and \\({\\sheaf G}'\\). Then if \\({\\sheaf G}'\\) acts trivially,\n \\[ ({\\sheaf S}\\times{\\sheaf S}')\\times_{{\\sheaf G}\\times{\\sheaf G}'}\\sheaf E \\cong {\\sheaf S}\\times_{{\\sheaf G}}\\sheaf E.\\]\n We can therefore take \\({\\sheaf S}:=\\sheaf Iso(\\O^{\\oplus n},\\sheaf E)\\times\\sheaf Iso(\\O^{\\oplus m},\\sheaf F)\\).\n \\qedhere\n \\end{asparaenum}\n\\end{proof}\n\\begin{proof}[Proof of \\Cref{lem:GW3}]\n We have \\(\\OO{n} = \\sheaf Aut(\\O^{\\oplus n},q_n)\\).\n So let \\({\\sheaf S}\\) be the sheaf of isometries \\({\\sheaf S}:=\\sheaf Iso((\\O^{\\oplus n},q_n),(\\sheaf E,\\varepsilon))\\) with \\(\\OO{n}\\) acting by precomposition. This is an \\(\\OO{n}\\)-torsor in the étale topology since any symmetric vector bundle \\((\\sheaf E,\\varepsilon)\\) is étale locally isometric to \\((\\O^{\\oplus n},q_n)\\) (\\mbox{e.\\thinspace{}g.\\ } \\cite{Hornbostel:Representability}*{3.6}).\n The rest of the proof works exactly as in the non-symmetric case.\n\\end{proof}\n\n\\section{(No) splitting principle}\\label{sec:no splitting}\nThe splitting principle in K-theory asserts that any vector bundle behaves like a sum of line bundles. 
There are two incarnations:\n\n{\\bf The algebraic splitting principle: } For any positive element \\(e\\) of a \\(\\lambda\\)-ring with positive structure \\(A\\), there exists an extension of \\(\\lambda\\)-rings with positive structure \\(A\\hookrightarrow A_e\\) such that \\(e\\) splits as a sum of line elements in \\(A_e\\).\n\n{\\bf The geometric splitting principle: } For any vector bundle \\(\\sheaf E\\) over a scheme \\(X\\), there exists an \\(X\\)-scheme \\(\\pi\\colon X_{\\sheaf E}\\to X\\) such that the induced morphism \\(\\pi^*\\colon \\mathrm K(X)\\hookrightarrow \\mathrm K(X_{\\sheaf E})\\) is an extension of \\(\\lambda\\)-rings with positive structure, and such that \\(\\pi^*\\sheaf E\\) splits as a sum of line bundles in \\(\\mathrm K(X_{\\sheaf E})\\).\n\nBoth incarnations are discussed in \\cite{FultonLang}*{I, \\S2}. An {\\bf extension} of a \\(\\lambda\\)-ring with positive structure \\(A\\) is simply an injective \\(\\lambda\\)-morphism to another \\(\\lambda\\)-ring with positive structure \\(A\\hookrightarrow A'\\), compatible with the augmentation and such that \\(A_{\\geq 0}\\) maps to \\(A'_{\\geq 0}\\).\n\n\\subsubsection*{No splitting principle for GW}\nFor \\(\\mathrm{GW}(X)\\), the analogue of the geometric splitting principle fails:\n\\begin{quote}\n Over any field of characteristic not two, there exists a (smooth, projective) scheme \\(X\\) and a symmetric vector bundle \\((\\sheaf E,\\varepsilon)\\) over \\(X\\) such that there exists no \\(X\\)-scheme \\(\\pi\\colon X_{(\\sheaf E,\\varepsilon)}\\to X\\) for which the class of \\(\\pi^*(\\sheaf E,\\varepsilon)\\) in \\(\\mathrm{GW}(X_{(\\sheaf E,\\varepsilon)})\\) splits into a sum of symmetric line bundles.\n\\end{quote}\nThe natural analogue of the algebraic splitting principle could be formulated using the notion 
of a real \\(\\lambda\\)-ring:\n\n\\begin{defn}\nA \\define{real \\(\\lambda\\)-ring} is a \\(\\lambda\\)-ring with positive structure \\(A\\) in which any line element squares to one.\n\\end{defn}\n This property is clearly satisfied by the Grothen\\-dieck-Witt ring \\(\\mathrm{GW}(X)\\) of any scheme \\(X\\).\nHowever, an algebraic splitting principle for real \\(\\lambda\\)-rings fails likewise:\n\\begin{quote}\n There exist a real \\(\\lambda\\)-ring \\(A\\) and a positive element \\(e\\in A\\) that does not split into a sum of line elements in any extension of real \\(\\lambda\\)-rings \\(A\\hookrightarrow A_e\\).\n\\end{quote}\nThe failure of both splitting principles is clear from the following simple counterexample:\n\\begin{lem} Let \\(\\mathbb{P}^2\\) be the projective plane over some field \\(k\\) of characteristic not two. Consider the element \\(e:=H(\\O(1))\\in\\mathrm{GW}(\\mathbb{P}^2)\\). There exists no extension of \\(\\lambda\\)-rings \\(\\mathrm{GW}(\\mathbb{P}^2)\\hookrightarrow A_e\\) such that \\(A_e\\) is real and such that \\(e\\) splits as a sum of line elements in \\(A_e\\).\n\\end{lem}\n\\begin{proof}\n For any element \\(a\\) in a real \\(\\lambda\\)-ring that can be written as a sum of line elements, the Adams operations \\(\\psi^n\\) are given by\n \\[\n \\psi^n(a) = \\begin{cases}\n \\rank(a) &\\text{if \\(n\\) is even,}\\\\\n a &\\text{if \\(n\\) is odd.}\n \\end{cases}\n \\]\n However, for \\(e:= H(\\O(1))\\in\\mathrm{GW}(\\mathbb{P}^2)\\) we have \\(\\psi^2(e) \\neq 2 = \\rank(e)\\),\n so \\(e\\) cannot be a sum of line elements, neither in \\(\\mathrm{GW}(\\mathbb{P}^2)\\) itself nor in any real extension.\n\n (Explicitly, \\(\\mathrm{GW}(\\mathbb{P}^2) = \\pi^*\\mathrm{GW}(k) \\oplus 
\\mathbb{Z} e\\) with \\(e^2=-2\\pi^*\\langle 1, -1 \\rangle + 4e\\) (see \\Cref{eg:Pr} below) and \\(\\lambda^2(e) = \\pi^*\\langle -1 \\rangle\\) (by an explicit calculation). So\n \\[\n \\psi^2(e)\n = e^2 -2\\lambda^2(e)\n = -2\\pi^*\\langle 1, -1 \\rangle + 4e - 2\\pi^*\\langle -1 \\rangle\n = -2\\pi^*\\langle 1, -1, -1 \\rangle + 4e\n ,\n \\]\n which differs from \\(2\\), as claimed.)\n \\end{proof}\n\n\\subsubsection*{A splitting principle for étale cohomology}\nDespite the negative result above, we do have a splitting principle for Stiefel-Whitney classes of symmetric bundles. Let \\(X\\) be any scheme over \\(\\mathbb{Z}[\\frac{1}{2}]\\).\n\\begin{prop}\\label{thm:etale-splitting}\n For any symmetric bundle \\((\\vb E, \\varepsilon)\\) over \\(X\\) there exists a morphism\n \\(\n \\pi\\colon X_{(\\vb E, \\varepsilon)} \\to X\n \\)\n such that \\(\\pi^*(\\vb E, \\varepsilon)\\) splits as an orthogonal sum of symmetric line bundles over \\(X_{(\\vb E,\\varepsilon)}\\) and such that \\(\\pi^*\\) is injective on étale cohomology with \\(\\mathbb{Z}\/2\\)-coefficients.\n\\end{prop}\n\\begin{proof}\n Recall the geometric construction of higher Stiefel-Whitney classes of Delzant and Laborde, as explained for example in \\cite{EKV}*{\\S5}: given a symmetric vector bundle \\((\\sheaf E,\\varepsilon)\\) as above, the key idea is to consider the scheme of non-degenerate one-dimensional subspaces \\(\\pi\\colon \\mathbb{P}_{\\mathit{nd}}(\\sheaf E,\\varepsilon) \\to X\\), \\mbox{i.\\thinspace{}e.\\ } the complement of the quadric in \\(\\mathbb{P}(\\sheaf E)\\) defined by \\(\\varepsilon\\). (This is an algebraic version of the projective bundle associated with a real vector bundle in topology; \\mbox{c.\\thinspace{}f.\\ } \\cite{Me:WCCV}*{Lem.~1.7}.) Let \\(\\O(-1)\\) denote the restriction of the universal line bundle over \\(\\mathbb{P}(\\sheaf E)\\) to \\(\\mathbb{P}_{\\mathit{nd}}(\\sheaf E,\\varepsilon)\\). 
This is a subbundle of \\(\\pi^*\\sheaf E\\), and by construction the restriction of \\(\\pi^*\\varepsilon\\) to \\(\\O(-1)\\) is non-degenerate. Let \\(w\\) be the first Stiefel-Whitney class of this symmetric line bundle \\(\\O(-1)\\). The étale cohomology of \\(\\mathbb{P}_{\\mathit{nd}}(\\sheaf E, \\varepsilon)\\) decomposes as\n\\[\n H^*_{\\mathit{et}}(\\mathbb{P}_{\\mathit{nd}}(\\sheaf E,\\varepsilon),\\mathbb{Z}\/2)=\\bigoplus_{i=0}^{r-1} \\pi^*H^*_{\\mathit{et}}(X,\\mathbb{Z}\/2)\\cdot w^i,\n\\]\nwhere \\(r\\) denotes the rank of \\(\\sheaf E\\), and the higher Stiefel-Whitney classes of \\((\\sheaf E,\\varepsilon)\\) can be defined as the coefficients of the equation expressing \\(w^r\\) as a linear combination of the smaller powers \\(w^i\\) in \\(H^*_{\\mathit{et}}(\\mathbb{P}_{\\mathit{nd}}(\\sheaf E,\\varepsilon),\\mathbb{Z}\/2)\\).\n\nWe only need to note two facts from this construction: Firstly, over \\(\\mathbb{P}_{\\mathit{nd}}(\\sheaf E,\\varepsilon)\\) we have an orthogonal decomposition\n\\[\n \\pi^*(\\sheaf E,\\varepsilon) \\cong (\\O(-1),\\varepsilon') \\perp (\\sheaf E'',\\varepsilon''),\n\\]\nwhere \\(\\sheaf E'' = \\O(-1)^\\perp\\) and \\(\\varepsilon'\\) and \\(\\varepsilon''\\) are the restrictions of \\(\\pi^*\\varepsilon\\). Secondly, \\(\\pi\\) induces a monomorphism from the étale cohomology of \\(X\\) to the étale cohomology of \\(\\mathbb{P}_{\\mathit{nd}}(\\sheaf E,\\varepsilon)\\).\nSo the \\namecref{thm:etale-splitting} is proved by iterating this construction.\n\\end{proof}\n\n\\section{The $\\gamma$-filtration on the Grothendieck-Witt ring}\nFrom now on, we assume that \\(X\\) is connected.\nAs we have seen, \\(\\mathrm{GW}(X)\\) is a pre-\\(\\lambda\\)-ring with positive structure, and we can consider the associated \\(\\gamma\\)-filtration \\(\\gammaF{i} \\mathrm{GW}(X)\\) of \\(\\mathrm{GW}(X)\\). 
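Recall (\\mbox{c.\\thinspace{}f.\\ } \\cite{FultonLang}) that, for \\(n\\geq 1\\), the \\(n\\)th step of the \\(\\gamma\\)-filtration of a pre-\\(\\lambda\\)-ring with positive structure is the ideal generated by the products of \\(\\gamma\\)-operations of total degree at least \\(n\\) applied to elements of rank zero:\n\\[\n \\gammaF{n} := \\left( \\gamma^{i_1}(a_1)\\cdots\\gamma^{i_r}(a_r) \\;\\middle|\\; a_j\\in\\ker(\\rank),\\; \\textstyle\\sum_j i_j\\geq n \\right).\n\\]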
The image of this filtration under the canonical epimorphism \\(\\mathrm{GW}(X)\\twoheadrightarrow \\mathrm W(X)\\) will be denoted \\(\\gammaF{i} \\mathrm W(X)\\). In particular, by definition,\n\\begin{alignat*}{7}\n&\\gammaF{1} \\mathrm{GW}(X) &&= \\clasF{1}\\mathrm{GW}(X) &&:= \\ker(\\rank\\colon \\mathrm{GW}(X)\\to \\mathbb{Z}),\\\\\n&\\gammaF{1} \\mathrm W(X) &&= \\clasF{1}\\mathrm W(X) &&:= \\ker(\\bar\\rank\\colon \\mathrm W(X)\\to \\mathbb{Z}\/2).\n\\end{alignat*}\n\nFor a field, or more generally for a connected semi-local ring \\(R\\), we also write \\(\\mathrm{GI}(R)\\) and \\(\\mathrm I(R)\\) instead of \\(\\clasF{1}\\mathrm{GW}(R)\\) and \\(\\clasF{1}\\mathrm W(R)\\), respectively.\n\n\\subsection{Comparison with the fundamental filtration}\n\\begin{prop}\\label{prop:local-filtration}\n For any connected semi-local commutative ring \\(R\\) in which two is invertible, the \\(\\gamma\\)-filtration on \\(\\mathrm{GW}(R)\\) is the filtration by powers of the augmentation ideal \\(\\mathrm{GI}(R)\\), and the induced filtration on \\(\\mathrm W(R)\\) is the filtration by powers of the fundamental ideal \\(\\mathrm I(R)\\).\n\\end{prop}\n\\begin{proof}\n As we have already noted in the proof of \\Cref{GW-R-special}, all positive elements of the Grothen\\-dieck-Witt ring \\(\\mathrm{GW}(R)\\) can be written as sums of line elements. 
Thus, the claim concerning \\(\\mathrm{GW}(R)\\) is immediate from \\Cref{lem:gamma-filtration-generators}. Moreover, the fundamental filtration on \\(\\mathrm W(R)\\) is the image of the fundamental filtration on \\(\\mathrm{GW}(R)\\).\n\\end{proof}\n\n\\begin{rem}\n In the situation above, the projection \\(\\mathrm{GW}(R)\\to\\mathrm W(R)\\) even induces isomorphisms \\( \\mathrm{GI}^n(R) \\to \\mathrm I^n(R)\\), so that \\(\\graded^i_\\gamma\\mathrm{GW}(R) \\cong \\graded^i_\\gamma\\mathrm W(R)\\) in degrees \\(i>0\\). This fails for general schemes in place of \\(R\\) (see \\Cref{sec:examples}).\n\\end{rem}\n\n\\begin{rem}\n It may seem more natural to define a filtration on \\(\\mathrm{GW}(X)\\) starting with the kernel not of the rank morphism but of the rank reduced modulo two, as for example in \\cite{Auel:Milnor}:\n \\[\n \\mathrm{GI}'(X) := \\ker\\big(\\mathrm{GW}(X)\\to H^0(X,\\mathbb{Z}\/2)\\big)\n \\]\n For connected \\(X\\), \\(\\mathrm{GI}'(X)\\) is isomorphic to a direct sum of \\(\\mathrm{GI}(X)\\) and a copy of \\(\\mathbb{Z}\\) generated by the hyperbolic plane \\(\\mathbb{H}\\).\n In particular, \\(\\mathrm{GI}(X)\\) and \\(\\mathrm{GI}'(X)\\) have the same image in \\(\\mathrm W(X)\\).\n However, even over a field, the filtration by powers of \\(\\mathrm{GI}'\\) does \\emph{not} yield the same graded ring as the filtration by powers of 
(\\(\\mathrm{GI}\\) or) \\(\\mathrm I\\).\n For example, for \\(X=\\Spec(\\mathbb{R})\\), we find:\n \\begin{align*}\n \\factor{\\mathrm{GI}^n(\\mathbb{R})}{\\mathrm{GI}^{n+1}(\\mathbb{R})} &\\cong \\mathbb{Z}\/2 &&\\quad(n>0)\\\\\n \\factor{(\\mathrm{GI}')^n(\\mathbb{R})}{(\\mathrm{GI}')^{n+1}(\\mathbb{R})} &\\cong \\mathbb{Z}\/2 \\oplus \\mathbb{Z}\/2\n \\end{align*}\n It is the filtration by powers of \\(\\mathrm{GI}\\) that yields an associated graded ring isomorphic to \\(H^*_{\\mathit{et}}(\\mathbb{R}, \\mathbb{Z}\/2)\\) in positive degrees, not the filtration by powers of \\(\\mathrm{GI}'\\).\n\\end{rem}\n\n\\subsection{Comparison with the classical filtration}\nA common filtration on the Witt ring of a scheme is given by the kernels of the first two étale Stiefel-Whitney classes \\(w_1\\) and \\(w_2\\) on the Grothen\\-dieck-Witt ring and of the induced classes \\(\\bar w_1\\) and \\(\\bar w_2\\) on the Witt ring:\n\\begin{alignat*}{7}\n &\\clasF{2}\\mathrm{GW}(X) &&:= \\ker\\left(\\clasF{1}\\mathrm{GW}(X) \\xrightarrow{w_1} H^1_\\mathit{et}(X,\\mathbb{Z}\/2)\\right) \\\\\n &\\clasF{2}\\mathrm W(X) &&:= \\ker\\left(\\clasF{1}\\mathrm W(X)\\phantom{G}\\xrightarrow{\\bar w_1} H^1_\\mathit{et}(X,\\mathbb{Z}\/2)\\right) \\\\\\\\\n &\\clasF{3}\\mathrm{GW}(X) &&:= \\ker\\left(\\clasF{2}\\mathrm{GW}(X) \\xrightarrow{w_2} H^2_\\mathit{et}(X,\\mathbb{Z}\/2)\\right) \\\\\n &\\clasF{3}\\mathrm W(X) &&:= \\ker\\left(\\clasF{2}\\mathrm W(X)\\phantom{G}\\xrightarrow{\\bar w_2} H^2_\\mathit{et}(X,\\mathbb{Z}\/2)\/{\\mathrm{Pic}}(X)\\right)\n\\end{alignat*}\n\\begin{prop}\\label{comparison:F2}\n Let \\(X\\) be any connected scheme over a field of characteristic not two (or, more generally, any scheme such that the canonical pre-\\(\\lambda\\)-structure on \\(\\mathrm{GW}(X)\\) is a\n \\(\\lambda\\)-structure). Then:\n \\begin{align*}\n \\gammaF{2} \\mathrm{GW}(X) &= \\clasF{2}\\mathrm{GW}(X) \\\\\n \\gammaF{2} \\mathrm W(X) &= \\clasF{2}\\mathrm W(X)\n \\end{align*}\n\\end{prop}\n\\begin{proof}\n The first identity is a consequence of \\Cref{lem:graded-degree-1}:\n In our case, the group of line elements may be identified with \\(H^1_\\mathit{et}(X,\\mathbb{Z}\/2)\\); then the determinant\n \\(\n \\mathrm{GW}(X) \\to H^1_\\mathit{et}(X,\\mathbb{Z}\/2)\n \\)\n is precisely the first Stiefel-Whitney class \\(w_1\\). In particular, the kernel of the restriction of \\(w_1\\) to \\(\\clasF{1}\\mathrm{GW}(X)\\) is \\(\\gammaF{2} \\mathrm{GW}(X)\\), as claimed.\n For the second identity, it suffices to observe that \\(\\clasF{2}\\mathrm{GW}(X)\\) maps surjectively onto \\(\\clasF{2}\\mathrm W(X)\\).\n\\end{proof}\n\nIn order to analyse the relation of \\(\\gammaF{3} \\mathrm{GW}(X)\\) to \\(\\clasF{3}\\mathrm{GW}(X)\\), we need a few lemmas concerning products of ``reduced line elements'':\n\\begin{lem}\\label{lem:gamma-calculation}\n Let \\(u_1,\\dots, u_l, v_1,\\dots, v_l\\) be line elements in a pre-\\(\\lambda\\)-ring \\(A\\) with positive structure. 
Then\n \\(\\gamma^{k}\\left(\\textstyle\\sum_i(u_i-v_i)\\right) \\)\n can be written as a linear combination of products\n \\[\n (u_{i_1}-1)\\cdots(u_{i_s}-1)(v_{j_1}-1)\\cdots(v_{j_t}-1)\n \\] with \\(s+t=k\\) factors.\n\\end{lem}\n\n\\begin{proof}\n This is easily seen by induction over \\(l\\).\n For \\(l=1\\) and \\(k=0\\) the statement is trivial, while for \\(l=1\\) and \\(k\\geq 1\\) we have\n \\begin{align*}\n \\gamma^{k}(u-v) &=\\gamma^{k}((u-1)+(1-v)) \\\\\n &=\\gamma^{0}(u-1)\\gamma^{k}(1-v) + \\gamma^{1}(u-1)\\gamma^{k-1}(1-v)\\\\\n &=\\pm(v-1)^k \\mp (u-1)(v-1)^{k-1},\n \\end{align*}\n where the second step uses that \\(\\gamma^{i}(u-1) = 0\\) for \\(i\\geq 2\\), \\(u\\) being a line element.\n For the induction step, we observe that every summand in\n \\begin{align*}\n \\gamma^{k}\\left(\\textstyle\\sum_{i=1}^{l+1}(u_i-v_i)\\right) &= \\sum_{j=0}^k\\gamma^{j}\\left(\\textstyle\\sum_{i=1}^l (u_i-v_i)\\right)\\gamma^{k-j}(u_{l+1}-v_{l+1})\n \\end{align*}\n can be written as a linear combination of the required form.\n\\end{proof}\n\n\\begin{lem}\\label{lem:w-calculation}\n Let \\(X\\) be a scheme over \\(\\mathbb{Z}[\\frac{1}{2}]\\), and let \\(u_1, \\dots, u_n \\in \\mathrm{GW}(X)\\) be classes of symmetric line bundles with Stiefel-Whitney classes \\(w_1(u_i)=:\\bar u_i\\). Let \\(\\rho\\) denote the product\n \\[\n \\rho := (u_1-1)\\cdots(u_n-1).\n \\]\n Then \\(w_i(\\rho) = 0\\) for \\(0 < i < 2^{n-1}\\), and\n \\[\n w_{2^{n-1}}(\\rho)\n = \\quad\\prod_{\\mathclap{\\substack{1\\leq i_1 < \\dots < i_k \\leq n\\\\\\text{ with } k\\text{ odd}}}}\\quad ({\\bar u}_{i_1} + \\cdots + {\\bar u}_{i_k})\n = \\quad\\sum_{\\mathclap{\\substack{r_1,\\dots, r_n:\\\\2^{r_1}+\\cdots+2^{r_n}= 2^{n-1}}}}\\quad {\\bar u}_1^{2^{r_1}}\\cdots {\\bar u}_n^{2^{r_n}}.\n \\]\n\\end{lem}\n\n\\begin{proof}\n The lemma generalizes Lemma~3.2\\slash Corollary~3.3 of \\cite{Milnor}. 
The first part of Milnor's proof applies verbatim.\n Consider the evaluation map\n \\[\n \\mathbb{Z}\/2\\llbracket x_1,\\dots,x_n\\rrbracket \\xrightarrow{\\;\\mathrm{ev}\\;} \\textstyle\\prod_i H^i_\\mathit{et}(X,\\mathbb{Z}\/2)\n \\]\n sending \\(x_i\\) to \\(\\bar u_i\\).\n The total Stiefel-Whitney class \\(w(\\rho) = 1 + w_1(\\rho) + w_2(\\rho) + \\dots\\) is the evaluation of the power series\n \\[\n \\omega(x_1,\\dots,x_n) := \\left(\\frac{\\prod_{\\abs{\\vec \\epsilon}\\text{ even }}(1+\\vec\\epsilon\\,\\vec x)}\n {\\prod_{\\abs{\\vec \\epsilon}\\text{ odd }}(1+\\vec\\epsilon\\,\\vec x)}\\right)^{(-1)^n},\n \\]\n where the products range over all \\(\\vec \\epsilon = (\\epsilon_1,\\dots,\\epsilon_n)\\in(\\mathbb{Z}\/2)^n\\) with \\(\\abs{\\vec \\epsilon}:=\\epsilon_1+\\cdots+\\epsilon_n\\) even or odd, and where \\(\\vec\\epsilon\\,\\vec x\\) denotes the sum \\(\\sum_i\\epsilon_i x_i\\).\n As Milnor points out, all factors of \\(\\omega\\) cancel if we substitute \\(x_i=0\\) for some~\\(i\\).\n More generally, all factors cancel whenever we replace a given variable \\(x_i\\) by the sum of an even number of variables \\(x_{i_1} + \\cdots + x_{i_{2l}}\\) all distinct from \\(x_i\\).\n Indeed, consider the substitution \\(x_n = \\vec\\alpha\\,\\vec x\\) with \\(\\abs{\\vec\\alpha}\\) even and \\(\\alpha_n= 0\\). Write \\(\\vec x = (\\vec x', x_n)\\), \\(\\vec \\epsilon= (\\vec\\epsilon',\\epsilon_n)\\) and \\(\\vec \\alpha=(\\vec \\alpha',0)\\), so that the substitution may be rewritten as \\(x_n = \\vec \\alpha'\\,\\vec x'\\). Then\n \\[\n (\\vec \\epsilon', \\epsilon_n)(\\vec x', \\vec\\alpha'\\,\\vec x') = (\\vec \\epsilon'+\\vec \\alpha', \\epsilon_n+1)(\\vec x', \\vec\\alpha'\\,\\vec x'),\n \\]\n but the parities of \\(\\abs{(\\vec\\epsilon',\\epsilon_n)}\\) and \\(\\abs{(\\vec \\epsilon'+\\vec \\alpha',\\epsilon_n+1)}\\) are different. 
Thus, the corresponding factors of \\(\\omega\\) cancel.\n It follows that \\(\\omega-1\\) is divisible by all sums of an odd number of distinct variables \\(x_{i_1} + \\cdots + x_{i_k}\\).\n Therefore,\n \\begin{equation}\\label{eq:w-calculation}\n \\omega = 1 + (\\textstyle\\prod_{\\abs{\\vec\\epsilon}\\text{ odd}} \\vec\\epsilon \\vec x)\\cdot f(\\vec x)\n \\end{equation}\n for some power series \\(f\\).\n In particular, \\(\\omega\\) has no non-zero coefficients in positive total degrees below \\(\\sum_{k \\text{ odd}}\\binom{n}{k} = 2^{n-1}\\), proving the first part of the \\namecref{lem:w-calculation}.\n\n For the second part, we need to show that the constant coefficient of \\(f\\) is \\(1\\).\n This can be seen by considering the substitution \\(x_1=x_2=\\cdots=x_n = x\\) in \\eqref{eq:w-calculation}:\n we obtain\n \\begin{align*}\n \\left(\\frac{1}{(1+x)^K}\\right)^{\\pm 1} &= 1 + x^K f(x,\\dots,x)\n \\intertext{with \\(K = \\sum_{k \\text{ odd}} \\binom{n}{k} = 2^{n-1}\\),\n and as \\((1+x)^K = 1 + x^K \\mod 2\\) for \\(K\\) a power of two,\n this equation can be rewritten as\n }\n (1 + x^K)^{\\mp 1} &= 1 + x^K f(x,\\dots, x).\n \\end{align*}\n The claim follows.\n Finally, the identification of the product expression for \\(w_{2^{n-1}}(\\rho)\\) with a sum is Lemma~2.5 of \\cite{GuillotMinac}.\n It is verified by showing that all factors of the product divide the sum, using similar substitution arguments as above.\n\\end{proof}\n\n\\begin{rem}\n Milnor's proof in the case when \\(X\\) is a field \\(k\\) uses the relation \\(a^{\\scup 2} = [-1]\\scup a\\) in \\(H^2(k,\\mathbb{Z}\/2)\\), which does not hold in general.\n\\end{rem}\n\n\\begin{prop}\\label{comparison:F2F3}\n Let \\(X\\) be a connected scheme over \\(\\mathbb{Z}[\\frac{1}{2}]\\).\n Then\n \\(\n w_i(\\gammaF{n} \\mathrm{GW}(X)) = 0 \\text{ for } 0 < i < 2^{n-1}.\n \\)\n\\end{prop}\n\n\\begin{example}[Surfaces]\\label{eg:surface}\n For \\(X\\) a surface as considered in \\cite{Me:WCS}*{Cor.~3.7\/4.7}, we obtain:\n \\begin{alignat*}{7}\n &\\graded^*_{\\mathit{clas}}\\mathrm{GW}(X)\n &&\\cong \\mathbb{Z} \\oplus H^1_{\\mathit{et}}(X,\\mathbb{Z}\/2) \\oplus H^2_{\\mathit{et}}(X,\\mathbb{Z}\/2) \\oplus {\\mathrm{CH}}^2(X)\\\\\n \\graded^*_\\gamma\\mathrm{W}(X) =\n &\\graded^*_{\\mathit{clas}}\\mathrm{W}(X)\n &&\\cong \\mathbb{Z}\/2\\oplus H^1_\\mathit{et}(X,\\mathbb{Z}\/2) \\oplus H^2_\\mathit{et}(X,\\mathbb{Z}\/2)\/{\\mathrm{Pic}}(X)\n \\end{alignat*}\n However, in general \\(\\gammaF{3}\\mathrm{GW}(X)\\subsetneq \\clasF{3}\\mathrm{GW}(X)={\\mathrm{CH}}^2(X)\\). For a concrete example, consider the product \\(X = C\\times\\mathbb{P}^1\\), where \\(C\\) is any smooth projective curve. In this case\n \\begin{align*}\n \\clasF{3}\\mathrm{GW}(X) &\\cong {\\mathrm{Pic}}(C)\\\\\n \\gammaF{3}\\mathrm{GW}(X) &\\cong {\\mathrm{Pic}}(C)[2] &&(\\text{kernel of multiplication by 2}).\n \\end{align*}\n\\end{example}\n\n\\begin{example}[\\(\\mathbb{P}^r\\)]\\label{eg:Pr}\n Let \\(\\mathbb{P}^r\\) be the \\(r\\)-dimensional projective space over a field~\\(k\\). We first describe its Grothen\\-dieck-Witt ring. Let \\(a := H_0(\\lb O(1) - 1)\\) and \\(\\rho := \\lceil \\frac{r}{2}\\rceil\\). 
Then:\n \\begin{align*}\n \\mathrm{GW}(\\mathbb{P}^r) &\\cong\n \\begin{cases}\n \\mathrm{GW}(k) \\oplus \\mathbb{Z} a \\oplus \\mathbb{Z} a^2 \\oplus \\dots \\oplus\\mathbb{Z} a^{\\rho-1} \\oplus \\mathbb{Z} a^{\\rho}\\phantom{\\big)} &\\text{ if \\(r\\) is even}\\\\\n \\mathrm{GW}(k) \\oplus \\mathbb{Z} a \\oplus \\mathbb{Z} a^2 \\oplus \\dots \\oplus \\mathbb{Z} a^{\\rho-1} \\oplus (\\mathbb{Z}\/2) a^{\\rho} &\\text{ if \\(r \\equiv \\phantom{-}1 \\mod 4\\)}\\\\\n \\mathrm{GW}(k) \\oplus \\mathbb{Z} a \\oplus \\mathbb{Z} a^2 \\oplus \\dots \\oplus \\mathbb{Z} a^{\\rho-1} &\\text{ if \\(r \\equiv -1 \\mod 4\\)}\n \\end{cases}\n \\end{align*}\n The multiplication is determined by the formula \\(\\phi\\cdot a^i = \\rank(\\phi)a^i\\) for \\(\\phi\\in \\mathrm{GW}(k)\\) and \\(i>0\\), and by the vanishing of all higher powers of \\(a\\) (\\mbox{i.\\thinspace{}e.\\ } \\(a^i = 0\\) for all \\(i\\geq\\rho\\) when \\(r\\equiv -1 \\mod 4\\); \\(a^i = 0\\) for all \\(i>\\rho\\) in the other cases).\\footnote{\n Over \\(k=\\mathbb{C}\\), this agrees with the ring structure of \\({\\mathrm{KO}}(\\mathbb{C} P^n)\\) as computed by Sanderson \\cite{Sanderson}*{Thm~3.9}.}\n\n In this description, \\(\\gammaF{i}\\mathrm{GW}(\\mathbb{P}^r)\\) is the ideal generated by \\(\\gammaF{i}\\mathrm{GW}(k)\\) and \\(a^{\\lceil\\frac{i}{2}\\rceil}\\). 
In particular, \\(\\gammaF{3}\\mathrm{GW}(\\mathbb{P}^r)\\) is again strictly smaller than \\(\\clasF{3}\\mathrm{GW}(\\mathbb{P}^r)\\):\n \\begin{align*}\n \\clasF{3}\\mathrm{GW}(\\mathbb{P}^r) &= \\gammaF{3}\\mathrm{GW}(k) + (a^2,2a)\\\\\n \\gammaF{3}\\mathrm{GW}(\\mathbb{P}^r) &= \\gammaF{3}\\mathrm{GW}(k) + (a^2)\n \\end{align*}\n The associated graded ring looks very similar to the ring itself:\n \\begin{align*}\n \\graded_\\gamma^*\\mathrm{GW}(\\mathbb{P}^r) & \\cong\n \\begin{cases}\n \\graded_\\gamma^*\\mathrm{GW}(k) \\oplus \\mathbb{Z} a \\oplus \\mathbb{Z} a^2 \\oplus \\dots \\oplus\\mathbb{Z} a^{\\rho-1} \\oplus \\mathbb{Z} a^{\\rho}\\phantom{\\big)} &\\text{ if \\(r\\) is even}\\\\\n \\graded_\\gamma^*\\mathrm{GW}(k) \\oplus \\mathbb{Z} a \\oplus \\mathbb{Z} a^2 \\oplus \\dots \\oplus \\mathbb{Z} a^{\\rho-1} \\oplus (\\mathbb{Z}\/2) a^{\\rho} &\\text{ if \\(r \\equiv \\phantom{-}1 \\mod 4\\)}\\\\\n \\graded_\\gamma^*\\mathrm{GW}(k) \\oplus \\mathbb{Z} a \\oplus \\mathbb{Z} a^2 \\oplus \\dots \\oplus \\mathbb{Z} a^{\\rho-1} &\\text{ if \\(r \\equiv -1 \\mod 4\\)}\n \\end{cases}\n \\end{align*}\n with \\(a\\) of degree~2. 
In the Witt ring, all the hyperbolic elements \\(a^i\\) vanish, so obviously \\(\\graded_\\gamma^*\\mathrm{W}(\\mathbb{P}^r) \\cong \\graded_\\gamma^*\\mathrm{W}(k)\\).\n\\end{example}\n\n\\begin{example}[\\(\\AA^1\\setminus 0\\)]\\label{eg:punctured-A^1}\n For the punctured affine line over a field \\(k\\), we have\n \\begin{alignat*}{7}\n \\mathrm{GW}(\\AA^1\\setminus 0) &\\;\\cong\\; &\\mathrm{GW}(k) &\\;\\oplus\\;& \\mathrm{W}(k)\\red\\varepsilon& \\\\\n \\gammaF{i}\\mathrm{GW}(\\AA^1\\setminus 0) &\\;\\cong\\; &\\mathrm{GI}^i(k) &\\;\\oplus\\;&\\mathrm{I}^{i-1}(k)\\red\\varepsilon&\n \\end{alignat*}\n for some generator \\(\\red\\varepsilon\\in\\gammaF{1}\\mathrm{GW}(\\AA^1\\setminus 0)\\) satisfying \\(\\red\\varepsilon^2 = -2\\red\\varepsilon\\). 
In this example, \\(\\gammaF{3}\\mathrm{GW}(\\AA^1\\setminus 0) = \\ker(w_2)\\).\n\\end{example}\n\n\\begin{example}[\\(\\AA^{4n+1}\\setminus 0\\)]\\label{eg:punctured-A^d}\n For punctured affine spaces of dimensions \\(d\\equiv 1 \\mod 4\\) with \\(d > 1\\), there is a similar result for the Grothen\\-dieck-Witt group \\cite{BalmerGille}:\n \\[\n \\mathrm{GW}(\\AA^{4n+1}\\setminus 0) \\cong \\mathrm{GW}(k) \\oplus \\mathrm{W}(k)\\red\\varepsilon\n \\]\n for some \\(\\red\\varepsilon\\in \\gammaF{1}\\mathrm{GW}(\\AA^{4n+1}\\setminus 0)\\).\n However, in this case \\(\\red\\varepsilon^2= 0\\), and the \\(\\gamma\\)-filtration is also different from the \\(\\gamma\\)-filtration in the one-dimensional case.\n This is already apparent over the complex numbers, where we find:\n \\[\n \\gammaF{i}\\mathrm{GW}(\\AA_\\mathbb{C}^5\\setminus 0) \\cong \\gammaF{i}\\mathrm{W}(\\AA_\\mathbb{C}^5\\setminus 0) \\cong\n \\left\\{\n \\begin{alignedat}{7}\n \\mathbb{Z}\/2 &\\red\\varepsilon &&\\text{ for } i=1,2\\\\\n 0 & && \\text{ for } i \\geq 3\n \\end{alignedat}\n\\right.\n\\]\n In particular, in this example \\(\\gammaF{3}\\mathrm{W}(X)\\neq \\clasF{3}\\mathrm{W}(X)\\), the latter being non-zero since \\(w_2\\) and \\(\\bar w_2\\) are zero.\n\\end{example}\n\n\\begin{proof}[Calculations for \\Cref{eg:curve} (curve)]\n Consider the summary at the beginning of this section. In dimension~\\(1\\), we have \\(\\topF{2}\\mathrm{K} = 0\\), so \\(\\gammaF{2}\\mathrm{K} = \\ker(c_1) = 0\\). 
Moreover, by \\cite{Me:WCS}*{proof of Cor.~3.7}, \\(w_2\\) is surjective for the curves under consideration, with kernel isomorphic to the kernel of \\(c_1\\). So \\(w_2\\) is an isomorphism. It follows that \\(\\gammaF{3} \\mathrm{GW} = \\gammaF{3} \\mathrm{W} = 0\\) and hence that \\(\\graded_\\gamma^*\\mathrm{GW} = \\graded_\\mathit{clas}^*\\mathrm{GW}\\) and \\(\\graded_\\gamma^*\\mathrm{W} = \\graded_\\mathit{clas}^* \\mathrm{W}\\). These graded groups are computed in [loc.\\ cit., Thm~3.1 and Cor.~3.7].\n\\end{proof}\n\n\\begin{proof}[Calculations for \\Cref{eg:surface} (surface)]\n The classical filtration is computed in \\cite{Me:WCS}*{Cor.~3.7\/4.7}. In the case \\(X=C\\times\\mathbb{P}^1\\), Walter's projective bundle formula \\cite{Walter:PB}*{Thm~1.5} and the results on \\(\\mathrm{GW}^*(C)\\) of \\cite{Me:WCS}*{Thm~2.1\/3.1} yield:\n \\[\n \\mathrm{GW}(X) \\cong\n \\lefteqn{\\overbrace{\\phantom{\\mathbb{Z} \\oplus {\\mathrm{Pic}}(C)[2] \\oplus \\mathbb{Z}\/2}}^{\\pi^*\\mathrm{GW}(C)}}\n \\mathbb{Z}\n \\oplus\n \\underbrace{{\\mathrm{Pic}}(C)[2]}_{H^1_\\mathit{et}(X,\\mathbb{Z}\/2)}\n \\oplus\n \\underbrace{\n \\mathbb{Z}\/2 \\oplus\n \\lefteqn{\\overbrace{\\phantom{\\mathbb{Z}\/2\\oplus{\\mathrm{Pic}}(C)}}^{\\pi^*\\mathrm{GW}^{-1}(C)\\cdot\\Psi}}\n \\mathbb{Z}\/2}_{H^2_\\mathit{et}(X,\\mathbb{Z}\/2)}\n \\oplus\n \\underbrace{{\\mathrm{Pic}}(C)}_{{\\mathrm{CH}}^2(X)}\n \\]\n Here, \\(\\pi\\colon X\\twoheadrightarrow C\\) is the projection and \\(\\Psi\\in\\mathrm{GW}^1(\\mathbb{P}^1)\\) is a generator. 
Writing \\(H_i\\colon \\mathrm{K}\\to\\mathrm{GW}^i\\) for the hyperbolic maps, we can describe the additive generators of \\(\\mathrm{GW}(X)\\) explicitly as follows:\n \\begin{compactitem}[-]\n \\item \\(1\\) (the trivial symmetric line bundle)\n\n \\item \\(a_{\\lb L} := \\pi^*{\\lb L}-1\\), for each symmetric line bundle \\(\\lb L\\) on \\(C\\), \\mbox{i.\\thinspace{}e.\\ } for each \\(\\lb L\\in{\\mathrm{Pic}}(C)[2]\\)\n \\item \\(b := H_0(\\pi^*\\lb L_1-1)\\), where \\(\\lb L_1\\) is a line bundle of degree~1 on \\(C\\) (hence a generator of the free summand of \\({\\mathrm{Pic}}(C)\\))\n \\item \\(c := H_{-1}(1)\\cdot \\Psi = H_0(F\\Psi)\\); here \\(F\\Psi = \\lb O(-1) - 1\\) with \\(\\lb O(-1)\\) the pullback of the canonical line bundle on \\(\\mathbb{P}^1\\)\n \\item \\(d_{\\lb N} := H_{-1}(\\pi^*\\lb N - 1) \\cdot \\Psi = H_0((\\pi^*\\lb N - 1)\\cdot F\\Psi)\\), for each \\(\\lb N\\in {\\mathrm{Pic}}(C)\\).\n \\end{compactitem}\n In this list, the generators appear in the same order as the direct summands of \\(\\mathrm{GW}(X)\\) that they generate appear in the formula above.\n An alternative set of generators is obtained by replacing the generators \\(d_{\\lb N}\\) by the following generators:\n \\begin{align*}\n d'_{\\lb N}\n &:= H_0(\\pi^*\\lb N\\otimes \\lb O(-1) - 1)\\\\\n &\\,=\\begin{cases}\n d_{\\lb N} + c &\\text{ if \\(\\lb N\\) is of even degree}\\\\\n d_{\\lb N} + b + c &\\text{ if \\(\\lb N\\) is of odd degree}\n \\end{cases}\n \\end{align*}\n The only non-trivial products of the alternative generators are\n \\(a_{\\lb L}c = a_{\\lb L}d'_{\\lb N} = d_{\\lb L}' + c \\; (= d_{\\lb L})\\).\n Moreover, the effects of the operations \\(\\gamma^{i}\\) on the alternative generators are immediate from \\Cref{lem:gamma-of-H-line} below. 
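Explicitly (as the reader may check against \\Cref{lem:gamma-of-H-line}; recall that \\(\\gamma^{1}\\) is the identity): since \\(b\\), \\(c\\) and \\(d'_{\\lb N}\\) are each of the form \\(H_0(\\lb M - 1)\\) for a line bundle \\(\\lb M\\), we have\n \\[\n \\gamma^{2}(x) = -x \\quad\\text{and}\\quad \\gamma^{i}(x) = 0 \\text{ for } i > 2, \\qquad x \\in \\{b, c, d'_{\\lb N}\\},\n \\]\n while \\(\\gamma^{i}(a_{\\lb L}) = 0\\) for all \\(i \\geq 2\\), since \\(a_{\\lb L}\\) is a line element minus one.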
So \\Cref{lem:gamma-filtration-generators} tells us that \\(\\gammaF{3}\\mathrm{GW}\\) has additive generators\n \\[\n \\gamma^{1}(a_{\\lb L})\\cdot \\gamma^{2}(c)\n = a_{\\lb L}\\cdot (-c)\n = d_{\\lb L}\n \\]\n with \\(\\lb L\\in {\\mathrm{Pic}}(C)[2]\\). Thus, \\(\\gammaF{3}\\mathrm{GW}(X) \\cong {\\mathrm{Pic}}(C)[2]\\), viewed as subgroup of the last summand in the formula above. We also find that \\(\\gammaF{4}\\mathrm{GW}(X) = 0\\).\n\\end{proof}\n\n\\begin{proof}[Calculation of the ring structure on \\(\\mathrm{GW}(\\mathbb{P}^r)\\) (\\Cref{eg:Pr})]\\mbox{}\\\\\n By \\cite{Walter:PB}*{Thms~1.1 and 1.5}, the Grothen\\-dieck-Witt ring of projective space can be additively described as\n \\[\n \\mathrm{GW}(\\mathbb{P}^r) =\n \\begin{cases}\n \\mathrm{GW}(k) \\oplus \\mathbb{Z} a_1 \\oplus \\dots \\oplus \\mathbb{Z} a_{\\rho} & \\text{ if \\(r\\) is even}\\\\\n \\mathrm{GW}(k) \\oplus \\mathbb{Z} a_1 \\oplus \\dots \\oplus \\mathbb{Z} a_{\\rho-1} \\oplus (\\mathbb{Z}\/2) H_0(F\\Psi) & \\text{ if \\(r \\equiv \\phantom{-}1 \\mod 4\\)}\\\\\n \\mathrm{GW}(k) \\oplus \\mathbb{Z} a_1 \\oplus \\dots \\oplus \\mathbb{Z} a_{\\rho-1} & \\text{ if \\(r \\equiv -1 \\mod 4\\)}\n \\end{cases}\n \\]\n where \\(a_i = H_0(\\lb O(i) - 1)\\) and \\(\\Psi\\) is a certain element in \\(\\mathrm{GW}^r(\\mathbb{P}^r)\\). 
Moreover, by tracing through Walter's computations, we find that\n \\begin{equation}\\label{eq:Pr-FPsi-formula}\n H_0(F\\Psi) = -\\sum_{j=1}^{\\rho}(-1)^j\\binom{r+1}{\\rho-j}a_j.\n \\end{equation}\n Indeed, we see from the proof of \\cite{Walter:PB}*{Thm~1.5} that \\(F\\Psi = \\lb O^{\\oplus N} - \\lambda^\\rho(\\Omega)(\\rho)\\) in \\(\\mathrm{K}(\\mathbb{P}^r)\\), where \\(\\Omega\\) is the cotangent bundle of \\(\\mathbb{P}^r\\) and \\(N\\) is such that the virtual rank of this element is zero.\n\n\n The short exact sequence \\(0\\to \\Omega \\to \\lb O^{\\oplus (r+1)}(-1) \\to \\lb O \\to 0\\) over \\(\\mathbb{P}^r\\) implies that\n \\[\n \\lambda^\\rho(\\Omega) = \\lambda^\\rho(\\lb O^{\\oplus(r+1)}(-1) - 1) \\text{ in } \\mathrm{K}(\\mathbb{P}^r),\n \\]\n from which \\eqref{eq:Pr-FPsi-formula} follows by a short computation.\n\n An element \\(\\Psi \\in \\mathrm{GW}^{r}(\\mathbb{P}^r)\\) also exists in the case \\(r\\equiv -1\\mod 4\\), and \\eqref{eq:Pr-FPsi-formula} is likewise valid in this case. However, in this case, we see from Karoubi's exact sequence\n \\[\n \\mathrm{GW}^{-1}(\\mathbb{P}^r) \\xrightarrow{F} \\mathrm{K}(\\mathbb{P}^r) \\xrightarrow{H_0} \\mathrm{GW}^0(\\mathbb{P}^r)\n \\]\n that \\(H_0(F\\Psi) = 0\\). 
We can thus rewrite the above result for the Grothen\\-dieck-Witt group as\n \\[\n \\mathrm{GW}(\\mathbb{P}^r) =\n \\begin{cases}\n \\phantom{\\big(}\\mathrm{GW}(k) \\oplus \\mathbb{Z} a_1 \\oplus \\dots \\oplus \\mathbb{Z} a_{\\rho}\\phantom{\\big)} & \\text{ if \\(r\\) is even}\\\\\n \\big(\\mathrm{GW}(k) \\oplus \\mathbb{Z} a_1 \\oplus \\dots \\oplus \\mathbb{Z} a_{\\rho-1} \\oplus \\mathbb{Z} a_{\\rho}\\big)\\big\/2h_r & \\text{ if \\(r \\equiv \\phantom{-}1 \\mod 4\\)}\\\\\n \\big(\\mathrm{GW}(k) \\oplus \\mathbb{Z} a_1 \\oplus \\dots \\oplus \\mathbb{Z} a_{\\rho-1} \\oplus \\mathbb{Z} a_{\\rho}\\big)\\big\/h_r & \\text{ if \\(r \\equiv -1 \\mod 4\\)}\n \\end{cases}\n \\]\n with \\( h_r := \\sum_{j=1}^{\\rho}(-1)^j\\binom{r+1}{\\rho-j}a_j\\).\n\n To see that we can alternatively use powers of \\(a := a_1\\) as generators, it suffices to observe that for all \\(k\\geq 1\\),\n \\begin{align}\n \\label{eq:Pn-a_k}\n a_k &= a^k + \\left(\\ctext{4cm}{a linear combination of \\(a, a^2, \\dots, a^{k-1}\\)}\\right),\n \\intertext{\\ignorespaces\n which follows inductively from the recursive relation\n }\n \\label{eq:Pn-recursive}\n a_k &= (a + 2) a_{k-1} - a_{k-2} + 2a\n \\end{align}\n for all \\(k\\geq 2\\). (\\(a_0 := 0\\).)\n\n Next, we show that \\(a^k = 0\\) for all \\(k > \\rho\\).\n Let \\(x := \\sheaf O(1)\\), viewed as an element of \\(\\mathrm{K}(\\mathbb{P}^r)\\). 
The relation \\((x-1)^{r+1} = 0\\) in \\(\\mathrm{K}(\\mathbb{P}^r)\\) implies that\n \\[\n (x-1) + (x^{-1}-1) = \\sum_{i=2}^r (-1)^i(x-1)^i,\n \\]\n so that we can compute:\n \\begin{align*}\n a^k\n = [H(x-1)]^k\n &= H\\left( \\left[FH(x-1)\\right]^{k-1} (x-1) \\right)\\\\\n &= H\\left( \\left[(x-1) + (x^{-1} - 1)\\right]^{k-1} (x-1) \\right)\\\\\n &= H\\left( (x-1)^{2k-1} + \\ctext{5cm}{higher order terms in \\((x-1)\\)} \\right)\\\\\n &= 0 \\quad \\text{for \\(2k-1 > r\\), or, equivalently, for \\(k > \\rho\\).}\n \\end{align*}\n \\Cref{eq:Pn-a_k} also allows us to rewrite \\(h_r\\) in terms of the powers of \\(a\\). Inductively, we find that \\(h_r = (-a)^{\\rho}\\) for all odd \\(r\\), where \\(\\rho = \\lceil \\frac{r}{2}\\rceil\\).\n\\end{proof}\n\n\\begin{proof}[Calculation of the \\(\\gamma\\)-filtration on \\(\\mathrm{GW}(\\mathbb{P}^r)\\) (\\Cref{eg:Pr}, continued)]\\mbox{}\\\\\n We claimed above that \\(\\gammaF{i}\\mathrm{GW}(\\mathbb{P}^r)\\) is the ideal generated by \\(\\gammaF{i}\\mathrm{GW}(k)\\) and \\(a^{\\lceil\\frac{i}{2}\\rceil}\\). Equivalently, it is the subgroup generated by \\(\\gammaF{i}\\mathrm{GW}(k)\\) and by all powers \\(a^j\\) with \\(j \\geq \\frac{i}{2}\\).\n To verify the claim, we note that by \\Cref{lem:gamma-of-H-line} below, we have \\(\\gamma_i(a_j) = \\pm a_j\\) for \\(i = 1, 2\\), while for all \\(i > 2\\) we have \\(\\gamma_i(a_j) = 0\\).\n In particular, \\(a=a_1 \\in\\gammaF{2}\\mathrm{GW}(\\mathbb{P}^r)\\), and therefore \\(a^j\\in\\gammaF{2j}\\mathrm{GW}(\\mathbb{P}^r)\\). This shows that all the above named additive generators indeed lie in \\(\\gammaF{i}\\mathrm{GW}(\\mathbb{P}^r)\\). 
For the converse inclusion, we note that by \\Cref{lem:gamma-filtration-generators}, \\(\\gammaF{i}\\mathrm{GW}(\\mathbb{P}^r)\\) is additively generated by \\( \\gammaF{i}\\mathrm{GW}(k)\\) and by all finite products of the form\n \\[\n \\prod_j \\gamma^{i_j}(a_{\\alpha_j})\n \\]\n with \\(\\sum_j i_j\\geq i\\). Such a product is non-zero only if \\(i_j\\in\\{0,1,2\\}\\) for all \\(j\\), in which case it is of the form\n \\(\n \\pm \\prod_j a_{\\alpha_j}\n \\)\n with at least \\(\\frac{i}{2}\\) non-trivial factors. By \\eqref{eq:Pn-a_k}, each non-trivial factor \\(a_{\\alpha_j}\\) can be expressed as a non-zero polynomial in \\(a\\) with no constant term. Thus, the product itself can be rewritten as a linear combination of powers \\(a^j\\) with \\(j\\geq\\frac{i}{2}\\).\n\\end{proof}\n\n\\begin{proof}[Calculations for \\Cref{eg:punctured-A^1} \\((\\AA^1\\setminus 0)\\)]\\mbox{}\\\\\n The Witt group of the punctured affine line has the form\n \\(\n \\mathrm{W}(\\AA^1\\setminus 0) \\cong \\mathrm{W}(k) \\oplus \\mathrm{W}(k)\\varepsilon,\n \\)\n where \\(\\varepsilon = (\\lb O, t)\\), the trivial line bundle with the symmetric form given by multiplication with the standard coordinate (\\mbox{e.\\thinspace{}g.\\ } \\cite{BalmerGille}). 
It follows that\n \\[\n \\mathrm{GW}(\\AA^1\\setminus 0) \\cong \\mathrm{GW}(k) \\oplus \\mathrm{W}(k)\\red\\varepsilon,\n \\]\n where \\(\\red\\varepsilon := \\varepsilon - 1\\).\n As for any symmetric line bundle, \\(\\varepsilon^2 = 1\\) in the Grothen\\-dieck-Witt ring; equivalently, \\(\\red\\varepsilon^2 = -2\\red\\varepsilon\\).\n To compute the \\(\\gamma\\)-filtration, we need only observe that \\(\\mathrm{GW}(\\AA^1\\setminus 0)\\) is generated by line elements. So\n \\begin{align*}\n \\gammaF{i}\\mathrm{GW}(\\AA^1\\setminus 0)\n &= \\left(\\gammaF{1}\\mathrm{GW}(\\AA^1\\setminus 0)\\right)^i\\\\\n &= \\left(\\mathrm{GI}(k) \\oplus \\mathrm{W}(k)\\red\\varepsilon\\right)^i\\\\\n &= \\mathrm{GI}^i(k) \\oplus \\mathrm{I}^{i-1}(k)\\red\\varepsilon\n \\end{align*}\n\n The \\'etale cohomology of \\(\\AA^1\\setminus 0\\) has the form\n \\[\n H^*_\\mathit{et}(\\AA^1\\setminus 0,\\mathbb{Z}\/2) \\cong H^*_\\mathit{et}(k,\\mathbb{Z}\/2) \\oplus H^*_\\mathit{et}(k,\\mathbb{Z}\/2)w_1\\varepsilon.\n \\]\n Recall that when we write \\(\\ker(w_1)\\) and \\(\\ker(w_2)\\), we necessarily mean the kernels of the restrictions of \\(w_1\\) and \\(w_2\\) to \\(\\ker(\\rank)\\) and \\(\\ker(w_1)\\), respectively.\n An arbitrary element of \\(\\mathrm{GW}(\\AA^1\\setminus 0)\\) can be written as \\(x + y\\red\\varepsilon\\) with \\(x, y\\in \\mathrm{GW}(k)\\).\n For such an element, we have \\(w_1(x + y\\red\\varepsilon) = w_1x + \\rank(y) w_1\\varepsilon\\), so the general fact that \\(\\ker(w_1) = \\gammaF{2}\\mathrm{GW}\\) is consistent 
with our computation.\n When \\(\\rank(y) = 0\\), we further find that\n \\[\n w_2(x + y\\red\\varepsilon) = w_2 x + w_1 y \\scup w_1 \\varepsilon,\n \\]\n proving the claim that \\(\\ker(w_2) = \\gammaF{3}\\mathrm{GW}\\) in this example.\n\\end{proof}\n\n\\begin{proof}[Calculations for \\Cref{eg:punctured-A^d} \\((\\AA^{4n+1}\\setminus 0)\\)]\\mbox{}\\\\\n Balmer and Gille show in \\cite{BalmerGille} that for \\(d = 4n+1\\) we have \\(\\mathrm{W}(\\AA^d\\setminus 0)\\cong \\mathrm{W}(k)\\oplus \\mathrm{W}(k)\\varepsilon\\) for some symmetric space \\(\\varepsilon\\) of even rank \\(r\\) such that \\(\\varepsilon^2 = 0\\) in the Witt ring. Let \\(\\red\\varepsilon := \\varepsilon-\\tfrac{r}{2}\\mathbb{H}\\). Then\n \\[\n \\mathrm{GW}(\\AA^d\\setminus 0)\\cong \\mathrm{GW}(k) \\oplus \\mathrm{W}(k)\\red\\varepsilon\n \\]\n with \\(\\red\\varepsilon^2 = 0\\). As the K-ring of \\(\\AA^d\\setminus 0\\) is trivial, \\mbox{i.\\thinspace{}e.\\ } isomorphic to \\(\\mathbb{Z}\\) via the rank homomorphism, \\(\\gammaF{i}\\mathrm{GW}(\\AA^d\\setminus 0)\\) maps isomorphically to \\(\\gammaF{i}\\mathrm{W}(\\AA^d\\setminus 0)\\) for all \\(i>0\\).\n We now switch to the complex numbers.\n Equipped with the analytic topology, \\(\\AA^{4n+1}_\\mathbb{C}\\setminus 0\\) is homotopy equivalent to the sphere \\(S^{8n+1}\\), so we have a comparison map \\(\\mathrm{GW}(\\AA^{4n+1}_\\mathbb{C}\\setminus 0)\\to{\\mathrm{KO}}(S^{8n+1})\\). 
As the \\(\\lambda\\)-ring structures on both sides are defined via exterior powers, this is clearly a map of \\(\\lambda\\)-rings.\n In fact, it is an isomorphism, as we see by comparing the localization sequences for \\(\\AA^d_\\mathbb{C} \\setminus 0 \\mbox{\\(\\lefteqn{\\;\\;\\circ}\\hookrightarrow\\)} \\AA^d_\\mathbb{C} \\mbox{\\(\\lefteqn{\\;\\;\\;\\shortmid}\\hookleftarrow\\)} \\{0\\}\\), as in the proof of \\cite{Me:WCCV}*{Thm~2.5}.\n The \\(\\lambda\\)-ring structure on \\({\\mathrm{KO}}(S^{8n+1})\\) can be deduced from \\cite{Adams:Spheres}*{Thm~7.4}: As a special case, the theorem asserts that the projection \\(\\mathbb{R}\\mathbb{P}^{8n+1}\\twoheadrightarrow \\mathbb{R}\\mathbb{P}^{8n+1}\/\\mathbb{R}\\mathbb{P}^{8n}\\simeq S^{8n+1}\\) induces the following map in \\({\\mathrm{KO}}\\)-theory.\n \\[\\xymatrix@C=0pt@R=6pt{\n {{\\mathrm{KO}}(S^{8n+1})} \\ar@{^{(}->}[d] \\ar@{}[r]|{\\cong}\n & {\\factor{\\mathbb{Z}[\\red\\varepsilon]}{(2\\red\\varepsilon, \\red\\varepsilon^2)}}\n &&&& {\\red\\varepsilon} \\ar@{|->}[d]\n \\\\\n {{\\mathrm{KO}}(\\mathbb{R}\\mathbb{P}^{8n+1})} \\ar@{}[r]|{\\cong}\n & {\\quad\\quad\\quad\\factor{\\mathbb{Z}[\\red\\lambda]}{(2^f\\red\\lambda,\\red\\lambda^2 + 2\\red\\lambda)}}\n &&&& {2^{f-1}\\red\\lambda}\n }\\]\n Here, \\(\\lambda\\) is the canonical line bundle over the real projective space, \\(\\red\\lambda := \\lambda-1\\), and \\(f\\) is some integer.\n Thus, \\(\\gamma_t(2^{f-1}\\red\\lambda) = (1 + \\red\\lambda t)^{2^{f-1}}\\) and we find that \\(\\gamma_i(\\red\\varepsilon) = c_i\\red\\varepsilon\\) for \\(c_i := \\tbinom{2^{f-1}}{i}2^{i-f}\\).\n Note that \\(c_i\\) is indeed an integer: by Kummer's theorem on binomial coefficients, the exponent of two in \\(\\binom{2^{f-1}}{i}\\) is at least \\(f-1-k\\), where \\(k\\) is the largest integer with \\(2^k \\leq i\\).\n In fact, modulo two we have \\(c_2 \\equiv 1\\) and \\(c_i \\equiv 0\\) for all \\(i > 2\\). 
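For instance, the claim for \\(c_2\\) is a direct computation (assuming \\(f \\geq 2\\)):\n \\[\n c_2 = \\tbinom{2^{f-1}}{2}\\,2^{2-f} = 2^{f-2}\\left(2^{f-1}-1\\right)2^{2-f} = 2^{f-1}-1 \\equiv 1 \\mod 2.\n \\]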
So the \\(\\gamma\\)-filtration is as described.\n\\end{proof}\n\nFinally, here is the lemma referred to multiple times above.\n\\begin{lem}\\label{lem:gamma-of-H-line}\n Let \\(\\sheaf L\\) be a line bundle over a scheme \\(X\\) over \\(\\mathbb{Z}[\\frac{1}{2}]\\). Then\n \\begin{align*}\n \\gamma_2(H(\\sheaf L - 1)) &= -H(\\sheaf L -1)\n \\end{align*}\n and \\( \\gamma_i(H(\\sheaf L - 1)) = 0 \\) in \\(\\mathrm{GW}(X)\\) for all \\(i > 2\\).\n\\end{lem}\n\\begin{proof}\nLet us write \\(\\lambda_t(x) = 1 + xt + \\lambda^2(x)t^2 + \\dots \\) for the total \\(\\lambda\\)-operation, and similarly for \\(\\gamma_t(x)\\). Then \\(\\lambda_t(x+y) = \\lambda_t(x)\\lambda_t(y)\\), \\(\\gamma_t(x+y)=\\gamma_t(x)\\gamma_t(y)\\), and \\(\\gamma_t(x) = \\lambda_{\\frac{t}{1-t}}(x)\\). Let \\(a:= H(\\sheaf L - 1)\\). From\n \\begin{align*}\n \\lambda_t(a)\n &= \\frac{\\lambda_t(H\\sheaf L)}{\\lambda_t (H1)}\n = \\frac{1 + (H\\sheaf L)t + \\det(H\\sheaf L)t^2}{1 + (H1)t + \\det(H1)t^2}\n = \\frac{1 + (H\\sheaf L)t + \\langle -1 \\rangle t^2}{1 + (H1)t + {\\langle -1 \\rangle} t^2}\n \\intertext{we deduce that}\n \\gamma_t(a)\n &= \\frac{1 + (H\\sheaf L - 2)t + (1 + {\\langle -1 \\rangle} - H\\sheaf L)t^2}{1 + (H1 - 2)t + (1 + {\\langle -1 \\rangle} - H1)t^2} \\\\\n &= \\frac{1 + (H\\sheaf L - 2)t - H(\\sheaf L - 1)t^2}{1 + (H1-2)t}\\\\\n &= [1 + (H\\sheaf L - 2)t - H(\\sheaf L - 1)t^2] \\cdot \\sum_{i\\geq 0}(2 - H1)^i t^i.\n \\end{align*}\n Here, the penultimate step uses that \\(H1 \\cong 1 + {\\langle -1 \\rangle}\\) when two is invertible.\n\n In order to proceed, we observe that \\(H1\\cdot Hx = H(FH1\\cdot x) = 2 Hx\\) for any \\(x\\in \\mathrm{GW}(X)\\).\n It follows that\n \\(\n (2 - H1)^i = 2^{i-1} (2 - H1)\n \\)\n and hence that\n \\[\n [1 + (H\\sheaf L - 2)t - H(\\sheaf L - 1)t^2] \\cdot (2 - H1)^it^i\n = 2^{i-1}(2-H1) (1-2t)t^i\n \\]\n for all \\(i \\geq 1\\).\n This implies that the above 
expression for \\(\\gamma_t(a)\\) simplifies to \\(1 + H(\\sheaf L - 1)t - H(\\sheaf L -1)t^2\\), as claimed.\n\\end{proof}\n\n\\begin{bibdiv}\n \\renewcommand*{\\MR}[1]{\\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{\\tiny{\\sffamily ~~[MR#1]}}}\n \\newcommand*{\\arxiv}[1]{\\href{http:\/\/arxiv.org\/abs\/#1}{arXiv:#1}}\n \\begin{biblist}\n \\bib{Adams:Spheres}{article}{\n author={Adams, J. F.},\n title={Vector fields on spheres},\n journal={Ann. of Math. (2)},\n volume={75},\n date={1962},\n pages={603--632},\n issn={0003-486X},\n review={\\MR{0139178}}\n }\n \\bib{AtiyahTall}{article}{\n author={Atiyah, Michael F.},\n author={Tall, David O.},\n title={Group representations, $\\lambda $-rings and the $J$-homomorphism},\n journal={Topology},\n volume={8},\n date={1969},\n pages={253--297},\n \n review={\\MR{0244387}}\n }\n \\bib{Auel:Milnor}{article}{\n author={Auel, Asher},\n title={Remarks on the Milnor conjecture over schemes},\n conference={\n title={Galois-Teichm\\\"uller theory and arithmetic geometry},\n },\n book={\n series={Adv. Stud. Pure Math.},\n volume={63},\n publisher={Math. Soc. Japan, Tokyo},\n },\n date={2012},\n pages={1--30},\n review={\\MR{3051237}},\n }\n \\bib{Balmer:Nilpotence}{article}{\n author={Balmer, Paul},\n title={Vanishing and nilpotence of locally trivial symmetric spaces over\n regular schemes},\n journal={Comment. Math. Helv.},\n volume={78},\n date={2003},\n number={1},\n pages={101--115},\n issn={0010-2571},\n review={\\MR{1966753}}\n }\n \\bib{Balmer:Handbook}{article}{\n author={Balmer, Paul},\n title={Witt groups},\n conference={\n title={Handbook of $K$-theory. Vol. 1, 2},\n },\n book={\n publisher={Springer},\n place={Berlin},\n },\n date={2005},\n pages={539--576},\n review={\\MR{2181829}}\n \n }\n \\bib{BalmerGille}{article}{\n author={Balmer, Paul},\n author={Gille, Stefan},\n title={Koszul complexes and symmetric forms over the punctured affine\n space},\n journal={Proc. London Math. Soc. 
(3)},\n volume={91},\n date={2005},\n number={2},\n pages={273--299},\n \n review={\\MR{2167088}}\n \n }\n \\bib{BalmerWalter}{article}{\n author={Balmer, Paul},\n author={Walter, Charles},\n title={A Gersten-Witt spectral sequence for regular schemes},\n language={English, with English and French summaries},\n journal={Ann. Sci. \\'Ecole Norm. Sup. (4)},\n volume={35},\n date={2002},\n number={1},\n pages={127--152},\n \n review={\\MR{1886007}}\n \n }\n \\bib{Baeza}{book}{\n author={Baeza, Ricardo},\n title={Quadratic forms over semilocal rings},\n series={Lecture Notes in Mathematics, Vol. 655},\n publisher={Springer-Verlag},\n place={Berlin},\n date={1978},\n pages={vi+199},\n \n review={\\MR{0491773}}\n \n }\n \\bib{Borger:BasicI}{article}{\n author={Borger, James},\n title={The basic geometry of Witt vectors, I: The affine case},\n journal={Algebra Number Theory},\n volume={5},\n date={2011},\n number={2},\n pages={231--285},\n issn={1937-0652},\n review={\\MR{2833791}},\n doi={10.2140\/ant.2011.5.231},\n }\n \\bib{Borger:Positivity}{article}{\n author={Borger, James},\n title={Witt vectors, semirings, and total positivity},\n date={2013},\n note={\\arxiv{1310.3013}},\n }\n \\bib{Bourbaki:Algebre}{book}{\n author={Bourbaki, N.},\n title={\\'El\\'ements de math\\'ematique. Alg\\`ebre. 
Chapitres 1 \\`a 3},\n publisher={Hermann},\n place={Paris},\n date={1970},\n review={\\MR{0274237}}\n }\n \\bib{Clauwens}{article}{\n author={Clauwens, Franciscus Johannes Baptist Jozef},\n title={The nilpotence degree of torsion elements in lambda-rings},\n note={\\arxiv{1004.0829}},\n date={2010},\n }\n \\bib{Eisenbud}{book}{\n author={Eisenbud, David},\n title={Commutative algebra},\n series={Graduate Texts in Mathematics},\n volume={150},\n publisher={Springer-Verlag},\n place={New York},\n date={1995},\n pages={xvi+785},\n review={\\MR{1322960}}\n }\n \\bib{EKV}{article}{\n author={Esnault, H{\\'e}l{\\`e}ne},\n author={Kahn, Bruno},\n author={Viehweg, Eckart},\n title={Coverings with odd ramification and Stiefel-Whitney classes},\n journal={J. Reine Angew. Math.},\n volume={441},\n date={1993},\n pages={145--188},\n }\n \\bib{Fernandez}{article}{\n author={Fern{\\'a}ndez-Carmena, Fernando},\n title={The Witt group of a smooth complex surface},\n journal={Math. Ann.},\n volume={277},\n date={1987},\n number={3},\n pages={469--481},\n }\n \\bib{Fulton:Intersection}{book}{\n author={Fulton, William},\n title={Intersection theory},\n series={Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge.\n A Series of Modern Surveys in Mathematics},\n \n volume={2},\n edition={2},\n publisher={Springer-Verlag},\n place={Berlin},\n date={1998},\n \n \n \n review={\\MR{1644323}}\n \n }\n \\bib{FultonLang}{book}{\n author={Fulton, William},\n author={Lang, Serge},\n title={Riemann-Roch algebra},\n series={Grundlehren der Mathematischen Wissenschaften},\n volume={277},\n publisher={Springer-Verlag},\n place={New York},\n date={1985},\n pages={x+203},\n review={\\MR{801033}}\n }\n \\bib{FunkHoobler}{article}{\n author={Funk, Jeanne M.},\n author={Hoobler, Raymond T.},\n title={The Witt ring of a curve with good reduction over a non-dyadic local field},\n journal={J. 
Algebra},\n volume={422},\n date={2015},\n pages={648--659},\n review={\\MR{3272094}},\n }\n \\bib{Gille:HomotopyInvariance}{article}{\n author={Gille, Stefan},\n title={Homotopy invariance of coherent Witt groups},\n journal={Math. Z.},\n volume={244},\n date={2003},\n number={2},\n pages={211--233},\n issn={0025-5874},\n review={\\MR{1992537}}\n }\n \\bib{GuillotMinac}{article}{\n author={Guillot, Pierre},\n author={Min{\\'a}{\\v{c}}, J{\\'a}n},\n title={Milnor $K$-theory and the graded representation ring},\n journal={J. K-Theory},\n volume={13},\n date={2014},\n number={3},\n pages={447--480},\n \n \n \n }\n \\bib{Hesselholt:Big}{article}{\n author={Hesselholt, Lars},\n title={The big de Rham-Witt complex},\n date={2010},\n note={\\arxiv{1006.3125v2}},\n }\n \\bib{Hornbostel:Representability}{article}{\n author={Hornbostel, Jens},\n title={$A^1$-representability of Hermitian $K$-theory and Witt groups},\n journal={Topology},\n volume={44},\n date={2005},\n number={3},\n pages={661--687},\n issn={0040-9383},\n review={\\MR{2122220}}\n \n }\n \\bib{Hornbostel:nilpotence}{article}{\n author={Hornbostel, Jens},\n title={Some comments on motivic nilpotence},\n date={2016},\n note={\\arxiv{1511.07292}},\n }\n \\bib{KerzMS}{article}{\n author={Kerz, Moritz},\n author={M{\\\"u}ller-Stach, Stefan},\n title={The Milnor-Chow homomorphism revisited},\n journal={$K$-Theory},\n volume={38},\n date={2007},\n number={1},\n pages={49--58},\n issn={0920-3036},\n review={\\MR{2353863}}\n }\n \\bib{KRW72}{article}{\n author={Knebusch, Manfred},\n author={Rosenberg, Alex},\n author={Ware, Roger},\n title={Structure of Witt rings and quotients of Abelian group rings},\n journal={Amer. J. Math.},\n volume={94},\n date={1972},\n pages={119--155},\n issn={0002-9327},\n review={\\MR{0296103}}\n }\n \\bib{Lam}{book}{\n author={Lam, T. 
Y.},\n title={Introduction to quadratic forms over fields},\n series={Graduate Studies in Mathematics},\n volume={67},\n publisher={American Mathematical Society, Providence, RI},\n date={2005},\n pages={xxii+550},\n isbn={0-8218-1095-2},\n review={\\MR{2104929}}\n }\n \\bib{McGarraghy:exterior}{article}{\n author={McGarraghy, Se{\\'a}n},\n title={Exterior powers of symmetric bilinear forms},\n journal={Algebra Colloq.},\n volume={9},\n date={2002},\n number={2},\n pages={197--218},\n issn={1005-3867},\n review={\\MR{1901274}}\n }\n \\bib{Milnor}{article}{\n author={Milnor, John},\n title={Algebraic $K$-theory and quadratic forms},\n journal={Invent. Math.},\n volume={9},\n date={1969\/1970},\n pages={318--344},\n issn={0020-9910},\n review={\\MR{0260844}}\n }\n \\bib{Ojanguren}{article}{\n author={Ojanguren, Manuel},\n title={Quadratic forms over regular rings},\n journal={J. Indian Math. Soc. (N.S.)},\n volume={44},\n date={1980},\n number={1-4},\n pages={109--116 (1982)},\n issn={0019-5839},\n review={\\MR{752647}}\n }\n \\bib{OjangurenPanin}{article}{\n author={Ojanguren, Manuel},\n author={Panin, Ivan},\n title={A purity theorem for the Witt group},\n language={English, with English and French summaries},\n journal={Ann. Sci. \\'Ecole Norm. Sup. (4)},\n volume={32},\n date={1999},\n number={1},\n pages={71--86},\n issn={0012-9593},\n review={\\MR{1670591}},\n doi={10.1016\/S0012-9593(99)80009-3},\n }\n \\bib{OVV:Milnor}{article}{\n author={Orlov, D.},\n author={Vishik, A.},\n author={Voevodsky, V.},\n title={An exact sequence for $K^M_\\ast\/2$ with applications to\n quadratic forms},\n journal={Ann. of Math. (2)},\n volume={165},\n date={2007},\n number={1},\n pages={1--13},\n issn={0003-486X},\n review={\\MR{2276765}},\n doi={10.4007\/annals.2007.165.1},\n }\n \\bib{Sanderson}{article}{\n author={Sanderson, B. J.},\n title={Immersions and embeddings of projective spaces},\n journal={Proc. London Math. Soc. 
(3)},\n volume={14},\n date={1964},\n pages={137--153},\n issn={0024-6115},\n review={\\MR{0165532}},\n }\n \\bib{SGA6}{book}{\n label={SGA6},\n title={Th\\'eorie des intersections et th\\'eor\\`eme de Riemann-Roch},\n series={Lecture Notes in Mathematics, Vol. 225},\n note={S\\'eminaire de G\\'eom\\'etrie Alg\\'ebrique du Bois-Marie 1966--1967\n (SGA 6)\n },\n publisher={Springer-Verlag},\n place={Berlin},\n date={1971},\n pages={xii+700},\n review={\\MR{0354655}}\n }\n \\bib{Serre}{article}{\n author={Serre, Jean-Pierre},\n title={Groupes de Grothendieck des sch\\'emas en groupes r\\'eductifs d\\'eploy\\'es},\n journal={Inst. Hautes \\'Etudes Sci. Publ. Math.},\n number={34},\n date={1968},\n pages={37--52},\n issn={0073-8301},\n review={\\MR{0231831}}\n }\n \\bib{Totaro:Witt}{article}{\n author={Totaro, Burt},\n title={Non-injectivity of the map from the Witt group of a variety to the Witt group of its function field},\n journal={J. Inst. Math. Jussieu},\n volume={2},\n date={2003},\n number={3},\n pages={483--493},\n issn={1474-7480},\n }\n \\bib{Voevodsky:Milnor}{article}{\n author={Voevodsky, Vladimir},\n title={Motivic cohomology with ${\\bf Z}\/2$-coefficients},\n journal={Publ. Math. Inst. Hautes \\'Etudes Sci.},\n number={98},\n date={2003},\n pages={59--104},\n issn={0073-8301},\n review={\\MR{2031199}}\n }\n \\bib{Walter:PB}{article}{\n author={Walter, Charles},\n title={Grothendieck-Witt groups of projective bundles},\n note={Preprint},\n eprint={www.math.uiuc.edu\/K-theory\/0644\/},\n date={2003},\n }\n \\bib{Weibel:KH}{article}{\n author={Weibel, Charles A.},\n title={Homotopy algebraic $K$-theory},\n conference={\n title={Algebraic $K$-theory and algebraic number theory (Honolulu, HI,\n 1987)},\n },\n book={\n series={Contemp. Math.},\n volume={83},\n publisher={Amer. Math. 
Soc., Providence, RI},\n },\n date={1989},\n pages={461--488},\n review={\\MR{991991}}\n }\n \\bib{Xie}{article}{\n author={Xie, Heng},\n title={An application of Hermitian $K$-theory: sums-of-squares formulas},\n journal={Doc. Math.},\n volume={19},\n date={2014},\n pages={195--208},\n issn={1431-0635},\n review={\\MR{3178250}},\n }\n \\bib{Me:WCCV}{article}{\n author={Zibrowius, Marcus},\n title={Witt groups of complex cellular varieties},\n journal={Documenta Math.},\n number={16},\n date={2011},\n pages={465--511},\n issn={1431-0635},\n review={\\MR{2823367}},\n }\n \\bib{Me:WCS}{article}{\n author={Zibrowius, Marcus},\n title={Witt groups of curves and surfaces},\n journal={Math. Zeits.},\n volume={278},\n number={1--2},\n date={2014},\n pages={191--227},\n issn={0025-5874},\n review={\\MR{3267576}},\n \n }\n \\bib{Me:LambdaReps}{article}{\n author={Zibrowius, Marcus},\n title={Symmetric representation rings are $\\lambda$-rings},\n journal={New York J. Math.},\n volume={21},\n date={2015},\n pages={1055--1092},\n issn={1076-9803},\n review={\\MR{3425635}},\n }\n \\bib{Me:App}{article}{\n author={Zibrowius, Marcus},\n title={Nilpotence in Milnor-Witt K-Theory},\n note={Appendix to \\cite{Hornbostel:nilpotence}},\n date={2016},\n }\n \\end{biblist}\n\\end{bibdiv}\n\\end{document}\n