{"text":"\n\\chapter{Hardware}\n\\label{app:hardware}\n\nThis appendix gives an overview of the processors used throughout this work and\ntheir relevant properties.\n\nNote that, while the single-threaded peak performance is, where appropriate,\nbased on the processors' maximum turbo frequency, the multi-threaded peak\nperformance is instead computed from the base frequency. Furthermore, we only\nlist the vector instructions that allow to reach a processor's theoretical peak\nperformance.\n\n\\section{\\hwstyle Harpertown E5450}\n\\label{hardware:E5450}\n\n\\href{http:\/\/ark.intel.com\/products\/33083\/Intel-Xeon-Processor-E5450-12M-Cache-3_00-GHz-1333-MHz-FSB}{\\nolinkurl{http:\/\/ark.intel.com\/products\/33083\/Intel-Xeon-Processor-E5450-}\\\\\\nolinkurl{12M-Cache-3_00-GHz-1333-MHz-FSB}}\n\nOur {\\namestyle Harpertown E5450}s were part of our compute cluster. Because\nthey were disposed of in mid~2016, they are only used in a part of this work's\nperformance analyses.\n\n\\begin{hwtable}\n Name &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5450 \\\\\n Codename &\\namestyle Harpertown \\\\\n Lithography &\\SI{45}{\\nano\\meter} \\\\\n Release &Q4 2007 \\\\\n Cores \/ Threads &4 \/ 4 \\\\\n Base Frequency &\\SI{3.00}{\\giga\\hertz} \\\\\n Peak Performance &\\SI{12}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n &\\SI{48}{\\giga\\flops\\per\\second} (all cores) \\\\\n Peak Bandwidth &\\SI{10.6}{\\giga\\byte\\per\\second} \\\\\n L2~cache &\\SI6{\\mebi\\byte} {\\em per 2~cores}, 24-way set associative\\\\\n L1d~cache &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n Vector Instructions &1 SSE FMUL + 1 SSE FADD per cycle \\\\\\nopagebreak\n &$= \\SI4{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\section{\\hwstyle Sandy Bridge-EP E5-2670}\n\\label{hardware:E5-2670}\n\n\\href{http:\/\/ark.intel.com\/products\/64595\/Intel-Xeon-Processor-E5-2670-20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI}{\\nolinkurl{http:\/\/ark.intel.com\/products\/64595\/Intel-Xeon-Processor-E5-2670-}\\\\\\nolinkurl{20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI}}\n\nOur {\\namestyle Sandy Bridge E5-2680 v2}s are part of our compute cluster.\n\\intel{} \\turboboost is disabled on these machines unless otherwise stated.\n\n\\begin{hwtable}\n Name &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5-2670 \\\\\n Codename &\\namestyle Sandy Bridge-EP \\\\\n Lithography &\\SI{32}{\\nano\\meter} \\\\\n Release &Q1 2012 \\\\\n Cores \/ Threads &8 \/ 16 \\\\\n Base Frequency &\\SI{2.60}{\\giga\\hertz} \\\\\n Max Turbo Frequency &\\SI{3.30}{\\giga\\hertz} ({\\em disabled unless otherwise stated})\\\\\n Peak Performance &\\SI{20.8}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n &\\SI{166.4}{\\giga\\flops\\per\\second} (all cores) \\\\\n Peak Bandwidth &\\SI{51.2}{\\giga\\byte\\per\\second} \\\\\n L3~cache &\\SI{20}{\\mebi\\byte} shared, 20-way set associative \\\\\n L2~cache &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n L1d~cache &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n Vector Instructions &1 AVX FMUL + 1 AVX FADD per cycle \\\\\\nopagebreak\n &$= \\SI8{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\n\\section{\\hwstyle Ivy Bridge-EP E5-2680 v2}\n\\label{hardware:E5-2680 
\n\n\\href{http:\/\/ark.intel.com\/products\/75277\/Intel-Xeon-Processor-E5-2680-v2-25M-Cache-2_80-GHz}{\\nolinkurl{http:\/\/ark.intel.com\/products\/75277\/Intel-Xeon-Processor-E5-2680-}\\\\\\nolinkurl{v2-25M-Cache-2_80-GHz}}\n\nOur {\\namestyle Ivy Bridge-EP E5-2680 v2}s are part of our compute cluster.\n\n\\begin{hwtable}\n Name &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5-2680 v2\\\\\n Codename &\\namestyle Ivy Bridge-EP \\\\\n Lithography &\\SI{22}{\\nano\\meter} \\\\\n Release &Q3 2013 \\\\\n Cores \/ Threads &10 \/ 20 \\\\\n Base Frequency &\\SI{2.80}{\\giga\\hertz} \\\\\n Max Turbo Frequency &\\SI{3.60}{\\giga\\hertz} \\\\\n Peak Performance &\\SI{28.8}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n &\\SI{224}{\\giga\\flops\\per\\second} (all cores) \\\\\n Peak Bandwidth &\\SI{59.7}{\\giga\\byte\\per\\second} \\\\\n L3~cache &\\SI{25}{\\mebi\\byte} shared, 20-way set associative \\\\\n L2~cache &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n L1d~cache &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n Vector Instructions &1 AVX FMUL + 1 AVX FADD per cycle \\\\\\nopagebreak\n &$= \\SI8{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\section{\\hwstyle Haswell-EP E5-2680 v3}\n\\label{hardware:E5-2680 v3}\n\n\\href{http:\/\/ark.intel.com\/products\/81908\/Intel-Xeon-Processor-E5-2680-v3-30M-Cache-2_50-GHz}{\\nolinkurl{http:\/\/ark.intel.com\/products\/81908\/Intel-Xeon-Processor-E5-2680-}\\\\\\nolinkurl{v3-30M-Cache-2_50-GHz}}\n\nOur {\\namestyle Haswell-EP E5-2680 v3}s are part of our compute cluster.\n\n\\begin{hwtable}\n Name &\\namestyle Intel\\textsuperscript\\textregistered{} Xeon\\textsuperscript\\textregistered{} Processor E5-2680 v3\\\\\n Codename &\\namestyle Haswell-EP \\\\\n Lithography &\\SI{22}{\\nano\\meter} \\\\\n Release &Q3 2014 \\\\\n Cores \/ Threads &12 \/ 24 \\\\\n Base Frequency &\\SI{2.50}{\\giga\\hertz} \\\\\n Max Turbo Frequency &\\SI{3.30}{\\giga\\hertz} \\\\\n Peak Performance &\\SI{52.8}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n &\\SI{480}{\\giga\\flops\\per\\second} (all cores) \\\\\n Peak Bandwidth &\\SI{68}{\\giga\\byte\\per\\second} \\\\\n L3~cache &\\SI{30}{\\mebi\\byte} shared, 20-way set associative \\\\\n L2~cache &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n L1d~cache &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n Vector Instructions &2 AVX FMA per cycle \\\\\\nopagebreak\n &$= \\SI{16}{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\section{\\hwstyle Broadwell i7-5557U}\n\\label{hardware:i7-5557U}\n\n\\href{https:\/\/ark.intel.com\/products\/84993\/Intel-Core-i7-5557U-Processor-4M-Cache-up-to-3_40-GHz}{\\nolinkurl{https:\/\/ark.intel.com\/products\/84993\/Intel-Core-i7-5557U-}\\\\\\nolinkurl{Processor-4M-Cache-up-to-3_40-GHz}}\n\nOur {\\namestyle Broadwell i7-5557U} is part of a {\\namestyle MacBook Pro}.\n\n\\begin{hwtable}\n Name &\\namestyle Intel\\textsuperscript\\textregistered{} Core\\texttrademark{} i7-5557U Processor \\\\\n Codename &\\namestyle Broadwell-U \\\\\n Lithography &\\SI{14}{\\nano\\meter} \\\\\n Release &Q1 2015 \\\\\n Cores \/ Threads &2 \/ 4 \\\\\n Base Frequency &\\SI{3.10}{\\giga\\hertz} \\\\\n Max Turbo Frequency &\\SI{3.40}{\\giga\\hertz} \\\\\n Peak Performance &\\SI{54.4}{\\giga\\flops\\per\\second} (single-threaded) \\\\\\nopagebreak\n &\\SI{99.2}{\\giga\\flops\\per\\second} (all cores) \\\\\n Peak Bandwidth &\\SI{25.6}{\\giga\\byte\\per\\second} \\\\\n 
L3~cache &\\SI4{\\mebi\\byte} shared, 16-way set associative \\\\\n L2~cache &\\SI{256}{\\kibi\\byte} per core, 8-way set associative \\\\\n L1d~cache &\\SI{32}{\\kibi\\byte} per core, 8-way set-associative \\\\\n Vector Instructions &2 AVX FMA per cycle \\\\\\nopagebreak\n &$= \\SI{16}{\\flops\\per\\cycle}$ \\\\\n\\end{hwtable}\n\n\\subsection{\\blasl1}\n\n\\routinedoc{dcopy,\n arguments={\n n=dimension $n$,\n x=vector $\\dv x \\in \\R^n$,\n incx=increment for \\dv x,\n y=vector $\\dv y \\in \\R^n$,\n incy=increment for \\dv y\n },\n description={double-precision vector copy},\n operations={$\\dv y \\coloneqq \\dv x$},\n flops=0,\n datavol=$2 n$,\n datamov=$2 n$,\n}\n\n\\routinedoc{dswap,\n arguments={\n n=dimension $n$,\n x=vector $\\dv x \\in \\R^n$,\n incx=increment for \\dv x,\n y=vector $\\dv y \\in \\R^n$,\n incy=increment for \\dv y\n },\n description={double-precision vector swap},\n operations={${\\dv x, \\dv y \\coloneqq \\dv y, \\dv x}$},\n flops=0,\n datavol=$2 n$,\n datamov=$4 n$,\n}\n\n\\routinedoc{daxpy,\n arguments={\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n x=vector $\\dv x \\in \\R^n$,\n incx=increment for \\dv x,\n y=vector $\\dv y \\in \\R^n$,\n incy=increment for \\dv y\n },\n description={double-precision scaled vector addition},\n operations={$\\dv y \\coloneqq \\alpha \\dv x + \\dv y$},\n flops=$2 n$,\n datavol=$2 n$,\n datamov=$3 n$,\n}\n\n\\routinedoc{ddot,\n arguments={\n n=dimension $n$,\n x=vector $\\dv x \\in \\R^n$,\n incx=increment for \\dv x,\n y=vector $\\dv y \\in \\R^n$,\n incy=increment for \\dv y\n },\n description={double-precision inner vector product},\n operations={${\\alpha \\coloneqq \\dm[height=0, ']x \\dv y}$},\n flops=$2 n$,\n datavol=$2 n$,\n datamov=$2 n$,\n}\n\n\n\\subsection{\\blasl2}\n\n\\routinedoc{dgemv,\n arguments={\n trans=\\dm A is transposed,\n m=dimension $m$,\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n A=matrix $\\dm A \\in \\R^{m \\times n}$,\n ldA=leading dimension for \\dm A,\n x={vector $\\dv x \\in \\begin{cases}\n \\R^n &\\text{if } \\code{trans} = \\code N\\\\\n \\R^m &\\text{else}\n \\end{cases}$},\n incx=increment for \\dv x,\n beta=scalar $\\beta$,\n y={vector $\\dv y \\in \\begin{cases}\n \\R^m &\\text{if } \\code{trans} = \\code N\\\\\n \\R^n &\\text{else}\n \\end{cases}$},\n incy=increment for \\dv y\n },\n description={double-precision matrix-vector product},\n operations={\n {$\\dv y \\coloneqq \\alpha \\dm A \\matmatsep \\dv x + \\beta\\dv y$},\n {$\\dv y \\coloneqq \\alpha \\dm[']A \\dv x + \\beta\\dv y$}\n },\n flops=$2 m n$,\n datavol={$\\begin{array}{ll}\n m n + m &\\text{if } \\code{trans} = \\code N\\\\\n m n + n &\\text{else}\n \\end{array}$},\n datamov={$\\begin{array}{ll}\n m n + 2 m &\\text{if } \\code{trans} = \\code N\\\\\n m n + 2 n &\\text{else}\n \\end{array}$},\n}\n\n\\routinedoc{dger,\n arguments={\n m=dimension $m$,\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n x=vector $\\dv x \\in \\R^m$,\n incx=increment for \\dv x,\n y=vector $\\dv y \\in \\R^n$,\n incy=increment for \\dv y,\n A=matrix $\\dm A \\in \\R^{m \\times n}$,\n ldA=leading dimension for \\dm A\n },\n description={double-precision vector outer product},\n operations={${\\dm A \\coloneqq \\alpha \\dv x \\dm[height=0, ']y + \\dm A}$},\n flops=$2 m n$,\n datavol=$m n + m + n$,\n datamov=$2 m n + m + n$,\n}\n\n\\routinedoc{dtrsv,\n arguments={\n uplo=\\dm[lower]A is lower- or upper-triangular,\n trans=\\dm[lower]A is transposed,\n diag=\\dm[lower]A is unit triangular,\n n=dimension $n$,\n A=matrix $\\dm[lower]A \\in 
\\R^{n \\times n}$,\n ldA=leading dimension for \\dm[lower]A,\n x=vector $\\dv x \\in \\R^n$,\n incx=increment for \\dv x\n },\n description={double-precision triangular linear system solve},\n operations={\n {$\\dv x \\coloneqq \\dm[lower, inv]A \\dv x$},\n {$\\dv x \\coloneqq \\dm[lower, inv']A \\dv x$}\n },\n flops=$n^2$,\n datavol={$\\frac12 n (n + 1) + n$},\n datamov={$\\frac12 n (n + 1) + 2 n$}\n}\n\n\n\\subsection{\\blasl3}\n\n\\routinedoc{dgemm,\n arguments={\n transA=\\dm A is transposed,\n transB=\\dm B is transposed,\n m=dimension $m$,\n n=dimension $n$,\n k=dimension $k$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm A \\in \\begin{cases}\n \\R^{m \\times k} &\\text{if } \\code{transA} = \\code N\\\\\n \\R^{k \\times m} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm A,\n B={matrix $\\dm B \\in \\begin{cases}\n \\R^{k \\times n} &\\text{if } \\code{transB} = \\code N\\\\\n \\R^{n \\times k} &\\text{else}\n \\end{cases}$},\n ldB=leading dimension for \\dm B,\n beta=scalar $\\beta$,\n C={matrix $\\dm C \\in \\R^{m \\times n}$},\n ldC=leading dimension for \\dm C\n },\n description={double-precision matrix-matrix product},\n operations={\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm B + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm[']B + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm[']A \\matmatsep \\dm B + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm[']A \\matmatsep \\dm[']B + \\beta \\dm C$}\n },\n flops=$2 m n k$,\n datavol=$m k + k n + m n$,\n datamov=$m k + k n + 2 m n$,\n}
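\n\nTo make the calling convention concrete, the following C~sketch calls the\n\\dgemm documented above; it assumes the widespread convention of linking\nagainst the Fortran symbol \\code{dgemm\\_} with all arguments passed by\nreference (the exact symbol name is platform dependent).\n\\begin{lstlisting}[language=C]\n\/\/ Fortran-style prototype of dgemm: all arguments by reference.\nvoid dgemm_(const char *transA, const char *transB,\n            const int *m, const int *n, const int *k,\n            const double *alpha, const double *A, const int *ldA,\n            const double *B, const int *ldB,\n            const double *beta, double *C, const int *ldC);\n\nint main(void) {\n    int m = 2, n = 2, k = 2;\n    double alpha = 1.0, beta = 1.0;\n    double A[4] = {1, 2, 3, 4};  \/\/ column-major 2 x 2 operands\n    double B[4] = {5, 6, 7, 8};\n    double C[4] = {0, 0, 0, 0};\n    \/\/ C := alpha A B + beta C  (transA = transB = \"N\")\n    dgemm_(\"N\", \"N\", &m, &n, &k, &alpha, A, &m, B, &k, &beta, C, &m);\n    return 0;\n}\n\\end{lstlisting}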
\n\n\\routinedoc{dsymm,\n arguments={\n side=\\dm A is on the left or right of \\dm B,\n uplo=\\dm A is in lower- or upper-triangular storage,\n m=dimension $m$,\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm A \\in \\begin{cases}\n \\R^{m \\times m} &\\text{if } \\code{side} = \\code L\\\\\n \\R^{n \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm A,\n B={matrix $\\dm B \\in \\R^{m \\times n}$},\n ldB=leading dimension for \\dm B,\n beta=scalar $\\beta$,\n C={matrix $\\dm C \\in \\R^{m \\times n}$},\n ldC=leading dimension for \\dm C\n },\n description={double-precision symmetric matrix-matrix product},\n operations={\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm B + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm B \\matmatsep \\dm A + \\beta \\dm C$}\n },\n flops={$\\begin{array}{ll}\n 2 m^2 n &\\text{if } \\code{side} = \\code L\\\\\n 2 m n^2 &\\text{else}\n \\end{array}$},\n datavol={$\\begin{array}{ll}\n \\frac12 m (m + 1) + 2 m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + 2 m n &\\text{else}\n \\end{array}$},\n datamov={$\\begin{array}{ll}\n \\frac12 m (m + 1) + 3 m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + 3 m n &\\text{else}\n \\end{array}$},\n}\n\n\\routinedoc{dtrmm,\n arguments={\n side=\\dm[lower]A is on the left or right of \\dm B,\n uplo=\\dm[lower]A is lower- or upper-triangular,\n transA=\\dm[lower]A is transposed,\n diag=\\dm[lower]A is unit triangular,\n m=dimension $m$,\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm[lower]A \\in \\begin{cases}\n \\R^{m \\times m} &\\text{if } \\code{side} = \\code L\\\\\n \\R^{n \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm[lower]A,\n B={matrix $\\dm B \\in \\R^{m \\times n}$},\n ldB=leading dimension for \\dm B\n },\n description={double-precision triangular matrix-matrix product},\n operations={\n {$\\dm B \\coloneqq \\alpha \\dm[lower]A \\matmatsep \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[lower, ']A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[upper]A \\matmatsep \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[upper, ']A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower]A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower, ']A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper]A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper, ']A$}\n },\n flops={$\\begin{array}{ll}\n m^2 n &\\text{if } \\code{side} = \\code L\\\\\n m n^2 &\\text{else}\n \\end{array}$},\n datavol={$\\begin{array}{ll}\n \\frac12 m (m + 1) + m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + m n &\\text{else}\n \\end{array}$},\n datamov={$\\begin{array}{ll}\n \\frac12 m (m + 1) + 2 m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + 2 m n &\\text{else}\n \\end{array}$},\n}\n\n\\routinedocforward\\dsyrk{ssyrk,\n arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=},\n description={single-precision symmetric rank-k update},\n}\n\n\\routinedoc{dsyrk,\n arguments={\n uplo=\\dm C has lower- or upper-triangular storage,\n trans=\\dm A is transposed,\n n=dimension $n$,\n k=dimension $k$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm A \\in \\begin{cases}\n \\R^{n \\times k} &\\text{if } \\code{trans} = \\code N\\\\\n \\R^{k \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm A,\n beta=scalar $\\beta$,\n C={symmetric matrix $\\dm C \\in \\R^{n \\times n}$},\n ldC=leading dimension for \\dm C\n },\n description={double-precision symmetric rank-k update},\n operations={\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm[']A + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm[']A \\dm A + \\beta \\dm C$},\n },\n flops={$n (n + 1) k$},\n datavol={$\\frac12 n (n + 1) + n k$},\n datamov={$n (n + 1) + n k$},\n}\n\n\\routinedocforward\\dsyrk{cherk,\n arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=},\n description={single-precision complex Hermitian rank-k update},\n}\n\n\\routinedocforward\\dsyrk{zherk,\n arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=},\n description={double-precision complex Hermitian rank-k update},\n}\n\n\\routinedoc{dsyr2k,\n arguments={\n uplo=\\dm C has lower- or upper-triangular storage,\n trans=\\dm A is transposed,\n n=dimension $n$,\n k=dimension $k$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm A \\in \\begin{cases}\n \\R^{n \\times k} &\\text{if } \\code{trans} = \\code N\\\\\n \\R^{k \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm A,\n B={matrix $\\dm B \\in \\begin{cases}\n \\R^{n \\times k} &\\text{if } \\code{trans} = \\code N\\\\\n \\R^{k \\times n} &\\text{else}\n \\end{cases}$},\n ldB=leading dimension for \\dm B,\n beta=scalar $\\beta$,\n C={symmetric matrix $\\dm C \\in \\R^{n \\times n}$},\n ldC=leading dimension for \\dm C\n },\n description={double-precision symmetric rank-2k update},\n operations={\n {$\\dm C \\coloneqq \\alpha \\dm A \\matmatsep \\dm[']B + \\alpha \\dm B \\matmatsep \\dm[']A + \\beta \\dm C$},\n {$\\dm C \\coloneqq \\alpha \\dm[']A \\dm B + \\alpha \\dm[']B \\dm A + \\beta \\dm C$}\n },\n flops={$2 n (n + 1) k$},\n datavol={$\\frac12 n (n + 1) + 2 n k$},\n datamov={$n (n + 1) + 2 n k$},\n}\n\n\\routinedocforward\\dtrsm{strsm,\n arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=},\n description={single-precision triangular linear system solve with multiple\n right hand sides},\n}\n\n\\routinedoc{dtrsm,\n arguments={\n 
side=\\dm[lower]A is on the left or right of \\dm B,\n uplo=\\dm[lower]A is lower- or upper-triangular,\n transA=\\dm[lower]A is transposed,\n diag=\\dm[lower]A is unit triangular,\n m=dimension $m$,\n n=dimension $n$,\n alpha=scalar $\\alpha$,\n A={matrix $\\dm[lower]A \\in \\begin{cases}\n \\R^{m \\times m} &\\text{if } \\code{side} = \\code L\\\\\n \\R^{n \\times n} &\\text{else}\n \\end{cases}$},\n ldA=leading dimension for \\dm[lower]A,\n B={matrix $\\dm B \\in \\R^{m \\times n}$},\n ldB=leading dimension for \\dm B\n },\n description={double-precision triangular linear system solve with multiple\n right hand sides},\n operations={\n {$\\dm B \\coloneqq \\alpha \\dm[lower, inv]A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[lower, inv']A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[upper, inv]A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm[upper, inv']A \\dm B$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower, inv]A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[lower, inv']A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper, inv]A$},\n {$\\dm B \\coloneqq \\alpha \\dm B \\matmatsep \\dm[upper, inv']A$}\n },\n flops={$\\begin{array}{ll}\n m^2 n &\\text{if } \\code{side} = \\code L\\\\\n m n^2 &\\text{else}\n \\end{array}$},\n datavol={$\\begin{array}{ll}\n \\frac12 m (m + 1) + m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + m n &\\text{else}\n \\end{array}$},\n datamov={$\\begin{array}{ll}\n \\frac12 m (m + 1) + 2 m n &\\text{if } \\code{side} = \\code L\\\\\n \\frac12 n (n + 1) + 2 m n &\\text{else}\n \\end{array}$},\n}\n\n\\routinedocforward\\dtrsm{ctrsm,\n arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=},\n description={single-precision complex triangular linear system solve with\n multiple right hand sides},\n}\n\n\\routinedocforward\\dtrsm{ztrsm,\n arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=},\n description={double-precision complex triangular linear system solve with\n multiple right hand sides},\n}\n\n\\subsection*{\\codestyle\\bf\\llap{\\routine(}\\arglist)}\n \\label{routine:\\routine}\n {\\it\\description}\n ]\n \\def\\empty{}\\small\\singlespacing\n \\expandafter\\ifx\\note\\pgfkeysnovalue\\else\n \\paragraph{Note\\strut}\n \\note\n \\fi\n\n \\ifx\\operations\\empty\\else\n \\paragraph{Operations\\strut}\n \\operations\n \\fi\n\n {\n \\raggedright\n \\hbadness=10000\n \\hangafter=1\n \\renewcommand\\newline{\n \\par\n \\settowidth\\hangindent{\\hspace\\argwidth: }\n \\makebox[\\hangindent]{}%\n }\n \\paragraph{Arguments\\strut}\n \\arguments\n }\n\n \\expandafter\\ifx\\flops\\pgfkeysnovalue\\else\n \\paragraph{Minimal FLOP-count\\strut}\n \\flops\n \\fi\n\n \\expandafter\\ifx\\datavol\\pgfkeysnovalue\\else\n \\paragraph{Data volume\\strut}\n \\datavol\n \\fi\n\n \\expandafter\\ifx\\datamov\\pgfkeysnovalue\\else\n \\paragraph{Minimal data movement\\strut}\n \\datamov\n \\fi\n \\end{multicols}\n\n \\filbreak\n}}\n\n\\newcommand\\routinedocforward[2]{\n \\pgfkeys{\n \/routine,\n #2,\n name\/.get=\\routine,\n arglist\/.get=\\arglist,\n description\/.get=\\description,\n }\n \\subsection*{\\codestyle\\bf\\routine(\\arglist)}\n \\label{routine:\\routine}\n {\\it\\description.}\n See #1.\n\n \\filbreak\n}\n\n\n\n\\subsection*{Reference Implementations}\n\nThe \\blas and \\lapack reference implementations~\\cite{blasweb, lapackweb} are\nfully functional and well-documented and thus of great value as references for\nroutine interfaces and semantics. 
However, on their own they attain only poor\nperformance and should therefore not be used in production codes.\n\nAll routines in the \\blas reference implementation are single-threaded\nand unoptimized. The central kernel \\dgemm, for instance, is realized as a\nsimple triple loop that reaches around \\SI6{\\percent} of modern processors'\nsingle-threaded theoretical peak performance---optimized implementations are\ncommonly $15\\times$~faster on a single core and provide excellent multi-threaded\nscalability.
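\n\nSchematically, such a triple loop looks as follows (a structural C~sketch of\nthe $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B + \\dm C$ case with column-major\noperands; the reference implementation itself is written in Fortran):\n\\begin{lstlisting}[language=C]\n\/\/ Unoptimized triple-loop matrix-matrix product: C := A*B + C,\n\/\/ with column-major operands and leading dimensions ldA, ldB, ldC.\nvoid dgemm_nn_naive(int m, int n, int k,\n                    const double *A, int ldA,\n                    const double *B, int ldB,\n                    double *C, int ldC) {\n    for (int j = 0; j < n; j++)\n        for (int p = 0; p < k; p++)\n            for (int i = 0; i < m; i++)\n                C[i + j * ldC] += A[i + p * ldA] * B[p + j * ldB];\n}\n\\end{lstlisting}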
\n\nSince \\lapack primarily relies on a tuned \\blas implementation for speed,\nthe reference implementation can in principle reach good performance. However,\nas its documentation states, this requires careful tuning of its block sizes,\nwhose default values are generally too low on contemporary processors.\nOptimized implementations may further improve \\lapack's performance through\nfaster algorithms, tuned unblocked kernels (e.g., \\dtrti2, \\dpotf2), and\nalgorithm-level parallelism (e.g., task-based algorithms-by-blocks).\n\nThroughout this work, we use reference \\blas and \\lapack version~3.5.0.\n\n\n\\subsection*{\\namestyle OpenBLAS}\n\n{\\namestyle OpenBLAS}~\\cite{openblasweb} is a high-performance open-source \\blas\nand \\lapack implementation that is currently developed and maintained at the\n{\\namestyle Massachusetts Institute of Technology}. It provides optimized and\nmulti-threaded \\blas kernels for a wide range of architectures, and offers tuned\nversions of core \\lapack routines, such as \\dlauum, \\dtrtri, \\dpotrf, and\n\\dgetrf. {\\namestyle OpenBLAS} is based on the discontinued {\\namestyle\nGotoBLAS2}, adopting its approach and much of its source code; it includes\nassembly kernels for more recent architectures, such as \\sandybridgeshort and\n\\haswellshort, as well as {\\namestyle AMD} processors.\n\nThroughout this work, we use {\\namestyle OpenBLAS} version~0.2.15.\n\n\n\\subsection*{\\namestyle BLIS}\n\nThe {\\namestyle BLAS-like Library Instantiation Software} ({\\namestyle\nBLIS})~\\cite{blis1, blis2, blis3, blisweb} is a fairly recent framework for\ndense linear algebra libraries that is actively developed at the {\\namestyle\nUniversity of Texas at Austin}. While it comes with its own API, which is a\nsuperset, generalization, and extension of the \\blas, it contains a\ncompatibility layer offering the original de-facto standard \\blas interface.\n{\\namestyle BLIS} builds upon the {\\namestyle GotoBLAS} approach, yet\nrestructures and solidifies it to make all but a tiny ``micro-kernel''\narchitecture-independent. While its performance is so far generally lower than\nthat of \\openblas (see examples in \\cref{sec:model:args}), its ambitious goal is\nto significantly speed up both the development of new application-specific\nkernels, and the adaptation to other architectures.\n\nAlthough multi-threading was introduced into {\\namestyle BLIS}~\\cite{blis3} soon\nafter its inception, its flexible threading model lacked a simple end-user\ninterface (such as following the environment variable \\code{OMP\\_NUM\\_THREADS})\nuntil November~2016 (commit\n\\href{https:\/\/github.com\/flame\/blis\/commit\/6b5a4032d2e3ed29a272c7f738b7e3ed6657e556}{\\sf\n6b5a403}). As a result, we only present single-threaded results for\n{\\namestyle BLIS}.\n\nThroughout this work we use {\\namestyle BLIS} version~0.2.0.\n\n\n\\subsection*{\\namestyle MKL}\n\n\\intel's {\\namestyle Math Kernel Library} ({\\namestyle MKL})~\\cite{mklweb} is a\nhigh-performance library for \\intel processors that covers \\blas and\n\\lapack, as well as other high-performance computations, such as Fast\nFourier Transforms (FFT) and Deep Neural Networks (DNN). While {\\namestyle MKL}\nis a closed-source library, it recently began offering free developer licenses.\nIn terms of performance, it is in most scenarios superior to open-source\nlibraries such as \\openblas and \\blis (see examples in \\cref{sec:model:args}).\n\nThroughout this work we use {\\namestyle MKL} version~11.3.\n\n\n\\subsection*{\\namestyle Accelerate}\n\n\\apple's framework {\\namestyle Accelerate}~\\cite{accelerateweb} is a\nhigh-performance library that ships with {\\namestyle macOS} and, among others,\nprovides full \\blas and \\lapack functionality. Its performance is in many\ncases comparable to \\openblas or slightly better.\n\n\n\\subsection*{Other Implementations}\n\nThe following notable \\blas and \\lapack implementations are not used throughout\nthis work:\n\\begin{itemize}\n \\item The {\\namestyle Automatically Tuned Linear Algebra Software}\n ({\\namestyle ATLAS})~\\cite{atlas1, atlas2, atlas3, atlasweb} is a\n high-performance \\blas implementation that relies on auto-tuning. While\n {\\namestyle ATLAS} kernels typically do not reach the performance of\n hand-tuned implementations such as \\openblas, \\blis, and \\mkl, it\n provides good performance for new and exotic architectures with little\n effort.\n\n \\item {\\namestyle GotoBLAS2}~\\cite{gotoblas1, gotoblas2, gotoblasweb} is a\n high-performance \\blas implementation that was developed at the\n {\\namestyle Texas Advanced Computing Center}. Since its\n discontinuation, much of its code-base was picked up by its successor\n \\openblas in~2011, and its approach was refined and generalized in\n \\blis.\n\n \\item {\\namestyle IBM}'s {\\namestyle Engineering and Scientific Subroutine\n Library} ({\\namestyle ESSL}) \\cite{esslweb} provides a high-performance\n \\blas implementation and parts of \\lapack for {\\namestyle POWER}-based\n systems, such as {\\namestyle Blue Gene} supercomputers.\n\\end{itemize}\n\n\\section{Storage Format}\n \\label{app:libs:store}\n \\input{applibs\/store}\n\n \\section{\\namestyle Basic Linear Algebra Subprograms}\n \\label{app:libs:blas}\n \\input{applibs\/blas}\n\n \\section{\\namestyle Linear Algebra PACKage}\n \\label{app:libs:lapack}\n \\input{applibs\/lapack}\n\n \\section{Implementations}\n \\label{app:libs:libs}\n \\input{applibs\/libs}\n}\n\n\\subsection{Scalars}\nEach scalar operand (e.g., $\\alpha \\in \\R$) is passed as a single argument\n (e.g., \\code{double *alpha}). Complex scalars are stored as two consecutive\n elements of the basis data-type (\\code{float} or \\code{double}) that represent\n the real and imaginary parts.\n\n\n\\subsection{Vectors}\nEach vector operand (e.g., $\\dv x \\in \\R^n$) is specified by three arguments:\n\\begin{itemize}\n \\item A size argument (e.g., \\code{int *n}) determines the length of the\n vector. 
One size argument can describe multiple vectors (and\/or\n matrices) with the same size.\n\n \\item A data argument (e.g., \\code{double *x}) points to the vector's first\n element in memory.\n\n \\item An increment argument (e.g., \\code{int *incx}) identifies the stride\n between consecutive elements of the vector. For instance, a\n contiguously stored vector has an increment of~1.\n\n Note that most routines allow negative increments. In this case, the\n vector is stored in reverse, and the data argument points to the\n vector's last element---the first memory location.\n\\end{itemize}\nTo summarize, vector element~$x_i$ is stored at \\code{x[i * incx]} if\n\\code{incx} is positive and \\code{x[(i - n + 1) * incx]} otherwise.\n\n\n\\subsection{Matrices}\nEach matrix (e.g., $\\dm[width=.7]A \\in \\R^{m \\times n}$) is specified by four\narguments:\n\\begin{itemize}\n \\item Two size arguments (e.g., \\code{int *m} and \\code{int *n}) determine\n the matrix height~($m$) and width~($n$). One size argument can describe\n the dimensions of multiple matrices (and\/or vectors), or both dimensions\n of a square matrix.\n\n \\item A data argument (e.g., \\code{double *A}) points to the first matrix\n element in memory (e.g., $a_{00}$). The following elements of the\n first column (e.g., $a_{i0}$) are stored consecutively in memory as a\n vector with increment~1.\n\n \\item A leading dimension argument (e.g., \\code{int *ldA}) describes the\n distance in memory between matrix columns. It can hence be understood\n and used as the increment argument for the matrix rows viewed as vectors. The\n term ``leading dimension'' comes from the concept that a referenced\n matrix is part of a larger, contiguously stored ``leading'' matrix. It\n makes it possible to operate on sub-matrices or tensor panels as shown throughout\n this work.\n\n Leading dimensions must be at least equal to the height of the matrix\n (e.g., $m$).\n\\end{itemize}\nTo summarize, matrix element~$a_{ij}$ is stored at \\code{A[i + j * ldA]}.
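\n\nThe following self-contained C~sketch illustrates these storage conventions:\nit fills a column-major matrix and then addresses a sub-matrix through a data\npointer combined with the unchanged leading dimension (all names are\nillustrative):\n\\begin{lstlisting}[language=C]\n#include <stdio.h>\n\nint main(void) {\n    \/\/ 4 x 3 column-major matrix A with leading dimension ldA = 4;\n    \/\/ element a_ij is stored at A[i + j * ldA].\n    int m = 4, n = 3, ldA = 4;\n    double A[12];\n    for (int j = 0; j < n; j++)\n        for (int i = 0; i < m; i++)\n            A[i + j * ldA] = 10.0 * i + j;  \/\/ a_ij = 10 i + j\n\n    \/\/ The 2 x 2 sub-matrix starting at a_11 is described by the\n    \/\/ pointer &A[1 + 1 * ldA] and the same leading dimension ldA.\n    const double *Asub = &A[1 + 1 * ldA];\n    printf(\"%g\\n\", Asub[0 + 1 * ldA]);  \/\/ sub-matrix (0,1) = a_12 = 12\n    return 0;\n}\n\\end{lstlisting}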
\n\n\\subsection{Compute-Bound Efficiency}\n\\label{sec:term:eff:compbound}\n\nA computation is compute-bound on a hardware platform if the memory operations\nto load and store the involved data can be amortized by floating-point\noperations, i.e., the available memory bandwidth is sufficient for all transfers\nand the speed at which the processor performs \\flops is the bottleneck. An\noperation is theoretically compute-bound when\n\\[\n \\text{arithmetic intensity} \\geq \\frac\\pperf\\pbw \\enspace.\n\\]\nFurthermore, a computation's\n\\definition[(compute-bound)\\\\efficiency]{compute-bound efficiency} (or simply\n{\\em efficiency}) is given by\n\\begin{equation}\\label{eq:term:eff}\n \\text{compute-bound efficiency}\n \\defeqq \\frac{\\text{attained performance}}\\pperf \\enspace.\n\\end{equation}\nThis unit-less metric between 0 and~1 indicates how well the available hardware\nresources are utilized: While a value close to~1 corresponds to near-optimal\nutilization, lower values indicate untapped resource potential.\n\n\\begin{example}{Compute-bound efficiency}{term:eff}\n The matrix-matrix multiplication $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B +\n \\dm C$ (\\dgemm[NN]) with $\\dm A,\\allowbreak \\dm B,\\allowbreak \\dm C \\in\n \\R^{1000 \\times 1000}$ has an arithmetic intensity of (see\n \\cref{ex:term:ai})\n \\[\n \\SIvar{1000 \\times \\frac1{16}}{\\flops\\per\\Byte} \n = \\SI{62.5}{\\flops\\per\\Byte} \\enspace.\n \\]\n On a single core of a \\sandybridge with a peak floating-point performance of\n \\SI{20.8}{\\giga\\flops\\per\\second} (\\turboboost disabled) and a measured peak\n bandwidth of \\SI{16.25}{\\gibi\\byte\\per\\second} (\\cref{ex:term:peakbw}), this\n operation is clearly compute bound:\n \\[\n \\frac\n {\\SI{20.8}{\\giga\\flops\\per\\second}}\n {\\SI{16.25}{\\gibi\\byte\\per\\second}}\n \\approx \\SI{1.28}{\\flops\\per\\Byte}\n < \\SI{62.5}{\\flops\\per\\Byte} \\enspace.\n \\]\n If the \\dgemm[NN] runs at \\SI{19.61}{\\giga\\flops\\per\\second}\n (\\cref{ex:term:perf}), it reaches an efficiency of\n \\[\n \\frac{\\text{attained performance}}\\pperf\n = \\frac\n {\\SI{19.61}{\\giga\\flops\\per\\second}}\n {\\SI{20.8}{\\giga\\flops\\per\\second}}\n \\approx \\SI{94.27}\\percent \\enspace.\n \\]\n\\end{example}\n\nThere are many different ways to look at efficiency other than the ratio of\nattained performance to peak performance. 
Rewriting the definition of\nefficiency as\n\\begin{align*}\n \\text{efficiency}\n &= \\frac{\\text{attained performance}}\\pperf \\\\\n &= \\frac\n {\\text{cost} \/ \\text{runtime}}\n {\\text{cost} \/ \\text{optimal runtime}} \\\\\n &= \\frac{\\text{optimal runtime}}{\\text{runtime}} \\enspace,\n\\end{align*}\nit is expressed as the ratio of the minimum time required to perform the\noperation's minimal \\flops on the given hardware to the computation's runtime.\nIf we reorganize it as\n\\begin{align*}\n \\text{efficiency}\n &= \\frac{\\text{attained performance}}\\pperf \\\\\n &= \\frac{\\text{cost} \/ \\text{runtime}}\\pperf \\\\\n &= \\frac{\\text{cost}}{\\text{runtime} \\times \\pperf} \\\\\n &= \\frac{\\text{cost}}{\\text{available \\flops}} \\enspace,\n\\end{align*}\nit can be seen as the ratio of the operation's minimal \\flop-count to how many\n\\flops the processor could theoretically perform during the computation's\nruntime.\n\n\\begin{example}{Expressing compute-bound efficiency}{term:eff2}\n In \\cref{ex:term:eff} the \\dgemm[NN] took \\SI{102}\\ms, while the\n \\sandybridge with a peak performance of \\SI{20.8}{\\giga\\flops\\per\\second}\n (\\turboboost disabled) could have performed the required $\\SIvar{2 \\times\n 1000^3}\\flops = \\SI{2e9}\\flops$ in\n \\[\n \\frac{\\SI{2e9}\\flops}{\\SI{20.8}{\\giga\\flops\\per\\second}}\n \\approx \\SI{96.15}\\ms \\enspace .\n \\]\n Hence, the computation's efficiency can be computed as\n \\[\n \\frac{\\text{optimal runtime}}{\\text{runtime}}\n = \\frac{\\SI{96.15}\\ms}{\\SI{102}\\ms} \\approx \\SI{94.26}\\percent \\enspace.\n \\]\n\n We can also consider that in the \\SI{102}{\\ms} that the \\dgemm[NN] took, the\n \\sandybridgeshort core could have performed\n \\[\n \\SI{102}\\ms \\times \\SI{20.8}{\\giga\\flops\\per\\second}\n \\approx \\SI{2.12e9}\\flops \\enspace.\n \\]\n Once again we obtain the same efficiency, as a \\flop-count ratio:\n \\[\n \\frac{\\text{cost}}{\\text{available \\flops}}\n = \\frac{\\SI{2e9}\\flops}{\\SI{2.12e9}\\flops}\n \\approx \\SI{94.26}\\percent\n \\enspace.\n \\]\n\\end{example}\n\n\n\\subsection{Bandwidth-Bound Efficiency}\n\\label{sec:term:eff:bwbound}\n\nA computation is bandwidth-bound on a hardware platform if the memory operations\ncannot load and store the involved data as fast as the processor's\nfloating-point units can process it, i.e., the memory bandwidth is the\nbottleneck and the compute units are partially idle. An operation is\ntheoretically bandwidth-bound when\n\\[\n \\text{arithmetic intensity} \\leq \\frac\\pperf\\pbw \\enspace.\n\\]\nFurthermore, a computation's \\definition{bandwidth-bound efficiency} is defined\nas\n\\begin{equation}\n \\label{eq:term:eff:bwbound}\n \\text{bandwidth-bound efficiency} \\defeqq\n \\frac{\\text{attained bandwidth}}\\pbw \\enspace.\n\\end{equation}\nA bandwidth-bound efficiency close to~1 indicates a good utilization of the\nprocessor's main-memory bandwidth, while smaller values signal underutilization.\n\n\\begin{example}{Bandwidth-bound efficiency}{term:bwbeff}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\matvecsep \\dv\n y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^{\\num{100000}}$ has an arithmetic\n intensity of \\SIvar{\\frac18}{\\flops\\per\\Byte} (\\cref{ex:term:ai}) and is\n thus clearly bandwidth-bound. 
If, on one core of a \\sandybridge, it attains\n a bandwidth of \\SI{11.49}{\\gibi\\byte\\per\\second} (\\cref{ex:term:bw}),\n relative to the processor's empirical peak bandwidth of\n \\SI{16.25}{\\gibi\\byte\\per\\second} (\\cref{ex:term:peakbw}), it performs at a\n bandwidth-bound efficiency of\n \\[\n \\frac{\\text{attained bandwidth}}\\pbw\n = \\frac\n {\\SI{11.49}{\\gibi\\byte\\per\\second}}\n {\\SI{16.25}{\\gibi\\byte\\per\\second}}\n \\approx \\SI{70.71}\\percent \\enspace.\n \\]\n\\end{example}\n\n\n\\subsection{The Roofline Model}\n\\label{sec:term:roofline}\n\nThe \\definition{Roofline model}~\\cite{roofline1} plots the performance of\ncomputations (in \\si{\\giga\\flops\\per\\second}) against their arithmetic intensity\n(in \\si{\\flops\\per\\Byte}). In addition to data-points from measurements, two\nlines are added to such a plot to indicate the theoretically attainable\nperformance depending on the arithmetic intensity: The product of peak bandwidth\nand arithmetic intensity (in units: $\\si{\\gibi\\byte\\per\\second} \\times\n\\si{\\flops\\per\\Byte} = \\si{\\gibi\\flops\\per\\second} \\approx\n\\SI{1.07}{\\giga\\flops\\per\\second}$) constitutes a straight line through the\norigin with the bandwidth as a gradient (visually: \\tikz\\draw[thick, darkred]\n(0, 0) -- (1.5ex, 1.5ex);) that represents the bandwidth-bound performance limit;\nand the peak floating-point performance is a constant line (\\tikz\\draw[thick,\ndarkred] (0,0) (0, 1.5ex) -- (3ex, 1.5ex);). Together these two lines form the\nroofline-shaped performance limit (\\tikz\\draw[thick, darkred] (0, 0) -- (1.5ex,\n1.5ex) -- (4.5ex, 1.5ex);) that gives the visualization its name:\n\\begin{equation}\\label{eq:term:roofline}\n \\text{performance limit} =\n \\min\\left(\\begin{array}c\n \\pbw \\times \\text{intensity},\\\\\n \\pperf\n \\end{array}\\right) \\enspace.\n\\end{equation}\nComparing the attained performance of a computation to this limit yields the\ncomputation's efficiency---bandwidth-bound below the left part of the ``roof''\nand compute-bound below the right part.\n\n\\input{appterm\/figures\/roofline}
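\n\nIn code, the performance limit of \\cref{eq:term:roofline} and the resulting\nefficiency are directly computable; the following C~sketch uses the\n\\sandybridgeshort numbers from the examples above (note the conversion from\n\\si{\\gibi\\byte\\per\\second} to \\si{\\giga\\byte\\per\\second}):\n\\begin{lstlisting}[language=C]\n#include <stdio.h>\n\n\/\/ Roofline limit: min(peak bandwidth x intensity, peak performance).\ndouble roofline(double peak_perf, double peak_bw, double intensity) {\n    double bw_limit = peak_bw * intensity;\n    return bw_limit < peak_perf ? bw_limit : peak_perf;\n}\n\nint main(void) {\n    double peak_perf = 20.8;                 \/\/ GFLOPs\/s, one core, no turbo\n    double peak_bw   = 16.25 * 1.073741824;  \/\/ measured GiB\/s in GB\/s\n    double intensity = 62.5;                 \/\/ dgemm, n = 1000: n\/16\n    double attained  = 19.61;                \/\/ measured GFLOPs\/s\n    double limit = roofline(peak_perf, peak_bw, intensity);\n    printf(\"limit %.2f, efficiency %.4f\\n\", limit, attained \/ limit);\n    return 0;\n}\n\\end{lstlisting}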
\n\n\\begin{example}{The roofline model}{term:roofline}\n \\Cref{fig:term:roofline} presents the Roofline model for one core of a\n \\sandybridge. This processor has a single-core peak performance of\n \\SI{20.8}{\\giga\\flops\\per\\second} (\\turboboost disabled), and we use the\n measured single-core peak bandwidth of \\SI{16.25}{\\gibi\\byte\\per\\second}\n (\\cref{ex:term:peakbw}). Together these two factors impose the performance\n limit~(\\ref*{plt:term:roofline:peak})\n \\[\n \\min(\\SI{16.25}{\\gibi\\byte\\per\\second} \\times \\text{arithmetic\n intensity}, \\SI{20.8}{\\giga\\flops\\per\\second}) \\enspace.\n \\]\n\n \\Cref{fig:term:roofline} also contains the measured performance of\n representative \\blasl1, 2, and~3 operations, whose arithmetic intensity was\n determined in \\cref{ex:term:ai}.\n \\begin{itemize}\n \\item The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x\n \\matvecsep \\dv y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^n$\n (\\ref*{plt:term:roofline:ddot}) has an arithmetic intensity of\n \\SIvar{\\frac18}{\\flops\\per\\Byte}, making it clearly bandwidth-bound\n below the left part of the ``roofline''. The attained\n (bandwidth-bound) efficiency, which is given by the ratio of the\n measured performance~(\\ref*{plt:term:roofline:ddot}) to the\n attainable peak performance~(\\ref*{plt:term:roofline:peak}), is\n quite high at~\\SI{87.93}\\percent.\n\n \\item The matrix-vector multiplication $\\dv y \\coloneqq \\dm A \\matvecsep\n \\dv x + \\dv y$ (\\dgemv) with $\\dm A \\in \\R^{n \\times n}$ and $\\dv x,\n \\dv y \\in \\R^n$ (\\ref*{plt:term:roofline:dgemv}) has an arithmetic\n intensity of $\\approx \\SIvar{\\frac14}{\\flops\\per\\Byte}$, making it\n also bandwidth-bound. The (bandwidth-bound) efficiency\n (\\ref*{plt:term:roofline:dgemv} divided by\n \\ref*{plt:term:roofline:peak}) is between~\\SI{45.32}{\\percent} (for\n $n = 100$) and \\SI{76.66}{\\percent} (for~$n = 2000$).\n\n \\item The matrix-matrix multiplication $\\dm C \\coloneqq \\dm A \\matmatsep\n \\dm B + \\dm C$ (\\dgemm[NN]) with $\\dm A, \\dm B, \\dm C \\in \\R^{n\n \\times n}$ (\\ref*{plt:term:roofline:dgemm}) has a higher arithmetic\n intensity of \\SIvar{\\frac n{16}}{\\flops\\per\\Byte}, which makes it\n theoretically compute-bound on our system for~$n \\geq 21$. In the\n bandwidth-bound domain it reaches its peak (bandwidth-bound) efficiency\n (\\ref*{plt:term:roofline:dgemm} divided\n by~\\ref*{plt:term:roofline:peak}) of \\SI{50.15}{\\percent} at~$n =\n 20$. Within the compute-bound domain, its (compute-bound)\n efficiency grows towards \\SI{74.32}{\\percent} for the largest\n problem size shown~($n = 100$). Beyond this size the efficiency keeps\n growing and converges to its peak of \\SI{93.70}{\\percent} for\n matrices of size~$n = 2000$.\n \\end{itemize}\n\\end{example}\n\n\n\n\n\\section{Workload}\n \\label{sec:term:workload}\n \\input{appterm\/workload}\n\n \\section{Runtime}\n \\label{sec:term:time}\n \\input{appterm\/time}\n\n \\section{Performance and Attained Bandwidth}\n \\label{sec:term:perf}\n \\input{appterm\/perf}\n\n \\section{Hardware Constraints}\n \\label{sec:term:hw}\n \\input{appterm\/hardware}\n\n \\section{Efficiency}\n \\label{sec:term:eff}\n \\input{appterm\/eff}\n\n \\section{Other Metrics}\n \\label{sec:term:other}\n \\input{appterm\/othermetrics}\n}\n\n\n\n\n\\subsection{Floating-Point Operations}\n\\label{sec:term:flops}\n\nMost scientific computations, as complex as they may be, perform their work\nthrough a small set of elementary arithmetic operations on floating-point\nrepresentations of real numbers, such as scalar additions or\nmultiplications\\footnote{%\n Exceptions that work on integer data or other structures include graph\n algorithms and discrete optimization.\n}---these are the so-called \\definition[\\flops: floating-point\noperations\\\\single- and double-precision]{floating-point operations}\n({\\em\\flops}).\\footnote{%\n Not to be confused with floating-point operations {\\em per second}\n (\\si{\\flops\\per\\second}).\n}\n\nContemporary hardware offers two floating-point precisions standardized in\nIEEE~754~\\cite{ieee754}: {\\em single-precision}, and {\\em double-precision}.\nThey differ in the range of representable numbers, their representation\naccuracy, and their implementation in hardware. While we distinguish between\nsingle-precision \\flops and double-precision \\flops, throughout this work we are\nmostly concerned with double-precision computations. 
Hence, ``\\flops''\nused without further specification refers to double-precision floating-point\noperations, and \\R is used to denote double-precision numbers.\n\nAs commonly practiced in dense linear algebra, we assume that the multiplication\nof two $n \\times n$ matrices requires \\SIvar{2 n^3}\\flops{}---it has an\nasymptotic \\definition[matrix-matrix multiplication: $O(n^3)$]{complexity} of\n$O(n^3)$. While algorithms with lower asymptotic complexities (such as the {\\em\nStrassen algorithm} with a complexity of $O(n^{2.807})$~\\cite{strassen} or the\n{\\em Coppersmith-Winograd algorithm} with a complexity of\n$O(n^{2.376})$~\\cite{coppersmith}) were already known in the 1970s, due to\nconsiderably higher constant factors they found little to no application in\nhigh-performance computing until recently~\\cite{blisstrassen}.\n\nThe \\flop-count of most dense linear algebra operations such as the matrix-matrix\nmultiplication is \\definition[data-independence]{data-independent}, i.e., the\noperand entries do not affect what arithmetic operations are\nperformed.\\footnote{%\n Exceptions may be caused by corrupted input, such as \\code{NaN}s, or\n floating-point exceptions, such as division by~0 or under-\/overflows.\n} In particular, this means that all multiplications with 0's are explicitly\nperformed no matter how sparse an operand is (i.e., how few non-zero entries\nit has). A notable exception to the data-independence are numerical\neigensolvers, whose FLOP-counts depend on the eigenspectrum of the input matrix;\nhowever, we do not study eigensolvers in further detail in this work.\n\nAssuming the cubic complexity of the matrix-matrix multiplication, the\ndata-independence allows us to compute the \\definition[cost = minimal\nFLOP-count]{minimal FLOP-count}---also referred to as {\\em cost}---for most\noperations solely based on their operands' sizes.\n\n\\begin{example}{Minimal \\flop-counts}{term:flops}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\matvecsep \\dv\n y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^n$ costs \\SIvar{2 n}\\flops: one\n multiplication and one addition per vector entry.\n\n The solution of a triangular linear system with multiple right-hand-sides\n $\\dm[width=.4]B \\coloneqq \\dm[lower, inv]A \\dm[width=.4]B$ (\\dtrsm) with\n $\\dm[lower]A\\lowerpostsep \\in \\R^{n \\times n}$ and $\\dm[width=.4]B \\in \\R^{n\n \\times m}$ requires \\SIvar{n^2 m}\\flops.\n\n The Cholesky decomposition of a symmetric positive definite (SPD) matrix\n $\\dm[lower]L \\dm[upper, ']L \\coloneqq \\dm A$ (\\dpotrf) with $\\dm A \\in \\R^{n\n \\times n}$ costs\n \\[\n \\SIvar{\\frac16 n (n + 1) (2 n + 1)}\\flops\n \\approx \\SIvar{\\frac13 n^3}\\flops \\enspace.\n \\]\n\\end{example}\n\nNote that an operation's minimal \\flop-count only provides a lower bound for\nroutines implementing it; reasons for exceeding this bound range from technical\nlimitations to cache-aware data movement patterns and algorithmic schemes that\nperform extra \\flops to use faster compute kernels.
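\n\nSuch minimal \\flop-counts are easily computed from the operand sizes alone; a\nsmall C~sketch for the three operations from \\cref{ex:term:flops}:\n\\begin{lstlisting}[language=C]\n#include <stdio.h>\n\n\/\/ Minimal FLOP-counts (costs) as functions of the operand sizes.\ndouble flops_ddot(double n)            { return 2.0 * n; }\ndouble flops_dtrsm(double n, double m) { return n * n * m; }\ndouble flops_dpotrf(double n) {\n    return n * (n + 1.0) * (2.0 * n + 1.0) \/ 6.0;  \/\/ ~ n^3 \/ 3\n}\n\nint main(void) {\n    printf(\"%.0f\\n\", flops_dpotrf(1000.0));  \/\/ 333833500 FLOPs\n    return 0;\n}\n\\end{lstlisting}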
\n\n\\subsection{Data Volume and Movement}\n\\label{sec:term:datamovement}\n\nThe largest portion of a scientific computation's memory footprint is typically\noccupied by its numerical data consisting of floating-point numbers. A real\nnumber in single- and double-precision requires, respectively, 4 and~\\SI8\\bytes,\nwhereas complex numbers are represented as two consecutive real numbers\nand thus require twice the space. Since throughout this work we mostly use\ndouble-precision numbers---conventionally called ``\\definition[$\\SI1\\double =\n\\SI8\\bytes$]{doubles}''---we can proceed with the assumption that each number\ntakes up \\SI8\\bytes.\n\nIn dense linear algebra, the \\definition[data volume in \\bytes]{data volume}\n(in~\\bytes) involved in a computation is determined almost exclusively by the\ninvolved matrix operands. For instance, a square matrix of size $1000 \\times\n1000$ consists of $\\SI{e6}\\doubles = \\SI{8e6}\\bytes \\approx\n\\SI{7.63}{\\mebi\\byte}$;\\footnote{%\n We use the 1024-based binary prefixes for data volumes: $\\SI{1024}\\bytes =\n \\SI1{\\kibi\\byte}$ (``kibibyte''), $\\SI{1024}{\\kibi\\byte} = \\SI1{\\mebi\\byte}$\n (``mebibyte''), and $\\SI{1024}{\\mebi\\byte} = \\SI1{\\gibi\\byte}$\n (``gibibyte'').\n} vector and scalar operands in comparison take up little space: A vector of\nsize~1000 requires $\\SI{8000}\\bytes = \\SI{7.81}{\\kibi\\byte}$, and a scalar fits\nin just \\SI8\\bytes.\n\nWhile a computation's data volume describes how much data is involved in an\noperation, it says nothing about how often it is accessed. For this purpose we\nintroduce the concept of \\definition{data movement} that quantifies how much\ndata is read from or written to memory. A computation's data movement is\ncommonly higher than its data volume, because (parts of) the data are accessed\nmultiple times.\n\nWhile the actual data movement of any dense linear algebra operation is highly\nimplementation dependent, we can easily derive the \\definition{minimal data\nmovement} from the operation's mathematical formulation by summing the size of\nall input and output operands, counting the operands that are both input and\noutput twice.\n\n\\begin{example}{Data volume and movement}{term:datamov}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\matvecsep \\dv\n y$ (\\ddot) with $\\dv x, \\dv y \\in \\R^n$ involves a data volume of $\\SIvar{2\n n}\\doubles = \\SIvar{16 n}\\bytes$ (ignoring the scalar $\\alpha$); since both\n \\dv x and \\dv y need only be read once, the data movement is also \\SIvar{16\n n}\\bytes.\n\n The matrix-matrix product $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B + \\dm C$\n (\\dgemm[NN]) with $\\dm A,\\allowbreak \\dm B,\\allowbreak \\dm C \\in \\R^{n\n \\times n}$ involves a data volume of $\\SIvar{3 n^2}\\doubles = \\SIvar{24\n n^2}\\bytes$; however, since $\\dm C$ is updated, the minimal data movement is\n $\\SIvar{4 n^2}\\doubles = \\SIvar{32 n^2}\\bytes$.\n\n The Cholesky decomposition $\\dm[lower]L \\dm[upper, ']L \\coloneqq \\dm A$\n (\\dpotrf) with $\\dm A \\in \\R^{n \\times n}$ uses only the lower-triangular\n part of the symmetric matrix \\dm A,\\footnotemark{} and \\dm A is decomposed\n in place, i.e., it is overwritten by \\dm[lower]L\\lowerpostsep upon\n completion. Hence the data volume is $\\SIvar{\\frac12 n (n + 1)}\\doubles\n \\approx \\SIvar{4 n^2}\\bytes$, while the minimal data movement is at least\n $\\SIvar{2 \\cdot \\frac12 n (n + 1)}\\doubles \\approx \\SIvar{8 n^2}\\bytes$.\n\\end{example}\n\\footnotetext{%\n Space for the whole matrix is allocated, but the strictly upper-triangular\n part is not accessed.\n}\n\nNote that the minimal data movement is a strict lower bound when none of the\ninvolved data is in any of the processor's caches. Furthermore, depending on\nthe operation and the cache sizes, it may not be attainable in implementations.
\n\n\n\\subsection{Arithmetic Intensity}\n\\label{sec:term:ai}\n\nDividing an operation's minimal \\flop-count by its minimal data movement yields\nits \\definition{arithmetic intensity}:\n\\begin{equation}\n \\label{eq:term:ai}\n \\text{arithmetic intensity}\n \\defeqq \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n \\enspace.\n\\end{equation}\nA low arithmetic intensity means that few operations are performed per memory\naccess, thus making the data movement a likely bottleneck; a high arithmetic\nintensity on the other hand indicates that a lot of work is performed per data\nelement, thus making the floating-point computations the potential bottleneck.\nArithmetic intensity divides dense linear algebra operations into two groups:\nWhile for \\blasl1 (vector-vector) and~2 (matrix-vector) operations the intensity\nis quite small and independent of the problem size, it is considerably larger\nfor \\blasl3 (matrix-matrix) and dense \\lapack-level operations, for which it\nincreases linearly with the problem size.
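\n\nFollowing \\cref{eq:term:ai}, the arithmetic intensity is a one-line\ncomputation once cost and minimal data movement are known; a C~sketch\n(counting \\SI8\\bytes per double, as above):\n\\begin{lstlisting}[language=C]\n#include <stdio.h>\n\n\/\/ Arithmetic intensity = minimal FLOP-count \/ minimal data movement.\ndouble intensity(double flops, double moved_doubles) {\n    return flops \/ (8.0 * moved_doubles);  \/\/ 8 bytes per double\n}\n\n\/\/ dgemm C := A B + C with n x n operands:\n\/\/ 2 n^3 FLOPs over 4 n^2 moved doubles, i.e., n\/16 FLOPs per byte.\ndouble intensity_dgemm(double n) {\n    return intensity(2.0 * n * n * n, 4.0 * n * n);\n}\n\nint main(void) {\n    printf(\"%g\\n\", intensity_dgemm(1000.0));  \/\/ 62.5\n    return 0;\n}\n\\end{lstlisting}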
\n\n\\begin{example}{Arithmetic intensity}{term:ai}\n The vector inner product $\\alpha \\coloneqq \\dm[height=0, ']x \\dv y$ (\\ddot)\n with $\\dv x, \\dv y \\in \\R^n$ is a \\blasl1 operation that performs \\SIvar{2\n n}{\\flops} over \\SIvar{2 n}{\\doubles} of data movement. Hence its\n arithmetic intensity is\n \\[\n \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n = \\frac{\\SIvar{2 n}\\flops}{\\SIvar{2 n}\\doubles}\n = \\SIvar{\\frac18}{\\flops\\per\\Byte} \\enspace.\n \\]\n\n The matrix-vector multiplication $\\dv y \\coloneqq \\dm A \\matvecsep \\dv x +\n \\dv y$ (\\dgemv[N]) with $\\dm A \\in \\R^{n \\times n}$ and $\\dv x, \\dv y \\in\n \\R^n$ is a \\blasl2 operation that performs \\SIvar{2 n^2}{\\flops} over\n \\SIvar{n^2 + 3 n}{\\doubles} of data movement ($\\dv y$ is both read and\n written). Therefore, its arithmetic intensity is\n \\[\n \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n = \\frac{\\SIvar{2 n^2}\\flops}{\\SIvar{n^2 + 3 n}\\doubles}\n \\approx \\SIvar{\\frac14}{\\flops\\per\\Byte} \\enspace.\n \\]\n\n The matrix-matrix multiplication $\\dm C \\coloneqq \\dm A \\matmatsep \\dm B +\n \\dm C$ (\\dgemm[NN]) with $\\dm A,\\allowbreak \\dm B,\\allowbreak \\dm C \\in\n \\R^{n \\times n}$ is a \\blasl3 operation that performs \\SIvar{2 n^3}{\\flops}\n over \\SIvar{4 n^2}{\\doubles} of data movement ($\\dm C$ is both read and written).\n Hence, its arithmetic intensity\n \\[\n \\frac{\\text{minimal \\flop-count}}{\\text{minimal data movement}}\n = \\frac{\\SIvar{2 n^3}\\flops}{\\SIvar{4 n^2}\\doubles}\n = \\SIvar{\\frac n{16}}{\\flops\\per\\Byte}\n \\]\n grows linearly with the problem size~$n$ and already exceeds the intensity\n of \\dgemv for matrices as small as $5 \\times 5$.\n\\end{example}\n\nWe revisit the arithmetic intensity in \\cref{sec:term:eff}, where it determines\nwhether a computation's performance is limited by the processor's memory\nsubsystem or its floating-point units.\n\n\\section*{About This Document}\n\n\\def\\gettexliveversion#1, #2 (#3)#4\\relax{#2}\n\\newcommand\\pdftexver{\\expandafter\\gettexliveversion\\pdftexbanner\\relax\\xspace}\n\nThis document was written in \\href{https:\/\/www.latex-project.org\/}{\\LaTeXe} and\ntypeset with \\href{http:\/\/www.tug.org\/applications\/pdftex\/}{pdfTeX} \\pdftexver\non \\today.\n\nIt relies on the following packages:\n\\href{http:\/\/ctan.org\/pkg\/microtype}{\\code{microtype}} for micro-typography;\n\\href{http:\/\/ctan.org\/pkg\/listings}{\\code{listings}} and\n\\href{http:\/\/ctan.org\/pkg\/tcolorbox}{\\code{tcolorbox}} for algorithms, listings,\nand examples; \\href{http:\/\/ctan.org\/pkg\/pgf}{\\code{tikz}} and\n\\href{http:\/\/ctan.org\/pkg\/pgfplots}{\\code{pgfplots}} for graphics and plots;\n\\href{http:\/\/ctan.org\/pkg\/drawmatrix}{\\code{drawmatrix}} for matrix\nvisualizations; \\href{http:\/\/ctan.org\/pkg\/cleveref}{\\code{cleveref}} and\n\\href{http:\/\/ctan.org\/pkg\/hyperref}{\\code{hyperref}} for references and\nhyperlinks; and \\href{http:\/\/ctan.org\/pkg\/biblatex}{\\code{biblatex}} for the\nbibliography.\n\n\n\n\n\n\n\\subsection{Timing Kernels in \\lapack's \\texorpdfstring\\dgeqrf{dgeqrf}}\n \\label{sec:cache:qr:alg}\n \\input{cache\/alg}\n\n \\subsection{Cache-Aware Timings}\n \\label{sec:cache:qr:timings}\n \\input{cache\/timings}\n\n \\subsection{Modeling the Cache}\n \\label{sec:cache:qr:cache}\n \\input{cache\/cache}\n\n \\subsection{Varying the Setup}\n \\label{sec:cache:qr:res}\n \\input{cache\/qrresults}\n\n \\section{Application to Other Algorithms}\n \\label{sec:cache:algs}\n \\input{cache\/otheralgs}\n\n \\section{Feasibility on Modern Hardware}\n \\label{sec:cache:new}\n \\input{cache\/new}\n\n \\section{Summary}\n \\label{sec:cache:conclusion}\n \\input{cache\/conclusion}\n}\n\n\\subsection{In- and Out-of-Cache Timings}\n\\label{sec:cache:icoc}\n\n\\input{cache\/figures\/ooc}\n\nOut-of-cache timings are hardware-independent and, just as on the\n\\harpertownshort, serve as an upper bound on the \\sandybridgeshort and\n\\haswellshort. This is illustrated in \\cref{fig:cache:ooc} for the inversion of\na lower-triangular matrix $\\dm[lower]A \\in \\R^{3200 \\times 3200}$ with\n\\dtrtri[LN] (\\cref{alg:dtrtriLN2}) and block size~$b = 64$ on the \\haswellshort,\nand the QR~decomposition of $\\dm A \\in \\R^{2400 \\times 2400}$ with \\dgeqrf\n(\\cref{alg:dgeqrf}) and $b = 32$ on the \\sandybridgeshort{}---the chosen\nmatrices comprise around \\SI{40}{\\mebi\\byte} and thus exceed the\n\\sandybridgeshort's and \\haswellshort's last-level cache~(L3) of, respectively,\n\\SIlist{20;30}{\\mebi\\byte}. The out-of-cache timings indeed consistently\noverestimate the in-algorithm timings---by up to~\\SI{347}{\\percent} for the last\ncall to \\refdtrmmRUNN in the QR~decomposition \\dgeqrf on the \\sandybridgeshort\n(\\cref{fig:cache:ooc:dgeqrf:sandybridge} is clipped at~\\SI{175}\\percent). As\nsuch, these measurements serve well as an upper bound on the in-algorithm\ntimings.
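\n\nA common way to realize such out-of-cache timings is to evict the kernel's\noperands between invocations by streaming over a buffer larger than the\nlast-level cache. The following C~sketch outlines this setup (the buffer size\nand the \\code{clock}-based timer are illustrative stand-ins, not our exact\nharness):\n\\begin{lstlisting}[language=C]\n#include <string.h>\n#include <time.h>\n\nenum { FLUSH_BYTES = 64 * 1024 * 1024 };  \/\/ larger than any L3 used here\nstatic char flush_buf[FLUSH_BYTES];\n\n\/\/ Stream over a large buffer so that all operands are evicted.\nstatic void flush_cache(void) { memset(flush_buf, 1, sizeof flush_buf); }\n\n\/\/ Time one kernel invocation with cold caches.\nstatic double time_out_of_cache(void (*kernel)(void)) {\n    flush_cache();\n    clock_t t0 = clock();\n    kernel();\n    return (double)(clock() - t0) \/ CLOCKS_PER_SEC;\n}\n\nstatic void kernel_stub(void) { \/* e.g., a dtrmm or dgemm call *\/ }\n\nint main(void) { return time_out_of_cache(kernel_stub) < 0.0; }\n\\end{lstlisting}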
\n\n\\input{cache\/figures\/ic}\n\nFor the same scenarios, \\cref{fig:cache:ic} presents the error of our previous\nin-cache setup with respect to the in-algorithm timings: While we expect\nour setup to yield faster kernel executions than the in-algorithm timings, on\nthe \\sandybridge (with \\turboboost disabled) the in-cache timings are still up to~\\SI{.51}{\\percent} slower than\nthe in-algorithm timings (not accounting for the small unblocked \\dgeqr2); on\nthe \\haswell (with \\turboboost enabled), the relative errors for \\dtrtri[LN] and\n\\dgeqrf reach, respectively, \\SIlist{1.67;3.44}\\percent.\n\n\\input{cache\/figures\/ictb}\n\nFurther investigation reveals that the processor's \\intel{} \\turboboost is a source of\ncomplication for our measurements: As \\cref{fig:cache:ictb} shows, enabling\n\\turboboost on the \\sandybridge leads to overestimations of the \\dtrtri[LN]'s\nand \\dgeqrf's most compute-intensive operations (i.e., the \\refdtrmmLLNN and the\ntwo \\dgemm{}s (\\ref*{plt:dgemmTN}, \\ref*{plt:dgemmNT})), by up to, respectively,\n\\SIlist{3.20;2.79}\\percent.\n\nWhile \\turboboost increases the overestimation of individual kernels, this\nphenomenon's origin lies in the processor's cache hierarchy: Within an\nalgorithm, each kernel is invoked with a distinct cache precondition, i.e., with\nonly portions of its operands in the processor's caches. Since our\nalgorithm-independent measurements clearly do not match such preconditions, we\nattempted to construct conditions in which the kernel executes at its absolute\npeak performance with different cache setups:\n\\begin{itemize}\n \\item First, we used simple repeated execution of the kernel without any\n modification of the cache in between as before.\n\n \\item Next, we accessed the kernel operands in various\n orders prior to the invocation. 
E.g., for a \\dgemm $\\dm[width=.25]C\n \\coloneqq \\dm[width=.8]A \\matvecsep \\dm[width=.25, height=.8]B +\n \\dm[width=.25]C$, we attempted all permutations of access orders, such\n as \\dm[width=.8]A--\\dm[width=.25, height=.8]B--\\dm[width=.25]C and\n \\dm[width=.25]C--\\dm[width=.8]A--\\dm[width=.25, height=.8]B.\n\n \\item Finally, we refined the access granularity and attempted to bring\n operands into cache not as a whole but only partially: For a kernel\n with one operand larger than the cache and the other operand(s) only a\n fraction of that size (e.g., the \\dgemm[TN] (\\ref*{plt:dgemmTN}) in\n \\dgeqrf: $\\dm[width=.25]C \\coloneqq \\dm[width=.8]A \\matvecsep \\dm\n [width=.25, height=.8]B + \\dm[width=.25]C$ where \\dm[width=.25]B and\n \\dm[width=.25]C are of width~$b$ and close to the problem size~$n$ in\n height), we bring the entire small operand(s) into cache but only\n portions of the large one.\n\n \\input{cache\/figures\/acc}\n\n \\Cref{fig:cache:acc} presents which operand portions we chose to load\n into the cache. These choices are based on the assumption that any\n kernel implementation likely traverses the input matrix somehow\n from the top-left \\tsearrow to the bottom-right.\\footnote{%\n Exceptions are, e.g., \\dtrsm[RLNN] ($B \\coloneqq B A^{-1}$) and\n \\dtrsm[LUNN] ($B \\coloneqq A^{-1} B$), which must traverse the\n triangular~$A$ from the bottom-right to the top-left---in these\n cases the accessed matrix portions are mirrored accordingly.\n } Therefore, we bring a column panel of the operand, a row panel, a\n square block, or any combination of these into the processor's caches.\n While doing so, we varied the sizes~$s_1$, $s_2$, and~$s_3$ of the\n accessed operand portions.\n\\end{itemize}\n\nWhile in some scenarios changing the in-cache setup for kernel invocations\nreduced the runtime overestimation, the effects were not consistent across\ndifferent algorithms, kernels, processors, and \\blas implementations.\nAltogether, it was not possible to determine general, algorithm-independent\nin-cache setups that yield a clear lower bound on the in-algorithm timings.\n\n\n\\subsection{Algorithm-Aware Timings}\n\\label{sec:cache:algaware}\n\nSince our above attempts at algorithm-independent in-cache timings did not yield\nthe required lower bound on in-algorithm timings, the only alternative is to\ntailor the timing setups to individual algorithms. We might for instance set up\neach kernel timing with several preceding kernel invocations from within the\nalgorithms. The \\definition{algorithm-aware timings} obtained in this way yield\naccurate estimates for the in-algorithm timings, and rid us of the need for\ncombining in- and out-of-cache estimates.\n\n\\input{cache\/figures\/exact}\n\n\\begin{example}{Algorithm-aware timings}{cache:algaware}\n \\Cref{fig:cache:exact} presents the accuracy of algorithm-aware timings as\n estimates for in-algorithm timings for the inversion of a lower-triangular\n matrix (\\dtrtri[LN]) and the QR~decomposition (\\dgeqrf) on a \\sandybridge\n (with \\turboboost enabled) using single-threaded \\openblas. 
The
    algorithm-aware timings were created by preceding each measured kernel
    invocation with the calls from the corresponding blocked algorithm that
    were executed since that kernel's last invocation.

    \Cref{fig:cache:exact:dtrtri} shows that for \dtrtri[LN] the
    algorithm-aware timings are, with few exceptions, within~\SI1{\percent} of
    the in-algorithm timings, with an average absolute relative error (ARE)
    of~\SI{.54}\percent. As seen in \cref{fig:cache:exact:dgeqrf}, for \dgeqrf
    the relative error is overall larger yet similarly spread
    around~\SI0{\percent}, with an average ARE of~\SI{.84}\percent.
\end{example}

While this approach yields accurate estimates, when the kernel invocations for
each algorithm execution are timed separately and each measurement is preceded
by a setup of one or more kernels, the timing procedure effectively takes
longer than executing and measuring the target algorithm repeatedly. As a
result, this method is highly accurate yet impractical, which is why we do not
pursue it further.

\subsection{Cholesky Decomposition: \texorpdfstring{\dpotrf[U]}{dpotrf}}
\label{sec:cache:dpotrfU}

\input{cache/figures/cholUalg}

First, we consider \lapack's upper-triangular Cholesky decomposition \dpotrf[U]
\[
    \dm[lower, ']U \dm[upper]U \coloneqq \dm A
\]
of a symmetric positive definite $\dm A \in \R^{n \times n}$ in upper-triangular
storage. \Cref{alg:dpotrfU} presents the blocked algorithm employed in this
routine, which is the transpose of \dpotrf's algorithm for the lower-triangular
case (\cref{alg:chol2} on \cpageref{algs:chol}). As the algorithm traverses
\dm A, both the size and shape of~\dm[mat02, width=1.25]{A_{02}} (the largest
operand) change noticeably: It starts as a row panel, then grows to a square
matrix, and finally shrinks to a column panel. \dm[mat02, width=1.25]{A_{02}}'s
size determines the workload performed by the algorithm's large \refdgemmTN,
which is reflected in the in-algorithm timings in
\cref{fig:cache:dpotrfU:instr}.

\input{cache/figures/cholres}

In our experiments, we execute \dpotrf[U] on a \harpertown with single-threaded
\openblas, $\dm A \in \R^{2400 \times 2400}$,\footnote{%
    For $n = 2400$, the upper-triangular portion of~$A$ takes up about
    \SI{12}{\mebi\byte}---twice the size of the L2~cache.
} and block size $b = 32$. \Cref{fig:cache:dpotrf:res} presents the relative
performance difference with respect to in-algorithm timings for both repeated
execution timings and our final estimates. Our estimates yield improvements for
the \refdsyrkUT and \refdpotfU involving large matrices in the middle of \dm A's
traversal. At the beginning of the traversal, the estimates are generally too
pessimistic because some matrices are (partially) brought into cache by
prefetching, which is not accounted for in our estimates.
On average, the
relative error is reduced from~\SIrange{11.11}{7.87}{\percent}, i.e., by a
factor of~1.41.

However, note that the improvement is only visible in the averaged per-kernel
relative error: Since the runtime of the large \dgemm[TN]~(\ref*{plt:dgemmTN})
is overestimated, the accumulated runtime estimate for the entire algorithm
actually becomes less accurate.


\subsection{Inversion of a Triangular Matrix:
\texorpdfstring{\dtrtri[LN]}{dtrtri}}
\label{sec:cache:dtrtriLN}

\input{cache/figures/trinvalg}

We now take a closer look at \lapack's inversion of a lower-triangular matrix
\dtrtri[LN]
\[
    \dm[lower]A \coloneqq \dm[lower, inv]A
\]
with $\dm A \in \R^{n \times n}$, whose blocked algorithm is presented in
\cref{alg:dtrtriLN2}. In contrast to the previous operations, this algorithm
traverses \dm A \tnwarrow from the bottom-right to the top-left, thereby
operating on sub-matrices of increasing size. \Cref{fig:cache:dtrtriLN:instr}
shows the in-algorithm timings for the algorithm, which are dominated by
\refdtrmmLLNN.

\input{cache/figures/trinvres}

We execute \dtrtri[LN] on a \harpertown with single-threaded \openblas, $\dm A
\in \R^{2400 \times 2400}$, and block size $b = 32$.
\Cref{fig:cache:dtrtriLN:res} compares the performance measurements from
repeated execution and our final estimates to in-algorithm timings: The
improvements of our estimates are most significant in \refdtrmmLLNN (which
performs the most computation) and \refdtrtiLN; the average error is reduced
from~\SIrange{6.70}{3.37}{\percent}---a total improvement of~$1.99\times$.


\subsection{Summary}

We have seen that, on a \harpertown, the accuracy of our runtime estimates for
kernels within blocked algorithms is increased by taking the state of the
L2~cache throughout the algorithm execution into consideration. For different
algorithms, problem sizes, block sizes, \blas implementations, and thread
counts, we have seen improvements between~$1.15\times$ (with all 4~cores)
and~$2.99\times$.

\chapter{Conclusion}
\label{ch:conclusion}

This dissertation set out to predict the performance of dense linear algebra
algorithms. It targeted two types of algorithms that require different
prediction approaches: blocked algorithms and tensor contractions.

For blocked algorithms, we accomplished accurate performance predictions through
automatically generated performance models for compute kernels. Our
predictions both reliably identify the fastest blocked algorithm from
potentially large numbers of available alternatives, and select a block size
for near-optimal algorithm performance. Our approach's main advantage is its
separation of the model generation and the performance prediction: While the
generation may take several hours, thousands of algorithm executions are
subsequently predicted within seconds. One downside of the approach, as
discussed, is that it does not account for algorithm-dependent caching effects.

For tensor contractions, we established performance predictions that identify
the fastest among potentially hundreds of alternative \blas-based contraction
algorithms. By using cache-aware micro-benchmarks instead of our performance
models, our solution is highly accurate even for contractions with severely
skewed dimensions.
Furthermore, since these micro-benchmarks only execute a
tiny fraction of each tensor contraction, they provide performance predictions
orders of magnitude faster than empirical measurements.

Together, our model generation framework and micro-benchmarks form a solid
foundation for accurate and fast performance prediction for dense linear algebra
algorithms.


\section{Outlook}
The techniques presented in this dissertation offer numerous opportunities for
applications and extensions:
\begin{itemize}
    \item Our methods can be applied to predict the performance of various
        types of algorithms and operations, such as recursive algorithms and
        algorithms-by-blocks.

    \item For dense eigenvalue solvers, our models can predict the two most
        computationally intensive stages: The reduction to tridiagonal form and
        the back-transformation. By additionally estimating the data-dependent
        performance of tridiagonal eigensolvers, one can predict the solution
        of complete eigenproblems.

    \item Beyond individual operations, our predictions can be applied to
        composite operations and algorithms, such as matrix chain
        multiplications or least-squares solvers.

    \item Our models were designed to provide estimates for configurable yet
        limited ranges of problem sizes. For extrapolations to larger problems,
        they should be revised to ensure that local performance phenomena do
        not distort faraway estimates.

    \item Computations on distributed-memory systems, accelerators, and
        graphics cards can be predicted by combining our techniques with models
        for data movement and communication.
\end{itemize}

\chapter*{Abstract}

\input{abstract/abstract}


\chapter*{Acknowledgments}

First and foremost, I would like to express my sincere gratitude to my advisor
Paolo Bientinesi. While guiding me through my studies, he always embraced my
own ideas and helped me shape and develop them in countless discussions. While
he granted me freedom in many aspects of my work, he always had time for
anything between a quick exchange of thoughts and extensive brainstorming
sessions. Beyond our professional relationship, we enjoyed twisty puzzles and
board games in breaks from work, long game nights, and annual trips to SPIEL. I
consider myself lucky to have spent my time as a doctoral student with him and
his research group.

The HPAC group proved to be much more than a collection of researchers working
on remotely associated projects; my colleagues were not only a source of
incredibly valuable discussions and feedback regarding my work, but we also
indulged in various unrelated arguments and exchanges over lunch and on many
other occasions. My thanks go to Edoardo Di~Napoli, Diego Fabregat-Traver, Paul
Springer, Jan Winkelmann, Henrik Barthels, Markus Höhnerbach, Sebastian
Achilles, William McDoniel, and Caterina Fenu, as well as our former group
members Matthias Petschow, Roman Iakymchuk, Daniel Tameling, and Lucas Beyer.

I am grateful for financial support from the {\namestyle Deutsche
Forschungs\-gemeinschaft} (DFG) through grant GSC~111 (the graduate school
AICES) and the {\namestyle Deutsche Telekomstiftung}. Their programs not only
funded my work, but also opened further opportunities in the form of seminars
and workshops, and connected me with like-minded students from various
disciplines.

The {\namestyle\rwth IT Center} provided and maintained an extremely reliable
infrastructure central to my work: the {\namestyle \rwth Compute Cluster}.
I
thank its staff not only for ensuring smooth operations but also for their
competent and detailed responses to my many inquiries and requests regarding our
institute's cluster partition.

The AICES service team did their best to shield me from the bureaucracy of
contracts, stipends, and reimbursements. I am grateful they allowed me to focus
solely on my research.

Even more important than a gratifying work environment is forgetting about it
every once in a while. My friends played a bigger role in this effort than
probably most of them know, whether we were simply hanging out or playing
games, going swimming, climbing, or playing badminton, or teaching swimming and
working as lifeguards. You are too many to enumerate, but you know who you
are.

Finally, but most importantly, none of this would have been possible without the
endless and uncompromising support of my parents. You are the reason I grew
into the person I am today. Danke!

\tableofcontents

\subsection{Motivation: Blocked Algorithms}
\label{sec:intro:blocked:algs}

\definitionp[blocked algorithm]{Blocked algorithms} are commonly used to exploit
the performance of optimized \blasl3 kernels\footnote{%
    The {\namestyle Basic Linear Algebra Subprograms} (BLAS) form the basis for
    high performance in dense linear algebra. See \cref{app:term,app:libs}.
} in other matrix operations, such as decompositions, inversions, and
reductions. Every blocked algorithm traverses its input matrix (or matrices) in
steps of a fixed \definition{block size}; in each step of this traversal, it
exposes a set of \definition[sub-matrices\\updates]{sub-matrices} to which it
applies a series of {\em updates}. Through these updates, it progresses with
the computation and obtains a portion of the operation's result; once the matrix
traversal completes, the entire result is computed.

\input{intro/figures/blocked}

\footnotetextbefore{%
    \Cref{app:libs} gives an overview of the \blas and \lapack routines used
    throughout this work. When specified, the subscripts indicate the values of
    the flag arguments, which identify the variant of the operation; e.g., in
    \dpotrf[L] the \code L corresponds to the argument \code{uplo} indicating
    a lower-triangular decomposition.
}
\begin{example}{Blocked algorithms for the Cholesky decomposition}{intro:chol}
    \newcommand\Azz{\dm[mat00, lower]{A_{00}}\xspace}%
    \newcommand\Aoz{\dm[mat10, height=.5]{A_{10}}\xspace}%
    \newcommand\Aoo{\dm[mat11, size=.5, lower]{A_{11}}\xspace}%
    \newcommand\Atz{\dm[mat20, height=1.25]{A_{20}}\xspace}%
    \Cref{algs:chol} illustrates blocked algorithms for a simple yet
    representative operation: the lower-triangular Cholesky decomposition
    \[
        \dm[lower]L \dm[upper, ']L \coloneqq \dm A
    \]
    of a symmetric positive definite (SPD) matrix $\dm A \in \R^{n \times n}$ in
    lower-triangular storage (\lapack: \dpotrf[L]\footnotemark). For this
    operation there exist three different blocked algorithms. Each algorithm
    traverses \dm A diagonally from the top-left to the bottom-right \tsearrow
    and computes the Cholesky factor~\dm[lower]L in place. At each step of the
    traversal, the algorithm exposes the sub-matrices shown in
    \cref{algs:chol:traversal} and makes progress by applying the
    algorithm-dependent updates in \cref{alg:chol1,alg:chol2,alg:chol3}.
Before
    these updates, the sub-matrix~\Azz, which in the first step is of size $0
    \times 0$, already contains a portion of the Cholesky factor~\dm[lower]L;
    after the updates, the sub-matrices~\Aoz and~\Aoo also contain their
    portions of~\dm[lower]L, and in the next step become part of~\Azz. Once the
    traversal reaches the bottom-right corner (i.e., \Azz is now of size $n
    \times n$), the entire matrix is factorized.
\end{example}

Blocked algorithms pose two \definition[optimization challenges:\\alternative
algorithms]{optimization challenges}:
\begin{itemize}
    \item For each operation there typically exist several {\em alternative
        algorithms}, which are mathematically equivalent in exact arithmetic;
        however, even if such algorithms perform the same number of
        floating-point operations, they may differ significantly in
        performance.

    \item For each algorithm, the \definition{block size} influences the number
        of traversal steps and the sizes and shapes of the exposed
        sub-matrices, and thus the performance of the kernels applied to them.
\end{itemize}
What makes matters more complicated is that the optimal choice depends on
various factors, such as the hardware, the number of threads, the kernel
implementations, and the problem size.

\input{intro/figures/chol_vars}

\footnotetextbefore{%
    \Cref{app:hardware} provides an overview of the processors used throughout
    this work.
}
\begin{example}{Performance of alternative algorithms}{intro:chol:var}
    \Cref{fig:intro:chol:vars} shows the performance of the three blocked
    Cholesky decompositions from \cref{algs:chol} with block size~$b = 128$ and
    increasing problem size~$n$ on a 12-core \haswell\footnotemark{} with
    single- and multi-threaded \openblas.

    In both the single- and multi-threaded scenarios,
    algorithm~3~(\ref*{plt:chol3}) is the fastest among the three alternatives
    for all problem sizes. On a single core and for problem size $n = 4152$, it
    is \SIlist{27.40;12.89}{\percent} faster than, respectively,
    algorithms~1~(\ref*{plt:chol1}) and~2~(\ref*{plt:chol2}), and it reaches up
    to \SI{91.01}{\percent} of the processor's theoretical peak performance (red
    line \legendline[very thick, darkred] at the top of the plot). On all 12~of
    the processor's cores, algorithm~3~(\ref*{plt:chol3}) still reaches an
    efficiency of~\SI{69.70}\percent, and outperforms
    algorithms~1~(\ref*{plt:chol1}) and~2~(\ref*{plt:chol2}) by, respectively,
    $5.21\times$ and~$1.92\times$.

    Although algorithm~3~(\ref*{plt:chol3}) is clearly the fastest in this and
    many other scenarios, \lapack's \dpotrf[L] implements
    algorithm~2~(\ref*{plt:chol2}).

    For other operations, the choice becomes more complicated, since no single
    algorithm is the fastest for all problem sizes and scenarios.
For instance,
    for the single-threaded inversion of a lower-triangular matrix $\dm[lower]A
    \coloneqq \dm[lower, inv]A$, two different algorithms are the fastest for
    small and large matrices, with the performance differing by up
    to~\SI{13}{\percent} in either direction (\cref{sec:pred:var:trinv}).
\end{example}

\input{intro/figures/chol_b}

\begin{example}{Influence of the block size on performance}{intro:chol:b}
    Let us consider the blocked Cholesky decomposition
    algorithm~3~(\ref*{plt:chol3} in \cref{fig:intro:chol:vars}) with fixed
    problem sizes~$n = 1000$, 2000, 3000, and~4000 and varying block size~$b$.
    \Cref{fig:intro:chol:b} presents the performance of these algorithm
    executions for 1 and 12~threads on the \haswell using \openblas:
    Single-threaded, the optimal block size increases from~$b = 96$ for~$n =
    1000$ to~$b = 184$ for~$n = 4000$. On 12~cores, on the other hand, the
    performance is less smooth and the optimal choices for~$b$ are between~56
    and~112.

    \Cref{fig:intro:chol:b} demonstrates the importance of selecting the block
    size dynamically: If we use~$b = 184$, which is optimal for~$n = 4000$ on
    one core, for~$n = 1000$ on 12~cores we only reach \SI{77.62}{\percent} of
    the algorithm's optimal performance. On the other hand, \lapack's default
    block size~$b = 64$ (which is close to the optimal~$b = 56$ for~$n = 1000$
    on 12~cores) would reach \SI{95.95}{\percent} of the optimal single-threaded
    performance for~$n = 4000$.
\end{example}


\subsection{Prediction through Performance Models}
\label{sec:intro:blocked:pred}

Naturally, both the best algorithm and its optimal block size for a given
scenario (operation, problem size, hardware, kernel library, multi-threading)
can be determined through exhaustive performance measurements; however, this is
extremely time-consuming and thus often impractical. Instead, we aim to
determine the optimal configuration {\em without executing} any of the
alternative algorithms. For this purpose, we use the hierarchical structure of
blocked algorithms: Their entire computation is performed in a series of calls
to a few kernel routines; hence, by accurately estimating the runtime of these
kernels, we can predict an entire algorithm's runtime and performance.

In order to estimate the kernel runtimes, let us study how these kernels are
used: In each algorithm execution, the same set of kernels is invoked
repeatedly---once for each step of the blocked matrix traversal. Each
invocation, however, works on operands of different size depending on the
progress of the algorithm's traversal, the input problem size, and the block
size. In short, we need to estimate the performance of only a few kernels, yet
with potentially wide ranges of operand sizes.

Our solution is \definition{performance modeling}, as detailed in
\cref{ch:model}: Based on a detailed study of how a kernel's arguments (i.e.,
flags, operand sizes, etc.) affect its performance, we design performance models
in the form of piecewise multi-variate polynomials. These models are generated
automatically once for each hardware and software setup and subsequently provide
accurate performance estimates at a tiny fraction of the kernel's runtime.

Using such estimates, we \definition[performance prediction]{predict} the {\em
performance} of blocked algorithms, as presented in \cref{ch:pred}.
These fast
predictions prove to be highly accurate, and allow us both to rank the blocked
algorithms for a given operation according to their performance, and to find
near-optimal values for the algorithmic block sizes.

While our models yield accurate performance estimates for individual kernel
executions, they do not capture the performance influence of
\definition{caching} between kernels. Prior to the invocation of each compute
kernel in an algorithm, typically only a portion of its operands is in cache,
and loading operands from main memory increases the kernel runtime.
\Cref{ch:cache} investigates how caching effects can be accounted for in blocked
algorithms, and attempts to combine pure in- and out-of-cache estimates into
more accurate predictions. However, while the results look promising on a
rather old \harpertown, the analysis reveals that on modern processors the
effect of caching on kernel performance is so complex that accounting for it in
algorithm-independent performance models to further improve our prediction
accuracy is infeasible.








\chapter{Introduction}
\chapterlabel{intro}
{
    \tikzsetexternalprefix{externalized/intro-}

    \input{intro/intro.tex}

    \section[Performance Modeling for Blocked Algorithms]
        {Performance Modeling\newline for Blocked Algorithms}
    \label{sec:intro:blocked}
    \input{intro/blocked}

    \section{Micro-Benchmarks for Tensor Contractions}
    \label{sec:intro:tensor}
    \input{intro/tensors}

    \section{Related Work}
    \label{sec:intro:relwork}
    \input{intro/relwork}
}

\subsection{Dense Linear Algebra Libraries and Algorithms}
\label{sec:relwork:libsalgs}

We begin with a brief history of the fundamental DLA libraries \blas and \lapack
and prominent implementations in \cref{sec:relwork:libs}. We then focus on
blocked algorithms and their tuning opportunities in \cref{sec:relwork:blocked},
and finally give an overview of alternative algorithms and libraries for
distributed-memory and accelerator hardware in, respectively,
\cref{sec:relwork:altalgs,sec:relwork:dist}.


\subsubsection{\blas and \lapack}
\label{sec:relwork:libs}

The development of standardized DLA libraries began in~1979 with the inception
of the {\namestyle Basic Linear Algebra Subprograms}
(\definition{\blas})~\cite{blasl1}, a \fortran interface specification for,
initially, various ``Level~1'' scalar and vector operations. It was
subsequently extended to kernels for ``Level~2'' matrix-vector~\cite{blasl2} and
``Level~3'' matrix-matrix~\cite{blasl3} operations in, respectively, 1988
and~1990. The aim of the \blas specification is to enable performance-portable
applications: DLA codes reach high performance on different hardware by using
architecture-specific \blas implementations. Although computer architectures
have evolved dramatically in the last~40 years, this principle of performance
portability is still at the core of all current DLA libraries.

The \blas specification is accompanied by a reference
implementation~\cite{blasweb} that, while fully functional and well documented,
is deliberately simple and thus slow; to reach high performance, users instead
link with optimized \definition[open-source implementations]{\blas
implementations}.
The oldest {\em open-source} implementation still in
use is the {\namestyle Automatically Tuned Linear Algebra Software}
(\atlas)~\cite{atlas1, atlas3, atlas2, atlasweb}, first released in 1997; this
auto-tuning-based library's main strength is that it yields decent performance
on a wide range of hardware platforms with little developer and user effort.
The first major open-source implementation hand-tuned for modern processors
with cache hierarchies was {\swstyle GotoBLAS}~\cite{gotoblas1, gotoblas2,
gotoblasweb}. It reaches up to around \SI{90}{\percent} of a processor's peak
floating-point performance for both sequential and multi-threaded Level~3
kernels and good bandwidth-bound performance for Level~1 and~2 operations.
After {\swstyle GotoBLAS}'s discontinuation in~2010, its code-base and approach
were picked up and extended to more recent processors in the \openblas
library~\cite{openblasweb}, which is currently the fastest open-source
implementation for many architectures. Also inspired by {\swstyle GotoBLAS}'s
approach is the fairly recent {\namestyle \blas-like Library Instantiation
Software} (\blis)~\cite{blis3, blis1, blis2, blisweb}, an open-source framework
that provides optimized kernels for basic DLA operations, such as the \blas,
based on one hand-tuned micro-kernel per architecture.

In addition to open-source implementations, many hardware \definition[vendor
implementations]{vendors} maintain and distribute their own high-performance
{\em\blas}, e.g., \intel's {\namestyle Math Kernel Library}
(\mkl)~\cite{mklweb}, \apple's framework \accelerate~\cite{accelerateweb}, and
{\namestyle IBM}'s {\namestyle Engineering and Scientific Subroutine Library}
(\essl)~\cite{esslweb}.

\blas forms the basis for DLA libraries covering more advanced operations. The
earliest library built on top of first \blasl1 and later Level~2 was {\swstyle
LINPACK}~\cite{linpack, linpackweb}, a package of solvers for linear equations
and least-squares problems from the~1970s and~1980s. {\swstyle LINPACK}
together with {\swstyle EISPACK}~\cite{eispack, eispackweb}, a collection of
eigenvalue solvers, was superseded by the {\namestyle Linear Algebra PACKage}
(\definition{\lapack})~\cite{lapack, lapackweb} in~1992. \lapack has since been
extended with new features and algorithms, and is still under active
development. Just like \blas, \lapack functions as a de-facto standard
interface specification for many advanced DLA operations; libraries such as
\openblas and \mkl adopt its interface and provide tuned implementations of
various routines.

For more details on \blas and \lapack, and their kernels and implementations
used throughout this work, see \cref{app:libs}.


\subsubsection{Blocked Algorithms}
\label{sec:relwork:blocked}

\lapack uses \definition{blocked algorithms} for most of its dense operations.
The core idea behind these algorithms is to leverage a processor's cache
hierarchy by increasing the spatial and temporal locality of operands, as well
as by casting most of an operation's computation in terms of \blasl3 kernels.
As a result, complex operations can reach performance levels close to the
hardware's theoretical peak.

However, for each operation, there typically exist multiple
\definition{alternative blocked algorithms}, of which \lapack offers only one,
but not always the fastest.
The alternative algorithms for a given operation
can be derived from its mathematical formulation
systematically~\cite{derivingbalgs} and automatically~\cite{loopgen, pmegen}.
Based on these principles, \libflame~\cite{libflameref, libflame, libflameweb}
offers many alternative algorithms for each operation, and for several
operations provides more efficient default algorithms than \lapack. In this
work we consider \libflame's blocked algorithms for various operations, and aim
to predict which of them is most efficient for given scenarios.

A further challenge posed by blocked algorithms is their \definition[block size
tuning]{block sizes}, which need to be carefully {\em tuned} to maximize
performance. Since this is a well-known aspect of blocked
algorithms~\cite{rooflinedla, blocksizetuning}, \lapack encapsulates and exposes
all its tuning parameters in \ilaenv, a central routine that is used to
configure the library at compile time; for many operations the block
sizes used by \lapack's reference implementation of \ilaenv (64~for most
algorithms) have been too small on recent hardware for quite some time.
Although the necessity of optimizing block sizes is well understood and taken
care of by implementations such as \mkl, it remains non-trivial, and in fact few
end users and application developers are aware of it. The automated model-based
optimization of the block size for blocked algorithms is the second major goal
of this work.


\subsubsection{Alternatives to Blocked Algorithms}
\label{sec:relwork:altalgs}

An alternative to blocked algorithms is offered by \definition{recursive
algorithms}, which avoid both the algorithm selection and the block-size
optimization. They are also known as ``cache-oblivious''
algorithms~\cite{cacheoblivious2, cacheoblivious1} since they minimize the data
movement between cache levels~\cite{dlarec}. Recursion has been suggested for
many DLA operations, such as the LU~decomposition~\cite{lurec, lurec2}, the
Cholesky decomposition~\cite{cholrec}, triangular matrix
inversion~\cite{trinvrec}, two-sided linear systems~\cite{sygstrec},
tall-and-skinny QR~factorization~\cite{qrrec}, and Sylvester-type equation
solvers~\cite{recsy, recsyweb}.

However, since no readily available recursion-based library comparable to
\lapack existed, we developed the {\namestyle Recursive \lapack collection}
(\definition{\relapack})~\cite{relapack, relapackweb}. \relapack provides
recursive implementations for 48~\lapack routines, and outperforms not only the
reference implementation but in many cases also optimized libraries such as
\openblas and \mkl.

A second alternative to blocked algorithms, tailored to shared-memory systems,
are task-based \definition{algorithms-by-blocks}, also known as ``block
algorithms'' or ``tiled algorithms''. However, these algorithms not only
introduce a specialized storage scheme for matrices ``by block'', but also
require custom task-scheduling mechanisms. Implementations of such schedulers
include {\namestyle QUARK}~\cite{quark} as part of
{\namestyle PLASMA}~\cite{plasma}, {\namestyle DAGuE}~\cite{dague},
{\namestyle SMPSs}~\cite{smpssdla}, and {\namestyle
SuperMatrix}~\cite{supermatrix}.


\subsubsection{Distributed-Memory and Accelerators}
\label{sec:relwork:dist}

\definition[distributed memory]{Distributed-memory} systems and supercomputers
are indispensable for large-scale DLA computations.
The first noteworthy
extension of \blas and \lapack to this domain was the {\namestyle
Scalable Linear Algebra PACKage} (\scalapack)~\cite{scalapack, scalapackweb},
written in \fortran and based on \blas, \lapack, and the {\namestyle Message
Passing Interface} (MPI). However, {\namestyle ScaLAPACK} is only sparingly
updated (last in~2012); instead, the state of the art for
distributed-memory DLA is {\namestyle Elemental}~\cite{elemental, elementalweb},
an actively developed \cpplang~library based on \libflame's methodology and on
object-oriented and templated programming techniques.

Since \definition{accelerators} such as {\namestyle Xeon-Phi} coprocessors and
graphics processors lend themselves well to compute-intensive operations, they
are a natural target for DLA codes. While some classic \blas implementations
such as \atlas, \blis, and \mkl can be used on the x86-based {\namestyle Xeon
Phi}s, separate libraries are required for graphics processors: {\namestyle
NVIDIA}'s {\namestyle cuBLAS}~\cite{cublasweb} provides high-performance \blas
kernels for {\langstyle CUDA}-enabled graphics cards, and {\namestyle
clBLAS}~\cite{clblasweb} targets {\langstyle OpenCL}-capable devices.
Furthermore, {\namestyle Matrix Algebra on GPU and Multicore Architectures}
({\namestyle MAGMA})~\cite{magma, magmaweb} targets \blas and \lapack operations
on heterogeneous systems (e.g., CPU + GPU).


\subsection{Performance Measurements and Profiling}
\label{sec:relwork:meas}

Runtime measurements of both application codes and algorithms are crucial in the
investigation of performance behaviors and bottlenecks, as well as in
optimization and tuning in general; hence, numerous tools facilitate such
measurements. Simple timers are accessible in virtually any language and
environment: e.g., \code{time} in Unix, \code{rdtsc} in x86~assembly,
\code{gettimeofday()} in~\clang, \code{omp\_get\_wtime()} in {\namestyle
OpenMP}, \code{tic} and \code{toc} in \matlab, and \code{timeit} in \python.
Several more advanced tools \definition[profiling]{profile} executions of
functions and communications in applications by tracing or sampling: e.g.,
{\namestyle gprof}~\cite{gprof, gprofweb}, {\namestyle VAMPIR}~\cite{vampirweb},
{\namestyle TAU}~\cite{tau, tauweb}, {\namestyle Scalasca}~\cite{scalasca,
scalascaweb}, and \intel's {\namestyle VTune}~\cite{vtuneweb}. While such tools
are invaluable in the performance analysis of application codes, their
generality makes them somewhat unwieldy for our purposes of investigating DLA
kernel performance. Therefore, we designed {\namestyle Experimental Linear
Algebra Performance Studies} (\definition{\elaps})~\cite{elaps, elapsweb}, a
framework for performance measurements and analysis of DLA routines and
algorithms, further detailed in \cref{sec:meas:elaps}.


\subsection{Performance Modeling and Predictions}
\label{sec:relwork:model}

Predicting and modeling application performance is an important aspect of
high-performance computing, and the term ``performance modeling'' is used to
describe many different techniques and approaches.
This section gives a brief
overview of such approaches with a focus on methods for DLA algorithms.

The well-established \definition{Roofline model}~\cite{roofline1} does not
predict performance, but relates an algorithm's attained performance to the
hardware's potential: As detailed in \cref{sec:term:roofline}, it allows one to
evaluate an execution's resource efficiency by relating the algorithm's
arithmetic intensity and its attained performance to the hardware's peak
main-memory bandwidth and floating-point performance. It has been applied,
implemented, and extended in numerous publications, such as~\cite{rooflinecache,
rooflinetoolkit, roofline2}. Notably, \citeauthor*{rooflinedla} use the
roofline model (the arithmetic intensity in particular) to optimize the block
size for a blocked matrix inversion algorithm~\cite{rooflinedla}.

Model-based performance tuning of \blas implementations was suggested for both
\atlas~\cite{atlasmodel} and \blis~\cite{blismodel}, showing that near-optimal
\blas performance can be reached without measurement-based autotuning: Instead,
they select, e.g., blocking sizes according to the \blas implementation and the
target processor's cache sizes. Note that these approaches are used to tune
\blas kernels, and do not actually predict their performance; hence they cannot
serve as a basis for our predictions.

Previous work in our research group by \citeauthor*{roman1} constructed accurate
\definition[analytical models]{analytical performance models} for small DLA
kernels~\cite{romandis, roman1}. These models target problems that fit within a
\harpertown's last-level cache (L2), and are based on the number of
memory stalls and arithmetic operations, as well as their overlap, incurred by
specific kernel implementations. As such, they require not only a deep
understanding of the processor architecture, but also a detailed analysis of the
kernel implementation. While the resulting models yield accurate predictions
within a few percent of reference measurements, they are not easily extended to
larger problems and other operations. Therefore, this work instead
considers automatically generated, measurement-based models.

\Citeauthor*{blis3model} construct \definition[piecewise models]{piecewise}
runtime and energy {\em models}---somewhat similar to those presented in this
work---for the \blis implementations of \dgemm and \dtrsm~\cite{blis3model} on a
{\hwstyle Sandy Bridge-EP E5-2620}. However, their approach is based on
extensive knowledge of \blis~\cite{blismodel}, and their models only represent
one degree of freedom (by considering only square matrices or operations on
panel matrices with fixed width/height). Their runtime models' average errors
for \dgemm and \dtrsm are, respectively, \SIlist{1.5;4.5}\percent, with local
errors of up to, respectively, \SIlist{4.5;7}\percent.
\Citeauthor*{blischolmodel} extend this work to multi-threaded \dgemm, \dtrsm,
and \dsyrk in order to predict the performance of a blocked Cholesky
decomposition algorithm with fixed block size~\cite{blischolmodel}; their
average runtime prediction errors are \SIlist{3.7;2.4}\percent, depending on the
parallelization within \blis.
In contrast to these publications, the modeling
framework presented in this work, which was developed around the same time, is
fully automated, applicable to any \blas- or \lapack-like routine, not limited
to one implementation or hardware platform, and offers models with multiple
degrees of freedom.

In a separate effort, \citeauthor*{tridiagmodel} constructs measurement-based,
yet hardware- and \definition{implementation-independent models} in the form of
a series of univariate polynomials (one kernel argument is represented by the
polynomial, the other varied in the series) for several \blasl3
kernels~\cite{tridiagmodel, qrmodel}. These models are used to predict the
performance of both a blocked reduction to tridiagonal form~\cite{tridiagmodel}
and a blocked multishift QR~algorithm~\cite{qrmodel}. The resulting prediction
error on an unspecified {\namestyle AMD Opteron} is reported to be
below~\SI{10}{\percent} for the single-threaded tridiagonalization, and is on
average around~\SI{10}{\percent} for the QR~algorithm using multi-threaded
\blas. In contrast, the more general piecewise models proposed in this work
yield considerably smaller prediction errors for various blocked algorithms.

Several research projects model the performance of \definition[distributed
memory]{distributed-memory} applications. A general-purpose approach by
\citeauthor*{alex1} builds basic performance models for kernels in application
codes based on performance profiling~\cite{alex2, alex1}, allowing one to
investigate the complexity and scalability of application components. In the
field of distributed-memory DLA, most modeling efforts target \scalapack using
domain-specific knowledge through, e.g., polynomial
fitting~\cite{scalapackpolfit} or hierarchical modeling of
kernels~\cite{scalapckhierarchmodel}.


\subsection{Tensor Contractions}
\label{sec:relwork:tensor}

Tensor contractions are at the core of scientific computations in fields such
as machine learning~\cite{tensorml}, general relativity~\cite{generalrelativity,
generalrelativity2}, and quantum chemistry~\cite{ccd2, ccd1}. Since, generally
speaking, such contractions are higher-dimensional analogs of matrix-matrix
multiplication, they are closely related to \blasl3 operations; in fact, most
contractions can be cast in terms of one or more calls to \dgemm by adding
loops or transpositions. This approach is implemented in many frameworks, such
as the {\namestyle Tensor Contraction Engine} (TCE)~\cite{tce, tceweb}, the
{\namestyle Cyclops Tensor Framework} (CTF)~\cite{cyclops, cyclopsweb}, the
\matlab{} {\namestyle Tensor Toolbox}~\cite{matlabtt, matlabttweb}, and
{\namestyle libtensor}~\cite{libtensor, libtensorweb}.

In contrast to these implementations, which rely on a single algorithm for each
contraction (potentially selected through heuristics), previous work in our
group by \citeauthor*{tensorgen} investigated the automated generation of all
alternative \blas-based algorithms~\cite{tensorgen}.
\Cref{ch:tensor} picks up
this work and presents a performance prediction framework for such algorithms
that allows us to automatically identify the fastest
algorithm~\cite{tensorpred}.

More recent and ongoing work in our group by \citeauthor*{gett} attempts to
break the barrier between contraction algorithms and \dgemm implementations.
Following the structured design of \blis~\cite{blis1}, they propose code
generators that provide high-performance algorithms tailored to specific
contraction problems, reaching close-to-optimal performance~\cite{gett}. Their
tools construct numerous alternative implementations, and identify the fastest
through a combination of heuristics and micro-benchmarks.


\subsection{Motivation: Tensor Contraction Algorithms}
\label{sec:intro:tensor:algs}

Computationally, tensor contractions are generalizations of matrix-vector and
matrix-matrix products to operands of higher dimensionality. While
\blas covers contractions of up to two-dimensional operands (i.e., matrices),
there are no equivalently established and standardized high-performance
libraries for general tensor contractions. Fortunately, just as matrix-matrix
products can be decomposed into sequences of matrix-vector products,
higher-dimensional tensor contractions can be cast in terms of matrix-matrix or
matrix-vector kernels. (A broader overview of alternative approaches is given
in \cref{sec:relwork:tensor}.)

\input{intro/figures/tensor_algs}

\begin{example}{Tensor contraction algorithms}{intro:tensor:algs}
    Let us consider the contraction $C_{abc} \coloneqq A_{ai} B_{ibc}$ (in
    Einstein notation), which is visualized as follows:
    \[
        \begin{tikzpicture}[baseline=(c.base)]
            \begin{drawcube}
                \node[anchor=east] at (-1, 0, 1) {$\scriptstyle a$};
                \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$};
                \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$};
                \node (c) {$C$};
            \end{drawcube}
        \end{tikzpicture}
        \coloneqq
        \begin{tikzpicture}[baseline=(c.base)]
            \begin{drawsquare}
                \node[anchor=east] at (-1, 0, 0) {$\scriptstyle a$};
                \node[anchor=north] at (0, -1, 0) {$\scriptstyle i$};
                \node {$A$};
            \end{drawsquare}
        \end{tikzpicture}
        \matmatsep
        \begin{tikzpicture}[baseline=(c.base)]
            \begin{drawcube}
                \node[anchor=east] at (-1, 0, 1) {$\scriptstyle i$};
                \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$};
                \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$};
                \node {$B$};
            \end{drawcube}
        \end{tikzpicture}
        \enspace.
    \]
    The entries~$C$\code{[a,b,c]} of the resulting three-dimensional tensor $C
    \in \R^{a \times b \times c}$ are computed as
    \[
        \forall \code a \forall \code b \forall \code c :\
        C\text{\code{[a,b,c]}} \coloneqq \sum_\code i A\text{\code{[a,i]}}
        B\text{\code{[i,b,c]}}
        \enspace.
    \]
    As further described in \cref{sec:tensor:alggen}, this contraction can be
    performed by a total of 36~alternative algorithms, each consisting of one or
    more \code{\bf for}-loops with a single \blas kernel at its core. Three
    examples of such algorithms using \blasl1, 2, and~3 kernels are shown in
    \cref{fig:intro:tensor:algs}.
These algorithms use \matlab's ``\code:''
    slicing notation\footnotemark{} to access matrices and vectors within the
    tensors~$A$, $B$, and~$C$; the shapes of the operands within the tensors
    that are passed to the \blas kernel are shown alongside the algorithms.
\end{example}
\footnotetext{%
    The index ``\code:'' in a tensor refers to all elements along that
    dimension, e.g., $A$\code{[a,:]} is the \code a-th row of~$A$.
}

Each tensor contraction can be computed via \blas kernels through many---even
hundreds---of algorithms, each with its own performance behavior. The
\definition[optimization challenge:\\alternative algorithms\\skewed
dimensions]{optimization challenge} of identifying the fastest among such a set
of {\em alternative algorithms} is especially difficult due to the {\em skewed
dimensions} commonly encountered in practice (i.e., one or more dimensions are
extremely small), for which most \blas implementations are typically not
optimized.

\input{intro/figures/tensor_perf}

\begin{example}{Performance of contraction algorithms}{intro:tensor:perf}
    Let us consider the tensor contraction $C_{abc} \coloneqq A_{ai} B_{ibc}$
    from \cref{ex:intro:tensor:algs} with tensors $A \in \R^{n \times 8}$, $B
    \in \R^{8 \times n \times n}$, and thus $C \in \R^{n \times n \times n}$;
    for~$n = 100$, this can be visualized as follows:
    \[
        \begin{tikzpicture}[baseline=(c.base)]
            \begin{drawcube}
                \node[anchor=east] at (-1, 0, 1) {$\scriptstyle a$};
                \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$};
                \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$};
                \node (c) {$C$};
            \end{drawcube}
        \end{tikzpicture}
        \coloneqq
        \begin{tikzpicture}[baseline=(a.base), x={(.08, 0)}]
            \begin{drawsquare}
                \node[anchor=east] at (-1, 0, 0) {$\scriptstyle a$};
                \node[anchor=north] at (0, -1, 0) {$\scriptstyle i$};
                \node (a) {$A$};
            \end{drawsquare}
        \end{tikzpicture}
        \begin{tikzpicture}[baseline=(a.base), y={(0, .08)}]
            \begin{drawcube}
                \node[anchor=east] at (-1, 0, 1) {$\scriptstyle i$};
                \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$};
                \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$};
                \node (b) {$B$};
            \end{drawcube}
        \end{tikzpicture}
        \enspace.
    \]

    \Cref{fig:intro:tensor:perf1} presents the performance of all 36~algorithms
    for this contraction on a \harpertown with single-threaded \openblas. While
    the two \dgemm-based algorithms~(\ref*{plt:intro:tensor:dgemm}) are clearly
    faster than the others, they differ in performance by up to
    \SI{23.32}\percent; with other kernels the differences are even more
    extreme, exceeding a factor of~60 for the \daxpy-based
    algorithms~(\ref*{plt:intro:tensor:daxpy}).

    \Cref{fig:intro:tensor:perf2} showcases the performance of algorithms for
    the more complex contraction $C_{abc} \coloneqq A_{ija} B_{jbic}$ on all
    10~cores of an \ivybridge using multi-threaded \openblas. In this scenario,
    the performance of the \dgemm-based algorithms alone differs by up
    to~$3\times$.
\end{example}

One could argue that only \dgemm-based algorithms are viable candidates to
achieve the best performance; while for the most part this observation is true,
due to skewed dimensions, even the performance of these algorithms alone can
differ dramatically.
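To make the structure of such \dgemm-based algorithms concrete, the following
C sketch---our own minimal illustration, not code taken from the generated
algorithms---computes the contraction $C_{abc} \coloneqq A_{ai} B_{ibc}$ from
\cref{ex:intro:tensor:algs} by slicing $B$ and~$C$ along the \code c~dimension;
it assumes contiguous column-major storage and the standard CBLAS interface.
\begin{verbatim}
#include <cblas.h>
#include <stddef.h>

/* C[a,b,c] := sum_i A[a,i] * B[i,b,c]; all tensors contiguous and
 * column-major, so each slice B[:,:,c] is an ni x nb matrix.      */
void contract_ai_ibc(int na, int nb, int nc, int ni,
                     const double *A,  /* na x ni      */
                     const double *B,  /* ni x nb x nc */
                     double *C)        /* na x nb x nc */
{
    for (int c = 0; c < nc; c++)
        /* C[:,:,c] := A * B[:,:,c]  (one dgemm per slice) */
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                    na, nb, ni,
                    1.0, A, na,
                    B + (size_t)c * ni * nb, ni,
                    0.0, C + (size_t)c * na * nb, na);
}
\end{verbatim}
Note that, since $B$ and~$C$ are contiguous, the \code b and \code c~dimensions
could alternatively be folded into a single matrix dimension, replacing the
loop with one large \dgemm; such mathematically equivalent variants are exactly
the alternatives whose performance, as seen above, can nonetheless differ
substantially.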
Furthermore, some contractions (e.g., $C_a \coloneqq
A_{iaj} B_{ji}$) cannot be implemented via \dgemm in the first place.
Therefore, we aim at the accurate prediction of any \blas-based contraction,
irrespective of which kernel is used.


\subsection{Prediction through Micro-Benchmarks}
\label{sec:intro:tensor:pred}

At first sight, the situation seems similar to the selection of blocked
algorithms: We want to avoid exhaustive performance measurements and select the
best algorithm {\em without executing} any of the alternatives; our strategy is
once again to predict each algorithm's performance by estimating its invoked
kernel's runtime. However, while performance models accurately estimate the
performance of such kernels for many operand sizes, they perform rather poorly
for operations with skewed dimensions: For extremely thin or small operands,
\blas kernels exhibit strong size-dependent performance fluctuations, which are
impractical to capture and represent in performance models.

While we cannot rely on performance models, analyzing the structure of tensor
contraction algorithms suggests a different approach: In contrast to blocked
algorithms, a contraction algorithm performs its entire computation in a series
of calls to a \definition[single kernel\\fixed size\\micro-benchmarks]{single
\blas kernel} with operands of {\em fixed size}. Based on this observation,
we estimate the performance of such calls by constructing a small set of {\em
micro-benchmarks} that executes the kernel only a few times, and thus performs
only a fraction of the algorithm's computation. Since memory locality plays an
especially important role in contractions with skewed dimensions, we carefully
recreate the state of the processor's caches within the micro-benchmarks to time
the kernel in conditions analogous to those in the actual algorithm.

Based on such micro-benchmarks, we can predict the total runtime of contraction
algorithms for tensors of various shapes and sizes. These predictions reliably
single out the fastest algorithm from a set of alternatives several orders of
magnitude faster than a single algorithm execution.














\subsubsection{Background and System Noise}
\label{sec:meas:fluct:noise}

The potentially most disruptive, yet also most easily avoidable source of
fluctuations is the presence of other \definition{background processes}
competing for the processor's resources.

\input{meas/figures/fluct}

\begin{example}{Influence of background noise}{meas:fluct}
    \Cref{fig:meas:fluct} presents the runtime of 1000~repetitions of the
    matrix-matrix multiplication $\dm C \coloneqq \dm A \matmatsep \dm B + \dm
    C$ (\dgemm[NN]) with $\dm A, \dm B, \dm C \in \R^{100 \times 100}$ on a
    \broadwell (part of a {\namestyle MacBook Pro}) with \apple's framework
    \accelerate and on a \sandybridge (part of \rwth's compute cluster) with
    \mkl.

    On the \broadwell~(\ref*{plt:ibacc:circ}) with various other applications
    running in the background (e.g., browser and music player), the fluctuations
    are enormous: The measurement standard deviation is over $4\times$~the mean
    runtime. On the \sandybridge~(\ref*{plt:sbmkl:circ}) with no other user
    applications running during measurements, the fluctuations are already much
    smaller at \SI{2.36}{\percent}~of the average time.
For larger problem
    sizes, the fluctuations are considerably smaller, and quickly fall below
    \SI{.1}\percent.
\end{example}

While this type of fluctuation can be avoided to some extent by ensuring that
no other applications run during measurements, it cannot be avoided altogether
even with exclusive access to dedicated high-performance hardware---the
remaining fluctuations are known as \definition{system noise}. Hence, for our
experiments, models, and micro-benchmarks, all our measurements are repeated at
least five times and \definition{summary statistics} of the runtime (or
performance) are presented, such as the minimum or median.


\subsubsection{\intel{} \turboboost}
\label{sec:meas:fluct:turbo}

Compute-bound dense linear algebra computations, such as \blasl3 and
\lapack-level routines, benefit directly from increased processing frequencies.
Therefore, they usually trigger \intel{} \turboboost and constantly run at the
maximum turbo frequency if possible. Since this frequency cannot be sustained
indefinitely on most machines, the processor frequency is eventually lowered and
henceforth fluctuates to keep the hardware within its power and thermal limits.

\input{meas/figures/turbo}

\begin{example}{\turboboost}{meas:turbo}
    \Cref{fig:meas:turbo} presents the runtime of repeated matrix-matrix
    multiplications $\dm C \coloneqq \dm A \matmatsep \dm B + \dm C$
    (\dgemm[NN]) with $\dm A, \dm B, \dm C \in \R^{1300 \times 1300}$ alongside
    the processor's temperature and frequency\footnotemark{} on both cores of a
    \broadwell with multi-threaded \accelerate; in this experiment, no other
    resource-intensive programs run in the background.

    In the beginning, the processor is at a cool
    \SI{53}{\celsius}~(\ref*{plt:meas:turbo:temp}) and each \dgemm[NN] takes
    about \SI{60}{\ms}~(\ref*{plt:meas:turbo:time}) at the maximum turbo
    frequency of \SI{3.4}{\GHz}~(\ref*{plt:meas:turbo:freq}). The processor
    temperature increases steadily up to \SI{105}{\celsius} around
    repetition~200 (\SI{12}{\second} into the experiment); at this point the
    frequency is reduced and continuously adjusted between \SIlist{3;3.2}{\GHz}
    such that this temperature threshold is not exceeded. This change in
    frequency, as well as its fluctuations towards the end, has a direct effect
    on the \dgemm[NN]'s runtime: It increases by about~\SI{10}{\percent} to
    roughly~\SI{67}\ms.
\end{example}
\footnotetext{%
    Obtained through the \intel {\namestyle Power Gadget}.
}

The behavior of \turboboost depends enormously on the computing environment:
While on a workstation or laptop the processor temperature increases
rapidly and the maximum turbo frequency is not sustained for long, on dedicated
high-performance compute clusters, efficient cooling allows the processor to
operate at the maximum turbo frequency for much longer, if not indefinitely.
However, even in our main computing facilities at the {\namestyle\rwth IT
Center}, we observed notable fluctuations of the frequency below its maximum,
with negative impacts on our measurement quality and stability.

Throughout this work, we consider processors with \turboboost both enabled and
disabled. While the performance of these two cases is not directly comparable,
we evaluate our methodologies for both scenarios.
In particular,
\turboboost is disabled on our \sandybridge (unless otherwise stated) and
enabled on our \haswell---an overview of all hardware configurations is given in
\cref{app:hardware}.


\subsubsection{Distinct Long-Term Performance Levels}
\label{sec:meas:fluct:longterm}

Even with \turboboost disabled, a processor's speed is not always fixed to its
base frequency; instead, we observed jumps between two or more
\definition{performance levels}.

\input{meas/figures/longterm}

\begin{example}{Performance levels}{meas:longterm}
    \Cref{fig:meas:longterm} presents the runtime of 1000~repetitions of the
    matrix-matrix multiplication $\dm[width=.05]C \coloneqq \dm A \matvecsep
    \dm[width=.05]B + \dm[width=.05]C$ (\dgemm[NN]) with $\dm A \in \R^{4000
    \times 4000}$ and $\dm[width=.05]B, \dm[width=.05]C \in \R^{4000 \times
    200}$ on a \sandybridge and a \haswell (both with \turboboost disabled) with
    single-threaded \openblas.

    On both systems, we can clearly make out two distinct runtime levels: On the
    \sandybridgeshort, the measurements jump between \SIlist{354;359}\ms, which
    are \SI{1.4}{\percent}~apart, and on the \haswellshort with twice the
    floating-point performance per cycle, the two levels
    at~\SIlist{205;213}{\ms} differ by~\SI{3.9}\percent. There is no
    discernible pattern to the jumps between these levels, and the processors
    commonly stay at the same level for~\SI{10}{\second} or longer
    (50~repetitions at \SI{200}{\ms} each).
\end{example}

Since we found no means to eradicate this type of fluctuation, we adapt our
measurement setups to account for it: Whenever we have more than one
measurement point (e.g., varying the routines or problem sizes), we not only
repeat each measurement several times in isolation, but also shuffle the
repetitions.
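The following C sketch outlines this shuffling scheme as a minimal
illustration; the timer \code{now\_ns} and the kernel invocation
\code{run\_kernel} are hypothetical placeholders for a high-resolution clock
and the measured routine.
\begin{verbatim}
#include <stdint.h>
#include <stdlib.h>

uint64_t now_ns(void);        /* hypothetical monotonic timer     */
void run_kernel(int config);  /* hypothetical measured invocation */

/* Time `reps` repetitions of each of `nconfigs` measurement points;
 * the (config, repetition) pairs are shuffled so that repetitions
 * of one point are spread across the whole experiment.            */
void measure_shuffled(int nconfigs, int reps, uint64_t *t)
{
    int n = nconfigs * reps;
    int *order = malloc((size_t)n * sizeof *order);
    for (int i = 0; i < n; i++)
        order[i] = i;

    /* Fisher-Yates shuffle of the measurement order */
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }

    for (int i = 0; i < n; i++) {
        uint64_t start = now_ns();
        run_kernel(order[i] / reps);     /* which measurement point   */
        t[order[i]] = now_ns() - start;  /* runtime of one repetition */
    }
    free(order);
}
\end{verbatim}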
As a result, the repetitions for each data point are spread across
the entire experiment duration, and summary statistics such as the minimum and
median yield a stable runtime estimate for only one performance level.

In summary, we can avoid or account for various types of fluctuations within our
measurements.



\section{Performance Effects for Dense Linear Algebra Kernels}
    \label{sec:meas:effects}
    \input{meas/effects}

    \subsection{Library Initialization Overhead}
    \label{sec:meas:effects:init}
    \input{meas/init}

    \subsection{Fluctuations}
    \label{sec:meas:effects:fluct}
    \input{meas/fluct}

    \subsection{Thread Pinning}
    \label{sec:meas:effects:pin}
    \input{meas/pin}

    \subsection{Caching}
    \label{sec:meas:effects:caching}
    \input{meas/caching}

    \subsection{Summary}
    \label{sec:meas:effects:sum}
    \input{meas/effectssum}

    \section{Measurements and Experiments: \elaps}
    \label{sec:meas:elaps}
    \input{meas/elapsintro}

    \subsection{The \sampler}
    \label{sec:meas:sampler}
    \input{meas/sampler}

    \subsection{The \elaps{} \python Framework}
    \label{sec:meas:elapslib}
    \input{meas/elaps}

    \section{Summary}
    \label{sec:meas:conclusion}
    \input{meas/conclusion}
}









\subsubsection{Alignment to Cache-Lines}

Data is moved through the memory hierarchy in blocks of \SI{64}{\bytes} ($=
\SI8\doubles$) called \definition{cache-lines}.\footnote{%
    The cache-line size is generally not fixed, but for most processors it is
    \SI{64}{\Byte}.
} Hence, using multiples of the cache-line size as memory-access strides
typically yields a more regular and often better performance compared to other
strides.

\input{model/figures/ld8}

\footnotetextbefore{%
    Since $A$ and~$B$ have 256~rows, the leading dimensions are at least~256.
}
\begin{example}{Aligning leading dimensions to cache-lines}{model:args:ld:8}
    \Cref{fig:model:ld8} shows the runtime of
    \displaycall\dtrsm{
        \arg{side}L, \arg{uplo}L,
        \arg{transA}N, \arg{diag}N,
        \arg m{256}, \arg n{256},
        \arg{alpha}{1.0},
        \arg AA, \arg{ldA}{\it\color{blue}ld},
        \arg BB, \arg{ldB}{\it\color{blue}ld}
    }
    i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{256
    \times 256}$, for leading dimensions\footnotemark{} $ld = 256, \ldots, 320$
    in steps of~1 on a \sandybridge and a \haswell with single-threaded
    \openblas, \blis, and \mkl.

    For all setups, the \dtrsm[LLNN]'s runtime exhibits a regular pattern in
    terms of the leading dimension arguments---with an average amplitude
    of~\SI{2.19}\percent.
However, the patterns are quite different: While\n \openblas's runtime on the \sandybridgeshort~(\ref*{plt:sbopen}) drops\n equally at every even leading dimension, \mkl on the\n \haswellshort~(\ref*{plt:hwmkl}) dips only at multiples of~4, and on the\n \sandybridgeshort~(\ref*{plt:sbmkl}) it has stronger dips at multiples of~8.\n \blis, on the other hand, shows the exact opposite behavior: On both\n platforms~(\ref*{plt:sbblis}, \ref*{plt:hwblis}) its runtime spikes slightly\n at multiples of~8.\n\n Independent of the specific behavior of each setup, a smooth runtime curve\n is obtained when only multiples of~8 are considered as leading dimensions.\n\end{example}\n\nTo avoid small performance irregularities, we will generate our models using\n\definition[use multiples of the cache-line size]{multiples of the cache-line\nsize} for leading dimensions---in double-precision: multiples of~8.\n\n\n\subsubsection{Set-Associative Cache Conflicts}\n\label{sec:model:args:ld512}\n\nThe Level~1 and~2 caches in our processors are \definition{8-way\nset-associative}: They are divided into sets of 8~cache-lines, and when a\ncache-line is loaded, the least significant bits of its line address (i.e., the\naddress divided by the line size) determine which of the sets it is assigned\nto; within the set, an architecture-dependent cache\nreplacement policy determines in which of the 8~slots it is stored. When the\naddress space is accessed contiguously, consecutive cache-lines are loaded into\nconsecutive sets, and the cache is filled evenly. In the worst case, however,\nthe address space is accessed with a stride equal to the number of sets, and\nall loaded cache-lines are associated to the same set: Only 8~cache-lines are\ncached, and each additional line results in a \definition{cache conflict miss}\ncausing a recently loaded line to be evicted. This effect should be avoided\nwhenever possible.\n\nOn recent \intel{} {\namestyle Xeon} processors, the Level~1 data cache~(L1d)\nfits \SI{32}{\kibi\byte} organized as 64~sets of 8~cache-lines. A memory\nlocation with address~$a$ is a part of cache-line~$\lfloor a \/ 64 \rfloor$ (due\nto the size of \SI{64}{\Byte} per line) and assigned to set $\lfloor a \/ 64\n\rfloor \bmod 64$ (due to the capacity of 64~sets). The Level~2 cache (L2) in\nturn fits \SI{256}{\kibi\byte} in 1024~sets; here address~$a$ is assigned to set\n$\lfloor a \/ 64 \rfloor \bmod 1024$.\n\nIn a double-precision matrix stored with leading dimension~$ld$, consecutive\nelements in each row are $8 ld$~\bytes apart ($\SI1\double = \SI8\bytes$).\nHence, for $ld = 512$, the consecutive row elements starting at address~$a_0$\nare stored at~$a_i = a_0 + 8 ld \cdot i = a_0 + 4096 i$, and associated to the\nsame set in the L1d~cache:\n\begin{align*}\n \left\lfloor \frac{a_i}{64} \right\rfloor \bmod 64\n &= \left\lfloor \frac{a_0 + 4096 i}{64} \right\rfloor \bmod 64 \\\n &= \left(\left\lfloor \frac{a_0}{64} \right\rfloor + 64 i \right) \bmod 64 \\\n &= \left\lfloor \frac{a_0}{64} \right\rfloor \bmod 64.\n\end{align*}\nThe same problem occurs for leading dimensions that are multiples of~512, and\neven below~512, powers of~2 have a similar effect: E.g., with $ld = 256$ the\nelements of a row are associated to only two of the cache's 64~sets.
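This set mapping is easily computed explicitly; the following minimal \python\nsketch (for the L1d geometry above, with a few hypothetical leading dimensions)\nillustrates how quickly the utilized sets collapse:\n\begin{verbatim}\ndef l1d_set(addr, line=64, sets=64):\n    """Cache set of a byte address (64 sets of 64-byte lines)."""\n    return (addr // line) % sets\n\n# sets touched by the first 16 elements of a matrix row\n# (element stride in bytes: 8 * ld)\nfor ld in (500, 504, 512):\n    touched = {l1d_set(8 * ld * i) for i in range(16)}\n    print(ld, len(touched))\n# ld=512 maps every element to one set; ld=504 uses 16 distinct sets\n\end{verbatim}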
Similarly,\nfor the L2~cache with 1024~sets, consecutive row-elements are associated to the\nsame cache set for leading dimensions that are multiples of~8192, and multiples\nof~4096 utilize only two sets.\n\n\input{model\/figures\/ld512}\n\n\begin{example}{Cache conflict misses caused by leading\n dimensions}{model:args:ld:512}\n \Cref{fig:model:ld512} shows the runtime of\n \displaycall\dtrsm{\n \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N,\n \arg m{256}, \arg n{256}, \arg{alpha}{1.0},\n \arg AA, \varg{ldA}{ld}, \arg BB, \varg{ldB}{ld}\n }\n i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{256\n \times 256}$, for leading dimensions $ld = 256, \ldots, 8320$ in steps\n of~128 on a \sandybridge and a \haswell with single-threaded \openblas,\n \blis, and \mkl.\n\n For most setups, the runtime spikes above the baseline at multiples of~512.\n However, the average magnitude of these spikes ranges\n from~\SI{.14}{\percent} for \blis on the\n \sandybridgeshort~(\ref*{plt:sbblis}) to~\SI{8.37}{\percent} for \openblas\n on the \haswellshort~(\ref*{plt:hwopen}). Especially for\n \openblas~(\ref*{plt:sbopen}, \ref*{plt:hwopen}), there are additional, yet\n lower spikes of \SI{1.40}{\percent} at multiples of~256. Furthermore, on\n the \haswellshort for both \openblas~(\ref*{plt:hwopen}) and\n \blis~(\ref*{plt:hwblis}) the spikes are especially high at $ld = 4096$\n and~8192, exceeding the baseline by, respectively,\n \SIlist{6.55;11.24}\percent.\n\end{example}\n\nTo prevent distortions from unfortunate leading dimensions in our model\ngeneration altogether, we will \definition{avoid multiples of~256} for these\narguments.\n\nNote that by using leading dimensions that are multiples of~8, yet not of~256,\nin our measurements, our models will not yield accurate predictions for kernel\ninvocations that do not follow this pattern. However, predicting the\nperformance of such unfortunate invocations, which can be systematically\navoided, is not part of our models' purpose and would exceed the scope of this\nwork.\n\n\n\subsubsection{Small-Scale Behavior}\n\label{sec:model:args:size:small}\n\nOptimizations of compute kernels commonly involve vectorization and loop\nunrolling of length~4 or~8. These optimizations typically have a direct\ninfluence on a kernel's runtime for small variations of the size arguments.\n\n\input{model\/figures\/size8}\n\n\begin{example}{Small variations of size arguments}{model:args:size:8}\n \Cref{fig:model:size8} shows the runtime of\n \displaycall\dtrsm{\n \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N,\n \varg mn, \varg nn, \arg{alpha}{1.0},\n \arg AA, \arg{ldA}{400}, \arg BB, \arg{ldB}{400}\n }\n i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{n\n \times n}$, for $n = 256, \ldots, 320$ in steps of~1 on a \sandybridge and a\n \haswell with single-threaded \openblas, \blis, and \mkl.\n\n All setups show periodic patterns in their runtimes.
While these patterns\n differ between the implementations, most have local runtime minima at\n multiples of~4, and all of them have minima at multiples of~8.\n\end{example}\n\nTo avoid runtime artefacts introduced by vectorization and loop unrolling, we\nwill build our models on measurements that \definition{use multiples of~8} for\nall size arguments.\n\n\n\subsubsection{Piecewise Polynomial Behavior}\n\label{sec:model:args:size:large}\n\nSince an operation's minimal \flop-count is generally a (multivariate)\npolynomial function of the size arguments, one might expect that (for\ncompute-bound kernels) it translates directly into a similarly polynomial\nruntime. However, since a kernel's performance is generally not constant for\nvarying operand sizes, a single polynomial is often insufficient to accurately\nrepresent a kernel's runtime for large ranges of problem sizes.\n\n\input{model\/figures\/size}\n\n\begin{example}{Polynomial fitting for size arguments}{model:args:size}\n \Cref{fig:model:size} shows the runtime of\n \displaycall\dtrsm{\n \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N,\n \varg m{n}, \varg n{n}, \arg{alpha}{1.0},\n \arg AA, \arg{ldA}{1000}, \arg BB, \arg{ldB}{1000}\n }\n i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{n\n \times n}$, with $n = 24, \ldots, 536$ in steps of~16 on a \sandybridge and\n a \haswell with single-threaded \openblas, \blis, and \mkl.\n\n At first sight, the runtime for all setups follows a smooth cubic\n behavior---perfectly in line with the operation's minimal cost of\n \SIvar{n^3}\flops. However, if for each setup we fit the measurements with\n a single cubic polynomial that minimizes the least-squares relative error\n (details in~\cref{sec:model:fit}), we are left with the approximation error\n shown in~\cref{fig:model:size:err1}. The absolute relative approximation\n error\footnotemark{} lies between \SI{.86}{\percent} for \blis on the\n \sandybridgeshort~(\ref*{plt:sbblis}) and \SI{11.22}{\percent} for \openblas\n on the \haswellshort~(\ref*{plt:hwopen}); on average it\n is~\SI{5.30}\percent.\n\n If we look more closely at the approximation errors in\n \cref{fig:model:size:err1}---especially for \openblas on the\n \haswellshort~(\ref*{plt:hwopen})---we observe a piecewise smooth(er)\n behavior. Motivated by this observation, we now fit not one polynomial to\n each data-set but two: one for the first half ($n \leq 280$) and one for the\n second half ($n \geq 280$). For this two-piece polynomial fit, the\n approximation error is shown in~\cref{fig:model:size:err2}: The largest\n error is now reduced to~\SI{5.25}{\percent} for \mkl on the\n \haswellshort~(\ref*{plt:hwmkl}), and the average error\n is~\SI{2.55}{\percent}---less than half of the original approximation error.\n (Based on a more detailed analysis, a better splitting point than\n $\frac{24+536}2 = 280$ could have been chosen, but as\n \cref{fig:model:size:err1} shows, such choices would be notably different for\n each setup.) Within the new approximation, the error for the second\n polynomial ($n \geq 280$) is already quite low---on\n average~\SI{.38}\percent. Hence, in a second step, we further subdivide\n only the first half of the domain ($n \leq 280$) at~$n = 152$, and generate\n a new approximation consisting of three polynomials.
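(The fitting underlying each piece is ordinary linear least squares on\n relative residuals; the following minimal \python sketch, in which the\n measurement arrays \code x and \code y are placeholders, shows one way to\n implement it.)\n\begin{verbatim}\nimport numpy as np\n\ndef fit_relative(x, y, degree=3):\n    """Polynomial fit minimizing the sum of squared *relative*\n    residuals: each Vandermonde row is scaled by 1/y_i."""\n    V = np.vander(x, degree + 1)      # columns: x^3, x^2, x, 1\n    W = V / y[:, None]\n    coeffs, *_ = np.linalg.lstsq(W, np.ones_like(y), rcond=None)\n    return np.poly1d(coeffs)\n\n# two-piece fit with the split point n = 280 used above:\n# left  = fit_relative(x[x <= 280], y[x <= 280])\n# right = fit_relative(x[x >= 280], y[x >= 280])\n\end{verbatim}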
As\n \cref{fig:model:size:err3} shows, the error of this approximation is\n below~\SI{1.28}{\percent}~(\ref*{plt:hwmkl}) in all cases and on\n average~\SI{.71}\percent.\n\end{example}\n\footnotetext{%\n For a polynomial~$p(x)$ fit to measurements~$y_1, \ldots, y_N$ in\n points~$x_1, \ldots, x_N$ we consider the error $1 \/ N \sum_{i=1}^N \lvert\n y_i - p(x_i) \rvert \/ y_i$. Note that the least-squares fitting minimizes\n not this sum of absolute relative errors but the sum of squared relative\n errors.\n}\n\nTo account for the not purely polynomial influence of a kernel's size arguments\non its runtime, we will represent it in our models through \definition{piecewise\npolynomials}. Details on such piecewise polynomial representations and\ntheir automated generation are given in\n\cref{sec:model:fit,sec:model:adaptive,sec:model:config}.\n\n\n\n\n\subsection{Configuration Parameters}\n\nThe adaptive refinement is controlled by a total of eight\n\definition{configuration parameters}. They allow us to control the model\naccuracy, but also affect the time spent on the required measurements. The\neight parameters regulate the model generation as follows:\n\begin{itemize}\n \item To represent the runtime of a kernel, the monomial basis for the\n fitted polynomials needs to at least cover the kernel's asymptotic\n complexity (i.e., its minimal \flop-count). To better represent\n performance variations, however, the maximum degree of the monomials can\n be increased in each dimension (i.e., size argument). We refer to\n this increase as \definition[overfitting:\\between 0\n and~2]{overfitting}; practical values are {\em between 0 and~2}.\n\n \item To fit a polynomial to a routine's runtime, the number of sampling\n points along each dimension needs to be at least one more than the\n corresponding polynomial degree. However, since this minimal number of\n points yields a polynomial that fits the measurements perfectly, we\n cannot use it to compute an approximation error. We hence increase the\n number of sampling points per dimension by at least one, and to further\n improve the approximation accuracy, additional points can be added; we\n refer to the total number of points added as\n \definition[oversampling:\\between 1 and~10]{oversampling}; practical\n values are {\em between 1 and~10}.\n\n \item We introduced two alternative ways to \definition[distribution\n grid:\\Cartesian or Chebyshev]{distribute} sampling points on {\em\n grids} that cover the domains of problem sizes: a {\em Cartesian} grid\n and a {\em Chebyshev} grid.\n\n \item For each sampling point, we perform several \definition[measurement\n repetitions:\\between 5 and~20]{measurement repetitions}; practical\n values are {\em between 5 and~20}.\n\n \item From the repetitions, we compute several runtime summary statistics:\n minimum, median, maximum, average, and standard deviation.
One of these\n is selected as the \definition[reference statistic:\\minimum or\n median]{reference statistic}; practical choices are the {\em minimum and\n median}.\n\n \item From the absolute relative errors in the reference statistic for all\n sampling points, we compute the \definition[error measure:\\average,\n maximum, or 90th~percentile]{error measure}, which is these relative\n errors' {\em average, maximum, or 90th~percentile}.\n\n \item The first termination criterion for the adaptive refinement process is\n the approximation accuracy: The refinement stops when the computed\n error measure is below a \definition[target error bound:\\between\n {\SIlist[detect-all=true]{1;5}\percent}]{target error bound}; practical\n values for this bound are {\em between\n \SIlist[detect-all=true]{1;5}\percent}.\n\n \item The second termination criterion is the size of the domains: The\n refinement stops when a new domain is smaller than a \definition[minimum\n width:\\32 or~64]{minimum width} along all dimensions; typical values\n are {\em 32 and~64}.\n\end{itemize}\n\n\n\subsection{Trade-Off and Configuration Selection}\n\nIn the following, we analyze the accuracy of our models and their generation\ncost, and select a configuration to generate the models for the performance\npredictions in \cref{ch:pred}.\n\nWe consider the model generation for\n\displaycall\dtrsm{\n \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N,\n \arg m{\it\color{blue}m}, \arg n{\it\color{blue}n}, \arg{alpha}{1.0},\n \arg AA, \arg{ldA}{5000}, \arg BB, \arg{ldB}{5000}\n}\ni.e., $\dmB[height=.5] \coloneqq \dmAi[size=.5] \dmB[height=.5]$ with\n$\dmA[size=.5] \in \R^{m \times m}$ and $\dmB[height=.5] \in \R^{m \times n}$,\nfor sizes $m \in [24, 536]$ and $n \in [24, 4152]$ on a \sandybridge and a\n\haswell using single-threaded \openblas, \blis, and \mkl.\n\nFor each setup, our first step is to exhaustively measure the \dtrsm[LLNN]'s\nruntime 15~times in all points $(m, n)$ in the domain $[24, 536] \times [24,\n4152]$ at which both~$m$ and~$n$ are multiples of~8---a total of \num{504075}\nmeasurements. These measurements are used both as the basis for our model\ngeneration and to evaluate the model accuracy across the entire domain (in\ncontrast to the model generation, which can evaluate the error only in its\nsampling points).\n\n\input{model\/tables\/config}\n\nWe generate models for all 2880~configurations obtained from combining the\nparameter values shown in \cref{tbl:model:config}. These configurations result\nin a wide range of models with significantly different accuracies and generation\ncosts. To evaluate them, we quantify the \definition{model error} as the\nrelative error of the predicted minimum runtime~$p(\x_i)$ with respect to\nthe measured minimum~$y_i$, averaged across all $N = \num{33605}$ points~$\x_i$ of the\ndomain:\n\[\n \text{model error} \defeqq\n \frac1N \sum_{i=1}^N \frac{\lvert p(\x_i) - y_i \rvert}{y_i} \enspace;\n\]\nfurthermore, we define the \definition{model cost} as the total runtime of the\nrequired measurements used as samples.\n\n\input{model\/figures\/modelplots}\n\input{model\/tables\/modelplots}\n\n\begin{example}{Model accuracy}{model:acc}\n \Cref{fig:model:modelplots} shows the structure and point-wise accuracy of\n the four models with minimum and maximum accuracy and cost for\n single-threaded \openblas on a \sandybridge; \cref{tbl:model:modelplots}\n lists the corresponding configurations.
Both the cheapest and the least\n accurate model use only a single polynomial for the entire domain and\n offer only poor accuracy. The expensive and accurate models, on the other\n hand, subdivide the domain repeatedly, and thus find a better-fitting\n piecewise polynomial.\n\end{example}\n\n\input{model\/figures\/tradeoff}\n\nThe accuracy and cost of all 2880~generated models for each setup are presented\nin \cref{fig:model:tradeoff:full}; in this plot, the preferable models with low\nerror and cost are found close to the origin. All setups share the same general\ntrend: Models with low accuracy are quite cheap, while models with high\naccuracy are more expensive. Hence, we are faced with a\n\definition[trade-off:\\accuracy vs.~cost]{trade-off between accuracy and cost}.\nHowever, the configuration selection is not straightforward: Models with\npractically identical accuracy are up to a factor of~16 apart in generation\ncost, and a cheap and accurate configuration for one setup may be neither for\nother setups. In the following, we describe how we approach the search-space of\nall considered configurations, and identify a desirable default configuration\nthat we subsequently use to generate the models for all setups and kernels\nneeded for our performance predictions in \cref{ch:pred}.\n\nBefore we begin to reduce our search space, we notice that on the \haswellshort,\nthe models for both \blis~(\ref*{plt:hwblis:circ}) and\n\mkl~(\ref*{plt:hwmkl:circ}) are on average less than half as accurate as those\nfor the other setups. The cause is a rather jagged performance behavior, which is\ndifficult to represent accurately. Hence, to identify a good default\nconfiguration, we consider only the \sandybridgeshort~(\ref*{plt:sbopen:circ},\n\ref*{plt:sbblis:circ}, \ref*{plt:sbmkl:circ}) and \openblas on the\n\haswellshort~(\ref*{plt:hwopen:circ}).\n\nOur first step is to \definition{prune by accuracy}: We discard any\nconfiguration that for any of the considered setups yields a model error larger\nthan $1.5\times$ the minimum error for that setup; in other words, all\nremaining configurations generate models that are at most \SI{50}{\percent}\nless accurate than the most accurate model. This step reduces the number of\npotential configurations from 2880 to~163; all remaining configurations use an\noversampling value of~3 or higher, and a target error bound of~\SI1\percent.\n\Cref{fig:model:tradeoff:within2err} shows the 163~remaining models' accuracy\nand cost.\n\n\input{model\/tables\/tradeofffinal}\n\nOur second step is to similarly \definition{prune by cost}: We discard any\nconfiguration that for any considered setup takes longer than the first quartile\nin generation time for that setup; in other words, the remaining models are all\nwithin the \SI{25}{\percent} that are generated the fastest. This step further\nreduces the number of potential configurations from 163 to~14, as shown in\n\cref{fig:model:tradeoff:belowmedcost}.\n\nThe parameter values for the 14~remaining configurations are shown in\n\cref{tbl:model:tradeoff:final}. For each parameter, we can find one value that\nis common to at least 8~of the 14~configurations (highlighted in {\bf bold}).\nWe choose our \definition{default configuration} by selecting this most common\nvalue for each parameter. It corresponds to line~(10) in\n\cref{tbl:model:tradeoff:final} (highlighted in {\bf\color{blue}blue}), and is\nmarked for each setup in \cref{fig:model:tradeoff:belowmedcost}.
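The two pruning steps translate directly into a few lines of array logic; the\nfollowing minimal \python sketch, in which the per-setup arrays \code{error}\nand \code{cost} are hypothetical, summarizes the selection:\n\begin{verbatim}\nimport numpy as np\n\ndef prune(error, cost):\n    """error[s, c] and cost[s, c]: model error and generation cost\n    of configuration c on setup s (the considered setups only)."""\n    # 1) prune by accuracy: error within 1.5x of the per-setup\n    #    minimum, for every setup\n    keep = np.flatnonzero(\n        (error <= 1.5 * error.min(axis=1, keepdims=True)).all(axis=0))\n    # 2) prune by cost: within the first quartile of generation\n    #    time of the remaining configurations, for every setup\n    q1 = np.percentile(cost[:, keep], 25, axis=1, keepdims=True)\n    return keep[(cost[:, keep] <= q1).all(axis=0)]\n\end{verbatim}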
Note that this\ndefault configuration also serves as a good compromise between accuracy and cost for\n\blis~(\ref*{plt:hwblis:circ}) and \mkl~(\ref*{plt:hwmkl:circ}) on the\n\haswellshort, which were not included in the pruning process.\n\n\n\subsection{Variations of the Default Configuration}\n\nWhile this default configuration was found to yield good accuracies at reasonable costs\nfor almost all encountered kernels, it proves to be quite expensive for kernels\nwith \definition[3D case (\dgemm)]{three degrees of freedom}, which for the\npredictions in \cref{ch:pred} only applies to {\em\dgemm} with its three size\narguments~\code m, \code n, and~\code k. To reduce the modeling cost for this\nkernel, we adjust the default configuration by reducing the overfitting from~2\nto~0, and increasing the minimum width from~32 to~64.\n\nFurthermore, the performance of \blas kernels becomes less smooth when we bring\n\definition{multi-threading} into the picture. Hence, to avoid excessive\npartitioning as seen in \cref{fig:model:modelplots:maxcost}, we increase the\nminimum width for all models to~64, and for \dgemm to~256.\n\n\n\n\n\n\n\n\n\n\chapter{Performance Modeling}\n\chapterlabel{model}\n{\n \input{model\/commands}\n\n \input{model\/intro}\n\n \section{Kernel Argument Analysis}\n \label{sec:model:args}\n \input{model\/args}\n\n \subsection{Flag Arguments}\n \label{sec:model:args:flag}\n \input{model\/arg-flag}\n\n \subsection{Scalar Arguments}\n \label{sec:model:args:scalar}\n \input{model\/arg-scalar}\n\n \subsection{Leading Dimension Arguments}\n \label{sec:model:args:ld}\n \input{model\/arg-ld}\n\n \subsection{Increment Arguments}\n \label{sec:model:args:inc}\n \input{model\/arg-inc}\n\n \subsection{Size Arguments}\n \label{sec:model:args:size}\n \input{model\/arg-size}\n\n \subsection{Data Arguments}\n \label{sec:model:args:data}\n \input{model\/arg-data}\n\n \subsection{Summary}\n \label{sec:model:args:sum}\n \input{model\/arg-sum}\n\n \section{Model Generation}\n \label{sec:model:generation}\n \input{model\/generation}\n\n \subsection{Model Structure}\n \label{sec:model:structure}\n \input{model\/structure}\n\n \subsection{Sample Distribution}\n \label{sec:model:grids}\n \input{model\/grids}\n\n \subsection{Repeated Measurements and Summary Statistics}\n \label{sec:model:stat}\n \input{model\/stat}\n\n \subsection{Relative Least-Squares Polynomial Fitting}\n \label{sec:model:fit}\n \input{model\/fit}\n\n \subsection{Adaptive Refinement}\n \label{sec:model:adaptive}\n \input{model\/adaptive}\n\n \section{Model Generator Configuration}\n \label{sec:model:config}\n \input{model\/config}\n\n \section{Summary}\n \label{sec:model:sum}\n \input{model\/model-sum}\n}\n\n\n\n\n\n\n\n\subsection{Varying Problem Size}\n\label{sec:pred:chol:n}\n\n\input{pred\/figures\/cholperf}\n\nIn our first analysis, we use only one of the \sandybridgeshort's 8~cores and\nvary the problem size between~$n = 56$ and~4152 in steps of~64 while keeping the\nblock size fixed at~$b = 128$. \Cref{fig:pred:chol:time_perf} shows the runtime\nand performance of predictions and measurements for this setup side-by-side.\n(Since the red line \legendline[very thick, darkred] at the top of the\nperformance plots indicates the processor's theoretical peak performance, such\nplots can also be interpreted as compute-bound efficiencies with\n\SI0{\percent}~at the bottom and \SI{100}{\percent}~at the top.)
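For reference, the performance values in these plots follow directly from the\nmeasured or predicted runtime~$t(n)$ and the operation's minimal \flop-count,\nwhich for the Cholesky decomposition of an $n \times n$ matrix is the customary\n$n^3 \/ 3$~\flops:\n\[\n \text{performance}(n) = \frac{n^3 \/ 3}{t(n)} \enspace, \qquad\n \text{efficiency}(n) = \frac{\text{performance}(n)}{\text{peak performance}} \enspace.\n\]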
The\npredictions give a good idea of the algorithm behavior: While the runtime\nincreases cubically with the problem size~$n$, the performance is low for small\nmatrices and increases steadily towards \SI{18}{\giga\flops\per\second}. At\nfirst sight, the predictions match the measurements well.\n\n\input{pred\/figures\/cholerr}\n\nTo further study the accuracy of our predictions, the top half of\n\cref{fig:pred:chol:err} presents the prediction errors. As one might expect,\n\cref{fig:pred:chol:err:time} indicates that with increasing problem size, the\nmagnitude of the runtime prediction error increases for all summary\nstatistics---most notably for the maximum~(\ref*{plt:max}). Since in contrast\nthe performance prediction error~(\cref{fig:pred:chol:err:perf}) is not affected\nby the decomposition's cubic runtime, we instead observe the largest prediction\nerrors for the smallest problem size~$n = 56$. Furthermore, we find that the\nminimum performance prediction error~(\ref*{plt:min}) seems to alternate between\ntwo separate levels: one around \SI0{\mega\flops\per\second} and one close to\n\SI{200}{\mega\flops\per\second}. This behavior, which is also already somewhat\nvisible in \cref{fig:pred:chol:perf:meas,fig:pred:chol:err:time}, is caused by\nmeasurement fluctuations as discussed in \cref{sec:meas:fluct:longterm}.\n\nWe gain more insights from the prediction errors when we compare them to the\npredicted quantities. For this purpose, the bottom half of\n\cref{fig:pred:chol:err} presents the relative runtime and performance\nprediction errors. The relative errors for these two metrics are almost identical\nup to a change in sign---since the runtime is generally slightly\nunderestimated, the performance is somewhat overestimated. Focusing on the\nruntime in \cref{fig:pred:chol:re:time}, we notice that the average standard\ndeviation ARE is~\SI{194.70}\percent~(\ref*{plt:std}), which, as in\n\cref{ex:pred:err}, exceeds the error of the other prediction statistics by far.\nFurthermore, the previously addressed measurement fluctuations are also clearly\nvisible in the maximum~(\ref*{plt:max}) as variations with a magnitude\nof~\SI{1.5}\percent. The minimum~(\ref*{plt:min}), median~(\ref*{plt:med}), and\nmean~(\ref*{plt:avg}) AREs, on the other hand, quickly fall below~\SI2{\percent}\nfor matrices larger than~$n = 200$ and further below~\SI1{\percent}\nbeyond~$n \approx 1000$; across all chosen problem sizes, the average AREs for\nthe minimum, median, and mean runtime are, respectively,\n\SIlist{.78;.91;.90}\percent.\n\nAmong the eight metrics presented in\n\cref{fig:pred:chol:time_perf,fig:pred:chol:err}, we gained the most insight\nfrom 1)~the performance prediction (\cref{fig:pred:chol:perf:pred}), which gives\na good idea of both the algorithm's performance and efficiency, and 2)~the\nrelative runtime prediction error (\cref{fig:pred:chol:re:time}), which provides\nnot only an accuracy measure independent of the operation, the algorithm, and\nthe actual performance, but also indicates whether the runtime is under- or\noverestimated. Hence, we use these two types of plots in our following\nanalyses.\n\n\n\subsection{Varying Block Size}\n\label{sec:pred:chol:b}\n\n\input{pred\/figures\/cholnb}\n\nIn our next analysis, we fix the problem size to~$n = 3000$ and vary the block\nsize between~$b = 24$ and~536 in steps of~8.
\Cref{fig:pred:chol:b} presents\nthe performance prediction and the relative runtime prediction error for this\nscenario using single-threaded \openblas on the \sandybridgeshort.\n\nThe performance prediction (\cref{fig:pred:chol:b:perf}) exhibits the typical\ntrade-off for any blocked algorithm: While for both small and large block sizes\nthe algorithm attains rather poor performance, in between it reaches up to\n\SI{17.91}{\giga\flops\per\second}, which corresponds to an efficiency\nof~\SI{85.10}\percent. The cause for this trade-off and the selection of block\nsizes are addressed in detail in \cref{sec:pred:b}.\n\nCompared to our previous performance predictions\n(\cref{fig:pred:chol:perf:pred}), \cref{fig:pred:chol:b:perf} exhibits a far\nwider spread of the summary statistics for large block sizes. In particular,\nthe predicted minimum performance~(\ref*{plt:min}) drops drastically, which\nimmediately causes the mean performance~(\ref*{plt:avg}) to decrease and the\npredicted standard deviation~(\ref*{plt:stdf}) to increase enormously.\n\nThe relative runtime prediction error (\cref{fig:pred:chol:b:re}) indicates that\nthe predicted performance fluctuations are not present in the performance\nmeasurements: The maximum and mean relative errors (\ref*{plt:max} and\n\ref*{plt:avg}) increase drastically for large block sizes, suggesting that the\nmodel generation was influenced by large outlier measurements. (A repetition of\nthe generation process would likely encounter different outliers and distort\nthese statistics at other block sizes.) The minimum~(\ref*{plt:min})\nand median~(\ref*{plt:med}), on the other hand, are with few exceptions\npredicted within~\SI1\percent; their average prediction AREs are\n\SI{.36}{\percent} (minimum \ref*{plt:min}) and \SI{.42}{\percent} (median\n\ref*{plt:med}).
On the other hand, both the\nminimum and median predictions are overall quite accurate with an average ARE of\nonly~\SI{.45}\percent.\n\nSince in the following we compare multiple alternative algorithms and\nhardware\/software setups, we limit our focus to a single statistic.\nWhile in the previous analysis the runtime minimum and median were predicted with\nequivalent accuracy, in practice the expected performance is better represented\nby the median runtime.\footnote{%\n In scenarios other than our considered single-node computations, different\n measures might be preferable; e.g., the 90th~percentile runtime.\n} Hence, from now on we use the \definition[accuracy\nmeasure: relative median runtime prediction error]{relative median runtime\nprediction error}~\Q t{med}{RE} as our {\em prediction accuracy measure}.\n\n\n\subsection{Other Data-Types}\n\label{sec:pred:chol:dt}\n\n\input{pred\/tables\/cholfp}\n\input{pred\/figures\/cholfp}\n\nSo far, we have considered the Cholesky decomposition of real double-precision\nmatrices; however, the same algorithm is also applicable to other data-types.\nFor the four de-facto standard numerical data-types (real and complex\footnote{%\n For the complex cases, the Cholesky decomposition is of the form $L L^H\n \coloneqq A$, where $A$~must be Hermitian positive definite (HPD).\n} floating-point numbers in single- and double-precision)\n\cref{tab:pred:chol:fp} summarizes the algorithm's \blas and \lapack kernels,\nand \Cref{fig:pred:chol:fp} presents our models' performance predictions and\ntheir accuracy. (For each data-type, we generated a separate set of performance\nmodels.)\n\nIn the performance predictions (\cref{fig:pred:chol:fp:perf}), we observe that\nthe real double-precision version~(\ref*{plt:dt:d}) is most efficient (with\nrespect to its theoretical peak performance); this was to be expected because\n\openblas is most optimized for this data-type. In contrast, it is somewhat\nsurprising that, while single-precision complex~(\ref*{plt:dt:c}) is noticeably\nmore performant than single-precision real~(\ref*{plt:dt:s}), double-precision\ncomplex~(\ref*{plt:dt:z}) does not exceed an efficiency of~\SI{50}\percent.\n\nAlthough the algorithm's performance for the four data-types differs\nsignificantly, \cref{fig:pred:chol:fp:perf} reveals that our models predict the\nruntime for all of them equally well. Moreover, for the comparatively\ninefficient double-precision complex variant~(\ref*{plt:dt:z}), the prediction\nis already notably accurate for small problem sizes below~$n = 1000$.\n\nWith equally accurate predictions demonstrated for four data-types, we will in\nthe following focus on real operations in double-precision.\n\n\n\subsection{Multi-Threaded \blas}\n\label{sec:pred:chol:mt}\n\n\input{pred\/figures\/cholp}\n\nFinally, we consider how multi-threading (through \openblas) impacts the\nalgorithm's performance and our predictions' accuracy. For this purpose,\n\cref{fig:pred:cholp} presents the predicted performance of the Cholesky\ndecomposition and the prediction accuracy with 1, 2, 4, and 8~threads on the\n8-core \sandybridgeshort. (For each of these four levels of parallelism, a\nseparate set of performance models was generated.)\n\nThe predictions show that, while the performance grows with the number of\nthreads, the efficiency decreases from~\SI{87.74}{\percent} with one thread to a\nmaximum of~\SI{70.78}{\percent} with eight threads.
Furthermore, the\nperformance curves become less smooth with increased parallelism.\n\nConsidering our predictions' accuracy, we notice that for small problem sizes\nbelow~$n = 500$, the prediction ARE increases significantly when more threads\nare added. Beyond this point, however, the predictions for 1~(\ref*{plt:nt:1})\nand 2~threads~(\ref*{plt:nt:2}) are both highly accurate with an average ARE\nof~\SI{.46}{\percent}; the predictions for 4~(\ref*{plt:nt:4}) and\n8~threads~(\ref*{plt:nt:8}) are slightly less accurate and the AREs fluctuate\naround~\SI1\percent. Note that the large fluctuations within the ARE for the\nmulti-threaded algorithms are caused by the combination of the block size~$b =\n128$ and the chosen problem sizes in steps of~64. While with\n8~threads~(\ref*{plt:nt:8}) these fluctuations are represented by our\npredictions to some degree, with 2~(\ref*{plt:nt:2}) and\n4~threads~(\ref*{plt:nt:4}), they are most striking for large problem sizes,\nwhere our models do not predict such fluctuations.\n\n\n\subsection{Summary}\n\label{sec:pred:chol:sum}\n\nWe studied the blocked Cholesky decomposition algorithm~3 on a \sandybridge\nusing \openblas with varying problem and block sizes, data-types, and kernel\nparallelism. We analyzed this algorithm's measured and predicted runtime and\nperformance to evaluate the accuracy of our predictions, and selected the\nrelative median runtime prediction error~\Q t{med}{RE} as our primary accuracy\nmeasure.\n\n\n\n\n\n\n\n\n\n\n\subsection{Single-Threaded \blas}\n\label{sec:pred:acc:st}\n\nWe begin with a study of the single-threaded prediction accuracy with \lapack's\ndefault block size ($b = 64$, except for \dgeqrf with~$b = 32$). While these\nare generally sub-optimal configurations and often even sub-optimal algorithms\nfor the performed operations, this configuration is unfortunately still\nencountered frequently in application codes that use the reference \lapack\nimplementation. As such, it forms a rather canonical reference for the\nevaluation of our predictions.\n\n\input{pred\/figures\/accst}\n\input{pred\/tables\/accst}\n\n\Cref{fig:pred:lapack:st} presents the relative runtime prediction error~\Q\nt{med}{RE} for this scenario. For all algorithms and setups, our\npredictions are mostly within \SI5{\percent}~of the measured runtime, and in\nmany situations considerably closer. The runtime prediction ARE averaged across\nall problem sizes for each routine and setup is summarized in\n\cref{tbl:pred:acc:st}: It ranges from~\SIrange{.71}{3.93}\percent, and its\naverage and median are, respectively, \SIlist{1.91;1.69}\percent.
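Spelled out, and consistent with the error definitions used for the model\ngeneration, the tabulated average is\n\[\n \Q t{med}{ARE} =\n \frac1N \sum_{i=1}^N\n \frac{\lvert \hat t_{\text{med}}(n_i) - t_{\text{med}}(n_i) \rvert}\n {t_{\text{med}}(n_i)} \enspace,\n\]\nwhere $t_{\text{med}}(n_i)$ and $\hat t_{\text{med}}(n_i)$ denote the measured\nand predicted median runtimes at the $i$-th of the $N$~considered problem\nsizes.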
Overall, the\npredictions are slightly more accurate on the \sandybridge (average $\Q\nt{med}{ARE} = \SI{1.66}\percent$) with the lowest average $\Q t{med}{ARE} =\n\SI{1.22}\percent$ for \openblas~(\ref*{plt:sbopen}); on the \haswell (average\n$\Q t{med}{ARE} = \SI{2.16}\percent$), the predictions are least accurate for\n\mkl~(\ref*{plt:hwmkl}) with an average of $\Q t{med}{ARE} = \SI{2.26}\percent$.\n\nMost routines are predicted equally well (with an average \Q t{med}{ARE} around\n\SI{1.5}\percent) with two exceptions: \dsygst[1L] (average $\Q t{med}{ARE} =\n\SI{2.63}\percent$) and \dgeqrf (average $\Q t{med}{ARE} = \SI{2.87}\percent$).\n\begin{itemize}\n \item For the two-sided linear system solver \dsygst,\n \cref{fig:pred:accst:dsygst} reveals that for most setups, the\n predictions consistently underestimate the algorithm runtime for large\n problem sizes~$n$.\n\n A quick calculation shows that this effect is related to the size of the\n last-level cache~(L3): On the \haswellshort, the problem emerges\n beyond~$n \approx 2000$, at which point the two operands~\dm A (symmetric\n in lower-triangular storage) and~\dm[lower]L\lowerpostsep take up\n $\SIvar{2 \times \frac{2000^2}2}\doubles \approx\n \SI{30.52}{\mebi\byte}$---slightly more than the L3~cache of\n \SI{30}{\mebi\byte}. On the \sandybridgeshort with \SI{20}{\mebi\byte}\n of L3~cache, the effect is accordingly already visible beyond~$n \approx\n 1600$.\n\n The cause for the underestimation for large problems is as follows: Our\n models are based on repeated kernel measurements, which operate on\n cached (``warm'') data as long as all of the kernel's arguments fit in\n the cache. However, each traversal step of \dsygst[1L]\n (\cref{alg:dsygst}) uses two separate kernels (namely \dsyrk[LN] and\n \dtrsm[LLNN]) that operate on the trailing parts of \dm A and\n \dm[lower]L\lowerpostsep{}---since these do not fit in the cache\n simultaneously, they are mutually evicted by these kernels, and hence\n have to be loaded from main memory repeatedly (``cold'' data). To\n summarize, our models estimate fast operations on cached data, while in\n the algorithm the operations are slower due to cache misses.\n\n A more detailed study of caching effects within blocked algorithms and\n attempts to account for them are presented in \cref{ch:cache}.\n\n Note that only \dsygst is affected by caching effects on this scale\n because all other routines involve only one dense operand.\n\n \item For the QR~decomposition \dgeqrf, \cref{fig:pred:accst:dgeqrf} reports\n that the runtime for almost all setups is consistently\n underestimated---especially for small problems.\n\n The cause is the transposed matrix copy and addition (see\n \cref{alg:dgeqrf}), which account for about~\SI4{\percent} of the\n runtime for small problems ($n \approx 250$) and \SI1{\percent} for\n large problems ($n \approx 4000$): The copy, performed by a sequence of\n $b = 32$~\dcopy{}s, is underestimated by~$2\times$ to~$7\times$ because\n our models do not account for caching effects; the addition, which is\n inlined as two nested loops, is not accounted for at all.\n\end{itemize}
In contrast to the single-threaded\npredictions, we use a block size of~$b = 128$ for all algorithms---while this\nconfiguration is certainly not optimal for all algorithms and problem sizes, it\ngenerally yields better performance than \lapack's default values.\n\n\input{pred\/figures\/accmt}\n\input{pred\/tables\/accmt}\n\n\Cref{fig:pred:lapack:mt} presents the relative runtime prediction errors~\Q\nt{med}{RE} for this scenario, and \cref{tbl:pred:acc:mt} summarizes their\naveraged AREs~\Q t{med}{ARE}. Compared to the single-threaded case, the\nprediction errors are across the board around $2.5\times$~larger, with a total\naverage of $\Q t{med}{ARE} = \SI{4.85}\percent$. The predictions are roughly\nequally accurate across the two architectures and the two \blas implementations.\n\nConsidering \cref{fig:pred:lapack:mt}, we note fluctuation patterns in the\nprediction errors of up to~\SI{10}\percent, most notably for \dsygst[1L] and\n\dtrtri[LN] using \mkl on the \haswellshort~(\ref*{plt:hwmkl}). As observed in\n\cref{sec:pred:chol:mt}, these fluctuations are an artefact of the block size~$b\n= 128$ interacting with the considered problem sizes in steps of~64: Between\nconsecutive problem sizes, the remaining matrix portions in the last step of the\nmatrix traversal alternate between widths~56 and~120.\n\nAs in the single-threaded case, the QR~decomposition's runtime is\nunderestimated by an average of~\SI{8.00}\percent, due to the \dcopy{}s and the\ninlined matrix addition. Since the latter in particular cannot make any use of\nmulti-threaded parallelism, their impact increases significantly with the number\nof available cores.\n\nFurthermore, several individual algorithms and setups are consistently under- or\noverestimated: e.g., \openblas on the \sandybridge~(\ref*{plt:sbopen}) for\n\dlauum[L] and \dpotrf[L]. These problems arise from the multi-threaded\nimplementations of \dgemm, whose irregular performance is not well represented\nin our models: Since \blas implementations distribute computations among\nthreads along a certain dimension of the operation, for small dimensions (such as\nthe block size), only a subset of the available threads is used. When the small\ndimension is increased, more threads are activated and the performance increases\nsuddenly.\n\n\n\subsection{Summary}\n\label{sec:pred:acc:sum}\n\nThis section has shown that across experiments on two processor architectures,\nthree \blas implementations, and six blocked \lapack algorithms, our models\nyield accurate predictions that are on average within~\SI{1.91}{\percent}\n(single-threaded) and \SI{4.85}{\percent} (multi-threaded) of reference\nmeasurements.
Encouraged by these accuracy results, the following sections use\nperformance predictions to target our main goals of algorithm selection and\nblock-size optimization.\n\n\section{Performance Prediction}\n \label{sec:pred:pred}\n \input{pred\/pred}\n\n \section{Accuracy Quantification}\n \label{sec:pred:acc}\n \input{pred\/acc}\n\n \section[Accuracy Case Study: Cholesky Decomposition]\n {Accuracy Case Study:\newline Cholesky Decomposition}\n \label{sec:pred:chol}\n \input{pred\/chol}\n\n \section[Accuracy Study: Blocked \lapack Algorithms]\n {Accuracy Study:\newline Blocked \lapack Algorithms}\n \label{sec:pred:lapack}\n \input{pred\/lapack}\n\n \section{Algorithm Selection}\n \label{sec:pred:var}\n \input{pred\/var}\n\n \subsection{Cholesky Decomposition}\n \label{sec:pred:var:chol}\n \input{pred\/varchol}\n\n \subsection{Triangular Inversion}\n \label{sec:pred:var:trinv}\n \input{pred\/vartrinv}\n\n \subsection{Sylvester Equation Solver}\n \label{sec:pred:var:sylv}\n \input{pred\/varsylv}\n\n \subsection{Summary}\n \label{sec:pred:var:sum}\n \input{pred\/varsum}\n\n \section{Block Size Optimization}\n \label{sec:pred:b}\n \input{pred\/b}\n\n \subsection{Cholesky Decomposition}\n \label{sec:pred:b:chol}\n \input{pred\/bchol}\n\n \subsection{Triangular Inversion}\n \label{sec:pred:b:trinv}\n \input{pred\/btrinv}\n\n \subsection{\lapack Algorithms}\n \label{sec:pred:b:lapack}\n \input{pred\/blapack}\n\n \section{Summary}\n \label{sec:pred:conclusion}\n \input{pred\/conclusion}\n}\n\n\n\n\n\n\n\n\subsubsection{Algorithms}\nThe solution to the triangular Sylvester equation is computed by traversing \dmC\nfrom the bottom left to the top right. However, in contrast to the previous\noperations, this traversal does not need to follow \dmC's diagonal; in fact, \dmC\ncan be traversed in various ways: Two algorithms traverse \dmC\nvertically, two horizontally (using $3 \times 1$ and $1 \times 3$ partitions),\nand 14~diagonally (exposing $3 \times 3$ sub-matrices), making a total of\n18~algorithms. Furthermore, as detailed in the following, the Sylvester\nequation requires two layers of blocked algorithms, resulting in a total of\n\definition[Sylvester equation:\\64~``complete'' algorithms]{64~``complete''\nalgorithms}.\n\n\input{pred\/figures\/sylv1dalgs}\n\n\Cref{algs:sylv1d} presents the four algorithms that traverse \dmC vertically or\nhorizontally, thereby exposing $3 \times 1$ or $1 \times 3$ sub-matrices; each\nof these algorithms consists of one call to \dgemm[NN] and the solution of a\nsub-problem (another triangular Sylvester equation). To obtain a ``complete''\nalgorithm, two of these algorithms with orthogonal traversals are combined---the\nfirst traverses the full~\dmC and invokes the second to solve the sub-problem in\neach iteration; the second, in turn, solves its small $b \times b$ sub-problem\nusing \lapack's unblocked \dtrsyl[NN1]. E.g., one can use algorithm~$m1$ to\ntraverse \dmC vertically and in each step apply algorithm~$n2$ to traverse the\nmiddle panel~\dm[mat11, height=.2, width=.8]{C_1} horizontally. We call the\nresulting ``complete'' algorithm~$m1n2$, and see that eight such combinations\nare possible (four choices for the outer traversal, each with two orthogonal\nchoices for the inner one): $m1n1$, $m1n2$, $m2n1$, $m2n2$, $n1m1$, $n1m2$, $n2m1$,\nand~$n2m2$.
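These combinations can be enumerated mechanically; a minimal \python sketch:\n\begin{verbatim}\n# outer traversals of the full C; the inner algorithm must traverse\n# the orthogonal dimension\nouter = ["m1", "m2", "n1", "n2"]\ninner = {"m": ["n1", "n2"], "n": ["m1", "m2"]}\ncomplete = [o + i for o in outer for i in inner[o[0]]]\nprint(complete)   # 8 combinations: m1n1, m1n2, ..., n2m1, n2m2\n\end{verbatim}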
Note that in principle the block sizes for the two layers of blocked\nalgorithms can be chosen independently; however, we limit our study to a single\nblock size for both layers.\n\n\input{pred\/figures\/sylv2dalgs}\n\nBeyond the combination of the vertically and horizontally traversing algorithms\nabove, an additional 14~algorithms traverse the matrix diagonally (with\npotentially different block sizes~$b_m$ and~$b_n$ for dimensions~$m$ and~$n$),\nand operate on a set of $3 \times 3$ sub-matrices in each iteration;\n\cref{algs:sylv2d} presents a sample of two of these algorithms (all\n14~algorithms are found in \libflame~\cite{libflameweb}). Each algorithm\nconsists of a sequence of \dgemm[NN]{}s and three solutions of sub-problems that\nare also triangular Sylvester equations. While the sub-problem involving\n\dm[mat11, size=.5]{B_{11}} of size $b_m \times b_n$ is directly solved by the\nunblocked \dtrsyl[NN1], the other two involve potentially large yet thin panels\nof~\dmC. Complete algorithms are constructed by solving each of these sub-problems\nwith an appropriate vertical or horizontal traversal\nalgorithm.\footnote{%\n Setting one of the block sizes of a diagonally traversing algorithm to the\n corresponding matrix size results in one of the vertical or horizontal\n traversal algorithms.\n} Since each of the 14~algorithms has\ntwo such sub-problems, for each of which we can choose from two algorithms, we\nend up with a total of $14 \cdot 2 \cdot 2 = 56$~possible combinations.\nTogether with the eight combinations of only vertical and horizontal traversal\nalgorithms, this results in a grand total of 64~different ``complete'' blocked\nalgorithms.\n\n\n\subsubsection{Algorithm Selection}\n\n\input{pred\/figures\/varsylv}\n\n\Cref{fig:pred:var:sylv} presents performance predictions and measurements for\nthe Sylvester equation solver for problem sizes between~$n = 56$ and~4152 in\nsteps of~64 and block size~$b = 64$ on a \haswell using \openblas. Since the\nexecutions for this setup take between 40~minutes and 2~hours for each\nalgorithm, we only measured the eight algorithms based exclusively on orthogonal\nmatrix traversals. Our predictions, which are generated up to\n$1500\times$~faster at roughly \SI5{\second}~per algorithm, indicate that in\nterms of performance these eight algorithms are evenly spread across the entire\nrange of the 64~``complete'' algorithms.\n\nFor the single-threaded scenario, the predictions in\n\cref{fig:pred:var:sylv:pred:1} suggest that\nalgorithms~$n2m2$~(\ref*{plt:sylvn2m2}) and $m1n1$~(\ref*{plt:sylvm1n1}) are,\nrespectively, the fastest and slowest, and differ in performance\nby~\SI{9.99}\percent. The measurements in \cref{fig:pred:var:sylv:meas:1}\nconfirm that, while algorithm~$n2m2$~(\ref*{plt:sylvn2m2}) is indeed the\nfastest, algorithm~$n1m1$~(\ref*{plt:sylvn1m1}) is the slowest. While the\nperformance of algorithms~$m1n1$~(\ref*{plt:sylvm1n1}) and\n$n1m1$~(\ref*{plt:sylvn1m1}) is predicted to be almost identical, the\nmeasurements show that $m1n1$~(\ref*{plt:sylvm1n1}) is in fact up to\n\SI{3.00}{\percent} faster than $n1m1$~(\ref*{plt:sylvn1m1}).
Furthermore,\nwhile the remaining algorithms are correctly placed between the fastest and the\nslowest, they are not accurately ranked.\n\nThe predictions and measurements for the multi-threaded scenario in\n\cref{fig:pred:var:sylv:pred:12,fig:pred:var:sylv:meas:12} are at first sight\nsurprising: Compared to the single-threaded case, the attained performance is\nconsiderably lower. For matrices of size~$n = 4000$, the algorithms reach\nroughly \SI8{\giga\flops\per\second}, which corresponds to\nmerely~\SI{1.67}{\percent} of the processor's 12-core peak performance of\n\SI{480}{\giga\flops\per\second} (without \turboboost). An analysis revealed\nthat the source of the drastic increase in runtime is the \blasl1 kernel \dswap,\nwhich the unblocked \dtrsyl\footnote{%\n Technically within \code{dlasy2}, which is called from \dtrsyl.\n} uses to swap two vectors of length~4: Although the workload for this\noperation is tiny, with multiple threads \openblas (version~0.2.15) activates\nits parallelisation, which for a copy operation on only~\SI{64}{\bytes}\nintroduces an overhead of over~$200\times$ the kernel's single-threaded runtime.\n(The problem was subsequently fixed in \openblas version~0.2.16 (March 2016) and\nis not present in \mkl.)\n\nWhile the multi-threaded predictions for all 64~algorithms indicate virtually\nidentical performance and thus do not allow a meaningful performance ranking,\nthey support the crucial revelation that, using \openblas~0.2.15, the triangular\nSylvester equation is, without exception, solved considerably faster on a single\ncore than on 12~cores.\n}\n\n\n\n\n\n\n\n\n\n\section{Algorithm Generation}\n \label{sec:tensor:alggen}\n \input{tensor\/alggen}\n\n \section{Runtime Prediction}\n \label{sec:tensor:pred}\n \input{tensor\/pred}\n\n \subsection{Example Contraction: \texorpdfstring{$C_{abc} \coloneqq\n A_{ai} B_{ibc}$}{C\_abc := A\_ai B\_ibc}}\n \label{sec:tensor:extc}\n \input{tensor\/predex}\n\n \subsection{Repeated Execution}\n \label{sec:tensor:repeat}\n \input{tensor\/repeat}\n\n \subsection{Operand Access Distance}\n \label{sec:tensor:accdist}\n \input{tensor\/accdist}\n\n \subsection{Cache Prefetching}\n \label{sec:tensor:prefetch}\n \input{tensor\/prefetch}\n\n \subsection{Prefetching Failures}\n \label{sec:tensor:prefetchfail}\n \input{tensor\/prefetchfail}\n\n \subsection{First Loop Iterations}\n \label{sec:tensor:firstiter}\n \input{tensor\/firstiter}\n\n \section{Results}\n \label{sec:tensor:results}\n \input{tensor\/results}\n\n \section{Summary}\n \label{sec:tensor:conclusion}\n \input{tensor\/conclusion}\n}\n\n\n\n\n\n\n\subsection{Changing the Setup for \texorpdfstring{$C_{abc} \coloneqq A_{ai}\nB_{ibc}$}{C\_abc := A\_ai B\_ibc}}\n\label{sec:ai_ibc2}\n\n\input{tensor\/figures\/ai_ibc2}\n\nWe consider the previously studied contraction with an entirely different setup:\nWe use $a = b = c = 128$ and $i = 8, \ldots, 1000$ in steps of~8 on an\n\ivybridge with single-threaded \mkl. For this scenario,\n\cref{fig:tensor:ai_ibc2} presents the performance predictions and measurements\nfor all 36~algorithms (see \cref{sec:tensor:extc}).
Although everything,\nranging from the problem sizes to the machine and \blas library, was changed in\nthis setup, the predictions are of equivalent quality and our tool correctly\ndetermines that the \dgemm-based algorithms (\ref*{plt:ai_ibc:c_gemm},\n\ref*{plt:ai_ibc:b_gemm}) not only perform best and equally well but also reach\nover~\SI{75}{\percent} of the \ivybridgeshort's theoretical peak performance of\n\SI{28.8}{\giga\flops\per\second}.\n\n\n\subsection{Vector Contraction: \texorpdfstring{$C_a \coloneqq A_{iaj}\nB_{ji}$}{C\_a := A\_iaj B\_ji}}\n\label{sec:noblas3}\n\n\input{tensor\/algs\/iaj_ji}\n\input{tensor\/figures\/iaj_ji}\n\nFor certain contractions (e.g., those involving vectors), \dgemm cannot be\nused as a compute kernel, and algorithms can only be based on \blasl1 or~2\nkernels. One such scenario is encountered in the contraction $C_a \coloneqq\nA_{iaj} B_{ji}$, for which our generator yields 8~algorithms:\n\begin{itemize}\n \item 4 \ddot-based:\n \tensoralgname{aj}{dot}~(\ref*{plt:iaj_ji:aj_dot}),\n \tensoralgname{ja}{dot}~(\ref*{plt:iaj_ji:ja_dot}),\\\n \tensoralgname{ai}{dot}~(\ref*{plt:iaj_ji:ai_dot}),\n \tensoralgname{ia}{dot}~(\ref*{plt:iaj_ji:ia_dot});\n \item 2 \daxpy-based:\n \tensoralgname{ij}{axpy}~(\ref*{plt:iaj_ji:ij_axpy}),\n \tensoralgname{ji}{axpy}~(\ref*{plt:iaj_ji:ji_axpy}), and\n \item 2 \dgemv-based (see \cref{algs:iaj_ji}):\n \tensoralgname j{gemv}~(\ref*{plt:iaj_ji:j_gemv}),\n \tensoralgname{i'}{gemv}~(\ref*{plt:iaj_ji:i'_gemv}).\n\end{itemize}\nNote that since the last algorithm operates on slices \tind A{i,:,:}, which do not\nhave a contiguously-stored dimension, a \code{copy} kernel (indicated by the\napostrophe in the algorithm name) is required before each \dgemv[N]\n(\cref{alg:iaj_ji:i'-gemv}).\n\n\Cref{fig:tensor:iaj_ji} presents the predicted and measured performance for\nthese algorithms. Our predictions clearly identify the fastest algorithm\n\tensoralgname j{gemv}~(\ref*{plt:iaj_ji:j_gemv}) across the board.\nFurthermore, the next group of four algorithms is also correctly recognized, and\nthe low performance of the second \dgemv[N]-based algorithm\n\tensoralgname{i'}{gemv}~(\ref*{plt:iaj_ji:i'_gemv}) (due to the overhead of the\ninvolved copy operation) is correctly predicted as well.\n\n\n\subsection{Challenging Contraction: \texorpdfstring{$C_{abc} \coloneqq A_{ija}\nB_{jbic}$}{C\_abc := A\_ija B\_jbic}}\n\label{sec:ijb_jcid}\n\n\input{tensor\/algs\/ijb_jcid}\n\nWe now turn to a more complex example inspired by space-time continuum\ncomputations in the field of general relativity~\cite{generalrelativity}: $C_{abc}\n\coloneqq A_{ija} B_{jbic}$.
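The copy-plus-\dgemm idea behind the fastest algorithms for such contractions\ncan be illustrated with a minimal NumPy sketch (the sizes are hypothetical, and\nthe chosen transposition is just one of several possible \dgemm variants):\n\begin{verbatim}\nimport numpy as np\n\ni = j = 8; a = b = c = 64                 # hypothetical sizes\nA = np.random.rand(i, j, a)\nB = np.random.rand(j, b, i, c)\n\nC_ref = np.einsum("ija,jbic->abc", A, B)  # reference contraction\n\n# bring the contracted indices (i, j) together; copies B explicitly\nA2 = A.reshape(i * j, a)                  # (ij) x a\nB2 = B.transpose(2, 0, 1, 3).reshape(i * j, b * c)  # (ij) x (bc)\nC = (A2.T @ B2).reshape(a, b, c)          # one matrix-matrix product\n\nassert np.allclose(C, C_ref)\n\end{verbatim}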
For this contraction, we generated a total of\n176~different algorithms:\n\begin{itemize}\n \item 48 \ddot-based~(\ref*{plt:ijb_jcid:dot}),\n \item 72 \daxpy-based~(\ref*{plt:ijb_jcid:axpy}),\n \item 36 \dgemv-based~(\ref*{plt:ijb_jcid:gemv}),\n \item 12 \dger-based~(\ref*{plt:ijb_jcid:ger}), and\n \item 8 \dgemm-based:\\\n \tensoralgname{cj'}{gemm}~(\ref*{plt:ijb_jcid:cj'_gemm}),\n \tensoralgname{jc'}{gemm}~(\ref*{plt:ijb_jcid:jc'_gemm}),\n \tensoralgname{ci'}{gemm}~(\ref*{plt:ijb_jcid:ci'_gemm}),\n \tensoralgname{i'c}{gemm}~(\ref*{plt:ijb_jcid:i'c_gemm}),\\\n \tensoralgname{bj'}{gemm}~(\ref*{plt:ijb_jcid:bj'_gemm}),\n \tensoralgname{jb'}{gemm}~(\ref*{plt:ijb_jcid:jb'_gemm}),\n \tensoralgname{bi'}{gemm}~(\ref*{plt:ijb_jcid:bi'_gemm}),\n \tensoralgname{i'b}{gemm}~(\ref*{plt:ijb_jcid:i'b_gemm}).\n\end{itemize}\nAll \dgemm-based algorithms (see \cref{algs:ijb_jcid}) and several of the \dgemv-based\nones involve copy operations to ensure that each matrix has a\ncontiguously-stored dimension as required by the \blas interface. Once again,\nwe consider a challenging scenario where both contracted indices are of size $i\n= j = 8$ and the free indices $a = b = c$ vary between~8 and~1000.\n\n\input{tensor\/figures\/ijb_jcid}\n\n\Cref{fig:tensor:ijb_jcid:pred} presents the predicted performance of the\n176~algorithms, where algorithms based on \blasl1 and~2 are grouped by kernel.\nEven with the copy operations, the \dgemm-based algorithms are the fastest.\nHowever, within these 8~algorithms, the performance differs by more\nthan~\SI{20}\percent. \Cref{fig:tensor:ijb_jcid:meas} compares our predictions\nwith corresponding performance measurements\footnote{%\n Slow tensor contraction algorithms were stopped before reaching the largest\n problem size by limiting the total measurement time per algorithm\n to~\SI{15}\min.\n}: Among the \dgemm-based algorithms, our predictions clearly separate the bulk\nof fast algorithms from the slightly less efficient ones.\n\n\input{tensor\/figures\/ijb_jcid10}\n\n\paragraph{Multi-Threading}\nOur contraction algorithms can profit from shared-memory parallelism through\nmulti-threaded \blas kernels. To focus on the impact of parallelism, we\nincrease the contracted tensor dimension sizes to~$i = j = 32$ and use all\n10~cores of the \ivybridge with multi-threaded \openblas.\n\Cref{fig:tensor:ijb_jcid10} presents performance predictions and measurements\nfor this setup: Our predictions accurately distinguish the three groups of\n\dgemm-based implementations, and algorithms\n\tensoralgname{i'c}{gemm}~(\ref*{plt:ijb_jcid:i'c_gemm}) and\n\tensoralgname{i'b}{gemm}~(\ref*{plt:ijb_jcid:i'b_gemm}) (see\n\cref{algs:ijb_jcid}), which reach \SI{170}{\giga\flops\per\second}, are\ncorrectly identified as the fastest.\n\tensoralgname{jb'}{gemm}~(\ref*{plt:ijb_jcid:jb'_gemm}), on the other hand,\nmerely reaches \SI{60}{\giga\flops\per\second}. This roughly $3\times$~difference in\nperformance among \dgemm-based algorithms emphasizes the importance of selecting\nthe right algorithm.\n\n\n\subsection{Efficiency Study}\n\n\input{tensor\/figures\/eff}\n\nThe above study provided evidence that our automated approach successfully\nidentifies the most efficient algorithm(s). In the following, we show how much\nfaster this approach is compared to empirical measurements.
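Spelled out, the speedup reported below for each algorithm and problem size is\nthe ratio of the two time investments:\n\[\n \text{speedup} \defeqq\n \frac{t_{\text{algorithm execution}}}{t_{\text{prediction}}} \enspace.\n\]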
For this purpose, we once more consider the contraction $C_{abc} \coloneqq
A_{ai} B_{ibc}$ with $i = 8$ and varying $a = b = c$ on a \harpertown with
\openblas. \Cref{fig:tensor:eff} presents the speedup of our micro-benchmark
over corresponding algorithm measurements: Generally, our predictions are
several orders of magnitude faster than such algorithm executions. For $a = b
= c = 1000$, this relative improvement is smallest for the \dgemm-based
algorithms~(\ref*{plt:eff:gemm}) at $1000\times$, because each \dgemm performs
a significant portion of the computation; for the \dger-based
algorithms~(\ref*{plt:eff:ger}), it lies between $6000\times$ and
$\num{10000}\times$, and for the \dgemv-based
algorithms~(\ref*{plt:eff:gemv}) the gain is $\num{5e5}\times$ to
$\num{e6}\times$; finally, for the \blasl1-based
algorithms~(\ref*{plt:eff:axpy}, \ref*{plt:eff:dot}), where each kernel
invocation only performs a tiny fraction of the contraction, our predictions
are \num{1e6} to \num{1e9}~times faster than the algorithm executions.

\section*{Affine and non-affine deformation}

\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figure1.pdf}
\caption{(a) The deformation of infinitesimal spherical fluid elements after
$\Delta t=3\tau_\eta$ (size not to scale). (b) The time evolution of the mean
curvature $\langle\kappa_1\rangle$ (black solid curve); the black dashed line
represents a linear relationship, and the cyan dashed line represents the
prediction based on an Eulerian quantity
$\langle|\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1|\rangle$,
with $\boldsymbol{\hat{e}}_1$ being the eigenvector corresponding to the
maximum eigenvalue of the rate-of-strain tensor. Inset: the same figure for
$\langle\kappa_1\rangle$ (black solid line) with a linear scale in time. The
black dashed line represents an exponential growth over
time.\label{fig_element_deformation}}
\end{figure}

To build a framework that makes that connection, we consider the folding of
infinitesimal fluid elements. Fig. \ref{fig_element_deformation}(a) shows a
number of infinitesimal spherical fluid elements being deformed after a time
$3\tau_\eta$ ($\tau_\eta$ is the Kolmogorov time scale) in 3D homogeneous and
isotropic turbulence \cite{li2008public,perlman2007data} (details of the
direct numerical simulation (DNS) of the turbulence can be found in
Supplemental Material). It is clear that the deformed fluid elements show
complex geometry involving both stretching and folding. To mathematically
describe this high-order deformation, we consider each point $\boldsymbol{X}$
at $t_0$ within an infinitesimal fluid element mapped to another point
$\boldsymbol{x}$ within the deformed element after a finite time $\Delta t$,
where $\boldsymbol{x}$ and $\boldsymbol{X}$ are the relative positions with
respect to the center of the fluid elements. The non-linear mapping function
between $\boldsymbol{X}$ and $\boldsymbol{x}$ with the leading orders follows
\begin{equation}\label{eqn_mapping}
\boldsymbol{x}=\boldsymbol{F}(t_0+\Delta t)\cdot\boldsymbol{X}+\boldsymbol{X}\cdot\boldsymbol{G}(t_0+\Delta t)\cdot\boldsymbol{X},
\end{equation}
where $F_{ij}=\partial x_i/\partial X_j$ is the deformation gradient tensor
and $G_{ijk}=\partial^2 x_i/\partial X_j\partial X_k$ is the deformation
Hessian tensor.
The tensors $F_{ij}$ and $G_{ijk}$ can then be determined by integrating
$dF_{ij}(t)/dt=A_{im}F_{mj}(t)$ and
$dG_{ijk}(t)/dt=A_{im}G_{mjk}(t)+H_{imn}F_{mj}(t)F_{nk}(t)/2$ along the
trajectories of fluid elements, with $A_{ij}=\partial u_i/\partial x_j$ and
$H_{ijk}=\partial^2 u_i/\partial x_j \partial x_k$ being the velocity gradient
and velocity Hessian tensors, respectively. Details of these equations can be
found in Supplemental Material.

To further simplify Eq. (\ref{eqn_mapping}), we consider the deformation of an
arbitrary straight material line passing through the center of a fluid
element, represented by a set of positions $\boldsymbol{X}$ given
parametrically by $\boldsymbol{X}(\lambda)=\boldsymbol{\hat{e}}\lambda$. Here,
$\boldsymbol{\hat{e}}$ is a selected unit vector, and the parameter
$\lambda\rightarrow0$ indicates the distance from the center of the fluid
element. Substituting $\boldsymbol{X}(\lambda)=\boldsymbol{\hat{e}}\lambda$
into Eq. (\ref{eqn_mapping}) yields the expression for the deformed material
line at $t_0+\Delta t$,
\begin{equation}\label{eqn_mapping_expand}
 \boldsymbol{x}(\lambda)=\boldsymbol{F}\cdot \boldsymbol{\hat{e}} \lambda+\boldsymbol{\hat{e}}\cdot\boldsymbol{G}\cdot\boldsymbol{\hat{e}}\lambda^2= \boldsymbol{r}^s \lambda+\boldsymbol{r}^b \lambda^2,
\end{equation}
where $\boldsymbol{r}^s=\boldsymbol{F}\cdot \boldsymbol{\hat{e}}$ and
$\boldsymbol{r}^b=\boldsymbol{\hat{e}}\cdot\boldsymbol{G}\cdot\boldsymbol{\hat{e}}$
are defined as the stretching vector and the bending vector, respectively.

A highly relevant material line is the one that gets stretched the most,
written as $\boldsymbol{X}(\lambda)=\boldsymbol{\hat{e}}_{R1}\lambda$. Here,
$\boldsymbol{\hat{e}}_{R1}$ is the unit eigenvector associated with the
greatest eigenvalue of the right Cauchy-Green strain tensor
$\boldsymbol{C}^R=\boldsymbol{F}^T\boldsymbol{F}$. This special material line,
as the ``skeleton'' of the fluid element, can be used to reflect the overall
geometry of the fluid element. Substituting
$\boldsymbol{\hat{e}}=\boldsymbol{\hat{e}}_{R1}$ in
Eq. (\ref{eqn_mapping_expand}) results in the quadratic equation
$\boldsymbol{x}(\lambda)= \boldsymbol{r}^s_1 \lambda+\boldsymbol{r}^b_1
\lambda^2$, where
$\boldsymbol{r}^s_1=\boldsymbol{F}\cdot\boldsymbol{\hat{e}}_{R1}$ and
$\boldsymbol{r}^b_1=\boldsymbol{\hat{e}}_{R1}\cdot\boldsymbol{G}\cdot\boldsymbol{\hat{e}}_{R1}$.
An example of this material line is shown in the inset of
Fig. \ref{fig_element_deformation}(a) (black dashed line). Given this
quadratic equation, the curvature of the material line $\kappa_1$ can be found
using $\kappa_1=2r^b_{1\perp}/(r_1^s)^2$, where $\boldsymbol{r}^b_{1\perp}$
represents the component of $\boldsymbol{r}^b_1$ that is perpendicular to
$\boldsymbol{r}^s_1$. Although $\kappa_1$ is not sufficient to describe the
complete deformation, it does reflect the overall folding of the fluid
element.

The curvature $\kappa_1$ can therefore be obtained by computing
$\boldsymbol{F}$ and $\boldsymbol{G}$ and their associated
$\boldsymbol{r}^b_1$ and $\boldsymbol{r}^s_1$ along each fluid trajectory.
Fig. \ref{fig_element_deformation}(b) shows the time evolution of the mean
curvature $\langle\kappa_1\rangle$, averaged over $10^5$ fluid elements, as a
function of the integration time $\Delta t$ using the DNS data.
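A minimal numerical sketch of this procedure for a single trajectory is given
below; the arrays \texttt{A\_t} and \texttt{H\_t} stand for velocity-gradient
and velocity-Hessian histories sampled along a trajectory (placeholders
here), and the simple forward-Euler update is chosen for brevity only:
\begin{verbatim}
import numpy as np

def curvature_along_trajectory(A_t, H_t, dt):
    # Illustrative sketch: integrate dF/dt = A.F and
    # dG/dt = A.G + H:FF/2 along one trajectory, then evaluate kappa_1.
    # A_t: (n,3,3) velocity gradients; H_t: (n,3,3,3) velocity Hessians.
    F = np.eye(3)
    G = np.zeros((3, 3, 3))
    for A, H in zip(A_t, H_t):
        dF = A @ F
        dG = (np.einsum('im,mjk->ijk', A, G)
              + 0.5 * np.einsum('imn,mj,nk->ijk', H, F, F))
        F, G = F + dt * dF, G + dt * dG      # forward Euler step
    # most-stretched direction: top eigenvector of C^R = F^T F
    _, vecs = np.linalg.eigh(F.T @ F)
    e_R1 = vecs[:, -1]
    r_s = F @ e_R1                                 # stretching vector
    r_b = np.einsum('j,ijk,k->i', e_R1, G, e_R1)   # bending vector
    r_b_perp = r_b - (r_b @ r_s) / (r_s @ r_s) * r_s
    return 2.0 * np.linalg.norm(r_b_perp) / (r_s @ r_s)
\end{verbatim}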
It is evident that, for the available simulation duration, the mean curvature
of the fluid elements grows continuously, but the growth rate changes
appreciably between two regimes. At early times, $\langle\kappa_1\rangle$
increases linearly. The linear regime lasts until about the Kolmogorov
timescale $\tau_\eta$, when the length scale $1/\langle\kappa_1\rangle$ is
around 25$\eta$ ($\eta$ is the Kolmogorov length scale) and the growth of
$\langle\kappa_1\rangle$ slows down, marking the transition of the curvature
dynamics. Soon after $\tau_\eta$, the growth of $\langle\kappa_1\rangle$
accelerates again, and this late-stage behavior is better fitted by an
exponential function, as illustrated in the semi-logarithmic plot in the inset
of Fig. \ref{fig_element_deformation}(b).

\begin{figure}
 \centering
 \includegraphics[width=0.9\linewidth]{curv_pdf.pdf}
 \caption{(a) The PDFs of the curvature $p(\kappa_1)$ at different time
 instants in the early stage. Inset of (a): the same PDFs but for the
 normalized curvature $p(\kappa_1/\langle\kappa_1\rangle)$. (b) The PDFs of
 the curvature $p(\kappa_1)$ at different time instants in the late stage,
 with the solid curves representing the data and the dashed curves
 representing the prediction by the model (Eq. (\ref{eqn_pdf_evolution})).
 Inset of (b): the time evolution of the kurtosis of $\kappa_1$.}
 \label{fig_curv_pdf}
\end{figure}

The transition from the linear to the exponential growth of $\langle \kappa_1
\rangle$ indicates different mechanisms at play, which can be better
understood using the local curvature. Here, the probability density functions
(PDFs) of $\kappa_1$, i.e. $p(\kappa_1)$, at different times are shown in
Fig. \ref{fig_curv_pdf} for the early (a) and late (b) stages. In the early
stage, the curvature grows systematically but follows a self-similar behavior,
as indicated by the collapsed PDFs of the normalized curvature
$p(\kappa_1/\langle \kappa_1 \rangle)$ in the inset of
Fig. \ref{fig_curv_pdf}(a). In the late stage, the tail of the PDF still rises
over time, whereas the peak location remains constant. This distinct behavior
suggests that the curvature distribution becomes more intermittent over time,
which is confirmed by the growing kurtosis shown in the inset of
Fig. \ref{fig_curv_pdf}(b). This result highlights the growing inhomogeneity
of local mixing, as locations with extreme curvature should reach a well-mixed
stage much sooner than implied by the mean.

To model the multi-stage growth behavior of curvature, we consider an
arbitrary deforming infinitesimal material line as in
Eq. (\ref{eqn_mapping_expand}).
The equation for this material line can therefore be decomposed along two directions, $\\boldsymbol{\\hat{e}}_\\parallel=\\boldsymbol{r}^s\/r^s$ and $\\boldsymbol{\\hat{e}}_\\perp=\\boldsymbol{r}^b_\\perp\/r^b_\\perp$ , following:\n\\begin{equation}\\label{eqn_geometry}\n \\boldsymbol{x}(\\lambda)=\\left(r^s\\lambda+r^b_\\parallel\\lambda^2\\right)\\boldsymbol{\\hat{e}}_\\parallel+r^b_\\perp\\lambda^2\\boldsymbol{\\hat{e}}_\\perp,\n\\end{equation}\nwhere $\\boldsymbol{r}^b_\\parallel=(\\boldsymbol{r}^b\\cdot\\boldsymbol{\\hat{e}}_\\parallel)\\boldsymbol{\\hat{e}}_\\parallel$ and $\\boldsymbol{r}^b_\\perp=\\boldsymbol{r}^b-\\boldsymbol{r}^b_\\parallel$.\n\nThe velocity of any arbitrary material point on the material line, $\\boldsymbol{u}(\\lambda)$, can then be expressed in the frame spanned by ($\\boldsymbol{\\hat{e}}_\\parallel$, $\\boldsymbol{\\hat{e}}_\\perp)$ in two different ways by taking either direct time derivative of Eq. (\\ref{eqn_geometry}) or the Taylor expansion based on the velocity information (see Supplemental Material). Comparing these two expressions for $\\boldsymbol{u}(\\lambda)$ leads to evolution equations for $r^s$ and $r^b_\\perp$, which then yields the evolution equation for curvature of the material line\n\\begin{equation}\\label{eqn_curv_evolution}\n\\begin{split}\n \\frac{d\\kappa}{dt}=&\\left(\\boldsymbol{\\hat{e}}_\\parallel\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_\\parallel\\right)\\cdot\\boldsymbol{\\hat{e}}_\\perp\\\\\n &+\\left(\\boldsymbol{\\hat{e}}_\\perp\\cdot\\boldsymbol{S}\\cdot\\boldsymbol{\\hat{e}}_\\perp-2\\boldsymbol{\\hat{e}}_\\parallel\\cdot\\boldsymbol{S}\\cdot\\boldsymbol{\\hat{e}}_\\parallel\\right)\\kappa.\n\\end{split}\n\\end{equation}\nHere $\\boldsymbol{S}$ and $\\boldsymbol{H}$ are the rate-of-strain tensor and the velocity Hessian tensor following the trajectories of fluid elements, respectively.\n\nEq. (\\ref{eqn_curv_evolution}) holds for an arbitrary material line, so it also works for the curvature along the largest stretching ($\\boldsymbol{\\hat{e}}_{R1}$) direction $\\kappa_1$. The first term on the right side of Eq. (\\ref{eqn_curv_evolution}) represents the contribution from the velocity Hessian, which can directly bend the fluid element as shown in Fig. \\ref{fig_alignment}(a). Here, the thick blue arrows indicate the primary velocity Hessian that bends the element (i.e., the velocity gradient that changes along the $\\boldsymbol{\\hat{e}}_\\parallel$ direction). In the short time limit, $\\kappa_1\\rightarrow0$, all the terms multiplied by $\\kappa_1$ in Eq. (\\ref{eqn_curv_evolution}) are negligible, \nso Eq. (\\ref{eqn_curv_evolution}) can be simplified to $d\\kappa_1\/dt=\\left(\\boldsymbol{\\hat{e}}_\\parallel\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_\\parallel\\right)\\cdot\\boldsymbol{\\hat{e}}_\\perp$, which corresponds to the linear growth in the early stage as in Fig. \\ref{fig_element_deformation}(b). At later times ($\\Delta t>\\tau_\\eta$), this contribution of the velocity Hessian approaches zero as shown in Fig. \\ref{fig_alignment}(d) (blue solid line) because $\\left(\\boldsymbol{\\hat{e}}_\\parallel\\cdot\\boldsymbol{H}\\cdot\\boldsymbol{\\hat{e}}_\\parallel\\right)$ may not be perfectly aligned with $\\boldsymbol{\\hat{e}}_\\perp$. Since the velocity Hessian is a small-scale quantity, it is not surprising that the transition in Fig. 
\ref{fig_element_deformation}(b) begins at a small $\Delta t$ as the velocity
Hessian decorrelates \cite{schumacher2007asymptotic}.

\begin{figure}[h]
 \centering
 \includegraphics[width=0.9\linewidth]{alignment.pdf}
 \caption{(a-c) Schematics illustrating how (a) the velocity Hessian, (b) the
 strain along $\boldsymbol{\hat{e}}_\perp$, and (c) the strain along
 $\boldsymbol{\hat{e}}_\parallel$ contribute to the curvature change,
 respectively. For all cases, the black dashed curves represent the special
 material line (skeleton), while the gray dashed curves indicate the same
 material line at a later time, deformed by the surrounding flows indicated by
 the thick arrows. (d) The time evolution of the contribution to the mean
 curvature growth by each term in Eq. (\ref{eqn_curv_evolution}), conditioned
 on $\kappa_1>3\langle\kappa_1\rangle$. All the terms are normalized by the
 Kolmogorov scales.}
 \label{fig_alignment}
\end{figure}

In addition to the Hessian term, the other two terms in
Eq. (\ref{eqn_curv_evolution}), both proportional to $\kappa_1$, represent how
the strain affects the curvature of an already-bent fluid element. Here,
$\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp$
represents the stretching along $\boldsymbol{\hat{e}}_\perp$, which tends to
increase the curvature (as shown in Fig. \ref{fig_alignment}(b));
$\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel$
represents the stretching along $\boldsymbol{\hat{e}}_\parallel$, which
straightens an already-bent fluid element and reduces the curvature (as shown
in Fig. \ref{fig_alignment}(c)). At later times, the mean curvature
$\langle\kappa_1\rangle$ is large, so both terms associated with $\kappa_1$
become dominant, leading to $d\kappa_1/dt\propto \kappa_1$. As a result, the
late-stage growth of curvature exhibits an exponential trend, consistent with
the results in the inset of Fig. \ref{fig_element_deformation}(b).

The contributions from strain by each of the two terms (dashed lines) and
their combination (red solid line) are shown in Fig. \ref{fig_alignment}(d).
The statistics were collected using only the fluid elements with
$\kappa_1>3\langle\kappa_1\rangle$, because the late stage is dominated by the
large-curvature cases, as indicated by Eq. (\ref{eqn_curv_evolution}). It is
evident that, as the velocity Hessian contribution approaches zero, the total
contribution by the strain grows significantly, signaling the transition of
roles between these two mechanisms. This growing contribution by the strain is
dominated by
$(\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp)\kappa$,
which enhances the folding, whereas the other term,
$(-\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel)\kappa$,
which reduces the curvature, plateaus close to zero.

To understand the enhanced curvature intermittency at the late stage, the time
evolution of the PDF of $\kappa_1$, i.e. $p(\kappa_1,t)$ as shown in
Fig. \ref{fig_curv_pdf}(b), is modelled by assuming that
$p(\kappa_1,t)d\kappa_1=p(\kappa_1',t+dt)d\kappa_1'$, where
$\kappa_1'=\kappa_1+(d\kappa_1/dt)dt$ is the curvature of the fluid elements
with an initial curvature $\kappa_1$ after $dt$.
Substituting $\kappa_1'$ into the equation for the PDF leads to
\begin{equation}\label{eqn_pdf_evolution}
 \frac{\partial p}{\partial t}+(d\kappa_1/dt)\cdot\frac{\partial p}{\partial \kappa_1}+p\cdot\frac{d(d\kappa_1/dt)}{d\kappa_1}=0.
\end{equation}
Here we approximate
$d\kappa_1/dt\approx\langle\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp-2\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel\rangle\kappa_1$
because (i) the strain is the dominant mechanism in the late stage and (ii)
the contribution by the velocity Hessian only results in a self-similar
distribution of curvature, as shown in Fig. \ref{fig_curv_pdf}(a), whereas the
PDFs in the late stage exhibit longer tails over time.
Eq. (\ref{eqn_pdf_evolution}) is then solved numerically, with $p(\kappa_1)$
at $t/\tau_\eta=3$ obtained from the DNS data serving as the initial
condition.

The predicted PDFs at different times are shown as the dashed curves in
Fig. \ref{fig_curv_pdf}(b). An overall good agreement between the prediction
and the data is achieved up to $t\approx10\tau_\eta$, particularly in the tail
region extending beyond $\kappa_1\eta\approx 0.2$ in
Fig. \ref{fig_curv_pdf}(b), which corresponds to a length scale smaller than
5$\eta$. This suggests that the intermittency shown here is related to the
curved elements being stretched even further by small-scale straining motions
in the dissipative range. Note that the range of $\kappa_1\eta$ is limited
because of the exceedingly low probability of finding fluid elements with
$\kappa_1\eta$ greater than 0.25. We also note that the model following
Eq. (\ref{eqn_pdf_evolution}) is simplified: it only holds when
$d\kappa_1/dt$ increases with $\kappa_1$, i.e., when more curved elements are
being bent at a faster rate, which can only be satisfied at the late stage
given the overall positive magnitude of
$\langle\boldsymbol{\hat{e}}_\perp\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\perp-2\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{S}\cdot\boldsymbol{\hat{e}}_\parallel\rangle$
in Eq. (\ref{eqn_curv_evolution}). Furthermore, the model is intended only for
the tail region, because the peak region with smaller $\kappa_1$ is dominated
by the velocity Hessian. As a result, a mismatch between model predictions and
simulation results is not unexpected for smaller $\kappa_1\eta$.

\begin{figure}
 \centering
 \includegraphics[width=0.9\linewidth]{joint_pdf.pdf}
 \caption{The joint PDF of the normalized curvature along the
 $\boldsymbol{\hat{e}}_1$ and $\boldsymbol{\hat{e}}_2$ directions. Two
 schematics show an initially spherical fluid element deforming into a bowl
 shape (top) and a saddle shape (bottom) after a short time, respectively.}
 \label{fig_joint_pdf}
\end{figure}
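For concreteness, returning to Eq. (\ref{eqn_pdf_evolution}), the following
minimal sketch integrates the model with a first-order upwind scheme under the
approximation $d\kappa_1/dt=\gamma\kappa_1$; the value of \texttt{gamma} and
the initial PDF below are placeholders rather than the strain average and DNS
data used in the text:
\begin{verbatim}
import numpy as np

# Sketch: solve dp/dt + (gamma*kappa) dp/dkappa + gamma*p = 0 by upwind
# finite differences; gamma > 0 stands for the averaged strain factor.
gamma = 1.0
kappa = np.linspace(0.0, 0.25, 501)          # kappa_1 * eta grid
dk = kappa[1] - kappa[0]
p = np.exp(-kappa / 0.05)
p /= p.sum() * dk                            # placeholder initial PDF
dt = 0.5 * dk / (gamma * kappa[-1])          # CFL-limited time step
for _ in range(2000):
    dpdk = np.zeros_like(p)
    dpdk[1:] = (p[1:] - p[:-1]) / dk         # upwind: speed >= 0
    p -= dt * (gamma * kappa * dpdk + gamma * p)
\end{verbatim}
Because the advection speed $\gamma\kappa_1$ is non-negative, the one-sided
difference above is the stable upwind choice.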
Eq. (\ref{eqn_curv_evolution}) also enables us to use simple Eulerian
quantities to understand folding in the early stage. As $\Delta
t\rightarrow0$, $\boldsymbol{\hat{e}}_\parallel$ approaches
$\boldsymbol{\hat{e}}_1$, which is the one of the three eigenvectors
[$\boldsymbol{\hat{e}}_i$ ($i=1,2,3$)] corresponding to the maximum eigenvalue
of the rate-of-strain tensor $\boldsymbol{S}$. The early growth of the
material curvature can therefore be determined by an Eulerian quantity
$\langle|\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1|\rangle$
following
$d\langle\kappa_1\rangle/dt\approx\langle\left(\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_\parallel\right)\cdot\boldsymbol{\hat{e}}_\perp\rangle\approx \langle|\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1|\rangle\beta$,
where $\beta\approx0.85$ is the mean cosine of the angle between
$\boldsymbol{\hat{e}}_\parallel\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_\parallel$
and $\boldsymbol{\hat{e}}_\perp$ obtained from the DNS data. The predicted
result is shown as the cyan dashed line in
Fig. \ref{fig_element_deformation}(b), and it overlaps with the DNS data
perfectly.

This Eulerian quantity
$\langle|\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1|\rangle$
also helps to establish a better physical picture of the deformed fluid
elements in the short time limit, beyond the simple flat sheet extending along
the $\boldsymbol{\hat{e}}_1$ and $\boldsymbol{\hat{e}}_2$ directions
considered in the classical framework \cite{lund1994improved}. As illustrated
in the schematics of Fig. \ref{fig_joint_pdf}, such a sheet could be curved
along the $\boldsymbol{\hat{e}}_3$ direction, and its geometry can be
described by two curvatures, whose growth is controlled by
$(\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1)\cdot\boldsymbol{\hat{e}}_3$
and
$(\boldsymbol{\hat{e}}_2\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_2)\cdot\boldsymbol{\hat{e}}_3$,
respectively.

The joint PDF of
$(\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1)\cdot\boldsymbol{\hat{e}}_3$
and
$(\boldsymbol{\hat{e}}_2\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_2)\cdot\boldsymbol{\hat{e}}_3$,
normalized by Kolmogorov scales, is shown in Fig. \ref{fig_joint_pdf}. Here,
the direction of $\boldsymbol{\hat{e}}_3$ is chosen such that
$(\boldsymbol{\hat{e}}_1\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_1)\cdot\boldsymbol{\hat{e}}_3>0$,
while
$(\boldsymbol{\hat{e}}_2\cdot\boldsymbol{H}\cdot\boldsymbol{\hat{e}}_2)\cdot\boldsymbol{\hat{e}}_3$
can be either positive (bowl shape) or negative (saddle shape). The joint PDF
suggests a nearly symmetric probability for either shape, skewing only
slightly towards the bowl case. Nevertheless, for a given curvature in one
direction, the most likely curvature in the other direction is zero, so there
appears to be some preference for cigar-like shapes. This is confirmed in
Fig. \ref{fig_element_deformation}(a), where the bending occurs mostly in one
direction (although various other bending configurations can be seen). Note
that large values of the velocity Hessian may be the result of local
instabilities (e.g. shear instabilities that are responsible for rolling up
the vortex sheets into tubes \citep{vincent1994dynamics}). Connecting the
dynamics of instabilities to the velocity Hessian and curvature requires
further investigation.

In sum, our work establishes a new framework that connects folding dynamics to
the velocity Hessian and deformation Hessian tensors, in a way similar to the
connection between stretching and the velocity gradient and Cauchy-Green
strain tensors.
As the stretching can be well described by the Lyapunov exponents based on
strain, such a relationship may inspire the development of new ways to
formulate the dynamical system for folding. Our framework also provides a new
insight into flow intermittency: the sharp-turning points in flows become even
more curved due to strain, which could help gain deeper insights into the
intermittency and inhomogeneity of turbulent mixing. Future work can possibly
extend our framework to finite-sized fluid elements by considering the
coarse-graining effect at the same length scale. This extension will help
develop improved models for length-scale reduction in the energy cascade
process.

We acknowledge the financial support from the National Science Foundation
under award number CAREER-1905103. This project was also partially supported
by the ONR award N00014-21-1-2083.

\section{${\cal F}$-transversality}\label{sF}

In this section, we study properties of morphisms of schemes with respect to
complexes on the \'etale site of a scheme. The transversality is defined as
the condition that a canonical morphism to the extraordinary pull-back is an
isomorphism. In Section \ref{ssFtr}, after preparing some sorites on the
canonical morphism, we establish basic properties of the transversality. In
Section \ref{ssla}, after recalling basic properties of local acyclicity, we
study the relation between the local acyclicity and the transversality.

In this section and Section \ref{sms}, $\Lambda$ denotes a finite field of
characteristic $\ell$ invertible on the relevant noetherian schemes. The
derived categories $D^+(-,\Lambda)$ of bounded below complexes and
$D^b_c(-,\Lambda)$ of constructible complexes are defined as usual.

\subsection{${\cal F}$-transversality}\label{ssFtr}

Let $h\colon W\to X$ be a separated morphism of finite type of noetherian
schemes and let $\Lambda$ be a finite field of characteristic $\ell$
invertible on $X$. The functor $Rh^!\colon D^+(X,\Lambda)\to D^+(W,\Lambda)$
is defined as the adjoint of $Rh_!\colon D(W,\Lambda)\to D(X,\Lambda)$ in
\cite[Th\'eor\`eme 3.1.4]{DP}. If $X$ is quasi-excellent, by the finiteness
theorem \cite[{\sc Th\'eor\`eme} 1.1.1]{fini}, we have a functor $Rh^!\colon
D^b_c(X,\Lambda)\to D^b_c(W,\Lambda)$; see also \cite[Corollaire 1.5]{TF}.
Recall that a scheme of finite type over a Dedekind domain with fraction field
of characteristic 0 is quasi-excellent by \cite[Scholie (7.8.3)]{EGA4}.

Let ${\cal F}\in D^+(X,\Lambda)$ and ${\cal G}\in D^+(W,\Lambda)$. Then, the
adjoint of the morphism $h^*{\cal F}\otimes h^*Rh_*{\cal G}\to h^*{\cal
F}\otimes {\cal G}$ induced by the adjunction $h^*Rh_*{\cal G}\to{\cal G}$
defines a canonical morphism
\begin{equation}
{\cal F}\otimes Rh_*{\cal G}
\to Rh_*(h^*{\cal F}\otimes {\cal G}).
\label{eqpr0}
\end{equation}
If $h$ is an open immersion and if ${\cal G}=h^*{\cal G}_X$ for some extension
${\cal G}_X$ of ${\cal G}$ on $X$, then (\ref{eqpr0}) is identified with the
morphism ${\cal F}\otimes R{\cal H}om(h_!\Lambda,{\cal G}_X)\to R{\cal
H}om(h_!\Lambda,{\cal F}\otimes {\cal G}_X)$ defined by the product.

Applying the construction (\ref{eqpr0}) to a compactification of $h$ and the
extension by
$0$, a canonical isomorphism
\begin{equation}
{\cal F}\otimes
Rh_!{\cal G}
\to Rh_!(h^*{\cal F}\otimes{\cal G}),
\label{eqprj}
\end{equation}
the projection formula \cite[(4.9.1)]{Rapport}, is defined.
\if{This is defined as the adjoint
$h^*{\cal F}\otimes^L_\Lambda h^*Rh_!{\cal G}
\to h^*{\cal F}\otimes^L_\Lambda{\cal G}$
of the morphism induced by the adjunction
$h^*Rh_!{\cal G}\to {\cal G}$
if $h$ is proper.
It is defined as the inverse of the isomorphism
$h^*{\cal F}\otimes^L_\Lambda h^*Rh_!{\cal G}
\gets h^*{\cal F}\otimes^L_\Lambda{\cal G}$
if $h$ is an open immersion.}\fi

\begin{df}\label{dfAB}
Let $h\colon W\to X$ be a separated morphism of finite type of quasi-excellent
noetherian schemes. Let ${\cal F}\in D^+(X,\Lambda)$.

{\rm 1.}
Let ${\cal G}\in D^+(X,\Lambda)$.
We define a canonical morphism
\begin{equation}
c_{{\cal F},{\cal G},h}\colon
h^*{\cal F}
\otimes
Rh^!{\cal G}
\to
Rh^!({\cal F}
\otimes
{\cal G})
\label{eqAB}
\end{equation}
to be the adjoint of the composition
$$
Rh_!(h^*{\cal F}
\otimes
Rh^!{\cal G})
\to
{\cal F}
\otimes
Rh_!Rh^!{\cal G}
\to {\cal F}
\otimes
{\cal G}$$
of the inverse of the isomorphism {\rm (\ref{eqprj})} and the morphism induced
by the adjunction $Rh_!Rh^!{\cal G}\to{\cal G}$.
For ${\cal G}=\Lambda$, we define a canonical morphism
\begin{equation}
c_{{\cal F},h}
\colon
h^*{\cal F}
\otimes^L
Rh^!\Lambda
\to Rh^!{\cal F}
\label{eqcF}
\end{equation}
to be
$c_{{\cal F},\Lambda,h}$.
\end{df}

\begin{lm}\label{lmcF}
Let $h\colon W\to X$ be a separated morphism of finite type of noetherian
schemes. Let ${\cal F}\in D^+(X,\Lambda)$.

{\rm 1.}
Let ${\cal G},{\cal H}\in D^+(X,\Lambda)$.
Then, the diagram
\begin{equation}
\begin{CD}
h^*{\cal F}
\otimes
Rh^!({\cal G}\otimes {\cal H})
@>{c_{{\cal F},
{\cal G}\otimes {\cal H},h}}>>
Rh^!({\cal F}
\otimes
{\cal G}\otimes {\cal H})\\
@A{1\otimes c_{{\cal G},
{\cal H},h}}AA
@AA{c_{{\cal F},
{\cal G},h}\otimes 1}A\\
h^*{\cal F}
\otimes
Rh^!{\cal G}\otimes h^*{\cal H}
@>{c_{{\cal F},
{\cal G},h}\otimes 1}>>
Rh^!({\cal F}\otimes {\cal G})
\otimes
h^*{\cal H}
\end{CD}
\end{equation}
is commutative.

{\rm 2.}
Let $g\colon V\to W$ be a separated morphism of finite type of schemes and let
${\cal G}\in D^+(X,\Lambda)$.
Then, the diagram
\begin{equation}
\xymatrix{
(hg)^*{\cal F}
\otimes
R(hg)^!{\cal G}
\ar[r]^{c_{{\cal F},{\cal G},hg}}
&
R(hg)^!({\cal F}\otimes
{\cal G})
\\
g^*h^*{\cal F}
\otimes
Rg^!Rh^!{\cal G}
\ar[u]
\ar[rd]^{c_{h^*{\cal F},
Rh^!{\cal G},g}}
&
Rg^!Rh^!({\cal F}\otimes
{\cal G})
\ar[u]
\\
g^*h^*{\cal F}
\otimes
Rg^!\Lambda
\otimes
g^*Rh^!{\cal G}
\ar[u]^{1\otimes
c_{Rh^!{\cal G},g}}
\ar[dr]_{
c_{h^*{\cal F},g}\otimes 1}
&
Rg^!(h^*{\cal F}
\otimes Rh^!{\cal G})
\ar[u]_{Rg^!(c_{{\cal F},{\cal G},h})}
\\
&
Rg^!h^*{\cal F}
\otimes g^*Rh^!{\cal G}.
\ar[u]_{c_{h^*{\cal F},Rh^!{\cal G},g}}
}
\label{eqcgh}
\end{equation}
where the upper vertical arrows are canonical isomorphisms
{\rm \cite[(3.1.13.1)]{DP}}
is commutative.

{\rm 3.}
Let
$$\begin{CD}
X@<{h}<< W\\
@V{f}VV @VV{f'}V\\
Y@<{g}<< V
\end{CD}$$
be a cartesian diagram of morphisms of noetherian schemes such that the
horizontal arrows are separated and of finite type. Then, the diagram
\begin{equation}
\begin{CD}
g^*Rf_*{\cal F}
\otimes
Rg^!\Lambda
@>{c_{Rf_*{\cal F},g}}>>
Rg^!Rf_*{\cal F}
\\
@VVV@VVV\\
Rf'_*h^*{\cal F}
\otimes
Rg^!\Lambda
@.
Rf'_*Rh^!{\cal F}
\\
@V
{\rm
(\\ref{eqpr0})}VV\n@AA{Rf'_*(c_{{\\cal F},h})}A\\\\\nRf'_*(h^*{\\cal F}\n\\otimes\nf'^*Rg^!\\Lambda)\n@>>>\nRf'_*(h^*{\\cal F}\n\\otimes\nRh^!\\Lambda)\n\\end{CD}\n\\label{eqcfg}\n\\end{equation}\nwhere the arrows without\ntags are defined by\nbase change morphisms\nis commutative.\n\\end{lm}\n\n\\proof{\n1.\nThe diagram\n$$\\begin{CD}\nRh_!Rh^!({\\cal G}\\otimes {\\cal H})\n@>>>\n{\\cal G}\\otimes {\\cal H}\\\\\n@A\n{Rh_!(c_{{\\cal G},{\\cal H},h})}AA\n@AAA\\\\\nRh_!(Rh^!{\\cal G}\\otimes h^*{\\cal H})\n@<{{\\rm (\\ref{eqprj})}}<<\nRh_!Rh^!{\\cal G}\\otimes {\\cal H}\n\\end{CD}$$\nwhere the arrows without\ntags are defined by the adjunction\nis commutative\nby the definition of\n$c_{{\\cal G},{\\cal H},h}$.\nTaking the tensor products with ${\\cal F}$,\napplying the projection formula\n(\\ref{eqprj}) and\ntaking the adjoint,\nwe see that the upper triangle in\n\\begin{equation*}\n\\xymatrix{\nh^*{\\cal F}\n\\otimes \nRh^!({\\cal G}\\otimes {\\cal H})\n\\ar[r]^{c_{{\\cal F},\n{\\cal G}\\otimes {\\cal H},h}}&\nRh^!({\\cal F}\n\\otimes \n{\\cal G}\\otimes {\\cal H})\\\\\nh^*{\\cal F}\n\\otimes \nRh^!{\\cal G}\\otimes h^*{\\cal H}\n\\ar[u]^{1\\otimes c_{{\\cal G},\n{\\cal H},h}}\n\\ar[ru]^\n{c_{{\\cal F}\\otimes{\\cal H},\n{\\cal G},h}}\n\\ar[r]^{c_{{\\cal F},\n{\\cal G},h}\\otimes 1}\n&\nRh^!({\\cal F}\\otimes {\\cal G})\n\\otimes \nh^*{\\cal H}\n\\ar[u]_{c_{{\\cal F},\n{\\cal G},h}\\otimes 1}\n}\n\\end{equation*}\nis commutative.\nThe lower triangle is similarly\ncommutative\nand the assertion follows.\n\n2.\nThe lower quadrangle\nis commutative by 1.\nThe composition \n$g^*h^*{\\cal F}\n\\otimes\nRg^!Rh^!{\\cal G}\n\\to\nRg^!Rh^!{\\cal F}$\nthrough\n$\nRg^!(h^*{\\cal F}\n\\otimes Rh^!{\\cal G})$\nis the adjoint of\n$Rh_!Rg_!\n(g^*h^*{\\cal F}\n\\otimes\nRg^!Rh^!{\\cal G})\n\\to\n{\\cal F}\\otimes\nRh_!Rg_!\nRg^!Rh^!{\\cal G}$\ninduced by\nthe adjunction\n$Rh_!Rg_!\nRg^!Rh^!{\\cal G}\n\\to\nRh_!Rh^!{\\cal G}\n\\to {\\cal G}$.\nSince the last morphism\nis identified\nwith \nthe adjunction\n$R(hg)_!\nR(hg)^!{\\cal G}\n\\to {\\cal G}$,\nthe upper pentagon is also commutative.\n\n\n3.\nFor ${\\cal G}\\in D^+(V,\\Lambda)$,\nwe consider the diagram\n\\begin{equation}\n\\begin{CD}\nf^*Rg_!(g^*Rf_*{\\cal F}\n\\otimes\n{\\cal G})\n@<{f^*{\\rm (\\ref{eqprj})}}<<\nf^*Rf_*{\\cal F}\n\\otimes\nf^*Rg_!{\\cal G}\n@>>>\n{\\cal F}\n\\otimes\nf^*Rg_!{\\cal G}\n\\\\\n@VVV@.@VVV\\\\\nRh_!f'^*(Rf'_*h^*{\\cal F}\n\\otimes\n{\\cal G})\n@>>>\nRh_!(h^*{\\cal F}\n\\otimes\nf^*{\\cal G})\n@<{\\rm (\\ref{eqprj})}<<\n{\\cal F}\n\\otimes\nRh_!f'^*{\\cal G}\n\\end{CD}\n\\label{eqcfga}\n\\end{equation}\ndefined as follows.\nThe vertical arrows are\ndefined by the base change morphisms\nand the horizontal arrows\nwithout labels are\ndefined by adjunction.\nWe see that the diagram is commutative\nby reducing to the case\nwhere $g$ is proper and\ngoing back to the definition\nof (\\ref{eqprj}).\n\nWe apply (\\ref{eqcfga}) to\n${\\cal G}=Rg^!\\Lambda$.\nSince the composition\n$f^*Rg_!Rg^!\\Lambda\n\\to\nRh_!f'^*Rg^!\\Lambda\n\\to Rh_!Rh^!\\Lambda\n\\to \\Lambda$\nof the base change morphisms\nwith the adjunction\nis induced by the adjuncion\n$Rg_!Rg^!\\Lambda\\to \\Lambda$,\nwe obtain a commutative diagram\n\\begin{equation}\n\\begin{CD}\nf^*Rg_!(g^*Rf_*{\\cal F}\n\\otimes\nRg^!\\Lambda)\n@<{f^*{\\rm (\\ref{eqprj})}}<<\nf^*Rf_*{\\cal F}\n\\otimes\nf^*Rg_!Rg^!\\Lambda\n@>>>\n{\\cal F}\n\\\\\n@VVV@.@AAA\\\\\nRh_!f'^*(Rf'_*h^*{\\cal F}\n\\otimes\nRg^!\\Lambda)\n@>>>\nRh_!(h^*{\\cal F}\n\\otimes\nRh^!\\Lambda)\n@<{\\rm 
(\\ref{eqprj})}<<\n{\\cal F}\n\\otimes\nRh_!Rh^!\\Lambda\n\\end{CD}\n\\label{eqcfgb}\n\\end{equation}\nSince the canonical morphism\n(\\ref{eqcF}) is defined as\nthe adjoint of (\\ref{eqprj}),\nwe obtain (\\ref{eqcfg})\nby taking the adjoint of (\\ref{eqcfgb}).\n\\qed\n\n}\n\n\n\\begin{lm}\\label{lmij}\nLet $i\\colon Z\\to X$ be a closed\nimmersion of noetherian schemes\nand let ${\\cal F},{\\cal G}\n\\in D^+(X,\\Lambda)$.\n\n{\\rm 1.}\nWe define the slant arrow\nand the vertical arrow\nin the diagram\n\\begin{equation}\n\\xymatrix{\n{\\cal F}\\otimes\ni_*Ri^!{\\cal G}\n\\ar[r]^-{\\rm(\\ref{eqprj})}\n\\ar[rd]\n&\ni_*(i^*{\\cal F}\\otimes\nRi^!{\\cal G})\n\\ar[r]^-{i_*(c_{{\\cal F},{\\cal G},i})}\n&\ni_*Ri^!({\\cal F}\\otimes {\\cal G})\n\\ar[d]\n\\\\\n&\n{\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n\\ar[r]\n&\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\n\\otimes{\\cal G})\n}\n\\label{eqic}\n\\end{equation}\nby the canonical isomorphism\n$i_*Ri^!\\to R{\\cal H}om(i_*\\Lambda,-)$\nand the lower horizontal arrow\nby the product.\nThen, the diagram\n{\\rm (\\ref{eqic})}\nis commutative.\n\n{\\rm 2.}\nLet $j\\colon U=X\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z\\to X$\nbe the open immersion of the complement.\nThen, \nthe exact sequence\n$0\\to j_!\\Lambda\n\\to \\Lambda\\to i_*\\Lambda\\to 0$\ndefines a commutative diagram\n\\begin{equation}\n\\begin{CD}\n{\\cal F}\\otimes\ni_*Ri^!{\\cal G}\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\n{\\cal F}\\otimes\nRj_*j^*{\\cal G}\n@>>>\\\\\n@V{c_{{\\cal F},{\\cal G},i}}VV@|\n@VV{\\rm (\\ref{eqpr0})}V@.\\\\\ni_*Ri^!({\\cal F}\\otimes\n{\\cal G})\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\nRj_*j^*({\\cal F}\\otimes\n{\\cal G})\n@>>>\n\\end{CD}\n\\label{eqij}\n\\end{equation}\nof distinguished triangles.\n\\end{lm}\n\n\n\\proof{\n1.\nBy the definition of\n$c_{{\\cal F},{\\cal G},i}$,\nthe morphism\n$i_*(c_{{\\cal F},{\\cal G},i})\\colon\ni_*(i^*{\\cal F}\\otimes\nRi^!{\\cal G})\n\\to i_*Ri^!({\\cal F}\n\\otimes {\\cal G})$\nis the unique morphism\nsuch that the diagram\n$$\n\\begin{CD}\n{\\cal F}\\otimes\ni^*Ri^!{\\cal G}\n@>>> {\\cal F}\\otimes{\\cal G}\n\\\\\n@V{\\rm (\\ref{eqprj})}VV@AAA\\\\\ni_*(i^*{\\cal F}\\otimes\nRi^!{\\cal G})\n@>{i_*(c_{{\\cal F},{\\cal G},i})}>>\ni_*Ri^!({\\cal F}\\otimes{\\cal G})\n\\end{CD}$$\nis commutative.\nHere the arrows without tag\nare defined by the\nadjunction $i_*Ri^!\\to 1$.\nSimilarly, the lower horizontal arrow\n${\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n\\to\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\n\\otimes{\\cal G})$\nis the unique morphism\nsuch that the diagram\n$$\n\\begin{CD}\n{\\cal F}\\otimes\ni^*Ri^!{\\cal G}\n@>>> {\\cal F}\\otimes{\\cal G}\n\\\\\n@VVV@AAA\\\\\n{\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n@>>>\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\n\\otimes{\\cal G})\n\\end{CD}$$\nis commutative.\nHere the left vertical arrow\nis the slant arrow in (\\ref{eqic})\nand the right vertical arrow\nis induced by $\\Lambda\\to i_*\\Lambda$.\nHence\nthe assertion follows.\n\n2.\nThe exact sequence\n$0\\to j_!\\Lambda\n\\to \\Lambda\\to i_*\\Lambda\\to 0$\ndefines a commutative diagram\n\\begin{equation}\n\\begin{CD}\n{\\cal F}\\otimes\nR{\\cal H}om(i_*\\Lambda,{\\cal G})\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\n{\\cal F}\\otimes\nR{\\cal H}om(j_!\\Lambda,{\\cal G})\n@>>>\\\\\n@VVV@VVV@VVV@.\\\\\nR{\\cal H}om(i_*\\Lambda,{\\cal F}\\otimes\n{\\cal G})\n@>>> {\\cal F}\\otimes\n{\\cal G}\n@>>>\nR{\\cal H}om(j_!\\Lambda,{\\cal F}\\otimes\n{\\cal 
G})\n@>>>\n\\end{CD}\n\\label{eqij2}\n\\end{equation}\nof distinguished triangles.\nBy 1.,\nthe left vertical arrow \nof (\\ref{eqij}) is\nidentified with\nthat of (\\ref{eqij2})\nand similarly for the\nright vertical arrows.\n\\qed\n\n}\n\n\\begin{lm}\\label{lmtrbc}\nLet\n$$\\begin{CD}\nX@{i'_s}>>X\\times_YY_{(s)}@<{j'_t}<< X_t\\\\\n@V{f_s}VV@V{f_{(s)}}VV @VV{f_t}V\\\\\ns@>{i_s}>>Y_{(s)}@<{j_t}<{i'_s}>>X\\times_YY_{(s)}@<{j'_t}<< X_t\\\\\n@V{p_s}VV@V{p_{(s)}}VV @VV{p_t}V\\\\\nP_s@>{i''_s}>>P\\times_YY_{(s)}@<{j''_t}<{\\rm (\\ref{eqpr0})}>>\nR(p'j')_*(p'j')^*{\\cal F}\n\\end{CD}\n\\label{eqYU}\n\\end{equation}\nwhere the first morphism is\ninduced by the base change morphism\nis an isomorphism\non a neighborhood of $Z$.\n\n\n\n{\\rm 2.}\n$f$ is universally ${\\cal F}$-acyclic\nalong $Z$.\n\\end{pr}\n\nFor the sake of completeness,\nwe record the proof in \\cite{CC}\nwith more detail.\n\n\\proof{\n1. \nLet \n$D_1,\\ldots,D_n$\nbe the irreducible components\nof $D$. For a subset $I\\subset \\{1,\\ldots,n\\}$,\nlet $X'_I=X'\\times_{Y'}(\\bigcap_{i\\in I}D_i)$\nand let $i'_I\\colon X'_I\\to X'$\nbe the closed immersion.\nBy the assumption,\n$p'\\colon X'\\to X$\nand $p'i'_I\\colon X'_I\\to X$\nare ${\\cal F}$-transversal\non neighborhoods of\nthe inverse images of $Z$.\n\nLet ${\\cal F}'=p'^*{\\cal F}$.\nSince the assumption\non $Rh^!\\Lambda$\nin Proposition \\ref{prhF}.1\nis satisfied by the absolute\npurity \\cite[{\\sc Th\\'eor\\`eme 3.1.1}]{purete},\nthe immersions \n$i'_I\\colon X'_I\\to X'$\nare ${\\cal F}'$-transversal\non neighborhoods of\nthe inverse images of $Z$\nby Proposition \\ref{prhF}.1.\nHence by Lemma \\ref{lmiZ},\nthe canonical morphism\n${\\cal F}'\n\\otimes Rj'_*\\Lambda\n\\to Rj'_*j'^*{\\cal F}'$ (\\ref{eqpr0}) is an\nisomorphism \non a neighborhood of $p'^{-1}(Z)$.\nSince $p'$ is proper,\nwe obtain an isomorphism\n$Rp'_*({\\cal F}'\n\\otimes Rj'_*\\Lambda)\n\\to R(pj')_*(pj')^*{\\cal F}$\non a neighborhood of $Z$.\n\n\nBy the projection formula\n(\\ref{eqprj}),\nwe have a canonical isomorphism\n${\\cal F}\n\\otimes Rp'_*Rj'_*\\Lambda\n\\to Rp'_*({\\cal F}'\n\\otimes Rj'_*\\Lambda)$.\nThe base change morphism\n$f^*R(pj)_*\\Lambda\\to\nRp'_*Rj'_*\\Lambda$ is an isomorphism\nby the smooth base change\ntheorem\n\\cite[Corollaire 1.2]{smbc}.\nHence the morphism (\\ref{eqYU})\nis an isomorphism on a neighborhood of $Z$.\n\n{\\rm 2.}\nIt suffices to show that\nfor a smooth morphism\n$Y'\\to Y$,\nthe base change\n$X'\\to Y'$ of $f$\nis locally acyclic with respect to the \npull-back of ${\\cal F}$ by Lemma \\ref{lmlac}.3.\nSimilarly as in the proof of 1.,\nthe assumption is satisfied\nfor the pull-back $Y'\\to Y$.\nHence, \nby replacing $Y$ by $Y'$,\nit suffices to show\nthat\n$f$ is locally acyclic with respect to ${\\cal F}$.\n\n\nLet $s\\gets t$ be a specialization\nof geometric points of $Y$\nas in Lemma \\ref{lmlac}.1\nand let the notation be as loc.~cit.\nBy \\cite[Theorem 4.1,\nTheorem 8.2]{dJ},\nwe may write $t$ as a limit\n$\\varprojlim_\\lambda\nU_\\lambda$\nof the complements $U_\\lambda=Y_\\lambda\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nD_\\lambda$,\nin regular schemes $Y_\\lambda$\nendowed with a\nproper, surjective\nand generically finite\nmorphism $p_\\lambda\\colon Y_\\lambda\n\\to Y$ \nof divisors $D_\\lambda\\subset\nY_\\lambda$ with simple normal crossings.\nThen, as the limit of\n(\\ref{eqYU}), the canonical morphism\n\\begin{equation}\n{\\cal F}\\otimes f_{(s)}^*\nRj_{t*}j^*_t\\Lambda\n\\to \nRj'_{t*}j^{\\prime*}_t{\\cal 
F}\n\\label{eqijst2}\n\\end{equation}\nis an isomorphism\non the inverse image of $Z$.\nSince $Y$ is normal,\nthe canonical morphism\n$\\Lambda\\to\ni_s^*Rj_{t*}j^*_t\\Lambda$\nis an isomorphism.\nHence the isomorphism\n(\\ref{eqijst2})\ninduces an isomorphism\n(\\ref{eqijst})\non the inverse image of $Z$.\n\\qed\n\n}\n\n\\begin{cor}\\label{corlc}\nLet $X$\nbe a regular scheme\nof finite type over \na discrete valuation ring\n${\\cal O}_K$\nand\n$Z\\subset X$ be a closed subset.\nLet\n${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules\non $X$.\nAssume that every separated morphism\n$h\\colon W\\to X$ of\nregular schemes\nof finite type over ${\\cal O}_K$\nis ${\\cal F}$-transversal\non a neighborhood of\nthe inverse image $h^{-1}(Z)$.\nThen ${\\cal F}$ is locally constant\non a neighborhood of $Z$.\n\\end{cor}\n\n\\proof{\nBy Proposition \\ref{prla1}\napplied to $1_X\\colon X\\to X$,\nthe identity $1_X\\colon X\\to X$\nis ${\\cal F}$-acyclic\nalong $Z$.\nHence ${\\cal F}$ is locally constant\non a neighborhood of $Z$\nby Lemma \\ref{lmla}.2.\n\\qed\n\n}\n\n\\medskip\n\nWe have a partial converse of\nProposition \\ref{prla1}\nnot used in the article.\n\n\\begin{pr}[{\\cite[Corollary 8.10]{CC}}]\\label{prla2}\nLet $f\\colon X\\to Y$\nbe a smooth morphism\nof noetherian schemes\nand let\n${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules\non $X$.\nLet $i\\colon Z\\to Y$ be an immersion\nand let\n$$\\begin{CD}\nX@h>>X@<{j'}<< U\\\\\n@VgVV @VfVV @VV{f_V}V\\\\\nZ@>i>>Y@{g'}>>X'@>{f'}>>Y'\\\\\n@V{h_V}VV@VhVV@VV{h'}V\\\\\nV@>g>> X@>f>>Y\n\\end{CD}$$\nbe a cartesian diagram of\nmorphisms of finite type of schemes\nsuch that \n$f\\colon X\\to Y$ is smooth\nand that the vertical arrows are\nseparated.\nAssume that\n$Rh^!\\Lambda$ is locally constant\nof support $X'$\nand that the base change\nmorphism\n$g'^*Rh^!\\Lambda\n\\to Rh_V^!\\Lambda$\nis an isomorphism.\n\nLet ${\\cal G}$\nbe a constructible complex\nof $\\Lambda$-modules on $V$\nand \nassume that $f$\nis $Rg_*{\\cal G}$-acyclic\nand that $fg$ is\n${\\cal G}$-acyclic.\nThen, \nthe base change morphism\n\\begin{equation}\nh^*Rg_*{\\cal G}\n\\to \nRg'_*h_V^*{\\cal G}\n\\label{eqbcj}\n\\end{equation}\nis an isomorphism.\n\\end{cor}\n\n\\proof{\nSince $f$ is $Rg_*{\\cal G}$-acyclic\nand $fg$ is ${\\cal G}$-acyclic,\nby Proposition \\ref{prla2},\n$h$ is $Rg_*{\\cal G}$-transversal\nand\n$h_V$ is ${\\cal G}$-transversal.\nHence the assertion follows\nfrom Proposition \\ref{prhF}.2.\n\\qed\n\n}\n\n\\section{$C$-transversality}\\label{sTX}\n\n\nIn this section,\nfirst we define \nthe FW-cotangent bundle\nof a regular scheme,\nas a vector bundle\non the closed subscheme \ndefined by $p=0$.\nThen, \nwe study properties\nof morphisms with respect to\nits closed conical subsets\ncorresponding to the transversality\nand the local\nacyclicity studied in Section \\ref{sF}.\n\nFirst in Section \\ref{ssFW},\nwe recall basic properties\nof the sheaf $F\\Omega^1_X$ \nof Frobenius-Witt differentials\nfrom \\cite{FW}.\nIn particular if $X$ is regular,\nunder a certain finiteness condition,\nthe sheaf $F\\Omega^1_X$\nis a locally free \n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\nof rank $\\dim X$ on\n$X_{{\\mathbf F}_p}=\nX\\times_{{\\rm Spec}\\, {\\mathbf Z}}\n{\\rm Spec}\\, {\\mathbf F}_p$.\nUnder this condition,\nwe define the FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$ on $X_{{\\mathbf F}_p}$\nas the vector bundle\nassociated to the locally free\n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\n$F\\Omega^1_X$.\n\n\nWe study properties\nof 
morphisms with respect to \na given closed conical subset in\nSections \\ref{ssCtr} and \\ref{ssCac}.\nIn Section \\ref{ssCtr},\nwe study the transversality\nfor morphisms to $X$.\nIn Section \\ref{ssCac},\nwe study the acyclicity,\nwhich was also called transversality,\nfor morphisms from $X$.\n\n\n\n\n\\subsection{FW-cotangent bundle}\n\\label{ssFW}\n\n\n\\begin{df}[{\\rm \\cite[Definition 1.1]{FW}}]\\label{dfFW}\nLet $p$ be a prime number.\n\n{\\rm 1.}\nDefine a polynomial\n$P\\in {\\mathbf Z}[X,Y]$\nby\n\\begin{equation}\nP=\n\\sum_{i=1}^{p-1}\n\\dfrac{(p-1)!}{i!(p-i)!}\\cdot\nX^iY^{p-i}.\n\\label{eqP}\n\\end{equation}\n\n{\\rm 2.}\nLet $A$ be a ring\nand $M$ be an $A$-module.\nWe say that a mapping\n$w\\colon A\\to M$\nis an Frobenius-Witt derivation\nor FW-derivation for short\nif the following condition is\nsatisfied:\nFor any $a,b\\in A$, we have\n\\begin{align}\nw(a+b)\\, &=\nw(a)+\nw(b)\n-P(a,b)\n\\cdot w(p),\n\\label{eqadd}\\\\\nw(ab)\\, &=\nb^p\\cdot w(a)+\na^p\\cdot w(b).\n\\label{eqLb}\n\\end{align}\n\\end{df}\n\n\nDefinition \\ref{dfFW}.2\nis essentially the same\nas \\cite[Definition 2.1.1]{DKRZ}.\nWe recall some results from \\cite{FW}.\n\n\\begin{lm}\\label{lmOm}\nLet $p$ be a prime number and\n$A$ be a ring.\n\n{\\rm 1.\n(\\cite[Lemma 2.1.1]{FW})}\nThere exists a universal pair\nof an $A$-module\n$F\\Omega^1_A$\nand an FW-derivation\n$w\\colon A\n\\to F\\Omega^1_A$.\n\n{\\rm 2.\n(\\cite[Corollary 2.3.1]{FW})}\nIf $A$ is a ring over ${\\mathbf Z}_{(p)}$,\nwe have $p\\cdot F\\Omega^1_A=0$.\n\n{\\rm 3.\n(\\cite[Corollary 2.3.2]{FW})}\nIf $A$ is a ring over ${\\mathbf F}_{p}$,\nthen there exists a canonical\nisomorphism $F\\Omega^1_A\n\\to F^*\\Omega^1_A\n=\\Omega^1_A\\otimes_AA$\nto the tensor product\nwith respect to the absolute\nFrobenius morphism $A\\to A$.\n\\end{lm}\n\n\n\nWe call $F\\Omega^1_A$\nthe module of FW-differentials of $A$\nand $w(a)\\in F\\Omega^1_A$\nthe FW-differential of $a\\in A$.\nFor a morphism $A\\to B$ of rings,\nwe have a canonical $B$-linear morphism\n$F\\Omega^1_A\\otimes_AB\n\\to\nF\\Omega^1_B$.\n\nWe may sheafify the construction\nand define $F\\Omega^1$\nas a quasi-coherent ${\\cal O}_X$-module\nfor a scheme $X$. 
\nWe call $F\\Omega^1_X$\nthe sheaf of FW-differentials on $X$.\nIf $X$ is a scheme over\n${\\mathbf Z}_{(p)}$,\nthe ${\\cal O}_X$-module\n$F\\Omega^1_X$ is\nan ${\\cal O}_{X_{{\\mathbf F}_p}}$-module\nwhere\n$X_{{\\mathbf F}_p}\n=X\\times_{{\\rm Spec}\\, {\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$.\nFurther if $X$ is noetherian\nand if $X_{{\\mathbf F}_p}$\nis of finite type over a field\nof finite $p$-basis,\nthen $F\\Omega^1_X$ is\na coherent \n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\nby \\cite[Lemma 4.1.2]{FW}.\nIf $X$ is a scheme over\n${\\mathbf F}_p$,\nwe have a canonical isomorphism\n\\begin{equation}\nF\\Omega^1_X\n\\to F^*\\Omega^1_X\n\\label{eqFFX}\n\\end{equation}\nto the pull-back by\nthe absolute Frobenius morphism\n$F\\colon X\\to X$,\nsending $w(a)$ to $da$.\n\n\nFor a morphism \n$f\\colon X\\to Y$ of schemes,\nwe have a canonical morphism\n\\begin{equation}\nf^*F\\Omega^1_Y\\to\nF\\Omega^1_X\n\\label{eqFXY}\n\\end{equation}\n\n\n\n\n\n\n\n\\begin{pr}[{\\rm \\cite[Proposition 2.4]{FW}}]\\label{prdx}\nLet $X$ be a scheme\nand $x\\in X$\nbe a point such that\nthe residue field $k(x)={\\cal O}_{X,x}\/\n{\\mathfrak m}_{X,x}$\nis of characteristic $p$.\nFor a $k(x)$-vector space $M$,\nlet $F^*M$\ndenote the tensor product\n$M\\otimes_{k(x)}k(x)$\nwith respect to the Frobenius\n$F\\colon k(x)\\to k(x)$.\nThen, we have an exact\nsequence \n\\begin{equation}\n\\begin{CD}\n0@>>>\nF^*({\\mathfrak m}_{X,x}\/\n{\\mathfrak m}_{X,x}^2)\n@>{w}>>\nF\\Omega^1_{X,x}\n\\otimes_{{\\cal O}_{X,x}} k(x)\n@>{\\rm (\\ref{eqFFX})}>>\nF^*\\Omega^1_{k(x)}\n@>>>0\n\\end{CD}\n\\label{eqdx}\n\\end{equation}\nof $k(x)$-vector spaces.\n\\end{pr}\n\n\n\n\\begin{pr}[{\\rm \\cite[Proposition 2.8]{FW}}]\\label{prsm}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of\nregular noetherian schemes\nover ${\\mathbf Z}_{(p)}$.\nThen the following conditions\nare equivalent:\n\n{\\rm (1)}\n$f\\colon X\\to Y$ is smooth\non a neighborhood of\n$X_{{\\mathbf F}_p}$.\n\n{\\rm (2)}\nThe sequence \n\\begin{equation}\n\\begin{CD}\n0@>>>\nf^*F\\Omega^1_Y\n@>{\\rm (\\ref{eqFXY})}>>\nF\\Omega^1_X\n@>{\\rm (\\ref{eqFFX})}>>F^*\\Omega^1_{\nX_{{\\mathbf F}_p}\/\nY_{{\\mathbf F}_p}}\n@>>>\n0\n\\end{CD}\n\\end{equation}\nof ${\\cal O}_{X_{{\\mathbf F}_p}}$-modules\nis a locally split exact sequence.\n\\end{pr}\n\n\n\\begin{thm}[{\\rm \\cite[Theorem 3.1]{FW}}]\n\\label{thmreg}\nLet $X$ be a noetherian scheme\nover ${\\mathbf Z}_{(p)}$\nand $X_{{\\mathbf F}_p}=\nX\\times_{{\\rm Spec}\\, \n{\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$\nbe the closed subscheme.\nAssume that the reduced\npart $X_{{\\mathbf F}_p,{\\rm red}}$ is\na scheme of finite type over\na field $k$ with finite $p$-basis.\nIf $X$ is regular and \nis equi-dimensional\nof dimension $n$ and if\n$[k:k^p]=p^r$, then \nthe ${\\cal O}_{X_{{\\mathbf F}_p}}$-module\n$F\\Omega^1_X$\nis locally free of rank $n+r$.\n\\end{thm}\n\n\n\n\n\\begin{cor}[{\\rm \\cite[Corollary 2.6,\nCorollary 3.2]{FW}}]\n\\label{corXZ}\nLet $X$ be a regular noetherian scheme\nover ${\\mathbf Z}_{(p)}$\nsuch that the reduced\npart $X_{{\\mathbf F}_p,{\\rm red}}$ \nof\n$X_{{\\mathbf F}_p}=X\\times_{{\\rm Spec}\\, \n{\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$\nis\na scheme of finite type over\na field $k$ of finite $p$-basis.\nLet $Z\\subset X$ be a closed subscheme.\n\nWe consider the following conditions:\n\n{\\rm (1)}\n$Z$ is regular on a neighborhood of\n$Z_{{\\mathbf F}_p}=Z\\times_{{\\rm Spec}\\, \n{\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf F}_p$.\n\n{\\rm (1$'$)}\nAt 
every point $x\\in Z\n_{{\\mathbf F}_p}$,\nthe local ring\n${\\cal O}_{Z,x}$ is regular.\n\n{\\rm (2)}\nThe sequence \n\\begin{equation}\n\\begin{CD}\n0@>>>\nF^*(N_{Z\/X}\\otimes_{{\\cal O}_Z}\n{\\cal O}_{Z_{{\\mathbf F}_p}})\n@>w>>\nF\\Omega^1_X\\otimes_{{\\cal O}_X}\n{\\cal O}_{Z_{{\\mathbf F}_p}}\n\\longrightarrow\nF\\Omega^1_Z\n@>>>\n0\n\\end{CD}\n\\label{eqXZ}\n\\end{equation}\nof \n${\\cal O}_{Z_{{\\mathbf F}_p}}$-modules\nis a locally splitting exact sequence.\n\nThen, we have\n{\\rm (1)}$\\Rightarrow${\\rm (2)}$\\Rightarrow${\\rm (1$'$)}.\nConsequently \nif the subset\n${\\rm Reg}(Z)\n\\subset Z$ consisting\nof regular points is an open subset,\nthe 3 conditions are equivalent.\n\\end{cor}\n\n\n\\proof{\nThe implications\n{\\rm (1)}$\\Rightarrow${\\rm (2)} and\n{\\rm (2)}$\\Rightarrow${\\rm (1$'$)}\nare proved in \n\\cite[Corollary 3.2]{FW} and in\n\\cite[Corollary 2.6.1]{FW}\nrespectively.\nSince (1$'$) means\n$Z_{{\\mathbf F}_p}\n\\subset {\\rm Reg}(Z)$,\nthe last assertion follows.\n\\qed\n\n}\n\n\n\n\n\n\\begin{df}\\label{dfFTX}\nLet $k$ be a perfect field \nof characteristic $p>0$\nand let $X$ be\na regular noetherian scheme\nsatisfying the following condition:\n\n{\\rm (F)}\n$X_{{\\mathbf F}_p}\n=X\\times_{{\\rm Spec}\\, {\\mathbf Z}}\n{\\rm Spec}\\, {\\mathbf F}_p$\nis a scheme of\nfinite type over $k$.\n\n\\noindent\nThen, we define\nthe FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$\nof $X$ to be the vector bundle\non $X_{{\\mathbf F}_p}$\nassociated with the locally free\n${\\cal O}_{X_{{\\mathbf F}_p}}$-module\n$F\\Omega^1_X$\nof rank $\\dim X$.\n\\end{df}\n\nLet $x\\in X_{{\\mathbf F}_p}$\nbe a closed point \nand let\n$T^*_xX$\ndenote the cotangent space\nat $x$ defined as a scheme\n${\\rm Spec}\\,\nS_{k(x)}({\\mathfrak m}_x\/\n{\\mathfrak m}_x^2)^\\vee$\nassociated to the\n$k(x)$-vector space\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2$.\nSince $k(x)$ is perfect,\nthe exact sequence\n(\\ref{eqdx}) defines a canonical\nisomorphism\n\\begin{equation}\nF^*T^*_xX\n\\to \nFT^*X|_x\n\\label{eqTx}\n\\end{equation}\nto the fiber of \nthe FW-cotangent bundle\nat $x$ \nfrom the pull-back by Frobenius\n$F\\colon x\\to x$ of \n$T^*_xX$.\nIf $X=X_{{\\mathbf F}_p}$,\nthen the FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$\nis the pull-back of\nthe cotangent bundle\n$T^*X$ \nby the Frobenius morphism\n$F\\colon X\\to X$\nby {\\rm (\\ref{eqFFX})}.\n\n\nLet $X\\to Y$ be a morphism\nof finite type\nof regular noetherian schemes\nsatisfying the condition (F)\nin Definition \\ref{dfFTX}.\nThen, the morphism (\\ref{eqFXY})\ndefines morphisms\n\\begin{equation}\n\\begin{CD}\nFT^*X|_{X_{{\\mathbf F}_p}}\n@<{f^*}<<\nFT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n@>>>\nFT^*Y|_{Y_{{\\mathbf F}_p}}\n\\end{CD}\n\\label{eqdXY}\n\\end{equation}\n of\nschemes.\n\nAssume that $X\\to Y$ is\nsmooth and let \n$F^*T^*X\/Y|_{\nX_{{\\mathbf F}_p}}$ denote the\npull-back by the Frobenius\n$F\\colon X_{{\\mathbf F}_p}\n\\to X_{{\\mathbf F}_p}$\nof the restriction to\n$X_{{\\mathbf F}_p}$ of \nthe vector\nbundle defined $T^*X\/Y$ by\nthe locally free ${\\cal O}_X$-module\n$\\Omega^1_{X\/Y}$.\nThen, by Proposition \\ref{prsm},\nwe have an exact sequence\n\\begin{equation}\n0\\to FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}\n\\to F^*T^*X\/Y|_{\nX_{{\\mathbf F}_p}}\\to 0\n\\label{eqTEf}\n\\end{equation}\nof vector bundles on $X_{{\\mathbf F}_p}$.\n\nSimilarly,\nlet $Z\\to X$ be a closed immersion\nof regular 
noetherian schemes\nsatisfying the condition (F).\nLet ${\\cal I}_Z\\subset {\\cal O}_X$\nbe the ideal sheaf\nand let $T^*_ZX$ be\nthe conormal bundle\ndefined by\nthe locally free ${\\cal O}_Z$-module\n${\\cal I}_Z\/{\\cal I}_Z^2$.\nLet \n$F^*T^*_ZX|_{\nZ_{{\\mathbf F}_p}}$ denote the\npull-back by the Frobenius\n$F\\colon Z_{{\\mathbf F}_p}\n\\to Z_{{\\mathbf F}_p}$\nof the restriction to\n$Z_{{\\mathbf F}_p}$.\nThen, by Corollary \\ref{corXZ},\nwe have an exact sequence\n\\begin{equation}\n0\\to F^*T^*_ZX|_{\nZ_{{\\mathbf F}_p}}\\to \nFT^*X|_{Z_{{\\mathbf F}_p}}\n\\to \nFT^*Z|_{Z_{{\\mathbf F}_p}}\n\\to 0\n\\label{eqTEi}\n\\end{equation}\nof vector bundles on $Z_{{\\mathbf F}_p}$.\n\n\n\\subsection{$C$-transversality}\\label{ssCtr}\n\nIn the rest of this section,\nwe fix a perfect field $k$\nof characteristic $p>0$.\n\nWe fix some terminology\non closed conical subsets of\na vector bundle of a scheme.\nLet $V$ be a vector bundle\nover a scheme $Y$.\nWe say that a closed subset\nof $V$ is conical if it is\nstable under the action of\n${\\mathbf G}_{m,Y}$.\nFor a closed conical subset\n$C\\subset V$,\nthe intersection\n$B=C\\cap Y$ with the\n$0$-section $Y\\subset V$ regarded\nas a closed subset of $Y$\nis called the base of $C$.\nThe base $B$ equals the\nimage of $C$ by\nthe projection $V\\to Y$.\n\n\nWe say that a separated\nmorphism $f\\colon X\\to Y$\nof finite type of schemes\nis proper on a closed subset $Z\\subset X$\nif for every base change\n$f'\\colon X'\\to Y'$ of $f$\nits restriction to\nthe inverse image $Z'\n\\subset X'$ is a closed mapping.\nFor a morphism\n$V\\to V'$ of vector bundles\non a scheme $Y$\nand a closed conical subset\n$C$ of $V$,\nthe morphism $V\\to V'$\nis proper on $C$ if and only\nif the intersection\n$C\\cap {\\rm Ker}(V\\to V')$\nis a subset of the $0$-section of $V$\nby \\cite[Lemma 1.2(ii)]{Be}.\n\n\n\\begin{df}\\label{dfhC}\nLet $X$ be a \nregular noetherian scheme\nsatisfying the condition {\\rm (F)}\nin Definition {\\rm \\ref{dfFTX}}\nand let $C\\subset FT^*X|_{X_{{\\mathbf F}_p}}\n$ be a closed\nconical subset\nof the FW-cotangent bundle.\nLet $h\\colon W\\to X$\nbe a morphism of finite type\nof regular schemes.\n\n\n{\\rm 1.}\n{\\rm (\\cite[1.2]{Be}, \\cite[Definition 3.3]{CC})}\nWe say that \n$h\\colon W\\to X$ is $C$-transversal\nif \nthe intersection of\n$h^*C=C\\times_XW\n\\subset \nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$\nwith the kernel \n${\\rm Ker}(\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to\nFT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section.\n\n{\\rm 2.}\nAssume that $h$ is\n$C$-transversal.\nThen we define\na closed conical subset \n$h^\\circ C\n\\subset FT^*W|_{W_{{\\mathbf F}_p}}$\nto be the image\nof $h^*C$ by\n$\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to\nFT^*W|_{W_{{\\mathbf F}_p}}$.\n\\end{df}\n\nExample.\nLet $Z\\subset X$ be a regular closed subscheme.\nThen a closed\nconical subset \n$C=F^*T^*_ZX|_{Z_{{\\mathbf F}_p}}\n\\subset \nFT^*X|_{X_{{\\mathbf F}_p}}$\nis defined by (\\ref{eqTEi}).\nIn particular,\nfor $Z=X$,\nthe $0$-section\n$F^*T^*_XX|_{X_{{\\mathbf F}_p}}\n=X_{{\\mathbf F}_p}$ is\na closed conical subset of\n$FT^*X|_{X_{{\\mathbf F}_p}}$.\n\n\n\n\\begin{lm}\\label{lmTXC}\nLet $X$ be a \nregular noetherian scheme\nsatisfying the condition {\\rm (F)}\nin Definition {\\rm \\ref{dfFTX}}\nand let $C\\subset FT^*X|_{X_{{\\mathbf F}_p}}\n$ be a closed\nconical subset.\nLet $h\\colon W\\to 
X$\nbe a morphism of finite type\nof regular schemes.\n\n{\\rm 1.}\nLet $C=FT^*X|_Z$\nbe the restriction to \na closed subset\n$Z\\subset X_{{\\mathbf F}_p}$\nof the closed fiber.\nIf $h$ is $C$-transversal,\nthen $h$ is smooth\non a neighborhood of\nthe inverse image $h^{-1}(Z)$.\n\n{\\rm 2.}\nIf $C$ is the $0$-section\nof $FT^*X|_{X_{{\\mathbf F}_p}}$,\nthen $h$ is $C$-transversal.\n\n{\\rm 3.}\nIf $h$ is smooth,\nfor any closed conical subset\n$C$ of $FT^*X|_{X_{{\\mathbf F}_p}}$,\nthe morphism\n$h$ is $C$-transversal.\n\\end{lm}\n\n\n\\proof{\n1.\nThe condition that the\nintersection of\n$h^*C=FT^*X|_Z\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n=FT^*X\n\\times_{X_{{\\mathbf F}_p}}\nh^{-1}(Z)$\nwith the kernel\n${\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section\nmeans that\n$F\\Omega^1_X\n\\otimes_{{\\cal O}_X}{\\cal O}_W\n\\to F\\Omega^1_W$\nis a locally splitting injection\non a neighborhood of $h^{-1}(Z)$.\nBy Proposition \\ref{prsm},\nthis means that\n$W\\to X$\nis smooth on a neighborhood of\nthe inverse image $h^{-1}(Z)$.\n\n\n2.\nIf $C$ is the $0$-section,\nits intersection with the\nkernel \n${\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis also the $0$-section.\n\n\n\n3.\nIf $h$ is smooth,\nthe morphism $FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}}$ is an injection\nby Proposition \\ref{prsm}.\nHence \nfor any subset\n$C\\subset FT^*X|_{X_{{\\mathbf F}_p}}$,\nits intersection with the\nkernel \n${\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section.\n\\qed\n\n}\n\n\\begin{lm}\\label{lmhC}\nLet $h\\colon W\\to X$\nbe a morphism of\nfinite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}\nand let $C$ be a closed\nconical subset of $FT^*X|_{X_{{\\mathbf F}_p}}$.\nAssume that $h$ is\n$C$-transversal.\nThen, for a morphism \n$g\\colon V\\to W$ of finite type of\nregular noetherian schemes\nthe following conditions\nare equivalent:\n\n{\\rm (1)}\nThe morphism\n$g$ is $h^\\circ C$-transversal.\n\n{\\rm (2)}\nThe composition\n$hg$ is $C$-transversal.\n\n\\noindent\nIf these equivalent conditions\nare satisfied,\nwe have $(hg)^\\circ C=g^\\circ h^\\circ C$.\n\\end{lm}\n\n\\proof{\nThe condition (1)\nmeans that\nthe intersection\n$h^*C\\cap {\\rm Ker}(\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section\nand further that\nfor $h^\\circ C\n\\subset FT^*W|_{W_{{\\mathbf F}_p}}$,\nthe intersection\n$g^*h^\\circ C\\cap {\\rm Ker}(\nFT^*W|_{W_{{\\mathbf F}_p}}\n\\times_{W_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf F}_p}})$\nis a subset of the $0$-section.\nThis means that\n$(hg)^*C\\cap {\\rm Ker}(\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf F}_p}})$\nis a subset of the $0$-section,\nnamely the condition (2).\n\nThe image of \n$(hg)^*C$ by\n$FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf F}_p}}$\nequals \nthe image of $g^*h^\\circ C$\nby \n$FT^*W|_{W_{{\\mathbf F}_p}}\n\\times_{W_{{\\mathbf F}_p}}\nV_{{\\mathbf F}_p}\n\\to FT^*V|_{V_{{\\mathbf 
F}_p}}$.\n\\qed\n\n}\n\n\\medskip\nThe terminology transversality\nis related to the transversality\nof morphisms of regular\nschemes defined as follows.\n\n\n\n\n\n\\begin{df}\\label{dftrans}\nLet $f\\colon X\\to Y$\nand $g\\colon V\\to Y$\nbe morphisms of finite type of\nregular schemes\nand set $W\n=X\\times_YV$.\n\n\n\n{\\rm 1.}\nLet $w\\in W$\nand $x\\in X,y\\in Y,v\\in V$\nbe the images.\nWe say that \n$f$ and $g$ are transversal\nat $w$, if ${\\cal O}_{W,w}$\nis regular and if\n$Tor_q^{{\\cal O}_{Y,y}}\n({\\cal O}_{X,x},{\\cal O}_{V,v})=0$\nfor $q>0$.\n\n{\\rm 2.}\nLet $W_1\\subset W$\nbe an open subscheme.\nWe say that \n$f$ and $g$ are transversal on\n$W_1$ if \n$f$ and $g$ are transversal \nat every point of $W_1$.\n\\end{df}\n\nExample.\nLet $Z\\subset X$ be a regular closed subscheme\nand \n$C=F^*T^*_ZX|_{Z_{{\\mathbf F}_p}}\n\\subset \nFT^*X|_{Z_{{\\mathbf F}_p}}$\nbe the closed\nconical subset defined by the conormal bundle.\nThen, as we will see\nin Corollary \\ref{corfC},\na morphism\n$h\\colon W\\to X$ of\nfinite type \nof regular quasi-excellent\nnoetherian schemes\nis $C$-transversal\nif and only if\n$h\\colon W\\to X$\nis transversal to $Z\\subset X$\non a neighborhood of\nthe closed fiber $W_{{\\mathbf F}_p}$.\n\nIn particular, \nif $X$ is smooth over \na discrete valuation \nring ${\\cal O}_K$ of mixed\ncharacteristic with residue field $k$\nand if $C=F^*T^*_{X_k}X|_{X_k}$\nfor the closed fiber $Z=X_k$,\nthen the condition that\n$h\\colon W\\to X$ is\n$C$-transversal\nmeans that\n$W$ is smooth over ${\\cal O}_K$ \non a neighborhood of\nthe closed fiber $W_k$.\n\n\n\n\n\\begin{lm}\\label{lmtrreg}\nLet $f\\colon X\\to Y$\nand $g\\colon V\\to Y$\nbe morphisms of finite type of\nregular schemes\nand set $W\n=X\\times_YV$.\nLet $w\\in W$\nand $x\\in X,y\\in Y,v\\in V$\nbe the images.\n\n{\\rm 1.}\nSuppose that $g\\colon V\\to Y$\nis an immersion.\nThen, the following conditions\nare equivalent:\n\n{\\rm (1)}\n$f$ and $g$ are transversal\nat $w$.\n\n{\\rm (2)}\nThe morphism\n$T^*_yY\\times_yx\\to T^*_xX$\non the cotangent space\ninduces an injection\non the subspace\n$T^*_VY\\times_Vy\n\\subset\nT^*_yY\\times_yx$.\n\n{\\rm 2.}\nSuppose that the\nsubset ${\\rm Reg}(W)\n\\subset W$\nconsisting of regular points\nis an open subset.\nIf\n$f$ and $g$ are transversal\nat $w\\in W$,\nthen\n$f$ and $g$ are transversal\non a neighborhood of $w$.\n\\end{lm}\n\n\n\nThe condition that\n${\\rm Reg}(W)\n\\subset W$\nis an open subset is satisfied\nif $W$ is of finite type\nover a Dedekind domain\nsuch that the fraction field\nis of characteristic $0$\nor a semi-local ring of dimension\nat most $1$\nby \\cite[Corollaire (6.12.6)]{EGA4}.\n\n\n\\proof{\n1.\nLet $a_1,\\ldots,a_r\\in {\\cal O}_{Y,y}$\nbe a minimal system of generators\nof ${\\rm Ker}({\\cal O}_{Y,y}\n\\to {\\cal O}_{V,y})$.\nThen, the both conditions are\nequivalent to the condition\nthat $a_1,\\ldots,a_r\\in {\\cal O}_{X,x}$\nis a part of a regular system of\nparameters.\n\n\n2.\nSince the ${\\cal O}_W$-modules\n${\\cal T}or_q^{{\\cal O}_Y}\n({\\cal O}_X,{\\cal O}_V)=0$\nare coherent\nand $w$ is an element\nof the open subset ${\\rm Reg}(W)$,\nthe assertion follows.\n\\qed\n\n}\n\n\n\\medskip\n\n\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of\nregular noetherian schemes\nsuch that $Y_{{\\mathbf F}_p}$\nis of finite type\nover $k$\nand consider the\nmorphisms\n(\\ref{eqdXY}).\nLet $C$ be a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$\nsuch that \n$f\\colon X\\to Y$ is proper\non the base $B(C)$.\nThen we 
\n\nExample.\nLet $Z\\subset X$ be a regular closed subscheme\nand \n$C=F^*T^*_ZX|_{Z_{{\\mathbf F}_p}}\n\\subset \nFT^*X|_{Z_{{\\mathbf F}_p}}$\nbe the closed\nconical subset defined by the conormal bundle.\nThen, as we will see\nin Corollary \\ref{corfC},\na morphism\n$h\\colon W\\to X$ of\nfinite type \nof regular quasi-excellent\nnoetherian schemes\nis $C$-transversal\nif and only if\n$h\\colon W\\to X$\nis transversal to $Z\\subset X$\non a neighborhood of\nthe closed fiber $W_{{\\mathbf F}_p}$.\n\nIn particular, \nif $X$ is smooth over \na discrete valuation \nring ${\\cal O}_K$ of mixed\ncharacteristic with residue field $k$\nand if $C=F^*T^*_{X_k}X|_{X_k}$\nfor the closed fiber $Z=X_k$,\nthen the condition that\n$h\\colon W\\to X$ is\n$C$-transversal\nmeans that\n$W$ is smooth over ${\\cal O}_K$ \non a neighborhood of\nthe closed fiber $W_k$.\n\n\n\n\n\\begin{lm}\\label{lmtrreg}\nLet $f\\colon X\\to Y$\nand $g\\colon V\\to Y$\nbe morphisms of finite type of\nregular schemes\nand set $W\n=X\\times_YV$.\nLet $w\\in W$\nand $x\\in X,y\\in Y,v\\in V$\nbe the images.\n\n{\\rm 1.}\nSuppose that $g\\colon V\\to Y$\nis an immersion.\nThen, the following conditions\nare equivalent:\n\n{\\rm (1)}\n$f$ and $g$ are transversal\nat $w$.\n\n{\\rm (2)}\nThe morphism\n$T^*_yY\\times_yx\\to T^*_xX$\non the cotangent space\ninduces an injection\non the subspace\n$T^*_VY\\times_Vy\n\\subset\nT^*_yY\\times_yx$.\n\n{\\rm 2.}\nSuppose that the\nsubset ${\\rm Reg}(W)\n\\subset W$\nconsisting of regular points\nis an open subset.\nIf\n$f$ and $g$ are transversal\nat $w\\in W$,\nthen\n$f$ and $g$ are transversal\non a neighborhood of $w$.\n\\end{lm}\n\n\n\nThe condition that\n${\\rm Reg}(W)\n\\subset W$\nis an open subset is satisfied\nif $W$ is of finite type\nover a Dedekind domain\nsuch that the fraction field\nis of characteristic $0$,\nor over a semi-local ring of dimension\nat most $1$,\nby \\cite[Corollaire (6.12.6)]{EGA4}.\n\n\n\\proof{\n1.\nLet $a_1,\\ldots,a_r\\in {\\cal O}_{Y,y}$\nbe a minimal system of generators\nof ${\\rm Ker}({\\cal O}_{Y,y}\n\\to {\\cal O}_{V,v})$.\nThen, both conditions are\nequivalent to the condition\nthat $a_1,\\ldots,a_r\\in {\\cal O}_{X,x}$\nis a part of a regular system of\nparameters.\n\n\n2.\nSince the ${\\cal O}_W$-modules\n${\\cal T}or_q^{{\\cal O}_Y}\n({\\cal O}_X,{\\cal O}_V)$\nare coherent\nand $w$ is an element\nof the open subset ${\\rm Reg}(W)$,\nthe assertion follows.\n\\qed\n\n}\n\n\n\\medskip\n\n\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of\nregular noetherian schemes\nsuch that $Y_{{\\mathbf F}_p}$\nis of finite type\nover $k$\nand consider the\nmorphisms\n(\\ref{eqdXY}).\nLet $C$ be a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$\nsuch that \n$f\\colon X\\to Y$ is proper\non the base $B(C)$.\nThen we define a closed conical subset\n$f_\\circ C$ of $FT^*Y|_{Y_{{\\mathbf F}_p}}$\nto be\nthe image by\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n\\to\nFT^*Y|_{Y_{{\\mathbf F}_p}}$\nof the inverse image of\n$C$ by\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n\\to\nFT^*X|_{X_{{\\mathbf F}_p}}$.\n\n\nFor a closed immersion\n$i\\colon Z\\to X$\nof regular noetherian schemes\nsuch that $X_{{\\mathbf F}_p}$\nis of finite type\nover $k$,\nthe closed conical subset\n$F^*T^*_ZX|_{Z_{{\\mathbf F}_p}}$\ndefined by the conormal bundle\nequals $i_\\circ C$\nfor the $0$-section\n$C=F^*T^*_ZZ|_{Z_{{\\mathbf F}_p}}$\nof $FT^*Z|_{Z_{{\\mathbf F}_p}}$.\n\n\n\n\\begin{pr}\\label{prfC}\nLet $X,Y$ and $V$ be regular\nnoetherian schemes \nsatisfying the condition {\\rm (F)} and\n\\begin{equation}\n\\begin{CD}\nX@>> Q@>>> V\\\\\n@VVV@VVV@VVV@VVV\\\\\nX@<{\\supset}<>> P@>>>Y\n\\end{CD}$$\nsuch that $P\\to Y$\nis smooth and \n$U\\to P$ is a closed immersion.\nLet $w\\in W$ be a closed point\nabove $x$ and $v=f'(w)\\in V$.\nWe may also assume that\nthe morphisms \n$k(y)\\to k(v)$\nand hence $k(x)\\to k(w)$\nare isomorphisms.\nWe consider the cartesian\ndiagram\n$$\\begin{CD}\n@.\nT^*_wQ\n@<<<\nT^*_vV\n\\\\\n@.@AAA@AAA\\\\\nT^*_xX\n@<<<\nT^*_xP\n@<<<\nT^*_yY\n\\end{CD}$$\nof cotangent spaces\nand identify their Frobenius\npull-backs with the fibers\nof FW-cotangent bundles\nby the isomorphism (\\ref{eqTx}).\n\nLet $\\widetilde C_x\n\\subset F^*T^*_xP$\nand $A_x \\subset F^*T^*_yY$\nbe the inverse images of\n$C_x\\subset F^*T^*_xX$.\nThen, by the condition (1),\nthe intersection \n$A_x\\cap {\\rm Ker}(F^*T^*_yY \\to \nF^*T^*_vV)$\nis a subset of the $0$-section.\nSince\n$T^*_yY \\to T^*_xP$\ninduces an isomorphism\n${\\rm Ker}(T^*_yY\\to T^*_vV)\n\\to \n{\\rm Ker}(T^*_xP\\to T^*_wQ)$,\nthe intersection \n$\\widetilde C_x\\cap\n{\\rm Ker}(F^*T^*_xP\\to F^*T^*_wQ)$\nis a subset of the $0$-section.\n\nBy the exact sequence\n$0\\to T^*_XP|_x\n\\to T^*_xP\\to T^*_xX\\to 0$\nand $x\\in B(C)$,\nwe have\n$F^*T^*_XP|_x\\subset \\widetilde C_x$.\nHence\n$T^*_xP\\to T^*_wQ$\ninduces an injection on\n$T^*_XP|_x$.\nNamely, \nthe morphism\n$Q\\to P$ and the immersion\n$U\\to P$ are transversal\non a neighborhood of $w$\nby Lemma \\ref{lmtrreg}.\n\nHence\nthe horizontal arrows\nof the commutative diagram\n\\begin{equation}\n\\begin{CD}\nT^*_wW\n@<<<\nT^*_vV\n\\\\\n@AAA@AAA\\\\\nT^*_xX\n@<<<\nT^*_yY\n\\end{CD}\n\\label{eqUVW}\n\\end{equation}\ninduce isomorphisms\non the kernels and cokernels\nof the vertical arrows.\nSince the intersection of\nthe inverse image $A_x$ with \n${\\rm Ker}(F^*T^*_yY\n\\to F^*T^*_vV)$\nis a subset of the $0$-section,\nthe intersection of\n$C_x$ with \n${\\rm Ker}(F^*T^*_xX\n\\to F^*T^*_wW)$\nis also a subset of the $0$-section.\nNamely,\n$h$ is $C$-transversal\non a neighborhood of $w$.\nThus $h$ is $C$-transversal on\na neighborhood of \nthe inverse image of $B(C)$.\n\nFurther, an elementary\ndiagram chase shows\nthat the inverse image of\n$h^\\circ C|_w$\nby $F^*T^*_wW\\gets F^*T^*_vV$\nequals the image of\n$A_x$ by\n$F^*T^*_yY\\to F^*T^*_vV$.\nHence we have\n$g^\\circ f_\\circ C=\nf'_{1\\circ} h_1^\\circ C$.\n\n\n\n(2)$\\Rightarrow$(1):\nLet $w\\in B(h_1^\\circ C)$\nbe a closed point\nand let $v\\in V, x\\in X$\nand $y\\in Y$ be the images.\nThen, the commutative diagram\n(\\ref{eqUVW})\ninduces an isomorphism\n${\\rm Ker}(T^*_yY\\to T^*_vV)\n\\to\n{\\rm Ker}(T^*_xX\\to T^*_wW)$\non the kernels.\nIn the same notation,\nsince the 
intersection of\n$C_x$ with \n${\\rm Ker}(F^*T^*_xX\\to F^*T^*_wW)$\nis a subset of the $0$-section,\nthe intersection of\n$A_x$ with \n${\\rm Ker}(F^*T^*_yY\\to F^*T^*_vV)$\nis also a subset of the $0$-section.\n\\qed\n\n}\n\n\\begin{cor}\\label{corfC}\nLet $X,Y$ and $V$\nbe regular noetherian schemes \nsatisfying the condition {\\rm (F)}\nand let\n{\\rm (\\ref{eqprfC})}\nbe a cartesian diagram\nof morphisms of finite type.\nAssume that the\nsubset ${\\rm Reg}(W)\n\\subset W$\nconsisting of regular points\nis an open subset\nand that\n$f\\colon X\\to Y$ is an immersion.\nThen,\nthe following conditions\nare equivalent:\n\n{\\rm (1)}\nThe morphism\n$g$ is $F^*T^*_XY|_{Y_{{\\mathbf F}_p}}$-transversal.\n\n{\\rm (2)}\nThe morphism $g\\colon V\\to Y$\nis transversal to\nthe immersion\n$f\\colon X\\to Y$ on\na neighborhood of $W_{{\\mathbf F}_p}\n=V\\times_YX_{{\\mathbf F}_p}$.\n\\end{cor}\n\n\\proof{\nIt suffices to apply\nProposition \\ref{prfC} \ntogether with Lemma \\ref{lmTXC}.2\nto\nthe $0$-section $C$\nof $FT^*X|_{X_{{\\mathbf F}_p}}$.\n\\qed\n\n}\n\n\\begin{df}\\label{dfCet}\nLet $f\\colon U\\to X$ be an \\'etale morphism\nof regular noetherian schemes \nsatisfying the condition {\\rm (F)}\nand let $C'$ be a closed\nconical subset of $FT^*U$.\nWe identify $FT^*U$ with\nthe pull-back\n$FT^*X\\times_{X_{{\\mathbf F}_p}}\nU_{{\\mathbf F}_p}$ by the\ncanonical isomorphism\ninduced by\n$F\\Omega^1_X\\otimes_{{\\cal O}_X}\n{\\cal O}_U\\to\nF\\Omega^1_U$ \nand let\n${\\rm pr}_1\\colon \nFT^*X\\times_{X_{{\\mathbf F}_p}}\nU_{{\\mathbf F}_p} \\to \nFT^*X$ be the projection.\nThen, we define\na closed conical subset $f_*C'$ of \n$FT^*X$ to be the union of\nthe closure $\\overline{{\\rm pr}_1(C')}$\nand the restriction\n$FT^*X|_{X_{{\\mathbf F}_p}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nf(U_{{\\mathbf F}_p})}$ \nto the complement of the image $f(U_{{\\mathbf F}_p})$.\n\\end{df}\n\n\\begin{lm}\\label{lmCet}\nLet $$\n\\begin{CD}\nV@>>>W\\\\\n@VgVV@VVhV\\\\\nU@>f>>X\n\\end{CD}\n$$ be a cartesian diagram\nof regular noetherian schemes \nsatisfying the condition {\\rm (F)}\nsuch that $f$ is\nan \\'etale morphism of finite type.\nLet $C'$ be a closed\nconical subset of $FT^*U$\nand set $C=f_*C'\\subset FT^*X$\nas in Definition {\\rm \\ref{dfCet}}.\n\nIf $h$ is $C$-transversal,\nthen $h$ is smooth on a neighborhood\nof $h^{-1}(X_{{\\mathbf F}_p}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nf(U_{{\\mathbf F}_p}))$\nand $g$ is $C'$-transversal.\n\\end{lm}\n\n\\proof{\nAssume that\n$h$ is $C$-transversal.\nSince \n$C\\supset \nFT^*X|_{X_{{\\mathbf F}_p}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nf(U_{{\\mathbf F}_p})}$,\nthe morphism \n$h$ is smooth on a neighborhood\nof $h^{-1}(X_{{\\mathbf F}_p}\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} \nf(U_{{\\mathbf F}_p}))$\nby Lemma \\ref{lmTXC}.1.\nSince \n$f^\\circ C\\supset C'$,\nthe morphism \n$g$ is $C'$-transversal\nby Lemma \\ref{lmhC}.\n\\qed\n\n}\n\n\\subsection{$C$-acyclicity}\\label{ssCac}\n\nWe keep fixing a perfect field\n$k$ of characteristic $p>0$.\n\n\\begin{df}\\label{dffC}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}\nin Definition {\\rm \\ref{dfFTX}}\nand \nlet $C$\nbe a closed conical subset\nof the FW-cotangent bundle\n$FT^*X|_{X_{{\\mathbf F}_p}}$.\nWe say that $f$\nis $C$-acyclic if the inverse image of\n$C$ by the morphism\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}$\nis a subset of the 
$0$-section.\n\\end{df}\n\n\nThe corresponding notion is\ncalled $C$-transversality\nin \\cite[1.2]{Be} and\n\\cite[Definition 3.5]{CC}.\nHere to avoid confusion with\nthe $C$-transversality \nfor morphisms to $X$ in\nDefinition \\ref{dfhC}.1,\n\\cite[1.2]{Be} and\n\\cite[Definition 3.3]{CC},\nwe introduce another terminology.\nWe will show in Lemma \\ref{lmhf}.2\nthat\nfor a morphism\n$f\\colon X\\to Y$\nof regular schemes and\na closed immersion\n$i\\colon Z\\to X$\nof regular schemes,\nthe morphism\n$f$ is $F^*T^*_ZX|_{X_{{\\mathbf F}_p}}$-acyclic\nif and only if\nthe composition\n$fi$ is smooth on a neighborhood\nof $Z_{{\\mathbf F}_p}$.\n\n\n\\begin{lm}\\label{lmftr}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}\nand \nlet $C$\nbe a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$.\n\n{\\rm 1.}\nThe following conditions\nare equivalent:\n\n{\\rm (1)}\n$f$ is $C$-acyclic.\n\n{\\rm (2)}\n$f$ is smooth\non a neighborhood of\nthe base $B(C)\\subset \nX_{{\\mathbf F}_p}$\nand \nthe intersection of\n$C\\subset FT^*X|_{X_{{\\mathbf F}_p}}$ \nwith the image of the morphism\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}X_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}$\nis a subset of the $0$-section.\n\n\n{\\rm 2.}\nIf $C$ is the $0$-section\n$F^*T^*_XX|_{X_{{\\mathbf F}_p}}$,\nthe following conditions\nare equivalent:\n\n\n{\\rm (1)}\n$f$ is $C$-acyclic.\n\n{\\rm (2)}\n$f$ is smooth\non the neighborhood of\n$X_{{\\mathbf F}_p}$.\n\\end{lm}\n\n\\proof{\n1.\nThe condition (1)\nis equivalent to the conjunction\nof the following (1$'$) and (1$''$):\n\n(1$'$)\nThe inverse image of\nthe $0$-section by\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}X_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}$\non the base $B(C)\\subset X_{{\\mathbf F}_p}$\nis a subset of the $0$-sections.\n\n(1$''$)\nThe intersection of\n$C\\subset FT^*X|_{X_{{\\mathbf F}_p}}$ \nwith the image of the morphism\n$FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}X_{{\\mathbf F}_p}\n\\to FT^*X|_{X_{{\\mathbf F}_p}}$\nis a subset of the $0$-sections.\n\n\\noindent\nThe condition (1$'$)\nmeans that\nthe morphism\n$f^*F\\Omega^1_Y\n\\to\nF\\Omega^1_X$\nis a locally splitting injection\non a neighborhood\nof the base $B(C)\\subset X_{{\\mathbf F}_p}$.\nHence the assertion follows\nfrom Proposition \\ref{prsm}.\n\n2.\nFor the $0$-section\n$C=F^*T^*_XX|_{X_{{\\mathbf F}_p}}$,\nthe base\n$B(C)$ is $X_{{\\mathbf F}_p}$\nand the condition \n(1$''$) in the proof of\n1 is satisfied.\nHence the assertion follows from 1.\n\\qed\n\n}\n\n\n\\begin{pr}\\label{prfCX}\nLet $X,Y,V$\nbe regular noetherian schemes\nsatisfying the condition {\\rm (F)}\nand let\n$$\\begin{CD}\nX@>> FT^*W|_{W_{{\\mathbf F}_p}}\n@>>> F^*T^*W\/V|_{W_{{\\mathbf F}_p}}\n&\\to 0\\\\\n&@AAA@AAA@AA{\\cong}A&\\\\\n0\\to& FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n@>>> FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n@>>> F^*T^*X\/Y|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n&\\to0\\\\\n\\end{CD}$$\nof exact sequences\nof vector bundles on $W_{{\\mathbf F}_p}$.\nLet $C'\\subset FT^*W|_{W_{{\\mathbf F}_p}}$\nbe the image\nof $h^*C=C\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\subset FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$\nand let\n$A\\subset FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf 
F}_p}$\nand $A'\\subset FT^*V|_{V_{{\\mathbf F}_p}}\n\\times_{V_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$\nbe their inverse images.\n\nSince the right vertical arrow is an\nisomorphism,\nthe lower left arrow induces\nan isomorphism\n${\\rm Ker}(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to \nFT^*V|_{V_{{\\mathbf F}_p}}\n\\times_{V_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\to\n{\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$.\nHence $A\n\\subset FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$ is a subset\nof the $0$-section\nif and only if\n$A'\\subset\nFT^*V|_{V_{{\\mathbf F}_p}}\n\\times_{V_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}$ and\n$h^*C\\cap \n{\\rm Ker}(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to FT^*W|_{W_{{\\mathbf F}_p}})$\nare subsets\nof the $0$-sections\nand the assertion follows.\n\\qed\n\n}\n\n\n\\begin{lm}\\label{lmhf}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type of \nregular noetherian schemes\nsatisfying the condition {\\rm (F)}.\n\n{\\rm 1.}\nLet $C$ be a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$\nand \nassume that $f$ is proper\non the base $B(C)$.\nLet $g\\colon Y\\to Z$\nbe a morphism of finite type of \nregular noetherian schemes\nsuch that $Z_{{\\mathbf F}_p}$\nis of finite type over $k$.\nThen the following conditions are\nequivalent:\n\n{\\rm (1)}\n$g$ is $f_\\circ C$-acyclic.\n\n{\\rm (2)}\n$gf$ is $C$-acyclic.\n\n{\\rm 2.}\nLet $p\\colon V\\to X$\nbe a proper morphism of\nregular schemes\nand let $C=p_\\circ F^*T^*_VV|_{V_{{\\mathbf F}_p}}\n\\subset FT^*X|_{X_{{\\mathbf F}_p}}$.\nThen, the following conditions\nare equivalent:\n\n{\\rm (1)}\n$f$ is $C$-acyclic.\n\n{\\rm (2)}\nThe composition\n$fp$ is smooth\non a neighborhood of $V_{{\\mathbf F}_p}$.\n\\end{lm}\n\n\\proof{\n1.\nLet $x\\in X_{{\\mathbf F}_p}$ be a closed\npoint and $y\\in Y_{{\\mathbf F}_p}$ and $z\\in Z_{{\\mathbf F}_p}$\nbe the images.\nSince the assertion is \\'etale local,\nwe may also assume that\nthe morphisms \n$k(z)\\to k(y)\\to k(x)$\nare isomorphisms.\n\nLet $A_x$ be the inverse image of\n$C_x$ by $F^*T^*_xX\\gets F^*T^*_yY$.\nThen, the inverse image $A'_x$\nof $C_x$ by $F^*T^*_xX\\gets F^*T^*_zZ$\nequals the inverse image $A''_x$\nof $A_x$\nby $F^*T^*_yY\\gets F^*T^*_zZ$.\nSince the condition (1) \n(resp.\\ (2)) is equivalent to\nthat $A'_x$ (resp.\\ $A''_x$)\nis a subset of the $0$-section\nfor any $x$,\nthe assertion follows.\n\n2.\nBy 1.~applied to\n$p_\\circ F^*T^*_VV|_{V_{{\\mathbf F}_p}}\n=F^*T^*_VX|_{X_{{\\mathbf F}_p}}$,\nthe condition (1)\nis equivalent to that\nthe composition $fp$\nis $F^*T^*_VV|_{V_{{\\mathbf F}_p}}$-acyclic.\nHence the assertion follows from\nLemma \\ref{lmftr}.2.\n\\qed\n\n}\n\n\n\\begin{df}\\label{dfhfC}\nLet $X$\nbe a regular noetherian scheme\nsatisfying the condition {\\rm (F)}\nand let $C$ be a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$.\nWe say that a pair\n$(h,f)$ of morphisms\n$h\\colon W\\to X$,\n$f\\colon W\\to Y$\nof finite type\nof regular noetherian schemes\nsuch that $Y_{{\\mathbf F}_p}$\nis of finite type over $k$\nis $C$-acyclic\nif the intersection of\n$(C\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\subset \n(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\times_{W_{{\\mathbf 
F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})$\nwith the kernel \n${\\rm Ker}((h^*,f^*)\\colon$\n$(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\to\nFT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the $0$-section.\n\\end{df}\n\n\n\\begin{lm}\\label{lmhfC}\nLet $X$\nbe a regular noetherian scheme\nsatisfying the condition {\\rm (F)}\nand let $C$ be a closed conical subset\nof $FT^*X|_{X_{{\\mathbf F}_p}}$.\n\n\n{\\rm 1.}\nLet $f\\colon X\\to Y$\nbe a morphism of finite type\nof regular noetherian schemes\nsatisfying the condition {\\rm (F)}.\nThen, the following conditions are\nequivalent:\n\n{\\rm (1)}\n$f$ is $C$-acyclic.\n\n{\\rm (2)}\n$(1_X,f)$ is $C$-acyclic.\n\n{\\rm 2.}\nLet $h\\colon W\\to X$\nand $f\\colon W\\to Y$\nbe morphisms of finite type\nof regular noetherian schemes\nsatisfying the condition {\\rm (F)}.\nThen the following conditions are\nequivalent:\n\n{\\rm (1)}\n$(h,f)$ is $C$-acyclic.\n\n{\\rm (2)}\n$h$ is $C$-transversal\nand \n$f$ is $h^\\circ C$-acyclic.\n\\end{lm}\n\n\\proof{1.\nIdentify the kernel of\n$(1,f^*)\\colon\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p})\n\\to\nFT^*X|_{X_{{\\mathbf F}_p}}$\nwith the image of\nthe injection\n$(f^*,-1)\\colon \nFT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}\n\\to\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p})$.\nThen the inverse image\nin $\nFT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p}$ of\n$C\n\\times_{X_{{\\mathbf F}_p}}\n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p})\n\\subset\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nX_{{\\mathbf F}_p})$\nis the same as\nthe inverse image of\n$C\\subset T^*X$\nand the assertion follows.\n\n2.\nSince ${\\rm Ker}(h^*\nFT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p}\n\\to\nFT^*W|_{W_{{\\mathbf F}_p}})\n\\times 0\n\\subset\n{\\rm Ker}((h^*,f^*)\\colon\n(FT^*X|_{X_{{\\mathbf F}_p}}\n\\times_{X_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\to\nFT^*W|_{W_{{\\mathbf F}_p}})$,\nthe $C$-acyclicity\nof $(h,f)$ implies\nthe $C$-transversality of\n$h$.\nBy 1.,\nthe $h^\\circ C$-acyclicity\nof $f$ is equivalent to the condition that\nthe intersection of\n$h^\\circ C\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})$\nwith \n${\\rm Ker}(\nFT^*W|_{W_{{\\mathbf F}_p}}\n\\times_{W_{{\\mathbf F}_p}}\n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})\n\\to \nFT^*W|_{W_{{\\mathbf F}_p}})$\nis a subset of the\n$0$-section.\nThis condition is equivalent to the\n$C$-acyclicity\nof $(h,f)$\nsince \n$h^\\circ C\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})$\nis the image of \n$h^*C\\times_{W_{{\\mathbf F}_p}} \n(FT^*Y|_{Y_{{\\mathbf F}_p}}\n\\times_{Y_{{\\mathbf F}_p}}\nW_{{\\mathbf F}_p})$.\n\\qed\n\n}\n\n\n\\section{Micro-support}\\label{sms}\n\nWe fix a perfect field $k$ \nof 
characteristic $p>0$\nand a finite field \n$\\Lambda$ of characteristic $\\ell\\neq p$.\nWe will assume\nthat a regular noetherian scheme $X$\nover ${\\mathbf Z}_{(p)}$\nsatisfies the condition {\\rm (F)}\nin Definition {\\rm \\ref{dfFTX}}.\n\n\\subsection{Micro-support}\n\\begin{df}\\label{dfms}\nLet $X$ be a regular noetherian scheme \nover ${\\mathbf Z}_{(p)}$\nsatisfying the condition {\\rm (F)}\nin Definition {\\rm \\ref{dfFTX}} and\nlet $C$ be a closed conical subset\nof the FW-cotangent bundle \n$FT^*X|_{X_{{\\mathbf F}_p}}$.\nLet ${\\cal F}$\nbe a constructible complex\nof $\\Lambda$-modules\non $X$.\nWe say that ${\\cal F}$\nis micro-supported on $C$\nif the following conditions\n{\\rm (1)} and {\\rm (2)}\nare satisfied:\n\n{\\rm (1)}\nThe intersection of\nthe support ${\\rm supp}\\, {\\cal F}$\nwith the closed fiber $X_{{\\mathbf F}_p}$ is \na subset of the base $B(C)$.\n\n{\\rm (2)}\nEvery $C$-transversal separated morphism\n$h\\colon W\\to X$ of finite type of\nregular schemes\nis ${\\cal F}$-transversal\non a neighborhood of the closed\nfiber $W_{{\\mathbf F}_p}$.\n\\end{df}\n\nThis definition of micro-support\nis related to\n\\cite[Proposition 8.13]{CC}\nbut is different from\n\\cite[1.3]{Be}.\nWe discuss this point in\nthe Remark after Proposition \\ref{prtrla}.\nIt is a property on a neighborhood\nof $X_{{\\mathbf F}_p}$.\nIf $X_{\\mathbf Q}\n=X\\times_{{\\rm Spec}\\, {\\mathbf Z}_{(p)}}\n{\\rm Spec}\\, {\\mathbf Q}$\nis smooth over a field $K$\nof characteristic $0$,\nto cover $X_{\\mathbf Q}$,\none can use the micro-support\nof the restriction of ${\\cal F}$ to $X_{\\mathbf Q}$,\ndefined as a closed conical subset\nof the cotangent bundle\n$T^*X_{\\mathbf Q}\/K$.\n\n\n\\begin{lm}\\label{lmTX}\nLet $X$ be a regular noetherian scheme \nover ${\\mathbf Z}_{(p)}$\nsatisfying the condition {\\rm (F)}\nand let ${\\cal F}$\nbe a constructible complex\nof $\\Lambda$-modules.\n\n{\\rm 1.}\n${\\cal F}$ is micro-supported\non $FT^*X|_{X_{{\\mathbf F}_p}}$.\n\n{\\rm 2.}\nIf ${\\cal F}$ is locally constant\non a neighborhood of\nthe closed fiber $X_{{\\mathbf F}_p}$,\nthen \n${\\cal F}$ is micro-supported\non the $0$-section $F^*T^*_XX|_{X_{{\\mathbf F}_p}}$.\n\n{\\rm 3.}\nAssume that $X$ is a\nsmooth scheme over $k$.\nLet $C\\subset T^*X$ be\na closed conical subset\nand let $F^*C\\subset F^*T^*X\n=FT^*X$ \nbe the pull-back of $C$.\nThen, ${\\cal F}$ is micro-supported\non $C$ in the sense of\n{\\rm (\\cite[1.3]{Be}, \\cite[Definition 4.1]{CC})}\nif and only if \n${\\cal F}$ is micro-supported\non $F^*C$.\n\\end{lm}\n\nWe show the converse of\n2 in Corollary \\ref{cortrla}.\n\n\n\\proof{\n1.\nLet $h\\colon W\\to X$\nbe a separated\nmorphism of finite type of regular schemes.\nIf $h$ is $FT^*X|_{X_{{\\mathbf F}_p}}$-transversal,\nthen $h$ is smooth\non a neighborhood\nof $W_{{\\mathbf F}_p}$ by Lemma \\ref{lmTXC}.1.\nHence $h$ is ${\\cal F}$-transversal\non a neighborhood\nof $W_{{\\mathbf F}_p}$ by Lemma \\ref{lmPoi}.1.\n\n2.\nLet $h\\colon W\\to X$\nbe a separated\nmorphism of finite type \nof regular schemes.\nThen, since \n${\\cal F}$ is locally constant\non a neighborhood of\nthe closed fiber $X_{{\\mathbf F}_p}$,\n$h$ is ${\\cal F}$-transversal\non a neighborhood of $W_{{\\mathbf F}_p}$\nby Lemma \\ref{lmPoi}.2.\n\n3.\nLet $h\\colon W\\to X$\nbe a separated\nmorphism of finite type of regular schemes.\nThen, $h\\colon W\\to X$\nis a separated morphism \nof smooth schemes\nof finite type over $k$.\nThe morphism\n$h\\colon W\\to X$ is $F^*C$-transversal\nif and only if\n$h\\colon W\\to X$ is 
$C$-transversal.\nHence\nthe equivalence follows\nfrom \\cite[Proposition 8.13]{CC}.\n\\qed\n\n}\n\n\n\\begin{pr}\\label{prmcf}\nLet $X$ be a regular scheme \nover ${\\mathbf Z}_{(p)}$\nsatisfying the condition {\\rm (F)}\nand let\n${\\cal F}$ be a constructible\ncomplex of $\\Lambda$-modules.\nLet $C$ be a closed conical\nsubset of $FT^*X|_{X_{{\\mathbf F}_p}}$\nsuch that ${\\cal F}$\nis micro-supported on $C$.\n\n{\\rm 1.}\nLet $h\\colon W\\to X$\nbe a separated morphism\nof finite type of regular schemes.\nIf $h$ is $C$-transversal,\nthen $h$ is ${\\cal F}$-transversal\non a neighborhood of $W_{{\\mathbf F}_p}$\nand $h^*{\\cal F}$\nis micro-supported on $h^\\circ C$.\n\n{\\rm 2.}\nLet $f\\colon X\\to Y$\nbe a separated\nmorphism of finite type \nproper on the base $B(C)$\nof regular quasi-excellent\nnoetherian schemes\nsatisfying the condition {\\rm (F)}.\nThen $Rf_*{\\cal F}$\nis micro-supported on $f_\\circ C$.\n\\end{pr}\n\n\\proof{\n1.\nLet $g\\colon V\\to W$\nbe an $h^\\circ C$-transversal\nseparated morphism of finite type of\nregular noetherian schemes.\nThen, by Lemma \\ref{lmhC},\n$hg$ and $h$ are $C$-transversal.\nSince ${\\cal F}$ is\nmicro-supported on $C$,\n$hg$ and $h$ are \n${\\cal F}$-transversal\non neighborhoods of\n$V_{{\\mathbf F}_p}$ and of $W_{{\\mathbf F}_p}$ respectively.\nHence by Proposition \\ref{prhF}.1,\n$g$ is $h^*{\\cal F}$-transversal\non a neighborhood of\n$V_{{\\mathbf F}_p}$.\n\n2.\nLet $g\\colon V\\to Y$\nbe an $f_\\circ C$-transversal \nseparated morphism of finite type \nof regular noetherian schemes\nand let\n$$\\begin{CD}\nX@0$.\nLet $h\\colon W\\to X$ be a \nfinite surjective morphism of\nregular flat schemes\nof finite type over\n${\\cal O}_K$\nsuch that the morphism\n$W_K\\to X_K$\non the generic fiber is \\'etale.\nAssume that the reduced parts\n$D=X_{k,{\\rm red}}$\nand\n$E=W_{k,{\\rm red}}$\nof the closed fibers\nare irreducible and are smooth \nof dimension $\\geqq 1$\nover the residue field $k$.\n\nAssume that the following condition\nis satisfied:\n\n{\\rm (1)}\nThe cokernel of the canonical morphism\n$F\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n\\to\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E$\nof locally free \n${\\cal O}_E$-modules\nis locally free of rank $1$.\n\n{\\rm 1.}\nThe direct image\n$C= \\pi_\\circ FT^*_WW|_E\n\\subset FT^*X|_D$\nof the $0$-section\nis the image of the sub line bundle\n${\\rm Ker}(FT^*X|_D\\times_DE\n\\to FT^*W|_E)$\nof\n$FT^*X|_D\\times_DE$.\n\n{\\rm 2.}\nFurther assume \nthat the following condition is satisfied:\n\n{\\rm (2)}\nThe finite morphism\n$E\\to D$ is purely inseparable\nof degree $\\geqq 1$.\n\n\\noindent\nThen, for each closed point $x\\in D$\nand for the point $w\\in E$\nabove $x$,\nthere exists\na regular subscheme\n$Z\\subset W$ \nof codimension $1$\ncontaining $w$ and\nflat over ${\\cal O}_K$\nsatisfying the following conditions:\n\nThe composition\n$Z\\to W\\to X$ is unramified.\nThe pull-back $C\\times_{X_{{\\mathbf F}_p}}w\n\\subset FT^*X\\times_{X_{{\\mathbf F}_p}}\nw$ \nof the fiber at $x$\nequals the fiber of the\nkernel of the surjection\n$FT^*X\\times_{X_{{\\mathbf F}_p}}Z\n\\to FT^*Z$.\n\\end{lm}\n\n\\proof{\n1. 
\nSince the ${\\cal O}_E$-linear morphism\n$F\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n\\to\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E$\nof locally free \n${\\cal O}_E$-modules\nof the same rank has\nthe cokernel of rank 1,\nthe kernel is also locally free of\nrank 1.\nHence the assertion follows.\n\n\n2.\nLet $n=\\dim {\\cal O}_{X,x}$.\nSince $E\\to D$ is assumed\nto be purely inseparable,\nthe residue field\n$k(w)$ is a purely inseparable\nextension of a perfect field\n$k(x)$ and hence \nthe morphism $k(x)\\to k(w)$ is an isomorphism.\nBy the assumption on the\nrank of the cokernel\nand by Proposition \\ref{prdx},\nthe $k(x)$-linear mapping\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$\ninduced by\n${\\cal O}_{X,x}\\to\n{\\cal O}_{W,w}$ is of rank $n-1$.\n\nTake an element of\n${\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$\nnot contained in the image\nof ${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2$\nand take its lifting\n$f\\in {\\mathfrak m}_w$\nnot divisible by\na prime element $t$ defining \nthe divisor $E\\subset W$.\nThen,\na regular closed subscheme $Z$\nof codimension $1$ \nof a neighborhood \nof $w$ is defined \nby $f$.\nLet $z$ denote $w\\in W$\nregarded as a point of $Z$.\nSince $f$ is not divisible by $t$,\nwe may assume that $Z$ is flat\nover ${\\cal O}_K$.\n\nSince $\\bar f\\in\n{\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$ is \nnot contained in the image\nof ${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2$,\nthe induced morphism\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak m}_w\/((f)+\n{\\mathfrak m}_w^2)\n=\n{\\mathfrak m}_z\/\n{\\mathfrak m}_z^2$\nis a surjection.\nHence further shrinking \n$Z$ if necessary,\nwe may assume that\n$Z\\to X$ is unramified.\nSince the kernel of the surjection\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak m}_z\/\n{\\mathfrak m}_z^2$\nequals the kernel of\n${\\mathfrak m}_x\/\n{\\mathfrak m}_x^2\n\\to\n{\\mathfrak m}_w\/\n{\\mathfrak m}_w^2$,\nthe last condition \non the fibers is satisfied.\n\\qed\n\n}\n\n\\medskip\nWe show that some concrete\nexamples of Kummer coverings\nsatisfy the assumptions\nin Lemma \\ref{lmXW}.\nLet $K$ be a discrete valuation\nfield as in Lemma \\ref{lmXW}\ncontaining a primitive\n$p$-th root of 1.\nLet $X$ be a regular flat\nscheme of finite type\nover ${\\cal O}_K$\nand assume that the reduced part\n$D=X_{k,{\\rm red}}$\nis smooth over the residue field $k$.\nLet $L$ be the local field \nat the generic point of $D$\nand let $e={\\rm ord}_Lp\\geqq p-1$\nbe the absolute ramification index.\n\n\\begin{lm}\\label{lmKum}\nLet $\\pi \\in \\Gamma(X,{\\cal O}_X)$\nbe a uniformizer of the divisor $D\n=X_{k,{\\rm red}}\\subset X$\nand let $u \\in \\Gamma(X,{\\cal O}_X^\\times)$\nbe a unit.\nLet $1\\leqq n< \\dfrac{pe}{p-1}$\nbe an integer\ncongruent to $0$ or $1$\nmodulo $p$\nand set $n=pm$ or $n=pm+1$\nrespectively.\nIn the case $n=pm$,\nassume that $du$ defines locally\na part of a basis of $\\Omega^1_D$.\nDefine a Kummer covering\n$V\\to U=X_K$\nby $v^p=1+u\\pi^n$.\n\n\n{\\rm 1.}\nThe normalization $\\pi\\colon\nW\\to X$\nin $V$ is regular.\nThe reduced closed fiber\n$E=W_{k,{\\rm red}}$\nis smooth over $k$\nand the finite morphism\n$E\\to D$ is purely inseparable.\n\n\n{\\rm 2.}\nThe cokernel\n${\\rm Coker}(F\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n\\to\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E)$\nis an invertible ${\\cal O}_E$-module.\n\n{\\rm 3.}\nAssume $n=pm$.\nIf $e=m+1$,\nlet $\\pi'$ denote the uniformizer\n$p\/\\pi^m$.\nThen, the 
kernel of the canonical morphism\n$FT^*X|_D\\times_DE\\to\nFT^*W|_E$\nis a line bundle spanned\nby \n$$\\begin{cases}\nw(u)-u\\cdot w(\\pi')\n&\n\\text{ if $p=2$ and $e=m+1$},\n\\\\\nw(u)&\n\\text{ otherwise}.\n\\end{cases}$$\n\\end{lm}\n\n\\proof{\n{\\rm 1.}\nSince the assertion is local,\nwe may assume that\n$X={\\rm Spec}\\, A$ is affine.\nWe show that the normalization\n$B$ of $A$ is generated by \n$t=(v-1)\/\\pi^m$.\nBy the assumption\n$n<\\dfrac{ep}{p-1}$,\nwe have\n$e+m>pm$ and\nthe polynomial\n$(1+\\pi^mT)^p-1\n\\in A[T]$\nis divisible by $\\pi^{pm}$.\nDefine a monic polynomial\n$F\\in A[T]$\nby $1+\\pi^{pm}F=(1+\\pi^mT)^p$.\nSince\n$F\\equiv T^p\\bmod \n\\pi A$ and since $u$ is a unit,\nin the case $n=pm+1$,\nthe equation\n$F=\\pi u$ is an Eisenstein equation.\nIn the case $n=pm$,\nthe reduction of the equation\n$F=u$\nmodulo $\\pi A$\ngives $T^p=u$.\nIn this case $du$ is a part of\na basis of $\\Omega^1_D$\nby the assumption.\nHence \nby setting $v=1+\\pi^mt$\nwhere $t\\in B$ denotes the class of\n$T$,\nwe obtain\n$B=A[T]\/(F-u\\pi)$ \nor $B=A[T]\/(F-u)$ \nrespectively.\n\nThe reduced part $E$\nis defined by $t$ or $\\pi$\naccording to $n=pm+1$ or $n=pm$\nrespectively.\nHence $E$ is smooth over $k$\nand the finite morphism\n$E\\to D$ is purely inseparable\nof degree 1 or $p$\nrespectively.\n\n2.\nBy Corollary \\ref{corXZ},\nwe have a commutative diagram\n$$\\begin{CD}\n0@>>>\nF^*N_{D\/X}\n\\otimes_{{\\cal O}_D}\n{\\cal O}_E\n@>>>\nF\\Omega^1_X\n\\otimes_{{\\cal O}_X}\n{\\cal O}_E\n@>>>\nF^*\\Omega^1_D\n\\otimes_{{\\cal O}_D}\n{\\cal O}_E\n@>>>\n0\\\\\n@.@VVV@VVV@VVV@.\\\\\n0@>>>\nF^*N_{E\/W}\n@>>>\nF\\Omega^1_W\n\\otimes_{{\\cal O}_W}\n{\\cal O}_E\n@>>>\nF^*\\Omega^1_E\n@>>>\n0\n\\end{CD}$$\nof exact sequences of\nlocally free ${\\cal O}_E$-modules.\nIn the case $n=pm+1$,\nthe right vertical arrow\nis an isomorphism\nsince $E\\to D$ is an isomorphism.\nFurther\nthe left vertical arrow is $0$\nsince the ramification index is $p$.\nIn the case $n=pm$,\nthe left vertical arrow\nis an isomorphism\nsince the ramification index is $1$.\nFurther\nthe cokernel of the right vertical arrow is \nlocally free of rank 1\nsince $E\\to D$ is a purely inseparable\ncovering defined by $T^p=u$\nand $du$ is a part of a basis\nof $\\Omega^1_D$.\nHence the assertion follows.\n\n\n\n3.\nWe compute\nthe polynomial $F\n\\bmod \\pi^2$.\nRecall that we have\n$e+m>pm$.\nSince $e$ is divisible by\n$p-1$, the equality\n$e+m=pm+1$ holds\nif and only if\n$p=2$ and $e=m+1$.\nHence \nthe coefficients of $T^i$ \nfor $i=1,\\ldots, p-1$ in \nthe polynomial $F$\nare divisible by $\\pi^2$\nexcept $F=T^2+\n2\/\\pi^m\\cdot T$\nin the exceptional case.\n\nThus, except the exceptional case,\nwe have a congruence\n$F\\equiv T^p\n\\bmod \\pi^2$\nand hence the kernel is\nspanned by \n$w(u)$.\nIn the exceptional case,\nwe have\n$t^2+\\pi't=u$ for\n$\\pi'=2\/\\pi^m$.\nHence $w(u)$\nis sent to $t^2\\cdot w(\\pi')\n=u\\cdot w(\\pi')$.\n\\qed\n\n}\n\n\n\\begin{pr}\\label{prKum}\nLet $K$ be a discrete\nvaluation field of characteristic $0$\nsuch that\nthe residue field $k$\nis a perfect field of characteristic $p>0$.\nLet $X$ be a regular flat scheme\nof finite type over\n${\\cal O}_K$\nsuch that the reduced part\n$D=X_{k,{\\rm red}}$\nis irreducible and is smooth\nover the residue field $k$.\n\nLet ${\\cal F}_U$ be a locally constant\nconstructible sheaf of\n$\\Lambda$-modules on\nthe generic fiber $U=X_K$\nand let ${\\cal F}=j_!{\\cal F}_U$\nbe the $0$-extension\nfor the open immersion\n$j\\colon U\\to X$.\nLet $V\\to U$ be a 
finite\n\\'etale Galois covering \nof Galois group $G$ such that\nthe pull-back ${\\cal F}_V$\nis constant \nand let $\\pi\\colon W\\to X$ be\nthe normalization \nof $X$ in $V$.\n\nAssume that $W$ is regular\nand that\nthe reduced part\n$E=W_{k,{\\rm red}}$\nis also irreducible and smooth\nover the residue field $k$.\nAssume that the order of $G$\nis invertible in $\\Lambda$\nand that ${\\cal F}_U$ corresponds\nto a non-trivial\nirreducible representation $M$ of $G$.\n\n{\\rm 1.}\nThe canonical morphism\n${\\cal F}=j_!{\\cal F}_U\n\\to Rj_*{\\cal F}_U$\nis an isomorphism.\n\n{\\rm 2.}\nAssume that conditions\n{\\rm (1)} and {\\rm (2)} \nin Lemma {\\rm \\ref{lmXW}}\nare satisfied.\nThen, the singular support $SS{\\cal F}$\nequals the direct image\n$C=\\pi_\\circ FT^*_WW|_{W_k}$\nof the $0$-section.\n\\end{pr}\n\n\n\\proof{\n1.\nBy the assumption \nthat the order of $G$\nis invertible in $\\Lambda$\nand that $M$ is an irreducible\nrepresentation,\nthe locally constant sheaf\n${\\cal F}_U$\nis isomorphic to a direct summand\nof $\\pi_{K*}\\Lambda$\nwhere $\\pi_K\\colon V=W_K\\to U=X_K$\nis the restriction of $\\pi$.\n\nLet $j_W\\colon W_K\\to W$\nbe the open immersion\nof the generic fiber.\nSince $W$ is regular\nand the reduced part\nof the closed fiber\n$W_k$ is a regular divisor,\nwe have isomorphisms\n$\\Lambda\\to j_{W*}\\Lambda$,\n$\\Lambda_E(-1)\\to R^1 j_{W*}\\Lambda$\nand $R^qj_{W*}\\Lambda=0$\nfor $q\\neq 0,1$\nby the absolute purity \n\\cite[{\\sc Th\\'eor\\`eme 3.1.1}]{purete}.\nSimilarly,\nwe have isomorphisms\n$\\Lambda\\to j_*\\Lambda$,\n$\\Lambda_D(-1)\\to R^1 j_*\\Lambda$\nand $R^qj_*\\Lambda=0$\nfor $q\\neq 0,1$.\nSince $E\\to D$ induces a homeomorphism\non the \\'etale site by the assumption,\nthe canonical morphism\n$\\Lambda_D\\to \\pi_*\\Lambda_E$\nis an isomorphism.\nHence, for the cokernel\n${\\cal G}={\\rm Coker}\n(\\Lambda_X\\to \\pi_*\\Lambda_W)$,\nthe canonical morphisms\n$j_!j^*{\\cal G}\\to\n{\\cal G}\\to Rj_*j^*{\\cal G}$\nare isomorphisms.\n\nSince\n$M$ is a non-trivial irreducible\nrepresentation\nof a semi-simple algebra\n$\\Lambda[G]$,\nthe corresponding sheaf\n${\\cal F}$ is a direct summand\nof $j^*{\\cal G}$.\nHence the canonical morphism\n${\\cal F}=j_!{\\cal F}_U\n\\to Rj_*{\\cal F}_U$\nis an isomorphism.\n\n2.\nSince ${\\cal F}$ is a direct summand\nof $\\pi_*\\Lambda_W=\\Lambda_X\n\\oplus {\\cal G}$,\nby Proposition \\ref{prmcf}.2,\nthe constructible sheaf\n${\\cal F}$ is micro-supported on\n$C=\\pi_\\circ FT^*_WW|_{W_k}$.\n\nSuppose ${\\cal F}$\nis micro-supported on \na closed conical subset $C'$.\nIt suffices to prove $C\\subset C'$.\nLet $x\\in X_{{\\mathbf F}_p}$\nbe a closed point,\nlet $h\\colon Z\\to X$\nbe an unramified morphism\nas in Lemma \\ref{lmXW}\nand let $z\\in Z$ be\nthe unique point above $x$.\nSince $Z\\to X$ factors through\n$Z\\to W$,\nthe restriction \n${\\cal F}_{Z\\cap U}$\nis constant.\nHence the morphism\n$h$ is not\n${\\cal F}$-transversal\nby the contraposition\nof Proposition \\ref{prhF}.2\n(1)$\\Rightarrow$(2)\nand 1.\nSince ${\\cal F}$\nis micro-supported on $C'$,\nthe morphism\n$h$ is not $C'$-transversal,\non any open neighborhood of $z\\in Z$.\n\nThe kernel $L={\\rm Ker}\n(FT^*X\\times_{X_{{\\mathbf F}_p}}\nZ_{{\\mathbf F}_p}\\to FT^*Z)$\nis a line bundle\non $Z_{{\\mathbf F}_p}$.\nThe intersection $C'_1=h^*C'\n\\cap L\n\\subset FT^*X\\times_{X_{{\\mathbf F}_p}}\nZ_{{\\mathbf F}_p}$\nis a closed conical subset\nof $L$.\nLet $Z_1\n=\\{y\\in Z_{{\\mathbf F}_p}\\mid\nC'_{1,y}=L_y\\}$\nbe the image by the projection \nof 
the complement\n$C'_1\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} (C'_1\\cap Z_{{\\mathbf F}_p})$\nof the $0$-section.\nSince $C'_1\\subset L$ is a closed\nconical subset,\nthe image\n$Z_1\\subset\nZ_{{\\mathbf F}_p}$ is a closed subset.\nSince the restriction \n$Z\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z_1\\to X$ of $h$\nis $C'$-transversal,\nthe complement\n$Z\\raisebox{2.33pt}{~\\rule{6.4pt}{1.3pt}~} Z_1$ is not\nan open neighborhood of $z$.\nNamely,\nwe have $z\\in Z_1$\nand hence \n$C'_{1,z}=L_z$ \nis a subset of $C'_z$.\n\nSince \n$L_z=C_z=C_x\\times_xz$\nby the condition on $Z$, we get\n$C_x\\subset C'_x$\nfor each closed point $x\\in X_k$.\nThus we have $C\\subset C'$\nas required.\n\\qed\n\n}\n\n\n\\section{Preliminaries}\n\n\\subsection{Cayley algebra and compact Lie group of type ${\\rm G}_2$} \n\nLet $\\mathfrak{C}=\\{e_0 =1, e_1, e_2, e_3, e_4, e_5, e_6, e_7 \\}_{\\sR}$ be the division Cayley algebra. Since the multiplication and the inner product in $\\mathfrak{C}$ are well known, we omit them.\n\\vspace{1mm}\n\nThe connected compact Lie group of type ${\\rm G_2}$ is given by\n$$\nG_2 =\\{\\alpha \\in \\Iso_{\\sR}(\\mathfrak{C})\\,|\\, \\alpha(xy)=(\\alpha x) (\\alpha y) \\}\\vspace{1mm}.\n$$ \n\\subsection{Exceptional Jordan algebra and compact Lie group of type ${\\rm F}_4$} \n\nLet \n$\\mathfrak{J}(3,\\mathfrak{C} ) = \\{ X \\in M(3, \\mathfrak{C}) \\, | \\, X^* = X \\}$ be the \nexceptional Jordan algebra. In $\\mathfrak{J}(3,\\mathfrak{C} )$, the Jordan multiplication $X \\circ Y$, the \ninner product $(X,Y)$ and a cross multiplication $X \\times Y$, called the Freudenthal multiplication, are defined by\n$$\n\\begin{array}{c}\nX \\circ Y = \\dfrac{1}{2}(XY + YX), \\quad (X,Y) = \\tr(X \\circ Y),\n\\vspace{1mm}\\\\\nX \\times Y = \\dfrac{1}{2}(2X \\circ Y-\\tr(X)Y - \\tr(Y)X + (\\tr(X)\\tr(Y) \n- (X, Y))E), \n\\end{array}$$\nrespectively, where $E$ is the $3 \\times 3$ unit matrix. Moreover, we define the trilinear form $(X, Y, Z)$ and the determinant $\\det \\,X$ by\n$$\n(X, Y, Z)=(X, Y \\times Z),\\quad \\det \\,X=\\dfrac{1}{3}(X, X, X),\n$$\nrespectively, and briefly denote $\\mathfrak{J}(3, \\mathfrak{C})$\nby $\\mathfrak{J}$.\n\\vspace{1mm}\n\nThe connected compact Lie group of type ${\\rm F_4}$ is given by\n\\begin{align*}\n\tF_4 &= \\{\\alpha \\in \\Iso_{\\sR}(\\mathfrak{J}) \\, | \\, \\alpha(X \\circ Y) = \\alpha X \\circ \\alpha Y \\}\n\t\\\\[1mm]\n\t&= \\{\\alpha \\in \\Iso_{\\sR}(\\mathfrak{J}) \\, | \\, \\alpha(X \\times Y) = \\alpha X \\times \\alpha Y \\}. \n\\end{align*}\nThen we have naturally the inclusion $G_2 \\subset F_4$ as follows:\n\\begin{align*}\n\\varphi:G_2 \\to F_4,\\,\\,\\varphi(\\alpha)X=\\begin{pmatrix}\n\\xi_1 & \\alpha x_3 & \\ov{\\alpha x_2} \\\\\n\\ov{\\alpha x_3} & \\xi_2 & \\alpha x_1 \\\\ \n\\alpha x_2 & \\ov{\\alpha x_1} & \\xi_3\n\\end{pmatrix},\\,\\, X \\in \\mathfrak{J}.\n\\end{align*} \n\\subsection{Complex exceptional Jordan algebra and compact Lie group of type ${\\rm E}_6$} \nLet $\\mathfrak{J}(3,\\mathfrak{C})^C = \\{ X \\in M(3, \\mathfrak{C})^C \\, | \\, X^* = X \\}$ be the complexification of the exceptional Jordan algebra $\\mathfrak{J}$. In $\\mathfrak{J}(3,\\mathfrak{C})^C$, as in $\\mathfrak{J}$, we can also define the multiplication $X \\circ Y, X \\times Y$, the inner product $(X, Y)$, the trilinear form $(X, Y, Z)$ and the determinant $\\det \\, X$ in the same manner, and those have the same properties. 
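\n\nAs a quick check of the formulas for $X \\times Y$ and $\\det\\,X$ above (the computation below follows directly from the definitions and applies equally in $\\mathfrak{J}$ and $\\mathfrak{J}^C$), consider a diagonal element $X={\\rm diag}(\\xi_1,\\xi_2,\\xi_3)$. Then $\\tr(X)=\\xi_1+\\xi_2+\\xi_3$, $(X,X)={\\xi_1}^2+{\\xi_2}^2+{\\xi_3}^2$ and\n$$\nX \\times X = {\\rm diag}(\\xi_2\\xi_3,\\, \\xi_3\\xi_1,\\, \\xi_1\\xi_2), \\quad \\det\\,X=\\dfrac{1}{3}(X, X \\times X)=\\xi_1\\xi_2\\xi_3,\n$$\nso that on diagonal elements $\\det\\,X$ is the usual determinant.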
The algebra $\\mathfrak{J}(3,\\mathfrak{C} )^C$ is called the complex exceptional Jordan algebra, and we briefly denote $\\mathfrak{J}(3, \\mathfrak{C})^C$ by $\\mathfrak{J}^C$. \n\\vspace{1mm}\n\nThe connected compact Lie group of type ${\\rm E_6}$ is given by\n\\begin{align*}\n\t\tE_6 &= \\{\\alpha \\in \\Iso_C(\\mathfrak{J}^C) \\, | \\, \\det\\, \\alpha X = \\det\\, X, \\langle \\alpha X, \\alpha Y \\rangle = \\langle X, Y \\rangle \\}\n\t\t\\\\ \n\t\t &=\\{\\alpha \\in \\Iso_C(\\mathfrak{J}^C) \\, | \\,\\alpha X \\times \\alpha Y=\\tau\\alpha\\tau(X \\times Y) , \\langle \\alpha X, \\alpha Y \\rangle = \\langle X, Y \\rangle \\}\n\\end{align*}\nwhere $\\tau$ is the complex conjugation in $\\mathfrak{J}^C$: $\\tau(X+iY)=X-iY, \\,X, Y \\in \\mathfrak{J}$ and the Hermitian inner product $\\langle X, Y \\rangle$ is defined by $(\\tau X, Y)$.\n\n\\noindent Then we have naturally the inclusion $F_4 \\subset E_6$ as follows:\n\\begin{align*}\n \\varphi:F_4 \\to E_6,\\,\\,\\varphi(\\alpha)(X_1+iX_2)=(\\alpha X_1)+i(\\alpha X_2),\\,\\,X_1+iX_2 \\in \\mathfrak{J}^C, X_i \\in \\mathfrak{J}.\n\\end{align*}\n\n\\section{The inner automorphisms of order $3$ and the fixed-point subgroups by them}\\label{section 3}\n\nIn this section, we will rewrite the inner automorphisms of order $3$ on $G=G_2, F_4,E_6$ and the fixed-point subgroups of $G$ by them, which were realized and determined in \\cite{iy1}, in association with the involutive inner automorphisms. However, the detailed proofs are omitted.\n\n\\subsection{In $G_2$}\\label{subsection 3.1}\n\nLet $\\mathfrak{C}=\\H \\oplus \\H e_4$ be the Cayley division algebra, where $\\H$ is the field of quaternions. Since the multiplication, the conjugation and the inner product in $\\mathfrak{C}=\\H \\oplus \\H e_4$ are well known, these are omitted. If necessary, refer to \\cite{miya1},\\cite{realization G_2} and \\cite{iy0}.\n\nWe define an $\\R$-linear transformation $\\gamma$ of $\\mathfrak{C}$ by \n\\begin{align*}\n\t\t\\gamma(m+ne_4)=m-ne_4, \\,\\, m+ne_4 \\in \\H \\oplus \\H e_4 = \\mathfrak{C}.\n\\end{align*}\nThen we have that $\\gamma \\in G_2$ and $\\gamma^2 =1$. 
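\nAlthough the multiplication of $\\mathfrak{C}=\\H \\oplus \\H e_4$ is omitted here, we note how the membership $\\gamma \\in G_2$ is checked; with a standard Cayley-Dickson doubling rule such as $(a+be_4)(c+de_4)=(ac-\\ov{d}b)+(da+b\\ov{c})e_4$ (this particular rule is only an assumption for illustration; see \\cite{iy0} for the one actually used), a direct computation gives\n$$\n\\gamma\\big((a+be_4)(c+de_4)\\big)=(ac-\\ov{d}b)-(da+b\\ov{c})e_4=\\big(\\gamma(a+be_4)\\big)\\big(\\gamma(c+de_4)\\big),\n$$\nand the conclusion does not depend on the particular doubling convention, since the $\\H$-part of a product is even and the $\\H e_4$-part is odd under $(b,d) \\mapsto (-b,-d)$.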
Hence $\\gamma$ induces the involutive inner automorphism $\\tilde{\\gamma}$ on $G_2: \\tilde{\\gamma}(\\alpha)=\\gamma\\alpha\\gamma, \\alpha \\in G_2$, so we have the following well-known result.\n\n\\begin{proposition}\\label{proposition 3.1.1}\n\tThe group $(G_2)^\\gamma$ is isomorphic to the group $(Sp(1) \\times Sp(1))\/\\Z_2${\\rm:} $(G_2)^\\gamma \\cong (Sp(1) \\times Sp(1))\/\\Z_2, $ $ \\Z_2=\\{ (1,1), (-1,-1) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $\\varphi_{{}_{G_2,\\gamma}}: Sp(1) \\times Sp(1) \\to (G_2)^\\gamma$ by \n\t\\begin{align*}\n\t\\varphi_{{}_{G_2,\\gamma}}(p, q)(m+n e_4)=qm \\ov{q}+(pn \\ov{q}) e_4, \\,\\,\\,m+n e_4 \\in \\H \\oplus \\H e_4 =\\mathfrak{C}.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 1.10.1]{iy0} for details).\n\\end{proof}\n\nLet $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1) \\subset \\C \\subset \\H \\subset \\mathfrak{C}$. We define an $\\R$-linear transformation $\\gamma_3$ of $\\mathfrak{C}$ by\n\\begin{align*}\n\t\t\\gamma_3(m+ne_4)=m+(\\bm{\\omega} n)e_4, \\,\\,m+ne_4 \\in \\H \\oplus \\H e_4=\\mathfrak{C}.\n\\end{align*}\nThen, since $\\gamma_3$ is expressed as $\\gamma_3=\\varphi_{{}_{G_2,\\gamma}}(\\bm{\\omega},1)$ using the mapping $\\varphi_{{}_{G_2,\\gamma}}$ above, it is clear that $\\gamma_3 \\in G_2$ and $(\\gamma_3)^3=1$. Hence $\\gamma_3$ induces the inner automorphism $\\tilde{\\gamma}_3$ of order $3$ on $G_2: \\tilde{\\gamma}_3(\\alpha)={\\gamma_3}^{-1}\\alpha\\gamma_3, \\alpha \\in G_2$. \n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.1.2}\n\tThe group $(G_2)^{\\gamma_3}$ is isomorphic to the group $(U(1) \\times Sp(1))\/\\Z_2${\\rm:} $(G_2)^{\\gamma_3} \\cong (U(1) \\times Sp(1))\/\\Z_2, \\Z_2=\\{ (1,1), (-1,-1) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $U(1)=\\{a \\in \\C \\,|\\,\\ov{a}a=1 \\} \\subset Sp(1)$, where $\\C=\\{x+ye_1\\,|\\, x,y \\in \\R \\}$. Then we define a mapping $\\varphi_{{}_{G_2,\\gamma_3}}:U(1) \\times Sp(1) \\to (G_2)^{\\gamma_3}$ by the restriction of the mapping $\\varphi_{{}_{G_2,\\gamma}}$ (Proposition \\ref{proposition 3.1.1}). This mapping induces the required isomorphism (see \\cite[Theorem 1.2]{iy1} for details).\n\\end{proof}\n\nThus, since the group $(G_2)^{\\gamma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.1.2}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $G_2\/((U(1) \\times Sp(1))\/\\Z_2)$.\n\\vspace{2mm}\n\nLet $x = m_0 + m_1e_2 + m_2e_4 + m_3e_6 \\in \\mathfrak{C}, m_i \\in \\C$. Then we associate such elements $x$ of $\\mathfrak{C}$ with the elements \n\\begin{align*}\n\t\t\tm_0 + \\begin{pmatrix}\n\t\t\tm_1 \\\\\n\t\t\tm_2 \\\\\n\t\t\tm_3\n\t\t\t\\end{pmatrix}(=:m_0+\\m)\n\\end{align*}\nof $\\C \\oplus \\C^3$ and we can define a multiplication, a conjugation and an inner product in $\\C \\oplus \\C^3$ corresponding to the same ones in $\\mathfrak{C}$ (see \\cite[Subsection 1.5]{iy0} for details). Hence we have that $\\C \\oplus \\C^3$ is isomorphic to $\\mathfrak{C}$ as an algebra. Hereafter, if necessary, we identify $\\mathfrak{C}$ with $\\C \\oplus \\C^3$: $\\mathfrak{C}=\\C \\oplus \\C^3$. 
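\n\nWe record a computation used repeatedly: the element $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1$ above is a primitive cube root of $1$ in $\\C$. Indeed, since ${e_1}^2=-1$, a direct computation gives\n$$\n{\\bm{\\omega}}^2=\\dfrac{1}{4}-\\dfrac{\\sqrt{3}}{2}e_1-\\dfrac{3}{4}=-\\dfrac{1}{2}-\\dfrac{\\sqrt{3}}{2}e_1=\\ov{\\bm{\\omega}}, \\quad {\\bm{\\omega}}^3=\\bm{\\omega}\\,\\ov{\\bm{\\omega}}=1,\n$$\nwhich underlies the relation $(\\gamma_3)^3=1$ above and the relation $(w_3)^3=1$ below.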
\n\nAgain let $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1) \\subset \\C \\subset \\H \\subset \\mathfrak{C}$. We define an $\\R$-linear transformation $w_3$ of $\\mathfrak{C}=\\C \\oplus \\C^3$ by\n\\begin{align*}\n\t\tw_3(m_0+\\m)=m_0+\\bm{\\omega} \\m, \\,\\,m_0+\\m \\in \\C \\oplus \\C^3=\\mathfrak{C}.\n\\end{align*}\nThen we have that $w_3 \\in G_2$ (\\cite[Proposition 1.4]{iy1}) and $(w_3)^3=1$. Hence $w_3$ induces the inner automorphism $\\tilde{w}_3$ of order $3$ on $G_2$: $\\tilde{w}_3(\\alpha)={w_3}^{-1}\\alpha w_3, \\alpha \\in G_2$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.1.3}\n\tThe group $(G_2)^{w_3}$ is isomorphic to the group $SU(3)${\\rm :} $(G_2)^{w_3} \\cong SU(3)$.\n\\end{theorem}\n\\begin{proof}\nWe define a mapping $\\varphi_{{}_{G_2,w_3}}: SU(3) \\to (G_2)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{G_2,w_3}}(A)(m_0+\\m)=m_0+A\\m, \\,\\,m_0+\\m \\in \\C \\oplus \\C^3=\\mathfrak{C}.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 1.6]{iy1} for details).\n\\end{proof}\n\nThus, since the group $(G_2)^{w_3}$ is connected, together with the result of Theorem \\ref{theorem 3.1.3}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $G_2\/SU(3)$. As is well known, this space is homeomorphic to a $6$-dimensional sphere $S^6$: $G_2\/SU(3) \\simeq S^6$. 
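\n\nThat the image of $\\varphi_{{}_{G_2,w_3}}$ lies in the fixed-point subgroup $(G_2)^{w_3}$ can be seen directly: the entries of $A \\in SU(3)$ lie in the commutative field $\\C$, so that\n$$\n(\\varphi_{{}_{G_2,w_3}}(A)w_3)(m_0+\\m)=m_0+A(\\bm{\\omega}\\m)=m_0+\\bm{\\omega}(A\\m)=(w_3\\varphi_{{}_{G_2,w_3}}(A))(m_0+\\m),\n$$\nthat is, ${w_3}^{-1}\\varphi_{{}_{G_2,w_3}}(A)w_3=\\varphi_{{}_{G_2,w_3}}(A)$. The non-trivial part of Theorem \\ref{theorem 3.1.3} is that every element of $(G_2)^{w_3}$ arises in this way.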
\n\\vspace{2mm}\n\nThe following lemma is useful to determine the structure of the groups $G^{\\sigma_3} \\cap G^{\\tau_3}$ in $G_2$.\n\n\\begin{lemma}\\label{lemma 3.1.4}\n\t{\\rm (1)} The mapping $\\varphi_{{}_{G_2,\\gamma_3}}:U(1) \\times Sp(1) \\to (G_2)^{\\gamma_3}$ of \\,Theorem {\\rm \\ref{theorem 3.1.2}} satisfies the relational formulas \n\t\\begin{align*}\n \t\\gamma_3&=\\varphi_{{}_{G_2,\\gamma_3}}(\\bm{\\omega},1),\n \t\\\\\n \tw_3&=\\varphi_{{}_{G_2,\\gamma_3}}(1, \\ov{\\bm{\\omega}}),\n\t\\end{align*}\n \twhere $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$.\n\\vspace{1mm}\n\n\t{\\rm (2)} The mapping $\\varphi_{{}_{G_2,w_3}}:SU(3) \\to (G_2)^{w_3}$ of \\,Theorem {\\rm \\ref{theorem 3.1.3}} satisfies the relational formulas\n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{G_2,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})),\n\t\\\\\n\tw_3&=\\varphi_{{}_{G_2,w_3}}(\\bm{\\omega}E),\n\t\\end{align*}\n\twhere $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$. \n\\end{lemma}\n\\begin{proof}\n\t(1), (2) By straightforward computations we obtain the formulas above. \n\\end{proof}\n\n\\subsection{In $F_4$}\\label{subsection 3.2}\n\nLet $\\mathfrak{J}$ be the exceptional Jordan algebra. As is well known, the elements $X$ of $\\mathfrak{J}$ take the form \n$$\nX = \\begin{pmatrix}\n\\xi_1 & x_3 & \\ov{x_2} \\\\\n\\ov{x_3} & \\xi_2 & x_1 \\\\ \nx_2 & \\ov{x_1} & \\xi_3\n\\end{pmatrix},\\,\\, \\xi_i \\in \\R,\\, x_i \\in \\mathfrak{C},\\, i=1, 2, 3.\n$$\nHereafter, in $\\mathfrak{J}$, we use the following notations:\n\\begin{align*}\nE_1 &= \\left(\\begin{array}{ccc}\n1 & 0 & 0 \\\\\n0 & 0 & 0 \\\\\n0 & 0 & 0\n\\end{array}\n\\right), \\,\\,\\,\\,\\,\\,\\,\\,\nE_2 = \\left(\\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 0\n\\end{array}\n\\right), \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\nE_3 = \\left(\\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & 0 & 0 \\\\\n0 & 0 & 1\n\\end{array}\n\\right), \n\\\\[2mm]\nF_1 (x) &= \\left(\\begin{array}{ccc}\n0 & 0 & 0 \\\\\n0 & 0 & x \\\\\n0 & \\ov{x} & 0\n\\end{array}\n\\right), \\,\\,\nF_2(x) = \\left(\\begin{array}{ccc}\n0 & 0 & \\ov{x} \\\\\n0 & 0 & 0 \\\\\nx & 0 & 0\n\\end{array}\n\\right), \\,\\,\nF_3 (x) = \\left(\\begin{array}{ccc}\n0 & x & 0 \\\\\n\\ov{x} & 0 & 0 \\\\\n0 & 0 & 0\n\\end{array}\n\\right).\n\\end{align*}\n\\vspace{1mm}\n\nWe define an $\\R$-linear transformation $\\gamma$ of $\\mathfrak{J}$ by\n$$\n\\gamma X= \\begin{pmatrix} \\xi_1 & \\gamma x_3 & \\ov{\\gamma x_2} \\\\\n\\ov{\\gamma x_3} & \\xi_2 & \\gamma x_1 \\\\\n\\gamma x_2 & \\ov{\\gamma x_1} & \\xi_3 \\end{pmatrix}\n,\\,\\,X \\in \\mathfrak{J},\n$$\nwhere $\\gamma$ on the right-hand side is the same one as $\\gamma \\in G_2$. Then we have that $\\gamma \\in F_4$ and $\\gamma^2 =1$. Hence $\\gamma$ induces the involutive inner automorphism $\\tilde{\\gamma}$ of $F_4{\\rm :}\\,\\tilde{\\gamma}(\\alpha)=\\gamma\\alpha\\gamma, \\alpha \\in F_4$.
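\n\nIn this notation, a general element of $\\mathfrak{J}$ decomposes as $X=\\xi_1E_1+\\xi_2E_2+\\xi_3E_3+F_1(x_1)+F_2(x_2)+F_3(x_3)$, and the action of $\\gamma$ is computed entrywise from its action on $\\mathfrak{C}=\\H \\oplus \\H e_4$: writing $x_i=a_i+b_ie_4, a_i, b_i \\in \\H$, we have\n$$\n\\gamma\\Big(\\sum_{i=1}^3\\xi_iE_i+\\sum_{i=1}^3F_i(a_i+b_ie_4)\\Big)=\\sum_{i=1}^3\\xi_iE_i+\\sum_{i=1}^3F_i(a_i-b_ie_4).\n$$\nThat is, $\\gamma$ fixes the $\\H$-part of each entry and reverses the sign of the $\\H e_4$-part; this is exactly the form $\\gamma(M+\\a)=M-\\a$ under the identification $\\mathfrak{J}=\\mathfrak{J}(3, \\H) \\oplus \\H^3$ introduced below.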
Hence $\\gamma$ induce involutive inner automorphism $\\tilde{\\gamma}$ of $F_4{\\rm :}\\,\\tilde{\\gamma}(\\alpha)=\\gamma\\alpha\\gamma, \\alpha \\in F_4$.\n\\vspace{1mm}\n\nHere, we associate the elements $X$ of $\\mathfrak{J}$ with the elements \n\\begin{align*}\n\\begin{pmatrix}\n\\xi_1 & m_3 & \\ov{m_2} \\\\\n\\ov{m_3} & \\xi_2 & m_1 \\\\ \nm_2 & \\ov{m_1} & \\xi_3\n\\end{pmatrix}\n+ (\\a_1, \\a_2, \\a_3)(=:M + \\a) \n\\end{align*}\nof $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ and we can define a multiplication, a conjugation and an inner product in $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ corresponding to the same ones in $\\mathfrak{J}$ (see \\cite[Subsection 2.11]{iy0} in detail). \nHence we have that $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ is isomorphic to the exceptional Jordan algebra $\\mathfrak{J}$ as algebra. From now on, if necessary we identify $\\mathfrak{J}$ with $\\mathfrak{J}(3, \\H) \\oplus \\H^3$: $\\mathfrak{J}=\\mathfrak{J}(3, \\H) \\oplus \\H^3$.\nNote that the action to $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ of $\\gamma$ is as follows.\n\\begin{align*}\n\t\t\\gamma(M+\\a)=M-\\a,\\,\\,M+\\a \\in \\mathfrak{J}(3, \\H) \\oplus \\H^3=\\mathfrak{J}.\n\\end{align*}\n\nThen we have the following well-known result.\n\n\\begin{proposition}\\label{proposition 3.2.1\n\tThe group $(F_4)^\\gamma$ is isomorphic to the group $(Sp(1) \\times Sp(3))\/\\Z_2${\\rm:} $(F_4)^\\gamma \\cong (Sp(1) \\times Sp(3))\/\\Z_2, \\,$ $\\Z_2 =\\{(1, E), (-1, -E) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $\\varphi_{{}_{F_4,\\gamma}}: Sp(1) \\times Sp(3) \\to (F_4)^\\gamma$ by\n\t$$\n\t\\varphi_{{}_{F_4,\\gamma}}(p, A)(M+\\a)=AMA^* +p\\a A^*,\\,\\,\\, M+\\a \\in \\mathfrak{J}(3, \\H) \\oplus \\H^3=\\mathfrak{J}.\n\t$$\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 2.11.2]{iy0} in detail).\n\\end{proof}\n\\vspace{1mm}\n\nLet $\\gamma_3 \\in G_2$ be the $\\R$-linear transformation of $\\mathfrak{C}$. Using the inclusion $G_2 \\subset F_4$, $\\gamma_3$ is naturally extended to the $\\R$-linear transformation of $\\mathfrak{J}$. The explicit form of \n$\\gamma_3$ as action to $\\mathfrak{J}$ is as follows.\n\\begin{align*}\n\t\t\\gamma_3 X=\n\t\t\\begin{pmatrix} \\xi_1 & \\gamma_3 x_3 & \\ov{\\gamma_3 x_2} \\\\\n\t\t\\ov{\\gamma_3 x_3} & \\xi_2 & \\gamma_3 x_1 \\\\\n\t\t\\gamma_3 x_2 & \\ov{\\gamma_3 x_1} & \\xi_3 \n\t\t\\end{pmatrix},\\,\\,X \\in \\mathfrak{J},\n\\end{align*}\nwhere $\\gamma_3$ on the right hand side is the same one as $\\gamma_3 \\in G_2$. Needless to say, $\\gamma_3 \\in F_4$ and $(\\gamma_3)^3=1$. Hence $\\gamma_3$ induces the automorphism $\\tilde{\\gamma}_3$ of order $3$ on $F_4$: $\\tilde{\\gamma}_3(\\alpha)={\\gamma_3}^{-1}\\alpha\\gamma_3, \\alpha \\in F_4$. Note that the action to $\\mathfrak{J}(3, \\H) \\oplus \\H^3$ of $\\gamma_3$ is as follows.\n\\begin{align*}\n\\gamma_3(M+\\a)=M+\\bm{\\omega}\\a,\\,\\,M+\\a \\in \\mathfrak{J}(3, \\H) \\oplus \\H^3=\\mathfrak{J}.\n\\end{align*}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.2.2\n\tThe group $(F_4)^{\\gamma_3}$ is isomorphic to the group $(U(1) \\times Sp(3))\/\\Z_2$ {\\rm :} $(F_4)^{\\gamma_3} \\cong (U(1) \\times Sp(3))\/\\Z_2, \\Z_2=\\{(1,E), \n\t(-1,-E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tAs in the proof of Theorem \\ref{theorem 3.1.2}, let $U(1)=\\{a \\in \\C \\,|\\,\\ov{a}a=1 \\} \\subset Sp(1)$. 
We define a mapping $\\varphi_{{}_{F_4,\\gamma_3}}:U(1) \\times Sp(3) \\to (F_4)^{\\gamma_3}$ by the restriction of the mapping $\\varphi_{{}_{F_4,\\gamma}}$ (Proposition \\ref{proposition 3.2.1}). This mapping induces the required isomorphism (see \\cite [Theorem 2.2]{iy1} in detail).\t\n\\end{proof}\n\nThus, since the group $(F_4)^{\\gamma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.2.2}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $F_4\/((U(1) \\times Sp(3))\/\\Z_2)$.\n\\vspace{1mm}\n\nWe define an $\\R$-linear transformation $\\sigma$ of $\\mathfrak{J}$ by\n\\begin{align*}\n\\sigma X= \\begin{pmatrix} \\xi_1 & -x_3 & -\\ov{x_2} \\\\\n-\\ov{x_3} & \\xi_2 & x_1 \\\\\n-x_2 & \\ov{x_1} & \\xi_3 \\end{pmatrix}\n,\\,\\,X \\in \\mathfrak{J},\n\\end{align*}\nThen we have that $\\sigma \\in F_4$ and $\\sigma^2 =1$. Hence $\\sigma$ induce involutive inner automorphism $\\tilde{\\sigma}$ on $F_4{\\rm :}\\,\\tilde{\\sigma}(\\alpha)=\\sigma\\alpha\\sigma, \\alpha \\in F_4$.\n\\vspace{1mm}\n\nThen we have the following well-known result.\n\n\\begin{proposition}\\label{proposition 3.2.3\n\tThe group $(F_4)^\\sigma$ is isomorphic to the group $Spin(9)${\\rm:}$(F_4)^\\sigma \\!\\cong \\!Spin(9)$.\n\\end{proposition}\n\\begin{proof}\n\tFrom \\cite[Thorem 2.7.4]{iy0}\n\t, we have $(F_4)_{E_1} \\cong Spin(9)$, so by proving that $(F_4)^\\sigma \\cong (F_4)_{E_1}$ (\\cite[Thorem 2.9.1]{iy0}) we have the required isomorphism (see \\cite[Sections 2.7, 2.9 ]{iy0} in detail).\n\\end{proof}\n\\vspace{1mm}\n\nLet $U(1)=\\{a \\in \\C \\,|\\,\\ov{a}a=1 \\}$. For $a \\in U(1)$, we define an $\\R$-linear transformation $D_a$ of $\\mathfrak{J}$ by\n\\begin{align*}\n\t\tD_a X= \n\t\t\\begin{pmatrix} \\xi_1 & x_3 a & \\ov{ax_2} \\\\\n\t\t\\ov{x_3 a} & \\xi_2 & \\ov{a}x_1\\ov{a} \\\\\n\t\ta x_2 & a\\ov{x_1}a & \\xi_3 \n\t\t\\end{pmatrix},\\,\\, X \\in \\mathfrak{J}.\n\\end{align*}\nThen, since $D_a=\\varphi_{{}_{F_4,\\gamma}}(1,\\diag(1,\\ov{a},a))$, we have that $D_a \\in F_4$. Hence, by corresponding $a \\in U(1)$ to $D_a \\in F_4$, $U(1)$ is embedded into $F_4$.\nIn addition, we can express $\\sigma$ defined above by $D_{-1}$: $\\sigma=D_{-1}$.\n\nLet $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$. Then we define an $\\R$-linear transformation $\\sigma_3$ of $\\mathfrak{J}$ by\n\\begin{align*}\n\t\t\\sigma_3X= \n\t\t\\begin{pmatrix} \\xi_1 & x_3 \\bm{\\omega} & \\ov{\\bm{\\omega} x_2} \\\\\n\t\t\\ov{x_3 \\bm{\\omega}} & \\xi_2 & \\ov{\\bm{\\omega}}x_1\\ov{\\bm{\\omega}} \\\\\n\t\t\\bm{\\omega} x_2 & \\bm{\\omega}\\ov{x_1}\\bm{\\omega} & \\xi_3 \n\t\t\\end{pmatrix},\\,\\, X \\in \\mathfrak{J}.\n\\end{align*}\nNeedless to say, since $\\sigma_3=D_\\omega=\\varphi_{{}_{F_4,\\gamma}}(1,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))$, we have that $\\sigma_3 \\in F_4$. Hence $\\sigma_3$ induces the automorphism $\\tilde{\\sigma}_3$ of order $3$ on $F_4$: $\\tilde{\\sigma}_3(\\alpha)={\\sigma_3}^{-1}\\alpha\\sigma_3, \\alpha \\in F_4$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.2.4\n\tThe group $(F_4)^{\\sigma_3}$ is isomorphic to the group $(Spin(2) \\times Spin(7))\/\\Z_2${\\rm:} $(F_4)^{\\sigma_3} \\cong (Spin(2) \\times Spin(7))\/\\Z_2, \\Z_2=\\{(1,1), (\\sigma,\\sigma)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $Spin(2)$ as the group $\\{D_a \\in F_4 \\,|\\,a \\in U(1) \\}$ defined above which is isomorphic to the group $U(1)$ and $Spin(7)$ as the subgroup $(F_4)_{E_1, F_1(1),F_1(e_1)}$ of $F_4$ (cf. 
\cite[Proposition 2.9 (1)]{iy2}, \cite[Subsection 2.2]{iy1}). We define a mapping $\varphi_{{}_{F_4,\sigma_3}}: Spin(2) \times Spin(7) \to (F_4)^{\sigma_3}$ by
	\begin{align*}
			\varphi_{{}_{F_4,\sigma_3}}(D_a, \beta)=D_a \beta.
	\end{align*}	
	This mapping induces the required isomorphism (see \cite[Lemmas 2.5, 2.6, Theorem 2.7]{iy1} for details).
\end{proof}

Thus, since the group $(F_4)^{\sigma_3}$ is connected, together with the result of Theorem \ref{theorem 3.2.4}, we have an exceptional $\varmathbb{Z}_3$-symmetric space $F_4/((Spin(2) \times Spin(7))/\Z_2)$.
\vspace{2mm}	

We define an $\R$-linear transformation $w_3$ of $\mathfrak{J}$ by
\begin{align*}
w_3X= 
\begin{pmatrix} \xi_1 & w_3 x_3 & \ov{w_3 x_2} \\
\ov{w_3 x_3} & \xi_2 & w_3 x_1 \\
w_3 x_2 & \ov{w_3 x_1} & \xi_3 
\end{pmatrix},\,\, X \in \mathfrak{J},
\end{align*}
where the $w_3$ on the right-hand side is the same as $w_3 \in G_2$. Needless to say, $w_3 \in F_4$ and $(w_3)^3=1$. Hence $w_3$ induces the automorphism $\tilde{w}_3$ of order $3$ on $F_4$: $\tilde{w}_3(\alpha)={w_3}^{-1}\alpha w_3, \alpha \in F_4$.

We associate the elements $X$ of $\mathfrak{J}$ with the elements 
\begin{align*}
		\begin{pmatrix}
		\xi_1 & c_3 & \ov{c_2} \\
		\ov{c_3} & \xi_2 & c_1 \\ 
		c_2 & \ov{c_1} & \xi_3
		\end{pmatrix} +
		\begin{pmatrix}
		 & & \\
		\m_1 \!\!\!& \m_2 \!\!\!& \m_3 \\ 
		 & & 
		\end{pmatrix}(=:X_{\bm{C}}+M)
\end{align*}
of $\mathfrak{J}(3,\C) \oplus M(3,\C)$, where $\m_i \in \C^3$, 
and we can define a multiplication, a conjugation and an inner product in 
$\mathfrak{J}(3, \C) \oplus M(3,\C)$ corresponding to the same ones in $\mathfrak{J}$ (see \cite[Subsection 2.12]{iy0} for details). Hence we have that $\mathfrak{J}(3, \C) \oplus M(3,\C)$ is isomorphic to $\mathfrak{J}$ as an algebra. Hereafter, we identify $\mathfrak{J}$ with $\mathfrak{J}(3, \C) \oplus M(3,\C)$ whenever necessary: $\mathfrak{J}=\mathfrak{J}(3, \C) \oplus M(3,\C)$. 
Note that using $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in \\C$, the action to $\\mathfrak{J}=\\mathfrak{J}(3, \\C) \\oplus M(3,\\C)$ of $w_3$ is as follows.\n\\begin{align*}\nw_3(X_{\\bm{C}}+M)=X_{\\bm{C}}+\\bm{\\omega} M,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3, \\C) \\oplus M(3,\\C)=\\mathfrak{J}.\n\\end{align*}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.2.5\n\t\tThe group $(F_4)^{w_3}$ is isomorphic to the group $(SU(3) \\times SU(3))\/\\Z_3${\\rm :} $(F_4)^{w_3} \\cong (SU(3) \\times SU(3))\/\\Z_3, \\Z_3=\\{(E,E),(\\bm{\\omega} E,\\bm{\\omega} E),({\\bm{\\omega}}^{-1}E,{\\bm{\\omega}}^{-1}E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tWe define a mapping $\\varphi_{F_4,w_3}:SU(3) \\times SU(3) \\to (F_4)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\t\\varphi_{F_4,w_3}(B, A)(X_{\\bm{C}}+M)=AX_{\\bm{C}}A^* + BMA^*,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3, \\C) \\oplus M(3,\\C)=\\mathfrak{J}.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 2.9]{iy1} in detail).\n\\end{proof}\n\nThus, since the group $(F_4)^{w_3}$ is connected, together with the result of Theorem \\ref{theorem 3.2.5}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $F_4\/((SU(3) \\times SU(3))\/\\Z_3)$.\n\\vspace{1mm}\n\nAs in Section 3.1, the following lemma are useful to determine the structure of a group $G^{\\sigma_3} \\cap G^{\\tau_3}$ in $F_4$.\n\n\\begin{lemma}\\label{lemma 3.2.6\n\t{\\rm (1)} The mapping $\\varphi_{{}_{F_4,\\gamma_3}}:U(1) \\times Sp(3) \\to (G_2)^{\\gamma_3}$ of \\,Theorem {\\rm \\ref{theorem 3.2.2}} satisfies the relational formulas \n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{F_4,\\gamma_3}}(\\bm{\\omega},E), \n\t\\\\\n\t\\sigma_3&=\\varphi_{{}_{F_4,\\gamma_3}}(1,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})),\n\t\\\\\n\tw_3&=\\varphi_{{}_{F_4,\\gamma_3}}(1, \\ov{\\bm{\\omega}}E), \n\t\\end{align*}\n where $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$.\n\t\\vspace{1mm}\n\t\n\t{\\rm (2)} The mapping $\\varphi_{{}_{F_4,w_3}}:SU(3)\\times SU(3) \\to (F_4)^{w_3}$ of \\,Theorem {\\rm \\ref{theorem 3.2.5}} satisfies the relational formulas\n\t\\begin{align*}\n\t\\gamma_3&=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}),E), \n\t\\\\\n\t\\sigma_3&=\\varphi_{{}_{F_4,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\\\\\n\tw_3&=\\varphi_{{}_{F_4,w_3}}(\\bm{\\omega}E,E), \n\t\\end{align*}\n where $\\bm{\\omega}=-(1\/2)+(\\sqrt{3}\/2)e_1 \\in U(1)$. \t\n\\end{lemma}\n\\begin{proof}\n\t(1), (2) By doing straightforward computation we obtain the results above. \n\\end{proof}\n\n\\subsection{In $E_6$}\\label{subsection 3.3\n\nLet $\\gamma, \\gamma_3 \\in G_2 \\subset F_4$, and using the inclusion $F_4 \\subset E_6$, \n$\\gamma, \\gamma_3$ are naturally extended to an $C$-linear \ntransformation of $\\mathfrak{J}^C$. Needless to say, $\\gamma, \\gamma_3 \\in E_6$ and $\\gamma^2=(\\gamma_3)^3=1$. Hence $\\gamma, \\gamma_3$ induce the involutive automorphism $\\tilde{\\gamma}$, the automorphism $\\tilde{\\gamma}_3$ of order $3$ on $E_6$, respectively: $\\tilde{\\gamma}(\\alpha)=\\gamma\\alpha\\gamma, \\tilde{\\gamma}_3(\\alpha)={\\gamma_3}^{-1}\\alpha\\gamma_3, \\alpha \\in E_6$. 
\n\\vspace{1mm}\n\nThen we have the following proposition and theorem.\n\n\\begin{proposition}\\label{proposition 3.3.1}\n\tThe group $(E_6)^\\gamma$ isomorphic to the group $(Sp(1) \\times SU(6))\/\\Z_2${\\rm:}\n\t$(E_6)^\\gamma \\cong (Sp(1) \\times SU(6))\/\\Z_2,\\Z_2 =\\{(1, E), (-1, -E) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tLet $SU(6)=\\{A \\in M(6, C)\\,|\\,(\\tau\\,{}^t A) A$ $=1, \\det\\, A=1) \\}$, where $\\tau$ is the complex conjugation of $C=\\{x+iy \\,|\\,x,y \\in \\R \\}$, that is, $\\tau(x+yi)=x-yi, x,y \\in \\R$.\n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma}}:Sp(1) \\times SU(6) \\to (E_6)^\\gamma $ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma}}(p, A)(M+\\a)={k_J}^{-1}(A(k_J M){}^t\\!A)+p\\a k^{-1}(\\tau \\,{}^t\\!A), M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C,\n\t\\end{align*}\n\twhere both of $k_J:\\mathfrak{J}(3, \\H)^C \\to \\mathfrak{S}(6, C)$ and $k:M(3, \\H)^C \\to M(6, C)$ are the $C$-linear isomorphisms.\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 3.11.4 ]{iy0} in detail).\n\\end{proof}\n\n\\begin{theorem}\\label{theorem 3.3.2\n\tThe group $(E_6)^{\\gamma_3}$ is isomorphic to the group $(U(1) \\times SU(6))\/\\Z_2${\\rm :} $(E_6)^{\\gamma_3} \\cong (U(1) \\times SU(6))\/\\Z_2, \\Z_2=\\{(1,E),\n\t(-1,-E) \\}$.\t\n\\end{theorem}\n\\begin{proof}\n\tLet $U(1)=\\{a \\in \\C\\,|\\, \\ov{a}a=1 \\} \\subset Sp(1)$. We define a mapping $\\varphi_{{}_{E_6,\\gamma_3}}: U(1) \\times SU(6) \\to (E_6)^{\\gamma_3}$ by the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma}}$ (Proposition \\ref{proposition 3.3.1}). This mapping induces the required isomorphism (see \\cite [Theorem 3.2]{iy1} in detail). \n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.3.2}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $E_6\/((U(1) \\times SU(6))\/\\Z_2)$.\n\\vspace{2mm}\n\nLet $\\sigma, \\sigma_3 \\in F_4$. Then, as in the case above, using the inclusion $F_4 \\subset E_6$, $\\sigma, \\sigma_3$ are naturally extended to\ntransformations of $\\mathfrak{J}^C$. Needless to say, $\\sigma, \\sigma_3 \\in E_6$ and $\\sigma^2=(\\sigma_3)^3=1$. Hence $\\sigma$ and $\\sigma_3$ induce the involutive automorphism $\\tilde{\\sigma}$ and the automorphism $\\tilde{\\sigma}_3$ of order $3$ on $E_6$, respectively: $\\tilde{\\sigma}(\\alpha)=\\sigma\\alpha\\sigma, \\tilde{\\sigma}_3(\\alpha)={\\sigma_3}^{-1}\\alpha\\sigma_3, \\alpha \\in E_6$. 
\n\\vspace{1mm}\n\nThen we have the following proposition and theorem.\n\n\\begin{proposition}\\label{proposition 3.3.3\n\tThe group $(E_6)^\\sigma$ is isomorphic to the group $(U(1) \\times Spin(10))\/\\Z_4${\\rm:}\\,\n\t$(E_6)^\\sigma \\!\\cong (U(1) \\times Spin(10))\/\\Z_4,\\Z_4=\\{ (1, \\phi_{{}_{6,\\sigma}}(1)), (-1, \\phi_{{}_{6,\\sigma}}(-1)), (i, \\phi_{{}_{6,\\sigma}}(-i)), (-i, \\phi_{{}_{6,\\sigma}}(i)) \\}$.\n\\end{proposition}\n\\begin{proof}\n\tLet $Spin(10)$ as the group $(E_6)_{E_1}=\\{\\alpha \\in E_6\\,|\\,\\alpha E_1=E_1 \\}$ (\\cite[Theorem 3.10.4]{iy0}).\n\tWe define a mapping $\\varphi_{{}_{E_6,\\sigma}}:U(1) \\times Spin(10) \\to (E_6)^\\sigma $ by\n\t$$\n\t\\varphi_{{}_{E_6,\\sigma}}(\\theta, \\delta)=\\phi_{{}_{6,\\sigma}}(\\theta)\\delta,\n\t$$\n\twhere $\\phi_{{}_{6,\\sigma}}:U(1) \\to E_6$ is defined by\n\t\\begin{align*}\n\t\\phi_{{}_{6,\\sigma}}(\\theta)X=\\begin{pmatrix}\n\t\\theta^4 \\xi_1 & \\theta x_3 & \\theta \\ov{x_2} \\\\\n\t\\theta \\ov{x_3} & {\\theta}^{-2}\\xi_2 & {\\theta}^{-2}x_1 \\\\ \n\t\\theta x_2 & {\\theta}^{-2}\\ov{x_1} & {\\theta}^{-2}\\xi_3\n\t\\end{pmatrix}, \\,\\, X \\in \\mathfrak{J}^C.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 3.10.7 ]{iy0} in detail).\n\\end{proof}\n\n\\begin{theorem}\\label{theorem 3.3.4\n\tThe group $(E_6)^{\\sigma_3}$ is isomorphic to the group $(U(1) \\times Spin(2) \\times Spin(8))\/(\\Z_4 \\allowbreak \\times \\Z_2)${\\rm :} $(E_6)^{\\sigma_3} \\cong (U(1) \\times Spin(2) \\times Spin(8))\/(\\Z_2 \\times \\Z_4), \\Z_2=\\{(1,1,1),(1,\\sigma,\\sigma) \\}, \\Z_4=\\{(1,1,1),(i,D_{e_1},\\phi_{{}_{6,\\sigma}}(-i)D_{-e_1}),(-1,\\allowbreak\\sigma,1),(-i,D_{-e_1},\\phi_{{}_{6,\\sigma}}(i)D_{e_1}) \\} \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $U(1)=\\{\\theta \\in C\\,|\\,(\\tau \\theta)\\theta=1 \\}$ and $Spin(2)$, which is isomorphic to the group $U(1)$, as the group $\\{D_a \\in F_4 \\,|\\,a \\in U(1) \\}$ defined in $F_4$, moreover let $Spin(8)$ as the group $(E_6)_{E_1, F_1(1),F_1(e_1)}=\\{ \\alpha \\in E_6 \\,|\\,\\alpha E_1=E_1, \\alpha F_1(1)=F_1(1), \\alpha F_1(e_1)=F_1(e_1)\\}$ (cf.\\cite[Proposition 3.22]{iy2}, \\cite[Subsection 3.2]{iy1}), respectively. We define a mapping $\\varphi_{{}_{E_6,\\sigma_3}}: U(1) \\times Spin(2) \\times Spin(8) \\to (E_6)^{\\sigma_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{E_6,\\sigma_3}}(\\theta, D_a, \\beta)=\\phi_{{}_{6,\\sigma}}(\\theta)D_a \\beta.\n\t\\end{align*}\n\tThis mapping induces the required isomorphism (see \\cite[Theorem 3.9]{iy1} in detail).\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3}$ is connected, together with the result of Theorem \\ref{theorem 3.3.4}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $E_6\/((U(1) \\times Spin(2) \\times Spin(8))\/(\\Z_2 \\times \\Z_4))$.\n\\vspace{2mm}\n\nLet $\\nu=\\exp(2\\pi i\/9) \\in U(1)=\\{ \\theta \\in C \\,|\\, (\\tau \\theta)\\theta=1\\} \\subset C$. We consider the element $A_\\nu \\in SU(6) \\subset M(6, C)$ as follows.\n\\begin{align*}\n\t\tA_\\nu=\\diag(\\nu^5, \\nu^{-1}, \\nu^{-1}, \\nu^{-1}, \\nu^{-1},\\nu^{-1}),\n\\end{align*} \nand using this $A_\\nu$, set $\\nu_3=\\varphi_{{}_{E_6,\\gamma}}(1,A_\\nu)$. Then we have that $\\nu_3 \\in (E_6)^\\gamma \\subset E_6$ and $(\\nu_3)^9=1$. 
Since ${A_\\nu}^3= \\nu^6 E \\in z(SU(6))$ (the center of $SU(6)$) and $(\\nu_3)^3=\\varphi_{{}_{E_6,\\gamma}}(1, {A_\\nu}^3)=\\omega 1$, where $\\omega= -(1\/2)+(\\sqrt{3}\/2)i \\in C$, $\\nu_3$ induces the automorphism $\\tilde{\\nu}_3$ of order $3$ on $E_6$: $\\tilde{\\nu}_3(\\alpha)={\\nu_3}^{-1}\\alpha\\nu_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.3.5\n\tThe group $(E_6)^{\\nu_3}$ is isomorphic to the group $(Sp(1) \\times S(U(1) \\times U(5)))\/\\Z_2${\\rm :} $(E_6)^{\\nu_3} \\cong (Sp(1) \\times S(U(1) \\times U(5)))\/\\Z_2, \\Z_2=\\{(1,E), (-1,-E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1) \\times U(5)) \\subset SU(6)$. We define a mapping $\\varphi_{{}_{E_6, \\nu_3}}:Sp(1) \\times S(U(1) \\times U(5)) \\to (E_6)^{\\nu_3}$ by the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma}}$. This mapping induces the required isomorphism (see \\cite[Theorem 3.4]{iy1} in detail).\n\\end{proof}\n\nThus, since the group $(E_6)^{\\nu_3}$ is connected, together with the result of Theorem \\ref{theorem 3.3.5}, we have an exceptional $\\varmathbb{Z}_3$-symmetric space $E_6\/((U(1) \\times S(U(1) \\times U(5)))\/ \\Z_2)$.\n\\vspace{2mm}\n\nLet $\\phi_{{}_{6,\\sigma}}:U(1) \\to E_6$ be the embedding defined in the proof of Proposition \\ref{proposition 3.3.3}, and again let $\\nu=\\exp(2\\pi i\/9) \\in U(1) \\subset C$. Set $\\mu_3=\\phi_{{}_{6,\\sigma}}(\\nu)$. Then, needless to say, $\\mu_3 \\in E_6$ and $\\nu^9=1$. \nHence, since $\\mu^3=\\omega 1 \\in z(E_6)$ (the center of $E_6$), $\\mu_3$ induces the automorphism $\\tilde{\\mu_3}$ of order $3$ on $E_6$: $\\tilde{\\mu_3}(\\alpha)={\\mu_3}^{-1}\\alpha\\mu_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nNow, we have the following theorem.\n\n\\begin{theorem}\\label{theorem 3.3.6\n\tThe group $(E_6)^{\\mu_3}$ coincides with the group $(E_6)^\\sigma$, that is, this group is isomorphic to the group $(U(1) \\times Spin(10))\/\\Z_4${\\rm :} $(E_6)^{\\mu_3} \\cong (U(1) \\times Spin(10))\/\\Z_4, \\Z_4=\\{ (1, 1), (-1, \\sigma), (i, \\phi_{{}_{6,\\sigma}}(-i)), (-i, \\phi_{{}_{6,\\sigma}}(i)) \\}$\n\\end{theorem}\n\\begin{proof}\n\tWe have to prove that $(E_6)^{\\mu_3}=(E_6)^\\sigma$.\n\n\tHowever the details of proof is omitted (see \\cite[Theorem 3.11]{iy1} in detail).\n\\end{proof}\n\\vspace{2mm}\n\nLet $w_3 \\in G_2 \\subset F_4$. Then, as in the cases above, using the inclusion $F_4 \\subset E_6$, $w_3$ are naturally extended to\ntransformation of $\\mathfrak{J}^C$.\nNeedless to say, $w_3 \\in E_6$ by inclusion $F_4 \\subset E_6$ and $(w_3)^3=1$. 
Hence $w_3$ induces the automorphism $\tilde{w}_3$ of order $3$ on $E_6$: $\tilde{w}_3(\alpha)={w_3}^{-1}\alpha w_3, \alpha \in E_6$.
Note that, using $\bm{\omega}=-(1/2)+(\sqrt{3}/2)e_1 \in \C$, the action of $w_3$ on $\mathfrak{J}^C=\mathfrak{J}(3, \C)^C \oplus M(3,\C)^C$ is as follows:
\begin{align*}
		w_3(X_{\bm{C}}+M)=X_{\bm{C}}+\bm{\omega}M,\,\,X_{\bm{C}}+M \in \mathfrak{J}(3,\C)^C \oplus M(3, \C)^C=\mathfrak{J}^C.
\end{align*}

Now, we have the following theorem.

\begin{theorem}\label{theorem 3.3.7}
	The group $(E_6)^{w_3}$ is isomorphic to the group $(SU(3) \times SU(3) \times SU(3))/\Z_3${\rm:} $(E_6)^{w_3} \cong (SU(3) \times SU(3) \times SU(3))/\Z_3, \Z_3=\{(E,E,E),(\bm{\omega}E,\bm{\omega}E,\bm{\omega}E),(\bm{\omega}^{-1}E,\bm{\omega}^{-1}E,\allowbreak \bm{\omega}^{-1}E) \}$.
\end{theorem}
\begin{proof}
	We define a mapping $\varphi_{{}_{E_6,w_3}}:SU(3) \times SU(3) \times SU(3) \to (E_6)^{w_3}$ by
	\begin{align*}
			\varphi_{{}_{E_6,w_3}}(L,A,B)(X_{C}+M)&=h(A,B)X_{C}h(A,B)^*+LM\tau h(A,B)^*, 
			\\
			&\hspace*{20mm} X_{C}+M \in \mathfrak{J}(3, \C)^C \oplus 
			M(3,\C)^C=\mathfrak{J}^C,
	\end{align*}
	where $h:M(3,\C) \times M(3,\C) \to M(3,\C)^C$ is defined by 
	\begin{align*}
			h(A,B)=\dfrac{A+B}{2}+i\dfrac{(B-A)e_1}{2}.
	\end{align*}
	This mapping induces the required isomorphism (see \cite[Theorem 13]{iy0} for details). Note that the theorems in \cite{iy0} are misnumbered; Theorem 13 above corresponds to the last theorem there.
\end{proof}

Thus, since the group $(E_6)^{w_3}$ is connected, together with the result of Theorem \ref{theorem 3.3.7}, we have an exceptional $\varmathbb{Z}_3$-symmetric space $E_6/((SU(3) \times SU(3) \times SU(3))/ \Z_3)$.

As in Subsections 3.1 and 3.2, the following lemma is useful to determine the structure of the groups $G^{\sigma_3} \cap G^{\tau_3}$ in $E_6$.

\begin{lemma}\label{lemma 3.3.8}
	{\rm (1)} The mapping $\varphi_{{}_{E_6,\gamma_3}}:U(1) \times SU(6) \to (E_6)^{\gamma_3}$ of \,Theorem {\rm \ref{theorem 3.3.2}} satisfies the relational formulas 
	\begin{align*}
	\gamma_3&=\varphi_{{}_{E_6,\gamma_3}}(\omega,E), 
	\\
	\sigma_3&=\varphi_{{}_{E_6,\gamma_3}}(1,\diag(1,1,\tau\omega,\omega,\omega,\tau\omega)), 
	\\
	\nu_3&=\varphi_{{}_{E_6,\gamma_3}}(1,\diag(\nu^5,\nu^{-1},\nu^{-1},\nu^{-1},\nu^{-1},\nu^{-1})),
	\\
	\mu_3&=\varphi_{{}_{E_6,\gamma_3}}(1,\diag(\nu^{-2},\nu^2,\nu^{-1},\nu,\nu^{-1},\nu)),
	\\
	w_3&=\varphi_{{}_{E_6,\gamma_3}}(1,\diag(\tau\omega,\omega,\tau\omega,\omega,\tau\omega,\omega)),
	\end{align*}
 where $\omega=-(1/2)+(\sqrt{3}/2)i \in U(1), \nu=\exp(2\pi i/9)$.
	\vspace{1mm}
	
	{\rm (2)} The mapping $\varphi_{{}_{E_6,w_3}}:SU(3)\times SU(3) \times SU(3) \to (E_6)^{w_3}$ of \,Theorem {\rm \ref{theorem 3.3.7}} satisfies the relational 
formulas
	\begin{align*}
	\gamma_3&=\varphi_{{}_{E_6,w_3}}(\diag(1,\bm{\omega},\ov{\bm{\omega}}),E,E),
	\\
	\sigma_3&=\varphi_{{}_{E_6,w_3}}(E,\diag(1,\ov{\bm{\omega}},\bm{\omega}),\diag(1,\ov{\bm{\omega}},\bm{\omega})),
	\\
	\mu_3&=\varphi_{{}_{E_6,w_3}}(E,\diag({\bm{\varepsilon}}^{-2},\bm{\varepsilon},\bm{\varepsilon}),\diag({\bm{\varepsilon}}^2,{\bm{\varepsilon}}^{-1},{\bm{\varepsilon}}^{-1})),
	\\
	w_3&=\varphi_{{}_{E_6,w_3}}(\bm{\omega}E,E,E),
	\end{align*}
	 where $\bm{\omega}=-(1/2)+(\sqrt{3}/2)e_1 \in U(1), \bm{\varepsilon}=\exp(2\pi e_1/9)$. 	
\end{lemma}
\begin{proof}
	(1), (2) These formulas are obtained by straightforward computation. 
\end{proof}

\section{Globally exceptional $\varmathbb{Z}_3 \times \varmathbb{Z}_3$-symmetric spaces}

In this section, we construct a finite abelian group $\varGamma=\varmathbb{Z}_3 \times \varmathbb{Z}_3$ by using the inner automorphisms $\tilde{\sigma}_3, \tilde{\tau}_3$ of order $3$ on $G=G_2, F_4, E_6$, as in the cases below, and determine the structure of the groups $G^{\sigma_3} \cap G^{\tau_3}$.

\subsection{Case 1: $\{1, \tilde{\gamma}_3, \tilde{\gamma}_3{}^{-1}\} \times \{1, \tilde{w}_3, \tilde{w}_3{}^{-1}\}$-symmetric space}

Let $\gamma_3, w_3$ be the $\R$-linear transformations of $\mathfrak{C}$ defined in Subsection \ref{subsection 3.1}. 

\noindent From Lemma \ref{lemma 3.1.4} (1), we can easily confirm that $\gamma_3$ and $w_3$ are commutative, so that $\tilde{\gamma}_3$ and $\tilde{w}_3$ are commutative in $\Aut(G_2)$: $\tilde{\gamma}_3\tilde{w}_3=\tilde{w}_3\tilde{\gamma}_3$.
\vspace{1mm}

Now, we will determine the structure of the group $(G_2)^{\gamma_3} \cap (G_2)^{w_3}$.

\begin{theorem}\label{theorem 4.1.1}
	The group $(G_2)^{\gamma_3} \cap (G_2)^{w_3}$ is isomorphic to the group $(U(1) \times U(1))/\Z_2${\rm :} $(G_2)^{\gamma_3} \cap (G_2)^{w_3} \cong (U(1) \times U(1))/\Z_2, \Z_2=\{(1,1), (-1,-1) \}$.
\end{theorem}
\begin{proof}
	Let $U(1) \subset Sp(1)$. 
	We define a mapping $\varphi_{{}_{G_2,\gamma_3, w_3}}: U(1) \times U(1) \to (G_2)^{\gamma_3} \cap (G_2)^{w_3}$ by 
	\begin{align*}
				\varphi_{{}_{G_2,\gamma_3, w_3}}(s,t)(m+ne_4)=tm\ov{t}+(sn\ov{t})e_4,\,\,m+ne_4 \in \H \oplus \H e_4=\mathfrak{C}.
	\end{align*}
	Needless to say, this mapping is the restriction of the mapping $\varphi_{{}_{G_2,\gamma_3}}$ (Theorem \ref{theorem 3.1.2}).
	
	First, we will prove that $\varphi_{{}_{G_2,\gamma_3, w_3}}$ is well-defined. Since this mapping is the restriction of the mapping $\varphi_{{}_{G_2,\gamma_3}}$, it is trivial that $\varphi_{{}_{G_2,\gamma_3, w_3}}(s,t) \in (G_2)^{\gamma_3}$, and from $w_3=\varphi_{{}_{G_2,\gamma_3}}(1,\ov{\bm{\omega}})$ (Lemma \ref{lemma 3.1.4} (1)), together with the commutativity of $U(1)$, it is clear that $\varphi_{{}_{G_2,\gamma_3, w_3}}(s,t) \in (G_2)^{w_3}$. Hence $\varphi_{{}_{G_2,\gamma_3, w_3}}$ is well-defined. Subsequently, since $\varphi_{{}_{G_2,\gamma_3, w_3}}$ is the restriction of $\varphi_{{}_{G_2,\gamma_3}}$, we easily see that $\varphi_{{}_{G_2,\gamma_3, w_3}}$ is a homomorphism.
	
	Next, we will prove that $\varphi_{{}_{G_2,\gamma_3, w_3}}$ is surjective. Let $\alpha \in (G_2)^{\gamma_3} \cap (G_2)^{w_3} \subset (G_2)^{\gamma_3}$. Then there exist $s \in U(1)$ and $q \in Sp(1)$ such that $\alpha=\varphi_{{}_{G_2,\gamma_3}}(s,q)$ (Theorem \ref{theorem 3.1.2}). 
Moreover, since $\\alpha=\\varphi_{{}_{G_2,\\gamma_3}}(s,q)$ commutes with $w_3$, again using $w_3=\\varphi_{{}_{G_2,\\gamma_3}}(1,\\ov{\\bm{\\omega}})$, we have that \n\t\\begin{align*}\n\t\t\t\\left\\{ \\begin{array}{l}\n\t\t\ts=s \\\\\n\t\t\t\\bm{\\omega}q\\ov{\\bm{\\omega}}=q \n\t\t\t\\end{array} \\right. \n\t\t\t\\quad \\text{or}\\quad\n\t\t\t\\left\\{ \\begin{array}{l}\n\t\t\ts=-s \\\\\n\t\t\t\\bm{\\omega}q\\ov{\\bm{\\omega}}=-q.\n\t\t\t\\end{array} \\right.\n\t\\end{align*}\n\tThe latter case is impossible because $s \\not=0$. As for the former case, from the relational formula $\\bm{\\omega}q\\ov{\\bm{\\omega}}=q$ we easily see that $q \\in U(1)$, and needless to say, $s \\in U(1)$. Hence there exist $s,t \\in U(1)$ such that $\\alpha=\\varphi_{{}_{G_2,\\gamma_3}}(s,t)$. Namely, there exist $s,t \\in U(1)$ such that $\\alpha=\\varphi_{{}_{G_2,\\gamma_3,w_3}}(s,t)$. The proof of surjective is completed.\n\t\n\tFinally, we determine $\\Ker \\,\\varphi_{{}_{G_2,\\gamma_3, w_3}}$. However, since $\\varphi_{{}_{G_2,\\gamma_3, w_3}}$ is the restriction of $\\varphi_{{}_{G_2,\\gamma_3}}$, it is easily obtain that $\\Ker \\,\\varphi_{{}_{G_2,\\gamma_3, w_3}}=\\{(1,1),(-1,-1) \\} \\cong \\Z_2$.\n\t\n\tTherefore we have the required isomorphism\n\t\\begin{align*}\n\t\t\t(G_2)^{\\gamma_3} \\cap (G_2)^{w_3} \\cong (U(1) \\times U(1))\/\\Z_2.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(G_2)^{\\gamma_3} \\cap (G_2)^{w_3}$ is connected from Theorem \\ref{theorem 4.1.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\n\t\t\t\t\t\t\tG_2\/((U(1) \\times U(1))\/\\Z_2).\n\\end{align*}\n\n\\subsection{Case 2: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\}$-symmetric space}\n\nLet the $\\R$-linear transformations $\\gamma_3, \\sigma_3$ of $\\mathfrak{J}$ defined in Subsection \\ref{subsection 3.2}. \n\n\\noindent From Lemma \\ref{lemma 3.2.6} (1), since we can easily confirm that $\\gamma_3$ and $\\sigma_3$ are commutative, $\\tilde{\\gamma}_3$ and $\\tilde{\\sigma}_3$ are commutative in $\\Aut(F_4)$: $\\tilde{\\gamma}_3\\tilde{\\sigma}_3=\\tilde{\\sigma}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$, we prove proposition needed in the proof of theorem below.\n\\vspace{1mm}\n\nWe define subgroups $G_{1,2}$ and $G'_{1,2}$ of the group $Sp(3)$ by\n\\begin{align*}\n\tG_{1,2}&=\\left\\{ A=\\begin{pmatrix}\n\t h & 0 & 0 \\\\\n\t 0 & a & c \\\\\n\t 0 & d & b\n\t \\end{pmatrix} \\in Sp(3)\\,\\left|\\,h \\in Sp(1), \\begin{pmatrix}\t \n\t a & c \\\\\n\t d & b\n\t \\end{pmatrix} \\in U(2) \\subset Sp(2) \n\t \\right. \\right\\}, \n\t\\\\\n\tG'_{1,2}&=\\left\\{ A'=\\begin{pmatrix}\n\th' & 0 & 0 \\\\\n\t0 & a' & c'e_2 \\\\\n\t0 & \\ov{e_2}d' & b'\n\t\\end{pmatrix} \\in Sp(3)\\,\\left|\\,h' \\in Sp(1), \n\t\\begin{array}{l}\n\t(c'e_2)(\\ov{c'e_2})+a'\\ov{a'}=1\\\\\n\tb'\\ov{b'}+(\\ov{e_2}d')(\\ov{\\ov{e_2}d'})=1\\\\\n\t(c'e_2)\\ov{b'}+a'(\\ov{\\ov{e_2}d'})=0\\\\\n\ta',b',c',d' \\in \\C\n\t\\end{array}\n\t\\right. 
\\right\\}, \n\\end{align*}\nwhere $e_2$ is one of basis in $\\mathfrak{C}$.\n\n It goes without saying that $\\begin{pmatrix}\n a & c \\\\\n d & b\n \\end{pmatrix} \\in U(2)$ is equivalent to the conditions\n\\begin{align*}\n\t\tc\\ov{c}+a\\ov{a}=1, \\,\\,b\\ov{b}+d\\ov{d}=1,\\,\\,c\\ov{b}+a\\ov{d}=0,\n\\end{align*}\nmoreover, that $(c'e_2)(\\ov{c'e_2})+a'\\ov{a'}=1$ above is same as $c'\\ov{c}+a'\\ov{a'}=1$, so is others.\n\\vspace{1mm}\n\n\\begin{proposition}\\label{proposition 4.2.1\n\tThe group $G'_{1,2}$ is isomorphic to the group $Sp(1) \\times U(2)${\\rm :} $G'_{1,2} \\cong Sp(1) \\times U(2)$.\n\\end{proposition}\n\\begin{proof}\n\tFirst, we will prove that the group $G'_{1,2}$ is isomorphic to the group $G_{1,2}$. \n\tWe define a mapping $g_{{}_{421}}: G_{1,2} \\to G'_{1,2}$ by\n\t\\begin{align*}\n\t\t\tg_{{}_{421}}(\\begin{pmatrix}\n\t\t\th & 0 & 0 \\\\\n\t\t\t0 & a & c \\\\\n\t\t\t0 & d & b\n\t\t\t\\end{pmatrix})\n\t\t\t&=\\begin{pmatrix}\n\t\t\t1 & 0 & 0 \\\\\n\t\t\t0 & 1 & 0 \\\\\n\t\t\t0 & 0 & \\ov{e_2}\n\t\t\t\\end{pmatrix}\\begin{pmatrix}\n\t\t\th & 0 & 0 \\\\\n\t\t\t0 & a & c \\\\\n\t\t\t0 & d & b\n\t\t\t\\end{pmatrix}\\begin{pmatrix}\n\t\t\t1 & 0 & 0 \\\\\n\t\t\t0 & 1 & 0 \\\\\n\t\t\t0 & 0 & e_2\n\t\t\t\\end{pmatrix}\\left(=\\begin{pmatrix}\n\t\t\th & 0 & 0 \\\\\n\t\t\t0 & a & ce_2 \\\\\n\t\t\t0 & \\ov{e_2}d & b\n\t\t\t\\end{pmatrix} \\right).\n\t\\end{align*}\n\tFirst, it is clear that $g_{{}_{421}}$ is well-defined and a homomorphism. Moreover, it is easy to verify that $g_{{}_{421}}$ is bijective. Thus we have the isomorphism $G'_{1,2} \\cong G_{1,2}$. \n\t\n\tHere, by defining a mapping $f_{{}_{421}}:Sp(1) \\times U(2) \\to G_{1,2}$ as follows:\n\t\\begin{align*}\n\t\tf_{{}_{421}}(p,U)=\\scalebox{0.8}{$\n\t\t\t\\left( \\begin{array}{cccccccc@{\\!}}\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large$p$}}&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\t\\\\\n\t\t\t&&&&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-18pt}[0pt][0pt]{\\huge $U$}}&\n\t\t\t\\\\\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&&&&\n\t\t\t\\\\[-2mm]\n\t\t\t&&&&&&&\n\t\t\t\\end{array}\\right)$},\n\t\t\\end{align*}\n\twe have the isomorphism $G_{1,2} \\cong Sp(1) \\times U(2)$.\n\t\n\tTherefore, together with the result of $G'_{1,2} \\cong G_{1,2}$, we have the required isomorphism \n\t\\begin{align*}\n\t\t\tG'_{1,2} \\cong Sp(1) \\times U(2).\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$.\n\n\\begin{theorem} \\label{theorem 4.2.2\n\tThe group $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3}$ is isomorphic to the group $(U(1) \\times Sp(1) \\times U(2))\/\\Z_2$ {\\rm: } $(F_4)^{\\gamma_3} \\cap (F_4)^{\\sigma_3} \\cong (U(1) \\times Sp(1) \\times U(2))\/\\Z_2, \\Z_2=\\{(1,1,E),(-1,-1,-E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tFirst, we denote the composition of $g_{{}_{421}}$ and $f_{{}_{421}}$ by $h$: $h=g_{{}_{421}}f_{{}_{421}}$ (in the proof of Proposition \\ref{proposition 4.2.1}). 
Then we define a mapping $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}:U(1) \times Sp(1) \times U(2) \to (F_4)^{\gamma_3} \cap (F_4)^{\sigma_3}$ by
		\begin{align*}
		\varphi_{{}_{F_4,\gamma_3,\sigma_3}}(s,p,U)(M+\a)=h(p,U)Mh(p,U)^*+s\a h(p,U)^*,\,
		M+\a \in \mathfrak{J}(3,\H) \oplus \H^3=\mathfrak{J}.
		\end{align*}
		Needless to say, this mapping is the restriction of the mapping $\varphi_{{}_{F_4,\gamma_3}}$, that is, $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}(s,p,U) \allowbreak =\varphi_{{}_{F_4,\gamma_3}}(s,h(p,U))$ (Theorem \ref{theorem 3.2.2}).
		
		First, we will prove that $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}$ is well-defined. It is clear that $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}(s,p,U) \in (F_4)^{\gamma_3}$, and using $\sigma_3=\varphi_{{}_{F_4,\gamma_3}}(1,\diag(1,\ov{\bm{\omega}}, \bm{\omega}))$ (Lemma \ref{lemma 3.2.6} (1)), it follows that
		\begin{align*}
		{\sigma_3}^{-1}\varphi_{{}_{F_4,\gamma_3,\sigma_3}}(s,p,U)\sigma_3
		&=\varphi_{{}_{F_4,\gamma_3}}(1,\diag(1,\ov{\bm{\omega}},\bm{\omega}))^{-1}\varphi_{{}_{F_4,\gamma_3,\sigma_3}}(s,p,U)
		\varphi_{{}_{F_4,\gamma_3}}(1,\diag(1,\ov{\bm{\omega}},\bm{\omega}))
		\\
		&=\varphi_{{}_{F_4,\gamma_3}}(1,\diag(1,\bm{\omega},\ov{\bm{\omega}}))
		\varphi_{{}_{F_4,\gamma_3}}(s,h(p,U))\varphi_{{}_{F_4,\gamma_3}}(1,\diag(1,\ov{\bm{\omega}},\bm{\omega}))
		\\
		&=\varphi_{{}_{F_4,\gamma_3}}(s,\diag(1,\bm{\omega},\ov{\bm{\omega}})h(p,U)\diag(1,\ov{\bm{\omega}},\bm{\omega})), \,\, h(p,U)\!=
		\begin{pmatrix}
		p & 0 & 0 \\
		0 & a & ce_2 \\
		0 & \ov{e_2}d & b
		\end{pmatrix}
		\\
		&=\varphi_{{}_{F_4,\gamma_3}}(s,
		\begin{pmatrix}
		p & 0 & 0 \\
		0 & \bm{\omega}a\ov{\bm{\omega}} & \bm{\omega}(ce_2)\bm{\omega} \\
		0 & \ov{\bm{\omega}}(\ov{e_2}d)\ov{\bm{\omega}} & \ov{\bm{\omega}}b\bm{\omega}
		\end{pmatrix})
		\\
		&=\varphi_{{}_{F_4,\gamma_3}}(s,
		\begin{pmatrix}
		p & 0 & 0 \\
		0 & a & c e_2\\
		0 & \ov{e_2}d & b
		\end{pmatrix})
		\\
		&=\varphi_{{}_{F_4,\gamma_3}}(s,h(p,U))
		\\
		&=\varphi_{{}_{F_4,\gamma_3,\sigma_3}}(s,p,U).
		\end{align*}
	Hence we have that $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}(s,p,U) \in (F_4)^{\sigma_3}$. Thus $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}$ is well-defined.
	Subsequently, since $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}$ is the restriction of the mapping $\varphi_{{}_{F_4,\gamma_3}}$, we easily see that $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}$ is a homomorphism. 

	Next, we will prove that $\varphi_{{}_{F_4,\gamma_3,\sigma_3}}$ is surjective. Let $\alpha \in (F_4)^{\gamma_3} \cap (F_4)^{\sigma_3} \subset (F_4)^{\gamma_3}$. Then there exist $s \in U(1)$ and $A \in Sp(3)$ such that $\alpha=\varphi_{{}_{F_4,\gamma_3}}(s,A)$ (Theorem \ref{theorem 3.2.2}). 
Moreover, from the condition $\alpha\in	(F_4)^{\sigma_3}$, that is, ${\sigma_3}^{-1}\varphi_{{}_{F_4,\gamma_3}}(s,A)\sigma_3=\varphi_{{}_{F_4,\gamma_3}}(s,A)$, and using ${\sigma_3}^{-1}\varphi_{{}_{F_4,\gamma_3}}(s,A)\sigma_3\!=\!\varphi_{{}_{F_4,\gamma_3}}(s,\diag(1,\bm{\omega},\ov{\bm{\omega}})A\,\diag(1,\ov{\bm{\omega}},\bm{\omega}))$ (Lemma \ref{lemma 3.2.6} (1)), we have that
	\begin{align*}
	\left\{ 
	\begin{array}{l}
	s=s \\
	\diag(1,\bm{\omega},\ov{\bm{\omega}})A\,\diag(1,\ov{\bm{\omega}},\bm{\omega})=A 
	\end{array} \right.
	\quad {\text{or}}\quad
	\left\{ 
	\begin{array}{l}
	s=-s \\
	\diag(1,\bm{\omega},\ov{\bm{\omega}})A\,\diag(1,\ov{\bm{\omega}},\bm{\omega})=-A. 
	\end{array} \right.
	\end{align*}
	The latter case is impossible because $s \not=0$. As for the former case, from the second condition, by doing straightforward computation we see that $A$ takes the form $\begin{pmatrix}
	p & 0 & 0 \\
	0 & a & c e_2\\
	0 & \ov{e_2}d & b
	\end{pmatrix} \allowbreak \in Sp(3)$, that is, $A \in G'_{1,2}$. Hence, from Proposition \ref{proposition 4.2.1}, there exist $p \in Sp(1)$ and $U \in U(2)$ such that $A=h(p,U)$, and needless to say, $s \in U(1)$.
 Thus, there exist $s \in U(1), p \in Sp(1)$ and $U \in U(2)$ such that $\alpha=\varphi_{{}_{F_4,\gamma_3, \sigma_3}}(s,p,U)$. This completes the proof of surjectivity.
	
	Finally, we will determine $\Ker\,\varphi_{{}_{F_4,\gamma_3,\sigma_3}}$. From $\Ker\,\varphi_{{}_{F_4,\gamma_3}}=\{(1,E), (-1,-E) \}$ we easily obtain that $\Ker\,\varphi_{{}_{F_4,\gamma_3,\sigma_3}}=\{(1,1,E), (-1,-1,-E) \} \cong \Z_2$. 

 Therefore we have the required isomorphism
 \begin{align*}
 (F_4)^{\gamma_3} \cap (F_4)^{\sigma_3} \cong (U(1) \times Sp(1) \times U(2))/\Z_2.
 \end{align*}
\end{proof}
\vspace{1mm}

Thus, since the group $(F_4)^{\gamma_3} \cap (F_4)^{\sigma_3}$ is connected from Theorem \ref{theorem 4.2.2}, we have an exceptional $\varmathbb{Z}_3 \times \varmathbb{Z}_3$-symmetric space
\begin{align*}
 F_4/((U(1) \times Sp(1) \times U(2))/\Z_2).
\end{align*}

\subsection{Case 3: $\{1, \tilde{\gamma}_3, \tilde{\gamma}_3{}^{-1}\} \times \{1, \tilde{w}_3, \tilde{w}_3{}^{-1}\}$-symmetric space}

Let $\gamma_3, w_3$ be the $\R$-linear transformations of $\mathfrak{J}$ defined in Subsection \ref{subsection 3.2}. 
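
In this case the commutativity can be verified in one line from Lemma \ref{lemma 3.2.6} (2): since $\varphi_{{}_{F_4,w_3}}$ is a homomorphism (cf.\ the proof of Theorem \ref{theorem 3.2.5}) and the scalar matrix $\bm{\omega}E$ is central in $SU(3)$, we have
\begin{align*}
		\gamma_3 w_3=\varphi_{{}_{F_4,w_3}}(\diag(1,\bm{\omega},\ov{\bm{\omega}})\,\bm{\omega}E,E)=\varphi_{{}_{F_4,w_3}}(\bm{\omega}E\,\diag(1,\bm{\omega},\ov{\bm{\omega}}),E)=w_3\gamma_3.
\end{align*}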
\n\n\\noindent From Lemma \\ref{lemma 3.2.6} (2), since we can easily confirm that $\\gamma_3$ and $w_3$ are commutative, $\\tilde{\\gamma}_3$ and $\\tilde{w}_3$ are commutative in $\\Aut(F_4)$: $\\tilde{\\gamma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$, we prove lemma needed in the proof of theorem below.\n\n\\begin{lemma}\\label{lemma 4.3.1\n\tThe group $S(U(1)\\times U(1)\\times U(1))$ is isomorphic to the group $U(1)\\times U(1)${\\rm :} $S(U(1)\\times U(1)\\times U(1)) \\cong U(1)\\times U(1)$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{431}}: U(1)\\times U(1) \\to S(U(1)\\times U(1)\\times U(1))$ by \n\t\\begin{align*}\n\tf_{{}_{431}}(a,b)=\\left( \n\t\\begin{array}{ccc}\n\ts & & {\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\\\\[2mm]\n\t& t & \n\t\\\\[2mm]\n\t{\\raisebox{1pt}[0pt]{\\large $0$}}&& (st)^{-1}\n\t\\end{array}\\right) \\in SU(3).\n\t\\end{align*}\n\tThen this mapping induces the required isomorphism.\n\\end{proof}\n\nNow, we will determine the structure of the group $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.3.2\n\tThe group $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$ is isomorphic to the group $(U(1) \\times U(1) \\times SU(3))\/\\Z_3${\\rm :} $(F_4)^{\\gamma_3} \\cap (F_4)^{w_3} \\cong (U(1) \\times U(1)\\times SU(3))\/\\Z_3, \\Z_3=\\{(1,1,E), (\\bm{\\omega},\\bm{\\omega}, \\bm{\\omega}E),(\\bm{\\omega}^{-1},\\allowbreak\\bm{\\omega}^{-1}, \\bm{\\omega}^{-1}E)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1) \\times U(1) \\times U(1)) \\subset SU(3)$.\n\tWe define a mapping $\\varphi_{{}_{F_4,\\gamma_3, w_3}}: S(U(1) \\times U(1) \\times U(1)) \\times SU(3) \\to (F_4)^{\\gamma_3} \\cap (F_4)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)(X_{\\bm{C}}+M)=AX_{\\bm{C}}A^*+LMA^*,\\,\\,X_{\\bm{C}}+M \\in \\mathfrak{J}(3,\\C)\\oplus M(3,\\C)=\\mathfrak{J}.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{F_4,w_3}}$, that is, $\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)=\\varphi_{{}_{F_4,w_3}}(L,A)$ (Theorem \\ref{theorem 3.2.5}).\n\t\n\tAs usual, we will prove that $\\varphi_{{}_{F_4,\\gamma_3, w_3}}$ is well-defined. It is clear that $\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A) \\in (F_4)^{w_3}$, and using $\\gamma_3=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)$ (Lemma \\ref{lemma 3.2.6} (2)), it follows that \n\t\\begin{align*}\n\t {\\gamma_3}^{-1}\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)\\gamma_3\n\t &=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)^{-1}\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A)\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)\n\t \\\\\n\t &=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}), E)\\varphi_{{}_{F_4,w_3}}(L,A)\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), E)\n\t \\\\\n\t &=\\varphi_{{}_{F_4,w_3}}(\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})L\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),A),L=\\diag(a,b,c), abc=1\n\t \\\\\n\t &=\\varphi_{{}_{F_4,w_3}}(L,A)\n\t \\\\\n\t &=\\varphi_{{}_{F_4,\\gamma_3,w_3}}(L,A).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{F_4,\\gamma_3, w_3}}(L,A) \\in (F_4)^{\\gamma_3}$. Thus $\\varphi_{{}_{F_4,\\gamma_3, w_3}}$ is well-defined. 
Subsequently, since $\varphi_{{}_{F_4,\gamma_3,w_3}}$ is the restriction of the mapping $\varphi_{{}_{F_4,w_3}}$, we easily see that $\varphi_{{}_{F_4,\gamma_3,w_3}}$ is a homomorphism. 
	
	Next, we will prove that $\varphi_{{}_{F_4,\gamma_3,w_3}}$ is surjective. Let $\alpha \in (F_4)^{\gamma_3} \cap (F_4)^{w_3} \subset (F_4)^{w_3}$. Then there exist $P, A \in SU(3)$ such that $\alpha=\varphi_{{}_{F_4,w_3}}(P,A)$ (Theorem \ref{theorem 3.2.5}). Moreover, from the condition $\alpha \in (F_4)^{\gamma_3}$, that is, ${\gamma_3}^{-1}\varphi_{{}_{F_4,w_3}}(P,A)\gamma_3=\varphi_{{}_{F_4,w_3}}(P,A)$, and using ${\gamma_3}^{-1}\varphi_{{}_{F_4,w_3}}(P,A)\gamma_3=\varphi_{{}_{F_4,w_3}}(\diag(1,\ov{\bm{\omega}},\bm{\omega})P\,\diag(1,\bm{\omega},\ov{\bm{\omega}}),A)$ (Lemma \ref{lemma 3.2.6} (2)), we have that
	\begin{align*}
	&\,\,\,{\rm(i)}\,\left\{
	\begin{array}{l}
	\diag(1,\ov{\bm{\omega}},\bm{\omega})P\diag(1,\bm{\omega},\ov{\bm{\omega}})=P \\
	A=A,
	\end{array} \right.
	\qquad
	 {\rm(ii)}\,\left\{
 	\begin{array}{l}
 	\diag(1,\ov{\bm{\omega}},\bm{\omega})P\diag(1,\bm{\omega}, \ov{\bm{\omega}})=\bm{\omega}P \\
	A=\bm{\omega}A,
	\end{array} \right.
	\\[2mm]
	&{\rm(iii)}\,\left\{
	\begin{array}{l}
	\diag(1,\ov{\bm{\omega}},\bm{\omega})P\diag(1,\bm{\omega},\ov{\bm{\omega}})=\bm{\omega}^{-1}P \\
	A=\bm{\omega}^{-1}A.
	\end{array} \right.
	\end{align*}
	The Cases (ii) and (iii) are impossible because $A \not=0$. As for the Case (i), from the first condition, by doing straightforward computation we see that $P$ takes the form $\diag(a,b,c) \in SU(3)$, that is, $P \in S(U(1)\times U(1)\times U(1))$. Needless to say, $A \in SU(3)$. Hence there exist $L \in S(U(1)\times U(1) \times U(1))$ and $A \in SU(3)$ such that $\alpha=\varphi_{{}_{F_4,w_3}}(L,A)$. Namely, there exist $L \in S(U(1)\times U(1) \times U(1))$ and $A \in SU(3)$ such that $\alpha=\varphi_{{}_{F_4,\gamma_3,w_3}}(L,A)$. This completes the proof of surjectivity.
	
	Finally, we will determine $\Ker\,\varphi_{{}_{F_4,\gamma_3,w_3}}$. From $\Ker\,\varphi_{{}_{F_4,w_3}}=\{(E,E),(\bm{\omega}E,\bm{\omega}E), \allowbreak (\bm{\omega}^{-1}E,\bm{\omega}^{-1}E)\}$, we easily obtain that $\Ker\,\varphi_{{}_{F_4,\gamma_3,w_3}}=\{(E,E),(\bm{\omega}E,\bm{\omega}E), (\bm{\omega}^{-1}E,\bm{\omega}^{-1}E)\} \cong \Z_3$. 
Thus we have the isomorphism $(F_4)^{\gamma_3} \cap (F_4)^{w_3} \cong (S(U(1) \times U(1) \times U(1))\times SU(3))/\Z_3$.
	
	Therefore, by Lemma \ref{lemma 4.3.1} we have the required isomorphism 
	\begin{align*}
	(F_4)^{\gamma_3} \cap (F_4)^{w_3} \cong (U(1) \times U(1)\times SU(3))/\Z_3,
	\end{align*}
	where $\Z_3=\{(1,1,E), (\bm{\omega},\bm{\omega}, \bm{\omega}E),(\bm{\omega}^{-1},\bm{\omega}^{-1}, \bm{\omega}^{-1}E)\}$.
\end{proof}
\vspace{1mm}

Thus, since the group $(F_4)^{\gamma_3} \cap (F_4)^{w_3}$ is connected from Theorem \ref{theorem 4.3.2}, we have an exceptional $\varmathbb{Z}_3 \times \varmathbb{Z}_3$-symmetric space
\begin{align*}
F_4/((U(1) \times U(1) \times SU(3))/\Z_3).
\end{align*}

\subsection{Case 4: $\{1, \tilde{\sigma}_3, \tilde{\sigma}_3{}^{-1}\} \times \{1, \tilde{w}_3, \tilde{w}_3{}^{-1}\}$-symmetric space}\label{case 4}

Let $\sigma_3, w_3$ be the $\R$-linear transformations of $\mathfrak{J}$ defined in Subsection \ref{subsection 3.2}. 

\noindent From Lemma \ref{lemma 3.2.6} (1), we can easily confirm that $\sigma_3$ and $w_3$ are commutative, so that $\tilde{\sigma}_3$ and $\tilde{w}_3$ are commutative in $\Aut(F_4)$: $\tilde{\sigma}_3\tilde{w}_3=\tilde{w}_3\tilde{\sigma}_3$.
\vspace{1mm}

Now, we will determine the structure of the group $(F_4)^{\sigma_3} \cap (F_4)^{w_3}$. Note that the theorem below can be proved as in the proof of Theorem \ref{theorem 4.3.2}; nevertheless, we give the proof in as much detail as possible.

\begin{theorem}\label{theorem 4.4.1}
	The group $(F_4)^{\sigma_3} \cap (F_4)^{w_3}$ is isomorphic to the group $(SU(3)\times U(1) \times U(1))/\Z_3${\rm :} $(F_4)^{\sigma_3} \cap (F_4)^{w_3} \cong (SU(3)\times U(1) \times U(1))/\Z_3, \Z_3=\{(E,1,1), (\bm{\omega}E,\bm{\omega},\bm{\omega}),( \bm{\omega}^{-1}E,\allowbreak \bm{\omega}^{-1},\bm{\omega}^{-1})\}$.
\end{theorem}
\begin{proof}
	Let $S(U(1) \times U(1) \times U(1)) \subset SU(3)$.
	We define a mapping $\varphi_{{}_{F_4,\sigma_3, w_3}}: SU(3) \times S(U(1) \times U(1) \times U(1)) \to (F_4)^{\sigma_3} \cap (F_4)^{w_3}$ by
	\begin{align*}
	\varphi_{{}_{F_4,\sigma_3, w_3}}(P,L)(X_{\bm{C}}+M)=LX_{\bm{C}}L^*+PML^*,\,\,X_{\bm{C}}+M \in \mathfrak{J}(3,\C)\oplus M(3,\C)=\mathfrak{J}.
	\end{align*}
	Needless to say, this mapping is the restriction of the mapping $\varphi_{{}_{F_4,w_3}}$, that is, $\varphi_{{}_{F_4,\sigma_3, w_3}}(P,L)=\varphi_{{}_{F_4,w_3}}(P,L)$ (Theorem \ref{theorem 3.2.5}).
	
	As usual, we will prove that $\varphi_{{}_{F_4,\sigma_3, w_3}}$ is well-defined. 
It is clear that $\varphi_{{}_{F_4,\sigma_3, w_3}}(P,L) \in (F_4)^{w_3}$, and using $\sigma_3=\varphi_{{}_{F_4,w_3}}(E,\diag(1,\ov{\bm{\omega}},\bm{\omega}))$ (Lemma \ref{lemma 3.2.6} (2)), it follows that 
	\begin{align*}
	{\sigma_3}^{-1}\varphi_{{}_{F_4,\sigma_3, w_3}}(P,L)\sigma_3
	&=\varphi_{{}_{F_4,w_3}}(E,\diag(1,\ov{\bm{\omega}},\bm{\omega}))^{-1}\varphi_{{}_{F_4,\sigma_3, w_3}}(P,L)\varphi_{{}_{F_4,w_3}}(E,\diag(1,\ov{\bm{\omega}},\bm{\omega}))
	\\
	&=\varphi_{{}_{F_4,w_3}}(E,\diag(1,\bm{\omega},\ov{\bm{\omega}}))\varphi_{{}_{F_4,w_3}}(P,L)\varphi_{{}_{F_4,w_3}}(E,\diag(1,\ov{\bm{\omega}},\bm{\omega}))
	\\
	&=\varphi_{{}_{F_4,w_3}}(P,\diag(1,\bm{\omega},\ov{\bm{\omega}})L\diag(1,\ov{\bm{\omega}},\bm{\omega})),\,\,L=\diag(a,b,c)
	\\
	&=\varphi_{{}_{F_4,w_3}}(P,L)
	\\
	&=\varphi_{{}_{F_4,\sigma_3,w_3}}(P,L).
	\end{align*}
	Hence we have that $\varphi_{{}_{F_4,\sigma_3, w_3}}(P,L) \in (F_4)^{\sigma_3}$. Thus $\varphi_{{}_{F_4,\sigma_3, w_3}}$ is well-defined. Subsequently, since $\varphi_{{}_{F_4,\sigma_3,w_3}}$ is the restriction of the mapping $\varphi_{{}_{F_4,w_3}}$, we easily see that $\varphi_{{}_{F_4,\sigma_3,w_3}}$ is a homomorphism. 
	
	Next, we will prove that $\varphi_{{}_{F_4,\sigma_3,w_3}}$ is surjective. Let $\alpha \in (F_4)^{\sigma_3} \cap (F_4)^{w_3} \subset (F_4)^{w_3}$.
	Then there exist $P, A \in SU(3)$ such that $\alpha=\varphi_{{}_{F_4,w_3}}(P,A)$ (Theorem \ref{theorem 3.2.5}). Moreover, from the condition $\alpha \in (F_4)^{\sigma_3}$, that is, ${\sigma_3}^{-1}\varphi_{{}_{F_4,w_3}}(P,A)\sigma_3=\varphi_{{}_{F_4,w_3}}(P,A)$, and using ${\sigma_3}^{-1}\varphi_{{}_{F_4,w_3}}(P,A)\sigma_3\allowbreak=\varphi_{{}_{F_4,w_3}}(P,\diag(1,\bm{\omega},\ov{\bm{\omega}})A\diag(1,\ov{\bm{\omega}},\bm{\omega}))$ (Lemma \ref{lemma 3.2.6} (2)), we have that
	\begin{align*}
	&\,\,\,{\rm(i)}\,\left\{
	\begin{array}{l}
	P=P\\
	\diag(1,\bm{\omega},\ov{\bm{\omega}})A\,\diag(1,\ov{\bm{\omega}},\bm{\omega})=A, 
	\end{array} \right.
	\qquad
	{\rm(ii)}\,\left\{
	\begin{array}{l}
	P=\bm{\omega}P\\
	\diag(1,\bm{\omega},\ov{\bm{\omega}})A\,\diag(1,\ov{\bm{\omega}},\bm{\omega})=\bm{\omega}A, 
	\end{array} \right.
	\\[2mm]
	&{\rm(iii)}\,\left\{
	\begin{array}{l}
	P=\bm{\omega}^{-1}P\\
	\diag(1,\bm{\omega},\ov{\bm{\omega}})A\,\diag(1,\ov{\bm{\omega}},\bm{\omega})=\bm{\omega}^{-1}A.
	\end{array} \right.
	\end{align*}
	The Cases (ii) and (iii) are impossible because $P \not=0$. As for the Case (i), from the second condition, by doing straightforward computation we see that $A$ takes the form $\diag(a,b,c), a,b,c \in U(1), abc=1$, that is, $A \in S(U(1)\times U(1)\times U(1))$. Needless to say, $P \in SU(3)$. Hence there exist $P \in SU(3)$ and $A \in S(U(1)\times U(1) \times U(1))$ such that $\alpha=\varphi_{{}_{F_4,w_3}}(P,A)$. Namely, there exist $P \in SU(3)$ and $A \in S(U(1)\times U(1) \times U(1))$ such that $\alpha=\varphi_{{}_{F_4,\sigma_3,w_3}}(P,A)$. This completes the proof of surjectivity.
	
	Finally, we will determine $\Ker\,\varphi_{{}_{F_4,\sigma_3,w_3}}$. 
However, from $\\Ker\\,\\varphi_{{}_{F_4,w_3}}=\\{(E,E),(\\bm{\\omega}E,\\bm{\\omega}E), \\allowbreak (\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E)\\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{F_4,\\sigma_3,w_3}}=\\{(E,E),(\\bm{\\omega}E,\\bm{\\omega}E), (\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E)\\} \\cong \\Z_3$. Thus we have the isomorphism $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\cong (SU(3)\\times S(U(1) \\times U(1) \\times U(1)))\/\\Z_3$.\n\t\n\tHere, as in the proof of Theorem \\ref{theorem 4.3.2} we have the isomorphism $U(1) \\times U(1) \\cong S(U(1) \\times U(1) \\times U(1))$.\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\t(F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\cong (SU(3)\\times U(1) \\times U(1))\/\\Z_3,\n\t\\end{align*}\n\twhere $\\Z_3=\\{(E,1,1), (\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega}),( \\bm{\\omega}^{-1}E,\\allowbreak \\bm{\\omega}^{-1},\\bm{\\omega}^{-1}\\}$.\n\\end{proof}\n\\vspace{1mm}\n\nThus, since the group $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3}$ is connected from Theorem \\ref{theorem 4.4.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nF_4\/((SU(3)\\times U(1) \\times U(1))\/\\Z_3).\n\\end{align*} \n\n\\begin{assertion}\\label{assertion}\n\tOn Theorem \\ref{theorem 4.4.1} from a different view point. \n\\end{assertion}\n\n\tFirst, let $U(3) \\subset Sp(3)$. Then, we can embed $U(3)$ into $F_4$ using the mapping $\\varphi_{{}_{F_4,\\gamma_3}}$ as follows:\n\t\\begin{align*}\n\t\t\\varphi_{{}_{F_4,\\gamma_3}}(1,U)(M+\\a)=UMU^*+\\a U^*,\\,\\,M+\\a \\in \\mathfrak{J}(3,\\H) \\oplus \\H^3=\\mathfrak{J},\n\t\\end{align*}\n\tmore detail, since $w_3$ induces an automorphism of the group $(F_4)_{E_1, F_1(1),F_1(e_1)}$, it follows that $\\varphi_{{}_{F_4,\\gamma_3}}(1,U) \\in ((F_4)_{E_1, F_1(1),F_1(e_1)})^{w_3} \\cong (Spin(7))^{w_3}$ , where $Spin(7)$ is defined in Theorem \\ref{theorem 3.2.4}. Here, we denote $\\varphi_{{}_{F_4,\\gamma_3}}(1,U)$ by $\\varphi(U)$: $\\varphi(U)=\\varphi_{{}_{F_4,\\gamma_3}}(1,U)$, and we define a mapping $\\psi: U(1) \\times U(3) \\to (F_4)^{\\sigma_3} \\cap (F_4)^{w_3}$ by\n\t\\begin{align*}\n\t\t\\psi(a,U)=D_a\\varphi(U),\n\t\\end{align*}\n\twhere $D_a$ is defined in Subsection 3.2. 
Then the mapping $\\psi$ induces the isomorphism $(F_4)^{\\sigma_3} \\cap (F_4)^{w_3} \\cong (U(1)\\times U(3))\/\\Z_3$, where $\\Z_3=\\{(1,E), (\\bm{\\omega},\\bm{\\omega}^{-1}E), (\\bm{\\omega}^{-1}, \\bm{\\omega}E) \\}$.\n\n\\subsection{Case 5: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\}$-symmetric space}\n\nLet the $C$-linear transformations $\\gamma_3, \\sigma_3$ of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}.\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\gamma_3$ and $\\sigma_3$ are commutative, $\\tilde{\\gamma}_3$ and $\\tilde{\\sigma}_3$ are commutative in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{\\sigma}_3=\\tilde{\\sigma}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$, we prove proposition and lemma needed in the proof of theorem below.\n\\vspace{1mm}\n\nWe define a $C$-linear transformation $\\sigma'_3$ of $\\mathfrak{J}^C$ by\n\\begin{align*}\n\t\t\t\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)) \\in (E_6)^{\\gamma_3} \\subset E_6,\n\\end{align*}\nwhere $\\omega=-(1\/2)+(\\sqrt{3}\/2)i \\in C$.\n\nLet an element \n\\begin{align*}R:=\\scalebox{0.8}{$\\begin{pmatrix}\n\t1&&&&&\\\\\n\t&1&&&&\\\\\n\t&&&&1&\\\\\n\t&&&1&&\\\\\n\t&&-1&&&\\\\\n\t&&&&&1\n\t\\end{pmatrix}$} \\in SO(6) \\subset SU(6), \n\\end{align*}\nwhere the blanks are $0$, and we consider an element $\\varphi_{{}_{E_6,\\gamma_3}}(1,R) \\in (E_6)^{\\gamma_3} \\subset E_6$. Here, we denote this element by $\\delta_R$: $\\delta_R=\\varphi_{{}_{E_6,\\gamma_3}}(1,R)$.\nThen by doing straightforward computation, we have that $\\sigma_3\\delta_R=\\delta_R\\sigma'_3$, that is, $\\sigma_3$ is conjugate to $\\sigma'_3$ under $\\delta_R \\in (E_6)^{\\gamma_3} \\subset E_6$: $\\sigma_3 \\sim \\sigma'_3$. Moreover, $\\sigma'_3$ induces the automorphism $\\tilde{\\sigma'}_3$ of order $3$ on $E_6$: $\\tilde{\\sigma'}_3(\\alpha)={\\sigma'_3}^{-1}\\alpha\\sigma'_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.5.1\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$ is isomorphic to the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\cong (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{451}}: (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}$ by\n\t\\begin{align*}\n\t\t\tg_{{}_{451}}(\\alpha)={\\delta_R}^{-1}\\alpha\\delta_R.\t\n\t\\end{align*}\n\tIn order to prove this isomorphism, it is sufficient to show that $g_{{}_{452}}$ is well-defined. \n\t\n\t\\noindent First, we will show that $g_{{}_{451}} \\in (E_6)^{\\gamma_3}$. Since it follows from $\\delta_R=\\varphi_{{}_{E_6,\\gamma_3}}(1,R)$ and $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$ that $\\delta_R\\gamma_3=\\gamma_3\\delta_R$, we have that $g_{{}_{451}} \\in (E_6)^{\\gamma_3}$. Similarly, from $\\sigma_3\\delta_R=\\delta_R\\sigma'_3$ we have that $g_{{}_{451}} \\in (E_6)^{\\sigma'_3}$. Hence $g_{{}_{451}}$ is well-defined. With above, the proof of this proposition is completed.\t\n\\end{proof}\n\\vspace{1mm}\n\nSubsequently, we will prove the following lemma. 
\n\n\\begin{lemma}\\label{lemma 4.5.2\n\tThe group $S(U(2)\\times U(2)\\times U(2))$ is isomorphic to the group $(U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2)${\\rm :} $S(U(2)\\times U(2)\\times U(2)) \\cong (U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2), \\Z_2=\\!\\{(1,1,E,E,E), (1,-1,E,-E,E) \\}, \\Z_2=\\!\\{(1,1,E,E,E), (-1,1,-E,\\allowbreak E,E) \\}$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{452}}:U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2) \\to S(U(2)\\times U(2)\\times U(2))$ by\n\t\\begin{align*}\n\t\t\tf_{{}_{452}}(a,b,A,B,C)=\\left( \n\t\t\t\\begin{array}{ccc}\n\t\t\t a\\mbox{\\large {$A$}} & & {\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\t\t \\\\[2mm]\n\t\t\t & b\\mbox{\\large {$B$}} & \n\t\t\t \\\\[2mm]\n\t\t\t {\\raisebox{1pt}[0pt]{\\large $0$}}&& (ab)^{-2}\\mbox{\\large {$C$}}\n\t\t\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{452}}$ is well-defined and a homomorphism. \n\t\n\tWe will prove that $f_{{}_{452}}$ is surjective. Let $P \\in S(U(2)\\times U(2)\\times U(2))$. Then $P$ takes the form of $\\diag(P_1,P_2,P_3),P_j \\in U(2), (\\det\\,P_1)(\\det\\,P_2)(\\det\\,P_3)=1$. Here, since $P_1 \\in U(2)$, we see that $\\det\\,P_1 \\in U(1)$. We choose $a \\in U(1)$ such that $a^2=\\det\\,P_1$, and set $A=(1\/a)P_1$. Then we have that $ A \\in SU(2)$. Similarly, for $P_2 \\in U(2)$, there exist $b \\in U(1)$ and $B \\in SU(2)$ such that $P_2=bB, b^2=\\det\\,P_2$. From $(\\det\\,P_1)(\\det\\,P_2)(\\det\\,P_3)=1$, we have that $\\det\\,P_3=(ab)^{-2}$. Set $C=(ab)^2P_3$. Then we have that $C \\in SU(2)$. With above, the proof of surjective is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{452}}$. It follows from the kernel of definition that\n\t\\begin{align*}\n\t\t\t\\Ker\\,f_{{}_{452}}&=\\{(a,b,A,B,C)\\in U(1)^{\\times 2}\\times SU(2)^{\\times 3} \\,|\\,f_{{}_{453}}(a,b,A,B,C)=E \\}\n\t\t\t\\\\\n\t\t\n\t\t\n\t\t\n\t\t\t&=\\{(a,b,a^{-1}E,b^{-1}E,(ab)^2E)\\in U(1)^{\\times 2}\\times SU(2)^{\\times 3} \\,|\\,a^2=b^2=1 \\}\n\t\t\t\\\\\n\t\t\t&=\\{(1,1,E,E,E), (1,-1,E,-E,E),(-1,1,-E,E,E), (-1,-1,-E,-E,E) \\}\n\t\t\t\\\\\n\t\t\t&=\\{(1,1,E,E,E), (1,-1,E,-E,E) \\} \\times \\{(1,1,E,E,E), (-1,1,-E,E,E) \\}\n\t\t\t\\\\\n\t\t\t& \\cong \\Z_2 \\times \\Z_2.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\t\t\t\tS(U(2)\\times U(2)\\times U(2)) \\cong (U(1) \\times U(1)\\times SU(2)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2).\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$.\n\n\\begin{theorem}\\label{theorem 4.5.3\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$ is isomorphic the group $(U(1)\\times U(1) \\times U(1)\\allowbreak \\times SU(2) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\cong (U(1)\\times U(1) \\times U(1)\\times SU(2) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2), \\Z_2=\\{(1,1,1,E,E,E), (-1,1,1,-E,-E,E) \\},\\,\\Z_2=\\{(1,1,1,E,E,E), (-1,1,-1,-E,E,E) \\},\\Z_2\\!=\\!\\{(1,1,1,E,E,E), (-1,-1,1,-E,-E,E) \\},\\!\\Z_2\\allowbreak=\\{(1,1,1,E,E,E), (-1,-1,-1,E,E,E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(2)\\times U(2)\\times U(2)) \\subset SU(6)$. 
\n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}: U(1)\\times S(U(2)\\times U(2)\\times U(2)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n \\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C\\!\\!=\\!\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\t\n\tFirst, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P) \\in (E_6)^{\\gamma_3}$, and it follows from $\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$ that\n\t\\begin{align*}\n\t\t\t&\\quad {\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P)\\sigma'_3\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega))\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)P\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)),P=\\diag(P_1,P_2,P_3)\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(P_1,(\\tau\\omega E) P_2(\\omega E),(\\omega E) P_3(\\tau\\omega E)))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P) \\in (E_6)^{\\sigma'_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3} \\subset (E_6)^{\\gamma_3}$. There exist $s \\in U(1)$ and $A \\in SU(6)$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$ (Theorem \\ref{theorem 3.3.2}). Moreover, from the condition $\\alpha \\in (E_6)^{\\sigma'_3}$, that is, ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$, and using ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\sigma'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$ (Lemma \\ref{lemma 3.3.8} (1)), we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t \\begin{array}{l}\n\t s=s \\\\\n\t \\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=A \n\t \\end{array}\\right. \n\t \\\\\n\t&\\hspace*{45mm}{\\text{or}}\n\t \\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=-s \\\\\n\t\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=-A. 
\n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $s\\not=0$. As for the former case, from the second condition, by a straightforward computation $A$ takes the form $\\diag(A_1, A_2, A_3), A_j \\in U(2), (\\det\\,A_1)(\\det\\,A_2)(\\det\\,A_3)=1$, that is, $A \\in S(U(2)\\times U(2)\\times U(2))$.\n\tNeedless to say, $s \\in U(1)$. Hence there exist $s \\in U(1)$ and $P \\in S(U(2)\\times U(2) \\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(2)\\times U(2) \\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}(s,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\sigma'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma'_3} \\cong (U(1)\\times S(U(2)\\times U(2)\\times U(2)))\/\\Z_2$. Here, from Proposition \\ref{proposition 4.5.1} we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\cong (U(1)\\times S(U(2)\\times U(2)\\times U(2)))\/\\Z_2$. Moreover, by Lemma \\ref{lemma 4.5.2} we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3} \\!\\cong (U(1)\\times U(1) \\times U(1)\\times SU(2) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,E,E,E), (-1,1,1,-E,-E,E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,E,E,E), (-1,1,-1,-E,E,E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,E,E,E), (-1,-1,1,-E,-E,E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,E,E,E), (-1,-1,-1,E,E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\sigma_3}$ is connected by Theorem \\ref{theorem 4.5.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\n\t\t\t\t\tE_6\/((U(1)\\times U(1) \\times U(1)\\times SU(2) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_2\\times\\Z_2)).\n\\end{align*}\n\n\\subsection{Case 6: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\nu}_3, \\tilde{\\nu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, \\nu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}.\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), together with $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$, since we can easily confirm that $\\gamma_3$ and $\\nu_3$ commute, $\\tilde{\\gamma}_3$ and $\\tilde{\\nu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{\\nu}_3=\\tilde{\\nu}_3\\tilde{\\gamma}_3$. \n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$, we prove a lemma needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.6.1}\n\tThe group $S(U(1)\\times U(5))$ is isomorphic to the group $(U(1)\\times SU(5))\/\\Z_5${\\rm :} $S(U(1)\\times U(5))\\! \\cong\\! 
(U(1)\\times SU(5))\/\\Z_5, \\Z_5\\!=\\!\\{(\\varepsilon_k, {\\varepsilon_k}^{-1}E) | \\varepsilon_k\\!=\\!\\exp((2\\pi i\/5)k), k\\!=0,1,2,3,4\\}$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{461}}:U(1) \\times SU(5) \\to S(U(1)\\times U(5))$ by\n\t\\begin{align*}\n\tf_{{}_{461}}(t, T)=\\scalebox{0.7}{$\n\t\t\\left(\\begin{array}{cccccccc@{\\!}}\n\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large$t^{-5}$}}&&&&\n\t\t\\\\\n\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\\\\\n\t\t&&&&&&&\n\t\t\\\\\n\t\t&&&&\\multicolumn{2}{c}\n\t\t{\\raisebox{-15pt}[0pt][0pt]{\\Large $t$}\\,\\raisebox{-18pt}[0pt][0pt]{\\huge $T$}}&\n\t\t\\\\\n\t\t&\\multicolumn{2}{c}{\\raisebox{0pt}[0pt]{\\Large $0$}}&&&&\n\t\t\\\\[-2mm]\n\t\t&&&&&&&\n\t\t\\end{array}\\right)$}.\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{461}}$ is well-defined and a homomorphism. \n\t\n\tNow, we will prove that $f_{{}_{461}}$ is surjective. Let $P \\in S(U(1) \\times U(5))$. Then $P$ takes the form \n\t\\scalebox{0.6}\n\t{$\\left(\\begin{array}{cccccccc@{\\!}}\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large $s$}}&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\t\\\\\n\t\t\t&&&&&&&\n\t\t\t\\\\\n\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-18pt}[0pt][0pt]{\\huge $S$}}&\n\t\t\t\\\\\n\t\t\t&\\multicolumn{2}{c}{\\raisebox{0pt}[0pt]{\\Large $0$}}&&&&\n\t\t\t\\\\[-2mm]\n\t\t\t&&&&&&&\n\t\t\\end{array}\\right)$},\\,\\,$s \\in U(1), S \\in U(5), s(\\det S)=1$.\n\tHere, since $S \\in U(5)$, we see that $\\det\\,S \\in U(1)$, and so we choose $t \\in U(1)$ such that $t^5=\\det\\,S$. Set $T=t^{-1}S$; then we have that $T \\in SU(5)$ and $s=t^{-5}$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{461}}$. It follows from the definition of the kernel that \n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{461}}&=\\{(t,T) \\in U(1)\\times SU(5)\\,|\\, f_{{}_{461}}(t,T)=E \\}\n\t\\\\\n\t&=\\{(t,T) \\in U(1)\\times SU(5)\\,|\\,t^5=1, T=t^{-1}E \\}\n\t\\\\\n\t&=\\{(\\varepsilon_k, {\\varepsilon_k}^{-1}E) \\,|\\, \\varepsilon_k=\\exp((2\\pi i\/5)k), k=0,1,2,3,4\\}\n\t\\\\\n\t& \\cong \\Z_5.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\tS(U(1) \\times U(5)) \\cong (U(1)\\times SU(5))\/\\Z_5.\n\t\\end{align*}\n\\end{proof}\n \nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$.\n\n\\begin{theorem}\\label{theorem 4.6.2}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$ is isomorphic to the group $(U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\cong (U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5), \\Z_2=\\{(1,1,E), (-1,-1,\\allowbreak -E) \\}, \\Z_5=\\{(1,\\varepsilon_k, {\\varepsilon_k}^{-1}E) \\,|\\, \\varepsilon_k=\\exp ((2\\pi i\/5)k), k=0,1,2,3,4\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(5)) \\subset SU(6)$. 
Then we define a mapping $\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}: U(1)\\times S(U(1)\\times U(5)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\n\tFirst, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s, P) \\in (E_6)^{\\gamma_3}$, and using $\\nu_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}))$ (Lemma \\ref{lemma 3.3.8} (1)), it follows that \n\t\\begin{align*}\n\t\t\t&\\quad {\\nu_3}^{-1}\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P)\\nu_3\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1}))^{-1}\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1}))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^{-5}, \\nu,\\ldots,\\nu))\\varphi_{{}_{E_6, \\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1}))\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6, \\gamma_3}}(s,\\diag(\\nu^{-5}, \\nu,\\ldots,\\nu)P\\,\\diag(\\nu^5, \\nu^{-1},\\ldots,\\nu^{-1})),P=\\scalebox{0.6}{$\n\t\t\t\t\\left( \\begin{array}{cccccccc@{\\!}}\n\t\t\t\t&\\multicolumn{2}{c}{\\raisebox{-15pt}[0pt][0pt]{\\Large$t$}}&&&&\n\t\t\t\t\\\\\n\t\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-5pt}[0pt]{\\Large $0$}}&\n\t\t\t\t\\\\\n\t\t\t\t&&&&&&&\n\t\t\t\t\\\\\n\t\t\t\t&&&&\\multicolumn{2}{c}{\\raisebox{-18pt}[0pt][0pt]{\\huge $U$}}&\n\t\t\t\t\\\\\n\t\t\t\t&\\multicolumn{2}{c}{\\raisebox{0pt}[0pt]{\\Large $0$}}&&&&\n\t\t\t\t\\\\[-2mm]\n\t\t\t\t&&&&&&&\n\t\t\t\t\\end{array}\\right)$}\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6, \\gamma_3}}(s,P)\n\t\t\t\\\\\n\t\t\t&=\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}(s,P) \\in (E_6)^{\\nu_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\subset (E_6)^{\\nu_3}$. There exist $ q \\in Sp(1)$ and $P \\in S(U(1) \\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6, \\nu_3}}(q, P)$ (Theorem \\ref{theorem 3.3.5}). 
Moreover, from the condition $\\alpha \\in (E_6)^{\\gamma_3}$, that is, ${\\gamma_3}^{-1}\\varphi_{{}_{E_6, \\nu_3}}(q, P)\\gamma_3=\\varphi_{{}_{E_6, \\nu_3}}(q, P)$, and noting that $\\gamma_3=\\varphi_{{}_{E_6, \\nu_3}}(\\omega,E)(=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E))$ (Lemma \\ref{lemma 3.3.8} (1)), so that\n\t${\\gamma_3}^{-1}\\varphi_{{}_{E_6, \\nu_3}}(q, P)\\gamma_3=\\varphi_{{}_{E_6,\\nu_3}}(\\omega^{-1}q\\omega, P)$, we have that\n\t\\begin{align*}\n\t\t\t\t\\left\\{\n\t\t\t\t\\begin{array}{l}\n\t\t\t\t\\omega^{-1}q\\omega=q \\\\\n\t\t\t\tP=P\n\t\t\t\t\\end{array}\\right.\n\t\t\t\t\\quad {\\text{or}}\\quad\n\t\t\t\t\\left\\{\n\t\t\t\t\\begin{array}{l}\n\t\t\t\t\\omega^{-1}q\\omega=-q \\\\\n\t\t\t\tP=-P.\n\t\t\t\t\\end{array}\\right.\n\t\\end{align*}\n\tThe latter case is impossible because $P\\not=0$. As for the former case, from the first condition, we easily see that $q \\in U(1)$, and needless to say, $P \\in S(U(1)\\times U(5))$. Hence there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,\\nu_3}}(s,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6, \\gamma_3, \\nu_3}}=\\{(1,(1,E)),(-1,(-1,-E)) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\cong (U(1)\\times S(U(1)\\times U(5)))\/\\Z_2$.\n\n\tTherefore, by Lemma \\ref{lemma 4.6.1} we have the required isomorphism\n\t\\begin{align*}\n\t\t\t\t(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3} \\cong (U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5),\n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t\t\t&\\Z_2=\\{(1,1,E), (-1,-1,-E) \\},\n\t\t\t\\\\\n\t\t\t&\\Z_5=\\{(1,\\varepsilon_k, {\\varepsilon_k}^{-1}E) \\,|\\, \\varepsilon_k=\\exp ((2\\pi i\/5)k), k=0,1,2,3,4\\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\nu_3}$ is connected by Theorem \\ref{theorem 4.6.2}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5)).\n\\end{align*}
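\n\nAs a quick dimension count for this space (using only $\\dim E_6=78$ and $\\dim SU(5)=24$), we have\n\\begin{align*}\n\\dim E_6\/((U(1)\\times U(1)\\times SU(5))\/(\\Z_2\\times \\Z_5))=78-(1+1+24)=52.\n\\end{align*}\n\n\\subsection{Case 7: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, \\mu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}.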
\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\gamma_3$ and $\\mu_3$ commute, $\\tilde{\\gamma}_3$ and $\\tilde{\\mu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{\\mu}_3=\\tilde{\\mu}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$, we prove a proposition and a lemma needed in the proof of the theorem below.\n\\vspace{1mm}\n\nWe define a $C$-linear transformation $\\mu'_3$ of $\\mathfrak{J}^C$ by\n\\begin{align*}\n\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^{-2},\\nu^2,\\nu^{-1},\\nu^{-1},\\nu,\\nu))\\in (E_6)^{\\gamma_3} \\subset E_6,\n\\end{align*}\nwhere ${\\nu}=\\exp(2\\pi i\/9)\\in C$.\n\nLet\n\\begin{align*}Q:=\\scalebox{0.8}{$\\begin{pmatrix}\n\t1&&&&&\\\\\n\t&1&&&&\\\\\n\t&&1&&&\\\\\n\t&&&&1&\\\\\n\t&&&-1&&\\\\\n\t&&&&&1\n\t\\end{pmatrix}$} \\in SO(6) \\subset SU(6), \n\\end{align*}\nwhere the blanks are $0$, and consider the element $\\varphi_{{}_{E_6,\\gamma_3}}(1,Q) \\in (E_6)^{\\gamma_3} \\subset E_6$. We denote this element by $\\delta_Q$: $\\delta_Q=\\varphi_{{}_{E_6,\\gamma_3}}(1,Q)$.\nThen, by a straightforward computation, we have that $\\mu_3\\delta_Q=\\delta_Q\\mu'_3$, that is, $\\mu_3$ is conjugate to $\\mu'_3$ under $\\delta_Q \\in (E_6)^{\\gamma_3} \\subset E_6$: $\\mu_3 \\sim \\mu'_3$. Moreover, $\\mu'_3$ induces the automorphism $\\tilde{\\mu'}_3$ of order $3$ on $E_6$: $\\tilde{\\mu'}_3(\\alpha)={\\mu'_3}^{-1}\\alpha\\mu'_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.7.1}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3}${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\cong (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{471}}: (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3} \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ by\n\t\\begin{align*}\n\tg_{{}_{471}}(\\alpha)=\\delta_Q\\alpha{\\delta_Q}^{-1}.\t\n\t\\end{align*}\n\tIn order to prove this isomorphism, it is sufficient to show that $g_{{}_{471}}$ is well-defined. \n\t\n\t\\noindent First, we will show that $g_{{}_{471}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Since it follows from $\\delta_Q=\\varphi_{{}_{E_6,\\gamma_3}}(1,Q)$ and $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$ that $\\delta_Q\\gamma_3=\\gamma_3\\delta_Q$, we have that $g_{{}_{471}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Similarly, from $\\mu_3\\delta_Q=\\delta_Q\\mu'_3$ we have that $g_{{}_{471}}(\\alpha) \\in (E_6)^{\\mu_3}$. Hence $g_{{}_{471}}$ is well-defined. With the above, the proof of this proposition is completed.\t\n\\end{proof}\n\\vspace{1mm}
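\n\nFor instance, the relation $\\mu_3\\delta_Q=\\delta_Q\\mu'_3$ can be verified as follows. Assuming that $\\mu_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\nu^{-2},\\nu^2,\\nu^{-1},\\nu,\\nu^{-1},\\nu))$ (cf. Lemma \\ref{lemma 3.3.8} (1) and Lemma \\ref{lemma 4.12.1} below), and noting that conjugation by $Q$ merely interchanges the fourth and fifth diagonal entries, we find\n\\begin{align*}\nQ^{-1}\\diag(\\nu^{-2},\\nu^2,\\nu^{-1},\\nu,\\nu^{-1},\\nu)Q=\\diag(\\nu^{-2},\\nu^2,\\nu^{-1},\\nu^{-1},\\nu,\\nu),\n\\end{align*}\nwhich is exactly the diagonal matrix defining $\\mu'_3$, so that ${\\delta_Q}^{-1}\\mu_3\\delta_Q=\\mu'_3$.\n\\vspace{1mm}\n\nSubsequently, we will prove the following lemma.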
\n\n\\begin{lemma}\\label{lemma 4.7.2}\n\tThe group $S(U(1)\\times U(1)\\times U(2)\\times U(2))$ is isomorphic to the group $(U(1) \\times U(1)\\times U(1)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times \\Z_2)${\\rm :} $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\cong (U(1) \\times U(1)\\times U(1)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times \\Z_2), \\Z_2=\\{(1,1,1,E,E), (1,-1,1,E,-E) \\}, \\Z_2=\\{(1,1,1,E,E), (1,-1,-1,-E,E) \\}, \\Z_2=\\{ (1,1,1,E,E), (-1,1,1,E,-E)\\}$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{472}}:U(1)\\times U(1) \\times U(1)\\times SU(2)\\times SU(2) \\to S(U(1)\\times U(1)\\times U(2)\\times U(2))$ by\n\t\\begin{align*}\n\tf_{{}_{472}}(a,b,c,A,B)=\\left( \n\t\\begin{array}{cccc}\n\ta^{-2} && &{\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\\\\[2mm]\n\t& b^{-2} &&\n\t\\\\[2mm]\n\t&& c^{-1}\\mbox{\\large {$A$}}&\n\t\\\\[2mm]\n\t{\\raisebox{3pt}[0pt]{\\large $0$}}&&&(abc)\\mbox{\\large {$B$}}\n\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen, since $\\det\\,f_{{}_{472}}(a,b,c,A,B)=a^{-2}b^{-2}c^{-2}(abc)^2=1$, it is clear that $f_{{}_{472}}$ is well-defined and a homomorphism. \n\t\n\tNow, we will prove that $f_{{}_{472}}$ is surjective. Let $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. Then $P$ takes the form $\\diag(s,t,P_1,P_2),s,t \\in U(1),P_j \\in U(2), (st)(\\det\\,P_1)(\\det\\,P_2)=1$. Here, we first choose $a \\in U(1)$ such that $s=a^{-2}$, and similarly $b \\in U(1)$ such that $t=b^{-2}$. \n\tMoreover, since $P_1 \\in U(2)$, we see that $\\det\\,P_1 \\in U(1)$, and so we choose $c \\in U(1)$ such that $c^{-2}=\\det\\,P_1$. Set $A=cP_1$; then we have that $A \\in SU(2)$ and $c^{-1}A=P_1$. Similarly, for $P_2 \\in U(2)$, set $B=(abc)^{-1}P_2$. Since $(abc)^2=(st)^{-1}(\\det\\,P_1)^{-1}=\\det\\,P_2$, we have that $B \\in SU(2)$ and $(abc)B=P_2$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{472}}$. 
It follows from the definition of the kernel that\n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{472}}&=\\{(a,b,c,A,B)\\in U(1)^{\\times 3}\\times SU(2)^{\\times 2} \\,|\\,f_{{}_{472}}(a,b,c,A,B)=E \\}\n\t\\\\\n\t&=\\{(a,b,c,A,B)\\in U(1)^{\\times 3}\\times SU(2)^{\\times 2} \\,|\\,a^2=b^2=c^2=1,A=cE, B=(abc)^{-1}E \\}\n\t\\\\\n\t&=\\{(1,1,1,E,E), (1,1,-1,-E,-E),(1,-1,1,E,-E), (1,-1,-1,-E,E) \\}\n\t\\\\\n\t& \\quad \\cup \\{ (-1,1,1,E,-E), (-1,1,-1,-E,E),(-1,-1,1,E,E), (-1,-1,-1,-E,-E)\\}\n\t\\\\\n\t&=\\{(1,1,1,E,E), (1,-1,1,E,-E) \\}\\times \\{(1,1,1,E,E), (1,-1,-1,-E,E) \\}\n\t\\\\\n\t&\\quad \\times \\{(1,1,1,E,E), (-1,1,1,E,-E) \\}\n\t\\\\\n\t& \\cong \\Z_2 \\times \\Z_2 \\times\\Z_2.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\t&\\quad S(U(1)\\times U(1)\\times U(2)\\times U(2)) \n\t\\\\\n\t&\\cong (U(1)\\times U(1) \\times U(1)\\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times \\Z_2).\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$.\n\n\\begin{theorem}\\label{theorem 4.7.3}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(U(1)\\times U(1) \\times U(1)\\allowbreak \\times U(1) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\cong (U(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,\\allowbreak 1,-E,E), (-1,-i,-i,1,-E,E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\subset SU(6)$. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}: U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P) \\in (E_6)^{\\gamma_3}$, and it follows from $\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$ that\n\t\\begin{align*}\n\t&\\quad {\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P)\\mu'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))^{-1}\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}))\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1})P\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)),P=\\diag(a,b,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(\\nu^2 a \\nu^{-2}, {\\nu}^{-2} b \\nu^2, (\\nu E) P_1(\\nu^{-1}E), ({\\nu}^{-1}E) P_2 (\\nu E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P) \\in (E_6)^{\\mu'_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3} \\subset (E_6)^{\\gamma_3}$. There exist $s \\in U(1)$ and $A \\in SU(6)$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$ (Theorem \\ref{theorem 3.3.2}). Moreover, from the condition $\\alpha \\in (E_6)^{\\mu'_3}$, that is, ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$, and using ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)\\mu'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=s \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=-s \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $s\\not=0$. As for the former case, from the second condition, by a straightforward computation $A$ takes the form $\\diag(a, b, C, D), a,b \\in U(1),C, D \\in U(2), (ab)(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. Needless to say, $s \\in U(1)$.\n\tHence there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}(s,P)$. 
With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu'_3} \\cong (U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. In addition, from Proposition \\ref{proposition 4.7.1} we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\cong (U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. Here, using the mapping $f_{{}_{472}}$ in the proof of Lemma \\ref{lemma 4.7.2}, we define a homomorphism $h_{{}_{473}}:U(1)\\times (U(1)\\times U(1)\\times U(1)\\times SU(2)\\times SU(2)) \\to U(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2))$ by\n\t\\begin{align*}\n\t\t\th_{{}_{473}}(s,(a,b,c,A,B))=(s,f_{{}_{472}}(a,b,c,A,B)).\n\t\\end{align*}\n\tThen, the elements $(s,(a,b,c,A,B))$ corresponding to the elements $(1,E), (-1,-E) \\in \\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,\\mu'_3}}$ under the mapping $h_{{}_{473}}$ are as follows.\n\t\\begin{align*}\n\t&(1,(1,1,1,E,E)),(1,(1,1,-1,-E,-E)),(1,(1,-1,1,E,-E)),(1,(1,-1,-1,-E,E)),\n\t\\\\\n\t&(1,(-1,1,1,E,-E)),(1,(-1,1,-1,-E,E)),(1,(-1,-1,1,E,E)),(1,(-1,-1,-1,-E,-E)),\n\t\\\\\n\t&(-1,(i,i,1,-E,E)),(-1,(i,i,-1,E,-E)),(-1,(i,-i,1,-E,-E)),(-1,(i,-i,-1,E,E)),\n\t\\\\\n\t&(-1,(-i,i,1,-E,\\!-E)),(-1,(-i,i,-1,E,\\!E)),(-1,(-i,-i,1,-E,E)),(-1,(-i,-i,-1,E,-E)).\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3} \\!\\cong (U(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\\\\n\t&\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,1,-E,E), (-1,-i,-i,1,-E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\gamma_3} \\cap (E_6)^{\\mu_3}$ is connected by Theorem \\ref{theorem 4.7.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)).\n\\end{align*}\n\n\\subsection{Case 8: $\\{1, \\tilde{\\gamma}_3, \\tilde{\\gamma}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\gamma_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. 
\n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\gamma_3$ and $w_3$ commute, $\\tilde{\\gamma}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\gamma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\gamma}_3$.\n\\vspace{1mm}\n\nBefore determining the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$, we prove a proposition and a lemma needed in the proof of the theorem below.\n\\vspace{1mm}\n\nWe define a $C$-linear transformation $w'_3$ of $\\mathfrak{J}^C$ by\n\\begin{align*}\n\t\tw'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)) \\in (E_6)^{\\gamma_3} \\subset E_6.\n\\end{align*}\n\nLet\n\\begin{align*}N:=\\scalebox{0.8}{$\\begin{pmatrix}\n\t1&&&&&\\\\\n\t&&&&1&\\\\\n\t&&1&&&\\\\\n\t&&&1&&\\\\\n\t&-1&&&&\\\\\n\t&&&&&1\n\t\\end{pmatrix}$} \\in SO(6) \\subset SU(6), \n\\end{align*}\nwhere the blanks are $0$, and consider the element $\\varphi_{{}_{E_6,\\gamma_3}}(1,N) \\in (E_6)^{\\gamma_3} \\subset E_6$. We denote this element by $\\delta_N$: $\\delta_N=\\varphi_{{}_{E_6,\\gamma_3}}(1,N)$.\nThen, by a straightforward computation, we have that $w_3\\delta_N=\\delta_N w'_3$, that is, $w_3$ is conjugate to $w'_3$ under $\\delta_N \\in (E_6)^{\\gamma_3} \\subset E_6$: $w_3 \\sim w'_3$. Moreover, $w'_3$ induces the automorphism $\\tilde{w'}_3$ of order $3$ on $E_6$: $\\tilde{w'}_3(\\alpha)={w'_3}^{-1}\\alpha w'_3, \\alpha \\in E_6$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.8.1}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(E_6)^{\\gamma_3} \\cap (E_6)^{w'_3}${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{481}}: (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3} \\to (E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\tg_{{}_{481}}(\\alpha)=\\delta_N\\alpha{\\delta_N}^{-1}.\t\n\t\\end{align*}\n\tIn order to prove this isomorphism, it is sufficient to show that $g_{{}_{481}}$ is well-defined. \n\t\n\t\\noindent First, we will show that $g_{{}_{481}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Since it follows from $\\delta_N=\\varphi_{{}_{E_6,\\gamma_3}}(1,N)$ and $\\gamma_3=\\varphi_{{}_{E_6,\\gamma_3}}(\\omega,E)$ that $\\delta_N\\gamma_3=\\gamma_3\\delta_N$, we have that $g_{{}_{481}}(\\alpha) \\in (E_6)^{\\gamma_3}$. Similarly, from $w_3\\delta_N=\\delta_N w'_3$ we have that $g_{{}_{481}}(\\alpha) \\in (E_6)^{w_3}$. Hence $g_{{}_{481}}$ is well-defined. With the above, the proof of this proposition is completed.\t\n\\end{proof}\n\nSubsequently, we will prove the following lemma. \n\n\\begin{lemma}\\label{lemma 4.8.2}\n\tThe group $S(U(3)\\times U(3))$ is isomorphic to the group $(U(1) \\times SU(3)\\times SU(3))\/\\Z_3${\\rm :} $S(U(3)\\times U(3)) \\cong (U(1) \\times SU(3)\\times SU(3))\/\\Z_3, \\Z_3=\\{(1,E,E), (\\omega,{\\omega}^{-1} E,\\omega E),\\allowbreak ({\\omega}^{-1},\\omega E,{\\omega}^{-1} E)\\}$, where $\\omega=(-1\/2)+(\\sqrt{3}\/2)i \\in C$.\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{482}}:U(1)\\times SU(3)\\times SU(3) \\to S(U(3)\\times U(3))$ by\n\t\\begin{align*}\n\tf_{{}_{482}}(a,A,B)=\\left( \n\t\\begin{array}{cc}\n\taA &{\\raisebox{-5pt}[0pt]{\\large $0$}}\n\t\\\\[4mm]\n\t{\\raisebox{1pt}[0pt]{\\large $0$}}& a^{-1}B\n\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{482}}$ is well-defined and a homomorphism. 
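Indeed, since $A, B \\in SU(3)$, a one-line determinant count confirms this:\n\t\\begin{align*}\n\t\\det\\,f_{{}_{482}}(a,A,B)=a^{3}(\\det\\,A)\\,a^{-3}(\\det\\,B)=1,\n\t\\end{align*}\n\tso that $f_{{}_{482}}(a,A,B)$ indeed lies in $S(U(3)\\times U(3)) \\subset SU(6)$.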
\n\t\n\tNow, we will prove that $f_{{}_{482}}$ is surjective. Let $P \\in S(U(3)\\times U(3))$. Then $P$ takes the form $\\diag(P_1,P_2),P_j \\in U(3), (\\det\\,P_1)(\\det\\,P_2)=1$. Here, since $P_1 \\in U(3)$, we see that $\\det\\,P_1 \\in U(1)$, and so we choose $a \\in U(1)$ such that $a^3=\\det\\,P_1$. Set $A=a^{-1}P_1$; then we have that $A \\in SU(3)$. Similarly, for $P_2 \\in U(3)$, set $B=aP_2$; then we have that $B \\in SU(3)$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{482}}$. It follows from the definition of the kernel that\n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{482}}&=\\{(a,A,B)\\in U(1)\\times SU(3)\\times SU(3) \\,|\\,f_{{}_{482}}(a,A,B)=E \\}\n\t\\\\\n\t&=\\{(a,A,B)\\in U(1)\\times SU(3)\\times SU(3) \\,|\\,a^3=1,A=a^{-1}E, B=aE \\}\n\t\\\\\n\t&=\\{(1,E,E), (\\omega,{\\omega}^{-1}E,\\omega E),({\\omega}^{-1},\\omega E,{\\omega}^{-1}E) \\}\n\t\\\\\n\t& \\cong \\Z_3.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\tS(U(3)\\times U(3)) \\cong (U(1) \\times SU(3)\\times SU(3))\/\\Z_3.\n\t\\end{align*}\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.8.3}\n\tThe group $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3)${\\rm :} $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3), \\Z_2=\\{(1,1,E,E), (-1,-1,E,E)\\},\n\t\\Z_3=\\{(1,1,E,E), (1,\\omega,{\\omega}^{-1}E,\\omega E),(1,{\\omega}^{-1},\\omega E,{\\omega}^{-1}E)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(3)\\times U(3)) \\subset SU(6)$. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}: U(1)\\times S(U(3)\\times U(3)) \\to (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+s\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, that is, $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s, P)=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$ (Theorem \\ref{theorem 3.3.2}). \n\t\n\tFirst, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P) \\in (E_6)^{\\gamma_3}$, and it follows from $w'_3=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$ that\n\t\\begin{align*}\n\t&\\quad {w'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P) w'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))^{-1}\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega))\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\\varphi_{{}_{E_6,\\gamma_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)P\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)),P=\\diag(P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag((\\omega E)P_1(\\tau\\omega E), (\\tau\\omega E) P_2 (\\omega E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P) \\in (E_6)^{w'_3}$. Thus $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\gamma_3}}$, we easily see that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\gamma_3} \\cap (E_6)^{w'_3} \\subset (E_6)^{\\gamma_3}$. There exist $s \\in U(1)$ and $A \\in SU(6)$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$ (Theorem \\ref{theorem 3.3.2}). Moreover, from the condition $\\alpha \\in (E_6)^{w'_3}$, that is, ${w'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)w'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,A)$, and using ${w'_3}^{-1}\\varphi_{{}_{E_6,\\gamma_3}}(s,A)w'_3=\\varphi_{{}_{E_6,\\gamma_3}}(s,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=s \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\ts=-s \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $s\\not=0$. As for the former case, from the second condition, by a straightforward computation $A$ takes the form $\\diag(C, D), C, D \\in U(3), (\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(3)\\times U(3))$. Needless to say, $s \\in U(1)$.\n\tHence there exist $s \\in U(1)$ and $P \\in S(U(3)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3}}(s,P)$. Namely, there exist $s \\in U(1)$ and $P \\in S(U(3)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\gamma_3,w'_3}}(s,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$. 
However, from $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,w'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{w'_3} \\cong (U(1)\\times S(U(3)\\times U(3)))\/\\Z_2$. In addition, from Proposition \\ref{proposition 4.8.1} we have the isomorphism $(E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (U(1)\\times S(U(3)\\times U(3)))\/\\Z_2$. Here, using the mapping $f_{{}_{482}}$ in the proof of Lemma \\ref{lemma 4.8.2}, we define a homomorphism $h_{{}_{483}}:U(1)\\times (U(1)\\times SU(3)\\times SU(3)) \\to U(1)\\times S(U(3)\\times U(3))$ by\n\t\\begin{align*}\n\th_{{}_{483}}(s,(a,A,B))=(s,f_{{}_{482}}(a,A,B)).\n\t\\end{align*}\n\tThen, the elements $(s,(a,A,B))$ corresponding to the elements $(1,E), (-1,-E) \\in \\Ker\\,\\varphi_{{}_{E_6,\\gamma_3,w'_3}}$ under the mapping $h_{{}_{483}}$ are as follows.\n\t\\begin{align*}\n\t& (1,(1,E,E)), (1,(\\omega,{\\omega}^{-1}E,\\omega E)),(1,({\\omega}^{-1},\\omega E,{\\omega}^{-1}E)), \n\t\\\\\n\t& (-1,(-1,E,E)), (-1,(-\\omega,{\\omega}^{-1}E,\\omega E)),(-1,(-{\\omega}^{-1},\\omega E,{\\omega}^{-1}E)).\n\t\\end{align*}\n\tTherefore we have the required isomorphism\n\t\\begin{align*}\n\t (E_6)^{\\gamma_3} \\cap (E_6)^{w_3} \\cong (U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3), \n\t \\end{align*}\n\t where\n\t \\begin{align*}\n\t &\\Z_2=\\{(1,1,E,E), (-1,-1,E,E)\\},\n\t \\\\\n\t &\\Z_3=\\{(1,1,E,E), (1,\\omega,{\\omega}^{-1}E,\\omega E),(1,{\\omega}^{-1},\\omega E,{\\omega}^{-1}E)\\}.\n\t \\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3} \\cap (E_6)^{w_3}$ is connected by Theorem \\ref{theorem 4.8.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1) \\times U(1) \\times SU(3)\\times SU(3))\/(\\Z_2 \\times \\Z_3)).\n\\end{align*}\n\n\\subsection{Case 9: $\\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\nu}_3, \\tilde{\\nu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\sigma_3, \\nu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\sigma_3$ and $\\nu_3$ commute, $\\tilde{\\sigma}_3$ and $\\tilde{\\nu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\sigma}_3\\tilde{\\nu}_3=\\tilde{\\nu}_3\\tilde{\\sigma}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$, we confirm that a useful lemma holds and prove a proposition needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.9.1}\n\tThe mapping $\\varphi_{{}_{E_6,\\nu_3}}:Sp(1) \\times S(U(1)\\times U(5)) \\to (E_6)^{\\nu_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.5}} satisfies the relational formulas \n\t\\begin{align*}\n\t\\sigma_3&=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(1,1,\\tau\\omega,\\omega,\\omega,\\tau\\omega)),\n\t\\\\\n\t\\nu_3&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1})),\n\t\\end{align*}\n\twhere ${\\omega}=-(1\/2)+(\\sqrt{3}\/2)i \\in U(1)$.\n\\end{lemma}\n\\begin{proof}\n\tFrom Lemma \\ref{lemma 3.3.8} (1), these results are trivial. 
\n\\end{proof}\n\nThe $C$-linear transformation $\\sigma'_3$ defined in Case 5 is expressed as\n\\begin{align*}\n\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)),\n\\end{align*}\nand note that $\\delta_R=\\varphi_{{}_{E_6, \\nu_3}}(1,R)(=\\varphi_{{}_{E_6, \\gamma_3}}(1,R))$, where $\\delta_R$ is also as defined in Case 5; moreover, needless to say, $\\sigma_3$ is conjugate to $\\sigma'_3$ under $\\delta_R=\\varphi_{{}_{E_6, \\nu_3}}(1,R)$.\n\n\\begin{proposition}\\label{proposition 4.9.2}\n\tThe group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$ is isomorphic to the group $(E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}${\\rm :} $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\cong (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{492}}: (E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\to (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}$ by\n\t\\begin{align*}\n\tg_{{}_{492}}(\\alpha)={\\delta_R}^{-1}\\alpha\\delta_R,\n\t\\end{align*}\n\twhere $\\delta_R$ is the same one as above. Since it is easy to verify that $\\delta_R\\nu_3=\\nu_3\\delta_R$ using $\\nu_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}))$ (Lemma \\ref{lemma 4.9.1}), we can prove this proposition as in the proof of Proposition \\ref{proposition 4.5.1}.\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$.\n\n\\begin{theorem}\\label{theorem 4.9.3}\n\tThe group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$ is isomorphic to the group $(Sp(1)\\times U(1) \\times U(1)\\allowbreak \\times U(1) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)${\\rm :} $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,\\allowbreak 1,-E,E), (-1,-i,-i,1,-E,E) \\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\subset S(U(1)\\times U(5))$ as in the proof of Theorem \\ref{theorem 4.7.3}. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}: Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\to (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+q\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, that is, $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q, P)=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$ (Theorem \\ref{theorem 3.3.5}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P) \\in (E_6)^{\\nu_3}$, and it follows from $\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$ that\n\t\\begin{align*}\n\t&\\quad {\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P)\\sigma'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))^{-1}\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega))\\varphi_{{}_{E_6,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)P\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)),P=\\diag(a,b,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(a, b, (\\tau\\omega E)P_1(\\omega E), (\\omega E) P_2 (\\tau\\omega E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P) \\in (E_6)^{\\sigma'_3}$. Thus $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, we easily see that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3} \\subset (E_6)^{\\nu_3}$. There exist $q \\in Sp(1)$ and $A \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$ (Theorem \\ref{theorem 3.3.5}). Moreover, from the condition $\\alpha \\in (E_6)^{\\sigma'_3}$, that is, ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$, and using ${\\sigma'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\sigma'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=q \\\\\n\t\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=-q \\\\\n\t\\diag(1,1,\\tau\\omega,\\tau\\omega,\\omega,\\omega)A\\,\\diag(1,1,\\omega,\\omega,\\tau\\omega,\\tau\\omega)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $q\\not=0$. As for the former case, from the second condition, by a straightforward computation $A$ takes the form $\\diag(a, b, C, D), a,b \\in U(1),C, D \\in U(2), (ab)(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. Needless to say, $q \\in Sp(1)$.\n\tHence there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$. Namely, there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}(q,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}$. 
However, from $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\sigma'_3,\\nu_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\sigma'_3} \\cap (E_6)^{\\nu_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. In addition, from Proposition \\ref{proposition 4.9.2} we have the isomorphism $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. \n\tTherefore, as in the proof of Theorem \\ref{theorem 4.7.3}, we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3} \\!\\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\\\\n\t&\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,1,-E,E), (-1,-i,-i,1,-E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\nu_3}$ is connected by Theorem \\ref{theorem 4.9.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((Sp(1)\\times U(1) \\times U(1)\\times U(1) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)).\n\\end{align*}\n\n\\subsection{Case 10: $\\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\} \\times \\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\sigma_3, \\mu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\sigma_3$ and $\\mu_3$ commute, $\\tilde{\\sigma}_3$ and $\\tilde{\\mu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\sigma}_3\\tilde{\\mu}_3=\\tilde{\\mu}_3\\tilde{\\sigma}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$, we prove a proposition needed in the proof of the theorem below.\n\n\\begin{proposition}\\label{proposition 4.10.1}\n\tThe group $(E_6)^{\\sigma_3}$ is a subgroup of the group $(E_6)^\\sigma${\\rm: } $(E_6)^{\\sigma_3} \\subset (E_6)^\\sigma$.\n\\end{proposition}\n\\begin{proof}\n\tLet $\\alpha \\in (E_6)^{\\sigma_3}$. Then, from Theorem \\ref{theorem 3.3.4}, there exist $\\theta \\in U(1), D_a \\in Spin(2)$ and $\\beta \\in Spin(8)$ such that $\\alpha=\\phi_{{}_{6,\\sigma}}(\\theta) D_a \\beta$. 
Here, note that $(E_6)_{E_1} \\subset (E_6)^\\sigma$ (\\cite[Theorem 3.10.2]{iy0}), and so, since $Spin(8)$ is realized as the group $(E_6)_{E_1,F_1(1),F_1(e_1)} \\subset (E_6)_{E_1} \\subset (E_6)^\\sigma$, it follows that\n\t\\begin{align*}\n\t\t\t\\sigma\\alpha=\\sigma(\\phi_{{}_{6,\\sigma}}(\\theta) D_a \\beta)=\\phi_{{}_{6,\\sigma}}(\\theta)\\sigma D_a\\beta=\\phi_{{}_{6,\\sigma}}(\\theta)D_a \\sigma\\beta=(\\phi_{{}_{6,\\sigma}}(\\theta) D_a \\beta)\\sigma=\\alpha\\sigma.\n\t\\end{align*}\n\tHence we have that $\\alpha \\in (E_6)^\\sigma$, that is, $(E_6)^{\\sigma_3} \\subset (E_6)^\\sigma$.\n\\end{proof}\n\nNow, we will determine the structure of the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$.\n\n\\begin{theorem}\\label{theorem 4.10.2}\n\tThe group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ coincides with the group $(E_6)^{\\sigma_3}$, that is, the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(U(1)\\times Spin(2)\\times Spin(8))\/(\\Z_2 \\times \\Z_4),\\\\ \\Z_2=\\!\\{(1,1,1),(1,\\sigma,\\sigma) \\}, \\Z_4\\!=\\!\\{(1,1,1),(i,D_{e_1},\\phi_{{}_{6,\\sigma}}(-i)D_{-e_1}),(-1,\\allowbreak \\sigma,-1),(-i,D_{-e_1}, \\phi_{{}_{6,\\sigma}}(i) \\allowbreak D_{e_1}) \\}$. \n\\end{theorem}\n\\begin{proof}\n\tFrom Proposition \\ref{proposition 3.3.3} and Theorem \\ref{theorem 3.3.6}, we have that the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ coincides with the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\sigma}$: $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}=(E_6)^{\\sigma_3} \\cap (E_6)^{\\sigma}$. In addition, from Proposition \\ref{proposition 4.10.1} above, we have that \n\t\\begin{align*}\n\t (E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}=(E_6)^{\\sigma_3} \\cap (E_6)^{\\sigma}=(E_6)^{\\sigma_3}.\n\t\\end{align*}\n\tTherefore, by Theorem \\ref{theorem 3.3.4}, we have the required isomorphism \n\t\\begin{align*}\n\t \t\t (E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3} \\cong (U(1)\\times Spin(2)\\times Spin(8))\/(\\Z_2\\times \\Z_4).\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3} \\cap (E_6)^{\\mu_3}$ is connected by Theorem \\ref{theorem 4.10.2}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((U(1)\\times Spin(2)\\times Spin(8))\/(\\Z_2\\times \\Z_4)).\n\\end{align*}\n\n\\subsection{Case 11: $\\{1, \\tilde{\\sigma}_3, \\tilde{\\sigma}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\sigma_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (2), since we can easily confirm that $\\sigma_3$ and $w_3$ commute, $\\tilde{\\sigma}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\sigma}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\sigma}_3$.\n\nNow, we will determine the structure of the group $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.11.1}\n\tThe group $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3}$ is isomorphic to the group $(SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3${\\rm :} $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3, \\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(1)) \\subset SU(3)$. 
We define a mapping $\\varphi_{{}_{E_6,\\sigma_3,w_3}}: SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)) \\to (E_6)^{\\sigma_3}\\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\t\t\t\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q)(X_{C}+M)&=h(P,Q)X_{C}h(P,Q)^*+LM\\tau h(P,Q)^*, \n\t\t\t\\\\\n\t\t\t&\\hspace*{20mm} X_{C}+M \\in \\mathfrak{J}(3, \\C)^C \\oplus \n\t\t\tM(3,\\C)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, that is, $\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,\\allowbreak Q)=\\varphi_{{}_{E_6,w_3}}(L,P,Q)$ (Theorem \\ref{theorem 3.3.7}). \n\t\n\tWe will prove that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q) \\in (E_6)^{w_3}$, and it follows from $\\sigma_3=\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))$ (Lemma \\ref{lemma 3.3.8} (2)) that \n\t\\begin{align*}\n\t&\\quad {\\sigma_3}^{-1}\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q)\\sigma_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))^{-1}\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}), \\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}}))\\varphi_{{}_{E_6,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}), \\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})P\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})Q\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})),\n\t\\\\\n\t&\\hspace*{85mm}P=\\diag(a,b,c), Q=\\diag(s,t,v)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,P,Q)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,P,Q) \\in (E_6)^{\\sigma_3}$. Thus $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, we easily see that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\sigma_3,w_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\subset (E_6)^{w_3}$. There exist $L, A, B \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,A,B)$ (Theorem \\ref{theorem 3.3.7}). 
Moreover, from the condition $\\alpha \\in (E_6)^{\\sigma_3}$, that is, ${\\sigma_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\sigma_3=\\varphi_{{}_{E_6,w_3}}(L,A,B)$, and using \n\t\\begin{align*}\n\t\t\t\t&\\quad {\\sigma_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\sigma_3\n\t\t\t\t\\\\\n\t\t\t\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega}),\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega}))\n\t\\end{align*}\n\t(Lemma \\ref{lemma 3.3.8} (2)), we have that \n\t\\begin{align*}\n\t&\\,\\,\\,{\\rm(i)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=L\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=A \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega})=B,\n\t\\end{array} \\right.\n\t\\qquad\n\t{\\rm(ii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}L\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=\\bm{\\omega}A \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega})=\\bm{\\omega}B,\n\t\\end{array} \\right.\n\t\\\\[2mm]\n\t&{\\rm(iii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}^{-1}L\\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})A\\diag(1,\\ov{\\bm{\\omega}},\\bm{\\omega})=\\bm{\\omega}^{-1}A \\\\\n\t\\diag(1,\\bm{\\omega},\\ov{\\bm{\\omega}})B\\diag(1,\\ov{\\bm{\\omega}}, \\bm{\\omega})=\\bm{\\omega}^{-1}B.\n\t\\end{array} \\right.\n\t\\end{align*}\n\tCases (ii) and (iii) are impossible because $L\\not=0$. As for Case (i), from the second and third conditions, it is easy to see that $A,B \\in S(U(1)\\times U(1) \\times U(1))$. Needless to say, $L \\in SU(3)$. \n\tHence there exist $L \\in SU(3)$ and $A,B \\in S(U(1)\\times U(1)\\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,A,B)$. Namely, there exist $L \\in SU(3)$ and $A,B \\in S(U(1)\\times U(1)\\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,\\sigma_3,w_3}}(L,A,B)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\sigma_3,w_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\sigma_3,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\} \\cong \\Z_3$.\n\tThus we have the isomorphism $(E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)))\/\\Z_3$. 
\n\t\n\tTherefore, by Lemma \\ref{lemma 4.3.1}, we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\sigma_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3,\n\t\\end{align*}\n\twhere $\\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{proof}\n\nThus, since the group $(E_6)^{\\sigma_3} \\cap (E_6)^{w_3}$ is connected from Theorem \\ref{theorem 4.11.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3).\n\\end{align*}\n\n\\subsection{Case 12: $\\{1, \\tilde{\\nu}_3, \\tilde{\\nu}_3{}^{-1}\\} \\times \\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\nu_3, \\mu_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\nu_3$ and $\\mu_3$ commute, $\\tilde{\\nu}_3$ and $\\tilde{\\mu}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\nu}_3\\tilde{\\mu}_3=\\tilde{\\mu}_3\\tilde{\\nu}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$, we confirm that a useful lemma holds and prove a proposition needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.12.1}\n\tThe mapping $\\varphi_{{}_{E_6,\\nu_3}}:Sp(1) \\times S(U(1)\\times U(5)) \\to (E_6)^{\\nu_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.5}} satisfies the relational formulas \n\t\\begin{align*}\n\t\t\t\\nu_3&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1})),\n\t\t\t\\\\\n\t\t\t\\mu_3&=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\nu^{-2},\\nu^{2},\\nu^{-1},\\nu,\\nu^{-1},\\nu)),\n\t\\end{align*}\n\t where $\\nu=\\exp(2\\pi i\/9)\\in U(1)$.\n\\end{lemma}\n\\begin{proof}\n\t From Lemma \\ref{lemma 3.3.8} (1), these results are trivial. 
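Indeed, as a consistency check, both diagonal matrices belong to $S(U(1)\\times U(5))$, since, with $\\nu=\\exp(2\\pi i\/9)$,\n\t\\begin{align*}\n\t\\det\\,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1})&=\\nu^{5}\\nu^{-5}=1,\n\t\\\\\n\t\\det\\,\\diag(\\nu^{-2},\\nu^{2},\\nu^{-1},\\nu,\\nu^{-1},\\nu)&=\\nu^{-2+2-1+1-1+1}=\\nu^{0}=1.\n\t\\end{align*}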
\n\\end{proof}\n\nIt goes without saying that $\\delta_Q=\\varphi_{{}_{E_6, \\nu_3}}(1,Q)(=\\varphi_{{}_{E_6, \\gamma_3}}(1,Q))$, where $\\delta_Q$ is defined in Case 7, and so from Lemma \\ref{lemma 3.3.8} (1), the $C$-linear transformation $\\mu'_3$, which is conjugate to $\\mu_3$ under $\\delta_Q \\in (E_6)^{\\nu_3}$, is also expressed by\n\\begin{align*}\n\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\nu^{-2},\\nu^{2},\\nu^{-1},\\nu^{-1},\\nu,\\nu)).\n\\end{align*}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.12.2}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3}${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\cong (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{4122}}: (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3} \\to (E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ by\n\t\\begin{align*}\n\tg_{{}_{4122}}(\\alpha)=\\delta_Q\\alpha{\\delta_Q}^{-1}.\t\n\t\\end{align*}\n\tSince it is easy to verify that $\\delta_Q\\nu_3=\\nu_3\\delta_Q$ using $\\nu_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}, \\allowbreak \\nu^{-1}))$ (Lemma \\ref{lemma 4.12.1}), we can prove this proposition as in the proof of Proposition \\ref{proposition 4.7.1}.\n\\end{proof}\n\\vspace{1mm}\n\nNow, we will determine the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$.\n\n\\begin{theorem}\\label{theorem 4.12.3}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ is isomorphic to the group $(Sp(1)\\times U(1) \\times U(1)\\allowbreak \\times U(1) \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,\\allowbreak 1,-E,E), (-1,-i,-i,1,-E,E) \\}$. \n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\subset S(U(1)\\times U(5))$. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}: Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)) \\to (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+q\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, that is, $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q, P)=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$ (Theorem \\ref{theorem 3.3.5}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P) \\in (E_6)^{\\nu_3}$, and it follows from $\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$ that\n\t\\begin{align*}\n\t&\\quad {\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P)\\mu'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))^{-1}\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}))\\varphi_{{}_{E_6,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1})P\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)),P=\\diag(a,b,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\nu^2 a \\nu^{-2}, {\\nu}^{-2} b \\nu^2, (\\nu E) P_1(\\nu^{-1}E), ({\\nu}^{-1}E) P_2 (\\nu E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P) \\in (E_6)^{\\mu'_3}$. Thus $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, we easily see that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3} \\subset (E_6)^{\\nu_3}$. There exist $q \\in Sp(1)$ and $A \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$ (Theorem \\ref{theorem 3.3.5}). Moreover, from the condition $\\alpha \\in (E_6)^{\\mu'_3}$, that is, ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$, and using ${\\mu'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)\\mu'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=q \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A\\, \\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=-q \\\\\n\t\\diag({\\nu}^2,\\nu^{-2},\\nu,\\nu,{\\nu}^{-1},{\\nu}^{-1}) A \\,\\diag({\\nu}^{-2},\\nu^2,{\\nu}^{-1},{\\nu}^{-1},\\nu,\\nu)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $q\\not=0$. As for the former case, from the second condition, a straightforward computation shows that $A$ takes the form $\\diag(a, b, C, D), a,b \\in U(1),C, D \\in U(2), (ab)(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$. Needless to say, $q \\in Sp(1)$.\n\tHence there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$. Namely, there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(1)\\times U(2)\\times U(2))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}(q,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}$. 
However, from $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,\\mu'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu'_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. In addition, by Proposition \\ref{proposition 4.12.2}, we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\cong (Sp(1)\\times S(U(1)\\times U(1)\\times U(2)\\times U(2)))\/\\Z_2$. \n\t\n\tTherefore, as in the proof of Theorem \\ref{theorem 4.7.3}, we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3} \\!\\cong (Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4), \n\t\\end{align*}\n\twhere \n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,1,E,-E) \\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,1,E,E), (1,1,-1,-1,-E,E) \\},\n\t\\\\\n\t&\\Z_4=\\{(1,1,1,1,E,E), (1,-1,-1,1,E,E), (-1,i,i,1,-E,E), (-1,-i,-i,1,-E,E) \\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\nu_3} \\cap (E_6)^{\\mu_3}$ is connected from Theorem \\ref{theorem 4.12.3}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((Sp(1)\\times U(1) \\times U(1)\\times U(1) \\allowbreak \\times SU(2)\\times SU(2))\/(\\Z_2\\times\\Z_2\\times\\Z_4)).\n\\end{align*}\n\n\\subsection{Case 13: $\\{1, \\tilde{\\nu}_3, \\tilde{\\nu}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\nu_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (1), since we can easily confirm that $\\nu_3$ and $w_3$ commute, $\\tilde{\\nu}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\nu}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\nu}_3$.\n\nBefore determining the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$, we confirm that a useful lemma holds, and we prove a proposition and a lemma needed in the proof of the theorem below.\n\n\\begin{lemma}\\label{lemma 4.13.1}\n\tThe mapping $\\varphi_{{}_{E_6,\\nu_3}}:Sp(1) \\times S(U(1)\\times U(5)) \\to (E_6)^{\\nu_3}$ of \\,Theorem {\\rm \\ref{theorem 3.3.5}} satisfies the relational formula \n\t\\begin{align*}\n\tw_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\tau\\omega,\\omega,\\tau\\omega,\\omega,\\tau\\omega,\\omega)),\n\t\\end{align*}\n\twhere $\\omega=\\exp(2\\pi i\/3)\\in U(1)$.\n\\end{lemma}\n\\begin{proof}\n\tFrom Lemma \\ref{lemma 3.3.8} (1), this result is trivial. 
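Indeed, as a consistency check, since $\\omega^3=1$ and $\\tau\\omega=\\omega^{-1}$, we have $\\det\\,\\diag(\\tau\\omega,\\omega,\\tau\\omega,\\omega,\\tau\\omega,\\omega)=(\\tau\\omega)^3\\omega^3=1$, so that this diagonal matrix belongs to $S(U(1)\\times U(5))$, and its cube is $E$, which is consistent with ${w_3}^3=1$.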
\n\\end{proof}\n\nThe $C$-linear transformation $w'_3$ defined in Case 8 is expressed by\n\\begin{align*}\nw'_3=\\varphi_{{}_{E_6,\\nu_3}}(1, \\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)),\n\\end{align*}\nand note that $\\delta_N=\\varphi_{{}_{E_6, \\nu_3}}(1,N)(=\\varphi_{{}_{E_6, \\gamma_3}}(1,N))$, where $\\delta_N$ is also defined in Case 8; needless to say, $w_3$ is conjugate to $w'_3$ under $\\delta_N=\\varphi_{{}_{E_6, \\nu_3}}(1,N)$.\n\\vspace{1mm}\n\nThen we have the following proposition.\n\n\\begin{proposition}\\label{proposition 4.13.2}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(E_6)^{\\nu_3} \\cap (E_6)^{w'_3}${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (E_6)^{\\nu_3} \\cap (E_6)^{w'_3}$.\n\\end{proposition}\n\\begin{proof}\n\tWe define a mapping $g_{{}_{4132}}: (E_6)^{\\nu_3} \\cap (E_6)^{w'_3} \\to (E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\tg_{{}_{4132}}(\\alpha)=\\delta_N\\alpha{\\delta_N}^{-1},\t\n\t\\end{align*}\n\twhere $\\delta_N$ is the same one as above. Since it is easy to verify that $\\delta_N \\nu_3=\\nu_3\\delta_N$ using $\\nu_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\nu^5,\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1},\\nu^{-1}))$ (Lemma \\ref{lemma 4.9.1}) and $w_3\\delta_N=\\delta_N w'_3$ (Lemma \\ref{lemma 4.13.1}), we can prove this proposition as in the proof of Proposition \\ref{proposition 4.8.1}.\n\\end{proof}\n\nSubsequently, we will prove the following lemma.\n\n\\begin{lemma}\\label{lemma 4.13.3}\n\tThe group $S(U(1)\\times U(2)\\times U(3))$ is isomorphic to the group $(U(1)\\times U(1)\\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_3)${\\rm :} $S(U(1)\\times U(2)\\times U(3)) \\cong (U(1)\\times U(1)\\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_3), \\Z_2\\!=\\{(1,1,E,E),(-1,1,-E,E) \\},\\Z_3\\!=\\!\\{(1,1,E,E),(1,\\omega,E,{\\omega}^{-1}E),(1,\\omega^{-1},E,\\omega E) \\}$.\t\n\\end{lemma}\n\\begin{proof}\n\tWe define a mapping $f_{{}_{4133}}:U(1) \\times U(1)\\times SU(2)\\times SU(3) \\to S(U(1)\\times U(2)\\times U(3))$ by\n\t\\begin{align*}\n\tf_{{}_{4133}}(a,b,A,B)=\\left( \n\t\\begin{array}{ccc}\n\ta^{-2}b^{-3} & & {\\raisebox{-7pt}[0pt]{\\large $0$}}\n\t\\\\[2mm]\n\t& a\\mbox{\\large {$A$}} & \n\t\\\\[2mm]\n\t{\\raisebox{1pt}[0pt]{\\large $0$}}&& b\\mbox{\\large {$B$}}\n\t\\end{array}\\right) \\in SU(6).\n\t\\end{align*}\n\tThen it is clear that $f_{{}_{4133}}$ is well-defined and a homomorphism. \n\t\n\tWe will prove that $f_{{}_{4133}}$ is surjective. Let $P \\in S(U(1)\\times U(2)\\times U(3))$. Then $P$ takes the form $\\diag(s,P_1,P_2),s \\in U(1),P_1 \\in U(2), P_2 \\in U(3),s(\\det\\,P_1)(\\det\\,P_2)=1$. Here, since $P_1 \\in U(2), P_2 \\in U(3)$, we see that $\\det\\,P_1, \\det\\,P_2 \\in U(1)$. We choose $a,b \\in U(1)$ such that $a^2=\\det\\,P_1, b^3=\\det\\,P_2$, respectively, and set $A=(1\/a)P_1, B=(1\/b)P_2$. Then we have that $ A \\in SU(2), B \\in SU(3)$. With the above, the proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,f_{{}_{4133}}$. 
It follows from the definition of the kernel that\n\t\\begin{align*}\n\t\\Ker\\,f_{{}_{4133}}&=\\{(a,b,A,B)\\in U(1)\\times U(1)\\times SU(2) \\times SU(3) \\,|\\,f_{{}_{4133}}(a,b,A,B)=E \\}\n\t\\\\\n\t&=\\{(a,b,A,B)\\in U(1)\\times U(1)\\times SU(2) \\times SU(3)\\,|\\,a^2b^3=1,aA=bB=E \\}\n\t\\\\\n\t&=\\{(a,b,a^{-1}E,b^{-1}E)\\in U(1)\\times U(1)\\times SU(2) \\times SU(3) \\,|\\,a^2=b^3=1 \\}\n\t\\\\\n\t&=\\{(1,1,E,E), (1,\\omega,E,{\\omega}^{-1}E),(1,{\\omega}^{-1},E,\\omega E), \n\t\\\\\n\t&\\hspace*{20mm}(-1,1,-E,E), (-1,\\omega,-E,{\\omega}^{-1}E),(-1,{\\omega}^{-1},-E,\\omega E)\\}\n\t\\\\\n\t&=\\{(1,1,E,E), (-1,1,-E,E) \\} \\times \\{(1,1,E,E), (1,\\omega,E,{\\omega}^{-1}E),(1,{\\omega}^{-1},E,\\omega E) \\}\n\t\\\\\n\t& \\cong \\Z_2 \\times \\Z_3.\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism \n\t\\begin{align*}\n\tS(U(1)\\times U(2)\\times U(3)) \\cong (U(1) \\times U(1)\\times SU(2)\\times SU(3))\/(\\Z_2\\times\\Z_3).\n\t\\end{align*}\n\\end{proof} \n\nNow, we will determine the structure of the group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.13.4}\n\tThe group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ is isomorphic to the group $(Sp(1)\\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2 \\times \\Z_2 \\times \\Z_3)${\\rm :} $(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (Sp(1)\\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2 \\times \\Z_2 \\times \\Z_3), \\Z_2=\\{(1,1,1,E,E), (1,-1,1,-E,E)\\},\\Z_2=\\{(1,1,1,E,E), (-1,-1,-1,E,E)\\},\n\t\\Z_3=\\{(1,\\allowbreak 1,1,E,E), (1,1,\\omega,E,{\\omega}^{-1}E),(1,1,{\\omega}^{-1},E,\\omega E)\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(2)\\times U(3)) \\subset S(U(1) \\times U(5))$. \n\tWe define a mapping $\\varphi_{{}_{E_6,\\nu_3,w'_3}}: Sp(1)\\times S(U(1)\\times U(2)\\times U(3)) \\to (E_6)^{\\nu_3} \\cap (E_6)^{w'_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q, P)(M+\\a)&={k_J}^{-1}(P(k_J M){}^t\\!P)+q\\a k^{-1}(\\tau \\,{}^t\\!P), \n\t\\\\\n\t&\\hspace*{40mm}M+\\a \\in \\mathfrak{J}(3, \\H)^C \\oplus (\\H^3)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, that is, $\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q, P)=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$ (Theorem \\ref{theorem 3.3.5}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is well-defined. 
It is clear that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P) \\in (E_6)^{\\nu_3}$, and it follows from $w'_3=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$ that\n\t\\begin{align*}\n\t&\\quad {w'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P) w'_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))^{-1}\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega))\\varphi_{{}_{E_6,\\nu_3}}(q,P)\\varphi_{{}_{E_6,\\nu_3}}(1,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)P\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)),P=\\diag(s,P_1,P_2)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\omega s (\\tau\\omega),(\\omega E)P_1(\\tau\\omega E), (\\tau\\omega E) P_2 (\\omega E) ))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3}}(q,P)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P) \\in (E_6)^{w'_3}$. Thus $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,\\nu_3}}$, we easily see that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is a homomorphism.\n\t\n\tNext, we will prove that $\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\nu_3} \\cap (E_6)^{w'_3} \\subset (E_6)^{\\nu_3}$. There exist $q \\in Sp(1)$ and $A \\in S(U(1)\\times U(5))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$ (Theorem \\ref{theorem 3.3.5}). Moreover, from the condition $\\alpha \\in (E_6)^{w'_3}$, that is, ${w'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)w'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,A)$, and using ${w'_3}^{-1}\\varphi_{{}_{E_6,\\nu_3}}(q,A)w'_3=\\varphi_{{}_{E_6,\\nu_3}}(q,\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega))$, we have that\n\t\\begin{align*}\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=q \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=A \n\t\\end{array}\\right. \n\t\\\\\n\t&\\hspace*{50mm}{\\text{or}}\n\t\\\\\n\t&\\left\\{\n\t\\begin{array}{l}\n\tq=-q \\\\\n\t\\diag(\\omega,\\omega,\\omega,\\tau\\omega,\\tau\\omega,\\tau\\omega)A\\,\\diag(\\tau\\omega,\\tau\\omega,\\tau\\omega,\\omega,\\omega,\\omega)=-A. \n\t\\end{array}\\right. \n\t\\end{align*}\n\tThe latter case is impossible because $q\\not=0$. As for the former case, from the second condition, a straightforward computation shows that $A$ takes the form $\\diag(s,C, D), s \\in U(1), C \\in U(2),D \\in U(3), s(\\det\\,C)(\\det\\,D)=1$, that is, $A \\in S(U(1)\\times U(2)\\times U(3))$. Needless to say, $q \\in Sp(1)$.\n\tHence there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(2)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3}}(q,P)$. Namely, there exist $q \\in Sp(1)$ and $P \\in S(U(1)\\times U(2)\\times U(3))$ such that $\\alpha=\\varphi_{{}_{E_6,\\nu_3,w'_3}}(q,P)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,w'_3}}$. 
However, from $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3}}=\\{(1,E),(-1,-E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\nu_3,w'_3}}=\\{(1,E),(-1,-E) \\} \\cong \\Z_2$. Thus we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{w'_3} \\cong (Sp(1)\\times S(U(1)\\times U(2)\\times U(3)))\/\\Z_2$. In addition, by Proposition \\ref{proposition 4.13.2}, we have the isomorphism $(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (Sp(1)\\times S(U(1)\\times U(2)\\times U(3)))\/\\Z_2$. Here, using the mapping $f_{{}_{4133}}$ in the proof of Lemma \\ref{lemma 4.13.3}, we define a homomorphism $h_{{}_{4134}}:Sp(1)\\times (U(1)\\times U(1)\\times SU(2)\\times SU(3)) \\to Sp(1)\\times S(U(1)\\times U(2)\\times U(3))$ by\n\t\\begin{align*}\n\th_{{}_{4134}}(q,(a,b,A,B))=(q,f_{{}_{4133}}(a,b,A,B)).\n\t\\end{align*}\n\tThen the elements $(q,(a,b,A,B))$ that are mapped to the elements \n\t $(1,E), (-1,-E) \\in \\Ker\\,\\varphi_{{}_{E_6,\\nu_3,w'_3}}$ under the mapping $h_{{}_{4134}}$ are as follows.\n\t\\begin{align*}\n\t& (1,1,1,E,E), (1,1,\\omega,E,{\\omega}^{-1}E),(1,1,{\\omega}^{-1},E,\\omega E), (1,-1,1,-E,E),\n\t\\\\\n\t& (1,-1,\\omega,-E,{\\omega}^{-1}E),(1,-1,{\\omega}^{-1},-E,\\omega E),\n\t\\\\\n\t& (-1,1,-1,-E,E), (-1,1,-\\omega,-E,{\\omega}^{-1}E),(-1,1,-{\\omega}^{-1},-E,\\omega E), (-1,-1,-1,E,E),\n\t\\\\\n\t& (-1,-1,-\\omega,E,{\\omega}^{-1}E),(-1,-1,-{\\omega}^{-1},E,\\omega E).\n\t\\end{align*}\n\t\n\tTherefore we have the required isomorphism\n\t\\begin{align*}\n\t(E_6)^{\\nu_3} \\cap (E_6)^{w_3} \\cong (Sp(1) \\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_2\\times \\Z_3), \n\t\\end{align*}\n\twhere\n\t\\begin{align*}\n\t&\\Z_2=\\{(1,1,1,E,E), (1,-1,1,-E,E)\\},\n\t\\\\\n\t&\\Z_2=\\{(1,1,1,E,E), (-1,-1,-1,E,E)\\},\n\t\\\\\n\t&\\Z_3=\\{(1,1,1,E,E), (1,1,\\omega,E,{\\omega}^{-1}E),(1,1,{\\omega}^{-1},E,\\omega E)\\}.\n\t\\end{align*}\n\\end{proof}\n\nThus, since the group $(E_6)^{\\nu_3} \\cap (E_6)^{w_3}$ is connected from Theorem \\ref{theorem 4.13.4}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((Sp(1) \\times U(1)\\times U(1) \\times SU(2)\\times SU(3))\/(\\Z_2\\times \\Z_2\\times \\Z_3)).\n\\end{align*}\n\n\\subsection{Case 14: $\\{1, \\tilde{\\mu}_3, \\tilde{\\mu}_3{}^{-1}\\} \\times \\{1, \\tilde{w}_3, \\tilde{w}_3{}^{-1}\\}$-symmetric space}\n\nLet $\\mu_3, w_3$ be the $C$-linear transformations of $\\mathfrak{J}^C$ defined in Subsection \\ref{subsection 3.3}. \n\n\\noindent From Lemma \\ref{lemma 3.3.8} (2), since we can easily confirm that $\\mu_3$ and $w_3$ commute, $\\tilde{\\mu}_3$ and $\\tilde{w}_3$ commute in $\\Aut(E_6)$: $\\tilde{\\mu}_3\\tilde{w}_3=\\tilde{w}_3\\tilde{\\mu}_3$.\n\nNow, we will determine the structure of the group $(E_6)^{\\mu_3}\\cap (E_6)^{w_3}$.\n\n\\begin{theorem}\\label{theorem 4.14.1}\n\tThe group $(E_6)^{\\mu_3}\\cap (E_6)^{w_3}$ is isomorphic to the group $(SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3${\\rm :} $(E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3, \\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{theorem}\n\\begin{proof}\n\tLet $S(U(1)\\times U(1)\\times U(1)) \\subset SU(3)$. 
We define a mapping $\\varphi_{{}_{E_6,\\mu_3,w_3}}: SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)) \\to (E_6)^{\\mu_3}\\cap (E_6)^{w_3}$ by\n\t\\begin{align*}\n\t\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)(X_{C}+M)&=h(P,Q)X_{C}h(P,Q)^*+LM\\tau h(P,Q)^*, \n\t\\\\\n\t&\\hspace*{20mm} X_{C}+M \\in \\mathfrak{J}(3, \\C)^C \\oplus \n\tM(3,\\C)^C=\\mathfrak{J}^C.\n\t\\end{align*}\n\tNeedless to say, this mapping is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, that is, $\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,\\allowbreak Q)\\allowbreak=\\varphi_{{}_{E_6,w_3}}(L,P,Q)$ (Theorem \\ref{theorem 3.3.7}). \n\t\n\tAs usual, we will prove that $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is well-defined. It is clear that $\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q) \\in (E_6)^{w_3}$, and it follows from $\\mu_3=\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))$ (Lemma \\ref{lemma 3.3.8} (2)) that \n\t\\begin{align*}\n\t&\\quad {\\mu_3}^{-1}\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)\\mu_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))^{-1}\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}), \\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}))\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q)\n\t\\\\\n\t&\\hspace*{70mm}\\varphi_{{}_{E_6,w_3}}(E,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}), \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})P\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}),\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})Q \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})),\n\t\\\\\n\t&\\hspace*{90mm}P=\\diag(a,b,c), Q=\\diag(s,t,v)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,P,Q)\n\t\\\\\n\t&=\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q).\n\t\\end{align*}\n\tHence we have that $\\varphi_{{}_{E_6,\\mu_3,w_3}}(L,P,Q) \\in (E_6)^{\\mu_3}$. Thus $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is well-defined. Subsequently, since $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is the restriction of the mapping $\\varphi_{{}_{E_6,w_3}}$, we easily see that $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is a homomorphism.\n\t\n\tNext we will prove that $\\varphi_{{}_{E_6,\\mu_3,w_3}}$ is surjective. Let $\\alpha \\in (E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\subset (E_6)^{w_3}$. There exist $L, A, B \\in SU(3)$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,A,B)$ (Theorem \\ref{theorem 3.3.7}). 
Moreover, from the condition $\\alpha \\in (E_6)^{\\mu_3}$, that is, ${\\mu_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\mu_3=\\varphi_{{}_{E_6,w_3}}(L,A,B)$, and using \n\t\\begin{align*}\n\t&\\quad {\\mu_3}^{-1}\\varphi_{{}_{E_6,w_3}}(L,A,B)\\mu_3\n\t\\\\\n\t&=\\varphi_{{}_{E_6,w_3}}(L,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\,\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon}),\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B \\,\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1}))\n\t\\end{align*}\n\t(Lemma \\ref{lemma 3.3.8} (2)), we have that \n\t\\begin{align*}\n\t&\\,\\,\\,{\\rm(i)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=L\n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})=A \n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})=B,\n\t\\end{array} \\right.\n\t\\\\[2mm]\n\t&{\\rm(ii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}L\n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})=\\bm{\\omega}A \n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})=\\bm{\\omega}B,\n\t\\end{array} \\right.\n\t\\\\[2mm]\n\t&{\\rm(iii)}\\,\\left\\{\n\t\\begin{array}{l}\n\tL=\\bm{\\omega}^{-1}L\n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})A\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})=\\bm{\\omega}^{-1}A \n\t\\\\\n\t\\diag({\\bm{\\varepsilon}}^{-2},\\bm{\\varepsilon},\\bm{\\varepsilon})B \\diag({\\bm{\\varepsilon}}^{2},\\bm{\\varepsilon}^{-1},\\bm{\\varepsilon}^{-1})=\\bm{\\omega}^{-1}B.\n\t\\end{array} \\right.\n\t\\end{align*}\n\tCases (ii) and (iii) are impossible because $L\\not=0$. As for Case (i), from the second and third conditions, it is easy to see that $A,B \\in S(U(1)\\times U(1) \\times U(1))$. Needless to say, $L \\in SU(3)$. Hence there exist $L \\in SU(3)$ and $P,Q \\in S(U(1)\\times U(1) \\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,w_3}}(L,P,Q)$. Namely, there exist $L \\in SU(3)$ and $P,Q \\in S(U(1)\\times U(1) \\times U(1))$ such that $\\alpha=\\varphi_{{}_{E_6,\\mu_3, w_3}}(L,P,Q)$. The proof of surjectivity is completed.\n\t\n\tFinally, we will determine $\\Ker\\,\\varphi_{{}_{E_6,\\mu_3,w_3}}$. However, from $\\Ker\\,\\varphi_{{}_{E_6,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\}$, we easily obtain that $\\Ker\\,\\varphi_{{}_{E_6,\\mu_3,w_3}}=\\{(E,E,E),(\\bm{\\omega}E,\\allowbreak \\bm{\\omega}E,\\bm{\\omega}E),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1}E, \\bm{\\omega}^{-1}E) \\} \\cong \\Z_3$.\n\tThus we have the isomorphism $(E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times S(U(1)\\times U(1)\\times U(1))\\times S(U(1)\\times U(1)\\times U(1)))\/\\Z_3$. 
\n\t\n\tTherefore, by Lemma \\ref{lemma 4.3.1} we have the required isomorphism \n\t\\begin{align*}\n\t(E_6)^{\\mu_3}\\cap (E_6)^{w_3} \\cong (SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3,\n\t\\end{align*}\n\twhere $\\Z_3=\\{(E,1,1,1,1),(\\bm{\\omega}E,\\bm{\\omega},\\bm{\\omega},\\bm{\\omega},\\bm{\\omega}),(\\bm{\\omega}^{-1}E,\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1},\\bm{\\omega}^{-1})\\}$.\n\\end{proof}\n\nThus, since the group $(E_6)^{\\mu_3} \\cap (E_6)^{w_3}$ is connected from Theorem \\ref{theorem 4.14.1}, we have an exceptional $\\varmathbb{Z}_3 \\times \\varmathbb{Z}_3$-symmetric space\n\\begin{align*}\nE_6\/((SU(3)\\times U(1)\\times U(1)\\times U(1)\\times U(1))\/\\Z_3).\n\\end{align*}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\\label{sec:intro}\n\nUpcoming 21~cm surveys are poised to make a first detection of redshifted\n21~cm fluctuations from the EoR within the next several years\n\\citep{DeBoer:2016tnn}. These measurements will provide a direct probe of the\ndistribution of neutral hydrogen in the IGM, revealing the spatial structure\nof the reionization process, and its redshift evolution. Along with these\nmeasurements, several other ``line-intensity'' mapping surveys are planned to\nmap out large-scale structure in the galaxy distribution using convenient\nemission lines with current targets including [C~\\textsc{ii}], CO, Ly-$\\alpha$, and\nH-$\\alpha$ \\citep[see e.g.][and references therein]{kovetz2017:im_review}.\nThese surveys study the spatial fluctuations in the collective emission from\nmany individually unresolved sources (e.g.\n\\citealt{Suginohara:1998ti,Righi:2008br,Visbal10}). These measurements should\nnicely complement 21~cm observations (e.g. \\citealt{Lidz11,gong11:probing}):\nwhile the 21~cm fluctuations trace-out remaining neutral hydrogen residing\nmostly in the low-density IGM, the galactic emission lines track the galaxies\nthemselves, which presumably lie within ``bubbles'' of mostly ionized hydrogen\n\\citep{Lidz:2008ry}.\n\nIn fact, recent work has led to detections in various lines at low redshift\n\\citep{2010Natur.466..463C,2001defi.conf..241D,Keating:2016pka,2018MNRAS.478.1911P,2016MNRAS.457.3541C,Croft:2018rwv},\nbolstering efforts to employ the line-intensity mapping technique at earlier\ntimes during the EoR. It is hence timely to explore the scientific benefits of\ncombining 21~cm observations of the EoR with line-intensity mapping surveys in\nother emission lines.\n\nHere we consider, for the first time, one potential advantage of combining\n21~cm surveys of the EoR with line-intensity mapping surveys in {\\em two\nadditional lines.} Specifically, we show that the linear bias factor of the\n21~cm field may be extracted solely from cross-power spectra between the 21~cm\nfluctuations and those in each of two separate lines. This can provide an\nimportant cross-check on inferences from the 21~cm auto-power spectrum since\ncross-power spectra should be less prone to bias from residual foregrounds\n\\citep[e.g.][]{Furlanetto:2006pg,Lidz:2008ry}; only shared foregrounds\ncontribute to the average cross spectrum signal. \n\nThe foreground problem is especially daunting in the case of redshifted\n21 cm surveys, where the expected foreground-to-signal strength is on the\norder of $\\sim 10^5$\n\\citep[e.g.][]{2009A&A...500..965B,2013ApJ...768L..36P,Dillon:2013rfa}. 
The\nbasic strategy for extracting the signal is to exploit the fact that the\nforegrounds should be smooth functions of frequency, while the reionization\nsignal has a great deal of spectral structure. In practice, this is\nchallenging because the instrument, calibration errors, and other effects may\nimprint artificial spectral variations. Cross-spectrum measurements should be\nless sensitive to such systematic effects and can therefore help confirm early\ndetections. For instance, \\citet{2015JCAP...03..034V} show that cross-spectra\ncan be robustly measured even in the presence of polarized synchrotron\nforegrounds; this is a troublesome case for auto-spectrum analyses because\nFaraday rotation leads to frequency structure.\n\nThe amplitude of the 21~cm power spectrum evolves with redshift in a\ndistinctive way as reionization proceeds \\citep[e.g.][]{Lidz08}, and recent\nwork has demonstrated that linear biasing describes the large-scale 21~cm\npower spectrum rather well\n\\citep{McQuinn:2018zwa,Hoffmann:2018clb,2018ApJ...867...26B}. Therefore, if\nour three-field method may be employed over a range of redshifts, it can be\nused to extract key and robust information regarding the reionization history\nof the Universe.\n\nIn recent related work we showed that the large-scale 21~cm bias factor may be\nrecovered using suitable cross-bispectra between the 21~cm fluctuations and\nthe [C~\\textsc{ii}] emission field \\citep{2018ApJ...867...26B}. While the\ncross-bispectra method requires only the 21~cm fluctuations and one additional\ntracer field, the technique we propose here should be vastly simpler to\nimplement in practice (provided two additional tracers are available with\ncommon sky and redshift coverage). This is the case because our present method\nrelies only on two-point statistics, and it therefore avoids practical\ndifficulties in carrying out cross-bispectrum analyses. For example, it is\nchallenging to estimate the bispectrum covariance as this involves computing a\nsix-point function. In addition, we will show that our present technique\nallows for a more faithful extraction of the 21~cm bias factor. Ultimately,\nboth analyses may be carried out for additional cross-checks.\n\nThere is a broad range of possible lines that may be combined with the 21~cm\nsurveys. Currently, there are projects -- either ramping-up or in the planning\nstages -- to perform EoR-era line-intensity surveys in: [C~\\textsc{ii}]~$158\\,\\mu\n\\text{m}$ \\citep{Crites14,Lagache:2018hmk,Vavagiakis:2018gen}, rotational\ntransitions from CO molecules \\citep{Chung:2017uot}, Ly-$\\alpha$\n\\citep{Dore16}, and H-$\\alpha$ \\citep{Cooray:2016hro}. Additional\nfine-structure lines such as [O~\\textsc{iii}]~$88\\,\\mu \\text{m}$ \\citep{Moriwaki18} and\n[N \\textsc{ii}]~$122\\,\\mu \\text{m}$ \\citep{Serra:2016jzs} may also be suitable\n--- in some cases, these lines will land in the proposed frequency bands of\nthe planned [C~\\textsc{ii}] surveys. The [O~\\textsc{iii}]~$88\\,\\mu \\text{m}$ line appears\nespecially promising since targeted ALMA observations around $z \\sim 7-9$\ngalaxies have found that this line is {\\em brighter} at high redshift than\nexpected based on local correlations between line-luminosity and\nstar-formation rate \\citep[e.g.][and references therein]{Moriwaki18}.\n\nIn principle, one could extract the 21~cm bias using the cross-spectrum with a\ntraditional galaxy survey, in which case the galaxy bias may be measured\nrobustly from the auto-power spectrum. 
In practice, this is extremely\nchallenging because one needs {\\em spectroscopic redshifts} for the galaxy\nsurvey over a huge sky area at $z \\sim 8$. If only photometric redshifts are\navailable, then one only accesses long-wavelength line-of-sight modes (with\nsmall or vanishing line-of-sight wavenumbers) in the galaxy survey but\nprecisely these modes are lost to foreground cleaning\/avoidance in the 21~cm\nsurveys (e.g. \\citealt{Lidz:2008ry}). Fortunately, multi-line intensity\nmapping provides a promising way forward here and our approach avoids\nmeasuring bias factors from auto-spectra.\n\nIn Section~\\ref{sec:approach}, we describe our three cross-spectra approach in\ndetail. In Section~\\ref{sec:simulations} we briefly discuss the radiative\ntransfer simulations of reionization \\citep{2007MNRAS.377.1043M,Lidz08} used\nin our analysis, the reionization model assumed, and our method for generating\nmock line-intensity mapping data cubes. We then quantify the accuracy of our\ntechnique in Section~\\ref{sec:results}. The survey specifications required to\nextract bias factors with this method are discussed briefly in\nSection~\\ref{sec:detectability}. We conclude in Section~\\ref{sec:conclusions}.\nWe assume a $\\Lambda$CDM cosmology, parameterized by $(\\Omega_m,\n\\Omega_{\\Lambda}, \\Omega_b, h, \\sigma_8, n_s) = (0.27, 0.73, 0.046, 0.7, 0.8,\n1)$ as in the simulations used in this work \\citep{McQuinn:2007dy}. While\nthese parameters differ slightly from presently favored values (e.g.\n\\citealt{2018arXiv180706209P}), this should not impact our conclusions.\n\n\\section{Approach}\\label{sec:approach}\nHere we define terms and describe our three cross-spectra approach. Ignoring\nredshift-space distortions and spin-temperature fluctuations, the 21~cm\nbrightness temperature contrast between neutral hydrogen gas and the cosmic\nmicrowave background is:\n\\begin{equation}\\label{eq:brightness_temp}\nT_{21}(\\bm{x}) = T_0 X_{\\text{HI}}(\\bm{x})[1+\\delta_\\rho(\\bm{x})]\\text{.}\n\\end{equation}\nHere $T_0 = 28\\,\\text{mK}[(1+z)\/10]^{1\/2}$ \\citep[e.g.][]{Zaldarriaga:2003du},\n$X_{\\text{HI}}(\\bm{x})$ is the neutral hydrogen fraction at position $\\bm{x}$, and\n$\\delta_\\rho(\\bm{x})$ is the gas density contrast, which is assumed to follow\nthe overall matter density field on the large scales of interest. Although\nionized regions imprint large-scale fluctuations in the 21~cm field, on scales\nmuch larger than the size of the ionized regions, the 21~cm fluctuations\nshould nevertheless follow a linear biasing relation\n\\begin{equation}\\label{eq:21cm_bias}\nT_{21}(\\bm{k}) = \\pm \\avg{T_{21}} b_{21} \\delta_{\\text{lin}}(\\bm{k})\\text{,}\n\\end{equation}\nwhere the $\\pm$ indicates that the fields are either correlated ($+$) or\nanti-correlated ($-$) --- during the bulk of the EoR, the 21~cm and density\nfields are anti-correlated on large scales in most models\n\\citep[e.g.][]{Lidz:2008ry}. 
Here $T_{21}(\\bm{k})$ is the Fourier transform of\nthe brightness temperature field (Equation~\\ref{eq:brightness_temp}) and\n$\\delta_{\\text{lin}}(\\bm{k})$ is the Fourier transform of the linear density\ncontrast.\\footnote{Our Fourier convention is: $T_{21}(\\bm{k}) = \\int\n\\text{d}^3x\\, T_{21}(\\bm{x}) e^{i \\bm{k} \\cdot \\bm{x}}$ and $T_{21}(\\bm{x}) =\n\\int \\frac{\\text{d}^3k}{(2\\pi)^3}\\, T_{21}(\\bm{k}) e^{-i \\bm{k} \\cdot\n\\bm{x}}$.} The quantity $b_{21}$ is the dimensionless, and scale-independent,\nlinear bias factor of the 21~cm fluctuation contrast, $\\delta_{21}(\\bm{x}) =\n\\left(T_{21}(\\bm{x}) - \\avg{T_{21}}\\right)\/\\avg{T_{21}}$, while the\n$\\avg{T_{21}}$ factor reverts to brightness temperature units (since the\naverage brightness temperature is not itself observable from interferometric\nmeasurements.) In this work when we refer to the ``bias'' we mean\n$\\avg{T_{21}}b_{21}$ (and likewise for the intensity mapping surveys.)\n\nLikewise, we can consider additional tracer lines, such as [C~\\textsc{ii}]. On large\nscales, the Fourier transform of the specific intensity of each of these lines\nshould be well-described by\n\\begin{equation}\\label{eq:linear_biasing}\nI_{i}(\\bm{k}) = \\avg{I_{i}} b_{i} \\delta_{\\text{lin}}(\\bm{k})\\text{,}\n\\end{equation}\nwhere $\\avg{I_{i}}$ is the mean specific intensity of the emission\nline.\\footnote{We follow standard conventions in expressing 21~cm fluctuations\nin brightness temperature units, i.e. in $\\text{mK}$, while we use specific\nintensity units for the other tracer lines, i.e. $I_{i}$ is the specific\nintensity in $\\text{Jy\/str}$.} For the case of emission lines sourced by gas\nwithin galaxies, the relevant bias factor is the luminosity-weighted bias of\nthe line-emitting host halos (e.g. \\citealt{Lidz11}). To be completely general\nwe should also include a $\\pm$ here (as in Equation~\\ref{eq:21cm_bias}), but\nfor the galactic emission lines we generally expect brighter line emission in\noverdense regions.\n\nOn sufficiently large scales, the auto-power spectrum of the fluctuations in\neach tracer line (Equation~\\ref{eq:linear_biasing}) will be\n\\begin{equation}\\label{eq:bias_ps}\n\\begin{split}\nP_{i, i}(k, z) &\\equiv \\avg{I_{i}(k, z) I_{i}^{*}(k, z)} \\\\\n&= \\left[\\avg{I_{i}}(z) b_{i}(z)\\right]^2 P_{\\text{lin}}(k, z)\\text{,}\n\\end{split}\n\\end{equation}\nwhere $P_{\\text{lin}}(k,z)$ is the linear matter power spectrum. Similarly, on\nlarge scales the 21~cm auto-power spectrum should follow $P_{21,21}(k,z) =\n\\left[\\avg{T_{21}}(z) b_{21}(z)\\right]^2 P_{\\text{lin}}(k,z)$. In principle,\none can infer the bias factors $\\avg{I_i} b_i$ and $\\avg{T_{21}} b_{21}$ from\nauto-power spectrum measurements (assuming a model for the linear power\nspectrum). However, foreground cleaning\/avoidance present significant\nchallenges here\n\\citep[e.g.][]{2012MNRAS.419.3491L,2013ApJ...769..154M,2015ApJ...804...14T,2016ApJ...819....8P,Ewall-Wice:2016bhu}\nand residual foregrounds may bias such inferences.\n\nAnother approach is to measure the cross-power spectrum between two lines $i$\nand $j$. 
In this case, one measures\n\\begin{equation}\\label{eq:xps}\nP_{i,j} = r_{i, j} \\avg{I_{i}} \\avg{I_{j}} b_{i} b_{j} P_{\\text{lin}}\\text{,}\n\\end{equation}\nwhere $r_{i,j}$ is the cross-correlation coefficient which ranges from $-1$ to\n$1$.\\footnote{Note that here we adopt the convention that the bias factors are\nalways positive and that the sign of the cross-spectrum is determined solely\nby that of the correlation coefficient. This convention differs from our\nprevious work \\citep{2018ApJ...867...26B}.} In the above equation and in what\nfollows, we generally suppress redshift and wavenumber labels for brevity. In\ngeneral, $r_{i,j}$ is scale-dependent, but asymptotes to $-1$ (for\nanticorrelated fields) or $1$ (for correlated fields) on large\nscales.\\footnote{Note that we neglect shot-noise contributions to the\nauto-spectrum in Equation~\\ref{eq:bias_ps}, as well as correlated shot-noise\nterms in the cross-power spectrum. This should be a very good approximation on\nthe scales of interest unless the line-emitting sources are quite rare (e.g.\n\\citealt{lidz2016:remove_interloper}). Even in the case of rare sources, the\nshot-noise term should be a white-noise contribution on scales much larger\nthan the size of the host halos. In this case, one can perform a joint fit for\nthe shot-noise along with the clustering terms.} If one of the lines is the\n21~cm field, we replace $\\avg{I_i}$ with $\\avg{T_{21}}$ in\nEquation~\\ref{eq:xps}.\n\nHowever, in the presence of a third line $k$, and with $P_{j,k}$ and $P_{k,i}$\ndefined analogously as in Equation~\\ref{eq:xps}, we can simply write\n\\begin{equation}\\label{eq:threefields}\n\\begin{split}\nP_{i,i}=(\\avg{I_i} b_i)^2 P_{\\text{lin}} &= \\frac{r_{j,k}}{r_{i,j} r_{k,i}} \\frac{P_{i,j} P_{k, i}}{P_{j,k}} \\\\\n&\\equiv R_{i,j,k} P_{i,j,k}\\text{,}\n\\end{split}\n\\end{equation}\nwhere we have defined $R_{i,j,k} \\equiv r_{j,k}\/(r_{i,j} r_{k,i})$ and\n$P_{i,j,k} \\equiv (P_{i,j}P_{k,i})\/P_{j,k}$. On sufficiently large scales,\n$R_{i,j,k} \\rightarrow 1$, but on intermediate scales $R_{i,j,k} > 1$ for most\nreasonable cases when the various $r$'s are close in magnitude.\nEquation~\\ref{eq:threefields} shows that (on sufficiently large scales where\nlinear biasing holds and $R_{i,j,k} \\sim 1$) we can recover the linear bias\nfactor of field $i$ from a suitable ratio of cross-spectra. Here we suppose\nthat the underlying density power spectrum is well known.\nEquation~\\ref{eq:threefields} is the main point of this paper; in the\nremainder of this work we consider an application to the EoR and quantify its\naccuracy. Specifically, we will test the range of validity -- in spatial scale\nand redshift\/ionization fraction -- of the assumption that $R_{i,j,k}=1$,\nalong with the linear biasing approximations of\nEquations~\\ref{eq:21cm_bias}~\\&~\\ref{eq:linear_biasing}. Note that testing the\nassumption that $R_{i,j,k} = 1$ directly from upcoming data will require\nreliable auto-spectra.\n\nWe turn now to the specific case of EoR surveys with the goal of extracting\nthe 21~cm bias factor using only cross-power spectra. For further specificity\nwe suppose that the two additional tracer lines are [C~\\textsc{ii}] and [O~\\textsc{iii}],\nalthough little of the analysis that follows depends on the choice of these\ntwo lines --- any of the lines mentioned in Section~\\ref{sec:intro} can be\nused instead of [C~\\textsc{ii}] or [O~\\textsc{iii}]. 
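\nIn practice, the right-hand side of Equation~\\ref{eq:threefields} amounts to simple arithmetic on measured band powers. As an illustrative sketch only (the function and array names below are placeholders rather than part of any existing pipeline), given binned cross-spectra on a common wavenumber grid one may write:\n\\begin{verbatim}\nimport numpy as np\n\ndef bias_from_cross_spectra(P_ij, P_ki, P_jk, P_lin):\n    """Estimate <I_i> b_i from three cross-spectra.\n\n    Inputs are 1d arrays of band powers on a common k grid; the\n    result is meaningful only on large scales where R_{i,j,k} ~ 1\n    and linear biasing holds.\n    """\n    P_ii = P_ij * P_ki \/ P_jk   # assumes R_{i,j,k} = 1\n    return np.sqrt(np.abs(P_ii \/ P_lin))\n\\end{verbatim}\nNote that the sign of each cross-spectrum (i.e. whether the fields involved are correlated or anti-correlated) is discarded by the absolute value, so this estimator returns only the magnitude $|\\avg{I_i} b_i|$.\n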
In this case,\nEquation~\\ref{eq:threefields} may be applied as\n\\begin{equation}\\label{eq:threefields_specific}\n\\begin{split}\nP_{21,21} &= (\\avg{T_{21}} b_{21})^2 P_{\\text{lin}}\\\\\n&= \\frac{P_{21,\\text{C~\\textsc{ii}}} P_{\\text{O~\\textsc{iii}}, 21}}{P_{\\text{C~\\textsc{ii}}, \\text{O~\\textsc{iii}}}}\\text{,}\n\\end{split}\n\\end{equation}\ni.e. assuming $R_{21,\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}} = 1$.\n\nWe expect this approach to break down on small scales. First, the three fields\nwill be well-correlated (or anti-correlated) only on large scales, with the\n21~cm field and the [C~\\textsc{ii}], [O~\\textsc{iii}] fields decorrelating on scales smaller\nthan the size of the ionized regions \\citep{Lidz11}. Second, we assume linear\nbiasing, which should break down on scales where second-order bias terms become\nsignificant \\citep{McQuinn:2018zwa}.\n\nOne caveat here is that we neglect redshift space distortions throughout.\nIncluding these effects will make the power spectra in\nEquation~\\ref{eq:threefields_specific} angle-dependent. Although these effects\nare well studied in the case of the 21~cm auto-spectrum (e.g.\n\\citealt{Mao12}), an extension of our three cross-spectra method may be needed\nto account for these distortions.\n\n\\section{Simulations}\\label{sec:simulations}\n\nIn order to investigate the accuracy of Equation~\\ref{eq:threefields_specific}\nwe turn to $(186\\,\\text{Mpc})^3$ radiative transfer simulations of the EoR\n\\citep{2007MNRAS.377.1043M,McQuinn:2007dy,Lidz08}. In these calculations,\nradiative transfer is post-processed onto a $(1024)^3$ dark matter only\nsimulation run with \\texttt{GADGET-2} \\citep{Springel:2005mi}. The dark matter\nsimulation resolves halos only down to $10^{10}\\,\\text{M}_\\odot$; however, halos down to\n$10^8\\,\\text{M}_\\odot$ are added manually in post-processing with the correct\nstatistical properties \\citep{2007MNRAS.377.1043M}. Halos resolved directly in\nthe simulation (i.e. $>10^{10}\\,\\text{M}_\\odot$) are identified with a\nFriends-of-Friends algorithm with a linking length of 0.2.\n \n In what follows, we adopt the abundant mini-halo sink scenario\n \\citep{2007MNRAS.377.1043M,Lidz08} as our baseline reionization model.\n Although the detailed model for photon sinks implemented in these simulations\n may not be fully realistic, the smaller ionized regions in ``abundant sink''\n scenarios may, in fact, be more plausible than the other cases considered in\n this previous work \\citep{McQuinn:2018zwa}. In any case, the accuracy of our\n method does not depend strongly on the precise reionization model assumed.\n\nIn order to model the [C~\\textsc{ii}] and [O~\\textsc{iii}] emission fluctuations, we assume that\nthe luminosity in each line is correlated with the host halo mass.\nSpecifically, we adopt a power-law average relation between line-luminosity\nand halo mass:\n\\begin{equation}\\label{eq:im_form}\n\\avg{L_i}(M) = L_{i,0} \\left[\\frac{M}{M_0}\\right]^{\\alpha_i},\n\\end{equation}\nwhere $M$ is the mass of the halo, $\\avg{L_i}$ is the average luminosity, and\n$L_{i,0}$ is the luminosity at characteristic mass $M_0$. In order to account\nfor scatter in this relation, we add a random number so that each halo's\nluminosity is $L_i = \\avg{L_i}(1 + \\epsilon)$, where $\\epsilon$ is drawn from a\nzero-mean lognormal distribution of width 0.4 dex.\n\nIn what follows we assume that each host halo in the simulation hosts a [C~\\textsc{ii}]\nand [O~\\textsc{iii}] emitter. 
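\nFor concreteness, a minimal sketch of this luminosity assignment is given below; the variable names are illustrative, and the normalization convention adopted for the lognormal scatter is one plausible reading of the prescription above.\n\\begin{verbatim}\nimport numpy as np\n\ndef assign_luminosities(M, L0, M0, alpha, sigma_dex=0.4, rng=None):\n    """Draw line luminosities for an array of halo masses M."""\n    rng = np.random.default_rng() if rng is None else rng\n    L_mean = L0 * (M \/ M0) ** alpha\n    # lognormal scatter of width 0.4 dex about the mean relation\n    scatter = 10.0 ** (sigma_dex * rng.standard_normal(M.shape))\n    return L_mean * scatter\n\\end{verbatim}\n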
If only a random fraction $f$ of halos host active [C~\\textsc{ii}]\nand\/or [O~\\textsc{iii}] emitters while $L_{i,0}$ is boosted to fix the average\nspecific-intensity in each line, this does not change the 21~cm-[C~\\textsc{ii}] or\n21~cm-[O~\\textsc{iii}] cross-power spectra. This represents the case that\nstar-formation activity has a short duty-cycle, yet the total star-formation\nrate density is fixed to the observed value. If the same random fraction emit\nin both [C~\\textsc{ii}] and [O~\\textsc{iii}] this can boost the cross-shot noise contribution to\n$P_{\\text{C~\\textsc{ii}},\\text{O~\\textsc{iii}}}$, but this is highly sub-dominant on the scales of interest\n($k \\leq 0.4$ Mpc$^{-1}$) even for $f=10^{-2}$.\n\nIn order to estimate the specific intensity of the two fields, we use nearest\ngrid-point interpolation to estimate the emissivity on a $512^3$ Cartesian\ngrid, matching the resolution of the density and 21~cm fields from\n\\citet{Lidz08}. Note that we can test the accuracy of\nEquation~\\ref{eq:threefields_specific} without specifying the numerical value\nof $L_{i,0}$ or $M_0$ since they cancel in the ratio. The value of $\\alpha_i$,\non the other hand, controls which host-halos (and galactic star-formation\nrates) produce most of the specific intensity in line $i$.\\footnote{Note we\nassume that the minimum host halo mass of the [C~\\textsc{ii}] and [O~\\textsc{iii}] emitters is\n$10^8 M_\\odot$, comparable to the atomic cooling mass. The true minimum host\nmass of the emitters may, in fact, be larger. However, note that the average\nspecific intensity may be fixed by the total star-formation rate density and\nthe line-luminosity star-formation rate correlation. Provided these quantities\nare fixed, then the main impact of boosting the minimum host halo mass will be\nto increase slightly the bias factors, $b_i$, and the signal strength. See\ne.g. \\citep{lidz2016:remove_interloper} for more details regarding\nline-intensity fluctuation models.} If the value of $\\alpha_i$ is the same for\n[C~\\textsc{ii}] and [O~\\textsc{iii}], then the two fields differ only by an overall\nmultiplicative factor and Equation~\\ref{eq:threefields_specific} reduces to a\nsimple ratio between a single cross-spectrum and an\nauto-spectrum.\\footnote{This assumes, as we do here, that the scatter in the\nluminosity-mass relation is perfectly correlated between [C~\\textsc{ii}] and [O~\\textsc{iii}] at\nfixed $\\alpha_i$.}\n\nWe consider three different values for $\\alpha_i$: $2\/3$, $1$, and $4\/3$. We\nrefer to these as L, M, and H since they provide most weight to low, medium,\nand high mass host-halos respectively. We allow for the case that the two\nlines have different values of $\\alpha_i$: i.e., we consider 21~cm-L-M,\n21~cm-M-H, and 21~cm-H-L, with L, M, or H standing in for [C~\\textsc{ii}] or [O~\\textsc{iii}] in\nEquation~\\ref{eq:threefields_specific}. We then measure the various\ncross-spectra using a slightly modified version of the power spectrum\ncalculator in \\texttt{21cmFAST} \\citep{Mesinger11,2018arXiv180908995P}.\n\n\\section{Results}\\label{sec:results}\n\nWe first investigate how well our three cross-spectra approach for measuring\nthe large-scale 21~cm bias agrees with the true bias. 
We measure the true bias\nas\n\\begin{equation}\\label{eq:truebias}\n\\avg{T_{21}} b_{21}(k) \\equiv \\sqrt{\\frac{P_{21,21}(k)}{P_{\\delta,\\delta}(k)}}\\text{,}\n\\end{equation}\nand also estimate the bias as\n\\begin{equation}\\label{eq:truebias_cross}\n\\avg{T_{21}} b_{21}(k) \\simeq\n\\left|\\frac{P_{21,\\delta}(k)}{P_{\\delta,\\delta}(k)} \\right| \\text{,}\n\\end{equation}\nwhere $P_{\\delta,\\delta}(k)$ is the auto-power spectrum of the simulated\ndensity field and $P_{21,\\delta}(k)$ is the 21~cm-density cross-power\nspectrum. Note that Equation~\\ref{eq:truebias_cross} assumes that the\ncorrelation coefficient $\\left|r_{21,\\delta}\\right| = 1$ and so will depart\nfrom Equation \\ref{eq:truebias} on small scales, but the two should converge\non large scales (see Section~\\ref{sec:intro}). The absolute value in\nEquation~\\ref{eq:truebias_cross} comes about from the convention adopted in\nSection~\\ref{sec:approach}. On large scales where the 21~cm, [C~\\textsc{ii}], and\n[O~\\textsc{iii}] fields are each well correlated or anti-correlated with the density\nfield and linear theory applies, we expect all estimates of $\\avg{T_{21}}\nb_{21}$ to agree. When we estimate the bias factors using our three\ncross-spectra method (Equation~\\ref{eq:threefields_specific}) we use the\nsimulated density power-spectrum, since this is extremely close to the linear\ntheory prediction on the relevant scales and redshifts.\n\nThe bias factors inferred from Equation~\\ref{eq:threefields_specific} are\nshown in Figure~\\ref{fig:b21_vs_k} for each of the three combinations of our\nluminosity-mass relation models (L-M, M-H, H-L) at $z=8.34$ when the model\nvolume-averaged ionization fraction is $\\avg{x_i}=0.36$. These are compared\nwith the bias inferred from the 21~cm auto-spectrum\n(Equation~\\ref{eq:truebias}) and the 21~cm-density cross-spectrum\n(Equation~\\ref{eq:truebias_cross}). On large scales ($k \\lesssim\n0.3\\,\\text{Mpc}^{-1}$), the methods converge to very nearly the same value. We find\nthat on a scale of $k=0.1\\,\\text{Mpc}^{-1}$ at $\\avg{x_i}=0.36$ the three methods\nagree with the true value to within $0.6\\%$. In the case of 21~cm-L-L,\n21~cm-M-M, or 21~cm-H-H models the agreement is slightly worse but still at\nthe percent-level. Note that another approach for estimating the 21~cm bias\nwould use only the 21~cm-[C~\\textsc{ii}] cross-spectrum and the [C~\\textsc{ii}] auto-spectrum.\nThis requires measuring the [C~\\textsc{ii}] auto-spectrum, which is subject to\ncontamination from interloping line emission, and so we pursue only the more\nrobust three-field technique here.\n\nThe success results because the ionized regions are sufficiently smaller than\nthis scale ($k=0.1\\,\\text{Mpc}^{-1}$), ensuring that the 21~cm and line-intensity\nfields are highly anti-correlated and that second-order biasing contributions\nare small. For example, the cross-correlation coefficient between the 21~cm\nfield and the density field is $r_{21,\\delta} = -0.99$ at $k=0.1\\,\\text{Mpc}^{-1}$\nfor $\\avg{x_i}=0.36$, $z=8.34$.\n\nOn smaller scales, our approach breaks down. At $\\avg{x_i}=0.36$, the\ndifferent bias factor estimates begin diverging at the $\\geq 10\\%$ level near\n$k \\sim 0.4\\,\\text{Mpc}^{-1}$. This occurs because the fields start to de-correlate\nand second order biasing terms become more important. As anticipated after\nEquation~\\ref{eq:threefields}, the three cross-spectra approach underestimates\nthe bias factor in this regime. 
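\nIn practice, both reference measurements, Equations~\\ref{eq:truebias}~and~\\ref{eq:truebias_cross}, reduce to ratios of band powers estimated from the gridded fields. A minimal FFT-based sketch is given below; it is illustrative only, since our actual measurements use the modified \\texttt{21cmFAST} calculator described in Section~\\ref{sec:simulations}.\n\\begin{verbatim}\nimport numpy as np\n\ndef cross_power(f1, f2, box_len, kbins):\n    """Binned cross-spectrum of two fields on a periodic cubic grid."""\n    n = f1.shape[0]\n    d3x = (box_len \/ n) ** 3\n    fk1 = np.fft.rfftn(f1) * d3x\n    fk2 = np.fft.rfftn(f2) * d3x\n    pk3d = (fk1 * np.conj(fk2)).real \/ box_len ** 3\n    kx = 2 * np.pi * np.fft.fftfreq(n, d=box_len \/ n)\n    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_len \/ n)\n    kgrid = np.sqrt(kx[:, None, None] ** 2\n                    + kx[None, :, None] ** 2\n                    + kz[None, None, :] ** 2)\n    counts, _ = np.histogram(kgrid, bins=kbins)\n    power, _ = np.histogram(kgrid, bins=kbins, weights=pk3d)\n    return power \/ np.maximum(counts, 1)\n\n# e.g. the 21 cm bias via the 21cm-density cross-spectrum:\n# np.abs(cross_power(T21, delta, L, kb) \/ cross_power(delta, delta, L, kb))\n\\end{verbatim}\nApplied to the simulated line-intensity cubes, the same routine yields the three cross-spectra entering Equation~\\ref{eq:threefields_specific}, and on small scales the bias estimated in this way indeed falls below the reference measurements, as stated above.\n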
On smaller scales, our approach breaks down. At $\avg{x_i}=0.36$, the
different bias-factor estimates begin diverging at the $\geq 10\%$ level near
$k \sim 0.4\,\text{Mpc}^{-1}$. This occurs because the fields start to decorrelate
and second-order biasing terms become more important. As anticipated after
Equation~\ref{eq:threefields}, the three cross-spectra approach underestimates
the bias factor in this regime. This underestimation may allow one to place
robust lower limits on $P_{21,21}$ that are only $\sim50\%$ smaller than the
true value down to $k\sim2\,\text{Mpc}^{-1}$ at this stage of the EoR, although the
model-dependence of such limits warrants further investigation.

\begin{figure}
\includegraphics[width=\columnwidth]{{b21_k_z8.34}.pdf}
\caption{{\em Upper:} The simulated, dimensionless 21~cm auto-power spectrum
(gray) compared to that inferred from our three cross-spectra approach
assuming linear biasing at $\avg{x_i}=0.36$, $z=8.34$. The different colors
correspond to various possible line-luminosity mass relations (L, M, H), as
described in Section~\ref{sec:simulations}. The shaded area shows the
$1\,\sigma$ expected errors for the 21~cm-L-M survey described in
Section~\ref{sec:detectability}. {\em Middle:} The 21~cm bias factor extracted
from our three cross-spectra approach in the different line-luminosity models.
These are compared with that inferred from the 21~cm auto-spectrum (solid
gray) and the 21~cm-density cross-spectrum (gray dashed). {\em Bottom:} The
relative difference between the different bias-factor models. On large scales
all inferences agree.}
\label{fig:b21_vs_k}
\end{figure}

\begin{figure}
\includegraphics[width=\columnwidth]{{b21_z}.pdf}
\caption{{\em Upper:} The inferred 21~cm bias factor as a function of
redshift and volume-averaged ionization fraction at $k=0.1\,\text{Mpc}^{-1}$. The
different colored lines show inferences from our three cross-spectra approach
in the different line-luminosity models (see Figure~\ref{fig:b21_vs_k} and
text). The gray line shows the true bias factor measured from the
21~cm-$\delta$ cross-spectrum. {\em Lower:} The relative error in the three
cross-spectra approach. We find better than $5\%$ agreement for most of the
EoR, with sub-percent accuracy achieved near $\avg{x_i}=0.36$ at $z=8.34$. At
$\avg{x_i} \sim 0.15$ the fields decorrelate on large scales and so the
approach breaks down (see text).}
\label{fig:b21_vs_z}
\end{figure}

At later times, the average ionization fraction and the bubble sizes increase,
and so the scale at which the linear biasing approximation breaks down moves
to larger scales. For example, at $\avg{x_i} = 0.7$ ($z = 7.32$), the approach
breaks down at the $\sim 10\%$ level at $k\sim0.3\,\text{Mpc}^{-1}$, though an
accuracy of a few percent is still achieved at the largest scales considered
here. We expect even better agreement on larger scales than those probed by
our relatively small simulation volume.

In Figure~\ref{fig:b21_vs_z} we turn to consider the redshift evolution of the
21~cm bias factor at $k=0.1\,\text{Mpc}^{-1}$. As emphasized earlier, the redshift
evolution of the 21~cm bias factor encodes interesting information about how
reionization proceeds. The three cross-spectra method generally recovers the
overall evolution of the 21~cm bias factor with redshift and volume-averaged
ionization fraction quite accurately. This suggests that our technique may
help in reconstructing the reionization history of the Universe, or in
verifying the results from 21~cm auto-spectrum measurements.

The one exception is near $\avg{x_i} \sim 0.15$, where our technique is
relatively inaccurate.
This occurs because large-scale overdense regions are
initially brighter in 21~cm than typical regions in our model, and so the
21~cm field is initially {\em positively correlated} with the density
fluctuations. As reionization begins, the large-scale overdense regions ionize
first, which causes the correlation coefficient between the 21~cm and density
fields to reverse sign. Consequently, there is an intermediate period (near
$\avg{x_i} \sim 0.15$ in this model) where the two fields are roughly {\em
uncorrelated} on large scales \citep{Lidz08}. This causes our method to break
down, although we caution that incorporating spin-temperature fluctuations
into the modeling may modify this conclusion. Note also that it will be
challenging to perform line-intensity mapping observations at very early
times before, e.g., sufficient metal enrichment occurs.

While our baseline model assumes the abundant mini-halo sinks scenario, we
have also investigated the fiducial model used in \citet{Lidz08}. Although
this latter model has a different ionization history and bias-factor
evolution, the accuracy of our three cross-spectra method is broadly similar
in this case. For example, near the midpoint of reionization in this model
($z=7.32$, $\avg{x_i}=0.54$), the 21~cm bias extraction also reaches
sub-percent accuracy.

\section{Detectability}\label{sec:detectability}

Encouraged by the success of our approach in simulations, we briefly describe
the survey specifications required to infer 21~cm bias factors using this
technique. Here we consider only rough estimates and defer an in-depth
treatment of noise power spectra, variance from residual foregrounds, and a
full probabilistic, multi-field framework to future work.

We first describe the relevant variance and covariance formulae (for
derivations, see, e.g., \citealt{2015JCAP...03..034V}):
\begin{equation}\label{eq:var_covar}
\begin{split}
\Var{P_{i,j}} &= P_{i,j}^2 + P_{i,\text{tot}}P_{j,\text{tot}} \\
\Cov{P_{i,j}}{P_{i,k}} &= P_{i,\text{tot}}P_{j,k} + P_{i,j}P_{i,k}\text{,}
\end{split}
\end{equation}
where $P_{i,\text{tot}} = P_{i} + N_{i}$ and $N_{i}$ is the instrumental noise
power spectrum of line $i$. For simplicity, we neglect the shot-noise
contribution to each field.
We note that Equation~\ref{eq:var_covar} is only
valid in the Gaussian approximation, but this is suitable for the large scales
of interest in our approach.

We can now apply the standard propagation-of-errors formula to
Equation~\ref{eq:threefields_specific} and substitute
Equation~\ref{eq:var_covar}, yielding:
\begin{equation}\label{eq:noise_P21}
\begin{split}
&\Var{P_{21}} = \\
& \left(\frac{P_{21,\text{C~\textsc{ii}}}}{P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}}}\right)^2\left(P_{21,\text{O~\textsc{iii}}}^2 + P_{21,\text{tot}}P_{\text{O~\textsc{iii}},\text{tot}}\right) \\
&+ \left(\frac{P_{21,\text{O~\textsc{iii}}}}{P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}}}\right)^2\left(P_{21,\text{C~\textsc{ii}}}^2 + P_{21,\text{tot}}P_{\text{C~\textsc{ii}},\text{tot}}\right) \\
&+ \left(\frac{P_{21,\text{C~\textsc{ii}}}P_{21,\text{O~\textsc{iii}}}}{P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}}^2}\right)^2\left(P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}}^2 + P_{\text{C~\textsc{ii}},\text{tot}}P_{\text{O~\textsc{iii}},\text{tot}}\right) \\
&+ 2\,\frac{P_{21,\text{C~\textsc{ii}}}P_{21,\text{O~\textsc{iii}}}}{P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}}^2}\left(P_{21,\text{tot}}P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}} + P_{21,\text{C~\textsc{ii}}}P_{21,\text{O~\textsc{iii}}} \right) \\
&- 2\,\frac{P_{21,\text{C~\textsc{ii}}}^2P_{21,\text{O~\textsc{iii}}}}{P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}}^3}\left(P_{\text{O~\textsc{iii}},\text{tot}}P_{21,\text{C~\textsc{ii}}} + P_{21,\text{O~\textsc{iii}}}P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}} \right) \\
&- 2\,\frac{P_{21,\text{C~\textsc{ii}}}P_{21,\text{O~\textsc{iii}}}^2}{P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}}^3}\left(P_{\text{C~\textsc{ii}},\text{tot}}P_{21,\text{O~\textsc{iii}}} + P_{21,\text{C~\textsc{ii}}}P_{\text{C~\textsc{ii}},\text{O~\textsc{iii}}} \right)\text{.}
\end{split}
\end{equation}

The number of modes in a bin of width $\delta k$ centered on $k$ is
\begin{equation}\label{eq:num_modes}
N_m = \frac{4\pi k^2 \delta k}{V_{\text{fund}}}\text{,}
\end{equation}
where $V_{\text{fund}}$ is the volume of a fundamental mode. We assume a
square survey area and therefore compute
\begin{equation}\label{eq:vfund}
V_{\text{fund}} = \frac{(2\pi)^3}{L_{\bot}^2L_{\parallel}}\text{,}
\end{equation}
where $L_{\bot}$ is the side length of the survey area and $L_{\parallel}$ is
the length of the redshift bin $\Delta z$.

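As a rough numerical aid, the sketch below evaluates
Equation~\ref{eq:noise_P21} together with the mode counting of
Equations~\ref{eq:num_modes} and \ref{eq:vfund} for user-supplied spectra and
noise levels; the function names are illustrative, and the same Gaussian,
white-noise assumptions apply.
\begin{verbatim}
import numpy as np

def var_p21(p21c, p21o, pco, p21tot, pctot, potot):
    # Per-mode variance of the three-cross-spectra estimate of
    # P_21 (Equation eq:noise_P21); P_tot = P + N for each field.
    return ((p21c / pco)**2 * (p21o**2 + p21tot * potot)
            + (p21o / pco)**2 * (p21c**2 + p21tot * pctot)
            + (p21c * p21o / pco**2)**2 * (pco**2 + pctot * potot)
            + 2 * (p21c * p21o / pco**2) * (p21tot * pco + p21c * p21o)
            - 2 * (p21c**2 * p21o / pco**3) * (potot * p21c + p21o * pco)
            - 2 * (p21c * p21o**2 / pco**3) * (pctot * p21o + p21c * pco))

def n_modes(k, dk, l_perp, l_par):
    # Equations eq:num_modes and eq:vfund for a square survey of
    # side length l_perp and radial depth l_par (comoving Mpc).
    v_fund = (2.0 * np.pi)**3 / (l_perp**2 * l_par)
    return 4.0 * np.pi * k**2 * dk / v_fund

def snr_p21(p21, var_per_mode, nm):
    # Signal-to-noise on P_21 in a single k bin.
    return p21 / np.sqrt(var_per_mode / nm)
\end{verbatim}
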
We assume a joint survey area of $100\,\text{deg}^2$ and bin widths of $\delta
k = 0.03\,\text{Mpc}^{-1}$ and $\Delta z = 0.25$. In order to make a rough estimate,
we assume that each experiment reaches sample-variance-limited sensitivity at
$k=0.1\,\text{Mpc}^{-1}$, with $N_i=P_i$ at this wavenumber, and adopt a pure,
isotropic white-noise power spectrum. In the case of [C~\textsc{ii}], the required
noise depends on the uncertain average specific intensity, which determines,
in part, the signal strength, $P_i$. A plausible value is $\avg{I_{\text{C~\textsc{ii}}}}=5
\times 10^2\,\text{Jy}/\text{sr}$ at $z=8.34$ \citep{2018ApJ...867...26B}. In this
case, $N_{\text{C~\textsc{ii}}} = 1.6 \times 10^9$, $2.5 \times 10^9$, and $3.9 \times
10^9\,(\text{Jy}/\text{sr})^2\,\text{Mpc}^3$ for the L, M, and H models of the
[C~\textsc{ii}] line, respectively, at $z=8.34$. These noise requirements are
comparable to the values forecast for Stage-II [C~\textsc{ii}] line-intensity mapping
experiments in \citet{silva15:prospects,lidz2016:remove_interloper}. We expect
broadly similar noise requirements for hypothetical future [O~\textsc{iii}] surveys but
defer detailed forecasts to future work. As we discussed previously
\citep{2018ApJ...867...26B}, the 21~cm sensitivity requirement assumed here
seems plausible given that HERA-350 will {\em image} some large-scale modes
\citep{DeBoer:2016tnn} --- although the white-noise approximation is rather
crude and should be refined in future work.

We caution that the strength of the [C~\textsc{ii}] signal at the redshifts of interest
is quite uncertain. A broad range of estimates appears in the current
literature, depending on assumptions about the correlation between [C~\textsc{ii}]
luminosity and SFR at high redshift, the total star-formation rate density
(estimates from UV luminosity functions are sensitive to whether and how one
extrapolates to faint luminosities beyond current detection limits), and the
host-halo masses of [C~\textsc{ii}] emitters. For example, our model values for
$\avg{I_{\text{C~\textsc{ii}}}}$ are similar to a number of recent forecasts
\citep{2018arXiv180204804D, 2015ApJ...806..209S}, but are more than an order
of magnitude larger than some more pessimistic estimates in
\citet{2015ApJ...806..209S,2018arXiv181208135C}. In any case, at fixed
luminosity-weighted bias, the required noise scales quadratically with the
average specific intensity, and so the reader can rescale our results
according to their preferred specific-intensity model. For instance, in the
case of $\avg{I_{\text{C~\textsc{ii}}}} = 20\,\text{Jy}/\text{sr}$ \citep{2018arXiv181208135C}, one
would require $N_{\text{C~\textsc{ii}}} = 2.6 \times 10^6$, $4 \times 10^6$, and $6.2 \times
10^6\,(\text{Jy}/\text{sr})^2\,\text{Mpc}^3$ for the L, M, and H models of the
[C~\textsc{ii}] line, respectively, at $z=8.34$. On the other hand, a more moderate
estimate of $\avg{I_{\text{C~\textsc{ii}}}} = 100\,\text{Jy}/\text{sr}$ \citep{
2015ApJ...806..209S,2018arXiv180204804D} requires $N_{\text{C~\textsc{ii}}} = 6.4
\times 10^7$, $1 \times 10^8$, and $1.6 \times
10^8\,(\text{Jy}/\text{sr})^2\,\text{Mpc}^3$ for the L, M, and H models of the
[C~\textsc{ii}] line, respectively, at $z=8.34$.

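As a quick consistency check of this quadratic scaling, rescaling the fiducial
L-model requirement from $\avg{I_{\text{C~\textsc{ii}}}} = 500\,\text{Jy}/\text{sr}$ down to
$20\,\text{Jy}/\text{sr}$ gives
\begin{equation*}
N_{\text{C~\textsc{ii}}} \simeq 1.6 \times 10^9 \times \left(\frac{20}{500}\right)^2
(\text{Jy}/\text{sr})^2\,\text{Mpc}^3 \simeq 2.6 \times
10^6\,(\text{Jy}/\text{sr})^2\,\text{Mpc}^3\text{,}
\end{equation*}
in agreement with the value quoted above.
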
\begin{deluxetable}{cCCC}
\tablecaption{The noise power spectrum for upcoming [C~\textsc{ii}] surveys at $z=7.4$.
\label{tab:noise}}
\tablehead{\colhead{survey} & \colhead{$A_{\text{survey}}$} & \colhead{$A_{\text{pix}}$} & \colhead{$N_{\text{C~\textsc{ii}}}$} \\ 
\colhead{} & \colhead{$(\text{deg}^2)$} & \colhead{$(\text{deg}^2)$} & \colhead{$((\text{Jy}/\text{sr})^2\,\text{Mpc}^3)$} } 
\startdata
CCAT-p & 2 & 2.5\times10^{-4} & 2.66\times10^{9} \\
CONCERTO & 1.4 & 6.7\times10^{-5} & 2.04\times10^{9} \\
TIME & 1.3 \times 0.0084 & 6.7\times10^{-5} & 1.04\times10^{9} \\
\enddata
\tablerefs{See \citet{2018arXiv181208135C} for more details.}
\end{deluxetable}

With the assumed noise and survey requirements for our fiducial model, we show
the resulting error bars in the {\em Upper} and {\em Middle} panels of
Figure~\ref{fig:b21_vs_k} for our particular choice of binning. At least for
the hypothetical surveys considered here, the 21~cm bias factor may be
recovered with good statistical precision. In other words, if
sample-variance-limited sensitivity can be reached at $k=0.1\,\text{Mpc}^{-1}$ in
each line over a common survey area of $\sim 100\,\text{deg}^2$, then a strong
detection appears feasible. Of course, we have neglected sample-variance
contributions from residual foregrounds, among other complications, and so
this should be interpreted as a best-case scenario. On the other hand,
increasing the common survey area above $100\,\text{deg}^2$, for example,
could help shrink the error bars.

While our fiducial [C~\textsc{ii}] survey is somewhat futuristic, we can also consider
the prospects with current and shortly upcoming surveys, specifically
CCAT-prime\footnote{\url{http://www.ccatobservatory.org}}
\citep{2018SPIE10700E..1MS},
CONCERTO\footnote{\url{https://people.lam.fr/lagache.guilaine/CONCERTO.html}}
\citep{Lagache:2018hmk}, and
TIME\footnote{\url{https://cosmology.caltech.edu/projects/TIME}}
\citep{Crites14}. We use the pixel noise values, $\sigma_{\text{pix}}
t_{\text{pix}}^{-1/2}$, for each survey from \citet{2018arXiv181208135C}. We
report the noise power spectrum at $z=7.4$ (assuming a pure white-noise
spectrum) in Table~\ref{tab:noise}. We generically find that
$N\sim2\times10^9\,(\text{Jy}/\text{sr})^2\,\text{Mpc}^3$. If we assume a model
with $\avg{I_{\text{C~\textsc{ii}}}}\sim500\,\text{Jy}/\text{sr}$, then even the
first-generation surveys reach our requisite noise level. However, deeper
surveys will be needed in the case of the more pessimistic estimates of
$\avg{I_{\text{C~\textsc{ii}}}}\sim100$ or $\sim20\,\text{Jy}/\text{sr}$. That being said,
our fiducial calculations also assume a larger survey area of
$100\,\text{deg}^2$. At $z=8.34$ we find a $\text{S}/\text{N}$ of $3.3$,
$2.7$, and $2.9$ for the L-M, M-H, and H-L models, respectively, at
$k=0.1\,\text{Mpc}^{-1}$ and a bin width of $\Delta k=0.03\,\text{Mpc}^{-1}$. Since the
number of modes scales linearly with the survey area, and the
$\text{S}/\text{N}$ therefore scales with its square root, we estimate that
CCAT-p might be able to recover a $\text{S}/\text{N}$ of $0.5$, $0.4$, and
$0.4$ for the L-M, M-H, and H-L models, respectively, at
$k=0.1\,\text{Mpc}^{-1}$. Including some higher $k$-modes, even this
first-generation survey might be capable of a marginal detection (if [O~\textsc{iii}]
can be surveyed as well), but this is only for our optimistic signal-strength
model.

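To make the survey-area scaling explicit, the CCAT-p numbers follow from
rescaling the fiducial $100\,\text{deg}^2$ forecast; for the L-M model, for
example,
\begin{equation*}
\left.\frac{\text{S}}{\text{N}}\right|_{\text{CCAT-p}} \simeq 3.3 \times
\sqrt{\frac{2\,\text{deg}^2}{100\,\text{deg}^2}} \simeq 0.5\text{.}
\end{equation*}
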
Since the strength of the [C~\textsc{ii}] signal is likely a strong function of
redshift, the survey requirements should be less stringent at $z \sim 7$ than
in the $z \sim 8$ case considered above. The main effect here should be from
redshift evolution in the average specific intensity; again, the noise
requirements scale with the square of the average intensity. The required
noise can therefore be adjusted according to one's preferred model for
redshift evolution in the signal strength.

\section{Conclusions}\label{sec:conclusions}

We have shown that the amplitude of large-scale 21~cm fluctuations may be
inferred by measuring cross-power spectra between the 21~cm fluctuations and
each of two separate line-intensity maps, such as [C~\textsc{ii}] and [O~\textsc{iii}]. Although
it has long been recognized that the cross-power spectrum between two fields
is more robust to foreground contamination than the auto-power spectrum of
either field alone, the amplitude of a single cross-power spectrum provides
only a product of two bias factors. We found that by using a suitable
combination of three cross-power spectra
(Equations~\ref{eq:threefields}~and~\ref{eq:threefields_specific}), one can
instead infer the 21~cm bias alone to high accuracy.

Quantitatively, in the reionization model we considered, the accuracy reaches
the percent level on large scales ($k \sim 0.1-0.3\,\text{Mpc}^{-1}$) during much of
the EoR. The inferred bias-factor evolution can then be compared to that
extracted from the 21~cm auto-spectrum. In principle, checking whether the
21~cm auto-power spectrum follows linear biasing on large scales might itself
be a good systematics check. However, linear biasing holds only over a limited
span of wavenumbers, and early measurements may probe a small dynamic range in
spatial scale. Hence we believe that our three cross-spectra approach might
play an important role in confirming initial detections. Since our method
underestimates $P_{21,21}$ on intermediate scales, it can place informative
lower limits (i.e., $\sim 50\%$ of the true value) down to $k\sim1\,\text{Mpc}^{-1}$,
depending on the stage of reionization. More work is necessary, however, to
determine whether there are allowed reionization and line-intensity models in
which our technique actually overestimates $P_{21,21}$.

Although we focused here on the case of 21~cm fluctuations during the EoR, the
method has broader applicability. For example, one can also estimate the bias
of the [C~\textsc{ii}] and [O~\textsc{iii}] fluctuations by using a similar ratio of
cross-spectra. This should help circumvent the line-interloper problem that
presents a challenge for such surveys \citep[e.g.][]{kovetz2017:im_review}.
Since the ionized bubbles lead to scale-dependent biasing in the 21~cm field
on large spatial scales, the 21~cm case is an especially demanding
application, and we expect even better performance for [C~\textsc{ii}], [O~\textsc{iii}], and
related lines.

In order to implement the strategy proposed here, there must be a coordinated
effort to probe the same regions of the sky over common redshifts in multiple
lines of interest. Ultimately, we envision line-intensity mapping surveys in
$N$ different lines, all probing the same cosmological volume. Among other
benefits, this will provide $N(N-1)/2$ measurements of the bias factor in each
line using the same basic technique outlined here.

\acknowledgments
We thank the anonymous referee for providing helpful comments. We thank Matt
McQuinn for the simulations used in this analysis. A.B. would like to thank
Todd Phillips for helpful discussions. A.B. was supported in part by the Roy
\& Diana Vagelos Program in the Molecular Life Sciences and the Roy \& Diana
Vagelos Challenge Award. The work of A.B. and F.V.-N. is supported by the
Simons Foundation.

\software{\texttt{colossus} \citep{2018ApJS..239...35D}, \texttt{matplotlib}
\citep{Hunter:2007}, \texttt{numpy} \citep{numpy:2011}, and \texttt{scipy}
\citep{scipy:2001}.}