\section{Annulus maps}

Let $I=[0,1]$ and let $\iota: I \to I$ be the involution given by $\iota(x)=1-x$. Identify $\mathbb{S}^1$ with $\mathbb{R}/\mathbb{Z}$.
We give $I \times \mathbb{S}^1$ the product orientation. Fix an even integer $m\geq 2$ and let ${\mathcal D}:=(d_0, \ldots, d_{m-1})$
be a sequence of positive integers such that $\sum_{i=0}^{m-1} \frac{1}{d_i} < 1$.
Then there exist real numbers $a_i, b_i$, $i=0, \ldots, m-1$, such that for each $i$, $|b_i-a_i|=\frac{1}{d_i}$ and
\[ 0 < a_0 < b_0 < a_1 < b_1 < \ldots < a_{m-1} < b_{m-1} < 1.\]
Fix such a choice $a_0, b_0, \ldots, a_{m-1}, b_{m-1}$.
For each $i$, let $J_i=[a_i, b_i]$, and let $g_i: I \to J_i$ be the unique affine homeomorphism
which is orientation-preserving if $i$ is even and orientation-reversing if $i$ is odd.
This iterated function system on the line has a unique attractor $C({\mathcal D})$, and its Hausdorff dimension,
by the pressure formula \cite[Thm. 5.3]{falconer:techniques}, is the unique real number $\lambda=\lambda({\mathcal D})$ satisfying
\[ \sum_{i=0}^{m-1}\frac{1}{d_i^\lambda} = 1.\]

Let
\[ \tilde{F}: \left(\sqcup_{i=0}^{m-1}J_i\right)\times \mathbb{S}^1 \to I \times \mathbb{S}^1 =: A \]
be the map whose restriction to the annulus $A_i:=J_i \times \mathbb{S}^1$ is given by
\[ \tilde{F}|_{A_i}(x, t) = (g_i^{-1}(x) , (-1)^i d_i \cdot t \bmod 1).\]

That is: $\tilde{F}|_{A_i}$ is an orientation-preserving covering map of degree $d_i$ which is a Euclidean homothety with factor
$d_i$ and which preserves or reverses the linear orientation on the interval factor according to whether $i$ is even or odd,
respectively.

The invariant set associated to $\tilde{F}$ is
\[ X({\mathcal D}) := C({\mathcal D}) \times \mathbb{S}^1 = \bigcap_{n\geq 0} \tilde{F}^{-n}(A).\]
From \cite[\S 3]{kmp:ph:examples}, we have

\begin{prop}
\label{prop:compute_confdim}
The Ahlfors-regular conformal dimension of $X({\mathcal D})$ is equal to $1+\lambda({\mathcal D})$.
\end{prop}

This statement is a particular case of a criterion originally due to Pansu \cite[Prop. 3.7]{ph:bbki}; see also
Tyson's theorem \cite[Thm 15.10]{heinonen:analysis}.


\section{Proofs of Theorems \ref{thm:ctimesq} and \ref{thm:ctimesq_sequence}}

Let ${\mathcal D}$ be a sequence of positive integers defining a family of annulus maps $\tilde{F}$ as in the previous section,
and put $X=X({\mathcal D})$.

\begin{prop}
\label{prop:extend}
There is a smooth embedding $A \hookrightarrow \mathbb{S}^2$ such that (upon identifying $A$ with its image) the map
$\tilde{F}: \sqcup_i A_i \to A$ extends to a smooth map $F: \mathbb{S}^2 \to \mathbb{S}^2$ whose iterates are uniformly quasiregular.
There is a quasiconformal (equivalently, a quasisymmetric) homeomorphism $h: \mathbb{S}^2 \to \mbox{$\widehat{\C}$}$ such that
$h\circ F \circ h^{-1}$ is a hyperbolic rational map $f$, and $h(X)=J_f$.
\end{prop}

\noindent {\bf Proof: } The existence of the extension $F$ is a straightforward application of quasiconformal surgery;
we merely sketch the ideas and refer to \cite{kmp:tan:surgery} for details; see also the forthcoming text \cite{branner:surgery}
devoted to this topic. The next two paragraphs outline this construction.

The linear ordering on the interval $I$ gives rise to a linear ordering on the set of $2m$ boundary components of the
annuli $A_0, \ldots, A_{m-1}$.
We may regard $A$ as a subset of a smooth metric sphere $S^2$ conformally equivalent to $\mathbb{S}^2$.
For $i=1, \ldots, m-1$ let $C_i$ be the annulus between $A_{i-1}$ and $A_i$.
Let $D_0, D_1$ be the disks bounded by the least, respectively greatest, boundary of $A$, so that the interiors of
$D_0, A, D_1$ are disjoint.
Let $D_0'$ be the disk bounded by the least boundary component of $A_0$ and $D_1'$ be the disk bounded by the greatest boundary component of
$A_{m-1}$.

We now extend $\tilde{F}$ as follows. See Figure~\ref{fig:caric}.
\begin{figure}
\begin{center}
\includegraphics[width=5in]{caric}
\caption{Caricature of the extended mapping, $F$. }
\label{fig:caric}
\end{center}
\end{figure}
Send $D_0' $ to $D_0$ by a proper map of degree $d_0$ ramified over a single point $x$, so that in suitable holomorphic coordinates
it is equivalent to $z \mapsto z^{d_0}$ acting near the origin; thus $D_0 \subset D_0'$ is mapped inside itself.
Similarly, send $D_1' $ to $D_0$ by a proper map of degree $d_{m-1}$ ramified only over $x$, so that in suitable holomorphic
coordinates it is equivalent to $z \mapsto 1/z^{d_{m-1}}$ acting near infinity; thus $D_1 \subset D_1'$ is mapped into $D_0$.
To extend over the annulus $C_i$ between $A_{i-1}$ and $A_{i}$, note that both boundary components of $C_i$ map either to the least,
or to the greatest, component of $\bdry A$. It is easy to see that there is a smooth proper degree $d_{i-1}+d_{i}+1$
branched covering from $C_i$ to the corresponding disk $D_0$ (if $i$ is even) or $D_1$ (if $i$ is odd).
This completes the definition of the extension $F$.

It is easy to arrange that $F$ is smooth, hence quasiregular. We may further arrange that the locus where $F$ is not conformal
is contained in a small neighborhood of $C_1 \union \ldots \union {C_{m-1}}$. This locus is nonrecurrent, so the iterates of
$F$ are uniformly quasiregular.
By a theorem of Sullivan \cite[Thm 9]{DS2}, $F$ is conjugate via a quasiconformal homeomorphism $h: S^2 \to \mbox{$\widehat{\C}$}$
to a rational map $f$. By construction, every point not in $h(X)$ converges under $f$ to a superattracting fixed point $h(x)$ in
the disk $h(D_0)$, so $f$ is hyperbolic and $h(X)=J_f$. \qed

We now establish a converse.

\begin{prop}
\label{prop:restrict}
Suppose $f: \mbox{$\widehat{\C}$} \to \mbox{$\widehat{\C}$}$ is a rational map for which there exist a closed annulus $A$ and essential pairwise disjoint subannuli
$A_0, A_1, \ldots, A_{m-1}$, $m$ even, contained in the interior of $A$ such that (with respect to a linear ordering induced by $A$)
$A_0 < A_1 < \ldots < A_{m-1}$. Let $D_0, D_1$ be the disks bounded by the least (respectively, greatest) boundary component of $A$.
Further, suppose that for each $i=0, \ldots, m-1$, $f|_{A_i}: A_i \to A$ is a proper covering map of degree $d_i$,
with $f$ mapping the greatest boundary component of $A_i$ and the least boundary component of $A_{i+1}$ to the boundary of $D_1$ if $i$ is even,
and to the boundary of $D_0$ if $i$ is odd.
Put ${\mathcal D} = (d_0, d_1, \ldots, d_{m-1})$. Let $\tilde{f}=f|_{\sqcup_{i=0}^{m-1}A_i}$ and put
$Y=\intersect_{n\geq 0}\tilde{f}^{-n}(A)$. Then $Y \subset J_f$, $\tilde{f}(Y)=Y=\tilde{f}^{-1}(Y)$,
and there is a quasisymmetric homeomorphism $h: Y \to X$ conjugating $\tilde{f}|_Y: Y \to Y$ to $\tilde{F}|_X: X \to X$,
where $\tilde{F}$ is the family of annulus maps defined by the data ${\mathcal D}$.
\end{prop}


\noindent {\bf Proof: } The conformal dynamical systems of annulus maps defined by $\tilde{f}$ and by $\tilde{F}$ are combinatorially equivalent
in the sense of McMullen \cite[Appendix A]{ctm:siegel}, so by \cite[Thm. A.1]{ctm:siegel} there exists a quasiconformal
(hence quasisymmetric) conjugacy $\tilde{h}$ from $\tilde{f}$ to $\tilde{F}$; we set $h=\tilde{h}|_Y$.
\qed

Combined with Proposition \ref{prop:compute_confdim}, this yields:
\begin{cor}
\label{cor:lower_bound}
Under the assumptions of Proposition \ref{prop:restrict},
$\hbox{ARconfdim}(J_f) \geq 1+\lambda({\mathcal D})$, with equality if $Y=J_f$.
\end{cor}

\noindent{\bf Proof of Theorem \ref{thm:ctimesq}.} For $\epsilon \in \mathbb{C}$ let $f_\epsilon(z)=z^2+\epsilon/z^3$.
McMullen \cite[\S 7]{ctm:automorphisms} shows that for $|\epsilon|$ sufficiently small the map $f_\epsilon$ restricts
to a family of annulus maps with the combinatorics determined by the data ${\mathcal D}=(2,3)$ and with Julia set homeomorphic
to the repellor $X_{(2,3)}$ determined by ${\mathcal D}$; it is easy to see that $\epsilon=10^{-9}$ will do.

Exactly the same arguments applied to the family $g_\epsilon (z)=z^2+\epsilon/z^4$ show that if $|\epsilon|$
is sufficiently small, the map $g_\epsilon$ restricts to a family of annulus maps with the combinatorics determined
by ${\mathcal D}=(2,4)$ and whose Julia set is homeomorphic to the corresponding repellor $X_{(2,4)}$.
It is easy to see that $\epsilon=10^{-20}$ will do; one may take $A=\{10^{-6} < |z| < 10^{10}\}$.
By Corollary \ref{cor:lower_bound} and Proposition \ref{prop:compute_confdim}, the Ahlfors-regular conformal dimensions
$1+\lambda_f, 1+\lambda_g$ of $J_f, J_g$ satisfy the respective equations $2^{-\lambda_f}+3^{-\lambda_f} = 1$ and
$2^{-\lambda_g} + 4^{-\lambda_g}=1$ and are therefore unequal.
Since the Ahlfors-regular conformal dimension is a quasisymmetry invariant, the proof is complete.\qed
\gap

\noindent{\bf Proof of Theorem \ref{thm:ctimesq_sequence}.} For an even integer $n\geq 4$, let
${\mathcal D}_n = (d_0, d_1, \ldots, d_{n-1})$
where $d_0=n+1$ and $d_i=n$ for $i=1, \ldots, n-1$.
Let $f_n$ be the rational map given by Proposition \ref{prop:extend}. By Corollary \ref{cor:lower_bound},
$\hbox{ARconfdim}(J_{f_n})$ is $1$ plus the unique positive root
$\lambda_n$ of the equation
\[ (n+1)^{-\lambda} + (n-1)n^{-\lambda}=1.\]
The left-hand side is strictly decreasing in $\lambda$ and is larger than $1$ when $\lambda=\frac{\log(n-1)}{\log(n)}$, so
$\lambda_n > \frac{\log(n-1)}{\log n}$ (while $\lambda_n<1$, since the left-hand side is smaller than $1$ at $\lambda=1$) and thus $\lambda_n \to 1$ as $n \to \infty$.
Hence $\hbox{ARconfdim}(J_{f_n}) \to 2$ as $n \to \infty$. \qed
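Although not needed for the proofs, the exponents above are easy to evaluate numerically: the left-hand side of $\sum_{i} d_i^{-\lambda}=1$ is continuous and strictly decreasing in $\lambda$, so bisection applies. The following short Python sketch (an illustration of ours; the helper name \texttt{dim\_exponent} is not from the sources cited above) computes $\lambda({\mathcal D})$ for the data sequences used in the two proofs.
\begin{verbatim}
def dim_exponent(degrees, tol=1e-12):
    """Solve sum_i d_i^(-lam) = 1 for lam by bisection.

    The left-hand side is strictly decreasing in lam, exceeds 1 at
    lam = 0 (there are at least two d_i), and tends to 0 as lam grows,
    so the positive root is unique whenever every d_i >= 2.
    """
    f = lambda lam: sum(d ** -lam for d in degrees) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:            # grow the bracket until f changes sign
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(dim_exponent((2, 3)))     # ~0.7879, for f_eps = z^2 + eps/z^3
print(dim_exponent((2, 4)))     # ~0.6942, for g_eps = z^2 + eps/z^4
for n in (4, 16, 64, 256):      # D_n = (n+1, n, ..., n): lambda_n -> 1
    print(n, dim_exponent((n + 1,) + (n,) * (n - 1)))
\end{verbatim}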
\section{Proof of Theorem \ref{thm:carpets}}

Fix an even integer $n \geq 2$. For each such $n$, we will build a rational function $f_n: \mbox{$\widehat{\C}$} \to \mbox{$\widehat{\C}$}$
with the following properties: (1) its Julia set is homeomorphic to the Sierpi\'nski carpet, and
(2) there exist an annulus $A \subset \mbox{$\widehat{\C}$}$ and parallel pairwise disjoint essential subannuli
$A_0, \ldots, A_{n-1}$
such that for each $i=0, \ldots, n-1$, the restriction $f_n|_{A_i}: A_i \to A$ is a proper holomorphic covering of
degree $(n+4)$, just as in the previous section.
Theorem \ref{thm:carpets} will then follow immediately from Corollary \ref{cor:lower_bound} with
${\mathcal D}=(\; \underbrace{n+4, \ldots, n+4}_{n}\; )$.

We will first build $f_n$ as a function from one Riemann sphere to another, and then re-identify domain and range.
We are grateful to Daniel Meyer for suggesting this construction, which is more explicit than our original one.

\begin{figure}
\begin{center}
\includegraphics[width=5in]{tiling}
\caption{The rational map $f_n$ when $n=2$. The domain and codomain are the doubles of the two polygons across their boundaries.
Note the conformal symmetries. }
\label{fig:tiling}
\end{center}
\end{figure}

We shall suppress the dependence on $n$ in our construction.
For $z \in \mbox{$\widehat{\C}$}$ set $j(z)=\bar{z}$, and let us consider the unique Weierstrass function $\mathfrak{P}:\mathbb{C}\to\mbox{$\widehat{\C}$}$ which is $\mathbb{Z}[i]$-periodic
and which maps $0$, $1/2$, $(1+i)/2$ and $i/2$ to $\infty$, $-1$, $0$ and $1$, respectively.
We may thus consider the Riemann sphere $\mbox{$\widehat{\C}$}$ as the quotient of the Euclidean rectangle $[0,1/2]\times [-1/2,1/2]$
upon identifying boundary points via the map $j$. We view $\mbox{$\widehat{\C}$}$ as the union of two Euclidean
squares: the ``white square'' $[0,1/2]\times [0,1/2]$ and the ``black square''
$[0,1/2]\times [-1/2,0]$. The map $\mathfrak{P}$ maps the white square to the upper half-plane $\mathbb{H}_+$
and the black square to the lower half-plane $\mathbb{H}_-$.


To define the codomain, put $\delta = \frac{1}{2(n+4)}$, and let
$$Q_+= [0,1/2]^2 \cup ([-\delta,0]\times [0, \delta]) \cup ([0, \delta]\times [-\delta,0])$$
and $Q_-= j(Q_+)$. Both polygons $Q_+$ and $Q_-$ are tiled by $(n+4)^2+2$ squares of side length $\delta$.
Let $\Sigma$ be the sphere obtained from the disjoint union $Q_+ \sqcup Q_-$ by gluing their boundaries via the map $j$. Then $\Sigma$ inherits a conformal structure from that of $Q_\pm$: away from the corners this is clear, and by the removable singularities theorem
this conformal structure extends over the corners. Note that the map $j$
induces an anticonformal involution of $\Sigma$ which we denote again by $j$.




Define $F_+:Q_+\to \mbox{$\widehat{\C}$}$ by $F_+(z)=\mathfrak{P}((n+4)z)$ and $F_-:Q_-\to\mbox{$\widehat{\C}$}$ by $F_-=j\circ F_+\circ j$.
This defines a holomorphic map $F:\Sigma\to \mbox{$\widehat{\C}$}$ of degree $(n+4)^2+2$.
Considering the tiling of $\Sigma$ by the squares of side length $\delta$, we may color them
white and black in such a way that a white square is mapped under $F$ to $\mathbb{H}_+$ and
a black one to $\mathbb{H}_-$; see Figure~\ref{fig:tiling}. The critical points of $F$ occur where four or more squares meet.
By construction, the image of every critical point is
one of the points $-1, 0, 1, \infty$.

By the Riemann mapping theorem, there exists a unique conformal map $\varphi_+:Q_+\to\overline{\mathbb{H}_+}$
such that $\varphi_+(0,1/2,(1+i)/2)=(\infty, -1,0)$. Note that the map $x+iy\mapsto y+ix$ defines
an anticonformal involution of $Q_+$ which fixes $0$ and $(1+i)/2$ and interchanges
$1/2$ and $i/2$: this forces $\varphi_+(i/2)=1$. Set $\varphi_-:Q_-\to\overline{\mathbb{H}_-}$
by $\varphi_-=j\circ\varphi_+\circ j$.
Both maps patch together to form a conformal map $\varphi:\Sigma\to\mbox{$\widehat{\C}$}$.
Let us finally set $$f= F\circ \varphi^{-1}:\mbox{$\widehat{\C}$}\to\mbox{$\widehat{\C}$}\,.$$
Every critical point of $f$ is first mapped into the set $\{-1, 0, 1, \infty\}$, and every
point of this set maps to $\infty$ under $f$,
which is therefore a fixed critical point at which $f$ has local degree $3$.
Hence $f$ is a critically finite hyperbolic rational map.



Let $A_+'= [2\delta, 1/2 - 2\delta]\times [0,1/2]\subset Q_+$ and $A_-'=j(A_+')\subset Q_-$.
Their union defines an annulus $A'$ in $\Sigma$, and we let $A=\varphi(A')$.

The preimage $F^{-1}(A)$ consists of $(n+5)$ disjoint annuli, each compactly contained
in a vertical strip of width $\delta$ tiled by squares. Among them, there are $n$ subannuli
$A_0',\ldots, A_{n-1}'$ compactly contained in $A'$, each mapping under $F$ with degree $n+4$.
Let $A_j=\varphi(A_j')$: then $A_j$ is compactly contained in $A$ and $f:A_j\to A$ has degree $n+4$.
By Corollary \ref{cor:lower_bound}, $\hbox{ARconfdim}(J_{f}) \geq 1+\lambda$, where $\lambda=\lambda_n$
is the unique positive
root
of the equation
\[ n(n+4)^{-\lambda}=1.\]
As $n \to \infty$, clearly $\lambda_n = \frac{\log n}{\log (n+4)} \to 1$ and so $\hbox{ARconfdim}(J_{f_n})\to 2$.



It remains only to show that $J_{f}$ is a Sierpi\'nski carpet.
We imitate the arguments of Milnor and Tan Lei given in \cite[Appendix]{milnor:quadratic}.
They first show the following:

\begin{lemma}
\label{lemma:jdomain}
Let $f$ be a hyperbolic rational map and $z$ a fixed point at which the local degree of $f$ equals $k \geq 2$.
Suppose $W$ is the immediate basin of attraction of $z$.
Suppose there exist domains $U, V$ each homeomorphic to the disk such that
$\cl{W} \subset U \subset \cl{U} \subset V$ and $f|_U: U \to V$ is proper and of degree $k$.
Then $\bdry W$ is a Jordan curve.
\end{lemma}

Note that the conformal isomorphism $\varphi: \Sigma \to \mbox{$\widehat{\C}$}$ sends the union of the top and right-hand edges of the square $[0,1/2]^2$ to $[-1,1]$ and sends the point in Figure~\ref{fig:tiling} labelled $0$ to infinity.
Let $V=\mbox{$\widehat{\C}$}\setminus [-1,1]$.
The map $f$ has a unique periodic Fatou component $W$ ---the immediate basin of $\infty$---
and clearly $W \subset V$.
The domain $V$ is simply connected and contains exactly one critical value of $f$, namely,
the point $\infty$.
It follows that there is exactly one component $U$ of $f^{-1}(V)$ containing $\infty$, and
$\cl{W} \subset U \subset \cl{U} \subset V$ and $f|_U: U \to V$ is proper and of degree $3$.
By Lemma \ref{lemma:jdomain}, $\bdry W$ is a Jordan curve.
The remaining arguments needed are identical to those given in {\em op. cit.}:
since $f$ is hyperbolic and critically finite, the Julia set is one-dimensional, connected and locally connected, and there are no critical points in the Julia set.
It follows that every Fatou component is a Jordan domain, and that the closures of the Fatou components
are pairwise disjoint. Therefore $J_{f}$ is homeomorphic to the Sierpi\'nski carpet \cite{whyburn:sierpinski}.
\qed



\section{Multiple Layer Analysis of Batch Normalization}
\label{sec:multi_layer}

We now extend the single-layer analysis of the previous section to the composition of two or more DN layers.
We begin by showing that the layers' $\boldsymbol{\mu}_\ell$ continue to translate the hyperplanes that comprise the spline partition boundary such that they concentrate around the training data.
We then show that the layers' $\boldsymbol{\sigma}_\ell$ fold those same hyperplanes with the same goal.


\subsection{Deep network spline partition (multiple layers)}
\label{sec:spline2}

Taking advantage of the fact that a composition of multiple CPA splines is itself a CPA spline, we now extend the results from Section~\ref{sec:spline1} regarding one DN layer to the composition of layers $1,\dots,\ell$, $\ell>1$, that maps the DN input $\boldsymbol{x}$ to the feature map $\boldsymbol{z}_{\ell+1}$.\footnote{Our analysis applies to any composition of $\ell$ DN layers; we focus on the first $\ell$ layers only for concreteness.} We denote the partition of this mapping by $\Omega_{|\ell}$,
where we introduce the shorthand $|\ell$ to denote the mapping through layers $1,\dots,\ell$.
Appendix~\ref{app:details} provides closed-form formulas for the per-region affine mappings.

As in Section~\ref{sec:spline1}, we are primarily interested in the boundary $\partial\Omega_{|\ell}$ of the spline partition $\Omega_{|\ell}$.
Recall that the boundary of the spline partition of a single layer was easily found in (\ref{eq:h})--(\ref{eq:boundary}) as the arrangement of the hyperplanes formed where the layer's pre-activations equal zero. With multiple layers, the situation is almost the same as in (\ref{eq:Hk})
\begin{align}
 \partial \Omega_{|\ell} = \bigcup_{j=1}^{\ell}
 \bigcup_{k=1}^{D_{j}} \left\{\boldsymbol{x} \in \mathbb{R}^{D_1}: h_{j,k}=0\right\};
 \label{eq:zero_set}
\end{align}
further details are provided in Appendix~\ref{app:details}.
The salient result of interest to us is that $\partial \Omega_{|\ell}$ is constructed from the hyperplanes $\mathcal{H}_{j,k}$ pulled back through the preceding layer(s). This process {\em folds} those hyperplanes (a toy depiction is given in Figure~\ref{fig:Thm3}) based on the preceding layers' partitions such that each folded $\mathcal{H}_{j,k}$ consists of a collection of {\em facets}
\begin{align}
\mathcal{F}_{j,k,\omega}
=\{\boldsymbol{x} \in \omega: h_{j,k}=0\}=
\{\boldsymbol{x} \in \omega: \langle \boldsymbol{w}_{j,k},\boldsymbol{z}_j\rangle=\mu_{j,k}\}, \quad \omega \in \Omega_{|j},\label{eq:facets}
\end{align}
which simplifies (\ref{eq:zero_set}) to
$
 \partial \Omega_{|\ell} = \bigcup_{j=1}^{\ell}
 \bigcup_{k=1}^{D_{j}} \mathcal{F}_{j,k}
$, where $\mathcal{F}_{j,k}\triangleq \bigcup_{\omega \in \Omega_{|j}}\mathcal{F}_{j,k,\omega}$.
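To make (\ref{eq:zero_set}) concrete, the following minimal NumPy sketch (ours, purely illustrative) locates $\partial\Omega_{|2}$ for a small random two-layer ReLU network by detecting, on a grid over a 2D input space, where the sign vector of the pre-activations changes between neighboring grid points; such grid cells straddle a facet of some folded hyperplane.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D1, D2, D3 = 2, 8, 8                 # 2D input, two layers of 8 units
W1, b1 = rng.standard_normal((D2, D1)), rng.standard_normal(D2)
W2, b2 = rng.standard_normal((D3, D2)), rng.standard_normal(D3)

def preacts(x):
    """Stack the pre-activations (h_1, h_2) of both layers at input x."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0) + b2     # ReLU between the layers
    return np.concatenate([h1, h2])

# Sign pattern of all pre-activations on a grid; eq. (eq:zero_set) says
# the partition boundary passes wherever some h_{j,k} changes sign.
t = np.linspace(-3, 3, 400)
s = np.array([[preacts(np.array([u, v])) > 0 for u in t] for v in t])
s = s.astype(np.int8)
crossing = (np.abs(np.diff(s, axis=0))[:, :-1].any(-1)
            | np.abs(np.diff(s, axis=1))[:-1, :].any(-1))
print(crossing.sum(), "grid cells straddle the boundary of Omega_{|2}")
\end{verbatim}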
\subsection[Batch normalization translates the spline partition boundaries towards the training data (Part 2)]{Batch normalization parameter $\boldsymbol{\mu}$ translates the spline partition boundaries towards the training data (Part 2)}

In the one-layer case, we saw in Theorem~\ref{thm:onelayer} that BN independently translates each DN layer's hyperplanes $\mathcal{H}_{\ell,k}$ towards the training data to minimize the TLS distance.
In the multilayer case, as we have just seen, those hyperplanes $\mathcal{H}_{\ell,k}$ become
folded hyperplanes $\mathcal{F}_{\ell,k}$ (recall (\ref{eq:zero_set})).


We now demonstrate that the BN parameter $\boldsymbol{\mu}_\ell$ translates the folded hyperplanes $\mathcal{F}_{\ell,k}$ -- and thus adapts $\Omega_{|\ell}$ -- towards the input-space training data $\mathcal{X}$.
To this end, denote the squared distance from a point $\boldsymbol{x}$ in the DN input space to the folded hyperplane $\mathcal{F}_{\ell,k}$ by
\begin{align}
 d(\boldsymbol{x},\mathcal{F}_{\ell,k}) = \min_{\boldsymbol{x}' \in \mathcal{F}_{\ell,k}} \| \boldsymbol{x}-\boldsymbol{x}'\|^2.
 \label{eq:distance_F}
\end{align}

\begin{thm}
\label{thm:multilayer}
Consider layer $\ell>1$ of a DN as described in (\ref{eq:BN}), with fixed weight matrices and BN parameters for layers $1$ through $\ell-1$ and fixed weights $\boldsymbol{W}_\ell$.
Let $\boldsymbol{\mu}_\ell$ and $\boldsymbol{\mu}'_\ell$ yield the hyperplanes $\mathcal{H}_{\ell,k}$ and $\mathcal{H}'_{\ell,k}$ and their corresponding folded hyperplanes $\mathcal{F}_{\ell,k}$ and $\mathcal{F}'_{\ell,k}$.
Then we have that
$
 d(\boldsymbol{z}_{\ell}(\boldsymbol{x}),\mathcal{H}_{\ell,k}) < d(\boldsymbol{z}_{\ell}(\boldsymbol{x}),\mathcal{H}_{\ell,k}') ~ \implies ~ d(\boldsymbol{x},\mathcal{F}_{\ell,k}) < d(\boldsymbol{x},\mathcal{F}_{\ell,k}')$.
\end{thm}

In words, translating a hyperplane closer to $\boldsymbol{z}_\ell$ in layer $\ell$'s input space moves the corresponding folded hyperplane closer to the DN input $\boldsymbol{x}$ that produced $\boldsymbol{z}_\ell$, which is of particular interest for inputs $\boldsymbol{x}_i$ from the training data $\mathcal{X}$.
We also have the following corollary.

\begin{cor}
\label{cor:multilayer}
Consider layer $\ell>1$ of a trained BN-equipped DN as described in Theorem~\ref{thm:multilayer}.
Then $\boldsymbol{z}_\ell(\boldsymbol{x})$ lies on the hyperplane $\mathcal{H}_{\ell,k}$ for some $k$ if and only if $\boldsymbol{x}$ lies on the corresponding folded hyperplane $\mathcal{F}_{\ell,k}$ in the DN input space; that is,
$
d(\boldsymbol{z}_{\ell}(\boldsymbol{x}),\mathcal{H}_{\ell,k}) = 0 ~ \iff ~ d(\boldsymbol{x},\mathcal{F}_{\ell,k}) = 0$.

\end{cor}

Figure~\ref{fig:evolution_boundary}(b) illustrates empirically how $\boldsymbol{\mu}_2$ translates the folded hyperplanes $\mathcal{F}_{2,k}$ realized by the second layer of a toy DN.
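The TLS property recalled above from Theorem~\ref{thm:onelayer} is easy to verify numerically: for a fixed mini-batch and a fixed row $\boldsymbol{w}$ of $\boldsymbol{W}_\ell$, the summed squared distances from the layer inputs to the hyperplane $\langle\boldsymbol{w},\boldsymbol{z}\rangle=\mu$ are minimized exactly at the BN mean of the pre-activations. A minimal NumPy check (ours, for illustration only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((256, 16))   # a mini-batch of layer inputs
w = rng.standard_normal(16)          # one row of W_l (one unit)

# Distance from z to the hyperplane <w, z> = mu is |<w, z> - mu| / ||w||,
# so the summed squared (TLS) distance is minimized at the batch mean of
# the pre-activations -- precisely the BN parameter mu for this unit.
pre = Z @ w
tls = lambda mu: np.sum((pre - mu) ** 2) / (w @ w)
grid = np.linspace(pre.min(), pre.max(), 10001)
best = grid[np.argmin([tls(m) for m in grid])]
print(best, pre.mean())              # agree up to the grid resolution
\end{verbatim}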
The BN parameter $\boldsymbol{\sigma}_\ell$ is also crucial, albeit not as crucial as $\boldsymbol{\mu}_\ell$. For completeness, we study how this parameter folds the adjacent facets composing the folded hyperplanes $\mathcal{F}_{\ell,k}$ in Figure~\ref{fig:evolution_boundary} and Appendix~\ref{sec:sigma}.


\begin{figure}[t!]
 \centering
 \begin{minipage}{0.52\linewidth}
 \begin{minipage}{0.32\linewidth}
 \centering
 \small
 $\mathcal{H}_{2,k}$ varying $\mu_{2,k}$\\
 \includegraphics[width=\linewidth]{images/2d_partition/moving_mu1_6_0.pdf}
 \\[-0.5em](a)
 \end{minipage}
 \begin{minipage}{0.32\linewidth}
 \centering
 \small
 $\mathcal{F}_{2,k}$ varying $\mu_{2,k}$\\
 \includegraphics[width=\linewidth]{images/2d_partition/moving_mu2_6_0.pdf}
 \\[-0.5em](b)
 \end{minipage}
 \begin{minipage}{0.32\linewidth}
 \centering
 \small
 $\mathcal{F}_{2,k}$ varying $\boldsymbol{\sigma}_{1}$\\
 \includegraphics[width=\linewidth]{images/2d_partition/moving_sigma2_6_0.pdf}
 \\[-0.5em](c)
 \end{minipage}
 \end{minipage}
 \begin{minipage}{0.47\linewidth}
 \caption{\small
 Translation and folding effected by the BN parameters $\boldsymbol{\mu}_\ell,\boldsymbol{\sigma}_\ell$ on a two-layer DN with 8 units per layer, 2D input space, and random weights.
 %
 (a)~Varying $\mu_{2,k}$ translates the layer $2$, unit $k$ hyperplane $\mathcal{H}_{2,k}$ (recall (\ref{eq:Hk})), viewed in a 2D slice of its own input space.
 %
 (b)~Translation of that same hyperplane, but now viewed in the DN input space, where it becomes the folded hyperplane $\mathcal{F}_{2,k}$ (recall (\ref{eq:facets})).
 %
 (c)~Fixing $\mu_{2,k}$ and varying $\boldsymbol{\sigma}_1$ folds the next layer's hyperplanes, viewed in the DN input space.
 }
 \label{fig:evolution_boundary}
 \end{minipage}
\end{figure}



\subsection{Batch normalization increases the density of partition regions around the training data}
\label{sec:density}

We now extend the toy DN numerical experiments reported in Figure~\ref{fig:2d_partition_bn} to more realistic DN architectures and higher-dimensional settings.
We focus on three settings, all involving random weights $\boldsymbol{W}_\ell$:
(i) zero bias $\boldsymbol{c}_\ell=0$ in (\ref{eq:no_BN}),
(ii) random bias $\boldsymbol{c}_\ell$ in (\ref{eq:no_BN}),
and
(iii) BN in (\ref{eq:BN}).


{\bf 2D Toy Dataset.}~We continue visualizing the effect of BN in a 2D input space by reproducing the experiment of Figure~\ref{fig:2d_partition_bn} but with a more realistic DN with $11$ layers of width 1024 and training data consisting of 50 samples from a star shape in 2D (see the leftmost plots in Figure~\ref{fig:backprop}). For each of the above three settings, Figure~\ref{fig:backprop} visualizes in the 2D input space the {\em concentration} of the contribution to the partition boundary (recall (\ref{eq:facets})) from three specific layers ($j=3,7,11$).
The concentration at each point in the input space corresponds to the number of folded hyperplane facets $\mathcal{F}_{j,k,\omega}$ passing through an $\epsilon$-ball centered on that point.
This concentration calculation can be performed efficiently via the technique of \cite{harman2010decompositional} that is analyzed in \cite{voelker2017efficiently}.
Three conclusions are evident from the figure:
(i) BN clearly focuses the spline partition on the data;
(ii) random bias creates a random partition that is diffused over the entire input space;
(iii) zero bias creates a (more or less) central hyperplane arrangement that does not focus on the data.



\begin{figure}[bt!]
 \begin{minipage}{0.02\linewidth}
 \rotatebox{90}{\small zero bias\;\;\;\;\;\;\;random\;\;\;\;\;\;\;\;\;\;BN\;\;\;\;\;\;\;}
 \end{minipage}
 \begin{minipage}{0.5\linewidth}
 \begin{minipage}{0.23\linewidth}
 \centering
 \small
 data
 \end{minipage}
 \foreach \l in {3,7,11}{
 \begin{minipage}{0.23\linewidth}
 \centering
 $\mathcal{F}_{\l,k},\forall k$
 \end{minipage}
 }\\
 \foreach \mode in {bn,random,zero}{
 \begin{minipage}{0.234\linewidth}
 \includegraphics[width=\linewidth]{images/backprojection/x_5.pdf}
 \end{minipage}
 \foreach \l in {3,7,11}{
 \begin{minipage}{0.234\linewidth}
 \includegraphics[width=\linewidth]{images/backprojection/\mode_\l_1024_5.pdf}
 \end{minipage}
 }\\[-0.3em]
 }
 \end{minipage}
 \begin{minipage}{0.46\linewidth}
 \caption{\small
Visualization in the 2D input space of the contribution $\bigcup_k \mathcal{F}_{j,k}$ to the spline partition boundary from layers $j=3, 7, 11$ of an 11-layer DN of width 1024.
The training data set $\mathcal{X}$ consists of 50 samples from a star-shaped distribution (left plots).
We plot the concentration of the folded hyperplane facets in an $\epsilon$-ball around each 2D input space point for the three initialization settings described in the text: zero bias, random bias, and BN.
Darker color indicates more partition boundaries crossing through that location.
Each plot is normalized by the maximum concentration attained, the value of which is noted.
 }
\label{fig:backprop}
\end{minipage}
\end{figure}



{\bf CIFAR Images.}~We now consider a high-dimensional input space (CIFAR images) with a Resnet9 architecture (random weights). Since we cannot visualize the spline partition in the 3072-dimensional input space, we summarize the same concentration analysis as in Figure~\ref{fig:backprop} by measuring the number of folded hyperplane facets passing through an $\epsilon$-ball centered around $100$ randomly sampled training images (BN statistics are obtained from the full training set). We report these observations in
Figure~\ref{fig:neighbourregions} (left) and again clearly see that -- in contrast to zero and random bias initialization -- BN focuses the spline partition to lie close to the data points.
To quantify the concentration of regions away from the training data, we repeat the same measurement but for 100 iid random Gaussian images that are scaled to the same mean and variance as the CIFAR images. In this case, we observe in Figure~\ref{fig:neighbourregions} (right) that BN concentrates the partition only around the CIFAR images and not around the random ones, in agreement with the low-dimensional experiment in Figure~\ref{fig:backprop}.
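The $\epsilon$-ball concentration measurements used in Figures~\ref{fig:backprop} and \ref{fig:neighbourregions} can be approximated with a simple computation (our sketch below, not the exact procedure of \cite{harman2010decompositional}): within the linear region containing $\boldsymbol{x}$, the distance from $\boldsymbol{x}$ to the zero set of a pre-activation $h_{j,k}$ equals $|h_{j,k}(\boldsymbol{x})|/\|\nabla_{\boldsymbol{x}} h_{j,k}(\boldsymbol{x})\|$, so counting the units for which this ratio falls below $\epsilon$ approximates the number of facets crossing the $\epsilon$-ball (it slightly over-counts facets that terminate before reaching the ball).
\begin{verbatim}
import numpy as np

def facet_count(x, Ws, bs, eps):
    """Approximate number of folded-hyperplane facets within eps of x."""
    count, J, z = 0, np.eye(len(x)), x   # J: Jacobian of z w.r.t. x
    for W, b in zip(Ws, bs):
        h = W @ z + b                    # pre-activations of this layer
        G = W @ J                        # input-space gradient of each h
        count += int(np.sum(np.abs(h) / np.linalg.norm(G, axis=1) < eps))
        J = (h > 0)[:, None] * G         # ReLU zeroes inactive rows
        z = np.maximum(h, 0)
    return count

rng = np.random.default_rng(0)
widths = [32, 64, 64, 64]
Ws = [rng.standard_normal((o, i)) for i, o in zip(widths[:-1], widths[1:])]
bs = [rng.standard_normal(o) for o in widths[1:]]
print(facet_count(rng.standard_normal(widths[0]), Ws, bs, eps=0.05))
\end{verbatim}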
\begin{figure}[t]
\centering
 \begin{minipage}{0.53\linewidth}
 \centering
 \hspace{4mm}{\small around CIFAR10 images} \hspace{1mm} {\small around random images}\\[-0.1em]
 \includegraphics[width=\linewidth]{images/counting/counting_CIFAR10_CNN.pdf}\\[-0.5em]
 {\small radius ($\epsilon$) \hspace{2cm}radius ($\epsilon$)}
 \end{minipage}
 \begin{minipage}{0.46\linewidth}
 \caption{\small
 Average concentration of the spline partition boundary of a Resnet9 DN in $\epsilon$-balls around CIFAR images and around random images.
The vertical axis is the average number of folded hyperplane facets that pass through an $\epsilon$-ball centered on each kind of image.
As in Figure~\ref{fig:backprop}, we see that -- in contrast to zero and random bias initialization -- BN aligns the spline partition with the data.
 }
 \label{fig:neighbourregions}
 \end{minipage}
 \end{figure}




\vspace{-0.2cm}
\section{Benefit One: Batch Normalization is a Smart Initialization}
\label{sec:smart}

\vspace{-0.2cm}

Up to this point, our analysis has revealed that BN effects task-independent, unsupervised learning that aligns a DN's spline partition with the training data independent of any labels.
We now demonstrate that BN's utility extends to supervised learning tasks like regression and classification that feature both data and labels.
In particular, BN can be viewed as providing a ``smart initialization'' that expedites
SGD-based supervised learning.
Our results augment recent empirical work on the importance of initialization for DN performance from the perspectives of slope variance constraints \cite{mishkin2015all,xie2017all}, singular value constraints \cite{jia2017improving},
and orthogonality constraints \cite{saxe2013exact,bansal2018can}.


First, BN translates and folds the hyperplanes and facets that construct the DN's spline partition $\Omega$ so that they are closer to the data.
This creates more granular partition regions around the training data, which enables better approximation of complicated regression functions \cite{devore1993constructive} and richer
classification boundaries \cite{balestriero2019geometry,DBLP:journals/corr/abs-2102-11535}.
For binary classification, for example, this result follows from the fact that the decision boundary created by an $L$-layer DN is
precisely the folded hyperplane $\mathcal{F}_{L,1}$ generated by the final layer.
We also prove in Proposition~\ref{prop:each_side} in Appendix~\ref{proof:prop_each_side} that BN primes SGD learning so that the decision boundary passes through the data in every mini-batch.


We now empirically demonstrate the benefits of BN's smart initialization for learning by comparing the same three initialization strategies from Section~\ref{sec:density}
on a classification problem using a Resnet50 with random weights (see Figure~\ref{fig:test}).
In each experiment, the weights and biases are initialized according to one of the three strategies (zero bias, random bias, and BN over the entire training set), and then standard SGD is performed over mini-batches.
Importantly, in the BN case, the BN parameters $\boldsymbol{\mu}_{\ell},\boldsymbol{\sigma}_{\ell}$ computed for the initialization are frozen for all SGD iterations; this showcases BN's role as a smart initialization.
As we see from Figure~\ref{fig:test}, SGD with BN initialization converges faster and to a higher-performing classifier.
Since BN is only used as an initializer here, the performance gain can be attributed to the better positioning of the ResNet's initial spline partition $\Omega$.



\begin{figure}[t]
 \centering
 \begin{minipage}{0.022\linewidth}
 \rotatebox{90}{\hspace{0.5cm}Top-1 test accuracy}
 \end{minipage}
 \begin{minipage}{0.97\linewidth}
 \centering
 \includegraphics[width=\linewidth]{images/bn_accuracies_resnet50.png}\\
 learning epochs
 \end{minipage}
 \begin{minipage}{\linewidth}
 \caption{\small
 Image classification using a Resnet50 on Imagenet.
 Standard data augmentation was used during SGD training but no BN was employed.
 Instead, the DN was initialized with random weights (\color{blue}blue\color{black}) or with random weights and warmed-up BN statistics (\color{green}green\color{black}) computed across the entire training set (recall Figure~\ref{fig:backprop} and (\ref{eq:bnupdate})). Each {\bf column} corresponds to a different learning rate used by SGD, and multiple runs are shown for the rightmost cases.
 BN's ``smart initialization'' reaches a higher-performing solution faster because it provides SGD with a spline partition that is already adapted to the training dataset. This finding provides a novel and complementary understanding of BN, which was previously studied solely when continuously employed during training as a means to better condition a DN's loss landscape.
 A similar observation on CIFAR100 is shown in the Appendix.}
 \label{fig:test}
 \end{minipage}
 \end{figure}
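A minimal sketch of the ``smart initialization'' procedure used in this experiment (our illustrative NumPy pseudocode, assuming fully connected layers, a leaky-ReLU activation, and $\boldsymbol{\beta}_\ell=\mathbf{0}$; the actual experiment uses a convolutional Resnet): compute each layer's BN statistics once over the training set, then freeze them during SGD.
\begin{verbatim}
import numpy as np

def bn_warmup(Ws, X, alpha=0.1):
    """Compute (mu_l, sigma_l) layer by layer over the training set X
    (one row per sample); these statistics are then frozen during SGD."""
    mus, sigmas, Z = [], [], X
    for W in Ws:
        H = Z @ W.T                           # pre-activations on the data
        mu, sigma = H.mean(axis=0), H.std(axis=0) + 1e-5
        mus.append(mu)
        sigmas.append(sigma)
        Hn = (H - mu) / sigma                 # batch-normalized, beta = 0
        Z = np.where(Hn > 0, Hn, alpha * Hn)  # leaky-ReLU
    return mus, sigmas

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((64, 32)), rng.standard_normal((64, 64))]
mus, sigmas = bn_warmup(Ws, rng.standard_normal((1000, 32)))
print([m.shape for m in mus])
\end{verbatim}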
\vspace{-0.2cm}
\section{Benefit Two: Batch Normalization Increases Margins}
\label{sec:noise}

\vspace{-0.2cm}

The BN parameters $\boldsymbol{\mu}_{\ell},\boldsymbol{\sigma}_{\ell}$ are re-calculated for each mini-batch $\mathcal{B}_\ell$ and hence can be interpreted as stochastic estimators of the mean and standard deviation of the complete training data set $\mathcal{X}_\ell$.
A direct calculation \cite{von2014mathematical} yields the following result.



\begin{prop}
\label{prop:variance}
Consider layer $\ell>1$ of a BN-equipped DN as described in (\ref{eq:BN}).
Assume that the layer input $\boldsymbol{z}_\ell$ follows an arbitrary iid distribution with zero mean and diagonal covariance matrix $\diag({\bm{\rho}})$.
Then we have that
\begin{equation}
{\rm var}(\mu_{\ell,k}) \hspace{-0.1cm}=\hspace{-0.1cm} \frac{\langle \boldsymbol{w}^2_{\ell,k},{\bm{\rho}}\rangle}{|\mathcal{B}_\ell|}\leq \frac{\|\boldsymbol{w}_{\ell,k}^2\| \, \|{\bm{\rho}}\|}{|\mathcal{B}_\ell|}, ~~
{\rm var}(\sigma_{\ell,k}^2)\hspace{-0.1cm}=\hspace{-0.1cm}\frac{1}{|\mathcal{B}_\ell|}\left(\hspace{-0.1cm}\phi_{\ell,k}^4-\frac{\langle \boldsymbol{w}_{\ell,k}^2,{\bm{\rho}}\rangle^2(|\mathcal{B}_\ell|-3)}{|\mathcal{B}_\ell|-1}\hspace{-0.1cm}\right).
\label{eq:BNstats}
\end{equation}
with $\phi_{\ell,k}^4$ the fourth-order central moment of $\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}_\ell \rangle$ and $\boldsymbol{w}_{\ell,k}^2$ the coordinate-wise square.
\end{prop}

Consequently, during learning, BN's centering and scaling introduce both additive and multiplicative noise into the feature maps, whose variance increases as the mini-batch size $|\mathcal{B}_\ell|$ decreases.
This noise becomes detrimental for small mini-batches, which has been empirically observed in \cite{ioffe2017batch}.
We illustrate this result in Figure~\ref{fig:noisedb}, where we depict the DN decision boundary realizations obtained from different mini-batches.
We also provide in Figure~\ref{fig:distribution} the empirical, analytical, and controlled parameter distributions of BN applied to a Gaussian input with varying mean and variance.
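The first formula in (\ref{eq:BNstats}) is easy to validate by simulation. The following minimal Monte-Carlo check (ours) draws mini-batches under the proposition's Gaussian-input assumption and compares the empirical variance of $\mu_{\ell,k}$ to $\langle\boldsymbol{w}^2_{\ell,k},{\bm{\rho}}\rangle/|\mathcal{B}_\ell|$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, B = 32, 64                        # input dimension, mini-batch size
w = rng.standard_normal(D)           # one row of W_l
rho = rng.uniform(0.5, 2.0, D)       # diagonal covariance of z_l

# Empirical variance of the BN mean mu over many random mini-batches.
mu = np.array([((rng.standard_normal((B, D)) * np.sqrt(rho)) @ w).mean()
               for _ in range(20000)])
print(mu.var())                      # empirical variance of mu
print((w ** 2) @ rho / B)            # predicted <w^2, rho> / |B_l|
\end{verbatim}
The two printed values should agree to within Monte-Carlo error (a few percent here).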
\begin{figure}[!t]
\begin{minipage}{0.59\linewidth}
\footnotesize
\begin{minipage}{.25\linewidth}
 \centering
 initialization \\ $|\mathcal{B}_\ell|$=16\\
 \includegraphics[width=1\linewidth]{images/2d_noise/before_circle_16.pdf}\\
 \includegraphics[width=1\linewidth]{images/2d_noise/before_wave_16.pdf}
\end{minipage}
\hspace{-0.2cm}
\begin{minipage}{.25\linewidth}
 \centering
 learned \\ $|\mathcal{B}_\ell|$=16\\
 \includegraphics[width=1\linewidth]{images/2d_noise/after_circle_16.pdf}\\
 \includegraphics[width=1\linewidth]{images/2d_noise/after_wave_16.pdf}\\
\end{minipage}
\hfill
\begin{minipage}{.25\linewidth}
 \centering
 initialization \\ $|\mathcal{B}_\ell|$=256\\
 \includegraphics[width=1\linewidth]{images/2d_noise/before_circle_256.pdf}\\
 \includegraphics[width=1\linewidth]{images/2d_noise/before_wave_256.pdf}
\end{minipage}
\hspace{-0.2cm}
\begin{minipage}{.25\linewidth}
 \centering
 learned \\ $|\mathcal{B}_\ell|$=256\\
 \includegraphics[width=1\linewidth]{images/2d_noise/after_circle_256.pdf}\\
 \includegraphics[width=1\linewidth]{images/2d_noise/after_wave_256.pdf}
\end{minipage}
\normalsize
\end{minipage}
\begin{minipage}{0.4\linewidth}
 \caption{\small
 Realizations of the classification decision boundary on a toy 2D binary classification (\color{red}red \color{black}vs \color{green} green\color{black}) problem obtained solely by sampling different mini-batches and thus observing different realizations of $\boldsymbol{\mu}_{\ell},\boldsymbol{\sigma}_{\ell}$
 (recall (\ref{eq:bnupdate})) at initialization and after training.
 Each mini-batch produces a different decision boundary, depicted in \color{blue}blue\color{black}. For the two mini-batch sizes $|\mathcal{B}_\ell|=16,256$, the variance changes as per Proposition~\ref{prop:variance}.
 Larger batch sizes clearly produce smaller variability in the decision boundary, both at initialization and after training; the $\boldsymbol{\mu}_{\ell},\boldsymbol{\sigma}_{\ell}$ distributions are provided in Figure~\ref{fig:distribution}.
 }
 \label{fig:noisedb}
\end{minipage}
\end{figure}


\begin{figure}[t!]
 \centering
 \begin{minipage}{0.02\linewidth}
 \rotatebox{90}{\hspace{0.2cm}distrib. of $\sigma_{\ell,k}$\hspace{0.8cm}distrib. of $\mu_{\ell,k}$}
 \end{minipage}
 \begin{minipage}{0.46\linewidth}
 \includegraphics[width=\linewidth]{images/hist_batchnorm_32.png}
 \\
 \hspace*{6mm} $k=1$ \hspace{11mm} $k=2$ \hspace{11 mm} $k=3$
 \end{minipage}
 \begin{minipage}{0.50\linewidth}
 \caption{\small
 Distributions of $\mu_{\ell,k}$ (top row) and $\sigma_{\ell,k}$ (bottom row) for three units in a DN layer with $\boldsymbol{W}_{\ell}\boldsymbol{z}_{\ell-1} \sim \mathcal{N}([1,0,-1],\diag([1,3,0.1]))$ ($\ell$ is not relevant in this context).
 The empirical distributions in {\bf black} are obtained by repeatedly sampling mini-batches of size $64$ from a training set of size $1000$.
 The analytical distributions in \color{green}{\bf green} \color{black}are from (\ref{eq:BNstats}).
 The empirical distributions in \color{red}{\bf red} \color{black}are of the noise-controlled BN, where we increase the variances of the BN parameters to match the variances that would result from a virtual mini-batch size ($32$) that is smaller than the actual size ($64$), hence producing more regularizing jitter perturbations. The different realizations of $\boldsymbol{\mu}_{\ell}$ and $\boldsymbol{\sigma}_{\ell}$ for each mini-batch affect the geometry of the DN partition and decision boundary for the current mini-batch (see Figure~\ref{fig:evolution_boundary}).
 }
 \label{fig:distribution}
 \end{minipage}
\end{figure}


Figure~\ref{fig:noisedb} suggests the interpretation that BN noise induces ``jitter'' in the DN decision boundary.
Small amounts of jitter can be beneficial, since they force the DN to learn a representation with an increased margin around the decision boundary.
Large amounts of jitter can be detrimental, since the increased margin around the decision boundary might be too large for the classification task.
This jitter noise is reminiscent of dropout and other techniques that artificially add noise to the DN input and/or feature maps to improve generalization performance \cite{srivastava2013improving,pham2014dropout,molchanov2017variational,wang2018max}.
We focus on the effect of BN jitter on DNs for classification problems here, but jitter can also improve DN performance on regression problems.


To further demonstrate that jitter noise increases the margin between the learned decision boundary and the training set (and hence improves generalization), we
conducted an experiment where we injected additional additive Gaussian noise and
multiplicative Chi-square noise into the layer pre-activations of a DN to increase the variances of $\boldsymbol{\mu}_{\ell}$ and $\boldsymbol{\sigma}_{\ell}$ as desired
(as in Figure~\ref{fig:distribution}), on top of the BN-induced noise.
For a Resnet9, we observed that increasing these variances by about $15\%$ increased the classification accuracies (averaged over $5$ runs) from $93.34\%$ to $93.68\%$ (CIFAR10), from $72.22\%$ to $72.74\%$ (CIFAR100), and from $96.16\%$ to $96.41\%$ (SVHN).
Note that this performance boost comes in addition to that obtained from BN's smart initialization (recall Section \ref{sec:smart}).

\vspace{-0.2cm}
\section{Conclusions}

\vspace{-0.2cm}

In this paper, our theoretical analysis shed light on two novel and crucial ways in which BN boosts DN performance.
First, BN provides a ``smart initialization'': it solves a total least squares (TLS) optimization that adapts the DN's input-space spline partition to the data, thereby improving learning and ultimately function approximation.
Second, for classification applications, BN introduces a random jitter perturbation to the DN decision boundary that forces the model to learn boundaries with increased margins to the closest training samples.
Our results also point to direct ways of further improving batch normalization.
For example, one could control the standard deviation of the noise in the BN parameters to further tune the decision boundary margin, or alter the optimization problem that BN implicitly solves to enforce a specific adaptivity of the DN partition to the dataset at hand.
We hope that this work will motivate researchers to further extend BN into task-specific methods that leverage a priori knowledge of the properties one desires to enforce in a DN.




\section{Details on Continuous Piecewise Affine Deep Networks}
\label{app:details}


The goal of this section is to provide additional details on the formation of the per-region affine mappings of CPA DNs.

As mentioned in the main text, any DN that is formed from CPA nonlinearities can itself be expressed as a CPA operator. The per-region affine mappings are thus entirely defined by the {\em state} of the DN nonlinearities. For an activation function such as ReLU, leaky-ReLU or absolute value, the nonlinearity {\em state} is completely determined by the sign of the activation input, as it determines which of the two linear mappings to apply to produce the output. Let us denote this code by $\boldsymbol{q}_{\ell}(\boldsymbol{z}_{\ell-1}) \in \{\alpha, 1\}^{D_{\ell}}$, given by
\begin{align}
 [\boldsymbol{q}_{\ell}(\boldsymbol{z}_{\ell-1})]_i = \begin{cases}
 \alpha, & [\boldsymbol{W}_{\ell}\boldsymbol{z}_{\ell-1}+\boldsymbol{b}_{\ell}]_i \leq 0\\
 1, & [\boldsymbol{W}_{\ell}\boldsymbol{z}_{\ell-1}+\boldsymbol{b}_{\ell}]_i > 0
 \end{cases}
\end{align}
where the pre-activation formula above can be replaced with the one from (\ref{eq:BN}) if BN is employed.
For a max-pooling type of nonlinearity, the state corresponds to the argmax obtained in each pooling region; for details on how to generalize what follows to that case, we refer the reader to \cite{balestriero2018spline}.
Based on the above, the layer input-output mapping can be written as
\begin{align}
 \boldsymbol{z}_{\ell}=\boldsymbol{Q}_{\ell}(\boldsymbol{z}_{\ell-1})(\boldsymbol{W}_{\ell}\boldsymbol{z}_{\ell-1}+\boldsymbol{b}_{\ell}),
\end{align}
where $\boldsymbol{Q}_{\ell}$ produces a diagonal matrix from the vector $\boldsymbol{q}_{\ell}$, and one has $\alpha=0$ for ReLU, $\alpha=-1$ for absolute value and $\alpha>0$ for leaky-ReLU; see \cite{balestriero2018spline} for additional details.
The up-to-layer-$\ell$ mapping can thus be written as
\begin{align}
 \boldsymbol{z}_{\ell}=A_{1|\ell}(\boldsymbol{x})\boldsymbol{x}+B_{1|\ell}(\boldsymbol{x})
\end{align}
with the following slope and bias parameters
\begin{align}
 A_{1|\ell}(\boldsymbol{x}) = & \boldsymbol{Q}_{\ell}\boldsymbol{W}_{\ell}\boldsymbol{Q}_{\ell-1}\boldsymbol{W}_{\ell-1} \dots \boldsymbol{Q}_{1}\boldsymbol{W}_{1},\\
 B_{1|\ell}(\boldsymbol{x}) = & \sum_{i=1}^{\ell}\left( \boldsymbol{Q}_{\ell}\boldsymbol{W}_{\ell}\boldsymbol{Q}_{\ell-1}\boldsymbol{W}_{\ell-1}\dots \boldsymbol{Q}_{i+1}\boldsymbol{W}_{i+1}\right)\boldsymbol{Q}_{i}\boldsymbol{b}_{i},
 \end{align}
where for clarity we abbreviated $\boldsymbol{Q}_{\ell}(\boldsymbol{z}_{\ell-1})$ as $\boldsymbol{Q}_{\ell}$.

From the above formulation, it is clear that whenever the codes $\boldsymbol{q}_{\ell}$ stay the same for different inputs, the layer input-output mapping remains linear.
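As a sanity check of the formulas above (our illustrative sketch, assuming dense layers and a leaky-ReLU activation), the following NumPy snippet accumulates $A_{1|\ell}$ and $B_{1|\ell}$ during a forward pass and verifies that $\boldsymbol{z}_{L}=A_{1|L}(\boldsymbol{x})\boldsymbol{x}+B_{1|L}(\boldsymbol{x})$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dims = [8, 16, 16, 4]                       # input and layer widths
Ws = [rng.standard_normal((o, i)) for i, o in zip(dims[:-1], dims[1:])]
bs = [rng.standard_normal(o) for o in dims[1:]]
alpha = 0.1                                 # leaky-ReLU slope

def forward_and_affine(x):
    """Forward pass that also accumulates the region's affine map."""
    A, B, z = np.eye(len(x)), np.zeros(len(x)), x
    for W, b in zip(Ws, bs):
        h = W @ z + b
        q = np.where(h > 0, 1.0, alpha)     # the code q_l(z_{l-1})
        A = (q[:, None] * W) @ A            # A <- Q_l W_l A
        B = q * (W @ B + b)                 # B <- Q_l (W_l B + b_l)
        z = q * h                           # leaky-ReLU output
    return z, A, B

x = rng.standard_normal(dims[0])
z, A, B = forward_and_affine(x)
print(np.allclose(z, A @ x + B))            # True on x's region
\end{verbatim}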
Whenever this is the case, we obtain a region $\omega_{\ell}$ of the partition $\Omega_{\ell}$ from (\ref{eq:boundary}), the layer-input-space partition region, as
\begin{align}
 \omega^{\boldsymbol{q}}_{\ell} = \{\boldsymbol{z}_{\ell-1} \in \mathbb{R}^{D_{\ell-1}}: \boldsymbol{q}_{\ell}(\boldsymbol{z}_{\ell-1})=\boldsymbol{q}\}, \quad \boldsymbol{q} \in \{\alpha, 1\}^{D_{\ell}}.
\end{align}
In a similar way, the up-to-layer-$\ell$ input space partition region can be defined.

The multilayer region is defined as
\begin{align}
 \omega^{\boldsymbol{q}_1,\dots,\boldsymbol{q}_\ell}_{1|\ell} = \bigcap_{i=1}^{\ell}\{\boldsymbol{x} \in \mathbb{R}^{D}: \boldsymbol{q}_{i}(\boldsymbol{x})=\boldsymbol{q}_i\}, \quad \boldsymbol{q}_i \in \{\alpha, 1\}^{D_i}.
\end{align}

\begin{defn}[Layer/DN partition]
\label{def:partition}
The layer-$\ell$ (resp. up-to-layer-$\ell$) input space partition is given by
\begin{align}
 \Omega_{\ell} &= \{\omega_{\ell}^{\boldsymbol{q}}, \boldsymbol{q}\in \{\alpha, 1\}^{D_{\ell}}\}\setminus \emptyset,\\
 \Omega_{1|\ell} &= \{\omega^{\boldsymbol{q}_1,\dots,\boldsymbol{q}_\ell}_{1|\ell}, \boldsymbol{q}_i\in \{\alpha, 1\}^{D_i}, \forall i\}\setminus \emptyset.
\end{align}
\end{defn}
Note that $\Omega_{1|L}$ forms the entire DN input space partition.
Both the unit and the layer input space partitions can be rewritten as Power Diagrams, a generalization of Voronoi Diagrams \cite{balestriero2019geometry}.
Composing layers then simply successively refines the previously built input space partition via a subdivision process to obtain the up-to-layer-$\ell$ input space partition $\Omega_{|\ell}$.








\section[Batch normalization folds the partition regions towards the \\ training data]{Batch normalization parameter $\boldsymbol{\sigma}$ folds the partition regions towards the training data}
\label{sec:sigma}

We are now in a position to describe the effect of the BN parameter $\boldsymbol{\sigma}_\ell$, which has no effect on the spline partition of an individual DN layer but comes into play for a composition of two or more layers.
In contrast to the hyperplane translation effected by $\boldsymbol{\mu}_\ell$, $\boldsymbol{\sigma}_\ell$ optimizes the {\em dihedral angles} between adjacent facets of the folded hyperplanes in subsequent layers in order to swing them closer to the training data.

Define $\boldsymbol{Q}_\ell$ as the square diagonal matrix whose diagonal entry $i$ is determined by the sign of the pre-activation $h_{\ell,i}$.
That is,
\begin{equation}
[\boldsymbol{Q}_\ell]_{i,i} =
\left\{
\begin{array}{ll}
 \alpha/\sigma_{\ell,i}, \quad & h_{\ell,i} < 0 \\[1mm]
 1/\sigma_{\ell,i}, & h_{\ell,i} \geq 0,
\end{array}
\right.
\label{eq:Q}
\end{equation}
with $\alpha=0$ for ReLU, $\alpha>0$ for leaky-ReLU, and $\alpha= -1$ for absolute value
(recall the definition of the activation function from Section~\ref{sec:spline1}; see \cite{balestriero2018from} for additional nonlinearities).
Note that $\boldsymbol{Q}_\ell$ is constant across each region $\omega$ in the layer's spline partition $\Omega_\ell$ since, by definition, none of the pre-activations $h_{\ell,i}$ change sign within region $\omega$.
We will use $\boldsymbol{Q}_\ell(\omega)$ to denote the dependency on the region $\omega$; to compute $\boldsymbol{Q}_\ell(\omega)$ given $\omega$, one merely samples a layer input from $\omega$, computes the pre-activations
$h_{\ell,i}$, and applies (\ref{eq:Q}).

In Appendix~\ref{XXX}
we prove that the BN parameter $\boldsymbol{\sigma}_{\ell}$ adjusts the dihedral folding angles of adjacent facets of each folded hyperplane created by layer $\ell+1$ in order to align the facets with the training data.
Figure~\ref{fig:evolution_boundary}(c) illustrates empirically how $\boldsymbol{\sigma}_1$ folds the facets of $\mathcal{F}_{2,k}$ realized by the second layer of a toy DN.
Since the relevant expressions grow quickly (but predictably) in length with the number of layers, we focus the next theorem on the composition of the first two DN layers (layers $\ell=1,2$); this exposes the salient points without loss of generality.
In this case, there are two geometric quantities of interest that combine to create the input space spline partition $\Omega=\Omega_{|2}$: layer 1's hyperplanes $\mathcal{H}_{1,i}$ and layer 2's folded hyperplanes $\mathcal{F}_{2,k}$.


\begin{thm}
\label{thm:sigma}
Given a 2-layer ($\ell=1,2$) BN-equipped DN employing a leaky-ReLU or absolute value activation function,
consider two adjacent regions $\omega,\omega'$ from the spline partition $\Omega$ whose boundaries contain the facets $\mathcal{F}_{2,k,\omega},\mathcal{F}_{2,k,\omega'}$ created by folding across the boundaries' shared hyperplane $\mathcal{H}_{1,i}$ (see Figure~\ref{fig:Thm3}).
The dihedral angles between these three (partial) hyperplanes are given by:
\begin{align}
 \theta \left(\mathcal{F}_{2,k,\omega},\mathcal{H}_{1,i} \right)
 &=
 \arccos\left(
 {\frac{\left| \boldsymbol{w}_{2,k}^\top\,\boldsymbol{Q}_1(\omega)\boldsymbol{W}_1\boldsymbol{w}_{1,i} \right|}
 {\left\|\boldsymbol{w}_{2,k}^\top\,\boldsymbol{Q}_1(\omega)\boldsymbol{W}_1 \right\| \; \left\| \boldsymbol{w}_{1,i} \right\|}}\right),
 \label{eq:Qangle1}
 \\[2mm]
 \theta \left(\mathcal{F}_{2,k,\omega'},\mathcal{H}_{1,i} \right)
 &=
 \arccos\left(
 {\frac{\left| \boldsymbol{w}_{2,k}^\top\,\boldsymbol{Q}_1(\omega')\boldsymbol{W}_1\boldsymbol{w}_{1,i} \right|}
 {\left\|\boldsymbol{w}_{2,k}^\top\,\boldsymbol{Q}_1(\omega')\boldsymbol{W}_1 \right\| \; \left\| \boldsymbol{w}_{1,i} \right\|}}\right),
 \label{eq:Qangle2}
 \\[2mm]
 \theta \left(\mathcal{F}_{2,k,\omega},\mathcal{F}_{2,k,\omega'} \right)
 &=
 \arccos\left(
 {\frac{\left| \boldsymbol{w}_{2,k}^\top\,\boldsymbol{Q}_1(\omega)\boldsymbol{W}_1\boldsymbol{W}_1^\top\boldsymbol{Q}_1(\omega')\boldsymbol{w}_{2,k} \right|}
 {\left\|\boldsymbol{w}_{2,k}^\top\,\boldsymbol{Q}_1(\omega)\boldsymbol{W}_1 \right\| \; \left\| \boldsymbol{w}_{2,k}^\top\,\boldsymbol{Q}_1(\omega')\boldsymbol{W}_1 \right\|}}\right).
 \label{eq:Qangle3}
\end{align}
\end{thm}

\begin{figure}[t]
\centering
\includegraphics[width=50mm]{images/Theorem3-in-2D.jpg}
\caption{\small Sketch of the situation in Theorem~\ref{thm:sigma} for a two-dimensional input space.}
\label{fig:Thm3}
\end{figure}



In (\ref{eq:Qangle1})--(\ref{eq:Qangle3}), $\boldsymbol{Q}_1(\omega)$ and $\boldsymbol{Q}_1(\omega')$ differ in only one diagonal entry, at index $(i,i)$: one matrix takes the value $\frac{1}{\sigma_{1,i}}$ and the other the value $\frac{\alpha}{\sigma_{1,i}}$, as per (\ref{eq:Q}).
Since
(\ref{eq:optimization-k}) implies that $\sigma^2_{\ell,i}\propto
\min_{\boldsymbol{\mu}_{\ell,i}}\mathcal{L}(\boldsymbol{\mu}_{\ell,i},\mathcal{B}_{\ell})$, we have the following two insights, which we state for $\ell=1$.
(These results extend in a straightforward fashion to more than two layers as well as to more complicated piecewise linear activation functions.)

On the one hand, if $\mathcal{H}_{1,i}$ is well-aligned with the training data $\mathcal{B}_1$, then the TLS error, and hence $\sigma^2_{1,i}$, will be small.
For the absolute value activation function, formulae (\ref{eq:Qangle1}) and (\ref{eq:Qangle2}) then tell us that both $\theta \left(\mathcal{F}_{2,k,\omega},\mathcal{H}_{1,i} \right)\approx 0$ and $\theta \left(\mathcal{F}_{2,k,\omega'},\mathcal{H}_{1,i} \right)\approx 0$, meaning that $\mathcal{F}_{2,k,\omega}$ and $\mathcal{F}_{2,k,\omega'}$ will be folded to closely align with $\mathcal{H}_{1,i}$ (and hence the data). The connection between $\sigma^2_{\ell,i}$ and (\ref{eq:Qangle1},\ref{eq:Qangle2}) lies in the entries of the $\boldsymbol{Q}$ matrices: the $\boldsymbol{\sigma}_{\ell}$ appear in the denominators of the entries of $\boldsymbol{Q}$ and thus control how strongly the angles bend.
For the ReLU/leaky-ReLU activation function, either $\mathcal{F}_{2,k,\omega}$ or $\mathcal{F}_{2,k,\omega'}$ will be folded to closely align with $\mathcal{H}_{1,i}$; the other facet will be left unchanged or only mildly folded.

On the other hand, if $\mathcal{H}_{1,i}$ is not well-aligned with the training data $\mathcal{B}_1$, then the TLS error, and hence $\sigma^2_{1,i}$, will be large.
This will force $\boldsymbol{Q}_1(\omega)\approx \boldsymbol{Q}_1(\omega')$ and thus $\theta \left(\mathcal{F}_{2,k,\omega},\mathcal{F}_{2,k,\omega'} \right)\approx 0$ in (\ref{eq:Qangle3}), meaning that the two facets are nearly parallel: a poorly aligned layer-1 hyperplane $\mathcal{H}_{1,i}$ will not appreciably fold intersecting layer-2 facets.

Figure~\ref{fig:distances} illustrates empirically how the BN parameter $\sigma_{\ell,k}$
measures the quality of the fit of the (folded) hyperplanes to the data in the TLS error sense for a toy DN.

In summary, and extrapolating to the general case, the effect of the BN parameter $\boldsymbol{\sigma}_\ell$ is to fold the layer-$(\ell+1)$ hyperplanes
(also the layer-$(\ell+2)$ and subsequent hyperplanes)
that contribute to the spline partition boundary $\Omega$ in order to align them with the layer-$\ell$ hyperplanes that match the data well.
Hence, not only $\boldsymbol{\mu}_{\ell}$ but also $\boldsymbol{\sigma}_{\ell}$ plays a crucial role in aligning the DN spline partition with the training data (recall Figure~\ref{fig:2d_partition_bn}).




\begin{figure}[t!]
 \centering
 \begin{minipage}{0.2\linewidth}
 \centering
 $\mathcal{H}_{1,k}$\\
 \includegraphics[width=\linewidth]{images/coloring/distance_coloring_0_4.pdf}
 \end{minipage}
 \begin{minipage}{0.2\linewidth}
 \centering
 $\mathcal{F}_{4,k}$\\
 \includegraphics[width=\linewidth]{images/coloring/distance_coloring_3_4.pdf}
 \end{minipage}
 \begin{minipage}{0.58\linewidth}
 \caption{\small
 Layer-1 hyperplanes $\mathcal{H}_{1,k}$ (left) and layer-4 folded hyperplanes $\mathcal{F}_{4,k}$ (right) depicted in the 2D input space of a toy 4-layer DN trained on the data points denoted with black dots.
\begin{figure}[t!]
 \centering
 \begin{minipage}{0.2\linewidth}
 \centering
 $\mathcal{H}_{1,k}$\\
 \includegraphics[width=\linewidth]{images/coloring/distance_coloring_0_4.pdf}
 \end{minipage}
 \begin{minipage}{0.2\linewidth}
 \centering
 $\mathcal{F}_{4,k}$\\
 \includegraphics[width=\linewidth]{images/coloring/distance_coloring_3_4.pdf}
 \end{minipage}
 \begin{minipage}{0.58\linewidth}
 \caption{\small 
 Layer-1 hyperplanes $\mathcal{H}_{1,k}$ (left) and layer-4 folded hyperplanes $\mathcal{F}_{4,k}$ (right) depicted in the 2D input space of a toy 4-layer DN trained on the data points denoted with black dots. The (folded) hyperplanes are colored based on the corresponding value $\sigma^2_{\ell,k}/\|\boldsymbol{w}_{\ell,k}\|^2$, which is proportional to the total least squares (TLS) fitting error to the data (blue: small error, close to the data points; green: large error, far from the data points).
}
 \label{fig:distances}
 \end{minipage}
\end{figure}




\section[Role of the BN Parameters]{Role of the BN Parameters $\boldsymbol{\beta}$, $\boldsymbol{\gamma}$ and Proof of Proposition~\ref{prop:redundant}}
\label{sec:betagamma}


Without loss of generality, consider a simple two-layer DN to illustrate. Then it is clear that $\boldsymbol{\gamma}_1$ simply rescales the columns of $\boldsymbol{W}_2$ and the entries of $\boldsymbol{\beta}_1$:
\begin{align}
\boldsymbol{z}_{2,r}
&=
a\!\left(
\frac
{
\sum_{k=1}^{D_1}\: [\boldsymbol{w}_{2,r}]_k \: a\!\left(\frac{\langle \boldsymbol{w}_{1,k},\boldsymbol{x}\rangle-\mu_{1,k}}{\sigma_{1,k}}\,\gamma_{1,k}+\beta_{1,k}\right)
-\mu_{2,r}
}
{
\sigma_{2,r}
}
\:
\gamma_{2,r}
+
\beta_{2,r}
\right) 
\\
&=
a\!\left(
\frac
{
\sum_{k=1}^{D_1}\: \gamma_{1,k} \, [\boldsymbol{w}_{2,r}]_k \: a\!\left(\frac{\langle \boldsymbol{w}_{1,k},\boldsymbol{x}\rangle-\mu_{1,k}}{\sigma_{1,k}} + \frac{\beta_{1,k}}{\gamma_{1,k}}
\right)
-\mu_{2,r}
}
{
\sigma_{2,r}
}
\:
\gamma_{2,r}
+
\beta_{2,r}
\right) 
\label{eq:pullout}
 \\[2mm]
&=
a\!\left(
\frac
{
\sum_{k=1}^{D_1}\: [\boldsymbol{w}'_{2,r}]_k \: a\!\left(\frac{\langle \boldsymbol{w}_{1,k},\boldsymbol{x}\rangle-\mu_{1,k}}{\sigma_{1,k}} + \beta'_{1,k}
\right)
-\mu_{2,r}
}
{
\sigma_{2,r}
}
\:
\gamma_{2,r}
+
\beta_{2,r}
\right) 
\end{align}
and so does not need to be optimized separately.
Here, $[\boldsymbol{w}_{2,r}]_k$ denotes the $k$-th entry of the $r$-th row of $\boldsymbol{W}_2$, and $[\boldsymbol{w}'_{2,r}]_k=\gamma_{1,k}\,[\boldsymbol{w}_{2,r}]_k$, $\beta'_{1,k}=\beta_{1,k}/\gamma_{1,k}$.
We obtain (\ref{eq:pullout}) because standard activation and pooling functions (e.g., (leaky-)ReLU, absolute value, max-pooling) are positively homogeneous, i.e., $a(cu)=c\,a(u)$ for any $c>0$; a negative $\gamma_{1,k}$ can be handled by additionally flipping the sign of $\boldsymbol{w}_{1,k}$.
This leaves $\boldsymbol{\beta}_{\ell}$ as the only learnable BN parameter that needs to be considered.
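The absorption in (\ref{eq:pullout}) is easy to check numerically; below is a minimal \texttt{numpy} sketch (toy sizes, ReLU activation, and variable names of our choosing) that verifies it for a positive $\boldsymbol{\gamma}_1$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(8)            # input
W1 = rng.standard_normal((6, 8))      # layer-1 weights
W2 = rng.standard_normal((4, 6))      # layer-2 weights
gamma1 = rng.uniform(0.5, 2.0, 6)     # positive BN gamma
beta1 = rng.standard_normal(6)        # BN beta
mu1 = rng.standard_normal(6)          # BN mean
sigma1 = rng.uniform(0.5, 2.0, 6)     # BN standard deviation
relu = lambda u: np.maximum(u, 0)

# layer-2 pre-activation with gamma1, beta1 in place
h = W2 @ relu((W1 @ x - mu1) / sigma1 * gamma1 + beta1)
# gamma1 absorbed into the columns of W2 and into beta1
W2p, beta1p = W2 * gamma1, beta1 / gamma1
hp = W2p @ relu((W1 @ x - mu1) / sigma1 + beta1p)
assert np.allclose(h, hp)  # identical layer-2 pre-activations
\end{verbatim}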
Now, our BN theoretical analysis relies on setting $\boldsymbol{\beta}_{\ell}=\mathbf{0}$, which corresponds to the standard initialization of BN. In this setting, we saw that BN ``fits'' the partition boundaries to the data samples exactly. By observing the form of (\ref{eq:BN}), it is clear that learning $\boldsymbol{\beta}_{\ell}$ makes it possible to ``undo'' this fitting if doing so better solves the task at hand. However, we have found that in most practical scenarios, fixing $\boldsymbol{\beta}_{\ell}=\mathbf{0}$ throughout training does not alter final performance. On Imagenet, the top-1 accuracy of a Resnet18 goes from 67.60\% to 65.93\% under this change, and that of a Resnet34 from 68.71\% to 67.21\%. The drop remains of the same order on larger architectures: for a Resnet50, the top-1 test accuracy goes from 77.11\% to 74.98\%. We emphasize that the exact same hyper-parameters are employed in both situations, i.e., the drop could potentially be reduced by tuning them.
These numbers might of course vary with the optimizer and other hyper-parameters; we used the standard ones for these architectures, since our point is merely that our theoretical results, which rely on $\boldsymbol{\beta}_{\ell}=\mathbf{0},\boldsymbol{\gamma}_{\ell}=\mathbf{1}$, still apply to high-performing models.

In addition to the above Imagenet results, we provide in Table~\ref{tab:learnable} the classification accuracy on various datasets and with two architectures. This set of results simply demonstrates that the learnable BN parameters ($\gamma,\beta$) have very little impact on performance.

\begin{table}[h]
 \caption{Test accuracy of various models with (yes) and without (no) the BN learnable parameters. As demonstrated in the main paper, those parameters have very little impact on the final test accuracy (no data augmentation is used).}
 \label{tab:learnable}
 \centering
 \begin{tabular}{|c|c|c|c|c|c|c|c|c|}\hline
 &\multicolumn{2}{|c|}{imagenette}&\multicolumn{2}{|c|}{cifar10}&\multicolumn{2}{|c|}{cifar100}&\multicolumn{2}{|c|}{svhn}\\ \hline 
 & No & Yes& No & Yes& No & Yes& No & Yes \\ \hline
 RESNET & 79.2 &78.8 &83.0 &86.2 &50.0 &54.8 &94.2 &95.3\\\hline
 CNN & 77.7 &77.6 &87.5 &87.6 &54.0 &55.2 &96.0 &95.9\\\hline
 \end{tabular}
\end{table}

We provide below the descriptions of the DN architectures. For the Residual Networks, a notation such as Resnet4-10 specifies the width factor and the depth of each block. We provide an example below for Resnet2-2.

\begin{center}
 Residual Network
\begin{verbatim}
Conv2D(layer[-1], 32, (5, 5), pad="SAME", b=None)
BatchNorm(layer[-1])
leaky_relu(layer[-1])
MaxPool2D(layer[-1], (2, 2))
Dropout(layer[-1], 0.9)

for width in [64, 128, 256, 512]:
    for repeat in range(2):
        Conv2D(layer[-1], width, (3, 3), b=None, pad="SAME")
        BatchNorm(layer[-1])
        leaky_relu(layer[-1])
        Conv2D(layer[-1], width, (3, 3), b=None, pad="SAME")
        BatchNorm(layer[-1])
        if layer[-6].shape == layer[-1].shape:
            leaky_relu(layer[-1]) + layer[-6]     # identity skip
        else:
            leaky_relu(layer[-1]) \
                + Conv2D(layer[-6], width, (3, 3),
                         b=None, pad="SAME")      # projection skip

    AvgPool2D(layer[-1], (2, 2))
    Conv2D(layer[-1], 512, (1, 1), b=None)
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])

GlobalAvgPool2D(layer[-1])
Dense(layer[-1], N_CLASSES)
\end{verbatim}
\end{center}



We now describe the CNN model that we employed.
Notice that when the considered dataset is imagenette (whose images have larger spatial dimensions than those of the other datasets), there is an additional first layer of convolution plus spatial pooling to reduce the spatial dimensions of the feature maps.

\begin{center}
 Convolutional Network
\begin{verbatim}
Conv2D(layer[-1], 32, (5, 5), pad="SAME", b=None)
BatchNorm(layer[-1])
leaky_relu(layer[-1])
MaxPool2D(layer[-1], (2, 2))

if args.dataset == "imagenette":
    Conv2D(layer[-1], 64, (5, 5), pad="SAME", b=None)
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])
    MaxPool2D(layer[-1], (2, 2))

for k in range(3):
    Conv2D(layer[-1], 96, (5, 5), b=None, pad="SAME")
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])
    Conv2D(layer[-1], 96, (1, 1), b=None)
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])
    Conv2D(layer[-1], 96, (1, 1), b=None)
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])

Dropout(layer[-1], 0.7) 
MaxPool2D(layer[-1], (2, 2))

for k in range(3):
    Conv2D(layer[-1], 192, (5, 5), b=None, pad="SAME")
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])
    Conv2D(layer[-1], 192, (1, 1), b=None)
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])
    Conv2D(layer[-1], 192, (1, 1), b=None)
    BatchNorm(layer[-1])
    leaky_relu(layer[-1])

Dropout(layer[-1], 0.7)
MaxPool2D(layer[-1], (2, 2))

Conv2D(layer[-1], 192, (3, 3), b=None)
BatchNorm(layer[-1])
leaky_relu(layer[-1])
Conv2D(layer[-1], 192, (1, 1), b=None)
BatchNorm(layer[-1])
leaky_relu(layer[-1])

GlobalAvgPool2D(layer[-1])
Dense(layer[-1], N_CLASSES)
\end{verbatim}
\end{center}



\iffalse
\subsection{Batch-Normalization as an Universal Initialization}
\label{section:initialization}

In addition to the boundary positioning, BN also equips DNs with some beneficial properties that further help the training procedure beyond the ``well positioned'' initialization of the partition boundaries.

First, one of the key challenge of DNs is the dying neuron problem \cite{trottier2017parametric} where the sign of a feature map dimension is always negative in which case learning is impaired due to the $\alpha$ scaling of the gradients. 
BN solves this problem at initialization as we formalize below and was empirically demonstrated in \cite{liao2016importance}.


\begin{prop}
\label{prop:dying_neuron}
When using the BN statistics, at initialization where $\gamma_{\ell}=\mathbf{1}, \beta_{\ell}=\mathbf{0}, \forall \ell$ a DN is guaranteed to have all units being activated at least once in each mini-batch.
\end{prop}

\begin{proof}
The use of BN ensures by mean removal that the feature maps prior application of the pointwise activation function have zero mean. Recall that we are in the initialization regime. As a result, either all feature maps are $0$, which is a degenerate case that would make the DN diverge (as the standard deviation statistics would be $0$) or the feature maps are not all $0$ and thus thanks to the centering operation, at least $1$ of the sample feature map will be negative and the others positive or vice-versa.
As standard activation functions have the active-inactive change at the $0$-crossing, it is clear that when employing BN, each activation function of each layer has to be in both active and inactive states for at least one sample and this for each mini-batch.
\end{proof}


Another important research problem for DNs lies in the initialization of the weights of the layers to prevent the dying neuron problem and controlling the feature maps first moments through depth prevent exploding gradient problems \cite{glorot2010understanding}. Most solutions leverage a carefully design of the distribution used to sample the weights $\boldsymbol{W}_{\ell}$; when using BN, the pre-activation feature maps are centered and normalized at initialization to have $0$ mean and unit variance. As such, it is ensured that at any layer, the feature maps will have guaranteed first order statistics alleviating the task of picking the hyper-parameters of the initialization scheme which are otherwise crucial for performances \cite{he2015delving}.

\fi

\section{Proofs}
\label{app:proofs}

This section provides the proofs supporting the theoretical claims made in the main part of the paper.




\subsection{Proof of Theorem~\ref{thm:onelayer}}
\label{proof:thmonelayer}


\begin{proof}
To prove the theorem, we first demonstrate that the total least squares (TLS) optimization problem has a unique global optimum given by the average of the data, which is precisely the batch-normalization mean parameter. We then demonstrate that at this minimum, the value of the TLS loss is given by the batch-normalization variance parameter.


The optimization problem is given by
\begin{align}
 \mathcal{L}(\boldsymbol{\mu}; Z) =& \sum_{k=1}^{D_{\ell}}\sum_{\boldsymbol{z} \in Z} d\left(\boldsymbol{z},\mathcal{H}_{\ell,k}\right)^2
 =\sum_{k=1}^{D_{\ell}}\sum_{\boldsymbol{z} \in Z}\frac{\left | \langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\rangle - \mu_{k}\right |^2 }{\|\boldsymbol{w}_{\ell,k}\|^2_2}.
\end{align}
It is clear that the optimization problem
\begin{align}
 \min_{\boldsymbol{\mu} \in \mathbb{R}^{D_{\ell}}} \mathcal{L}(\boldsymbol{\mu};Z)
\end{align}
decomposes into independent problems, one per dimension of the vector $\boldsymbol{\mu}$, since the problem is unconstrained and the sum is separable. We thus focus on a single $\mu_{k}$ for now.
The optimization problem becomes
\begin{align}
 \min_{\mu_{k}\in\mathbb{R}}\sum_{\boldsymbol{z} \in Z}\frac{\left | \langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\rangle - \mu_{k}\right |^2 }{\|\boldsymbol{w}_{\ell,k}\|^2_2},
\end{align}
and taking the first derivative with respect to $\mu_k$ leads to
\begin{align}
 \frac{\partial}{\partial \mu_{k}} \sum_{\boldsymbol{z} \in Z}\frac{\left | \langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\rangle - \mu_{k}\right |^2 }{\|\boldsymbol{w}_{\ell,k}\|^2_2} =& -2\sum_{\boldsymbol{z} \in Z}\frac{ \langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\rangle - \mu_{k} }{\|\boldsymbol{w}_{\ell,k}\|^2_2}\\
 =& -2\sum_{\boldsymbol{z} \in Z}\frac{\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\rangle }{\|\boldsymbol{w}_{\ell,k}\|^2_2} + 2\, {\rm Card}(Z) \frac{ \mu_{k}}{\|\boldsymbol{w}_{\ell,k}\|^2_2}.
\end{align}
This first derivative of the (quadratic) TLS loss is thus a linear function of $\mu_{k}$ that vanishes at the unique point given by
\begin{align}
 -2\sum_{\boldsymbol{z} \in Z}\frac{\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\rangle }{\|\boldsymbol{w}_{\ell,k}\|^2_2} + 2\, {\rm Card}(Z) \frac{ \mu_{k}}{\|\boldsymbol{w}_{\ell,k}\|^2_2}=0 \iff \mu_{k} = \frac{\sum_{\boldsymbol{z} \in Z}\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\rangle}{{\rm Card}(Z)},
\end{align}
confirming that the per-dimension average of the pre-activation feature maps is indeed the optimum of the optimization problem. One easily verifies that it is a minimum, since the second derivative, $\frac{ 2\, {\rm Card}(Z)}{\|\boldsymbol{w}_{\ell,k}\|^2_2}$, is positive. The same argument applies to each dimension $k$. Now, inserting this optimal value back into the TLS loss, we obtain the desired result.
\end{proof}
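The statement is also easy to verify numerically; the following small \texttt{numpy} sketch (toy data and variable names of our choosing) checks that the batch mean minimizes the per-unit TLS loss and that the attained minimum equals $\sigma^2_{\ell,k}/\|\boldsymbol{w}_{\ell,k}\|_2^2$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((128, 10))  # mini-batch of layer inputs (toy data)
w = rng.standard_normal(10)         # one row w_{l,k} of the weight matrix

h = Z @ w                           # pre-BN projections <w_{l,k}, z>
mu_star = h.mean()                  # BN mean parameter (claimed minimizer)

def tls(mu):
    # average squared distance from the batch to {z : <w_{l,k}, z> = mu}
    return np.mean((h - mu) ** 2) / (w @ w)

# the batch mean beats any perturbed offset...
assert all(tls(mu_star) <= tls(mu_star + e) for e in (-1.0, -0.1, 0.1, 1.0))
# ...and the attained minimum is the BN variance over ||w||^2
assert np.isclose(tls(mu_star), h.var() / (w @ w))
\end{verbatim}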
\subsection{Proof of Central Hyperplane Arrangement}
\label{proof:centroid}

\begin{cor}
\label{cor:centroid}
BN constrains the input space partition boundaries $\partial \Omega_{\ell}$ of each layer $\ell$ of a BN-equipped DN to be a {\em central hyperplane arrangement}; indeed, the average of the layer's training data inputs satisfies
\begin{align}
\overline{\boldsymbol{z}}_{\ell-1}
 \in 
 \bigcap_{k=1}^{D_{\ell}}\mathcal{H}_{\ell,k}
 \label{eq:chpa}
\end{align}
as long as $\|\boldsymbol{w}_{\ell,k}\| >0$ for all $k$.
\end{cor}

\begin{proof}
To prove the desired result, i.e., that all the hyperplanes have a nonempty common intersection, we first demonstrate that the layer input centroid $\overline{\boldsymbol{z}}_{\ell-1}$ indeed belongs to one hyperplane, say the $k^{\rm th}$. It will then be immediate that this holds regardless of $k$, so that the intersection of all the hyperplanes contains at least $\overline{\boldsymbol{z}}_{\ell-1}$, which is enough to prove the statement.


For a point (in our case $\overline{\boldsymbol{z}}_{\ell-1}$) to belong to the $k^{\rm th}$ (unit) hyperplane $\mathcal{H}_{\ell,k}$ of layer $\ell$, it must belong to the set (recall (\ref{eq:Hk}))
\begin{align}
 \mathcal{H}_{\ell,k}=\left\{\boldsymbol{z}_{\ell-1} \in \mathbb{R}^{D_{\ell-1}} : \left\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}_{\ell-1}\right\rangle=[\mu_{\ell}]_{k}\right\}.
\end{align}
Evaluating the hyperplane equality at the data centroid gives
\begin{align}
 \left\langle \boldsymbol{w}_{\ell,k},\overline{\boldsymbol{z}}_{\ell-1}\right\rangle=\left\langle \boldsymbol{w}_{\ell,k},\frac{\sum_{\boldsymbol{z} \in Z}\boldsymbol{z}}{{\rm Card}(Z)}\right\rangle
 =\sum_{\boldsymbol{z} \in Z}\frac{\left\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}\right \rangle}{{\rm Card}(Z)}
 = [\mu^*_{\ell}]_k,
\end{align}
where the last quantity is precisely the batch-normalization mean parameter. Recalling the equation of $\mathcal{H}_{\ell,k}$, the projection of $\overline{\boldsymbol{z}}_{\ell-1}$ onto $\boldsymbol{w}_{\ell,k}$ thus equals the bias $[\mu^*_{\ell}]_k$ of the hyperplane, making $\overline{\boldsymbol{z}}_{\ell-1}$ part of the (batch-normalized) hyperplane $\mathcal{H}_{\ell,k}$. Repeating the above for each $k \in \{1,\dots,D_{\ell}\}$, the layer input centroid belongs to every unit hyperplane shifted by its batch-normalization mean parameter, and we directly obtain the desired result
\begin{align}
 \overline{\boldsymbol{z}}_{\ell-1} \in \bigcap_{k=1}^{D_{\ell}} \mathcal{H}_{\ell,k},
\end{align}
concluding the proof.
\end{proof}

\subsection{Proof of Theorem~\ref{thm:multilayer}}

\begin{proof}


Denote by $\boldsymbol{x}^*$ the point of $\mathcal{P}_{\ell,k}$ closest to $\boldsymbol{x}$, defined by
\begin{equation*}
 \boldsymbol{x}^* = \arg\min_{\boldsymbol{u} \in \mathcal{P}_{\ell,k}}\|\boldsymbol{x} - \boldsymbol{u}\|_2.
\end{equation*}
The path from $\boldsymbol{x}$ to $\boldsymbol{x}^*$ is a straight line in the input space, which we parametrize by
\begin{align}
 l(\theta) = \theta\, \boldsymbol{x}^* + (1 - \theta)\, \boldsymbol{x}, \quad \theta \in [0,1],
\end{align}
so that $l(0)=\boldsymbol{x}$, our original point, and $l(1)$ is the closest point on the kinked hyperplane. Now, in the input space of layer $\ell$, this parametric line becomes a continuous piecewise affine parametric line defined as 
\begin{align}
 \boldsymbol{z}_{\ell-1}(\theta)= (f_{\ell-1}\circ \dots \circ f_1)(l(\theta)).
\end{align}

By definition, if $\mathcal{P}_{\ell,k}$ is brought closer to $\boldsymbol{x}$, it means that $\exists \theta < 1$ s.t. $l(\theta)\in\mathcal{P}_{\ell,k}$. The same condition can be expressed in the layer input space via
\[
\exists \theta'<1 \;\; {\rm s.t.} \;\;\boldsymbol{z}_{\ell-1}(\theta') \in \mathcal{H}_{\ell,k} \implies \exists \theta < 1\;\; {\rm s.t.}\;\; l(\theta)\in\mathcal{P}_{\ell,k}.
\]
This demonstrates that when the layer hyperplane is moved such that
it intersects the kinked path $\boldsymbol{z}_{\ell-1}$ at a point $\boldsymbol{z}_{\ell-1}(\theta')$ with $\theta'<1$, the distance in the input space is also reduced. Now, the BN fitting is greedy and tries to minimize the length of the straight line between $\boldsymbol{z}_{\ell-1}(0)$ (the layer input corresponding to $\boldsymbol{x}$) and the hyperplane $\mathcal{H}_{\ell,k}$. However, notice that if the length of this straight line decreases by bringing the hyperplane closer to $\boldsymbol{z}_{\ell-1}(0)$, then this also decreases the $\theta'$ s.t. $\boldsymbol{z}_{\ell-1}(\theta') \in \mathcal{H}_{\ell,k}$, in turn reducing the distance between $\boldsymbol{x}$ and $\mathcal{P}_{\ell,k}$ in the DN input space, giving the desired (second) result.
Conversely, if $\boldsymbol{z}_{\ell-1}(0) \in \mathcal{H}_{\ell,k}$, then the point $\boldsymbol{x}$ lies in the zero-set of the unit, in turn making it belong to the kinked hyperplane $\mathcal{P}_{\ell,k}$, which corresponds to this exact set.

\iffalse
The distance from a point to a kinked hyperplane $\mathcal{P}_{\ell,k}$ is either an orthogonal projection of the point to one of the faces in which case it is given by
\begin{align}
 d_{\rm ortho}(\boldsymbol{x},\mathcal{P}_{\ell,k})=d(\boldsymbol{x},\mathcal{P}_{\ell,k,\omega^*})=\frac{|\langle \boldsymbol{w}_{\ell,k}, \boldsymbol{z}_{\ell-1}\rangle-[\mu_{\ell}]_k|}{\| A_{|\ell-1}(\omega^*)^T\boldsymbol{w}_{\ell,k}\|_2}=\frac{|\langle A_{|\ell-1}(\omega^*)^T\boldsymbol{w}_{\ell,k}, \boldsymbol{x}\rangle+\langle\boldsymbol{b}_{|\ell-1}(\omega^*),\boldsymbol{w}_{\ell,k}\rangle-[\mu_{\ell}]_k|}{\| A_{|\ell-1}(\omega^*)^T\boldsymbol{w}_{\ell,k}\|_2}
\end{align}
where $\omega^*$ is the region in $\Omega_{|\ell-1}$ containing the face that is the closest to the point w.r.t.
this orthogonal projection.\nAnother case is that the closest distance if actually a non orthogonal projection in which case the distance is obtained by Pythagore theorem as follows:\n\\begin{align}\n d(\\boldsymbol{x},\\mathcal{P}_{\\ell,k})=\\sqrt{d(\\boldsymbol{x},\\mathcal{P}_{\\ell,k,\\omega^*})^2 + \\min_{\\boldsymbol{x} \\in \\mathcal{P}_{\\ell,k,\\omega^*}} \\|Proj(\\boldsymbol{x}^*,\\mathcal{P}_{\\ell,k,\\omega^*})-\\boldsymbol{x}\\|_2^2}\n\\end{align}\nnotice that in the case where the orthogonal projection is the true closest one then the above formula falls back to the one above.\n\nNow in the event where the point is in the region where the orthogonal projection is indeed the smallest distance then the per layer distance coincides with the global distance\n\n\n\n\n\\end{proof}\n\n\n\n\\subsubsection{Proof of multilayer}\n\n\n\\begin{lemma}\n\\label{lemma:condition_mu}\nUnder condition (i) or (ii), the distance between a point $\\boldsymbol{x}\\in\\omega, \\forall \\omega \\in \\Omega_{|\\ell-1}$ and the layer $\\ell$'s $k^{\\rm th}$ path is given by\n\\begin{align}\n d^{(i)}(\\boldsymbol{x},\\mathcal{P}_{\\ell,k})&=d(\\boldsymbol{x},\\mathcal{F}_{\\ell,k,\\omega}) =\\frac{(\\langle \\boldsymbol{w}_{\\ell,k}, \\boldsymbol{z}_{\\ell-1}\\rangle - [\\mu_{\\ell}]_{k})^2}{\\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2}&&\\text{ case (i)}\\\\\n d^{(ii)}(\\boldsymbol{x},\\mathcal{P}_{\\ell,k})\n &=d(\\boldsymbol{x},\\mathcal{F}_{\\ell,k,\\omega}) = \\frac{(\\langle \\boldsymbol{w}_{\\ell,k}, \\boldsymbol{z}_{\\ell-1}\\rangle + \\epsilon(\\mu,\\boldsymbol{x}) - [\\mu_{\\ell}]_{k})^2}{\\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2}\\times \\kappa(\\mu,\\boldsymbol{x})&&\\text{ case (ii)}\n\\end{align}\nwith $\\epsilon(\\mu,\\boldsymbol{x})=\\langle \\boldsymbol{w}_{\\ell,k}, (A'_{|\\ell-1}(\\mu)-A_{Q_{|\\ell-1}(\\boldsymbol{x})})\\boldsymbol{x}+(B'_{|\\ell-1}(\\mu)-B_{Q_{|\\ell-1}(\\boldsymbol{x})})\\rangle$ and $\\kappa(\\mu,\\boldsymbol{x})=\\frac{\\| \\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2 \\|_2^2}{\\|A_{|\\ell-1}(\\mu)^T\\boldsymbol{w}_{\\ell,k}\\|_2^2}$.\n\\end{lemma}\n\\begin{proof}\nCase (i): The proof follows the one done to obtain (\\ref{eq:dWk}).\nThe proof consists in rewriting any member of $\\mathcal{F}_{\\ell,k,\\omega}$ in term of its basis. Its basis consists of the $D-1$ orthonormal vectors $V=\\{\\mathbf{v}_1,\\dots,\\mathbf{v}_{D-1}\\}$ which are orthogonal to $A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}$. 
That is ${\\rm span}(V,A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k})=\\mathbb{R}^{D}$, hence any input $\\boldsymbol{x}$ can be rewritten as a linear combination of those vectors we obtain\n\\begin{align}\n d(\\boldsymbol{z}_{\\ell-1},\\mathcal{F}_{\\ell,k,\\omega}) \n &=\\min_{\\mathbf{u}\\in \\mathbb{R}^{D-1}} \\left\\|\\sum_{j=1}^{D-1}[\\mathbf{u}]_j\\mathbf{v}_j+A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k} \\frac{[\\mu_{\\ell}]_k-\\boldsymbol{w}_{\\ell,k}^TB_{Q_{|\\ell-1}(\\boldsymbol{x})}}{\\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2}-\\boldsymbol{x}\\right\\|_2^2\\\\\n &= \\left\\|[A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\left( \\frac{[\\mu_{\\ell}]_k-\\boldsymbol{w}_{\\ell,k}^TB_{Q_{|\\ell-1}(\\boldsymbol{x})}}{\\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2}\n - \\frac{\\langle\\boldsymbol{x},A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\rangle}{\\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2}\\right)\\right\\|_2^2\\\\\n &=\\frac{\\left(\\langle\\boldsymbol{x},A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\rangle-[\\mu_{\\ell}]_k+\\boldsymbol{w}_{\\ell,k}^TB_{Q_{|\\ell-1}(\\boldsymbol{x})}\\right)^2}{\\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2}\\\\\n &=\\frac{(\\langle \\boldsymbol{w}_{\\ell,k}, \\boldsymbol{z}_{\\ell-1}\\rangle - [\\mu_{\\ell}]_{k})^2}{\\|A_{Q_{|\\ell-1}(\\boldsymbol{x})}^T\\boldsymbol{w}_{\\ell,k}\\|_2^2} \n\\end{align}\n\n\\iffalse\nCase (ii):\n\\begin{align}\n \\left(d(\\boldsymbol{x},\\mathcal{P}_{\\ell,k})-d(\\boldsymbol{z}_{\\ell-1},\\mathcal{H}_{\\ell,k}) \\right)^2=&\\left( \\frac{\\left(\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}\\rangle -[\\mu_{\\ell}]_{k} \\right)^2}{\\| \\boldsymbol{w}_{\\ell,k}^TA_{Q_{|\\ell-1}}\\|_2^2}-\\frac{\\left(\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}'\\rangle -[\\mu_{\\ell}]_{k} \\right)^2}{\\| \\boldsymbol{w}_{\\ell,k}^TA'_{Q_{|\\ell-1}}\\|_2^2}\\right)^2\\\\\n =&\\left(\\left( \\frac{\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}\\rangle -[\\mu_{\\ell}]_{k} }{\\| \\boldsymbol{w}_{\\ell,k}^TA_{Q_{|\\ell-1}}\\|_2}+\\frac{\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}'\\rangle -[\\mu_{\\ell}]_{k} }{\\| \\boldsymbol{w}_{\\ell,k}^TA'_{Q_{|\\ell-1}}\\|_2}\\right)\\right.\\\\\n &\\left.\\times \\left( \\frac{\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}\\rangle -[\\mu_{\\ell}]_{k} }{\\| \\boldsymbol{w}_{\\ell,k}^TA_{Q_{|\\ell-1}}\\|_2}-\\frac{\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}'\\rangle -[\\mu_{\\ell}]_{k} }{\\| \\boldsymbol{w}_{\\ell,k}^TA'_{Q_{|\\ell-1}}\\|_2}\\right)\\right)^2\n\\end{align}\nfor clarity let now denote the first term as $C=\\left( \\frac{\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}\\rangle -[\\mu_{\\ell}]_{k} }{\\| \\boldsymbol{w}_{\\ell,k}^TA_{Q_{|\\ell-1}}\\|_2}+\\frac{\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell-1}'\\rangle -[\\mu_{\\ell}]_{k} }{\\| \\boldsymbol{w}_{\\ell,k}^TA'_{Q_{|\\ell-1}}\\|_2}\\right)$, we now proceed as follows by rewritting the second term\n\\begin{align}\n \\left(d(\\boldsymbol{x},\\mathcal{P}_{\\ell,k})-d(\\boldsymbol{z}_{\\ell-1},\\mathcal{H}_{\\ell,k}) \\right)^2\n =&C\\times \\left(\\left \\langle \\boldsymbol{w}_{\\ell,k},\\frac{\\boldsymbol{z}_{\\ell-1}}{\\| \\boldsymbol{w}_{\\ell,k}^TA_{Q_{|\\ell-1}}\\|_2}-\\frac{\\boldsymbol{z}'_{\\ell-1}}{\\| 
\boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}\right\rangle \right.\\
 &\left.+[\mu_{\ell}]_{k}\left(\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}-\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}\right)\right)^2\\
 \leq &C\times \left(\left | \left \langle \boldsymbol{w}_{\ell,k},\frac{\boldsymbol{z}_{\ell-1}}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}-\frac{\boldsymbol{z}'_{\ell-1}}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}\right\rangle\right| \right.\\
 &\left.+\left|[\mu_{\ell}]_{k}\left(\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}-\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}\right)\right|\right)^2\\ 
\end{align}
where the last inequality comes from $(a+b)^2\leq (|a|+|b|)^2$ which comes directly from observing that $(a+b)^2=a^2+2ab+b^2\leq |a|^2+2|a| |b| + |b|^2\leq (|a|+|b|)^2$, due to the sign of the middle term that are constrained to be positive with the absolute value. We now apply the Cauchy-Schwartz inequality to obtain
\begin{align}
 \left(d(\boldsymbol{x},\mathcal{P}_{\ell,k})-d(\boldsymbol{z}_{\ell-1},\mathcal{H}_{\ell,k}) \right)^2
 \leq &C\times \left(\| \boldsymbol{w}_{\ell,k}\|_2 \|\frac{\boldsymbol{z}_{\ell-1}}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}-\frac{\boldsymbol{z}'_{\ell-1}}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}\|_2 \right.\\
 &\left.+\left|[\mu_{\ell}]_{k}\left(\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}-\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}\right)\right|\right)^2\\
 \leq &C\times \left(\| \boldsymbol{w}_{\ell,k}\|_2 \left\|\frac{\boldsymbol{z}_{\ell-1}}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}-\frac{\boldsymbol{z}'_{\ell-1}}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}\right\|_2 \right.\\
 &\left.+\left|[\mu_{\ell}]_{k}\left(\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}-\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}\right)\right|\right)^2\\
 \leq &C\times \left(\| \boldsymbol{w}_{\ell,k}\|_2 \|\boldsymbol{x}\|_2\left\|\frac{A_{Q_{|\ell-1}}}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}-\frac{A_{Q_{|\ell-1}}'}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}\right\|_F \right.\\
 &\left.+\left|[\mu_{\ell}]_{k}\left(\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA'_{Q_{|\ell-1}}\|_2}-\frac{1}{\| \boldsymbol{w}_{\ell,k}^TA_{Q_{|\ell-1}}\|_2}\right)\right|\right)^2
\end{align}
\fi

\fi
\end{proof}

\subsection{Proof of Proposition~\ref{prop:each_side}}
\label{proof:prop_each_side}


\begin{prop}
\label{prop:each_side}
Consider an $L$-layer DN configured to learn a binary classifier from the labeled training data $\mathcal{X}$ using a leaky-ReLU activation function, arbitrary weights 
$\boldsymbol{W}_{\ell}$ at all layers, BN at layers $1,\dots,L-1$, and layer $L$ configured as in (\ref{eq:no_BN}) with $\boldsymbol{c}_L=\mathbf{0}$.
Then, for any training mini-batch from $\mathcal{X}$, there will be at least one data point on either side of the decision boundary.
\end{prop}



\begin{proof}
When using leaky-ReLU, the inputs to the last layer take both positive and negative values in each dimension over the current mini-batch.
That is, each dimension will have at least one negative value while the others are positive, or vice-versa: BN centers each pre-activation dimension of layer $L-1$ to have zero mean over the mini-batch, and the sign-preserving leaky-ReLU carries these signs through to the last layer's input. As the last layer is initialized with zero bias, the decision boundary is defined in the last-layer input space as the hyperplane (or zero-set) of each output unit. Also, being on one side or the other of the decision boundary in the DN input space is equivalent to being on one side or the other of the linear decision boundary in the last-layer input space. Combining these two observations, we obtain that at initialization there has to be at least one sample on one side of the decision boundary and the others on the other side.
\end{proof}


\subsection{Proof of BN Statistics Variance}


\begin{proof}
Consider a random vector $\boldsymbol{z}$ with $E(\boldsymbol{z})=\boldsymbol{m}$ and $Cov(\boldsymbol{z})=\text{diag}(\boldsymbol{\rho}^2)$. Then we directly have that 
\begin{align*}
 E(\langle \boldsymbol{w},\boldsymbol{z}\rangle) =&\langle \boldsymbol{w},\boldsymbol{m}\rangle\\
 Var(\langle \boldsymbol{w},\boldsymbol{z}\rangle) =& E((\langle \boldsymbol{w},\boldsymbol{z}\rangle-E(\langle \boldsymbol{w},\boldsymbol{z}\rangle))^2)\\
 =&E((\langle \boldsymbol{w},\boldsymbol{z}\rangle-\langle \boldsymbol{w},\boldsymbol{m}\rangle)^2)\\
 =&E(\langle \boldsymbol{w},\boldsymbol{z}-\boldsymbol{m}\rangle^2)\\
 =&E\left(\sum_{d}\boldsymbol{w}_d^2(\boldsymbol{z}_d-\boldsymbol{m}_d)^2 + \sum_{d\not = d'}\boldsymbol{w}_d(\boldsymbol{z}_d-\boldsymbol{m}_d)\boldsymbol{w}_{d'}(\boldsymbol{z}_{d'}-\boldsymbol{m}_{d'})\right)\\
 =&\sum_{d}\boldsymbol{w}_d^2\rho^2_d\\
 =&\langle \boldsymbol{w}^2,\boldsymbol{\rho}^2\rangle,
\end{align*}
where the cross terms vanish because the covariance of $\boldsymbol{z}$ is diagonal, and $\boldsymbol{w}^2$ denotes the element-wise square. Now, given the mean and variance of the random variable $\langle \boldsymbol{w},\boldsymbol{z}\rangle$, the desired result is obtained by using the standard empirical variance estimator.
\end{proof}
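A quick Monte-Carlo check of this identity (toy sizes and variable names of our choosing):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
w = rng.standard_normal(6)
rho2 = rng.uniform(0.5, 2.0, 6)                       # diagonal covariance
Z = rng.standard_normal((200000, 6)) * np.sqrt(rho2)  # Cov(z) = diag(rho2)
# the empirical Var(<w, z>) should approach <w^2, rho^2>
print(np.var(Z @ w), w**2 @ rho2)
\end{verbatim}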
\begin{figure}[t]
 \centering
 \begin{minipage}{0.022\linewidth}
 \rotatebox{90}{\hspace{0.5cm}test accuracy}
 \end{minipage}
 \begin{minipage}{0.2\linewidth}
 \centering
 \includegraphics[width=\linewidth,height=28mm]{images/pretrain/accuracy_CIFAR100_resnet.pdf}\\
 learning epochs
 \end{minipage}
 \begin{minipage}{0.76\linewidth}
 \caption{\small 
 Image classification using a Resnet9 on CIFAR100. 
 No BN or data augmentation was used during SGD training.
 Instead, the DN was initialized with random weights and zero bias (blue), random bias (orange), or via BN across the entire data set as in Figure~\ref{fig:backprop} (green).
 Each training was repeated $10$ times with learning rate cross-validation, and we plot the average test accuracy (of the best validation set learning rate) vs.\ learning epoch.
 BN's ``smart initialization'' reaches a higher-performing solution faster because it provides SGD with a spline partition that is already adapted to the training dataset.
 }
 \label{fig:test2}
 \end{minipage}
 \end{figure}


\section{Dataset Descriptions}

\textbf{MNIST.} The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning. It was created by ``re-mixing'' the samples from NIST's original datasets. The creators felt that since NIST's training dataset was taken from American Census Bureau employees, while the testing dataset was taken from American high school students, it was not well-suited for machine learning experiments. Furthermore, the black and white images from NIST were normalized to fit into a $28\times 28$ pixel bounding box and anti-aliased, which introduced grayscale levels.


The MNIST database contains $60,000$ training images and $10,000$ testing images. Half of the training set and half of the test set were taken from NIST's training dataset, while the other half of the training set and the other half of the test set were taken from NIST's testing dataset. The original creators of the database keep a list of some of the methods tested on it. In their original paper, they use a support-vector machine to get an error rate of 0.8\%.



\textbf{SVHN.} SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirements on data preprocessing and formatting. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but it incorporates an order of magnitude more labeled data (over $600,000$ digit images) and comes from a significantly harder, unsolved, real-world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images.


\textbf{CIFAR10.} The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. The CIFAR-10 dataset contains $60,000$ $32\times 32$ color images in $10$ different classes. The $10$ different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are $6,000$ images of each class.

Computer algorithms for recognizing objects in photos often learn by example. CIFAR-10 is a set of images that can be used to teach a computer how to recognize objects. Since the images in CIFAR-10 are low-resolution ($32\times 32$), this dataset allows researchers to quickly try different algorithms to see what works. Various kinds of convolutional neural networks tend to be the best at recognizing the images in CIFAR-10. 



\textbf{CIFAR100.} This dataset is just like CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses.
Each image comes with a ``fine'' label (the class to which it belongs) and a ``coarse'' label (the superclass to which it belongs).

\textbf{Imagenette.} Imagenette is a subset of 10 classes from Imagenet (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, parachute). The ImageNet project is a large visual database designed for use in visual object recognition software research; since 2010 it has run the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which software programs compete to correctly classify and detect objects and scenes over a ``trimmed'' list of one thousand non-overlapping classes.

\section{Single-Layer Analysis of Batch Normalization}
\label{sec:shallow}


In this section, we investigate how BN impacts one individual DN layer.
Our analysis leverages the identification that DN layers using continuous piecewise linear activation functions $a$ in (\ref{eq:no_BN}) and (\ref{eq:BN}) are {\em splines} that partition their input space into convex polytopal regions. 
We show that the BN parameter $\boldsymbol{\mu}_\ell$ translates the regions such that they concentrate around the training data.


\subsection{Batch normalization details} 


The BN parameters $\boldsymbol{\beta}_{\ell}, \boldsymbol{\gamma}_{\ell}$, along with the DN weights $\boldsymbol{W}_\ell$, are {\em learned directly} through the optimization of the DN's objective function (e.g., squared error or cross-entropy) evaluated at the training data samples $\mathcal{X}=\{ \boldsymbol{x}_i, i=1,\dots,n\}$ and labels (if available).
Current practice performs the optimization using some flavor of stochastic gradient descent (SGD) over randomized mini-batches of training data samples $\mathcal{B}\subset\mathcal{X}$.
Our first new result is that we can set $\boldsymbol{\gamma}_{\ell}=\boldsymbol{1}$ and $\boldsymbol{\beta}_{\ell}=\boldsymbol{0}$ with no or negligible impact on DN performance for current architectures, training datasets, and tasks.
First, we prove in Appendix~\ref{sec:betagamma} that we can set $\boldsymbol{\gamma}_\ell=\boldsymbol{1}$ both in theory and in practice.

\begin{prop}
\label{prop:redundant}
The BN parameter $\boldsymbol{\gamma}_\ell\neq \boldsymbol{0}$ does not impact the approximation expressivity of a DN, because its value can be absorbed into $\boldsymbol{W}_{\ell+1},\boldsymbol{\beta}_{\ell}$.
\end{prop}

Second, we demonstrate numerically in Appendix~\ref{sec:betagamma} that setting $\boldsymbol{\beta}_{\ell}=\boldsymbol{0}$ has negligible impact on DN performance.
Henceforth, we will assume for our theoretical analysis that $\boldsymbol{\gamma}_{\ell}=\boldsymbol{1}, \boldsymbol{\beta}_{\ell}=\boldsymbol{0}$ and will clarify for each experiment whether or not we enforce these constraints.



Let $\mathcal{X}_{\ell}$ denote the collection of feature maps $\boldsymbol{z}_{\ell}$ at the input to layer $\ell$ produced by all inputs $\boldsymbol{x}$ in the entire training data set $\mathcal{X}$, 
and similarly let $\mathcal{B}_\ell$ denote the collection of feature maps $\boldsymbol{z}_\ell$ at the input to layer $\ell$ produced by all inputs $\boldsymbol{x}$ in the mini-batch
$\mathcal{B}$.

For each mini-batch $\mathcal{B}$ during training, the BN parameters $\boldsymbol{\mu}_{\ell}, \boldsymbol{\sigma}_{\ell}$ are {\em calculated directly} as the element-wise mean and standard deviation of the linearly transformed mini-batch feature maps $\boldsymbol{W}_\ell\boldsymbol{z}_\ell$, $\boldsymbol{z}_\ell\in\mathcal{B}_\ell$:
\begin{align}
&\boldsymbol{\mu}_{\ell} 
\leftarrow
\frac{1}{|\mathcal{B}_\ell|}\sum_{\boldsymbol{z}_\ell \in \mathcal{B}_\ell} \boldsymbol{W}_{\ell}\boldsymbol{z}_\ell,
&\boldsymbol{\sigma}_{\ell}
\leftarrow
\sqrt{\frac{1}{|\mathcal{B}_\ell|}\sum_{\boldsymbol{z}_\ell \in \mathcal{B}_\ell}\big( \boldsymbol{W}_{\ell}\boldsymbol{z}_\ell-\boldsymbol{\mu}_{\ell} \big)^2},
\label{eq:bnupdate}
\end{align}
where the square on the right-hand side is taken element-wise.
After SGD learning is complete, a final fixed ``test time'' mean $\overline{\boldsymbol{\mu}}_{\ell}$ and standard deviation $\overline{\boldsymbol{\sigma}}_{\ell}$ are computed using the above formulae over all of the training data,\footnote{or more commonly as an exponential moving average of the training mini-batch values.} i.e., with $\mathcal{B}_\ell=\mathcal{X}_\ell$.
Note that {\em no label information enters into the calculation of} $\boldsymbol{\mu}_{\ell}, \boldsymbol{\sigma}_{\ell}$.
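In code, the update (\ref{eq:bnupdate}) followed by the BN-equipped layer mapping (\ref{eq:BN}) with $\boldsymbol{\gamma}_{\ell}=\boldsymbol{1}, \boldsymbol{\beta}_{\ell}=\boldsymbol{0}$ amounts to the following minimal \texttt{numpy} sketch (a dense layer and leaky-ReLU are assumed; function and variable names are ours):

\begin{verbatim}
import numpy as np

def bn_layer(Z, W, alpha=0.2):
    # Z: (batch, D_l) layer inputs; W: (D_{l+1}, D_l) weight matrix.
    H = Z @ W.T             # pre-BN feature maps W_l z_l
    mu = H.mean(axis=0)     # BN mean, element-wise over the mini-batch
    sigma = H.std(axis=0)   # BN standard deviation, element-wise
    Hn = (H - mu) / sigma   # normalized pre-activations h_l
    return np.where(Hn > 0, Hn, alpha * Hn), mu, sigma  # leaky-ReLU

rng = np.random.default_rng(2)
Z_out, mu, sigma = bn_layer(rng.standard_normal((64, 8)),
                            rng.standard_normal((16, 8)))
\end{verbatim}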
\subsection{Deep network spline partition (one layer)}
\label{sec:spline1}

We focus on the lion's share of modern DNs that employ continuous piecewise-linear activation functions $a$ in (\ref{eq:no_BN}) and (\ref{eq:BN}). 
To streamline our notation, but without loss of generality, we assume that $a$ consists of exactly two linear pieces that connect at the origin, such as the ubiquitous ReLU ($a(u)=\max(0,u)$), leaky-ReLU ($a(u)=\max(\alpha u,u)$ with $0<\alpha<1$), and absolute value ($a(u)=\max(-u,u)$).
The extension to more general continuous piecewise-linear activation functions is straightforward \cite{balestriero2018spline,madmaxIEEE};
moreover, the extension to an infinite class of smooth activation functions (including the sigmoid-gated linear unit, among others) follows from a simple probabilistic argument \cite{balestriero2018from}. 
Inserting pooling operators \cite{goodfellow2016deep} between layers does not impact our results (see Appendix~\ref{app:details}).


A DN layer $\ell$ equipped with BN and employing such a piecewise-linear activation function is a {\em continuous piecewise-affine (CPA) spline operator} defined by a partition $\Omega_{\ell}$ of the layer's input space $\mathbb{R}^{D_\ell}$ into a collection of convex polytopal regions and a corresponding collection of affine transformations (one for each region) mapping layer inputs $\boldsymbol{z}_{\ell}$ to layer outputs $\boldsymbol{z}_{\ell+1}$.
Here we explain how the partition regions in $\Omega_{\ell}$ are formed; then in Section~\ref{sec:MUpart1} we begin our investigation of how these regions are transformed by BN.

Define the {\em pre-activation} of layer $\ell$ by $\boldsymbol{h}_{\ell}$ such that the layer output $\boldsymbol{z}_{\ell+1}=\boldsymbol{a}(\boldsymbol{h}_{\ell})$; from (\ref{eq:BN}) its $k^{\rm th}$ entry is given by
\begin{align}
 h_{\ell,k}=
 \frac{\left\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}_{\ell}\right\rangle-\mu_{\ell,k}}{\sigma_{\ell,k}}.
 \label{eq:h}
\end{align}
Note from (\ref{eq:bnupdate}) that $\sigma_{\ell,k}>0$ as long as $\|\boldsymbol{w}_{\ell,k}\|_2^2>0$ and the projections $\langle\boldsymbol{w}_{\ell,k},\boldsymbol{z}_{\ell}\rangle$ are not constant across the mini-batch.
With typical CPA nonlinearities $\boldsymbol{a}$, the $k^{\rm th}$ feature map output $z_{\ell+1,k}=a(h_{\ell,k})$ is linear in $h_{\ell,k}$ over all inputs for which $h_{\ell,k}$ has the same sign. The separation between these two linear regimes is formed by the collection of layer inputs $\boldsymbol{z}_{\ell}$ that produce pre-activations with $h_{\ell,k} = 0$ and hence lie on the $(D_\ell-1)$-dimensional hyperplane
\begin{align}
 \mathcal{H}_{\ell,k}
 &
 =\left\{\boldsymbol{z}_{\ell} \in \mathbb{R}^{D_{\ell}}: h_{\ell,k}=0\right\}
 =\left\{\boldsymbol{z}_{\ell} \in \mathbb{R}^{D_{\ell}} : \left\langle \boldsymbol{w}_{\ell,k},\boldsymbol{z}_{\ell}\right\rangle=
 \mu_{\ell,k}\right\}.
 \label{eq:Hk}
\end{align}
Note that $\mathcal{H}_{\ell,k}$ is independent of the value of $\sigma_{\ell,k}$.
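The following small sketch (of our own making) makes the partition concrete: each input is assigned the vector of signs of its pre-activations, one bit per hyperplane $\mathcal{H}_{\ell,k}$, and inputs sharing this pattern lie in the same region of $\Omega_\ell$.

\begin{verbatim}
import numpy as np

def region_pattern(Z, W, mu):
    # Side of each hyperplane {z : <w_{l,k}, z> = mu_k} each input falls on;
    # rows with equal patterns belong to the same region of Omega_l.
    return Z @ W.T > mu              # boolean array of shape (batch, D_l)

rng = np.random.default_rng(5)
W = rng.standard_normal((16, 8))     # layer weights
mu = rng.standard_normal(16)         # BN means (hyperplane offsets)
Z = rng.standard_normal((256, 8))    # a batch of layer inputs
codes = {tuple(row) for row in region_pattern(Z, W, mu)}
print(len(codes))  # number of distinct partition regions hit by the batch
\end{verbatim}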
The boundary $\partial \Omega_{\ell}$ of the layer's input space partition $\Omega_{\ell}$ is obtained simply by combining all of the $\mathcal{H}_{\ell,k}$ into the {\em hyperplane arrangement} \cite{zaslavsky1975facing} 
\begin{align}
 \partial \Omega_{\ell} = \cup_{k=1}^{D_{\ell}}\mathcal{H}_{\ell,k}.
 \label{eq:boundary}
\end{align}
For additional results on the DN spline partition, see 
\cite{montufar2014number,raghu2017expressive,serra2018bounding,balestriero2019geometry}; the only property of interest here is that, for all inputs lying in the same region $\omega \in \Omega_{\ell}$, the layer mapping is a simple affine transformation $\boldsymbol{z}_{\ell+1}=\sum_{\omega \in \Omega_{\ell}}(\boldsymbol{A}_{\ell} (\omega)\boldsymbol{z}_{\ell}+\boldsymbol{b}_{\ell}(\omega))\Indic_{\{\boldsymbol{z}_{\ell} \in \omega\}}$ (see Appendix~\ref{app:details}).


\subsection[Batch normalization translates the spline partition boundaries towards the training data (Part 1)]{Batch normalization parameter $\boldsymbol{\mu}$ translates the spline partition boundaries towards the training data (Part 1)}
\label{sec:MUpart1}


With the above background in place, we now demonstrate that the BN parameter $\boldsymbol{\mu}_\ell$ impacts the spline partition $\Omega_\ell$ of the input space of DN layer $\ell$ by {\em translating its boundaries $\partial \Omega_\ell$ towards the current mini-batch training data $\mathcal{B}_\ell$.}

We begin with some definitions.
The Euclidean distance from a point $\boldsymbol{v}$ in layer $\ell$'s input space to the layer's $k^{\rm th}$ hyperplane $\mathcal{H}_{\ell,k}$ is easily calculated as (e.g., Eq.~1 in \cite{AMALDI201322})
\begin{align}
 d(\boldsymbol{v},\mathcal{H}_{\ell,k}) = \frac{\left | \langle \boldsymbol{w}_{\ell,k},\boldsymbol{v}\rangle - \mu_{\ell,k}\right | }{\|\boldsymbol{w}_{\ell,k}\|_2},
 \label{eq:distance}
\end{align}
as long as $\|\boldsymbol{w}_{\ell,k}\| >0$.
Then, the average squared distance between $\mathcal{H}_{\ell,k}$ and a collection of points $\mathcal{V}$ in layer $\ell$'s input space is given by
\begin{align}
\mathcal{L}_k(\mu_{\ell,k},\mathcal{V}) = \frac{1}{|\mathcal{V}|}\sum_{\boldsymbol{v} \in \mathcal{V}}
d\left(\boldsymbol{v},\mathcal{H}_{\ell,k}\right)^2= \frac{\sigma_{\ell,k}^2}{\|\boldsymbol{w}_{\ell,k}\|_2^2},
\label{eq:optimization-k}
\end{align}
where we have made explicit the dependency of $\mathcal{L}_k$ on $\mu_{\ell,k}$ through $\mathcal{H}_{\ell,k}$, and where the last equality holds for $\mathcal{V}=\mathcal{B}_\ell$ with $\mu_{\ell,k},\sigma_{\ell,k}$ computed as in (\ref{eq:bnupdate}).
Going one step further, the {\em total least squares (TLS) distance} \citep{samuelson1942note,golub1980analysis}
between a collection of points $\mathcal{V}$ in layer $\ell$'s input space and layer $\ell$'s partition $\Omega_\ell$ is given by 
\begin{align}
\mathcal{L}(\boldsymbol{\mu}_{\ell},\mathcal{V}) = \sum_{k=1}^{D_{\ell}}
\mathcal{L}_k(\mu_{\ell,k},\mathcal{V}).
\label{eq:optimization}
\end{align}


In Appendix \ref{proof:thmonelayer}, we prove that
the BN parameter $\boldsymbol{\mu}_\ell$ as computed in (\ref{eq:bnupdate}) is the unique solution of the strictly convex optimization problem of minimizing the average TLS distance (\ref{eq:optimization}) 
between the training data and layer $\ell$'s hyperplanes $\mathcal{H}_{\ell,k}$, and hence the spline partition region boundaries $\partial\Omega_\ell$.

\begin{thm}
\label{thm:onelayer}
Consider layer $\ell$ of a DN as described in (\ref{eq:BN}) and a mini-batch of layer inputs $\mathcal{B}_\ell\subset \mathcal{X}_\ell$.
Then $\\boldsymbol{\\mu}_{\\ell}$ in (\\ref{eq:bnupdate}) is the unique minimizer of $\\mathcal{L}(\\boldsymbol{\\mu}_{\\ell},\\mathcal{B}_\\ell)$, and $\\overline{\\boldsymbol{\\mu}}_{\\ell}$ is the unique minimizer of \n$\\mathcal{L}(\\boldsymbol{\\mu}_{\\ell},\\mathcal{X}_\\ell)$.\n\\end{thm}\n\nIn words, at each layer $\\ell$ of a DN, BN explicitly adapts the input-space partition $\\Omega_\\ell$ by using $\\boldsymbol{\\mu}_\\ell$ to translate its boundaries $\\mathcal{H}_{\\ell,1},\\mathcal{H}_{\\ell,2},\\dots$ to minimize the TLS distance to the training data.\nFigure~\\ref{fig:2d_partition_bn} demonstrates empirically in two dimensions how this translation focuses the layer's spline partition on the data. \nMoreover, this translation takes on a very special form.\nWe prove in Appendix~\\ref{proof:centroid} \nthat BN transforms the spline partition boundary $\\partial \\Omega_{\\ell}$\ninto a {\\em central hyperplane arrangement} \\cite{stanley2004introduction}.\n\n\nIt is worth noting that the above results do not involve any data label information, and so -- at least as far as $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\sigma}$ are concerned -- BN can be interpreted as an {\\em unsupervised} learning technique.\n\n\\section{Introduction}\n\nDeep learning has made major impacts in a wide range of applications.\nMathematically, a deep (neural) network (DN) maps an input vector $\\boldsymbol{x}$ to a sequence of $L$ {\\em feature maps} $\\boldsymbol{z}_\\ell$, $\\ell=1,\\dots,L$ by successively applying the simple nonlinear transformation (termed a DN {\\em layer})\n\\begin{equation}\n \\boldsymbol{z}_{\\ell+1}= \\boldsymbol{a}\\left(\\boldsymbol{W}_{\\ell}\\boldsymbol{z}_{\\ell}+\\boldsymbol{c}_{\\ell}\\right), \n \\quad \\ell=0,\\dots,L-1\n \\label{eq:no_BN}\n\\end{equation}\nwith $\\boldsymbol{z}_0=\\boldsymbol{x}$, $\\boldsymbol{W}_\\ell$ the weight matrix, $\\boldsymbol{c}_\\ell$ the bias vector, and $\\boldsymbol{a}$ an activation operator that applies a scalar nonlinear activation function $a$ to each element of its vector input. The structure of $\\boldsymbol{W}_\\ell,\\boldsymbol{c}_{\\ell}$ controls the type of layer (e.g., circulant matrix for convolutional layer).\nFor regression tasks, the DN prediction is simply $\\boldsymbol{z}_L$, while for classification tasks, $\\boldsymbol{z}_L$ is often processed through a softmax operator \\cite{goodfellow2016deep}.\nThe DN parameters $\\boldsymbol{W}_\\ell, \\boldsymbol{c}_\\ell$\nare learned from a collection of training data samples $\\mathcal{X}=\\{ \\boldsymbol{x}_i, i=1,\\dots,n\\}$ (augmented with the corresponding ground-truth labels $\\boldsymbol{y}_i$ in supervised settings) by optimizing an objective function (e.g., squared error or cross-entropy). 
Learning is typically performed via some flavor of stochastic gradient descent (SGD) over randomized mini-batches of training data samples $\\mathcal{B}\\subset\\mathcal{X}$ \\cite{goodfellow2016deep}.\n\n\nWhile a host of different DN architectures have been developed over the past several years, modern, high-performing DNs nearly universally employ {\\em batch normalization} (BN) \\cite{ioffe2015batch} to center and normalize the entries of the feature maps using four additional parameters $\\boldsymbol{\\mu}_\\ell,\\boldsymbol{\\sigma}_\\ell,\\boldsymbol{\\beta}_\\ell,\\boldsymbol{\\gamma}_\\ell$.\nDefine $z_{\\ell,k}$ as the $k^{\\rm th}$ entry of feature map $\\boldsymbol{z}_\\ell$ of length $D_\\ell$, \n$\\boldsymbol{w}_{\\ell,k}$ as the $k^{\\rm th}$ row of the weight matrix $\\boldsymbol{W}_\\ell$, \nand $\\mu_{\\ell,k},\\sigma_{\\ell,k},\\beta_{\\ell,k},\\gamma_{\\ell,k}$ as the $k^{\\rm th}$ entries of the BN parameter vectors $\\boldsymbol{\\mu}_\\ell,\\boldsymbol{\\sigma}_\\ell,\\boldsymbol{\\beta}_\\ell,\\boldsymbol{\\gamma}_\\ell$, respectively.\nThen we can write the BN-equipped layer $\\ell$ mapping extending (\\ref{eq:no_BN}) as\n\\begin{equation}\n z_{\\ell+1,k}=\n a\\left(\n \\frac{\\left\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell}\\right\\rangle-\\mu_{\\ell,k}}{\\sigma_{\\ell,k}}\n \\: \\gamma_{\\ell,k} + \\beta_{\\ell,k}\n \\right),k=1,\\dots,D_\\ell.\n\\label{eq:BN}\n\\end{equation}\nThe parameters $\\boldsymbol{\\mu}_\\ell,\\boldsymbol{\\sigma}_\\ell$ are computed as the element-wise mean and standard deviation of $\\boldsymbol{W}_\\ell \\boldsymbol{z}_{\\ell}$ for each mini-batch during training and for the entire training set during testing.\nThe parameters $\\boldsymbol{\\beta}_\\ell,\\boldsymbol{\\gamma}_\\ell$ are learned along with $\\boldsymbol{W}_\\ell$ via SGD.\\footnote{Note that the DN bias $\\boldsymbol{c}_\\ell$ from (\\ref{eq:no_BN}) has been subsumed into $\\boldsymbol{\\mu}_\\ell$ and $\\boldsymbol{\\beta}_\\ell$.}\nThe empirical fact that BN significantly improves both training speed and generalization performance of a DN in a wide range of tasks has made it ubiquitous, as evidenced by the 40,000 citations of the originating paper \\cite{ioffe2015batch}.\n\n\nOnly limited progress has been made to date in explaining BN, primarily in the context of optimization.\nBy studying how backpropagation updates the layer weights, \\cite{cun1998efficient} observed that unnormalized feature maps are constrained to live on a low-dimensional subspace that limits the capacity of gradient-based learning.\nBy slightly altering the BN formula (\\ref{eq:BN}), \\cite{salimans2016weight} showed that renormalization via $\\boldsymbol{\\sigma}_{\\ell}$ smooths the optimization landscape and enables faster training. \nSimilarly, \\cite{bjorck2018understanding, santurkar2018does,kohler2019exponential} confirmed BN's impact on the gradient distribution and optimization landscape through large-scale experiments. 
\nUsing mean field theory, \\cite{yang2019mean} characterized the gradient statistics of BN in fully connected feed-forward networks with random weights \nto show that it regularizes the gradients and improves the optimization landscape conditioning.\n\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}{0.02\\linewidth}\n \\rotatebox{90}{with BN \\hspace{15mm} without BN \\hspace{2mm}}\n \\end{minipage}\n \\foreach \\i in {0,1,2,3}{\n \\begin{minipage}{0.23\\linewidth}\n \\centering\n layer \\i\\\\[-0.1em]\n \\includegraphics[width=\\linewidth]{images\/2d_partition\/2d_partition_before_\\i_2.pdf} \\\\[-0.5em]\n \\includegraphics[width=\\linewidth]{images\/2d_partition\/2d_partition_after_\\i_2.pdf} \\end{minipage}\n }\n \\vspace{-3mm}\n \\caption{\\small\n Visualization of the input-space spline partition (``linear regions'') of a four-layer DN with 2D input space, 6 units per layer, leaky-ReLU activation function, and random weights $\\boldsymbol{W}_\\ell$. \n The training data samples are denoted with black dots.\n In each plot, \\color{blue}blue lines \\color{black} correspond to folded hyperplanes introduced by the units of the corresponding layer, while \\color{gray}gray lines \\color{black} correspond to (folded) hyperplanes introduced by previous layers.\n Top row: Without BN (i.e., using (\\ref{eq:no_BN})), the folded hyperplanes are spread throughout the input space, resulting in a spline partition that is agnostic to the data. \n Bottom row: With BN (i.e., using (\\ref{eq:BN})), the folded hyperplanes are drawn towards the data, resulting in an adaptive spline partition that -- even with random weights -- minimizes the distance between the partition boundaries and the data and thus increases the density of partition regions around the data.\n }\n \\label{fig:2d_partition_bn}\n \\vspace{-0.2cm}\n\\end{figure}\n\n\nOne should not take away from the above analyses that BN's only effect is to smooth the optimization loss surface or stabilize gradients.\nIf this were the case, then BN would be redundant in advanced architectures like residual \\cite{li2017visualizing} and mollifying networks \\cite{gulcehre2016mollifying} that have been proven to have improved optimization landscapes\n\\cite{li2018visualizing,riedi2022singular} and have been coupled with advanced optimization techniques like Adam \\cite{kingma2014adam}.\nQuite to the contrary, BN significantly improves the performance of even these advanced networks and techniques.\n\n\nIn this paper, we study BN theoretically from a different perspective that provides new insights into how it boosts DN optimization and inference performance.\nOur perspective is function approximation; we exploit the fact that most of today's state-of-the-art DNs are {\\em continuous piecewise affine (CPA) splines} that fit a predictor to the training data via affine mappings defined over a partition of the input space (the so-called ``linear regions''); see \\cite{madmaxIEEE,balestriero2018spline,balestriero2019geometry} and \nAppendix~\\ref{app:details} for more details.\n\nThe key finding of our study is that {\\bf\\em BN is an unsupervised learning technique that -- independent of the DN's weights or gradient-based learning -- adapts the geometry of a DN's spline partition to match the data}. 
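\nThis finding can be checked numerically in a few lines. The sketch below (our illustration, not the paper's code; the off-center Gaussian toy batch and all names are assumptions made only for the example) verifies the statement of Theorem~\\ref{thm:onelayer}: the mini-batch mean of $\\langle \\boldsymbol{w}_{\\ell,k},\\boldsymbol{z}_{\\ell}\\rangle$ minimizes the TLS objective (\\ref{eq:optimization}), so no random offset of the boundaries lies closer to the data on average.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nW = rng.standard_normal((6, 2))          # rows w_k\nZ = rng.standard_normal((256, 2)) + 3.0  # batch, deliberately off-center\n\ndef tls(mu):\n    # L(mu, Z): batch average of d(z, H_k)^2, summed over the units k\n    d2 = (Z @ W.T - mu) ** 2 / (W ** 2).sum(axis=1)\n    return d2.mean(axis=0).sum()\n\nmu_bn = (Z @ W.T).mean(axis=0)           # BN's choice of mu\nassert all(tls(mu_bn) <= tls(mu_bn + rng.standard_normal(6))\n           for _ in range(1000))         # no perturbed offset does better\n\\end{verbatim}\n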
\nOur three main theoretical contributions are as follows:\n\\begin{itemize}[noitemsep,topsep=0pt]\n\n \\item BN adapts the layer\/DN input space spline partition to minimize the total least squares (TLS) distance between the spline partition boundaries and the layer\/DN inputs, thereby increasing the number of partition regions around the training data and enabling finer approximation (see Figure~\\ref{fig:2d_partition_bn}).\n The BN parameter $\\boldsymbol{\\mu}_\\ell$ translates the boundaries towards the data, while the parameter $\\boldsymbol{\\sigma}_\\ell$ folds the boundaries towards the data (see Sections~\\ref{sec:shallow} and \\ref{sec:multi_layer}).\n\n \\item BN's adaptation of the spline partition provides a ``smart initialization'' that boosts the performance of DN learning, because it adapts even a DN initialized with random weights $\\boldsymbol{W}_\\ell$ to align the spline partition to the data (see Section~\\ref{sec:smart}).\n\n \\item BN's statistics vary between mini-batches, which introduces a dropout-like random jitter perturbation to the partition boundaries and hence to the decision boundary for classification problems.\n This jitter reduces overfitting and improves generalization by increasing the margin between the training samples and the decision boundary (see Section~\\ref{sec:noise}).\n\\end{itemize}\n\nThe proofs for our results are provided in the Appendices.\n\n$S>0$ corresponds to a prolate nematic phase N$_+$ and $S<0$ to an oblate nematic phase N$_-$. Note that $U \\neq 0$ if the particles are biaxial as we have here for $\\chi_0 \\neq 180^\\circ$. In a biaxial nematic phase (N$_\\text{B}$), all four are nonzero with $P$ describing the phase biaxiality, and $F$ describing both the phase and particle biaxiality. We also consider the segment order parameters $S_m = \\frac{1}{2} \\langle (3 \\cos^2 \\theta_m -1) \\rangle$, where $\\theta_m$ is the polar angle with respect to the nematic director $\\hat{n}$, which we determine as the eigenvector with the largest eigenvalue ($S_m$) of the diagonalized segment ordering tensor~\\cite{deGennes1993}.\n\nFor the case of a boomerang with a preferred angle of $\\chi_0= \\pi\/2$, we do not expect biaxial order with a two-fold rotational symmetry, but instead four-fold rotational symmetry (called the D$_4$ phase in Ref.~\\cite{blaak1998}), and so we also define the additional fourth-rank order parameter~\\cite{blaak1998}\n\n\t\\begin{equation}\\label{eq:OrderParC}\n\t \tC = \\cos^8 \\frac{\\beta}{2} \\cos[4(\\alpha+\\gamma)] + \\sin^8 \\frac{\\beta}{2} \\cos[4(\\alpha-\\gamma)].\n\t \\end{equation} \n\nIn the isotropic or uniaxial nematic phase $C=0$, while for an N$_\\text{B}$ phase $F\\neq0$ and $C\\neq0$, and in the D$_4$ phase $F=0$ and $C\\neq0$~\\cite{blaak1998}.
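\nAs a quick numerical illustration of the symmetry encoded in (\\ref{eq:OrderParC}), the sketch below (our illustration, not code from this work; angles in radians) evaluates the single-particle function whose ensemble average over the ODF yields $C$, and checks its invariance under a quarter-turn $\\gamma \\to \\gamma + \\pi\/2$.\n\\begin{verbatim}\nimport numpy as np\n\ndef order_param_C(alpha, beta, gamma):\n    # single-particle integrand of eq. (OrderParC)\n    return (np.cos(beta / 2) ** 8 * np.cos(4 * (alpha + gamma))\n            + np.sin(beta / 2) ** 8 * np.cos(4 * (alpha - gamma)))\n\nprint(order_param_C(0.0, 0.0, 0.0))  # 1.0 for a perfectly ordered particle\na, b, g = 0.3, 0.7, 1.1              # four-fold symmetry: quarter-turn\nprint(np.isclose(order_param_C(a, b, g),\n                 order_param_C(a, b, g + np.pi / 2)))  # True\n\\end{verbatim}\n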
\n\nDue to our discrete grid of $\\theta$ and $\\phi$ angles, the Euler angles will sometimes not be correctly distributed (e.g. $\\gamma$ is not even defined in the case of straight rods), and so we will set a threshold of $0.1$ for the absolute value of nonvanishing order parameters.\n\n\n\n\n\\section{Results}\\label{sect:results}\n\nWe first consider stiff particles with a persistence length of $P\/L=100$, which corresponds to bending fluctuations $\\sigma_\\chi \\lesssim 6^\\circ$, with these fluctuations only weakly depending on density. The single-segment ODF, together with information about the interarm angle, can provide a qualitative understanding of the full boomerang ODF, which is a function of four angles. Therefore, in Fig.~\\ref{fig:psi}, we show the equilibrium single-segment ODF $\\psi_1(\\theta,\\phi)$ on the grid of the $\\theta$ and $\\phi$ angles using the Winkel Tripel map projection for ease of viewing, for various densities $c$ and preferred opening angles $\\chi_0$. We also include a schematic representation of the possible phases in the lower left corner of each plot. In Fig.~\\ref{fig:psi}(a), the boomerangs prefer a straight orientation ($\\chi_0=180^\\circ$), and at density $c=5$ we find a prolate nematic phase where segments prefer orientations parallel or antiparallel to the nematic director $\\hat{n}$ along the map pole. They also prefer to be essentially antiparallel to each other, since $\\sigma_\\chi \\approx 3.1^\\circ$ and $\\langle \\chi \\rangle \\approx 174^\\circ$. Next, in Fig.~\\ref{fig:psi}(b), we consider particles with an intrinsic biaxiality due to a preferred opening angle $\\chi_0=117^\\circ$, which at density $c=20$ form a biaxial nematic phase. Here we find the average interarm angle to be $\\langle \\chi \\rangle \\approx 119^\\circ$ and the standard deviation to be $\\sigma_\\chi \\approx 5.5^\\circ$. We conclude that if the first segment has an orientation e.g. in the peak in the upper left of Fig.~\\ref{fig:psi}(b), then the second segment must have an orientation approximately given by the peak in the lower left, or else the particle's interarm angle would differ significantly from the average interarm angle. Therefore, in this N$_\\text{B}$ phase, particles have two preferred orientations related by the transformation $\\hat{x} \\to -\\hat{x}$ and so the segment ODF has four peaks. For a preferred angle of $\\chi_0=90^\\circ$, the particles are platelike and stiff with $\\langle \\chi \\rangle \\approx 90^\\circ$ and $\\sigma_\\chi \\approx 5.6^\\circ$. As evident from the single equatorial peak in Fig.~\\ref{fig:psi}(c) for $c=15$, we find that they form an oblate nematic with $\\hat{n}$ along the pole. Finally, in Fig.~\\ref{fig:psi}(d) we see that for $\\chi_0=90^\\circ$ and $c=20$, the boomerangs form a D$_4$ phase with four-fold symmetry, with the four preferred orientations being related by the transformations $\\hat{x} \\to -\\hat{x}$, $\\hat{x} \\to \\hat{z}$, and $\\hat{x} \\to -\\hat{z}$.\n\n\n\n\n\n\t\\begin{figure*}[tbph]\n\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{figure2.pdf}\n\t\t\\caption{Examples of segment orientation distribution functions $\\psi_1(\\theta,\\phi)$ for stiff particles with $P\/L=100$ for various preferred angles $\\chi_0$ and densities $c$. For $\\chi_0=180^\\circ$ and $c=5$ (a) we find a prolate nematic N$_+$. For $\\chi_0=117^\\circ$ and $c=20$ (b) we find a biaxial nematic N$_\\text{B}$ where boomerangs align their long axis $\\hat{z}$ with the pole. For $\\chi_0=90^\\circ$ and $c=15$ (c) we find an oblate nematic N$_-$ with director parallel to the pole. 
For $\\chi_0=90^\\circ$ and $c=20$ (d) we find a D$_4$ phase with boomerangs having four equivalent preferred orientations related by a rotation of $\\pi\/2$. Illustrations in the upper left corners show a boomerang with the corresponding interarm angle $\\chi_0$. Illustrations in the lower left corner show a schematic representation of each phase with the subscript on the nematic director $\\hat{n}$ indicating which particle axis is aligned along it, and with arrows around the director indicating symmetry under rotations around the director.}\\label{fig:psi}\n\t\\end{figure*}\n\nAfter this illustration of the nature of the single-segment distributions $\\psi_1(\\mathbf{\\hat{\\omega}})$, we now use the full ODF $\\psi(\\mathbf{\\hat{\\omega}}_1,\\mathbf{\\hat{\\omega}}_2)$ to calculate the order parameters defined from the particle frame [Eqs.~\\eqref{eq:OrderParS}-\\eqref{eq:OrderParC}]. These are shown in Fig.~\\ref{fig:orderPar100} as a function of the density $c$ for stiff boomerangs ($P\/L=100$) with preferred opening angles of (a) $\\chi_0=180^\\circ$, (b) $\\chi_0=117^\\circ$, and (c) $\\chi_0=90^\\circ$. For the rodlike particles in Fig.~\\ref{fig:orderPar100}(a), we find the expected first-order I-N$_+$ transition with coexisting isotropic density $c_i \\approx 3.34$ and nematic density $c_n \\approx 4.17$, which we determine using the conditions of mechanical and chemical equilibrium, and which are very similar to those of rigid uniaxial rods. We note that the fact that $U$ is a small nonzero number at low densities is an artifact of calculating the Euler angle $\\gamma$ for a particle with segments restricted to our numerical grid, and is not physically meaningful. Also, we note that the segment order parameter $S_1\\approx S$ since $S$ measures alignment of the particle's $\\hat{z}$ axis (see Fig.~\\ref{fig:particleModel}), which in this case is approximately the same as the segment orientation. In Fig.~\\ref{fig:orderPar100}(b), we find an I-N$_+$ transition at $c_i\\approx9.55$ and $c_n\\approx9.70$ followed by an N$_+$-N$_\\text{B}$ transition at $c\\approx 18$, which we determine by comparing the absolute value of the biaxial order parameters $P$ and $F$ to the threshold of 0.1. Since $S>S_1$ in this case, the main particle axis $\\hat{z}$ is more aligned with the nematic director at high density than the segments are, due to the bent shape of the particle. Finally, in Fig.~\\ref{fig:orderPar100}(c), we find a very weakly first-order I-N$_-$ transition at $c_i\\approx c_n \\approx 14$ and an N$_-$-D$_4$ transition at $c\\approx 16$.\n\n\n\n\n\t\\begin{figure*}[tbph]\n\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{figure3.pdf}\n\t\t\\caption{Order parameters defined in Eqs.~\\eqref{eq:OrderParS}-\\eqref{eq:OrderParC} as a function of density $c$ for stiff boomerangs ($P\/L=100$) with preferred angles (a) $\\chi_0=180^\\circ$, (b) $\\chi_0=117^\\circ$, and (c) $\\chi_0=90^\\circ$. The key applies to (a)-(c).}\\label{fig:orderPar100}\n\t\\end{figure*}\n\nNext we consider semiflexible boomerangs with $P\/L=10$. In this case the bending fluctuations have a greater dependence on density, and so in Fig.~\\ref{fig:chiDist} we plot the interarm probability density $g(\\chi)$ for several densities $c$ and for three preferred angles (a) $\\chi_0=180^\\circ$, (b) $\\chi_0=117^\\circ$, and (c) $\\chi_0=90^\\circ$. We see in Fig.~\\ref{fig:chiDist}(a) that this distribution becomes more peaked and shifts to higher $\\chi$ with increasing $c$. 
This effect is more pronounced in Fig.~\\ref{fig:chiDist}(b), where the boomerangs have $\\langle \\chi \\rangle \\approx \\chi_0$ at low densities, but pay a bending energy to straighten and hence to pack more efficiently at higher densities. In Fig.~\\ref{fig:chiDist}(c), we see that at densities $c \\leq 15$ the particles fluctuate around $\\langle \\chi \\rangle \\approx \\chi_0 = 90^\\circ$, but at high density $c=20$, $g(\\chi)$ has two peaks, one at small $\\chi$ where segments are almost bent on top of each other and one at large $\\chi$ where the particles are roughly straight. This is an artifact of our segmentwise excluded volume approximation, in which these two configurations have the same excluded volume and also cost the same bending energy because $\\chi_0=90^\\circ$. In this case the full excluded volume, as well as the intersegment excluded volume, should actually be considered. We will use the small-$\\chi$ peaks that may develop in $g(\\chi)$ to inform us of the breakdown of our model at high densities and high flexibilities.\n\n\n\n\n\t\\begin{figure*}[tbph]\n\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{figure4.pdf}\n\t\t\\caption{Probability density $g$ of the interarm angle $\\chi$ for semiflexible boomerangs ($P\/L=10$) at densities $c=5,10,15,20$ for preferred angles (a) $\\chi_0=180^\\circ$, (b) $\\chi_0=117^\\circ$, and (c) $\\chi_0=90^\\circ$. In (a), all four densities shown correspond to N$_+$ phases. In (b), $c=5$ corresponds to the isotropic phase, while $c=10,15,20$ correspond to the N$_+$ phase. In (c), $c=5,10$ correspond to the isotropic phase (we note that the blue and red curves are on top of each other), while $c=15$ corresponds to the N$_-$ phase and $c=20$ corresponds to the N$_+$ phase. The key applies to (a)-(c).}\\label{fig:chiDist}\n\t\\end{figure*}\n\nIn Fig.~\\ref{fig:chiVar}, we plot the average interarm angle $\\langle \\chi \\rangle$ in (a) and the standard deviation $\\sigma_\\chi$ in (b), both as a function of the density $c$ for five different preferred angles $\\chi_0$. As discussed, in the case of $\\chi_0=90^\\circ$, our approximation breaks down at $c>15$ where $\\sigma_\\chi$ becomes exceedingly large due to the spurious small-$\\chi$ peak that develops. In all other cases, however, the particles tend to straighten with increasing density ($\\langle \\chi \\rangle$ approaches $180^\\circ$), which costs bending energy but reduces their excluded volume. In addition, they tend to fluctuate less with increasing density ($\\sigma_\\chi$ decreases).\n\n\n\n\n\t\\begin{figure*}[tbph]\n\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{figure5.pdf}\n\t\t\\caption{(a) Average interarm angle $\\langle \\chi \\rangle$ and (b) standard deviation of the interarm angle $\\sigma_\\chi$, both as a function of the density $c$ for flexible boomerangs with $P\/L=10$ and various preferred angles $\\chi_0$.}\\label{fig:chiVar}\n\t\\end{figure*}\n\n\n\nNext, in Fig.~\\ref{fig:orderPar10} we consider the order parameter trends of semiflexible boomerangs with $P\/L=10$ and with preferred opening angles of (a) $\\chi_0=180^\\circ$, (b) $\\chi_0=117^\\circ$, and (c) $\\chi_0=90^\\circ$. In Fig.~\\ref{fig:orderPar10}(a), for boomerangs with a preferred straight configuration, there is an I-N$_+$ transition as in the case of stiff boomerangs, but this has shifted to higher densities with $c_i \\approx 4.05$ and $c_n \\approx 4.54$. 
The density gap $c_n-c_i$ is therefore also reduced compared with stiffer rods, in agreement with flexible needles in the continuum limit~\\cite{khokhlov1981,khokhlov1982,dijkstra1995,wessels2003,wessels2006,dennison2011JCP}. In the case of Fig.~\\ref{fig:orderPar10}(b), after the isotropic-prolate nematic transition, these semiflexible boomerangs do not transition to a biaxial nematic phase as their stiff counterparts did, but instead deform from their preferred angle $\\chi_0=117^\\circ$ to straighter configurations in the prolate nematic phase, as also discussed above. In Fig.~\\ref{fig:orderPar10}(c), the boomerangs have an I-N$_-$ transition as they did in the stiff case, but instead of forming a D$_4$ phase at high densities, they rather deform into straighter boomerangs and form an N$_+$ phase. However, as discussed above, the segmentwise approximation breaks down and we no longer trust our calculation at $c>15$ in Fig.~\\ref{fig:orderPar10}(c).\n\n\n\n\n\t\\begin{figure*}[tbph]\n\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{figure6.pdf}\n\t\t\\caption{Order parameters as a function of density $c$ for semiflexible boomerangs ($P\/L=10$) with preferred angles (a) $\\chi_0=180^\\circ$, (b) $\\chi_0=117^\\circ$, and (c) $\\chi_0=90^\\circ$. The key applies to (a)-(c).}\\label{fig:orderPar10}\n\t\\end{figure*}\n\n\n\nWe use the order parameters and the thermodynamic quantities to construct phase diagrams in the ($\\chi_0$, $c$) representation in Fig.~\\ref{fig:phaseDiagram100} for the four different persistence lengths: (a) $P\/L=100$, (b) $P\/L=20$, (c) $P\/L=10$, and (d) $P\/L=5$. In addition, we use the probability distribution for interarm angles $g(\\chi)$ to set an approximate criterion of $\\int_0^{\\pi\/4} d\\chi \\, g(\\chi) > 0.1$ to signify the breakdown of the theory, which is shown as a crosshatched region in the phase diagrams of Fig.~\\ref{fig:phaseDiagram100}. In the rigid case of Fig.~\\ref{fig:phaseDiagram100}(a), we see an isotropic phase at low densities, with a transition at higher densities to a prolate nematic when $\\chi_0> 112^\\circ$ and to an oblate nematic when $\\chi_0< 112^\\circ$. This separation between prolate and oblate ordering at $\\chi_0 \\approx112^\\circ$ is similar to the Landau angle of $\\chi_0 = 107^\\circ$ found for rigid boomerangs in Ref.~\\cite{teixeira1998}. We do not see a direct isotropic to biaxial nematic transition due to our threshold of 0.1 for the order parameters, which is not unexpected since the order parameters are predicted to be small close to the Landau point. In addition, as discussed, we do not find an N$_\\text{B}$ phase but rather a D$_4$ phase for preferred angles close to $\\chi_0=90^\\circ$. In Fig.~\\ref{fig:phaseDiagram100}(b), we find a similar topology, but see that the flexibility destroys much of the region of biaxial nematic stability, with the prolate nematic phase encroaching on this region and the separation between the N$_-$ and N$_+$ phases moving to smaller angles. The mechanism is the relatively cheap energy penalty to bend the boomerangs into needle-shaped objects. For the even more flexible boomerangs in Fig.~\\ref{fig:phaseDiagram100}(c), there is no longer a biaxial nematic or D$_4$ phase.
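\nFor reference, both the moments shown in Fig.~\\ref{fig:chiVar} and this breakdown criterion are direct integrals of the interarm distribution. A minimal sketch (our illustration, assuming $g(\\chi)$ has been sampled on a uniform grid and normalized, with a Gaussian toy stand-in for $g$):\n\\begin{verbatim}\nimport numpy as np\n\nchi = np.linspace(0.0, np.pi, 2001)            # interarm angle grid\ng = np.exp(-0.5 * ((chi - 2.0) / 0.1) ** 2)    # toy stand-in for g(chi)\ng /= np.trapz(g, chi)                          # normalize to a density\n\nchi_mean = np.trapz(chi * g, chi)                            # <chi>\nchi_std = np.sqrt(np.trapz((chi - chi_mean) ** 2 * g, chi))  # sigma_chi\nmask = chi < np.pi / 4                         # breakdown criterion:\nbreakdown = np.trapz(g[mask], chi[mask]) > 0.1 # weight below pi/4\nprint(np.degrees(chi_mean), np.degrees(chi_std), breakdown)\n\\end{verbatim}\n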
Finally, in the most flexible case studied here [Fig.~\\ref{fig:phaseDiagram100}(d)], we see that the region in which we predict our approximation to break down has become larger.\n\n In Ref.~\\cite{vaghela2017}, high flexibility was shown to cause spontaneous formation of biaxial nematics from boomerangs with $\\chi_0=180^\\circ$, which are uniaxial on average. However, we found only uniaxial prolate nematic phases for $\\chi_0=180^\\circ$ even for $P\/L=5$ ($\\sigma_\\chi \\lesssim 13^\\circ$) and $P\/L=1$ ($\\sigma_\\chi \\lesssim 50^\\circ$) (not shown). The latter case is so flexible that even at low densities for $\\chi_0=180^\\circ$, $g(\\chi)$ has a peak at small angles, so we no longer trust our approximation there.\n\n\n\n\n\n\n\t\\begin{figure*}[tbph]\n\t\\centering\n\t\t\t\\includegraphics[width=\\textwidth]{figure7.pdf}\n\t\t\\caption{Phase diagrams in the preferred angle $\\chi_0$ and density $c$ representation for semiflexible boomerangs with a persistence length of (a) $P\/L=100$, (b) $P\/L=20$, (c) $P\/L=10$, and (d) $P\/L=5$. Crosshatched regions denote the breakdown of the segmentwise approximation for the excluded volume. The illustrations along the horizontal axis show the particle shape for $\\chi=90^\\circ$ and $\\chi=180^\\circ$.}\\label{fig:phaseDiagram100}\n\t\\end{figure*}\n\n\n\n\\section{Discussion and Conclusions}\\label{sect:conclusions}\n\nIn this paper, we used second-virial density functional theory for semiflexible chains to study the phase behaviour of hard semiflexible boomerangs with different persistence lengths and preferred angles. For stiff boomerangs, we found that the separation between prolate and oblate ordering occurs at $\\chi_0 \\approx 112^\\circ$, which is similar to the Landau angle of $\\chi_0 =107^\\circ$ reported for rigid boomerangs~\\cite{teixeira1998}. However, our phase diagram has a limited region of oblate nematic stability, due to the preference of platelike boomerangs to form a D$_4$ phase with four-fold rotational symmetry. This phase requires fourth-rank order parameters to identify it, and was neglected in the work of Ref.~\\cite{teixeira1998} where only second-rank order parameters were considered.\n\n In contrast with recent results~\\cite{vaghela2017}, we did not find any evidence of a biaxial nematic phase composed of flexible boomerangs with a straight preferred configuration, which are uniaxial particles on average. Moreover, we found that even for particles that are intrinsically biaxial, flexibility discourages the formation of biaxial nematic phases in favour of prolate nematic phases. The underlying mechanism that we identified here is that, at high densities, the flexible boomerangs tend to stretch out in order to reduce their excluded volume. This is similar to an experimentally observed stretching of semiflexible polymer coils in a background nematic in Ref.~\\cite{dogic2004}, which was shown by theory in Ref.~\\cite{dennison2011PRL}.\n\n\n Using the excluded volume in the segmentwise approximation, as was also done in other works studying boomerangs~\\cite{teixeira1998,vaghela2017}, allowed us to formulate the theory in terms of single segment properties, from which the full particle orientation distribution functions and thermodynamics can nevertheless be deduced. 
We expect this approach to be more accurate than the method based on directly solving for the set of four second-rank order parameters, as was done in Ref.~\\cite{teixeira1998} for rigid boomerangs and in Ref.~\\cite{vaghela2017} for flexible boomerangs. For instance, only considering the second-rank order parameters limits the possible phases that can be studied, excluding for example the D$_4$ phase. Moreover, Ref.~\\cite{vaghela2017} is based on the additional approximation of interpolating the excluded volume between six known configurations in order to write it in terms of four angles (three relative Euler angles plus one interarm angle), even though for the flexible case five angles would actually have been needed within this method: three relative Euler angles plus the interarm angles of both particles. Note however that the segmentwise excluded volume in terms of the segment orientations only depends on four angles, the cosines of which are the dot products of the orientations of each pair of segments. Our method not only yields richer information, as we have the full boomerang orientation distribution function, but it also has the advantage of being able to treat flexible boomerangs with a bent preferred configuration.\n\n However, a drawback of the segmentwise approximation used here is that it neglects the polarity of the boomerangs, which becomes worse for very bent configurations. We saw that in the case of very flexible particles at high densities, this led to spurious results where the boomerangs tended to prefer ``closed up'' configurations with small interarm angles. The recently developed strategy to use Monte Carlo calculations to calculate the excluded volume kernel $E(\\Omega,\\Omega')$ more precisely~\\cite{belli2014,dussi2015} could be used to go beyond the segmentwise approximation, and may reveal polar or chiral phases in the phase diagrams~\\cite{greco2015,lubensky2002,bisi2008}. Direct computer simulations of boomerang systems are of course also a continued source of information and insight. For many years to come we will be able to build on the foundations of liquid-crystal simulations and theory~\\cite{frenkel1984,frenkel1985,mulder1985,eppenga1984,frenkel1987,bolhuis1997} laid by Daan Frenkel.\n\n \n\\section*{Acknowledgments}\n\nIt is a great pleasure to congratulate Daan Frenkel on the occasion of his 70th birthday. His deep understanding and broad knowledge of science combined with his wit and enormous recollection of (non-)trivia on literally any topic have impressed RvR from his undergraduate days onward, and made every interaction with Daan a privilege and a pleasure. RvR is grateful for many years of Daan's guidance, support, and inspiration. We wish Daan good health and spirit for many years to come.\n\n\nWe thank Massimiliano Chiappini, Marjolein Dijkstra, and Simone Dussi for useful discussions. This work is part of the D-ITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). 
We also acknowledge financial support from an NWO-VICI grant.\n\n\\bibliographystyle{tfo}\n\n\\makeatletter\n\\renewcommand\\section{\\@startsection\n\t{section}\n{1}\n{0pt}\n{-3.5ex plus -1ex minus -.2ex}\n{2.3ex plus.2ex}\n{\\centering\\normalfont\\Large\\scshape}}\n\n\n\\renewcommand\\subsection{\\@startsection\n\t{subsection}\n{2}\n{0pt}\n{-3ex plus -1ex minus -.2ex}\n{1ex plus.2ex}\n{\\normalfont\\large\\bfseries}}\n\n\\renewcommand\\subsubsection{\\@startsection\n\t{subsubsection}\n{3}\n{0pt}\n{-1.5ex plus -1ex minus -.2ex}\n{0.8ex plus .2ex}\n{\\normalfont\\bfseries}}\n\n\\renewcommand\\paragraph{\\@startsection\n\t{paragraph}\n{4}\n{0em}\n{-1.2ex plus -0.4ex minus -.2ex}\n{0ex}\n{\\normalfont\\bfseries }}\n\n\n\\makeatother\n\n\\usepackage{latexsym,amsfonts,amsmath,amssymb,amsthm,amscd}\n\\usepackage{thmtools}\n\\usepackage{thm-restate}\n\n\\usepackage{bbm}\n\\usepackage{yhmath}\n\n\\usepackage[capitalise]{cleveref}\n\n\\usepackage{subcaption}\n\n\\usepackage{enumerate}\n\n\n\n\n\\renewcommand{\\H}{\\mathbbm{H}}\n\n\\providecommand{\\floor}[1]{\\lfloor #1 \\rfloor}\n\\providecommand{\\ceil}[1]{\\lceil #1 \\rceil}\n\\providecommand{\\fract}[1]{\\left\\{ #1 \\right\\}}\n\n\n\n\\newcommand{\\sa}[2]{\\mathbf{SA}_{#1}\\left(#2\\right)}\n\\newcommand{\\aprox}[2]{\\mathbf{Aprox}_{#1}\\left(#2\\right)}\n\\newcommand{\\ft}[2]{\\mathbf{FT}_{#1}\\left(#2\\right)}\n\n\n\\theoremstyle{plain}\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{proposition}[theorem]{Proposition}\n\\newtheorem{fact}[theorem]{Fact}\n\\newtheorem{corollary}[theorem]{Corollary}\n\\newtheorem{lemma}[theorem]{Lemma}\n\n\\newtheorem{con}{Conjecture}\n\n\\theoremstyle{definition}\n\\newtheorem*{definition}{Definition}\n\\newtheorem*{notation}{Notation}\n\n\\theoremstyle{remark}\n\\newtheorem*{remark}{Remark}\n\\newtheorem{example}{Example}[section]\n\\newtheorem{ask}{Open problem}[section]\n\n\\newcounter{claimcount}[theorem]\n\\newcommand{\\THMfont}[1]{{\\sl #1}}\n\\newcommand{\\Claim}[1]{\\refstepcounter{claimcount} \\vspace{0.3em} \n\\noindent {\\sc Claim \\theclaimcount: \\ }\\THMfont{ #1}}\n\\newcommand{\\bprf}[1][Proof:]{\\begin{list}{} {\\setlength{\\leftmargin}{0.5em}\n\\setlength{\\rightmargin}{0em} \\setlength{\\listparindent}{1em}} \\item {\\em\n\\hspace{-1em} #1 }}\n\\newcommand{\\eclaimprf}{ \\hfill $\\Diamond$~{\\scriptsize {\\tt\nClaim~\\theclaimcount}}\\end{list}}\n\n\\usepackage{marvosym,fancybox}\n\\newcommand{\\todo}[1]{\n\\shadowbox{\\usefont{T1}{ugq}{m}{n} To do:} {\\usefont{T1}{cmtt}{m}{n} #1}}\n\n\\title{Parametrization by 
Horizontal Constraints in the Study of Algorithmic Properties of $\\mathbbm{Z}^2$-Subshift of Finite Type}\n\\date{}\n\n\\author{Solène J. Esnay, Mathieu Sablik}\n\n\n\\begin{document}\n\\maketitle\n\n\n\\begin{abstract}\nThe non-emptiness, called the Domino Problem, and the characterization of the possible entropies of $\\mathbbm{Z}^2$-subshifts of finite type are standard problems of symbolic dynamics. In this article we study these questions with horizontal constraints fixed beforehand as a parameter. We determine for which horizontal constraints the Domino Problem is undecidable and when all right-recursively enumerable numbers can be obtained as entropy, with two approaches: either the additional local rules added to the horizontal constraints can be of any shape, or they can only be vertical rules.\n\\end{abstract}\n\n\n\\section*{Introduction}\n\n\n\nThe Domino Problem is a classical decision problem introduced by H. Wang~\\cite{wang}: given a finite set of tiles that are squares with colored edges, called Wang tiles, we ask if it is possible to tile the plane with translated copies of these tiles so that contiguous edges have the same color. This question is also central in symbolic dynamics. A $\\mathbbm{Z}^d$-subshift of finite type (SFT for short) is a set of colorings of $\\mathbbm{Z}^d$ by a finite alphabet, called configurations, that avoid a finite set of forbidden patterns. The set of tilings obtained when we tile the plane with a Wang tile set is an example of a two-dimensional SFT. In the setting of SFTs, the Domino Problem becomes: given a finite set of forbidden patterns $\\mathcal{F}$, is the associated subshift of finite type, denoted $X_{\\mathcal{F}}$, empty?\n\nOn SFTs over $\\mathbbm{Z}$, the Domino Problem is known to be decidable. On those over $\\mathbbm{Z}^2$, Wang conjectured that the Domino Problem was decidable too, and produced a decision algorithm relying on the hypothetical fact that all subshifts of finite type contained some periodic configuration. However, his claim was disproved by Berger~\\cite{dpberger}, who proved that the Domino Problem over any $\\mathbbm{Z}^d, d\\geq 2$ is algorithmically undecidable. The key to the proof is the existence of a $\\mathbbm{Z}^d$-subshift of finite type containing only aperiodic configurations, on which computations are encoded. In the decades that followed, many alternative proofs of this fact were provided~\\cite{dprobinson,dpmozes,dpkari}.\n\nThe exact conditions to cross this frontier between decidability and undecidability have been intensively studied from different points of view during the last decade. To explore the difference in behavior between $\\mathbbm{Z}$ and $\\mathbbm{Z}^2$, the Domino Problem has been extended to discrete groups~\\cite{CohenGoodmanS2015,Jeandel-2015,twoends,Aubrun-Barbieri-Jeandel-2019,Aubrun-Barbieri-Moutot-2019} and to fractal structures~\\cite{Barbieri-Sablik-2016-fractal} in order to determine which types of structures can implement computation. The frontier has also been studied by restricting the complexity (the number of patterns of a given size)~\\cite{karimoutot} or by bounding the difference between the numbers of colors and tiles~\\cite{undecidabilityconstraints}. 
Additional dynamical constraints were also considered, such as the block gluing property~\\cite[Lemma 3.1]{pavlovschraudner} or minimality~\\cite{Gangloff-Sablik-2018-SimMinimal}.\n\nAnother way to highlight the computational power of multidimensional SFTs is to consider the algorithmic complexity of the entropy. A famous result by M. Hochman and T. Meyerovitch~\\cite{HM} states that the possible values of the entropy for SFTs of dimension $d\\geq 2$ are exactly the non-negative right-recursively enumerable ($\\Pi_1$-computable) numbers; whereas in dimension one they can only be the logarithms of Perron numbers. It is natural to try to determine, using different parameters, where this change in computational power happens: this can be achieved by considering, for instance, SFTs indexed on discrete groups~\\cite{Barbieri-2021}. Once again, dynamical constraints are another relevant parameter: under strong irreducibility constraints, such as being block gluing, the entropy becomes computable~\\cite{pavlovschraudner}. It is possible to extend the notion of block gluingness by adding a gap function $f$ that yields the distance $f(n)$ which allows for the concatenation of two rectangular blocks of size $n$ of the language: depending on the asymptotic behavior of $f(n)$, the set of entropies can be either any $\\Pi_1$-computable number or only some computable numbers~\\cite{Gangloff-Hellouin-2019}. The exact frontier for this parametrization is only known for subshifts with decidable language~\\cite{Gangloff-Sablik-2017-BlockGluing}. \n\nThis decrease in computational complexity, whether for the entropy or for the Domino Problem, can be interpreted as a reduction of the computational power of the model as a whole under the added restriction.\n\n\nIn this article we study the algorithmic complexity of these two properties, the Domino Problem and the entropy, parametrized by local constraints. Formally, given a subshift of finite type defined by a set of forbidden patterns $\\mathcal{H}$, we want to characterize for which of these sets we have the following properties:\n\\begin{itemize}\n\\item the set $DP_h(\\mathcal{H})=\\{<\\mathcal{F}>:X_{\\mathcal{H}\\cup\\mathcal{F}}\\ne\\emptyset\\}$ is undecidable;\n\\item $\\left\\{h(X_{\\mathcal{H}\\cup\\mathcal{F}}):\\mathcal{F} \\textrm{ set of forbidden patterns}\\right\\}$ is the set of $\\Pi_1$-computable numbers in $[0,h(X_{\\mathcal{H}})]$.\n\\end{itemize}\n\nOf course, any set of forbidden patterns which defines a SFT conjugate to $X_\\mathcal{H}$ satisfies the same properties as $\\mathcal{H}$; in other words these properties are invariant under conjugacy. Determining the possible values of $h(X_{\\mathcal{H}\\cup\\mathcal{F}})$ for any set of forbidden patterns $\\mathcal{F}$ comes down to knowing the possible entropies of all subsystems of $X_\\mathcal{H}$ that are of finite type. A result by A. Desai~\\cite{desai} proves that this set is dense in $[0,h(X_{\\mathcal{H}})]$; we want to know when all the $\\Pi_1$-computable numbers of that interval are obtained. \n\nThis article focuses on when the parametrization is given by a set of constraints $\\mathcal{H}$ that are specifically horizontal constraints, associated to a one-dimensional SFT $H$. This means that we require the horizontal projective subaction of the two-dimensional SFTs considered to be included in $H$. 
For this case we have a full characterization:\n\\begin{itemize}\n\\item the set $DP_h(\\mathcal{H})$ (also denoted $DP(H)$) is decidable if and only if $H$ contains only periodic points (Proposition~\\ref{prop:DecPeriodicPoint});\n\\item $\\left\\{h(X_{\\mathcal{H}\\cup\\mathcal{F}}):\\mathcal{F} \\textrm{ set of forbidden patterns}\\right\\}$ is the set of $\\Pi_1$-computable numbers in $[0,h(H)]$ (Theorem~\\ref{th:RealizationEntropy}).\n\\end{itemize}\n\nA consequence of the second point is that given an alphabet $\\mathcal{A}$, the set of possible entropies of two-dimensional SFTs on this alphabet is exactly the $\\Pi_1$-computable numbers of $[0,\\log|\\mathcal{A}|]$. The result of M. Hochman and T. Meyerovitch does not answer this question because their construction can use an arbitrarily large number of letters. In particular, given a $\\Pi_1$-computable number $h$, their construction gives a SFT of entropy $h$ on an alphabet whose cardinality is proportional to the number of states of a Turing machine which approaches $h$ from above, but it is known that it is possible to find numbers with arbitrarily high Kolmogorov complexity in any interval, so their construction needs a huge alphabet to obtain an algorithmically complex entropy. \n \nIn the last Sections (Sections~\\ref{sec:simulation},~\\ref{sec:consequences} and~\\ref{EntropyCombined}) we consider again the two aforementioned properties parametrized by horizontal constraints, but we force the additional local rules to be vertical (in the previous sections they could have arbitrary shape). This point of view has various motivations. First, to perform an efficient computer search of the smallest aperiodic Wang tile set (reached with 11 Wang tiles in~\\cite{Jeandel-Rao-2015}), it is natural to eliminate horizontal constraints that necessarily yield periodic configurations. Second, a classical result is that every effective $\\mathbbm{Z}$-subshift can be realized as the projective subaction on $\\mathbbm{Z}$ of a $\\mathbbm{Z}^2$-subshift of finite type~\\cite{hochman,Aubrun-Sablik-2010,Durand-Romashchenko-Shen-2012}; however, in these constructions, the dynamic in the other direction is trivial. We can ask which other vertical dynamics can be compatible with a given horizontal dynamic. In this article we consider an easier question: given two one-dimensional subshifts of finite type $\\mathcal{H}$ and $\\mathcal{V}$, we ask if there exists a two-dimensional configuration where every horizontal line is in $\\mathcal{H}$ and every vertical line is in $\\mathcal{V}$. This interplay helps us to understand a two-dimensional transfer of information using one-dimensional constraints, and ultimately how to transfer information in order to implement computation. This point of view is an extension of a joint conference article with N. Aubrun \\cite{DPH}.\n\nConsidering only vertical patterns is more complicated than considering arbitrary patterns, because we need to understand how to transfer information with little interplay between the two directions. In that case, in the present article, we restrict ourselves to horizontal constraints given by a one-dimensional nearest-neighbor SFT $H$. If this SFT satisfies a certain set of conditions, it is possible to simulate any two-dimensional SFT, as a root of the one obtained by adding carefully chosen vertical constraints to the horizontal constraints of $H$ (Theorem~\\ref{th:root}). 
With this, we obtain a characterization of the nearest-neighbor one-dimensional SFTs which have an undecidable Domino Problem with this parametrization (Theorem~\\ref{th:DP}). For the entropy, we obtain a partial characterization; yet surprisingly we find horizontal subshifts $H$ that can only have a decidable Domino Problem when vertical constraints are added, but that realize two-dimensional SFTs with $\\Pi_1$-computable numbers as entropies.\n\n\n\n\\section{Definitions}\n\nAny dimension of patterns in $\\mathbbm{Z}^2$ that follows will be written in the format ``width $\\times$ height''.\n\n\\subsection{Symbolic Dynamics}\n\nFor a given finite set $\\mathcal{A}$ called the alphabet, $\\mathcal{A}^{\\mathbbm{Z}^d}$ endowed with the product topology is called the \\emph{$d$-dimensional full shift} over $\\mathcal{A}$, and is a compact space. Any $x \\in \\mathcal{A}^{\\mathbbm{Z}^d}$, called a \\emph{configuration}, can be seen as a function from $\\mathbbm{Z}^d$ to $\\mathcal{A}$ and we write $x_{\\vec{k}} := x(\\vec{k})$.\nFor any $\\vec{v} \\in \\mathbbm{Z}^d$ define the \\emph{shift map} $\\sigma^{\\vec{v}}: \\mathcal{A}^{\\mathbbm{Z}^d} \\rightarrow \\mathcal{A}^{\\mathbbm{Z}^d}$ such that $\\sigma^{\\vec{v}}(x)_{\\vec{k}} = x_{\\vec{k}-\\vec{v}}$. A \\emph{pattern} $p$ is a finite configuration $p \\in \\mathcal{A}^{P_p}$ where $P_p \\subset \\mathbbm{Z}^d$ is finite. We say that a pattern $p \\in \\mathcal{A}^{P_p}$ \\emph{appears} in a configuration $x\\in\\mathcal{A}^{\\mathbbm{Z}^d}$ -- or that $x$ \\emph{contains} $p$ -- if there exists $\\vec{v} \\in \\mathbbm{Z}^d$ such that for every $\\vec{\\ell} \\in P_p$, $\\sigma^{\\vec{v}}(x)_{\\vec{\\ell}} = p_{\\vec{\\ell}}$. We denote it $p \\sqsubset x$.\n\nA \\emph{$d$-dimensional subshift} associated to a set of patterns $\\mathcal{F}$, called the set of \\emph{forbidden patterns}, is defined by\n\\[\nX_{\\mathcal{F}} = \\{ x \\in \\mathcal{A}^{\\mathbbm{Z}^d} \\mid \\forall p \\in \\mathcal{F}, p \\not\\sqsubset x \\}\n\\]\nthat is, $X_{\\mathcal{F}}$ is the set of all configurations that do not contain any pattern from $\\mathcal{F}$.\nNote that there can be several sets of forbidden patterns defining the same subshift $X$. A subshift can equivalently be defined as a subset of $\\mathcal{A}^{\\mathbbm{Z}^d}$ that is topologically closed and invariant under the shift maps.\nIf $X=X_\\mathcal{F}$ with $\\mathcal{F}$ finite, then $X$ is called a \\emph{Subshift of Finite Type}, SFT for short.\nFor a $d$-dimensional subshift $X$, the set of all patterns of size $n_1 \\times n_2 \\times \\dots \\times n_d$ (for a usually-ordered $\\mathbbm{Z}^d$-basis) that appear in configurations of $X$ is denoted by $\\mathcal{L}_X(n_1,n_2,\\dots,n_d)$, and its cardinality by $N_X(n_1,n_2,\\dots,n_d)$. For a one-dimensional SFT $H$, we write $\\mathcal{L}_H = \\cup_{n} \\mathcal{L}_H(n)$, which is a language.\n\nFor one-dimensional subshifts, we talk about \\emph{nearest-neighbor} SFTs if $\\mathcal{F}\\subset\\mathcal{A}^{\\{0,1\\}}$. For two-dimensional subshifts, the most well-known are the \\emph{Wang shifts}, defined by a finite set of square tiles with colored edges, called Wang tiles, that must be placed so that contiguous edges carry matching colors. Formally, these tiles are quadruplets of symbols $(t_e,t_w,t_n,t_s)$. A Wang shift is described by a finite Wang tile set, and local rules $x(i,j)_e = x(i+1,j)_w$ and $x(i,j)_n = x(i,j+1)_s$ for all integers $i,j \\in \\mathbbm{Z}$.
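\nThese matching rules are purely local, so checking a finite patch is mechanical. The following sketch (our illustration, not part of the original work; tiles are modeled as Python tuples) verifies the Wang local rules on a finite array, which is the finitary verification underlying the Domino Problem discussed below.\n\\begin{verbatim}\n# a Wang tile is modeled as a quadruple (e, w, n, s) of edge colors\ndef valid_patch(patch):\n    # patch[i][j] is the tile in row i (increasing northward),\n    # column j (increasing eastward)\n    E, W, N, S = 0, 1, 2, 3\n    rows, cols = len(patch), len(patch[0])\n    for i in range(rows):\n        for j in range(cols):\n            # east color must equal the east neighbor's west color\n            if j + 1 < cols and patch[i][j][E] != patch[i][j + 1][W]:\n                return False\n            # north color must equal the north neighbor's south color\n            if i + 1 < rows and patch[i][j][N] != patch[i + 1][j][S]:\n                return False\n    return True\n\nt = (0, 0, 1, 1)                      # e=0, w=0, n=1, s=1\nprint(valid_patch([[t, t], [t, t]]))  # True: t tiles the plane periodically\n\\end{verbatim}\n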
\nTwo subshifts $X$ and $Y$ are \\emph{conjugate} if there exists a shift-commuting homeomorphism from $X$ to $Y$. Two conjugate subshifts have the same dynamical properties. For example, every two-dimensional SFT is conjugate to a Wang shift, though this changes the underlying local rules and alphabet. Notably, a two-dimensional SFT is empty if and only if the corresponding Wang shift is.\n\n\n\n\\subsection{One-dimensional SFTs as graphs}\n\nAs explained in \\cite{LindMarcus}, the Rauzy graph of order $M$ of a one-dimensional SFT $H \\subseteq {\\mathcal{A}_H}^\\mathbbm{Z}$ denotes the following graph $\\mathcal{G}_M(H) = (\\mathcal{V}, \\vec{E})$:\n\\begin{itemize}\n\t\\item $\\mathcal{V} = \\mathcal{L}_H(M)$;\n\t\\item $(u_1\\dots u_M, u_2 \\dots u_{M+1}) \\in \\vec{E}$ for $u_1,\\dots,u_{M+1} \\in \\mathcal{A}_H$ if and only if $u_1 \\dots u_{M+1} \\in \\mathcal{L}_{H}(M+1)$;\n\\end{itemize}\nwhere all the stranded vertices (with in-degree or out-degree $0$) have been iteratively removed. The edge $(u_1\\dots u_M, u_2 \\dots u_{M+1})$ is additionally labeled $u_{M+1}$. Up to a renaming of the symbols, this graph is unique, no matter the forbidden patterns used to describe $H$.\n\nNote that a Rauzy graph can be made of one or several \\emph{strongly connected components (SCC for short)}. We recall that it consists of a unique strongly connected component if and only if the associated SFT is \\emph{transitive}; that is, for any $u, w \\in \\mathcal{L}_H$ there exists $v \\in \\mathcal{L}_H$ so that $uvw \\in \\mathcal{L}_H$. If the Rauzy graph has several SCCs it can also contain \\emph{transient vertices}, which are vertices with no path from themselves to themselves. We refer to~\\cite{LindMarcus} for more details. \n\n\n\\begin{example}\n\tThe subshifts $X = X_{\\{10,21,11,30,31,32,33\\}} \\subset \\{0,1,2,3\\}^\\mathbbm{Z}$ and $Y = X_{\\{10,21,11\\}} \\subset \\{0,1,2\\}^\\mathbbm{Z}$ are the same SFT. They have the same Rauzy graph made of two SCCs $\\{0\\}$ and $\\{2\\}$, and one transient vertex $1$ (vertex $3$ has been deleted from $\\mathcal{G}(X)$, else it would be of out-degree 0).\n\\end{example}\n\nThis technique that algorithmically associates a graph to an SFT will be of great use, because it means that most proofs can focus on combinatorics over graphs to describe all one-dimensional SFTs.\n\nIn all that follows, if no further precision is given, \\emph{the} Rauzy graph of a SFT is of order adapted to the size of its forbidden patterns. That is, for a one-dimensional SFT with forbidden patterns of size at most~$M+1$, $\\mathcal{G}(H) := \\mathcal{G}_M(H)$.
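\nThis construction is easy to carry out programmatically. The sketch below (our illustration, not from \\cite{LindMarcus}; it assumes forbidden words of length at most $M+1$ over a finite alphabet, given as Python strings) builds the order-$M$ Rauzy graph and prunes stranded vertices iteratively; run on the example above, it returns the two loops on $0$ and $2$ and the transient vertex $1$.\n\\begin{verbatim}\nfrom itertools import product\n\ndef rauzy_graph(alphabet, forbidden, M):\n    # vertices: length-M words avoiding the forbidden factors;\n    # edge u -> v labeled v[-1] when u and v overlap on M-1 letters\n    # and the (M+1)-letter word u + v[-1] is also admissible\n    ok = lambda w: not any(f in w for f in forbidden)\n    V = {''.join(p) for p in product(alphabet, repeat=M)\n         if ok(''.join(p))}\n    while True:\n        E = {(u, v) for u in V for v in V\n             if u[1:] == v[:-1] and ok(u + v[-1])}\n        # prune stranded vertices (in-degree or out-degree 0), iteratively\n        alive = {u for u, _ in E} & {v for _, v in E}\n        if alive == V:\n            return V, E\n        V = alive\n\n# the example from the text: forbidden words 10, 21, 11 over {0,1,2}\nV, E = rauzy_graph('012', ['10', '21', '11'], 1)\nprint(sorted(V), sorted(E))\n\\end{verbatim}\n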
\n\n\n\\subsection{Horizontal constraints}\n\\label{subsec:horizontalconstraints}\n\nAdding a dimension to a one-dimensional SFT $H$, the local rules make it possible to define a two-dimensional SFT where each line is a configuration of $H$ chosen independently. We want to study the consequence of adding extra rules to that subshift: the most natural way of doing so is to add forbidden patterns -- which can be two-dimensional. This is formalized in the next definition.\n\n\\begin{definition}\n\tLet $H\\subset\\mathcal{A}^\\mathbbm{Z}$ be a one-dimensional SFT and $\\mathcal{F}$ be a finite set of forbidden patterns. The two-dimensional SFT\n\t\\[\n\tX_{H,\\mathcal{F}} := \\{ x \\in \\mathcal{A}^{\\mathbbm{Z}^2} \\mid \\forall j \\in \\mathbbm{Z}, (x_{k,j})_{k \\in \\mathbbm{Z}} \\in H \\text{ and } x \\in X_\\mathcal{F} \\}\n\t\\]\n\tis called the subshift $X_{\\mathcal{F}}$ \\emph{with added horizontal constraints from $H$}. \n\\end{definition}\n\nAnother point of view is to ask whether two one-dimensional subshifts can be combined into a two-dimensional subshift where the first one appears horizontally and the second one vertically.\nGiven a one-dimensional SFT, the main focus of this article is to understand when it can be compatible with another one to build a two-dimensional SFT.\n\n\n\\begin{definition}\n\tLet $H,V \\subset \\mathcal{A}^\\mathbbm{Z}$ be two one-dimensional SFTs. The two-dimensional subshift\n\t\\[\n\tX_{H,V} := \\{ x \\in \\mathcal{A}^{\\mathbbm{Z}^2} \\mid \\forall i, j \\in \\mathbbm{Z}, (x_{k,j})_{k \\in \\mathbbm{Z}} \\in H \\text{ and } (x_{i,\\ell})_{\\ell \\in \\mathbbm{Z}} \\in V \\}\n\t\\]\n\t\n\tis called the \\emph{combined subshift} of $H$ and $V$, and uses $H$ as horizontal rules and $V$ as vertical rules.\n\\end{definition}\n\n\\begin{remark}\n The projection of the horizontal configurations that appear in $X_{H,V}$ does not necessarily recover all of $H$; indeed, some of the configurations in $H$ will not necessarily appear because they may not be legally extendable vertically.\n\\end{remark}\n\n\\begin{example}\n \tChoose $\\mathcal{A} = \\{0,1\\}$, $H$ nearest-neighbor and forbidding $00$ and $11$, and $V$ forcing configurations to alternate between one $0$ and two $1$s: the resulting $X_{H,V}$ is empty, although neither $H$ nor $V$ are. In some sense, these $H$ and $V$ are incompatible.\n\\end{example}\n\n\\subsection{Root of a subshift}\n\\label{subsec:root}\n\nGiven an SFT $X$, we want to know if it can simulate in some sense any two-dimensional subshift by adding some local rules. Of course this simulation cannot be a conjugacy since we are limited by the topological entropy of $X$; therefore we introduce the notion of root of a subshift:\n\n\\begin{definition}\nThe subshift $X\\subset \\mathcal{A}^{\\mathbbm{Z}^2}$ is an \\emph{$(m,n)$th root} of the subshift $Y\\subset\\mathcal{B}^{\\mathbbm{Z}^2}$ if there exist a clopen $Z\\subset X$ with $\\sigma^{(m,0)}(Z)=Z$ and $\\sigma^{(0,n)}(Z)=Z$, and a homeomorphism $\\varphi\\colon Z\\to Y$ such that:\n\\begin{itemize}\n\\item $\\varphi(\\sigma^{(k_1m,k_2n)}(x))=\\sigma^{(k_1,k_2)}(\\varphi(x))$ for all $x\\in Z$;\n\\item $X = \\bigsqcup_{0 \\leq i < m, 0 \\leq j < n} \\sigma^{(i,j)}(Z)$.\n\\end{itemize}\n\\end{definition}\n\n\\subsection{Entropy}\n\nConsider a $d$-dimensional subshift $X$. The \\emph{topological entropy} of $X$ is\n\\[h(X)=\\lim_{n\\to\\infty}\\frac{\\log_2(N_X(n,\\dots,n))}{n^d}=\\inf_{n}\\frac{\\log_2(N_X(n,\\dots,n))}{n^d}.\\]\nIt is a conjugacy invariant; we refer to~\\cite{LindMarcus} for details.\n\n\n\n\n\n\\section{Domino problem under horizontal constraints}\n\\label{sec:subsystems}\n\n\n\\subsection{Theorem of simulation under horizontal constraints}\n\n\\begin{proposition}\n\t\\label{rootsubsystem}\n\tLet $H\\subset\\mathcal{A}^\\mathbbm{Z}$ be a one-dimensional SFT which is not made solely of periodic points. 
For any two-dimensional SFT $Y\\subset\\mathcal{B}^{\\mathbbm{Z}^2}$, there exists a set of forbidden patterns $\\mathcal{F}$ such that $X_{H,\\mathcal{F}}$ is an $(m,n)$th root of $Y$ for some $m, n \\in \\mathbbm{N}$.\n\\end{proposition}\n\n\\begin{proof}\n\tLet $Y\\subset\\mathcal{B}^{\\mathbbm{Z}^2}$ be a two-dimensional SFT. Up to renaming the symbols, suppose the alphabet $\\mathcal{B}$ is made of letters $T_1, \\dots, T_n$.\n\t\n\tSuppose $H\\subset\\mathcal{A}^\\mathbbm{Z}$ is not solely made of periodic points. Consider its Rauzy graph, $\\mathcal{G}(H)$. Fix some vertex $s$ of $\\mathcal{G}(H)$ so that there are at least two distinct paths from $s$ to itself -- one such vertex does exist, else $H$ would have only periodic points. Name those paths $\\gamma_1$ and $\\gamma_2$.\n\t\n\tDefine, for any $k$ in $\\{1,\\dots,n\\}$,\n\t\\begin{align*}\n\t\tU_k & := \\ell(\\gamma_2 \\gamma_1 \\phi_k \\gamma_1 \\gamma_2)\n\t\\end{align*}\n\twhere $\\phi_k$ is a succession of $n$ $\\gamma_1$'s, except that the $k$th one is replaced by $\\gamma_2$; and where $\\ell(\\gamma)$ designates the succession of labels of edges in a path $\\gamma$, which is consequently an element of $\\mathcal{A}^*$ (the set of finite words made of elements of $\\mathcal{A}$).\n\t\n\tAll of these words $U_k$ have the same length, call it $N$, and we can juxtapose them as desired by construction of the Rauzy graph. Moreover, juxtaposing two of these words creates two consecutive $\\gamma_2$'s, allowing a clear segmentation of a word written with the $U_k$'s into these basic units.\n\t\n\tNow, consider the two-dimensional extension of $H$ into $X_{H,\\mathcal{F}}$ with $\\mathcal{F}$ forbidding the following patterns:\n\t\\begin{itemize}\n\t\t\\item anything horizontally that is not a succession of $U_k$'s (possibly with different $k$'s);\n\t\t\\item any $\\ell(\\gamma_2 \\gamma_2)$ being above something that is not another $\\ell(\\gamma_2 \\gamma_2)$ (this forces the $U_k$'s to be vertically aligned);\n\t\t\\item any pattern $p$ of size $aN \\times b$ made of $U_k$'s such that the pattern $q$ built by putting $T_l$ at position $(i,j)$, for $i \\in \\{0,\\dots,a-1\\}$ and $j \\in \\{0,\\dots,b-1\\}$, whenever $p_{iN,j}$ belongs to $U_l$, does not appear in $Y$.\n\t\\end{itemize}\n\tThis is a finite number of additional forbidden patterns, since $Y$ is itself a SFT.\n\n\tIt is clear, considering the clopen set $Z$ of the configurations in $X_{H,\\mathcal{F}}$ that have a $U_k$ starting at $(0,0)$ exactly, and the homeomorphism $\\varphi\\colon Z \\rightarrow Y$ that sends a $U_k$ to a $T_k$, that $X_{H,\\mathcal{F}}$ is an $(N,1)$th root of $Y$.\n\\end{proof}\n\n\\subsection{The Domino Problem under horizontal constraints}\\label{section.HDominoProblem}\n\nDefine $DP(\\mathbbm{Z}^d) = \\{\\langle \\mathcal{F} \\rangle \\mid X_\\mathcal{F} \\text{ is a nonempty SFT}\\}$ where $\\langle \\mathcal{F} \\rangle$ is an encoding of the set of forbidden patterns $\\mathcal{F}$ suitable for a Turing Machine. $DP(\\mathbbm{Z}^d)$ is a language called the \\emph{Domino Problem} on $\\mathbbm{Z}^d$. As for any language, we can ask if it is algorithmically decidable, that is, whether some Turing Machine that always halts accepts exactly the words of this language. 
Given $H$ a one-dimensional SFT, we consider the following \emph{Domino Problem under horizontal constraints}:

\[DP_h(H) = \{\langle \mathcal{F} \rangle \mid X_{H,\mathcal{F}} \text{ is a nonempty SFT}\}.\]

Its purpose is to locate the frontier between decidability ($DP(\mathbbm{Z})$) and undecidability ($DP(\mathbbm{Z}^2)$). 

\begin{remark}
	This Domino Problem is defined for a given $H$, and its decidability depends on the $H$ chosen beforehand.
\end{remark}

\begin{proposition}\label{prop:DecPeriodicPoint}
$DP_h(H)$ is decidable if and only if $H$ has only periodic points.
\end{proposition}

\begin{proof}
If $H \subset \mathcal{A}^\mathbbm{Z}$ has a nonperiodic point, then for any two-dimensional SFT $Y$ we can apply \cref{rootsubsystem} and build a two-dimensional SFT $X_{H,\mathcal{F}}$ so that it is a root of $Y$. Using the definition of a root of a SFT, it is clear that the emptiness of $X_{H,\mathcal{F}}$ is equivalent to the emptiness of $Y$. Consequently, we can reduce $DP(\mathbbm{Z}^2)$ to $DP_h(H)$, and $DP_h(H)$ is undecidable.

If $H \subset \mathcal{A}^\mathbbm{Z}$ only has periodic points, let $p > 0$ be the smallest common period of these points. Now, take as input a finite set of forbidden patterns $\mathcal{F}$, all of them rectangles of size $pL \times M$ for some $L,M\in \mathbbm{N}$, up to extending them.

If there is no rectangle of size $pL \times M(|\mathcal{A}|^{pLM}+1)$ respecting the local rules of $X_{H,\mathcal{F}}$, then that SFT is empty.
If there is at least one such rectangle, list all of these possible candidates. Consider a candidate $R$: either it can be horizontally juxtaposed with itself without containing a pattern of $\mathcal{F}$, and we keep it; or it cannot, and we delete it. If all candidates are deleted, then $X_{H,\mathcal{F}}$ is empty, because $H$ forces a $p$-periodic repetition horizontally in any configuration, which happens to be incompatible with all candidate rectangles.

If at least one candidate $R$ remains, then by the pigeonhole principle it contains at least twice the same rectangle $R^\prime$ of size $pL \times M$. To simplify the writing, we assume that the rectangle that repeats is the one with coordinates $[1,pL] \times [1,M]$ inside $R$, where $[1,pL]$ and $[1,M]$ are intervals of integers, and that it can be found again with coordinates $[1,pL] \times [k,k+M-1]$. Otherwise, we simply truncate a part of $R$ so that this becomes true.

Define $P := R|_{[1,pL] \times [1,k+M-1]}$.
Since $\mathcal{F}$ has forbidden patterns of size $pL \times M$, and since $R$ respects our local rules and begins and ends with $R^{\prime}$, $P$ can be vertically juxtaposed with itself (overlapping on $R^\prime$). Moreover, $P$ can be horizontally glued with itself. The result tiles $\mathbbm{Z}^2$ periodically while respecting the constraints of $H$ and of $\mathcal{F}$, since any $pL \times M$ rectangle found in it is already present in the horizontal juxtaposition of $P$, which is valid by $p$-periodicity.
Therefore, $X_{H,\mathcal{F}}$ is nonempty.

With this, we have an algorithm to decide the emptiness of $X_{H,\mathcal{F}}$ for any input $\mathcal{F}$.
\end{proof}
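The filtering step of this algorithm is purely mechanical; a minimal sketch (with a hypothetical encoding of rectangles as tuples of row strings, not taken from the article) of the keep-or-delete test:

\begin{verbatim}
# Minimal sketch (hypothetical encoding): a candidate rectangle R
# is kept iff its horizontal juxtaposition with itself avoids all
# forbidden rectangles.
def avoids(rect, forbidden):
    """rect, and every f in forbidden: tuples of row strings."""
    H, W = len(rect), len(rect[0])
    for f in forbidden:
        fh, fw = len(f), len(f[0])
        for i in range(H - fh + 1):
            for j in range(W - fw + 1):
                if all(rect[i + a][j:j + fw] == f[a]
                       for a in range(fh)):
                    return False
    return True

def keep_candidate(R, forbidden):
    doubled = tuple(row + row for row in R)   # R glued to itself
    return avoids(doubled, forbidden)
\end{verbatim}

The vertical gluing on $R^\prime$ described above can be checked the same way, by testing the rectangle doubled with overlap.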
\section{Characterization of the possible entropies under horizontal constraints}\label{section:EntropyHorizontalConstraint}


The purpose of this section is to characterize the entropies accessible to two-dimensional SFTs, as in~\cite{HM}, but under projective constraints. Formally, given $H$ a one-dimensional SFT, we want to characterize the set
 \[\left\{h(X_{H,\mathcal{F}}):\mathcal{F} \textrm{ set of forbidden patterns}\right\}.\]
 
Clearly it is a subset of $[0,h(H)]$ and by~\cite{desai} it is dense in that interval. The computability obstruction of~\cite{HM} says that it is a subset of the $\Pi_1$-computable numbers (also named \emph{right-recursively enumerable} or r.r.e. numbers). We recall that a real is $\Pi_1$-computable if there exists a decreasing computable sequence of rationals which converges toward that number. Therefore, it is natural to ask if all $\Pi_1$-computable numbers of $[0,h(H)]$ can be obtained. We have the following result, proved in \cref{subsec:mainresult}.

\begin{restatable*}{theorem}{RealizationEntropy}
	\label{th:RealizationEntropy}
	Let $H \subset \mathcal{A}^\mathbbm{Z}$ be a SFT. We have
	\[\left\{h(X_{H,\mathcal{F}}): \mathcal{F}\textrm{ set of forbidden patterns}\right\}=[0,h(H)]\cap\Pi_1\]
	where $\Pi_1$ is the set of right-recursively enumerable reals.
\end{restatable*}


\subsection{Kolmogorov complexity and number of tiles}

Given a $\Pi_1$-computable real $r$, denote by $K(r)$ its \emph{Kolmogorov complexity}. It is the minimal number of states needed for a Turing Machine to enumerate a list of rationals which approaches $r$ from above.
Consider the algorithm from \cite[Alg. 7.3]{HM}, defined for a given r.r.e. real $h$, that takes as input a sequence $(x_N)_{N \in \mathbbm{Z}} \in \{0,1\}^\mathbbm{Z}$, and makes its frequency of $1$'s -- on specific indices -- approach $h$ from above. The associated Turing Machine is built from the one that approaches $h$ from above, so that its number of states is of the form $cK(h)$ with $c>0$ not depending on $h$. The authors in \cite{HM} turn that Turing Machine into a Wang tile set. Since the size of that tile set depends linearly on the number of states of the Turing Machine, we obtain the following:

\begin{theorem}[From \cite{HM}, Alg. 7.3]
	\label{prop:algo}
	There exists $C > 0$ such that for any $h \in \Pi_1$, there exists a two-dimensional SFT $X$ such that $h(X) = h$, describable by a set of at most $CK(h)$ Wang tiles.
\end{theorem}
\subsection{Technical lemmas on entropy}


\begin{lemma}
	\label{lemma:entropy}
	Let $X$ be a two-dimensional subshift. For all $\alpha, \beta \in \mathbbm{N}$, one has:
	\[\lim\limits_{n \to +\infty} \dfrac{\log_2(N_X(\alpha n, \beta n))}{\alpha \beta n^2} = h(X).\]
\end{lemma}

\begin{proof}
	We have the inequality
	\[N_X(\alpha n, \beta n) \leq (N_X(n, n))^{\alpha \beta}\]
	due to the fact that globally admissible patterns -- patterns that belong to a valid configuration -- of size $\alpha n \times \beta n$ are themselves made of $\alpha \beta$ globally admissible patterns of size $n \times n$.
	
	We also have:
	\[
	N_X(\alpha \beta n, \alpha \beta n) \leq N_X(\alpha n, \beta n)^{\alpha \beta}
	\]
	by the same reasoning.
	
	Finally, we have
	\[\lim_n \dfrac{\log_2(N_X(\alpha \beta n, \alpha \beta n))}{\alpha^2 \beta^2 n^2} = h(X)\]
	because $(\frac{\log_2(N_X(\alpha \beta n, \alpha \beta n))}{\alpha^2 \beta^2n^2})$ is a subsequence of the converging sequence $(\frac{\log_2(N_X(n, n))}{n^2})$.
	
	Then applying $\lim_n \dfrac{\log_2(\cdot)}{\alpha \beta n^2}$ to
	\begin{align*}
	(N_X(\alpha \beta n, \alpha \beta n))^{\frac{1}{\alpha \beta}} \leq N_X(\alpha n, \beta n) \leq (N_X(n, n))^{\alpha \beta}
	\end{align*}
	we obtain the desired limit.
\end{proof}



\begin{proposition}
	\label{rootentropy}
Let $X$ be a two-dimensional subshift which is a $(m,n)$th root of the subshift $Y$. Then
\[h(Y)=mn\,h(X).\]
\end{proposition}
\begin{proof}
	Call $Z \subset X$ the clopen set homeomorphic to $Y$. In what follows, we write $\mathcal{L}^{(k_1,k_2)}_{C}(a,b)$ for the set of $[k_1,k_1+a-1] \times [k_2,k_2+b-1]$ patterns that appear in configurations of a clopen set $C$, and $N^{(k_1,k_2)}_{C}(a,b)$ for its cardinality. Note that $\mathcal{L}^{(0,0)}_{X}(a,b) = \mathcal{L}_{X}(a,b)$ since $X$ is shift-invariant; and similarly for $Y$.
	
Consider $\phi:Z\to Y$ the homeomorphism such that $\phi(\sigma^{(k_1m,k_2n)}(x))=\sigma^{(k_1,k_2)}\phi(x)$ for all $x\in Z$ and $(k_1,k_2)\in\mathbbm{Z}^2$. By the same proof as Hedlund's theorem, there exist $r \in \mathbbm{N}_0$ and a local map $\overline{\phi}$ applied on patterns of support $[-r,r]^2$ such that for any $x \in Z$, $\phi(x)_{(k_1,k_2)}=\overline{\phi}(x_{(k_1 m,k_2 n)+[-r,r]^2})$ -- that is, $\phi$ can be considered as the application of a local map uniformly on patterns of configurations of $Z$. Thus $|\mathcal{L}^{(0,0)}_Y(k, k)|\leq|\mathcal{L}^{(-r,-r)}_Z(mk+2r, nk+2r)|$. Furthermore, since a pattern of support $[-r,mk+r-1]\times[-r,nk+r-1]$ can be decomposed into a pattern of support $[0,mk-1]\times[0,nk-1]$ and its border, we deduce that $|\mathcal{L}^{(0,0)}_Y(k, k)|\leq|\mathcal{A}_Z|^{2rk(m+n)} |\mathcal{L}^{(0,0)}_Z(mk, nk)|$.

In the same way, there exist $r' \in \mathbbm{N}_0$ and a local map $\overline{\phi^{-1}}$ applied on patterns of support $[-r',r']^2$ such that $\phi^{-1}(y)_{(k_1m,k_2n)+[0,m-1]\times[0,n-1]}=\overline{\phi^{-1}}(y_{(k_1,k_2 )+[-r',r']^2})$. Thus $|\mathcal{L}^{(0,0)}_Z(mk,n k)|\leq|\mathcal{L}^{(-r',-r')}_Y(k+2r', k+2r')|$.

We have the same bounds for any $\sigma^{(i,j)}(Z)$ with $i,j\in\mathbbm{Z}$. 
Moreover, since $X = \bigsqcup_{0 \leq i < m, 0 \leq j < n} \sigma^{(i,j)}(Z)$, one has $N_X(mk, nk) = \sum_{0 \leq i < m, 0 \leq j < n} N^{(0,0)}_{\sigma^{(i,j)}(Z)}(mk, nk)$. We deduce the following inequalities:
\[
\frac{mn}{|\mathcal{A}_Z|^{2rk(m+n)}}N_Y(k,k)\leq N_X(mk, nk)\leq mn N_Y(k+2r', k+2r').
\]

	
	We apply $\frac{\log_2(\cdot)}{k^2}$ to these inequalities and, using \cref{lemma:entropy}, we get:
	\[
	h(Y) = mn h(X).
	\]
	
\end{proof}


\begin{lemma}
	\label{lemlem}
	Let $H$ be a transitive one-dimensional SFT of positive entropy and $h < h(H)$.
	
	There exist words $u, w_1, w_2 \in \mathcal{L}_H$ and $\alpha \in \mathbbm{N}$ such that:
	\begin{itemize}
		\item $\alpha > M$, the biggest size of the forbidden patterns of $H$;
		\item $|u| = |w_1|=|w_2|=\alpha$;
		\item $u, w_1$ and $w_2$ are cycles from a vertex of the Rauzy graph of $H$ to the same vertex;
		\item $uww_1 \in \mathcal{L}_H$ for all $w \in \{w_1,w_2\}^*$;
		\item $u$ appears as a subword of any word in $u\{w_1,w_2\}^*w_1$ at the very beginning only;
		\item $h(H_u) > h$ where $H_u$ is the SFT included in $H$ where the word $u$ does not appear.
	\end{itemize}

	Moreover, the Rauzy graph of $H_u$ is still transitive.
\end{lemma}

\begin{proof}According to \cite[Th.3]{lind}, for any big enough integer $n$, any word $u$ of length $n$ is such that, if we denote by $H_u$ the SFT where $u$ is added to the forbidden patterns of $H$, we get
	\[
	h(H) \geq h(H_u)>h
	\]
	because $h(H_u)$ is below yet sufficiently close to $h(H)$ for $n$ big enough.
	As a consequence, the last property is verified as soon as we choose $\alpha$ big enough for the three words we want.
	
	Now, consider the Rauzy graph of $H$, call it $\mathcal{G}(H)$. Fix some vertex $s$ of $\mathcal{G}(H)$. Since $H$ has positive entropy and is transitive, there are at least two paths from $s$ to itself that do not pass through $s$ at any other moment. Name those paths $\gamma_1$ and $\gamma_2$.
	
	Define
	\begin{align*}
	u & := \ell(\gamma_2 \gamma_1 \gamma_2 \gamma_1 \gamma_1 \gamma_1 \gamma_2 {\gamma_1}^k \gamma_2)\\
	w_1 & := \ell(\gamma_2 \gamma_1 \gamma_1 \gamma_2 \gamma_1 \gamma_1 \gamma_2 {\gamma_1}^k \gamma_2)\\
	w_2 & := \ell(\gamma_2 \gamma_1 \gamma_1 \gamma_1 \gamma_2 \gamma_1 \gamma_2 {\gamma_1}^k \gamma_2)
	\end{align*}
	where $\ell(\gamma)$ designates the succession of labels of edges in $\gamma$, consequently being an element of $\mathcal{A}^*$. Choose $k > 0$ big enough so that $u$ satisfies what we desire on $h(H_u)$. These three words have the same length and we can juxtapose them as desired by construction of the Rauzy graph of $H$. Moreover, juxtaposing any of those words creates two consecutive $\gamma_2$'s, allowing a clear segmentation of a word written with $u$, $w_1$ and $w_2$ into these basic units. Notably, $u$ appears as a subword of any word in $u\{w_1,w_2\}^*w_1$ at the very beginning only.
	
	Moreover, $H_u$ is still transitive.
Indeed, let $v_1$ and $v_2$ be two elements of $\mathcal{L}_{H_u}$; we can suppose they are longer than $u$, up to an extension of $v_1$ to its left and $v_2$ to its right.
	
	$v_1$ can be seen as a path in $\mathcal{G}(H)$ and extended to $v_1w$ so that the corresponding path in $\mathcal{G}(H)$ reaches $s$, by transitivity of that graph, with the shortest possible $w$.
	It is possible that $v_1w \notin \mathcal{L}_{H_u}$, meaning it contains $u$ as a subword. But then $u$ cannot be a subword of $w$, since the path in $\mathcal{G}(H)$ corresponding to $u$ is a succession of loops from $s$ to $s$; and $u$ cannot be a subword of $v_1$, since $v_1 \in \mathcal{L}_{H_u}$. Therefore $u$ begins in $v_1$ and ends in $w$. Since $w$ corresponds by definition to the shortest path to $s$ possible in $\mathcal{G}(H)$, it means most of $u$ is in $v_1$ and only a fraction of the last $\ell(\gamma_2)$ is in $w$. Then change $w$ for a new $w'$ as short as possible that breaks the completion of $u$: this is doable because $v_1 \in \mathcal{L}_{H_u}$, so it is extendable to the right. Then add the shortest possible $w''$ that goes to $s$ in $\mathcal{G}(H)$: $u$ is too large to appear elsewhere in $v_1w'w''$. Rename $w'w''$ as $w$.
	
	In all cases, we found $v_1w \in \mathcal{L}_{H_u}$ that reaches $s$ when considered in $\mathcal{G}(H)$. Do the same so that some $xv_2 \in \mathcal{L}_{H_u}$ starts from $s$ when considered in $\mathcal{G}(H)$.
	
	Now, $v_1wxv_2 \in \mathcal{L}_{H}$ since $v_1$ and $v_2$ are large enough so that the $wx$ part of this word has no effect on its extension to a biinfinite word in $H_u$.
	Furthermore, $v_1wxv_2$ does not contain $u$, since the latter must start from $s$ and end at $s$, and this word has been built so that no $u$ follows when $s$ appears. Since $v_1$ and $v_2$ are also large enough to be extended respectively to the left and to the right without a $u$, we can conclude that $v_1wxv_2 \in \mathcal{L}_{H_u}$. Hence $\mathcal{G}(H_u)$ is transitive.
\end{proof}

\begin{lemma}
	\label{lembezout}
	Let $c_1, \dots, c_l$ be positive integers.
	Let $m = GCD(c_1,\dots,c_l)$.
	
	There is a rank $N \in \mathbbm{N}$ such that for all $n \geq N$, $nm$ can be expressed as some $\sum_{i=1}^l k_ic_i$ with $k_i \in \mathbbm{N}$.
\end{lemma}

\begin{proof}
	Using Bézout's identity, we can write $m=\sum_{i=1}^l z_ic_i$ for integers $z_i \in \mathbbm{Z}$.
	
	Let $c = \sum_{i=1}^l c_i$; note that $m$ divides $c$.
	Any multiple $nm$ of $m$ can be written as $nm = qc + rm$ where $q, r$ are integers with $0 \leq r < \frac{c}{m}$. Notably, $nm = \sum_{i=1}^l (q + rz_i)c_i$ using the previous equalities with $m$ and $c$.
	
	If $n \geq 2 \frac{c^2}{m^2} \max_i |z_i|$, then $nm \geq \frac{c^2}{m} \max_i |z_i| + \frac{c^2}{m} \max_i |z_i|$. 
Since $qc + \frac{c^2}{m} \max_i |z_i| > qc + rc \max_i |z_i| \geq qc + rm = nm$, combining with the previous inequality we deduce that $q > \frac{c}{m} \max_i |z_i| > r \max_i |z_i|$.
	
	Therefore, if $n \geq 2 \frac{c^2}{m^2} \max_i |z_i|$, then for all $i$, $q + rz_i > 0$.
	As a consequence, $nm = \sum_{i=1}^l k_ic_i$ with $k_i = q + rz_i > 0$.
\end{proof}

\begin{lemma}
	\label{magiclemma}
	Let $H$ be a transitive one-dimensional SFT of positive entropy.
	Let $u$ and $w_1$ be the words defined as in \cref{lemlem}, with $\alpha = |u| = |w_1|$.
	
	Let 
	\[
	\widetilde{N_H}(n) = Card(\{ v : |v| = n, u \not\sqsubset v \text{ and } w_1vu \in \mathcal{L}_H \}).
	\]
	Then
	\[
	\frac{\log_2(\widetilde{N_H}(n\alpha))}{n\alpha} \xrightarrow[n \to \infty]{} h(H_u).
	\]
\end{lemma}

\begin{proof}
	Let $s$ be the vertex of the Rauzy graph of $H$ that begins and ends the paths corresponding to $u$ and $w_1$. We will similarly name $s$ its label, a word in ${\mathcal{A}_H}^*$.
	
	Let $m$ be the GCD of the lengths of all cycles $c_i$ from $s$ to $s$ that do not pass through $s$ at any other moment, in the Rauzy graph of $H$. From Lemma~\ref{lembezout}, we deduce that there is some $N \in \mathbbm{N}$ such that for any $n \geq N$, there is a path of length $nm$ from $s$ to $s$ in the Rauzy graph of $H$, as a concatenation of the cycles $c_i$, each repeated a positive number $k_i$ of times. Since any order of concatenation works, one can notably build a cycle from $s$ to $s$ of length $nm$ that does not contain $u$ (by concatenating all occurrences of the path $\gamma_2$, as defined in the proof of \cref{lemlem}, in a row).
	
	Let $d$ be the diameter of the Rauzy graph of $H_u$.
	Let $n\geq N+\frac{2d+|u|}{\alpha}$. Let $v\in \mathcal{L}_{H_u}$ be such that $|v|=n\alpha-2d-N\alpha$ (notably $|v| \geq |u|$).
	Since the graph $\mathcal{G}(H_u)$ is transitive, there are two words $v'$ and $v''$ of size smaller than $d$ so that $v'vv'' \in \mathcal{L}_{H_u}$, with the path corresponding to $v'$ beginning with some vertex $s*$ when seen as a path in $\mathcal{G}(H_u)$, and the path corresponding to $v''$ ending with some vertex $*s$ -- where $s*$ is the label of a vertex that has $s$ as a prefix, and $*s$ the label of a vertex that has $s$ as a suffix. This word can also be seen as a path in the Rauzy graph of $H$; there, that path is a cycle from $s$ to $s$; hence $|v'vv''|=km$ for some $k \in \mathbbm{N}$ since $m$ divides the length of all cycles from $s$ to $s$ in $\mathcal{G}(H)$.
	
	A notable consequence of this, using the length of $v$ and the fact that $m$ divides $\alpha$ (since $w_1$ corresponds to a cycle from $s$ to $s$ in $H$), is that $2d-|v'|-|v''| > 0$ is a multiple of $m$.
	
	Let $v'''$ be a word of length $2d+N\alpha-|v'|-|v''|$ that is a cycle from $s$ to $s$ in $H$, so that additionally $v'vv''v''' \in \mathcal{L}_{H_u}$. 
This is doable by a suitable concatenation of cycles from $s$ to $s$ to build $v'''$, because $2d+N\alpha-|v'|-|v''|$ is a multiple of $m$ that is at least $N\alpha$, hence at least $Nm$, so we can apply \cref{lembezout}.
	
	We have $|v'vv''v'''|=n\alpha$, $v'vv''v''' \in \mathcal{L}_{H_u}$ and $w_1v'vv''v'''u \in \mathcal{L}_H$.
	
	We deduce that	
	\[
	\widetilde{N_H}(n\alpha)\geq N_{H_u}(n\alpha-2d-N\alpha).
	\]
	Since we also have $N_{H_u}(n\alpha) \geq \widetilde{N_H}(n\alpha)$, taking the logarithm and dividing by $n\alpha$, we obtain
	\[
	\frac{\log_2(\widetilde{N_H}(n\alpha))}{n\alpha}\underset{n\to\infty}{\longrightarrow}h(H_{u}).
	\]
\end{proof}

\subsection{Main result}
\label{subsec:mainresult}

\RealizationEntropy

\begin{proof}
	Let $h \in \Pi_1$ with $h \leq h(H)$. We assume that $h(H) > h > 0$; if not, a trivial two-dimensional SFT satisfies the Theorem.
	It is possible to assume that $H$ is transitive: indeed, in one-dimensional SFTs the entropy is carried by a single connected component of the Rauzy graph (see \cite{LindMarcus}). Furthermore, since we assumed $h(H) > 0$, that connected component is not a plain cycle, since plain cycles have entropy $0$. Hence we can additionally assume that the Rauzy graph of $H$ is not a plain cycle.
	
	Since $H$ is a transitive SFT with positive entropy, thanks to \cref{lemlem} there exists a word $u \in \mathcal{L}_H$ of size $\alpha = |u|$ larger than the order of $H$, and two different words $w_1, w_2 \in \mathcal{L}_H$ with $|w_1|=|w_2|=\alpha$ such that $uww_1 \in \mathcal{L}_H$ for all $w \in \{w_1,w_2\}^*$. Moreover $h(H_u) > h$ where $H_u$ is the subshift included in $H$ where the word $u$ does not appear.
	
	Denote
	\[
	\widetilde{N_H}(n) = Card(\{ v : |v| = n, u \not\sqsubset v \text{ and } w_1vu \in \mathcal{L}_H \}).
	\]
	According to \cref{magiclemma}, one has 
	\[
	\frac{\log_2(\widetilde{N_H}(n\alpha))}{n\alpha} \xrightarrow[n \to \infty]{} h(H_u).
	\]
	
	Consider an integer $t \geq 2$ such that $h(H_u) > (1 + \frac{1}{t})h$. Moreover, we ask for $t$ to be big enough so that for any $n \geq t$,
	\[
	\frac{\log_2(\widetilde{N_H}(n\alpha))}{n\alpha} > (1+\frac{1}{t})h
	\]
	which is possible since the left term converges to $h(H_u)$.
	
	Now consider $K$ the sum of the Kolmogorov complexities of the following elements: $h$, $t$, $\alpha$, the algorithms that compute the addition, the multiplication, the logarithm, the algorithm that on input $n$ returns $\widetilde{N_H}(n)$, and the algorithm that on an input triple of integers $(a,b,c)$ returns the first integer $r$ such that $\frac{a}{b+r-1} > c \geq \frac{a}{b+r}$.
	
	Let $R = \ceil{\log_2(3CK)}$ and consider $q = Rt$. One has
	\[
	\frac{\log_2(\widetilde{N_H}(q\alpha))}{(R+q)\alpha} = \frac{q}{q+R} \frac{\log_2(\widetilde{N_H}(q\alpha))}{q\alpha} > \frac{t}{t+1} (1+\frac{1}{t})h = h.
	\]
	
	Therefore, there exists $r > R$ such that
	\[
	\frac{\log_2(\widetilde{N_H}(q\alpha))}{(q+r-1)\alpha} > h \geq \frac{\log_2(\widetilde{N_H}(q\alpha))}{(q+r)\alpha}
	\]
	
	because this quantity decreases to $0$ as $r$ increases.
	
	
	Consider $h^\prime = (q+r)\alpha\left( h - \frac{\log_2(\widetilde{N_H}(q\alpha))}{(q+r)\alpha} \right) > 0$. 
The Kolmogorov complexity of $h^\prime$ is less than $3K$, because $K$ contains the complexity of doing each of the operations used in $h^\prime$ except computing $q$, and computing $q$ requires computing $R$, which requires computing $K$, which has Kolmogorov complexity at most $K$. Assembling all of this, we obtain a complexity of at most $3K$.
	
	Consequently, using \cref{prop:algo}, there exist a constant $C>0$ and a Wang tile set $T_W$ with at most $3CK$ tiles such that the associated SFT $W$ has entropy $h(W) = h^\prime$.
	
	Now, consider the two-dimensional subshift $X \subset \mathcal{A}^{\mathbbm{Z}^2}$ with the following local rules:
	\begin{itemize}
		\item every line satisfies the conditions of $H$, thus $\pi_{\vec{e_1}}(X) \subset H$;
		\item if the word $u$ appears horizontally starting at position $(i,j)$, it also appears at positions $(i,j+1)$ and $(i,j-1)$;
		\item the word $u$ appears with the horizontal period $(q+r)\alpha$ and it cannot appear elsewhere on a given line;
		\item the word $u$ is followed by a word of $\{w_1,w_2\}^R.w_1^{r-R-1}$;
		\item the tiles coded in binary by the words of $\{w_1,w_2\}^R$ after $u$ satisfy the horizontal and vertical constraints imposed by the Wang tile set $T_W$.
	\end{itemize}

	The window of size $q\alpha$ that remains between two words $u$, and is not filled by the previous constraints, is only required to respect the horizontal conditions of $H$ and not to contain $u$.
	There are horizontal lines that respect all of the horizontal conditions in $X$, because:
	\begin{itemize}
	\item $u\{w_1,w_2\}^*w_1 \subset \mathcal{L}_H$;
	\item $\widetilde{N_H}(q\alpha) \neq 0$ by \cref{magiclemma};
	\item forcing $u$ to appear nowhere else than with a $(q+r)\alpha$ period is possible because of \cref{lemlem}: $u$ appears as a subword of any word in $u\{w_1,w_2\}^*w_1$ at the very beginning only.
	\end{itemize}
	The only vertical restrictions added between these horizontal lines are the alignment of the $u$'s between two lines where this word appears and the constraints of $T_W$, which can be satisfied since $W$ is nonempty; so there are configurations in $X$ overall, as pictured below.
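Schematically (an illustration of the rules above, not a new constraint), every row of $X$ is the $(q+r)\alpha$-periodic repetition of the following kind of segment, where $b_1, \dots, b_R \in \{1,2\}$ carry the binary information of $T_W$ and $v$ is a word of length $q\alpha$ not containing $u$ and such that $w_1vu \in \mathcal{L}_H$:
\[
\cdots \; \underbrace{u}_{\alpha} \; \underbrace{w_{b_1} \cdots w_{b_R}}_{R\alpha} \; \underbrace{w_1 \cdots w_1}_{(r-R-1)\alpha} \; \underbrace{v}_{q\alpha} \; u \; \cdots
\]
so that two consecutive occurrences of $u$ are indeed $(q+r)\alpha$ apart.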
Let $n = (q+r) \alpha$. One has
	\[
	\widetilde{N_H}(q\alpha)^{k(k-1)} N_W(k-1, k) \leq N_X(kn, k)
	\]
	because in any $kn \times k$ window in $X$ there are at least $(k-1) \times k$ complete horizontal segments starting with a word $u$, that encode a $(k-1) \times k$ pattern of $W$ in binary with $\{w_1,w_2\}^R$, and each of them also contains $q\alpha$ additional characters that link $w_1$ and $u$ and that do not contain $u$.
	
	One also has
	\[
	N_X(kn, k) \leq \widetilde{N_H}(q\alpha)^{k(k+1)} N_W(k+1, k)
	\]
	because similarly, in any $kn \times k$ window in $X$ there are fewer than $(k+1) \times k$ complete horizontal segments starting with a word $u$. One obtains the resulting inequality:
	
	\[
	\frac{(k-1)\log_2(\widetilde{N_H}(q\alpha))}{kn} + \frac{\log_2(N_W(k-1, k))}{nk^2} \leq \frac{\log_2(N_X(kn, k))}{nk^2} \leq \frac{(k+1)\log_2(\widetilde{N_H}(q\alpha))}{kn} + \frac{\log_2(N_W(k+1, k))}{nk^2}
	\]
	
	Thus, taking the limit when $k \to \infty$ and using \cref{lemma:entropy} together with $h(W) = h^\prime$, one obtains
	
	\[
	h(X) = \frac{\log_2(\widetilde{N_H}(q\alpha))}{(q+r)\alpha} + \frac{h(W)}{(q+r)\alpha} = h.
	\]
\end{proof}

\subsection{Some consequences}

A direct consequence of \cref{th:RealizationEntropy} -- up to extending the construction to higher dimensions, which is straightforward -- is the characterization of all the entropies in any dimension $d>2$, as is obtained in \cite{HM}:
\begin{corollary}
	\label{th:bonus}
	Let $\mathcal{A}$ be a finite alphabet. For any number $h\in\Pi_1$ such that $0\leq h\leq\log_2(|\mathcal{A}|)$, there exists a SFT on $\mathcal{A}^{\mathbbm{Z}^d}$ of entropy $h$.
\end{corollary}

Given a one-dimensional \emph{effective subshift} $H$, that is, a subshift whose list of forbidden patterns is enumerable by a Turing Machine, it is known that there exists a two-dimensional sofic subshift which has $H$ as projective subaction \cite{simulation}. A natural question is whether it is possible to additionally impose a specific entropy $h$ with $h\leq h(H)$. The next corollary is a partial result for effective subshifts containing a SFT.

\begin{corollary}
	\label{th:bonus2}
	Let $H \subset \mathcal{A}^\mathbbm{Z}$ be an effective subshift with $H' \subset H$ a SFT.
	Let $h \in \Pi_1$ with $0 \leq h \leq h(H')$.
	
	Then there exists a two-dimensional sofic subshift $X$ such that $h(X) = h$ and $\pi_{\vec{e_1}}(X) = H$.
\end{corollary}

\begin{proof}
	Let $h \in \Pi_1$ with $h \leq h(H')$.
	
	Let $X'$ be the two-dimensional SFT obtained from $H'$ using \cref{th:RealizationEntropy}, which is such that $h(X') = h$ and $\pi_{\vec{e_1}}(X') \subset H'$.
	
	Let $Y_H$ be the SFT with $H$ as horizontal rules and no added vertical rule.
	
	Let $Z$ be a SFT over $\{0, 0', 1\}^{\mathbbm{Z}^2}$ such that two horizontally successive elements must be the same, only $0$ and $1$ can be above a $0$, only $0'$ can be above a $1$, and only $0'$ can be above a $0'$. The configurations of $Z$ are made of at most a single line of $1$s with $0'$s above and $0$s below.
	
	We define $Y = Z \times Y_H \times X'$ which is a SFT by product, and finally $X$ which is the projection $\pi(Y)$ with
	\[
	\pi(y)_{i,j} =
	\begin{cases}
	(y_2)_{i,j} &\quad\text{if } (y_1)_{i,j} = 1\\
	(y_3)_{i,j} &\quad\text{if } (y_1)_{i,j} \neq 1\\
	\end{cases}
	\]
	where $y_1, y_2$ and $y_3$ are the projections of a configuration $y \in Y$ on the three SFTs of the product.
	
	In the end, a configuration in $X$ has at most one line that can be any configuration of $H$; and all its other lines are from $X'$, ensuring $h(X) = h(X') = h$. Furthermore, since $\pi_{\vec{e_1}}(X') \subset H' \subset H$, we obtain $\pi_{\vec{e_1}}(X) = H$.
\end{proof}
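The projection $\pi$ above is letter-to-letter, which is what makes $X$ sofic. A minimal sketch (with a hypothetical encoding of finite windows of configurations as dictionaries, not the article's formalism) of how $X$ is read off the three layers:

\begin{verbatim}
# Minimal sketch (hypothetical encoding): reading a configuration
# of X off the three layers of Y = Z x Y_H x X'. On the single
# line of 1's in the Z-layer we expose the Y_H-layer; everywhere
# else we expose the X'-layer.
def project(y1, y2, y3):
    """Each layer: dict mapping a position (i, j) to a symbol."""
    return {pos: (y2[pos] if y1[pos] == 1 else y3[pos])
            for pos in y1}
\end{verbatim}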
\begin{remark}
 In the theorem of simulation of~\cite{Aubrun-Sablik-2010,Durand-Romashchenko-Shen-2012}, any effective subshift $H$ can be realized as the projective subaction of a two-dimensional sofic subshift, but that sofic subshift has zero entropy. It is natural to ask if it is possible to realize $H$ with any possible entropy for the two-dimensional subshift, that is to say any $\Pi_1$-computable number of $[0,h(H)]$. This question is related to the conjecture that a one-dimensional subshift $H$ is sofic if and only if the two-dimensional subshift $H^\ast=\{x\in\mathcal{A}^{\mathbbm{Z}^2}: \textrm{ for all }i\in\mathbbm{Z}, x_{\mathbbm{Z}\times\{i\}}\in H\}$ is sofic. We remark that the entropy of $H^\ast$ is $h(H)$, obtained with completely independent rows; thus allowing entropy in the realization is a way of giving some independence between rows.
\end{remark}






\section{Theorem of simulation under interplay between horizontal and vertical conditions}
\label{sec:simulation}

Another way of looking at one-dimensional constraints in a two-dimensional setting, as mentioned in \cref{subsec:horizontalconstraints}, is to try to understand whether a pair of constraints is compatible. Some are chosen as horizontal conditions and the others as vertical conditions: can the resulting combined subshift $X_{H,V}$ be anything we want?

For a few nearest-neighbor horizontal constraints, it is not that hard to realize that, whichever vertical constraints we match them with, the combined subshift will necessarily contain periodic configurations, and therefore not every SFT on $\mathbbm{Z}^2$ will be obtainable. These constraints are said to respect condition D, see \cref{subsec:conditionD}, which is the union of three smaller easy-to-understand conditions.

For any other kind of horizontal constraint, we prove in a disjunctive fashion that we can simulate, in some sense, any two-dimensional subshift $Y$. However, since the entropy of $X_{H,V}$ is bounded by $h(H)$, the simulation cannot be a conjugacy; the correct notion is the one of root of a subshift, defined in \cref{subsec:root}.

This section is devoted to proving the following, for a condition D defined in \cref{subsec:conditionD}.

\begin{theorem}
\label{th:root}
Let $H\subset\mathcal{A}^\mathbbm{Z}$ be a one-dimensional \emph{nearest-neighbor} SFT whose Rauzy graph does not satisfy condition D. For any two-dimensional SFT $Y\subset\mathcal{B}^{\mathbbm{Z}^2}$, there exists a one-dimensional SFT $V_Y\subset\mathcal{A}^\mathbbm{Z}$ such that $X_{H,V_Y}$ is a $(m,n)$th root of $Y$ for some $(m, n) \in \mathbbm{N}^2$. Furthermore, $m$, $n$ and $V_Y$ can be computed algorithmically.
\end{theorem}

\subsection{The condition D}
\label{subsec:conditionD}

\begin{definition}
	\label{conditionD}
	We say that an oriented graph $\mathcal{G} = (\mathcal{V},\vec{E})$ verifies condition D (for ``decidable'') if all its SCCs have a type in common among the following list.
An SCC $S$ can be of none, one or several of these types:
	\begin{itemize}
		\item for all vertices $v \in S$, we have $(v,v) \in \vec{E}$: we say that $S$ is of \emph{reflexive type};
		
		\item for all vertices $v \neq w \in S$ such that $(v, w) \in \vec{E}$, we have $(w,v) \in \vec{E}$: we say that $S$ is of \emph{symmetric type};
		
		\item there exists $p \in \mathbbm{N}$ such that $S = \bigsqcup_{i=0}^{p-1} V_i$ where, for any $v \in V_i$, we have $(v,w) \in \vec{E} \Leftrightarrow w \in V_{i+1}$, with $i+1$ taken modulo $p$: we say that $S$ is of \emph{state-split cycle type}.
	\end{itemize}
	
\end{definition}

\begin{remark}
	The term state-split is used in reference to a notion introduced in \cite{LindMarcus}: a state-split cycle is a cycle where some vertices have been split.
	
	Note that a single vertex with a loop, $S = \{v\}$, is also of symmetric type. Similarly, a single vertex is of state-split cycle type, with the partition made of the unique class $V_0$.
\end{remark}

\begin{example}
	Consider the following Rauzy graphs:
	
	\begin{minipage}{0.3\textwidth}
		\begin{tikzpicture}
		\node[draw, circle] (b) {};
		\node[draw, circle] (a) [below left = 0.5cm and 0.25cm of b] {};
		\node[draw, circle] (c) [below right = 0.5cm and 0.25cm of b] {};
		
		\draw[->, >=latex] (a) to[bend left=20] (b);
		\draw[->, >=latex] (b) to[bend left=20] (c);
		\draw[->, >=latex] (c) to[bend left=20] (a);
		
		\draw[->, >=latex] (c) to[bend left=20] (b);
		
		\draw[->, >=latex] (b) to[loop above] (b);
		\draw[->, >=latex] (c) to [out=330,in=300,looseness=12] (c);
		\draw[->, >=latex] (a) to [out=240,in=210,looseness=12] (a);
		
		\node (h) [below = 1cm of b] {$\mathcal{G}(H_1)$};
		\end{tikzpicture}
	\end{minipage}
	\begin{minipage}{0.3\textwidth}
		\begin{tikzpicture}
		\node[draw, circle] (b) {};
		\node[draw, circle] (a) [below left = 0.5cm and 0.25cm of b] {};
		\node[draw, circle] (c) [below right = 0.5cm and 0.25cm of b] {};
		
		\draw[->, >=latex] (a) to[bend left=20] (b);
		\draw[->, >=latex] (b) to[bend left=20] (c);
		\draw[->, >=latex] (c) to[bend left=20] (a);
		
		\draw[->, >=latex] (c) to[bend left=20] (b);
		\draw[->, >=latex] (b) to[bend left=20] (a);
		\draw[->, >=latex] (a) to[bend left=20] (c);
		
		\draw[->, >=latex] (b) to[loop above] (b);
		\draw[->, >=latex] (c) to [out=330,in=300,looseness=12] (c);
		
		\node (h) [below = 1cm of b] {$\mathcal{G}(H_2)$};
		\end{tikzpicture}
	\end{minipage}
	\begin{minipage}{0.3\textwidth}
		\begin{tikzpicture}
		\node[draw, circle] (b) {};
		\node[draw, circle] (a) [below left = 0.5cm and 0.25cm of b] {};
		\node[draw, circle] (c) [below right = 0.5cm and 0.25cm of b] {};
		\node[draw, circle] (d) [left = 0.3cm of a] {};
		\node[draw, circle] (e) [left = 0.3cm of d] {};
		\node[draw, circle] (f) [above = 0.3cm of b] {};
		
		
		\node [rotate=0][draw,dashed,inner sep=0.5pt, circle,yscale=.7,fit={(a)(d)(e)}] (meta1) {};
		\node [rotate=90][draw,dashed,inner sep=0.5pt, circle,yscale=.7,fit={(b)(f)}] (meta2) {};
		\node [rotate=0][draw,dashed,inner sep=0.5pt, circle,yscale=.7,fit={(c)}] (meta3) {};
		
		\draw[->, >=latex] (meta1) to[bend left=20] (meta2);
		\draw[->, >=latex] (meta2) to[bend left=20] (meta3);
		\draw[->, >=latex] (meta3) to[bend left=20] (meta1);
		
		\node (h) [below = 1cm of b] {$\mathcal{G}(H_3)$};
		\end{tikzpicture}
	\end{minipage}

	where edges between dotted sets of vertices in the third graph represent that all vertices from the first set have edges leading to all vertices of the second set.
	
	These three graphs respect condition D, being respectively of reflexive, symmetric and state-split cycle type.
\end{example}
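The first two types can be tested mechanically on a given graph; a minimal sketch (with a hypothetical encoding of an SCC as a vertex set $S$ and an edge set $E$ of ordered pairs, not taken from the article; the state-split cycle type requires a partition search and is omitted here):

\begin{verbatim}
# Minimal sketch (hypothetical encoding): testing the first two
# types of condition D on an SCC with vertices S and edges E.
def is_reflexive_type(S, E):
    return all((v, v) in E for v in S)

def is_symmetric_type(S, E):
    return all((w, v) in E
               for v in S for w in S
               if v != w and (v, w) in E)
\end{verbatim}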
\subsection{Generic construction}
\label{subsec:core}

In this section, we describe a set of properties on a directed graph, forming a condition called \emph{condition C} that has stronger requirements than condition D. Condition C allows a generic construction of the proof of \cref{th:root} for the associated one-dimensional, nearest-neighbor SFT.

In all that follows, \emph{we denote elements of cycles with an index that is written modulo the length of the corresponding cycle}.
We need the following definitions before describing condition C.

\begin{definition}
	Let $C^1$ and $C^2$ be two cycles in an oriented graph $\mathcal{G} = (\mathcal{V},\vec{E})$, with elements denoted respectively $c^1_i$ and $c^2_j$. Let $M := LCM(|C^1|,|C^2|)$. Let $C$ be any cycle in that graph, with elements denoted $c_i$.
	
	We say that the cycles $C^1$ and $C^2$ contain a \emph{good pair} $(i,j)$ if the set of integers $p \in \{0, \dots, M-1\}$ such that $c^1_{i+p} = c^2_{j+p}$ is a (possibly empty) block of consecutive integers modulo $M$: reading the two cycles in parallel from $(i,j)$, the positions where they coincide form a single run.
	
	We say that the cycle $C$ has a \emph{uniform shortcut} if there exists $k \in \{2, \dots, |C|\}$ such that $(c_i, c_{i+k}) \in \vec{E}$ for every $i$.
	
	We say that there is a \emph{cross-bridge} between $C^1$ and $C^2$ if there is a pair $(i,j)$ with $c^1_i \neq c^2_j$ and $c^1_{i+1} \neq c^2_{j+1}$ such that both $(c^1_i, c^2_{j+1}) \in \vec{E}$ and $(c^2_j, c^1_{i+1}) \in \vec{E}$.
	
	Finally, a vertex $t$ of $\mathcal{G}$ is \emph{attractive} if $(v,t) \in \vec{E}$ for every vertex $v$ of $C^1$, and a vertex $p$ of $\mathcal{G}$ is \emph{repulsive} if $(p,v) \in \vec{E}$ for every vertex $v$ of $C^1$.
\end{definition}

\begin{definition}
	We say that $\mathcal{G}$ verifies \emph{condition C} if it contains two cycles $C^1$ and $C^2$ that contain a good pair, such that neither $C^1$ nor $C^2$ has a uniform shortcut, there is no cross-bridge between $C^1$ and $C^2$, and $\mathcal{G}$ does not have both an attractive vertex and a repulsive vertex in $C^1 \cup C^2$. These notions are illustrated in \cref{fig:condC}.
\end{definition}

\begin{figure}
	\centering
	\begin{subfigure}[t]{0.17\textwidth}
		\centering
		\resizebox{\columnwidth}{!}{
			\begin{tikzpicture}[scale=0.4,rotate=90]
			\def \radius {1.5cm};
			
			\node[draw, circle] (a) at (0:\radius) {};
			\node[draw, circle] (b) at (72:\radius) {};
			\node[draw, circle] (c) at (144:\radius) {};
			\node[draw, circle] (d) at (216:\radius) {};
			\node[draw, circle] (e) at (288:\radius) {};
			
			\node[draw, circle] (f) at (180:2.5cm) {};
			
			\draw (0,0) node {$C^1$};
			
			\draw[->, >=latex,color=red] (a) to[bend right=15] (b);
			\draw[->, >=latex,color=red] (b) to[bend right=15] (c);
			\draw[->, >=latex] (c) to[bend right=15] (d);
			\draw[->, >=latex,color=red] (d) to[bend right=15] (e);
			\draw[->, >=latex,color=red] (e) to[bend right=15] (a);
			
			\draw[->, >=latex,color=red] (c) to[bend right=15] (f);
			\draw[->, >=latex,color=red] (f) to[bend right=15] (d);
			\end{tikzpicture}
		}
		\caption{$C^1$ and $C^2$ (in red) have a good pair $(i,j)$.}
		\label{condC1}
	\end{subfigure}
	\hfill
	\begin{subfigure}[t]{0.17\textwidth}
		\centering
		\resizebox{\columnwidth}{!}{
			\begin{tikzpicture}[scale=0.4,rotate=90]
			\def \radius {1.5cm};
			
			\node[draw, circle] (a) at (0:\radius) {};
			\node[draw, circle] (b) at (72:\radius) {};
			\node[draw, circle] (c) at (144:\radius) {};
			\node[draw, circle] (d) at (216:\radius) {};
			\node[draw, circle] (e) at (288:\radius) {};
			
			\node[draw, circle] (f) at (0:2.5cm) {};
			
			\draw (0,0) node {$C^1$};
			
			\draw[->, >=latex] (a) to[bend right=15] (b);
			\draw[->, >=latex,color=red] (b) to[bend right=15] (c);
			\draw[->, >=latex,color=red] (c) to[bend right=15] (d);
			\draw[->, >=latex,color=red] (d) to[bend right=15] (e);
			\draw[->, >=latex] (e) to[bend right=15] (a);
			
			\draw[->, >=latex,color=red] (e) to[bend right=15] (f);
			\draw[->, >=latex,color=red] (f) to[bend right=15] (b);
			\end{tikzpicture}
		}
		\caption{$C^1$ and $C^2$ (in red) have no good pair.}
		\label{condC2}
	\end{subfigure}
	\hfill
	\begin{subfigure}[t]{0.17\textwidth}
		\centering
		\resizebox{\columnwidth}{!}{
			\begin{tikzpicture}[scale=0.4,rotate=-90]
			\def \radius {1.5cm};
			
			\node[draw, circle] (a) at (0:\radius) {};
			\node[draw, circle] (b) at (72:\radius) {};
			\node[draw, circle] (c) at (144:\radius) {};
			\node[draw, circle] (d) at (216:\radius) {};
			\node[draw, circle] (e) at (288:\radius) {};
			
			\draw[->,
>=latex] (a) to[bend right=15] (b);\n\t\t\t\\draw[->, >=latex] (b) to[bend right=15] (c);\n\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (d);\n\t\t\t\\draw[->, >=latex] (d) to[bend right=15] (e);\n\t\t\t\\draw[->, >=latex] (e) to[bend right=15] (a);\n\t\t\t\n\t\t\t\\draw[->, >=latex,color=green] (a) to[bend right=5] (d);\n\t\t\t\\draw[->, >=latex,color=green] (b) to[bend right=5] (e);\n\t\t\t\\draw[->, >=latex,color=green] (c) to[bend right=5] (a);\n\t\t\t\\draw[->, >=latex,color=green] (d) to[bend right=5] (b);\n\t\t\t\\draw[->, >=latex,color=green] (e) to[bend right=5] (c);\n\t\t\t\\end{tikzpicture}\n\t\t}\n\t\t\\caption{A cycle with uniform shortcuts (in green).}\n\t\t\\label{condC3}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[t]{0.2\\textwidth}\n\t\t\\centering\n\t\t\\resizebox{\\columnwidth}{!}{\n\t\t\t\\begin{tikzpicture}[scale=0.4]\n\t\t\t\\def \\radius {1.5cm};\n\t\t\t\n\t\t\t\\node[draw, circle] (a) at (0:\\radius) {};\n\t\t\t\\node[draw, circle] (b) at (72:\\radius) {};\n\t\t\t\\node[draw, circle] (c) at (144:\\radius) {};\n\t\t\t\\node[draw, circle] (d) at (216:\\radius) {};\n\t\t\t\\node[draw, circle] (e) at (288:\\radius) {};\n\t\t\t\n\t\t\t\\begin{scope}[xshift=3cm,xscale=-1]\n\t\t\t\\node[draw, circle] (f) at (72:\\radius) {};\n\t\t\t\\node[draw, circle] (g) at (144:\\radius) {};\n\t\t\t\\node[draw, circle] (h) at (216:\\radius) {};\n\t\t\t\\node[draw, circle] (i) at (288:\\radius) {};\n\t\t\t\\draw (0,0) node {$C^2$};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\\draw (0,0) node {$C^1$};\n\t\t\t\n\t\t\t\\draw[->, >=latex] (a) to[bend right=15] (b);\n\t\t\t\\draw[->, >=latex] (b) to[bend right=15] (c);\n\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (d);\n\t\t\t\\draw[->, >=latex] (d) to[bend right=15] (e);\n\t\t\t\\draw[->, >=latex] (e) to[bend right=15] (a);\n\t\t\t\n\t\t\t\\draw[->, >=latex] (a) to[bend left=15] (f);\n\t\t\t\\draw[->, >=latex] (f) to[bend left=15] (g);\n\t\t\t\\draw[->, >=latex] (g) to[bend left=15] (h);\n\t\t\t\\draw[->, >=latex] (h) to[bend left=15] (i);\n\t\t\t\\draw[->, >=latex] (i) to[bend left=15] (a);\n\t\t\t\n\t\t\t\\draw[->, >=latex,color=green] (h) to[bend left=50] (e);\n\t\t\t\\draw[->, >=latex,color=green] (d) to[bend right=50] (i);\n\t\t\t\\end{tikzpicture}\n\t\t}\n\t\t\\caption{A cross-bridge (in green) between cycles $C^1$ and $C^2$.}\n\t\t\\label{condC4}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[t]{0.2\\textwidth}\n\t\t\\centering\n\t\t\\resizebox{\\columnwidth}{!}{\n\t\t\t\\begin{tikzpicture}[scale=0.4]\n\t\t\t\\def \\radius {1.5cm};\n\t\t\t\n\t\t\t\\node[draw, circle,scale=0.6] (a) at (0:\\radius) {$t$};\n\t\t\t\\node[draw, circle] (b) at (72:\\radius) {};\n\t\t\t\\node[draw, circle,scale=0.6] (c) at (144:\\radius) {$p$};\n\t\t\t\\node[draw, circle] (d) at (216:\\radius) {};\n\t\t\t\\node[draw, circle] (e) at (288:\\radius) {};\n\t\t\t\n\t\t\t\\draw[->, >=latex] (a) to[bend right=15] (b);\n\t\t\t\\draw[->, >=latex] (b) to[bend right=15] (c);\n\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (d);\n\t\t\t\\draw[->, >=latex] (d) to[bend right=15] (e);\n\t\t\t\\draw[->, >=latex] (e) to[bend right=15] (a);\n\t\t\t\n\t\t\t\\draw[->, >=latex] (b) to[bend right=15] (a);\n\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (a);\n\t\t\t\\draw[->, >=latex] (d) to[bend right=15] (a);\n\t\t\t\n\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (e);\n\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (a);\n\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (b);\n\t\t\t\n\t\t\t\\draw[->, >=latex] (c) to[out=164,in=124,looseness=6] 
(c);
			\draw[->, >=latex] (a) to[out=-20,in=20,looseness=6] (a);
			\end{tikzpicture}
		}
		\caption{An attractive vertex $t$ and a repulsive vertex $p$.}
		\label{condC5}
	\end{subfigure}
	\caption{Cases of compliance or not with the elements of condition C.}
	\label{fig:condC}
\end{figure}

\begin{proposition}
	\label{propcondC}
	If $\mathcal{G}(H)$ verifies condition C, and $W$ is a Wang shift, then there exists an explicit SFT $V_W \subset \mathcal{A}^\mathbbm{Z}$ such that $X_{H,V_W}$ is a root of $W$.
\end{proposition}

The rest of the subsection is devoted to proving this result.
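Before entering the construction, note that the cycle-related requirements of condition C are mechanically checkable; a minimal sketch (with a hypothetical encoding of cycles as vertex lists and of the edges of $\mathcal{G}$ as a set of ordered pairs, not taken from the article):

\begin{verbatim}
# Minimal sketch (hypothetical encoding): testing two of the
# requirements of condition C on cycles given as vertex lists.
def has_uniform_shortcut(C, E):
    """True iff some fixed k != 1 gives edges c_i -> c_{i+k}
    for every position i of the cycle C."""
    n = len(C)
    return any(all((C[i], C[(i + k) % n]) in E for i in range(n))
               for k in range(2, n + 1))

def has_cross_bridge(C1, C2, E):
    """True iff some pair (i, j) outside the coincidences of the
    two cycles carries both c1_i -> c2_{j+1} and c2_j -> c1_{i+1}."""
    n1, n2 = len(C1), len(C2)
    return any((C1[i], C2[(j + 1) % n2]) in E
               and (C2[j], C1[(i + 1) % n1]) in E
               for i in range(n1) for j in range(n2)
               if C1[i] != C2[j] and C1[(i + 1) % n1] != C2[(j + 1) % n2])
\end{verbatim}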
Let $H$ be such that $\mathcal{G}(H)$ verifies condition C. We focus on encoding, in the correct $X_{H,V}$, a full shift on an alphabet $\tau$ of cardinality $N$. Then, the possibility to add vertical rules will allow us to encode any Wang shift $W$ using this alphabet, that is, to simulate the configurations of $W$ by making $X_{H,V}$ a root of $W$.

For the rest of the construction, we name $M := LCM(|C^1|,|C^2|)$ and $K := 2 |C^1| + |C^2| + 3$. We suppose that $N \geq 2$.
Indeed, the case of encoding a monotile Wang shift is easy: consider only the cycle $C^1$ in $\mathcal{G}(H)$, which may have extra edges from one element to another, but no uniform shortcut. Build the vertical SFT $V$ from the graph $\mathcal{G}(H)^\prime$ obtained by removing any of those extra edges, keeping only a plain cycle -- the same as $C^1$. Then $X_{H,V}$ contains only the translates of one configuration, which cycles through the elements of $C^1$ in the correct order, both horizontally and vertically. This is a root of a monotile Wang shift.

We refer to \cref{columns1}
in the description that follows. We use the term \emph{slice} for a truncation of a column: it is a part of width $1$ and of finite height. We use the following more specific denominations for the various scales of our construction:
\begin{itemize}
	\item A \emph{macro-slice} is a slice of height $KMN$. Any column of $X_{H,V}$ will merely be made of a succession of some specific macro-slices called ordered macro-slices (see below).
	\item A \emph{meso-slice} is a slice of height $MN$; meso-slices are assembled into macro-slices.
	\item A \emph{micro-slice} is a slice of height $N$. This subdivision is used inside specific meso-slices called code meso-slices (see below).
\end{itemize}
Although any scale of slice could denote any truncation of a column of the right size, we focus on specific slices that are meaningful because of what they contain, so that we can assemble them precisely. They are:
\begin{itemize}
	\item An $(i,j)$ $k$-coding micro-slice is a micro-slice composed of $N-1$ symbols $c^1_i$ and one symbol $c^2_j$ at position $k$. It encodes the $k$th tile of alphabet $\tau$, unless $c^1_i=c^2_j$: in that case, it is called a \emph{buffer} and encodes nothing. We can write ``$(i,j)$ coding micro-slice'' when we do not want to specify which tile is encoded.
	
	\item An $(i_0,j_0)$ $(k,l)$-code meso-slice is a meso-slice made of $M$ vertically successive coding micro-slices. The one on top is an $(i_0,j_0)$ $k$-coding micro-slice.
	We add the following restrictions:
		\begin{itemize}
			\item an $(i,j)$ $k$-coding micro-slice must be vertically followed by an $(i+1,j+1)$ $k$-coding micro-slice, unless $c^1_i \neq c^2_j$ but $c^1_{i+1} = c^2_{j+1}$, that is if the new micro-slice is a buffer but the previous one is not;
			\item if $c^1_i = c^2_j$ but $c^1_{i+1} \neq c^2_{j+1}$, then the buffer must be followed by an $(i+1,j+1)$ $l$-coding micro-slice.
		\end{itemize}
	We can write ``$(i_0,j_0)$ code meso-slice'' when we do not want to specify which tiles are encoded.
	
	\item An $i$-border meso-slice is made of $\frac{M}{|C^1|} N$ times the vertical repetition of all the elements of the cycle $C^1$, starting with $c^1_i$.
	
	\item A $c^1_i$ meso-slice is made of $MN$ times the vertical repetition of element $c^1_i$, denoted $(c^1_i)^{MN}$ in \cref{columns1}. The same definition holds for a $c^2_j$ meso-slice.
	
	\item The succession of a $c^1_i$ meso-slice, then a $c^1_{i+1}$ meso-slice, ..., then a $c^1_{i-1}$ meso-slice is called a $i$ $C^1$-slice. It is of height $MN|C^1|$. Similarly, we define a $j$ $C^2$-slice (of height $MN|C^2|$).
	
	\item Finally, a $(i,j)$-ordered $(k,l)$-coding macro-slice is the succession of a $i$-border meso-slice, a $i$ $C^1$-slice, a second $i$ $C^1$-slice, a $j$ $C^2$-slice, a $i$-border meso-slice, and finally a $(i,j)$ $(k,l)$-code meso-slice.
	We can write ``$(i,j)$-ordered macro-slice'' when we do not want to specify which tiles are encoded.
\end{itemize}

\begin{remark}
	The $(i_0,j_0)$ $(k,l)$-code meso-slice is well-defined because, since $C^1$ and $C^2$ contain a good pair, a code meso-slice is a vertical succession of coding micro-slices and of buffers, with at most one vertical run of coding micro-slices and at most one vertical run of buffers (possibly both).
Therefore, there can be at most one change, from $k$ to $l$, in the tiles encoded; $k$ is then called the \\emph{main-coded tile}, and $l$ the \\emph{side-coded tile}.\n\t\n\tNote that if $c^1_{i_0} \\neq c^2_{j_0}$ but $c^1_{i_0-1} = c^2_{j_0-1}$, then the meso-slice contains no side-coded tile -- its last coding micro-slice is the last buffer -- so $l$ is actually irrelevant.\n\tConversely, if $c^1_{i_0} = c^2_{j_0}$, then the meso-slice contains no main-coded tile, so $k$ is actually irrelevant.\n\\end{remark}\n\nNow, the patterns we authorize in $V$ are the $(i_0+p,j_0+p)$-ordered macro-slices with a good pair $(i_0,j_0)$ and $p \\in \\{0, \\dots, M-1\\}$, and all patterns that allow the vertical juxtaposition of two $(i,j)$-ordered macro-slices, using the same $i$ and $j$, but possibly different code meso-slices.\nWe prove below that this is enough for our resulting $X_{H,V}$ to simulate a full shift on $\\tau$.\n\n\n\\begin{figure}[p]\n\t\\centering\n\t\\begin{subfigure}[t]{0.55\\textwidth}\n\t\t\\centering\n\t\t\\resizebox{\\columnwidth}{!}{\n\t\t\t\\begin{tikzpicture}[scale=1\n\t\t\t,rotate=-90,yscale=-1\n\t\t\t]\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\\fill[color=black!20] (-3,-1) rectangle (8,-2);\n\t\t\t\\draw (-3,-1) rectangle(8,-2);\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t\t\t\n\t\t\t\\draw(-2.5,-1.5) node[scale=1] {$c_i^1$};\n\t\t\t\\draw(-1.5,-1.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\\draw(-0.5,-1.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(0.5,-1.5) node[scale=1] {$c_{i-1}^1$};\n\t\t\t\\draw(1.5,-1.5) node[scale=1] {$c_{i}^1$};\n\t\t\t\\draw(2.5,-1.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\\draw(3.5,-1.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(4.5,-1.5) node[scale=1] {$c_{i-1}^1$};\n\t\t\t\\draw(5.5,-1.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(6.5,-1.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(7.5,-1.5) node[scale=1] {$c_{i-1}^1$};\n\t\t\t\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\\draw (-3,-1) -- (0,0);\n\t\t\t\\draw (8,-1) -- (1,0);\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\\fill[color=black!20] (0,0) rectangle (1,1);\n\t\t\t\\fill[color=black!20] (13,0) rectangle (14,1);\n\t\t\t\n\t\t\t\\fill[color=black!10](1,0)rectangle(9,1);\n\t\t\t\\fill[color=black!5](9,0)rectangle(13,1);\n\t\t\t\n\t\t\t\\draw (0,0) rectangle (15,1);\n\t\t\t\\foreach \\k in {1,...,14}\n\t\t\t{\n\t\t\t\t\\draw (\\k,0)--(\\k,1);\n\t\t\t};\n\t\t\t\n\t\t\t\n\t\t\t\\draw(-1,0.7) node[scale=1,align=center] {$(i,j)$-ordered\\\\macro-slice};\n\t\t\t\n\t\t\t\\draw(0.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw(1.5,0.5) node[scale=0.65] {$(c_i^1)^{MN}$};\n\t\t\t\\draw(2.5,0.5) node[scale=0.65] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw(3.5,0.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(4.5,0.5) node[scale=0.65] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw(5.5,0.5) node[scale=0.65] {$(c_i^1)^{MN}$};\n\t\t\t\\draw(6.5,0.5) node[scale=0.65] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw(7.5,0.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(8.5,0.5) node[scale=0.65] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw(9.5,0.5) node[scale=0.65] {$(c_{j}^2)^{MN}$};\n\t\t\t\\draw(10.5,0.5) node[scale=0.65] {$(c_{j+1}^2)^{MN}$};\n\t\t\t\\draw(11.5,0.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(12.5,0.5) node[scale=0.65] {$(c_{j-1}^2)^{MN}$};\n\t\t\t\\draw(13.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw(14.5,0.5) node[scale=0.75] {code};\n\t\t\t\n\t\t\t\\draw [decorate,decoration={brace,amplitude=5pt,mirror,raise=3ex}]\n\t\t\t(1,0.75) -- (5,0.75) node[midway,yshift=-3em]{};\n\t\t\t\n\t\t\t\\draw(3,2.3) node[scale=1] {$i$ 
$C^1$-slice};\n\t\t\t\n\t\t\t\\draw [decorate,decoration={brace,amplitude=5pt,mirror,raise=3ex}]\n\t\t\t(9,0.75) -- (13,0.75) node[midway,yshift=-3em]{};\n\t\t\t\n\t\t\t\\draw(11,2.3) node[scale=1] {$j$ $C^2$-slice};\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\\draw(14,0)--(10,-1);\n\t\t\t\\draw(15,0)--(17,-1);\n\t\t\t\n\t\t\t\\draw (10,-2) rectangle(17,-1);\n\t\t\t\\foreach \\k in {11,...,16}\n\t\t\t{\n\t\t\t\t\\draw (\\k,-2)--(\\k,-1);\n\t\t\t};\n\t\t\t\n\t\t\t\n\t\t\t\\draw(9,-1.5) node[scale=1] {meso-slices};\n\t\t\t\n\t\t\t\\draw(10.5,-1.5) node[scale=0.75,align=center] {$\\tau_k$\\\\$i$\\\\$j$};\n\t\t\t\\draw(11.5,-1.5) node[scale=0.75,align=center] {$\\tau_k$\\\\$i+1$\\\\$j+1$};\n\t\t\t\\draw(12.5,-1.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(13.5,-1.5) node[scale=0.75,align=center] {$\\tau_k$\\\\$i-4$\\\\$j-4$};\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (14,-2) rectangle (15,-1);\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (15,-2) rectangle (16,-1);\n\t\t\t\\draw(16.5,-1.5) node[scale=0.75,align=center] {$\\tau_\\ell$\\\\$i-1$\\\\$j-1$};\n\t\t\t\n\t\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\\draw(7,-3)--(11,-2);\n\t\t\t\\draw(16,-3)--(12,-2);\n\t\t\t\n\t\t\t\\draw (7,-4) rectangle(16,-3);\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\\draw(6,-3.5) node[scale=1,align=center] {$(i+1,j+1)$\\\\$k$-coding\\\\micro-slice};\n\t\t\t\n\t\t\t\\draw(7.5,-3.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\\draw(8.5,-3.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\\draw(9.5,-3.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(10.5,-3.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\\draw(11.5,-3.5) node[scale=1] {$c_{j+1}^2$};\n\t\t\t\\draw(12.5,-3.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\\draw(13.5,-3.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\\draw(14.5,-3.5) node[scale=0.75] {$\\dots$};\n\t\t\t\\draw(15.5,-3.5) node[scale=1] {$c_{i+1}^1$};\n\t\t\t\n\t\t\t\\draw[->,>=latex] (11.5,-4.5) -- (11.5,-4.1);\n\t\t\t\\draw(11.5,-4.7) node {$k$};\n\t\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\\end{tikzpicture}\n\t\t}\n\t\t\\caption{Columns allowed, for $(i,j)$ in the orbit of a good pair. 
Here $c^1_{i-3}=c^2_{j-3}$ and $c^1_{i-2}=c^2_{j-2}$, forming buffers in the code meso-tile, represented as hatched squares.}\n\t\t\\label{columns1}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[t]{0.3\\textwidth}\n\t\t\\centering\n\t\t\\resizebox{\\columnwidth}{!}{\n\t\t\t\\begin{tikzpicture}[scale=1,,rotate=-90,yscale=-1]\n\t\t\t\n\t\t\t\\clip[decorate,decoration={random steps,segment length=2pt,amplitude=3pt}] (-0.8, -0.5) rectangle (17.8,3);\n\t\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-3cm]\n\t\t\t\\fill[color=black!20] (0,0) rectangle (1,1);\n\t\t\t\\fill[color=black!20] (13,0) rectangle (14,1);\n\t\t\t\n\t\t\t\\fill[color=black!10] (1,0) rectangle (9,1);\n\t\t\t\n\t\t\t\\fill[color=black!5] (9,0) rectangle (13,1);\n\t\t\t\n\t\t\t\\draw (0,0) rectangle (15,1);\n\t\t\t\\foreach \\k in {1,...,14}\n\t\t\t{\n\t\t\t\t\\draw (\\k,1)--(\\k,0);\n\t\t\t};\n\t\t\t\n\t\t\t\\draw[very thick] (1,1)--(1,0);\n\t\t\t\\draw[very thick] (5,1)--(5,0);\n\t\t\t\\draw[very thick] (9,1)--(9,0);\n\t\t\t\\draw[very thick] (13,1)--(13,0);\n\t\t\t\n\t\t\t\\draw (0.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (1.5,0.5) node[scale=0.6] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (2.5,0.5) node[scale=0.6] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (3.5,0.5) node {$...$};\n\t\t\t\\draw (4.5,0.5) node[scale=0.6] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (5.5,0.5) node[scale=0.6] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (6.5,0.5) node[scale=0.6] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (7.5,0.5) node {$...$};\n\t\t\t\\draw (8.5,0.5) node[scale=0.6] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (9.5,0.5) node[scale=0.6] {$(c_{j}^2)^{MN}$};\n\t\t\t\\draw (10.5,0.5) node[scale=0.6] {$(c_{j+1}^2)^{MN}$};\n\t\t\t\\draw (11.5,0.5) node {$...$};\n\t\t\t\\draw (12.5,0.5) node[scale=0.6] {$(c_{j-1}^2)^{MN}$};\n\t\t\t\\draw (13.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (14.5,0.5) node[scale=0.75] {code};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\\begin{scope}[xshift=12cm]\n\t\t\t\\fill[color=black!20] (0,0) rectangle (1,1);\n\t\t\t\\fill[color=black!20] (13,0) rectangle (14,1);\n\t\t\t\n\t\t\t\\fill[color=black!10] (1,0) rectangle (9,1);\n\t\t\t\n\t\t\t\\fill[color=black!5] (9,0) rectangle (13,1);\n\t\t\t\n\t\t\t\\draw (0,0) rectangle (15,1);\n\t\t\t\\foreach \\k in {1,...,14}\n\t\t\t{\n\t\t\t\t\\draw (\\k,1)--(\\k,0);\n\t\t\t};\n\t\t\t\n\t\t\t\\draw[very thick] (1,1)--(1,0);\n\t\t\t\\draw[very thick] (5,1)--(5,0);\n\t\t\t\\draw[very thick] (9,1)--(9,0);\n\t\t\t\\draw[very thick] (13,1)--(13,0);\n\t\t\t\n\t\t\t\\draw (0.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (1.5,0.5) node[scale=0.6] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (2.5,0.5) node[scale=0.6] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (3.5,0.5) node {$...$};\n\t\t\t\\draw (4.5,0.5) node[scale=0.6] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (5.5,0.5) node[scale=0.6] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (6.5,0.5) node[scale=0.6] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (7.5,0.5) node {$...$};\n\t\t\t\\draw (8.5,0.5) node[scale=0.6] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (9.5,0.5) node[scale=0.6] {$(c_{j}^2)^{MN}$};\n\t\t\t\\draw (10.5,0.5) node[scale=0.6] {$(c_{j+1}^2)^{MN}$};\n\t\t\t\\draw (11.5,0.5) node {$...$};\n\t\t\t\\draw (12.5,0.5) node[scale=0.6] {$(c_{j-1}^2)^{MN}$};\n\t\t\t\\draw (13.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (14.5,0.5) node[scale=0.75] {code};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-7.7cm,yshift=1cm]\n\t\t\t\\fill[color=black!20] (0,0) rectangle (1,1);\n\t\t\t\\fill[color=black!20] (13,0) rectangle 
(14,1);\n\t\t\t\n\t\t\t\\fill[color=black!10] (1,0) rectangle (9,1);\n\t\t\t\n\t\t\t\\fill[color=black!5] (9,0) rectangle (13,1);\n\t\t\t\n\t\t\t\\draw (0,0) rectangle (15,1);\n\t\t\t\\foreach \\k in {1,...,14}\n\t\t\t{\n\t\t\t\t\\draw (\\k,1)--(\\k,0);\n\t\t\t};\n\t\t\t\n\t\t\t\\draw[very thick] (1,1)--(1,0);\n\t\t\t\\draw[very thick] (5,1)--(5,0);\n\t\t\t\\draw[very thick] (9,1)--(9,0);\n\t\t\t\\draw[very thick] (13,1)--(13,0);\n\t\t\t\n\t\t\t\\draw (0.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (1.5,0.5) node[scale=0.65] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (2.5,0.5) node[scale=0.65] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (3.5,0.5) node {$...$};\n\t\t\t\\draw (4.5,0.5) node[scale=0.65] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (5.5,0.5) node[scale=0.65] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (6.5,0.5) node[scale=0.65] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (7.5,0.5) node {$...$};\n\t\t\t\\draw (8.5,0.5) node[scale=0.65] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (9.5,0.5) node[scale=0.65] {$(c_{j}^2)^{MN}$};\n\t\t\t\\draw (10.5,0.5) node[scale=0.65] {$(c_{j+1}^2)^{MN}$};\n\t\t\t\\draw (11.5,0.5) node {$...$};\n\t\t\t\\draw (12.5,0.5) node[scale=0.65] {$(c_{j-1}^2)^{MN}$};\n\t\t\t\\draw (13.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (14.5,0.5) node[scale=0.75] {code};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\\begin{scope}[xshift=7.3cm,yshift=1cm]\n\t\t\t\\fill[color=black!20] (0,0) rectangle (1,1);\n\t\t\t\\fill[color=black!20] (13,0) rectangle (14,1);\n\t\t\t\n\t\t\t\\fill[color=black!10] (1,0) rectangle (9,1);\n\t\t\t\n\t\t\t\\fill[color=black!5] (9,0) rectangle (13,1);\n\t\t\t\n\t\t\t\\draw (0,0) rectangle (15,1);\n\t\t\t\\foreach \\k in {1,...,14}\n\t\t\t{\n\t\t\t\t\\draw (\\k,1)--(\\k,0);\n\t\t\t};\n\t\t\t\n\t\t\t\\draw[very thick] (1,1)--(1,0);\n\t\t\t\\draw[very thick] (5,1)--(5,0);\n\t\t\t\\draw[very thick] (9,1)--(9,0);\n\t\t\t\\draw[very thick] (13,1)--(13,0);\n\t\t\t\n\t\t\t\\draw (0.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (1.5,0.5) node[scale=0.65] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (2.5,0.5) node[scale=0.65] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (3.5,0.5) node {$...$};\n\t\t\t\\draw (4.5,0.5) node[scale=0.65] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (5.5,0.5) node[scale=0.65] {$(c_{i}^1)^{MN}$};\n\t\t\t\\draw (6.5,0.5) node[scale=0.65] {$(c_{i+1}^1)^{MN}$};\n\t\t\t\\draw (7.5,0.5) node {$...$};\n\t\t\t\\draw (8.5,0.5) node[scale=0.65] {$(c_{i-1}^1)^{MN}$};\n\t\t\t\\draw (9.5,0.5) node[scale=0.65] {$(c_{j}^2)^{MN}$};\n\t\t\t\\draw (10.5,0.5) node[scale=0.65] {$(c_{j+1}^2)^{MN}$};\n\t\t\t\\draw (11.5,0.5) node {$...$};\n\t\t\t\\draw (12.5,0.5) node[scale=0.65] {$(c_{j-1}^2)^{MN}$};\n\t\t\t\\draw (13.5,0.5) node[scale=0.75] {border};\n\t\t\t\\draw (14.5,0.5) node[scale=0.75] {code};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\\end{tikzpicture}\n\t\t}\n\t\t\\caption{Faulty alignment of adjacent columns.}\n\t\t\\label{fig:aligned}\n\t\\end{subfigure}\n\t\\caption{The generic construction.}\n\t\\label{generic}\n\\end{figure}\n\n\n\nWe say that two legally adjacent columns are \\emph{aligned} if they are subdivided into ordered macro-slices exactly on the same lines. 
We say that two adjacent and aligned columns are \emph{synchronized} if any $(i,j)$-ordered macro-slice of the first one is horizontally followed by an $(i+1,j+1)$-ordered macro-slice in the second one.

\begin{proposition}
	\label{propapproxaligned}
	In this construction, two legally adjacent columns are aligned up to a vertical translation of size at most $2|C^1|-1$ of one of the columns.
\end{proposition}

\begin{proof}
	Suppose that two columns, call them $K_1$ and $K_2$, can be legally juxtaposed while not being aligned, even after vertically shifting one of them by up to $2|C^1|-1$ elements. Then one of the border meso-slices of $K_1$ has at least $2|C^1|$ vertically consecutive elements that are horizontally followed by something that is not a border meso-slice in $K_2$ (see \cref{fig:aligned}). Since $2|C^1| < MN$, the length of a meso-slice, at least $|C^1|$ successive elements of the border meso-slice in $K_1$ are horizontally followed by elements that are part of the same meso-slice in $K_2$. If this is a code meso-slice, simply consider the other border meso-slice of $K_1$ (the first you can find, above or below, before repeating the pattern cyclically): that one must be in contact with a $c^1_i$ or $c^2_j$ meso-slice instead.
	Either way, we obtain that a border meso-slice has at least $|C^1|$ successive elements that are horizontally followed by some $t$ meso-slice made of a single element $t$. Hence, if juxtaposing $K_1$ and $K_2$ this way is legal, then in $H$ all the elements of $C^1$ lead to $t$, i.e.\ $t$ is an attractive vertex. Either this is forbidden, or the ``reverse'' reasoning, where we focus on the borders of $K_2$, proves that there is also an element $p$, used in a $C^i$ slice ($i \in \{1,2\}$) of $K_1$, that leads to every element of $C^1$; that is, $C^1$ has a repulsive vertex in $C^1 \cup C^2$. Condition C forbids any graph that has both, so we reach a contradiction, and the proposition follows.
\end{proof}

\begin{proposition}
	\label{propsync}
	In this construction, two legally adjacent columns are always aligned and synchronized.
\end{proposition}

\begin{proof}
	\cref{propapproxaligned} states that two adjacent columns $K_1$ and $K_2$ are always approximately aligned, up to a vertical translation of size at most $2|C^1|-1$. If the two columns are nevertheless shifted, then any meso-slice of the $C^1$ slices of $K_1$ (such a meso-slice consists only of the repetition of some $c^1_i$) is horizontally followed by two different meso-slices in $K_2$. Since they are different, at least one of them is not $c^1_{i+1}$ but some $c^1_{i+k}$, $k \in \{2, ..., |C^1|\}$. This holds with the same $k$ for all values of $i$, notably because an ordered macro-slice contains two successive $C^1$ slices, so all meso-slices representing $c^1_i$ are repeated twice -- there is no problem with the meso-slices at the extremities. This contradicts our assumption that $C^1$ has no uniform shortcut. Hence there is no vertical shift at all between two consecutive columns, and our construction ensures that two adjacent columns are always aligned.
	
	It is easy to see that a meso-slice made only of $c^1_i$ in column $K_1$ is horizontally followed, because the columns are aligned, by a meso-slice made only of $c^1_{i+k}$ in column $K_2$, for some $k \in \{0, \dots, |C^1|-1\}$. This $k$ is once again independent of $i$ because, inside a macro-slice, meso-slices respect the order of the cycle $C^1$. But because $C^1$ has no uniform shortcut, we must have $k=1$. The reasoning is the same for the $C^2$ slice, using the fact that $C^2$ has no uniform shortcut either. Hence our columns are synchronized.
\end{proof}
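Both proofs rest on the absence of uniform shortcuts. As a purely illustrative aid (not part of the construction), this property can be tested mechanically; the Python sketch below abstracts the graph as a set of directed edges and reads ``uniform shortcut of length $j \neq 1$'' for a cycle $(c_0, \dots, c_{n-1})$ as a family of edges $(c_i, c_{i+j})$, one for every index $i$ taken modulo $n$, which is how the notion is used here.

\begin{verbatim}
# Illustrative sketch. A cycle is a list of vertices, the graph a set of
# directed edges; a uniform shortcut of length j (j != 1) is read as:
# (c[i], c[(i+j) % n]) is an edge for every index i.
def has_uniform_shortcut(cycle, edges, j):
    n = len(cycle)
    return all((cycle[i], cycle[(i + j) % n]) in edges for i in range(n))

def uniform_shortcut_lengths(cycle, edges):
    n = len(cycle)
    return [j for j in range(-n + 1, n)
            if j != 1 and has_uniform_shortcut(cycle, edges, j)]
\end{verbatim}

In particular, a loop on every vertex of the cycle is a uniform shortcut of length $0$, and a reversed copy of every cycle edge is one of length $-1$; these are the two special cases invoked repeatedly in the case analysis below.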
With these properties, we have ensured that our structure is rigid: our ordered macro-slices are aligned and synchronized. The last fact to check is the transmission of information, expressed by the following proposition:

\begin{proposition}
	\label{proptransmission}
	In this construction, an $(i,j)$-ordered $(k,l)$-coding macro-slice is horizontally followed by an $(i+1,j+1)$-ordered $(k,l)$-coding macro-slice, except in two situations:
	\begin{itemize}
		\item if $c^1_i = c^2_j$, we can have a $(k,l)$-coding macro-slice followed by a $(k^\prime,l)$-coding macro-slice;
		
		\item if $c^1_i \neq c^2_j$ but $c^1_{i-1} = c^2_{j-1}$, we can have a $(k,l)$-coding macro-slice followed by a $(k,l^\prime)$-coding macro-slice.
	\end{itemize}
\end{proposition}

\begin{proof}
	The exceptions are due to an earlier remark: if $c^1_i = c^2_j$, then the code meso-slice contains no main-coded tile, so its value of $k$ is actually irrelevant. Similarly, if $c^1_i \neq c^2_j$ but $c^1_{i-1} = c^2_{j-1}$, the code meso-slice contains no side-coded tile, so its value of $l$ is irrelevant.
	
	For the rest of the proof, it is already clear from \cref{propsync} that an $(i,j)$-ordered macro-slice is horizontally followed by an $(i+1,j+1)$-ordered macro-slice; the only part left to study is the coding part.
	
	Now, consider two horizontally adjacent coding micro-slices. By synchronization, one is an $(i,j)$ micro-slice, and the other an $(i+1,j+1)$ micro-slice. Since $\mathcal{G}(H)$ verifies condition C, and in particular has no cross-bridge, we cannot have both an edge $(c^1_i,c^2_{j+1}) \in \vec{E}$ and an edge $(c^2_j,c^1_{i+1}) \in \vec{E}$, except if one of the two micro-slices is a buffer.
Therefore, either one of them is a buffer, or they both are $p$-coding for the same $p$.\n\t\n\tThis is enough to prove the proposition.\n\\end{proof}\n\n\\begin{figure}[t]\n\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\\centering\n\t\t\\scalebox{1.5}{\n\t\t\t\\begin{tikzpicture}\n\t\t\t\\node[draw, circle] (b) {$b$};\n\t\t\t\\node[draw, circle] (a) [below left = 0.5cm and 0.25cm of b] {$a$};\n\t\t\t\\node[draw, circle] (c) [below right = 0.5cm and 0.25cm of b] {$c$};\n\t\t\t\n\t\t\t\\draw[->, >=latex] (a) to[bend left=20] (b);\n\t\t\t\\draw[->, >=latex] (b) to[bend left=20] (c);\n\t\t\t\\draw[->, >=latex] (c) to[bend left=20] (a);\n\t\t\t\n\t\t\t\\draw[->, >=latex] (c) to[bend left=20] (b);\n\t\t\t\n\t\t\t\\draw[->, >=latex] (c) to [out=330,in=300,looseness=12] (c);\n\t\t\t\\end{tikzpicture}\n\t\t}\n\t\\end{subfigure}\n\t\\hfill\n\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\\centering\n\t\t\\scalebox{0.6}{\n\t\t\t\\begin{tikzpicture}\n\t\t\t\\clip[decorate,decoration={random steps,segment length=5pt,amplitude=2pt}] (-0.1, 0.1) rectangle (6.1,-9.1);\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-1cm]\n\t\t\t\\draw (0,0) rectangle (3,-3);\n\t\t\t\\draw (3,0) rectangle (6,-3);\n\t\t\t\\draw (6,0) rectangle (9,-3);\n\t\t\t\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (0,-2) rectangle (2,-3);\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (3,0) rectangle (5,-1);\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (6,-1) rectangle (8,-2);\n\t\t\t\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (2,0) rectangle (3,-3);\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (5,0) rectangle (6,-3);\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (8,0) rectangle (9,-3);\n\t\t\t\n\t\t\t\\draw (0.5,-0.5) node {$a$};\n\t\t\t\\draw (0.5,-1.5) node {$a$};\n\t\t\t\\draw (0.5,-2.5) node {$c$};\n\t\t\t%\n\t\t\t\\draw (1.5,-0.5) node {$b$};\n\t\t\t\\draw (1.5,-1.5) node {$b$};\n\t\t\t\\draw (1.5,-2.5) node {$c$};\n\t\t\t%\n\t\t\t\\draw (2.5,-0.5) node {$c$};\n\t\t\t\\draw (2.5,-1.5) node {$c$};\n\t\t\t\\draw (2.5,-2.5) node {$c$};\n\t\t\t\n\t\t\t\\draw (3.5,-0.5) node {$c$};\n\t\t\t\\draw (3.5,-1.5) node {$a$};\n\t\t\t\\draw (3.5,-2.5) node {$a$};\n\t\t\t%\n\t\t\t\\draw (4.5,-0.5) node {$c$};\n\t\t\t\\draw (4.5,-1.5) node {$b$};\n\t\t\t\\draw (4.5,-2.5) node {$b$};\n\t\t\t%\n\t\t\t\\draw (5.5,-0.5) node {$c$};\n\t\t\t\\draw (5.5,-1.5) node {$c$};\n\t\t\t\\draw (5.5,-2.5) node {$c$};\n\t\t\t\n\t\t\t\\draw (6.5,-0.5) node {$a$};\n\t\t\t\\draw (6.5,-1.5) node {$c$};\n\t\t\t\\draw (6.5,-2.5) node {$a$};\n\t\t\t%\n\t\t\t\\draw (7.5,-0.5) node {$b$};\n\t\t\t\\draw (7.5,-1.5) node {$c$};\n\t\t\t\\draw (7.5,-2.5) node {$b$};\n\t\t\t%\n\t\t\t\\draw (8.5,-0.5) node {$c$};\n\t\t\t\\draw (8.5,-1.5) node {$c$};\n\t\t\t\\draw (8.5,-2.5) node {$c$};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-2cm,yshift=-3cm]\n\t\t\t\\draw (0,0) rectangle (3,-3);\n\t\t\t\\draw (3,0) rectangle (6,-3);\n\t\t\t\\draw (6,0) rectangle (9,-3);\n\t\t\t\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (0,-2) rectangle (2,-3);\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (3,0) rectangle (5,-1);\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (6,-1) rectangle (8,-2);\n\t\t\t\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (2,0) rectangle (3,-3);\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (5,0) rectangle (6,-3);\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (8,0) 
rectangle (9,-3);\n\t\t\t\n\t\t\t\\draw (0.5,-0.5) node {$a$};\n\t\t\t\\draw (0.5,-1.5) node {$a$};\n\t\t\t\\draw (0.5,-2.5) node {$c$};\n\t\t\t%\n\t\t\t\\draw (1.5,-0.5) node {$b$};\n\t\t\t\\draw (1.5,-1.5) node {$b$};\n\t\t\t\\draw (1.5,-2.5) node {$c$};\n\t\t\t%\n\t\t\t\\draw (2.5,-0.5) node {$c$};\n\t\t\t\\draw (2.5,-1.5) node {$c$};\n\t\t\t\\draw (2.5,-2.5) node {$c$};\n\t\t\t\n\t\t\t\\draw (3.5,-0.5) node {$c$};\n\t\t\t\\draw (3.5,-1.5) node {$a$};\n\t\t\t\\draw (3.5,-2.5) node {$a$};\n\t\t\t%\n\t\t\t\\draw (4.5,-0.5) node {$c$};\n\t\t\t\\draw (4.5,-1.5) node {$b$};\n\t\t\t\\draw (4.5,-2.5) node {$b$};\n\t\t\t%\n\t\t\t\\draw (5.5,-0.5) node {$c$};\n\t\t\t\\draw (5.5,-1.5) node {$c$};\n\t\t\t\\draw (5.5,-2.5) node {$c$};\n\t\t\t\n\t\t\t\\draw (6.5,-0.5) node {$a$};\n\t\t\t\\draw (6.5,-1.5) node {$c$};\n\t\t\t\\draw (6.5,-2.5) node {$a$};\n\t\t\t%\n\t\t\t\\draw (7.5,-0.5) node {$b$};\n\t\t\t\\draw (7.5,-1.5) node {$c$};\n\t\t\t\\draw (7.5,-2.5) node {$b$};\n\t\t\t%\n\t\t\t\\draw (8.5,-0.5) node {$c$};\n\t\t\t\\draw (8.5,-1.5) node {$c$};\n\t\t\t\\draw (8.5,-2.5) node {$c$};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-3cm,yshift=-6cm]\n\t\t\t\\draw (0,0) rectangle (3,-3);\n\t\t\t\\draw (3,0) rectangle (6,-3);\n\t\t\t\\draw (6,0) rectangle (9,-3);\n\t\t\t\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (0,-2) rectangle (2,-3);\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (3,0) rectangle (5,-1);\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (6,-1) rectangle (8,-2);\n\t\t\t\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (2,0) rectangle (3,-3);\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (5,0) rectangle (6,-3);\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (8,0) rectangle (9,-3);\n\t\t\t\n\t\t\t\\draw (0.5,-0.5) node {$a$};\n\t\t\t\\draw (0.5,-1.5) node {$a$};\n\t\t\t\\draw (0.5,-2.5) node {$c$};\n\t\t\t%\n\t\t\t\\draw (1.5,-0.5) node {$b$};\n\t\t\t\\draw (1.5,-1.5) node {$b$};\n\t\t\t\\draw (1.5,-2.5) node {$c$};\n\t\t\t%\n\t\t\t\\draw (2.5,-0.5) node {$c$};\n\t\t\t\\draw (2.5,-1.5) node {$c$};\n\t\t\t\\draw (2.5,-2.5) node {$c$};\n\t\t\t\n\t\t\t\\draw (3.5,-0.5) node {$c$};\n\t\t\t\\draw (3.5,-1.5) node {$a$};\n\t\t\t\\draw (3.5,-2.5) node {$a$};\n\t\t\t%\n\t\t\t\\draw (4.5,-0.5) node {$c$};\n\t\t\t\\draw (4.5,-1.5) node {$b$};\n\t\t\t\\draw (4.5,-2.5) node {$b$};\n\t\t\t%\n\t\t\t\\draw (5.5,-0.5) node {$c$};\n\t\t\t\\draw (5.5,-1.5) node {$c$};\n\t\t\t\\draw (5.5,-2.5) node {$c$};\n\t\t\t\n\t\t\t\\draw (6.5,-0.5) node {$a$};\n\t\t\t\\draw (6.5,-1.5) node {$c$};\n\t\t\t\\draw (6.5,-2.5) node {$a$};\n\t\t\t%\n\t\t\t\\draw (7.5,-0.5) node {$b$};\n\t\t\t\\draw (7.5,-1.5) node {$c$};\n\t\t\t\\draw (7.5,-2.5) node {$b$};\n\t\t\t%\n\t\t\t\\draw (8.5,-0.5) node {$c$};\n\t\t\t\\draw (8.5,-1.5) node {$c$};\n\t\t\t\\draw (8.5,-2.5) node {$c$};\n\t\t\t\n\t\t\t\n\t\t\t\\draw (9,0) rectangle (12,-3);\n\t\t\t\n\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm,thick] (9,-1) rectangle (11,-2);\n\t\t\t\n\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (11,0) rectangle (12,-3);\n\t\t\t\n\t\t\t\\draw (9.5,-0.5) node {$a$};\n\t\t\t\\draw (9.5,-1.5) node {$c$};\n\t\t\t\\draw (9.5,-2.5) node {$a$};\n\t\t\t%\n\t\t\t\\draw (10.5,-0.5) node {$b$};\n\t\t\t\\draw (10.5,-1.5) node {$c$};\n\t\t\t\\draw (10.5,-2.5) node {$b$};\n\t\t\t%\n\t\t\t\\draw (11.5,-0.5) node {$c$};\n\t\t\t\\draw (11.5,-1.5) node {$c$};\n\t\t\t\\draw (11.5,-2.5) node 
{$c$};
			\end{scope}
			
			\draw[thick, rounded corners, inner sep=0.3mm] (3,0) rectangle (4,-9);
			
		\end{tikzpicture}
	}
	\end{subfigure}
	
	\caption{A Rauzy graph and several associated code meso-slices for $|\tau| = 3$. The slices shown here encode, horizontally and in this order, $\tau_3, \tau_1, \tau_2$ and $\tau_2$, the number being indicated by the location of the line of $c$'s. One can check that $\tau_1$ can be located to the left of $\tau_2$ in the encoding, using vertical constraints only, as depicted by the bold rectangle with rounded corners.}
	\label{codingblock}
\end{figure}

In the end, we proved that if we are able to find two cycles $C^1$ and $C^2$ complying with condition $C$, they are enough to build the construction we desire: the root of a full shift on $N$ elements.
Indeed, let $Z$ be the clopen set made of all the configurations with, at position $(0,0)$, the bottom of an $(i,j)$-ordered macro-slice with $c^1_{i-1} = c^2_{j-1}$ but $c^1_{i} \neq c^2_{j}$. Suppose that macro-slice is $(k,l)$-coding (with an irrelevant $l$). Map it, along with the $M-1$ macro-slices that follow horizontally (hence, map an $M \times KMN$ rectangle), onto $y_{(0,0)} = \tau_k$. By mapping the whole configuration similarly, $Z$ is homeomorphic to $Y$ with the required properties: $X_{H,V_Y}$ is therefore an $(M,KMN)$th root of $\tau^{\mathbbm{Z}^2}$.

Then, to encode only the configurations that are valid in a Wang shift $W$, we forbid the following additional vertical patterns:
\begin{itemize}
	\item the code meso-slices that would contain both $\tau_k$ as their main-coded tile and $\tau_l$ as their side-coded tile, if $\tau_k$ cannot be horizontally followed by $\tau_l$ in $W$;
	\item and the vertical succession of two ordered macro-slices whose code meso-slices would contain two main-coded tiles that cannot be vertically successive in $W$.
\end{itemize}
With this, we proved \cref{propcondC}: the set of vertical conditions obtained, which defines a one-dimensional SFT $V_W$, is such that $X_{H,V_W}$ is an $(M,KMN)$th root of $W$.

\subsection{Summary of the general construction for one strongly connected component}

We suppose that $H \subset \mathcal{A}^\mathbbm{Z}$ is a one-dimensional nearest-neighbor SFT such that its Rauzy graph does not verify condition $D$ and is made of only one SCC.

Note that $\mathcal{G}(H)$, since it does not verify condition $D$, contains at least one loopless vertex and one unidirectional edge.
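Both features are immediate to locate; as a purely illustrative Python sketch (with the graph abstracted, as before, by a vertex list and a set of directed edges):

\begin{verbatim}
# Illustrative sketch: the two features of G(H) guaranteed by the
# failure of condition D and used throughout the case analysis.
def loopless_vertices(vertices, edges):
    return [v for v in vertices if (v, v) not in edges]

def unidirectional_edges(edges):
    return [(u, v) for (u, v) in edges if (v, u) not in edges]
\end{verbatim}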
\begin{table}[b]
	\caption{Table of the main cases, each of them illustrated with an example (the $C^1$ on which we perform the generic construction is the main cycle indicated; the $C^2$ is in red).}
	\begin{center}
		\resizebox{\columnwidth}{!}{
			\begin{tabular}{|c|c|c||c|c||c|c|c|}
				\hline
				\multicolumn{3}{|c||}{Loops} & \multicolumn{5}{c|}{No loop}\\
				\hline
				& & & \multicolumn{2}{c||}{Bidirectional edges} & \multicolumn{3}{c|}{No bidirectional edge}\\
				\cline{4-8}
				
				\begin{tikzpicture}[scale=0.4]
				\def \radius {1.5cm};
				
				\node[draw, circle] (a) at (0:\radius) {};
				\node[draw, circle] (b) at (72:\radius) {};
				\node[draw, circle] (c) at (144:\radius) {};
				\node[draw, circle,scale=0.7] (d) at (216:\radius) {$v$};
				\node[draw, circle,scale=0.7] (e) at (288:\radius) {$w$};
				
				\draw (0,0) node {$C^1$};
				
				\draw[->, >=latex] (a) to[bend right=15] (b);
				\draw[->, >=latex] (b) to[bend right=15] (c);
				\draw[->, >=latex] (c) to[bend right=15] (d);
				\draw[->, >=latex] (d) to[bend right=15] (e);
				\draw[->, >=latex] (e) to[bend right=15] (a);
				
				\draw[->, >=latex] (b) to[bend right=15] (a);
				\draw[->, >=latex] (c) to[bend right=15] (b);
				
				\draw[->, >=latex,color=red] (e) to[out=308,in=268,looseness=6] (e);
				\end{tikzpicture}
				
				&
				
				\begin{tikzpicture}[scale=0.4]
				\def \radius {1.5cm};
				
				\node[draw, circle,scale=0.7] (a) at (0:\radius) {$v$};
				\node[draw, circle,scale=0.7] (b) at (72:\radius) {$w$};
				\node[draw, circle] (c) at (144:\radius) {};
				\node[draw, circle] (d) at (216:\radius) {};
				\node[draw, circle,scale=0.7] (e) at (288:\radius) {$u$};
				
				\draw (0,0) node {$C^1$};
				
				\draw[->, >=latex,color=red] (a) to[bend right=15] (b);
				\draw[->, >=latex] (b) to[bend right=15] (c);
				\draw[->, >=latex] (c) to[bend right=15] (d);
				\draw[->, >=latex] (d) to[bend right=15] (e);
				\draw[->, >=latex] (e) to[bend right=15] (a);
				
				\draw[->, >=latex,color=red] (b) to[bend right=15] (a);
				\draw[->, >=latex] (c) to[bend right=15] (b);
				
				\draw[->, >=latex] (b) to[out=92,in=52,looseness=6] (b);
				\end{tikzpicture}
				
				&
				
				\begin{tikzpicture}[scale=0.4]
				\def \radius {1.5cm};
				
				\node[draw, circle,scale=0.7] (a) at (0:\radius) {$v$};
				\node[draw, circle,scale=0.7] (b) at (72:\radius) {$a$};
				\node[draw, circle] (c) at (144:\radius) {};
				\node[draw, circle] (d) at (216:\radius) {};
				\node[draw, circle,scale=0.7] (e) at (288:\radius) {$u$};
				
				\draw (0,0) node {$C^1$};
				
				\draw[->, >=latex] (a) to[bend right=15] (b);
				\draw[->, >=latex] (b) to[bend right=15] (c);
				\draw[->, >=latex] (c) to[bend right=15] (d);
				\draw[->, >=latex] (d) to[bend right=15] (e);
				\draw[->, >=latex] (e) to[bend right=15] (a);
				
				\draw[->, >=latex] (b) to[bend right=15] (a);
				\draw[->, >=latex] (c) to[bend right=15] (b);
				
				\draw[->, >=latex,color=red] (a) to[out=20,in=-20,looseness=6] (a);
				\draw[->, >=latex] (c) to[out=164,in=124,looseness=6] (c);
				\draw[->, >=latex] (d) to[out=236,in=196,looseness=6] (d);
				\draw[->, >=latex] (e) to[out=308,in=268,looseness=6] (e);
				\end{tikzpicture}
				
				&
				
				\begin{tikzpicture}[scale=0.4]
				\def \radius {1.5cm};
				
				\node[draw, circle,scale=0.7] (a) at (0:\radius) {$v$};
				\node[draw, circle,scale=0.7] (b) at (72:\radius) {$w$};
				\node[draw, circle] (c) at (144:\radius) {};
				\node[draw, circle] (d) at (216:\radius) {};
				\node[draw, circle,scale=0.7] (e) at (288:\radius) {$u$};
				
				\draw (0,0) node {$C^1$};
				
				\draw[->, >=latex,color=red] (a) to[bend right=15] (b);
				\draw[->, >=latex] (b) to[bend right=15] (c);
				\draw[->, >=latex] (c) to[bend right=15] (d);
				\draw[->, >=latex] (d) to[bend right=15] (e);
				\draw[->, >=latex] (e) to[bend right=15]
(a);\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex,color=red] (b) to[bend right=15] (a);\n\t\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (b);\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\n\t\t\t\t&\n\t\t\t\t\n\t\t\t\t\\begin{tikzpicture}[scale=0.4,rotate=-90]\n\t\t\t\t\\def \\radius {1.5cm};\n\t\t\t\t\n\t\t\t\t\\node[draw, circle,scale=0.7] (a) at (0:\\radius) {$v$};\n\t\t\t\t\\node[draw, circle] (b) at (72:\\radius) {};\n\t\t\t\t\\node[draw, circle] (c) at (144:\\radius) {};\n\t\t\t\t\\node[draw, circle] (d) at (216:\\radius) {};\n\t\t\t\t\\node[draw, circle] (e) at (288:\\radius) {};\n\t\t\t\t\n\t\t\t\t\\node[draw, circle,scale=0.7] (f) at (3.5,0) {$w$};\n\t\t\t\t\n\t\t\t\t\\draw (0,0) node {$C^1$};\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex] (a) to[bend right=15] (b);\n\t\t\t\t\\draw[->, >=latex] (b) to[bend right=15] (c);\n\t\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (d);\n\t\t\t\t\\draw[->, >=latex] (d) to[bend right=15] (e);\n\t\t\t\t\\draw[->, >=latex] (e) to[bend right=15] (a);\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex,color=red] (a) to[bend right=15] (f);\n\t\t\t\t\\draw[->, >=latex,color=red] (f) to[bend right=15] (a);\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\n\t\t\t\t&\n\t\t\t\t\n\t\t\t\t\\begin{tikzpicture}[scale=0.4,rotate=-90]\n\t\t\t\t\\def \\radius {1.5cm};\n\t\t\t\t\n\t\t\t\t\\node[draw, circle] (a) at (0:\\radius) {};\n\t\t\t\t\\node[draw, circle] (b) at (72:\\radius) {};\n\t\t\t\t\\node[draw, circle] (c) at (144:\\radius) {};\n\t\t\t\t\\node[draw, circle] (d) at (216:\\radius) {};\n\t\t\t\t\\node[draw, circle] (e) at (288:\\radius) {};\n\t\t\t\t\n\t\t\t\t\\node[draw, circle] (f) at (-2.5,0.8) {};\n\t\t\t\t\\node[draw, circle] (g) at (-2.5,-0.8) {};\n\t\t\t\t\n\t\t\t\t\\draw (0,0) node {$C^1$};\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex,color=red] (a) to[bend right=15] (b);\n\t\t\t\t\\draw[->, >=latex,color=red] (b) to[bend right=15] (c);\n\t\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (d);\n\t\t\t\t\\draw[->, >=latex,color=red] (d) to[bend right=15] (e);\n\t\t\t\t\\draw[->, >=latex,color=red] (e) to[bend right=15] (a);\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex,color=red] (c) to[bend right=15] (f);\n\t\t\t\t\\draw[->, >=latex,color=red] (f) to[bend right=15] (g);\n\t\t\t\t\\draw[->, >=latex,color=red] (g) to[bend right=15] (d);\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\n\t\t\t\t&\n\t\t\t\t\n\t\t\t\t\\begin{tikzpicture}[scale=0.4,rotate=-90]\n\t\t\t\t\\def \\radius {1.5cm};\n\t\t\t\t\n\t\t\t\t\\node[draw, circle] (a) at (0:\\radius) {};\n\t\t\t\t\\node[draw, circle] (b) at (72:\\radius) {};\n\t\t\t\t\\node[draw, circle] (c) at (144:\\radius) {};\n\t\t\t\t\\node[draw, circle] (d) at (216:\\radius) {};\n\t\t\t\t\\node[draw, circle] (e) at (288:\\radius) {};\n\t\t\t\t\n\t\t\t\t\\node[draw, circle] (f) at (-2.5,1) {};\n\t\t\t\t\\node[draw, circle] (g) at (-2.5,-1) {};\n\t\t\t\t\n\t\t\t\t\\draw (0,0) node {$C^1$};\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex,color=red] (a) to[bend right=15] (b);\n\t\t\t\t\\draw[->, >=latex] (b) to[bend right=15] (c);\n\t\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (d);\n\t\t\t\t\\draw[->, >=latex] (d) to[bend right=15] (e);\n\t\t\t\t\\draw[->, >=latex,color=red] (e) to[bend right=15] (a);\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex,color=red] (b) to[bend right=15] (f);\n\t\t\t\t\\draw[->, >=latex,color=red] (f) to[bend right=15] (g);\n\t\t\t\t\\draw[->, >=latex,color=red] (g) to[bend right=15] (e);\n\t\t\t\t\n\t\t\t\t\\draw[->, >=latex] (c) to[bend right=15] (g);\n\t\t\t\t\\end{tikzpicture}\n\t\t\t\t\n\t\t\t\t&\n\t\t\t\t\n\t\t\t\t\\begin{tikzpicture}[scale=0.4]\n\t\t\t\t\\def 
\radius {1.5cm};
				
				\node[draw, circle] (a) at (0:\radius) {};
				\node[draw, circle] (b) at (72:\radius) {};
				\node[draw, circle] (c) at (144:\radius) {};
				\node[draw, circle] (d) at (216:\radius) {};
				\node[draw, circle] (e) at (288:\radius) {};
				
				\node[draw, circle] (f) at (2.5,-0.8) {};
				\node[draw, circle] (g) at (2.5,0.8) {};
				
				\draw (0,0) node {$C^1$};
				
				\draw[->, >=latex] (a) to[bend right=15] (b);
				\draw[->, >=latex] (b) to[bend right=15] (c);
				\draw[->, >=latex] (c) to[bend right=15] (d);
				\draw[->, >=latex] (d) to[bend right=15] (e);
				\draw[->, >=latex] (e) to[bend right=15] (a);
				
				\draw[->, >=latex,color=red] (a) to[bend right=15] (f);
				\draw[->, >=latex,color=red] (f) to[bend right=15] (g);
				\draw[->, >=latex,color=red] (g) to[bend right=15] (a);
				\end{tikzpicture}
				
				\\
				
				\hline
				
				Case 1.1 & Case 1.2 & Case 1.3 & Case 2.1 & Case 2.2 & Case 3.1 & Case 3.2 & Case 3.3\\
				
				\hline
			\end{tabular}
		}
	\end{center}
	\label{tablecases}
\end{table}

The idea of the proof of \cref{th:root} is to classify the possible graphs into various cases. In each case, one has a standard procedure to find convenient $C^1$ and $C^2$ inside any graph to perform the generic construction from \cref{subsec:core}. Of course, in some specific cases, we will not meet condition $C$ even though $H$ does not verify condition $D$; we will then adapt the generic construction to these specific situations as they arise.

The division into cases is presented in a disjunctive fashion, see \cref{tablecases}:

Is there a loop on a vertex?
\begin{itemize}
	\item If YES: Is there a unidirectional edge $(v,w) \in \vec{E}$ so that $v$ is loopless and $w$ has a loop (or the reverse, which is similar)?
	\begin{itemize}
		\item If YES: This is Case 1.1. We can find $C^1$ and $C^2$ that check condition $C$, with the exception of the possible presence of both an attractive and a repulsive vertex. However, \cref{propapproxaligned} is still verified, because by choosing the smallest possible cycle containing such $v$ and $w$, $v$ has in-degree $1$, a property that allows for an easy synchronization.
		
		\item If NO: Do unidirectional edges have loopless vertices?
		\begin{itemize}
			\item If YES: This is Case 1.2. We can find $C^1$ and $C^2$ that check condition $C$, possibly by reducing to a situation encountered in Case 2.2.
			\item If NO: This is Case 1.3. This one generates some exceptional graphs with 4 or 5 vertices that do not check condition $C$ and must be treated separately. However, technical considerations prove that our generic construction still works.
		\end{itemize}
	\end{itemize}
	\item If NO: Is there a bidirectional edge?
	\begin{itemize}
		\item If YES: Is there a cycle of size at least 3 that contains a bidirectional edge?
		\begin{itemize}
			\item If YES: This is Case 2.1. We can find $C^1$ and $C^2$ that verify condition $C$ rather easily.
			\item If NO: This is Case 2.2, in which checking condition $C$ is also easy.
		\end{itemize}
		\item If NO: Is there a minimal cycle with a path between two \emph{different} elements of it, say $c^1_0$ and $c^1_k$, that does not belong to the cycle?
		\begin{itemize}
			\item If YES: Can we find such a path of length different from $k$?
			\begin{itemize}
				\item If YES: This is Case 3.1, a rather tedious case, but we can find cycles $C^1$ and $C^2$ that verify condition $C$ nonetheless.
				\item If NO: This is Case 3.2, which relies heavily on the fact that $\mathcal{G}(H)$ is not of state-split cycle type to find cycles that verify condition $C$.
			\end{itemize}
			\item If NO: This is Case 3.3, an easy case in which to find cycles that verify condition $C$.
		\end{itemize}
	\end{itemize}
\end{itemize}

In the subsections that follow, we define two cycles $C^1$ and $C^2$ fitting condition C as closely as possible, and we prove, whenever condition C fails, that the propositions from \cref{subsec:core} still hold nonetheless. We denote the vertices of $C^1$ by $c^1_i$ and those of $C^2$ by $c^2_j$, with $i \in \{0,\dots,|C^1|-1\}$ and $j \in \{0,\dots,|C^2|-1\}$.
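As a reading aid, the first levels of this disjunction can be written out mechanically; the following Python sketch is purely illustrative (the finer splits 2.1/2.2 and 3.1/3.2/3.3 require the cycle and path searches described in the corresponding subsections, and are not reproduced here).

\begin{verbatim}
# Reading aid only: the upper levels of the case disjunction, on a graph
# given by a vertex list and a set of directed edges.
def classify(vertices, edges):
    loops = {v for v in vertices if (v, v) in edges}
    uni = [(u, v) for (u, v) in edges if (v, u) not in edges]
    if loops:
        # Case 1.1: a unidirectional edge with exactly one looped endpoint.
        if any((u in loops) != (v in loops) for (u, v) in uni):
            return "Case 1.1"
        # Case 1.2: all unidirectional edges have loopless vertices.
        if all(u not in loops and v not in loops for (u, v) in uni):
            return "Case 1.2"
        return "Case 1.3"
    if any((v, u) in edges for (u, v) in edges):
        return "Case 2 (2.1 or 2.2)"
    return "Case 3 (3.1, 3.2 or 3.3)"
\end{verbatim}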
\subsection{Case 1}
\label{subsec:case1}

We suppose that $\mathcal{G}(H)$ contains a loop.

\textbf{Case 1.1:} we can find a unidirectional edge so that the first vertex is loopless and the second has a loop (or the opposite, for which the construction is similar and omitted).

\begin{center}
	\begin{tikzpicture}[scale=0.4]
		\def \radius {1.5cm};
		
		\node[draw, circle] (a) at (0:\radius) {};
		\node[draw, circle] (b) at (72:\radius) {};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle,scale=0.7] (d) at (216:\radius) {$v$};
		\node[draw, circle,scale=0.7] (e) at (288:\radius) {$w$};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex] (a) to[bend right=15] (b);
		\draw[->, >=latex] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex] (d) to[bend right=15] (e);
		\draw[->, >=latex] (e) to[bend right=15] (a);
		
		\draw[->, >=latex] (b) to[bend right=15] (a);
		\draw[->, >=latex] (c) to[bend right=15] (b);
		
		\draw[->, >=latex,color=red] (e) to[out=308,in=268,looseness=6] (e);
	\end{tikzpicture}
\end{center}

Take the shortest possible cycle containing such an edge, which exists since $\mathcal{G}(H)$ is strongly connected. Call $v$ the loopless vertex and $w$ the vertex with a loop. Naming that cycle $C^1$ with $c^1_0 = w$, and setting $C^2 = \{w\}$, we have to check that they fulfill the conditions of \cref{subsec:core}.

$(v,w)$ is unidirectional and $v$ is loopless. Note that no edge can go from $w$ to any vertex that is not $w$ or $c^1_1$, else we could find a shorter cycle with the same characteristics. Similarly, $v$ has in-degree $1$. Hence:
\begin{itemize}
	\item $|C^1| \geq 3$ since $(w,v) \notin \vec{E}$;
	
	\item $C^1$ and $C^2$ contain a good pair, namely $(c^1_1,w)$;
	
	\item $C^2$ has no uniform shortcut because it is made of one single vertex. $C^1$ has no uniform shortcut because of what precedes about $w$ and because $v$ is loopless;
	
	\item If there was a cross-bridge between $C^1$ and $C^2$, it would mean there are two edges $(c^1_i,w)$ and $(w,c^1_{i+1}) \in \vec{E}$ with $c^1_i \neq w \neq c^1_{i+1}$, which is also impossible because of what precedes about $w$;
	
	\item Here, there can actually be attractive and repulsive vertices for $C^1$, which endangers \cref{propapproxaligned}. However, the only vertex that has an edge going to $v$ is the previous one in the cycle $C^1$, call it $u := c^1_{-2}$. As such, in any column, the $v$ meso-slice must be next to the $u$ meso-slice of the previous column, since no other block of $u$ of size $MN$ can be found in said previous column. Hence two consecutive columns are always aligned and we can make the generic construction work with no restriction on attractive and repulsive vertices: \cref{propapproxaligned} still holds, albeit for reasons different from the ones in \cref{subsec:core}.
\end{itemize}
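The cross-bridge condition recurs in every case below, so it is worth keeping its mechanical form in mind. The following Python sketch is purely illustrative and reads the notion the way it is used here, namely as a pair of edges $(c^1_i,c^2_{j+1})$ and $(c^2_j,c^1_{i+1})$ with indices taken cyclically; the side conditions of the original definition, given earlier in the paper, are omitted.

\begin{verbatim}
# Illustrative sketch: candidate cross-bridges between two cycles c1 and
# c2 (lists of vertices, indices cyclic), on a set of directed edges.
def cross_bridges(c1, c2, edges):
    n, m = len(c1), len(c2)
    return [(i, j) for i in range(n) for j in range(m)
            if (c1[i], c2[(j + 1) % m]) in edges
            and (c2[j], c1[(i + 1) % n]) in edges]
\end{verbatim}

In Case 1.1, where $C^2 = \{w\}$, this boils down to the pair $(c^1_i,w)$, $(w,c^1_{i+1})$ ruled out above.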
\textbf{Case 1.2:} all unidirectional edges have loopless vertices.

\begin{center}
	\begin{tikzpicture}[scale=0.4]
		\def \radius {1.5cm};
		
		\node[draw, circle,scale=0.7] (a) at (0:\radius) {$v$};
		\node[draw, circle,scale=0.7] (b) at (72:\radius) {$w$};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle] (d) at (216:\radius) {};
		\node[draw, circle,scale=0.7] (e) at (288:\radius) {$u$};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex,color=red] (a) to[bend right=15] (b);
		\draw[->, >=latex] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex] (d) to[bend right=15] (e);
		\draw[->, >=latex] (e) to[bend right=15] (a);
		
		\draw[->, >=latex,color=red] (b) to[bend right=15] (a);
		\draw[->, >=latex] (c) to[bend right=15] (b);
		
		\draw[->, >=latex] (b) to[out=92,in=52,looseness=6] (b);
	\end{tikzpicture}
\end{center}

Since $\mathcal{G}(H)$ is in Case 1 and Subcase 1.2, it contains loops and its unidirectional edges have loopless vertices. Hence it contains bidirectional edges: a vertex carrying a loop has, by strong connectivity, incident edges, and none of them can be unidirectional, since unidirectional edges only have loopless vertices here. Moreover, $\mathcal{G}(H)$ cannot contain only bidirectional edges, else it would verify condition $D$. Hence it contains both unidirectional and bidirectional edges, and by strong connectivity, we can find a (possibly self-intersecting) cycle containing both. Therefore we can find a unidirectional edge followed by a bidirectional edge.

We name $u, v, w$ three successive vertices in the graph so that $(u,v), (v,w)$ and $(w,v) \in \vec{E}$, and $(v,u) \notin \vec{E}$ (so, by the Case 1.2 assumption, $u$ and $v$ have no loop). Two situations are possible: either there is a path from $w$ to $u$ that does not go through $v$, and we obtain a cycle containing both a unidirectional and a bidirectional edge; or there is not.
In that second case, we consider a path from $v$ to $u$:
\begin{itemize}
	\item either it contains no bidirectional edge, and the resulting graph is made of one cycle with unidirectional edges and the bidirectional edge $(v,w)$: this is treated just as Case 2.2 -- with the addition that $w$ has a loop, which does not change the reasoning;
	\item or the path from $v$ to $u$ contains a bidirectional edge; then, concatenated to $(u,v)$, it forms a cycle with both a unidirectional and a bidirectional edge.
\end{itemize}
Iteratively reducing the cycle obtained to the shortest one possible, we end up either in a situation that reduces to Case 2.2, or with a cycle as in the figure above: it has a unidirectional edge followed by a bidirectional edge, and contains no shorter cycle that would fit.

Naming that cycle $C^1$ with $c^1_0 = v$, and defining $C^2 = \{v, w\}$, we have to check that they fulfill the conditions of the generic construction. Note that $w$ cannot lead to any vertex except $c^1_2$, $v$, and possibly $w$; else we could find a cycle shorter than $C^1$ that has the same properties. Note, also, that $v \neq c^1_2$, else we would reduce to Case 2.2. Hence:
\begin{itemize}
	\item $|C^1| \geq 3$;
	
	\item $C^1$ and $C^2$ have a good pair: $(c^1_2,v)$ is one;
	
	\item $C^2$ has no uniform shortcut because it is made of only two vertices and $v$ has no loop. $C^1$ has no uniform shortcut of length $0$ because $u$ and $v$ are loopless, and no uniform shortcut of length $-1$ because $(v,u) \notin \vec{E}$. With what precedes about $w$, there is no uniform shortcut at all;
	
	\item If there was a cross-bridge, we could have two cases:
	\begin{itemize}
		\item First is $(c^1_i,v)$ and $(w,c^1_{i+1}) \in \vec{E}$ with $c^1_i \neq w$ and $c^1_{i+1} \neq v$. Since $v$ has no loop, we deduce $c^1_i \neq v$, hence $c^1_{i+1} \neq w$. Also, $c^1_{i+1} = c^1_2$ would imply $c^1_i = w$, which is impossible. Therefore, with what we said about $w$, that kind of cross-bridge cannot happen.
		
		\item Second is $(c^1_i,w)$ and $(v,c^1_{i+1}) \in \vec{E}$ with $c^1_i \neq v$ and $c^1_{i+1} \neq w$. Since $v$ has no loop and $(v,u) \notin \vec{E}$, we also deduce $v \neq c^1_{i+1}$ and $u \neq c^1_{i+1}$. Then we can define a shorter cycle, namely ${C^1}^\prime = ( v, c^1_{i+1}, c^1_{i+2}, ..., u )$. Since $(v,u) \notin \vec{E}$, ${C^1}^\prime$ has length at least $3$, contains at least one unidirectional edge, and is strictly shorter than $C^1$. Since $C^1$ is a minimal cycle having these properties and containing a bidirectional edge, ${C^1}^\prime$ must contain no bidirectional edge. Then ${C^1}^\prime$ is a cycle of unidirectional edges such that a bidirectional edge has one vertex in common with it (that is, $\{v, w\}$). This is an iterative reduction that should have already been performed to build $C^1$ and $C^2$, therefore it cannot happen here.
	\end{itemize}
	
	\item If there is an attractive vertex for $C^1$ located in $C^2$, then it is in particular in $C^1$ since $C^2 \subset C^1$. Since they have no loop, $u$ and $v$ cannot be attractive or repulsive. If any vertex other than $w$ or the vertex that follows it, call the latter $x$, were attractive, then there would be a direct edge from $w$ to that vertex, and so a fitting cycle strictly shorter than $C^1$ would exist, which is impossible. But if $w$ (resp.\ $x$) was attractive, in particular $(u,w) \in \vec{E}$ (resp.\ $(u,x) \in \vec{E}$). Since $w$ (resp.\ $x$) would then have a loop, because it attracts itself, in this Case 1.2 we could not have a unidirectional edge $(u,w)$ (resp.\ $(u,x)$): necessarily $(w,u) \in \vec{E}$ (resp.\ $(x,u) \in \vec{E}$). The only possibility that does not contradict the minimality of $C^1$ is that $C^1$ is already of length $3$ (resp.\ $4$).
	
	The length $3$ case is treated in \cref{subsubsec:length3}. The length $4$ case with $(u, v, w, x)$ and $x$ attractive is actually impossible, because $(u, v, x)$ reduces to a length $3$ case with the correct properties, whereas $C^1$ was supposed to be minimal.
\end{itemize}
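The seed of this case, a unidirectional edge immediately followed by a bidirectional one, can also be located mechanically; as a purely illustrative Python sketch, with the same graph representation as before:

\begin{verbatim}
# Illustrative sketch: find vertices u, v, w with (u,v) unidirectional
# and (v,w) bidirectional, the starting point of Case 1.2.
def uni_then_bi(edges):
    for (u, v) in edges:
        if (v, u) in edges:
            continue  # keep only unidirectional edges (u,v)
        for (x, w) in edges:
            if x == v and w != v and (w, v) in edges:
                return u, v, w
    return None
\end{verbatim}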
\textbf{Case 1.3:} all unidirectional edges have vertices with loops.

\begin{center}
	\begin{tikzpicture}[scale=0.4]
		\def \radius {1.5cm};
		
		\node[draw, circle,scale=0.7] (a) at (0:\radius) {$v$};
		\node[draw, circle,scale=0.7] (b) at (72:\radius) {$a$};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle] (d) at (216:\radius) {};
		\node[draw, circle,scale=0.7] (e) at (288:\radius) {$u$};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex] (a) to[bend right=15] (b);
		\draw[->, >=latex] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex] (d) to[bend right=15] (e);
		\draw[->, >=latex] (e) to[bend right=15] (a);
		
		\draw[->, >=latex] (b) to[bend right=15] (a);
		\draw[->, >=latex] (c) to[bend right=15] (b);
		
		\draw[->, >=latex,color=red] (a) to[out=20,in=-20,looseness=6] (a);
		\draw[->, >=latex] (c) to[out=164,in=124,looseness=6] (c);
		\draw[->, >=latex] (d) to[out=236,in=196,looseness=6] (d);
		\draw[->, >=latex] (e) to[out=308,in=268,looseness=6] (e);
	\end{tikzpicture}
\end{center}

Since $\mathcal{G}(H)$ does not verify condition $D$, we can find a cycle with at least one loopless vertex and one unidirectional edge. In this specific case, the cycle may go through the same vertices twice: strictly speaking, it is defined as the concatenation of one path from the unidirectional edge to the loopless vertex and one path back, and the two of them may intersect.

Define $C^1$ as the smallest cycle built that way. Call $u$ and $v$ the two successive vertices of the unidirectional edge, that is, $(u,v) \in \vec{E}$ but $(v,u) \notin \vec{E}$. Note that $u$ and $v$ have a loop since we are in Case 1.3. Moreover, call $a$ the loopless vertex that was also used to build $C^1$. Finally, set $a = c^1_0, u = c^1_{i_0}, v = c^1_{i_0+1}$.

Setting $C^2 = \{v\}$, we have to check that they fulfill the conditions of the generic construction:
\begin{itemize}
	\item $|C^1| \geq 3$;
	
	\item $(c^1_{i_0+2},v)$ is a good pair, since $C^1$ passes through $v$ only once by construction;
	
	\item $C^2$ is made of only one vertex, hence it has no uniform shortcut. $C^1$ has no uniform shortcut of length $0$ since $a$ has no loop. It has no uniform shortcut of length $-1$ because $(v,u) \notin \vec{E}$. We study a hypothetical shortcut that would allow $(a,c^1_j) \in \vec{E}$: notice that then $(c^1_j,a) \in \vec{E}$ as well, since $a$ has no loop and, in Case 1.3, no unidirectional edge can be incident to a loopless vertex. Since $j \in \{2,...,|C^1|-2\}$, one can use either $(a,c^1_j)$ or $(c^1_j,a)$ to build a cycle shorter than $C^1$ containing $a$, $u$ and $v$, which would therefore be a cycle with all the required properties.
This is impossible by minimality of $C^1$;
	
	\item There can be cross-bridges between $C^1$ and $C^2$. However, a subtle line of reasoning, explained below, shows that here, keeping the indices used previously, we only have to avoid the cross-bridges [$(c^1_i,v)$ and $(v,c^1_{i+1}) \in \vec{E}$ with $c^1_i \neq v \neq c^1_{i+1}$] with $i = i_0-1$ and $i = i_0+2$ for our construction to work -- that is, for the information to be correctly transmitted.
	
	The case $i = i_0-1$ is impossible because $(v,c^1_{i_0}) = (v,u) \notin \vec{E}$. If $(v,c^1_{i_0+3}) \in \vec{E}$, then we can use it to find a cycle shorter than $C^1$ that contains everything we want -- unless $c^1_{i_0+2} = a$.
	
	If $c^1_{i_0+2} = a$, then we redefine $C^2 = \{u\}$, to which all that we have proved can be adapted: we only have to avoid the cross-bridges [$(c^1_i,u)$ and $(u,c^1_{i+1}) \in \vec{E}$ with $c^1_i \neq u \neq c^1_{i+1}$] with $i = i_0-2$ and $i = i_0+1$. Once again, this is impossible except if we also have $c^1_{i_0-1} = a$. Then $C^1$ is made of only three elements, and this case is solved in \cref{subsubsec:length3};
	
	\item As we will see, there is only one possibility for $C^1$ to have both an attractive and a repulsive vertex. Since $C^2 \subset C^1$, it is enough to consider attractive and repulsive vertices for $C^1$ that are located in $C^1$. Let $t \in C^1$ be an attractive vertex, that is, a vertex to which all elements of $C^1$ lead. Notably, $(a,t) \in \vec{E}$; since $a$ has no loop, the Case 1.3 assumption forces $(t,a) \in \vec{E}$. Similarly, for $p$ a repulsive vertex, we have not only $(p,a) \in \vec{E}$, but also $(a,p) \in \vec{E}$. Hence the shortest cycle that meets all our requirements is $(a, p, u, v, t)$, so the only possibility for $C^1$ to have both without contradicting its minimality is to be this precise cycle (with some of the vertices possibly equal). This case is treated in \cref{subsubsec:case1.3}.
\end{itemize}

\textbf{Cross-bridges remark for Case 1.3:} We explain why we only need to avoid two cross-bridges to be sure that the information is entirely transmitted. The basics of Case 1.3 are: $(u,v) \in \vec{E}$, $(v,u) \notin \vec{E}$, $a$ loopless, $C^1$ is a possibly self-intersecting cycle made of the concatenation of a path from $a$ to $u$ and one from $v$ to $a$, all unidirectional edges have vertices with loops, and $C^2 = \{v\}$.

Any diagonal region that contains the coding of a tile, delimited by buffers and possibly border slices above or below, does encode exactly one tile, see \cref{fig:complexcode}. Indeed, each of its vertical slices contains at least one of the elements among $\{c^1_{i_0}, c^1_{i_0+2}\}$ (since the buffer is given by $c^1_{i_0+1}$), and these are always part of a micro-slice that encodes something.
By construction, the whole slice encodes the same thing, vertically.\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\scalebox{1}{\n\t\t\\begin{tikzpicture}\n\t\t\t\\clip[decorate,decoration={random steps,segment length=5pt,amplitude=2pt}] (-0.2, -9.2) -- (-0.2,-5.8) -- (0.8,-5.8) -- (0.8,-2.8) -- (1.8,-2.8) -- (1.8,0.2) -- (4.2,0.2) -- (4.2,-3.2) -- (3.2,-3.2) -- (3.2,-6.2) -- (2.2,-6.2) -- (2.2,-9.2) -- (-0.2,-9.2);\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-1cm]\n\t\t\t\t\\draw (0,0) rectangle (3,-3);\n\t\t\t\t\\draw (3,0) rectangle (6,-3);\n\t\t\t\t\\draw (6,0) rectangle (9,-3);\n\t\t\t\t\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (0,-2) rectangle (2,-3);\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (3,0) rectangle (5,-1);\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (6,-1) rectangle (8,-2);\n\t\t\t\t\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (2,0) rectangle (3,-3);\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (5,0) rectangle (6,-3);\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (8,0) rectangle (9,-3);\n\t\t\t\t\n\t\t\t\t\\draw (0.5,-0.5) node {$a$};\n\t\t\t\t\\draw (0.5,-1.5) node {$a$};\n\t\t\t\t\\draw (0.5,-2.5) node {$c$};\n\t\t\t\t%\n\t\t\t\t\\draw (1.5,-0.5) node {$b$};\n\t\t\t\t\\draw (1.5,-1.5) node {$b$};\n\t\t\t\t\\draw (1.5,-2.5) node {$c$};\n\t\t\t\t%\n\t\t\t\t\\draw (2.5,-0.5) node {$c$};\n\t\t\t\t\\draw (2.5,-1.5) node {$c$};\n\t\t\t\t\\draw (2.5,-2.5) node {$c$};\n\t\t\t\t\n\t\t\t\t\\draw (3.5,-0.5) node {$c$};\n\t\t\t\t\\draw (3.5,-1.5) node {$a$};\n\t\t\t\t\\draw (3.5,-2.5) node {$a$};\n\t\t\t\t%\n\t\t\t\t\\draw (4.5,-0.5) node {$c$};\n\t\t\t\t\\draw (4.5,-1.5) node {$b$};\n\t\t\t\t\\draw (4.5,-2.5) node {$b$};\n\t\t\t\t%\n\t\t\t\t\\draw (5.5,-0.5) node {$c$};\n\t\t\t\t\\draw (5.5,-1.5) node {$c$};\n\t\t\t\t\\draw (5.5,-2.5) node {$c$};\n\t\t\t\t\n\t\t\t\t\\draw (6.5,-0.5) node {$a$};\n\t\t\t\t\\draw (6.5,-1.5) node {$c$};\n\t\t\t\t\\draw (6.5,-2.5) node {$a$};\n\t\t\t\t%\n\t\t\t\t\\draw (7.5,-0.5) node {$b$};\n\t\t\t\t\\draw (7.5,-1.5) node {$c$};\n\t\t\t\t\\draw (7.5,-2.5) node {$b$};\n\t\t\t\t%\n\t\t\t\t\\draw (8.5,-0.5) node {$c$};\n\t\t\t\t\\draw (8.5,-1.5) node {$c$};\n\t\t\t\t\\draw (8.5,-2.5) node {$c$};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-2cm,yshift=-3cm]\n\t\t\t\t\\draw (0,0) rectangle (3,-3);\n\t\t\t\t\\draw (3,0) rectangle (6,-3);\n\t\t\t\t\\draw (6,0) rectangle (9,-3);\n\t\t\t\t\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (0,-2) rectangle (2,-3);\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (3,0) rectangle (5,-1);\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (6,-1) rectangle (8,-2);\n\t\t\t\t\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (2,0) rectangle (3,-3);\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (5,0) rectangle (6,-3);\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (8,0) rectangle (9,-3);\n\t\t\t\t\n\t\t\t\t\\draw (0.5,-0.5) node {$a$};\n\t\t\t\t\\draw (0.5,-1.5) node {$a$};\n\t\t\t\t\\draw (0.5,-2.5) node {$c$};\n\t\t\t\t%\n\t\t\t\t\\draw (1.5,-0.5) node {$b$};\n\t\t\t\t\\draw (1.5,-1.5) node {$b$};\n\t\t\t\t\\draw (1.5,-2.5) node {$c$};\n\t\t\t\t%\n\t\t\t\t\\draw (2.5,-0.5) node {$c$};\n\t\t\t\t\\draw (2.5,-1.5) node {$c$};\n\t\t\t\t\\draw (2.5,-2.5) node {$c$};\n\t\t\t\t\n\t\t\t\t\\draw (3.5,-0.5) node {$c$};\n\t\t\t\t\\draw (3.5,-1.5) node {$a$};\n\t\t\t\t\\draw (3.5,-2.5) node 
{$a$};\n\t\t\t\t%\n\t\t\t\t\\draw (4.5,-0.5) node {$c$};\n\t\t\t\t\\draw (4.5,-1.5) node {$b$};\n\t\t\t\t\\draw (4.5,-2.5) node {$b$};\n\t\t\t\t%\n\t\t\t\t\\draw (5.5,-0.5) node {$c$};\n\t\t\t\t\\draw (5.5,-1.5) node {$c$};\n\t\t\t\t\\draw (5.5,-2.5) node {$c$};\n\t\t\t\t\n\t\t\t\t\\draw (6.5,-0.5) node {$a$};\n\t\t\t\t\\draw (6.5,-1.5) node {$c$};\n\t\t\t\t\\draw (6.5,-2.5) node {$a$};\n\t\t\t\t%\n\t\t\t\t\\draw (7.5,-0.5) node {$b$};\n\t\t\t\t\\draw (7.5,-1.5) node {$c$};\n\t\t\t\t\\draw (7.5,-2.5) node {$b$};\n\t\t\t\t%\n\t\t\t\t\\draw (8.5,-0.5) node {$c$};\n\t\t\t\t\\draw (8.5,-1.5) node {$c$};\n\t\t\t\t\\draw (8.5,-2.5) node {$c$};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\t\n\t\t\t\\begin{scope}[xshift=-3cm,yshift=-6cm]\n\t\t\t\t\\draw (0,0) rectangle (3,-3);\n\t\t\t\t\\draw (3,0) rectangle (6,-3);\n\t\t\t\t\\draw (6,0) rectangle (9,-3);\n\t\t\t\t\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (0,-2) rectangle (2,-3);\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (3,0) rectangle (5,-1);\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (6,-1) rectangle (8,-2);\n\t\t\t\t\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (2,0) rectangle (3,-3);\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (5,0) rectangle (6,-3);\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (8,0) rectangle (9,-3);\n\t\t\t\t\n\t\t\t\t\\draw (0.5,-0.5) node {$a$};\n\t\t\t\t\\draw (0.5,-1.5) node {$a$};\n\t\t\t\t\\draw (0.5,-2.5) node {$c$};\n\t\t\t\t%\n\t\t\t\t\\draw (1.5,-0.5) node {$b$};\n\t\t\t\t\\draw (1.5,-1.5) node {$b$};\n\t\t\t\t\\draw (1.5,-2.5) node {$c$};\n\t\t\t\t%\n\t\t\t\t\\draw (2.5,-0.5) node {$c$};\n\t\t\t\t\\draw (2.5,-1.5) node {$c$};\n\t\t\t\t\\draw (2.5,-2.5) node {$c$};\n\t\t\t\t\n\t\t\t\t\\draw (3.5,-0.5) node {$c$};\n\t\t\t\t\\draw (3.5,-1.5) node {$a$};\n\t\t\t\t\\draw (3.5,-2.5) node {$a$};\n\t\t\t\t%\n\t\t\t\t\\draw (4.5,-0.5) node {$c$};\n\t\t\t\t\\draw (4.5,-1.5) node {$b$};\n\t\t\t\t\\draw (4.5,-2.5) node {$b$};\n\t\t\t\t%\n\t\t\t\t\\draw (5.5,-0.5) node {$c$};\n\t\t\t\t\\draw (5.5,-1.5) node {$c$};\n\t\t\t\t\\draw (5.5,-2.5) node {$c$};\n\t\t\t\t\n\t\t\t\t\\draw (6.5,-0.5) node {$a$};\n\t\t\t\t\\draw (6.5,-1.5) node {$c$};\n\t\t\t\t\\draw (6.5,-2.5) node {$a$};\n\t\t\t\t%\n\t\t\t\t\\draw (7.5,-0.5) node {$b$};\n\t\t\t\t\\draw (7.5,-1.5) node {$c$};\n\t\t\t\t\\draw (7.5,-2.5) node {$b$};\n\t\t\t\t%\n\t\t\t\t\\draw (8.5,-0.5) node {$c$};\n\t\t\t\t\\draw (8.5,-1.5) node {$c$};\n\t\t\t\t\\draw (8.5,-2.5) node {$c$};\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\\draw (9,0) rectangle (12,-3);\n\t\t\t\t\n\t\t\t\t\\draw[fill=yellow, rounded corners, inner sep=0.3mm] (9,-1) rectangle (11,-2);\n\t\t\t\t\n\t\t\t\t\\fill[pattern=north west lines, pattern color=black!30] (11,0) rectangle (12,-3);\n\t\t\t\t\n\t\t\t\t\\draw (9.5,-0.5) node {$a$};\n\t\t\t\t\\draw (9.5,-1.5) node {$c$};\n\t\t\t\t\\draw (9.5,-2.5) node {$a$};\n\t\t\t\t%\n\t\t\t\t\\draw (10.5,-0.5) node {$b$};\n\t\t\t\t\\draw (10.5,-1.5) node {$c$};\n\t\t\t\t\\draw (10.5,-2.5) node {$b$};\n\t\t\t\t%\n\t\t\t\t\\draw (11.5,-0.5) node {$c$};\n\t\t\t\t\\draw (11.5,-1.5) node {$c$};\n\t\t\t\t\\draw (11.5,-2.5) node {$c$};\n\t\t\t\\end{scope}\n\t\t\t\n\t\t\\end{tikzpicture}\n\t}\n\t\\caption{A coding region reused from \\cref{codingblock}, delimited by buffers, with an example of transmission that is ensured between the different slices of it.}\n\t\\label{fig:complexcode}\n\\end{figure}\n\nMoreover, two adjacent slices also contain the same encoding, using the fact 
that there is no cross-bridge for $i=i_0-1$ or $i=i_0+2$. Indeed, suppose we have a $c^1_{i_0}$ coding micro-slice in the rightmost slice. To its left, in the second rightmost slice, is a $c^1_{i_0-1}$ coding micro-slice that encodes the same thing, since there is no cross-bridge for $i=i_0-1$. Since we force two vertically adjacent coding micro-slices to encode the same tile if none of them is a buffer, the coding micro-slice using $c^1_{i_0}$, that is below the one using $c^1_{i_0-1}$, encodes the same tile as the latter. But to the left of the $c^1_{i_0}$ coding micro-slice is a $c^1_{i_0-1}$ coding micro-slice that encodes the same tile. Below this one is, once again, a $c^1_{i_0}$ coding micro-slice that encodes the same tile by construction... The same reasoning works when starting from a $c^1_{i_0+2}$ coding micro-slice in the leftmost slice: it encodes the same thing as the $c^1_{i_0+3}$ coding micro-slice to its right since there is no cross-bridge for $i=i_0+2$, etc.

\subsection{Case 2}
\label{subsec:case2}

Here, we assume that $\mathcal{G}(H)$ contains no loop but at least one bidirectional edge.

\textbf{Case 2.1:} we can find one cycle of length at least $3$ with no repeated vertex, with at least one bidirectional edge.

\begin{center}
	\begin{tikzpicture}[scale=0.4]
		\def \radius {1.5cm};
		
		\node[draw, circle,scale=0.7] (a) at (0:\radius) {$v$};
		\node[draw, circle,scale=0.7] (b) at (72:\radius) {$w$};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle] (d) at (216:\radius) {};
		\node[draw, circle,scale=0.7] (e) at (288:\radius) {$u$};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex,color=red] (a) to[bend right=15] (b);
		\draw[->, >=latex] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex] (d) to[bend right=15] (e);
		\draw[->, >=latex] (e) to[bend right=15] (a);
		
		\draw[->, >=latex,color=red] (b) to[bend right=15] (a);
		\draw[->, >=latex] (c) to[bend right=15] (b);
	\end{tikzpicture}
\end{center}

Focus on a cycle that contains both a bidirectional and a unidirectional edge -- we can find one, else $\mathcal{G}(H)$ would be of symmetric type. Just as in Case 1.2, reduce it iteratively until we end up either with a graph, as small as possible, similar to the one of Case 2.2, which is then treated similarly; or with the smallest cycle containing both a bidirectional and a unidirectional edge that does not contain a graph as in Case 2.2.

Name this cycle $C^1$; name $u$, $v$ and $w$ some successive vertices in $C^1$ such that $(u,v), (v,w)$ and $(w,v) \in \vec{E}$ but $(v,u) \notin \vec{E}$; and set $C^2 = \{v,w\}$. Also, define $c^1_0 = v = c^2_0$.

As in Case 1.2, notice that all edges from $w$ must lead either to $c^1_2$ or to $v = c^1_0$, else we could find a cycle shorter than $C^1$ with the same properties. It remains to check that $C^1$ and $C^2$ have the properties we want:
\begin{itemize}
	\item $|C^1| \geq 3$;
	
	\item $C^1$ and $C^2$ have a good pair, namely $(c^1_2,v)$;
	
	\item $C^2$ has no uniform shortcut since it is of length $2$ with no loop. If $C^1$ had a uniform shortcut, it could not be of length $0$ (because all vertices are loopless) or of length $-1$ (because $(v,u) \notin \vec{E}$). Any other length of shortcut is impossible due to the aforementioned property of $w$;
	
	\item If there was a cross-bridge, we would reach a contradiction exactly as for the cross-bridges of Case 1.2;
	
	\item There cannot be any attractive or repulsive vertex for $C^1$ located in $C^1$, since no vertex has a loop in the present case. None can be located in $C^2$ either, since $C^2 \subset C^1$.
\end{itemize}
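The last condition is also mechanical to state; as a purely illustrative Python sketch, following the way attractive and repulsive vertices are used in the proof of \cref{propapproxaligned} (an attractive vertex receives an edge from every element of $C^1$, a repulsive one sends an edge to every element of $C^1$):

\begin{verbatim}
# Illustrative sketch: attractive and repulsive vertices for a cycle c1,
# searched among a set of candidate vertices.
def attractive(c1, candidates, edges):
    return [t for t in candidates if all((c, t) in edges for c in c1)]

def repulsive(c1, candidates, edges):
    return [p for p in candidates if all((p, c) in edges for c in c1)]
\end{verbatim}

In Case 2.1 both lists are empty over $C^1 \cup C^2$: a vertex of $C^1$ that attracted (or repelled) all of $C^1$ would in particular attract (or repel) itself, producing a loop.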
\textbf{Case 2.2:} any cycle of length at least $3$ with no repeated vertex contains no bidirectional edge.

\begin{center}
	\begin{tikzpicture}[scale=0.4,rotate=-90]
		\def \radius {1.5cm};
		
		\node[draw, circle,scale=0.7] (a) at (0:\radius) {$v$};
		\node[draw, circle] (b) at (72:\radius) {};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle] (d) at (216:\radius) {};
		\node[draw, circle] (e) at (288:\radius) {};
		
		\node[draw, circle,scale=0.7] (f) at (3.5,0) {$w$};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex] (a) to[bend right=15] (b);
		\draw[->, >=latex] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex] (d) to[bend right=15] (e);
		\draw[->, >=latex] (e) to[bend right=15] (a);
		
		\draw[->, >=latex,color=red] (a) to[bend right=15] (f);
		\draw[->, >=latex,color=red] (f) to[bend right=15] (a);
	\end{tikzpicture}
\end{center}

Since there are bidirectional edges in $\mathcal{G}(H)$ (hypothesis of Case 2), which is strongly connected, we can find at least one cycle of length $\geq 3$ with no repeated vertex that has one vertex in common with a bidirectional edge. Choose a minimal cycle among these and call it $C^1$; name $v$ the vertex it has in common with the bidirectional edge, and $w$ the other vertex. We define $C^2 = \{v, w\}$. Call $c^1_0 = v = c^2_0$.
\begin{itemize}
	\item $|C^1| \geq 3$;
	
	\item $(c^1_1,w)$ is a good pair for $C^1$ and $C^2$;
	
	\item $C^2$ has no uniform shortcut since it is of length $2$ with no loop. If $C^1$ had uniform shortcuts, they could not be of length $0$ because none of its vertices has a loop; they could not be of length $-1$ because none of its edges is bidirectional; and they could not be of any other length, else the shortcut starting from $v$ would allow us to define a strictly shorter cycle with the same property, contradicting the minimality of $C^1$;
	
	\item If there was a cross-bridge, we could have two cases:
	\begin{itemize}
		\item First is $(c^1_i,v)$ and $(w,c^1_{i+1}) \in \vec{E}$, with $c^1_i \neq w$ and $v \neq c^1_{i+1}$. Then we could use the edge $(w,c^1_{i+1})$ for the following cycle: $( w, c^1_{i+1}, c^1_{i+2}, ..., v )$. It would be of length at least $3$ since $c^1_{i+1} \neq v$, no vertex would repeat, and it would contain one bidirectional edge. This is impossible by the assumption of Case 2.2.
		
		\item Second is $(c^1_i,w)$ and $(v,c^1_{i+1}) \in \vec{E}$, with $c^1_i \neq v$ and $w \neq c^1_{i+1}$. Then we could use the edge $(v,c^1_{i+1})$ to define a cycle strictly shorter than $C^1$, with the same properties, of length at least $3$ ($c^1_{i+1}$ cannot precede $v$, else we would have a bidirectional edge). This is impossible.
	\end{itemize}
	
	\item There is no attractive or repulsive vertex for $C^1$ located in $C^1$, since none has a loop.
But there can be an attractive and/or a repulsive vertex for $C^1$ located in $C^2$, that is, $w$. Nevertheless, if $w$ was both attractive and repulsive, using part of $C^1$ we could build a cycle of length at least $3$ with no repeated vertex including $v$ and $w$, hence containing a bidirectional edge. This is forbidden in this Case 2.2.
\end{itemize}

\subsection{Case 3}
\label{subsec:case3}

In this subsection, we assume $\mathcal{G}(H)$ contains no loop and no bidirectional edge.

\textbf{Case 3.1:} considering the smallest cycle $C$ in $\mathcal{G}(H)$ one can find, there exists a path $\gamma$ between two different vertices of $C$ that does not intersect $C$ elsewhere, and $\gamma$ is of a length different from the length between these vertices inside $C$.

\begin{center}
	\begin{tikzpicture}[scale=0.4,rotate=-90]
		\def \radius {1.5cm};
		
		\node[draw, circle] (a) at (0:\radius) {};
		\node[draw, circle] (b) at (72:\radius) {};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle] (d) at (216:\radius) {};
		\node[draw, circle] (e) at (288:\radius) {};
		
		\node[draw, circle] (f) at (-2.5,0.8) {};
		\node[draw, circle] (g) at (-2.5,-0.8) {};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex,color=red] (a) to[bend right=15] (b);
		\draw[->, >=latex,color=red] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex,color=red] (d) to[bend right=15] (e);
		\draw[->, >=latex,color=red] (e) to[bend right=15] (a);
		
		\draw[->, >=latex,color=red] (c) to[bend right=15] (f);
		\draw[->, >=latex,color=red] (f) to[bend right=15] (g);
		\draw[->, >=latex,color=red] (g) to[bend right=15] (d);
	\end{tikzpicture}
\end{center}

Define $C^1$ to be a cycle with this property, for a path $\gamma$ between two of its vertices, such that $C^1$ is of minimal length. If there are several cycles of minimal length, choose one for which we can find a path $\gamma$ as short as possible. Now, we name the vertices: $\gamma$ is a path between $c^1_0$ and $c^1_k$ that is not of length $k$. Define $C^2$ as the concatenation of $\gamma$ and $( c^1_k, c^1_{k+1}, ..., c^1_0 )$, with $c^2_0 := c^1_0$ and $c^2_l = c^1_k$ where $l = |\gamma| > k$.
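To keep the indexing straight, the assembly of $C^2$ can be written out explicitly; the following Python sketch is purely illustrative (cycles and paths are vertex lists, and $\gamma$ is given with both endpoints included):

\begin{verbatim}
# Illustrative sketch: C^2 as the concatenation of gamma (a path from
# c1[0] to c1[k]) with the arc (c1[k], ..., c1[-1]); the cycle closes
# with the edge (c1[-1], c1[0]) of C^1.
def build_c2(c1, k, gamma):
    assert gamma[0] == c1[0] and gamma[-1] == c1[k]
    assert len(gamma) - 1 != k    # |gamma| differs from the length inside C^1
    return gamma[:-1] + c1[k:]    # c2_0 = c1_0 and c2_l = c1_k, l = |gamma|
\end{verbatim}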
Then $C^\\prime := (c^2_0, c^2_{l-k+1}, c^2_{l-k+2}, ..., c^2_l = c^1_k, c^1_{k+1}, ..., c^1_{-1})$ is a cycle of length $|C^1|$.\n\t\n\tFirst, we study the case $k \\neq 1$. Then $(c^2_0, c^2_1, c^2_2, ..., c^2_{l-k+1})$ is a path of length $l-k+1 < l$, linking two elements of $C^\\prime$ (these elements are $c^2_0$ and $c^2_{l-k+1}$, which are consecutive in $C^\\prime$). Since $C^\\prime$ is of length $|C^1|$ and we found a path of length smaller than $l$ joining two of its vertices, this fact contradicts the minimality of $C^1$, so we cannot actually have $k \\neq 1$.\n\t\n\tNecessarily $k=1$. Then $j = l$, and the edge between $c^2_0$ and $c^2_l = c^1_k = c^1_1$ is already part of $C^1$ -- it is $(c^1_0, c^1_1)$. We have $l>k$ so $l \\neq 1$; if $l=2$ then $j=2$ so $c^2_2=c^1_1$ would have an edge going to $c^2_4 = c^1_3$, an element of $C^1$ necessarily (possibly $c^1_0$). This is impossible by minimality of $C^1$. We deduce that $l>2$.\n\t\n\tWe set $C^3 := (c^2_0, c^2_1, c^2_2, c^1_{3}, c^1_{4}, ..., c^1_{|C^1|-1})$ using the edge $(c^2_2,c^2_{l+2})$ since $j = l$, which is the edge $(c^2_2,c^1_{3})$ since $c^2_l = c^1_1$.\n\tThere is a specific case if $c^1_3 = c^1_0 = c^2_0$, where both $C^1$ and $C^3$ end up being triangles, but the reasoning below still holds.\n\t\n\tInstead of using $C^1$ and $C^2$, we check that choosing $C^1$ and $C^3$ for our generic construction works well:\n\t\\begin{itemize}\n\t\t\\item $|C^1| \\geq 3$, this does not change;\n\t\t\n\t\t\\item $(c^1_1, c^2_1)$ is still a good pair for $C^1$ and $C^3$;\n\t\t\n\t\t\\item $C^1$ being still defined the same way, it does not contain any uniform shortcut or even any edge between two of its vertices. If $C^3$ contains uniform shortcuts, the one starting at $c^1_0 = c^2_0$ must lead to $c^2_2$ since it must be of size different from $1$ and it must not lead to an element in common with $C^1$, because the latter is minimal. But if $(c^2_0,c^2_2) \\in \\vec{E}$, then we could find a path strictly shorter than $\\gamma$ between $c^1_0$ and $c^1_1$, that would not be of length $1$ (because $c^2_2 \\neq c^2_l$). This contradicts the minimality of $C^1$.\n\t\t\n\t\t\\item Since no edge between two non-consecutive elements of $C^1$ is possible, the unique cross-bridge between $C^1$ and $C^3$ would be some $(c^1_i,c^2_2)$ and $(c^2_1,c^1_{i+1}) \\in \\vec{E}$. But it would also be a cross-bridge between $C^1$ and $C^2$, and this is in all cases impossible, see below.\n\t\t\n\t\t\\item There is no attractive or repulsive vertex for $C^1$ in $C^1$ since no element of $C^1$ has a loop. Suppose there are both an attractive and a repulsive vertex for $C^1$ located in $C^3 \\setminus C^1$, call them $t^3$ and $p^3$. They must be distinct (because the graph has no bidirectional edge) and not be in $C^1$; hence $C^3$ contains at least $2$ exclusive vertices.\n\t\t\n\t\tThen the idea is to use $C^1$ as $C^3$ in the generic construction and vice versa: $|C^3| \\geq 3$, and all of our other properties hold when swapping $C^3$ and $C^1$, except the attractive and repulsive conditions. Hence the only facts that we have to verify to exchange their roles is that there is no attractive or repulsive vertex for $C^3$ located either in $C^3$ or in $C^1$. There is none in $C^3$ because no vertex of $C^3$ has a loop. If there was both an attractive and a repulsive vertices for $C^3$ in $C^1$, call them $t^1$ and $p^1$, then notably $t^1$ would lead to $t^3$ and vice versa... But no bidirectional edge exists here. 
So, up to exchanging what is $C^1$ and what is $C^3$, we cannot have both an attractive and a repulsive vertex for $C^1$. This reasoning will be applied again and will be called the \emph{trick of exchanging the roles}.
	\end{itemize}
	Therefore in the worst case, if there are uniform shortcuts in $C^2$, we can build the generic construction from \cref{subsec:core} with $C^1$ and $C^3$.
	
	\item Suppose we have a cross-bridge, that is, $(c^1_i,c^2_{j+1})$ and $(c^2_j,c^1_{i+1}) \in \vec{E}$. Since $C^1$ is minimal, $c^2_j$ and $c^2_{j+1}$ are not elements of $C^1$, so $1 \leq j < l$; one then checks, as before, that such a configuration would yield either a cycle strictly smaller than $C^1$ or a path contradicting the choice of $\gamma$, so no cross-bridge can exist.
\end{itemize}



\textbf{Case 3.2:} considering the smallest cycle $C$ in $\mathcal{G}(H)$ one can find, there exist paths between different vertices of $C$ that do not intersect $C$ elsewhere, but every such path has the same length as the path between these vertices inside $C$.

\begin{center}
	\begin{tikzpicture}[scale=0.4]
		\def \radius {1.5cm};
		
		\node[draw, circle] (a) at (0:\radius) {};
		\node[draw, circle] (b) at (72:\radius) {};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle] (d) at (216:\radius) {};
		\node[draw, circle] (e) at (288:\radius) {};
		
		\node[draw, circle] (f) at (2.5,0.8) {};
		\node[draw, circle] (g) at (2.5,-0.8) {};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex,color=red] (a) to[bend right=15] (b);
		\draw[->, >=latex] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex] (d) to[bend right=15] (e);
		\draw[->, >=latex,color=red] (e) to[bend right=15] (a);
		
		\draw[->, >=latex,color=red] (b) to[bend right=15] (f);
		\draw[->, >=latex,color=red] (f) to[bend right=15] (g);
		\draw[->, >=latex,color=red] (g) to[bend right=15] (e);
		
		\draw[->, >=latex] (c) to[bend right=15] (g);
	\end{tikzpicture}
\end{center}

Define $C^1 := C$. We use the following algorithm: we start with $V_0 := \{c^1_0\}$ and $(V_i)_{i \in [1, |C^1|]}$ empty. Then we recursively append to $V_{i+1}$ all vertices $w$ in $\mathcal{G}(H)$ such that there is a $v \in V_i$ with $(v,w) \in \vec{E}$, the indices being taken modulo $|C^1|$ so that $V_{|C^1|} = V_0$.
The algorithm halts when it tries to append to a $V_i$ only vertices that are already in it, which happens because $\mathcal{G}(H)$ is made of a finite number of vertices. The fact that no path exterior to $C^1$ is of a different length than the corresponding path in $C^1$, plus the absence of any loop or bidirectional edge, makes all the $V_i$ disjoint. Finally, the strong connectivity we assumed ensures that $H = \bigsqcup_{i=0}^{|C^1|-1} V_i$; a small sketch of this layering procedure is given below.

We use the fact that $H$ does not verify condition $D$, and specifically is not of state-split cycle type. Since, by construction, for any $v \in V_i$ we have $(v,w) \in \vec{E} \Rightarrow w \in V_{i+1}$, the only possibility is that $\exists v \in V_{i_0}, \exists w^\prime \in V_{i_0+1}, (v,w^\prime) \notin \vec{E}$. However, we also have some $v^\prime \in V_{i_0+1}$ with $(v,v^\prime) \in \vec{E}$ and some $w \in V_{i_0}$ with $(w, w^\prime) \in \vec{E}$. Obviously, the four vertices are different. Now, take a path $\gamma_1$ from $v^\prime$ to $v$ that is as short as possible. Take a different path $\gamma_2$ from $w^\prime$ to $w$ that has as many vertices as possible in common with $\gamma_1$.
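To make the layering procedure above concrete, here is a minimal sketch in Python; the adjacency-dictionary representation of $\mathcal{G}(H)$ and all names are illustrative assumptions, not part of the construction.

\begin{verbatim}
# Minimal sketch of the layering procedure described above.
# `adj` maps each vertex of G(H) to the set of its out-neighbours;
# `cycle` lists the vertices of C^1 in order.
def layer_vertices(adj, cycle):
    p = len(cycle)
    layers = [set() for _ in range(p)]     # V_0, ..., V_{p-1}
    layers[0].add(cycle[0])
    changed = True
    while changed:                         # halts: finitely many vertices
        changed = False
        for i in range(p):
            for v in list(layers[i]):
                for w in adj[v]:           # successors go into V_{i+1}
                    if w not in layers[(i + 1) % p]:
                        layers[(i + 1) % p].add(w)
                        changed = True
    return layers
\end{verbatim}

Under the hypotheses of this case (no loop, no bidirectional edge, and no exterior path of a length different from the corresponding path in $C^1$), the returned sets are pairwise disjoint, and strong connectivity guarantees that they cover all vertices.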
We redefine $C^1$ as the concatenation of $\gamma_1$ and $(v,v^\prime)$, and $C^2$ as the concatenation of $\gamma_2$ and $(w,w^\prime)$.

It is rather easy to see that the properties we need for our generic construction are verified:

\begin{itemize}
	\item $|C^1| \geq 3$, since there is no bidirectional edge;
	
	\item The unique common part between $C^1$ and $C^2$ is the longest sequence of vertices $\gamma_1$ and $\gamma_2$ have in common, so starting from the first pair on which they disagree we obtain a good pair;
	
	\item There is no uniform shortcut between $C^1$ and $C^2$, since $\gamma_1$ was chosen minimal and $\gamma_2$ as close to $\gamma_1$ as possible;
	
	\item There is no cross-bridge for the same reason;
	
	\item For attractive and repulsive vertices, we use the trick of exchanging the roles of $C^1$ and $C^2$ described in Case 3.1.
\end{itemize}



\textbf{Case 3.3:} considering the smallest cycle $C$ in $\mathcal{G}(H)$ with no repeated vertex, we can find no path between two different vertices of this cycle.

\begin{center}
	\begin{tikzpicture}[scale=0.4]
		\def \radius {1.5cm};
		
		\node[draw, circle] (a) at (0:\radius) {};
		\node[draw, circle] (b) at (72:\radius) {};
		\node[draw, circle] (c) at (144:\radius) {};
		\node[draw, circle] (d) at (216:\radius) {};
		\node[draw, circle] (e) at (288:\radius) {};
		
		\node[draw, circle] (f) at (2.5,-0.8) {};
		\node[draw, circle] (g) at (2.5,0.8) {};
		
		\draw (0,0) node {$C^1$};
		
		\draw[->, >=latex] (a) to[bend right=15] (b);
		\draw[->, >=latex] (b) to[bend right=15] (c);
		\draw[->, >=latex] (c) to[bend right=15] (d);
		\draw[->, >=latex] (d) to[bend right=15] (e);
		\draw[->, >=latex] (e) to[bend right=15] (a);
		
		\draw[->, >=latex,color=red] (a) to[bend right=15] (f);
		\draw[->, >=latex,color=red] (f) to[bend right=15] (g);
		\draw[->, >=latex,color=red] (g) to[bend right=15] (a);
	\end{tikzpicture}
\end{center}

Since $\mathcal{G}(H)$ does not verify condition $D$, it cannot be a plain cycle, hence a path exists from one vertex of $C$ to itself that does not intersect $C$ elsewhere. Define $C^1:=C$ and call this vertex $c^1_0$. Then, considering the shortest path $\gamma$ from $c^1_0$ to itself outside of $C^1$, we define $C^2 := \gamma$, with $c^2_0 := c^1_0$.

It remains to check that these $C^1$ and $C^2$ verify the properties we need.

\begin{itemize}
	\item $|C^1| \geq 3$;
	
	\item $C^1$ and $C^2$ have exactly one vertex in common, $c^1_0 = c^2_0$, and so $(c^1_1,c^2_1)$ is a good pair;
	
	\item There is no uniform shortcut of length $0$ or $-1$ in either $C^1$ or $C^2$, since there are no loops and no bidirectional edges. There is no uniform shortcut of any other length; otherwise, consider the edge of the shortcut starting at $c^1_0$, be it in $C^1$ or in $C^2$: it would allow us to build a shorter $C^1$ or a shorter $C^2$, contradicting the fact that the two of them have been chosen to be minimal.
	
	\item There is no cross-bridge between $C^1$ and $C^2$, because it would allow us to build a path outside of $C^1$ between two distinct elements of $C^1$, which is impossible in this case;
	
	\item There is no attractive or repulsive vertex for $C^1$ in $C^1$, because no element of $C^1$ has a loop.
There is no attractive or repulsive vertex for $C^1$ in $C^2$ either; otherwise we could build a path outside of $C^1$ between two distinct elements of $C^1$, which is impossible in this case.
\end{itemize}




\subsection{Additional cases}
\label{subsec:additional}

\subsubsection{Length 3 Cases:}
\label{subsubsec:length3}

For most three-vertex graphs, we can apply the generic construction from \cref{subsec:core} without any problem. However, some of them require us to be slightly more cautious, because some properties are missing. These are, up to a change of labels:


\begin{center}
	\begin{tikzpicture}[scale=0.8]
		
		\begin{scope}
			\node[draw, circle] (b) {$b$};
			\node[draw, circle] (a) [below left = 0.9cm and 0.5cm of b] {$a$};
			\node[draw, circle] (c) [below right = 0.9cm and 0.5cm of b] {$c$};
			
			\draw[->, >=latex] (a) to[bend left=20] (b);
			\draw[->, >=latex] (b) to[bend left=20] (c);
			\draw[->, >=latex] (c) to[bend left=20] (a);
			
			\draw[->, >=latex] (c) to[bend left=20] (b);
			
			\draw[->, >=latex] (b) to[loop above] (b);
			\draw[->, >=latex] (c) to [out=330,in=300,looseness=12] (c);
		\end{scope}
		
		\begin{scope}[xshift=4cm]
			\node[draw, circle] (b) {$b$};
			\node[draw, circle] (a) [below left = 0.9cm and 0.5cm of b] {$a$};
			\node[draw, circle] (c) [below right = 0.9cm and 0.5cm of b] {$c$};
			
			\draw[->, >=latex] (a) to[bend left=20] (b);
			\draw[->, >=latex] (b) to[bend left=20] (c);
			\draw[->, >=latex] (c) to[bend left=20] (a);
			
			\draw[->, >=latex] (c) to[bend left=20] (b);
			\draw[->, >=latex] (a) to[bend left=20] (c);
			
			\draw[->, >=latex] (c) to [out=330,in=300,looseness=12] (c);
		\end{scope}
		
		\begin{scope}[xshift=8cm]
			\node[draw, circle] (b) {$b$};
			\node[draw, circle] (a) [below left = 0.9cm and 0.5cm of b] {$a$};
			\node[draw, circle] (c) [below right = 0.9cm and 0.5cm of b] {$c$};
			
			\draw[->, >=latex] (a) to[bend left=20] (b);
			\draw[->, >=latex] (b) to[bend left=20] (c);
			\draw[->, >=latex] (c) to[bend left=20] (a);
			
			\draw[->, >=latex] (c) to[bend left=20] (b);
			\draw[->, >=latex] (a) to[bend left=20] (c);
			
			\draw[->, >=latex] (b) to[loop above] (b);
			\draw[->, >=latex] (c) to [out=330,in=300,looseness=12] (c);
		\end{scope}
		
	\end{tikzpicture}
\end{center}

Here, there are both an attractive and a repulsive vertex. Still, similarly to Case 1.1, $a$ has in-degree $1$ since only $c$ leads to $a$, and so \cref{propapproxaligned} holds, since $a$ forces the alignment of columns.

Besides, we have to be careful about the cross-bridge property. In the first example, we choose $C^2=(b)$ (there is no cross-bridge then, because $(b,a) \notin \vec{E}$). In the second and in the third, we choose $C^2=(a,c)$ ($a$ has no loop, hence there is no problem of cross-bridge or of uniform shortcut in $C^2$); these conditions can be checked mechanically, as in the sketch below.
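For concreteness, here is a minimal sketch in Python of that mechanical check, where the edge sets simply transcribe the three pictures above (all names are illustrative).

\begin{verbatim}
# Edge sets transcribing the three three-vertex graphs above.
G1 = {('a','b'), ('b','c'), ('c','a'), ('c','b'), ('b','b'), ('c','c')}
G2 = {('a','b'), ('b','c'), ('c','a'), ('c','b'), ('a','c'), ('c','c')}
G3 = {('a','b'), ('b','c'), ('c','a'), ('c','b'), ('a','c'),
      ('b','b'), ('c','c')}

for G in (G1, G2, G3):
    in_degree_a = sum(1 for (u, v) in G if v == 'a')
    # only c leads to a, and a has no loop
    assert in_degree_a == 1 and ('a', 'a') not in G
\end{verbatim}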
All the other properties from Condition $C$ are verified.




\begin{center}
	\begin{tikzpicture}
		\node[draw, circle] (b) {$b$};
		\node[draw, circle] (a) [below left = 0.9cm and 0.5cm of b] {$a$};
		\node[draw, circle] (c) [below right = 0.9cm and 0.5cm of b] {$c$};
		
		\draw[->, >=latex] (a) to[bend left=20] (b);
		\draw[->, >=latex] (b) to[bend left=20] (c);
		\draw[->, >=latex] (c) to[bend left=20] (a);
		
		\draw[->, >=latex] (b) to[bend left=20] (a);
		\draw[->, >=latex] (a) to[bend left=20] (c);
		
		\draw[->, >=latex] (b) to[loop above] (b);
		\draw[->, >=latex] (c) to [out=330,in=300,looseness=12] (c);
	\end{tikzpicture}
\end{center}

Finally, here we have three problems:

\begin{itemize}
	\item We could have cross-bridges, so to avoid them we choose $C^2 = (a, b)$ (we have no problem of uniform shortcut in $C^2$ since $a$ has no loop);
	\item We have attractive and repulsive vertices;
	\item And here we cannot rely on an element of the alphabet that must be followed or preceded by a specific other one to solve that problem.
\end{itemize}

The reasoning is then slightly more subtle: suppose we try to perform the generic construction, and take two successive columns $K_1$ and $K_2$. In any macro-slice of $K_1$, there is some $a$ meso-slice (made of $NM$ symbols $a$) that is vertically preceded by a $c$ meso-slice. The $a$ meso-slice must not be next to any symbol $a$ in column $K_2$. But then, if there is any $c$ in the part of $K_2$ horizontally adjacent to this $a$ meso-slice, the aforementioned $c$ meso-slice of $K_1$ is horizontally followed by at least one $b$ (be it from a regular meso-slice, a border, or a code); but this cannot be. Hence an $a$ meso-slice in $K_1$ can only be horizontally followed by symbols $b$. So two columns are always aligned even if we have attractive and repulsive vertices.

The rest of the generic construction works normally.
















\subsubsection{Specificity of Case 1.3:}
\label{subsubsec:case1.3}

We focus on the specific subcase where $C^2 = (c)$ and $C^1 = (a,p,b,c,t)$, where the vertices are not necessarily all distinct, with $t$ attractive, $p$ repulsive, $(c,b) \notin \vec{E}$, loops on $b$ and $c$, and $a$ loopless. Additionally, the initial and terminal vertices of any unidirectional edge must have a loop. Five cases can happen; here we treat only the fourth one, as all the others are handled similarly. Note that the fifth case is treated among the Length 3 Cases.
\begin{itemize}
	\item All elements are distinct;
	\item $p=t$ and all others distinct;
	\item $c=t$ and all others distinct;
	\item $b=p$ and all others distinct;
	\item $b=p$ and $c=t$: this is one of the three-vertex graphs we have seen before.
\end{itemize}

Note that in all those cases, stemming from the analysis performed in Case 1.3 in \cref{subsec:case1}, our generic construction goes through except for the presence of both attractive and repulsive vertices, which endangers \cref{propapproxaligned}.
The only fact that we have to check is that we can circumvent this obstacle in a way similar to what is done in \cref{subsubsec:length3}.



If $b=p$, we obtain the following graph:

\begin{center}
	\scalebox{0.8}{
		\begin{tikzpicture}[scale=0.7]
			
			\def \radius {1.5cm};
			
			\node[draw, circle] (a) at (0:\radius) {$a$};
			\node[draw, circle] (p) at (72:\radius) {$p$};
			\node[draw, circle] (c) at (216:\radius) {$c$};
			\node[draw, circle] (t) at (288:\radius) {$t$};
			
			\draw[->, >=latex] (a) to[bend right=15] (p);
			\draw[->, >=latex] (c) to[bend right=15] (t);
			\draw[->, >=latex] (t) to[bend right=15] (a);
			
			\draw[->, >=latex] (p) to[bend right=15] (a);
			\draw[->, >=latex] (p) to[bend right=15] (c);
			\draw[->, >=latex] (p) to[bend right=15] (t);
			\draw[->, >=latex] (p) to[out=60,in=90,looseness=12] (p);
			
			\draw[->, >=latex] (a) to[bend right=15] (t);
			\draw[->, >=latex] (t) to[out=280,in=310,looseness=12] (t);
			
			\draw[->, >=latex] (c) to[out=230,in=200,looseness=12] (c);
			
			\draw[->, >=latex,color=red] (t) to[bend right=15] (p);
			\draw[->, >=latex,color=green] (t) to[bend right=15] (c);
			
		\end{tikzpicture}
	}
\end{center}

We added the red edge so that we cannot reduce the graph to a strictly smaller cycle (with three vertices) on which we already proved the generic construction worked. If there were an edge between $a$ and $c$, it would be bidirectional (since $a$ has no loop). But since the edges between $c$ and $t$ or $c$ and $p$ cannot both be bidirectional, we could then reduce the present cycle to a strictly smaller one containing a unidirectional edge, a loop and a loopless vertex. So there is no edge between $a$ and $c$. Since $p=b$, there is no edge from $c$ to $p$. The only optional edge available is $(t,c)$ (in green).

We use the same method as before: take two horizontally successive columns $K_1$ and $K_2$. In any macro-slice of $K_1$, there is a $c$ meso-slice that is above a $t$ meso-slice, itself above an $a$ meso-slice.

The $a$ meso-slice in $K_1$ cannot be horizontally followed by a border or a code meso-slice in $K_2$, because they contain $c$. The $c$ meso-slice in $K_1$ must not be horizontally followed by any symbol $p$ or $a$ in column $K_2$ -- so most of it (at least $NM/2$) is in contact neither with a border nor with a code meso-slice, but with a meso-slice made of only one symbol. If that symbol were $c$, then the aforementioned $a$ meso-slice would be in contact with a meso-slice made of the symbol $a$. This is impossible.
Hence the $c$ meso-slice of $K_1$ is mostly followed by a $t$ meso-slice of $K_2$, and from this we recover \cref{propapproxaligned}.

The three other cases are treated similarly, exploring what each $C^1$ meso-slice can be in contact with, to ensure that \cref{propapproxaligned} is valid even without all of condition $C$. Checking the rest of the properties follows Case 1.3.






\subsection{Proof of Theorem 4.1 for several strongly connected components}

If $H$ has several SCCs, the idea is to build, by taking products of SCCs, one SCC that is of none of the three types that constitute condition $D$.
We can then apply what we did in the previous subsections.

The direct product $S_1 \times S_2$ of two SCCs $S_1$ and $S_2$ is made of pairs $(s_1,s_2)$, where an edge exists between two pairs if and only if edges exist in both $S_1$ and $S_2$ between the corresponding vertices. It can be used in our construction by forcing pairs of elements $(s_1,s_2) \in S_1 \times S_2$ to be vertically one on top of the other.

Since $H$ does not verify condition $D$, it has a non-reflexive SCC $S_1$, a non-symmetric SCC $S_2$ and a non-state-split SCC $S_3$ (possibly not all distinct). But then:

\begin{itemize}
	\item Since $S_1$ is non-reflexive, no SCC of $S_1 \times S_2 \times S_3$ is reflexive. Indeed, since $S_1$ is strongly connected, all vertices of $S_1$ are represented in any SCC $C$ of that graph product, meaning that for any $s_1 \in S_1$ there is at least one vertex of the form $(s_1,*,*)$ in $C$. But if $C$ had a loop on all its vertices, then in particular $S_1$ would be reflexive.
	
	\item Similarly, since $S_2$ is non-symmetric, no SCC of $S_1 \times S_2 \times S_3$ is symmetric.
	
	\item Finally, since $S_3$ is non-state-split, no SCC of $S_1 \times S_2 \times S_3$ is a state-split cycle. Indeed, suppose $S$ is such a state-split SCC of the direct product. It can be written as a collection of classes $(V_i)_{i \in I}$ of elements from $S_1 \times S_2 \times S_3$ that we can project onto $S_3$, getting new classes $(W_i)_{i \in I}$, with some elements of $S_3$ possibly appearing in several of these.
	Let $c$ be any vertex in $S_3$ that appears at least twice \emph{with the least difference of indices between two classes where it appears}; say $c \in W_i$ and $c \in W_{i+k}$. Since $S$ is state-split, the elements of $W_{i+1}$ are exactly the elements of $S_3$ to which $c$ leads. But the same holds for $W_{i+k+1}$. Hence $W_{i+1} = W_{i+k+1}$. From this we deduce that $W_i=W_{i+k}$ for any $i$, using the fact that indices are modulo $|I|$. Since $k$ is the smallest possible distance between classes having a common element, the classes $(W_i)_{i \in \{0,\dots,k-1\}}$ are all disjoint; and they obviously contain all vertices from $S_3$. Now simply consider these classes $W_0$ to $W_{k-1}$: they witness that $S_3$ is state-split, a contradiction.
\end{itemize}

























\section{Some consequences of classical problems on two-dimensional subshifts under interplay between horizontal and vertical conditions}
\label{sec:consequences}

\subsection{Periodicity}

For $x \in X$ a two-dimensional subshift, we say that $x$ is \emph{periodic} (of period $\vec{v}$) if there exists $\vec{v} \in \mathbbm{Z}^2 \setminus \{(0,0)\}$ such that $\forall (i, j) \in \mathbbm{Z}^2, x_{(i,j)} = x_{(i,j)+\vec{v}}$. In the more general setting of SFTs on groups, this is called weak periodicity.

\begin{corollary}
	\label{coroaperiodic}
	Let $H$ be a one-dimensional nearest-neighbor SFT.
	
	\begin{center}
		$X_{H,V}$ is empty or contains a periodic configuration for all one-dimensional SFTs $V$
		
		$\Leftrightarrow$ $\mathcal{G}(H)$ verifies condition $D$.
	\end{center}
\end{corollary}

\begin{proof}
	If $\mathcal{G}(H)$ verifies condition $D$, then, as is detailed in the proof of \cref{th:DP}, whatever the chosen $V$ may be, either $X_{H,V}$ is empty, or we can find a patch $P$ that respects the local rules of $X_{H,V}$ and tiles the plane periodically.
Hence $X_{H,V}$ admits a periodic configuration.
	
	If $\mathcal{G}(H)$ does not verify condition $D$, then, using \cref{th:root}, we know that for any two-dimensional SFT $Y$ with no periodic configuration, there exists some one-dimensional SFT $V_Y$ such that $X_{H,V_Y}$ is an $(m,n)$th root of $Y$.
	
	We consider $Y$ a two-dimensional SFT with no periodic configuration (see \cite{dprobinson} for instance). Then, naming $V_Y$ the corresponding one-dimensional SFT from \cref{th:root}, we know that there exists a continuous $\psi\colon Y \hookrightarrow X_{H,V_Y}$ with $\sqcup_{0 \leq i < m, 0 \leq j < n} \sigma^{(i,j)}(\psi(Y)) = X_{H,V_Y}$ for some integers $m$ and $n$. Note that we use $\psi$, the inverse of the map $\phi$ in the definition of an $(m,n)$th root, as the reasoning is easier with it.
	
	If $X_{H,V_Y}$ contained a $\sigma$-periodic configuration, then $\sqcup_{0 \leq i < m, 0 \leq j < n} \sigma^{(i,j)}(\psi(Y))$ would, and so $\psi(Y)$ would too (since configurations in $\sqcup_{0 \leq i < m, 0 \leq j < n} \sigma^{(i,j)}(\psi(Y))$ are merely translates of the ones in $\psi(Y)$).
	
	Call $\psi(y)$ such a periodic configuration, with $y \in Y$. There exists some $\vec{v} = (a,b) \in \mathbbm{Z}^2 \setminus \{(0,0)\}$ such that $\sigma^{\vec{v}}(\psi(y)) = \psi(y)$. But consequently, $\sigma^{mn\vec{v}}(\psi(y)) = \psi(y)$, that is, $\sigma^{(amn,bmn)}(\psi(y)) = \psi(y)$. Using \cref{th:root} again, we obtain that $\psi(\sigma^{(an,bm)}(y)) = \psi(y)$. $\psi$ being injective, we finally get:
	\[
	\sigma^{(an,bm)}(y) = y.
	\]
	We found a periodic configuration in $Y$. This being impossible, we conclude that $X_{H,V_Y}$ contains no periodic configuration.
\end{proof}





\subsection{The Domino Problem}
\label{DominoProblemCombined}

We now consider the Domino Problem when horizontal and vertical constraints interact. We want to understand when two one-dimensional SFTs are compatible to build a two-dimensional SFT, and by extension where the frontier between decidability (one-dimensional) and undecidability (two-dimensional) lies. This question is notably reflected by the following adapted version of the Domino Problem:

\begin{definition}
	Let $H \subset \mathcal{A}^\mathbbm{Z}$ be an SFT. The \emph{Domino Problem depending on $H$} is the language
	\[DP_I(H):=\{ \langle V \rangle \mid V \subset \mathcal{A}^\mathbbm{Z} \text{ is an SFT and } X_{H,V} \neq \emptyset\},\]
	where $\langle V \rangle$ denotes an encoding of the SFT $V$.
\end{definition}

\begin{remark}
	Just as for $DP_h(H)$ from \cref{subsec:horizontalconstraints}, this problem is always defined for a given $H$.
\end{remark}

From this definition and \cref{sec:simulation}, we deduce:

\begin{theorem}
	\label{th:DP}
	Let $H$ be a nearest-neighbor one-dimensional SFT.
	
	\begin{center}
		$DP_I(H)$ is decidable $\Leftrightarrow$ $\mathcal{G}(H)$ verifies condition $D$.
	\end{center}
\end{theorem}

\begin{proof}
	Proof of $\Leftarrow$: assume $\mathcal{G}(H)$ verifies condition $D$. Then its SCCs share a common type, be it reflexive, symmetric, or state-split cycle.
For each of these three cases, we produce an algorithm that takes as input a one-dimensional SFT $V \subset \mathcal{A}^\mathbbm{Z}$, and that returns \texttt{YES} if $X_{H,V}$ is nonempty, and \texttt{NO} otherwise.
	
	Let $M$ be the maximal size of forbidden patterns in $\mathcal{F}_V$ (since $V$ is an SFT, such an integer exists).
	\begin{itemize}			
		\item If $\mathcal{G}(H)$ has state-split cycle type SCCs: let $L$ be the LCM of the numbers of classes $V_i$ in each component.
		If there is no rectangle of size $L \times M(|\mathcal{A}|^{LM}+1)$ respecting the local rules of $X_{H,V}$ and containing no transient element, then answer \texttt{NO}. Indeed, any configuration in $X_{H,V}$ contains valid rectangles as large as we want that do not contain transient elements.
		If there is such a rectangle $R$, then by the pigeonhole principle it contains at least twice the same rectangle $R^\prime$ of size $L \times M$. To simplify the writing, we assume that the rectangle that repeats is the one of coordinates $[1,L] \times [1,M]$ inside $R$, where $[1,L]$ and $[1,M]$ are intervals of integers, and that it can be found again with coordinates $[1,L] \times [k,k+M-1]$. Otherwise, we simply truncate a part of $R$ so that this holds.
		
		Define $P := R|_{[1,L] \times [1,k+M-1]}$.
		Since $V$ has forbidden patterns of size at most $M$, and since $R$ respects our local rules, $P$ can be vertically juxtaposed with itself (overlapping on $R^\prime$).
		
		$P$ can also be horizontally juxtaposed with itself (without overlap). Indeed, one line of $P$ uses only elements of one SCC of $H$ (since elements of two different SCCs cannot be juxtaposed horizontally, and we banned transient elements). Since $L$ is a multiple of the length of all cycle classes, the first element in a given line can follow the last element in the same line. Hence all lines of $P$ can be juxtaposed with themselves.
		
		As a conclusion, $P$ is a valid patch that can tile $\mathbbm{Z}^2$ periodically. Therefore, $X_{H,V}$ is nonempty; return \texttt{YES}.
		
		\item If $\mathcal{G}(H)$ has symmetric type SCCs, the construction is similar, but this time we build a rectangle $R$ of size $2 \times M(|\mathcal{A}|^{2M}+1)$. Either we cannot find one and return \texttt{NO}; or we can find one, extract from it a patch that tiles the plane periodically, and return \texttt{YES}.
		
		\item Finally, if $\mathcal{G}(H)$ has reflexive type SCCs, the construction is even simpler than before. Build a rectangle $R$ of size $1 \times M(|\mathcal{A}|^{M}+1)$; the rest of the reasoning is identical.
	\end{itemize}
	The proof of $\Rightarrow$ is due to \cref{th:root}, and is done by contraposition. If $\mathcal{G}(H)$ does not verify condition $D$, then for any Wang shift $W$ we can algorithmically build some one-dimensional SFT $V_W$ such that $X_{H,V_W}$ is a root of $W$, see \cref{th:root}. If we were able to solve $DP_I(H)$, then there would exist a Turing Machine $\mathcal{M}$ able to tell us if $X_{H,V}$ is empty for any one-dimensional SFT $V$. But as a consequence we could build a Turing Machine $\mathcal{N}$ taking as input any Wang shift $W$, and building the corresponding $V_W$ following \cref{sec:simulation}. Then, by running $\mathcal{M}$, $\mathcal{N}$ would be able to tell us if $X_{H,V_W}$ is empty or not.
Then it could decide whether $W$ is empty; but deciding the emptiness of every Wang shift amounts to $DP(\mathbbm{Z}^2)$ being decidable, which is false. Hence $DP_I(H)$ is undecidable as well.
\end{proof}

\begin{remark}
	A pair of conjugate SFTs $H_1$ and $H_2$ may yield different results, with $DP_{I}(H_1)$ decidable but $DP_{I}(H_2)$ undecidable. Consider for instance the following Rauzy graphs and maps on finite words (extensible to biinfinite words):
	
	\begin{minipage}{0.55\textwidth}
		\begin{tikzpicture}
		\begin{scope}[xshift=-4cm]
		\node[draw, circle] (a) {$a$};
		\node[draw, circle] (b) [right = 0.5cm of a] {$b$};
		
		\draw[->, >=latex] (b) to[bend left=20] (a);
		\draw[->, >=latex] (a) to[bend left=20] (b);
		
		\draw[->, >=latex] (b) to [loop right] (b);
		\draw[->, >=latex] (a) to [loop left] (a);
		\end{scope}
		
		\begin{scope}
		\node[draw, circle] (b) {$\beta$};
		\node[draw, circle] (a) [below left = 0.5cm and 0.25cm of b] {$\alpha$};
		\node[draw, circle] (c) [below right = 0.5cm and 0.25cm of b] {$\gamma$};
		
		\draw[->, >=latex] (a) to[bend right=20] (b);
		
		\draw[->, >=latex] (c) to[bend right=20] (b);
		\draw[->, >=latex] (b) to[bend right=20] (a);
		\draw[->, >=latex] (a) to[bend right=20] (c);
		
		\draw[->, >=latex] (b) to[loop above] (b);
		\draw[->, >=latex] (c) to [out=330,in=300,looseness=8] (c);
		\end{scope}
		\end{tikzpicture}
	\end{minipage}
	\begin{minipage}{0.35\textwidth}
		$\phi:
		\begin{cases}
		aa\mapsto \gamma \\
		ab\mapsto \beta \\
		ba\mapsto \alpha \\
		bb\mapsto \beta \\
		\end{cases}
	
		\psi:
		\begin{cases}
		\alpha\mapsto a \\
		\beta\mapsto b \\
		\gamma\mapsto a \\
		\end{cases}$
	\end{minipage}
	
	These graphs describe conjugate SFTs through these maps. However, the first graph has decidable $DP_I(H)$ while the second has not, by \cref{th:DP}.
\end{remark}












\section{Impact of the interplay between horizontal and vertical conditions on the algorithmic complexity of the entropy}
\label{EntropyCombined}


\subsection{Horizontal constraints without condition D}
\label{nocondD}

From the construction in \cref{sec:simulation} we deduce the following:

\begin{proposition}
	\label{prop:XHV}
	Let $H$ be a one-dimensional nearest-neighbor SFT that does not satisfy condition $D$. Then there exists a one-dimensional SFT $V$ such that $h(X_{H,V})$ is not computable. \end{proposition}

\begin{proof}
Let $W$ be a Wang shift with a non-computable entropy. By \cref{th:root} and \cref{rootentropy}, there exists a one-dimensional SFT $V$ such that $h(W)= KM^2N h(X_{H,V})$, with $N$ the number of elements in the alphabet of $W$, and $K$ and $M$ defined as in \cref{subsec:core}, depending only on $H$. Therefore, $h(X_{H,V})$ is not computable.
\end{proof}

\begin{remark}
	For a given $H$, the entropies attainable by varying $V$ are still not characterized in general, and this seems difficult since, in the construction of the root, the entropy decreases as the number of Wang tiles increases.
In fact, the previous proof leads one to expect that, when condition $D$ is not satisfied, there exists a constant $C_H$ such that for every $\Pi_1$-computable number $h$ smaller than $C_H$ there exists a vertical one-dimensional subshift $V$ with $h(X_{H,V})=h$. However, we do not quite obtain this result, since if the Kolmogorov complexity of $h$ is large, then the cardinality of the alphabet of the Wang subshift that has $h$ as entropy, named $N$ in the previous proof, is also large.
	
	Nevertheless, in the case where there are cycles in the Rauzy graph defining $H$ that do not appear in the coding of the Wang subshift used in \cref{th:root}, we can encode $h$ in a given part of $X_{H,V}$, diluted by the necessarily large macro-slices, but then add a noisy zone where these cycles can be used to increase the entropy, as is done in the proof of \cref{th:RealizationEntropy}.
\end{remark}



\subsection{Condition D, computable entropy}
\label{condDcomputable}

A one-dimensional SFT $H$ that verifies condition $D$ can yield computable entropies. Indeed, one has the following immediate result, which allows only a small range of available entropies:

\begin{proposition}
	The entropies $h(X_{\mathcal{A}^\mathbbm{Z},V})$ accessible for SFTs $V \subset \mathcal{A}^\mathbbm{Z}$ are exactly the entropies accessible for one-dimensional SFTs with alphabet $\mathcal{A}$. These are among the values $\log_2(\lambda) \leq \log_2(|\mathcal{A}|)$, where $\lambda$ is a Perron number. In particular, they are all computable.
\end{proposition}

\begin{proof}
	We use the fact that $N_{X_{\mathcal{A}^\mathbbm{Z},V}}(n,n) = N_V(n)^n$, since any two $n$-long columns can be juxtaposed horizontally here.
	We conclude using \cite{LindMarcus}, which states that the available entropies for one-dimensional SFTs are the $\log_2(\lambda)$ where $\lambda$ is a Perron number.
\end{proof}

\begin{remark}
	An open question remains: what exactly are these accessible $\log_2(\lambda)$ obtained for a fixed size of alphabet?
	\cref{th:bonus} -- and \cite{HM} -- give an answer in dimension $2$, but this exact question is, to our knowledge, not answered in dimension $1$.
\end{remark}


A similar result holds for a larger class of graphs, corresponding to one of the possibilities for respecting condition $D$:

\begin{proposition}
	Let $H \subset \mathcal{A}^\mathbbm{Z}$ be a nearest-neighbor SFT whose Rauzy graph is such that each of its SCCs is a state-split cycle. Then for all SFTs $V \subset \mathcal{A}^\mathbbm{Z}$, $h(X_{H,V})$ is computable.
\end{proposition}

\begin{proof}
	Consider that $\mathcal{G}(H) = \bigsqcup_{i=0}^{p-1} U_i$ is a state-split cycle of length $p > 0$ -- the proof for a graph made of several state-split cycles is similar and briefly mentioned at the end. We prove that for any $V \subseteq \mathcal{A}^\mathbbm{Z}$, $h(X_{H,V})$ is computable. Let $V$ be such an SFT.
	
	Let $\phi\colon \mathcal{A}^\mathbbm{Z} \rightarrow \{0,\dots,p-1\}^\mathbbm{Z}$ be the factor map that sends any symbol in the component $U_i$ to $i$. We also denote by $\phi$ its restriction to $\mathcal{A}^*$.
	
	For $n \in \mathbbm{N}$, $j \in \{0,\dots,p-1\}$ and $u \in \{0,\dots,p-1\}^n$, let $u^j = u + (j,\dots,j)$, the addition being made modulo $p$.
We also define $S_u = \{ w \in \mathcal{L}_V(n) \mid \phi(w)=u \}$ and $N_u = |S_u|$.
	
	We have the following, for integers $m$ and $n$:
	
	\[
	N_{X_{H,V}} (pm,n) = \sum_{u \in \{0,\dots,p-1\}^n} \left( \prod_{j=0}^{p-1} N_{u^j} \right)^m.
	\]
	
	Indeed, a rectangle of size $pm \times n$ in $X_{H,V}$ is given by a vertical word $u$ in $\{0,\dots,p-1\}^n$ that fixes, for the whole rectangle, where the elements of each $U_i$ will be. Then column $j$ can be made of any succession of $n$ symbols that respects $u^j$ -- that is, symbols in the correct $U_i$'s.
	
	Therefore, we have:
	\begin{align*}
		h(X_{H,V}) & = \lim\limits_{n \to +\infty} \dfrac{\log_2\left(N_{X_{H,V}}(pn,n)\right)}{pn^2}\\
		& = \lim\limits_{n \to +\infty} \dfrac{\log_2\left(\sum_{u \in \{0,\dots,p-1\}^n} \left( \prod_{j=0}^{p-1} N_{u^j} \right)^n\right)}{pn^2}\\
		& = \lim\limits_{n \to +\infty} \dfrac{\log_2\left( \left(\max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j}\right)^n \sum_{u \in \{0,\dots,p-1\}^n} \dfrac{\left( \prod_{j=0}^{p-1} N_{u^j} \right)^n}{\left(\max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j}\right)^n} \right)}{pn^2}\\
		& = \lim\limits_{n \to +\infty} \dfrac{\log_2\left( \left(\max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j}\right)^n \right)}{pn^2} + \lim\limits_{n \to +\infty} \dfrac{\log_2\left( \sum_{u \in \{0,\dots,p-1\}^n} \dfrac{\left( \prod_{j=0}^{p-1} N_{u^j} \right)^n}{\left(\max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j}\right)^n} \right)}{pn^2}\\
		& = \lim\limits_{n \to +\infty} \dfrac{\log_2\left( \max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j} \right)}{pn}
	\end{align*}
	since in the penultimate line the second term is bounded below by $0$ (at least one $u \in \{0,\dots,p-1\}^n$ in the index of the sum reaches the maximum of the denominator, so the sum is at least $1$), and above by $\frac{\log_2(p^n)}{pn^2}$, which tends to $0$.
	
	With the previous computation, it is also clear that for all $n \in \mathbbm{N}$,
	
	\[
	\dfrac{\log_2\left(N_{X_{H,V}}(pn,n)\right)}{pn^2} \geq \dfrac{\log_2\left( \max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j} \right)}{pn}
	\]
	
	hence
	
	\[
	h(X_{H,V}) \geq \lim\limits_{n \to +\infty} \dfrac{\log_2\left( \max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j} \right)}{pn}.
	\]
	
	From this we can deduce that the sequence $\dfrac{\log_2\left( \max\limits_{v \in \{0,\dots,p-1\}^n} \prod_{j=0}^{p-1} N_{v^j} \right)}{pn}$ actually converges to $h(X_{H,V})$ from below. There is a Turing Machine, constructed algorithmically from $V$, that computes any term of it. Indeed, for any $v \in \{0,\dots,p-1\}^n$, $N_v$ can be computed because it depends on a one-dimensional SFT. Then the maximum over a finite set and all the other operations are also computable (a short sketch is given below). As a consequence, $h(X_{H,V})$ is left-recursively enumerable. Being, by \cite{HM}, right-recursively enumerable, it is computable.
	
	The proof is similar if $\mathcal{G}(H)$ is made of, say, $k>1$ state-split cycles $C_j$ numbered from $1$ to $k$: take $p$ to be the LCM of their periods; $\phi$ projects words of $\mathcal{A}^n$ onto $(\{1,\dots,k\} \times \{0,\dots,p-1\})^n$, indicating for each letter to which $C_j$ it belongs, then to which $U^j_i$ of that $C_j$.
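Before finishing the several-cycles case, here is the announced sketch of the computation of one term of the lower-bounding sequence, in Python. It assumes a helper \texttt{count\_N(V, v)} returning $N_v$, which is computable from the forbidden patterns of the one-dimensional SFT $V$; this helper and all names are illustrative.

\begin{verbatim}
import itertools, math

# Sketch: n-th term  log2( max_v prod_j N_{v^j} ) / (p*n)
# of the lower-bounding sequence, for a state-split cycle of length p.
def term(V, p, n, count_N):
    best = 0
    for v in itertools.product(range(p), repeat=n):
        prod = 1
        for j in range(p):
            v_j = tuple((x + j) % p for x in v)  # v^j = v + (j,...,j) mod p
            prod *= count_N(V, v_j)              # N_{v^j}
        best = max(best, prod)
    return -math.inf if best == 0 else math.log2(best) / (p * n)
\end{verbatim}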
Once again, one vertical word of length $n$ is enough to reconstruct, for any $m \in \mathbbm{N}$, a $pm \times n$ rectangle, except for the precise choice of an element in each $U^j_i$. The rest of the computation is similar.
\end{proof}














\subsection{Condition D, uncomputable entropy}
\label{condDuncomputable}


Other horizontal constraints that verify condition $D$, such as some of reflexive type, allow for a greater range of accessible entropies. In what follows, we briefly investigate transformations of graphs of one-dimensional SFTs that originally \emph{do not} verify condition $D$ into graphs that do verify condition $D$, yet are robust enough that the generic construction of \cref{subsec:core} can still mostly be performed: one keeps the capacity to obtain, up to a multiplicative factor, right-recursively enumerable entropies by choosing adequate vertical constraints.

\begin{proposition}
	\label{prop:loops}
	Let $H \subset \mathcal{A}^\mathbbm{Z}$ be a nearest-neighbor SFT whose Rauzy graph does not respect condition $D$, and such that it has either no attractive or almost-attractive vertex, or no repulsive or almost-repulsive vertex -- where \emph{almost} means that the only missing edge is a loop on the designated vertex.
	
	Let $\tilde{H} \subset \mathcal{A}^\mathbbm{Z}$ be the nearest-neighbor SFT obtained by adding a loop to every vertex of the Rauzy graph of $H$. Then $\tilde{H}$ verifies condition $D$, but there exists a subshift of finite type $V$, which can be obtained algorithmically with $H$ as input, such that $h(X_{\tilde{H},V})$ is not computable.
\end{proposition}

\begin{proof}[Proof (Sketched)]
	First, one may assume that the Rauzy graph of $H$ has a loop on every vertex except one: such a graph also yields $\tilde{H}$ when a loop is added to its last loopless vertex, resulting in the same possible constructions -- and entropies -- with $\tilde{H}$ as horizontal constraints.
	
	When trying to apply the generic construction from \cref{subsec:core} to such an $H$, one ends up in Case 1.1, where the only risk in the construction is the presence of both attractive and repulsive vertices. This is prevented here by the hypotheses of the proposition. Therefore, by \cref{th:root}, for any two-dimensional SFT $Y \subset \mathcal{B}^{\mathbbm{Z}^2}$, there exists a one-dimensional SFT $V_Y\subset\mathcal{A}^\mathbbm{Z}$ such that $X_{H,V_Y}$ is an $(m,n)$th root of $Y$ for some $(m, n) \in \mathbbm{N}^2$. Furthermore, $m$, $n$ and $V_Y$ can be computed algorithmically.
	
	However, the construction of $X_{H,V_Y}$ has to be slightly adapted, because if one were to follow the construction of \cref{sec:simulation} as is, $X_{\tilde{H}, V_Y}$ could glue an $(i,j)$ macro-slice to the following types of macro-slices:
	\begin{enumerate}
		\item an $(i+1,j+1)$ macro-slice;
		\item another $(i,j)$ macro-slice;
		\item an $(i+1,j+1)$ macro-slice, but with the second slice exactly one cell down;
		\item another $(i,j)$ macro-slice, but with the second slice exactly one cell up;
		\item an $(i+1,j)$ macro-slice;
		\item an $(i,j+1)$ macro-slice;
		\item an $(i+1,j)$ macro-slice, but with the second slice exactly one cell down;
		\item an $(i,j+1)$ macro-slice, but with the second slice exactly one cell up.
	\end{enumerate}
	Case 1 is the one supposed to happen. Case 2 will be acceptable because it does not bring entropy, as seen below.
Cases 5, 6, 7 and 8 collapse onto the other cases because $H$ has a $C^2$ made of a unique element in its generic construction, making the $j$ index irrelevant.
	Cases 3 and 4 are the ones we have to deal with. In short, this is done with two things:
	\begin{itemize}
		\item two extra symbols at the bottom of any $(i,j)$-coding micro-slice, forced to be $c^1_i$;
		\item and forcing an $(i+1,j+1)$ macro-slice below any $(i,j)$ macro-slice, instead of having each column based on a single pair $(i,j)$.
	\end{itemize}
	These modifications forbid one-cell shifts between two columns: otherwise, the $C^1$ elements at the bottom of each coding micro-slice would be in contact with another element of $C^1$ at distance $2$, resulting in a uniform shortcut of length $2$ in the Rauzy graph of $H$, which is forbidden.
	In what follows, we suppose that $V_Y$ is modified accordingly, so that we obtain a new $X_{H,V_Y}$ and only Cases 1 and 2 of the previous enumeration happen for the corresponding $X_{\tilde{H},V_Y}$.
	
	When performing this adapted construction, only the dividing constants are modified in \cref{prop:XHV} (because the size of the macro-slices is modified). The result itself is unchanged: notably, it still shows that a non-computable entropy $h(Y)$ yields a non-computable entropy $h(X_{H,V_Y})$. The only part left is to prove that $h(X_{\tilde{H},V_Y}) = h(X_{H,V_Y})$. 
	
	Since the only difference between $X_{\tilde{H},V_Y}$ and $X_{H,V_Y}$ is the possibility to repeat a column several times in a row, we have
	\[
	N_{X_{\tilde{H},V_Y}}(n,n) \geq N_{X_{H,V_Y}}(n,n)
	\]
	but also, counting how many distinct column types the $n$ columns can take and with which multiplicities $i_1, \dots, i_k$,
	\[
	N_{X_{\tilde{H},V_Y}}(n,n) \leq \sum_{k=1}^{n} \sum_{i_1 + \dots + i_k = n} N_{X_{H,V_Y}}(k,n) \leq \sum_{k=1}^n \binom{n+k-1}{n} N_{X_{H,V_Y}}(n,n) = \binom{2n}{n+1} N_{X_{H,V_Y}}(n,n).
	\]
	Since $\log_2\binom{2n}{n+1}$ is a $\mathcal{O}(n)$, applying $\lim_n \dfrac{\log_2(\cdot)}{n^2}$ to these bounds shows that the entropy is the same.
\end{proof}


\section{Introduction}~\label{Introduction}
Turbo codes were first introduced by Berrou {\em et al.}~\cite{berrou} in 1993 and
have been extensively studied since then. It has been shown~\cite{costello} that these codes, with an
iterative turbo decoding algorithm, can achieve very good error performance close
to the Shannon capacity. Also, there has been interest in constructing lattices with high coding gain,
low kissing number and low decoding complexity~\cite{IrregularLDPC, sadeghi, LDLC}.
The lattice version of the channel coding problem is
to find an $n$-dimensional lattice $\Lambda$ which attains good error performance
for a given value of the volume-to-noise ratio (VNR)~\cite{conway, forneyspherebound, tarokh}.
Poltyrev~\cite{polytrev} suggests employing coding without restriction
for lattices on the AWGN channel. This means communicating
with no power constraints.
The existence of ensembles of lattices which can achieve the generalized capacity
on the AWGN channel without restriction is also proved in~\cite{polytrev}. Forney {\em et al.}~\cite{forneyspherebound}
restate the above concepts by using coset codes and multilevel
coset codes.
At the receiver of communication without restriction for lattices,
the main problem is to find the closest vector of $\Lambda$ to a given point ${\bf r}\in\mathbb{R}^n$.
This is called lattice decoding of $\Lambda$.
Efficient lattice decoders are known for low dimensions~\cite{Ling09,Viterbo99}.

There is a wide range of applicable lattices in communications,
including the well-known root lattices~\cite{conway}, the recently
introduced low-density parity-check lattices~\cite{sadeghi} (LDPC
lattices) and the low-density lattice codes~\cite{LDLC} (LDLC lattices).
The former lattices have been extensively treated in the 1980's
and 1990's~\cite{conway}. After the year 2000, two classes of
lattices based on the primary idea of LDPC codes have been
established. These types of lattices have attracted a lot of
attention in recent years~\cite{IrregularLDPC,choi,sadeghi2,Harshan13}.
Hence, constructing lattices based on turbo codes can be a promising research topic.

In the present work, we borrow the idea of turbo codes and construct a new class of
lattices that we call {\em turbo lattices}. In fact, the results by Forney {\em et al.} in~\cite{forneyspherebound}
motivate us to apply Construction D lattices to design turbo lattices.
They proved the existence of sphere-bound-achieving lattices by means of Construction D
lattices. This leads one to use the Construction D method along
with well-known turbo codes to produce turbo lattices.
This is the first usage of turbo codes in constructing lattices.
We benefit from structural properties of lattices and turbo
codes to investigate and evaluate the basic parameters of turbo
lattices such as minimum distance, volume, coding gain and
kissing number.

Various types of turbo codes have been constructed in terms of properties of their underlying
constituent encoders and interleavers~\cite{costello}. For example,
encoders can be either block or convolutional codes, and
interleavers can be deterministic, pseudo-random or random~\cite{recent}.
Since Construction D deals with block codes, we treat turbo codes as block codes.
Therefore, it seems more reasonable to use terminated convolutional
codes. Since we use recursive and non-recursive convolutional codes,
different types of termination methods can be applied to these component convolutional codes.
Hence, we are interested in terminating trellises for
both feed-back~\cite{solomon,wiss} and feed-forward~\cite{costello}
convolutional codes. To avoid rate loss, we employ tail-biting
convolutional codes for short-length turbo lattices.
Zero-tail convolutional codes~\cite{solomon,wiss} are also used as building blocks of turbo codes in the
construction of lattices of larger sizes.

There are algorithms such as the generalized min-sum algorithm~\cite{sadeghi},
iterative decoding algorithms~\cite{choi} and the algorithm in~\cite{LDLC} for decoding
newly introduced lattices.
The basic idea behind these algorithms
is to implement min-sum and sum-product algorithms and their generalizations.
Since we use turbo codes to construct
turbo lattices, it is more reasonable to benefit
from the underlying turbo structure of these lattices.
In this case, we have to somehow relate the decoding of turbo lattices
to the iterative turbo decoders~\cite{berrou} for turbo codes.
This results in a multi-stage decoding algorithm based on
iterative turbo decoders, similar to the one given in~\cite{forneyspherebound}.

We summarize our contributions as follows.
\begin{itemize}
\item We generalize the minimum distance formula for every
Construction D lattice by removing a restricting condition on the minimum distance of its underlying codes.
An upper bound for the kissing number of these lattices is also derived.
\item We construct nested turbo codes and establish the concept of turbo lattices. Various crucial parameters of
these lattices such as minimum distance, coding gain and kissing number are investigated.
\item A multi-stage turbo lattice decoder is introduced. The error performance of turbo lattices
is given and compared with other well-known LDPC lattices and LDLC lattices.
\end{itemize}

The present work is organized as follows. Two methods of constructing
lattices, Construction A and D, are reviewed in Section~\ref{BackgroundsonLattices}.
The crucial parameters which can be used to measure the
efficiency of lattices are also explained in this section. In Section~\ref{ConvolutionalandTurboCodes} we
introduce nested interleavers in a manner that can be used to build nested turbo codes.
Section~\ref{NestedTCandTL} is devoted to the construction of nested turbo codes and consequently
the construction of turbo lattices.
Section~\ref{ParameterAnalysis} is dedicated to the evaluation
of the critical parameters of turbo lattices
based on the properties of their underlying turbo codes. In Section~\ref{DecodingAlgorithm}
a multi-stage turbo lattice decoding algorithm is explained.
In Section~\ref{SimulationResultsPerformanceofTL}
we present simulation results.
We conclude with final remarks on turbo lattices and further
research topics in Section~\ref{Conclusion}.

\section{Backgrounds on Lattices}~\label{BackgroundsonLattices}
In order to make this work self-contained, a background on
lattices is essential. The general required information about
critical parameters of Construction A and Construction D,
as well as parameters for measuring the efficiency of lattices,
is provided below.
\subsection{General Notations for Lattices}~\label{lattices}
A discrete additive subgroup $\Lambda$ of $\mathbb{R}^n$
is called a \emph{lattice}. Since $\Lambda$ is discrete, it can be generated by
$m\leq n$ linearly independent vectors ${\bf b}_1,\ldots,{\bf b}_m$ in
$\mathbb{R}^n$.
The set $\\{{\\bf b}_1,\\ldots,{\\bf b}_m\\}$ is called a \\emph{basis}\nfor $\\Lambda$.\nIn the rest of this paper, we assume that $\\Lambda$ is an\n$n$-dimensional full rank ($m=n$) lattice over $\\mathbb{R}^n$.\nBy using the Euclidean norm, $\\|.\\|$, we can define a metric on\n$\\Lambda$; that is, for every ${\\bf x},{\\bf y}\\in\\Lambda$ we have $d({\\bf x},{\\bf y})=\\|{\\bf x}-{\\bf y}\\|^2$.\nThe \\emph{minimum distance} of $\\Lambda$, $d_{\\min}(\\Lambda)$,\nis\n$$d_{\\min}(\\Lambda)=\\min_{{\\bf x}\\neq{\\bf y}}\\{d({\\bf x},{\\bf y})|{\\bf x},{\\bf y}\\in \\Lambda\\}.$$\nLet us put $\\{{\\bf b}_1,\\ldots,{\\bf b}_n\\}$ as the rows of a matrix ${\\bf B}$, then\nwe have $\\Lambda=\\{{\\bf x}\\colon {\\bf x}={\\bf z}{\\bf B},~{\\bf z}\\in \\mathbb{Z}^n\\}$.\nThe matrix ${\\bf B}$ is called a \\emph{generator matrix} for the lattice $\\Lambda$. The \\emph{volume}\nof a lattice $\\Lambda$ can be defined by $\\det\\left({\\bf B}{\\bf B}^T\\right)$ where\n${\\bf B}^T$ is the transpose of ${\\bf B}$. The volume of $\\Lambda$\nis denoted by $\\det(\\Lambda)$.\nThe {\\em coding gain} of\na lattice $\\Lambda$ is defined by\n\\begin{equation}~\\label{eq:codinggain}\n\\gamma(\\Lambda)=\\frac{d_{\\min}^2(\\Lambda)}{\\det(\\Lambda)^{2\/n}},\n\\end{equation}\nwhere $\\det(\\Lambda)^{2\/n}$ is itself called the\n\\emph{normalized volume} of $\\Lambda$.\nThis volume may be regarded as the volume of $\\Lambda$ per two dimensions.\nThe coding gain can be used as a crude measure of the performance of a lattice.\nFor any $n$, $\\gamma\\left(\\mathbb{Z}^n\\right)=1$. An uncoded system\nmay be regarded as the one that uses a constellation based on $\\mathbb{Z}^n$.\nThus the coding gain of an arbitrary lattice $\\Lambda$ may be considered\nas the gain using a constellation based on $\\Lambda$ over an uncoded system\nusing a constellation based on $\\mathbb{Z}^n$~\\cite{forneyspherebound}.\nTherefore, coding gain is the saving in average of energy due to using $\\Lambda$\nfor the transmission instead of using the lattice $\\mathbb{Z}^n$~\\cite{forneymodulation}.\nGeometrically, coding gain measures the increase\nin density of $\\Lambda$ over integer lattice $\\mathbb{Z}^n$~\\cite{conway}.\n\nIf one put an $n$-dimensional sphere of\nradius $d_{\\min}(\\Lambda)\/2$ centered at\nevery lattice point of $\\Lambda$,\nthen the {\\em kissing number} of $\\Lambda$\nis the maximum number\nof spheres that touch a fixed sphere.\nHereafter we denote the kissing number of the lattice $\\Lambda$\nby $\\tau(\\Lambda)$. The \\emph{normalized kissing number} of an $n$-dimensional\nlattice $\\Lambda$ is defined as\n\\begin{equation}\\label{eq:normalizedkissnum1}\n \\tau^\\ast(\\Lambda)=\\frac{\\tau(\\Lambda)}{n}.\n\\end{equation}\n\nSending points of a specific lattice in the absence of power constraints has been\nstudied. 
This is called {\\em coding without restriction}~\\cite{polytrev}.\nSuppose that the points of an $n$-dimensional lattice $\\Lambda$ are\nsent over an AWGN channel with noise variance $\\sigma^2$.\nThe {\\it volume-to-noise ratio} (VNR) of an $n$-dimensional lattice $\\Lambda$ is defined as\n\\begin{equation}~\\label{VNR}\n{\\alpha^2}=\\frac{\\det(\\Lambda)^{\\frac{2}{n}}}{2\\pi e\\sigma^2}.\n\\end{equation}\nFor large $n$, the VNR is the ratio of the normalized volume of $\\Lambda$\nto the normalized volume of a noise sphere of squared radius $n\\sigma^2$\nwhich is defined as SNR in~\\cite{sadeghi} and $\\alpha^2$ in~\\cite{forneyspherebound}.\n\nSince lattices have a uniform structure,\nwe can assume ${\\bf 0}$ is transmitted and $\\textbf{r}$\nis the received vector. Then $\\textbf{r}$ is a vector whose components are\ndistributed based on a Gaussian distribution with zero mean and variance $\\sigma^2$.\nHence construction of lattices with higher coding gain and lower normalized\nkissing number is of interest.\n\n\\subsection{Lattice Constructions}~\\label{Construction A}\nThere exist many ways to construct a\nlattice~\\cite{conway}.\nIn the following we give two algebraic constructions of lattices based\non linear block codes~\\cite{conway}. The first one is Construction A\nwhich translates a block code to a lattice.\nThen a review of Construction D is given.\nThese two constructions are the main building blocks of this work.\n\nLet $\\mathcal{C}$ be a group code over\n$G=\\mathbb{Z}_{2}\\times\\cdots\\times \\mathbb{Z}_{2}$\n, i.e. $\\mathcal{C}\\subseteq G$, with minimum distance $d_{\\min}$. Define\n$\\Lambda$ as a Construction A lattice~\\cite{conway} derived from $\\mathcal{C}$ by:\n\\begin{equation}\\label{constA}\n\\Lambda=\\{(2z_1+c_1,\\ldots,2z_n+c_n): z_i\\in\\mathbb{Z}, {\\bf c}=(c_1,\\ldots,c_n)\\in \\mathcal{C}\\}.\n\\end{equation}\nLet $\\Lambda$ be a lattice constructed using Construction A.\nThe minimum distance of $\\Lambda$ is\n\\begin{equation}~\\label{eq:dminconstA}\nd_{\\min}(\\Lambda)=\\min\\left\\{2,\\sqrt{d_{min}}\\right\\}.\n\\end{equation}\nIts coding gain is\n\\begin{equation}~\\label{eq:codinggainconstA}\n\\gamma(\\Lambda)=\n\\left\\{\\begin{array}{ll}\n4^{\\frac{k}{n}}&d_{\\min}\\geq4,\\\\\n\\frac{d_{\\min}^2(\\Lambda)}{2}4^{\\frac{k}{n}}&d_{\\min}<4,\n\\end{array}\\right.\n\\end{equation}\nand its kissing number is\n\\begin{equation}~\\label{eq:kissnumberconstA}\n\\tau(\\Lambda)=\n\\left\\{\\begin{array}{ll}\n2^{d_{\\min}}A_{d_{\\min}}&d_{\\min}<4,\\\\\n2n+16A_4&d_{\\min}=4,\\\\\n2n&d_{\\min}>4,\n\\end{array}\\right.\n\\end{equation}\nwhere $A_{d_{\\min}}$ denotes the number of codewords in\n$\\mathcal{C}$ with minimum weight $d_{\\min}$.\nThese definition and theorem can be generalized to a more\npractical and nice lattice construction.\nWe use a set of nested linear block codes to\ngive a more general lattice structure named\nConstruction D. 
These facts can be generalized to a more
practical lattice construction:
using a set of nested linear block codes, one obtains
the more general structure known as
Construction D, which plays a key role
in this work.

Let $\\mathcal{C}_0\\supseteq \\mathcal{C}_1\\supseteq \\cdots \\supseteq \\mathcal{C}_a$
be a family of $a+1$ linear codes, where $\\mathcal{C}_{\\ell}$ is an $[n,k_{\\ell},d_{\\min}^{(\\ell)}]$ code
for $1\\leq {\\ell}\\leq a$ and
$\\mathcal{C}_0$ is the $[n,n,1]$ trivial code $\\mathbb{F}_2^n$, such that
$$\\mathcal{C}_{\\ell}=\\langle{\\bf c}_1,\\ldots,{\\bf c}_{k_{\\ell}}\\rangle,$$
where $\\langle X\\rangle$ denotes the subgroup generated by $X$.
For any element ${\\bf x}=(x_1,\\ldots,x_n)\\in \\mathbb{F}^n_2$ and for
$1\\leq {\\ell}\\leq a$, consider the vector in $\\mathbb{R}^n$ of the form
$$\\frac{1}{2^{{\\ell}-1}}{\\bf x}=\\left(\\frac{x_1}{2^{{\\ell}-1}},\\ldots,\\frac{x_n}{2^{{\\ell}-1}}\\right).$$
Define $\\Lambda\\subseteq\\mathbb{R}^n$
as the set of all vectors of the form
\\begin{equation}~\\label{eq:formofconstructionD}
{\\bf z}+\\sum_{{\\ell}=1}^a\\sum_{j=1}^{k_{\\ell}}\\beta_j^{({\\ell})}\\frac{1}{2^{{\\ell}-1}}{\\bf c}_j,
\\end{equation}
where ${\\bf z}\\in2(\\mathbb{Z})^n$ and $\\beta_j^{({\\ell})}=0$ or $1$.
Here the vectors ${\\bf c}_j$ are regarded as integral vectors
in $\\mathbb{R}^n$, with components $0$ or $1$.
An integral
basis for $\\Lambda$ is given by the vectors
\\begin{equation}~\\label{eq:integralbasisforD}
\\frac{1}{2^{{\\ell}-1}}{\\bf c}_j
\\end{equation}
for $1\\leq {\\ell}\\leq a$ and $k_{{\\ell}+1}+1\\leq j\\leq k_{\\ell}$ (with $k_{a+1}=0$), plus $n-k_1$
vectors of the form $(0,\\ldots,0,2,0,\\ldots,0)$.
Equivalently, the lattice $\\Lambda$ can be represented by the following code formula
\\begin{equation}~\\label{eq:codeformulaforD}
\\Lambda=\\mathcal{C}_1+\\frac{1}{2}\\mathcal{C}_2+\\cdots+\\frac{1}{2^{a-1}}\\mathcal{C}_{a}+2(\\mathbb{Z})^n.
\\end{equation}

It is useful to bound the coding gain of $\\Lambda$.
The next theorem is cited from~\\cite{barnes}.
\\begin{Theorem}
Let $\\Lambda$ be a lattice constructed using
Construction D. Then the volume of $\\Lambda$ is
$\\det(\\Lambda)=2^{n-\\sum_{{\\ell}=1}^a k_{\\ell}}$.
Furthermore, if $d_{\\min}^{(\\ell)}\\geq\\frac{4^{\\ell}}{\\beta}$
for $1\\leq {\\ell}\\leq a$ and $\\beta=1$ or $2$,
then the squared minimum distance of $\\Lambda$ is at
least $4\/\\beta$,
and its coding gain satisfies
$$\\gamma(\\Lambda)\\geq\\beta^{-1}4^{\\sum_{{\\ell}=1}^a\\frac{k_{\\ell}}{n}}.$$
\\end{Theorem}
This theorem gives an exact formula for the determinant of
every lattice constructed using Construction
D, together with bounds on its
other important parameters, namely
minimum distance and coding gain, under an extra condition on the
minimum distances of the underlying nested codes~\\cite{conway}.

We drop this restrictive condition on the minimum distances
of the underlying nested block codes and generalize those bounds
to a more useful form. The resulting expressions for minimum distance
and coding gain are related to the underlying codes,
as we will see shortly. In addition, an upper bound on the kissing number
of every lattice generated using Construction D is derived.
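Before stating our theorem, here is a small worked instance (ours) of the integral basis~(\\ref{eq:integralbasisforD}): for the nested pair $\\mathcal{C}_1=[4,3,2]$ (even-weight code) $\\supseteq\\mathcal{C}_2=[4,1,4]$ (repetition code), so that $a=2$, $k_1=3$ and $k_2=1$, the sketch below assembles the basis and confirms $\\det(\\Lambda)=2^{n-\\sum_{\\ell}k_{\\ell}}=2^{4-4}=1$.
\\begin{verbatim}
import numpy as np

# Nested generators: C_2 = <c1> = [4,1,4], C_1 = <c1,c2,c3> = [4,3,2].
c1, c2, c3 = [1, 1, 1, 1], [1, 1, 0, 0], [1, 0, 1, 0]
n, k1, k2 = 4, 3, 1

rows = [[x / 2 for x in c1]]            # level l = 2: c_j / 2^(l-1), j = 1
rows += [c2, c3]                        # level l = 1: c_j, j = k_2+1 .. k_1
rows += [[0, 0, 0, 2]]                  # n - k_1 = 1 vector of the form 2e_i
B = np.array(rows, dtype=float)

vol = abs(np.linalg.det(B))             # = sqrt(det(B B^T)) for square B
print(vol, 2 ** (n - (k1 + k2)))        # 1.0  1
\\end{verbatim}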
\\begin{Theorem}~\\label{th:constructionDmindistkiss}
Let $\\Lambda$ be a lattice constructed based
on Construction D. Then
\\begin{itemize}
\\item{}
for the minimum distance of $\\Lambda$ we have
\\begin{equation}~\\label{eq:mindistD}
d_{\\min}(\\Lambda)=\\min_{1\\leq {\\ell}\\leq a}\\left\\{2,\\frac{\\sqrt{d_{\\min}^{(\\ell)}}}{2^{{\\ell}-1}}\\right\\},
\\end{equation}
where $d_{\\min}^{(\\ell)}$ is the minimum distance of $\\mathcal{C}_{\\ell}$ for $1\\leq {\\ell}\\leq a$;
\\item{}the kissing number of $\\Lambda$ satisfies
\\begin{equation}~\\label{eq:ubkiss}
\\tau(\\Lambda)\\leq2n+\\sum_{\\substack{1\\leq {\\ell}\\leq a\\\\ d_{\\min}^{(\\ell)}\\leq4^{\\ell}}}2^{d_{\\min}^{(\\ell)}}A_{d_{\\min}^{(\\ell)}},
\\end{equation}
where $A_{d_{\\min}^{(\\ell)}}$ denotes the number of codewords
in $\\mathcal{C}_{\\ell}$ of minimum weight $d_{\\min}^{(\\ell)}$. Furthermore, if $d_{\\min}^{(\\ell)}>4^{\\ell}$ for every $1\\leq \\ell\\leq a$,
then $\\tau(\\Lambda)\\leq 2n$.
\\end{itemize}
\\end{Theorem}
The proof is given in Appendix A.

This theorem relates
the performance of the lattice $\\Lambda$
to the performance of its underlying codes:
the kissing number of a Construction D lattice is bounded
above in terms of the minimum distance and the number of minimum-weight
codewords of each underlying nested code.
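Both items of Theorem~\\ref{th:constructionDmindistkiss} can be checked by brute force on the small lattice of the previous sketch, where $d_{\\min}^{(1)}=2$ and $d_{\\min}^{(2)}=4$: the predicted minimum distance is $\\min\\{2,\\sqrt{2},\\sqrt{4}\/2\\}=1$, and the bound~(\\ref{eq:ubkiss}) reads $\\tau(\\Lambda)\\leq 2\\cdot4+2^{2}A_2+2^{4}A_4=48$. The following enumeration (ours) finds $d_{\\min}(\\Lambda)=1$ and $\\tau(\\Lambda)=8$, consistent with both claims.
\\begin{verbatim}
import itertools, math

# Lambda = C_1 + (1/2) C_2 + 2 Z^4 with C_1 = [4,3,2], C_2 = [4,1,4].
C1 = [c for c in itertools.product((0, 1), repeat=4) if sum(c) % 2 == 0]
C2 = [(0, 0, 0, 0), (1, 1, 1, 1)]

best, tau = math.inf, 0
for c1 in C1:
    for c2 in C2:
        for z in itertools.product(range(-2, 3), repeat=4):
            x = [a + b / 2 + 2 * w for a, b, w in zip(c1, c2, z)]
            if not any(x):
                continue
            d2 = sum(v * v for v in x)
            if d2 < best - 1e-9:
                best, tau = d2, 1
            elif abs(d2 - best) < 1e-9:
                tau += 1

print(math.sqrt(best))   # 1.0, matching eq. (mindistD)
print(tau)               # 8, within the bound tau <= 48 of eq. (ubkiss)
\\end{verbatim}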
\\section{Convolutional and Turbo Codes}~\\label{ConvolutionalandTurboCodes}
Since recursive convolutional codes produce better turbo codes,
we focus on tail-biting of feed-back convolutional codes.
\\subsection{Terminated and Tail-Biting Convolutional Codes}~\\label{TerminatedConvolutionalCodes}
Let $\\mathcal{C}(N,K,\\nu)$ be a systematic convolutional code of rate $\\frac{K}{N}$
with constraint length $\\nu$ and memory order $m$.
The {\\em terminated convolutional code} technique can be found in~\\cite{costello}.
In this deformation from the convolutional code $\\mathcal{C}$
to a block code there is a rate loss and
a change in the codeword length, whereas in Construction D
all the code lengths of the set of nested linear codes
have to be equal. Since this termination method modifies the length of the underlying
code at each level, terminated convolutional codes cannot be used directly
in our derivation of lattices based on Construction D.
To avoid this situation, an alternative method
referred to as tail-biting~\\cite{solomon} can be used. Terminated convolutional codes
can thus only be employed to construct turbo codes that are suitable for use with
Construction A.

The {\\em tail-biting} technique for feed-forward convolutional codes is reported in~\\cite{conferenceAllerton,solomon,wiss}.
An algorithm for tail-biting a feed-back convolutional encoder
is introduced in~\\cite{theory,wiss}.
However, tail-biting is not possible for every length;
in other words, tail-biting of a feed-back convolutional encoder
is only possible for certain tail-biting lengths.

Let ${\\bf G}(x)$ be a generator matrix of a systematic feed-back
convolutional code $\\mathcal{C}(N,K,\\nu)$ defined as follows
\\begin{equation}\\label{eq:genmatrixSRCC}
\\left[
\\begin{array}{cccccc}
1&\\cdots&0&g_{1,K+1}(x)&\\cdots&g_{1,N}(x)\\\\
\\vdots&\\ddots&\\vdots&\\vdots&\\ddots&\\vdots\\\\
0&\\cdots&1&g_{K,K+1}(x)&\\cdots&g_{K,N}(x)
\\end{array}
\\right],
\\end{equation}
where $g_{i,j}(x)=\\frac{q_{i,j}(x)}{r_i(x)}$ for coprime
polynomials $q_{i,j}(x)$ and $r_i(x)$, $1\\leq i\\leq K$
and $K+1\\leq j\\leq N$.
By means of tail-biting~\\cite{conferenceAllerton}, we can associate with a rate $\\frac{K}{N}$ systematic feed-back
convolutional encoder with constraint length $\\nu$ a linear $[LN,LK]$ code
(where $L$ is called the {\\em tail-biting length})
with generator matrix
\\begin{equation}\\label{eq:genmatrixtailbited}
{\\bf G}'=\\left[
\\begin{array}{ccccccc}
{\\bf R}_1&\\cdots&{\\bf 0}&{\\bf Q}_{1,K+1}&\\cdots&{\\bf Q}_{1,N}\\\\
\\vdots&\\ddots&\\vdots&\\vdots&\\ddots&\\vdots\\\\
{\\bf 0}&\\cdots&{\\bf R}_K&{\\bf Q}_{K,K+1}&\\cdots&{\\bf Q}_{K,N}
\\end{array}
\\right],
\\end{equation}
where ${\\bf R}_i$ and ${\\bf Q}_{i,j}$ are $L\\times L$ circulant
matrices whose top rows of length $L$ are made from $r_i(x)$ and $q_{i,j}(x)$, respectively,
for $1\\leq i\\leq K$ and $K+1\\leq j\\leq N$.
\\begin{Theorem}~\\label{th:converCCsys}
Let $r_i(x),~q_{i,j}(x),~L$ and ${\\bf R}_i, {\\bf Q}_{i,j}$
be as above for $1\\leq i\\leq K$ and $K+1\\leq j\\leq N$.
Then the block code $\\mathcal{C}[LN,LK]$ generated by ${\\bf G}'$ in~(\\ref{eq:genmatrixtailbited})
can also be generated by
${\\bf G}=[{\\bf I}_{LK}|{\\bf F}]$, where ${\\bf F}$ is made up of circulant blocks, if and only if
$\\gcd(r_i(x),x^L-1)=1$ for all $1\\leq i\\leq K$. In this case,
we have
$$q_{i,j}(x)\\equiv f_{i,j}(x)r_i(x)\\!\\!\\pmod{x^L-1}.$$
\\end{Theorem}
The proof is given in Appendix A.

We observe that ${\\bf F}$ is an $LK\\times L(N-K)$
matrix consisting of a $K\\times (N-K)$ array of $L\\times L$
circulant submatrices: the $(i,j)$--th block of ${\\bf F}$
is the circulant matrix whose top row is obtained from $f_{i,j}(x)$,
for $1\\leq i\\leq K$ and $K+1\\leq j\\leq N$.
Also, the identity matrix ${\\bf I}_{LK}$ can be viewed as a $K\\times K$ identity block matrix
with each of its nonzero entries replaced by an identity matrix ${\\bf I}_{L}$.
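To illustrate Theorem~\\ref{th:converCCsys} numerically, the sketch below (ours; polynomials are encoded as integer bit masks, an arbitrary choice) takes $r(x)=1+x^2+x^3$, $q(x)=1+x+x^2+x^3$ and $L=5$, verifies $\\gcd(r(x),x^L-1)=1$, and computes $f(x)\\equiv q(x)r(x)^{-1}\\pmod{x^L-1}$ over $\\mathbb{F}_2$, obtaining $f(x)=x+x^2$.
\\begin{verbatim}
# Polynomials over GF(2) encoded as ints: bit i is the coefficient of x^i.
def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = a << 1, b >> 1
    return r

def pdivmod(a, b):
    q = 0
    while a.bit_length() >= b.bit_length():
        s = a.bit_length() - b.bit_length()
        q ^= 1 << s
        a ^= b << s
    return q, a

def pegcd(a, b):                      # returns (g, u, v) with u a + v b = g
    if b == 0:
        return a, 1, 0
    q, r = pdivmod(a, b)
    g, u, v = pegcd(b, r)
    return g, v, u ^ pmul(q, v)

L = 5
mod = (1 << L) | 1                    # x^L - 1 = x^L + 1 over GF(2)
r, q = 0b1101, 0b1111                 # r(x) = 1+x^2+x^3, q(x) = 1+x+x^2+x^3

g, u, _ = pegcd(r, mod)
assert g == 1                         # tail-biting length L = 5 is admissible
f = pdivmod(pmul(q, pdivmod(u, mod)[1]), mod)[1]
assert pdivmod(pmul(f, r), mod)[1] == q
print(bin(f))                         # 0b110, i.e. f(x) = x + x^2
# The L x L circulant block has top row f; row i is the cyclic shift by i.
\\end{verbatim}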
We close this subsection with a proposition that relates our result in the above theorem
to well-known results~\\cite{sathl,wiss} on the eligible lengths $L$
for constructing tail-biting feed-back
convolutional codes. For the sake of brevity, we consider
only feed-back convolutional codes of rate $\\frac{N-1}{N}$.
Let ${\\bf G}(x)$ be a generator matrix of a systematic feed-back
convolutional code $\\mathcal{C}(N,N-1,\\nu)$ defined as follows
\\begin{equation}\\label{eq:genmatrixSRCCforequivalence}
\\left[
\\begin{array}{cccc}
1&\\cdots&0&g_{1,N}(x)\\\\
\\vdots&\\ddots&\\vdots&\\vdots\\\\
0&\\cdots&1&g_{N-1,N}(x)
\\end{array}
\\right],
\\end{equation}
where $g_{i,N}(x)=\\frac{q_{i,N}(x)}{r(x)}$ for coprime
polynomials $q_{i,N}(x)$ and $r(x)$, $1\\leq i\\leq N-1$.
Without loss of generality, we assume that
$r(x)=r_0+r_1x+\\cdots+r_mx^m$.
If we realize this code in observer canonical form~\\cite{wiss},
then the state matrix is
\\begin{equation}\\label{eq:statematrixSRCCforequivalence}
{\\bf A}=\\left[
\\begin{array}{ccc|c}
0&\\cdots&0&r_m\\\\
1&\\cdots&0&r_{m-1}\\\\
\\vdots&\\ddots&\\vdots&\\vdots\\\\
0&\\cdots&1&r_1
\\end{array}
\\right].
\\end{equation}
In order to encode an $[LN,LK]$ tail-biting code
with the method described in~\\cite{wiss}, the matrix $\\left({\\bf A}^L-{\\bf I}_m\\right)$
has to be invertible. It should be noted~\\cite{wiss} that realizing the encoder
in controller canonical form or in observer canonical form leads to
the same set of possible lengths $L$.
\\begin{Proposition}~\\label{relation}
Let ${\\bf A}$, as in~(\\ref{eq:statematrixSRCCforequivalence}),
be the state matrix of a convolutional code $\\mathcal{C}(N,N-1,\\nu)$
with generator matrix~(\\ref{eq:genmatrixSRCCforequivalence}).
Then $\\det\\left({\\bf A}^L-{\\bf I}_m\\right)\\neq0$ if and only if $\\gcd(r(x),x^L-1)=1$.
\\end{Proposition}
The proof is given in Appendix A.
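A small self-contained check (ours) of Proposition~\\ref{relation}: for $r(x)=1+x^2+x^3$ (so $m=3$) it builds the state matrix~(\\ref{eq:statematrixSRCCforequivalence}) and, for each $L\\leq8$, compares the invertibility of ${\\bf A}^L-{\\bf I}_m$ over $\\mathbb{F}_2$ with the condition $\\gcd(r(x),x^L-1)=1$. The two tests agree for every $L$, failing only at $L=7$, since $x^3+x^2+1$ divides $x^7-1$.
\\begin{verbatim}
def pgcd(a, b):                          # gcd in GF(2)[x], ints as bit masks
    while b:
        while a.bit_length() >= b.bit_length():
            a ^= b << (a.bit_length() - b.bit_length())
        a, b = b, a
    return a

def gf2_nonsingular(M):                  # Gaussian elimination over GF(2)
    M = [row[:] for row in M]
    m = len(M)
    for col in range(m):
        piv = next((i for i in range(col, m) if M[i][col]), None)
        if piv is None:
            return False
        M[col], M[piv] = M[piv], M[col]
        for i in range(m):
            if i != col and M[i][col]:
                M[i] = [x ^ y for x, y in zip(M[i], M[col])]
    return True

rc = [1, 0, 1, 1]                        # r_0, ..., r_m for r(x) = 1+x^2+x^3
m = len(rc) - 1
A = [[0] * m for _ in range(m)]
for i in range(1, m):
    A[i][i - 1] = 1                      # subdiagonal of ones
for i in range(m):
    A[i][m - 1] = rc[m - i]              # last column: r_m, r_{m-1}, ..., r_1

def matmul(X, Y):
    return [[sum(X[i][k] & Y[k][j] for k in range(m)) % 2
             for j in range(m)] for i in range(m)]

r_int = 0b1101                           # r(x) as a bit mask
for L in range(1, 9):
    P = [[int(i == j) for j in range(m)] for i in range(m)]
    for _ in range(L):
        P = matmul(P, A)                 # P = A^L over GF(2)
    AL_I = [[P[i][j] ^ (i == j) for j in range(m)] for i in range(m)]
    lhs = gf2_nonsingular(AL_I)
    rhs = pgcd(r_int, (1 << L) | 1) == 1
    assert lhs == rhs                    # the two tests of the Proposition agree
    print(L, lhs)                        # False only at L = 7
\\end{verbatim}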
\\subsection{Parallel Concatenated Codes; Structure of Turbo Codes}~\\label{ParallelConcatenatedCodes;StructureofTurboCodes}
Turbo codes can be viewed as block codes by fixing their interleaver lengths,
but they have not been analyzed from this point of view, except in~\\cite{recent}.
We follow the construction of turbo codes from~\\cite{berrou,costello} and use them
to produce a new type of lattices called {\\em turbo lattices}.
We assume that an interleaver
$\\Pi$ and a recursive convolutional encoder $\\mathcal{E}$ with parameters
$(N,K,\\nu)$ are used to construct a turbo code of size $k=KL$.

The information block length (interleaver size) $k$ has to be selected
large enough to achieve performance close to the Shannon limit.
Improving the minimum free distance of turbo codes is possible by designing
good interleavers: interleavers shift weight from lower-weight
codewords to higher-weight codewords, an effect called
{\\it spectral thinning}~\\cite{costello}. Such interleaving matches the
lower-weight codewords of the first encoder to the high-weight
parity sequences of the second encoder. More precisely, for large
values of the interleaver size $k$, the multiplicities of
the low-weight codewords in the turbo code weight spectrum are
reduced by a factor of $k$. This reduction by a factor of
$k$ is called the \\emph{interleaver gain}. Hence, it is apparent that
interleavers play a key role at the heart of turbo codes and it is important
for interleavers to have random-like properties~\\cite{costello,recent}.
Boutros et al.\\ provided almost optimal interleavers in~\\cite{Boutros06}.
\\section{Nested Turbo Codes and Turbo Lattices}~\\label{NestedTCandTL}
We exploit a set of nested tail-biting convolutional codes and a nested interleaver
along with Construction D to form turbo lattices.
Also, terminated convolutional codes together with Construction A
are employed for the same purpose. An explicit explanation of these two approaches is given next.
\\subsection{Constructing Nested Turbo Codes}~\\label{ConstructingNestedTurboCodes}
Consider a turbo code $\\mathcal{TC}$ with two component codes generated
by a ${K\\times N}$ generator matrix ${\\bf G}(x)$ of a convolutional code
and a random interleaver $\\Pi$ of size $k=LK$.
Assume that both encoders are systematic feed-back convolutional encoders.
Every interleaver $\\Pi$ can be represented by a matrix ${\\bf P}_{k\\times k}$
which has exactly one $1$ in each column and each row. It is easy to see that
the generator matrix of $\\mathcal{TC}$ can be written as follows
\\begin{equation}~\\label{eq:genmatTC}
{\\bf G}_{\\mathcal{TC}}=
\\left[\\begin{array}{c|c|c}
{\\bf I}_k&{\\bf F}&{\\bf PF}
\\end{array}\\right]_{k\\times n}
\\end{equation}
where ${\\bf F}$ is the $LK\\times L(N-K)$ submatrix of ${\\bf G}$, the tail-bited generator matrix
of ${\\bf G}(x)$, consisting only of the parity columns of ${\\bf G}$, and ${\\bf I}_k$ is the identity matrix of size $k$.
Therefore, ${\\bf G}_{\\mathcal{TC}}$
is a $k\\times n$ matrix with $k=LK$ rows and $n=2LN-LK$ columns.

The representation~(\\ref{eq:genmatTC}) can be extended to a generator matrix
for a parallel concatenated code with $b$ branches. Each branch has its
own interleaver $\\Pi_j$, with matrix representation
${\\bf P}_j$, and a recursive encoder $\\mathcal{E}_j$, for $1\\leq j\\leq b$. Assume
that all the encoders are the same $(N, K, \\nu)$ convolutional encoder and that the block of information bits has length $k=KL$.
Then the corresponding generator matrix of this turbo code is
\\begin{equation}~\\label{eq:genmatextendedTC}
{\\bf G}_{\\mathcal{TC}}^e=
\\left[\\begin{array}{c|c|c|c|c}
{\\bf I}_k&{\\bf P}_1{\\bf F}&{\\bf P}_2{\\bf F}&\\cdots&{\\bf P}_b{\\bf F}
\\end{array}\\right]_{k\\times n_e}
\\end{equation}
where ${\\bf F}$ is $LK\\times L(N-K)$ as above and $n_e=KL+bL(N-K)$.
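In matrix terms the construction is mechanical. The following sketch (ours; the sizes and the random parity block are placeholders for a genuine tail-bited ${\\bf F}$) assembles~(\\ref{eq:genmatTC}) from a given ${\\bf F}$ and an interleaver.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def turbo_generator(F, perm):
    """G_TC = [ I_k | F | P F ] over GF(2), as in eq. (genmatTC).
    F: k x p parity block of the tail-bited code; perm: interleaver."""
    k = F.shape[0]
    P = np.eye(k, dtype=int)[perm]        # permutation matrix of the interleaver
    return np.concatenate([np.eye(k, dtype=int), F % 2, (P @ F) % 2], axis=1)

k, p = 6, 3                               # toy sizes: k = LK, p = L(N-K)
F = rng.integers(0, 2, size=(k, p))
G = turbo_generator(F, rng.permutation(k))
print(G.shape)                            # (6, 12): k x (k + 2p) = k x (2LN-LK)
\\end{verbatim}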
In order to design a nested set of turbo codes, the presence of a
nested interleaver is essential. Hence, a new concept
of nested interleavers has to be introduced.
\\begin{Definition}~\\label{def:nestedinteleaver}
The interleaver $\\Pi$ of size $k$ is a \\emph{$(k_a,\\ldots,k_1)$-nested
interleaver} if the following conditions hold
\\begin{enumerate}
\\item{}$0<k_a<k_{a-1}<\\cdots<k_1=k$;
\\item{}for each $1\\leq\\ell\\leq a$, $\\Pi$ maps the set of the first $k_{\\ell}$ positions onto itself, so that
the restriction of $\\Pi$ to these positions is itself an interleaver of size $k_{\\ell}$.
\\end{enumerate}
\\end{Definition}
Given a $(k_a,\\ldots,k_1)$-nested interleaver and a family of nested component codes,
the resulting turbo codes $\\mathcal{TC}_{\\ell}$ of sizes $k_{\\ell}$, $1\\leq\\ell\\leq a$, form a nested family
$\\mathcal{TC}_1\\supseteq\\cdots\\supseteq\\mathcal{TC}_a$, and applying Construction D to this family
yields the {\\em turbo lattice} $\\Lambda_{\\mathcal{TC}}$.
Combining Theorem~\\ref{th:constructionDmindistkiss} with the parameters of the underlying
nested turbo codes gives the fundamental parameters of $\\Lambda_{\\mathcal{TC}}$.
\\begin{Theorem}~\\label{th:propertiesofTL}
Let $\\Lambda_{\\mathcal{TC}}$ be a turbo lattice built from nested turbo codes
$\\mathcal{TC}_{\\ell}$ with rates $R_{\\ell}=\\frac{k_{\\ell}}{n}$ and minimum distances $d_{\\min}^{(\\ell)}$, $1\\leq\\ell\\leq a$. Then
$$d_{\\min}^{2}(\\Lambda_{\\mathcal{TC}})=\\min_{1\\leq\\ell\\leq a}\\left\\{4,\\frac{d_{\\min}^{(\\ell)}}{4^{\\ell-1}}\\right\\},$$
its coding gain satisfies
\\begin{equation}~\\label{eq:codinggainTL}
\\gamma(\\Lambda_{\\mathcal{TC}})=4^{\\sum_{\\ell=1}^{a}R_{\\ell}-1}d_{\\min}^{2}(\\Lambda_{\\mathcal{TC}}),
\\end{equation}
and its kissing number satisfies
$$\\tau(\\Lambda_{\\mathcal{TC}})\\leq2n+\\sum_{\\substack{1\\leq {\\ell}\\leq a\\\\ d_{\\min}^{(\\ell)}\\leq4^{\\ell}}}2^{d_{\\min}^{(\\ell)}}A_{d_{\\min}^{(\\ell)}}.$$
\\end{Theorem}
Furthermore, if $d_{\\min}^{(\\ell)}>\\frac{4^{\\ell}}{\\beta}$ for $1\\leq\\ell\\leq a$, where $\\beta=1$ or $2$, then we get the following bounds
$$d_{\\min}^{2}(\\Lambda_{\\mathcal{TC}})\\geq\\frac{4}{\\beta},\\quad~\\gamma(\\Lambda_{\\mathcal{TC}})\\geq \\frac{4^{\\sum_{{\\ell}=1}^aR_{\\ell}}}{\\beta},$$
and
$$\\tau(\\Lambda_{\\mathcal{TC}})\\leq 2n\\quad \\mbox{or} \\quad \\tau^\\ast(\\Lambda_{\\mathcal{TC}})\\leq 2.$$

This setting results in a possibly larger (or, in the worst case, equal) minimum distance,
a strictly better coding gain, and a possibly lower (or, in the worst case, equal) kissing number when compared
with the turbo lattices that come from parallel concatenation of terminated recursive convolutional codes and Construction A.
However, the geometrical and layered properties of an $a$-level Construction D turbo lattice
make its decoding algorithm more complex.

According to the discussion above,
we can take advantage of a wide range of aspects of these
lattices. To be more specific, these turbo lattices are generated by Construction
D using a nested set of block turbo codes, whose underlying codes
are tail-biting recursive convolutional codes.
Thus, this class provides an appropriate link between the two approaches of
block and convolutional codes. The tail-biting method gives us the opportunity
to combine the benefits of recursive convolutional codes (such as memory) with the
advantages of block codes. It is worth pointing out that the
nested property of the turbo codes induces a higher coding gain;
see~(\\ref{eq:codinggainTL}). Also, the excellent performance of parallel
concatenations of systematic feed-back convolutional codes implies efficient
turbo lattices with good fundamental parameters.

\\subsection{Guidelines to Choose Suitable Parameters}
Since our first priority in designing turbo lattices is to
obtain lattices with high coding gain, selecting an appropriate
code length for the underlying turbo codes is crucial.
Guidelines for choosing tail-biting convolutional codes that are especially suited for
parallel concatenated schemes are given in~\\cite{wiss}. The authors of~\\cite{wiss}
also tabulate tail-biting convolutional codes of different rates and lengths, together with the minimum distances
of their associated turbo codes.
We illustrate the importance of parameters like $k_1,\\ldots,k_a$ and the code length $n$
of the underlying turbo codes via a detailed example.

Assume that a tail-biting version of
a systematic recursive convolutional code of rate $\\frac{2}{3}$ with memory $3$ and
generator matrix
$${\\bf G}_1=\\left(
\\begin{array}{ccc}
1&0&\\frac{1+x+x^2+x^3}{1+x^2+x^3}\\\\
0&1&\\frac{1+x+x^2}{1+x^2+x^3}
\\end{array}
\\right)$$
is used to form a nested turbo code. The resulting turbo code $\\mathcal{TC}_1$
has rate $R_1=\\frac{1}{2}$ and, based on~\\cite{wiss}, it has minimum distance
$d_{\\min}^{(1)}=13$ for blocks of information bits of
length $400$.
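The block-diagonal interleavers used in the example below are the simplest way to realize Definition~\\ref{def:nestedinteleaver}. The following sketch (ours; the nestedness test reflects our reading of the definition) builds such a permutation by shuffling each band $[k_{\\ell+1},k_{\\ell})$ separately and checks that every prefix $\\{1,\\ldots,k_{\\ell}\\}$ is mapped onto itself.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def nested_interleaver(sizes):
    """sizes = (k_1, ..., k_a) with k_1 > ... > k_a > 0.  Permute each band
    [0, k_a), [k_a, k_{a-1}), ... separately, so that every prefix
    {0, ..., k_l - 1} is mapped onto itself."""
    pi = np.arange(max(sizes))
    lo = 0
    for hi in sorted(sizes):
        pi[lo:hi] = rng.permutation(pi[lo:hi])
        lo = hi
    return pi

def is_nested(pi, sizes):
    return all(set(pi[:kl]) == set(range(kl)) for kl in sizes)

pi = nested_interleaver((10, 6, 3))
print(is_nested(pi, (10, 6, 3)))   # True
\\end{verbatim}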
Now consider only the first row of the generator matrix
for $\\mathcal{TC}_1$. The component encoders of $\\mathcal{TC}_2$
then have generator matrices (after puncturing out the zero bits)
$${\\bf G}_2=\\left(
\\begin{array}{cc}
1&\\frac{1+x+x^2+x^3}{1+x^2+x^3}
\\end{array}
\\right).$$

A block turbo code which uses ${\\bf G}_2$ for its constituent codes
has rate $R_2=\\frac{1}{3}$ and, according to the information in~\\cite{wiss},
the minimum distance of this code is $d_{\\min}^{(2)}=28$
for an information block length of $576$.
For instance, suppose that a block of information bits of size $1000$ is used.
Since $\\mathcal{TC}_1$ is a rate-$\\frac{1}{2}$ block turbo code, the lattice points are in
$\\mathbb{R}^{2000}$. Therefore, a square generator matrix ${\\bf G}_{\\mathcal{TL}}$ of size $2000$
for this turbo lattice can be formed following the approach in Example~\\ref{extailbitingexampleTL}.
Hence, ${\\bf G}_{\\mathcal{TL}}$ is
$$\\left[
\\begin{array}{rrrr}
\\frac{1}{2}{\\bf I}_{576}&{\\bf 0}&\\frac{1}{2}{\\bf F}_{1,3}&\\frac{1}{2}{\\bf P}_{1,1}{\\bf F}_{1,3}\\\\
{\\bf 0}&{\\bf I}_{324}&{\\bf F}_{2,3}&{\\bf P}_{2,2}{\\bf F}_{2,3}\\\\
\\hline
{\\bf 0}&{\\bf 0}&2{\\bf I}_{500}&{\\bf 0}\\\\
{\\bf 0}&{\\bf 0}&{\\bf 0}&2{\\bf I}_{500}
\\end{array}
\\right],$$
where
$${\\bf P}=\\left[
\\begin{array}{cc}
{\\bf P}_{1,1}&{\\bf 0}\\\\
{\\bf 0}&{\\bf P}_{2,2}
\\end{array}
\\right]$$
of size $1000\\times 1000$ is a $576$-nested interleaver. In other words, ${\\bf P}$
is an interleaver for $\\mathcal{TC}_1$ and ${\\bf P}_{1,1}$ is an interleaver
of size $576$ for $\\mathcal{TC}_2$. Now the fundamental parameters of
this turbo lattice $\\Lambda_{\\mathcal{TC}}$, constructed with $2$ levels of Construction D, can be found.
Since $d_{\\min}^{(1)}=13$ and $d_{\\min}^{(2)}=28$, Theorem~\\ref{th:propertiesofTL}
implies that
\\begin{eqnarray}
d_{\\min}^{2}(\\Lambda_{\\mathcal{TC}})&=&\\min\\left\\{4,\\frac{d_{\\min}^{(1)}}{4^{1-1}},\\frac{d_{\\min}^{(2)}}{4^{2-1}}\\right\\}\\nonumber\\\\
&=&\\min\\left\\{4,\\frac{13}{1},\\frac{28}{4}\\right\\}=4\\nonumber
\\end{eqnarray}
and the coding gain of $\\Lambda_{\\mathcal{TC}}$ satisfies
\\begin{eqnarray}
\\gamma(\\Lambda_{\\mathcal{TC}})&=&4^{\\sum_{\\ell=1}^2R_{\\ell}-1}d_{\\min}^{2}(\\Lambda_{\\mathcal{TC}})\\nonumber\\\\
&\\geq&4^{\\frac{1}{2}+\\frac{1}{3}-1}4=4^{\\frac{5}{6}},\\nonumber
\\end{eqnarray}
that is, about $5~\\mbox{dB}$.
Also, the kissing number of $\\Lambda_{\\mathcal{TC}}$ is bounded above by
$$\\tau(\\Lambda_{\\mathcal{TC}})\\leq 2n+\\sum_{\\substack{1\\leq {\\ell}\\leq 2\\\\ d_{\\min}^{(\\ell)}\\leq4^{\\ell}}}2^{d_{\\min}^{(\\ell)}}A_{d_{\\min}^{(\\ell)}}.$$
Since $d_{\\min}^{(1)}> 4$ and $d_{\\min}^{(2)}> 4^2$, the summation in the above inequality vanishes
and we get $\\tau(\\Lambda_{\\mathcal{TC}})\\leq 4000$, or equivalently $\\tau^\\ast(\\Lambda_{\\mathcal{TC}})\\leq 2$.
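The decibel figure can be checked directly (our arithmetic): $10\\log_{10}(4^{5\/6})\\approx5.02$~dB.
\\begin{verbatim}
import math

d2 = min(4, 13 / 4 ** 0, 28 / 4 ** 1)      # = 4, as computed above
gamma_db = 10 * math.log10(4 ** (1 / 2 + 1 / 3 - 1) * d2)
print(d2, round(gamma_db, 2))              # 4  5.02
\\end{verbatim}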
\\subsection{Other Possible Design Criteria}
The results in~\\cite{forneyspherebound}
provide a general guideline on the choice of the code rates
$R_\\ell$, $1\\leq \\ell\\leq a$, which is critical
in the construction of any capacity-achieving lattice using Construction D.
Hence, a completely different line of study could be pursued in order to
revisit the above design criteria for turbo lattices via
information-theoretic tools.

\\section{Decoding Algorithm}~\\label{DecodingAlgorithm}
There exist many decoding algorithms for finding the closest
point in a lattice~\\cite{conway, Viterbo99}.
Similar expressions, algorithms and theorems can be found
in~\\cite{boundeddistforney}.
In fact, in~\\cite{boundeddistforney}, Forney uses a code formula
along with a multi-stage decoding algorithm to solve the CVP for a lattice
based on Construction D.

\\subsection{A Multi-Stage Turbo Lattice Decoder}~\\label{AMultiStageTurboLatticeDecoder}
In the previous sections we used a set of nested turbo codes
to produce the turbo lattice $\\Lambda_{\\mathcal{TC}}$.
Our aim now is to solve the closest lattice point problem for $\\Lambda_{\\mathcal{TC}}$.
Assume that a vector ${\\bf x}\\in\\Lambda_{\\mathcal{TC}}$ is sent over an unconstrained AWGN channel
with noise variance $\\sigma^2$ and a vector ${\\bf r}\\in\\mathbb{R}^n$ is received.
A closest point search algorithm attempts to compute the lattice vector $\\tilde{{\\bf x}}\\in\\Lambda_{\\mathcal{TC}}$
such that $\\|\\tilde{{\\bf x}}-{\\bf r}\\|$ is minimized.

The excellent performance of turbo codes is due to the well-known
iterative turbo decoder~\\cite{berrou}. One can generalize the
multi-stage soft decision decoding algorithm of~\\cite{forneyspherebound} to decode
lattices constructed based on Construction D. A simple extension to turbo lattices
is presented next.

As shown in Section~\\ref{BackgroundsonLattices},
every lattice constructed using Construction D benefits from a layered code structure,
built from a set of nested linear block codes; for turbo lattices these are
nested turbo codes. The goal is to use $a$ (the number of
levels of the construction) iterative turbo decoding algorithms in series.
The idea is borrowed from the multi-stage decoding algorithm presented in~\\cite{boundeddistforney}.

One can restate~(\\ref{eq:codeformulaforD}) as
\\begin{equation}~\\label{eq:restatecodeformulaforD}
\\Lambda_0=2^{a-1}\\Lambda_{\\mathcal{TC}}=\\mathcal{TC}_a+2\\mathcal{TC}_{a-1}+\\cdots+2^{a-1}\\mathcal{TC}_1+2^{a}(\\mathbb{Z})^n.
\\end{equation}
This representation of $\\Lambda_{\\mathcal{TC}}$ states that
every ${\\bf x}\\in\\Lambda_{\\mathcal{TC}}$ can be represented by
\\begin{equation}~\\label{eq:representationofeverylatticepointTC}
\\left\\{\\begin{array}{l}
{\\bf x}={\\bf x}_1+\\frac{1}{2}{\\bf x}_2+\\cdots+\\frac{1}{2^{a-1}}{\\bf x}_{a}+{\\bf w},~\\mbox{or}\\\\
\\\\
2^{a-1}{\\bf x}={\\bf x}_a+2{\\bf x}_{a-1}+\\cdots+2^{a-1}{\\bf x}_1+2^{a-1}{\\bf w},
\\end{array}\\right.
\\end{equation}
where ${\\bf x}_{\\ell}\\in \\mathcal{TC}_{\\ell}$ and ${\\bf w}\\in2(\\mathbb{Z})^n$, $1\\leq {\\ell}\\leq a$.

Any soft-input soft-output (SISO) or soft-input hard-output (SIHO) decoding algorithm for the turbo code $\\mathcal{TC}_\\ell$ may
be used as a decoding algorithm for $\\Lambda_{\\ell}=\\mathcal{TC}_\\ell+2\\mathbb{Z}^n$, as follows.
Given any ${\\bf r}_{\\ell}$, denote the closest even and odd
integers to each coordinate
$r_j$ of ${\\bf r}_{\\ell}$ by $e_j$ and $o_j$ respectively, $1\\leq j\\leq n$.
Then one can compute
$$t_j=2\\left(\\pm\\frac{e_j+o_j}{2}\\mp r_j\\right)$$
(where the upper signs are taken if $e_j<o_j$ and the lower signs if $e_j>o_j$)
and consider $t_j$ as the ``metric'' for deciding between the bits $0$ and $1$.
Then the vectors ${\\bf t}_{\\ell}=(t_1,\\ldots,t_n)$ (as the confidence vector) and
${\\bf s}_{\\ell}={\\bf r}_{\\ell}\\pmod{2}$ (as the received vector) are passed
to a SISO (or SIHO) decoder for $\\mathcal{TC}_\\ell$.
The decoded turbo codeword is then mapped back to $e_j$ or $o_j$ at the $j$--th coordinate, depending on whether the decoded
codeword is $0$ or $1$ in that coordinate.

The above algorithm decodes $\\Lambda_\\ell$.
A general scheme for the multi-stage turbo lattice decoder
is given by the following simple pseudo-code.
A similar algorithm for Construction D Barnes--Wall lattices
can be found in~\\cite{Harshan13}.

\\noindent
{\\bf Decoding Algorithm for Turbo Lattices}\\\\
{\\bf Input:} ${\\bf r}$, an $n$-dimensional vector in $\\mathbb{R}^n$.\\\\
{\\bf Output:} a closest vector $\\tilde{{\\bf x}}$ to ${\\bf r}$ in $\\Lambda_{\\mathcal{TC}}$.
\\begin{itemize}
 \\item{}{\\bf Step 1)} Put ${\\bf r}_a=2^{a-1}{\\bf r}$.
 \\item{}{\\bf Step 2)}\\\\
 {\\bf for} $\\ell=a$ {\\bf downto} $1$ {\\bf do}
 \\begin{itemize}
 \\item{} Decode ${\\bf r}_{\\ell}$ to the closest point ${\\bf x}_{\\ell}\\in\\Lambda_{\\ell}=\\mathcal{TC}_{\\ell}+2(\\mathbb{Z})^n$.
 \\item{} Compute ${\\bf r}_{\\ell-1}=\\frac{{\\bf r}_{\\ell}-{\\bf x}_{\\ell}}{2}$.
 \\end{itemize}
\\item{}{\\bf Step 3)} Evaluate $\\tilde{{\\bf x}}=\\frac{{\\bf x}_a+2{\\bf x}_{a-1}+\\cdots+2^{a-1}{\\bf x}_1+2^{a-1}{\\bf w}}{2^{a-1}}$.
\\end{itemize}
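The following end-to-end sketch (ours) implements the algorithm above for a toy two-level lattice $\\Lambda=\\mathcal{C}_1+\\frac{1}{2}\\mathcal{C}_2+2(\\mathbb{Z})^4$ with $\\mathcal{C}_1=[4,3,2]$ and $\\mathcal{C}_2=[4,1,4]$. Two trivial decoders stand in for the SISO turbo decoders, and we adopt the reading of Step 2 in which each stage returns only the codeword part of ${\\bf x}_{\\ell}$, the residual being halved and passed on, with ${\\bf w}$ recovered from the final residual as in Step 3.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def stage_metrics(r):
    """Nearest even/odd integers e_j, o_j and the confidence t_j of the text
    (t_j > 0 favours the even choice, i.e. bit 0)."""
    e = 2 * np.round(r / 2)
    o = 2 * np.round((r - 1) / 2) + 1
    t = np.where(e < o, (e + o) - 2 * r, 2 * r - (e + o))
    return e, o, t

def dec_even_weight(t):            # stand-in decoder for C_1 = [4,3,2]
    bits = (t < 0).astype(int)     # hard decision from the metric
    if bits.sum() % 2:             # enforce even parity: flip least reliable bit
        bits[np.argmin(np.abs(t))] ^= 1
    return bits

def dec_repetition(t):             # stand-in decoder for C_2 = [4,1,4]
    return np.full(len(t), int(t.sum() < 0))

def decode(r, decoders):
    """Multi-stage decoding; decoders[l-1] decodes the level-l code C_l."""
    a = len(decoders)
    rl = 2.0 ** (a - 1) * np.asarray(r, dtype=float)   # Step 1
    cs = {}
    for l in range(a, 0, -1):                          # Step 2: l = a downto 1
        _, _, t = stage_metrics(rl)
        cs[l] = decoders[l - 1](t)                     # codeword part of x_l
        rl = (rl - cs[l]) / 2
    w = 2 * np.round(rl)                               # w in 2Z^n, for Step 3
    return sum(cs[l] / 2.0 ** (l - 1) for l in range(1, a + 1)) + w

# Round trip at x = c_1 + c_2/2 + z, with noise well inside d_min/2 = 1/2.
x = np.array([1, 1, 0, 0]) + np.array([1, 1, 1, 1]) / 2 + np.array([2, 0, -2, 0])
r = x + rng.normal(0, 0.05, 4)
print(decode(r, [dec_even_weight, dec_repetition]), x)
\\end{verbatim}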
The next theorem shows that the above algorithm finds the closest lattice
point of $\\Lambda_{\\mathcal{TC}}$ to the received vector ${\\bf r}$ when the points of
$\\Lambda_{\\mathcal{TC}}$ are sent over an unconstrained AWGN channel with noise variance $\\sigma^2$.
A similar theorem and proof appear in~\\cite{boundeddistforney}; for the sake of
completeness we give both here.
\\begin{Theorem}~\\label{th:truedecoder}
 Given an $n$-tuple ${\\bf r}$, if there is a point $\\tilde{{\\bf x}}$
in $\\Lambda_{\\mathcal{TC}}$ such that $\\|{\\bf r}-\\tilde{{\\bf x}}\\|^2< d_{\\min}^2(\\Lambda_{\\mathcal{TC}})\/4$,
then the algorithm decodes ${\\bf r}$ to $\\tilde{{\\bf x}}$.
\\end{Theorem}
The proof is given in Appendix A.
In the next subsection we analyze the decoding complexity of the proposed algorithm.

\\subsection{Decoding Complexity}~\\label{DecodingComplexity}
Since the cost of computing
the nearest odd and even integers to the components of a received vector
${\\bf r}$ is negligible, the decoding complexity of a lattice $\\Lambda_{\\ell}=\\mathcal{TC}_{\\ell}+2(\\mathbb{Z})^n$ constructed
using Construction A equals the complexity of decoding the turbo code
$\\mathcal{TC}_{\\ell}$ via an iterative turbo decoder.
As shown above, a turbo lattice decoder uses exactly $a$ successive
turbo decoding passes, one for each $\\Lambda_{\\ell}$, $1\\leq {\\ell}\\leq a$. Thus
the overall decoding complexity of the proposed turbo lattice decoding algorithm cannot
exceed $a$ times the decoding complexity of an iterative turbo decoder.

\\subsection{Other Possible Decoding Methods}~\\label{OtherPossibleDecodingMethods}
\\begin{figure*}[t!]%
 \\begin{center}%
 \\vspace{1.5cm}
\\includegraphics[width=10cm]{Comparisonfinal.eps}~\\caption{Comparison graph for various lengths of a turbo lattice.}~\\label{fig:comarison}
 \\end{center}
\\end{figure*}

The multi-stage decoding algorithm may not be the best choice here.
Some other options are listed in the following paragraphs.

First, for low-dimensional lattices a universal lattice code
decoder~\\cite{Viterbo99} can be employed to decode
turbo lattices. In that case, one can carve good lattice
constellations from turbo lattices by choosing appropriate
shaping regions~\\cite{Boutros96}.

Second, it is well known that turbo codes can be considered a class of graph-based codes
under iterative decoding~\\cite{recent}.
Tanner graph realizations of lattices are introduced
in~\\cite{Banihashemi01}. In multi-stage decoding,
after a ``coarse'' code is decoded, it is frozen and the decision
is passed to a ``fine'' code; in other words,
there is no iterative decoding across layers.
If the nested underlying turbo codes
of a turbo lattice are expressed in a Tanner-graph model as in~\\cite{recent},
or the turbo lattice itself is represented by a Tanner graph as in~\\cite{Banihashemi01},
then it seems feasible to perform iterative decoding across layers,
which may potentially improve performance.
However, we should again be careful about short cycles in the corresponding
Tanner graph of turbo lattices, as well as about
cycle-free lattices~\\cite{sakzad10}.

Third, lattice basis reduction algorithms, and the faster version
that works in the complex plane~\\cite{Ling09}, can also be employed
to find the closest lattice point to a received vector.

\\section{Simulation Results for Performance of Turbo Lattices}~\\label{SimulationResultsPerformanceofTL}
All simulation results presented here
are obtained using an AWGN channel, systematic recursive
convolutional codes in the parallel
concatenated scheme, and the iterative turbo lattice
decoder, all discussed earlier. Specifically, we investigate turbo lattices designed using two
identical terminated convolutional codes with generator matrix
${\\bf G}_A(x)$. Turbo lattices of different
lengths are examined. Furthermore, the performance of these
turbo lattices is evaluated using BCJR algorithms~\\cite{costello} as constituent
decoders for the iterative turbo decoder.
Moreover, $S$-random interleavers of sizes $2^5$, $7^3$ and $15^3$, with
$S$ equal to $3$, $10$ and $30$ respectively, have been used. This results in turbo lattices of dimensions
$(2^5+2)\\times3=102$, $(7^3+2)\\times3=1035$ and $(15^3+2)\\times3=10131$. The number of
iterations for the iterative turbo decoder is fixed and equal to ten in all cases.
Fig.~\\ref{fig:comarison} shows a comparison between turbo lattices formed with
turbo codes of different lengths. These turbo lattices
achieve a symbol error rate (SER) of $10^{-5}$ at $\\alpha^2=2.75~\\mbox{dB}$ for size $102$ and
at $\\alpha^2=1.25~\\mbox{dB}$ for frame length $1035$.
An SER of $10^{-5}$ is attained at $\\alpha^2=0.5~\\mbox{dB}$ for size $10131$.

In the following we compare these results for turbo lattices with other
recently introduced lattices, including LDPC lattices~\\cite{sadeghi}
and LDLC lattices~\\cite{LDLC}.
The comparison is presented in Table~\\ref{tableofcomparison}.


\\begin{table}[h!]
\\begin{center}
\\begin{tabular}{c|c|c|c}
\\hline\\hline
Lattice & $n$ & Error Probability & Distance from Capacity \\\\
\\hline\\hline
LDPC Lattice & $2000$ & NEP$=10^{-5}$&$2.8$~dB\\\\
\\hline
LDLC Lattice & $1000$ & SER$=10^{-5}$&$1.5$~dB\\\\
\\hline
Turbo Lattice & $1035$ & SER$=10^{-5}$&$1.25$~dB\\\\
\\hline\\hline
\\end{tabular}
\\end{center}\\caption{A comparison between well-known and newly introduced lattices.}~\\label{tableofcomparison}
\\end{table}
As Fig.~\\ref{fig:comarison} shows, for turbo lattices of sizes $n=102,~1035,~10131$, at an SER of $10^{-5}$,
we operate $\\alpha^2=2.75,~1.25$ and $0.5$~dB away from capacity,
while for $n=100,~1000,~10000,~100000$, LDLC lattices~\\cite{LDLC} can work as close
as $3.7,~1.5,~0.8$ and $0.6$~dB from capacity, respectively.
Thus turbo lattices perform very well when compared with other lattices.


\\section{Conclusion and Further Research Topics}~\\label{Conclusion}
The concept of turbo lattices was
established using the Construction D method for lattices along
with a set of newly introduced nested turbo codes.
To this end, tail-biting and
terminated convolutional codes were concatenated in parallel;
this parallel concatenation into turbo codes was
supported by nested $S$-random interleavers.
This makes it possible to combine the characteristics of
convolutional codes and block codes to produce good turbo lattices.
The fundamental parameters of turbo lattices needed to investigate
their error performance were provided, including minimum distance, coding gain,
kissing number and an upper bound on the probability of error.
Finally, our experimental results show excellent performance for
turbo lattices, as expected from the theoretical results.
More precisely, at an SER of $10^{-5}$ and for $n=10131$,
we can work as close as $\\alpha^2=0.5$~dB from capacity.

Analyzing other factors
and parameters of $b$-branch turbo lattices, such as their sphere packing, covering and
quantization properties, is also of great interest.
Another interesting research problem is to find the error performance
of turbo lattices designed with other types of interleavers, including
deterministic interleavers~\\cite{allertonversion, recent}.

Since the performance of turbo lattices depends on
the performance of their underlying codes, a search
for other well-behaved turbo-like codes would also be interesting.