diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfdhb" "b/data_all_eng_slimpj/shuffled/split2/finalzzfdhb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfdhb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{Intro}\n\\label{sec:introduction}\nRecent advancements in the field of communication systems and computational methods necessitate improvements in the traditional cryptosystems. Substitution box (S-box) and pseudo random number generator (PRNG) play an important role in many cryptosystems such as Data Encryption Standard (DES)~\\cite{DES}, Advanced Encryption Standard (AES)~\\cite{AES}, Twofish security system~\\cite{F2}, Blowfish cryptosystem~\\cite{BF}, International Data Encryption Algorithm (IDEA)~\\cite{IDEA} and the cryptosystems developed in~\\cite{umar2, NJia, XC, SS}.\nIt has been pointed out by many researchers that the security of a cryptosystem can be improved by using dynamic S-boxes instead of a single static S-box, see for example~\\cite{RB, SK, MM, AS, MG, KK}.\nThis fact necessitates the development of new S-box generators which can generate a large number of distinct and mutually uncorrelated S-boxes with high cryptographic properties in low time and space complexity~\\cite{Azam}.\n\nMany researchers have proposed improved S-box generators and PRNGs to enhance the security of data against modern cryptanalysis.\nThese improvements are mainly based on finite field arithmetic and chaotic systems.\nKhan and Azam~\\cite{MN1, MN2} developed two different methods to generate 256 cryptographically strong S-boxes by using Gray codes, and affine mapping.\nJakimoski and Kocarev~\\cite{Jaki} used chaotic maps to develop a four-step method for the generation of an S-box. $\\rm \\ddot{O}$zkaynak and $\\rm \\ddot{O}$zer~\\cite{Ozkaynak2} introduced a new method based on a chaotic system to develop secure S-boxes. Unlike the traditional use of chaotic maps, Wang et al.~\\cite{YW} proposed an efficient algorithm to construct S-boxes using gnetic algorithm and chaotic maps. Yin et al.~\\cite{Yin} proposed an S-box design technique using iteration of the chaotic maps. Tang and Liao~\\cite{Tang} constructed S-boxes based on an iterating discretized chaotic map.\nLambi\\'c~\\cite{Lambic} used a special type of discrete chaotic map to obtain bijective S-boxes. \\\"Ozkaynak et al.~\\cite{Ozkaynak} proposed a new S-box based on a fractional order chaotic Chen system.\n Zhang et al.~\\cite{IChing} used I-Ching operators for the construction of highly non-linear S-boxes, and the proposed approach is very efficient.\n\n\nSimilarly, chaotic systems are used to generate pseudo random numbers (PRNs), see for example~\\cite{MF, VP, CG, TS, ZF}. Francois et al.~\\cite{MF} presented a PRNG based on chaotic maps to construct multiple key sequences. Patidar and Sud~\\cite{VP} designed a PRNG with optimal cryptographic properties using chaotic logistic maps. Guyeux et al.~\\cite{CG} developed a chaotic PRNG with the properties of topological chaos which offers sufficient security for cryptographic purposes. Stojanovski and Kocarev~\\cite{TS} analyzed a PRNG based on a piecewise linear one dimensional chaotic map. 
Fan et al.~\\cite{ZF} proposed a PRNG using a generalized Henon map, and a novel technique is used to improve the characteristics of the proposed sequences.\n\nIt has been pointed out by Jia et al.~\\cite{NJia}\nthat the PRNs generated by a chaotic system can have a small period due to hardware computation issues, and it was revealed that elliptic curves (ECs) offer higher security than chaotic systems.\nHowever, the computation over ECs is usually performed by the group law, which is computationally inefficient.\nHayat and Azam~\\cite{umar2}\nproposed an efficient S-box generator and a PRNG based on ECs by using a total order as an alternative to the group law.\nThis S-box generator is more efficient than the other methods over ECs; however, its time and space complexity are $\\mathcal{O}(p^2)$ and $\\mathcal{O}(p)$, respectively, where $p$ is the prime of the underlying EC.\nFurthermore, the S-box generator does not guarantee the generation of an S-box.\nThe PRNG proposed by Hayat and Azam~\\cite{umar2} also takes $\\mathcal{O}(p^2)$ and $\\mathcal{O}(p)$ time and space, respectively, to generate a sequence of pseudo random numbers (SPRNs) of size $m \\leq p$.\nAzam et al.~\\cite{Azam}\nproposed an improved S-box generation method to generate bijective S-boxes by using ordered Mordell elliptic curves (MECs).\nThe main advantage of this method is that its time and space complexity are $\\mathcal{O}(mp)$ and $\\mathcal{O}(m)$, respectively, where $m$ is the size of an S-box.\nAzam et al.~\\cite{ikram}\nproposed another S-box generator to generate $m \\times n$ injective S-boxes, where $m \\leq n$, which can produce a large number of distinct and mutually uncorrelated S-boxes by using the concept of isomorphism on ECs.\nThe time and space complexity of this method are $\\mathcal{O}(2^np)$ and $\\mathcal{O}(2^n)$, where $n \\leq p$ is the size of the co-domain of the resultant S-box.\nA common drawback of these S-box generators is that the cryptographic properties of their optimal S-boxes are far from the theoretical optimal values.\n\nThe aim of this paper is to propose an efficient S-box generator and a PRNG based on an ordered MEC to generate a large number of distinct, mutually uncorrelated S-boxes and PRNs with optimal cryptographic properties in low time and space complexity to overcome the above mentioned drawbacks.\nThe rest of the paper is organized as follows:\nIn Section~\\ref{Prel}, basic definitions are discussed. The proposed S-box generator is described in Section~\\ref{Cons}. 
Section~\\ref{Anal} consists of the security analysis and comparison of the S-box generator.\nThe proposed algorithm for generating PRNs and some general results are given in Section~\\ref{RNG}.\nThe proposed SPRNs are analyzed in Section~\\ref{RA}, while Section~\\ref{Con} concludes the whole paper.\n\n\\section{Preliminaries}\\label{Prel}\nThroughout this paper, we denote the finite set $\\{0, 1, \\ldots, m-1\\}$ simply by $[0, m-1]$.\nThe finite field over a prime number $p$, denoted by $\\mathbb{F}_{p}$, is the set $[0, p-1]$ equipped with the binary operations of addition and multiplication modulo $p$.\nA non-zero integer $\\alpha \\in \\mathbb{F}_{p}$ is said to be a {\\it quadratic residue} (QR) if there exists an integer $\\beta \\in \\mathbb{F}_p$ such that $\\alpha\\equiv \\beta^2 \\pmod p$.\nA non-zero integer in $\\mathbb{F}_{p}$ which is not a QR is said to be a {\\it quadratic non-residue} (QNR).\n\nFor a prime $p$, non-negative $a\\in \\mathbb{F}_{p}$ and positive $b\\in \\mathbb{F}_{p}$, the EC $E_{p,a,b}$ over the finite field $\\mathbb{F}_{p}$ is defined to be the collection of the identity element $\\infty$ and the ordered pairs $(x, y) \\in \\mathbb{F}_p \\times \\mathbb{F}_p$ such that \\[ y^{2}\\equiv x^{3}+ax+b \\pmod p.\\]\nIn this setting, we call $p,a$ and $b$ the {\\it parameters} of $E_{p,a,b}$.\nThe number $\\#E_{p,a,b}$ of all such points can be bounded using Hasse's\ntheorem~\\cite{Schoof} \\[|\\#E_{p,a,b}-p-1|\\leq 2\\sqrt{p}.\\]\nTwo ECs $E_{p,a,b}$ and $E_{p,a',b'}$ over $\\mathbb{F}_{p}$ are isomorphic if and only if there exists a non-zero integer $t\\in \\mathbb{F}_{p}$ such that $a't^4\\equiv a \\pmod p$ and $b't^6\\equiv b \\pmod p$.\nIn this case, $t$ is called the {\\it isomorphism parameter} between the ECs $E_{p,a,b}$ and $E_{p,a',b'}$.\nFor an isomorphism parameter $t$, each point $(x,y)\\in E_{p,a,b}$ is mapped to $(t^2x,t^3y)\\in E_{p,a',b'}$.\nNote that isomorphism is an equivalence relation on all ECs over $\\mathbb{F}_{p}$, and therefore all ECs can be divided into equivalence classes~\\cite{ikram}. For the sake of simplicity, we represent an arbitrary class by $\\mathcal{C}_i$ and assume that the class $\\mathcal{C}_1$ contains the EC $E_{p,0,1}$.\nA non-negative integer $b\\in \\mathbb{F}_{p}$ such that $E_{p,a,b}\\in \\mathcal{C}_i$ is called the {\\it representative} $R(p, \\mathcal{C}_i)$ of the class $\\mathcal{C}_i$.\nClearly, it holds that $R(p,\\mathcal{C}_1) = 1$.\n\nAn EC $E_{p,a,b}$ with $a = 0$ is said to be a {\\it Mordell elliptic curve} (MEC).\nThe following theorem is from~\\cite[6.6 (c), p.~188]{Schoof}.\n\\begin{theorem}\\label{mordel}\nLet $p>3$ be a prime such that $p\\equiv 2\\pmod{3} $.\nFor each non-zero $b\\in \\mathbb{F}_{p}$, the MEC $E_{p,0,b}$ has exactly $p+1$\ndistinct points, and has each integer from $[0,p-1]$ exactly once as a $y$%\n-coordinate.\n\\end{theorem}\n\\noindent\nFurthermore, by~\\cite[Lemma 1]{ikram} it follows that there are only two classes of MECs when $p\\equiv 2 \\pmod 3$.\nHenceforth, we denote an MEC $E_{p,0, b}$ simply by $E_{p, b}$, and the term MEC stands for an MEC $E_{p, b}$ such that $p\\equiv 2 \\pmod 3$.\n\nFor a subset $A$ of $[0, p-1]$ and an ordered MEC $(E_{p, b}, \\prec)$, we define a total order $\\prec^*$ on $A$ w.r.t. 
the ordered MEC such that for any two elements $a_1, a_2 \\in A$ it holds that $a_1 \\prec^* a_2$ if and only if $(x_1, a_1) \\prec (x_2, a_2)$.\nFor any two non-negative integers $p$ and $m$ such that $1 \\leq m \\leq p$, we define an {\\em $(m, p)$-complete set} to be a set $Q$ of size $m$ such that for each element $ q \\in Q$, it holds that $0\\leq q \\leq p-1$,\nand no two elements of $Q$ are congruent with each other under modulo $m$, i.e., for each $q,q' \\in Q$, it holds that $(q \\not\\equiv q') \\pmod{m}$.\nWe denote an ordered set $A$ with a total order $\\prec$ by an ordered pair $(A, \\prec)$.\nLet $(A, \\prec)$ be an ordered set, for any two elements $a,a' \\in A$ such that $a \\prec a'$, we read as $a$ is smaller than $a'$ or $a'$ is greater than $a$ w.r.t. the order $\\prec$.\nFor simplicity, we represent the elements of $(A, \\prec)$ in the form of a non-decreasing sequence and $a_i$ denotes the $i$-th element of the ordered set in its sequence representation.\nFor an ordered MEC $(E_{p, b},\\prec)$ and an $(m,p)$-complete set $Y$, we define the \\textit{ordered $(m,p)$-complete set} $Y^*$ with ordering $\\tilde\\prec$ due to $Y$ and $\\prec$ such that for any two element $y_1, y_2 \\in Y^*$ with $y_1' \\equiv y_1 \\pmod{m}$ and $y_2' \\equiv y_2 \\pmod m$, where $y_1', y_2' \\in Y$, it holds that $y_1 \\tilde\\prec y_2$ if and only if $(x_1, y_1') \\prec (x_2, y_2')$.\n\nFor a given MEC $E_{p, b}$, Azam et al.~\\cite{Azam} defined three typical type of orderings \\textit{natural} $N$, \\textit{diffusion} $D$ and \\textit{modulo diffusion} $M$ ordering based on the coordinates of the points on $E_{p, b}$ as\\\\\n$(x_{1},y_{1})N(x_{2},y_{2}) \\Leftrightarrow\n\\begin{cases}\n{\\rm either \\ } x_{1} m \\log p$, since $\\log{p} < p$ and $m \\leq p$ and by the property of $\\mathcal{O}$ notation, the time complexity of Algorithm~\\ref{algo_sbox_1} is $\\mathcal{O}(mp)$.\nFurthermore, Algorithm~\\ref{algo_sbox_1} only stores sets of size $m$, and therefore its space complexity is $\\mathcal{O}(m)$.\nThis completes the proof.\n\\end{proof}\nNext we present another algorithm for the generation of $(m,p)$-complete S-boxes on a fixed MEC.\nFor this we prove the following results.\n\nFor a fixed ordered MEC $(E_{p,b}, \\prec)$, a positive integer $m \\leq p$ and an integer $0 \\leq k \\leq m - 1$, let Num$(E_{p,b}, \\prec, m, k)$ denote the total number of $(m,p)$-complete S-boxes, possibly with repetition, generated due to the ordered MEC, $m$ and $k$.\n\\begin{lemma}\\label{LL}\nFor a fixed ordered MEC $(E_{p,b}, \\prec)$ and a positive integer $m \\leq p$, the total number of $(m,p)$-complete S-boxes, possibly with repetition, generated due to the MEC is equal to $m(q+1)^{r}q^{m-r}$, where $p=mq+r$, $0\\leq r< m,$ and $0\\leq k \\leq m-1$.\n\\end{lemma}\n\\begin{proof}\nFor a fixed integer $0 \\leq k \\leq m-1$, it holds by the definition of $(m,p)$-complete S-box that the total number of $(m,p)$-complete S-boxes, possibly with repetition, generated due to the ordered MEC, $m$ and $k$\nis equal to the number of distinct $(m,p)$-complete sets.\nIf $p = mq + r$, where $0 \\leq r \\leq m-1$, then there are $q+1$ (resp., $q$) integers $\\ell$ (resp., $h$)\nsuch that $\\ell \\pmod m \\in [0, r-1]$ (resp., $h \\pmod m \\in [r, m-1]$).\nThus, to construct an $(m,p)$-complete set there are $q+1$ (resp., $q$) choices of an integers $a$ such that $a \\pmod m \\in [0, r-1]$ (resp., $[r, m-1]$).\nThis implies that there are $(q+1)^r q^{m-r}$ distinct $(m,p)$-complete sets.\nHence, the number 
of $(m, p)$-complete S-boxes due to the MEC is $m(q+1)^r q^{m-r}$, since $0 \\leq k \\leq m-1$.\n\\end{proof}\n\\begin{observation}\\label{obs_set}\nFor any subset $F$ of an MEC $E_{p,b}$ there exists a unique subset $F'$ of either the MEC $E_{p, R(p,\\mathcal{C}_{1})}$ or $E_{p, R(p,\\mathcal{C}_{2})}$ and a unique integer $t \\in [1, (p-1)\/2]$ such that for each $(x,y) \\in F$ there exists a unique point $(x', y') \\in F'$ for which it holds that $x\\equiv t^2x'\\pmod p$ and $y \\equiv t^3y'\\pmod p$.\n\\end{observation}\n\\noindent\nIt is important to mention that for each subset $F$ such that the set of $y$-coordinates of its points is an $(m,p)$-complete set, the set of $y$-coordinates of the points of $F'$ is not necessarily an $(m,p)$-complete set.\nThis is explained in Example~\\ref{E2}.\n\\begin{example}\\label{E2}\nLet $F$ be a subset of $E_{11,9}$ with a $(10,11)$-complete set $Y= \\{0,1,2,3,4,5,6,7,8,9\\}$ of $y$-coordinates, where $m = 10$.\nThen for $t = 2$, there exists $F' \\subset E_{11, 1}$ with $y$-coordinates from the set $Y'=\\{0,1,2,3,5,6,7,8,9,10\\} $ which is not a $(10,11)$-complete set.\n\\end{example}\nBy Observation~\\ref{obs_set}, we can avoid the while-loop used in Algorithm~\\ref{algo_sbox_1} to find the $x$-coordinate for each element $y$ in an $(m,p)$-complete set $Y$.\n\\begin{algorithm}[H]\n\n\t\\caption{{\\bf Constructing the proposed S-box using the EC isomorphism }}\\label{algo_sbox_2}\n\\begin{algorithmic}[1]\n\\REQUIRE An MEC $E_{p, R(p,\\mathcal{C}_{i})}$, where $i \\in \\{1, 2\\}$, the multiplicative inverse $t^{-1}$ of $t$ in $\\mathbb{F}_p$, where $t \\in [1, (p-1)\/2]$, a total order $\\prec$ on the MEC $E_{p,t^{6}R(p,\\mathcal{C}_{i})}$, an $(m,p)$-complete set $Y$ and a non-negative integer $k \\leq m-1$.\n\\ENSURE The proposed $(m,p)$-complete S-box $\\sigma(p, t^6R(p, \\mathcal{C}_i), \\prec, Y, k)$.\n\n\\STATE $A:=\\emptyset$; \/*A set containing the points of $E_{p,t^6R(p,\\mathcal{C}_{i})}$ with $y$-coordinates from the set $Y$*\/\n\n\\FORALL {$y \\in Y$}\n\\STATE $y':= (t^{-1})^3y$;\n\\STATE Find $x \\in [0, p-1]$ such that $(x, y') \\in E_{p,R(p,\\mathcal{C}_{i})}$;\n\\STATE $A:= A \\cup \\{(t^2x, y)\\}$;\n\\ENDFOR;\n\\STATE Sort $Y^*:=[0, m-1]$ w.r.t. 
the elements of $A$;\n\\STATE $\\pi := (\\pi(0), \\pi(1), \\ldots, \\pi(m-1))$;\n\\FORALL {integer $i \\in [0, m-1]$}\n\t\\STATE $\\pi(i) := y_{(i+k)\\Mod{m}}\\pmod{m}$, where $y_{(i +k)\\pmod{m}}$ is the $(i +k)\\pmod{m}$-th element of the ordered $(m,p)$-complete set $(Y^*, \\tilde\\prec)$\n\\ENDFOR;\n\\STATE Output $\\pi$ as the $(m,p)$-complete S-box $\\sigma(p, t^6R(p, \\mathcal{C}_i), \\prec,$ $ Y, k)$.\n\t\n\\end{algorithmic}\n\\end{algorithm}\n\\begin{lemma}\nFor an ordered MEC $(E_{p,b}, \\prec)$, where $b= t^6R(p,\\mathcal{C}_{i})$ for some $t \\in [1, (p-1)\/2]$ and $i \\in \\{1, 2\\}$, an $(m,p)$-complete set $Y$ and a non-negative integer $k\\leq m-1$, the $(m,p)$-complete S-box $\\sigma(p, b, \\prec, Y, k)$ can be computed in $\\mathcal{O}(m\\log m)$ time and $\\mathcal{O}(m)$ space by using Algorithm~\\ref{algo_sbox_2}.\n\\end{lemma}\n\\begin{proof}\nThere is a for-loop over the set $Y$ of size $m$ for finding the $x$-coordinate of each element $y \\in Y$ over the MEC $E_{p,t^{6}R(p,\\mathcal{C}_{i})}$.\nNote that at line 4 of Algorithm~\\ref{algo_sbox_2}, $x$ can be computed in constant time, i.e., $\\mathcal{O}(1)$.\nThis is because, by Theorem~\\ref{mordel}, the MEC $E_{p,b}$ has each element of $[0, p-1]$ exactly once as a $y$-coordinate.\nThus, the for-loop over $Y$ can be computed in $\\mathcal{O}(m)$ time.\nThe remaining part of Algorithm~\\ref{algo_sbox_2} takes $\\mathcal{O}(m\\log m)$ time.\nHence, by the properties of the $\\mathcal{O}$ notation, Algorithm~\\ref{algo_sbox_2} takes $\\mathcal{O}(m\\log m)$ time.\nMoreover, Algorithm~\\ref{algo_sbox_2} stores only a set of size $m$, other than the inputs, and therefore its space complexity is $\\mathcal{O}(m)$.\n\\end{proof}\nNote that using Algorithm~\\ref{algo_sbox_2} is practical, since Lemma~\\ref{LL} implies that for a given ordered MEC $(E_{p,b}, \\prec)$ we can generate a large number of $(m,p)$-complete S-boxes.\nHowever, $E_{p, R(p,\\mathcal{C}_{i})}$, where $i \\in \\{1, 2\\}$, $R(p, \\mathcal{C}_i)$ and $t^{-1}$ for $t \\in [1, (p-1)\/2]$ should be given as input to Algorithm~\\ref{algo_sbox_2}.\nWe know that $R(p,\\mathcal{C}_{1}) = 1$; the next important question is how to find the representative $R(p,\\mathcal{C}_{2})$ for the class $\\mathcal{C}_{2}$ of MECs.\nFor this, we prove the following results.\n\\begin{lemma}\\label{x_zero}\nAn MEC $E_{p,b}$ is an element of the class $\\mathcal{C}_1$ if and only if there exists an integer $y \\in [1, p-1]$ such that $(0, y) \\in E_{p,b}$.\n\\end{lemma}\n\\begin{proof}\nConsider the MEC $E_{p, 1}$.\nThen for $y = 1$ the equation $x^3 + 1 \\equiv 1 \\pmod p$ is satisfied by $x = 0$.\nThis implies that $(0, 1) \\in E_{p, 1}$, and hence the required statement is true for the MEC $E_{p, 1}$.\nLet $E_{p, b} \\in \\mathcal{C}_1$, where $b \\in [2, p-1]$.\nThen there exists an isomorphism parameter $t \\in [1, (p-1)\/2]$ between $E_{p, 1}$ and $E_{p, b}$ such that $(t^2\\cdot 0, t^3\\cdot 1) = (0, t^3) \\in E_{p, b}$.\nHence, for each MEC $E_{p,b} \\in \\mathcal{C}_1$ there exists an integer $y \\in [1, p-1]$ such that $(0, y) \\in E_{p,b}$.\n\nTo prove the converse, suppose on the contrary that there is an MEC $E_{p,b}$ with a point $(0,y)$ for some $y \\in [1, p-1]$ and $E_{p,b} \\notin \\mathcal{C}_1$.\nThis implies that there does not exist an integer $t \\in [1, (p-1)\/2]$ such that $b \\equiv t^6 \\pmod p$.\nThus, $b \\not\\equiv (t^3)^2 \\pmod p$ for all $t \\in [1, (p-1)\/2]$.\nBut it follows from $(0,y) \\in E_{p,b}$ that $b \\equiv y^2 \\pmod p$ for some $y \\in [1, (p-1)\/2]$, which is a contradiction.\nHence $E_{p,b} \\in 
\\mathcal{C}_1$.\n\\end{proof}\n\n\\begin{lemma}\\label{class_2_rep}\nFor a prime $p$, the representative $R(p,\\mathcal{C}_2)$ of the class $\\mathcal{C}_2$ is a QNR integer in the field $\\mathbb{F}_{p}$.\n\\end{lemma}\n\\begin{proof}\nLet $E_{p,b} \\in \\mathcal{C}_2$.\nSuppose on the contrary that $b$ is a QR in the field $\\mathbb{F}_p,$ i.e., $b \\equiv y^2 \\pmod p$ for some integer $y \\in [1, p-1]$.\nIt follows from the equation $x^3 + b \\equiv y^2 \\pmod p$ that $(0, y) \\in E_{p,b}$.\nBy Lemma~\\ref{x_zero}, it holds that $E_{p,b} \\in \\mathcal{C}_1$, which is a contradiction to our assumption.\nSo, $b$ is a QNR, and hence $R(p,\\mathcal{C}_2)$ is a QNR.\n\\end{proof}\nEuler's Criterion is a well-known method to test whether a non-zero element of the field $\\mathbb{F}_p$ is a QR or not.\nWe state this test in Lemma~\\ref{EC}.\n\\begin{lemma}\\label{EC}\n\\textnormal{\\cite[p.~1797]{Sze}} An element $q\\in\\mathbb{F}_{p}$ is a QR if and only if $q^{(p-1)\/2}\\equiv 1 \\pmod p$.\n\\end{lemma}\n\\section{Security Analysis and Comparison}\\label{Anal}\nIn this section, a detailed analysis of the proposed S-box is performed.\nMost cryptosystems use $8 \\times 8$ S-boxes; therefore, we use the $8 \\times 8$ $(256, 52511)$-complete S-box $\\sigma(52511, 1, N, Y, 0)$ given in Table~\\ref{SN}, generated by the proposed method, for the experiments.\nThe cryptographic properties of the proposed S-box are also compared with some of the well-known S-boxes developed using different mathematical structures.\n\\subsection{Linear Attacks}\nLinear attacks exploit the linear relationship between input and output bits.\nA cryptographically strong S-box is one which can strongly resist linear attacks. The resistance of an S-box against linear attacks is evaluated by well-known tests including non-linearity~\\cite{Carlet}, linear approximation probability~\\cite{Matsui} and algebraic complexity~\\cite{Sakalli}.\nFor a bijective $n \\times n$ S-box $S$, the non-linearity NL$(S)$ and the linear approximation probability LAP$(S)$ can be computed by Eqs.~(\\ref{NL}) and~(\\ref{LAP}), respectively, while its algebraic complexity AC$(S)$ is measured by the number of non-zero terms in its linearized algebraic expression~\\cite{Lidl}.\n\\begin{equation}\\label{NL}\n{\\rm NL}(S)=\\min_{\\alpha ,\\beta ,\\lambda }\\#\\{x\\in \\mathbb{F}_{2}^{n}:\\alpha \\cdot\nS(x)\\neq \\beta \\cdot x\\oplus \\lambda \\},\n\\end{equation}\n\\begin{equation}\\label{LAP}\n\\begin{split}\n&{\\rm LAP}(S)=\\frac{1}{2^{n}}\\Big\\{\\max_{\\alpha ,\\beta }\\big\\{|\\#\\{x\\in \\mathbb{F}_{2}^{n} \\mid \\\\\n & \\hspace{4cm}x \\cdot \\alpha=S(x) \\cdot \\beta \\}-2^{n-1}|\\big \\}\\Big \\},\n\\end{split}\n\\end{equation}\nwhere $\\alpha \\in \\mathbb{F}_{2}^{n}$, $\\lambda \\in \\mathbb{F}_{2}$, $\\beta \\in \\mathbb{F}_{2}^{n}\\backslash \\{0\\}$ and \\textquotedblleft$\\cdot$\\textquotedblright\\ represents the inner product over $\\mathbb{F}_{2}.$\n\nAn S-box $S$ is said to be highly resistant to linear attacks if it has NL close to $2^{n-1} - 2^{(n\/2)-1}$, low LAP and AC close to $2^n-1$.\n\nThe experimental results of NL, LAP and AC of the proposed S-box $\\sigma(52511, 1, N, Y, 0)$ and some of the well-known S-boxes are given in Table~\\ref{allanalysis}.\nNote that the proposed S-box has NL, LAP and AC close to the optimal values.\nThe $\\rm NL$ of $\\sigma(52511, 1, N, Y, 0)$ is greater than that of the S-boxes in~\\cite{Asif, ikram, umar2, 
Azam, YW, GT, Jaki, Ozkaynak2,Bhattacharya,IChing,YWang,AGautam,Shi} and equal to that of~\\cite{AES}.\nThe $\\rm LAP$ of $\\sigma(52511, 1, N, Y, 0)$ is less than that of the S-boxes in~\\cite{Asif, ikram, umar2, Azam, YW, GT, Jaki, Ozkaynak2,Bhattacharya,YWang,IChing,AGautam,Shi}, and the AC of $\\sigma(52511, 1, N, Y, 0)$ attains the optimal value, which is $255$.\nThus, the proposed method is capable of generating S-boxes with optimal resistance against linear attacks as compared to some of the existing well-known S-boxes.\n\\subsection{Differential Attack}\nIn this attack, cryptanalysts try to approximate the original message by observing a particular difference in the output bits for a given difference in the input bits.\nThe strength of an $n \\times n$ S-box $S$ can be measured by calculating its differential approximation probability DAP$(S)$ using Eq.~(\\ref{DAP}).\n\\begin{equation}\\label{DAP}\n\\begin{split}\n&{\\rm DAP}(S)=\\frac{1}{2^{n}}\\Big\\{\\max_{\\Delta x,\\Delta y}\\big\\{\\#\\{x\\in \\mathbb{F}_{2}^{n} \\mid\\\\\n& \\hspace{3cm} S(x\\oplus \\Delta x)= S(x)\\oplus \\Delta y \\}\\big \\}\\Big \\},\n\\end{split}\n\\end{equation}\nwhere $\\Delta x, \\Delta y\\in \\mathbb{F}_{2}^{n}$, and \\textquotedblleft$\\oplus$\\textquotedblright\\ denotes bit-wise addition over $\\mathbb{F}_{2}$.\n\n An S-box $S$ is highly secure against differential attacks if its DAP is close to $1\/2^n$.\n In Table~\\ref{allanalysis}, the $\\rm DAP$ of $\\sigma(52511, 1, N,$ $ Y, 0)$ and other existing S-boxes is given.\n Note that the DAP of the proposed S-box $\\sigma(52511, 1, N, Y, 0)$ is $0.016$, which is close to the optimal value $0.0039$.\n Furthermore, it is evident from Table~\\ref{allanalysis} that the DAP of the proposed S-box is less than that of the S-boxes in~\\cite{Asif,ikram, umar2, Azam, YW, GT, Jaki, Ozkaynak2,Bhattacharya,YWang,IChing,AGautam,Shi}, and hence the proposed scheme can generate S-boxes with high resistance against differential attacks.\n\\subsection{Analysis of Boolean Functions}\nIt is necessary to analyze the Boolean functions of a given S-box to measure its confusion\/diffusion creation capability.\nFor an $n \\times n$ S-box, the strict avalanche criterion SAC$(S)$ and the bit independence criterion BIC$(S)$ are used to analyze its Boolean functions.\nThe SAC$(S)$ and the BIC$(S)$ are computed by two matrices $M(S) = [m_{ij}]$ and $B(S)=[b_{ij}]$, respectively, such that\n\\begin{equation}\\label{sAc}\nm_{ij}=\\frac{1}{2^{n}}\\left( \\sum_{x\\in \\mathbb{F}_{2}^{n}} w\\left( S_{i}(x\\oplus \\alpha_{j})\\oplus S_{i}(x)\\right) \\right),\n\\end{equation}\nand\n\\begin{equation}\\label{bIc}\n\\begin{split}\n&b_{ij}=\\frac{1}{2^{n}}\\Biggl(\\sum_{\\substack{ x\\in \\mathbb{F}_{2}^{n}\\\\%\n1\\leq r\\neq i\\leq n}}w\\biggl(S_{i}(x \\oplus \\alpha_{j})\\oplus S_{i}(x)\\oplus \\\\\n&\\hspace{4cm}S_{r}(x\\oplus\\alpha_{j})\\oplus S_{r}(x)\\biggr) \\Biggr),\n\\end{split}\n\\end{equation}\nwhere $w(y)$ is the Hamming weight of $y$, $\\alpha_{j}\\in \\mathbb{F}_{2}^{n}$\nsuch that $w(\\alpha_{j})=1$, $ {S}_{i}$ and ${S}_{r}$ are the $i$-th and $r$-th Boolean functions of $ S$, respectively, and $1\\leq i,j,r \\leq n$.\nAn S-box $S$ satisfies the SAC and the BIC if each non-diagonal entry of $M(S)$ and $B(S)$ has a value close to $0.5$.\nThe maximum and minimum values of the SAC (resp., BIC) of the proposed S-box $\\sigma(52511, 1, N, Y, 0)$ are $0.563$ and $0.438$ (resp., $0.521$ and $0.479$).\nNote that these values are close to $0.5$, and hence 
the proposed S-box satisfies the SAC and the BIC.\nSimilarly, the SAC and the BIC of some other S-boxes are listed in Table~\\ref{allanalysis} and compared with the results of the proposed S-box.\nIt is evident from Table~\\ref{allanalysis} that the proposed S-box can generate more confusion and diffusion as compared to some of the listed S-boxes.\n\\begin{table}[htb]\n\\caption{Comparison of the proposed and other existing S-boxes}\n\\label{allanalysis}\n\\centering\n\\resizebox{\\columnwidth}{!}\n{\n\\bgroup\n\\def\\arraystretch{1.1}\n\\begin{tabular}{cccccccccc}\n \\hline\n S-boxes&Type&\\multicolumn{3}{c}{Linear}& DAP & \\multicolumn{4}{c}{Analysis of}\\\\\n &of&\\multicolumn{3}{c}{ Attacks}& & \\multicolumn{4}{c}{Boolean Functions}\\\\\n \\cline{3-5} \\cline{7-10}\n &S-box&NL&LAP&AC & & SAC &SAC &BIC &BIC\\\\\n & & & & & & (max) &(min) &(max) &(min)\\\\\n \\cline{1-10}\n Ref.~\\cite{Bhattacharya}&&102 &0.133&254 &0.039\t&0.562&\t0.359 &0.535&0.467 \\\\\n Ref.~\\cite{IChing}&other&108 &0.133&255&0.039 &0.563 &0.493& 0.545 &0.475\\\\\n Ref.~\\cite{AES}&&112 & 0.062 & 9 &0.016 &0.562 &0.453 & 0.504 & 0.480 \\\\\n Ref.~\\cite{Shi}& &108 & 0.156 & 255 &0.046 &0.502 &0.406 & 0.503 & 0.47 \\\\\n \\cline{1-10}\n Ref.~\\cite{YW}& &108 & 0.145 &255 & 0.039 &0.578 &0.406 &0.531&0.470 \\\\\n Ref.~\\cite{GT}& &103 &0.132 &255 & 0.039 &0.570 &0.398 &0.535&0.472 \\\\\n Ref.~\\cite{Jaki}&Chaos&100 & 0.129 &255 & 0.039 & 0.594 & 0.422 &0.525&0.477 \\\\\n Ref.~\\cite{Ozkaynak2}&&100 &0.152 &255 &0.039 &0.586 &0.391 &0.537&0.468 \\\\\n Ref.~\\cite{YWang}&&110 & 0.125 & 255 & 0.039 & 0.562 &0.438 &0.555 & 0.473 \\\\\n Ref.~\\cite{AGautam}&& 74 &0.211 &253 &0.055 & 0.688 &0.109 & 0.551 & 0.402 \\\\\n \\cline{1-10}\n Ref.~\\cite{Asif}&&104&0.145&255&0.039&0.625&0.391&0.531&0.471 \\\\\n Ref.~\\cite{umar2}&&106&0.148&254&0.039&0.578&0.437&0.535&0.464 \\\\\n Ref.~\\cite{ikram}&EC&106&0.188&253&0.039 &0.609&0.406&0.527&0.465 \\\\\n Ref.~\\cite{Azam}&&106&0.148&255&0.039&0.641&0.406&0.537&0.471 \\\\\n $\\sigma(52511, 1, N, Y, 0)$&&112\t&0.063& 255 &0.016&\t0.563&\t0.438&\t0.521& 0.479 \\\\\n \\hline\n\\end{tabular}\n\\egroup\n}\n\\end{table}\n\n\n\\subsection{Distinct S-boxes}\n An S-box generator is useful for resisting cryptanalysis if it can generate a large number of distinct S-boxes~\\cite{Azam}.\nFor the parameters $p=263, b =1, m=256$ and $k=0$, the number of $(256,263)$-complete S-boxes Num$(E_{263,1}, 256, 0)$ is $128$.\nOur computational results show that all of these $(256,263)$-complete S-boxes are distinct.\nHowever, this is not the case in general.\n\nAn $(m,p)$-complete S-box $\\sigma(p,b,\\prec,Y, k)$ is said to be a \\textit{natural} $(m,p)$\\textit{-complete S-box} if $Y = [0, m-1]$.\nFor a prime $p$ and an ordering $\\prec$, let $p^*$ denote the largest integer such that $p^* \\leq p-1$ and there exist at least two ordered MECs $E_{p,b_1}$ and $E_{p,b_2}$ due to which the natural $(p^*, p)$-complete S-boxes are identical, i.e., for any fixed $m > p^*$ the number of natural $(m, p)$-complete S-boxes due to all ordered MECs with prime $p$, ordering $\\prec$ and $k = 0$ is equal to $p-1$.\nA plot of primes $p \\in [11, 7817]$ and the integers $p^{*}$ is given in Fig.~\\ref{LI}, where the underlying ordering is the natural ordering $N$.\nFor the orderings $D$ and $M$, such plots are similar to that of $N$.\nIt is evident from Fig.~\\ref{LI} that, with the increase in the value of the prime, there is no significant increase in the value of $p^*$, and the largest value of $p^*$ for these primes is 
$12$.\nHence, for each of these primes, each $m \\geq 13$ and $k = 0$, we can get $p-1$ distinct natural $(m, p)$-complete S-boxes with $k = 0$.\n\n\\begin{figure}[htb!]\n\\includegraphics [scale=0.68]{Conjecture.pdf}\n\\caption{Plot of primes $p$ and their corresponding largest integers $p^{*}$}\n\\label{LI}\n\\end{figure}\n\\begin{lemma}\\label{class1}\nLet $\\prec$ be a fixed total order on all MECs in $\\mathcal{C}_{1}$ such that for each MEC $E_{p,b} \\in \\mathcal{C}_1$ it holds that the points $(0, \\pm y)$, where $-y$ is additive inverse of $y$ in $\\mathbb{F}_p$, have indices from the set $\\{1, 2\\}$ in the sequence representation of the MEC.\nThen for a fixed integer $k \\in [0, m-1]$, the number of distinct natural $(m,p)$-complete S-boxes generated by all MECs in $\\mathcal{C}_{1}$ are at least\n\\begin{equation}\\label{L10}\n\\left\\{\n\\begin{tabular}{ll}\n$m - 1$ & \\textnormal{if} $m < (p-1)\/2$\\\\\n$(p-1)\/2 $ & \\textnormal{otherwise}.\n\n\\end{tabular}\n\\right.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet $E_{p,b}$ be an MEC in $\\mathcal{C}_1$, where $b \\in [1, p-1]$.\nThen by Lemma~\\ref{x_zero}, $(0, y) \\in E_{p,b}$ for some $y \\in [1, p-1]$.\nFurther by the fact that if $(x, z) \\in E_{p,b}$ then $(x, -z) \\in E_{p,b}$, where $-z$ is the additive inverse of $z$ in the field $\\mathbb{F}_p$, implies that $(0, \\pm y) \\in E_{p, b}$.\nMoreover, by the group theoretic argument exactly one of the integers $y$ and $-y$ belongs to the interval $[0, (p-1)\/2]$.\nHence, for a fixed $k \\in [0, m-1]$ and the natural $(m,p)$-complete S-box $\\sigma(p, b, \\prec, Y, k)$ it holds that $\\sigma(p, b, \\prec, Y, k)(k)\\in \\{\\pm y\\}$ if $(0, \\pm y)$ have indices from the set $\\{1,2\\}$ in the sequence representation of $E_{p,b}$.\nNote that a point $(0,z)$ cannot appear on two different MECs $E_{p,b_1}$ and $E_{p,b_2}$, otherwise this implies that $b_1 = b_2$.\nThus, for any two MECs $E_{p,b_1},E_{p,b_2}$ in $\\mathcal{C}_1$ satisfying the conditions given in the lemma it holds that the natural $(m,p)$-complete S-boxes $\\sigma(p, b_1, \\prec, Y, k)$ and $\\sigma(p, b_2 , \\prec, Y, k)$ have different images at a fixed input $k \\in[0, m-1]$.\nThus $|\\mathcal{C}_1| = (p-1)\/2$ implies the required result.\n\\end{proof}\nFor three different primes $p$ distinct S-boxes are generated by the proposed method, and compared with the existing schemes over ECs as shown in Table~\\ref{counting}. 
It is evident that the proposed S-box generator performs better than other schemes.\n\\begin{table}[htb!]\n\\caption{Comparison of the number of distinct $8 \\times 8$ S-boxes generated by different schemes}\n\\label{counting}\n\\centering\n\\resizebox{\\columnwidth}{!}\n{\n\\bgroup\n\\def1.2{1.27}\n\\begin{tabular}{lcclll}\n\\cline{3-6}\n&&$p$ & 1889 & 2111 & 2141 \\\\\n\\cline{3-6}\n&&$b$ & 1888 & 1 & 7 \\\\ \\hline\nDistinct S-boxes by the& $N$ && 32768$^\\dag$ &32768$^\\dag$ &32768$^\\dag$ \\\\ \\cline{2-6}\nproposed method due to& $D$ && 31744$^\\dag$&32704$^\\dag$&30720$^\\dag$ \\\\ \\cline{2-6}\nthe ordering& $M$ && 15360$^\\dag$ & 26748$^\\dag$ & 21504$^\\dag$ \\\\ \\hline\n & $N$ && 944 & 1055 & 1070 \\\\ \\cline{2-6}\nDistinct S-boxes by Ref.~\\cite{ikram}& $D$ && 944 &1055 &1070 \\\\ \\cline{2-6}\n & $M$ && 944 &1055 &1070 \\\\ \\hline\nDistinct S-boxes by Ref.~\\cite{umar2}&& & 50 & 654 & 663 \\\\ \\hline\nDistinct S-boxes by Ref.~\\cite{Asif}&& & 1 & 1 & 1 \\\\ \\hline\n\\end{tabular}\n\\egroup\n}\n\n\\end{table}\n\\xdef\\@thefnmark{}\\@footnotetext{The number $h^{\\dag}$ stands for an integer greater than $h$.}\n\\subsection{Fixed Point Test}\nAn S-box construction scheme is cryptographically good if the average number of fixed points in the constructed S-boxes is as small as possible~\\cite{Azam}. The average number of fixed points of the above generated S-boxes are shown in Table~\\ref{fixed}. The experimental results indicate that the proposed S-box generator generates S-boxes with a very small number of fixed points.\nFurthermore, the average number of fixed points in the proposed S-boxes are comparable with that of the existing schemes over ECs.\n\\begin{table}[htb!]\n\\caption{Comparison of average number of the fixed points in the S-boxes generated by different schemes}\n\\label{fixed}\n\\centering\n\\resizebox{\\columnwidth}{!}\n{\n\\bgroup\n\\def1.2{1.1}\n\\begin{tabular}{lcclll}\n\\cline{3-6}\n&&$p$ & 1889 & 2111 & 2141 \\\\\n\\cline{3-6}\n&&$b$ & 1888 & 1 & 7 \\\\ \\hline\nAvg. \\# fixed points by the& $N$ && 1.1298 & 1.0844 & 1.0972 \\\\ \\cline{2-6}\nproposed method due to& $D$ && 0.9471 & 0.8569 & 0.9393 \\\\ \\cline{2-6}\nthe ordering& $M$ && 0.8361 & 1.1847 & 1.0025 \\\\ \\hline\n & $N$ && 1.77 & 0.9735 & 0.9785 \\\\ \\cline{2-6}\nAvg. \\# fixed points by Ref.~\\cite{ikram}& $D$ && 1.932 &0.9716 &0.9561 \\\\ \\cline{2-6}\n & $M$ && 1.332 &1.0019 &1.0150 \\\\ \\hline\nAvg. \\# fixed points by Ref.~\\cite{umar2}&& & 2.04 & 0.8976 & 0.9351 \\\\ \\hline\nAvg. \\# fixed points by Ref.~\\cite{Asif}&& & 2 & 3 & 0 \\\\ \\hline\n\\end{tabular}\n\\egroup\n}\n\\end{table}\n\\subsection{Correlation Test} The correlation test is used to analyze the relationship among the S-boxes generated by any scheme. A robust scheme generates S-boxes with low correlation~\\cite{Azam}.\nThe proposed method is evaluated by determining the correlation coefficients (CCs) of the designed S-boxes. 
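For illustration only, a minimal sketch of how such a CC can be computed is given below; it assumes, as our own simplification rather than a detail stated in this section, that the CC of two S-boxes is the Pearson correlation coefficient of their output sequences, and the function name is purely illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef sbox_correlation(sbox1, sbox2):\n    # Pearson correlation between the output sequences\n    # of two S-boxes of equal length.\n    s1 = np.asarray(sbox1, dtype=float)\n    s2 = np.asarray(sbox2, dtype=float)\n    return np.corrcoef(s1, s2)[0, 1]\n\n# Toy usage with two placeholder 8 x 8 S-boxes\n# (random permutations of 0..255).\nrng = np.random.default_rng(0)\nprint(sbox_correlation(rng.permutation(256),\n                       rng.permutation(256)))\n\\end{verbatim}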
The lower and upper bounds for their CCs are listed in Table~\\ref{CC}, which reveal that the proposed scheme is capable of constructing S-boxes with very low correlation as compared to the other schemes over ECs.\n\\begin{table}[htb!]\n\\caption{\\ Comparison of CCs of S-boxes generated by different schemes}\n\\label{CC}\n\\centering\n\\resizebox{\\columnwidth}{!}\n \n{\n\\bgroup\n\\def1.2{1.1}\n\\begin{tabular}{lcccccc}\n\\hline\nScheme&$p$& $b$ & Ordering &\\multicolumn{3}{c}{Correlation} \\\\\n\\cline{5-7}\n& & & &Lower & Average & Upper\\\\\n\\hline\n &1889 & 1888&$N$ &-0.2685&0.0508 & 0.2753 \\\\\nProposed&1889 & 1888&$D$ &-0.2263&0.0523 & 0.2986 \\\\\n &1889 & 1888&$M$ &-0.2817&0.0506 & 0.2902 \\\\\n\\hline\n &2111 & 1&$N$ &-0.2718&0.0504 & 0.2600 \\\\\nProposed&2111 & 1&$D$ &-0.2596&0.0531 & 0.3025 \\\\\n &2111 & 1&$M$ &-0.2779&0.0507 & 0.2684 \\\\\n\\hline\n &2141 & 7&$N$ &-0.2682&0.0503 & 0.2666 \\\\\nProposed&2141 & 7&$D$ &-0.2565&0.0517 & 0.2890 \\\\\n &2141 & 7&$M$ &-0.2744&0.0503 & 0.2858 \\\\\n\\hline\n &1889 & 1888&$N$ &-0.2782&0.0503 & 0.2756 \\\\\nRef.~\\cite{ikram}&1889 & 1888&$D$ &-0.4637&-0.0503 & 0.2879 \\\\\n &1889 & 1888&$M$ &-0.2694&0.0501 & 0.4844 \\\\\n\\hline\n &2111 & 1&$N$ &-0.2597&0.0504 & 0.2961 \\\\\nRef.~\\cite{ikram}&2111 & 1&$D$ &-0.3679&0.0500 & 0.3996 \\\\\n &2111 & 1&$M$ &-0.2720&0.0499 & 0.3019 \\\\\n\\hline\n &2141 & 7&$N$ &-0.2984&0.0500 & 0.3301 \\\\\nRef.~\\cite{ikram}&2141 & 7&$D$ &-0.2661&0.0500 & 0.2639 \\\\\n &2141 & 7&$M$ &-0.2977&0.0501 & 0.2975 \\\\\n\\hline\nRef.~\\cite{umar2} & 1889& 1888 &-- &-0.0025&0.2322&0.9821\\\\ \\hline\nRef.~\\cite{umar2} & 2111& 1 &-- &-0.2932&0.0785&0.9988\\\\\n\\hline\nRef.~\\cite{umar2} & 2141& 7 &-- &-0.2723 &0.0629&0.9999\\\\\n\\hline\n\\end{tabular}\n\\egroup\n}}\n\\end{table}\n\\subsection{Time and Space Complexity}\nFor a good S-box generator it is necessary to have low time and space complexity~\\cite{Azam}.\nTime and space complexity of the newly proposed method are compared with some of the existing methods in Table~\\ref{TandSCom}. 
It follows that, for a fixed prime, the proposed method can generate an S-box with lower time and space complexity as compared to the other listed schemes.\nThis fact makes the proposed S-box generator more efficient and practical.\n\\begin{table}[htb!]\n\\caption{Comparison of time and space complexity of different S-box generators over ECs}\n\\label{TandSCom}\n\\centering\n\\resizebox{\\columnwidth}{!}\n{\\\n\\bgroup\n\\def\\arraystretch{1.2}\n{\\begin{tabular}{cccccc}\n\\hline\nS-box & Ref.~\\cite{Asif} &Ref.~\\cite{umar2} &Ref.~\\cite{ikram}&\\multicolumn{2}{c}{\\small Proposed method} \\\\\n\\cline{5-6}\n& & & &Algorithm~\\ref{algo_sbox_1}&Algorithm~\\ref{algo_sbox_2}\\\\\n\\hline\nTime complexity & $\\mathcal{O}(p^{2})$ & $\\mathcal{O}(p^{2})$ & $\\mathcal{O}(mp)$ & $\\mathcal{O}(mp)$ & $\\mathcal{O}(m\\log m)$ \\\\\n\\hline\n\nSpace complexity & $\\mathcal{O}(p)$ & $\\mathcal{O}(p)$ & $\\mathcal{O}(m)$ & $\\mathcal{O}(m)$ & $\\mathcal{O}(m) $ \\\\\n \\hline\n\\end{tabular}%\n\\egroup\n}}\n\\end{table}\n\\section{The Proposed Random Number Generation Scheme}\\label{RNG}\nFor an ordered MEC $(E_{p,b}, \\prec)$, a subset $A \\subseteq [0, p-1]$, an integer $m \\in [1, |A|]$ and a non-negative integer $k \\in [0, m-1]$, we define a sequence of \\textit{pseudo random numbers} (SPRNs) $\\gamma(p, b, \\prec, A, m, k)$ to be a sequence of length $|A|$ whose $i$-th term is defined as $\\gamma(p, b, \\prec, A, m, k)(i) = y_{(i+k)\\pmod m}\\pmod{m}$, where $y_{(i+k)\\pmod m}$ is the $(i+k)\\pmod m$-th element of the ordered set $(A, \\prec^*)$ in its sequence representation.\\\\\nOne of the differences between the definition of an $(m,p)$-complete S-box and that of the proposed SPRNs is that an $(m,p)$-complete set is required as an input for the S-box generation, since an S-box of length $m$ is a permutation on the set $[0, m-1]$.\nFurthermore, Algorithms~\\ref{algo_sbox_1} and~\\ref{algo_sbox_2} can be used for the generation of the proposed SPRNs; however, we propose another algorithm which is more efficient than Algorithm~\\ref{algo_sbox_2} for this purpose.\nThis new algorithm is also based on Observation~\\ref{obs_set}, but there is no constraint on $A$ to be an $(m,p)$-complete set, and hence we can generate all proposed SPRNs for a given prime $p$ by using $E_{p, R(p, \\mathcal{C}_i)}$, where $i \\in \\{1, 2\\}$.\n\n\\begin{algorithm}[H]\n\n\t\\caption{{\\bf Constructing the proposed SPRNs using EC isomorphism}}\\label{algo_sprn}\n\\begin{algorithmic}[1]\n\\REQUIRE An MEC $E_{p, R(p,\\mathcal{C}_{i})}$, where $i\\in \\{1, 2\\}$, an integer $t \\in [1, (p-1)\/2]$, a total order $\\prec$ on the MEC $E_{p,t^{6}R(p,\\mathcal{C}_{i})}$ and a subset $Y \\subseteq [0, p-1]$.\n\\ENSURE The proposed SPRNs $\\gamma(p, t^{6}R(p,\\mathcal{C}_{i}), \\prec, t^3Y, m, k)$.\n\n\\STATE $A:=\\emptyset$; \/*A set containing the points of $E_{p,t^6R(p,\\mathcal{C}_{i})}$ with $y$-coordinates from the set $t^3Y$*\/\n\n\\FORALL {$y \\in Y$}\n\\STATE Find $x \\in [0, p-1]$ such that $(x, y) \\in E_{p,R(p,\\mathcal{C}_{i})}$;\n\\STATE $A:= A \\cup \\{(t^2x, t^3y)\\}$;\n\\ENDFOR;\n\\STATE Sort $A$ w.r.t. 
the total order $\\prec^*$;\n\\STATE $\\pi := (\\pi(0), \\pi(1), \\ldots, \\pi(|A|-1))$;\n\\FORALL {integer $i \\in [0, |A|-1]$}\n\t\\STATE $\\pi(i) := a_{(i+k)\\Mod{m}}\\pmod{m}$, where $a_{(i +k)\\pmod{m}}$ is the $(i +k)\\pmod{m}$-th element of the ordered set $(A, \\prec^*)$\n\\ENDFOR;\n\\STATE Output $\\pi$ as the proposed SPRN $\\gamma(p, t^{6}R(p,\\mathcal{C}_{i}), \\prec, t^3Y, m, k)$\n\t\n\\end{algorithmic}\n\\end{algorithm}\n\\noindent\nNote that the time and space complexity of Algorithm~\\ref{algo_sprn} are $\\mathcal{O}(|A|\\log |A|)$ and $\\mathcal{O}(|A|)$, respectively, as obtained for Algorithm~\\ref{algo_sbox_2}.\nHowever, Algorithm~\\ref{algo_sprn} does not require $t^{-1}$, which would need preprocessing to compute, as an input parameter for computing $\\gamma(p, t^{6}R(p,\\mathcal{C}_{i}), \\prec,$ $t^3Y, m, k)$.\nFurthermore, Lemma~\\ref{class1} trivially holds for our proposed SPRNs.\nThis implies that the proposed PRNG can generate a large number of distinct SPRNs for a given prime.\n\n\n\n\n\n\\section{Analysis of the Proposed SPRNs Method}\\label{RA}\nWe applied some well-known tests to analyze the strength of our proposed SPRNs. A brief introduction to these tests and their experimental results are given below.\nWe used the orderings $N, D$ and $M$ for these tests.\n\n\\subsection{Histogram and Entropy Test}\nHistogram and entropy are two widely used tests to measure the extent of randomness of an RNG.\nFor a sequence $X$ over the set of symbols $\\Omega$, the histogram of $X$ is a function $f_{X}$ over $\\Omega$ such that for each $w \\in \\Omega$, $f_{X}(w)$ is equal to the number of occurrences of $w$ in $X$.\nWe call $f_X(w)$ the frequency of $w$ in $X$.\n\n\nA sequence $X$ has a uniform histogram if all elements of its symbol set have the same frequency.\nThe histogram test is a generalization of the Monobit test included in NIST STS~\\cite{Rukhin}.\nA sequence is said to be highly random if it has a uniformly distributed histogram.\n\nShannon~\\cite{Shannon} introduced the concept of entropy.\nFor a sequence $X$ over the set of symbols $\\Omega$,\nthe entropy H$(X)$ of $X$ is defined as\n\\begin{equation}\\label{formula}\n{\\rm H}(X)= -\\sum_{w \\in \\Omega}\\frac{f_X(w)}{|X|}\\log_{2}(\\frac{f_X(w)}{|X|}).\n\\end{equation}\nThe upper bound for the entropy is $\\log_{2}(|\\Omega|).$\nThe higher the entropy of a sequence, the higher the randomness in the sequence.\n\\begin{remark}\n For any distinct $k_1, k_2 \\in [0, m-1]$, the histograms of the proposed SPRNs $\\gamma(p,b,\\prec,A,m, k_1)$ and $\\gamma(p,b,\\prec,A,m,k_2)$ are the same, and hence ${\\rm H}(\\gamma(p,b,\\prec,$ $A,m,k_1)) = {\\rm H}(\\gamma(p,b,\\prec,A,m,k_2))$.\n \\end{remark}\nIn the next lemmas, we discuss when the proposed SPRNs have a uniformly distributed histogram and their entropy approaches the optimal value.\n\\begin{lemma}\\label{opt}\nFor an $(m,p)$-complete set $A$, a positive integer $h \\leq m$ such that $m = hq + r$ with $0 \\leq r < h$, a non-negative integer $k \\leq h$, and the SPRNs $X = \\gamma(p,b,\\prec,A,h,k)$, it holds that\n\\begin{enumerate}[label=\\roman*, font=\\upshape, noitemsep]\n\\item[(i)] \\begin{equation*}\n f_X(w) = \\left\\{\n \\begin{tabular}{ll}\n$q+1$ & \\textnormal{if} $w \\in [0, r-1]$,\\\\\n$q$ & \\textnormal{otherwise},\n \\end{tabular}\n \\right.\n \\end{equation*}\n \\textnormal{if} $r \\neq 0$ \\textnormal{and} $A = [0, m-1]$,\n \\item[(ii)] \\textnormal{for each} $w \\in [0, h-1]$, $f_X(w) = q$ \\textnormal{if} $r = 
0$.\n\\end{enumerate}\n\n\n\\end{lemma}\n\\begin{proof}\nIt is trivial that the domain of the histogram of $X$ is the set $[0, h-1]$.\\\\\n(i)~If $r \\neq 0$ and $A = [0, m-1]$, then it can be easily verified that $A$ can be partitioned in $q+1$ sets $\\{ih +\\ell \\mid 0 \\leq \\ell \\leq h-1\\}$, where $0 \\leq i \\leq q-1$, and $\\{qh +\\ell \\mid 0 \\leq \\ell \\leq r -1\\}$.\nThis implies that for each $w \\in [0, h-1]$ it holds that\n\\begin{equation*}\n f_X(w) = \\left\\{\n \\begin{tabular}{ll}\n$q+1$ & \\textnormal{if} $w \\in [0, r-1]$,\\\\\n$q$ &\\textnormal{otherwise}.\n \\end{tabular}\n \\right.\n \\end{equation*}\n(ii)~If $r = 0$, then $m = hq$.\n We know that for each $a \\in A$, it holds that $a = mi + j$, where $ 0 \\leq j \\leq m- 1$.\nThus, with the fact that $m = hq$, it holds that\n\\begin{equation*}\n\\begin{split}\na \\pmod h &= ((mi) \\pmod h + j \\pmod h) \\\\\n &= j \\pmod h\n \\end{split}\n\\end{equation*}\nThis implies that $\\{a\\pmod{m} \\mid a \\in A\\}$ = $\\{(a\\pmod{m}) \\pmod {h} \\mid a \\in A\\}$.\nThus by using the same reason, we can partition $A$ into $q$ sets, since $m = hq$, and hence $f_{X}(w) = q$ for each $w \\in [0, h-1]$.\nThis completes the proof.\n\\end{proof}\nFor the parameters given in Lemma~\\ref{opt}, we can deduce that the histogram of our proposed SPRNs is either approximately uniform or exactly uniform.\n\n\\begin{corollary}\nLet $A$ be an $(m,p)$-complete set, $h \\leq m$ such that $m = hq + r$ be a positive integer, $k \\leq m-1$ be a non-negative integer, and $X$ be the proposed SPRNs $ \\gamma(p,b,\\prec,$ $A,h,k)$.\nIt holds that\n\n \\begin{equation\n \\rm {H}(\\it {X}) = \\begin{cases}\n -r(\\frac{q+1}{|X|})\\rm{log_{2}}\\it(\\frac{q+1}{|X|})- & \\textnormal {if}\\quad r \\neq 0, A = [0, m-1],\\\\\n (h-r)(\\frac{q}{|X|})\\rm{log_{2}}\\it(\\frac{q}{|X|}) &\\\\\n \\log_2(h) & \\textnormal{if}\\quad r = 0.\n \\end{cases}\n \\end{equation}\n\\end{corollary}\n \\begin{proof\nWhen $r \\neq 0$ and $A = [0, m-1]$, then by Lemma~\\ref{opt}~(i), there are $r$ (resp., $h-r$) numbers in $[0, h-1]$ whose frequency is $q+1$ (resp., $q$), and therefore we have the result.\\\\\nWhen $r = 0$, then by Lemma~\\ref{opt}~(ii), all numbers in $[0, h-1]$ have frequency $q$ and there are $h$ elements in $[0, h-1]$ and hence the result.\n\\end{proof}\n\nTo test the efficiency of the proposed PRNG, we generated SPRNs $X_1 = \\gamma(52511, 1, N, A, 127, 0)$, $X_2 = \\gamma(52511, 1, N, A, 16, 0)$, where $A$ is the set given in Table~\\ref{FT}, $X_3 = \\gamma(101, 35, N, [0, 100], 6, 0)$ and $X_4 = \\gamma(3917, 301, N, [0, 3916], 3917, 0)$.\nThe histogram of $X_1$ is given in Fig.~\\ref{Histo1} which is approximately uniform, while by Lemma~\\ref{opt}\nthe histograms of $X_3$ and $X_4$ are uniformly distributed.\nFurthermore, the entropy of each of these SPRNs is listed in Table~\\ref{tests}.\nObserve that the newly generated SPRNs have entropy close to the optimal value.\nThus, by histogram and entropy test it is evident that the proposed method can generate highly random SPRNs.\nMoreover, the proposed SPRNs $X_4$ are compared with the SPRNs $\\mathcal{R}(3917,0,301,10,2)$ generated by the existing technique due to Hayat~\\cite{umar2} over ECs.\nBy Lemma~\\ref{opt} it holds that $f_{X_{4}}(w) = 1$ for each $w \\in [0, 3916]$, and by Fig.~\\ref{Histo2}, it is clear that the histogram of $X_4$ is more uniform as compared to that of the SPRNs $\\mathcal{R}(3917,0,301,10,2)$.\nBy Table~\\ref{tests}, the entropy of $X_4$ is also higher than that of 
$\\mathcal{R}(3917,0,301,10,2)$, and hence the proposed PRNG is better than the generator due to Hayat~\\cite{umar2}.\n\\begin{table}[htb!]\n\\caption{Comparison of entropy and period of different sequences of random numbers over ECs}\n\\label{tests}\n\\centering\n\\resizebox{\\columnwidth}{!}\n{\n\\bgroup\n\\def\\arraystretch{1.1}\n\\begin{tabular}{llllllll}\n\\hline\n\\multicolumn{1}{c}{Random sequence $X$} & Type of $A$ & H$(X)$ & $\\log_{2}(|\\Omega|)$ & Period & Optimal\\\\\n& & & & &period \\\\\n\\hline\n$\\gamma(52511,1,N,A,127,0)$&Table~\\ref{FT} \t&6.6076 &6.7814&256 &256 \\\\\n$\\gamma(52511,1,N,A,16,0)$&Table~\\ref{FT} &4 &4&256 &256 \\\\\n$\\gamma(101,35,N,A,6,0)$&$[0, 100]$ & 2.5846 &2.5850& 99&101\\\\\n$\\gamma(3917, 301, N, A, 3917, 0)$ & [0, 3916] & 11.9355 &11.9355&3917 &3917\\\\\n$\\mathcal{R}(3917,0,301,10,2)$ Ref.~\\cite{umar2}&\\multicolumn{1}{c}{--} & 10.9465&11.1536&3917&3917\\\\\n\\hline\n\\end{tabular}\n\\egroup\n}\n\\end{table}\n\\begin{figure}[htb!]\n\\includegraphics [scale=0.65]{R2.pdf}\n\\caption{The histogram of $\\gamma(52511,1,N,Y,127,0)$}\n\\label{Histo1}\n\\end{figure}\n\n\\begin{figure}[htb!]\n\\includegraphics [scale=0.65]{R1.pdf}\n\\caption{The histogram of $\\mathcal{R}(3917,0,301,10,2)$}\n\\label{Histo2}\n\\end{figure}\n\\subsection{Period Test}\nThe period test is another important test to analyze the randomness of a PRNG.\nA sequence $X = \\{a_{n}\\}$ is said to be periodic if it repeats itself after a fixed number of terms, i.e., $\\{a_{n+h}\\}=\\{a_{n}\\}$ for some positive integer $h$; the least such $h$ is called the period of the sequence $X$.\nThe maximum period that a sequence $X$ can have is $|X|$.\nThe sequence $X$ is said to be highly random if its period is long enough~\\cite{Marsaglia}.\nWe computed the period of the proposed SPRNs $X_i$, $i = 1, 2, \\ldots, 4$, and the SPRNs $\\mathcal{R}(3917,0,301,10,2)$ generated by the scheme proposed in \\cite{umar2}, and the results are listed in Table~\\ref{tests}.\nIt is evident from Table~\\ref{tests} that the proposed SPRNs have periods close to the optimal value.\nHence, the proposed PRNG can generate highly random numbers.\n\n\n\n\n\n\n\\subsection{Time and Space Complexity}\nIt is necessary for a good PRNG to have low time and space complexity.\nThe time and space complexity of the proposed PRNG and the generator proposed by Hayat et al.~\\cite{umar2} are compared in Table~\\ref{RNcom}.\nNote that the time and space complexity of the proposed PRNG depend on the size of the input set, while the time and space complexity of the PRNG due to Hayat et al.~\\cite{umar2} are $\\mathcal{O}(p^2)$ and $\\mathcal{O}(p)$, respectively, where $p$ is the underlying prime.\nHence, the proposed PRNG is more efficient as compared to the PRNG due to Hayat et al.~\\cite{umar2}.\n\\begin{table}[H]\n\\caption{Comparison of time and space complexity of different PRNGs over ECs}\n\\label{RNcom}\n\\centering\n\\resizebox{\\columnwidth}{!}\n{\\\n\\bgroup\n\\def\\arraystretch{1.2}\n{\\begin{tabular}{cccc}\n\\hline\n & Input size $m$& Ref.~\\cite{umar2} & Proposed method\\\\\n \\hline\n Time complexity &\\begin{tabular}{@{}l@{}} $m < p$ \\\\\n \t\t\t\t\t\t\t\t\t\t\t\t\t$m = p$ \\end{tabular} & $\\mathcal{O}(p^2)$ &$\\mathcal{O}(|A|\\log |A|)$\\\\\n \\hline\n Space complexity &\\begin{tabular}{@{}l@{}} $m < p$ \\\\\n \t\t\t\t\t\t\t\t\t\t\t\t\t$m = p$ \\end{tabular} & $\\mathcal{O}(p)$ &$\\mathcal{O}(|A|)$\\\\\n \\hline\n\n\\end{tabular}%\n\\egroup\n}}\n\\end{table}\n\n\n\n\\section{Conclusion}\\label{Con}\nA novel S-box generator and a PRNG are presented based on a 
special class of the ordered MECs.\nFurthermore, efficient algorithms are also presented to implement the proposed generators.\nThe security strength of these generators is tested by applying several well-known security tests.\nExperimental results and comparisons reveal that the proposed generators are capable of generating highly secure S-boxes and PRNs in low time and space complexity as compared to some of the existing commonly used cryptosystems.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/REGroup.pdf}\n \\caption{\\textbf{Overview of REGroup.}~\\textbf{R}ank-aggregating \\textbf{E}nsemble of \\textbf{G}enerative classifiers for \\textbf{ro}b\\textbf{u}st \\textbf{p}redictions. REGroup uses a pre-trained network, and constructs layer-wise generative classifiers modeled by a mixture distribution of the positive and negative pre-activation neural responses at each layer. At test time, an input sample's neural responses are tested with the generative classifiers to obtain ranking preferences of classes at each layer. These preferences are aggregated using \\textit{Borda count} based preferential voting theory to make the final prediction. \\textit{Note:} construction of the layer-wise generative classifiers is a one-time process.\n }\n \\label{fig:regroup}\n\\end{figure}\n\nDeep Neural Networks (DNNs) have shown outstanding performance on many computer vision tasks such as image classification~\\cite{krizhevsky2012imagenet}, speech recognition~\\cite{hinton2012deep}, and video classification~\\cite{karpathy2014large}. Despite showing superhuman capabilities in the image classification task~\\cite{he2015delving}, the existence of \\emph{adversarial examples} \\cite{szegedy2013intriguing} has raised questions on the reliability of neural network solutions for safety-critical applications. \n\nAdversarial examples are carefully manipulated adaptations of an input, generated with the intent to fool a classifier into misclassifying them. One of the reasons for the attention that adversarial examples garnered is the ease with which they can be generated for a given model by simply maximizing the corresponding loss function. This is achieved by using a gradient-based approach that finds a small perturbation at the input which leads to a large change in the output \\cite{szegedy2013intriguing}. This apparent instability in neural networks is most pronounced for deep networks that have an accumulation effect over the layers. This effect results in taking the small, additive, adversarial noise at the input and amplifying it to substantially noisy feature maps at intermediate layers that eventually influence the softmax probabilities enough to misclassify the perturbed input sample. This observation of amplification of input noise over the layers is not new, and has been pointed out in the past \\cite{szegedy2013intriguing}. The recent work by \\cite{xie2019feature} addresses this issue by introducing \\emph{feature denoising} blocks in a network and training them with adversarially generated examples. 
While we leverage this observation of noise amplification over the layers, our proposed approach \\emph{avoids any training or fine-tuning} of the model. Instead, we use a representative subset of training samples and their layer-wise pre-activation responses to construct mixture density based generative classifiers, which are then combined in an ensemble using ranking preferences. \n\nGenerative classifiers have achieved varying degrees of success as defense strategies against adversarial attacks. Recently, \\cite{Fetaya_ICLR2020} studied the class-conditional generative classifiers and concluded that it is impossible to guarantee robustness of such models. More importantly, they highlight the challenges in training generative classifiers using maximum likelihood based objective and their limitations w.r.t. discriminative ability and identification of out-of-distribution samples. While we propose to use generative classifiers, we avoid using likelihood based measures for making classification decisions. Instead, we use rank-order preferences of these classifiers which are then combined using a \\emph{Borda count}-based voting scheme. Borda counts have been used in collective decision making and are known to be robust to various manipulative attacks \\cite{rothe2019borda}. \n\nIn this paper, we present our defense against adversarial attacks on deep networks, referred to as \\emph{Rank-aggregating Ensemble of Generative classifiers for robust predictions} (REGroup). At inference time, our defense requires white-box access to a pre-trained model to collect the pre-activation responses at intermediate layers to make the final prediction. We use the training data to build our generative classifier models. Nonetheless, our strategy is simple, network-agnostic, does not require any training or fine-tuning of the network, and works well for a variety of adversarial attacks, even with varying degress of hardness. Consistent with recent trends, we focus only on the ImageNet dataset to evaluate the robustness of our defense and report performance superior to defenses that rely on adversarial training \\cite{kurakin2016adversarial} and random input transformation \\cite{raff2019barrage} based approaches. Finally, we present extensive analysis of our defense with two different architectures (ResNet and VGG) on different targeted and untargeted attacks. Our primary contributions are summarized below:\n\\begin{itemize\n \\item We present REGroup, a retraining free, model-agnostic defense strategy that leverages an ensemble of generative classifiers over intermediate layers of the model.\n \\item We model each layer-wise generative classifier as a simple mixture distribution of neural responses obtained from a subset of training samples. We discover that \\emph{both positive and negative} pre-activation values contain information that can help correctly classify adversarially perturbed samples. \n \\item We leverage the robustness inherent in Borda-count based consensus over the generative classifiers.\n \\item We show extensive comparisons and analysis on the ImageNet dataset spanning a variety of adversarial attacks.\n\\end{itemize}\n\n\n\n\n\\section{Related Work}\nSeveral defense techniques have been proposed to make neural networks robust to adversarial attacks. Broadly, we can categorize them into two approaches that: 1. Modify training procedure or modify input before testing; 2. 
Modify network or change hyper-parameters and optimization procedure.\n\n\n\\subsection{Modify Training\/Inputs During Testing} \n\nSome approaches of defenses in this category are mentioned below. \\textit{Adversarial training}~\\cite{zheng2016improving,moosavi2017universal,xie2020adversarial,shafahi2020universal}. \\textit{Data compression}~\\cite{bhagoji2018enhancing} suppresses the high-frequency components and presents an ensemble-based defense approach. \\textit{Data randomization} ~\\cite{wang2016learning,xie2017adversarial} based approaches apply random transformations to the input to defend against adversarial examples by reducing their effectiveness.\n\nPixelDefend \\cite{song2018pixeldefend} sets out to find the image with the highest probability within an $\\epsilon$- neighbourhood of the original image, thereby moving the image back towards distribution seen in training data. Defense-GAN \\cite{samangouei2018defense} tries to model the distribution of unperturbed images and at inference, it generates an image close to what was provided but without adversarial perturbations. These two methods use techniques to generate a clean version of the input and pass to the classifier. \n\\subsection{Modify Network\/Network Add-ons}\nDefenses under this category address the \\textit{detection} of adversarial attacks or cater to both \\textit{detection and correction} of prediction.\nThe aim of detection only defenses is to highlight if an example is adversarial and prevent it from further processing. These approaches include employing a detector sub-network~\\cite{metzen2017detecting}, training the main classifier with an outlier class~\\cite{grosse2017statistical}, using convolution filter statistics~\\cite{li2017adversarial}, or applying feature squeezing~\\cite{xu2017feature} to detect adversarial examples. However, all of these methods have shown to be ineffective against strong adversarial attacks~\\cite{carlini2017adversarial}\\cite{sharma2018bypassing}.\nFull defense approaches include applying defensive distillation~\\cite{papernot2016distillation}\\cite{papernot2017extending} to use the knowledge from the output of the network to re-train the original model and improve the resilience of a network to small perturbations. Another approach is to augment the network with a sub-network called Perturbation Rectifying Network (PRN) \\cite{akhtar2018defense} to detect the perturbations; if the perturbation is detected, then PRN is used to classify the input image. However, later it was shown that the Carlini \\& Wagner (C\\&W) attack successfully defeated the defensive distillation approach.\n\n\n\\subsection{ImageNet Focused Defense Approaches}\nA few approaches have been evaluated on the ImageNet dataset, most of which are based on input transformations or image denoising. Nearly all these defenses designed for ImageNet have failed a thorough evaluation, with a regularly updated list maintained at ~\\cite{robustml}. The approaches in ~\\cite{prakash2018deflecting} and \\cite{liao2018defense} claimed 81\\% and 75\\% accuracy respectively under adversarial attacks. But after a thorough evaluation~\\cite{athalye2018robustness} and accounting for obfuscated gradients~\\cite{athalye2018obfuscated}, the accuracy for both was reduced to 0\\%. Similarly, ~\\cite{xie2017mitigating} and ~\\cite{guo2017countering} claimed 86\\% and 75\\% respectively, but these were also reduced to 0\\%~\\cite{athalye2018obfuscated}. 
A different approach proposed in ~\\cite{kannan2018adversarial} claimed an accuracy 27.9\\% but later it was also reduced to 0.1\\% ~\\cite{engstrom2018evaluating}. For a comprehensive related work on attacks and defenses, we suggest reader to refer \\cite{chakraborty2018adversarial}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{REGroup~ Methodology}\n\nWell-trained deep neural networks have a hierarchical structure, where the early layers transform inputs to feature spaces capturing local or more generic patterns, while later layers aggregate this local information to learn more semantically relevant representations. In REGroup, we use many of the higher layers and learn class-conditional generative classifiers, which are simple mixture-distributions estimated from the pre-activation neural responses at each layer from a subset of training samples. An ensemble of these layer-wise generative classifiers is used to make the final prediction by performing a Borda count-based rank-aggregation. Ranking preferences have been used extensively in robust fitting problems in computer vision \\cite{chin2011accelerated,Chin_NIPS2009,tiwari2018dgsac}, and we show its effectiveness in introducing robustness in DNNs against adversarial attacks. \n\nFig. \\ref{fig:regroup} illustrates the overall working of REGroup. The approach has three main components: First, we use each layer as a generative classifier that produces a ranking preference over all classes. Second, each of these class-conditional generative classifiers are modeled using a mixture-distribution over the neural responses of the corresponding layer. Finally, the individual layer's class ranking preferences are aggregated using Borda count-based scoring to make the final predictions. We introduce the notation below and discuss each of these steps in detail in the subsections that follow. \n\n\n\n\\noindent\\textbf{Notation.} In this paper, we will {always} use $\\ell$, $i$ and $j$ for indexing the $\\ell^{th}$~ layer, $i^{th}$~ feature map and the $j^{th}$~ input sample respectively. The {true} and {predicted} class label will be denoted by $y$~ and $\\widehat{y}$~ respectively.\n\nA classifier can be represented in a functional form as $\\widehat{y}$~$=\\mathcal{F}(x)$, it takes an input $x$ and predicts its class label $\\widehat{y}$~. We define $\\boldsymbol{\\phi}^{\\ell i}$ as the $\\ell^{th}$~ layer's $i^{th}$~ {pre-activation feature map}, i.e., the neural responses \\emph{before} they pass through the activation function. For convolutional layers, this feature map $\\boldsymbol{\\phi}^{\\ell i}$ is a 2D array, while for a fully connected layer, it is a scalar value. \n\n\n\\subsection{DNN Layers as Generative Classifiers}\n\n\\label{sec:LGC}\nWe use the highest $k$ layers of a DNN as generative classifiers that use the pre-activation neural responses to produce a ranking preference\\footnote{A rank is assigned to each class based on a score. In the case of ImageNet dataset, the class with rank-1 is most preferred\/likely class, while rank-1000 is the least preferred\/likely class} over all classes. 
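In practice, these pre-activation responses can be read off a \\textit{pre-trained} network without modifying it, e.g. with forward pre-hooks on the ReLU modules in PyTorch. The following minimal sketch is only illustrative (the choice of VGG-19 and all variable names are ours, not part of REGroup; \texttt{pretrained=True} is the older torchvision interface):
\\begin{verbatim}
# Illustrative sketch: collect pre-activation responses (the inputs
# to each ReLU) from a pre-trained VGG-19 with forward pre-hooks.
import torch
import torchvision.models as models

model = models.vgg19(pretrained=True).eval()   # older torchvision API;
                                               # newer versions use weights=
pre_acts = []                                  # one entry per ReLU call

def grab(module, inputs):
    # inputs is a tuple of positional args; inputs[0] is the
    # pre-activation tensor feeding this ReLU
    pre_acts.append(inputs[0].detach().clone())

for m in model.modules():
    if isinstance(m, torch.nn.ReLU):
        m.register_forward_pre_hook(grab)

x = torch.randn(1, 3, 224, 224)                # stand-in for a test image
with torch.no_grad():
    _ = model(x)
# pre_acts[l][0, i] now plays the role of phi^{l i} for this sample
\\end{verbatim}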
The layer-wise generative classifiers are modeled as a class-conditional mixture distribution, which is estimated using only a \\textit{pre-trained} network and a small subset $X$ of the training data.\n\nLet $X$ contain only correctly classified {training samples}\\footnote{We took 50,000 out of $\\sim$ 1.2 millions training images from ImageNet dataset, 50 per class.}, which we can further divide into $M$ subsets, one for each class i.e $X =\\{ \\cup_{y=1}^{M} X_{y}\\}$, where $X_{y}$ is the subset containing samples that have labels $y$.\n\n\\subsubsection{Layerwise Neural Response Distributions}\nOur preliminary observations indicated that while the ReLU activations truncate the negative pre-activations during the forward pass, these values still contain semantically meaningful information. Our ablative studies in Fig. \\ref{fig:ablation} confirm this observation and additionally, on occasion, we find that the negative pre-activations are complementary to the positive ones. Since the pre-activation features are real-valued, we compute the features $\\boldsymbol{\\phi}_j^{\\ell i}$ for the $j^{th}$~ sample $x_j$, and define its positive ($P^{\\ell i}_j$) and negative ($N^{\\ell i}_j$) response accumulators as $P^{\\ell i}_j = \\sum\\max(0,\\boldsymbol{\\phi}_j^{\\ell i})$, $N^{\\ell i}_j=\\sum\\max(0,-\\boldsymbol{\\phi}_j^{\\ell i})$. \n\nFor convolutional layers, these accumulators represent the overall strength of positive and negative pre-activation responses respectively, when aggregated over the spatial dimensions of the $i^{th}$~ feature map of the $\\ell^{th}$~ layer. On the other hand, for the linear layers, the accumulation becomes trivial with each neuron having a scalar response $\\boldsymbol{\\phi}_j^{\\ell i}$. We can now represent the $\\ell^{th}$~ layer by the positive and negative response accumulator vectors denoted by $P^\\ell_j$ and $N^\\ell_j$ respectively. We normalize these vectors and define the layer-wise probability mass function (PMF) for the positive and negative responses as $\\mathbb{P}^{\\ell}_j=\\frac{P^\\ell_j}{||P^\\ell_j||_1}$ and $\\mathbb{N}^{\\ell}_j=\\frac{N^\\ell_j}{||N^\\ell_j||_1}$ respectively. \n\nOur interpretation of $\\mathbb{P}^{\\ell}_j$ and $\\mathbb{N}^{\\ell}_j$ as a PMF could be justified by drawing an analogy to the softmax output, which is also interpreted as a PMF. However, it is worth emphasizing that we chose the linear rescaling of the accumulator vectors rather than directly applying a softmax normalization. By separating out the positive and negative accumulators, we obtain two independent representations for each layer, which is beneficial to our rank-aggregating ensemble discussed in the following sections. A softmax normalization over a feature map comprising of positive and negative responses would have entirely suppressed the negative responses, discarding all its constituent semantic information. An additional benefit of the linear scaling is its simple computation. Algorithm \\ref{alg:LSS} summarizes the computation of the layer-wise PMFs for a given training sample. 
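In code, the accumulation and normalization steps above amount to the following small routine (an illustrative sketch of ours, not the authors' implementation; the constant \texttt{eps} plays the role of the stabilizing constant $\\delta$ added in Algorithm~\\ref{alg:LSS} below):
\\begin{verbatim}
# Illustrative sketch: layer-wise positive/negative PMFs of one sample.
import numpy as np

def layer_pmfs(phi, eps=1e-6):
    """phi: pre-activation responses of one layer for one sample,
    shape (C, H, W) for a convolutional layer or (C,) for a linear
    layer.  Returns the PMFs (P_l, N_l), each of shape (C,)."""
    phi = phi.reshape(phi.shape[0], -1)          # flatten spatial dims
    P = np.maximum(phi, 0.0).sum(axis=1) + eps   # positive accumulator
    N = np.maximum(-phi, 0.0).sum(axis=1) + eps  # negative accumulator
    return P / P.sum(), N / N.sum()              # L1 normalization

# e.g. P_l, N_l = layer_pmfs(pre_acts[l][0].numpy()) with hooks as above
\\end{verbatim}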
\n\n\\begin{algorithm}[!htb]\n\\SetAlgoLined\n\\textbf{Input:} pre-activation features $\\boldsymbol{\\phi}^{\\ell i}_j\\in\\mathbb{R}^{H\\times W}$ of the sample $x_j$\\\\\n\\For{ $\\ell \\in [1..n]$}{\n $P^{\\ell i}_j = \\sum\\max(0,\\boldsymbol{\\phi}_j^{\\ell i}),~~~~~~\\forall~ i$~~~ (sum over H, W) \\\\\n $N^{\\ell i}_j=\\sum\\max(0,-\\boldsymbol{\\phi}_j^{\\ell i}),~~~~\\forall~ i$~~ (sum over H, W) \\\\\n \n}\n$P^{\\ell}_j \\leftarrow P^{\\ell}_j+\\delta,$~~~~~$N^{\\ell}_j \\leftarrow N^{\\ell}_j+\\delta$ \\\\\n$\\mathbb{P}_{j}^{\\ell i} \\leftarrow \\frac{P^{\\ell i}_j}{\\sum_{i} P^{\\ell i}_j},$~~~~~$\\mathbb{N}_{j}^{\\ell i} \\leftarrow \\frac{N^{\\ell i}_j}{\\sum_{i} N^{\\ell i}_j}$~~~(PMFs) \n\n\\caption{Layerwise PMF of neural responses. $H\\times W$ represents the spatial dimensions of the pre-activation features. For the $\\ell^{th}$~ convolutional layer the feature map dimensions are $H\\times W = r^{\\ell}\\times s^{\\ell}$, and for linear layers each neuron output has $H\\times W = 1\\times 1$. \n}\n\\label{alg:LSS}\n\\end{algorithm}\n\n\\subsubsection{Layerwise Generative Classifiers} \nWe model the layerwise generative classifiers for class $y$ as a class-conditional mixture of distributions, with each mixture component being the pair of PMFs $\\mathbb{P}^\\ell_j$ and $\\mathbb{N}^\\ell_j$ of a training sample $x_j\\in X_y$. The generative classifiers corresponding to the positive and negative neural responses are then defined as the following mixtures of PMFs\\\\\n\\begin{equation}\n \\mathbb{C}^{+\\ell}_y = \\sum_{j:x_j \\in X_{y}}\\lambda_j\\mathbb{P}_{j}^{\\ell},\\qquad\n \\mathbb{C}^{-\\ell}_y = \\sum_{j:x_j \\in X_{y}}\\lambda_j\\mathbb{N}_{j}^{\\ell}\n\\end{equation}\n \nwhere the weights $\\lambda_j$ are nonnegative and add up to one in each equation. We choose $\\lambda_j$ proportional to the softmax probability that the network assigns to the training sample $x_j$; the small constant $\\delta$ in Algorithm \\ref{alg:LSS} is added for numerical stability. Using the subset of training samples $X$, we construct the class-conditional mixture distributions $\\mathbb{C}^{+\\ell}_y$ and $\\mathbb{C}^{-\\ell}_y$ at each layer $\\ell$ only once. At inference time, we input a test sample $x_j$, from the test set $\\mathcal{T}$, to the network and compute the PMFs $\\mathbb{P}^{\\ell}_j$ and $\\mathbb{N}^{\\ell}_j$ using Algorithm \\ref{alg:LSS}. As the test representation is a PMF and each generative classifier is a mixture of PMFs, we simply use the KL-Divergence between the classifier model $\\mathbb{C}^{+\\ell}_y$ and the test sample PMF $\\mathbb{P}^{\\ell}_j$ as a classification score,\n\\begin{equation}\n P_{KL}(\\ell,y) = \\sum_{i} \\mathbb{C}^{+\\ell i}_y \\log \\Bigg(\\frac{\\mathbb{C}^{+\\ell i}_y}{\\mathbb{P}^{\\ell i}_j}\\Bigg),\\forall y\\in\\{1,\\!\\ldots,\\!M\\}\n \\label{eq:klp}\n\\end{equation}\nand similarly for the negative PMFs\n\\begin{equation}\n N_{KL}(\\ell,y) = \\sum_{i} \\mathbb{C}^{-\\ell i}_y \\log \\Bigg(\\frac{\\mathbb{C}^{-\\ell i}_y}{\\mathbb{N}^{\\ell i}_j}\\Bigg),\\forall y \\in\\{1,\\!\\ldots,\\!M\\} \n \\label{eq:kln}\n\\end{equation}\n\nWe use a simple classification rule and select the predicted class $\\widehat{y}$~ as the one with the smallest KL-Divergence from the test sample PMF. However, rather than identifying $\\widehat{y}$~, at this stage we are only interested in rank-ordering the classes, which we simply achieve by sorting the KL-Divergences (Eqns. 
(\\ref{eq:klp}) and (\\ref{eq:kln})) in ascending order. The resulting ranking preferences of classes for the $\\ell^{th}$~ layer are given below in Eqns. \\eqref{eq:rankvec_pos} and \\eqref{eq:rankvec_neg} respectively. Where, $R^{\\ell y}_{+}$ is the rank (position of $y^{th}$ class in the ascending order of KL-Divergences in $P_{KL}$) of $y^{th}$ class in the $\\ell^{th}$~ layer preference list $R^{\\ell}_{+}$. \n\\begin{eqnarray}\nR^{\\ell}_{+}=[R^{\\ell 1}_{+},R^{\\ell 2}_{+},...,R^{\\ell y}_{+},...,R^{\\ell M}_{+}] \\label{eq:rankvec_pos} \\\\\nR^{\\ell}_{-}=[R^{\\ell 1}_{-},R^{\\ell 2}_{-},...,R^{\\ell y}_{-},...,R^{\\ell M}_{-}]\n\\label{eq:rankvec_neg}\n\\end{eqnarray}\n\n\n\\subsection{Robust Predictions with Rank Aggregation}\nRank aggregation based {preferential voting} for making group decisions is widely used in selecting a winner in a democratic setup~\\cite{rothe2019borda}. The basic premise of preferential voting is that $n$ voters are allowed to rank $m$ candidates in the order of their preferences. The rankings of all $n$ voters are then aggregated to make a final prediction.\n\n{Borda count}~\\cite{black1958theory} is one of the approaches for preferential voting that relies on aggregating the rankings of all the voters to make a collective decision~\\cite{rothe2019borda,kahng2019statistical}. The other popular voting strategies to find a winner out of $m$ different choices include {Plurality voting}~\\cite{van1992borda}, and {Condorcet winner}~\\cite{young1988condorcet}. In Plurality voting, the winner would be the one who gets the maximum fraction of votes, while Condorcet winner is the one who gets the majority votes.\n\n\n\n\\subsubsection{Rank Aggregation Using Borda Count}\n\\textit{Borda count} is a generalization of the majority voting. In a two-candidates case it is equivalent to majority vote. The \\textit{Borda count} for a candidate is the sum of the number of candidates ranked below it by each voter. In our setting, while processing a test sample $x_j\\in\\mathcal{T}$, \\textit{every layer} acts as two independent voters based on $\\mP^\\ell$ and $\\mN^\\ell$. The number of classes i.e $M$ is the number of candidates. The Borda count for the $y^{th}$~ class at the $\\ell^{th}$~ layer is denoted by $B^{\\ell y}=B^{\\ell y}_{+}+B^{\\ell y}_{-}$, where $B^{\\ell y}_{+}$ and $B^{\\ell y}_{-}$ are the individual Borda count of both the voters and computed as shown in eq. \\eqref{eq:indv_bordacount}.\n\\begin{equation}\nB^{\\ell y}_{+} = (M - R^{\\ell y}_{+}),~~~~~~~\nB^{\\ell y}_{-} = (M - R^{\\ell y}_{-}) \n\\label{eq:indv_bordacount}\n\\end{equation}\n\n\n\\subsubsection{Hyperparameter settings} \nWe aggregate the Borda counts of highest $k$ layers of the network, which is the only hyperparameter to set in REGroup. Let $B^{:ky}$ denote the aggregated Borda count of $y^{th}$~ class from the last $k$ layers irrespective of the type (convolutional or fully connected). Here, $n$ is the total number of layers. 
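Before stating the final aggregation rule, the per-layer scoring, ranking and Borda-count steps of Eqns.~(\\ref{eq:klp})--(\\ref{eq:indv_bordacount}) can be sketched as follows for a single test sample (our own illustrative code and names, not the authors' implementation; the aggregation over the last $k$ layers is given in the display that follows):
\\begin{verbatim}
# Illustrative sketch: KL scores, class ranks and Borda counts at layer l.
import numpy as np

def kl(c, q):
    # KL-Divergence D(c || q) between two PMFs over the C units of a layer
    return float(np.sum(c * np.log(c / q)))

def layer_borda(C_pos, C_neg, P_l, N_l):
    """C_pos, C_neg: (M, C) arrays whose rows are the class-conditional
    mixtures; P_l, N_l: positive/negative PMFs of the test sample."""
    M = C_pos.shape[0]
    s_pos = np.array([kl(C_pos[y], P_l) for y in range(M)])
    s_neg = np.array([kl(C_neg[y], N_l) for y in range(M)])
    # rank 1 = smallest divergence = most preferred class
    R_pos = s_pos.argsort().argsort() + 1
    R_neg = s_neg.argsort().argsort() + 1
    return (M - R_pos) + (M - R_neg)   # per-class Borda counts B^{l y}
\\end{verbatim}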
The final prediction would be the class with maximum aggregated Borda count.\n\\begin{align}\n\\nonumber B^{:ky} & = \\sum_{\\ell=n-k+1}^{n} B^{\\ell y}\\\\\n\\nonumber & = \\sum_{\\ell=n-k+1}^{n} B^{\\ell y}_{+}+B^{\\ell y}_{-},~~\\forall y\\in\\{1..M\\}\\\\\n\\widehat{y}&= \\textit{argmax}_{y} ~~B^{:ky}\n \\label{eq:agg_borda}\n\\end{align}\nTo determine the value of $k$, we evaluate REGroup~ on 10,000 \\textit{correctly classified} samples from the ImageNet Validation set at {each layer}, using {per layer} Borda count i.e $ \\widehat{y}= \\arg\\max_{y} ~~B^{\\ell y}$. We select $k$ to be the number of later layers at which we get at-least $75\\%$ accuracy. This can be viewed in the context of the confidence of individual layers on discriminating samples of different classes. We follow the above heuristic and found $k=5$ for both the architectures ResNet-50 and VGG-19, which we use in all our experiments. An ablation study with all possible values of $k$ is included in section \\ref{sec:ablation}\n\n\n\\section{Experiments}\nIn this section, we evaluate robustness of REGroup~ against state-of-the-art attack methods. We follow the recommendations on defense evaluation in~\\cite{carlini2019evaluating}. \\\\\n\\noindent \\textbf{Attack methods.} We consider attack methods in the following two categories: \\textit{gradient-based} and \\textit{gradient-free}. \\\\ \n\\underline{{Gradient-Based Attacks}}. Within this category, we consider two variants, \\textit{restricted} and \\textit{unrestricted} attacks. The restricted attacks generate adversarial examples by searching an adversarial perturbations within the bound of $L_p$ norm, while unrestricted attacks generate adversarial example by manipulating image-based visual descriptors. Due to restriction on the perturbation the adversarial examples generated by restricted attacks are similar to the clean original image, while unrestricted attacks generate natural-looking adversarial examples, which are far from the clean original image in terms of $L_p$ distance. We consider the following, \\textit{Restricted attacks}: PGD~\\cite{madry2017towards} , DeepFool~\\cite{moosavi2016deepfool}, C\\&W~\\cite{carlini2017towards} and Trust Region~\\cite{yao2019trust}, and \\textit{Unrestricted attack}: cAdv~\\cite{bhattad2020unrestricted} semantic manipulation attack. An example of cAdv is shown in Fig.~\\ref{fig:cadv}. \\\\\n\n\\noindent \\underline{{Gradient-Free Attacks}}. The approaches in this category does not have access to the network weights. We consider following attacks: SPSA~\\cite{uesato2018adversarial}, Boundary~\\cite{brendel2017decision} and Spatial~\\cite{engstrom2019exploring}. Refer supplementary for the attack specific hyper-parameters detail.\n\\begin{figure}[!htb]\n\\centering\n\\begin{tabular}{c}\n \\includegraphics[width=7.5cm]{figures\/cars+pretzel.pdf} \n\\end{tabular}\n\\caption{cAdv~\\cite{bhattad2020unrestricted} adversarial examples}\n\\label{fig:cadv}\n\\end{figure}\n\n\\noindent \\textbf{Network architectures.} We consider ResNet-50\\footnote{https:\/\/download.pytorch.org\/models\/resnet50-19c8e357.pth} and VGG-19\\footnote{https:\/\/download.pytorch.org\/models\/vgg19-dcbb9e9d.pth} architectures, {pre-trained} on ImageNet dataset.\\\\\n\\noindent\\textbf{Datasets.} We present our evaluations, comparisons and analysis only on ImageNet~\\cite{imagenet} dataset. We use the subsets of full ImageNet validation set as described in Tab. \\ref{tbl:dset}. 
Note: V10K, V2K and V10C would be different for ResNet-50 and VGG-19, since an image classified correctly by ResNet-50 need not be classified correctly by the VGG-19. \n\\begin{table}[h]\n\\centering\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{l|l}\n\\hline\nDataset & Description \\\\\n\\hline\nV50K & Full ImageNet validation set with 50000 images. \\\\\nV10K & A subset of 10000 correctly classified images from V50K set. 10 Per class. \\\\\nV2K & A subset of 2000 correctly classified images from V50K set. 2 Per class.\\\\\nV10C & A subset of correctly classified images of 10 sufficiently different classes.\\\\\n\\hline\n\\end{tabular}%\n}\n\\caption{Dataset used for evaluation and analysis.}\n\\label{tbl:dset}\n\\end{table}\n\n\n\\subsection{Performance on Gradient-Based Attacks}\n\n\\noindent \\textbf{Comparison with adversarial-training\/ fine-tuning.} \nWe evaluate REGroup~on clean samples as well as adversarial examples generated using PGD $(\\epsilon = 16)$ from V50K dataset, and compare it with prior state-of-the-art works. The results are reported in Tab. \\ref{tbl:sota}, and we see that REGroup~outperforms the state-of-the-art input transformation based defense BaRT~\\cite{raff2019barrage}, both in terms of the clean and adversarial samples (except in the case of Top-1 accuracy with $\\hat{k}=10$, which is the number of input transformations used in BaRT). We see that while our performance on clean samples decreases when compared to adversarial training (Inception v3), it improves significantly on adversarial examples with a high $\\epsilon=16$. While our method is not directly comparable with adversarially trained Inception v3 and ResNet-152, because the base models are different, a similar decrease in the accuracy over clean samples is reported in their paper. The trade-off between robustness and the standard accuracy has been studied in \\cite{dohmatob2018limitations} and \\cite{tsipras2018robustness}.\n\nAn important observation to make with this experiment is, if we set aside the base models of ResNets and compare Top-1 accuracies on clean samples of full ImageNet validation set, our method (REGroup) without any \\textit{adv-training\/fine-tuning} either outperforms or performs similar to the state-of-the-art \\textit{adv-training\/fine-tuning} based methods \\cite{raff2019barrage, xie2019feature}.\n\\begin{table}[!htb]\n\\centering\n\n\\scriptsize\n\\begin{tabular}{lcccc}\n\\toprule \n ({Dataset used:} \\text{V50K}). & \\multicolumn{2}{c}{Clean Images} &\\multicolumn{2}{c}{Attacked Images}\n \\vspace{0.01cm} \\\\\n \n \\cline{2-5}\n \\vspace{0.02cm}\n Model & Top-1 & Top-5 & Top-1 & Top-5 \\vspace{0.01cm}\\\\\n \\toprule\n ResNet-50 & 76 & 93 & 0.0 & 0.0 \\\\\n Inception v3 & 78 & 94 & 0.7 & 4.4 \\\\\n ResNet-152 & 79 & 94 & - & - \\\\\n \\hline\n Inception v3 w\/Adv. Train & 78 & 94 & 1.5 & 5.5 \\\\\n ResNet-152 w\/Adv. Train & 63 & - & 45 & - \\\\\n ResNet-152 w\/Adv. Train w\/ denoise & 66 & - & 49 & - \\\\\n ResNet-50-BaRT, $\\hat{k}=5$ & 65 & 85 & 16 & 51 \\\\\n ResNet-50-BaRT, $\\hat{k}=10$ & 65 & 85 & 36 & 57 \\\\\n \\hline \n ResNet-50-REGroup & 66 & 86 & {22} & {65}\\\\\n \\toprule\n\\end{tabular}%\n\\caption{\\textbf{Comparison with adversarially trained and fine-tuned classification models.} Top-1 and Top-5 classification accuracy (\\%) of adversarial trained (Inception V3 \\cite{kurakin2016adversarial} and ResNet-152 ~\\cite{xie2019feature}) and fine-tuned (ResNet-50 BaRT~\\cite{raff2019barrage}) classification models. 
Clean Images are the non-attacked original images. The results are divided into three blocks: the top block includes the original networks, the middle block includes defense approaches based on adversarial re-training\/fine-tuning of the original networks, and the bottom block is our defense \\textit{without re-training\/fine-tuning}. Results of the competing methods are taken from their respective papers. `-' indicates that the results were not provided in the respective papers. }\n\\label{tbl:sota}\n\\end{table}\n\n\n\n\\noindent \\textbf{Performance w.r.t. PGD Adversarial Strength.}\nWe evaluate REGroup~ w.r.t. the maximum perturbation of the adversary. The results are reported in Fig.~\\ref{fig:pgd_strength}(a).\nREGroup~ outperforms both adversarial training \\cite{kurakin2016adversarial} and BaRT~\\cite{raff2019barrage}.\nAdversarial training and BaRT have shown protection against PGD adversarial attacks with maximum perturbation strengths of $\\epsilon=16$ and $\\epsilon=32$ respectively; we additionally show results with $\\epsilon=40$ on the full ImageNet validation set. \nWe also note that our defense's accuracy decreases strictly with increasing perturbation strength. This is in accordance with \\cite{carlini2019evaluating}, where transitioning from a clean image to noise should yield a downward slope in accuracy, else there could be some form of gradient masking involved. While it may seem that $\\epsilon=40$ is a large perturbation budget that would destroy the object information in the image completely, we emphasize that this is not the case for large images. A comparison of PGD examples generated with $\\epsilon=40$ using CIFAR-10 (32 $\\times$ 32) and ImageNet (224 $\\times$ 224) images is shown in Fig.~\\ref{fig:pgd_strength}(b). \\\\\n\n\\begin{figure}[!htb]\n \\centering\n \n \\includegraphics[width=8cm]{figures\/bart_comparison1+.pdf}\n \\caption{\\textbf{Top-1 and Top-5 accuracy (\\%) w.r.t. PGD adversarial strength.} Comparison with the adversarial training based method~\\cite{kurakin2016adversarial} and the fine-tuning method using random input transformations (BaRT) \\cite{raff2019barrage} with Expectation Over Transformation (EOT) steps 10 and 40, against the PGD perturbation strength ($\\epsilon$). The results of the competing methods are taken from their respective papers. Dataset used: \\text{V50K}. 
}\n \\label{fig:pgd_strength}\n\\end{figure}\n\n\\begin{table}[h]\n\\centering\n\\setlength{\\tabcolsep}{1.5pt}\n\\scriptsize\n\\begin{tabular}{l|ccc|ccc|ccc}\n\\hline\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{3}{c|}{ResNet-50} & \\multicolumn{3}{c}{VGG-19} \\\\\n\n \\cline{5-10} \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{UN \/} &\n \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{SMax} & \\multicolumn{1}{c|}{REGroup} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{SMax} & \\multicolumn{1}{c}{REGroup} \\\\\n \\cline{5-10} \\multicolumn{1}{c}{} & \n \\multicolumn{1}{c}{Data} & \n \\multicolumn{1}{c}{TA \/ HC} & \n \\multicolumn{1}{c}{$\\epsilon$} & \\multicolumn{1}{c}{\\#S} & \\multicolumn{1}{c}{T1(\\%)} & \\multicolumn{1}{c|}{T1(\\%)} & \\multicolumn{1}{c}{\\#S} & \\multicolumn{1}{c}{T1(\\%)} & \\multicolumn{1}{c}{T1(\\%)} \\\\\n \\cline{1-2}\n \\cline{3-10}\nClean &V10K&-- & -- & 10000 & {100} & 88 &10000 & {100} & 76\n\\\\\nClean &V2K &--&-- &2000 & 100 &86 &2000 & 100 & 72 \\\\\nClean &V10C &--&-- &417 & 100 &84 &392 & 100 & 79 \\\\\n\\hdashline\nPGD &V10K& UN &4 ($L_\\infty$) & 9997 & 0 & {48} &9887 & 0 & {46} \\\\\nDFool &V10K&UN & 2 ($L_2$) & 9789 & 0 & {61} &9939 & 0 & {55} \\\\\nC\\&W &V10K &UN &4 ($L_2$) & 10000 & 0 & {40} &10000 & 0 & {38} \\\\\nTR &V10K&UN &2 ($L_\\infty$) & 10000 & 0 & {41} & 9103 & 0 & {45} \\\\\ncAdv &V10C&UN &-- & 417 & 0 & {37} & 392 & 0 & {18} \\\\\n\\hdashline\n\nPGD &V2K &TA & ($L_\\infty$)&2000 &0 &47 &2000 &0 & 31 \\\\\nC\\&W &V2K &TA &($L_2$) &2000 & 0 &46 & 2000 &0 & 38 \\\\\n\n\\hdashline\nPGD &V2K &UN+HC& ($L_\\infty$)& 2000 & 0 & 21 & 2000 & 0 & 19 \\\\\nPGD &V2K &TA+HC& ($L_\\infty$) & 2000 & 0 & 23 & 2000 & 0 & 17 \\\\\n\\hline\n\\end{tabular}%\n\\caption{\\textbf{Performance on Gradient-Based Attacks.} Comparison of Top-1 classification accuracy between SoftMax (SMax) and REGroup~ based final classification. UN and TA indicates, un-targeted and targeted attacks respectively. The +HC indicates adversarial examples are generated with high-confidence ( $>90\\%$) constraint, in this case $\\epsilon$ can be any value that satisfies the HC criteria. For targeted attack we select a target class uniformly at random from the 1000 classes leaving out the true class. $\\#S$ is the number of images for which the attacker is successfully able to generate adversarial examples using the respective attack models and the accuracies are reported with respect to the $\\#S$ samples, hence the 0\\% accuracies with the SoftMax (SMax). Since $\\#S$ is different for several attacks, therefore, the performance may not be directly comparable \\emph{across} different attacks. `--' indicate the information is not-applicable. For data description refer Tab. \\ref{tbl:dset}.}\n\\label{tbl:grad-based}\n\\end{table}\n\n\n\\noindent \\textbf{Performance on Un-Targeted Attacks.}\nWe evaluate REGroup~ on various untargeted attacks and report results in Tab. \\ref{tbl:grad-based}. The perturbation budgets ($\\epsilon$) and dataset used for the respective attacks are listed in the table. With the exception of the maximum perturbation allowed, we used default parameters given by FoolBox~\\cite{rauber2017foolbox}. Due to space limitations, the attack specific hyper-parameters detail are included in the supplementary. We observe that the performance of our defense is quite similar for both the models employed. This is due to the attack-agnostic nature of our defense. 
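For reference, the untargeted $L_\\infty$ PGD iteration underlying several of these experiments can be written as a short stand-alone routine. The experiments themselves rely on FoolBox's implementation; the sketch below, with illustrative default values, is only meant to make the update explicit:
\\begin{verbatim}
# Illustrative sketch of untargeted L_inf PGD (experiments use FoolBox).
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=4/255, step=0.01, iters=40):
    # eps = 4/255 corresponds to epsilon=4 on a 0-255 scale (illustrative)
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()          # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # L_inf projection
            x_adv = x_adv.clamp(0, 1)                   # valid image range
        x_adv = x_adv.detach()
    return x_adv
\\end{verbatim}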
We achieve 48\\% accuracy (ResNet-50) for PGD attack using our defense which is significant given that PGD is considered to be one of the strongest attacks among the class of first order adversaries. \\\\\n\n\n\n\n\\noindent \\textbf{Performance on Unrestricted, Untargeted Semantic Manipulation Attacks.}\nWe consider V10C dataset for cAdv attack. We use the publicly released source code by the authors. Specifically we use $\\text{cAdv}_{4}$ variant with the parameters suggested by the authors. The results are reported in Tab. \\ref{tbl:grad-based}. \\\\\n\n\\noindent \\textbf{Performance on Targeted Attacks.}\nWe consider \\text{V2K}~dataset for targeted attacks and report the performance on PGD and C\\&W targeted attacks in Tab. \\ref{tbl:grad-based}. Target class for such attacks is chosen uniformly at random from the 1000 ImageNet classes apart from the original(ground-truth) class. \\\\\n\n\\noindent \\textbf{Performance on PGD attack with High Confidence.}\nWe evaluate REGroup~on PGD examples on which the network makes highly confident predictions using SoftMax. We generate un-targeted and targeted adversarial examples using PGD attack with a constraint that the network's confidence of the prediction of adversarial examples is \\text{at-least 90}\\%. For this experiment we do not put constraint on the adversarial perturbation i.e $\\epsilon$. Results are reported in Tab. \\ref{tbl:grad-based}.\n\n\n\n\n\\subsection{Performance on Gradient-Free Attacks}\n\nSeveral studies \\cite{athalye2018obfuscated}, \\cite{papernot2017practical} have observed a phenomenon called\\textit{ gradient masking}. This phenomenon occurs when a practitioner unintentionally or intentionally proposes a defense which does not have meaningful gradients, either by reducing them to small values (vanishing gradients), removing them completely (shattered gradients) or adding some noise to it (stochastic gradient). \n\nGradient masking based defenses hinder the gradient computation and in turn inhibit gradient-based attacks, thus providing a false sense of security. Therefore, to establish the robustness of a defense against adversarial attacks in general, it is important to rule out that a defense relies on gradient masking. \n\nTo ensure that REGroup~ is not masking the gradients we follow the standard practice \\cite{pang2019rethinking} \\cite{zhang2020clipped} and evaluate on strong gradient-free SPSA~\\cite{uesato2018adversarial} attack. In addition to SPSA, we also show results on two more gradient-free attacks, Boundary~\\cite{brendel2017decision} and Spatial~\\cite{engstrom2019exploring} attack. The results are reported in Tab. \\ref{tbl:grad-free}. 
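To make precise what ``gradient-free'' means here: SPSA never queries the network's gradients and instead estimates them from pairs of loss evaluations along random sign directions. A minimal sketch of the estimator (ours, for illustration only):
\\begin{verbatim}
# Illustrative sketch of the SPSA gradient estimate (no backpropagation).
import torch

def spsa_gradient(loss_fn, x, delta=0.01, n_samples=64):
    g = torch.zeros_like(x)
    for _ in range(n_samples):
        v = torch.randint_like(x, 0, 2) * 2 - 1    # Rademacher +/-1
        # for +/-1 perturbations, dividing by v equals multiplying by v
        g += (loss_fn(x + delta * v) - loss_fn(x - delta * v)) / (2 * delta) * v
    return g / n_samples
\\end{verbatim}
The estimated gradient can then be used in place of the true gradient inside a PGD-style update.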
\n\n\\begin{table}[!htb]\n\\centering\n\\setlength{\\tabcolsep}{1.1pt}\n\\scriptsize\n\\begin{tabular}{l|ccc|ccc|ccc}\n\\hline\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{3}{c|}{ResNet-50} & \\multicolumn{3}{c}{VGG-19} \\\\\n\n \\cline{5-10} \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{} &\n \\multicolumn{1}{c}{UN \/} &\n \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{SMax} & \\multicolumn{1}{c|}{REGroup} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{SMax} & \\multicolumn{1}{c}{REGroup} \\\\\n \\cline{5-10} \\multicolumn{1}{c}{} & \n \\multicolumn{1}{c}{Data} & \n \\multicolumn{1}{c}{TA \/ HC} & \n \\multicolumn{1}{c}{$\\epsilon$} & \\multicolumn{1}{c}{\\#S} & \\multicolumn{1}{c}{T1(\\%)} & \\multicolumn{1}{c|}{T1(\\%)} & \\multicolumn{1}{c}{\\#S} & \\multicolumn{1}{c}{T1(\\%)} & \\multicolumn{1}{c}{T1(\\%)} \\\\\n \\cline{1-2}\n \\cline{3-10}\nSPSA &V10K&UN &4 ($L_{\\infty}$) & 4911 & 0 & {71} &5789 & 0 & {58} \\\\\nBoundary &V10K&UN& 2 ($L_{2}$) & 10000 & 0 & {50} & 10000 & 0 & {50} \\\\\nSpatial &V10K&UN & 2 ($L_{2}$) & 2624 & 0 & {36} &2634 & 0 & {30} \\\\\n\\hline\n\\end{tabular}%\n\\caption{\\textbf{Performance on Gradient-Free Attacks.} Top-1 ( \\%) classification accuracy comparison between SoftMax (SMax) and REGroup. Legends are same as in Tab. \\ref{tbl:grad-based}. }\n\\label{tbl:grad-free}\n\\end{table}\n\nThe consistent superior performance on both gradient-based (both restricted and unrestricted) and gradient free attack shows REGroup~is not masking the gradients and is attack method agnostic.\n\n\\section{Analysis}\\label{sec:ablation}\n\n\\subsection{Accuracy vs number of layers ($k$)}\nWe report performance of REGroup~on various attacks reported in Tab. \\ref{tbl:grad-based} for all possible values of $k$. The accuracy of VGG-19 w.r.t. the various values of $k$ is plotted in Fig. \\ref{fig:acc_vs_k}. We observe a similar accuracy vs $k$ graph for ResNet-50 and note that a reasonable choice of $k$ made based on this graph does not significantly impact REGroup's performance. Refer Fig. \\ref{fig:acc_vs_k}, the `Agg' stands for using aggregated Borda count $B^{:ky}$. PGD(V10K,UN), DFool, C\\&W(V10K,UN) and Trust Region are the same experiments as reported in Tab. \\ref{tbl:grad-based}, but with all possible values of $k$. `Per\\_Layer\\_V10K' stands for evaluation using per layer Borda count i.e $\\widehat{y}$~$= \\textit{argmax}_{y} ~~B^{\\ell y}$ on a separate 10,000 correctly classified subset of validation set. In all our experiments we choose the $k$-highest layers where `Per\\_Layer\\_V10K' has at-least $75\\%$ accuracy. A reasonable change in this accuracy criteria of $75\\%$ would not affect the results on adversarial attacks significantly. However, a substantial change (to say $50\\%$) deteriorates the performance on clean sample significantly. \n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figures\/vs_k.pdf}\n \\caption{Accuracy vs no. of layers ($k$) }\n \\label{fig:acc_vs_k}\n\\end{figure}\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{figures\/posneg_fig.pdf}\n \\caption{ Effect of Considering Positive and Negative Pre-Activation Responses}\n \\label{fig:ablation}\n\\end{figure*}\nThe phenomenon of decrease in accuracy of clean samples vs robustness has been studied in \\cite{dohmatob2018limitations} and \\cite{tsipras2018robustness}. 
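The layer-selection rule used above can also be written down directly. One possible reading of it, as a short sketch with our own (hypothetical) names, where per_layer_acc holds the per-layer Borda accuracies on the held-out `Per\\_Layer\\_V10K' subset ordered from the first to the last layer:
\\begin{verbatim}
# Illustrative sketch: k = number of trailing layers with accuracy >= 75%.
def choose_k(per_layer_acc, threshold=0.75):
    k = 0
    for acc in reversed(per_layer_acc):   # walk backwards from the last layer
        if acc < threshold:
            break
        k += 1
    return k
\\end{verbatim}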
\n\n\\subsection{Effect of positive and negative pre-activation responses}\nWe report the impact of using positive, negative and a combination of both pre-activation responses on the performance of REGroup~ in Fig. \\ref{fig:ablation}. We consider three variants of Borda count rank aggregation from later $k$ layers. {Pos:} $B^{:ky} = \\sum_{\\ell=n-k+1}^{n} B^{\\ell y}_{+}$, {Neg:} $B^{:ky} = \\sum_{\\ell=n-k+1}^{n} B^{\\ell y}_{-}$, and { Pos+Neg:} $B^{:ky} = \\sum_{\\ell=n-k+1}^{n} B^{\\ell y}_{+}+B^{\\ell y}_{-}$. We report the Top-1 accuracy (\\%) of the attacks experiment as set up in Tab. \\ref{tbl:grad-based} (DF: DFool, C\\&W, TR: Trust Region), in Tab. \\ref{tbl:grad-free} (BD: Boundary, SP: Spatial), and in Fig. \\ref{fig:pgd_strength} (PGD2, PGD4 and PGD8, with $\\epsilon=2$, $4$ and $8$ respectively). From the bar chart it is evident that in some experiments, {Pos} performs better than { Neg } (e.g UN\\_TR), while in others {Neg} is better than {Pos} only (e.g UN\\_DF). It is also evident that {Pos}+{Neg} occasionally improve the overall performance, and the improvement seems significant in the targeted C\\&W attacks for both the ResNet-50 and VGG-19. We leave it to the design choice of the application, if inference time is an important parameter, then one may choose either Pos or Neg to reduce the inference time to approximately half of what is reported in Tab. \\ref{tbl:inf_time}.\n \n\n\n\n\n\\subsection{Results on CIFAR-10}\nWhile we mainly show results on large-scale dataset (ImageNet), we believe scaling down the datasets to one like CIFAR10 will not have a substantial impact on REGroup's performance. We evaluate REGroup on CIFAR10 dataset using VGG-19 based classifier. We construct generative classifiers using CIFAR-10 dataset following the same protocol as for the ImageNet case described in the Sec. \\ref{sec:LGC}. We apply PGD attack with $\\epsilon=4$ and generate adversarial examples. The results are included in the Tab. \\ref{tbl:cifar_pgd}.\n\n\\begin{table}[!htb]\n\\centering\n\\setlength{\\tabcolsep}{3pt}\n\\scriptsize\n\\begin{tabular}{lccc}\n\\hline\n \\multicolumn{1}{c}{} & \\multicolumn{3}{c}{VGG-19} \\\\\n\n \\cline{2-4} & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{SMax} & \\multicolumn{1}{c}{REGroup} \\\\\n \n \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{\\#S} & \\multicolumn{1}{c}{T1(\\%)} & \\multicolumn{1}{c}{T1(\\%)} \\\\\n \\cline{1-2}\n \\cline{3-4}\n\n \\vspace{0.01cm}\nClean & 10000 & 92 & 88 \\\\\nPGD Untargeted & 9243 & 0 & 57 \\\\\n\\hline\n\\end{tabular}%\n\\caption{\\textbf{Performance on CIFAR10.} Comparison of Top-1 classification accuracy between SoftMax (SMax) and REGroup~ based final classification. $\\#S$ is the number of images for which the attacker is successfully able to generate adversarial examples using PGD attack and the accuracies are reported with respect to the $\\#S$ samples, hence the 0\\% accuracies with the SoftMax (SMax). }\n\\label{tbl:cifar_pgd}\n\\end{table}\n\n\\subsection{Inference time using REGroup} Since we suggest to use REGroup~as a test time replacement of SoftMax, we compare the inference time on both CPU and GPU in Tab. \\ref{tbl:inf_time}. 
We use a machine with an i7-8700 CPU and GTX 1080 GPU.\n\n\n\n\\begin{table}[!htb]\n\\centering\n\\scriptsize\n\\begin{tabular}{lcccc|cccc}\n\\hline\n \\multicolumn{1}{c}{} & \\multicolumn{4}{c|}{ResNet-50} & \\multicolumn{4}{c}{VGG-19} \\\\\n\n \\cline{2-9}& \\multicolumn{2}{c}{SMax}& \\multicolumn{2}{c|}{REGroup} & \\multicolumn{2}{c}{SMax}& \\multicolumn{2}{c}{REGroup} \\\\\n \\cline{2-9} \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{GPU} & \\multicolumn{1}{c}{CPU}& \\multicolumn{1}{c}{GPU} & \\multicolumn{1}{c|}{CPU} & \\multicolumn{1}{c}{GPU} & \\multicolumn{1}{c}{CPU}\n & \\multicolumn{1}{c}{GPU}& \\multicolumn{1}{c}{CPU} \\\\\n \\cline{1-2}\n \\cline{3-9}\n\n \\vspace{0.01cm}\nTime(s) & 0.02 & 0.06 &0.13 & 0.35 &0.03 & 0.12 & 0.16 & 0.64 \\\\\n\\hline\n\\end{tabular}%\n\\caption{\\textbf{Inference time comparison.} REGroup~vs SoftMax.}\n\\label{tbl:inf_time}\n\\end{table}\n\n \n\nIn this work, we have presented a simple, scalable, and practical defense strategy that is model agnostic and does not require any re-training or fine-tuning. We suggest using REGroup~at test time to make a pre-trained network robust to adversarial perturbations. There are three aspects of REGroup~that justify its success. First, instead of using a maximum likelihood based prediction, REGroup~adopts a ranking preference based approach. Second, aggregation of preferences from multiple layers leads to group decision making, unlike SoftMax, which relies on the output of the last layer only. Using both positive and negative layerwise responses also contributes to the robustness of REGroup. Third, Borda count based rank aggregation has an inherent robustness in the presence of noisy individual voters \\cite{rothe2019borda},~\\cite{kahng2019statistical}. Hence, where SoftMax fails to predict the correct class of an adversarial example, REGroup~ takes ranked preferences from multiple layers and builds a consensus using Borda count to make robust predictions. Our promising empirical results indicate that deeper theoretical analysis of REGroup~ would be an interesting direction to pursue. One direction of analysis could be inspired by the recently proposed perspective of neurons as cooperating classifiers \\cite{davel2020dnns}. \n\n\\section{Hyper-parameters for Generating Adversarial Examples}\nWe use Foolbox's \\cite{rauber2017foolbox} implementation of almost all the adversarial attacks (except SPSA\\footnote{https:\/\/github.com\/tensorflow\/cleverhans}, Trust Region\\footnote{https:\/\/github.com\/amirgholami\/TRAttack} and cAdv\\footnote{https:\/\/github.com\/AI-secure\/Big-but-Invisible-Adversarial-Attack}) used in this work. We report the attack-specific hyper-parameters in Tab.~\\ref{tab:hyperparams}.\n\n\\section{Elastic-Net Attacks}\nWe evaluate REGroup~ on Elastic-Net attacks \\cite{chen2018ead}. The Elastic-Net attack formulates the attack process as an elastic-net regularized optimization problem. 
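For orientation, this objective has the standard form used in the elastic-net attack literature (notation ours, not taken from this paper: $x_0$ is the clean image, $f$ the attack's classification loss on the logits, and $c,\\beta$ trade-off constants):
\\begin{equation*}
\\min_{x \\in [0,1]^p} \\; c\\, f(x) + \\beta\\, \\lVert x - x_0 \\rVert_1 + \\lVert x - x_0 \\rVert_2^2 .
\\end{equation*}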
The results are shown in the table \\ref{tbl:ead_attacks}.\n\n\\begin{table}[!htb]\n\\centering\n\\setlength{\\tabcolsep}{3pt}\n\\scriptsize\n\\begin{tabular}{l|c|cc|c|cc}\n\\hline\n \\multicolumn{1}{c}{} & \\multicolumn{3}{c|}{ResNet-50} & \\multicolumn{3}{c}{VGG-19} \\\\\n\n \\cline{2-7} \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c}{SMax} & \\multicolumn{1}{c|}{REGroup} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c}{SMax} & \\multicolumn{1}{c}{REGroup} \\\\\n \n \\multicolumn{1}{c|}{Attacks} & \\multicolumn{1}{c|}{\\#S} & \\multicolumn{1}{c}{T1(\\%)} & \\multicolumn{1}{c|}{T1(\\%)} & \\multicolumn{1}{c|}{\\#S} & \\multicolumn{1}{c}{T1(\\%)} & \\multicolumn{1}{c}{T1(\\%)} \\\\\n \\cline{1-2}\n \\cline{3-7}\n\n \\vspace{0.01cm}\nEAD-Attack & 2000 & 0 & 52 & 2000 & 0 & 49 \\\\\n\\hline\n\\end{tabular}%\n\\caption{\\textit{Performance on EAD attacks.} Top-1 ( \\%) classification accuracy comparison between SoftMax (SMax) and REGroup. $\\#S$ is the number of images for which the attacker is successfully able to generate adversarial examples and the accuracies are reported with respect to the $\\#S$ samples, hence the 0\\% accuracies with the SoftMax (SMax). }\n\\label{tbl:ead_attacks}\n\\end{table}\n\n\n\n\n\\section{Accuracy vs no. of layer\/voters(ResNet50)}\n\\label{sec:resnet50_vs_k}\nWe report the performance of REGroup on various attacks reported in table 2 of the main paper for all possible values of $k$. The accuracy of ResNet-50 w.r.t. the various values of $k$ is plotted in figure \\ref{fig:abl_vs_k_resnet}. \n\n\n\n\\begin{table*}[!htb]\n \\centering\n \\begin{tabular}{|c|c|}\n \\hline\n \\textbf{Attack} & \\textbf{Hyper-parameters} \\\\\n \\hline\n \\multirow{2}{*}{PGD (Untargeted)} & $\\epsilon=4$, Dist:$L_\\infty$, random\\_start=True,\\\\\n & stepsize=0.01, max\\_iter=40 \\\\\n \\hline\n \\multirow{2}{*}{DeepFool (Untargeted)} & $\\epsilon=2$, Dist:$L_2$, max\\_iter=100, \\\\\n & subsample=10 (Limit on the number of the most likely classes)\\\\\n \\hline\n \\multirow{2}{*}{CW (Untargeted)} & $\\epsilon=4$, Dist:$L_2$, binary\\_search\\_steps=5, max\\_iter=1000,\\\\\n & confidence=0, learning\\_rate=0.005, initial\\_const=0.01 \\\\\n \\hline\n Trust Region (Untargeted) & $\\epsilon=2$, Dist:$L_\\infty$, iterations=5000 \\\\\n \\hline\n \\multirow{7}{*}{Boundary (Untargeted)} & $\\epsilon=2$, Dist:$L_2$, iterations=500, max\\_directions=25, starting\\_point=None, \\\\\n &initialization\\_attack=None, \n log\\_every\\_n\\_steps=None, \\\\\n & spherical\\_step=0.01, source\\_step=0.01, step\\_adaptation=1.5, \\\\\n & batch\\_size=1, tune\\_batch\\_size=True, \\\\\n & threaded\\_rnd=True, threaded\\_gen=True \\\\\n \\hline\n \\multirow{3}{*}{Spatial (Untargeted)} & $\\epsilon=2$, Dist=$L_2$, do\\_rotations=True, do\\_translations=True, x\\_shift\\_limits=(-5, 5),\\\\\n & y\\_shift\\_limits=(-5, 5), angular\\_limits=(-5, 5), granularity=10,\\\\ & random\\_sampling=False, abort\\_early=True \\\\\n \\hline\n \\multirow{2}{*}{PGD (Targeted)} & Dist = $L_\\infty$, binary\\_search=True, epsilon=0.3, \\\\\n & stepsize=0.01, iterations=40, random\\_start=True, return\\_early=True \\\\\n \\hline\n \\multirow{2}{*}{CW (Targeted)} & binary\\_search\\_steps=5, max\\_iterations=1000, confidence=0, \\\\\n & learning\\_rate=0.005, initial\\_const=0.01, abort\\_early=True \\\\\n \\hline\n SPSA & $\\epsilon=(4,8)$, Dist:$L_\\infty$, max\\_iter=300, batch\\_size=64, early\\_stop\\_loss\\_thresh = 0, \\\\ \n & perturbation\\_size $\\delta=0.01$, Adam LR=0.01\\\\\n \\hline\n 
\\multirow{2}{*}{EAD} & Dist=$L_2$, binary\\_search\\_steps=5, max\\_iterations=1000, confidence=0, \\\\\n & initial\\_learning\\_rate=0.01, regularization=0.01, initial\\_const=0.01, abort\\_early=True \\\\\n \\hline\n \\multirow{2}{*}{PGD (Untargeted,HC)} & min\\_conf=0.9, Dist=$L_\\infty$, binary\\_search=True, epsilon=0.3, \\\\\n & stepsize=0.01, iterations=40, random\\_start=True, return\\_early=True \n \\\\\n \\hline\n \\multirow{2}{*}{PGD (Targeted,HC)} & min\\_conf=0.9, Dist=$L_\\infty$, binary\\_search=True, epsilon=0.3, \\\\\n & stepsize=0.01, iterations=40, random\\_start=True, return\\_early=True \\\\\n \\hline\n\n \n \\end{tabular}\n\\caption{Attack Specific Hyper-parameters.} \n\\label{tab:hyperparams}\n\\end{table*}\n\n\\begin{figure*}[!htb]\n \\centering\n \\includegraphics[width=17cm]{supp_figures\/resnet50_vs_k.png}\n \\caption{\\text{Ablation study for accuracy vs no. of layers ($k$) on ResNet-50}: `Agg' stands for using aggregated Borda count $B^{:ky}$. PGD, DFool, C\\&W and Trust Region are the same experiments as reported in table 2 of the main paper, but with all possible values of $k$. \"Per\\_Layer\\_V10K\" stands for evaluation using per layer Borda count i.e $\\widehat{y}$~$= \\textit{argmax}_{y} ~~B^{\\ell y}$ on a separate 10,000 correctly classified subset of validation set. In all our experiments we choose the $k$-highest layers where `Per\\_Layer\\_V10K' has at-least $75\\%$ accuracy. A reasonable change in this accuracy criteria of $75\\%$ would not affect the results on adversarial attacks significantly. However, a substantial change (to say $50\\%$) deteriorates the performance on clean sample significantly. The phenomenon of decrease in accuracy of clean samples vs robustness has been studied in \\cite{dohmatob2018limitations} and \\cite{tsipras2018robustness}. \\textbf{Note:} There are four down-sampling layers in the ResNet-50 architecture, hence the total 54 layers. }\n \\label{fig:abl_vs_k_resnet}\n\\end{figure*}\n\n\n\n\\begin{figure*}[!htb]\n\n \\centering\n \\hspace{-0.5cm} \\subfigure[ResNet-50 $46^{th}$ Layer (CONV)]{\n \\includegraphics[width=5.5cm]{supp_figures\/r50_46.png}} \\hspace{-0.2cm}\n \\subfigure[ResNet-50 $48^{th}$ Layer (CONV)]{\\includegraphics[width=5.5cm]{supp_figures\/r50_48.png}}\n \\subfigure[ResNet-50 $50^{th}$ Layer (FC)]{\\includegraphics[width=5.5cm]{supp_figures\/r50_fc.png}}\\\\\n \n \\hspace{-0.5cm} \\subfigure[VGG-19 $17^{th}$ Layer (FC)]{\\includegraphics[width=5.5cm]{supp_figures\/vgg19_fc17.png}}\n \\subfigure[VGG-19 $18^{th}$ Layer (FC)]{\\includegraphics[width=5.5cm]{supp_figures\/vgg19_fc18.png}} \\hspace{-0.2cm}\n \\subfigure[VGG-19 $19^{th}$ Layer (FC)]{\\includegraphics[width=5.5cm]{supp_figures\/vgg19_fc19.png}}\n \n \\caption{TSNE visualization of three variants of pre-activation features i.e positive only (pos), negative only (neg) and combined positive and negative (combined). Visualization of 50 samples of 5 random classes of ImageNet dataset. Class membership is color coded. The dimensions of the pos, neg and combined variants of pre-activation feature is the same for any fully connected layer, while for a CONV layer, pos and neg has the same dimension which is equal to the no. of filters\/feature maps of the respective CONV layer and for combined it is equal to the dimension we get after flattening the whole CONV layer. 
It can be observed in figure(b) that the cluster formed by combined pre-activation feature responses is not a tight as formed by pos and neg separately, which shows the importance of considering pos and neg re-activation responses separately. }\n \\label{fig:tsne_plots}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\\section{Analyzing Pre-Activation Responses}\n\\label{sec:tsne}\nOne of the contributions of our proposed approach is to use both positive and negative pre-activation values separately. We observed both positive and negative pre-activation values contain information that can help correctly classify adversarially perturbed samples. An empirical validation of our statement is shown in figure 3 of the main paper. We further show using TSNE~\\cite{vanDerMaaten2008} plots that all the three variants of the pre-activation feature of a single layer i.e positive only (pos), negative only (neg) and combined positive and negative pre-activation values forms clusters. This indicates that all three contain equivalent information for discriminating samples from others. While on one hand where ReLU like activation functions discard the negative pre-activation responses, we consider negative responses equivalently important and leverage them to model the layerwise behaviour of class samples. The benefit of using positive and negative accumulators is it reduce the computational cost significantly e.g flattening a convolution layer gives a very high-dimensional vector while accumulator reduce it to number of filter dimensions.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDiffusion magnetic resonance imaging (dMRI) is an in vivo and non-invasive\nmedical imaging technology that uses water diffusion as a proxy to probe\nanatomical structures of biological samples. The most important application\nof dMRI is to reconstruct white matter fiber tracts \\revisedc{in brain} -- large \\revisedd{axonal}\nbundles with similar destinations.\nIn white matter,\n water diffusion appears to be anisotropic as water tends to diffuse faster along the fiber bundles. Therefore, white matter fiber structures can be deduced from the diffusion characteristics of water.\n Mapping white matter fiber tracts is of great importance in the study of structural organization of neuronal networks and the understanding of brain functionality\n\\citep{Mori07, Sporns11}. Moreover, dMRI also \\revisedd{has} many\n clinical applications, including detecting brain abnormality in white matter\n due to axonal loss or deformation, which are thought to be \\revisedc{related to}\n many neuron degenerative diseases including Alzheimer's disease, \\revisedd{and also in} surgical\n planning by resolving complex neuronal connections between white and gray\n matter \\citep{nimsky2006implementation}.\n\n \\jpeng{dMRI techniques sensitize signal intensity with the amount of water\n diffusion by applying pulsed magnetic gradient fields on the sample.\n Specifically, water diffusion along the gradient field direction leads to signal\n loss and the amount of loss at a voxel equals to the summation (across\n locations within the voxel) of the sinusoid waves with shifted signal phases\n weighted by the proton density at their respective locations. In other words,\n signal loss (referred to as diffusion weighted signal) is the inverse Fourier\n transform of the diffusion probability density function of water molecules and thus can be used to\n recover water diffusion characteristics. 
The amount of signal loss is also\n influenced by various experimental parameters including the gradient field\n intensity (the stronger, the more loss), the duration of gradient fields (the\n longer, the more loss), etc. Their effects are aggregatively reflected by an\n experimental parameter called the ``$b$-value\" which is often fixed throughout\n the experiment (though multiple $b$-values are used in Q-space imaging). Since\n only water motion along the gradient field direction can be detected, multiple\n gradient directions need to be applied \\citep{Mori07}.}\n\n \\jpeng{In its raw form, dMRI\n provides diffusion weighted signal measurements on a 3D spatial grid (of the\n sample) along a set of predetermined gradient directions}\n\\citep{Bammer-Holdsworth-Veldhuis09, Beaulieu02, Chanraud-Zahr-Sullivan10,\n Mukherjee-Berman-Chung08}. For example, a typical data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI) has diffusion measurements along $41$ gradient directions for each voxel on a $256 \\times 256 \\times 59$ 3D grid of the brain. The first step of dMRI analysis\n is to summarize these measurements into estimates of water diffusion at each voxel. A popular model for water diffusion is the so called single tensor model where the diffusion process is \\revisedd{modeled} as a 3D Gaussian process described by a $3 \\times 3$ positive definite matrix, referred to as a diffusion tensor; see \\citet{Mori07} for an introduction to diffusion tensor imaging (DTI) techniques.\n \\rwong{Figure~\\ref{fig:tensormap} depicts a tensor map on a 2D grid, where each diffusion tensor is represented by an ellipsoid, estimated from diffusion weighted measurements from an ADNI data set using a single tensor model.}\n One then extracts the\n \\revisedd{local diffusion direction as the principal eigenvector of the (estimated) diffusion tensor}\n at each voxel and reconstructs the white matter fiber tracts by computer aided tracking algorithms via a process called tractography \\citep{Basser-Pajevic-Pierpaoli00}.\n\nHowever, DTI cannot resolve multiple fiber populations with distinct orientations, i.e., crossing fibers, within a voxel since a tensor only has one principal direction. Consequently, in crossing fiber regions, estimated diffusion tensors may lead to low anisotropy estimation or oblate tensor estimation. Poor tensor estimation results in poor direction estimation which adversely affects fiber reconstruction; e.g., early termination of or biased fiber tracking.\n\nIn order to resolve intravoxel orientational heterogeneity, several approaches have been proposed. \\citet{Tuch-Reese-Wiegell02} propose a multi-tensor model which assumes a finite number of homogeneous fiber directions within a voxel. However, it has been shown that the parameters in the multi-tensor model are not identifiable \\citep{Scherrer-Warfield10}.\n\\jpeng{Imaging techniques such as Q-ball and Q-space and the corresponding nonparametric methods have also been proposed}\n\\citep{Tuch04, Descoteaux-Angelino-Fitzgibbons07}. However such methods often require high angular resolution diffusion imaging (HARDI) \\citep{Tuch-Reese-Wiegell02, Hosey-Williams-Ansorge05} where a large number of gradients is sampled.\n\\revisedc{In light of these facts}, the goal of this paper is to develop a new fiber \\tlee{direction estimation}\nand tracking method that can handle crossing fibers without requiring any high\nresolution techniques. 
The proposed method, named DiST, short for\n\\revisedd{\\textbf{Di}ffusion Direction \\textbf{S}moothing and \\textbf{T}racking},\nis completely \\revisedd{automated} and improves existing methods in several aspects.\n\\revisedc{Particularly, it} is applicable either when there is a large number\nof gradient directions (\\revisedc{as} in the HARDI setting) or when only a\nrelatively small number of gradient directions \\revisedc{are} available (\\revisedc{as} in most clinical\nsettings).\n\n\nThe DiST method can be divided into three major steps.\n\n{\\bf Step 1:}\n\\revisedd{Estimate} the tensor directions within each voxel under a multi-tensor model. A new parametrization is proposed which makes the tensor directions identifiable. An efficient and numerically stable computational procedure is developed to obtain the maximum likelihood (ML) estimate of the tensor directions. \\tlee{Here we highlight that, this method focuses on the estimation of the tensor directions rather than the actual tensors themselves.}\n\n{\\bf Step 2:}\n\\revisedd{Using the voxel-wise tensor direction estimates from Step~1 as input,}\na new direction smoothing procedure is applied to further improve the diffusion direction estimates by borrowing information from neighboring voxels. A distinctive and unique feature of this smoothing procedure is that it handles crossing fibers through the clustering of directions into homogeneous groups. We note that, although various tensor smoothing methods have been proposed\n\\citep[e.g.,][]{Pennec-Fillard-Ayache06,Arsigny-Fillard-Pennec06,Fillard-Pennec-Arsigny07,Fletcher-Joshi07,Yuan-Zhu-Lin12,Carmichael-Chen-Paul13},\nlittle work has been done on direct diffusion direction smoothing.\n\\revisedb{\nOne notable exception is \\citet{Schwartzman-Dougherty-Taylor08}, which\nharnesses diffusion\ndirections directly to construct a map of test statistics for detecting differences\nbetween diffusion direction maps from two groups of subjects,\n\\revisedc{while the spatial smoothness of the test statistics is being considered.}\nAlso note that approaches to averaging unsigned directions in the real\nprojective space are known in the directional statistics literature.\n}\n\n{\\bf Step 3:}\nLastly, a fiber tracking algorithm is applied to reconstruct fiber tracts using the smoothed diffusion direction estimates obtained in Step~2. This tracking algorithm is designed to explicitly allow for multiple directions within a voxel.\n\nWe apply DiST to an ADNI data set measured on a healthy elderly person with 41-direction dMRI scan on a 3 Tesla GE\nMedical Systems MRI scanner. ADNI is a longitudinal study (since 2005) that collects serial\nMRI, cognitive assessments, and numerous additional measurements approximately twice per year\nfrom hundreds of elderly individuals spanning a range from cognitive health to clinically-diagnosed Alzheimer's\ndisease. We also examine DiST using simulated data sets which mimic the most commonly encountered experimental situations in terms of number of gradient directions and signal to noise ratio. \\tlee{DiST is shown} to lead to superior\nresults than those based on the single tensor model in the simulation study, as well as more biologically sensible results in the real data application.\n\nThe rest of the paper is organized as follows. Section~\\ref{sec:models} provides background material for some common tensor models. 
The proposed methods for tensor direction estimation, smoothing of estimated directions, and fiber tracking are presented in, respectively, Sections~\\ref{sec:voxelwise}, \\ref{sec:smoothing} and~\\ref{sec:track}.\nSection~\\ref{sec:sim1} summarizes simulation results. \nThe application to an ADNI data set is presented in Section~\\ref{sec:real}. Section~\\ref{sec:discuss} provides some concluding remarks, while additional \\rwong{simulation} results and technical details are collected in an online Supplemental Material.\n\n\\begin{figure}[htpb]\n\\begin{center}\n \\includegraphics[height=5cm]{tensormap.png}\n\\end{center}\n \\caption{\\rwong{An example of a tensor map on a 2D grid, where each diffusion tensor is represented by an ellipsoid.}}\\label{fig:tensormap}\n\\end{figure}\n\n\n\\section{Tensor models}\\label{sec:models}\nSuppose dMRI measurements are made on $N$ voxels on a 3D grid representing a brain. For each voxel, we have measurements of diffusion weighted signals along a fixed set (i.e., the same for all voxels) of unit-norm gradient vectors $\\mathcal{U}=\\{{\\brm{u}}_i:i=1,\\dots,m\\}$.\nWe write the set of measurements as $\\{S({\\brm{s}}, {\\brm{u}}): {\\brm{u}}\\in \\mathcal{U} \\}$, where ${\\brm{s}}$ is the 3D coordinate of the center of this voxel.\n\nAssuming Gaussian diffusion, the noiseless signal intensity is given by \\citep[e.g.,][]{Mori07}\n\\[\n\\bar{S} ({\\brm{s}}, {\\brm{u}}) = S_0({\\brm{s}}) \\exp\\left\\{-b{\\brm{u}}^\\intercal {\\brm{D}}({\\brm{s}}) {\\brm{u}}\\right\\},\n\\]\nwhere $S_0({\\brm{s}})$ is the non-diffusion-weighted intensity, $b>0$ is an\nexperimental constant referred to as the $b$-value and $\\brm{D}({\\brm{s}})$ is a\n$3\\times 3$ covariance matrix referred to as the diffusion tensor. This model\nis called the single tensor model and is suited to the case of at most one\ndominant diffusion direction within a voxel.\n\nAlthough the single tensor model is the most widely used tensor model in practice, it is not suitable for crossing fiber regions. To deal with crossing fibers, this model has been extended to a multi-tensor model\n\\citep[e.g.,][]{Tuch02,\n Behrens-Woolrich-Jenkinson03,\n Behrens-Berg-Jbabdi07,\n Tabelow-Voss-Polzehl12}:\n\\begin{align}\n\\bar{S}({\\brm{s}}, {\\brm{u}}) = S_0({\\brm{s}})\\sum^{J({\\brm{s}})}_{j=1} p_j({\\brm{s}}) \\exp\\left\\{-b{\\brm{u}}^\\intercal\n {\\brm{D}}_j({\\brm{s}}) {\\brm{u}}\\right\\}, \\label{eqn:truedti}\n\\end{align}\nwhere $\\sum^{J({\\brm{s}})}_{j=1} p_j({\\brm{s}})=1$ and $p_j({\\brm{s}})>0$ for $j=1,\\dots,J({\\brm{s}})$. Here $J({\\brm{s}})$ represents the number of fiber populations and $p_j({\\brm{s}})$'s denote weights of the corresponding fibers.\n\n\n\n\\section{Voxel-wise estimation of diffusion directions}\n\\label{sec:voxelwise}\nOne important goal of \\revisedc{dMRI} studies is to estimate principal diffusion directions, referred to as diffusion directions hereafter, at each voxel. They may be interpreted as tangent directions along fiber bundles at the corresponding voxel. The estimated diffusion directions are then used as an input for tractography algorithms to reconstruct fiber tracts. This section explores the diffusion direction estimation within a single voxel. For notational simplicity, dependence on voxel index ${\\brm{s}}$ is temporarily dropped.
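For concreteness, the two tensor models above can be simulated in a few lines of code. The sketch below is illustrative only: it is not part of the proposed methodology, and all function names and parameter values are hypothetical choices of ours.
\\begin{verbatim}
# Illustrative sketch: noiseless diffusion-weighted signals under the
# single- and multi-tensor models. All values below are hypothetical.
import numpy as np

def single_tensor_signal(S0, b, D, U):
    # S0 * exp(-b * u^T D u) for each unit gradient direction u (rows of U)
    return S0 * np.exp(-b * np.einsum('ij,jk,ik->i', U, D, U))

def multi_tensor_signal(S0, b, tensors, weights, U):
    # S0 * sum_j p_j * exp(-b * u^T D_j u); the weights p_j must sum to one
    return S0 * sum(p * np.exp(-b * np.einsum('ij,jk,ik->i', U, D, U))
                    for p, D in zip(weights, tensors))

# Example: two axially symmetric tensors crossing at 90 degrees, observed
# along m = 41 random unit gradient directions.
rng = np.random.default_rng(0)
U = rng.standard_normal((41, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)
D1 = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # units: mm^2/s
D2 = np.diag([0.3e-3, 1.7e-3, 0.3e-3])
S_bar = multi_tensor_signal(1000.0, 1000.0, [D1, D2], [0.5, 0.5], U)
\\end{verbatim}
Synthetic signals of this kind, with measurement noise added, are the type of input assumed by the estimation procedures developed in the remainder of this section.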
Moreover, for ease of exposition, we assume that $\\sigma$ and $S_0({\\brm{s}})$ are known {and delay the discussion of their estimation to Section~\\ref{sec:real}.\n\nUnder the single tensor model, various methods for tensor estimation have been proposed including linear regression, nonlinear regression and ML estimation; e.g., see \\citet{Carmichael-Chen-Paul13} for a comprehensive review. Then diffusion directions are derived as principal eigenvectors of (estimated) diffusion tensors. However, for the estimation of multi-tensor models, severe computational issues have been observed and additional prior information and additional assumptions are usually imposed to tackle these issues. For instance, \\citet{Behrens-Woolrich-Jenkinson03,Behrens-Berg-Jbabdi07} use shrinkage priors and \\citet{Tabelow-Voss-Polzehl12} assume all tensors to be axially symmetric (i.e., the two minor eigenvalues are the same) and have the same set of eigenvalues. \\citet{Scherrer-Warfield10} show that the multi-tensor model is indeed non-identifiable \\tlee{in the sense that there exist multiple parameterizations that are observationally equivalent}. These authors suggest to use multiple $b$-values in data acquisition to make the model identifiable. However, due to practical limitations, most of the current dMRI studies are obtained under a fixed $b$-value and so render their suggestion inapplicable. Below we show that the identifiability issue does not prevent one from estimating the diffusion directions and so neither strong assumptions nor special experimental settings are necessary if one is only interested in diffusion directions rather than the diffusion tensors themselves.\n\n\n\\subsection{Identifiability of multi-tensor model}\nModel~(\\ref{eqn:truedti}) can be re-written as\n\\[\n \\bar{S}({\\brm{u}}) = S_0 \\sum^{J}_{j=1} p_j a_j \\exp \\left\\{\n -b {\\brm{u}}^\\intercal\\left( {\\brm{D}}_j + \\frac{\\log a_j}{b} \\brm{I}_3\\right) {\\brm{u}}\n \\right\\},\n\\]\nwhere $a_j>0$ for $j=1,\\dots,J$ such that $p_ja_j>0$, $\\brm{D}_j +\n(\\log a_j\/b)\\brm{I}_3$ is positive definite and $\\sum^J_{j=1}{p_ja_j}=1$. When\n$J=2$, one can easily derive the explicit conditions for $a_j$ to fulfill\nthese criteria, and see that there are infinite sets of such $a_j$'s.\nHowever, note that $\\brm{D}_j +\n(\\log a_j\/b)\\brm{I}_3$ shares the same set of eigenvectors with ${\\brm{D}}_j$. Thus,\none may still be able to estimate diffusion directions, which correspond to the\nmajor eigenvectors of the tensors. This motivates us to consider estimating\ndiffusion directions directly instead of the tensors themselves.\n\nNow we assume that ${\\brm{D}}_j$'s are axially symmetric; that is, the two minor eigenvalues of ${\\brm{D}}_j$ are equal.\nThis is a common assumption\nfor modeling dMRI data and it implies that\ndiffusion is symmetric around the principal\ndiffusion direction\n\\citep{Tournier-Calamante-Gadian04, Tournier-Calamante-Connelly07}.\nBy not differentiating the two minor eigenvectors, we obtain a\nclear meaning of diffusion direction. 
In addition, this reduces the number of\nunknown parameters by one \\reviseda{for each tensor in\nthe multiple tensor model} and thus facilitates estimation.\nIn the following, we propose a new parametrization of the multi-tensor model which is identifiable and thus can be used for direction estimation.\n\n\nWrite $\\mathcal{M}$ as the space of the unit principal eigenvector; i.e., the 3D unit sphere with equivalence relation\n${\\brm{m}} \\sim -{\\brm{m}}$.\nLet $\\alpha_j\\ge0$, $\\xi_j>0$ and ${{\\brm{m}}}_j\\in\\mathcal{M}$\nbe the difference between the larger and smaller eigenvalue, smaller eigenvalue\nand the standardized principal eigenvector of ${\\brm{D}}_j$, respectively.\nSince $\\brm{D}_j =\\alpha_j {{\\brm{m}}}_j {{\\brm{m}}}_j^\\intercal + \\xi_j \\brm{I}_3 $,\nmodel~(\\ref{eqn:truedti}) becomes\n\\begin{align}\n \\bar{S}({\\brm{u}}) &= S_0 \\sum^J_{j=1}p_j \\exp \\left\\{ -b {\\brm{u}}^\\intercal \\left(\n \\alpha_j {{\\brm{m}}}_j {{\\brm{m}}}_j^\\intercal + \\xi_j \\brm{I}_3\\right) {\\brm{u}} \\right\\}\\nonumber\\\\\n&= S_0 \\sum^J_{j=1} \\tau_j \\exp \\left\\{ -b \\alpha_j ({\\brm{u}}^\\intercal\n {{\\brm{m}}}_j)^2\\right\\},\\label{eqn:newpar}\n\\end{align}\nwhere $\\tau_j = p_j \\exp(-b\\xi_j) \\in (0,1)$.\n From the above, one can see that\n$p_j$ and $\\xi_j$ are not simultaneously identifiable, so\nwe cannot estimate the tensors.\nHowever, as stated in the following theorem, $\\tau_j, \\alpha_j, {\\brm{m}}_j^\\intercal$ are identifiable and hence we can estimate the principal diffusion directions ${\\brm{m}}_j$'s.\n\n\\begin{thm}\\label{thm:identi}\nUnder model~(\\ref{eqn:newpar}), \\tlee{for any arbitrary $J$}, the parameters $\\bm{\\gamma}=(\\bm{\\gamma}_1^\\intercal,\\dots,\\bm{\\gamma}_J^\\intercal)^\\intercal$ are identifiable,\nwhere\n$\\bm{\\gamma}_j=(\\tau_j, \\alpha_j, {\\brm{m}}_j^\\intercal)^\\intercal$ for $j=1, \\dots, J$.\n\\end{thm}\n\\revisedb{The proof of this theorem can be found in \\suppref{Section~S5.1} of the Supplemental Material. 
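To make the reparametrization and Theorem~\\ref{thm:identi} concrete, the following minimal sketch (illustrative only; the code is not from our implementation and all values are hypothetical) evaluates model~(\\ref{eqn:newpar}) and records, in comments, why only $\\tau_j$, rather than $p_j$ and $\\xi_j$ separately, is visible in the signal.
\\begin{verbatim}
# Illustrative sketch of the reparametrized model:
#   S_bar(u) = S0 * sum_j tau_j * exp(-b * alpha_j * (u^T m_j)^2),
# with tau_j = p_j * exp(-b * xi_j). All names/values are hypothetical.
import numpy as np

def newpar_signal(S0, b, tau, alpha, M, U):
    # U: (m,3) unit gradients; M: (J,3) unit principal directions (rows)
    proj2 = (U @ M.T) ** 2                 # entry (i,j) = (u_i^T m_j)^2
    return S0 * np.exp(-b * alpha * proj2) @ tau

# With D_j = alpha_j m_j m_j^T + xi_j I_3 we have
#   p_j exp(-b u^T D_j u) = [p_j exp(-b xi_j)] exp(-b alpha_j (u^T m_j)^2),
# so different (p_j, xi_j) pairs with the same product give identical data,
# while (tau_j, alpha_j, m_j) remain identifiable, as stated in the theorem.
U = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0],
              [0.577, 0.577, 0.578]])
M = np.array([[1.0, 0, 0], [0, 1.0, 0]])
S1 = newpar_signal(1000.0, 1000.0, np.array([0.5, 0.4]),
                   np.array([1.4e-3, 1.4e-3]), M, U)
\\end{verbatim}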
Note that,\n \\revisedc{compared to the model in} \\citet{Tabelow-Voss-Polzehl12},\n \\revisedc{model (\\ref{eqn:newpar})} allows for different eigenvalues and shapes of the\n tensors within a voxel, \\revisedc{and thus is much more flexible}.}\n\n\\subsection{Parameter estimation using \\revisedc{maximum likelihood (ML)}}\nWe first consider the case when $J$ is known and delay the selection of $J$ to Section~\\ref{sec:J}.\n\\revisedb{By assuming Gaussian additive noise on both real and imaginary parts of the\n \\revisedc{complex} signal, the observed signal intensity can be modeled as \\reviseda{\\citep[see, e.g.,][]{Zhu-Zhang-Ibrahim07}}\n\\[\nS({\\brm{u}}) = \\| \\bar{S}( {\\brm{u}}) \\bm{\\phi}( {\\brm{u}}) + \\sigma\n\\bm{\\epsilon}({\\brm{u}})\\|,\n\\]\nwhere $\\bar{S}( {\\brm{u}})$ is the intensity of the noiseless signal, $\\bm{\\phi}({\\brm{u}})$ is a unit vector in $\\mathbb{R}^2$ representing the phase of the signal, $\\bm{\\epsilon}({\\brm{u}})$ is the noise random variable following $\\mathcal{N}_2(\\brm{0},\\brm{I}_2)$ and $\\sigma>0$ denotes the noise level.\nNote that both $\\phi$ and $\\bm{\\epsilon}$ \\revisedc{may} depend on ${\\brm{s}}$.\nThe observed signal intensity then follows a Rician distribution \\citep{Gudbjartsson-Patz95}:\n\\[\nS({\\brm{u}}) \\sim \\mathrm{Rician}(\\bar{S}({\\brm{u}}),\\sigma).\n\\]\nMoreover, we assume the noise $\\bm{\\epsilon}({\\brm{u}})$'s are independent across different voxels and gradient directions.}\n\nUnder the Rician noise assumption, the log-likelihood of $\\bm{\\gamma}$ in model~(\\ref{eqn:newpar}) is:\n\\begin{align}\nl(\\bm{\\gamma}) &= \\sum_{{\\brm{u}}\\in\\mathcal{U}}\\log\\left[\n \\frac{S({\\brm{u}})}{\\sigma^2} \\exp \\left\\{ -\\frac{S^2({\\brm{u}}) +\n \\bar{S}^2({\\brm{u}})}{2\\sigma^2} \\right\\} I_0 \\left\\{\\frac{S({\\brm{u}})\n \\bar{S}({\\brm{u}})}{\\sigma^2}\\right\\} \\right] \\notag\\\\\n&= \\sum_{{\\brm{u}}\\in\\mathcal{U}} \\left[ \\log\\left\\{\\frac{S({\\brm{u}})}{\\sigma^2}\\right\\}\n -\\frac{S^2({\\brm{u}}) + \\bar{S}^2({\\brm{u}})}{2\\sigma^2} + \\log I_0 \\left\\{\\frac{S({\\brm{u}})\n \\bar{S}({\\brm{u}})}{\\sigma^2}\\right\\} \\right], \\label{eqn:likelihood}\n\\end{align}\nwhere $I_0(x) = \\int^{\\pi}_0 \\exp(x\\cos\\phi) d\\phi\/ \\pi$ is the zeroth order modified Bessel function of the first kind.\nThe ML estimate is obtained through maximizing~(\\ref{eqn:likelihood}). Although the above new parametrization avoids the identifiability issue, the likelihood function usually has multiple local maxima, which makes the computation of ML estimate difficult and unstable.\n\nThe method that we used to overcome this issue can be briefly described as follows. We first develop an approximation of model~(\\ref{eqn:newpar}) whose likelihood can be globally maximized via a grid search. We utilize the geometry of the problem so that the grid search can be done \\revisedc{efficiently}. Then we use the ML estimate of this approximated model as the initial value in a gradient method to obtain the ML estimate of model~(\\ref{eqn:newpar}). This method provides very reliable estimates. To speed up the pace of this article, its full description is given in \\suppref{Section~S1} of the Supplemental Material.\n\n\\ignore{\nIn \\reviseda{an} attempt to find the global maximizer of~(\\ref{eqn:likelihood}), we develop an efficient algorithm through an approximation of model~(\\ref{eqn:newpar}). This algorithm essentially performs a grid search, but it makes use of the geometry of the problem so it is fast. 
It includes three major steps: (i) lay down a grid for $(\\alpha_j, {\\brm{m}}_j^\\intercal)$'s, (ii) evaluate the \\reviseda{maximized} likelihood function \\reviseda{w.r.t. $\\tau_j$'s} on the grid, and (iii) return the grid point that maximizes the likelihood function. One can then use this returned grid point as a starting value in a gradient method for obtaining ML estimation of model~(\\ref{eqn:newpar}). Such a strategy results in better numerical stability and accuracy in finding ML estimates.\n\n\\subsubsection{An approximation of model~(\\ref{eqn:newpar})}\nLet $\\brm{c}_j=(\\alpha_j,{\\brm{m}}_j^\\intercal)^\\intercal$, $\\brm{c}=(\\brm{c}_1^\\intercal,\\dots, \\brm{c}_J^\\intercal)^\\intercal$ and $\\mathcal{C}_j$ be the set of grid points for $\\brm{c}_j$. For simplicity, we take the same set of grid points, $\\mathcal{C}$, for all $j$. To lay down a grid for ${\\brm{m}}_j$'s, we apply the sphere tessellation using Icosahedron, which is depicted in Figure~\\ref{fig:tess}. Here, we only pick unique vertices up to a sign for the formation of the grid. In our implementation, we utilize randomly rotated versions of the tessellation with two subdivisions, which results in a grid with 321 directions \\reviseda{corresponding to those unique vertices (up to a sign change) in Figure~\\ref{fig:tess} (Right)}. If $\\brm{c}\\in \\prod^J_{j=1} \\mathcal{C}_j =\\mathcal{C}^J$, model~(\\ref{eqn:newpar}) can be rewritten as\n\\begin{equation}\n\\bar{S}({\\brm{u}}) = \\sum_{k=1}^{K} \\tilde{\\beta}_k x({\\brm{u}}, \\tilde{{\\brm{m}}}_k, \\tilde{\\alpha}_k),\n\\label{eqn:lm0}\n\\end{equation}\nwhere $K=|\\mathcal{C}|$, $x({\\brm{u}},\\tilde{{\\brm{m}}}_k, \\tilde{\\alpha}_k) = S_0\\exp\\{-b\\tilde{\\alpha}_k({\\brm{u}}^\\intercal \\tilde{{\\brm{m}}}_k)^2\\}$, $(\\tilde{\\alpha}_k, \\tilde{{\\brm{m}}}_k) \\in \\mathcal{C}$ and $\\tilde{\\beta}_k\\in[0,1)$. One may notice that, in this reformulation, the non-zero $\\tilde{\\beta}_k$'s \\reviseda{are} $\\tau_j$'s in model~(\\ref{eqn:newpar}). If $\\brm{c} \\not\\in \\mathcal{C}^J$, i.e. the set of parameters is not a grid point, then equation~(\\ref{eqn:lm0}) serves as an approximation to $\\bar{S}({\\brm{u}})$ in model~(\\ref{eqn:newpar}) as long as the grid is dense enough in the parameter space.\n\nFurthermore, under the commonly used scales of $b$-values and tensors, $x({\\brm{u}},\\tilde{{\\brm{m}}}_k, \\tilde{\\alpha}_k )$ and $x({\\brm{u}}, \\tilde{{\\brm{m}}}_{k'}, \\tilde{\\alpha}_{k'})$ are highly correlated if $\\tilde{{\\brm{m}}}_k = \\tilde{{\\brm{m}}}_{k'}$.\n\\reviseda{\n Thus, $x({\\brm{u}},\\tilde{{\\brm{m}}}_k, \\tilde{\\alpha}_k )$ is proportional to\n $x({\\brm{u}},\\tilde{{\\brm{m}}}_k, \\tilde{\\alpha}_k' )$ approximately.\n Note that the proportional constant can be combined with\n $\\tilde{\\beta}_k$ to form a new coefficient in linear model~(\\ref{eqn:lm0}).\n}\nInspired by this observation, we reduce the grid size by setting $\\tilde{\\alpha}_k=\\tilde{\\alpha}$ for all $k$ to a common value $\\tilde{\\alpha}$ \\reviseda{and using new coefficients $\\beta_k$'s to take care of the proportional constants due to the discrepancy between ${\\alpha}_j$'s and $\\tilde{\\alpha}$.} From our experience, we set $\\tilde{\\alpha}=2\/b$. With all these approximations, we consider fitting the following model:\n\\begin{equation}\n \\bar{S}(u) = \\sum^K_{k=1} \\beta_k x_k({\\brm{u}}), \\label{eqn:lm}\n\\end{equation}\nwhere $x_k({\\brm{u}})=x({\\brm{u}}, \\tilde{{\\brm{m}}}_k, \\tilde{\\alpha})$ and $\\beta_k\\ge 0$. 
For our purpose, we want to identify non-zero $\\beta_k$'s because those $\\tilde{{\\brm{m}}}_k$'s associated with non-zero $\\hat{\\beta}_k$'s can be regarded as selected diffusion directions. Note that model~(\\ref{eqn:lm}) converts the expensive grid search to an estimation problem of a linear model (with respect to ${\\beta}_k$'s) with non-negative constraints. A fast algorithm for fitting this model with Rician noise assumption is given in \\suppref{Section~S1} of the Supplemental Material. As it turns out, the non-negativity constraints often result in a sparse estimate of $\\bm{\\beta}=(\\beta_1,\\dots, \\beta_K)^\\intercal$; i.e., only a subset of directions is selected. In particular, if the estimate of the unconstrained problem (i.e., $\\beta_k$'s are allowed to be negative) is not located in the first quadrant of the parameter space, the corresponding constrained solution will be sparse.\n\nEven though the solution is often sparse, the number of selected directions is usually larger than $J$, the true number of tensor components. This is partly due to colinearity of $x_k({\\brm{u}})$'s resulting from the use of a dense grid on the\ndirections $\\tilde{{\\brm{m}}}_k$'s.\n\nIn the following, we propose to first divide the selected directions into $I$ groups and then generate stable estimates of ${\\brm{m}}_j$'s via gradient methods (Section~\\ref{sec:grouping}). Finally, Bayesian information criterion (BIC) \\citep{Schwarz78} is used to choose an appropriate $I$ as the estimate for $J$ (Section~\\ref{sec:J}).\n\n\\begin{figure}[htpb]\n\\begin{center}\n\\vspace*{-0.2cm}\n\\begin{tabular}{ccc}\n \\includegraphics[height=4cm]{ico-1.png} &\n\\hspace*{-1cm}\n \\includegraphics[height=4cm]{ico-2.png} &\n\\hspace*{-1cm}\n \\includegraphics[height=4cm]{ico-3.png}\n\\end{tabular}\n\\end{center}\n\\vspace*{-0.5cm}\n \\caption{Sphere tessellations through triangulation using Icosahedron with level of subdivisions: 0 (Left), 1 (Middle) and 2 (Right).}\\label{fig:tess}\n\\end{figure}\n\n\n\\subsubsection{Clustering of the selected directions}\\label{sec:grouping}\n\nWrite the above ML estimate of $\\beta_k$ as $\\hat{\\beta}_k$ for $k=1,\\dots, K$. Suppose there are $L>0$ non-zero $\\hat{\\beta}_k$'s, without loss of generality, $k=1,\\dots, L$. Thus, $\\tilde{{\\brm{m}}}_1,\\dots,\\tilde{{\\brm{m}}}_L$ are the selected directions. Now, we develop a strategy to cluster the selected directions into $I$ groups, for a set of $I \\in \\{1,\\dots,L\\}$. To perform clustering, we require a metric measure on the space of directions $\\mathcal{M}$. A natural metric is\n\\begin{equation}\n d^{*} ({\\brm{u}},{\\brm{v}}) = \\mathrm{arccos}(|{\\brm{u}}^\\intercal {\\brm{v}}|), \\label{eqn:geodist}\n\\end{equation}\nwhere ${\\brm{u}}, {\\brm{v}} \\in \\mathcal{M}$. Note that, $d^{*} ({\\brm{u}},{\\brm{v}})$ is the acute angle between ${\\brm{u}}$ and ${\\brm{v}}$. With this distance metric, one can define dissimilarity matrix for a set of directions and make use of a generic clustering algorithm. Our choice is the Partition Around Medoids (PAM) \\citep{Kaufman-Rousseeuw90} due to its simplicity. The detailed procedure is described in \\suppref{Algorithm~S1} in the Supplemental Material, where the input vectors are the selected directions. Due to the sparsity of $\\hat{\\beta}_j$'s and efficient algorithms of PAM, this clustering strategy is practically fast. Let $\\check{{\\brm{m}}}_1, \\dots, \\check{{\\brm{m}}}_I$ be the resulting group (Karcher) means. 
They are used as the starting value for gradient-based methods, such as the L-BFGS-B algorithm \\citep{Byrd-Lu-Nocedal95}, for obtaining $\\hat{\\bm{\\gamma}}(I)$, the ML estimate of $\\gamma$ under model~(\\ref{eqn:newpar}) with $I$ tensor components. More specifically, the starting value is set as $((1\/I, \\tilde{\\alpha}, \\check{{\\brm{m}}}^\\intercal_1),\\dots,(1\/I, \\tilde{\\alpha},\n\\check{{\\brm{m}}}^\\intercal_I))^\\intercal$.\n}\n\n\n\\subsection{Selecting the number of tensor components $J$}\\label{sec:J}\nCommon model selection methods can be applied to select the number of components $J$. Results from extensive numerical experiments suggest that the Bayesian information criterion (BIC) \\citep{Schwarz78} is a good choice; see \\suppref{Section~S2} of the Supplemental Material.\n\n\\reviseda{\n Under model~(\\ref{eqn:newpar}),\n each \\revisedc{tensor} corresponds to four free scalar parameters since\n $\\brm{m}_j$ is characterized by two free scalar parameters.\n}\nThe BIC for a model with $I$ \\revisedc{tensors} is\n\\begin{equation}\n \\mathsf{BIC}(I) = -2l(\\hat{\\bm{\\gamma}}(I)) + 4I\\log(m), \\label{eqn:bic}\n\\end{equation}\nwhere $m$ is the number of gradient directions and $\\hat{\\bm{\\gamma}}(I)$ is the ML estimate of $\\bm{\\gamma}$ under $I$ tensors.\nThen $J$ is chosen as\n$\n \\hat{J} = \\mathrm{argmin}_{I \\in \\{1, \\dots, \\tilde{I} \\}}\n \\mathsf{BIC}(I),\n$\nwhere $\\tilde{I}$ is a pre-specified upper bound for the number of components.\nBased on our experience, $\\tilde{I}=4$ is a reasonable choice.\n\nIn practice, there are voxels with no major diffusion directions.\n\\revisedc{This corresponds to the case where there is only one isotropic tensor.}\nIn the case of an isotropic tensor,\n(\\ref{eqn:newpar}) reduces to\n$\n \\bar{S}({\\brm{u}}) = S_0\\tau_1.\n$\nThus there is only one parameter $\\tau_1$. We write\nthe corresponding likelihood function as $\\tilde{l}$\nand denote the ML estimate of $\\tau_1$ by $\\hat{\\tau}_1$,\nwhich can be obtained by a generic gradient method.\nThe corresponding\nBIC criterion is\n\\[\n \\mathsf{BIC}(0) = -2 \\tilde{l}(\\hat{\\tau}_1) + \\log(m),\n\\]\nwhere 0 represents no diffusion direction.\nCombined with the previous BIC formulation~(\\ref{eqn:bic}), one has a comprehensive\nmodel selection rule, which handles voxels with anywhere from zero up to $\\tilde{I}$ (here 4) fiber\npopulations.\n\n\\revisedb{In practice, we follow the convention and use\nfractional anisotropy (FA) \\citep[see, e.g.,][]{Mori07},\n\\begin{equation}\n FA = \\sqrt{\\frac{ (\\lambda_1-\\lambda_2)^2 + (\\lambda_2-\\lambda_3)^2 +\n (\\lambda_3-\\lambda_1)^2 }{2(\\lambda_1^2 + \\lambda_2^2 + \\lambda_3^2)}}, \\label{eqn:FA}\n\\end{equation}\nwhere $\\lambda_1$, $\\lambda_2$ and $\\lambda_3$ are the eigenvalues of the corresponding tensor in the single tensor model,\n\\revisedc{to conduct an initial screening} to speed up the whole procedure.\nThe FA value lies between zero and one and the larger it is, the more anisotropic the water diffusion is at the corresponding voxel.\nThus we\nfirst remove voxels with very small FA values and then apply the BIC\napproach over those suspected anisotropic\nvoxels.}\n\\rwong{Note that such removal is mainly for reducing computational cost as a typical\ndMRI data set consists of hundreds of thousands of voxels.
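The ingredients of this subsection can be summarized in a short numerical sketch. It is illustrative only: \\texttt{fit\\_model} is a stand-in for the voxel-wise ML fit of \\suppref{Section~S1}, and the FA threshold used in the last function is an assumed value rather than one prescribed by our procedure.
\\begin{verbatim}
# Illustrative sketch (not the authors' code). fit_model(I) is assumed to
# return the maximized Rician log-likelihood of a model with I tensor
# components (I = 0 meaning a single isotropic tensor).
import numpy as np
from scipy.special import i0e

def rician_loglik(S_obs, S_bar, sigma):
    # Rician log-likelihood; log I0(z) = z + log(i0e(z)) for stability
    z = S_obs * S_bar / sigma**2
    return np.sum(np.log(S_obs / sigma**2)
                  - (S_obs**2 + S_bar**2) / (2 * sigma**2)
                  + z + np.log(i0e(z)))

def select_J(fit_model, m, I_max=4):
    # BIC(0) has one parameter (tau_1); each tensor adds four parameters
    bic = {0: -2 * fit_model(0) + np.log(m)}
    for I in range(1, I_max + 1):
        bic[I] = -2 * fit_model(I) + 4 * I * np.log(m)
    return min(bic, key=bic.get)

def fractional_anisotropy(lam):
    l1, l2, l3 = lam
    num = (l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2
    return np.sqrt(num / (2 * (l1**2 + l2**2 + l3**2)))

def anisotropic_voxels(single_tensor_eigenvalues, fa_threshold=0.15):
    # screening: keep voxels whose single-tensor FA exceeds a small
    # (assumed) threshold before running the BIC-based selection
    return [i for i, lam in enumerate(single_tensor_eigenvalues)
            if fractional_anisotropy(lam) > fa_threshold]
\\end{verbatim}
The screening step in the last function mirrors the FA-based removal just described.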
From our experiences,\nthis has little effect on the final tracking results.\nWe also note that the proposed framework including selection of $J$ can be applied\nwithout such removal if enough computational resources are available.}\n\nWe summarize our voxel-wise estimation procedure in \\suppref{Algorithm~S2} in the Supplemental Material. A simulation study is conducted and the corresponding results are presented in \\suppref{Section~S2} of the Supplemental Material. These numerical results suggest that our voxel-wise estimation procedure provides extremely stable and reliable results under various settings.\n\n\n\\section{Spatial smoothing of diffusion directions}\n\\label{sec:smoothing}\nAlthough model~(\\ref{eqn:newpar}) provides a better modeling than the single tensor model for crossing fiber regions, it also leads to an increase in the number of parameters and thus the variability of the estimates.\nTo further improve estimation, we consider borrowing information from neighboring voxels and develop a novel smoothing technique for diffusion directions.\n\n\n\n\n\\revisedc{In many brain regions, it is reasonable to model the fiber tracts as smooth curves at the resolution of voxels in dMRI ($\\sim$ 2mm).\nTherefore, we shall assume that the tangent directions of fiber bundles change smoothly. This leads to the spatial smoothness of diffusion directions that belong to the same fiber bundle.}\n\n\n\\subsection{Smoothing along a single fiber}\\label{sec:single_smooth}\n\nThis subsection considers the simpler situation where there is only one homogeneous population of diffusion directions; i.e., there is only one single fiber bundle without crossing.\nWrite $T$ as the total number of estimated diffusion directions from all voxels and $\\{\\hat{{\\brm{m}}}_k: k=1,\\dots, T\\}$ as the set of all estimated diffusion directions. Also write ${\\brm{s}}_k$ as the corresponding voxel location associated with $\\hat{{\\brm{m}}}_k$. Note that some ${\\brm{s}}_k$'s share the same value, as some voxels contain multiple estimated directions.\nFollowing the idea of kernel smoothing on Euclidean space \\citep[e.g.,][]{Fan-Gijbels96}, the smoothing estimate at voxel ${\\brm{s}}_0$ is defined as a weighted Karcher mean of the neighboring direction vectors:\n\\begin{equation}\n \\operatornamewithlimits{arg\\ min}_{{\\brm{v}}\\in\\mathcal{M}} \\sum^T_{i=1}\n w_id^{*2}(\\hat{{\\brm{m}}}_i,{{\\brm{v}}}), \\label{eqn:general_smooth}\n\\end{equation}\nwhere $w_i = K_{\\brm{H}}({\\brm{s}}_i-{\\brm{s}}_0)$'s are spatial weights\nand the metric $d^*$ is defined as\n\\begin{equation}\n d^{*} ({\\brm{u}},{\\brm{v}}) = \\mathrm{arccos}(|{\\brm{u}}^\\intercal {\\brm{v}}|), \\quad\n{\\brm{u}}, {\\brm{v}} \\in \\mathcal{M}; \\label{eqn:dist2}\n\\end{equation}\ni.e., $d^{*} ({\\brm{u}},{\\brm{v}})$ is the acute angle between ${\\brm{u}}$ and ${\\brm{v}}$.\nThe weights $w_i$'s place more emphasis on spatially closer observations.\nHere $K_{\\brm{H}}(\\cdot)=|\\brm{H}|^{-1\/2}K({\\brm{H}}^{-1\/2}\\cdot)$ with $K(\\cdot)$ as a\n3D kernel function satisfying $\\int K({\\brm{s}}) d{\\brm{s}} = 1$, and\n$\\brm{H}$ is a $3 \\times 3$ bandwidth matrix. 
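A minimal sketch of this weighted Karcher mean is given next, assuming a Gaussian kernel (as adopted below) and the standard sphere exponential and logarithm maps written out in Section~\\ref{sec:theory}. It is a simplified stand-in for the actual implementation; the initialization, step size and iteration count are arbitrary choices of ours.
\\begin{verbatim}
# Illustrative sketch of the weighted Karcher mean smoother: Gaussian spatial
# weights and an iterative tangent-space averaging step on the sphere.
# Directions are axial (m ~ -m), so each is sign-aligned with the current
# estimate before averaging. Not the authors' code.
import numpy as np

def log_map(p, v):
    c = np.clip(v @ p, -1.0, 1.0)
    if np.isclose(abs(c), 1.0):
        return np.zeros(3)
    return np.arccos(c) / np.sqrt(1.0 - c**2) * (v - c * p)

def exp_map(p, u):
    n = np.linalg.norm(u)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) / n * u

def smooth_direction(s0, S, Mhat, h, n_iter=20):
    # S: (T,3) voxel locations; Mhat: (T,3) unit direction estimates;
    # h: bandwidth (length scale); normalizing constants cancel in the weights
    w = np.exp(-np.sum((S - s0)**2, axis=1) / (2.0 * h**2))
    v = Mhat[np.argmax(w)].copy()                      # initial value
    for _ in range(n_iter):
        aligned = np.where((Mhat @ v)[:, None] < 0, -Mhat, Mhat)
        step = sum(wi * log_map(v, mi) for wi, mi in zip(w, aligned)) / w.sum()
        v = exp_map(v, step)
    return v / np.linalg.norm(v)
\\end{verbatim}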
In our numerical work, we choose\n$K(\\cdot)$ as the standard Gaussian density, and set $\\brm{H} = h\\brm{I}_3$, where\n$h$ is \\revisedd{chosen using} the cross-validation (CV) approach described in\n\\suppref{Section~S3} of the Supplemental Material.\n\\revisedb{We adopt the leave-one-out CV idea to\ndevelop an ordinary CV score and two robust CV scores.\n\\rwong{Their practical performances are reported\nin \\suppref{Section~S6} of the Supplemental Material.\n}}\n\n\\subsection{Smoothing over multiple fibers}\\label{sec:mult_smooth}\nWhen there are crossing fibers in a voxel ${\\brm{s}}_0$, the above smoothing procedure will not work well. To \\revisedc{address} this issue, we first cluster the neighboring estimated directions of ${\\brm{s}}_0$ into \\revisedc{groups} that correspond to different fiber populations. Then we apply the above smoothing procedure to each individual \\revisedc{cluster}. This subsection describes this procedure in details.\n\n\nFirst we define neighboring voxels for ${\\brm{s}}_0$. We begin with\ncomputing the spatial weights defined in Section~\\ref{sec:single_smooth}. We\nthen remove those voxels with weights smaller than a threshold. By filtering\nout these voxels, we obtain tighter and better separated clusters of directions.\nMoreover,\nsuch voxels have little effects on smoothing due to their small weights. The\nartificial data set displayed in Figure~\\ref{fig:separate} provides an\nillustrative example. Each black dot in the left panel represents an\nestimated direction (from the center of the sphere). In the middle panel, the\nsize of each dot is proportional to its spatial weight in equation\n(\\ref{eqn:general_smooth}). Lastly, the right\npanel shows all dots with spatial weights larger than a threshold. Notice that\nsuch a trimming operation leads to two obvious clusters of directions, which makes\nthe subsequent task of clustering the directions much easier.\n\n\n\\begin{figure}[htpb]\n\\[\n \\includegraphics[height=3cm]{neighbor-all.png}\n \\includegraphics[height=3cm]{neighbor-weight.png}\n \\includegraphics[height=3cm]{neighbor-thres.png}\n\\]\n\\vspace*{-0.5cm}\n\\caption{\n \\revisedc{Direction clustering.}\n Left: all estimated directions. Middle: sizes of all estimated directions proportional to weights. Right: estimated directions with weights larger than a threshold. Red lines represents underlying true directions.}\\label{fig:separate}\n\\end{figure}\n\n\nNext we need a clustering strategy\nto choose the number of clusters adaptively.\nWith the distance metric (\\ref{eqn:dist2}), one can define dissimilarity matrix for a set of directions and make use of a generic clustering algorithm. Our choice is the Partition Around Medoids (PAM) \\citep{Kaufman-Rousseeuw90} due to its simplicity.\nAlso, we apply the\naverage silhouette \\citep{Rousseeuw87} to choose the number of clusters; see \\suppref{Algorithm~S3} of the Supplemental Material.\nThe silhouette of a datum $i$ measures the strength of its membership to its cluster, as compared to the neighboring cluster. Here, the neighboring cluster is the one, apart from cluster of datum $i$, that has the smallest average dissimilarity with datum $i$. The corresponding silhouette is defined as\n$(b_i-a_i)\/(\\max\\{a_i,b_i\\})$,\nwhere $a_i$ and $b_i$ represent the average dissimilarities of datum $i$ with all other data in the same cluster and that with the neighboring cluster respectively.\nThe average silhouette of all data gives a measure of how good the clustering\nis. 
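A schematic implementation of this clustering step is sketched below; it uses the \\texttt{KMedoids} routine of the \\texttt{scikit-learn-extra} package as a stand-in for PAM, and the weight-trimming threshold is an assumed value, not one from our procedure.
\\begin{verbatim}
# Illustrative sketch: trim low-weight neighbours, cluster the remaining
# direction estimates under the acute-angle dissimilarity, and choose the
# number of clusters by the average silhouette. Not the authors' code;
# KMedoids (scikit-learn-extra) is used here in place of PAM, and the case
# of a single cluster is assumed to be handled separately.
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids

def angular_dissimilarity(M):
    # d*(u, v) = arccos(|u^T v|): acute angle between axial directions
    C = np.clip(np.abs(M @ M.T), 0.0, 1.0)
    np.fill_diagonal(C, 1.0)
    return np.arccos(C)

def cluster_directions(M, w, weight_threshold=0.1, k_max=4):
    keep = w > weight_threshold * w.max()        # trim low-weight neighbours
    D = angular_dissimilarity(M[keep])
    best_labels, best_sil = np.zeros(int(keep.sum()), dtype=int), -np.inf
    for k in range(2, min(k_max, int(keep.sum()) - 1) + 1):
        labels = KMedoids(n_clusters=k, metric="precomputed",
                          random_state=0).fit(D).labels_
        sil = silhouette_score(D, labels, metric="precomputed")
        if sil > best_sil:
            best_sil, best_labels = sil, labels
    return keep, best_labels
\\end{verbatim}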
Thus we select the number of clusters via maximizing the average silhouette.\nThe detailed smoothing procedure is given in \\suppref{Algorithm~S4}.\n\n\n\\subsection{Theoretical results}\\label{sec:theory}\nThis subsection derives asymptotic properties of the proposed direction\nsmoothing estimator.\nNote that the space of direction vectors has a non-Euclidean geometry, and\nso the theoretical framework is different from that of classical smoothing estimators.\nWithout loss of generality, suppose we observe ${\\brm{v}}_1,\\dots,{\\brm{v}}_n \\in \\mathcal{M}$\nat spatial locations ${\\brm{s}}_1,\n\\dots, {\\brm{s}}_n$ respectively.\nLet $\\mathcal{V}$ be the 3D unit sphere.\nThen $\\mathcal{M}$ is the quotient space of $\\mathcal{V}$ with equivalence relation ${\\brm{v}} \\sim -{\\brm{v}}$\nfor any ${\\brm{v}}\\in\\mathcal{V}$. This space is also identified with the so-called\nreal projective space $\\mathbb{R} P^2$.\n\n\nThe theoretical results below were derived under \\revisedb{the more convenient\n random design where\n${\\brm{s}}_i$'s are independently and identically sampled from a distribution with\ndensity $f_S$.}\n \\dpaul{The theorem below (Theorem \\ref{thm:con+norm}) remains valid even\n under a fixed, regular design setting, with the\n number of grid points increasing to infinity. In this case, in the statement of the asymptotic\n formulae and their proofs, the density function $f_S$ is replaced with a\n constant-valued function, representing a regular grid, with corresponding changes\n wherever derivatives of $f_S$ appear.}\n\n Given a spatial location ${\\brm{s}}_0$, our target is to estimate ${\\brm{v}}_0$, namely\nthe diffusion direction at ${\\brm{s}}_0$, \\jpeng{which is defined as the minimizer of}\n$\n \\mathbb{E} \\left\\{d^{*2}(\\brm{V}, {\\brm{v}}) | \\brm{S}={\\brm{s}}_0 \\right\\}\n$\n\\rwong{over ${\\brm{v}}$,}\nwhere ${d}^{*}({\\brm{u}},{\\brm{v}})=\\mathrm{arccos}(|{\\brm{u}}^\\intercal {\\brm{v}}|)$.\n\\rwong{ Here $\\bm{\\mathrm{V}}$\nis a random unit vector representing a random diffusion direction\nand the expectation is taken over\n$\\bm{\\mathrm{V}}$ conditional on $\\bm{\\mathrm{S}}=\\bm{\\mathrm{s}}_0$, where\n$\\bm{\\mathrm{S}}$ represents the location of where\n$\\bm{\\mathrm{V}}$ is observed.}\nFor simplicity, we assume ${\\brm{s}}_i\\in\\mathbb{R}$ and write it as $s_i$\nhereafter. Thus, our estimator~(\\ref{eqn:general_smooth}) at $s_0$ can be written as\n\\[\n \\hat{{\\brm{v}}}(s_0) = \\operatornamewithlimits{arg\\ min}_{{\\brm{v}}\\in\\mathcal{M}} \\sum^n_{i=1} K_h(s_i-s_0) d^{*2}({\\brm{v}}_i,{\\brm{v}}),\n\\]\nwhere \\rwong{$n$ is the number of diffusion direction vectors} and\n$K_h(\\cdot)=K(\\cdot\/h)\/h$. Here, with\nslight notation abuse, $K(\\cdot)$ represents a one-dimensional kernel function throughout\nthe theoretical developments.\n\nWe first describe a working coordinate system.\nFor each ${\\brm{p}}\\in\\mathcal{V}$, one can endow a tangent space\n$T_{{\\brm{p}}} \\mathcal{V}=\\{{\\brm{v}}\\in\\mathbb{R}^3:{\\brm{v}}^\\intercal {\\brm{p}}=0\\}$ with the metric tensor\n$g_{{\\brm{p}}} : T_{{\\brm{p}}} \\mathcal{V} \\times T_{{\\brm{p}}} \\mathcal{V} \\rightarrow \\mathbb{R}$ defined as\n$g_{{\\brm{p}}}({\\brm{u}}_1, {\\brm{u}}_2)= {\\brm{u}}_1^\\intercal {\\brm{u}}_2$.\nNote that the tangent space is identified with $\\mathbb{R}^2$.\nThe geodesics are great circles and the geodesic\ndistance is $\\mathrm{arccos}({\\brm{p}}_1^\\intercal {\\brm{p}}_2)$, for any ${\\brm{p}}_1, {\\brm{p}}_2 \\in \\mathcal{V}$.
The corresponding exponential\nmap at ${\\brm{p}}\\in\\mathcal{V}$, $\\mathrm{Exp}_{{\\brm{p}}}:T_{{\\brm{p}}}\\mathcal{V}\\rightarrow\\mathcal{V}$, is given by\n\\begin{align*}\n \\mathrm{Exp}_{{\\brm{p}}}(\\bm{0}) = {\\brm{p}} \\quad \\mbox{and} \\quad\n \\mathrm{Exp}_{{\\brm{p}}} ({\\brm{u}}) = \\cos(\\|{\\brm{u}}\\|) {\\brm{p}} + \\frac{\\sin(\\|{\\brm{u}}\\|)}{\\|{\\brm{u}}\\|}\n {\\brm{u}} \\quad \\mbox{when} \\quad {\\brm{u}}\\neq \\bm{0},\n\\end{align*}\nwhile the corresponding logarithm map at ${\\brm{p}}\\in\\mathcal{V}$, $\\mathrm{Log}_{{\\brm{p}}}:\n\\mathcal{V}\\backslash\\{-{\\brm{p}}\\}\\rightarrow T_{{\\brm{p}}}\\mathcal{V}$, is\ngiven by\n\\begin{align*}\n \\mathrm{Log}_{{\\brm{p}}}({\\brm{p}}) = \\bm{0} \\quad \\mbox{and} \\quad\n \\mathrm{Log}_{{\\brm{p}}}({\\brm{v}}) = \\frac{\\mathrm{arccos}({\\brm{v}}^\\intercal {\\brm{p}})}{\\sqrt{1-({\\brm{v}}^\\intercal {\\brm{p}})^2}}\n [ {\\brm{v}} - ({\\brm{v}}^\\intercal {\\brm{p}}) {\\brm{p}} ] \\quad \\mbox{when} \\quad\n {\\brm{v}}\\neq {\\brm{p}}.\n\\end{align*}\nOne can use the exponential map and the logarithm map to\ndefine a coordinate system for the $\\mathcal{V}\\backslash\\{-{\\brm{v}}_0\\}$ in the following way.\nGiven ${\\brm{v}} \\in\\mathcal{V}$, we\ndefine the logarithmic coordinate as\n\\[\\omega_1 = \\brm{e}_1^\\intercal \\mathrm{Log}_{{\\brm{v}}_0} ({\\brm{v}}) \\quad \\mbox{and} \\quad \\omega_2 = \\brm{e}_2^\\intercal \\mathrm{Log}_{{\\brm{v}}_0} ({\\brm{v}}),\n\\]\nwhere $\\brm{e}_1, \\brm{e}_2 \\in T_{{\\brm{v}}_0}\\mathcal{V}$ and $\\{\\brm{e}_1, \\brm{e}_2\\}$ forms an\northonormal basis for $T_{{\\brm{v}}_0}\\mathcal{V}$.\nWrite ${\\phi}({\\brm{v}}) = (\\omega_1, \\omega_2)^\\intercal$.\nIn addition, we define\n\\[\n \\rho_{{\\brm{v}}_0}({\\brm{v}}) =\\begin{cases}\n \\mathrm{sign}({\\brm{v}}_0^\\intercal {\\brm{v}}) {\\brm{v}} & {\\brm{v}}_0^\\intercal {\\brm{v}} \\neq 0\\\\\n {\\brm{v}} & {\\brm{v}}_0^\\intercal {\\brm{v}} = 0\n \\end{cases},\n\\]\n\\reviseda{which aligns ${\\brm{v}}$ with ${\\brm{v}}_0$, and\n${d}(\\bm{\\omega},\\bm{\\theta}) = d^*({\\phi}^{-1} (\\bm{\\omega}),{\\phi}^{-1}\n(\\bm{\\theta}))$ for $\\bm{\\omega}, \\bm{\\theta}\\in \\mathbb{R}^2$.\nNote that for any ${\\brm{v}}, {\\brm{p}}\\in\\mathcal{V}$,\nwe have $d(\\tilde{\\phi}({{\\brm{v}}}), \\tilde{\\phi}({\\brm{p}})) = d^*({\\brm{v}}, {\\brm{p}})$\nwhere $\\tilde{\\phi} = {\\phi}\\circ \\rho_{{\\brm{v}}_0}$.}\n\\revisedb{\n Here $\\tilde{\\phi}({\\brm{v}})$ first aligns a direction ${\\brm{v}}$ with\n the true diffusion direction ${\\brm{v}}_0$ and then represents it by its\n logarithmic coordinate.\n}\n\nWe now present the asymptotic results.\nNow, write\n$\\bm{\\theta}_i=\\tilde{\\phi}({\\brm{v}}_i)$\n for $i=1,\\dots, n$, and $\\psi(\\bm{\\omega},\\bm{\\theta}) =\nd^2(\\bm{\\omega},\\bm{\\theta})$. We have $\\bm{\\theta}_0 = \\tilde{\\phi}({\\brm{v}}_0)=\\bm{0}$.\n\\rwong{Also, let $\\bm{\\psi}_j(\\bm{\\omega},\\bm{\\theta})$ be the $j$-th order derivative of $\\psi$ with\nrespect to $\\bm{\\theta}$ for $j=1,2$.}\nLet $\\brm{m}(s) = (m_1(s), m_2(s))^\\intercal = \\mathbb{E}(\\bm{\\theta}_1|S_1=s)$ and $\\bm{\\Sigma}(s) =\n[\\Sigma_{jk}(s)]_{1\\le j, k \\le 2} = \\mathrm{Var}(\\bm{\\theta}_1 | S_1 =s)$. 
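Before stating the asymptotic results, we note that the working coordinate system above translates directly into code. The sketch below is illustrative only; it evaluates the logarithm map, the alignment $\\rho_{{\\brm{v}}_0}$ and the logarithmic coordinates $\\tilde{\\phi}({\\brm{v}})$ for a given ${\\brm{v}}_0$ and orthonormal basis $\\{\\brm{e}_1,\\brm{e}_2\\}$ of $T_{{\\brm{v}}_0}\\mathcal{V}$.
\\begin{verbatim}
# Illustrative sketch of the working coordinate system (not from the paper).
import numpy as np

def log_map(p, v):
    c = np.clip(v @ p, -1.0, 1.0)
    if np.isclose(c, 1.0):
        return np.zeros(3)
    return np.arccos(c) / np.sqrt(1.0 - c**2) * (v - c * p)

def align(v0, v):
    # rho_{v0}(v): flip the sign of v so that v0^T v >= 0
    s = np.sign(v0 @ v)
    return v if s == 0 else s * v

def log_coordinates(v0, e1, e2, v):
    # phi_tilde(v) = (e1^T Log_{v0}(rho_{v0}(v)), e2^T Log_{v0}(rho_{v0}(v)))
    u = log_map(v0, align(v0, v))
    return np.array([e1 @ u, e2 @ u])
\\end{verbatim}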
Also, denote\n$\\bm{\\Psi}(s)= [\\Psi_{jk}(s)]_{1\\le j,k \\le 2} = \\mathbb{E}[\\bm{\\psi}_2(\\bm{\\theta}_1,\n\\bm{\\theta}_0)|S_1=s]$.\n\nUnder the assumptions 1-10 laid out in \\suppref{Section S5.2} of the Supplemental Material\n\\revisedc{which are all standard technical conditions (except for Assumption 1\n which is to ensure the representation of the geodesic distance as a function\n of the working coordinate system)},\nwe have the following theorem.\n\\begin{thm}\\label{thm:con+norm}\n Let $M_n(\\bm{\\theta}) = \\sum^n_{i=1} h K_h(S_i-s_0) d^2(\\bm{\\theta}_i, \\bm{\\theta})$, and assume Assumptions 1-10 hold.\n\n \\begin{enumerate}[(a)]\n \\item There exists a sequence of solutions, $\\hat{\\bm{\\theta}}_n(s_0)$, to\n $M_n^{(1)}(\\bm{\\theta}) = 0$,\n such that $\\hat{\\bm{\\theta}}_n(s_0)$ converges in probability to $\\bm{\\theta}_0$.\n\\item $\\hat{\\bm{\\theta}}_n$ is asymptotically normal:\n \\[\n \\sqrt{nh}\\left\\{ (\\hat{\\bm{\\theta}}_n-\\bm{\\theta}_0) - h^2 \\bm{\\eta}\\right\\} \\implies\n \\mathcal{N}_2(\\bm{0},\\bm{\\Omega}),\n \\]\n where\n \\[\n \\bm{\\eta} = 2\\int x^2 K(x) dx \\bm{\\Psi}^{-1}(s_0) \\left\\{\n \\frac{f_{S}^{(1)}(s_0)}{f_{S}(s_0)}m^{(1)}(s_0) +\n \\frac{1}{2} m^{(2)}(s_0)\\right\\}\n \\]\n and\n \\[\n \\bm{\\Omega} = 4 \\int K^2(x) dx \\bm{\\Psi}^{-1}(s_0) \\bm{\\Sigma}(s_0).\n \\]\n\\end{enumerate}\n\\end{thm}\nThe proof of the Theorem~\\ref{thm:con+norm} can be found in \\revisedb{\\suppref{Section~S5.2}} of the Supplemental Material.\n\n\n\\section{Fiber tracking}\n\\label{sec:track}\nFor dMRI, fiber tractography can be \\revisedc{classified as} deterministic and probabilistic methods. Deterministic methods \\citep[e.g.][]{Mori-Crain-Chacko99, Weinstein-Kindlmann-Lundberg99, Mori-Zijl02} track fiber bundles by utilizing the principal eigenvectors of tensors, while probabilistic methods \\citep[e.g.][]{Koch-Norris-Hund-Georgiadis02,Parker-Alexander03,Friman-Farneback-Westin06} use the probability density of diffusion orientations.\nMost \\revisedc{deterministic} methods \\revisedd{assume one} single diffusion tensor in each voxel, and hence \\revisedd{are unable} to handle voxels with crossing fibers. In view of this, this section develops a \\revisedc{deterministic} tracking algorithm that allows for multiple or no principal diffusion directions in a voxel.\n\n\nThe proposed algorithm can be seen as a generalization of the popular Fiber Assignment by Continuous Tracking (FACT) \\citep{Mori-Crain-Chacko99} algorithm. A brief description of FACT is as \\revisedc{follows.}\nTracking starts at the center of a voxel (Voxel~1 in Figure~\\ref{fig:tract} left panel) and continues in the direction of the estimated diffusion direction. When it enters the next voxel (Voxel~2 in Figure~\\ref{fig:tract} left panel), the track changes its direction to align with the new diffusion direction and so on. This tracking rule may produce many short and fragmented fiber tracts due to either a wrongfully identified isotropic voxel or spurious directions which go nowhere. In addition, it cannot determine which direction to follow in case there are multiple directions in a voxel, which happens in crossing fiber regions.\n\nTo address these issues, we modify the above procedure in the following manner. Given a current diffusion direction (we refer to the corresponding voxel as the current voxel), the voxel that it points to (we refer to this voxel as the destination voxel) may have (i) at least one direction; (ii) no direction (i.e., isotropic). 
In case (i), we will first identify the direction with the smallest angular difference with\nthe current direction. If its separation angle is smaller than a pre-specified\nthreshold (e.g., $\\pi\/6$), we enter the destination voxel and tracking\nwill go on along this direction. See Figure~\\ref{fig:tract} (Middle).\nOn the other hand, if the separation angle is\ngreater than the threshold, or case (ii) happens, we deem that the destination\nvoxel does not have a viable direction. In this case, tracking will go along\nthe current direction if it finds a viable direction within a pre-specified\nnumber of voxels. The number of voxels that are allowed to be skipped is\nset to be $1$ in our numerical illustrations.\nSee Figure~\\ref{fig:tract} (Right). On the other hand,\nthe tracking stops at the current voxel if no viable directions within a\npre-specified number of voxels can be found.\nThe detailed tracking algorithm is described in \\suppref{Algorithm~S5} in the\nSupplemental Material.\n\n\n\n\nAs for the choice of starting voxels, also known as seeds, there are two common strategies. One can choose seeds based on tracts of interest and start the tracking from a region of interest (ROI). This approach is based on knowledge on ROI and may not give a full picture of the tracts of interest if there are diverging branches. The other approach is \\revisedc{the} brute-force approach, where tracking starts from every voxel. It usually leads to a more comprehensive picture of tracts at a higher computational cost. The proposed algorithm can be coupled with either strategy.\n\n\n\nCombining the voxel-wise estimation method in Section~\\ref{sec:voxelwise} and the direction smoothing procedure in Section~\\ref{sec:smoothing} gives the proposed DiST method.\n\n\n\n\n\\begin{figure}[htpb]\n \\[\n \\includegraphics[height=3.5cm]{tract_algorithm.png}\n \\hspace*{0.2cm}\n \\includegraphics[height=3.5cm]{tract_illustrate1.pdf}\n \\hspace*{0.2cm}\n \\includegraphics[height=3.5cm]{tract_illustrate2.pdf}\n \\]\n \\caption{Left: Demonstration of the proposed algorithm in single fiber region. Middle:\n Demonstration of the proposed algorithm in crossing fiber region. Right:\n Demonstration of the proposed algorithm in case of absence of viable\n directions.}\\label{fig:tract}\n\\end{figure}\n\n\\section{Simulation \\revisedc{study}}\\label{sec:sim1}\n\\tlee{Extensive simulation experiments have been conducted to evaluate the practical performances of DiST. They are reported in \\suppref{Section~S6} of the Supplemental Material. Overall, the DiST method provided highly promising results.}\n\n\n\\section{Real data application}\\label{sec:real}\nIn this section, we apply the proposed methodology to a real dMRI data set,\nwhich was obtained from the\nAlzheimer's\nDisease Neuroimaging Initiative (ADNI) database (www.loni.ucla.edu\/ADNI).\nThe primary\ngoal of ADNI has been to test whether serial MRI,\npositron emission tomography (PET), other biological markers, and clinical and\nneuropsychological assessment can be combined to measure the progression of\nmild cognitive impairment (MCI) and onset of Alzheimer's disease (AD).\nIn the following, we use an eddy-current-corrected ADNI data set of a normal\nsubject for illustration of our technique.\n\nThis data set contains 41 distinct gradient directions with\n$b$-value set as $1000s\/mm^2$. 
In addition, there are 5 $b0$ images (corresponding to\n$b=0$), forming\nin total 46 measurements for each of the $256\\times256\\times59$ voxels.\nTo implement our technique, we require estimates of $S_0({\\brm{s}})$'s and $\\sigma$.\nWe first estimate\n$S_0({\\brm{s}})$ and $\\sigma({\\brm{s}})$ for each voxel by ML estimation based on the\n5 $b0$ images.\nThen we fix\n$\\sigma$ as the median of estimated $\\sigma({\\brm{s}})$'s for voxel-wise estimation\nof the diffusion directions.\nSince the original $256\\times256\\times59$ voxels contain volume outside the brain,\nwe only take median over a human-chosen set of $81\\times81\\times20$ voxels.\nThe estimated $\\sigma$ is 56.9.\n\nIn this analysis, we focus on a subset of voxels ($15\\times15\\times5$), which contains the intersection of corpus callosum (CC) and corona radiata (CR). This region is known to contain significant fiber crossing \\citep{Wiegell-Larsson-Wedeen00}.\n\\revisedb{See Figure~\\ref{fig:project} (Left) for a fiber orientation color map of one of the five $xy$-planes.\nWithin the whole focused region, $S_0({\\brm{s}})$'s have mean 1860.1 and standard deviation 522.7.}\n\n\n\nWe then apply voxel-wise estimation to individual voxels followed by the \\textsf{DiST-mcv}\nprocedure.\nDistributions of the estimated number of diffusion directions are summarized in Table\n\\ref{tab:real:hatJ}.\nFor comparison purposes, we also fit the single tensor model with the commonly\nused regression estimator \\citep[e.g.,][]{Mori07}.\n\nThe tracking results are produced by applying the proposed tracking algorithm to the estimated diffusion directions from \\textsf{DiST-mcv},\n\\rwong{which represents the DiST procedure with $h$ chosen by the median cross-validation score\n(see Sections S3 and S6 of the Supplemental Material),}\nand those from the single tensor model estimation.\n\\revisedb{For visualization purposes, we present the longest 900 tracts in Figure~\\ref{fig:corpus_track_dsmooth_level}.}\nFrom anatomy, the CC has a mediolateral direction\nwhile the CR has a superoinferior orientation. 
They are clearly shown in both\ntracking results.\nIn these figures, reconstructed fiber tracts are colored by a RGB color model\nwith red for left-right, green for anteroposterior, and blue for superior-inferior.\nThus, one can easily locate the CC and the CR as the red fiber bundle and the\nblue fiber bundle respectively.\nTracking result based on \\textsf{DiST-mcv} shows clear crossing between\nmediolateral fiber and the superoinferior fiber (in the figure, the crossing of\nred and blue fiber tracts).\nFrom neuroanatomic\natlases and previous studies, \\citet{Wiegell-Larsson-Wedeen00} conclude that\nthere are several fiber populations with crossing structure in this\nconjunction region of CC and CR,\nwhich matches with the tracking based on \\textsf{DiST-mcv}.\nHowever, the single tensor model estimation can only\nreconstruct one major diffusion direction in each voxel and thus\nthe corresponding tracking result does not show crossing structure.\nInstead, the CC (red fiber bundle) is blocked by the CR (blue fiber bundle) and this\nleads to either termination of the CC fiber tracts or significant merging of the CC and\nthe CR fiber tracts instead of the known crossing structure.\nTo give further illustration,\nFigure~\\ref{fig:project} shows the locations of the CC, the CR and the region of\ncrossing fibers (Cross).\nOne can see that estimated directions based on \\textsf{DiST-mcv} reproduces the crossing\nfiber structures between the CC and the CR, while the result based on single tensor\nmodel tends to connect the CC and the CR fibers.\n\nMoreover, the green fiber on top of the CC represents the cingulum bundle.\nBoth fiber tracking based on {DiST} and single fiber model\nproduce clear and sensible reconstruction of cingulum bundle.\nAll these features match with neuroanatomic atlases and provide\na good demonstration of our proposed method.\n\n\\jpeng{As shown by Figures~\\ref{fig:project} and~\\ref{fig:corpus_track_dsmooth_level}, when comparing with the results obtained by the single tensor model, {DiST} produces more biologically sensible and interpretable tracking results. This provides more reliable information on brain connectivity and in turn could lead to better understanding of neuro-degenerative diseases such as Alzheimer's disease and autism as well as better detection of brain abnormality, such as deformation and neuron loss in white matter regions.}\n\n\\renewcommand{\\baselinestretch}{1}\n\\begin{table}[htpb]\n \\centering\n \\caption{Number of voxels with different estimated number of diffusion\n directions.}\\label{tab:real:hatJ}\n \\vspace{0.3cm}\n {\\small\n \\begin{tabular}{c|ccccc|c}\n \\hline\\hline\n & \\multicolumn{5}{c|}{Number of diffusion directions}\\\\\n & 0 & 1 & 2 & 3 & 4 & \\rwong{total} \\\\\n \\hline\\hline\n Voxel-wise estimation & 37 & 476 & 589 & 23 & 0 & \\rwong{1125} \\\\\n Smoothing & 37 & 476 & 593 & 19 & 0 & \\rwong{1125}\\\\\n \\hline\\hline\n \\end{tabular}\n }\n\\end{table}\n\\renewcommand{\\baselinestretch}{1}\n\n\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[height=4.1cm]{fa_z2.png}\n \\includegraphics[height=4.1cm]{proj_z2_col_text.png}\n \\includegraphics[height=4.1cm]{proj_z2_col_text-single.png}\n \\caption{\\revisedb{Left: the fiber orientation color map (based on the single tensor\n model). The focused region is indicated by white rectangular box. 
Middle \\revisedc{(from \\textsf{DiST-mcv})}\n and right \\revisedc{(from single tensor model)}: The projection of fiber\n directions to the $xy$-plane\n at $z=102.6$\n for illustration of crossing fibers.\n \\rwong{(The five $xy$-planes that we focus on have reference values $z=99.9, 102.6, 105.3, 108, 110.7$ from bottom to top.)}\n The plot also shows the location of corpus callosum\n (CC), corona radiata (CR) and crossing region (Cross).\n The fiber orientation color map is overlaid as the background.}\n}\\label{fig:project}\n\\end{figure}\n\n\n\\begin{figure}[htpb]\n \\vspace{-0.5cm}\n \\[\n \\includegraphics[height=5cm]{track_900.png}\n \\hspace*{-0.5cm}\n \\includegraphics[height=5cm]{track_900-2.png}\n \\]\n \\vspace{-0.5cm}\n \\[\n \\includegraphics[height=5cm]{track_900-single.png}\n \\hspace*{-0.5cm}\n \\includegraphics[height=5cm]{track_900-single-2.png}\n \\]\n \\caption{Top: The longest 900 tracks using \\textsf{DiST-mcv}.\n Bottom: \\revisedc{The longest 900 tracks using the} single tensor model.\n The left and right figures correspond to different viewing angles.\n }\n \\label{fig:corpus_track_dsmooth_level}\n\\end{figure}\n\n\n\n\\section{Discussion}\n\\label{sec:discuss}\nUsing tensor estimation to resolve crossing fiber can be problematic,\n\\revisedc{due to the inability of estimating multiple diffusion directions by the single tensor model and}\nthe non-identifiability issue in multi-tensor model. In this paper, we take a\ndifferent route by focusing on the estimation of diffusion directions rather\nthan the diffusion tensors. We develop the corresponding direction smoothing\nprocedure and fiber tracking strategy, together called DiST, along this route.\nOur technique gives promising empirical results in both simulation study\n\\rwong{(see Section S6 of the Supplemental Material)}\nand real data analysis.\n\nThe procedure we presented works well even with moderate number of gradient directions (a few tens), as long as the number of distinct crossing fibers within a voxel is not \\tlee{larger than three}. With HARDI data, which can have up to a couple of hundreds gradient directions, rather than modeling the direction distribution within a tensor framework, we can estimate the fiber orientation distribution nonparametrically \\citep{Tuch04, Descoteaux-Angelino-Fitzgibbons07}.\n\n\\ignore{In that case, we can potentially extend the fiber tracking procedure presented here by adopting a probabilistic approach in which the directions for moving from one voxel to another are sampled from the fiber orientation distribution. Such a probabilistic fiber tracking has the additional advantage of giving a measure of uncertainty of the fiber tracts extracted from the data. This is a topic of future research.}\n\n\\jpeng{Applying {DiST} to multiple images from ADNI (either\n from the same subject over time or from multiple subjects) and then relating\n the tracking results with clinical outcomes such as cognitive measures would\n provide valuable information about the role of white matter connectivity in\n initiation and progression of Alzheimer's disease and dementia. Although this\n is an important direction of research, it is beyond the scope of this paper\n which focuses on developing a statistical procedure to denoise dMRI data and\n to provide better tracking results. 
We plan to explore more sophisticated\n applications of the proposed procedure in our future research.}\n\n\\section*{Acknowledgement}\nData collection and sharing for this project was funded by the Alzheimer's\nDisease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01\nAG024904). ADNI is funded by the National Institute on Aging, the National\nInstitute of Biomedical Imaging and Bioengineering, and through generous\ncontributions from the following: Abbott, AstraZeneca AB, Bayer Schering Pharma\nAG, Bristol-Myers Squibb, Eisai Global Clinical Development, Elan Corporation,\nGenentech, GE Healthcare, GlaxoSmithKline, Innogenetics, Johnson and Johnson,\nEli Lilly and Co., Medpace, Inc., Merck and Co., Inc., Novartis AG, Pfizer Inc,\nF. Hoffman-La Roche, Schering-Plough, Synarc, Inc., as well as non-profit\npartners the Alzheimer's Association and Alzheimer's Drug Discovery Foundation,\nwith participation from the U.S. Food and Drug Administration. Private sector\ncontributions to ADNI are facilitated by the Foundation for the National\nInstitutes of Health (www.fnih.org). The grantee organization is the Northern\nCalifornia Institute for Research and Education, and the study is coordinated\nby the Alzheimer's Disease Cooperative Study at the University of California,\nSan Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at\nthe University of California, Los Angeles. This research was also supported by\nNIH grants P30 AG010129, K01 AG030514, and the Dana Foundation.\nThe authors would like to thank Professor Owen Carmichael for making available\nthe data and his valuable comments.\n\n\n\n\\bibliographystyle{imsart-nameyear}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n \\let\\thefootnote\\relax\\footnotetext{The work of A. G. Burr and K. Cumanan was supported by H2020- MSCA-RISE-2015 under grant number 690750. The work on which this paper is based was carried out in collaboration with COST Action CA15104 (IRACON).}\nMassive multiple-input multiple-output (MIMO) is a promising\ntechnique to achieve high data rate \\cite{5gdebbah,massivetddMarzetta14}. However, high performance multiuser MIMO (MU-MIMO) uplink techniques rely on the availability of full channel state information (CSI) of all user terminals at the base station (BS) receiver, which presents a major challenge to their practical implementation. This paper considers an uplink multiuser system where the BS is equipped with $M$ antennas and serves $K_s$ decentralized single antenna users ($M\\gg K_s$). In the uplink mode, the BS estimates the uplink channel and uses linear receivers to separate the transmitted data. The BS receiver uses the estimated channel to implement the zero-forcing (ZF) receiver which is suitable for Massive MIMO systems \\cite{MarzettaMRC13}. To investigate the performance of MIMO systems, an accurate small scale fading channel model is necessary. \n\nMost standardized MIMO channel models such as IEEE $802.11$, the 3GPP spatial model, and the COST 273 model rely on clustering \\cite{standard}. Geometry-based stochastic channel models (GSCMs) are mathematically tractable models to investigate the performance of MIMO systems \\cite{Molish_tufvesson}. The concept of clusters has been introduced in GSCMs to model scatterers in the cell environments \\cite{Molish_tufvesson}. 
In \\cite{Rappaport_globe15}, the authors use clusters to characterize an accurate statistical spatial channel model (SSCM) in millimeter-wave (mmWave) bands by grouping multipath components (MPCs) into clusters. {MmWave communication suffers from very large path losses, and hence requires large antenna arrays in compensation.} \\cite{Molish_tufvesson}. This paper investigates the throughput in the uplink for the Massive MIMO with carrier frequency in the order of 2 GHz, but the principles can also apply to other frequency bands, including mmWave.\n\nMost existing Massive MIMO techniques rely on the availability of the full CSI of all users at the BS, which presents a major challenge in implementing Massive MIMO. As a result, Massive MIMO techniques with reduced CSI requirement are of great interest. An important issue in Massive MIMO systems is investigating user scheduling in which multiuser diversity gain with imperfect CSI is considered \\cite{Liu_selection}. Recently, a range of user scheduling schemes have been proposed for large MIMO systems. Most of these, such as that described in \\cite{Lee14user}, require accurate knowledge of the channel from all potential users to the BS -which in the frequency division duplex (FDD) Massive MIMO case is completely infeasible to obtain. In \\cite{XuFDDuser}, the authors proposed a greedy user selection scheme by exploiting the instantaneous CSI of all users. However, in this paper we focus on a simplified and robust user scheduling algorithm, by considering Massive MIMO simplifications and the effect of the cell geometry.\n\\subsection{Contributions of This Work}\nThis work investigates a new user selection algorithm for high frequency stochastic geometry-based channels with large numbers of antennas at the BS receiver. We investigate user scheduling by considering the Massive MIMO assumption. The proposed geometry-based user scheduling (GUS) is similar to the greedy weight clique\n(GWC) algorithm but with a different cost function. In the GUS algorithm, the BS selects users based only on the geometry of the area, while in the GWC, the BS uses the channel of the users for user scheduling. Given a map of the area of the micro-cell, we perform efficient user scheduling based only on the position of users and clusters in the cell. In GSCMs, MPCs from common clusters cause high correlation which reduces the rank of the channel. In this paper, we investigate the effect of common clusters on the system performance. Moreover, we assume that the space-alternating generalized expectation (SAGE) algorithm \\cite{crlbc,crlbj} is used (offline) to estimate the direction of arrival (DoA) and the delay of the path. The performance analysis shows the significant effect of the distinct clusters on the system throughput. We prove that to maximize the capacity of system, it is required to select users with visibility of the maximum number of distinct clusters in the area. Next, we show that the position of clusters in the area can be given by geometrical calculation. \nOur results and contributions are summarized as follows: \n\n\\begin{itemize}\n\\item\n Close analytical approximations for Massive MIMO systems are found.\n\\item\n Using the map of the area and positions of users, a new user scheduling scheme is proposed \\textit{under the assumption of no CSI at the BS, other than the location of clusters}.\n Since the position of clusters in the area are fixed, we assume that cluster localization can be done offline. 
\n\\item\n Simulation results show that the proposed scheme significantly reduces the channel estimation overhead in Massive MIMO systems compared to conventional user scheduling algorithms, especially for indoor and outdoor micro-cells.\n\\item\nTo investigate the robustness of the proposed algorithm to cluster localization errors, the performance degradation is shown for different values of the localization error; the simulation results confirm that the proposed user scheduling algorithm is robust to poor cluster localization.\n\\end{itemize}\n\n\\subsection{Outline}\nThe rest of the paper is organized as follows. Section II describes\nthe system model. The proposed user scheduling scheme is presented in Section III. Section IV presents the performance analysis of the\nproposed user scheduling with no estimated CSI. {The robustness of the proposed user scheduling algorithm to cluster localization errors is investigated in Section V.} Numerical\nresults are presented in Section VI. Finally, Section VII concludes\nthe paper.\n\\subsection{Notation}\nIn this paper, uppercase and lowercase\nboldface letters are used for matrices and vectors, respectively.\nThe notation $\\mathbb{E}\\{\\cdot\\}$ denotes expectation, and $|\\cdot|$ stands for absolute value. The conjugate\ntranspose of vector $\\textbf{x}$ is $\\textbf{x}^{H}$. Finally, $\\textbf{x}^T$ and $\\textbf{X}^{\\dag}$\ndenote the\ntranspose of vector $\\textbf{x}$ and the pseudo-inverse of matrix $\\textbf{X}$, respectively.\n\\section{SYSTEM MODEL}\nWe consider uplink transmission in a single-cell Massive MIMO system with $M$ antennas at the BS and $K> M$ single-antenna users. The $M \\times 1$ received signal at the BS when $K_s\\, (K_s\\ll M)$ users have been selected from the pool of $K$ users is given by\n\\begin{equation}\n\\textbf{r}= \\sqrt{p_k}\\textbf{H}\\textbf{x}+\\textbf{n},\n\\end{equation}\nwhere $\\textbf{x}$ represents the symbol vector of the $K_s$ selected users, $p_k$\nis the average power of the $k$th user, $\\textbf{H}$ denotes the aggregate $M \\times K_s$ channel of all selected users and $\\textbf{n}$ is the additive noise vector. The BS is assumed to have CSI only of the selected users \\cite{my-master-vtc,myiet_master}. We are interested in a linear ZF receiver, which is obtained by evaluating the pseudo-inverse of $\\textbf{H}$, the aggregate channel of all selected users, according to\n\\begin{equation}\n\\textbf{W}=\\textbf{H}^{\\dag}=\\left(\\textbf{H}^{H}\\textbf{H}\\right)^{-1}\\textbf{H}^{H}.\n\\end{equation}\nThen, after applying the detector, the processed signal at the BS is\n\\begin{equation}\n\\textbf{y}= \\sqrt{p_k}\\textbf{W}\\textbf{H}\\textbf{x}+\\textbf{W}\\textbf{n}.\n\\end{equation}\nLet us consider equal power allocation between users, i.e. $p=\\frac{P_t}{K_s}$, in which $P_t$ denotes the total power. The achievable sum-rate of the system is obtained as \\cite{my-master-vtc,myiet_master}\n\\begin{equation}\nR=\\sum_{k=1}^{K_s}{\\log_{2}\n \\bigg({{ 1+\\frac{p|{\\mathbf{w}_k}{\\mathbf{h}_k}|^2}{1+\\sum_{i=1,i\\ne{k}}^{K_s}p|\\mathbf{w}_k\\mathbf{h}_i|^2}}}\\bigg)},\n\\end{equation}\nwhere $\\textbf{w}_k$ and $\\textbf{h}_k$ are respectively the $k$th row of the matrix $\\textbf{W}=[\\textbf{w}_1^T,\\textbf{w}_2^T,\\cdots,\\textbf{w}_{K_s}^T]^T$ and the $k$th column of $\\textbf{H}=[\\textbf{h}_1,\\textbf{h}_2,\\cdots,\\textbf{h}_{K_s}]$.\n\\subsection{Geometry-based Stochastic Channel Model}\n\n\\begin{figure}\n\\center\n\\includegraphics[width=91mm]{cluster.eps}\n\\caption{The general description of the cluster model. 
The spatial spreads for the $c$th cluster are given.}\n\\label{cluster}\n\\end{figure}\nIn GSCMs, the double-directional channel impulse response is a superposition of MPCs. The channel is given by \\cite{Costaction}\n\\begin{equation}\nh(t,\\tau,\\phi,\\theta)=\\sum_{j=1}^{N_C}\\sum_{i=1}^{N_p}a_{i,j}\\delta(\\phi-\\phi_{i,j})\\delta(\\theta-\\theta_{i,j})\\delta(\\tau-\\tau_{i,j}),\n\\label{h1}\n\\end{equation}\nwhere $N_C$ is the number of clusters, $N_p$ denotes the number of MPCs per cluster, $a_{i,j}$ is the complex gain of the $i$th MPC in the $j$th cluster, $t$ is time, $\\tau$ denotes the delay, $\\delta$ denotes the Dirac delta function, and $\\phi$ and $\\theta$ represent the direction of arrival (DoA) and direction of departure (DoD), respectively. Similar to \\cite{Costaction,our_ew}, we group the multipath components with similar delays and directions into clusters. Three kinds of clusters are defined: local clusters, single clusters and twin clusters. Local clusters are located around the users and the BS, single clusters are represented by one cluster, and twin clusters are characterized by two clusters related respectively to the user and the BS side, as shown in Fig. \\ref{cluster}. A local cluster is a single cluster that surrounds a user; single clusters can also occur at other positions. Twin clusters consist of a linked pair of clusters, one of which defines the angles of departure of multipaths from the transmitter, while the other defines the angles of arrival at the receiver \\cite{Costaction}. There is a large number of clusters in the area; however, only some of them contribute to the channel. The circular visibility region (VR) determines whether a cluster is active or not for a given user. The MPC gains are scaled by a transition function given by\n\\begin{equation}\nA_{VR}(\\bar{r}_{MS})=\\dfrac{1}{2}-\\dfrac{1}{\\pi}\\arctan \\left(\\dfrac{2\\sqrt{2}\\left(L_C+d_{MS,VR}-R_C\\right)}{\\sqrt{\\lambda L_C}}\\right),\n\\label{vr}\n\\end{equation}\nwhere $\\bar{r}_{MS}$ is the centre of the VR, $R_C$ denotes the VR radius, $L_C$ represents the size of the transition region, $\\lambda$ is the carrier wavelength and $d_{MS,VR}$ refers to the distance between the mobile station (MS) and the VR centre. For a constant expected number of clusters $N_C$, the area density of VRs is given by\n\\begin{equation}\n\\rho_C=\\frac{N_C-1}{\\pi \\left(R_C-L_C\\right)^2}.\n\\label{rho_C}\n\\end{equation}\nAll clusters are ellipsoids in the environment and can be characterized by the cluster spatial delay spread, elevation spread and azimuth spread. Once the positions of the BS and users are fixed, we need to determine the positions of the clusters in the area by geometrical calculations. For the local clusters, we consider a circle around the users and the BS, so that the size of the local cluster can be characterized by the cluster delay spread ($a_C$), elevation spread ($h_C$) and the positions of the MPCs \\cite{Costaction}. For local clusters, the cluster delay, azimuth and elevation spreads are given by\n\\begin{subequations}\n\\begin{eqnarray}\n&a_C=\\dfrac{\\Delta\\tau c_0}{2},\\\\\n& b_C=a_C,\\\\\n& h_C=d_{C,BS}\\tan \\theta_{BS},\n\\end{eqnarray}\n\\end{subequations}\nwhere $c_0$ denotes the speed of light, $d_{C,BS}$ is the distance between the cluster and the BS, $\\Delta\\tau$ refers to the delay spread and $\\theta_{BS}$ is the elevation spread seen by the BS. 
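To make the cluster activation and geometry above concrete, the following minimal numerical sketch evaluates the transition function $A_{VR}$ of (\\ref{vr}), the VR density of (\\ref{rho_C}) and the local-cluster spreads $a_C$, $b_C$ and $h_C$. It is only an illustration: the Python function names and the parameter values are our own assumptions and are not taken from the COST model parameter tables.
\\begin{verbatim}
import numpy as np

def vr_transition_gain(d_ms_vr, R_C, L_C, wavelength):
    # Transition function A_VR: smoothly activates a cluster as the MS
    # moves inside its visibility region (gain approaches 1 well inside the VR).
    arg = 2.0 * np.sqrt(2.0) * (L_C + d_ms_vr - R_C) / np.sqrt(wavelength * L_C)
    return 0.5 - np.arctan(arg) / np.pi

def vr_density(N_C, R_C, L_C):
    # Area density of visibility regions for an expected number of clusters N_C.
    return (N_C - 1.0) / (np.pi * (R_C - L_C) ** 2)

def local_cluster_spreads(delta_tau, d_c_bs, theta_bs):
    # Local-cluster spatial extents: delay spread a_C, azimuth spread
    # b_C (= a_C) and elevation spread h_C seen by the BS.
    c0 = 3.0e8                        # speed of light in m/s
    a_C = delta_tau * c0 / 2.0
    b_C = a_C
    h_C = d_c_bs * np.tan(theta_bs)
    return a_C, b_C, h_C

# Illustrative (assumed) parameter values in metres, seconds and radians.
print(vr_transition_gain(d_ms_vr=20.0, R_C=50.0, L_C=20.0, wavelength=0.15))
print(vr_density(N_C=10, R_C=50.0, L_C=20.0))
print(local_cluster_spreads(delta_tau=100e-9, d_c_bs=60.0, theta_bs=np.radians(5.0)))
\\end{verbatim}
With these assumed values the MS lies well inside the VR, so the computed gain is close to one, i.e. the cluster is fully active for that user.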
The delay spread, angular spreads and shadow fading are correlated random variables and for all kinds of clusters are given by \\cite{corria}\n\\begin{subequations}\n\\begin{eqnarray}\n& \\Delta\\tau_c=\\mu_\\tau(\\frac{d}{1000})^{\\frac{1}{2}} 10^{\\sigma_\\tau \\frac{Z_c}{10}},\\\\\n& \\beta_c= \\tau_\\beta 10^{\\sigma_\\beta \\frac{Y_c}{10}},\\\\\n& S_m = 10^{\\sigma_s \\frac{X_c}{10}},\n\\end{eqnarray}\n\\end{subequations}\nwhere $\\Delta\\tau_c$ refers to the delay spread, $\\beta_c$ denotes angular spread, and $S_m$ is the shadow fading of cluster $c$. Moreover, $X_c$, $Y_c$ and $Z_c$ denote correlated random variables with zero mean and unit variance. Correlated random process can be computed by Cholesky factorization \\cite{corria}. Cholesky factorization can be used to generate a random vector with a desired covariance matrix \\cite{matrixbook}. The MPCs' positions can be drawn from the truncated Gaussian distribution given by \\cite{Costaction}\n\\begin{equation}\nf(r)\\!=\\!\\left\\{\n\\begin{array}{rl}\\dfrac{1}{\\sqrt{2\\pi\\sigma_{r,o}^2}}\\exp\\left(\\!-\\!(\\dfrac{r\\!-\\!\\mu_{r,o}}{\\sqrt{2}\\sigma_{r,o}})^2\\right)& |r|\\le {r}_{T},\\\\\n0~~~~~~~~~~~~~~~~~~~~~~~~~~ &\\text{otherwise},\n\\end{array} \\right.\n\\label{fr1}\n\\end{equation}\nwhere $r_T$ denotes the truncation value. For single clusters, the cluster delay, azimuth and elevation spreads can be given by \n\\begin{subequations}\n\\begin{eqnarray}\n&a_C=\\Delta\\tau c_0\/2,\\\\\n&b_C=d_{C,BS}\\tan \\phi_{BS},\\\\\n&h_C=d_{C,BS}\\tan \\theta_{BS}.\n\\end{eqnarray}\n\\end{subequations}\nTo get the fixed positions of the single clusters, the radial distance of the cluster from the BS drawn from the exponential distribution \\cite{Costaction}\n\\begin{equation}\nf(r)=\\left\\{\n\\begin{array}{rl}\n0~~~~~~~~~~~~~~~~~~~~~~~~~~ & r