diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzaeeb" "b/data_all_eng_slimpj/shuffled/split2/finalzzaeeb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzaeeb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe polaron physics in the presence of electron-phonon ({\\it e-ph}) and \nelectron-electron ({\\it e-e}) interactions\nis an important subject of interest in the condensed matter physics.\\cite{Alex3,mott1}\nEnormous amount of analytical and numerical work has been performed \nin an effort to unravel the intriguing polaron-related physics \nin various interesting systems, such as CMR manganites,\\cite{cmr1,cmr2}\norganic superconductors,\\cite{org1} and \nhigh-T$_c$ superconductors.\\cite{sc1,sc2}\nMicroscopic models employed for the polaron (many-polaron) physics\nin the above systems are the Holstein-Hubbard and Fr\\\"{o}hlich-Hubbard models. \n\nThe analytical approaches to solve the above Hamiltonians are \nmostly based on the many-body perturbation theory \nand so their applicabilities are often\nrestricted to weak and strong-coupling regimes of the {\\it e-ph} coupling.\nAccordingly, they are less applicable to the physically interesting crossover regime.\nInstead, precise numerical methods are employed, such as\nvariational approaches based on the exact-diagonalization (VAED), \nthe density matrix renormalization group, and the quantum Monte-Carlo scheme.\nThe VAED is highly accurate for the polarons and bipolarons\nin the dilute limit.\nThe first VAED calculations were reported\\cite{Trug3,Trug4}\nmore than a decade ago and\nthey were quite accurate for large polarons and more so in the physically\ninteresting crossover regime. \nA very rudimentary effort to increase the scope of\nthe VAED method to the strong coupling regime and to the crossover regime for the\nadiabatic polarons was made by Chakrabarti {\\it et al.},\\cite{Atis1}\nwho started with two initial states\n(the zero phonon state and the state with a large number of phonons at the \nelectron site) to meet with some success. \nMore recently, the Lang-Firsov (LF) idea has been incorporated\nin the variational scheme,\\cite{Alt2,Mono2} \nwhich makes the method more precise through out the parameter\nregimes at least for polaron and bipolaron in the one-dimension (1D). \nThe scheme of Alvermann {\\it et al.},\\cite{Alt1} in which a shifted oscillator\nstate (SOS) is considered over the traditional VAED states,\nis very precise to account for the most\ndifficult adiabatic polarons in the crossover regimes. \n\nThe VAED method has been highly successful in the dilute limit, \nbut is applicable to only one or two particle system.\nReal systems, however, require the study of {\\it e-ph} models \nwith more than two electrons.\\cite{hohenalder1,Fehske2}\nThe question we have addressed in this paper is whether there is further scope to improve\nthe VAED method that could study more than two electron systems.\nTo this end, we have developed a new scheme, the self-consistent VAED (SC-VAED) method,\nwhich is quite efficient and general.\nIn the SC-VAED,\ninstead of generating the variational basis in a single step\nas done in traditional VAED method, we start with a small basis to calculate the ground state,\nand then restart the whole process only with a few initial states, which \ncarry the significant probabilities of the ground state wave function. \nThis process is repeated till the desired accuracy is achieved. 
\n\nThis paper is organized as follows.\nIn Section II, we introduce the Hamiltonian in its most general form,\nwhich incorporates the {\\it e-e} and {\\it e-ph} interactions,\nwithin the Holstein-Hubbard and Fr\\\"{o}hlich-Hubbard models.\nIn Section III, we describe the basis generation scheme in the SC-VAED method.\nIn Section IV, we compare the ground-state energies of different Holstein and Fr\\\"{o}hlich systems \nobtained by using the SC-VAED with those available in the literature.\nWe also discuss the electron-lattice correlation function for a large polaron\nand the bipolaron mass in the strong {\\it e-ph} coupling regime\nto highlight the applicability of the SC-VAED method to different regimes of {\\it e-e}\nand {\\it e-ph} interactions.\nConclusions follow in Section V.\n\n\n\\section{The Hamiltonian}\n\n The general Hamiltonian on a discrete lattice,\\cite{Fehske1,Alex1}\nwhich includes both the {\\it e-e} and {\\it e-ph} interactions, \nis considered:\n\\begin{eqnarray}\nH = &&- \\sum_{i,\\sigma}(t c_{i,\\sigma}^{\\dag} c_{i+1,\\sigma} + h.c.)\n+ \\omega \\sum_i a_i^{\\dag} a_i \\nonumber \\\\\n&&+ g\\omega \\sum_{i,j,\\sigma}f_{j}(i) n_{i,\\sigma} (a_{i+j}^{\\dag}\n+ a_{i+j}) \\nonumber \\\\\n&&+ U\\sum_{i}n_{i,\\uparrow}n_{i,\\downarrow},\n\\end{eqnarray}\n where $c_{i,\\sigma}^{\\dag}$ ($c_{i,\\sigma}$)\ncreates (annihilates) an electron of spin $\\sigma$,\nand $a_{i}^{\\dag}$ ($a_{i}$) creates (annihilates) a\nphonon at site $i$. The third term represents the coupling of\nan electron at site $i$ with an ion at site $j$, where $g$ is the\ndimensionless {\\it e-ph} coupling constant. \n$f_{j}(i)$ is the long-range {\\it e-ph} interaction, \nthe actual form of which is given by\\cite{Fehske1}\n\\begin{eqnarray}\nf_{j}(i) = \\frac{1}{(|i-j|^{2} +1 )^{\\frac{3}{2}}}.\n\\end{eqnarray}\n$U$ is the on-site Hubbard {\\it e-e} interaction strength. \n We set the electron hopping $t$=$1$ for the numerical calculations\nand all energy parameters are expressed in units of $t$.\n\nThe Holstein model is recovered by restricting the interaction to the on-site term $i$=$j$ in Eq. 2. Incorporating the\nwhole Fr\\\"{o}hlich interaction is numerically impossible. \nBonca and Trugman\\cite{Trug2} simplified this model by placing ions\nin the interstitial sites located between Wannier orbitals,\nand then considered just the nearest-neighbor {\\it e-ph} \ninteraction (F2H model), \\cite{Mono2}\nwhich corresponds to the case of $f_{i \\pm \\frac{1}{2}}(i)$=$1$ and zero otherwise. \nThis case has been discussed in detail by Bonca and Trugman\\cite{Trug2} \nand Chakraborty {\\it et al.}.\\cite {Mono2}\nChakraborty {\\it et al.}\\cite {Mono2} also investigated\nthe effect of extending the spatial extent of the {\\it e-ph} interaction\n(F3H and F5H models).\nIn the presence of the $f_{j}(i)$ interaction,\nthe {\\it e-ph} coupling constant $\\lambda$ is \ndefined by\\cite{Fehske1,Trug2} \n\\begin{eqnarray}\n\\lambda= \\frac{\\omega g^{2}\\sum_{l}f_{l}^{2}(0)}{2t}.\n\\end{eqnarray}\n\n\n\\begin{table*}[t]\n\\caption {\nThe ground state energies E$_{0}$'s for different {\\it e-ph} systems\nobtained by the present SC-VAED are compared with the most precise E$_{0}$'s obtained by the VAED in the literature. \nThe basis sizes $N_{Basis}$ used to obtain the E$_{0}$'s are also provided.\nD, Model, and N$_\\mathrm{e}$ represent the dimension, the Hamiltonian (Holstein (H) \nor Fr\\\"{o}hlich-2-Hubbard (F2H)), and the number of electrons in the system, respectively. 
\n$\\omega$, $\\lambda$, {\\it U} denote the phonon frequency, {\\it e-ph} coupling, and Coulomb interaction,\nrespectively, in units of $t$. \n}\n\\begin{ruledtabular}\n\\begin{tabular}{c l c l | l l c | l l | l l l} \n Case & D & Model & N$_\\mathrm{e}$ & $\\omega$ & $\\lambda$ & {\\it U} & E$_{0}$(SC-VAED) & $N_{Basis}$ & E$_{0}$(VAED) & $N_{Basis}$ & Literature \\\\ \\hline \n 1& 1D& H & 1 & 1& 0.5& 0 & -2.46968472393 &2.4$\\times 10^4$ & -2.469684723933 &8.8$\\times 10^4$ & Ref.[\\cite{Trug3,Trug4,Atis1}] \\\\ \n 2& 1D& H & 1 &0.1& 1.0& 0 & -2.53800667 &5.0$\\times 10^5$ & -2.53800669 &3.0$\\times 10^6$ & Ref.[\\cite{Alt1,Mono2}] \\\\ \n 3& 2D& H & 1 & 2& 0.5& 0 & -4.81473577884 &5.0$\\times 10^5$ & -4.814735778337 &5.5$\\times 10^6$ & Ref.[\\cite{Trug4}] \\\\ \n 4& 3D& H & 1 & 3& 0.5& 0 & -7.1623948637 &1.9$\\times 10^5$ & -7.1623948409 &7.0$\\times 10^6$ & Ref.[\\cite{Trug2,Mono2}] \\\\ \n 5& 1D& H & 2 & 1& 0.5& 0 & -5.4246528 &1.4$\\times 10^5$ & -5.4246528 &2.2$\\times 10^6$ & Ref.[\\cite{Mono2}] \\\\ \n 6& 1D& H & 2 & 1& 2.0& 0 & -16.25869250598 &2.0$\\times 10^5$ & -16.25869250598 &1.7$\\times 10^7$ & Ref.[\\cite{Trug2,Mono2}] \\\\ \n 7& 1D& F2H& 2 & 1& 0.5& 1 & -5.82261974 &2.75$\\times 10^5$ & -5.822621 &3.0$\\times 10^6$ & Ref.[\\cite{Mono2}] \\\\ \n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\n\n\\section{The SC-VAED}\n\nFigure 1 provides a schematic picture of basis-state generation in the VAED.\nStarting from the initial state with two electrons and zero phonons,\nnew translationally invariant states are generated by a single operation \nof the Holstein Hamiltonian on the initial state.\nAs mentioned above, \nthe VAED method is restricted to one- or two-particle systems. \nWe have thus tried to improve the VAED method \nto deal with systems with more than two electrons.\n\n\\begin{figure}[t]\n\\includegraphics[scale=0.30]{fig1.eps}\n\\caption{\\label{f1} (Color online)\nIllustration of basis-state generation from the initial singlet Holstein bipolaron state.\nTwo electrons with spin-up (red ball) and spin-down (blue ball) are located \nat the lattice site $1$.\nNew states are generated by a single operation of the off-diagonal term of the Holstein Hamiltonian. \nIf two states are related by translational symmetry, then a single state is retained.\\cite{Trug3,Trug1,Mono2}\nThe tilde mark represents the phonon.\n }\n\\end{figure}\n\n\\begin{figure}[b]\n\\includegraphics[scale=0.30]{fig2.eps}\n\\caption{\\label{f2} (Color online)\nThe electron-lattice correlation function $\\chi(i-j)$ for\na large polaron at $\\omega$=$0.1$ and $\\lambda$=$0.05$. The inset shows the\nweight of $m$-phonon states for the ground state polaron.\\cite{Fehske1} \nWe compare the quantities calculated by using the SC-VAED and the VAED method. \nHere the size of the basis used in the SC-VAED is $26000$, \nwhereas the VAED requires a much larger basis of $731027$ states.\\cite{Trug2,Atis1}\n}\n\\end{figure}\n\nWe have made systematic analyses of the ground state wave-functions\nof already well-studied systems, and found that most of the probability \nof the wave-function is contained in a small number of states. \nOn the basis of this finding, we devise a scheme that discards the\nunimportant states and builds upon the states with higher weights.\nNamely, for a given lattice size, \ninstead of generating the variational space at once,\nwe first generate a small number of states (say $10000$) and obtain \nthe ground state wave-function and energy. 
\nWe pick up a few of the states with the highest probability (say $1000$). \nNow a basis bigger than the first one (say $12000$ states) is\ngenerated with these (say $1000$) states as the starting states. \nWe repeat this process, increasing the size of the basis at each step. \n\nThe result is quite encouraging.\nAs shown below, this scheme reproduces the best available results in all parameter regimes with a basis much\nsmaller than those used before. The higher phonon number states are picked up by the\nself-consistency cycles. We check the convergence by comparing the converged\nenergies for different lattice sizes. \n\n\\section{Results}\n\n\nThe notable feature of our development is that we are in a position \nto reproduce the benchmark results at a much lower computational cost. \nTable 1 shows the ground state energies \nfor different {\\it e-ph} systems obtained by the SC-VAED, which are compared with\nthe best results available in the literature. \nThe strength of the traditional VAED lies\nin the regime of small {\\it e-ph} coupling and intermediate phonon frequency \n(case 1 in Table 1).\\cite{Trug1,Trug2,Trug3,Trug4} \nWe are able to obtain similar precision in the SC-VAED with a much smaller basis size. \nThe VAED fails to maintain its high standard for the adiabatic case \nwith intermediate {\\it e-ph} coupling. The SOS-VAED scheme of Alvermann {\\it et al.}\\cite{Alt1} \nis an excellent approach to overcome this limitation of the VAED \n(case 2 in Table 1). \nIncorporation of the LF idea\\cite{Mono2} also yields\nsimilar success, but with a much bigger basis size. \nNoteworthy is that the SC-VAED scheme obtains \nthe same precision in this regime too, again with a smaller basis size. \nChakraborty {\\it et al.}\\cite{Mono2} showed that\nthe strong coupling regime could be handled efficiently \nwith the LF-VAED (case 6 in Table 1).\nThe SC-VAED describes the two-electron Holstein-Hubbard bipolaron system \nas efficiently as the LF-VAED, but at a much lower computational cost. \nThe SC-VAED works equally well for the Fr\\\"{o}hlich system too (case 7 in Table 1).\n\n\n\\begin{figure}[b]\n\\vskip 0.5cm\n\\includegraphics[scale=0.30]{fig3.eps}\n\\caption{\\label{f3} (Color online)\nEffective mass of a Holstein bipolaron as a function of $U$ \nat $\\omega$=$1.0$ and $\\lambda$=$3.25$,\nwhich is normalized by twice the mass of a polaron in the same parameter regime.\nThe SC-VAED results (solid line) are compared with analytic results (dotted line) obtained from \nsecond-order strong-coupling perturbation theory.\\cite{Alex1,Trug1,Mono2}\n}\n\\end{figure}\n\nThe comparison in Table 1 clearly demonstrates that the SC-VAED scheme\nindeed brings down the numerical burden and thus extends \nthe ambit of the method to more difficult parameter regimes \nand to larger numbers of particles.\nThe price that one has to pay for this method is \nthe construction of the self-consistent basis at each parameter point of the calculation. \nBut this is a small price to pay in view of its advantages.\n\n\n\nNow we consider two different systems\nin different regimes to illustrate the utility of our development. \nLet us first consider a typical large polaron system. \nFigure 2 shows the static electron-lattice correlation function \\cite{Trug3,Atis1,Mono1} \nin the adiabatic regime ($\\omega$=$0.1$)\nand at very low {\\it e-ph} coupling ($\\lambda$=$0.05$). 
\nThe inset shows $|C_{0}^{m}|^{2}$, which corresponds to\nthe weight of the phonon states as defined by Fehske {\\it et al.}.\\cite{Fehske1} \nThe SC-VAED results agree excellently with the VAED results.\\cite{Trug3,Atis1,Mono1}\nThe VAED results were calculated with a basis size of $731027$, whereas the SC-VAED\ncalculations were done with a basis size of $26000$. \nAlthough the lattice sizes are similar, \nthe self-consistent cycles get rid of the higher phonon number\nstates that do not contribute to the ground state wave-function significantly,\nthus keeping the accuracy intact with a much smaller basis.\n\n \nWe next consider the case of extremely strong {\\it e-ph} coupling.\nFigure 3 shows the effective mass of a Holstein bipolaron \nas a function of on-site {\\it e-e} Hubbard interaction $U$ \nat $\\omega$=$1.0$ and $\\lambda$=$3.25$.\\cite{Trug1,Mono2}\nIt is normalized by twice the mass of the polaron in that parameter regime.\nIt is seen that the SC-VAED result is in close agreement with the\nanalytical calculation.\\cite{Alex2,Trug1,Mono2} \nIt should be noted that no prior numerical calculation\nhas been attempted in this regime for bipolarons.\n\nThe above two examples demonstrate the potential applicability of the SC-VAED scheme \nto any {\\it e-ph} coupling regime \nand to different polaron and bipolaron systems\nof both Holstein and Fr\\\"{o}hlich varieties. \n\n\n\n\\section{Conclusions}\n\nWe have developed the self-consistent variational approach (SC-VAED),\nwhich not only reproduces the most precise results with much less computational effort\nbut also increases the scope of the variational approach to much bigger systems. \nThe SC-VAED method is simple and easily implementable. \nThe real benefit of the SC-VAED scheme will become evident\nwhen applied to problems involving more electrons in higher dimensions, \nsuggesting that the SC-VAED is a very promising method with wide applicability. \n\n\\acknowledgements\nThis work was supported by the NRF (No. 2009-0079947) and the\nPOSTECH Physics BK21 fund. \nStimulating discussions with H. Fehske and A. Alvermann are\ngratefully acknowledged. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nJones analyzed the images of the braid group representations obtained from Temperley-Lieb algebras in \\cite{jones86} where, in particular, he determined when the braid group images are finite. Braid group representations with finite image were also recognized in \\cite{jones89} and \\cite{GJ}. Some 15 years later the problem of determining the closure of the image of braid group representations associated with Hecke algebras played a critical role in analyzing the computational power of the topological model for quantum computation \\cite{FLW}. Following these developments the author and collaborators analyzed braid group representations associated with $BMW$-algebras \\cite{LRW} and twisted doubles of finite groups \\cite{ERW}.\n\nPartially motivated by empirical evidence, the author conjectured that the braid group representations associated with an object $X$ in a braided fusion category $\\mathcal{C}$ have finite image if, and only if, the Frobenius-Perron dimension of $\\mathcal{C}$ is integral (see e.g. \\cite[Conjecture 6.6]{RSW}). In \\cite{NR,RUMA} various instances of this conjecture were verified. 
The present work verifies this conjecture for the braided fusion category $\\mathcal{C}(\\mathfrak{sl}_3,6)$ obtained from the representation category of the quantum group $U_q\\mathfrak{sl}_3$ at $q=e^{\\pi \\im\/6}$ (see \\cite{Rsurvey} for details and notation).\n\nMore generally, Jimbo's \\cite{Jm} quantum Schur-Weyl duality establishes a relationship between the modular categories $\\mathcal{C}(\\mathfrak{sl}_k,\\ell)$ obtained from the quantum group $U_q\\mathfrak{sl}_k$ at $q=e^{\\pi \\im\/\\ell}$ and certain semisimple quotients $\\mathcal{H}_n(k,\\ell)$ of specialized Hecke algebras $\\mathcal{H}_n(q)$ (defined below).\nThat is, if we denote by $X\\in\\mathcal{C}(\\mathfrak{sl}_k,\\ell)$ the simple object analogous to the vector representation of $\\mathfrak{sl}_k$, then there is an isomorphism $\\mathcal{H}_n(k,\\ell)\\cong\\End(X^{\\otimes n})$ induced by $g_i\\rightarrow I_X^{\\otimes i-1}\\otimes c_{X,X}\\otimes I^{\\otimes n-i-1}$.\n\nIn particular, the braid group representations associated with the modular category $\\mathcal{C}(\\mathfrak{sl}_3,6)$ are the same as those obtained from $\\mathcal{H}_n(3,6)$.\nIt is known that braid group representations obtained from $\\mathcal{H}_n(3,6)$ have finite image (mentioned in \\cite{FLW,LR,NR}) but a proof has never appeared in print. This fact was discovered by Goldschmidt and Jones during the writing of \\cite{GJ} and independently by Larsen during the writing of \\cite{FLW}. We benefitted from the notes of Goldschmidt and Jones containing the description of the quaternionic braid representation below. Our techniques follow closely those of \\cite{jones86,jones89,LR2}.\n\nThe rest of the paper is organized into three sections. In Section \\ref{Hecke} we recall some notation and facts about Hecke algebras and their quotients. The main results are in Section \\ref{quat}, and in Section \\ref{disc} we indicate how the category $\\mathcal{C}(\\mathfrak{sl}_3,6)$ is exceptional from topological and categorical points of view.\n\n\n\\section{Hecke Algebras}\\label{Hecke}\nWe extract from \\cite{W} the definitions and results that we will need in the sequel.\n\\begin{definition}\nThe \\emph{Hecke algebra} $\\mathcal{H}_n(q)$ for $q\\in\\mathbb C$ is the $\\mathbb C$-algebra with generators $g_1,\\ldots, g_{n-1}$ satisfying relations:\n\n\\begin{enumerate}\n\\item[$(H1)^\\prime$] $g_ig_{i+1}g_i=g_{i+1}g_ig_{i+1}$ for $1\\leq i\\leq n-2$\n\\item[$(H2)^\\prime$] $g_ig_j=g_jg_i$ for $|i-j|>1$\n \\item[$(H3)^\\prime$] $(g_i+1)(g_i-q)=0$\n\\end{enumerate}\n\\end{definition}\n\nTechnically, $\\mathcal{H}_n(q)$ is the Hecke algebra of type $A$, but we will not be considering other types, so we suppress this distinction. 
One immediately observes that $\\mathcal{H}_n(q)$ is the quotient of the braid group algebra $\\mathbb C\\mathcal{B}_n$ by the relation $(H3)^\\prime$.\n$\\mathcal{H}_n(q)$ may also be described in terms of the generators $e_i:=\\frac{(q-g_i)}{(1+q)}$, which satisfy:\n\\begin{enumerate}\n \\item[$(H1)$] $e_i^2=e_i$\n \\item[$(H2)$] $e_ie_j=e_je_i$ for $|i-j|>1$\n \\item[$(H3)$] $e_ie_{i+1}e_i-q\/(1+q)^2e_i=e_{i+1}e_{i}e_{i+1}-q\/(1+q)^2e_{i+1}$ for $1\\leq i\\leq n-2$\n\\end{enumerate}\n\nFor any $\\eta\\in\\mathbb C$, Ocneanu \\cite{FYHLMO} showed that one may uniquely define a linear functional $\\tr$ on $\\mathcal{H}_\\infty(q):=\\cup_{n=1}^\\infty \\mathcal{H}_n(q)$\nsatisfying\n\\begin{enumerate}\n \\item $\\tr(1)=1$\n\\item $\\tr(ab)=\\tr(ba)$\n\\item $\\tr(xe_n)=\\eta\\tr(x)$ for any $x\\in\\mathcal{H}_n(q)$\n\\end{enumerate}\nAny linear function on $\\mathcal{H}_\\infty$ satisfying these conditions is called a \\emph{Markov trace} and is determined by the value $\\eta=\\tr(e_1)$.\n\nNow suppose that $q=e^{2\\pi i\/\\ell}$ and $\\eta=\\frac{(1-q^{1-k})}{(1+q)(1-q^k)}$ for some integers $k<\\ell$. Then for each $n$, the (semisimple) quotient of $\\mathcal{H}_n(q)$ by the annihilator of the restriction of the trace $\\mathcal{H}_n(q)\/\\Ann(\\tr)$ is called the \\textit{$(k,\\ell)$-quotient}. We will denote this quotient by $\\mathcal{H}_n(k,\\ell)$ for convenience. Wenzl \\cite{W} has shown that $\\mathcal{H}_n(k,\\ell)$ is semisimple and described the irreducible representations $\\rho_{\\lambda}^{(k,\\ell)}$, where ${\\lambda}$ is a \\emph{$(k,\\ell)$-admissible} Young diagram of size $n$. Here a Young diagram ${\\lambda}$ is $(k,\\ell)$-admissible if ${\\lambda}$ has at most $k$ rows and ${\\lambda}_1-{\\lambda}_k\\leq \\ell-k$, where ${\\lambda}_i$ denotes the number of boxes in the $i$th row of ${\\lambda}$. The (faithful) Jones-Wenzl representation is the sum: $\\rho^{(k,\\ell)}=\\bigoplus_{\\lambda} \\rho_{\\lambda}^{(k,\\ell)}$. Wenzl \\cite{W} has shown that $\\rho^{(k,\\ell)}$ is a $C^*$ representation, i.e. the representation space is a Hilbert space (with respect to a Hermitian form induced by the trace $\\tr$) and $\\rho_{\\lambda}^{(k,\\ell)}(e_i)$ is a self-adjoint operator.\nOne important consequence is that each $\\rho_{\\lambda}^{(k,\\ell)}$ induces an irreducible unitary representation of the braid group $\\mathcal{B}_n$ via composition with $\\sigma_i\\rightarrow g_i$, which is also called the Jones-Wenzl representation of $\\mathcal{B}_n$.\n\n\n\n\n\n\\section{A Quaternionic Representation}\\label{quat}\nConsider the $(3,6)$-quotient $\\mathcal{H}_n(3,6)$. The $(3,6)$-admissible Young diagrams have at most $3$ rows and ${\\lambda}_1-{\\lambda}_3\\leq 3$. For $n\\geq 3$ there are either $3$ or $4$ Young diagrams of size $n$ that are $(3,6)$-admissible, and $\\eta=\\frac{(1-q^{1-3})}{(1+q)(1-q^3)}=1\/2$ in this case.\nDenote by $\\phi_n$ the unitary Jones-Wenzl representation of $\\mathcal{B}_n$ induced by $\\rho^{(3,6)}$. 
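\nAs a quick check of the value $\\eta=1\/2$ quoted above (a short computation included here for the reader's convenience): since $q=e^{2\\pi i\/6}$ satisfies $q^6=1$ and $q^3=-1$, we have $q^{1-3}=q^{-2}=q^{4}=-q$, so that\n$$\\eta=\\frac{(1-q^{-2})}{(1+q)(1-q^3)}=\\frac{1+q}{2(1+q)}=\\frac{1}{2}.$$\n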
Our main goal is to prove the following:\n\\begin{theorem}\\label{mainthm}\nThe image $\\phi_n(\\mathcal{B}_n)$ is a finite group.\n\\end{theorem}\nWe will prove this theorem by embedding $\\mathcal{H}_n(3,6)$ into a finite dimensional algebra (Lemma \\ref{isolemma}) and then showing that the group generated by the images of $g_1,\\cdots,g_{n-1}$ is finite (Lemma \\ref{finitelemma}).\n\nDenote by $[\\;,\\;]$ the multiplicative group commutator and let $q=e^{2\\pi \\im\/6}$.\nConsider the $\\mathbb C$-algebra $Q_n$ with generators\n$u_1,v_1,\\ldots,u_{n-1},v_{n-1}$ subject to the relations:\n\\begin{enumerate}\n \\item[(G1)] $u_i^2=v_i^2=-1$,\n\\item[(G2)] $[u_i,v_j]=-1$ if $|i-j|\\leq 1$,\n\\item[(G3)] $[u_i,v_j]=1$ if $|i-j|\\geq 2$,\n\\item[(G4)] $[u_i,u_j]=[v_i,v_j]=1$.\n\\end{enumerate}\nNotice that the group $\\{\\pm 1,\\pm u_i,\\pm v_i,\\pm u_iv_i\\}$ is isomorphic to the group of quaternions.\nWe see from these relations that $\\dim(Q_n)=2^{2n-2}$ since each word in the $u_i,v_i$ has a unique normal form\n\\begin{equation}\\label{normform}\n \\pm u_1^{\\epsilon_1}\\cdots u_{n-1}^{\\epsilon_{n-1}}v_1^{\\nu_1}\\cdots v_{n-1}^{\\nu_{n-1}}\n\\end{equation}\nwith $\\nu_i,\\epsilon_i\\in\\{0,1\\}$. Observe that a basis for $Q_n$ is given by taking all $+$ signs in (\\ref{normform}). We define a $\\mathbb C$-valued trace $\\Tr$ on $Q_n$ by setting $\\Tr(1)=1$ and $\\Tr(w)=0$ for any non-identity word in the $u_i,v_i$. One deduces that $\\Tr$ is faithful from the uniqueness of the normal form (\\ref{normform}).\n\nDefine\n\\begin{equation}\\label{eqgj} s_i:=\\frac{-1}{2q}(1+u_i+v_i+u_iv_i)\\end{equation}\nfor $1\\leq i\\leq n-1$.\n\n\\begin{lemma}\\label{isolemma}\n The subalgebra $\\mathcal{A}_n\\subset Q_n$ generated by $s_1,\\ldots,s_{n-1}$ is isomorphic to $\\mathcal{H}_n(3,6)$.\n\\end{lemma}\n\n\\begin{proof}\n\nIt is a straightforward computation to see that the $s_i$ satisfy\n\\begin{enumerate}\n \\item[(B1)] $s_is_{i+1}s_i=s_{i+1}s_is_{i+1}$\n\\item[(B2)] $s_js_i=s_is_j$ if $|i-j|\\geq 2$\n\\item[(E1)] $(s_i-q)(s_i+1)=0$\n\\end{enumerate}\nIndeed, relation (B2) is immediate from relations (G3) and (G4). It is enough to check (B1) and (E1) for $i=1$. For this we compute:\n\\begin{eqnarray}\n&&s_1^{-1}=-\\frac{q}{2}(1-u_1-v_1-u_1v_1)\\nonumber\\\\\n&&s_1^{-1}u_1s_1=u_1v_1,\\quad s_1^{-1}v_1s_1=u_1,\\label{action1}\\\\\n&&s_1^{-1}u_2s_1=u_2v_1,\\quad s_1^{-1}v_2s_1=-u_1v_1v_2\\nonumber\n\\end{eqnarray}\nfrom which (B1) and (E1) are deduced.\n\nThus $\\varphi(g_i)=s_i$ induces an algebra homomorphism $\\varphi:\\mathcal{H}_n(q)\\rightarrow Q_n$ with $\\varphi(\\mathcal{H}_n(q))=\\mathcal{A}_n$. Set $f_i:=\\varphi(e_i)=\\frac{(q-s_i)}{(1+q)}$ and let $b\\in Q_{n-1}$; that is, $b$ is in the span of the words in $\\{u_1,v_1,\\ldots,u_{n-2},v_{n-2}\\}$. The constant term of $f_{n-1}b$ is the product of the constant terms of $b$ and $f_{n-1}$ since $f_{n-1}$ is in the span of $\\{1,u_{n-1},v_{n-1},u_{n-1}v_{n-1}\\}$, so $\\Tr(f_{n-1}b)=\\Tr(f_{n-1})\\Tr(b)$. For each $a\\in\\mathcal{H}_n(q)$ we define $\\varphi^{-1}(\\Tr)(a):=\\Tr(\\varphi(a))$, and conclude that $\\varphi^{-1}(\\Tr)$ is a Markov trace on $\\mathcal{H}_n(q)$. Computing, we see that $\\Tr(f_{n-1})=1\/2$ so that by uniqueness $\\varphi^{-1}(\\Tr)=\\tr$ as functionals on $\\mathcal{H}_n(q)$. Now if $a\\in\\ker(\\varphi)$ we see that $\\tr(ac)=\\Tr(\\varphi(ac))=0$ for any $c$ so that $\\ker(\\varphi)\\subset\\Ann(\\tr)$. 
On the other hand, if $a\\in\\Ann(\\tr)$ we must have $\\Tr(\\varphi(ac))=\\tr(ac)=0$ for all $c\\in\\mathcal{H}_n(q)$. If $\\varphi(a)\\neq 0$ then, by definition of $\\Tr$ and $\\varphi$, there exists an $a^\\dag\\in\\mathcal{H}_n(q)$ such that $\\Tr(\\varphi(a)\\varphi(a^\\dag))\\neq 0$ since $\\Tr$ is faithful. Therefore $\\Ann(\\tr)\\subset\\ker(\\varphi)$. In particular, we see that $\\varphi$ induces:\n$$\\mathcal{H}_n(3,6)=\\mathcal{H}_n(q)\/\\Ann(\\tr)\\cong \\varphi(\\mathcal{H}_n(q))=\\mathcal{A}_n\\subset Q_n.$$\n\\end{proof}\n\n\n\\begin{lemma}\\label{finitelemma}\nThe group $G_n$ generated by $s_1,\\cdots,s_{n-1}$ is finite.\n\\end{lemma}\n\\begin{proof}\n\n Consider the conjugation action of the $s_i$ on $Q_n$. We claim that the conjugation action of $s_i$ on the words in the $u_j,v_j$ is by a signed permutation. Since $s_i$ commutes with words in $u_j,v_j$ with $j\\not\\in\\{i-1,i,i+1\\}$, by symmetry it is enough to consider the conjugation action of $s_1$ on the four elements $\\{u_1,v_1,u_2,v_2\\}$, which is given in (\\ref{action1}).\n\nThus we see that $G_n$ modulo the kernel of this action is a (finite) signed permutation group.\nThe kernel of this conjugation action lies in the center $Z(Q_n)$ of $Q_n$. Using the normal form above we find that the center $Z(Q_n)$ is either $1$-dimensional or $4$-dimensional. Indeed, since the words\n$$W:=\\{u_1^{\\epsilon_1}\\cdots u_{n-1}^{\\epsilon_{n-1}}v_1^{\\nu_1}\\cdots v_{n-1}^{\\nu_{n-1}}\\}$$\nfor $(\\epsilon_1,\\ldots,\\epsilon_{n-1},\\nu_1,\\ldots,\\nu_{n-1})\\in\\mathbb Z_2^{2n-2}$ form a basis for $Q_n$, and $tw=\\pm wt$ for $w,t\\in W$, we may explicitly compute a basis for the center as those words $w\\in W$ that commute with $u_i$ and $v_i$ for all $i$. This yields two systems of linear equations over $\\mathbb Z_2$:\n\\begin{equation}\\label{eqnmod2u}\n\\begin{cases} \\epsilon_1+\\epsilon_2=0,&\\\\\n\\epsilon_{i}+\\epsilon_{i+1}+\\epsilon_{i+2}=0, & 1\\leq i\\leq n-3 \\\\\n\\epsilon_{n-2}+\\epsilon_{n-1}=0 &\\end{cases}\n\\end{equation} and\n\\begin{equation}\\label{eqnmod2v}\n \\begin{cases} \\nu_1+\\nu_2=0 &\\\\\n\\nu_{i}+\\nu_{i+1}+\\nu_{i+2}=0, & 1\\leq i\\leq n-3\\\\\n\\nu_{n-2}+\\nu_{n-1}=0.&\\end{cases}\n\\end{equation}\nNon-trivial solutions to (\\ref{eqnmod2u}) only exist if $3\\mid n$ since we must have $\\epsilon_1=\\epsilon_2=\\epsilon_{n-2}=\\epsilon_{n-1}=1$ as well as $\\epsilon_i=0$ if $3\\mid i$ and $\\epsilon_j=1$ if $3\\nmid j$ and similarly for (\\ref{eqnmod2v}). Thus $Z(Q_n)$ is $\\mathbb C$ if $3\\nmid n$ and is spanned by $1,U,V$ and $UV$ where\n$U=\\prod_{3\\nmid i} u_i$ and $V=\\prod_{3\\nmid i} v_i$ if $3\\mid n$.\nThe determinant of the image of $s_i$ under any representation is a $6$th root of unity and hence the same is true for any element $z\\in Z(Q_n)\\cap G_n$. Thus for $3\\nmid n$ the image of any $z\\in Z(Q_n)\\cap G_n$ under the left regular representation is a root of unity times the identity matrix, and thus has finite order. Similarly, if $3\\mid n$, the restriction of any $z\\in Z(Q_n)\\cap G_n$ to any of the four simple components of the left regular representation is a root of unity times the identity matrix and so has finite order. So the group $G_n$ itself is finite.\n\\end{proof}\nThis completes the proof of Theorem \\ref{mainthm}.\n\n\n\\begin{remark} The proof of Lemma \\ref{finitelemma} shows that the projective image of $G_n$ is a (non-abelian) subgroup of the full monomial group $G(2,1,4^{n-1})$ of signed $4^{n-1}\\times 4^{n-1}$ matrices. 
The main goal of this paper is to verify \\cite[Conjecture 6.6]{RSW} in this case, but with further effort one could determine the group $G_n$ more precisely. It is suggested in \\cite{LR} that $G_n$ is an extension of $PSU(n-1,\\mathbb{F}_2)$ so that $$|G_n|\\approx \\frac{1}{3}2^{(n-1)(n-2)\/2}\\prod_{i=1}^{n-1}(2^i-(-1)^i)$$ but that such a result has not appeared in print. Modulo the center, the generators $s_i$ have order $3$ so that $G_n\/Z(G_n)$ is a quotient of the factor group $\\mathcal{B}_n\/\\langle\\sigma_1^3\\rangle$ (here $\\sigma_i$ are the usual generators of $\\mathcal{B}_n$). For $n\\leq 5$, Coxeter \\cite{cox} has shown that these quotients are finite groups and determined their structure. In particular, the projective image of $\\mathcal{B}_5\/\\langle \\sigma_1^3\\rangle$ is $PSU(4,\\mathbb{F}_2)$, so $G_5$ is an extension of this simple group. A strategy for showing $G_n$ is an extension of $PSU(n-1,\\mathbb{F}_2)$ for $n>5$ would be to find an $(n-1)$-dimensional invariant subspace of $Q_n$ so that the restricted action of the braid generators is by order $3$ pseudo-reflections (projectively). A comparison of the dimensions of the simple $\\mathcal{H}_n(3,6)$-modules with those of $PSU(n-1,\\mathbb{F}_2)$ indicates that one must also restrict to those $n$ not divisible by $3$.\n\\end{remark}\n\n\n\\section{Concluding Remarks, Questions and Speculations}\\label{disc}\nThe category $\\mathcal{C}(\\mathfrak{sl}_3,6)$ does not seem to have any obvious generalizations. We discuss some of the ways in which $\\mathcal{C}(\\mathfrak{sl}_3,6)$ appears to be exceptional by posing a number of (somewhat na\\\"ive) questions which we expect to have negative answers.\n\\subsection{Link Invariants}\nFrom any modular category one obtains (quantum) link invariants via Turaev's approach \\cite{Tur}.\nThe link invariant $P_L^\\prime(q,\\eta)$ associated with $\\mathcal{C}(\\mathfrak{sl}_k,\\ell)$ is (a variant of) the HOMFLY-PT polynomial (\\cite{FYHLMO}, where a different choice of variables is used). For the choices $q=e^{2\\pi \\im\/6}$ and $\\eta=1\/2$ corresponding to $\\mathcal{C}(\\mathfrak{sl}_3,6)$ the invariant has been identified \\cite{LM}:\n$$P_L^\\prime(e^{2\\pi \\im\/6},1\/2)=\\pm\\im(\\sqrt{2})^{\\dim H_1(T_L;\\mathbb Z_2)}$$ where $T_L$ is the triple cyclic cover of the three-sphere $S^3$ branched over the link $L$. There is a similar series of invariants for any odd prime $p$: $\\pm\\im(\\sqrt{p})^{\\dim H_1(D_L;\\mathbb Z_p)}$ where $D_L$ is the double cyclic cover of $S^3$ branched over $L$ (see \\cite{LM,GJ}). It appears that this series of invariants can be obtained from modular categories $\\mathcal{C}(\\mathfrak{so}_{p},2p)$. This has been verified for $p=3,5$ (see \\cite{GJ} and \\cite{jones89}) and we have recently handled the $p=7$ case (unpublished, using results in \\cite{West}).\n\\begin{question} Are there modular categories with associated link invariant:\n$$\\pm\\im (\\sqrt{p})^{\\dim H_1(T_L;\\mathbb Z_p)}?$$\n\\end{question}\nIn \\cite{LRW} it is suggested that if the braid group images corresponding to some ribbon category are finite then the corresponding link invariant is \\emph{classical}, i.e. equivalent to a homotopy-type invariant. 
Another formulation of this idea is found in \\cite{twoparas}, in which \\emph{classical} is interpreted in terms of computational complexity.\n\n\\subsection{Fusion Categories and $II_1$ Factors}\nThe category $\\mathcal{C}(\\mathfrak{sl}_3,6)$ is an \\emph{integral} fusion category, that is, the simple objects have integral dimensions. The categories $\\mathcal{C}(\\mathfrak{sl}_k,\\ell)$ are integral for $(k,\\ell)=(3,6)$ and $(k,k+1)$ but no other examples are known (or believed to exist). $\\mathcal{C}(\\mathfrak{sl}_3,6)$ has six simple (isomorphism classes of) objects: $\\{X_i,X_i^*\\}_{i=1}^3$ of dimension $2$ (dual pairs), three simple objects $\\mathbf{1},Z,Z^*$ of dimension $1$, and one simple object $Y$ of dimension $3$ (so that $\\dim\\mathcal{C}(\\mathfrak{sl}_3,6)=6\\cdot 2^2+3\\cdot 1^2+3^2=36$). The Bratteli diagram for tensor powers of the generating object $X_1$ is given in Figure \\ref{brat}. It is shown in \\cite{ENO} that $\\mathcal{C}$ is an integral fusion category if, and only if, $\\mathcal{C}\\cong \\Rep(H)$ for some semisimple finite dimensional quasi-Hopf algebra $H$, so in particular $\\mathcal{C}(\\mathfrak{sl}_3,6)\\cong\\Rep(H)$ for some quasi-triangular quasi-Hopf algebra $H$. One wonders if strict coassociativity can be achieved:\n\\begin{question} Is there a (quasi-triangular) semisimple finite dimensional Hopf algebra $H$ with $\\mathcal{C}(\\mathfrak{sl}_3,6)\\cong\\Rep(H)$?\n\\end{question}\nOther examples of integral categories are the representation categories $\\Rep(D^\\omega G)$ of twisted doubles of finite groups studied in \\cite{ERW} (here $G$ is a finite group and $\\omega$ is a $3$-cocycle on $G$). Any fusion category $\\mathcal{C}$ with the property that its Drinfeld center $\\mathcal{Z}(\\mathcal{C})$ is equivalent as a braided fusion category to $\\Rep(D^\\omega G)$ for some $\\omega,G$ is called \\emph{group-theoretical} (see \\cite{ENO,Nat}). The main result of \\cite{ERW} implies that if $\\mathcal{C}$ is any braided group-theoretical fusion category then the braid group representations obtained from $\\mathcal{C}$ must have finite image. In \\cite{NR} we showed that $\\mathcal{C}(\\mathfrak{sl}_3,6)$ is not group-theoretical and in fact has minimal dimension ($36$) among non-group-theoretical integral modular categories.\n\\begin{question}\nIs there a family of non-group-theoretical integral modular categories that includes $\\mathcal{C}(\\mathfrak{sl}_3,6)$?\n\\end{question}\n\\begin{figure}[t0]\n$\\xymatrix{ X_1\\ar@{-}[d]\\ar@{-}[dr]\\\\ X_1^*\\ar@{-}[d]\\ar@{-}[dr] & X_2\\ar@{-}[d]\\ar@{-}[dr] \\\\\n \\mathbf{1}\\ar@{-}[d] & Y\\ar@{-}[dl]\\ar@{-}[d]\\ar@{-}[dr] & Z\\ar@{-}[d] \\\\ X_1\\ar@{-}[d]\\ar@{-}[dr] & X_2^*\\ar@{-}[dl]\\ar@{-}[dr]& X_3\\ar@{-}[dl]\\ar@{-}[d]\\\\ X_1^*\\ar@{-}[d]\\ar@{-}[dr] & X_2\\ar@{-}[d]\\ar@{-}[dr] & X_3^*\\ar@{-}[dl]\\ar@{-}[dr]\\\\\n\\mathbf{1}\\ar@{-}[d] & Y\\ar@{-}[dl]\\ar@{-}[dr]\\ar@{-}[d] & Z\\ar@{-}[d] & Z^*\\ar@{-}[dll]\\\\\nX_1&X_2^*&X_3}$\n\\caption{Bratteli diagram for $\\mathcal{C}(\\mathfrak{sl}_3,6)$}\\label{brat}\n\\end{figure}\n\n\nNotice that $\\mathcal{C}(\\mathfrak{sl}_3,6)$ has a ribbon subcategory $\\mathcal{D}$ with simple objects $\\mathbf{1}, Z,Z^*$ and $Y$. The fusion rules are the same as those of $\\Rep(\\mathfrak{A}_4)$: $Y\\otimes Y\\cong\\mathbf{1}\\oplus Z\\oplus Z^*\\oplus 2Y$ (the multiplicity $2$ is forced by dimensions: $3^2=1+1+1+2\\cdot 3$). However $\\mathcal{D}$ is not symmetric, and $\\mathcal{C}(\\mathfrak{sl}_3,6)$ has smallest dimension among modular categories containing $\\mathcal{D}$ as a ribbon subcategory (what M\\\"uger would call a \\emph{minimal modular extension} \\cite{Mu}). 
One possible generalization of $\\mathcal{C}(\\mathfrak{sl}_3,6)$ would be a minimal modular extension of a non-symmetric ribbon category $\\mathcal{D}_n$ similar to $\\mathcal{D}$ above. That is, $\\mathcal{D}_n$ should be a non-symmetric ribbon category with $n$ $1$-dimensional simple objects $\\mathbf{1}=Z_0,\\ldots,Z_{n-1}$ and one simple $n$-dimensional object $Y_n$ such that $Y_n\\otimes Y_n\\cong (n-1)Y_n\\oplus\\bigoplus_{i=0}^{n-1} Z_i$ (the multiplicity $(n-1)$ being forced by dimensions: $n^2=(n-1)n+n$) and the $Z_i$ have fusion rules like $\\mathbb Z_n$. For $\\mathcal{D}_n$ to exist even at the level of fusion categories one must have $n=p^\\alpha-1$ for some prime $p$ and integer $\\alpha$ by \\cite[Corollary 7.4]{EGO}. However, V. Ostrik \\cite{os} informs us that these categories do not admit non-symmetric braidings except for $n=2,3$. So this does not produce a generalization.\n\n\nA pair of hyperfinite $II_1$ factors $A\\subset B$ with index $[B:A]=4$ can be constructed from $\\mathcal{C}(\\mathfrak{sl}_3,6)$ (see \\cite[Section 4.5]{wenzlcstar}). The corresponding principal graph is the Dynkin diagram $E_6^{(1)}$, the nodes of which we label by simple objects:\n$$\\xymatrix{&& Z^* \\ar@{-}[d] \\\\ && X_3\\ar@{-}[d]\\\\\n\\mathbf{1}\\ar@{-}[r]&X_1\\ar@{-}[r]&Y\\ar@{-}[r]&X_2^*\\ar@{-}[r]& Z}$$\nThis principal graph can be obtained directly from the Bratteli diagram in Figure \\ref{brat} as the nodes in the $6$th and $7$th levels and the edges between them.\nHong \\cite{Ho} showed that any $II_1$ subfactor pair\n$M\\subset N$ with principal graph $E_6^{(1)}$ can be constructed from some $II_1$ factor $P$ with an outer action of $\\mathfrak{A}_4$ as $M=P\\rtimes \\mathbb Z_3\\subset P\\rtimes \\mathfrak{A}_4=N$. Subfactor pairs with principal graph $E_7^{(1)}$ and $E_8^{(1)}$ can also be constructed (see e.g. \\cite{popa}).\nWe ask:\n\\begin{question}\\label{q:e7e8} Is there a unitary non-group-theoretical integral modular category with principal graph $E_7^{(1)}$ or $E_8^{(1)}$?\n\\end{question}\nEven a braided fusion category with such a principal graph would be interesting, and would have interesting braid group images.\n\nNotice that the subcategory $\\mathcal{D}$ mentioned above plays a role here as $\\mathfrak{A}_4$ corresponds to the Dynkin diagram $E_6^{(1)}$ in the McKay correspondence. A modular category $\\mathcal{C}$ with principal graph $E_7^{(1)}$ (resp. $E_8^{(1)}$) would contain a ribbon subcategory $\\mathcal{F}_1$ (resp. $\\mathcal{F}_2$) with the same fusion rules as $\\Rep(\\mathfrak{S}_4)$ (resp. $\\Rep(\\mathfrak{A}_5)$). Using \\cite[Lemma 1.2]{EG} we find that such a category $\\mathcal{C}$ must have dimension divisible by $144$ (resp. $3600$). The ribbon subcategory $\\mathcal{F}_2$ must have symmetric braiding (D. Nikshych's proof: $\\Rep(\\mathfrak{A}_5)$ has no non-trivial fusion subcategories so if it has a non-symmetric braiding, the M\\\"uger center is trivial. But if the M\\\"uger center is trivial it is modular, which fails by \\cite[Lemma 1.2]{EG}). This suggests that for $E_8^{(1)}$ the answer to Question \\ref{q:e7e8} is ``no.'' There is a non-symmetric choice for $\\mathcal{F}_1$ (as V. Ostrik informs us \\cite{os}), with M\\\"uger center equivalent to $\\Rep(\\mathfrak{S}_3)$. By \\cite[Prop. 
5.1]{Mu} a minimal modular extension $\\mathcal{C}$ of such an $\\mathcal{F}_1$ would have dimension $144$.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nAnalyzing spike trains from hundreds of neurons to find significant temporal patterns \nis an important current research problem \\cite{Brown2004,SS2006,Pipa2008}. \nBy using experimental techniques such as Micro Electrode Arrays or imaging of\nneural currents, spike data can be recorded\nsimultaneously from many neurons \\cite{Ikegaya2004,Potter2006}. Such multi-neuronal \nspike train data can \nnow be routinely gathered {\\em in vitro} from neural cultures or {\\em in vivo} from \nbrain slices, awake behaving animals and even humans. Such data would be a mixture \nof the stochastic spiking activities of individual neurons and the correlated \nactivity of groups of interconnected neurons, possibly triggered by external inputs. \nAutomatically discovering patterns (regularities) in these spike trains can lead to better\nunderstanding of how interconnected neurons act in a coordinated manner to generate \nspecific functions. There has been much interest in techniques for analyzing the \nspike data so as to infer functional connectivity or \n the functional relationships within the \nsystem that produced the spikes \n\\cite{Abeles1988,Brillinger1992,Meister1996,Gerstein2004,Brown2004,Kass2005,Feber2007,sasaki2007,Ikegaya2004,Lee2004,SS2006,Pipa2008,FAHB2008}. \nIn addition to contributing towards our \nknowledge of brain function, \nunderstanding of functional relations embedded in spike trains leads to many\napplications, e.g., better brain-machine interfaces. Such an analysis\ncan also ultimately allow us to systematically address the \nquestion, \"is there a neural code?\".\n\nIn this paper, we consider the problem of discovering {\\em statistically \nsignificant} patterns from multi-neuronal spike train data. The patterns we \nconsider here are ordered firing sequences by a group of neurons with \nspecific time-lags or delays between successive neurons. \nSuch a pattern (when it repeats many times) may denote a chain of \ntriggering events and hence unearthing such patterns from spike data can \nhelp understand the underlying functional connectivity. For example, memory traces are \nprobably embedded in such sequential activation of neurons and signals \nof this form have been found in hippocampal neurons \\cite{Lee2002}. \nSuch patterns of ordered firing sequences with fairly constant delays between \nsuccessive neuronal firings have been observed in many experiments and there is \nmuch interest in detecting such patterns and assessing their statistical \nsignificance. (See \\cite{Abeles2001,Gerstein2004} and references therein). \n\nHere, we will call such patterns of ordered firing sequences {\\em sequential \n patterns}. \nSymbolically, we denote such a pattern as, e.g., \n$A \\stackrel{T_1}{\\rightarrow} B \\stackrel{T_2}{\\rightarrow} C$. \nThis represents the ordered firing sequence of $A$ followed by $B$ \nfollowed by $C$ with a delay of $T_1$ time units between $A$ \\& $B$ and \n$T_2$ time units between $B$ \\& $C$. (We note here that within any occurrence of such \na firing pattern, there could be spikes by other neurons). 
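\nFor instance (a toy illustration of the notation, not data from any experiment): in the spike sequence $(A,1), (D,2), (B,6), (A,9), (C,10)$, the events $(A,1), (B,6), (C,10)$ constitute one occurrence of $A \\stackrel{5}{\\rightarrow} B \\stackrel{4}{\\rightarrow} C$, with the intervening spikes $(D,2)$ and $(A,9)$ simply ignored.\n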
\n Such a pattern of firings may occur repeatedly \nin the spike train data \nif, e.g., there is an excitatory influence of total delay $T_1$ from $A$ to $B$ and an \nexcitatory influence of delay $T_2$ between $B$ and $C$. In general, the delays \nmay not be exactly constant because synaptic transmission etc. could have some random \nvariations. Hence, in our sequential patterns, we will allow the delays \nto be intervals of small length. At the least, we can take the length of the \ninterval as the time resolution in our measurements. In general, such patterns can \ninvolve more than three neurons. The {\\em size} of a pattern is the number of \nneurons in it. Thus, the above example is that of a size 3 pattern or a 3-node \npattern. \n\n One of the main computational methods for detecting \nsuch patterns that repeat often enough is due to \n Abeles and Gerstein \\cite{Abeles1988}. This \nessentially consists of sliding the spike train of one neuron with respect to \nanother and noting coincidences at specific delays. There are also some recent \nvariations of this method \\cite{Tetko2001a,Tetko2001b}. Most of the current \nmethods for detecting such patterns essentially use correlations among time-shifted \nspike trains (and some statistics computed from the correlation counts), \nand these are computationally expensive when \n detecting large-size (typically greater than 4) \npatterns \\cite{Gerstein2004}. Another approach to detecting \nsuch ordered firing sequences is considered in \\cite{Lee2004,SS2006} while analyzing \nrecordings from hippocampal neurons. Given a specific ordering on a set of neurons, they \nlook for the longest sequences in the data that respect this order. This is similar to our \nsequential patterns which are somewhat more general because we can also specify different \ndelays between consecutive elements of the pattern. \n\n In this paper we use a method based on some \ntemporal datamining techniques that we have recently proposed \\cite{PSU2008}. \nThis method can automatically detect all sequential patterns whose {\\em frequency} \nin the data is above a (user-specified) threshold, where the {\\em frequency} of a pattern \nis the maximum number of non-overlapped occurrences\\footnote{We define this notion more \nprecisely in the next section.} of the pattern in the spike data. The essence of \nthis algorithm is that instead of trying to count all occurrences of the pattern \nin the data, we count only a certain well-defined subset of occurrences, and this makes \nthe process computationally efficient. \nThe method is effective in detecting long patterns and it would detect only those \npatterns that repeat more often than a given threshold. Also, the method can \nautomatically decide on the most appropriate delays in each detected pattern by \nchoosing from a set of possible delays supplied by the user. \n (See \\cite{PSU2008} for details). \n\n\nThe main contribution of this paper is a method for assessing the statistical \nsignificance of such sequential patterns. The objective is to have a method \nso that we will detect only those patterns that repeat often enough to be \nsignificant (and thus fix the thresholds for the data mining algorithm automatically). \nWe tackle this issue in a classical hypothesis testing framework.\n\nThere have been many approaches for assessing the significance of detected firing \npatterns \\cite{Abeles2001,Gerstein2004,Lee2004,SS2006,FAHB2008}. 
\nIn the current analytical approaches, one generally employs a null hypothesis \nthat the different spike trains are generated by independent processes. \nIn most cases one assumes (possibly inhomogeneous) Bernoulli or \nPoisson processes. Then one can calculate the probability of observing the given \nnumber of repetitions of the pattern (or of any other statistic derived from such \n counts) under the null hypothesis of independent processes and hence calculate \na minimum number of repetitions needed to conclude that a pattern is significant \n in the sense of being able to reject the null hypothesis. There are also \n some empirical approaches suggested for assessing significance \n\\cite{date2001,Abeles2001,Gerstein2004,Nadasdy1999}. \nHere one creates many surrogate data streams from the \nexperimentally observed data by perturbing the individual spikes while keeping \ncertain statistics the same and then assessing significance by noting whether or not \nthe patterns are preserved in the surrogate data. There are many possibilities \n for the perturbations to be imposed to generate \nsurrogate data \\cite{Gerstein2004}. In these empirical methods also, the implicit \nnull hypothesis assumes independence. \n\n\nThe main motivation for the approach presented here is the following. If a \n sequential pattern repeats often enough to be significant then \none would like to think that there are strong influences among the \nneurons representing the pattern. However, different (detected) patterns \nmay represent different \nlevels or strengths of influences among their constituent neurons. Hence it would \nbe nice to have a method of significance analysis that can rank-order different \n(significant) patterns in terms of some `strength of influence' among the neurons of \nthe pattern. For this, we propose that the strength of influence of $A$ on $B$ \n is well represented by \nthe conditional probability that $B$ will fire after some prescribed delay given that \n$A$ has fired. We then employ a composite null hypothesis specified through \none parameter that denotes an upper bound on all such pairwise conditional \nprobabilities. Using this we would be able to decide whether or not a given \npattern is significant at various values for this parameter in the null \nhypothesis and thus be able to rank-order different patterns. \n\nThere is an additional and important advantage of the above approach that we propose here. \nOur composite null hypothesis is such that any stochastic model for a set of \nspiking neurons would be in the null hypothesis if all the relevant \npairwise conditional probabilities are below some bound. Since this bound \n is a parameter that can be \nchosen by the user, the null hypothesis would include not only independent neuron \nmodels but also many models of interdependent neurons where the pair-wise \ninfluences among neurons are `weak'. Hence rejecting such a \nnull hypothesis is more attractive than rejecting a null hypothesis of independence \nwhen we want to conclude that a significant pattern \nindicates `strong' interactions among the neurons. In this sense, the approach \npresented here extends the currently available methods for significance analysis. \n\nWe analytically derive some bounds on the probability that our counting process would\ncome up with a given number of repetitions of the firing pattern if the data is\ngenerated by any model that is contained in our composite null hypothesis. 
\n As mentioned earlier, we use the number of non-overlapped occurrences \nof a pattern as our test statistic instead of the total number of repetitions \nand employ a temporal datamining algorithm for counting non-overlapped occurrences of sequential \n patterns \\cite{PSU2008}. \n This makes our method attractive for discovering significant \npatterns involving a large number of neurons as well. \nWe show the effectiveness of the method through extensive simulation experiments \non synthetic spike train data obtained through a model of inter-dependent \nnon-homogeneous Poisson processes. \n\nThe rest of the paper is organized as follows. In section~\\ref{sec:epi} we give a \nbrief overview of temporal datamining and explain our algorithm for detecting \nsequential patterns whose frequency is above some threshold. The full \ndetails of the algorithm are available elsewhere \\cite{PSU2008,archive-report} and we \nprovide only some details which are relevant for understanding the statistical \nsignificance analysis which is presented in section~\\ref{sec:stat}. \nIn section~\\ref{sec:simu}, \nwe present some simulation results on synthetic spike train data to show the \neffectiveness of our method. \nWe present results to show that our method \n is capable of ranking different patterns in terms of the synaptic \nefficacy of the connections. While we confine our attention in this paper only to \nsequential patterns, the statistical method we present can be generalized to \nhandle other types of patterns. We briefly indicate this and conclude the paper \nwith a discussion in section~\\ref{sec:dis}. \n\n\n\n\\section{Frequent Episodes Framework for discovery of sequential patterns}\n\\label{sec:epi}\n\nTemporal datamining is concerned with analyzing symbolic time series data to \ndiscover `interesting' patterns of temporal \ndependencies \\cite{Srivats-survey2005,Morchen2007}. Recently we have proposed \nthat some datamining techniques, based on the so-called frequent episodes framework, \nare well suited for analyzing multi-neuronal spike train data \\cite{PSU2008}. \nMany patterns of interest in spike data, such as synchronous firings by groups \nof neurons, the sequential patterns explained in the previous section, \nand synfire chains, which are a combination of synchrony and ordered firings, can be \nefficiently discovered from the data using these datamining techniques. While \nthe algorithms are seen to be effective through simulations presented in \n\\cite{PSU2008}, no statistical theory was presented there to address the question \nof whether the detected patterns are significant in any formal sense, which is the main \nissue addressed in this \npaper. In this \nsection we first briefly outline the frequent episodes framework and then \nqualitatively describe this datamining technique for discovering frequently \noccurring sequential patterns. \n\nIn the frequent episodes framework of temporal datamining, \n the data to be analyzed is a sequence of events\ndenoted by $\\langle(E_{1},t_{1}),(E_{2},t_{2}),\\ldots\\rangle$ where $E_{i}$\nrepresents an \\textit{event type} and $t_{i}$ the \\textit{time of occurrence} of\nthe $i^{th}$ event. $E_i$'s are drawn from a finite set of event types, $\\zeta$.\nThe sequence is ordered with respect to time of occurrences of the events so\nthat $t_i\\le t_{i+1}$, $\\forall i$. 
The following is an\nexample event sequence containing 11 events with 5 event types.\n\\begin{small}\n\\begin{equation}\n\\langle(A,1),(B,3),(D,5),(A,5),(C,6),(A,10),(E,15),(B,15),(B,17),(C,18),(C,19)\\rangle\n\\label{eq:data-seq}\n\\end{equation}\n\\end{small}\n\nIn multi-neuronal spike data, the event type of a spike event is the label of the \nneuron\\footnote{or the electrode\nnumber when we consider multi-electrode array recordings without the\nspike sorting step} which\ngenerated the spike and the event has the associated time of occurrence.\nThe neurons in the ensemble under observation fire action potentials at\ndifferent times, that is, generate spike events. All these spike events are\nstrung together, in time order, to give a single long data sequence as needed for\nfrequent episode discovery. It may be noted that there can be more than one event \nwith the same time because two neurons can spike at the same time. \n\nThe temporal patterns that we wish to discover in this\nframework are called episodes. In general, episodes are partially ordered \nsets of event types. Here we are only interested in {\\em serial episodes}, which \nare totally ordered. \n\nA \\textit{serial episode} is an ordered tuple of event types. For example,\n$(A\\rightarrow B\\rightarrow C)$ is a {\\em 3-node} serial episode. (We also say \nthat the size of this episode is 3). The arrows in this\nnotation indicate the order of the events. Such an episode is said to\n\\textit{occur} in an event sequence if there are corresponding events in the\nprescribed order in the data sequence.\nIn sequence (\\ref{eq:data-seq}), the events\n\\{${(A,1),(B,3),(C,6)}$\\} constitute an occurrence of the\nserial episode $(A\\rightarrow B \\rightarrow C)$ while\nthe events \\{$(B,3), (C,6), (A,10)$\\} do not.\nWe note here that an occurrence of an episode does not\nrequire the associated event types to occur consecutively;\nthere can be other intervening events between them.\n\n In the multi-neuronal data, if neuron $A$ makes\nneuron $B$ fire, then we expect to see $B$ following $A$ often. However, in\ndifferent occurrences of such a substring, there may be different numbers of\nother spikes between $A$ and $B$ because many other neurons may also be spiking\nduring this time. Thus, the episode structure allows us to unearth patterns\nin the presence of such noise in spike data.\n\nThe objective in frequent episode discovery is to detect {\\em all} frequent episodes \n(of different lengths) from the data. \nA {\\em frequent episode} is one whose {\\em frequency} exceeds a \n (user-specified) {\\em frequency threshold}.\nThe frequency of an episode can be defined in many ways. \nIt is intended\nto capture some measure of how often an episode occurs in an event\nsequence. One chooses a measure of frequency so that frequent episode discovery is\ncomputationally efficient and, at the same time, higher frequency would imply that\nan episode is occurring often.\n\nHere, we define the frequency of an episode as the maximum number of non-overlapped \noccurrences of the episode in the data stream. \nTwo occurrences of an episode are\nsaid to be \\textit{non-overlapped} if no event associated with one occurrence appears in\nbetween the events associated with the other. A set of occurrences is \nsaid to be non-overlapped if every pair of occurrences in it \nis non-overlapped. 
\nIn our example sequence (\ref{eq:data-seq}), there are two non-overlapped \noccurrences of $A \rightarrow B \rightarrow C$ given by the events \n$( (A,1),(B,3),(C,6))$ and $( (A,10),(B,15),(C,18))$. Note that there are three \ndistinct occurrences of this episode in the data sequence though we can have only a \nmaximum of two non-overlapped occurrences. We also note that \nif we take the occurrence of the episode given by \n$((A,1),(B,15),(C,18))$, then there is no other occurrence that is non-overlapped with this \noccurrence. That is why we define the frequency to be the maximum number of non-overlapped \noccurrences. \n\nThis definition of frequency results in very efficient\ncounting algorithms with some interesting theoretical\nproperties \cite{Srivats2005,Srivats-kdd07}. \nIn addition, in the context of our application, counting non-overlapped occurrences seems \nnatural because we would then be looking at chains that happen at\ndifferent times again and again. \n\nIn analyzing neuronal spike data, it is useful to consider\nmethods where, while counting the frequency, we include only those occurrences which\nsatisfy some additional temporal constraints. Here we are interested in what we \ncall an inter-event time constraint, which is specified by giving an interval of \nthe form $(T_{low}, T_{high}]$. The constraint \nrequires that the difference between the times of\nevery pair of successive events in any occurrence of a serial episode\nbe in this interval. In general, we may have\ndifferent time intervals for different pairs of events in each serial episode. \nAs is easy to see, a serial episode with inter-event time constraints corresponds \nto what we called a {\em sequential pattern} in the previous section. These are \nthe temporal patterns of interest in this paper. \n\nThe inter-event time constraint allows us to take care of delays involved in the process \nof one neuron influencing another through a synapse. Suppose \nneuron $A$ is connected to neuron $B$ which, in turn, is connected to neuron $C$, \nthrough excitatory connections with delays $T_1$ and $T_2$, respectively. Then, we should be \ncounting only those occurrences of the episode $A\rightarrow B \rightarrow C$ where the \ninter-event times satisfy the delay constraint. This would be the sequential pattern \n$A\stackrel{T_1}{\rightarrow} B \stackrel{T_2}{\rightarrow} C$. In general, the inter-event \nconstraint could be an interval. \nOccurrences of such serial episodes with inter-event constraints in spike data are shown schematically \nin fig.~\ref{fig:ser-epi-fig}.\n\n\begin{figure}\n\centering\n\includegraphics[scale=1.0]{figs\/serial-pattern.eps}\n\caption{ A schematic showing two occurrences of the sequential pattern \n$A\stackrel{T_1}{\rightarrow} B \stackrel{T_2}{\rightarrow} C$ in the spike trains from \nneurons $A, B, C, D$. A small interval (usually 1 ms) is shown around the second and third spike \nto indicate possible variation in the delay. Note that within the duration of one occurrence \nof the pattern there may be other intervening spikes (from any of the neurons). }\n\label{fig:ser-epi-fig}\n\end{figure}\n\n \n\nIn any occurrence of the episode or \nsequential pattern, we call the difference between the times of the first and last \nevents its {\em span}. The span would be the total of all the delays. \nIn the above episode, \nthe span of all occurrences would be $T_1 + T_2$, and hence we may call it the span \nof the episode. 
If the inter-event time \nconstraints are intervals, then the span of different occurrences could be different. \n\n\nThere are efficient algorithms for discovering all frequent serial episodes with specified \ninter-event constraints \cite{PSU2008}; \nthat is, for discovering all episodes whose frequency (which \nis the number of non-overlapped occurrences of the episode) is above a given threshold. \n\n{\em Conceptually, the algorithm does the following}. \nSuppose we are operating at a time resolution of $\Delta T$. (That is, \nthe times of events \nor spikes are recorded to a resolution of $\Delta T$.) \nThen we discretize the time axis into intervals of length $\Delta T$.\n Then, for each episode whose frequency we want to find, we \ndo the following. Suppose the episode is the one mentioned above. \nWe start with time instant 1. We check to see whether there is an \noccurrence of the episode starting from the current instant. For this, we need an $A$ at that \ntime instant and then we need a $B$ and a $C$ within appropriate time windows. If there \nare such $B$ and $C$, then we take the earliest such $B$ and $C$ that satisfy the \ntime constraints, increment the counter for the episode, and start looking for an occurrence \nagain starting with the next time instant (after $C$). On the other hand, \nif we cannot find such an occurrence (either because $A$ does not occur at the \ncurrent time instant or because there are no $B$ or $C$ at appropriate times following \n$A$), then we move by one time instant and start the search again. \n\nThe actual search process would be very inefficient if \nimplemented as described above. The algorithm itself does the search in a much more \nefficient manner. There are two issues that the algorithm needs to address. Since, \na priori, we do not know what patterns to look for, we need to make a reasonable \nlist of candidate patterns and then obtain their frequencies so as to output only \nthose patterns whose frequency exceeds the preset threshold. The second issue is that, \nin obtaining frequencies, the algorithm is required to count the frequencies of \nnot one but a set of candidates in one pass through the data, \nand we need to do this efficiently. \nIn generating the candidates, we need to tackle the combinatorial explosion, \nbecause the number of possible serial episodes of a given size increases exponentially \nwith the size. \nThis is tackled using an iterative procedure that is popular \nin datamining. To understand this, consider our example 3-node pattern \n$A\stackrel{T_1}{\rightarrow} B \stackrel{T_2}{\rightarrow} C$. This cannot \nbe frequent unless certain 2-node {\em subepisodes} of this, namely \n$A\stackrel{T_1}{\rightarrow} B$ and $B \stackrel{T_2}{\rightarrow} C$, are \nfrequent. (This is because any two non-overlapped occurrences of the 3-node pattern \nalso give us two non-overlapped occurrences of the two 2-node patterns mentioned \nabove.) Thus, we should allow this 3-node episode to be a candidate only if \nthe appropriate 2-node episodes are already found to be frequent. Based on this \nidea, we have the following structure for the algorithm. \nWe first get frequent 1-node episodes which are then used to make candidate 2-node \nepisodes. Then, by one more pass over the data, we find frequent 2-node episodes which are \nthen used to make candidate 3-node episodes and so on. 
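\n\nTo make the conceptual counting procedure described above concrete, the following is a minimal sketch (our own illustration in Python, with hypothetical names; the actual algorithms of \cite{PSU2008} are considerably more efficient and count a whole set of candidates in one pass) of counting the non-overlapped occurrences of a single serial episode with inter-event time constraints:\n\begin{verbatim}\ndef count_non_overlapped(events, episode, constraints):\n    # events: list of (event_type, time) pairs, sorted by time.\n    # episode: list of event types, e.g. ['A', 'B', 'C'].\n    # constraints: one (t_low, t_high] interval per successive pair.\n    count, i = 0, 0\n    while i < len(events):\n        end = find_occurrence(events, i, episode, constraints)\n        if end is None:\n            i += 1              # no occurrence starts here; move on\n        else:\n            count += 1\n            i = end + 1         # restart after the completed occurrence\n    return count\n\ndef find_occurrence(events, start, episode, constraints):\n    # Greedily match the episode using the earliest admissible events,\n    # beginning with events[start]; return the index of the last\n    # matched event, or None if no occurrence starts here.\n    if events[start][0] != episode[0]:\n        return None\n    idx, t = start, events[start][1]\n    for node in range(1, len(episode)):\n        t_low, t_high = constraints[node - 1]\n        j = idx + 1\n        while j < len(events) and events[j][1] - t <= t_high:\n            if events[j][0] == episode[node] and events[j][1] - t > t_low:\n                break\n            j += 1\n        else:\n            return None         # constraint window passed; no match\n        idx, t = j, events[j][1]\n    return idx\n\end{verbatim}\nRestarting the search only after the final event of a completed occurrence ensures, by construction, that the counted occurrences are non-overlapped. 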
\nThis level-wise candidate generation technique is quite effective in controlling the combinatorial explosion, and \nthe number of candidates comes down drastically as the size increases. This is because, \nas the size increases, many of the combinatorially possible serial episodes of that \nsize would not be frequent. This allows \nthe algorithm to find frequent episodes of large size efficiently. \nAt each stage of this process, we count frequencies \nof not one but a whole set of candidate episodes \n(of a given size) through one sequential pass over \nthe data. We do not actually traverse the time axis in time ticks once for each \npattern whose occurrences we want to count; we traverse the \ntime-ordered data stream. As we traverse the data we remember enough from the data stream to \ncorrectly take care of all the occurrence possibilities of all episodes in the candidate set \nand thus compute all the frequent episodes of a given size through one pass over the data. \nThe complete details of the algorithm are available in \cite{PSU2008}. \n\n\n\n\n\section{Statistical Significance of Discovered Episodes or Serial Firing Patterns}\n\label{sec:stat}\n\nIn this section we address the issue of the statistical significance of the \nsequential patterns discovered by our algorithm. \nThe question is when are the discovered \nepisodes significant, or, equivalently, what frequency threshold should we choose \nso that all discovered frequent episodes would be statistically significant. \n\nTo answer this question we follow a classical hypothesis testing framework. \nIntuitively, we want significant sequential patterns to represent a \nchain of strong interactions among those neurons. \nSo, we essentially have to choose a {\em null hypothesis} that asserts\nthat there is no `{\em structure}' or `{\em strong influences}' \nin the system of neurons generating the data. Also, as mentioned earlier, we want the \nnull hypothesis to contain a parameter that allows us to specify what we mean \nby saying that the influence one neuron has on another is not `strong'. \n\n\nFor this, we capture the strength of interactions among \nthe neurons in terms of conditional probabilities. Let \n$e_s(A, B, T)$ denote the conditional probability that $B$ fires in a time \ninterval $[T, T + \Delta T]$ given that $A$ fired at time zero. $\Delta T$ is \nessentially the time resolution at which we operate \n(for example, $\Delta T$ = 1 ms). Thus, \n$e_s(A, B, T)$ is, essentially, the conditional probability \nthat $B$ fires $T$ time units after $A$.\footnote{For the analysis, \nwe think of the delay, $T$, as a constant. However, in practice our method \ncan easily take care of the case where \nthe actual delay is uniformly distributed over a small interval with \n$T$ as its expected value.} If there is a strong excitatory \nsynapse of delay $T$ between $A$ and $B$, then this conditional probability would be \nhigh. On the other hand, if $A$ and $B$ are independent, then this \nconditional probability is the same as the unconditional probability of $B$ firing \nin an interval of length $\Delta T$. We denote the \n(unconditional) probability that a neuron, $A$, \nfires in any interval of length $\Delta T$ by $\rho_A$. \n(For example, if we take $\Delta T = 1$ ms and the\naverage firing rate of $B$ is 20 Hz, then $\rho_B$ would be\nabout 0.02.) \n\nThe main assumption we make is that the conditional probability $e_s(A,B,T)$ is \nnot a function of time. 
That is, the conditional probability of $B$ firing at least \nonce in an interval $[t+T, \ t+T+\Delta T]$ given that $A$ has fired at $t$ \nis the same for all $t$ within the \ntime window of the observations (data stream) that we are analyzing. We think this \nis a reasonable assumption, and some recent analysis of spike trains from \nneural cultures suggests that such an assumption is justified \cite{Feber2007}.\nNote that this assumption does not\nmean we are assuming that the firing rates of neurons are not time-varying. As a matter\nof fact, one of the main mechanisms by which this conditional probability is realized\nis by having a spike from $A$ affect the rate of firing by $B$ for a short duration\nof time. Thus, the neurons would be having time-varying firing rates even when the\nconditional probability is not time-varying. Essentially, the constancy of\n$e_s(A,B,T)$ would only mean that every time $A$ spikes, it has the same chance of\neliciting a spike from $B$ after a delay of $T$. Thus our assumption only\nmeans that there is no appreciable change in synaptic efficacies during the period\nin which the data being analyzed is gathered. \n\n\nThe intuitive idea behind our null hypothesis is that the conditional probability \n$e_s(A,B,T)$ is a good indicator of the `strength of interaction' between $A$ and \n$B$. For inferring functional connectivity from repeating sequential patterns, \nthe constancy of delays (between spikes by successive neurons) in multiple \nrepetitions is important. That is why we defined the conditional probability with \nrespect to a specific delay. Now, an assertion that the interactions \namong neurons are `weak' can \nbe formalized in terms of an upper bound on all such conditional probabilities. \nWe formulate our composite null hypothesis as follows. \n\n{\em Our composite null hypothesis includes all models of interacting neurons for which \nwe have $e_s(x, y, T) < e_0$ for all pairs of neurons $x, y$ and for a set of \nspecified delays $T$, where $e_0$ is a fixed user-chosen number in \nthe interval $(0, \ 1)$.}\n \nThus all models of inter-dependent neurons where the probability of $A$ causing \n$B$ to fire (after a delay) is less than $e_0$ \nwould be in our null hypothesis. The actual mechanism by which spikes from $A$ affect \nthe firing by $B$ is immaterial to us. Whatever may be this mechanism \nof interaction, if the resulting conditional probability is less than $e_0$, then that \nmodel of interacting neurons would be in our null hypothesis.\footnote{We note here \nthat this conditional probability is well defined whether or not the two \nneurons are directly connected. If they are directly connected then \n$T$ could be taken as a typical delay involved in the process; otherwise it can be taken as some \nintegral multiple of such delays. \nIn any case, our interest is in deciding on the significance \nof sequential patterns with some given values for $T$.} The user-specified number, $e_0$, \nformalizes what we mean by saying that the interaction among neurons is strong. If $A$ and $B$ are \nindependent, then this conditional probability is the same as $\rho_B$. As mentioned \nearlier, if $\Delta T = 1$ ms and the average firing rate of $B$ is 20 Hz, \nthen $\rho_B \approx 0.02$. So, if we choose $e_0=0.4$, it means that we agree to call the \ninfluence strong if the conditional probability is 20 times what it would be if \nthe neurons were independent. 
By having different values for $e_0$ in the null \nhypothesis, we can ask what patterns are significant at what value of $e_0$ and thus \nrank-order patterns. \n\nNow, if we are able to reject this null hypothesis, then it is \nreasonable to assert that the episode(s) \ndiscovered would indicate `strong' interactions among the appropriate neurons. \nThe `strength' of interaction is essentially chosen by us in terms of the \nbound $e_0$ on the conditional probability in our null hypothesis. \n\n\n\n\n\nWe now present a method for \nbounding the probability that the frequency (number \nof non-overlapped occurrences) of a given serial episode with inter-event constraints \nexceeds a given threshold under the null hypothesis. To do this, we first compute \nthe expectation and variance (under the null hypothesis) \nof the random variable representing the number of \nnon-overlapped occurrences of a serial episode with inter-event constraints \nby using the following stochastic model. \n\n\n Let $\{X_i, \ i=1,2,\ldots \}$ be {\em iid} random \nvariables with distribution given by\n\begin{eqnarray}\nP[X_i = T] &=& p \nonumber \\\nP[X_i = 1] &=& 1 - p \n\label{eq:xi}\n\end{eqnarray}\nwhere $T$ is a fixed constant (and $T>1$). Let $N$ be a random variable defined by \n\begin{equation}\nN = \min\: \{ n \: : \: \sum_{i=1}^n X_i \geq L\}\n\label{eq:N}\n\end{equation}\nwhere $L$ is a fixed constant. \n\nLet the random variable $Z$ denote the number of $X_i$'s out of the first $N$ \nwhich have value $T$. Define the random variable $M$ by\n\begin{eqnarray}\nM &=& Z \ \ \mbox{~if~} \ \ \sum_{i=1}^N X_i = L \nonumber \\\nM &=& Z - 1 \ \ \mbox{~if~} \ \ \sum_{i=1}^N X_i > L \n\label{eq:M}\n\end{eqnarray}\n\nAll the random variables $N, Z, M$ depend on the parameters $L, T, p$. When it \nis important to show this dependency we write $M(L, T, p)$ and so on. \n\nNow we will argue that $M(L,T,p)$ is the random variable representing \nthe number of non-overlapped occurrences of an episode, where $T$ is the span \n(or sum of all delays) of the episode and $L$ is the length of the data (in \nterms of time duration). We would fix $p$ based on the bound $e_0$ in our null hypothesis, \nas explained below. \n\n\begin{figure}\n\centering\n\includegraphics[scale=0.5,clip]{figs\/count.ps}\n\caption{ A schematic of the counting process for non-overlapped occurrences \nof the episode $A\stackrel{T}{\rightarrow}B$ superimposed on the spike trains \nfrom neurons $A$ and $B$. In the yellow region there are no occurrences of the pattern \nstarting with that time instant and the counting scheme moves forward by one time step. \nIn the blue region there is an occurrence and the counting process moves by $T$ time steps. \nThe random variables $X_i$, defined by eq.~(\ref{eq:xi}), capture the evolution of the \ncounting process.} \n\label{fig:counting}\n\end{figure}\n\n\nConsider an episode $A \stackrel{T}{\rightarrow} B$ with an inter-event \ntime constraint (or delay) \nof $T$. Now, the sequence $X_i$ essentially captures the counting \nprocess of our algorithm. A schematic of the counting process (as relevant for this discussion) \nis shown in fig.~\ref{fig:counting}. As explained earlier, the algorithm can be viewed \nas traversing a {\em discretized} time axis in steps of $\Delta T$, \nlooking for an occurrence of the episode \nstarting at each time instant. 
At each time instant (which, on the discretized \ntime axis, corresponds to an interval of length $\Delta T$), \nlet $q_1$ denote the probability of spiking by $A$ and let $q_2$ denote the \nconditional probability that $B$ generates a spike $T$ instants later \ngiven that $A$ has spiked now. In terms of our earlier notation, $q_1 = \rho_A$, \n$q_2 = e_s(A,B,T)$. \nThus, at any instant, $q_1q_2$ denotes the probability \nof occurrence of the episode starting at that instant. Now, in eq.~(\ref{eq:xi}) let \n$p=q_1q_2 (= \rho_A e_s(A, B, T))$. Then $p$ represents the probability that this \nepisode occurs starting at any given time instant.\footnote{We note here that we actually \ndo not know this $p$ because we do not know the exact value of $e_s(A,B,T)$. But \nfinally we would bound the relevant probability by using $e_0$ to bound \n$e_s(A,B,T)$.} \nLet $L$ in eq.~(\ref{eq:N}) denote the data length (in time units). Then the sequence \n$X_1, X_2, \ldots, X_N$ represents our counting process. If, at the \nfirst instant, there is an occurrence of the episode starting at that instant, then \nwe advance by $T$ units on the time axis and then look for another occurrence (since \nwe are counting non-overlapped occurrences); if there is no occurrence starting at the \nfirst instant, then we advance by one unit and look for an occurrence. \nAlso, whether or not there is \nan occurrence starting from the current instant is independent of how many occurrences are \ncompleted before the current instant (because we are counting only non-overlapped \noccurrences). \nSo, the counting process is well captured by accumulating the $X_i$'s defined above \ntill we reach the end of the data. Hence $N$ captures the number of such $X_i$ that \nwe accumulate, because $L$ is the data length in terms of time. \nSince the $X_i$ take values $1$ or $T$, the \nonly way $\sum X_i$ exceeds $L$ is if the last $X_i$ takes the value $T$, which in turn \nimplies that when we reach the end of the data we have a partial occurrence of the episode. In \nthis case the total number of completed occurrences is one less than the number \nof $X_i$ (out of $N$) that take value $T$. If the last $X_i$ has taken value 1 (and \nhence the sum is equal to $L$), then the number of completed occurrences is equal to \nthe number of $X_i$ that take value $T$. Now, it is clear that $M$ is the \nnumber of non-overlapped occurrences counted. \n\nIt is easy to see that the model captures counting of episodes of arbitrary length as well. \nFor example, if our episode is \n$A \stackrel{T_1}{\rightarrow} B \stackrel{T_2}{\rightarrow} C$, then $T$ in eq.~(\ref{eq:xi}) \nwould be $T_1+T_2$ and $p$ would be $\rho_A e_s(A,B,T_1) e_s(B,C,T_2)$.\footnote{Here we are \nassuming that the firing of $C$ after a delay of $T_2$ from $B$, conditioned on \nthe firing of $B$, is conditionally \nindependent of the earlier firing of $A$. Since our objective is \nto unearth significant triggering chains, this is a reasonable assumption. Also, this allows \nus to capture the null hypothesis with a single parameter $e_0$. We discuss this further \nin section~\ref{sec:dis}.}\nSuppose in an $n$-node episode the conditional probability \nof the $j^{\mbox{th}}$ neuron firing (after the prescribed delay), given that the previous \none has fired, is equal to $e_s^j$. Let the successive delays be \n$T_i$. Let the (unconditional) probability of the first neuron (of the episode) \nfiring at any instant (that is, in any interval of length $\Delta T$) be $\rho$. 
\nThen we will take (for the $n$-node episode) $p=\rho \Pi_{j=2}^{n}(e_s^j)$ and $T= \sum T_i$. \n\n\n\n\subsection{ Mean and Variance of $M(L, T, p)$} \n\nNow, we first derive some recurrence relations to calculate the mean and variance of \n$M(L, T, p)$ for a given episode. Fixing an episode fixes the values of $p$ and $T$. \nLet $F(L,T,p) = E\:M(L,T,p)$ where $E$ denotes expectation. \nWe can derive a recurrence relation for $F$ as follows.\n\n\begin{eqnarray}\nE\:M(L,T,p) & = & E\: \left[\; E\: [ M(L,T,p) \; | \; X_1 ] \; \right] \nonumber \\\n & = & E\: [M(L,T,p) \; | \; X_1 =1] (1-p) \; + \: E[M(L,T,p) \; | \; X_1 \neq 1] p \nonumber \\\n& = & (1-p) E\:[M(L-1, T, p)] \: + \: p ( 1 \: + \: E[M(L-T, T,p)] ) \nonumber \\\n& & \n\end{eqnarray}\n\nIn words, this says: if the first $X_i$ is $1$ (which happens with probability $1-p$), \nthen the expected number of occurrences is the same as that in data of length $L-1$; on \nthe other hand, if the first $X_i$ is not $1$ (which happens with probability $p$), then \nthe expected number of occurrences is 1 plus the expected number of occurrences in \ndata of length $L-T$. \n\nHence our recurrence relation is:\n\begin{equation}\nF(L,T,p) = (1-p) F(L-1, T, p) + p ( 1 + F(L-T, T, p))\n\label{eq:rec1}\n\end{equation}\nThe boundary conditions for this recurrence are:\n\begin{equation}\nF(x,y,p) = 0, \ \ \mbox{if} \ \ x < y \ \ \mbox{and} \ \ \forall p.\n\label{eq:bd-cn}\n\end{equation}\n\n\nLet $G(L,T,p) = E[M^2(L,T,p)]$. That is, $G(L,T,p)$ is the second moment of $M(L,T,p)$. \nUsing the \nsame idea as in the case of $F$, we can derive a recurrence relation for $G$ as follows. \n\begin{eqnarray}\nE\:[M^2(L,T,p)] & = & E\: \left[ E\: [ M^2(L,T,p) \; | \; X_1 ] \right] \nonumber \\\n & = & E\: [M^2(L,T,p) \; | \; X_1 =1] (1-p) \; \nonumber \\ \n & & \ \ \ \ \ \ \ \ \ \ + \: E[M^2(L,T,p) \; | \; X_1 \neq 1] p \nonumber \\\n& = & (1-p) E\:[M^2(L-1, T,p)] \: + \: p \, E\:[( 1 \: + \: M(L-T, T,p) )^2] \nonumber \\\n& = & (1-p) E\:[M^2(L-1, T,p)] \: + \nonumber \\ \n & & \ \ \ \ \ \ \ \ \ \ \: p \, E\:[ 1 \: + \: M^2(L-T, T,p) \: + \: 2 M(L-T, T,p)] \nonumber \\ \n& & \n\end{eqnarray}\nThus we get\n\begin{equation}\nG(L,T,p) = (1-p) G(L-1, T,p) \: + \: p(1 \: + \: G(L-T, T,p) \: + \: 2 F(L-T, T,p))\n\label{eq:var-rec1}\n\end{equation}\n\nSolving the above, we get the second moment of $M$. Let $V(L,T,p)$ be the \nvariance of $M(L,T, p)$. Then we have \n\begin{equation}\nV(L,T,p) = G(L,T,p) \: - \: (F(L,T,p))^2 \n\label{eq:var}\n\end{equation}\n\n\nOnce we have the mean and variance, we can bound the probability that the number \nof non-overlapped occurrences exceeds any given value. For example, we can use \nthe Chebyshev inequality:\n\begin{equation}\n\mbox{Pr}\left[|M(L,T,p) - F(L,T,p)| > k \sqrt{V(L,T,p)}\right] \leq \frac{1}{k^2}\n\label{eq:cheb}\n\end{equation}\nfor any positive $k$.\footnote{This may be a loose bound. We may get better bounds by \nusing central limit theorem based arguments. But for our purposes here, this \nis not very important. Also, as we shall see from the empirical results \npresented in the next section, this bound seems to be adequate.} \nSuch bounds can be used for a test of statistical significance, as explained below. \n\n\subsection{ Test for statistical significance} \n\nSuppose we are considering $n$-node episodes. Let the allowable Type I error for \nthe test be $\epsilon$. 
Then what we need is a threshold, say, $m_{th}$, for \nwhich we have \n\begin{equation}\n\mbox{Pr}_n\left[f_{epi} \geq m_{th} \right] \leq \epsilon, \n\label{eq:test1}\n\end{equation}\nwhere $f_{epi}$ is the frequency of any $n$-node episode and \n$\mbox{Pr}_n$ denotes probability under the null hypothesis models. \n\nThis would imply that if we find an $n$-node episode with frequency greater than \n$m_{th}$, then, with $(1 - \epsilon)$ confidence, we can reject our null hypothesis \nand hence assert that the discovered episode represents `strong' interactions \namong those neurons. \n\nNow the above can be used for assessing the statistical significance of any \nepisode as follows. Suppose \nwe are considering an $n$-node (serial) episode. Let the first node of this episode \nhave event type $A$. (That is, it corresponds to neuron $A$.) Let $\rho_A$ be the probability \nthat $A$ will spike in any interval of length $\Delta T$. (We will fix $\Delta T$ by \nthe time resolution being considered.) \nLet $\epsilon$ be the allowed Type I error. Let $k$ be such that \n$k^2 \geq \frac{1}{\epsilon}$. Fix $p= \rho_A (e_0)^{n-1}$. Let $T$ be the sum of all inter-event \ndelay times in the episode. Let $L$ be the total length of data (as time span in units \nof $\Delta T$). \n\nOur null hypothesis is that the conditional probability for any pair of neurons \nis less than $e_0$. Further, our random variable $M$ is such that its probability \nof taking higher values increases monotonically with $p$. \nHence, with the above $p$, the probability of $M(L,T,p)$ being greater than any value \nis an upper bound on the probability of the episode frequency being greater than \nthat value under any of the models in our null hypothesis. \n\nThus, a threshold for significance \nis $m_{th} = F(L,T,p) + k \sqrt{V(L,T,p)}$ because, from \neq.~(\ref{eq:cheb}), we have \n\begin{equation}\n\mbox{Pr}\left[M(L,T,p) \geq F(L,T,p) + k \sqrt{V(L,T,p)}\right] \leq \frac{1}{k^2} \leq \n\epsilon. \n\label{eq:cheb1}\n\end{equation}\n\nThough we do not have closed form expressions for $F$ and $V$, using our recurrence \nrelations we can calculate $F(L,T,p)$ and $V(L,T,p)$ for any given values of \n$L,T,p$ and hence can calculate the above threshold. \nThe only thing unspecified in this calculation is how we get $\rho_A$. \nWe can obtain $\rho_A$ either by estimating \nthe average rate of firing for this neuron from the data or from other prior knowledge. \n\nThus, we can use \neq.~(\ref{eq:cheb1}) either for assessing the significance of a specific \n$n$-node episode or for fixing a threshold for any $n$-node episode in our datamining \nalgorithm. In either case, this allows us to deduce the `strong connections' (if any) \nin the neural system being analyzed by using our datamining method. \n\nWe can summarize the test of significance as follows. Suppose the allowed Type I error \nis $\epsilon$. We choose an integer $k$ such that $\frac{1}{k^2} \leq \epsilon$. Suppose we want \nto assess the significance of an $n$-node sequential pattern, with the total delay being $T$, \nbased on its count. Suppose $e_0$ is the bound we \nuse in our null hypothesis. Let $L$ be the total data length in time units. Let $\rho$ be \nthe probability, obtained from the average firing rate of the first neuron in the data, of \nthat neuron spiking in any interval of length $\Delta T$. Let $p=\rho (e_0)^{n-1}$. We calculate \n$F(L,T,p)$ and $V(L,T,p)$ using (\ref{eq:rec1}), (\ref{eq:var-rec1}) and (\ref{eq:var}). 
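\n\nTo illustrate, the following is a minimal sketch (our own, not part of the algorithms of \cite{PSU2008}; the parameter values shown are hypothetical but consistent with the examples used in this paper) that iterates the recurrences (\ref{eq:rec1}) and (\ref{eq:var-rec1}) with the boundary condition (\ref{eq:bd-cn}) and returns the threshold $F(L,T,p) + k \sqrt{V(L,T,p)}$:\n\begin{verbatim}\nimport math\n\ndef significance_threshold(L, T, p, k):\n    # Iterate the recurrences for F (mean) and G (second moment) of\n    # M(L,T,p); the boundary condition gives F = G = 0 for lengths < T.\n    F = [0.0] * (L + 1)\n    G = [0.0] * (L + 1)\n    for l in range(T, L + 1):\n        F[l] = (1 - p) * F[l - 1] + p * (1 + F[l - T])\n        G[l] = (1 - p) * G[l - 1] + p * (1 + G[l - T] + 2 * F[l - T])\n    V = G[L] - F[L] ** 2             # variance\n    return F[L] + k * math.sqrt(V)   # threshold m_th\n\n# Example: 3-node episode, rho = 0.02, e0 = 0.4, total delay of 10 ms,\n# 20 s of data at 1 ms resolution, k = 4 (Type I error at most 1/16).\nn, rho, e0 = 3, 0.02, 0.4\nprint(significance_threshold(L=20000, T=10, p=rho * e0 ** (n - 1), k=4))\n\end{verbatim}\n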
The pattern is then declared significant if its count \nexceeds $F(L,T,p) + k \sqrt{V(L,T,p)}$.\footnote{This threshold for a pattern to be significant \ndepends on the size of the pattern, with smaller size patterns needing a higher count to be \nsignificant, as is to be expected. This also adds to the efficiency of our data mining algorithm for \ndiscovering sequential patterns. In the level-wise procedure described earlier, we would have higher \nthresholds for smaller size patterns, thus further mitigating the combinatorial explosion in \nthe process of frequent episode discovery.} \n\n\n{\em We would like to emphasize that the threshold frequency (count) given above for an episode \nto be significant (and hence represent strong interactions) is likely to be larger \nthan that needed. This is because it is obtained through a Chebyshev bound, which \nis often loose.}\nThus, for example, if we choose $e_0=0.4$, then some strong connections which \nmay result in an effective conditional probability value of up to 0.5 may \nnot satisfy the test of significance at a particular significance level. This, in \ngeneral, is usual in any hypothesis testing framework. In practice, we found that \nwe can very accurately discover all connections whose strengths in terms of the \nconditional probabilities are about 0.2 more than $e_0$ at the 5\% significance level. \nAt $\epsilon = 0.05$, the threshold is about 4.5 standard deviations above the \nmean. In a specific application, for example, if we feel that three standard \ndeviations above the mean is a good enough threshold, then correspondingly we will \nbe able to discover even those connections whose effective conditional probability \nis only a little above $e_0$. \n\nThis test of significance allows us to rank-order the discovered patterns. For this, \nwe run our datamining method with \ndifferent thresholds corresponding to different $e_0$ values. Then, by looking at the \nsets of episodes found at different $e_0$ values, we can essentially rank-order \nthe strengths of different connections in the underlying system. Since any manner of \nassigning numerical values to strengths of connections is bound to be somewhat arbitrary, \nthis method of rank-ordering different connections in terms of strengths can be much \nmore useful in analyzing microcircuits. \n\nWe illustrate all these through our simulation experiments in section~\ref{sec:simu}. \n\n\subsection{Extension to the model}\n\nSo far in this section we have assumed the individual delays, and hence \nthe span of an episode, $T$, to be constant. In practice, even if the delay is \nrandom and varies over a small interval around $T$, the threshold we calculated \nearlier would be adequate. In addition to this, it is possible to \nextend our model to take care of some random variations \nin such delays. \n\nSince we have assumed that $\Delta T$ is the time resolution at which we are working, \nit is reasonable to assume that the delay $T$ is actually specified in units of \n$\Delta T$. Then we can think of the delay as a random variable taking \nvalues in a set $\{ T-J, \; T-J+1, \; \cdots, T+J \}$ where $J$ is a small (relative to $T$) \ninteger. For example, suppose the delay is uniformly \ndistributed over $\{ T-1, \; T, \; T+1 \}$. 
\n Now we can change our model as follows.\n\n The $\{X_i, \ i=1,2,\ldots \}$ will now be {\em iid} random \nvariables with distribution \n\begin{eqnarray}\n\mbox{Prob}[X_i = 1] & = & 1 - p \nonumber \\\n\mbox{Prob}[X_i = T-1] = \n\mbox{Prob}[X_i = T] = \n\mbox{Prob}[X_i = T+1] &=& \frac{p}{3} \nonumber \n\label{eq:xi-new}\n\end{eqnarray}\nwhere we now assume that $T > 2$. \n\nWe will define $N$ as earlier by eq.~(\ref{eq:N}). We will now define $Z$ as the number \nof $X_i$ out of the first $N$ that {\bf do not} take value 1. In terms of this $Z$, we \nwill define $M$ as earlier by eq.~(\ref{eq:M}). \n\nNow it is easy to see that our $M(L,T,p)$ would again be the random variable \ncorresponding to the number of non-overlapped occurrences in this new scenario where \nthere are random variations in the delays. Now the recurrence relation for \n$F(L,T,p)$ would become \n\begin{eqnarray}\nF(L,T,p) &=& (1-p) F(L-1, T, p) + \nonumber \\\n & & \hspace*{-1.5cm} p \left( 1 + \frac{1}{3}(F(L-T+1, T, p)+F(L-T, T, p) + F(L-T-1, T, p))\right) \nonumber \\\n& & \n\label{eq:rec1-ex}\n\end{eqnarray}\n\nThe recurrence relation for the variance of $M(L,T,p)$ can be similarly derived. Now, we can \neasily implement the significance test as derived earlier. While the recurrence relations \nare a little more complicated, this makes no difference to our method of significance \nanalysis because these recurrence relations are anyway to be solved numerically. \n\nIt is easy to see that this method can, in principle, take care of any distribution of the \ntotal delay (viewed as a random variable taking values in a finite set) by \nmodifying the recurrence relation suitably. \n\n\n\section{Simulation Experiments}\n\label{sec:simu}\n\n\nIn this section we describe some simulation experiments to show the effectiveness of \nour method of statistical significance analysis. We show that our stochastic model \nproperly captures our counting process and that the frequency threshold we calculate \nis effective for separating connections that are `strong' (in the sense of \nconditional probabilities). We also show \nthat our frequency can properly rank-order the strengths of connections \nin terms of conditional probabilities. As a matter of fact, our results provide good \njustification for saying that conditional probabilities provide a very good scale \nfor denoting connection strengths. For all our experiments we use synthetically \ngenerated spike trains, because then we know the ground truth about \nconnection strengths and hence can test the validity of our statistical theory. \nFor the simulations we use a data generation scheme \nwhere we model the spiking of each neuron as an inhomogeneous Poisson process on \nwhich is imposed an additional constraint of a refractory period. (Thus the actual \nspike trains are not truly Poisson even if we keep the rate fixed.) \nThe inhomogeneity in the Poisson process is due to\nthe instantaneous firing rates being modified based on the total \ninput spikes received by a neuron through its synapses. \n\nWe have shown elsewhere \cite{PSU2008,archive-report} that our datamining algorithms \nare very efficient in discovering interesting patterns of firings from spike \ntrains and that we can discover patterns involving more than ten neurons as well. Since in this \npaper the focus is on statistical significance of the discovered patterns, we do not \npresent any results on the computational efficiency of the method. 
\n\n\\subsection{Spike data generation}\n\nWe use a simulator for generating the spike data from a network of interconnected \nneurons. Let $N$ denote the number of neurons in the network. The spiking of each \nneuron is modelled as an inhomogeneous Poisson process whose rate of firing is \nupdated at time intervals of $\\Delta T$. (We normally take $\\Delta T$ to be 1ms). \nThe neurons are interconnected by synapses and each synapse is characterized by \na delay (which is in integral multiples of $\\Delta T$) and a weight which is a \nreal number. All neurons also have a refractory period. The rate of the Poisson \nprocess is varied with time as follows.\n\\begin{equation}\n\\lambda_j(k) = \\frac{K_j}{1 + \\exp{(-I_j(k) + d_j)}}\n\\label{eq:lambda-update}\n\\end{equation}\nwhere $\\lambda_j(k)$ is the firing rate of $j^{th}$ neuron at time $k \\Delta T$, \n and $K_j, d_j$ are two parameters. \n $I_j(k)$ is the total input into $j^{th}$ neuron at time $k \\Delta T$ and it is \ngiven by\n\\begin{equation} \nI_j(k) = \\sum_i O_i(k) w_{ij}\n\\label{eq:input}\n\\end{equation}\nwhere $O_i(k)$ is the output of $i^{th}$ neuron (as seen by\nthe $j^{th}$ neuron) at time $k \\Delta T$\n and $w_{ij}$ is the weight of synapse from $i^{th}$ to $j^{th}$ neuron.\n$O_i(k)$ is taken to be the number of spikes by the $i^{th}$ neuron in the time\ninterval $(\\;(k-h_{ij}-1) \\Delta T, \\ (k-h_{ij}) \\Delta T]$ where $h_{ij}$ represents the\n delay (in units of $\\Delta T$) for the synapse from $i$ to $j$.\nThe parameter $K_j$ is chosen based on the dynamic range of firing rates that we \nneed to span. The parameter $d_j$ determines the `background' spiking\nrate, say, $\\lambda_{0j}$. This is the firing rate of the $j^{th}$ neuron \nunder zero input. After choosing a suitable value for $K_j$, \nwe fix the value of $d_j$ based on this \nbackground firing rate specified for each neuron. \n\nWe first build a network that has many random interconnections with low weight values \nand a few strong interconnections with large weight values. We then generate spike \ndata from the network and show how our method can detect all strong connections. \nTo build the network we specify the background firing rate (which we normally keep \nsame for all neurons) which then fixes the value of $d_j$ in (\\ref{eq:lambda-update}). \nWe specify all weights in terms of conditional probabilities. Given a conditional \nprobability we first calculate the needed instantaneous firing rate so that probability \nof at least one spike in the $\\Delta T$ interval is equal to the specified \nconditional probability. Then, using (\\ref{eq:lambda-update}) and \n(\\ref{eq:input}), \nwe calculate the value of $w_{ij}$ needed so that the receiving neuron ($j$) \nreaches this instantaneous rate given that the sending neuron ($i$) spikes once \nin the appropriate interval and assuming that input into the receiving neurons from \nall other neurons is zero. \n\nWe note here that the background firing rate as well as the effective conditional \nprobabilities in our system would have some small random variations. As said above, \nwe fix $d_j$ so that on zero input the neuron would have the background firing rate. \nHowever, all neurons would have synapses with randomly selected other neurons and \nthe weights of these synapses are also random. Hence, even in the absence of any \nstrong connections, the firing rates of different neurons keep fluctuating around the \nbackground rate that is specified. 
Note that, since we choose random weights from a zero-mean \ndistribution, in an expected sense we can assume the input into a neuron to be \nzero, and hence the average rate of spiking would be the background rate specified. \nWe also note that the way we calculate the effective weight for a given conditional \nprobability is approximate; we chose it for simplicity. If we specify \na conditional probability for the connection from $A$ to $B$, then the method stated \nabove fixes the weight of the connection so that the probability of \n$B$ firing at least once in an appropriate interval given that $A$ has fired is equal \nto this conditional probability {\em when all other input into $B$ is zero}. But since \n$B$ would be getting small random input from other neurons as well, the effective \nconditional probability would also be fluctuating around the nominal value specified. \nFurther, even if the random weights have zero mean, the fluctuations in the \nconditional probability may not have zero mean due to the nonlinear sigmoidal \nrelationship in (\ref{eq:lambda-update}). The nominal conditional probability \nvalue determines where we operate on this sigmoid curve, and that determines \nthe bias in the excursions in conditional probability for equal fluctuations in either \ndirection in the random input into the neurons. We consider this as noise in the \nsystem and show that our method of significance analysis is still effective. \n\n \n\nThe simulator is run as follows. First, for any neuron we fix a fraction (e.g., 25\%) of \nall other neurons that it is connected to. The actual neurons that are connected to any \nneuron are then selected at random using a uniform distribution. We fix the delays \nand background firing rates for all neurons. \nWe then assign random weights to \nconnections by choosing uniformly from an interval. In our simulation experiments we \nspecify this range in terms of conditional probabilities; a sketch of the corresponding \nweight computation is given below. For example, suppose the \nbackground firing rate is 20 Hz. Then, with $\Delta T = 1$ ms, the probability of \nfiring in any interval of length $\Delta T$ is (approximately) 0.02. Hence a conditional \nprobability of 0.02 would correspond to a weight value of zero. Then a range of \nconditional probabilities such as $[0.01, \ 0.04]$ (increase or decrease by a \nfactor of 2 in either direction) would correspond to a weight range around zero. \nAfter fixing these random weights, we incorporate a few strong connections which \nvary in different simulation experiments. These weight values are also specified in \nterms of conditional probabilities. We then generate a spike train by simulating all \nthe inhomogeneous Poisson processes where rates are updated every $\Delta T$ time instants. \nWe also fix a refractory period (the same for all neurons): \nonce a neuron has fired, it is not allowed to fire again until the refractory \nperiod is over. \n\n\subsection{Results}\n\nFor the results reported here we used a network of 100 neurons with the nominal firing rate being \n20 Hz. Each neuron is connected to 25 randomly selected neurons with the effective conditional probability \nof the connection strength ranging over $[0.01, \ 0.04]$. With a 20 Hz firing rate and 1 ms time resolution, the \neffective conditional probability when two neurons are independent is 0.02. Thus the random connections \nhave conditional probabilities that vary by a factor of two on either side as compared to the independent case. 
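\n\nFor concreteness, the weight computation used in constructing this network (obtaining $w_{ij}$ from a specified conditional probability, as described in the previous subsection) can be sketched as follows: first invert the probability of at least one spike in a $\Delta T$ bin to get the required instantaneous rate, and then invert the sigmoid in (\ref{eq:lambda-update}) to get the required input, and hence the weight. Again, this is only our own sketch, and the parameter values shown are hypothetical:\n\begin{verbatim}\nimport math\n\ndef weight_for_conditional_probability(q, K_j, d_j, dT=0.001):\n    # Rate lam such that P(at least one spike in a dT bin) = q for a\n    # Poisson process: q = 1 - exp(-lam * dT).\n    lam = -math.log(1.0 - q) / dT\n    # Invert eq. (lambda-update), lam = K / (1 + exp(-I + d)), for the\n    # input I; with a single presynaptic spike (O_i = 1) and all other\n    # inputs zero, the required weight w_ij equals this input I.\n    return d_j - math.log(K_j / lam - 1.0)\n\n# Hypothetical parameters: K_j = 2000 and d_j = ln(99) give a\n# background rate of 2000/(1+99) = 20 Hz under zero input.\nprint(weight_for_conditional_probability(0.4, 2000.0, math.log(99.0)))\n\end{verbatim}\n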
\nWe then incorporated some strong connections among some neurons. For this we put in one 3-node episode, three \n4-node episodes, three 5-node episodes and one 6-node episode, with different strengths for the connections. \nThe connection strengths are chosen so that we have a sufficient number of 3-node and 4-node episodes (possibly \nas subepisodes of the embedded episodes) spanning \nthe range of conditional probabilities from 0.1 to 0.8. All synaptic connections have a delay of 5 ms. \nUsing our simulator described earlier, we generated \nspike trains of 20 sec time duration (during which there are typically about 50,000 spikes), \nand obtained the counts of non-overlapped occurrences of episodes \nof all sizes using our datamining algorithms. In all results presented below, all statistics are calculated \nusing 1000 repetitions of this simulation. Typically, on a data sequence of 20 sec duration, the mining \nalgorithms (run on a dual-core Pentium machine) take about a couple of minutes. \n\nAs explained earlier, in our simulator, the rate of the Poisson process (representing the spiking of a \nneuron) is updated every 1 ms based on the actual spike inputs received by that neuron. This would, in general, \nimply that many pairs of neurons (especially those with strong connections) are not spiking as independent \nprocesses. Fig.~\ref{fig:corr} shows this for a few pairs of neurons. The figure shows the cross correlograms \n(with a bin size of 1 ms, obtained using 1000 replications) for pairs of neurons that have \nweak connections and for pairs of neurons that have \nstrong connections. There is a marked peakiness in the cross correlogram for neurons with strong interconnections, \nas expected. \n\n\begin{figure}\n\centering\n\includegraphics[scale=0.75,clip]{figs\/unni-new-fig3.ps}\n\caption{Normalized cross correlograms (obtained through 1000 replications) for four different pairs of \nneurons. The top two panels show pairs of neurons with weak interconnections while the two bottom panels \nshow neuron pairs with strong interconnections. For the neuron pairs in the bottom two panels, the \ncross correlogram shows a strong peak.} \n\label{fig:corr}\n\end{figure}\n\nFig.~\ref{fig:acc-3-4-5} shows that our theoretical model for calculating the mean and variance \nof the non-overlapped count (given by $F$ and $V$ determined through eqns.~(\ref{eq:rec1}) and \n(\ref{eq:var})) is accurate. \nThe figure shows plots of the mean ($F$) and the \nmean plus three times the standard deviation ($F+3\sqrt{V}$) for different values of the connection \nstrength in terms of conditional probabilities ($e_0$), for the different episode sizes. Also shown are \nthe actual counts obtained for episodes of that size with different $e_0$ values. As is \neasily seen, the theoretically calculated means and standard deviations are very accurate. \nNotice that \nmost of the observed counts are below the $F+k\sqrt{V}$ threshold for $k=3$ even though this \ncorresponds to a Type-I error of just over 10\%. Thus our statistical test with \n$k=3$ or $k=4$ should be quite effective. 
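\n\nThis agreement can also be checked directly on the stochastic model itself. The following minimal sketch (ours; the parameter values are hypothetical and match the earlier example) simulates $M(L,T,p)$ as defined by eqs.~(\ref{eq:xi})--(\ref{eq:M}), so that the empirical mean and standard deviation can be compared against $F$ and $\sqrt{V}$ computed from the recurrences:\n\begin{verbatim}\nimport random, statistics\n\ndef simulate_M(L, T, p):\n    # Accumulate X_i (value T w.p. p, else 1) until the sum reaches L;\n    # M is the number of T-steps, minus one if the sum overshoots L.\n    s, z = 0, 0\n    while s < L:\n        if random.random() < p:\n            s, z = s + T, z + 1\n        else:\n            s += 1\n    return z if s == L else z - 1\n\n# Hypothetical values matching the earlier example (3-node episode,\n# rho = 0.02, e0 = 0.4, T = 10 ticks, 20 s of data at 1 ms resolution).\nsamples = [simulate_M(20000, 10, 0.02 * 0.4 ** 2) for _ in range(1000)]\nprint(statistics.mean(samples), statistics.stdev(samples))\n\end{verbatim}\n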
\n\n\n\\begin{figure}\n\\centering \n\\begin{tabular}{cc}\n\\includegraphics[scale=0.5,clip]{figs\/theory_accuracy_2node_8_25_08.eps} &\n\\includegraphics[scale=0.5,clip]{figs\/theory_accuracy_3node_8_20_08.eps} \\\\\n\\includegraphics[scale=0.5,clip]{figs\/theory_accuracy_4node_8_20_08.eps} &\n\\includegraphics[scale=0.5,clip]{figs\/theory_accuracy_5node_8_20_08.eps} \\\\\n\\end{tabular}\n\\caption{The analytically calculated values for the mean (i.e., $F$) \n and the mean plus 3 $\\sigma$ (i.e., $F+3\\sqrt{V}$), \nas a function of the connection strength in terms of conditional probabilities. The top two panels \nshow plots for 2-node and 3-node patterns and bottom panels show plots for 4-node and 5-node patterns. For each \nvalue of the conditional probability, the actual counts as obtained by the algorithm are also shown \n These are obtained through 1000 replications. For these experimental counts, the mean value as well as the \n$\\pm 3 \\sigma$ range (where $\\sigma$ is the data standard deviation) are also indicated. \nAs can be seen, the calculated \nvalue of $F$ well captures the mean of the non-overlapped counts. The \n$F+3\\sqrt{V}$ line captures most of the count distribution. }\n\\label{fig:acc-3-4-5}\n\\end{figure}\n\n\nAs explained earlier, using the formulation of our significance test we can infer \n a (bound on the) connection strength in terms of conditional probability \nbased on the observed count. For this, given observed count of a sequential \npattern or episode, we ask what is the value of the strength or \nconditional probability of the connection at which this count is the threshold \nas per our significance test. This is illustrated in Fig.~\\ref{fig:infer-strength}. \nFor an $n$-node episode if the inferred strength is $q$ then we can assert (with the appropriate \nconfidence) that it is highly unlikely for this episode to have this count if connection strength between \nevery pair of neurons is less than $q$ \n\nIn Fig.~\\ref{fig:good-infer-strength} we show how good is this mechanism for inferring the strength of connection. \nHere we plot the actual value of the strength of connection in terms of the conditional probability as used in the \nsimulation against the inferred value of this strength from our theory based on the actual observed value of count. \nFor each value of the conditional probability, we have 1000 replications and these various inferred values are shown as \npoint clouds. Since the theory is based on a bound, the inferred value would always be lower than the actual strength.\nHowever, the results in this figure show the effectiveness of our approach to determining significance of \nsequential patterns based on counting the non-overlapped occurrences. We emphasize here that this \ninferred value of strength is based on our significance test and \nthere is no estimation of any conditional probabilities. \n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.5]{figs\/sand-fig3-ver2.eps}\n\\caption{Illustration of inferring of a connection strength based on observed count for a \npattern. Given the curves of mean $F$, and the various levels of threshold ($F+3\\surd{V}$, $F+4\\surd{V}$, and $F+6\\surd{V}$), we can `invert' the \nobserved count to obtain a connection strength at which the observed count makes the episode just significant at a particular level. \nWe call this the inferred connection strength based on the observed count. 
}\n\\label{fig:infer-strength}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering \n\\begin{tabular}{cc}\n\\includegraphics[scale=0.5]{figs\/inferred_strength_3node_8_20_08.eps} &\n\\includegraphics[scale=0.5]{figs\/inferred_strength_4node_8_20_08.eps} \\\\\n\\end{tabular}\n\\caption{Plot of the actual value of the conditional probability used in simulation versus the \nvalue inferred from our test of significance as explained in text. (See fig.~\\ref{fig:infer-strength}. \nFor each value we do 1000 replications and the \ndifferent inferred values are shown as a point cloud. Also shown is a best fit line. The two panels show results \nfor episodes of size 3 and size 4. Our method is quite effective in inferring a connection strength \nbased on our count. }\n\\label{fig:good-infer-strength}\n\\end{figure}\n\nFinally, we present some results to illustrate the ability of our significance test to correctly rank order \ndifferent sequential patterns or episodes that are significant. For this we show the distribution of \ncounts for sequential patterns or episodes of different strength along with the thresholds as calculated \nby our significance test when the value of $e_0$ in the null hypothesis is varied. These results are shown for \n3-node, 4-node and 5-node episodes in fig.~\\ref{fig:rankorder-3-4-5}. \n From the figure we can see that, by choosing a particular $e_0$ value in the \nnull hypothesis, our test will flag only episodes corresponding to strength higher than $e_0$ as significant. Thus, \nby varying $e_0$ we can rank-order different significant patterns that are found by the mining algorithm. \nWe note here that our threshold actually overestimates the count needed because it is based on a loose bound. \nHowever, these results show that we can reliably infer the relative strengths of different sequential patterns. \n\n\\begin{figure}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[scale=0.5]{figs\/rankorder_3node_8_20_08.eps} & \n\\includegraphics[scale=0.5]{figs\/rankorder_4node_8_20_08.eps} \\\\\n\\includegraphics[scale=0.5]{figs\/rankorder_5node_8_20_08.eps} & \\\\\n\\end{tabular}\n\\caption{Plot showing the ability of our method of statistical significance test at inferring relative strengths \nof different patterns. Top two panel shows the distribution of counts (over 1000 replications) for four 3-node \nand 4-node \nepisodes with connection strengths corresponding to 0.2, 0.4, 0.6 and 0.8. The dashed lines are the thresholds \non counts under our significance test (with $k=3$) corresponding to $e_0$ values of 0.05, 0.25, 0.45 and 0.65.\nThe bottom panel shows distributions for 5-node episodes with strengths 0.1, 0.3 and 0.5 with thresholds \ncorresponding to $e_0$=0.05, 0.15 and 0.35. \n Since our test is based on Chebyshev inequality, it overestimates the needed count. However, it is easy to \nsee that we can detect significant episodes corresponding to different strengths by varying the $e_0$ in our \nnull hypothesis. \nAs can be seen, our method is able to reliably \ninfer the relative strengths of different patterns. }\n\\label{fig:rankorder-3-4-5}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:dis}\n\nIn this paper we addressed the problem of detecting statistically significant \nsequential patterns in multi-neuronal spike train data. We employed an efficient \ndatamining algorithm that detects all frequently occurring sequential patterns \nwith prescribed inter-neuron delays. 
A pattern is frequent if the number of \nnon-overlapped occurrences of the pattern is above a threshold. The strategy \nof counting only the non-overlapped occurrences rather than all occurrences makes \nthe method computationally attractive. The main contribution of the paper is a \nnew statistical significance test to determine when the count obtained by our \nalgorithm is statistically significant. Equivalently, the method gives a \nthreshold for different patterns so that the algorithm detects only the \nsignificant patterns. \n\nThe novelty in assessing the significance in our approach is in the structure of \nthe null hypothesis. The idea is to use conditional probability as a mechanism to \ncapture the strength of influence of one neuron on another. Our null hypothesis is specified in terms \nof a (user-chosen) bound on the conditional probability that $B$ will fire after \na specified delay given that $A$ has fired, for any pair of neurons $A$ and $B$. Thus this composite null \nhypothesis includes many models of inter-dependent neurons where the influences \namong neurons are `weak' in the sense that all such pairwise conditional probabilities \nare below the bound. Being able to reject such a null hypothesis makes a stronger \ncase for concluding that the detected patterns represent significant functional \nconnectivity. Equally interestingly, such a null hypothesis allows us to rank-order \nthe different patterns in terms of their strengths of influence. If we choose this \nbound $e_0$ to be the value of the conditional probability when the different neurons \nare independent, then we get the usual null hypothesis of the independent-neurons model. \nBut since we can choose $e_0$ to be much higher, we can decide which patterns are \nsignificant at different levels of $e_0$ and hence get an idea of the strength of \ninteraction they represent. Thus, the method presented here extends the current \ntechniques of significance analysis. \n\nWhile we specify our null hypothesis in terms of a bound on the conditional \nprobability, note that we are not in any way estimating such conditional \nprobabilities. Estimating all relevant conditional probabilities would be \ncomputationally intensive. Since our algorithm counts only non-overlapped \noccurrences and also uses the datamining idea of counting frequencies for only \nthe relevant candidate patterns, our counts do not give us all the pair-wise conditional \nprobabilities. However, the statistical analysis presented here allows us to \nobtain thresholds on the number of non-overlapped occurrences possible (at the \ngiven confidence level) if all the conditional probabilities are below our bound. \nThis is what gives us the test of significance. \n\nWe presented a method for bounding the probability that, under the null hypothesis, \na pattern would have more than some number of non-overlapped occurrences. Because \nwe are counting non-overlapped occurrences, we are able to capture our counting \nprocess in an interesting model specified in terms of sums of independent random variables. \nThis model allowed us to get recurrence relations for the mean and variance of the \nrandom variable representing our count under the null hypothesis, which allowed us \nto get the required threshold using the Chebyshev inequality. While this may be a loose \nbound, as shown through our simulation results, the bound we calculate is very \neffective. \n\nOur method of analysis is quite general and it can be used in \nsituations other than what we considered here. 
By choosing the value of $p$ \nin eq.~(\ref{eq:xi}) appropriately, we can realize this generality in the model. \n\n\nAs an illustration of this we will briefly describe one extension of the model. \nIn the method presented, while analyzing the significance of a pattern \n$A\stackrel{T_1}{\rightarrow} B \stackrel{T_2}{\rightarrow} C$, we are assuming that \nthe firing of $C$ after $T_2$, given that $B$ has fired, is independent of $A$ having \nfired earlier. That is why we have used $p=\rho_A (e_0)^2$ while calculating our \nthreshold. But suppose we do not want to assume this. Then we can have a null hypothesis \nthat is specified by bounds on different conditional probabilities. Suppose \n$e_2(x,y,T)$ is the conditional probability that $y$ fires after $T$ given that $x$ has \nfired, and let $e_3(x,y,z,T_1,T_2)$ be the probability that $y$ fires after $T_1$ and \n$z$ fires after another $T_2$ given that $x$ has fired. Now we specify the null hypothesis \nin terms of two parameters as: $e_2(x,y,T) < e_{02}, \ \forall x,y$ and \n$e_3(x,y,z,T_1,T_2) < e_{03}, \ \forall x,y,z$. Now for assessing the significance of \n3-node episodes we can use $p=\rho_A e_{03}$. Our method of analysis is still \napplicable without any modifications. Of course, now the user has to specify two \nbounds on different conditional probabilities and must have some reason for \ndistinguishing between the two conditional probabilities. But the main point here \nis that the model is fairly general and can accommodate many such extensions. \n\nThere are many other ways in which the idea presented here can be extended. Suppose \nwe want to assess the significance of synchronous firing patterns rather than sequential \npatterns, based on the count of non-overlapped occurrences of the synchronous \nfiring pattern. One possibility would be to use conditional probabilities of $A$ firing within an \nappropriate short time interval of $B$ in our null hypothesis and then use an \nappropriate expression for $p$ in our model. \nAnother example could be that of analyzing occurrences of neuronal firing sequences \nthat respect a pre-set order on the neurons, as discussed in \cite{SS2006}. Suppose \nwe want to assess the significance of the count of such patterns of a fixed length. \nIf we use our type of non-overlapped occurrence count as the statistic, then the \nmodel presented here can be used to assess the significance. Now the parameter $p$ would \nbe the probability of occurrence of a sequence of that length (which respects the \nglobal order on the neurons) starting from any time instant. \nFor a given null hypothesis, e.g., of independence, this \nwould be a combinatorial problem similar to the one tackled in \cite{SS2006}. Once we \ncan derive an expression for $p$ we can use our method for assessing significance. \n\nThough we did not discuss the computational issues in this paper, the data mining algorithms \nused for discovering sequential patterns are computationally efficient (see \cite{PSU2008} \nfor details). One computational issue relevant here is that of data \nsufficiency. All the results reported here are on spike data of 20 sec duration with a \nbackground spiking rate of 20 Hz. (That works out to about 400 spikes per neuron, on \naverage, in the data.) From fig.~\ref{fig:acc-3-4-5} we can see that, with this much data, \nwe can certainly distinguish between connection strengths that differ by about 0.2 on the \nconditional probability scale. 
(Notice that, in the figure, the mean-plus-three-sigma range of the count distribution at a given connection strength lies below our threshold (with $k=3$) for a connection strength that is higher by 0.2.) In fig.~\\ref{fig:rankorder-3-4-5} we showed that we can reliably rank order connection strengths with about the same resolution. Thus we can say that 20 sec of data is good enough for this level of discrimination. Obviously, if we need to distinguish between only widely different strengths, much less data would suffice. \n\n
In terms of computational issues, we feel that one of the important conclusions from this paper is that temporal data mining may be an attractive approach for tackling the problem of discovering firing patterns (or microcircuits) in multi-neuronal spike trains. In the temporal data mining literature, episodes are, in general, partially ordered sets of event types. Here we used the methods for the discovery of serial episodes, which correspond to our sequential patterns. A general episode would correspond to a graph of interconnections among neurons. However, at present, there are no efficient algorithms for discovering frequently occurring graph patterns from a data stream of events. Extending our data mining algorithm and our analysis technique to tackle such graph patterns is another interesting open problem. This would allow for the discovery of more general microcircuits from spike trains. \n\n
In summary, we feel that the general approach presented here has a lot of potential and it can be specialized to handle many of the data analysis needs in multi-neuronal spike train data. We would be exploring many of these issues in our future work.\n\n
\\begin{center}\n{\\bf Acknowledgments} \\\\\n\\end{center}\n\n
We wish to thank Mr. Debprakash Patnaik and Mr. Casey Diekman for their help in preparing this paper. The simulator described here as well as the data mining package for analyzing data streams were written by Mr. Patnaik \\cite{PSU2008}, and he has helped in running the simulator. Mr. Diekman has helped in generating all the figures. The work reported here is partially supported by a project funded by General Motors R\\&D Center, Warren through SID, Indian Institute of Science, Bangalore. \n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n
\\section{Introduction}\nNetworked control systems (NCSs) are of significant importance to society as they have become part of our daily life. Examples include power grids \\citep{singh2014stability} and water supply networks \\citep{cembrano2000optimal}. However, due to the increased use of (possibly) insecure open communication channels in NCSs, they are prone to cyber-attacks. The social and economic consequences of such cyber-attacks can be disastrous \\citep{dibaji2019systems,sandberg2022secure}. Thus preventing, detecting, and mitigating such attacks is of utmost importance, and this is currently an active research area with several contributions based on different approaches \\citep{ferrari2021safety}. \n\n
In the literature, there are different security concepts such as physical watermarking, moving target defense, and multiplicative watermarking \\citep{chong2019tutorial}. Such security concepts focus on detecting cyber-attacks. On the other hand, various privacy concepts help reduce unauthorized access to the transmitted data \\citep{nekouei2019information}, thus mitigating attacks. 
\n\nHowever, except in a few works \\citep{mukherjee2021secure}, privacy and security are considered independently. In practice, a system operator prefers privacy (of data or system properties) and, in the worst case, prefers to be secure against (i.e., able to detect) cyber-attacks. Thus, inspired by internal model control \\citep{zhang2010advanced}, we propose an architecture for NCSs that provides a unified framework for privacy and security (both of which will be defined later). \n\n
In particular, we consider a Discrete-Time (DT) Linear Time-Invariant (LTI) description of a plant $(G)$ on one side of the network. The plant runs in parallel with simulations of $G$ and an arbitrary system ${S}$; these can be seen as being parts of a privacy filter layer or a smart sensor. On the other side of the network are the detector, the reference signal and controller, and similar simulations of $G$ and $S$, which can be thought of as being parts of a Digital Twin (DiT) \\citep{barricelli2019survey}. The exact mathematical description of the system is given in the next section, while a pictorial representation is given in Fig. \\ref{fig1}. \n\n
With the architecture in Fig. \\ref{fig1}, referred to as the dynamic masking architecture, we consider an adversary deploying an attack in a two-step procedure (similar to \\citet{mukherjee2021secure}): first learn the system dynamics, and then inject an attack which is not detected but deteriorates the system performance. The key contributions of our paper under the proposed architecture are the following:\\vspace{-0.1cm}\n
\\begin{enumerate}\n \\item We propose the Mean Squared Error (MSE) between the true plant and the system learnt by the adversary as a measure of privacy. That is, we define privacy in terms of the system parameters and not in terms of the signals themselves.\n \\item We show that the operator can introduce an arbitrary amount of bias into the parameters of the model identified by the adversary. In other words, the adversary will only be able to identify the arbitrary system $S$ and not the plant $G$. \n \\item We show that the performance of the attack deteriorates: under some conditions on $S$, the attack is effectively detected. \n\\end{enumerate}\n\n
Our approach is related to other works in the literature, such as watermarking, in that we require time-synchronization between the plant side and the controller side, and use dynamical filters in the cyber domain. However, to our knowledge, the use of dynamic watermarking for privacy has not been considered before. Additionally, we do not require the invertibility of the filters used. Our work is also related to two-way coding \\citep{fang2019two}; however, that work does not provide a measure of privacy.\n\n
\\begin{figure}\n \\centering\n \\includegraphics[width=8.4cm]{rep_new.pdf}\n \\caption{Proposed dynamic masking architecture. Here $z$ represents the $Z-$transform operator. $D1$ and $D2$ represent the two possible locations of the detector. The arbitrary system $S(z)$ in the dotted box is the plant seen by the adversary.}\n \\label{fig1}\n\\end{figure}\n\n
The remainder of this paper is organized as follows: we formulate our problem in Section \\ref{sec:problem_formulation}. Using the proposed system architecture, we discuss the privacy aspect in Section \\ref{sec:Privacy} and the security aspect in Section \\ref{sec:Security}. We depict the efficacy of the proposed architecture in Section \\ref{sec:NE}. Section \\ref{sec:Conclusion} concludes the paper and provides avenues for future research. 
\n\n\\textit{Notation:} Throughout this article, $\\mathbb{R}, \\mathbb{R}^{+}, \\mathbb{Z}$ and $\\mathbb{Z}^{+}$ represent the set of real numbers, positive real numbers, integers and non-negative integers, respectively. Let $x: \\mathbb{Z} \\to \\mathbb{R}^n$ be a discrete-time signal with $x_k$ as the value of the signal $x$ at the time step $k \\in \\mathbb{Z}$. Let the time horizon be $[0,N]=\\{ k \\in \\mathbb{Z}^+|\\; 0 \\leq k\\leq N \\}$. The $\\ell_2$-norm of $x$ over the horizon $[0,N]$ is represented as $|| x ||_{\\ell_2, [0,N]}^2 \\triangleq \\sum_{k=0}^{N} x_k^Tx_k$. Let the space of square-summable signals be defined as $\\ell_2 \\triangleq \\{ x: \\mathbb{Z}^+ \\to \\mathbb{R}^n |\\; ||x||^2_{\\ell_2, [0,\\infty]} < \\infty\\}$ and the extended signal space be defined as $\\ell_{2e} \\triangleq \\{ x: \\mathbb{Z}^+ \\to \\mathbb{R}^n | \\;||x||^2_{\\ell_2, [0,N]} < \\infty, \\forall N \\in \\mathbb{Z}^+ \\}$. \n\n
\\section{Problem Background}\\label{sec:problem_formulation}\n\\input{.\/Problem_Formulation}\n\n
\\section{Privacy under the proposed architecture}\\label{sec:Privacy}\n\\input{.\/Privacy}\n\n
\\section{Security under the proposed architecture}\\label{sec:Security}\n\\input{.\/Security}\n\n
\\section{Numerical Example}\\label{sec:NE}\n\\input{.\/NE}\n\n
\\section{Conclusion}\\label{sec:Conclusion}\nIn this paper, we proposed a new architecture to enhance the privacy and security of NCSs. We considered an adversary that first learns the plant dynamics and then performs a ZDA. Under the proposed architecture, we showed that it is possible to (i) introduce bias into the system knowledge of the adversary, and (ii) efficiently detect attacks. Through numerical simulations, we illustrated the efficacy of the proposed architecture. Future work includes developing a systematic design procedure for $S$.\n\n\n\n\n
\\subsection{Privacy concept}\nWe define privacy using Definition \\ref{defn:privacy}, so that a privacy leakage is understood as the ability of the attacker to infer a property $\\psi_G$ of the dynamics of the controlled system from a set of measurements of the transmitted signals over the network. This is similar to the work in \\cite{alisic2020ensuring}, where privacy was defined as the minimum variance of the estimator. Instead, here, we propose the use of the MSE as a measure of privacy. Recall that the MSE can be decomposed into two terms: the squared bias and the variance. As a privacy measure, it is thus a generalization of the variance\/Fisher information measure. We will be manipulating the bias instead of the variance. For the setup of NCSs, this is possible via the particular dynamic masking architecture in Figure \\ref{fig1}.\n\n
We consider a case where 1) the adversary has access to a set of measurements collected via eavesdropping, and 2) the adversary knows the correct model structure (number of poles and zeros) and is using a consistent estimation method (with respect to the data generating mechanism) to learn the dynamics of the model. Notice that consistency is a very weak asymptotic property that is required from any sensible estimator.
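\n\nFor reference, the decomposition underlying this choice of privacy measure is the standard bias-variance split of the MSE: for any estimator $\\widehat{\\psi_G}$ of a system property $\\psi_G$,\n
\\[\n\\mathbb{E}\\|\\widehat{\\psi_G} - \\psi_G\\|^2 = \\|\\mathbb{E}[\\widehat{\\psi_G}] - \\psi_G\\|^2 + \\mathbb{E}\\|\\widehat{\\psi_G} - \\mathbb{E}[\\widehat{\\psi_G}]\\|^2.\n\\]\n
Hence, a lower bound on the MSE can be enforced through the bias term alone, irrespective of how small the variance becomes as the data set grows.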
\n\n\\subsection{Data generating mechanisms}\nAs explained in the previous section, the dynamic masking architecture changes the signals transmitted over the communication channel. While the input signal is transmitted without a change, a new signal $w$ is transmitted in lieu of the plant output. This distorts the adversary's perspective of the system according to the following proposition. \n\n
\\begin{prop} Consider the dynamic masking architecture in Figure \\ref{fig1}. The data generating mechanisms from the point of view of the attacker with data $\\mathcal{I}_i$ are as follows:\n
\\begin{itemize}\n\\item plant side: transfer function from $u$ to $w$\n\\[\n\\bar{G}(z) = S(z)\n\\]\n\\item controller side: transfer function from $w$ to $u$\n\\[\n\\bar{C}(z) = (I- C_\\circ(z)(G_\\circ(z) - S(z)))^{-1}C_\\circ(z)\n\\]\nwith the same reference signal $r$.\n\\end{itemize}\n\\end{prop}\n
\\begin{pf}\nThe result is established by straightforward manipulations which are omitted here.\n\\end{pf}\n\n\n\n
\\subsection{Bias analysis}\n\nThe goal of this part is to characterize the bias in the estimated model when a consistent estimator, such as the maximum-likelihood estimator or a prediction error method estimator, is used.\n\n
Associate the complex frequency function\n\\begin{equation}\nS(e^{i\\omega}) = \\sum_{k=1}^\\infty s_k e^{-ki\\omega}, \\qquad -\\pi \\leq \\omega \\leq \\pi,\n\\end{equation}\nwith $i = \\sqrt{-1}$, to the data generating mechanism on the plant side. Let\n\\[\n\\hat{G}_N(e^{i\\omega}) = G(e^{i\\omega}; \\hat{\\theta}_N)\n\\]\nbe a model of the system estimated by the adversary based on a data set $\\mathcal{I}_N$ of size $N$, by estimating a parameter vector $\\theta$. The bias of the estimated model with respect to the data-generating mechanism is defined as\n\\begin{equation}\nB_N(\\omega) \\triangleq \\mathbb{E}[\\hat{G}_N(e^{i\\omega})] - S(e^{i\\omega}),\n\\end{equation}\nand the variance is defined as\n\\begin{equation}\n P_N(\\omega) \\triangleq \\mathbb{E}\\left[|\\hat{G}_N(e^{i\\omega}) - \\mathbb{E}[\\hat{G}_N(e^{i\\omega})] |^2\\right].\n\\end{equation}\n\n\n
If the estimator used by the attacker is unbiased for the data generating mechanism, i.e., $B_N(\\omega)=0 \\;\\forall \\omega$, it holds that the MSE of the estimated model with respect to the true system is\n\\begin{equation}\n\\mathbb{E}\\left[|\\hat{G}_N(e^{i\\omega}) - G_\\circ(e^{i\\omega}) |^2\\right] = |S(e^{i\\omega}) - G_\\circ(e^{i\\omega})| ^2 + P_N(\\omega).\n\\end{equation}\nOtherwise, it holds that\n\\begin{equation}\n\\label{eq:finite_MSE}\n\\begin{aligned}\n\\mathbb{E}\\left[|\\hat{G}_N(e^{i\\omega})\\right.& - \\left.G_\\circ(e^{i\\omega}) |^2\\right] \\geq \\\\\n&\\left| B_N^2(\\omega) - |S(e^{i\\omega}) - G_\\circ(e^{i\\omega})| ^2\\right| + P_N(\\omega).\n\\end{aligned}\n\\end{equation}\nIn either case, the bias is directly controlled by $S$.
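\n\nAs a simple worked instance (with an assumed numerator perturbation $\\delta$, anticipating the example further below), suppose $S$ differs from $G_\\circ$ only through a shift of the numerator, so that $S(z) - G_\\circ(z) = -\\delta\/((z-0.2)(z-0.5))$. Then, for any unbiased estimator,\n
\\[\n\\mathbb{E}\\left[|\\hat{G}_N(e^{i\\omega}) - G_\\circ(e^{i\\omega}) |^2\\right] \\geq \\frac{\\delta^2}{|e^{i\\omega}-0.2|^2\\,|e^{i\\omega}-0.5|^2}\n\\]\n
at every frequency, no matter how large $N$ is.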
\n\nNow, suppose the attacker is using an optimal prediction error framework with a quadratic cost function\\footnote{This is chosen to simplify the exposition. Notice that in this case, the optimal prediction error method coincides with maximum likelihood when all disturbances\/noise follow Gaussian distributions.} \\citep{Ljung1999} to construct an estimator of $\\theta$. Then, it is well-known that, under certain mild conditions on the data and the model parameterization, the criterion function converges to the asymptotic criterion function\n
\\begin{equation}\\label{eq:asymptotic_cost}\n\\bar{V}(\\theta) = \\frac{1}{2\\pi} \\int_{-\\pi}^\\pi \\Phi_\\varepsilon(\\omega, \\theta) d\\omega,\n\\end{equation}\n
where $\\Phi_\\varepsilon(\\omega, \\theta)$ is the spectrum of the parameterized prediction error, and the estimator $\\hat{\\theta}_N = \\hat{\\theta}(\\mathcal{I}_N)$ converges almost surely to $\\theta^\\ast$, the minimizer of \\eqref{eq:asymptotic_cost} over a compact set $\\Theta$.\n\n
Due to the linearity of the controller, the spectrum of the input signal $u$ can be decomposed as\n\\[\n\\Phi_u(\\omega) = \\Phi_u^r(\\omega) + \\Phi_u^e(\\omega),\n\\]\nwhere $\\Phi_u^r(\\omega)$ is the part originating from the reference signal, and $\\Phi_u^e(\\omega)$ is the part originating from the noise.\\smallskip\n\n
\\begin{thm}[\\cite{Ljung1999}]\\label{thm:convergence}\n\\begin{equation}\n\\hat{\\theta}_N \\to \\arg\\min_\\theta \\bar{V}_1(\\theta) + \\bar{V}_2(\\theta) \\quad \\text{almost surely}\n\\end{equation}\n\\[\n\\bar{V}_1(\\theta) = \\int_{-\\pi}^\\pi |S(e^{i\\omega}) - \\hat{G}(e^{i\\omega}; \\theta) + \\Pi(e^{i\\omega},\\theta)|^2 \\frac{\\Phi_u(\\omega)}{|H(e^{i\\omega}, \\theta)|^2} d\\omega\n\\]\n\\[\n\\bar{V}_2(\\theta) = \\lambda_\\circ \\int_{-\\pi}^\\pi \\frac{|H_\\circ(e^{i\\omega}) -H(e^{i\\omega}, \\theta)|^2}{|H(e^{i\\omega}, \\theta)|^2} \\frac{\\Phi_u^r(\\omega)}{\\Phi_u(\\omega)} d\\omega\n\\]\n\\[\n\\Pi(e^{i\\omega},\\theta) = \\frac{\\lambda_\\circ}{\\Phi_u(e^{i\\omega})} \\frac{\\Phi_u^e(e^{i\\omega})}{\\Phi_u(e^{i\\omega})} |H_\\circ(e^{i\\omega})-H(e^{i\\omega}, \\theta)|^2\n\\]\n\\end{thm}\n\n
It is easy to see that $\\Pi(e^{i\\omega},\\theta)$ in Theorem \\ref{thm:convergence} will be identically zero if the noise model coincides with the true one. This can be achieved with a flexible noise model, in which case also $\\bar{V}_2 =0$. If $G$ and $H$ are independently parameterized, the asymptotic estimate becomes the minimizer of\n\\[\n\\bar{V}_1(\\theta) = \\int_{-\\pi}^\\pi |S(e^{i\\omega}) - \\hat{G}(e^{i\\omega}; \\theta)|^2 \\frac{\\Phi_u(\\omega)}{|H_\\circ(e^{i\\omega})|^2} d\\omega.\n\\]\n
Let us assume that the adversary either has a correct\/flexible noise structure, or parameterizes $G$ and $H$ independently. Then, we get that, asymptotically in the data size,\n\\[\n\\hat{G}(e^{i\\omega}; \\theta^\\ast) = S(e^{i\\omega}) \\quad \\forall \\omega \\in [-\\pi, \\pi].\n\\]\n
Namely, the identified model is consistent for the data-generating model $S$.\n\n\n
The main idea of this contribution is to make $S$ different from $G_\\circ$, and thus get that\n\\[\n\\hat{G}(e^{i\\omega}; \\theta^\\ast) \\neq G_\\circ(e^{i\\omega}).\n\\]\n\n
Notice that the distribution of the bias over the frequency $\\omega$ is controlled by the system operator's choice of the filter $S$. Thus, by tuning $S$, one can achieve a desired lower bound for the achievable MSE (also for the finite data case; see \\eqref{eq:finite_MSE}). Privacy is then achieved according to Definition \\ref{defn:privacy}. No matter how long a data sequence the attacker uses to estimate the dynamics, the obtained estimates will be biased, and the error will be lower bounded by an MSE which is a function of $S$.
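\n\nAs an illustration of this mechanism, the following minimal simulation sketch (with assumed, purely illustrative values; this is not the setup of Section \\ref{sec:NE}) lets an eavesdropper fit an ARX model by least squares to input-output data generated by a masking filter $S$ that shares the poles of $G$ but has a shifted zero. The estimated zero converges to the zero of $S$, not to that of $G$.\n
\\begin{verbatim}\n
import numpy as np\n
\n
rng = np.random.default_rng(0)\n
N, delta = 100000, 0.4\n
z0_G, z0_S = 1.1, 1.1 + delta   # zeros of G and of the masking filter S\n
\n
# S(z) = (z - z0_S)/(z^2 - 0.7z + 0.1), i.e. poles at 0.2 and 0.5:\n
# y[k] = 0.7 y[k-1] - 0.1 y[k-2] + u[k-1] - z0_S u[k-2] + e[k]\n
u = rng.standard_normal(N)      # intercepted input signal\n
e = 0.1 * rng.standard_normal(N)\n
y = np.zeros(N)\n
for k in range(2, N):\n
    y[k] = 0.7*y[k-1] - 0.1*y[k-2] + u[k-1] - z0_S*u[k-2] + e[k]\n
\n
# Least-squares ARX(2,2) fit: a consistent prediction-error estimator here\n
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])\n
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)\n
a1, a2, b1, b2 = theta\n
print("estimated zero:", -b2/b1, " true zero of G:", z0_G)  # ~1.5 vs 1.1\n
\\end{verbatim}\n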
\\begin{rem}\nThe bias analysis provided above is nonparametric in the sense that it is defined for the transfer functions. Its translation into a parametric one is straightforward under the assumption that $S$ and $G_\\circ$ have the same number of zeros and poles.\n\\end{rem}\n\n\n
\\begin{exmp}\nSuppose that $G(z) = \\frac{z-1.1}{(z-0.2)(z-0.5)}$, and let $S(z) = \\frac{z-(1.1+\\delta)}{(z-0.2)(z-0.5)}$ with $\\delta> 0$. Then, the non-minimum phase zero of $G$ is $\\delta$-private for any unbiased estimator of the zero.\n\\end{exmp}\n\n\n
\\begin{rem}\nObserve that even in a case where the adversary knows the true architecture in use (Fig.~\\ref{fig1}) and has successfully identified the cipher plant, it remains impossible to recover $G$ solely based on the disclosed data: the plant is dynamically masked, and is not identifiable via $\\mathcal{I}_i$ regardless of its parameterization. \n\\end{rem}\n
\\subsection{Adversary description}\nIn the communication channel between the controller and the plant, we consider an adversary injecting false data into the actuator of the plant. Next, we discuss the resources the adversary has \\citep{teixeira2015secure}.\n\n
\\subsubsection{Disclosure and disruption resources:} The adversary can eavesdrop on the communication channels and collect data. We represent the data available to the adversary at any time instant $i \\in \\mathbb{Z}^{+}$ as \n\\begin{equation}\n \\mathcal{I}_i := \\cup_{k=0}^{i}\\{w_k,u_k\\}.\n\\end{equation}\nThe adversary can also inject data into the control communication channels. This is represented by \n\\begin{equation}\n \\tilde{u}_k = u_k + a_{k},\n\\end{equation}\nwhere $a_{k}$ is the data injected by the adversary. Here $a \\in \\ell_{2e}$, since the $\\ell_{2e}$ space allows us to study a wider class of attack signals than $\\ell_{2}$.\n\n
\\begin{figure}\n \\centering\n \\includegraphics[width=8.4cm]{rep_ad.pdf}\n \\caption{Architecture believed by the adversary.}\n \\label{fig2}\n\\end{figure}\n\n
\\subsubsection{Plant knowledge:} The adversary knows the order of the plant and the presence of at least one zero in the plant. Except for this, the adversary does not know anything about the NCS. They however devise an estimator of the plant, $\\mathcal{I}_i \\mapsto \\hat{G}$, from the disclosed data, according to the believed model specifications in Fig.~\\ref{fig2}. We explain in the next section how the dynamic masking architecture distorts $\\hat{G}$. \n\n
\\subsubsection{Attack goals and constraints:}\nThe adversary aims at deteriorating the system's performance while remaining undetected. Hence, the adversary injects attack signals that maximize the energy of the states of the system $\\hat{G}$ while keeping the energy of the detection output lower than $\\epsilon_r$. The aim of maximizing the energy of the states is to consequently maximize the energy of the output $z$ (this objective is contrary to that of the plant operator). \n\n\n
\\subsection{Problem Formulation}\nMany methods can be used to construct stealthy data injection attacks \\citep{fotiadis2020constrained,anand2021stealthy}, and in this paper, we consider zero-dynamics attacks (ZDA). Thus, to help us define security and privacy for the operator, we make the following assumption.\n
\\begin{assum}\\label{ass_un_zero}\n$G$ has at least one zero. $\\hfill \\triangleleft$ \n\\end{assum}\n\n
As depicted in \\citep{teixeira2015secure}, a ZDA is constructed based on the location of the zero. 
Thus, we adopt the following definition of privacy in our paper.\n\\smallskip\n\n
\\begin{defn}\\label{defn:privacy}(Privacy of a property of $G$ with respect to an adversary)\nLet $\\psi_G$ be any property of $G$ (e.g., a zero of $G$). Then $\\psi_G$ is said to be $\\delta$-private with some $\\delta>0$, if the estimator $\\widehat{\\psi_G}= \\psi_{\\hat{G}}$ of $\\psi_G$ used by the adversary, based on the disclosed data $\\mathcal{I}_i$, is such that $\\mathbb{E}\\|{\\psi}_{\\hat{G}} - \\psi_G\\|^2\\geq \\delta$. $\\hfill \\triangleleft$\n\\end{defn}\n\n
According to the above definition, if $\\psi_G$ is $\\delta$-private, there is no way that the adversary can recover its exact true value based on $\\hat{G}(\\mathcal{I}_i)$, even when $i\\to \\infty$. Note that $\\delta$-privacy is to be established for the particular estimator (or class of possible estimators) used by the adversary. It is implicitly assumed that any used estimator possesses well-defined first- and second-order moments with respect to the underlying probability measure of the disturbances\/noises. The key to establishing privacy is then to introduce bias into the adversary's inference procedure, via the dynamic masking architecture, as explained in Section \\ref{sec:Privacy}.\n\n
As previously described, we consider a detector that raises an alarm when $\\Vert d_k \\Vert_{\\ell_2}^2 > \\epsilon_r$. Then, we adopt the following definition of security:\n
\\begin{defn}[Security of the NCS]\\label{defn:security}\nThe closed-loop NCS is said to be secure if one of the following holds in the presence of a ZDA:\n\\begin{enumerate}\n \\item $\\Vert d_k \\Vert_{\\ell_2}^2 > \\epsilon_r$ if the performance deterioration is unbounded. \n \\item The performance deterioration is bounded. $\\hfill \\triangleleft$\n\\end{enumerate}\n\\end{defn}\n
Then, we consider the following problem in this paper: \n
\\begin{prob}\nShow that the dynamic masking NCS architecture proposed in Fig. \\ref{fig1} provides privacy and security with respect to Definition \\ref{defn:privacy} and Definition \\ref{defn:security}. $\\hfill \\triangleleft$ \n\\end{prob}\n
In the next section, we first show how the system architecture proposed in Fig. \\ref{fig1} provides privacy.\n
\\subsection{Zero-dynamics attack}\nBefore describing the construction of a ZDA, we define the zero-dynamics of a system $\\Sigma$ next.\n
\\begin{defn}[ZDA \\citep{teixeira2015secure}]\nGiven a system $\\Sigma$ with the state-space matrices $(A_{\\Sigma}, B_{\\Sigma}, C_{\\Sigma}, D_{\\Sigma})$, ZDAs are a class of data injection attacks which yield an identically zero output of $\\Sigma$. The attack is of the form \n\\begin{equation}\\label{ZDA:attack}\n a_{k}=g\\beta^k,\n\\end{equation}\nwith $g$ and $\\beta$ satisfying the following equation\n\\begin{equation}\\label{ZDA}\n\\begin{bmatrix}\n\\beta I-A_{\\Sigma} & -B_{\\Sigma}\\\\\nC_{\\Sigma} & D_{\\Sigma}\n\\end{bmatrix} \n\\begin{bmatrix}\nx_0\\\\g\n\\end{bmatrix}=\n\\begin{bmatrix}\n0\\\\0\n\\end{bmatrix},\n\\end{equation}\nin which $x_0$ is the initial condition of the system $\\Sigma$. $\\hfill \\triangleleft$\n\\end{defn}
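\n\nNumerically, a triple $(\\beta, x_0, g)$ satisfying \\eqref{ZDA} can be computed as a generalized eigenvalue problem, since \\eqref{ZDA} is equivalent to $Mv = \\beta N v$ with $v = [x_0^T\\; g^T]^T$, $M = [A_{\\Sigma}\\; B_{\\Sigma};\\, -C_{\\Sigma}\\; -D_{\\Sigma}]$, and $N$ block-diagonal with blocks $I$ and $0$. The following minimal sketch illustrates this for a hypothetical single-input single-output realization (not the system of Section \\ref{sec:NE}):\n
\\begin{verbatim}\n
import numpy as np\n
from scipy.linalg import eig\n
\n
# Hypothetical SISO realization of S with transfer function\n
# (z - 1.5)/((z - 0.2)(z - 0.5)), i.e. one unstable zero at 1.5\n
A = np.array([[0.7, -0.1], [1.0, 0.0]])\n
B = np.array([[1.0], [0.0]])\n
C = np.array([[1.0, -1.5]])\n
D = np.array([[0.0]])\n
n, m = A.shape[0], B.shape[1]\n
\n
M = np.block([[A, B], [-C, -D]])\n
N = np.block([[np.eye(n), np.zeros((n, m))],\n
              [np.zeros((m, n)), np.zeros((m, m))]])\n
w, v = eig(M, N)                 # finite eigenvalues = invariant zeros\n
\n
idx = np.flatnonzero(np.isfinite(w))[0]\n
beta = w[idx].real\n
x0, g = v[:n, idx].real, v[n:, idx].real\n
\n
# sanity check: (beta, x0, g) satisfies the output-nulling condition\n
R = np.block([[beta * np.eye(n) - A, -B], [C, D]])\n
assert np.allclose(R @ np.concatenate([x0, g]), 0.0, atol=1e-9)\n
\n
a = np.array([g * beta**k for k in range(50)])  # attack a_k = g beta^k\n
\\end{verbatim}\n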
\n\nWe also say that the attack $a$ lies in the output nulling space of $\\Sigma$. Now, given that the adversary has perfect knowledge about $S$ and the architecture in Fig. \\ref{fig2}, the adversary injects an attack which is the zero-dynamics of $S$. This is a strategic attack: it does not raise an alarm, since $d_k =0\\;\\forall k\\in \\mathbb{Z}^{+}$, while the states of $S$ diverge, making the performance deterioration unbounded. For clarity, the attack vector is of the form $a_{k}=g_s\\beta_s^k$, where $\\beta_s$ and $g_s$ are the zero and the input direction of $S$ respectively, and they satisfy \n
\\begin{equation}\n\\begin{bmatrix}\n\\beta_sI-A_{S} & -B_{S}\\\\\nC_{S} & D_{S}\n\\end{bmatrix} \n\\begin{bmatrix}\nx_{0s}\\\\g_s\n\\end{bmatrix}=\n\\begin{bmatrix}\n0\\\\0\n\\end{bmatrix}.\n\\end{equation}\n\n
As we can see, the ZDA depends on the initial conditions of the plant. However, the adversary can drive the plant to the necessary initial conditions $x_0$ and then initiate a ZDA. Some works provide a more detailed analysis of the effects of non-zero initial conditions on the stealthiness of the ZDA \\citep{teixeira2012revealing}.\n\n
During the deployment of the attack, the attack might be easily detected if the reference changes. This is because the reference signal might interfere with the initial conditions necessary for the ZDA. To avoid this triviality, we make the following assumption.\n
\\begin{assum}\nDuring the attack, $r_k = 0,\\;\\forall k\\in \\mathbb{Z}^{+}$. Also, it is known to the adversary that $r_k = 0$. $\\hfill \\triangleleft$\n\\end{assum}\n
The above argument also highlights why the noise is neglected in this analysis. Next, we describe how the ZDA is detected with the help of the architecture in Fig. \\ref{fig1}. \n
\\subsection{Detectability conditions}\nWe first present the detectability results for a ZDA corresponding to an unstable zero.\n
\\begin{thm}[Sufficient detectability conditions]\\label{thm:detect}\nLet $|\\beta_s|>1$. Then it holds that $||d_k||_{\\ell_2}^2 >\\epsilon_r$ for the architecture in Fig. \\ref{fig1} if the unstable zeros of $S$ are not also zeros of $G$. \n\\end{thm}\n
\\begin{pf}\nThe ZDA generated by the adversary lies in the output nulling space of $S$. The states of $S$ grow exponentially since the attack is exponential. However, in the architecture of Fig. \\ref{fig1}, the attack passes through the plant $G$, whose output is fed to the detector. Then the attack makes the output of the plant $G$ grow exponentially: this is true since the attack does not correspond to a ZDA of $G$. This concludes the proof. $\\hfill \\blacksquare$\n\\end{pf}\n\n
The increase in the performance energy is unbounded because the ZDA corresponds to an unstable zero \\citep{teixeira2015secure}. However, we showed in Theorem \\ref{thm:detect} that the architecture in Fig. \\ref{fig1} can provide security (with respect to (i) in Definition \\ref{defn:security}), under some conditions. \n\n
Theorem \\ref{thm:detect} does not generalize to stable ZDAs because the attack corresponding to a stable ZDA decays exponentially (since then $|\\beta_s| < 1$), and so does the output of $G$. Thus, if the threshold is sufficiently large, the attack is undetected. However, the increase in performance energy is bounded \\citep{teixeira2015secure}. Thus, security is guaranteed with respect to (ii) in Definition \\ref{defn:security}.
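\n\nThe mechanism behind Theorem \\ref{thm:detect} can also be checked numerically with a minimal open-loop sketch (hypothetical realizations sharing the $A$ and $B$ matrices, i.e., with equal poles; $r_k=0$, the plant initialized at the initial state required by the ZDA, and noise neglected, in line with the assumptions above): the ZDA of $S$ nulls the output of $S$ exactly, while the output of $G$, and hence the detection output at $D1$, grows like $\\beta_s^k$.\n
\\begin{verbatim}\n
import numpy as np\n
\n
# Realizations sharing A and B (equal poles) but with different zeros:\n
# S has the unstable zero 1.5, the true plant G has the zero 1.1\n
A = np.array([[0.7, -0.1], [1.0, 0.0]])\n
B = np.array([[1.0], [0.0]])\n
C_S = np.array([[1.0, -1.5]])\n
C_G = np.array([[1.0, -1.1]])\n
\n
beta, g = 1.5, np.array([1.3])   # ZDA of S with x_0 = (1.5, 1)\n
x = np.array([1.5, 1.0])\n
# check: (beta*I - A) x_0 = B g and C_S x_0 = 0, i.e. eq. (ZDA) for S\n
assert np.allclose((beta * np.eye(2) - A) @ x, B @ g)\n
assert np.isclose((C_S @ x).item(), 0.0)\n
\n
for k in range(20):\n
    y_S = (C_S @ x).item()       # output of S: identically zero\n
    y_G = (C_G @ x).item()       # output of G: grows like 0.4 * beta**k\n
    x = A @ x + (B @ g) * beta**k\n
print(abs(y_S) < 1e-6, y_G)      # True  ~887 (= 0.4 * 1.5**19)\n
\\end{verbatim}\n
A detector at $D1$ with any finite threshold $\\epsilon_r$ thus eventually raises an alarm, in agreement with Theorem \\ref{thm:detect}.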
\n\nUntil now, we considered the detector at the plant output ($D1$). This was justified by the increased use of smart sensors. However, one could also opt for the traditional architecture where the detector is present at the controller $(D2)$, with access to $\\tilde{y}$ and ${u}$. Then the detector can be assumed to be a Kalman filter-based detector (see (3) in \\citep{teixeira2015secure}). Also in this case, the dynamic masking architecture proposed in Fig. \\ref{fig1} provides security in terms of Definition \\ref{defn:security}: we state the following result, which immediately follows from the fact that the adversary can only estimate $S$.\n\n
\\begin{lem}\nLet $S$ only have stable zeros. Then the increase in performance energy under a ZDA is bounded. $\\hfill\\square$\n\\end{lem}\n\n
Although the results in this section provide a way to enhance attack detection, they do not provide a general design guideline for $S$. This is left for future work, but we briefly comment on the design. We want to \\textit{trick} the adversary into believing that the data correspond to the plant. This can be partly achieved by setting the poles of $S$ equal to those of $G$. An additional condition is for $S$ to be internally stable. \n\n
Thus, in this section, we showed that the dynamic masking architecture proposed in Fig. \\ref{fig1} provides security in terms of Definition \\ref{defn:security} against ZDAs, by either making the attack detectable or by making the performance deterioration bounded. We next depict the results of this paper through numerical examples.
\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n
\\section{Introduction}\n\n
Gauge theories can in principle, up to anomalies, be formulated for all simple Lie groups. This property has often been used to gain insight into structures or to simplify calculations. One salient example is the large-$N$ limit in QCD. Another option is to use the exceptional group G$_2$, leading to G$_2$ QCD.\n\n
The proposal to make this replacement was made \\cite{Holland:2003jy} to understand the role of the center of the gauge group, which was long assumed to be crucial for many of the salient features of QCD, especially confinement. However, the detailed investigations, to be presented in section \\ref{sym}, showed that most of these features are also present in the G$_2$ case.\n\n
Besides these conceptual questions concerning the center, another property of G$_2$ QCD is interesting from a practical point of view. Since all its representations are real, no sign problem arises when simulating G$_2$ QCD with dynamical fermions. It is thus possible to investigate the whole phase diagram of the theory using lattice calculations \\cite{Maas:2012wr}. G$_2$ QCD is so far the theory most similar to QCD for which this is possible in the continuum limit. The resulting phase diagram \\cite{Maas:2012wr} is rather similar to the one obtained in other such theories, like QCD with gauge group SU(2) (QC$_2$D) \\cite{Hands:2006ve,Hands:2011ye,Strodthoff:2011tz,Cotter:2012tt} or QCD in the strong coupling limit \\cite{deForcrand:2009dh,Fromm:2011zz}. Thus, G$_2$ QCD offers another perspective on the QCD phase diagram. This will be detailed in sections \\ref{sqcd} and \\ref{ssmall}.\n\n
It is, of course, an interesting question whether any direct connection can be established between the G$_2$ case and the SU(3) world. Breaking the G$_2$ gauge group using a Higgs field works for the Yang-Mills case \\cite{Holland:2003jy}, as briefly outlined in section \\ref{shiggs}, but it is not yet clear whether this is also possible in the QCD case.\n\n
Thus, gauge theories with gauge group G$_2$ are very interesting from many perspectives, as will be summarized in section \\ref{sconclusion}. However, most investigations are still at a qualitative and exploratory level, and many interesting questions have not even been addressed yet.\n\n
\\section{Yang-Mills theory}\\label{sym}\n\n
\\subsection{Zero temperature}\n\n
The simplest realization of a gauge theory with the gauge group G$_2$ is Yang-Mills theory. Since the adjoint representation of G$_2$ is 14-dimensional, there are 14 gluons. 
Using the Macfarlane representation \\cite{Macfarlane:2002hr}, a G$_2$ link (or group element) $U$ in the 7-dimensional fundamental representation can be written as\n\\begin{equation}\nU=Z\\begin{pmatrix} u & 0 & 0 \\cr 0 & 1 & 0 \\cr 0 & 0 & u^* \\end{pmatrix},\\nonumber\n\\end{equation}\n\\noindent where $Z$ is a 7-dimensional representation of $S^6$ and $u$ is an element of SU(3). Thus, 8 of the gluons can be considered loosely as 'SU(3)'-like. This will become important in section \\ref{shiggs}. Due to this explicit SU(3) subgroup, lattice simulations of a G$_2$ theory are straightforward but expensive, see \\cite{Maas:2007af,Wellegehausen:2011sc,Maas:2012wr,Wellegehausen:2010ai} for the algorithms employed here. A small numerical illustration of this block structure is given below.\n\n\\begin{figure}\n\\input{potential40}\\input{StringBreakingPot}\n\\caption{\\label{fig:pot}The Wilson potential $\\tilde{V}$ divided by the scale $\\mu$ for different representations $\\mathcal{R}$ (left) and its string-breaking, compared to hybrid masses for two representations (right), both in three dimensions, from \\cite{Wellegehausen:2010ai}.}\n\\end{figure}\n\nG$_2$ is the smallest rank 2 gauge group with a trivial center. As a consequence, every fundamental charge can be screened by three adjoint charges, and thus there is no infinitely rising Wilson potential, and thus no confinement in the sense of a Wilson area law \\cite{Holland:2003jy}. However, in practice the corresponding Polyakov loop is found to be very small at zero temperature, and in fact only upper bounds are known, though it follows that it must be non-zero. In fact, at intermediate distances a linearly rising Wilson potential \\cite{Pepe:2006er,Greensite:2006sm}, including a characteristic Casimir scaling \\cite{Wellegehausen:2010ai,Liptak:2008gx}, is found. Thus, a string appears in the same way as in QCD with dynamical quarks, up to a distance where string-breaking sets in \\cite{Wellegehausen:2010ai}. Hence, G$_2$ Yang-Mills theory is (non-)confining in the same sense as QCD. These facts are illustrated in figure \\ref{fig:pot}. Of course, since the theory has no anomaly, it is still a well-defined theory, with only colorless asymptotic states \\cite{Holland:2003jy,Pepe:2006er}, like glueballs \\cite{Wellegehausen:2010ai,Lacroix:2012pt}.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\linewidth]{gp}\\includegraphics[width=0.5\\linewidth]{alpha}\n\\caption{\\label{fig:gluon}The minimal Landau-gauge gluon propagator $D$ (left panel) and running coupling $\\alpha$ (right panel) of G$_2$ Yang-Mills theory compared to SU(3) Yang-Mills theory in three dimensions as a function of momentum $p$, from \\cite{Maas:2012wr}.}\n\\end{figure}\n\nIt is thus an interesting question what the effective degrees of freedom are. On the level of the elementary particles, the gluons, no qualitative, and little quantitative, difference is found \\cite{Maas:2007af,Maas:2010qw}. This also manifests itself in a qualitatively similar running coupling, even in the far infrared, see figure \\ref{fig:gluon}\\footnote{For all results for Yang-Mills theory, the scale has been set by giving the intermediate-distance fundamental string tension a value of (440 MeV)$^2$ \\cite{Maas:2007af,Danzer:2008bk}.}. Thus, at the level of gluons, there is no distinct difference.
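As mentioned above, the SU(3) block structure of the Macfarlane representation is easy to make explicit in code. The following is a minimal sketch (Python with numpy; the $S^6$ factor $Z$ is set to the identity here, which is an illustrative simplification and not how production codes store G$_2$ links):
\begin{verbatim}
import numpy as np

def su3_block_embedding(u):
    """Embed an SU(3) matrix u into the 7-dim rep as diag(u, 1, u*).

    Only the SU(3) part of the Macfarlane decomposition; the S^6
    factor Z is omitted (set to the identity) for illustration.
    """
    U = np.zeros((7, 7), dtype=complex)
    U[:3, :3] = u
    U[3, 3] = 1.0
    U[4:, 4:] = np.conj(u)
    return U

# quick check: the embedding of a special unitary u is again unitary
u = np.linalg.qr(np.random.randn(3, 3) + 1j * np.random.randn(3, 3))[0]
u /= np.linalg.det(u) ** (1.0 / 3.0)   # enforce det(u) = 1
U = su3_block_embedding(u)
assert np.allclose(U.conj().T @ U, np.eye(7))
\end{verbatim}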
Another set of effective degrees of freedom often used in Yang-Mills theory consists of topological excitations. Similarly, for G$_2$ Yang-Mills theory vortices \\cite{Greensite:2006sm}, monopoles \\cite{DiGiacomo:2008nt}, dyons \\cite{Diakonov:2010qg}, and instantons \\cite{Ilgenfritz:2012aa} have been constructed. Using lattice simulations and cooling, it is indeed possible to verify the existence of topological lumps, which are associated with action lumps and a non-vanishing topological susceptibility of roughly $(150$ MeV$)^4$ \\cite{Ilgenfritz:2012aa}, though still with large systematic errors. Though there exist differences in detail, e.\\ g.\\ vortices are not associated with a center \\cite{Greensite:2006sm}, the salient features of these topological excitations are close to the ones in ordinary SU($N$) Yang-Mills theory. As one can expect from these observations, chiral symmetry is broken in the vacuum in the same way as in ordinary Yang-Mills theory \\cite{Danzer:2008bk}.\n\nThus in total, G$_2$ Yang-Mills theory in the vacuum is very similar to SU($N$) Yang-Mills theories.\n\n\\subsection{Finite temperature}\n\nSince the finite-temperature phase transition in SU($N$) Yang-Mills theories is associated with a center-symmetry breaking\/restoring phase transition, it was originally anticipated \\cite{Holland:2003jy} that there would be no phase transition in G$_2$ Yang-Mills theory, though the gluonic sector suggested otherwise \\cite{Maas:2005ym}. Lattice simulations then indeed found a strong first-order phase transition in G$_2$ Yang-Mills theory \\cite{Pepe:2006er,Greensite:2006sm,Cossu:2007dk} using the free energy. However, in practice this is non-trivial due to a bulk transition requiring rather fine lattices \\cite{Pepe:2006er,Cossu:2007dk}. This phase transition is also reflected in the behavior of glueballs \\cite{Lacroix:2012pt}.\n\nRemarkably, though it is not an order parameter, the Polyakov loop also reflects this phase transition. In fact, it is possible to use the Polyakov loops in various representations to describe the phase structure of G$_2$ Yang-Mills theory rather accurately \\cite{Wellegehausen:2009rq}. One of the main reasons seems to be that though there is no genuine center symmetry, a distorted three-fold structure is still preserved by G$_2$, which, when breaking the theory down to SU(3), yields the center symmetry, see section \\ref{shiggs} below.\n\nThis alone is already in remarkable agreement with ordinary Yang-Mills theory. But the similarities are even more pronounced. Since all representations are real, it would have been possible that the chiral transition, as is the case for the adjoint chiral condensate in SU($N$) \\cite{Karsch:1998qj,Bilgici:2009jy}, would not show a phase transition at all, or only at a much higher transition temperature. This is not the case, and, within lattice resolution, the chiral condensate shows a response precisely at the same temperature as the Polyakov loop and the free energy \\cite{Danzer:2008bk}. As would be naively expected from the comparison to SU($N$) Yang-Mills theory, it is then also found that the topological properties change at the phase transition \\cite{Ilgenfritz:2012aa}, especially the topological susceptibility drops.
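Given the diagnostic role the Polyakov loop plays here, a minimal measurement sketch may be useful (Python with numpy; the storage layout \texttt{links\_t[x, t]} is a hypothetical flattened layout chosen for brevity, not the data structure of the cited codes):
\begin{verbatim}
import numpy as np

def polyakov_loop(links_t, N_t):
    """Spatially averaged fundamental Polyakov loop.

    links_t[x, t] is the 7x7 temporal link at spatial site x and
    time slice t; the loop is the trace of the ordered product in t.
    """
    n_sites = links_t.shape[0]
    P = 0.0
    for x in range(n_sites):
        L = np.eye(7, dtype=complex)
        for t in range(N_t):
            L = L @ links_t[x, t]
        P += L.trace().real / 7.0   # G2 reps are real, so tr L is real
    return P / n_sites
\end{verbatim}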
\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{pd-ym}\n\\caption{\\label{fig:phase-ym}The phase diagram of G$_2$ Yang-Mills theory. The critical temperature is taken from \\cite{Cossu:2007dk}, the Polyakov loop and chiral condensate from \\cite{Danzer:2008bk}, and the topological susceptibility from \\cite{Ilgenfritz:2012aa}.}\n\\end{figure}\n\nThe resulting phase diagram is shown in figure \\ref{fig:phase-ym}. The first-order nature is visible, though it requires a detailed study of scaling properties to ascertain it \\cite{Cossu:2007dk}. Thus, from the point of view of the phase diagram, G$_2$ Yang-Mills theory behaves very similarly to the SU(3) case, even though the phase transition is not related to a symmetry. This is one of the reasons why Yang-Mills theory is well suited as a stand-in for QCD thermodynamics, as discussed in section \\ref{sqcd-pd}. The reason for this similarity is, besides the approximate three-fold structure \\cite{Wellegehausen:2009rq}, the fact that the size of the gauge group appears to be more relevant for the phase structure than the center of the group \\cite{Pepe:2006er,Holland:2003kg}.\n\n\\section{Yang-Mills-Higgs theory}\\label{shiggs}\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{phaseLines16x6_summary}\n\\caption{\\label{fig:phase-higgs}The phase diagram of G$_2$ Yang-Mills-Higgs theory, from \\cite{Wellegehausen:2011sc}, as a function of gauge coupling and Higgs hopping parameter at finite temperature.}\n\\end{figure}\n\nOne of the interesting features of G$_2$ is that it has SU(3) as a subgroup. Thus, it appears possible to hide the S$^6$ part of the gauge group using the Higgs mechanism such that just SU(3) remains. In fact, it turns out that a single fundamental Higgs field is sufficient for this purpose \\cite{Holland:2003jy,Wellegehausen:2011sc}. In such a more complicated theory it is possible to follow the phase structure at finite temperature, and map a phase diagram in the temperature-Higgs mass plane at infinite four-Higgs coupling \\cite{Wellegehausen:2011sc}, as shown in figure \\ref{fig:phase-higgs}.\n\nThe phase structure is rather intricate at intermediate values of the couplings. Given the large systematic uncertainties encountered in such theories \\cite{Bonati:2009pf}, a definite answer will remain hard to find. However, this question is highly relevant: if a continuous connection between the SU(3)-like domain and the G$_2$-like domain exists, this would have significant implications for the physics of both theories.\n\nThe situation becomes much more complicated when introducing (fermionic) matter fields into the theory \\cite{Holland:2003jy}. In this case, a hiding with just one Higgs field will inevitably lead to an SU(3) theory with the matter fields in the wrong representation, in particular to real matter fields. Since the natural question is whether a connection to ordinary QCD is possible, the hiding or breaking mechanism must complexify the matter fields to lead to the inequivalent fundamental and anti-fundamental representations of QCD. This will likely only be possible, if at all, by manipulating the theory on the level of Weyl fermions, a topic under current investigation \\cite{Maas:2012aa}.
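For orientation, the relevant branching rules of the SU(3) subgroup (standard group theory, quoted here only as a reminder, and consistent with the block structure of the Macfarlane representation in section \ref{sym}) read
\begin{equation}
7 \to 3 \oplus \bar{3} \oplus 1, \qquad 14 \to 8 \oplus 3 \oplus \bar{3}.\nonumber
\end{equation}
The singlet in the decomposition of the 7 is why a single fundamental Higgs field can leave SU(3) unbroken, and the real combination $3 \oplus \bar{3} \oplus 1$ is precisely the 'wrong', real representation content for the matter fields referred to above.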
\n\n\\section{G$_2$ QCD}\\label{sqcd}\n\n\\subsection{Vacuum structure}\n\nWhen adding $N_f$ fundamental fermions to G$_2$ Yang-Mills theory one arrives at G$_2$ QCD. The vacuum structure of this theory is as yet little explored \\cite{Holland:2003jy,Maas:2012wr}, but has a number of highly interesting features. The first concerns the spectrum. Due to the group structure, there exists a richer set of color-neutral bound states than in QCD \\cite{Holland:2003jy}, both of fermionic and bosonic type. In the boson sector there are, as in QCD, glueballs and mesons. In addition, there are also diquarks, since due to the reality of the G$_2$ representations such states are color-neutral, different from QCD, but similar to QC$_2$D. In addition, there are also tetraquarks and hexaquarks consisting of four and six quarks. Besides these bosonic hadrons there are also fermionic ones. Most notable is the hybrid, consisting of one quark and three gluons, but there are also the nucleon, made of three quarks, as well as pentaquarks and heptaquarks from five and seven quarks.\n\nThe mass hierarchy of these states will depend strongly on the masses of the quarks, even for degenerate flavors. E.\\ g., at heavy quark mass the hybrid will be the lightest particle in the fermionic sector, while the nucleon is expected to take over this role at low quark masses, but will still be heavier than the diquark or mesons. The details of this hierarchy are a dynamical problem.\n\nThese bound states are also influenced by the pattern of chiral symmetry breaking. Due to the reality of the quark representation, G$_2$ has, similarly to QC$_2$D, an enlarged chiral symmetry of U(2$N_f$) \\cite{Kogut:2000ek,Holland:2003jy,Hands:2000ei,Maas:2012wr}. This symmetry can be viewed as a flavor symmetry on the level of the Weyl fermions. Of this symmetry an axial U(1) is expected to be broken in the same way as in ordinary QCD by the axial anomaly. Taking a single flavor in the following leaves, in contrast to QCD, still an SU(2) chiral symmetry. This symmetry is spontaneously broken \\cite{Maas:2012wr}, like in the quenched case \\cite{Danzer:2008bk}, leaving only a U(1) intact. This conserved U(1) can then be associated with a baryon number. The Goldstone bosons of this breaking are then expected to be two diquarks \\cite{Maas:2012wr}, just like in QC$_2$D \\cite{Strodthoff:2011tz}. These two diquarks represent a flavor doublet on the level of Weyl fermions.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\linewidth]{pion-fit}\\includegraphics[width=0.5\\linewidth]{masses}\n\\caption{\\label{fig:mass}The connected part of the diquark\/scalar meson correlator with a mass fit (left panel) and the masses for the diquark and the pion as a function of the gauge coupling (right panel), from \\cite{Maas:2012wr}. The lattice spacing is strongly dependent on the lattice parameters.}\n\\end{figure}\n\nIn numerical simulations this pattern is rather hard to identify, as for one Dirac flavor the scalar and the diquarks only differ by disconnected contributions. Furthermore, it turns out that G$_2$ QCD in the range of accessible parameters is very sensitive to both the gauge coupling and the hopping parameter, and already shows, at rather low gauge coupling and fixed hopping parameter, a transition into an unphysical phase \\cite{Maas:2012aa}, possibly an Aoki-like phase. Nonetheless, mass determinations are possible, as is demonstrated for $N_f=1$ in figure \\ref{fig:mass}. The determination of the vacuum spectrum is thus a challenging task, even at a qualitative level, and an ongoing project \\cite{Maas:2012aa}. Especially the mass of the nucleon is relevant when one turns to the phase diagram.
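How such masses are extracted from correlators like the one in figure \ref{fig:mass} can be illustrated with a generic effective-mass recipe (a textbook method, not necessarily the exact fit used in \cite{Maas:2012wr}):
\begin{verbatim}
import numpy as np

def effective_mass(C):
    """Naive effective mass a*m_eff(t) = log[C(t)/C(t+1)].

    For (anti)periodic lattices a cosh form should be used near
    t = N_t/2; the plateau gives the ground-state mass in lattice
    units.
    """
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])

# synthetic single-exponential data with a*m = 0.35
t = np.arange(12)
C = 3.0 * np.exp(-0.35 * t)
assert np.allclose(effective_mass(C), 0.35)
\end{verbatim}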
\n\n\\subsection{Phase diagram}\\label{sqcd-pd}\n\nDue to the reality of the representations and the enlarged chiral symmetry, the whole phase diagram for the $N_f=1$ case is both accessible in lattice simulations and relevant. Even besides the fact that G$_2$ QCD is an interesting theory on an intellectual level, there are a number of features which make it also highly relevant on the level of applications in the continuum limit. First of all, as described in section \\ref{sym}, the theory is in the quenched limit very similar to SU($N$) Yang-Mills theories, in contrast to theories with adjoint matter \\cite{Karsch:1998qj,Bilgici:2009jy,Hands:2000ei}. Furthermore, the theory has nucleons, and in general fermionic baryons, and thus also nuclei. Hadronic Pauli effects at intermediate densities will thus play a role, in contrast to QC$_2$D \\cite{Hands:2006ve,Hands:2011ye,Strodthoff:2011tz,Cotter:2012tt}. No other gauge theory with this combination of features has yet been simulated on a lattice, except without a continuum limit \\cite{deForcrand:2009dh,Fromm:2011zz}.\n\nThis provides the possibility of a number of unprecedented tests of lattice approaches to finite-density QCD. It is possible to test explicitly to what extent investigations using analytical continuation in imaginary or isospin chemical potential work (see e.\\ g.\\ \\cite{deForcrand:2010he,Cea:2012ev}), and whether Taylor expansions (see e.\\ g.\\ \\cite{Karsch:2010hm,Borsanyi:2012cr}), Lee-Yang zeros (see e.\\ g.\\ \\cite{Fodor:2004nz}), or other methods (see e.\\ g.\\ \\cite{Fodor:2007vv}) are reliable tools.\n\nFurthermore, and possibly even more importantly, the G$_2$ QCD lattice phase diagram provides new benchmarks for both models \\cite{Leupold:2011zz,Buballa:2003qv,Pawlowski:2010ht} and continuum methods, in the latter case especially functional methods \\cite{Pawlowski:2010ht,Braun:2011pp,Maas:2011se}. Furthermore, if breaking G$_2$ QCD to ordinary QCD should be possible, this would be even more helpful, though, of course, at some point the sign problem will prevent a simulation of QCD.\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{pd}\n\\caption{\\label{fig:pd}The G$_2$ QCD phase diagram for one flavor of quarks. The left panel shows the (unrenormalized) Polyakov loop, the middle panel the normalized chiral condensate, and the right panel the baryon density, normalized to the saturation density of 14 quarks\/lattice site. For details and simulation parameters, see \\cite{Maas:2012wr}. Note that at $\\mu_{\\text{Quark}}\\approx 1$ GeV the system starts to become dominated by systematic effects \\cite{Maas:2012wr}.}\n\\end{figure}
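The normalization used for the density panel of figure \ref{fig:pd} is simple enough to state explicitly (the numbers are those quoted in the caption and in the small-lattice discussion below):
\begin{verbatim}
N_c, N_f = 7, 1           # dim of the fundamental G2 rep, one flavor
n_sat = 2 * N_c * N_f     # = 14 quarks per site; factor 2 from spin

def normalized_density(n_q):
    """Quark number density in units of its lattice saturation value."""
    return n_q / n_sat
\end{verbatim}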
\n\nThe first step in this program is provided by a proof-of-principle showing the accessibility of the phase diagram in lattice calculations, see figure \\ref{fig:pd}\\footnote{The scale is here chosen to get a zero-density transition temperature of about 160 MeV, and the first excited meson state is used to set the scale. This procedure \\cite{Maas:2012wr} is strongly affected by systematic errors, and will be improved \\cite{Maas:2012aa}.} \\cite{Maas:2012wr}. Though so far at a qualitative level, it already shows a structure close to the expected one, including indications \\cite{Maas:2012aa} of a silver-blaze point \\cite{Cohen:2003kd}, see also section \\ref{ssmall}. A more quantitative description will require more detailed calculations, in particular concerning systematic errors \\cite{Maas:2012aa}. Nonetheless, the theory shows the low-temperature, low-density ordinary phase, has a transition, likely a cross-over, to a high-temperature phase, and also a transition at finite density. Whether any of these are phase transitions remains to be seen, but so far the finite-density transitions are stronger. Also, first signals of additional structure at zero temperature have been observed \\cite{Maas:2012aa}, which may correspond to various phases also observed in QC$_2$D \\cite{Strodthoff:2011tz,Cotter:2012tt}. However, more detailed studies, especially of systematic effects, are necessary before definite statements can be made.\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{eom}\n\\caption{\\label{fig:eom}The baryon density (left panel) of G$_2$ QCD, compared to that of continuum and lattice \\cite{Hands:2006ve} Stefan-Boltzmann results with the same mass or for massless quarks, and to leading-order chiral perturbation theory \\cite{Hands:2006ve,Kogut:2000ek} with coefficients fitted to the G$_2$ case at intermediate densities. The middle panel shows the corresponding ratios (note the logarithmic scale), and the right-hand panel the integrated equation of state, normalized to the continuum Stefan-Boltzmann case. All results unpublished, from \\cite{Maas:2012aa}. The value of the lattice constant $a$ is approximately 0.2 fm.}\n\\end{figure}\n\nFinally, an interesting question is to what extent the low-temperature case is simple, such that, e.\\ g., quasi-particle models would be a good description. For this purpose, a comparison to a system of free quarks and to chiral perturbation theory is shown in figure \\ref{fig:eom}. While the high-density region, which is dominated by lattice artifacts \\cite{Maas:2012wr}, is rather well described by the corresponding free lattice system of quarks, this is not the case at low densities. Here, the equation of state is much more similar to lattice or continuum versions of a gas of free massless quarks, instead of massive ones, though the deviations are still very large at the smallest densities. At the same time, at least leading-order chiral perturbation theory is not able to reproduce even qualitatively the physics of G$_2$ QCD. Thus, non-trivial effects play a dominant role at densities below $a\\mu\\approx 0.5$, which translates in this case to roughly 500 MeV of quark chemical potential, and any quantitative description of this region has to deal with them.
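The quoted conversion between lattice and physical units is easy to reproduce; with $\hbar c \approx 197.327$ MeV fm and the lattice constant $a \approx 0.2$ fm from the caption of figure \ref{fig:eom}:
\begin{verbatim}
HBARC = 197.327          # MeV * fm
a = 0.2                  # fm, lattice constant quoted above
a_inv = HBARC / a        # inverse lattice spacing, ~987 MeV
mu = 0.5 * a_inv         # a*mu = 0.5  ->  mu ~ 493 MeV, roughly 500 MeV
print(round(mu), "MeV")
\end{verbatim}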
\n\n\\section{Results on a smaller lattice}\\label{ssmall}\n\nSince many of the investigations above are limited by the number of different lattice settings which can be simulated, the use of smaller lattices may help in mapping the phase diagram on a finer grid. However, due to the unphysical bulk transition it is not possible to study the full phase diagram on smaller lattices, especially at finite temperature on lattices with $N_t<5$. \nNevertheless, at zero temperature G$_2$ QCD is investigated on an $8^3 \\times 16$ lattice in\nthe parameter region $\\beta=0.90\\ldots1.10$ and $\\mu=0\\ldots2$. The monopole\ndensity is already sufficiently small, such that the system stays outside the\nbulk phase for all values of $\\beta$ and $\\mu$. \n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{1.1}{\\includeEPSTEX{Mass}}\\hskip10mm\n\\scalebox{1.1}{\\includeEPSTEX{MassRatio}}\n\\caption{\\label{fig:massSmall}Diquark and nucleon mass (left panel) and their ratio (right panel) on an $8^3 \\times 16$ lattice. From \\cite{Maas:2012aa}.}\n\\end{center}\n\\end{figure}\nIn Fig.~\\ref{fig:massSmall} the diquark and nucleon (proton) mass together with their ratio are shown as a function of $\\beta$. Assuming a nucleon mass of about $1$ GeV, the diquark mass changes from $\\sim 500$ MeV at $\\beta=0.90$ to $\\sim 200$ MeV at $\\beta=1.10$. On the small lattice the scale is set by\nthe ground state diquark mass $\\tilde{a}(\\beta) \\equiv m_\\text{diquark}(\\beta)$. The phase diagram at zero\ntemperature is then given as a function of the dimensionless parameter $\\tilde{\\mu}=\\mu\/m_\\text{diquark}$ and the\ndimensionless lattice spacing $\\tilde{a}$. \n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{1.1}{\\includeEPSTEX{pd816QND3D}}\\hskip10mm\n\\scalebox{1.1}{\\includeEPSTEX{pd8_16_QND_105}}\n\\caption{\\label{fig:QNDSmall}Quark number density as a function of the lattice spacing $\\tilde{a}$ (left panel) and at $\\beta=1.05$ (right panel) on an $8^3 \\times 16$ lattice. From \\cite{Maas:2012aa}.}\n\\end{center}\n\\end{figure}\nIn Fig.~\\ref{fig:QNDSmall} the\nquark number density is shown. Independent of the lattice spacing, the quark number density takes its maximum value of $n_{q, \\text{sat}}=2 \\cdot N_c \\cdot N_f=14$ at large $\\tilde{\\mu}$. With decreasing lattice spacing, the saturation shifts to larger values of the chemical potential, indicating that this saturation is only a lattice artifact. The Polyakov loop and the (renormalized) chiral condensate show almost the same behaviour as on the larger lattices, see Fig.~\\ref{fig:PolChiralSmall}.\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{1.1}{\\includeEPSTEX{pd816Polyakov3D}}\\hskip10mm\n\\scalebox{1.1}{\\includeEPSTEX{pd816Chiral3D}}\n\\caption{\\label{fig:PolChiralSmall}Polyakov loop (left panel) and chiral condensate (right panel) on an $8^3 \\times 16$ lattice as a function of chemical potential $\\tilde{\\mu}$ and lattice spacing $\\tilde{a}$. From \\cite{Maas:2012aa}.}\n\\end{center}\n\\end{figure}\nFurthermore, the onset transition from the vacuum to nuclear matter is studied in\nFig.~\\ref{fig:onsetSmall}.\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{1.1}{\\includeEPSTEX{pd816SBQND3D}}\\hskip10mm\n\\scalebox{1.1}{\\includeEPSTEX{onset}}\n\\caption{\\label{fig:onsetSmall}Quark number density (left panel) and onset transition compared to the diquark mass (right panel) on an $8^3 \\times 16$ lattice. From \\cite{Maas:2012aa}.}\n\\end{center}\n\\end{figure}\nAt $\\tilde{\\mu}_0\\approx0.5$, a transition in the quark number density\n(left panel) is observed. The value of the onset depends only weakly on the lattice\nspacing, indicating that at smaller values of $\\tilde{\\mu}$ finite-size effects\nare less important than for larger values of the chemical potential. In the\nright panel, the transition (shaded region) is compared to half of the diquark\nmass, and a clear coincidence is visible. This indeed verifies that G$_2$ QCD possesses, as advertised above, the\nsilver-blaze property \\cite{Cohen:2003kd} for the baryon chemical potential, i.e. half of the\nmass of the lightest bound state carrying baryon number is a lower bound for the onset transition to nuclear matter.\nWith decreasing lattice spacing $\\tilde{a}$, a plateau develops for $\\tilde{\\mu}_0(\\tilde{a})<\\tilde{\\mu}<\\tilde{\\mu}_1(\\tilde{a})$, where the quark\nnumber density is almost constant. For $\\tilde{\\mu}>\\tilde{\\mu}_1(\\tilde{a})$ it\nstarts to increase again until it saturates at $\\tilde{\\mu}_\\text{sat}$.
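A minimal numerical check of this silver-blaze bound (a hypothetical helper acting on a measured density curve; real analyses determine the onset from fits and within errors):
\begin{verbatim}
import numpy as np

def onset_mu(mu, n_q, eps=1e-3):
    """First chemical potential at which n_q departs from zero;
    expected at mu ~ 0.5 * m_diquark, i.e. at mu~ ~ 0.5."""
    mu, n_q = np.asarray(mu), np.asarray(n_q)
    return mu[np.argmax(n_q > eps)]
\end{verbatim}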
\n\n\\section{Conclusions}\\label{sconclusion}\n\nConcluding, G$_2$ QCD is a highly interesting arena to investigate both conceptual and practical questions. Conceptually, it has already taught us that the center of the gauge group is far less relevant than originally anticipated. Most of the salient features of Yang-Mills theory are also present for this case with trivial center. It can thus be expected that many other questions may be little affected by the center as well. However, it has also taught us that the group structure and the matter representation are important for the physics. \n\nInvestigating practical applications, which particularly involve benchmarks for models and continuum methods at finite densities and low temperatures, is only a newly emerging field. It has been shown that this is possible. It was furthermore already found that the low- and intermediate-density regimes are quite different from simple systems, confirming the situation in QC$_2$D. To fully control this domain, so important for compact stellar objects, much progress will be needed. G$_2$ QCD will, almost certainly, play an important role in support of this enterprise in the years to come.\n\n\\section{Acknowledgments}\n\nA.\\ M.\\ is grateful to Julia Danzer, Christof Gattringer, Ernst-Michael Ilgenfritz, and {\\v S}tefan Olejn\\'ik for the collaboration on these subjects. A.\\ M.\\ and B.\\ W.\\ are grateful to Lorenz von Smekal and Andreas Wipf for the collaboration on these subjects, especially on the yet unpublished results \\cite{Maas:2012aa} shown in figures \\ref{fig:eom}-\\ref{fig:onsetSmall}.\n\n\\bibliographystyle{bibstyle}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}