diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcnsd" "b/data_all_eng_slimpj/shuffled/split2/finalzzcnsd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcnsd" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nLet $(X,d)$ be a compact metric space with metric $d$ and $f:\nX\\rightarrow X$ be continuous. Then $(X,f)$ is called a compact\nsystem.\nFor every positive integer $n$, we define $f^n$ inductively\nby $f^n=f\\circ f^{n-1}$, where $f^0$ is the identity map on $X$.\n\nA compact system $(X,f)$ is called Devaney chaos \\cite{Devaney} if\nit satisfies the following three conditions:\\\\\n\\indent (1) $f$ is transitive, i.e., for every pair $U$ and $V$ of\nnon-empty open subsets of $X$ there is a non-negative integer $k$\nsuch that $f^{n}(U)\\cap V\\neq\\emptyset$;\\\\\n\\indent (2) $f$ is periodically dense, i.e., the set of periodic\npoints of $f$ is dense in $X$;\\\\\n\\indent (3) $f$ is sensitive, i.e., there is a $\\delta > 0$ such\nthat, for any $x\\in X$ and any neighborhood $V$ of $x$, there is a\nnon-negative integer $n$ such that $d(f^n(x),f^n(y))>\\delta$.\n\nIt is worth noting that the conditions (1) and (2) imply that $f$ is\nsensitive if $X$ is infinite \\cite{Silverman, Banks}. This means the\ncondition (3) is redundant in the above definition.\n\nIn the works \\cite{Roman-Flores} Rom\\'{a}n-Flores investigated a\ncertain hyperspace system $(\\mathcal{K}(X),\\overline{f})$ associated\nto the base system $(X,f)$, where\n$\\overline{f}:\\mathcal{K}(X)\\rightarrow\\mathcal{K}(X)$ is the nature\nextension of $f$ and $\\mathcal{K}(X)$ is the family of all non-empty\ncompact sets of $(X,d)$ endowed with the Hausdorff metric induced by\n$d$. He presented a fundamental question: Does the chaoticity of\n$(X,f)$ (individual chaos) imply that of\n$(\\mathcal{K}(X),\\overline{f})$ (collective chaos)? and conversely?\n\nAs a partial response to this question, Rom\\'{a}n-Flores\n\\cite{Roman-Flores} discussed the transitivity of the two systems,\nand showed that the transitivity of $\\overline{f}$ implies that of\n$f$, but the converse is not true. Fedeli \\cite{Fedeli} showed that\nthe periodically density of $f$ implies that of $\\overline{f}$.\nBanks gave an example which has a dense set of periodic points in\nthe hyperspace system but has none in the base system \\cite{Banks}.\nGongfu Liao showed that there is an example on the interval which is\nDevaney chaos in the base system while is not in its induced\nhyperspace system \\cite{liaogf}.\n\n\nIn this paper, we show that $\\overline{f}$ being Devaney chaos need\nnot imply $f$ being Devaney chaos. Further, $f$ need not be Devaney\nchaos even if $\\overline{f}$ is mixing and periodically dense. This\nanswers the question posed by Rom\\'{a}n-Flores and Banks\n\\cite{Banks}.\n\n\\section{Preliminaries}\n\n$(X,f)$ is a compact system. If $Y\\subset X$ and $Y$ is\n$f$-invariant, i.e. $f(Y)\\subset Y$, then $(Y, f|_{Y})$ is called\nthe subsystem of $(X,f)$ or $f$, where $f|_{Y}$ is the restriction\nof $f$ on $Y$.\n\nA subset $A\\subset X$ is minimal if the closure of the orbit of any\npoint $x$ of $A$ is $A$, i.e. $\\overline{Orb(x)}= A$, for all $x\\in\nA$. 
 An equivalent notion is that $A$ contains no non-empty proper closed $f$-invariant subset.\n\nA point $x\\in X$ is said to be an almost periodic point if for any $\\varepsilon>0$ there exists $N\\in \\mathbb{N}$ such that for any integer $q\\geq 1$ there exists an integer $r$ with the property that $q\\leq r\\leq q+N$ and $d(f^{r}(x),x)<\\varepsilon$.\n\nLet $\\Sigma_{k}$ denote the one-sided symbol space over the alphabet $\\{0,1,\\cdots,k-1\\}$ and let $\\sigma$ be the shift map on it.\n\\begin{lem}\n A point $b=b_{0}b_{1}\\cdots b_{n}\\cdots\\in\\Sigma_{k}$ is almost periodic if and only if for any $j\\geq 0$ there exists $N>0$ such that for any $i>0$ the symbol block $b_{0}b_{1}\\cdots b_{j}$ appears in $b_{i}b_{i+1}\\cdots b_{i+N}$.\n\\end{lem}\n\\begin{lem}\n\\label{fhqhh}\n Let $(Y, \\sigma)$ be a subsystem of a symbol system $(\\Sigma_{k}, \\sigma)$. Then $(Y, \\sigma)$ is mixing if and only if for any two admissible blocks $b_{0}b_{1}\\cdots b_{s}$ and $c_{0}c_{1}\\cdots c_{t}$ in $Y$ there exists $N>0$ such that for any $n>N$ there exists an admissible block $w_{1}w_{2}\\cdots w_{n-1}$ of length $n-1$ in $Y$ with the property that $b_{0}b_{1}\\cdots b_{s}w_{1}w_{2}\\cdots w_{n-1}c_{0}c_{1}\\cdots c_{t}$ is an admissible block of $Y$.\n\\end{lem}\n\nLet $(X,\\mathcal{J})$ be a topological space and $\\mathcal{K}(X)$ be the set of all non-empty compact subsets of $X$. $\\mathcal{K}(X)$ is called the hyperspace of $(X,\\mathcal{J})$ when endowed with the Vietoris topology $\\mathcal{J}_{\\mathscr{V}}$, whose base consists of sets of the form\n \\begin{equation*} \\mathscr{B}(G_{1},G_{2},\\cdots,G_{n}) =\\{K\\in\\mathcal{K}(X):K\\subset \\overset{n}{\\underset{i=1}{\\cup}}G_{i}~\\text{and}~ K\\cap G_{i}\\neq \\emptyset,\\,\\,1\\leq i\\leq n \\},\n\\end{equation*}\nwhere $G_{1},G_{2},\\cdots ,G_{n}$ are non-empty open subsets of $X$.\n\nLet $(X,d)$ be a metric space and $A\\in \\mathcal{K}(X)$. The ``$\\epsilon$-dilatation of $A$'' is defined as the set $N(A,\\epsilon)=\\{x\\in X:d(x,A)<\\epsilon\\}$, where $d(x,A)=\\inf\\limits_{a\\in A}d(x,a)$. The Hausdorff separation $\\rho(A,B)$ is $\\rho(A,B)=\\inf\\{\\epsilon>0:A\\subset N(B,\\epsilon)\\}$. The Hausdorff metric on $\\mathcal{K}(X)$ is\n$$H_{d}(A,B)=\\max\\{\\rho(A,B),\\rho(B,A)\\}.$$\nIt is well known that the topology induced by the Hausdorff metric $H_{d}$ on $\\mathcal{K}(X)$ coincides with the Vietoris topology $\\mathcal{J}_{\\mathscr{V}}$ \\cite{Klein}.
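\n\nAs a simple illustration of these notions (an example of our own), take $X=[0,1]$ with the usual metric, $A=\\{0\\}$ and $B=\\{0,1\\}$. Since $0\\in B$ we have $A\\subset N(B,\\epsilon)$ for every $\\epsilon>0$, so $\\rho(A,B)=0$, while $d(1,A)=1$ gives $\\rho(B,A)=1$. Hence\n$$H_{d}(A,B)=\\max\\{\\rho(A,B),\\rho(B,A)\\}=1 .$$\nIn particular the Hausdorff separation $\\rho$ is not symmetric, while $H_{d}$ is a genuine metric.\n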
\nThe extension $\\overline{f}$ of $f$ to $\\mathcal{K}(X)$ is defined by $\\overline{f}(A)=\\{f(a):a\\in A\\}$. It is easy to show that the continuity of $f$ implies that of $\\overline{f}$. If $(X, f)$ is a compact system, then $(\\mathcal{K}(X), \\overline{f})$ is also a compact system \\cite{Klein}. $(\\mathcal{K}(X), \\overline{f})$ is said to be the hyperspace system induced by the base system $(X, f)$.\n\nThe following two theorems can be found in \\cite{Banks,liaogf,Peris,gurongbao}.\n\\begin{thm}\n\\label{chrdj} Let $(\\mathcal{K}(X),\\overline{f})$ be the hyperspace system induced by the compact system $(X,f)$. Then the following are equivalent.\\\\\n\\indent (i) $f$ is weakly mixing,\\\\\n\\indent(ii) $\\overline{f}$ is weakly mixing,\\\\\n\\indent(iii) $\\overline{f}$ is transitive.\n\\end{thm}\n\\begin{thm}\n\\label{mixing}\n Let $(\\mathcal{K}(X),\\overline{f})$ be the hyperspace system induced by the compact system $(X,f)$. Then $f$ being mixing is equivalent to $\\overline{f}$ being mixing.\n\\end{thm}\n\n\\section{Periodically dense}\n\n\\begin{thm}\n\\label{chbsh} If $(\\mathcal{K}(X), \\overline{f})$ is the hyperspace system induced by the compact system $(X,f)$, then $\\mathcal{K}(X)$ has a dense set of periodic points if and only if for any non-empty open subset $U$ of $X$ there exist a compact subset $K\\subset U$ and an integer $n>0$ such that $f^{n}(K)=K$.\n\\end{thm}\n\n\\noindent\n \\textbf{Proof}\n\n $\\Longrightarrow$ Since $\\mathcal{K}(X)$ has a dense set of periodic points, every non-empty open subset of $\\mathcal{K}(X)$ contains at least one periodic point; in particular, every basic element does. Let $U$ be an arbitrary non-empty open subset of $X$. By the definition of the Vietoris topology, $\\mathscr{B}(U)$ is a basic element of $\\mathcal{K}(X)$, so there exist $K\\in\\mathscr{B}(U)$ and an integer $n>0$ such that $\\overline{f}^{n}(K)=K$. Since $\\overline{f}^{n}=\\overline{f^{n}}$, we get $\\overline{f^{n}}(K)=K$. Hence there is a compact $K\\subset U$ such that $f^{n}(K)=K$.\n\n$\\Longleftarrow$ Let $U_{1},U_{2},\\ldots,U_{n}$ be non-empty open subsets of $X$; then\n$$\\mathscr{B}(U_{1},U_{2},\\ldots,U_{n})=\n\\{W\\in\\mathcal{K}(X):W\\subset\\overset{n}{\\underset{i=1}\\cup}{U_{i}}~\\text{ and}~W\\cap U_{i}\\neq \\emptyset, \\,\\, 1\\leq i\\leq n\\}$$ is a basic element of $\\mathcal{K}(X)$. According to the hypothesis, there exist $K_{i}\\subset U_{i}$ and $m_{i}>0$ such that $f^{m_{i}}(K_{i})=K_{i}$ for all $1\\leq i\\leq n$. Let $m$ be the least common multiple of $\\{m_{i}\\}$ and $K=\\overset{n}{\\underset{i=1}\\cup}{K_{i}}$. Then $f^{m}(K)=K$, $K\\subset\\overset{n}{\\underset{i=1}\\cup}{U_{i}}$ and $K\\cap U_{i}\\neq \\emptyset$ for $1\\leq i\\leq n$. Therefore $K\\in\\mathscr{B}(U_{1},U_{2},\\ldots,U_{n})$, and $K$ is a periodic point of $\\overline{f}$ lying in this basic element.\n\nThis means that every basic element of $\\mathcal{K}(X)$ contains a periodic point, hence so does every non-empty open subset of $\\mathcal{K}(X)$. Then $\\mathcal{K}(X)$ has a dense set of periodic points.\n\n\\begin{rem}\n If $X$ is a topological space and $\\mathcal{P}(X)$ is the family of all non-empty subsets of $X$, then a similar argument shows that the above result also holds for $\\mathcal{P}(X)$ with the Vietoris topology.\n\\end{rem}\n\n\\section{Devaney chaos}\n\nIf $a$ is an almost periodic point of $\\Sigma_{2}$ and $X=\\overline{orb(a)}$, then $(X, \\sigma)$ is, for suitable choices of the point $a$, a mixing non-trivial minimal subsystem of $(\\Sigma_{2}, \\sigma)$. There are many such examples \\cite{Qinjie}.
\nWe fix an arbitrary one of these examples as our object of study; this choice affects neither the proofs nor the results.\n\nWe construct a subsystem $(\\widetilde{X}, \\sigma)$ of $(\\Sigma_{3}, \\sigma)$ associated to the selected $(X, \\sigma)$ as follows.\n\nFor $b=b_{0}b_{1}\\cdots b_{n}\\cdots\\in X$, denote\n$$\nX_{b} = \\{ x = x_{0}x_{1}\\cdots\\in \\Sigma_{3}: \\, \\exists n_{i}\\rightarrow\\infty\\,\\text{such that}\\,x_{n_{i}}= b_{i}, \\,\\text{and}\\,x_{n} = 2 \\,\\text{if}\\,n\\not= n_{i}\\};\n$$\nthen $X_{b}$ is called the extension of $b$ in $\\Sigma_{3}$. Denote\n $\\widetilde{X} = \\overline{\\bigcup_{b\\in X}X_{b}}.$\nIt is easy to see that $\\widetilde{X}$ is a $\\sigma$-invariant subset of $\\Sigma_{3}$. Therefore, $(\\widetilde{X},\\sigma)$ is a subsystem of $(\\Sigma_{3},\\sigma)$.
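\n\nFor example (an illustration of our own), given $b=b_{0}b_{1}b_{2}b_{3}\\cdots\\in X$, the point\n$$\nx = b_{0}\\,2\\,b_{1}\\,2\\,2\\,b_{2}\\,2\\,2\\,2\\,b_{3}\\cdots\n$$\nbelongs to $X_{b}$, with $n_{0}=0, n_{1}=2, n_{2}=5, n_{3}=9,\\cdots$ the positions at which the symbols of $b$ appear.\n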
\n\n\\begin{prop}\n\\label{xqhh}\n $(\\widetilde{X},\\sigma)$ is mixing.\n\\end{prop}\n\n\\noindent\n \\textbf{Proof}\n Let $U=[x_{0}x_{1}\\cdots x_{n_{1}}]$ and $V=[y_{0}y_{1}\\cdots y_{n_{2}}]$ be cylinder sets of $\\widetilde{X}$. They are basic elements of $\\widetilde{X}$ and both are closed sets.\n\nBy the construction of $\\widetilde{X}$, there exist two integer sequences $\\{i_{l}\\}$ and $\\{j_{m}\\}$ with $i_{l},j_{m}\\geq-1$ such that $x_{i_{l}}=b_{l}$, $y_{j_{m}}=c_{m}$, and\n $U_{1}=[b_{0}b_{1}\\cdots b_{l}\\cdots b_{s}]$, $V_{1}=[c_{0}c_{1}\\cdots c_{m}\\cdots c_{t}]$, $s\\leq n_{1}$, $t\\leq n_{2}$, are cylinder sets of $X$ (we set $U_{1}=[2^{[n_{1}]}]$ if all $i_{l}=-1$, and $V_{1}=[2^{[n_{2}]}]$ if all $j_{m}=-1$). So\n $u=b_{0}b_{1}\\cdots b_{l}\\cdots b_{s}$ and $v=c_{0}c_{1}\\cdots c_{m}\\cdots c_{t}$ are admissible blocks of $X$.\n\nSuppose first that $i_{l}\\geq 0$ and $j_{m}\\geq 0$ for all $l,m$. From Lemma \\ref{fhqhh}, there exists $N\\in \\mathbb{N}$ such that for any $n\\geq N$ there exists an admissible block $w=d_{1}d_{2}\\cdots d_{n-1}$ of $X$ with the property that $uwv$ is an admissible block of $X$. Since $U,V$ can be written as\n \\begin{align*}\n&U=[x_{0}x_{1}\\cdots x_{i_{s}}2^{[r]}],\\,\\, r=n_{1}-i_{s}, \\\\\n&V=[2^{[j_{0}]}y_{j_{0}}y_{j_{0}+1}\\cdots y_{n_{2}}],\n\\end{align*}\naccording to the construction of $\\widetilde{X}$,\n$$x_{0}x_{1}\\cdots x_{i_{s}}2^{[r]}w2^{[j_{0}]}y_{j_{0}}y_{j_{0}+1}\\cdots y_{n_{2}}$$\nis an admissible block of $\\widetilde{X}$. Then\n$$[x_{0}x_{1}\\cdots x_{i_{s}}2^{[r]}w2^{[j_{0}]}y_{j_{0}}y_{j_{0}+1}\\cdots y_{n_{2}}]\n\\subset \\widetilde{X}$$ is a basic element of $\\widetilde{X}$. This means that $\\sigma^{n}(U)\\cap V\\neq \\emptyset$ for all sufficiently large $n$.\n\nIf some $i_{l}$ or $j_{m}$ equals $-1$, we just need to insert enough symbols $2$ into the admissible blocks defining $U$ and $V$.\n\nTherefore, $(\\widetilde{X},\\sigma)$ is mixing.\n\n\\begin{cor}\n\\label{mmxing} $(\\mathcal{K}(\\widetilde{X}),\\overline{\\sigma})$ is mixing.\n\\end{cor}\nThis is a direct consequence of Proposition \\ref{xqhh} and Theorem \\ref{mixing}.\n\n\\begin{prop}\n\\label{wyzhqd}\n $(\\widetilde{X},\\sigma)$ is not minimal and has only one periodic point, namely the unique fixed point of $\\widetilde{X}$.\n\\end{prop}\n\n \\textbf{Proof}\n From the construction it is easy to see that $X\\subset \\widetilde{X}$ and that $X$ is a proper minimal subset of $\\widetilde{X}$. By the definition of minimality, $\\widetilde{X}$ is not minimal. We now prove that $\\widetilde{X}$ has only one fixed point, $2^{[\\infty]}$, and that it is the unique periodic point of $\\widetilde{X}$.\n\nSuppose that there exist $x\\in \\widetilde{X}$ and $n>0$ such that $\\sigma^{n}(x)=x$. Then $x$ can be written as an infinite repetition of the block $x_{0}x_{1}\\cdots x_{n-1}$,\n$$x=x_{0}x_{1}\\cdots x_{n-1}\\cdots x_{0}x_{1}\\cdots x_{n-1}\\cdots.$$\nIf $x_{i}=2$ for all $i$, $0\\leq i\\leq n-1$, then $x=2^{[\\infty]}$. Obviously, $x$ is a fixed point of $\\widetilde{X}$.\n\nIf there exists a finite integer sequence $\\{i_{l}\\}$, $0\\leq i_{l}\\leq n-1$, $0\\leq l\\leq m$, $0\\leq m\\leq n-1$, such that $x_{i_{l}}\\neq 2$, then $x_{i_{0}}x_{i_{1}}\\cdots x_{i_{m}}$ is an admissible block of $X$, which we write for convenience as $a_{0}a_{1}\\cdots a_{m}$. Accordingly,\n $a=a_{0}a_{1}\\cdots a_{m}\\cdots a_{0}a_{1}\\cdots a_{m}\\cdots$\n is the infinite repetition of $a_{0}a_{1}\\cdots a_{m}$, so $a$ is an $(m+1)$-periodic point.\n\nSince $X$ is a non-trivial minimal set, $a\\not\\in X$, and then $x\\not\\in X_{a}$. Further, it is not hard to see that $x\\not\\in \\widetilde{X}$. This is a contradiction.\n\n\\begin{rem}\n\\label{dense}\n This result implies that $\\widetilde{X}$ is not periodically dense.\n\\end{rem}\n\n\\begin{prop}\n\\label{chkjchm}\n The hyperspace system $(\\mathcal{K}(\\widetilde{X}),\\overline{\\sigma})$ induced by $(\\widetilde{X},\\sigma)$ is periodically dense.\n\\end{prop}\n\n\\textbf{Proof} According to Theorem \\ref{chbsh}, we just need to prove that every non-empty open subset $U$ of $\\widetilde{X}$ contains a compact subset $K$ with $\\sigma^{m}(K) = K$ for some $m>0$. Further, it is enough to prove this for every basic element of $\\widetilde{X}$.\n\n Let $U = [a_{0}a_{1}\\cdots a_{n-1}]$ be a cylinder set of $\\widetilde{X}$.\n\nIf $a_{i}=2$ for all $i$, $0\\leq i \\leq n-1$, then the singleton $\\{2^{[\\infty]}\\}$ is a $\\sigma$-invariant closed subset of $U$; it satisfies the condition with $m=1$.\n\nSuppose now that $a_{0}a_{1}\\cdots a_{n-1}$ contains exactly $j$ symbols which are not $2$, with $1\\leq j \\leq n$. By the definition of $\\widetilde{X}$, there exists a point $b=b_{0}b_{1}\\cdots$ in $X$ whose first $j$ symbols are, in order, exactly those symbols of $a_{0}a_{1}\\cdots a_{n-1}$ which are not $2$.\n\nSince $b$ is an almost periodic point, there exist $N > 0$ and $n_{k}\\rightarrow\\infty$ such that\n $ j\\leq n_{k+1} - n_{k}\\leq N+j$\nand the first $j$ symbols of $\\sigma^{n_{k}}(b)$ and of $b$ coincide.\n\nLet $m = n+N$ and define $\\bar{b}= {\\bar b_{0}}{\\bar b_{1}}\\cdots\\in \\Sigma_{3}$ as follows. For any $s \\geq 0$,\n\\begin{align*}\n{\\bar b_{s}}=\n\\begin{cases}\n a_{i}, &\\ \\text{if}~ s = km +i, \\quad \\text{where} \\ k \\geq 0,\\,\\, 0\\leq i < n,\\\\\n b_{n_{k}+j+i}, &\\ \\text{if}~ s = km +n +i, \\quad \\text{where} \\ k \\geq 0, \\,\\,0\\leq i < n_{k+1}- n_{k} - j,\\\\\n 2, &\\ \\text{otherwise}.\n \\end{cases}\n\\end{align*}\nIt is not hard to see that $\\bar{b}\\in X_{b}\\subset \\widetilde{X}$.
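\n\nSchematically (our illustration of the construction above), $\\bar{b}$ consists of consecutive blocks of length $m=n+N$, the $k$-th of which is\n$$\n\\underbrace{a_{0}a_{1}\\cdots a_{n-1}}_{n}\\ \\underbrace{b_{n_{k}+j}\\cdots b_{n_{k+1}-1}\\,2\\cdots 2}_{N},\n$$\nso that every block begins with the word $a_{0}a_{1}\\cdots a_{n-1}$ defining $U$, and the remaining symbols of $b$ are inserted in order, padded by the symbol $2$.\n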
\n\nOn the other hand, obviously, $\\sigma^{km}(\\bar{b})\\in U$ for any $k \\geq 0$.\n\n Let $K$ be the $\\omega$-limit set of the point $\\bar b$ under the action of $\\sigma^{m}$, i.e. $K = \\omega(\\bar{b}, \\sigma^{m})$. Then $K\\subset U$ is a closed set and $\\sigma^{m}(K) = K$.\n\nThe proof of Proposition \\ref{chkjchm} is complete.\n\n\\begin{thm}\n\\label{chchndch} The hyperspace system being periodically dense need not imply the base system being periodically dense.\n\\end{thm}\n\\textbf{Proof} This is a direct consequence of Proposition \\ref{chkjchm} and Remark \\ref{dense}.\n\n\\begin{cor}\nThe hyperspace system being mixing and periodically dense need not imply the base system being periodically dense.\n\\end{cor}\n\n\\textbf{Proof} This follows from Propositions \\ref{xqhh}, \\ref{wyzhqd}, \\ref{chkjchm} and Corollary \\ref{mmxing}.\n\\begin{thm}\nThe hyperspace system being Devaney chaotic need not imply the base system being Devaney chaotic.\n\\end{thm}\n\n\\textbf{Proof} Since mixing implies weak mixing, and weak mixing of the base map is equivalent to transitivity of the hyperspace map by Theorem \\ref{chrdj}, a hyperspace system that is mixing and periodically dense is Devaney chaotic. However, by Theorem \\ref{chchndch}, the base system need not be periodically dense, and hence need not be Devaney chaotic.\n\n\\section{Conclusion}\n\nIn summary, we have shown some differences between the chaoticity of a compact system and the chaoticity of its induced hyperspace system. Devaney chaos in the hyperspace system need not imply Devaney chaos in the base system; even if the hyperspace system is in addition mixing, the implication still fails. This kind of investigation should be useful in many real problems, such as ecological modeling, demographic sciences associated to migration phenomena, numerical simulation, etc.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nType Iax supernovae (SNe~Iax; also referred to as 2002cx-like based on their prototypical member) are explosive transients similar in many ways to Type Ia supernovae (SNe~Ia). Both are believed to be thermonuclear explosions of white dwarfs (WDs) and share spectral similarities such as a lack of hydrogen lines and the presence of intermediate-mass and iron-group elements \\citep{Foley2013, Jha2017}. It is estimated that SNe~Iax occur at approximately 5-30\\% the rate of SNe~Ia \\citep{Foley2013, Liu2015}. Like SNe~Ia, there exists a correlation between luminosity and light curve shape for SNe~Iax; however, the correlation for SNe~Iax is offset with increased scatter \\citep{Foley2013}. In comparison to SNe~Ia, characteristics of SNe~Iax include lower peak luminosities, lower maximum velocities, lower ejecta masses \\citep{Foley2013, Takaro2020}, and unique late-time spectra \\citep{Li03,Foley2016}. While SNe~Ia are found in both early- and late-type galaxies, all SNe~Iax have been found in late-type galaxies with the exception of SN 2008ge \\citep{Foley10,Takaro2020}. The presence of SNe~Iax in young host galaxies suggests that progenitor systems must produce and explode WDs with short delay times between star formation and supernova. The only thermonuclear explosion for which a potential companion star was observed before the explosion is SN~2012Z, whose companion was a luminous blue star with a mass of $\\sim 2 \\, M_\\odot$ \\citep{McCully2014}.\n\nAlthough the SNe~Iax class is relatively new, there has already been significant progress in narrowing down potential progenitor scenarios.
\nThe current leading contender involves the partial deflagration of a WD with a mass near the Chandrasekhar limit ($M_\\textrm{Ch}$) accreting from a helium donor star through Roche lobe overflow. \\cite{IbenTutukov1994} first suggested that a degenerate star accreting from a helium donor could potentially reach $M_\\textrm{Ch}$ and produce a supernova; subsequent calculations by \\cite{Yoon03} showed that this evolution was indeed plausible. The helium star donor is necessary to explain the short delay times in a Chandrasekhar-mass scenario since the stable mass transfer rate of helium is higher than for hydrogen \\citep{Ruiter2009,Jha2017}. Deflagration rather than detonation is thought to explain the lower kinetic energies and greater chemical mixing of the ejecta \\citep{Foley2013}. We also note that thermonuclear SNe require the ignition of carbon in the core; by contrast, off-center ignition results in stable burning and the formation of a massive O\/Ne WD or a neutron star \\citep{Shen12,Schwab2016}. The deflagration is considered ``partial'' if the destruction of the WD is incomplete. In fact, a study by \\cite{Jha2017} concluded that since the final velocities of the ejecta are typically lower than the escape velocity from the surface of the WD, some WD material likely remains bound. Additionally, studies of three-dimensional hydrodynamic simulations of the deflagration model also suggest a bound remnant is left behind \\citep{Michael2014, Min2014,Lach22}. This leads to the natural question of whether the surviving remnant could begin a subsequent accretion phase through Roche lobe overflow and undergo multiple SNe~Iax. \n\nIn section \\ref{sec:mod}, we discuss the methods used in this investigation to determine the viability of multiple SNe~Iax. In section \\ref{sec:results}, we discuss the results of our simulations and describe the evolution of several systems in detail. We conclude in section \\ref{sec:conc}.\n\n\\section{Description of models and calculations}\n\\label{sec:mod}\n\nWe model the evolution of a variety of single-degenerate binary systems using Modules for Experiments in Stellar Astrophysics (MESA) \\citep{paxt11,paxt13,paxt15a,paxt18a,paxt19a}. The donor star for each of our systems starts as a helium main sequence star. The white dwarf (WD) accretor is modeled as a point source. When the WD accretes up to the Chandrasekhar limit ($M_\\textrm{Ch}$), we simulate a Type Iax event by decreasing the WD mass and adjusting the binary separation based on the angular momentum loss due to the ejected mass. We do not alter the mass or structure of the helium star, which is a reasonable assumption for compact donors, but is less so for helium giants due to the impact of the SN ejecta \\citep{Pan12,Pan13,Liu13,Bauer19}. We examine which systems begin mass accretion again and bring the remnant back up to $M_\\textrm{Ch}$. Additionally, each binary eventually becomes a double WD system with the potential to undergo a SN~Ia by in-spiralling due to gravitational wave emission \\citep{Guillochon2010,Dan2012}. We vary two main initial parameters: the helium star's initial mass and the initial period of the system. We keep the initial mass of the white dwarf fixed at $1.0 \\, M_\\odot$ for this investigation. Notably, studies by \\cite{Wong2019} and \\cite{Wong2021} performed similar simulations using MESA to investigate thermonuclear explosions of helium star and WD binaries. However, our investigation differs by focusing on the post-explosion evolution of these systems.
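\n\nThe logic of the iterated procedure can be summarized by a toy sketch of our own in Python; the constant mass transfer rate and all numerical values are purely illustrative (in our actual calculations the transfer rate comes from MESA), and the separation update anticipates Equation \\ref{a-recirc} below:\n\\begin{verbatim}\n# Toy version of the iterated procedure; illustrative only.\nM_CH, DM_EJ = 1.4, 0.2   # Chandrasekhar mass, assumed ejecta mass (Msun)\n\ndef count_sne(M_wd, M_donor, mdot=1e-6, dt=1e4, t_end=2e6):\n    n_sne, a_factor, t = 0, 1.0, 0.0\n    while t < t_end and M_donor > 0:\n        dm = min(mdot * dt, M_donor)   # conservative mass transfer\n        M_wd += dm; M_donor -= dm; t += dt\n        if M_wd >= M_CH:               # SN Iax: bound remnant remains\n            M0 = M_wd + M_donor        # pre-SN total mass\n            M_wd -= DM_EJ\n            a_factor *= M0 / (M0 - DM_EJ)   # recircularized separation\n            n_sne += 1\n    return n_sne, a_factor\n\nprint(count_sne(1.0, 2.0))\n\\end{verbatim}\n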
\n\nWe begin by using MESA to create helium main sequence star models with masses of $1.0$, $1.5$, and $2.0 \\, M_\\odot$. Details for using MESA to create helium stars are located in the Appendix. We set the chemical composition to be 98\\% $^4$He and 2\\% $^{14}$N, by mass; we assume nitrogen is the endpoint of the CNO isotopes after CNO burning. For the initial masses we consider, the helium star expands after it exhausts helium in the core, potentially leading to mass transfer due to Roche lobe overflow.\n\nAfter performing the same stellar evolution with four nuclear reaction networks within MESA (\\texttt{basic}, \\texttt{approx21}, \\texttt{co\\_burn}, and \\texttt{mesa\\_201}), we determine that the mass fractions for the most abundant isotopes ($^4$He, $^{12}$C, $^{14}$N, and $^{16}$O) are consistent across all networks within a few percent. Additionally, these various networks do not noticeably alter the stellar evolution of the models. We thus choose the \\texttt{basic} network for simplicity.\n\nWe then evolve the previously created helium star models in a binary system with a $1.0 \\, M_\\odot$ point mass. For each helium star mass, we vary the initial period of the system. Some MESA settings differed between systems in order to allow the runs to proceed more smoothly; details are located in the Appendix.\n\nIn general, Roche lobe overflow can occur due to stellar expansion or due to shrinking binary separation caused by loss of angular momentum. In our investigation, we turn off angular momentum loss from magnetic braking since our helium main sequence stars do not have large convective zones at their surface. We do allow angular momentum loss due to gravitational wave radiation. However, since the main sequence lifetime for helium stars of this mass is short (of the order of $10^7$ years), the probability of encountering systems with small initial binary separations such that mass transfer begins during the main sequence phase is low. \n\nThe smallest initial period for each donor star mass is set such that the Roche lobe radius is slightly larger than the helium star's radius. Additional systems were constructed with periods increasing by factors of approximately 2.
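\n\nSuch a minimum period can be estimated analytically; a minimal sketch of our own (not part of the MESA setup) combines the standard Eggleton (1983) fit for the Roche lobe radius with Kepler's third law, the donor radius being an assumed input:\n\\begin{verbatim}\n# Shortest initial period for which the donor just fills its\n# Roche lobe (our rough estimate; cgs units).\nimport numpy as np\nG, MSUN, RSUN, DAY = 6.674e-8, 1.989e33, 6.957e10, 86400.0\n\ndef roche_lobe_fraction(q):   # Eggleton fit, q = M_donor / M_wd\n    return 0.49 * q**(2/3) / (0.6 * q**(2/3) + np.log(1 + q**(1/3)))\n\ndef min_period_days(M_donor, M_wd, R_donor):\n    a = R_donor * RSUN / roche_lobe_fraction(M_donor / M_wd)\n    return 2 * np.pi * np.sqrt(a**3 / (G * (M_donor + M_wd) * MSUN)) / DAY\n\nprint(min_period_days(1.0, 1.0, 0.2))  # example; R_donor in Rsun is assumed\n\\end{verbatim}\n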
\n\nThe WD is approximated as a point mass with perfectly conservative mass transfer, which allows transfer to occur both below and above the thermally stable WD accretion rates \\citep{Brooks2016}. The limitation of simulating the WD as a point source is the inability to determine whether carbon ignition begins centrally or off-center, which determines whether a supernova occurs. By assuming conservative mass transfer, we maximize the total mass and orbital angular momentum of the system. The results in this work can thus be regarded as maximizing the number of SNe~Iax. Although the studies by \\cite{Wong2019} and \\cite{Wong2021} consider the stellar interior of the WD and effects of non-conservative mass transfer, they do not investigate the evolution of systems beyond a single thermonuclear explosion. We leave a more in-depth analysis of recurrent thermonuclear SNe to future work.\n\nIf the WD reaches $M_\\textrm{Ch} = 1.4 \\, M_\\odot$, we assume a SN~Iax occurs and examine how the dynamics of the system change due to the sudden mass ejection from the WD. We assume no mass is stripped from the helium donor star during the explosion. We use the results from \\cite{hills83} for the ratio of pre- and post-SN semi-major axes and eccentricities due to a mass ejection of $\\Delta M$. \n\nSince there is ample time for circularization prior to the explosion, our initial binary systems have no eccentricity, and so we modify the \\cite{hills83} result by setting the initial eccentricity $e_0 = 0$. The ratio of semi-major axes is then:\n\\begin{equation} \\label{hills-a}\n \\frac{a}{a_0} = \\frac{1}{2} \\left(\\frac{M_0 - \\Delta M}{\\frac{M_0}{2} - \\Delta M}\\right) ,\n\\end{equation}\nwhere $a$ is the post-SN semi-major axis, $a_0$ is the pre-SN semi-major axis, $M_0$ is the initial total mass of the system, and $\\Delta M$ is the mass ejected from the WD due to the SN~Iax. \n\nThe final eccentricity from \\cite{hills83} is then:\n\\begin{equation} \\label{hills-e}\n e = \\left[ 1- \\frac{ \\left( 1- \\frac{2 \\Delta M}{M_0} \\right)}{ \\left( 1- \\frac{\\Delta M}{M_0} \\right)^2}\\right]^\\frac{1}{2} .\n\\end{equation}\n\nWe now assume that after the SN~Iax occurs, tidal circularization happens rapidly because the helium star donor is nearly filling its Roche lobe. We calculate a new semi-major axis assuming that the orbital angular momentum is conserved as the eccentricity dissipates to zero via tides. The orbital angular momentum is given by:\n\\begin{equation} \\label{ang-mom}\n L^2 = \\frac{G^2 (M_0 - \\Delta M)^2 \\, \\mu ^3}{2E} \\left(e^2 -1\\right) ,\n\\end{equation}\nwhere $M_0 - \\Delta M$ is the total mass of the system after mass ejection, $\\mu$ is the reduced mass, and $E$ is the (negative) orbital energy of the system. \n\nWe then find that the new binary separation $a_{\\rm recirc}$ is given by:\n\\begin{equation} \\label{a-recirc}\n \\frac{a_{\\rm recirc}}{a_0} = \\frac{M_0}{M_0 - \\Delta M} .\n\\end{equation}\n\nUsing Equation \\ref{a-recirc}, we reinitialize our MESA system with a WD mass of $1.2 \\, M_\\odot$, which simulates a mass ejection of $\\Delta M = 0.2 \\, M_\\odot$, and with a new semi-major axis $a_{\\rm recirc}$. We repeat this process every time the WD reaches $M_\\textrm{Ch}$. \n\nEventually, the systems reach a natural endpoint in MESA. The most common endpoint is when the helium star ceases helium-burning, evolves into a second white dwarf, and cools. Gravitational wave radiation will cause the resulting double WD systems to merge and possibly result in a SN~Ia. The merging times of such systems are:\n\\begin{equation} \\label{t-merge}\n t_{merge} = \\frac{5c^5}{256G^3} \\frac{a^4}{M_1M_2M_{tot}} .\n\\end{equation}\n\nWe count a SN~Ia as occurring if two conditions are met. The first is that this merging time is shorter than a Hubble time. The second is that the mass of the primary WD is greater than $\\sim 0.85 \\, M_\\odot$ and the secondary WD is also suitably massive \\citep{Dan2012, Guillochon2010, Shen17, Shen2018}, which is always the case for the systems we consider.
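\n\nAs a numerical sanity check of Equations \\ref{hills-a}, \\ref{hills-e}, \\ref{a-recirc} and \\ref{t-merge}, the following minimal script (our own; the input values are purely illustrative) verifies that circularization at fixed angular momentum, $a_{\\rm recirc}=a(1-e^{2})$, reproduces Equation \\ref{a-recirc}:\n\\begin{verbatim}\n# Post-SN orbit update and GW merging time (our sketch; cgs units).\nG, C, MSUN, YR = 6.674e-8, 2.998e10, 1.989e33, 3.156e7\n\nM_wd, M_he, dM = 1.4, 1.5, 0.2   # WD at M_Ch, assumed companion, ejecta\nM0 = M_wd + M_he                 # pre-SN total mass\na_ratio = 0.5 * (M0 - dM) / (M0 / 2 - dM)   # post-SN a / a0\nx = dM / M0\ne = (1 - (1 - 2 * x) / (1 - x)**2) ** 0.5   # post-SN eccentricity\na_recirc = M0 / (M0 - dM)                   # recircularized a / a0\nassert abs(a_ratio * (1 - e**2) - a_recirc) < 1e-12\n\ndef t_merge_yr(a, m1, m2):   # a in cm, masses in Msun\n    return 5 * C**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2) * MSUN**3) / YR\n\nprint(e, t_merge_yr(1.0e10, 1.0, 1.0))   # e ~ 0.074, t ~ 3e4 yr\n\\end{verbatim}\n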
\n\nAnother possible endpoint is when the helium star expands into a giant and causes the mass transfer rate to increase rapidly, which MESA is unable to handle self-consistently, preventing us from simulating the full evolution of these systems. Physically, when the mass transfer rate becomes very high, the white dwarf is unable to stably accrete, causing the material to accumulate in the circumbinary environment and yielding a common envelope \\citep{Iben1993, Ivanova2013}. Frictional forces from the common envelope cause the two stars to spiral towards each other, and the difference in orbital energy is used to eject the envelope. Thus, for these systems in our simulation, we assume that a common envelope containing the unburned helium envelope of the donor star forms and is ejected, thereby altering the binary separation. The remaining carbon core of the donor star is assumed to become a degenerate carbon-oxygen (CO) WD. Due to the assumption in our simulation that mass transfer onto the WD is stable at any accretion rate handled by MESA, we expect to see fewer common envelopes in our simulation than in reality.\n\nWe assume that the entire helium envelope is ejected and find the binding energy of that envelope:\n\\begin{equation}\n E_{\\rm bind} \\sim - \\frac{GM_{\\rm He\\, core}M_{\\rm He\\, env}}{R_{\\rm He\\, star}} .\n\\end{equation}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{Paraspace_WD1_0.png}\n \\caption{Summary of simulation results for various combinations of helium star mass and initial period. Fixed parameters are an initial WD mass of $1.0 \\, M_\\odot$ and ejected mass $\\Delta M = 0.2 \\, M_\\odot$. Text to the right of each symbol represents the final mass of the WD point mass in solar masses, the final mass of the WD that the helium star evolves into in solar masses, and the merging time of the double WD binary in years. Symbols are summarized in the legend.}\n \\label{fig:Paraspace_WD1}\n \\vspace{0.2in}\n\\end{figure*}\n\nWe then assume that, in order to unbind the envelope, orbital energy is lost. Thus, we can find the relationship between the initial and final binary separations:\n\\begin{equation}\n \\frac{E_{\\rm bind}}{\\alpha} = \\frac{GM_{\\rm He\\,star}M_{\\rm WD}}{a_i} - \\frac{GM_{\\rm He\\,core}M_{\\rm WD}}{a_f} ,\n\\end{equation}\nwhere $\\alpha$ is an efficiency parameter that we assume to be 1, and we assume the commonly used structure parameter, $\\lambda$, is also 1.\n\nAfter the common envelope ejection, the binary becomes a double WD system, and we calculate the expected merging time using Equation \\ref{t-merge}. \n\nThe final possible endpoint we consider is the evolution of the helium star through carbon-burning to become an oxygen\/neon\/magnesium (O\/Ne) WD. The evolution towards an O\/Ne WD is signalled by the ignition of off-center convective carbon-burning in the interior of the helium star. The calculation of such models in MESA becomes very time-intensive, so we do not further evolve these systems. However, since in each of our simulations mass transfer has already decreased to negligible amounts by the stage where carbon-burning occurs, we assume that these carbon-burning stars in binary systems will become O\/Ne WDs of the same mass.\n\n\\section{Results}\n\\label{sec:results}\n\nWe find that systems of recurrent SNe~Iax are plausible. In particular, in systems with greater helium star masses and smaller initial periods, the potential for several SNe~Iax increases. \n\nHowever, our estimates are optimistic. We made several approximations that increase the likelihood of multiple SNe~Iax. First, we assume only $0.2 \\, M_\\odot$ of material is lost from the WD during each SN~Iax. Such a small ejecta mass may be the case for very faint SNe~Iax such as SN~2008ha \\citep{Foley2009}, but typical SNe~Iax ejecta masses are likely several times larger \\citep{Foley2013}. Second, we assume that no mass is stripped from the helium donor during the SN event. Third, we assume that all mass transferred off the helium star remains on the WD.
\nIn nature, the range of accretion rates leading to steady and stable helium-burning is only a factor of a few \\citep{Brooks2016}; outside of this regime, mass is lost from the system via a combination of helium novae, optically thick winds, and binary-driven mass loss. \n\nFigure \\ref{fig:Paraspace_WD1} presents a summary of our findings. Each data point represents one MESA simulation, with the symbol meanings explained in the legend. The text to the right of each data point gives the final mass of the WD point mass in solar masses, the final mass of the WD that results from the evolution of the helium star in solar masses, and the merging time of the subsequent double WD binary in years. \n\n\\subsection{Systems with $1.0 \\, M_\\odot$ helium star donors}\n\nLet us examine the possible scenarios for a $1.0 \\, M_\\odot$ helium star. None of these systems undergoes a SN~Iax because the WD does not become massive enough to explode prior to the cessation of mass transfer and the evolution of the helium star to degeneracy. However, for systems with an initial period of $\\leq$ 0.5 days, the two resulting degenerate stars will merge within a Hubble time due to the loss of orbital angular momentum by gravitational wave emission, and possibly yield a SN~Ia. Systems with an initial period $\\geq$ 800 days go through a common envelope phase. Since the ejected mass is only a few hundredths of a solar mass in both cases we explored, the effect of the common envelope ejection on the merging timescale is minimal. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{He1p0_Per0p105_TRho.png}\n \\caption{Temperature vs.\\ density profile at the time when the mass transfer rate is highest for a system with a $1.0 \\, M_\\odot$ helium star with an initial period of 0.105 days. Zones where the net nuclear energy (produced from nuclear reactions minus neutrino losses) is greater than $\\unit[1]{erg \\, g^{-1} \\, s^{-1}}$ are shown in orange. The dashed line shows conditions for which the radiation pressure equals the ideal gas pressure, assuming a chemical composition of fully ionized helium. Labelled black dots represent mass fractions contained within a certain zone.}\n \\label{fig:Hertzsprung}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{He1p0_Per400_TRho.png}\n \\caption{Temperature vs.\\ density profile at the time when the mass transfer rate is highest for a system with a $1.0 \\, M_\\odot$ helium star with an initial period of 400 days. In this profile, the labelled mass fractions show a dense core and a massive envelope, which demonstrates that the star is in the giant phase.}\n \\label{fig:giant}\n\\end{figure}\n\nFor a $1.0 \\, M_\\odot$ helium star, when the initial period is small, such as in the case of an initial period of 0.105 days, mass transfer begins as soon as the helium star begins to expand. Thus, the bulk of mass transfer occurs along the Hertzsprung gap before the star becomes a giant. This is evident in the temperature vs.\\ density profile of the star at the time when the mass transfer rate is highest, which is shown in Figure \\ref{fig:Hertzsprung}. In this profile, we see that the star is no longer on the main sequence because the burning region is outside the core. However, the star is not yet a giant because much of the mass is confined within a relatively compact inner region of the star. The density at an enclosed mass fraction of 0.99 is $3~\\text{g\/cm}^3$.
\n\nIn contrast, Figure \\ref{fig:giant} shows the temperature vs.\\ density profile of a $1.0 \\, M_\\odot$ helium star that is a giant at the time when the mass transfer rate is highest. This system began with a much larger initial period of 400 days, and therefore requires the star to expand significantly before it fills its Roche lobe. The temperature vs.\\ density profile shows a more massive low density envelope. The density of this star at a mass fraction of 0.99 is $4 \\times 10^{-8}~\\text{g\/cm}^3$. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{mdot_1p0.png}\n \\caption{Mass transfer rates for systems with an initial helium star mass of $1.0 \\, M_\\odot$ and differing initial periods. Dotted lines are the upper and lower bounds of accretion that lead to steady helium-burning on the WD's surface according to \\cite{Brooks2016}.}\n \\label{fig:mdot_1p0}\n\\end{figure}\n\nFigure \\ref{fig:mdot_1p0} shows the mass transfer rate vs.\\ WD mass for several systems with an initial $1.0 \\, M_\\odot$ helium star. We see that none of the WDs reaches the Chandrasekhar mass. The black dotted lines represent the upper and lower boundaries of accretion required for steady and stable helium-burning on the WD according to Figure 3 from \\cite{Brooks2016}. All of the systems shown in Figure \\ref{fig:mdot_1p0} transfer a significant amount of mass outside these boundaries, which demonstrates the limitations of our assumption that all mass is stably transferred onto the WD point mass. The mass transfer rates run away above a few times $10^{-5} M_\\odot$\/yr for systems with an initial period $\\geq$ 800 days. \n\n\\subsection{Systems with $1.5 \\, M_\\odot$ helium star donors}\n\nNext, we examine systems with $1.5 \\, M_\\odot$ helium stars. Systems with an initial period of approximately $\\leq$ 0.45 days can go through two sequential SNe~Iax. Within this group, systems with an initial period of approximately $\\leq$ 0.139 days can also merge within a Hubble time, which allows the potential for a SN~Ia following the previous two SNe~Iax. For systems with periods between 1 and 400 days, we only find potential for one SN~Iax based on the previously described constraints. Notably, for systems with periods of 800 days, zero SNe are expected. Additionally, a common envelope phase occurs and ejects approximately $0.5 \\, M_\\odot$ of material, or approximately 34\\% of the helium star's original mass. This brings the system into closer orbit and reduces the merging time slightly. We also expect the common envelope phase will likely occur for systems with initial periods $\\geq$ 800 days. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{He1p5_Per800_TRho.png}\n \\caption{Temperature vs.\\ density profile at the time when the mass transfer rate is highest for a system with a $1.5 \\, M_\\odot$ helium star with an initial period of 800 days. The mass transfer rate for this system becomes high enough to enter a common envelope phase. In this profile, the labelled mass fractions show a dense core and a massive envelope, which demonstrates that the star is in the giant phase.
\nIn contrast to Figure \\ref{fig:giant}, a greater fraction of the mass resides in this low density envelope, which is ejected as a common envelope.}\n \\label{fig:common_env}\n\\end{figure}\n\nFigure \\ref{fig:common_env} shows the temperature vs.\\ density profile for the system with an initial helium star mass of $1.5 \\, M_\\odot$ and an initial period of 800 days. This profile corresponds to the time when the mass transfer rate is highest, directly before MESA becomes unable to calculate the remaining binary evolution. We notice that at least 60\\% of the star's mass is in a compact region at the core of the star at a density $\\geq 8 \\times 10^5~\\text{g\/cm}^3$, while a massive envelope contains at least 30\\% of the mass at a density $\\leq 10^{-5}~\\text{g\/cm}^3$. This convective envelope contributes to runaway mass transfer and the formation of a common envelope \\citep{Ge2010}. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{mdot_1p5.png}\n \\caption{Mass transfer rates for systems with a helium star mass of $1.5 \\, M_\\odot$ and differing initial periods. The mass transfer evolution prior to the first and second SN~Iax are shown as solid and dashed lines, respectively, but any further evolution is neglected for visual simplicity.}\n \\label{fig:mdot_1p5}\n\\end{figure}\n\nFigure \\ref{fig:mdot_1p5} shows the mass transfer rate vs.\\ WD mass for several systems with a $1.5 \\, M_\\odot$ helium star. The solid lines represent the mass transfer prior to the first SN and the dashed lines represent the mass transfer prior to the second SN. If a second SN occurs, all further mass transfer is neglected for visual simplicity. This figure demonstrates that most mass transfer for these systems occurs above the rate at which transfer is stable. The mass transfer rate for the system with an initial period of 800 days runs away above a few times $10^{-4} M_\\odot$\/yr.\n\n\\subsection{Systems with $2.0 \\, M_\\odot$ helium star donors}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{He2p0_Per2_TRho.png}\n \\caption{Temperature vs.\\ density profile for a system with a $2.0 \\, M_\\odot$ helium star with an initial period of 2 days. Outer and inner burning locations (shown in orange) correspond to helium-burning and carbon-burning regions, respectively.}\n \\label{fig:ONe}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{mdot_2p0.png}\n \\caption{Mass transfer rates for systems with a helium star mass of $2.0 \\, M_\\odot$ and differing initial periods. The mass transfer evolution beyond the first SN~Iax is neglected for visual simplicity.}\n \\label{fig:mdot_2p0}\n\\end{figure}\n\nFinally, consider systems with $2.0 \\, M_\\odot$ helium stars. Notably, for the entire range of initial periods that we explored, the systems undergo at least three Type Iax SNe. Systems with an initial period of 0.121 days have the potential for four SNe~Iax, and systems with an initial period $\\leq$ 0.5 days have the potential to merge within a Hubble time. Systems with an initial period of 512 days will undergo a common envelope phase and eject $0.15 \\, M_\\odot$ of material. Additionally, helium stars that will eventually evolve into WDs with a mass $\\geq 1.04 \\, M_\\odot$ ignite off-center carbon-burning. The inward propagation of the carbon-burning wave by diffusion results in very small simulation timesteps, and therefore we halt these calculations before their natural endpoints.
\nHowever, we can assume that carbon-burning will cause the star to become an oxygen\/neon\/magnesium (O\/Ne) WD. The temperature vs.\\ density profile for a star becoming an O\/Ne WD is shown in Figure \\ref{fig:ONe}. The outer and inner burning regions correspond to helium-burning and convective carbon-burning regions, respectively. Much of the mass of this star is at a high density: the density at an enclosed mass fraction of 0.99 is $4 \\times 10^3~\\text{g\/cm}^3$. \n\nFigure \\ref{fig:mdot_2p0} shows the mass transfer rate vs.\\ WD mass for several systems with a $2.0 \\, M_\\odot$ helium star. Only the mass transfer prior to the first SN is shown, for visual simplicity. From this figure, we can see that none of the mass transfer from $2.0 \\, M_\\odot$ helium stars occurs in the stable region prior to the first SN, which highlights the extreme optimism of our assumptions. In reality, since mass transfer occurs above the stable boundary, we expect the formation of an optically thick wind or a common envelope, which we did not account for in systems with a period $< 512$ days. \n\n\\section{Conclusions}\n\\label{sec:conc}\n\nIn this investigation, we used MESA to explore the potential for recurrent thermonuclear SNe from binary systems containing a helium star donor and a WD accretor. Under our optimistic assumptions, we found that the possibility of several SNe~Iax exists in systems with high-mass helium stars and small initial periods. We also find that systems with small initial periods are likely to produce SNe~Ia by forming double WD systems and merging within a Hubble time, suggesting the possibility of both single- and double-degenerate thermonuclear SNe produced by the same system. \n\nHowever, we also find that the systems most likely to create recurrent SNe~Iax are those for which the assumptions of our simulation are least accurate. Each of these systems transfers mass at rates above the stable WD accretion rate set by steady and stable helium-burning on the surface of the WD. Thus, our assumption of conservative mass transfer onto the WD is very optimistic. Likely, these systems would undergo a common envelope phase and eject much of the unaccreted material, which would greatly decrease the chances for repeated SNe~Iax. \n\nOur simulation is also optimistic due to the treatment of the WD as a point mass, and the resulting inability to determine whether carbon ignition begins centrally or off-center. In cases of off-center carbon ignition, the WD burns non-explosively and is converted into an O\/Ne WD \\citep{Brooks2016} instead of leading to a SN.\n\nThe possibility that binaries can host multiple thermonuclear SNe has obvious implications for predictions of SN rates and galactic chemical enrichment. This possibility also suggests that polluted helium stars in WD binaries may exist and may be observable throughout the galaxy. Furthermore, thermonuclear ash from previous SNe that remains inside a surviving WD that subsequently undergoes another SN may alter the resulting nucleosynthesis in interesting ways. However, given the small parameter space that leads to multiple SNe (restricted to short orbital periods and massive helium star donors) and the overly optimistic assumptions employed in this work, it appears unlikely that systems with multiple SNe play a significant role in nature.
\nStill, given the exciting ramifications that are possible, we recommend that future studies be performed with more accurate physics regarding the stability and retention of accreted helium.\n\n\\acknowledgments\n\nThis project was initiated with Saurabh Jha and Todd Thompson during a Scialog Time Domain Astrophysics workshop. We further thank Todd Thompson for helpful comments that improved this manuscript. Support for this work was provided by NASA\/ESA Hubble Space Telescope program \\#15918.\n\n\\software{\\texttt{matplotlib} \\citep{hunt07a}, \\texttt{MESA} \\citep{paxt11,paxt13,paxt15a,paxt18a,paxt19a}}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the paper \\cite{CJ} Jarlskog gave a recursive parametrization of unitary matrices; see also \\cite{Dita} for a similar parametrization. One of the authors showed that the recursive method is essentially obtained from the so--called canonical coordinate of the second kind in Lie group theory \\cite{KF0}.\n\nWe work in quantum computation, and are therefore interested in applications of this parametrization to quantum computation. \n\nOne of the key points of quantum computation is to construct certain unitary matrices (quantum logic gates) in an efficient manner, like the Discrete Fourier Transform (DFT), when $n$ is large enough \\cite{PWS}. However, such a quick construction is in general not easy; see \\cite{nine} or \\cite{KF1}, \\cite{KF2}.\n\nThe parametrization of unitary matrices given by Jarlskog may be convenient for this purpose. We want to apply the method to quantum computation based on a multi--level system (qudit theory). One of the reasons to study qudit theory comes from a deep problem concerning decoherence (which we do not repeat here). In the following let us consider an $n$ level system (for example, an atom with $n$ energy levels). \n\nConcerning an explicit construction of quantum logic gates in qudit theory, see for example \\cite{KF3}, \\cite{FHKW} and \\cite{KuF}. By use of the new parametrization of unitary matrices we want to construct important logic gates {\\bf in an explicit manner}\\footnote{Quantum computation is not pure mathematics, so we need an explicit construction}, especially the generalized Pauli matrices and the generalized Walsh--Hadamard matrix, which play a central role in qudit theory.\n\nIn this paper we construct the generalized Pauli matrices in a complete manner, while the Walsh--Hadamard matrix is constructed only for the $3$, $4$ and $5$ level systems. The calculation for the $5$ level system is already rather involved compared to the $3$ and $4$ level systems, and in general the calculation tends to become more and more complicated as $n$ becomes large. \n\nThe generalized Walsh--Hadamard matrix gives a superposition of states in qudit theory, which is the heart of quantum computation. It is natural for us to request {\\bf a quick and clean construction} of it.\n\nTherefore our calculation (or construction) may imply that a qudit theory with $n \\geq 5$ is not realistic. Further study will be required.\n\n\\section{Jarlskog's Parametrization}\n\nLet us make a brief introduction to the parametrization of unitary matrices by Jarlskog, with the method developed in \\cite{KF0}.
\nThe unitary group is defined as\n\\begin{equation}\nU(n)=\\left\\{\\,U \\in M(n,{\\mathbf C})\\ |\\ U^{\\dagger}U=UU^{\\dagger}={\\bf 1}_{n}\\,\\right\\}\n\\end{equation}\nand its (unitary) algebra is given by\n\\begin{equation}\nu(n)=\\left\\{\\,X \\in M(n,{\\mathbf C})\\ |\\ X^{\\dagger}=-X\\,\\right\\}.\n\\end{equation}\nThen the exponential map is\n\\begin{equation}\n\\mathrm{exp} : u(n)\\ \\longrightarrow\\ U(n)\\ ;\\ X\\ \\mapsto\\ U\\equiv \\mathrm{e}^{X}.\n\\end{equation}\nThis map is canonical but not easy to calculate.\n\nWe write down the element $X \\in u(n)$ explicitly:\n\\begin{equation}\nX=\n\\left(\n\\begin{array}{cccccc}\ni\\theta_{1} & z_{12} & z_{13} & \\cdots & z_{1,n-1} & z_{1n} \\\\\n-\\bar{z}_{12} & i\\theta_{2} & z_{23} & \\cdots & z_{2,n-1} & z_{2n} \\\\\n-\\bar{z}_{13} & -\\bar{z}_{23} & i\\theta_{3} & \\cdots & z_{3,n-1} & z_{3n} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n-\\bar{z}_{1,n-1} & -\\bar{z}_{2,n-1} & -\\bar{z}_{3,n-1} & \\cdots & i\\theta_{n-1} & z_{n-1,n} \\\\\n-\\bar{z}_{1,n} & -\\bar{z}_{2,n} & -\\bar{z}_{3,n} & \\cdots & -\\bar{z}_{n-1,n} & i\\theta_{n}\n\\end{array}\n\\right). \n\\end{equation}\nThis $X$ is decomposed into\n\\[\nX=X_{0}+X_{2}+\\cdots + X_{j}+\\cdots +X_{n}\n\\]\nwhere\n\\begin{equation}\nX_{0}=\n\\left(\n\\begin{array}{cccccc}\ni\\theta_{1} & & & & & \\\\\n& i\\theta_{2} & & & & \\\\\n& & i\\theta_{3} & & & \\\\\n& & & \\ddots & & \\\\\n& & & & i\\theta_{n-1} & \\\\\n& & & & & i\\theta_{n} \n\\end{array}\n\\right)\n\\end{equation}\nand for $2\\leq j\\leq n$\n\\begin{equation}\nX_{j}=\n\\left(\n\\begin{array}{cccccc}\n0 & & & & & \\\\\n& \\ddots & & \\ket{z_{j}} & & \\\\\n& & \\ddots & & & \\\\\n& -\\bra{z_{j}} & & 0 & & \\\\\n& & & & \\ddots & \\\\\n& & & & & 0\n\\end{array}\n\\right),\\quad \n\\ket{z_{j}}=\n\\left(\n\\begin{array}{c}\nz_{1j} \\\\\nz_{2j} \\\\\n\\vdots \\\\\nz_{j-1,j}\n\\end{array}\n\\right).\n\\end{equation}\n\nThen the canonical coordinate of the second kind in the unitary group (Lie group) is well known and given by\n\\begin{equation}\n\\label{eq:the canonical coordinate of the second kind}\nu(n) \\ni X=X_{0}+X_{2}+\\cdots + X_{j}+\\cdots +X_{n}\n\\ \\longrightarrow\\ \n\\mathrm{e}^{X_{0}}\\mathrm{e}^{X_{2}}\\cdots \\mathrm{e}^{X_{j}}\\cdots \\mathrm{e}^{X_{n}}\n \\in U(n)\n\\end{equation}\nin this case\\footnote{There are of course some variations}. Therefore we have only to calculate $\\mathrm{e}^{X_{j}}$ for $j\\geq 2$ ($j=0$ is trivial), which is easy (see Appendix). The result is\n\\begin{equation}\n\\label{eq:fundamental}\n\\mathrm{e}^{X_{j}}\n=\n\\left(\n\\begin{array}{ccc}\n{\\bf 1}_{j-1}-\\left(1-\\cos(\\sqrt{\\braket{z_{j}}})\\right)\n\\ket{\\tilde{z}_{j}}\\bra{\\tilde{z}_{j}}& \n\\sin(\\sqrt{\\braket{z_{j}}})\\ket{\\tilde{z}_{j}} & \\\\\n-\\sin(\\sqrt{\\braket{z_{j}}})\\bra{\\tilde{z}_{j}} & \n\\cos(\\sqrt{\\braket{z_{j}}}) & \\\\\n & & {\\bf 1}_{n-j}\n\\end{array}\n\\right)\n\\end{equation}\nwhere $\\ket{\\tilde{z}_{j}}$ is a normalized vector defined by\n\\begin{equation}\n\\ket{\\tilde{z}_{j}}\\equiv \\frac{1}{\\sqrt{\\braket{z_{j}}}}\\ket{z_{j}}\n\\ \\Longrightarrow\\ \\braket{\\tilde{z}_{j}}=1.\n\\end{equation}
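\n\nThe block formula (\\ref{eq:fundamental}) is easily checked numerically; the following minimal sketch (ours, using \\texttt{numpy} and \\texttt{scipy}) verifies it for $n=3$, $j=3$ with an arbitrary choice of $\\ket{z_{3}}$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import expm\n\nz = np.array([0.3 + 0.4j, -0.2 + 0.1j])   # arbitrary z_13, z_23\nX = np.zeros((3, 3), dtype=complex)       # X_3 in u(3)\nX[:2, 2] = z\nX[2, :2] = -z.conj()\nbeta = np.sqrt((z.conj() @ z).real)       # sqrt(<z_3|z_3>)\nzt = z / beta                             # normalized |z~_3>\nU = np.eye(3, dtype=complex)\nU[:2, :2] -= (1 - np.cos(beta)) * np.outer(zt, zt.conj())\nU[:2, 2] = np.sin(beta) * zt\nU[2, :2] = -np.sin(beta) * zt.conj()\nU[2, 2] = np.cos(beta)\nassert np.allclose(U, expm(X))            # block formula equals exp(X_3)\n\\end{verbatim}\n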
\n\n\\par \\vspace{3mm}\nWe make a comment on the case of $n=2$. Since \n\\[\n\\ket{\\tilde{z}}=\\frac{z}{|z|}\\equiv \\mathrm{e}^{i\\alpha},\\quad\n\\bra{\\tilde{z}}=\\frac{\\bar{z}}{|z|}=\\mathrm{e}^{-i\\alpha}\n\\ \\Longrightarrow\\\n\\ket{\\tilde{z}}\\bra{\\tilde{z}}=\\braket{\\tilde{z}}=1,\n\\]\nwe have\n\\begin{eqnarray}\n\\label{eq:Euler angle parametrization}\n\\mathrm{e}^{X_{0}}\\mathrm{e}^{X_{2}}\n&=&\\left(\n\\begin{array}{cc}\n\\mathrm{e}^{i\\theta_{1}} & \\\\\n & \\mathrm{e}^{i\\theta_{2}}\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cc}\n\\cos(|z|) & \\mathrm{e}^{i\\alpha}\\sin(|z|) \\\\\n-\\mathrm{e}^{-i\\alpha}\\sin(|z|) & \\cos(|z|)\n\\end{array}\n\\right) \\nonumber \\\\\n&=&\n\\left(\n\\begin{array}{cc}\n\\mathrm{e}^{i\\theta_{1}} & \\\\\n & \\mathrm{e}^{i\\theta_{2}}\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cc}\n\\mathrm{e}^{i\\alpha\/2} & \\\\\n & \\mathrm{e}^{-i\\alpha\/2}\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cc}\n\\cos(|z|) & \\sin(|z|) \\\\\n-\\sin(|z|) & \\cos(|z|)\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cc}\n\\mathrm{e}^{-i\\alpha\/2} & \\\\\n & \\mathrm{e}^{i\\alpha\/2}\n\\end{array}\n\\right) \\nonumber \\\\\n&=&\n\\left(\n\\begin{array}{cc}\n\\mathrm{e}^{i(\\theta_{1}+\\alpha\/2)} & \\\\\n & \\mathrm{e}^{i(\\theta_{2}-\\alpha\/2)}\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cc}\n\\cos(|z|) & \\sin(|z|) \\\\\n-\\sin(|z|) & \\cos(|z|)\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cc}\n\\mathrm{e}^{-i\\alpha\/2} & \\\\\n & \\mathrm{e}^{i\\alpha\/2}\n\\end{array}\n\\right).\n\\end{eqnarray}\nThis is just the Euler angle parametrization.\n\nTherefore the parametrization (\\ref{eq:the canonical coordinate of the second kind}) may be considered as a generalization of the Euler angle parametrization.\n\n\\par \\vspace{5mm}\nIn the following we set \n\\begin{eqnarray}\n\\label{eq:fundamental-modify}\nA_{0}\\equiv \\mathrm{e}^{X_{0}}\n&=&\n\\left(\n\\begin{array}{ccc}\n\\mathrm{e}^{i\\theta_{1}} & & \\\\\n & \\ddots & \\\\\n & & \\mathrm{e}^{i\\theta_{n}}\n\\end{array}\n\\right), \\nonumber \\\\\nA_{j}\\equiv \\mathrm{e}^{X_{j}}\n&=&\n\\left(\n\\begin{array}{ccc}\n{\\bf 1}_{j-1}-\\left(1-\\cos{\\beta_{j}}\\right)\n\\ket{\\tilde{z}_{j}}\\bra{\\tilde{z}_{j}}& \n\\sin{\\beta_{j}}\\ket{\\tilde{z}_{j}} & \\\\\n-\\sin{\\beta_{j}}\\bra{\\tilde{z}_{j}} & \\cos{\\beta_{j}} & \\\\\n & & {\\bf 1}_{n-j}\n\\end{array}\n\\right)\n\\end{eqnarray}\nfor $2\\leq j\\leq n$. More precisely, we write\n\\begin{equation}\n\\label{eq:modules}\nA_{0}=A_{0}(\\{\\theta_{1},\\theta_{2},\\cdots,\\theta_{n}\\}),\\quad \nA_{j}=A_{j}(\\{\\tilde{z}_{1j},\\tilde{z}_{2j},\\cdots,\\tilde{z}_{j-1,j}\\};\n\\beta_{j})\\quad \\textrm{for}\\quad j=2,\\cdots,n,\n\\end{equation}\nincluding the parameters which we can manipulate freely.\n\nFrom now on we regard $A_{j}$ as {\\bf a kind of module} of qudit theory for $j=0,2,\\cdots,n$; namely, $\\{A_{j}\\ |\\ j=0,2,\\cdots,n\\}$ becomes a complete set of modules. By combining them many times\\footnote{we take no account of an ordering or a uniqueness of $\\{A_{j}\\}$ in the expression (\\ref{eq:the canonical coordinate of the second kind})} we construct important matrices in qudit theory {\\bf in an explicit manner}.
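\n\nAs a consistency check (a remark of our own), let us count the real parameters carried by the right-hand side of (\\ref{eq:the canonical coordinate of the second kind}): $A_{0}$ carries the $n$ angles $\\theta_{1},\\cdots,\\theta_{n}$, while each $A_{j}$ carries the normalized complex vector $\\ket{\\tilde{z}_{j}}\\in {\\mathbf C}^{j-1}$ ($2(j-1)-1$ real parameters) together with $\\beta_{j}$, i.e. $2(j-1)$ real parameters in total. Hence the total number is\n\\[\nn+\\sum_{j=2}^{n}2(j-1)=n+n(n-1)=n^{2},\n\\]\nwhich is exactly the real dimension of $U(n)$, as it should be.\n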
\n\n\\section{Qudit Theory}\n\nLet us make a brief introduction to qudit theory. The theory is based on an atom with $n$ energy levels $\\{(\\ket{k},E_{k})\\ |\\ 0 \\leq k \\leq n-1\\}$; see Figure 1.\n\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{1mm} %\n\\begin{picture}(100,100)(0,0)\n\\put(0,80){\\makebox(15,10)[c]{$E_{n-1}$}}\n\\put(15,85){\\line(1,0){70}}\n\\put(85,80){\\makebox(18,10)[c]{$|{n-1}\\rangle$}}\n\\put(0,70){\\makebox(15,10)[c]{$E_{n-2}$}}\n\\put(15,75){\\line(1,0){70}}\n\\put(85,70){\\makebox(18,10)[c]{$|{n-2}\\rangle$}}\n\\put(5,50){\\makebox(10,10)[c]{$E_2$}}\n\\put(15,55){\\line(1,0){70}}\n\\put(85,50){\\makebox(10,10)[c]{$|2\\rangle$}}\n\\put(5,35){\\makebox(10,10)[c]{$E_1$}}\n\\put(15,40){\\line(1,0){70}}\n\\put(85,35){\\makebox(10,10)[c]{$|1\\rangle$}}\n\\put(5,15){\\makebox(10,10)[c]{$E_0$}}\n\\put(15,20){\\line(1,0){70}}\n\\put(85,15){\\makebox(10,10)[c]{$|0\\rangle$}}\n\\put(50,10){\\circle*{3}}\n\\put(50,60){$\\cdot$}\n\\put(50,65){$\\cdot$}\n\\put(50,70){$\\cdot$}\n\\put(50,30){\\vector(0,1){10}}\n\\put(50,30){\\vector(0,-1){10}}\n\\put(53,25){\\makebox(10,10)[c]{$\\omega_1$}}\n\\put(50,50){\\vector(0,1){5}}\n\\put(50,50){\\vector(0,-1){10}}\n\\put(53,42){\\makebox(10,10)[c]{$\\omega_2$}}\n\\put(50,80){\\vector(0,1){5}}\n\\put(50,80){\\vector(0,-1){5}}\n\\put(53,75){\\makebox(10,10)[c]{$\\omega_{n-1}$}}\n\\end{picture}\n\\vspace{-10mm}\n\\caption{Atom with $n$ energy levels}\n\\end{center}\n\\end{figure}\n\nFirst of all we summarize some properties of the Pauli matrices and the Walsh--Hadamard matrix, and next state the corresponding ones for the generalized Pauli matrices and the generalized Walsh--Hadamard matrix, to the extent that we need them. \n\nLet $\\{\\sigma_{1}, \\sigma_{2}, \\sigma_{3}\\}$ be the Pauli matrices: \n\\begin{equation}\n\\label{eq:pauli}\n\\sigma_{1} = \n\\left(\n \\begin{array}{cc}\n 0& 1 \\\\\n 1& 0\n \\end{array}\n\\right), \\quad \n\\sigma_{2} = \n\\left(\n \\begin{array}{cc}\n 0& -i \\\\\n i& 0\n \\end{array}\n\\right), \\quad \n\\sigma_{3} = \n\\left(\n \\begin{array}{cc}\n 1& 0 \\\\\n 0& -1\n \\end{array}\n\\right).\n\\end{equation}\nBy (\\ref{eq:pauli}) $\\sigma_{2}=i\\sigma_{1}\\sigma_{3}$, so that the essential elements of the Pauli matrices are $\\{\\sigma_{1}, \\sigma_{3}\\}$, and they satisfy\n\\begin{equation}\n\\sigma_{1}^{2}=\\sigma_{3}^{2}={\\bf 1}_{2}\\ ;\\quad \n\\sigma_{1}^{\\dagger}=\\sigma_{1},\\\n\\sigma_{3}^{\\dagger}=\\sigma_{3}\\ ;\\quad \n\\sigma_{3}\\sigma_{1}=-\\sigma_{1}\\sigma_{3}=\\mathrm{e}^{i\\pi}\\sigma_{1}\\sigma_{3}.\n\\end{equation}\n\nThe Walsh--Hadamard matrix is defined by \n\\begin{equation}\n \\label{eq:w-a}\n W = \\frac{1}{\\sqrt{2}}\n \\left(\n \\begin{array}{rr}\n 1& 1 \\\\\n 1& -1\n \\end{array}\n \\right)\\ \\in \\ O(2)\\ \\subset U(2).\n\\end{equation}\nThis matrix is unitary and plays a very important role in Quantum Computation. Moreover it is easy to realize it in Quantum Optics, as shown in \\cite{KF3}. Let us list some important properties of $W$:\n\\begin{eqnarray}\n\\label{eq:properties of W-H (1)}\n &&W^{2}={\\bf 1}_{2},\\ \\ W^{\\dagger}=W=W^{-1}, \\\\\n\\label{eq:properties of W-H (2)}\n &&\\sigma_{1}= W\\sigma_{3}W^{-1}.\n\\end{eqnarray}\nThe proof is very easy.
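\n\nFor instance, the property (\\ref{eq:properties of W-H (2)}) follows from the direct computation\n\\begin{equation*}\nW\\sigma_{3}W^{-1}\n=\\frac{1}{2}\n\\left(\\begin{array}{rr} 1& 1 \\\\ 1& -1 \\end{array}\\right)\n\\left(\\begin{array}{rr} 1& 0 \\\\ 0& -1 \\end{array}\\right)\n\\left(\\begin{array}{rr} 1& 1 \\\\ 1& -1 \\end{array}\\right)\n=\\frac{1}{2}\n\\left(\\begin{array}{rr} 1& -1 \\\\ 1& 1 \\end{array}\\right)\n\\left(\\begin{array}{rr} 1& 1 \\\\ 1& -1 \\end{array}\\right)\n=\\left(\\begin{array}{rr} 0& 1 \\\\ 1& 0 \\end{array}\\right)\n=\\sigma_{1}.\n\\end{equation*}\n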
\nLet $\\{\\Sigma_{1}, \\Sigma_{3}\\}$ be the following matrices in $M(n,{\\mathbf C})$\n\\begin{equation}\n\\label{eq:gener-pauli}\n\\Sigma_{1}=\n\\left(\n\\begin{array}{cccccc}\n0& & & & & 1 \\\\\n1& 0& & & & \\\\\n & 1& 0& & & \\\\\n & & 1& \\cdot& & \\\\\n & & & \\cdot& \\cdot& \\\\\n & & & & 1 & 0\n\\end{array}\n\\right), \\qquad\n\\Sigma_{3}=\n\\left(\n\\begin{array}{cccccc} \n1& & & & & \\\\\n & \\sigma& & & & \\\\\n & & {\\sigma}^2& & & \\\\\n & & & \\cdot& & \\\\\n & & & & \\cdot& \\\\\n & & & & & {\\sigma}^{n-1}\n\\end{array}\n\\right)\n\\end{equation}\nwhere $\\sigma$ is a primitive root of unity ${\\sigma}^{n}=1$ \n($\\sigma=\\mathrm{e}^{\\frac{2\\pi i}{n}}$). We note that\n\\[\n\\bar{\\sigma}=\\sigma^{n-1},\\quad\n1+\\sigma+\\cdots+\\sigma^{n-1}=0 .\n\\]\nTwo matrices\n$\\{\\Sigma_{1}, \\Sigma_{3}\\}$ are generalizations of the Pauli matrices\n$\\{\\sigma_{1}, \\sigma_{3}\\}$, but they are not hermitian.\nHere we list some of their important properties :\n\\begin{equation}\n\\Sigma_{1}^{n}=\\Sigma_{3}^{n}={\\bf 1}_{n}\\ ; \\quad\n\\Sigma_{1}^{\\dagger}=\\Sigma_{1}^{n-1},\\\n\\Sigma_{3}^{\\dagger}=\\Sigma_{3}^{n-1}\\ ; \\quad\n\\Sigma_{3}\\Sigma_{1}=\\sigma \\Sigma_{1}\\Sigma_{3}\\ .\n\\end{equation}\nFor $n=3$ and $n=4$ $\\Sigma_{1}$ and its powers are given respectively as \n\\begin{equation}\n\\label{eq:sigma-1-three}\n\\Sigma_{1}=\n\\left(\n\\begin{array}{ccc}\n 0 & & 1 \\\\\n 1 & 0 & \\\\\n & 1 & 0\n\\end{array}\n\\right),\\quad \n\\Sigma_{1}^{2}=\n\\left(\n\\begin{array}{ccc}\n 0 & 1 & \\\\\n & 0 & 1 \\\\\n 1 & & 0\n\\end{array}\n\\right) \n\\end{equation}\nand\n\\begin{equation}\n\\label{eq:sigma-1-four}\n\\Sigma_{1}=\n\\left(\n\\begin{array}{cccc}\n 0 & & & 1 \\\\\n 1 & 0 & & \\\\\n & 1 & 0 & \\\\\n & & 1 & 0\n\\end{array}\n\\right),\\quad \n\\Sigma_{1}^{2}=\n\\left(\n\\begin{array}{cccc}\n 0 & & 1 & \\\\\n & 0 & & 1 \\\\\n 1 & & 0 & \\\\\n & 1 & & 0\n\\end{array}\n\\right),\\quad \n\\Sigma_{1}^{3}=\n\\left(\n\\begin{array}{cccc}\n 0 & 1 & & \\\\\n & 0 & 1 & \\\\\n & & 0 & 1 \\\\\n 1 & & & 0\n\\end{array}\n\\right).\n\\end{equation}\n\nIf we define a Vandermonde matrix $W$ based on $\\sigma$ as\n\\begin{eqnarray}\n\\label{eq:Large-double}\nW&=&\\frac{1}{\\sqrt{n}}\n\\left(\n\\begin{array}{ccccccc}\n1& 1& 1& \\cdot & \\cdot & \\cdot & 1 \\\\\n1& \\sigma^{n-1}& \\sigma^{2(n-1)}& \\cdot& \\cdot& \\cdot& \\sigma^{(n-1)^2} \\\\\n1& \\sigma^{n-2}& \\sigma^{2(n-2)}& \\cdot& \\cdot& \\cdot& \\sigma^{(n-1)(n-2)} \\\\\n\\cdot& \\cdot & \\cdot & & & & \\cdot \\\\\n\\cdot& \\cdot & \\cdot & & & & \\cdot \\\\\n1& \\sigma^{2}& \\sigma^{4}& \\cdot & \\cdot & \\cdot & \\sigma^{2(n-1)} \\\\\n1& \\sigma & \\sigma^{2}& \\cdot& \\cdot& \\cdot& \\sigma^{n-1}\n\\end{array}\n\\right), \\\\\n\\label{eq:Large-double-dagger}\nW^{\\dagger}&=&\\frac{1}{\\sqrt{n}}\n\\left(\n\\begin{array}{ccccccc}\n1& 1& 1& \\cdot & \\cdot & \\cdot & 1 \\\\\n1& \\sigma& \\sigma^{2}& \\cdot& \\cdot& \\cdot& \\sigma^{n-1} \\\\\n1& \\sigma^{2}& \\sigma^{4}& \\cdot& \\cdot& \\cdot& \\sigma^{2(n-1)} \\\\\n\\cdot& \\cdot & \\cdot & & & & \\cdot \\\\\n\\cdot& \\cdot & \\cdot & & & & \\cdot \\\\\n1& \\sigma^{n-2}& \\sigma^{2(n-2)}& \\cdot& \\cdot& \\cdot& \\sigma^{(n-1)(n-2)} \\\\\n1& \\sigma^{n-1} & \\sigma^{2(n-1)}& \\cdot& \\cdot& \\cdot& \\sigma^{(n-1)^2}\n\\end{array}\n\\right),\n\\end{eqnarray}\nthen it is not difficult to see\n\\begin{eqnarray}\n\\label{eq:properties of G-W-H (1)}\n&&W^{\\dagger}W=WW^{\\dagger}={\\bf 1}_{n}, \\\\\n\\label{eq:properties of G-W-H 
(2)}\n&&\\Sigma_{1}=W\\Sigma_{3}W^{\\dagger}=W\\Sigma_{3}W^{-1}.\n\\end{eqnarray}\n\nSince $W$ corresponds to the Walsh--Hadamard matrix (\\ref{eq:w-a}), \nit may be possible to call $W$ the generalized Walsh--Hadamard matrix. \nIf we write $W^{\\dagger}=(w_{ab})$, then \n\\[\nw_{ab}=\\frac{1}{\\sqrt{n}}\\sigma^{ab}=\n\\frac{1}{\\sqrt{n}}\\mathrm{exp}\\left(\\frac{2\\pi i}{n}ab\\right) \n\\quad \\textrm{for}\\quad 0\\leq a,\\ b \\leq n-1. \n\\]\nThis is just the coefficient matrix of Discrete Fourier Transform (DFT) \nif $n=2^{k}$ for some $k \\in {\\mathbf N}$, see \\cite{PWS}. \n\nFor $n=3$ and $n=4$ $W$ is given respectively as \n\\begin{equation}\n\\label{eq:w-a-three}\nW=\n\\frac{1}{\\sqrt{3}}\n\\left(\n\\begin{array}{ccc}\n1& 1& 1 \\\\\n1& \\sigma^{2} & \\sigma \\\\\n1& \\sigma & \\sigma^{2}\n\\end{array}\n\\right)\n=\n\\frac{1}{\\sqrt{3}}\n\\left(\n\\begin{array}{ccc}\n1& 1& 1 \\\\\n1& \\frac{-1-i\\sqrt{3}}{2} & \\frac{-1+i\\sqrt{3}}{2} \\\\\n1& \\frac{-1+i\\sqrt{3}}{2} & \\frac{-1-i\\sqrt{3}}{2}\n\\end{array}\n\\right)\n\\end{equation}\nand \n\\begin{equation}\n\\label{eq:w-a-four}\nW=\n\\frac{1}{2}\n\\left(\n\\begin{array}{cccc}\n 1 & 1 & 1 & 1 \\\\\n 1 & \\sigma^{3} & \\sigma^{2} & \\sigma \\\\\n 1 & \\sigma^{2} & 1 & \\sigma^{2} \\\\\n 1 & \\sigma & \\sigma^{2} & \\sigma^{3}\n\\end{array}\n\\right)\n=\n\\frac{1}{2}\n\\left(\n\\begin{array}{cccc}\n 1 & 1 & 1 & 1 \\\\\n 1 & -i & -1 & i \\\\\n 1 & -1 & 1 & -1 \\\\\n 1 & i & -1 & -i\n\\end{array}\n\\right).\n\\end{equation}\n\n\\vspace{5mm}\nWe note that the generalized Pauli and Walsh--Hadamard matrices in three and \nfour level systems can be constructed in a quantum optical manner (by using \nRabi oscillations of several types), see \\cite{KF3} and \\cite{FHKW}. \nConcerning an interesting application of the generalized Walsh--Hadamard one \nin three and four level systems to algebraic equation, see \\cite{KF4}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Explicit Construction of the Generalized Pauli and \\\\\nGeneralized Walsh--Hadamard Matrices}\n\nFirst let us construct the generalized Pauli matrices. \nFrom (\\ref{eq:modules}) it is easy to see\n\\begin{equation}\n\\label{eq:Sigma-3}\nA_{0}(\\{0,2\\pi\/n,4\\pi\/n,\\cdots,2(n-1)\\pi\/n\\})=\\Sigma_{3}.\n\\end{equation}\nNext we construct $\\Sigma_{1}$. From (\\ref{eq:modules}) we also set\n\\begin{equation}\nA_{j}=A_{j}(\\{0,\\cdots,0,1\\};\\pi\/2)=\n\\left(\n\\begin{array}{cccc}\n{\\bf 1}_{j-2} & & & \\\\\n & 0 & 1 & \\\\\n & -1 & 0 & \\\\\n & & & {\\bf 1}_{n-j}\n\\end{array}\n\\right)\n\\end{equation}\nfor $j=2,\\cdots,n$. 
Then it is not difficult to see\n\\begin{equation}\nA_{2}A_{3}\\cdots A_{n}\n=\n\\left(\n\\begin{array}{ccccc}\n0 & & & & 1 \\\\\n-1 & 0 & & & \\\\\n0 & -1 & 0 & & \\\\\n & & \\ddots & \\ddots & \\\\\n & & & -1 & 0 \n\\end{array}\n\\right).\n\\end{equation}\nTherefore, if we choose $A_{0}$ as\n\\begin{equation}\nA_{0}=A_{0}(\\{0,\\pi,\\cdots,\\pi\\})\n=\n\\left(\n\\begin{array}{ccccc}\n1 & & & & \\\\\n & -1 & & & \\\\\n & & -1 & & \\\\\n & & & \\ddots & \\\\\n & & & & -1\n\\end{array}\n\\right)\n\\end{equation}\nthen we finally obtain \n\\begin{equation}\n\\label{eq:Sigma-1}\nA_{0}A_{2}A_{3}\\cdots A_{n}=\\Sigma_{1}.\n\\end{equation}\nFrom (\\ref{eq:Sigma-3}) and (\\ref{eq:Sigma-1}) we can construct all the \ngeneralized Pauli matrices \\\\\n$\\left\\{\\mathrm{e}^{i\\phi(a,b)}\\Sigma_{1}^{a}\\Sigma_{3}^{b}\\ |\\ 0\\leq a,\\ b\\leq \nn-1\\right\\}$, \nwhere $\\mathrm{e}^{i\\phi(a,b)}$ is some phase depending on $a$ and $b$.\n\n\\vspace{3mm}\nSimilarly we can construct the matrix\n\\begin{equation}\nK=\n\\left(\n\\begin{array}{cccccc}\n1 & & & & & \\\\\n & & & & & 1 \\\\\n & & & & 1 & \\\\\n & & & \\cdot & & \\\\\n & & \\cdot & & & \\\\\n & 1 & & & & \n\\end{array}\n\\right)\n\\end{equation}\nas follows. If $n=2k$, then\n\\begin{eqnarray}\n&&A_{0}(\\{\\underbrace{0,\\cdots,0}_{\\mbox{$k+1$}},\n\\underbrace{\\pi,\\cdots,\\pi}_{\\mbox{$k-1$}}\\})\nA_{k+2}(\\{0,\\cdots,0,1,0\\};\\pi\/2)\nA_{k+3}(\\{0,\\cdots,0,1,0,0\\};\\pi\/2)\n\\cdots \\nonumber \\\\\n\\times &&A_{2k-1}(\\{0,0,1,0,\\cdots,0\\};\\pi\/2)\nA_{2k}(\\{0,1,0,\\cdots,0\\};\\pi\/2)\n=K\n\\end{eqnarray}\nand if $n=2k-1$, then\n\\begin{eqnarray}\n&&A_{0}(\\{\\underbrace{0,\\cdots,0}_{\\mbox{$k$}},\n\\underbrace{\\pi,\\cdots,\\pi}_{\\mbox{$k-1$}}\\})\nA_{k+1}(\\{0,\\cdots,0,1\\};\\pi\/2)\nA_{k+2}(\\{0,\\cdots,0,1,0\\};\\pi\/2)\n\\cdots \\nonumber \\\\\n\\times &&A_{2k-2}(\\{0,0,1,0,\\cdots,0\\};\\pi\/2)\nA_{2k-1}(\\{0,1,0,\\cdots,0\\};\\pi\/2)\n=K.\n\\end{eqnarray}\n\nBoth $\\Sigma_{1}$ and $K$ play an important role in constructing the exchange \n(swap) gate in two--qudit systems, as in Figure 2, where $\\Sigma=\\Sigma_{1}$. 
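Since such explicit products are easy to get wrong, a direct numerical check is useful. The following minimal sketch (in Python with {\\tt numpy}; the helper functions {\\tt A0} and {\\tt Aj} are our own illustrative encoding of the modules (\\ref{eq:modules}) with real arguments) verifies (\\ref{eq:Sigma-3}) and (\\ref{eq:Sigma-1}) for $n=5$ :\n\\begin{verbatim}\nimport numpy as np\n\ndef A0(thetas):\n    # diagonal module A_0({theta_1,...,theta_n})\n    return np.diag(np.exp(1j * np.array(thetas)))\n\ndef Aj(n, j, z, beta):\n    # module A_j({z_1,...,z_{j-1}}; beta): z is a unit vector of length j-1\n    z = np.asarray(z, dtype=complex).reshape(-1, 1)\n    A = np.eye(n, dtype=complex)\n    A[:j-1, :j-1] -= (1 - np.cos(beta)) * (z @ z.conj().T)\n    A[:j-1, j-1] = (np.sin(beta) * z).ravel()\n    A[j-1, :j-1] = -np.sin(beta) * z.conj().ravel()\n    A[j-1, j-1] = np.cos(beta)\n    return A\n\nn = 5\nsigma = np.exp(2j * np.pi / n)\nSigma3 = np.diag(sigma ** np.arange(n))\nSigma1 = np.roll(np.eye(n), 1, axis=0)   # cyclic shift matrix\n\n# A_0({0, 2pi/n, ..., 2(n-1)pi/n}) = Sigma_3\nassert np.allclose(A0(2 * np.pi * np.arange(n) / n), Sigma3)\n\n# A_0({0, pi, ..., pi}) A_2 A_3 ... A_n = Sigma_1\nP = A0([0] + [np.pi] * (n - 1))\nfor j in range(2, n + 1):\n    e = np.zeros(j - 1); e[-1] = 1.0\n    P = P @ Aj(n, j, e, np.pi / 2)\nassert np.allclose(P, Sigma1)\n\\end{verbatim}\n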
To be more precise, see \\cite{KF5}.\n\n\\begin{figure}\n\\begin{center}\n\\setlength{\\unitlength}{1mm}\n\\begin{picture}(200,35)\n\\put(25,30){\\line(1,0){42}} \n\\put(25,5){\\line(1,0){10}} \n\\put(41,5){\\line(1,0){11}} \n\\put(58,5){\\line(1,0){27}} \n\\put(55,8){\\line(0,1){22}} \n\\put(38,5){\\circle{6}} \n\\put(35,0){\\makebox(6,10){K}} \n\\put(14,25){\\makebox(9,10)[r]{$|a\\rangle$}} \n\\put(14,0){\\makebox(9,10)[r]{$|b\\rangle$}} \n\\put(52,25){\\makebox(6,10){$\\bullet$}} \n\\put(55,5){\\circle{6}} \n\\put(52,0){\\makebox(6,10){$\\Sigma$}}\n\\put(70,30){\\circle{6}} \n\\put(67,25){\\makebox(6,10){K}} \n\\put(73,30){\\line(1,0){9}} \n\\put(85,5){\\line(1,0){27}} \n\\put(88,30){\\line(1,0){9}} \n\\put(103,30){\\line(1,0){12}} \n\\put(85,5){\\line(0,1){22}} \n\\put(82,0){\\makebox(6,10){$\\bullet$}} \n\\put(85,30){\\circle{6}} \n\\put(82,25){\\makebox(6,10){$\\Sigma$}} \n\\put(100,30){\\circle{6}} \n\\put(97,25){\\makebox(6,10){K}} \n\\put(115,30){\\line(1,0){22}} \n\\put(118,5){\\line(1,0){19}} \n\\put(115,8){\\line(0,1){22}} \n\\put(135,25){\\makebox(9,10)[r]{$|b\\rangle$}} \n\\put(135,0){\\makebox(9,10)[r]{$|a\\rangle$}} \n\\put(112,25){\\makebox(6,10){$\\bullet$}} \n\\put(115,5){\\circle{6}} \n\\put(112,0){\\makebox(6,10){$\\Sigma$}}\n\\end{picture}\n\\caption{Exchange gate on two--qudit system}\n\\end{center}\n\\end{figure}\n\\par \\noindent\n\nIt is interesting to note the simple relation\n\\begin{equation}\nW^{2}=K.\n\\end{equation}\nNamely, the generalized Walsh--Hadamard matrix $W$ (\\ref{eq:Large-double}) is \na square root of $K$.\n\n\n\n\n\n\\vspace{5mm}\nSecond we want to construct the generalized Walsh--Hadamard matrix, which is \nhowever very hard. Let us show only the case of $n=3$, $4$ and $5$. \n\\begin{flushleft}\n(a)\\ $n$ = 3\\ :\\ For (\\ref{eq:w-a-three}) we have\n\\end{flushleft} \n\\begin{equation}\n\\label{eq:Walsh-Hadamard 3}\nW=A_{0}A_{3}A_{2}A_{0}^{'}\n\\end{equation}\nwhere each of matrices is given by\n\\begin{eqnarray*}\nA_{0}&=&A_{0}(\\{0,2\\pi\/3,4\\pi\/3\\})=\n\\left(\n\\begin{array}{ccc}\n1 & & \\\\\n & \\mathrm{e}^{i2\\pi\/3} & \\\\\n & & \\mathrm{e}^{i4\\pi\/3}\n\\end{array}\n\\right), \\\\\nA_{0}^{'}&=&A_{0}^{'}(\\{-\\pi\/12,7\\pi\/12,0\\})=\n\\left(\n\\begin{array}{ccc}\n\\mathrm{e}^{-i\\pi\/12} & & \\\\\n & \\mathrm{e}^{i7\\pi\/12} & \\\\\n & & 1\n\\end{array}\n\\right)\n\\end{eqnarray*}\nand\n\\[\nA_{3}=A_{3}\\left(\\{1\/\\sqrt{2},1\/\\sqrt{2}\\};\\cos^{-1}(1\/\\sqrt{3})\\right)\n=\\frac{1}{\\sqrt{3}}\n\\left(\n\\begin{array}{rrr}\n \\frac{\\sqrt{3}+1}{2} & -\\frac{\\sqrt{3}-1}{2} & 1 \\\\\n-\\frac{\\sqrt{3}-1}{2} & \\frac{\\sqrt{3}+1}{2} & 1 \\\\\n-1 & -1 & 1\n\\end{array}\n\\right)\n\\]\nand\n\\[\nA_{2}=A_{2}\\left(\\{\\mathrm{e}^{-i\\pi\/2}\\};\\pi\/4\\right)\n=\n\\left(\n\\begin{array}{rrr}\n \\frac{1}{\\sqrt{2}} & -\\frac{i}{\\sqrt{2}} & \\\\\n-\\frac{i}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}} & \\\\\n & & 1\n\\end{array}\n\\right).\n\\]\nHere we have used \n\\[\n\\cos(\\pi\/12)=\\frac{\\sqrt{6}+\\sqrt{2}}{4}\\quad \\mbox{and}\\quad\n\\sin(\\pi\/12)=\\frac{\\sqrt{6}-\\sqrt{2}}{4}.\n\\]\nIn this case, the number of modules is $4$.\n\n\n\\begin{flushleft}\n(b)\\ $n$ = 4 :\\ For (\\ref{eq:w-a-four}) we have\n\\end{flushleft}\n\\begin{equation}\n\\label{eq:Walsh-Hadamard 4}\nW=A_{0}A_{4}SA_{3}A_{2}A_{0}^{'}S\n\\end{equation}\nwhere each of matrices is given by\n\\begin{eqnarray*}\nA_{0}&=&A_{0}(\\{0,2\\pi\/4,4\\pi\/4,6\\pi\/4\\})=\n\\left(\n\\begin{array}{cccc}\n1 & & & \\\\\n & i & & \\\\\n & & -1 & \\\\\n & & & -i\n\\end{array}\n\\right), 
\\\\\nA_{0}^{'}&=&A_{0}(\\{\\pi\/4,5\\pi\/4,0,0\\})=\n\\left(\n\\begin{array}{cccc}\n \\frac{1+i}{\\sqrt{2}} & & & \\\\\n & -\\frac{1+i}{\\sqrt{2}} & & \\\\\n & & 1 & \\\\\n & & & 1\n\\end{array}\n\\right)\n\\end{eqnarray*}\nand\n\\[\nA_{4}=A_{4}\\left(\\{1\/\\sqrt{3},1\/\\sqrt{3},1\/\\sqrt{3}\\};\\pi\/3\\right)=\n\\left(\n\\begin{array}{rrrr}\n \\frac{5}{6} & -\\frac{1}{6} & -\\frac{1}{6} & \\frac{1}{2} \\\\\n-\\frac{1}{6} & \\frac{5}{6} & -\\frac{1}{6} & \\frac{1}{2} \\\\\n-\\frac{1}{6} & -\\frac{1}{6} & \\frac{5}{6} & \\frac{1}{2} \\\\\n-\\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2}\n\\end{array}\n\\right)\n\\]\nand\n\\[\nA_{3}=A_{3}\\left(\\{1\/\\sqrt{2},1\/\\sqrt{2}\\};\\cos^{-1}(-1\/3)\\right)=\n\\left(\n\\begin{array}{rrrr}\n \\frac{1}{3} & -\\frac{2}{3} & \\frac{2}{3} & \\\\\n-\\frac{2}{3} & \\frac{1}{3} & \\frac{2}{3} & \\\\\n-\\frac{2}{3} & -\\frac{2}{3} & -\\frac{1}{3} & \\\\\n & & & 1\n\\end{array}\n\\right)\n\\]\nand\n\\[\nS=A_{0}(\\{0,0,\\pi,0\\})A_{3}(\\{0,1\\};\\pi\/2)=\n\\left(\n\\begin{array}{cccc}\n1 & & & \\\\\n & 0 & 1 & \\\\\n & 1 & 0 & \\\\\n & & & 1\n\\end{array}\n\\right)\n\\]\nand\n\\[\nA_{2}=A_{2}\\left(\\{\\mathrm{e}^{i\\pi\/2}\\};\\pi\/4\\right)=\n\\left(\n\\begin{array}{cccc}\n \\frac{1}{\\sqrt{2}} & \\frac{i}{\\sqrt{2}} & & \\\\\n \\frac{i}{\\sqrt{2}} & \\frac{1}{\\sqrt{2}} & & \\\\\n & & 1 & \\\\\n & & & 1\n\\end{array}\n\\right).\n\\]\nIn this case, the number of modules is $9$.\n\n\nLast we show a calculation for the case of $n=5$. However, it is \nrelatively complicated as shown in the following. \n\\begin{flushleft}\n(c)\\ $n$ = 5 :\\ We have\n\\end{flushleft}\n\\begin{equation}\n\\label{eq:Walsh-Hadamard 5}\nW=A_{0}A_{5}A_{4}S_{1}A_{3}S_{2}A_{0}^{'}\n\\end{equation}\nwhere each of matrices is given by\n\\begin{eqnarray*}\nA_{0}&=&A_{0}(\\{0,2\\pi\/5,4\\pi\/5,6\\pi\/5,8\\pi\/5\\})=\n\\left(\n\\begin{array}{ccccc}\n1 & & & & \\\\\n & \\sigma & & & \\\\\n & & \\sigma^{2} & & \\\\\n & & & \\sigma^{3} & \\\\\n & & & & \\sigma^{4}\n\\end{array}\n\\right), \\\\\nA_{0}^{'}&=&A_{0}(\\{9\\pi\/10,13\\pi\/10,-3\\pi\/10,\\pi\/10,0\\})=\n\\left(\n\\begin{array}{ccccc}\n\\mathrm{e}^{i9\\pi\/10} & & & & \\\\\n & \\mathrm{e}^{i13\\pi\/10} & & & \\\\\n & & \\mathrm{e}^{-i3\\pi\/10} & & \\\\\n & & & \\mathrm{e}^{i\\pi\/10} & \\\\\n & & & & 1\n\\end{array}\n\\right) \n\\end{eqnarray*}\nwhere \n\\[\n\\sigma=\\mathrm{e}^{i2\\pi\/5}=\\cos(2\\pi\/5)+i\\sin(2\\pi\/5)=\n\\frac{\\sqrt{5}-1}{4}+i\\frac{\\sqrt{10+2\\sqrt{5}}}{4}\n\\]\nand\n\\begin{eqnarray*}\nA_{5}\n&=&A_{5}\\left(\\{1\/2,1\/2,1\/2,1\/2\\};\\cos^{-1}(1\/\\sqrt{5})\\right) \\nonumber \\\\\n&=&\n\\frac{1}{\\sqrt{5}}\n\\left(\n\\begin{array}{ccccc}\n \\frac{3\\sqrt{5}+1}{4} & -\\frac{\\sqrt{5}-1}{4} & -\\frac{\\sqrt{5}-1}{4} & \n -\\frac{\\sqrt{5}-1}{4} & 1 \\\\\n-\\frac{\\sqrt{5}-1}{4} & \\frac{3\\sqrt{5}+1}{4} & -\\frac{\\sqrt{5}-1}{4} & \n -\\frac{\\sqrt{5}-1}{4} & 1 \\\\\n-\\frac{\\sqrt{5}-1}{4} & -\\frac{\\sqrt{5}-1}{4} & \\frac{3\\sqrt{5}+1}{4} & \n -\\frac{\\sqrt{5}-1}{4} & 1 \\\\\n-\\frac{\\sqrt{5}-1}{4} & -\\frac{\\sqrt{5}-1}{4} & -\\frac{\\sqrt{5}-1}{4} & \\frac{3\\sqrt{5}+1}{4} & 1 \\\\\n-1 & -1 & -1& -1 & 1\n\\end{array}\n\\right)\n\\end{eqnarray*}\nand\n\\[\nA_{4}=A_{4}\\left(\\{a\/u,\\alpha\/u,-\\bar{\\alpha}\/u\\};\\theta_{4}\\right)\n=\n\\left(\n\\begin{array}{ccccc}\n1-sa^{2} & -sa\\bar{\\alpha} & sa\\alpha & \\frac{a}{\\sqrt{5}} & \\\\\n-sa\\alpha & 1-s|\\alpha|^{2} & s\\alpha^{2} & \\frac{\\alpha}{\\sqrt{5}} & \\\\\nsa\\bar{\\alpha} & s\\bar{\\alpha}^{2} & 1-s|\\alpha|^{2} & 
\n-\\frac{\\bar{\\alpha}}{\\sqrt{5}} & \\\\\n-\\frac{a}{\\sqrt{5}} & -\\frac{\\bar{\\alpha}}{\\sqrt{5}} & \n\\frac{\\alpha}{\\sqrt{5}} & -\\frac{a}{\\sqrt{5}} & \\\\\n & & & & 1\n\\end{array}\n\\right) \n\\]\nwhere \n\\begin{eqnarray*}\n&&a\\equiv \\sin(2\\pi\/5),\\quad \n\\alpha\\equiv \\sin(\\pi\/5)+i\\frac{\\sqrt{5}}{2}=\n\\frac{\\sqrt{10-2\\sqrt{5}}}{4}+i\\frac{\\sqrt{5}}{2},\\quad \n\\cos(\\theta_{4})\\equiv -\\frac{a}{\\sqrt{5}} \\\\\n&&u\\equiv \\sqrt{4+\\cos^{2}(2\\pi\/5)},\\quad \ns\\equiv \n\\frac{2(35+\\sqrt{5})}{305}\\left(1+\\frac{\\sin(2\\pi\/5)}{\\sqrt{5}}\\right)\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\nS_{1}&=&A_{0}(\\{0,\\pi,\\pi,0,0\\})A_{2}(\\{1\\};\\pi\/2)A_{3}(\\{0,1\\};\\pi\/2)=\n\\left(\n\\begin{array}{ccccc}\n0 & & 1 & & \\\\\n1 & 0 & & & \\\\\n & 1 & 0 & & \\\\\n & & & 1 & \\\\\n & & & & 1\n\\end{array}\n\\right), \\\\\nS_{2}&=&A_{0}(\\{0,0,\\pi,0,0\\})A_{3}(\\{1,0\\};\\pi\/2)=\n\\left(\n\\begin{array}{ccccc}\n0 & & 1 & & \\\\\n & 1 & & & \\\\\n1 & & 0 & & \\\\\n & & & 1 & \\\\\n & & & & 1\n\\end{array}\n\\right) \n\\end{eqnarray*}\nand\n\\[\nA_{3}=A_{3}\\left(\\{-\\beta\/v,\\bar{\\beta}\/v\\};\\theta_{3}\\right)\n=\n\\left(\n\\begin{array}{ccccc}\n1-\\frac{|\\beta|^{2}}{\\sqrt{5}\\left(\\sqrt{5}-(a+ta^{2})\\right)} & \n\\frac{\\beta^{2}}{\\sqrt{5}\\left(\\sqrt{5}-(a+ta^{2})\\right)} & \n-\\frac{\\beta}{\\sqrt{5}} & & \\\\\n\\frac{\\bar{\\beta}^{2}}{\\sqrt{5}\\left(\\sqrt{5}-(a+ta^{2})\\right)} & \n1-\\frac{|\\beta|^{2}}{\\sqrt{5}\\left(\\sqrt{5}-(a+ta^{2})\\right)} & \n\\frac{\\bar{\\beta}}{\\sqrt{5}} & & \\\\\n\\frac{\\bar{\\beta}}{\\sqrt{5}} & -\\frac{\\beta}{\\sqrt{5}} & \n-\\frac{a+ta^{2}}{\\sqrt{5}} & & \\\\\n & & & 1 & \\\\\n & & & & 1\n\\end{array}\n\\right) \n\\]\nwhere \n\\begin{eqnarray*}\n&&\\beta\\equiv \\bar{\\alpha}+ta\\alpha,\\quad \nv\\equiv \\sqrt{5-(a+ta^{2})^{2}}=\\sqrt{2}|\\beta|,\\quad \n\\cos(\\theta_{3})\\equiv -\\frac{a+ta^{2}}{\\sqrt{5}} \\\\\n&&\nt\\equiv \\frac{2(7\\sqrt{5}+1)}{61}\\left(1+\\frac{\\sin(2\\pi\/5)}{\\sqrt{5}}\\right).\n\\end{eqnarray*}\n\n\\vspace{5mm}\nA comment is in order.\n\n\\par \\noindent\n(1) Our construction is not necessarily minimal, namely a number of \nmodules can be reduced, see \\cite{KuF2}.\n\n\\par \\noindent\n(2) For $n \\geq 6$ we have not succeeded in obtaining the formula like \n(\\ref{eq:Walsh-Hadamard 3}) or (\\ref{eq:Walsh-Hadamard 4}) or \n(\\ref{eq:Walsh-Hadamard 5}). \nIn general, a calculation tends to become more and more complicated as $n$ \nbecomes large. \n\n\\par \\noindent\n(3) The heart of quantum computation is a superposition of (possible) states. \nTherefore a superposition like \n\\[\n\\ket{0} \\longrightarrow \n\\frac{\\ket{0}+\\ket{1}+\\ket{2}+\\ket{3}+\\ket{4}}{\\sqrt{5}}\n\\]\nin the $5$ level system must be constructed {\\bf in a quick and clean manner}. \nThe generalized Walsh--Hadamard matrix just gives such a superposition. \nOur construction in the system seems to be complicated, from which one may be \nable to conclude that a qudit theory with $n \\geq 5$ is not realistic. \n\n\n\n\n\n\\section{Discussion}\n\nIn this paper we treated the Jarlskog's parametrization of unitary \nmatrices as a complete set of modules in qudit theory and \nconstructed the generalized Pauli matrices in the general case and \ngeneralized Walsh--Hadamard matrix in the case of $n=3$, $4$ and $5$. \n\nIn spite of every efforts we could not construct the Walsh--Hadamard \nmatrix in the general case, so its construction is left as a future task.\n\nHowever, our view is negative to this problem. 
\nIn general, a calculation to construct it tends to become more and more \ncomplicated as $n$ becomes large. See (\\ref{eq:Walsh-Hadamard 3}) and \n(\\ref{eq:Walsh-Hadamard 4}) and (\\ref{eq:Walsh-Hadamard 5}). \nTherefore it may be possible to say that a qudit theory with $n \\geq 5$ \nis not realistic. Further study will be required.\n\nOur next task is to realize these modules $\\{A_{j}|\\ j=0,2,\\cdots,n\\}$ \nin a quantum optical method, which will be discussed in a forthcoming paper.\n\n\n\n\n\n\\vspace{5mm}\n\\noindent{\\em Acknowledgment.}\\\\\nK. Fujii wishes to thank Mikio Nakahara and Shin'ichi Nojiri for their \nhelpful comments and suggestions. \n\n\n\n\n\n\n\n\\vspace{5mm}\n\\begin{center}\n \\begin{Large}\n\\noindent{\\bfseries Appendix\\quad Proof of the Formula (\\ref{eq:fundamental})}\n \\end{Large}\n\\end{center}\n\nIn this appendix we derive the formula (\\ref{eq:fundamental}) to make \nthe paper self--contained. \n\n\\par \\noindent\nSince\n\\[\nX_{j}=\n\\left(\n\\begin{array}{ccc}\n{\\bf 0}_{j-1} & \\ket{z_{j}} & \\\\\n-\\bra{z_{j}} & 0 & \\\\\n& & {\\bf 0}_{n-j}\n\\end{array}\n\\right)\n\\equiv \n\\left(\n\\begin{array}{cc}\nK & \\\\\n & {\\bf 0}_{n-j}\n\\end{array}\n\\right)\n\\quad \\Longrightarrow \\quad\n\\mathrm{e}^{X_{j}}=\n\\left(\n\\begin{array}{cc}\n \\mathrm{e}^{K} & \\\\\n & {\\bf 1}_{n-j}\n\\end{array}\n\\right)\n\\]\nwe have only to calculate the term $\\mathrm{e}^{K}$, which is an easy task. \nFrom\n\\begin{eqnarray}\nK&=&\n\\left(\n\\begin{array}{cc}\n{\\bf 0}_{j-1} & \\ket{z_{j}} \\\\\n-\\bra{z_{j}} & 0\n\\end{array}\n\\right),\\quad \nK^{2}=\n\\left(\n\\begin{array}{cc}\n-\\ket{z_{j}}\\bra{z_{j}} & \\\\\n & -\\braket{z_{j}}\n\\end{array}\n\\right), \\nonumber \\\\\nK^{3}&=&\n\\left(\n\\begin{array}{cc}\n{\\bf 0}_{j-1} & -\\braket{z_{j}}\\ket{z_{j}} \\\\\n\\braket{z_{j}}\\bra{z_{j}} & 0 \n\\end{array}\n\\right)\n=-\\braket{z_{j}}K\n\\end{eqnarray}\nwe have important relations\n\\[\nK^{2n+1}=\\left(-\\braket{z_{j}}\\right)^{n}K,\\quad \nK^{2n+2}=\\left(-\\braket{z_{j}}\\right)^{n}K^{2}\n\\quad \\mbox{for}\\quad n\\geq 0.\n\\]\nTherefore\n\\begin{eqnarray*}\n\\mathrm{e}^{K}\n&=&{\\bf 1}_{j}+\\sum_{n=0}^{\\infty}\\frac{1}{(2n+2)!}K^{2n+2}\n +\\sum_{n=0}^{\\infty}\\frac{1}{(2n+1)!}K^{2n+1} \\\\\n&=&{\\bf 1}_{j}\n+\\sum_{n=0}^{\\infty}\\frac{1}{(2n+2)!}\\left(-\\braket{z_{j}}\\right)^{n}K^{2}\n+\\sum_{n=0}^{\\infty}\\frac{1}{(2n+1)!}\\left(-\\braket{z_{j}}\\right)^{n}K \\\\\n&=&{\\bf 1}_{j}\n+\\sum_{n=0}^{\\infty}(-1)^{n}\\frac{\\left(\\sqrt{\\braket{z_{j}}}\\right)^{2n}}\n{(2n+2)!}K^{2}\n+\\sum_{n=0}^{\\infty}(-1)^{n}\\frac{\\left(\\sqrt{\\braket{z_{j}}}\\right)^{2n}}\n{(2n+1)!}K \\\\\n&=&{\\bf 1}_{j}\n-\\frac{1}{\\braket{z_{j}}}\\sum_{n=0}^{\\infty}(-1)^{n+1}\n\\frac{\\left(\\sqrt{\\braket{z_{j}}}\\right)^{2n+2}}{(2n+2)!}K^{2}\n+\\frac{1}{\\sqrt{\\braket{z_{j}}}}\\sum_{n=0}^{\\infty}(-1)^{n}\n\\frac{\\left(\\sqrt{\\braket{z_{j}}}\\right)^{2n+1}}{(2n+1)!}K \\\\\n&=&{\\bf 1}_{j}\n-\\frac{1}{\\braket{z_{j}}}\\sum_{n=1}^{\\infty}(-1)^{n}\n\\frac{\\left(\\sqrt{\\braket{z_{j}}}\\right)^{2n}}{(2n)!}K^{2}\n+\\frac{1}{\\sqrt{\\braket{z_{j}}}}\\sum_{n=0}^{\\infty}(-1)^{n}\n\\frac{\\left(\\sqrt{\\braket{z_{j}}}\\right)^{2n+1}}{(2n+1)!}K \\\\\n&=&{\\bf 1}_{j}\n-\\frac{1}{\\braket{z_{j}}}\\left(\\cos(\\sqrt{\\braket{z_{j}}})-1\\right)K^{2}\n+\\frac{\\sin(\\sqrt{\\braket{z_{j}}})}{\\sqrt{\\braket{z_{j}}}}K \\\\\n&=&{\\bf 
1}_{j}\n+\\left(1-\\cos(\\sqrt{\\braket{z_{j}}})\\right)\\frac{1}{\\braket{z_{j}}}K^{2}\n+\\sin(\\sqrt{\\braket{z_{j}}})\\frac{1}{\\sqrt{\\braket{z_{j}}}}K.\n\\end{eqnarray*}\n\nIf we define a normalized vector as\n\\[\n\\ket{\\tilde{z}_{j}}=\\frac{1}{\\sqrt{\\braket{z_{j}}}}\\ket{z_{j}}\n\\ \\Longrightarrow\\ \\braket{\\tilde{z}_{j}}=1\n\\]\nthen \n\\[\n\\frac{1}{\\sqrt{\\braket{z_{j}}}}K\n=\n\\left(\n\\begin{array}{cc}\n{\\bf 0}_{j-1} & \\ket{\\tilde{z}_{j}} \\\\\n-\\bra{\\tilde{z}_{j}} & 0 \n\\end{array}\n\\right),\\quad \n\\frac{1}{\\braket{z_{j}}}K^{2}\n=\n\\left(\n\\begin{array}{cc}\n-\\ket{\\tilde{z}_{j}}\\bra{\\tilde{z}_{j}} & \\\\\n & -1\n\\end{array}\n\\right).\n\\]\nTherefore\n\\begin{equation}\n\\mathrm{e}^{K}\n=\n\\left(\n\\begin{array}{cc}\n{\\bf 1}_{j-1}-\\left(1-\\cos(\\sqrt{\\braket{z_{j}}})\\right)\n\\ket{\\tilde{z}_{j}}\\bra{\\tilde{z}_{j}}& \n\\sin(\\sqrt{\\braket{z_{j}}})\\ket{\\tilde{z}_{j}} \\\\\n-\\sin(\\sqrt{\\braket{z_{j}}})\\bra{\\tilde{z}_{j}} & \n\\cos(\\sqrt{\\braket{z_{j}}})\n\\end{array}\n\\right).\n\\end{equation}\nAs a result we obtain the formula (\\ref{eq:fundamental})\n\\begin{equation}\n\\mathrm{e}^{X_{j}}\n=\n\\left(\n\\begin{array}{ccc}\n{\\bf 1}_{j-1}-\\left(1-\\cos(\\sqrt{\\braket{z_{j}}})\\right)\n\\ket{\\tilde{z}_{j}}\\bra{\\tilde{z}_{j}}& \n\\sin(\\sqrt{\\braket{z_{j}}})\\ket{\\tilde{z}_{j}} & \\\\\n-\\sin(\\sqrt{\\braket{z_{j}}})\\bra{\\tilde{z}_{j}} & \n\\cos(\\sqrt{\\braket{z_{j}}}) & \\\\\n & & {\\bf 1}_{n-j}\n\\end{array}\n\\right).\n\\end{equation}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:introduction_network}\nNetworks are useful tools to visualize the relational information among a large number of variables. Undirected graphical model is a rich class of statistical network model that encodes the conditional independence~\\cite{lauritzen1996graphical}. Canonically, Gaussian graphical models (or its normalized version partial correlation~\\cite{pengwangzhouzhu2009a}) can be represented by the inverse covariance matrix (i.e., the precision matrix), where a zero entry is associated with a missing edge between two vertices in the graph. Specifically, two vertices are not connected if and only if they are conditionally independent given the value of all other variables.\n\nOn one hand, there is a large volume of literature on estimating the (static) precision matrix for graphical models in the high-dimensional setting, where the sample size and the dimension are both large~\\cite{MR2278363,friedmanhastietibshirani2008a,MR2417243,rothmanbickellevinazhu2008a,yuan2010a,yuanlin2007a,ravikumarwainwrightraskuttiyu2008a,MR2382651, MR2847973, MR2847949,fanfengwu2009a,MR3335801,loh2014high,loh2013structure}. Most of the earlier work along this line assumes that the underlying network is time-invariant. This assumption is quite restrictive in practice and hardly plausible for many real-world applications such as gene regulatory networks, social networks, and stocking market, where the underlying data generating mechanisms are often dynamic. 
On the other hand, dynamic random networks have been extensively studied from the perspective of large random graphs such as community detection and edge probability estimation for dynamic stochastic block models (DSBMs)~\\cite{lebreeatal2010a,przytycka2010a,RePEc:eee:finmar:v:14:y:2011:i:1:p:1-46,chi2010network,durante2017nonparametric,durante2016locally,han2015consistent,danaher2014joint,dondelinger2013non,pensky2019dynamic,pensky2019spectral,bhattacharjee2018change,bartlett2018two,gaucher2019maximum}. Such approaches do not model the sampling distributions of the error (or noise), since the ``true\" networks are connected with random edges sampled from certain probability models such as the Erd\\H{o}s-R\\'enyi graphs~\\cite{ErdosRenyi59a} and random geometric graphs~\\cite{penrose2003}.\n\nIn this paper, we shall view the (time-varying) networks of interest as non-random graphs. We adopt the graph signal processing approach for denoising the nonstationary time series and aim at estimating the {\\it true unknown} underlying graphs. Despite the recent attempts towards more flexible time-varying models~\\cite{zhoulaffertywasserman2010a,kolarxing2011a,kolarsongxing2010a,kolarxing2014a,qiu2015joint,lu2015post,ahmedxing2009,tibshiranietal2005}, there are still a number of major limitations in the current high-dimensional literature. First, theoretical analysis was derived under the fundamental assumption that the observations are either temporally {\\it independent}, or the temporal dependence has very specific forms such as Gaussian processes or (linear) vector autoregression (VAR)~\\cite{zhoulaffertywasserman2010a,kolarxing2011a, MR3335801,cho2015multiple,royatchadmichailidis2014a, qiu2015joint,zhou2014gemini}. Such dynamic structures are unduly demanding given that many time series encountered in real applications have very complex nonlinear spatial-temporal dependency~\\cite{howell1993,fanyao2003}. Second, most existing work assumes the data have time-varying distributions with sufficiently light tails such as Gaussian graphical models and Ising models~\\cite{zhoulaffertywasserman2010a,kolarxing2011a,cho2015multiple,royatchadmichailidis2014a,kolarxing2014a}. Third, in change point estimation problems for high-dimensional time series, piecewise constancy is widely used~\\cite{royatchadmichailidis2014a,cho2015multiple,fryzlewicz2014wild,kokszkaleipus2000}, which can be fragile in practice. For instance, financial data often appear to have time-dependent cross-volatility with structural breaks~\\cite{MR2572452}. For resting-state fMRI signals, correlation analysis reveals both slowly varying and abruptly changing characteristics corresponding to modularities in brain functional networks \\cite{changglover2010a,hutchison2013a}.\n\n\nAdvances in analyzing high-dimensional (stationary) time series have been made recently to address the aforementioned nonlinear spatial-temporal dependency issue~\\cite{qiu2015joint, wiesel2013time, qiu2014robust, MR3335801,zhou2014gemini,chen2013covariance,ChenXuWu2016_IEEETSP,basu2015,MR3205718,shu2014estimation}. In \\cite{chen2013covariance,MR3205718,shu2014estimation}, the authors considered the theoretical properties of regularized estimation of covariance and precision matrices, based on various dependence measures of high-dimensional time series. \\cite{lu2015post} considered the non-paranormal graphs that evolve with a random variable. 
\\cite{qiu2015joint} discussed the joint estimation of Gaussian graphical models based on a stationary VAR(1) model with special coefficient matrices, which may also depend on certain covariates. The authors applied a constrained $L_1$-minimization for inverse matrix estimation (CLIME) estimator with a kernel estimator of the covariance matrix and established consistency of the graph recovery at given time points. \\cite{MR3335801} studied the recovery of the Granger causality across time and nodes assuming a stationary Gaussian VAR model with unknown order.\n\nIn this paper, we focus on the recovery of time-varying undirected graphs based on regularized estimation of the precision matrices for a general class of nonstationary time series. We simultaneously model two types of dynamics: abrupt changes with an unknown number of change points and the smooth evolution between the change points. In particular, we study a class of high-dimensional {\\it piecewise locally stationary processes} in a general nonlinear temporal dependency framework, where the observations are allowed to have only a finite polynomial moment.\n\nMore specifically, there are two main goals of this paper: first to estimate the change point locations, as well as the number of change points, and second to estimate the smooth precision matrix functions between the change points. Accordingly, our proposed method contains two steps. In the first step, the maximum norm of the local difference matrix is computed at each time point and the jumps in the covariance matrices are detected at the locations where the maximum norm is above a certain threshold. In the second step, the precision matrices before and after the jump are estimated by a regularized kernel smoothing estimator. These two steps are recursively performed until a stopping criterion is met. Moreover, a boundary correction procedure based on data reflection is considered to reduce the bias near the change point. \n\n\n\n\nWe provide an asymptotic theory to justify the proposed method in high dimensions: pointwise and uniform rates of convergence are derived for the change point estimation and graph recovery under mild and interpretable conditions. The convergence rates are determined via a subtle interplay among the sample size, dimensionality, temporal dependence, moment condition, and the choice of bandwidth in the kernel estimator. Our results are significantly more involved than those for sub-Gaussian tails and independent samples. We shall highlight that uniform consistency in terms of time-varying network structure recovery is much more challenging than pointwise consistency.\nFor the multiple change point detection problem, we also characterize the threshold of the difference statistic that gives consistent selection of the number of change points.\n\n\nWe fix some notation. Positive, finite and non-random constants, independent of the sample size $n$ and dimension $p$, are denoted by $C, C_1, C_2, \\dots$, whose values may differ from line to line. For sequences of real numbers $a_n$ and $b_n$, we write $a_n=O(b_n)$ or $a_n\\lesssim b_n$ if $\\limsup_{n\\to\\infty} (a_n\/b_n)\\le C$ for some constant $C<\\infty$ and $a_n=o(b_n)$ if $\\lim_{n\\to\\infty}(a_n\/b_n)=0$. We say $a_n\\asymp b_n$ if $a_n=O(b_n)$ and $b_n=O(a_n)$. 
For a sequence of random variables $Y_n$ and a corresponding set of constants $a_n$, denote $Y_n=O_\\Prob(a_n)$ if for any $\\varepsilon>0$ there is a constant $C>0$ such that $\\Prob(|Y_n|\/a_n> C)< \\varepsilon$ for all $n$. For a vector $\\vx \\in \\mathbb{R}^p$, we write $|\\vx|= (\\sum_{j=1}^p x_j^2)^{1\/2}$. For a matrix $\\Sigma$, $|\\Sigma|_1 = \\sum_{j,k}|\\sigma_{j k}|$, $|\\Sigma|_\\infty = \\max_{j,k} |\\sigma_{jk}|$, $|\\Sigma|_{L_1}=\\max_k\\sum_j |\\sigma_{jk}|$, $|\\Sigma|_F = (\\sum_{j, k}\\sigma_{j k}^2)^{1\/2}$ and $\\rho(\\Sigma) = \\max \\{|\\Sigma \\vx| : |\\vx|=1\\}$. For a random vector $\\vz \\in \\mathbb{R}^p$, write $\\vz \\in {\\mathcal L}^a$, $a > 0$, if $\\| \\vz \\|_a := [ \\E(|\\vz|^a) ]^{1\/a} < \\infty$. Let $\\RBR{\\vz}=\\RBR{\\vz}_2$. Denote $a\\wedge b = \\min(a,b)$ and $a \\vee b = \\max(a,b)$. \n\nThe rest of the paper is organized as follows. Section~\\ref{sec:model+assumptions} presents the time series model, as well as the main assumptions, which can simultaneously capture the smooth and abrupt changes. In Section~\\ref{sec:method_network}, we introduce the two-step method that first segments the time series based on the difference between the localized averages of sample covariance matrices and then recovers the graph support based on a kernelized CLIME estimator. In Section~\\ref{sec:theory_network}, we state the main theoretical results for the change point estimation and support recovery. Simulation examples are presented in Section~\\ref{sec:simulation_network} and a real data application is given in Section~\\ref{sec:real-data-analysis_network}. Proofs of the main results can be found in Section~\\ref{sec:proof}.\n\n\n\n\\section{Time series model}\n\\label{sec:model+assumptions}\n\nWe first introduce a class of causal vector stochastic processes. Then we state the assumptions to derive an asymptotic theory in Section~\\ref{sec:theory_network} and explain their implications. Let $\\boldsymbol\\varepsilon_i\\in \\mathbb{R}^p$, $i\\in \\mathbb{Z}$, be independent and identically distributed (i.i.d.) random vectors and ${\\calF}_i=(\\ldots, \\boldsymbol\\varepsilon_{i-1},\\boldsymbol\\varepsilon_i)$ be a shift process. Let $\\vX^\\circ_i(t) = (X^\\circ_{i1}(t), \\dots, X^\\circ_{ip}(t))$ be a $p$-dimensional nonstationary time series generated by\n\\begin{align}\n\\label{eqn:data_generetaion_mechnism}\n\\vX_i ^\\circ(t)= \\vH(\\calF_i; \\, t),\n\\end{align}\n where $\\vH(\\cdot;\\cdot)=\\big(H_1(\\cdot;\\cdot),\\ldots,H_p(\\cdot;\\cdot)\\big)$ is an $\\mathbb{R}^p$-valued jointly measurable function.\n Suppose we observe the data points $\\vX_i=\\vX_{i,n}=\\vX^\\circ_{i}(t_i)$ at the evenly spaced time points $t_i = i\/n, i=1,2,\\dots,n$,\n\\begin{align}\\label{eqn:data_local}\n\\vX_{i,n}=\\vH(\\calF_i; \\, i\/n).\n\\end{align}\n We drop the subscript $n$ in $\\vX_{i,n}$ in the rest of this section. Since our focus is to study the second-order properties, the data are assumed to be mean zero.\n\nModel (\\ref{eqn:data_generetaion_mechnism}) was first introduced in \\cite{draghicescu2009quantile}. The stochastic process $\\big(X^\\circ_{i}(t)\\big)_{i\\in \\mathbb Z, t\\in[0,1)}$ can be thought of as a triangular array system, doubly indexed by $i$ and $t$, while the observations $(X_i)_{i=1}^n$ are sampled from the diagonal of the array. On one hand, fixing the time index $t$, the (vertical) process $\\big(X^\\circ_{i}(t)\\big)_{i\\in \\mathbb Z}$ is stationary. 
On the other hand, since $\\vH({\\calF}_i;t_i)$ is allowed to vary with $t_i$, the diagonal process (\\ref{eqn:data_local}) is able to capture nonstationarity. \n\nThe process $(\\vX_i)_{i\\in \\mathbb Z}$ is causal or non-anticipative as $\\vX_i$ is an output of the past innovations $(\\boldsymbol\\varepsilon_{j})_{j\\le i}$ and does not depend on the future innovations. In fact, it covers a broad range of linear and nonlinear, stationary and non-stationary processes such as vector auto-regressive moving average processes, locally stationary processes, Markov chains, and nonlinear functional processes~\\cite{MR2172215,draghicescu2009quantile,zhou2009local,zhouwu2010,chen2013covariance}. \n\nMotivated by real applications where nonstationary time series data can involve both abrupt breaks and smooth variation between the breaks, we model the underlying processes as piecewise locally stationary with a finite number of structural breaks.\n\n\\begin{defn}[Piecewise locally stationary time series model]\n\\label{defn:lp_function}\nDefine $\\mathrm{PLS}_\\iota([0, 1], L)$ as the collection of mean-zero piecewise locally stationary processes on $[0, 1]$: a process $(X(t))_{0\\le t\\le 1}$ belongs to $\\mathrm{PLS}_\\iota([0, 1], L)$ if there exist points $0 = t^{(0)} < t^{(1)} < \\dots < t^{(\\iota)} < t^{(\\iota+1)} =1$, where $\\iota$ is a nonnegative integer, such that $X(t)$ is stochastic Lipschitz continuous in $t$ with Lipschitz constant $L$ on each interval $[t^{(l)}, t^{(l+1)}), l = 0,\\cdots,\\iota$. A vector stochastic process $(\\vX(t))_{0\\le t\\le 1} \\in \\mathrm{PLS}_\\iota([0, 1], L)$ if all coordinates belong to $\\mathrm{PLS}_\\iota([0, 1], L)$. For the process $(X^\\circ_0(t))_{0\\le t\\le 1}$ defined in (\\ref{eqn:data_generetaion_mechnism}), this means that there exist a nonnegative integer $\\iota$ and a constant $L>0$ such that\n\\[\n\\max_{1\\le j \\le p}\\RBR{H_{j}(\\calF_0;t)-H_{j}(\\calF_0;t')}\\le L|t-t'| \\mbox{ for all } t^{(l)}\\le t, t'< t^{(l+1)}, 0\\le l\\le \\iota.\n\\]\n\\end{defn}\n\n\\begin{rem}\nIf we assume $(\\vX_i^\\circ(t))_{0\\le t\\le 1}\\in \\mathrm{PLS}_\\iota([0, 1], L)$, $i\\in \\mathbb Z$, then it follows that for each $i'=i-k,\\ldots, i+k$, where $k\/n\\to 0$ and $t^{(l)}\\le t_{i'}, t_i < t^{(l+1)}$, we have $\\max_{1\\le j\\le p}\\RBR{X_{i'j}-X^\\circ_{i'j}(t_i)}\\le Lk\/n$; that is, within a small temporal neighborhood that contains no change point, the observations can be approximated by a stationary process. Let $\\Sigma(t)=\\big(\\sigma_{jk}(t)\\big)_{1\\le j,k\\le p}:=\\E\\big[\\vX^\\circ_0(t)\\vX^\\circ_0(t)^\\top\\big]$ be the covariance matrix function. If, in addition, $\\sup_{0\\le t\\le 1}\\max_{1\\le j\\le p}\\RBR{X^\\circ_{0j}(t)}\\le C$ for some constant $C>0$, then by the Cauchy--Schwarz inequality\n\\begin{eqnarray}\n\\label{eqn:L-cond-1-ture-covariance-matrix}\n|\\Sigma(s) - \\Sigma(t)|_\\infty &\\le& 2CL |s-t|, \\qquad \\forall s,t \\in [t^{(l)},t^{(l+1)}), l=0,\\ldots, \\iota.\n\\end{eqnarray}\nThe reverse direction is not necessarily true, i.e., (\\ref{eqn:L-cond-1-ture-covariance-matrix}) does not indicate $(\\vX_i^\\circ(t))_t \\in \\mathrm{PLS}_\\iota([0, 1], L)$, $i\\in \\mathbb Z$ in general. As a trivial example, let $\\varepsilon_{ij}=2^{-1\/2}$ with probability $2\/3$ and $\\sqrt 2$ with probability $1\/3$, i.i.d. for all $i,j$. At time $t_k=k\/n$, let $X_{ij}^\\circ(t_k)=(-1)^{k}\\sqrt t_k\\varepsilon_{ij}$. Then for any $k$ and $k'$ such that $k+k'$ is odd, $|\\Sigma(t_k)-\\Sigma(t_{k'})|_\\infty=|t_k-t_{k'}|$, while $\\|X^\\circ_{01}(t_k)-X^\\circ_{01}(t_{k'})\\|_2=\\sqrt{t_k}+\\sqrt{t_{k'}}$.\n\\end{rem}\n\n\\begin{ass}\n[Piecewise smoothness]\n\\label{assumption:smoothness}\n(i) Assume $(\\vX_i^\\circ(t))_{0\\le t\\le 1}\\in \\mathrm{PLS}_\\iota([0, 1], L)$ for each $i\\in \\mathbb Z$, where $L>0$ and $\\iota\\ge 0$ are constants independent of $n$ and $p$. \n\n(ii) For each $l=0,\\ldots,\\iota$, and $1\\le j,k\\le p$, we have $\\sigma_{jk}(t)\\in{\\calC}^2[t^{(l)},t^{(l+1)})$.\n\\end{ass}\n\n\n\n\n\nNow we introduce the temporal dependence measure. 
We quantify the dependence of $\\big(\\vX_i^\\circ(t)\\big)_{i\\in \\mathbb Z}$ by the dependence adjusted norm (DAN) (cf. \\cite{wu2016}). Let $\\boldsymbol\\varepsilon'_{i}$ be an independent copy of $\\boldsymbol\\varepsilon_{i}$ and ${\\calF}_{i,\\{m\\}}=(\\dots,\\boldsymbol\\varepsilon_{i-m-1},\\boldsymbol\\varepsilon'_{i-m},\\boldsymbol\\varepsilon_{i-m+1},\\dots,\\boldsymbol\\varepsilon_i)$. Denote $\\vX^\\circ_{i,\\{m\\}}(t)=\\big(X^\\circ_{i1,\\{m\\}}(t),\\ldots, X^\\circ_{ip,\\{m\\}}(t)\\big)$, where $X^\\circ_{ij,\\{m\\}}(t)=H_j({\\calF}_{i,\\{m\\}}; t)$, $1\\le j\\le p$. Here $\\vX^\\circ_{i,\\{m\\}}(t)$ is a coupled version of $\\vX^\\circ_{i}(t)$, with the same generating mechanism and input, except that $\\boldsymbol\\varepsilon_{i-m}$ is replaced by an independent copy $\\boldsymbol\\varepsilon'_{i-m}$. \n\\begin{defn}[Dependence adjusted norm (DAN)]\n\\label{defn:dan}\nLet constants $a\\ge1, A > 0$. Assume $\\sup_{0\\le t\\le 1}\\|X^\\circ_{1j}(t)\\|_a<\\infty, j=1,\\ldots,p$. Define the uniform functional dependence measure for the sequences $(X^\\circ_{ij}(t))_{i\\in \\mathbb Z,t\\in[0,1]}$ of form (\\ref{eqn:data_generetaion_mechnism}) as\n\\[\n\\theta_{m,a,j}=\\sup_{0\\le t\\le 1}\\Vert X^\\circ_{ij}(t)-X^\\circ_{ij,\\{m\\}}(t)\\Vert_a, \\quad j = 1, \\dots, p,\n\\]\nand $\\Theta_{m,a,j}=\\sum_{i=m}^\\infty\\theta_{i,a,j}$. The dependence adjusted norm of $(X^\\circ_{ij}(t))_{i\\in \\mathbb Z,t\\in[0,1]}$ is defined as\n\\[\n\\RBR{X_{\\cdot,j}}_{a,A}=\\sup_{m \\ge 0} (m+1)^A\\Theta_{m,a,j},\n\\]\nwhenever $\\RBR{X_{\\cdot,j}}_{a,A} < \\infty$.\n\\end{defn}\n\nIntuitively, the physical dependence measure quantifies the stochastic difference between a random variable and its coupled version obtained by replacing a past innovation. Indeed, $\\theta_{m,a,j}$ measures the impact on $X^\\circ_{ij}(t)$, uniformly over $t$, of replacing $\\mbf\\varepsilon_{i-m}$ while freezing all the other inputs, while $\\Theta_{m,a,j}$ quantifies the cumulative influence of replacing $\\mbf\\varepsilon_{-m}$ on $(X_{ij}^\\circ(t))_{i\\ge 0}$, uniformly over $t$. Then $\\RBR{X_{\\cdot,j}}_{a,A}$ controls the uniform polynomial decay in the lag of the cumulative physical dependence, where $a$ depends on the tail of the marginal distributions of $X^\\circ_{1,j}(t)$ and $A$ quantifies the polynomial decay power and thus the temporal dependence strength. It is clear that $\\RBR{X_{\\cdot,j}}_{a,A}$ is a semi-norm, i.e., it is subadditive and absolutely homogeneous. \n\n\\begin{ass}[Dependence and moment conditions]\n\\label{assumption:dep}\nLet $\\vX_i^\\circ(t)$ be defined in (\\ref{eqn:data_generetaion_mechnism}) and $\\vX_i$ in (\\ref{eqn:data_local}). There exist $q > 2$ and $A > 0$ such that\n\\begin{equation}\n\\label{eqn:moment_dan_vecproc}\n\\nu_{2q}:=\\sup_{t\\in [0,1]}\\max_{1\\le j\\le p}\\E|X^\\circ_{j}(t)|^{2q} < \\infty \\qquad \\text{and} \\qquad N_{X,2q}:=\\max_{1\\le j\\le p}\\RBR{X_{\\cdot, j}}_{2q,A} < \\infty.\n\\end{equation}\n\\end{ass}\nWe let $M_{X,q}:=\\rbr{\\sum_{1\\le j\\le p}\\RBR{X_{\\cdot, j}}_{2q,A}^q}^{1\/q}$ and write $N_X = N_{X,4}$, $M_X = M_{X,2}$. The quantities $M_{X,q}$ and $N_{X,2q}$ measure the $L^q$-norm aggregated effect and the largest effect of the element-wise DANs respectively. Both quantities play a role in the convergence rates of our estimator. \n\nObviously we have $\\|X_{ij}-X_{ij,\\{m\\}}\\|_a\\le \\theta_{m,a,j}$ and $\\max_{1\\le j\\le p}\\E|X_{ij}|^{2q}\\le \\nu_{2q}$ for all $1\\le i\\le n$. 
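To illustrate Definition \\ref{defn:dan}, consider the scalar moving average process $X_i=\\sum_{m\\ge 0}a^m\\varepsilon_{i-m}$ with $|a|<1$ and i.i.d. standard normal innovations, a stationary special case of (\\ref{eqn:data_generetaion_mechnism}); here $\\theta_{m,2}=\\sqrt{2}\\,|a|^m$ exactly. The following minimal sketch (in Python with {\\tt numpy}; for illustration only) estimates $\\theta_{m,2}$ by Monte Carlo and compares it with the exact value:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\na, R, T, M = 0.6, 100000, 100, 6  # coefficient, replications, truncation, max lag\nw = a ** np.arange(T)\n\neps = rng.standard_normal((R, T))       # column m holds eps_{-m}\nX = eps @ w                             # X_0 = sum_m a^m eps_{-m} (truncated)\nfor m in range(M):\n    epsc = eps.copy()\n    epsc[:, m] = rng.standard_normal(R) # replace eps_{-m} by an independent copy\n    theta_hat = np.sqrt(np.mean((X - epsc @ w) ** 2))\n    print(m, round(theta_hat, 4), round(np.sqrt(2) * a ** m, 4))\n\\end{verbatim}\n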
\nIn contrast to other works in high-dimensional covariance matrix and network estimation, where sub-Gaussian tails and independence are the key to ensuring consistent estimation, Assumption \\ref{assumption:dep} only requires that the time series have finite polynomial moments, and it allows linear and nonlinear processes with short memory in the time domain.\n\n \n\n\n\n\n\\begin{ex}[Vector linear process]\\label{example:var}\nConsider the following vector linear process model\n\\[\n\\vH(\\calF_i;t)=\\sum_{m=0}^{\\infty} A_m(t)\\mbf\\varepsilon_{i-m},\n\\]\nwhere $\\mbf\\varepsilon_i=(\\varepsilon_{i1},\\ldots,\\varepsilon_{ip})^\\top$ and $\\varepsilon_{ij}$ are i.i.d. with mean $0$ and variance $1$, and $\\|\\varepsilon_{ij}\\|_{q}\\le C_q$ for each $i\\in \\mathbb Z$ and $1\\le j\\le p$ with some constants $q>2$ and $C_q>0$. The vector linear process is commonly seen in the literature and in applications~\\cite{Ltkepohl2007}. It includes the time-varying VAR model where $A_m(t)=A(t)^m$ as a special case.\n\nSuppose that the coefficient matrices $A_m(t)=(a_{m,jk}(t))_{1\\le j,k\\le p}, m=0,1,\\ldots$ satisfy the following conditions; a small simulation sketch is given after this example.\n\\begin{enumerate}\n\\item[(A1)] For each $1\\le j,k\\le p$, $a_{m,jk}(t)\\in\\mathcal C^2[0,1].$\n\\item[(A2)] For each $1\\le j\\le p$, there is a constant $C_{A,j}>0$ such that for each $t\\in [0,1]$, $\\sum_{k=1}^p a_{m,jk}(t)^2\\le C_{A,j}(m+1)^{-2(A+1)}$ for all $m\\ge0$.\n\\item[(A3)] For any $t,t'\\in[0,1]$, $\\sum_{m=0}^\\infty\\sum_{k=1}^p[a_{m,jk}(t)-a_{m,jk}(t')]^2\\le L^2|t-t'|^2$ for each $j=1,\\ldots,p$.\n\\end{enumerate}\n\nNote that \n\\begin{align*}\n\\sigma_{jk}(t)&=\\sum_{m\\ge 0}A_{m,j\\cdot}(t) A_{m,k\\cdot}(t)^\\top,\\\\\n\\Theta_{m,q,j}&\\le 2C_q\\sqrt{q-1}\\sum_{i=m}^\\infty \\sup_{0\\le t\\le 1}\\big(A_{i,j\\cdot}(t) A_{i,j\\cdot}(t)^\\top\\big)^{1\/2},\\\\\n\\|X_{ij}^\\circ(t)-X_{ij}^\\circ(t')\\|^2&=\\sum_{m=0}^\\infty \\sum_{k=1}^p[a_{m,jk}(t)-a_{m,jk}(t')]^2,\n\\end{align*}\n where $A_{m,j\\cdot}(t)$ is the $j$th row of $A_{m}(t)$. Under conditions (A1)-(A3), one can easily verify that for each $1\\le j,k\\le p$, the process satisfies: (1) $\\sigma_{jk}(t)\\in{\\calC}^2[0,1]$; (2) $\\|X_{\\cdot,j}\\|_{q,A}\\le C_q \\sqrt{(q-1)C_{A,j}}$ (due to Burkholder's inequality, cf. \\cite{MR1476912}); (3) $\\|H_j(\\calF_0;t)-H_j(\\calF_0;t')\\|\\le L|t-t'|$.\n\nConditions (A1)-(A3) implicitly impose smoothness in each entry of the coefficient matrices, sparsity in each row of the coefficient matrices and of their temporal evolution, and a polynomial decay rate in the lag $m$ of each entry. \n\\end{ex}\n\n\nFor $1\\le l\\le \\iota$, let $\\delta_{jk}(t^{(l)}):=\\sigma_{jk}(t^{(l)})-\\sigma_{jk}(t^{(l)}-)$ and $\\Delta(t^{(l)})=\\big(\\delta_{jk}(t^{(l)})\\big)_{1\\le j,k\\le p}$, where $\\sigma_{jk}(t^{(l)}-)=\\lim_{t\\to t^{(l)}-}\\sigma_{jk}(t)$ is well-defined in view of (\\ref{eqn:L-cond-1-ture-covariance-matrix}). 
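As a concrete instance of Example \\ref{example:var}, the following minimal sketch (in Python with {\\tt numpy}) simulates a time-varying vector moving average process whose coefficients are smooth in $t$ and decay geometrically in the lag $m$; the particular choice of $A_m(t)$ is ours and serves only as an illustration:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nn, p, T = 500, 10, 50   # sample size, dimension, lag truncation\n\ndef A(m, t):\n    # smooth in t, geometric decay in m, banded (hence row-sparse)\n    main = 0.4 * 0.5 ** m * (1 + 0.5 * np.sin(2 * np.pi * t))\n    off = 0.1 * 0.5 ** m * t\n    return main * np.eye(p) + off * np.eye(p, k=1)\n\neps = rng.standard_normal((n + T, p))   # innovations eps_{i-m}\nX = np.zeros((n, p))\nfor i in range(n):\n    for m in range(T):\n        X[i] += A(m, (i + 1.0) / n) @ eps[T + i - m]\n\\end{verbatim}\n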
We assume that the change points are separated and sizeable.\n\\begin{ass}[Separability and sizeability of change points]\n\\label{assumption:jump}\nThere exist positive constants $c_1\\in(0,1)$ and $c_2>0$ independent of $n$ and $p$ such that $\\min_{0\\le l\\le \\iota}(t^{(l+1)}-t^{(l)})\\ge c_1$ and $\\delta(t^{(l)}):= |\\Delta(t^{(l)})|_\\infty\\ge c_2$ for all $1\\le l\\le \\iota$.\n\\end{ass}\n\nIn the high-dimensional context, we assume that the inverse covariance matrices are sparse in the sense of their $L_1$ norms.\n\n\\begin{ass}[Sparsity of precision matrices]\n\\label{assumption:sparsity}\nThe precision matrix satisfies $|\\Omega(t)|_{L^1}\\le \\kappa_p$ for each $t\\in[0,1]$, where $\\kappa_p$ is allowed to grow with $p$.\n\\end{ass}\n\n\nIf we further assume that the eigenvalues of the covariance matrices are bounded from below and above, i.e., there exist constants $0<\\underline{c}\\le\\overline{c}<\\infty$ such that $\\underline{c}\\le\\lambda_{\\min}(\\Sigma(t))\\le\\lambda_{\\max}(\\Sigma(t))\\le\\overline{c}$ for all $t\\in[0,1]$, then the eigenvalues of the precision matrices $\\Omega(t)$ are uniformly bounded between $\\overline{c}^{\\,-1}$ and $\\underline{c}^{\\,-1}$.\n\n\\section{Method}\n\\label{sec:method_network}\n\nLet $h>0$ be a bandwidth parameter such that $h=o(1)$ and $n^{-1}=o(h)$, and ${\\calD}_h(0) = \\{ h, h+1\/n, \\dots, 1-h\\}$ be a search grid in $(0,1)$. Define\n\\begin{equation}\n\\label{eqn:D_nonstationary}\nD(s) = n^{-1} \\left( \\sum_{i=0}^{hn-1} \\vX_{ns-i} \\vX_{ns-i}^\\top - \\sum_{i=1}^{hn} \\vX_{ns+i} \\vX_{ns+i}^\\top \\right), \\qquad s \\in {\\calD}_h(0).\n\\end{equation}\nTo estimate the change points, compute\n\\begin{equation}\n\\label{eqn:change-point-estimator-infty-nonstationary1}\n\\hat{s}_1 = \\text{argmax}_{s \\in {\\calD}_h(0)} |D(s)|_\\infty.\n\\end{equation}\nThe following steps are performed recursively. For $l=1,2,\\ldots$, let\n\\begin{eqnarray}\n\\label{eqn:change-point-estimator-infty-nonstationary}\n&{\\calD}_h(l) = {\\calD}_h(l-1)\\cap[\\hat s_l-2h,\\hat s_l+2h]^c,\\\\\n&\\hat{s}_{l+1} = \\arg\\max_{s \\in {\\calD}_h(l)} |D(s)|_\\infty,\n\\end{eqnarray}\nuntil the following criterion is attained:\n\\begin{align}\\label{eqn:early_stop}\n\\max_{s\\in {\\calD}_h(l)} |D(s)|_\\infty < \\nu,\n\\end{align}\nwhere $\\nu$ is an early stopping threshold. The value of $\\nu$, determined in Section \\ref{sec:theory_network}, depends on the dimension and sample size, as well as on the serial dependence level, the tail condition and the local smoothness. Since our method only utilizes data in a localized neighborhood, multiple change points can be estimated and ranked in a single pass, which offers some computational advantage over the binary segmentation algorithm \\cite{cho2015multiple,fryzlewicz2014wild}.\n\n\n\n\n\n\nOnce the change points are identified, in the second step we consider recovering the networks from the locally stationary time series before and after the structural breaks. In~\\cite{MR2847973}, where $X_i,i=1,\\ldots,n$ are assumed to have a common covariance matrix, the precision matrix $\\hat \\Omega$ is estimated as\n\\begin{align}\\label{eq:clime}\n \\hat\\Omega_\\lambda=\\arg\\min_{\\Omega\\in \\mathbb R^{p\\times p}} |\\Omega|_1\\quad \\mbox{s.t. }|\\hat\\Sigma\\Omega-\\Id_p|_\\infty\\le \\lambda,\n\\end{align}\nwhere $\\hat\\Sigma$ is the sample covariance matrix. Inspired by (\\ref{eq:clime}), we apply a kernelized time-varying (tv-) CLIME estimator for the covariance matrix functions of the multiple pieces of locally stationary processes before and after the structural breaks. Let\n\\begin{align}\n\\label{eqn:samplecov}\n\\hat\\Sigma(t)=\\sum_{i=1}^{n} w(t,t_i) \\vX_i \\vX_i^\\top,\n\\end{align}\nwhere\n\\begin{align}\\label{eqn:kernel weight}\nw(t,t_i)=\\frac{K_b(t_i,t)}{\\sum_{i'=1}^n K_b(t_{i'},t)}\n\\end{align}\nand $K_b(u,v)=K(|u-v|\/ b )\/ b $. 
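For concreteness, the first-step detector (\\ref{eqn:D_nonstationary})--(\\ref{eqn:early_stop}) and the smoother (\\ref{eqn:samplecov}) can be implemented in a few lines. The following sketch (in Python with {\\tt numpy}; the function names and the triangular kernel are our own illustrative choices, and {\\tt X} denotes the $n\\times p$ data matrix) is given for illustration only:\n\\begin{verbatim}\nimport numpy as np\n\ndef detect_change_points(X, h, nu):\n    # first step: |D(s)|_inf over the search grid plus recursive peak removal\n    n, p = X.shape\n    hn = int(h * n)\n    S = np.einsum('ij,ik->ijk', X, X)          # outer products X_i X_i^T\n    cs = np.concatenate([np.zeros((1, p, p)), np.cumsum(S, axis=0)])\n    stats = {}\n    for k in range(hn, n - hn):                # grid point s = (k+1)/n\n        left = cs[k + 1] - cs[k + 1 - hn]      # sum of hn terms up to s\n        right = cs[k + 1 + hn] - cs[k + 1]     # sum of hn terms after s\n        stats[k] = np.max(np.abs(left - right)) / n\n    cps = []\n    while stats:\n        k = max(stats, key=stats.get)\n        if stats[k] < nu:                      # early stopping criterion\n            break\n        cps.append((k + 1) / n)\n        for k2 in list(stats):                 # remove a 2h-neighborhood\n            if abs(k2 - k) <= 2 * hn:\n                del stats[k2]\n    return sorted(cps)\n\ndef kernel_cov(X, t, b, kernel=lambda u: np.maximum(1 - np.abs(u), 0)):\n    # second-step ingredient: kernel smoothed covariance Sigma_hat(t)\n    n = X.shape[0]\n    w = kernel((np.arange(1, n + 1) / n - t) / b)\n    w = w / w.sum()\n    return (X * w[:, None]).T @ X\n\\end{verbatim}\n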
The bandwidth parameter $b$ satisfies $b=o(1)$ and $n^{-1}=o(b)$. Denote $B_n=nb$. The kernel function $K(\\cdot)$ is chosen to have the following properties.\n\\begin{ass}[Regularity of kernel function]\\label{assumption:kernel}\nThe kernel function $K(\\cdot)$ is non-negative, symmetric, and Lipschitz continuous with bounded support in $[-1,1]$, and $\\int_{-1}^1K(u)du=1$.\n\\end{ass}\n\nAssumption \\ref{assumption:kernel} is a common requirement and is fulfilled by a range of kernel functions such as the uniform kernel, the triangular kernel, and the Epanechnikov kernel. Now the tv-CLIME estimator of the precision matrix $\\Omega(t)$ is defined by $\\tilde\\Omega(t)=\\left(\\tilde{\\omega}_{jk}(t)\\right)_{1\\le j,k\\le p}$, where \n$\\tilde{\\omega}_{jk}(t)=\\hat\\omega_{jk}(t)\\ind{|\\hat\\omega_{jk}(t)|\\le |\\hat\\omega_{kj}(t)|}+\\hat\\omega_{kj}(t)\\ind{|\\hat\\omega_{kj}(t)|<|\\hat\\omega_{jk}(t)|}$ is the entry with the smaller magnitude, and $\\hat \\Omega(t)\\equiv \\hat\\Omega_\\lambda(t)=(\\hat\\omega_{jk}(t))_{1\\le j,k\\le p}$,\n\\begin{align}\n\\label{eqn:CLIME}\n\\hat\\Omega_\\lambda(t)=\\arg\\min_{\\Omega\\in \\mathbb R^{p\\times p}} |\\Omega|_1\\quad \\mbox{s.t. }|\\hat\\Sigma(t)\\Omega-\\Id_p|_\\infty\\le \\lambda.\n\\end{align} \nA similar hybrid of kernel smoothing and CLIME for estimating the sparse and smooth transition matrices in high-dimensional VAR models has been considered in~\\cite{ding2017}, where change points are not present. Thus in the current setting we need to carefully control the effect of (consistently) removing the change points before smoothing.\n\nThen, the network is estimated by the ``effective support\" defined as follows.\n\\begin{align}\n\\label{eqn:graph_estimation}\n\\hat G(t;u)=(\\hat g_{jk}(t;u))_{1\\le j,k\\le p}, \\quad\\mbox{where } \\, \\hat g_{jk}(t;u)=\\ind{|\\tilde{\\omega}_{jk}(t)| \\ge u}.\n\\end{align}\n\n\n It should be noted that the (vanilla) kernel smoothing estimator~\\eqref{eqn:samplecov} of the covariance matrix does not adjust for the boundary effect due to the change points in the covariance matrix function. Thus, in the neighborhood of the change points, larger bias can be induced in estimating $\\Sigma(t)$ by $\\hat \\Sigma(t)$. \nAs a remedy, we apply the following reflection procedure for boundary correction. Denote $\\hat{{\\calT}}_{d}(j):=[\\hat s_j-d,\\hat s_j+d)$ for $d\\in (0,1)$, and suppose $t\\in\\hat{\\mathcal{T}}_{b+ h^2}(j)$ for some $1\\le j\\le \\hat\\iota$. We replace (\\ref{eqn:samplecov}) by\n\\[\n\\hat\\Sigma(t)=\\sum_{i=1}^{n} w(t,t_i) \\breve\\vX_i \\breve\\vX_i^\\top ,\n\\]\nand then apply the rest of the tv-CLIME approach. Here\n\\begin{align}\n\\breve\\vX_i=\\begin{cases}\n\\vX_i&\\mbox{if }(i-n\\hat s_j)(nt-n\\hat s_j)\\ge 0;\\\\\n\\vX_{2n\\hat s_j-i}&\\mbox{otherwise}.\n\\end{cases}\n\\end{align}\n\n\n\\section{Theoretical results}\n\\label{sec:theory_network}\n\nIn this section, we derive the theoretical guarantees for the change point estimation and graph support recovery. 
Roughly speaking, Propositions \\ref{prop:precmx_smooth} and \\ref{prop:partial_support_recovery} below show that under appropriate conditions, if each element of the covariance matrix varies smoothly in time, one can obtain accurate snapshot estimation of the precision matrices as well as the time-varying graphs with high probability via the proposed kernel smoothed constrained $L_1$-minimization approach.\n\n\nDefine $J_{q,A}(n,p) = M_{X,q} (p \\varpi_{q,A}(n)) ^{1\/q}$, where $\\varpi_{q,A}(n) = n, n(\\log n)^{1+2q}, n^{q\/2-Aq}$ if $A > 1\/2-1\/q$, $A = 1\/2-1\/q$, and $0 < A < 1\/2-1\/q$, respectively.\n\n \n\\begin{prop}[Rate of convergence for estimating precision matrices: pointwise and uniform]\\label{prop:precmx_smooth}\nSuppose Assumptions \\ref{assumption:dep}, \\ref{assumption:sparsity} and \\ref{assumption:kernel} hold with $\\iota=0$. Let $B_n=bn$ for $n^{-1}=o(b)$ and $b=o(1)$. \n\n\\begin{enumerate}\n\\item[(i)] {\\bf Pointwise.} Choose the parameter $\\lambda^\\circ \\ge C\\kappa_p ( b ^2+B_n^{-1}J_{q,A}(B_n,p)+N_X (\\log{p}\/B_n)^{1\/2})$ in the tv-CLIME estimator $\\hat\\Omega_{\\lambda^\\circ}(t)$ in (\\ref{eqn:CLIME}), where $C$ is a sufficiently large constant independent of $n$ and $p$. Then for any $t\\in [b,1-b]$, we have\n\\begin{align}\\label{eqn:pre_dev}\n|\\hat\\Omega_{\\lambda^\\circ}(t)-\\Omega(t)|_\\infty&= O_\\mathbb{P}( \\kappa_p\\lambda^\\circ).\n\\end{align}\n\n\\item[(ii)] {\\bf Uniform.} Choose $\\lambda^\\diamond \\ge C \\kappa_p \\left( b ^2+B_n^{-1} J_{q,A}(n,p)+N_X B_n^{-1}(n\\log(p))^{1\/2}\\right)$ in the tv-CLIME estimator $\\hat\\Omega_{\\lambda^\\diamond}(t)$ in (\\ref{eqn:CLIME}), where $C$ is a sufficiently large constant independent of $n$ and $p$. Then we have \n\\begin{align}\n\\label{eqn:pre_dev_unif}\n\\sup_{t \\in [b,1-b]} |\\hat\\Omega_{\\lambda^\\diamond}(t)-\\Omega(t)|_\\infty = O_\\mathbb{P}( \\kappa_p\\lambda^\\diamond ).\n\\end{align}\n\\end{enumerate}\n\\end{prop}\n\nThe optimal order of the bandwidth parameter $b= b_\\sharp$ in~\\eqref{eqn:pre_dev_unif} is the solution to the following equation:\n\\begin{eqnarray*}\nb^2 &=& B_n^{-1}\\max( J_{q,A}(n,p), \\, N_X(n\\log(p^2))^{1\/2}),\n\\end{eqnarray*}\nwhich implies that the closed-form expression for $b_\\sharp$ is given by\n\\begin{align*}\nb_\\sharp=C_1\\big(n^{-1}J_{q,A}(n,p)\\big)^{1\/3}+C_2 N_X^{1\/3}n^{-1\/6}\\log(p)^{1\/6}\n\\end{align*}\nfor some constants $C_1$ and $C_2$ that are independent of $n$ and $p$.\n\n\nGiven a finite sample, it is challenging to distinguish the small entries in the precision matrix from the noise. Since a smaller magnitude of a certain element of the precision matrix implies a weaker connection of the edge in the graphical model, we instead consider the estimation of {\\it significant} edges in the graph. Define the set of {\\it significant} edges at level $u$ as ${\\calE}^*(t;u)=\\rBr{(j,k):g^*_{jk}(t;u)\\ne0}$, where \n\\[\ng^*_{jk}(t;u)=\\ind{|\\omega_{jk}(t)|>u}.\n\\]\nThen, as a consequence of (\\ref{eqn:pre_dev_unif}), we have the following support recovery consistency result.\n\n\\begin{prop}[Consistency of support recovery: significant edges]\n\\label{prop:partial_support_recovery}\nChoose $u$ as $u_\\sharp = C_0 \\kappa_p^2 b_\\sharp^2$, where $C_0$ is taken as a sufficiently large constant independent of $n$ and $p$. Suppose that $u_\\sharp=o(1)$ as $n,p\\to\\infty$. 
Then under the conditions of Proposition \\ref{prop:precmx_smooth}, we have that as $n,p\\to\\infty$,\n\\begin{align}\\label{eqn:fpos}\n\\mathbb{P}\\Big(\\sup_{t \\in [b,1-b]} \\sum_{(j,k)\\in {\\calE}^c(t)}\\ind{\\hat g_{jk}(t;u_\\sharp)\\ne 0}\\ne 0\\Big)\\to 0,\\\\\\label{eqn:fneg}\n\\mathbb{P}\\Big(\\sup_{t \\in [b,1-b]} \\sum_{(j,k)\\in {\\calE}^*(t;2u_\\sharp)}\\ind{\\hat g_{jk}(t;u_\\sharp)=0}\\ne 0\\Big)\\to 0.\n\\end{align}\n\\end{prop}\nProposition \\ref{prop:partial_support_recovery} shows that the pattern of significant edges in the time-varying true graphs $G(t), t\\in[b,1-b]$, can be correctly recovered with high probability. However, it is still an open question to what extent the edges with magnitude below $u$ can be consistently estimated, which can be naturally studied in the multiple hypothesis testing framework. Nonetheless, hypothesis testing for graphical models on nonstationary high-dimensional time series is rather challenging. We leave it as a future problem.\n\n\nPropositions \\ref{prop:precmx_smooth} and \\ref{prop:partial_support_recovery} together yield that consistent estimation of the precision matrices and the graphs can be achieved before and after the change points. Now we provide the theoretical result for the change point estimation. Theorem \\ref{thm:jumppointsestimation} below shows that if the change points are separated and sizeable, then we can consistently identify them via the single-pass segmentation approach under suitable conditions. Denote\n\\[\nh_\\diamond=C_1\\big(n^{-1}J_{q,A}(n,p)\\big)^{1\/3}+C_2 N_X^{1\/3}n^{-1\/6}\\log(p)^{1\/6},\n\\]\nwhere $C_1$ and $C_2$ are constants independent of $n$ and $p$.\n\n\\begin{thm}[Consistency of change point estimation]\n\\label{thm:jumppointsestimation}\nAssume that $\\vX_i\\in \\mathbb R^p$ admits the form (\\ref{eqn:data_local}). Suppose that Assumptions \\ref{assumption:dep} to \\ref{assumption:jump} are satisfied. Choose the bandwidth $h= h_\\diamond$, and $\\nu=(1+L) h_\\diamond^2$ in (\\ref{eqn:D_nonstationary}) and (\\ref{eqn:early_stop}) respectively. Assume that $h_\\diamond=o(1)$ as $n,p\\to\\infty$. We have that there exist constants $C_1,C_2,C_3$ independent of $n$ and $p$ such that\n\\begin{eqnarray}\\label{eqn:jumpnumber}\n\\mathbb{P}(|\\hat{\\iota}-\\iota|> 0)\\le C_1\\Big({p \\varpi_{q,A}(n) M_{X,q}^q \\nu_{2q}^q \\over n^{q} c_2^q}\\Big)^{1\/3}+C_2p^2\\exp\\Big\\{-C_3({n\\log^2(p)\\over N_X^2})^{1\/3}\\Big\\}.\n\\end{eqnarray}\nFurthermore, on the event $\\{\\iota=\\hat\\iota\\}$, the ordered change-point estimator $(\\hat{s}_{(1)}<\\hat{s}_{(2)}<\\cdots<\\hat{s}_{(\\hat \\iota)})$ defined in (\\ref{eqn:change-point-estimator-infty-nonstationary}) satisfies\n\\begin{eqnarray}\n\\label{eqn:jumppoint}\n&\\max_{1\\le j\\le \\iota}|\\hat{s}_{(j)}-t^{(j)}| = O_\\Prob(h^2_\\diamond).\n\\end{eqnarray}\n\\end{thm}\nProposition \\ref{prop:partial_support_recovery} and Theorem \\ref{thm:jumppointsestimation} together indicate the consistency in the snapshot estimation of the time-varying graphs before and after the change points. In a close neighborhood of the change points, we have the following result for the recovery of the time-varying network. 
Proposition \\ref{prop:partial_support_recovery} and Theorem \\ref{thm:jumppointsestimation} together indicate the consistency of the snapshot estimation of the time-varying graphs before and after the change points. In a close neighborhood of the change points, we have the following result for the recovery of the time-varying network. Denote by ${\\calS}:=[b_\\sharp,1-b_\\sharp]\\cap\\big(\\cup_{1\\le j\\le \\hat\\iota}\\hat{\\calT}^c_{h_\\diamond^2+b_\\sharp}(j)\\big)$ the time intervals between the estimated change points, and by ${\\calN}:=[0,b_\\sharp)\\cup\\big(\\cup_{1\\le j\\le \\hat\\iota}(\\hat{\\calT}_{h_\\diamond^2+b_\\sharp}(j)\\cap\\hat{\\calT}^c_{h_\\diamond^2}(j))\\big)\\cup(1-b_\\sharp,1]$ the recoverable neighborhood of the jumps. \n\n\n\\begin{thm}\\label{thm:support_recovery}\nLet Assumptions \\ref{assumption:dep} to \\ref{assumption:kernel} be satisfied. We have the following results as $n,p\\to \\infty$.\n\\begin{enumerate}\n\\item[(i)] {\\bf Between change points.} For $t\\in{\\calS}$, take $b=b_\\sharp$ and $u=u_\\sharp$, where $b_\\sharp$ and $u_\\sharp$ are defined in Proposition \\ref{prop:partial_support_recovery}. Suppose $u_\\sharp=o(1)$. We have\n\\begin{align}\\label{eqn:cov_boundary_unif}\n&\\sup_{t\\in{\\calS}}\\max_{j,k}|\\hat\\sigma_{jk}(t)-\\sigma_{jk}(t)|= O_\\mathbb{P}( b^2_\\sharp).\n\\end{align}\nChoose the penalty parameter as $\\lambda_\\sharp:=C_1\\kappa_pb^2_\\sharp$, where $C_1$ is a constant independent of $n$ and $p$. Then\n\\begin{align*}\n\\sup_{t\\in{\\calS}}|\\hat{\\Omega}_{\\lambda_\\sharp}(t)-\\Omega(t)|_\\infty= O_\\mathbb{P}( \\kappa_p^2 b_\\sharp ^2).\n\\end{align*}\nMoreover, \n\\begin{align}\n\\mathbb{P}\\Big(\\sup_{t\\in{\\calS}}\\sum_{(j,k)\\in {\\calE}^c(t)}\\ind{\\hat g_{jk}(t;u_\\sharp)\\ne 0}= 0\\Big)\\to 1,\\\\\n\\mathbb{P}\\Big(\\sup_{t\\in{\\calS}} \\sum_{(j,k)\\in {\\calE}^*(t;2u_\\sharp)}\\ind{\\hat g_{jk}(t;u_\\sharp)=0}= 0\\Big)\\to 1.\n\\end{align}\n\n\\item[(ii)] {\\bf Around change points.} For $s\\in{\\calN}$, take $b=b_\\star:=C_1\\big(n^{-1}J_{q,A}(n,p)\\big)^{1\/2}+C_2 N_X^{1\/2}n^{-1\/4}\\log(p)^{1\/4}$, and $u=u_\\star: = C_0 \\kappa_p^2 b_\\star$, where $C_0$, $C_1$ and $C_2$ are constants independent of $n$ and $p$. Suppose $u_\\star=o(1)$. We have\n\\begin{align*}\n&\\sup_{t\\in{\\calN}}\\max_{j,k}|\\hat\\sigma_{jk}(t)-\\sigma_{jk}(t)|= O_\\mathbb{P}( b_\\star).\n\\end{align*}\nChoose the penalty parameter as $\\lambda_\\star:=C_1\\kappa_pb_\\star$, where $C_1$ is a constant independent of $n$ and $p$. Then\n\\begin{align}\n\\label{pre_boundary_unif}\n\\sup_{t\\in{\\calN}}|\\hat{\\Omega}_{\\lambda_\\star}(t)-\\Omega(t)|_\\infty= O_\\mathbb{P}( \\kappa_p^2 b_\\star).\n\\end{align}\nMoreover,\n\\begin{align}\n\\mathbb{P}\\Big(\\sup_{t\\in{\\calN}}\\sum_{(j,k)\\in {\\calE}^c(t)}\\ind{\\hat g_{jk}(t;u_\\star)\\ne 0}= 0\\Big)\\to 1,\\\\\n\\mathbb{P}\\Big(\\sup_{t\\in{\\calN}} \\sum_{(j,k)\\in {\\calE}^*(t;2u_\\star)}\\ind{\\hat g_{jk}(t;u_\\star)=0}= 0\\Big)\\to 1.\n\\end{align}\n\\end{enumerate}\n\\end{thm} \n\nNote that the convergence rates for the covariance matrix entries and precision matrix entries in case (ii), around the jump locations, are slower than those for points well separated from the jump locations in case (i). This is because, on the boundary, due to the reflection, the smoothness condition may no longer hold; we can only take advantage of the Lipschitz continuity of the covariance matrix function. We thus lose one degree of regularity, and the bias term $b^2$ in the convergence rate of the between-jump region becomes $b$ around the jumps. \nWe also note that in the smaller neighborhood of the jumps ${\\calJ}:=\\cup_{1\\le j\\le \\hat\\iota}\\hat{\\calT}_{h_\\diamond^2}(j)$, due to the larger error in the change point estimation, consistent recovery of the graphs is not achievable.
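\n\nBefore turning to the simulations, we illustrate the snapshot estimator itself. The sketch below (in Python) computes the kernel-smoothed covariance and solves the CLIME program column by column via linear programming; since (\\ref{eqn:CLIME}) is not reproduced in this section, we assume the standard CLIME formulation $\\min|\\omega|_1$ subject to $|\\hat\\Sigma(t)\\omega-e_j|_\\infty\\le\\lambda$, and the Epanechnikov kernel is only one admissible choice.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef kernel_cov(X, t, b):\n    # kernel-smoothed covariance at time t with bandwidth b\n    n, p = X.shape\n    u = (np.arange(1, n + 1) \/ n - t) \/ b\n    w = 0.75 * np.maximum(1 - u ** 2, 0)  # Epanechnikov kernel\n    w = w \/ w.sum()\n    return (X * w[:, None]).T @ X\n\ndef clime_column(S, j, lam):\n    # min ||w||_1 s.t. ||S w - e_j||_inf <= lam, with w = u - v, u, v >= 0;\n    # assumes the program is feasible for the given lam\n    p = S.shape[0]\n    e = np.zeros(p)\n    e[j] = 1.0\n    c = np.ones(2 * p)\n    A_ub = np.vstack([np.hstack([S, -S]), np.hstack([-S, S])])\n    b_ub = np.concatenate([lam + e, lam - e])\n    res = linprog(c, A_ub=A_ub, b_ub=b_ub,\n                  bounds=[(0, None)] * (2 * p), method='highs')\n    return res.x[:p] - res.x[p:]\n\ndef tv_clime(X, t, b, lam):\n    S = kernel_cov(X, t, b)\n    Om = np.column_stack([clime_column(S, j, lam)\n                          for j in range(S.shape[0])])\n    # standard CLIME symmetrization: keep the smaller-magnitude entry\n    take = np.abs(Om) <= np.abs(Om.T)\n    return np.where(take, Om, Om.T)\n\\end{verbatim}\n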
\n\n\n\n\\section{A simulation study}\\label{sec:simulation_network}\nWe simulate data from the following multivariate time series model:\n\\[\nX_i=\\sum_{m=0}^{100}A_m(i)\\bs\\epsilon_{i-m}, \\quad i=1,\\ldots,n,\n\\]\nwhere $A_{m}(i)\\in\\mathbb R^{p\\times p}$, $0\\le m\\le 100$, $1\\le i\\le n$, and $\\bs\\epsilon_{i-m}=(\\epsilon_{i-m,1},\\ldots,\\epsilon_{i-m,p})^\\top$, with $\\epsilon_{m,k}$, $m\\in \\mathbb Z$, $k=1,\\ldots,p$, generated as i.i.d. standardized $t(8)$ random variables. In the simulation, we fix $n=1000$ and consider $p=50$ and $p=100$. For each $m=1,\\ldots, 100$, the coefficient matrices are $A_m(i)=(1+m)^{-\\beta}B_m(i)$, where $\\beta=1$ and $B_m(1)$ is a $\\mathbb R^{p\\times p}$ block-diagonal matrix. The $5\\times 5$ diagonal blocks in $B_m(i)$ are fixed with i.i.d. $N(0,1)$ entries and all the other entries are $0$. \n\nWe set the number of abrupt changes to $\\iota=2$, with $(nt^{(1)},nt^{(2)})=(300,650)$. The matrix $A_0(i)$ is set to be a zero matrix for $i=1,2,\\ldots,299$, while $A_0(i)=A_0(299)+\\boldsymbol\\alpha \\boldsymbol \\alpha^\\top$ for $i=300,301,\\ldots,649$, and $A_0(i)=A_0(649)-\\boldsymbol\\alpha \\boldsymbol \\alpha^\\top$ for $i=650,651,\\ldots,1000$, where the first $20$ entries of $\\boldsymbol \\alpha$ are taken to be a constant $\\delta_0$ and the others are $0$.\n\n\nWe let the coefficient matrix $A_1(i)=\\{a_{1,jk}(i)\\}_{1\\le j,k\\le p}$ evolve at each time point, such that two entries are soft-thresholded and another two entries increase. Specifically, at time $i$, we randomly select two elements $a_{1,j_l^\\star k_l^\\star}(i)\\ne 0$, $l=1,2$, from the support of $A_1(i)$, and soft-threshold them: $a^\\star_{1,j_l^\\star k_l^\\star}(i)=\\mbox{sign}(a_{1,j_l^\\star k_l^\\star}(i))(|a_{1,j_l^\\star k_l^\\star}(i)|-0.05)_+$. We also randomly select two elements from $A^\\star_1(i)$ and increase their values by $0.03$.\n\nFigure \\ref{cov50} and Figure \\ref{cov100} show the support of the true covariance matrices at $i=100,200,\\ldots,900$.\n\\begin{figure}[htbp] \n \\centering\n \\includegraphics[width=5in]{covsupport50.pdf} \n \\caption{Support of the true covariance matrices, $p=50$}\n \\label{cov50}\n\\end{figure}\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=5in]{covsupport100.pdf} \n \\caption{Support of the true covariance matrices, $p=100$}\n \\label{cov100}\n\\end{figure}\n\n\n\n\nIn detecting the change points, the cutoff value $\\nu$ is chosen in a data-driven way as follows. After removing the neighborhoods of the detected change points, we obtain the ordered statistics $\\mathcal D_h^{(1)}\\ge \\mathcal D_h^{(2)}\\ge \\cdots\\ge \\mathcal D_h^{(\\mathfrak l)}$, where $\\mathfrak l$ is obtained from (\\ref{eqn:early_stop}) with $\\nu=0$. For $l=1,2,\\ldots,\\mathfrak l-1$, compute\n\\[\n\\mathcal R_h^{(l)}={\\mathcal D_h^{(l)}\\over \\mathcal D_h^{(l+1)}}.\n\\]\nWe let $\\hat\\iota=\\arg\\max_{1\\le l\\le \\mathfrak l-1 }\\mathcal R_h^{(l)}$ and set $\\nu=\\mathcal D_h^{(\\hat\\iota)}$.\n\n\n\nWe report the number of estimated jumps and the average absolute estimation error, where the latter is the mean of the distance between the estimated change points and the true change points. As shown in Table \\ref{table:distance} and Table \\ref{table:number}, there is an apparent improvement in the estimation accuracy as the jump magnitude increases and the dimension decreases. The detection is relatively robust to the choice of bandwidth.
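\n\nThe ratio-based selection of $\\nu$ described above is straightforward to implement; a minimal Python sketch (with the candidate statistics assumed to be sorted decreasingly and positive) is as follows.\n\\begin{verbatim}\nimport numpy as np\n\ndef select_nu(D_sorted):\n    # D_sorted: candidate statistics D^(1) >= ... >= D^(l_frak)\n    # obtained by running the detection with nu = 0\n    D = np.asarray(D_sorted, dtype=float)\n    ratios = D[:-1] \/ D[1:]                # R^(l) = D^(l) \/ D^(l+1)\n    iota_hat = int(np.argmax(ratios)) + 1  # estimated number of jumps\n    nu = D[iota_hat - 1]                   # cutoff nu = D^(iota_hat)\n    return iota_hat, nu\n\\end{verbatim}\n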
\n\nWe evaluate the support recovery performance of the time-varying CLIME at the lattice $100,200,\\ldots,900$ with $\\lambda=0.02,0.06,0.1$. We use the uniform kernel function and fix the bandwidth at $0.2$. At each time point $t_0$, two quantities are computed, sensitivity and specificity, which are defined as\n\\begin{align*}\n\\mbox{sensitivity}&=\\frac{\\sum_{1\\le j,k\\le p}\\mathbb{I}\\{\\hat g_{jk}(t_0;u)\\ne 0, g_{jk}(t_0;u)\\ne 0\\}}{\\sum_{1\\le j,k\\le p}\\mathbb{I}\\{g_{jk}(t_0;u)\\ne 0\\}},\\\\\n\\mbox{specificity}&=\\frac{\\sum_{1\\le j,k\\le p}\\mathbb{I}\\{\\hat g_{jk}(t_0;u)= 0, g_{jk}(t_0;u)= 0\\}}{\\sum_{1\\le j,k\\le p}\\mathbb{I}\\{ g_{jk}(t_0;u)= 0\\}}.\n\\end{align*}\nWe plot the Receiver Operating Characteristic (ROC) curve, that is, sensitivity against $1-$specificity. From Figure \\ref{fig:roc_p50} and Figure \\ref{fig:roc_p100} we observe that, due to a screening step, the support recovery is robust to the choice of $\\lambda$, except at the change points, where a non-negligible estimation error of the covariance matrix is induced and the overall estimation is less accurate. As the effective dimension of the network remains the same for $p=50$ and $p=100$ by the construction of the coefficient matrices $A_m(i)$, there is no significant difference between the ROC curves at different dimensions.\n\n\\begin{table}[htp]\n\\caption{Average absolute estimation error of the change point locations.}\n\\begin{center}\n\\begin{tabular}{cccccccc}\n\\hline\\hline\n&Bandwidth &0.14&0.16&0.18&0.2&0.22&0.24\\\\\n\\hline\n\\multirow{2}{*}{$p=50$}\n&$\\delta_0=1$&23.4&21.0&17.47&16.6&14.7&16.5\\\\\n&$\\delta_0=2$&7.4&6.9&8.3&8.1&7.2&6.3\\\\\n\\hline\n\\multirow{2}{*}{$p=100$}\n&$\\delta_0=1$&37.2&30.1&26.4&25.5&21.2&21.3\\\\\n&$\\delta_0=2$&7.8&8.2&9.9&6.9&8.9&7.6\\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\label{table:distance}\n\\end{table}%\n\n\n\\begin{table}[htp]\n\\caption{Number of estimated change points.}\n\\begin{center}\n\\begin{tabular}{cccccccc}\n\\hline\\hline\n&Bandwidth &0.14&0.16&0.18&0.2&0.22&0.24\\\\\n\\hline\n\\multirow{2}{*}{$p=50$}\n&$\\delta_0=1$&2.38&2.16&1.99&2.00&2.00&2.00\\\\\n&$\\delta_0=2$&2.46&2.31&2.00&2.00&2.00&2.00\\\\\n\\hline\n\\multirow{2}{*}{$p=100$}\n&$\\delta_0=1$&2.25&2.09&1.99&1.99&2.00&2.00\\\\\n&$\\delta_0=2$&2.38&2.19&2.00&2.00&2.00&2.00\\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\label{table:number}\n\\end{table}%\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=5in]{roc50.pdf} \n \\caption{ROC curve of the time-varying CLIME, $p=50$}\n \\label{fig:roc_p50}\n\\end{figure}\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=5in]{roc100.pdf} \n \\caption{ROC curve of the time-varying CLIME, $p=100$}\n \\label{fig:roc_p100}\n\\end{figure}\n\n\n\n\n\\section{A real data application}\n\\label{sec:real-data-analysis_network}\n\nUnderstanding the interconnections among financial entities and how they vary over time provides investors and policy makers with insights for risk control and decision making. \\cite{allen2009networks} presents a comprehensive study of the applications of network theory to financial systems. In this section, we apply our method to a real financial dataset from Yahoo! Finance (\\texttt{finance.yahoo.com}). The data matrix contains the daily closing prices of 420 stocks that were always in the S\\&P 500 index from January 2, 2002 through December 30, 2011. In total, there are $n=2519$ time points.
We select the 100 stocks with the largest volatility and consider their log-returns; that is, for $j=1,\\dots,100$,\n\\begin{equation*}\nX_{ij} = \\log\\rbr{p_{i+1,j} \/ p_{ij}},\n\\end{equation*}\nwhere $p_{ij}$ is the daily closing price of stock $j$ at time point $i$. \nWe first compute the statistics (\\ref{eqn:D_nonstationary}) and (\\ref{eqn:change-point-estimator-infty-nonstationary1}) for the change point detection, and we look at the top three statistics for different bandwidths. For the bandwidth $h=n^{-1\/5}\\approx 0.21$, we rank the test statistics and find that the top change point is located at February 7, 2008 ($n\\hat{s}_{(1)} = 1536$), as shown in Figure~\\ref{fig:D_max}. The detected change point is quite robust to a variety of choices of bandwidth. Our result is partially consistent with the change point detection method in \\cite{MR2572452}. In particular, the two breaks in 2006 and 2007 were also found in \\cite{MR2572452}, and it is conjectured that the 2007 break may be associated with the U.S. housing market collapse. Meanwhile, it is interesting to observe the increased volatility before the 2008 financial crisis.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=4in]{change.eps}\n \\caption{Break size $|D(s)|_\\infty$, from February 4, 2004, to November 30, 2009.\n }\n \\label{fig:D_max}\n\\end{figure}\n\nNext, we estimate the time-varying networks before and after the change point at May 26, 2006, which has the largest jump size. Specifically, we look at four time points, 813, 828, 888, and 903, corresponding to March 23, 2006, April 13, 2006, July 11, 2006, and August 1, 2006. We use tv-CLIME (\\ref{eqn:CLIME}) with the Epanechnikov kernel and the same bandwidth as in the change point detection to estimate the networks at the four points. The tuning parameter $\\lambda$ is automatically selected according to the stability approach \\cite{liuroederwasserman2010}. The distance matrix displayed below reports the number of differing edges at those four time points. It is observed that the first two time points (813 and 828) and the last two (888 and 903) have a higher similarity to each other than across the change point at time 858. The estimated networks are shown in Figure \\ref{fig:time-varying-networks}. Networks in the first and second rows are estimated before and after the estimated change point at time 858, respectively. It is observed that at each time point the companies in the same sector tend to be clustered together, such as the companies in the \\texttt{Energy} sector: OXY, NOV, TSO, MRO and DO (highlighted in cyan). The distance matrix of the estimated networks is\n\n\\begin{equation*}\n\\left(\n\\begin{array}{cccc}\n0 & 332 & 350 & 396 \\\\\n332 & 0 & 394 & 428 \\\\\n350 & 394 & 0 & 234 \\\\\n396 & 428 & 234 & 0 \\\\\n\\end{array}\n\\right).\n\\end{equation*}\n\n\n\\begin{figure}[htp]\n \\centering\n \\subfigure[Time 813.] {\\label{subfig:time_813}\\includegraphics[scale=0.3]{Time813.pdf}}\n\t\\subfigure[Time 828.] {\\label{subfig:time_828}\\includegraphics[scale=0.3]{Time828.pdf}} \\\\\n\t\\subfigure[Time 888.] {\\label{subfig:time_888}\\includegraphics[scale=0.3]{Time888.pdf}}\n\t\\subfigure[Time 903.] {\\label{subfig:time_903}\\includegraphics[scale=0.3]{Time903.pdf}}\\\\\n \\caption{Estimated networks at time points 813, 828, 888 and 903, corresponding to March 23, 2006, April 13, 2006, July 11, 2006, and August 1, 2006. Colors correspond to the nine sectors in the S\\&P dataset.}\n \\label{fig:time-varying-networks}\n\\end{figure}
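\n\nThe edge-difference counts in the distance matrix above can be reproduced schematically. The sketch below (Python) assumes the networks are given as the supports of the estimated precision matrices and counts off-diagonal positions where exactly one support is nonzero; whether each undirected edge is counted once or twice is not specified in the text, so the convention here is an assumption.\n\\begin{verbatim}\nimport numpy as np\n\ndef edge_distance(A, B):\n    # off-diagonal positions where exactly one support is nonzero\n    D = (A != 0) ^ (B != 0)\n    np.fill_diagonal(D, False)\n    return int(D.sum())\n\ndef distance_matrix(nets):\n    m = len(nets)\n    out = np.zeros((m, m), dtype=int)\n    for i in range(m):\n        for j in range(i + 1, m):\n            out[i, j] = out[j, i] = edge_distance(nets[i], nets[j])\n    return out\n\\end{verbatim}\n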
\n\n\\section{Proof of main results}\n\\label{sec:proof}\n\n\\subsection{Preliminary lemmas}\n\\begin{lem}\\label{lem:nagaev}\nLet $(Y_i)_{i\\in \\mathbb Z}$ be a sequence that admits (\\ref{eqn:data_local}). Assume $Y_i\\in \\mathcal L^q$ for $i=1,2,\\dots$, and that the dependence adjusted norm (DAN) of the corresponding underlying array $(Y_i^\\circ(t))$ satisfies $\\|Y_\\cdot\\|_{q,A}<\\infty$ for $q>2$ and $A>0$. Let $(w(t,t_i))_{i=1}^n$ be defined in (\\ref{eqn:kernel weight}) and suppose that the kernel function $K(\\cdot)$ satisfies Assumption \\ref{assumption:kernel}. Denote $\\varpi_{q,A}(n) = n, n(\\log n)^{1+2q}, n^{q\/2-Aq}$ if $A > 1\/2-1\/q$, $A = 1\/2-1\/q$, and $0 < A < 1\/2-1\/q$, respectively. Then there exist constants $C_1,C_2$ and $C_3$ independent of $n$, such that for all $x>0$,\n\\begin{align}\n\\label{eqn:dev-Y}\n\\sup_{t\\in (0,1)}\\mathbb{P}\\rbr{\\rBR{\\sum_{i=1}^nw(t,t_i)\\big(Y_i-\\E(Y_i)\\big)}>x}\\le C_1{\\varpi_{q,A}(B_n)\\RBR{Y_\\cdot}_{q,A}^q\\over B_n^{q} x^q}+C_2\\exp\\rbr{-C_3 B_n x^2\\over \\RBR{Y_\\cdot}_{2,A}^2},\n\\end{align}\nand\n\\begin{align}\n\\label{eqn:max-dev-Y}\n\\mathbb{P}\\rbr{\\sup_{t\\in (0,1)}\\rBR{\\sum_{i=1}^nw(t,t_i)\\big(Y_i-\\E(Y_i)\\big)}>x}\\le C_1{\\varpi_{q,A}(n)\\RBR{Y_\\cdot}_{q,A}^q\\over B_n^{q} x^q}+C_2\\exp\\rbr{-C_3 B_n^2 x^2\\over n\\RBR{Y_\\cdot}_{2,A}^2}.\n\\end{align}\n\\end{lem}\n\\begin{proof}\nLet $S_i=\\sum_{j=1}^i \\big(Y_j-\\E (Y_j)\\big)$. Note that \n\\begin{align}\\nonumber\n\\sup_{t\\in (0,1)}\\rBR{\\sum_{i=1}^nw(t,t_i)\\big(Y_i-\\E(Y_i)\\big)}&=\\sup_{t\\in (0,1)}\\rBR{\\sum_{i=1}^n w(t,t_i)(S_{i}-S_{i-1})}\\\\\\nonumber\n&\\le \\sup_t\\rBR{\\sum_{i=1}^{n-1}\\rbR{\\big(w(t,t_i)-w(t,t_{i+1})\\big)S_i}}+\\sup_t\\rBR{w(t,t_n)S_n}\\\\\\nonumber\n&\\lesssim B_n^{-1}\\max_{1\\le i\\le n}|S_i|,\n\\end{align}\nwhere the last inequality follows from the fact that $\\sup_t\\sum_{i=1}^{n-1}|w(t,t_i)-w(t,t_{i+1})|\\asymp B_n^{-1}$ due to Assumption \\ref{assumption:kernel}. \n\nTo see (\\ref{eqn:max-dev-Y}), it suffices to show\n\\begin{align}\\label{eqn:SM121944}\n\\mathbb{P}\\rbr{\\max_{1\\le i\\le n}|S_i|>x}\\le C_1{\\varpi_{q,A}(n)\\RBR{Y_\\cdot}_{q,A}^q\\over x^q}+C_2\\exp\\rbr{-C_3 x^2\\over n\\RBR{Y_\\cdot}_{2,A}^2}.\n\\end{align}\n\nNow we develop a probability deviation inequality for $\\max_{1\\le i\\le n}|\\sum_{j=1}^i\\alpha_jY_j|$, where $\\alpha_j\\ge 0$, $1\\le j\\le n$, are constants such that $\\sum_{1\\le j\\le n}\\alpha_j=1$. \nDenote $\\mathcal P_0(Y_i)=\\E(Y_i|\\varepsilon_i)-\\E(Y_i)$ and\n\\[\n\\mathcal P_k(Y_i)=\\E(Y_i|\\varepsilon_{i-k},\\ldots,\\varepsilon_i)-\\E(Y_i|\\varepsilon_{i-k+1},\\ldots,\\varepsilon_i). \n\\]\nThen we can write \n\\begin{align}\\label{eqn:SM121947}\n\\max_{1\\le i\\le n}|\\sum_{j=1}^i\\alpha_jY_j|\n&\\le \\max_{1\\le i\\le n}|\\sum_{j=1}^i\\alpha_j \\Proj_0(Y_j)|\n+\\max_{1\\le i\\le n}|\\sum_{k=1}^n\\sum_{j=1}^i\\alpha_j \\Proj_k(Y_j)|\\\\\\nonumber\n&\\qquad+\\max_{1\\le i\\le n}|\\sum_{k=n+1}^\\infty \\sum_{j=1}^i\\alpha_j \\Proj_k(Y_j)|.\n\\end{align}\nNote that $(\\Proj_0(Y_j))_{j\\in\\mathbb Z}$ is an independent sequence.
By Nagaev's inequality and Ottaviani's inequality, we have that \n\\begin{align}\\label{eqn:SM121948}\n\\Prob(\\max_{1\\le i\\le n}|\\sum_{j=1}^i\\alpha_j \\Proj_0(Y_j)|\\ge x)\n&\\lesssim {\\sum_{j=1}^n\\alpha_j^q \\RBR{\\Proj_0(Y_j)}_q^q\\over x^q}+\\exp\\big(-{C_3 x^2\\over \\sum_{j=1}^n\\alpha_j^2 \\|\\Proj_0(Y_j)\\|_2^2}\\big)\\\\\\nonumber\n&\\lesssim\\frac{\\sum_{j=1}^n\\alpha_j^q\\|Y_j\\|_q^q}{x^q}+\\exp\\big(-C_3 {x^2\\over \\sum_{j=1}^n\\alpha_j^2}\\big),\n\\end{align}\nwhere the last inequality holds because $\\|\\Proj_0(Y_j)\\|_q\\le 2\\|Y_j\\|_q$ by Jensen's inequality. \nSince $\\big(\\sum_{j=i+1}^n\\alpha_j \\Proj_k(Y_j)\\big)_k$ is a martingale difference sequence with respect to $\\sigma(\\varepsilon_{i+1-k},\\varepsilon_{i+2-k},\\ldots)$, we have that $|\\sum_{k=1+n}^\\infty\\sum_{j=i+1}^n\\alpha_j\\Proj_k(Y_j)|$ is a non-negative sub-martingale. Then by Doob's inequality and Burkholder's inequality, we have\n\\begin{align}\\nonumber\n&\\Prob\\big({\\max_{1\\le i\\le n}|\\sum_{k=n+1}^\\infty\\sum_{j=1}^i\\alpha_j\\Proj_k(Y_j)|\\ge x}\\big)\\\\\\nonumber\n&\\le \\Prob\\big({|\\sum_{k=n+1}^\\infty\\sum_{j=1}^n\\alpha_j\\Proj_k(Y_j)|\\ge {x\\over2}}\\big)+\\Prob\\big({\\max_{1\\le i\\le n}|\\sum_{k=n+1}^\\infty\\sum_{j=1+i}^{n}\\alpha_j\\Proj_k(Y_j)|\\ge {x\\over2}}\\big)\\\\\\nonumber\n&\\lesssim\\frac{\\RBR{\\sum_{k=1+n}^\\infty\\sum_{j=1}^n\\alpha_j\\Proj_k(Y_j)}_q^q}{x^q}\\\\\\label{eqn:SM121951}\n&\\lesssim\\frac{(\\sum_{j=1}^n \\alpha_j^2)^{q\/2}\\Theta_{n,q}^q}{x^q}\n\\le \\frac{\\Theta_{n,q}^qn^{q\/2-1}\\sum_{j=1}^n \\alpha_j^q}{x^q}.\n\\end{align}\nNow we deal with the term $\\max_{1\\le i\\le n}|\\sum_{k=1}^n\\sum_{j=1}^i \\alpha_j\\Proj_k(Y_j)|$. Define $a_m=\\min(2^m,n)$ and $M_n=\\lceil\\log n\/\\log 2\\rceil$. Then\n\\begin{align}\\label{eqn:TM101953}\n\\max_{1\\le i\\le n}\\big|\\sum_{k=1}^n\\sum_{j=1}^i \\alpha_j\\Proj_k(Y_j)\\big|\n\\le \\sum_{m=1}^{M_n}\\max_{1\\le i\\le n}\\big|\\sum_{l=1}^{\\lceil i\/a_m\\rceil}\\sum_{j=1+(l-1)a_m}^{\\min(la_m,i)}\\sum_{k=1+a_{m-1}}^{a_m}\\alpha_j\\Proj_k(Y_j)\\big|.\n\\end{align}\nLet $\\mathcal A_{odd}=\\{1\\le l\\le \\lceil i\/a_m\\rceil,l \\mbox{ is odd}\\}$ and $\\mathcal A_{even}=\\{1\\le l\\le \\lceil i\/a_m\\rceil,l \\mbox{ is even}\\}$. We have\n\\begin{align*}\n\\Prob\\big(\\max_{1\\le i\\le n}\\big|\\sum_{l=1}^{\\lceil i\/a_m\\rceil}Z_{l,m,i}\\big|\\ge x\\big)\n\\le \\Prob\\big(\\max_{1\\le i\\le n}\\big|\\sum_{\\mathcal A_{odd}}Z_{l,m,i}\\big|\\ge x\/2\\big)\n+\\Prob\\big(\\max_{1\\le i\\le n}\\big|\\sum_{\\mathcal A_{even}}Z_{l,m,i}\\big|\\ge x\/2\\big),\n\\end{align*}\nwhere $Z_{l,m,i}:=\\sum_{j=1+(l-1)a_m}^{\\min(la_m,i)}\\alpha_j\\Proj_{a_{m-1}}^{a_m}(Y_j)$, with $\\Proj_{a_{m-1}}^{a_m}(Y_j):=\\sum_{k=1+a_{m-1}}^{a_m}\\Proj_k(Y_j)$. Since $\\Proj_{a_{m-1}}^{a_m}(Y_j)$ is $a_m$-dependent, $Z_{l,m,i}$ is independent of $Z_{l+2,m,i}$ for $1\\le l\\le \\lceil i\/a_m\\rceil$, $1\\le m\\le M_n$, $1\\le i\\le n$. Therefore, we can apply Ottaviani's inequality and Nagaev's inequality for independent variables.
As a consequence, \n\\[\n\\Prob\\big(\\max_{1\\le i\\le n}\\big|\\sum_{l=1}^{\\lceil i\/a_m\\rceil}Z_{l,m,i}\\big|\\ge x\\big)\n\\lesssim \\frac{\\sum_{1\\le l\\le \\lceil n\/a_m\\rceil}\\|Z_{l,m,n}\\|_q^q}{x^q}\n+\\exp\\big(-{C_3 x^2\\over \\sum_{1\\le l\\le \\lceil n\/a_m\\rceil}\\|Z_{l,m,n}\\|_2^2}\\big).\n\\]\nAgain, by Burkholder's inequality, we have that for $q\\ge 2$,\n\\begin{align*}\n\\|Z_{l,m,n}\\|_q\n&\\le \\sum_{k=1+a_{m-1}}^{a_m}\\|\\sum_{j=1+(l-1)a_m}^{\\min(la_m,n)}\\alpha_j\\Proj_k(Y_j)\\|_q\\\\\n&\\lesssim (\\sum_{j=1+(l-1)a_m}^{\\min(la_m,n)}\\alpha_j^2)^{1\/2}(\\Theta_{a_{m-1}}-\\Theta_{a_m}).\n\\end{align*}\nNote that $\\sum_{j=1+(l-1)a_m}^{\\min(la_m,n)}\\alpha_j^2\\le a_m^{(q-2)\/q}(\\sum_{j=1+(l-1)a_m}^{\\min(la_m,n)}\\alpha_j^q)^{2\/q}$. Let $\\tau_m=m^{-2} \/\\sum_{m'=1}^{M_n} (m')^{-2}$; we have $\\tau_m\\asymp m^{-2}$, since $1\\le \\sum_{m'=1}^{M_n} (m')^{-2}\\le \\pi^2\/6$. In view of (\\ref{eqn:TM101953}), we have that\n\\begin{align}\\label{eqn:SM122004}\n\\Prob\\big(\\max_{1\\le i\\le n}\\big|\\sum_{k=1}^n\\sum_{j=1}^i\\alpha_j\\Proj_k(Y_j)\\big|\\ge x\\big)\n&\\le \\sum_{m=1}^{M_n} \\Prob\\big(\\max_{1\\le i\\le n}\\big|\\sum_{l=1}^{\\lceil i\/a_m\\rceil}Z_{l,m,i}\\big|\\ge \\tau_m x\\big)\\\\\\nonumber\n&\\hskip-2cm\n\\lesssim \\frac{\\sum_{j=1}^n \\alpha_j^q}{x^q}\\|Y_\\cdot\\|_{q,A}^q\\sum_{m=1}^{M_n} \\tau_m^{-q}a_m^{(1\/2-A)q-1}\n+ \\sum_{m=1}^{M_n}\\exp\\big(-\\frac{C_3x^2\\tau_m^2 a_m^{2A}}{\\sum_{j=1}^n\\alpha_j^2\\|Y_\\cdot\\|_{2,A}^2}\\big).\n\\end{align}\nNote $\\sum_{m=1}^{M_n} \\tau_m^{-q}a_m^{(1\/2-A)q-1}\\asymp n^{-1}\\varpi_{q,A}(n)$, and\n\\[\n\\sum_{m=1}^{M_n}\\exp\\big(-\\frac{C_3x^2\\tau_m^2 a_m^{2A}}{\\sum_{j=1}^n\\alpha_j^2\\|Y_\\cdot\\|_{2,A}^2}\\big) \\lesssim \\exp\\big(-\\frac{C_3x^2}{\\sum_{j=1}^n\\alpha_j^2 \\|Y_\\cdot\\|_{2,A}^2}\\big).\n\\]\nCombining (\\ref{eqn:SM121947}), (\\ref{eqn:SM121948}), (\\ref{eqn:SM121951}) and (\\ref{eqn:SM122004}), we obtain \n\\begin{align}\\nonumber\n&\\mathbb{P}\\big(\\max_{1\\le i\\le n}\\big|{\\sum_{j=1}^i\\alpha_j\\big(Y_j-\\E (Y_j)\\big)}\\big|>x\\big)\\\\\n\\label{eqn:WJ202046}\n&\\qquad\\le C_1{\\varpi_{q,A}(n)\\sum_{j=1}^n\\alpha_j^q\\RBR{Y_\\cdot}_{q,A}^q\\over nx^q}+C_2\\exp\\big({-C_3 x^2\\over \\sum_{j=1}^n\\alpha_j^2 \\RBR{Y_\\cdot}_{2,A}^2}\\big). \n\\end{align}\nNow we have (\\ref{eqn:SM121944}) by taking $\\alpha_j=n^{-1}$ for $j=1,\\ldots, n$. \nSince $K(\\cdot)$ has bounded support, for any given $t\\in [b,1-b]$, we have\n\\begin{align*}\n\\mathbb{P}\\big(\\big|\\sum_{i=1}^n w(t,t_i)(Y_i-\\E Y_i)\\big|>x\\big)\\le \\mathbb{P}\\big(\\big|\\sum_{i=-B_n}^{B_n}w(t,t_{tn+i})(Y_{tn+i}-\\E Y_{tn+i})\\big|>x\\big)\\\\\n\\le C_1{\\varpi_{q,A}(B_n)\\sum_{i=-B_n}^{B_n}w(t,t_{tn+i})^q\\RBR{Y_\\cdot}_{q,A}^q\\over B_nx^q}+C_2\\exp\\big({-C_3 x^2\\over\\sum_{i=-B_n}^{B_n}w(t,t_{tn+i})^2 \\RBR{Y_\\cdot}_{2,A}^2}\\big). \n\\end{align*}\nTherefore (\\ref{eqn:dev-Y}) follows from (\\ref{eqn:WJ202046}), applied with $\\alpha_j=w(t,t_{tn+j})$, by noting that for any $t\\in [b,1-b]$ and any constant $\\beta\\ge 2$, $\\sum_{i=-B_n}^{B_n}w(t,t_{tn+i})^\\beta\\asymp B_n^{1-\\beta}$.\n\\end{proof}\n\n\\begin{lem}\n\\label{lemma:Y_i,jk_large-dev}\nSuppose $(X_{ij})_{i\\in \\mathbb Z, 1\\le j\\le p}$ satisfies Assumption \\ref{assumption:dep}, and let Assumption \\ref{assumption:kernel} hold.
Let $\\varpi_{q,A}(n)$ be defined as in Lemma \\ref{lem:nagaev}.\nThen there exist constants $C_1,C_2$ and $C_3$ independent of $n$ and $p$, such that for all $x > 0$, we have\n\\begin{eqnarray}\n\\nonumber\n&&\\sup_{t\\in (0,1)}\\Prob\\big( \\big|\\sum_{i=1}^n w(t,t_i) \\big(\\vX_i\\vX_i^\\top-\\E( \\vX_i\\vX_i^\\top) \\big)\\big|_\\infty \\ge x \\big)\\\\\\label{eqn:Y_i,jk_large-dev}\n&&\\qquad\\qquad \\le C_1 \\nu_{2q}^q {p \\varpi_{q,A}(B_n) M_{X,q}^{q} \\over B_n^{q} x^q} + C_2 p^2 \\exp\\left( -C_3 {B_nx^2 \\over \\nu_4^2 N_X^2} \\right),\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\nonumber\n&&\\Prob\\big( \\sup_{t\\in (0,1)}\\big|\\sum_{i=1}^n w(t,t_i) \\big(\\vX_i\\vX_i^\\top-\\E( \\vX_i\\vX_i^\\top) \\big)\\big|_\\infty \\ge x \\big)\\\\ \\label{eqn:Y_i,jk_large-dev_unif}\n&&\\qquad\\qquad\\le C_1 \\nu_{2q}^q {p \\varpi_{q,A}(n) M_{X,q}^{q} \\over B_n^q x^q} + C_2 p^2\\exp\\left( -C_3 {B_n^2x^2 \\over n\\nu_4^2 N_X^2} \\right).\n\\end{eqnarray}\n\\end{lem}\n\n\\begin{proof}\nFor $1\\le j,k\\le p$, let $Y_{i,jk} = X_{ij} X_{ik}$. We now check the conditions in Lemma \\ref{lem:nagaev} for $(Y_{i,jk})_{1\\le i\\le n}$. Denote $Y_{i,jk,\\{m\\}} = X_{ij,\\{m\\}} X_{ik,\\{m\\}}$. Then the uniform functional dependence measure of $(Y_{i,jk})_i$ satisfies\n\\begin{eqnarray*}\n\\theta_{m,q,jk}^Y &=&\\sup_i \\|Y_{i,jk} - Y_{i,jk,\\{m\\}}\\|_q\\\\\n&=& \\sup_i \\| X_{ij} X_{ik} - X_{ij,\\{m\\}} X_{ik,\\{m\\}} \\|_q \\\\\n&\\le& \\sup_i \\|X_{ij} (X_{ik} - X_{ik,\\{m\\}} ) \\|_q + \\sup_i \\|X_{ik,\\{m\\}} (X_{ij} - X_{ij,\\{m\\}} ) \\|_q.\n\\end{eqnarray*}\nThus, by the Cauchy--Schwarz inequality, the DAN of the process $Y_{\\cdot, jk}$ satisfies\n\\begin{align*}\n\\|Y_{\\cdot, jk}\\|_{q,A}\n\\le \\sup_i \\| X_{ij}\\|_{2q} \\; \\|X_{\\cdot,k}\\|_{2q,A} + \\sup_i \\| X_{ik}\\|_{2q} \\; \\|X_{\\cdot,j}\\|_{2q,A}\\le \\nu_{2q}(\\|X_{\\cdot,k}\\|_{2q,A}+\\|X_{\\cdot,j}\\|_{2q,A}).\n\\end{align*}\nThe result then follows from Lemma \\ref{lem:nagaev} and the Bonferroni (union) bound over the $p^2$ pairs $(j,k)$. \n\\end{proof}\n\n\\begin{lem}\\label{lemma:covpredev}\nWe adopt the notation of Lemma \\ref{lemma:Y_i,jk_large-dev}. Suppose Assumptions \\ref{assumption:dep}, \\ref{assumption:smoothness} and \\ref{assumption:kernel} hold with $\\iota=0$. Recall $B_n = n b$, where $b\\to 0$ and $B_n\/\\sqrt n\\to \\infty$ as $n\\to\\infty$.
Then $\\hat\\Sigma(t)$ in (\\ref{eqn:samplecov}) satisfies, for any $t\\in [c,1-c]$,\n\\begin{align}\\label{eqn:cov_dev}\n|\\hat\\Sigma(t)-\\Sigma(t)|_\\infty &= O_\\mathbb{P} \\left( b ^2+M_{X,q}\\nu_{2q}B_n^{-1}(p\\varpi_{q,A}(B_n))^{1\/q}+ \\nu_4N_X (\\log{p}\/B_n)^{1\/2} \\right).\n\\end{align}\nFurthermore, \n\\begin{align}\n\\label{eqn:cov_dev_unif}\n{\\sup_{t \\in [c,1-c]} |\\hat\\Sigma(t)-\\Sigma(t)|_\\infty}= O_\\mathbb{P}\\left( b ^2+M_{X,q}\\nu_{2q}B_n^{-1}(p\\varpi_{q,A}(n))^{1\/q}+\\nu_4N_XB_n^{-1} [n\\log p]^{1\/2} \\right).\n\\end{align}\n\\end{lem}\n\n\\begin{proof}\nFirst we have\n\\begin{align*}\n\\E\\hat\\sigma_{jk}(t)-\\sigma_{jk}(t) = \\sum_{i=1}^n w(t,t_i) [\\sigma_{jk}(t_i)-\\sigma_{jk}(t)].\n\\end{align*}\nApproximating the discrete summation by an integral, we obtain for all $1\\le j,k\\le p$,\n\\begin{align*}\n\\sup_{t\\in [b,1-b]}\\rBR{\\E\\hat\\sigma_{jk}(t)-\\sigma_{jk}(t)-\\int_{-1}^{1} K(u)[\\sigma_{jk}(u b +t)-\\sigma_{jk}(t)]du}=O\\left({ B_n^{-1}}\\right).\n\\end{align*}\nBy Assumption \\ref{assumption:smoothness}, we have \n\\begin{align*}\n\\sigma_{jk}(ub+t)-\\sigma_{jk}(t)&=u b \\sigma'_{jk}(t)+\\frac{1}{2}u^2 b ^2\\sigma''_{jk}(t)+o( b ^2u^2).\n\\end{align*}\nThus we have $\\sup_{t\\in [c,1-c]}|\\E\\hat\\Sigma(t)-\\Sigma(t)|_\\infty=O\\left({B_n^{-1}}+ b ^2\\right)$, in view of Assumption \\ref{assumption:kernel}. \nBy Lemma \\ref{lemma:Y_i,jk_large-dev}, we have\n \\begin{align*}\n\\sup_{t\\in (0,1)} \\mathbb{P}\\left(\\left|\\hat\\Sigma(t)-\\E\\hat\\Sigma(t)\\right|_\\infty \\ge x\\right)\n&\\le C_1 p \\nu_{2q}^q {M_{X,q}^q \\varpi_{q,A}(B_n) \\over B_n^{q} x^q} + C_2p^2\\exp\\rbr{-C_3{B_nx^2 \\over N_X^2}}.\n\\end{align*}\nDenote $u= C_4\\big(M_{X,q}\\nu_{2q}B_n^{-1}(p\\varpi_{q,A}(B_n))^{1\/q}+ \\nu_4N_X (\\log{p}\/B_n)^{1\/2}\\big)$ for a large enough constant $C_4$; then for any $t\\in (0,1)$,\n\\begin{align}\\nonumber\n\\left|\\hat\\Sigma(t)-\\E\\hat\\Sigma(t)\\right|_\\infty=O_\\mathbb{P}(u).\n\\end{align}\nThus (\\ref{eqn:cov_dev}) is proved. The result (\\ref{eqn:cov_dev_unif}) can be obtained similarly.\n\\end{proof}\n\n\n\n\n\n\\subsection{Proof of main results}\n\\begin{proof}[Proof of Proposition \\ref{prop:precmx_smooth}]\nGiven (\\ref{eqn:cov_dev}) and (\\ref{eqn:cov_dev_unif}), the proof of (\\ref{eqn:pre_dev}) is standard (see, e.g., \\cite[Theorem 6]{MR2847973}). For $\\lambda^\\circ$ and $\\lambda^\\diamond$ given in Proposition \\ref{prop:precmx_smooth}, by Lemma \\ref{lemma:covpredev}, we have, respectively, \n\\begin{align}\\label{eqn:lambdao}\n\\lambda^\\circ&\\ge \\sup_{t}\\E\\big(\\kappa_p|\\hat \\Sigma(t)-\\Sigma(t)|_\\infty\\big),\\\\\n\\label{eqn:lambda*}\n\\lambda^\\diamond&\\ge \\E\\big(\\kappa_p\\sup_{t}|\\hat \\Sigma(t)-\\Sigma(t)|_\\infty\\big).\n\\end{align}\nThen note that for any $t\\in[0,1]$ and any $\\lambda>0$,\n\\begin{align*}\n&|\\hat\\Omega_\\lambda(t)-\\Omega(t)|_\\infty \\le |\\Omega(t)|_{L^1}|\\Sigma(t)\\hat\\Omega_\\lambda(t)-\\Id_p|_\\infty\\\\\n&\\qquad\\le |\\Omega(t)|_{L^1} \\big[|\\hat\\Sigma(t)\\hat\\Omega_\\lambda(t)-\\Id_p|_\\infty+|(\\Sigma(t)-\\hat\\Sigma(t))\\Omega(t)|_\\infty+|\\hat\\Omega_\\lambda(t)-\\Omega(t)|_{L^1}|\\hat\\Sigma(t)-\\Sigma(t)|_\\infty\\big],\n\\end{align*}\nwhere by construction, we have $|\\hat\\Sigma(t)\\hat\\Omega_\\lambda(t)-\\Id_p|_\\infty\\le \\lambda$ and $|\\hat\\Omega_\\lambda(t)-\\Omega(t)|_{L^1}\\le 2\\kappa_p$.
Consequently,\n\\begin{align}\\label{eqn:MT1236p}\n|\\hat\\Omega_\\lambda(t)-\\Omega(t)|_\\infty\\le \\kappa_p\\big(\\lambda+3\\kappa_p|\\hat\\Sigma(t)-\\Sigma(t)|_\\infty\\big).\n\\end{align}\nThen (\\ref{eqn:pre_dev}) and (\\ref{eqn:pre_dev_unif}) follow from (\\ref{eqn:lambdao})--(\\ref{eqn:MT1236p}).\n\\end{proof}\n\n\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:partial_support_recovery}]\nProposition \\ref{prop:partial_support_recovery} is an immediate consequence of (\\ref{eqn:pre_dev_unif}).\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:jumppointsestimation}]\nDenote by $r_j$, $1\\le j\\le \\iota$, the jump time points ordered decreasingly by the infinity norm of the covariance jumps, i.e., $|\\Delta(r_1)|_\\infty\\ge |\\Delta(r_2)|_\\infty\\ge \\ldots\\ge |\\Delta(r_\\iota)|_\\infty\\ge |\\Delta(s)|_\\infty$ for $s\\in (0,1)\\cap \\{r_1,\\ldots, r_\\iota\\}^c$ (temporal order is applied if there is a tie). Let ${\\calT}_{h}(j)=[r_j-h,r_j+h)$. For $h=o(1)$, as a result of Assumption \\ref{assumption:jump}, ${\\calT}_{h}(j)\\cap {\\calT}_{h}(i)=\\emptyset$ if $i\\neq j$ for $n$ sufficiently large. That is to say, each time point $s\\in(0,1)$ is in the neighborhood of at most one change point. \n\nFor any $s\\in [t^{(j)},t^{(j+1)})$, $j=0,1,\\ldots, \\iota$, denote $\\mathbb{D}(s)=\\E[D(s)]$ and \n\\begin{equation}\n\\label{eqn:D(s)_expectation-nonstationary}\n\\D^\\diamond(s) = \\left\\{\n\\begin{array}{ll}\n(h-s+t^{(j)}) \\Delta(t^{(j)}), & t^{(j)} \\le s < t^{(j)}+h, \\\\\n(h+s-t^{(j+1)}) \\Delta(t^{(j+1)}), & t^{(j+1)}-h \\le s < t^{(j+1)}, \\\\\n0, & \\mbox{otherwise},\n\\end{array}\n\\right.\n\\end{equation}\nso that, by the smoothness of the covariance matrix function between the change points,\n\\begin{align}\\label{eqn:diff_stat_nonstat}\n\\sup_{s\\in [h,1-h]}|\\mathbb{D}(s)-\\D^\\diamond(s)|_\\infty\\le L h^2.\n\\end{align}\nBy Lemma \\ref{lemma:Y_i,jk_large-dev}, we have that for all $x > 0$,\n\\begin{align}\n\\label{eqn:large-deviation-infty}\n\\Prob\\left(\\sup_{s\\in [h,1-h]} |D(s) - \\D(s)|_\\infty \\ge x\\right)\n\\le C_1 {p \\varpi_{q,A}(n) M_{X,q}^q\\nu_{2q}^q \\over n^{q} x^q} + C_2 p^2 \\exp\\left(-C_3 {nx^2\\over N_X^{2}}\\right).\n\\end{align}\nIt follows that \n\\[\n|\\D(r_1)|_\\infty-|\\D(\\hat s_1)|_\\infty=O_\\mathbb{P}\\big(h^{-1}J_{q,A}(n,p)+N_Xh^{-1}(n^{-1}\\log (p))^{1\/2}\\big).\n\\]\nTaking $h=h_\\diamond$, we have\n\\[\n|\\hat s_1-r_1|=O_\\mathbb{P}(h_\\diamond^2).\n\\]\nFurthermore, for the event ${\\calA}:=\\big\\{\\sup_{s\\in [h,1-h]} |D(s) - \\D(s)|_\\infty\\le h_\\diamond^2\\big\\}$, we have\n\\[\n\\mathbb{P}({\\calA})\\ge 1-C_1\\big({p \\varpi_{q,A}(n) M_{X,q}^q \\nu_{2q}^q \\over n^{q} c_2^q}\\big)^{1\/3}-C_2p^2\\exp\\big(-C_3({n\\log^2(p)\\over N_X^2})^{1\/3}\\big).\n\\]\n\n\n\nLet ${\\calA}_k:=\\{\\max_{1\\le j\\le k}|\\hat s_j-r_j|\\le c_2^{-1}2(L+1)h_\\diamond^2\\}$ for some $1\\le k\\le \\iota$, and assume ${\\calA}\\subset {\\calA}_k$. Under ${\\calA}_k$ we have that $[r_j-h_\\diamond,r_j+h_\\diamond)\\subset \\hat{{\\calT}}_{2h_\\diamond}(j):=[\\hat s_j-2h_\\diamond,\\hat s_j+2h_\\diamond)$ for $1\\le j\\le k$ and $r_{k+1}\\notin\\cup_{1\\le j\\le k}\\hat{{\\calT}}_{2h_\\diamond}(j)$ as a consequence of Assumption \\ref{assumption:jump}. According to (\\ref{eqn:D_lower}) and (\\ref{eqn:D_upper}), if ${\\calA}$ is true, then $|\\hat s_{k+1}-r_{k+1}|\\le c_2^{-1}2(L+1) h_\\diamond^2$, which implies ${\\calA}\\subset{\\calA}_{k+1}$. \nThe result (\\ref{eqn:jumppoint}) then follows by induction. \n\nSuppose ${\\calA}$ holds. By the choice of $\\nu$, as a consequence of (\\ref{eqn:diff_stat_nonstat}) and (\\ref{eqn:large-deviation-infty}), and since $\\nu\\ll h_\\diamond$, we have that \n\\[\n\\sup_{s\\in [0,1]}|D(s)-\\D^\\diamond(s)|_\\infty\\le \\nu.\n\\]\n As a result, \n \\[\n \\min_{1\\le j\\le \\iota} |D(r_j)|_\\infty\\ge c_2 h_\\diamond-\\nu\\ge \\nu,\n \\]\n i.e., $\\hat \\iota\\ge \\iota$.
On the other hand, since $\\cup_{1\\le j\\le \\iota}\\hat{\\calT}_{2h_\\diamond}(j)$ is excluded from the search region for $\\hat s_{\\iota+1}$, we have, on ${\\calA}$,\n \\[\n \\sup_{s\\in \\big(\\cup_{1\\le j\\le \\iota}\\hat{\\calT}_{2h_\\diamond}(j)\\big)^c} |D(s)|_\\infty\\le \\nu.\n \\]\n In other words, ${\\calA}\\subset\\{\\hat \\iota=\\iota\\}$. Thus (\\ref{eqn:jumpnumber}) is proved. \n\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:support_recovery}]\nWe adopt the notation of the proof of Theorem \\ref{thm:jumppointsestimation} and assume that the event ${\\calE}:={\\calA}\\cap\\{\\hat\\iota=\\iota\\}$ holds. Similarly to Lemma \\ref{lemma:covpredev}, by Lemma \\ref{lemma:Y_i,jk_large-dev} we have, for any $t\\in (0,1)$,\n\\begin{align}\\nonumber\n\\left|\\hat\\Sigma(t)-\\E\\hat\\Sigma(t)\\right|_\\infty=O_\\mathbb{P}(u),\n\\end{align}\nwhere $u= C_4\\big(M_{X,q}\\nu_{2q}B_n^{-1}(p\\varpi_{q,A}(B_n))^{1\/q}+ \\nu_4N_X (\\log{p}\/B_n)^{1\/2}\\big)$ for a large enough constant $C_4$. \n\nUnder $\\calE$, we have ${\\calT}_{b}(j)\\subset \\hat{\\calT}_{b+h_\\diamond^2}(j)$. For $t\\in \\big(\\cup_{1\\le j\\le \\iota}\\hat{\\calT}_{b+h_\\diamond^2}(j)\\big)^c\\cap [b,1-b]$, we have that for all $1\\le j,k\\le p$, \n\\begin{align*}\n\\rBR{\\E\\hat\\sigma_{jk}(t)-\\sigma_{jk}(t)}\n&=\\int_{-1}^{1} K(u)[\\sigma_{jk}(u b +t)-\\sigma_{jk}(t)]du+O\\left({ B_n^{-1}}\\right)\\\\\n&= b\\sigma'_{jk}(t)\\int_{-1}^{1} uK(u) du+ \\big(\\frac{1}{2} b ^2\\sigma''_{jk}(t)+o(b^2)\\big)\\int_{-1}^{1} u^2K(u) du+O\\left({ B_n^{-1}}\\right)\\\\\n&=O(b^2+B_n^{-1}).\n\\end{align*}\nOn the other hand, for $t\\in \\cup_{1\\le j\\le \\iota}\\big(\\hat{\\calT}_{b+h_\\diamond^2}(j)\\cap {\\calT}_{h_\\diamond^2}^c(j)\\big)\\cup [0,b]\\cup[1-b,1]$, due to the reflection, we no longer have differentiability. As a result of the Lipschitz continuity, we get\n\\begin{align*}\n\\rBR{\\E\\hat\\sigma_{jk}(t)-\\sigma_{jk}(t)}\n=\\int_{-1}^{1} K(u)[\\sigma_{jk}(u b +t)-\\sigma_{jk}(t)]du+O\\left({ B_n^{-1}}\\right)=O(b+B_n^{-1}).\n\\end{align*}\nThe result (\\ref{eqn:cov_boundary_unif}) follows by the choices of $b$. The rest of the proof is similar to that of Proposition \\ref{prop:precmx_smooth} and Proposition \\ref{prop:partial_support_recovery}.\n\\end{proof}\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{State of the art}\n\nAndroid has captured roughly 78.8\\% of today's smartphone market, with version 6.0 of its operating system installed on over 28.1\\% of devices in use. Over 99\\% of the malware designed to target mobile devices is aimed at Android, and such malware keeps increasing in number and ingenuity~\\cite{fsecuresecurity2017}. \nIt resorts to such mechanisms as obfuscation, encryption, or communication with a command and control center in order to evade detection systems. \n\nIn early 2017, Google launched Verify Apps, a protection system that monitors and profiles the behavior of Android applications. If an application is flagged as potentially harmful, Verify Apps immediately informs the user that it needs to be removed. According to data from Google, Verify Apps can analyze more than 750 million applications per day and, although no detailed information has been released, it is likely to perform both static and dynamic security verification. Experts have shown that applications verified in Google Play were run in a virtual machine, whose profile can be reconstructed~\\cite{Oberheide}.
However, this protection system is far from infallible, as evidenced by the malware Dvmap~\\cite{dvmap}, which, after gaining root access, was able to deactivate Verify Apps and go undetected.\n\n\nMalware detection is generally done through the use of signatures, based on apks or on behavioral profiles. Signatures and profiles can be built based on a static or dynamic analysis of the malware. Static analysis allows building malware signatures against which the signature of a new sample can be compared~\\cite{griffin2009automatic,feng2014apposcopy}. \n\nThe method we put forward aims at describing the behavioral profile of an Android application. Behavioral analysis is often done using a dynamic approach. However, we consider that static analysis provides an appropriate representation of this profile, by taking into account all possible branches, contrary to dynamic analysis, which can be bypassed by malware~\\cite{Abraham2015}. Indeed, malware can carry out evasive techniques such as transformation in order to evade signatures~\\cite{Rastogi2014} or sandbox detection~\\cite{Vidas:2014:EAR}. As a result, we offer a static approach based on the bytecode blocks that constitute the application, with a control flow graph built from the possible transitions between these blocks.\n\nOther Android malware detection methods based on the API exist. In~\\cite{MaMaDroid}, Mariconti \\emph{et al.} studied API call sequences in order to determine the behavior of the application. To this end, they extracted call graphs and used Markov chains to represent the call sequences. They achieved an interesting detection rate of 99\\% for new malware from the same period as the malware used for the learning process; this value decreases as one moves away from the learning period. In~\\cite{li2015detection}, Li \\emph{et al.} extracted APIs in order to determine characteristic dependencies. Relying on machine learning algorithms, especially random forests of decision trees, they managed to identify close to 96\\% of the malware present in their database. This research shows the correlation between APIs and malware behavior, a correlation inherent in the Android architecture. In~\\cite{gascon2013structural}, Gascon \\emph{et al.} presented a study on the relevance of function call graphs in the search for variants. Using machine learning via support vector machines on over 12,000 pieces of malware, they achieved a detection rate of 89\\% with only 1\\% of false positives. \n\n\n\\section{Construction of the approach}\n\n\n\\subsection{Benefits of the proposed method}\n\nThis paper presents a novel approach to malware detection based on subgraph isomorphism over blocks of opcodes. \nThese opcodes correspond to the opcodes present in the bytecode of the Dalvik virtual machine.\nResorting to the subgraph isomorphism problem allows conducting a structural comparison of the opcode blocks on top of a semantic comparison. This is the first benefit of the present approach compared to the state of the art, which, as a general rule, relies on semantic comparison alone.\n\nSubgraph isomorphism allows checking unknown software against known malware for similarities. In other words, the subgraph isomorphism algorithm compares the graphs of opcode blocks of the software to be scanned with a database of graphs of malware opcode blocks. This is the second benefit of the present approach compared to previous research, which often does not go into such a level of detail, that is to say machine language.
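\n\nTo illustrate the core comparison, a minimal Python sketch using the \\texttt{networkx} library is given below; the node attribute \\texttt{hash} (the hash of the underlying opcode block, described in the next section) and the function name are illustrative placeholders.\n\\begin{verbatim}\nimport networkx as nx\nfrom networkx.algorithms import isomorphism\n\ndef contains_malware_cfg(app_cfg, mal_cfg):\n    # True if some subgraph of the application CFG is isomorphic to\n    # the malware CFG, with matching opcode-block hashes on the nodes\n    matcher = isomorphism.DiGraphMatcher(\n        app_cfg, mal_cfg,\n        node_match=isomorphism.categorical_node_match('hash', None))\n    return matcher.subgraph_is_isomorphic()\n\\end{verbatim}\n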
\nMachine language is our focus, and more specifically operation codes, or \"opcodes\". We assume that in any language the choice of words reveals the intentions of their user. In this regard, we consider that opcode blocks resulting from infected code feature a vocabulary that is slightly different from that of other opcode blocks. Our approach uses the relative frequency of opcode ngrams, following a text mining method. This is the third benefit of the present method, which analyzes machine language to extract similarity knowledge about infected code.\n\nWe make the basic assumption that malware developers follow the good practices of software engineering by reusing existing code to develop new malware. Identical blocks of machine code can be identified through the signature principle. This assumption especially holds for malware that combines clean programs with bits of infected code. For detection accuracy, one needs to keep in mind that malware developers may adapt, modify, complete, that is to say transform, the bits of infected code they use. We put forward an approach which counters this obstacle. This is the fourth benefit of the present method, which can withstand transformations of the infected code and is thereby able to detect pieces of malware from the same family.\n\n\n\\subsection{Overview of the present method}\n\nFigure~\\ref{global} presents the different steps of our approach. Due to limited space, the details of our approach are omitted here.\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{schema_article_court2.png} \n \\caption{General presentation of our approach}\n \\label{global}\n\\end{figure}\n\n\nIn order to detect whether a piece of software is infected, our method relies on a comparison between that software and a malware database. Control Flow Graphs (CFGs) are created based on the machine code of the software to be scanned; the structure of the software and its logical sequences are therefore preserved. The graph nodes are labelled with the hash of the underlying opcode block. \n\nThe comparison between the CFGs of the software to be scanned and those of the database is carried out by computing their subgraph isomorphisms. \nThe computation takes into account both the structure of the graphs and the correspondence of the hashes. \nFor each graph pair being compared, it yields one of the following results: either the two CFGs are identical, which lets us know that the software under analysis is a piece of known malware; or the two CFGs do not match; or there is a partial match, in which case the distance between the two CFGs is calculated to determine whether the software is a variant of a known malware family. \n\nThe CFG database is critical to our method. It is built from a database of known malware: the most characteristic CFGs are extracted for each type of malware. These CFGs may not exactly delimit the sequences of infected code, but they must allow for the best possible characterization of the malware in comparison with its other CFGs. The database is built using machine learning, which identifies the most characteristic machine code sequences of the malware. Once the characteristic machine code sequences have been identified, they are transformed into CFGs to feed the CFG database.
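\n\nAs an illustration of the CFG construction just described, the following Python sketch builds a hash-labelled CFG with \\texttt{networkx} and \\texttt{hashlib}; the basic-block extraction is assumed to be done beforehand by a Dalvik bytecode disassembler, and the toy blocks below are purely illustrative.\n\\begin{verbatim}\nimport hashlib\nimport networkx as nx\n\ndef build_cfg(blocks, edges):\n    # blocks: {block_id: [opcode, ...]}; edges: [(src_id, dst_id), ...]\n    g = nx.DiGraph()\n    for bid, opcodes in blocks.items():\n        digest = hashlib.sha256(' '.join(opcodes).encode()).hexdigest()\n        g.add_node(bid, hash=digest)  # node labelled with the block hash\n    g.add_edges_from(edges)\n    return g\n\n# toy example: two opcode blocks linked by one possible transition\ncfg = build_cfg({0: ['const\/4', 'if-eqz'],\n                 1: ['invoke-virtual', 'return-void']},\n                [(0, 1)])\n\\end{verbatim}\n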
\n\nAs for any approach based on machine learning, the first step is to learn a model from a relevant training database; detection can then take place. Text mining is used in the learning process: the relative frequency of opcode ngrams is used to discriminate the blocks of infected code. Ngrams are short sequences of opcodes.\n\nThus, learning is done through characteristic vectors obtained from the analysis of the TF\/IDF frequencies of the ngrams in the software. \nOnce the model has been learned from a database of clean software and malware for which the opcode TF\/IDF frequencies have been extracted, the sequences of infected code can be predicted in known malware. Each of those sequences is then transformed into a CFG in order to feed the database of CFGs characteristic of malware. \n\n\n\n\\section{Experiments}\n\n\nRegarding the process of our method, as~\\fig{global} presents, there are two main experiments to perform: validating the machine learning method chosen to build the CFG database, and evaluating our approach on malware characterization. For this last step, we test our approach both on a laboratory database and on real data. \n\n\n\\subsection{Validating the machine learning method for the CFG database}\n\nAs presented above, the CFG database is built using machine learning, which identifies the most characteristic machine code sequences of the malware. We experimented with the following supervised machine learning algorithms: Random Forest, XGBoost, Decision Tree, SVM and KNN, as they are well suited to class prediction from a database. The computations were automated using the Dataiku tool. These algorithms were tested for ngrams of size 1 to 9; the resulting F1 scores are reported in Table~\\ref{tabl}. The database is split into 80\\% for training and 20\\% for testing, and a 10-fold cross-validation is used for all the tests.
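\n\nA minimal Python sketch of this evaluation pipeline, using scikit-learn, is given below; the opcode sequences, labels and hyperparameters are toy placeholders (the actual experiments were run in Dataiku).\n\\begin{verbatim}\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.model_selection import cross_val_score\n\n# one space-separated opcode sequence per application (toy data)\ndocs = (['const\/4 if-eqz invoke-virtual return-void'] * 10\n        + ['nop goto const\/16 add-int'] * 10)\nlabels = [1] * 10 + [0] * 10  # 1 = malware, 0 = clean\n\n# TF\/IDF on opcode ngrams of size 2; vary ngram_range up to (9, 9)\nvec = TfidfVectorizer(analyzer='word', token_pattern=r'\\S+',\n                      ngram_range=(2, 2))\nX = vec.fit_transform(docs)\n\nclf = RandomForestClassifier(n_estimators=100)\nscores = cross_val_score(clf, X, labels, cv=10, scoring='f1')\nprint(scores.mean())  # 10-fold cross-validated F1 score\n\\end{verbatim}\n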
\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{|l|ccccccccc|}\n\\hline\\hline\n & \\multicolumn{9}{c|}{ngram size} \\\\\nAlgorithm & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\\\n\\hline\nRandom Forest & \\textbf{0.923} & \\textbf{0.986} & \\textbf{0.935} & \\textbf{0.882} & \\textbf{0.846} & \\textbf{0.829} & \\textbf{0.820} & \\textbf{0.794} & \\textbf{0.796} \\\\\nXGBoost & 0.919 & 0.984 & 0.934 & 0.882 & \\textbf{0.846} & \\textbf{0.829} & \\textbf{0.820} & 0.793 & \\textbf{0.796} \\\\\nDecision Tree & 0.916 & 0.972 & 0.925 & 0.867 & 0.835 & 0.821 & 0.813 & 0.787 & 0.793 \\\\\nKNN & 0.916 & 0.982 & 0.934 & 0.882 & $<0.5$ & $<0.5$ & $<0.5$ & $<0.5$ & $<0.5$ \\\\\nSVM & 0.917 & \\textbf{0.986} & \\textbf{0.935} & 0.882 & \\textbf{0.846} & \\textbf{0.829} & \\textbf{0.820} & 0.793 & \\textbf{0.796} \\\\\n\\hline\\hline\n\\end{tabular}\n\\caption{F1 scores of the algorithms by ngram size, for a vector size of 100.} \\label{tabl}\n\\end{table}\n\nAs Table~\\ref{tabl} shows, Random Forest and SVM achieve the best F1 scores, whereas KNN rapidly loses efficiency as soon as the ngram size reaches 5. Given the F1 scores of the best algorithms, we can conclude that the present approach is able to identify features of interest among the opcodes of several hundred apks, so that a database can be built to search for isomorphisms.\n\n\\subsection{Laboratory database}\n\n\nIn this section, the objective is to test the performance and the robustness of the method. A common way to create malware is to add infected code to an existing Android application. However, existing works do not focus on verifying that only the malware is detected as such, and not the safe original application. For this purpose, we created this laboratory database. Moreover, our aim is also to detect variants of the same malware. To build this database, we introduced three variants of an infected code into ten Android applications, thereby creating the equivalent of 30 infected applications, all being variants of the same malware. The database was completed with the 10 original, unmodified applications and 100 other applications.
The laboratory testing allowed us to validate the underlying concept of our approach.\n\n\nThe main benefit of this laboratory database is that one can dissect the results variant by variant. Three dictionaries were thus created, each one based on a single variant. Table~\\ref{tab-labo} shows the results obtained using our method for each of the three dictionaries: all variants are found and there is no false positive. The match threshold used to obtain those results requires that a subgraph isomorphism be found in the candidate program and that at least half the hashes match. The results fully validate the present method, though the match threshold should not be chosen too high.\n\n\n\\begin{table}\n \\centering\n \\begin{tabular}{|l|ccc|}\n \\hline\n & Precision & Recall & F-measure \\\\\n \\hline\n Variant 1 & 1 & 1 & 1 \\\\\n Variant 2 & 1 & 1 & 1 \\\\\n Variant 3 & 1 & 1 & 1 \\\\\n \\hline\n \\end{tabular}\n \\caption{Results obtained on the laboratory database}\n \\label{tab-labo}\n\\end{table}\n\n\n\\subsection{Set of real data}\n\nIn this section, we test the present method on a set of real data. In total, 10 variants of the DroidJack and Opfake malware were collected, and a database of 100 clean programs and as many malware samples was built. Thus, our method has been tested in real conditions. \n\n\n\nIn order to validate the method in real conditions, a database of 100 completely clean applications and 100 malware samples was created. The infected specimens were collected from the collaborative platform Koodous~\\cite{koodous}. Among those specimens, two families in particular, DroidJack and Opfake, were selected along with their variants: 6 DroidJack variants and 4 Opfake variants were collected. \n\nThree tests were carried out, with a dictionary created: based solely on a DroidJack variant; based solely on an Opfake variant; and based on all 100 malware samples.\n\nTable~\\ref{tab-reel} presents the results of those three tests. The test carried out on the 6 known DroidJack variants shows that there were two false positives. However, the CFGs of those two false positives and those of DroidJack are a perfect match, which could mean that the two false positives were misfiled in Koodous.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{|l|ccc|}\n \\hline\n Dictionary based on & Precision & Recall & F-measure \\\\\n \\hline\n DroidJack & 0.98 & 1 & 0.99 \\\\\n Opfake & 1 & 1 & 1 \\\\\n Set of malware & 1 & 1 & 1 \\\\\n \\hline\n \\end{tabular}\n \\caption{Results obtained on the real data set consisting of 100 clean applications and 100 malware samples}\n \\label{tab-reel}\n\\end{table}\n\n\n\n\n\\section{Conclusion}\n\nThis paper presents a malware detection method based on the construction of a control flow graph from Android bytecode mnemonics. The characterization of new malware is carried out by comparing its signature with those of known malware using subgraph isomorphism. Reference signatures are identified by a machine learning process. The results obtained on an ad hoc laboratory database and on a set of real data validate the present approach with an almost perfect detection rate. Due to limited space, the details of our approach will be presented in an extended work, and more experiments on other, larger databases will be performed.\n\n\\bibliographystyle{plainurl}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}