diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgdts" "b/data_all_eng_slimpj/shuffled/split2/finalzzgdts" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgdts" @@ -0,0 +1,5 @@ +{"text":"\\section{}\nThe possibility of extracting statistical-mechanical \ninformation from a pure quantum state has been intensively discussed in the context of the foundation of statistical mechanics \n \\cite{Popescu,Lebowitz,SugitaJ,Reimann}.\nAs we shall demonstrate here, it also has a potential significance for \na new formulation of \nstatistical mechanics, \nand for a novel calculation technique.\n\nAs an illustration, let us \nconsider a closed quantum system composed of $N$ spins, \nwhich is enclosed by adiabatic walls.\nIn the ensemble formulation,\nits equilibrium properties are described by the microcanonical \nensemble, which is specified by $E$ (energy), $N$, and so on. \nThe corresponding subspace (energy shell) \nin the Hilbert space $\\mathcal{H}_N$ is denoted\nby $\\mathcal{E}_{E,N}$.\nLet us consider a random vector \n$\n| \\psi \\rangle = \n{\\sum_\\nu}^{\\!\\!\\! \\prime} \\ c_\\nu |\\nu \\rangle\n$\nin $\\mathcal{E}_{E,N}$, \nwhere\n$\\{ |\\nu \\rangle \\}_\\nu$ is an arbitrary orthonormal basis set\nof $\\mathcal{E}_{E,N}$, \n${\\sum_\\nu}^{\\!\\!\\! \\prime}~$ denotes the sum over this basis,\nand $\\{ c_\\nu \\}_\\nu$ is \na set of random complex numbers drawn uniformly\nfrom the unit sphere ${\\sum_\\nu}^{\\!\\!\\! \\prime} \\ |c_\\nu|^2 =1$ in the \ncomplex space of dimension $\\dim \\mathcal{E}_{E,N}$.\nIt was shown in Refs.~\\cite{Popescu,Lebowitz,SugitaJ,Reimann} \nthat almost every such vector \ngives the correct equilibrium values of \na certain class of observables $\\hat{A}$\nby $\\langle \\psi | \\hat{A} | \\psi \\rangle$.\nThis property was proved in Refs.~\\cite{Popescu,Lebowitz}\nfor observables of a subsystem, which is much smaller than the whole system.\nThe case of general observables, including\nobservables of the whole system (such as the total magnetic moment and its fluctuation), \nwas analyzed in Refs.~\\cite{SugitaJ,Reimann}.\nIt was shown that the above property holds {\\em not} for all observables\nbut for observables that are low-degree polynomials \n(i.e., their degree $\\ll N$) of local operators \\cite{SugitaJ}.\nWe here call such observables {\\em mechanical variables}.\nWe \nassume that all mechanical variables are \nnormalized in such a way that \nthey are dimensionless.\n\nFor conceptual clarity, we call {\\em generally} a pure quantum state that \nrepresents an equilibrium state a \n{\\em thermal pure quantum state} (TPQ state).\nStating more precisely for the case where \na state $|\\psi \\rangle$ has random variables \n(such as the random vector \ndiscussed above), \nwe call $|\\psi \\rangle$ a TPQ state \nif for an arbitrary positive number $\\epsilon$\n\\begin{equation}\n{\\rm P}( \n| \n\\langle \\psi | \\hat{A}| \\psi \\rangle\n-\n\\langle \\hat{A} \\rangle^{\\rm eq}_{E, N}\n|\n\\geq \\epsilon\n) \n\\leq\n\\eta_\\epsilon (N)\n\\label{eq:Pto0}\n\\end{equation}\nfor every mechanical variable $\\hat{A}$.\nHere, \n${\\rm P}(x)$ denotes the probability of event $x$,\n$\\langle \\cdot \\rangle^{\\rm eq}_{E, N}$ denotes the ensemble average,\nand \n$\\eta_\\epsilon (N)$\nis a function (of $N$ and $\\epsilon$) which vanishes as $N \\to \\infty$.\nThe above inequality means that for large $N$ \ngetting a {\\em single} realization of \na TPQ state is sufficient, with high probability, \nfor evaluating \nequilibrium 
values\nof mechanical variables. \nThe vector\n$\n{\\sum_\\nu}^{\\!\\!\\! \\prime} \\ c_\\nu |\\nu \\rangle\n$\nof Refs.~\\cite{Popescu,Lebowitz,SugitaJ,Reimann} \nis a TPQ state.\nHowever, important problems remain to be solved.\nMost crucially,\n{\\em genuine thermodynamic variables}, \nsuch as the entropy and temperature,\ncannot be calculated as $ \\langle \\psi | \\hat{A} | \\psi \\rangle$\nbecause they are not mechanical variables \\cite{vNS}.\nMoreover, \none needs to prepare a basis $\\{ |\\nu \\rangle \\}_\\nu$ of $\\mathcal{E}_{E,N}$ \nto construct ${\\sum_\\nu}^{\\!\\!\\! \\prime} \\ c_\\nu |\\nu \\rangle$.\nSince this is a hard task, such a TPQ state \nis hard to obtain.\n\nIn this Letter, we resolve these problems\nby proposing a new TPQ state, \na novel method of constructing it,\nand new formulas for obtaining genuine thermodynamic variables.\nThis novel formulation of statistical mechanics\nenables one to calculate\n{\\em all} variables of statistical-mechanical interest\nat {\\em finite temperature},\nfrom only a {\\em single} realization of the TPQ state.\nWe also show that \nthis formulation is very useful for practical calculations.\n\n{\\em New TPQ state -- }\nWe consider a discrete quantum system composed of $N$ sites, \nwhich is described by a Hilbert space $\\mathcal{H}_N$ \nof dimension $D=\\lambda^N$,\nwhere $\\lambda$ is a constant of $O(1)$.\n[For a spin-1\/2 system, $\\lambda=2$.]\nOur primary purpose is to obtain results in the thermodynamic limit: \n$N \\rightarrow \\infty$ while $E\/N$ is fixed. \nTherefore, we hereafter use quantities per site,\n$\\hat{h} \\equiv \\hat{H}\/N$ (where $\\hat{H}$ denotes the Hamiltonian), \n$u \\equiv E\/N$, and $(u; N)$ instead of $(E,N)$.\n[We do not write explicitly \nvariables other than $u$ and $N$, such as a magnetic field.]\nWe assume that the system is consistent with thermodynamics\nin the sense that \nthe density of states $g(u;N)$ behaves as \\cite{en:Boltzmann}\n\\begin{equation}\ng(u;N)= \n\\exp[Ns(u;N)], \n\\\n\\beta'(u;N)\n\\leq 0.\n\\label{eq:TDconsistency}\n\\end{equation}\nHere, $s(u;N)$ is the entropy density,\nwhich converges to the $N$-independent \none $s(u;\\infty)$ as $N\\rightarrow \\infty$,\n$\\beta(u;N) \\equiv \\partial s(u;N)\/\\partial u$\nis the inverse temperature,\nand\n$\\beta'\n\\equiv\n\\partial \\beta \/\\partial u\n$.\nThese conditions are satisfied, for example,\nby spin models and the Hubbard model.\nSince $D$ is finite,\n$\\beta$ may be positive and negative in \nlower- and higher-energy regions,\nrespectively.\nWe here consider the former region.\n\n\nWe propose the following TPQ state and \nthe procedure for constructing it. \nFirst, take a random vector \n$\n| \\psi_{0} \\rangle \n\\equiv\n\\sum_i c_i | i \\rangle\n$\nfrom the {\\em whole} Hilbert space $\\mathcal{H}_N$.\nHere, $\\{|i\\rangle\\}_i$ is an arbitrary orthonormal basis of $\\mathcal{H}_N$, \nand $\\{c_i\\}_i$ is a set of random complex numbers drawn uniformly\nfrom the unit sphere $\\sum|c_i|^2=1$ of the $D$-dimensional complex space. 
\nNote that this construction of random vectors is independent of the choice of the orthonormal basis $\\{|i\\rangle\\}_i$.\nOne can therefore use a trivial basis such as a set of product states.\nHence, $| \\psi_{0} \\rangle$ can be generated easily.\nOn the whole, \nthe amplitude is almost equally distributed over {\\em all} the energy eigenstates \nin this state\n(as is easily seen by choosing the eigenstates of $\\hat{h}$ as the basis $\\{|i\\rangle\\}_i$).\nThus, the distribution of energy in $|\\psi_0\\rangle$ is proportional to $g(u;N)$.\nWe wish to modify this distribution into another distribution $r_k(u;N)$ \nwhich has a peak at a desired energy.\nThis is easily done by applying a suitable polynomial of $\\hat{h}$\nto $|\\psi_0 \\rangle$ as we shall see below. \n[Applying $\\hat{h}$ to a vector is much easier \nthan diagonalizing $\\hat{h}$.]\nWe denote the minimum and the maximum eigenvalues of $\\hat{h}$\nby $e_{\\rm{min}}$ and $e_{\\rm{max}}$, respectively.\nTake a constant $l$ of $O(1)$ \nsuch that $l \\ge e_{\\rm max}$. \nStarting from $| \\psi_{0} \\rangle$, calculate\n\\begin{eqnarray}\nu_{k} &\\equiv& \\langle \\psi_{k} | \\hat{h} | \\psi_{k} \\rangle,\n\\label{eq:uk}\n\\\\\n| \\psi_{k+1} \\rangle \n&\\equiv&\n( l - \\hat{h} ) | \\psi_{k} \\rangle\n\/\n\\| ( l - \\hat{h} ) | \\psi_{k} \\rangle \\|\n\\end{eqnarray}\niteratively for $k=0, 1, 2, \\cdots$. \nFrom Eq.~(\\ref{Nb=k\/L}) below, \n$u_{0}$ corresponds to $\\beta=0$, \ni.e., $g(u;N)$ takes its maximum at $u=u_{0}$.\nWe will \nalso show that $u_{k}$ decreases gradually down to $e_{\\rm min}$ as $k$ is increased, \ni.e., \n$\nu_{0} > u_{1} > \\cdots \\geq e_{\\rm{min}} \n$.\nOne may terminate the iteration \nwhen $u_{k}$ becomes low enough for one's purpose.\nWe denote $k$ at this point by $k_{\\rm term}$. \nWe will show that\n$k_{\\rm term} = O(N)$ at finite temperature,\nand that \nthe states\n$\n| \\psi_{0} \\rangle, | \\psi_{1} \\rangle, \n\\cdots,\n| \\psi_{k_{\\rm term}} \\rangle\n$ \nbecome a series of TPQ states corresponding to various energy densities,\n$u_{0}, u_{1}, \\cdots, u_{k_{\\rm term}}$.\nHence, the equilibrium value \nof an arbitrary mechanical variable $\\hat{A}$ is obtained as\n$\\langle \\psi_{k} | \\hat{A} | \\psi_{k} \\rangle$, \nas a function of $u_{k}$.\nFor each realization of $\\{ c_i \\}_i$, \na series of realizations of TPQ states is obtained.\nWe will show that the dependence of \n$\\langle \\psi_{k} | \\hat{A} | \\psi_{k} \\rangle$ on $\\{ c_i \\}_i$ is\nexponentially small in size $N$ as $N$ increases.\nTherefore, \n{\\em only a single realization\nsuffices for obtaining\na fairly accurate value. }\nWhen better accuracy is required, \none can take the average over many realizations.\n\n\nWe now show that the states obtained with the above procedure are TPQ states.\nSince $| \\psi_{0} \\rangle$ is independent of the choice of the basis, \nwe take \nthe set of energy eigenstates $\\{ | n \\rangle \\}_n$\nas \n$\\{ | i \\rangle \\}_i$ \nin order to see properties of $| \\psi_k \\rangle$\n(although we never use such a basis in practical calculations).\nAfter the $k$-fold multiplication by $l-\\hat{h}$, \n$ | \\psi_{0} \\rangle\n=\n\\sum_n c_n | n \\rangle\n$\nturns into \n\\begin{equation}\n| \\psi_{k} \\rangle \n\\propto \n(l-\\hat{h})^k | \\psi_{0} \\rangle\n= \n\\sum_{n} c_{n} (l-e_{n})^k |n \\rangle,\n\\label{psi=Sdn}\n\\end{equation}\nwhere $\\hat{h} |n \\rangle = e_n |n \\rangle$. 
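\n\nAs a concrete illustration of the above construction, the following minimal sketch in Python\/NumPy generates a single realization of $| \\psi_{k} \\rangle$ for a small system; it assumes the one-dimensional spin-1\/2 Heisenberg model treated later in this Letter, with illustrative choices $N=10$, $J=+1$, $h_z=0$, $l=|J|$, and $3N$ iterations, none of which are prescribed by the formulation itself.\n\\begin{verbatim}\nimport numpy as np\n\n# TPQ iteration for a 1D spin-1\/2 Heisenberg ring (h_z = 0).\n# N, J, l and the number of iterations are illustrative choices.\nN, J = 10, 1.0\nsx = np.array([[0., 1.], [1., 0.]], dtype=complex)\nsy = np.array([[0., -1j], [1j, 0.]])\nsz = np.array([[1., 0.], [0., -1.]], dtype=complex)\n\ndef bond(op, i, j, n):\n    # embed op acting on sites i and j of an n-site ring\n    mats = [np.eye(2, dtype=complex)] * n\n    mats[i], mats[j] = op, op\n    out = mats[0]\n    for m in mats[1:]:\n        out = np.kron(out, m)\n    return out\n\nH = sum((J \/ 4.0) * bond(s, i, (i + 1) % N, N)\n        for i in range(N) for s in (sx, sy, sz))\nh = H \/ N                   # Hamiltonian per site, h = H\/N\nl = abs(J)                  # any O(1) constant with l >= e_max works;\n                            # here e_max <= 3|J|\/4 since h_z = 0\n\nrng = np.random.default_rng(0)\npsi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)\npsi \/= np.linalg.norm(psi)  # random |psi_0>, uniform on the unit sphere\n\nfor k in range(3 * N):\n    u_k = np.vdot(psi, h @ psi).real      # u_k = <psi_k| h |psi_k>\n    beta_k = 2.0 * (k \/ N) \/ (l - u_k)    # estimate beta ~ 2*kappa\/(l - u_k)\n    print(k, u_k, beta_k)\n    psi = l * psi - h @ psi               # |psi_{k+1}> prop. to (l - h)|psi_k>\n    psi \/= np.linalg.norm(psi)\n\\end{verbatim}\nOnly matrix-vector products appear in the loop; for larger $N$ a sparse representation of $\\hat{h}$ should be used instead of the dense matrix built here.\n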
\nLet us examine how the energy density $u$ distributes in this state.\nThe (unnormalized) distribution function of $u$ is given by\n$\nr_k(u;N)\n\\equiv\n\\delta_r^{-1} {\\sum_n}^{\\!\\! \\prime \\prime} |c_{n}|^2 (l-e_{n})^{2k}\n$,\nwhere $\\delta_r = o(1)$ and\nthe sum is taken over $n$ such that $e_n$ lies in \na small interval $[u-\\delta_r \/2, u+\\delta_r \/2)$.\nSince the density of states $g(u;N)$ is exponentially large in size $N$, \n$r_k(u;N)$ converges (in probability) exponentially fast\nto its average. Hence, \n\\begin{equation}\nr_k(u;N)\n= D^{-1} \\exp[N \\xi_{\\kappa}(u;N)],\n\\label{Sd=e^xi}\n\\end{equation}\nwhere $\\xi_{\\kappa}(u;N) \\equiv s(u;N) + 2\\kappa \\ln (l-u)$ with $\\kappa \\equiv k\/N$.\nHereafter we often denote $k$ dependence by $\\kappa$, \ne.g., we express $u_k$ as $u_{\\kappa}$.\nNote that $\\xi_{\\kappa}(u;N)$ does not depend on $\\{ c_i \\}_i$,\nbecause the dependence vanishes\nwhen we have dropped negligible terms in Eq.~(\\ref{Sd=e^xi}). \n$\\xi_{\\kappa}(u;N)$ takes the maximum at $u^{\\ast}_{\\kappa}$ which satisfies\n\\begin{eqnarray}\n\\beta(u^{\\ast}_{\\kappa};N) = 2 \\kappa \/ (l-u^{\\ast}_{\\kappa}).\n\\label{Nb=k\/L}\n\\end{eqnarray}\nSince $\\beta(u^*_\\kappa; N)$ and $l-u^{\\ast}_{\\kappa}$ are $O(1)$, \nwe find $\\kappa = O(1)$, and hence $k = O(N)$.\nExpanding $\\xi_{\\kappa}(u;N)$ around $u^{\\ast}_{\\kappa}$, \nand noticing \n\\begin{eqnarray}\n\\xi''_{\\kappa} \n\\equiv \\partial^2 \\xi_{\\kappa}\/\\partial u^2 \n= \\beta'(u^{\\ast}_{\\kappa};N) - 2\\kappa\/(l-u^{\\ast}_{\\kappa})^2 < 0\n\\nonumber\n\\end{eqnarray}\nfrom Eq.~(\\ref{eq:TDconsistency}), \nwe get \n$\n\\xi_{\\kappa}(u;N) \n=\n\\xi_{\\kappa}(u^{\\ast}_{\\kappa};N) \n- |\\xi''_{\\kappa}| (u-u^{\\ast}_{\\kappa})^2 \/ 2\n+ \\xi'''_{\\kappa} (u-u^{\\ast}_{\\kappa})^3 \/ 6\n+ \\cdots\n$.\nHere, \n$\\xi'''_{\\kappa} \\equiv \\partial^3 \\xi_{\\kappa}\/\\partial u^3\n=\\beta''(u^{\\ast}_{\\kappa};N) - 4\\kappa\/(l-u^{\\ast}_{\\kappa})^3$.\nHence, $r_k(u;N)$ behaves almost as the Gaussian distribution, peaking at \n$u=u^*_\\kappa$, \nwith the vanishingly small variance $1\/N |\\xi''_{\\kappa}|$.\nLet us introduce the density operator \n$\n\\hat{\\rho}_k \\equiv \n(l-\\hat{h})^{2k}\n\/{\\rm Tr}(l-\\hat{h})^{2k}\n$, \nwhich has the same energy distribution $r_k(u;N)$.\nIn the ensemble formulation, \n$\\hat{\\rho}_k$ represents the equilibrium state specified by \n$(u_\\kappa; N)$\nbecause $r_k(u;N)$ has a sharp peak.\nWe call the ensemble corresponding to $\\hat{\\rho}_k$\nthe {\\em smooth microcanonical ensemble} (because\nthe energy distribution is smooth).\nIn a way similar to those of Refs.~\\cite{SugitaJ,Reimann}, \nwe can show that for an arbitrary positive number $\\epsilon$\n\\begin{eqnarray}\n{\\rm P}\\Big( \n\\Big| \\langle \\psi_{k} | \\hat{A} | \\psi_{k} \\rangle\n&-&\n{\\rm Tr}[\\hat{\\rho}_k \\hat{A}] \\Big|\n\\geq \\epsilon\n\\Big) \n\\leq \n{ \\| \\hat{A} \\|^2 r_k(e_{\\rm min};N) \\over \\epsilon^2 r_k(u_\\kappa^*;N)},\n\\quad\n\\label{DeltaA}\n\\\\\n\\overline{\\langle \\psi_{k} | \\hat{A} | \\psi_{k} \\rangle}\n&=&\n{\\rm Tr}[\\hat{\\rho}_k \\hat{A}]\n\\label{}\n\\end{eqnarray}\nfor every mechanical variable $\\hat{A}$.\nHere, \n$\\| \\cdot \\|$ denotes the operator norm \\cite{norm}, and\nthe overline represents the random average.\nWith increasing $N$, \n$\\|\\hat{A} \\|^2$ grows at most as a low-degree polynomial of $N$,\nwhereas \n$r_k(e_{\\rm min};N)\/r_k(u_\\kappa^*;N)$ \ndecreases exponentially at finite temperature \n(i.e., for $u_\\kappa^* > e_{\\rm 
min}$).\nTherefore, {\\em $| \\psi_{k} \\rangle$ is a TPQ state\nfor the smooth microcanonical ensemble}.\n\n{\\em Genuine thermodynamic variables -- }\nOne might think it impossible to obtain genuine thermodynamic \nvariables \nlike the temperature and entropy by only manipulating pure quantum states.\nHowever, \nour new TPQ state makes it possible.\nIn fact, \nby substituting $u_{\\kappa}$ for $u^{\\ast}_{\\kappa}$ in Eq.~(\\ref{Nb=k\/L}),\nand using Eq.~(\\ref{eq:u'}) below, \nwe obtain \n\\begin{eqnarray}\n\\beta(u_{\\kappa};N) = 2 \\kappa \/ (l-u_{\\kappa}) + O(1\/N).\n\\label{Nb=k\/L+O(1\/N)}\n\\end{eqnarray}\nThis gives $\\beta(u_{\\kappa};N)$, with an error of $O(1\/N)$, \nas a function of $u_{\\kappa}$\n[because\n$\\kappa$ and $l$ are known parameters]. \nThat is, one obtains the temperature of the equilibrium state\nspecified by $(u_{\\kappa};N)$ just by calculating \n $u_{\\kappa}$ with Eq.~(\\ref{eq:uk}).\n\nWe can also obtain formulas with smaller errors.\nFor example, \nusing Eq.~(\\ref{Sd=e^xi}) and the expansion of $\\xi_\\kappa(u;N)$, we have\n\\begin{equation}\nu^{\\ast}_{\\kappa} = u^{\\bullet}_{\\kappa} + O(1\/N^2),\n\\\nu^{\\bullet}_{\\kappa} \\equiv \nu_{\\kappa} - \\xi'''_{\\kappa}\/2N{\\xi''_{\\kappa}}^2.\n\\label{eq:u'}\n\\end{equation}\nSubstituting \n$u^{\\bullet}_{\\kappa}$ for $u^{\\ast}_{\\kappa}$ in Eq.~(\\ref{Nb=k\/L}),\nwe get a better formula\n\\begin{equation}\n\\beta(u^{\\bullet}_{\\kappa}; N)\n=\n2 \\kappa \/ (l - u^{\\bullet}_{\\kappa})\n+ O(1\/N^2).\n\\label{eq:betaN_better}\\end{equation}\nOne can evaluate \n$\\xi''_{\\kappa}$ and $\\xi'''_{\\kappa}$ \neasily by calculating \n$\n\\langle \\psi_{k} | \n(\\hat{h} - u_{\\kappa} )^2\n| \\psi_{k} \\rangle\n= 1\/N|\\xi''_{\\kappa}| + O(1\/N^2)\n$\nand\n$\n\\langle \\psi_{k} | \n(\\hat{h} - u_{\\kappa} )^3\n| \\psi_{k} \\rangle\n=\n\\xi'''_{\\kappa}\/N^2 |\\xi''_{\\kappa}|^3 + O(1\/N^3)\n$.\nHence, \nusing formula (\\ref{eq:betaN_better}), \none obtains \n$\\beta(u; N)$ (for $u = u^{\\bullet}_0, u^{\\bullet}_1,\\cdots$) \nwith an error of $O(1\/N^2)$.\nIn a similar manner, we can obtain formulas \nwhose errors are of even higher order in $1\/N$.\n\nHowever, $\\beta(u; N)$ is the inverse temperature of a {\\em finite} system,\nwhereas we are most interested in its thermodynamic limit $\\beta(u; \\infty)$.\nIn general, the difference $|\\beta(u; N) - \\beta(u; \\infty)|$ does not decay \nas quickly as $O(1\/N^2)$.\nTo obtain an even better formula for $\\beta(u; \\infty)$,\nwe consider $C$ identical copies of the $N$-site system.\nWe denote quantities of this $CN$-site system with a tilde,\nsuch as \n$|\\tilde{\\psi}_{0} \\rangle \n\\equiv |\\psi_{0} \\rangle^{\\otimes C}$.\nThe state $| \\tilde{\\psi}_{\\tilde{k}} \\rangle$ \nis given by\n$\n| \\tilde{\\psi}_{\\tilde{k}} \\rangle \n\\propto (\\tilde{l}-\\tilde{h})^{C \\tilde{k}} | \\tilde{\\psi}_{0} \\rangle,\n$ \nwhere \n$\n\\tilde{h} \n\\equiv \n(\n\\hat{H} \\otimes \\hat{1}^{\\otimes (C-1)} \n+ \\hat{1} \\otimes \\hat{H} \\otimes \\hat{1}^{\\otimes (C-2)} \n+ \\cdots + \n\\hat{1}^{\\otimes (C-1)} \\otimes \\hat{H}\n)\/CN$. 
\nIn the limit $C \\rightarrow \\infty$, \n$\\tilde{u}_{\\tilde{\\kappa}}$ approaches \nthe canonical average of $u$ in a single copy \nwith inverse temperature $\\tilde{\\beta}(\\tilde{u}_{\\tilde{\\kappa}};\\infty)$.\nAt the point where \n$\\tilde{\\beta}(\\tilde{u}_{\\tilde{\\kappa}};\\infty)\n=\\beta(u^{*}_{\\kappa}; N)$\nis satisfied,\nwe can estimate this canonical average,\nwhich is denoted by $\\tilde{u}^{\\rm c}_\\kappa$, \nin the same manner as Eq.~(\\ref{eq:u'}).\nThen, we get \n$\\tilde{u}^{\\rm c}_\\kappa = \\tilde{u}^{\\bullet}_\\kappa +O(1\/N^2)$,\nwhere\n\\begin{equation}\n\\tilde{u}^{\\bullet}_\\kappa\n\\equiv\nu^{\\bullet}_{\\kappa}\n+\\frac{\\xi'''_{\\kappa}+ 4\\kappa\/(l-u^{\\bullet}_\\kappa)^3}\n{2N [\\xi''_{\\kappa} + 2\\kappa\/(l-u^{\\bullet}_{\\kappa})^2]^2}.\n\\label{u=u-z\/g+b''\/b'}\n\\end{equation}\nWe thus find\n\\begin{equation}\n\\tilde{\\beta}(\\tilde{u}^{\\bullet}_\\kappa;\\infty) \n= \n2\\kappa\/(l-u^{\\bullet}_\\kappa)\n+O(1\/N^2),\n\\label{eq:beta_infty}\\end{equation}\nwhich gives the inverse temperature \n$\\tilde{\\beta}(u;\\infty)$ (for $u=\\tilde{u}^{\\bullet}_0,\\tilde{u}^{\\bullet}_1,\\cdots$)\nof an infinite system\ncomposed of an infinite number of $N$-site systems.\nWe expect that $\\tilde{\\beta}(u;\\infty)$ is much closer to $\\beta(u;\\infty)$\nthan $\\beta(u;N)$,\nbecause\ninformation on $\\xi_{\\kappa}(u;N)$ over the whole spectral range of $u$\nis included in \n$\\tilde{\\beta}(u;\\infty)$.\n[By contrast, only the information at the peak of \n$\\xi_{\\kappa}(u;N)$ is included in $\\beta(u;N)$.]\nThis will be confirmed later by numerical computation.\n\nWe can also obtain the entropy density $s$ \nas a function of $u$ and $h_z$, \nby integrating $\\beta$ over $u$\nand $\\beta m_z$ over $h_z$.\nFor example, for an arbitrarily fixed value of $h_z$, we have\n\\begin{equation}\ns(u_{2p}) - s(u_{2q})\n=\n\\sum_{\\ell = p}^{q-1} \nv(u^{\\bullet}_{2\\ell},u^{\\bullet}_{2\\ell+1},\nu^{\\bullet}_{2\\ell+2})\n+O({1 \\over N^2}).\n\\label{s=Sum beta}\n\\end{equation}\nThis formula is obtained by generalizing Simpson's rule.\nHere, $u$ stands for \n$(u;N)$ or $(u;\\infty)$,\n$p$ and $q$ are integers, \nand\n$v(x,y,z)\n\\equiv\n(x-z) \\{ \\beta(x)+\\beta(z) \\}\/2\n-\n(x-z)^2\n[x \\{ \\beta(z)-\\beta(y) \\}+y \\{ \\beta(x)-\\beta(z) \\}\n+z \\{ \\beta(y)-\\beta(x) \\}]\n\/6(x-y)(y-z)\n$.\nWe have also developed another method of obtaining $s$, \nin which $g(u;N)$ \nis directly evaluated from the \ninner products\namong different realizations of a TPQ state \\cite{gakkai}.\n\nTo sum up,\none can obtain a series of TPQ states and the values of all variables of statistical-mechanical \ninterest, by preparing a random vector \nand simply applying $(l - \\hat{h})$ iteratively.\nThat is, \nwe have established a new formulation of statistical mechanics,\nwhose fundamental formulas are \nEqs.~(\\ref{psi=Sdn}) and (\\ref{Nb=k\/L+O(1\/N)}).\n\n{\\em Numerical results -- }\nOur formulation is easily implemented as a method of numerical \ncomputation.\nWe apply it to the one-dimensional Heisenberg model\nin order to confirm the validity of the formulation.\nWe take\n$\n\\hat{H} \n= \\frac{J}{4} \\sum_{i=1}^{N} \n[\\hat{\\bm{\\sigma}}(i) \\cdot \\hat{\\bm{\\sigma}}(i+1) \n- h_z\\hat{\\sigma}_z(i)],\n$ \nwhere $J=-1$ (ferromagnetic) or $+1$ (antiferromagnetic).\nFor $N \\to \\infty$, \nthe exact results at finite temperature (i.e., $u > e_{\\rm min}$)\nhave been \nderived for \nthe magnetization \n$\nm_z\n\\equiv\nN^{-1} \\sum_{i=1}^{N} \n\\langle \\sigma_z(i) \\rangle^{\\rm eq}_{u; N}\n$ \nat all values of $u$ and $h_z$ 
\\cite{Takahashi}, \nand for the correlation function \n$\n\\phi(j) \\equiv\nN^{-1} \\sum_{i=1}^{N} \\langle \\sigma_z(i) \\sigma_z(i+j) \\rangle^{\\rm eq}_{u; N}\n$ \nat all values of $u$ with $h_z=0$ \\cite{Sato}.\nThey are plotted in Figs.~\\ref{fig:hM} and \\ref{fig:Cor}\nby solid lines, where \ndifferent colors correspond to different values of $u$.\nWe calculate the corresponding results using our formulation,\nby performing numerical computation.\nThe results for $N=24$ are\nplotted by circles, where each circle is obtained \nfrom a {\\em single} realization of TPQ state.\nAccording to Eq.~(\\ref{DeltaA}), \nchoice of the initial random numbers $\\{ c_i \\}_i$ \nhas only an exponentially small effect on the results\nat finite temperature. \nWe have confirmed this fact \nby observing that \nthe standard deviation, \ncomputed from ten realizations of a TPQ state for each data point,\nis smaller than the radius of the circles of\nthese figures.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{hM_hz.eps}\n\\end{center}\n\\vspace{-6mm}\n\\caption{\nMagnetization plotted against a magnetic field for $J=-1$.\nSolid lines represent exact results for $N \\to \\infty$, \nfor various values of the energy density $u$ \\cite{Takahashi}. \nCircles denote results obtained with our formulation for $N=24$.\nResults for $N=4$-$20$ are also shown for $u=-0.3J$.\n}\n\\label{fig:hM}\n\\end{figure}\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{Cor_0209.eps}\n\\end{center}\n\\vspace{-6mm}\n\\caption{\nCorrelation function $\\phi(j)$ plotted against $j$ for $J=+1$ and $h_z=0$.\nSolid lines represent exact results for $N \\to \\infty$, \nfor various values of $u$ \\cite{Sato}. \nCircles denote results of our formulation for $N=24$.\n(Left Inset) Results for $N=16$-$24$ at $j=2$ for $u=-0.36J$.\n(Right Inset) $\\phi(j)$ at finite $h_z$, \nobtained from a single realization of the TPQ state at $T \\simeq 0.45J$.\n}\n\\label{fig:Cor}\n\\end{figure}\n\nResults for other values of $N$ \nare plotted in Fig.~\\ref{fig:hM} for $u=-0.3J$, \nand in the left insets of Fig.~\\ref{fig:Cor} for $u=-0.36J$ at $j=2$.\nIt is seen that the $N$-dependence becomes fairly weak for $N \\gtrsim 20$,\nand that the results for $N=24$ agree well with the exact results.\nAs illustrated by this example, \n$N$ should be increased in our method until the \nvariation of the results with increasing $N$ \nbecomes less than the required accuracy.\n\nWe have also computed \n$\\phi(j)$ at finite $h_z$ and $T$, \nfor which {\\em exact results are unknown}.\nThe results at $T \\simeq 0.45J$ are plotted \nin the right inset of Fig.~\\ref{fig:Cor}.\n\nFor genuine thermodynamic variables, \nthe exact result for $1\/\\beta(u; \\infty)$ \\cite{AF_ET} is \nplotted by solid lines in Fig.~\\ref{fig:ET}.\nCorresponding results for \n$1\/\\beta(u; N)$ and $1\/\\tilde{\\beta}(u; \\infty)$, \nobtained with our method with $N=24$, are \nplotted by triangles and squares respectively, \nwhere each point is obtained \nfrom a {\\em single} realization of the TPQ state.\n[We have confirmed again that\ndependence on the choice of $\\{ c_i \\}_i$\nis negligibly small.]\nNot only \n$\\beta(u; N)$ but also $\\tilde{\\beta}(u; \\infty)$ \ndepend on $N$.\nHowever, \nthe dependence of $\\tilde{\\beta}(u; \\infty)$ becomes fairly weak\nfor $N \\gtrsim 20$, as shown in the inset.\n$\\tilde{\\beta}(u; \\infty)$ for $N=24$ \nagrees well with\nthe exact result, \nwhereas $\\beta(u; N)$ differs significantly from them 
for this value of $N$.\nWe have thus confirmed that \n$\\tilde{\\beta}(u; \\infty)$ is much closer \nto $\\beta(u; \\infty)$ than $\\beta(u; N)$, for finite $N$.\nNote however that $\\beta(u;N)$ gives an almost correct result for $\\beta$\nof a {\\em finite} system, as seen from Eq.~(\\ref{eq:betaN_better}).\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.8\\linewidth]{T_0529.eps}\n\\end{center}\n\\vspace{-6mm}\n\\caption{\nTemperature $T$ plotted against $u$ for $J=+1$.\nSolid lines represent exact results for $N \\to \\infty$, \nfor various values of $h_z$ \\cite{AF_ET}. \nTriangles and squares denote \n$1\/\\beta(u; N)$ (triangles) and \n$1\/\\tilde{\\beta}(u; \\infty)$ (squares)\nfor $N=24$, obtained with our formulation. \n(Inset) \n$1\/\\tilde{\\beta}(u; \\infty)$ for $N=8$-$24$.\n}\n\\label{fig:ET}\n\\end{figure}\n\nWe have obtained a series of TPQ states \nat the discrete points $u_0, u_1, u_2, \\cdots, u_{k_{\\rm term}}$.\nThe discrete points are dense enough because their intervals \nare $O(1\/N)$, vanishing as $N \\rightarrow \\infty$.\nThe intervals also depend on the parameter $l$.\nWhen a smaller $l$ is taken, a smaller $k$ suffices to reach \nthe same $u$ and temperature $T$, \nas seen from Eq.~(\\ref{Nb=k\/L}).\nHence, to obtain results at low $T$, \n$l\\simeq e_{\\rm max}$ is appropriate to reduce the amount of \ncomputation.\nAt high $T$, however, $u_{k}$ moves quickly as $k$ increases, \nfor such a small $l$. \nHence, to obtain results at many values of $u$ at high $T$, \n$l$ should be taken larger. \nWhen computing the data for Figs.~\\ref{fig:hM} and \\ref{fig:Cor},\nwe have taken $l \\simeq e_{\\rm max}$.\nSince the values of $u$ which are specified in these figures \nare not necessarily found among the $u_k$'s,\nwe have slightly tuned $l$ in such a way that \na $u_k$ can be found within $0.001J$ of the specified values.\nIn these figures, \n$m_z$ and $\\phi(j)$ at such $u_k$'s are plotted.\nWhen computing the data for Fig.~\\ref{fig:ET}, \nwe have performed computations with two values of $l$: \n$l=e_{\\rm max}$ and $5J$.\nBoth results agree well with each other.\nFor better visualization, \nwe have plotted the results with $l=5J$ (orange and purple) \nand those with $l=e_{\\rm max}$ (red and blue) \nin the high- and low-$T$ regions, respectively.\n\n{\\em Advantages -- }\nWe now discuss advantages of our formulation \nwhen used as a method of numerical calculation.\nAt finite $T$, \nan exponentially large number of states are included in $\\mathcal{E}_{E,N}$.\nThis makes the computation of eigenstates in $\\mathcal{E}_{E,N}$ quite hard.\nIn contrast, \nour method takes full advantage of such a huge number of states,\nas seen, e.g., in the derivation of Eq.~(\\ref{Sd=e^xi}).\nAs a result, \nusing just a single realization of a TPQ state, \none can calculate all quantities of statistical-mechanical \ninterest at finite $T$, on the solid theoretical basis that \nis developed in this Letter.\nMoreover, our method is applicable to \nsystems in any spatial dimension, and to \nfrustrated or fermion systems as well.\nFurthermore, \nour method requires far fewer computational resources than \nnumerical diagonalization.\nFor example, \nthe number of non-vanishing elements of \n$\\hat{H}$ of the Heisenberg model is\n$O(N 2^N)$. 
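\nHence a single application of $(l-\\hat{h})$ to a vector costs only $O(N 2^N)$ operations when sparse matrix-vector multiplication is used.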
\nSince $k=O(N)$, \nthe computational time is $O(N^2 2^N)$ in our method, \nwhich is exponentially shorter than that of diagonalization.\nIn fact, \nit took only two hours to compute all data in \nFig.~\\ref{fig:ET} \non a PC.\nComputations can be made even faster by parallelizing \nthe algorithm, which is quite easy and efficient because our method \nconsists only of matrix multiplications.\n\nFurthermore, \nour method is effective over a wide range of \n$T$ \nbecause the rhs of Eq.~(\\ref{DeltaA}) is exponentially small \nas long as $s$ (and hence $T$) is finite of $O(1)$.\nIn fact, \nFigs.~\\ref{fig:hM}-\\ref{fig:ET} show that \nour results agree well with the rigorous results in a wide range of $T$, \nfrom $T \\ll J$ to $T \\gg J$.\nIn practical computations with finite $N$,\n$T$ ($=1\/\\tilde{\\beta}(u_\\kappa^{\\bullet};\\infty)$) \ncan be lowered as long as $r_k(e_{\\rm min};N)\/r_k(u_\\kappa^*;N) \\ll 1$.\n\nWe note that the quantum Monte Carlo method may be much faster.\nHowever, it suffers from the sign problem \nin frustrated systems and fermion systems.\nThe density-matrix renormalization group method\nhas been extended to finite temperature,\nand the state obtained in Ref.~\\cite{White}\nmight be close to TPQ states. \nHowever, its effectiveness in two- or more-dimensional systems \nis not clear yet.\nThe states obtained with the microcanonical Lanczos method \\cite{Long},\nwhich aimed at obtaining eigenstates rather than TPQ states,\nmight also be close to TPQ states.\nHowever, that method required more computational time in Ref.~\\cite{Long}\nthan ours does, and \ncomputing $T$ or $s$ with it\nseems more difficult than with our formulas.\nWe therefore expect that our method will \nmake it possible \nto analyze systems which could not be analyzed with other \nmethods.\n\n{\\em Concluding remarks -- }\nWe conclude this Letter by making several remarks.\nFirst,\none can evaluate the magnetic susceptibility \n$(\\partial m_z \/ \\partial h_z)_u$ from\nFig.~\\ref{fig:hM} or \\ref{fig:Cor}.\nOne can also obtain \n$(\\partial m_z \/ \\partial h_z)_T$ with the help of \nFig.~\\ref{fig:ET}. \n\nSecond, \n$| \\psi_{k} \\rangle$ remains a TPQ state \nunder time evolution, \nsince Eq.~(\\ref{psi=Sdn}) shows that \n$e^{\\hat{H}t\/{i\\hbar}} | \\psi_{k} \\rangle\n\\propto\n\\sum_{n} e^{- ie_n t\/\\hbar} c_{n} (l-e_{n})^k |n \\rangle\n$, \nwhich is just another realization of $| \\psi_{k} \\rangle$.\n\nThird, \nour formulation is advantageous for analyses of phase transitions.\nAs an example, consider the case where the energy density \n$u(T;N)$ for $N \\to \\infty$ is discontinuous at \nthe transition temperature $T_{\\rm tr}$ of a \nfirst-order transition.\nThen, the specific heat $c = {\\partial u \/ \\partial T}$\ndiverges at $T=T_{\\rm tr}$.\nIf one used the canonical formalism, \nwhere $T$ is an independent variable, \nthe calculation of \n$u(T;\\infty)$ would be hard around $T = T_{\\rm tr}$.\nIn our formulation, by contrast, \n$u$ is taken as an independent variable, \nand $c$ is obtained as\n$c \n=-\\beta^2\/({\\partial \\beta \/ \\partial u})_{\\scriptscriptstyle N}$\nfrom $\\beta(u;\\infty)$. \nThe function $\\beta(u;\\infty)$\nis {\\em continuous even at the transition point},\nwhere it takes a constant value $1\/T_{\\rm tr}$\nin a finite interval of $u$ \ncorresponding to the phase-coexistence region \\cite{AS:phaserule}.\nHence, $\\beta(u;\\infty)$ can be calculated more easily than $u(T;\\infty)$. 
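\n[For instance, from two neighboring data points a simple finite-difference estimate reads $c \\simeq -[\\tilde{\\beta}(\\tilde{u}^{\\bullet}_{\\kappa};\\infty)]^2 \\, (\\tilde{u}^{\\bullet}_{\\kappa'}-\\tilde{u}^{\\bullet}_{\\kappa})\/\\{\\tilde{\\beta}(\\tilde{u}^{\\bullet}_{\\kappa'};\\infty)-\\tilde{\\beta}(\\tilde{u}^{\\bullet}_{\\kappa};\\infty)\\}$, which requires no differentiation with respect to $T$.]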
\nIn fact, one can identify a first-order transition \nby simply observing that \nthe rhs of Eq.~(\\ref{eq:beta_infty}) takes a constant value,\napart from small deviation of $O(1\/N^2)$,\nfor multiple values of $\\tilde{u}^{\\bullet}_\\kappa$ and $\\kappa$.\nRegarding a continuous transition,\nit can be identified from a singularity in $c$, or\nan order parameter $m$, and so on.\nOne can calculate $m$ by adding \na symmetry-breaking field $f$ to the Hamiltonian,\nand thereby computing $m$ at $f= \\pm |f|$ for small $|f|$.\nOr alternatively, without introducing $f$, one can perform the \n`pure-state decomposition' (i.e., decomposition into \nmacroscopically definite states) by applying\nthe variance-covariance matrix method of Ref.~\\cite{VCM} to a TPQ state. \n\nFourth, \nEq.~(\\ref{psi=Sdn}) can be generalized as\n$\n|\\psi \\rangle \\propto Q(\\hat{h}) |\\psi_{0} \\rangle,\n$ \nwhich defines other new TPQ states.\nHere, $Q(u)$ is any differentiable real \nfunction such that $Q(u)^2 g(u;N)$ has a sharp peak,\nwhose width vanishes as $N \\to \\infty$,\nand $Q(u)^2 g(u;N)$ outside the peak decays quickly. \nUsing this $|\\psi \\rangle$,\none can calculate various quantities as we have done using $|\\psi_k \\rangle$.\nFor instance, the formula corresponding to \nEq.~(\\ref{Nb=k\/L}) is given by\n$\n\\beta(u^{\\ast};N) \n+ 2Q'(u^{\\ast}) \/ N Q(u^{\\ast}) = 0\n$.\n\nFinally,\nalthough a TPQ state (such as $| \\psi_k \\rangle$) and \nthe mixed state (such as $\\hat{\\rho}_k$) of the corresponding ensemble\nare identical with respect to mechanical variables, \n{\\em they are completely different with respect to entanglement}.\nAt $T \\gg J$, for example, $\\hat{\\rho}_k$ has only small \nentanglement (because it is close to the completely mixed state \n$(1\/D) \\hat{1}$, \nwhich has no entanglement), whereas we can show \nthat $| \\psi_k \\rangle$ has exponentially \nlarge entanglement (as previously shown for $T \\to \\infty$ in Ref.~\\cite{SugitaShimizu}). \nIt is thus seen that an equilibrium state can be represented \neither by a TPQ state with huge\nentanglement or by a mixed state with much less entanglement.\nTheir difference can be detected only by high-order polynomials of\nlocal operators, which are \n{\\em not} of statistical-mechanical interest \\cite{SugitaJ, SugitaShimizu}.\n\n\t\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\nWe thank\nJ. Sato, F. G\\\"{o}hmann, C. Trippe and \nK. Sakai for providing us with numerical\ndata of exact solutions.\nWe also thank H. Tasaki, A. Sugita, Y. Kato, Y. Oono, H. Katsura, \nK. Hukushima, S. Sasa and T. Yuge\nfor helpful discussions.\nThis work was supported by KAKENHI Nos.~22540407 and 23104707.\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFunctions with values in a (nonlinear) subset of a vector space appear in several applications of imaging and in inverse problems, e.g.\n\n\n\\begin{itemize}\n \\item \\emph{Interferometric Synthetic Aperture Radar (InSAR)} \n is a technique used in remote sensing and geodesy to generate for example digital elevation \n maps of the earth's surface. \n InSAR images represent phase differences of waves between two or more SAR images, cf. \\cite{LiuMas16,RocPraFer97}. 
\n Therefore InSAR data are functions $f:\\Omega \\to \\sphere \\subseteq \\mathds{R}^2$.\n The pointwise function values are on the $\\sphere$, which is considered embedded into $\\mathds{R}^2$.\n \\item A \\emph{color image} can be represented as a function in \\emph{HSV}-space (hue, saturation, value) (see e.g. \\cite{PlaVen00}).\n Color images are then described as functions $f:\\Omega \\to K \\subseteq \\mathds{R}^3$. Here $\\Omega$ is a plane in $\\mathds{R}^2$, the image domain,\n and $K$ (representing the HSV-space) is a cone in 3-dimensional space $\\mathds{R}^3$. \n \\item Estimation of the \\emph{foliage angle distribution} has been considered for instance in\n \\cite{HelAndRobFin15,PutBriManWiePfeZliPfe16}.\n Thereby the imaging function is from $\\Omega \\subset \\mathds{R}^2$, a part of the Earth's surface, \n into $\\mathbb{S}^2 \\subseteq \\mathds{R}^3$, representing foliage angle orientation.\n \\item Estimation of functions with values in $SO(3) \\subseteq \\mathds{R}^{3 \\times 3}$. Such problems appear in \\emph{Cryo-Electron Microscopy} \n (see for instance \\cite{HadSin11,SinShk12,WanSinWen13}).\n\\end{itemize}\nWe emphasize that we are analyzing \\emph{vector}, \\emph{matrix}, \\emph{tensor}-valued functions, where pointwise function \nevaluations belong to some given (sub)set, but are always \\emph{elements} of the underlying vector space. This should not be confused with set-valued functions, where every function \nevaluation can be a set.\n\nInverse problems and imaging tasks, such as the ones mentioned above, might be unstable, or even worse, the solution could be \nambiguous. Therefore, numerical algorithms for imaging need to be \\emph{regularizing} to obtain approximations of the \ndesired solution in a stable manner. \nConsider the operator equation \n\\begin{equation}\\label{eq:basic_problem}\n\\op[w] = v^0,\n\\end{equation} \nwhere we assume that only (noisy) measurement data $v^\\delta$ of $v^0$ become available. \nIn this paper the method of choice is \\emph{variational regularization} which consists \nin calculating a minimizer of the variational regularization functional\n\\begin{equation}\n \\label{eq:energy}\n \\mathcal{F}(w) \\vcentcolon= \\mathcal{D}(\\op[w],v^\\delta) + \\alpha \\mathcal{R}(w).\n\\end{equation}\nHere\n\\begin{description}\n \\item{$w$} is an element of the \\emph{set} of admissible functions.\n \\item{$\\op$} is an operator modeling the image formation process (except the noise).\n \\item{$\\mathcal{D}$} is called the \\textit{data} or \\textit{fidelity term}, which is used to compare a pair \n of data in the image domain, that is to quantify the difference of the two data sets. \n \\item{$\\mathcal{R}$} is called \\textit{regularization functional}, which is used to impose certain \n properties onto a minimizer of the regularization functional $\\mathcal{F}$.\n \\item{$\\alpha > 0$} is called \\textit{regularization parameter} and provides a trade off between stability \n and approximation properties of the minimizer of the regularization functional $\\mathcal{F}$. \n \\item{$v^\\delta$} denotes measurement data, which we consider noisy. \n \\item{$v^0$} denotes the exact data, which we assume to be not necessarily available.\n\\end{description}\n\nThe main objective of this paper is to introduce a general class of regularization functionals for functions with values in a set of vectors. 
\nIn order to motivate our proposed class of regularization functionals we review a class of regularization functionals \nappropriate for analyzing \\emph{intensity data}.\n\n\n\\subsection*{Variational regularization for reconstruction of intensity data}\nOpposite to what we consider in the present paper, most commonly, imaging data $v$ and admissible functions $w$, respectively, are \nconsidered to be representable as \\emph{intensity functions}. That is, they are functions from some subset $\\Omega$ of an Euclidean \nspace with \\emph{real values}.\n\nIn such a situation the most widely used regularization functionals use regularization terms \nconsisting of powers of Sobolev (see \\cite{BouSau93,ChaLio97,CimMel12}) or total variation semi-norms \\cite{RudOshFat92}. \nIt is common to speak about \\emph{Tikhonov regularization} (see for instance \\cite{TikArs77}) when the data term and the \nregularization functional are squared Hilbert space norms, respectively. \nFor the \\emph{Rudin, Osher, Fatemi (ROF)} regularization \\cite{RudOshFat92}, also known as total variation \nregularization, the data term is the squared $L^2$-norm and $\\mathcal{R}(w) = |w|_{TV}$ is the total variation semi-norm. \nNonlocal regularization operators based on the generalized nonlocal gradient is used in \\cite{GilOsh08}. \\\\ \nOther widely used regularization functionals are \\emph{sparsity promoting} \\cite{DauDefDem04,KolLasNiiSil12},\n\\emph{Besov space norms} \\cite{LorTre08,LasSak09} and anisotropic regularization norms \\cite{OshEse04,SchWei00}.\nAside from various regularization terms there also have been proposed different fidelity terms other than quadratic norm\nfidelities, like the $p$-th powers of $\\ell^p$ and $L^p$-norms of the differences of \n$F(w)$ and $v$ , \\cite{SchGraGroHalLen09,SchuKalHofKaz12}, Maximum Entropy \\cite{Egg93,EngLan93} and Kullback-Leibler divergence \n\\cite{ResAnd07} (see \\cite{Poe08} for some reference work).\n\nOur work utilizes results from the seminal paper of \\citeauthor{BouBreMir01} \\cite{BouBreMir01}, which \nprovides an equivalent \\emph{derivative-free} characterization of Sobolev spaces and the space $\\BV$, \nthe space of functions of bounded total variation, which consequently, in this context, was analyzed in \\citeauthor{Dav02} \nand \\citeauthor{Pon04b} \\cite{Dav02,Pon04b}, respectively. It is shown in \n\\cite[Theorems 2 \\& 3']{BouBreMir01} and \\cite[Theorem 1]{Dav02} that when $(\\rho_\\varepsilon)_{\\varepsilon > 0}$ \nis a suitable sequence of non-negative, radially symmetric, radially decreasing mollifiers, then\n\\begin{equation}\n\\label{eq:double_integral}\n\\begin{aligned}\n\\lim_{\\varepsilon \\searrow 0} \\tilde{\\mathcal{R}}_\\varepsilon(w) & \n\\vcentcolon= \\lim_{\\varepsilon \\searrow 0} \\int\\limits_{\\Omega\\times \\Omega} \\frac{\\|w(x)- w(y)\\|_{\\mathds{R}}^p}{\\normN[x-y]^{p}} \\rho_\\varepsilon(x-y) \\,\\mathrm{d}(x,y)\\\\\n&= \n\\begin{cases}\nC_{p,N}|w|^p_{W^{1,p}} & \\mbox{if } w \\in W^{1,p}(\\Omega, \\mathds{R}), \\ 1 < p < \\infty, \\\\\nC_{1,N}|w|_{TV} & \\mbox{if } w \\in \\BV[\\mathds{R}], \\ p = 1, \\\\\n\\infty & \\mbox{otherwise},\n\\end{cases}\n\\end{aligned}\n\\end{equation}\nHence $\\tilde{\\mathcal{R}}_\\varepsilon$ approximates powers of Sobolev semi-norms and the total variation semi-norm, respectively. 
Variational \nimaging, consisting in minimization of $\\mathcal{F}$ from \\autoref{eq:energy} with $\\mathcal{R}$ replaced by \n$\\tilde{\\mathcal{R}}_\\varepsilon$, has been considered in \\cite{AubKor09,BouElbPonSch11}. \n\n\n\\subsection*{Regularization of functions with values in a set of vectors}\nIn this paper we generalize the derivative-free characterization of Sobolev spaces and functions of bounded variation to \nfunctions, $u:\\Omega \\to K$, where $K$ is some set of vectors, and use these functionals for variational \nregularization. The applications we have in mind contain that $K$ is a closed subset of $\\mathds{R}^M$ (for instance HSV-data) \nwith non-zero measure, or that $K$ is a sub-manifold (such as for instance InSAR-data).\n\nThe reconstruction of manifold--valued data with variational regularization methods has already been subject \nto intensive research (see for instance \\cite{KimSoc02,CreStr11,CreStr13,CreKoeLelStr13,BacBerSteWei16,WeiDemSto14}).\nThe variational approaches mentioned above use regularization and fidelity functionals based on \nSobolev and TV semi-norms: a total variation regularizer \nfor cyclic data on $\\sphere$ was introduced in \\cite{CreStr13,CreStr11}, see also \\cite{BerLauSteWei14,BerWei16,BerWei15}.\nIn \\cite{BacBerSteWei16,BerFitPerSte17} combined first and second order \ndifferences and derivatives were used for regularization to restore manifold--valued data. The later mentioned papers, however, \nare formulated in a finite dimensional setting, opposed to ours, which is considered in an infinite dimensional setting. \nAlgorithms for total variation minimization problems, including half-quadratic minimization and non-local patch based methods, \nare given for example in \\cite{BacBerSteWei16,BerChaHiePerSte16,BerPerSte16} as well as in \\cite{GroSpr14,LauNikPerSte17}.\nOn the theoretical side the total variation of functions with values in a manifold was investigated by \\citeauthor{GiaMuc07} \nusing the theory of Cartesian currents in \\cite{GiaMuc07,GiaMuc06}, and earlier \\cite{GiaModSou93} if the manifold is a \n$\\sphere$. \n\n\n\\subsection*{The contents and the particular achievements of the paper are as follows}\nThe contribution of this paper is to introduce and analytically analyze double integral \nregularization functionals for reconstructing functions with values in a set of vectors, generalizing functionals \nof the form \\autoref{eq:double_integral}. \nMoreover, we develop and analyze fidelity terms for comparing manifold--valued data. \nSumming these two terms provides a new class of regularization functionals of the form \n\\autoref{eq:energy} for reconstructing manifold--valued data. \n\nWhen analyzing our functionals we encounter several differences to existing \nregularization theory (compare \\autoref{sec: Setting}):\n\\begin{enumerate}\n\\item \n The \\emph{admissible functions}, where we minimize the regularization functional on, \n do form only a \\emph{set} but \\emph{not} a \\emph{linear} space.\n As a consequence, well--posedness \n of the variational method (that is, existence of a minimizer of the energy functional) cannot directly be proven \n by applying standard \n direct methods in the Calculus of Variations \\cite{Dac82,Dac89}. \n\\item The regularization functionals are defined via metrics and not norms, see \\autoref{sec: Existence}. \n\\item In general, the fidelity terms are \\emph{non-convex}.\n Stability and convergence results are proven in \\autoref{sec: Stability_and_Convergence}. 
\n\\end{enumerate} \n\nThe model is validated in \\autoref{sec:Numerical_results} where we present numerical results for denoising \nand inpainting of data of InSAR type.\n\n\n\\section{Setting} \\label{sec: Setting}\nIn the following we introduce the basic notation and the set of admissible functions which we are regularizing \non.\n\n\\begin{assumption}\n \\label{ass:1}\n All along this paper we assume that\n \\begin{itemize}\n \\item $p_1, p_2 \\in [1, +\\infty)$, $s \\in (0,1]$,\n \\item $\\Omega_1, \\Omega_2 \\subseteq \\mathds{R}^N$ are nonempty, bounded, and connected open sets with Lipschitz boundary, respectively,\n \\item $k \\in [0,N]$,\n \\item $K_1 \\subseteq \\mathds{R}^{M_1}, K_2 \\subseteq \\mathds{R}^{M_2}$ are nonempty and closed subsets of $\\mathds{R}^{M_1}$ and $\\mathds{R}^{M_2}$, respectively. \n \\end{itemize}\n Moreover, \n \\begin{itemize}\n \\item $\\normN$ and $\\|\\cdot\\|_{\\mathds{R}^{M_i}}, \\ i=1,2,$ are the Euclidean norms on $\\mathds{R}^N$ and $\\mathds{R}^{M_i}$, respectively.\n \\item $\\dRMi: \\mathds{R}^{M_i} \\times \\mathds{R}^{M_i} \\rarr [0, +\\infty)$ denotes the Euclidean distance on $\\mathds{R}^{M_i}$ for $i=1,2$ and \n \\item $\\d_i \\vcentcolon= \\mathrm{d}_{K_i}: K_i \\times K_i \\rarr [0, +\\infty)$ \n denote arbitrary metrics on $K_i$, which fulfill for $i=1$ and $i=2$\n \\begin{itemize}\n \t\\item $\\dRMi|_{K_i \\times K_i} \\leq d_i$,\n \t\\item $\\d_i$ is continuous with respect to $\\dRMi|_{K_i \\times K_i}$, meaning that for a sequence $\\seq{a}$ in $K_i \\subseteq \\mathds{R}^{M_i}$ converging to some $a \\in K_i$ we also have $\\d_i(a_n,a) \\rarr 0$. \n \\end{itemize} \n In particular, this assumption is valid if the metric $d_i$ is equivalent to $\\dRMi|_{K_i \\times K_i}$.\n When the set $K_i, \\ i=1,2$, is a suitable complete submanifold of $\\mathds{R}^{M_i}$,\n it seems natural to choose $d_i$ as the geodesic distance on the respective submanifolds.\n \\item $(\\rho_{\\varepsilon})_{\\varepsilon > 0}$ is a Dirac family of non-negative, radially symmetric mollifiers, i.e. for every $\\varepsilon > 0$ we have\n \\begin{enumerate}\n \\item $\\rho_\\varepsilon \\in \\mathcal{C}^{\\infty}_{c}(\\mathds{R}^N, \\mathds{R})$ is radially symmetric,\n \\item $\\rho_\\varepsilon \\geq 0$,\n \\item $\\int \\limits_{\\mathds{R}^N} \\rho_\\varepsilon (x) \\,\\mathrm{d}x = 1$, and \n \\item for all $\\delta > 0$, $\\lim_{\\varepsilon \\searrow 0}\\limits \\int_{\\set{\\normN[y] > \\delta}} \\rho_\\varepsilon(y) \\,\\mathrm{d}y = 0$.\n \\end{enumerate} \n We demand further that, for every $\\varepsilon > 0$, \n \\begin{enumerate}[resume]\n \\item there exists a $\\tau > 0$ and $\\eta_{\\tau} > 0$ such that\n $\\{ z \\in \\mathds{R}^N : \\rho_{\\varepsilon}(z) \\geq \\tau \\}= \\{z \\in \\mathds{R}^N : \\normN[z] \\leq \\eta_{\\tau} \\}$.\n \\end{enumerate}\n This condition holds, e.g., if $\\rho_{\\varepsilon}$ is a radially decreasing continuous function with $\\rho_{\\varepsilon}(0) > 0$.\n \\item When we write $p$, $\\Omega$, $K$, $M$, then we mean $p_i$, $\\Omega_i$, $K_i$, $M_i$, for either \n $i=1,2$. 
In the following we will often omit the subscript indices whenever possible.\n \\end{itemize}\n\\end{assumption}\n\n\\begin{example}\\label{ex:mol}\nLet $\\hat{\\rho} \\in C_c^\\infty(\\mathds{R},\\mathds{R}_+)$ be symmetric at $0$, monotonically decreasing on $[0, \\infty)$ and satisfy\n\\begin{equation*}\n \\abs{\\mathbb{S}^{N-1}}\\int_0^\\infty \\hat{t}^{N-1} \\hat{\\rho}\\left(\\hat{t}\\right)d \\hat{t} = 1\\;.\n\\end{equation*}\nDefining mappings $\\rho_\\varepsilon: \\mathds{R}^N \\to \\mathds{R}$ by\n\\begin{equation*}\n \\rho_\\varepsilon(x)\\vcentcolon= \\frac{1}{\\varepsilon^N} \\hat{\\rho}\\left(\\frac{\\normN[x]}{\\varepsilon}\\right)\n\\end{equation*}\nconstitutes then a family $(\\rho_\\varepsilon)_{\\varepsilon > 0}$ which fulfills the above properties \\it{(i) -- (v)}.\nNote here that\n\\begin{itemize}\n \\item by substitution $x = t \\theta$ with $t > 0, \\theta \\in \\mathbb{S}^{N-1}$ and $\\hat{t}=\\frac{t}{\\varepsilon}$, \n\\begin{equation}\n \\label{eq:molII}\n \\begin{aligned}\n \\int_{\\mathds{R}^N} \\rho_\\varepsilon(x)\\, \\,\\mathrm{d}x &= \\frac{1}{\\varepsilon^N} \\int_{\\mathds{R}^N} \\hat{\\rho}\\left(\\frac{\\normN[x]}{\\varepsilon}\\right) \\,\\mathrm{d}x \\\\\n &= \\frac{1}{\\varepsilon^N} \\int_0^\\infty t^{N-1} \\hat{\\rho}\\left(\\frac{t}{\\varepsilon}\\right)\\d t \\int_{\\mathbb{S}^{N-1}} \\d\\theta \\\\ \n &= \\abs{\\mathbb{S}^{N-1}}\\int_0^\\infty \\hat{t}^{N-1} \\hat{\\rho}\\left(\\hat{t}\\right)\\d \\hat{t} = 1\\;.\n \\end{aligned}\n\\end{equation}\nHere, $d\\theta$ refers to the canonical spherical measure.\n \\item Again by the same substitutions, taking into account that $\\hat{\\rho}$ has compact support, it follows for \n $\\varepsilon > 0$ sufficiently small that\n\\begin{equation}\n \\label{eq:molIIa}\n \\begin{aligned}\n \\int_{\\set{y:\\normN[y]>\\delta}} \\rho_\\varepsilon(x)\\, \\,\\mathrm{d}x \n &= \\frac{1}{\\varepsilon^N} \\int_{\\set{y:\\normN[y]> \\delta}} \\hat{\\rho}\\left(\\frac{\\normN[x]}{\\varepsilon}\\right) \\,\\mathrm{d}x \\\\\n &= \\frac{1}{\\varepsilon^N} \\int_\\delta^\\infty t^{N-1} \\hat{\\rho}\\left(\\frac{t}{\\varepsilon}\\right)\\d t \\int_{\\mathbb{S}^{N-1}} \\d\\theta \\\\ \n &= \\abs{\\mathbb{S}^{N-1}}\\int_{\\delta\/\\varepsilon}^\\infty \\hat{t}^{N-1} \\hat{\\rho}\\left(\\hat{t}\\right)\\d \\hat{t} =0 \\;.\n \\end{aligned}\n\\end{equation}\n\\end{itemize}\n\\end{example}\nIn the following we write down the basic spaces and sets, which will be used in the course of the paper.\n\\begin{definition}\n\\begin{itemize}\n \\item The \\emph{Lebesgue--Bochner space} of $\\mathds{R}^M$--valued functions on $\\Omega$ consists of the set \n \\begin{equation*}\n \\begin{aligned}\n \\Lp \\vcentcolon= \\{ \\phi : \\Omega \\to \\mathds{R}^M : {} & \\phi \\text{ is Lebesgue-Borel measurable and } \\\\ \n &\\normM[\\phi(\\cdot)]^p: \\Omega \\to \\mathds{R} \\text{ is Lebesgue--integrable on } \\Omega \\},\n \\end{aligned}\n \\end{equation*}\n which is associated with the norm $\\|\\cdot\\|_{\\Lp}$, given by\n \\begin{gather*}\n \\norm{\\phi}_{\\Lp} \\vcentcolon= \\Big( \\int_{\\Omega}\\limits \\normM[\\phi(x)]^p \\,\\mathrm{d}x \\Big)^{1\/p} \\; .\n \\end{gather*}\n \\item Let $0 < s < 1$. Then the \\emph{fractional Sobolev space} of order $s$ can be defined (cf. 
\\cite{Ada75}) as the set\n\\begin{gather*}\n \\Wsp \\vcentcolon= \\left\\{ w \\in \\Lp : \\frac{\\normM[w(x) - w(y)]}{\\normN[x-y]^{\\frac{N}{p}+s}} \\in \n L^p (\\Omega \\times \\Omega, \\mathds{R}) \\right\\} \\\\\n = \\{w \\in \\Lp : \\abs{w}_{\\Wsp} < \\infty \\},\n\\end{gather*}\nequipped with the norm\n\\begin{equation}\\label{eq:sobolev_norm}\n \\|\\cdot\\|_{\\Wsp} \\vcentcolon= \\big(\\|\\cdot\\|_{\\Lp}^{p} + \\abs{\\cdot}_{\\Wsp}^p \\big)^{1\/p},\n\\end{equation}\nwhere $\\abs{\\cdot}_{\\Wsp}$ is the semi-norm for $\\Wsp$, given by\n\\begin{equation}\\label{eq:sobolev_semi_norm}\n \\abs{w}_{\\Wsp} \\vcentcolon= \\Big(\\int\\limits_{\\Omega\\times \\Omega} \\frac{\\normM[w(x) - w(y)]^p}{\\normN[x-y]^{N+ps}} \\,\\mathrm{d}(x,y) \\Big)^{1\/p},\n \\quad w \\in \\Wsp\\;.\n\\end{equation}\n\\item \nFor $s = 1$ the Sobolev space $W^{1,p}(\\Omega, \\mathds{R}^M)$ consists of all weakly differentiable\nfunctions in $L^1(\\Omega,\\mathds{R}^M)$ for which \n\\begin{equation*}\n\\norm{w}_{W^{1,p}(\\Omega, \\mathds{R}^M)} \n \\vcentcolon= \\Big( \\norm{w}_{\\Lp}^p + \\int_{\\Omega}\\limits \\|\\nabla w(x)\\|^p_{\\mathbb{R}^{M\\times N}} \\,\\mathrm{d}x \\Big)^{1\/p} < \\infty\\;,\n\\end{equation*}\nwhere $\\nabla w$ is the weak Jacobian of $w$.\n\\item Moreover, we recall one possible definition of the space $\\BV$ from \\cite{AmbFusPal00}, which consists of all \n Lebesgue--Borel measurable functions $w:\\Omega \\to \\mathds{R}^M$ for which \n\\begin{gather*}\n\\norm{w}_{\\BV} \\vcentcolon= \\norm{w}_{L^1(\\Omega, \\mathds{R}^M)} + \\abs{w}_{\\BV} < \\infty,\n\\end{gather*}\nwhere \n\\begin{equation*}\n\\begin{aligned}\n~ & \\abs{w}_{\\BV} \\\\\n\\vcentcolon= {}& \\sup \\left\\{ \\int \\limits_\\Omega w(x) \\cdot \\mathrm{Div} \\varphi(x) \\,\\mathrm{d}x : \\ \n\\varphi \\in C_c^1(\\Omega, \\mathds{R}^{M \\times N}) ~ \\mathrm{ such~that } \\norm{\\varphi}_\\infty \\vcentcolon= \\mathop{\\mathrm{ess~sup}}_{x \\in \\Omega} \\norm{\\varphi(x)}_F \\leq 1 \\right\\},\n\\end{aligned}\n\\end{equation*}\nwhere $\\norm{\\varphi(x)}_F$ is the Frobenius-norm of the matrix $\\varphi(x)$ and \n$\\mathrm{Div}\\varphi = (\\mathrm{div} \\varphi_1, \\dots, \\mathrm{div} \\varphi_M)^\\mathrm{T}$ denotes the row--wise formed divergence of $\\varphi$.\n\\end{itemize}\n\\end{definition}\n\n\\begin{lemma}\n\\label{le:inclusion}\n Let $0 < s \\leq 1$ and $p \\in [1,\\infty)$, then $\\Wsp \\hookrightarrow \\Lp$ and the embedding is compact. Moreover, the embedding\n $\\BV \\hookrightarrow \\Lp$ is compact for all \n \\begin{gather*}\n 1 \\leq p < 1^* \\vcentcolon=\n \\begin{cases}\n +\\infty &\\mbox{if } N = 1 \\\\\n \\frac{N}{N-1} &\\mbox{otherwise }\n \\end{cases} \\,.\n \\end{gather*} \n\\end{lemma}\n\\begin{proof}\n The first result can be found in \\cite{DemDem07} for $0 < s < 1$ and in \\cite{Eva10} for $s = 1$. The second assertion \n is stated in \\cite{AmbFusPal00}.\n\\end{proof}\n\n\n\\begin{remark} \n\\label{re:notes_basic}\nLet \\autoref{ass:1} hold.\n We recall some basic properties of weak convergence in $\\Wsp$, $W^{1,p}(\\Omega, \\mathds{R}^M)$ and weak* convergence in $\\BV$ (see for instance \\cite{Ada75,AmbFusPal00}) :\n \\begin{itemize}\n \\item Let $p > 1$, $s\\in(0,1]$ and assume that $(w_n)_{n \\in \\mathds{N}}$ is bounded in $\\Wsp$. \n Then there exists a subsequence $(w_{n_k})_{k \\in \\mathds{N}}$ which converges weakly in $\\Wsp$. \n \\item Assume that $(w_n)_{n \\in \\mathds{N}}$ is bounded in $\\BV$. 
Then there exists a subsequence $(w_{n_k})_{k \\in \\mathds{N}}$ \n which converges weakly* in $\\BV$. \n \\end{itemize}\n\\end{remark}\n\n\nBefore introducing the regularization functional, which we investigate theoretically and numerically, we give \nthe definition of some sets of (equivalence classes of) admissible functions.\n\\begin{definition} \\label{def:spaces_etc}\n\\label{de:basic}\n For $0 < s \\leq 1$, $p \\geq 1$ and a nonempty closed subset $K \\subseteq \\mathds{R}^M$ we define \n \\begin{equation}\n \\label{eq:spacess_K}\n \\begin{aligned}\n \\Lp[K] \\vcentcolon= {} & \\{\\phi \\in \\Lp : \\phi(x) \\in K \\text{ for a.e. } x \\in \\Omega\\}; \\\\\n \\Wsp[K] \\vcentcolon= {} & \\{w \\in \\Wsp: w(x) \\in K \\text{ for a.e. } x \\in \\Omega \\}, \\\\\n \\BV[K] \\vcentcolon= {} & \\{w \\in \\BV: w(x) \\in K \\text{ for a.e. } x \\in \\Omega \\}.\n \\end{aligned}\n \\end{equation}\n and equip each of these (in general nonlinear) sets with some subspace topology:\n \\begin{itemize}\n \\item $\\Lp[K] \\subseteq \\Lp$ is associated with the strong $\\Lp$-topology,\n \\item $\\Wsp[K] \\subseteq \\Wsp$ is associated with the weak $\\Wsp$-topology, and \n \\item $\\BV[K] \\subseteq \\BV$ is associated with the weak* $\\BV$-topology.\n \\end{itemize}\n\n Moreover, we define \n \\begin{equation} \\label{eq:ChooseW}\n W(\\Omega,K) \\vcentcolon= \n \\begin{cases}\n \\Wsp[K] & \\text{ for } p \\in (1, \\infty) \\text { and } s \\in (0,1], \\\\\n \\BV[K] & \\text{ for } p = 1 \\text { and } s = 1\\;.\n \\end{cases}\n \\end{equation}\n Consistently, $W(\\Omega,K)$ \n \\begin{itemize}\n \\item is associated with the weak $\\Wsp$-topology in the case $p \\in (1, \\infty)$ and $s \\in (0,1]$ and\n \\item with the weak* $\\BV$-topology when $p=1$ and $s=1$.\n \\end{itemize}\n When we speak about \n \\begin{equation*}\n \\text{ convergence on } W(\\Omega,K) \\text{ we write } \\overset{W(\\Omega, K)}{\\longrightarrow} \\text{ or simply} \\rarr\n \\end{equation*}\n and mean weak convergence on $W^{s,p}(\\Omega,K)$ and weak* convergence on $\\BV[K]$, respectively.\n\\end{definition}\n\\begin{remark} \n\\label{re:notes_choose_w}\n~\\nopagebreak\n\\begin{itemize}[topsep=0pt]\n\\item In general $\\Lp[K], \\Wsp[K]$ and $\\BV[K]$ are sets which do not form a linear space.\n\\item If $K = \\sphere$, then $\\Wsp[K] = \\Wsp[\\sphere]$ as occurred in \\cite{BouBreMir00b}.\n\\item For an embedded manifold $K$ the dimension of the manifold is not necessarily identical with the space dimension of $\\mathds{R}^M$. \n For instance if $K = \\sphere \\subseteq \\mathds{R}^2$, then the dimension of $\\sphere$ is $1$ and $M=2$.\n\\end{itemize}\n\\end{remark}\nThe following lemma shows that $W(\\Omega,K)$ is a sequentially closed subset of $\\W$.\n\\begin{lemma}[Sequential closedness of $W(\\Omega,K)$ and {$\\Lp[K]$}] \\label{lem:Wsp_weakly_seq_closed_etc} \n ~\\nopagebreak \n \\vspace{-0.5\\baselineskip}\n \\begin{enumerate}[topsep=0pt]\n \\item \n Let $w_* \\in \\W$ and $(w_n)_{n\\in \\mathds{N}}$ be a sequence in $\\W[K] \\subseteq \\W$ with \n $w_n \\overset{W(\\Omega, \\mathds{R}^M)}{\\longrightarrow} w_*$ as $n \\to \\infty$. \n Then $w_* \\in \\W[K]$ and $w_n \\rarr w_*$ in $\\Lp[K]$.\n \\item \n Let $v_* \\in \\Lp$ and $(v_n)_{n \\in \\mathds{N}}$ be a sequence in $\\Lp[K] \\subseteq \\Lp$ with $v_n \\to v_*$ in $\\Lp$ as $n \\to \\infty$.\n Then $v_* \\in \\Lp[K]$ and there is some subsequence $(v_{n_k})_{k \\in \\mathds{N}}$ which converges to $v_*$ pointwise almost everywhere, i.e. 
$v_{n_k}(x) \\to v_*(x)$\n as $k \\to \\infty$ for almost every $x \\in \\Omega$.\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n For the proof of the second part, cf. \\cite{Els02}, Chapter VI, Corollary 2.7\n and take into account the closedness of $K \\subseteq \\mathds{R}^M$.\n The proof of the first part follows from standard convergence arguments in $\\Wsp$, $\\BV$ and $\\Lp$, respectively, \n using the embeddings from \\autoref{le:inclusion}, an argument on subsequences and part two.\n\\end{proof}\n\n\\begin{remark}\n \\autoref{le:inclusion} along with \\autoref{lem:Wsp_weakly_seq_closed_etc} imply that $\\W[K]$ is compactly embedded in $\\Lp[K]$, \n where these sets are equipped with the bornology inherited from $\\W$ and the topology inherited from $\\Lp$, respectively.\n\\end{remark}\n\n\n\nIn the following we postulate the assumptions on the operator $\\op$ which will be used throughout the paper:\n\\begin{assumption}\n\\label{ass:2}\nLet $\\W<1>$ be as in \\autoref{eq:ChooseW} and assume that $\\op$ is an operator from $\\W<1>$ to $\\Lp<2>$. \n\\end{assumption}\n\nWe continue with the definition of our regularization functionals:\n\\begin{definition} \\label{def:functional}\nLet \\autoref{ass:1} and \\autoref{ass:2} hold. Moreover, let $\\varepsilon > 0$ be fixed and let \n$\\rho \\vcentcolon= \\rho_\\varepsilon$ be a mollifier.\n\nThe regularization functional \n $\\F<\\alpha>[\\dKt, \\dKo] : \\W<1> \\rarr [0, \\infty]$ is defined as follows\n \\begin{equation}\\label{eq: functional_with_some_metric}\n \\boxed{\n \\F<\\alpha>[\\dKt, \\dKo] (w) \\vcentcolon= \\int\\limits_{\\Omega_2} \\dKt^{p_2}(\\op[w](x), v(x)) \\,\\mathrm{d}x + \\alpha \\int\\limits_{\\Omega_1\\times \\Omega_1} \\frac{\\dKo^{p_1}(w(x), w(y))}{\\normN[x-y]^{k+p_1 s}} \\rho^l(x-y) \\,\\mathrm{d}(x,y),}\n \\end{equation}\n where \n \\begin{enumerate}\n \\item $v \\in \\Lp<2>$,\n \\item $s \\in (0,1]$, \n \\item $\\alpha \\in (0, +\\infty)$ is the regularization parameter, \n \\item $l \\in \\set{0, 1}$ is an indicator and\n \\item \\label{itm:k}\n $\\begin{cases}\n k \\leq N &\\mbox{if } \\W<1> = W^{s,p_1}(\\Omega_1, K_1), \\ 0 = W^{1,p_1}(\\Omega_1, K_1) \\text{ or if } \\W<1> = BV(\\Omega_1, K_1), \\text{ respectively.}\n \\end{cases}$\n \\end{enumerate}\n Setting\n \\begin{equation}\\label{eq:d2}\n \\boxed{\n \\mebr[\\phi][\\nu]_{[\\dKt]} \\vcentcolon= \\left( \\int\\limits_{\\Omega_2} \\dKt^{p_2}(\\phi(x),\\nu(x)) \\,\\mathrm{d}x \\right)^{\\frac{1}{p_2}},}\n \\end{equation}\n and \n \\begin{equation}\\label{eq:d3}\n \\boxed{\n \\mathcal{R}_{[\\dKo]}(w) \\vcentcolon= \\int\\limits_{\\Omega_1\\times \\Omega_1} \\frac{\\dKo^{p_1}(w(x), w(y))}{\\normN[x-y]^{k+p_1 s}} \\rho^l(x-y) \\,\\mathrm{d}(x,y),}\n \\end{equation}\n \\autoref{eq: functional_with_some_metric} can be expressed in compact form\n \\begin{equation}\n \\label{eq:functional}\n \\boxed{\n \\F<\\alpha>[\\dKt,\\dKo](w) = \\mebr[\\op[w]][v]^{p_2}_{[\\dKt]} + \\alpha \\mathcal{R}_{[\\dKo]}(w).}\n \\end{equation}\n For convenience we will often skip some of the super- or subscript, and use compact notations like e.g. 
\n \\begin{equation*}\n \\F, \\F[\\dKt, \\dKo] \\text{ or } \\F(w) = \\mebr[\\op[w]][v]^{p_2} + \\alpha \\mathcal{R}(w).\n \\end{equation*}\n\\end{definition}\n\n\n\\begin{remark}\n~\n \\begin{enumerate}[topsep=0pt]\n \\item $l = \\set{0,1}$ is an indicator which allows to consider approximations of Sobolev semi-norms and double integral \n representations \n of the type of \\citeauthor{BouBreMir01} \\cite{BouBreMir01} in a uniform manner. \n \\begin{itemize}\n \\item when $k=0$, $s=1$, $l=1$ and when $d_1$ is the Euclidean distance, we get the double integrals of the \n \\citeauthor{BouBreMir01}-form \\cite{BouBreMir01}. Compare with \\autoref{eq:double_integral}.\n \\item When $d_1$ is the Euclidean distance, $k=N$ and $l=0$, we get Sobolev semi-norms.\n \\end{itemize}\n We expect a relation between the two classes of functionals for $l=0$ and $l=1$ as stated in \\autoref{ss:conjecture}.\n \\item \n When $d_1$ is the Euclidean distance then the second term in \\autoref{eq: functional_with_some_metric} is similar to the ones used in \n \\cite{AubKor09,BouElbPonSch11} and \\cite{BouBreMir01, Pon04b, Dav02}. \n\n \\end{enumerate}\n\\end{remark}\n\nIn the following we state basic properties of $\\mebr_{[\\dKt]}$ and the functional \n$\\F$.\n\n\\begin{proposition} \\label{pr:ExprIsOp}\nLet \\autoref{ass:1} hold. \n\n\\begin{enumerate}\n\\item Then the mapping $ \\mebr_{[\\dKt]} : \\Lp<2> \\times \\Lp<2> \\rarr [0, +\\infty]$ \nsatisfies the metric axioms.\n\n\\item \\label{itm: ExpIsOp} Let, in addition, \\autoref{ass:2} hold, assume that $v \\in \\Lp<2>$ and that both metrics $d_i$, $i=1,2$, \nare equivalent to $\\dRMi|_{K_i \\times K_i}$, respectively.\nThen the functional $\\F<\\alpha>[\\dKt, \\dKo]$ does not \nattain the value $+\\infty$ on its domain $\\W<1> \\neq \\emptyset$. \n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n\\begin{enumerate}\n\\item The axioms of non-negativity, identity of indiscernibles and symmetry are fulfilled by $\\mebr_{[\\dKt]}$ \n since $\\dKt$ is a metric. To prove the triangle inequality let $\\phi,\\xi,\\nu \\in L^{p_2}(\\Omega_2, K_2)$. 
\n In the main case $\\mebr[\\phi][\\nu]_{[\\dKt]}^{p_2} \\in (0, \\infty)$ H\u00f6lder's inequality yields\n\t\\begin{align*}\n\t\\mebr[\\phi][\\nu]_{[\\dKt]}^{p_2} ={}&\n\t\\int\\limits_{\\Omega_2} \\dKt\\big(\\phi(x),\\nu(x) \\big) \\dKt^{p_2-1}\\big( \\phi(x),\\nu(x) \\big) \\,\\mathrm{d}x \\\\ \n\t\\leq{}& \\int\\limits_{\\Omega_2} \\dKt\\big( \\phi(x),\\xi(x) \\big) \\dKt^{p_2-1}\\big( \\phi(x),\\nu(x) \\big) \\,\\mathrm{d}x \n\t + \\int\\limits_{\\Omega_2} \\dKt\\big( \\xi(x),\\nu(x) \\big) \\dKt^{p_2-1} \\big( \\phi(x),\\nu(x) \\big) \\,\\mathrm{d}x \\\\\n \\leq{}& \n \\left( \\int\\limits_{\\Omega_2} \\dKt^{p_2} \\big(\\phi(x),\\xi(x) \\big) \\,\\mathrm{d}x \\right)^{\\frac{1}{p_2}} \n \\left( \\int\\limits_{\\Omega_2} \\dKt^{p_2}\\big( \\phi(x),\\nu(x) \\big) \\,\\mathrm{d}x \\right)^{\\frac{p_2-1}{p_2}}\\\\\n &+ \\left( \\int\\limits_{\\Omega_2} \\dKt^{p_2} \\big(\\xi(x),\\nu(x) \\big) \\,\\mathrm{d}x \\right)^{\\frac{1}{p_2}} \n \\left( \\int\\limits_{\\Omega_2} \\dKt^{p_2}\\big( \\phi(x),\\nu(x) \\big) \\,\\mathrm{d}x \\right)^{\\frac{p_2-1}{p_2}}\\\\\n\t={}& \\left( \\mebr[\\phi][\\xi]_{[\\dKt]} + \\mebr[\\xi][\\nu]_{[\\dKt]} \\right)\n\t\\mebr[\\phi][\\nu]_{[\\dKt]}^{p_2-1},\n\t\\end{align*} \n meaning \n \\begin{gather*}\n \\mebr[\\phi][\\nu]_{[\\dKt]} \\leq \\mebr[\\phi][\\xi]_{[\\dKt]} + \\mebr[\\xi][\\nu]_{[\\dKt]}.\n \\end{gather*}\n If $\\mebr[\\phi][\\nu]_{[\\dKt]} = 0$ the triangle inequality is trivially fulfilled. \n \n In the remaining case $\\mebr[\\phi][\\nu]_{[\\dKt]} = \\infty$ applying the estimate $(a+b)^p \\leq 2^{p-1} (a^p + b^p)$,\n see e.g. \\cite[Lemma 3.20]{SchGraGroHalLen09},\n to $a = \\dKt(\\phi(x), \\xi(x)) \\geq 0$ and $b = \\dKt(\\xi(x), \\nu(x)) \\geq 0$ yields \n \\begin{gather*}\n \\mebr[\\phi][\\nu]_{[\\dKt]}^{p_2} \\leq 2^{p_2-1} \\big( \\mebr[\\phi][\\xi]_{[\\dKt]}^{p_2} + \\mebr[\\xi][\\nu]_{[\\dKt]}^{p_2} \\big),\n \\end{gather*}\n implying the desired result.\n\\item We emphasize that $\\W<1> \\neq \\emptyset$ because every constant function $w(\\cdot) = a \\in K_1$ belongs to $\\Wsp<1>$ \n for $p_1 \\in (1, \\infty)$ and $s \\in (0,1]$ as well as to $\\BV<1>$ for $p_1 = 1$ and $s = 1$.\n Assume now that the metrics $d_i$ are equivalent to $\\dRMi|_{K_i \\times K_i}$ for $i=1$ and $i=2$, respectively,\n so that we have an upper bound $d_i \\leq C \\dRMi|_{K_i \\times K_i}$.\n We need to prove that $\\F<\\alpha>[\\dKt, \\dKo](w) < \\infty$ for every $w \\in \\W<1>$.\n Due to $\\mebr[\\phi][\\nu]^{p_2}_{[\\dKt]} \\leq C^{p_2} \\norm{\\phi - \\nu}^{p_2}_{\\Lp<2>[][\\mathds{R}^{M_2}]} < \\infty$ \n for all $\\phi, \\nu \\in \\Lp<2> \\subseteq \\Lp<2>[][\\mathds{R}^{M_2}]$ it is sufficient to show\n $\\mathcal{R}_{[\\dKo]}(w) < +\\infty$ for all $w \\in \\W<1>$.\n\\begin{itemize}\n \\item For $\\W<1> = \\BV<1>$ this is guaranteed by \\cite[Theorem 1.2]{Pon04b}.\n \\item For $\\W<1> = W^{1,p_1}(\\Omega_1, K_1)$ by \\cite[Theorem 1]{BouBreMir01}.\n \\item For $\\W<1> = \\Wsp<1>$, $s \\in (0,1)$, we distinguish between two cases. 
\\\\\n If $\\normN[x-y]< 1$ we have that $\\frac{1}{\\normN[x-y]^{k+p_1 s}} \\leq \\frac{1}{\\normN[x-y]^{N+p_1 s}}$ for $k \\leq N$ and hence\n \\begin{gather*}\\int\\limits_{\\begin{smallmatrix}\n \t(x,y) \\in \\Omega_1 \\times \\Omega_1 \\\\ \\normN[x-y]< 1\n \t\\end{smallmatrix}} \\frac{\\dKo^{p_1}(w(x), w(y))}{\\normN[x-y]^{k+p_1 s}} \\rho^l(x-y) \\,\\mathrm{d}(x,y) \n \\leq C^{p_1} \\norm{\\rho^l}_{\\infty} \\abs{w}_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})}^{p_1} < \\infty\\;.\n \\end{gather*}\n If $\\normN[x-y]\\geq 1$ we can estimate \n \\begin{gather*}\\int\\limits_{\\begin{smallmatrix}\n \t(x,y) \\in \\Omega_1 \\times \\Omega_1 \\\\ \\normN[x-y]\\geq 1\n \t\\end{smallmatrix}} \\frac{\\dKo^{p_1}(w(x), w(y))}{\\normN[x-y]^{k+p_1 s}} \\rho^l(x-y) \\,\\mathrm{d}(x,y) \n \\leq C^{p_1} \\norm{\\rho^l}_{\\infty} 2^{p_1} |\\Omega_1| \\norm{w}^{p_1}_{L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})} < \\infty\\;.\n \\end{gather*}\n In summary adding yields $\\mathcal{R}_{[\\dKo]}(w) < +\\infty$.\n \\end{itemize}\n\\end{enumerate}\n\\end{proof}\n\n\n\n\\section{Existence} \\label{sec: Existence}\n\nIn order to prove existence of a minimizer of the functional $\\F$ we apply the Direct Method in the Calculus of Variations\n(see e.g. \\cite{Dac82,Dac89}). \nTo this end we verify continuity properties of $\\mebr_{[\\dKt]}$ and $\\mathcal{R}_{[\\dKo]}$, resp. $\\F[\\dKt, \\dKo]$ and apply them along with\nthe sequential closedness of $\\W<1>$, already proven in \\autoref{lem:Wsp_weakly_seq_closed_etc}.\n\n\nIn this context we point out some setting assumptions and their consequences on $\\F$, resp. $\\mebr$ and $\\mathcal{R}$ in the following remark. \nFor simplicity we assume \n$p \\vcentcolon= p_1 = p_2 \\in (1, \\infty)$, $\\Omega \\vcentcolon= \\Omega_1 = \\Omega_2$\nand $(K, \\dK) \\vcentcolon= (K_1, \\dKo) = (K_2, \\dKt)$.\n\\begin{remark} \\label{re:tricks}\n~\n\t\\begin{itemize}[topsep=0pt]\n\t\t\\item\n\t\tThe continuity of $\\dK$ with respect to $\\dRM|_{K \\times K}$ guarantees lower semicontinuity of $\\mebr_{[\\dK]}$ and $\\mathcal{R}_{[\\dK]}$.\n\t\t\\item\n\t\tThe inequality $\\dRM|_{K \\times K} \\leq \\dK$ carries over to the inequalities \n\t\t$\\norm{\\widetilde v - v}_{\\Lp} \\leq \\mebr[\\widetilde v][v]_{[\\dK]}$ for all $\\widetilde v, v \\in \\Lp[K]$,\n\t\tand $|w|_{\\W} \\leq \\mathcal{R}_{[\\dK]}(w)$ for all $w \\in \\W[K]$, allowing to transfer\n\t\tproperties like coercivity from $\\F[\\dRM,\\dRM]$ to $\\F[\\dK,\\dK]$.\n\t\tMoreover, the extended real-valued metric space $(\\Lp[K], \\mebr_{[\\dK]})$ stays \n\t\trelated to the linear space $(\\Lp, \\norm{\\cdot}_{\\Lp})$ in terms of the topology and bornology induced by $\\mebr$, \n\t\tresp. 
those inherited by\n\t\t$\\norm{\\cdot}_{\\Lp}$.\n\t\t\\item\n\t\tThe closedness of $K \\subseteq \\mathds{R}^M$ is crucial in showing that $\\W[K]$ is a sequentially closed subset of the linear space $\\W$.\n\t\tThis closedness property acts as a kind of replacement for the, a priori not available, notion of completeness with respect to the \n\t\t``space'' $(\\W[K], \\mebr, \\mathcal{R})$.\n\t\\end{itemize}\n\tFor $l=0$, $k=N$ note in the latter item that\n\tequipping $\\W[K]$ with $\\mebr_{[\\dKt]}$ and $\\mathcal{R}_{[\\dKo]}$ does not even lead to an (extended real-valued) metric space,\n\tin contrast to the classical case $(K,\\dK) = (\\mathds{R}^M, \\dRM)$.\n\\end{remark}\n\n\nWe will use the following assumption:\n\n\\begin{assumption} \\label{as:Setting} \n Let \\autoref{ass:1} hold, $v^0 \\in \\Lp<2>$ and let $\\W<1>$ and the associated topology be as defined in \\autoref{eq:ChooseW}.\n \n In addition we assume: \n \\begin{itemize}\n \\item $\\op: \\W<1> \\to \\Lp<2>$ is well--defined and sequentially continuous with respect to the specified topology on $\\W<1>$ and \n \\item For every $t > 0$ and $\\alpha > 0$ the level sets \n \\begin{equation}\\label{itm: A} \n \\text{level}_t(\\F<\\alpha>[\\dKt, \\dKo]) \\vcentcolon= \\{ w \\in \\W<1> \\ : \\ \\F<\\alpha>[\\dKt,\\dKo] \\leq t \\} \n \\end{equation}\n are sequentially pre-compact subsets of $W(\\Omega_1, \\mathds{R}^{M_1})$.\n \\item There exists a $\\bar{t} > 0$ such that $\\text{level}_{\\bar{t}}(\\F<\\alpha>[\\dKt, \\dKo])$ is nonempty.\n \\item Only those $v \\in \\Lp<2>$ are considered which additionally fulfill $\\mebr[v][v^0]_{[\\dKt]} < \\infty$. \n \\end{itemize}\n\\end{assumption}\n\n\\begin{remark}\n The third condition is sufficient to guarantee $\\F<\\alpha>[\\dKt, \\dKo]) \\not \\equiv \\infty$. In contrast the condition $v^0 \\in \\Lp<2>$, cf. \n \\autoref{def:functional}, might not be sufficient if $d_2$ is not equivalent to $\\dRMt|_{K_2 \\times K_2}$.\n\\end{remark}\n\n\n\n\\begin{lemma} \\label{thm:F_and_its_summands_are_seq_weakly_closed} \n Let \\autoref{as:Setting} hold. \n Then the mappings $\\mebr_{[\\dKt]}$, $\\mathcal{R}_{[\\dKo]}$ and $\\F[\\dKt, \\dKo]$ have the following continuity properties:\n \\begin{enumerate}\n \\item \\label{enu:continuity_of_mebr}\n The mapping $\\mebr_{[\\dKt]}: \\Lp<2> \\times \\Lp<2> \\rarr [0, +\\infty]$ is sequentially lower semi-continuous,\n i.e. whenever sequences $\\seq{\\phi}$, $\\seq{\\nu}$ in $\\Lp<2>$ converge to $\\phi_* \\in \\Lp<2>$ and \n $\\nu_*\\in \\Lp<2>$, respectively, we have $\\mebr[\\phi_*][\\nu_*]_{[\\dKt]} \\leq \\limi \\limits \\mebr[\\phi_n][\\nu_n]_{[\\dKt]}$.\n \\item \\label{enu:seq_lscty_of_R} \n The functional $\\mathcal{R}_{[\\dKo]}: \\W<1> \\rarr [0,\\infty]$ is sequentially lower semi-continuous, i.e. 
whenever a \n sequence $(w_n)_{n \\in \\mathds{N}}$ in $\\W<1>$ converges to some $w_* \\in \\W<1>$ we have\n \\begin{equation*}\n \\mathcal{R}_{[\\dKo]}(w_*) \\leq \\limi \\mathcal{R}_{[\\dKo]}(w_n).\n \\end{equation*}\n \\item \\label{enu:seq_lscty_of_F} \n The functional $\\F[\\dKt,\\dKo]: W(\\Omega_1, K_1) \\rarr [0,\\infty]$ is sequentially lower semi-continuous.\n \\end{enumerate}\n\\end{lemma}\n\n\n\\begin{proof}\n\t\\begin{enumerate}\n\t\\item \\label{enu:lsc_of_mebr}\n\t\tIt is sufficient to show that for \\emph{every} pair of sequences $\\seq{\\phi}$, $\\seq{\\nu}$ in $\\Lp<2>$ which \n\t\tconverge to previously \\emph{fixed} elements $\\phi_* \\in \\Lp<2>$ and $\\nu_*\\in \\Lp<2>$, respectively, we can extract subsequences \n\t\t$(\\phi_{n_j})_{j \\in \\mathds{N}}$ and $(\\nu_{n_j})_{j \\in \\mathds{N}}$, respectively, with \n\t\t\\begin{gather*}\n\t\t\\mebr[\\phi_*][\\nu_*]_{[\\dKt]} \\leq \\liminf_{j \\rarr \\infty} \\mebr[\\phi_{n_j}][\\nu_{n_j}]_{[\\dKt]}.\n\t\t\\end{gather*}\n\t\tTo this end let $(\\phi_n)_{n \\in \\mathds{N}},(\\nu_n)_{n \\in \\mathds{N}}$ be some sequences in $\\Lp<2>$ with $\\phi_n \\rarr \\phi_*$ and $\\nu_n \\rarr \\nu_*$ in $\\Lp<2>$. \n\t\t\\autoref{lem:Wsp_weakly_seq_closed_etc} ensures that there exist subsequences $(\\phi_{n_j})_{j \\in \\mathds{N}}, (\\nu_{n_j})_{j \\in \\mathds{N}}$ \n\t\tconverging to $\\phi_*$ and $\\nu_*$ pointwise almost everywhere, \n\t\twhich in turn implies $\\big(\\phi_{n_j}(\\cdot), \\nu_{n_j}(\\cdot) \\big) \\to \\big( \\phi_*(\\cdot), \\nu_*(\\cdot) \\big)$ \n\t\tpointwise almost everywhere. Therefrom, together with the continuity of $\\dKt: K_2 \\times K_2 \\to [0, \\infty)$ \n\t\twith respect to $\\dRMt$, cf. \\autoref{sec: Setting}, we obtain\t\t\n\t\tby using the quadrangle inequality\n\t\tthat\n\t\t\\begin{gather*}\n\t\t | \\dKt(\\phi_{n_j}(x), \\nu_{n_j}(x)) - \\dKt(\\phi_*(x), \\nu_*(x)) | \\leq \\dKt(\\phi_{n_j}(x), \\phi_*(x)) + \\dKt(\\nu_{n_j}(x), \\nu_*(x)) \\rarr 0,\n\t \\end{gather*}\n\t\tand hence\t\t\n\t\t\\begin{gather*}\n\t\t\\dKt^{p_2}\\big( \\phi_{n_j}(x), \\nu_{n_j}(x) \\big) \\to \\dKt^{p_2} \\big( \\phi_*(x),\\nu_*(x) \\big) \\text{ for almost every }\n\t\tx \\in \\Omega_2.\n\t\t\\end{gather*}\n\t\tApplying Fatou's lemma we obtain\n\t\t\\begin{gather*}\n\t\t\\mebr[\\phi_*][\\nu_*]_{[\\dKt]} = \\int_{\\Omega_2 }\\limits \\dKt^{p_2}( \\phi_*(x),\\nu_*(x) ) \\,\\mathrm{d}x\n\t\t\\leq \\liminf_{j \\rarr \\infty} \\int_{\\Omega_2}\\limits \\dKt^{p_2} ( \\phi_{n_j}(x), \\nu_{n_j}(x) ) \\,\\mathrm{d}x\n\t\t= \\liminf_{j \\rarr \\infty} \\mebr[\\phi_{n_j}][\\nu_{n_j}]_{[\\dKt]}.\n\t\t\\end{gather*}\n\t\t\\item \n\t\tLet $(w_n)_{n \\in \\mathds{N}}$ be a sequence in $\\W<1>$ with $w_n \\rarr w_*$ as $n \\rarr \\infty$.\n\t\tBy \\autoref{lem:Wsp_weakly_seq_closed_etc} there is a subsequence \n\t\t$(w_{n_j})_{j \\in \\mathds{N}}$ which converges to $w_*$ both in $\\Lp<1>$ and pointwise almost everywhere.\n\t\tThis further implies that\n\t\t\\begin{gather*}\n\t\t\\dKo^{p_1}\\big( w_{n_j}(x), w_{n_j}(y) \\big) \\to \\dKo^{p_1} \\big( w_*(x),w_*(y) \\big)\n\t\t\\end{gather*}\n\t\tfor almost every \n\t\t\\begin{equation}\n\t\t \\label{eq:A} \n\t\t (x,y) \\in \\Omega_1 \\times \\Omega_1 \\supseteq \\{(x,y) \\in \\Omega_1 \\times \\Omega_1 : x \\neq y \\} =\\vcentcolon A.\n\t\t\\end{equation}\n\t\tDefining\n\t\t\\begin{equation*}\n\t\tf_j(x,y) \\vcentcolon= \\left\\{ \\begin{array}{rcl}\n\t\t\\frac{\\dKo^{p_1}(w_{n_j}(x), w_{n_j}(y)) }{\\normN[x-y]^{k+ps}} \\rho^l(x-y) & \\text{ for } & \n\t\t(x,y) \\in 
A,\\\\\n\t\t0 & \\text{ for } & \n\t\t(x,y) \\in (\\Omega_1 \\times \\Omega_1) \\setminus A,\\\\\n\t\t\\end{array} \\right. \\quad \\text{ for all } j \\in \\mathds{N}.\n\t\t\\end{equation*}\n\t\tand \n\t\t\\begin{equation*}\n\t\tf_*(x,y) \\vcentcolon= \\left\\{ \\begin{array}{ccl}\n\t\t\\frac{\\dKo^{p_1}(w_*(x), w_*(y)) }{\\normN[x-y]^{k+ps}} \\rho^l(x-y) & \\text{ for } & \n\t\t(x,y) \\in A,\\\\\n\t\t0 & \\text{ for } & \n\t\t(x,y) \\in (\\Omega_1 \\times \\Omega_1) \\setminus A\\\\\n\t\t\\end{array} \\right. \n\t\t\\end{equation*}\n\t\twe thus have $f_*(x,y) = \\lim_{j \\rarr \\infty} f_j(x,y)$\n\t\tfor almost every $(x,y) \\in \\Omega_1 \\times \\Omega_1$.\n\t\tApplying Fatou's lemma to the functions $f_j$ yields the assertion, due to the same reduction as in the proof of the first part.\n\t\t\n\t\t\\item \n\t\tIt is sufficient to prove that the components $\\mathcal{G}(\\cdot) = \\mebr[F(\\cdot)][v]_{[\\dKt]}$ and $\\mathcal{R} = \\mathcal{R}_{[\\dKo]}$ of \n\t\t$\\F[\\dKo,\\dKt] = \\mathcal{G} + \\alpha \\mathcal{R}$ are sequentially lower semi-continuous.\n\t\tTo prove that $\\mathcal{G}$ is sequentially lower semi-continuous in every $w_* \\in W(\\Omega_1, K_1)$\n\t\tlet $(w_n)_{n \\in \\mathds{N}}$ be a sequence in $W(\\Omega_1, K_1)$ with $w_n \\rarr w_*$ as $n \\rarr \\infty$.\n\t\t\\autoref{as:Setting}, ensuring the sequential continuity of $\\op: \\W<1> \\to \\Lp<2>$, implies hence $\\op[w_n] \\rarr \\op[w_*]$ in $\\Lp<2>$ \n\t\tas $n \\rarr \\infty$.\n\t\tBy \\autoref{enu:continuity_of_mebr} we thus obtain $\\mathcal{G}(w_*) = \\mebr[\\op[w_*]][v] \\leq \\limi \\mebr[\\op[w_n]][v] = \\limi \\mathcal{G}(w_n)$. \\\\\n\t\t$\\mathcal{R}$ is sequentially lower semi-continuous by \\autoref{enu:seq_lscty_of_R}.\n\t\\end{enumerate}\n\\end{proof}\n\n\n\\subsection{Existence of minimizers}\n\nThe proof of \n the existence of a \n minimizer of $\\F[\\dKt, \\dKo]$ is along the lines of the proof \nin \\cite{SchGraGroHalLen09}, taking into account \\autoref{re:tricks}\nWe will need the following useful lemma, cf. \\cite{SchGraGroHalLen09}, which links $\\text{level}_t(\\F<\\alpha>)$ and \n$\\text{level}_t(\\F<\\alpha>)$ for $\\mebr[v][v^0] < \\infty$.\n\n\n\\begin{lemma}\\label{lem:Ineq_F_parameterchange_to_other_v} \n\tIt holds \n\t\\begin{align*}\n\t\t\\F<>[\\dKt, \\dKo](w) & \\leq 2^{p_2-1} \\F<>[\\dKt, \\dKo](w) + 2^{p_2 -1} \\mebr[v_\\diamond][v_\\star]_{[\\dKt]}^{p_2}\n\t\\end{align*}\n\tfor every \n\t$w \\in W(\\Omega_1, K_1)$ and $v_\\star, v_\\diamond \\in L^{p_2}(\\Omega_2, K_2)$.\n\\end{lemma}\n\n\\begin{proof}\n\tUsing the fact that for $p \\geq 1$ we have that $|a+b|^p \\leq 2^{p-1}(|a|^p + |b|^p), \\ a,b \\in \\mathds{R} \\cup \\{\\infty\\}$ and that $\\mebr_{[\\dKt]}$ fulfills the triangle inequality we obtain\n\t\\begin{align*}\n\t\t\\F<>[\\dKt, \\dKo](w) \n\t\t& = \\mebr[\\op[w]][v_\\star]_{[\\dKt]}^{p_2} + \\alpha \\mathcal{R}_{[\\dKo]}(w) \\\\\n\t\t& \\leq 2^{p_2-1} \\big( \\mebr[\\op[w]][v_\\diamond]_{[\\dKt]}^{p_2} + \\mebr[v_\\diamond][v_\\star]_{[\\dKt]}^{p_2} \\big) + \\alpha \\mathcal{R}_{[\\dKo]}(w) \\\\\n\t\t& \\leq 2^{p_2-1}\\big( \\F<>[\\dKt, \\dKo](w) + \\mebr[v_\\diamond][v_\\star]_{[\\dKt]}^{p_2} \\big).\n\t\\end{align*}\n\\end{proof}\n\n\\begin{thm} \\label{thm:F_dK_has_a_minimizer} \nLet \\autoref{as:Setting} hold. 
Then the functional $\\F<\\alpha>[\\dKt, \\dKo]: W(\\Omega_1, K_1) \\rarr [0, \\infty]$ \nattains a minimizer.\n\\end{thm}\n\n\\begin{proof}\n We prove the existence of a minimizer via the Direct Method.\n We shortly write $\\F^v$ for $\\F<\\alpha>[\\dKt, \\dKo]$.\n Let $(w_n)_{n \\in \\mathds{N}}$ be a sequence in $W(\\Omega_1, K_1)$ with\n \\begin{gather}\\label{eq:w_n_is_minimizing_seq}\n \\lim_{n \\rarr \\infty }\\F^v(w_n) = \\inf_{w \\in W(\\Omega_1, K_1)} \\F^v(w).\n \\end{gather}\n The latter infimum is not $+\\infty$, because $\\F^v \\equiv +\\infty$ would imply also $\\F^{v^0} \\equiv +\\infty$ due to \n \\autoref{lem:Ineq_F_parameterchange_to_other_v}, violating \\autoref{as:Setting}.\n In particular there is some $c \\in \\mathds{R}$ such that \n $\\F^v(w_n) \\leq c$ for every $n \\in \\mathds{N}$. \n Applying \\autoref{lem:Ineq_F_parameterchange_to_other_v} yields\n $\\F^{v^0}(w_n) \\leq 2^{p_2-1} \\big( \\F^{v}(w_n) + \\mebr[v][v^0] \\big)\\leq 2^{p_2-1} \\big( c + \\mebr[v][v^0] \\big) =\\vcentcolon\\tilde{c} < \\infty$ due to \\autoref{as:Setting}.\n Since the level set $\\text{level}_{\\tilde{c}}(\\F^{v^0})$ is sequentially pre-compact with respect to the topology given to $W(\\Omega_1, \\mathds{R}^{M_1})$ we get the existence of a\n subsequence $(w_{n_k})_{k \\in \\mathds{N}}$ which converges to some $w_* \\in W(\\Omega_1, \\mathds{R}^{M_1})$, \n where actually $w_* \\in W(\\Omega_1, K_1)$ due to \\autoref{lem:Wsp_weakly_seq_closed_etc}.\n Because $\\F^v$ is sequentially lower semi-continuous, see \\autoref{thm:F_and_its_summands_are_seq_weakly_closed}, we have \n $\\F^v(w_*) \\leq \\liminf_{k \\rarr \\infty} \\F^v(w_{n_k})$. Combining this with \n \\autoref{eq:w_n_is_minimizing_seq} we obtain \n \\begin{gather*}\n \\inf_{w \\in W(\\Omega_1, K_1)} \\F^v(w) \\leq \\F^v(w_*) \\leq \\liminf_{k \\rarr \\infty} \\F^v(w_{n_k}) \n = \\lim_{n\\rarr \\infty} \\F^v(w_n) = \\inf_{w \\in W(\\Omega_1, K_1)} \\F^v(w).\n \\end{gather*}\n In particular $\\F^v(w_*) = \\inf_{w \\in W(\\Omega_1, K_1)}\\limits \\F^v(w)$, meaning that $w_*$ is a minimizer of $\\F^v$.\n\\end{proof}\n\nIn the following we investigate two examples, which are relevant for the numerical examples in \\autoref{sec:Numerical_results}.\n\\begin{example}\n\\label{ex:coercive}\nWe consider that \n$W(\\Omega_1,K_1) = W^{s, p_1}(\\Omega_1, K_1)$ \nwith $p_1>1, \\ 0 < s < 1$ and fix $k = N$. \n\nIf the operator $\\op$ is norm-coercive in the sense that the implication\n \\begin{equation} \\label{eq:coercive_F}\n \\norm{w_n}_{L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\rarr +\\infty \\Rightarrow \\norm{\\op[w_n]}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\rarr +\\infty \n \\end{equation}\n holds true for every sequence $(w_n)_{n \\in \\mathds{N}}$ in $W^{s,p_1}(\\Omega_1, K_1)\\subseteq W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})$, then the functional \n \\begin{equation*}\n \\F[\\dKt, \\dKo] = \\mebr[\\op[w]][v]^{p_2}_{[\\dKt]} + \\alpha \\mathcal{R}_{[\\dKo]}(w): W^{s,p_1}(\\Omega_1, K_1) \\rarr [0, \\infty]\n \\end{equation*}\n is coercive. This can be seen as follows: \n \n The inequality between $\\dKo$ and $\\dRMo|_{K_1 \\times K_1}$ resp. 
$\\dKt$ and $\\dRMt|_{K_2 \\times K_2}$, see \\autoref{ass:1}, carries over to \n $\\F[\\dKt,\\dKo]$ and $\\F[\\dRMt|_{K_2 \\times K_2},\\dRMo|_{K_1 \\times K_1}]$, i.e.\n \\begin{gather*}\n \\F[\\dKt, \\dKo] (w) \\geq \\F\\big[\\dRMt|_{K_2 \\times K_2},\\dRMo|_{K_1 \\times K_1}\\big](w) \\text{ for all } w \\in W^{s,p_1}(\\Omega_1, K_1).\n \\end{gather*} \n Thus it is sufficient to show that $\\F[\\dRMt|_{K_2 \\times K_2},\\dRMo|_{K_1 \\times K_1}]: W^{s,p_1}(\\Omega_1, K_1) \\rarr [0, \\infty]$ is coercive: \n To prove this we write shortly $\\F$ instead of $\\F[\\dRMt|_{K_2 \\times K_2},\\dRMo|_{K_1 \\times K_1}]$ and consider sequences\n $(w_n)_{n \\in \\mathds{N}}$ in $W^{s,p_1}(\\Omega_1, K_1)$ with $\\norm{w_n}_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\rarr +\\infty$ as $n \\rarr \\infty$. We \n show that $\\F(w_n) \\rarr +\\infty$, as $n \\rarr \\infty$.\n Since \n \\begin{equation*} \n \\norm{w_n}_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})} = \\big( \\norm{w_n}_{L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})}^{p_1} + \\abs{w_n}_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})}^{p_1} \\big)^{\\frac{1}{p_1}}\n \\end{equation*}\n the two main cases to be considered are $\\norm{w_n}_{L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\rarr +\\infty$ and \n $\\abs{w_n}_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\rarr +\\infty$.\n \n \\begin{enumerate}[label=\\textbf{Case \\arabic*}]\n \\item \\label{ex: coecivity_case1}\n $\\norm{w_n}_{L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\rarr +\\infty$. \\\\\n The inverse triangle inequality and the norm-coercivity of $\\op$, \\autoref{eq:coercive_F}, give\n $\\norm{\\op[w_n] - v}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\geq \\norm{\\op[w_n]}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} - \\norm{v}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\rarr +\\infty$.\n Therefore also \n \\begin{equation*}\n \\F(w_n) = \\norm{\\op[w_n] - v}^{p_2}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \n + \\alpha \\int\\limits_{\\Omega_1\\times \\Omega_1} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\rho^l(x-y) \\,\\mathrm{d}(x,y)\n \\rarr +\\infty.\n \\end{equation*}\n\n \\noindent\n \\item \\label{ex: coecivity_case2}\n $\\abs{w_n}_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\rarr +\\infty$. \\\\\n If $l=0$, then $\\mathcal{R}_{[\\dKo]}$ is exactly the $W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})$-semi-norm $|w|_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})}$ and we trivially \n get the desired result. \n \n Hence we assume from now on that $l = 1$. \n The assumptions on $\\rho$ ensure that there exists a $\\tau > 0$ and $\\eta_{\\tau} > 0$ such that\n \\begin{align*}\n \\mathcal{S}_{\\tau} \\vcentcolon= {}& \\{(x,y) \\in \\Omega_1 \\times \\Omega_1 : \\rho(x-y) \\geq \\tau \\}\n \\\\\n = {}& \\{(x,y) \\in \\Omega_1 \\times \\Omega_1 : \\normN[x-y] \\leq \\eta_{\\tau} \\},\n \\end{align*}\n cf. 
\\autoref{fig:stripe}.\n\n Splitting $\\Omega_1 \\times \\Omega_1$ into $\\mathcal{S}_{\\tau} =\\vcentcolon \\mathcal{S}$ and its complement \n $(\\Omega_1 \\times \\Omega_1) \\setminus \\mathcal{S}_{\\tau} =\\vcentcolon \\mathcal{S}^c$\n we accordingly split the integrals \n $\\abs{w_n}_{W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})} = \\int\\limits_{\\Omega_1 \\times \\Omega_1} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y)$ \n and consider again two cases \n $\\int\\limits_{\\mathcal{S}} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y) \\rarr +\\infty$ and \n $\\int\\limits_{\\mathcal{S}^c} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y) \\rarr +\\infty$, respectively.\n\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\begin{minipage}[t]{0.35\\linewidth}\n\t\t\\vspace*{-110pt}\n\t\t\\begin{tikzpicture}\n\t\t\\draw (0,0) -- (3,0);\n\t\t\\draw (3,0) -- (3,3);\n\t\t\\draw (3,3) -- (0,3);\n\t\t\\draw (0,3) -- (0,0);\n\t\t\\draw [ fill = magenta] (0.25,0) -- (3, 2.75) --(3,3)--(2.75, 3) -- (0,0.25) --(0,0);\n\t\t\\draw [dashed] (0,0) -- (3,3);\n\t\t\\node at (1.35,1.3) {$\\mathcal{S}$};\n\t\t\\node at (2.0,0.5) {$\\mathcal{S}^c$};\n\t\t\\node at (4,3) {$\\Omega_1 \\times \\Omega_1$};\n\t\t\\draw[->] (-0.5,1.5) -- (3.5,1.5);\n\t\t\\node at (3.7,1.5){$x$};\n\t\t\\node at (-1.1,1.5){$y=y_0$};\n\t\t\\end{tikzpicture}\n\t\\end{minipage}\n\t\\hspace{1cm}\n\t\\begin{minipage}[t]{0.35\\linewidth}\n\t\t\\begin{tikzpicture}\n\t\t\\draw[->] (-2,0) -- (2,0);\n\t\t\\node[right] at (2,0) {$x$};\n\t\t\\draw[->](0,0) -- (0,3);\n\t\t\\node[above] at (0,3) {$\\rho(x-y_0)$};\n\t\t\\draw[ thick, blue, out=0, in=180] (0,2.5) to (1.5,0);\n\t\t\\draw[ thick, blue, out=180, in=0] (0,2.5) to (-1.5,0);\n\t\t\\draw[thick, blue] (1.5,0) -- (2,0);\n\t\t\\draw[thick, blue] (-1.5,0) -- (-2,0);\t\n\t\t\\draw[dashed] (-0.6,2) -- (0.6,2);\t\t\n\t\t\\node[right] at (0.6,2) {$\\tau$};\n\t\t\\draw[thick, magenta] (-0.6,0) -- (0.6,0);\t\t\n\t\t\\draw[dashed, magenta] (-0.6,0) to (-0.6,2);\n\t\t\\draw[dashed, magenta] (0.6,0) to (0.6,2);\n\t\t\\node at (0,-0.3){$y_0$};\n\t\t\\end{tikzpicture}\n\t\\end{minipage}\n\t\\caption{The stripe $\\mathcal{S} = \\mathcal{S}_{\\tau}$ if $\\Omega_1$ is an open interval and its connection to the radial mollifier $\\rho$ for fixed $y \\in \\Omega_1$.}\n\t\\label{fig:stripe}\n\\end{figure}\n \n \\begin{enumerate}[label=\\textbf{\\ref{ex: coecivity_case2}.\\arabic*}]\n \\item\t\n $\\int\\limits_{\\mathcal{S}} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y) \\rarr + \\infty$. 
\\\\\n By definition of $\\mathcal{S}$ we have $\\rho(x-y) \\geq \\tau > 0$ for all $(x,y) \\in \\mathcal{S}$.\n Therefore\n \\begin{gather*}\n \\int\\limits_{\\mathcal{S}} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\rho(x-y) \\,\\mathrm{d}(x,y) \n \\geq \\tau \\int\\limits_{\\mathcal{S}} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y)\n \\rarr +\\infty.\n \\end{gather*}\n Since $\\alpha > 0$, it follows \n \\begin{align*}\n \\F(w_n) &= \\norm{\\op(w_n) - v}^{p_2}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})}\n + \\underbrace{\\alpha \\int\\limits_{\\mathcal{S}} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\rho(x-y) \\,\\mathrm{d}(x,y) \n }_{\\rarr +\\infty} \\\\\n & + \\underbrace{ \\alpha \\int\\limits_{\\mathcal{S}^c} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\rho(x-y) \\,\\mathrm{d}(x,y) \n }_{\\geq 0}\n \\rarr +\\infty.\n \\end{align*}\n\n \\noindent\n \\item\t\\label{ex: coecivity_case22}\n $\\int\\limits_{\\mathcal{S}^c} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y) \\rarr + \\infty$.\\\\\n For $(x, y) \\in \\mathcal{S}^c$ it might happen that $\\rho(x-y) = 0$, and thus instead of proving \n $\\F(w_n) \\geq \\int\\limits_{\\mathcal{S}^c} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\rho(x-y) \\,\\mathrm{d}(x,y) \\rarr +\\infty$,\n as in Case 2.1, we rather show that $\\F(w_n) \\geq \\norm{\\op[w_n] - v}^{p_2}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\rarr +\\infty$.\n For this it is sufficient to show that for every $c > 0$ there is some $C \\in \\mathds{R}$ such that the implication\n \\begin{gather*}\n \\norm{\\op[w]-v}^{p_2}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})}\\leq c\n \\implies\n \\int\\limits_{\\mathcal{S}^c} \\frac{\\normMo[w(x) - w(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y) \\leq C,\n \\end{gather*}\n holds true for all $w \\in W^{s,p_1}(\\Omega_1, K_1) \\subseteq W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})$.\n To this end let $c > 0$ be given and consider an arbitrarily chosen $w \\in W^{s,p_1}(\\Omega_1, K_1)$ \n fulfilling $\\norm{\\op[w] - v}^{p_2}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\leq c$.\n \n Then $\\norm{\\op[w] - v}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\leq \\sqrt[p_2]{c}$. Using the triangle inequality and the monotonicity\n of the function $h: t \\mapsto t^{p_2}$ on $[0, +\\infty)$ we get further\n \\begin{align}\\label{eq: Norm_estimate}\n \\norm{\\op[w]}^{p_2}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \n &= \\norm{\\op[w] - v + v}^{p_2}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\nonumber\\\\\n &\\leq \\left( \\norm{\\op[w] - v}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} + \\norm{v}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\right)^{p_2} \\nonumber\\\\\n & \\leq \\big( \\sqrt[p_2]{c} + \\norm{v}_{L^{p_2}(\\Omega_2, \\mathds{R}^{M_2})} \\big)^{p_2} =\\vcentcolon \\tilde{c}. \n \\end{align}\n \\noindent\n Due to the norm-coercivity, it thus follows \n that $\\norm{w}_{L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\leq \\bar{c}$, $\\bar{c}$ some constant. 
\n Using \\cite[Lemma 3.20]{SchGraGroHalLen09} it then follows that \n \\begin{gather}\\label{eq: Convexity_Inequality}\n \\normMo[w(x) - w(y)]^{p_1} \\leq 2^{p_1-1} \\normMo[w(x)]^{p_1} + 2^{p_1-1} \\normMo[w(y)]^{p_1} \n \\end{gather}\n for all $(x,y) \\in \\Omega_1 \\times \\Omega_1$.\n Using \\autoref{eq: Convexity_Inequality}, Fubini's Theorem and \\autoref{eq: Norm_estimate} we obtain\n \\begin{align*}\n \\int\\limits_{\\Omega_1 \\times \\Omega_1} \\normMo[w(x) - w(y)]^{p_1} \\,\\mathrm{d}(x,y)\n & \\leq \\int\\limits_{\\Omega_1 \\times \\Omega_1} 2^{p_1-1} \\normMo[w(x)]^{p_1} + 2^{p_1-1}\\normMo[w(y)]^{p_1} \\,\\mathrm{d}(x,y) \\\\\n &= \\abs{\\Omega_1} \\int\\limits_{\\Omega_1} 2^{p_1-1} \\normMo[w(x)]^{p_1} \\,\\mathrm{d}x + \\abs{\\Omega_1} \\int\\limits_{\\Omega_1} 2^{p_1-1}\\normMo[w(y)]^ {p_1} \\,\\mathrm{d}y \\\\\n &= 2\\abs{\\Omega_1} \\int\\limits_{\\Omega_1} 2^{p_1-1} \\normMo[w(x)]^{p_1} \\,\\mathrm{d}x \\\\\n & = 2^{p_1} \\abs{\\Omega_1} \\; \\norm{w}^{p_1}_{L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})} \\leq 2^{p_1} \\abs{\\Omega_1} \\bar{c}^{p_1}.\n \\end{align*}\n\n Combining $\\normN[x-y] \\geq \\eta_{\\tau} > 0$ for all $(x,y) \\in \\mathcal{S}^c$ with the previous inequality we obtain the needed estimate\n \\begin{align*}\n \\int\\limits_{\\mathcal{S}^c} \\frac{\\normMo[w(x) - w(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\,\\mathrm{d}(x,y)\n & \\leq \\frac{1}{\\eta_{\\tau}^{N+p_1 s}} \\int\\limits_{\\mathcal{S}^c} \\normMo[w(x) - w(y)]^{p_1} \\,\\mathrm{d}(x,y) \n \\\\\n & \\leq \\frac{1}{\\eta_{\\tau}^{N+p_1 s}} \\int\\limits_{\\Omega_1 \\times \\Omega_1} \\normMo[w(x) - w(y)]^{p_1} \\,\\mathrm{d}(x,y)\n \\\\\n & \\leq \\frac{2^{p_1} \\abs{\\Omega_1} \\bar{c}^{p_1}}{\\eta_{\\tau}^{N+p_1 s}} =\\vcentcolon C. \n \\end{align*}\n \\end{enumerate}\n \\end{enumerate}\n \\end{example}\n \n The second example concerns the coercivity of $\\F[\\dKt,\\dKo]$, defined in \\autoref{eq:functional}, when \n $\\op$ denotes the \\emph{masking operator} occurring in image inpainting. To prove this result we require the following auxiliary lemma:\n \\begin{lemma}\\label{lem. auxLemma}\n There exists a constant $C \\in \\mathds{R} $ such that for all $w \\in W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1}), \\ 0[][\\mathds{R}^{M_1}]$ for $p_1 \\in (1, \\infty)$ and \\autoref{lem:Wsp_weakly_seq_closed_etc} \n there exists a subsequence $(w_{n_k})_{k \\in \\mathds{N}}$ of $(w_n)_{n \\in \\mathds{N}}$ \n and $w_* \\in W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1})$\n such that $w_{n_k} \\rarr w^*$ strongly in $L^{p_1}(\\Omega_1, \\mathds{R}^{M_1})$ and \n pointwise almost everywhere.\n\n Using the continuity of the norm and dominated convergence we obtain\n \\begin{enumerate}\n \t\\item $\\norm{w^*}_{L^{p_1}(D, \\mathds{R}^{M_1})}^{p_1} = 1$, in particular $w^*$ is not the null-function on D,\n \t\\item $\\norm{w^*}_{L^{p_1}(\\Omega_1 \\setminus D, \\mathds{R}^{M_1})}^{p_1} = 0$ since $n \\in \\mathds{N}$ is arbitrary and hence $w^* \\equiv 0$ on $\\Omega_1 \\setminus D$.\n \t\\item \\begin{equation*}\n \t\\liminf_{n \\rarr \\infty} \\frac{1}{n} \n \t> \\liminf_{n \\rarr \\infty} \\int\\limits_{\\mathcal{S}} \\frac{\\normMo[w_n(x) - w_n(y)]^{p_1}}{\\normN[x-y]^{N+p_1 s}} \\rho(x-y) \\,\\mathrm{d}(x,y) \n \t\\geq \\frac{\\tau}{\\eta^{N+p_1 s}} \\int\\limits_{\\mathcal{S}} \\normMo[w^*(x) - w^*(y)]^{p_1},\n \t\\end{equation*}\n \ti.e. 
$w^*(x) = w^*(y) $ for $(x,y) \\in \\mathcal{S}$ yielding that $w^*$ locally constant and hence even constant since $\\Omega_1$ is connected,\n \\end{enumerate}\n which gives the contradiction. \n \n In the case $l=0$ we use similar arguments, where the distance $\\normN[x-y]$ in the last inequality can be estimated by $\\mathrm{diam}|\\Omega_1|$ (instead of $\\eta$) since $\\Omega_1$ is bounded.\n \\end{proof}\n \n \\begin{remark}\n \tIn case $l=1$ it follows that the sharper inequality holds true: \n \tThere exists a constant $C \\in \\mathds{R} $ such that for all $w \\in W^{s,p_1}(\\Omega_1, \\mathds{R}^{M_1}), \\ 01, \\ 0 < s < 1$ and fix $k = N$. \n \n Assume that $\\op$ is the inpainting operator, i.e. \n \\begin{equation*}\n \\op(w) = \\chi_{\\Omega_1 \\backslash D} (w),\n \\end{equation*} \n where $D \\subseteq \\Omega_1, \\ w \\in W^{s,p_1}(\\Omega_1, K_1)$. Since the dimension of the data $w$ and the image data $\\op(w)$ have the same dimension at every point $x \\in \\Omega_1$, we write $M \\vcentcolon= M_1 = M_2$. \\\\\n Then the functional \n \\begin{equation*}\n \\F[\\dKt, \\dKo] = \\mebr[\\op[w]][v]^{p_2}_{[\\dKt]} + \\alpha \\mathcal{R}_{[\\dKo]}(w): W^{s,p_1}(\\Omega_1, K_1) \\rarr [0, \\infty]\n \\end{equation*}\n is coercive for $p_2 \\geq p_1$: \\\\\n The fact that $p_2 \\geq p_1$ and that $\\Omega_1$ is bounded ensures that\n \\begin{equation}\\label{eq: LpEmbedding}\n L^{p_2}(\\Omega_1 \\backslash D, \\mathds{R}^M) \\subseteq L^{p_1}(\\Omega_1 \\backslash D, \\mathds{R}^M).\n \\end{equation} \n The proof is done\n using the same arguments as in the proof of \\autoref{ex:coercive}, where we additionally split \\ref{ex: coecivity_case1} into the two sub-cases \n \\begin{enumerate}[label=\\textbf{\\ref{ex: coecivity_case1}.\\arabic*}]\n \t\\item \\label{ex: coecivity_case11}\n \t$\\norm{w_n}_{L^{p_1}(D, \\mathds{R}^M)} \\rarr +\\infty$\n \t\\item \\label{ex: coecivity_case12} \n \t$\\norm{w_n}_{L^{p_1}(\\Omega_1 \\setminus D, \\mathds{R}^M)} \\rarr +\\infty$\n \\end{enumerate}\n and using additionally \\autoref{lem. auxLemma}, \\autoref{eq: inpaintingIneq2} and \\autoref{eq: LpEmbedding}. \n\\end{example}\n\n\n\\section{Stability and Convergence} \\label{sec: Stability_and_Convergence}\nIn this section we will first show a stability and afterwards a convergence result. We use the notation introduced in \n\\autoref{sec: Setting}. In particular $W(\\Omega_1, K_1)$ is as defined in \\autoref{eq:ChooseW}. \nWe also stress that we use \nnotationally simplified versions $\\F$ of $\\F<\\alpha>[\\dKt, \\dKo]$ and $\\mathcal{R}$ of $\\mathcal{R}_{[\\dKo]}$ whenever possible.\nSee \\autoref{eq: functional_with_some_metric}, \\autoref{eq:d2} and \\autoref{eq:d3}.\n\n\\begin{thm} \\label{thm:Stability}\nLet \\autoref{as:Setting} be satisfied.\nLet $v^\\delta \\in L^{p_2}(\\Omega_2, K_2)$ and let $(v_n)_{n \\in \\mathds{N}}$ be a sequence in $L^{p_2}(\\Omega_2, K_2)$ such that\n$\\mebr[v_n][v^\\delta]_{[\\dKt]} \\rarr 0$.\nThen every sequence $\\seq{w}$ with\n \\begin{equation*}\n w_n \\in \\arg \\min \\{ \\F<\\alpha>[\\dKt, \\dKo](w) \\ : \\ w \\in W(\\Omega_1, K_1) \\}\n \\end{equation*}\t\n has a converging subsequence w.r.t. 
the topology of $W(\\Omega_1, K_1)$.\n The limit $\\tilde{w}$ of any such converging subsequence $(w_{n_k})_{k \\in \\mathds{N}}$ is a minimizer of \n $\\F^{v^\\delta}[\\dKt, \\dKo]$.\n Moreover, $(\\mathcal{R}(w_{n_k}))_{k \\in \\mathds{N}}$ converges to $\\mathcal{R}(\\tilde{w})$.\n\\end{thm}\n\nThe subsequent proof of \\autoref{thm:Stability} is similar to the proof of \\cite[Theorem 3.23]{SchGraGroHalLen09}.\n\n\\begin{proof}\nFor the ease of notation we simply write $\\F$ instead of $\\F<\\alpha>[\\dKt, \\dKo]$ and \n$\\mebr[v][\\tilde{v}] = \\mebr[v][\\tilde{v}]_{[\\dKt]}$ \n\nBy assumption the sequence $(\\mebr[v_n][v^\\delta])_{n\\in \\mathds{N}}$ converges to $0$ and thus is bounded, i.e., \nthere exists $B \\in (0, +\\infty)$ such that \n\\begin{gather} \\label{eq:seq_bounded_IN_stability_proof}\n \\mebr[v_n][v^\\delta] \\leq B \\text{ for all } n \\in \\mathds{N}.\n\\end{gather}\nBecause $w_n \\in \\arg \\min \\{ \\F(w) : w \\in W(\\Omega_1, K_1) \\}$ it follows that \n\\begin{equation}\\label{eq: w_n Minimizer}\n \\F(w_n) \\leq \\F(w) \\text{ for all } w \\in W(\\Omega_1, K_1).\n\\end{equation}\nBy \\autoref{as:Setting} there is a $\\overline{w} \\in W(\\Omega_1, K_1)$ such that $\\F^{v^0}(\\overline{w}) <\\infty$. Set $c \\vcentcolon= 2^{p_2-1}$. \nUsing \\autoref{as:Setting} and applying \\autoref{lem:Ineq_F_parameterchange_to_other_v}, \n\\autoref{eq: w_n Minimizer} and \\autoref{eq:seq_bounded_IN_stability_proof} implies that for all $n \\in \\mathds{N}$\n\\begin{align*}\n \\F (w_n) \n & \\leq c \\F(w_n) + c \\mebr[v_n][v^\\delta]^{p_2} \n\\\\\n & \\leq c \\F(\\overline{w}) + c B^{p_2}\n\\\\\n & \\leq c \\big[c \\F(\\overline{w}) + c \\mebr[v^\\delta][v_n]^{p_2} \\big] + cB^{p_2}\n\\\\\n & \\leq c^2 \\F(\\overline{w}) + (c^2 + c)B^{p_2}\n\\\\\n & \\leq c^3 \\big( \\F(\\overline{w}) + \\mebr[v^0][v^\\delta] \\big) + (c^2 + c)B^{p_2} =\\vcentcolon m < \\infty.\n\\end{align*}\nApplying again \\autoref{lem:Ineq_F_parameterchange_to_other_v} we obtain\n$\\F (w_n) \\leq c \\F (w_n) + c \\mebr[v^\\delta][v^0]^{p_2} \\leq m + c \\mebr[v^\\delta][v^0]^{p_2} =\\vcentcolon \\widetilde m < \\infty$.\nHence, from item \\eqref{itm: A} it follows that the sequence $\\seq{w}$\ncontains a converging subsequence.\n\nLet now $(w_{n_k})_{k \\in \\mathds{N}}$ be an arbitrary subsequence of $\\seq{w}$ which converges in $W(\\Omega_1, K_1)$ to some \n$\\tilde w \\in W(\\Omega_1, \\mathds{R}^{M_1})$. Then, from \\autoref{lem:Wsp_weakly_seq_closed_etc} and the continuity properties of $\\op$ \nit follows that $\\tilde w \\in W(\\Omega_1, K_1)$ and $(\\op[w_{n_k}], v_{n_k}) \\rarr (\\op[\\tilde w], v^\\delta)$ in \n$L^{p_2}(\\Omega_2, K_2) \\times L^{p_2}(\\Omega_2, K_2)$. 
Moreover, using \\autoref{thm:F_and_its_summands_are_seq_weakly_closed},\n\\autoref{eq: w_n Minimizer} \nand the triangle inequality \nit follows that for every $w \\in W(\\Omega_1, K_1)$ the following estimate holds true\n\\begin{align*}\n \\F(\\tilde w) \n & = \\mebr[\\op(\\tilde w)][v^\\delta]^{p_2} + \\alpha \\mathcal{R}(\\tilde w)\n \\leq \\mebr[\\op(\\tilde w)][v^\\delta]^{p_2} + \\alpha \\liminf_{k \\rarr \\infty} \\mathcal{R}(w_{n_k})\n \\leq \\mebr[\\op(\\tilde w)][v^\\delta]^{p_2} + \\alpha \\limsup_{k \\rarr \\infty} \\mathcal{R}(w_{n_k})\n\\\\& \n \\leq \\liminf_{k \\rarr \\infty} \\mebr[\\op(w_{n_k})][v_{n_k}]^{p_2} + \\alpha \\limsup_{k \\rarr \\infty} \\mathcal{R}(w_{n_k}) \n \\leq \n \\limsup_{k \\rarr \\infty} \\F(w_{n_k})\n \\leq \\limsup_{k \\rarr \\infty} \\F(w)\n\\\\& \n= \\left( \\limsup_{k \\to \\infty} \\mebr[F(w)][v_{n_k}] \\right)^{p_2} + \\alpha \\mathcal{R}(w)\n \\leq \\left( \\limsup_{k \\to \\infty} \\big(\\mebr[F(w)][v^\\delta] + \\mebr[v^\\delta][v_{n_k}] \\big) \\right)^{p_2} + \\alpha \\mathcal{R}(w)\n\\\\& = \\F(w).\n\\end{align*}\n\n\nThis shows that $\\tilde w$ is a minimizer of $\\F$.\nChoosing $w = \\tilde w$ in the previous estimate we obtain the equality\n\\begin{gather*}\n \\mebr[\\op(\\tilde w)][v^\\delta]^{p_2} + \\alpha \\mathcal{R}(\\tilde w) \n = \\mebr[\\op(\\tilde w)][v^\\delta]^{p_2} + \\alpha \\liminf_{k \\rarr \\infty} \\mathcal{R}(w_{n_k}) \n = \\mebr[\\op(\\tilde w)][v^\\delta]^{p_2} + \\alpha \\limsup_{k \\rarr \\infty} \\mathcal{R}(w_{n_k}) \\,.\n\\end{gather*}\nDue to $\\mebr[\\op(\\tilde w)][v^\\delta]^{p_2} \\leq \\F(\\tilde w) \\leq m < \\infty$ this gives\n\\begin{gather*}\n \\mathcal{R}(\\tilde w) = \\lim_{k \\rarr \\infty} \\mathcal{R}(w_{n_k}).\n\\end{gather*}\n\\end{proof}\n\n\nBefore proving the next theorem we need the following definition, cf. \\cite{SchGraGroHalLen09}.\n\\begin{definition}\n Let $v^0 \\in \\Lp<2>$.\n Every element $w^* \\in W(\\Omega_1, K_1)$ fulfilling\n \\begin{align} \\label{eq: R_minimizing_solution}\n \\begin{split}\t\n &\\op[w^*] = v^0 \\\\ \n &\\mathcal{R}(w^*) = \\min \\{ \\mathcal{R}(w) \\ : \\ w \\in W(\\Omega_1, K_1), \\ \\op[w] = v^0 \\}.\n \\end{split}\n \\end{align}\n is called an \n \\emph{$\\mathcal{R}$-minimizing solution} of the equation $\\op[w] = v^0$ or shorter just\n \\emph{$\\mathcal{R}$-minimizing solution}.\n\\end{definition}\n\nThe following theorem and its proof are inspired by \\cite[Theorem 3.26]{SchGraGroHalLen09}. \n\n\\begin{thm} \\label{thm: convergence}\n Let \\autoref{as:Setting} be satisfied.\t \n Let there exist an $\\mathcal{R}$-minimizing solution $w^\\dagger \\in W(\\Omega_1, K_1)$ and \n let $\\alpha: (0, \\infty) \\rarr (0,\\infty)$ be a function satisfying\n \\begin{equation}\\label{eq: assumptions_on_alpha}\n \\alpha(\\delta) \\rarr 0 \\text{ and } \\frac{\\delta^{p_2}}{\\alpha(\\delta)} \\rarr 0\n\t\\text{ for } \\delta \\to 0.\n \\end{equation} \n Let $(\\delta_n)_{n \\in \\mathds{N}}$ be a sequence of positive real numbers converging to $0$. 
Moreover, let \n $(v_n)_{n \\in \\mathds{N}}$ be a sequence in $L^{p_2}(\\Omega_2, K_2)$ with $\\mebr[v^0][v_n]_{[\\dKt]} \\leq \\delta_n$ and \n set $\\alpha_n \\vcentcolon= \\alpha(\\delta_n)$.\n \n Then every sequence $\\seq{w}$ of minimizers \n \\begin{equation*}\n w_n \\in \\arg \\min \\{ \\F^{v_n}_{\\alpha_n}[\\dKt, \\dKo](w) \\ : \\ w \\in W(\\Omega_1, K_1) \\}\n \\end{equation*}\n has a converging subsequence $w_{n_k} \\rarr \\tilde{w}$ as $k \\to \\infty$, and the limit $\\tilde{w}$ is always an $\\mathcal{R}$-minimizing solution. \n In addition, $\\mathcal{R}(w_{n_k}) \\rarr \\mathcal{R}(\\tilde{w})$.\n\t\n Moreover, if $w^\\dagger$ is unique it follows that $w_n \\rarr w^\\dagger$ and $\\mathcal{R}(w_{n}) \\rarr \\mathcal{R}(w^\\dagger)$.\n\\end{thm}\n\n\\begin{proof}\n We write shortly $\\mebr$ for $\\mebr_{[\\dKt]}$.\n Taking into account that $w_n \\in \\argmin \\{ \\F^{v_n}_{\\alpha_n}[\\dKt, \\dKo](w) \\ : \\ w \\in W(\\Omega_1, K_1) \\}$ it follows that\n \\begin{gather*}\n \\mebr[\\op[w_n]][v_n]^{p_2} \\leq \\F<\\alpha_n>(w_n) \\leq \\F<\\alpha_n>(w^\\dagger) = \n \\mebr[v^0][v_n]^{p_2} + \\alpha_n \\mathcal{R}(w^\\dagger) \\leq \\delta_n^{p_2} + \\alpha_n \\mathcal{R}(w^\\dagger) \\rarr 0,\n \\end{gather*}\n yielding $\\mebr[\\op[w_n]][v_n] \\rarr 0$ as $n \\rarr \\infty$.\n The triangle inequality gives $\\mebr[\\op[w_n]][v^0] \\leq \\mebr[\\op[w_n]][v_n] + \\mebr[v_n][v^0] \\rarr 0$ as \n $n \\rarr \\infty$ and\n \\autoref{re:tricks} ensures \n $\\norm{F(w_n) - v^0}_{\\Lp<2>[][\\mathds{R}^{M_2}]} \\leq \\mebr[\\op[w_n]][v^0] \\rarr 0$ as \n $n \\rarr \\infty$, so that \n \\begin{gather}\\label{eq:Convergence_of_operator}\n \\op[w_n] \\rarr v^0 \\text{ in } L^{p_2}(\\Omega_2, \\mathds{R}^{M_2}).\n \\end{gather} \n Since\n \\begin{gather*}\n \\mathcal{R}(w_n) \\leq \\frac{1}{\\alpha_n} \\F<\\alpha_n>(w_n) \\leq \\frac{1}{\\alpha_n} \\F<\\alpha_n>(w^\\dagger) \n = \\frac{1}{\\alpha_n}\\big( \\mebr[v^0][v_n]^{p_2} + \\alpha_n \\mathcal{R}(w^\\dagger) \\big) \\leq \\frac{\\delta_n^{p_2}}{\\alpha_n} + \\mathcal{R}(w^\\dagger),\n \\end{gather*}\n we also get \n \\begin{gather}\\label{eq:Regularizer_values_bounded}\n \\limsup_{n \\rarr \\infty} \\mathcal{R}(w_n) \\leq \\mathcal{R}(w^\\dagger).\n \\end{gather}\n Set $\\alpha_{\\mathrm{max}} \\vcentcolon= \\max\\{\\alpha_n : n \\in \\mathds{N}\\}$. \n Since \n \\begin{gather*}\n \\limsup_{ n \\rarr \\infty } \\F<\\alpha_n>(w_{n}) \\leq \n \\limsup_{n \\rarr \\infty } \\big( \\mebr[\\op[w_{n}]][v^0]^{p_2} + \\alpha_{\\mathrm{max}} \\mathcal{R}(w_{n}) \\big) \\leq \\alpha_{\\mathrm{max}} \\mathcal{R}(w^\\dagger)\n \\end{gather*}\n the sequence $\\F<\\alpha_{\\mathrm{max}}>(w_{n})$ is bounded. From \\autoref{as:Setting}, item \\eqref{itm: A} it follows that there exists \n a converging subsequence $(w_{n_k})_{k \\in \\mathds{N}}$ of $\\seq{w}$. The limit of $(w_{n_k})_{k \\in \\mathds{N}}$ is denoted by \n $\\tilde{w}$. 
Then, from \\autoref{lem:Wsp_weakly_seq_closed_etc} it follows that $\\tilde{w} \\in W(\\Omega_1, K_1)$.\n Since the operator $\\op$ is sequentially continuous it follows that $\\op[w_{n_k}] \\rarr \\op[\\tilde{w}]$ in $L^{p_2}(\\Omega_2, K_2)$.\t\n This shows that actually $\\op[\\tilde{w}] = v^0$ since \\autoref{eq:Convergence_of_operator} is valid.\n Then, from \\autoref{thm:F_and_its_summands_are_seq_weakly_closed} it follows that the functional \n $\\mathcal{R}: W(\\Omega_1, K_1) \\rarr [0, +\\infty]$ is sequentially lower semi-continuous,\n\tso that $\\mathcal{R}(\\tilde{w}) \\leq \\liminf_{k \\rarr \\infty} \\mathcal{R}(w_{n_k})$.\n\tCombining this with \\autoref{eq:Regularizer_values_bounded} we also obtain \n\t$$ \\mathcal{R}(\\tilde{w}) \\leq \\liminf_{k \\rarr \\infty} \\mathcal{R}(w_{n_k}) \\leq \\limsup_{k \\rarr \\infty} \\mathcal{R}(w_{n_k}) \\leq \\mathcal{R}(w^\\dagger) \\leq \\mathcal{R}(\\tilde{w}),$$\n\tusing the definition of $w^\\dagger$. \n\tThis, together with the fact that $\\op[\\tilde{w}] = v^0$ we see that $\\tilde{w}$ is an $\\mathcal{R}$-minimizing solution and that \n\t$\\lim_{k \\rarr \\infty} \\mathcal{R}(w_{n_k})= \\mathcal{R}(\\tilde{w})$. \n\t\n\tNow assume that the solution fulfilling \\autoref{eq: R_minimizing_solution} is unique; we call it $w^\\dagger$. \n\tIn order to prove that $w_n \\rarr w^\\dagger$ it is sufficient to show that any subsequence\n\thas a further subsequence converging to $w^\\dagger$, cf.\n\t\\cite[Lemma 8.2]{SchGraGroHalLen09}.\n\tHence, denote by $(w_{n_k})_{k \\in \\mathds{N}}$ an arbitrary subsequence of $(w_n)$, the sequence of minimizers.\n\tLike before we can show that $\\F<\\alpha>(w_{n_k})$ is bounded and we can extract a converging subsequence \n\t$(w_{n_{k_l}})_{l \\in \\mathds{N}}$. The limit of this subsequence is $w^\\dagger$ since it is the unique solution fulfilling \n\t\\autoref{eq: R_minimizing_solution}, showing that $w_n \\rarr w^\\dagger$. Moreover, $w^\\dagger \\in W(\\Omega_1, K_1)$. \n\tFollowing the arguments above we obtain as well $\\lim_{n \\rarr \\infty} \\mathcal{R}(w_{n})= \\mathcal{R}(w^\\dagger).$ \n\\end{proof}\n\\begin{remark}\n\\autoref{thm:Stability} guarantees that the minimizers of $\\F<\\alpha>[\\dKt, \\dKo]$ depend continuously on $v^\\delta$ while \n\\autoref{thm: convergence} ensures that they converge to a solution of $\\op(w) = v^0$, $v^0$ the exact data, while $\\alpha$ tends to zero. \n\\end{remark}\n\n\\section{Discussion of the Results and Conjectures}\nIn this section we summarize some open problems related to double integral \nexpressions\nof functions with values on manifolds.\n\n\\subsection{Relation to single integral representations} \nIn the following we show for one particular case of functions that have values in a manifold, that the double \nintegral formulation $\\mathcal{R}_{[\\dKo]}$, defined in \\autoref{eq:d3}, approximates a single energy integral. The basic \ningredient for this derivation is the exponential map related to the metric $d_1$ on the manifold.\nIn the following we investigate manifold--valued functions $w \\in W^{1,2}(\\Omega, \\mathcal{M})$, where we \nconsider $\\mathcal{M} \\subseteq \\mathds{R}^{M \\times 1}$ to be a connected, complete Riemannian manifold. \nIn this case some of the regularization functionals $\\mathcal{R}_{[\\dKo]}$, defined in \\autoref{eq:d3}, can be \nconsidered as approximations of \\emph{single} integrals. In particular we aim to generalize \\autoref{eq:double_integral} \nin the case $p=2$. 

We have that
\begin{equation*}
 \nabla w = \begin{bmatrix}
 \frac{\partial w_1}{\partial x_1} & \cdots & \frac{\partial w_1}{\partial x_N} \\
 \vdots & \ddots & \vdots\\
 \frac{\partial w_M}{\partial x_1} & \cdots & \frac{\partial w_M}{\partial x_N}
\end{bmatrix} \in \mathds{R}^{M \times N}.
\end{equation*}
In the following we write $\mathcal{R}_{[\dKo],\varepsilon}$ instead of $\tfrac12 \mathcal{R}_{[\dKo]}$ to stress the dependence on $\varepsilon$, in contrast to the notation above; the factor $\frac{1}{2}$ is included for computational convenience.
Moreover, let
$\hat{\rho} : \mathds{R}_+ \to \mathds{R}_+$ be in $C_c^\infty(\mathds{R}_+, \mathds{R}_+)$ and satisfy
\begin{equation*}
\abs{\mathbb{S}^{N-1}}\int_0^\infty \hat{t}^{N-1} \hat{\rho}\left(\hat{t}\right)\mathrm{d} \hat{t} = 1\;.
\end{equation*}
Then for every $\varepsilon > 0$
\begin{equation*}
x \in \mathds{R}^N \mapsto \rho_\varepsilon(x)\vcentcolon= \frac{1}{\varepsilon^N} \hat{\rho}\left(\frac{\normN[x]}{\varepsilon}\right)
\end{equation*}
is a mollifier, cf. \autoref{ex:mol}. \\
$\mathcal{R}_{[\dKo],\varepsilon}$
(with $p_1=2$) then reads as follows:
\begin{equation}
\label{eq:di_II}
 \mathcal{R}_{[\dKo],\varepsilon}(w)
 \vcentcolon=
 \frac{1}{2}\int\limits_{\Omega\times \Omega} \frac{d_1^2(w(x),w(y))}{\normN[x-y]^2} \rho_\varepsilon(x-y) \,\mathrm{d}(x,y)\,.
\end{equation}
Substituting spherical coordinates $y = x - t \theta \in \mathds{R}^{N \times 1}$ with
$\theta \in \mathbb{S}^{N-1} \subseteq \mathds{R}^{N \times 1}$, $t \geq 0$, gives
\begin{equation}
\label{eq:reg2}
\begin{aligned}
\lim_{\varepsilon \searrow 0}
\mathcal{R}_{[\dKo],\varepsilon}(w) &=
\lim_{\varepsilon \searrow 0}
\frac{1}{\varepsilon^N}
 \int\limits_{\Omega} \int\limits_{\mathbb{S}^{N-1}}
 \int\limits_0^\infty \frac{1}{2} d_1^2(w(x),w(x-t \theta)) t^{N-3} \hat{\rho}\left(\frac{t}{\varepsilon}\right) \mathrm{d}t \,\mathrm{d}\theta \,\mathrm{d}x\;.
\end{aligned}
\end{equation}
Next we use that, for $m_1 \in \mathcal{M}$ fixed and $m_2 \in \mathcal{M}$ joined to $m_1$ by a unique minimizing geodesic (see for instance \cite{FigVil11}, where the concept of exponential mappings is explained),
\begin{equation}\label{eq:partial_II}
 \frac{1}{2} \partial_2 d_1^2(m_1,m_2) = - (\exp_{m_2})^{-1}(m_1) \in \mathds{R}^{M \times 1},
\end{equation}
where $\partial_2$ denotes the derivative of $d_1^2$ with respect to the second component.
By application of the chain rule we get
\begin{equation*}
 \begin{aligned}
 - \frac{1}{2} \nabla_y d_1^2(w(x),w(y)) &=
 \underbrace{(\nabla w(y))^T}_{\in \mathds{R}^{N \times M}} \underbrace{(\exp_{w(y)})^{-1}(w(x))}_{\in \mathds{R}^{M \times 1}}\in \mathds{R}^{N \times 1}\;,
 \end{aligned}
\end{equation*}
provided $w(x)$ and $w(y)$ are joined by a unique minimizing geodesic. This assumption is reasonable because we only consider the limit $\varepsilon \searrow 0$.
\nLet $\\cdot$ denote the scalar multiplication of two vectors in $\\mathds{R}^{N \\times 1}$, then the last equality shows that\n\\begin{equation*}\n \\begin{aligned}\n\\frac{1}{2} d_1^2(w(x),w(x-t \\theta)) \n&= - \\frac{1}{2} \\left[ d_1^2\\big(w(x),w( (x-t\\theta) + t \\theta )\\big) - d_1^2\\big(w(x),w(x-t \\theta)\\big) \\right] \\\\ \n&\\approx \\left( \\left(\\nabla w(x-t \\theta)\\right)^T (\\exp_{w(x-t \\theta)})^{-1}(w(x)) \\right) \\cdot t\\theta\\;. \n\\end{aligned}\n\\end{equation*}\nThus from \\autoref{eq:reg2} it follows that \n\\begin{equation}\n\\label{eq:reg3}\n\\begin{aligned}\n~ & \\lim_{\\varepsilon \\searrow 0} \n\\mathcal{R}_{[\\dKo],\\varepsilon}(w) \\\\\n\\approx & \n\\lim_{\\varepsilon \\searrow 0}\n \\frac{1}{\\varepsilon^N}\n \\int\\limits_{\\Omega} \\int\\limits_{\\mathbb{S}^{N-1}} \n \\int\\limits_0^\\infty \\left( \\left( \\nabla w(x-t \\theta)\\right)^T (\\exp_{w(x-t \\theta)})^{-1}(w(x)) \n \\right) \\cdot \n \\theta \\left(t^{N-2} \\hat{\\rho}\\left(\\frac{t}{\\varepsilon}\\right)\\right) \\mathrm{d}t \\,\\mathrm{d}\\theta \\,\\mathrm{d}x\\;.\n\\end{aligned}\n\\end{equation}\nNow we will use a Taylor series of power 0 for $ t\\mapsto \\nabla w(x-t \\theta)$ and of power 1 for $t \\mapsto (\\exp_{w(x-t \\theta)})^{-1}(w(x))$ to rewrite \\autoref{eq:reg3}.\nWe write \n\\begin{equation}\n F(w;x,t,\\theta) \\vcentcolon= (\\exp_{w(x-t \\theta)})^{-1}(w(x)) \\in \\mathds{R}^{M \\times 1}\n\\end{equation}\nand define \n\\begin{equation}\n \\dot{F}(w;x,\\theta) \\vcentcolon= \\lim_{t \\searrow 0} \\frac{1}{t} \\left((\\exp_{w(x-t \\theta)})^{-1}(w(x)) - \n \\underbrace{(\\exp_{w(x)})^{-1}(w(x))}_{=0} \n \\right) \\in \\mathds{R}^{M \\times 1}.\n\\end{equation}\nNote that because $(\\exp_{w(x)})^{-1}(w(x))$ vanishes, $\\dot{F}(w(x);\\theta)$ is the leading order term of the expansion of \n$(\\exp_{w(x-t \\theta)})^{-1}(w(x))$ with respect to $t$.\nMoreover, in the case that $\\nabla w(x) \\neq 0$ this is the leading order approximation of $\\nabla w(x-t \\theta)$. In summary we are calculating the leading order term of the expansion \nwith respect to $t$.\n\nThen from \\autoref{eq:reg3} it follows that\n\\begin{equation}\n\\label{eq:reg3a}\n\\lim_{\\varepsilon \\searrow 0}\n\\mathcal{R}_{[\\dKo],\\varepsilon}(w) \n\\approx\n\\lim_{\\varepsilon \\searrow 0}\n\\underbrace{\\frac{1}{\\varepsilon^N} \\int\\limits_0^\\infty t^{N-1} \\hat{\\rho}\\left(\\frac{t}{\\varepsilon}\\right) \\mathrm{d}t}_{= \\abs{\\mathbb{S}^{N-1}}^{-1}}\n \\int\\limits_{\\Omega} \\int\\limits_{\\mathbb{S}^{N-1}} \\left((\\nabla w(x))^T \\dot{F}(w;x,\\theta) \\right) \\cdot \\theta \\;\n \\mathrm{d}\\theta \\,\\mathrm{d}x\\;.\n\\end{equation}\nThe previous calculations show that the double integral simplifies to a double integral where the inner integration domain \nhas one dimension less than the original integral. 
Under certain assumption the integration domain can be further simplified:\n\n\\begin{example}\n If $d_1(x,y)=\\normM[x-y]$, $p_1=2$, then \n \\begin{equation*}\n \\dot{F}(w;x,\\theta) = \\lim_{t \\searrow 0} \\frac{1}{t} \\left(w(x) - w(x-t\\theta)\\right) = \\nabla w(x)\\theta \\in \\mathds{R}^{M \\times 1}.\n \\end{equation*}\n Thus from \\eqref{eq:reg3a} it follows that \n\\begin{equation}\n\\label{eq:reg3b}\n\\lim_{\\varepsilon \\searrow 0}\n\\mathcal{R}_{[\\dKo],\\varepsilon}(w) \n\\approx \\int\\limits_{\\Omega} \\underbrace{(\\nabla w(x))^T \\nabla w(x)}_{\\norm{\\nabla w(x)}^2_{\\mathds{R}^M}} \\,\\mathrm{d}x\\;.\n\\end{equation}\nThis is exactly the identity derived in \\citeauthor{BouBreMir01} \\cite{BouBreMir01}.\n\\end{example}\nFrom these considerations we can view $\\lim_{\\varepsilon \\searrow 0} \\mathcal{R}_{[\\dKo],\\varepsilon}$ as functionals, which generalize \nSobolev and $\\mathrm{BV}$ semi-norms to functions with values on manifolds.\n\n\\subsection{A conjecture on Sobolev semi-norms}\n\\label{ss:conjecture}\nStarting point for this conjecture is \\autoref{eq:d3}. We will write $\\Omega,M$ and $p$ instead of $\\Omega_1, M_1$ and $p_1$.\n\\begin{itemize}\n \\item In the case $l=0$, $k=N$, $0 0$ fixed (that is, we consider neither a standard Sobolev regularization nor the limiting \n case $\\varepsilon \\to 0$ as in \\cite{BouBreMir01}). In this case we have proven coercivity of the functional \n $\\F: \\Wsp[\\sphere] \\rarr [0,\\infty), \\ 0<\\alpha>[\\dS](w) \\vcentcolon= \\int\\limits_\\Omega \\dS^p(\\op[w](x), v^\\delta(x)) \\,\\mathrm{d}x + \\alpha \\mathcal{R}_{[\\dS]}(w),\n\\end{equation}\non $\\Wsp[\\sphere]$ and the lifted variant\n\\begin{equation} \\label{eq: functionalAlternative}\n\\FT<\\alpha>[\\dS](u) \\vcentcolon= \n\\int\\limits_\\Omega \\dS^p(\\op[\\Phi(u)](x), v^\\delta(x)) \\,\\mathrm{d}x + \\alpha \\mathcal{R}_{[\\dS]}^{\\Phi}(u) \n\\end{equation}\nover the space $\\Wsp[\\mathds{R}]$ (as in \\autoref{ss: spheredata}), where $\\Phi$ is defined as in \\eqref{eq:id_sphere_w}.\nNote that $\\FT = \\F \\circ \\Phi$.\n\n\n\\begin{lemma}\\label{lem: liftedFunctional}\nLet $\\emptyset \\neq \\Omega \\subset \\mathds{R}$ or $\\mathds{R}^2$ be a bounded and simply connected open set with Lipschitz boundary. \nLet $1 < p < \\infty$ and $s \\in (0,1)$. If $N=2$ assume that $sp < 1$ or $sp \\geq 2$. Moreover, let \\autoref{as:Setting} and \\autoref{ass:2} be satisfied. Then the mapping \n$\\FT<\\alpha>[\\dS]: W^{s,p}(\\Omega, \\mathds{R}) \\rarr [0,\\infty)$ attains a minimizer.\n\\end{lemma}\n\\begin{proof}\n\tLet $u \\in \\Wsp[\\mathds{R}]$. Then by Lemma \\ref{lem:2} we have that $w \\vcentcolon= \\Phi(u) \\in \\Wsp[\\sphere]$.\n\tAs arguing as in the proof of Lemma \\ref{lem: liftedRegularizer} we see that $\\FT<\\alpha>[\\dS](u) < \\infty$. \\\\\n\tSince we assume that \\autoref{as:Setting} is satisfied we get that $\\F<\\alpha>[\\dS](w)$ attains a minimizer $w^* \\in \\Wsp[\\sphere]$.\n\tIt follows from \\autoref{lem: lifting} that there exists a function $u^* \\in W^{s,p}(\\Omega, \\mathds{R})$ that can be lifted to $w^*$, i.e. $w^* = \\Phi(u^*)$.\n\tThen $u^*$ is a minimizer of \\eqref{eq: functionalAlternative} \n\tby definition of $\\FT$ and $\\Phi$.\n\\end{proof}\t\n\n\n\n\n\\subsection{Numerical minimization}\nIn our concrete examples we will consider two different operators $\\op$. \nFor numerical minimization we consider the functional from \\autoref{eq: functionalAlternative} \nin a discretized setting. 
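To fix ideas before describing the discretization, the following sketch evaluates a Riemann-sum approximation of the regularizing term of \autoref{eq: functionalAlternative} for a one-dimensional signal $u:\Omega=(0,1) \rarr [0,2\pi)$ representing angles on $\sphere$. It is only an illustration: the uniform grid, the particular compactly supported bump $\hat\rho$ and all function names are choices made here for exposition and do not coincide with the implementation used for the experiments reported below.
\begin{verbatim}
import numpy as np

def dS(a, b):
    # geodesic (arc-length) distance on the circle for angles a, b
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def regularizer(u, x, s, p, eps):
    # Riemann-sum approximation of
    #   int int dS^p(w(x), w(y)) / |x - y|^(1 + p*s) * rho_eps(x - y) d(x, y)
    # on a uniform grid x; w = Phi(u) is represented by the angles u.
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    diff = np.abs(X - Y)
    mask = (diff > 0) & (diff < eps)                  # support of the mollifier
    rho = np.zeros_like(diff)
    t = diff[mask] / eps
    rho[mask] = np.exp(-1.0 / (1.0 - t ** 2)) / eps   # unnormalised bump rho_eps
    D = dS(u[:, None], u[None, :])
    out = np.zeros_like(diff)
    out[mask] = D[mask] ** p / diff[mask] ** (1 + p * s) * rho[mask]
    return out.sum() * h * h

# toy usage on a noisy cyclic signal sampled at 100 points
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = ((4 * np.pi * x) % (2 * np.pi) + 0.1 * np.random.randn(100)) % (2 * np.pi)
print(regularizer(u, x, s=0.1, p=1.1, eps=0.05))
\end{verbatim}
Minimizing the sum of such a term and a discretized data fidelity term, for instance by gradient descent, is in essence the strategy followed in the examples below.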
For this purpose we approximate the functions $u \\in W^{s, p}(\\Omega,\\mathds{R})$, $0<\\alpha>[\\dS]: \\Wsp[\\mathds{R}] \\rarr [0,\\infty)$ \nattains a minimizer $u \\in W^{s, p}(\\Omega,\\mathds{R})$. \n\nIn the examples we will just consider the continuous approximation again denoted by $u$.\n\n\n\\subsection*{One dimensional test case}\nLet $\\Omega = (0,1)$ and consider the signal \n$u:\\Omega \\rarr [0,2\\pi)$ representing the angle of a cyclic signal. \\\\\nFor the discrete approximation shown in \\autoref{sfig:signal1-a} the domain $\\Omega$ is sampled equally at 100 points. \n$u$ is affected by an additive white Gaussian noise with $\\sigma = 0.1$ to obtain the noisy signal which is colored in blue in \\autoref{sfig:signal1-a}.\n \nIn this experiment we show the influence of the parameters $s$ and $p$.\nIn all cases the choice of the regularization parameter $\\alpha$ is 0.19 and $\\varepsilon = 0.01$.\\\\\nThe red signal in \\autoref{sfig:signal1-b} is obtained by choosing \n$s = 0.1$ and $p = 1.1$. \nWe see that the periodicity of the signal is handled correctly and that there is nearly no staircasing.\nIn \\autoref{sfig:signal1-c} the parameter $s$ is changed from $0.1$ to $0.6$. The value of the parameter $p$ stays fixed. \nIncreasing of $s$ leads the signal to be more smooth. We can observe an even stronger similar effect when increasing $p$ \n(here from $1.1$ to $2$) and letting $s$ fixed, see \\autoref{sfig:signal1-d}. This fits the expectation since $s$ only \nappears once in the denominator of the regularizer. At a jump increasing of $s$ leads thus to an increasing of the \nregularization term. The parameter $p$ appears twice in the regularizer. Huge jumps are hence weighted even more.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Signal_origAndNoisy_s01_reg019_p11_alpha001_eps001.jpg}\n\t\t\\caption{Original and noisy data}\n\t\t\\label{sfig:signal1-a}\n\t\\end{subfigure}\n \\begin{subfigure}[h]{0.35\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/Signal_denoised_s01_reg019_p11_alpha001_eps001.jpg}\n \t\\caption{Denoised data}\n \t\\label{sfig:signal1-b}\n \\end{subfigure}\t\n\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Signal_denoised_s06_reg019_p11_alpha001_eps001.jpg}\n\t\t\\caption{Increasing of $s$}\n\t\t\\label{sfig:signal1-c}\n\t\\end{subfigure}\n \\begin{subfigure}[h]{0.35\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/Signal_denoised_s01_reg019_p2_alpha001_eps001.jpg}\n \t\\caption{Increasing of $p$}\n \t\\label{sfig:signal1-d}\n \\end{subfigure}\n\t\n\\caption{Function on $\\sphere$ represented in $[0,2\\pi)$: Left to right, top to bottom: Original data (black) and noisy data (blue) \nwith 100 data points. Denoised data (red) where we chose $s=0.1, p=1.1, \\alpha = 0.19$. Denoised data with \n$s=0.6, p=1.1, \\alpha = 0.19$ resp. $s=0.1, p=2, \\alpha=0.19$. }\n\\label{fig:signal1}\n\\end{figure}\n\nIn \\autoref{sfig:signal2-a} we considered a simple signal with a single huge jump. Again it is described by the angular value. \nWe proceeded as above to obtain the approximated discrete original data (black) and noisy signal with $\\sigma = 0.1$ (blue). We \nchose again $\\varepsilon = 0.01$. \\\\ \nAs we have seen above increasing of $s$ leads to a more smooth signal. \nThis effect can be compensated by choosing a rather small value of $p$, i.e. $p \\approx 1$. 
In \\autoref{sfig:signal2-b} the value of $s$ is $0.9$. We see that it is still possible to reconstruct jumps by choosing e.g. $p=1.01$. \\\\\nMoreover, we have seen that increasing of $p$ leads to an even more smooth signal. In \\autoref{sfig:signal2-c} we choose a quite large value of $p$, \n$p=2$ and a \nrather small value of $s$, $s = 0.001$. Even for this very simple signal is was not possible to get sharp edges. This is due to the fact that the parameter $p$ (but not $s$) additionally weights the height of jumps in the regularizing term. \n\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\\includegraphics[width=1\\linewidth]{Images\/Signal_origAndNoisy_s01_reg045_p101_alpha001.jpg}\n\t\\caption{Original and noisy data}\n\t\\label{sfig:signal2-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Signal_denoised_s09_reg003_p101_alpha001.jpg}\n\t\t\\caption{$s = 0.9, \\ p = 1.01$}\n\t\t\\label{sfig:signal2-b}\n\t\\end{subfigure}\n \\begin{subfigure}[h]{0.3\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/Signal_denoised_s0001_reg09_p2_alpha001.jpg}\n \t\\caption{$s = 0.001, \\ p = 2$}\n \t\\label{sfig:signal2-c}\n \\end{subfigure}\n \\caption{Left to right: Original data (black) and noisy data (blue) sampled at 100 data points. Denoised data (red) where we chose $s=0.9, p=1.01, \\alpha = 0.03$. Denoised data with $s=0.001, p=2, \\alpha = 0.9$.}\n\t\\label{fig:signal2}\n\\end{figure} \n\n\n\\subsection*{Denoising of a $\\sphere$-Valued Image}\nOur next example concerned a two-dimensional $\\sphere$-valued image represented by the corresponding angular values.\nWe remark that in this case where $N=2$ the existence of such a representation is always guaranteed in the cases where \n$sp < 1$ or $sp \\geq 2$, see \\autoref{lem: lifting}. \n\nThe domain $\\Omega$ is sampled into $60 \\times 60$ data points and can be considered as discrete grid, \n$\\{1, \\dots,60\\} \\times \\{1, \\dots,60\\} $. \nThe B-Spline approximation evaluated at that grid is given by\n\\begin{equation*}\nu(i,j) = u(i,0) \\vcentcolon= 4\\pi \\frac{i}{60} \\bmod 2\\pi, \\quad i,j \\in \\{1, \\dots,60\\}.\n\\end{equation*}\n\n\\begin{figure} \n\t\\centering\n\t{\\includegraphics[width=6cm]{Images\/rainbow_3dPlot.jpg}}\n\t\\caption{The function $u$ evaluated on the discrete grid.}\n\t\\label{fig: Rainbow_function}\n\\end{figure}\nThe function $u$ is shown in \\autoref{fig: Rainbow_function}. We used the $\\mathrm{hsv}$ colormap \nprovided in $\\mathrm{MATLAB}$ transferred to the interval $[0, 2\\pi]$. \n\nThis experiment shows the difference of our regularizer respecting the periodicity of the data in contrast to the classical \nTotal Variation regularizer. The classical TV-minimization is solved using a fixed point iteration (\\cite{LoeMag}); for the method see also \\cite{VogOma96}.\n\nIn \\autoref{sfig:rainbow-a} the function $u$ can be seen from the top, i.e. the axes correspond to the \n$i$ resp. $j$ axis in \\autoref{fig: Rainbow_function}.\nThe noisy data is obtained by adding white Gaussian noise with $\\sigma = \\sqrt{0.001}$ using the built-in function \n$\\mathtt{imnoise}$ in $\\mathrm{MATLAB}$. It is shown in \\autoref{sfig:rainbow-b}.\nWe choose as parameters $s=0.9, \\ p=1.1, \\ \\alpha = 1,$ and $\\varepsilon = 0.01$. \nWe observe significant noise reduction in both cases. However, only in \\autoref{sfig:rainbow-d} the color transitions are handled \ncorrectly. 
This is due to the fact, that our regularizer respects the periodicity, i.e. for the functional there is no jump in \n\\autoref{fig: Rainbow_function} since 0 and $2\\pi$ are identified. Using the classical TV regularizer the values 0 and $2\\pi$ are not \nidentified and have a distance of $2\\pi$. Hence, in the TV-denoised image there is a sharp edge in the middle of the image, see \n\\autoref{sfig:rainbow-c}. \\\\\n\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Image_Original.jpg}\n\t\t\\caption{Original data}\n\t\t\\label{sfig:rainbow-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/rainbow_noisy_s09_p11_reg1_alpha02_eps001_400steps.jpg}\n\t\t\\caption{Noisy data}\n\t\t\\label{sfig:rainbow-b}\n\t\\end{subfigure}\t\n\t\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/rainbow_TVdenoised_s09_p11_reg1_alpha02_eps001_400steps.jpg}\n\t\t\\caption{TV-denoised data}\n\t\t\\label{sfig:rainbow-c}\n\t\\end{subfigure}\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/rainbow_denoised_s09_p11_reg1_alpha02_eps001_400steps.jpg}\n\t\t\\caption{Denoised data}\n\t\t\\label{sfig:rainbow-d}\n\t\\end{subfigure}\n\t\n\t\\caption{Left to right, top to bottom: Original and noisy data of an $60 \\times 60$ image. TV-denoised data using a fixed point iteration method. Denoised data where we chose $s=0.9, p=1.1, \\alpha = 1$, 400 steps. }\n\t\\label{fig:rainbow}\n\\end{figure}\n\n\n\\subsection*{Hue Denoising}\n\nThe $\\mathrm{HSV}$ color space is shorthand for Hue, Saturation, Value (of brightness). \nThe hue value of a color image is $\\sphere$-valued, while saturation and value of brightness are real-valued. \nRepresenting colors in this space better match the human perception than representing colors in \nthe RGB space. \n\nIn \\autoref{sfig:fruits-a} we see a part of size $70 \\times 70$ of the RGB image ``fruits'' \n(\\url{https:\/\/homepages.cae.wisc.edu\/~ece533\/images\/}).\n \nThe corresponding hue data is shown in \\autoref{sfig:fruits-b}, where we used again the colormap hsv, cf. \\autoref{fig: Rainbow_function}. Each pixel-value lies, after transformation, in the interval $[0, 2\\pi)$ and represents the angular value. Gaussian white noise with $\\sigma = \\sqrt{0.001}$ is added to obtain a noisy image, see \\autoref{sfig:fruits-c}.\\\\\nTo obtain the denoised image \\autoref{sfig:fruits-d} we again used the same fixed point iteration, cf. \\cite{LoeMag}, as before. \n\nWe see that the denoised image suffers from artifacts due to the non-consideration of periodicity. The pixel-values in the middle of the apple (the red object in the original image) are close to $2\\pi$ while those close to the border are nearly 0, meaning they have a distance of around $2\\pi$. \\\\\nWe use this TV-denoised image as starting image to perform the minimization of our energy functional. As parameters we choose \n$s = 0.49, \\ p = 2, \\ \\alpha = 2, \\ \\varepsilon = 0.006$. \n\nSince the cyclic structure is respected the disturbing artifacts in image \\autoref{sfig:fruits-d} are removed correctly. 
The edges are smoothed due to the high value of $p$, see \\autoref{sfig:fruits-e}.\n\n\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Fruits_OrigRGB.jpg}\n\t\t\\caption{Original RGB image \\newline \\qquad \\newline}\n\t\t\\label{sfig:fruits-a}\n\t\\end{subfigure}\n \\hspace{1em}\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Fruits_Orig.jpg}\n\t\t\\caption{Hue component represented in color, which represent function values on $\\sphere$}\n\t\t\\label{sfig:fruits-b}\n\t\\end{subfigure}\t\n \\hspace{1em}\t\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Fruits_Noisy_s049.jpg}\n\t\t\\caption{Noisy hue value - again representing function values on $\\sphere$ \\newline}\n\t\t\\label{sfig:fruits-c}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Fruits_TVstart_s049.jpg}\n\t\t\\caption{TV-denoised data}\n\t\t\\label{sfig:fruits-d}\n\t\\end{subfigure}\n \\hspace{1em}\n \\begin{subfigure}[h]{0.3\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/Fruits_s49_reg2_p2_500steps_tau02_eps_0006_TVstart.jpg}\n \t\\caption{Denoised data}\n \t\\label{sfig:fruits-e}\n \\end{subfigure}\n\t\n\t\\caption{Left to right, top to bottom: Original RGB image and its Hue component. Noisy Hue data with $\\sigma^2 = 0.001$. TV minimization is done using an iterative approach. It is serving as starting point for the GD minimization. Denoised data with $s=0.49, p=2, \\alpha = 2$, 500 steps. }\n\t\\label{fig:fruits}\n\\end{figure}\n\n\\subsection{$\\sphere$-Valued Image Inpainting}\nIn this case the operator $\\op: \\Wsp[\\sphere] \\rarr \\Lp[\\sphere]$ is the inpainting operator, i.e. \n \\begin{equation*}\n \\op(w) = \\chi_{\\Omega \\backslash D} (w),\n \\end{equation*} \nwhere $D \\subseteq \\Omega$ is the area to be inpainted. \n\nWe consider the functional \n\\begin{equation*}\n\\F<\\alpha>[\\dS](w) \\vcentcolon= \\int\\limits_{\\Omega \\setminus D} \\dS^p(w(x), v^\\delta(x)) \\,\\mathrm{d}x + \n\\alpha \\int\\limits_{\\Omega\\times \\Omega} \\frac{\\dS^p(w(x), w(y))}{\\norm{x-y}_{\\mathds{R}^2}^{2+ps}} \\rho_{\\varepsilon}(x-y) \\,\\mathrm{d}(x,y),\n\\end{equation*}\non $\\Wsp[\\sphere]$.\n\nAccording to \\autoref{ex:in} the functional $\\F$ is coercive and \n\\autoref{as:Setting} is satisfied.\nFor $\\emptyset \\neq \\Omega \\subset \\mathds{R}$ or $\\mathds{R}^2$ a bounded and simply connected open set, $1 < p < \\infty$ and $s \\in (0,1)$ such that additionally $sp < 1$ or $sp \\geq 2$ if $N=2$ \\autoref{lem: liftedFunctional} applies which ensures that there exists a minimizer $u \\in W^{s, p}(\\Omega,\\mathds{R})$ of the lifted functional \n$\\FT<\\alpha>[\\dS]: \\Wsp[\\mathds{R}] \\rarr [0,\\infty)$ \n$u \\in W^{s, p}(\\Omega,\\mathds{R})$\n\n\\subsection*{Inpainting of a $\\sphere$-Valued Image}\n\nAs a first inpainting test-example we consider two $\\sphere$-valued images of size $28 \\times 28$, \nsee \\autoref{fig:blocks}, represented by its angular values. \nIn both cases the ground truth can be seen in \\autoref{sfig:blocks-a} and \\autoref{sfig:blocks2-a}. \nWe added Gaussian white noise with $\\sigma = \\sqrt{0.001}$ using the $\\mathrm{MATLAB}$ build-in function \n$\\mathtt{imnoise}$. The noisy images can be seen in \\autoref{sfig:blocks-b} and \\autoref{sfig:blocks2-b}. 
\nThe region $D$ consists of the nine red squares in \\autoref{sfig:blocks-c} and \\autoref{sfig:blocks2-c}.\n\nThe reconstructed data are shown in \\autoref{sfig:blocks-d} and \\autoref{sfig:blocks2-d}. \\\\\nFor the two-colored image we used as parameters $\\alpha = s = 0.3$, $p = 1.01$ and $\\varepsilon = 0.05$.\nWe see that the reconstructed edge appears sharp. \nThe unknown squares, which are completely surrounded by one color are inpainted perfectly.\nThe blue and green color changed slightly.\n\nAs parameters for the three-colored image we used $\\alpha = s = 0.4$, $p=1.01$ and $\\varepsilon = 0.05$. \nHere again the unknown regions lying entirely in one color are inpainted perfectly. The edges are preserved. \nJust the corner in the middle of the image is slightly smoothed. \\\\\nIn \\autoref{sfig:blocks-e} and \\autoref{sfig:blocks2-e} the TV-reconstructed data \nis shown. The underlying algorithm (\\cite{Get}) uses the split Bregman method (see \\cite{GolSta09}).\n\nIn \\autoref{sfig:blocks-e} the edge is not completely sharp. There are some lighter parts on the blue side. \nThis can be caused by the fact that the unknown domain in this area is not exactly symmetric with respect to the edge. \nThis is also the case in \\autoref{sfig:blocks2-e} where we observe the same effect. Unknown squares lying entirely \nin one color are perfectly inpainted. \n\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/2blocks_Orig.jpg}\n\t\t\\caption{Original image}\n\t\t\\label{sfig:blocks-a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/2blocks_Noise.jpg}\n\t\t\\caption{Noisy image}\n\t\t\\label{sfig:blocks-b}\n\t\\end{subfigure}\t\t\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/2blocks_NoiseMask.jpg}\n\t\t\\caption{Noisy masked image}\n\t\t\\label{sfig:blocks-c}\n\t\\end{subfigure}\n\t\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/2blocks_Denoised_s0001_regParam07_p101_tau03_eps005_1200steps.jpg}\n\t\t\\caption{Reconstructed image}\n\t\t\\label{sfig:blocks-d}\n\t\\end{subfigure}\n\t\\begin{subfigure}[h]{0.3\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/2blocks_TV_param8.jpg}\n\t\t\\caption{TV-reconstructed image}\n\t\t\\label{sfig:blocks-e}\n\t\\end{subfigure}\n\n \\begin{subfigure}[h]{0.3\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/3blocks_Orig.jpg}\n \t\\caption{Original image}\n \t\\label{sfig:blocks2-a}\n \\end{subfigure}\n \\begin{subfigure}[h]{0.3\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/3blocks_Noise.jpg}\n \t\\caption{Noisy image}\n \t\\label{sfig:blocks2-b}\n \\end{subfigure}\t\t\n \\begin{subfigure}[h]{0.3\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/3blocks_NoiseMask.jpg}\n \t\\caption{Noisy masked image}\n \t\\label{sfig:blocks2-c}\n \\end{subfigure}\n \n \\begin{subfigure}[h]{0.3\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/3blocks_Denoised_s001_regParam08_p1001_tau04_eps005_1200steps.jpg}\n \t\\caption{Reconstructed image}\n \t\\label{sfig:blocks2-d}\n \\end{subfigure}\n \\begin{subfigure}[h]{0.3\\linewidth}\n \t\\includegraphics[width=1\\linewidth]{Images\/3blocks_TV_param8.jpg}\n \t\\caption{TV-reconstructed image}\n \t\\label{sfig:blocks2-e}\n \\end{subfigure}\n\t\n\t\\caption{Left to right. 
Top to bottom: Original image and the noisy data with $\sigma^2 = 0.001$. Noisy image with masking filter and denoised data with $s=0.3, p=1.01, \alpha = 0.3$, 6000 steps. TV-reconstructed data.\\\\\n Original image and the noisy data with $\sigma^2 = 0.001$. Noisy image with masking filter and denoised data with $s=0.4, p=1.01, \alpha = 0.4$, 10000 steps. TV-reconstructed image.}\n\t\\label{fig:blocks}\n\\end{figure}\n\n\\subsection*{Hue Inpainting}\n\nAs a last example we consider again the Hue component of the image ``fruits'', see \\autoref{sfig:fruits2-a}. \nThe unknown region $D$ is the string $\\mathit{01.01}$ which is shown in \\autoref{sfig:fruits2-b}. \nAs parameters we choose $p=1.1$, $s=0.1$, $\\alpha= 2$ and $\\varepsilon = 0.006$. We get the reconstructed image shown in \n\\autoref{sfig:fruits2-c}. The edges are preserved and the unknown area is restored quite well. This can also be observed in the \nTV-reconstructed image, \\autoref{sfig:fruits2-d}, obtained with the split Bregman method as before, cf. \\cite{Get}.\n\n\\begin{figure}[!h]\n\t\\centering\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Fruits_Orig.jpg}\n\t\t\\caption{Hue component}\n\t\t\\label{sfig:fruits2-a}\n\t\\end{subfigure}\t\t\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/Inpainting_fruits_date_Orig.jpg}\n\t\t\\caption{Image with masked region}\n\t\t\\label{sfig:fruits2-b}\n\t\\end{subfigure}\n\t\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/fruits_inpainting_1234_p11_regParam2_s01_tau02_eps0006_2000steps.jpg}\n\t\t\\caption{Reconstructed image}\n\t\t\\label{sfig:fruits2-c}\n\t\\end{subfigure}\n\t\\begin{subfigure}[h]{0.35\\linewidth}\n\t\t\\includegraphics[width=1\\linewidth]{Images\/fruits_TVinpainting_lambda100.jpg}\n\t\t\\caption{TV-reconstructed image}\n\t\t\\label{sfig:fruits2-d}\n\t\\end{subfigure}\n\t\n\t\\caption{Left to right, top to bottom: Original image and image with masked region. Reconstructed image with parameters $p=1.1, \\ \n\ts=0.1, \\ \\alpha= 2$ and $\\varepsilon = 0.006$, 2000 steps. TV-reconstructed image.}\n\t\\label{fig:fruits2}\n\\end{figure}\n\n\\subsection{Conclusion}\nIn this paper we developed a functional for the regularization of functions with values in a set of vectors. The regularization functional \nis a derivative-free, nonlocal term, which is based on a characterization of Sobolev spaces of \n\\emph{intensity data} derived by Bourgain, Brézis, Mironescu \\& Dávila. Our objective has been \nto extend their double integral functionals in a natural way to functions with values in a set of vectors, in particular functions with \nvalues on an embedded manifold. These new integral representations are used for regularization on a subset \nof the (fractional) Sobolev space $W^{s,p}(\\Omega, \\mathds{R}^M)$ and the space $BV(\\Omega, \\mathds{R}^M)$, respectively. \nWe presented numerical results for denoising of artificial InSAR data as well as examples of inpainting. \nMoreover, we formulated several conjectures on the relation between double metric integral regularization \nfunctionals and single integral representations.\n\n\\subsection*{Acknowledgements}\nWe thank Peter Elbau for very helpful discussions and comments. MH and OS acknowledge support from \nthe Austrian Science Fund (FWF) within the national research network Geometry and Simulation, project S11704 \n(Variational Methods for Imaging on Manifolds).
Moreover, OS is supported by the Austrian Science Fund (FWF), \nwith SFB F68, project F6807-N36 (Tomography with Uncertainties) and I3661-N27 (Novel Error Measures and Source \nConditions of Regularization Methods for Inverse Problems).\n\n\n\n\\section*{References}\n\\renewcommand{\\i}{\\ii}\n\\printbibliography[heading=none]\n\n\n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Overview}\nAbout DCDP:\n\\begin{enumerate}\n \\item DCDP works well with a relatively large parameter $\\gamma = C_{\\gamma}K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$.\n \\item Local refinement works well both in theory and in practice, though it is not quite ``novel\". Is it good to go? Or should we think about something else?\n \\item Next step: consider generalizing to nonlinear models like logistic regression.\n \\item Piece-wise linear signals?\n\\end{enumerate}\n\nAbout CPD for logistic regression and the Bradley-Terry model:\n\\begin{enumerate}\n \\item Given that the WBS based on loglikelihood performs well in practice, should we try proving some theory for it?\n\\end{enumerate}\n\n\nimprovement in the computation\n\ncompare with others\n\n\\section{Introduction}\n\\label{sec:introduction}\nChange point analysis is a well-established topic in statistics that is concerned with identifying abrupt changes in the data, typically observed as a time series, that are due to structural changes in the underlying distribution. Initially introduced in the 1940s \\citep{wald1945, page1954}, change point analysis has been the subject of a rich statistical literature and has produced a host of well-established methods for statistical inference. Despite their popularity, most existing change point methods available to practitioners are ill-suited or computationally too costly to handle high-dimensional or complex data. In this paper, we develop a general and flexible framework for high-dimensional change point analysis that enjoys very favorable statistical and computational properties.\n\nWe adopt a standard offline change point analysis set-up, whereby we observe a sequence $\\{ \\bm Z _i\\}_{i\\in [n]}$ of independent data points, where $[n]:=\\{1,\\ldots, n\\}$. We assume that each $ \\bm Z _i$ follows a high-dimensional parametric distribution $\\mathbb{P}_{ { \\bm \\theta } ^*_i}$ specified by an unknown parameter $ { \\bm \\theta } ^*_i$, and that sequence of parameters $\\{ { \\bm \\theta } _i\\}_{i \\in [n]}$ is piece-wise constant over time. For example, in the mean change point model (see ~\\Cref{sec: result mean} below), $ { \\mathbb{E} } ( \\bm Z _i) = { \\bm \\theta } ^*_i \\in \\mathbb R^p$, where $ { \\bm \\theta } ^*_i$ is a vector in $\\mathbb R^p$. In the regression change point model (see ~\\Cref{sec: result regression}), $ \\bm Z _i = ( \\bm Z _i, y_i) \\in \\mathbb R^p\\times \\mathbb R$ satisfying $ { \\mathbb{E} } (y_i| \\bm Z _i) = \\bm Z _i^\\top { \\bm \\theta } ^*_i $ where $ { \\bm \\theta } ^*_i$ is a vector of regression parameters.\n\nWe postulate that there exists an unknown sub-sequence \nof {\\it change points} $ 1=\\eta_0 < \\eta_1 < \\eta_2 < \\ldots <\\eta_K < \\eta _{K+1}= n+1$ such that $ { \\bm \\theta } ^*_{i}\\neq { \\bm \\theta } ^*_{i - 1}$ if and only if $i \\in \\{\\eta_{k}\\}_{k\\in [K]}$. 
For each $k\\in [K]=\\{ 1,\\ldots,K\\}$, define the local spacing parameter and local jump size parameter as\n\\begin{equation}\n \\Delta_k = \\eta_{k} -\\eta_{k-1} \\quad \\text{and} \\quad \\kappa_k \n\t : = \\| { \\bm \\theta } ^*_{\\eta_{k}} - { \\bm \\theta } ^*_{\\eta_k - 1}\\| \n\\end{equation}\nrespectively, where $\\|\\cdot \\|$ is some appropriate norm that is problem specific. Throughout the paper, we will allow the parameters of the data generating distributions, the spacing and jump sizes to change with $n$, though we will require $K$ to be bounded.\nOur goal is to estimate the number and locations of the change points sequence $\\{ \\eta_k\\}_{k\\in[K]}$. We will deem any estimator $\\{ \\widehat \\eta_k\\}_{k\\in[ \\widehat{K}]}$ of the change point sequence {\\it consistent} if, with probability tending to $1$ as $n\\rightarrow \\infty$, \n\\begin{equation} \n\\label{eq: consistency general}\n \\widehat{K} = K \\quad \\text{and} \\quad \\max_{k\\in [K]}|\\widehat{\\eta}_k - \\eta_k| = o(\\Delta_{\\min}),\n\\end{equation}\nwhere $\\Delta_{\\min} = \\min_{ k\\in[K] } \\Delta_k$.\n\nRecent years have witnessed significant advances in the fields of high-dimensional change point analysis, both in terms of methodological developments and theoretical advances. Most change point estimators for high-dimensional problems can be divided into two main categories: those based on variants of the binary segmentation algorithm and those relying on the penalized likelihood. See below for a brief summary of the relevant literature.\n\nIn this paper, we aim to develop a comprehensive framework for estimating change points in high-dimensional models using an $\\ell_0$-penalized likelihood approach. \nWhile $\\ell_0$-based change point algorithms have demonstrated excellent -- in fact, often optimal -- localization rates, their computational costs remain a significant challenge. Indeed, optimizing the $\\ell_0$-penalized objective function using a dynamic programming (DP) approach requires quadratic time complexity \\citep{friedrich2008complexitypenalizedmestimation} and, therefore, is often impractical. \n\nTo overcome this computational bottleneck, we propose a novel class of algorithms for high-dimensional multiple change point estimation problems called \\textit{divide and conquer dynamic programming} (DCDP) - see \\Cref{algorithm:DCDP}.\nThe DCDP framework is very versatile and can be applied to a wide range of high-dimensional change point problems. At the same time, it yields a substantial reduction in computational complexity compared to the vanilla DP. In particular, when the minimal spacing $\\Delta_{\\min}$ between consecutive change points is of order $n$, DCDP exhibits almost linear time complexity.\n\nMoreover, the DCDP algorithm retains a high degree of statistical accuracy. Indeed, we show that DCDP delivers minimax optimal localization error rates for the important problems of change point localization in sparse high-dimensional mean models, Gaussian graphical models and sparse linear regression problems. \nTo the best of our knowledge, DCDP is the first near-linear time procedure that can provide optimal statistical guarantees in these three different models. See \\Cref{remark:optimal mean} and \\Cref{remark:optimal regression} for more detailed discussions on optimality.\n \n\n \n\n{\\bf Structure of the paper.}\nBelow we provide a selective review of the recent relevant literature on high-dimensional change point analysis. 
In \\Cref{sec:method}, we describe the DCDP framework. In \\Cref{section:main}, we provide detailed theoretical studies to demonstrate that DCDP achieves minimax optimal localization errors in the three models. In \\Cref{sec:experiment}, we conduct extensive numerical experiments on synthetic and real data to illustrate the superior numerical performance of DCDP compared to existing procedures.\n\n\n\n{\\bf Relevant literature.} \nBinary Segmentation (BS) is a greedy iterative approach that breaks the multiple-change-point problem down into a sequence of single change-point sub-problems. Originally introduced by \\cite{Scott1974} to handle the case of one change point, the BS algorithm was later shown by \\cite{Venkatraman1992} to be effective also in change point problems involving multiple change points. \nModern variants of the original BS algorithm include the Wild Binary Segmentation of \\cite{Fryzlewicz2014} and the more computationally efficient \\textit{Seeded Binary Segmentation} (SBS) algorithm of \\citep{kovacs2020seeded}.\nBS-inspired change point procedures have been designed for various change point problems, including high-dimensional mean models \\citep{eichinger2018mosum,wang_samworth2018}, graphical models \\citep{londschien2021change}, covariance models \\citep{wang_yi_rinaldo_2021_cov}, network models \\citep{wang2021network_cpd}, functional models \\citep{oscar2022nonpara} and many more. \n\nPenalized likelihood-based approaches are also popular in the change point literature. Broadly, these approaches segment the time series by maximizing a likelihood function with an appropriate penalty to avoid over-segmentation. \\cite{yao_au1989} showed that $\\ell_0$-penalized likelihood-based methods yield consistent estimators of change points. Relaxing the $\\ell_0$-penalty to the $\\ell_1$-penalty results in the Fused Lasso algorithm, whose theoretical and computational properties have been analyzed by, e.g., \\citep{lin2017fused} for the mean setting and by \\citep{qian2016fused} for the linear regression setting. More recently, \\cite{unify_sinica2022} proposed a unified framework to analyze Fused-Lasso-based change point estimators in linear models.\n\n\n\nA few recent notable contributions in the literature have focused on designing \\textit{unified methodological frameworks} for offline change point analysis. \n\\cite{verzelen2020hd_mean} developed a general approach based on local two-sample tests to detect changes in means, but their approach can only consistently estimate the number of change points, and the localization accuracy of the estimators is unspecified. \n\\cite{changeforest2022} proposed a novel multivariate nonparametric multiple change point detection method based on likelihood ratio tests.\n\\cite{unify_sinica2022} studied a general framework based on the Fused Lasso to deal with change points in mean and linear regression models, but their detection boundary is sub-optimal and it is computationally demanding to numerically optimize the Fused Lasso objective function for high-dimensional time series. Until now, a unified framework for offline change point localization with optimal statistical guarantees and low computational complexity has been missing in the literature.\n\n\n\n\n\n{\\bf Notation.} For $n \\in \\mathbb Z^+$, denote $[n]:=\\{1,\\cdots, n\\}$.
For a vector $ \\bm v \\in \\mathbb{R}^p$, denote the $i$-th entry as $v_i$, and similarly, for a matrxi $ \\bm A \\in \\mathbb{R}^{m\\times n}$, we use $A_{ij}$ to denote its element at the $i$-th row and $j$-th column. We use $\\mathbb{S}^p_+$ to denote the cone of positive semidefinite matrices in $\\mathbb{R}^{p\\times p}$. For two real numbers $a,b$, we denote $a\\vee b:=\\max\\{a,b\\}$.\n\n$\\|\\cdot\\|_1,\\|\\cdot\\|_2$ refer to the $\\ell_1$ and $\\ell_2$ norm of vectors, i.e., $\\| \\bm v \\|_1 = \\sum_{i\\in [p]}|v_i|$ and $\\| \\bm v \\|_2 = (\\sum_{i\\in [p]}v_i^2)^{1\/2}$. For a square matrix $A\\in \\mathbb{R}^{n\\times n}$, we use $\\| \\bm A \\|_F$ to denote its Frobenius norm, ${\\rm Tr}( \\bm A ) = \\sum_{i\\in [n]}A_{ii}$ to denote its trace, and $| \\bm A |$ to denote its determinant. For a random variable $X\\in \\mathbb{R}$, we denote $\\|X\\|_{\\psi_2}$ as the subgaussian norm \\citep{vershynin2018high}: $\\|X\\|_{\\psi_2}:=\\inf \\{t>0: \\mathbb{E} \\psi_2(|X| \/ t) \\leq 1\\}$ where $\\psi_2(t) = e^{t^2} - 1$.\n\nFor asymptotics, we denote $x_n\\lesssim y_n$\nor $x_n = O(y_n)$ if $\\forall n$,\n$x_n \\leq c_1 y_n$ for some universal constant $c_1>0$. $a_n = o(b_n)$ means $a_n\/b_n\\rightarrow 0$ as $n\\rightarrow \\infty$, and $X_n = o_p(Y_n)$ if $X_n\/Y_n\\rightarrow 0$ in probability. We call a positive sequence $\\{a_n\\}_{n\\in \\mathbb{Z}^+}$ a diverging sequence if $a_n\\rightarrow \\infty$ as $n\\rightarrow \\infty$.\n\n\\section{Methodology}\n\\label{sec:method}\nIn this section, we introduce the DCDP framework and analyze its computational complexity.\nRecall that in our framework, we observe the time series of independent data $\\{ \\bm Z _i\\}_{i\\in [n]}$ sampled from the unknown sequence of distributions $ \\{\\mathbb P_{ { \\bm \\theta } _i^*}\\}_{i\\in [n]}$. For a time interval $ \\mclI \\subset [1, n]$ comprised of integers, let $\\mclF( { \\bm \\theta } ,\\mclI) $ denote the value of an appropriately chosen goodness-of-fit function of the subset $\\{ \\bm Z _i\\}_{i\\in \\mclI }$, and for a fixed and common value of the parameter $ { \\bm \\theta } $. The choice of the goodness-of-fit function is problem dependent.\n\nNext, we use $\\widehat{ { \\bm \\theta } }_{\\mclI}$ to denote the penalized or unpenalized maximum likelihood estimator of $ { \\bm \\theta } ^*$ within the interval $\\mclI$. Intuitively, \n $ \\mathcal F( \\widehat { \\bm \\theta } _\\mathcal I ,\\mathcal I) $\n can be considered a local statistic to test for the existence of one or more change points in $\\mathcal I$.\n \n \n \n \n \n \n\n\n\nDCDP is a two-stage algorithm that entails a divide step and an optional conquer step; see \\Cref{algorithm:DCDP} for details. In the divide step, described in \\Cref{algorithm:DP}, DCDP first computes preliminary estimates of the change point locations by running DDP, a dynamic programming algorithm over a uniformly-spaced grid of time points $\\{s_i=\\lfloor i\\cdot n\/(\\mclQ + 1)\\rfloor\\}_{i\\in [\\mclQ]}$. (DDP can also take as input a random collection of time points, but there are no computational or statistical advantages in randomizing this choice). 
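For concreteness, the divide step can be summarized by the following schematic implementation for the mean change point model of \Cref{sec: result mean}, with the plain within-segment sum of squares playing the role of $\mclF$. It is a simplified reference sketch only: it omits the penalized fitting and the minimal interval length rule used in our theory, it does not use the memorization technique discussed below, and all variable names are ours.
\begin{verbatim}
import numpy as np

def seg_cost(Z, l, r):
    """F(theta_hat_I, I) for I = (l, r]: within-segment sum of squares."""
    seg = Z[l:r]
    return float(((seg - seg.mean(axis=0)) ** 2).sum())

def divide_step(Z, gamma, Q):
    """Grid-restricted dynamic program over Q evenly spaced candidate splits."""
    n = len(Z)
    grid = {int(i * n / (Q + 1)) for i in range(1, Q + 1)}
    grid = sorted((grid | {n}) - {0})
    B = {0: 0.0}   # B[r]: optimal penalized cost of segmenting (0, r]
    prev = {}      # prev[r]: last split of an optimal segmentation of (0, r]
    for r in grid:
        B[r], prev[r] = np.inf, 0
        for l in [0] + [s for s in grid if s < r]:
            cost = B[l] + gamma + seg_cost(Z, l, r)
            if cost < B[r]:
                B[r], prev[r] = cost, l
    cps, k = [], n      # trace back the estimated change points
    while prev[k] > 0:
        k = prev[k]
        cps.append(k)
    return sorted(cps)
\end{verbatim}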
In the subsequent conquer step, detailed in \\Cref{algo:local_refine_general}, the localization accuracy of the initial estimates is improved using a penalized local refinement (PLR) methodology.\n\n{\\bf Computational complexity of DCDP.} \nThe DCDP procedure achieves substantial computational gains by using a coarse, regular grid of time points $\\{s_i\\}_{i \\in [\\mclQ]} \\subset [n]$ during the divide step. Additionally, the PLR procedure in the conquer step is a local algorithm and is easily parallelizable. The number of grid points $\\mclQ$ to be given as input to DDP in the divide step should be chosen to be of smaller order than the length of the time course $n$, but large enough to identify the number and the approximate positions of the true change points, with high probability.\n\n\n\\begin{algorithm}[h]\n \\caption{Divide and Conquer Dynamic Programming. DCDP $(\\{ \\bm Z _i\\}_{i \\in [n]}, \\gamma, \\zeta , \\mclQ)$}\n\\label{algorithm:DCDP}\n\t\\textbf{Input:} Data $\\{ \\bm Z _i \\}_{i \\in [n ] }$, tuning parameters $\\gamma,\\zeta, \\mclQ > 0$.\n\n Set grid points $s_i=\\lfloor\\frac{i\\cdot n}{\\mclQ+1}\\rfloor$ for $i\\in [\\mclQ]$.\n\\vskip 0.2cm \t \n(Divide Step) Compute the proxy estimators $\\{\\widehat{\\eta}_k\\}_{k\\in [\\widehat {K}]}$ using DDP $(\\{ \\bm Z _i\\}_{i \\in [n]},\\{s_i\\}_{i\\in [\\mclQ]},\\gamma )$ in \\Cref{algorithm:DP}.\n\\vskip 0.2cm \n(Conquer Step) Compute the final estimators $\\{\\widetilde {\\eta}_k\\}_{k\\in [\\widehat {K}]}$ using $ {\\rm PLR}(\\{\\widehat {\\eta}_k\\}_{k\\in [\\widehat{K}]},\\zeta )$ in \\Cref{algo:local_refine_general}. \n\\vskip 0.2cm \t\n\\textbf{Output:} The change point estimators $\\{\\widetilde {\\eta}_k\\}_{k\\in [\\widehat{K}]}$.\n\\end{algorithm} \n\n\n\\begin{algorithm}[h]\n\n\\caption{ Divided Dynamic Programming \n DDP $(\\{ \\bm Z _i\\}_{i \\in [n]},\\{s_i\\}_{i\\in [\\mclQ]},\\gamma )$: \n the divide step. }\n\\label{algorithm:DP}\n\t\\textbf{Input:} Data $\\{ \\bm Z _i \\}_{i \\in [n ]}$, ordered integers $\\{s_i\\}_{i\\in [ { \\mathcal Q} ]}\\subset (0,n)$, tuning parameter $\\gamma>0$.\n\n \t Set $ \\widehat {\\mathcal P}= \\emptyset$, $\\mathfrak{ p} =\\underbrace{(-1,\\ldots, -1)}_{ n } $, ${\\bm B} =\\underbrace{( \\gamma, \\infty , \\ldots, \\infty)}_{n } $. \n \\\\\n \t\\For {$r $ in $\\{s_i\\}_{i\\in [ { \\mathcal Q} ]} \\cup \\{n\\}$} {\n \t \\For {$l $ in $\\{1\\} \\cup \\{s_i\\}_{i\\in [ { \\mathcal Q} ]}$ with $ l < r $} {\n $ b \\leftarrow B_l + \\gamma + \\mathcal F (\\widehat{ { \\bm \\theta } }_{(l,r]}, (l,r]) $;\n \\\\\n \\If{$ b < B_r $}{\n $ B_r \\leftarrow b $;\n \\\\\n $ \\mathfrak{p}_ r \\leftarrow l $.\n }\n \t }\n \t }\n $ k \\leftarrow n $;\n \\\\\n \\While{$ k > 1 $}{\n $ h \\leftarrow \\mathfrak{p}_ k $;\n \\\\\n $\\widehat {\\mathcal P} \\leftarrow \\widehat {\\mathcal P} \\ \\cup \\ \\{h\\}$;\n\\\\ \n $ k \\leftarrow h $.\n \t }\n\t\\textbf{Output:} The set of estimated change points \n\t$\\widehat {\\mathcal P}$. \n\\end{algorithm} \n\n\\begin{algorithm}[h] \n\\caption{Penalized Local Refinement ${\\rm PLR}(\\{\\widehat{\\eta}_k\\}_{k\\in [\\widehat{K}]},\\zeta)$: the conquer step. 
\n}\n\\label{algo:local_refine_general}\n\t\\textbf{Input:} Data $\\{ \\bm Z _i \\}_{i\\in[n]}$, estimated change points $\\{\\widehat{\\eta}_k\\}_{k \\in[\\widehat{K}]}$ from \\Cref{algorithm:DP}, tuning parameter $\\zeta > 0$.\n\n\tLet $(\\widehat{\\eta}_0, \\widehat{\\eta}_{\\widehat{K} + 1}) \\leftarrow (0, n)$.\n\t\n\t\\For{$k = 1, \\ldots, \\widehat{K}$} { \n\t$(s_k, e_k) \\leftarrow (\\frac{2}{3} \\widehat{\\eta}_{k-1} + \\frac{1}{3}\\widehat{\\eta}_{k} , \\ \\ \\frac{1}{3}\\widehat{\\eta}_{k} \n + \\frac{2}{3}\\widehat{\\eta}_{k+1} )$\n\\begin{align*}\\bigg(\\check{\\eta}_k,\\widehat{ { \\bm \\theta } }^{(1)},&\\widehat{ { \\bm \\theta } }^{(2)}\\bigg) \\leftarrow \\argmin_{\\eta, { \\bm \\theta } ^{(1)}, { \\bm \\theta } ^{(2)}} \\{\\mclF( { \\bm \\theta } ^{(1)},[s_k,\\eta)) + \n\\\\\n &\\mclF( { \\bm \\theta } ^{(2)}, [\\eta,e_k)) + \\zeta R( { \\bm \\theta } ^{(1)}, { \\bm \\theta } ^{(2)},\\eta; s_k, e_k )\\}\n\\end{align*}\n\t$$\\widetilde{\\eta}_k \\leftarrow \\argmin_{\\substack{\\eta}} \\left\\{\\mclF(\\widehat { \\bm \\theta } ^{(1)},[s_k,\\eta)) + \\mclF(\\widehat { \\bm \\theta } ^{(2)}, [\\eta,e_k)) \\right\\}$$\n\t}\n\t\\textbf{Output:} The refined estimators $\\{\\widetilde{\\eta}_k\\}_{k \\in [\\widehat{K}]}$.\n\\end{algorithm} \n\nIn detail, let $\\mclC_1(|\\mclI|, p)$ denote the time complexity of computing the goodness-of-fit function $ \\mathcal F (\\widehat{\\theta}_\\mclI, \\mclI) $. Naively, the time complexity of \\Cref{algorithm:DP} is $O(\\mclQ^2 \\cdot \\mclC_1(n, p))$, where $ { \\mathcal Q} $ is the size of the grid $\\{ s_i\\}_{i \\in [\\mclQ]}$ in \\Cref{algorithm:DP}. \nWith the memorization technique proposed in \\citep{yu2022temporal}, we show in \\Cref{lem: complexity divide step} that the complexity of the divide step can be reduced to $O(n\\mclQ\\cdot \\mclC_2(p))$, and in \\Cref{lem: complexity local refine} that the conquer step can be computed with time complexity $O(n\\cdot \\mclC_2(p))$, where $\\mclC_2(p)$ is independent of $n$. Furthermore, as shown later in \\Cref{section:main} and \\Cref{sec: fundamental lemma}, setting $\\mclQ = \\frac{4n}{\\Delta_{\\min}}\\log^2(n)$ ensures consistency of \\Cref{algorithm:DP}. Therefore, the complexity of DCDP is\n$$O\\left( \\frac{n^2}{\\Delta_{\\min}}\\cdot\\log^2(n)\\cdot \\mclC_2(p) \\right).$$\nWhen $\\Delta_{\\min}$ is of the same order as $n$, the complexity of DCDP becomes $O(n\\log^2(n)\\cdot \\mclC_2(p))$. To the best of our knowledge, DCDP is the first multiple-change-point detection algorithm that can provably achieve near-linear time complexity in the three models presented in \\Cref{section:main}.\n\n{\\bf Statistical accuracy.} As we will show below, though the DDP procedure in the divide step may already be sufficiently accurate to deliver consistent estimates as defined in \\eqref{eq: consistency general}, its error rate is suboptimal. Sharper, even optimal, localization errors can be achieved through the PLR algorithm in the conquer step (see \\Cref{algo:local_refine_general}). The PLR procedure takes as input the preliminary change points estimates from the divide step\\footnote{More generally, it can be shown that the PLR procedure remains effective as long as it is given as input any change point estimates whose Hausdorff distance from the true change points is bounded by $\\Delta_{\\min}$. 
Thus, the preliminary estimates need not even be consistent.}, and provably reduces their localization errors -- for some of the models considered in the next section, down to the minimax optimal rates. The effectiveness of local refinement methods to enhance the precision of initial change point estimates has been well-documented in the recent literature on change point analysis \\citep{rinaldo2021cpd_reg_aistats, li2022cpd_btl}.\nIn \\Cref{algo:local_refine_general}, \nthe additional penalty function $R( { \\bm \\theta } ^{(1)}, { \\bm \\theta } ^{(2)},\\eta; s, e)$ in \\Cref{algo:local_refine_general} is introduced to ensure numerical stability of the parameter estimates in high dimensions and, possibly, to reproduce desired structural properties, such as sparsity. Its choice is, therefore, problem specific. For example, in the sparse mean and linear change point model in \\Cref{sec: result mean}, $ { \\bm \\theta } ^{(1)}, { \\bm \\theta } ^{(2)} \\in \\mathbb{R}^{p}$ and we consider the group lasso penalty function\n\\begin{equation}\n\\label{eq:group lasso penalty}\n \n R(\\cdot) = \\sum_{i \\in [p]} \\sqrt{(\\eta - s)( { \\bm \\theta } ^{(1)} )_{i}^2 + (e - \\eta)( { \\bm \\theta } ^{(2)})_{i} ^2}.\n\\end{equation}\n\\bnrmk[Penalization] In \\Cref{algorithm:DP}, $\\gamma$ is a tuning parameter to control the number of selected change points and to avoid false discoveries. In \\Cref{algo:local_refine_general}, the tuning parameter $\\zeta$ is used to modulate the impact of the penalty function $R$. We derive theoretically valid choices of tuning parameters in \\Cref{section:main}, and provide practical guidance on how to select them in a data-driven way in \\Cref{sec:experiment}.\n \\enrmk\n\n\n\n\n\\section{Main results }\n\\label{section:main}\n\nIn this section, we investigate the theoretical performance of DCDP in three different high-dimensional change point problems. For each of the models examined, we first derive localization rates for the DDP algorithm in the divide step and find that, though they imply consistency, they are worse than the corresponding rates afforded by the computationally costly vanilla DP algorithm \\citep{wang2018univariate, rinaldo2021cpd_reg_aistats}. This suboptimal performance reflects the trade-off between computation efficiency and statistical accuracy and should not come as a surprise. \n Next, we demonstrate that, by using the PLR algorithm in the conquer step, the estimation accuracy increases and the final localization rates become comparable to the (often minimax) optimal rates.\n\nThroughout the section, we will consider the following high-dimensional offline change point analysis framework of reference.\n\n\\bnassum[]\n\\label{assp: DCDP_general}\nWe observe independent data points $\\{ \\bm Z _{i}\\}_{i\\in[n]}$ such that, for each $i$, $ \\bm Z _{i}$ is a draw from a parametric distribution $\\mathbb{P}_{ { \\bm \\theta } ^*_i}$ specified by an unknown parameter vector $ { \\bm \\theta } ^*_i$. There exists an unknown collection of change points $ 1=\\eta_0 < \\eta_1 < \\eta_2 < \\ldots <\\eta_K < \\eta _{K+1}= n+1$ such that $ { \\bm \\theta } ^*_{i} \\neq { \\bm \\theta } ^*_{i-1}$ if and only if $i \\in \\{\\eta_k\\}_{k\\in[K]}$. For each change point $\\eta_k$, we will let $\\kappa_k = \\| { \\bm \\theta } ^*_{\\eta_{k}} - { \\bm \\theta } _{\\eta_k - 1}^* \\|$ be the size of the corresponding change, where $\\|\\cdot\\|$ is an appropriate norm (to be specified, depending on the model). 
For simplicity, we further assume that the magnitudes of the changes are of the same order: there exists a $\\kappa>0$ such that \n\t $ \\kappa _k \\asymp \\kappa $ for all $k\\in [K]$. We denote the spacing between $\\eta_{k}$ and $\\eta_{k-1}$ with \n $\\Delta_k = \\eta_{k} -\\eta_{k-1} $ and let $ \\Delta_{\\min} = \\min_{k\\in[K]}\\Delta_k$ denote the minimal spacing. All the model parameters are allowed to change with $n$, with the exception of $K$.\n\\enassum\n\n\n\\subsection{Changes in means}\n\\label{sec: result mean}\n\nChange point detection and localization of a piece-wise constant mean signal is arguably the most traditional and well-studied change point model. Initially developed in the 1940s for univariate data, the model has recently been generalized to various high-dimensional settings and thoroughly investigated: see, e.g., \n\\cite{wang_samworth2018, Gao2019, verzelen2020hd_mean, unify_sinica2022}. Below, we show that, for this model, DCDP achieves the sharp detection boundary and delivers the minimax optimal localization error rate.\n\\bnassum[Mean model]\n\\label{assp: DCDP_mean main}\n Suppose that for each $i\\in [n]$, $ \\bm Z _i$ satisfies the mean model $ \\bm Z _{i} = { \\bm \\mu } _i^* + { \\bm \\epsilon } _i\\in \\mathbb{R}^p$ and \\Cref{assp: DCDP_general} holds with $ { \\bm \\theta } ^*_i = { \\bm \\mu } ^*_i$ and $\\|\\cdot\\| = \\|\\cdot\\|_2$. \n\n{\\bf (a)} The measurement errors $\\{ { \\bm \\epsilon } _i\\}_{i\\in[n]}$ are independent mean-zero random vectors with independent subgaussian entries such that $0<\\sigma_{\\epsilon} =\\sup_{i\\in[n]}\\sup_{j\\in[p]} \\|( { \\bm \\epsilon } _i)_j\\|_{\\psi_2}<\\infty$.\n \n\n{\\bf (b)} For each $i \\in [n]$, there exists a collection of subsets $ S_i \\subset [p]$, such that \n\t $ ( { \\bm \\mu } ^*_{i})_j =0 \\text{ if } j \\not \\in S_i.$\n\t In addition, the cardinality of the support satisfies $|S_i|\\leq \\mathfrak{s} $. \n\\enassum\n \nConditions {\\bf (a)} and {\\bf (b)} above are standard assumptions in the literature on high-dimensional time series models \\citep{Basu2015,unify_sinica2022}. In our first result, we establish consistency of the divide step. The proof of the following theorem is in \\Cref{sec: main proof mean}.\n \n\\bnthm \\label{thm:DCDP mean}\nSuppose that \\Cref{assp: DCDP_mean main} holds and that \n \\begin{align}\\label{eq:dp mean snr 1} \\Delta_{\\min} \\kappa^2 \\ge \\mathcal{B}_n \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(p\\vee n),\n \\end{align}\n for some slowly diverging sequence $\\{ \\mclB_n\\}_{n\\in \\mathbb{Z}^+}$.\n For sufficiently large constants $C_\\gamma$ and $C_\\mclF$, let \n $\\{\\widehat { \\eta }_k\\}_{k\\in [\\widehat K]} $ denote the output of \\Cref{algorithm:DP} with $\\mclQ = \\frac{4n}{\\Delta_{\\min}}\\log^2(n)$,\n \\begin{equation*}\n \\mathcal F (\\widehat{ { \\bm \\mu } }_\\I,\\I) := \\begin{cases}\n \\sum_{i \\in \\I } \\| \\bm Z _i - \\widehat { \\bm \\mu } _\\I \\|_2 ^2 &\\text{if } |\\I|\\geq C_\\mclF { \\mathfrak{s} } \\log(p\\vee n),\\\\\n 0 &\\text{otherwise},\n \\end{cases}\n\\end{equation*}\n and $\\gamma = C_\\gamma \\mathcal B_n^{-1\/2} \\Delta_{\\min}\\kappa^2$. Here \n\\begin{equation}\n \\widehat{ { \\bm \\mu } }_{\\I} = \\argmin_{ { \\bm \\mu } \\in \\mathbb{R}^p} \\sum_{i \\in \\I } \\| \\bm Z _i - { \\bm \\mu } \\|_2^2 + \\lambda \\sqrt{|\\I|}\\| { \\bm \\mu } \\|_1,\n\\end{equation}\nwith $\\lambda = C_{\\lambda}\\sqrt{\\log(p\\vee n)}$ and $C_\\lambda $ a\n sufficiently large constant. 
\n \n Then, \n with probability $1 - n^{-3}$, $\\widehat K = K$ and \n \\begin{equation*}\n \\max_{ k \\in [K] } |\\eta_k-\\widehat \\eta_k| \\lesssim \\frac{ \\sigma_{\\epsilon}^2\\log(p\\vee n) +\\gamma }{\\kappa^2 } + \\mathcal{B}_n^{-1\/2} \\Delta_{\\min}.\n \\end{equation*}\n \\enthm\n\nThe \\textit{signal-to-noise-ratio} (SNR) condition \\eqref{eq:dp mean snr 1} assumed in \\Cref{thm:DCDP mean} is frequently used in the change point detection literature \\citep{unify_sinica2022,wang_samworth2018}. Recently, \\cite{verzelen2020hd_mean} showed that, if $ { \\mathfrak{s} } \\le \\sqrt p$, condition \\eqref{eq:dp mean snr 1} is indeed necessary, in the sense that if \n$$ \\frac{\\Delta_{\\min} \\kappa^2 }{ { \\mathfrak{s} } \\sigma_{\\epsilon}^2 \\log(p\\vee n)} =o(1) ,$$\nthen there exists a setting for which no change point estimator is consistent. \nThe localization error of DCDP estimator $\\{ \\widehat \\eta_k\\}_{k\\in[\\widehat K] } $ returned by \\Cref{algorithm:DP} satisfies \n$$\n \\frac{\\max_{k\\in [K]} | \\eta_k-\\widehat{\\eta}_k|}{\\Delta_{\\min}} \\lesssim \\frac{ \\sigma_{\\epsilon}^2 \\log(p\\vee n)}{\\Delta_{\\min}\\kappa^2} + \\frac{\\gamma}{\\Delta_{\\min}\\kappa^2} + \\mclB_n^{-1\/2},\n$$\nwith high probability.\n Thus, using \\eqref{eq:dp mean snr 1} and given the choice of $\\gamma$, \nit follows that the resulting estimator is consistent:\n$$ \\frac{\\max_{k\\in [K]}| \\eta_k-\\widehat{\\eta}_k|}{\\Delta_{\\min}} \\lesssim \n\\mclB_n^{-1} + \\mclB_n^{-1\/2}\n =o_p(1). $$ \n\n\\bnrmk[Grid size]\nIn \\Cref{thm:DCDP mean} and in all the results of this section, we choose a value for the grid size $\\mclQ$ that, while coarse, ensures consistency. Any finer grid can yield the same error rate, at an additional computational cost. \n\\enrmk\n\nCompared to the localization error of the vanilla DP, the localization error of Divided DP \\Cref{algorithm:DP} picks up an additional term $ { \\mathcal B_n^{-1\/2} \\Delta } $. As remarked above, this is to be expected, as \\Cref{algorithm:DP} only deploys a subset of the data indices. Starting with the coarse (but still consistent) preliminary estimators from the divide step \\Cref{algorithm:DP}, the local refinement algorithm \\Cref{algo:local_refine_general} further improves its accuracy and, in fact, yields an optimal error rate.\n \n\\bnthm\n\\label{cor:mean local refinement} Let $\\{ \\mclB_n\\}_{n\\in \\mathbb{Z}^+}$ be any slowly diverging sequence and suppose that $\\Delta_{\\min} \\kappa ^2 \\geq \\mathcal{B}_n \\sigma_{\\epsilon}^2 \\mathfrak{s}^2\\log^3(p\\vee n) $.\n\t Let \n $\\{ \\widetilde \\eta_k\\}_{k\\in[\\widehat K]}$ be the output of \n \\Cref{algo:local_refine_general} with $\\zeta = C_{\\zeta} \\sqrt{\\log(p\\vee n)}$ for sufficiently large constant $C_{\\zeta}$ and $R(\\theta^{(1)}, \\theta^{(2)},\\eta; s, e)$ specified in \\eqref{eq:group lasso penalty}. Then under \\Cref{assp: DCDP_mean main}, for any $\\alpha \\in (0,1) $, with probability at least $1- ( \\alpha \\vee n^{-1} ) $ it holds that $\\widehat{K} = K$ and\n \\begin{equation}\n \\label{eq: mean rate op1}\n \\max_{k \\in [K]} |\\eta_k-\\widetilde \\eta_k|\\cdot \\kappa^2 \\lesssim { \\sigma_{\\epsilon}^2 \\log ( {1}\/{\\alpha} ) }.\n \\end{equation}\n\\enthm\n The proof of \\Cref{cor:mean local refinement} can be found in \\Cref{sec:mean op1}.\n\\bnrmk \\label{remark:optimal mean}\n The localization error bound \\eqref{eq: mean rate op1} is the tightest in the literature. 
It improves the existing bounds by \\cite{wang_samworth2018} and \\cite{unify_sinica2022}\n by a factor of $ { \\mathfrak{s} } \\log(p) $. It also matches the lower bound established in \\cite{wang_samworth2018}, showing that $O_p(1\/\\kappa^2)$ is the optimal error order and can not be further improved. \n\\enrmk\n\n\n\n\\subsection{Changes in regression coefficients}\n\\label{sec: result regression}\nWe now consider the more complex high-dimensional regression change point model in which the regression coefficients are sparse and change in a piecewise constant manner. Recently, various approaches and methods have been proposed to address this challenging scenario; see, in particular, \\cite{rinaldo2021cpd_reg_aistats,wang2021_jmlr,unify_sinica2022,yu2022temporal}. Below, we will show that DCDP yields optimal localization errors also for this class of change point models. \n\n\\bnassum[High-dimensional linear model]\\label{assp:dcdp_linear_reg main}\n Let the observed data $\\{ \\bm Z _{i }, y_i \\}_{i\\in [n] } \\subset \\mathbb R^p \\times \\mathbb R $ be such that \n\t $y_{i } = \\bm Z _i^\\top { \\bm \\beta } ^* _i +\\epsilon_i $ and let \\Cref{assp: DCDP_general} hold with $ { \\bm \\theta } ^*_i = { \\bm \\beta } ^*_i\\in\\mathbb{R}^p$ and $\\|\\cdot\\| = \\|\\cdot\\|_2$. In addition,\n\n{\\bf (a)} Suppose that $\\{ \\bm Z _i\\}_{i\\in [n]} \\overset{i.i.d.}{\\sim} N_p(0, { \\bm \\Sigma } )$ and that the minimal and the maximal eigenvalues of $ { \\bm \\Sigma } $ satisfy \n\t $\\Lambda_{\\min} ( { \\bm \\Sigma } )\\geq c_X$ and $\\Lambda_{\\max} ( { \\bm \\Sigma } ) \\le C_X$, with universal constants $c_X, C_X\\in(0,\\infty)$.\n In addition, suppose that $ \\{ \\epsilon_i \\}_{i\\in [n]} \\overset{i.i.d.}{\\sim} N(0, \\sigma^2_ \\epsilon)$ and is independent of $\\{ \\bm Z _i\\}_{i\\in [n]} $.\n\n{\\bf (b)} For each $i \\in [n]$, there exists a collection of indices $ S_i \\subset [p]$, such that \n\t $ ( { \\bm \\beta } ^* _ { i})_j =0 \\text{ if } j \\not \\in S_i.$\n\t In addition, the cardinality of the support satisfies $|S_i|\\leq \\mathfrak{s} $. \n\\enassum\n \nWe note that \\Cref{assp:dcdp_linear_reg main} {\\bf (a)} and {\\bf (b)} are standard assumptions for Lasso estimators. 
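In this model, the goodness-of-fit used by the divide step is the residual sum of squares of an interval-wise Lasso fit. A schematic implementation is given below; scikit-learn is used purely for convenience, the mapping between its penalty parameter and our $\lambda$ is our own bookkeeping, and the snippet ignores the minimal-length rule appearing in the theorem that follows.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def lasso_fit_cost(X, y, l, r, lam):
    # F(beta_hat_I, I) for I = (l, r]: RSS of the interval-wise Lasso fit
    XI, yI = X[l:r], y[l:r]
    m = r - l
    # sklearn minimises (1/(2m))||y - Xb||_2^2 + alpha*||b||_1, so alpha =
    # lam / (2*sqrt(m)) matches sum_i (y_i - Z_i^T b)^2 + lam*sqrt(m)*||b||_1
    fit = Lasso(alpha=lam / (2 * np.sqrt(m)), fit_intercept=False, max_iter=10000)
    fit.fit(XI, yI)
    resid = yI - XI @ fit.coef_
    return float(resid @ resid)
\end{verbatim}
Replacing the mean cost in the divide-step sketch of \Cref{sec:method} by this function yields the estimator analyzed in the next theorem, up to the simplifications mentioned there.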
Similarly to the case of the mean change point model, we first analyze the performance of the divide step of DCDP and find it to be consistent, albeit at a sub-optimal rate.\n\n \n \\bnthm \\label{thm:DCDP regression} Suppose \\Cref{assp:dcdp_linear_reg main} holds and that \n \\begin{align}\\label{eq:snr divide linear regression}\n \\Delta_{\\min } \\kappa ^2 \\geq \\mathcal{B}_n \\sigma_{\\epsilon}^2 \\mathfrak{s}\\log(p\\vee n) \n \\end{align}for some diverging sequence $\\{ \\mclB_n\\}_{n\\in \\mathbb{Z}^+}$.\nLet \n $\\{\\widehat { \\eta }_k\\} _{k\\in [\\widehat K]} $ be the output of \\Cref{algorithm:DP} with $\\mclQ = \\frac{4n}{\\Delta_{\\min}}\\log^2(n)$, $\\gamma = C_\\gamma \\mathcal B_n^{-1\/2} \\Delta_{\\min}\\kappa^2$ and\n\\begin{equation*}\n \\mathcal F (\\widehat{ { \\bm \\beta } }_{\\mclI},\\mclI) : = \\begin{cases} \n 0 &\\text{if } |\\mclI|< C_\\mclF { \\mathfrak{s} } \\log(p\\vee n),\n \\\\ \n \\sum_{i \\in \\mclI } (y_i - \\bm Z _i^\\top \\widehat { \\bm \\beta } _\\mclI ) ^2 &\\text{otherwise,} \n \\end{cases} \n\\end{equation*}\n for sufficiently large constants $ C_\\gamma $ and $C_\\mclF$ and $\\widehat{ { \\bm \\beta } }_{\\I}$ given by\n\\begin{equation}\n \\widehat{ { \\bm \\beta } }_{\\I} = \\argmin_{ { \\bm \\beta } \\in \\mathbb{R}^p} \\sum_{i \\in \\I } (y_i - \\bm Z _i^\\top { \\bm \\beta } )^2 + \\lambda \\sqrt{|\\I|}\\| { \\bm \\beta } \\|_1,\n\\end{equation}\nwith $\\lambda = C_{\\lambda}\\sqrt{\\log(p\\vee n)}$, for $C_{\\lambda}$ a sufficiently large constant. \n Then,\n with probability $1 - n ^{-3}$, $\\widehat K = K$ and \n $$ \\max_{ k\\in [K]} |\\eta_k-\\widehat \\eta_k| \\lesssim \\sigma_{\\epsilon}^2 \\big( \\frac{ \\mathfrak{s} \\log(p\\vee n) +\\gamma }{\\kappa^2 }\\big) + { \\mathcal B_n^{-1\/2} \\Delta } .$$\n \\enthm\n\nThe proof of \\Cref{thm:DCDP regression} is deferred to \\Cref{sec: main proof linear}. It is immediate to verify that, under the SNR condition \\eqref{eq:snr divide linear regression} and given the choice of $\\gamma$, the estimators satisfy $ \\max_{k\\in [K]} | \\eta_k-\\widehat{\\eta}_k| \n =o_p(\\Delta_{\\min}) $ and are therefore consistent. \n\n \n \nWith a slightly stronger SNR condition than \\eqref{eq:snr divide linear regression}, \nstatistically optimal change point estimators can be obtained in the conquer step. 
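Before stating the guarantee for the conquer step, we illustrate its second stage for this model: once the two penalized fits $\widehat{ { \bm \beta } }^{(1)}$ and $\widehat{ { \bm \beta } }^{(2)}$ on $[s_k, e_k)$ have been computed, the refined estimator $\widetilde{\eta}_k$ in \Cref{algo:local_refine_general} solves a one-dimensional search that can be carried out with cumulative sums of squared residuals. The sketch below (our own variable names; the group-lasso fitting step itself is omitted) makes this explicit.
\begin{verbatim}
import numpy as np

def refine_split(X, y, s, e, beta1, beta2):
    # with both fits held fixed, return the split eta in [s, e) minimising
    # F(beta1, [s, eta)) + F(beta2, [eta, e))
    r1 = (y[s:e] - X[s:e] @ beta1) ** 2   # squared residuals under the left fit
    r2 = (y[s:e] - X[s:e] @ beta2) ** 2   # squared residuals under the right fit
    c1 = np.concatenate(([0.0], np.cumsum(r1)))
    c2 = np.concatenate(([0.0], np.cumsum(r2)))
    costs = c1[:-1] + (c2[-1] - c2[:-1])  # one cost per candidate split
    return s + int(np.argmin(costs))
\end{verbatim}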
\n \n\\bnthm\n \\label{cor:regression local refinement}\n Let $\\{ \\mclB_n\\}_{n\\in \\mathbb{Z}^+}$ be any slowly diverging sequence and suppose that $\\Delta_{\\min} \\kappa ^2 \\geq \\mathcal{B}_n \\sigma_{\\epsilon}^2 \\mathfrak{s}^2\\log^3(p\\vee n) $.\n\n Let \n $\\{ \\widetilde \\eta_k\\}_{k\\in[\\widehat K]}$ be the output of \n \\Cref{algo:local_refine_general} with $\\zeta = C_{\\zeta} \\sqrt{\\log(p\\vee n)}$ for sufficiently large constant \n $C_\\zeta $ and $R( { \\bm \\theta } ^{(1)}, { \\bm \\theta } ^{(2)},\\eta)$ specified in \\eqref{eq:group lasso penalty}\n Then under \\Cref{assp:dcdp_linear_reg main}, for any $\\alpha\\in (0,1)$, with probability at least $1 - (\\alpha\\vee n^{-1})$, it holds that $\\widehat {K} = K$ and\n\t\\begin{equation}\n\t \n \\max_{k \\in [K]} | \\eta_k-\\widetilde \\eta_k | \\cdot \\kappa ^2 \\lesssim {\\sigma_\\epsilon^2 \\log^2({1}\/{\\alpha})}.\n\t \\label{eq: linear reg rate op1}\n\t\\end{equation}\n\\enthm\nThe proof of \\Cref{cor:regression local refinement} can be found in \\Cref{sec:regression op1}.\n\\bnrmk \\label{remark:optimal regression}\nThe localization error \\eqref{eq: linear reg rate op1} matches the existing lower bound \n established in \\cite{rinaldo2021cpd_reg_aistats} and, therefore, it is rate minimax optimal. To the best of our knowledge, the only other existing change point algorithm that can achieve optimal localization errors in the high-dimensional linear regression setting is the one developed in \\cite{yu2022temporal}, which allows for dependent observations. However, the approach by \\cite{yu2022temporal} requires quadratic time complexity.\n It is worth mentioning that both \\cite{rinaldo2021cpd_reg_aistats} and \\cite{yu2022temporal} also assume the SNR condition we use in \\Cref{thm:DCDP regression} and \\Cref{cor:regression local refinement}.\n\\enrmk\n\n\n\n\\subsection{Changes in precision matrices}\n\\label{sec: result covariance}\n\n\nFor our third and final example, we specialize the general change point framework of \\Cref{assp: DCDP_general} to the case of Gaussian graphical models, in which the distributional changes are induced by a sequence of temporally piece-wise constant precision matrices, with the magnitude of the changes measured in Frobenius norm.\n\n\\bnassum[Gaussian graphical model]\n\\label{assp:DCDP_covariance main} \n Suppose for each $i\\in [n]$, $ \\bm Z _{i }$ is a mean-zero Gaussian vector in $\\mathbb{R}^p$ with covariance matrix $ { \\bm \\Sigma } ^*_i = \\mathbb{E}[ \\bm Z _i \\bm Z _i^\\top]$, and \\Cref{assp: DCDP_general} holds with $ { \\bm \\theta } ^*_i = ( { \\bm \\Sigma } _i^*)^{-1}$ with $\\|\\cdot\\| = \\|\\cdot\\|_F$. Assume that for each $i\\in [n]$, the minimal and maximal eigenvalues of $ { \\bm \\Sigma } _i^*$ satisfy \n\t $\\Lambda_{\\min} ( { \\bm \\Sigma } _i^*)\\geq c_X$ and $\\Lambda_{\\max} ( { \\bm \\Sigma } _i^*) \\le C_X$, with universal constants $c_X, C_X\\in(0,\\infty)$.\n\\enassum \n\nThe recent literature contains several contributions addressing the problem of detecting change points in precision matrices; see, e.g., \\cite{Gibberd2017ggm,Gibberd2017fusedGlasso, Bybee2018jmlr,Keshavarz2020jmlr,londschien2021change,liu2021jmlr, unify_sinica2022}. Most of these studies focus on estimating a {\\it single} change point. To the best of our knowledge, only \\cite{unify_sinica2022} has provided theoretical guarantees for the multiple-change-point setting assuming sparse changes in the precision matrices. 
Below, we show that the divide step of the DCDP procedure is able to detect multiple change points in the precision matrices in the dense regime. \n \n\\bnthm\n\\label{thm:DCDP covariance main} Suppose \\Cref{assp:DCDP_covariance main} holds and that \n\t \\begin{align}\\label{eq:snr precision 1} \\Delta_{\\min} \\kappa^2 \\geq \\mathcal{B}_n p^2\\log(n\\vee p)\n \\end{align}\n for some slowly diverging sequence $\\{ \\mclB_n\\}_{n\\in \\mathbb{Z}^+}$. Let $ \\{ \\widehat \\eta_k\\}_{k\\in [\\widehat K]}$ be the output of \\Cref{algorithm:DP} with $\\mclQ = \\frac{4n}{\\Delta_{\\min}}\\log^2(n)$, $\\gamma = C_\\gamma \\mclB_n^{-1\/2}\\Delta_{\\min}\\kappa^2$ and\n\\begin{equation*}\n \\mathcal F (\\widehat{ { \\bm \\Omega } }_\\I,\\I) \n = \\begin{cases} 0 \n\\quad \\quad\\quad \\quad \\quad\\quad \\quad \\text{ if } |\\I|< C_\\mclF p\\log(p\\vee n);\\\\\n \\sum_{i \\in \\I } {\\rm Tr}[\\widehat{ { \\bm \\Omega } }_{\\mclI}^\\top \\bm Z _i \\bm Z _i^\\top] - |\\I|\\log|\\widehat{ { \\bm \\Omega } }_{\\mclI}| \n \\ \\text{otherwise}.\n \\end{cases}\n\\end{equation*}\nfor sufficiently large constants $ C_\\gamma $ and $C_\\mathcal F$. Here $\\widehat{ { \\bm \\Omega } }_{\\I}$ is\n\\begin{equation}\n\\label{eq:goodness-of-fit.precision}\n \\widehat{ { \\bm \\Omega } }_{\\I} = \\argmin_{ { \\bm \\Omega } \\in \\mathbb{S}^p_+} \\sum_{i \\in \\I } {\\rm Tr}[{ { \\bm \\Omega } }^\\top \\bm Z _i \\bm Z _i^\\top] - |\\I|\\log| { \\bm \\Omega } |. \n\\end{equation}\n Then\n with probability at least $1 - n ^{-3}$, $\\widehat K = K$ and that\n \\begin{equation} \n \\label{eq:loc.rate.precison}\n \\max_{ k\\in [K] } |\\eta_k-\\widehat \\eta_k| \\lesssim \\frac{ p^2 \\log(p\\vee n) +\\gamma }{\\kappa^2 } + \\mclB_n^{-\\frac{1}{2}}{\\Delta_{\\min}}.\n \\end{equation}\n\\enthm\nThe proof of \\Cref{thm:DCDP covariance main} is deferred to \\Cref{sec: main proof covariance}.\n\n\nThe goodness-of-fit function in \\eqref{eq:goodness-of-fit.precision} corresponds to a penalized Gaussian likelihood for the precision matrix and is a natural choice for our model settings.\n\nUnder the assumption of the theorem, the localization rate \\eqref{eq:loc.rate.precison} implies consistency, as defined in \\eqref{eq: consistency general}; indeed, it is easy to see that $ \\max_{k\\in [K]} | \\eta_k-\\widehat{\\eta}_k| \n =o_p( \\Delta_{\\min} ). $ \n \nAn analogous condition to Condition \\eqref{eq:snr precision 1} is used in \\cite{unify_sinica2022} under the slightly different settings of sparse changes. More precisely, the authors requires that $\\Delta_{\\min}\\kappa^2\\geq \\mclB_n d\\log(n\\vee p)$, where $d $ is the maximal number of nonzero entries in the precision matrices. When applied to our dense settings, their SNR condition matches \\eqref{eq:snr precision 1}.\n\nUnder a slightly stronger SNR condition, we further obtain that the local refinement algorithm in the conquer step improves the localization rate to match the sharpest rate known for this problem.\n \n\n\n\n\n\\bnthm[]\n\\label{cor:covariance local refinement main}Let $ \\mathcal{B}_n$ be an arbitrary slowly diverging sequence and suppose $ \\Delta_{\\min} \\kappa^2 \\geq \\mathcal{B}_n p^4\\log^2(n\\vee p)$.\n Let \n $\\{ \\widetilde \\eta_k\\}_{k\\in[\\widehat K]}$ be the output of \n \\Cref{algo:local_refine_general} with $R(\\theta^{(1)}, \\theta^{(2)},\\eta) =0$. 
\nThen under \\Cref{assp:DCDP_covariance main}, it holds that with probability at least $1 - n^{-1}$ \n\\begin{equation}\\label{eq:covariance final rate}\n \\max_{k \\in {[K]}} |\\eta_k-\\widetilde \\eta_k|\\cdot \\kappa^2\\lesssim \\log(n).\n \\end{equation}\n\\enthm\nThe proof of \\Cref{cor:covariance local refinement main} is in \\Cref{sec:cov op1}.\n The localization error bound obtained for DCDP in\n\\Cref{cor:covariance local refinement main} matches the sharpest error bounds obtained for the precision matrices change point model \\cite{liu2021jmlr, unify_sinica2022} and does not require the precision matrices to be sparse. To the best of our knowledge, DCDP is the first linear time algorithm that can optimally estimate multiple change points in the precision matrices in high dimensions.\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Numerical experiments}\n\\label{sec:experiment}\nWe evaluate the numerical performance of DCDP through examples of synthetic and real data. The tuning parameters $ \\gamma$ and $\\zeta$ of DCDP are chosen using cross-validation. The implementations of our numerical experiments are available online\n\\footnote{\\url{https:\/\/github.com\/MountLee\/DCDP}}.\nMore details, including the implementation for cross-validation and additional numerical results, can be found in \\Cref{sec:detail experiment} due to space constraints.\n\n \n\n\\subsection{Time complexity and accuracy of DCDP}\n\\label{sec:experiment_time_error_divide}\nWe generate i.i.d. Gaussian random variables $\\{y_i\\}_{i\\in[n]} \\subset \\mathbb R $ with $y_i = \\mu^*_i + \\epsilon_i$ and $\\sigma_\\epsilon =1$. We set $n=4\\Delta$ where $\\Delta $ will be specified in each setting. The three population change points of $\\{\\mu^*_i\\}_{i\\in[n]}$ are set to be $\\mu_{\\eta_0}^* = 0$, $\\mu_{\\eta_1}^* = 5$, $\\mu_{\\eta_2}^* = 0$, $\\mu_{\\eta_3}^* = 5$, where $\\eta_{k} = k\\Delta + \\delta_k$ with $\\delta_k\\sim{\\rm Unif[-\\frac{3}{10}\\Delta,\\frac{3}{10}\\Delta]}$ for $ k = 1,2,3$. We use the Hausdorff distance $H(\\{\\widehat{\\eta}_k\\}_{k\\in [\\widehat{K}]},\\{\\eta_k\\}_{k\\in [K]})$ to quantify the difference between the estimators and the true change points. \n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width = 0.48\n \\textwidth]{Figure\/Delta5000_time_error_vs_grid_num.pdf}\n \\caption{Average localization error and average run time versus the number of grid points $\\mclQ$ over 100 trials. The shaded area indicates the upper and lower 0.1 quantiles of the corresponding quantities. }\n\\label{fig:loc_runtime_vs_n_grid}\n\\end{figure}\nIn the first set of experiments, we set $ \\Delta =5000$ and vary $\\mathcal Q$ from $25$ to $200$, and summarize results in \\Cref{fig:loc_runtime_vs_n_grid}. The left plot of the figure shows that while the localization errors of the divide step are sensitive to the choice of $\\mathcal Q$, the additional conquer step (Algorithm 3) greatly improves the numerical accuracy of the final estimators of DCDP. The right plot of the figure demonstrates that the time complexity of DCDP is quadratic in $\\mathcal Q$, which is in line with the complexity analysis presented in \\Cref{sec:method}.\n\n\nIn the second set of experiments, we fix $\\mathcal Q = 100 $ and let $\\Delta$ range from 1000 to 6000. The results are summarized in \\Cref{fig:error_runtime_on_n}. 
The left plot of the figure shows that while the localization errors of the divide step change with $\Delta$, the localization error of the final DCDP estimators remains consistently small for all the different values of $\Delta$. The right plot of the figure shows that the time complexity is linear in $n$, and this observation matches the findings presented in \Cref{sec:method}.

\begin{figure}[ht]
    \centering
    \includegraphics[width = 0.48 \textwidth]{Figure/n_time_error_fine_gamma_Q100.pdf}
    \caption{
    Average localization error and average run time versus $\Delta$ over 100 trials. }
    \label{fig:error_runtime_on_n}
\end{figure}

\subsection{Numerical performance of DCDP}
\label{sec: experiment comparison}

Below we report the outcome of various simulation studies in which we compare the numerical performance of DCDP with that of several other state-of-the-art methods, for each of the three models presented in \Cref{section:main}.

In the following experiments,
for each specific $\Delta$ we set the total number of observations $n = (K + 1)\Delta$ and the locations of the true change points $\eta_k = k\Delta + \delta _k$, where $\delta_k$ is a random variable sampled from the uniform distribution ${\rm Unif}[-\frac{3}{10}\Delta,\frac{3}{10}\Delta]$.
In each setting, we conduct
100 trials and report the average execution time, the average Hausdorff distance between true and estimated change points, and the frequency of cases in which $\widehat{K}=K$, for each method.

{\bf The mean model}
\\
We set $K = 3$ and, for $k=0,\cdots, K$ and $\delta \in \{1, 5 \}$, we assume a population mean vector of the form
\begin{align*}
    { \bm \mu } ^*_{\eta_k} = ( \underbrace{ 0, \ldots, 0}_{5k}, \underbrace{ \delta , \ldots, \delta }_{5} , \underbrace{ 0,\ldots,0}_{p-5k-5})^\top \in \mathbb R^{p}.
\end{align*}

We compare DCDP with Change-Forest
(CF) \cite{changeforest2022}, Block-wise Fused Lasso (BFL) \cite{unify_sinica2022}, and Inspect \citep{wang_samworth2018}. The results are summarized in \Cref{tab:mean compare_methods}. On average, DCDP outputs the most accurate change point estimators while remaining computationally efficient.

\begin{table}[h]
    \centering
    \begin{tabular}{l l l c}
    \hline
      Method   & $H(\hat{ { \bm \eta } }, { \bm \eta } )$ & Time & $\widehat{\mathbb{P}}[\widehat{K}=K]$ \\
      \hline
      \multicolumn{4}{c}{$n = 200, p=100, K =3, \delta = 5$}
      \\
      DCDP  & 0.00 (0.00) & 0.6s (0.0) & 1.00 \\
      Inspect  & 0.40 (3.50) & 0.0s (0.0) & 0.91 \\
      CF & 1.84 (6.27) & 0.8s (0.2) & 0.90 \\
      BFL & 47.84 (6.69) & 1.4s (0.2) & 0.00 \\
      \hline
      \multicolumn{4}{c}{$n = 200, p=100, K =3, \delta = 1$}
      \\
      DCDP  & 0.83 (0.87) & 0.8s (0.2) & 1.00 \\
      Inspect  & 2.65 (5.16) & 0.0s (0.0) & 0.86 \\
      CF & 6.29 (9.57) & 1.1s (0.3) & 0.78 \\
      BFL & 47.19 (6.48) & 1.1s (0.2) & 0.00 \\
      \hline
    \end{tabular}
    \caption{Numerical comparison of different methods in the high-dimensional mean shift models.
The numbers in the cells indicate the averages over 100 trials and the numbers in the brackets indicate the corresponding standard errors.}
    \label{tab:mean compare_methods}
\end{table}

{\bf The linear regression model}
\\
We set $K = 3$ and, for $k=0,\cdots, K$, assume population regression coefficients of the form
\begin{align*}
    { \bm \beta } ^*_{\eta_k} = ( \underbrace{ 0, \ldots, 0}_{5k}, \underbrace{ \delta , \ldots, \delta }_{5} , \underbrace{ 0,\ldots,0}_{p-5k-5})^\top \in \mathbb R^{p},
\end{align*}
where $\delta \in \{1, 5 \}$.

We compare the numerical performance of DCDP with Variance-Projected Wild Binary Segmentation (VPWBS) \citep{wang2021_jmlr} and vanilla Dynamic Programming (DP) \citep{rinaldo2021cpd_reg_aistats}. The results are summarized in \Cref{tab:linear compare_methods}. On average, DCDP is the most efficient algorithm with compelling numerical accuracy.

\begin{table}[h]
    \centering
    \begin{tabular}{l l l c}
    \hline
      Method & $H(\hat{ { \bm \eta } }, { \bm \eta } )$ & Time & $\widehat{\mathbb{P}}[\widehat{K}=K]$
      \\
      \hline
      \multicolumn{4}{c}{$n = 200, p=100, K =3, \delta = 5$} \\
      DCDP & 0.13 (0.39) & 18.4s (1.1) & 1.00 \\
      DP & 0.01 (0.10) & 220.3s (16.8) & 0.98 \\
      VPWBS & 15.44 (17.99) & 120.1s (13.1) & 0.70 \\
      \hline
      \multicolumn{4}{c}{$n = 200, p=100, K =3, \delta = 1$} \\
      DCDP & 1.45 (8.59) & 8.8s (0.7) & 0.98 \\
      DP & 0.22 (2.00) & 84.4s (5.7) & 0.99 \\
      VPWBS & 11.54 (11.23) & 120.4s (14.5) & 0.65 \\
      \hline
    \end{tabular}
    \caption{Numerical comparison of different methods in the high-dimensional regression coefficient shift models.}
    \label{tab:linear compare_methods}
\end{table}

{\bf The Gaussian graphical model}
\\
We set $K=3$ and take the population covariance matrices to be $ { \bm \Sigma } ^*_{\eta_0}= { \bm \Sigma } ^*_{\eta_2} = {\bm I}_p$ and
$ { \bm \Sigma } ^*_{\eta_1}= { \bm \Sigma } ^*_{\eta_3}$, where
\begin{equation*}
( { \bm \Sigma } ^*_{\eta_1})_{ij} =( { \bm \Sigma } ^*_{\eta_3})_{ij} =\begin{cases}
    \delta_1, & i=j;\\
    \delta_2, & |i-j|=1;\\
    0, & \text{otherwise},
\end{cases}
\end{equation*}
with $\delta_1 = 5$ and $\delta_2 = 0.3$.

We compare the numerical performance of DCDP with Change-Forest
(CF) \cite{changeforest2022} and Block-wise Fused Lasso (BFL) \cite{unify_sinica2022}.
Note that the BFL algorithm produces an empty set in all trials, so we only report DCDP and CF in \Cref{tab:covariance compare_methods}.
It can be seen that on average DCDP outputs the most accurate change point estimates and is highly computationally efficient.

\begin{table}[h]
    \centering
    \begin{tabular}{l l l c}
    \hline
      Method & $H(\hat{ { \bm \eta } }, { \bm \eta } )$ & Time & $\widehat{\mathbb{P}}[\widehat{K}=K]$
      \\
      \hline
      \multicolumn{4}{c}{$n = 400, p=10, K =3, \delta_1 = 5,\delta_2 = 0.3$} \\
      DCDP & 0.42 (0.64) & 0.5s (0.0) & 1.00 \\
      CF & 5.54 (14.71) & 0.6s (0.1) & 0.88 \\
      \hline
      \multicolumn{4}{c}{$n = 400, p=20, K =3, \delta_1 = 5,\delta_2 = 0.3$} \\
      DCDP & 0.66 (4.37) & 0.9s (0.3) & 1.00 \\
      CF & 7.37 (18.76) & 1.0s (0.0) & 0.85 \\
      \hline
    \end{tabular}
    \caption{Numerical comparison of different methods in the precision matrix shift models.
}\n \\label{tab:covariance compare_methods}\n\\end{table}\n\n\n\n\n\n\n\n\n\\subsection{Real data analysis}\n\\label{sec: application}\nWe also applied DCDP to three popular real data examples and compared it with state-of-the-art methods. The results are deferred to \\Cref{sec:detail application} due to space constraints.\n\n \n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\nIn this paper, we propose a novel framework called DCDP for offline change point detection that can efficiently localize multiple change points for a broad range of high-dimensional models. DCDP improves the computational efficiency of vanilla dynamic programming while preserving the accuracy of change point estimation. DCDP serves as a unified methodology for a large family of change point models and theoretical guarantees for the localization errors of DCDP under three specific models are established. Extensive numerical experiments are conducted to compare the performance of DCDP with other popular methods to support our theoretical findings.\n\\subsection{Results with random grid points}\n\n\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{l l l l l l l}\n \\hline\n Setting & Method & $H(\\hat{\\eta},\\eta)$ & Time & $\\hat{K}K$\n \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=20, K =3, \\delta = 5$} \n & DCDP & 0.00 (0.00) & 0.7s (0.1) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=20, K =3, \\delta = 1$} \n & DCDP & 0.51 (0.77) & 0.7s (0.1) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=20, K =3, \\delta = 0.5$} \n & DCDP & 16.54 (10.33) & 0.2s (0.0) & 10 & 87 & 3 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=100, K =3, \\delta = 5$} \n & DCDP & 0.0 (0.0) & 0.6s (0.1) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=100, K =3, \\delta = 1$} \n & DCDP & 0.82 (0.88) & 0.9s (0.1) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 800, p=100, K =3, \\delta = 0.5$} \n & DCDP & 60.78 (31.08) & 2.0s (0.1) & 5 & 95 & 0 \\\\\n \\hline\n \\end{tabular}\n \\caption{Comparison of DCDP and other methods under the mean model with different simulation settings. 100 trials are conducted in each setting. }\n\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{table}[H]\n \\centering\n\\begin{tabular}{l l l l l l l}\n \\hline\n Setting & Method & $H(\\hat{\\eta},\\eta)$ & Time & $\\hat{K}K$\n \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=20, K =3, \\delta = 5$} \n & DCDP & 0.03 (0.17) & 2.4s (0.1) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=20, K =3, \\delta = 1$} \n & DCDP & 1.57 (7.90) & 2.2s (0.1) & 3 & 97 & 0 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=100, K =3, \\delta = 5$} \n & DCDP & 0.12 (0.38) & 14.1s (0.5) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{5cm}{$n = 200, p=100, K =3, \\delta = 1$} \n & DCDP & 2.20 (10.10) & 5.7s (0.2) & 4 & 96 & 0 \\\\\n \\hline\n \\end{tabular}\n \\caption{Comparison of DCDP and other methods under the linear model with different simulation settings. 100 trials are conducted in each setting. 
}\n\n\\end{table}\n\n\n\n\n\n\n\n\\begin{table}[h]\n \\centering\n\\begin{tabular}{l l l l l l l}\n \\hline\n Setting & Method & $H(\\hat{\\eta},\\eta)$ & Time & $\\hat{K}K$\n \\\\\n \\hline\n \\multirow{1}{7cm}{$n = 2000, p=5, K =3, \\delta_1 = 2,\\delta_2 = 0.3$} \n & DCDP & 5.10 (6.22) & 0.9s (0.2) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{7cm}{$n = 2000, p=10, K =3, \\delta_1 = 5,\\delta_2 = 0.3$} \n & DCDP & 0.27 (0.49) & 1.1s (0.1) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{7cm}{$n = 2000, p=20, K =3, \\delta_1 = 5,\\delta_2 = 0.3$} \n & DCDP & 0.03 (0.17) & 2.5s (0.3) & 0 & 100 & 0 \\\\\n \\hline\n \\multirow{1}{7cm}{$n = 400, p=10, K =3, \\delta_1 = 5,\\delta_2 = 0.3$} \n & DCDP & 0.38 (0.60) & 0.6s (0.1) & 0& 100& 0 \\\\\n \\hline\n \\multirow{1}{7cm}{$n = 400, p=20, K =3, \\delta_1 = 5,\\delta_2 = 0.3$} \n & DCDP & 0.68 (5.07) & 0.9s (0.1) & 0 & 100 & 0 \\\\\n \\hline\n \\end{tabular}\n \\caption{Comparison of DCDP and other methods under the covariance model with different simulation settings. 100 trials are conducted in each setting. }\n\n\\end{table}\n\n\n\\section{Detail of algorithms}\n\n\n \n \n\n\n\n\n\n\n\n\\section{Fundamental lemma}\n\\label{sec: fundamental lemma}\n\n\nIn the proof of localization error of the vanilla dynamic programming, we frequently compare the goodness-of-fit function $\\mclF( \\widehat \\theta_\\mclI , \\mclI)$ over an interval $\\mclI = (s,e]$ with\n\\[\n\\mclF(\\widehat \\theta_{(s,\\eta_{i + 1}] }, (s,\\eta_{i + 1}]) + \\cdots + \\mclF(\\widehat \\theta_{ (\\eta_{i + m},e]},(\\eta_{i + m},e]) + m\\gamma\n\\]\nwhere $\\{\\eta_{i + j}\\}_{j\\in [m]} = \\{\\eta_{\\ell}\\}_{\\ell\\in [K]}\\cap \\mclI$ is the collection of true change points within interval $\\mclI$ and $\\gamma$ is the penalty tuning parameter of the DP. \n\nHowever, for DCDP, we only search over the rough grid $\\{s_{i} = \\lfloor \\frac{i\\cdot n}{\\mclQ + 1}\\rfloor \\}_{i\\in [\\mclQ]}$ that may or may not contain any true change points. Therefore, we need to {\\bf a)} guarantee the existence of some reference points (contained in $\\{s_{i}\\}_{i\\in [\\mclQ]}$) that are close enough to true change points, and {\\bf b)} quantify the deviation of the goodness-of-fit function evaluated at the reference points compared to that evaluated at the true change points.\n\n\n\n\\paragraph{Reference points.} The grid is given by points $s_q = \\lfloor\\frac{q\\cdot n}{\\mclQ + 1}\\rfloor$ for $q\\in[ { \\mathcal Q} ]$.\n Let \n$ \\{ \\eta_k\\}_{k\\in[K]}$ be the collection of change points and denote \n\\begin{align*} \n \\mathcal L _k (\\delta) : = \\bigg \\{ \\{ s_q\\}_{q\\in[ { \\mathcal Q} ]} \\bigcap [\\eta_k -\\delta ,\\eta_k ] \\not =\\emptyset \\bigg \\} , \\quad \\text{and} \\quad \\mathcal R _k (\\delta) : = \\bigg \\{ \\{ s_q\\}_{q\\in[ { \\mathcal Q} ]} \\bigcap [\\eta_k ,\\eta_k + \\delta ] \\not =\\emptyset \\bigg \\} .\n\\end{align*} \nIntuitively, if $ s_q \\in [\\eta_k -\\delta ,\\eta_k ]$ and $ s_{q'} \\in [\\eta_k ,\\eta_k + \\delta ]$ , then $s_q, s_{q'}$ can serve as reference points of the true change point $\\eta_k$. \nDenote \n\\begin{align} \n\\label{eq:left and right approximation of change points} \\mathcal L(\\delta) : = \\bigcap_{k=1}^{K} \\mathcal L _k \\big ( \\delta \\big ) \n\\quad \\text{and} \\quad \n\\mathcal R (\\delta) : = \\bigcap_{k=1}^{K} \\mathcal R _k \\big ( \\delta \\big ) . 
\n\\end{align} \n\nThen it is straightforward to see that both events $\\mathcal L(\\delta)$ and $\\mathcal R(\\delta)$ will hold as long as $\\min_{q\\in [\\mclQ + 1]}|s_q - s_{q-1}|<\\frac{\\delta}{2}$, which is guaranteed if $\\mclQ > 3\\frac{n}{\\delta}$. For the proofs in \\Cref{sec: main proof mean}, \\Cref{sec: main proof linear}, and \\Cref{sec: main proof covariance}, we require that $\\mathcal L(\\mclB_n^{-1}\\Delta_{\\min})$ and $\\mathcal R(\\mclB_n^{-1}\\Delta_{\\min})$ hold. Therefore, for the theoretical results in \\Cref{section:main} to hold, $\\mclQ$ should satisfy that\n\\begin{equation*}\n \\mclQ > \\frac{3n}{\\Delta_{\\min}}\\mclB_n.\n\\end{equation*}\nSince in our paper, $\\{\\mclB_n\\}_{n\\in \\mclZ^+}$ is a slowly diverging sequence, we can take it as $\\mclB_n = \\log(n)$ and then it suffices to take $\\mclQ = \\frac{4n}{\\Delta_{\\min}}\\log^2(n)$.\n\nUnder the fixed-$K$ setting of paper and when $\\{\\Delta_k\\}_{k\\in [K]}$ are of the same order, the existence of reference points will be guaranteed as long as $\\mclQ > 4\\log^2(n)$.\n\n\n\\paragraph{Goodness-of-fit.} The deviation of goodness-of-fit functions at reference points are different from the one that occurs in the proof of the vanilla DP, because the fitted parameters would have some bias since reference points may not locate at true change points. For different models, the deviation of the goodness-of-fit has different orders. We need to analyze each model separately. The deviations are described in \\Cref{lem:mean one change deviation bound}, \\Cref{lem:regression one change deviation bound}, and \\Cref{lem:cov one change deviation bound}.\n\n\\paragraph{Complexity analysis.} In \\Cref{lem: complexity divide step} we analyze the complexity of the divide step.\n\n\\bnlem[Complexity of the divide step]\n\\label{lem: complexity divide step}\nUnder all three models in \\Cref{section:main}, with a memorization technique, the computation complexity of \\Cref{algorithm:DP} would be $O(n\\mclQ\\cdot \\mclC_2(p))$.\n\\enlem\n\\bprf\nFor generality, suppose $\\{s_i\\}_{i\\in [\\mclQ]}$ is an arbitrary grid of integers over $(0,n)$, i.e., $00$, with probability at least $1 - (n\\vee p)^{-5}$,\n$$\n\\|\\sum_{i\\in \\I}\\epsilon_i\\|_{\\infty}\\leq C\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(n\\vee p)}\\leq \\frac{\\lambda}{4}\\sqrt{|\\I|},\n$$\nas long as $C_{\\lambda}$ is sufficiently large. 
Therefore, based on the sparsity assumption in \\Cref{assp: DCDP_mean}, it holds that\n\\begin{align*}\n &\\frac{\\lambda}{2}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1 + \\lambda[\\|{\\mu}^*_{\\I}\\|_1 - \\|\\widehat{\\mu}_{\\I}\\|_1]\\geq 0 \\\\\n \\Rightarrow & \\frac{\\lambda}{2}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1 + \\lambda[\\|({\\mu}^*_{\\I})_S\\|_1 - \\|(\\widehat{\\mu}_{\\I})_S\\|_1]\\geq \\lambda \\|(\\widehat{\\mu}_{\\I})_{S^c}\\|_1\\\\\n \\Rightarrow & \\frac{\\lambda}{2}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1 + \\lambda\\|({\\mu}^*_{\\I}-\\widehat{\\mu}_{\\I})_S\\|_1 \\geq \\lambda \\|(\\mu^*_{\\I} - \\widehat{\\mu}_{\\I})_{S^c}\\|_1\\\\\n \\Rightarrow & 3\\|({\\mu}^*_{\\I}-\\widehat{\\mu}_{\\I})_S\\|_1\\geq \\|({\\mu}^*_{\\I}-\\widehat{\\mu}_{\\I})_{S^c}\\|_1.\n\\end{align*}\nNow from \\Cref{tmp_eq:mean_concentration} we can get\n\\begin{align*}\n |\\I|\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2^2\\leq &\\frac{3\\lambda}{2}\\sqrt{|\\I|}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1\\\\\n \\leq & \\frac{12\\lambda}{2}\\sqrt{|\\I|}\\|(\\widehat{\\mu}_{\\I} - \\mu^*_{\\I})_S\\|_1\\\\\n \\leq & 6\\lambda \\sqrt{ { \\mathfrak{s} } } \\sqrt{|\\I|}\\|(\\widehat{\\mu}_{\\I} - \\mu^*_{\\I})_S\\|_2\\\\\n \\leq & 6\\lambda \\sqrt{ { \\mathfrak{s} } } \\sqrt{|\\I|}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2,\n\\end{align*}\nwhich implies that\n$$\n\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2\\leq 6C_{\\lambda}\\sigma_{\\epsilon}\\sqrt{\\frac{ { \\mathfrak{s} } \\log(n\\vee p)}{|\\I|}}.\n$$\nThe other inequality follows accordingly.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Technical lemmas}\nThroughout this section, let $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP}.\n \n\n\n\\bnlem[No change point]\n\\label{lem:mean loss deviation no change point}\nLet $\\I\\subset [1,\\ldots, n]$ be any interval that contains no change point. Then for any interval ${ \\mathcal J } \\supset \\I$, it holds with probability at least $1 - (n\\vee p)^{-5}$ that\n\\begin{equation*}\n \\mclF(\\mu^*_{\\I},\\I)\\leq \\mclF(\\widehat{\\mu}_{ \\mathcal J } ,\\I) + C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p).\n\\end{equation*}\n\\enlem\n\\begin{proof}\n{\\bf Case 1}. If $|\\I|< C_\\mclF\\sigma_{\\epsilon} { \\mathfrak{s} } \\log(n\\vee p)$, then by definition, we have $\\mclF(\\mu^*_{\\I},\\I) = \\mclF(\\widehat \\mu^*_{{ \\mathcal J } },\\I) = 0$ and the inequality holds.\n\n{\\bf Case 2}. 
If $|\\I|\\geq C_\\mclF\\sigma_{\\epsilon} { \\mathfrak{s} } \\log(n\\vee p)$, then take difference and we can get\n\\begin{align*}\n &\\sum_{i\\in \\I}\\|X_i - \\mu^*_i\\|_2^2 - \\sum_{i\\in \\I}\\|X_i - \\widehat{\\mu}_{ \\mathcal J } \\|_2^2 \\\\\n =& 2(\\widehat \\mu_{{ \\mathcal J } } - \\mu^*_{\\I})^\\top\\sum_{i\\in \\I}\\epsilon_i -|\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2\\\\\n \\leq & 2(\\|(\\widehat \\mu_{{ \\mathcal J } } - \\mu^*_{\\I})_S\\|_1 + \\|(\\widehat \\mu_{{ \\mathcal J } } - \\mu^*_{\\I})_{S^c}\\|_1)\\|\\sum_{i\\in \\I}\\epsilon_i\\|_{\\infty} -|\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2\\\\\n \\leq& c_1\\|\\widehat \\mu_{{ \\mathcal J } } - \\mu^*_{\\I}\\|_2\\sigma_{\\epsilon}\\sqrt{ { \\mathfrak{s} } |\\I|\\log(n\\vee p)} + c_2\\sigma_{\\epsilon} { \\mathfrak{s} } \\sqrt{\\frac{\\log(n\\vee p)}{|\\I|}}\\cdot c_1\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(n\\vee p)} - |\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2 \\\\\n \\leq & \\frac{1}{2}|\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2 + 2c_1^2\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + c_2\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) - |\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2\\\\\n \\leq & C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p),\n\\end{align*}\nwhere in the second inequality we use the definition of the index set $S$ and \\Cref{lem:estimation_high_dim_mean}.\n\\end{proof}\n\n\n\n\n\n\n \n\\bnlem[Single change point]\n\\label{lem:mean single cp}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \nLet $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $\\I$ contains exactly one change point $ \\eta_k $. \n Then with probability at least $1-(n\\vee p)^{-3}$, it holds that \n $$\\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim \\sigma_{\\epsilon}^2\\bigg( \\frac{ { \\mathfrak{s} } \\log(n\\vee p) +\\gamma }{\\kappa_k^2 }\\bigg) + { \\mathcal B_n^{-1} \\Delta } .$$\n \\enlem\n\\begin{proof} \nIf either $ \\eta_k -s \\le { \\mathcal B_n^{-1} \\Delta } $ or $e-\\eta_k\\le { \\mathcal B_n^{-1} \\Delta } $, then there is nothing to show. So assume that \n$$ \\eta_k -s > { \\mathcal B_n^{-1} \\Delta } \\quad \\text{and} \\quad e-\\eta_k > { \\mathcal B_n^{-1} \\Delta } . $$\nBy event $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } )$, there exists $ s_ u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $ such that \n$$0\\le s_u - \\eta_k \\le { \\mathcal B_n^{-1} \\Delta } . 
$$\n So \n$$ \\eta_k \\le s_ u \\le e .$$\nDenote \n$$ \\I_ 1 = (s,s_u] \\quad \\text{and} \\quad \\I_2 = (s_u, e] .$$\nSince \n$s, e, s_u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, it follows that \n\\begin{align}\\nonumber\n\\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I \\|_2 ^2 \\le & \\sum_{i \\in \\I_1 }\\|X_i - \\widehat \\mu_{\\I_1} \\|_2 ^2 + \\sum_{i \\in \\I_2 }\\|X_i - \\widehat \\mu_{\\I_2} \\|_2 ^2 + \\gamma \n\\\\\\nonumber \n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i \\|_2 ^2 + C_1 \\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + (s_u -\\eta_k) \\kappa_k^2 \\big ) \n\\\\ \\nonumber\n& + \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i \\|_2 ^2 + C_1\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\gamma\n\\\\ \\nonumber \n=& \\sum_{i \\in \\I }\\|X_i - \\mu^*_i \\|_2 ^2 +C_2 \\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + (s_u -\\eta_k) \\kappa_k^2 \\big )+ \\gamma \n\\\\\n\\le & \\sum_{i \\in \\I }\\|X_i - \\mu^*_i \\|_2 ^2 +C_2 \\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 \\big ) + \\gamma , \\label{eq:one change point step 1}\n\\end{align}\nwhere the first inequality follows from the fact that $ \\I =(s ,e ] \\in \\widehat { \\mathcal P} $ and so it is the local minimizer, the second inequality follows from \\Cref{lem:mean one change deviation bound} {\\bf a} and {\\bf b} and the observation that \n$$ \\eta_k -s > { \\mathcal B_n^{-1} \\Delta } \\ge s_u -\\eta_k $$ \nDenote \n$$ { \\mathcal J } _1 = (s,\\eta_k ] \\quad \\text{and} \\quad { \\mathcal J } _2 = (\\eta_k , e] .$$\n\\Cref{eq:one change point step 1} gives \n\\begin{align*}\n\\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\widehat \\mu_\\I \\|_2 ^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\widehat \\mu_\\I \\|_2 ^2 \n \\le \\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\mu^*_{ { \\mathcal J } _1 } \\|_2 ^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\mu^*_{ { \\mathcal J } _2 } \\|_2 ^2 +C_2 \\big (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 \\big ) + \\gamma ,\n\\end{align*}\nwhich leads to \n\\begin{align*}\n&\\sum_{i \\in { \\mathcal J } _1 }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _1 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _2 } \\|_2^2 \n\\\\\n \\leq & 2 \\sum_{i \\in { \\mathcal J } _1 } \\epsilon_i^\\top (\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _1 } ) +2 \\sum_{i \\in { \\mathcal J } _2 } \\epsilon_i^\\top (\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _2 } ) +C_2 \\big (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\big ) + \\gamma \n \\\\\n \\leq & 2\\sigma_{\\epsilon}\\sum_{j = 1,2}\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _j }\\|_2 \\sqrt{|{ \\mathcal J } _j|\\log(n\\vee p)}+ C_2 \\big( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\big ) + \\gamma \\\\\n \\leq & \\frac{1}{2} \\sum_{j = 1,2}|{ \\mathcal J } _j|\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _j }\\|_2^2 + C_3 \\big (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\big ) + \\gamma,\n\\end{align*} \nwhere the second inequality holds because the Orlicz norm $\\|\\cdot\\|_{\\psi_2}$ of $\\sum_{i \\in { \\mathcal J } _1 } \\epsilon_i^\\top (\\mu_\\I - \\mu^*_{ { \\mathcal J } _1 } )$ is upper bounded by $|{ \\mathcal J } 
_1|\\sigma^2_{\\epsilon}\\|\\mu_\\I - \\mu^*_{ { \\mathcal J } _1 }\\|_2^2$.\n\nIt follows that \n$$ |{ \\mathcal J } _1 |\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _1 }\\|_2 ^2 + |{ \\mathcal J } _2 |\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _2 }\\|_2 ^2 = \\sum_{i \\in { \\mathcal J } _1 }\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _1 }\\|_2 ^2 +\\sum_{i \\in { \\mathcal J } _2 }\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _2 } \\|_2^2 \\le C_4\\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 \\big )+ 2\\gamma .$$\nNote that \n$$ \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _1 |\\| a - \\mu^*_{{ \\mathcal J } _1 }\\|_2 ^2 + |{ \\mathcal J } _2 |\\| a - \\mu^*_{{ \\mathcal J } _2 }\\| ^2 = \\kappa_k ^2 \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{| \\I| } \\ge \\frac{ \\kappa_k^2 }{2} \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} . $$ \nThis leads to \n$$ \\frac{ \\kappa_k^2 }{2}\\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\le C_4\\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + \\gamma \\big ) ,$$\nwhich is \n$$ \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\le C_5 \\bigg( \\frac{\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\gamma }{\\kappa_k^2 } + { \\mathcal B_n^{-1} \\Delta } \\bigg) .$$\n \\end{proof} \n \n \n \\bnlem [Two change points]\n \\label{lem:mean two change points}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \nLet $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be an interval that contains exactly two change points $ \\eta_k,\\eta_{k+1} $. Suppose in addition that \n\\begin{align} \\label{eq:1D two change points snr}\n\\Delta_{\\min} \\kappa^2 \\ge C { \\mathcal B_n} ^{1\/2} \\big(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\gamma) \n\\end{align} \nfor sufficiently large constant $C $. \n Then with probability at least $1-(n\\vee p)^{-3}$, it holds that \n\\begin{align*} \n \\eta_k -s \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n e-\\eta_{k+1} \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } . \n \\end{align*} \n \\enlem \n\\begin{proof} \nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, let $ s_u, s_v$ be such that \n $\\eta_k \\le s_u \\le s_v \\le \\eta_{k+1} $ and that \n $$ 0 \\le s_u-\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } , \\quad 0\\le \\eta_{k+1} - s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
$$\n\n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n\n \\node[color=black] at (-2.3,-0.3) {\\small $\\eta_{k+1}$};\n\\draw(-2.3 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_v$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center} \n\nDenote \n$$ \\mathcal I_ 1= ( s, s _u], \\quad \\I_2 =(s_u, s_v] \\quad \\text{and} \\quad \\I_3 = (s_v,e]. $$\nIn addition, denote \n$$ { \\mathcal J } _1 = (s,\\eta_k ], \\quad { \\mathcal J } _2=(\\eta_k, \\eta_{k} + \\frac{ \\eta_{k+1} -\\eta_k }{2}], \\quad { \\mathcal J } _3 = ( \\eta_k+ \\frac{ \\eta_{k+1} -\\eta_k }{2},\\eta_{k+1 } ] \\quad \\text{and} \\quad { \\mathcal J } _4 = (\\eta_{k+1} , e] .$$\nSince \n$s, e, s_u ,s_v \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, then it follows from the definition of $\\widehat {\\mclP}$ that \n\\begin{align}\\nonumber\n& \\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I \\|_2^2\n\\\\ \\nonumber \n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\widehat \\mu_{\\I_1} \\|_2 ^2 + \\sum_{i \\in \\I_2 }\\|X_i - \\widehat \\mu_{\\I_2}\\|_2^2 + \\sum_{i \\in \\I_3 }\\|X_i - \\widehat \\mu_{\\I_3}\\|_2^2 +2 \\gamma \n\\\\\\nonumber \n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i\\|_2^2 + C_1 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) }\\kappa_k^2 \\bigg ) +\n \\sum_{i \\in \\I_2 }\\|X_i - \\mu^*_i\\|_2^2 +C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)\n \\\\ \\nonumber \n + & \\sum_{i \\in \\I_3 }\\|X_i - \\mu^*_i\\|_2^2 + C_1 \\bigg( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)+ \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg) +2 \\gamma\n\\\\ \n\\le & \\sum_{i \\in \\I }\\|X_i - \\mu^*_i\\|_2^2 +C_1' \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) }\\kappa_k^2 + \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg )+2 \\gamma \n \\label{eq:two change points step 1}\n\\end{align}\nwhere the first inequality follows from the fact that $ \\I =(s ,e ] \\in \\widehat { \\mathcal P} $, the second inequality follows from \\Cref{lem:mean one change deviation bound} {\\bf a} and {\\bf b}. 
\n\\Cref{eq:two change points step 1} gives \n\\begin{align} \\nonumber \n& \\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 +\\sum_{i \\in { \\mathcal J } _3 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 +\\sum_{i \\in { \\mathcal J } _4 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n\\\\ \\nonumber \n \\le& \\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\mu^*_{ { \\mathcal J } _1 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\mu^*_{ { \\mathcal J } _2 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _3 }\\|X_i - \\mu^*_{ { \\mathcal J } _3 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _4 }\\|X_i - \\mu^*_{ { \\mathcal J } _4 }\\|_2^2 \\\\\n + & C_1' \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) }\\kappa_k^2 + \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg ) + 2\\gamma. \\label{eq:1D two change points first}\n\\end{align}\nNote that for $\\ell\\in \\{1,2,3,4 \\}$,\n\\begin{align}\\nonumber \n &\\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n - \\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\mu^*_{ { \\mathcal J } _ \\ell }\\|_2^2 -\\sum_{i \\in { \\mathcal J } _ \\ell }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ \\ell }\\|_2^2 \n\\\\\n=& \\nonumber \n 2 \\sum_{i \\in { \\mathcal J } _ \\ell } \\epsilon_i^\\top ( \\mu^*_{ { \\mathcal J } _\\ell }-\\widehat \\mu_\\I ) \n\\\\\\nonumber \n\\ge & - C\\sigma_{\\epsilon} \\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _\\ell }\\|_2 \\sqrt{|{ \\mathcal J } _\\ell|\\log(n\\vee p)}\n\\\\\n\\ge & -\\frac{1}{2}|{ \\mathcal J } _\\ell| \\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _\\ell }\\|_2^2 - C'\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p). \\nonumber \n\\end{align} \nwhich gives \n\\begin{align} \\label{eq:1D two change points second}\n &\\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\widehat \\mu_\\I \\|_2^2 \n - \\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\mu^*_{ { \\mathcal J } _ \\ell }\\|_2^2 \\ge \\frac{1}{2}\\sum_{i \\in { \\mathcal J } _ \\ell }\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ \\ell }\\|_2^2\n- C_2\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) .\n\\end{align} \n\\Cref{eq:1D two change points first} and \n\\Cref{eq:1D two change points second} together implies that\n \n\\begin{align} \\label{eq:1D two change points third} \\sum_{l=1}^ 4 \n |{ \\mathcal J } _l | (\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _\\ell } ) ^2 \\le C_3 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) +\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\kappa_k^2 +\\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma . 
\n \\end{align}\nNote that \n\\begin{align} \\label{eq:1D two change points signal lower bound} \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _1 |( a - \\mu^*_{{ \\mathcal J } _1 }) ^2 + |{ \\mathcal J } _2 |( a - \\mu^*_{{ \\mathcal J } _2 }) ^2 =& \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\kappa_k ^2.\n\\end{align} \nSimilarly\n\\begin{align} \\label{eq:1D two change points signal lower bound 2} \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _3 |( a - \\mu^*_{{ \\mathcal J } _3 }) ^2 + |{ \\mathcal J } _4 |( a - \\mu^*_{{ \\mathcal J } _4 }) ^2 = \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{|{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\kappa_{k+1} ^2 ,\\end{align} \n\\Cref{eq:1D two change points third} together with \n\\Cref{eq:1D two change points signal lower bound} and \\Cref{eq:1D two change points signal lower bound 2} leads to \n\\begin{align}\n\\label{eq:conclusion of two change points} \n \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\kappa_k ^2 + \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{|{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\kappa_{k+1} ^2 \\le C_3 \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) +\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\kappa_k^2 +\\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma .\n \\end{align}\n Note that \n$$ 0\\le s_u -\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{and} \\quad \n0 \\le \\eta_{k+1} -s_v \\le { \\mathcal B_n^{-1} \\Delta } ,\n $$ \nand so there are four possible cases. \n\n{\\bf case a.} \nIf \n$$|{ \\mathcal J } _1| \\le { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n |{ \\mathcal J } _4| \\le { \\mathcal B_n^{-1\/2} \\Delta } , $$ \n then the desired result follows immediately. \n\n{\\bf case b.} $|{ \\mathcal J } _1| > { \\mathcal B_n^{-1\/2} \\Delta } $ and $|{ \\mathcal J } _4| \\le { \\mathcal B_n^{-1\/2} \\Delta } $. Then since $|{ \\mathcal J } _2| \\ge \\Delta_{\\min} \/2 $, it holds that \n$$ \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\ge \\frac{1}{2 }\\min\\{|{ \\mathcal J } _1| , |{ \\mathcal J } _2| \\} \\ge \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } . $$\nIn addition,\n$$\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\le s_u -\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{and} \\quad \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\le \\eta_{k+1} -s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
\n$$\nSo \\Cref{eq:conclusion of two change points} leads to \n\\begin{align}\\label{eq:conclusion of two change points case two} \n\\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa_k ^2 + \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{|{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\kappa_{k+1} ^2 \\le C_3 \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + { \\mathcal B_n^{-1} \\Delta } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma .\n \\end{align} \nSince $ \\kappa_k \\asymp \\kappa$ and $ \\kappa_{k+1} \\asymp \\kappa$, \\Cref{eq:conclusion of two change points case two} gives \n$$ \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa ^2 \\le C_4 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 \\bigg ) +4 \\gamma . $$\nSince $ { \\mathcal B_n} $ is a diverging sequence, the above display gives \n$$ \\Delta_{\\min} \\kappa ^2 \\le C_5 { \\mathcal B_n} ^{1\/2 } (\\log(n\\vee p) + \\gamma ).$$\nThis contradicts \\Cref{eq:1D two change points snr}.\n\n{\\bf case c.} $|{ \\mathcal J } _1| \\le { \\mathcal B_n^{-1\/2} \\Delta } $ and \n$|{ \\mathcal J } _4| > { \\mathcal B_n^{-1\/2} \\Delta } $. Then the same argument as that in {\\bf case b} leads to the same contradiction.\n\n {\\bf case d.} $|{ \\mathcal J } _1| > { \\mathcal B_n^{-1\/2} \\Delta } $ and $|{ \\mathcal J } _4| > { \\mathcal B_n^{-1\/2} \\Delta } $. Then since $|{ \\mathcal J } _2|\\ge \\Delta_{\\min} \/2 , |{ \\mathcal J } _4| \\ge \\Delta_{\\min} \/2 $, it holds that \n$$ \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\ge \\frac{1}{2 }\\min\\{|{ \\mathcal J } _1| , |{ \\mathcal J } _2| \\} \\ge \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{ |{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\ge \\frac{1}{2 }\\min\\{|{ \\mathcal J } _3| , |{ \\mathcal J } _4| \\} \\ge \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } $$\nIn addition, \n$$\\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\le \\eta_{k+1} -s_v \\le { \\mathcal B_n^{-1} \\Delta } \\quad \n\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\le s_u -\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } . \n$$\nSo \\Cref{eq:conclusion of two change points} leads to \n\\begin{align}\\label{eq:conclusion of two change points case four} \n\\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa_k ^2 + \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa_{k+1} ^2 \\le C_6 \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + { \\mathcal B_n^{-1} \\Delta } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma .\n \\end{align} \nNote that $ { \\mathcal B_n} $ is a diverging sequence. So the above display gives \n$$ \\Delta_{\\min} \\big ( \\kappa_{k} ^2+ \\kappa_{k+1}^2 \\big) \\le C_ 7 { \\mathcal B_n} ^{1\/2 } (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\gamma ) $$\nSince $ \\kappa_k \\asymp \\kappa$ and $ \\kappa_{k+1} \\asymp \\kappa$. This contradicts \\Cref{eq:1D two change points snr}. 
\n \\end{proof}\n\n\n\n \\bnlem[Three or more change points]\n \\label{lem:mean three or more cp}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \n Suppose in addition that \n\\begin{align} \\label{eq:1D three change points snr}\n\\Delta \\kappa^2 \\ge C \\big(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\gamma) \n\\end{align} \nfor sufficiently large constant $C $. \n Then with probability at least $1-(n\\vee p)^{-3}$, there is no interval $ \\widehat { \\mathcal P} $ containing three or more true change points. \n \\enlem \n \n\n\n\\begin{proof} \nFor contradiction, suppose $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $ \\{ \\eta_1, \\ldots, \\eta_M\\} \\subset \\I $ with $M\\ge 3$. Throughout the proof, $M$ is assumed to be a parameter that can potentially change with $n$. \nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, by relabeling $\\{ s_q\\}_{q=1}^ { \\mathcal Q} $ if necessary, let $ \\{ s_m\\}_{m=1}^M $ be such that \n $$ 0 \\le s_m -\\eta_m \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{for} \\quad 1 \\le m \\le M-1 $$ and that\n $$ 0\\le \\eta_M - s_M \\le { \\mathcal B_n^{-1} \\Delta } .$$ \n Note that these choices ensure that $ \\{ s_m\\}_{m=1}^M \\subset \\I . $\n \n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_1$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_1$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n\n \\node[color=black] at (-5,-0.3) {\\small $\\eta_2$};\n\\draw(-5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-4.5,-0.3) {\\small $s_2$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-4.5,0) }; \n \n\n \\node[color=black] at (-2.5,-0.3) {\\small $\\eta_3$};\n\\draw(-2.5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_3$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center}\n\n{\\bf Step 1.}\n Denote \n $$ \\mathcal I_ 1= ( s, s _1], \\quad \\I_m =(s_{m-1} , s_m] \\text{ for } 2 \\le m \\le M \\quad \\text{and} \\quad \\I_{M+1} = (s_M,e]. 
$$\nThen since \n$ s , e, \\{ s_m \\}_{m=1}^M \\subset \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, it follows that \n\\begin{align}\\nonumber\n& \\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I\\|_2^2\n\\\\ \\nonumber \n\\le & \\sum _{m=1}^{M+1 } \\sum_{i \\in \\I_m}\\|X_i - \\widehat y_{\\I_m } \\|_2^2 + M \\gamma \n\\\\ \\label{eq: 1D three change points deviation term 1}\n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i\\|_2^2 + C_1 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\frac{ (\\eta_1 -s ) ( s_1-\\eta_ 1) }{ s_1-s }\\kappa_1^2 \\bigg )\n\\\\ \\label{eq: 1D three change points deviation term 2} \n+ & \n \\sum_{m=2}^{M-1} \\sum_{i \\in \\I_m }\\|X_i - \\mu^*_i\\|_2^2 +C_1 \\bigg(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)+ \\frac{(\\eta_m -s_{m-1} )(s_m-\\eta_{m } ) }{ s _{m}-s _{m-1} } \\kappa_m^2 \\bigg ) \n \\\\ \\label{eq: 1D three change points deviation term 3}\n + & C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)\n\\\\ \\label{eq: 1D three change points deviation term 4}\n+ & \\sum_{i\\in \\I_{M+1}}\\|X_i-\\mu^*_i\\|_2^2+ C_1 \\bigg( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\frac{(\\eta_M -s_{M} )(e-\\eta_{M} ) }{ e-s _{M} } \\kappa_{M} ^2 \\bigg ) + M\\gamma, \n \\end{align} \n where Equations \\eqref{eq: 1D three change points deviation term 1}, \\eqref{eq: 1D three change points deviation term 2}\n \\eqref{eq: 1D three change points deviation term 3} and \\eqref{eq: 1D three change points deviation term 4} follow from \\Cref{lem:mean one change deviation bound} and in particular, \\Cref{eq: 1D three change points deviation term 3} corresponds to the interval $\\I _M = (s_{M-1},s_M] $ which by assumption containing no change points. \n Note that \n\\begin{align*} & \\frac{ (\\eta_1 -s ) ( s_1-\\eta_ 1) }{ s_1-s } \\le s_1-\\eta_1 \\le { \\mathcal B_n^{-1} \\Delta } ,\n\\\\ \n& \\frac{(\\eta_m -s_{m-1} )(s_m-\\eta_{m } ) }{ s _{m}-s _{m-1} } \\le s_m-\\eta_m \\le { \\mathcal B_n^{-1} \\Delta } , \\ \\text{ and } \n\\\\\n& \\frac{(\\eta_M -s_{M} )(e-\\eta_{M} ) }{ e-s _{M} } \\le \\eta_M-s_m \\le { \\mathcal B_n^{-1} \\Delta } \n \\end{align*} \n and that \n $ \\kappa_k \\asymp \\kappa $ for all $ 1\\le k \\le K$. Therefore \n\\begin{equation}\n \\label{eq:1D three change points step 1}\n \\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \\le \\sum _{ i \\in \\I } \\|X_i - \\mu^*_i\\|_2^2 + C_2 \\bigg( M \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)+ M { \\mathcal B_n^{-1} \\Delta } \\kappa^2\n+ M\\gamma \\bigg),\n\\end{equation}\nwhere $ C_2$ is some large constant independent of $M$.\n\n{\\bf Step 2.} Let \n$$ { \\mathcal J } _1 =(s, \\eta_1], \\ { \\mathcal J } _m = (\\eta_{m-1}, \\eta_m] \\text{ for } 2 \\le m \\le M , \\ { \\mathcal J } _{M+1} =(\\eta_M, e]. $$\nNote that $\\mu^*_i$ is unchanged in any of $\\{ { \\mathcal J } _m\\}_{m=0}^{M+1}$. 
\nSo for $ 1 \\le m \\le M+1 $,\n\\begin{align}\\nonumber \n &\\sum_{i \\in { \\mathcal J } _m }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n - \\sum_{i \\in { \\mathcal J } _ m }\\|X_i - \\mu^*_{ { \\mathcal J } _ m }\\|_2^2 -\\sum_{i \\in { \\mathcal J } _ m }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \n\\\\\n=& \\nonumber \n 2 \\sum_{i \\in { \\mathcal J } _ m} \\epsilon_i^\\top (\\mu^*_{{ \\mathcal J } _ m}-\\widehat \\mu_\\I) \n\\\\\\nonumber \n\\ge & - C\\sigma_{\\epsilon}\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _ m }\\|_2\\sqrt{|{ \\mathcal J } _m|\\log(n\\vee p)} \n\\\\\n\\ge & - C_3 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) - \\frac{1 }{2 } |{ \\mathcal J } _ m |\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _ m }\\|_2^2 \\nonumber \n\\end{align} \nwhich gives \n\\begin{align} \\label{eq:1D three change points second}\n &\\sum_{i \\in { \\mathcal J } _m }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n - \\sum_{i \\in { \\mathcal J } _ m }\\|X_i - \\mu^*_{ { \\mathcal J } _ m }\\|_2^2 \\geq \\frac{1}{2}\\sum_{i \\in { \\mathcal J } _ m }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2 ^2 - C_3\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p).\n\\end{align} \nTherefore\n\\begin{align} \\label{eq:1D three change points third} \\sum_{ m=1}^{M+1} |{ \\mathcal J } _m |\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 = \\sum_{ m=1 }^{M+1} \\sum_{i \\in { \\mathcal J } _ m }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \\le C_4 M\\bigg(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa^2\n+ \\gamma \\bigg),\n\\end{align}\nwhere the equality follows from the fact that $\\mu^*_i$ is unchanged in any of $\\{ { \\mathcal J } _m\\}_{m=0}^{M+1}$, and\n the inequality follows from \\Cref{eq:1D three change points step 1} and \n\\Cref{eq:1D three change points second}.\n\\\\\n\\\\\n{\\bf Step 3.}\nFor any $ m \\in\\{2, \\ldots, M\\}$, it holds that\n\\begin{align} \\label{eq:1D three change points signal lower bound} \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _{m-1} |\\|a - \\mu^*_{{ \\mathcal J } _{m-1} }\\|_2^2 + |{ \\mathcal J } _{m} |\\|a - \\mu^*_{{ \\mathcal J } _{m} }\\|_2^2 =& \\frac{|{ \\mathcal J } _{m-1}| |{ \\mathcal J } _m|}{ |{ \\mathcal J } _{m-1}| + |{ \\mathcal J } _m| } \\kappa_m ^2 \\ge \\frac{1}{2} \\Delta_{\\min} \\kappa^2,\n\\end{align} \nwhere the last inequality follows from the assumptions that $\\eta_k - \\eta_{k-1}\\ge \\Delta_{\\min} $ and $ \\kappa_k \\asymp \\kappa$ for all $1\\le k \\le K$. So\n\\begin{align} \\nonumber &2 \\sum_{ m=1}^{M } |{ \\mathcal J } _m |\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \n\\\\ \n\\ge & \\nonumber \\sum_{m=2}^M \\bigg( |{ \\mathcal J } _{m-1} | \\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ { m-1} }\\|_2^2 + |{ \\mathcal J } _m |\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \\bigg) \n\\\\ \\label{eq:1D three change points signal lower bound two} \n\\ge & (M-1) \\frac{ 1}{2} \\Delta_{\\min} \\kappa^2 \\ge \\frac{M}{4} \\Delta_{\\min} \\kappa^2 ,\n\\end{align} \nwhere the second inequality follows from \\Cref{eq:1D three change points signal lower bound} and the last inequality follows from $M\\ge 3$. 
\\Cref{eq:1D three change points third} and \\Cref{eq:1D three change points signal lower bound two} together imply that \n\\begin{align}\\label{eq:1D three change points signal lower bound three} \n \\frac{M}{4} \\Delta_{\\min} \\kappa^2 \\le 2 C_4 M\\bigg(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa^2 + \\gamma \\bigg) .\n\\end{align}\nSince $ { \\mathcal B_n} \\to \\infty $, it follows that for sufficiently large $n$, \\Cref{eq:1D three change points signal lower bound three} gives \n$$ \\Delta_{\\min}\\kappa^2 \\le C_5 \\big(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) +\\gamma),$$ \nwhich contradicts \\Cref{eq:1D three change points snr}.\n \\end{proof} \n \n \n \n \\bnlem[Two consecutive intervals]\n \\label{lem:mean two intervals}\n Suppose $ \\gamma \\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2 $ for sufficiently large constant $C_\\gamma $. \nWith probability at least $1-(n\\vee p)^{-3}$, there are no two consecutive intervals $\\I_1= (s,t ] \\in \\widehat {\\mathcal P} $, $ \\I_2=(t, e] \\in \\widehat {\\mathcal P} $ such that $\\I_1 \\cup \\I_2$ contains no change points. \n \\enlem \n \\begin{proof} \n For contradiction, suppose that \n $$ \\I : =\\I_1\\cup \\I_2 $$\n contains no change points. \n Since \n $ s,t,e \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, it follows that\n $$ \\sum_{i\\in \\I_1} \\|X_i - \\widehat \\mu_{\\I_1}\\|^2 +\\sum_{i\\in \\I_2} \\|X_i - \\widehat \\mu_{\\I_2}\\|_2^2 +\\gamma \\le \\sum_{i\\in \\I } \\|X_i - \\widehat \\mu_{\\I }\\|_2^2. $$\n By \\Cref{lem:mean one change deviation bound}, it follows that \n\\begin{align*} \n& \\sum_{i\\in \\I_1} \\|X_i - \\mu^*_i\\|_2^2 \\le C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\sum_{i\\in \\I_1} \\|X_i - \\widehat \\mu_{\\I_1}\\|_2^2 ,\n \\\\\n & \\sum_{i\\in \\I_2} \\|X_i - \\mu^*_i\\|_2^2 \\le C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\sum_{i\\in \\I_2} \\|X_i - \\widehat \\mu_{\\I_2}\\|_2^2\n \\\\\n & \\sum_{i\\in \\I} \\|X_i - \\widehat{\\mu}_{\\I}\\|_2^2 \\le C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) + \\sum_{i\\in \\I} \\|X_i - \\mu_{i}^*\\|_2^2.\n \\end{align*} \n So \n $$\\sum_{i\\in \\I_1} \\|X_i - \\mu^*_i\\|_2^2 +\\sum_{i\\in \\I_2}\\|X_i - \\mu^*_i\\|_2^2 -2C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) +\\gamma \\le \\sum_{i\\in \\I }\\|X_i - \\mu^*_i\\|_2^2 +C_1\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p) . $$\n Since $\\mu^*_i$ is unchanged when $i\\in \\I$, it follows that \n $$ \\gamma \\le 3C_1\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p).$$\n This is a contradiction when $C_\\gamma> 3C_1. $\n \\end{proof} \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Linear model}\n\\label{sec: main proof linear}\n\nIn this section we show the proof of \\Cref{thm:DCDP regression}. Throughout this section, \nfor any generic interval $\\I\\subset [1,n]$, denote $\\beta^*_{\\I} = \\frac{1}{|\\I|}\\sum_{i\\in \\I}\\beta^*_i$ and\n$$ \\widehat \\beta_\\I = \\argmin_{\\beta\\in \\mathbb{R}^p}\\frac{1}{|\\mclI|}\\sum_{i\\in \\mclI}(y_i - X_i^\\top \\beta)^2 + \\frac{\\lambda}{\\sqrt{|\\mclI|}}\\|\\beta\\|_1. 
$$\nAlso, unless specified otherwise, for the output of \\Cref{algorithm:DCDP}, we always set the goodness-of-fit function $\\mclF(\\cdot, \\cdot)$ to be\n\\begin{equation}\n\\mathcal F (\\beta, \\I) : = \\begin{cases} \n \\sum_{i \\in \\I } (y_i - X_i^\\top \\beta) ^2 &\\text{if } |\\I|\\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) ,\n \\\\ \n 0 &\\text{otherwise,}\n \\end{cases}\n\\end{equation} \nwhere $C_{\\mclF}$ is a universal constant which is larger than $C_s$, the constant in sample size in \\Cref{lemma:interval lasso} and \\Cref{lemma:consistency}.\n\n\n\\paragraph{Assumptions.} For the ease of presentation, we combine the SNR condition we will use throughout this section and \\Cref{assp:dcdp_linear_reg main} into a single assumption.\n\n\n\\bnassum[Linear model] \n\\label{assp:dcdp_linear_reg}\nSuppose that \\Cref{assp:dcdp_linear_reg main} holds. In addition, suppose that $ \\Delta_{\\min} \\kappa^2 \\geq \\mathcal{B}_n \\frak{s}\\log(n\\vee p)$ as is assumed in \\Cref{thm:DCDP regression}.\n\\enassum\n\n\n\n\n\\begin{proof}[Proof of \\Cref{thm:DCDP regression}] \nBy \\Cref{prop:regression local consistency}, $K \\leq |\\widehat{\\mathcal{P}}| \\leq 3K$. This combined with \\Cref{prop:regression change points partition size consistency} completes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n \\bnprop\\label{prop:regression local consistency}\n Suppose \\Cref{assp:dcdp_linear_reg} holds. Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP} with $\\gamma =C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$. Then with probability at least $1 - n ^{-3}$, the following properties hold.\n\t\\begin{itemize}\n\t\t\\item [(i)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing one and only one true \n\t\t change point $ \\eta_k $, it must be the case that\n $$\\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } .$$\n\t\\item [(ii)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing exactly two true change points, say $\\eta_ k < \\eta_ {k+1} $, it must be the case that\n\\begin{align*} \n \\eta_k -s \\lesssim \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } \\ \\text{and} \\ \n e-\\eta_{k+1} \\le C \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } . \n \\end{align*} \n\t\t \n\\item [(iii)] No interval $\\I \\in \\widehat{\\mathcal{P}}$ contains strictly more than two true change points; and \n\n\n\t\\item [(iv)] For all consecutive intervals $ \\I_1 $ and $ \\I_2 $ in $\\widehat{ \\mathcal P}$, the interval \n\t\t$ \\I_1 \\cup \\I_2 $ contains at least one true change point.\n\t\t\t\t\n\t \\end{itemize}\n\\enprop \n\n\n\n\n\n\n\\bprf\nThe four cases are proved in \\Cref{lemma: regression dcdp one change point}, \\Cref{lemma:regression two change points}, \\Cref{lem:regression three or more cp}, and \\Cref{lem:regression two intervals} respectively.\n\\eprf\n\n\n\n \n\n\\bnprop\\label{prop:regression change points partition size consistency}\nSuppose \\Cref{assp:dcdp_linear_reg} holds. Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP}. Suppose \n$\\gamma \\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$ for sufficiently large constant $C_\\gamma$. 
Then\n with probability at least $1 - C n ^{-3}$, $| \\widehat { \\mathcal P} | =K $.\n\\enprop\n\n\n\n\\begin{proof}[Proof of \\Cref{prop:regression change points partition size consistency}] \nDenote $\\mathfrak{G} ^*_n = \\sum_{ i =1}^n (y_i - X_i^\\top \\beta^*_i)^2$. Given any collection $\\{t_1, \\ldots, t_m\\}$, where $t_1 < \\cdots < t_m$, and $t_0 = 0$, $t_{m+1} = n$, let \n\t\\begin{equation}\\label{regression eq-sn-def}\n\t\t { \\mathfrak{G} } _n(t_1, \\ldots, t_{m}) = \\sum_{k=1}^{m} \\sum_{ i = t_k +1}^{t_{k+1}} \\mclF(\\widehat{\\beta} _ {(t_{k}, t_{k+1}]}, (t_{k}, t_{k+1}]). \n\t\\end{equation}\nFor any collection of time points, when defining \\eqref{regression eq-sn-def}, the time points are sorted in an increasing order.\n\nLet $\\{ \\widehat \\eta_{k}\\}_{k=1}^{\\widehat K}$ denote the change points induced by $\\widehat {\\mathcal P}$. Suppose we can justify that \n\t\\begin{align}\n\t\t { \\mathfrak{G} } ^*_n + K\\gamma \\ge & { \\mathfrak{G} } _n(s_1,\\ldots,s_K) + K\\gamma - C_1 ( K +1) { \\mathfrak{s} } \\log(n\\vee p) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:regression K consistency step 1} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n (\\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } ) +\\widehat K \\gamma - C_1 ( K +1) { \\mathfrak{s} } \\log(n\\vee p) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:regression K consistency step 2} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) + \\widehat K \\gamma - C_1 ( K +1) { \\mathfrak{s} } \\log(n\\vee p)- C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:regression K consistency step 3}\n\t\\end{align}\n\tand that \n\t\\begin{align}\\label{eq:regression K consistency step 4}\n\t\t { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\le C_2 (K + \\widehat{K} + 2) { \\mathfrak{s} } \\log(n\\vee p) .\n\t\\end{align}\n\tThen it must hold that $| \\widehat p | = K$, as otherwise if $\\widehat K \\ge K+1 $, then \n\t\\begin{align*}\n\t\tC _2 (K + \\widehat{K} + 2) { \\mathfrak{s} } \\log(n\\vee p) & \\ge { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\\\\n\t\t& \\ge (\\widehat K - K)\\gamma -C_1 ( K +1) { \\mathfrak{s} } \\log(n\\vee p) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } .\n\t\\end{align*} \n\tTherefore due to the assumption that $| \\widehat p| =\\widehat K\\le 3K $, it holds that \n\t\\begin{align} \\label{eq:regression Khat=K} \n\t\tC_2 (4K + 2) { \\mathfrak{s} } \\log(n\\vee p) + C_1(K+1) { \\mathfrak{s} } \\log(n\\vee p) +C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\ge (\\widehat K - K)\\gamma \\geq \\gamma,\n\t\\end{align}\n\tNote that \\eqref{eq:regression Khat=K} contradicts the choice of $\\gamma$.\n\n\n\\\n\\\\\n{\\bf Step 1.} Note that \\eqref{eq:regression K consistency step 1} is implied by \n\t\\begin{align}\\label{eq:regression step 1 K consistency} \n\t\t\\left| \t { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n(s_1,\\ldots,s_K) \\right| \\le C_3(K+1) \\lambda^2 + C_3\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } ,\n\t\\end{align}\n\twhich is an immediate consequence of \\Cref{lem:regression one change deviation bound}. 
\n\t\\\n\t\\\\\n\t\\\\\n\t{\\bf Step 2.} Since $\\{ \\widehat \\eta_{k}\\}_{k=1}^{\\widehat K}$ are the change points induced by $\\widehat {\\mathcal P}$, \\eqref{eq:regression K consistency step 2} holds because $\\widehat p$ is a minimizer.\n\\\\\n\\\\\n\t{\\bf Step 3.}\nFor every $ \\I =(s,e]\\in \\widehat p$, by \\Cref{prop:regression local consistency}, we know that with probability at least $1 - (n\\vee p)^{-5}$, $\\I$ contains at most two change points. We only show the proof for the two-change-point case as the other case is easier. Denote\n\t\\[\n\t\t \\I = (s ,\\eta_{q}]\\cup (\\eta_{q},\\eta_{q+1}] \\cup (\\eta_{q+1} ,e] = { \\mathcal J } _1 \\cup { \\mathcal J } _2 \\cup { \\mathcal J } _{3},\n\t\\]\nwhere $\\{ \\eta_{q},\\eta_{q+1}\\} =\\I \\, \\cap \\, \\{\\eta_k\\}_{k=1}^K$. \n\nFor each $m\\in\\{1,2,3\\}$, by \\Cref{lem:regression one change deviation bound}, it holds that\n\\[\n\\sum_{ i \\in { \\mathcal J } _m }(y_ i - X_i^\\top \\widehat{\\beta} _{{ \\mathcal J } _m} )^2 \\leq \\sum_{ i \\in { \\mathcal J } _ m } (y_ i - X_i^\\top \\beta^*_i )^2 + C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p).\n\\]\nBy \\Cref{lem:regression loss deviation no change point}, we have\n\\[\n\\sum_{ i \\in { \\mathcal J } _m }(y_ i - X_i^\\top\\widehat {\\beta} _\\I )^2 \\ge \\sum_{ i \\in { \\mathcal J } _ m } (y_ i - X_i^\\top \\beta_i^* )^2 - C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p).\n\\]\n Therefore the above inequality implies that \n\t\\begin{align} \\label{eq:regression K consistency step 3 inequality 3} \\sum_{i \\in \\I } (y_ i - X_i^*\\widehat {\\beta} _\\I )^2 \\ge \\sum_{m=1}^{3} \\sum_{ i \\in { \\mathcal J } _m }(y_ i - X_i^\\top\\widehat {\\beta} _{{ \\mathcal J } _m} )^2 -C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p). \n\t\\end{align}\nNote that \\eqref{eq:regression K consistency step 3} is an immediate consequence of \\eqref{eq:regression K consistency step 3 inequality 3}.\n\n\t{\\bf Step 4.}\nFinally, to show \\eqref{eq:regression K consistency step 4}, let $\\widetilde { \\mathcal P}$ denote the partition induced by $\\{\\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K\\}$. Then \n$| \\widetilde { \\mathcal P} | \\le K + \\widehat K+2 $ and that $\\beta^*_i$ is unchanged in every interval $\\I \\in \\widetilde { \\mathcal P}$. \n\tSo \\Cref{eq:regression K consistency step 4} is an immediate consequence of \\Cref{lem:regression one change deviation bound}.\n\\end{proof} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Fundamental lemmas}\n\n\\bnlem \\label{lem:regression one change deviation bound}\nLet $\\mathcal I =(s,e] $ be any generic interval. \n\\\\\n{\\bf a.} If $\\I$ contains no change points and that \n$|\\I| \\ge C_s { \\mathfrak{s} } \\log(n\\vee p) $ where $C_s$ is the universal constant in \\Cref{lemma:interval lasso}. Then it holds that \n$$\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } (y_i - X_i ^\\top \\widehat \\beta _\\I )^2 - \\sum_{ i \\in \\I } (y_i - X_i^\\top \\beta^*_\\I )^2 \\bigg| \\ge C { \\mathfrak{s} } \\log(n\\vee p) \\bigg) \\le n^{-4}. $$\n\\\n\\\\\n{\\bf b.} Suppose that the interval $ \\I=(s,e]$ contains one and only change point $ \\eta_k $ and that \n$|\\I| \\ge C_s { \\mathfrak{s} } \\log(n\\vee p) $. 
Denote $\\widehat \\mu_\\I = \\frac{1}{|\\I | } \\sum_{i\\in \\I } y_i $ and \n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e] .$$ Then it holds that \n$$\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } (y_i - X_i^\\top \\widehat \\beta_\\I )^2 - \\sum_{ i \\in \\I } (y_i - X_i^\\top \\beta^* _i )^2 \\bigg| \\ge C\\bigg\\{ \\frac{ | { \\mathcal J } ||{ \\mathcal J } '| }{ |\\I| } \\kappa_k ^2 + { \\mathfrak{s} } \\log(n\\vee p)\\bigg\\} \\bigg) \\le n^{-4}. $$\n\\enlem\n\\begin{proof} \nWe show {\\bf b} as {\\bf a} immediatelly follows from {\\bf b} with $ |{ \\mathcal J } '| =0$.\n Denote \n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e] .$$ \nDenote $ \\beta _\\I^* = \\frac{1}{|\\I | } \\sum_{i\\in \\I } \\beta ^* _i $. Note that \n\\begin{align} \\nonumber \n \\bigg| \\sum_{ i \\in \\I } (y_i - X_i^\\top \\widehat \\beta_\\I )^2 - \\sum_{ i \\in \\I } (y_i - X_i^\\top \\beta^* _i )^2 \\bigg| \n = &\\bigg| \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^* _i ) \\big\\} ^2 - 2 \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_i ) \\bigg| \n \\\\ \\label{eq:regression one change point deviation bound term 1}\n \\le & 2 \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\} ^2 \n \\\\ \\label{eq:regression one change point deviation bound term 2}\n + & 2 \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\beta^*_\\I -\\beta^*_i ) \\big\\} ^2 \n \\\\ \\label{eq:regression one change point deviation bound term 3}\n +& 2 \\bigg| \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\bigg| \n \\\\ \\label{eq:regression one change point deviation bound term 4}\n +& 2 \\bigg| \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\beta^*_\\I -\\beta^*_i ) \\bigg| .\n\\end{align}\n\\\n\\\\\nSuppose all the good events in \\Cref{lemma:interval lasso} holds. 
\n\\\\\n\\\\\n{\\bf Step 1.} By \\Cref{lemma:interval lasso}, $\\widehat \\beta_\\I - \\beta^*_\\I $ satisfies the cone condition that \n$$ \\| (\\widehat \\beta_\\I - \\beta^*_\\I )_{S^c}\\|_1 \\le 3 \\| (\\widehat \\beta_\\I - \\beta^*_\\I )_S \\|_1 .$$\nIt follows from \\Cref{corollary:restricted eigenvalues 2} that with probability at least $ 1-n^{-5}$,\n\\begin{align*} \n \\bigg| \\frac{1}{|\\I| } \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\} ^2 - ( \\widehat \\beta_\\I - \\beta^*_\\I )^\\top \\Sigma ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\bigg| \\le C_1 \\sqrt { \\frac{ { \\mathfrak{s} } \\log(n\\vee p) }{|\\I| }} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 .\n\\end{align*}\nThe above display gives\n\\begin{align*} \\bigg| \\frac{1}{|\\I| } \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\}^2 \\bigg| \\le & \\| \\Sigma\\|_{\\text{op}} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 + C_1 \\sqrt { \\frac{ { \\mathfrak{s} } \\log(n\\vee p) }{|\\I| }} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 \n\\\\\n\\le & C_x \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 + C_1 \\sqrt { \\frac{ { \\mathfrak{s} } \\log(n\\vee p) }{C_\\zeta { \\mathfrak{s} } \\log(n\\vee p) }} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 \n\\\\\n\\le & \\frac{ C_2 { \\mathfrak{s} } \\log(n\\vee p)}{|\\I| } \\end{align*} \nwhere the second inequality follows from the assumption that $|\\I| \\ge C_\\zeta { \\mathfrak{s} } \\log(n\\vee p) $ and the last inequality follows from \\Cref{eq:lemma:interval lasso term 1} in \\Cref{lemma:interval lasso}.\nThis gives \n$$ \\bigg| \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\}^2 \\bigg| \\le 2 C_2 { \\mathfrak{s} } \\log(n\\vee p) . $$\n\\\n\\\\\n{\\bf Step 2.} Observe that \n$ X_i^\\top (\\beta_\\I ^* - \\beta^*_i) $ is Gaussian with mean $0$ and variance \n\\begin{align*} \\omega_i ^2 = (\\beta_\\I ^* - \\beta^*_i )^\\top \\Sigma (\\beta_\\I ^* - \\beta^*_i) . \n\\end{align*}\nSince \n$$ \\beta_\\I ^* = \\frac{|{ \\mathcal J } | \\beta^*_{ \\mathcal J } +|{ \\mathcal J } '| \\beta^*_{{ \\mathcal J } '} }{|\\I | },$$\nit follows that \n$$ \\omega_i ^2 = \\begin{cases} \n\\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg)^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } ,\n\\\\\n\\bigg( \\frac{ | { \\mathcal J } | (\\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) ^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| ( \\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } '.\n\\end{cases} $$\n Consequently,\n$ \\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 $ is sub-Exponential with parameter $ \\omega_i ^2$. 
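\nFor concreteness, the sub-Exponential tail bound used in the next display (and again in {\bf Step 4}) is Bernstein's inequality for sums of independent centered sub-Exponential variables (see, e.g., Theorem 2.8.1 in \cite{vershynin2018high}): if $Z_i$ are independent with $ { \mathbb{E} } Z_i = 0$ and $\| Z_i \|_{\psi_1} \le \nu_i $, then for all $\tau \ge 0$,\n\begin{align*}\n \mathbb P \bigg( \bigg| \sum_{ i \in \I } Z_i \bigg| \ge \tau \bigg) \le 2 \exp\bigg( -c \min\bigg\{ \frac{ \tau^2}{ \sum_{i\in \I } \nu_i ^2 } , \frac{ \tau }{ \max_{i\in \I } \nu_i } \bigg\} \bigg) .\n\end{align*}\nBelow it is applied with $Z_i = \{ X_i^\top (\beta_\I ^* - \beta^*_i)\} ^2 - { \mathbb{E} } \{ X_i^\top (\beta_\I ^* - \beta^*_i)\} ^2 $, whose $\psi_1$-norm is of order $\omega_i^2$.\n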
\nBy standard sub-Exponential tail bounds, it follows that \n\\begin{align*}\n &\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 - { \\mathbb{E} } {\\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2} \\bigg| \\ge C_3 \\tau \\bigg) \n \\\\\n \\le & \\exp\\bigg (-c \\min\\bigg \\{ \\frac{ \\tau^2}{\\sum_{i\\in \\I } \\omega_i ^4 } , \\frac{ \\tau }{ \\max_{i\\in \\I } \\omega_i ^2 } \\bigg\\} \\bigg)\n \\\\\n \\le & \\exp\\bigg (-c' \\min\\bigg \\{ \\frac{ \\tau^2}{\\sum_{i\\in \\I } \\omega_i ^2 } , \\frac{ \\tau }{ \\max_{i\\in \\I } |\\omega_i | } \\bigg\\} \\bigg)\n \\\\\n \\le & \\exp\\bigg (-c ''\\min\\bigg \\{ \\tau^2 \\bigg( \\frac{|\\I | }{| { \\mathcal J } '| |{ \\mathcal J } | } \\kappa_k^{-2} \\bigg) , \\tau \\frac{|\\I| }{ \\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\}} \\kappa_k ^{-1} \\bigg\\} \\bigg) ,\n \\end{align*}\nwhere the second inequality follows from the observation that \n$$ \\omega_i^2 \\le \\kappa_k |\\omega_i| \\le C_\\kappa |\\omega_i| \\text{ for all } i \\in \\I , $$\n and the last inequality follows from the observation that \n\\begin{align*}\n \\sum_{i\\in \\I} \\omega_i ^2 \n \\le C_x | { \\mathcal J } | \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } + C_x | { \\mathcal J } ' | \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } \n= C_x \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 .\n \\end{align*} \n So there exists a sufficiently large constant $C_4$ such that with probability at least $1- n^{-5}$, \n\\begin{align*}\n & \\bigg| \\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 - { \\mathbb{E} } {\\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2} \\bigg|\n \\\\ \n \\le & C_ 4 \\bigg\\{\\sqrt { \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\log(n) \\kappa _k ^2 } + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le & C_ 4' \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le &C_5 \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) \\bigg\\} \n \\end{align*}\n where $ \\kappa_k \\asymp \\kappa \\le C_\\kappa $ is used in the last inequality.\nSince $ { \\mathbb{E} } {\\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2} = \\sum_{i\\in \\I }\\omega_i ^2 \\le C_x \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2$, it follows that \n$$ \\mathbb P \\bigg ( \\bigg| \\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 \\bigg| \\le ( C_5 +C_x ) \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + C_5 \\log(n) \\bigg ) \\ge 1-n^{-5}.$$\n\\\n\\\\\n{\\bf Step 3.} For \\Cref{eq:regression one change point deviation bound term 3}, it follows that with probability at least $1-2n^{-4}$\n\\begin{align*} \n \\frac{1}{|\\I| } \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I )\n \\le C_6 \\sqrt { \\frac{\\log(n\\vee p) }{ |\\I| } } \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_1 \\le C_7 \\frac{ { \\mathfrak{s} } \\log(n\\vee p) }{ |\\I| } \n\\end{align*}\nwhere the first inequality is a consequence of \\Cref{eq:independent condition 1c 1}, the second inequality follows from \\Cref{eq:lemma:interval lasso term 3} in \\Cref{lemma:interval lasso}. 
\n\\\n\\\\\n\\\\\n{\\bf Step 4.} From {\\bf Step 2}, we have that\n$ X_i^\\top (\\beta_\\I ^* - \\beta^*_i) $ is Gaussian with mean $0$ and variance \n$$ \\omega_i ^2 = \\begin{cases} \n\\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg)^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } ,\n\\\\\n\\bigg( \\frac{ | { \\mathcal J } | (\\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) ^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| ( \\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } '.\n\\end{cases} $$\n Consequently,\n$ \\epsilon_i X_i^\\top (\\beta_\\I ^* - \\beta^*_i) $ is centered sub-Exponential with parameter $ \\omega_i \\sigma_\\epsilon $. \nBy standard sub-Exponential tail bounds, it follows that \n\\begin{align*}\n &\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } \\epsilon_i X_i^\\top (\\beta_\\I ^* - \\beta^*_i) \\bigg| \\ge C_8 \\tau \\bigg) \n \\\\\n \\le & \\exp\\bigg (-c \\min\\bigg \\{ \\frac{ \\tau^2}{\\sum_{i\\in \\I } \\omega_i ^2 } , \\frac{ \\tau }{ \\max_{i\\in \\I } |\\omega_i| } \\bigg\\} \\bigg)\n \\\\\n \\le & \\exp\\bigg (-c '\\min\\bigg \\{ \\tau^2 \\bigg( \\frac{|\\I | }{| { \\mathcal J } '| |{ \\mathcal J } | } \\kappa_k^{-2} \\bigg) , \\tau \\frac{|\\I| }{ \\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\}} \\kappa_k ^{-1} \\bigg\\} \\bigg) ,\n \\end{align*}\nwhere the last inequality follows from the observation that \n\\begin{align*}\n \\sum_{i\\in \\I} \\omega_i ^2 \n \\le C_x | { \\mathcal J } | \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } + C_x | { \\mathcal J } ' | \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } \n= C_x \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 .\n \\end{align*} \n So there exists a sufficiently large constant $C_9$ such that with probability at least $1- n^{-5}$, \n\\begin{align*}\n & \\bigg| \\sum_{ i \\in \\I } \\epsilon_i X_i^\\top (\\beta_\\I ^* - \\beta^*_i) \\bigg|\n \\\\ \n \\le & C_ 9 \\bigg\\{\\sqrt { \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\log(n) \\kappa _k ^2 } + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le & C_ 9' \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le &C_9 \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) \\bigg\\} \n \\end{align*}\n where $ \\kappa_k \\asymp \\kappa \\le C_\\kappa $ is used in the last inequality.\n\\end{proof} \n\n \n\\bnlem \\label{lemma:interval lasso} Suppose \\Cref{assp:dcdp_linear_reg} holds. Let $$ \\widehat \\beta _{ \\mathcal I } = \\arg\\min_{\\beta \\in \\mathbb R^p } \\frac{1}{ |\\mathcal I | }\\sum_{i \\in \\mathcal I} (y_i -X_i^\\top \\beta) ^2 + \\lambda \\|\\beta\\|_1$$ with $\\lambda = C_\\lambda (\\sigma_{\\epsilon} \\vee 1)\\sqrt { \\log(n\\vee p) }$ for some sufficiently large constant $C _ \\lambda $. 
There exists a sufficiently large constant $ C_s$ such that for all $ \\mathcal I \\subset (0, n] $ such that $| \\mathcal I | \\ge C_s { \\mathfrak{s} } \\log(n\\vee p)$, it holds with probability at least $ 1-(n\\vee p)^{-3}$ that \n\\begin{align} \\label{eq:lemma:interval lasso term 1}\n& \\| \\widehat \\beta_\\I -\\beta^*_\\I \\| _2^2 \\le \\frac{C (\\sigma_{\\epsilon}^2\\vee 1) { \\mathfrak{s} } \\log(n\\vee p)}{ |\\I | } ;\n \\\\ \\label{eq:lemma:interval lasso term 2}\n & \\| \\widehat \\beta_\\I -\\beta^*_\\I \\| _1 \\le C(\\sigma_{\\epsilon}\\vee 1) { \\mathfrak{s} } \\sqrt { \\frac{\\log(n\\vee p)} {|\\I | } } ;\n \\\\ \\label{eq:lemma:interval lasso term 3}\n & \\| (\\widehat \\beta_\\I -\\beta^*_\\I)_{S^c} \\| _1 \\le 3 \\| (\\widehat \\beta _\\I -\\beta^*_\\I )_{S } \\| _1 .\n\\end{align} \nwhere $ \\beta^*_\\I = \\frac{1}{|\\I|} \\sum_{i\\in \\I } \\beta^*_i $.\n \\enlem \n\\begin{proof} Denote $S=\\bigcup_{k=1}^K S_{\\eta_k+1} $. Since $K< \\infty$, \n$|S| \\asymp { \\mathfrak{s} } .$ \n It follows from the definition of $\\widehat{\\beta}_\\I$ that \n\t\\[\n\t \\frac{1}{|\\I | }\t\\sum_{ i \\in I} (y_i - X_i^{\\top}\\widehat{\\beta}_\\I )^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\|\\widehat{\\beta} _\\I\\|_1 \\leq \n\t \\frac{1}{|\\I | } \\sum_{t \\in \\I} (y_i - X_i^{\\top}\\beta^*_\\I )^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\|\\beta^*_\\I \\|_1.\n\t\\]\n\tThis gives \n\t\\begin{align*}\n\t\t \\frac{1}{|\\I | } \\sum_{i \\in \\I} \\bigl\\{X_i ^{\\top}(\\widehat{\\beta}_\\I - \\beta^*_\\I )\\bigr\\}^2 + \\frac{ 2 }{|\\I | } \\sum_{ i \\in \\I}(y_i - X_i^{\\top}\\beta^*_\\I)X_i^{\\top}(\\beta^*_\\I - \\widehat{\\beta}_\\I ) + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\widehat{\\beta}_\\I \\bigr\\|_1 \n\t\t\\leq \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\beta^*_\\I \\bigr\\|_1,\n\t\\end{align*}\n\tand therefore\n\t\\begin{align}\n\t\t& \\frac{1}{|\\I | }\t \\sum_{i \\in \\I} \\bigl\\{X_i ^{\\top}(\\widehat{\\beta} _\\I - \\beta^*_\\I )\\bigr\\}^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\widehat{\\beta}_\\I \\bigr\\|_1 \\nonumber \\\\\n\t\t \\leq & \\frac{ 2}{|\\I | }\t \\sum_{i \\in \\I } \\epsilon_i X_i ^{\\top}(\\widehat{\\beta}_\\I - \\beta^*_\\I ) \n+ 2 (\\widehat{\\beta}_\\I - \\beta^*_\\I )^{\\top}\\frac{ 1 }{|\\I | }\t \\sum_{i\\in \\I} X_ i X_i^{\\top}( \\beta^*_i -\\beta^*_\\I )\t\t \n\t\t+ \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\beta^*_\\I \\bigr\\|_1 .\t \\label{eq-lem10-pf-2}\n\t\\end{align}\nTo bound\t\n\t\t$\\left\\|\\sum_{ i \\in \\I} X_ i X_i ^\\top (\\beta^* _\\I -\\beta^*_i ) \\right\\|_{\\infty}, \n$\n\tnote that for any $j \\in \\{ 1, \\ldots, p\\}$, the $j$-th entry of \n\t\\\\$\\sum_{i \\in \\I} X _ i X _i^\\top (\\beta^*_\\I -\\beta_i )$ satisfies \n\t\\begin{align*}\n\t\t E \\left\\{\\sum_{ i \\in \\I} X_i (j) X _ i ^\\top (\\beta^*_\\I - \\beta^*_ i )\\right\\} = \\sum_{ i \\in \\I} E \\{X _ i (j ) X_i ^\\top \\}\\{\\beta^*_\\I - \\beta^*_ i \\} \n\t\t= \\mathbb{E}\\{X _1( j ) X _1 ^\\top \\} \\sum_{ i \\in \\I}\\{\\beta^*_\\I - \\beta^*_ i \\} = 0.\n\t\\end{align*}\n\tSo $ E\\{ \\sum_{ i \\in \\I} X_ i X_i ^\\top (\\beta^* _\\I -\\beta^*_i )\\} =0 \\in \\mathbb R^p.$ \nBy \\Cref{lemma:consistency}{\\bf b},\n\t\\begin{align*}\n\t \\bigg| ( \\beta^*_i -\\beta^*_\\I ) ^\\top \\frac{ 1 }{|\\I | }\t \\sum_{i\\in \\I} X_t X_t^{\\top} (\\widehat{\\beta}_\\I - \\beta^*_\\I )\t \t \\bigg| \\le & C_1 \\big(\\max_{1\\le i \\le n } \\|\\beta^*_i -\\beta^*_\\I \\|_2 \\big) \\sqrt { \\frac{\\log(n\\vee p) }{ | \\I| }} \\| 
\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \n\t \\\\\n\t \\le & C_2 \\sqrt { \\frac{\\log(n\\vee p) }{ | \\I| }} \\| \\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \n\t \\\\\n\t \\le & \\frac{\\lambda}{8\\sqrt { |\\I| } } \\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \n \t\\end{align*}\n \twhere the second inequality follows from \\Cref{lemma:beta bounded 1} and the last inequality follows from $\\lambda = C_\\lambda\\sigma_{\\epsilon} \\sqrt { \\log(n\\vee p) }$ with sufficiently large constant $C_\\lambda$.\n\tIn addition by \\Cref{lemma:consistency}{\\bf a},\n\t\\begin{equation*}\n\t \\frac{ 2}{|\\I | }\t \\sum_{i \\in \\I } \\epsilon_i X_i ^{\\top}(\\widehat{\\beta}_\\I - \\beta^*_\\I ) \\le C \\sigma_{\\epsilon}\\sqrt { \\frac{ \\log(n\\vee p) }{ |\\I| } }\\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \\le \\frac{\\lambda}{8\\sqrt { |\\I| } } \\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 .\n\t\\end{equation*}\n\tSo \\eqref{eq-lem10-pf-2} gives \n\t\\begin{align*}\n\t\t \\frac{1}{|\\I | }\t \\sum_{i \\in \\I} \\bigl\\{X_i ^{\\top}(\\widehat{\\beta} _\\I - \\beta^*_\\I )\\bigr\\}^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\widehat{\\beta}_\\I \\bigr\\|_1 \n\t\t \\leq \\frac{\\lambda}{4\\sqrt { |\\I| } } \\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \n\t\t+ \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\beta^*_\\I \\bigr\\|_1 .\t \n\t\\end{align*} \nLet $\\Theta = \\widehat \\beta _\\I - \\beta^*_\\I $. The above inequality implies\n\\begin{align}\n\\label{eq:two sample lasso deviation 1} \\frac{1}{|\\I|} \\sum_{i \\in \\I } \\left( X_i^\\top \\Theta \\right)^2 + \\frac{ \\lambda}{2\\sqrt{ |\\I| } }\\| (\\widehat \\beta _\\I) _{ S ^c}\\|_1 \n \\le & \\frac{3\\lambda}{2\\sqrt{ |\\I| } } \\| ( \\widehat \\beta _\\I - \\beta^*_\\I ) _{S} \\| _1 ,\n \\end{align} \nwhich also implies that \n$$ \\frac{\\lambda }{2}\\| \\Theta _{S^c} \\|_1 = \\frac{ \\lambda}{2 }\\| (\\widehat \\beta_\\I) _{ S ^c}\\|_1 \\le \n\\frac{3\\lambda}{2 } \\| ( \\widehat \\beta _\\I - \\beta^* _\\I ) _{S} \\| _1 = \\frac{3\\lambda}{2 } \\| \\Theta _{S} \\| _1 . $$\nThe above inequality and \\Cref{corollary:restricted eigenvalues 2} imply that with probability at least $1-n^{-5}$,\n$$ \\frac{1}{|\\I| } \\sum_{i \\in \\I } \\left( X_i^\\top \\Theta \\right)^2 = \\Theta^\\top \\widehat \\Sigma _\\I \\Theta \n\\ge \n \\Theta^\\top \\Sigma \\Theta - C_3\\sqrt{ \\frac{ { \\mathfrak{s} } \\log(n\\vee p)}{ |\\I| }} \\| \\Theta\\|_2^2 \\ge \\frac{c_x}{2} \\|\\Theta\\|_2 ^2 ,$$\n where the last inequality follows from the assumption that $| \\mathcal I | \\ge C_s { \\mathfrak{s} } \\log(n\\vee p) $ for sufficiently large $C_s$.\nTherefore \\Cref{eq:two sample lasso deviation 1} gives\n\\begin{align}\n\\label{eq:two sample lasso deviation 2} c'\\|\\Theta\\|_2 ^2 + \\frac{ \\lambda}{2\\sqrt{ |\\I| } }\\| ( \\widehat \\beta_\\I - \\beta^*_\\I ) _{ S ^c}\\|_1 \n \\le \\frac{3\\lambda}{2\\sqrt{ |\\I| } } \\| \\Theta_{S} \\| _1 \\le \\frac{3\\lambda \\sqrt { \\mathfrak{s} } }{2 \\sqrt{ |\\I| } } \\| \\Theta \\| _2 \n \\end{align} \n and so \n $$ \\|\\Theta\\|_2 \\le \\frac{C \\lambda \\sqrt { \\mathfrak{s} } }{\\sqrt{| \\I |} } . $$ \n The above display gives \n $$ \\| \\Theta_{S} \\| _1 \\le \\sqrt { { \\mathfrak{s} } } \\| \\Theta_{S} \\| _2 \\le \\frac{C\\lambda { \\mathfrak{s} } }{\\sqrt{|\\I| }}. 
$$ \n Since \n $ \\| \\Theta_{S^c } \\| _1 \\le 3 \\| \\Theta_{S} \\| _1 ,$ \nit also holds that \n$$\\| \\Theta \\| _1 = \\| \\Theta_{S } \\| _1 +\\| \\Theta_{S^c } \\| _1 \\le 4 \\| \\Theta_{S} \\| _1 \\le \\frac{4C\\lambda { \\mathfrak{s} } }{\\sqrt{|\\I|} } .$$\n \\end{proof}\n\n \n \n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Technical lemmas}\n\nThroughout this section, let $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP}.\n \n \n \n\n\n\n\n \\bnlem[No change point]\n \\label{lem:regression loss deviation no change point}\nLet $\\I\\subset [1,T]$ be any interval that contains no change point. Then under \\Cref{assp:dcdp_linear_reg}, for any interval ${ \\mathcal J } \\supset \\I$, it holds with probability at least $1 - (n\\vee p)^{-5}$ that\n\\begin{equation*}\n \\mclF(\\beta^*_{\\I},\\I)\\leq \\mclF(\\widehat{\\beta}_{ \\mathcal J } ,\\I) + C(\\sigma_{\\epsilon}^2\\vee 1) { \\mathfrak{s} } \\log(n\\vee p).\n\\end{equation*}\n \\enlem\n\\begin{proof} \n\\noindent \\textbf{Case 1.} If $ |\\I| < C_{\\mclF} { \\mathfrak{s} } \\log(n p)$, then by the definition of $\\mclF(\\beta, \\mclI)$, we have $\\mclF(\\beta^*_{\\I},\\I)= \\mclF(\\widehat{\\beta}_{ \\mathcal J } ,\\I) =0$ and the inequality holds automatically.\n \n\\noindent \\textbf{Case 2.} If \n\t\\begin{equation}\\label{eq-lem16-i-cond} \n\t\t|\\I| \\geq C_{\\mclF} { \\mathfrak{s} } \\log(n p),\n\t\\end{equation}\n\tthen letting $\\delta_\\I = \\beta^*_\\I -\\widehat \\beta_{ \\mathcal J } $ and consider the high-probability event given in \\Cref{lem:restricted eigenvalue}, we have\n\t\\begin{align}\n\t\t& \\sqrt{\\sum_{t \\in \\I} (X_t^{\\top} \\delta_\\I)^2} \\geq {c_1'\\sqrt{|\\I|}} \\|\\delta_\\I\\|_2 - c_2'\\sqrt{\\log(p)} \\|\\delta_\\I\\|_1 \\nonumber \\\\\n\t\t= & {c_1'\\sqrt{|\\I|}} \\|\\delta_\\I\\|_2 - c_2'\\sqrt{\\log(p)} \\|(\\delta_I)_{S}\\|_1 - c_2'\\sqrt{\\log(p)} \\|(\\delta_\\I)_{S^c}\\|_1 \\nonumber \\\\\n\t\t\\geq & c_1'\\sqrt{|\\I|} \\|\\delta_\\I\\|_2 - c_2'\\sqrt{ { \\mathfrak{s} } \\log(p)} \\|\\delta_\\I\\|_2 - c_2'\\sqrt{\\log(p)} \\|(\\delta_\\I)_{S^c}\\|_1 \\nonumber \\\\\n\t\t\\geq & \\frac{c_1'}{2}\\sqrt{|\\I|} \\|\\delta_\\I\\|_2 - c_2'\\sqrt{\\log(p)} \\|(\\widehat{\\beta}_{ \\mathcal J } )_{S^c}\\|_1 \\geq c_1\\sqrt{|\\I|} \\|\\delta_\\I\\|_2 - c_2\\sqrt{\\log(p)} \\frac{ { \\mathfrak{s} } \\lambda}{\\sqrt{|\\I|}}, \\label{eq-lem16-pf-1}\n\t\\end{align} \n\twhere the last inequality follows from \\Cref{lemma:interval lasso} and the assumption that $(\\beta^*_t)_i = 0$, for all $t\\in [T]$ and $i\\in S^c$. 
Then by the fact that $(a - b)^2\\geq \\frac{1}{2}a^2 - b^2$ for all $a,b\\in \\mathbb{R}$, it holds that\n\t\\begin{equation}\n\t \\sum_{t \\in \\I} (X_t^{\\top} \\delta_\\I)^2\\geq \\frac{c_1^2}{2}{|\\I|} \\|\\delta_\\I\\|_2^2 - \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{|\\I|}.\n\t\\end{equation}\n\tNotice that\n\t\\begin{align*}\n\t & \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\beta^*_\\I)^2 - \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\widehat{\\beta}_{ \\mathcal J } )^2 = 2 \\sum_{t \\in \\I} \\epsilon_t X_t^{\\top}\\delta_\\I - \\sum_{t \\in \\I} (X_t^{\\top}\\delta_\\I)^2 \\\\\n\t\\leq & 2\\|\\sum_{t \\in \\I } X_t \\epsilon_t \\|_{\\infty } \\left( \\sqrt { { \\mathfrak{s} } } \\| (\\delta_\\I)_{S} \\|_2 + \\| (\\widehat \\beta_{ \\mathcal J } )_{S^c} \\|_1 \\right)- \\sum_{t \\in \\I} (X_t^{\\top}\\delta_\\I)^2.\n\t\\end{align*}\n\tSince for each $t$, $\\epsilon_t$ is subgaussian with $\\|\\epsilon_t\\|_{\\psi_2}\\leq \\sigma_{\\epsilon}$ and for each $i\\in [p]$, $(X_t)_i$ is subgaussian with $\\|(X_t)_i\\|_{\\psi_2}\\leq C_x$, we know that $(X_t)_i\\epsilon_t$ is subexponential with $\\|(X_t)_i\\epsilon_t\\|_{\\psi_1}\\leq C_x\\sigma_{\\epsilon}$. Therefore, by Bernstein's inequality (see, e.g., Theorem 2.8.1 in \\cite{vershynin2018high}) and a union bound, for $\\forall u\\geq 0$ it holds that\n\t\\begin{equation*}\n\t \\mathbb{P}(\\|\\sum_{t \\in \\I } X_t \\epsilon_t \\|_{\\infty }> u)\\leq 2p\\exp(-c\\min\\{\\frac{u^2}{|\\I|C_x^2\\sigma_{\\epsilon}^2}, \\frac{u}{C_x\\sigma_{\\epsilon}}\\}).\n\t\\end{equation*}\n\tTake $u = cC_x\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(n\\vee p)}$, then by the fact that $|\\I|\\geq C_{\\mclF} { \\mathfrak{s} } \\log(n\\vee p)$, it follows that with probability at least $1 - (n\\vee p)^{-7}$,\n\t\\begin{equation*}\n\t \\|\\sum_{t \\in \\I } X_t \\epsilon_t \\|_{\\infty } \\leq CC_x\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(n\\vee p)}\\leq \\lambda \\sqrt{|\\I|},\n\t\\end{equation*}\n\twhere we use the fact that $\\lambda=C_{\\lambda}(\\sigma_{\\epsilon}\\vee 1)\\sqrt{\\log(n\\vee p)}$. Therefore, we have\n\t\\begin{align*}\n\t& \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\beta^*_\\I)^2 - \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\widehat{\\beta}_{ \\mathcal J } )^2 \\\\\n\t\\leq & 2\\lambda\\sqrt{|\\I| { \\mathfrak{s} } }\\|\\delta_\\I\\|_2 + 2\\lambda\\sqrt{|\\I|}\\cdot \\frac{\\lambda { \\mathfrak{s} } }{ \\sqrt{|\\I|}} - \\frac{c_1^2 |\\I|}{2}\\|\\delta_\\I\\|_2^2 + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{|\\I|}\\\\ \n\t\\leq & 2\\lambda\\sqrt{|\\I| { \\mathfrak{s} } }\\|\\delta_\\I\\|_2 + {2\\lambda^2 { \\mathfrak{s} } } - \\frac{c_1^2 |\\I|}{2}\\|\\delta_\\I\\|_2^2 + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{|\\I|} \\\\\n\t\\leq & \\frac{4}{c_1^2}\\lambda^2 { \\mathfrak{s} } + \\frac{c_1^2}{4}|\\I| \\|\\delta_\\I\\|^2_2 + {2\\lambda^2 { \\mathfrak{s} } } - \\frac{c_1^2}{2}|\\I|\\|\\delta_\\I\\|^2_2 + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{C_{\\mclF} { \\mathfrak{s} } \\log(n\\vee p)} \\\\\n\t\\leq & {c_3\\lambda^2 { \\mathfrak{s} } } + {2\\lambda^2 { \\mathfrak{s} } } + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{C_{\\mclF} { \\mathfrak{s} } \\log(n\\vee p)}\\\\\n\t\\leq & c_4\\lambda^2 { \\mathfrak{s} } .\n\t\\end{align*}\n\twhere the third inequality follows from $2ab \\leq a^2 + b^2$. 
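\nSince $\lambda=C_{\lambda}(\sigma_{\epsilon}\vee 1)\sqrt{\log(n\vee p)}$, the last display gives\n\begin{align*}\n \mclF(\beta^*_{\I},\I) - \mclF(\widehat{\beta}_{ \mathcal J } ,\I) = \sum_{t \in \I} (y_t - X_t^{\top} \beta^*_\I)^2 - \sum_{t \in \I} (y_t - X_t^{\top} \widehat{\beta}_{ \mathcal J } )^2 \le c_4\lambda^2 { \mathfrak{s} } \le C(\sigma_{\epsilon}^2\vee 1) { \mathfrak{s} } \log(n\vee p),\n\end{align*}\nwhich is the claimed bound.\n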
\n\n\\end{proof} \n\n\n\n\n\n\n\n\n\n\n \n \n \n \\bnlem[Single change point]\n \\label{lemma: regression dcdp one change point}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \nLet $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $\\I$ contains exactly one true change point $ \\eta_k $. \nSuppose $\\gamma \\geq C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2 $. Then with probability at least $1- n^{-3}$, it holds that \n\\begin{equation*}\n \\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } .\n\\end{equation*}\n\\enlem\n\\begin{proof} \nIf either $ \\eta_k -s \\le C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $ or $e-\\eta_k\\le C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $, then \n$$ \\min\\{ \\eta_k -s ,e-\\eta_k \\} \\le C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $$\nand there is nothing to show. So assume that \n$$ \\eta_k -s > C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) \\quad \\text{and} \\quad e-\\eta_k >C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) . $$\nBy event $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } )$, there exists $ s_ u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $ such that \n$$0\\le s_u - \\eta_k \\le { \\mathcal B_n^{-1} \\Delta } . $$\n\n\\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(-1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-3.0002,0) -- (-3,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n \\node[color=black] at (-3 ,-0.3) {\\small $e$};\n\\end{tikzpicture}\n\\end{center} \n\n{\\bf Step 1.} Denote \n$$ \\I_ 1 = (s,s_u] \\quad \\text{and} \\quad \\I_2 = (s_u, e] .$$\nSince \n$ \\eta_k-s \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) , $\nit follows that \n $ |\\I| \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $ and $ |\\I_1| \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $. Thus \n$$ \\mathcal F (\\I) = \\sum_{ i \\in \\I } (y_i - X_i^\\top \\widehat \\beta_\\I )^2 \\quad \\text{and} \\quad \\mathcal F (\\I_1) = \\sum_{ i \\in {\\I_1} } (y_i - X_i^\\top \\widehat \\beta_{\\I_1} )^2 .$$\nSince $\\I\\in \\widehat { \\mathcal P}$, it holds that \n\\begin{align}\\label{eq:regression one change basic inequality step 1} \\mathcal F (\\I) \\le \\mathcal F (\\I_1) + \\mathcal F (\\I_2) + \\gamma. \n\\end{align}\n\n{\\bf Case a.} Suppose $|\\I_2| C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) . $$\nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, let $ s_u, s_v$ be such that \n $\\eta_k \\le s_u \\le s_v \\le \\eta_{k+1} $ and that \n $$ 0 \\le s_u-\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } , \\quad 0\\le \\eta_{k+1} - s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
$$\n \\\n \\\\\n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n\n \\node[color=black] at (-2.3,-0.3) {\\small $\\eta_{k+1}$};\n\\draw(-2.3 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_v$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center} \n\\\n\\\\\n{\\bf Step 1.} Denote \n $$ \\mathcal I_ 1= ( s, s _u], \\quad \\I_2 =(s_u, s_v] \\quad \\text{and} \\quad \\I_3 = (s_v,e]. $$\n \\\n \\\\\nSince $ |\\I| \\ge \\eta_{k+1} -\\eta_k \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $, \n$$ \\mathcal F(\\I ) = \\sum_{i \\in \\I }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 . $$\nSince \n $ |\\I_1| \\ge \\eta_k-s \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) , $ \nit follows that \n $$ \\mathcal F(\\I_1 ) = \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\widehat \\beta _{\\I_1} ) ^2 . $$\nIn addition since $ |\\I_1| \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $, then \n \\begin{align*}\\mathcal F(\\I_1) = & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\widehat \\beta _{\\I_1} ) ^2 \n \\\\\n \\le & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\beta ^* _i ) ^2 + C_1 \\bigg\\{ \\frac{( \\eta_k-s) (s_u -\\eta_k ) }{ ( \\eta_k-s) + (s_u -\\eta_k ) } \\kappa ^2 + { \\mathfrak{s} } \\log(n\\vee p) \\bigg\\} \n \\\\\n \\le & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\beta ^* _i ) ^2 + C_1 \\bigg\\{ (s_u -\\eta_k ) \\kappa ^2 + { \\mathfrak{s} } \\log(n\\vee p) \\bigg\\} \n \\\\\n \\le & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\beta ^* _ i ) ^2 + C_1 \\bigg\\{ { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathfrak{s} } \\log(n\\vee p) \\bigg\\} ,\n \\end{align*}\nwhere the first inequality follows from \\Cref{lem:regression one change deviation bound} and that $ \\kappa_{k} \\asymp \\kappa$.\n Similarly, since \n $ |\\I_2| \\ge \\Delta_{\\min} \/2 \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) , $ \nit follows that \n $$ \\mathcal F(\\I_2 ) = \\sum_{i \\in \\I_2 }(y_i - X_i^\\top \\widehat \\beta _{\\I_2} ) ^2 . 
$$\n Since $|\I_2| \ge C_\mclF { \mathfrak{s} } \log(n\vee p) $ and $\I_2$ contains no change points, by \Cref{lem:regression one change deviation bound},\n \begin{align*}\mathcal F(\I_2) \le \sum_{i \in \I_2 }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 { \mathfrak{s} } \log(n\vee p).\n \end{align*} \n \\\n \\\n {\bf Step 2.} If $ |\I_3 | \ge C_\mclF { \mathfrak{s} } \log(n\vee p) $, then \n \begin{align*}\mathcal F(\I_3) = & \sum_{i \in \I_3 }(y_i - X_i^\top \widehat \beta _{\I_3} ) ^2 \n \\\n \le & \sum_{i \in \I_3 }(y_i - X_i^\top \beta ^* _i ) ^2 + C_1 \bigg\{ \frac{( \eta_{k+1}-s_v) (e -\eta_{k+1} ) }{ ( \eta_{k+1}-s_v)+ (e -\eta_{k+1} ) } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} \n \\\n \le & \sum_{i \in \I_3 }(y_i - X_i^\top \beta ^* _i ) ^2 + C_1 \bigg\{ ( \eta_{k+1}-s_v) \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} \n \\\n \le & \sum_{i \in \I_3 }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} ,\n \end{align*}\n where the first inequality follows from \Cref{lem:regression one change deviation bound}{\bf b} and that $ \kappa_{k+1} \asymp \kappa$. \n If $|\I_3| < C_\mclF { \mathfrak{s} } \log(n\vee p) $, then \n$\mathcal F(\I_3) = 0 $.\n So \nboth cases imply that \n $$\mathcal F(\I_3) \le \sum_{i \in \I_3 }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} .$$\n \\\n \\\n {\bf Step 3.} Since $\I \in \widehat{\mathcal P} $, we have\n \begin{align}\label{eq:regression two change points local min} \mathcal F(\I ) \le \mathcal F(\I_1 ) +\mathcal F(\I_2 )+\mathcal F(\I_3 ) + 2\gamma. \n \end{align} The above display and the calculations in {\bf Step 1} and {\bf Step 2} imply that\n\begin{align} \n \sum_{i \in \I }(y_i - X_i^\top \widehat \beta _\I ) ^2 \n\le \sum_{i \in \I }(y_i - X_i^\top \beta_i^* ) ^2 \n + 3 C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\}\n+2\gamma .\n \label{eq:regression two change points step 2 term 1}\n\end{align}\nDenote \n$$ { \mathcal J } _1 = (s,\eta_k ], \quad { \mathcal J } _2=(\eta_k, \eta_{k+1} ] \quad \text{and} \quad { \mathcal J } _3 = (\eta_{k+1} , e] .$$\n\Cref{eq:regression two change points step 2 term 1} gives \n\begin{align} \n \sum_{\ell=1}^3 \sum_{i \in { \mathcal J } _{\ell} }(y_i - X_i^\top \widehat \beta _\I ) ^2 \n \le& \sum_{\ell=1}^3 \sum_{i \in { \mathcal J } _ \ell }(y_i - X_i^\top \beta^* _{{ \mathcal J } _\ell } ) ^2 \n + 3 C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} \n+2\gamma . \label{eq:regression two change points first}\n\end{align} \n\\\n\\\n{\bf Step 4.} \nNote that for $\ell\in \{1,2,3\}$, \n\begin{align*} \n\| ( \widehat \beta _\I - \beta _{{ \mathcal J } _\ell }^* )_{S^c } \|_1 = \| ( \widehat \beta _\I )_{S^c} \|_{1} \n= \| ( \widehat \beta _\I -\beta _\I ^* )_{S^c} \|_{1} \le 3 \| ( \widehat \beta _\I -\beta _\I ^* )_{S } \|_{1} \le C_2 { \mathfrak{s} } \sqrt { \frac{ \log(n\vee p)}{| \I|} } , \n\end{align*} \nwhere the last two inequalities follow from \Cref{lemma:interval lasso}. 
So\n\\begin{align} \\label{eq:regression two change step 3 term 1}\n\\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell}^* \\|_1 = \\| ( \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell }^* )_{S } \\|_1 +\\| ( \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell}^* )_{S^c } \\|_1 \\le \\sqrt { { \\mathfrak{s} } } \\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell}^* \\|_2 + C_2 { \\mathfrak{s} } \\sqrt { \\frac{ \\log(n\\vee p)}{| \\I|} } .\n\\end{align} \nNote that by assumptions,\n$$|{ \\mathcal J } _1|\\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) \\quad \\text{and} \\quad |{ \\mathcal J } _2 |\\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) .$$\n So for $\\ell\\in \\{ 1, 2\\} $,\n it holds that\n \\begin{align}\\nonumber \n\t\t& \\sum_{i \\in { \\mathcal J } _\\ell } \\big\\{ X_i^\\top ( \\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell }) \\big\\} ^2 \n\t\t \\\\ \\nonumber \n\t\t \\ge& \\frac{c_x|{ \\mathcal J } _\\ell | }{16} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_3 \\log(p) \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_1 ^2 \n\t\t \\\\\\nonumber \n\t\t \\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{16} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_3' { \\mathfrak{s} } \\log(p) \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_ 2 ^2 - C_3' \\frac{ { \\mathfrak{s} } ^2 \\log(p) \\log(n\\vee p) }{|\\I| }\n\t\t \\\\ \\label{eq:regression two change step 3 term 2}\n\t\t \\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{32} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_4 { \\mathfrak{s} } \\log(n\\vee p),\n\t\\end{align} \n\twhere the first inequality follows from \\Cref{lem:restricted eigenvalue}, the second inequality follows from \\Cref{eq:regression two change step 3 term 1} and the last inequality follows from the observation that \n $$ |\\I|\\ge |{ \\mathcal J } _\\ell| \\ge C_\\gamma { \\mathfrak{s} } \\log(n\\vee p). 
\n $$ \n So for $\\ell\\in \\{ 1,2\\}$, \n\\begin{align*} &\\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _\\ell } ) ^2\n= \\sum_{i \\in { \\mathcal J } _\\ell }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _4} )\\big\\}^2 -2 \\sum_{i \\in { \\mathcal J } _\\ell } \\epsilon_i X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } )\n\\\\\n\\ge & \\sum_{i \\in { \\mathcal J } _\\ell }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } )\\big\\}^2 - 2 \\| \\sum_{i \\in { \\mathcal J } _\\ell } \\epsilon_i X_i^\\top \\|_\\infty \\| \\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } \\|_1\n\\\\\n\\ge &\\sum_{i \\in { \\mathcal J } _\\ell }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } )\\big\\}^2 - C_5 \\sqrt{ \\log(n\\vee p) |{ \\mathcal J } _\\ell| }\\bigg(\\sqrt { { \\mathfrak{s} } } \\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _1}^* \\|_2 + C_2 { \\mathfrak{s} } \\sqrt { \\frac{ \\log(n\\vee p)}{| \\I|} } \\bigg) \n\\\\\n\\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{32} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_4 { \\mathfrak{s} } \\log(n\\vee p) - C_5 \\sqrt{ \\log(n\\vee p) |{ \\mathcal J } _\\ell| }\\bigg(\\sqrt { { \\mathfrak{s} } } \\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _1}^* \\|_2 + C_2 { \\mathfrak{s} } \\sqrt { \\frac{ \\log(n\\vee p)}{| \\I|} } \\bigg) \n\\\\\n\\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{32} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -\\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(n\\vee p) \n= \\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(n\\vee p) ,\n\\end{align*}\nwhere the second inequality follows from the standard sub-Exponential tail bound and \\Cref{eq:regression two change step 3 term 1}, the third inequality follows from \\Cref{eq:regression two change step 3 term 2}, and the fourth inequality follows from $ { \\mathcal J } _\\ell \\subset \\I $ and so $ |\\I | \\ge |{ \\mathcal J } _\\ell|$.\n\\\\\n\\\\\nSo for $\\ell \\in \\{1,2 \\}$, \n\\begin{align} \\label{eq:regression change point step 3 last item}\\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _\\ell } ) ^2 \n\\ge \\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(n\\vee p).\n\\end{align} \n\\\n\\\\\n\\\\\n{\\bf Step 5.} For ${ \\mathcal J } _3$, if $|{ \\mathcal J } _3| \\ge C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $, following the same calculations as in {\\bf Step 4}, \n$$\\sum_{i \\in { \\mathcal J } _ 3 }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _ 3 }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _3 } ) ^2 \\ge \\frac{c_x|{ \\mathcal J } _3 | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ 3 } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(n\\vee p) \\ge -C_6 { \\mathfrak{s} } \\log(n\\vee p). 
\n$$\nIf $|{ \\mathcal J } _3 | < C_\\mclF { \\mathfrak{s} } \\log(n\\vee p) $, then\n\\begin{align} & \\sum_{i \\in { \\mathcal J } _3 }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _3 }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _ 3} ) ^2 \\nonumber \n= \\sum_{i \\in { \\mathcal J } _ 3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3} )\\big\\}^2 -2 \\sum_{i \\in { \\mathcal J } _3 } \\epsilon_i X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\n\\\\ \\nonumber \n\\ge & \\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -\\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 - 4 \\sum_{i \\in { \\mathcal J } _3 }\\epsilon_i^2 \n\\\\ \\nonumber \n\\ge & \\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -C_7\\bigg( \\sqrt { \\gamma \\log(n)} + \\log(n)+ \\gamma \\bigg) \n\\\\ \\nonumber \n\\ge & \\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -C_7' \\bigg( \\log(n)+ \\gamma \\bigg) \n\\\\ \\ge \n& \\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -C_8 ( { \\mathfrak{s} } \\log(n\\vee p) + \\gamma) \n\\ge -C_8 ( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma) \\label{eq:regression change point step 4 last item}\n\\end{align}\nwhere the second inequality follows from the standard sub-exponential deviation bound.\n\\\n\\\\\n\\\\\n{\\bf Step 6.} Putting \\Cref{eq:regression two change points first}, \\eqref{eq:regression change point step 3 last item} and \\eqref{eq:regression change point step 4 last item} together, it follows that \n $$ \\sum_{\\ell =1}^2 \\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 \\le C_9( { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + \\gamma) .$$\n This leads to \n $$ |{ \\mathcal J } _1 | \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ 1 } \\|_2^2 + |{ \\mathcal J } _2 | \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ 2 } \\|_2^2 \\le C_9( { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + \\gamma) .$$ \nObserve that \n$$ \\inf_{ \\beta \\in \\mathbb R ^p } |{ \\mathcal J } _1 | \\| \\beta - \\beta^* _{{ \\mathcal J } _ 1 } \\|_2^2 + |{ \\mathcal J } _2 | \\| \\beta - \\beta^* _{{ \\mathcal J } _ 2 } \\|_2^2 = \\kappa_k ^2 \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{| { \\mathcal J } _1| +|{ \\mathcal J } _2| } \\ge \\frac{ \\kappa_k ^2 }{2} \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\ge \\frac{ c \\kappa ^2 }{2} \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} . 
$$ \nThus\n$$ \kappa ^2 \min\{ |{ \mathcal J } _1| ,|{ \mathcal J } _2| \} \le C_{10} ( { \mathfrak{s} } \log(n\vee p)+ { \mathcal B_n^{-1} \Delta } \kappa^2 + \gamma) ,$$\nthat is, \n$$ \min\{ |{ \mathcal J } _1| ,|{ \mathcal J } _2| \} \le C_{10} \bigg( \frac{ { \mathfrak{s} } \log(n\vee p) + \gamma }{\kappa ^2 } + { \mathcal B_n^{-1} \Delta } \bigg) .$$ \nSince $ |{ \mathcal J } _2 | \ge \Delta_{\min} \ge \frac{ C ( { \mathfrak{s} } \log(n\vee p) + \gamma) }{ \kappa^{2}} $\n for sufficiently large constant $C$ and $ { \mathcal B_n} \to \infty $, \n it follows that for sufficiently large $n$,\n $$ |{ \mathcal J } _2| \ge \Delta_{\min}> C_{10} \bigg( \frac{ { \mathfrak{s} } \log(n\vee p) + \gamma }{\kappa ^2 } + { \mathcal B_n^{-1} \Delta } \bigg) .$$ So it holds that \n $$ |{ \mathcal J } _1| \le C_{10} \bigg( \frac{ { \mathfrak{s} } \log(n\vee p) + \gamma }{\kappa ^2 } + { \mathcal B_n^{-1} \Delta } \bigg) .$$ \n \end{proof} \n\n \bnlem[Three or more change points]\n\label{lem:regression three or more cp}\nSuppose the good events \n$\mathcal L ( { \mathcal B_n^{-1} \Delta } ) $ and $\mathcal R ( { \mathcal B_n^{-1} \Delta } ) $ defined in \Cref{eq:left and right approximation of change points} hold. \n Suppose in addition that \n\begin{align} \label{eq:regression three change points snr}\n\Delta_{\min} \kappa^2 \ge C \big( { \mathfrak{s} } \log(n\vee p) + \gamma \big) \n\end{align} \nfor sufficiently large constant $C $. \n Then with probability at least $1- n^{-3}$, there are no intervals in $ \widehat { \mathcal P} $ containing three or more true change points. \n \enlem \n \n\n\n\begin{proof} For contradiction, suppose that $ \I=(s,e] \in \widehat {\mathcal P} $ is such that $ \{ \eta_1, \ldots, \eta_M\} \subset \I $ with $M\ge 3$. \n\\\n\\ \nSince the events $\mathcal L ( { \mathcal B_n^{-1} \Delta } ) $ and $\mathcal R ( { \mathcal B_n^{-1} \Delta } ) $ hold, by relabeling $\{ s_q\}_{q=1}^ { \mathcal Q} $ if necessary, let $ \{ s_m\}_{m=1}^M $ be such that \n $$ 0 \le s_m -\eta_m \le { \mathcal B_n^{-1} \Delta } \quad \text{for} \quad 1 \le m \le M-1 $$ and that\n $$ 0\le \eta_M - s_M \le { \mathcal B_n^{-1} \Delta } .$$ \n Note that these choices ensure that $ \{ s_m\}_{m=1}^M \subset \I . 
$\n \\\n \\\n \begin{center}\n \begin{tikzpicture} \n\draw[ - ] (-10,0)--(1,0);\n \node[color=black] at (-8,-0.3) {\small s};\n \draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \node[color=black] at (-7,-0.3) {\small $\eta_1$};\n\draw(-7 ,0)circle [radius=2pt] ;\n\n \node[color=black] at (-6.5,-0.3) {\small $s_1$};\n\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n\n \node[color=black] at (-5,-0.3) {\small $\eta_2$};\n\draw(-5 ,0)circle [radius=2pt] ;\n \node[color=black] at (-4.5,-0.3) {\small $s_2$};\n\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-4.5,0) }; \n \n\n \node[color=black] at (-2.5,-0.3) {\small $\eta_M$};\n\draw(-2.5 ,0)circle [radius=2pt] ;\n \node[color=black] at (-3 ,-0.3) {\small $s_M$};\n\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \node[color=black] at (-1,-0.3) {\small $e$};\n\n \n\end{tikzpicture}\n\end{center}\n\\\n\\\n{\bf Step 1.}\n Denote \n $$ \mathcal I_ 1= ( s, s _1], \quad \I_m =(s_{m-1} , s_m] \text{ for } 2 \le m \le M \quad \text{and} \quad \I_{M+1} = (s_M,e]. $$\n Since $|\I| \ge \Delta_{\min} \ge C_s { \mathfrak{s} } \log(n\vee p) $, it follows that \n$$ \mathcal F(\I ) = \sum_{i \in \I }(y_i - X_i^\top \widehat \beta _\I ) ^2 . $$\nSince $ | \I_m | \ge \Delta_{\min} \/2 \ge C_s { \mathfrak{s} } \log(n\vee p) $ for all $ 2 \le m \le M $, it follows from the same argument as {\bf Step 1} in the proof of \Cref{lemma:regression two change points} that \n \begin{align*}\mathcal F(\I_m) = & \sum_{i \in \I_m }(y_i - X_i^\top \widehat \beta _{\I_m} ) ^2\n \le \sum_{i \in \I_m }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} \quad \text{for all } 2 \le m \le M.\n \end{align*}\n \\\n \\\n {\bf Step 2.} It follows from the same argument as {\bf Step 2} in the proof of \Cref{lemma:regression two change points} that \n \begin{align*}\n &\mathcal F(\I_1) \n \le \sum_{i \in \I_1 }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} , \text{ and}\n \\\n &\mathcal F(\I_{M+1} ) \n \le \sum_{i \in \I_{M+1} }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\} .\n \end{align*}\n \\\n \\\n {\bf Step 3.} Since $\I \in \widehat{\mathcal P} $, we have\n \begin{align}\label{eq:regression three change points local min} \mathcal F(\I ) \le \sum_{m=1}^{M+1}\mathcal F(\I_m ) + M\gamma. \n \end{align} The above display and the calculations in {\bf Step 1} and {\bf Step 2} imply that\n\begin{align} \n \sum_{i \in \I }(y_i - X_i^\top \widehat \beta _\I ) ^2 \n\le \sum_{i \in \I }(y_i - X_i^\top \beta_i^* ) ^2 \n + (M +1) C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(n\vee p) \bigg\}\n+M\gamma .\n \label{eq:regression three change points step 2 term 1}\n\end{align}\nDenote \n $$ { \mathcal J } _1 =(s, \eta_1], \ { \mathcal J } _m = (\eta_{m-1}, \eta_m] \quad \text{for}\quad 2 \le m \le M , \ { \mathcal J } _{M+1} =(\eta_M, e]. 
$$ \n\\Cref{eq:regression three change points step 2 term 1} gives \n\\begin{align} \n \\sum_{m=1}^{M+1} \\sum_{i \\in { \\mathcal J } _m }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 \n \\le& \\sum_{m=1}^{M+1} \\sum_{i \\in { \\mathcal J } _ m }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _m } ) ^2 \n + (M+1) C_1 \\bigg\\{ { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathfrak{s} } \\log(n\\vee p) \\bigg\\} \n+M\\gamma \\label{eq:regression three change points first}\n\\end{align} \n \\\n \\\\\n {\\bf Step 4.} Using the same argument as in the {\\bf Step 4} in the proof of \\Cref{lemma:regression two change points},\nit follows that \n\\begin{align} \\label{eq:regression three change point step 3 last term}\\sum_{i \\in { \\mathcal J } _m }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _m }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _m } ) ^2 \n\\ge \\frac{c_x|{ \\mathcal J } _m | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ m } \\|_2^2 -C_2 { \\mathfrak{s} } \\log(n\\vee p) \\quad \\text{for all} \\ 2 \\le m \\le M. \n\\end{align} \n \\\n \\\\\n {\\bf Step 5.}\n Using the same argument as in the {\\bf Step 4} in the proof of \\Cref{lemma:regression two change points}, it follows that \n\\begin{align} \\label{eq:regression three change point step 5 first term}& \\sum_{i \\in { \\mathcal J } _ 1 }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _ 1 }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _1 } ) ^2 \\ge -C_3 ( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma) \\text{ \n and }\n\\\\\\label{eq:regression three change point step 5 second term}\n& \\sum_{i \\in { \\mathcal J } _ {M+ 1} }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _ {M+ 1} }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _ {M+ 1} } ) ^2 \\ge -C_3 ( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma) \n \\end{align}\n\\\n\\\\\n{\\bf Step 6.} Putting \\Cref{eq:regression three change points first}, \\eqref{eq:regression three change point step 3 last term}, \\eqref{eq:regression three change point step 5 first term} and \\eqref{eq:regression three change point step 5 second term}, \n it follows that \n\\begin{align} \\label{eq:regression three change points step six} \\sum_{ m =2}^M \\frac{c_x|{ \\mathcal J } _m | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ m } \\|_2^2 \\le C_4M ( { \\mathfrak{s} } \\log(n\\vee p) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + \\gamma) .\n\\end{align} \n For any $ m \\in\\{2, \\ldots, M\\}$, it holds that\n\\begin{align} \\label{eq:regression three change points signal lower bound} \\inf_{ \\beta \\in \\mathbb R^p } |{ \\mathcal J } _{m-1} | \\| \\beta - \\beta^* _{{ \\mathcal J } _ {m-1} } \\| ^2 + |{ \\mathcal J } _{m} | \\| \\beta - \\beta^* _{{ \\mathcal J } _ m } \\| ^2 =& \\frac{|{ \\mathcal J } _{m-1}| |{ \\mathcal J } _m|}{ |{ \\mathcal J } _{m-1}| + |{ \\mathcal J } _m| } \\kappa_m ^2 \\ge \\frac{1}{2} \\Delta_{\\min} \\kappa^2,\n\\end{align} \nwhere the last inequality follows from the assumptions that $\\eta_k - \\eta_{k-1}\\ge \\Delta_{\\min} $ and $ \\kappa_k \\asymp \\kappa$ for all $1\\le k \\le K$. 
So\n\\begin{align} \\nonumber &2 \\sum_{ m=1}^{M } |{ \\mathcal J } _m |\\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ m } \\|_2^2 \n\\\\ \n\\ge & \\nonumber \\sum_{m=2}^M \\bigg( |{ \\mathcal J } _{m-1} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ {m-1} } \\|_2^2 + |{ \\mathcal J } _m |\\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ m } \\|_2^2 \\bigg) \n\\\\ \\label{eq:regression three change points signal lower bound two} \n\\ge & (M-1) \\frac{ 1}{2} \\Delta_{\\min} \\kappa^2 \\ge \\frac{M}{4} \\Delta_{\\min} \\kappa^2 ,\n\\end{align} \nwhere the second inequality follows from \\Cref{eq:regression three change points signal lower bound} and the last inequality follows from $M\\ge 3$. \\Cref{eq:regression three change points step six} and \\Cref{eq:regression three change points signal lower bound two} together imply that \n\\begin{align}\\label{eq:regression three change points signal lower bound three} \n \\frac{M}{4} \\Delta_{\\min} \\kappa^2 \\le 2 C_5 M\\bigg( { \\mathfrak{s} } \\log(n\\vee p ) + { \\mathcal B_n^{-1} \\Delta } \\kappa^2\n+ \\gamma \\bigg) .\n\\end{align}\nSince $ { \\mathcal B_n} \\to \\infty $, it follows that for sufficiently large $n$, \\Cref{eq:regression three change points signal lower bound three} gives \n$$ \\Delta_{\\min}\\kappa^2 \\le C_5 \\big( { \\mathfrak{s} } \\log(n\\vee p) +\\gamma),$$ \nwhich contradicts \\Cref{eq:regression three change points snr}. \n \n \n \\end{proof} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \\bnlem[Two consecutive intervals]\n \\label{lem:regression two intervals}\n Suppose $ \\gamma \\ge C_\\gamma { \\mathfrak{s} } \\log(n\\vee p) $ for sufficiently large constant $C_\\gamma $. \nWith probability at least $1- n^{-3}$, there are no two consecutive intervals $\\I_1= (s,t ] \\in \\widehat {\\mathcal P} $, $ \\I_2=(t, e] \\in \\widehat {\\mathcal P} $ such that $\\I_1 \\cup \\I_2$ contains no change points. \n \\enlem \n \\begin{proof} \n For contradiction, suppose that \n $$ \\I : =\\I_1\\cup \\I_2 $$\n contains no change points. 
\nFor $\\I_1$, note that if $|\\I_1| \\ge C_\\zeta { \\mathfrak{s} } \\log(n\\vee p) $, then \n by \\Cref{lem:regression one change deviation bound} {\\bf a}, it follows that \n \\begin{align*} \n \\bigg| \\mathcal F(\\I_1) - \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| = \\bigg| \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\widehat \\beta _{\\I_1} )^2- \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta_i^* )^2 \\bigg| \\le C_1 { \\mathfrak{s} } \\log(n\\vee p) .\n \\end{align*}\n If $|\\I_1| < C_\\zeta { \\mathfrak{s} } \\log(n\\vee p) $, then\n\\begin{align*} \n& \\bigg| \\mathcal F(\\I_1) - \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| = \\bigg| \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \n=\\sum_{i\\in \\I_1} \\epsilon_i^2 \n \\\\ \n \\le & |\\I_1| E(\\epsilon^2_1) + C_2 \\sqrt { |\\I_1| \\log(n) } + \\log(n) \\le C_2' { \\mathfrak{s} } \\log(n\\vee p) .\n \\end{align*} \n So\n \\begin{align*} \n \\bigg| \\mathcal F(\\I_1) - \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \\le C_3 { \\mathfrak{s} } \\log(n\\vee p) .\n \\end{align*} \n Similarly, \n\\begin{align*} \n&\\bigg| \\mathcal F(\\I_2) - \\sum_{i\\in \\I_2} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \\le C_3 { \\mathfrak{s} } \\log(n\\vee p), \\quad \\text{and}\n\\\\\n&\\bigg| \\mathcal F(\\I ) - \\sum_{i\\in \\I } (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \\le C_3 { \\mathfrak{s} } \\log(n\\vee p) .\n \\end{align*} \n So \n $$\\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta_i^* )^2 +\\sum_{i\\in \\I_2} (y_i - X_i^\\top \\beta_i^* )^2 -2C_1 { \\mathfrak{s} } \\log(n\\vee p ) +\\gamma \\le \\sum_{i\\in \\I } (y_i - X_i^\\top \\beta_i^* )^2 +C_1 { \\mathfrak{s} } \\log(n\\vee p) . $$\n Since $\\beta^* _i$ is unchanged when $i\\in \\I$, it follows that \n $$ \\gamma \\le 3C_1 { \\mathfrak{s} } \\log(n\\vee p).$$\n This is a contradiction when $C_\\gamma> 3C_1. $\n \\end{proof} \n \n \n \n \n \n\n\n\n\n\n\n\n\\bnlem\nLet $\\mathcal{S}$ be any linear subspace in $\\mathbb{R}^n$ and $\\mathcal{N}_{1\/4}$\tbe a $1\/4$-net of $\\mathcal{S} \\cap B(0, 1)$, where $B(0, 1)$ is the unit ball in $\\mathbb{R}^n$. For any $u \\in \\mathbb{R}^n$, it holds that\n\t\\[\n\t\t\\sup_{v \\in \\mathcal{S} \\cap B(0, 1)} \\langle v, u \\rangle \\leq 2 \\sup_{v \\in \\mathcal{N}_{1\/4}} \\langle v, u \\rangle,\n\t\\]\n\twhere $\\langle \\cdot, \\cdot \\rangle$ denotes the inner product in $\\mathbb{R}^n$.\n\\enlem\n\n\\begin{proof}\nDue to the definition of $\\mathcal{N}_{1\/4}$, it holds that for any $v \\in \\mathcal{S} \\cap B(0, 1)$, there exists a $v_k \\in \\mathcal{N}_{1\/4}$, such that $\\|v - v_k\\|_2 < 1\/4$. Therefore,\n\t\\begin{align*}\n\t\t\\langle v, u \\rangle = \\langle v - v_k + v_k, u \\rangle = \\langle x_k, u \\rangle + \\langle v_k, u \\rangle \\leq \\frac{1}{4} \\langle v, u \\rangle + \\frac{1}{4} \\langle v^{\\perp}, u \\rangle + \\langle v_k, u \\rangle,\n\t\\end{align*}\n\twhere the inequality follows from $x_k = v - v_k = \\langle x_k, v \\rangle v + \\langle x_k, v^{\\perp} \\rangle v^{\\perp}$. Then we have\n\t\\[\n\t\t\\frac{3}{4}\\langle v, u \\rangle \\leq \\frac{1}{4} \\langle v^{\\perp}, u \\rangle + \\langle v_k, u \\rangle.\n\t\\]\n\tIt follows from the same argument that \n\t\\[\n\t\t\\frac{3}{4}\\langle v^{\\perp}, u \\rangle \\leq \\frac{1}{4} \\langle v, u \\rangle + \\langle v_l, u \\rangle,\n\t\\]\n\twhere $v_l \\in \\mathcal{N}_{1\/4}$ satisfies $\\|v^{\\perp} - v_l\\|_2 < 1\/4$. 
Combining the previous two equation displays yields\n\t\[\n\t\t\langle v, u \rangle \leq 2 \sup_{v \in \mathcal{N}_{1\/4}} \langle v, u \rangle,\n\t\]\n\tand the final claim holds.\n\end{proof}\n\n\Cref{lem:deviation piecewise constant} is an adaptation of Lemma 3 in \cite{wang2021_jmlr}.\n\n\bnlem\label{lem:deviation piecewise constant}\n\tGiven any interval $I = (s, e] \subset \{1, \ldots, n\}$, let $\mclR_m := \{v \in \mathbb{R}^{(e-s)}| \|v\|_2 = 1, \sum_{t = 1}^{e-s-1} \mathbf{1}\{v_t \neq v_{t+1}\} = m\}$. Then for data generated from \Cref{assp:dcdp_linear_reg}, it holds that for any $\delta > 0$, $i \in \{1, \ldots, p\}$, \n\t\[\n\t\t\mathbb{P}\left\{\sup_{v\in \mclR_m} \left|\sum_{t = s+1}^e v_t \epsilon_t (X_t)_i\right| > \delta \right\} \leq C(e-s-1)^m 9^{m+1} \exp \left\{-c \min\left\{\frac{\delta^2}{4C_x^2}, \, \frac{\delta}{2C_x \|v\|_{\infty}}\right\}\right\}.\n\t\]\n\enlem\n\n\begin{proof}\nFor any $v \in \mathbb{R}^{(e-s)}$ satisfying $\sum_{t = 1}^{e-s-1}\mathbbm{1}\{v_t \neq v_{t+1}\} = m$, it is determined by a vector in $\mathbb{R}^{m+1}$ and a choice of $m$ out of $(e-s-1)$ points. Therefore, we have\n\t\begin{align*}\n\t\t& \mathbb{P}\left\{\sup_{\substack{v \in \mathbb{R}^{(e-s)}, \, \|v\|_2 = 1\\ \sum_{t = 1}^{e-s-1} \mathbf{1}\{v_t \neq v_{t+1}\} = m}} \left|\sum_{t = s+1}^e v_t \epsilon_t (X_t)_i\right| > \delta \right\} \\\n\t\t\leq & {(e-s-1) \choose m} 9^{m+1} \sup_{v \in \mathcal{N}_{1\/4}}\t\mathbb{P}\left\{\left|\sum_{t = s+1}^e v_t \epsilon_t (X_t)_i\right| > \delta\/2 \right\} \\\n\t\t\leq & {(e-s-1) \choose m} 9^{m+1} C\exp \left\{-c \min\left\{\frac{\delta^2}{4C_x^2}, \, \frac{\delta}{2C_x \|v\|_{\infty}}\right\}\right\} \\\n\t\t\leq & C(e-s-1)^m 9^{m+1} \exp \left\{-c \min\left\{\frac{\delta^2}{4C_x^2}, \, \frac{\delta}{2C_x \|v\|_{\infty}}\right\}\right\}.\n\t\end{align*}\n\n\end{proof}\n\n\subsection{Additional Technical Results} \n\bnlem\n\label{theorem:restricted eigenvalues 2}\nSuppose $\{X_{i } \}_{1\le i \le n } \overset{i.i.d.} {\sim} N_p (0, \Sigma ) $. \nDenote $\mathcal C_S := \{ v \in \mathbb R^p : \| v_{S^c }\|_1 \le 3\| v_{S}\|_1 \} $, where $ |S| \le { \mathfrak{s} } $.\nThen there exist constants $c$ and $C$ such that for all $\eta\le 1$, \n\begin{align} \nP \left( \sup_{v \in \mathcal C _S , \|v\|_2=1 } \left | v^\top ( \widehat \Sigma -\Sigma ) v \right| \ge C\eta \Lambda_{\max} (\Sigma) \right)\n\le 2\exp( -c n\eta ^2 + 2 { \mathfrak{s} } \log(p) ) .\n\end{align}\n\enlem\n\begin{proof}\nThis is a well-known restricted eigenvalue property for Gaussian design. The proof can be found in \cite{Basu2015} or \cite{Loh2012}.\n\end{proof} \n\n\bnlem\n\label{corollary:restricted eigenvalues 2} Suppose $\{X_{i } \}_{1\le i \le n } \overset{i.i.d.} {\sim} N_p (0, \Sigma ) $. \n Denote $\mathcal C_S := \{ v \in \mathbb R^p : \| v_{S^c }\|_1 \le 3\| v_{S}\|_1 \} $, where $ |S| \le { \mathfrak{s} } $. 
With probability at least $1- n ^{-5}$, it holds that\n$$ \left | v^\top ( \widehat \Sigma_\I -\Sigma ) v \right| \le C \sqrt { \frac{ { \mathfrak{s} } \log(n\vee p) }{|\I| } } \|v\|_2^2 \n $$\n for all $v \in \mathcal C _S $ and all $\I \subset (0, n]$ such that $| \I |\ge C_s { \mathfrak{s} } \log(n\vee p) $, where $ C_s$\nis the constant in \Cref{lemma:consistency} which is independent of $n, p$. \n\enlem\n\begin{proof} \nFor any $\I \subset (0, n]$ such that $| \I |\ge C_s { \mathfrak{s} } \log(n\vee p) $, by \Cref{theorem:restricted eigenvalues 2}, it holds that \n\begin{align*} \nP \left( \sup_{v \in \mathcal C _S , \|v\|_2=1 } \left | v^\top ( \widehat \Sigma _\I -\Sigma ) v \right| \ge C\eta \Lambda_{\max} (\Sigma) \right)\n\le 2\exp( -c |\I| \eta ^2 + 2 { \mathfrak{s} } \log(p) ) .\n\end{align*}\nLet $\eta= C_1 \sqrt { \frac{ { \mathfrak{s} } \log(n\vee p) }{|\I | } } $ for sufficiently large constant $C_1$. Note that $\eta< 1$ if $ |\I| > C_1^2 { \mathfrak{s} } \log(n\vee p) $. Then with probability at most \n$(n\vee p)^{-7} $, \n$$\sup_{v \in \mathcal C _S , \|v\|_2=1 } \left | v^\top ( \widehat \Sigma _\I -\Sigma ) v \right| \ge C_2 \sqrt { \frac{ { \mathfrak{s} } \log(n\vee p) }{|\I | } } .$$\nSince there are at most $n^2$ many different choices of $ \I \subset (0,n]$, the desired result follows from a union bound argument. \n\end{proof} \n\n\n\bnlem\n\label{lem:restricted eigenvalue}\nUnder \Cref{assp:dcdp_linear_reg}, it holds that\n \begin{align*}\n\t\t \mathbb P \bigg( \sum_{t\in \I } ( X_t^\top v )^2 \ge \frac{c_x|\I| }{4} \|v\|_2^2 - C_2 \log(n\vee p) \|v\|_1 ^2 \ \forall v \in \mathbb R^p \text{ and } \forall |\I|\ge C_s { \mathfrak{s} } \log(n\vee p) \bigg) \ge 1- n^{-5} \n\t\end{align*} \n\twhere $ C_2 > 0$ is an absolute constant only depending on $C_x$.\n\enlem\n\begin{proof}\nBy the well-known restricted eigenvalue condition, for any fixed $\I$, it holds that \n \begin{align*}\n\t\t \mathbb P \bigg( \sum_{t\in \I } ( X_t^\top v )^2 \ge \frac{c_x |\I| }{4} \|v\|_2^2 - C_2 \log(n\vee p) \|v\|_1 ^2 \ \ \forall v \in \mathbb R^p \bigg) \ge 1- C_3 \exp(-c_3|\I| ).\n\t\end{align*} \n Since $|\I| \ge C_s { \mathfrak{s} } \log(n\vee p) $ with $C_s$ sufficiently large, $C_3 \exp(-c_3|\I|) \le n^{-7}$, so that \n \begin{align*}\n\t\t \mathbb P \bigg( \sum_{t\in \I } ( X_t^\top v )^2 \ge \frac{c_ x |\I| }{4} \|v\|_2^2 - C_2 \log(n\vee p) \|v\|_1^2 \ \ \forall v \in \mathbb R^p \bigg) \ge 1- n^{-7}.\n\t\end{align*} \n\tSince there are at most $n^2$ many subintervals $\I \subset (0,n]$, it follows from a union bound argument that \n\t \begin{align*}\n\t\t \mathbb P \bigg( \sum_{t\in \I } ( X_t^\top v )^2 \ge \frac{c_x |\I| }{4} \|v\|_2^2 - C_2 \log(n\vee p) \|v\|_1^2 \ \ \forall v \in \mathbb R^p \text{ and } \forall |\I|\ge C_s { \mathfrak{s} } \log(n\vee p) \bigg) \ge 1- n^{-5} .\n\t\end{align*} \n\end{proof} \n\n\bnlem \label{lemma:consistency}\nSuppose \Cref{assp:dcdp_linear_reg} holds. There exists a sufficiently large constant $ C_s$ such that the following conditions hold. 
\n\\\\\n\\\\\n{\\bf a.} With probability at least $1-n^{-3} $, it holds that\n\\begin{align}\\label{eq:independent condition 1c 1}\n \\left | \\frac{1}{|\\I | } \\sum_{i \\in \\mathcal I } \\epsilon_i X_i^\\top \\beta \\right| \\le C\\sigma_{\\epsilon}\\sqrt { \\frac{\\log(n\\vee p)}{|\\I | } } \\|\\beta\\|_1 \n \\end{align}\nuniformly for all $ \\beta \\in \\mathbb R^p$ and all $\\mathcal I \\subset (0, n]$ such that $|\\I|\\ge C_s { \\mathfrak{s} } \\log(n\\vee p) $, \n\\\\\n\\\\\n {\\bf b.} Let $\\{ u_i\\}_{i=1}^n \\subset \\mathbb R^p$ be a collection of deterministic vectors. Then with probability at least $1-n^{-3} $, it holds that \n\\begin{align}\\label{eq:independent condition 1c}\n \\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I } u^\\top_i X_i X_i^\\top \\beta - \\frac{1}{ |\\I | } \\sum_{i \\in \\mathcal I } u^\\top _i \\Sigma \\beta \\right| \\le C \\left( \\max _{1\\le i \\le n }\\|u_i\\|_2 \\right) \\sqrt { \\frac{\\log(n\\vee p)}{|\\I | } } \\|\\beta\\|_1 \n \\end{align}\n uniformly for all $ \\beta \\in \\mathbb R^p$ and all $\\mathcal I \\subset (0, n]$ such that $|\\I|\\ge C_s { \\mathfrak{s} } \\log(n\\vee p) $. \n\\enlem \n \\begin{proof}\n The justification of the \\eqref{eq:independent condition 1c 1} is similar and simpler than the justification of \\eqref{eq:independent condition 1c}. For conciseness, only the justification of \\eqref{eq:independent condition 1c} is presented.\n \nFor any $\\mathcal I \\subset (0, n]$ such that $|\\I|\\ge C_s { \\mathfrak{s} } \\log(n\\vee p) $, it holds that\n \\begin{align*}\n &\\left| \\frac{1}{ |\\I | } \\sum_{i\\in \\I } u^\\top _i X_i X_i^\\top \\beta - \\frac{1}{ |\\I | } \\sum_{i\\in \\I } u^\\top_i \\Sigma \\beta \\right| \n \\\\\n =\n & \\left| \\left( \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top _i X_i X_i^\\top - \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top_i \\Sigma \\right) \\beta \\right| \n \\\\\n \\le \n & \\max_{1\\le j \\le p } \\left | \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top _i X_i X_{i,j} - \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top_i \\Sigma (, j) \\right| \\|\\beta\\|_1.\n \\end{align*}\n Note that \n $ E(u^\\top _i X_i X_{i,j} ) =u^\\top _i \\Sigma (, j) $ and in addition, \n $$u^\\top _ i X_i \\sim N(0, u^\\top _i \\Sigma u _i ) \\quad \\text{ and } \\quad X_{i,j} \\sim N(0, \\Sigma({j,j})) .$$\n So $u^\\top _i X_i X_{i,j} $ is a sub-exponential random variable such that \n $$ u^\\top _i X_i X_{i,j} \\sim SE( u^\\top_i \\Sigma u _i \\Sigma ({j,j})).$$\n As a result, for $\\gamma<1$ and every $j$,\n$$ P \\left( \\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I } u^\\top _ i X_i X_{i,j} - u^\\top \\Sigma (, j) \\right| \\ge \\gamma \\sqrt { \\max_{1\\le i\\le n } ( u^\\top_i \\Sigma u_i) \\Sigma ( {j,j} ) }\\right) \\le \\exp(-c \\gamma^2 |\\I | ). $$\nSince \n$$ \\sqrt { u^\\top _i \\Sigma u _i \\Sigma ({j,j} ) } \\le C_x \\|u_i \\|_2, $$\nby union bound,\n$$ \\mathbb P \\left( \\max_{1\\le j \\le p }\\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I} u^\\top _ i X_i X_{i,j} - \\frac{1}{|\\I | } \\sum_{i \\in\\I} u^\\top _i \\Sigma (, j) \\right| \\ge \\gamma \nC_x \\left( \\max _{1\\le i \\le n }\\|u_i\\|_2 \\right) \\right) \\le p\\exp(-c \\gamma^2 |\\I| ). $$\nLet $\\gamma = 3\\sqrt { \\frac{\\log(n\\vee p) }{c |\\I| } }$. Note that $ \\gamma<1$ if $|\\I|\\ge C_s { \\mathfrak{s} } \\log(n\\vee p) $ for sufficiently large $C_s$. 
Therefore \n$$ \mathbb P \left( \max_{1\le j \le p }\left | \frac{1}{|\I | } \sum_{i \in \I} u^\top _ i X_i X_{i,j} - \frac{1}{|\I | } \sum_{i \in\I} u^\top _i \Sigma (, j) \right| \ge \nC_1\sqrt{ \frac{ \log(n\vee p)}{|\I| } } \left( \max _{1\le i \le n }\|u_i\|_2 \right) \right) \le p\exp(-9\log(n\vee p) ). $$\nSince there are at most $n^2$ many intervals $\I \subset(0,n]$, it follows that \n\begin{align*}\n &\mathbb P \left( \max_{1\le j \le p }\left | \frac{1}{|\I | } \sum_{i \in \I} u^\top _ i X_i X_{i,j} - \frac{1}{|\I | } \sum_{i \in\I} u^\top _i \Sigma (, j) \right| \ge \nC_1\sqrt{ \frac{ \log(n\vee p)}{|\I| } } \left( \max _{1\le i \le n }\|u_i\|_2 \right) \ \text{ for some } \I \subset (0,n] \text{ with } |\I| \ge C_s { \mathfrak{s} } \log(n\vee p) \right)\n\\\n \le & p n^2 \exp(-9\log(n\vee p) ) \le n^{-3}. \n\end{align*}\nThis immediately gives \eqref{eq:independent condition 1c}.\n \end{proof} \n\n \bnlem\label{lem-x-bound}\nUnder \Cref{assp:dcdp_linear_reg}, for any interval $\I \subset (0, n]$, for any \n\t\[\n\t\t\lambda \geq \lambda_1 := C_{\lambda}\sigma_{\epsilon} \sqrt{\log(n p)},\n\t\]\n\twhere $C_{\lambda} > 0$ is a large enough absolute constant, it holds with probability at least $1 - n^{-5}$ that\n\t\[\n \|\sum_{i \in \I} \epsilon_i X_i \|_{\infty} \leq \lambda \sqrt{\max\{|\I|, \, \log(n p)\}}\/8.\n\t\]\n\enlem\n\n\begin{proof}\n\tSince $\epsilon_i$'s are sub-Gaussian random variables and $X_i$'s are sub-Gaussian random vectors, we have that $\epsilon_i X_i$'s are sub-exponential random vectors with $\|\epsilon_iX_i\|_{\psi_1}\leq C_x \sigma_{\epsilon}$ \citep[see e.g.~Lemma~2.7.7 in][]{vershynin2018high}. It then follows from Bernstein's inequality \citep[see e.g.~Theorem~2.8.1 in][]{vershynin2018high} that for any $t > 0$,\n\t\t\[\n\t\mathbb{P}\left\{\|\sum_{i \in \I} \epsilon_i X_i \|_{\infty} > t\right\} \leq 2p \exp\left\{-c \min\left\{\frac{t^2}{|\I|C_x^2\sigma^2_{\epsilon}}, \, \frac{t}{C_x \sigma_{\epsilon}}\right\}\right\}.\n\t\t\]\n\t\tTaking \n\t\t\[\n\t\t\tt = C_{\lambda}C_x\/4 \sigma_{\epsilon} \sqrt{\log(n p)} \sqrt{\max\{|\I|, \, \log(n p)\}}\n\t\t\]\n\t\tyields the conclusion.\n\end{proof}\n \n\bnlem \label{lemma:beta bounded 1}\nSuppose \Cref{assp:dcdp_linear_reg} holds. Let $\I \subset [1,n] $. Denote \n $ \kappa = \min_{k\in \{1,\ldots, K\} } \kappa_k,$ \nwhere $ \{ \kappa_k\}_{k=1}^K$ are defined in \Cref{assp:dcdp_linear_reg}. Then for any $i\in [n]$,\n $$\|\beta^*_\I - \beta^*_i\|_2 \le C\kappa \le CC_\kappa,$$\n for some absolute constant $C$ independent of $n$.\n\enlem \n \begin{proof}\n It suffices to consider $\I =[1,n]$ and $\beta_i^*= \beta_1^*$ as the general case is similar. Observe that \n \begin{align*}\n \| \beta^*_{[1,n]} - \beta_ 1^* \|_2 =& \n \| \frac{1}{n} \sum_{i=1}^n \beta_i ^* - \beta_1^* \n \|_2 \n = \| \frac{1}{n} \sum_{k=0}^{K} \Delta_k \beta_{\eta_k+ 1} ^* - \frac{1}{n}\sum_{k=0}^{K} \Delta_k \beta_1 ^*\n \|_2 \n \\\n \le \n & \frac{1}{n} \sum_{k=0}^{K} \left\| \Delta_k (\beta_{\eta_k+ 1} ^* -\beta_1 ^* ) \right\|_2 \le \n \frac{1}{n}\sum_{k=0}^{K} \Delta_k (K+1) \kappa \le (K+1)\kappa . 
\n \\end{align*}\n By \\Cref{assp:dcdp_linear_reg}, both $\\kappa $ and $K $ bounded above. \n \\end{proof}\n \n \\bnlem\\label{eq:cumsum upper bound}\n Let $t\\in \\I =(s,e] \\subset [1,n]$. Denote \n $ \\kappa_{\\max}= \\max_{k\\in \\{1,\\ldots, K\\} } \\kappa_k,$ \nwhere $ \\{ \\kappa_k\\}_{k=1}^K$ are defined in \\Cref{assp:dcdp_linear_reg}. Then \n $$ \\sup_{0< s < t0$, it holds with probability at least $1 - \\exp(-u)$ that\n\\begin{equation}\n \\|\\widehat{\\Sigma}_{\\I} - \\Sigma_{\\I}\\|_{op}\\lesssim g_X^2(\\sqrt{\\frac{p + u}{|\\mclI|}}\\vee \\frac{p + u}{|\\mclI|}).\n \\label{eq: cov concentration 1}\n\\end{equation}\nAs a result, when $n\\geq C_s p\\log(n\\vee p)$ for some universal constant $C_s>0$, it holds with probability at least $1 - (n\\vee p)^{-7}$ that\n\\begin{equation}\n \\|\\widehat{\\Sigma}_{\\I} - \\Sigma_{\\I}\\|_{op}\\leq Cg_X^2\\sqrt{\\frac{p\\log(n\\vee p)}{|\\I|}},\n \\label{eq: cov concentration 2}\n\\end{equation}\nwhere $C$ is some universal constant that does not depend on $n, p, g_X$, and $C_s$. In addition let $\\widehat{\\Omega}_{\\I} = \\argmin_{\\Omega\\in \\mathbb{S}_+}L(\\Omega,\\I)$ and $\\widetilde{\\Omega}_{\\I} = (\\frac{1}{|\\I|}\\sum_{i\\in\\I}\\Sigma^*_i)^{-1}$. if $|\\I|\\geq C_s p\\log(n\\vee p)g_X^4 \/ c_X^2$ for sufficiently large constant $C_s>0$, then it holds with probability at least $1 - (n\\vee p)^{-7}$ that\n\\begin{equation}\n \\|\\widehat{\\Omega}_{\\I} -\\widetilde{\\Omega}_{\\I}\\|_{op}\\leq C \\frac{g_X^2}{c_X^2}\\sqrt{\\frac{p\\log(n\\vee p)}{|\\I|}}.\n \\label{eq: precision concentration}\n\\end{equation}\n\\enlem\n\\bprf\nIf there is no change point in $\\mclI$, then the two inequalities \\eqref{eq: cov concentration 1} and \\eqref{eq: cov concentration 2} are well-known results in the literature, see, e.g., \\cite{kereta2021ejs}. Otherwise, suppose $\\mclI$ is split by change points into $q$ subintervals $\\mclI_1,\\cdots, \\mclI_q$. By \\Cref{assp:DCDP_covariance}, we know that $q\\leq C$ for some constant $C<\\infty$. 
Thus with probability at least $1 - \exp(-u)$,\n\begin{align*}\n \|\widehat{\Sigma}_{\I} - \Sigma_{\I}\|_{op} &\leq \|\frac{1}{|\mclI|}\sum_{k\in [q]}|\mclI_k|(\widehat{\Sigma}_{\I_k} - \Sigma_{\I_k})\|_{op}\\\n & \leq C_1g_X^2\frac{\sqrt{p + u}}{|\mclI|}\sum_{k\in [q]}\sqrt{|\mclI_k|\vee (p + u)} \\\n &\leq C_2g_X^2\frac{\sqrt{p + u}}{|\mclI|}\sqrt{\max_{k\in [q]}{|\mclI_k|}\vee (p + u)}\\\n &\leq C_2g_X^2\sqrt{\frac{{p + u}}{|\mclI|}}(\sqrt{\frac{\max_{k\in [q]}{|\mclI_k|}}{|\mclI|}}\vee \sqrt{\frac{{p + u}}{|\mclI|}}) \leq C_2 g_X^2(\sqrt{\frac{{p + u}}{|\mclI|}}\vee \frac{{p + u}}{|\mclI|}).\n \end{align*}\nIt is then straightforward to see that \Cref{eq: cov concentration 2} holds with probability at least $1 - (n\vee p)^{-7}$ when $n\geq C_sp\log(n\vee p)$ for some sufficiently large constant $C_s>0$.\n\nFor \Cref{eq: precision concentration}, setting the gradient of the loss function $L(\Omega, \I)$ to zero gives\n\begin{equation*}\n \widehat{\Omega}_{\I} = (\widehat{\Sigma}_{\I})^{\dagger}.\n\end{equation*}\nThen \Cref{eq: precision concentration} is implied by \Cref{eq: cov concentration 2} and the well-known property that\n$$\n\left\|\mathbf{H}^{\dagger}-\mathbf{G}^{\dagger}\right\|_{op} \leq C\max \left\{\|\mathbf{G}^{\dagger}\|^2_{op},\|\mathbf{H}^{\dagger}\|^2_{op}\right\}\left\|\mathbf{H}-\mathbf{G}\right\|_{op},\n$$\nfor two matrices $\mathbf{G},\mathbf{H}\in \mathbb{R}^{p\times p}$.\n\eprf\n\n\subsection{Technical lemmas}\n\n\bnlem[No change point]\n\label{lem:cov no cp}\nFor interval $\mclI$ containing no change point, it holds with probability at least $1 - n^{-5}$ that\n\begin{equation}\n \mclF(\widehat{\Omega}_{\I}, \mclI) -\mclF({\Omega}^*, \mclI) \geq -g_X^4 p^2\log(n\vee p)\max_{k\in[K + 1]}\|\Omega^*_{\eta_k}\|_{op}^2.\n\end{equation}\n\enlem\n\bprf\nIf $|\I|< C_s\frac{g_X^4}{c_X^2}p\log(n\vee p)$, then $\mclF (\widehat{\Omega}_{\I}, \mclI) = \mclF ({\Omega}^*, \mclI)=0$ and the conclusion holds automatically. If $|\I|\geq C_s\frac{g_X^4}{c_X^2}p\log(n\vee p)$, then by \Cref{lem:estimation covariance}, it holds with probability at least $1 - n^{-7}$ that\n\begin{align}\n\mclF(\widehat{\Omega}_{\I}, \mclI) -\mclF({\Omega}^*, \mclI) \geq & |\mclI|{\rm Tr}[(\widehat{\Omega}_{\I} - \Omega^*)^\top (\widehat{\Sigma}_{\I} - \Sigma^*)] + \frac{c|\mclI|}{2\|\Omega^*\|_{op}^2}\|\widehat{\Omega}_{\I} - \Omega^*\|_F^2 \\\n\geq & -|\mclI|\|\widehat{\Omega}_{\I} - \Omega^*\|_F \|\widehat{\Sigma}_{\I} - \Sigma^*\|_F \\\n\geq & -|\mclI|p \|\widehat{\Omega}_{\I} - \Omega^*\|_{op} \|\widehat{\Sigma}_{\I} - \Sigma^*\|_{op}\\\n\geq & -g_X^4 p^2\log(n\vee p)\|\Omega^*\|_{op}^2.\n\end{align}\n\eprf\n\n\bnlem\n\label{lem:cov loss deviation no change point}\nLet $\I\subset [1,T]$ be any interval that contains no change point. 
Then for any interval ${ \mathcal J } \supset \I$, it holds with probability at least $1 - (n\vee p)^{-5}$ that\n\begin{equation*}\n \mclF(\Omega^*_{\I},\I)\leq \mclF(\widehat{\Omega}_{ \mathcal J } ,\I) + C\frac{g_X^4}{c_X^2}p^2\log(n\vee p).\n\end{equation*}\n\enlem\n\begin{proof}\nThe conclusion is guaranteed by \Cref{lem:cov no cp} together with the fact that $\widehat{\Omega}_{\I}$ is the minimizer of $\mclF(\Omega,\I)$, so that $\mclF(\widehat{\Omega}_{\I},\I) \leq \mclF(\widehat{\Omega}_{ \mathcal J } ,\I)$.\n\end{proof}\n\n\bnlem[Single change point]\n\label{lem:cov single cp}\nSuppose the good events \n$\mathcal L ( { \mathcal B_n^{-1} \Delta } ) $ and $\mathcal R ( { \mathcal B_n^{-1} \Delta } ) $ defined in \Cref{eq:left and right approximation of change points} hold. \nLet $ \I=(s,e] \in \mathcal {\widehat P} $ be such that $\I$ contains exactly one change point $ \eta_k $. \n Then with probability at least $1-(n\vee p)^{-3}$, it holds that \n\begin{equation}\n \min\{ \eta_k -s ,e-\eta_k \} \leq CC_{\gamma} g_X^4\frac{C_X^2}{c_X^6}\frac{p^2\log(n\vee p)}{\kappa_k^2} + Cg_X^4\frac{C_X^6}{c_X^6} { \mathcal B_n^{-1} \Delta } .\n\end{equation}\n\enlem\n\bprf\nIf either $ \eta_k -s \le { \mathcal B_n^{-1} \Delta } $ or $e-\eta_k\le { \mathcal B_n^{-1} \Delta } $, then there is nothing to show. So assume that \n$$ \eta_k -s > { \mathcal B_n^{-1} \Delta } \quad \text{and} \quad e-\eta_k > { \mathcal B_n^{-1} \Delta } . $$\nBy event $\mathcal R (p^{-1} { \mathcal B_n^{-1} \Delta } )$, there exists $ s_ u \in \{ s_q\}_{q=1}^ { \mathcal Q} $ such that \n$$0\le s_u - \eta_k \le p^{-1} { \mathcal B_n^{-1} \Delta } . $$\n So \n$$ \eta_k \le s_ u \le e .$$\nDenote \n$$ \I_ 1 = (s,s_u] \quad \text{and} \quad \I_2 = (s_u, e],$$\nand $\mclF^*({ \mathcal J } ) = \sum_{i\in { \mathcal J } }[{\rm Tr}((\Omega^*_i)^\top X_iX_i^\top) -\log|\Omega_i^*|]$. Since\n$s, e, s_u \in \{ s_q\}_{q=1}^ { \mathcal Q} $, by the definition of $\widehat{P}$ and $\widehat{\Omega}$, and \Cref{lem:cov one change deviation bound}, it holds that\n\begin{align}\n \mclF(\widehat{\Omega}_{\I},\I)\leq& \mclF(\widehat{\Omega}_{\I_1},\I_1) + \mclF(\widehat{\Omega}_{\I_2},\I_2) + \gamma \nonumber \\\n \leq & \mclF^*(\I_1) + \frac{C_X^6 p}{c_X^4}(s_u - \eta_k)\kappa_k^2 + C\frac{g_X^4 C_X^2}{c_X^4}p^2\log(n\vee p) + \mclF^*(\I_2) + \gamma \nonumber \\\n \leq & \mclF^*(\I) + \frac{C_X^6}{c_X^4}\mclB_n^{-1}\Delta\kappa_k^2 + C\frac{g_X^4 C_X^2}{c_X^4}p^2\log(n\vee p) + \gamma,\n\end{align}\nwhere the last inequality uses $s_u - \eta_k\leq p^{-1}\mclB_n^{-1}\Delta$ and the fact that $\mclF^*(\I_1) + \mclF^*(\I_2) = \mclF^*(\I)$. 
Let \n$$\\widetilde{\\gamma} = \\frac{C_X^6}{c_X^4}\\mclB_n^{-1}\\Delta\\kappa_k^2 + C\\frac{g_X^4 C_X^2}{c_X^4}p^2\\log(n\\vee p) + \\gamma.$$ \nThen by Taylor expansion and \\Cref{lem:estimation covariance} we have\n\\begin{align}\n \\frac{c}{\\max_{k\\in [K]}\\|\\Omega_{\\I_k}^*\\|_{op}^2}\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \n &\\leq \\widetilde{\\gamma} + \\sum_{i=k-1}^k|\\I_i|{\\rm Tr}[(\\Omega_{\\I_i}^* - \\widehat{\\Omega}_{\\I})^\\top (\\widehat{\\Sigma}_{\\I_i} - \\Sigma^*_{\\I_i})] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + \\sum_{i=k-1}^k {|\\mclI_{i}|}\\|\\widehat{\\Sigma}_{\\I_i} - \\Sigma^*_{\\I_i}\\|_F\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_{\\I_i}\\|_F \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + C_1g_X^2 p\\log^{\\frac{1}{2}}(n\\vee p)\\left[ \\sum_{i=k-1}^k \\sqrt{|\\mclI_{i}|}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_{\\I_i}\\|_F \\right] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + C_1g_X^2 p\\log^{\\frac{1}{2}}(n\\vee p) \\sqrt{\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2}.\n\\end{align}\nThe inequality above implies that\n\\begin{equation}\n \\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \\leq \\frac{2}{c}\\max\\|\\Omega_{\\I_k}^*\\|_{op}^2\\left[\\widetilde{\\gamma} + \\max\\|\\Omega_{\\I_k}^*\\|_{op}^2\\|X\\|^4_{\\psi_2} p^{2}\\log(n\\vee p)\\right].\n\\end{equation}\nOn the other hand,\n\\begin{equation}\n \\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2\\geq \\frac{|\\mclI_{k-1}||\\mclI_{k}|}{|\\mclI|}\\|\\Omega^*_{\\I_{k-1}} - \\Omega^*_{\\I_k}\\|_F^2,\n\\end{equation}\nwhich implies that\n\\begin{equation}\n \\min\\{|\\mclI_{k-1}|,|\\mclI_{k}|\\}\\leq C_2C_{\\gamma} g_X^4\\frac{C_X^2}{c_X^6}\\frac{p^2\\log(n\\vee p)}{\\kappa_k^2} + C_2g_X^4\\frac{C_X^6}{c_X^6} { \\mathcal B_n^{-1} \\Delta } .\n\\end{equation}\n\\eprf\n\n\nRecall that we assume for $i\\in [n]$, $c_XI_p \\preceq \\Sigma_i \\preceq C_X I_p$ for some universal constants $c_X>0, C_X<\\infty$.\n\n\\bnlem[Two change points]\n\\label{lem:cov two cp}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. Let $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be an interval that contains exactly two change points $ \\eta_k,\\eta_{k+1}$. Suppose in addition that $\\gamma \\geq C_{\\gamma}\\frac{g_X^4}{c_X^2} p^2\\log(n\\vee p)$, and\n\n\\begin{equation}\n \\Delta_{\\min} \\kappa^2 \\geq \\mathcal{B}_n \\frac{g_X^4}{c_X^4}p^2\\log(n\\vee p),\n\\end{equation}\nthen with probability at least $1 - n^{-5}$ it holds that\n\\begin{equation}\n \\max\\{\\eta_k - s, e - \\eta_{k + 1}\\}\\leq { \\mathcal B_n^{-1\/2} \\Delta } .\n\\end{equation}\n\\enlem\n\\bprf\nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, let $ s_u, s_v$ be such that \n $\\eta_k \\le s_u \\le s_v \\le \\eta_{k+1} $ and that \n $$ 0 \\le s_u-\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } , \\quad 0\\le \\eta_{k+1} - s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
$$\n\n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n\n \\node[color=black] at (-2.3,-0.3) {\\small $\\eta_{k+1}$};\n\\draw(-2.3 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_v$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center} \n\nDenote \n$$ \\mathcal I_ 1= ( s, s _u], \\quad \\I_2 =(s_u, s_v] \\quad \\text{and} \\quad \\I_3 = (s_v,e]. $$\nIn addition, denote \n$$ { \\mathcal J } _1 = (s,\\eta_k ], \\quad { \\mathcal J } _2=(\\eta_k, \\eta_{k} + \\frac{ \\eta_{k+1} -\\eta_k }{2}], \\quad { \\mathcal J } _3 = ( \\eta_k+ \\frac{ \\eta_{k+1} -\\eta_k }{2},\\eta_{k+1 } ] \\quad \\text{and} \\quad { \\mathcal J } _4 = (\\eta_{k+1} , e] .$$\nSince \n$s, e, s_u ,s_v \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, by the event $\\mclL(p^{-1} { \\mathcal B_n^{-1} \\Delta } )$ and $\\mclR(p^{-1} { \\mathcal B_n^{-1} \\Delta } )$, it holds with probability at least $1 - n^{-3}$ that\n$$\n0\\leq s_u - \\eta_k\\leq p^{-1} { \\mathcal B_n^{-1} \\Delta } ,\\ 0\\leq \\eta_{k+1} - s_v\\leq p^{-1} { \\mathcal B_n^{-1} \\Delta } .\n$$\nDenote\n$$\\I_1 = (s ,\\eta_k], \\I_2 = (\\eta_k, \\eta_{k + 1}], \\I_3 = (\\eta_{k + 1}, e].$$\nBy the definition of DP and $\\widehat{\\Omega}_{\\I}$, it holds that\n\\begin{align}\n \\mclF(\\widehat {\\Omega}_{\\I},\\I)\n \\leq & \\sum_{i = 1}^3 \\mclF(\\widehat {\\Omega}_{\\I_i},\\I_i) + 2\\gamma \\\\\n \\leq & \\sum_{i = 1}^3 \\mclF({\\Omega}_{\\I_i}^*,\\I_i) + 2\\gamma + \\frac{C_X^6 p}{c_X^4} \\frac{|{ \\mathcal J } _1|(s_u - \\eta_k)}{|{ \\mathcal J } _1|+s_u - \\eta_k} \\kappa_k^2 + \\frac{C_X^6 p}{c_X^4} \\frac{|{ \\mathcal J } _4|(\\eta_{k + 1} - s_v)}{|{ \\mathcal J } _4|+\\eta_{k + 1} - s_v} \\kappa_k^2 + C\\frac{g_X^4 C_X^2}{c_X^4}p^2\\log(n\\vee p) \\\\\n \\leq & \\sum_{i = 1}^3 \\mclF({\\Omega}_{\\I_i}^*,\\I_i) + 2\\gamma + \\frac{C_X^6}{c_X^4} { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + C\\frac{g_X^4 C_X^2}{c_X^4}p^2\\log(n\\vee p).\n\\end{align}\nLet\n$$\n\\widetilde{\\gamma} = 2\\frac{C_X^6}{c_X^4}\\mclB_n^{-1}\\Delta\\kappa_k^2 + C\\frac{g_X^4 C_X^2}{c_X^4}p^2\\log(n\\vee p) + 2\\gamma.\n$$\nThen by Taylor expansion and \\Cref{lem:estimation covariance} we have\n\\begin{align}\n cc_X^2\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 &\\leq \\widetilde{\\gamma} + \\sum_{i=1}^3|\\I_i|{\\rm Tr}[(\\Omega_{\\I_i}^* - \\widehat{\\Omega}_{\\I})^\\top (\\widehat{\\Sigma}_{\\I_i} - \\Sigma^*_{\\I_i})] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + Cg_X^2 p\\log^{\\frac{1}{2}}(n\\vee p)\\left[ \\sum_{i=1}^3 \\sqrt{|\\mclI_{i}|}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_{i}\\|_F \\right] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + Cg_X^2 p\\log^{\\frac{1}{2}}(n\\vee p) \\sqrt{\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2}.\n\\end{align}\nThe inequality above implies that\n\\begin{equation}\n \\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \\leq \\frac{C_1}{c_X^2}\\left[\\widetilde{\\gamma} + \\frac{1}{c_X^2}\\|X\\|^4_{\\psi_2} 
p^{2}\\log(n\\vee p)\\right].\n\\end{equation}\nBy the choice of $\\gamma$, it holds that\n\\begin{equation}\n \\sum_{t\\in \\I_1\\cup\\I_2}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \\leq \\frac{C_1C_{\\gamma}}{c_X^4} \\|X\\|^4_{\\psi_2} p^{2}\\log(n\\vee p).\n\\end{equation}\nOn the other hand,\n\\begin{equation}\n \\sum_{t\\in \\I_1\\cup\\I_2}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2\\geq \\frac{|\\mclI_{1}||\\mclI_{2}|}{|\\mclI|}\\|\\Omega^*_{k-1} - \\Omega^*_k\\|_F^2\\geq \\frac{1}{2}\\min\\{|\\I_1|,|\\I_2|\\}\\|\\Omega^*_{k-1} - \\Omega^*_k\\|_F^2.\n\\end{equation}\nSuppose $|\\I_1|\\geq |\\I_2|$, then the inequality above leads to\n\\begin{equation*}\n \\Delta_{\\min} \\kappa^2\\leq \\frac{C_1C_{\\gamma}}{c_X^4} \\|X\\|^4_{\\psi_2} p^{2}\\log(n\\vee p),\n\\end{equation*}\nwhich is contradictory to the assumption on $\\Delta$. Therefore, $|\\I_1|<|\\I_2|$ and we have \n\\begin{equation}\n s-\\eta_k = |\\I_1|\\leq CC_{\\gamma} \\frac{g_X^4}{c_X^4\\|\\Omega_k^* - \\Omega_{k-1}^*\\|_F^2}p^2\\log(n\\vee p).\n\\end{equation}\nThe bound for $e - \\eta_{k+1}$ can be proved similarly.\n\\eprf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bnlem[Three or more change points]\n\\label{lem:cov three or more cp}\nSuppose the assumptions in \\Cref{assp:DCDP_covariance} hold. Then with probability at least $1-(n\\vee p)^{-3}$, there is no intervals in $\\widehat { \\mathcal P}$ containing three or more true change points. \n \\enlem \n\\bprf\nWe prove by contradiction. Suppose $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $ \\{ \\eta_1, \\ldots, \\eta_M\\} \\subset \\I $ with $M\\ge 3$. Throughout the proof, $M$ is assumed to be a parameter that can potentially change with $n$. \nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, by relabeling $\\{ s_q\\}_{q=1}^ { \\mathcal Q} $ if necessary, let $ \\{ s_m\\}_{m=1}^M $ be such that \n $$ 0 \\le s_m -\\eta_m \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{for} \\quad 1 \\le m \\le M-1 $$ and that\n $$ 0\\le \\eta_M - s_M \\le { \\mathcal B_n^{-1} \\Delta } .$$ \n Note that these choices ensure that $ \\{ s_m\\}_{m=1}^M \\subset \\I . $\n \n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_1$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_1$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n\n \\node[color=black] at (-5,-0.3) {\\small $\\eta_2$};\n\\draw(-5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-4.5,-0.3) {\\small $s_2$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-4.5,0) }; \n \n\n \\node[color=black] at (-2.5,-0.3) {\\small $\\eta_3$};\n\\draw(-2.5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_3$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center}\n\n{\\bf Step 1.}\n Denote \n $$ \\mathcal I_ 1= ( s, s _1], \\quad \\I_m =(s_{m-1} , s_m] \\text{ for } 2 \\le m \\le M \\quad \\text{and} \\quad \\I_{M+1} = (s_M,e]. 
$$\nThen $ s , e, \{ s_m \}_{m=1}^M \subset \{ s_q\}_{q=1}^ { \mathcal Q} $, so the partition $\{\I_m\}_{m=1}^{M+1}$ is admissible in the dynamic programming objective. With a slight abuse of notation, relabel the $M$ true change points inside $\I$ as $\{\eta_{q + i}\}_{i\in [M]}$ and redefine\n\begin{equation*}\n \I_1 = (s, \eta_{q + 1}],\ \I_m = (\eta_{q + m - 1},\eta_{q + m}],\ \I_{M+1} = (\eta_{q + M},e].\n\end{equation*}\nThen by the definition of $\widehat{\mclP}$ and $\widehat{\Omega}_{\I_m}$, it holds that\n\begin{equation*}\n \mclF(\widehat {\Omega}_{\I},\I)\leq \sum_{i = 1}^{M+1} \mclF (\widehat {\Omega}_{\I_i},\I_i) + M\gamma\leq \sum_{i = 1}^{M+1} \mclF ({\Omega}_{\I_i}^*,\I_i) + M\gamma,\n\end{equation*}\nwhich implies that\n\begin{align}\n & \sum_{t\in \I}{\rm Tr}(\widehat{\Omega}_{\I}^\top (X_tX_t^\top)) - |\I|\log|\widehat{\Omega}_{\I}| \nonumber \\\n \leq & \sum_{i=1}^{M+1} \sum_{t\in \I_i}{\rm Tr}(({\Omega}_{\I_i}^*)^\top (X_tX_t^\top)) - \sum_{i=1}^{M+1}|\I_i|\log|{\Omega}_{\I_i}^*| + M\gamma.\n\end{align}\nBy Taylor expansion and \Cref{lem:estimation covariance} we have\n\begin{align}\n cc_X^2\sum_{t\in \mclI}\|\widehat{\Omega}_{\I} - \Omega^*_t\|_F^2 &\leq M\gamma + \sum_{i=1}^{M+1}|\I_i|{\rm Tr}[(\Omega_{\I_i}^* - \widehat{\Omega}_{\I})^\top (\widehat{\Sigma}_{\I_i} - \Sigma^*_{\I_i})] \nonumber \\\n &\leq M\gamma + Cg_X^2 p\log^{\frac{1}{2}}(n\vee p)\left[ \sum_{i=1}^{M+1} \sqrt{|\mclI_{i}|}\|\widehat{\Omega}_{\I} - \Omega^*_{i}\|_F \right] \nonumber \\\n &\leq M\gamma + Cg_X^2 p\log^{\frac{1}{2}}(n\vee p) \sqrt{\sum_{t\in \mclI}\|\widehat{\Omega}_{\I} - \Omega^*_t\|_F^2}.\n\end{align}\nThe inequality above implies that\n\begin{equation}\n \sum_{t\in \mclI}\|\widehat{\Omega}_{\I} - \Omega^*_t\|_F^2 \leq \frac{C_1}{c_X^2}\left[M\gamma + \frac{1}{c_X^2}\|X\|^4_{\psi_2} p^{2}\log(n\vee p)\right].\n\end{equation}\nOn the other hand, for each $i\in [M]$, we have\n\begin{equation}\n \sum_{t\in \I_i\cup\I_{i +1}}\|\widehat{\Omega}_{\I} - \Omega^*_t\|_F^2\geq \frac{|\mclI_{i}||\mclI_{i + 1}|}{|\mclI|}\|\Omega^*_{\eta_{q + i + 1}} - \Omega^*_{\eta_{q + i}}\|_F^2.\n\end{equation}\nIn addition, for each $i\in \{2,\cdots,M-1\}$, by definition, it holds that $\min\{|\I_i|,|\I_{i + 1}|\}\geq \Delta_{\min}$. Therefore, we have\n\begin{align*}\n (M-2)\Delta_{\min} \kappa^2\leq \frac{C_1}{c_X^2}\left[M\gamma + \frac{1}{c_X^2}\|X\|^4_{\psi_2} p^{2}\log(n\vee p)\right].\n\end{align*}\nSince $M\/(M-2)\leq 3$ for any $M\geq 3$, it holds that\n\begin{equation}\n \Delta_{\min}\kappa^2 \leq CC_{\gamma} \frac{g_X^4}{c_X^4}p^2\log(n\vee p),\n\end{equation}\nwhich is contradictory to the assumption on $\Delta$, and the proof is complete.\n\eprf\n\n \bnlem[Two consecutive intervals]\n \label{lem:cov two consecutive interval}\n Under \Cref{assp:DCDP_covariance} and the choice that $$\gamma\geq C_{\gamma}\frac{g_X^4}{c_X^2} p^2\log(n\vee p),$$ with probability at least $1-(n\vee p)^{-3}$, there are no two consecutive intervals $\I_1= (s,t ] \in \widehat{\mathcal P}$, $ \I_2=(t, e] \in \widehat{\mathcal P}$ such that $\I_1 \cup \I_2$ contains no change points. \n \enlem \n \begin{proof} \n We prove by contradiction. Suppose that $\I_1,\I_2\in \widehat{\mclP}$ and\n $$ \I : =\I_1\cup \I_2 $$\n contains no change points. 
\n By the definition of $\\widehat{\\mclP}$ and $\\widehat{\\Omega}_{\\I}$, it holds that\n $$ \\mclF (\\widehat{\\Omega}_{\\I_1},\\I_1) + \\mclF(\\widehat{\\Omega}_{\\I_2},\\I_2) + \\gamma \\leq \\mclF (\\widehat{\\Omega}_{\\I},\\I)\\leq \\mclF ({\\Omega}_{\\I}^*,\\I)$$\n By \\Cref{lem:cov no cp}, it follows that \n\\begin{align*} \n\\mclF({\\Omega}^*_{\\I_1}, \\mclI_1)\\leq & \\mclF(\\widehat{\\Omega}_{\\I_1}, \\mclI_1) + C\\frac{g_X^4}{c_X^2} p^2\\log(n\\vee p),\n \\\\\n\\mclF({\\Omega}^*_{\\I_2}, \\mclI_2)\\leq & \\mclF(\\widehat{\\Omega}_{\\I_2}, \\mclI_2) + C\\frac{g_X^4}{c_X^2} p^2\\log(n\\vee p) \n \\end{align*}\n So\n $$\\mclF({\\Omega}^*_{\\I_1}, \\mclI_1) +\\mclF({\\Omega}^*_{\\I_2}, \\mclI_2) -2C\\frac{g_X^4}{c_X^2} p^2\\log(n\\vee p) +\\gamma \\leq \\mclF ({\\Omega}_{\\I}^*,\\I). $$\n Since $\\I$ does not contain any change points, ${\\Omega}^*_{\\I_1} = {\\Omega}^*_{\\I_2} = {\\Omega}^*_{\\I}$, and it follows that \n $$ \\gamma \\leq 2C\\frac{g_X^4}{c_X^2} p^2\\log(n\\vee p).$$\n This is a contradiction when $C_\\gamma$ is sufficiently large.\n \\end{proof} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Penalized local refinement}\n\\label{sec: proof local refinement}\nIn this section, we prove consistency results in \\Cref{section:main} for penalized local refinement, or the conquer step. We also provide more details on the computational complexity of local refinement using memorization technique which is summarized in \\Cref{sec:method}. In particular,\n\\begin{enumerate}\n \\item In \\Cref{sec:complexity local refine}, we analyze the complexity of the local refinement step and show that it is linear in terms of $n$, as is mentioned in \\Cref{sec:method}.\n \\item \\Cref{sec:op1 fundations} presents some fundamental lemmas to prove other results.\n \\item \\Cref{sec:mean op1} prove results for the mean model, i.e., \\Cref{cor:mean local refinement}.\n \\item \\Cref{sec:regression op1} prove results for the linear regression model, i.e., \\Cref{cor:regression local refinement}.\n \\item \\Cref{sec:cov op1} prove results for the Gaussian graphical model, i.e., \\Cref{cor:covariance local refinement main}.\n\\end{enumerate}\n\n\n\n\\subsection{Complexity analysis}\n\\label{sec:complexity local refine}\nWe show in \\Cref{lem: complexity local refine} that the complexity of the conquer step (\\Cref{algo:local_refine_general}) can be as low as $O(n\\cdot \\mathcal{C}_2(p))$.\n\n\\bnlem[Complexity of the conquer step]\n\\label{lem: complexity local refine}\nFor all three models we discussed in \\Cref{section:main}, with a memorization technique, the complexity of \\Cref{algo:local_refine_general} would be $O(n\\cdot \\mathcal{C}_2(p))$.\n\\enlem\n\\bprf\nIn \\Cref{algo:local_refine_general}, for each $k\\in [\\widehat{K}]$, we search over the interval of length $\\frac{2}{3}(\\widehat{\\Delta}_{k-1} + \\widehat{\\Delta}_k)$ where $\\widehat{\\Delta}_k:=\\widehat{\\eta}_{k+1} - \\widehat{\\eta}_{k}$. 
Without any algorithmic optimization, the complexity would be $O((\widehat{\Delta}_{k-1} + \widehat{\Delta}_k)\mathcal{C}_1(\widehat{\Delta}_{k-1} + \widehat{\Delta}_k, p))$ where $\mathcal{C}_1(m, p)$ is the complexity of calculating $\widehat{\theta}_{\mclI}$ and $\mclF(\widehat{\theta}_{\mclI}, \mclI)$ for an interval of length $m$.\n\nUnder the three models in \Cref{section:main}, calculating $\widehat{\theta}_{\mclI}$ involves the calculation of some sufficient statistics or gradients and a gradient descent or coordinate descent procedure which is independent of $|\mclI|$. Therefore, $\mathcal{C}_1(|\mclI|, p) = O(|\mclI|) + O(\mclC_2(p))$. For instance, solving Lasso takes time independent of $|\mclI|$ once $\sum_{i\in[n]}X_iX_i^\top$ and $\sum_{{i\in[n]}}X_iy_i$ are known. In the conquer step, each time we only update the two summations $(\sum_{i\in[n]}X_iX_i^\top,\sum_{{i\in[n]}}X_iy_i)$ by one term, so we can use the memorization trick to reduce $\mathcal{C}_1(\widehat{\Delta}_{k-1} + \widehat{\Delta}_k, p)$ to $O(1) + O(\mclC_2(p))$. Consequently, the complexity at the $k$-th step of \Cref{algo:local_refine_general} can be reduced to $O((\widehat{\Delta}_{k-1} + \widehat{\Delta}_k)\mclC_2(p))$. Taking summation over $k\in [\widehat{K}]$ and considering the fact that $\widehat{\mclP}$ is a segmentation of $[1,n]$, the total complexity of the conquer step would be\n$$\n\sum_{k\in [\widehat{K}]}O((\widehat{\Delta}_{k-1} + \widehat{\Delta}_k)\cdot \mclC_2(p)) = O(n\cdot \mclC_2(p)).\n$$\n\eprf\n\n\subsection{Fundamental lemma}\n\label{sec:op1 fundations}\n As introduced in \Cref{sec:introduction}, the sub-gaussian norm of a random variable is defined as \citep{vershynin2018high}: $\|X\|_{\psi_2}:=\inf \{t>0: \mathbb{E} \psi_2(|X| \/ t) \leq 1\}$ where $\psi_2(t) = e^{t^2} - 1$.\n\n Similarly, for sub-exponential random variables, one can define the Orlicz norm as $\|X\|_{\psi_1}:=\inf \{t>0: \mathbb{E} \psi_1(|X| \/ t) \leq 2\}$ where $\psi_1(t) = e^{t}$.\n\n\bnlem\n\label{lem:subexp deviation martingale}\nSuppose $\left\{z_i\right\}_{i=1}^{\infty}$ is a collection of independent centered sub-exponential random variables with $0<\sup _{1 \leq i<\infty}\left\|z_i\right\|_{\psi_1} \leq 1$. Then for any integer $d>0$, any $\alpha>0$ and any $x>0$,\n$$\n\mathbb{P}\left(\max _{k \in[d,(1+\alpha) d]} \frac{\sum_{i=1}^k z_i}{\sqrt{k}} \geq x\right) \leq \exp \left\{-\frac{x^2}{2(1+\alpha)}\right\}+\exp \left\{-\frac{\sqrt{d} x}{2}\right\} .\n$$\n\enlem\n\bprf\nDenote $S_n=\sum_{i=1}^n z_i$ and let $\zeta=\sup _{1 \leq i < \infty}\left\|z_i\right\|_{\psi_1}$. For any two integers $m<n$, the increment $S_n-S_m$ is a sum of independent centered sub-exponential variables, so its moment generating function is bounded for $|t|$ small enough; the optional-stopping argument used in the proof of \Cref{lem:subgaussian deviation martingale} below then applies, with the restriction on the admissible range of $t$ producing the additional term $\exp \{-\sqrt{d} x \/ 2\}$.\n\eprf\n\n\bnlem\n\label{lem:subexp uniform loglog law}\nSuppose $\left\{z_i\right\}_{i=1}^{\infty}$ is a collection of independent centered sub-exponential random variables with $0<\sup _{1 \leq i<\infty}\left\|z_i\right\|_{\psi_1} \leq 1$. Let $\nu>0$ be given. For any $x>0$, it holds that\n$$\n\mathbb{P}\left(\sum_{i=1}^r z_i \leq 4 \sqrt{r\{\log \log (4 \nu r)+x+1\}}+4 \sqrt{r \nu}\{\log \log (4 \nu r)+x+1\} \text { for all } r \geq 1 \/ \nu\right) \geq 1-2 \exp (-x) \text {. }\n$$\n\enlem\n\bprf\nLet $s \in \mathbb{Z}^{+}$ and $\mathcal{T}_s=\left[2^s \/ \nu, 2^{s+1} \/ \nu\right]$. 
By \\Cref{lem:subexp deviation martingale}, for all $x>0$,\n$$\n\\mathbb{P}\\left(\\sup _{r \\in \\mathcal{T}_s} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq x\\right) \\leq \\exp \\left\\{-\\frac{x^2}{4}\\right\\}+\\exp \\left\\{-\\frac{\\sqrt{2^s \/ \\nu} x}{2}\\right\\} \\leq \\exp \\left\\{-\\frac{x^2}{4}\\right\\}+\\exp \\left\\{-\\frac{x}{2 \\sqrt{\\nu}}\\right\\} .\n$$\nTherefore by a union bound,\n\\begin{align}\n & \\mathbb{P}\\left(\\exists s \\in \\mathbb{Z}^{+}: \\sup _{r \\in \\mathcal{T}_s} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq 2 \\sqrt{\\log \\log ((s+1)(s+2))+x}+2 \\sqrt{\\nu}\\{\\log \\log ((s+1)(s+2))+x\\}\\right) \\nonumber \\\\\n\\leq & \\sum_{s=0}^{\\infty} 2 \\frac{\\exp (-x)}{(s+1)(s+2)}=2 \\exp (-x) \\label{tmp_eq: sub-gauss iterated log}.\n\\end{align}\nFor any $r \\geq 2^s \/ \\nu, s \\leq \\log (r \\nu) \/ \\log (2)$, and therefore\n$$\n(s+1)(s+2) \\leq \\frac{\\log (2 r \\nu) \\log (4 r \\nu)}{\\log ^2(2)} \\leq\\left(\\frac{\\log (4 r \\nu)}{\\log (2)}\\right)^2 .\n$$\nThus\n$$\n\\log ((s+1)(s+2)) \\leq 2 \\log \\left(\\frac{\\log (4 r \\nu)}{\\log (2)}\\right) \\leq 2 \\log \\log (4 r \\nu)+1 .\n$$\nThe above display together with \\eqref{tmp_eq: sub-gauss iterated log} gives\n$$\n\\mathbb{P}\\left(\\sup _{r \\geq 1 \/ \\nu} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq 2 \\sqrt{2 r \\log \\log (4 r \\nu)+x+1}+2 \\sqrt{r \\nu}\\{\\log \\log (4 r \\nu)+x+1\\}\\right) \\leq 2 \\exp (-x) .\n$$\n\\eprf\n\n\n\n\n\n\nNext we present two analogous lemmas for sub-gaussian random variables.\n\n\n\n\n\\bnlem\n\\label{lem:subgaussian deviation martingale}\nSuppose $\\left\\{z_i\\right\\}_{i=1}^{\\infty}$ is a collection of independent centered sub-gaussian random variables with $0<\\sup _{1 \\leq i<\\infty}\\left\\|z_i\\right\\|_{\\psi_2} \\leq \\sigma$. Then for any integer $d>0, \\alpha>0$ and any $x>0$\n$$\n\\mathbb{P}\\left(\\max _{k \\in[d,(1+\\alpha) d]} \\frac{\\sum_{i=1}^k z_i}{\\sqrt{k}} \\geq x\\right) \\leq \\exp \\left\\{-\\frac{x^2}{2(1+\\alpha)\\sigma^2}\\right\\} .\n$$\n\\enlem\n\\bprf\nDenote $S_n=\\sum_{i=1}^n z_i$. Let $\\zeta=\\sup _{1 \\leq i \\leq \\infty}\\left\\|z_i\\right\\|_{\\psi_2}$. 
For any two integers $m<n$, $\mathbb{E}\left(\exp \left\{t\left(S_n-S_m\right)\right\}\right) \leq \exp \left\{\zeta^2 t^2(n-m) \/ 2\right\}$ for every $t \in \mathbb{R}$, so $\left\{\exp \left(t S_k-\zeta^2 t^2 k \/ 2\right)\right\}_{k \geq 1}$ is a supermartingale. Let $A=\min \left\{k \geq d: S_k \geq \sqrt{k} x\right\}$, which is a stopping time, and note that $S_A \geq \sqrt{A} x \geq \sqrt{d} x$ whenever $A<\infty$. By the optional stopping theorem, for any $t>0$,\n$$\n\mathbb{E}\left(\exp \left\{t \sqrt{d} x-\frac{\zeta^2 t^2 A}{2}\right\}\right) \leq \mathbb{E}\left(\exp \left\{t S_A-\frac{\zeta^2 t^2 A}{2}\right\}\right) \leq \mathbb{E}\left(\exp \left\{t S_1-\frac{\zeta^2 t^2}{2}\right\}\right) \leq 1 .\n$$\nBy definition of $A$,\n$$\n\mathbb{P}\left(\max _{k \in[d,(1+\alpha) d]} \frac{\sum_{i=1}^k z_i}{\sqrt{k}} \geq x\right) \leq \mathbb{P}(A \leq(1+\alpha) d) .\n$$\nSince $u \rightarrow \exp \left(t \sqrt{d} x-\frac{\zeta^2 t^2 u}{2}\right)$ is decreasing, it follows that\n$$\n\mathbb{P}\left(\max _{k \in[d,(1+\alpha) d]} \frac{\sum_{i=1}^k z_i}{\sqrt{k}} \geq x\right) \leq \mathbb{P}\left(\exp \left\{t \sqrt{d} x - \zeta^2 t^2 A \/ 2\right\} \geq \exp \left\{t \sqrt{d} x-\zeta^2 t^2(1+\alpha) d \/ 2\right\}\right) .\n$$\nMarkov's inequality implies that\n$$\n\mathbb{P}\left(\max _{k \in[d,(1+\alpha) d]} \frac{\sum_{i=1}^k z_i}{\sqrt{k}} \geq x\right) \leq \exp \left\{-t \sqrt{d} x+\zeta^2 t^2(1+\alpha) d \/ 2\right\} .\n$$\nSet $t= \frac{x}{\zeta^2(1+\alpha) \sqrt{d}}$, then\n$$\n-t \sqrt{d} x+\zeta^2 t^2(1+\alpha) d \/ 2=-\frac{x^2}{2(1+\alpha)\zeta^2} .\n$$\nSo\n$$\n\mathbb{P}\left(\max _{k \in[d,(1+\alpha) d]} \frac{\sum_{i=1}^k z_i}{\sqrt{k}} \geq x\right) \leq \exp \left\{-\frac{x^2}{2(1+\alpha)\zeta^2}\right\} ,\n$$\nand the claimed bound follows since $\zeta \leq \sigma$.\n\eprf\n\n\bnlem\n\label{lem:subgaussian uniform loglog law}\nSuppose $\left\{z_i\right\}_{i=1}^{\infty}$ is a collection of independent centered sub-gaussian random variables with $0<\sup _{1 \leq i<\infty}\left\|z_i\right\|_{\psi_2} \leq \sigma$. Let $\nu>0$ be given. For any $x>0$, it holds that\n$$\n\mathbb{P}\left(\sum_{i=1}^r z_i \leq 4\sigma \sqrt{r\{\log \log (4 \nu r)+x+1\}} \text { for all } r \geq 1 \/ \nu\right) \geq 1-2 \exp (-x) \text {. }\n$$\n\enlem\n\bprf\nLet $s \in \mathbb{Z}^{+}$ and $\mathcal{T}_s=\left[2^s \/ \nu, 2^{s+1} \/ \nu\right]$. 
By \\Cref{lem:subgaussian deviation martingale}, for all $x>0$,\n$$\n\\mathbb{P}\\left(\\sup _{r \\in \\mathcal{T}_s} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq x\\right) \\leq \\exp \\left\\{-\\frac{x^2}{4\\sigma^2}\\right\\}.\n$$\nTherefore by a union bound,\n\\begin{align}\n & \\mathbb{P}\\left(\\exists s \\in \\mathbb{Z}^{+}: \\sup _{r \\in \\mathcal{T}_s} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq 2\\sigma \\sqrt{\\log ((s+1)(s+2))+x}\\right) \\nonumber \\\\\n\\leq & \\sum_{s=0}^{\\infty} 2 \\frac{\\exp (-x)}{(s+1)(s+2)}=2 \\exp (-x) \\label{tmp_eq: iterated log}.\n\\end{align}\nFor any $r \\geq 2^s \/ \\nu$, we have $s \\leq \\log (r \\nu) \/ \\log (2)$, and therefore\n$$\n(s+1)(s+2) \\leq \\frac{\\log (2 r \\nu) \\log (4 r \\nu)}{\\log ^2(2)} \\leq\\left(\\frac{\\log (4 r \\nu)}{\\log (2)}\\right)^2 .\n$$\nThus\n$$\n\\log ((s+1)(s+2)) \\leq 2 \\log \\left(\\frac{\\log (4 r \\nu)}{\\log (2)}\\right) \\leq 2 \\log \\log (4 r \\nu)+1 .\n$$\nThe above display together with \\eqref{tmp_eq: iterated log} gives\n$$\n\\mathbb{P}\\left(\\exists\\, r \\geq 1 \/ \\nu: \\sum_{i=1}^r z_i \\geq 2\\sigma \\sqrt{r\\{2 \\log \\log (4 r \\nu)+x+1\\}}\\right) \\leq 2 \\exp (-x) ,\n$$\nwhich implies the claimed bound since $2 \\sqrt{2} \\leq 4$.\n\\eprf\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsection{Local refinement in the mean model}\n\\label{sec:mean op1}\nFor the ease of notations, we re-index the observations in the $k$-th interval by $[n_0]:=\\{1,\\cdots, n_0\\}$ (though the sample size of the problem is still $n$), and denote the $k$-th jump size as $\\kappa$ and the minimal spacing between consecutive change points as $\\Delta$ (instead of $\\Delta_{\\min}$ in the main text).\n\nBy \\Cref{assp: DCDP_mean} and the setting of the local refinement algorithm, we have for some $\\alpha^*,\\beta^*\\in \\mathbb{R}^p$ that\n$$\ny_i=\\begin{cases}\n\\alpha^*+\\epsilon_i & \\text { when } i \\in(0, \\eta] \\\\\n\\beta^*+\\epsilon_i & \\text { when } i \\in(\\eta, n_0]\n\\end{cases}\n$$\nwhere $\\{\\epsilon_i\\}$ is an i.i.d. sequence of sub-gaussian vectors such that $\\|\\epsilon_i\\|_{\\psi_2}=\\sigma_\\epsilon<\\infty$. In addition, there exists $\\theta \\in(0,1)$ such that $\\eta=\\lfloor n_0 \\theta\\rfloor$ and that $\\|\\alpha^*-\\beta^*\\|_2=\\kappa<\\infty$. 
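In other words, after re-indexing, the observations in this interval follow a single-change-point mean model,\n$$\ny_i=\\alpha^*+\\left(\\beta^*-\\alpha^*\\right) \\mathbf{1}\\{i>\\eta\\}+\\epsilon_i, \\quad 1 \\leq i \\leq n_0,\n$$\nso the only unknown quantities in this interval are $(\\alpha^*, \\beta^*, \\eta)$, and the jump size is $\\|\\alpha^*-\\beta^*\\|_2=\\kappa$. 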
By \\Cref{assp: DCDP_mean}, it holds that $\\|\\alpha^*\\|_0 \\leq \\mathfrak{s}$, $\\|\\beta^*\\|_0 \\leq \\mathfrak{s}$, and\n\\begin{equation}\n \\frac{\\mathfrak{s}^2 \\log^2 (n\\vee p)}{\\Delta \\kappa^2} \\rightarrow 0.\n \\label{op1_eq:mean snr}\n\\end{equation}\nBy \\Cref{lem:mean refinement step 1}, with probability at least $1 - n^{-2}$, there exist $\\widehat{\\alpha}$ and $\\widehat{\\beta}$ such that\n\\begin{align*}\n & \\|\\widehat{\\alpha}-\\alpha^*\\|_2^2\\leq C\\frac{\\mathfrak{s} \\log (n\\vee p)}{\\Delta} \\text{ and } \\|\\widehat{\\alpha}-\\alpha^*\\|_1\\leq C\\mathfrak{s} \\sqrt{\\frac{\\log (n\\vee p)}{\\Delta}}; \\\\\n& \\|\\widehat{\\beta}-\\beta^*\\|_2^2\\leq C\\frac{\\mathfrak{s} \\log (n\\vee p)}{\\Delta} \\text{ and } \\|\\widehat{\\beta}-\\beta^*\\|_1\\leq C \\mathfrak{s} \\sqrt{\\frac{\\log (n\\vee p)}{\\Delta}}.\n\\end{align*}\n\nLet\n$$\n\\widehat{\\mathcal{Q}}(k)=\\sum_{i=1}^k\\|y_i- \\widehat{\\alpha}\\|_2^2+\\sum_{i=k+1}^{n_0}\\|y_i-\\widehat{\\beta}\\|_2^2 \\quad \\text { and } \\quad \\mathcal{Q}^*(k)=\\sum_{i=1}^k\\|y_i- \\alpha^*\\|_2^2+\\sum_{i=k+1}^{n_0}\\|y_i- \\beta^*\\|_2^2 .\n$$\n\n\\bnlem[Refinement for the mean model]\nLet\n$$\n\\eta+r=\\underset{k \\in(0, n_0]}{\\arg \\min } \\widehat{\\mathcal{Q}}(k) .\n$$\nThen under the assumptions above, for any given $\\alpha\\in (0,1)$, it holds with probability at least $1 - (\\alpha\\vee n^{-1})$ that\n$$\n\\kappa^2 r \\leq C \\left(1\\vee\\log\\frac{1}{\\alpha}\\right).\n$$\n\\enlem\n\\bprf\nWithout loss of generality, suppose $r \\geq 0$. Since $\\eta+r$ is the minimizer, it follows that\n$$\n\\widehat{\\mathcal{Q}}(\\eta+r) \\leq \\widehat{\\mathcal{Q}}(\\eta) .\n$$\nIf $r \\leq \\frac{1}{\\kappa^2}$, then there is nothing to show. So for the rest of the argument, for contradiction, assume that\n$$\nr \\geq \\frac{1}{\\kappa^2} .\n$$\nObserve that\n$$\n\\begin{aligned}\n\\widehat{\\mathcal{Q}}(\\eta + r)-\\widehat{\\mathcal{Q}}(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\alpha}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i-\\widehat{\\beta}\\|_2^2 \\\\\n\\mathcal{Q}^*(\\eta + r)-\\mathcal{Q}^*(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\alpha^*\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\beta^*\\|_2^2\n\\end{aligned}\n$$\n\\textbf{Step 1}. It follows that\n$$\n\\begin{aligned}\n& \\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\alpha}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i-\\alpha^*\\|_2^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{\\alpha}- \\alpha^*\\|_2^2+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-\\alpha^*\\right) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{\\alpha}- \\alpha^*\\|_2^2+2r\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\left(\\beta^*-\\alpha^*\\right) +2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r}\\epsilon_i\n\\end{aligned}\n$$\nBy assumptions, we have\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{\\alpha}- \\alpha^*\\|_2^2 \\leq C_1 r \\frac{\\mathfrak{s} \\log (p)}{\\Delta}.\n$$\nSimilarly\n$$\nr\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top}\\left(\\beta^*-\\alpha^*\\right)\\leq r\\|\\widehat{\\alpha}-\\alpha^*\\|_2\\|\\beta^*-\\alpha^*\\|_2 \\leq\nC_1 r\\kappa \\sqrt{\\frac{\\mathfrak{s} \\log (p)}{\\Delta} }\n$$\nwhere the first inequality follows from the Cauchy-Schwarz inequality, and the second follows from $\\|\\beta^*-\\alpha^*\\|_2=\\kappa$ together with the bound on $\\|\\widehat{\\alpha}-\\alpha^*\\|_2$ above. 
In addition,\n$$\n\\begin{aligned}\n&\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i \\leq\\|\\widehat{\\alpha}-\\alpha^*\\|_1\\|\\sum_{i=\\eta+1}^{\\eta+r}\\epsilon_i\\|_{\\infty} \\\\\n=& C_2\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{\\Delta}} \\sqrt{r \\log (p)}=C_2\\mathfrak{s}\\log(p)\\sqrt{\\frac{r}{\\Delta}}.\n\\end{aligned}\n$$\nTherefore\n\\begin{align}\n \\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\alpha}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i-\\alpha^*\\|_2^2 &\\leq C_1 r \\frac{\\mathfrak{s} \\log (p)}{\\Delta} + C_1 r\\kappa \\sqrt{\\frac{\\mathfrak{s} \\log (p)}{\\Delta} } + C_2\\mathfrak{s}\\log(p)\\sqrt{\\frac{r}{\\Delta}}\\nonumber \\\\\n &\\leq C_1 r\\kappa^2 \\frac{\\mathfrak{s} \\log (p)}{\\Delta\\kappa^2} + C_1 r\\kappa^2 \\sqrt{\\frac{\\mathfrak{s} \\log (p)}{\\Delta\\kappa^2} } + C_2\\mathfrak{s}\\log(p)\\sqrt{\\frac{r\\kappa^2}{\\Delta\\kappa^2}}\\nonumber \\\\\n &\\leq C_3 r \\kappa^2 \\frac{\\mathfrak{s} \\log (p)}{\\sqrt{\\Delta\\kappa^2}}.\n\\end{align}\n\\textbf{Step 2}. Using the same argument as in the previous step, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\beta}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\beta^*\\|_2^2\\leq C_3 r \\kappa^2 \\frac{\\mathfrak{s} \\log (p)}{\\sqrt{\\Delta\\kappa^2}}.\n$$\nTherefore\n\\begin{equation}\n \\left|\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta)-\\left\\{\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\right\\}\\right|\\leq C_3 r \\kappa^2 \\frac{\\mathfrak{s} \\log (p)}{\\sqrt{\\Delta\\kappa^2}}\n \\label{tmp_eq:op1 mean main prop eq 1}\n\\end{equation}\nNotice that $\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta)\\leq 0$, so our goal is to find a regime where $\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\geq 0$, in order to get rid of the $|\\cdot|$.\n\n\\textbf{Step 3}. Observe that\n$$\n\\begin{aligned}\n\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\alpha^*\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\beta^*\\|_2^2 \\\\\n=& r\\| \\alpha^*- \\beta^*\\|_2^2-2 \\sum_{i=\\eta+1}^{\\eta+r}(y_i- \\beta^*)( \\alpha^*- \\beta^*) \\\\\n=& r\\| \\alpha^*- \\beta^*\\|_2^2 -2 (\\alpha^*- \\beta^*)^\\top\\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i\n\\end{aligned}\n$$\nLet\n$$\nw_i=\\frac{1}{\\kappa} \\epsilon_i^\\top\\left( \\alpha^*-\\beta^*\\right)\n$$\nThen $\\left\\{w_i\\right\\}_{i=1}^{\\infty}$ are subgaussian random variables with bounded $\\psi_2$ norm. 
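To make the sub-gaussian claim explicit: by the definition of the sub-gaussian norm of a random vector,\n$$\n\\|w_i\\|_{\\psi_2}=\\frac{1}{\\kappa}\\left\\|\\epsilon_i^\\top\\left(\\alpha^*-\\beta^*\\right)\\right\\|_{\\psi_2} \\leq \\frac{\\|\\alpha^*-\\beta^*\\|_2}{\\kappa}\\,\\|\\epsilon_i\\|_{\\psi_2}=\\sigma_\\epsilon,\n$$\nso the $\\psi_2$ norms of the $w_i$ are uniformly bounded by $\\sigma_\\epsilon$. 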
Therefore by \\Cref{lem:subgaussian uniform loglog law}, uniformly for all $r \\geq 1 \/ \\kappa^2$, with probability at least $1 -\\alpha\/2$,\n$$\n\\sum_{i=1}^r w_i\\leq4\\sqrt{r\\left\\{\\log \\log \\left(\\kappa^2 r\\right) +\\log\\frac{4}{\\alpha}+1\\right\\}} .\n$$\nIt follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i^\\top \\left( \\alpha^*-\\beta^*\\right)\\leq 4\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}}.\n$$\nTherefore\n\\begin{equation}\n \\begin{split}\n \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) &\\geq r\\kappa^2 - 4\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}}\\\\\n &\\geq \n r\\kappa^2 - 4\\sqrt{r \\kappa^2\\{1\\vee \\log \\log \\left(\\kappa^2 r\\right)\\}}-4\\sqrt{r \\kappa^2 \\log\\frac{4}{\\alpha}}-4\\sqrt{r \\kappa^2}\n \\end{split}\n\\label{tmp_eq:op1 mean main prop eq 2}\n\\end{equation}\nSince $\\frac{x}{144} - \\log\\log (x)\\geq 0$ for all $x\\geq 288$, when $r\\kappa^2\\geq \\max\\{144\\log\\frac{4}{\\alpha},288\\}$, we have $\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\geq 0$.\n\n\\textbf{Step 4}. \\Cref{tmp_eq:op1 mean main prop eq 1} and \\Cref{tmp_eq:op1 mean main prop eq 2} together give, uniformly for all $r$ such that $r \\kappa^2 \\geq \\max\\{144\\log\\frac{4}{\\alpha},288\\}$,\n$$\n 0\\leq r\\kappa^2 - 4\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}} \\leq C_3 r \\kappa^2 \\frac{\\mathfrak{s} \\log (p)}{\\sqrt{\\Delta\\kappa^2}}.\n$$\nSince we assume that $\\frac{\\mathfrak{s}^2\\log^2(p)}{\\Delta \\kappa^2}\\rightarrow 0$, this either leads to a contradiction or implies that $r\\kappa^2\\leq C_4(1\\vee \\log\\frac{1}{\\alpha})$. \n\\eprf\n\n\n\\bnlem[Local refinement step 1]\n\\label{lem:mean refinement step 1}\nThe output $\\{\\check{\\eta}_k\\}_{k\\in[K]}$ of step 1 of the local refinement satisfies that with probability at least $1 - n^{-3}$,\n\\begin{equation}\n \\max_{k\\in [K]}|\\check {\\eta}_k - \\eta_k|\\leq \\frac{C \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)}{\\kappa^2}.\n\\end{equation}\n\\enlem\n\n\n\n\n\n\n\n\n\n\\begin{proof}[Proof of \\Cref{lem:mean refinement step 1}]\nFor each $k\\in [K]$, let $\\widehat {\\mu}_t = \\widehat {\\mu}^{(1)}$ if $s_k< t\\leq \\check {\\eta}_k$ and $\\widehat {\\mu}_t = \\widehat {\\mu}^{(2)}$ otherwise. Without loss of generality, assume that $\\check {\\eta}_k>\\eta_k$ and denote\n$$\n{ \\mathcal J } _1 = [s_k, \\eta_k), { \\mathcal J } _2 = [\\eta_k, \\check {\\eta}_k), { \\mathcal J } _3 = [\\check {\\eta}_k, e_k),\n$$\nand $\\mu^{(1)} = \\mu^*_{\\eta_k - 1}$, $\\mu^{(2)} = \\mu^*_{\\eta_k}$. Then \\Cref{tmp_eq:upper_bound_delta_local_refine} is equivalent to\n\\begin{equation*}\n |{ \\mathcal J } _1|\\|\\widehat {\\mu}^{(1)} - \\mu^{(1)}\\|_2^2 + |{ \\mathcal J } _2|\\|\\widehat {\\mu}^{(1)} - \\mu^{(2)}\\|_2^2 + |{ \\mathcal J } _3|\\|\\widehat {\\mu}^{(2)} - \\mu^{(2)}\\|_2^2 \\leq C { \\mathfrak{s} } \\sigma_{\\epsilon}^2\\log(n\\vee p).\n\\end{equation*}\nSince $|\\mclJ_1|=\\eta_k - s_k\\geq c_0\\Delta$ with some constant $c_0$ under \\Cref{assp: DCDP_mean}, we have\n\\begin{equation}\n \\Delta \\|\\widehat {\\mu}^{(1)} - \\mu^{(1)}\\|_2^2\\leq c_0^{-1}|\\mclJ_1|\\|\\widehat {\\mu}^{(1)} - \\mu^{(1)}\\|_2^2 \\leq C \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)\\leq c_2\\Delta \\kappa^2,\n\\end{equation}\nwith some constant $c_2\\in (0,1\/4)$, where the last inequality is due to the fact that $ { \\mathcal B_n} \\rightarrow \\infty$. 
Thus we have\n\\begin{equation*}\n \\|\\widehat {\\mu}^{(1)} - \\mu^{(1)}\\|_2^2\\leq c_2\\kappa^2.\n\\end{equation*}\nTriangle inequality gives\n\\begin{equation*}\n \\|\\widehat {\\mu}^{(1)} - \\mu^{(2)}\\|_2\\geq \\|\\mu^{(1)} - \\mu^{(2)}\\|_2 - \\|\\widehat {\\mu}^{(1)} - \\mu^{(1)}\\|_2 \\geq \\kappa\/2.\n\\end{equation*}\nTherefore, $\\kappa^2|\\mclJ_2|\/4\\leq |\\mclJ_2|\\|\\widehat {\\mu}^{(1)} - \\mu^{(2)}\\|_2^2\\leq C \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)$ and\n\\begin{equation*}\n |\\check {\\eta}_k - \\eta_k| = |\\mclJ_2|\\leq \\frac{C \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(n\\vee p)}{\\kappa^2}.\n\\end{equation*}\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsection{Local refinement in the regression model}\n\\label{sec:regression op1}\nFor the ease of notations, we re-index the observations in the $k$-th interval by $[n_0]:\\{1,\\cdots, n_0\\}$ (though the sample size of the problem is still $n$), and denote the $k$-th jump size as $\\kappa$ and the minimal spacing between consecutive change points as $\\Delta$ (instead of $\\Delta_{\\min}$ in the main text).\n\nBy \\Cref{assp:dcdp_linear_reg} and the setting of the local refinement algorithm, we have\n$$\ny_i=\\left\\{\\begin{array}{ll}\nX_i^{\\top} \\alpha^*+\\epsilon_i & \\text { when } i \\in(0, \\eta] \\\\\nX_i^{\\top} \\beta^*+\\epsilon_i & \\text { when } i \\in(\\eta, n_0]\n\\end{array} .\\right.\n$$\nIn addition, there exists $\\theta \\in(0,1)$ such that $\\eta=\\lfloor n_0 \\theta\\rfloor$ and that $\\|\\alpha^*-\\beta^*\\|_2=\\kappa<\\infty$. By \\Cref{assp:dcdp_linear_reg}, it holds that $\\left\\|\\alpha^*\\right\\|_0 \\leq \\mathfrak{s},\\left\\|\\beta^*\\right\\|_0 \\leq \\mathfrak{s}$, and\n\\begin{equation}\n \\frac{\\mathfrak{s}^2 \\log ^{3}(n\\vee p)}{\\Delta \\kappa^2} \\rightarrow 0.\n \\label{op1_eq:regression snr}\n\\end{equation}\nBy \\Cref{lem: regression local refinement step 1}, with probability at least $1 - n^{-2}$, the output of the first step of the PLR algorithm (\\Cref{algo:local_refine_general}) $\\widehat{\\alpha}$ and $\\widehat{\\beta}$ satisfies that\n\\begin{equation}\n \\begin{split}\n & \\|\\widehat{\\alpha}-\\alpha^*\\|_2^2\\leq C\\frac{\\mathfrak{s} \\log (n\\vee p)}{\\Delta} \\text{ and } \\|\\widehat{\\alpha}-\\alpha^*\\|_1\\leq C\\mathfrak{s} \\sqrt{\\frac{\\log (n\\vee p)}{\\Delta}}; \\\\\n& \\|\\widehat{\\beta}-\\beta^*\\|_2^2\\leq C\\frac{\\mathfrak{s} \\log (n\\vee p)}{\\Delta} \\text{ and } \\|\\widehat{\\beta}-\\beta^*\\|_1\\leq C \\mathfrak{s} \\sqrt{\\frac{\\log (n\\vee p)}{\\Delta}}.\n \\end{split}\n \\label{eq: regression op1 condition}\n\\end{equation}\nLet\n$$\n\\widehat{\\mathcal{Q}}(k)=\\sum_{i=1}^k\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2+\\sum_{i=k+1}^{n_0}\\left(y_i-X_i^{\\top} \\widehat{\\beta}\\right)^2 \\quad \\text { and } \\quad \\mathcal{Q}^*(k)=\\sum_{i=1}^k\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2+\\sum_{i=k+1}^{n_0}\\left(y_i-X_i^{\\top} \\beta^*\\right)^2 .\n$$\n\n\\bnlem[Refinement for regression]\nLet\n$$\n\\eta+r=\\underset{k \\in(0, n_0]}{\\arg \\max } \\widehat{\\mathcal{Q}}(k) .\n$$\nThen under the assumptions above, it holds with probability at least $1 - (\\alpha\\vee n^{-1})$ that\n$$\n r\\kappa^2\\leq C \\log^2\\frac{1}{\\alpha}.\n$$\nwhere $C$ is a universal constant that only depends on $C_{\\kappa}$, $\\Lambda_{\\min}$, $\\sigma_{\\epsilon}$.\n\\enlem\n\\bprf\nFor the brevity of notations, we denote $p_n:=n\\vee p$ throughout the proof. 
Without loss of generality, suppose $r \\geq 0$. Since $\\eta+r$ is the minimizer, it follows that\n$$\n\\widehat{\\mathcal{Q}}(\\eta+r) \\leq \\widehat{\\mathcal{Q}}(\\eta) .\n$$\nIf $r \\leq \\frac{1}{\\kappa^2}$, then there is nothing to show. So for the rest of the argument, for contradiction, assume that\n$$\nr \\geq \\frac{1}{\\kappa^2}\n$$\nObserve that\n$$\n\\begin{aligned}\n\\widehat{\\mathcal{Q}}(t)-\\widehat{\\mathcal{Q}}(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\beta}\\right)^2 \\\\\n\\mathcal{Q}^*(t)-\\mathcal{Q}^*(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)^2\n\\end{aligned}\n$$\n\\textbf{Step 1}. It follows that\n$$\n\\begin{aligned}\n& \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\widehat{\\alpha}-X_i^{\\top} \\alpha^*\\right)^2+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} X_i \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\widehat{\\alpha}-X_i^{\\top} \\alpha^*\\right)^2+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r X_i X_i^{\\top}\\left(\\beta^*-\\alpha^*\\right)+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r} X_i \\epsilon_i\n\\end{aligned}\n$$\nBy \\Cref{lem:regression op1 base lemma 1}, uniformly for all $r$,\n$$\n\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\|_{\\infty} \\leq C\\left(\\sqrt{\\frac{\\log (p_n)}{r}}+\\frac{\\log (p_n)}{r}\\right) .\n$$\nTherefore\n$$\n\\begin{aligned}\n\\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\widehat{\\alpha}-X_i^{\\top} \\alpha^*\\right)^2 &=\\sum_{i=\\eta+1}^{\\eta+r}\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r\\left\\{X_i X_i^{\\top}-\\Sigma\\right\\}\\left(\\widehat{\\alpha}-\\alpha^*\\right)+r\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\Sigma\\left(\\widehat{\\alpha}-\\alpha^*\\right) \\\\\n& \\leq\\|\\widehat{\\alpha}-\\alpha^*\\|_1^2\\left\\|\\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\|_{\\infty}+\\Lambda_{\\max } r\\|\\widehat{\\alpha}-\\alpha^*\\|_2^2\\\\\n&\\leq C_1\\frac{\\mathfrak{s}^2 \\log (p_n)}{\\Delta}(\\sqrt{r \\log (p_n)}+\\log (p_n))+C_1 r \\frac{\\mathfrak{s} \\log (p)}{\\Delta} \\\\\n&\\leq C_1\\sqrt{r}\\frac{\\mathfrak{s}^2 \\log^{3\/2} (p_n)}{\\Delta}+C_1\\frac{\\mathfrak{s}^2 \\log^2 (p_n)}{\\Delta}+C_1 r \\frac{\\mathfrak{s} \\log (p_n)}{\\Delta}\n\\end{aligned}\n$$\nwhere the second inequality follows from \\Cref{lem:regression op1 base lemma 1}. 
Similarly\n$$\n\\begin{aligned}\n\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r X_i X_i^{\\top}\\left(\\beta^*-\\alpha^*\\right) &=\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r\\left\\{X_i X_i^{\\top}-\\Sigma\\right\\}\\left(\\beta^*-\\alpha^*\\right)+r\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\Sigma\\left(\\beta^*-\\alpha^*\\right) \\\\\n& \\leq \\|\\widehat{\\alpha}-\\alpha^*\\|_1\\|\\left(\\beta^*-\\alpha^*\\right)^{\\top}\\left\\{\\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\}\\|_{\\infty}+\\Lambda_{\\max } r\\|\\widehat{\\alpha}-\\alpha^*\\|_2\\|\\beta^*-\\alpha^*\\|_2 \\\\\n&\\leq C_2\\mathfrak{s} \\sqrt{\\frac{\\log (p_n)}{\\Delta}} (\\kappa\\sqrt{r \\log (p_n)}+\\kappa\\log (p_n))+C_2 r\\kappa \\sqrt{\\frac{\\mathfrak{s} \\log (p_n)}{\\Delta} } .\\\\\n&\\leq C_2\\mathfrak{s}\\kappa\\log (p_n)\\sqrt{\\frac{r}{\\Delta}} +C_2\\mathfrak{s}\\kappa\\sqrt{\\frac{\\log^3 (p_n)}{\\Delta}} + C_2 r\\kappa \\sqrt{\\frac{\\mathfrak{s} \\log (p_n)}{\\Delta} }.\n\\end{aligned}\n$$\nwhere the second equality follows from $\\|\\beta^*-\\alpha^*\\|_2=\\kappa$ and \\Cref{lem:regression op1 base lemma 2}. In addition,\n$$\n\\begin{aligned}\n&\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r} X_i \\epsilon_i \\leq\\|\\widehat{\\alpha}-\\alpha^*\\|_1\\|\\sum_{i=\\eta+1}^{\\eta+r} X_i \\epsilon_i\\|_{\\infty} \\\\\n\\leq & C_3\\mathfrak{s} \\sqrt{\\frac{\\log (p_n)}{\\Delta}}(\\sqrt{r \\log (p_n)}+\\log (p_n))\\leq C_3\\mathfrak{s}\\log (p_n)\\sqrt{\\frac{r}{\\Delta}} +C_3 { \\mathfrak{s} } \\sqrt{\\frac{\\log^3 (p_n)}{\\Delta}}.\n\\end{aligned}\n$$\nwhere the second equality follows from \\Cref{lem:regression op1 base lemma 1}. Therefore\n\\begin{align*}\n &\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2\\\\\n \\leq & C_4(\\kappa + 1)\\mathfrak{s}\\log (p_n)\\sqrt{\\frac{r}{\\Delta}} + C_4(\\kappa + 1) { \\mathfrak{s} } \\sqrt{\\frac{\\log^3 (p_n)}{\\Delta}} + C_4 r\\kappa^2 \\sqrt{\\frac{ { \\mathfrak{s} } \\log (p_n)}{\\Delta\\kappa^2}}\\\\\n & \\quad + C_1\\frac{ { \\mathfrak{s} } ^2\\log^2(p_n)}{\\Delta} + C_1\\sqrt{r}\\frac{ { \\mathfrak{s} } ^2\\log^{3\/2}(p_n)}{\\delta}\\\\\n \\leq & C_4({\\kappa} + 1)r\\kappa^2\\sqrt{\\frac{\\mathfrak{s}^2\\log^2 (p_n)}{\\Delta\\kappa^2}} + C_4(\\kappa^2 + \\kappa) \\sqrt{\\frac{ { \\mathfrak{s} } ^2\\log^3 (p_n)}{\\Delta\\kappa^2}} + C_4 {\\kappa}\\sqrt{r\\kappa^2} \\frac{ { \\mathfrak{s} } ^2\\log^{3\/2}(p_n)}{\\Delta \\kappa^2}\\\\\n \\leq & C_4({\\kappa} + 1)r\\kappa^2\\sqrt{\\frac{\\mathfrak{s}^2\\log^2 (p_n)}{\\Delta\\kappa^2}} + C_4\\kappa(\\kappa + 1 + \\sqrt{r\\kappa^2}) \\sqrt{\\frac{ { \\mathfrak{s} } ^2\\log^3 (p_n)}{\\Delta\\kappa^2}}\\leq C_5(C_\\kappa^2 + 1){r\\kappa^2} \\sqrt{\\frac{ { \\mathfrak{s} } ^2\\log^3 (p_n)}{\\Delta\\kappa^2}}.\n\\end{align*}\nwhere we use the assumption that $\\Delta \\kappa^2 \\geq \\mathcal{B}_n { \\mathfrak{s} } ^2\\log^2(p_n)$, $\\kappa\\leq C_{\\kappa}$, and $r\\kappa^2 \\geq 1$.\n\n\\textbf{Step 2}. 
Using the same argument as in the previous step, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\beta}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)^2\\leq C_5(C_\\kappa^2 + 1){r\\kappa^2} \\sqrt{\\frac{ { \\mathfrak{s} } ^2\\log^3 (p_n)}{\\Delta\\kappa^2}}.\n$$\nTherefore\n\\begin{equation}\n \\left|\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta)-\\left\\{\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\right\\}\\right|\\leq C_5(C_\\kappa^2 + 1){r\\kappa^2} \\sqrt{\\frac{ { \\mathfrak{s} } ^2\\log^3 (p_n)}{\\Delta\\kappa^2}}.\n \\label{tmp_eq:op1 regression main prop eq 1}\n\\end{equation}\n\\textbf{Step 3}. Observe that\n$$\n\\begin{aligned}\n& \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)^2-2 \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(\\alpha^*-\\beta^*\\right)^{\\top}\\left\\{X_i^{\\top} X_i-\\Sigma\\right\\}\\left(\\alpha^*-\\beta^*\\right)+r\\left(\\alpha^*-\\beta^*\\right)^{\\top} \\Sigma\\left(\\alpha^*-\\beta^*\\right)-2 \\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)\n\\end{aligned}\n$$\nNote that\n$$\nz_i=\\frac{1}{\\kappa^2}\\left(\\alpha^*-\\beta^*\\right)^{\\top}\\left\\{X_i^{\\top} X_i-\\Sigma\\right\\}\\left(\\alpha^*-\\beta^*\\right)\n$$\nis a sub-exponential random variable with bounded $\\psi_1$ norm. Therefore by \\Cref{lem:subexp uniform loglog law}, uniformly for all $r \\geq 1 \/ \\kappa^2$, with probability at least $1 - \\alpha \/ 2$,\n$$\n\\sum_{i=1}^r z_i\\leq 4\\left(\\sqrt{r\\left\\{\\log \\log \\left(\\kappa^2 r\\right) + \\log\\frac{4}{\\alpha}+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}\\right).\n$$\nIt follows that\n\\begin{align*}\n \\sum_{i=\\eta+1}^{\\eta+r}\\left(\\alpha^*-\\beta^*\\right)^{\\top} &\\left\\{X_i^{\\top} X_i-\\Sigma\\right\\}\\left(\\alpha^*-\\beta^*\\right) \\\\\n &\\leq 4\\left(\\kappa^2\\sqrt{r\\left\\{\\log \\log \\left(\\kappa^2 r\\right) + \\log\\frac{4}{\\alpha}+1\\right\\}}+\\kappa^3\\sqrt{r}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}\\right).\n\\end{align*}\nSimilarly, let\n$$\nw_i=\\frac{1}{\\kappa} \\epsilon_i\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)\n$$\nThen $\\left\\{w_i\\right\\}_{i=1}^{\\infty}$ are sub-exponential random variables with bounded $\\psi_1$ norm. 
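Both sub-exponential claims follow from the standard fact that a product of two sub-gaussian random variables is sub-exponential, with $\\|UV\\|_{\\psi_1} \\leq\\|U\\|_{\\psi_2}\\|V\\|_{\\psi_2}$. Applying this with $U=V=\\left(\\alpha^*-\\beta^*\\right)^{\\top} X_i \/ \\kappa$ for $z_i$, and with $U=\\epsilon_i$ and $V=X_i^{\\top}\\left(\\alpha^*-\\beta^*\\right) \/ \\kappa$ for $w_i$, gives (up to absolute constants from centering)\n$$\n\\|z_i\\|_{\\psi_1} \\lesssim \\left\\|\\frac{\\left(\\alpha^*-\\beta^*\\right)^{\\top} X_i}{\\kappa}\\right\\|_{\\psi_2}^2 \\lesssim \\Lambda_{\\max} \\quad \\text { and } \\quad \\|w_i\\|_{\\psi_1} \\leq \\|\\epsilon_i\\|_{\\psi_2}\\left\\|\\frac{X_i^{\\top}\\left(\\alpha^*-\\beta^*\\right)}{\\kappa}\\right\\|_{\\psi_2} \\lesssim \\sigma_\\epsilon \\sqrt{\\Lambda_{\\max}},\n$$\nwhere $\\Lambda_{\\max}$ denotes the largest eigenvalue of $\\Sigma$. 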
Therefore by \\Cref{lem:subexp uniform loglog law}, uniformly for all $r \\geq 1 \/ \\kappa^2$,\n$$\n\\sum_{i=1}^r w_i\\leq 4\\left(\\sqrt{r\\left\\{\\log \\log \\left(\\kappa^2 r\\right) +\\log\\frac{4}{\\alpha}+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}\\right)\n$$\nIt follows that\n\\begin{align*}\n \\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i &\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)\\\\\n &\\leq 4\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}}+\\kappa^2\\sqrt{r}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}\\right).\n\\end{align*}\nTherefore\n\\begin{equation}\n \\begin{split}\n \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \\geq & \\Lambda_{\\min } r \\kappa^2 - 4(\\kappa + 1)\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}}\\\\\n & \\quad - 4(\\kappa^2 + \\kappa)\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+\\log\\frac{4}{\\alpha}+1\\right\\}\\\\\n \\geq & \\Lambda_{\\min } r \\kappa^2 - 16\\sqrt{r\\kappa^2}(\\kappa^2\\vee 1)(1 + \\log\\frac{4}{\\alpha} + \\{1\\vee \\log\\log (r\\kappa^2)\\})\\\\\n \\geq & \\Lambda_{\\min } r\\kappa^2 - 16\\sqrt{r\\kappa^2}(C_{\\kappa}^2\\vee 1)(1 + \\log\\frac{4}{\\alpha} + \\{1\\vee \\log\\log (r\\kappa^2)\\}).\n \\end{split}\n\\label{tmp_eq:op1 regression main prop eq 2}\n\\end{equation}\nwhere $\\Lambda_{\\min}$ is the minimal eigenvalue of $\\Sigma$. By \\Cref{lem: x - c log^2log x}, for $r\\kappa^2 \\geq \\frac{48^2(C_{\\kappa}^2\\vee 1)^2}{\\Lambda_{\\min}^2} \\vee e^{2e}$, $\\frac{\\Lambda_{\\min}}{3}r\\kappa^2\\geq \\sqrt{r\\kappa^2}\\log\\log (r\\kappa^2)$. Thus, when $r\\kappa^2\\geq (\\frac{48^2(C_{\\kappa}^2\\vee 1)^2}{\\Lambda_{\\min}^2}\\log^2\\frac{4}{\\alpha}) \\vee e^{2e}$, we have $\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\geq 0$.\n\n\\textbf{Step 4}. \\Cref{tmp_eq:op1 regression main prop eq 1} and \\Cref{tmp_eq:op1 regression main prop eq 1} together give that, uniformly for all $r$ such that $r\\kappa^2 \\geq (\\frac{48^2(C_{\\kappa}^2\\vee 1)^2}{\\Lambda_{\\min}^2}\\log^2\\frac{4}{\\alpha}) \\vee e^{2e}$, with probability at least $1 - (\\alpha\\vee n^{-1})$\n$$\n\\Lambda_{\\min } r \\kappa^2 - 16\\sqrt{r\\kappa^2}(C_{\\kappa}^2\\vee 1)(1 + \\log\\frac{4}{\\alpha} + \\{1\\vee \\log\\log (r\\kappa^2)\\}) \\leq C_5(C_\\kappa^2 + 1){r\\kappa^2} \\sqrt{\\frac{ { \\mathfrak{s} } ^2\\log^3 (p_n)}{\\Delta\\kappa^2}},\n$$\nwhich either leads to a contradiction or implies the conclusion.\n\\eprf\n\nIn what follows, we first show that the first step in the local refinement gives estimators $\\hat{\\alpha}, \\hat{\\beta}$ that satisfies \\Cref{eq: regression op1 condition}, and then prove some relevant lemmas.\n\n\n\n\n\n\n\n\n\n\n\n\\bnlem[Local refinement step 1]\n\\label{lem: regression local refinement step 1}\nFor each $k\\in [K]$, let $\\check{\\eta}_k, \\widehat{\\beta}^{(1)}, \\widehat{\\beta}^{(2)}$ be the output of step 1 of the local refinement algorithm for linear regression, with\n\\begin{equation*}\n R(\\theta^{(1)}, \\theta^{(2)},\\eta; s, e) = \\zeta \\sum_{i \\in [p]} \\sqrt{(\\eta - s)(\\theta^{(1)}_{i} )^2 + (e - \\eta)(\\theta_{i} ^{(2)}) ^2}.\n\\end{equation*}\nand $\\zeta = C_{\\zeta}\\sqrt{\\log (n\\vee p)}$. 
Then with probability at least $1 - n^{-3}$, it holds that\n\\begin{equation}\n \\max_{k\\in [\\hat{K}]} |\\check{\\eta}_k - \\eta_k|\\leq C\\frac{ { \\mathfrak{s} } \\log(n\\vee p)}{\\kappa^2}.\n\\end{equation}\n\\enlem\n\n\n\\begin{proof}[Proof of \\Cref{lem: regression local refinement step 1}]\nFor each $k\\in [K]$, let $\\widehat {\\beta}_t = \\widehat {\\beta}^{(1)}$ if $s_kC_s { \\mathfrak{s} } \\log(n\\vee p)$. Similarly, since $|\\I_2| > \\Delta> C_s { \\mathfrak{s} } \\log(n\\vee p)$, we have\n\t\\[\n\t\t\\sqrt{\\sum_{t \\in \\I_2}(\\delta_{\\I_2}^{\\top}X_t)^2} \\geq {c_1\\sqrt{|\\I_2|}} \\|\\delta_{\\I_2}\\|_2 - c_2 \\sqrt{\\log(p)} \\|\\delta_{\\I_2}(S^c)\\|_1.\n\t\\]\nDenote $n_0 = C_s { \\mathfrak{s} } \\log(n\\vee p)$. We first bound the terms with $\\|\\cdot\\|_1$. Note that\n\t\\begin{align*}\n\t&\\sum_{i = 1}^3\\sum_{j\\in S^c}|(\\delta_{\\I_i})_j|\n\t\t\\leq \\sqrt{3}\\sqrt{\\sum_{i = 1}^3 (\\sum_{j \\in S^c} |(\\delta_{\\I_i})_j|)^2 } \\\\ \n\t\t\\leq & \\sqrt{3}\\sqrt{\\sum_{i = 1}^3{\\frac{|\\I_i|}{n_0}} (\\sum_{j \\in S^c}|(\\delta_{\\I_i})_j| )^2}\t\n\t\t\\leq \\sqrt{\\frac{3}{n_0}} \\sum_{i = 1}^3 \\sqrt{|\\I_i|}(\\sum_{j \\in S^c}|(\\delta_{\\I_i})_j| ) \\\\\n\t\t\\leq &\n\t\t\\sqrt{\\frac{3}{n_0}} \\sum_{j \\in S^c} \\sqrt{\\sum_{t = s_k + 1}^{e_k} (\\delta_t)_j^2}\n\t\t\\leq \\frac{3\\sqrt{3}}{\\sqrt{n_0}}\\sum_{j \\in S} \\sqrt{\\sum_{t = s_k + 1}^{e_k} (\\delta_t)_j^2}\n\t\t\\\\\n\t\t\\leq & \\frac{3\\sqrt{3}}{\\sqrt{n_0}} \\sqrt{ { \\mathfrak{s} } \\sum_{j \\in S} \\sum_{t = s_k + 1}^{e_k} (\\delta_t)_j^2} \\leq \\frac{c}{ \\sqrt{\\log(n\\vee p)}} \\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2}.\n\t\\end{align*}\nTherefore,\n\t\\begin{align*}\n\t\t& c_1\\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} - \\frac{c_2}{ \\sqrt{\\log(n\\vee p)}} \\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} \\\\\n\t\t\\leq & \\sum_{i = 1}^3 c_1 \\|\\delta_{I_i}\\|_2 - \\frac{c_2}{ \\sqrt{\\log(n\\vee p)}} \\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} \\leq \\sqrt{3} \\sqrt{ \\sum_{t = s_k + 1}^{e_k} (\\delta_t^{\\top} X_t)^2 } \\\\\n\t\t\\leq & \\frac{3\\sqrt{\\zeta}}{\\sqrt{2}} { \\mathfrak{s} } ^{1\/4} \\left(\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2\\right)^{1\/4} \\leq \\frac{9\\zeta { \\mathfrak{s} } ^{1\/2}}{4c_1} + \\frac{c_1}{2}\\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2}\n\t\\end{align*}\n\twhere the third inequality follows from \\eqref{eq:reg local refine 5} and the fact that $\\sum_{i\\in S}\\sqrt{\\sum_{t=s_k + 1}^{e_k}(\\delta_t)_i^2}\\leq \\sqrt{s}\\sqrt{\\sum_{t=s_k + 1}^{e_k}\\|\\delta_t\\|_2^2}$. The inequality above implies that\n\t\\[\n\t\t\\frac{c_1}{4}\\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} \\leq \\frac{9\\zeta { \\mathfrak{s} } ^{1\/2}}{4c_1}\n\t\\]\t\n\tTherefore,\n\t\\[\n\t\t\\sum_{t = s_k + 1}^{e_k}\\|\\widehat{\\beta}_t - \\beta^*_t\\|_2^2 \\leq 81\\zeta^2 { \\mathfrak{s} } \/c_1^4.\n\t\\]\n Recall that $\\beta^{(1)} = \\beta^*_{\\eta_k}$ and $\\beta^{(2)} = \\beta^*_{\\eta_k + 1}$. We have that\n\t\\[\n\t\t\\sum_{t = s_k + 1}^{e_k}\\|\\widehat{\\beta}_t - \\beta^*_t\\|_2^2 = |I_1| \\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2 + |I_2| \\|\\beta^{(2)} - \\widehat{\\beta}^{(1)}\\|_2^2 + |I_3| \\|\\beta^{(2)} - \\widehat{\\beta}^{(2)}\\|_2^2.\n\t\\]\nSince $\\eta_k - s_k \\geq\\frac{1}{3}\\Delta$ as is shown in \\Cref{tmp_eq:regression I_1}. 
we have that\n\t\\[\n\t\t\\Delta\\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2\/3 \\leq |I_1| \\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2 \\leq \\frac{C_1 C_{\\zeta}^2\\Delta \\kappa^2} { { \\mathfrak{s} } K \\sigma^2_{\\epsilon} { \\mathcal B_n} } \\leq c_3 \\Delta \\kappa^2,\n\t\\]\n\twhere $1\/4 > c_3 > 0$ is an arbitrarily small positive constant. Therefore we have\n\t\\[\n\t\t\\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2 \\leq 3c_3 \\kappa^2.\n\t\\]\nIn addition we have\n\t\\[\n\t\t\\|\\beta^{(2)} - \\widehat{\\beta}^{(1)}\\|_2 \\geq \\|\\beta^{(2)} - \\beta^{(1)}\\|_2 - \\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2 \\geq \\kappa\/2.\n\t\\]\t\n\tTherefore, it holds that\n\t\\[\n\t\t\\kappa^2 |I_2|\/4 \\leq |I_2| \\|\\beta^{(2)} - \\widehat{\\beta}^{(1)}\\|_2^2 \\leq C_2 { \\mathfrak{s} } \\zeta^2,\n\t\\]\n\twhich implies that \n\t\\[\n\t\t|\\check{\\eta}_k - \\eta_k| = |I_2| \\leq \\frac{4C_2 { \\mathfrak{s} } \\zeta^2}{\\kappa^2},\n\t\\]\nwhich gives the bound we want. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bnlem\n\\label{lem:regression op1 base lemma 1}\nSuppose $\\left\\{X_i\\right\\}_{i=1}^n \\stackrel{i.i.d.}{\\sim} N_p(0, \\Sigma)$ and $\\left\\{\\epsilon_i\\right\\}_{i=1}^n \\stackrel{i.i.d.}{\\sim} N\\left(0, \\sigma^2\\right)$. Then it holds that\n$$\n\\begin{aligned}\n&\\mathbb{P}\\left(\\|\\frac{1}{r} \\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p\\vee n)}{r}}+\\frac{\\log (p\\vee n)}{r}\\right) \\text { for some } 1 \\leq r \\leq n\\right) \\leq (n\\vee p)^{-2}, \\\\\n&\\mathbb{P}\\left(\\|\\frac{1}{r} \\sum_{i=1}^r X_i \\epsilon_i\\|_{\\infty} \\geq C_2\\left(\\sqrt{\\frac{\\log (p\\vee n)}{r}}+\\frac{\\log (p\\vee n)}{r}\\right) \\text { for some } 1 \\leq r \\leq n\\right) \\leq (n\\vee p)^{-2} .\n\\end{aligned}\n$$\n\\enlem\n\\bprf\nFor the first probability bound, observe that for any $j, k \\in\\{1, \\ldots, p\\}$, $X_{ij} X_{ik}-\\Sigma_{j k}$ is a sub-exponential random variable. Therefore for any $r>0$,\n$$\n\\mathbb{P}\\left\\{\\left|\\frac{1}{r} \\sum_{i=1}^r X_{i j} X_{i k}-\\Sigma_{j k}\\right| \\geq x\\right\\} \\leq \\exp \\left(-r c_1 x^2\\right)+\\exp \\left(-r c_2 x\\right) .\n$$\nSo\n$$\n\\mathbb{P}\\left\\{\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_{i} X_{i}^{\\top}-\\Sigma\\right\\|_{\\infty} \\geq x\\right\\} \\leq p^2 \\exp \\left(-r c_1 x^2\\right)+p^2 \\exp \\left(-r c_2 x\\right) .\n$$\nThis gives, for sufficiently large $C_1>0$,\n$$\n\\mathbb{P}\\left\\{\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_{i} X_{i}^{\\top}-\\Sigma\\right\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p\\vee n)}{r}}+\\frac{\\log (p\\vee n)}{r}\\right)\\right\\} \\leq (n\\vee p)^{-3} .\n$$\nBy a union bound over $1 \\leq r \\leq n$,\n$$\n\\mathbb{P}\\left\\{\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_{i} X_{i}^{\\top}-\\Sigma\\right\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p\\vee n)}{r}}+\\frac{\\log (p\\vee n)}{r}\\right) \\text { for some } 1 \\leq r \\leq n\\right\\} \\leq (n\\vee p)^{-2} .\n$$\nThe desired result follows from the assumption that $p \\geq n^\\alpha$. The second probability bound follows from the same argument and therefore is omitted for brevity.\n\\eprf\n\n\n\n\n\\bnlem\n\\label{lem:regression op1 base lemma 2}\n Suppose $\\left\\{X_i\\right\\}_{i=1}^n \\stackrel{i.i.d.}{\\sim} N_p(0, \\Sigma)$ and $u \\in \\mathbb{R}^p$ is a deterministic vector such that $\\|u\\|_2=1$. 
Then it holds that\n$$\n\\mathbb{P}\\left(\\|u^{\\top}\\left\\{\\frac{1}{r} \\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\}\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p\\vee n)}{r}}+\\frac{\\log (p\\vee n)}{r}\\right) \\text { for all } 1 \\leq r \\leq n\\right) \\leq (n\\vee p)^{-2} .\n$$\n\\enlem\n\\bprf\nFor fixed $j \\in[1, \\ldots, p]$, let\n$$\nz_i=u^{\\top} X_i X_{i j}-u^{\\top} \\Sigma_{\\cdot j},\n$$\nwhere $\\Sigma_{\\cdot j}$ denote the $j$-th column of $\\Sigma$. Note that $z_i$ is a sub-exponential random variable with bounded $\\psi_1$ norm. The desired result follows from the same argument as \\Cref{lem:regression op1 base lemma 1}.\n\\eprf\n\n\n\n\\bnlem\n\\label{lem: x - c log^2log x}\nGiven a fixed constant $c>0$, for $x\\geq c^2\\vee e^{2e}$, it holds that\n$$\nx\\geq c(\\log\\log x)^2.\n$$\n\\enlem\n\\bprf\nLet $f(x) =x- c(\\log\\log x)^2$ for $x>1$. We have $f'(x) = 1 - \\frac{2c\\log\\log x}{x\\log x}$. Therefore, when $x\\geq (2c)\\vee e^e$, $f'(x)> 0$. Let $x_0= c^2 \\vee e^{2e}$, and then\n$$\nf(x_0)\\geq ce^e - c\\log\\log e^{2e}= c[e^e - \\log 2 - 1]>0,\n$$\nand thus $f(x)>0$ for $x\\geq x_0 = c^2 \\vee e^{2e}$.\n\\eprf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsection{Local refinement in the Gaussian graphical model}\n\\label{sec:cov op1}\nFor the ease of notations, we re-index the observations in the $k$-th interval by $[n_0]:\\{1,\\cdots, n_0\\}$ (though the sample size of the problem is still $n$), and denote the $k$-th jump size as $\\kappa$ and the minimal spacing between consecutive change points as $\\Delta$ (instead of $\\Delta_{\\min}$ in the main text).\n\nBy \\Cref{assp:DCDP_covariance main} and the setting of the local refinement algorithm, we have for some $G^*,H^*\\in \\mathbb{S}_+^{p}$ that\n$$\n\\mathbb{E}[X_iX_i^\\top]=\\left\\{\\begin{array}{ll}\nG^* & \\text { when } i \\in(0, \\eta] \\\\\nH^* & \\text { when } i \\in(\\eta, n_0]\n\\end{array} .\\right.\n$$\nIn addition, there exists $\\theta \\in(0,1)$ such that $\\eta=\\lfloor n_0 \\theta\\rfloor$ and that $\\|G^*-H^*\\|_F=\\kappa_{F}<\\infty$. 
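Equivalently, writing $X_i X_i^{\\top}=\\Sigma_i+E_i$ with $\\mathbb{E}\\left[E_i\\right]=0$ and $\\Sigma_i=G^* \\mathbf{1}\\{i \\leq \\eta\\}+H^* \\mathbf{1}\\{i>\\eta\\}$, the refinement problem for the Gaussian graphical model reduces to a matrix-valued mean change point problem for the summary statistics $X_i X_i^{\\top}$, with jump size $\\|G^*-H^*\\|_F=\\kappa_F$ measured in the Frobenius norm. 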
By \\Cref{assp:DCDP_covariance main}, it holds that $c_XI_d \\preceq G^* \\preceq C_X I_d$, $c_XI_d \\preceq H^* \\preceq C_X I_d$, and\n\\begin{equation}\n \\frac{p^4\\log^2(p_n)}{\\Delta\\kappa_F^2}\\rightarrow 0, \\ \\frac{p^5\\log^3(p_n)}{\\Delta}\\rightarrow 0\n \\label{op1_eq:cov snr}\n\\end{equation}\nBy \\Cref{lem:estimation covariance 2}, there exist $\\widehat{G},\\widehat{H}$ such that\n\\begin{equation}\n \\begin{split}\n & \\|\\widehat{G}-G^*\\|_{op}\\leq C\\sqrt{\\frac{p \\log (n\\vee p)}{\\Delta}} \\text{ and } \\|\\widehat{G}-G^*\\|_{F}\\leq C p \\sqrt{\\frac{\\log (n\\vee p)}{\\Delta}}; \\\\\n& \\|\\widehat{H}-H^*\\|_{op}\\leq C\\sqrt{\\frac{p \\log (n\\vee p)}{\\Delta}} \\text{ and } \\|\\widehat{H}-H^*\\|_F\\leq C p \\sqrt{\\frac{\\log (n\\vee p)}{\\Delta}}.\n \\end{split}\n\\end{equation}\nLet\n$$\n\\widehat{\\mathcal{Q}}(k)=\\sum_{i=1}^k\\|X_iX_i^\\top- \\widehat{G}\\|_F^2+\\sum_{i=k+1}^{n_0}\\|X_iX_i^\\top-\\widehat{H}\\|_F^2 \\quad \\text { and } \\quad \\mathcal{Q}^*(k)=\\sum_{i=1}^k\\|X_iX_i^\\top- G^*\\|_F^2+\\sum_{i=k+1}^{n_0}\\|X_iX_i^\\top- H^*\\|_F^2 .\n$$\nThrough out this section, we use $\\kappa_{F} = \\|G^*-H^*\\|_F$ to measure the signal.\n\\bnlem[Refinement for covariance model]\nLet\n$$\n\\eta+r=\\underset{k \\in(0, n_0]}{\\arg \\max } \\widehat{\\mathcal{Q}}(k) .\n$$\nThen under the assumptions above, it holds that\n$$\n\\kappa_{F}^2 r=O_P(\\log(n)) .\n$$\n\\enlem\n\\bprf\nFor the brevity of notations, we denote $n\\vee p$ as $p_n$ throughout the proof. Without loss of generality, suppose $r \\geq 0$. Since $\\eta+r$ is the minimizer, it follows that\n$$\n\\widehat{\\mathcal{Q}}(\\eta+r) \\leq \\widehat{\\mathcal{Q}}(\\eta) .\n$$\nIf $r \\leq C\\frac{\\log(n)}{\\kappa_{F}^2}$, then there is nothing to show. So for the rest of the argument, for contradiction, assume that\n$$\nr \\geq C\\frac{\\log(n)}{\\kappa_{F}^2}\n$$\nObserve that\n$$\n\\begin{aligned}\n\\widehat{\\mathcal{Q}}(t)-\\widehat{\\mathcal{Q}}(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{G}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{H}\\|_F^2 \\\\\n\\mathcal{Q}^*(t)-\\mathcal{Q}^*(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top-{H}^*\\|_F^2\n\\end{aligned}\n$$\n\\textbf{Step 1}. It follows that\n$$\n\\begin{aligned}\n& \\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{G}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{G}- G^*\\|_F^2+2\\left\\langle G^*-\\widehat{G}, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top-G^*)\\right\\rangle \\\\\n=& r\\|\\widehat{G}- G^*\\|_F^2+2r \\left\\langle G^* -\\widehat{G}, H^*-G^*\\right\\rangle +2\\left\\langle G^*-\\widehat{G}, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top -H^*)\\right\\rangle\n\\end{aligned}\n$$\nBy assumptions, we have\n$$\nr\\|\\widehat{G}- G^*\\|_F^2 \\leq C_1 r \\frac{p^2 \\log (p_n)}{\\Delta}.\n$$\nSimilarly\n$$\nr\\left\\langle G^* -\\widehat{G}, H^*-G^*\\right\\rangle\\leq r\\|G^* -\\widehat{G}\\|_F\\| H^*-G^* \\|_F \\leq C_2 r\\kappa_{F} p\\sqrt{\\frac{ \\log (p_n)}{\\Delta} },\n$$\nwhere the second equality follows from $\\|G^*-H^*\\|_F=\\kappa_{F}$, and the last equality follows from \\eqref{op1_eq:cov snr}. 
In addition,\n$$\n\\begin{aligned}\n&\\left\\langle G^*-\\widehat{G}, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top -H^*)\\right\\rangle \\leq\\| G^*-\\widehat{G}\\|_F\\|\\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top -H^*)\\|_{F} \\\\\n\\leq & C_3 p \\sqrt{\\frac{\\log (p_n)}{\\Delta}} (p\\sqrt{r\\log (p_n)}+ p^{3\/2}\\log (p_n)) \\leq C_3 p^2 \\log (p_n) \\sqrt{\\frac{r}{\\Delta}} + C_3 p^2 \\sqrt{\\frac{p \\log^3 (p_n)}{\\Delta}}.\n\\end{aligned}\n$$\nTherefore\n\\begin{align*}\n & \\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{G}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2\\\\\n \\leq & C_1 p^2\\log(p_n) \\frac{r}{\\Delta} + C_2r\\kappa_F p\\sqrt{\\frac{\\log(p_n)}{\\Delta}} + C_3 p^2 \\log (p_n) \\sqrt{\\frac{r}{\\Delta}} + C_3 p^2 \\sqrt{\\frac{p \\log^3 (p_n)}{\\Delta}}\\\\\n \\leq & C_4 r\\kappa_F^2\\sqrt{\\frac{p^4\\log^2(p_n)}{\\Delta\\kappa_F^2}} + C_4 \\sqrt{\\frac{p^5\\log^3(p_n)}{\\Delta}}.\n\\end{align*}\n\\textbf{Step 2}. Using the same argument as in the previous step, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{H}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {H}^*\\|_F^2\\leq C_4 r\\kappa_F^2\\sqrt{\\frac{p^4\\log^2(p_n)}{\\Delta\\kappa_F^2}} + C_4 \\sqrt{\\frac{p^5\\log^3(p_n)}{\\Delta}}.\n$$\nTherefore\n\\begin{equation}\n \\left|\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta)-\\left\\{\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\right\\}\\right|\\leq C_4 r\\kappa_F^2\\sqrt{\\frac{p^4\\log^2(p_n)}{\\Delta\\kappa_F^2}} + C_4 \\sqrt{\\frac{p^5\\log^3(p_n)}{\\Delta}}.\n \\label{tmp_eq:op1 cov main prop eq 1}\n\\end{equation}\n\\textbf{Step 3}. Observe that\n$$\n\\begin{aligned}\n\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \n=&\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {H}^*\\|_F^2 \\\\\n=& r\\| G^*- H^*\\|_F^2-2 \\left\\langle H^*- G^*, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top- H^*) \\right\\rangle\n\\end{aligned}\n$$\nDenote $D^* = H^*- G^*$, then we can write the noise term as\n$$\n\\left\\langle H^*- G^*, X_iX_i^\\top- H^* \\right\\rangle = X_i^\\top D^* X_i - \\mathbb{E}[X_i^\\top D^* X_i].\n$$\nSince $X_i$'s are Gaussian, denote $\\Sigma_{i} = \\mathbb{E}[X_iX_i^\\top] =U_i^\\top \\Lambda_i U_i$, then\n$$\n\\left\\langle H^*- G^*, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top- H^*) \\right\\rangle = Z^\\top \\tilde{D} Z^\\top - \\mathbb{E}[Z^\\top \\tilde{D} Z^\\top],\n$$\nwhere $Z\\in \\mathbb{R}^{rd}$ is a standard Gaussian vector and \n$$\n\\widetilde{D} = {\\rm diag}\\{U_1D^*U_1^\\top,U_2D^*U_2^\\top,\\cdots, U_r D^*U_r^\\top\\}.\n$$\nSince $\\|\\widetilde{D}\\|_F = r\\kappa_F^2$, by Hanson-Wright inequality, with probability at least $1 - n^{-3}$, it holds uniformly for all $r\\geq C\\frac{\\log(n)}{\\kappa_{F}^2}$ that\n$$\n|\\left\\langle H^*- G^*, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top- H^*) \\right\\rangle| \\leq C_5 \\|X\\|_{\\psi_2}^2\\sqrt{r}\\kappa_{F}\\log(r\\kappa_{F}^2).\n$$\nTherefore, by Hanson-Wright inequality, uniformly for all $r\\geq C \\frac{\\log(n)}{\\kappa_{F}^2}$ it holds that\n\\begin{equation}\n \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \\geq r \\kappa_{F}^2 - C_5 \\|X\\|_{\\psi_2}^2\\sqrt{r}\\kappa_{F}\\log(r\\kappa_{F}^2),\n\\label{tmp_eq:op1 cov main prop eq 2}\n\\end{equation}\nand thus when $r\\geq C (\\|X\\|_{\\psi_2}^4\\vee 1)\\frac{\\log(n)}{\\kappa_{F}^2}$, $\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\geq 0$.\n\n\\textbf{Step 4}. 
\\Cref{tmp_eq:op1 cov main prop eq 1} and \\Cref{tmp_eq:op1 cov main prop eq 2} together give, uniformly for all $r \\geq C\\log(n) \/ \\kappa_{F}^2$,\n$$\n r \\kappa_{F}^2 - C_5 \\|X\\|_{\\psi_2}^2\\sqrt{r}\\kappa_{F}\\log(r\\kappa_{F}^2) \\leq C_4 r\\kappa_F^2\\sqrt{\\frac{p^4\\log^2(p_n)}{\\Delta\\kappa_F^2}} + C_4 \\sqrt{\\frac{p^5\\log^3(p_n)}{\\Delta}},\n$$\nwhich either leads to a contradiction or proves the conclusion since we assume that $\\frac{p^4\\log^2(p_n)}{\\Delta\\kappa_F^2}\\rightarrow 0$ and $\\frac{p^5\\log^3(p_n)}{\\Delta}\\rightarrow 0$.\n\\eprf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bnlem\n\\label{lem:estimation covariance 2}\nLet $\\{X_i\\}_{i\\in [n]}$ be a sequence of subgaussian vectors in $\\mathbb{R}^{d}$ with orlitz norm upper bounded $\\|X\\|_{\\psi_2}<\\infty$. Suppose $\\mathbb{E}[X_i] = 0$ and $\\mathbb{E}[X_iX_i^\\top] = \\Sigma$ for $i\\in [n]$. Let $\\widehat{\\Sigma}_{n} = \\frac{1}{n}\\sum_{i \\in [n]} X_i X_i^\\top$. Then for any $u>0$, it holds with probability at least $1 - \\exp(-u)$ that\n\\begin{equation}\n \\|\\widehat{\\Sigma}_{n} - \\Sigma\\|_{op}\\lesssim \\|X\\|_{\\psi_2}^2(\\sqrt{\\frac{d + u}{n}}\\vee \\frac{d + u}{n}).\n\\end{equation}\n\\enlem\n\\bprf\nThis is the same as \\Cref{lem:estimation covariance}.\n\\eprf\n\n\n\n\\bnlem[Hanson-Wright inequality]\nLet $X=\\left(X_1, \\ldots, X_n\\right) \\in \\mathbb{R}^n$ be a random vector with independent, mean zero, sub-gaussian coordinates. Let $A$ be an $n \\times n$ matrix. Then, for every $t \\geq 0$, we have\n$$\n\\mathbb{P}\\left\\{\\left|X^{\\top} A X-\\mathbb{E} X^{\\top} A X\\right| \\geq t\\right\\} \\leq 2 \\exp \\left[-c \\min \\left(\\frac{t^2}{K^4\\|A\\|_F^2}, \\frac{t}{K^2\\|A\\|_{op}}\\right)\\right],\n$$\nwhere $K=\\max _i\\left\\|X_i\\right\\|_{\\psi_2}$\n\\enlem\n\\bprf\nSee \\cite{vershynin2018high} for a proof and \\cite{hanson-wright-general} for a generalization to random vectors with dependence.\n\\eprf\n\n\n\n\n\n\n\\bncor[Local refinement step 1]\n\\label{lem: covariance local refinement step 1}\nUnder \\Cref{assp:DCDP_covariance main}, let $\\{\\widetilde{\\eta}_k\\}_{k \\in [\\widetilde {K}]}$ be a set of time points satisfying\n\t\\begin{equation}\n\t\t\\max_{k \\in [K]} |\\widetilde{\\eta}_k - \\eta_k| \\leq \\Delta\/5.\n\t\\end{equation}\n\tLet $\\{\\check{\\eta}_k\\}_{k\\in [\\widehat{K}]}$ be the change point estimators generated from step 1 of the local refinement algorithm with $\\{\\widetilde{\\eta}_k\\}_{k\\in [\\widehat{K}]}$ as inputs and the penalty function $R(\\cdot) = 0$. Then\n with probability at least $1 - C n ^{-3}$, $\\widehat K = K$ and that\n \\[ \n \\max_{ k\\in [K] } |\\check {\\eta}_k - \\eta_k| \\lesssim \\frac{\\|X\\|_{\\psi_2}^4}{c_{X}^4}\\frac{ p^2\\log(n\\vee p)}{\\kappa^2}.\n \\]\n\\encor\n\\bprf\nDenote $\\I = (s_k, e_k)$ as the input interval in the local refinement algorithm. 
Without loss of generality, assume that\n$$\n\\I = \\I_1\\cup \\I_2\\cup \\I_3 = [s,\\eta_k)\\cup [\\eta_k,\\check {\\eta}_k)\\cup [\\check {\\eta}_k, \\eta_{k + 1}).\n$$\nFor $\\I_2$, there are two cases.\n\n\\textbf{Case 1.} If\n$$|\\I_2| < \\max\\{C_s p\\log(n\\vee p), C_s p\\log(n\\vee p)\/\\kappa^2\\},$$\nthen the proof is complete.\n\n\\textbf{Case 2.} If\n$$|\\I_2| \\geq \\max\\{C_s p\\log(n\\vee p), C_s p\\log(n\\vee p)\/\\kappa^2\\},$$\nThen we proceed to prove that $|\\check {\\eta}_k - \\eta_k|\\leq C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^4} p^2\\log(n\\vee p)\/\\kappa^2$ for some universal constant $C>0$ with probability at least $1 - (n\\vee p)^{-5}$.\n\nFor $t\\in \\I$, let $\\widehat{\\Omega}_t$ be the estimator at index $t$. By definition, we have\n\\begin{equation}\n \\sum_{t \\in \\I}{\\rm Tr}(\\widehat{\\Omega}_t^\\top X_tX_t^\\top) - \\sum_{t\\in \\I}\\log|\\widehat{\\Omega}_t|\\leq \\sum_{t \\in \\I}{\\rm Tr}(({\\Omega}_t^*)^\\top X_tX_t^\\top) - \\sum_{t\\in \\I}\\log|{\\Omega}_t^*|\n \\label{dp_tmp_eq:cov local refine 1}\n\\end{equation}\nDue to the property that\n\\begin{equation}\n \\ell_t(\\widehat{\\Omega}) - \\ell_t({\\Omega}^*)\\geq {\\rm Tr}[(\\widehat{\\Omega} - \\Omega^*)^\\top (X_tX_t^\\top - \\Sigma^*)] + \\frac{c}{2}\\frac{1}{\\|\\Omega^*\\|_{op}^2}\\|\\widehat{\\Omega} - \\Omega^*\\|_F^2,\n\\end{equation}\nequation \\eqref{dp_tmp_eq:cov local refine 1} implies that\n\\begin{align}\n & \\sum_{i=1}^3 \\frac{|\\I_i|}{\\|\\widehat{\\Omega}_{\\I_i}^*\\|_{op}^2}\\|\\widehat{\\Omega}_{\\I_i} - {\\Omega}_{\\I_i}^*\\|_F^2 \\nonumber \\\\\n \\leq & c_1\\sum_{i = 1}^3 |\\I_i|{\\rm Tr}[({\\Omega}^*_{\\I_i} - \\widehat{\\Omega}_{\\I_i})^\\top (\\widehat{\\Sigma}_{\\I_i} - {\\Sigma}^*_{\\I_i})] \\nonumber \\\\\n \\leq & c_1\\sum_{i = 1}^3 |\\I_i|\\|{\\Omega}^*_{\\I_i} - \\widehat{\\Omega}_{\\I_i}\\|_F \\|\\widehat{\\Sigma}_{\\I_i} - {\\Sigma}^*_{\\I_i}\\|_F \\nonumber \\\\\n \\leq & \\sum_{i=1}^3 \\frac{|\\I_i|}{2\\|\\widehat{\\Omega}_{\\I_i}^*\\|_{op}^2}\\|\\widehat{\\Omega}_{\\I_i} - {\\Omega}_{\\I_i}^*\\|_F^2 + c_2\\sum_{i = 1}^3 |\\I_i|\\|\\widehat{\\Sigma}_{\\I_i} - {\\Sigma}^*_{\\I_i}\\|_F^2\\|\\Omega_{\\I_i}^*\\|_{op}^2,\\label{dp_tmp_eq:cov local refine 2}\n\\end{align}\nwhere we denote $\\widehat{\\Omega}_{\\I_1} = \\widehat{\\Omega}_{\\I_2} = \\widehat{\\Omega}_{[s_k,\\check {\\eta}_k)}$, $\\widehat{\\Omega}_{\\I_3} = \\widehat{\\Omega}_{[\\check {\\eta}_k,e_k)}$, ${\\Omega}_{\\I_1}^* = \\Omega^*_{\\eta_k - 1}$, and ${\\Omega}_{\\I_2}^* = {\\Omega}_{\\I_3}^* = \\Omega^*_{\\eta_k}$.\n\nBy the setting of local refinement, we have $\\min\\{|\\I_1|,|\\I_3|\\}\\geq C_s p\\log(n\\vee p)$. 
Therefore, by \\Cref{lem:estimation covariance 2}, for $i=1,2,3$, it holds with probability at least $1 - (n\\vee p)^{-7}$ that\n$$\n\\|\\widehat{\\Sigma}_{\\I_i} - {\\Sigma}^*_{\\I_i}\\|_F^2\\leq p\\|\\widehat{\\Sigma}_{\\I_i} - {\\Sigma}^*_{\\I_i}\\|_{op}^2\\leq C \\|X\\|_{\\psi_2}^4 \\frac{p^2\\log(n\\vee p)}{|\\I_i|}.\n$$\nConsequently, we have\n\\begin{equation}\n \\sum_{i=1}^3 \\frac{|\\I_i|}{\\|\\widehat{\\Omega}_{\\I_i}^*\\|_{op}^2}\\|\\widehat{\\Omega}_{\\I_i} - {\\Omega}_{\\I_i}^*\\|_F^2\\leq c_2\\sum_{i = 1}^3 \\|\\Omega_{\\I_i}^*\\|_{op}^2 \\|X\\|_{\\psi_2}^4 {p^2\\log(n\\vee p)}.\n \\label{dp_tmp_eq:cov local refine 3}\n\\end{equation}\nIn particular, $\\Delta\\kappa^2>\\mclB_n \\frac{\\|X\\|_{\\psi_2}^4}{c_X^4} p^2\\log(n\\vee p)$, we have\n\\begin{align*}\n |\\I_1|\\|\\widehat{\\Omega}_{\\I_1} - {\\Omega}_{\\I_1}^*\\|_F^2\\leq& c_2\\|\\widehat{\\Omega}_{\\I_1}^*\\|_{op}^2\\sum_{i = 1}^3 \\|\\Omega_{\\I_i}^*\\|_{op}^2 \\|X\\|_{\\psi_2}^4 {p^2\\log(n\\vee p)}\\\\\n \\leq & 3c_2\\frac{\\|X\\|_{\\psi}^4}{c_X^4} p^2\\log(n\\vee p)\\leq \\frac{1}{12}\\Delta\\kappa^2,\n\\end{align*}\nfor sufficiently large $n$ because $\\mclB_n\\rightarrow \\infty$ as $n\\rightarrow \\infty$. Since $|\\I_1|\\geq \\frac{1}{3}\\Delta$, it follows from the inequality above that $\\|\\widehat{\\Omega}_{\\I_1} - {\\Omega}_{\\I_1}^*\\|_F\\leq \\frac{\\kappa}{2}$ and thus,\n\\begin{equation*}\n \\|\\widehat{\\Omega}_{\\I_2} - {\\Omega}_{\\I_2}^*\\|_F\\geq \\|\\widehat{\\Omega}_{\\I_2} - {\\Omega}_{\\I_1}^*\\|_F + \\|{\\Omega}^*_{\\I_1} - {\\Omega}_{\\I_2}^*\\|_F\\geq \\frac{\\kappa}{2}.\n\\end{equation*}\nPlug this back into \\Cref{dp_tmp_eq:cov local refine 3} and we can get\n\\begin{equation}\n \\frac{\\kappa^2}{4}|\\I_2|\\leq c_4\\frac{\\|X\\|_{\\psi_2}^4}{c_X^4}p^2\\log(n\\vee p),\n\\end{equation}\nwhich completes the proof.\n\\eprf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Detail of application}\n\\label{sec:detail application}\nIn this section, we show the detail of numerical experiments on real data mentioned in \\Cref{sec: application}. We present results on three datasets that have been previously studied in the change point literature. All the three datasets we used in this section are publicly available.\n\n\\subsection{Bladder tumor micro-array data}\nThis dataset contains the micro-array records of 43 patients with bladder tumor, collected and studied by \\cite{acgh2006}. The result is visualized in \\Cref{fig:acgh}, where we only show the data of 10 patients are shown for the ease of presentation and reading. While there is no accurate ground truth of locations of change points, the 37 change points spotted by DCDP align well with previous research \\citep{ecp2015, wang_samworth2018}. \\Cref{fig:acgh} provides virtual support for the findings by DCDP.\n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width = 0.8 \\textwidth]{Figure\/acgh.pdf}\n \\caption{Estimated change points in the micro-array data. The result is based on the data of all 43 patients, while only the data of 10 patients is presented. The estimated change points are indicated by dashed vertical lines.}\n \\label{fig:acgh}\n\\end{figure}\n\n\n\n\n\n \n\n\n\\subsection{Dow Jones industrial average index}\nWe apply DCDP to the weekly log return of the 29 companies composing the \\textit{Dow Jones Industrial Average(DJIA)} from April, 1990 to January, 2012, to detect changes in the covariance structure. We use the version of the data provided in \\cite{ecp2015}. 
Two change points at September 22, 2008 and May 4, 2009 are detected, which correspond to the months during which the market was impacted by the 2008 financial crisis. The estimates by DCDP match well with previous research \\cite{ecp2015} on this data.\n\nTo give a visual evaluation of the estimated change points, in \\Cref{fig:djia} we show the estimated precision matrices on the three segments of the data split at the estimated change points.\n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width = 0.8 \\textwidth]{Figure\/djia_precision.pdf}\n \\caption{Estimated precision matrices on the three segments of the DJIA data, split at the estimated change points.}\n \\label{fig:djia}\n\\end{figure}\n\n\n\n\\subsection{FRED data}\nWe also apply DCDP to the \\textit{Federal Reserve Economic Database (FRED)} data.\\footnote{The dataset is publicly available at \\url{https:\/\/research.stlouisfed.org\/econ\/mccracken\/fred-databases}.} We use the subset of monthly data spanning from January 2000 to December 2019, which consists of $n = 240$ samples. The original data has 128 features, including the date. We use the R package \\texttt{fbi} \\citep{fbi} to transform the raw data to be stationary and to remove outliers, as is suggested by the data collector \\citep{fred2016}. After preprocessing, there are 118 features, including the date.\n\nWe use the monthly log growth rate of the US industrial production index (named INDPRO in the dataset) as the response variable, i.e., we let $y_t = \\log({\\rm INDPRO}_t\/{\\rm INDPRO}_{t - 1})$, and use the other 116 macroeconomic variables as predictors. Previous research \\citep{wang2022testing_linear, yu2022temporal} suggests that there exist change points in the association between $y_t$ and the predictors.\n\n\n\n\n\nA change point at January 2008 is detected, which is consistent with previous research on this data \\citep{wang2022testing_linear,yu2022temporal}.\n\n\n\n\n\n\n\n\n\\section{Additional Experiments}\n\\label{sec:detail experiment}\nThis section serves as a supplement to \\Cref{sec:experiment}. In \\Cref{sec:add_experiment}, we discuss the selection of $\\gamma$. In \\Cref{sec:experiment supp results}, we present the full results of the numerical experiments in \\Cref{sec: experiment comparison}.\n\n\n\\subsection{Selection of $\\gamma$}\n\\label{sec:add_experiment}\nIn the theory of DCDP, we need $\\gamma = C_{\\gamma}\\mclB_n^{-1\/2}\\Delta_{\\min}\\kappa^2$, which involves the unknown population parameters $\\Delta_{\\min}$ and $\\kappa^2$. It is common in the change point literature, and in the broader statistical literature, that the theoretically optimal tuning parameters involve unknown quantities; a typical practical solution is to perform cross validation to select the best tuning parameter from a list of candidates.\n\nSuppose we have data $\\{ \\bm Z _i\\}_{i\\in [n]}$ with $ \\bm Z _i\\sim \\mathbb{P}_{ { \\bm \\theta } _i}$. Without loss of generality, suppose $n = 2m$ for some $m\\in\\mathbb{Z}^+$. We split the data by indices, such that the data with odd indices $\\{ \\bm Z _{2i - 1}\\}_{i\\in [m]}$ form the training set and the data with even indices $\\{ \\bm Z _{2i}\\}_{i\\in [m]}$ form the test set. This is a common way to conduct cross validation in the change point literature. 
Given a set of candidate parameters $G = \\{(\\gamma_i,\\zeta_i)\\}_{i\\in [l]}$, for each $i\\in [l]$, the CV has three steps: \n\\begin{enumerate}\n \\item Run DCDP on $\\{ \\bm Z _{2i - 1}\\}_{i\\in \\mclI_k}$ with $(\\gamma_i,\\zeta_i)$ to get a segmentation $\\widetilde{P} = \\{\\mclI_k\\}_{k\\in[\\widehat{K}+1]}$ of $[1,m]$ where $\\mclI_k = [\\widetilde{\\eta}_{k-1}, \\widetilde{\\eta}_{k})$.\n \\item Calculate $\\{\\widehat{ { \\bm \\theta } }_k\\}_{k\\in [\\widehat K+1]}$ from $\\{\\{ \\bm Z _{2i - 1}\\}_{i\\in \\mclI_k}\\}_{k\\in [\\widehat K+1]}$ and $$R_i = \\sum_{k\\in [\\widehat{K} + 1]}\\mclF(\\widehat{ { \\bm \\theta } }_k,\\mclI_k)$$ from $\\{\\{ \\bm Z _{2i}\\}_{i\\in \\mclI_k}\\}_{k\\in [\\widehat K + 1]}$.\n \\item Select $(\\gamma_{i_{cv}},\\zeta_{i_{cv}})$ with the index $i_{cv}=\\argmin_{i\\in[l]}R_i$.\n\\end{enumerate}\n\n\n\n\n\n\n\n\\clearpage\n\\subsection{More results on comparisons}\n\\label{sec:experiment supp results}\nIn this section we present full results of comparisons between DCDP and other methods in \\Cref{tab:mean compare_methods full}, \\Cref{tab:linear compare_methods full}, and \\Cref{tab:covariance compare_methods full}, as a supplement to \\Cref{sec: experiment comparison}. Among all involved methods, DCDP is implemented in Python, ChangeForest is implemented in Rust and provides Python API, Inspect, Variance-Projected WBS, vanilla DP, and Block-Fused-Lasso are implemented in R based on \\texttt{Rcpp}. For fair comparison, we first generate data in Python and then load the data in R for R-based methods. All experiments for DCDP and ChangeForest are run on a virtual machine of Google Colab with Intel(R) Xeon(R) CPU of 2 cores 2.30\nGHz and 12GB RAM (one setting at a time). All other experiments are run on a personal computer with Intel Core i7 8850H CPU of 6 cores 2.60GHz and 64GB RAM (one setting at a time). 
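The localization error columns in the tables below report $H(\\hat{\\eta}, \\eta)$. As an illustration only, the following minimal Python sketch computes the two-sided Hausdorff distance between the estimated and the true sets of change points, which is the standard convention for this type of error in the change point literature; the helper name \\texttt{hausdorff\\_error} is hypothetical and not part of the DCDP code base.\n\\begin{verbatim}\nimport numpy as np\n\ndef hausdorff_error(est, truth):\n    # Hypothetical helper (not from the DCDP implementation): two-sided\n    # Hausdorff distance between two sets of change point locations.\n    est = np.asarray(est, dtype=float)\n    truth = np.asarray(truth, dtype=float)\n    if est.size == 0 or truth.size == 0:\n        return np.inf  # an empty estimate is scored as an infinite error\n    d1 = max(np.min(np.abs(truth - e)) for e in est)\n    d2 = max(np.min(np.abs(est - t)) for t in truth)\n    return max(d1, d2)\n\n# Example with true change points (50, 100, 150):\nprint(hausdorff_error([48, 103, 150], [50, 100, 150]))  # prints 3.0\n\\end{verbatim}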
Notice that programs implemented with \texttt{Rcpp} are usually faster than programs implemented in Python, and the machine used to run the \texttt{Rcpp}-based methods has better specifications than the virtual machine used to run DCDP and ChangeForest, so the comparison of execution time is not unfair to the \texttt{Rcpp}-based methods.


\Cref{tab:mean compare_methods full} shows the full results of the comparison under the mean shift model.

\begin{table}[H]
    \centering
    \begin{tabular}{l l l l l l l}
    \hline
    Setting & Method & $H(\hat{\eta},\eta)$ & Time & $\hat{K}<K$ & $\hat{K}=K$ & $\hat{K}>K$
    \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=20, K =3, \delta = 5$} 
    & DCDP & 0.00 (0.00) & 0.7s (0.2) & 0 & 100 & 0 \\
    & Inspect & 0.54 (4.46) & 0.0s (0.0) & 0 & 96 & 4 \\
    & CF & 3.59 (10.10) & 0.3s (0.0) & 0 & 84 & 16 \\
    & BFL & 42.56 (6.95) & 3.5s (0.6) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=20, K =3, \delta = 1$} 
    & DCDP & 0.51 (0.77) & 0.7s (0.2) & 0 & 100 & 0 \\
    & Inspect & 3.13 (5.50) & 0.0s (0.0) & 0 & 67 & 33 \\
    & CF & 4.38 (10.13) & 0.4s (0.1) & 0 & 81 & 19 \\
    & BFL & 43.30 (8.25) & 2.9s (0.6) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=20, K =3, \delta = 0.5$} 
    & DCDP & 8.30 (12.90) & 0.4s (0.0) & 8 & 90 & 2 \\
    & Inspect & 6.85 (7.53) & 0.0s (0.0) & 0 & 78 & 22 \\
    & CF & 7.15 (9.57) & 0.4s (0.1) & 1 & 78 & 21 \\
    & BFL & 54.48 (20.98) & 2.8s (1.1) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=100, K =3, \delta = 5$} 
    & DCDP & 0.0 (0.0) & 0.6s (0.0) & 0 & 100 & 0 \\
    & Inspect & 0.40 (3.50) & 0.0s (0.0) & 0 & 91 & 9 \\
    & CF & 2.85 (7.50) & 0.8s (0.2) & 0 & 85 & 15 \\
    & BFL & 47.80 (6.66) & 1.5s (0.3) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=100, K =3, \delta = 1$} 
    & DCDP & 0.83 (0.87) & 0.8s (0.2) & 0 & 100 & 0 \\
    & Inspect & 2.65 (5.16) & 0.0s (0.0) & 0 & 86 & 14 \\
    & CF & 3.28 (7.01) & 1.3s (0.1) & 0 & 85 & 15 \\
    & BFL & 47.59 (6.08) & 1.1s (0.2) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 800, p=100, K =3, \delta = 0.5$} 
    & DCDP & 9.36 (29.96) & 2.1s (0.3) & 3 & 97 & 0 \\
    & Inspect & 12.55 (22.14) & 0.1s (0.0) & 0 & 77 & 23 \\
    & CF & 14.73 (30.50) & 5.5s (0.3) & 0 & 82 & 18 \\
    & BFL & 80.10 (137.33) & 15.7s (3.8) & 28 & 71 & 1 \\
    \hline
    \end{tabular}
    \caption{Comparison of DCDP and other methods under the mean model with different simulation settings. 100 trials are conducted in each setting. For the localization error and the running time (in seconds), the average over 100 trials is shown with the standard error in parentheses.
The three columns on the right record the number of trials in which $\hat{K}<K$, $\hat{K}=K$, and $\hat{K}>K$, respectively.}
    \label{tab:mean compare_methods full}
\end{table}


\Cref{tab:linear compare_methods full} shows the full results of the comparison under the linear regression coefficient shift model.

\begin{table}[H]
    \centering
\begin{tabular}{l l l l l l l}
    \hline
    Setting & Method & $H(\hat{\eta},\eta)$ & Time & $\hat{K}<K$ & $\hat{K}=K$ & $\hat{K}>K$
    \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=20, K =3, \delta = 5$} 
    & DCDP & 0.03 (0.17) & 5.1s (0.3) & 0 & 100 & 0 \\
    & DP & 0.04 (0.20) & 17.0s (0.5) & 0 & 100 & 0 \\
    & VPWBS & 7.69 (15.53) & 28.4s (3.5) & 1 & 71 & 28 \\
    & BFL & 84.45 (15.33) & 4.2s (0.7) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=20, K =3, \delta = 1$} 
    & DCDP & 0.94 (5.17) & 2.3s (0.2) & 2 & 98 & 0 \\
    & DP & 0.05 (0.22) & 12.8s (0.5) & 0 & 100 & 0 \\
    & VPWBS & 11.71 (19.82) & 30.4s (2.2) & 21 & 73 & 6 \\
    & BFL & 43.31 (8.82) & 3.1s (0.8) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=100, K =3, \delta = 5$} 
    & DCDP & 0.13 (0.39) & 18.4s (1.1) & 0 & 100 & 0 \\
    & DP & 0.01 (0.10) & 220.3s (16.8) & 0 & 98 & 2 \\
    & VPWBS & 15.44 (17.99) & 120.1s (13.1) & 18 & 70 & 12 \\
    & BFL & 47.84 (6.69) & 1.4s (0.2) & 100 & 0 & 0 \\
    \hline
    \multirow{4}{3.6cm}{$n = 200, p=100, K =3, \delta = 1$} 
    & DCDP & 1.45 (8.59) & 8.8s (0.7) & 2 & 98 & 0 \\
    & DP & 0.22 (2.00) & 84.4s (5.7) & 0 & 99 & 1 \\
    & VPWBS & 11.54 (11.23) & 120.4s (14.5) & 3 & 65 & 32 \\
    & BFL & 47.19 (6.48) & 1.1s (0.2) & 100 & 0 & 0 \\
    \hline
    \end{tabular}
    \caption{Comparison of DCDP and other methods under the linear model with different simulation settings. 100 trials are conducted in each setting. For the localization error and the running time (in seconds), the average over 100 trials is shown with the standard error in parentheses. The three columns on the right record the number of trials in which $\hat{K}<K$, $\hat{K}=K$, and $\hat{K}>K$, respectively.}
    \label{tab:linear compare_methods full}
\end{table}


\Cref{tab:covariance compare_methods full} shows the full results of the comparison under the precision shift model. In \Cref{tab:covariance compare_methods full}, we do not present the results of BFL because it produced empty change point sets in all trials, for reasons unknown.
We tried to fine-tune the parameters of BFL, but did not manage to obtain nonempty sets, probably because the precision matrices in our setting are not sparse enough for BFL to perform well.

\begin{table}[H]
    \centering
\begin{tabular}{l l l l l l l}
    \hline
    Setting & Method & $H(\hat{\eta},\eta)$ & Time & $\hat{K}<K$ & $\hat{K}=K$ & $\hat{K}>K$
    \\
    \hline
    \multirow{2}{3.6cm}{$n = 2000, p=5, K =3, \delta_1 = 2,\delta_2 = 0.3$} 
    & DCDP & 5.16 (6.52) & 0.7s (0.3) & 0 & 100 & 0 \\
    & CF & 58.25 (151.74) & 1.8s (0.3) & 2 & 69 & 29 \\
    \hline
    \multirow{2}{3.6cm}{$n = 2000, p=10, K =3, \delta_1 = 5,\delta_2 = 0.3$} 
    & DCDP & 0.27 (0.49) & 0.7s (0.1) & 0 & 100 & 0 \\
    & CF & 42.5 (137.92) & 2.9s (0.2) & 0 & 84 & 16 \\
    \hline
    \multirow{2}{3.6cm}{$n = 2000, p=20, K =3, \delta_1 = 5,\delta_2 = 0.3$} 
    & DCDP & 0.03 (0.17) & 1.2s (0.2) & 0 & 100 & 0 \\
    & CF & 27.68 (97.20) & 4.8s (0.4) & 0 & 86 & 14 \\
    \hline
    \multirow{2}{3.6cm}{$n = 400, p=10, K =3, \delta_1 = 5,\delta_2 = 0.3$} 
    & DCDP & 0.42 (0.64) & 0.5s (0.0) & 0 & 100 & 0 \\
    & CF & 5.54 (14.71) & 0.6s (0.1) & 0 & 88 & 12 \\
    \hline
    \multirow{2}{3.6cm}{$n = 400, p=20, K =3, \delta_1 = 5,\delta_2 = 0.3$} 
    & DCDP & 0.66 (4.37) & 0.9s (0.3) & 0 & 100 & 0 \\
    & CF & 7.37 (18.76) & 1.0s (0.0) & 0 & 85 & 15 \\
    \hline
    \end{tabular}
    \caption{Comparison of DCDP and other methods under the covariance model with different simulation settings. 100 trials are conducted in each setting. For the localization error and the running time (in seconds), the average over 100 trials is shown with the standard error in parentheses. The three columns on the right record the number of trials in which $\hat{K}<K$, $\hat{K}=K$, and $\hat{K}>K$, respectively.}
    \label{tab:covariance compare_methods full}
\end{table}

\section{Numerical experiments}
\subsection{Run time and localization error of the divide step}
\label{sec:experiment_time_error_divide}
In the first experiment, we generate $T=4\Delta$ samples from the univariate Gaussian mean model $y_t = \mu^*_t + \epsilon_t$ with 3 change points at $\eta_{k} = k\Delta + 1$ for $k = 1,2,3$ and $\mu_{\eta_0} = 0$, $\mu_{\eta_1} = 5$, $\mu_{\eta_2} = 0$, $\mu_{\eta_3} = 5$. In \Cref{fig:loc_vs_n_grid} and \Cref{fig:runtime_n_grid}, we present the localization error rate and the run time of DCDP with $\mclR = 5$ as $\mclQ$, the number of random samples in the grid, changes from 25 to 200. The localization error rate is defined as $H(\{\hat{\eta}\},\{\eta\})/T$, where $H(\cdot,\cdot)$ is the Hausdorff distance between two sets.
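For concreteness, the error metric used in the figures can be computed as in the following small Python sketch (assuming both sets of change points are nonempty); the example values in the comment are hypothetical.
\begin{verbatim}
import numpy as np

def hausdorff(est, truth):
    """Hausdorff distance between two nonempty sets of change points."""
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    d = np.abs(est[:, None] - truth[None, :])       # pairwise |eta_hat - eta|
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# hausdorff([498, 1003, 1500], [501, 1001, 1501]) returns 3.0;
# the reported localization error rate is hausdorff(est, truth) / T.
\end{verbatim}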
We show results for $\Delta = 500$ and $\Delta = 5000$. For the tuning parameter $\gamma$, we search over a list of values from $0.2\Delta$ to $0.4\Delta$. To tune $\gamma$, we run DCDP with each candidate value of $\gamma$ on the training data and obtain estimates of the mean values on the induced intervals, then evaluate the goodness-of-fit of the testing data against the estimated mean values, and pick the $\gamma$ with the best goodness-of-fit value. Here the training and testing data are generated from the same model as above; in practice, one can do an odd-even split of the data. In the experiments, $60$ trials are run for each value of $\mclQ$.
\begin{figure}[H]
    \centering
    \includegraphics[width = 0.46 \textwidth]{Figure/Delta500_loc_error_grid_num.png}
    \includegraphics[width = 0.47 \textwidth]{Figure/Delta5000_loc_error_grid_num.png}
    \caption{Localization error vs. the number of points $\mclQ$ in the grid. Left: $\Delta = 500$; right: $\Delta = 5000$. The line is the average and the shaded area shows the upper and lower 0.1 quantiles from 60 trials. Blue: divide step only; orange: divide step followed by a local refinement.}
    \label{fig:loc_vs_n_grid}
\end{figure}

\Cref{fig:runtime_n_grid} illustrates the efficiency of DCDP, in the sense that the original DP corresponds to setting the grid to $[T]$, i.e., $\mclQ = T$, which takes much longer to run. As a reference, our own implementation of the WBS method takes 0.14 and 10.79 seconds, with average errors $5.6$ and $0.0$, for $\Delta = 500$ and $5000$, respectively.
\begin{figure}[H]
    \centering
    \includegraphics[width = 0.46 \textwidth]{Figure/Delta500_runtime_grid_num.png}
    \includegraphics[width = 0.47 \textwidth]{Figure/Delta5000_runtime_grid_num.png}
    \caption{Run time vs. the number of points $\mclQ$ in the grid. Left: $\Delta = 500$; right: $\Delta = 5000$. The line is the average and the shaded area shows the upper and lower 0.1 quantiles from 60 trials. Blue: divide step only; orange: divide step followed by a local refinement.}
    \label{fig:runtime_n_grid}
\end{figure}


\subsection{Comparison with other methods}
In this section, we compare the performance of DCDP with several competitors from the literature. Apart from the standard Wild Binary Segmentation \citep{Fryzlewicz2014} and the standard Dynamic Programming \citep{pelt2012, rinaldo2021cpd_reg_aistats}, we also consider several recently developed competitors, including the Seeded WBS \citep{kovacs2020seeded}, the Block Fused Lasso \citep{unify_sinica2022}, and the Change Forest \citep{changeforest2022}.
\section{Proof}
\label{sec:proof}
\subsection{Fundamental lemma}
In the proof of the localization error of the standard dynamic programming, we frequently compare the goodness-of-fit $\mclF(\mclI)$ over an interval $\mclI = (s,e]$ with
\[
\mclF((s,\eta_{i + 1}]) + \cdots + \mclF((\eta_{i + m},e]) + m\gamma,
\]
where $\{\eta_{i + j}\}_{j\in [m]} = \{\eta_{\ell}\}_{\ell\in [K]}\cap \mclI$ is the collection of true change points within the interval $\mclI$ and $\gamma$ is the penalty coefficient in the DP.

However, for DCDP, we only search over the random grid $\{s_{\ell}\}_{\ell\in [\mclQ]}$, which in general does not contain any true change points.
Therefore, we need two fundamental lemmas that guarantee the existence of some reference points that are close to true change points, and the deviation of goodness-of-fit evaluated at the output of DCDP compared to those reference points.\n\n\\paragraph{Reference points.} Suppose the sample points $\\{ s_q\\}_{q=1}^ { \\mathcal Q} $ are uniformly and independently sampled from $(0,n)$. \n Let \n$ \\{ \\eta_k\\}_{k=1}^{K}$ be the collection of change points and denote \n\\begin{align*} \n \\mathcal L _k (\\delta) : = \\bigg \\{ \\{ s_q\\}_{q=1}^ { \\mathcal Q} \\cap [\\eta_k -\\delta ,\\eta_k ] \\not =\\emptyset \\bigg \\} , \\quad \\text{and} \\quad \\mathcal R _k (\\delta) : = \\bigg \\{ \\{ s_q\\}_{q=1}^ { \\mathcal Q} \\cap [\\eta_k ,\\eta_k + \\delta ] \\not =\\emptyset \\bigg \\} .\n\\end{align*} \nDenote \n\\begin{align} \n\\label{eq:left and right approximation of change points} \\mathcal L(\\delta) : = \\bigcap_{k=1}^{K} \\mathcal L _k \\big ( \\delta \\big ) \n\\quad \\text{and} \\quad \n\\mathcal R (\\delta) : = \\bigcap_{k=1}^{K} \\mathcal R _k \\big ( \\delta \\big ) . \n\\end{align} \n\\bnlem Suppose the sample points $\\{ s_q\\}_{q=1}^ { \\mathcal Q} $ are uniformly and independently sampled from $(0,n)$. \nIt holds that \n\\[\n\\mathbb P \\bigg ( \\bigcap_{k=1}^{K} \\mathcal L _k (\\delta) \\bigg ) \\ge 1 - \\exp( - 2\\delta { \\mathcal Q} \/ n+ \\log(K) ) \n\\quad \\text{and} \\quad \n \\mathbb P \\bigg ( \\bigcap_{k=1}^{K} \\mathcal R _k (\\delta) \\bigg ) \\ge 1 - \\exp( - 2\\delta { \\mathcal Q} \/ n+ \\log(K) ).\n\\]\nIf in addition that $ { \\mathcal Q} \\ge 4 ( n\/\\Delta ) \\log(n) { \\mathcal B_n} $ for some slowly diverging sequence $ { \\mathcal B_n} $, then \n\\[\n\\mathbb P \\bigg ( \\mathcal L ( { \\mathcal B_n^{-1} \\Delta } \\big ) \\bigg ) \\ge 1 - \\frac{1}{n} \\quad \\text{and} \\quad \\mathbb P \\bigg ( \\mathcal R \\big ( { \\mathcal B_n^{-1} \\Delta } \\big ) \\bigg ) \\ge 1 - \\frac{1}{n} .\n\\]\n\\enlem\n\n\\begin{proof} \nNote that \n\\[\n\\mathbb P( \\mathcal L _k (\\delta) ^c ) = \\bigg( 1- \\frac{ \\delta }{ n } \\bigg)^ { \\mathcal Q} \\le \\exp( - \\delta { \\mathcal Q} \/ n ).\n\\]\nConsequently,\n\\[\n\\mathbb P \\bigg ( \\bigg\\{ \\bigcap_{k=1}^{K} \\mathcal L _k (\\delta) \\bigg\\}^c \\bigg ) \\le \\sum_{k=1}^K \\mathbb P ( \\mathcal L _k (\\delta) ^c ) \n\\le K \\exp( - \\delta { \\mathcal Q} \/ n ) = \\exp( - \\delta { \\mathcal Q} \/ n+ \\log(K) ) . \n\\]\nSetting $\\delta = { \\mathcal B_n^{-1} \\Delta } $ and noting that $K\\le n\/\\Delta$, it follows that if $ { \\mathcal Q} \\ge 4 ( n\/\\Delta ) { \\mathcal B_n} \\log(n) $, then \n\\[\n\\mathbb P \\bigg ( \\bigcap_{k=1}^{K} \\mathcal L _k \\big ( { \\mathcal B_n^{-1} \\Delta } \\big ) \\bigg ) \\ge 1- \\exp \\bigg( - 4 \\log (n ) + \\log( n\/\\Delta )\\bigg) \\ge 1 - \\frac{1}{n^3} .\n\\]\n\\end{proof} \n\n\n\n\n\n\n\n\n\n\n\n\n\\paragraph{Goodness-of-fit.} For different models, the deviation of the goodness-of-fit has different orders. We need to analyze each model separately.\n\n\\bnlem[Mean model]\n\\label{lem:mean one change deviation bound}\nLet $\\mathcal I =(s,e] $ be any generic interval and $\\widehat \\mu_\\I = \\argmin_{\\mu}[\\frac{1}{|\\mclI|}\\sum_{i\\in \\mclI}\\|X_i - \\mu\\|_2^2 + \\frac{\\lambda}{\\sqrt{|\\mclI|}}\\|\\mu\\|_1]$. \n\n{\\bf a.} If $\\I$ contains no change points. 
Then it holds that \n$$\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } \\|X_i -\\widehat \\mu_\\I \\|_2^2 - \\sum_{ i \\in \\I } \\|X_i - \\mu^*_i \\|_2^2 \\bigg| \\ge C \\sigma_{\\epsilon}^2\\mathfrak{s}\\log(np) \\bigg) \\le (np)^{-3}. $$\n\n{\\bf b.} Suppose that the interval $ \\I=(s,e]$ contains one and only one change point $ \\eta_k $. Denote\n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e] .$$ Then it holds that \n$$\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } \\|X_i -\\widehat \\mu_\\I \\|^2 - \\sum_{ i \\in \\I } \\|X_i - \\mu^*_i \\|^2 \\bigg| \\geq \\frac{ | { \\mathcal J } ||{ \\mathcal J } '| }{ |\\I| } \\kappa_k ^2 +C \\sigma_{\\epsilon}^2\\mathfrak{s}\\log(np) \\bigg) \\le (np)^{-3}. $$\n\\enlem\n\\begin{proof} \nWe show {\\bf b} as {\\bf a} immediately follows from {\\bf b} with $ |{ \\mathcal J } '| =0$. Denote \n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e] .$$ \nDenote $\\mu_\\I = \\frac{1}{|\\I | } \\sum_{i\\in \\I } \\mu^*_i $.\nThe it holds that \n\\begin{equation*}\n \\begin{split}\n &\\sum_{i\\in \\I} \\|X_i -\\widehat \\mu_\\I\\|_2^2 - \\sum_{i\\in \\I} \\|X_i - \\mu^*_i\\|_2^2 \\\\\n =& \\sum_{i \\in \\I } \\|\\widehat \\mu_\\I-\\mu^*_i \\|_2^2 -2 \\sum_{i \\in \\I } \\epsilon_i^\\top (\\widehat \\mu_\\I - \\mu^*_i)\\\\\n \\leq& \\sum_{i \\in \\I } \\|\\widehat \\mu_\\I-\\mu^*_\\I \\|_2^2 + \\sum_{i \\in \\I } \\|\\mu^*_\\I-\\mu^*_i \\|_2^2 -2 \\sum_{i \\in \\I } \\epsilon_i^\\top (\\widehat \\mu_\\I - \\mu^*_\\I) - 2 \\sum_{i \\in \\I } \\epsilon_i^\\top (\\mu^*_\\I - \\mu^*_i).\n \\end{split}\n\\end{equation*} \nObserve that \n\\begin{equation*}\n \\mathbb P \\bigg(\\|\\sum_{i\\in \\I } \\epsilon_i \\|_{\\infty} \\geq C\\sigma_{\\epsilon}\\sqrt {\\log(np)|\\I | }\\bigg) \\leq (np)^{-5 } \n\\end{equation*}\nSuppose this event holds.\n\n{\\bf Step 1.} By the event and \\Cref{lem:estimation_high_dim_mean}, we have\n\\begin{equation*}\n \\begin{split}\n \\sum_{i \\in \\I } \\|\\widehat \\mu_\\I-\\mu^*_\\I \\|_2^2 &\\leq C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np),\\\\\n |2 \\sum_{i \\in \\I } \\epsilon_i^\\top (\\widehat \\mu_\\I - \\mu^*_\\I)| &\\leq |\\I|\\|\\sum_{i\\in \\I}\\epsilon_i\\|_{\\infty}\\|\\widehat \\mu_\\I-\\mu^*_\\I\\|_1\\leq C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np).\n \\end{split}\n\\end{equation*}\n{\\bf Step 2.} Notice that \n\\begin{align*} \n \\sum_{ i \\in \\I } \\|\\mu^*_\\I-\\mu^*_i\\| ^2 = \n & \\sum_{ i \\in \\I } \\| \\frac{ | { \\mathcal J } | \\mu^*_{ \\mathcal J } + | { \\mathcal J } '| \\mu^*_{ { \\mathcal J } ' }}{|\\I| } -\\mu^*_i \\|_2 ^2\\\\\n = & \\sum_{i\\in { \\mathcal J } } \\| \\frac{ |{ \\mathcal J } '| (\\mu^*_{ \\mathcal J } - \\mu^*_{{ \\mathcal J } '} ) }{|\\I| } \\|_2^2 + \n \\sum_{i\\in { \\mathcal J } ' } \\| \\frac{ |{ \\mathcal J } | (\\mu^*_{ \\mathcal J } - \\mu^*_{{ \\mathcal J } '} ) }{|\\I| } \\|_2^2\n \\\\ = & \\frac{|{ \\mathcal J } | | { \\mathcal J } '| }{|\\I| }(\\mu^*_{ \\mathcal J } - \\mu^*_{{ \\mathcal J } '} ) ^2 = \\frac{|{ \\mathcal J } | | { \\mathcal J } '| }{|\\I| } \\kappa_k ^2.\n \\end{align*}\nMeanwhile, it holds that \n\\begin{align*} \n\\sum_{ i \\in \\I } \\epsilon_i ^\\top (\\mu^*_\\I - \\mu^*_i ) =\n& \\sum_{ i \\in \\I } \\epsilon_i^\\top \\bigg ( \\frac{ | { \\mathcal J } | \\mu^*_{ \\mathcal J } + | { \\mathcal J } '| \\mu^*_{ { \\mathcal J } ' }}{|\\I| }-\\mu^*_i \\bigg ) \n\\\\\n=& \\frac{|{ \\mathcal J } '|}{|\\I|}\\sum_{i\\in { \\mathcal J } } \\epsilon_i^\\top( \\mu^*_{{ \\mathcal J } 
'}-\\mu^*_{ \\mathcal J } ) + \n \\frac{|{ \\mathcal J } |}{|\\I|}\\sum_{i\\in { \\mathcal J } ' } \\epsilon_i^\\top( \\mu^*_{{ \\mathcal J } }-\\mu^*_{{ \\mathcal J } '} )\n \\\\\n \\le & C_2\\sigma_{\\epsilon}\\sqrt { \\frac{|{ \\mathcal J } | | { \\mathcal J } '| }{|\\I| } \\kappa_k ^2 \\log(np) }\n \\le \\frac{|{ \\mathcal J } | | { \\mathcal J } '| }{|\\I|} \\kappa_k ^2 + C\\sigma_{\\epsilon}^2\\log(np) ,\n\\end{align*}\nwhere the first inequality follows from the fact that the variance is upper bounded by\n$$\\sum_{i\\in { \\mathcal J } } \\sigma_{\\epsilon}^2 \\frac{ |{ \\mathcal J } '|^2 }{|\\I|^2 } \\|\\mu^*_{ \\mathcal J } - \\mu^*_{{ \\mathcal J } '} \\|_2^2 + \\sum_{i\\in { \\mathcal J } ' } \\sigma_{\\epsilon}^2 \\frac{ |{ \\mathcal J } |^2 }{|\\I|^2 } \\|\\mu^*_{ \\mathcal J } - \\mu^*_{{ \\mathcal J } '} \\|_2^2 = \\frac{|{ \\mathcal J } | | { \\mathcal J } '| }{|\\I| }\\sigma_{\\epsilon}^2 \\kappa_k^2 . $$\n\\end{proof} \n\n\n\n\n\\bnlem\n\\label{lem:estimation_high_dim_mean}\nFor any interval $\\mclI\\subset [1, n]$ with $|\\mclI|\\geq C_0\\mathfrak{s}\\log(np)$ that might contain change points. Let\n\\begin{equation*}\n \\hat{\\mu}_{\\mclI}\\defined \\argmin_{\\mu\\in \\mathbb{R}^p}\\frac{1}{|\\mclI|}\\|X_i - \\mu\\|_2^2 + \\frac{\\lambda}{\\sqrt{|\\mclI|}}\\|\\mu\\|_1,\n\\end{equation*}\nfor $\\lambda = C_{\\lambda}\\sigma_{\\epsilon}\\sqrt{\\log(np)}$ for sufficiently large constant $C_{\\lambda}$. Then it holds with probability at least $1 - (np)^{-5}$ that\n\\begin{equation}\n \\begin{split}\n \\|\\widehat{\\mu}_{\\I}-\\mu^*_{\\I}\\|_{2}^{2} &\\leq \\frac{C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log (np)}{\\I} \\\\\n\\left\\|\\widehat{\\mu}_{\\I}-\\mu^*_{\\I}\\right\\|_{1} &\\leq C \\sigma_{\\epsilon} { \\mathfrak{s} } \\sqrt{\\frac{\\log (np)}{|\\I|}} \\\\\n\\|\\left(\\widehat{\\mu}_{\\I}-\\mu^*_{\\I}\\right)_{S^{c}}\\|_{1} &\\leq 3\\|\\left(\\widehat{\\mu}_{\\I}-\\mu^{*}_{\\I}\\right)_{S}\\|_{1},\n \\end{split}\n\\end{equation}\nwhere $\\mu_\\mclI = \\frac{1}{|\\I|}\\sum_{i\\in \\I}\\mu^*_i$.\n\\enlem\n\n\n\\begin{proof}\nBy definition, we have $L(\\widehat{\\mu}_{\\I},\\I)\\leq L({\\mu}_{\\I},\\I)$, that is\n\\begin{align}\n\\nonumber\n &\\sum_{i\\in \\I}\\|Y_i - \\widehat{\\mu}_{\\I}\\|_2^2 + \\lambda\\sqrt{|\\I|}\\|\\widehat{\\mu}_{\\I}\\|_1\\leq \\sum_{i\\in \\I}\\|Y_i - {\\mu}^*_{\\I}\\|_2^2 + \\lambda\\sqrt{|\\I|}\\|{\\mu}^*_{\\I}\\|_1\\\\\n\\nonumber\n \\Rightarrow & \\sum_{i\\in \\I}(\\widehat{\\mu}_{\\I} - \\mu^*_{\\I})^\\top (2Y_i - \\mu^*_{\\I} - \\widehat{\\mu}_{\\I}) + \\lambda\\sqrt{|\\I|}[\\|{\\mu}^*_{\\I}\\|_1 - \\|\\widehat{\\mu}_{\\I}\\|_1]\\geq 0 \\\\\n \\nonumber\n \\Rightarrow & (\\widehat{\\mu}_{\\I} - \\mu^*_{\\I})^\\top (\\sum_{i\\in \\I}\\epsilon_i) + 2(\\widehat{\\mu}_{\\I} - \\mu^*_{\\I})^\\top\\sum_{i\\in \\I}(\\mu^*_i - \\mu^*_{\\I}) - |\\I|\\sum_{i\\in \\I}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2^2 + \\lambda\\sqrt{|\\I|}[\\|{\\mu}^*_{\\I}\\|_1 - \\|\\widehat{\\mu}_{\\I}\\|_1]\\geq 0\\\\\n \\Rightarrow & \\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1 \\|\\sum_{i\\in \\I}\\epsilon_i\\|_{\\infty} + \\lambda\\sqrt{|\\I|}[\\|{\\mu}^*_{\\I}\\|_1 - \\|\\widehat{\\mu}_{\\I}\\|_1]\\geq |\\I|\\sum_{i\\in \\I}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2^2.\n \\label{tmp_eq:mean_concentration}\n\\end{align}\nBy a union bound, we know that for some universal constant $C>0$, with probability at least $1 - (np)^{-5}$,\n$$\n\\|\\sum_{i\\in \\I}\\epsilon_i\\|_{\\infty}\\leq C\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(np)}\\leq 
\\frac{\\lambda}{2}\\sqrt{|\\I|},\n$$\nas long as $C_{\\lambda}$ is sufficiently large. Therefore, based on the sparsity assumption in \\Cref{assp: DCDP_mean}, it holds that\n\\begin{align*}\n &\\frac{\\lambda}{2}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1 + \\lambda[\\|{\\mu}^*_{\\I}\\|_1 - \\|\\widehat{\\mu}_{\\I}\\|_1]\\geq 0 \\\\\n \\Rightarrow & \\frac{\\lambda}{2}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1 + \\lambda[\\|({\\mu}^*_{\\I})_S\\|_1 - \\|(\\widehat{\\mu}_{\\I})_S\\|_1]\\geq \\lambda \\|(\\widehat{\\mu}_{\\I})_{S^c}\\|_1\\\\\n \\Rightarrow & \\frac{\\lambda}{2}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1 + \\lambda\\|({\\mu}^*_{\\I}-\\widehat{\\mu}_{\\I})_S\\|_1 \\geq \\lambda \\|(\\mu^*_{\\I} - \\widehat{\\mu}_{\\I})_{S^c}\\|_1\\\\\n \\Rightarrow & 3\\|({\\mu}^*_{\\I}-\\widehat{\\mu}_{\\I})_S\\|_1\\geq \\|({\\mu}^*_{\\I}-\\widehat{\\mu}_{\\I})_{S^c}\\|_1.\n\\end{align*}\nNow from \\Cref{tmp_eq:mean_concentration} we can get\n\\begin{align*}\n |\\I|\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2^2\\leq &\\frac{3\\lambda}{2}\\sqrt{|\\I|}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_1\\\\\n \\leq & \\frac{12\\lambda}{2}\\sqrt{|\\I|}\\|(\\widehat{\\mu}_{\\I} - \\mu^*_{\\I})_S\\|_1\\\\\n \\leq & 6\\lambda \\sqrt{ { \\mathfrak{s} } } \\sqrt{|\\I|}\\|(\\widehat{\\mu}_{\\I} - \\mu^*_{\\I})_S\\|_2\\\\\n \\leq & 6\\lambda \\sqrt{ { \\mathfrak{s} } } \\sqrt{|\\I|}\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2,\n\\end{align*}\nwhich implies that\n$$\n\\|\\widehat{\\mu}_{\\I} - \\mu^*_{\\I}\\|_2\\leq 6C_{\\lambda}\\sigma_{\\epsilon}\\sqrt{\\frac{ { \\mathfrak{s} } \\log(np)}{|\\I|}}.\n$$\nThe other inequality follows accordingly.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Mean model}\nIn this section we show the proof of \\Cref{thm:DCDP mean} and \\Cref{cor:mean local refinement}. Throughout this section, for any generic interval $\\I\\subset [1,n]$, denote $\\mu^*_{\\I} = \\frac{1}{|\\I|}\\sum_{i\\in \\I}\\mu^*_i$ and\n$$ \\widehat \\mu_\\I = \\argmin_{\\mu}\\frac{1}{|\\mclI|}\\sum_{i\\in \\mclI}\\|X_i - \\mu\\|_2^2 + \\frac{\\lambda}{\\sqrt{|\\mclI|}}\\|\\mu\\|_1. $$\nAlso, unless specially mentioned, in this section, we set the goodness-of-fit function $\\mathcal F (\\I)$ in \\Cref{algorithm:DCDP_divide} to be\n\\begin{equation}\n \\mathcal F (\\I) := \\begin{cases}\n \\sum_{i \\in \\I } \\|X_i - \\widehat \\mu_\\I \\|_2 ^2, &\\text{ when } |\\I|\\geq C_\\mclF\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np),\\\\\n 0, &\\text{otherwise},\n \\end{cases}.\n\\end{equation}\nwhere $C_\\mclF$ is a universal constant.\n \n \\begin{proof}[Proof of \\Cref{thm:DCDP mean}]\n By \\Cref{prop:mean local consistency}, $K \\leq |\\widehat{\\mathcal{P}}| \\leq 3K$. This combined with \\Cref{prop:change points partition size consistency} completes the proof.\n \\end{proof}\n \n\n \\bnprop\\label{prop:mean local consistency}\n Suppose \\Cref{assp: DCDP_mean} holds. 
Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP_divide}.\n Then with probability at least $1 - C n ^{-3}$, the following conditions hold.\n\t\\begin{itemize}\n\t\t\\item [(i)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing one and only one true \n\t\t change point $ \\eta_k $, it must be the case that\n $$\\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim \\sigma_{\\epsilon}^2\\bigg( \\frac{ { \\mathfrak{s} } \\log(np) +\\gamma }{\\kappa_k^2 }\\bigg) + { \\mathcal B_n^{-1} \\Delta } .$$\n\t\t\\item [(ii)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing exactly two true change points, say $\\eta_ k < \\eta_ {k+1} $, it must be the case that\n\t\t\t\\begin{align*} \n \\eta_k -s \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n e-\\eta_{k+1} \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } . \n \\end{align*} \n\t\t \n\\item [(iii)] No interval $\\I \\in \\widehat{\\mathcal{P}}$ contains strictly more than two true change points; and \n\n\n\t\\item [(iv)] For all consecutive intervals $ \\I_1 $ and $ \\I_2 $ in $\\widehat{ \\mathcal P}$, the interval \n\t\t$ \\I_1 \\cup \\I_2 $ contains at least one true change point.\n\t\t\t\t\n\t \\end{itemize}\n\\enprop \n\n\n\\bnprop\\label{prop:change points partition size consistency}\nSuppose \\Cref{assp: DCDP_mean} holds. Let \n $\\widehat { \\mathcal P } $ be the output of \\Cref{algorithm:DCDP_divide}.\n Suppose \n$ \\gamma \\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2 $ for sufficiently large constant $C_\\gamma$. Then\n with probability at least $1 - C n ^{-3}$, $| \\widehat { \\mathcal P} | =K $.\n\\enprop\n\n\n\n\\begin{proof}[Proof of \\Cref{prop:change points partition size consistency}] \nDenote $\\mathfrak{G} ^*_n = \\sum_{ i =1}^n \\|X_i - \\mu^*_i\\|_2^2$. Given any collection $\\{t_1, \\ldots, t_m\\}$, where $t_1 < \\cdots < t_m$, and $t_0 = 0$, $t_{m+1} = n$, let \n\\begin{equation}\\label{eq-sn-def}\n\t { \\mathfrak{G} } _n(t_1, \\ldots, t_{m}) = \\sum_{k=1}^{m} \\sum_{ i = t_k +1}^{t_{k+1}} \\mclF(\\widehat \\mu_ {(t_{k}, t_{k+1}]}, (t_{k}, t_{k+1}]).\n\\end{equation}\nFor any collection of time points, when defining \\eqref{eq-sn-def}, the time points are sorted in an increasing order.\n\nLet $\\{ \\widehat \\eta_{k}\\}_{k=1}^{\\widehat K}$ denote the change points induced by $\\widehat {\\mathcal P}$. 
Suppose we can justify that \n\t\\begin{align}\n\t\t { \\mathfrak{G} } ^*_n + K\\gamma \\ge & { \\mathfrak{G} } _n(s_1,\\ldots,s_K) + K\\gamma - C_1 ( K +1)\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:K consistency step 1} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n (\\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } ) +\\widehat K \\gamma - C_1 ( K +1) \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:K consistency step 2} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) + \\widehat K \\gamma - C_1' ( K +1) \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)- C_1'\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:K consistency step 3}\n\t\\end{align}\n\tand that \n\t\\begin{align}\\label{eq:K consistency step 4}\n\t\t { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\le C_2 (K + \\widehat{K} + 2) \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np).\n\t\\end{align}\n\tThen it must hold that $| \\hatp | = K$, as otherwise if $\\widehat K \\ge K+1 $, then \n\t\\begin{align*}\n\t\tC _2 (K + \\widehat{K} + 2) \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) & \\ge { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\\\\n\t\t& \\ge (\\widehat K - K)\\gamma -C_1 ( K +1) \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } .\n\t\\end{align*} \n\tTherefore due to the assumption that $| \\hatp| =\\widehat K\\le 3K $, it holds that \n\t\\begin{align} \\label{eq:Khat=K} \n\t\t[C_2 (4K + 2) + C_1(K+1)] \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) +C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\ge (\\widehat K - K)\\gamma \\geq \\gamma,\n\t\\end{align}\n\tNote that \\eqref{eq:Khat=K} contradicts the choice of $\\gamma$.\n\n\n\\\n\\\\\n{\\bf Step 1.} Note that \\eqref{eq:K consistency step 1} is implied by \n\t\\begin{align}\\label{eq:step 1 K consistency} \n\t\t\\left| \t { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n(s_1,\\ldots,s_K) \\right| \\le C_3(K+1) \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + C_3\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } ,\n\t\\end{align}\n\twhich is an immediate consequence of \\Cref{lem:mean one change deviation bound}. \n\t\\\n\t\\\\\n\t\\\\\n\t{\\bf Step 2.} Since $\\{ \\widehat \\eta_{k}\\}_{k=1}^{\\widehat K}$ are the change points induced by $\\widehat {\\mathcal P}$, \\eqref{eq:K consistency step 2} holds because $\\hatp$ is a minimizer.\n\\\\\n\\\\\n\t{\\bf Step 3.}\nFor every $ \\I =(s,e]\\in \\hatp$, by \\Cref{prop:mean local consistency}, we know that $\\I$ contains at most two change points. We only show the proof for the two-change-points case as the other case is easier. Denote\n\t\\[\n\t\t \\I = (s ,\\eta_{q}]\\cup (\\eta_{q},\\eta_{q+1}] \\cup (\\eta_{q+1} ,e] = { \\mathcal J } _1 \\cup { \\mathcal J } _2 \\cup { \\mathcal J } _{3},\n\t\\]\nwhere $\\{ \\eta_{q},\\eta_{q+1}\\} =\\I \\, \\cap \\, \\{\\eta_k\\}_{k=1}^K$. 

For each $ m=1,2,3$, if $|{ \mathcal J } _m|\geq C_\mclF\sigma_{\epsilon}^2 { \mathfrak{s} } \log(np)$, then by \Cref{lem:mean one change deviation bound}, it holds that
$$
\sum_{ i \in { \mathcal J } _m }\|X_i -\widehat\mu_{{ \mathcal J } _m}\|_2^2 \leq \sum_{ i \in { \mathcal J } _ m } \|X_i - \mu^*_i\|_2^2 + C\sigma_{\epsilon}^2 { \mathfrak{s} } \log(np). 
$$
Thus, we have
\begin{equation}
 \mclF(\widehat\mu_{{ \mathcal J } _m},{ \mathcal J } _m)\leq \mclF(\mu^*_{{ \mathcal J } _m},{ \mathcal J } _m) + C\sigma_{\epsilon}^2 { \mathfrak{s} } \log(np).
\end{equation}
On the other hand, by \Cref{lem:mean loss deviation no change point}, we have
$$
\mclF(\widehat\mu_{\I},{ \mathcal J } _m) \geq \mclF(\mu^*_{{ \mathcal J } _m},{ \mathcal J } _m) - C\sigma_{\epsilon}^2 { \mathfrak{s} } \log(np). 
$$
Therefore, the last two inequalities imply that 
\begin{align} 
\label{eq:K consistency step 3 inequality 3} 
\mclF(\widehat \mu_\I,\I) \geq & \sum_{m=1}^{3} \mclF(\widehat \mu_\I,{ \mathcal J } _m) \\
\geq & \sum_{m=1}^{3} \mclF(\widehat \mu_{{ \mathcal J } _m},{ \mathcal J } _m) - C\sigma_{\epsilon}^2 { \mathfrak{s} } \log(np).
\end{align}
Note that \eqref{eq:K consistency step 3} is an immediate consequence of \eqref{eq:K consistency step 3 inequality 3}.

{\bf Step 4.}
Finally, to show \eqref{eq:K consistency step 4}, let $\widetilde { \mathcal P}$ denote the partition induced by $\{\widehat \eta_{1},\ldots, \widehat \eta_{\widehat K } , \eta_1,\ldots,\eta_K\}$. Then 
$| \widetilde { \mathcal P} | \le K + \widehat K+2 $ and $\mu^*_i$ is unchanged in every interval $\I \in \widetilde { \mathcal P}$. 
	So \Cref{eq:K consistency step 4} is an immediate consequence of \Cref{lem:mean one change deviation bound}.
\end{proof} 


\begin{proof}[Proof of \Cref{cor:mean local refinement}]
For each $k\in [K]$, let $\hat{\mu}_t = \hat{\mu}^{(1)}$ for $t\in[s_k, \hat{\eta}_k)$ and $\hat{\mu}_t = \hat{\mu}^{(2)}$ for $t\in[\hat{\eta}_k, e_k)$. Without loss of generality, suppose $\hat{\eta}_k > \eta_k$ and denote
$$
{ \mathcal J } _1 = [s_k, \eta_k), \quad { \mathcal J } _2 = [\eta_k, \hat{\eta}_k), \quad { \mathcal J } _3 = [\hat{\eta}_k, e_k),
$$
and $\mu^{(1)} = \mu^*_{\eta_k - 1}$, $\mu^{(2)} = \mu^*_{\eta_k}$. Then \Cref{tmp_eq:upper_bound_delta_local_refine} is equivalent to
\begin{equation*}
 |{ \mathcal J } _1|\,\|\hat{\mu}^{(1)} - \mu^{(1)}\|_2^2 + |{ \mathcal J } _2|\,\|\hat{\mu}^{(1)} - \mu^{(2)}\|_2^2 + |{ \mathcal J } _3|\,\|\hat{\mu}^{(2)} - \mu^{(2)}\|_2^2 \leq C { \mathfrak{s} } \sigma_{\epsilon}^2\log(np).
\end{equation*}
Since $|\mclJ_1|=\eta_k - s_k\geq c_0\Delta$ for some constant $c_0$ under \Cref{assp: DCDP_mean}, we have
\begin{equation}
 \Delta \|\hat{\mu}^{(1)} - \mu^{(1)}\|_2^2\leq c_0^{-1}|\mclJ_1|\,\|\hat{\mu}^{(1)} - \mu^{(1)}\|_2^2 \leq C \sigma_{\epsilon}^2 { \mathfrak{s} } \log(np)\leq c_2\Delta \kappa^2,
\end{equation}
for some constant $c_2\in (0,1/4)$, where the last inequality is due to the fact that $ { \mathcal B_n} \rightarrow \infty$.
Thus we have\n\\begin{equation*}\n \\|\\hat{\\mu}^{(1)} - \\mu^{(1)}\\|_2^2\\leq c_2\\kappa^2.\n\\end{equation*}\nTriangle inequality gives\n\\begin{equation*}\n \\|\\hat{\\mu}^{(1)} - \\mu^{(2)}\\|_2\\geq \\|\\mu^{(1)} - \\mu^{(2)}\\|_2 - \\|\\hat{\\mu}^{(1)} - \\mu^{(1)}\\|_2 \\geq \\kappa\/2.\n\\end{equation*}\nTherefore, $\\kappa^2|\\mclJ_2|\/4\\leq |\\mclJ_2|\\|\\hat{\\mu}^{(1)} - \\mu^{(2)}\\|_2^2\\leq C \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)$ and\n\\begin{equation*}\n |\\hat{\\eta}_k - \\eta_k| = |\\mclJ_2|\\leq \\frac{C \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)}{\\kappa^2}.\n\\end{equation*}\n\\end{proof}\n\n\n\n\n\n\n\\subsubsection{Technical lemmas and proofs}\nThroughout this section, let $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP_divide}.\n \n \n \n\n\n\n\n\n\\bnlem\n\\label{lem:mean loss deviation no change point}\nLet $\\I\\subset [1,T]$ be any interval that contains no change point. Then for any interval ${ \\mathcal J } \\supset \\I$, it holds with probability at least $1 - (np)^{-5}$ that\n\\begin{equation*}\n \\mclF(\\mu^*_{\\I},\\I)\\leq \\mclF(\\widehat{\\mu}_{ \\mathcal J } ,\\I) + C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np).\n\\end{equation*}\n\\enlem\n\\begin{proof}\n{\\bf Case 1}. If $|\\I|< C_\\mclF\\sigma_{\\epsilon} { \\mathfrak{s} } \\log(np)$, then by definition, we have $\\mclF(\\mu^*_{\\I},\\I) = \\mclF(\\widehat \\mu^*_{{ \\mathcal J } },\\I) = 0$ and the inequality holds.\n\n{\\bf Case 2}. If $|\\I|\\geq C_\\mclF\\sigma_{\\epsilon} { \\mathfrak{s} } \\log(np)$, then take difference and we can get\n\\begin{align*}\n &\\sum_{i\\in \\I}\\|X_i - \\mu^*_i\\|_2^2 - \\sum_{i\\in \\I}\\|X_i - \\widehat{\\mu}_{ \\mathcal J } \\|_2^2 \\\\\n =& 2(\\widehat \\mu_{{ \\mathcal J } } - \\mu^*_{\\I})^\\top\\sum_{i\\in \\I}\\epsilon_i -|\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2\\\\\n \\leq & 2(\\|(\\widehat \\mu^*_{{ \\mathcal J } } - \\mu^*_{\\I})_S\\|_1 + \\|(\\widehat \\mu_{{ \\mathcal J } } - \\mu^*_{\\I})_{S^c}\\|_1)\\|\\sum_{i\\in \\I}\\epsilon_i\\|_{\\infty} -|\\I|\\|\\mu^*_{\\I} - \\widehat \\mu^*_{{ \\mathcal J } }\\|_2^2\\\\\n \\leq& c_1\\|\\widehat \\mu_{{ \\mathcal J } } - \\mu^*_{\\I}\\|_2\\sigma_{\\epsilon}\\sqrt{ { \\mathfrak{s} } |\\I|\\log(np)} + c_2\\sigma_{\\epsilon}\\sqrt{\\frac{\\log(np)}{|\\I|}}\\cdot c_1\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(np)} - |\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2 \\\\\n \\leq & \\frac{1}{2}|\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2 + 2c_1^2\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + c_2\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) - |\\I|\\|\\mu^*_{\\I} - \\widehat \\mu_{{ \\mathcal J } }\\|_2^2\\\\\n \\leq & C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np),\n\\end{align*}\nwhere in the second inequality we use the definition of the index set $S$ and \\Cref{lem:estimation_high_dim_mean}.\n\\end{proof}\n\n\n\n\n\n\n \n\\bnlem\n\\label{lem:mean single cp}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \nLet $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $\\I$ contains exactly one change point $ \\eta_k $. 
\n Then with probability at least $1-(np)^{-3}$, it holds that \n $$\\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim \\sigma_{\\epsilon}^2\\bigg( \\frac{ { \\mathfrak{s} } \\log(np) +\\gamma }{\\kappa_k^2 }\\bigg) + { \\mathcal B_n^{-1} \\Delta } .$$\n \\enlem\n\\begin{proof} \nIf either $ \\eta_k -s \\le { \\mathcal B_n^{-1} \\Delta } $ or $e-\\eta_k\\le { \\mathcal B_n^{-1} \\Delta } $, then there is nothing to show. So assume that \n$$ \\eta_k -s > { \\mathcal B_n^{-1} \\Delta } \\quad \\text{and} \\quad e-\\eta_k > { \\mathcal B_n^{-1} \\Delta } . $$\nBy event $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } )$, there exists $ s_ u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $ such that \n$$0\\le s_u - \\eta_k \\le { \\mathcal B_n^{-1} \\Delta } . $$\n So \n$$ \\eta_k \\le s_ u \\le e .$$\nDenote \n$$ \\I_ 1 = (s,s_u] \\quad \\text{and} \\quad \\I_2 = (s_u, e] .$$\nSince \n$s, e, s_u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, it follows that \n\\begin{align}\\nonumber\n\\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I \\|_2 ^2 \\le & \\sum_{i \\in \\I_1 }\\|X_i - \\widehat \\mu_{\\I_1} \\|_2 ^2 + \\sum_{i \\in \\I_2 }\\|X_i - \\widehat \\mu_{\\I_2} \\|_2 ^2 + \\gamma \n\\\\\\nonumber \n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i \\|_2 ^2 + C_1 \\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + (s_u -\\eta_k) \\kappa_k^2 \\big ) \n\\\\ \\nonumber\n& + \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i \\|_2 ^2 + C_1\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\gamma\n\\\\ \\nonumber \n=& \\sum_{i \\in \\I }\\|X_i - \\mu^*_i \\|_2 ^2 +C_2 \\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + (s_u -\\eta_k) \\kappa_k^2 \\big )+ \\gamma \n\\\\\n\\le & \\sum_{i \\in \\I }\\|X_i - \\mu^*_i \\|_2 ^2 +C_2 \\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 \\big ) + \\gamma , \\label{eq:one change point step 1}\n\\end{align}\nwhere the first inequality follows from the fact that $ \\I =(s ,e ] \\in \\widehat { \\mathcal P} $ and so it is the local minimizer, the second inequality follows from \\Cref{lem:mean one change deviation bound} {\\bf a} and {\\bf b} and the observation that \n$$ \\eta_k -s > { \\mathcal B_n^{-1} \\Delta } \\ge s_u -\\eta_k $$ \nDenote \n$$ { \\mathcal J } _1 = (s,\\eta_k ] \\quad \\text{and} \\quad { \\mathcal J } _2 = (\\eta_k , e] .$$\n\\Cref{eq:one change point step 1} gives \n\\begin{align*}\n\\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\widehat \\mu_\\I \\|_2 ^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\widehat \\mu_\\I \\|_2 ^2 \n \\le \\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\mu^*_{ { \\mathcal J } _1 } \\|_2 ^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\mu^*_{ { \\mathcal J } _2 } \\|_2 ^2 +C_2 \\big (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 \\big ) + \\gamma \n\\end{align*}\nwhich leads to \n\\begin{align*}\n&\\sum_{i \\in { \\mathcal J } _1 }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _1 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _2 } \\|_2^2 \n\\\\\n \\leq & 2 \\sum_{i \\in { \\mathcal J } _1 } \\epsilon_i^\\top (\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _1 } ) +2 \\sum_{i \\in { \\mathcal J } _2 } \\epsilon_i^\\top (\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _2 } ) +C_2 \\big (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\big ) + \\gamma \n \\\\\n \\leq & 2\\sigma_{\\epsilon}\\sum_{j = 1,2}\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _j 
}\\|_2 \\sqrt{|{ \\mathcal J } _j|\\log(np)}+ C_2 \\big( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\big ) + \\gamma \\\\\n \\leq & \\frac{1}{2} \\sum_{j = 1,2}|{ \\mathcal J } _j|\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _j }\\|_2^2 + C_3 \\big (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\big ) + \\gamma,\n\\end{align*} \nwhere the second inequality holds because the variance of $\\sum_{i \\in { \\mathcal J } _1 } \\epsilon_i^\\top (\\mu_\\I - \\mu^*_{ { \\mathcal J } _1 } )$ is upper bounded by $|{ \\mathcal J } _1|\\sigma^2_{\\epsilon}\\|\\mu_\\I - \\mu^*_{ { \\mathcal J } _1 }\\|_2^2$.\n\nIt follows that \n$$ |{ \\mathcal J } _1 |\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _1 }\\|_2 ^2 + |{ \\mathcal J } _2 |\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _2 }\\|_2 ^2 = \\sum_{i \\in { \\mathcal J } _1 }\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _1 }\\|_2 ^2 +\\sum_{i \\in { \\mathcal J } _2 }\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _2 } \\|_2^2 \\le C_4\\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 \\big )+ 2\\gamma .$$\nNote that \n$$ \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _1 |\\| a - \\mu^*_{{ \\mathcal J } _1 }\\|_2 ^2 + |{ \\mathcal J } _2 |\\| a - \\mu^*_{{ \\mathcal J } _2 }\\| ^2 = \\kappa_k ^2 \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{| \\I| } \\ge \\frac{ \\kappa_k^2 }{2} \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} . $$ \nThis leads to \n$$ \\frac{ \\kappa_k^2 }{2}\\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\le C_4\\big ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + \\gamma \\big ) ,$$\nwhich is \n$$ \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\le C_5 \\bigg( \\frac{\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\gamma }{\\kappa_k^2 } + { \\mathcal B_n^{-1} \\Delta } \\bigg) .$$\n \\end{proof} \n \n \n \\bnlem \nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \nLet $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be an interval that contains exactly two change points $ \\eta_k,\\eta_{k+1} $. Suppose in addition that \n\\begin{align} \\label{eq:1D two change points snr}\n\\Delta \\kappa^2 \\ge C { \\mathcal B_n} ^{1\/2} \\big(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\gamma) \n\\end{align} \nfor sufficiently large constant $C $. \n Then with probability at least $1-(np)^{-3}$, it holds that \n\\begin{align*} \n \\eta_k -s \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n e-\\eta_{k+1} \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } . \n \\end{align*} \n \\enlem \n\\begin{proof} \nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, let $ s_u, s_v$ be such that \n $\\eta_k \\le s_u \\le s_v \\le \\eta_{k+1} $ and that \n $$ 0 \\le s_u-\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } , \\quad 0\\le \\eta_{k+1} - s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
$$\n\n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n\n \\node[color=black] at (-2.3,-0.3) {\\small $\\eta_{k+1}$};\n\\draw(-2.3 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_v$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center} \n\nDenote \n$$ \\mathcal I_ 1= ( s, s _u], \\quad \\I_2 =(s_u, s_v] \\quad \\text{and} \\quad \\I_3 = (s_v,e]. $$\nIn addition, denote \n$$ { \\mathcal J } _1 = (s,\\eta_k ], \\quad { \\mathcal J } _2=(\\eta_k, \\eta_{k} + \\frac{ \\eta_{k+1} -\\eta_k }{2}], \\quad { \\mathcal J } _3 = ( \\eta_k+ \\frac{ \\eta_{k+1} -\\eta_k }{2},\\eta_{k+1 } ] \\quad \\text{and} \\quad { \\mathcal J } _4 = (\\eta_{k+1} , e] .$$\nSince \n$s, e, s_u ,s_v \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, then it follows from the definition of $\\hat{\\mclP}$ that \n\\begin{align}\\nonumber\n& \\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I \\|_2^2\n\\\\ \\nonumber \n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\widehat \\mu_{\\I_1} \\|_2 ^2 + \\sum_{i \\in \\I_2 }\\|X_i - \\widehat \\mu_{\\I_2}\\|_2^2 + \\sum_{i \\in \\I_3 }\\|X_i - \\widehat \\mu_{\\I_3}\\|_2^2 +2 \\gamma \n\\\\\\nonumber \n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i\\|_2^2 + C_1 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) }\\kappa_k^2 \\bigg ) +\n \\sum_{i \\in \\I_2 }\\|X_i - \\mu^*_i\\|_2^2 +C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)\n \\\\ \\nonumber \n + & \\sum_{i \\in \\I_3 }\\|X_i - \\mu^*_i\\|_2^2 + C_1 \\bigg( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)+ \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg) +2 \\gamma\n\\\\ \n\\le & \\sum_{i \\in \\I }\\|X_i - \\mu^*_i\\|_2^2 +C_1' \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) }\\kappa_k^2 + \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg )+2 \\gamma \n \\label{eq:two change points step 1}\n\\end{align}\nwhere the first inequality follows from the fact that $ \\I =(s ,e ] \\in \\widehat { \\mathcal P} $, the second inequality follows from \\Cref{lem:mean one change deviation bound} {\\bf a} and {\\bf b}. 
\n\\Cref{eq:two change points step 1} gives \n\\begin{align} \\nonumber \n& \\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 +\\sum_{i \\in { \\mathcal J } _3 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 +\\sum_{i \\in { \\mathcal J } _4 }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n\\\\ \\nonumber \n \\le& \\sum_{i \\in { \\mathcal J } _1 }\\|X_i - \\mu^*_{ { \\mathcal J } _1 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _2 }\\|X_i - \\mu^*_{ { \\mathcal J } _2 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _3 }\\|X_i - \\mu^*_{ { \\mathcal J } _3 }\\|_2^2 +\\sum_{i \\in { \\mathcal J } _4 }\\|X_i - \\mu^*_{ { \\mathcal J } _4 }\\|_2^2 \\\\\n + & C_1' \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) }\\kappa_k^2 + \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg ) + 2\\gamma. \\label{eq:1D two change points first}\n\\end{align}\nNote that for $\\ell\\in \\{1,2,3,4 \\}$,\n\\begin{align}\\nonumber \n &\\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n - \\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\mu^*_{ { \\mathcal J } _ \\ell }\\|_2^2 -\\sum_{i \\in { \\mathcal J } _ \\ell }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ \\ell }\\|_2^2 \n\\\\\n=& \\nonumber \n 2 \\sum_{i \\in { \\mathcal J } _ \\ell } \\epsilon_i^\\top ( \\mu^*_{ { \\mathcal J } _\\ell }-\\widehat \\mu_\\I ) \n\\\\\\nonumber \n\\ge & - C\\sigma_{\\epsilon} \\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _\\ell }\\|_2 \\sqrt{|{ \\mathcal J } _\\ell|\\log(np)}\n\\\\\n\\ge & -\\frac{1}{2}|{ \\mathcal J } _\\ell| \\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _\\ell }\\|_2^2 - C'\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np). \\nonumber \n\\end{align} \nwhich gives \n\\begin{align} \\label{eq:1D two change points second}\n &\\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\widehat \\mu_\\I \\|^2 \n - \\sum_{i \\in { \\mathcal J } _\\ell }\\|X_i - \\mu^*_{ { \\mathcal J } _ \\ell }\\|^2 \\ge \\frac{1}{2}\\sum_{i \\in { \\mathcal J } _ \\ell }\\| \\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ \\ell }\\| ^2\n- C_2\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) .\n\\end{align} \n\\Cref{eq:1D two change points first} and \n\\Cref{eq:1D two change points second} together implies that\n \n\\begin{align} \\label{eq:1D two change points third} \\sum_{l=1}^ 4 \n |{ \\mathcal J } _l | (\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _\\ell } ) ^2 \\le C_3 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) +\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\kappa_k^2 +\\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma . 
\n \\end{align}\nNote that \n\\begin{align} \\label{eq:1D two change points signal lower bound} \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _1 |( a - \\mu^*_{{ \\mathcal J } _1 }) ^2 + |{ \\mathcal J } _2 |( a - \\mu^*_{{ \\mathcal J } _2 }) ^2 =& \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\kappa_k ^2.\n\\end{align} \nSimilarly\n\\begin{align} \\label{eq:1D two change points signal lower bound 2} \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _3 |( a - \\mu^*_{{ \\mathcal J } _3 }) ^2 + |{ \\mathcal J } _4 |( a - \\mu^*_{{ \\mathcal J } _4 }) ^2 = \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{|{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\kappa_{k+1} ^2 ,\\end{align} \n\\Cref{eq:1D two change points third} together with \n\\Cref{eq:1D two change points signal lower bound} and \\Cref{eq:1D two change points signal lower bound 2} leads to \n\\begin{align}\n\\label{eq:conclusion of two change points} \n \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\kappa_k ^2 + \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{|{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\kappa_{k+1} ^2 \\le C_3 \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) +\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\kappa_k^2 +\\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma .\n \\end{align}\n Note that \n$$ 0\\le s_u -\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{and} \\quad \n0 \\le \\eta_{k+1} -s_v \\le { \\mathcal B_n^{-1} \\Delta } ,\n $$ \nand so there are four possible cases. \n\n{\\bf case a.} \nIf \n$$|{ \\mathcal J } _1| \\le { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n |{ \\mathcal J } _4| \\le { \\mathcal B_n^{-1\/2} \\Delta } , $$ \n then the desired result follows immediately. \n\n{\\bf case b.} $|{ \\mathcal J } _1| > { \\mathcal B_n^{-1\/2} \\Delta } $ and $|{ \\mathcal J } _4| \\le { \\mathcal B_n^{-1\/2} \\Delta } $. Then since $|{ \\mathcal J } _2| \\ge \\Delta \/2 $, it holds that \n$$ \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\ge \\frac{1}{2 }\\min\\{|{ \\mathcal J } _1| , |{ \\mathcal J } _2| \\} \\ge \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } . $$\nIn addition,\n$$\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\le s_u -\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{and} \\quad \\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\le \\eta_{k+1} -s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
\n$$\nSo \\Cref{eq:conclusion of two change points} leads to \n\\begin{align}\\label{eq:conclusion of two change points case two} \n\\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa_k ^2 + \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{|{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\kappa_{k+1} ^2 \\le C_3 \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + { \\mathcal B_n^{-1} \\Delta } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma .\n \\end{align} \nSince $ \\kappa_k \\asymp \\kappa$ and $ \\kappa_{k+1} \\asymp \\kappa$, \\Cref{eq:conclusion of two change points case two} gives \n$$ \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa ^2 \\le C_4 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 \\bigg ) +4 \\gamma . $$\nSince $ { \\mathcal B_n} $ is a diverging sequence, the above display gives \n$$ \\Delta \\kappa ^2 \\le C_5 { \\mathcal B_n} ^{1\/2 } (\\log(n) + \\gamma ).$$\nThis contradicts \\Cref{eq:1D two change points snr}.\n\n{\\bf case c.} $|{ \\mathcal J } _1| \\le { \\mathcal B_n^{-1\/2} \\Delta } $ and \n$|{ \\mathcal J } _4| > { \\mathcal B_n^{-1\/2} \\Delta } $. Then the same argument as that in {\\bf case b} leads to the same contradiction.\n\n {\\bf case d.} $|{ \\mathcal J } _1| > { \\mathcal B_n^{-1\/2} \\Delta } $ and $|{ \\mathcal J } _4| > { \\mathcal B_n^{-1\/2} \\Delta } $. Then since $|{ \\mathcal J } _2|\\ge \\Delta \/2 , |{ \\mathcal J } _4| \\ge \\Delta \/2 $, it holds that \n$$ \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{ |{ \\mathcal J } _1| + |{ \\mathcal J } _2| } \\ge \\frac{1}{2 }\\min\\{|{ \\mathcal J } _1| , |{ \\mathcal J } _2| \\} \\ge \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n \\frac{|{ \\mathcal J } _3| |{ \\mathcal J } _4|}{ |{ \\mathcal J } _3| + |{ \\mathcal J } _4| } \\ge \\frac{1}{2 }\\min\\{|{ \\mathcal J } _3| , |{ \\mathcal J } _4| \\} \\ge \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } $$\nIn addition, \n$$\\frac{|{ \\mathcal J } _4 | (\\eta_{k+1} -s_v ) }{ |{ \\mathcal J } _4 | + (\\eta_{k+1} -s_v ) } \\le \\eta_{k+1} -s_v \\le { \\mathcal B_n^{-1} \\Delta } \\quad \n\\frac{|{ \\mathcal J } _1| (s_u -\\eta_k ) }{ |{ \\mathcal J } _1| + (s_u -\\eta_k ) } \\le s_u -\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } . \n$$\nSo \\Cref{eq:conclusion of two change points} leads to \n\\begin{align}\\label{eq:conclusion of two change points case four} \n\\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa_k ^2 + \\frac{1}{2 } { \\mathcal B_n^{-1\/2} \\Delta } \\kappa_{k+1} ^2 \\le C_6 \\bigg (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + { \\mathcal B_n^{-1} \\Delta } \\kappa_{k+1} ^2 \\bigg ) +4 \\gamma .\n \\end{align} \nNote that $ { \\mathcal B_n} $ is a diverging sequence. So the above display gives \n$$ \\Delta \\big ( \\kappa_{k} ^2+ \\kappa_{k+1}^2 \\big) \\le C_ 7 { \\mathcal B_n} ^{1\/2 } (\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\gamma ) $$\nSince $ \\kappa_k \\asymp \\kappa$ and $ \\kappa_{k+1} \\asymp \\kappa$. This contradicts \\Cref{eq:1D two change points snr}. \n \\end{proof}\n\n\n\n \\bnlem \nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. 
\n Suppose in addition that \n\\begin{align} \\label{eq:1D three change points snr}\n\\Delta \\kappa^2 \\ge C { \\mathcal B_n} \\big(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\gamma) \n\\end{align} \nfor sufficiently large constant $C $. \n Then with probability at least $1-(np)^{-3}$, there is no interval $ \\widehat { \\mathcal P} $ containing three or more true change points. \n \\enlem \n \n\n\n\\begin{proof} \nFor contradiction, suppose $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $ \\{ \\eta_1, \\ldots, \\eta_M\\} \\subset \\I $ with $M\\ge 3$. Throughout the proof, $M$ is assumed to be a parameter that can potentially change with $n$. \nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, by relabeling $\\{ s_q\\}_{q=1}^ { \\mathcal Q} $ if necessary, let $ \\{ s_m\\}_{m=1}^M $ be such that \n $$ 0 \\le s_m -\\eta_m \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{for} \\quad 1 \\le m \\le M-1 $$ and that\n $$ 0\\le \\eta_M - s_M \\le { \\mathcal B_n^{-1} \\Delta } .$$ \n Note that these choices ensure that $ \\{ s_m\\}_{m=1}^M \\subset \\I . $\n \n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_1$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_1$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n\n \\node[color=black] at (-5,-0.3) {\\small $\\eta_2$};\n\\draw(-5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-4.5,-0.3) {\\small $s_2$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-4.5,0) }; \n \n\n \\node[color=black] at (-2.5,-0.3) {\\small $\\eta_3$};\n\\draw(-2.5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_3$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center}\n\n{\\bf Step 1.}\n Denote \n $$ \\mathcal I_ 1= ( s, s _1], \\quad \\I_m =(s_{m-1} , s_m] \\text{ for } 2 \\le m \\le M \\quad \\text{and} \\quad \\I_{M+1} = (s_M,e]. 
$$\nThen since \n$ s , e, \\{ s_m \\}_{m=1}^M \\subset \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, it follows that \n\\begin{align}\\nonumber\n& \\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I\\|_2^2\n\\\\ \\nonumber \n\\le & \\sum _{m=1}^{M+1 } \\sum_{i \\in \\I_m}\\|X_i - \\widehat y_{\\I_m } \\|_2^2 + M \\gamma \n\\\\ \\label{eq: 1D three change points deviation term 1}\n\\le & \\sum_{i \\in \\I_1 }\\|X_i - \\mu^*_i\\|_2^2 + C_1 \\bigg ( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\frac{ (\\eta_1 -s ) ( s_1-\\eta_ 1) }{ s_1-s }\\kappa_1^2 \\bigg )\n\\\\ \\label{eq: 1D three change points deviation term 2} \n+ & \n \\sum_{m=2}^{M-1} \\sum_{i \\in \\I_m }\\|X_i - \\mu^*_i\\|_2^2 +C_1 \\bigg(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)+ \\frac{(\\eta_m -s_{m-1} )(s_m-\\eta_{m } ) }{ s _{m}-s _{m-1} } \\kappa_m^2 \\bigg ) \n \\\\ \\label{eq: 1D three change points deviation term 3}\n + & C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)\n\\\\ \\label{eq: 1D three change points deviation term 4}\n+ & \\sum_{i\\in \\I_{M+1}}\\|X_i-\\mu^*_i\\|_2^2+ C_1 \\bigg( \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\frac{(\\eta_M -s_{M} )(e-\\eta_{M} ) }{ e-s _{M} } \\kappa_{M} ^2 \\bigg ) + M\\gamma, \n \\end{align} \n where Equations \\eqref{eq: 1D three change points deviation term 1}, \\eqref{eq: 1D three change points deviation term 2}\n \\eqref{eq: 1D three change points deviation term 3} and \\eqref{eq: 1D three change points deviation term 4} follow from \\Cref{lem:mean one change deviation bound} and in particular, \\Cref{eq: 1D three change points deviation term 3} corresponds to the interval $\\I _M = (s_{M-1},s_M] $ which by assumption containing no change points. \n Note that \n\\begin{align*} & \\frac{ (\\eta_1 -s ) ( s_1-\\eta_ 1) }{ s_1-s } \\le s_1-\\eta_1 \\le { \\mathcal B_n^{-1} \\Delta } ,\n\\\\ \n& \\frac{(\\eta_m -s_{m-1} )(s_m-\\eta_{m } ) }{ s _{m}-s _{m-1} } \\le s_m-\\eta_m \\le { \\mathcal B_n^{-1} \\Delta } , \\ \\text{ and } \n\\\\\n& \\frac{(\\eta_M -s_{M} )(e-\\eta_{M} ) }{ e-s _{M} } \\le \\eta_M-s_m \\le { \\mathcal B_n^{-1} \\Delta } \n \\end{align*} \n and that \n $ \\kappa_k \\asymp \\kappa $ for all $ 1\\le k \\le K$. Therefore \n\\begin{equation}\n \\label{eq:1D three change points step 1}\n \\sum_{i \\in \\I }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \\le \\sum _{ i \\in \\I } \\|X_i - \\mu^*_i\\|_2^2 + C_2 \\bigg( M \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np)+ M { \\mathcal B_n^{-1} \\Delta } \\kappa^2\n+ M\\gamma \\bigg),\n\\end{equation}\nwhere $ C_2$ is some large constant independent of $M$.\n\n{\\bf Step 2.} Let \n$$ { \\mathcal J } _1 =(s, \\eta_1], \\ { \\mathcal J } _m = (\\eta_{m-1}, \\eta_m] \\text{ for } 2 \\le m \\le M , \\ { \\mathcal J } _{M+1} =(\\eta_M, e]. $$\nNote that $\\mu^*_i$ is unchanged in any of $\\{ { \\mathcal J } _m\\}_{m=0}^{M+1}$. 
\nSo for $ 1 \\le m \\le M+1 $,\n\\begin{align}\\nonumber \n &\\sum_{i \\in { \\mathcal J } _m }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n - \\sum_{i \\in { \\mathcal J } _ m }\\|X_i - \\mu^*_{ { \\mathcal J } _ m }\\|_2^2 -\\sum_{i \\in { \\mathcal J } _ m }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \n\\\\\n=& \\nonumber \n 2 \\sum_{i \\in { \\mathcal J } _ m} \\epsilon_i^\\top (\\mu^*_{{ \\mathcal J } _ m}-\\widehat \\mu_\\I) \n\\\\\\nonumber \n\\ge & - C\\sigma_{\\epsilon}\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _ m }\\|_2\\sqrt{|{ \\mathcal J } _m|\\log(np)} \n\\\\\n\\ge & - C_3 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) - \\frac{1 }{2 } |{ \\mathcal J } _ m |\\|\\widehat \\mu_\\I - \\mu^*_{ { \\mathcal J } _ m }\\|_2^2 \\nonumber \n\\end{align} \nwhich gives \n\\begin{align} \\label{eq:1D three change points second}\n &\\sum_{i \\in { \\mathcal J } _m }\\|X_i - \\widehat \\mu_\\I\\|_2^2 \n - \\sum_{i \\in { \\mathcal J } _ m }\\|X_i - \\mu^*_{ { \\mathcal J } _ m }\\|_2^2 \\geq \\frac{1}{2}\\sum_{i \\in { \\mathcal J } _ m }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2 ^2 - C_3\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np).\n\\end{align} \nTherefore\n\\begin{align} \\label{eq:1D three change points third} \\sum_{ m=1}^{M+1} |{ \\mathcal J } _m |\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 = \\sum_{ m=1 }^{M+1} \\sum_{i \\in { \\mathcal J } _ m }\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \\le C_4 M\\bigg(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa^2\n+ \\gamma \\bigg),\n\\end{align}\nwhere the equality follows from the fact that $\\mu^*_i$ is unchanged in any of $\\{ { \\mathcal J } _m\\}_{m=0}^{M+1}$, and\n the inequality follows from \\Cref{eq:1D three change points step 1} and \n\\Cref{eq:1D three change points second}.\n\\\\\n\\\\\n{\\bf Step 3.}\nFor any $ m \\in\\{2, \\ldots, M\\}$, it holds that\n\\begin{align} \\label{eq:1D three change points signal lower bound} \\inf_{ a \\in \\mathbb R } |{ \\mathcal J } _{m-1} |\\|a - \\mu^*_{{ \\mathcal J } _{m-1} }\\|_2^2 + |{ \\mathcal J } _{m} |\\|a - \\mu^*_{{ \\mathcal J } _{m} }\\|_2^2 =& \\frac{|{ \\mathcal J } _{m-1}| |{ \\mathcal J } _m|}{ |{ \\mathcal J } _{m-1}| + |{ \\mathcal J } _m| } \\kappa_m ^2 \\ge \\frac{1}{2} \\Delta \\kappa^2,\n\\end{align} \nwhere the last inequality follows from the assumptions that $\\eta_k - \\eta_{k-1}\\ge \\Delta $ and $ \\kappa_k \\asymp \\kappa$ for all $1\\le k \\le K$. So\n\\begin{align} \\nonumber &2 \\sum_{ m=1}^{M } |{ \\mathcal J } _m |\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \n\\\\ \n\\ge & \\nonumber \\sum_{m=2}^M \\bigg( |{ \\mathcal J } _{m-1} | \\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ { m-1} }\\|_2^2 + |{ \\mathcal J } _m |\\|\\widehat \\mu_\\I - \\mu^*_{{ \\mathcal J } _ m }\\|_2^2 \\bigg) \n\\\\ \\label{eq:1D three change points signal lower bound two} \n\\ge & (M-1) \\frac{ 1}{2} \\Delta \\kappa^2 \\ge \\frac{M}{4} \\Delta \\kappa^2 ,\n\\end{align} \nwhere the second inequality follows from \\Cref{eq:1D three change points signal lower bound} and the last inequality follows from $M\\ge 3$. 
\\Cref{eq:1D three change points third} and \\Cref{eq:1D three change points signal lower bound two} together imply that \n\\begin{align}\\label{eq:1D three change points signal lower bound three} \n \\frac{M}{4} \\Delta \\kappa^2 \\le 2 C_4 M\\bigg(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa^2 + \\gamma \\bigg) .\n\\end{align}\nSince $ { \\mathcal B_n} \\to \\infty $, it follows that for sufficiently large $n$, \\Cref{eq:1D three change points signal lower bound three} gives \n$$ \\Delta\\kappa^2 \\le C_5 \\big(\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) +\\gamma),$$ \nwhich contradicts \\Cref{eq:1D three change points snr}.\n \\end{proof} \n \n \n \n \\bnlem Suppose $ \\gamma \\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2 $ for sufficiently large constant $C_\\gamma $. \nWith probability at least $1-(np)^{-3}$, there are no two consecutive intervals $\\I_1= (s,t ] \\in \\widehat {\\mathcal P} $, $ \\I_2=(t, e] \\in \\widehat {\\mathcal P} $ such that $\\I_1 \\cup \\I_2$ contains no change points. \n \\enlem \n \\begin{proof} \n For contradiction, suppose that \n $$ \\I : =\\I_1\\cup \\I_2 $$\n contains no change points. \n Since \n $ s,t,e \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, it follows that\n $$ \\sum_{i\\in \\I_1} \\|X_i - \\widehat \\mu_{\\I_1}\\|^2 +\\sum_{i\\in \\I_2} \\|X_i - \\widehat \\mu_{\\I_2}\\|_2^2 +\\gamma \\le \\sum_{i\\in \\I } \\|X_i - \\widehat \\mu_{\\I }\\|_2^2. $$\n By \\Cref{lem:mean one change deviation bound}, it follows that \n\\begin{align*} \n& \\sum_{i\\in \\I_1} \\|X_i - \\mu^*_i\\|_2^2 \\le C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\sum_{i\\in \\I_1} \\|X_i - \\widehat \\mu_{\\I_1}\\|_2^2 ,\n \\\\\n & \\sum_{i\\in \\I_2} \\|X_i - \\mu^*_i\\|_2^2 \\le C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\sum_{i\\in \\I_2} \\|X_i - \\widehat \\mu_{\\I_2}\\|_2^2\n \\\\\n & \\sum_{i\\in \\I} \\|X_i - \\widehat{\\mu}_{\\I}\\|_2^2 \\le C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) + \\sum_{i\\in \\I} \\|X_i - \\mu_{i}^*\\|_2^2.\n \\end{align*} \n So \n $$\\sum_{i\\in \\I_1} \\|X_i - \\mu^*_i\\|_2^2 +\\sum_{i\\in \\I_2}\\|X_i - \\mu^*_i\\|_2^2 -2C_1 \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) +\\gamma \\le \\sum_{i\\in \\I }\\|X_i - \\mu^*_i\\|_2^2 +C_1\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) . $$\n Since $\\mu^*_i$ is unchanged when $i\\in \\I$, it follows that \n $$ \\gamma \\le 3C_1\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np).$$\n This is a contradiction when $C_\\gamma> 3C_1. $\n \\end{proof} \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsection{Covariance model}\n\n\n\\bnprop\\label{prop:covariance local consistency}\n Suppose \\Cref{assp:DCDP_covariance} holds. 
Let \n $\\widehat { \\mathcal P } $ denote the output of DP.\n Then with probability at least $1 - C n ^{-3}$, the following conditions hold.\n\t\\begin{itemize}\n\t\t\\item [(i)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing one and only one true \n\t\t change point $ \\eta_k $, it must be the case that\n $$\\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim C_{\\gamma} \\|X\\|_{\\psi_2}^4\\frac{C_X^2}{c_X^6}\\frac{p^2\\log(np)}{\\kappa_k^2} +\\|X\\|_{\\psi_2}^4\\frac{C_X^6}{c_X^6} { \\mathcal B_n^{-1} \\Delta } .$$\n\t\t\\item [(ii)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing exactly two true change points, say $\\eta_ k < \\eta_ {k+1} $, it must be the case that\n\t\t\t\\begin{align*} \n \\eta_k -s \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } \\quad \\text{and} \\quad \n e-\\eta_{k+1} \\lesssim { \\mathcal B_n^{-1\/2} \\Delta } . \n \\end{align*} \n\t\t \n\\item [(iii)] No interval $\\I \\in \\widehat{\\mathcal{P}}$ contains strictly more than two true change points; and \n\n\n\t\\item [(iv)] For all consecutive intervals $ \\I_1 $ and $ \\I_2 $ in $\\widehat{ \\mathcal P}$, the interval \n\t\t$ \\I_1 \\cup \\I_2 $ contains at least one true change point.\n\t\t\t\t\n\t \\end{itemize}\n\\enprop \n\\bprf\nThe four cases are proved in \\Cref{lem:cov no cp}, \\Cref{lem:cov single cp}, \\Cref{lem:cov two cp}, and \\Cref{lem:cov two consecutive interval}.\n\\eprf\n\n\n\n\n\n\n\n\n\\bnprop\\label{prop:change points partition size consistency}\nSuppose \\Cref{assp:DCDP_covariance} holds. Let \n $\\widehat { \\mathcal P } $ be the output of DP. Suppose \n$ \\gamma \\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2 $ for sufficiently large constant $C_\\gamma$. Then\n with probability at least $1 - C n ^{-3}$, $| \\widehat { \\mathcal P} | =K $.\n\\enprop\n\n\n\n\\begin{proof}[Proof of \\Cref{prop:change points partition size consistency}] \nDenote $\\mathfrak{G} ^*_n = \\sum_{ i =1}^n [{\\rm Tr}[(\\Omega_i^*)^\\top X_iX_i^\\top] - \\log|\\Omega_i^*|]$. Given any collection $\\{t_1, \\ldots, t_m\\}$, where $t_1 < \\cdots < t_m$, and $t_0 = 0$, $t_{m+1} = n$, let \n\\begin{equation}\\label{eq-sn-def}\n\t { \\mathfrak{G} } _n(t_1, \\ldots, t_{m}) = \\sum_{k=1}^{m} \\sum_{ i = t_k +1}^{t_{k+1}} \\mclF(\\widehat {\\Omega}_{(t_{k}, t_{k+1}]}, (t_{k}, t_{k+1}]).\n\\end{equation}\nFor any collection of time points, when defining \\eqref{eq-sn-def}, the time points are sorted in an increasing order.\n\nLet $\\{ \\widehat \\eta_{k}\\}_{k=1}^{\\widehat K}$ denote the change points induced by $\\widehat {\\mathcal P}$. 
Suppose we can justify that \n\t\\begin{align}\n\t\t { \\mathfrak{G} } ^*_n + K\\gamma \\ge & { \\mathfrak{G} } _n(s_1,\\ldots,s_K) + K\\gamma - C(K + 1)\\frac{\\|X\\|_{\\psi_2}^2}{c_{X}^4}\\frac{ p^2\\log(np)}{\\kappa^2} - C\\sum{k\\in [K]}\\kappa_k^2\\mclB_n^{-1}\\Delta \\label{eq:cov K consistency step 1} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n (\\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } ) +\\widehat K \\gamma - C_1'(K + 1)\\frac{\\|X\\|_{\\psi_2}^2}{c_{X}^4}\\frac{ p^2\\log(np)}{\\kappa^2} - C_1'\\sum{k\\in [K]}\\kappa_k^2\\mclB_n^{-1}\\Delta \\label{eq:cov K consistency step 2} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) + \\widehat K \\gamma - C_1(K + 1)\\frac{\\|X\\|_{\\psi_2}^2}{c_{X}^4}\\frac{ p^2\\log(np)}{\\kappa^2} - C_1\\sum{k\\in [K]}\\kappa_k^2\\mclB_n^{-1}\\Delta \\label{eq:cov K consistency step 3}\n\t\\end{align}\n\tand that \n\t\\begin{align}\\label{eq:cov K consistency step 4}\n\t\t { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\le C_2 (K + \\widehat{K} + 2) \\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np).\n\t\\end{align}\n\tThen it must hold that $| \\widehat{\\mclP} | = K$, as otherwise if $\\widehat K \\geq K+1 $, then \n\t\\begin{align*}\n\t\tC _2 (K + \\widehat{K} + 2) \\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np) & \\ge { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\\\\n\t\t& \\ge (\\widehat K - K)\\gamma -C_1 ( K +1) \\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np).\n\t\\end{align*} \n\tTherefore due to the assumption that $| \\widehat{\\mclP}|=\\widehat K\\leq 3K $, it holds that \n\t\\begin{align} \\label{eq:cov Khat=K} \n\t\t[C_2 (4K + 2) + C_1(K+1)] \\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np) \\geq (\\widehat K - K)\\gamma \\geq \\gamma.\n\t\\end{align}\n\tNote that \\eqref{eq:cov Khat=K} contradicts the choice of $\\gamma$. Therefore, it remains to show \\Cref{eq:cov K consistency step 1} to \\Cref{eq:cov K consistency step 4}.\n\n{\\bf Step 1.} \\Cref{eq:cov K consistency step 1} holds because $\\widehat{\\Omega}_{\\I}$ is (one of) the minimizer of $\\F(\\Omega, \\I)$ for any interval $\\I$.\n\n{\\bf Step 2.} \\Cref{eq:cov K consistency step 2} is guaranteed by the definition of $\\widehat{\\mclP}$.\n\n{\\bf Step 3.} For every $ \\I =(s,e]\\in \\widehat{\\mclP}$, by \\Cref{prop:covariance local consistency}, we know that $\\I$ contains at most two change points. We only show the proof for the two-change-points case as the other case is easier. Denote\n\t\\[\n\t\t \\I = (s ,\\eta_{q}]\\cup (\\eta_{q},\\eta_{q+1}] \\cup (\\eta_{q+1} ,e] = { \\mathcal J } _1 \\cup { \\mathcal J } _2 \\cup { \\mathcal J } _{3},\n\t\\]\nwhere $\\{ \\eta_{q},\\eta_{q+1}\\} =\\I \\, \\cap \\, \\{\\eta_k\\}_{k=1}^K$. \n\nFor each $ m=1,2,3$, by definition it holds that\n\\begin{equation}\n \\mclF(\\widehat\\Omega_{{ \\mathcal J } _m},{ \\mathcal J } _m)\\leq \\mclF(\\Omega^*_{{ \\mathcal J } _m},{ \\mathcal J } _m).\n\\end{equation}\nOn the other hand, by \\Cref{lem:cov loss deviation no change point}, we have\n$$\n\\mclF(\\widehat\\Omega_{\\I},{ \\mathcal J } _m) \\geq \\mclF(\\Omega^*_{{ \\mathcal J } _m},{ \\mathcal J } _m) - C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2}p^2\\log(np). 
\n$$\nTherefore the last two inequalities above imply that \n\\begin{align} \n\\sum_{i \\in \\I } \\mclF(\\widehat \\Omega_\\I,\\I) \\geq & \\sum_{m=1}^{3} \\sum_{ i \\in { \\mathcal J } _m }\\mclF(\\widehat \\Omega_\\I,{ \\mathcal J } _m) \\nonumber \\\\\n\\geq & \\sum_{m=1}^{3} \\sum_{ i \\in { \\mathcal J } _m}\\mclF(\\widehat \\Omega_{{ \\mathcal J } _m},{ \\mathcal J } _m) - C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2}p^2\\log(np). \\label{eq:cov K consistency step 3 inequality 3}\n\\end{align}\nThen \\eqref{eq:cov K consistency step 3} is an immediate consequence of \\eqref{eq:cov K consistency step 3 inequality 3}.\n\n{\\bf Step 4.}\nFinally, to show \\eqref{eq:cov K consistency step 4}, let $\\widetilde { \\mathcal P}$ denote the partition induced by $\\{\\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K\\}$. Then \n$| \\widetilde { \\mathcal P} | \\le K + \\widehat K+2 $ and $\\Omega^*_i$ is unchanged in every interval $\\I \\in \\widetilde { \\mathcal P}$. So \\Cref{eq:cov K consistency step 4} is an immediate consequence of \\Cref{lem:cov no cp}.\n\\end{proof} \n\n\n\n\n\n\n\n\n\\subsubsection{Technical lemmas}\n\n\n\n\n\n\\bnlem\n\\label{lem:estimation covariance}\nLet $\\{X_i\\}_{i\\in [n]}$ be a sequence of subgaussian vectors in $\\mathbb{R}^{p}$ with Orlicz norm upper bound $\\|X\\|_{\\psi_2}<\\infty$. Suppose $\\mathbb{E}[X_i] = 0$ and $\\mathbb{E}[X_iX_i^\\top] = \\Sigma_i$ for $i\\in [n]$. Consider the change point setting in \\Cref{assp:DCDP_covariance} and consider a generic interval $\\I\\subset [1,n]$. Let $\\widehat{\\Sigma}_{\\I} = \\frac{1}{|\\I|}\\sum_{i \\in \\I} X_i X_i^\\top$ and $\\Sigma_{\\I} = \\frac{1}{|\\I|}\\sum_{i\\in\\I}\\Sigma^*_i$. Then for any $u>0$, it holds with probability at least $1 - \\exp(-u)$ that\n\\begin{equation}\n \\|\\widehat{\\Sigma}_{\\I} - \\Sigma_{\\I}\\|_2\\lesssim \\|X\\|_{\\psi_2}^2(\\sqrt{\\frac{p + u}{n}}\\vee \\frac{p + u}{n}).\n\\end{equation}\nAs a corollary, when $n\\geq C_s p\\log(np)$ for some universal constant $C_s>0$, it holds with probability at least $1 - (np)^{-7}$ that\n\\begin{equation}\n \\|\\widehat{\\Sigma}_{\\I} - \\Sigma_{\\I}\\|_2\\leq C\\|X\\|_{\\psi_2}^2\\sqrt{\\frac{p\\log(np)}{|\\I|}},\n\\end{equation}\nwhere $C$ is some universal constant that does not depend on $n, p, \\|X\\|_{\\psi_2}$, and $C_s$. In addition, let $\\widehat{\\Omega}_{\\I} = \\argmin_{\\Omega\\in \\mathbb{S}_+}L(\\Omega,\\I)$ and $\\widetilde{\\Omega}_{\\I} = (\\frac{1}{|\\I|}\\sum_{i\\in\\I}\\Sigma^*_i)^{-1}$. If $|\\I|\\geq C_s p\\log(np)\\|X\\|_{\\psi_2}^4 \/ c_X^2$ for sufficiently large constant $C_s>0$, then it holds with probability at least $1 - (np)^{-7}$ that\n\\begin{equation}\n \\|\\widehat{\\Omega}_{\\I} -\\widetilde{\\Omega}_{\\I}\\|_2\\leq C \\frac{\\|X\\|_{\\psi_2}^2}{c_X^2}\\sqrt{\\frac{p\\log(np)}{|\\I|}}.\n\\end{equation}\n\\enlem\n\\bprf\nSetting the gradient of the loss function $L(\\Omega, \\I)$ to zero gives\n\\begin{equation*}\n \\widehat{\\Omega}_{\\I} = (\\widehat{\\Sigma}_{\\I})^{\\dagger}.\n\\end{equation*}\n\\eprf\n\n\n\n\n\n\n\n\n\n\\bnlem[Covariance model]\n\\label{lem:cov one change deviation bound}\nLet $\\mathcal I =(s,e] $ be any generic interval, and define the loss function $L(\\Omega, \\I) = \\sum_{i\\in \\I}{\\rm Tr}[\\Omega^\\top (X_iX_i^\\top)] - |\\I|\\log|\\Omega|$ and goodness-of-fit function $\\mclF(\\Omega, \\I) = L(\\Omega,\\I)$. 
Define $\\widehat{\\Omega}_{\\I} = \\argmin_{\\Omega\\in \\mathbb{S}_+}L(\\Omega,\\I)$ and $\\mclF^*(\\I) = \\sum_{i\\in \\I}[{\\rm Tr}((\\Omega_i^*)^\\top (X_iX_i^\\top)) - \\log|\\Omega_i^*|]$.\n\n{\\bf a.} If $\\I$ contains no change points. Then it holds that \n$$\\mathbb{P} \\bigg( | \\mclF(\\widehat{\\Omega}_{\\I},\\I) - \\mclF^*(\\I) | \\ge C \\frac{\\|X\\|_{\\psi_2}^4}{c_X^2}p^2\\log(np) \\bigg) \\le (np)^{-3}. $$\n\n{\\bf b.} Suppose that the interval $ \\I=(s,e]$ contains one and only one change point $ \\eta_k $. Denote\n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e].$$ \nThen it holds that \n$$\\mathbb{P} \\bigg( | \\mclF(\\widehat{\\Omega}_{\\I},\\I) - \\mclF^*(\\I) | \\geq \\frac{C_X^2p}{c_X^8}\\frac{|{ \\mathcal J } ||{ \\mathcal J } '| }{|\\I|}\\kappa_k^2 + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np) \\bigg) \\le (np)^{-3}. $$\n\\enlem\n\\begin{proof} \nWe show {\\bf b} as {\\bf a} immediately follows from {\\bf b} with $ |{ \\mathcal J } '| =0$. Denote \n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e] .$$ \nLet $\\widetilde{\\Omega}_{\\I} = (\\frac{1}{|\\I|}\\sum_{i\\in\\I}\\Sigma_i^*)^{-1}$.\nThen by Taylor expansion and \\Cref{lem:estimation covariance}, we have\n\\begin{align}\n & |\\mclF(\\widehat{\\Omega}_{\\I},\\I)-\\mclF(\\widetilde{\\Omega}_{\\I},\\I)| \\nonumber \\\\\n \\leq & |{\\rm Tr}[(\\widehat{\\Omega}_{\\I} - \\widetilde{\\Omega}_{\\I})^\\top (\\sum_{i\\in\\I} X_iX_i^\\top - |\\I|\\widetilde{\\Omega}_{\\I}^{-1})]| + \\frac{C_X^2}{2}|\\I|\\|\\widehat{\\Omega}_{\\I} - \\widetilde{\\Omega}_{\\I}\\|_F^2 \\nonumber \\\\\n \\leq & |\\I|\\|\\widehat{\\Omega}_{\\I} - \\widetilde{\\Omega}_{\\I}\\|_F \\|\\widehat{\\Sigma}_{\\I} - \\widetilde{\\Omega}_{\\I}^{-1}\\|_F + \\frac{C_X^2}{2}|\\I|\\|\\widehat{\\Omega}_{\\I} - \\widetilde{\\Omega}_{\\I}\\|_F^2 \\nonumber\\\\\n \\leq & C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2}p^2\\log(np) + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np)\\leq C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np). 
\\label{eq:cov mixed deviation 1}\n\\end{align}\nOn the other hand, it holds that\n\\begin{align}\n & |\\mclF(\\widetilde{\\Omega}_{\\I},\\I)-\\mclF^*(\\I)| \\nonumber \\\\\n \\leq & |{\\rm Tr}[(\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } })^\\top (\\sum_{i\\in{ \\mathcal J } } X_iX_i^\\top - |{ \\mathcal J } |\\Omega_{{ \\mathcal J } }^{-1})]| + \\frac{C_X^2}{2}|{ \\mathcal J } |\\|\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } }\\|_F^2 \\nonumber\\\\\n & + |{\\rm Tr}[(\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } '})^\\top (\\sum_{i\\in{ \\mathcal J } '} X_iX_i^\\top - |{ \\mathcal J } '|\\Omega_{{ \\mathcal J } '}^{-1})]| + \\frac{C_X^2}{2}|{ \\mathcal J } '|\\|\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } '}\\|_F^2.\\label{eq:cov mixed deviation 2}\n\\end{align}\nTo bound $\\|\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } }\\|_F$ and $\\|\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } '}\\|_F$, notice that for two positive definite matrices $\\Sigma_1,\\Sigma_2\\in \\mathbb{S}_+$ and two positive numbers $w_1,w_2$ such that $w_1 + w_2 = 1$, we have\n\\begin{align*}\n & \\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1} - \\Sigma_1^{-1}\\|_F\\\\\n \\leq &\\sqrt{p}\\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1} - \\Sigma_1^{-1}\\|_2\\\\\n = &\\sqrt{p}\\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1}[\\Sigma_1 - (w_1\\Sigma_1 + w_2\\Sigma_2)]\\Sigma_1^{-1}\\|_2\\\\\n \\leq & \\sqrt{p}\\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1}\\|_2\\|\\Sigma_1 - (w_1\\Sigma_1 + w_2\\Sigma_2)\\|_2\\|\\Sigma_1^{-1}\\|_2\\\\\n \\leq & \\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1}\\|_2\\|\\Sigma_1^{-1}\\|_2\\cdot\\sqrt{p}w_2\\|\\Sigma_1 - \\Sigma_2\\|_2 \\\\\n \\leq & \\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1}\\|_2\\|\\Sigma_1^{-1}\\|_2\\|\\Sigma_1\\|_2\\|\\Sigma_2\\|_2\\cdot\\sqrt{p}w_2\\|\\Sigma_1^{-1} - \\Sigma_2^{-1}\\|_2.\n\\end{align*}\nTherefore, under \\Cref{assp:DCDP_covariance}, it holds that\n\\begin{equation}\n \\|\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } }\\|_F\\leq \\frac{C_X^2}{c_X^2}\\frac{|{ \\mathcal J } '|}{|\\I|}\\sqrt{p}\\kappa_k, \\|\\widetilde{\\Omega}_{\\I} - {\\Omega}_{{ \\mathcal J } '}\\|_F\\leq \\frac{C_X^2}{c_X^2}\\frac{|{ \\mathcal J } |}{|\\I|}\\sqrt{p}\\kappa_k,\n\\end{equation}\nwhere in the second inequality we use the fact that $2ab\\leq a^2+ b^2$. 
As a consequence, \\Cref{eq:cov mixed deviation 2} can be bounded as\n\\begin{align}\n & |\\mclF(\\widetilde{\\Omega}_{\\I},\\I)-\\mclF^*(\\I)| \\nonumber \\\\\n \\leq & C\\frac{\\|X\\|_{\\psi_2}^2p\\sqrt{\\log(np)}}{c_X^2C_X^{-2}}\\kappa_k(\\frac{|{ \\mathcal J } '||{ \\mathcal J } |^{\\frac{1}{2}}}{|\\I|} + \\frac{|{ \\mathcal J } '|^{\\frac{1}{2}}|{ \\mathcal J } |}{|\\I|}) + \\frac{C_X^6 p}{2c_X^4}\\kappa_k^2(\\frac{|{ \\mathcal J } ||{ \\mathcal J } '|^2 }{|\\I|^2} + \\frac{|{ \\mathcal J } '||{ \\mathcal J } |^2 }{|\\I|^2}) \\nonumber\\\\\n \\leq & C\\frac{\\|X\\|_{\\psi_2}^4p^2{\\log(np)}}{C_X^2} + \\frac{C_X^6p}{c_X^4}\\frac{|{ \\mathcal J } ||{ \\mathcal J } '| }{|\\I|}\\kappa_k^2.\\label{eq:cov mixed deviation 3}\n\\end{align}\nCombine \\Cref{eq:cov mixed deviation 1} and \\Cref{eq:cov mixed deviation 3} and we can get\n\\begin{equation*}\n \\begin{split}\n |\\mclF(\\widehat{\\Omega}_{\\I},\\I) - \\mclF^*(\\I)|\n \\leq & |\\mclF(\\widehat{\\Omega}_{\\I},\\I)-\\mclF({\\Omega}_{\\I},\\I)| +|\\mclF({\\Omega}_{\\I},\\I) - \\mclF^*(\\I)|\\\\\n \\leq& C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np) + \\frac{C_X^6 p}{c_X^4}\\frac{|{ \\mathcal J } ||{ \\mathcal J } '| }{|\\I|}\\kappa_k^2.\n \\end{split}\n\\end{equation*}\n\\end{proof} \n\n\\bnrmk\nIt can be seen later that the $p$ factor in the signal term $\\frac{C_X^6 p}{c_X^4}\\frac{|{ \\mathcal J } ||{ \\mathcal J } '| }{|\\I|}\\kappa_k^2$ will require an additional $p$ factor in the number of points in the grid for DCDP, leading to an additional $p^2$ factor in the computation time.\n\nThis factor is hard to remove because it is rooted in the approximation error\n$$\n\\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1} - \\Sigma_1^{-1}\\|_F.\n$$\nWe can try another slightly neater way of bounding this term. As is mentioned in \\cite{kereta2021ejs}, for two matrices $\\mathbf{G}, \\mathbf{H}\\in \\mathbb{R}^{d_1\\times d_2}$, it holds that\n$$\n\\left\\|\\mathbf{H}^{\\dagger}-\\mathbf{G}^{\\dagger}\\right\\|_F \\leq \\min \\left\\{\\left\\|\\mathbf{H}^{\\dagger}\\right\\|_2\\left\\|\\mathbf{G}^{\\dagger} (\\mathbf{H}-\\mathbf{G})\\right\\|_F,\\left\\|\\mathbf{G}^{\\dagger}\\right\\|_2\\left\\|\\mathbf{H}^{\\dagger} (\\mathbf{H}-\\mathbf{G})\\right\\|_2\\right\\},\n$$\nif $\\operatorname{rank}(\\mathbf{G})=\\operatorname{rank}(\\mathbf{H})=\\min \\left\\{d_1, d_2\\right\\}$. 
Therefore, we have\n\\begin{align*}\n \\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1} - \\Sigma_1^{-1}\\|_F\\leq & \\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1}\\|_2\\|\\Sigma_{1}^{-1}(w_1\\Sigma_1 + w_2\\Sigma_2 - \\Sigma_1)\\|_F\\\\\n \\leq & \\|(w_1\\Sigma_1 + w_2\\Sigma_2)^{-1}\\|_2\\|\\Sigma_{1}^{-1}\\|w_2\\|\\Sigma_2 - \\Sigma_1\\|_F.\n\\end{align*}\nHowever, to relate $\\|\\Sigma_2 - \\Sigma_1\\|_F$ to $\\|\\Sigma_2^{-1} - \\Sigma_1^{-1}\\|_F$, we need to proceed in the following way:\n\\begin{align*}\n \\|\\Sigma_2 - \\Sigma_1\\|_F\\leq & \\sqrt{d}\\|\\Sigma_2 - \\Sigma_1\\|_2 \\\\\n \\leq & \\|\\Sigma_1\\|_2\\|\\Sigma_2\\|_2 \\cdot \\sqrt{d}\\|\\Sigma_2^{-1} - \\Sigma_1^{-1}\\|_2\\\\\n \\leq & \\|\\Sigma_1\\|_2\\|\\Sigma_2\\|_2 \\cdot \\sqrt{d}\\|\\Sigma_2^{-1} - \\Sigma_1^{-1}\\|_F,\n\\end{align*}\nwhich leads to the same bound in \\Cref{lem:cov one change deviation bound}.\n\\enrmk\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bnlem[No change point]\n\\label{lem:cov no cp}\nFor interval $\\mclI$ containing no change point, it holds with probability at least $1 - n^{-5}$ that\n\\begin{equation}\n \\F(\\widehat{\\Omega}_{\\I}, \\mclI) -\\F({\\Omega}^*, \\mclI) \\geq -\\|X\\|_{\\psi_2}^4 p^2\\log(np)\\max_{k\\in[K + 1]}\\|\\Omega^*_{\\eta_k}\\|_2^2.\n\\end{equation}\n\\enlem\n\\bprf\nIf $\\I< C_s\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2}p\\log(np)$, then $\\F(\\widehat{\\Omega}_{\\I}, \\mclI) =\\F({\\Omega}^*, \\mclI)=0$ and the conclusion holds automatically. If $\\I\\geq C_s\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2}p\\log(np)$, then by \\Cref{lem:estimation covariance}, it holds with probability at least $1 - n^{-7}$ that\n\\begin{align}\n\\F(\\widehat{\\Omega}_{\\I}, \\mclI) -\\F({\\Omega}^*, \\mclI) \\geq & |\\mclI|{\\rm Tr}[(\\widehat{\\Omega}_{\\I} - \\Omega^*)^\\top (\\widehat{\\Sigma}_{\\I} - \\Sigma^*)] + \\frac{c|\\mclI|}{2\\|\\Omega^*\\|_2^2}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*\\|_F^2. \\\\\n\\geq & -|\\mclI|\\|\\widehat{\\Omega}_{\\I} - \\Omega^*\\|_F \\|\\widehat{\\Sigma}_{\\I} - \\Sigma^*\\|_F \\\\\n\\geq & -|\\mclI|p \\|\\widehat{\\Omega}_{\\I} - \\Omega^*\\|_2 \\|\\widehat{\\Sigma}_{\\I} - \\Sigma^*\\|_2\\\\\n\\geq & -\\|X\\|_{\\psi_2}^4 p^2\\log(np)\\|\\Omega^*\\|_2^2.\n\\end{align}\n\\eprf\n\n\n\n\n\\bnlem\n\\label{lem:cov loss deviation no change point}\nLet $\\I\\subset [1,T]$ be any interval that contains no change point. Then for any interval ${ \\mathcal J } \\supset \\I$, it holds with probability at least $1 - (np)^{-5}$ that\n\\begin{equation*}\n \\mclF(\\Omega^*_{\\I},\\I)\\leq \\mclF(\\widehat{\\Omega}_{ \\mathcal J } ,\\I) + C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2}p^2\\log(np).\n\\end{equation*}\n\\enlem\n\\begin{proof}\nThe conclusion is guaranteed by \\Cref{lem:cov no cp} and the fact that $\\F(\\Omega,\\I) = L(\\Omega,\\I)$ and $\\widehat{\\Omega}_{\\I}$ is (one of) the minimizer of $L(\\Omega,\\I)$.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\bnlem[Single change point]\n\\label{lem:cov single cp}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \nLet $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $\\I$ contains exactly one change point $ \\eta_k $. 
\n Then with probability at least $1-(np)^{-3}$, it holds that \n\\begin{equation}\n \\min\\{ \\eta_k -s ,e-\\eta_k \\} \\leq CC_{\\gamma} \\|X\\|_{\\psi_2}^4\\frac{C_X^2}{c_X^6}\\frac{p^2\\log(np)}{\\kappa_k^2} + C\\|X\\|_{\\psi_2}^4\\frac{C_X^6}{c_X^6} { \\mathcal B_n^{-1} \\Delta } .\n\\end{equation}\n\\enlem\n\\bprf\nIf either $ \\eta_k -s \\le { \\mathcal B_n^{-1} \\Delta } $ or $e-\\eta_k\\le { \\mathcal B_n^{-1} \\Delta } $, then there is nothing to show. So assume that \n$$ \\eta_k -s > { \\mathcal B_n^{-1} \\Delta } \\quad \\text{and} \\quad e-\\eta_k > { \\mathcal B_n^{-1} \\Delta } . $$\nBy event $\\mathcal R (p^{-1} { \\mathcal B_n^{-1} \\Delta } )$, there exists $ s_ u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $ such that \n$$0\\le s_u - \\eta_k \\le p^{-1} { \\mathcal B_n^{-1} \\Delta } . $$\n So \n$$ \\eta_k \\le s_ u \\le e .$$\nDenote \n$$ \\I_ 1 = (s,s_u] \\quad \\text{and} \\quad \\I_2 = (s_u, e],$$\nand $\\mclF^*({ \\mathcal J } ) = \\sum_{i\\in { \\mathcal J } }[{\\rm Tr}((\\Omega^*_i)^\\top X_iX_i^\\top) -\\log|\\Omega_i^*|]$. Since\n$s, e, s_u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, by the definition of $\\widehat{P}$ and $\\widehat{\\Omega}$, and \\Cref{lem:cov one change deviation bound}, it holds that\n\\begin{align}\n \\mclF(\\widehat{\\Omega}_{\\I},\\I)\\leq& \\mclF(\\widehat{\\Omega}_{\\I_1},\\I_1) + \\mclF(\\widehat{\\Omega}_{\\I_2},\\I_2) + \\gamma \\nonumber \\\\\n \\leq & \\mclF^*(\\I_1) + \\frac{C_X^6 p}{c_X^4}(s_u - \\eta_k)\\kappa_k^2 + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np) + \\mclF^*(\\I_2) + \\gamma \\nonumber \\\\\n \\leq & \\mclF^*(\\I) + \\frac{C_X^6}{c_X^4}\\mclB_n^{-1}\\Delta\\kappa_k^2 + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np) + \\gamma,\n\\end{align}\nwhere the last inequality is due to $s_u - \\eta_k\\leq \\mclB_n^{-1}\\Delta$. 
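More precisely, the factor $p$ in front of $(s_u-\\eta_k)\\kappa_k^2$ is absorbed because the event $\\mathcal R (p^{-1} { \\mathcal B_n^{-1} \\Delta } )$ invoked above gives $0\\le s_u - \\eta_k \\le p^{-1} { \\mathcal B_n^{-1} \\Delta } $, so that\n$$ \\frac{C_X^6 p}{c_X^4}(s_u - \\eta_k)\\kappa_k^2 \\le \\frac{C_X^6 p}{c_X^4}\\cdot p^{-1}\\mclB_n^{-1}\\Delta\\,\\kappa_k^2 = \\frac{C_X^6}{c_X^4}\\mclB_n^{-1}\\Delta\\kappa_k^2 . $$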
Let \n$$\\widetilde{\\gamma} = \\frac{C_X^6}{c_X^4}\\mclB_n^{-1}\\Delta\\kappa_k^2 + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np) + \\gamma.$$ \nThen by Taylor expansion and \\Cref{lem:estimation covariance} we have\n\\begin{align}\n \\frac{c}{\\max_{k\\in [K]}\\|\\Omega_{\\I_k}^*\\|_2^2}\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \n &\\leq \\widetilde{\\gamma} + \\sum_{i=k-1}^k|\\I_i|{\\rm Tr}[(\\Omega_{\\I_i}^* - \\widehat{\\Omega}_{\\I})^\\top (\\widehat{\\Sigma}_{\\I_i} - \\Sigma^*_{\\I_i})] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + \\sum_{i=k-1}^k {|\\mclI_{i}|}\\|\\widehat{\\Sigma}_{\\I_i} - \\Sigma^*_{\\I_i}\\|_F\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_{\\I_i}\\|_F \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + C_1\\|X\\|_{\\psi_2}^2 p\\log^{\\frac{1}{2}}(np)\\left[ \\sum_{i=k-1}^k \\sqrt{|\\mclI_{i}|}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_{\\I_i}\\|_F \\right] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + C_1\\|X\\|_{\\psi_2}^2 p\\log^{\\frac{1}{2}}(np) \\sqrt{\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2}.\n\\end{align}\nThe inequality above implies that\n\\begin{equation}\n \\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \\leq \\frac{2}{c}\\max\\|\\Omega_{\\I_k}^*\\|_2^2\\left[\\widetilde{\\gamma} + \\max\\|\\Omega_{\\I_k}^*\\|_2^2\\|X\\|^4_{\\psi_2} p^{2}\\log(np)\\right].\n\\end{equation}\nOn the other hand,\n\\begin{equation}\n \\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2\\geq \\frac{|\\mclI_{k-1}||\\mclI_{k}|}{|\\mclI|}\\|\\Omega^*_{\\I_{k-1}} - \\Omega^*_{\\I_k}\\|_F^2,\n\\end{equation}\nwhich implies that\n\\begin{equation}\n \\min\\{|\\mclI_{k-1}|,|\\mclI_{k}|\\}\\leq C_2C_{\\gamma} \\|X\\|_{\\psi_2}^4\\frac{C_X^2}{c_X^6}\\frac{p^2\\log(np)}{\\kappa_k^2} + C_2\\|X\\|_{\\psi_2}^4\\frac{C_X^6}{c_X^6} { \\mathcal B_n^{-1} \\Delta } .\n\\end{equation}\n\\eprf\n\n\n\n\\bnlem[Two change point]\n\\label{lem:cov two cp}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. Let $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be an interval that contains exactly two change points $ \\eta_k,\\eta_{k+1}$. Suppose in addition that $\\gamma \\geq C_{\\gamma}\\|X\\|_{\\psi_2}^4\\max_{k\\in [K]}\\|\\Omega_{\\I_k}^*\\|_2^2p^2\\log(np)$, and\n\\begin{equation}\n \\Delta \\kappa^2 \\geq \\mathcal{B}_n \\frac{\\|X\\|_{\\psi_2}^4}{c_X^4}p^2\\log(np),\n\\end{equation}\nthen it holds with probability at least $1 - n^{-5}$ that\n\\begin{equation}\n \\max\\{\\eta_k - s, e - \\eta_{k + 1}\\}\\leq { \\mathcal B_n^{-1\/2} \\Delta } .\n\\end{equation}\n\\enlem\n\\bprf\nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, let $ s_u, s_v$ be such that \n $\\eta_k \\le s_u \\le s_v \\le \\eta_{k+1} $ and that \n $$ 0 \\le s_u-\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } , \\quad 0\\le \\eta_{k+1} - s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
$$\n\n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n\n \\node[color=black] at (-2.3,-0.3) {\\small $\\eta_{k+1}$};\n\\draw(-2.3 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_v$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center} \n\nDenote \n$$ \\mathcal I_ 1= ( s, s _u], \\quad \\I_2 =(s_u, s_v] \\quad \\text{and} \\quad \\I_3 = (s_v,e]. $$\nIn addition, denote \n$$ { \\mathcal J } _1 = (s,\\eta_k ], \\quad { \\mathcal J } _2=(\\eta_k, \\eta_{k} + \\frac{ \\eta_{k+1} -\\eta_k }{2}], \\quad { \\mathcal J } _3 = ( \\eta_k+ \\frac{ \\eta_{k+1} -\\eta_k }{2},\\eta_{k+1 } ] \\quad \\text{and} \\quad { \\mathcal J } _4 = (\\eta_{k+1} , e] .$$\nSince \n$s, e, s_u ,s_v \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $, by the event $\\mclL(p^{-1} { \\mathcal B_n^{-1} \\Delta } )$ and $\\mclR(p^{-1} { \\mathcal B_n^{-1} \\Delta } )$, it holds with probability at least $1 - n^{-3}$ that\n$$\n0\\leq s_u - \\eta_k\\leq p^{-1} { \\mathcal B_n^{-1} \\Delta } ,\\ 0\\leq \\eta_{k+1} - s_v\\leq p^{-1} { \\mathcal B_n^{-1} \\Delta } .\n$$\nDenote\n$$\\I_1 = (s ,\\eta_k], \\I_2 = (\\eta_k, \\eta_{k + 1}], \\I_3 = (\\eta_{k + 1}, e].$$\nBy the definition of DP and $\\widehat{\\Omega}_{\\I}$, it holds that\n\\begin{align}\n \\F(\\hat{\\Omega}_{\\I},\\I)\n \\leq & \\sum_{i = 1}^3 \\F(\\hat{\\Omega}_{\\I_i},\\I_i) + 2\\gamma \\\\\n \\leq & \\sum_{i = 1}^3 \\F({\\Omega}_{\\I_i}^*,\\I_i) + 2\\gamma + \\frac{C_X^6 p}{c_X^4} \\frac{|{ \\mathcal J } _1|(s_u - \\eta_k)}{|{ \\mathcal J } _1|+s_u - \\eta_k} \\kappa_k^2 + \\frac{C_X^6 p}{c_X^4} \\frac{|{ \\mathcal J } _4|(\\eta_{k + 1} - s_v)}{|{ \\mathcal J } _4|+\\eta_{k + 1} - s_v} \\kappa_k^2 + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np) \\\\\n \\leq & \\sum_{i = 1}^3 \\F({\\Omega}_{\\I_i}^*,\\I_i) + 2\\gamma + \\frac{C_X^6}{c_X^4} { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np).\n\\end{align}\nLet\n$$\n\\widetilde{\\gamma} = 2\\frac{C_X^6}{c_X^4}\\mclB_n^{-1}\\Delta\\kappa_k^2 + C\\frac{\\|X\\|_{\\psi_2}^4 C_X^2}{c_X^4}p^2\\log(np) + 2\\gamma.\n$$,\nThen by Taylor expansion and \\Cref{lem:estimation covariance} we have\n\\begin{align}\n cc_X^2\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 &\\leq \\widetilde{\\gamma} + \\sum_{i=1}^3|\\I_i|{\\rm Tr}[(\\Omega_{\\I_i}^* - \\widehat{\\Omega}_{\\I})^\\top (\\widehat{\\Sigma}_{\\I_i} - \\Sigma^*_{\\I_i})] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + C\\|X\\|_{\\psi_2}^2 p\\log^{\\frac{1}{2}}(np)\\left[ \\sum_{i=1}^3 \\sqrt{|\\mclI_{i}|}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_{i}\\|_F \\right] \\nonumber \\\\\n &\\leq \\widetilde{\\gamma} + C\\|X\\|_{\\psi_2}^2 p\\log^{\\frac{1}{2}}(np) \\sqrt{\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2}.\n\\end{align}\nThe inequality above implies that\n\\begin{equation}\n \\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \\leq \\frac{C_1}{c_X^2}\\left[\\widetilde{\\gamma} + 
\\frac{1}{c_X^2}\\|X\\|^4_{\\psi_2} p^{2}\\log(np)\\right].\n\\end{equation}\nBy the choice of $\\gamma$, it holds that\n\\begin{equation}\n \\sum_{t\\in \\I_1\\cup\\I_2}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \\leq \\frac{C_1C_{\\gamma}}{c_X^4} \\|X\\|^4_{\\psi_2} p^{2}\\log(np).\n\\end{equation}\nOn the other hand,\n\\begin{equation}\n \\sum_{t\\in \\I_1\\cup\\I_2}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2\\geq \\frac{|\\mclI_{1}||\\mclI_{2}|}{|\\mclI|}\\|\\Omega^*_{k-1} - \\Omega^*_k\\|_F^2\\geq \\frac{1}{2}\\min\\{|\\I_1|,|\\I_2|\\}\\|\\Omega^*_{k-1} - \\Omega^*_k\\|_F^2.\n\\end{equation}\nSuppose $|\\I_1|\\geq |\\I_2|$, then the inequality above leads to\n\\begin{equation*}\n \\Delta \\kappa^2\\leq \\frac{C_1C_{\\gamma}}{c_X^4} \\|X\\|^4_{\\psi_2} p^{2}\\log(np),\n\\end{equation*}\nwhich is contradictory to the assumption on $\\Delta$. Therefore, $|\\I_1|<|\\I_2|$ and we have \n\\begin{equation}\n s-\\eta_k = |\\I_1|\\leq CC_{\\gamma} \\frac{\\|X\\|_{\\psi_2}^4}{c_X^4\\|\\Omega_k^* - \\Omega_{k-1}^*\\|_F^2}p^2\\log(np).\n\\end{equation}\nThe bound for $e - \\eta_{k+1}$ can be proved similarly.\n\\eprf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bnlem[Three or more change points]\nSuppose the assumptions in \\Cref{assp:DCDP_covariance} hold. Then with probability at least $1-(np)^{-3}$, there is no interval $\\widehat { \\mathcal P}$ containing three or more true change points. \n \\enlem \n\\bprf\nWe prove by contradiction. Suppose $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $ \\{ \\eta_1, \\ldots, \\eta_M\\} \\subset \\I $ with $M\\ge 3$. Throughout the proof, $M$ is assumed to be a parameter that can potentially change with $n$. \nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, by relabeling $\\{ s_q\\}_{q=1}^ { \\mathcal Q} $ if necessary, let $ \\{ s_m\\}_{m=1}^M $ be such that \n $$ 0 \\le s_m -\\eta_m \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{for} \\quad 1 \\le m \\le M-1 $$ and that\n $$ 0\\le \\eta_M - s_M \\le { \\mathcal B_n^{-1} \\Delta } .$$ \n Note that these choices ensure that $ \\{ s_m\\}_{m=1}^M \\subset \\I . $\n \n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_1$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_1$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n\n \\node[color=black] at (-5,-0.3) {\\small $\\eta_2$};\n\\draw(-5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-4.5,-0.3) {\\small $s_2$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-4.5,0) }; \n \n\n \\node[color=black] at (-2.5,-0.3) {\\small $\\eta_3$};\n\\draw(-2.5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_3$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center}\n\n{\\bf Step 1.}\n Denote \n $$ \\mathcal I_ 1= ( s, s _1], \\quad \\I_m =(s_{m-1} , s_m] \\text{ for } 2 \\le m \\le M \\quad \\text{and} \\quad \\I_{M+1} = (s_M,e]. 
$$\n\nSuppose $\\I = (s,e]\\in \\widehat{\\mclP}$ and there are $M\\geq 3$ true change points $\\{\\eta_{q + i}\\}_{i\\in [M]}$ inside $\\I$, and denote\n\\begin{equation*}\n \\I_1 = (s, \\eta_{q + 1}],\\ \\I_m = (\\eta_{q + m - 1},\\eta_{q + m}],\\ \\I_{M+1} = (\\eta_{q + M},e].\n\\end{equation*}\nThen by the definition of $\\widehat{\\mclP}$ and $\\widehat{\\Omega}_{\\I_m}$, it holds that\n\\begin{equation*}\n \\F(\\hat{\\Omega}_{\\I},\\I)\\leq \\sum_{i = 1}^{M+1} \\F(\\hat{\\Omega}_{\\I_i},\\I_i) + M\\gamma\\leq \\sum_{i = 1}^{M+1} \\F({\\Omega}_{\\I_i}^*,\\I_i) + M\\gamma,\n\\end{equation*}\nwhich implies that\n\\begin{align}\n & \\sum_{t\\in \\I}{\\rm Tr}(\\widehat{\\Omega}_{\\I}^\\top (X_tX_t^\\top)) - |\\I|\\log|\\widehat{\\Omega}_{\\I}| \\nonumber \\\\\n \\leq & \\sum_{i=1}^{M+1} \\sum_{t\\in \\I_i}{\\rm Tr}(({\\Omega}_{\\I_i}^*)^\\top (X_tX_t^\\top)) - \\sum_{i=1}^{M+1}|\\I_i|\\log|{\\Omega}_{\\I_i}^*| + M\\gamma.\n\\end{align}\nBy Taylor expansion and \\Cref{lem:estimation covariance} we have\n\\begin{align}\n cc_X^2\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 &\\leq M\\gamma + \\sum_{i=1}^{M+1}|\\I_i|{\\rm Tr}[(\\Omega_{\\I_i}^* - \\widehat{\\Omega}_{\\I})^\\top (\\widehat{\\Sigma}_{\\I_i} - \\Sigma^*_{\\I_i})] \\nonumber \\\\\n &\\leq M\\gamma + C\\|X\\|_{\\psi_2}^2 p\\log^{\\frac{1}{2}}(np)\\left[ \\sum_{i=1}^{M+1} \\sqrt{|\\mclI_{i}|}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_{i}\\|_F \\right] \\nonumber \\\\\n &\\leq M\\gamma + C\\|X\\|_{\\psi_2}^2 p\\log^{\\frac{1}{2}}(np) \\sqrt{\\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2}.\n\\end{align}\nThe inequality above implies that\n\\begin{equation}\n \\sum_{t\\in \\mclI}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2 \\leq \\frac{C_1}{c_X^2}\\left[M\\gamma + \\frac{1}{c_X^2}\\|X\\|^4_{\\psi_2} p^{2}\\log(np)\\right].\n\\end{equation}\nOn the other hand, for each $i\\in [M]$, we have\n\\begin{equation}\n \\sum_{t\\in \\I_i\\cup\\I_{i +1}}\\|\\widehat{\\Omega}_{\\I} - \\Omega^*_t\\|_F^2\\geq \\frac{|\\mclI_{i}||\\mclI_{i + 1}|}{|\\mclI|}\\|\\Omega^*_{\\eta_{q + i + 1}} - \\Omega^*_{\\eta_{q + i}}\\|_F^2.\n\\end{equation}\nIn addition, for each $i\\in \\{2,\\cdots,M-1\\}$, by definition, it holds that $\\min\\{|\\I_i|,|\\I_{i + 1}|\\}\\geq \\Delta$. Therefore, we have\n\\begin{align*}\n (M-2)\\Delta \\kappa^2\\leq \\frac{C_1}{c_X^2}\\left[M\\gamma + \\frac{1}{c_X^2}\\|X\\|^4_{\\psi_2} p^{2}\\log(np)\\right].\n\\end{align*}\nSince $M\/(M-2)\\leq 3$ for any $M\\geq 3$, it holds that\n\\begin{equation}\n \\Delta \\leq CC_{\\gamma} \\frac{\\|X\\|_{\\psi_2}^4}{c_X^4\\|\\Omega_k^* - \\Omega_{k-1}^*\\|_F^2}p^2\\log(np),\n\\end{equation}\nwhich contradicts the assumption on $\\Delta$, and the proof is complete.\n\\eprf\n\n\n\n\n\n \\bnlem[Two consecutive intervals]\n \\label{lem:cov two consecutive interval}\n Under \\Cref{assp:DCDP_covariance} and the choice that $$\\gamma\\geq C_{\\gamma}\\|X\\|_{\\psi_2}^4\\max_{k\\in [K+1]}\\|\\Omega_{\\eta_k}^*\\|_2^2 p^2\\log(np),$$ with probability at least $1-(np)^{-3}$, there are no two consecutive intervals $\\I_1= (s,t ] \\in \\widehat{\\mathcal P}$, $\\I_2=(t, e] \\in \\widehat{\\mathcal P}$ such that $\\I_1 \\cup \\I_2$ contains no change points. \n \\enlem \n \\begin{proof} \n We prove by contradiction. Suppose that $\\I_1,\\I_2\\in \\widehat{\\mclP}$ and\n $$ \\I : =\\I_1\\cup \\I_2 $$\n contains no change points. 
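For ease of reference, recall from \\Cref{lem:cov one change deviation bound} the goodness-of-fit function used throughout this subsection,\n$$ \\mclF(\\Omega, \\I) = L(\\Omega, \\I) = \\sum_{i\\in \\I}{\\rm Tr}[\\Omega^\\top (X_iX_i^\\top)] - |\\I|\\log|\\Omega| , $$\nwith $\\widehat{\\Omega}_{\\I} = \\argmin_{\\Omega\\in \\mathbb{S}_+}L(\\Omega,\\I)$; the argument below only compares these quantities on $\\I_1$, $\\I_2$ and $\\I$.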
\n By the definition of $\\widehat{\\mclP}$ and $\\widehat{\\Omega}_{\\I}$, it holds that\n $$ \\F(\\widehat{\\Omega}_{\\I_1},\\I_1) + \\F(\\widehat{\\Omega}_{\\I_2},\\I_2) + \\gamma \\leq \\F(\\widehat{\\Omega}_{\\I},\\I)\\leq \\F({\\Omega}_{\\I}^*,\\I) . $$\n By \\Cref{lem:cov no cp}, it follows that \n\\begin{align*} \n\\F({\\Omega}^*_{\\I_1}, \\mclI_1)\\leq & \\F(\\widehat{\\Omega}_{\\I_1}, \\mclI_1) + C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np),\n \\\\\n\\F({\\Omega}^*_{\\I_2}, \\mclI_2)\\leq & \\F(\\widehat{\\Omega}_{\\I_2}, \\mclI_2) + C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np) .\n \\end{align*}\n So\n $$\\F({\\Omega}^*_{\\I_1}, \\mclI_1) +\\F({\\Omega}^*_{\\I_2}, \\mclI_2) -2C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np) +\\gamma \\leq \\F({\\Omega}_{\\I}^*,\\I). $$\n Since $\\I$ does not contain any change points, ${\\Omega}^*_{\\I_1} = {\\Omega}^*_{\\I_2} = {\\Omega}^*_{\\I}$, and it follows that \n $$ \\gamma \\leq 2C\\frac{\\|X\\|_{\\psi_2}^4}{c_X^2} p^2\\log(np).$$\n This is a contradiction when $C_\\gamma> 2C$.\n \\end{proof} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsection{High-dimensional regression}\n\n\nThroughout this section, unless specified otherwise, for the output of \\Cref{algorithm:DCDP_divide}, we always set the loss function $\\mclF(\\cdot, \\cdot)$ to be\n\\begin{equation}\n\\mathcal F (\\beta, \\I) : = \\begin{cases} \n \\sum_{i \\in \\I } (y_i - X_i^\\top \\beta) ^2 &\\text{if } |\\I|\\ge C_\\mclF { \\mathfrak{s} } \\log(np) ,\n \\\\ \n 0 &\\text{otherwise,}\n \\end{cases}\n\\end{equation} \nwhere $C_{\\mclF}$ is a universal constant which is larger than $C_s$, the constant in sample size in \\Cref{lemma:interval lasso} and \\Cref{lemma:consistency}.\n\n\n\n \\bnprop\\label{prop:regression local consistency}\n Suppose \\Cref{assp:dcdp_linear_reg} holds. Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP_divide} with the loss function $\\mclF$ defined above.\n\nSuppose that $\\gamma \\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$. Then with probability at least $1 - C n ^{-3}$, the following conditions hold.\n\t\\begin{itemize}\n\t\t\\item [(i)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing one and only one true \n\t\t change point $ \\eta_k $, it must be the case that\n $$\\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(np) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } .$$\n\t\\item [(ii)] For each interval $ \\I = (s, e] \\in \\widehat{\\mathcal{P}}$ containing exactly two true change points, say $\\eta_ k < \\eta_ {k+1} $, it must be the case that\n\\begin{align*} \n \\eta_k -s \\lesssim \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(np) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } \\ \\text{and} \\ \n e-\\eta_{k+1} \\lesssim \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(np) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } . 
\n \\end{align*} \n\t\t \n\\item [(iii)] No interval $\\I \\in \\widehat{\\mathcal{P}}$ contains strictly more than two true change points; and \n\n\n\t\\item [(iv)] For all consecutive intervals $ \\I_1 $ and $ \\I_2 $ in $\\widehat{ \\mathcal P}$, the interval \n\t\t$ \\I_1 \\cup \\I_2 $ contains at least one true change point.\n\t\t\t\t\n\t \\end{itemize}\n\\enprop \n\n \n\n\\bnprop\\label{prop:regression change points partition size consistency}\nSuppose \\Cref{assp:dcdp_linear_reg} holds. Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP_divide}. Suppose \n$\\gamma \\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$ for sufficiently large constant $C_\\gamma$. Then\n with probability at least $1 - C n ^{-3}$, $| \\widehat { \\mathcal P} | =K $.\n\\enprop\n\n\n\n\\begin{proof}[Proof of \\Cref{prop:regression change points partition size consistency}] \nDenote $\\mathfrak{G} ^*_n = \\sum_{ i =1}^n (y_i - X_i^\\top \\beta^*_i)^2$. Given any collection $\\{t_1, \\ldots, t_m\\}$, where $t_1 < \\cdots < t_m$, and $t_0 = 0$, $t_{m+1} = n$, let \n\t\\begin{equation}\\label{regression eq-sn-def}\n\t\t { \\mathfrak{G} } _n(t_1, \\ldots, t_{m}) = \\sum_{k=1}^{m} \\sum_{ i = t_k +1}^{t_{k+1}} \\mclF(\\widehat{\\beta} _ {(t_{k}, t_{k+1}]}, (t_{k}, t_{k+1}]). \n\t\\end{equation}\nFor any collection of time points, when defining \\eqref{regression eq-sn-def}, the time points are sorted in an increasing order.\n\nLet $\\{ \\widehat \\eta_{k}\\}_{k=1}^{\\widehat K}$ denote the change points induced by $\\widehat {\\mathcal P}$. Suppose we can justify that \n\t\\begin{align}\n\t\t { \\mathfrak{G} } ^*_n + K\\gamma \\ge & { \\mathfrak{G} } _n(s_1,\\ldots,s_K) + K\\gamma - C_1 ( K +1) { \\mathfrak{s} } \\log(np) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:regression K consistency step 1} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n (\\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } ) +\\widehat K \\gamma - C_1 ( K +1) { \\mathfrak{s} } \\log(np) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:regression K consistency step 2} \\\\ \n\t\t\\ge & { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) + \\widehat K \\gamma - C_1 ( K +1) { \\mathfrak{s} } \\log(np)- C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\label{eq:regression K consistency step 3}\n\t\\end{align}\n\tand that \n\t\\begin{align}\\label{eq:regression K consistency step 4}\n\t\t { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\le C_2 (K + \\widehat{K} + 2) { \\mathfrak{s} } \\log(np) .\n\t\\end{align}\n\tThen it must hold that $| \\hatp | = K$, as otherwise if $\\widehat K \\ge K+1 $, then \n\t\\begin{align*}\n\t\tC _2 (K + \\widehat{K} + 2) { \\mathfrak{s} } \\log(np) & \\ge { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n ( \\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K ) \\\\\n\t\t& \\ge (\\widehat K - K)\\gamma -C_1 ( K +1) { \\mathfrak{s} } \\log(np) - C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } .\n\t\\end{align*} \n\tTherefore due to the assumption that $| \\hatp| =\\widehat K\\le 3K $, it holds that \n\t\\begin{align} \\label{eq:regression Khat=K} \n\t\tC_2 (4K + 2) { \\mathfrak{s} } \\log(np) + C_1(K+1) { \\mathfrak{s} } \\log(np) +C_1\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } \\ge (\\widehat K - 
K)\\gamma \\geq \\gamma,\n\t\\end{align}\n\tNote that \\eqref{eq:regression Khat=K} contradicts the choice of $\\gamma$.\n\n\n\\\n\\\\\n{\\bf Step 1.} Note that \\eqref{eq:regression K consistency step 1} is implied by \n\t\\begin{align}\\label{eq:regression step 1 K consistency} \n\t\t\\left| \t { \\mathfrak{G} } ^*_n - { \\mathfrak{G} } _n(s_1,\\ldots,s_K) \\right| \\le C_3(K+1) \\lambda^2 + C_3\\sum_{k\\in [K]}\\kappa_k^2 { \\mathcal B_n^{-1} \\Delta } ,\n\t\\end{align}\n\twhich is an immediate consequence of \\Cref{lam:regression one change deviation bound}. \n\t\\\n\t\\\\\n\t\\\\\n\t{\\bf Step 2.} Since $\\{ \\widehat \\eta_{k}\\}_{k=1}^{\\widehat K}$ are the change points induced by $\\widehat {\\mathcal P}$, \\eqref{eq:regression K consistency step 2} holds because $\\hatp$ is a minimizer.\n\\\\\n\\\\\n\t{\\bf Step 3.}\nFor every $ \\I =(s,e]\\in \\hatp$, by \\Cref{prop:regression local consistency}, we know that with probability at least $1 - (np)^{-5}$, $\\I$ contains at most two change points. We only show the proof for the two-change-point case as the other case is easier. Denote\n\t\\[\n\t\t \\I = (s ,\\eta_{q}]\\cup (\\eta_{q},\\eta_{q+1}] \\cup (\\eta_{q+1} ,e] = { \\mathcal J } _1 \\cup { \\mathcal J } _2 \\cup { \\mathcal J } _{3},\n\t\\]\nwhere $\\{ \\eta_{q},\\eta_{q+1}\\} =\\I \\, \\cap \\, \\{\\eta_k\\}_{k=1}^K$. \n\nFor each $m\\in\\{1,2,3\\}$, by \\Cref{lam:regression one change deviation bound}, it holds that\n\\[\n\\sum_{ i \\in { \\mathcal J } _m }(y_ i - X_i^\\top \\widehat{\\beta} _{{ \\mathcal J } _m} )^2 \\leq \\sum_{ i \\in { \\mathcal J } _ m } (y_ i - X_i^\\top \\beta^*_i )^2 + C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np).\n\\]\nBy \\Cref{lam:regression goodness of fit J superset of I}, we have\n\\[\n\\sum_{ i \\in { \\mathcal J } _m }(y_ i - X_i^\\top\\widehat {\\beta} _\\I )^2 \\ge \\sum_{ i \\in { \\mathcal J } _ m } (y_ i - X_i^\\top \\beta_i^* )^2 - C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np).\n\\]\n Therefore the above inequality implies that \n\t\\begin{align} \\label{eq:regression K consistency step 3 inequality 3} \\sum_{i \\in \\I } (y_ i - X_i^*\\widehat {\\beta} _\\I )^2 \\ge \\sum_{m=1}^{3} \\sum_{ i \\in { \\mathcal J } _m }(y_ i - X_i^\\top\\widehat {\\beta} _{{ \\mathcal J } _m} )^2 -C\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np). \n\t\\end{align}\nNote that \\eqref{eq:regression K consistency step 3} is an immediate consequence of \\eqref{eq:regression K consistency step 3 inequality 3}.\n\n\t{\\bf Step 4.}\nFinally, to show \\eqref{eq:regression K consistency step 4}, let $\\widetilde { \\mathcal P}$ denote the partition induced by $\\{\\widehat \\eta_{1},\\ldots, \\widehat \\eta_{\\widehat K } , \\eta_1,\\ldots,\\eta_K\\}$. Then \n$| \\widetilde { \\mathcal P} | \\le K + \\widehat K+2 $ and that $\\beta^*_i$ is unchanged in every interval $\\I \\in \\widetilde { \\mathcal P}$. \n\tSo \\Cref{eq:regression K consistency step 4} is an immediate consequence of \\Cref{lam:regression one change deviation bound}.\n\\end{proof} \n\n\n\n\n\n\n\\begin{proof}[Proof of \\Cref{cor:regression local refinement}]\nFor each $k\\in [K]$, let $\\hat{\\beta}_t = \\hat{\\beta}^{(1)}$ if $s_kC_s { \\mathfrak{s} } \\log(np)$. Similarly, since $|\\I_2| > \\Delta> C_s { \\mathfrak{s} } \\log(np)$, we have\n\t\\[\n\t\t\\sqrt{\\sum_{t \\in \\I_2}\\|\\delta_{\\I_2}^{\\top}X_t\\|_2^2} \\geq {c_1\\sqrt{|\\I_2|}} \\|\\delta_{\\I_2}\\|_2 - c_2 \\sqrt{\\log(p)} \\|\\delta_{\\I_2}(S^c)\\|_1.\n\t\\]\nDenote $n_0 = C_s { \\mathfrak{s} } \\log(np)$. 
We first bound the terms with $\\|\\cdot\\|_1$. Note that\n\t\\begin{align*}\n\t&\\sum_{i = 1}^3\\sum_{j\\in S^c}|(\\delta_{\\I_i})_j|\n\t\t\\leq \\sqrt{3}\\sqrt{\\sum_{i = 1}^3 (\\sum_{j \\in S^c} |(\\delta_{\\I_i})_j|)^2 } \\\\ \n\t\t\\leq & \\sqrt{3}\\sqrt{\\sum_{i = 1}^3{\\frac{|\\I_i|}{n_0}} (\\sum_{j \\in S^c}|(\\delta_{\\I_i})_j| )^2}\t\n\t\t\\leq \\sqrt{\\frac{3}{n_0}} \\sum_{i = 1}^3 \\sqrt{|\\I_i|}(\\sum_{j \\in S^c}|(\\delta_{\\I_i})_j| ) \\\\\n\t\t\\leq &\n\t\t\\sqrt{\\frac{3}{n_0}} \\sum_{j \\in S^c} \\sqrt{\\sum_{t = s_k + 1}^{e_k} (\\delta_t)_j^2}\n\t\t\\leq \\frac{3\\sqrt{3}}{\\sqrt{n_0}}\\sum_{j \\in S} \\sqrt{\\sum_{t = s_k + 1}^{e_k} (\\delta_t)_j^2}\n\t\t\\\\\n\t\t\\leq & \\frac{3\\sqrt{3}}{\\sqrt{n_0}} \\sqrt{ { \\mathfrak{s} } \\sum_{j \\in S} \\sum_{t = s_k + 1}^{e_k} (\\delta_t)_j^2} \\leq \\frac{c}{ \\sqrt{\\log(np)}} \\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2}.\n\t\\end{align*}\nTherefore,\n\t\\begin{align*}\n\t\t& c_1\\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} - \\frac{c_2}{ \\sqrt{\\log(np)}} \\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} \\\\\n\t\t\\leq & \\sum_{i = 1}^3 c_1 \\|\\delta_{I_i}\\|_2 - \\frac{c_2}{ \\sqrt{\\log(np)}} \\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} \\leq \\sqrt{3} \\sqrt{ \\sum_{t = s_k + 1}^{e_k} \\|\\delta_t^{\\top} X_t\\|_2^2 } \\\\\n\t\t\\leq & \\frac{3\\sqrt{\\zeta}}{\\sqrt{2}} { \\mathfrak{s} } ^{1\/4} \\left(\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2\\right)^{1\/4} \\leq \\frac{9\\zeta { \\mathfrak{s} } ^{1\/2}}{4c_1} + \\frac{c_1}{2}\\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2}\n\t\\end{align*}\n\twhere the third inequality follows from \\eqref{eq:reg local refine 5} and the fact that $\\sum_{i\\in S}\\sqrt{\\sum_{t=s_k + 1}^{e_k}(\\delta_t)_i^2}\\leq \\sqrt{s}\\sqrt{\\sum_{t=s_k + 1}^{e_k}\\|\\delta_t\\|_2^2}$. The inequality above implies that\n\t\\[\n\t\t\\frac{c_1}{4}\\sqrt{\\sum_{t = s_k + 1}^{e_k} \\|\\delta_t\\|_2^2} \\leq \\frac{9\\zeta { \\mathfrak{s} } ^{1\/2}}{4c_1}\n\t\\]\t\n\tTherefore,\n\t\\[\n\t\t\\sum_{t = s_k + 1}^{e_k}\\|\\widehat{\\beta}_t - \\beta^*_t\\|_2^2 \\leq 81\\zeta^2 { \\mathfrak{s} } \/c_1^4.\n\t\\]\n Recall that $\\beta^{(1)} = \\beta^*_{\\eta_k}$ and $\\beta^{(2)} = \\beta^*_{\\eta_k + 1}$. We have that\n\t\\[\n\t\t\\sum_{t = s_k + 1}^{e_k}\\|\\widehat{\\beta}_t - \\beta^*_t\\|_2^2 = |I_1| \\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2 + |I_2| \\|\\beta^{(2)} - \\widehat{\\beta}^{(1)}\\|_2^2 + |I_3| \\|\\beta^{(2)} - \\widehat{\\beta}^{(2)}\\|_2^2.\n\t\\]\nSince $\\eta_k - s_k \\geq\\frac{1}{3}\\Delta$ as is shown in \\Cref{tmp_eq:regression I_1}. we have that\n\t\\[\n\t\t\\Delta\\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2\/3 \\leq |I_1| \\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2 \\leq \\frac{C_1 C_{\\zeta}^2\\Delta \\kappa^2} { { \\mathfrak{s} } K \\sigma^2_{\\epsilon} { \\mathcal B_n} } \\leq c_3 \\Delta \\kappa^2,\n\t\\]\n\twhere $1\/4 > c_3 > 0$ is an arbitrarily small positive constant. 
Therefore we have\n\t\\[\n\t\t\\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2^2 \\leq 3c_3 \\kappa^2.\n\t\\]\nIn addition we have\n\t\\[\n\t\t\\|\\beta^{(2)} - \\widehat{\\beta}^{(1)}\\|_2 \\geq \\|\\beta^{(2)} - \\beta^{(1)}\\|_2 - \\|\\beta^{(1)} - \\widehat{\\beta}^{(1)}\\|_2 \\geq \\kappa\/2.\n\t\\]\t\n\tTherefore, it holds that \t\n\t\\[\n\t\t\\kappa^2 |I_2|\/4 \\leq |I_2| \\|\\beta^{(2)} - \\widehat{\\beta}^{(1)}\\|_2^2 \\leq C_2 { \\mathfrak{s} } \\zeta^2,\n\t\\]\n\twhich implies that \n\t\\[\n\t\t|\\widehat{\\eta}_k - \\eta_k| \\leq \\frac{4C_2 { \\mathfrak{s} } \\zeta^2}{\\kappa^2},\n\t\\]\nwhich gives the bound we want. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Technical lemmas}\n\nThroughout this section, let $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP_divide}.\n \n \n \n\n\\bnlem\nLet $\\mathcal{S}$ be any linear subspace in $\\mathbb{R}^n$ and $\\mathcal{N}_{1\/4}$ be a $1\/4$-net of $\\mathcal{S} \\cap B(0, 1)$, where $B(0, 1)$ is the unit ball in $\\mathbb{R}^n$. For any $u \\in \\mathbb{R}^n$, it holds that\n\t\\[\n\t\t\\sup_{v \\in \\mathcal{S} \\cap B(0, 1)} \\langle v, u \\rangle \\leq 2 \\sup_{v \\in \\mathcal{N}_{1\/4}} \\langle v, u \\rangle,\n\t\\]\n\twhere $\\langle \\cdot, \\cdot \\rangle$ denotes the inner product in $\\mathbb{R}^n$.\n\\enlem\n\n\\begin{proof}\nDue to the definition of $\\mathcal{N}_{1\/4}$, it holds that for any $v \\in \\mathcal{S} \\cap B(0, 1)$, there exists a $v_k \\in \\mathcal{N}_{1\/4}$, such that $\\|v - v_k\\|_2 < 1\/4$. Therefore,\n\t\\begin{align*}\n\t\t\\langle v, u \\rangle = \\langle v - v_k + v_k, u \\rangle = \\langle x_k, u \\rangle + \\langle v_k, u \\rangle \\leq \\frac{1}{4} \\langle v, u \\rangle + \\frac{1}{4} \\langle v^{\\perp}, u \\rangle + \\langle v_k, u \\rangle,\n\t\\end{align*}\n\twhere the inequality follows from $x_k = v - v_k = \\langle x_k, v \\rangle v + \\langle x_k, v^{\\perp} \\rangle v^{\\perp}$. Then we have\n\t\\[\n\t\t\\frac{3}{4}\\langle v, u \\rangle \\leq \\frac{1}{4} \\langle v^{\\perp}, u \\rangle + \\langle v_k, u \\rangle.\n\t\\]\n\tIt follows from the same argument that \n\t\\[\n\t\t\\frac{3}{4}\\langle v^{\\perp}, u \\rangle \\leq \\frac{1}{4} \\langle v, u \\rangle + \\langle v_l, u \\rangle,\n\t\\]\n\twhere $v_l \\in \\mathcal{N}_{1\/4}$ satisfies $\\|v^{\\perp} - v_l\\|_2 < 1\/4$. Combining the previous two equation displays yields\n\t\\[\n\t\t\\langle v, u \\rangle \\leq 2 \\sup_{v \\in \\mathcal{N}_{1\/4}} \\langle v, u \\rangle,\n\t\\]\n\tand the final claim holds.\n\\end{proof}\n\n \n\n\\Cref{lem:deviation piecewise constant} is an adaptation of Lemma 3 in \\cite{wang2021_jmlr}.\n\n\\bnlem\\label{lem:deviation piecewise constant}\n\tGiven any interval $I = (s, e] \\subset \\{1, \\ldots, n\\}$, let $\\mclR_m := \\{v \\in \\mathbb{R}^{(e-s)}| \\|v\\|_2 = 1, \\sum_{t = 1}^{e-s-1} \\mathbf{1}\\{v_i \\neq v_{i+1}\\} = m\\}$. 
Then for data generated from \Cref{assp:dcdp_linear_reg}, it holds that for any $\delta > 0$, $i \in \{1, \ldots, p\}$, \n\t\[\n\t\t\mathbb{P}\left\{\sup_{v\in \mclR_m} \left|\sum_{t = s+1}^e v_t \epsilon_t (X_t)_i\right| > \delta \right\} \leq C(e-s-1)^m 9^{m+1} \exp \left\{-c \min\left\{\frac{\delta^2}{4C_x^2}, \, \frac{\delta}{2C_x \|v\|_{\infty}}\right\}\right\}.\n\t\]\n\enlem\n\n\begin{proof}\nAny $v \in \mathbb{R}^{(e-s)}$ satisfying $\sum_{t = 1}^{e-s-1}\mathbbm{1}\{v_t \neq v_{t+1}\} = m$ is determined by a vector in $\mathbb{R}^{m+1}$ and a choice of $m$ out of $(e-s-1)$ points. Therefore we have\n\t\begin{align*}\n\t\t& \mathbb{P}\left\{\sup_{\substack{v \in \mathbb{R}^{(e-s)}, \, \|v\|_2 = 1\\ \sum_{t = 1}^{e-s-1} \mathbf{1}\{v_t \neq v_{t+1}\} = m}} \left|\sum_{t = s+1}^e v_t \epsilon_t (X_t)_i\right| > \delta \right\} \\\n\t\t\leq & {(e-s-1) \choose m} 9^{m+1} \sup_{v \in \mathcal{N}_{1\/4}}\t\mathbb{P}\left\{\left|\sum_{t = s+1}^e v_t \epsilon_t (X_t)_i\right| > \delta\/2 \right\} \\\n\t\t\leq & {(e-s-1) \choose m} 9^{m+1} C\exp \left\{-c \min\left\{\frac{\delta^2}{4C_x^2}, \, \frac{\delta}{2C_x \|v\|_{\infty}}\right\}\right\} \\\n\t\t\leq & C(e-s-1)^m 9^{m+1} \exp \left\{-c \min\left\{\frac{\delta^2}{4C_x^2}, \, \frac{\delta}{2C_x \|v\|_{\infty}}\right\}\right\}.\n\t\end{align*}\n\n\end{proof}\n\n\bnlem\label{lam:regression goodness of fit J superset of I}\nLet $\I\subset [1,T]$ be any interval that contains no change point. Then under \Cref{assp:dcdp_linear_reg}, for any interval ${ \mathcal J } \supset \I$, it holds with probability at least $1 - (np)^{-5}$ that\n\begin{equation*}\n \mclF(\beta^*_{\I},\I)\leq \mclF(\widehat{\beta}_{ \mathcal J } ,\I) + C(\sigma_{\epsilon}^2\vee 1) { \mathfrak{s} } \log(np).\n\end{equation*}\n \enlem\n\begin{proof} \n\noindent \textbf{Case 1.} If $ |\I| < C_{\mclF} { \mathfrak{s} } \log(n p)$, then by the definition of $\mclF(\beta, \mclI)$, we have $\mclF(\beta^*_{\I},\I)= \mclF(\widehat{\beta}_{ \mathcal J } ,\I) =0$ and the inequality holds automatically.\n \n\noindent \textbf{Case 2.} If \n\t\begin{equation}\label{eq-lem16-i-cond} \n\t\t|\I| \geq C_{\mclF} { \mathfrak{s} } \log(n p),\n\t\end{equation}\n\tthen letting $\delta_\I = \beta^*_\I -\widehat \beta_{ \mathcal J } $ and considering the high-probability event given in \Cref{lem:restricted eigenvalue}, we have\n\t\begin{align}\n\t\t& \sqrt{\sum_{t \in \I} (X_t^{\top} \delta_\I)^2} \geq {c_1'\sqrt{|\I|}} \|\delta_\I\|_2 - c_2'\sqrt{\log(p)} \|\delta_\I\|_1 \nonumber \\\n\t\t= & {c_1'\sqrt{|\I|}} \|\delta_\I\|_2 - c_2'\sqrt{\log(p)} \|(\delta_\I)_{S}\|_1 - c_2'\sqrt{\log(p)} \|(\delta_\I)_{S^c}\|_1 \nonumber \\\n\t\t\geq & c_1'\sqrt{|\I|} \|\delta_\I\|_2 - c_2'\sqrt{ { \mathfrak{s} } \log(p)} \|\delta_\I\|_2 - c_2'\sqrt{\log(p)} \|(\delta_\I)_{S^c}\|_1 \nonumber \\\n\t\t\geq & \frac{c_1'}{2}\sqrt{|\I|} \|\delta_\I\|_2 - c_2'\sqrt{\log(p)} \|(\widehat{\beta}_{ \mathcal J } )_{S^c}\|_1 \geq c_1\sqrt{|\I|} \|\delta_\I\|_2 - c_2\sqrt{\log(p)} \frac{ { \mathfrak{s} } \lambda}{\sqrt{|\I|}}, \label{eq-lem16-pf-1}\n\t\end{align} \n\twhere the last inequality follows from \Cref{lemma:interval lasso} and the assumption that $(\beta^*_t)_i = 0$, for all $t\in
[T]$ and $i\\in S^c$. Then by the fact that $(a - b)^2\\geq \\frac{1}{2}a^2 - b^2$ for all $a,b\\in \\mathbb{R}$, it holds that\n\t\\begin{equation}\n\t \\sum_{t \\in \\I} (X_t^{\\top} \\delta_\\I)^2\\geq \\frac{c_1^2}{2}{|\\I|} \\|\\delta_\\I\\|_2^2 - \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{|\\I|}.\n\t\\end{equation}\n\tNotice that\n\t\\begin{align*}\n\t & \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\beta^*_\\I)^2 - \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\widehat{\\beta}_{ \\mathcal J } )^2 = 2 \\sum_{t \\in \\I} \\epsilon_t X_t^{\\top}\\delta_\\I - \\sum_{t \\in \\I} (X_t^{\\top}\\delta_\\I)^2 \\\\\n\t\\leq & 2\\|\\sum_{t \\in \\I } X_t \\epsilon_t \\|_{\\infty } \\left( \\sqrt { { \\mathfrak{s} } } \\| (\\delta_\\I)_{S} \\|_2 + \\| (\\widehat \\beta_{ \\mathcal J } )_{S^c} \\|_1 \\right)- \\sum_{t \\in \\I} (X_t^{\\top}\\delta_\\I)^2.\n\t\\end{align*}\n\tSince for each $t$, $\\epsilon_t$ is subgaussian with $\\|\\epsilon_t\\|_{\\psi_2}\\leq \\sigma_{\\epsilon}$ and for each $i\\in [p]$, $(X_t)_i$ is subgaussian with $\\|(X_t)_i\\|_{\\psi_2}\\leq C_x$, we know that $(X_t)_i\\epsilon_t$ is subexponential with $\\|(X_t)_i\\epsilon_t\\|_{\\psi_1}\\leq C_x\\sigma_{\\epsilon}$. Therefore, by Bernstein's inequality (see, e.g., Theorem 2.8.1 in \\cite{vershynin2018high}) and a union bound, for $\\forall u\\geq 0$ it holds that\n\t\\begin{equation*}\n\t \\mathbb{P}(\\|\\sum_{t \\in \\I } X_t \\epsilon_t \\|_{\\infty }> u)\\leq 2p\\exp(-c\\min\\{\\frac{u^2}{|\\I|C_x^2\\sigma_{\\epsilon}^2}, \\frac{u}{C_x\\sigma_{\\epsilon}}\\}).\n\t\\end{equation*}\n\tTake $u = cC_x\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(np)}$, then by the fact that $|\\I|\\geq C_{\\mclF} { \\mathfrak{s} } \\log(np)$, it follows that with probability at least $1 - (np)^{-7}$,\n\t\\begin{equation*}\n\t \\|\\sum_{t \\in \\I } X_t \\epsilon_t \\|_{\\infty } \\leq CC_x\\sigma_{\\epsilon}\\sqrt{|\\I|\\log(np)}\\leq \\lambda \\sqrt{|\\I|},\n\t\\end{equation*}\n\twhere we use the fact that $\\lambda=C_{\\lambda}(\\sigma_{\\epsilon}\\vee 1)\\sqrt{\\log(np)}$. Therefore, we have\n\t\\begin{align*}\n\t& \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\beta^*_\\I)^2 - \\sum_{t \\in \\I} (y_t - X_t^{\\top} \\widehat{\\beta}_{ \\mathcal J } )^2 \\\\\n\t\\leq & 2\\lambda\\sqrt{|\\I| { \\mathfrak{s} } }\\|\\delta_\\I\\|_2 + 2\\lambda\\sqrt{|\\I|}\\cdot \\frac{\\lambda { \\mathfrak{s} } }{ \\sqrt{|\\I|}} - \\frac{c_1^2 |\\I|}{2}\\|\\delta_\\I\\|_2^2 + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{|\\I|}\\\\ \n\t\\leq & 2\\lambda\\sqrt{|\\I| { \\mathfrak{s} } }\\|\\delta_\\I\\|_2 + {2\\lambda^2 { \\mathfrak{s} } } - \\frac{c_1^2 |\\I|}{2}\\|\\delta_\\I\\|_2^2 + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{|\\I|} \\\\\n\t\\leq & \\frac{4}{c_1^2}\\lambda^2 { \\mathfrak{s} } + \\frac{c_1^2}{4}|\\I| \\|\\delta_\\I\\|^2_2 + {2\\lambda^2 { \\mathfrak{s} } } - \\frac{c_1^2}{2}|\\I|\\|\\delta_\\I\\|^2_2 + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{C_{\\mclF} { \\mathfrak{s} } \\log(np)} \\\\\n\t\\leq & {c_3\\lambda^2 { \\mathfrak{s} } } + {2\\lambda^2 { \\mathfrak{s} } } + \\frac{c_2^2 \\lambda^2 { \\mathfrak{s} } ^2\\log(p)}{C_{\\mclF} { \\mathfrak{s} } \\log(np)}\\\\\n\t\\leq & c_4\\lambda^2 { \\mathfrak{s} } .\n\t\\end{align*}\n\twhere the third inequality follows from $2ab \\leq a^2 + b^2$. 
\n\n\\end{proof} \n\n\n\n\n\n\n\n\n\n\n \n \n \n \\bnlem \\label{lemma: regression dcdp one change point}\nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \nLet $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $\\I$ contains exactly one true change point $ \\eta_k $. \nSuppose $\\gamma \\geq C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2 $. Then with probability at least $1- n^{-3}$, it holds that \n\\begin{equation*}\n \\min\\{ \\eta_k -s ,e-\\eta_k \\} \\lesssim \\frac{\\sigma_{\\epsilon}^2\\vee 1 }{\\kappa^2 }\\bigg( { \\mathfrak{s} } \\log(np) +\\gamma \\bigg) + { \\mathcal B_n^{-1} \\Delta } .\n\\end{equation*}\n\\enlem\n\\begin{proof} \nIf either $ \\eta_k -s \\le C_\\mclF { \\mathfrak{s} } \\log(np) $ or $e-\\eta_k\\le C_\\mclF { \\mathfrak{s} } \\log(np) $, then \n$$ \\min\\{ \\eta_k -s ,e-\\eta_k \\} \\le C_\\mclF { \\mathfrak{s} } \\log(np) $$\nand there is nothing to show. So assume that \n$$ \\eta_k -s > C_\\mclF { \\mathfrak{s} } \\log(np) \\quad \\text{and} \\quad e-\\eta_k >C_\\mclF { \\mathfrak{s} } \\log(np) . $$\nBy event $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } )$, there exists $ s_ u \\in \\{ s_q\\}_{q=1}^ { \\mathcal Q} $ such that \n$$0\\le s_u - \\eta_k \\le { \\mathcal B_n^{-1} \\Delta } . $$\n\n\\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(-1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-3.0002,0) -- (-3,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n \\node[color=black] at (-3 ,-0.3) {\\small $e$};\n\\end{tikzpicture}\n\\end{center} \n\n{\\bf Step 1.} Denote \n$$ \\I_ 1 = (s,s_u] \\quad \\text{and} \\quad \\I_2 = (s_u, e] .$$\nSince \n$ \\eta_k-s \\ge C_\\mclF { \\mathfrak{s} } \\log(np) , $\nit follows that \n $ |\\I| \\ge C_\\mclF { \\mathfrak{s} } \\log(np) $ and $ |\\I_1| \\ge C_\\mclF { \\mathfrak{s} } \\log(np) $. Thus \n$$ \\mathcal F (\\I) = \\sum_{ i \\in \\I } (y_i - X_i^\\top \\widehat \\beta_\\I )^2 \\quad \\text{and} \\quad \\mathcal F (\\I_1) = \\sum_{ i \\in {\\I_1} } (y_i - X_i^\\top \\widehat \\beta_{\\I_1} )^2 .$$\nSince $\\I\\in \\widehat { \\mathcal P}$, it holds that \n\\begin{align}\\label{eq:regression one change basic inequality step 1} \\mathcal F (\\I) \\le \\mathcal F (\\I_1) + \\mathcal F (\\I_2) + \\gamma. \n\\end{align}\n\n{\\bf Case a.} Suppose $|\\I_2| C_\\mclF { \\mathfrak{s} } \\log(np) . $$\nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, let $ s_u, s_v$ be such that \n $\\eta_k \\le s_u \\le s_v \\le \\eta_{k+1} $ and that \n $$ 0 \\le s_u-\\eta_k \\le { \\mathcal B_n^{-1} \\Delta } , \\quad 0\\le \\eta_{k+1} - s_v \\le { \\mathcal B_n^{-1} \\Delta } . 
$$\n \\\n \\\\\n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_k$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_u$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n \n\n \\node[color=black] at (-2.3,-0.3) {\\small $\\eta_{k+1}$};\n\\draw(-2.3 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_v$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center} \n\\\n\\\\\n{\\bf Step 1.} Denote \n $$ \\mathcal I_ 1= ( s, s _u], \\quad \\I_2 =(s_u, s_v] \\quad \\text{and} \\quad \\I_3 = (s_v,e]. $$\n \\\n \\\\\nSince $ |\\I| \\ge \\eta_{k+1} -\\eta_k \\ge C_\\mclF { \\mathfrak{s} } \\log(np) $, \n$$ \\mathcal F(\\I ) = \\sum_{i \\in \\I }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 . $$\nSince \n $ |\\I_1| \\ge \\eta_k-s \\ge C_\\mclF { \\mathfrak{s} } \\log(np) , $ \nit follows that \n $$ \\mathcal F(\\I_1 ) = \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\widehat \\beta _{\\I_1} ) ^2 . $$\nIn addition since $ |\\I_1| \\ge C_\\mclF { \\mathfrak{s} } \\log(np) $, then \n \\begin{align*}\\mathcal F(\\I_1) = & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\widehat \\beta _{\\I_1} ) ^2 \n \\\\\n \\le & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\beta ^* _i ) ^2 + C_1 \\bigg\\{ \\frac{( \\eta_k-s) (s_u -\\eta_k ) }{ ( \\eta_k-s) + (s_u -\\eta_k ) } \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} \n \\\\\n \\le & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\beta ^* _i ) ^2 + C_1 \\bigg\\{ (s_u -\\eta_k ) \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} \n \\\\\n \\le & \\sum_{i \\in \\I_1 }(y_i - X_i^\\top \\beta ^* _ i ) ^2 + C_1 \\bigg\\{ { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} ,\n \\end{align*}\nwhere the first inequality follows from \\Cref{lam:regression one change deviation bound} and that $ \\kappa_{k} \\asymp \\kappa$.\n Similarly, since \n $ |\\I_2| \\ge \\Delta \/2 \\ge C_\\mclF { \\mathfrak{s} } \\log(np) , $ \nit follows that \n $$ \\mathcal F(\\I_2 ) = \\sum_{i \\in \\I_2 }(y_i - X_i^\\top \\widehat \\beta _{\\I_2} ) ^2 . 
$$\n Since $|\\I_2| \\ge C_\\mclF { \\mathfrak{s} } \\log(np) $ and $\\I_2$ contains no change points, by \\Cref{lam:regression one change deviation bound},\n \\begin{align*}\\mathcal F(\\I_2) \\le \\sum_{i \\in \\I_2 }(y_i - X_i^\\top \\beta ^* _ i ) ^2 + C_1 { \\mathfrak{s} } \\log(np).\n \\end{align*} \n \\\\\n \\\\\n {\\bf Step 2.} If $ |\\I_3 | \\ge C_\\mclF { \\mathfrak{s} } \\log(np) $, then \n \\begin{align*}\\mathcal F(\\I_3) = & \\sum_{i \\in \\I_3 }(y_i - X_i^\\top \\widehat \\beta _{\\I_3} ) ^2 \n \\\\\n \\le & \\sum_{i \\in \\I_3 }(y_i - X_i^\\top \\beta ^* _i ) ^2 + C_1 \\bigg\\{ \\frac{( \\eta_{k+1}-s_v) (e -\\eta_{k+1} ) }{ ( \\eta_{k+1}-s_v)+ (e -\\eta_{k+1} ) } \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} \n \\\\\n \\le & \\sum_{i \\in \\I_3 }(y_i - X_i^\\top \\beta ^* _i ) ^2 + C_1 \\bigg\\{ ( \\eta_{k+1}-s_v) \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} \n \\\\\n \\le & \\sum_{i \\in \\I_3 }(y_i - X_i^\\top \\beta ^* _ i ) ^2 + C_1 \\bigg\\{ { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} ,\n \\end{align*}\n where the first inequality follows from \\Cref{lam:regression one change deviation bound}{\\bf b} and that $ \\kappa_{k+1} \\asymp \\kappa$. \n If $|\\I_3| < C_\\mclF { \\mathfrak{s} } \\log(np) $, then \n$\\mathcal F(\\I_3) = 0 $.\n So \nboth cases imply that \n $$\\mathcal F(\\I_3) \\le \\sum_{i \\in \\I_3 }(y_i - X_i^\\top \\beta ^* _ i ) ^2 + C_1 \\bigg\\{ { \\mathcal B_n^{-1} \\Delta } \\kappa_k^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} .$$\n \\\n \\\\\n {\\bf Step 3.} Since $\\I \\in \\widehat{\\mathcal P} $, we have\n \\begin{align}\\label{eq:regression two change points local min} \\mathcal F(\\I ) \\le \\mathcal F(\\I_1 ) +\\mathcal F(\\I_2 )+\\mathcal F(\\I_3 ) + 2\\gamma. \n \\end{align} The above display and the calculations in {\\bf Step 1} and {\\bf Step 2} implies that\n\\begin{align} \n \\sum_{i \\in \\I }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 \n\\le \\sum_{i \\in \\I }(y_i - X_i^\\top \\beta_i^* ) ^2 \n + 3 C_1 \\bigg\\{ { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\}\n+2\\gamma .\n \\label{eq:regression two change points step 2 term 1}\n\\end{align}\nDenote \n$$ { \\mathcal J } _1 = (s,\\eta_k ], \\quad { \\mathcal J } _2=(\\eta_k, \\eta_{k} + \\eta_{k+1} ] \\quad \\text{and} \\quad { \\mathcal J } _3 = (\\eta_{k+1} , e] .$$\n\\Cref{eq:regression two change points step 2 term 1} gives \n\\begin{align} \n \\sum_{\\ell=1}^3 \\sum_{i \\in { \\mathcal J } _{\\ell} }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 \n \\le& \\sum_{\\ell=1}^3 \\sum_{i \\in { \\mathcal J } _ \\ell }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _\\ell } ) ^2 \n + 3 C_1 \\bigg\\{ { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + { \\mathfrak{s} } \\log(np) \\bigg\\} \n+2\\gamma \\label{eq:regression two change points first}\n\\end{align} \n\\\n\\\\\n{\\bf Step 4.} \nNote that for $\\ell\\in \\{1,2,3\\}$, \n\\begin{align*} \n\\| ( \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell }^* )_{S^c } \\|_1 = \\| ( \\widehat \\beta _\\I )_{S^c} \\|_{1} \n= \\| ( \\widehat \\beta _\\I -\\beta _\\I ^* )_{S^c} \\|_{1} \\le 3 \\| ( \\widehat \\beta _\\I -\\beta _\\I ^* )_{S } \\|_{1} \\le C_2 { \\mathfrak{s} } \\sqrt { \\frac{ \\log(np)}{| \\I|} } , \n\\end{align*} \nwhere the last two inequalities follows from \\Cref{lemma:interval lasso}. 
So\n\\begin{align} \\label{eq:regression two change step 3 term 1}\n\\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell}^* \\|_1 = \\| ( \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell }^* )_{S } \\|_1 +\\| ( \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell}^* )_{S^c } \\|_1 \\le \\sqrt { { \\mathfrak{s} } } \\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _\\ell}^* \\|_2 + C_2 { \\mathfrak{s} } \\sqrt { \\frac{ \\log(np)}{| \\I|} } .\n\\end{align} \nNote that by assumptions,\n$$|{ \\mathcal J } _1|\\ge C_\\mclF { \\mathfrak{s} } \\log(np) \\quad \\text{and} \\quad |{ \\mathcal J } _2 |\\ge C_\\mclF { \\mathfrak{s} } \\log(np) .$$\n So for $\\ell\\in \\{ 1, 2\\} $,\n it holds that\n \\begin{align}\\nonumber \n\t\t& \\sum_{i \\in { \\mathcal J } _\\ell } \\big\\{ X_i^\\top ( \\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell }) \\big\\} ^2 \n\t\t \\\\ \\nonumber \n\t\t \\ge& \\frac{c_x|{ \\mathcal J } _\\ell | }{16} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_3 \\log(p) \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_1 ^2 \n\t\t \\\\\\nonumber \n\t\t \\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{16} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_3' { \\mathfrak{s} } \\log(p) \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_ 2 ^2 - C_3' \\frac{ { \\mathfrak{s} } ^2 \\log(p) \\log(np) }{|\\I| }\n\t\t \\\\ \\label{eq:regression two change step 3 term 2}\n\t\t \\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{32} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_4 { \\mathfrak{s} } \\log(np),\n\t\\end{align} \n\twhere the first inequality follows from \\Cref{lem:restricted eigenvalue}, the second inequality follows from \\Cref{eq:regression two change step 3 term 1} and the last inequality follows from the observation that \n $$ |\\I|\\ge |{ \\mathcal J } _\\ell| \\ge C_\\gamma { \\mathfrak{s} } \\log(np). 
\n $$ \n So for $\\ell\\in \\{ 1,2\\}$, \n\\begin{align*} &\\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _\\ell } ) ^2\n= \\sum_{i \\in { \\mathcal J } _\\ell }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _4} )\\big\\}^2 -2 \\sum_{i \\in { \\mathcal J } _\\ell } \\epsilon_i X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } )\n\\\\\n\\ge & \\sum_{i \\in { \\mathcal J } _\\ell }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } )\\big\\}^2 - 2 \\| \\sum_{i \\in { \\mathcal J } _\\ell } \\epsilon_i X_i^\\top \\|_\\infty \\| \\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } \\|_1\n\\\\\n\\ge &\\sum_{i \\in { \\mathcal J } _\\ell }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _\\ell } )\\big\\}^2 - C_5 \\sqrt{ \\log(np) |{ \\mathcal J } _\\ell| }\\bigg(\\sqrt { { \\mathfrak{s} } } \\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _1}^* \\|_2 + C_2 { \\mathfrak{s} } \\sqrt { \\frac{ \\log(np)}{| \\I|} } \\bigg) \n\\\\\n\\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{32} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 - C_4 { \\mathfrak{s} } \\log(np) - C_5 \\sqrt{ \\log(np) |{ \\mathcal J } _\\ell| }\\bigg(\\sqrt { { \\mathfrak{s} } } \\| \\widehat \\beta _\\I - \\beta _{{ \\mathcal J } _1}^* \\|_2 + C_2 { \\mathfrak{s} } \\sqrt { \\frac{ \\log(np)}{| \\I|} } \\bigg) \n\\\\\n\\ge & \\frac{c_x|{ \\mathcal J } _\\ell | }{32} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -\\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(np) \n= \\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(np) ,\n\\end{align*}\nwhere the second inequality follows from the standard sub-Exponential tail bound and \\Cref{eq:regression two change step 3 term 1}, the third inequality follows from \\Cref{eq:regression two change step 3 term 2}, and the fourth inequality follows from $ { \\mathcal J } _\\ell \\subset \\I $ and so $ |\\I | \\ge |{ \\mathcal J } _\\ell|$.\n\\\\\n\\\\\nSo for $\\ell \\in \\{1,2 \\}$, \n\\begin{align} \\label{eq:regression change point step 3 last item}\\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _\\ell }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _\\ell } ) ^2 \n\\ge \\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(np).\n\\end{align} \n\\\n\\\\\n\\\\\n{\\bf Step 5.} For ${ \\mathcal J } _3$, if $|{ \\mathcal J } _3| \\ge C_\\mclF { \\mathfrak{s} } \\log(np) $, following the same calculations as in {\\bf Step 4}, \n$$\\sum_{i \\in { \\mathcal J } _ 3 }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _ 3 }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _3 } ) ^2 \\ge \\frac{c_x|{ \\mathcal J } _3 | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ 3 } \\|_2^2 -C_6 { \\mathfrak{s} } \\log(np) \\ge -C_6 { \\mathfrak{s} } \\log(np). 
\n$$\nIf $|{ \\mathcal J } _3 | < C_\\mclF { \\mathfrak{s} } \\log(np) $, then\n\\begin{align} & \\sum_{i \\in { \\mathcal J } _3 }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _3 }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _ 3} ) ^2 \\nonumber \n= \\sum_{i \\in { \\mathcal J } _ 3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3} )\\big\\}^2 -2 \\sum_{i \\in { \\mathcal J } _3 } \\epsilon_i X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\n\\\\ \\nonumber \n\\ge & \\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -\\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 - 4 \\sum_{i \\in { \\mathcal J } _3 }\\epsilon_i^2 \n\\\\ \\nonumber \n\\ge & \\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -C_7\\bigg( \\sqrt { \\gamma \\log(n)} + \\log(n)+ \\gamma \\bigg) \n\\\\ \\nonumber \n\\ge & \\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -C_7' \\bigg( \\log(n)+ \\gamma \\bigg) \n\\\\ \\ge \n& \\frac{1}{2 }\\sum_{i \\in { \\mathcal J } _3 }\\big\\{ X_i^\\top (\\widehat \\beta _\\I -\\beta^* _{{ \\mathcal J } _3 } )\\big\\}^2 -C_8 ( { \\mathfrak{s} } \\log(np) + \\gamma) \n\\ge -C_8 ( { \\mathfrak{s} } \\log(np) +\\gamma) \\label{eq:regression change point step 4 last item}\n\\end{align}\nwhere the second inequality follows from the standard sub-exponential deviation bound.\n\\\n\\\\\n\\\\\n{\\bf Step 6.} Putting \\Cref{eq:regression two change points first}, \\eqref{eq:regression change point step 3 last item} and \\eqref{eq:regression change point step 4 last item} together, it follows that \n $$ \\sum_{\\ell =1}^2 \\frac{c_x|{ \\mathcal J } _\\ell | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ \\ell } \\|_2^2 \\le C_9( { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + \\gamma) .$$\n This leads to \n $$ |{ \\mathcal J } _1 | \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ 1 } \\|_2^2 + |{ \\mathcal J } _2 | \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ 2 } \\|_2^2 \\le C_9( { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + \\gamma) .$$ \nObserve that \n$$ \\inf_{ \\beta \\in \\mathbb R ^p } |{ \\mathcal J } _1 | \\| \\beta - \\beta^* _{{ \\mathcal J } _ 1 } \\|_2^2 + |{ \\mathcal J } _2 | \\| \\beta - \\beta^* _{{ \\mathcal J } _ 2 } \\|_2^2 = \\kappa_k ^2 \\frac{|{ \\mathcal J } _1| |{ \\mathcal J } _2|}{| { \\mathcal J } _1| +|{ \\mathcal J } _2| } \\ge \\frac{ \\kappa_k ^2 }{2} \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\ge \\frac{ c \\kappa ^2 }{2} \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} . 
$$ \nThus\n$$ \\kappa ^2 \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\le C_{10} ( { \\mathfrak{s} } \\log(np)+ { \\mathcal B_n^{-1} \\Delta } \\kappa^2 + \\gamma) ,$$\nwhich is \n$$ \\min\\{ |{ \\mathcal J } _1| ,|{ \\mathcal J } _2| \\} \\le C_5 \\bigg( \\frac{ { \\mathfrak{s} } \\log(np) + \\gamma }{\\kappa ^2 } + { \\mathcal B_n^{-1} \\Delta } +\\frac{ \\gamma}{\\kappa ^2} \\bigg) .$$ \nSince $ |{ \\mathcal J } _2 | \\ge \\Delta \\ge \\frac{ C ( { \\mathfrak{s} } \\log(np) + \\gamma) }{ \\kappa^{2}} $\n for sufficiently large constant $C$, \n it follows that\n $$ |{ \\mathcal J } _2| \\ge \\Delta> C_5 \\bigg( \\frac{ { \\mathfrak{s} } \\log(np) + \\gamma }{\\kappa ^2 } + { \\mathcal B_n^{-1} \\Delta } +\\frac{ \\gamma}{\\kappa ^2} \\bigg) .$$ So it holds that \n $$ |{ \\mathcal J } _1| \\le C_5 \\bigg( \\frac{ { \\mathfrak{s} } \\log(np) + \\gamma }{\\kappa ^2 } + { \\mathcal B_n^{-1} \\Delta } \\bigg) .$$ \n \\end{proof} \n\n \\bnlem \nSuppose the good events \n$\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ defined in \\Cref{eq:left and right approximation of change points} hold. \n Suppose in addition that \n\\begin{align} \\label{eq:regression three change points snr}\n\\Delta \\kappa^2 \\ge C \\big( { \\mathfrak{s} } \\log(np) + \\gamma) \n\\end{align} \nfor sufficiently large constant $C $. \n Then with probability at least $1- n^{-3}$, there is no intervals in $ \\widehat { \\mathcal P} $ containing three or more true change points. \n \\enlem \n \n\n\n\\begin{proof} For contradiction, suppose $ \\I=(s,e] \\in \\mathcal {\\widehat P} $ be such that $ \\{ \\eta_1, \\ldots, \\eta_M\\} \\subset \\I $ with $M\\ge 3$. \n\\\\\n\\\\ \nSince the events $\\mathcal L ( { \\mathcal B_n^{-1} \\Delta } ) $ and $\\mathcal R ( { \\mathcal B_n^{-1} \\Delta } ) $ hold, by relabeling $\\{ s_q\\}_{q=1}^ { \\mathcal Q} $ if necessary, let $ \\{ s_m\\}_{m=1}^M $ be such that \n $$ 0 \\le s_m -\\eta_m \\le { \\mathcal B_n^{-1} \\Delta } \\quad \\text{for} \\quad 1 \\le m \\le M-1 $$ and that\n $$ 0\\le \\eta_M - s_M \\le { \\mathcal B_n^{-1} \\Delta } .$$ \n Note that these choices ensure that $ \\{ s_m\\}_{m=1}^M \\subset \\I . $\n \\\n \\\\\n \\begin{center}\n \\begin{tikzpicture} \n\\draw[ - ] (-10,0)--(1,0);\n \\node[color=black] at (-8,-0.3) {\\small s};\n \\draw[ (-, ultra thick, black] (-8,0) -- (-7.99,0);\n \\draw[ { -]}, ultra thick, black] (-1.0002,0) -- (-1,0);\n\n \\node[color=black] at (-7,-0.3) {\\small $\\eta_1$};\n\\draw(-7 ,0)circle [radius=2pt] ;\n\n \\node[color=black] at (-6.5,-0.3) {\\small $s_1$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-6.5,0) }; \n\n \\node[color=black] at (-5,-0.3) {\\small $\\eta_2$};\n\\draw(-5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-4.5,-0.3) {\\small $s_2$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-4.5,0) }; \n \n\n \\node[color=black] at (-2.5,-0.3) {\\small $\\eta_M$};\n\\draw(-2.5 ,0)circle [radius=2pt] ;\n \\node[color=black] at (-3 ,-0.3) {\\small $s_M$};\n\\draw plot[mark=x, mark options={color=black, scale=1.5}] coordinates {(-3,0) }; \n \n \\node[color=black] at (-1,-0.3) {\\small $e$};\n\n \n\\end{tikzpicture}\n\\end{center}\n\\\n\\\\\n{\\bf Step 1.}\n Denote \n $$ \\mathcal I_ 1= ( s, s _1], \\quad \\I_m =(s_{m-1} , s_m] \\text{ for } 2 \\le m \\le M \\quad \\text{and} \\quad \\I_{M+1} = (s_M,e]. 
$$\n Then since $|\I| \ge \Delta \ge C_s { \mathfrak{s} } \log(np) $, it follows that \n$$ \mathcal F(\I ) = \sum_{i \in \I }(y_i - X_i^\top \widehat \beta _\I ) ^2 . $$\nSince $ | \I_m | \ge \Delta \/2 \ge C_s { \mathfrak{s} } \log(np) $ for all $ 2 \le m \le M $, it follows from the same argument as {\bf Step 1} in the proof of \Cref{lemma:regression two change points} that \n \begin{align*}\mathcal F(\I_m) = & \sum_{i \in \I_m }(y_i - X_i^\top \widehat \beta _{\I_m} ) ^2\n \le \sum_{i \in \I_m }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(np) \bigg\} \quad \text{for all } 2 \n \le m \le M.\n \end{align*}\n \\\n \\\n {\bf Step 2.} It follows from the same argument as {\bf Step 2} in the proof of \Cref{lemma:regression two change points} that \n \begin{align*}\n &\mathcal F(\I_1) \n \le \sum_{i \in \I_1 }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(np) \bigg\} , \text{ and}\n \\\n &\mathcal F(\I_{M+1} ) \n \le \sum_{i \in \I_{M+1} }(y_i - X_i^\top \beta ^* _ i ) ^2 + C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(np) \bigg\} \n \end{align*}\n \\\n \\\n {\bf Step 3.} Since $\I \in \widehat{\mathcal P} $, we have\n \begin{align}\label{eq:regression three change points local min} \mathcal F(\I ) \le \sum_{m=1}^{M+1}\mathcal F(\I_m ) + M\gamma. \n \end{align} The above display and the calculations in {\bf Step 1} and {\bf Step 2} imply that\n\begin{align} \n \sum_{i \in \I }(y_i - X_i^\top \widehat \beta _\I ) ^2 \n\le \sum_{i \in \I }(y_i - X_i^\top \beta_i^* ) ^2 \n + (M +1) C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(np) \bigg\}\n+M\gamma .\n \label{eq:regression three change points step 2 term 1}\n\end{align}\nDenote \n $$ { \mathcal J } _1 =(s, \eta_1], \ { \mathcal J } _m = (\eta_{m-1}, \eta_m] \quad \text{for}\quad 2 \le m \le M , \ { \mathcal J } _{M+1} =(\eta_M, e]. $$ \n\Cref{eq:regression three change points step 2 term 1} gives \n\begin{align} \n \sum_{m=1}^{M+1} \sum_{i \in { \mathcal J } _m }(y_i - X_i^\top \widehat \beta _\I ) ^2 \n \le& \sum_{m=1}^{M+1} \sum_{i \in { \mathcal J } _ m }(y_i - X_i^\top \beta^* _{{ \mathcal J } _m } ) ^2 \n + (M+1) C_1 \bigg\{ { \mathcal B_n^{-1} \Delta } \kappa ^2 + { \mathfrak{s} } \log(np) \bigg\} \n+M\gamma \label{eq:regression three change points first}\n\end{align} \n \\\n \\\n {\bf Step 4.} Using the same argument as in {\bf Step 4} of the proof of \Cref{lemma:regression two change points},\nit follows that \n\begin{align} \label{eq:regression three change point step 3 last term}\sum_{i \in { \mathcal J } _m }(y_i - X_i^\top \widehat \beta _\I ) ^2 - \sum_{i \in { \mathcal J } _m }(y_i - X_i^\top \beta^* _{{ \mathcal J } _m } ) ^2 \n\ge \frac{c_x|{ \mathcal J } _m | }{64} \|\widehat \beta _\I - \beta^* _{{ \mathcal J } _ m } \|_2^2 -C_2 { \mathfrak{s} } \log(np) \quad \text{for all} \ 2 \le m \le M. 
\n\\end{align} \n \\\n \\\\\n {\\bf Step 5.}\n Using the same argument as in the {\\bf Step 4} in the proof of \\Cref{lemma:regression two change points}, it follows that \n\\begin{align} \\label{eq:regression three change point step 5 first term}& \\sum_{i \\in { \\mathcal J } _ 1 }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _ 1 }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _1 } ) ^2 \\ge -C_3 ( { \\mathfrak{s} } \\log(np) +\\gamma) \\text{ \n and }\n\\\\\\label{eq:regression three change point step 5 second term}\n& \\sum_{i \\in { \\mathcal J } _ {M+ 1} }(y_i - X_i^\\top \\widehat \\beta _\\I ) ^2 - \\sum_{i \\in { \\mathcal J } _ {M+ 1} }(y_i - X_i^\\top \\beta^* _{{ \\mathcal J } _ {M+ 1} } ) ^2 \\ge -C_3 ( { \\mathfrak{s} } \\log(np) +\\gamma) \n \\end{align}\n\\\n\\\\\n{\\bf Step 6.} Putting \\Cref{eq:regression three change points first}, \\eqref{eq:regression three change point step 3 last term}, \\eqref{eq:regression three change point step 5 first term} and \\eqref{eq:regression three change point step 5 second term}, \n it follows that \n\\begin{align} \\label{eq:regression three change points step six} \\sum_{ m =2}^M \\frac{c_x|{ \\mathcal J } _m | }{64} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ m } \\|_2^2 \\le C_4M ( { \\mathfrak{s} } \\log(np) + { \\mathcal B_n^{-1} \\Delta } \\kappa ^2 + \\gamma) .\n\\end{align} \n For any $ m \\in\\{2, \\ldots, M\\}$, it holds that\n\\begin{align} \\label{eq:regression three change points signal lower bound} \\inf_{ \\beta \\in \\mathbb R^p } |{ \\mathcal J } _{m-1} | \\| \\beta - \\beta^* _{{ \\mathcal J } _ {m-1} } \\| ^2 + |{ \\mathcal J } _{m} | \\| \\beta - \\beta^* _{{ \\mathcal J } _ m } \\| ^2 =& \\frac{|{ \\mathcal J } _{m-1}| |{ \\mathcal J } _m|}{ |{ \\mathcal J } _{m-1}| + |{ \\mathcal J } _m| } \\kappa_m ^2 \\ge \\frac{1}{2} \\Delta \\kappa^2,\n\\end{align} \nwhere the last inequality follows from the assumptions that $\\eta_k - \\eta_{k-1}\\ge \\Delta $ and $ \\kappa_k \\asymp \\kappa$ for all $1\\le k \\le K$. So\n\\begin{align} \\nonumber &2 \\sum_{ m=1}^{M } |{ \\mathcal J } _m |\\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ m } \\|_2^2 \n\\\\ \n\\ge & \\nonumber \\sum_{m=2}^M \\bigg( |{ \\mathcal J } _{m-1} \\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ {m-1} } \\|_2^2 + |{ \\mathcal J } _m |\\|\\widehat \\beta _\\I - \\beta^* _{{ \\mathcal J } _ m } \\|_2^2 \\bigg) \n\\\\ \\label{eq:regression three change points signal lower bound two} \n\\ge & (M-1) \\frac{ 1}{2} \\Delta \\kappa^2 \\ge \\frac{M}{4} \\Delta \\kappa^2 ,\n\\end{align} \nwhere the second inequality follows from \\Cref{eq:regression three change points signal lower bound} and the last inequality follows from $M\\ge 3$. \\Cref{eq:regression three change points step six} and \\Cref{eq:regression three change points signal lower bound two} together imply that \n\\begin{align}\\label{eq:regression three change points signal lower bound three} \n \\frac{M}{4} \\Delta \\kappa^2 \\le 2 C_5 M\\bigg( { \\mathfrak{s} } \\log(np ) + { \\mathcal B_n^{-1} \\Delta } \\kappa^2\n+ \\gamma \\bigg) .\n\\end{align}\nSince $ { \\mathcal B_n} \\to \\infty $, it follows that for sufficiently large $n$, \\Cref{eq:regression three change points signal lower bound three} gives \n$$ \\Delta\\kappa^2 \\le C_5 \\big( { \\mathfrak{s} } \\log(np) +\\gamma),$$ \nwhich contradicts \\Cref{eq:regression three change points snr}. 
\n \n \n \\end{proof} \n \n \\bnlem Suppose $ \\gamma \\ge C_\\gamma { \\mathfrak{s} } \\log(np) $ for sufficiently large constant $C_\\gamma $. \nWith probability at least $1- n^{-3}$, there is no two consecutive intervals $\\I_1= (s,t ] \\in \\widehat {\\mathcal P} $, $ \\I_2=(t, e] \\in \\widehat {\\mathcal P} $ such that $\\I_1 \\cup \\I_2$ contains no change points. \n \\enlem \n \\begin{proof} \n For contradiction, suppose that \n $$ \\I : =\\I_1\\cup \\I_2 $$\n contains no change points. \nFor $\\I_1$, note that if $|\\I_1| \\ge C_\\zeta { \\mathfrak{s} } \\log(np) $, then \n by \\Cref{lam:regression one change deviation bound} {\\bf a}, it follows that \n \\begin{align*} \n \\bigg| \\mathcal F(\\I_1) - \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| = \\bigg| \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\widehat \\beta _{\\I_1} )^2- \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta_i^* )^2 \\bigg| \\le C_1 { \\mathfrak{s} } \\log(np) .\n \\end{align*}\n If $|\\I_1| < C_\\zeta { \\mathfrak{s} } \\log(np) $, then\n\\begin{align*} \n& \\bigg| \\mathcal F(\\I_1) - \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| = \\bigg| \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \n=\\sum_{i\\in \\I_1} \\epsilon_i^2 \n \\\\ \n \\le & |\\I_1| E(\\epsilon^2_1) + C_2 \\sqrt { |\\I_1| \\log(n) } + \\log(n) \\le C_2' { \\mathfrak{s} } \\log(np) .\n \\end{align*} \n So\n \\begin{align*} \n \\bigg| \\mathcal F(\\I_1) - \\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \\le C_3 { \\mathfrak{s} } \\log(np) .\n \\end{align*} \n Similarly, \n\\begin{align*} \n&\\bigg| \\mathcal F(\\I_2) - \\sum_{i\\in \\I_2} (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \\le C_3 { \\mathfrak{s} } \\log(np), \\quad \\text{and}\n\\\\\n&\\bigg| \\mathcal F(\\I ) - \\sum_{i\\in \\I } (y_i - X_i^\\top \\beta^* _{i} )^2 \\bigg| \\le C_3 { \\mathfrak{s} } \\log(np) .\n \\end{align*} \n So \n $$\\sum_{i\\in \\I_1} (y_i - X_i^\\top \\beta_i^* )^2 +\\sum_{i\\in \\I_2} (y_i - X_i^\\top \\beta_i^* )^2 -2C_1 { \\mathfrak{s} } \\log(np ) +\\gamma \\le \\sum_{i\\in \\I } (y_i - X_i^\\top \\beta_i^* )^2 +C_1 { \\mathfrak{s} } \\log(np) . $$\n Since $\\beta^* _i$ is unchanged when $i\\in \\I$, it follows that \n $$ \\gamma \\le 3C_1 { \\mathfrak{s} } \\log(np).$$\n This is a contradiction when $C_\\gamma> 3C_1. $\n \\end{proof} \n \n \n \n \n \n \n\n\\subsection{Additional Results Related to Lasso}\n\n\\bnlem \\label{lam:regression one change deviation bound}\nLet $\\mathcal I =(s,e] $ be any generic interval. \n\\\\\n{\\bf a.} If $\\I$ contains no change points and that \n$|\\I| \\ge C_s { \\mathfrak{s} } \\log(np) $ where $C_s$ is the universal constant in \\Cref{lemma:interval lasso}. Then it holds that \n$$\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } (y_i - X_i ^\\top \\widehat \\beta _\\I )^2 - \\sum_{ i \\in \\I } (y_i - X_i^\\top \\beta^*_\\I )^2 \\bigg| \\ge C { \\mathfrak{s} } \\log(np) \\bigg) \\le n^{-4}. $$\n\\\n\\\\\n{\\bf b.} Suppose that the interval $ \\I=(s,e]$ contains one and only change point $ \\eta_k $ and that \n$|\\I| \\ge C_s { \\mathfrak{s} } \\log(np) $. 
Denote $\\widehat \\mu_\\I = \\frac{1}{|\\I | } \\sum_{i\\in \\I } y_i $ and \n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e] .$$ Then it holds that \n$$\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } (y_i - X_i^\\top \\widehat \\beta_\\I )^2 - \\sum_{ i \\in \\I } (y_i - X_i^\\top \\beta^* _i )^2 \\bigg| \\ge C\\bigg\\{ \\frac{ | { \\mathcal J } ||{ \\mathcal J } '| }{ |\\I| } \\kappa_k ^2 + { \\mathfrak{s} } \\log(np)\\bigg\\} \\bigg) \\le n^{-4}. $$\n\\enlem\n\\begin{proof} \nWe show {\\bf b} as {\\bf a} immediatelly follows from {\\bf b} with $ |{ \\mathcal J } '| =0$.\n Denote \n$$ \\mathcal J = (s,\\eta_k] \\quad \\text{and} \\quad \\mathcal J' = (\\eta_k, e] .$$ \nDenote $ \\beta _\\I^* = \\frac{1}{|\\I | } \\sum_{i\\in \\I } \\beta ^* _i $. Note that \n\\begin{align} \\nonumber \n \\bigg| \\sum_{ i \\in \\I } (y_i - X_i^\\top \\widehat \\beta_\\I )^2 - \\sum_{ i \\in \\I } (y_i - X_i^\\top \\beta^* _i )^2 \\bigg| \n = &\\bigg| \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^* _i ) \\big\\} ^2 - 2 \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_i ) \\bigg| \n \\\\ \\label{eq:regression one change point deviation bound term 1}\n \\le & 2 \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\} ^2 \n \\\\ \\label{eq:regression one change point deviation bound term 2}\n + & 2 \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\beta^*_\\I -\\beta^*_i ) \\big\\} ^2 \n \\\\ \\label{eq:regression one change point deviation bound term 3}\n +& 2 \\bigg| \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\bigg| \n \\\\ \\label{eq:regression one change point deviation bound term 4}\n +& 2 \\bigg| \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\beta^*_\\I -\\beta^*_i ) \\bigg| .\n\\end{align}\n\\\n\\\\\nSuppose all the good events in \\Cref{lemma:interval lasso} holds. 
\n\\\\\n\\\\\n{\\bf Step 1.} By \\Cref{lemma:interval lasso}, $\\widehat \\beta_\\I - \\beta^*_\\I $ satisfies the cone condition that \n$$ \\| (\\widehat \\beta_\\I - \\beta^*_\\I )_{S^c}\\|_1 \\le 3 \\| (\\widehat \\beta_\\I - \\beta^*_\\I )_S \\|_1 .$$\nIt follows from \\Cref{corollary:restricted eigenvalues 2} that with probability at least $ 1-n^{-5}$,\n\\begin{align*} \n \\bigg| \\frac{1}{|\\I| } \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\} ^2 - ( \\widehat \\beta_\\I - \\beta^*_\\I )^\\top \\Sigma ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\bigg| \\le C_1 \\sqrt { \\frac{ { \\mathfrak{s} } \\log(np) }{|\\I| }} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 .\n\\end{align*}\nThe above display gives\n\\begin{align*} \\bigg| \\frac{1}{|\\I| } \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\}^2 \\bigg| \\le & \\| \\Sigma\\|_{\\text{op}} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 + C_1 \\sqrt { \\frac{ { \\mathfrak{s} } \\log(np) }{|\\I| }} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 \n\\\\\n\\le & C_x \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 + C_1 \\sqrt { \\frac{ { \\mathfrak{s} } \\log(np) }{C_\\zeta { \\mathfrak{s} } \\log(np) }} \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_2 ^2 \n\\\\\n\\le & \\frac{ C_2 { \\mathfrak{s} } \\log(np)}{|\\I| } \\end{align*} \nwhere the second inequality follows from the assumption that $|\\I| \\ge C_\\zeta { \\mathfrak{s} } \\log(np) $ and the last inequality follows from \\Cref{eq:lemma:interval lasso term 1} in \\Cref{lemma:interval lasso}.\nThis gives \n$$ \\bigg| \\sum_{i\\in \\I } \\big\\{ X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I ) \\big\\}^2 \\bigg| \\le 2 C_2 { \\mathfrak{s} } \\log(np) . $$\n\\\n\\\\\n{\\bf Step 2.} Observe that \n$ X_i^\\top (\\beta_\\I ^* - \\beta^*_i) $ is Gaussian with mean $0$ and variance \n\\begin{align*} \\omega_i ^2 = (\\beta_\\I ^* - \\beta^*_i )^\\top \\Sigma (\\beta_\\I ^* - \\beta^*_i) . \n\\end{align*}\nSince \n$$ \\beta_\\I ^* = \\frac{|{ \\mathcal J } | \\beta^*_{ \\mathcal J } +|{ \\mathcal J } '| \\beta^*_{{ \\mathcal J } '} }{|\\I | },$$\nit follows that \n$$ \\omega_i ^2 = \\begin{cases} \n\\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg)^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } ,\n\\\\\n\\bigg( \\frac{ | { \\mathcal J } | (\\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) ^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| ( \\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } '.\n\\end{cases} $$\n Consequently,\n$ \\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 $ is sub-Exponential with parameter $ \\omega_i ^2$. 
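For completeness, here is the computation behind the first case of the display above (the case $i \in { \mathcal J } '$ is symmetric), using $|\I| = |{ \mathcal J } | + |{ \mathcal J } '|$ and writing $\kappa_k = \| \beta^*_{ \mathcal J } - \beta^*_{{ \mathcal J } '} \|_2$ for the jump at $\eta_k$: for $i \in { \mathcal J } $,\n\begin{align*}\n\beta^*_\I - \beta^*_i = \frac{|{ \mathcal J } | \beta^*_{ \mathcal J } +|{ \mathcal J } '| \beta^*_{{ \mathcal J } '} }{|\I | } - \beta^*_{ \mathcal J } = - \frac{ |{ \mathcal J } '| (\beta^*_{ \mathcal J } -\beta^*_{{ \mathcal J } '} ) }{ |\I| },\n\end{align*}\nso that $\omega_i^2 = (\beta^*_\I - \beta^*_i)^\top \Sigma (\beta^*_\I - \beta^*_i)$ is exactly the quadratic form displayed above and $\|\beta^*_\I - \beta^*_i\|_2 = \frac{|{ \mathcal J } '|}{|\I|}\kappa_k$.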
\nBy standard sub-Exponential tail bounds, it follows that \n\\begin{align*}\n &\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 - { \\mathbb{E} } {\\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2} \\bigg| \\ge C_3 \\tau \\bigg) \n \\\\\n \\le & \\exp\\bigg (-c \\min\\bigg \\{ \\frac{ \\tau^2}{\\sum_{i\\in \\I } \\omega_i ^4 } , \\frac{ \\tau }{ \\max_{i\\in \\I } \\omega_i ^2 } \\bigg\\} \\bigg)\n \\\\\n \\le & \\exp\\bigg (-c' \\min\\bigg \\{ \\frac{ \\tau^2}{\\sum_{i\\in \\I } \\omega_i ^2 } , \\frac{ \\tau }{ \\max_{i\\in \\I } |\\omega_i | } \\bigg\\} \\bigg)\n \\\\\n \\le & \\exp\\bigg (-c ''\\min\\bigg \\{ \\tau^2 \\bigg( \\frac{|\\I | }{| { \\mathcal J } '| |{ \\mathcal J } | } \\kappa_k^{-2} \\bigg) , \\tau \\frac{|\\I| }{ \\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\}} \\kappa_k ^{-1} \\bigg\\} \\bigg) ,\n \\end{align*}\nwhere the second inequality follows from the observation that \n$$ \\omega_i^2 \\le \\kappa_k |\\omega_i| \\le C_\\kappa |\\omega_i| \\text{ for all } i \\in \\I , $$\n and the last inequality follows from the observation that \n\\begin{align*}\n \\sum_{i\\in \\I} \\omega_i ^2 \n \\le C_x | { \\mathcal J } | \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } + C_x | { \\mathcal J } ' | \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } \n= C_x \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 .\n \\end{align*} \n So there exists a sufficiently large constant $C_4$ such that with probability at least $1- n^{-5}$, \n\\begin{align*}\n & \\bigg| \\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 - { \\mathbb{E} } {\\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2} \\bigg|\n \\\\ \n \\le & C_ 4 \\bigg\\{\\sqrt { \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\log(n) \\kappa _k ^2 } + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le & C_ 4' \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le &C_5 \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) \\bigg\\} \n \\end{align*}\n where $ \\kappa_k \\asymp \\kappa \\le C_\\kappa $ is used in the last inequality.\nSince $ { \\mathbb{E} } {\\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2} = \\sum_{i\\in \\I }\\omega_i ^2 \\le C_x \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2$, it follows that \n$$ \\mathbb P \\bigg ( \\bigg| \\sum_{ i \\in \\I }\\{ X_i^\\top (\\beta_\\I ^* - \\beta^*_i)\\} ^2 \\bigg| \\le ( C_5 +C_x ) \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + C_5 \\log(n) \\bigg ) \\ge 1-n^{-5}.$$\n\\\n\\\\\n{\\bf Step 3.} For \\Cref{eq:regression one change point deviation bound term 3}, it follows that with probability at least $1-2n^{-4}$\n\\begin{align*} \n \\frac{1}{|\\I| } \\sum_{i\\in \\I } \\epsilon_i X_i^\\top ( \\widehat \\beta_\\I - \\beta^*_\\I )\n \\le C_6 \\sqrt { \\frac{\\log(np) }{ |\\I| } } \\| \\widehat \\beta_\\I - \\beta^*_\\I \\|_1 \\le C_7 \\frac{ { \\mathfrak{s} } \\log(np) }{ |\\I| } \n\\end{align*}\nwhere the first inequality is a consequence of \\Cref{eq:independent condition 1c 1}, the second inequality follows from \\Cref{eq:lemma:interval lasso term 3} in \\Cref{lemma:interval lasso}. 
\n\\\n\\\\\n\\\\\n{\\bf Step 4.} From {\\bf Step 2}, we have that\n$ X_i^\\top (\\beta_\\I ^* - \\beta^*_i) $ is Gaussian with mean $0$ and variance \n$$ \\omega_i ^2 = \\begin{cases} \n\\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg)^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| (\\beta^*_{ \\mathcal J } -\\beta^*_{{ \\mathcal J } '} ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } ,\n\\\\\n\\bigg( \\frac{ | { \\mathcal J } | (\\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) ^\\top \\Sigma \\bigg( \\frac{ | { \\mathcal J } '| ( \\beta^*_{{ \\mathcal J } '} -\\beta^*_{ \\mathcal J } ) }{ | \\I| } \\bigg) \\le \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } &\\text{when } i \\in { \\mathcal J } '.\n\\end{cases} $$\n Consequently,\n$ \\epsilon_i X_i^\\top (\\beta_\\I ^* - \\beta^*_i) $ is centered sub-Exponential with parameter $ \\omega_i \\sigma_\\epsilon $. \nBy standard sub-Exponential tail bounds, it follows that \n\\begin{align*}\n &\\mathbb P \\bigg( \\bigg| \\sum_{ i \\in \\I } \\epsilon_i X_i^\\top (\\beta_\\I ^* - \\beta^*_i) \\bigg| \\ge C_8 \\tau \\bigg) \n \\\\\n \\le & \\exp\\bigg (-c \\min\\bigg \\{ \\frac{ \\tau^2}{\\sum_{i\\in \\I } \\omega_i ^2 } , \\frac{ \\tau }{ \\max_{i\\in \\I } |\\omega_i| } \\bigg\\} \\bigg)\n \\\\\n \\le & \\exp\\bigg (-c '\\min\\bigg \\{ \\tau^2 \\bigg( \\frac{|\\I | }{| { \\mathcal J } '| |{ \\mathcal J } | } \\kappa_k^{-2} \\bigg) , \\tau \\frac{|\\I| }{ \\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\}} \\kappa_k ^{-1} \\bigg\\} \\bigg) ,\n \\end{align*}\nwhere the last inequality follows from the observation that \n\\begin{align*}\n \\sum_{i\\in \\I} \\omega_i ^2 \n \\le C_x | { \\mathcal J } | \\frac{| { \\mathcal J } '| ^2 \\kappa_k^2 }{ |\\I|^2 } + C_x | { \\mathcal J } ' | \\frac{| { \\mathcal J } | ^2 \\kappa_k^2 }{ |\\I|^2 } \n= C_x \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 .\n \\end{align*} \n So there exists a sufficiently large constant $C_9$ such that with probability at least $1- n^{-5}$, \n\\begin{align*}\n & \\bigg| \\sum_{ i \\in \\I } \\epsilon_i X_i^\\top (\\beta_\\I ^* - \\beta^*_i) \\bigg|\n \\\\ \n \\le & C_ 9 \\bigg\\{\\sqrt { \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\log(n) \\kappa _k ^2 } + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le & C_ 9' \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) + \\log(n) \\frac{\\max \\{|{ \\mathcal J } |, |{ \\mathcal J } '| \\} }{|\\I| } \\kappa _k \\bigg\\} \n \\\\\n \\le &C_9 \\bigg\\{ \\frac{| { \\mathcal J } '| |{ \\mathcal J } | }{|\\I | } \\kappa_k^2 + \\log(n) \\bigg\\} \n \\end{align*}\n where $ \\kappa_k \\asymp \\kappa \\le C_\\kappa $ is used in the last inequality.\n\\end{proof} \n\n \n\\bnlem \\label{lemma:interval lasso} Suppose \\Cref{assp:dcdp_linear_reg} holds. Let $$ \\widehat \\beta _{ \\mathcal I } = \\arg\\min_{\\beta \\in \\mathbb R^p } \\frac{1}{ |\\mathcal I | }\\sum_{i \\in \\mathcal I} (y_i -X_i^\\top \\beta) ^2 + \\lambda \\|\\beta\\|_1$$ with $\\lambda = C_\\lambda (\\sigma_{\\epsilon} \\vee 1)\\sqrt { \\log(np) }$ for some sufficiently large constant $C _ \\lambda $. 
There exists a sufficiently large constant $ C_s$ such that for all $ \\mathcal I \\subset (0, n] $ such that $| \\mathcal I | \\ge C_s { \\mathfrak{s} } \\log(np)$, it holds with probability at least $ 1-(np)^{-3}$ that \n\\begin{align} \\label{eq:lemma:interval lasso term 1}\n& \\| \\widehat \\beta_\\I -\\beta^*_\\I \\| _2^2 \\le \\frac{C (\\sigma_{\\epsilon}^2\\vee 1) { \\mathfrak{s} } \\log(np)}{ |\\I | } ;\n \\\\ \\label{eq:lemma:interval lasso term 2}\n & \\| \\widehat \\beta_\\I -\\beta^*_\\I \\| _1 \\le C(\\sigma_{\\epsilon}\\vee 1) { \\mathfrak{s} } \\sqrt { \\frac{\\log(np)} {|\\I | } } ;\n \\\\ \\label{eq:lemma:interval lasso term 3}\n & \\| (\\widehat \\beta_\\I -\\beta^*_\\I)_{S^c} \\| _1 \\le 3 \\| (\\widehat \\beta _\\I -\\beta^*_\\I )_{S } \\| _1 .\n\\end{align} \nwhere $ \\beta^*_\\I = \\frac{1}{|\\I|} \\sum_{i\\in \\I } \\beta^*_i $.\n \\enlem \n\\begin{proof} Denote $S=\\bigcup_{k=1}^K S_{\\eta_k+1} $. Since $K< \\infty$, \n$|S| \\asymp { \\mathfrak{s} } .$ \n It follows from the definition of $\\widehat{\\beta}_\\I$ that \n\t\\[\n\t \\frac{1}{|\\I | }\t\\sum_{ i \\in I} (y_i - X_i^{\\top}\\widehat{\\beta}_\\I )^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\|\\widehat{\\beta} _\\I\\|_1 \\leq \n\t \\frac{1}{|\\I | } \\sum_{t \\in \\I} (y_i - X_i^{\\top}\\beta^*_\\I )^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\|\\beta^*_\\I \\|_1.\n\t\\]\n\tThis gives \n\t\\begin{align*}\n\t\t \\frac{1}{|\\I | } \\sum_{i \\in \\I} \\bigl\\{X_i ^{\\top}(\\widehat{\\beta}_\\I - \\beta^*_\\I )\\bigr\\}^2 + \\frac{ 2 }{|\\I | } \\sum_{ i \\in \\I}(y_i - X_i^{\\top}\\beta^*_\\I)X_i^{\\top}(\\beta^*_\\I - \\widehat{\\beta}_\\I ) + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\widehat{\\beta}_\\I \\bigr\\|_1 \n\t\t\\leq \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\beta^*_\\I \\bigr\\|_1,\n\t\\end{align*}\n\tand therefore\n\t\\begin{align}\n\t\t& \\frac{1}{|\\I | }\t \\sum_{i \\in \\I} \\bigl\\{X_i ^{\\top}(\\widehat{\\beta} _\\I - \\beta^*_\\I )\\bigr\\}^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\widehat{\\beta}_\\I \\bigr\\|_1 \\nonumber \\\\\n\t\t \\leq & \\frac{ 2}{|\\I | }\t \\sum_{i \\in \\I } \\epsilon_i X_i ^{\\top}(\\widehat{\\beta}_\\I - \\beta^*_\\I ) \n+ 2 (\\widehat{\\beta}_\\I - \\beta^*_\\I )^{\\top}\\frac{ 1 }{|\\I | }\t \\sum_{i\\in \\I} X_ i X_i^{\\top}( \\beta^*_i -\\beta^*_\\I )\t\t \n\t\t+ \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\beta^*_\\I \\bigr\\|_1 .\t \\label{eq-lem10-pf-2}\n\t\\end{align}\nTo bound\t\n\t\t$\\left\\|\\sum_{ i \\in \\I} X_ i X_i ^\\top (\\beta^* _\\I -\\beta^*_i ) \\right\\|_{\\infty}, \n$\n\tnote that for any $j \\in \\{ 1, \\ldots, p\\}$, the $j$-th entry of \n\t\\\\$\\sum_{i \\in \\I} X _ i X _i^\\top (\\beta^*_\\I -\\beta_i )$ satisfies \n\t\\begin{align*}\n\t\t E \\left\\{\\sum_{ i \\in \\I} X_i (j) X _ i ^\\top (\\beta^*_\\I - \\beta^*_ i )\\right\\} = \\sum_{ i \\in \\I} E \\{X _ i (j ) X_i ^\\top \\}\\{\\beta^*_\\I - \\beta^*_ i \\} \n\t\t= \\mathbb{E}\\{X _1( j ) X _1 ^\\top \\} \\sum_{ i \\in \\I}\\{\\beta^*_\\I - \\beta^*_ i \\} = 0.\n\t\\end{align*}\n\tSo $ E\\{ \\sum_{ i \\in \\I} X_ i X_i ^\\top (\\beta^* _\\I -\\beta^*_i )\\} =0 \\in \\mathbb R^p.$ \nBy \\Cref{lemma:consistency}{\\bf b},\n\t\\begin{align*}\n\t \\bigg| ( \\beta^*_i -\\beta^*_\\I ) ^\\top \\frac{ 1 }{|\\I | }\t \\sum_{i\\in \\I} X_t X_t^{\\top} (\\widehat{\\beta}_\\I - \\beta^*_\\I )\t \t \\bigg| \\le & C_1 \\big(\\max_{1\\le i \\le n } \\|\\beta^*_i -\\beta^*_\\I \\|_2 \\big) \\sqrt { \\frac{\\log(np) }{ | \\I| }} \\| \\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 
\n\t \\\\\n\t \\le & C_2 \\sqrt { \\frac{\\log(np) }{ | \\I| }} \\| \\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \n\t \\\\\n\t \\le & \\frac{\\lambda}{8\\sqrt { |\\I| } } \\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \n \t\\end{align*}\n \twhere the second inequality follows from \\Cref{lemma:beta bounded 1} and the last inequality follows from $\\lambda = C_\\lambda\\sigma_{\\epsilon} \\sqrt { \\log(np) }$ with sufficiently large constant $C_\\lambda$.\n\tIn addition by \\Cref{lemma:consistency}{\\bf a},\n\t\\begin{equation*}\n\t \\frac{ 2}{|\\I | }\t \\sum_{i \\in \\I } \\epsilon_i X_i ^{\\top}(\\widehat{\\beta}_\\I - \\beta^*_\\I ) \\le C \\sigma_{\\epsilon}\\sqrt { \\frac{ \\log(np) }{ |\\I| } }\\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \\le \\frac{\\lambda}{8\\sqrt { |\\I| } } \\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 .\n\t\\end{equation*}\n\tSo \\eqref{eq-lem10-pf-2} gives \n\t\\begin{align*}\n\t\t \\frac{1}{|\\I | }\t \\sum_{i \\in \\I} \\bigl\\{X_i ^{\\top}(\\widehat{\\beta} _\\I - \\beta^*_\\I )\\bigr\\}^2 + \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\widehat{\\beta}_\\I \\bigr\\|_1 \n\t\t \\leq \\frac{\\lambda}{4\\sqrt { |\\I| } } \\|\\widehat{\\beta}_\\I - \\beta^*_\\I \\|_1 \n\t\t+ \\frac{ \\lambda}{ \\sqrt{ |\\I| }} \\bigl\\|\\beta^*_\\I \\bigr\\|_1 .\t \n\t\\end{align*} \nLet $\\Theta = \\widehat \\beta _\\I - \\beta^*_\\I $. The above inequality implies\n\\begin{align}\n\\label{eq:two sample lasso deviation 1} \\frac{1}{|\\I|} \\sum_{i \\in \\I } \\left( X_i^\\top \\Theta \\right)^2 + \\frac{ \\lambda}{2\\sqrt{ |\\I| } }\\| (\\widehat \\beta _\\I) _{ S ^c}\\|_1 \n \\le & \\frac{3\\lambda}{2\\sqrt{ |\\I| } } \\| ( \\widehat \\beta _\\I - \\beta^*_\\I ) _{S} \\| _1 ,\n \\end{align} \nwhich also implies that \n$$ \\frac{\\lambda }{2}\\| \\Theta _{S^c} \\|_1 = \\frac{ \\lambda}{2 }\\| (\\widehat \\beta_\\I) _{ S ^c}\\|_1 \\le \n\\frac{3\\lambda}{2 } \\| ( \\widehat \\beta _\\I - \\beta^* _\\I ) _{S} \\| _1 = \\frac{3\\lambda}{2 } \\| \\Theta _{S} \\| _1 . $$\nThe above inequality and \\Cref{corollary:restricted eigenvalues 2} imply that with probability at least $1-n^{-5}$,\n$$ \\frac{1}{|\\I| } \\sum_{i \\in \\I } \\left( X_i^\\top \\Theta \\right)^2 = \\Theta^\\top \\widehat \\Sigma _\\I \\Theta \n\\ge \n \\Theta^\\top \\Sigma \\Theta - C_3\\sqrt{ \\frac{ { \\mathfrak{s} } \\log(np)}{ |\\I| }} \\| \\Theta\\|_2^2 \\ge \\frac{c_x}{2} \\|\\Theta\\|_2 ^2 ,$$\n where the last inequality follows from the assumption that $| \\mathcal I | \\ge C_s { \\mathfrak{s} } \\log(np) $ for sufficiently large $C_s$.\nTherefore \\Cref{eq:two sample lasso deviation 1} gives\n\\begin{align}\n\\label{eq:two sample lasso deviation 2} c'\\|\\Theta\\|_2 ^2 + \\frac{ \\lambda}{2\\sqrt{ |\\I| } }\\| ( \\widehat \\beta_\\I - \\beta^*_\\I ) _{ S ^c}\\|_1 \n \\le \\frac{3\\lambda}{2\\sqrt{ |\\I| } } \\| \\Theta_{S} \\| _1 \\le \\frac{3\\lambda \\sqrt { \\mathfrak{s} } }{2 \\sqrt{ |\\I| } } \\| \\Theta \\| _2 \n \\end{align} \n and so \n $$ \\|\\Theta\\|_2 \\le \\frac{C \\lambda \\sqrt { \\mathfrak{s} } }{\\sqrt{| \\I |} } . $$ \n The above display gives \n $$ \\| \\Theta_{S} \\| _1 \\le \\sqrt { { \\mathfrak{s} } } \\| \\Theta_{S} \\| _2 \\le \\frac{C\\lambda { \\mathfrak{s} } }{\\sqrt{|\\I| }}. 
$$ \n Since \n $ \| \Theta_{S^c } \| _1 \le 3 \| \Theta_{S} \| _1 ,$ \nit also holds that \n$$\| \Theta \| _1 = \| \Theta_{S } \| _1 +\| \Theta_{S^c } \| _1 \le 4 \| \Theta_{S} \| _1 \le \frac{4C\lambda { \mathfrak{s} } }{\sqrt{|\I|} } .$$\n \end{proof}\n\n\subsection{Additional Technical Results} \n\bnlem\n\label{theorem:restricted eigenvalues 2}\nSuppose $\{X_{i } \}_{1\le i \le n } \overset{i.i.d.} {\sim} N_p (0, \Sigma ) $. \nDenote $\mathcal C_S := \{ v \in \mathbb R^p : \| v_{S^c }\|_1 \le 3\| v_{S}\|_1 \} $, where $ |S| \le { \mathfrak{s} } $.\nThen there exist constants $c$ and $C$ such that for all $\eta\le 1$, \n\begin{align} \nP \left( \sup_{v \in \mathcal C _S , \|v\|_2=1 } \left | v^\top ( \widehat \Sigma -\Sigma ) v \right| \ge C\eta \Lambda_{\max} (\Sigma) \right)\n\le 2\exp( -c n\eta ^2 + 2 { \mathfrak{s} } \log(p) ) .\n\end{align}\n\enlem\n\begin{proof}\nThis is a well-known restricted eigenvalue property for Gaussian design. The proof can be found in \cite{Basu2015} or \cite{Loh2012}.\n\end{proof} \n\n\bnlem\n\label{corollary:restricted eigenvalues 2} Suppose $\{X_{i } \}_{1\le i \le n } \overset{i.i.d.} {\sim} N_p (0, \Sigma ) $. \n Denote $\mathcal C_S := \{ v \in \mathbb R^p : \| v_{S^c }\|_1 \le 3\| v_{S}\|_1 \} $, where $ |S| \le { \mathfrak{s} } $. With probability at least $1- n ^{-5}$, it holds that\n$$ \left | v^\top ( \widehat \Sigma_\I -\Sigma ) v \right| \le C \sqrt { \frac{ { \mathfrak{s} } \log(np) }{|\I| } } \|v\|_2^2 \n $$\n for all $v \in \mathcal C _S $ and all $\I \subset (0, n]$ such that $| \I |\ge C_s { \mathfrak{s} } \log(np) $, where $ C_s$\nis the constant in \Cref{lemma:consistency} which is independent of $n, p$. \n\enlem\n\begin{proof} \nFor any $\I \subset (0, n]$ such that $| \I |\ge C_s { \mathfrak{s} } \log(np) $, by \Cref{theorem:restricted eigenvalues 2}, it holds that \n\begin{align*} \nP \left( \sup_{v \in \mathcal C _S , \|v\|_2=1 } \left | v^\top ( \widehat \Sigma _\I -\Sigma ) v \right| \ge C\eta \Lambda_{\max} (\Sigma) \right)\n\le 2\exp( -c |\I| \eta ^2 + 2 { \mathfrak{s} } \log(p) ) .\n\end{align*}\nLet $\eta= C_1 \sqrt { \frac{ { \mathfrak{s} } \log(np) }{|\I | } } $ for sufficiently large constant $C_1$. Note that $\eta< 1$ if $ |\I| > C_1^2 { \mathfrak{s} } \log(np) $. Then with probability at most \n$(np)^{-7} $, \n$$\sup_{v \in \mathcal C _S , \|v\|_2=1 } \left | v^\top ( \widehat \Sigma _\I -\Sigma ) v \right| \ge C_2 \sqrt { \frac{ { \mathfrak{s} } \log(np) }{|\I | } } .$$\nSince there are at most $n^2$ many different choices of $ \I \subset (0,n]$, the desired result follows from a union bound argument. 
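Explicitly, with the above choice of $\eta$ and $C_1$ sufficiently large, the failure probability for a single interval is at most\n\begin{align*}\n2\exp\big( -c\, C_1^2 { \mathfrak{s} } \log(np) + 2 { \mathfrak{s} } \log(p) \big) \le (np)^{-7},\n\end{align*}\nso summing over the at most $n^2$ choices of $\I$ bounds the total failure probability by $n^2 (np)^{-7} \le n^{-5}$, which matches the probability stated in the lemma.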
\n\\end{proof} \n\n\n\\bnlem\n\\label{lem:restricted eigenvalue}\nUnder \\Cref{assp:dcdp_linear_reg}, it holds that\n \\begin{align*}\n\t\t \\mathbb P \\bigg( \\sum_{t\\in \\I } ( X_t^\\top v )^2 \\ge \\frac{c_x|\\I| }{4} \\|v\\|_2^2 - C_2 \\log(np) \\|v\\|_1 ^2 \\ \\forall v \\in \\mathbb R^p \\text{ and } \\forall |\\I|\\ge C_s { \\mathfrak{s} } \\log(np) \\bigg) \\ge 1- n^{-5} \n\t\\end{align*} \n\twhere $ C_2 > 0$ is an absolute constant only depending on $C_x$.\n\\enlem\n\\begin{proof}\nBy the well-known restricted eigenvalue condition, for any $\\I$, it holds that \n \\begin{align*}\n\t\t \\mathbb P \\bigg( \\sum_{t\\in \\I } ( X_t^\\top v )^2 \\ge \\frac{c_x |\\I| }{4} \\|v\\|_2^2 - C_2 \\log(np) \\|v\\|_1 ^2 \\ \\ \\forall v \\in \\mathbb R^p \\bigg) \\ge 1- C_3 \\exp(-c_3|\\I| ).\n\t\\end{align*} \n Since $|\\I| \\ge C_s { \\mathfrak{s} } \\log(np) $ with $C_s$ sufficiently large, \n \\begin{align*}\n\t\t \\mathbb P \\bigg( \\sum_{t\\in \\I } ( X_t^\\top v )^2 \\ge \\frac{c_x |\\I| }{4} \\|v\\|_2^2 - C_2 \\log(np) \\|v\\|_1^2 \\ \\ \\forall v \\in \\mathbb R^p \\bigg) \\ge 1- n^{-7}.\n\t\\end{align*} \n\tSince there are at most $n^2$ many subintervals $\\I \\subset (0,n]$, it follows from a union bound argument that \n\t \\begin{align*}\n\t\t \\mathbb P \\bigg( \\sum_{t\\in \\I } ( X_t^\\top v )^2 \\ge \\frac{c_x |\\I| }{4} \\|v\\|_2^2 - C_2 \\log(np) \\|v\\|_1^2 \\ \\ \\forall v \\in \\mathbb R^p \\text{ and } \\forall |\\I|\\ge C_s { \\mathfrak{s} } \\log(np) \\bigg) \\ge 1- n^{-5} .\n\t\\end{align*} \n\\end{proof} \n\n\n\n\n\\bnlem \\label{lemma:consistency}\nSuppose \\Cref{assp:dcdp_linear_reg} holds. There exists a sufficiently large constant $ C_s$ such that the following conditions hold. \n\\\\\n\\\\\n{\\bf a.} With probability at least $1-n^{-3} $, it holds that\n\\begin{align}\\label{eq:independent condition 1c 1}\n \\left | \\frac{1}{|\\I | } \\sum_{i \\in \\mathcal I } \\epsilon_i X_i^\\top \\beta \\right| \\le C\\sigma_{\\epsilon}\\sqrt { \\frac{\\log(np)}{|\\I | } } \\|\\beta\\|_1 \n \\end{align}\nuniformly for all $ \\beta \\in \\mathbb R^p$ and all $\\mathcal I \\subset (0, n]$ such that $|\\I|\\ge C_s { \\mathfrak{s} } \\log(np) $, \n\\\\\n\\\\\n {\\bf b.} Let $\\{ u_i\\}_{i=1}^n \\subset \\mathbb R^p$ be a collection of deterministic vectors. Then with probability at least $1-n^{-3} $, it holds that \n\\begin{align}\\label{eq:independent condition 1c}\n \\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I } u^\\top_i X_i X_i^\\top \\beta - \\frac{1}{ |\\I | } \\sum_{i \\in \\mathcal I } u^\\top _i \\Sigma \\beta \\right| \\le C \\left( \\max _{1\\le i \\le n }\\|u_i\\|_2 \\right) \\sqrt { \\frac{\\log(np)}{|\\I | } } \\|\\beta\\|_1 \n \\end{align}\n uniformly for all $ \\beta \\in \\mathbb R^p$ and all $\\mathcal I \\subset (0, n]$ such that $|\\I|\\ge C_s { \\mathfrak{s} } \\log(np) $. \n\\enlem \n \\begin{proof}\n The justification of \\eqref{eq:independent condition 1c 1} is similar to, and simpler than, that of \\eqref{eq:independent condition 1c}. 
For conciseness, only the justification of \\eqref{eq:independent condition 1c} is presented.\n \nFor any $\\mathcal I \\subset (0, n]$ such that $|\\I|\\ge C_s { \\mathfrak{s} } \\log(np) $, it holds that\n \\begin{align*}\n &\\left| \\frac{1}{ |\\I | } \\sum_{i\\in \\I } u^\\top _i X_i X_i^\\top \\beta - \\frac{1}{ |\\I | } \\sum_{i\\in \\I } u^\\top_i \\Sigma \\beta \\right| \n \\\\\n =\n & \\left| \\left( \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top _i X_i X_i^\\top - \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top_i \\Sigma \\right) \\beta \\right| \n \\\\\n \\le \n & \\max_{1\\le j \\le p } \\left | \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top _i X_i X_{i,j} - \\frac{1}{|\\I | } \\sum_{i\\in \\I } u^\\top_i \\Sigma (, j) \\right| \\|\\beta\\|_1.\n \\end{align*}\n Note that \n $ E(u^\\top _i X_i X_{i,j} ) =u^\\top _i \\Sigma (, j) $ and in addition, \n $$u^\\top _ i X_i \\sim N(0, u^\\top _i \\Sigma u _i ) \\quad \\text{ and } \\quad X_{i,j} \\sim N(0, \\Sigma({j,j})) .$$\n So $u^\\top _i X_i X_{i,j} $ is a sub-exponential random variable such that \n $$ u^\\top _i X_i X_{i,j} \\sim SE( u^\\top_i \\Sigma u _i \\Sigma ({j,j})).$$\n As a result, for $\\gamma<1$ and every $j$,\n$$ P \\left( \\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I } u^\\top _ i X_i X_{i,j} - u^\\top \\Sigma (, j) \\right| \\ge \\gamma \\sqrt { \\max_{1\\le i\\le n } ( u^\\top_i \\Sigma u_i) \\Sigma ( {j,j} ) }\\right) \\le \\exp(-c \\gamma^2 |\\I | ). $$\nSince \n$$ \\sqrt { u^\\top _i \\Sigma u _i \\Sigma ({j,j} ) } \\le C_x \\|u_i \\|_2, $$\nby union bound,\n$$ \\mathbb P \\left( \\max_{1\\le j \\le p }\\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I} u^\\top _ i X_i X_{i,j} - \\frac{1}{|\\I | } \\sum_{i \\in\\I} u^\\top _i \\Sigma (, j) \\right| \\ge \\gamma \nC_x \\left( \\max _{1\\le i \\le n }\\|u_i\\|_2 \\right) \\right) \\le p\\exp(-c \\gamma^2 |\\I| ). $$\nLet $\\gamma = 3\\sqrt { \\frac{\\log(np) }{c |\\I| } }$. Note that $ \\gamma<1$ if $|\\I|\\ge C_s { \\mathfrak{s} } \\log(np) $ for sufficiently large $C_s$. Therefore \n$$ \\mathbb P \\left( \\max_{1\\le j \\le p }\\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I} u^\\top _ i X_i X_{i,j} - \\frac{1}{|\\I | } \\sum_{i \\in\\I} u^\\top _i \\Sigma (, j) \\right| \\ge \nC_1\\sqrt{ \\frac{ \\log(np)}{|\\I| } } \\left( \\max _{1\\le i \\le n }\\|u_i\\|_2 \\right) \\right) \\le p\\exp(-9\\log(np) ). $$\nSince there are at most $n^2$ many intervals $\\I \\subset(0,n]$, it follows that \n\\begin{align*}\n &\\mathbb P \\left( \\max_{1\\le j \\le p }\\left | \\frac{1}{|\\I | } \\sum_{i \\in \\I} u^\\top _ i X_i X_{i,j} - \\frac{1}{|\\I | } \\sum_{i \\in\\I} u^\\top _i \\Sigma (, j) \\right| \\ge \nC_1\\sqrt{ \\frac{ \\log(np)}{|\\I| } } \\left( \\max _{1\\le i \\le n }\\|u_i\\|_2 \\right) \\ \\forall |\\I| \\ge C_s { \\mathfrak{s} } \\log(np) \\right)\n\\\\\n \\le & p n^2 \\exp(-9\\log(np) ) \\le n^{-3}. 
\n\\end{align*}\nThis immediately gives \\eqref{eq:independent condition 1c}.\n \\end{proof} \n \n \n \n \n \\bnlem\\label{lem-x-bound}\nUnder \\Cref{assp:dcdp_linear_reg}, for any interval $\\I \\subset (0, n]$, for any \n\t\\[\n\t\t\\lambda \\geq \\lambda_1 := C_{\\lambda}\\sigma_{\\epsilon} \\sqrt{\\log(n p)},\n\t\\]\n\twhere $C_{\\lambda} > 0$ is a large enough absolute constant, it holds with probability at least $1 - n^{-5}$ that\n\t\\[\n \\|\\sum_{i \\in \\I} \\epsilon_i X_i \\|_{\\infty} \\leq \\lambda \\sqrt{\\max\\{|\\I|, \\, \\log(n p)\\}}\/8,\n\t\\]\n\twhere the probability is with respect to the covariates $\\{X_i\\}$ and the noise variables $\\{\\epsilon_i\\}$.\n\\enlem\n\n\\begin{proof}\n\tSince $\\epsilon_i$'s are sub-Gaussian random variables and $X_i$'s are sub-Gaussian random vectors, we have that $\\epsilon_i X_i$'s are sub-exponential random vectors with $\\|\\epsilon_iX_i\\|_{\\psi_1}\\leq C_x \\sigma_{\\epsilon}$ \\citep[see e.g.~Lemma~2.7.7 in][]{vershynin2018high}. It then follows from Bernstein's inequality \\citep[see e.g.~Theorem~2.8.1 in][]{vershynin2018high} that for any $t > 0$,\n\t\t\\[\n\t\\mathbb{P}\\left\\{\\|\\sum_{i \\in \\I} \\epsilon_i X_i \\|_{\\infty} > t\\right\\} \\leq 2p \\exp\\left\\{-c \\min\\left\\{\\frac{t^2}{|\\I|C_x^2\\sigma^2_{\\epsilon}}, \\, \\frac{t}{C_x \\sigma_{\\epsilon}}\\right\\}\\right\\}.\n\t\t\\]\n\t\tTaking \n\t\t\\[\n\t\t\tt = C_{\\lambda}C_x\/4 \\sigma_{\\epsilon} \\sqrt{\\log(n p)} \\sqrt{\\max\\{|\\I|, \\, \\log(n p)\\}}\n\t\t\\] \n\t\tyields the conclusion.\n\\end{proof}\n \n \n \n \n\n\\bnlem \\label{lemma:beta bounded 1}\nSuppose \\Cref{assp:dcdp_linear_reg} holds. Let $\\I \\subset [1,n] $. Denote \n $ \\kappa = \\min_{k\\in \\{1,\\ldots, K\\} } \\kappa_k,$ \nwhere $ \\{ \\kappa_k\\}_{k=1}^K$ are defined in \\Cref{assp:dcdp_linear_reg}. Then for any $i\\in [n]$,\n $$\\|\\beta^*_\\I - \\beta^*_i\\|_2 \\le C\\kappa \\le CC_\\kappa,$$\n for some absolute constant $C$ independent of $n$.\n\\enlem \n \\begin{proof}\n It suffices to consider $\\I =[1,n]$ and $\\beta_i^*= \\beta_1^*$ as the general case is similar. Observe that \n \\begin{align*}\n \\| \\beta^*_{[1,n]} - \\beta_ 1^* \\|_2 =& \n \\| \\frac{1}{n} \\sum_{i=1}^n \\beta_i ^* - \\beta_1^* \n \\|_2 \n = \\| \\frac{1}{n} \\sum_{k=0}^{K} \\Delta_k \\beta_{\\eta_k+ 1} ^* - \\frac{1}{n}\\sum_{k=0}^{K} \\Delta_k \\beta_1 ^*\n \\|_2 \n \\\\\n \\le \n & \\frac{1}{n} \\sum_{k=0}^{K} \\left\\| \\Delta_k (\\beta_{\\eta_k+ 1} ^* -\\beta_1 ^* ) \\right\\|_2 \\le \n \\frac{1}{n}\\sum_{k=0}^{K} \\Delta_k (K+1) \\kappa \\le (K+1)\\kappa . \n \\end{align*}\n By \\Cref{assp:dcdp_linear_reg}, both $\\kappa $ and $K $ are bounded above. \n \\end{proof}\n \n \\bnlem\\label{eq:cumsum upper bound}\n Let $t\\in \\I =(s,e] \\subset [1,n]$. Denote \n $ \\kappa_{\\max}= \\max_{k\\in \\{1,\\ldots, K\\} } \\kappa_k,$ \nwhere $ \\{ \\kappa_k\\}_{k=1}^K$ are defined in \\Cref{assp:dcdp_linear_reg}. Then \n $$ \\sup_{0< s < t \\le e \\le n} \\; \\cdots $$\n\\enlem\n\n\\bnlem\\label{lem:subexp deviation martingale}\nLet $\\{z_i\\}_{i \\geq 1}$ be independent mean-zero sub-exponential random variables with $\\sup _{i \\geq 1}\\left\\|z_i\\right\\|_{\\psi_1} \\leq 1$. Then for any $d>0, \\alpha>0$ and any $x>0$\n$$\n\\mathbb{P}\\left(\\max _{k \\in[d,(1+\\alpha) d]} \\frac{\\sum_{i=1}^k z_i}{\\sqrt{k}} \\geq x\\right) \\leq \\exp \\left\\{-\\frac{x^2}{2(1+\\alpha)}\\right\\}+\\exp \\left\\{-\\frac{\\sqrt{d} x}{2}\\right\\} .\n$$\n\\enlem\n\\bprf\nDenote $S_n=\\sum_{i=1}^n z_i$. Let $\\zeta=\\sup _{1 \\leq i \\leq \\infty}\\left\\|z_i\\right\\|_{\\psi_1}$. For any two integers $m < n$, $S_n-S_m$ is again a sum of independent mean-zero sub-exponential variables; combining Doob's maximal inequality with Bernstein's inequality for sub-exponential variables over the range $k \\in[d,(1+\\alpha) d]$ yields the two terms in the stated bound.\n\\eprf\n\n\\bnlem\\label{lem:subexp uniform loglog law}\nLet $\\{z_i\\}_{i \\geq 1}$ be independent mean-zero sub-exponential random variables with $\\sup _{i \\geq 1}\\left\\|z_i\\right\\|_{\\psi_1} \\leq 1$ and let $\\nu>0$ be given. 
For any $x>0$, it holds that\n$$\n\\mathbb{P}\\left(\\sum_{i=1}^r z_i \\leq 4 \\sqrt{r\\{\\log \\log (4 \\nu r)+x+1\\}}+4 \\sqrt{r \\nu}\\{\\log \\log (4 \\nu r)+x+1\\} \\text { for all } r \\geq 1 \/ \\nu\\right) \\geq 1-2 \\exp (-x) \\text {. }\n$$\n\\enlem\n\\bprf\nLet $s \\in \\mathbb{Z}^{+}$and $\\mathcal{T}_s=\\left[2^s \/ \\nu, 2^{s+1} \/ \\nu\\right]$. By \\Cref{lem:subexp deviation martingale}, for all $x>0$,\n$$\n\\mathbb{P}\\left(\\sup _{r \\in \\mathcal{T}_s} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq x\\right) \\leq \\exp \\left\\{-\\frac{x^2}{4}\\right\\}+\\exp \\left\\{-\\frac{\\sqrt{2^s \/ \\nu} x}{2}\\right\\} \\leq \\exp \\left\\{-\\frac{x^2}{4}\\right\\}+\\exp \\left\\{-\\frac{x}{2 \\sqrt{\\nu}}\\right\\} .\n$$\nTherefore by a union bound,\n\\begin{align}\n & \\mathbb{P}\\left(\\exists s \\in \\mathbb{Z}^{+}: \\sup _{r \\in \\mathcal{T}_s} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq 2 \\sqrt{\\log \\log ((s+1)(s+2))+x}+2 \\sqrt{\\nu}\\{\\log \\log ((s+1)(s+2))+x\\}\\right) \\nonumber \\\\\n\\leq & \\sum_{s=0}^{\\infty} 2 \\frac{\\exp (-x)}{(s+1)(s+2)}=2 \\exp (-x) \\label{tmp_eq: iterated log}.\n\\end{align}\nFor any $r \\geq 2^s \/ \\nu, s \\leq \\log (r \\nu) \/ \\log (2)$, and therefore\n$$\n(s+1)(s+2) \\leq \\frac{\\log (2 r \\nu) \\log (4 r \\nu)}{\\log ^2(2)} \\leq\\left(\\frac{\\log (4 r \\nu)}{\\log (2)}\\right)^2 .\n$$\nThus\n$$\n\\log ((s+1)(s+2)) \\leq 2 \\log \\left(\\frac{\\log (4 r \\nu)}{\\log (2)}\\right) \\leq 2 \\log \\log (4 r \\nu)+1 .\n$$\nThe above display together with \\eqref{tmp_eq: iterated log} gives\n$$\n\\mathbb{P}\\left(\\sup _{r \\geq 1 \/ \\nu} \\frac{\\sum_{i=1}^r z_i}{\\sqrt{r}} \\geq 2 \\sqrt{2 r \\log \\log (4 r \\nu)+x+1}+2 \\sqrt{r \\nu}\\{\\log \\log (4 r \\nu)+x+1\\}\\right) \\leq 2 \\exp (-x) .\n$$\n\\eprf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsubsection{Mean}\n\\label{sec:mean op1}\nSuppose $\\alpha^*,\\beta^*\\in \\mathbb{R}^p$ and\n$$\ny_i=\\begin{cases}\n\\alpha^*+\\epsilon_i & \\text { when } i \\in(0, \\eta] \\\\\n\\beta^*+\\epsilon_i & \\text { when } i \\in(\\eta, n]\n\\end{cases}\n$$\nwhere $\\{\\epsilon_i\\}$ is an i.i.d sequence of subgaussian variables such that $\\|\\epsilon_i\\|_{\\psi_2}=\\sigma_\\epsilon<\\infty$. In addition, suppose that there exists $\\theta \\in(0,1)$ such that $\\eta=\\lfloor n \\theta\\rfloor$ and that $\\|\\alpha^*-\\beta^*\\|_2=\\kappa<\\infty$. 
Suppose that $\\|\\alpha^*\\|_0 \\leq \\mathfrak{s},\\|\\beta^*\\|_0 \\leq \\mathfrak{s}$ and that\n\\begin{equation}\n \\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n}} \\rightarrow 0, \\quad \\frac{\\mathfrak{s}^2 \\log ^{3 \/ 2}(p)}{n \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n} \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log (p)}{n \\kappa^2} \\rightarrow 0.\n \\label{op1_eq:mean snr}\n\\end{equation}\nSuppose there exist $\\widehat{\\alpha}$ and $\\widehat{\\beta}$ such that\n\\begin{align*}\n & \\|\\widehat{\\alpha}-\\alpha^*\\|_2^2=O_p\\left(\\frac{\\mathfrak{s} \\log (p)}{n}\\right) \\text{ and } \\|\\widehat{\\alpha}-\\alpha^*\\|_1=O_p\\left(\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{n}}\\right); \\\\\n& \\|\\widehat{\\beta}-\\beta^*\\|_2^2=O_p\\left(\\frac{\\mathfrak{s} \\log (p)}{n}\\right) \\text{ and } \\|\\widehat{\\beta}-\\beta^*\\|_1=O_p\\left(\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{n}}\\right).\n\\end{align*}\nLet\n$$\n\\widehat{\\mathcal{Q}}(k)=\\sum_{i=1}^k\\|y_i- \\widehat{\\alpha}\\|_2^2+\\sum_{i=k+1}^n\\|y_i-\\widehat{\\beta}\\|_2^2 \\quad \\text { and } \\quad \\mathcal{Q}^*(k)=\\sum_{i=1}^k\\|y_i- \\alpha^*\\|_2^2+\\sum_{i=k+1}^n\\|y_i- \\beta^*\\|_2^2 .\n$$\n\n\\bnlem[Refinement for the mean model]\nLet\n$$\n\\eta+r=\\underset{k \\in(0, n]}{\\arg \\min } \\widehat{\\mathcal{Q}}(k) .\n$$\nThen under the assumptions above, it holds that\n$$\n\\kappa^2 r=O_p(1) .\n$$\n\\enlem\n\\bprf\nBy the assumptions above, we have\n$$\n\\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n}} \\rightarrow 0, \\quad \\frac{\\mathfrak{s}^2 \\log ^{3 \/ 2}(p)}{n \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n} \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log (p)}{n \\kappa^2} \\rightarrow 0 .\n$$\nWithout loss of generality, suppose $r \\geq 0$. Since $\\eta+r$ is the minimizer, it follows that\n$$\n\\widehat{\\mathcal{Q}}(\\eta+r) \\leq \\widehat{\\mathcal{Q}}(\\eta) .\n$$\nIf $r \\leq \\frac{1}{\\kappa^2}$, then there is nothing to show. So for the rest of the argument, for contradiction, assume that\n$$\nr \\geq \\frac{1}{\\kappa^2} .\n$$\nObserve that\n$$\n\\begin{aligned}\n\\widehat{\\mathcal{Q}}(\\eta + r)-\\widehat{\\mathcal{Q}}(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\alpha}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i-\\widehat{\\beta}\\|_2^2 \\\\\n\\mathcal{Q}^*(\\eta + r)-\\mathcal{Q}^*(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\alpha^*\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\beta^*\\|_2^2\n\\end{aligned}\n$$\n\\textbf{Step 1}. 
It follows that\n$$\n\\begin{aligned}\n& \\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\alpha}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i-\\alpha^*\\|_2^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{\\alpha}- \\alpha^*\\|_2^2+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-\\alpha^*\\right) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{\\alpha}- \\alpha^*\\|_2^2+2r\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\left(\\beta^*-\\alpha^*\\right) +2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r}\\epsilon_i\n\\end{aligned}\n$$\nBy assumptions, we have\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{\\alpha}- \\alpha^*\\|_2^2 \\leq C r \\frac{\\mathfrak{s} \\log (p)}{n} \\\\\n=o_p\\left(r \\sigma^2_{\\epsilon} \\kappa^2\\right).\n$$\nSimilarly\n$$\nr\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top}\\left(\\beta^*-\\alpha^*\\right)\\leq r\\|\\widehat{\\alpha}-\\alpha^*\\|_2\\|\\beta^*-\\alpha^*\\|_2 \\leq\nO_p\\left(r\\kappa \\sqrt{\\frac{\\mathfrak{s} \\log (p)}{n} } \\right)=o_p\\left(r\\kappa^2\\right)\n$$\nwhere the second equality follows from $\\|\\beta^*-\\alpha^*\\|_2=\\kappa$, and the last equality follows from \\eqref{op1_eq:mean snr}. In addition,\n$$\n\\begin{aligned}\n&\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i \\leq\\|\\widehat{\\alpha}-\\alpha^*\\|_1\\|\\sum_{i=\\eta+1}^{\\eta+r}\\epsilon_i\\|_{\\infty} \\\\\n=& O_p\\left(\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{n}}\\right) O_p(\\sqrt{r \\log (p)})=O_p(\\kappa \\sqrt{r})\n\\end{aligned}\n$$\nTherefore\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\alpha}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i-\\alpha^*\\|_2^2=O_p(\\kappa \\sqrt{r})+o_p\\left(r \\kappa^2\\right) .\n$$\n\\textbf{Step 2}. Using the same argument as in the previous step, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\widehat{\\beta}\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\beta^*\\|_2^2=O_p(\\kappa \\sqrt{r})+o_p(r \\kappa^2) .\n$$\nTherefore\n\\begin{equation}\n \\left|\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta)-\\left\\{\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\right\\}\\right|=O_p(\\kappa \\sqrt{r})+o_p\\left(r \\kappa^2\\right)\n \\label{tmp_eq:op1 mean main prop eq 1}\n\\end{equation}\n\\textbf{Step 3}. Observe that\n$$\n\\begin{aligned}\n\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\alpha^*\\|_2^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|y_i- \\beta^*\\|_2^2 \\\\\n=& r\\| \\alpha^*- \\beta^*\\|_2^2-2 \\sum_{i=\\eta+1}^{\\eta+r}(y_i- \\beta^*)( \\alpha^*- \\beta^*) \\\\\n=& r\\| \\alpha^*- \\beta^*\\|_2^2 -2 (\\alpha^*- \\beta^*)^\\top\\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i\n\\end{aligned}\n$$\nLet\n$$\nw_i=\\frac{1}{\\sigma_{\\epsilon}\\kappa} \\epsilon_i^\\top\\left( \\alpha^*-\\beta^*\\right)\n$$\nThen $\\left\\{w_i\\right\\}_{i=1}^{\\infty}$ are subgaussian random variables with bounded $\\psi_2$ norm. 
Therefore by \\Cref{lem:subexp uniform loglog law}, uniformly for all $r \\geq 1 \/ \\kappa^2$,\n$$\n\\sum_{i=1}^r w_i=O_p\\left(\\sqrt{r\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right)\n$$\nSince $\\kappa<\\infty$, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i^\\top \\left( \\alpha^*-\\beta^*\\right)=O_p\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right)\n$$\nTherefore\n\\begin{equation}\n \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) = r \\kappa^2+O_p\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right)\n\\label{tmp_eq:op1 mean main prop eq 2}\n\\end{equation}\n\n\\textbf{Step 4}. \\Cref{tmp_eq:op1 mean main prop eq 1} and \\Cref{tmp_eq:op1 mean main prop eq 2} together give, uniformly for all $r \\geq 1 \/ \\kappa^2$,\n$$\n r \\kappa^2+O_p\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right) \\leq O_p(\\kappa \\sqrt{r})+o_p\\left(r \\kappa^2\\right)\n$$\nThis implies that\n$$\nr \\kappa^2=O_p(1) ,\n$$\nwhich is the desired conclusion.\n\\eprf\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsubsection{Covariance}\n\\label{sec:cov op1}\nSuppose $G^*,H^*\\in \\mathbb{R}^{d\\times d}$ and\n$$\n\\mathbb{E}[X_iX_i^\\top]=\\left\\{\\begin{array}{ll}\nG^* & \\text { when } i \\in(0, \\eta] \\\\\nH^* & \\text { when } i \\in(\\eta, n]\n\\end{array} .\\right.\n$$\nIn addition, suppose that there exists $\\theta \\in(0,1)$ such that $\\eta=\\lfloor n \\theta\\rfloor$ and that $\\|G^*-H^*\\|_F=\\kappa_{\\Sigma}<\\infty$. 
Suppose that $c_XI_d \\preceq G^* \\preceq C_X I_d$, $c_XI_d \\preceq H^* \\preceq C_X I_d$, and that\n\\begin{equation}\n \\frac{d^{5\/2} \\log ^{3 \/ 2}(nd)}{\\sqrt{n}} \\rightarrow 0, \\quad \\frac{d^2 \\log ^{3 \/ 2}(nd)}{n \\kappa_{\\Sigma}} \\rightarrow 0, \\quad \\frac{d \\log ^{3 \/ 2}(nd)}{\\sqrt{n} \\kappa_{\\Sigma}} \\rightarrow 0, \\quad \\frac{d \\log (nd)}{n \\kappa_{\\Sigma}^2} \\rightarrow 0.\n \\label{op1_eq:cov snr}\n\\end{equation}\nBy \\Cref{lem:estimation covariance}, there exist $\\widehat{G},\\widehat{H}$ such that\n\\begin{align*}\n & \\|\\widehat{G}-G^*\\|_2=O_p\\left(\\sqrt{\\frac{d \\log (nd)}{n}}\\right) \\text{ and } \\|\\widehat{G}-G^*\\|_F=O_p\\left(d \\sqrt{\\frac{\\log (nd)}{n}}\\right); \\\\\n& \\|\\widehat{H}-H^*\\|_2=O_p\\left(\\sqrt{\\frac{d \\log (nd)}{n}}\\right) \\text{ and } \\|\\widehat{H}-H^*\\|_F=O_p\\left(d \\sqrt{\\frac{\\log (nd)}{n}}\\right).\n\\end{align*}\nLet\n$$\n\\widehat{\\mathcal{Q}}(k)=\\sum_{i=1}^k\\|X_iX_i^\\top- \\widehat{G}\\|_F^2+\\sum_{i=k+1}^n\\|X_iX_i^\\top-\\widehat{H}\\|_F^2 \\quad \\text { and } \\quad \\mathcal{Q}^*(k)=\\sum_{i=1}^k\\|X_iX_i^\\top- G^*\\|_F^2+\\sum_{i=k+1}^n\\|X_iX_i^\\top- H^*\\|_F^2 .\n$$\nThroughout this section, we use $\\kappa_{\\Sigma} = \\|G^*-H^*\\|_F$ to measure the signal.\n\\bnlem[Refinement for the covariance model]\nLet\n$$\n\\eta+r=\\underset{k \\in(0, n]}{\\arg \\min } \\widehat{\\mathcal{Q}}(k) .\n$$\nThen under the assumptions above, it holds that\n$$\n\\kappa_{\\Sigma}^2 r=O_P(\\log(n)) .\n$$\n\\enlem\n\\bprf\nBy the assumptions above, we have\n$$\n\\frac{d^{5\/2} \\log ^{3 \/ 2}(nd)}{\\sqrt{n}} \\rightarrow 0, \\quad \\frac{d^2 \\log ^{3 \/ 2}(nd)}{n \\kappa_{\\Sigma}} \\rightarrow 0, \\quad \\frac{d \\log ^{3 \/ 2}(nd)}{\\sqrt{n} \\kappa_{\\Sigma}} \\rightarrow 0, \\quad \\frac{d \\log (nd)}{n \\kappa_{\\Sigma}^2} \\rightarrow 0 .\n$$\nWithout loss of generality, suppose $r \\geq 0$. Since $\\eta+r$ is the minimizer, it follows that\n$$\n\\widehat{\\mathcal{Q}}(\\eta+r) \\leq \\widehat{\\mathcal{Q}}(\\eta) .\n$$\nIf $r \\leq C\\frac{\\log(n)}{\\kappa_{\\Sigma}^2}$, then there is nothing to show. So for the rest of the argument, for contradiction, assume that\n$$\nr \\geq C\\frac{\\log(n)}{\\kappa_{\\Sigma}^2} .\n$$\nObserve that\n$$\n\\begin{aligned}\n\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{G}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{H}\\|_F^2 \\\\\n\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top-{H}^*\\|_F^2\n\\end{aligned}\n$$\n\\textbf{Step 1}. 
It follows that\n$$\n\\begin{aligned}\n& \\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{G}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\|\\widehat{G}- G^*\\|_F^2+2\\left\\langle G^*-\\widehat{G}, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top-G^*)\\right\\rangle \\\\\n=& r\\|\\widehat{G}- G^*\\|_F^2+2r \\left\\langle G^* -\\widehat{G}, H^*-G^*\\right\\rangle +2\\left\\langle G^*-\\widehat{G}, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top -H^*)\\right\\rangle\n\\end{aligned}\n$$\nBy assumptions, we have\n$$\nr\\|\\widehat{G}- G^*\\|_F^2 =O_p\\left(r \\frac{d^2 \\log (nd)}{n}\\right) \\\\\n=o_p\\left(r \\kappa_{\\Sigma}^2\\right).\n$$\nSimilarly\n$$\n\\left\\langle G^* -\\widehat{G}, H^*-G^*\\right\\rangle\\leq r\\|G^* -\\widehat{G}\\|_F\\| H^*-G^* \\|_F \\leq\nO_p\\left(r\\kappa_{\\Sigma} d\\sqrt{\\frac{ \\log (nd)}{n} } \\right)=o_p\\left(r \\kappa_{\\Sigma}^2\\right)\n$$\nwhere the second equality follows from $\\|G^*-H^*\\|_2=\\kappa_{\\Sigma}$, and the last equality follows from \\eqref{op1_eq:cov snr}. In addition,\n$$\n\\begin{aligned}\n&\\left\\langle G^*-\\widehat{G}, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top -H^*)\\right\\rangle \\leq\\| G^*-\\widehat{G}\\|_F\\|\\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top -H^*)\\|_{F} \\\\\n=& O_p\\left(d \\sqrt{\\frac{\\log (nd)}{n}}\\right) O_p(d\\sqrt{r\\log (nd)}+ d^{3\/2}\\log (nd))=O_p(\\kappa_{\\Sigma} \\sqrt{r})+O_p(1)\n\\end{aligned}\n$$\nTherefore\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{G}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2=O_p(\\kappa_{\\Sigma} \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa_{\\Sigma}^2\\right) .\n$$\n\\textbf{Step 2}. Using the same argument as in the previous step, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- \\widehat{H}\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {H}^*\\|_F^2=O_p(\\kappa_{\\Sigma} \\sqrt{r})+O_p(1)+o_p(r \\kappa_{\\Sigma}^2) .\n$$\nTherefore\n\\begin{equation}\n \\left|\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta)-\\left\\{\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\right\\}\\right|=O_p(\\kappa_{\\Sigma} \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa_{\\Sigma}^2\\right)\n \\label{tmp_eq:op1 cov main prop eq 1}\n\\end{equation}\n\\textbf{Step 3}. 
Observe that\n$$\n\\begin{aligned}\n\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \n=&\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {G}^*\\|_F^2-\\sum_{i=\\eta+1}^{\\eta+r}\\|X_iX_i^\\top- {H}^*\\|_F^2 \\\\\n=& r\\| G^*- H^*\\|_F^2-2 \\left\\langle H^*- G^*, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top- H^*) \\right\\rangle\n\\end{aligned}\n$$\nDenote $D^* = H^*- G^*$, then we can write the noise term as\n$$\n\\left\\langle H^*- G^*, X_iX_i^\\top- H^* \\right\\rangle = X_i^\\top D^* X_i - \\mathbb{E}[X_i^\\top D^* X_i].\n$$\nSince $X_i$'s are Gaussian, denote $\\Sigma_{i} = \\mathbb{E}[X_iX_i^\\top] =U_i^\\top \\Lambda_i U_i$, then\n$$\n\\left\\langle H^*- G^*, \\sum_{i=\\eta+1}^{\\eta+r}(X_iX_i^\\top- H^*) \\right\\rangle = Z^\\top \\tilde{D} Z^\\top - \\mathbb{E}[Z^\\top \\tilde{D} Z^\\top],\n$$\nwhere $Z\\in \\mathbb{R}^{rd}$ is a standard Gaussian vector and \n$$\n\\tilde{D} = {\\rm diag}\\{U_1D^*U_1^\\top,U_2D^*U_2^\\top,\\cdots, U_r D^*U_r^\\top\\}.\n$$\nTherefore, by Hanson-Wright inequality, uniformly for all $r\\geq C\\frac{\\log(n)}{\\kappa_{\\Sigma}^2}$ it holds that\n\\begin{equation}\n \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \\leq r \\kappa_{\\Sigma}^2 + O_P(\\|X\\|_{\\psi_2}^2\\sqrt{r}\\kappa_{\\Sigma}\\log(r\\kappa_{\\Sigma}^2)),\n\\label{tmp_eq:op1 cov main prop eq 2}\n\\end{equation}\n\n\\textbf{Step 4}. \\Cref{tmp_eq:op1 cov main prop eq 1} and \\Cref{tmp_eq:op1 cov main prop eq 2} together give, uniformly for all $r \\geq C\\log(n) \/ \\kappa_{\\Sigma}^2$,\n$$\n r \\kappa_{\\Sigma}^2+O_P(\\|X\\|_{\\psi_2}^2\\sqrt{r}\\kappa_{\\Sigma}\\log(r\\kappa_{\\Sigma}^2)) \\leq O_p(\\kappa_{\\Sigma} \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa_{\\Sigma}^2\\right),\n$$\nwhich leads to contradiction and thus proves that\n$$\nr \\kappa_{\\Sigma}^2=O_P(\\log(n)).\n$$\n\\eprf\n\n\n\n\\bnlem\n\\label{lem:estimation covariance}\nLet $\\{X_i\\}_{i\\in [n]}$ be a sequence of subgaussian vectors in $\\mathbb{R}^{d}$ with orlitz norm upper bounded $\\|X\\|_{\\psi_2}<\\infty$. Suppose $\\mathbb{E}[X_i] = 0$ and $\\mathbb{E}[X_iX_i^\\top] = \\Sigma$ for $i\\in [n]$. Let $\\widehat{\\Sigma}_{n} = \\frac{1}{n}\\sum_{i \\in [n]} X_i X_i^\\top$. Then for any $u>0$, it holds with probability at least $1 - \\exp(-u)$ that\n\\begin{equation}\n \\|\\widehat{\\Sigma}_{n} - \\Sigma\\|_2\\lesssim \\|X\\|_{\\psi_2}^2(\\sqrt{\\frac{d + u}{n}}\\vee \\frac{d + u}{n}).\n\\end{equation}\n\\enlem\n\n\n\n\n\\bnlem[Hanson-Wright inequality]\nLet $X=\\left(X_1, \\ldots, X_n\\right) \\in \\mathbb{R}^n$ be a random vector with independent, mean zero, sub-gaussian coordinates. Let $A$ be an $n \\times n$ matrix. Then, for every $t \\geq 0$, we have\n$$\n\\mathbb{P}\\left\\{\\left|X^{\\top} A X-\\mathbb{E} X^{\\top} A X\\right| \\geq t\\right\\} \\leq 2 \\exp \\left[-c \\min \\left(\\frac{t^2}{K^4\\|A\\|_F^2}, \\frac{t}{K^2\\|A\\|}\\right)\\right],\n$$\nwhere $K=\\max _i\\left\\|X_i\\right\\|_{\\psi_2}$\n\\enlem\n\n\n\n\n\n\n\n\n\\clearpage\n\\subsubsection{Regression}\n\\label{sec:regression op1}\nSuppose\n$$\ny_i=\\left\\{\\begin{array}{ll}\nX_i^{\\top} \\alpha^*+\\epsilon_i & \\text { when } i \\in(0, \\eta] \\\\\nX_i^{\\top} \\beta^*+\\epsilon_i & \\text { when } i \\in(\\eta, n]\n\\end{array} .\\right.\n$$\nIn addition, suppose that there exists $\\theta \\in(0,1)$ such that $\\eta=\\lfloor n \\theta\\rfloor$ and that $\\left|\\alpha^*-\\beta^*\\right|_2=\\kappa<\\infty$. 
Suppose that $\\left\\|\\alpha^*\\right\\|_0 \\leq \\mathfrak{s},\\left\\|\\beta^*\\right\\|_0 \\leq \\mathfrak{s}$ and that\n\\begin{equation}\n \\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n}} \\rightarrow 0, \\quad \\frac{\\mathfrak{s}^2 \\log ^{3 \/ 2}(p)}{n \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n} \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log (p)}{n \\kappa^2} \\rightarrow 0.\n \\label{op1_eq:regression snr}\n\\end{equation}\nSuppose there exist $\\widehat{\\alpha}$ and $\\widehat{\\beta}$ such that\n\\begin{align*}\n & \\|\\widehat{\\alpha}-\\alpha^*\\|_2^2=O_p\\left(\\frac{\\mathfrak{s} \\log (p)}{n}\\right) \\text{ and } \\|\\widehat{\\alpha}-\\alpha^*\\|_1=O_p\\left(\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{n}}\\right); \\\\\n& \\|\\widehat{\\beta}-\\beta^*\\|_2^2=O_p\\left(\\frac{\\mathfrak{s} \\log (p)}{n}\\right) \\text{ and } \\|\\widehat{\\beta}-\\beta^*\\|_1=O_p\\left(\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{n}}\\right).\n\\end{align*}\nLet\n$$\n\\widehat{\\mathcal{Q}}(k)=\\sum_{i=1}^k\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2+\\sum_{i=k+1}^n\\left(y_i-X_i^{\\top} \\widehat{\\beta}\\right)^2 \\quad \\text { and } \\quad \\mathcal{Q}^*(k)=\\sum_{i=1}^k\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2+\\sum_{i=k+1}^n\\left(y_i-X_i^{\\top} \\beta^*\\right)^2 .\n$$\n\n\\bnlem[Refinement for regression]\nLet\n$$\n\\eta+r=\\underset{k \\in(0, n]}{\\arg \\min } \\widehat{\\mathcal{Q}}(k) .\n$$\nThen under the assumptions above, it holds that\n$$\n\\kappa^2 r=O_p(1) .\n$$\n\\enlem\n\\bprf\nBy the assumptions above, we have\n$$\n\\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n}} \\rightarrow 0, \\quad \\frac{\\mathfrak{s}^2 \\log ^{3 \/ 2}(p)}{n \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log ^{3 \/ 2}(p)}{\\sqrt{n} \\kappa} \\rightarrow 0, \\quad \\frac{\\mathfrak{s} \\log (p)}{n \\kappa^2} \\rightarrow 0 .\n$$\nWithout loss of generality, suppose $r \\geq 0$. Since $\\eta+r$ is the minimizer, it follows that\n$$\n\\widehat{\\mathcal{Q}}(\\eta+r) \\leq \\widehat{\\mathcal{Q}}(\\eta) .\n$$\nIf $r \\leq \\frac{1}{\\kappa^2}$, then there is nothing to show. So for the rest of the argument, for contradiction, assume that\n$$\nr \\geq \\frac{1}{\\kappa^2} .\n$$\nObserve that\n$$\n\\begin{aligned}\n\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\beta}\\right)^2 \\\\\n\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) &=\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)^2\n\\end{aligned}\n$$\n\\textbf{Step 1}. 
It follows that\n$$\n\\begin{aligned}\n& \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\widehat{\\alpha}-X_i^{\\top} \\alpha^*\\right)^2+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} X_i \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\widehat{\\alpha}-X_i^{\\top} \\alpha^*\\right)^2+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r X_i X_i^{\\top}\\left(\\beta^*-\\alpha^*\\right)+2\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r} X_i \\epsilon_i\n\\end{aligned}\n$$\nBy \\Cref{lem:regression op1 base lemma 1}, uniformly for all $r$,\n$$\n\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\|_{\\infty} \\geq O_p\\left(\\sqrt{\\frac{\\log (p)}{r}}+\\frac{\\log (p)}{r}\\right) .\n$$\nTherefore\n$$\n\\begin{aligned}\n\\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\widehat{\\alpha}-X_i^{\\top} \\alpha^*\\right)^2 &=\\sum_{i=\\eta+1}^{\\eta+r}\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r\\left\\{X_i X_i^{\\top}-\\Sigma\\right\\}\\left(\\widehat{\\alpha}-\\alpha^*\\right)+r\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\Sigma\\left(\\widehat{\\alpha}-\\alpha^*\\right) \\\\\n& \\leq\\|\\widehat{\\alpha}-\\alpha^*\\|_1^2\\left\\|\\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\|_{\\infty}+\\Lambda_{\\max } r\\|\\widehat{\\alpha}-\\alpha^*\\|_2^2\\\\\n&=O_p\\left(\\frac{\\mathfrak{s}^2 \\log (p)}{n}\\right)O_p(\\sqrt{r \\log (p)}+\\log (p))+O_p\\left(r \\frac{\\mathfrak{s} \\log (p)}{n}\\right) \\\\\n&=O_p\\left(\\sqrt{r \\kappa^2}\\right)+O_p(1)+o_p\\left(r \\kappa^2\\right)\n\\end{aligned}\n$$\nwhere the inequality follows from \\Cref{lem:regression op1 base lemma 1} and last equality follows from assumption \\eqref{op1_eq:regression snr}. Similarly\n$$\n\\begin{aligned}\n\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r X_i X_i^{\\top}\\left(\\beta^*-\\alpha^*\\right) &=\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=1}^r\\left\\{X_i X_i^{\\top}-\\Sigma\\right\\}\\left(\\beta^*-\\alpha^*\\right)+r\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\Sigma\\left(\\beta^*-\\alpha^*\\right) \\\\\n& \\leq \\|\\widehat{\\alpha}-\\alpha^*\\|_1\\|\\left(\\beta^*-\\alpha^*\\right)^{\\top}\\left\\{\\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\}\\|_{\\infty}+\\Lambda_{\\max } r\\|\\widehat{\\alpha}-\\alpha^*\\|_2\\|\\beta^*-\\alpha^*\\|_2 \\\\\n&=O_p\\left(\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{n}}\\right) O_p(\\kappa\\{\\sqrt{r \\log (p)}+\\log (p)\\})+O_p\\left(r\\kappa \\sqrt{\\frac{\\mathfrak{s} \\log (p)}{n} } \\right).\\\\\n&=O_p(\\kappa \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa^2\\right)\n\\end{aligned}\n$$\nwhere the second equality follows from $\\|\\beta^*-\\alpha^*\\|_2=\\kappa$ and \\Cref{lem:regression op1 base lemma 2}, and the last equality follows from \\eqref{op1_eq:regression snr}. In addition,\n$$\n\\begin{aligned}\n&\\left(\\widehat{\\alpha}-\\alpha^*\\right)^{\\top} \\sum_{i=\\eta+1}^{\\eta+r} X_i \\epsilon_i \\leq\\|\\widehat{\\alpha}-\\alpha^*\\|_1\\|\\sum_{i=\\eta+1}^{\\eta+r} X_i \\epsilon_i\\|_{\\infty} \\\\\n=& O_p\\left(\\mathfrak{s} \\sqrt{\\frac{\\log (p)}{n}}\\right) O_p(\\sqrt{r \\log (p)}+\\log (p))=O_p(\\kappa \\sqrt{r})+O_p(1)\n\\end{aligned}\n$$\nwhere the second equality follows from \\Cref{lem:regression op1 base lemma 1}. 
Therefore\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\alpha}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2=O_p(\\kappa \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa^2\\right) .\n$$\n\\textbf{Step 2}. Using the same argument as in the previous step, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\widehat{\\beta}\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)^2=O_p(\\kappa \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa^2\\right) .\n$$\nTherefore\n\\begin{equation}\n \\left|\\widehat{\\mathcal{Q}}(\\eta+r)-\\widehat{\\mathcal{Q}}(\\eta)-\\left\\{\\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta)\\right\\}\\right|=O_p(\\kappa \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa^2\\right)\n \\label{tmp_eq:op1 regression main prop eq 1}\n\\end{equation}\n\\textbf{Step 3}. Observe that\n$$\n\\begin{aligned}\n& \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\alpha^*\\right)^2-\\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)^2 \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)^2-2 \\sum_{i=\\eta+1}^{\\eta+r}\\left(y_i-X_i^{\\top} \\beta^*\\right)\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right) \\\\\n=& \\sum_{i=\\eta+1}^{\\eta+r}\\left(\\alpha^*-\\beta^*\\right)^{\\top}\\left\\{X_i^{\\top} X_i-\\Sigma\\right\\}\\left(\\alpha^*-\\beta^*\\right)+r\\left(\\alpha^*-\\beta^*\\right)^{\\top} \\Sigma\\left(\\alpha^*-\\beta^*\\right)-2 \\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)\n\\end{aligned}\n$$\nNote that\n$$\nz_i=\\frac{1}{\\kappa^2}\\left(\\alpha^*-\\beta^*\\right)^{\\top}\\left\\{X_i^{\\top} X_i-\\Sigma\\right\\}\\left(\\alpha^*-\\beta^*\\right)\n$$\nis a sub-exponential random variable with bounded $\\psi_1$ norm. Therefore by \\Cref{lem:subexp uniform loglog law}, uniformly for all $r \\geq 1 \/ \\kappa^2$,\n$$\n\\sum_{i=1}^r z_i=O_p\\left(\\sqrt{r\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right) .\n$$\nSince $\\kappa<\\infty$, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r}\\left(\\alpha^*-\\beta^*\\right)^{\\top}\\left\\{X_i^{\\top} X_i-\\Sigma\\right\\}\\left(\\alpha^*-\\beta^*\\right)=O_p\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right)\n$$\nSimilarly, let\n$$\nw_i=\\frac{1}{\\kappa} \\epsilon_i\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)\n$$\nThen $\\left\\{w_i\\right\\}_{i=1}^{\\infty}$ are sub-exponential random variables with bounded $\\psi_1$ norm. 
Therefore by \\Cref{lem:subexp uniform loglog law}, uniformly for all $r \\geq 1 \/ \\kappa^2$,\n$$\n\\sum_{i=1}^r w_i=O_p\\left(\\sqrt{r\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right)\n$$\nSince $\\kappa<\\infty$, it follows that\n$$\n\\sum_{i=\\eta+1}^{\\eta+r} \\epsilon_i\\left(X_i^{\\top} \\alpha^*-X_i^{\\top} \\beta^*\\right)=O_p\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right)\n$$\nTherefore\n\\begin{equation}\n \\mathcal{Q}^*(\\eta+r)-\\mathcal{Q}^*(\\eta) \\geq \\Lambda_{\\min } r \\kappa^2+O_p\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right)\n\\label{tmp_eq:op1 regression main prop eq 2}\n\\end{equation}\nwhere $\\Lambda_{\\min }$ is the minimal eigenvalue of $\\Sigma$.\n\n\\textbf{Step 4}. \\Cref{tmp_eq:op1 regression main prop eq 1} and \\Cref{tmp_eq:op1 regression main prop eq 1} together give, uniformly for all $r \\geq 1 \/ \\kappa^2$,\n$$\n\\Lambda_{\\min } r \\kappa^2+O_p\\left(\\sqrt{r \\kappa^2\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}}+\\sqrt{r \\kappa^2}\\left\\{\\log \\log \\left(\\kappa^2 r\\right)+1\\right\\}\\right) \\leq O_p(\\kappa \\sqrt{r})+O_p(1)+o_p\\left(r \\kappa^2\\right)\n$$\nThis implies that\n$$\nr \\kappa^2=O_p(1)\n$$\n\\eprf\n\n\n\\bnlem\n\\label{lem:regression op1 base lemma 1}\nSuppose $\\left\\{X_i\\right\\}_{i=1}^n \\stackrel{i . i . d}{\\sim} N_p(0, \\Sigma)$ and $\\left\\{\\epsilon_i\\right\\}_{i=1}^n \\stackrel{i i . d}{\\sim} N\\left(0, \\sigma^2\\right)$. Then it holds that\n$$\n\\begin{aligned}\n&\\mathbb{P}\\left(\\|\\frac{1}{r} \\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p)}{r}}+\\frac{\\log (p)}{r}\\right) \\text { for all } 1 \\leq r \\leq n\\right) \\leq n^{-2}, \\\\\n&\\mathbb{P}\\left(\\|\\frac{1}{r} \\sum_{i=1}^r X_i \\epsilon_i\\|_{\\infty} \\geq C_2\\left(\\sqrt{\\frac{\\log (p)}{r}}+\\frac{\\log (p)}{r}\\right) \\text { for all } 1 \\leq r \\leq n\\right) \\leq n^{-2}\n\\end{aligned}\n$$\n\\enlem\n\\bprf\nProof. For the first probability bound, observe that for any $j, k \\in[1, \\ldots, p], X_j X_k-\\Sigma_{j k}$ is subexponential random variable. Therefore for any $r>0$,\n$$\n\\mathbb{P}\\left\\{\\left|\\frac{1}{r} \\sum_{i=1}^r X_{i j} X_{i k}-\\Sigma_{j k}\\right| \\geq x\\right\\} \\leq \\exp \\left(-r c_1 x^2\\right)+\\exp \\left(-r c_2 x\\right)\n$$\nSo\n$$\n\\mathbb{P}\\left\\{\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_{i j} X_{i k}-\\Sigma_{j k}\\right\\|_{\\infty} \\geq x\\right\\} \\leq p \\exp \\left(-r c_1 x^2\\right)+p \\exp \\left(-r c_2 x\\right) .\n$$\nThis gives, for sufficiently large $C_1>0$,\n$$\n\\mathbb{P}\\left\\{\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_{i j} X_{i k}-\\Sigma_{j k}\\right\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p n)}{r}}+\\frac{\\log (p n)}{r}\\right)\\right\\} \\leq n^{-3} .\n$$\nBy a union bound,\n$$\n\\mathbb{P}\\left\\{\\left\\|\\frac{1}{r} \\sum_{i=1}^r X_{i j} X_{i k}-\\Sigma_{j k}\\right\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p n)}{r}}+\\frac{\\log (p n)}{r}\\right) \\text { for all } 1 \\leq r \\leq n\\right\\} \\leq n^{-2}\n$$\nThe desired result follows from the assumption that $p \\geq n^\\alpha$. 
The second probability bound follows from the same argument and therefore is omitted for brevity.\n\\eprf\n\n\n\n\n\\bnlem\n\\label{lem:regression op1 base lemma 2}\n Suppose $\\left\\{X_i\\right\\}_{i=1}^n \\stackrel{2.2 . d}{\\sim} N_p(0, \\Sigma)$ and $u \\in \\mathbb{R}^p$ is a deterministic vector such that $|u|_2=1$. Then it holds that\n$$\n\\mathbb{P}\\left(\\|u^{\\top}\\left\\{\\frac{1}{r} \\sum_{i=1}^r X_i X_i^{\\top}-\\Sigma\\right\\}\\|_{\\infty} \\geq C_1\\left(\\sqrt{\\frac{\\log (p)}{r}}+\\frac{\\log (p)}{r}\\right) \\text { for all } 1 \\leq r \\leq n\\right) \\leq n^{-2} .\n$$\n\\enlem\n\\bprf\nFor fixed $j \\in[1, \\ldots, p]$, let\n$$\nz_i=u^{\\top} X_i X_{i j}-u^{\\top} \\Sigma_{\\cdot j},\n$$\nwhere $\\Sigma_{\\cdot j}$ denote the $j$-th column of $\\Sigma$. Note that $z_i$ is a sub-exponential random variable with bounded $\\psi_1$ norm. The desired result follows from the same argument as \\Cref{lem:regression op1 base lemma 1}.\n\\eprf\n\n\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\\section{Detail of algorithms}\n\\label{sec:dp}\n\nThe $\\ell_0$ penalization approach considers the joint optimization problem\n\\begin{equation}\n \\hat{\\mclP} = \\argmin_{\\mclP} \\left\\lbrace \\sum_{I\\in \\mclP} L(\\hat{\\theta}(\\mclI),\\mclI) + \\gamma |\\mclP| \\right\\rbrace, \\quad \\hat{\\theta}(\\mclI) = \\argmin_{\\theta} L(\\theta,\\mclI),\n \\label{eq:P_hat_high_dim}\n\\end{equation}\nwhere $\\mclP$ is a collections of points in $(0,T]$ and for any interval $\\mclI = (s,e]$, $\\mclI\\in \\mclP$ means $\\{s,e\\}\\in \\mclP$. If we put the constraint $|\\mclP|\\leq K$, then a brute-force search can find the global minimizer $\\hat{\\mclP}$ with $O(T^K)$ time complexity. For a given $\\gamma$, it can be seen that $|\\hat{\\mclP}|\\leq \\frac{1}{\\gamma}L(\\hat{\\theta}((0,T]),(0,T])$. Instead, by the dynamic programming algorithm \\ref{algorithm:DP} with grid $[T]$, we can solve \\eqref{eq:P_hat_high_dim} with $T^2$ complexity, which is free of parameter $\\gamma$.\n\n\n\n\n\n\\section{Additional Experiments}\n\\label{sec:add_experiment}\n\n\n\\subsection{Selection of $\\gamma$}\nIn the theory of DCDP, we need $\\gamma = C_{\\gamma}K { \\mathcal B_n^{-1} \\Delta } $, which is increasing with $\\Delta$. If $\\gamma$ gets smaller, it's more likely that the DP will select more false positive change points. In experiments, we can observe a similar phenomenon: in \\Cref{fig:small_big_gamma}, the localization error using CV list of $\\gamma$ with bigger values is smaller than that using CV list with smaller values.\n\nThe next question is, if we include both small and big values of $\\gamma$ in the CV list, can the algorithm pick the big values by itself? The answer is NO. \\Cref{fig:pick_gamma} shows the best gamma that selected by the CV step, and there is no clear pattern for selecting bigger $\\gamma$. On the other hand, \\Cref{fig:error_gamma} shows that bigger $\\gamma$ indeed leads to better localization error. Therefore, in conclusion, we do need bigger $\\gamma$ for DCDP, but the algorithm itself cannot pick bigger $\\gamma$ from a wide range: what we can do is to manually set a CV list of $\\gamma$ with big values.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width = 0.92 \\textwidth]{Figure\/small_big_gamma.pdf}\n \\caption{Localization error vs. $\\mclQ$ with $\\Delta = 5000$. Left: the CV list of $\\gamma$ is $\\{10, 20, 40, 80\\}$; right: the CV list of $\\gamma$ is $\\{1000,2000,3000\\}$. 
The line is the average and the shaded area shows the upper and lower 0.1 quantiles from 60 trials.}\n \\label{fig:small_big_gamma}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width = 0.92 \\textwidth]{Figure\/pick_gamma.pdf}\n \\caption{$\\Delta = 5000$. Left: Localization error; right: the selected best $\\gamma$ in the CV. The CV list of $\\gamma$ is $\\{10, 20, 40, 80, 200, 500, 1000, 2000, 3000, 4000, 5000\\}$;. The line is the average and the shaded area shows the upper and lower 0.1 quantiles from 100 trials.}\n \\label{fig:pick_gamma}\n\\end{figure}\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width = 0.6 \\textwidth]{Figure\/scatter_gamma_error.pdf}\n \\caption{$\\Delta = 5000$. Scatter plot of localization error versus $\\gamma$ picked in the CV step. Each point corresponds to a trial in the experiment of \\Cref{fig:pick_gamma}.}\n \\label{fig:error_gamma}\n\\end{figure}\n\\section{Introduction}\n\\label{sec:introduction}\nChange point detection is a classical problem in statistics that can date back to 1940s \\citep{wald1945, page1954}. In 1980s people established asymptotic theory for change point detection methods \\citep{vostrikova1981,james1987,yao_au1989}. Most of these classical papers study the univariate mean model. Recently with more advanced theoretical tools developed in modern statistics, more delicate analysis of change point detection came out in univariate mean models \\citep{wang2018univariate}, in high-dimensional mean models \\citep{cho2015,jirak2015,aston2018,wang_samworth2018}, in covariance models \\citep{aue2009, avanesov2018, wang_yi_rinaldo_2021_cov}, in high-dimensional regression models \\citep{rinaldo2021cpd_reg_aistats, wang2021_jmlr}, in network models \\citep{wang2021network_cpd}, and in temporally-correlated times series \\citep{lavielle2000, davis2006, aue2009, wang2020sepp}.\n\nSuppose we observe $Z_t$ at time $t$ for $t\\in [T]$ and all observations are independent. Each $Z_t$ follows the same family of distribution $\\mathbb{P}(\\theta_t)$ where $\\theta_t$ is a parameter that can change over time. We are interested in capturing abrupt changes in the distributions and equivalently in the parameters. Towards that goal, we assume that there is a collection of change points $ 0=\\eta_0 < \\eta_1 < \\eta_2 < \\ldots <\\eta_K < \\eta _{K+1}= n $ such that $\\theta_{t}\\neq \\theta_{t + 1}$ only when $t \\in \\{\\eta_{k}\\}_{k\\in [K]}$. Define the spacing and jump size as\n\\begin{equation}\n \\Delta_k = \\eta_{k+1} -\\eta_{k},\\ \\kappa_k \n\t : = \\|\\theta_{\\eta_{k}+1} - \\theta_{\\eta_k}\\|_2,\n\\end{equation}\nwhere $\\|\\cdot \\|$ is some norm in the parameter space $\\Theta$, and let $\\Delta = \\min_{k\\in [K] } \\Delta_k$, $\\kappa = \\min_{k\\in [K]}\\kappa_k$. Change point detection aims to provide \\textit{consistent} estimates $\\{\\hat{\\eta}_k\\}$.\n\nIt is well-known that the dynamic programming algorithm can solve the $\\ell_0$ penalization problem for change point detection with $O(T^2 \\mclC(T, p))$ time complexity, where $\\mclC(T,p)$ is the time complexity for getting the estimator $\\hat{\\theta}(\\mclI)$. \n\nThe main concern upon the dynamic programming approach is that DP is more expensive to execute in practice compared to the wild binary segmentation. 
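\n\nFor concreteness, the following is a minimal Python sketch of this quadratic-time dynamic program, written to be consistent with \\Cref{algorithm:DP}; it is purely illustrative, and \\texttt{fit\\_loss} and \\texttt{gamma} are placeholders for the model-specific penalized loss $L(\\hat{\\theta}(\\mclI),\\mclI)$ and the tuning parameter $\\gamma$.\n\\begin{verbatim}\nimport numpy as np\n\ndef dp_l0(T, fit_loss, gamma):\n    # Bellman recursion over all O(T^2) candidate intervals (l, r].\n    # fit_loss(l, r) returns the fitted loss on (l, r]; each call costs\n    # C(T, p), which gives the overall O(T^2 C(T, p)) complexity.\n    B = np.full(T + 1, np.inf)   # B[r]: best penalized cost of (0, r]\n    B[0] = gamma\n    prev = np.full(T + 1, -1)    # back-pointers to recover change points\n    for r in range(1, T + 1):\n        for l in range(r):\n            cand = B[l] + gamma + fit_loss(l, r)\n            if cand < B[r]:\n                B[r], prev[r] = cand, l\n    cps, k = [], prev[T]\n    while k > 0:\n        cps.append(k)\n        k = prev[k]\n    return sorted(cps)\n\\end{verbatim}\nThe double loop visits every pair $l < r$, which is exactly the quadratic factor in the complexity; the divide step of DCDP introduced below restricts both loops to a much smaller random grid.\n\n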
This motivates our project on improving the computational efficiency of DP with theoretical guarantee on the localization rate.\n\nImproving the computational efficiency of change point detection algorithms is an important research direction. \\cite{kovacs2020seeded} propose an algorithm called seeded binary segmentation that can reduce the computational complexity of Wild Binary Segmentation from $O(TM\\mclC(T))$ to $O(T\\log(T)\\mclC(T))$ where $M$ is the number of initial intervals in the WBS algorithm and $\\mclC(T)$ is the time complexity of fitting an estimator on samples of order $T$ which depends on the model.\n\nWe propose a new algorithm for retrospective change point detection called divide and conquer dynamic programming (DCDP) that can greatly improve the computational efficiency of the dynamic programming method that are currently used in the literature. We show by theory and numerical experiments that this algorithm works well for a broad range of problems and thus serve as a unified framework for change point detection.\n\nOur contributions:\n\\begin{enumerate}\n \\item A new efficient algorithm for change point detection with guarantee\n \\item It's a unified method\tfor change point detection problems. We establish theoretical guarantee for high-dimensional mean model and high-dim linear model.\n\\end{enumerate}\n\n\n\\section{Methodology}\nWe first discuss the general form of DCDP and its time complexity of it.\n\nSuppose we have $T$ observations and for any interval $\\mclI \\subset (0,T]$, we have some loss function $L(\\theta,\\mclI)$ and some goodness of fit function $\\mclF(\\theta,\\mclI)$ that are defined according to the setting of the problem. For instance, when considering change points in high-dimensional linear models, we have $L(\\beta,\\mclI) = \\sum_{i \\in \\mathcal I} (y_i -X_i^\\top \\beta) ^2 + \\lambda\\sqrt{|\\I|} \\|\\beta\\|_1$, where\n\\begin{equation}\n \\widehat \\beta _{ \\mathcal I } = \\arg\\min_{\\beta \\in \\mathbb R^p } \\sum_{i \\in \\mathcal I} (y_i -X_i^\\top \\beta) ^2 + \\lambda \\sqrt{|\\I|} \\|\\beta\\|_1,\n\\end{equation}\nand\n\\begin{equation}\n \\mathcal F (\\beta, \\I) := \\begin{cases}\n \\sum_{i \\in \\I } (y_i - X_i^\\top \\beta)^2, &\\text{ when } |\\I|\\geq C_\\mclF { \\mathfrak{s} } \\log(np),\\\\\n 0, &\\text{otherwise}.\n \\end{cases}.\n\\end{equation}\nDCDP contains two steps: divide and conquer. In the divide step, we randomly sample $\\mclQ$ integer points $\\{s_i\\}_{i\\in [\\mclQ]}\\subset [n]$ and find best approximations to the true change points by \\Cref{algorithm:DP}. The conquer step aims to further improve the localization accuracy based on the output of the divide step. It applies the local refinement in \\Cref{algo:local_refine_general} to the output of the divide step.\n\\begin{algorithm}[htbp]\n\t\\textbf{INPUT:} Data $\\{ y_i, X_i \\}_{i =1}^{n }$, tuning parameters $\\lambda, \\gamma, \\mclQ > 0$.\n \n Randomly uniformly sample $\\mclQ $ points $\\{ s_1, \\ldots, s_ \\mclQ \\}$ from $(0, n)$.\n \t \n(Divide) Get $\\{\\tilde{\\eta}_k\\}_{k\\in [\\tilde{K}]}$ with DP $(\\{y_i, X_i\\}_{i =1}^{n},\\{s_i\\}_{i\\in [\\mclQ]}, \\lambda, \\gamma)$.\n\n(Conquer) $\\{\\hat{\\eta}_k\\}_{k\\in [\\hat{K}]}\\leftarrow {\\rm Refinement}(\\{\\tilde{\\eta}_k\\}_{k\\in [\\tilde{K}]})$. \n\t\n\\textbf{OUTPUT:} The estimated change points $\\{\\hat{\\eta}_k\\}_{k\\in [\\hat{K}]}$.\n \t\n \\caption{Divide and Conquer Dynamic Programming. 
DCDP $(\\{y_i, X_i\\}_{i =1}^{n}, \\lambda, \\gamma , \\mclQ)$}\n\\label{algorithm:DCDP_divide}\n\\end{algorithm} \n\n\\begin{algorithm}[htbp]\n\t\\textbf{INPUT:} Data $\\{ y_i, X_i \\}_{i =1}^{n }$, tuning parameters $\\lambda, \\gamma$, grid $\\{s_i\\}_{i\\in [m]}$ to search over.\n \t \n \t Set $\\mathcal P= \\emptyset$, $\\mathfrak{ p} =\\underbrace{(-1,\\ldots, -1)}_{ n } $, $B =\\underbrace{( \\gamma, \\infty , \\ldots, \\infty)}_{n } $. Denote $B_i$ to be the $i$-$th$ entry of $B$. \n \t \n \t\\For {$r $ in $\\{s_i\\}_{i\\in [m]}$} {\n \t \\For {$l $ in $\\{s_i\\}_{i\\in [m]}$, $ l1$} {\n \n $ h \\leftarrow \\mathfrak{p}_ k $;\n $\\mathcal P = \\mathcal P \\cup h $;\n $ k \\leftarrow h $.\n \t }\n\t\\textbf{OUTPUT:} The estimated change points $\\mathcal P$. \n \n \\caption{Dynamic Programming. DP $(\\{y_i, X_i\\}_{i =1}^{n},\\{s_i\\}_{i\\in [m]}, \\lambda, \\gamma)$}\n\\label{algorithm:DP}\n\\end{algorithm} \n\n\n\\begin{algorithm}[htbp]\n\t\\textbf{INPUT:} Data $\\{(Z_i)\\}_{i=1}^{n}$, $\\{\\widetilde{\\eta}_k\\}_{k = 1}^{\\widetilde{K}}$ , $\\zeta > 0$.\n\n\tLet $(\\widetilde{\\eta}_0, \\widetilde{\\eta}_{\\widetilde{K} + 1}) \\leftarrow (0, n)$, $n_0\\leftarrow C_{\\mclF} { \\mathfrak{s} } \\log(np)$.\n\t\n\t\\For{$k = 1, \\ldots, \\widetilde{K}$} { \n\t$(s_k, e_k) \\leftarrow (2\\widetilde{\\eta}_{k-1}\/3 + \\widetilde{\\eta}_{k}\/3, \\widetilde{\\eta}_{k}\/3 + 2\\widetilde{\\eta}_{k+1}\/3)$\n\n\n\t\t\\begin{align}\n\t\t\t\\left(\\hat{\\eta},\\hat{\\theta}^{(1)},\\hat{\\theta}^{(2)}\\right) &\\leftarrow \\argmin_{\\substack{\\eta \\in \\{s_k + n_0, \\ldots, e_k - n_0\\} \\\\ \\theta^{(1)}, \\theta^{(2)} \\in \\mathbb{R}^{p},\\, \\theta^{(1)} \\neq \\theta^{(2)}}} \\Bigg\\{\\mclF(\\theta^{(1)},(s_k,\\eta)) + \\mclF(\\theta^{(2)}, [\\eta,e_k)) \\nonumber \\\\\n\t\t & \\hspace{1.5cm} + \\zeta \\sum_{i = 1}^p \\sqrt{(\\eta - s_k)(\\theta^{(1)})_{i}^2 + (e_k - \\eta)(\\theta^{(2)})_{i}^2}\\Bigg\\} \\label{eq-g-lasso}\n\t\t\\end{align}\n\t}\n\t\\textbf{OUTPUT:} $\\{\\widehat{\\eta}_k\\}_{k = 1}^{\\widetilde{K}}$.\n\\caption{Local Refinement. \n\\label{algo:local_refine_general}\n\\end{algorithm} \n\n\nIn general, the computational complexity of the divide step is $O(\\mclQ^2 \\mclC(n))$ where $\\mclC(n)$ is the computational complexity of getting $\\hat{\\theta}(\\mclI)$ where $|\\mclI|\\asymp n$. In contrast, the standard DP costs $O(n^2 \\mclC(n))$. Thus we want smaller $\\mclQ$ to make the improvement significant. As we will show later, in the current framework of DCDP, when the minimal spacing is $\\Delta\\geq \\mclB_n \\Delta_0$, we need $\\mclQ \\geq \\frac{n}{\\Delta}\\mclB_n \\log(n)$ to guarantee consistency. Here $\\{\\mclB_n\\}$ is any diverging sequence and $\\Delta_0$ is usually the minimal sample size for the estimator of the model to have $o(1)$ estimation error. Therefore, the complexity of the divide step is in general\n\\[\nO(\\left(\\frac{n}{\\Delta}\\right)^2\\log^3(n) \\cdot \\mclC(n)).\n\\]\nThe reduction in complexity gets bigger as $\\Delta$ gets bigger. For models with linear structures such as linear regression and covariance estimation, by some pre-calculation step that costs $O(n)$, we can further reduce $\\mclC(n)$ to be free of $n$, as we will discuss next. However, for nonlinear models like logistic regression, $\\mclC(n)$ is generally at least of order $n$. 
In either case, when $\\Delta\\asymp n$, the complexity will be of order $O(n\\log^3(n))$, which is a big improvement compared to the standard DP.\n\\section{Theoretical analysis}\nWe present some results on the localization error of DCDP in the case of mean model and high dimensional linear regression. Briefly, with the reduction on the computational complexity, the localization error with a single divide step will be larger than that of the vanilla DP by a term $ { \\mathcal B_n^{-1\/2} \\Delta } $. As is shown in \\Cref{cor:mean local refinement} and \\Cref{cor:regression local refinement}, this extra term can be removed by applying a local refinement algorithm to the output of the divide step.\n\nConsider a generic interval $\\I = (s,e)$ containing a single change point $\\eta$ and let $\\I_1 = (s,\\eta]$, $\\I_2 = (\\eta,e)$. In the proof of DP, a key property to prove is\n\\begin{equation}\n L(\\widehat{\\theta}_{\\I},\\I) - L(\\theta_1^*,\\I_1) - L(\\theta_1^*,\\I_1) \\geq {\\rm signal}(\\I_1,\\I_2) + {\\rm noise},\n\\end{equation}\nand when $|\\I_1|,|\\I_2|$ are large enough,\n\\begin{equation}\n \\gamma < {\\rm signal}(\\I_1,\\I_2) - |{\\rm noise}|,\\ {\\rm signal}(\\I_1,\\I_2) > |{\\rm noise}|.\n\\end{equation}\nFor DCDP, we need to prove that in addition,\n\\begin{equation}\n L(\\widehat{\\theta}_{\\I},\\I) - L(\\theta_1^*,\\I_1) - L(\\theta_1^*,\\I_1) \\leq {\\rm approx}(\\I_1,\\I_2) + {\\rm noise},\n\\end{equation}\nand when $|\\I_1|,|\\I_2|$ are small enough,\n\\begin{equation}\n \\gamma > {\\rm approx}(\\I_1,\\I_2) + |{\\rm noise}|.\n\\end{equation}\n\n\n\n\n\\subsection{Change in the mean}\n\n\\bnassum[Mean model]\n\\label{assp: DCDP_mean}\n Suppose for $1 \\le i \\le n$, the data $\\{X_{i } \\}_{1\\le i \\le n } $ satisfy mean model $X_{i} = \\mu _i +\\epsilon_i\\in \\mathbb{R}^p$ with a collection of change points $ 0=\\eta_0 < \\eta_1 < \\eta_2 < \\ldots <\\eta_K < \\eta _{K+1}= n $ and that the following conditions hold. \n\\begin{enumerate}\n \\item The measurement errors are independent mean-zero subgaussian random variables with $\\sigma_{\\epsilon} =\\sup_{i} \\|\\epsilon_i\\|_{\\psi_2}<\\infty$.\n \n \\item For each $i \\in \\{ 1,\\ldots, n\\}$, there exists a collection of subsets $ S_i \\subset\\{1, \\ldots, p \\}$, such that \n\t $$ \\mu _ { i,j} =0 \\text{ if } j \\not \\in S_i.$$\n\t In addition the cardinality of the support satisfies $|S_i|\\leq \\mathfrak{s} $. \n\t \\item Denote \n\t $ \\kappa_k \n\t : = \\| \\mu_{\\eta_{k}+1}^* - \\mu_{\\eta_k}^* \\|_2 $. Suppose that \n\t $ \\kappa _k \\asymp \\kappa$ for all $1\\le k \\le K$.\n\t \\item Denote \n\t $\\Delta_k = \\eta_{k+1} -\\eta_{k}. $ \n\t and \n\t $\\Delta = \\min_{1\\le k \\le K } \\Delta_k$. Let $ \\mathcal{B}_n$ be any diverging sequence. Suppose that\n\t $ \\Delta \\kappa^2 \\ge \\mathcal{B}_n \\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np) $.\n\\end{enumerate}\n\\enassum\n\n\\bnthm \\label{thm:DCDP mean}\nSuppose \\Cref{assp: DCDP_mean} holds and $\\gamma\\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$ for sufficiently large constant $C_\\gamma$. 
Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DP} with \n \\begin{equation}\n \\mathcal F (\\widehat{\\mu}_\\I,\\I) := \\begin{cases}\n \\sum_{i \\in \\I } \\|X_i - \\widehat \\mu_\\I \\|_2 ^2, &\\text{ when } |\\I|\\geq C_\\mclF\\sigma_{\\epsilon}^2 { \\mathfrak{s} } \\log(np),\\\\\n 0, &\\text{otherwise},\n \\end{cases}.\n\\end{equation}\nwhere $\\widehat{\\mu}_{\\I}$ is given by\n\\begin{equation}\n \\widehat{\\mu}_{\\I} = \\argmin_{\\mu\\in \\mathbb{R}^p} \\sum_{i \\in \\I }\\|X_i - \\mu\\|_2^2 + \\lambda \\sqrt{|\\I|}\\|\\mu\\|_1,\n\\end{equation}\nwith $\\lambda = C_{\\lambda}\\sqrt{\\log(np)}$. Here $C_{\\lambda}$ and $C_{\\mclF}$ are sufficiently large universal constants. Let $\\{ \\widehat \\eta_k\\}_{k=1}^{\\widehat K } $ be the collection of change points induced by $\\widehat { \\mathcal P}$.\n Then\n with probability at least $1 - C n ^{-3}$, $\\widehat K = K$ and\n \\[ \n \\max_{ 1\\le k \\le K } |\\eta_k-\\widehat \\eta_k| \\lesssim \\sigma_{\\epsilon}^2\\frac{ \\log(n) +\\gamma }{\\kappa^2 } + { \\mathcal B_n^{-1\/2} \\Delta } .\n \\]\n \\enthm\n \n\\bncor\n\\label{cor:mean local refinement}\nUnder the conditions of \\Cref{thm:DCDP mean}, let $\\{\\widehat{\\eta}_k\\}_{k = 1}^{K}$ be a set of time points satisfying $\\widehat K = K$ and\n\t\\begin{equation}\\label{eq-lr-cond-1}\n\t\t\\max_{k = 1, \\ldots, K} |\\widehat{\\eta}_k - \\eta_k| \\leq \\Delta\/5.\n\t\\end{equation}\n\tLet $\\{\\widetilde{\\eta}_k\\}_{k = 1}^{\\widehat{K}}$ be the change point estimators generated from \\Cref{algo:local_refine_general} with $\\{\\widehat{\\eta}_k\\}_{k = 1}^{K}$ and $\\zeta = 0$. Then\n\t\\[ \n \\max_{ 1\\le k \\le K } |\\eta_k-\\widetilde \\eta_k|\\kappa^2 = O_P(\\sigma_{\\epsilon}^2).\n \\]\n\\encor\n\n\\subsection{Change in the covariance}\n\n\\bnassum[Covariance model]\n\\label{assp:DCDP_covariance}\n Suppose for $i\\in [n]$, the data $\\{X _{i } \\}_{i\\in [n]} $ are mean-zero Gaussian variables in $\\mathbb{R}^p$ with non-singular covariance matrix $\\Sigma_i = \\mathbb{E}[X_iX_i^\\top]$ and Orlicz norm $g_X = \\sup_{i\\geq 1}\\|X_i\\|_{\\psi_2}<\\infty$. Assume there is a collection of change points $ 0=\\eta_0 < \\eta_1 < \\eta_2 < \\ldots <\\eta_K < \\eta _{K+1}= n $ and that the following conditions hold. \n\\begin{enumerate}\n \\item For $i\\in [n]$, $c_XI_p \\preceq \\Sigma_i \\preceq C_X I_p$ where $c_X>0, C_X<\\infty$ are universal constants.\n\t \\item Denote \n\t $ \\kappa_k \n\t : = \\|\\Sigma^{-1}_{\\eta_{k}+1} - \\Sigma^{-1}_{\\eta_k}\\|_F$. Suppose that \n\t $ \\kappa _k \\asymp \\kappa$ for all $1\\le k \\le K$.\n\t \\item Denote \n\t $\\Delta_k = \\eta_{k+1} -\\eta_{k} $ \n\t and \n\t $\\Delta = \\min_{1\\le k \\le K } \\Delta_k$. Let $ \\mathcal{B}_n$ be any diverging sequence. Suppose that\n\t $ \\Delta \\kappa^2 \\geq \\mathcal{B}_n \\frac{g_X^4}{c_X^4} p^2\\log(np)$.\n\\end{enumerate}\n\\enassum\n\n\\bnrmk\nSince $\\{X_i\\}_{i\\in [n]}$ are Gaussian, we can take $g_X = C_X$ since $\\|X_i\\|_{\\psi_2} = \\lambda_{\\max}(\\Sigma_{X})$. This argument could potentially be generalized to subgaussian variables satisfying a convex concentration property; we do not pursue that extension here.\n\\enrmk\n\n\n\\bnthm\n\\label{thm:DCDP covariance}\nSuppose \\Cref{assp:DCDP_covariance} holds and $\\gamma\\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$ for sufficiently large constant $C_\\gamma$. 
Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DP} with \n \\begin{equation}\n \\mathcal F (\\widehat{\\Omega}_\\I,\\I) := \\begin{cases}\n \\sum_{i \\in \\I } {\\rm Tr}[\\widehat{\\Omega}_\\I^\\top X_iX_i^\\top] - |\\I|\\log|\\widehat{\\Omega}_\\I| , &\\text{ when } |\\I|\\geq C_\\mclF\\frac{g_X^4}{c_X^2}p\\log(np),\\\\\n 0, &\\text{otherwise},\n \\end{cases}\n\\end{equation}\nwhere $\\widehat{\\Omega}_{\\I}$ is given by\n\\begin{equation}\n \\widehat{\\Omega}_{\\I} = \\argmin_{\\Omega\\in \\mathbb{S}^p_+} \\sum_{i \\in \\I } {\\rm Tr}[{\\Omega}^\\top X_iX_i^\\top] - |\\I|\\log|\\Omega|.\n\\end{equation}\nHere $C_{\\mclF}$ is a sufficiently large universal constant. Let $\\{ \\widehat \\eta_k\\}_{k=1}^{\\widehat K } $ be the collection of change points induced by $\\widehat { \\mathcal P}$.\n Then\n with probability at least $1 - C n ^{-3}$, $\\widehat K = K$ and\n \\[ \n \\max_{ 1\\le k \\le K } |\\eta_k-\\widehat \\eta_k| \\lesssim \\frac{g_X^4C_X^2}{c_X^6}\\frac{ p^2 \\log(np) +\\gamma }{\\kappa^2 } + { \\mathcal B_n^{-1\/2} \\Delta } .\n \\]\n\\enthm\n\n\\bncor\n\\label{cor:covariance local refinement}\nUnder the conditions of \\Cref{thm:DCDP covariance}, let $\\{\\widehat{\\eta}_k\\}_{k = 1}^{K}$ be a set of time points satisfying $\\widehat K = K$ and\n\t\\begin{equation}\\label{eq-lr-cond-2}\n\t\t\\max_{k = 1, \\ldots, K} |\\widehat{\\eta}_k - \\eta_k| \\leq \\Delta\/5.\n\t\\end{equation}\n\tLet $\\{\\widetilde{\\eta}_k\\}_{k = 1}^{\\widehat{K}}$ be the change point estimators generated from \\Cref{algo:local_refine_general} with\n\t$$\n\t\\mclF(\\theta, \\I) = \\|\\theta - \\frac{1}{|\\I|}\\sum_{i\\in \\I}X_iX_i^\\top\\|_F^2,\n\t$$\n\t$\\{\\widehat{\\eta}_k\\}_{k = 1}^{K}$, and $\\zeta = 0$. Then \n \\[ \n \\max_{ 1\\le k \\le K } |\\eta_k-\\widetilde \\eta_k|\\kappa^2 = O_P(\\log(n)).\n \\]\n\\encor\n\n\\subsection{Change in the linear model}\n\n \\bnassum[High-dimensional linear regression]\\label{assp:dcdp_linear_reg}\n Suppose for $1 \\le i \\le n$, $\\{X_{i } \\}_{1\\le i \\le n } \\subset \\mathbb R^p $ and $ \\{ y_i\\}_{i=1}^n \\subset \\mathbb R $ satisfy the regression model\n\t$y_{i } = X_i^\\top \\beta ^* _i +\\epsilon_i $ with a collection of change points $ 0=\\eta_0 < \\eta_1 < \\eta_2 < \\ldots <\\eta_K < \\eta _{K+1}=n $\nand that the following conditions hold. \n\\begin{enumerate}\n \\item Suppose that $\\{ X_i\\}_{i=1}^n \\overset{i.i.d.}{\\sim} N_p(0, \\Sigma )$ and that the minimal and the maximal eigenvalues of $\\Sigma$ satisfy \n\t $$ 0< c_x \\le \\Lambda_{\\min} (\\Sigma) \\le \\Lambda_{\\max} (\\Sigma ) \\le C_x < \\infty . $$ In addition, suppose that $ \\{ \\epsilon_i \\}_{i=1}^n \\overset{i.i.d.}{\\sim} N(0, \\sigma^2_ \\epsilon)$ is a collection of random errors independent of $\\{ X_i\\}_{i=1}^n $.\n\t \\item For each $i \\in \\{ 1,\\ldots, n\\}$, there exists a subset $ S_i \\subset\\{1, \\ldots, p \\}$ such that \n\t $$ \\beta^* _ { i,j} =0 \\text{ if } j \\not \\in S_i.$$\n\t In addition, the cardinality of the support satisfies $|S_i|\\leq \\mathfrak{s} $. \n\t \\item There exists a parameter $\\kappa>0$ such that for each $ k \\in \\{ 1,\\ldots, K\\} $, it holds that\n\t $$ \n\t \\| \\beta^* _{\\eta_k+1} - \\beta_{\\eta_k} ^* \\|_2 \\asymp \\kappa .$$\n\t In addition, suppose that $ \\kappa < C_\\kappa$ for some universal constant $C_{\\kappa}<\\infty$.\n\t \\item Denote the spacing of change points as\n\t $\\Delta_k = \\eta_{k+1} -\\eta_{k}$
 and $\\Delta = \\min_{1\\le k \\le K } \\Delta_k$. It holds that $\\Delta \\kappa ^2 \\geq \\mathcal{B}_n (\\sigma_{\\epsilon}^2\\vee 1)\\mathfrak{s}\\log(np) $.\n\\end{enumerate}\n\\enassum\n\n\n \\bnthm \\label{thm:DCDP regression}\nSuppose \\Cref{assp:dcdp_linear_reg} holds and $\\gamma\\ge C_\\gamma K { \\mathcal B_n^{-1} \\Delta } \\kappa^2$ for a sufficiently large constant $C_\\gamma$. Let \n $\\widehat { \\mathcal P } $ denote the output of \\Cref{algorithm:DCDP_divide} with\n \\[\n \\mathcal F (\\hat{\\beta}_{\\I},\\mclI) : = \\begin{cases} \n \\sum_{i \\in \\mclI } (y_i - X_i^\\top \\widehat \\beta _\\mclI ) ^2 &\\text{if } |\\mclI|\\ge C_\\mclF { \\mathfrak{s} } \\log(np),\n \\\\ \n 0 &\\text{otherwise,} \n \\end{cases} \n \\]\n where $C_{\\mclF}$ is a sufficiently large universal constant. Let \n$ \\{ \\widehat \\eta_k\\}_{k=1}^{\\widehat K } $ be the collection of change points induced by $\\widehat { \\mathcal P}$.\n Then\n with probability at least $1 - C n ^{-3}$, $\\widehat K = K$ and\n $$ \\max_{ 1\\le k \\le K } |\\eta_k-\\widehat \\eta_k| \\lesssim (\\sigma_{\\epsilon}^2\\vee 1)\\bigg( \\frac{ \\mathfrak{s} \\log(np) +\\gamma }{\\kappa^2 }\\bigg) + { \\mathcal B_n^{-1\/2} \\Delta }. $$\n \\enthm\n \n Once we obtain the output $\\{\\hat{\\eta}_k\\}_{k\\in [\\hat{K}]}$ from the DP, we can apply the local refinement algorithm to obtain refined estimates $\\{\\widetilde{\\eta}_k\\}_{k\\in [\\hat{K}]}$ that further reduce the term $ { \\mathcal B_n^{-1\/2} \\Delta } $ in the localization error.\n \n\\bncor\n \\label{cor:regression local refinement}\nUnder the conditions of \\Cref{thm:DCDP regression}, let $\\{\\widehat{\\eta}_k\\}_{k = 1}^{K}$ be a set of time points satisfying $\\widehat K = K$ and\n\t\\begin{equation}\\label{eq-lr-cond-3}\n\t\t\\max_{k = 1, \\ldots, K} |\\widehat{\\eta}_k - \\eta_k| \\leq \\Delta\/5.\n\t\\end{equation}\n\tLet $\\{\\widetilde{\\eta}_k\\}_{k = 1}^{\\widehat{K}}$ be the change point estimators generated from \\Cref{algo:local_refine_general} with $\\{\\widehat{\\eta}_k\\}_{k = 1}^{K}$ and $\\zeta = 0$. Then\n\t\\begin{align*}\n\t \\max_{k = 1, \\ldots, K}|\\widetilde{\\eta}_k - \\eta_k|\\kappa^2 = O_P(\\sigma_{\\epsilon}^2\\vee 1).\n\t\\end{align*}\n\\encor","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe goal of this work is to study the complexity classes {\\sf P} and \n{\\sf NP} via functions, and via semigroups of functions, rather than \njust as sets of languages.\nThis approach is intuitive (in particular, because of the immediate \nconnection with certain one-way functions), and quickly leads to results. \nIt is not clear whether this will contribute to a solution of the famous \n{\\sf P}-vs.-{\\sf NP} problem, but the semigroups considered here, as well \nas the ``inversive reductions'' and the accompanying completeness results \nfor one-way functions, are interesting in their own right. \n \nThe starting point is a certain kind of {\\em one-way function}, and the \nwell-known fact that one-way functions of this kind exist iff \n${\\sf P} \\neq {\\sf NP}$. \n\nFirst, some notation: \nWe fix an alphabet $A$, which will be $\\{0,1\\}$ unless another alphabet\nis explicitly mentioned. The set of all strings over $A$, including the\nempty string, is denoted by $A^*$.\nFor a partial function $f$, the domain (i.e., the inputs $x$ for which\n$f(x)$ is defined) is denoted by ${\\sf Dom}(f)$. The image set of $f$ is\ndenoted by ${\\sf Im}(f)$ or by $f(A^*)$ or $f({\\sf Dom}(f))$. 
As a rule we\nwill use partial functions, even when the word ``partial'' is omitted; we\nsay {\\it total function} for functions whose domain is $A^*$. \nAs usual,\n{\\sf P} and {\\sf NP} are the class of languages accepted by deterministic,\nrespectively nondeterministic, polynomial-time Turing machines\n\\cite{DuKo,Papadim}.\n\n\\medskip\n\n\\noindent {\\bf Definition scheme:} A function $f: A^* \\to A^*$ is called \n{\\bf ``one-way''} \\ iff \\ from $x$ and a description of $f$ it is \n{\\sl ``easy''} to compute $f(x)$, but from $f$ and $y \\in {\\sf Im}(f)$ it \nis {\\sl ``difficult''} to find any $x \\in A^*$ such that $f(x) = y$. \n\n\\medskip\n\nThis is an old idea going back at least to W.S.\\ Jevons in 1873, who also \ncompared the difficulties of multiplication and factorization of integers \n(as pointed out in \\cite{Handbook}). \nThe concept became well-known after the work of Diffie and Hellman\n\\cite{DiffHell}. \nLevin's paper \\cite{LevinTale} discusses some deeper connections of one-way \nfunctions.\nThe definition scheme can be turned into precise definitions, in many\n(non-equivalent) ways, by defining ``easy'' and ``difficult'' (and, if needed, \n``description'' of $f$).\n \n\n\\begin{defn}\nA partial function $f: A^* \\to A^*$ is {\\em polynomially balanced} iff \nthere exists polynomials $p, q$ such that for all $x \\in {\\sf Dom}(f)$:\n \\, $|f(x)| \\leq p(|x|)$ and $|x| \\leq q(|f(x)|)$.\n\\end{defn}\nWe call the polynomial $q$ above an {\\em input balance} function of $f$.\nThe word ``honest'' is often used in the literature for polynomially \nbalanced.\n\nWe introduce the following set of ``easy'' functions.\n\n\\begin{defn} {\\bf (the semigroup {\\sf fP}).} \n \\ {\\sf fP} is the set of partial functions $f: A^* \\to A^*$ that are \npolynomially balanced, and such that \n \\ $x \\in {\\sf Dom}(f) \\longmapsto f(x)$ \\ is computable by a \ndeterministic polynomial-time Turing machine. \n(It follows from the second condition that ${\\sf Dom}(f)$ is in {\\sf P}.)\n\\end{defn}\nAs a rule, a machine that computes a partial function $f$ can always also\nbe used as an acceptor of ${\\sf Dom}(f)$. \n\nWhen $A$ is an arbitrary alphabet (as opposed to $\\{0,1\\}$) we write \n${\\sf fP}_A$ or ${\\sf fP}_{|A|}$.\nThe complexity class {\\sf fP} is different from the complexity class \n{\\sf FP}, considered in the literature \\cite{Papadim}; {\\sf FP} is a set \nof relations (viewed as search problems) whereas {\\sf fP} is a set of \npartial functions. \nIt is easy to see that ${\\sf fP}_{|A|}$ is closed under composition, so it \nis a semigroup. \n\n\n\\begin{defn} {\\bf (worst-case deterministic one-way function).} \nA partial function $f: A^* \\to A^*$ is {\\em one-way} iff $f \\in {\\sf fP}$, \nbut there exists {\\em no} deterministic polynomial-time algorithm which, on \nevery input $y \\in {\\sf Im}(f)$, \noutputs some $x \\in A^*$ such that \\ $f(x) = y$.\nThere is no requirement when $y \\not\\in {\\sf Im}(f)$.\n\\end{defn} \nThis kind of one-way functions is defined in terms of worst-case complexity, \nhence it is not ``cryptographic''. However, it is relevant to the \n{\\sf P}-vs.-{\\sf NP} problem because of the following fact (see e.g., \n\\cite{HemaOgi} p.\\ 33 for a proof and history). \n\n\\begin{pro} {\\bf (folklore).} \n \\ One-way functions (in the worst-case sense) exist iff \n \\ ${\\sf P} \\neq {\\sf NP}$. 
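For instance, the total function $x \mapsto x0$ (appending one letter) belongs to {\sf fP}: it is computable in linear time and is polynomially balanced, with $p(n) = n+1$ and $q(n) = n$. On the other hand, the total function that maps $x$ to the binary representation of $|x|$ is polynomial-time computable but {\em not} polynomially balanced (an input of length $n$ is mapped to an output of length about $\log_2 n$), so it does not belong to {\sf fP}.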
\\ \\ \\ $\\Box$\n\\end{pro}\nThe concept of an {\\it inverse} is central to one-way functions.\nThe following lemma is straightforward to prove.\n\n{\\sf Notation:} \nFor a partial function $f$ and a set $S$, the restriction of $f$ to $S$ is \ndenoted by $f|_S$; for the restriction of the identity map to $S$ we simply \nwrite ${\\sf id}_S$.\n\n\n\\begin{lem} \\label{concept} {\\bf (concept of inverse).} \n\n\\noindent\nFor partial functions $f, f': A^* \\to A^*$ the following are equivalent.\n\n\\smallskip\n\n\\noindent $\\bullet$ \\ For all $y \\in {\\sf Im}(f)$, \\ $f'(y)$ is defined\nand $f(f'(y)) = y$. \n\n\\smallskip\n\n\\noindent $\\bullet$ \\ $f \\circ f'|_{{\\sf Im}(f)} = {\\sf id}_{{\\sf Im}(f)}$ .\n\n\\smallskip\n\n\\noindent $\\bullet$ \\ $f \\circ f' \\circ f = f$. \\ \\ \n\n\\smallskip\n\n\\noindent These properties imply ${\\sf Im}(f) \\subseteq {\\sf Dom}(f')$. \n \\ \\ \\ \\ \\ $\\Box$\n\\end{lem}\n\n\n\\begin{defn} \\ A function $f'$ such that $f \\circ f' \\circ f = f$ is called \nan {\\em inverse of} $f$.\n\\end{defn}\nThe following recipe gives more intuition about inverses.\n\n\\medskip\n\n\\noindent Pseudo-algorithm: {\\it How any inverse $f'$ of a given $f$ is \nmade.}\n\n\\smallskip\n\n\\noindent (1) \\ Choose ${\\sf Dom}(f')$ such that \n${\\sf Im}(f) \\subseteq {\\sf Dom}(f')$. \n\n\\smallskip\n\n\\noindent (2) \\ For every $y \\in {\\sf Im}(f)$, choose $f'(y)$ to be any\n$x \\in f^{-1}(y)$.\n\n\\smallskip\n\n\\noindent (3) \\ For every $y \\in {\\sf Dom}(f') - {\\sf Im}(f)$, choose \n$f'(y)$ arbitrarily in $A^*$. \n\n\\bigskip\n\n\\noindent {\\bf Remark.} When $f'$ is an inverse of $f$, the restriction \n$f'|_{ {\\sf Im}(f)}: y \\in {\\sf Im}(f) \\mapsto f'(y)$ is the {\\em choice\nfunction} corresponding to $f'$. For set theory in general, the existence \nof choice functions (and the existence of inverses) for every partial \nfunction is equivalent to the Axiom of Choice. The existence of one-way \nfunctions in our sense amounts to the non-existence of certain inverses; \nthe existence of one-way functions is thus equivalent to the {\\em non-validity \nof the Axiom of Choice} in the (highly restricted) context of deterministic \npolynomial time-complexity. \n\nFrom the definition of polynomially balanced we can see now:\n{\\em If $f$ is polynomially balanced then so is every choice function \ncorresponding to any inverse of $f$. } \n\n\n\\begin{defn} \\ Let $S$ be a semigroup. An element $x \\in S$ is called\n{\\em regular} iff there exists $x' \\in S$ such that $x x' x = x$. \nIn that case, $x'$ is called an {\\em inverse} of $x$. \nA semigroup $S$ is called regular iff every element of $S$ is regular.\n\nLet $S$ be a monoid with identity {\\bf 1}. Then $x'$ is a {\\em left}- (or \n{\\em right}-) inverse of $x$ iff $x'x = {\\bf 1}$ (respectively \n$x x' = {\\bf 1}$). If $x'x = x x' = {\\bf 1}$ then $x'$ is a two-sided inverse\nor group-inverse. {\\rm (See \\cite{CliffPres,Grillet}.)}\n\\end{defn}\n\n\\noindent The following summarizes what we have seen, and gives the\ninitial motivation for studying the class {\\sf NP} via certain functions\nand semigroups. \n\n\\begin{pro} \\ \nThe monoid {\\sf fP} is {\\em regular} iff one-way functions do {\\sl not} \nexist (iff \\ ${\\sf P} = {\\sf NP}$). \\ \\ \\ \\ \\ $\\Box$\n\\end{pro} \nSome properties of the image set of functions in {\\sf fP}:\n\n\n\\begin{pro} \\label{ImfP} \\!\\!\\! 
.\n\n\\noindent (1) \\ For every $f \\in {\\sf fP}$, \\, ${\\sf Im}(f) \\in {\\sf NP}$.\n\n\\noindent (2) \\ If $f \\in {\\sf fP}$ and $f$ is regular then\n${\\sf Im}(f) \\in {\\sf P}$.\n\n\\noindent (3) \\ For every language $L \\in {\\sf NP}$ there exists\n$f_L \\in {\\sf fP}$ such that $L = {\\sf Im}(f_L)$. \n\n \\ Moreover, the set of functions $\\{f_L \\in {\\sf fP} : L \\in {\\sf NP}\\}$ \ncan be chosen so that $f_L$ is regular iff \\ $L \\in {\\sf P}$.\nThe map $L \\in {\\sf NP} \\mapsto f_L \\in {\\sf fP}$ is an embedding of\n{\\sf NP} (as a set) into {\\sf fP}, such that {\\sf P} (and only {\\sf P}) is \nmapped into the regular elements of {\\sf fP}. \n\\end{pro}\n\\noindent {\\bf Proof.} (1) is obvious (polynomial balance is needed).\n\n\\noindent (2) Let $f' \\in {\\sf fP}$ be an inverse of $f$.\nIf $y \\in {\\sf Im}(f)$ then $ff'(y) = y$.\nIf $y \\not\\in {\\sf Im}(f)$, then either $f'(y)$ is not defined, or\n$ff'(y) \\in {\\sf Im}(f)$, hence $ff'(y) \\neq y$. Thus, $y \\in {\\sf Im}(f)$ \niff $ff'(y) = y$.\nWhen $f, f' \\in {\\sf fP}$ then on input $y$ the properties $ff'(y) = y$,\n$y \\not\\in {\\sf Dom}(f')$, $ff'(y) \\neq y$, can be decided deterministically\nin polynomial time.\n\n\\noindent (3) Let $M$ be a nondeterministic Turing machine accepting $L$, \nsuch that all computations of $M$ are polynomially bounded, and do not halt\nbefore the whole input has been read. We can assume that $M$ has binary \nnondeterminism, i.e., in each transition it has at most two nondeterministic \nchoices. Define \n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\ \n $f_L(x,s) = x$ \\ \\ iff \\ \\ $M$, with {\\it choice sequence} $s$, \n accepts $x$; \n\n \\ \\ \\ \\ \\ \\ \n $f_L(x,s)$ is undefined otherwise. \n\n\\smallskip \n\n\\noindent Choice sequences are also called guessing sequences, or advice \nsequences.\nThen $L = {\\sf Im}(f_L)$ and $f_L \\in {\\sf fP}$; balancing comes from the \nfact that $s$ has polynomially bounded length.\n\nWe saw that if $f_L$ is regular then ${\\sf Im}(f_L) = L \\in {\\sf P}$.\nMoreover, if $L \\in {\\sf P}$ then the Turing machine $M$ can be chosen to\nbe deterministic, and the choice sequence $s$ is the all-0 word;\nso in that case, $f_L$ is not one-way.\n \\ \\ \\ $\\Box$ \n\n\n\\begin{cor} \\ The {\\em image transformation} {\\sf Im}: \n$f \\mapsto {\\sf Im}(f)$ maps {\\sf fP} onto {\\sf NP}, and maps the set of \nregular elements of {\\sf fP} onto {\\sf P}. \nMoreover, {\\sf NP} is embedded into {\\sf fP} (by the transformation \n$L \\mapsto f_L$ above), and {\\sf NP} is a {\\em retract} of {\\sf fP} (by the \ntransformations $L \\mapsto f_L$ and {\\sf Im}).\n \\ \\ \\ \\ \\ \\ $\\Box$\n\\end{cor}\nThe following suggests that ${\\sf Im}(f) \\in {\\sf P}$ is not equivalent \nto the regularity of $f$.\n\n\n\\begin{thm} \n \\ If $\\Pi_2^{\\sf P} \\neq \\Sigma_2^{\\sf P}$ then there exist surjective \none-way functions. {\\rm (Proved in \\cite{JCBcircsizeinv}.)} \n\\end{thm}\n\nIn the next sections we show various properties of {\\sf fP} and of \nclosely related semigroups. The fact that these semigroups have \ninteresting properties, and that the proofs are not difficult, gives \na second motivation for studying these semigroups.\n\n\n\n\\section{The Green relations of polynomial-time function semigroups}\n\nWe give a few properties of the Green relations $\\leq_{\\cal R}, \n \\ \\leq_{\\cal L}, \\ \\leq_{\\cal J}, \\ \\equiv_{\\cal D}$ (see\n\\cite{CliffPres,Grillet}). 
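For ease of reference: for elements $f, g$ of a monoid $M$ we have $f \leq_{\cal R} g$ iff $f \in g M$, \ $f \leq_{\cal L} g$ iff $f \in M g$, \ and $f \leq_{\cal J} g$ iff $f \in M g M$; the equivalences $\equiv_{\cal R}, \equiv_{\cal L}, \equiv_{\cal J}$ hold when the corresponding inequality holds in both directions, $\equiv_{\cal H}$ is the intersection of $\equiv_{\cal R}$ and $\equiv_{\cal L}$, and $\equiv_{\cal D}$ is the composite relation $\equiv_{\cal R} \circ \equiv_{\cal L}$ ($= \equiv_{\cal L} \circ \equiv_{\cal R}$).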
First some notation.\n\nLet $F: X \\to Y$ be a partial function; then $F^{-1}: Y \\to X$ denotes \nthe {\\em inverse relation} of $F$, i.e., for all \n$(x,y) \\in X \\times Y$: \\ $x \\in F^{-1}(y)$ iff $F(x) = y$. By\n${\\sf mod}F$ we denote the partition on ${\\sf Dom}(F)$, defined by\n\\ $x_1$ ${\\sf mod}F$ $x_2$ iff $F(x_1) = F(x_2)$.\nThe set of blocks (equivalence classes) of ${\\sf mod}F$ is \n\\ $\\{ F^{-1}F(x) : x \\in {\\sf Dom}(F)\\}$.\nFor two partial functions $F, C: X \\to Y$ we write\n${\\sf mod}C \\leq {\\sf mod}F$ (the partition of $C$ is {\\em coarser} \nthan the partition of $F$, or the partition of $F$ {\\em refines} the \npartition of $C$) iff \\ ${\\sf Dom}(C) \\subseteq {\\sf Dom}(F)$ \\ and\n \\ $F^{-1}F(x) \\subseteq C^{-1}C(x)$ \\ for all $x \\in {\\sf Dom}(C)$;\nequivalently, every ${\\sf mod}C$-class is a union of ${\\sf mod}F$-classes.\n\n\n\\begin{pro}{\\rm (regular ${\\cal L}$- and $\\cal R$-orders).}\n\\label{regLR} \nIf $f, r \\in {\\sf fP}$ and $r$ is {\\em regular} with an inverse\n$r' \\in {\\sf fP}$ then:\n\n\\smallskip\n\n\\noindent $\\bullet$ \\ $f \\leq_{\\cal R} r$ \\ \\ iff \\ \\ $f = r r' f$\n \\ \\ iff \\ \\ ${\\sf Im}(f) \\subseteq {\\sf Im}(r)$.\n\n\\smallskip\n\n\\noindent $\\bullet$ \\ $f \\leq_{\\cal L} r$ \\ \\ iff \\ \\ $f = fr'r$ \n \\ \\ iff \\ \\ ${\\sf mod}f \\leq {\\sf mod}r$. \n\\end{pro}\n{\\bf Proof.} [$\\cal R$-order]: \\ \n$f \\leq_{\\cal R} r$ \\ iff \\ for some $u \\in {\\sf fP}: f = ru$. Then \n$f = r r' r u = r r' f$. \nAlso, it is straightforward that $f = ru$ implies \n ${\\sf Im}(f) \\subseteq {\\sf Im}(r)$.\n\nConversely, if ${\\sf Im}(f) \\subseteq {\\sf Im}(r)$ then \n \\ ${\\sf id}_{{\\sf Im}(f)} \\ = \\ $\n${\\sf id}_{{\\sf Im}(r)} \\circ {\\sf id}_{{\\sf Im}(f)} \\ = \\ $\n$r \\circ r'|_{{\\sf Im}(r)} \\circ {\\sf id}_{{\\sf Im}(f)}$. Hence,\n$f \\ = \\ {\\sf id}_{{\\sf Im}(f)} \\circ f \\ = \\ $\n$r \\circ r'|_{{\\sf Im}(r)} \\circ {\\sf id}_{{\\sf Im}(f)} \\circ f \\ = \\ $\n$r \\circ r'|_{{\\sf Im}(r)} \\circ f \\ \\leq_{\\cal R} r$. \n \n\\medskip\n\n[$\\cal L$-order]: \\\n$f \\leq_{\\cal L} r$ \\ iff \\ for some $v \\in {\\sf fP}: f = vr$. Then \n$f = v r r' r = f r' r$. \nAnd it is straightforward that $f = vr$ implies\n ${\\sf mod}f \\leq {\\sf mod}r$.\n\nConversely, if ${\\sf mod}f \\leq {\\sf mod}r$ then for all \n$x \\in {\\sf Dom}(f)$, \\ $r^{-1}r(x) \\subseteq f^{-1}f(x)$. And for \nevery $x \\in {\\sf Dom}(f)$, \\ $\\{f(x)\\} = f \\circ f^{-1} \\circ f(x)$.\nMoreover,\n$f \\circ r^{-1} \\circ r(x) \\subseteq f \\circ f^{-1} \\circ f(x) = \\{f(x)\\}$, \nand since $r^{-1} \\circ r(x) \\neq \\varnothing$, it follows that \n$f \\circ r^{-1} \\circ r(x) = \\{f(x)\\}$. So, $f = f \\circ r^{-1} \\circ r$. \nMoreover, $f \\circ r' \\circ r(x) \\in f \\circ r^{-1} \\circ r(x) = \\{f(x)\\}$,\nhence $f \\circ r' \\circ r(x) = f(x)$. Hence, $f = f r' r \\leq_{\\cal L} r$. \n \\ \\ \\ $\\Box$\n \n\\bigskip\n\n\\noindent The ${\\cal D}$-relation between elements of {\\sf fP} with infinite\nimage sets seems difficult, even in the case of regular elements.\nA first question (inspired from the Thompson-Higman monoids \n\\cite{JCBmonThH}): Are all regular elements of {\\sf fP} with infinite image \nin the same ${\\cal D}$-class, i.e., the ${\\cal D}$-class of \n${\\sf id}_{A^*}$?\n\n\n\\begin{pro} \\label{D_inj} Let $f \\in {\\sf fP}$ be regular. 
\nThen $f \\equiv_{\\cal D} {\\sf id}_{A^*}$ iff there exists $g \\in {\\sf fP}$ \nsuch that $g$ is injective, total, and regular, and such that \n${\\sf Im}(f) = {\\sf Im}(g)$.\n\\end{pro}\n{\\bf Proof.} Assume $f \\equiv_{\\cal D} {\\sf id}_{A^*}$. By Prop.\\ \\ref{regLR},\n$f \\equiv_{\\cal R} {\\sf id}_L$, where $L = {\\sf Im}(f)$.\nAnd ${\\sf id}_{A^*} \\equiv_{\\cal D} {\\sf id}_L$ iff there exists \n$g \\in {\\sf fP}$ such that \n${\\sf id}_{A^*} \\equiv_{\\cal L} g \\equiv_{\\cal R} {\\sf id}_L$.\nHence, ${\\sf id}_{A^*} = g' \\, g$ for some $g' \\in {\\sf fP}$; this equality\nimplies that $g$ is total and injective. The existence of $g' \\in {\\sf fP}$\nimplies that $g$ is regular. Since $g \\equiv_{\\cal R} {\\sf id}_L$, \n${\\sf Im}(g) = L$. Hence, $g$ has the required properties. \n\nTo prove the converse we will use the following:\n\n\\medskip\n\n\\noindent {\\sf Claim.} \\ For every $g \\in {\\sf fP}$ we have:\n \\ $g$ is injective, total, and regular \\ iff \n \\ $(\\exists g' \\in {\\sf fP}) \\, g' g = {\\sf id}_{A^*}$. \n\n\\noindent Proof of the Claim. \nThe right-to-left implication is straightforward. In the other direction, \nif $g$ is regular then there exists $g' \\in {\\sf fP}$ such that $gg'g =g$. \nAnd if $g$ is total and injective, there exists a partial function $h$ such \nthat $hg = {\\sf id}_{A^*}$. Now $gg'g =g$ implies $hgg'g = hg$, hence by \nusing $hg = {\\sf id}_{A^*}$ we obtain: $g' g = {\\sf id}_{A^*}$. \nThis proves the Claim.\n\n\\medskip\n \nFor the converse of the Proposition, assume there exists $g \\in {\\sf fP}$\nwith the required properties. \nIf such a $g$ exists, then $f \\equiv_{\\cal R} g$, by Prop.\\ \\ref{regLR}. \nMoreover, $g \\equiv_{\\cal L} {\\sf id}_{A^*}$; this follows from \n$g' g = {\\sf id}_{A^*}$, which holds by the Claim. Hence\n$f \\equiv_{\\cal R} g \\equiv_{\\cal L} {\\sf id}_{A^*}$.\n \\ \\ \\ $\\Box$\n\n\n\\bigskip\n\n\\noindent However, it is an open problem whether every infinite language $L$ \nin {\\sf P} is the image of an injective, total, polynomial-time computable \nfunction $g$ (and whether $g$ can be taken to be regular or one-way).\nAlso, not much is known about which infinite languages in {\\sf P} can be \nmapped onto each other by maps in {\\sf fP}.\n\n\nWhen ${\\sf Im}(f)$ is a right ideal, more can be said. \nBy definition, a {\\em right ideal} of $A^*$ is a subset $R \\subseteq A^*$\nsuch that $R \\, A^* = R$ (i.e., $R$ is closed under right-concatenation by \nany string).\nEquivalently, a right ideal is a set of the form $R = L \\, A^*$, for any set \n$L \\subseteq A^*$; in that case we also say that $L$ {\\em generates} $R$\nas a right ideal.\nA {\\em prefix code} in $A^*$ is a set $P \\subseteq A^*$ such that no word \nin $P$ is a prefix of another word in $P$. It is not hard to prove that \nfor any right ideal $R$ there exists a unique prefix code $P_{_R}$ such that\n$R = P_{_{\\!\\! R}} \\, A^*$; in other words, $P_{_R}$ is the minimum generating set \nof $R$, as a right ideal.\n\nA {\\em right ideal morphism} is a partial function $h: A^* \\to A^*$ such\nthat for all $x \\in {\\sf Dom}(h)$ and all $w \\in A^*$: \\ $h(xw) = h(x) \\, w$.\nOne proves easily that then ${\\sf Dom}(h)$ and ${\\sf Im}(h)$ are right \nideals. \n\nWe also consider $A^{\\omega}$ (the $\\omega$-sequences over $A$, see e.g.\\\n\\cite{PerrinPin}). For a set $L \\subseteq A^*$ we define ${\\sf ends}(L)$ to \nconsist of all elements of $A^{\\omega}$ that have a prefix in $L$. 
\nThe Cantor space topology on $A^{\\omega}$ uses the sets of the form \n${\\sf ends}(L)$ (for $L \\subseteq A^*$) as its open sets; here we can \nassume without loss of generality that $L$ is a prefix code or a right \nideal of $A^*$. \n\n\n\\begin{lem} \\label{codeVSrideal} \\\nIf a right ideal $R \\subseteq A^*$ belongs to {\\sf P} then the corresponding \nprefix code $P$ (such that $R = PA^*$) also belongs to {\\sf P}. Conversely, \nif $L$ is in {\\sf P} then $LA^*$ is in {\\sf P}.\n\\end{lem}\n{\\bf Proof.} The first statement follows immediately from the fact \nthat $x \\in P$ iff $x \\in R$ and every strict prefix of $x$ does not belong\nto $R$. The converse is straightforward. \\ \\ \\ $\\Box$\n\n\\bigskip\n\n\\noindent {\\bf Notation.} \\ Below, $\\ov{PA^*}$ \\, denotes $A^* - PA^*$ \n(complement).\n\n\\begin{pro} \\label{Pp_0} \\ Let $P \\subseteq A^*$ be a prefix code that \nbelongs to {\\sf P}, and let $p_0 \\in P$. Then all {\\em regular} elements \n$r \\in {\\sf fP}$ whose image is \\ ${\\sf Im}(r) \\, = \\, L_P \\, = \\, $\n$(P - \\{p_0\\}) \\, A^* \\, \\cup \\ p_0 \\, (p_0 A^* \\ \\cup \\ \\ov{PA^*})$ \\ are \nin the ${\\cal D}$-class of \\, ${\\sf id}_{A^*}$.\nWe can view $L_P$ as an ``approximation'' of the right ideal $PA^*$ since \n\n\\medskip\n\n\\hspace{1in} $(P - \\{p_0\\}) \\, A^* \\ \\subset \\ L_P \\ \\subset \\ P A^*$. \n \n\\end{pro} \n{\\bf Proof.} \\ Let $L = L_P = {\\sf Im}(r)$. By Prop.\\ \\ref{regLR}, \n$r \\equiv_{\\cal R} {\\sf id}_L$, so it suffices to prove that \n \\ ${\\sf id}_L \\equiv_{\\cal D} {\\sf id}_{A^*}$.\nWe define $\\pi, \\pi' \\in {\\sf fP}$ by\n\\[ \\pi(x) = \\left\\{ \n \\begin{array}{ll}\n x & \\mbox{if \\ $x \\in (P - \\{p_0\\}) A^*$, } \\\\ \n p_0 x & \\mbox{otherwise (i.e., if $x \\in p_0A^*$ or $x \\not\\in PA^*$);}\n \\end{array} \\right.\n\\]\n\\[ \\pi'(x) = \\left\\{ \n \\begin{array}{ll} \n x & \\mbox{if \\ $x \\in (P - \\{p_0\\}) A^*$, } \\\\ \n z & \\mbox{if \\ $x \\in p_0A^*$ with $x = p_0z$, } \\\\ \n {\\rm undefined} & \\mbox{otherwise (i.e., when $x \\not\\in PA^*$). }\n \\end{array} \\right.\n\\]\n\\noindent Then $\\pi$ is a total and injective function, and\n${\\sf Im}(\\pi) = L$. Hence, $\\pi \\equiv_{\\cal R} {\\sf id}_L$.\nMoreover, $\\pi' \\circ \\pi = {\\sf id}_{A^*}$, as is easily verified, \nhence $\\pi \\equiv_{\\cal L} {\\sf id}_{A^*}$. In summary,\n \\ $r \\equiv_{\\cal R} {\\sf id}_L \\equiv_{\\cal R} \\pi \\equiv_{\\cal L}$\n${\\sf id}_{A^*}$. \n \\ \\ \\ \\ \\ $\\Box$\n\n\\bigskip\n\n\\noindent \nFunctions that have right ideals as domain and image are of great interest, \nbecause of the remarkable properties of the Thompson-Higman \ngroups and monoids \\cite{McKTh,Th,Scott,Hig74,CFP,BiThomps,BiCoNP} \nand \\cite{JCBonewPerm, JCBmonThH, JCBrl}.\nProp.\\ \\ref{Pp_0} gives an additional motivation for looking at the special\nrole of right ideals.\nThis motivates the following.\n\n\n\\begin{defn} \\label{RM2P} \n \\ \\ \\ ${\\cal RM}_{_{|A|}}^{\\sf P} \\ = \\ \\{ f \\in {\\sf fP} \\ : \\ $\n$f$ is a right ideal morphism of $A^* \\}$.\n\\end{defn}\nWhen $f$ is a right ideal morphism, ${\\sf Dom}(f)$ and ${\\sf Im}(f)$ are\nright ideals.\n${\\cal RM}_{_{|A|}}^{\\sf P}$ is closed under composition, and\n${\\cal RM}_2^{\\sf P}$ is a submonoid of {\\sf fP}. 
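A small example, for $A = \{0,1\}$: let $f$ be defined by $f(0x) = 00x$ and $f(1x) = 1x$ for all $x \in A^*$. Then $f$ is a right ideal morphism with ${\sf Dom}(f) = \{0,1\}A^*$ and ${\sf Im}(f) = \{00,1\}A^*$, both right ideals generated by finite prefix codes; $f$ is computable in linear time and polynomially balanced, so $f \in {\cal RM}_2^{\sf P}$; and $f$ is regular, an inverse being the right ideal morphism $f'$ with $f'(00x) = 0x$ and $f'(1x) = 1x$.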
\n\n\nAn interesting submonoid of ${\\cal RM}_{_{|A|}}^{\\sf P}$ is \n${\\cal RM}_{_{|A|}}^{\\sf fin}$, consisting of all those\n$f \\in {\\cal RM}_{_{|A|}}^{\\sf P}$ for which ${\\sf Dom}(f)$ (and hence also\n${\\sf Im}(f)$) is a {\\it finitely generated} right ideal. The monoid\n${\\cal RM}_{_{|A|}}^{\\sf fin}$ is used to define the Thompson-Higman monoid\n$M_{|A|,1}$ in \\cite{JCBmonThH}. \n\n\\begin{pro} \\label{fPinvinRM} \nIf an element $f \\in {\\cal RM}_2^{\\sf P}$ has an inverse in {\\sf fP} then\n$f$ also has an inverse in ${\\cal RM}_2^{\\sf P}$.\n\\end{pro}\n{\\bf Proof.} Let $f_0' \\in {\\sf fP}$ be an inverse of $f$; we want to \nconstruct an inverse $f'$ of $f$ that belongs to ${\\cal RM}_2^{\\sf P}$.\nSince $f$ is regular in {\\sf fP}, we know from Prop.\\ \\ref{ImfP} that \n${\\sf Im}(f)$ is in {\\sf P}.\nHence we can restrict $f_0'$ to ${\\sf Im}(f)$, i.e., \n${\\sf Dom}(f_0') = {\\sf Im}(f)$.\nWe proceed to define $f'(y)$ for $y \\in {\\sf Im}(f)$.\n\nFirst, we compute the shortest prefix $p$ of $y$ that satisfies \n$p \\in {\\sf Dom}(f_0') = {\\sf Im}(f)$. Since ${\\sf Im}(f) \\in {\\sf P}$, \nthis can be done in polynomial time. Now, $y = p \\, z$ for some string $z$. \n\nSecond, we define $f'(y) \\ = \\ f'_0(p) \\ z$, \\ where $p$ and $z$ are as \nabove. Thus, $f'$ is a right-ideal morphism.\n\n\\smallskip\n\nLet us verify that $f'$ has the claimed properties.\nClearly, $f'$ is polynomial-time computable, and polynomially \nbalanced (the latter following from the fact that $f'$ is an inverse of $f$,\nwhich we prove next).\nTo prove that $f'$ is an inverse of $f$, let $x \\in {\\sf Dom}(f)$.\nThen $f (f' (f(x))) = f (f'(p \\, z))$, where $y = f(x) = p \\, z$, and $p$\nis the shortest prefix of $y$ such that $p \\in {\\sf Im}(f)$.\nThen, $f'(p \\, z) = f_0'(p) \\, z$, by the definition of $f'$.\nThen, since $f$ is a right-ideal morphism, $f(f_0'(p) \\, z)$\n$= f(f_0'(p)) \\, z = p \\, z$ (the latter since $f'_0$ is an inverse of $f$,\nand since $p \\in {\\sf Im}(f)$).\nHence, $ff'|_{{\\sf Im}(f)} = {\\sf id}_{{\\sf Im}(f)}$. Thus, by Prop.\\ \n\\ref{concept}, $f'$ is an inverse of $f$. \n \\ \\ \\ $\\Box$\n\n\\bigskip\n\n\\noindent {\\bf Remark and notation:} \n \\ For $f \\in {\\cal RM}_{_{|A|}}^{\\sf P}$ we saw that ${\\sf Dom}(f)$ and \n${\\sf Im}(f)$ are right ideals. Let ${\\sf domC}(f)$, called the \n{\\em domain code}, be the prefix code that generates ${\\sf Dom}(f)$ as a \nright ideal. Similarly, let ${\\sf imC}(f)$, called the {\\em image code}, \nbe the prefix code that generates ${\\sf Im}(f)$.\n\nIn general, ${\\sf imC}(f) \\subseteq f({\\sf domC}(f))$, and it can happen that\n${\\sf imC}(f) \\neq f({\\sf domC}(f))$. However the last paragraph of proof of\nProp.\\ \\ref{fPinvinRM} shows that in any case: \n \\ {\\em If $f \\in {\\cal RM}_{_{|A|}}^{\\sf P}$ is regular then $f$ has an \ninverse $f' \\in {\\cal RM}_{_{|A|}}^{\\sf P}$ such that \n${\\sf domC}(f') = {\\sf imC}(f)$. }\n\n\\bigskip\n\n\\noindent {\\bf Notation:} \\ For two words $u,v \\in A^*$,\n$(v \\leftarrow u)$ denotes the right ideal morphism $ux \\mapsto vx$\n(for all $x \\in A^*$). 
In particular,\n$(\\varepsilon \\leftarrow \\varepsilon) = {\\sf id}_{A^*}$, where\n$\\varepsilon$ denotes the empty word.\nThe morphism $(v \\leftarrow u)$ is length-balanced because $|u|, |v|$ are\nconstants for a given morphism.\n\n\n\\begin{pro} \\label{j0} \\ For every alphabet $A$, the monoid \n${\\cal RM}_{_{|A|}}^{\\sf P}$ is ${\\cal J}^0$-simple (i.e., the only ideals\nare $\\{0\\}$ and ${\\cal RM}_{_{|A|}}^{\\sf P}$ itself).\n\\end{pro}\n{\\bf Proof.} For any $f \\in {\\cal RM}_{|A|}^{\\sf P}$ that is not the \nempty map, there exist words $x_0, y_0$ such that $f(x_0) = y_0$. Then \n$(\\varepsilon \\leftarrow \\varepsilon)$ $=$ \n$(\\varepsilon \\leftarrow y_0) \\circ f \\circ (x_0 \\leftarrow \\varepsilon)$.\nHence, ${\\sf id}_{A^*} \\leq_{\\cal J} f$ for every non-empty element \n$f \\in {\\cal RM}_{|A|}^{\\sf P}$. \n \\ \\ \\ $\\Box$ \n\n\n\\begin{pro} \\ \\ {\\sf fP} is not ${\\cal J}^0$-simple, and it has regular \nelements in different non-0 ${\\cal J}$-classes.\n\\end{pro}\n{\\bf Proof.} The map $\\ell: x \\in \\{0,1\\}^* \\longmapsto 0^{|x|}$\nis in {\\sf fP} and it is an idempotent. \n\nMoreover, $\\ell \\not\\equiv_{\\cal J} {\\sf id}_{A^*}$. \nIndeed, if there exist functions $\\beta, \\alpha$ such that for all \n$x \\in A^*$, $x = \\beta \\, \\ell \\, \\alpha(x) = \\beta(0^{|\\alpha(x)|})$, then \n$|\\alpha(x)|$ is different for every $x \\in A^*$. But then $\\alpha$ is not\npolynomially balanced, since $|\\alpha(x)|$ would have to range over \n$|A|^{|x|}$ values. \n \\ \\ \\ $\\Box$\n\n\n\\begin{cor} \\ \\ {\\sf fP} and ${\\cal RM}_{_{|A|}}^{\\sf P}$ are not\nisomorphic. \\ \\ \\ \\ \\ \\ $\\Box$\n\\end{cor}\n\n\n\\noindent As a consequence of Prop. \\ref{Pp_0} we have:\n\n\\begin{cor} \\ Every {\\em regular} element $r \\in {\\cal RM}_2^{\\sf P}$ is \n``close'' to an element of {\\sf fP} belonging to the ${\\cal D}$-class of\n${\\sf id}_{A^*}$. \nHere, $h_{p_0} \\in {\\sf fP}$ is called ``close'' to $r$ \\, iff \\ \n${\\sf Im}(r) = PA^*$ for a prefix code $P$, and there exists $p_0 \\in P$ such\nthat:\n\n\\smallskip\n\n\\noindent $\\bullet$ \\ \n$(P - \\{p_0\\}) \\, A^* \\ \\subseteq \\ {\\sf Im}(h_{p_0}) \\ \\subseteq PA^*$, \n \\ and \n\n\\smallskip\n\n\\noindent $\\bullet$ \n \\ $h_{p_0}(x) = r(x)$ whenever $r(x) \\in {\\sf Im}(h_{p_0})$. \n\n\\end{cor}\n{\\bf Proof.} Let $P = {\\sf domC}(r)$, so $PA^* = {\\sf Im}(r)$. \nFor every $p_0 \\in P$, $r$ is close to ${\\sf id}_{L_P} \\circ r$, whose image \nset is $L_P \\ = \\ $ \n$(P - \\{p_0\\}) \\, A^* \\ \\cup \\ p_0 \\, (p_0 A^* \\cup \\ov{PA^*})$, hence\n \\ $(P - \\{p_0\\}) \\, A^* \\subset L_P \\subset PA^*$. \nAnd ${\\sf id}_{L_P} \\circ r \\equiv_{\\cal R} {\\sf id}_{L_P}$ since\n$L_P \\subset PA^*$. \n \\ \\ \\ $\\Box$\n\n\\medskip\n\n\\noindent Recall the notation $(v \\leftarrow u)$ given just before Prop.\\\n\\ref{j0}.\n\n\\begin{lem} \\label{RM1LR} \\ \nIn ${\\cal RM}_2^{\\sf P}$, the $\\cal L$-class of ${\\sf id}_{A^*}$ is \n$\\{(v \\leftarrow \\varepsilon) \\in {\\cal RM}_2^{\\sf P} : v \\in A^*\\}$.\nThis is the set of elements of ${\\cal RM}_2^{\\sf P}$ that are injective \nand total (i.e., defined for all $x \\in A^*$).\n\nThe $\\cal R$-class of ${\\sf id}_{A^*}$ is \n$\\{ f \\in {\\cal RM}_2^{\\sf P} : \\, \\varepsilon \\in {\\sf Im}(f)\\}$. 
\nThis is the set of elements of ${\\cal RM}_2^{\\sf P}$ that are surjective \n(i.e., map onto $A^*$).\n\\end{lem}\n{\\bf Proof.} If $f \\equiv_{\\cal L} {\\sf id}_{A^*}$ then \n$\\varepsilon \\in {\\sf Dom}(f) = A^*$, so there is $v \\in A^*$ such that \n$v = f(\\varepsilon)$. Then $f(x) = vx$ for all $x\\in A^*$. \nConversely, if $f(x) = vx$ for all $x\\in A^*$ then \n$(\\varepsilon \\leftarrow v) \\circ f = {\\sf id}_{A^*}$.\n\nIf $f \\equiv_{\\cal R} {\\sf id}_{A^*}$ then ${\\sf Im}(f) = A^*$. So\n$\\varepsilon \\in {\\sf Im}(f)$. \nConversely, if $f$ satisfies $\\varepsilon \\in {\\sf Im}(f)$, i.e., \n$\\varepsilon = f(x_0)$ for some $x_0 \\in A^*$, then \n$f \\circ (x_0 \\leftarrow \\varepsilon) = (\\varepsilon \\leftarrow \\varepsilon)$\n$= {\\sf id}_{A^*}$.\n \\ \\ \\ $\\Box$\n\n\\bigskip\n\n\\noindent Lemma \\ref{RM1LR} shows that the $\\cal L$-class of\n${\\sf id}_{A^*}$ in ${\\cal RM}_2^{\\sf P}$ is also the $\\cal L$-class of\n${\\sf id}_{A^*}$ in ${\\cal RM}_2^{\\sf fin}$.\n\n\\begin{pro} \\ ${\\cal RM}_2^{\\sf P}$ has trivial group of units, i.e., the \n$\\cal D$-class of the identity ${\\sf id}_{A^*}$ is $\\cal H$-trivial.\n\\end{pro}\n{\\bf Proof.} \nIf $f \\equiv_{\\cal H} {\\sf id}_{A^*}$ then by Lemma \\ref{RM1LR} (for the \n$\\cal L$-class of ${\\sf id}_{A^*}$), $f(x) = vx$ for all $x$. \nAlso by Lemma \\ref{RM1LR} (for the $\\cal R$-class of ${\\sf id}_{A^*}$), \n$f(x_1) = v x_1 = \\varepsilon$, for some $x_1$.\nThis implies $v = \\varepsilon$, hence $f = {\\sf id}_{A^*}$.\n \\ \\ \\ $\\Box$\n\n\\bigskip\n\n\\noindent As a consequence of Lemma \\ref{RM1LR}, ${\\cal RM}_2^{\\sf P}$ can \nbe injectively mapped (non-homomorphically) into the $\\cal R$-class of \n${\\sf id}_{A^*} \\in {\\cal RM}_2^{\\sf P}$. Let us define $f \\mapsto \\psi_f$ \nby ${\\sf Dom}(\\psi_f) = \\{0\\} \\, \\cup \\, 1 \\, {\\sf Dom}(f)$, and\n\n\\smallskip\n\n \\ \\ \\ \\ \\ $\\psi_f(0) \\ = \\ \\varepsilon$, \\ \\ and\n\n\\smallskip\n\n \\ \\ \\ \\ \\ $\\psi_f(1x) \\ = \\ 1 \\, f(x)$, \\, for all $x \\in {\\sf Dom}(f)$.\n\n\\smallskip\n\n\\noindent Then for all $1x \\in 1 \\, A^*$, \n \\ $\\psi_{g \\circ f}(1x) = (\\psi_g \\circ \\psi_f)(1x) = 1 \\, (f g)(x)$.\nSo, $f \\mapsto \\psi_f|_{1A^*}$ is a morphism (where $\\psi_f|_{1A^*}$ is\nthe restriction of $\\psi_f$ to $1A^*$). \nBut $\\psi$ is not a morphism; indeed, since ${\\cal RM}_2^{\\sf P}$ \ncontains non-trivial groups, but the $\\cal D$-class of ${\\sf id}_{A^*}$ is \n$\\cal H$-trivial, there cannot be a homomorphic embedding of \n${\\cal RM}_2^{\\sf P}$ into the $\\cal D$-class of ${\\sf id}_{A^*}$. \n\n\n\n\\section{\\bf Embedding {\\sf fP} into ${\\cal RM}_2^{\\sf P}$ }\n\n\n\\noindent {\\bf Transforming any map into a right-ideal morphism:}\n\n\\medskip\n\n\\noindent The semigroup {\\sf fP} uses the alphabet $\\{0,1\\}$; let $\\#$ be \na new letter. For $f \\in {\\sf fP}$ we define\n$f_{\\#}: \\{0, 1, \\#\\}^* \\to \\{0, 1, \\#\\}^*$ \\, by letting \\, \n${\\sf Dom}(f_{\\#}) \\ = \\ {\\sf Dom}(f) \\, \\# \\, \\{0, 1, \\#\\}^*$, and\n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\ $f_{\\#}(x \\# w) \\ = \\ f(x) \\ \\# \\, w$, \n\n\\smallskip\n\n\\noindent for all $x \\in {\\sf Dom}(f)$ $( \\subseteq \\{0,1\\}^*)$, and all \n$w \\in \\{0, 1, \\#\\}^*$. So ${\\sf domC}(f_{\\#}) = {\\sf Dom}(f) \\, \\# $. \n\n\n\n\\begin{pro}\\hspace{-.06in}{\\bf .} \n\n\\noindent (1) \\ For any $L \\subseteq \\{0,1\\}^*$, $L \\#$ is a prefix code \nin $\\{0,1,\\#\\}^*$. 
\n\n\\smallskip\n\n\\noindent (2) \\ $L$ is in {\\sf P} \\ iff \\ $L \\#$ is in {\\sf P}.\n\n\\smallskip\n\n\\noindent (3) \\ For any partial function $f: \\{0,1\\}^* \\to \\{0,1\\}^*$,\n \\, $f_{\\#}$ is a right ideal morphism of $\\{0,1,\\#\\}^*$.\n\n\\smallskip\n\n\\noindent \n(4) \\ $f \\in {\\sf fP}$ \\ iff \\ \\, $f_{\\#} \\in {\\cal RM}^{\\sf P}_3$.\n\\end{pro}\n{\\bf Proof.} This is straightforward. \\ \\ \\ $\\Box$\n\n\\medskip\n\n\\bigskip\n\n\\noindent {\\bf Coding from three letters to two letters:}\n \n\\medskip\n\n\\noindent We consider the following encoding from the 3-letter alphabet \n$\\{0, 1, \\#\\}$ to the 2-letter alphabet $\\{0,1\\}$. \n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\ ${\\sf code}: \\, \\{0,1,\\#\\} \\, \\to \\, \\{00, 01, 11\\}$ \n \\ \\ is defined by\n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\ ${\\sf code}(0) = 00$, \\ \\ \\ ${\\sf code}(1) = 01$, \n \\ \\ \\ ${\\sf code}(\\#) = 11$.\n\n\\smallskip\n\n\\noindent For a word $w \\in \\{0,1,\\#\\}^*$, ${\\sf code}(w)$ is the \nconcatenation of the encodings of the letters in $w$. \n\nThe choice of this code is somewhat arbitrary; e.g., we could \nhave picked the encoding \n$c$ from $\\{0,1,\\#\\}$ onto the maximal prefix code $\\{00, 01, 1\\}$, defined \nby $c(0) = 00$, $c(1) = 01$, $c(\\#) = 1$. \n\n\\begin{defn} \\label{f_encoding} \n \\ We define $f^C: \\{0,1\\}^* \\to \\{0,1\\}^*$ by letting \n \\, ${\\sf Dom}(f^C) \\ = \\ {\\sf code}({\\sf Dom}(f) \\, \\#) \\ \\{0, 1\\}^*$, and \n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\ \n$f^C({\\sf code}(x \\#) \\, v) \\ = \\ {\\sf code}(f(x) \\, \\#) \\ v$,\n\n\\smallskip\n\n\\noindent for all $x \\in {\\sf Dom}(f)$ $( \\subseteq \\{0,1\\}^*)$,\nand all $v \\in \\{0,1\\}^*$.\nWe call $f^C$ the {\\em encoding of $f$}. \n\\end{defn}\n\n\n\\begin{pro}.\n\n\\noindent (1) \\ For any $L \\subseteq \\{0,1\\}^*$, \\, ${\\sf code}(L \\#)$ is a \nprefix code.\n\n\\smallskip\n\n\\noindent (2) \\ $L$ is in {\\sf P}\n \\ iff \\ ${\\sf code}(L \\#)$ is in {\\sf P}.\n\n\\smallskip\n\n\\noindent (3) \\ For any partial function $f: \\{0,1\\}^* \\to \\{0,1\\}^*$, \n \\, $f^C$ is a right ideal morphism of $\\{0,1\\}^*$.\n\n\\smallskip\n\n\\noindent (4) \\ $f \\in {\\sf fP}$ \n \\ iff \\ $f^C \\in {\\cal RM}_2^{\\sf P}$.\n\\end{pro}\n{\\bf Proof.} This is straightforward. \\ \\ \\ $\\Box$\n\n\n\\begin{pro}.\n\n\\noindent (1) \\ The transformations \n$f \\in {\\sf fP} \\longmapsto f_{\\#} \\in {\\cal RM}_3^{\\sf P}$ and \n$f \\in {\\sf fP} \\longmapsto f^C \\in {\\cal RM}_2^{\\sf P}$ are injective\ntotal homomorphisms from {\\sf fP} into ${\\cal RM}_3^{\\sf P}$, respectively\n${\\cal RM}_2^{\\sf P}$.\n\n\\smallskip\n\n\\noindent (2) \\ $f$ is regular in {\\sf fP} \\ iff \\ $f_{\\#}$ is regular in\n${\\cal RM}^{\\sf P}_3$ \\ iff \\ $f^C$ is regular in ${\\cal RM}_2^{\\sf P}$.\n\n\\smallskip\n\n\\noindent (3) \\ There are one-to-one correspondences between the inverses\nof $f$ in {\\sf fP}, the inverses of $f_{\\#}$ in ${\\cal RM}^{\\sf P}_3$, and\nthe inverses of $f^C$ in ${\\cal RM}_2^{\\sf P}$.\n\\end{pro} \n{\\bf Proof.} \\ (1) is straightforward, and (2) follows from injectiveness \nand from the fact that the homomorphic image of an inverse is an inverse. \n\n(3) Let $G \\in {\\cal RM}^{\\sf P}_3$ be such that \n$f_{\\#} \\circ G \\circ f_{\\#} = f_{\\#}$; i.e., \n$f_{\\#}(G(f(x) \\# w)) = f(x) \\# w$, for all $x \\in \\{0,1\\}^*$ and \n$w \\in \\{0,1,\\#\\}^*$. 
Since $f_{\\#}(G(f(x) \\# w))$ ($= f(x) \\# w$) contains \n$\\#$, $G(f(x) \\# w)$ is of the form $G(f(x) \\# w) = z \\# v$, for some\n$z \\in \\{0,1\\}^*$ and $v \\in \\{0,1,\\#\\}^*$. Hence $f_{\\#}(G(f(x) \\# w)) = $\n$f_{\\#}(z \\# v) = f(z) \\# v = f(x) \\# w$, so $f(z) = f(x)$ and $v = w$. So,\n$G(f(x) \\# w) = z \\# w$ for some $z \\in f^{-1}f(x)$.\nThus there exists a function $g: \\{0,1\\}^* \\to \\{0,1\\}^*$ such that\n$G(y\\#) = g(y) \\#$ for all $y \\in {\\sf Im}(f)$; then $G(y\\# w) = g(y) \\# w$,\nfor all $w \\in \\{0,1,\\#\\}^*$. Hence $g$ is an inverse \nof $f$. Moreover, $g$ is clearly in {\\sf fP} if $G$ is in \n${\\cal RM}^{\\sf P}_3$. \n\nLet $H \\in {\\cal RM}^{\\sf P}_2$ be such that $f^C \\circ H \\circ f^C = f^C$;\ni.e., $f^C(H({\\sf code}(f(x) \\#) \\, v)) = {\\sf code}(f(x) \\#) \\, v$, for all \n$x, v \\in \\{0,1\\}^*$. Since $f^C$ outputs ${\\sf code}(f(x) \\#) \\, v$ \non input $H({\\sf code}(f(x) \\#) \\, v)$, the definition of $f^C$ implies \nthat for all $v \\in \\{0,1\\}^*:$ \\, $H({\\sf code}(f(x) \\#) \\ v)$ is of the \nform ${\\sf code}(z \\#) \\, v$ for some $z \\in \\{0,1\\}^*$ with $f(z) = f(x)$. \nHence there exists a function $h: \\{0,1\\}^* \\to \\{0,1\\}^*$ such that \n$H({\\sf code}(y \\#)) = {\\sf code}(h(y) \\#)$ for all $y$; then \n$H({\\sf code}(y \\#) \\, v) = {\\sf code}(h(y) \\#) \\, v$, for all $y,v$.\nHence $h$ is an inverse of $f$. \nMoreover, $h$ is clearly in {\\sf fP} if $H$ is in ${\\cal RM}_2^{\\sf P}$. \n \\ \\ \\ $\\Box$\n\n\\bigskip\n\n\\noindent We will show in Section 5 that the encoding $f \\mapsto f^C$ \ncorresponds to an ``inversive reduction''.\n\n\\bigskip\n\n\\noindent In summary we have the following \n{\\bf relation between {\\sf fP} and ${\\cal RM}_2^{\\sf P}$:}\n\n\n$${\\sf fP} \\ \\stackrel{ \\, C}{\\hookrightarrow} \\ {\\cal RM}_2^{\\sf P} \n \\ \\hookrightarrow \\ [ {\\sf id} ]_{_{{\\cal J}({\\sf fP})}}^0 \\ \\hookrightarrow \n \\ {\\sf fP} .$$\n\n\\noindent Here $[ {\\sf id} ]^0_{_{{\\cal J}({\\sf fP})}}$ is the \n${\\cal J}$-class of the identity {\\sf id} of {\\sf fP}, together with the \nzero element (i.e., it is the {\\it Rees quotient} of the ${\\cal J}$-class\nof {\\sf id} in {\\sf fP}).\nThe embedding into $[ {\\sf id} ]^0_{_{{\\cal J}({\\sf fP})}}$ holds because \n${\\cal RM}_2^{\\sf P}$ is ${\\cal J}^0$-simple. \n\n\n\\bigskip\n\n\\section{\\bf Evaluation maps}\n\n\\bigskip\n\n\\noindent The {\\em Turing machine evaluation function}\n${\\sf eval}_{_{\\sf TM}}$ is the input-output function of a universal Turing\nmachine; it has the form\n \\ ${\\sf eval}_{_{\\sf TM}}(w, x) \\ = \\ \\phi_w(x)$,\nwhere $\\phi_w$ is the input-output function described by the word \n(``program'') $w$. \n(Recall that by ``function'' we always mean partial function.)\nSimilarly, there is an {\\em evaluation function for acyclic circuits}, \n${\\sf eval}_{\\sf circ}(C, x) = f_C(x)$, where $f_C$ is the input-output map \nof the circuit $C$. Here we will only consider length-preserving circuits, \ni.e., $|f_C(x)| = |x|$. 
\nWe also identify the circuit with a bitstring that describes the circuit.\nThe map ${\\sf eval}_{\\sf circ}$ is polynomial-time computable, but not \npolynomially balanced (since the size of input component $C$ is not bounded \nin terms of the output length $|f_C(x)|$).\n\nLevin \\cite{Levin1W} noted that functions of the form \n\n\\smallskip\n\n\\hspace{1in} ${\\sf ev}(w, x) \\ = \\ (w, \\phi_w(x))$, \n\n\\smallskip\n\n\\noindent (under some additional assumptions) are polynomially balanced and \npolynomial-time computable; and he observed that {\\sf ev} is a {\\it critical \none-way function} in the following sense: \n\n\\begin{defn}\nA function $e \\in {\\sf fP}$ is {\\em critical} (or {\\sf fP}{\\em -critical}) \niff the following holds:\nOne-way functions exist iff the function $e$ is a one-way function.\nSimilarly, a set $L \\in {\\sf NP}$ is {\\em critical} (or \n{\\sf P}{\\em -critical}) iff the following holds: ${\\sf P} \\neq {\\sf NP}$ \niff $L \\not\\in {\\sf P}$.\n\\end{defn}\nThe literature calls these functions ``universal'' one-way functions; \nhowever, not all critical one-way functions are universal (in the sense of \nuniversal Turing machines, or other universal computing devices).\nLevin's idea of a ``universal'' (critical) one-way function has also been \nused in probabilistic settings for one-way functions (see e.g., \n\\cite{Goldreich}). \n\nTo make {\\sf ev} polynomial-time computable, additional features have to be\nintroduced. One approach is to simply build a counter into the program of \n{\\sf ev} that stops the computation of ${\\sf ev}(w, x)$ after a polynomial\nnumber of steps (for a fixed polynomial). For example, the computation could \nbe stopped after a quadratic number of steps, i.e., $c \\, (|w| + |x|)^2 + c$ \nsteps (for a fixed constant $c \\geq 1$); we call this function \n${\\sf ev}_{(2)}$. \nThere exist other approaches; see e.g., section 2.4 of \\cite{Goldreich}, \nor p.\\ 178 of \\cite{AroraBarak}, where it is proved that ${\\sf ev}_{(2)}$ \nis {\\sf fP}-critical.\n\nHere is another simple example of a critical one-way function:\n\n\\smallskip\n\n\\hspace{1in} ${\\sf ev}_{\\sf circ}(C, x) \\ = \\ (C, f_C(x))$,\n\n\\smallskip\n\n\\noindent where $C$ ranges over finite acyclic circuits (more precisely, \nstrings that describe finite acyclic circuits), and $f_C$ is the \ninput-output map of a circuit $C$. \nThe function ${\\sf ev}_{\\sf circ}$ is in {\\sf fP}; it is balanced since \n$|x| \\leq |C|$ and $C$ is part of the output. Here we only consider\nlength-preserving circuits, i.e., $|f_C(x)| = |x|$.\nWe will prove later that ${\\sf ev}_{\\sf circ}$ is not only critical, but \nalso {\\em complete} with respect to a reduction that is appropriate for \none-way functions. \n\nA similar example of a critical function is \n$(B, \\tau) \\longmapsto (B, B(\\tau))$, where $B$ ranges over all boolean\nformulas (or over all boolean formulas in 3CNF), $\\tau$ is any truth-value \nassignment for $B$ (i.e., a bitstring whose length is the number of boolean\nvariables in $B$), and $B(\\tau)$ is the truth-value of $B$ for the \ntruth-value assignment $\\tau$. 
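For example, for the 3CNF formula $B = (x_1 \vee x_2 \vee \neg x_3) \wedge (\neg x_1 \vee x_2 \vee x_3)$ and the assignment $\tau = 010$ we get $(B, 010) \longmapsto (B, 1)$, while for $\tau = 100$ we get $(B, 100) \longmapsto (B, 0)$; evaluating $B(\tau)$ is easy, whereas finding some inverse image of $(B, 1)$ amounts to exhibiting a satisfying assignment of $B$.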
More generally we have:\n\n\\begin{pro} \\label{f_Mcrit} \\, \nFor a nondeterministic polynomial-time Turing machine $M$, let $f_M$ be \ndefined by \n\n\\smallskip\n\n \\ \\ \\ $f_M(x,s) = x$ \\ iff \\ $M$, with {\\it choice sequence} $s$,\naccepts $x$ (and undefined otherwise).\n\n\\smallskip\n\n\\noindent Then $f_M$ is {\\sf fP}-critical iff the language (in {\\sf NP})\naccepted by $M$ is {\\sf P}-critical. \n\\end{pro}\n{\\bf Proof.} We studied the functions $f_M$ in Prop.\\ \\ref{ImfP} (where we \nused the notation $f_L$). We saw that $f_M$ is one-way iff \n${\\sf Im}(f_M) \\not\\in {\\sf P}$. Moreover, ${\\sf Im}(f_M)$ is the language \naccepted by $M$, and ${\\sf Im}(f_M) \\in {\\sf NP}$. So, $f_M$ is one-way iff \n${\\sf P} \\neq {\\sf NP}$. \n \\ \\ \\ $\\Box$ \n\n\\bigskip\n\n\\bigskip\n\n\\bigskip\n\n\n\\noindent {\\bf Machine model for {\\sf fP}}:\n\n\\medskip\n\n\\noindent Every function in {\\sf fP} can be computed by a Turing machine\nwith a built-in polynomial-time counter, that is used for enforcing \ntime-complexity and input balance. \nAs usual, to say that time-complexity or balance functions are \n``polynomial'' means that they have polynomial upper-bounds.\nMore precisely, we will describe every polynomial-time multi-tape Turing \nmachine $M$ by a program $v$ (which consists of the list of transitions of \nthe Turing machine, as well as its start and accept states), and a \npolynomial $p$ such that $p(n)$ is an upper-bound on the time-complexity \nand the input balance of $M$ on all inputs of length $\\le n$. Since we only \nrequire polynomial {\\it upper-bounds}, we can take $p$ of the form \n$p(n) = a \\, n^k + a$, where $k, a$ are positive integers.\nSo $p$ is determined by two integers (stored as bitstrings). \nWe do not need to assume anything about the time-complexity of the Turing \nmachine with program $v$ (and in general it is undecidable whether $v$\nhas polynomial-time complexity); instead, we want to consider pairs $(v,p)$ \nwhere $v$ is a Turing machine program, and $p$ is a polynomial (given by \ntwo integers $k,a$). \nBased on pairs $(v,p)$ we define the following: A partial function $f$ \nis computed by $(v,p)$ iff for all $x \\in A^*$, $f(x)$ is computed by the \nTuring machine with program $v$ in time $\\le a \\, |x|^k + a = p(|x|)$ and \ninput balance $\\le p(|f(x)|)$; when $f(x)$ is undefined then the program\neither gives no output, or violates the time bound or the input balance\nbound.\nIn this way {\\sf fP} can be recursively enumerated by pairs $(v,p)$. \nFor {\\sf P} and {\\sf NP} this (or a similar idea) goes back to the work of \nHartmanis, Lewis, Stearns, and others in the 1970's; compare with the \ngeneric {\\sf NP}-complete problem in \\cite{Hartmanis}, the proof of the \ncomplexity hierarchy theorems in chapter 12 in \\cite{HU}, and the section \non critical (``universal'') one-way functions in \\cite{Goldreich}. \n\nHowever, pairs $(v,p)$ do not form a machine model, being hybrids\nconsisting of a machine and two numbers.\nIn order to obtain a machine model for {\\sf fP} we take a Turing machine \nwith program $v$, and add an extra tape that will be used as a counter. \nWe assume that every tape has a left endmarker. On input $x$, a Turing \nmachine with counter first computes $p(|x|) = a \\, |x|^k + a$, and \nmoves the head of the counter tape $p(|x|)$ positions to the right. 
\nAfter the counter has been prepared (and the head on the input $x$ has been \nmoved back to the left end), the Turing machine executes program $v$ on the \nother tapes, while in each transition the head on the counter tape moves \nleft by one position. If the counter head gets back to the left endmarker\n(``it triggers the counter''), the Turing machine stops and rejects (and \nproduces no output). If the machine halts before triggering the counter, \nthe counter has no effect on the result of program $v$ on input $x$. After \nthis, if there is an output $y$ the Turing machine with counter checks the \ninput balance: If $|y| \\ge |x|$ the balance condition obviously holds, so\n$y$ is the final output. If $|y| < |x|$, the machine computes $p(|y|)$ \n($< p(|x|)$); if $|x| > p(|y|)$ the machine rejects (and produces no final \noutput); otherwise, $y$ is the final output. \n\n\\smallskip\n\nIn order to mark off space of length $p(|x|)$ on the counter tape, we need \nan algorithm for computing $p(|x|) = a \\, |x|^k + a$, and we will look at \nthe time-complexity of this algorithm. Recall that the bitlength of a \npositive integer $n$ is $\\lfloor \\log_2 n \\rfloor + 1$ (in unsigned binary\nrepresentation). \n\n\\smallskip\n\n\\noindent \n(1) First, we compute $|x|$ in binary, by repeatedly dividing $|x|$ by 2, \nusing two tapes: On one tape, we start with a length $n = |x|$, \nthen a mod-2 counter produces $\\lfloor n\/2 \\rfloor$ on the 2nd tape and \nrecords the remainder (0 or 1) on a 3rd tape; then a mod-2 counter computes \nhalf of the 2nd tape and writes it on the 1st tape, and records the \nremainder on tape 3, etc. This takes time \n \\ $\\le \\sum_{i=0}^{\\lfloor \\log_2 |x| \\rfloor} |x|\/2^i$ \n$ \\ = \\ 2^{\\lfloor \\log_2 |x| \\rfloor + 1} - 1 \\ < \\ 2 \\, |x|$.\n\n\\smallskip\n\n\\noindent \n(2) We compute $a \\, |x|^k + a$ in binary, using $k$ multiplications \nand one addition. This takes time \n\n\\smallskip\n\n$\\le \\ c \\, (\\log_2 |x|)^2 + 2 c (\\log_2 |x|)^2 + \\ \\ldots \\ + $\n$ (k-1) c (\\log_2 |x|)^2$ $+$ $c k \\log_2 |x| \\ \\log_2 a$\n\n \\ \\ \\ \\ \\ $+$ $ \\ c\\, \\max\\{k \\log_2 |x|, \\log_2 a\\}$ \n\n\\smallskip\n\n$< \\ c \\, k^2 \\, (\\log_2 |x|)^2 + c \\, k \\ \\log_2 a + c \\, k \\ \\log_2 |x|$ \n \\ \\ \\ \\ \\ (when \\ $k \\log_2 |x| \\ge \\log_2 a$, i.e., $|x| \\ge a^{1\/k}$)\n\n\\smallskip\n\n$< \\ c \\ k^2 \\ \\log_2 a \\ (\\log_2 |x|)^2$,\n\n\\smallskip\n\n\\noindent where $c$ is a positive constant that depends on the details of \nthe multiplication and addition algorithms. Two integers $n_1, n_2$ (in \nbinary) can be multiplied in time $\\le c \\, \\log_2 n_1 \\, \\log_2 n_2$, and \nadded in time $\\le c \\, \\max\\{\\log_2 n_1, \\ \\log_2 n_2\\}$. The bitlength \nof the product $n_1 n_2$ is $\\le$ $\\lfloor \\log_2 n_1 \\rfloor$ $+$ \n$\\lfloor \\log_2 n_2\\rfloor + 2$. \nFor the last expression in the calculation above, we have \n \\ $c \\ k^2 \\cdot \\log_2 a \\cdot (\\log_2 |x|)^2 \\ < \\ |x|$ \n \\ for large enough $|x|$, i.e., when $c_p \\le |x|$ \\, (where $c_p$ is \na positive integer depending on $p$). \n\nRemark (concrete upper-bound on $c_p$): We define $c_p$ to be the smallest \nnumber $N$ such that for all $x \\in A^{\\ge N}$, \nthe time to prepare the counter is $\\le |x|$. 
We saw that this time is\n$\\le |x|$ when \\ $|x| \\ge a^{1\/k}$ \\ and\n \\ $|x| \\ge c\\, k^2 \\, \\log_2 a \\ (\\log_2 |x|)^2$.\nOne proves easily that $n \\ge (\\log_2 n)^2$ for all $n \\ge 16$.\nSo, $|x| = |x|^{1\/2} \\cdot |x|^{1\/2}$\n$ \\ge c\\, k^2 \\, \\log_2 a \\cdot (\\log_2 |x|)^2$ \\ is implied by\n \\ $|x|^{1\/2} \\ge c\\, k^2 \\, \\log_2 a$ \\ and \\ $|x|^{1\/2} \\ge 16$. \nThus we have: \\ $c_p \\ \\le \\ $\n$\\max\\{256, \\ \\ c^2 \\, k^4 \\, (\\log_2 a)^2, \\ \\ a^{1\/k} \\}$.\n\n\\smallskip\n\n\\noindent \n(3) We mark off space of length $p(|x|)$ by converting the binary \nrepresentation of $p(|x|)$ to ``unary'': This is done by the Horner scheme \nwith repeated doubling (where the doubling is done by using two tapes, \nand writing two spaces on the 2nd tape for each space on the 1st tape). \nThis takes time \n \\ $\\sum_{i=0}^{\\lfloor \\log_2 p(|x|) \\rfloor + 1} 2^i \\ < \\ 4 \\, p(|x|)$. \n\n\\smallskip\n\nThus, the total time used to prepare the counter on input $x$ is \n$< \\ 2 \\, |x| + |x| + 4 \\, p(|x|) \\, \\le \\, 7 \\, p(|x|)$, when \n$|x| \\ge c_p$. \n\nFor inputs $x$ with $|x| < c_p$, the counter will also receive space \n$p(|x|)$, but the time used for this could be more than $7 \\, p(|x|)$.\nWe can remove the exception of the finitely many inputs of length $< c_p$ \nas follows. For these inputs we let the Turing machine operate as a\nfinite-state machine (without using the work tapes or any counter); for \nsuch an input $x$, the time to set up the counter will then be \n \\ $\\le \\, |x| + p(|x|)$ ($< 7 \\, p(|x|)$). \n\nAfter execution of program $v$ on input $x$ for time $\\le p(|x|)$, if\nthere is an output $y$ so far, the input balance is checked. \nIf $|y| \\ge |x|$ input balance holds automatically; checking whether\n$|y| \\ge |x|$ takes time $\\le |x|$. If $|x| > |y|$,\nwe compute $p(|y|)$ ($< p(|x|)$) in binary, in the same ways as in steps \n(1) and (2) of the counter-tape preparation above. \nThis takes time \\, $\\le \\, 2 \\, |y| + |y| \\, < \\, 3 \\, |x|$, when \n$|y| \\ge c_p$. The time needed to compare the binary representations of \n$|x|$ and $|y|$ (of length $\\le \\lfloor \\log_2 |x| \\rfloor + 1$) is \nabsorbed in the time to calculate $p(|y|)$. \nSo, the time for checking the input balance is $< |x| + 3 \\, |x|$\n$<$ $4 \\, p(|x|)$.\n\n\\smallskip\n\nSo far we have obtained a machine, described by $(v,p)$, whose \ntime-complexity is $\\le 7 \\, p(.) + p(.) + 4\\, p(.) = 12 \\ p(.)$ and whose \ninput balance is $\\le p(.)$.\nBecause of the preparation of the counter, the time-complexity of the new \nmachine is always larger than $p(.)$; therefore we will further modify the \nconstruction, as follows. Let $p(n) = a \\, (n^k + 1)$; we will assume from \nnow on that $a \\ge 12$. Let $p'(n) = (a - a\\%12) \\cdot (n^k + 1)$, where \n$a\\%12$ is the remainder of the division of $a$ by 12. \nSo, $a - a\\%12 \\ge 12$ and $a - a\\%12$ is a multiple of 12. \n \\ (1) Instead of marking off space of length $p(|x|)$, the new machine marks \noff length $\\frac{1}{12} \\, p'(|x|)$, in time $\\leq \\frac{7}{12} \\, p'(|x|)$.\n \\ (2) It executes the program $v$ for time $\\le \\frac{1}{12} \\, p'(|x|)$, \nusing the marked-off counter.\n(3) It checks the input balance in time $\\le \\frac{4}{12} \\, p'(|x|)$. 
\n The total time of the modified machine is then \n$\\le p'(|x|) \\le p(|x|)$, and the input balance is $\\le p'(|x|) \\le p(|x|)$.\n\n\nSo for every Turing machine program $v$ and any polynomial $p$ this \nmodified machine computes a function in {\\sf fP}. Conversely, if $v$ is a \nprogram with polynomial time for a function $f \\in {\\sf fP}$, then the \nmodified program with pair $(v,p)$ correctly computes $f$ if \n$\\frac{1}{12} \\, p'(.)$ is larger than the time (and balance) that $v$ uses \non all inputs; if $f$ belongs to {\\sf fP} then a polynomial $p$ such that\n$\\frac{1}{12} \\, p'(.)$ is large enough for bounding the time and the input \nbalance, exists. \nSuch a modified machine will be called {\\em Turing machine with polynomial\ncounter}. A program for such a machine consists of a pair $(v,p)$ and an\nextra program for preparing the counter and checking input balance; let's\ncall that extra program $u_p$ (it only depends on $p$, and not on $v$). \nThe triple $(v, p, u_p)$, or more precisely, the word \n${\\sf code}(v \\# k \\# a \\# u_p)$ with the numbers $k,a$ written in \nbinary, will be called a {\\it polynomial program}. Thus, Turing machines \nwith polynomial counter are a machine model for {\\sf fP}.\nSo we have proved:\n\n\\begin{pro} \\label{Polyn_TM}\nThere exists a class of modified Turing machines, called {\\em Turing machine \nwith polynomial counter}, with the following properties:\nFor every $f \\in {\\sf fP}$ there exists a Turing machine with polynomial \ncounter that computes $f$; and for every Turing machine with polynomial\ncounter, the input-output function belongs to {\\sf fP}. \n \\ \\ \\ $\\Box$\n\\end{pro} \n{\\bf Notation:} \\ A polynomial program \n$w = {\\sf code}(v \\# k \\# a \\# u_p)$, based on a Turing machine program $v$ \nand a polynomial $p$ (with $p(n) = a \\, n^k + a$), will be denoted by \n$\\langle v,p \\rangle$. The polynomial $p$ that appears in $w$ will often be \ndenoted by $p_w$. The function computed by a polynomial program \n$w = \\langle v,p \\rangle$ will be denoted by $\\phi_w$ ($\\in {\\sf fP}$).\n\n\\bigskip\n\n\n\\noindent {\\bf Evaluation maps for {\\sf fP}}:\n\n\\medskip\n\nAt first we consider a function ${\\sf ev}_{\\sf poly}$ defined by \n${\\sf ev}_{\\sf poly}(w, x) = (w, \\phi_w(x))$, \nwhere $w = \\langle v,p \\rangle$ is any polynomial program.\nBut ${\\sf ev}_{\\sf poly}$ is {\\em not} in {\\sf fP}. Indeed, the output \nlength (and hence the time-complexity) of ${\\sf ev}_{\\sf poly}$ on input \n$(w,x)$ is equal to $p(|x|)$ (in infinitely many cases, when $p$ is a tight \nupper-bound); as $w$ varies, the degree of $p$ is unboundedly large, hence \nthe time-complexity of ${\\sf ev}_{\\sf poly}$ has no polynomial upper-bound. \nWe will nevertheless be able to build {\\sf fP}-critical functions. For a \nfixed polynomial $q$ (of the form $q(n) = a \\, n^k + a$), let\n\n\\medskip\n\n\\hspace{1in} ${\\sf fP}^{q} \\ = \\ \\{ \\phi_w \\in {\\sf fP} : \\ p_w \\le q\\}$.\n\n\\medskip\n\n\\noindent More explicitly, $p_w \\le q$ means that the polynomial program $w$\nhas time-complexity $\\le p_w(|x|) \\le q(|x|)$ and input-balance \n$|x| \\leq p_w(|\\phi_w(x)|) \\le q(|\\phi_w(x)|)$, for all \n$x \\in {\\sf Dom}(\\phi_w)$.\n\nIn general, for polynomials $q_1, q_2$ we say $q_1 \\le q_2$ iff for all \nnon-negative integers $n$: $q_1(n) \\le q_2(n)$.\nInterestingly, for polynomials of the form $q_i(n) = a_i \\, n^{k_i} + a_i$ \nwe have: \n$q_1 \\le q_2$ \\ iff \\ $k_1 \\le k_2$ and $a_1 \\le a_2$. 
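\smallskip

\noindent As an informal illustration (outside the formal development), the order test for polynomials of this special form can be sketched in a few lines of Python; here a polynomial $a \, n^k + a$ is represented simply by the pair $(k,a)$, which is a convention of the sketch rather than the encoding used in polynomial programs:

\begin{verbatim}
def poly_leq(p1, p2):
    # p_i = (k_i, a_i) represents q_i(n) = a_i * n**k_i + a_i,  with a_i, k_i >= 1
    (k1, a1), (k2, a2) = p1, p2
    return k1 <= k2 and a1 <= a2

# brute-force sanity check of the characterization, on small parameters
for k1 in range(1, 4):
    for a1 in range(1, 6):
        for k2 in range(1, 4):
            for a2 in range(1, 6):
                pointwise = all(a1*n**k1 + a1 <= a2*n**k2 + a2
                                for n in range(0, 200))
                assert pointwise == poly_leq((k1, a1), (k2, a2))
\end{verbatim}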
Hence it is easy to check whether $p_w \le q$, given $w$ and the two numbers
that determine $q$.

\smallskip

A polynomial program $w$ such that $p_w \le q$ (for a fixed polynomial $q$)
is called a {\em $q$-polynomial program}. We define ${\sf ev}_{q}$ by

\smallskip

\hspace{1in} ${\sf ev}_{q}(w, x) = (w, \phi_w(x))$,

\smallskip

\noindent where $w$ is any $q$-polynomial program.
The function ${\sf ev}_{q}$ above has two input and two output strings.
To make ${\sf ev}_{q}$ fit into our framework of functions with one input
and one output string we encode ${\sf ev}_{q}$ as
${\sf ev}_{q}^C: \{0,1\}^* \to \{0,1\}^*$ where for all
$w, x \in \{0,1\}^*$ such that $w$ is a $q$-polynomial program,

\medskip

\hspace{1in} ${\sf ev}_{q}^C\big({\sf code}(w) \, 11 \, x\big) = $
 ${\sf code}(w) \, 11 \, \phi_w(x)$.

\medskip

\noindent From now on we will call ${\sf ev}_{q}^C$ an {\it ``evaluation
map''}. We observe that in the special case where $\phi_w$ (for a fixed
$w$) is a right ideal morphism, the function

\medskip

\hspace{1in} ${\sf ev}_{q}^C\big({\sf code}(w) \, 11 \, \cdot\big)$:
$ \ \ x \ \longmapsto \ \ {\sf code}(w) \, 11 \, \phi_w(x)$

\medskip

\noindent is also a right ideal morphism.

\bigskip


\noindent {\bf Criticality of ${\sf ev}_{q}^C$:}

\medskip

\noindent For any fixed word $v \in \{0,1\}^*$ we define the prepending
map

\smallskip

\hspace{1in} $\pi_v: x \in \{0,1\}^* \longmapsto v \, x$ ;

\smallskip

\noindent and for any fixed positive integer $k$ we define

\smallskip

\hspace{1in} $\pi_k': z \, x \in \{0,1\}^* \longmapsto x$, \ where $|z| = k$
 \ (with $\pi_k'(t)$ undefined when $|t| < k$).

\smallskip

\noindent Clearly, $\pi_v, \pi_k' \in {\cal RM}_2^{\sf P}$, and we have
 \ $\pi'_{|v|} \circ \pi_v = {\sf id}_{A^*}$
 \ (i.e., $\pi'_{|v|}$ is a left inverse of $\pi_v$).

We observe that $\pi_v$ can be written as a composite of the
maps $\pi_0$ and $\pi_1$, for any $v \in \{0,1\}^*$. Similarly, $\pi_k'$
is the $k$th power of $\pi_1'$.


\begin{pro} \label{ev_q}
 \ Let $q$ be any polynomial such that for all $n \ge 0$, $q(n) > cn+c$
(where $c > 1$ is a constant).
Then ${\sf ev}_{q}^C$ belongs to {\sf fP}, and ${\sf ev}_{q}^C$
is a one-way function if one-way functions exist.
\end{pro}
{\bf Proof.} We saw that testing whether $p_w \le q$ is easy for
polynomials of the form that we consider.
By reviewing the workings of a universal Turing machine (e.g.,
in \cite{HU}) we see that the time-complexity of ${\sf ev}_{q}(w,x)$ is \,
$\leq \ c_0 \ |w| \cdot p_w(|x|)^2$ (when $p_w$ is at least linear); here,
$c_0 \ge 1$ is a constant (independent of $x$ and $w$).
The factor $p_w(|x|)^2$ comes from the fact that Turing machines can have any
number of tapes, whereas a Turing machine for ${\sf ev}_{q}$ has a fixed
number of tapes; any number of tapes can be converted to one tape, but the
complexity increases by a square (the more efficient Hennie-Stearns
construction converts any number of tapes to two tapes, with a complexity
increase from $T$ to $T \log T$). The universal Turing machine
simulates each transition of program $w$ (modified into a 1-tape Turing
machine) using $\le c_1 \, |w|$ steps (for a constant $c_1 \ge 1$).

For input balance: When $|x| \le |\phi_w(x)|$ we also have
$|w| + |x| \le |w| + |\phi_w(x)|$ so balance is automatic.
When $|x| > |\phi_w(x)|$ then (since $p_w$ bounds the input balance of
$\phi_w$), the input-length satisfies \,
$|w| + |x| \leq |w| + p_w(|\phi_w(x)|) \ \leq \ |w| + q(|\phi_w(x)|)$
$\le$ $q(|w| + |\phi_w(x)|)$.
Hence ${\sf ev}_{q}$, and similarly ${\sf ev}_{q}^C$, belongs to {\sf fP}.

Criticality:
If the function ${\sf ev}_{q}$ has an inverse $e'_q \in {\sf fP}$, then
 \, ${\sf ev}_{q} \circ e'_q \circ {\sf ev}_{q}(w,x) = (w, \phi_w(x))$.
Hence, for any function $\phi_w \in {\sf fP}^{q}$ with a fixed program $w$
we have:
$\phi_w \circ \pi_{|w|}' \circ e'_q \circ \pi_w \circ \phi_w = \phi_w$,
where $\pi_{|w|}': (w,v) \mapsto v$, and $\pi_w: y \mapsto (w,y)$.
For a fixed $w$ we have $\pi_{|w|}', \pi_w \in {\sf fP}$, so \,
$\pi_{|w|}' \circ e'_q \circ \pi_w$ \, is an inverse of $\phi_w$. Hence, if
${\sf ev}_{q}$ is not one-way, no function in ${\sf fP}^{q}$ is one-way.
But there are functions (e.g., some $f_M$ as seen in Prop.\ \ref{f_Mcrit})
that are {\sf fP}-critical, even when $q$ is linear (since by padding
arguments one can obtain {\sf NP}-complete languages with nondeterministic
linear time-complexity). So, if ${\sf ev}_{q}$ were not one-way, some
{\sf fP}-critical $f_M$ would have an inverse in {\sf fP}, i.e., $f_M$
would not be one-way; by {\sf fP}-criticality of $f_M$ this means that
one-way functions do not exist. Hence ${\sf ev}_{q}$ is one-way if one-way
functions exist. The same proof is easily adapted to ${\sf ev}_{q}^C$.
 \ \ \ $\Box$

\bigskip


\noindent {\bf Finite generation of {\sf fP}:}

\medskip

\noindent We will show that ${\sf ev}_{q}^C$ can be used to simulate
universal evaluation maps, and to prove that {\sf fP} is finitely generated.
This is based on the universality of ${\sf ev}_{q_2}^C$ for
${\sf fP}^{q_2}$, combined with a padding argument; here $q_2$ is a
polynomial of degree 2, with $q_2(n) \ge c\, n^2 +c$ for a constant
$c \ge 12$. (We chose 12 in view of the reasoning before Prop.\
\ref{Polyn_TM}.)
We need some auxiliary functions first.

\medskip

\noindent We define the {\em expansion (or padding) map}, first as a
multi-variable function for simplicity:

\smallskip

\hspace{.6in} ${\sf expand}(w, x) \ = \ $
 $({\sf e}(w), \ (0^{4 \, |x|^2 + 7 \, |x| + 2}, \ x))$

\smallskip

\noindent where ${\sf e}(w)$ is such that \

\smallskip

\hspace{.6in}
$\phi_{{\sf e}(w)}(0^k, \ x) \ = \ (0^k, \ \phi_w(x))$, \ for all $k \geq 0$.

\smallskip

\noindent The program ${\sf e}(w)$ is easily obtained from the program
$w$, since it just processes the padding in front of the input and in front
of the output of $\phi_w$, and acts as $\phi_w$ on $x$; moreover, because of
the padding, the time and balance polynomial of ${\sf e}(w)$ can be taken
smaller than that of $w$.
As a one-variable function, {\sf expand} is defined by

\medskip

\hspace{.6in} ${\sf expand}\big({\sf code}(w) \ 11 \ x\big) \ = \
 {\sf code}({\sf ex}(w)) \ 11 \ 0^{4 \, |x|^2 + 7 \, |x| + 2} \ 11 \ x$,

\medskip

\noindent where for one-variable functions, the program ${\sf ex}(w)$
is such that \

\medskip

\hspace{.6in} $\phi_{{\sf ex}(w)}(0^k \ 11 \ x) \ = \ 0^k \ 11 \ \phi_w(x)$.

\medskip

\noindent Again, ${\sf ex}(w)$ is a slight modification of
$w = {\sf code}(v \# k \# a \# u_p)$ (following the notation for the
machine model for {\sf fP}), to allow inputs and outputs with padding, and
to readjust the complexity and balance polynomial.
\nMore precisely, ${\\sf ex}(w)$ is of the form \n${\\sf code}(r \\# v \\# \\lceil k\/2 \\rceil \\# a_e \\# u_{p_e})$, where $r$ is \na preprocessing subprogram by which the prefix $0^k 11$ of the input is \nsimply copied to the output; at the end of execution of $r$, the state and \nhead-positions are the start state and start positions of the subprogram \n$w$. The appropriate complexity and balance polynomial stored in \n${\\sf ex}(w)$ is \n\n\\medskip\n\n \\ \\ \\ $p_e(n) = a_e \\, n^{\\lceil k\/2 \\rceil} + a_e$, \\ with \\ \n$a_e \\ = \\ \\max\\{12, \\ \\lceil a\/2^k \\rceil +1\\}$. \n\n\\medskip\n\n\\noindent Indeed, if $m = |x|$, an input \n$0^{4 \\, |x|^2 + 7 \\, |x| + 2} \\ 11 \\ x$ of $\\phi_{{\\sf ex}(w)}$ has \nlength $i = 4 \\, m^2 + 8 \\, m + 4$. So, $m = \\sqrt{i}\/2 - 1$. \nLet $a \\, (m^k + 1)$ be the polynomial of program $w$.\nThe complexity of $\\phi_{{\\sf ex}(w)}$ on its input \nis $4 \\, m^2 + 7 \\, m + 4$ (for reading the part $0^* 11$ of the input), \nplus \\, $a \\, (m^k + 1)$ (for using $x$ and computing $\\phi_w(x)$). \nSo in terms of its input length $i$, the complexity of \n$\\phi_{{\\sf ex}(w)}$ is \\ $< \\ i + a \\, (m^k + 1)$\n$\\le \\ i^{\\lceil k\/2 \\rceil} \\ + \\ a \\, ((\\sqrt{i}\/2 - 1)^k + 1)$ \n$\\le \\ i^{\\lceil k\/2 \\rceil} \\ + \\ a\/2^k \\, i^{k\/2}$; \nthe last step uses the fact that $(z - 1)^m \\le z^m - 1$ for all \n$z \\ge 0, \\ m \\ge 1$. Hence the complexity of $\\phi_{{\\sf ex}(w)}$ is \n$< \\ (a\/2^k + 1) \\, i^{\\lceil k\/2 \\rceil}$. \nFor the input balance of $\\phi_{{\\sf ex}(w)}$ we have: The input-length\nis bounded by twice the output-length.\nIndeed, the input length is $i = 4 \\, |x|^2 + 8 \\, |x| + 4$\n$<$ $2 \\cdot |0^{4 \\, |x|^2 + 7 \\, |x| + 2} \\ 11|$\n$<$ $2 \\cdot |0^{4 \\, |x|^2 + 7 \\, |x| + 2} \\, 11 \\, \\phi_w(x)|$ $=$ \n$2 \\cdot |\\phi_{{\\sf ex}(w)}(0^{4 \\, |x|^2 + 7 \\, |x| + 2} \\, 11 \\, x)|$.\nMoreover, we want $a_e$ to stay $\\ge 12$ (in view of the reasoning before \nProp.\\ \\ref{Polyn_TM}). \n\n\\smallskip\n\nIn order to achieve an arbitrarily large polynomial amount of padding we \niterate the quadratic padding operation. Therefore we define a {\\em repeated \nexpansion (or re-padding)} map, first as a two-variable function:\n\n\\medskip\n\n\\hspace{.6in} ${\\sf reexpand}(u, (0^h, x)) \\ = \\ $\n $({\\sf e}(u), (0^{4 \\, h^2 + 8 \\, h + 2}, x))$, \\ for all $h > 0$, \n\n\\medskip\n\n\\noindent where ${\\sf e}(.)$ is as above. As a one-variable function, \n\n\\medskip\n\n\\hspace{.6in} ${\\sf reexpand}\\big({\\sf code}(u) \\ 11 \\ 0^h \\ 11 \\ x \\big)$ \n$ \\ = \\ $ \n${\\sf code}({\\sf ex}(u)) \\ 11 \\ 0^{4 \\, h^2 + 8 \\, h + 2} \\ 11 \\ x$, \n \\ for any $h \\geq 0$,\n\n\\medskip\n\n\\noindent with ${\\sf ex}(.)$ as in {\\sf expand} above.\n\n\\medskip\n\nWe also introduce a {\\em contraction (or unpadding) map}, which is a partial\nleft inverse of {\\sf expand}. We define {\\sf contr} first as a multi-variable \nfunction:\n\n\\medskip\n\n\\hspace{.6in} \n$ {\\sf contr}(w, \\ (0^h, \\ y)) \\ = \\ ({\\sf c}(w), \\ y)$, \n \\ \\ if $h \\leq 4 \\, |y|^2 + 7 \\, |y| + 2$ \\ \\ (undefined otherwise). \n \n\\medskip\n\n\\noindent As a one-variable function, {\\sf contr} is defined by\n\n\\medskip \n\n\\hspace{.6in} \n${\\sf contr}\\big({\\sf code}(w) \\ 11 \\ 0^h \\ 11 \\ y \\big) \\ = \\ $\n${\\sf code}({\\sf co}(w))\\ 11 \\ y$, \\ \\ if $h \\leq 4\\, |y|^2 + 7\\, |y| + 2$ \n \n\\hspace{.6in} (undefined otherwise). 
\n\n\\medskip\n\n\\noindent The program transformations ${\\sf c}(.)$ and ${\\sf co}(.)$ are\ninverses of ${\\sf e}(.)$, respectively ${\\sf ex}(.)$. So, ${\\sf c}(.)$ and\n${\\sf co}(.)$ erase the prefix $r$ in ${\\sf ex}(u)$, and replace the\npolynomial $b \\, n^h + b$, encoded in ${\\sf ex}(u)$, by\n$b_c \\, n^{2h} + b_c$, where \\, $b_c = (b-1) \\, 2^{2h}$.\n\n\\smallskip\n\nTo invert repeated padding we introduce a {\\em repeated contraction (or \nunpadding)} map, first as a multi-variable function. \nNote that if $h = 4 \\, k^2 + 8 \\, k + 2$ (which is the amount of padding \nintroduced by {\\sf reexpand}), then $k = \\frac{1}{2} \\, \\sqrt{h+2} -1$.\nTherefore, for any $h \\ge 0$ we define\n\n\\medskip\n\n\\hspace{.6in} ${\\sf recontr}(u, \\ (0^h, \\ y)) \\ = \\ $ \n$({\\sf c}(u), \\ (0^{\\max\\{1, \\ \\lfloor \\sqrt{h+2}\/2 \\rfloor - 1\\}}, \\ y))$\n \\ \\ (undefined on other inputs).\n \n\\medskip\n\n\\noindent As a one-variable function, {\\sf recontr} is defined by\n\n\\medskip\n\n\\hspace{.6in} \n${\\sf recontr}\\big({\\sf code}(u) \\ 11 \\ 0^h \\ 11 \\ y \\big)$\n $ \\ = \\ $ \n${\\sf code}({\\sf co}(u))\\ 11$\n $0^{\\max\\{1, \\ \\lfloor \\sqrt{h+2}\/2 \\rfloor - 1\\}} \\ 11\\ y$ \n\n\\hspace{.6in} (undefined on other inputs).\n\n\\medskip\n\n\nThe maps {\\sf expand}, {\\sf reexpand}, {\\sf contr}, and {\\sf recontr} belong \nto {\\sf fP}, and they are regular (they have polynomial-time inverses). \n\n\n\n\\begin{pro} \\label{fPfingen} \n \\ \\ {\\sf fP} is finitely generated.\n\\end{pro}\n{\\bf Proof.} \\ We will show that the following is a generating set of \n{\\sf fP}: \n\n\\medskip\n \n\\hspace{1in} \n$\\{{\\sf expand}, \\ {\\sf reexpand}, \\ {\\sf contr}, \\ {\\sf recontr}, $\n$ \\ \\pi_0, \\ \\pi_1, \\ \\pi_1', \\ {\\sf ev}_{q_2}^C \\}$, \n\n\\medskip\n\n\\noindent where $q_2$ is the polynomial $q_2(n) = c \\, n^2 + c$, with\n$c \\geq 12$ (the number 12 comes from the discussion before Prop.\\\n\\ref{Polyn_TM}).\n\n\\smallskip\n\n\\noindent {\\sf Remark:} \\, The functions {\\sf expand}, {\\sf reexpand}, \n{\\sf contr}, and {\\sf recontr} all have quadratic time-complexity and\nbalance functions, so they can be generated by $\\pi_0, \\pi_1, \\pi_1'$, and\n${\\sf ev}_{q_2}^C$ (provided that the constant $c$ in $q_2$ is chosen large \nenough, so that ${\\sf ev}_{q_2}^C$ can execute {\\sf expand}, {\\sf reexpand}, \n{\\sf contr}, and {\\sf recontr}). Thus \\, \n\n\\medskip\n\n\\hspace{1in} \n$\\{\\pi_0, \\ \\pi_1, \\ \\pi_1', \\ {\\sf ev}_{q_2}^C \\}$ \\, \n\n\\medskip\n\n\\noindent is a generating set of {\\sf fP}. \nWe use the larger generating set since it yields simpler formulas.\n\n\\medskip\n\nLet $w$ be a program with polynomial counter and let $m$ be an integer \nupper-bound on $\\log_2(a + k)$, where $p_w (n) = a \\, n^k + a$. \nWe also assume that the program $w = \\langle v, p \\rangle$ is such that \nfor any polynomial $P(.) > p_w(.)$, $\\langle v, P \\rangle$ also computes \n$\\phi_w$; indeed, since $\\phi_w \\in {\\sf fP}$, we can choose $v$ and $p$ so \nthat the execution of $v$ by the Turing machine with polynomial counter \n(described by $\\langle v, p \\rangle$) never triggers the counter; in that \ncase, making the counter larger does not change the function. 
Then for all $x \in \{0,1\}^*$,

\bigskip

\noindent $(\star)$
\hspace{.5in} $\phi_w(x) \ = \ $
$\pi_{_{2 \, |w'| + 2}}' \circ {\sf contr} \circ {\sf recontr}^{2 \, m}$
$\circ$ ${\sf ev}_{q_2}^C$ $\circ$
${\sf reexpand}^{2m} \circ {\sf expand}$ $\circ$
$\pi_{_{{\sf code}(w) \, 11}}(x)$ ,

\bigskip

\noindent where $w' = {\sf co}^{2m+1} \circ {\sf ex}^{2m+1}(w)$.
Indeed,

\medskip

 $x \ \ \stackrel{\pi_{_{{\sf code}(w) \, 11}}}{\longmapsto} \ \ $
${\sf code}(w) \ 11 \ x \ \ $
$\stackrel{\sf expand}{\longmapsto} \ \ $
${\sf code}({\sf ex}(w)) \ 11 \ 0^{4 \, |x|^2 + 7\, |x| +2} \ 11 \ x$

\medskip

$\stackrel{{\sf reexpand}^{2m}}{\longmapsto} \ \ $
${\sf code}({\sf ex}^{2m+1}(w)) \ 11 \ 0^{N_{2m+1}} \ 11 \ x$.

\medskip

\noindent Here $N_1 = 4\, |x|^2 + 7\, |x| +2$, and
$|0^{N_1} \, 11 \, x| = (2 \, (|x| + 1))^2$; inductively,
$N_i = 4\, N_{i-1}^2 + 8\, N_{i-1} + 2$ \ for $1 < i \leq 2m+1$,
and $|0^{N_i} \, 11| = (2 \, (N_{i-1} + 1))^2$.

\noindent Continuing the calculation,

\medskip

$\stackrel{{\sf ev}_{q_2}^C}{\longmapsto} \ \ $
${\sf code}({\sf ex}^{2m+1}(w)) \ 11 \ 0^{N_{2m+1}} \ 11 \ \phi_w(x) $

\medskip

$\stackrel{{\sf recontr}^{2m}}{\longmapsto} \ \ $
${\sf code}(w') \ 11 \ 0^{\ell} \ 11 \ \phi_w(x) \ \ $

\medskip

$\stackrel{\sf contr}{\longmapsto} \ \ $
${\sf code}(w') \ 11 \, \phi_w(x) \ \ $
$\stackrel{\pi_{_{2 \, |w'| + 2}}'}{\longmapsto} \ \ \phi_w(x)$

\medskip

\noindent where $w' = {\sf co}^{2m+1} \circ {\sf ex}^{2m+1}(w)$.
We use $2m$ in ${\sf recontr}^{2 m}$ because $\phi_w(x)$ could be much
shorter than $x$ (but by input balance, $|x| \le p_w(|\phi_w(x)|)$).
As a consequence, we also use $2m$ in ${\sf reexpand}^{2 m}$ in order to have
equal numbers of program transformations ${\sf ex}(.)$ and ${\sf co}(.)$.
Note that doing more input padding than necessary does not do any harm; also,
if $w$ contains a polynomial $p_w$ larger than needed for computing
$\phi_w$, this does not cause a problem (by our assumption on $v$).
By the choice of $2m$, the value of $\ell$ above is less than
$4 \, |\phi_w(x)|^2 + 7 \, |\phi_w(x)| + 2$, so {\sf contr} can be applied
correctly.

The argument of ${\sf ev}_{q_2}^C$ in the above calculation has length
$> N_{2m+1} + 2 + |x|$, which is much larger than the time it
takes to simulate the machine with program $w$ on input $x$ (that time is
$< c_0 |w| \, p_w(|x|)^2$). In fact, by the choice of $m$, the polynomial
encoded in ${\sf ex}^{2m+1}(w)$ is the linear polynomial $12 \, (n + 1)$
(which is $\le q_2(n)$). Hence, ${\sf ev}_{q_2}^C$ works correctly on its
input in this context.
 \ \ \ $\Box$

\medskip

We saw that {\sf fP} does not have an evaluation map in the same sense as
the Turing evaluation map. However, formula $(\star)$ in the proof of
Prop.\ \ref{fPfingen} shows that the map ${\sf ev}_{q_2}^C$ simulates every
function in {\sf fP}, in the following sense:
$f_2$ {\em simulates} $f_1$ (denoted by $f_1 \preccurlyeq f_2$) iff there
exist $\beta, \alpha \in {\sf fP}$ such that
$f_1 = \beta \circ f_2 \circ \alpha$; this is discussed further at the
beginning of Section 5.
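\smallskip

\noindent As an aside, the padding-length bookkeeping in formula $(\star)$ is elementary arithmetic; the following small Python sketch (illustration only; the function names are ours, and the program transformations ${\sf ex}(.)$ and ${\sf co}(.)$ are not modelled) computes $N_1, \ldots, N_{2m+1}$ and checks that {\sf recontr} recovers the previous padding length at each step:

\begin{verbatim}
import math

def expand_pad(m):
    # padding 0^(4m^2+7m+2) introduced by expand on a word of length m
    return 4*m*m + 7*m + 2

def reexpand_pad(h):
    # padding 0^(4h^2+8h+2) produced by reexpand from existing padding 0^h
    return 4*h*h + 8*h + 2

def recontr_pad(h):
    # padding length recovered by recontr: max{1, floor(sqrt(h+2)/2) - 1}
    return max(1, math.isqrt(h + 2) // 2 - 1)

# the lengths N_1, ..., N_{2m+1} of the calculation above, e.g. for |x| = 3, m = 2
x_len, m = 3, 2
N = [expand_pad(x_len)]
for _ in range(2 * m):
    N.append(reexpand_pad(N[-1]))

# recontr undoes reexpand on exactly these padding amounts
for i in range(len(N) - 1, 0, -1):
    assert recontr_pad(N[i]) == N[i - 1]
\end{verbatim}

\noindent The rapid growth of the $N_i$ is harmless: for a fixed program $w$ (hence a fixed $m$) the total amount of padding is still polynomial in $|x|$.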
\nFormula $(\\star)$ in Prop.\\ \\ref{fPfingen} implies:\n\n\\begin{pro} \\label{evalfP} \n \\ Every function $f \\in {\\sf fP}$ is simulated by ${\\sf ev}^C_{q_2}$.\n \\ \\ \\ \\ \\ \\ $\\Box$\n\\end{pro} \nIt follows from this and the definition of simulation that\n${\\sf ev}^C_{q_2}$ belongs to the $\\cal J$-class of ${\\sf id}_{A^*}$ in \n{\\sf fP}. \n\n\\medskip\n\nSince {\\sf fP} is finitely generated we now have two ways of representing \neach element $g \\in {\\sf fP}$ by a word: \n(1) We have $g = \\phi_w$ for some polynomial program $w \\in A^*$ (as seen \nin Prop.\\ \\ref{ev_q}), and \n(2) $g$ can be represented by a string of generators (considering the finite \nset of generators of {\\sf fP} as an alphabet). The next proposition \ndescribes the translation between these two representations.\n\n\n\\begin{pro} \\label{compilers}\nThere are total computable maps $\\alpha, \\beta$ such that for any word $s$ \nover a finite generating set of {\\sf fP}, $\\alpha(s)$ is a polynomial \nprogram for the function given by $s$; and for any polynomial program $u$, \n$\\beta(u)$ is a word for $\\phi_u$ over the generators of {\\sf fP}.\n\nMore precisely, let $\\Gamma$ be a finite generating set of {\\sf fP}. \nFor any $s \\in \\Gamma^*$, let $\\Pi s \\in {\\sf fP}$ be the element of \n{\\sf fP} obtained by composing the generators in the sequence $s$. \nThere exist total recursive {\\em ``compiler maps''} \n$\\alpha: \\Gamma^* \\to \\{0,1\\}^*$ and $\\beta: \\{0,1\\}^* \\to \\Gamma^*$ such \nthat for all $s \\in \\Gamma^*$ and all $w \\in \\{0,1\\}^*$: \\, \n$f_{\\alpha(s)} = \\Pi s$, and $\\Pi \\beta(w) = \\phi_w$. \n\\end{pro}\n\n\\smallskip\n\n\\noindent\n{\\bf Proof.} The map $\\beta$ is given by formula $(\\star)$ in the proof \nof Prop.\\ \\ref{fPfingen}, where a representation over the generators is \nexplicitly constructed. When $u$ is not a well-formed polynomial program\nwe let $\\beta(u)$ be a sequence of generators for the empty function. \n\nConversely, by composing a sequence of generators, a function in {\\sf fP} \nis obtained (note that every sequence $s$ of generators has a finite \nlength). More precisely, if $f_1, f_2 \\in {\\sf fP}$ have as complexity \nand balance bounds the polynomials \n$q_i(n) = a_i \\, n^{k_i} + a_i$ ($i = 1, 2$), then \n$f_2 \\circ f_1$ has input balance $\\le q_2 \\circ q_1(n)$ (obviously), and \ntime-complexity $\\le q_1(n) + q_2 \\circ q_1(n)$. \nIndeed, a polynomial-time program for $f_2 \\circ f_1$ is obtained by first \ntaking the program for $f_1$ on input $x$, and then applying the program \nfor $f_2$ to $f_1(x)$ (in time $\\le q_2(|f_1(x)|)$). \nThe corresponding polynomial upper-bound is\n \\, $q_1(n) + q_2 \\circ q_1(n)$ $=$ \n$a_1 \\, (n^{k_1} + 1) + a_2 \\, a_1^{k_2} \\, (n^{k_1} + 1)^{k_2} + a_2$ $<$\n$(a_1 + a_2 \\, a_1^{k_2})\\, (n^{k_1} + 1)^{k_2} + a_2$.\nIn order to obtain a polynomial upper-bound of the form $a \\, n^k + a$, we\nuse the inequality \n\n\\smallskip\n\n$(n + 1)^j \\ \\le \\ 2^{j-1} \\, (n^j + 1)$, \\ for all $n \\ge 0, \\ j \\ge 1$. \n\n\\smallskip\n\n\\noindent (To prove this inequality apply calculus to the function \n$f(x) = 2^{j-1} (x^j + 1) - (x + 1)^j$.) \nThus for $f_2 \\circ f_1$ we get a complexity and balance upper-bound \n\n\\smallskip\n\n$q(n) = a \\, n^{k_1 k_2} + a$, where\n \\ $a = a_2 + a_1 + a_2 \\, a_1^{k_2} \\, 2^{k_2}$.\n\\smallskip\n\n\\noindent This yields an algorithm for obtaining a polynomial program for \n$f_2 \\circ f_1$ from polynomial programs for $f_1$ and $f_2$. 
For a sequence
of generators $s$, this algorithm can be repeated $|s| - 1$ times to yield
a polynomial program for the sequence $s$ of generators.
 \ \ \ $\Box$

\medskip

\noindent A finite generating set $\Gamma$ for {\sf fP} can be used to
construct a {\em generator-based evaluation map} for {\sf fP}, defined by
 \ $(s, x) \in \Gamma^* \times A^* \ \longmapsto \ {\sf ev}_{\Gamma}(s,x)$
$=$ $(s, \, (\Pi s)(x))$.
 However, ${\sf ev}_{\Gamma}$ does not belong to {\sf fP}, for the
same reasons as we saw at the beginning of Sect.\ 4 for ${\sf ev}_{\sf poly}$.
(But just as for ${\sf ev}_{\sf poly}$ we could restrict ${\sf ev}_{\Gamma}$
to a function that belongs to {\sf fP} and that simulates every element
of {\sf fP}.)


\begin{pro} \label{notFinPres}
 \ {\sf fP} is {\em not} finitely presented. Its word problem is
co-r.e., but not r.e.
\end{pro}
{\bf Proof.} The word problem is co-r.e.: Let $U, V \in \Gamma^*$; using
Prop.\ \ref{compilers} we effectively find programs $u, v \in A^*$ from
$U, V$ such that $\phi_u = \Pi U$, $\phi_v = \Pi V$.
If $\phi_u \neq \phi_v$ then by exhaustive search we will find $x$ such that
$\phi_u(x) \neq \phi_v(x)$, thus showing that $U \neq V$ in {\sf fP}.
When $U = V$ in {\sf fP}, this procedure rejects by not halting.

The word problem of {\sf fP} is undecidable, since the equality problem for
languages in {\sf P} can be reduced to this (reducing $L$ to ${\sf id}_L$
or to ${\sf id}_{{\sf code}(L\#)}$).
And the equality problem for languages in {\sf P} is undecidable, since the
universality problem of context-free languages can be reduced to the equality
problem for languages in {\sf P}; all context-free languages are in {\sf P}.
The universality problem for context-free languages is the question whether
for a given context-free grammar $G$ with terminal alphabet $A$ (with
$|A| \geq 2$), the language generated by $G$ is $A^*$; this problem is
undecidable (see \cite{HU} Thm.\ 8.11).

Since the word problem is co-r.e.\ but undecidable, it is not r.e. Hence
this finitely generated monoid is not finitely presented (since the word
problem of a finitely presented monoid is r.e.).
 \ \ \ $\Box$


\begin{pro} \ \ {\sf fP} is finitely generated by {\em regular} elements.
\end{pro}
{\bf Proof.} All the listed generators of {\sf fP} are regular, except
possibly ${\sf ev}_{q_2}^C$.
Let us define a partial function $E_{q_2} \in {\sf fP}$ by \,
 $E_{q_2}(w, x) = (w, \phi_w(x), x)$, when $\phi_w \in {\sf fP}^{q_2}$.
Obviously, $E_{q_2}$ is not one-way (an inverse can simply erase the middle
component $\phi_w(x)$).
But ${\sf ev}_{q_2}$ (as a two-variable function) can be expressed as a
composition of $E_{q_2}$ and the other (regular) generators.
In more detail, ${\sf ev}_{q_2} = \pi'_{q_2} \circ E_{q_2}$, where
$\pi'_{q_2}(w,z,x) = (w,z)$ if $|z| \le q_2(|x|)$ and $|x| \le q_2(|z|)$.
\nSo ${\\sf ev}_{q_2}^C$ can be replaced by $E_{q_2}^C$ as a generator.\n \\ \\ \\ $\\Box$\n\n\n\\begin{pro} \\ There are elements of {\\sf fP} and of ${\\cal RM}_2^{\\sf P}$\nthat are critical (i.e., non-regular if ${\\sf P} \\neq {\\sf NP}$), whose \nproduct is a non-zero idempotent.\n\\end{pro}\n{\\bf Proof.} For $i = 0, 1$, let $e_i \\in {\\sf fP}$ be defined, as a\ntwo-variable function, by\n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\\n$e_i(w,x) \\ = \\\n\\left\\{\n\\begin{array}{ll}\n(w, \\phi_w(x)) & \\mbox{if $x \\in i\\, \\{0,1\\}^*$,\\ \n $\\phi_w(x) \\in i\\, \\{0,1\\}^*$,\n and $|\\phi_w(x)| = |x|$;} \\\\\n(w, 0^{|x|}) & \\mbox{otherwise.}\n\\end{array} \\right. $\n\n\\smallskip\n\n\\noindent Then $(e_1 \\circ e_0)(w,x) = (w, 0^{|x|})$ for all $(w, x)$,\nso $e_1 \\circ e_0$ is an idempotent.\n\nTo prove that $e_i$ is critical we reduce the satisfiability problem to\nthe inversion problem of $e_i$.\nThe reduction for $e_i$ maps a boolean formula $B$ with $n$ variables to\n$(b, i^n 1)$, where $b$ is a program such that\n$f_b(i \\, \\tau) = i^n B(\\tau)$; i.e., for a truth-value assignment\n$\\tau \\in \\{0,1\\}^n$, $f_b$ evaluates $B$ on $\\tau$, and outputs the \nresulting truth-value, prefixed with $n$ copies of $i$.\nIf $e_i$ were regular then ${\\sf Im}(e_i)$ would be in {\\sf P}, by\nProp.\\ \\ref{ImfP}.\nThen satisfiability of $B$ could be checked by a {\\sf P}-algorithm,\nsince $B$ is satisfiable iff $(b, i^n 1) \\in {\\sf Im}(e_i)$.\nTo obtain one-variable functions we can take $e_i^C$.\n\nTo prove the proposition for ${\\cal RM}_2^{\\sf P}$ we define $e_i \\in$\n${\\cal RM}_2^{\\sf P}$ for $i = 0, 1$ as follows, first as two-variable\nfunctions:\n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\\n$e_i(w,x) \\ = \\\n\\left\\{\n\\begin{array}{ll}\n(w, \\phi_w(x)) & \\mbox{if $x \\in 0i \\, \\{0,1\\}^*$,\n \\ $\\phi_w(x) \\in 0i \\, \\{0,1\\}^*$, and $|\\phi_w(x)| = |x|$;} \\\\\n(w, x) & \\mbox{if $x \\in 1 \\{0,1\\}^*$;} \\\\\n\\mbox{undefined} & \\mbox{otherwise.}\n\\end{array} \\right. $\n\n\\smallskip\n\n\\noindent Then $(e_1 \\circ e_0)(w,x) = (w,x)$ when $x \\in 1 \\, \\{0,1\\}^*$, \nand $(e_1 \\circ e_0)(w,x)$ is undefined otherwise; so $e_1 \\circ e_0$ is a\npartial identity.\nThe reduction of the satisfiability problem to the inversion problem of $e_i$\nis similar to the case of {\\sf fP}.\n \\ \\ \\ $\\Box$\n\n\n\n\\section{Reductions and completeness}\n\n\\noindent The usual reduction between partial functions \n$f_1, f_2: A^* \\to A^*$ is as follows. \n\n\\begin{defn} \\label{simul} \\ \n \\ $f_1$ {\\em is simulated by} $f_2$ (denoted by\n$f_1 \\preccurlyeq f_2$) \\ iff \\ there exist polynomial-time computable\npartial functions $\\beta, \\alpha$ such that \n$ \\, f_1 = \\beta \\circ f_2 \\circ \\alpha$.\n\\end{defn} \nRecall polynomial-time {\\em many-to-one reduction} that is used for \nlanguages; it is defined by $L_1 \\preccurlyeq_{\\sf m:1} L_2$ iff for some \npolynomial-time computable function $\\alpha$ and for all $x \\in A^*$: \n$x \\in L_1$ iff $\\alpha(x) \\in L_2$. \nThis is equivalent to $L_1 = \\alpha^{-1}(L_2)$, and also to \n$\\chi_{_{L_1}} = \\chi_{_{L_2}} \\circ \\alpha$ (where $\\chi_{_{L_j}}$ \ndenotes the characteristic function of $L_j$). 
\nSo $L_1 \\preccurlyeq_{\\sf m:1} L_2$ implies that $\\chi_{_{L_1}}$ is simulated\nby $\\chi_{_{L_2}}$.\n\nMoreover, when we talk about simulations between functions we will always \nuse the following \n\n\\medskip\n\n\\noindent {\\bf Addendum to Definition 5.1.} \\ {\\it We assume that\n$\\beta, \\alpha \\in {\\sf fP}$.\nFor a simulation between two right-ideal morphisms of $A^*$ we assume}\n$\\beta, \\alpha \\in {\\cal RM}_{|A|}^{\\sf P}$.\n\n\\bigskip\n\nWe can define simulation for monoids in general. For monoids $M_0 \\leq M_1$\nand $s, t \\in M_1$, simulation $s \\preccurlyeq t$ is the same thing as \n$s \\leq_{{\\cal J}(M_0)} t$, i.e., the submonoid ${\\cal J}$-order on $M_1$, \nusing multipliers in the submonoid $M_0$.\n\n\\smallskip\n\nSimulation tells us which functions are harder to compute than \nothers, but it does not say anything about the hardness of inverses of\nfunctions. We want a reduction with the property that if a one-way \nfunction $f_1$ reduces to a function $f_2 \\in {\\sf fP}$ then $f_2$ is \nalso one-way. The intuitive idea is that $f_1$ ``reduces inversively'' to \n$f_2$ iff (1) $f_1$ is simulated by $f_2$, and \n(2) the ``easiest inverses'' of $f_1$ are simulated by the ``easiest \ninverses'' of $f_2$. But ``easiest inverses'' are difficult to define.\nWe rigorously define inversive reduction as follows.\n\n\n\\begin{defn} {\\bf (inversive reduction).} \n \\ Let $f_1, f_2: A^* \\to A^*$ be any partial functions.\nWe say that $f_1$ {\\em reduces inversively} to $f_2$ (notation, \n$f_1 \\leqslant_{\\sf inv} f_2$) \\ iff \n\n\\smallskip\n\n\\noindent (1) \\ \\ $f_1 \\preccurlyeq f_2$ \\ \\ and\n\n\\smallskip\n\n\\noindent (2) \\ \\ for every inverse $f_2'$ of $f_2$ there exists an\ninverse $f_1'$ of $f_1$ such that $f_1' \\preccurlyeq f_2'$ .\n\\end{defn}\nHere, $f_1, f_2, f_1', f_2'$ range over all partial functions $A^* \\to A^*$.\n\n\\medskip\n\n\\noindent The relation $\\leqslant_{\\sf inv}$ can be defined on monoids \n$M_0 \\leq M_1 \\leq M_2$ in general: We let $f_1, f_2$ range over $M_1$, and \nlet inverses $f_1', f_2'$ range over $M_2$. For simulation $\\preccurlyeq$ \nwe pick $\\leq_{{\\cal J}(M_0)}$ (i.e., multipliers are in $M_0$). \nWe should assume that $M_1$ is regular within $M_2$ in order to avoid empty\nranges for the quantifiers ``$(\\forall f_2')(\\exists f_1')$''; otherwise,\nwhen $f_2$ has no inverse in $M_2$, $f_1 \\leqslant_{\\sf inv} f_2$ is \ntrivially equivalent to $f_1 \\preccurlyeq f_2$.\n\n\n\\begin{pro} \\ $\\leqslant_{\\sf inv}$ is transitive and reflexive.\n\\end{pro}\n{\\bf Proof.} \\ Simulation is obviously transitive. Moreover, if\n$f_1 \\leqslant_{\\sf inv} f_2$ and $f_2 \\leqslant_{\\sf inv} f_3$, then\nfor each $f_3'$ there exists an inverse\n$f_2' = \\beta_{23} \\circ f_3' \\circ \\alpha_{23}$, and for $f_2'$ there \nis an inverse $f_1' = \\beta_{12} \\circ f_2' \\circ \\alpha_{12}$. \nThen $f_1' = \\beta_{12} \\circ \\beta_{23} \\circ f_3'$ $\\circ$\n$\\alpha_{23} \\circ \\alpha_{12}$, so $f_3'$ simulates some inverse of $f_1$. \n \\ \\ \\ $\\Box$\n\n\n\\begin{pro} \\ If $f_1 \\leqslant_{\\sf inv} f_2$, \n \\ $f_2 \\in {\\sf fP}$, and $f_2$ is regular, then $f_1 \\in {\\sf fP}$ and\n$f_1$ is regular.\n\n\\noindent Contrapositive: If $f_1, f_2 \\in {\\sf fP}$, \n$f_1 \\leqslant_{\\sf inv} f_2$, and $f_1$ is one-way, then $f_2$ is one-way.\n\\end{pro}\n{\\bf Proof.} \\ The property $f_1 \\in {\\sf fP}$ follows from simulation. 
If $f_2$ is regular, then it has an inverse $f_2' \in {\sf fP}$, and $f_1$
has an inverse $f_1' = \beta \circ f_2' \circ \alpha$. All the factors
are in {\sf fP}, so $f_1' \in {\sf fP}$.
 \ \ \ $\Box$


\begin{defn} \
A partial function $g$ is {\em complete} (or {\sf fP}{\em -complete}) with
respect to $\leqslant_{\sf inv}$ \ iff \ $g \in {\sf fP}$, and for every
$f \in {\sf fP}$ we have $f \leqslant_{\sf inv} g$. In a similar way we can
define ${\cal RM}_2^{\sf P}${\em -complete}.
\end{defn}
Observation: If $g$ is {\sf fP}-complete then
$g \equiv_{\cal J} {\sf id}_{A^*}$.


\begin{pro} \label{ev_complete}
 \ The map ${\sf ev}_{q_2}^C$ is {\sf fP}-{\em complete} with respect to
inversive reduction.
\end{pro}
{\bf Proof.} \ Any $\phi_w \in {\sf fP}$, given by a polynomial program $w$,
is simulated by ${\sf ev}_{q_2}^C$; recall formula $(\star)$ in the proof of
Prop.\ \ref{fPfingen}:

\smallskip

 \ \ \ \ \ $\phi_w \ = \ $
$\pi_{_{2 \, |w'| + 2}}' \circ {\sf contr} \circ {\sf recontr}^{2 \, m}$
$\circ$ ${\sf ev}_{q_2}^C$ $\circ$
${\sf reexpand}^{2 \, m} \circ {\sf expand}$ $\circ$
$\pi_{_{{\sf code}(w) \, 11}}$,

\smallskip

\noindent where $w' = {\sf co}^{2m+1} \circ {\sf ex}^{2m+1}(w)$.

To prove the inversive property, let ${\sf e}'$ be any inverse of
${\sf ev}_{q_2}^C$. We apply ${\sf e}'$ to a string of the form \,
${\sf code}({\sf ex}^{2m+1}(w)) \ 11 \ 0^{N_{2m+1}} \ 11 \ y$,
where $\phi_{{\sf ex}^{2m+1}(w)} \in {\sf fP}^{q_2}$ and
$y \in {\sf Im}(\phi_w)$.
Here, $N_{2m+1}$ is at least as large as the time of the computation that
led to output $y$. Note that we use $2m$ in $N_{2m+1}$ because the input
that led to output $y$ could be polynomially longer than $y$ (by polynomial
$p_w$). Then we have:

\smallskip

 \ \ \ \ \
${\sf e}'\big({\sf code}({\sf ex}^{2m+1}(w))\ 11\ 0^{N_{2m+1}}\ 11\ y \big)$
$ \ = \ $
${\sf code}({\sf ex}^{2m+1}(w)) \ 11 \ 0^{N_{2m+1}} \, 11 \, x_i$,
 \ \ for some $x_i \in \phi_w^{-1}(y)$.

\smallskip

\noindent We don't care whether and how ${\sf e}'(Z)$ is defined when the
input $Z$ is not of the above form.
Then ${\sf e}'$ simulates an inverse $f'$ of $\phi_w$ defined by

\smallskip

 \ \ \ \ \ $f'(y) \ = \ $
$\pi_{_{2 \, |w'| + 2}}' \circ {\sf contr} \circ {\sf recontr}^{2 \, m}$
$\circ$ ${\sf e}'$ $\circ$
${\sf reexpand}^{2 \, m} \circ {\sf expand}$ $\circ$
$\pi_{_{{\sf code}(w) \, 11}}(y)$

\smallskip

\noindent for all $y \in {\sf Im}(\phi_w)$. Indeed, $f'(y) = x_i$
($\in \phi_w^{-1}(y)$ as above).

When $y \not\in {\sf Im}(\phi_w)$ the right side of the above formula may
give a value to $f'(y)$; but it does not matter whether and how $f'$ is
defined outside of ${\sf Im}(\phi_w)$. Thus,
every inverse of ${\sf ev}_{q_2}^C$ simulates an inverse of $\phi_w$.
 \ \ \ $\Box$

\medskip

In a similar way one can prove that the generator-based evaluation map
${\sf ev}_{\Gamma,q}$ (for a large enough polynomial $q$) is complete
in {\sf fP}.

\medskip

\noindent {\bf Notation:} \ For a partial function $f: A^* \to A^*$, the
{\em set of all inverses} $f': A^* \to A^*$ of $f$ is denoted by
${\sf Inv}(f)$.


\begin{defn} {\bf (uniform inversive reduction).}
 \ Let $f, g$ be partial functions.
An inversive reduction \n$f \\leqslant_{\\sf inv} g$ is called {\\em uniform} \\ iff\n \\ $f \\preccurlyeq g$, and $(\\exists \\beta, \\alpha \\in {\\sf fP})$\n$(\\forall g' \\in {\\sf Inv}(g))$ $(\\exists f' \\in {\\sf Inv}(f))$ \n$[ \\, f' = \\beta \\circ g' \\circ \\alpha \\, ]$. So $\\beta$ and $\\alpha$ only\ndepend on $f$ and $g$, but not on $g'$ or $f'$.\n\\end{defn}\nWe observe that in the proof of Prop.\\ \\ref{ev_complete} the simulation of\n$f'$ by ${\\sf e'}$ only depends on $\\phi_w$ and ${\\sf e}$, but not on \n$f'$ nor on ${\\sf e'}$. We conclude:\n \n\\begin{cor} \nThe map ${\\sf ev}_{q_2}^C$ is {\\sf fP}-complete with respect to \n{\\em uniform} inversive reduction. \\ \\ \\ $\\Box$\n\\end{cor}\n\n\nNext we study the completeness of the circuit evaluation map \n ${\\sf ev}_{\\sf circ}$ (defined at the beginning of Section 4).\nSince it is defined in terms of length-preserving circuits, \n${\\sf ev}_{\\sf circ}$ is itself length-preserving, i.e., it belongs to \nthe submonoid of {\\em length-preserving} partial functions in\n{\\sf fP},\n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\ ${\\sf fP}_{\\sf lp} \\ = \\ $\n$\\{ f \\in {\\sf fP} \\ : \\ |f(x)| = |x|$ \\ for all $x \\in {\\sf Dom}(f) \\}$. \n\n\n\\begin{pro} \\label{LevComplLP}\nThe critical map ${\\sf ev}_{\\sf circ}$ is complete in the submonoid\n${\\sf fP}_{\\sf lp}$ with respect to inversive reduction.\n\\end{pro}\n\n\\noindent {\\bf Proof.} Let $f \\in {\\sf fP}_{\\sf lp}$ be a fixed \nlength-preserving partial function, and let $M$ be a fixed deterministic \npolynomial-time Turing machine that computes $f$.\n\nSimulation of $f$ by ${\\sf ev}_{\\sf circ}$: It is well known that for every \ninput length $n$ (of inputs of $f$) one can construct an acyclic circuit \n$C_n$ such that $C_n(x) = f(x)$ for all $x$ of length $n$. The circuit can \nbe constructed from $M$ and $n$ in polynomial time (as a function of $n$). \nLet $\\alpha(x) = (C_{|x|}, x)$, and let $\\beta(C_n, y) = y$, where \n$|y| = n$.\nThen $f = \\beta \\circ {\\sf ev}_{\\sf circ} \\circ \\alpha$.\n \nSimulation between inverses: Any inverse $e'$ of ${\\sf ev}_{\\sf circ}$ has \nthe form $e'(C,y) = (C, x_i)$ for some $x_i \\in C^{-1}(y)$, when\n$y \\in {\\sf Im}(C)$. When $y \\not\\in {\\sf Im}(C)$, $e'(C,y)$ could be any \npair of bitstrings.\nThen an inverse $f'$ of $f$ is obtained by defining \n$f'(y) = \\beta \\circ e' \\circ \\alpha(y)$, where $\\alpha, \\beta$ are as in\nthe simulation of $f$ (in the first part of this proof). Indeed, when \n$y \\in {\\sf Im}(f)$ we have $\\alpha: y \\longmapsto (C_n, y)$, where $|y| = n$.\nNext, $e': (C_n, y) \\longmapsto (C_n, x_i)$ for some \n$x_i \\in C_n^{-1}(y) = f^{-1}(y)$; recall that we only use length-preserving\ncircuits, so $|y| = n = |x_i|$.\nFinally, $\\beta: (C_n, x_i) \\longmapsto x_i \\in f^{-1}(y)$. So $f'$ is an \ninverse of $f$ on ${\\sf Im}(f)$; outside of ${\\sf Im}(f)$, the values of \n$f'$ do not matter.\n \\ \\ \\ $\\Box$\n\n\\bigskip\n\n\\noindent To show completeness of ${\\sf ev}_{\\sf circ}$ in {\\sf fP} (rather\nthan just in ${\\sf fP}_{\\sf lp}$), a stronger inversive reduction is needed, \nthat overcomes the limitation of length-preservation in ${\\sf ev}_{\\sf circ}$. \n\n\\bigskip\n\n\\noindent {\\bf Remark:} \nCircuits are usually generalized to allow the output length to be different \nfrom the input length. 
But that would not simplify the completeness proof for\n${\\sf ev}_{\\sf circ}$, \nbecause the main limitation is that all inputs of a circuit have the same \nlength, and all outputs of a circuit have the same length. \n\n\n\\begin{defn} \\label{Tsimulation} {\\bf (polynomial-time Turing simulation).}\nLet $f_1, f_2: A^* \\to A^*$ be two partial functions.\nBy definition, $f_1 \\preccurlyeq_{\\sf T} f_2$ \\ iff \\ $f_1$ is computed by a \ndeterministic polynomial-time Turing machine that can make oracle calls to \n$f_2$; these can include, in particular, calls on the membership problem of \n${\\sf Dom}(f_2)$.\n\\end{defn} \nIn the next proofs we do not need the full power of Turing reductions. \nThe following, much weaker reduction, will be sufficient.\n\n\\medskip\n\n\\noindent {\\bf Notation:} Let $L \\subseteq A^*$. Then ${\\sf fP}^L$ denotes \nthe set of all polynomially balanced partial functions computed by \ndeterministic polynomial-time Turing machines that can make oracle calls to \nthe membership problem of $L$. In particular we will consider \n${\\sf fP}^{{\\sf Dom}(f)}$ for any partial function $f: A^* \\to A^*$. \n\n\\begin{defn} \\label{weakTsimulation} {\\bf (weak Turing simulation).} \nA {\\em weak Turing simulation} of $f_1$ by $f_2$ consists of two partial \nfunctions $\\beta, \\alpha$ such that $f_1 = \\beta \\circ f_2 \\circ \\alpha$, \nwhere $\\alpha \\in {\\sf fP}^{{\\sf Dom}(f_2)}$ and $\\beta \\in {\\sf fP}$.\nThe existence of a weak Turing simulation of $f_1$ by $f_2$ is denoted\nby $f_1 \\preccurlyeq_{\\sf wT} f_2$.\n\\end{defn}\nInformally we also write \n$f_1 = \\beta \\circ f_2 \\circ \\alpha^{{\\sf Dom}(f_2)}$. \nIn a weak Turing simulation by $f_2$, only the domain of $f_2$ is repeatedly\nqueried; $f_2$ itself is called only once, and this call of $f_2$ takes the \nform of an ordinary (not a Turing) simulation.\n\n\n\\begin{defn} \\label{inversif} {\\bf (inversification of a simulation).}\nFor any simulation relation $\\preccurlyeq_{\\sf X}$ between partial functions,\nthe corresponding {\\em inversive reduction} $\\leqslant_{\\sf inv,X}$ is\ndefined as follows:\n\n \\ \\ $f_1 \\leqslant_{\\sf inv,X} f_2$ \\ \\ iff\n\n\\smallskip\n\n \\ \\ \n$f_1 \\preccurlyeq_{\\sf X} f_2$, and for every inverse $f_2'$ of $f_2$ there \nexists an inverse $f_1'$ of $f_1$ such that $f_1' \\preccurlyeq_{\\sf X} f_2'$.\n\\end{defn}\nOne easily proves:\n\n\\begin{pro}\n \\ If $\\preccurlyeq_{\\sf X}$ is transitive then $\\leqslant_{\\sf inv,X}$ is\ntransitive. 
\\ \\ \\ \\ \\ \\ $\\Box$\n\\end{pro}\nBased on this general definition we can introduce {\\em polynomial-time \ninversive Turing reductions}, denoted by $\\leqslant_{\\sf inv,T}$, and \n{\\em polynomial-time inversive weak Turing reductions}, denoted by\n$\\leqslant_{\\sf inv,wT}$.\nThe following is straightforward to prove.\n\n\\begin{pro} \nIf $f_1 \\leqslant_{\\sf inv,T} f_2$ then: \n\n\\noindent $\\bullet$ \\ $f_2 \\in {\\sf fP}$ implies $f_1 \\in {\\sf fP}$;\n\n\\noindent $\\bullet$ \\ $f_2 \\in {\\sf fP}$ and $f_2$ is regular, implies\n$f_1$ is regular.\n \\ \\ \\ \\ \\ $\\Box$\n\\end{pro}\nThe following shows that $\\leqslant_{\\sf inv, wT}$ can overcome the \nlimitations of length-preservation.\n\n\\begin{pro} \\label{lpVSall} \nFor every $f \\in {\\sf fP}$ there exists $\\ell \\in {\\sf fP}_{\\sf lp}$ \nsuch that $f \\leqslant_{\\sf inv,wT} \\ell$.\n\\end{pro}\n{\\bf Proof.} For any $f \\in {\\sf fP}$ we define \n$\\ell_f \\in {\\sf fP}_{\\sf lp}$ by\n\n\\medskip\n\n \\ \\ \\ \\ \\ \\ \\ $\\ell_f(0^n 1 \\, x) \\ = \\ \\left\\{ \n \\begin{array}{ll}\n 0^{|x|} 1 \\, f(x) \\ \\ \\ \\ \\ \\ & \\mbox{if $n = |f(x)|$, } \\\\\n {\\rm undefined} & \\mbox{on all other inputs.}\n \\end{array} \\right.\n$\n\n\\medskip\n\n\\noindent Let $p_f(.)$ be a polynomial upper-bound on the time-complexity and \non the balance of $f$.\n\n\\smallskip\n\n\\noindent 1. \\ Proof that $f \\preccurlyeq \\ell_f$ (simulation):\n \\ We have $f = \\beta \\circ \\ell_f \\circ \\alpha$, where \n \\ $\\alpha(x) = 0^{|f(x)|} 1 \\, x$ \\ for all $x \\in A^*$; and \n \\ $\\beta(0^m 1 \\, y) = y$ \\ for all $y \\in A^*$ and all \n $m \\leq p_f(|y|)$ \\ ($\\beta$ is undefined otherwise).\n\n\\smallskip\n\n\\noindent 2. \\ Proof that for every inverse $\\ell'$ of $\\ell_f$ there is\nan inverse $f'$ of $f$ such that $f' \\preccurlyeq_{\\sf wT} \\ell_f'$ :\n\n\\smallskip\n\nEvery element of ${\\sf Im}(\\ell_f)$ has the form $0^m 1 \\, y$\nwhere $y \\in {\\sf Im}(f)$, for some appropriate $m$. More precisely,\n$\\ell_f^{-1}(0^m \\, 1 \\, y)$ $ = $ \n$\\{ 0^{|y|} \\, 1 \\, x : x \\in f^{-1}(y) \\cap A^m\\}$. Hence, \n$0^m \\, 1 \\, y \\in {\\sf Im}(\\ell_f)$ iff \n$f^{-1}(y) \\cap A^m \\neq \\varnothing$. Therefore, any inverse $\\ell'$ \nsatisfies $\\ell'(0^m 1 \\, y) = 0^{|y|} 1 \\, x_i$ for some choice of \n$x_i \\in f^{-1}(y) \\cap A^m$; we do not care about the values of $\\ell'$ \nwhen its inputs are not in ${\\sf Im}(\\ell_f)$.\nThus we can define an inverse of $f$ on each $y \\in {\\sf Im}(f)$ by \n\n\\smallskip\n\n$f'(y) = x_i$, \\, for $x_i$ chosen in $f^{-1}(y) \\cap A^m$ \n\n\\hspace{.5in} where $m$ is the minimum integer such that \n$f^{-1}(y) \\cap A^m \\neq \\varnothing$. \n\n\\smallskip\n\n\\noindent We don't care what $f'(y)$ is when $y \\not\\in {\\sf Im}(f)$.\n \nTo obtain an inversive weak Turing reduction we need to compute \n$x_i = f'(y)$ from $y$, based on oracle calls to ${\\sf Dom}(\\ell')$ and\none simulation of $\\ell'$. This computation of $x_i$ is done in two steps: \n First we compute the minimum $m$ ($= |x_i|$) such that \n$f^{-1}(y) \\cap A^m \\neq \\varnothing$ (see Step 1 below for details). \n Second, we apply $\\ell'$ to compute\n$\\ell'(0^{|x_i|} 1 \\, y) = 0^{|y|} 1 \\, x_i$. From this we obtain $x_i$ by \napplying the map $\\beta$ defined above (in part 1 of this proof). \nThe first step is a Turing reduction to the domain of $\\ell'$. \nThe second step is a simulation by $\\ell'$. 
In more detail:

Step 1:
By input balance we have $|x_i| \leq p_f(|y|)$ when $x_i \in f^{-1}(y)$.
For each $m \in \{0, 1, \ldots, p_f(|y|)\}$, in increasing order, we make
an oracle call to the membership problem in ${\sf Dom}(\ell')$ with query
input $0^m 1 \, y$.
If $y \in {\sf Im}(f)$ then the first of these queries with a positive
answer determines $m$, and $0^m 1 \, y$ is returned.

Step 2: To the result $0^m 1 \, y$ of step 1 we apply the functions
$\ell'$ and $\beta$. This yields $x_i$, which is $f'(y)$.
Thus, step 2 is just a simulation.

Together, steps 1 and 2 form a weak polynomial Turing simulation of $f'$
by $\ell'$.
 \ \ \ $\Box$


\begin{cor}
The critical map ${\sf ev}_{\sf circ}$ is {\sf fP}-complete with respect to
composites of polynomial inversive weak Turing reductions and polynomial
inversive simulation reductions ($\leqslant_{{\sf inv,wT}}$ and
$\leqslant_{\sf inv}$).
\end{cor}
{\bf Proof.} For every $f \in {\sf fP}$ we first reduce $f$ to a
length-preserving function $\ell_f$, by Prop.\ \ref{lpVSall}. Then we reduce
$\ell_f$ to ${\sf ev}_{\sf circ}$ by Prop.\ \ref{LevComplLP}.
 \ \ \ $\Box$

\bigskip


\noindent {\bf Reduction and completeness in ${\cal RM}_2^{\sf P}$}

\medskip

\noindent The following shows that the encoding that embeds {\sf fP} into
${\cal RM}_2^{\sf P}$ does not make inversion easier.

\begin{pro} \label{code_invred}
 \ For all $f \in {\sf fP}$ we have $f \leqslant_{\sf inv} f^C$,
where $\leqslant_{\sf inv}$ is based on simulation in {\sf fP}.
\end{pro}
{\bf Proof.} Recall the encoding maps
 $(.)_{\#}: x \in \{0,1\}^* \longmapsto x \# \in \{0,1, \#\}^*$, and
${\sf code}$ which replaces $0, 1, \#$ by respectively $00, 01, 11$, defined
in Section 3; and recall $f^C$ from Def.\ \ref{f_encoding}.
We now introduce inverses of these maps.
Let ${\sf dec}: {\sf code}(x) \in \{00, 01, 11\}^* $
$\longmapsto x \in \{0,1\}^*$ (undefined outside of $\{00, 01, 11\}^*$),
and $r: x \# \longmapsto x \in \{0,1\}^*$ (undefined outside of
$\{0,1\}^*\#$). Then

\smallskip

 \ \ \ \ \ \ $f \ = \ $
$r \circ {\sf dec} \circ f^C \circ {\sf code} \circ (.)_{\#}$ .

\smallskip

\noindent Clearly, $(.)_{\#}, \, {\sf code}, \, {\sf dec}, \, r$
$ \in {\sf fP}$. Hence $f^C$ simulates $f$.

For the inversive part of the reduction, let $\varphi'$ be any inverse of
$f^C$; we want to find an inverse $f'$ of $f$ such that
$f' \preccurlyeq \varphi'$, where $\preccurlyeq$ denotes simulation in
{\sf fP}.
Any element of ${\sf Im}(f^C)$ has the form ${\sf code}(s) \, 11 \, t$,
with $s, t \in \{0,1\}^*$, and $s \in {\sf Im}(f)$. Moreover, if
${\sf code}(s) \, 11 \, t \in {\sf Im}(f^C)$ then
${\sf code}(s) \, 11 \in {\sf Im}(f^C)$. Let us define $f'$ for any
$s \in {\sf Im}(f)$ by $f'(s) = x_1$ where $x_1$ is such that
$\varphi'({\sf code}(s) \, 11) = {\sf code}(x_1) \, 11$
$ \in (f^C)^{-1}({\sf code}(s) \, 11)$. Then $x_1 \in f^{-1}(s)$.
In general, we define $f'$ by

\smallskip

 \ \ \ \ \ \ $f' \ = \ $
 $r \circ {\sf dec} \circ \varphi' \circ {\sf code} \circ (.)_{\#}$ .

\smallskip

\noindent For $s \in {\sf Im}(f)$ we indeed have then:
$r \circ {\sf dec} \circ \varphi' \circ {\sf code} \circ (.)_{\#}(s) \ = \ $
$r \circ {\sf dec} \circ \varphi'({\sf code}(s) \, 11) \ = \ $
$r \circ {\sf dec}({\sf code}(x_1) \, 11) \ = \ x_1$, \, where
$x_1 \in f^{-1}(s)$, as above. So this definition makes $f'$ an inverse of
$f$ on ${\sf Im}(f)$; hence $f'$ is an inverse of $f$.
The above formula for $f'$ explicitly shows that $f' \preccurlyeq \varphi'$.
 \ \ \ $\Box$


\medskip

Let $\equiv_{\sf inv}$ denote $\leqslant_{\sf inv}$-equivalence (i.e.,
$f \equiv_{\sf inv} g$ iff $f \leqslant_{\sf inv} g$ and
$g \leqslant_{\sf inv} f$).
The $\leqslant_{\sf inv}$-complete functions of {\sf fP} obviously form an
$\equiv_{\sf inv}$-class, and this is the maximum class for the
$\leqslant_{\sf inv}$-preorder.
Similarly, the complete functions of ${\cal RM}_2^{\sf P}$ are the maximum
inversive reducibility class in ${\cal RM}_2^{\sf P}$.
The non-empty regular elements of ${\cal RM}_2^{\sf P}$ also form an
equivalence class in ${\cal RM}_2^{\sf P}$, and this is the minimum class
of all non-empty functions, as the following shows:


\begin{pro} \ For every $f, r \in {\cal RM}_2^{\sf P}$ where $r$ is
regular and $f$ is non-empty, we have $r \leqslant_{\sf inv} f$.
\end{pro}
{\bf Proof.} The simulation $r \preccurlyeq f$ follows from
${\cal J}^0$-simplicity of ${\cal RM}_2^{\sf P}$.
Let $f'$ be any inverse of $f$ (with $f'$ not necessarily in
${\cal RM}_2^{\sf P}$). Since $r$ is regular, there is an inverse
$r' \in {\cal RM}_2^{\sf P}$ of $r$. Since $f'$ is not the empty map there
exist $x_0, y_0$ with $f'(y_0) = x_0$. Then $(x_0 \leftarrow y_0)$ is
simulated by $f'$, since
$(x_0 \leftarrow y_0) = {\sf id}_{\{x_0\}} \circ f'$.
Moreover, $(x_0 \leftarrow y_0)$ is regular and $(x_0 \leftarrow y_0)$
belongs to ${\cal RM}_2^{\sf P}$, so $(x_0 \leftarrow y_0)$ simulates $r'$
(again by ${\cal J}^0$-simplicity of ${\cal RM}_2^{\sf P}$). Thus, $f'$
simulates $r'$.
 \ \ \ $\Box$


\begin{pro} \ In both {\sf fP} and ${\cal RM}_2^{\sf P}$: The
$\equiv_{\cal D}$-relation is a refinement of $\equiv_{\sf inv}$.
\end{pro}
{\bf Proof.} It suffices to prove that both $\equiv_{\cal R}$ and
$\equiv_{\cal L}$ refine $\equiv_{\sf inv}$.
We will prove that $f \equiv_{\cal R} g$ implies $f \equiv_{\sf inv} g$.
(The same reasoning works for $\equiv_{\cal L}$.)
When $f \equiv_{\cal R} g$, there exist $\alpha, \beta \in {\sf fP}$ (or
$\in {\cal RM}_2^{\sf P}$) such that $f = g \, \alpha$ and $g = f \, \beta$.
So, $f$ and $g$ simulate each other.

For any inverse $f'$ of $f$ we have $f = f \, f' \, f $
$ = g \, \alpha f' f$. Right-multiplying by $\beta$ we obtain
$g = g \, \alpha f' \, g$, hence $\alpha f'$ is an inverse of $g$, and
$\alpha f'$ is obviously simulated by $f'$. So, $g$ inversely reduces to
$f$.
Similarly, $f$ inversely reduces to $g$.\n \\ \\ \\ $\\Box$ \n\n\n\n\\section{\\bf The polynomial hierarchy}\n\n\\noindent The classical polynomial hierarchy for languages is defined by\n$\\Sigma_1^{\\sf P} = {\\sf NP}$, $\\Pi_1^{\\sf P} = {\\sf coNP}$, and \nfor all $k > 0$: $\\Sigma_{k+1}^{\\sf P} = {\\sf NP}^{\\Sigma_k^{\\sf P}}$\n(i.e., all languages accepted by nondeterministic Turing machines with oracle \nin $\\Sigma_k^{\\sf P}$, or equivalently, with oracle in $\\Pi_k^{\\sf P}$); and\n$\\Pi_{k+1}^{\\sf P} = ({\\sf coNP})^{\\Sigma_k^{\\sf P}}$ \n$\\big( = {\\sf co}({\\sf NP}^{\\Sigma_k^{\\sf P}})\\big)$. Moreover,\n${\\sf PH} \\ = \\ \\bigcup_k \\Sigma_k^{\\sf P}$ \\ \\ ($\\subseteq {\\sf PSpace}$).\n\n\\medskip\n\n\\noindent {\\sf Polynomial hierarchy for functions:}\n\n\\smallskip\n\n${\\sf fP}^{\\Sigma_k^{\\sf P}}$ \\, consists of all polynomially balanced \npartial functions $A^* \\to A^*$ computed by deterministic polynomial-time \nTuring machines with oracle in $\\Sigma_k^{\\sf P}$ (or equivalently, with \noracle in $\\Pi_k^{\\sf P}$);\n\n\\smallskip\n\n${\\sf fP}^{\\sf PH}$ \\, consists of all polynomially balanced partial \nfunctions $A^* \\to A^*$ computed by deterministic polynomial-time Turing \nmachines with oracle in {\\sf PH}. Equivalently, \n${\\sf fP}^{\\sf PH} = \\, \\bigcup_k \\, {\\sf fP}^{\\Sigma_k^{\\sf P}}$.\n\n\\smallskip\n\nMoreover, ${\\sf fPSpace}$ \\, consists of all polynomially balanced\npartial functions (on $A^*$) computed by deterministic polynomial-space \nTuring machines.\n\n\\smallskip\n\nWe can also define a polynomial hierarchy over ${\\cal RM}_2^{\\sf P}$.\n\n\n\\begin{pro} \\label{fPNP_inv} \\ Every $f \\in {\\sf fP}$ has an inverse in \n${\\sf fP}^{\\Sigma_1^{\\sf P}}$, and\nevery $f \\in {\\sf fP}^{\\Sigma_k^{\\sf P}}$ has an inverse in\n${\\sf fP}^{\\Sigma_{k+1}^{\\sf P}}$. The monoids \n${\\sf fP}^{\\sf PH}$ and ${\\sf fPSpace}$ are regular.\n\\end{pro}\n{\\bf Proof.} \\ The following is an inverse of $f$:\n\n\\smallskip\n\n \\ \\ \\ \\ \\ $f'_{\\sf min}(y) \\ = \\ \\left\\{\n \\begin{array}{ll}\n {\\sf min}(f^{-1}(y)) & \\mbox{if $y \\in {\\sf Im}(f)$, } \\\\\n {\\rm undefined} & \\mbox{otherwise,} \n \\end{array} \\right.\n$\n\n\\smallskip\n\n\\noindent where ${\\sf min}(S)$ denotes the minimum of a set of strings $S$ \nin dictionary order (or alternatively in length-lexicographic order).\nTo show that $f'_{\\sf min} \\in {\\sf fP}^{\\sf NP}$ when $f \\in {\\sf fP}$ we \nfirst observe that for any fixed $f \\in {\\sf fP}$ the following problems are \nin {\\sf NP}: \n\n\\noindent (1) On input $y \\in A^*$, decide whether $y \\in {\\sf Im}(f)$.\n\n\\noindent (2) Fix $u \\in A^*$; on input $y \\in A^*$, decide whether \n$y \\in f(u \\, A^*)$ (i.e., decide whether there exists $x \\in u \\, A^*$ such \nthat $f(x) = y$).\n\nWhen $y \\not\\in {\\sf Im}(f)$ then it doesn't matter what value we choose \nfor $f'_{\\sf min}(y)$; we choose $f'_{\\sf min}(y)$ to be undefined then.\n\nHere is an ${\\sf fP}^{\\sf NP}$-algorithm for computing $f'_{\\sf min}(y)$.\nIt is a form of binary search in the sorted list $A^*$, that ends when \na string in $f^{-1}(y)$ has been found. A growing prefix $z$ of \n$x = f'_{\\sf min}(y)$ is constructed; at each step we query whether $z$ can \nbe extended by a 0 or a 1; i.e., we ask whether $y \\in f(z0 A^*)$; we don't \nneed to ask whether $y \\in f(z1 A^*)$ too, since we tested already that \n$y \\in {\\sf Im}(f)$. 
Oracle calls are denoted by angular brackets \n$\\langle \\ldots \\rangle$, and $\\varepsilon$ denotes the empty word. \n\n\\smallskip\n\n\\noindent Algorithm for $f'_{\\sf min}$ on input $y:$\n\n\\smallskip\n\n{\\tt if $\\langle y \\in {\\sf Im}(f) \\rangle$ then\n\n \\ \\ \\ $z := \\varepsilon$;\n \n \\ \\ \\ while $\\langle z \\not\\in f^{-1}(y) \\rangle$ do \\hspace{1in}\n \/\/ assume $y \\in f(z A^*)$\n\n \\ \\ \\ \\ \\ \\ if $\\langle y \\in f(z0 A^*) \\rangle$ then \n $z := z 0$; \n\n \\ \\ \\ \\ \\ \\ else $z := z 1$; \n\n \\ \\ \\ output $z$.\n} \n\n\\smallskip\n\n\\noindent One can prove in a similar way that when \n$f \\in {\\sf fP}^{\\Sigma_k^{\\sf P}}$ then $f'_{\\sf min} \\in$\n${\\sf fP}^{\\Sigma_{k+1}^{\\sf P}}$: In that case the problems (1) and (2)\nabove are in ${\\sf NP}^{ \\Sigma_k^{\\sf P}} = \\Sigma_{k+1}^{\\sf P}$. \n\nThe regularity of ${\\sf fP}^{\\sf PH}$ follows immediately from the \nfact about ${\\sf fP}^{\\Sigma_k^{\\sf P}}$ for each $k$.\nThe regularity of ${\\sf fPSpace}$ holds because the above algorithm \ncan be carried out in ${\\sf fPSpace}$. \n \\ \\ \\ $\\Box$\n\n\\bigskip\n\nThe above algorithm is similar to the proofs in the literature that \n{\\sf P} $\\neq$ {\\sf NP} iff one-way functions exist; see e.g.\\\n\\cite{HemaOgi} p.\\ 33.\n\n\\medskip\n\nIn the proof of Prop.\\ \\ref{fPNP_inv} we used the function $f'_{\\sf min}$.\nIn a similar way, by using ${\\sf max}(f^{-1}(y))$ one can define\n$f'_{\\sf max} \\in {\\sf fP}^{\\Sigma_1^{\\sf P}}$, which is also an inverse of \n$f$. Yet more inverses can be defined: for any positive integer $i$ let\n\n\\smallskip\n\n \\ \\ \\ \\ \\ $ f'_i(y) \\ = \\ \\left\\{\n\\begin{array}{ll}\ni^{\\rm th} \\ {\\rm word \\ in} \\ f^{-1}(y) \\ \\ \\ \n & \\mbox{if $y \\in {\\sf Im}(f)$,} \\\\\n{\\rm undefined} & \\mbox{otherwise.}\n\\end{array} \\right.\n$\n\n\\smallskip\n\n\\noindent Here, ``$i^{\\rm th}$ word'' refers to the dictionary order; also, \nwhen $i > |f^{-1}(y)|$, the $i^{\\rm th}$ word is taken to be the maximum \nword in $f^{-1}(y)$. Then $f'_i$ is an inverse of $f$ and \n$f'_i \\in {\\sf fP}^{\\Sigma_1^{\\sf P}}$; note that $i$ is fixed for each \nfunction $f'_i$. \n\n \n\\begin{pro} \\ For any {\\em {\\sf fP}-critical} partial function \n$f \\in {\\sf fP}$ we have:\n \\ $f$ is one-way \\, iff \\, $f'_{\\sf min} \\not\\in {\\sf fP}$.\nSimilarly, \\, $f$ is one-way \\, iff \\, $f'_{\\sf max} \\not\\in {\\sf fP}$\n\\, iff \\, $(\\exists i > 0)[ \\, f'_i \\not\\in {\\sf fP} \\, ]$.\n\\end{pro}\n{\\bf Proof.} Since $f'_{\\sf min}$ is an inverse of $f$, the direction \n``$\\Rightarrow$'' is obvious by the definition of one-way function. \nConversely, we saw that if $f \\in {\\sf fP}$ then \n$f'_{\\sf min} \\in {\\sf fP}^{\\Sigma_1^{\\sf P}}$. \nIf $f'_{\\sf min} \\not\\in {\\sf fP}$ then \n${\\sf fP} \\neq {\\sf fP}^{\\Sigma_1^{\\sf P}}$, hence ${\\sf P} \\neq {\\sf NP}$, \nhence one-way functions exist.\nThen any {\\sf fP}-critical function $f$ is one-way.\n \\ \\ \\ $\\Box$\n\n\\medskip\n\n\\noindent Recall that a partial function $f$ is called {\\sf fP}-critical iff\n$f \\in {\\sf fP}$ and the existence of one-way functions implies that $f$ is\none-way. In particular, {\\sf fP}-complete functions (with respect to inversive\nreduction) are {\\sf fP}-critical.\nAn interesting consequence of the above Proposition is that now we do not \nonly have critical functions, but these functions also have {\\em critical \ninverses}.\n\n\\begin{defn} \\ Let $f$ be an {\\sf fP}-critical function. 
We say that an \ninverse $f'$ of $f$ is a {\\em critical inverse} of $f$ \\, iff \\, \n$f' \\not\\in {\\sf fP}$ implies that $f$ is one-way.\n\\end{defn}\n\n\\begin{cor}\nFor the {\\sf fP}-critical function ${\\sf ev}_{\\sf circ}$, the inverses \n$({\\sf ev}_{\\sf circ})'_{\\sf min}$, $({\\sf ev}_{\\sf circ})'_{\\sf max}$\nand $({\\sf ev}_{\\sf circ})'_i$ are critical inverses.\n \\ \\ \\ $\\Box$ \n\\end{cor}\nThus, to decide whether ${\\sf P} \\neq {\\sf NP}$ it suffices to consider \none function, and one of its inverses.\n\n\n\\begin{pro} \\label{polyHierFinGen} \\ For each $k \\geq 1$ the monoid \n${\\sf fP}^{\\Sigma_k^{\\sf P}}$ is finitely generated, but not finitely \npresented. The monoid ${\\sf fPSpace}$ is also finitely generated, \nbut not finitely presented.\n\nThe monoid ${\\sf fP}^{\\sf PH}$ is not finitely generated, unless the\npolynomial hierarchy collapses.\n\\end{pro}\n{\\bf Proof.} The proof for ${\\sf fPSpace}$ is similar to the proof that we \ngave for {\\sf fP} in Prop.\\ \\ref{notFinPres}. \n\nFor ${\\sf fP}^{\\Sigma_k^{\\sf P}}$, let $Q_k$ be any \n$\\Sigma_k^{\\sf P}$-complete problem; we can assume that all oracle calls\nare calls to $Q_k$. Then every $f \\in {\\sf fP}^{\\Sigma_k^{\\sf P}}$ has a\nprogram which is like an {\\sf fP}-program, but with oracle calls to $Q_k$ \nadded. \nFor every polynomial $q \\geq q_2$, an evaluation function ${\\sf ev}_q^{Q_k}$ \nfor ${\\sf fP}^{\\Sigma_k^{\\sf P}}$ can then be designed; in the computation of \n${\\sf ev}_q^{Q_k}(w, x)$, oracle calls to $Q_k$ are made whenever the program \n$w$ being executed makes calls to $Q_k$. Then, \n${\\sf ev}_q^{Q_k}(w, x) = (w, \\phi_w(x))$.\nBy using ${\\sf ev}_q^{Q_k}$ the proof for ${\\sf fP}^{\\Sigma_k^{\\sf P}}$\nis similar to the proof of Prop.\\ \\ref{notFinPres}. \n\nIf ${\\sf fP}^{\\sf PH}$ were finitely generated then let $m$ be the lowest\nlevel in the hierarchy that contains a finite generating set. Then \n${\\sf fP}^{\\sf PH} \\subseteq {\\sf fP}^{\\Sigma_m^{\\sf P}}$.\n \\ \\ \\ $\\Box$\n\n\\bigskip\n\nInstead of using all of ${\\sf fP}^{\\sf NP}$ to obtain inverses for the \nelements of {\\sf fP}, we could simply adjoin inverses to {\\sf fP} (within\n${\\sf fP}^{\\sf NP}$). \nIt turns out that it suffices to adjoin just one inverse \n$e' \\in {\\sf fP}^{\\sf NP}$ of a function $e$ that is {\\sf fP}-complete for \n$\\leqslant_{\\sf inv}$.\n\n\\medskip\n\n\\noindent \n{\\bf Notation:} For a semigroup $S$ and a subset $W \\subseteq S$, the \nsubsemigroup of $S$ generated by $W$ is denoted by $\\langle W \\rangle_S$.\nFor any $h \\in {\\sf fP}^{\\sf NP}$, we denote \n$\\langle {\\sf fP} \\cup \\{h\\} \\rangle_{{\\sf fP}^{\\sf NP}}$ by \n${\\sf fP}[h]$ (called {\\em ``{\\sf fP} with $h$ adjoined''}).\nSo, \\ \\ \n${\\sf fP} \\ \\subseteq \\ {\\sf fP}[h] \\ \\subseteq \\ {\\sf fP}^{\\sf NP}$.\n\n\n\n\\begin{pro}\nLet $g \\in {\\sf fP}$ be any function that is {\\sf fP}-complete with respect\nto $\\leqslant_{\\sf inv}$, and let $g'$ be any inverse of $g$ such that \n$g' \\in {\\sf fP}^{\\sf NP}$.\nThen the subsemigroup ${\\sf fP}[g']$ of ${\\sf fP}^{\\sf NP}$ contains at least\none inverse of each element of {\\sf fP}. \n\\end{pro}\n{\\bf Proof.} From the assumption that $g$ is complete it follows that\n\n\\smallskip\n\n \\ \\ \\ \\ \\ \\ \\ \\ \\ $(\\forall f \\in {\\sf fP})$\n$(\\forall g' \\in {\\sf Inv}(g) \\cap {\\sf fP}^{\\sf NP})$\n$(\\exists f' \\in {\\sf Inv}(f))$ $(\\exists \\beta, \\alpha \\in {\\sf fP})$ \n$[ \\, f' = \\beta \\, g' \\, \\alpha \\, ]$. 
\n\n\\smallskip\n\n\\noindent So for any fixed $g' \\in {\\sf Inv}(g) \\cap {\\sf fP}^{\\sf NP}$, \nevery $f \\in{\\sf fP}$ has an inverse of the form \n$f' = \\beta \\, g' \\, \\alpha$, for some $\\beta, \\alpha \\in {\\sf fP}$ (that\ndepend on $f'$). Hence $f' \\in {\\sf fP}[g']$. \n\\ \\ \\ $\\Box$\n\n\\bigskip\n\n\n\n\\noindent {\\bf Observations:} \n\n\\smallskip\n\n\\noindent 1.\nWe saw in the proof of Prop.\\ \\ref{polyHierFinGen} that ${\\sf fP}^{\\sf NP}$ \nhas complete elements with respect to simulation. For any \n${\\sf fP}^{\\sf NP}$-complete element $h$ we have\n${\\sf fP}^{\\sf NP} = {\\sf fP}[h]$. This raises the question: \nIs ${\\sf fP}^{\\sf NP} \\neq {\\sf fP}[g']$, when $g' \\in {\\sf fP}^{\\sf NP}$ and\n$g'$ is an inverse of an element $g$ that is {\\sf fP}-complete \n(for $\\leqslant_{\\sf inv}$)? \nIn one direction we have:\n\n\\smallskip\n\n{\\it If there exists $g$ which is {\\sf fP}-complete with respect \nto $\\leqslant_{\\sf inv}$, and an inverse $g' \\in {\\sf fP}^{\\sf NP}$ such that \n${\\sf fP}^{\\sf NP} \\neq {\\sf fP}[g']$, then ${\\sf P} \\neq {\\sf NP}$.\n}\n\nIndeed, if ${\\sf fP}^{\\sf NP} \\neq {\\sf fP}[g']$ then\n${\\sf fP} \\subseteq {\\sf fP}[g'] \\varsubsetneq {\\sf fP}^{\\sf NP}$, hence\n${\\sf fP} \\neq {\\sf fP}^{\\sf NP}$, hence ${\\sf P} \\neq {\\sf NP}$.\n\n\\medskip\n\n\\noindent 2. {\\it If there exist $g_1, g_2$ (not necessarily different) that\nare {\\sf fP}-complete with respect to $\\leqslant_{\\sf inv}$, and inverses\n$g'_1, g'_2 \\in {\\sf fP}^{\\sf NP}$ of $g_1$, respectively $g_2$, such that\n${\\sf fP}[g'_1] \\neq {\\sf fP}[g'_2]$, then ${\\sf P} \\neq {\\sf NP}$.\n}\n\nIndeed, by contraposition, if ${\\sf P} = {\\sf NP}$ then \n${\\sf fP} = {\\sf fP}^{\\sf NP}$, hence $g'_1, g'_2 \\in {\\sf fP}$. Then\n${\\sf fP}[g'_1] = {\\sf fP} = {\\sf fP}[g'_2]$.\n\n\\medskip\n\n\\noindent 3. {\\it The following two statements are equivalent: \n \\ (1) \\ ${\\sf P} \\neq {\\sf NP}$; \n \\ (2) \\ there exist $g$ which is {\\sf fP}-complete with respect to\n$\\leqslant_{\\sf inv}$, and an inverse $g' \\in {\\sf fP}^{\\sf NP}$ such that\n${\\sf fP} \\neq {\\sf fP}[g']$.\n}\n\nIndeed, if such a $g$ and $g'$ exist then $g$ is a one-way function,\nhence ${\\sf P} \\neq {\\sf NP}$.\nIf for such a $g$ and $g'$ we have ${\\sf fP} = {\\sf fP}[g']$, then $g$ is\nan {\\sf fP}-complete function which is not one-way, hence one-way functions\ndo not exist. \n\n\n\\bigskip\n\n\\noindent {\\bf Other monoids:} \n\n\\smallskip\n\n\\noindent (1) We have: \\, ${\\sf fP}^{\\sf PSpace} = {\\sf fPSpace}$. \n\nIndeed, the monoid ${\\sf fP}^{\\sf PSpace}$ consists of polynomially \nbalanced functions that are polynomial-time computable, with calls to \n{\\sf PSpace} oracles. \nThe monoid {\\sf fPSpace} consists of polynomially balanced functions that \nare polynomial-space computable (hence they might use exponential time).\nObviously, ${\\sf fP}^{\\sf PSpace} \\subseteq {\\sf fPSpace}$. But the \nconverse holds too, since the polynomially many output bits of a function\nin {\\sf fPSpace} can be found one by one, by a polynomial number of calls\nto {\\sf PSpace} oracles.\n\n\\medskip\n\n\\noindent (2) We define {\\sf fLog} (``functions in log-space'') to \nconsist of the polynomially balanced partial functions that are computable \nin deterministic log space. {\\sf fLog} is closed under composition (see \n\\cite{HU}), and ${\\sf fLog} \\subseteq {\\sf fP}$. 
\n\n\nIf ${\\sf P} \\neq {\\sf NP}$ then {\\sf fLog} is non-regular;\nmore strongly, in that case {\\sf fLog} contains one-way functions (with \nno inverse in {\\sf fP}). Indeed, the 3{\\sc cnf} \nsatisfiability problem reduces to the inversion of the map\n$(B, \\alpha) \\mapsto (B, B(\\alpha))$, where $B$ is any boolean formula in \n3{\\sc cnf}, and $\\alpha$ is a truth-value assignment for $B$. It is not\ndifficult to prove that this map is in {\\sf fLog} when $B$ is in 3{\\sc cnf}. \nOne of the referees observed that {\\sf fLog} is regular iff {\\sf NP} $=$\n{\\sf L}, i.e., the class of languages accepted in deterministic log-space. \n\n\\medskip\n\n\\noindent (3) We define {\\sf fLin} (``functions in linear time'') to \nconsist of the linearly balanced partial functions that are computable \nin deterministic linear time. {\\sf fLin} is closed under composition, and\nit is non-regular \\ iff \\ ${\\sf P} \\neq {\\sf NP}$. More strongly, if ${\\sf P}\n\\neq {\\sf NP}$ then {\\sf fLin} contains one-way functions (with no inverse \nin {\\sf fP}); this is proved by padding arguments.\n\n\\bigskip\n\n\\bigskip\n\n\\noindent {\\bf Acknowledgement:} This paper benefitted greatly form \ncorrections offered by the referees. \n\n\n\n\n{\\small\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTiming noise is low-frequency fluctuation in the rotation-rate of\npulsars and is evident in the timing residuals of all young and\nmiddle-aged pulsars. The basic properties of timing noise were reviewed\nrecently by \\cite[Hobbs et al. (2010)]{hlk10} who presented the results of an\nanalysis of the rotation of 366 pulsars. In summary, timing noise is\nseen as smooth changes in the timing residuals (and rotation\nfrequency). The timing residuals are often asymmetric, with peaks and\ntroughs having different radii of curvature; the variations are\noften quasi-periodic with timescales which are typically 1-10\nyears. For long, it was thought to arise from the fluid interiors of\nthe neutron stars.\n\nA breakthrough was made in 2006, when \\cite[Kramer et al.]{klo+06}\nshowed that the timing noise in the long-term intermittent pulsar\nB1931+24 could be explained in detail by switching in the magnitude of\nthe slowdown rate $\\dot{\\nu}$ betwen the \"ON\" and \"OFF\" emission\nstates of the pulsar, indicating that changes in the current flow from\nthe pulsar resulted in changes of both the radio emission and of the\nbraking torque. The implication that changes in magnetospheric\ncurrents could alter pulsar emission properties as well as slowdown\nrate led \\cite[Lyne et al. (2010)]{lhk+10} (hereafter LHKSS) to study\nthe detailed pulse profiles of some of those pulsars having the\nlargest amounts of timing noise. They demonstrated that six of these\npulsars exhibited the well-known phenomenon of mode-changing, in which\na pulsar switches abruptly between two stable profiles. 
Moreover, in\nall six pulsars, there was a high degree of correlation between the\npulse shape and slowdown rate, the pulsars switching rapidly between\nlow- and high-spindown rates.\n\nTwo years after the publication of LHKSS, we review the observational\nevidence for switched changes in magnetospheric states, both in\nintermittent pulsars and in mode-changing pulsars, and discuss the\nrelationship between the two phenomena.\n\n\section{Intermittent pulsars}\nMany pulsars show intermittency in their radio emission, although\nusually the durations of the \"ON\" and \"OFF\" states are measured in\nseconds to hundreds of seconds, timescales which are far too short to\npermit the determination of any change in slowdown rate between the\nstates. This is the phenomenon of pulse nulling which has been known\nsince shortly after the discovery of pulsars (\cite[Backer 1970]{bac70}).\n\n \begin{figure}[t]\n \begin{center}\n \includegraphics[scale=0.31, angle=-90]{fig1.eps} \n \caption{The rotational frequency evolution of PSR~J1832+0029\n (\cite{llm+12}).} \n \label{fig1}\n \end{center}\n \end{figure}\n\n \begin{figure}[t]\n \begin{center}\n \includegraphics[scale=0.31, angle=-90]{fig2.eps} \n \caption{The rotational frequency evolution of PSR~J1841$-$0500\n (\cite[Camilo et al. 2012]{crc+12} and Lyne, priv. comm.).} \n \label{fig2}\n \end{center}\n \end{figure}\n\nHowever, the intermittent pulsar B1931+24 is typically ON for 1 week\nand OFF for about 1 month, permitting \cite[Kramer et\nal. (2006)]{klo+06} to show that the ratio of ON- and OFF-state slowdown\nvalues $\dot{\nu}_{\rm ON}\/\dot{\nu}_{\rm OFF}=1.5\pm0.1$, roughly\nconsistent with an absence of all magnetospheric currents during\nthe OFF phase, in accordance with the calculations of the braking\neffects of magnetospheric currents by \cite[Goldreich \& Julian\n(1969)]{gj69}.\n\nShortly after that publication, a second long-term intermittent\npulsar was discovered (PSR~J1832+0029) and reported to show similar\nlarge changes in slowdown rate ($\dot{\nu}_{\rm ON}\/\dot{\nu}_{\rm\nOFF}=1.7\pm0.1$; \cite[Kramer 2008]{kra08}, \cite[Lyne 2009]{lyn09},\n\cite[Lorimer et al. 2012]{llm+12}). Fig.~1 shows the measured values\nof rotation rate during the 10 years since its discovery. With rather\npoor statistics, the lengths of the ON and OFF states are typically\nmany hundreds of days, compared with tens of days for B1931+24.\n\nMore recently, a third long-term intermittent object\n(PSR~J1841$-$0500), also with timescales measured in hundreds of days,\nhas been reported by \cite[Camilo et al. (2012)]{crc+12}. Even though\nthe statistics are also poor for this pulsar, it is clear that this pulsar\nhas an even greater slowdown rate ratio ($\dot{\nu}_{\rm\nON}\/\dot{\nu}_{\rm OFF}=2.5\pm0.2$; see Fig.~2).\n\n\n\section{Profile-switching pulsars}\nLHKSS studied those pulsars in the Jodrell Bank timing database which\nshowed the largest amounts of timing noise, measured as the ratio of\nmaximum to minimum values of slowdown rate. Seventeen examples are\nshown in Fig.~3. Pulsars typically have peak-to-peak values of about\n1\% of the mean, over a 4-orders-of-magnitude range of slowdown rates.\nIndividual pulsars may have a factor of 10 times more or less than\nthis.
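To give a feel for how such small fractional changes in $\dot{\nu}$ show up in timing data, the following short Python sketch may be helpful. It is an illustration only: every number in it is an assumed, round value rather than a measurement of any pulsar discussed here. It integrates the rotational phase of a star whose spin-down rate alternates between two discrete states differing by 1\%, and then removes the best-fitting constant-$\dot{\nu}$ model, as is done when forming timing residuals.

\begin{verbatim}
# Illustrative sketch only: how switching of the spin-down rate between two
# discrete states shows up as timing residuals.  All numbers are assumed,
# round values chosen for illustration; they are not measurements of the
# pulsars discussed in the text.

import numpy as np

DAY     = 86400.0             # seconds
nu0     = 1.0                 # assumed spin frequency at t = 0 (Hz)
nudot_A = -1.00e-14           # assumed spin-down rate in state A (s^-2)
nudot_B = -0.99e-14           # state B: magnitude smaller by 1 per cent
block   = 50 * DAY            # assumed dwell time in each state
t       = np.arange(0.0, 1000 * DAY, DAY)   # daily samples for 1000 days

# Alternate between the two states every 'block' seconds.
in_A  = (np.floor(t / block) % 2) == 0
nudot = np.where(in_A, nudot_A, nudot_B)

# Integrate nudot -> nu -> phase (a simple rectangle rule is fine here).
dt    = np.diff(t, prepend=0.0)
nu    = nu0 + np.cumsum(nudot * dt)
phase = np.cumsum(nu * dt)                   # cycles

# A timing solution fits a smooth model (constant nu and nudot), i.e. a
# quadratic in time; the timing residual is whatever is left after that fit.
tdays        = t / DAY
resid_cycles = phase - np.polyval(np.polyfit(tdays, phase, 2), tdays)
resid_ms     = 1e3 * resid_cycles / nu0      # convert cycles to time (ms)

print("peak-to-peak residual: %.3f ms" % (resid_ms.max() - resid_ms.min()))
\end{verbatim}

By construction the leftover residuals vary quasi-periodically with the assumed switching pattern; the printed peak-to-peak value shows the size of the effect for these assumed numbers.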
LHKSS found that six of the 17 pulsars showed pulse-shape\nchanges which were correlated with these slowdown rate variations.\nSubsequent studies (Lyne et al., in prep) have now shown that a\nfurther four of these 17 also have significant pulse-shape variations\nthat are correlated with slowdown rate (PSRs B0919+06, B1642$-$03,\nB1826$-$17 and B1903+07) as well as two others (PSRs B1740$-$03 and\nB0105+65).\n\n \\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[width=0.4\\textwidth]{fig3new.eps} \n \\caption{The slowdown rate ($\\dot{\\nu}$) of 17 pulsars (from Lyne et\n al. 2010).} \n \\label{fig3}\n \\end{center}\n \\end{figure}\n\n \\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[scale=0.35, angle=0]{fig4.eps} \n \\caption{The profiles of the ``normal'' (top) and ``abnormal''\n(bottom) states of PSR~J2047+5029 (Janssen et al., in prep.).} \n \\label{fig4}\n \\end{center}\n \\end{figure}\n\nA further striking example of correlated changes in pulse shape and\nslowdown is displayed by PSR~J2047+5029, discovered at Westerbork in\nthe 8gr8 survey (\\cite[Janssen et al. 2009]{jsb+09}). At discovery,\nthe observed profile showed a main pulse and an interpulse with an\nassociated precursor having about 1\/3 of the flux density of the main\npulse (Fig.~4 top). However, when monitoring commenced at Jodrell\nBank, the main pulse had reduced in intensity by a factor of about 10,\nso that it was then much weaker than the interpulse (Fig.~4 bottom).\nThe pulsar has since changed from this ``abnormal mode'' back to the\nearlier ``normal'' mode. These changes were accompanied by changes in\nslowdown rate, which was larger when in the normal mode than in the\nabnormal mode, again consistent with the notion that the particles\nresponsible for much of the normal-mode main-pulse radio flux density\nwere also responsible for the increase in braking.\n\nIn total there are now 16 pulsars with established synchronised\nchanges in the radio emission and the slowdown rate. The properties\nof these pulsars are summarised in Table~1, in decreasing order of the\nmagnitude of the timing noise, measured as the ratio of the maximum and\nminimum slowdown rates.\n\n\\begin{table}\n \\begin{center}\n \\caption{Timing noise slowdown rate ratios ($\\dot{\\nu_1}\/\\dot{\\nu_2}$) and emission changes in 16 pulsars.}\n \\label{tab1}\n \\scriptsize\n \\begin{tabular}{|l|l|l|l|}\\hline \n{\\bf PULSAR} & {\\bf $\\dot{\\nu}_1\/\\dot{\\nu}_2$}~~ & {\\bf Emission Change} & {\\bf Reference} \\\\ \\hline\nJ1841$-$0500~~ & 2.5 & Deep null & Camilo et al. (2012) \\\\\nJ1832+0029 & 1.7 & Deep null & Lorimer et al. (2012) \\\\\nB1931+24 & 1.5 & Deep null & Kramer et al. (2012) \\\\ \nB2035+36 & 1.13 & 28\\% change in W$_{\\rm eq}$ & Lyne et al. (2010) \\\\\nB1740$-$03 & 1.13 & 70\\% change in component ratio & This paper \\\\\nB0105+65 & 1.11 & 30\\% change in W$_{\\rm eq}$ & This paper \\\\\nB1903+07 & 1.07 & 10\\% change in W$_{\\rm 10}$ & This paper \\\\\nJ2043+2740 & 1.06 & 100\\% change in W$_{\\rm 50}$ & Lyne et al. (2010) \\\\\nB1822$-$09 & 1.033& 100\\% change in precursor\/interpulse~~ & Lyne et al. (2010) \\\\\nJ2047+5029 & 1.030& 90\\% change in main pulse & Janssen et al. (in prep)~~ \\\\\nB1642$-$03 & 1.025& 20\\% change in cone\/core & This paper \\\\\nB1540$-$06 & 1.017& 12\\% change in W$_{\\rm 10}$ & Lyne et al. (2010) \\\\\nB1828$-$11 & 1.007& 100\\% change in W$_{\\rm 10}$ & Lyne et al. 
(2010) \\ \nB1826$-$17 & 1.007& 10\% change in cone\/core & This paper \\ \nB0919+06 & 1.007& 30\% change in component ratio & This paper \\ \nB0740$-$28 & 1.007& 20\% change in W$_{\rm 75}$ & Lyne et al. (2010) \\ \n\hline\n \end{tabular}\n \end{center}\n\vspace{1mm}\n \scriptsize\n\end{table}\n\n\section{The nature of the switching}\n\nThe time sequences of slowdown rates shown in Fig.~3 are usually\nbounded by well-defined maximum and minimum levels, each extreme level\nbeing identifiable with a characteristic emission profile or flux\ndensity. As reported by LHKSS, each pulsar is usually seen to switch\nabruptly between these extreme states. The fact that the patterns in\nFig.~3 are generally smooth and do not display abrupt switching\nbehaviour was demonstrated to arise from changing statistical\nproperties of the mode-changing phenomenon, and the observed profile\nshape parameter is determined by the proportion of time spent in the\ntwo modes. This is most clearly illustrated in Fig.~5, which shows a\nnumber of 8-hour observations of PSR~B1828$-$11 chosen at different\nphases of the 500-day oscillating pattern displayed by this pulsar in\nFig.~3. We note that although pulse nulling and profile mode-changing\nwere first observed in 1970 (\cite[Backer 1970]{bac70}; \cite[Backer\n1970a]{bac70a}), the following four decades have seen no study of the\nstability of the statistics of nulling or mode-changing.\n\n\n \begin{figure}[bt]\n \begin{center}\n \includegraphics[scale=0.38, angle=-90]{fig5.eps} \n \caption{Four 8-hour time-sequences of the profile state of\n PSR~B1828$-$11, taken at four different phases of the 500-day oscillation\n seen in Fig.~3 for this pulsar. Although the switching timescale may\n be short, the fraction of time spent in one state or the other\n changes slowly (Stairs et al. in prep.)} \n \label{fig5}\n \end{center}\n \end{figure}\n\n\section{The relationship between nulling and mode-changing}\nThe processes of nulling and mode-changing are similar in many\nways. Both are switched phenomena between (usually) two discrete\nemission states. They both have similar large ranges of timescales\nand both have a major synchronisation with the spindown rate of the pulsar.\nThey are both understood in terms of changes in magnetospheric particle\ncurrents. The natural conclusion is that nulling is probably an extreme form of\nmode-changing. This view is supported by a few cases in which an\napparently nulling pulsar has been found to have low-level emission in\nthe null state. One such example is PSR~B0826$-$34, shown in Fig.~6,\nin which integration of the data during apparently ``null'' episodes\nshows pulsed emission at a level of about 2\% of the un-nulled pulses.\n \nPerhaps one telescope's nulling pulsar is a larger telescope's\nmode-changing pulsar!\n\n\n \begin{figure}[hbt]\n \begin{center}\n \includegraphics[scale=0.35, angle=-90]{fig6new.eps} \n \caption{The integrated ``normal'' (top) and ``abnormal'' (bottom) \nprofiles of PSR~B0826$-$34 at 1374~MHz (from \cite[Esamdin et al. 2005]{elg+05}).} \n \label{fig6}\n \end{center}\n \end{figure}\n\n\section{Conclusions}\nPulsar magnetospheres switch between a small number of discrete\nstates, usually two, each of which corresponds to an apparently\nquasi-stable magnetospheric configuration. It seems that changes in\nmagnetospheric current flows between these states cause variations in\nboth the emission beam and the slowdown rate.
This is supported by\nthe general observation that the larger slowdown-rate is nearly\nalways associated with enhanced emission, particularly of the pulsar\n``core'' emission. I have no understanding of why there are these\ndiscrete states, or of the origin of the multi-year quasi-periodicities that\nmodulate the statistical properties of the states. Free-precession of\nthe neutron star and orbiting asteroids have been proposed, but any\nlinks are obscure.\n\nFinally, it must be emphasised that these phenomena are\nwidespread. The majority of pulsars of young and intermediate\ncharacteristic age display detectable timing noise. The studies of\nprofiles described here have mostly been of those pulsars which have\nthe largest fractional changes in slowdown rates and hence may be\nexpected to suffer the greatest magnetospheric changes and\ncorresponding variation in emission properties. In fact, 12 of the 19\npulsars with largest timing noise show correlated emission\nvariations. Most of the remaining 7 have much poorer signal-to-noise\nratio, making the precise determination of pulse shape changes\nchallenging. The changes expected in less timing-noisy pulsars are\nlikely to be much more subtle and a challenge to detect. At present,\nthere is no reason to doubt that all timing noise has its origin in\nswitched magnetospheric states.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}