\section{Introduction}\nNetworks have attracted considerable recent attention in physics and other\nfields as a foundation for the mathematical representation of a variety of\ncomplex systems, including biological and social systems, the Internet, the\nworldwide web, and many others~\\cite{Newman03d,Boccaletti06,DM03b,NBW06}.\nA common feature of many networks is ``community structure,'' the tendency\nfor vertices to divide into groups, with dense connections within groups\nand only sparser connections between them~\\cite{GN02,Newman04b}. Social\nnetworks~\\cite{GN02}, biochemical networks~\\cite{HHJ03,GA05,PDFV05}, and\ninformation networks such as the web~\\cite{FLGC02}, have all been shown to\npossess strong community structure, a finding that has substantial\npractical implications for our understanding of the systems these networks\nrepresent. Communities are of interest because they often correspond to\nfunctional units such as cycles or pathways in metabolic\nnetworks~\\cite{GA05,PDFV05,HH06} or collections of pages on a single topic\non the web~\\cite{FLGC02}, but their influence reaches further than this. A\nnumber of recent results suggest that networks can have properties at the\ncommunity level that are quite different from their properties at the level\nof the entire network, so that analyses that focus on whole networks and\nignore community structure may miss many interesting features.\n\nFor instance, in some social networks one finds individuals with different\nmean numbers of contacts in different groups; the individuals in one group\nmight be gregarious, having many contacts with others, while the\nindividuals in another group might be more reticent. An example of this\nbehavior is seen in networks of sexual contacts, where separate communities\nof high- and low-activity individuals have been\nobserved~\\cite{Garnett96,Aral99}. If one were to characterize such a\nnetwork by quoting only a single figure for the average number of contacts\nan individual has, one would be missing features of the network directly\nrelevant to questions of scientific interest such as epidemiological\ndynamics~\\cite{GAM89}.\n\nIt has also been shown that vertices' positions within communities can\naffect the role or function they assume. In social networks, for example,\nit has long been accepted that individuals who lie on the boundaries of\ncommunities, bridging gaps between otherwise unconnected people, enjoy an\nunusual level of influence as the gatekeepers of information flow between\ngroups~\\cite{Granovetter73,Burt76,Freeman77}. A surprisingly similar\nresult is found in metabolic networks, where metabolites that straddle the\nboundaries between modules show particular persistence across\nspecies~\\cite{GA05}. This finding might indicate that modules in metabolic\nnetworks possess some degree of functional independence within the cell,\nallowing vertices central to a module to change or disappear with\nrelatively little effect on the rest of the network, while vertices on the\nborders of modules are less able to change without affecting other aspects\nof the cellular machinery.\n\nOne can also consider the communities in a network themselves to form a\nhigher level meta-network, a coarse-grained representation of the full\nnetwork. 
Such coarse-grained representations have been used in the past as\ntools for visualization and analysis~\\cite{NG04} but more recently have\nalso been investigated as topologically interesting entities in their own\nright. In particular, networks of modules appear to have degree\ndistributions with interesting similarities to but also some differences\nfrom the degree distributions of other networks~\\cite{PDFV05}, and may also\ndisplay so-called preferential attachment in their formation~\\cite{PPV06},\nindicating the possibility of distinct dynamical processes taking place at\nthe level of the modules.\n\nFor all of these reasons and others besides there has been a concerted\neffort in recent years within the physics community and elsewhere to\ndevelop mathematical tools and computer algorithms to detect and quantify\ncommunity structure in networks. A huge variety of community detection\ntechniques have been developed, based variously on centrality measures,\nflow models, random walks, resistor networks, optimization, and many other\napproaches~\\cite{GN02,Zhou03,Krause03,WH04a,Radicchi04,NG04,Newman04a,CSCC04,FLM04,RB04,DM04,ZL04,PDFV05,GA05,Clauset05,PL05,Newman06b,DDA06,RB06,Hastings06}.\nFor reviews see Refs.~\\cite{Newman04b,DDDA05}.\n\nIn this paper we focus on one approach to community detection that has\nproven particularly effective, the optimization of the benefit function\nknown as ``modularity'' over the possible divisions of a network. Methods\nbased on this approach have been found to produce excellent results in\nstandardized tests~\\cite{DDDA05,GLH06}. Unfortunately, exhaustive\noptimization of the modularity demands an impractically large computational\neffort, but good results have been obtained with various approximate\noptimization techniques, including greedy\nalgorithms~\\cite{Newman04a,CNM04}, simulated annealing~\\cite{GSA04,RB06},\nand extremal optimization~\\cite{DA05}. In this paper we describe a\ndifferent approach, in which we rewrite the modularity function in matrix\nterms, which allows us to express the optimization task as a spectral\nproblem in linear algebra. This approach leads to a family of fast new\ncomputer algorithms for community detection that produce results\ncompetitive with the best previous methods. Perhaps more importantly, our\nwork also leads to a number of useful insights about network structure via\nthe close relations we will demonstrate between communities and matrix\nspectra.\n\nOur work is by no means the first to find connections between divisions of\nnetworks and matrix spectra. There is a large literature within computer\nscience on so-called spectral partitioning, in which network properties are\nlinked to the spectrum of the graph Laplacian\nmatrix~\\cite{Fiedler73,PSL90,Fjallstrom98}. This method is different from\nthe one introduced here and is not in general well suited to the problem of\ncommunity structure detection. The reasons for this, however, turn out to\nbe interesting and instructive, so we begin our presentation with a brief\nreview of the traditional spectral partitioning method in\nSection~\\ref{specpart}. 
A consideration of the deficiencies of this method\nin Section~\\ref{modsec} leads us in Sections~\\ref{specmod}--\\ref{multiway}\nto introduce and develop at length our own method, which is based on the\ncharacteristic matrix we call the ``modularity matrix.''\nSections~\\ref{negative} and~\\ref{otheruses} explore some further ideas\narising from the study of the modularity matrix but not directly related to\ncommunity detection. In Section~\\ref{concs} we give our conclusions. A\nbrief report of some of the results described in this paper has appeared\npreviously as Ref.~\\cite{Newman06b}.\n\n\n\\section{Graph partitioning and the Laplacian matrix}\n\\label{specpart}\nThere is a long tradition of research in computer science on graph\npartitioning, a problem that arises in a variety of contexts, but most\nprominently in the development of computer algorithms for parallel or\ndistributed computation. Suppose a computation requires the performance of\nsome number~$n$ of tasks, each to be carried out by a separate process,\nprogram, or thread running on one of $c$ different computer processors.\nTypically there is a desired number of tasks or volume of work to be\nassigned to each of the processors. If the processors are identical, for\ninstance, and the tasks are of similar complexity, we may wish to assign\nthe same number of tasks to each processor so as to share the workload\nroughly equally. It is also typically the case that the individual tasks\nrequire for their completion results generated during the performance of\nother tasks, so tasks must communicate with one another to complete the\noverall computation. The pattern of required communications can be thought\nof as a network with $n$ vertices representing the tasks and an edge\njoining any pair of tasks that need to communicate, for a total of $m$\nedges. (In theory the amount of communication between different pairs of\ntasks could vary, leading to a \\emph{weighted} network, but we here\nrestrict our attention to the simplest unweighted case, which already\npresents interesting challenges.)\n\nNormally, communications between processors in parallel computers are slow\ncompared to data movement within processors, and hence we would like to\nkeep such communications to a minimum. In network terms this means we\nwould like to divide the vertices of our network (the processes) into\ngroups (the processors) such that the number of edges between groups is\nminimized. This is the graph partitioning problem.\n\nProblems of this type can be solved exactly in polynomial time~\\cite{GH88},\nbut unfortunately the polynomial in question is of leading order~$n^{c^2}$,\nwhich is already prohibitive for all but the smallest networks even when\n$c$ takes the smallest possible value of~2. For practical applications,\ntherefore, a number of approximate solution methods have been developed\nthat appear to give reasonably good results. One of the most widely used\nis the spectral partitioning method, due originally to\nFiedler~\\cite{Fiedler73} and popularized particularly by\nPothen~{\\it{}et~al.}~\\cite{PSL90}. 
We here consider the simplest instance of the\nmethod, where $c=2$, i.e.,~where our network is to be divided into just two\nnon-intersecting subsets such that the number of edges running between the\nsubsets is minimized.\n\nWe begin by defining the adjacency matrix~$\\mathbf{A}$ to be the matrix with\nelements\n\\begin{equation}\nA_{ij} = \\begin{cases}\n \\enspace 1 & \\text{if there is an edge joining vertices $i,j$,} \\\\\n \\enspace 0 & \\text{otherwise.}\n \\end{cases}\n\\label{adjacency}\n\\end{equation}\n(We restrict our attention in this paper to undirected networks, so that\n$\\mathbf{A}$ is symmetric.) Then the number of edges~$R$ running between our\ntwo groups of vertices, also called the \\textit{cut size}, is given by\n\\begin{equation}\nR = \\mbox{$\\frac12$}\\!\\!\n \\sum_{\\parbox{3em}{\\scriptsize\\centering $i,j$ in different groups}}\\!\\!\n A_{ij},\n\\label{cutsize1}\n\\end{equation}\nwhere the factor of $\\mbox{$\\frac12$}$ compensates for our counting each edge twice in\nthe sum.\n\nTo put this in a more convenient form, we define an \\textit{index\nvector}~$\\vec{s}$ with $n$ elements\n\\begin{equation}\ns_i = \\begin{cases}\n \\enspace +1 & \\text{if vertex~$i$ belongs to group~1,} \\\\\n \\enspace -1 & \\text{if vertex~$i$ belongs to group~2.}\n \\end{cases}\n\\label{defss}\n\\end{equation}\n(Note that $\\vec{s}$ satisfies the normalization condition\n$\\vec{s}^T\\vec{s}=n$.) Then\n\\begin{equation}\n\\mbox{$\\frac12$}(1-s_is_j) = \\begin{cases}\n \\enspace 1 & \\text{if $i$ and $j$ are in different groups,} \\\\\n \\enspace 0 & \\text{if $i$ and $j$ are in the same group,}\n \\end{cases}\n\\label{sisj}\n\\end{equation}\nwhich allows us to rewrite Eq.~\\eqref{cutsize1} as\n\\begin{equation}\nR = \\mbox{$\\frac14$} \\sum_{ij} (1-s_is_j) A_{ij}.\n\\label{cutsize2}\n\\end{equation}\nNoting that the number of edges~$k_i$ connected to a vertex~$i$---also\ncalled the\n\\textit{degree} of the vertex---is given by\n\\begin{equation}\nk_i = \\sum_j A_{ij},\n\\label{degree}\n\\end{equation}\nthe first term of the sum in~\\eqref{cutsize2} is\n\\begin{equation}\n\\sum_{ij} A_{ij} = \\sum_i k_i = \\sum_i s_i^2 k_i\n = \\sum_{ij} s_i s_j k_i \\delta_{ij},\n\\end{equation}\nwhere we have made use of $s_i^2=1$ (since $s_i=\\pm1$), and $\\delta_{ij}$\nis 1 if $i=j$ and zero otherwise. Thus\n\\begin{equation}\nR = \\mbox{$\\frac14$} \\sum_{ij} s_i s_j (k_i \\delta_{ij} - A_{ij}).\n\\end{equation}\nWe can write this in matrix form as\n\\begin{equation}\nR = \\mbox{$\\frac14$} \\vec{s}^T\\mathbf{L}\\vec{s},\n\\label{defsr}\n\\end{equation}\nwhere $\\mathbf{L}$ is the real symmetric matrix with elements $L_{ij}=k_i\n\\delta_{ij} - A_{ij}$, or equivalently\\footnote{We assume here that the\nnetwork is a \\textit{simple graph}, having at most one edge between any pair\nof vertices and no self-edges (edges that connect vertices to themselves).}\n\\begin{equation}\nL_{ij} = \\begin{cases}\n k_i & \\text{if $i=j$,} \\\\\n -1 & \\text{if $i\\ne j$ and there is an edge $(i,j)$,} \\\\\n 0 & \\text{otherwise.}\n \\end{cases}\n\\end{equation}\n$\\mathbf{L}$ is called the \\textit{Laplacian matrix} of the graph or sometimes\nthe \\textit{admittance matrix}. It appears in many contexts in the theory of\nnetworks, such as the analysis of diffusion and random walks on\nnetworks~\\cite{Chung97}, Kirchhoff's theorem for the number of spanning\ntrees~\\cite{Bollobas98}, and the dynamics of coupled\noscillators~\\cite{BP02b,NMLH03}. 
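Though none of the developments below depends on it, the quantities defined so far are also easy to work with numerically, and a quick check can be reassuring. The following minimal sketch (in Python with the numpy library; the six-vertex example graph is our own invention, chosen purely for illustration) constructs the Laplacian and verifies Eq.~\\eqref{defsr} against a direct count of the edges crossing a division:\n\\begin{verbatim}\nimport numpy as np\n\n# Adjacency matrix of a small example graph: two triangles\n# (vertices 0,1,2 and 3,4,5) joined by the single edge (2,3).\nA = np.array([[0,1,1,0,0,0],\n              [1,0,1,0,0,0],\n              [1,1,0,1,0,0],\n              [0,0,1,0,1,1],\n              [0,0,0,1,0,1],\n              [0,0,0,1,1,0]])\nk = A.sum(axis=1)               # degrees k_i\nL = np.diag(k) - A              # Laplacian, L_ij = k_i delta_ij - A_ij\n\ns = np.array([1,1,1,-1,-1,-1])  # index vector: one triangle per group\nR = 0.25 * s @ L @ s            # cut size, R = (1/4) s^T L s\n\n# direct count of edges running between the two groups\nR_direct = 0.5 * sum(A[i][j] for i in range(6) for j in range(6)\n                     if s[i] != s[j])\nassert R == R_direct            # both give 1 for this division\n\\end{verbatim}\nFor this division the only edge crossing between the groups is $(2,3)$, so both counts give $R=1$.\n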
The properties of the Laplacian are the subject of\nhundreds of papers in the mathematics and physics literature and are by now\nquite well understood. For our purposes, however, we will need only a few\nsimple observations about the matrix to make progress.\n\nOur task is to choose the vector $\\vec{s}$ so as to minimize the cut size,\nEq.~\\eqref{defsr}. Let us write $\\vec{s}$ as a linear combination of the\nnormalized eigenvectors~$\\vec{v}_i$ of the Laplacian thus:\n$\\vec{s}=\\sum_{i=1}^n a_i \\vec{v}_i$, where $a_i=\\vec{v}_i^T\\vec{s}$ and\nthe normalization $\\vec{s}^T\\vec{s}=n$ implies that\n\\begin{equation}\n\\sum_{i=1}^n a_i^2 = n.\n\\label{normalization}\n\\end{equation}\nThen\n\\begin{equation}\nR = \\mbox{$\\frac14$} \\sum_i a_i \\vec{v}_i^T \\mathbf{L} \\sum_j a_j \\vec{v}_j\n  = \\mbox{$\\frac14$} \\sum_{ij} a_i a_j \\lambda_j \\delta_{ij}\n  = \\mbox{$\\frac14$} \\sum_i a_i^2 \\lambda_i,\n\\label{rvalue}\n\\end{equation}\nwhere $\\lambda_i$ is the eigenvalue of $\\mathbf{L}$ corresponding to the\neigenvector~$\\vec{v}_i$ and we have made use of\n$\\vec{v}^T_i\\vec{v}_j=\\delta_{ij}$. Without loss of generality, we assume\nthat the eigenvalues are labeled in increasing order\n$\\lambda_1\\le\\lambda_2\\le\\ldots\\le\\lambda_n$. The task of minimizing~$R$\ncan then be equated with the task of choosing the nonnegative\nquantities~$a_i^2$ so as to place as much as possible of the weight in the\nsum~\\eqref{rvalue} in the terms corresponding to the lowest eigenvalues,\nand as little as possible in the terms corresponding to the highest, while\nrespecting the normalization constraint~\\eqref{normalization}.\n\nThe sum of every row (and column) of the Laplacian matrix is zero:\n\\begin{equation}\n\\sum_j L_{ij} = \\sum_j (k_i \\delta_{ij} - A_{ij}) = k_i - k_i = 0,\n\\end{equation}\nwhere we have made use of~\\eqref{degree}. Thus the vector $(1,1,1,\\ldots)$\nis always an eigenvector of the Laplacian with eigenvalue zero. It is less\ntrivial, but still straightforward, to demonstrate that all eigenvalues of\nthe Laplacian are nonnegative. (The Laplacian is symmetric and can be\nwritten as the product $\\mathbf{L}=\\mathbf{N}\\mathbf{N}^T$ of the edge\nincidence matrix~$\\mathbf{N}$ with its own transpose, and hence\n$\\vec{x}^T\\mathbf{L}\\vec{x}=|\\mathbf{N}^T\\vec{x}|^2\\ge0$ for any real\nvector~$\\vec{x}$.) Thus the eigenvalue~0 is always the smallest\neigenvalue of the Laplacian and the corresponding eigenvector is\n$\\vec{v}_1=(1,1,1,\\ldots)\/\\sqrt{n}$, correctly normalized.\n\nGiven these observations it is now straightforward to see how to minimize\nthe cut size~$R$. If we choose $\\vec{s}=(1,1,1,\\ldots)$, then all of the\nweight in the final sum in Eq.~\\eqref{rvalue} is in the term corresponding\nto the lowest eigenvalue~$\\lambda_1=0$ and all other terms are zero, since\n$(1,1,1,\\ldots)$ is an eigenvector and the eigenvectors are orthogonal.\nThus this choice gives us $R=0$, which is the smallest value it can take\nsince it is by definition a nonnegative quantity.\n\nUnfortunately, when we consider the physical interpretation of this\nsolution, we see that it is trivial and uninteresting. Given the\ndefinition~\\eqref{defss} of~$\\vec{s}$, the choice $\\vec{s}=(1,1,1,\\ldots)$\nis equivalent to placing all the vertices in group~1 and none of them in\ngroup~2. Technically, this is a valid division of the network, but it is\nnot a useful one. Of course the cut size is zero if we put all the\nvertices in one of the groups and none in the other, but such a trivial\nsolution tells us nothing about how to solve our original problem.\n\nWe would like to forbid this trivial solution, so as to force the method to\nfind a nontrivial one. 
A variety of ways have been explored for achieving\nthis goal, of which the most common is to fix the sizes of the two groups,\nwhich is convenient if, as discussed above, the sizes of the groups are\nspecified anyway as a part of the problem. In the present case, fixing the\nsizes of the groups fixes the coefficient~$a_1^2$ of the $\\lambda_1$~term\nin the sum in Eq.~\\eqref{rvalue}; if the required sizes of the groups are\n$n_1$ and~$n_2$, then\n\\begin{equation}\na_1^2 = \\bigl( \\vec{v}_1^T\\vec{s} \\bigr)^2 = {(n_1-n_2)^2\\over n}.\n\\end{equation}\nSince we cannot vary this coefficient, we shift our attention to the other\nterms in the sum. If there were no further constraints on our choice\nof~$\\vec{s}$, apart from the normalization condition~$\\vec{s}^T\\vec{s}=n$,\nour course would be clear: $R$~would be minimized by choosing $\\vec{s}$\nproportional to the second eigenvector~$\\vec{v}_2$ of the Laplacian, also\ncalled the \\textit{Fiedler vector}. This choice places all of the weight in\nEq.~\\eqref{rvalue} in the term involving the second-smallest\neigenvalue~$\\lambda_2$, also known as the \\textit{algebraic connectivity}.\nThe other terms would automatically be zero, since the eigenvectors are\northogonal.\n\nUnfortunately, there is an additional constraint on $\\vec{s}$ imposed by\nthe condition, Eq.~\\eqref{defss}, that its elements take the values~$\\pm1$,\nwhich means in most cases that $\\vec{s}$ cannot be chosen parallel\nto~$\\vec{v}_2$. This makes the optimization problem much more difficult.\nOften, however, quite good approximate solutions can be obtained by\nchoosing $\\vec{s}$ to be as close to parallel with $\\vec{v}_2$ as possible.\nThis means maximizing the quantity\n\\begin{equation}\n\\bigl| \\vec{v}_2^T\\vec{s} \\bigr| = \\biggl| \\sum_i v^{(2)}_i s_i \\biggr|\n \\le \\sum_i \\bigl| v^{(2)}_i \\bigr|,\n\\label{tomax}\n\\end{equation}\nwhere $v^{(2)}_i$ is the $i$th element of $\\vec{v}_2$. Here the second\nrelation follows via the triangle inequality, and becomes an equality only\nwhen all terms in the first sum are positive (or negative). In other\nwords, the maximum of $|\\vec{v}_2^T\\vec{s}|$ is achieved when $v^{(2)}_i\ns_i\\ge0$ for all~$i$, or equivalently when $s_i$ has the same sign as\n$v^{(2)}_i$. Thus the maximum is obtained with the choice\n\\begin{equation}\ns_i = \\begin{cases}\n +1 & \\quad\\text{if $v^{(2)}_i\\ge0$,} \\\\\n -1 & \\quad\\text{if $v^{(2)}_i<0$.}\n      \\end{cases}\n\\end{equation}\nEven this choice, however, is often forbidden by the condition that the\nnumber of $+1$ and $-1$ elements of $\\vec{s}$ be equal to the desired sizes\n$n_1$ and $n_2$ of the two groups, in which case the best solution is\nachieved by assigning vertices to one of the groups in order of the\nelements in the Fiedler vector, from most positive to most negative, until\nthe groups have the required sizes. For groups of different sizes there\nare two distinct ways of doing this, one in which the smaller group\ncorresponds to the most positive elements of the vector and one in which\nthe larger group does. We can choose between them by calculating the cut\nsize~$R$ for both cases and keeping the one that gives the better result.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{eppstein.eps}\n\\end{center}\n\\caption{(a)~The mesh network of Bern~{\\it{}et~al.}~\\cite{BEG90}. 
(b)~The best\ndivision into equal-sized parts found by the spectral partitioning\nalgorithm based on the Laplacian matrix.}\n\\label{eppstein}\n\\end{figure}\n\nThis then is the spectral partitioning method in its simplest form. It is\nnot guaranteed to minimize~$R$, but, particularly in cases where\n$\\lambda_2$ is well separated from the eigenvalues above it, it often does\nvery well. Figure~\\ref{eppstein} shows an example application typical of\nthose found in the literature, to a two-dimensional mesh such as might be\nused in parallel finite-element calculations. This particular mesh is a\nsmall 547-vertex example from Bern~{\\it{}et~al.}~\\cite{BEG90} and is shown complete\nin panel~(a) of the figure. Panel~(b) shows the division of the mesh into\ntwo parts of $273$ and $274$ vertices respectively using the spectral\npartitioning approach, which finds a cut of size 46 edges in this case.\n\nAlthough the cut found in this example is a reasonable one, it does not\nappear---at least to this author's eye---that the vertex groups in\nFig.~\\ref{eppstein}b constitute any kind of natural division of the network\ninto ``communities.'' This is typical of the problems to which spectral\npartitioning is usually applied: in most circumstances the network in\nquestion does not divide up easily into groups of the desired sizes, but\none must do the best one can. For these types of tasks, spectral\npartitioning is an effective and appropriate tool. The task of finding\nnatural community divisions in a network, however, is quite different, and\ndemands a different approach, as we now discuss.\n\n\n\\section{Community structure and modularity}\n\\label{modsec}\nDespite its evident success in the graph partitioning arena, spectral\npartitioning is a poor approach for detecting natural community structure\nin real-world networks, which is the primary topic of this paper. The\nissue is with the condition that the sizes of the groups into which the\nnetwork is divided be fixed. This condition is neither appropriate nor\nrealistic for community detection problems. In most cases we do not know\nin advance the sizes of the communities in a network and choosing arbitrary\nsizes will usually preclude us from finding the best solution to the\nproblem. We would like instead to let the group sizes be free, but the\nspectral partitioning method breaks down if we do this, as we have seen: if\nthe group sizes are not fixed, then the minimum cut size is always achieved\nby putting all vertices in one group and none in the other. Indeed, this\nstatement is considerably broader than the spectral partitioning method\nitself, since any method that correctly minimizes the cut size without\nconstraint on the group sizes is sure to find, in the general case, that\nthe minimum value is achieved for this same trivial division.\n\nSeveral approaches have been proposed to get around this problem. For\ninstance, the \\textit{ratio cut} method~\\cite{WC89} minimizes not the simple\ncut size~$R$ but the ratio $R\/(n_1 n_2)$, where $n_1$ and $n_2$ are again\nthe sizes of the two groups of vertices. This penalizes configurations in\nwhich either of the groups is small and hence favors balanced divisions\nover unbalanced ones, releasing us from the obligation to fix the group\nsizes. Spectral algorithms based on ratio cuts have been\nproposed~\\cite{HK92,CSZ93} and have proved useful for certain classes of\npartitioning problems. 
Still, however, this approach effectively chooses\nthe group sizes, at least approximately, since it is biased in favor of\ndivisions into equal-sized parts. Variations are possible that are biased\ntowards other, unequal part sizes, but then one must choose those part\nsizes and so again we have a situation in which we need to know in advance\nthe sizes of the groups if we are to get the ``right'' results. The ratio\ncut method does allow some leeway for the sizes to vary around their\nspecified values, which makes it more flexible than the simple minimum cut\nmethod, but at its core it still suffers from the same drawbacks that make\nstandard spectral partitioning inappropriate for community detection.\n\nThe fundamental problem with all of these methods is that cut sizes are\nsimply not the right thing to optimize because they don't accurately\nreflect our intuitive concept of network communities. A good division of a\nnetwork into communities is not merely one in which the number of edges\nrunning between groups is small. Rather, it is one in which the number of\nedges between groups is \\emph{smaller than expected}. Only if the number\nof between-group edges is significantly lower than would be expected purely\nby chance can we justifiably claim to have found significant community\nstructure. Equivalently, we can examine the number of edges \\emph{within}\ncommunities and look for divisions of the network in which this number is\nhigher than expected---the two approaches are equivalent since the total\nnumber of edges is fixed and any edges that do not lie between communities\nmust necessarily lie inside them.\n\nThese considerations lead us to shift our attention from measures based on\npure cut size to a modified benefit function~$Q$ defined by\n\\begin{eqnarray}\nQ &=& \\mbox{(number of edges within communities)} \\nonumber\\\\\n  & & \\quad{} - \\mbox{(expected number of such edges)}.\n\\label{defsq1}\n\\end{eqnarray}\nThis benefit function is called \\textit{modularity}~\\cite{Newman03c,NG04}.\nIt is a function of the particular division of the network into groups,\nwith larger values indicating stronger community structure. Hence we\nshould, in principle, be able to find good divisions of a network into\ncommunities by optimizing the modularity over possible divisions. This\napproach, proposed in~\\cite{Newman04a} and since pursued by a number of\nauthors~\\cite{CNM04,GSA04,GA05,DA05,Newman06b}, has proven highly effective\nin practice~\\cite{DDDA05} and is the primary focus of this article.\n\nThe first term in Eq.~\\eqref{defsq1} is straightforward to calculate. The\nsecond, however, is rather vague and needs to be made more precise before\nwe can evaluate the modularity. What exactly do we mean by the ``expected\nnumber'' of edges within a community? Answering this question is\nessentially equivalent to choosing a ``null model'' against which to\ncompare our network. The definition of the modularity involves a\ncomparison of the number of within-group edges in a real network and the\nnumber in some equivalent randomized model network in which edges are\nplaced without regard to community structure.\n\nIt is one of the strengths of the modularity approach that it makes the\nrole of this null model explicit and clear. All methods for finding\ncommunities are, in a sense, assuming some null model, since any method\nmust make a value judgment about when a particular density of edges is\nsignificant enough to define a community. 
In most cases, this assumption\nis hidden within the workings of a computer algorithm and is difficult to\ndisentangle, even when the algorithm itself is well understood. By\nbringing its assumptions out into the open, the modularity method gives us\nmore control over our calculations and more understanding of their\nimplications.\n\nOur null model must have the same number of vertices~$n$ as the original\nnetwork, so that we can divide it into the same groups for comparison, but\napart from this we have a good deal of freedom about our choice of model.\nWe here consider the broad class of randomized models in which we specify\nseparately the probability~$P_{ij}$ for an edge to fall between every pair\nof vertices~$i,j$. More precisely, $P_{ij}$~is the expected number of\nedges between $i$ and~$j$, a definition that allows for the possibility\nthat there may be more than one edge between a pair of vertices, which\nhappens in certain types of networks. We will consider some particular\nchoices of $P_{ij}$ in a moment, but for now let us pursue the developments\nin general form.\n\nGiven $P_{ij}$, the modularity can be defined as follows. The actual\nnumber of edges falling between a particular pair of vertices $i$ and $j$\nis~$A_{ij}$, Eq.~\\eqref{adjacency}, and the expected number is, by\ndefinition,~$P_{ij}$. Thus the actual minus expected number of edges\nbetween $i$ and $j$ is $A_{ij}-P_{ij}$ and the modularity is (proportional\nto) the sum of this quantity over all pairs of vertices belonging to the\nsame community. Let us define $g_i$ to be the community to which\nvertex~$i$ belongs. Then the modularity can be written\n\\begin{equation}\nQ = {1\\over2m} \\sum_{ij} \\bigl[ A_{ij} - P_{ij} \\bigr] \\delta(g_i,g_j),\n\\label{q1}\n\\end{equation}\nwhere $\\delta(r,s)=1$ if $r=s$ and 0 otherwise and $m$ is again the number\nof edges in the network. The extra factor of $1\/2m$ in Eq.~\\eqref{q1} is\npurely conventional; it is included for compatibility with previous\ndefinitions of the modularity and plays no part in the maximization of~$Q$\nsince it is a constant for any given network. A special case of\nEq.~\\eqref{q1} was given previously by the present author\nin~\\cite{Newman04f} and independently, in slightly different form, by White\nand Smyth~\\cite{WS05}. A number of other expressions for the modularity\nhave also been presented by various authors~\\cite{NG04,GSA04,DA05} and are\nconvenient in particular applications. Also of interest is the derivation\nof the modularity given recently by Reichardt and Bornholdt~\\cite{RB06},\nwhich is quite general and provides an interesting alternative to the\nderivation presented here.\n\nReturning to the null model, how should $P_{ij}$ be chosen? The choice is\nnot entirely unconstrained. First, we consider in this paper only\nundirected networks, which implies that $P_{ij}=P_{ji}$. Second, it is\naxiomatically the case that $Q=0$ when all vertices are placed in a single\ngroup together: by definition, the number of edges within groups and the\nexpected number of such edges are both equal to~$m$ in this case. 
Setting\nall $g_i$ equal in Eq.~\\eqref{q1}, we find that $\\sum_{ij} [ A_{ij} -\nP_{ij} ] = 0$ or equivalently\n\\begin{equation}\n\\sum_{ij} P_{ij} = \\sum_{ij} A_{ij} = 2m.\n\\label{sanity}\n\\end{equation}\nThis equation says that we are restricted to null models in which the\nexpected number of edges in the entire network equals the actual number of\nedges in the original network---a natural choice if our comparison of\nnumbers of edges within groups is to have any meaning.\n\nBeyond these basic considerations, there are many possible choices of null\nmodel and several have been considered previously in the\nliterature~\\cite{NG04,RB04,MD05}. Perhaps the simplest is the standard\n(Bernoulli) random graph, in which edges appear with equal probability\n$P_{ij}=p$ between all vertex pairs. With a suitably chosen value of $p$\nthis model can be made to satisfy~\\eqref{sanity} but, as many authors have\npointed out~\\cite{Strogatz01,DM02,WS98}, the model is not a good\nrepresentation of most real-world networks. A particularly glaring aspect\nin which it errs is its degree distribution. The random graph has a\nbinomial degree distribution (or Poisson in the limit of large graph size),\nwhich is entirely unlike the right-skewed degree distributions found in\nmost real-world networks~\\cite{BA99b,ASBS00}. A much better null model\nwould be one in which the degree distribution is approximately the same as\nthat of the real-world network of interest. To satisfy this demand we will\nrestrict our attention in this paper to models in which the expected degree\nof each vertex within the model is equal to the actual degree of the\ncorresponding vertex in the real network. Noting that the expected degree\nof vertex~$i$ is given by $\\sum_j P_{ij}$, we can express this condition as\n\\begin{equation}\n\\sum_j P_{ij} = k_i.\n\\label{constraint}\n\\end{equation}\nIf this constraint is satisfied, then~\\eqref{sanity} is automatically\nsatisfied as well, since $\\sum_i k_i = 2m$.\n\nEquation~\\eqref{constraint} is a considerably more stringent constraint\nthan~\\eqref{sanity}---in most cases, for instance, it excludes the\nBernoulli random graph---but it is one that we believe makes good sense,\nand one moreover that has a variety of desirable consequences for the\ndevelopments that follow.\n\nThe simplest null model in this class, and the only one that has been\nconsidered at any length in the past, is the model in which edges are\nplaced entirely at random, subject to the constraint~\\eqref{constraint}.\nThat is, the probability that an end of a randomly chosen edge attaches to\na particular vertex~$i$ depends only on the expected degree~$k_i$ of that\nvertex, and the probabilities for the two ends of a single edge are\nindependent of one another. This implies that the expected number of edges\n$P_{ij}$ between vertices~$i$ and $j$ is the product $f(k_i)f(k_j)$ of\nseparate functions of the two degrees, where the functions must be the same\nsince $P_{ij}$ is symmetric. Then Eq.~\\eqref{constraint} implies\n\\begin{equation}\n\\sum_{j=1}^n P_{ij} = f(k_i) \\sum_{j=1}^n f(k_j) = k_i,\n\\end{equation}\nfor all~$i$ and hence $f(k_i)=Ck_i$ for some constant~$C$. 
And\nEq.~\\eqref{sanity} says that\n\\begin{equation}\n2m = \\sum_{ij} P_{ij} = C^2 \\sum_{ij} k_i k_j = (2mC)^2,\n\\end{equation}\nand hence $C=1\/\\sqrt{2m}$ and\n\\begin{equation}\nP_{ij} = {k_ik_j\\over2m}.\n\\label{defscm}\n\\end{equation}\nThis model has been studied in the past in its own right as a model of a\nnetwork, for instance by Chung and Lu~\\cite{CL02a}. It is also closely\nrelated to the \\textit{configuration model}, which has been studied widely in\nthe mathematics and physics literature~\\cite{Luczak92,MR95,NSW01,CL02a}.\nIndeed, essentially all expected properties of our model and the\nconfiguration model are identical in the limit of large network size, and\nhence Eq.~\\eqref{defscm} can be considered equivalent to the configuration\nmodel in this limit.\\footnote{The technical difference between the two\nmodels is that the configuration model is a random multigraph conditioned\non the actual degree sequence, while the model used here is a random\nmultigraph conditioned on the expected degree sequence. This makes the\nensemble of the former considerably smaller than that of the latter, but\nthe difference is analogous to the difference between canonical and grand\ncanonical ensembles in statistical mechanics and the two give the same\nanswers in the thermodynamic limit for roughly the same reason. In\nparticular, we note that the probability of an edge falling between two\nvertices $i$ and $j$ in the configuration model is also given by\nEq.~\\eqref{defscm} in the limit of large network size; for smaller\nnetworks, there are corrections of order~$1\/n$.}\n\nAlthough many of the developments outlined in this paper are true for quite\ngeneral choices of the null model used to define the modularity, the\nchoice~\\eqref{defscm} is the only one we will pursue here. It is worth\nkeeping in mind, however, that other choices are possible: Massen and\nDoye~\\cite{MD05}, for instance, have used a variant of the configuration\nmodel in which multiedges and self-edges were excluded. And further\nchoices could be useful in specific cases, such as cases where there are\nstrong correlations between the degrees of vertices~\\cite{PVV01,Newman02f}\nor where there is a high level of network transitivity~\\cite{WS98}.\n\n\n\\section{Spectral optimization of modularity}\n\\label{specmod}\nOnce we have an explicit expression for the modularity we can determine the\ncommunity structure by maximizing it over possible divisions of the\nnetwork. Unfortunately, exhaustive maximization over all possible\ndivisions is computationally intractable because there are simply too many\ndivisions, but various approximate optimization methods have proven\neffective~\\cite{Newman04a,CNM04,GSA04,GA05,MD05,RB06,DA05}. Here, we\ndevelop a matrix-based approach analogous to the spectral partitioning\nmethod of Section~\\ref{specpart}, which leads not only to a whole array of\npossible optimization algorithms but also to new insights about the nature\nand implications of community structure in networks.\n\n\n\\subsection{Leading eigenvector method}\n\\label{singlevec}\nAs before, let us consider initially the division of a network into just\ntwo communities and denote a potential such division by an index\nvector~$\\vec{s}$ with elements as in Eq.~\\eqref{defss}. 
We notice that the\nquantity $\\mbox{$\\frac12$}(s_is_j+1)$ is 1 if $i$ and $j$ belong to the same group and\n0 if they belong to different groups or, in the notation of Eq.~\\eqref{q1},\n\\begin{equation}\n\\delta(g_i,g_j) = \\mbox{$\\frac12$} (s_is_j+1).\n\\end{equation}\nThus we can write~\\eqref{q1} in the form\n\\begin{eqnarray}\nQ &=& {1\\over4m} \\sum_{ij} \\bigl[ A_{ij} - P_{ij} \\bigr] (s_is_j+1)\n \\nonumber\\\\\n &=& {1\\over4m} \\sum_{ij} \\bigl[ A_{ij} - P_{ij} \\bigr] s_is_j,\n\\end{eqnarray}\nwhere we have in the second line made use of Eq.~\\eqref{sanity}. This\nresult can conveniently be rewritten in matrix form as\n\\begin{equation}\nQ = {1\\over4m} \\vec{s}^T \\mathbf{B} \\vec{s},\n\\label{q2}\n\\end{equation}\nwhere $\\mathbf{B}$ is the real symmetric matrix having elements\n\\begin{equation}\nB_{ij} = A_{ij} - P_{ij}.\n\\label{defsbij}\n\\end{equation}\nWe call this matrix the \\textit{modularity matrix} and it plays a role in the\nmaximization of the modularity equivalent to that played by the Laplacian\nin standard spectral partitioning: Equation~\\eqref{q2} is the equivalent of\nEq.~\\eqref{defsr} for the cut size and matrix methods can thus be applied\nto the modularity that are the direct equivalents of those developed for\nspectral partitioning, as we now show.\n\nFirst, let us point out a few important properties of the modularity\nmatrix. Equations~\\eqref{degree} and~\\eqref{constraint} together imply\nthat all rows (and columns) of the modularity matrix sum to zero:\n\\begin{equation}\n\\sum_j B_{ij} = \\sum_j A_{ij} - \\sum_j P_{ij}\n = k_i - k_i = 0.\n\\end{equation}\nThis immediately implies that for any network the vector $(1,1,1,\\ldots)$\nis an eigenvector of the modularity matrix with eigenvalue zero, just as is\nthe case with the Laplacian. Unlike the Laplacian however, the eigenvalues\nof the modularity matrix are not necessarily all of one sign and in\npractice the matrix usually has both positive and negative eigenvalues.\nThis observation---and the eigenspectrum of the modularity matrix in\ngeneral---are, as we will see, closely tied to the community structure of\nthe network.\n\nWorking from Eq.~\\eqref{q2} we now proceed by direct analogy with\nSection~\\ref{specpart}. We write $\\vec{s}$ as a linear combination of the\nnormalized eigenvectors~$\\vec{u}_i$ of the modularity matrix, $\\vec{s} =\n\\sum_{i=1}^n a_i \\vec{u}_i$ with $a_i=\\vec{u}_i^T\\vec{s}$. Then\n\\begin{equation}\nQ = {1\\over4m} \\sum_i a_i^2 \\beta_i,\n\\label{q3}\n\\end{equation}\nwhere $\\beta_i$ is the eigenvalue of $\\mathbf{B}$ corresponding to the\neigenvector~$\\vec{u}_i$. We now assume that the eigenvalues are labeled in\n\\emph{decreasing} order $\\beta_1\\ge\\beta_2\\ge\\ldots\\ge\\beta_n$ and the task\nof maximizing~$Q$ is one of choosing the quantities~$a_i^2$ so as to place\nas much as possible of the weight in the sum~\\eqref{q3} in the terms\ncorresponding to the largest (most positive) eigenvalues.\n\nAs with ordinary spectral partitioning, this would be a simple task if our\nchoice of $\\vec{s}$ were unconstrained (apart from normalization): we would\njust choose $\\vec{s}$ proportional to the leading eigenvector $\\vec{u}_1$\nof the modularity matrix. But the elements of $\\vec{s}$ are restricted to\nthe values $s_i=\\pm1$, which means that $\\vec{s}$ cannot normally be chosen\nparallel to~$\\vec{u}_1$. 
Again as before, however, good approximate\nsolutions can be obtained by choosing $\\vec{s}$ to be as close to parallel\nwith $\\vec{u}_1$ as possible, which is achieved by setting\n\\begin{equation}\ns_i = \\begin{cases}\n +1 & \\quad\\text{if $u^{(1)}_i\\ge0$,} \\\\\n -1 & \\quad\\text{if $u^{(1)}_i<0$.}\n      \\end{cases}\n\\end{equation}\nThis then is our first and simplest algorithm for community detection: we\nfind the eigenvector corresponding to the most positive eigenvalue of the\nmodularity matrix and divide the network into two groups according to the\nsigns of the elements of this vector.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{dolphins.eps}\n\\end{center}\n\\caption{The dolphin social network of Lusseau~{\\it{}et~al.}~\\cite{Lusseau03a}.\nThe dashed curve represents the division into two equally sized parts found\nby a standard spectral partitioning calculation (Section~\\ref{specpart}).\nThe solid curve represents the division found by the modularity-based\nmethod of this section. And the squares and circles represent the actual\ndivision of the network observed when the dolphin community split into two\nas a result of the departure of a keystone individual. (The individual who\ndeparted is represented by the triangle.)}\n\\label{dolphins}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=12cm]{books.eps}\n\\end{center}\n\\caption{The network of political books described in the text. Vertex\ncolors range from blue to red to represent the values of the corresponding\nelements of the leading eigenvector of the modularity matrix.}\n\\label{books}\n\\end{figure*}\n\nIn practice, this method works nicely, as discussed in~\\cite{Newman06b}.\nMaking the choice~\\eqref{defscm} for our null model, we have applied it to\na variety of standard and less standard test networks and find that it does\na good job of finding community divisions. Figure~\\ref{dolphins} shows a\nrepresentative example, an animal social network assembled and studied by\nLusseau~{\\it{}et~al.}~\\cite{Lusseau03a}. The vertices in this network represent 62\nbottlenose dolphins living in Doubtful Sound, New Zealand, with social ties\nbetween dolphin pairs established by direct observation over a period of\nseveral years. This network is of particular interest because, during the\ncourse of the study, the dolphin group split into two smaller subgroups\nfollowing the departure of a key member of the population. The subgroups\nare represented by the shapes of the vertices in the figure. The dashed\ncurve denotes the division of the network into two equal-sized groups found\nby the standard spectral partitioning method. While, as expected, this\nmethod does a creditable job of dividing the network into groups of these\nparticular sizes, it is clear to the eye that this is not the natural\ncommunity division of the network and neither does it correspond to the\ndivision observed in real life. The spectral partitioning method is\nhamstrung by the requirement that we specify the sizes of the two\ncommunities; unless we know what they are in advance, blind application of\nthe method will not usually find the ``right'' division of the network.\n\nThe method based on the leading eigenvector of the modularity matrix,\nhowever, does much better. 
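In computational terms the procedure is also extremely simple. A minimal sketch of the whole calculation (in Python with numpy, using dense linear algebra that is adequate only for small networks such as those considered here; the function name is our own) might read:\n\\begin{verbatim}\nimport numpy as np\n\ndef leading_eigenvector_division(A):\n    # Split a network in two by the signs of the eigenvector of\n    # the most positive eigenvalue of the modularity matrix.\n    k = A.sum(axis=1)                  # degrees k_i\n    two_m = k.sum()                    # 2m, twice the edge count\n    B = A - np.outer(k, k) / two_m     # modularity matrix A - kk^T/2m\n    w, U = np.linalg.eigh(B)           # eigenvalues, ascending order\n    s = np.where(U[:, -1] >= 0, 1, -1) # signs of leading eigenvector\n    Q = s @ B @ s / (2 * two_m)        # modularity, Q = s^T B s / 4m\n    return s, Q\n\\end{verbatim}\nNote that \\texttt{eigh} computes the entire spectrum of the matrix, which is wasteful for large networks; Section~\\ref{implementation} describes how the single leading eigenvector can be found much more quickly.\n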
Unconstrained by the need to find groups of any\nparticular size, this method finds the division denoted by the solid line\nin the figure, which, as we see, corresponds quite closely to the split\nactually observed---all but three of the 62 dolphins are placed in the\ncorrect groups.\n\nThe magnitudes of the elements of the eigenvector $\\vec{u}_1$ also contain\nuseful information about the network, indicating, as discussed\nin~\\cite{Newman06b}, the ``strength'' with which vertices belong to the\ncommunities in which they are placed. As an example of this phenomenon\nconsider Fig.~\\ref{books}, which depicts the network of political books\nfrom Ref.~\\cite{Newman06b}. This network, compiled by V.~Krebs\n(unpublished), represents recent books on US politics, with edges\nconnecting pairs of books that are frequently purchased by the same\ncustomers of the on-line bookseller Amazon.com. Applying our method, we\nfind that the network divides as shown in the figure, with the colors of\nthe vertices representing the values of the elements of the eigenvector.\nThe two groups correspond closely to the apparent alignment of the books\naccording to left-wing and right-wing points of view~\\cite{Newman06b}, and\nare suggestively colored blue and red in the figure.\\footnote{By a fluke of\nrecent history, the colors blue and red have come to denote liberal and\nconservative points of view respectively in US politics, whereas in most\nother parts of the world the color scheme is the other way around.} The\nmost blue and most red vertices are those that, by our calculation, belong\nmost strongly to the two groups and are thus, perhaps, the ``most\nleft-wing'' and ``most right-wing'' of the books under consideration.\nThose familiar with current US politics will be unsurprised to learn that\nthe most left-wing book in this sense was the polemical \\emph{Bushwhacked}\nby Molly Ivins and Lou Dubose. Perhaps more surprising is the most\nright-wing book: \\emph{A National Party No More} by Zell\nMiller.\\footnote{Miller is a former Democratic (i.e.,~ostensibly liberal)\ngovernor and US senator for the state of Georgia. He became known in the\nlater years of his career, however, for views that aligned more closely\nwith the conservative Republicans than with the Democrats. Even so, Miller\nwas never the most conservative member of the senate, nor is his book the\nmost conservative in this study. But our measure is not based on the\ncontent of the books; it merely finds the vertices in the network that are\nmost central to their communities. The ranking of Miller's book in this\ncalculation results from its centrality within the community of\nconservative book buying. This book, while not in fact as right-wing as\nsome, apparently appeals widely and exclusively to conservatives,\npresumably because of the unusual standing of its author as a nominal\nDemocrat supporting the Republican cause.}\n\n\n\\subsection{Other eigenvectors of the modularity matrix}\n\\label{multivec}\nThe algorithm described in the previous section has two obvious\nshortcomings. First, it divides networks into only two communities, while\nreal-world networks can certainly have more than two. Second, it makes use\nonly of the leading eigenvector of the modularity matrix and ignores all\nthe others, which throws away useful information contained in those other\nvectors. 
Both of these shortcomings are remedied by the following\ngeneralization of the method.\n\nConsider the division of a network into~$c$ non-overlapping communities,\nwhere $c$ may now be greater than~2. Following Alpert and Yao~\\cite{AY95}\nand more recently White and Smyth~\\cite{WS05}, let us define an $n\\times c$\nindex matrix $\\mathbf{S}$ with one column for each community:\n$\\mathbf{S}=(\\vec{s}_1|\\vec{s}_2|\\ldots|\\vec{s}_c)$. Each column is an index\nvector now of $(0,1)$ elements (rather than $\\pm1$ as previously), such\nthat\n\\begin{equation}\nS_{ij} = \\begin{cases}\n 1 & \\quad\\text{if vertex~$i$ belongs to community~$j$,}\\\\\n 0 & \\quad\\text{otherwise.}\n \\end{cases}\n\\end{equation}\nNote that the columns of $\\mathbf{S}$ are mutually orthogonal, that the rows\neach sum to unity, and that the matrix satisfies the normalization\ncondition $\\mathrm{Tr}(\\mathbf{S}^T\\mathbf{S})=n$.\n\nObserving that the $\\delta$-symbol in Eq.~\\eqref{q1} is now given by\n\\begin{equation}\n\\delta(g_i,g_j) = \\sum_{k=1}^c S_{ik} S_{jk},\n\\end{equation}\nthe modularity for this division of the network is\n\\begin{equation}\nQ = \\sum_{i,j=1}^n\\:\\sum_{k=1}^c B_{ij} S_{ik} S_{jk}\n = \\mathrm{Tr}(\\mathbf{S}^T\\mathbf{B}\\mathbf{S}),\n\\label{matq}\n\\end{equation}\nwhere here and henceforth we suppress the leading multiplicative\nconstant~$1\/2m$ from Eq.~\\eqref{q1}, which has no effect on the position of\nthe maximum of the modularity.\n\nWriting $\\mathbf{B}=\\mathbf{U}\\mathbf{D}\\mathbf{U}^T$, where\n$\\mathbf{U}=(\\vec{u}_1|\\vec{u}_2|\\ldots)$ is the matrix of eigenvectors\nof~$\\mathbf{B}$ and $\\mathbf{D}$ is the diagonal matrix of eigenvalues\n$D_{ii}=\\beta_i$, we then find that\n\\begin{equation}\nQ = \\sum_{j=1}^n \\sum_{k=1}^c \\beta_j (\\vec{u}_j^T\\vec{s}_k)^2.\n\\label{qmulti}\n\\end{equation}\nAgain we wish to maximize this modularity, but now we have no constraint on\nthe number~$c$ of communities; we can give~$\\mathbf{S}$ as many columns as we\nlike in our effort to make~$Q$ as large as possible.\n\nIf the elements of the matrix $\\mathbf{S}$ were unconstrained apart from the\nbasic conditions on the rows and columns mentioned above, a choice of $c$\ncommunities would be equivalent to choosing $c-1$ independent, mutually\northogonal columns $\\vec{s}_1\\ldots\\vec{s}_{c-1}$. (Only $c-1$ of the\ncolumns are independent, the last being fixed by the condition that the\nrows of $\\mathbf{S}$ sum to unity.) In this case our path would be clear:\n$Q$~would be maximized by choosing the columns proportional to the leading\neigenvectors of~$\\mathbf{B}$. However, only those eigenvectors corresponding\nto positive eigenvalues can give positive contributions to the modularity,\nso the optimal modularity would be achieved by choosing exactly as many\nindependent columns of $\\mathbf{S}$ as there are positive eigenvalues, or\nequivalently by choosing the number of groups~$c$ to be 1 greater than the\nnumber of positive eigenvalues.\n\nUnfortunately, our problem has the additional constraint that the index\nvectors~$\\vec{s}_i$ have only binary $(0,1)$ elements, which means it may\nnot be possible to find as many index vectors making positive contributions\nto the modularity as the set of positive eigenvalues suggests. 
Thus the\nnumber of positive eigenvalues, plus~1, is an \\emph{upper bound} on the\nnumber of communities and again we see that there is an intimate connection\nbetween the properties of the modularity matrix and the community structure\nof the network it describes.\n\n\n\\subsection{Vector partitioning algorithm}\n\\label{vectorpart}\nIn Section~\\ref{singlevec} we maximized the modularity approximately by\nfocusing solely on the term in $Q$ proportional to the largest eigenvalue\nof~$\\mathbf{B}$. Let us now make the more general (and often better)\napproximation of keeping the leading $p$ eigenvalues, where $p$ may be\nanywhere between 1 and~$n$. Some of the eigenvalues, however, may be\nnegative, which will prove inconvenient. To get around this we rewrite\nEq.~\\eqref{matq} thus:\n\\begin{eqnarray}\nQ &=& n\\alpha + \\mathrm{Tr}[\\mathbf{S}^T\\mathbf{U}(\\mathbf{D}-\\alpha\\mathbf{I})\\mathbf{U}^T\\mathbf{S}]\n \\nonumber\\\\\n &=& n\\alpha + \\sum_{j=1}^n \\sum_{k=1}^c (\\beta_j-\\alpha)\n \\biggl[ \\sum_{i=1}^n U_{ij} S_{ik} \\biggr]^2,\n\\label{alphaq}\n\\end{eqnarray}\nwhere $\\alpha$ is a constant whose value we will choose shortly and we have\nmade use of $\\mathrm{Tr}(\\mathbf{S}^T\\mathbf{S})=n$ and the fact that $\\mathbf{U}$ is\northogonal.\n\nNow, employing an argument similar to that used for ordinary spectral\npartitioning in~\\cite{AY95}, let us define a set of \\textit{vertex\nvectors}~$\\vec{r}_i$, $i=1\\ldots n$, of dimension~$p$, such that the $j$th\ncomponent of the $i$th vector is\n\\begin{equation}\n\\bigl[ \\vec{r}_i \\bigr]_j = \\sqrt{\\beta_j-\\alpha}\\,U_{ij}.\n\\label{defsri}\n\\end{equation}\nProvided we choose $\\alpha\\le\\beta_p$, $\\vec{r}_i$~is guaranteed real for\nall~$i$. Then, dropping terms in~\\eqref{alphaq} proportional to the\nsmallest $n-p$ of the factors~$\\beta_j-\\alpha$, we have\n\\begin{eqnarray}\nQ &\\simeq& n\\alpha + \\sum_{j=1}^p \\sum_{k=1}^c\n \\biggl[ \\sum_{i=1}^n \\sqrt{\\beta_j-\\alpha}\\,U_{ij} \n S_{ik} \\biggr]^2 \\nonumber\\\\\n &=& n\\alpha + \\sum_{k=1}^c \\sum_{j=1}^p\n \\biggl[ \\sum_{i\\in G_k} \\bigl[ \\vec{r}_i \\bigr]_j \\biggr]^2 \\nonumber\\\\\n &=& n\\alpha + \\sum_{k=1}^c | \\vec{R}_k |^2,\\qquad{}\n\\label{finalq}\n\\end{eqnarray}\nwhere $G_k$ is the set of vertices comprising group~$k$ and the\n\\textit{community vectors}~$\\vec{R}_k$, $k=1\\ldots c$, are\n\\begin{equation}\n\\vec{R}_k = \\sum_{i\\in G_k} \\vec{r}_i.\n\\label{defsrk}\n\\end{equation}\n\nThe community structure problem is now equivalent to choosing a division of\nthe vertices into groups so as to maximize the magnitudes of the\nvectors~$\\vec{R}_k$. This means we need to arrange that the individual\nvertex vectors~$\\vec{r}_i$ going into each group point in approximately the\nsame direction. Problems of this type are called \\textit{vector\npartitioning} problems.\n\nThe parameter~$p$ controls the balance between the complexity of the vector\npartitioning problem and the accuracy of the approximation we make by\nkeeping only some of the eigenvalues. The calculations will be faster but\nless accurate for smaller~$p$ and slower but more accurate for larger. For\nthe special case $p=n$ where we keep all of the eigenvalues,\nEq.~\\eqref{finalq} is exact. 
In this case, we note that the vertex vectors\nhave the property\n\\begin{equation}\n\\vec{r}_i^T\\vec{r}_j = \\sum_{k=1}^n U_{ik} (\\beta_k-\\alpha) U_{jk}\n                     = B_{ij} - \\alpha\\delta_{ij}.\n\\label{rirj}\n\\end{equation}\nIt's then simple to see that Eq.~\\eqref{finalq} is trivially equivalent to\nthe fundamental definition~\\eqref{q1} of the modularity, so in the $p=n$\ncase our mapping to a vector partitioning problem gives little insight into\nthe modularity maximization problem. The real advantage of our approach\ncomes when $p<n$, in which case Eq.~\\eqref{finalq} is only an approximation\nto the modularity, but the corresponding vector partitioning problem is\ntypically far easier to solve.\n\nConsider, for example, the division of a network into just two\ncommunities, for which Eq.~\\eqref{finalq} reads\n$Q\\simeq n\\alpha+|\\vec{R}_1|^2+|\\vec{R}_2|^2$. Note first that the vertex\nvectors sum to zero, $\\sum_i\\vec{r}_i=0$, provided the uniform\nvector~$(1,1,1,\\ldots)\/\\sqrt{n}$, which is an eigenvector of~$\\mathbf{B}$\nwith eigenvalue zero, is not among the $p$ leading eigenvectors retained,\nsince every other eigenvector is orthogonal to it and so has elements\nsumming to zero. In that case $\\vec{R}_2=-\\vec{R}_1$ and maximizing the\nmodularity is equivalent to maximizing the magnitude of either community\nvector alone. Note second that if a vertex~$i$ belongs to a group~$k$ for\nwhich $\\vec{R}_k\\cdot\\vec{r}_i<0$, then we can increase~$|\\vec{R}_k|^2$ by\nsimply removing vertex~$i$ from that group, since\n\\begin{equation}\n|\\vec{R}_k-\\vec{r}_i|^2 - |\\vec{R}_k|^2\n  = |\\vec{r}_i|^2 - 2\\vec{R}_k\\cdot\\vec{r}_i > 0.\n\\end{equation}\nSimilarly adding vertex~$i$ to a community for which\n$\\vec{R}_k\\cdot\\vec{r}_i>0$ also increases~$|\\vec{R}_k|^2$. Hence, we can\nalways increase the modularity by moving vertices until they are in groups\nsuch that $\\vec{R}_k\\cdot\\vec{r}_i>0$.\n\nTaken together, these results imply that possible candidates for the\noptimal division of a network into two groups are fully specified by just\nthe \\emph{direction} of the single vector~$\\vec{R}_1$. Once we have this\ndirection, we know that the vertices divide according to whether their\nprojection along this direction is positive or negative. Alternatively, we\ncan consider the direction of $\\vec{R}_1$ to define a perpendicular plane\nthrough the origin in the $p$-dimensional vector space occupied by the\nvertex vectors~$\\vec{r}_i$. The vertices then divide according to which\nside of this plane their vectors fall on. Finding the maximum of the\nmodularity is then a matter of choosing this bisecting plane to maximize\nthe magnitude of~$\\vec{R}_1$.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=8cm]{zachvecs.eps}\n\\end{center}\n\\caption{A plot of the vertex vectors~$\\vec{r}_i$ for a small network with\n$p=2$. The dotted line represents one of the $n$ possible topologically\ndistinct cut planes.}\n\\label{zachvecs}\n\\end{figure}\n\nIn general, this still leaves us with a moderately difficult optimization\nproblem: the number of bisecting planes that give distinct partitions of\nthe vertex vectors is large and difficult to enumerate as the dimension~$p$\nof the space becomes large. For the case $p=2$, however, a relatively\nsimple solution exists. Consider Fig.~\\ref{zachvecs}, which shows a\ntypical example of the vertex vectors.\\footnote{In fact, this figure shows\nthe vectors for the ``karate club'' network used previously as an example\nin Ref.~\\cite{Newman06b}.} In this two-dimensional case, there are only\n$n$ topologically distinct choices of the bisecting plane (actually just a\nline in this case, denoted by the dotted line in the figure), and\nfurthermore the divisions of the vertices that these choices represent\nchange by only a single vertex at a time as we rotate the plane about the\norigin. This makes it computationally simple to perform the rotation, keep\ntrack of the value of~$\\vec{R}_1$, and so find the maximum of the\nmodularity within this approximation. Evaluating the magnitude of\n$\\vec{R}_1$ involves a constant number of operations each time we move the\nline, and hence the total work involved in finding the maximum is $\\mathrm{O}(n)$\nfor all $n$ possible positions, which is the same as the $\\mathrm{O}(n)$\noperations needed to separate the vertices in the $p=1$ case.\n\nFor $p>2$, we do not know of an efficient method to enumerate exhaustively\nall topologically distinct bisecting planes in the vertex vector space, and\nhence we have to turn to approximate methods for solving the vector\npartitioning problem. 
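Any such heuristic requires only two elementary operations: construction of the vertex vectors from the spectrum of the modularity matrix, Eq.~\\eqref{defsri}, and evaluation of the approximate modularity, Eq.~\\eqref{finalq}, for a candidate grouping. A minimal sketch of both operations (in Python with numpy; the function names are our own, and for $\\alpha$ we anticipate the optimal choice derived in the following subsection) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef vertex_vectors(B, p):\n    # Vertex vectors r_i built from the p leading eigenvalues of\n    # the modularity matrix B; alpha is set to the mean of the\n    # discarded eigenvalues.\n    n = B.shape[0]\n    w, U = np.linalg.eigh(B)       # eigenvalues, ascending order\n    w, U = w[::-1], U[:, ::-1]     # reorder: beta_1 >= ... >= beta_n\n    alpha = w[p:].mean() if p < n else w[-1]\n    r = np.sqrt(w[:p] - alpha) * U[:, :p]  # row i holds r_i\n    return r, alpha\n\ndef approximate_modularity(r, alpha, groups):\n    # Q ~= n*alpha + sum_k |R_k|^2, with R_k the sum of the vertex\n    # vectors in group k (constant 1/2m suppressed, as in the text).\n    Q = len(r) * alpha\n    for G in groups:\n        R_k = r[list(G)].sum(axis=0)   # community vector R_k\n        Q += R_k @ R_k\n    return Q\n\\end{verbatim}\nFor $p=2$, for instance, one could evaluate \\texttt{approximate\\_modularity} at each of the $n$ topologically distinct positions of the rotating line described above and keep the best division found (the constant-time update described above is then an easy refinement).\n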
A number of reasonable heuristics have been\ndescribed in the past. We have found acceptable though not spectacular\nresults, for instance, with the ``MELO'' algorithm of~\\cite{AY95}, which is\nessentially a greedy algorithm in which a grouping of vectors is built up\nby repeatedly adding to it the vector that makes the largest contribution\nto~$Q$.\n\n\n\\subsection{Choice of $\\alpha$}\nBefore implementing any of these methods, a crucial question we must answer\nis what value we should choose for the parameter~$\\alpha$. By tuning this\nvalue we can improve the accuracy of our approximation to~$Q$ as follows.\n\nBy dropping the $n-p$ most negative eigenvalues, we are in effect making an\napproximation to the matrix~$\\mathbf{B}-\\alpha\\mathbf{I}$ in which it takes not\nits full value $\\mathbf{U}(\\mathbf{D}-\\alpha\\mathbf{I})\\mathbf{U}^T$, but an\napproximate value $\\mathbf{U}(\\mathbf{D}'-\\alpha\\mathbf{I}')\\mathbf{U}^T$, where\n$\\mathbf{D}'$ and $\\mathbf{I}'$ are the matrices $\\mathbf{D}$ and $\\mathbf{I}$ with the\nlast $n-p$ diagonal elements set to zero. We can quantify the error this\nintroduces by calculating the sum of the squares of the elements of the\ndifference between the two matrices, which is given by\n\\begin{eqnarray}\n\\chi^2 &=& \\mathrm{Tr}[\\mathbf{U}(\\mathbf{D}-\\alpha\\mathbf{I})\\mathbf{U}^T - \n \\mathbf{U}(\\mathbf{D}'-\\alpha\\mathbf{I}')\\mathbf{U}^T]^2\n \\nonumber\\\\\n &=& \\mathrm{Tr}[(\\mathbf{D}-\\alpha\\mathbf{I})-(\\mathbf{D}'-\\alpha\\mathbf{I}')]^2\n = \\!\\!\\sum_{i=p+1}^n (\\beta_i-\\alpha)^2,\\qquad{}\n\\end{eqnarray}\nwhere in the second line we have made use of the fact that $\\mathbf{U}$ is\northogonal.\n\nMinimizing this error by setting the derivative $\\d\\chi^2\/\\d\\alpha=0$, we\nfind\n\\begin{equation}\n\\alpha = {1\\over n-p} \\sum_{i=p+1}^n \\beta_i.\n\\end{equation}\nIn other words, the minimal mean square error introduced by our\napproximation is achieved by setting $\\alpha$ equal to the mean of the\neigenvalues that have been dropped. The only exception is when $p=n$,\nwhere the choice of $\\alpha$ makes no difference since no approximation is\nbeing made anyway. In our calculations we have used $\\alpha=\\beta_n$ in\nthis case, but any choice $\\alpha\\le\\beta_n$ would work equally well.\n\n\n\\section{Implementation}\n\\label{implementation}\nImplementation of the methods described in Section~\\ref{specmod} is\nstraightforward. The leading-eigenvector method of Section~\\ref{singlevec}\nrequires us to find only the single eigenvector of the modularity\nmatrix~$\\mathbf{B}$ corresponding to the most positive eigenvalue. This is\nmost efficiently achieved by the direct multiplication or power method.\nStarting with a trial vector, we repeatedly multiply by the modularity\nmatrix and---unless we are unlucky enough to have chosen another\neigenvector as our trial vector---the result will converge to the\neigenvector of the matrix having the eigenvalue of largest magnitude. In\nsome cases this eigenvalue will be the most positive one, in which case our\ncalculation ends at this point. In other cases the eigenvalue of largest\nmagnitude may be negative. If this happens then, denoting this eigenvalue\nby $\\beta_n$, we calculate the shifted matrix $\\mathbf{B}-\\beta_n\\mathbf{I}$,\nwhich has eigenvalues $\\beta_i-\\beta_n$ (necessarily all nonnegative) and\nthe same eigenvectors as the modularity matrix itself. 
Then we repeat the\npower-method calculation for this new matrix and this time the eigenvalue\nof largest magnitude must be $\\beta_1-\\beta_n$ and the corresponding\neigenvector is the one we are looking for.\n\nFor the method of Section~\\ref{multivec}, we require either all of the\neigenvectors of the modularity matrix or a subset corresponding to the $p$\nmost positive eigenvalues. These are most conveniently calculated using\nthe Lanczos method or one of its variants~\\cite{Meyer00}. The fundamental\nmatrix operation at the heart of the Lanczos method is again multiplication\nof the matrix~$\\mathbf{B}$ into a trial vector.\n\nEfficient implementation of any of these methods thus rests upon our\nability to rapidly multiply an arbitrary vector~$\\vec{x}$ by the modularity\nmatrix. This presents a problem because the modularity matrix is dense,\nand hence it appears that matrix multiplications will demand $\\mathrm{O}(n^2)$\ntime each, where $n$ is, as before, the number of vertices in the network\n(which is also the size of the matrix). By contrast, the equivalent\ncalculation in standard spectral partitioning is much faster because the\nLaplacian matrix is sparse, having only $\\mathrm{O}(n+m)$ nonzero elements, where\n$m$ is the number of edges in the network.\n\nFor the standard choice, Eq.~\\eqref{defscm}, of null model used to define\nthe modularity, however, it turns out that we can multiply by the\nmodularity matrix just as fast as by the Laplacian by making use of the\nspecial structure of the matrix. In vector notation the modularity matrix\ncan in this case be written\n\\begin{equation}\n\\mathbf{B} = \\mathbf{A} - {\\vec{k}\\vec{k}^T\\over2m},\n\\end{equation}\nwhere $\\mathbf{A}$ is the adjacency matrix, Eq.~\\eqref{adjacency}, and\n$\\vec{k}$ is the $n$-element vector whose elements are the degrees~$k_i$ of\nthe vertices. Then\n\\begin{equation}\n\\mathbf{B}\\vec{x} = \\mathbf{A}\\vec{x} - {\\vec{k}^T\\vec{x}\\over2m}\\,\\vec{k}.\n\\end{equation}\nSince the adjacency matrix is sparse, having only $\\mathrm{O}(m)$ elements, the\nfirst term can be evaluated in $\\mathrm{O}(m)$ time, while the second requires us\nto evaluate the inner product $\\vec{k}^T\\vec{x}$ only once and then\nmultiply it into each element of $\\vec{k}$ in turn, both operations taking\n$\\mathrm{O}(n)$ time. Thus the entire matrix multiplication can be completed in\n$\\mathrm{O}(m+n)$ time, just as with the normal Laplacian matrix. If a shift of\nthe eigenvalues is required to find the most positive one, as described\nabove, then there is an additional term $-\\beta_n\\mathbf{I}$ in the matrix,\nbut this also can be multiplied into an arbitrary vector in $\\mathrm{O}(n)$ time,\nso again the entire operation can be completed in $\\mathrm{O}(m+n)$ time.\n\nTypically $\\mathrm{O}(n)$ matrix multiplications are required for either the\npower method or the Lanczos method to converge to the required eigenvalues,\nand hence the calculation takes $\\mathrm{O}((m+n)n)$ time overall. In the common\ncase in which the network is sparse and $m\\propto n$, this simplifies to\n$\\mathrm{O}(n^2)$.\n\nWhile this is, essentially, the end of the calculation for the power\nmethod, the Lanczos method unfortunately demands more effort to find the\neigenvectors themselves. 
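\nConcretely, the multiplication and the power-method loop might look as follows---a minimal sketch (ours), assuming a SciPy sparse or NumPy adjacency matrix, with a fixed iteration count standing in for a proper convergence test:\n\\begin{verbatim}\nimport numpy as np\n\ndef leading_eigenpair(A, iters=1000):\n    # Most positive eigenvalue\/vector of B = A - k k^T \/ 2m by the\n    # power method, never forming the dense matrix B; each\n    # multiplication costs only O(m + n).\n    n = A.shape[0]\n    k = np.asarray(A.sum(axis=1)).ravel()\n    two_m = k.sum()\n\n    def B_mul(x, shift=0.0):            # B x = A x - (k.x \/ 2m) k\n        return A @ x - (k @ x \/ two_m) * k - shift * x\n\n    def power(shift):\n        x = np.random.default_rng(1).standard_normal(n)\n        for _ in range(iters):\n            y = B_mul(x, shift)\n            x = y \/ np.linalg.norm(y)\n        return x @ B_mul(x), x          # Rayleigh quotient w.r.t. B\n\n    lam, x = power(0.0)                 # eigenvalue of largest magnitude\n    if lam < 0:                         # it was the most negative, beta_n:\n        lam, x = power(lam)             # iterate B - beta_n I instead\n    return lam, x\n\\end{verbatim}\nExtracting the eigenvectors, rather than performing the multiplications, is where the real cost lies.\n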
In fact, it takes $\\mathrm{O}(n^3)$ time to find all\neigenvectors of a matrix using the Lanczos method, which is quite slow.\nThere are on the other hand variants of the Lanczos method (as well as\nother methods entirely) that can find just a few leading eigenvectors\nfaster than this, which makes calculations that focus on a fixed small\nnumber of eigenvectors preferable to ones that use all eigenvectors. In\nour calculations we have primarily concentrated on algorithms that use only\none or two eigenvectors, which typically run in time~$\\mathrm{O}(n^2)$ on a\nsparse network.\n\n\n\\subsection{Refinement of the modularity}\n\\label{refinement}\nThe methods for spectral optimization of the modularity described in\nSection~\\ref{specmod} are only approximate. Indeed, the problem of\nmodularity optimization is formally equivalent to an instance of the\nNP-hard MAX-CUT problem, so it is almost certainly the case that no\npolynomial-time algorithm exists that will find the modularity optimum in\nall cases. Given that the algorithms we have described run in polynomial\ntime, it follows that they must fail to find the optimum in some cases, and\nhence that there is room for improvement of the results.\n\nIn standard graph partitioning applications it is common to use a spectral\napproach based on the graph Laplacian as a first pass at the problem of\ndividing a network. The spectral method gives a broad picture of the\ngeneral shape the division should take, but there is often room for\nimprovement. Typically another algorithm, such as the Kernighan--Lin\nalgorithm~\\cite{KL70}, which swaps vertex pairs between groups in an effort\nto reduce the cut size, is used to refine this first pass, and the\nresulting two-stage joint strategy gives considerably better results than\neither stage on its own.\n\nWe have found that a similar joint strategy gives good results in the\npresent case also: the divisions found with our spectral approach can be\nimproved in small but significant ways by adding a refinement step akin to\nthe Kernighan--Lin algorithm. As described in~\\cite{Newman06b}, we take an\ninitial division into two communities derived, for instance, from the\nleading-eigenvector method of Section~\\ref{singlevec} and move single\nvertices between the communities so as to increase the value of the\nmodularity as much as possible, with the constraint that each vertex can be\nmoved only once. Repeating the whole process iteratively until no further\nimprovement is obtained, we find a final value of the modularity which can\nimprove on that derived from the spectral method alone by tens of percent\nin some cases, and smaller but still significant amounts in other cases.\nAlthough the absolute gains in modularity are not always large, we find\nthat this refinement step is very much worth the effort it entails, raising\nthe typical level of performance of our methods from the merely good to the\nexcellent, when compared with other algorithms. Specific examples are\ngiven in~\\cite{Newman06b}.\n\nIt is certainly possible that other refinement strategies might also give\ngood results. 
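\nFor concreteness, the vertex-moving refinement described above can be sketched as follows (ours; NumPy and a dense modularity matrix assumed, the two-way division being encoded as an index vector with elements $s_i=\\pm1$ and normalization constants again omitted):\n\\begin{verbatim}\nimport numpy as np\n\ndef refine_two_way(B, s, max_passes=20):\n    # Each pass moves every vertex exactly once, at each step moving the\n    # vertex whose move most increases Q = s^T B s; the best state seen\n    # during the pass is kept, and passes repeat until no improvement.\n    for _ in range(max_passes):\n        s_pass, free = s.copy(), np.ones(len(s), bool)\n        best_Q, best_s = s @ B @ s, s.copy()\n        for _ in range(len(s)):\n            # gain from flipping vertex i: -4 s_i sum_{j != i} B_ij s_j\n            gain = -4 * s_pass * (B @ s_pass - np.diag(B) * s_pass)\n            gain[~free] = -np.inf\n            i = int(np.argmax(gain))\n            s_pass[i] *= -1\n            free[i] = False\n            Q = s_pass @ B @ s_pass\n            if Q > best_Q:\n                best_Q, best_s = Q, s_pass.copy()\n        if np.array_equal(best_s, s):   # pass produced no improvement\n            break\n        s = best_s\n    return s\n\\end{verbatim}\nOther refinement strategies can be grafted onto the same skeleton.\n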
For instance, the ``extremal optimization'' method explored\nby Duch and Arenas~\\cite{DA05} for optimizing modularity could be employed\nas a refinement method by using the output of our spectral division as its\nstarting point, rather than the random configuration used as a starting\npoint by Duch and Arenas.\n\n\n\\section{Dividing networks into more than two communities}\n\\label{multiway}\nSo far we have discussed primarily methods for dividing networks into two\ncommunities. Many of the networks we are concerned with, however, have\nmore than two communities. How can we generalize our methods to this case?\nThe simplest approach is repeated division into two. That is, we use one\nof the methods described above to divide our network in two and then divide\nthose parts in two again, and so forth. This approach was described\nbriefly in Ref.~\\cite{Newman06b}.\n\nIt is important to appreciate that upon further subdividing a community\nwithin a network into two parts, the additional contribution~$\\Delta Q$ to\nthe modularity made by this subdivision is not given correctly if we apply\nthe algorithms of Section~\\ref{specmod} to that community alone. That is,\nwe cannot simply write down the modularity matrix for the community in\nquestion considered as a separate graph in its own right and examine the\nleading eigenvector or eigenvectors. Instead we proceed as follows. Let\nus denote the set of vertices in the community to be divided by~$G$ and let\n$n_G$ be the number of vertices within this community. Now let $\\mathbf{S}$\nbe an $n_G\\times c$ index matrix denoting the subdivision of the community\ninto $c$ subcommunities such that\n\\begin{equation}\nS_{ij} = \\begin{cases}\n 1 & \\quad\\text{if vertex~$i$ belongs to subcommunity~$j$,}\\\\\n 0 & \\quad\\text{otherwise.}\n \\end{cases}\n\\end{equation}\nThen, following Eq.~\\eqref{matq}, $\\Delta Q$~is the difference between the\nmodularities of the network before and after subdivision of the community\nthus:\n\\begin{eqnarray}\n\\Delta Q &=& \\sum_{i,j\\in G}\\:\\sum_{k=1}^c B_{ij} S_{ik} S_{jk}\n - \\sum_{i,j\\in G} B_{ij} \\nonumber\\\\\n &=& \\sum_{k=1}^c\\:\\sum_{i,j\\in G} \n \\biggl [ B_{ij} - \\delta_{ij} \\sum_{l\\in G} B_{il} \\biggr]\n S_{ik} S_{jk} \\nonumber\\\\\n &=& \\mathrm{Tr}(\\mathbf{S}^T\\mathbf{B}^{(G)}\\mathbf{S}),\n\\label{deltaq}\n\\end{eqnarray}\nwhere $\\mathbf{B}^{(G)}$ is an $n_G\\times n_G$ generalized modularity matrix\nwith elements indexed by the vertex labels $i,j$ of the vertices within\ngroup~$G$ and having values\n\\begin{equation}\nB^{(G)}_{ij} = B_{ij} - \\delta_{ij} \\sum_{l\\in G} B_{il},\n\\label{genmod}\n\\end{equation}\nwith $B_{ij}$ defined by Eq.~\\eqref{defsbij}.\n\nEquation~\\eqref{deltaq} has the same form as our previous expression,\nEq.~\\eqref{matq}, for the modularity of the full network, and, following\nthe same argument as for Eqs.~\\eqref{alphaq} to~\\eqref{defsrk}, we can then\nshow that optimization of the additional modularity contribution from\nsubdivision of a community can also be expressed as a vector partitioning\nproblem, just as before. We can approximate this vector partitioning\nproblem using only the leading eigenvector as in Section~\\ref{singlevec} or\nusing more than one vector as in Section~\\ref{multivec}. 
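\nIn code, the subdivision step therefore differs from the full-network calculation only in the construction of the matrix. A minimal sketch (ours; NumPy assumed, with a dense eigensolver standing in for the power method of Section~\\ref{implementation} for brevity):\n\\begin{verbatim}\nimport numpy as np\n\ndef subdivide(B, G):\n    # Split the community with vertex list G in two using the leading\n    # eigenvector of the generalized modularity matrix, Eq. (genmod),\n    # and return (delta_Q, s) with s_i = +\/-1 over G.\n    BG = B[np.ix_(G, G)].copy()\n    row = BG.sum(axis=1)                # sum_{l in G} B_il\n    BG[np.diag_indices_from(BG)] -= row # B^(G), Eq. (genmod)\n    beta, U = np.linalg.eigh(BG)\n    s = np.where(U[:, -1] >= 0, 1, -1)  # signs of leading eigenvector\n    # the rows of B^(G) sum to zero, so for a two-way split\n    # Tr(S^T B^(G) S) reduces to s^T B^(G) s \/ 2\n    return 0.5 * (s @ BG @ s), s\n\\end{verbatim}\n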
The resulting\ndivisions can also be optimized using a ``refinement'' stage as in\nSection~\\ref{refinement}, to find the best possible modularity at each\nstep.\n\nUsing this method we can repeatedly subdivide communities to partition\nnetworks into smaller and smaller groups of vertices and in principle this\nprocess could continue until the network is reduced to $n$ communities\ncontaining only a single vertex each. Normally, however, we stop before\nthis point is reached because there is no point in subdividing a community\nany further if no subdivision exists that will increase the modularity of\nthe network as a whole. The appropriate strategy is to calculate\nexplicitly the modularity contribution $\\Delta Q$ at each step in the\nsubdivision of a network, and to decline to subdivide any community for\nwhich the value of $\\Delta Q$ is not positive. Communities with the\nproperty of having no subdivision that gives a positive contribution to the\nmodularity of the network as a whole we call \\textit{indivisible}; the\nstrategy described here is equivalent to subdividing communities repeatedly\nuntil every remaining community is indivisible.\n\nThis strategy appears to work very well in practice. It is, however, not\nperfect (a conclusion we could draw under any circumstances from the fact\nthat it runs in polynomial time---see above). In particular, it is certain\nthat repeated subdivision of a network into two parts will in some cases\nfail to find the optimal modularity configuration. Consider, for example,\nthe (rather trivial) network shown in Fig.~\\ref{line8}, which consists of\neight vertices connected together in a line. By exhaustive enumeration we\ncan show that, among possible divisions of this network into only two\nparts, the division indicated in Fig.~\\ref{line8}a, right down the middle\nof the line, is the one that gives the highest modularity. The optimum\nmodularity over divisions into any number of parts, however, is achieved\nfor the three-way division shown in Fig.~\\ref{line8}b. It is clear that if\nwe first split the network as shown in Fig.~\\ref{line8}a, no subsequent\nsubdivision of the network can ever find the configuration of\nFig.~\\ref{line8}b, and hence our algorithm will fail in this case to find\nthe global optimum. Nonetheless, the algorithm does appear to find\ndivisions that are close to optimal in most cases we have investigated.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=7cm]{line8.eps}\n\\end{center}\n\\caption{Division by the method of optimal modularity of a simple network\nconsisting of eight vertices in a line. (a)~The optimal division into just\ntwo parts separates the network symmetrically into two groups of four\nvertices each. (b)~The optimal division into any number of parts divides\nthe network into three groups as shown here.}\n\\label{line8}\n\\end{figure}\n\nRepeated subdivision is the approach we have taken to multi-community\ndivisions in our own work, but it is not the only possible approach. In\nsome respects a more satisfying approach would be to work directly from the\nexpression~\\eqref{finalq} for the modularity of the complete network with a\nmulti-community division. Unfortunately, maximizing~\\eqref{finalq}\nrequires us to perform a vector partitioning into more than two groups, a\nproblem about whose solution rather little is known. Some general\nobservations are, however, worth making. 
First, we note that the community\nvectors~$\\vec{R}_k$ in the optimal solution of a vector partitioning\nproblem always have directions more than $90^\\circ$ apart. To demonstrate\nthis, we note that the change in the contribution to Eq.~\\eqref{finalq} if\nwe amalgamate two communities into one is\n\\begin{equation}\n\\bigl| \\vec{R}_1 + \\vec{R}_2 \\bigr|^2 - \\bigl( \\bigl| \\vec{R}_1 \\bigr|^2\n + \\bigl| \\vec{R}_2 \\bigr|^2 \\bigr)\n = 2\\vec{R}_1\\cdot\\vec{R}_2,\n\\end{equation}\nwhich is positive if the directions of $\\vec{R}_1$ and $\\vec{R}_2$ are less\nthan $90^\\circ$ apart. Thus we can always increase the modularity by\namalgamating a pair of communities unless their vectors are more than\n$90^\\circ$ apart.\n\nBut the maximum number of directions more than $90^\\circ$ apart that can\nexist in a $p$-dimensional space is $p+1$, which means that $p+1$ is also\nthe maximum number of communities we can find by optimizing a\n$p$-dimensional spectral approximation to the modularity. Thus if we use\nonly a single eigenvector we will find at most two groups; if we use two we\nwill find at most three groups, and so forth. So the choice of how many\neigenvectors~$p$ to work with is determined to some extent by the network:\nif the overall optimum modularity is for a division into $c$ groups, we\nwill certainly fail to find that optimum if we use less than $c-1$\neigenvectors.\n\nSecond, we note that while true multi-way vector partitioning may present\nproblems, simple heuristics that group the vertex vectors together can\nstill produce good results. For instance, White and Smyth~\\cite{WS05} have\napplied the standard technique of $k$-means clustering based on group\ncentroids to a different but related optimization problem and have found\ngood results. It is possible this approach would work for our problem also\nif applied to the centroids of the end-points of the vertex vectors. It is\nalso possible that an intrinsically vector-based variant of $k$-means\nclustering could be created to tackle the vector partitioning problem\ndirectly, although we are not aware of such an algorithm in the current\nvector partitioning literature.\n\n\n\\section{Negative eigenvalues and bipartite structure}\n\\label{negative}\nIt is clear from the developments of the previous sections that there is\nuseful information about the structure of a network stored in the\neigenvectors corresponding to the most positive eigenvalues of the\nmodularity matrix. It is natural to ask whether there is also useful\ninformation in the eigenvectors corresponding to the negative eigenvalues\nand indeed it turns out that there is: the negative eigenvalues and their\neigenvectors contain information about a nontrivial type of\n``anti-community structure'' that is of substantial interest in some\ninstances.\n\nConsider again the case in which we divide our network into just two groups\nand look once more at Eq.~\\eqref{q3}, which gives the modularity in this\ncase. Suppose now that instead of maximizing the terms involving the most\npositive eigenvalues, we maximize the terms involving the most negative\nones. 
As we can easily see from the equation, this is equivalent to\n\\emph{minimizing} rather than maximizing the modularity.\n\nWhat effect will this have on the divisions of the network that we find?\nLarge negative values of the modularity correspond to divisions in which\nthe number of edges within groups is \\emph{smaller} than expected on the\nbasis of chance, and the number of edges between groups correspondingly\nbigger. Figure~\\ref{bipartex} shows a sketch of a network having this\nproperty. Such networks are said to be \\textit{bipartite} if there are no\nedges at all within groups, or approximately bipartite if there are a few\nwithin-group edges as in the figure. Bipartite or approximately bipartite\ngraphs have attracted some attention in the recent literature. For\ninstance, Kleinberg~\\cite{Kleinberg99a} has suggested that small bipartite\nsubgraphs in the web graph may be a signature of so-called hub\/authority\nstructure within web communities, while Holme~{\\it{}et~al.}~\\cite{HLEK03} and\nEstrada and Rodr{\\'\\i}guez-Vel\\'azquez~\\cite{ER05} have independently\ndevised measures of bipartitivity and used them to analyze a variety of\nreal-world networks.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=6cm]{bipartex.eps}\n\\end{center}\n\\caption{A small example of an approximately bipartite network. The\nnetwork is composed of two groups of vertices and most edges run between\nvertices in different groups.}\n\\label{bipartex}\n\\end{figure}\n\nThe arguments above suggest that we should be able to detect bipartite or\napproximately bipartite structure in networks by looking for divisions of\nthe vertices that minimize modularity. In the simplest approximation, we\ncan do this by focusing once more on just a single term in Eq.~\\eqref{q3},\nthat corresponding to the most negative eigenvalue~$\\beta_n$, and\nmaximizing the coefficient of this eigenvalue by choosing $s_i=-1$ for\nvertices having a negative element in the corresponding eigenvector and\n$s_i=+1$ for the others. In other words, we can achieve an approximation\nto the minimum modularity division of the network by dividing vertices\naccording to the signs of the elements in the eigenvector~$\\vec{u}_n$, and\nthis division should correspond roughly to the most nearly bipartite\ndivision. We can also append a ``refinement'' step to the calculation,\nsimilar to that described in Section~\\ref{refinement}, in which, starting\nfrom the division given by the eigenvector, we move single vertices between\ngroups in an effort to minimize the modularity further.\n\nAs an example of this type of calculation, consider Fig.~\\ref{words}, which\nshows a network representing juxtapositions of words in a corpus of English\ntext, in this case the novel \\textit{David Copperfield} by Charles Dickens.\nTo construct this network, we have taken the 60 most commonly occurring\nnouns in the novel and the 60 most commonly occurring adjectives. (The\nlimit on the number of words is imposed solely to permit a clear\nvisualization; there is no reason in principle why the analysis could not\nbe extended to a much larger network.) The vertices in the network\nrepresent words and an edge connects any two words that appear adjacent to\none another at any point in the book. 
Eight of the words never appear\nadjacent to any of the others and are excluded from the network, leaving a\ntotal of 112 vertices.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{words.eps}\n\\end{center}\n\\caption{(a)~The network of commonly occurring English adjectives (circles)\nand nouns (squares) described in the text. (b)~The same network redrawn\nwith the nodes grouped so as to minimize the modularity of the grouping.\nThe network is now revealed to be approximately bipartite, with one group\nconsisting almost entirely of adjectives and the other of nouns.}\n\\label{words}\n\\end{figure}\n\nTypically adjectives occur next to nouns in English. It is possible for\nadjectives to occur next to other adjectives (``the big green bus'') or for\nnouns to occur next to other nouns (``the big tour bus''), but these\njuxtapositions are less common. Thus we would expect our network to be\napproximately bipartite in the sense described above: edges should run\nprimarily between vertices representing different types of words, with\nfewer edges between vertices of the same type. One would be hard pressed\nto tell this from Fig.~\\ref{words}a, however: the standard layout algorithm\nused to draw the network completely fails to reveal the structure present.\nFigure~\\ref{words}b shows what happens when we divide the vertices by\nminimizing the modularity using the method described above---a first\ndivision according to the elements of the eigenvector with the most\nnegative eigenvalue, followed by a refinement stage to reduce the\nmodularity still further. It is now clear that the network is in fact\nnearly bipartite, and the two groups found by the algorithm correspond\nclosely to the known groups of adjectives and nouns, as indicated by the\nshapes of the vertices. 83\\% of the words are classified correctly by this\nsimple calculation.\n\nDivisions with large negative modularity are---like those with large\npositive modularity---not limited to having only two groups. If we are\ninterested purely in minimizing the modularity we can in principle use as\nmany groups as we like to achieve that goal. A division with $k$ groups is\ncalled $k$-partite if edges run only between groups and approximately\n$k$-partite if there are a few within-group edges. One might imagine that\none could find $k$-partite structure in a network just by looking for\ndivisions that minimize the number of within-group edges, but brief\nreflection persuades us that the optimum solution to this search problem is\nalways to put each vertex in a group on its own, which automatically means\nthat all edges lie between groups and none within groups. As with the\nordinary community structure problem, the way to avoid this trivial\nsolution is to concentrate not on the total number of edges within groups\nbut on the difference between this number and the expected number of such\nedges. Thus, once again, we are led naturally to the consideration of\nmodularity as a measure of the best way to divide a network.\n\nOne way to minimize modularity over divisions into an arbitrary number of\ngroups is to proceed by analogy with our earlier calculations of community\nstructure and repeatedly divide the network in two using the\nsingle-eigenvector method above. Just as before, Eq.~\\eqref{deltaq} gives\nthe additional change $\\Delta Q$ in the modularity upon subdivision of a\ngroup in a network, and the division process ends when the algorithm fails\nto find any subdivision with $\\Delta Q<0$. 
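\nIn its simplest form the basic step is just the mirror image of the community-finding one. A minimal sketch (ours; NumPy assumed), to which a refinement pass moving single vertices so as to \\emph{decrease} the modularity can be appended exactly as in Section~\\ref{refinement}:\n\\begin{verbatim}\nimport numpy as np\n\ndef most_bipartite_split(B):\n    # Approximate minimum-modularity two-way division: divide the\n    # vertices according to the signs of the elements of the\n    # eigenvector belonging to the most negative eigenvalue.\n    beta, U = np.linalg.eigh(B)          # ascending order\n    return np.where(U[:, 0] >= 0, 1, -1) # column 0: most negative\n\\end{verbatim}\n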
Alternatively, one can derive\nthe analog of Eq.~\\eqref{finalq} and thereby map the minimization of the\nmodularity onto a vector partitioning problem. The appropriate definition\nof the vertex vectors turns out to be\n\\begin{equation}\n\\bigl[ \\vec{r}_i \\bigr]_j = \\sqrt{\\alpha-\\beta_{n+1-j}}\\,U_{i,n+1-j},\n\\label{defsri2}\n\\end{equation}\nwhere $\\alpha$ is a constant chosen sufficiently large as to make\n$\\alpha-\\beta_j\\ge0$ for all terms in the sum that we keep. Then the\nmodularity is given by\n\\begin{equation}\nQ = n\\alpha - \\sum_{k=1}^c | \\vec{R}_k |^2,\n\\end{equation}\nwith the community vectors~$\\vec{R}_k$ defined according to\nEq.~\\eqref{defsrk}.\n\n\n\\section{Other uses of the modularity matrix}\n\\label{otheruses}\nOne of the striking properties of the Laplacian matrix is that, as\ndescribed in Section~\\ref{specpart}, it arises repeatedly in various\ndifferent areas of graph theory. It is natural to ask whether the\nmodularity matrix also crops up in other areas. In this section we\ndescribe briefly two other situations in which the modularity matrix\nappears, although neither has been viewed in terms of this matrix in the\npast, as far as we are aware.\n\n\n\\subsection{Network correlations}\nFor our first example, suppose we have a quantity~$x_i$ defined on the\nvertices~$i=1\\ldots n$ of a network, such as degrees of vertices, ages of\npeople in a social network, numbers of hits on web pages, and so forth.\nAnd let $\\vec{x}$ be the $n$-component vector whose elements are the~$x_i$.\nThen consider the quantity\n\\begin{equation}\nr = {1\\over2m} \\vec{x}^T\\mathbf{B}\\vec{x},\n\\label{assortativity}\n\\end{equation}\nwhere here we will take the same definition~\\eqref{defscm} for our null\nmodel that we have been using throughout. Observing that $\\sum_{ij}A_{ij}\n= \\sum_i k_i = 2m$, we can rewrite~$r$ as\n\\begin{eqnarray}\nr &=& {1\\over2m} \\sum_{ij} \\biggl[ A_{ij} - {k_ik_j\\over2m} \\biggr] x_ix_j\n \\nonumber\\\\\n &=& {\\sum_{ij} A_{ij} x_ix_j\\over\\sum_{ij} A_{ij}}\n - \\Biggl[ {\\sum_{ij} A_{ij} x_i\\over\\sum_{ij} A_{ij}} \\Biggr]^2.\n\\end{eqnarray}\nNote that the ratios appearing in the second line are simply averages over\nall edges in the network, and hence $r$ has the form\n$\\av{x_ix_j}-\\av{x_i}\\av{x_j}$ of a correlation function measuring the\ncorrelation of the values $x_i$ over all pairs of vertices joined by an\nedge in the network.\n\nCorrelation functions of exactly this type have been considered previously\nas measures of so-called ``assortative mixing,'' the tendency for adjacent\nvertices in networks to have similar properties~\\cite{Newman02f,Newman03c}.\nFor example, if the quantity $x_i$ is just the degree $k_i$ of a vertex,\nthen $r$~is the covariance of the degrees of adjacent vertices, which takes\npositive values if vertices tend to have similar degrees to their\nneighbors, high-degree vertices linking to other high-degree vertices and\nlow to low, and negative values if high-degree links to low.\n\nEquation~\\eqref{assortativity} is not just a curiosity, but provides some\ninsight concerning assortativity. If we expand $\\vec{x}$ in terms of the\neigenvectors~$\\vec{u}_i$ of the modularity matrix, as we did for the\nmodularity itself in Eq.~\\eqref{q3}, we get\n\\begin{equation}\nr = {1\\over2m} \\sum_i c_i^2 \\beta_i,\n\\end{equation}\nwhere $\\beta_i$ is again the $i$th largest eigenvalue of~$\\mathbf{B}$ and\n$c_i=\\vec{u}_i^T\\vec{x}$. 
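\nThe equivalence of the matrix form~\\eqref{assortativity} and the edge-average form of $r$ is easy to verify numerically; a minimal sketch (ours; NumPy and a dense adjacency matrix assumed):\n\\begin{verbatim}\nimport numpy as np\n\ndef assortativity(A, x):\n    # r = x^T B x \/ 2m, Eq. (assortativity), cross-checked against the\n    # covariance <x_i x_j> - <x_i><x_j> over the ends of edges\n    k = A.sum(axis=1)\n    two_m = k.sum()\n    B = A - np.outer(k, k) \/ two_m\n    r = x @ B @ x \/ two_m\n    assert np.isclose(r, x @ A @ x \/ two_m - (k @ x \/ two_m) ** 2)\n    return r\n\\end{verbatim}\n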
It follows that $r$ will have a large positive value if\n$\\vec{x}$ has a large component in the direction of one or more of the most\npositive eigenvectors of the modularity matrix, and similarly for large\nnegative values. Now we recall that the leading eigenvectors of the\nmodularity matrix also define the communities in the network and we see\nthat there is a close relation between assortativity and community\nstructure: networks will be assortative according to some property $x$ if\nthe values of that property divide along the same lines as the communities\nin the network. Thus, for instance, a network will be assortative by\ndegree if the degrees of the vertices are partitioned such that the\nhigh-degree vertices fall in one community and the low-degree vertices in\nanother.\n\nThis lends additional force to the discussion given in the introduction,\nwhere we mentioned that different communities in networks are often found\nto have different average properties such as degree. In fact, as we now\nsee, this is probably the case for any property that displays significant\nassortative mixing, which includes an enormous variety of quantities\nmeasured in networks of all types. Thus, it is not merely an observation\nthat different communities have different average properties---it is an\nexpected behavior in a network that has both community structure and\nassortativity.\n\n\n\\subsection{Community centrality}\n\\label{communityc}\nFor our second example of other uses of the modularity matrix, we consider\ncentrality measures, one of the abiding interests of the network analysis\ncommunity for many decades. In Section~\\ref{singlevec} we argued that the\nmagnitudes of the elements of the leading eigenvector of the modularity\nmatrix give a measure of the ``strength'' with which vertices belong to\ntheir assigned communities. Thus these magnitudes define a kind of\ncentrality index that quantifies how central vertices are in communities.\nFocusing on just a single eigenvector of the modularity matrix, however, is\nlimiting. As we have seen, all the eigenvectors contain useful information\nabout community structure. It is useful to ask what the appropriate\nmeasure is of strength of community membership when the information in all\neigenvectors is taken into account. Given Eq.~\\eqref{finalq}, the obvious\ncandidate seems to be the projection of the vertex vector~$\\vec{r}_i$ onto\nthe community vector~$\\vec{R}_k$ of the community to which vertex~$i$\nbelongs. Unfortunately, this projection depends on the arbitrary\nparameter~$\\alpha$, which we introduced in Eq.~\\eqref{alphaq} to get around\nproblems caused by the negative eigenvalues of the modularity matrix. This\nin turn threatens to introduce arbitrariness into our centrality measure,\nwhich we would prefer to avoid. So for the purposes of defining a\ncentrality index we propose a slightly different formulation of the\nmodularity, which is less appropriate for the optimization calculations\nthat are the main topic of this paper, but more satisfactory for present\npurposes, as we will see.\n\nSuppose that there are $p$ positive eigenvalues of the modularity matrix\nand $q$ negative ones. We define two new sets of vertex vectors\n$\\set{\\vec{x}_i}$ and $\\set{\\vec{y}_i}$, of dimension $p$ and $q$, thus:\n\\begin{eqnarray}\n\\bigl[ \\vec{x}_i \\bigr]_j &=& \\sqrt{\\beta_j}\\,U_{ij},\\\\\n\\bigl[ \\vec{y}_i \\bigr]_j &=& \\sqrt{-\\beta_{n+1-j}}\\,U_{i,n+1-j}.\n\\end{eqnarray}\n(Note that $p+q\\le n$: any zero eigenvalues of the modularity matrix belong to neither set, and the modularity matrix always has at least one such eigenvalue, since the uniform vector $(1,1,\\ldots,1)$ is an eigenvector of~$\\mathbf{B}$ with eigenvalue zero.)\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nThe van der Waerden number $W(k,r)$ is the least integer $n$ such that every coloring of the integers $1 \\ldots n$ with $r$ colors contains a monochromatic arithmetic progression of length $k$. We present new lower bounds on $W(k,r)$, extending previous work to $k$ as large as 25 and $r$ as large as 7. 
We used four known constructions of certificates, all of which require finding primes with certain properties. We applied substantially greater computing power than in previous work, maintaining 2 teraflops over 12 months through distributed computing. By doing so, we were able to check all primes through 950 million to construct the best certificates possible using these methods through length 24. By contrast Rabung and Lotts \\cite{RabungL12} checked all primes through 10 million and $k$ through 12. The growth rates of the lower bounds given by our computations may give insight into the growth rates of van der Waerden numbers, since there is some evidence that these constructions give tight lower bounds.\n\nThe paper is organized as follows. Section 2 presents the four known constructions of certificates for establishing lower bounds. Section 3 gives reasons to believe that these constructions establish tight lower bounds. Section 4 explains the computational resources employed to find these constructions. Section 5 describes the lower bounds that were found and presents a conjecture on the growth rate of van der Waerden numbers in $k$.\n \n\n\\section{Constructions}\nWe used four existing constructions (\\cite{Rabung}, \\cite{HerwigHLM07}, \\cite{Xu2013}, \\cite{Blankenship}) and applied substantially greater computing power.\n\nFirst, Rabung's method, which he attributes to Folkman, provides a coloring of the integers $1 \\ldots p-1$ that colors the $n$th entry $\\log_\\rho n \\mod r$, where $\\rho$ is a primitive root of $p$ and the discrete logarithm $\\log_\\rho n$ is defined as the $m$ such that $\\rho^m=n \\mod p$. The choice of primitive root does not affect the length of the certificate or the length of the longest monochromatic arithmetic progression. Rabung shows there is a shortcut for determining the longest monochromatic arithmetic progression in this certificate. Suppose there is a monochromatic progression of length 3 and spacing $d$ at positions $a$, $a+d$, $a+2d$, in which case $\\log_\\rho(a) \\equiv \\log_\\rho(a+d) \\equiv \\log_\\rho(a+2d) \\mod r$. Let $d^{-1}$ be the inverse of $d$ modulo $p$, so that $d^{-1}d = 1 \\mod p$. Multiplying the positions through by $d^{-1}$ shifts each discrete logarithm by $\\log_\\rho(d^{-1})$, so $\\log_\\rho(d^{-1}a) \\equiv \\log_\\rho(d^{-1}a+1) \\equiv \\log_\\rho(d^{-1}a+2) \\mod r$, which is a monochromatic arithmetic progression of spacing 1 (positions taken modulo $p$). Therefore, there is a monochromatic arithmetic progression of spacing $d$ and length 3 in $1 \\ldots p-1$ if and only if there is one of spacing 1. Consider an $r$-coloring of $0 \\ldots (k-1)p$ such that position $n$ is given color $\\log_\\rho(n \\bmod p) \\mod r$ if $n$ is not in the set $\\{0,p,2p,\\ldots,(k-1)p\\}$. Assign any colors to members of that set so that not all have the same color. Rabung's theorem states that this coloring contains no monochromatic arithmetic progression of length $k$ if and only if: (a) there is no monochromatic arithmetic progression of length $k$ and spacing 1 in $1 \\ldots p-1$ and (b) if 1 and $p-1$ have the same color then $1,2,\\ldots,(k-1)\/2$ do not all have the same color, while if 1 and $p-1$ have different colors then $1,2,\\ldots,(k-1)$ do not all have the same color.\n\nAs an example, take a prime number $p$ listed in Table 2 and a primitive root of that number. For $W(4,2)$, Table 2 lists the prime $p=11$. 
Because 2 is a primitive root of 11, its powers up to $2^{10}$ (2,4,8,16,32,64,128,256,512,1024) modulo 11 are all distinct, and equal (2,4,8,5,10,9,7,3,6,1). Color this sequence green and blue in alternation to get $\\mathbf{(\\textcolor{green}{2}, \\textcolor{blue}{4}, \\textcolor{green}{8}, \\textcolor{blue}{5}, \\textcolor{green}{10}, \\textcolor{blue}{9}, \\textcolor{green}{7}, \\textcolor{blue}{3}, \\textcolor{green}{6}, \\textcolor{blue}{1})}$. Reordering this sequence, we obtain $\\mathbf{(\\textcolor{blue}{1}, \\textcolor{green}{2}, \\textcolor{blue}{3}, \\textcolor{blue}{4}, \\textcolor{blue}{5}, \\textcolor{green}{6}, \\textcolor{green}{7}, \\textcolor{green}{8}, \\textcolor{blue}{9}, \\textcolor{green}{10})}$, or $\\mathbf{\\textcolor{blue}{B}\\textcolor{green}{G}\\textcolor{blue}{BBB}\\textcolor{green}{GGG}\\textcolor{blue}{B}\\textcolor{green}{G}}$. We apply this coloring to $12\\ldots21$ and $23\\ldots32$ as well. We may choose any coloring for 0, 11, 22, and 33, as long as they do not all have the same color. By Rabung's theorem, the resulting certificate contains no monochromatic arithmetic progression of length 4 provided conditions (a) and (b) above hold. Observing that the longest monochromatic arithmetic progression of spacing 1 is of length 3 (\\textcolor{blue}{B} in positions 3,4,5 and \\textcolor{green}{G} in positions 6,7,8), and that 1 and 10 have different colors while 1, 2, 3 do not all have the same color, we establish that the final certificate contains no monochromatic arithmetic progressions of length 4. The existence of this certificate of length 34 proves that $W(4,2)$ is greater than 34. In fact, $W(4,2)=35$ (Table 1). We used Rabung's method exhaustively, checking all primes up to 950 million.\n\nSecond, we used the Cyclic Zipper Method of Herwig et al \\cite{HerwigHLM07}, a method that doubles the length of a sequence generated by Rabung's method by interleaving it with itself after being spread, turned, and shifted. That is, given an $r$-coloring $\\chi$ of $1\\ldots p-1$, construct a zipped coloring $\\chi_z$ of $1\\ldots 2p-2$ such that $\\chi_z(2i)=\\chi(i)$ for $i$ in $1\\ldots p-1$. This determines the even positions of $\\chi_z$. For $i$ in $\\dfrac{p+1}{2} \\ldots p-1$, let $\\chi_z(2(i-\\dfrac{p-1}{2})+1) = \\chi(i)+r\/2 \\mod r$. For $i$ in $1 \\ldots \\dfrac{p-1}{2}$, let $\\chi_z(2(i+\\dfrac{p-1}{2})+1) = \\chi(i)+r\/2 \\mod r$.\\footnote{For code to zip a coloring, see https:\/\/github.com\/mlotts\/van-der-waerden-zipper\/blob\/master\/zipper\/zipperFinal.c.} \\footnote{In Table 2, lower bounds based on this method are marked Z. In some cases, Herwig et al \\cite{HerwigHLM07} and Rabung and Lotts \\cite{RabungL12} were able to zip a certificate twice and quadruple its length. In Table 2, these are marked ZZ.} We used this method for $k$ up to 18 based on primes up to 40 million and $r$ up to 10, using code shared by Rabung and Lotts \\cite{RabungL12}.\\footnote{It can be found at: https:\/\/github.com\/mlotts\/van-der-waerden-zipper.} We did find some lower bounds using this method, but they were outperformed by the method described in the next paragraph.\n\nThe third method that we applied was Xu's \\cite{Xu2013} method of multiplying certificates by applying colors recursively. He shows how to use an $s$-coloring and a $t$-coloring to create an $st$-coloring. For example, each B in BGGBBGGB is replaced with YRRYYRRY and each G with PVVPPVVP. This creates a 4-coloring of $0\\ldots63$ that avoids a monochromatic arithmetic progression of length 3 (this is not an improvement on $W(3, 4) = 76$). 
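\nBoth constructions above are easy to verify by brute force. A minimal sketch (ours; standard-library Python only, with our own arbitrary not-all-equal choice of colors for the multiples of $p$):\n\\begin{verbatim}\ndef has_mono_ap(seq, k):\n    # any monochromatic arithmetic progression of length k?\n    n = len(seq)\n    return any(len({seq[a + i * d] for i in range(k)}) == 1\n               for d in range(1, (n - 1) \/\/ (k - 1) + 1)\n               for a in range(n - (k - 1) * d))\n\n# Rabung's construction for the W(4,2) example: p = 11, rho = 2, r = 2\np, rho, r, k = 11, 2, 2, 4\nlog = {pow(rho, m, p): m for m in range(p - 1)}  # discrete logarithms\ncert = [(n \/\/ p) % 2 if n % p == 0 else log[n % p] % r\n        for n in range((k - 1) * p + 1)]         # 34 integers, 0..33\nassert not has_mono_ap(cert, 4)                  # hence W(4,2) > 34\n\n# Xu's product of two copies of the certificate BGGBBGGB (B=0, G=1)\nouter = [0, 1, 1, 0, 0, 1, 1, 0]\nprod = [2 * b + c for b in outer for c in outer] # 4-coloring of 0..63\nassert not has_mono_ap(prod, 3)\n\\end{verbatim}\n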
Xu defines the ring van der Waerden number $WR(k, r)$ as the analog of the van der Waerden number for colorings of $\\mathbb{Z}_n$, in which arithmetic progressions are considered modulo $n$, such as $n-3, n-1, 1, 3$. He states that if $k\\geq3$, $s\\geq2$, $t\\geq2$, and $5\\leq n<WR(k,s)$, then $W(k,st)>n(W(k,t)-1)+1$.\n\nFinally, Blankenship et al \\cite{Blankenship} showed that $W(k,r)>p\\cdot(W(k,r-\\ceil{\\frac{r}{p}})-1)$, where $p$ is prime and less than or equal to $k$. This generalizes Berlekamp's \\cite{Berlekamp} lower bound for $r=2$.\n\n\\section{Evidence for Optimality of Constructions}\n\nGiven that the constructions above yield tight lower bounds for most of the known exact van der Waerden numbers, their growth may shed light on the growth of van der Waerden numbers generally. Five of the seven known exact van der Waerden numbers (all except $W(3,2)$ and $W(3,3)$) have tight lower bounds based on Rabung's method and Cyclic Zipping. \n\nIn particular, $W(4, 2)=35$, while Rabung's method gives the tight lower bound $W(4,2)>34$. Kouril \\cite{Kouril} also showed that all maximal certificates for $W(4,3)=293$, that is, all 3-colorings of $0 \\ldots 291$ free of monochromatic arithmetic progressions of length 4, have the form given by Rabung's method with prime 97. In addition, Kouril and Paul \\cite{KourilP08} found that all maximal certificates for $W(6,2)=1132$ are of the form generated by the cyclic zipper method with prime 113.\n\nHowever, there are cases where SAT solvers have outperformed Rabung's method plus Cyclic Zipping. For instance, for $W(5,3)$ a SAT solver \\cite{Heuleetal} beat both methods.\n\nThere is another reason to expect Rabung's method to give good lower bounds. If $\\log_\\rho(n) \\equiv \\log_\\rho(n+1) \\equiv \\log_\\rho(n+2) \\mod r$, so that $n, n+1, n+2$ is a monochromatic arithmetic progression of spacing 1 and length 3, then for any $c$, $\\log_\\rho(cn) \\equiv \\log_\\rho(cn+c) \\equiv \\log_\\rho(cn+2c) \\mod r$, which is a monochromatic arithmetic progression of spacing $c$. Therefore, these sequences are full of arithmetic progressions of length $k-1$ of every possible spacing, which is a reason to think they give good lower bounds for $W(k,r)$. This is a property that sequences not generated using Rabung's method are unlikely to have.\\footnote{It does not appear to have been observed in the literature that Rabung's transformation can be applied to permute any coloring of $0\\ldots p-1$ to change the spacing within progressions but not their lengths. For instance, if $\\chi_d$ is defined as $\\chi_d(i)=\\chi(di)$, then a monochromatic arithmetic progression of spacing $c$ in $\\chi$, where progressions are considered modulo $p$, becomes a monochromatic progression of spacing $dc$ in $\\chi_d$.} \n\n\\section{Computations}\n\nWe used the Berkeley Open Infrastructure for Network Computing (BOINC) to distribute the work among our volunteers' computers. We were able to get over two teraflops of computing power, or about two trillion floating point operations per second, over 12 months. Two computers applied Rabung's method to each prime to validate the results. There were a total of 516 volunteers and 1760 computers in 53 countries. We created both Linux and Windows versions and wrote the program in C++ for speed. Each computer was assigned as input a range of primes and output lower bounds associated with each prime. The server consolidated the output into a table of best lower bounds, shown in bold in Table 1.\n\nThere are a number of reasons that a reader should have confidence in the results reported. First, two computers verified each computation in the BOINC project. 
Second, we reproduced all known lower bounds found using these methods. Third, the reader can verify that the primes in Table 2 give the lower bounds in Table 1, for instance, using our source code or that of Rabung and Lotts.\\footnote{Our source code can be found at https:\/\/github.com\/hmonroe\/vdw.} Verifying that the primes we found in Table 2 give the best lower bounds would require resources similar to our project's, i.e., 2 teraflops for 12 months.\n\n\\section{New Lower Bounds}\nWe found new lower bounds for greater $r$ and $k$ than in previous work, as can be seen in Tables 1--3. These new lower bounds are shown in bold. The lower bounds that are not in bold are the best known of previous work: Rabung and Lotts \\cite{RabungL12}, Herwig et al \\cite{HerwigHLM07}, Kouril and Paul \\cite{KourilP08}, Landman et al \\cite{Landman}, Landman and Robertson \\cite{landmanrobertson}, and Rabung \\cite{Rabung}. We checked numbers of colors $r$ up to 7 and lengths $k$ of the arithmetic progression to be avoided up to 25.\n\nOur lower bounds on $W(k,2)$ grow roughly exponentially as $k$ increases. Let $W'(k,r)$ be this paper's lower bounds. The ratio $\\dfrac{W'(k, 2)}{W'(k-1, 2)}$ seems to oscillate between 2 and 2.7 when $k>14$, as shown in Figure 1. The ratio $\\dfrac{W'(k, 3)}{W'(k-1, 3)}$ seems to hover around 3 or 4. We conjecture:\n\n$\\textbf{\\emph{Conjecture.}}$ $\\underset{k \\to \\infty}{\\lim}\\dfrac{W(k, r)}{W(k - 1, r)} = r$.\n\nSeveral lower bound formulas have this property. Berlekamp's \\cite{Berlekamp} bound states that $W(p+1, 2)\\ge p(2^p-1)$ for primes $p$. Blankenship et al \\cite{Blankenship} state that $W(p+1,r)>p^{r-1}2^p$ for primes $p$, which is a generalization of this bound. Szab\\'{o} \\cite{Szabo} found the lower bound $W(k, 2)\\ge\\frac{2^{k-1}}{ek}$. Landman and Robertson \\cite{landmanrobertson} state that for primes $p\\ge5$ and $q$, $W(p+1,q)\\ge p(q^p-1)+1$.\\footnote{Assuming that discrete logarithms modulo $r$ behave like independent coin flips with heads probability $1\/r$, a heuristic argument that Rabung's lower bounds grow roughly with ratio $r$ in $k$ is that the expected length of the longest run of heads in $p$ consecutive coin flips is of order $k=O(\\log_{r}{p})$ \\cite{Shilling}. This suggests that $p$ is of the order $r^k$.} Ronald Graham has offered a prize of US \\$1000 to show that $W(k,2)<2^{k^2}$; the conjecture above would imply a slower growth rate.\n\n\n\n\t\\begin{tikzpicture}[baseline={(current bounding box.center)}]\n\t\\begin{axis}[\n\ttitle={Figure 1. 
Growth Rate of Lower Bounds $W'(k,r)$},\n\txlabel={Length of Monochromatic Arithmetic Progression $k$},\n\tylabel={$\\dfrac{W'(k, r)}{W'(k - 1, r)}$},ylabel style={rotate=-90},\n\txmin=8, xmax=25,\n\tymin=0, ymax=12,\n\txtick={8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25},\n\tytick={0,2,4,6,8,10,12},\n\txtick pos=left,\n\tlegend pos=north east,\n\tymajorgrids=true,\n\tgrid style=dashed,\n\tevery axis plot\/.append style={thick},\n\t]\n\t\n\t\\addplot[\n\tcolor=blue,\n\tmark=none,\n\t]\n\tcoordinates {\n\t\t(8,3.1)\n\t\t(9,3.6)\n\t\t(10,2.5)\n\t\t(11,1.9)\n\t\t(12,3.3)\n\t\t(13,2.6)\n\t\t(14,1.9)\n\t\t(15,2.7)\n\t\t(16,1.9)\n\t\t(17,2.8)\n\t\t(18,2.0)\n\t\t(19,2.8)\n\t\t(20,2.1)\n\t\t(21,2.7)\n\t\t(22,1.8)\n\t\t(23,2.4)\n\t\t(24,1.77)\n\t\t(25,2.095)\n\t\t\n\t\t\n\t};\n\t\\addplot[\n\tcolor=red,\n\tmark=none,\n\t]\n\tcoordinates {\n\t\t(8,4.9)\n\t\t(9,3.9)\n\t\t(10,4.5)\n\t\t(11,4.5)\n\t\t(12,4.3)\n\t\t(13,3.2)\n\t\t(14,2.7)\n\t\t(15,3.4)\n\t\t(16,4.1)\n\t\t\n\t};\n\t\\addplot[\n\tcolor=green,\n\tmark=none,\n\t]\n\tcoordinates {\n\t\t(8,5.4)\n\t\t(9,11.4)\n\t\t(10,8.5)\n\t\t(11,2.9)\n\t\t(12,10.7)\n\t\t(13,6.1)\n\t\t\n\t};\n\t\\legend{2 colors,3 colors,4 colors}\n\t\n\t\\end{axis}\n\t\\end{tikzpicture}\n\n\n\\newpage\n\n\\begin{table}[]\n\t\\footnotesize\n\t\\centering\n\t\\caption{Lower Bounds for $W(k, r)$}\n\t\\label{my-label}\n\t\\def\\arraystretch{2}\n\t\\begin{tabular}{lrrrrrrrrr}\n\t\t& 2 colors & 3 colors & 4 colors & 5 colors & 6 colors & 7 colors \\\\\\hline\n\t\tLength 3 & 9 & 27 & 76 & \\textgreater170 & \\textgreater225 & \\textgreater336 \\\\\n\n\t\tLength 4 & 35 & 293 & \\textgreater1,048 & \\textgreater2,254 & {\\textgreater9,778} & \\\\\n\t\tLength 5 & 178 & \\textgreater2,173 & \\textgreater17,705 & \\textgreater98,741 & {\\textgreater98,754} & \\textgreater493,705 \\\\\n\t\tLength 6 & 1,132 & \\textgreater11,191 & \\textgreater157,348 & \\textgreater786,740 & \\textgreater1,555,549 & \\textgreater3,933,700 \\\\\n\t\tLength 7 & \\textgreater3,703 & \\textgreater48,811 & \\textgreater2,284,751 & \\textgreater15,993,257 & {\\textgreater111,952,799} & \\textgreater783,669,593 \\\\\n\t\tLength 8 & \\textgreater11,495 & \\textgreater238,400 & \\textgreater12,288,155 & \\textgreater86,017,085 & {\\textgreater602,119,595} & \\textgreater4,214,837,165 \\\\\n\t\tLength 9 & \\textgreater41,265 & \\textgreater932,745 & \\textgreater139,847,085 & \\textgreater978,929,595 & \\textgreater6,852,507,165 & \\textgreater47,967,550,155 \\\\\n\t\tLength 10 & \\textgreater103,474 & \\textgreater4,173,724 & \\textgreater1,189,640,578 & \\textgreater8,327,484,046 & \\textgreater58,292,388,322 & \\textgreater408,046,718,254 \\\\\n\t\tLength 11 & \\textgreater193,941 & \\textgreater18,603,731 & \\textgreater3,464,368,083 & \\textgreater38,108,048,913 & \\textgreater419,188,538,043 & \\textgreater4,611,073,918,473 \\\\\n\t\tLength 12 & \\textgreater638,727 & \\textgreater79,134,144 & \\textgreater37,054,469,451 & \\textgreater407,599,163,961 \\\\\n\t\tLength 13 & \\textgreater1,642,309 & \\textbf{\\textgreater251,282,317} & \\textgreater224,764,767,431 \\\\\n\t\tLength 14 & \\textgreater3,118,350 & \\textbf{\\textgreater669,256,082} & \\textgreater748,007,969,550 \\\\\n\t\tLength 15 & \\textgreater8,523,047 & \\textbf{\\textgreater2,250,960,279} \\\\\n\t\tLength 16 & \\textgreater16,370,086 & \\textbf{\\textgreater9,186,001,216} \\\\\n\t\tLength 17 & \\textgreater46,397,777 & \\textbf{\\textgreater15,509,557,937} \\\\\n\t\tLength 18 & \\textgreater91,079,252 \\\\\n\t\tLength 19 & 
\\textgreater250,546,915 \\\\\n\t\tLength 20 & \\textgreater526,317,462 \\\\\n\t\tLength 21 & \\textbf{\\textgreater1,409,670,741} \\\\\n\t\tLength 22 & \\textbf{\\textgreater2,582,037,634} \\\\\n\t\tLength 23 & \\textbf{\\textgreater6,206,141,987} \\\\\n\t\tLength 24 & \\textbf{\\textgreater10,980,093,212} \\\\\n\t\tLength 25 & \\textbf{\\textgreater23,003,662,489} \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\n\t\n\t\n\t\n\t\n\t\\textgreater \\space means the actual number is unknown but we know it is more than that number. The lower bounds that are bold are new or have been improved by this paper.\n\t \n\\end{table}\n\n\n\n\\begin{table}[]\n\t\\footnotesize\n\t\\centering\n\t\\caption{Primes Used to Find Lower Bounds for $W(k, r)$}\n\t\\def\\arraystretch{2}\n\t\\begin{tabular}{lrrrrrrrrr}\n\t\t\n\t\t& 2 colors & 3 colors & 4 colors & 5 colors & 6 colors & 7 colors \\\\\\hline\n\t\tLength 3 & && 37 & & $3\\cdot(W(3,4)-1)$ \\\\\n\t\tLength 4\n\t\t& 11 & 97 & 349 & 751 & 3,259 & \\\\\n\t\tLength 5\n\t\t& 11ZZ & & 2,213Z & & & $5\\cdot W'(5,5)$ \\\\\n\t\tLength 6\n\t\t& 113Z & & 1,132x139 & $5\\cdot W'(5,4)$ & 11,191x139 & $5\\cdot W'(6,5)$ \\\\\n\t\tLength 7\n\t\t& 617 & & 3,703x617 & $7\\cdot W'(7,4)$ & $7\\cdot W'(7,5)$ & $7\\cdot W'(7,6)$ \\\\\n\t\tLength 8\n\t\t& 821Z & 34,057 & 11,495x1,069 & $7\\cdot W'(8,4)$ & $7\\cdot W'(8,5)$ & $7\\cdot W'(8,6)$ \\\\\n\t\tLength 9\n\t\t& & 116,593 & 41,265x3,389 & $7\\cdot W'(9,4)$ & $7\\cdot W'(9,5)$ & $7\\cdot W'(9,6)$ \\\\\n\t\tLength 10\n\t\t& 11,497 & 463,747 & 103,474x11,497 & $7\\cdot W'(10,4)$ & $7\\cdot W'(10,5)$ & $7\\cdot W'(10,6)$ \\\\\n\t\tLength 11\n\t\t& 9,697Z & 1,860,373 & 193,941x17,863 & $11\\cdot W'(11,4)$ & $11\\cdot W'(11,5)$ & $11\\cdot W'(11,6)$ \\\\\n\t\tLength 12\n\t\t& 29,033Z & 7,194,013 & 638,727x58,013 & $11\\cdot W'(12,4)$ \\\\\n\t\tLength 13\n\t\t& 136,859 & \\textbf{20,940,193} & 1,642,309x136,859 \\\\\n\t\tLength 14\n\t\t& 239,873 & \\textbf{51,481,237} & 3,118,350x608,789 \\\\\n\t\tLength 15\n\t\t& 608,789 & \\textbf{160,782,877} \\\\\n\t\tLength 16\n\t\t& 1,091,339 & \\textbf{612,400,081} \\\\\n\t\tLength 17\n\t\t& 2,899,861 & \\textbf{969,347,371} \\\\\n\t\tLength 18\n\t\t& 5,357,603 \\\\\n\t\tLength 19\n\t\t& 13,919,273 \\\\\n\t\tLength 20 \n\t\t& 27,700,919 \\\\\n\t\tLength 21\n\t\t& \\textbf{70,483,537} \\\\\n\t\tLength 22\n\t\t& \\textbf{122,954,173} \\\\\n\t\tLength 23\n\t\t& \\textbf{282,097,363} \\\\\n\t\tLength 24 & \\textbf{477,395,357} \\\\\n\t\tLength 25 & \\textbf{958,485,937} \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\n\t\n\t\n\t\n\t\n\tThe numbers above are the primes we used to find the lower bounds. Results with the Cyclic Zipper Method \\cite{HerwigHLM07} are shown above. Z=zipped once, ZZ=zipped twice. Entries with two numbers separated by an ``x\" are the numbers used in Xu's \\cite{Xu2013} method. Entries with a prime and van der \n\tWaerden number separated by a $\\cdot$ were produced with Blankenship et al's \\cite{Blankenship} recurrence relation. The lower bounds that are bold are new or have been improved by this paper. 
For lower bounds based on primes close to 1 billion, there may be better primes above 1 billion, which we did not check.\n\\end{table}\n\n\n\n\n\n\n\n\n\\begin{table}[]\n\t\\footnotesize\n\t\\centering\n\t\\caption{References for Lower Bounds for $W(k, r)$}\n\t\\def\\arraystretch{2}\n\t\\begin{tabular}{lrrrrrrrrr}\n\t\t& 2 colors & 3 colors & 4 colors & 5 colors & 6 colors & 7 colors \\\\\\hline\n\t\tLength 3 & \\cite{Chvatal} & \\cite{Chvatal} & \\cite{Beeler} & \\cite{HeuleMaaren} & \\cite{Blankenship} & \\cite{Komkov17} \\\\\n\t\tLength 4 & \\cite{Chvatal} &\\cite{Kouril} & \\cite{Rabung} & \\cite{Rabung} & \\cite{Rabung} \\\\\n\t\t Length 5 & \\cite{Stevens_Shantaram} & \\cite{Heuleetal} & \\cite{HerwigHLM07} & \\cite{Komkov} & \\cite{Komkov} & \\cite{Blankenship} \\\\\n\t\t Length 6 & \\cite{KourilP08} & \\cite{Heuleetal} & \\cite{Xu2013} & \\cite{Blankenship} & \\cite{Xu2013} & \\cite{Blankenship} \\\\\n\t\t Length 7 & \\cite{Rabung} & \\cite{Delft} & \\cite{Xu2013} & \\cite{Blankenship} & \\cite{Blankenship} & \\cite{Blankenship} \\\\\n\t\t Length 8 & \\cite{HerwigHLM07} & \\cite{HerwigHLM07} & \\cite{Xu2013} & \\cite{Blankenship} & \\cite{Blankenship} & \\cite{Blankenship} \\\\\n\t\t Length 9 & \\cite{HerwigHLM07} & \\cite{RabungL12} & \\cite{Xu2013} & \\cite{Blankenship} & \\cite{Blankenship} & \\cite{Blankenship} \\\\\n\t\t Length 10 & \\cite{Rabung} & \\cite{RabungL12} & \\cite{Xu2013} & \\cite{Blankenship} & \\cite{Blankenship} & \\cite{Blankenship}\\\\\n\t\t Length 11 & \\cite{RabungL12} & \\cite{RabungL12} & \\cite{Xu2013} & \\cite{Blankenship} & \\cite{Blankenship} & \\cite{Blankenship} \\\\\n\t\t Length 12 & \\cite{RabungL12} & \\cite{RabungL12} & \\cite{Xu2013} & \\cite{Blankenship} \\\\\n\t\t Length 13 & \\cite{Xu12} & * & \\cite{Xu2013} \\\\\n\t\t Length 14 & \\cite{Xu12} & * & \\cite{Xu2013} \\\\\n\t\t Length 15 & \\cite{Xu12} & * \\\\\n\t\t Length 16 & \\cite{Xu12} & * \\\\\n\t\t Length 17 & \\cite{Xu12} & * \\\\\n\t\t Length 18 & \\cite{Xu12} \\\\\n\t\t Length 19 & \\cite{Xu12} \\\\\n\t\t Length 20 & \\cite{Xu12} \\\\\n\t\t Length 21 & * \\\\\n\t\t Length 22 & * \\\\\n\t\t Length 23 & * \\\\\n\t\t Length 24 & * \\\\\n\t\t Length 25 & * \\\\\n\t\t \n\t\\hline\n\t\\end{tabular}\n\t\n\t The asterisks (*) represent the lower bounds that this project found.\n\\end{table}\n\n\n\n\n\\newpage\n\n\\clearpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\\label{intro}\nThe charge-hiding effect by a horned particle was studied in \\ct{hidingLLB} for the case where a gravity\/gauge-field system interacts self-consistently with a charged lightlike brane (LLB) as a matter source. Horned particles are spaces in which a region where asymptotically $r \\rightarrow \\infty$ is connected to a region where $r=\\mathrm{const}$; the latter is called a ``horned'' region because another coordinate there (one that replaces $r$, which, being constant, can no longer serve as a coordinate) runs along an infinite range.\\footnote{In \\ct{hidingLLB} we called these spaces wormholes, but it has been pointed out to us that they are more correctly described as ``horned particles'', as done for the spacetimes studied in \\ct{horned particle-1}, \\ct{horned particle-2}, which also have a large ``tube''-like structure. 
Somewhat similar are the so-called ``gravitational bags'', in which some extra dimension grows very large at the center of the four-dimensional projected metric \\ct{GB-1}, \\ct{GB-2}, \\ct{GB-3}.} In the present paper the effect is studied for the case of a timelike brane, where we demand that no surfaces of infinite coordinate-time redshift\\footnote{Such a surface of infinite redshift is in fact a horizon at $r=r_h$, so the resulting object is similar to a black hole, though not exactly: in our previous studies of wormholes constructed this way there is no ``interior''; rather, $r>r_h$ everywhere, and since $r>r_h$ everywhere there is no possibility of ``collapse'' to $r=0$.} appear in the problem, leading therefore to a completely explorable horned-particle space, accessible not only to the traveller that goes through it (as was the case for the LLB), but also to a static external observer. This requires a negative surface energy density for the shell sitting at the throat of the horned particle. We study a gauge field subsystem of a special non-linear form, containing a square-root of the Maxwell term, which has previously been shown to produce QCD-like confining gauge field dynamics in flat space-time. The condition of finite energy of the system, or of asymptotic flatness on one side of the horned particle, implies that the charged object sitting at the ``throat'' expels all the flux it produces into the other side of the horned particle, which turns out to be of a ``tube-like'' nature. An outside observer in the asymptotically flat universe therefore detects a neutral object. The hiding of the electric flux behind the horned region of the particle appears to be the only possible way that a truly charged particle can still be of finite energy, which points to the physical relevance of such solutions, even though a negative energy density is required at the throat, which can be of quantum-mechanical origin.\n\nThis effect is the opposite of the famous Misner-Wheeler ``charge without charge'' effect \\ct{misner-wheeler}, one of the most interesting physical phenomena produced by wormholes.\nMisner and Wheeler realized that wormholes connecting two asymptotically flat\nspace-times permit electromagnetically non-trivial solutions, in which the lines\nof force of the electric field flow from one universe to the other without a\nsource, giving the impression of a positive charge in one universe and a\nnegative charge in the other.\nWormholes may be classified according to their traversability properties. One\nmay ask merely whether a ``traveller'' attempting to cross from one side of the\nwormhole throat to the other can do so in a finite proper time (the traveller's\nown proper time), or one may require in addition that a static observer, who\nuses the coordinate time, sees the traveller pass through the throat and come\nback in a finite coordinate time. 
For a detailed account of the general theory of traversable wormholes, according to this second, most stringent definition, we refer to Visser's book \ct{visser-book} (see also \ct{visser-1,visser-2} and some more recent accounts \ct{WH-rev-1}--\ct{bronnikov-2}).

In contrast to the Misner-Wheeler effect using a wormhole, in the charge-hiding effect by a horned particle a genuinely charged matter source of gravity and electromagnetism may appear {\em electrically neutral} to an external observer. In previous publications it has been shown that this phenomenon takes place in a gravity/gauge-field system self-consistently coupled to a charged lightlike brane as a matter source, where the gauge field subsystem is of a special non-linear form containing a square root of the Maxwell term \ct{hidingLLB}. The latter has previously been shown \ct{GG-1}--\ct{GG-6} to produce a QCD-like confining (``Cornell'' \ct{cornell-potential-1}--\ct{cornell-potential-3}) potential in flat space-time. There the lightlike brane at the ``throat'', which connects a ``universe'' where $r\rightarrow\infty$ with a ``universe'' where $r=Const$, is electrically charged; however, all of its flux flows into the ``tube-like universe'' only. No Coulomb field is produced in the ``universe'' containing the $r\rightarrow\infty$ region; the horned particle therefore hides the charge from an external observer in the latter ``universe''.

In the case where lightlike branes sitting at the throat are the source of the system, the throat is a surface of infinite redshift for the coordinate time, so the horn is accessible only according to the weaker definition, namely that a ``traveller'' attempting to cross from one side of the horn throat to the other can do so in a finite proper time. If we want to ensure the more stringent definition of traversability or accessibility, we must use timelike branes; this will be done here, in the framework of the thin-wall approach to domain walls or thin shells coupled to gravity \ct{Israel-66}.

The gravity/gauge-field system with a square root of the Maxwell term was recently studied in \ct{grav-cornell} (see the brief review in Section 2 below), where the following interesting new features of the pertinent static spherically symmetric solutions were found:\\
(i) appearance of a constant radial electric field (in addition to the Coulomb one) in charged black holes within Reissner-Nordstr{\"o}m-de-Sitter-type and/or Reissner-Nordstr{\"o}m-{\em anti}-de-Sitter-type space-times, in particular, in electrically neutral black holes with Schwarzschild-de-Sitter and/or Schwarzschild-{\em anti}-de-Sitter geometry;\\
(ii) a novel mechanism of {\em dynamical generation} of a cosmological constant through the nonlinear gauge field dynamics of the ``square-root'' Maxwell term;\\
(iii) appearance of a confining-type effective potential in charged test particle dynamics in the above black hole backgrounds.

In Section 3 of the present paper we review the results of \ct{hidingLLB} concerning tube-like, \textsl{i.e.} Levi-Civita-Bertotti-Robinson \ct{LC-BR-1}--\ct{LC-BR-3}, type solutions, with space-time geometry of the form $\cM_2 \times S^2$, with $\cM_2$ being a two-dimensional anti-de Sitter, Minkowski or de Sitter space depending on the relative strength of the electric field w.r.t. the coupling of the square-root Maxwell term.
In previous papers \ct{LL-main-1}--\ct{beograd-2010} an explicit reparametrization-invariant world-volume Lagrangian formulation of lightlike $p$-branes was used to construct various types of wormhole, regular black hole and lightlike braneworld solutions in $D\!=\!4$ or higher-dimensional asymptotically flat or asymptotically anti-de Sitter bulk space-times. In particular, in refs.\ct{BR-kink}--\ct{beograd-2010} it has been shown that lightlike branes can trigger a series of transitions of space-time regions, \textsl{e.g.}, from ``tube-like'' Levi-Civita-Bertotti-Robinson spaces to non-compact Reissner-Nordstr{\"o}m or Reissner-Nordstr{\"o}m-de-Sitter regions or {\sl vice versa}. Let us note that wormholes with ``tube-like'' structure (and regular black holes with ``tube-like'' core) have been previously obtained in refs.\ct{eduardo-wh}--\ct{zaslavskii-3}.

Although lightlike branes will not be used in this paper, one should nevertheless point out the essential role of the proper world-volume Lagrangian formulation of lightlike branes, which manifests itself most clearly in the correct self-consistent construction \ct{LL-main-5,Kerr-rot-WH-2} of the simplest wormhole solution first proposed by Einstein and Rosen \ct{einstein-rosen} -- the Einstein-Rosen ``bridge'' wormhole. Namely, in refs.\ct{LL-main-5,Kerr-rot-WH-2} it has been shown that the Einstein-Rosen ``bridge'' in its original formulation \ct{einstein-rosen} naturally arises as the simplest particular case of {\em static} spherically symmetric wormhole solutions produced by lightlike branes as gravitational sources, where the two identical ``universes'' with Schwarzschild outer-region geometry are self-consistently glued together by a lightlike brane occupying their common horizon -- the wormhole ``throat''. An understanding of this picture within the framework of the Kruskal-Szekeres manifold was subsequently provided in ref.\ct{poplawski}, which involves Rindler's elliptic identification of the two antipodal future event horizons. The system resembles a black hole in the sense that there is a surface of infinite redshift at $r=r_h$, but unlike the standard black hole there is no $r<r_h$ region: $r>r_h$ everywhere.

At this point let us strongly emphasize that the original notion of ``Einstein-Rosen bridge'' in ref.\ct{einstein-rosen} is qualitatively different from the notion of ``Einstein-Rosen bridge'' defined in several popular textbooks ({\sl e.g.}, refs.\ct{MTW,Carroll}) using the Kruskal-Szekeres manifold, where the ``bridge'' has {\em dynamic} space-time geometry. Namely, the two regions in Kruskal-Szekeres space-time corresponding to the two copies of the outer Schwarzschild space-time region ($r>2m$) (the building blocks of the original static Einstein-Rosen ``bridge''), labeled $(I)$ and $(III)$ in ref.\ct{MTW}, are generally {\em disconnected} and share only a two-sphere (the angular part) as a common border ($U=0, V=0$ in Kruskal-Szekeres coordinates), whereas in the original Einstein-Rosen ``bridge'' construction \ct{einstein-rosen} the boundary between the two identical copies of the outer Schwarzschild space-time region ($r>2m$) is a three-dimensional lightlike hypersurface ($r=2m$).

In Section 4 below we consider the matching of an external solution, containing the $r\rightarrow\infty$ region, to a tube-like solution, where $r=const$, through a timelike brane which will serve as a matter source of gravity and (nonlinear) electromagnetism.
Then in Section 5 we recognize the interesting phenomenon that for asymptotic flatness (and therefore for the configuration to be recognized as a legitimate finite energy excitation of flat space) the charged particle has to redirect all the flux it produces in the direction of the tube region. This new charge-``confining'' phenomenon is entirely due to the presence of the ``square-root'' Maxwell term, which assigns infinite energy to any configuration that does not hide the flux (i.e., that does not send all the electric flux in the direction of the tube region).

\section{Lagrangian Formulation. Spherically Symmetric Solutions}
\label{lagrange}
In Refs. \ct{GG-1}--\ct{GG-6} a flat space-time model of a nonlinear gauge field system with a square root of the Maxwell term was considered ($f$ below is a positive constant that sets the scale for confinement effects in the theory):
\br
S = \int d^4 x L(F^2) \quad ,\quad
L(F^2) = - \frac{1}{4} F^2 - \frac{f}{2} \sqrt{- F^2} \; \lab{flatmodel}
\\
F^2 \equiv F_{\m\n} F^{\m\n} \quad ,\quad 
F_{\m\n} = \pa_\m A_\n - \pa_\n A_\m \; .
\nonu
\er

The equations of motion are
\be
\pa_\n \( \( \sqrt{-F_{\a\b}F^{\a\b}}-f\) \frac{F^{\m\n}}{\sqrt{-F_{\a\b}F^{\a\b}}} \)=0 
\lab{flat GG-eqs}
\ee
Then, assuming spherical symmetry and time independence, we find that, in addition to a Coulomb-like piece, a linear term proportional to $f$ is obtained for $A^{0}$, which is of the form of the well-known ``Cornell'' potential \ct{cornell-potential-1}--\ct{cornell-potential-3} in quantum chromodynamics (QCD). Furthermore, these equations are consistent with the 't Hooft criterion for perturbative confinement. In fact, in the infrared region the above equation implies that
\be
F^{\m\n}=f\frac{F^{\m\n}}{\sqrt{-F_{\a\b}F^{\a\b}}}
\ee
up to terms that are negligible in the infrared sector. Interestingly enough, for a static source this automatically implies that the chromoelectric field has a fixed amplitude. Confinement is then obvious, since in the presence of two external oppositely charged sources, by symmetry arguments, such a constant-amplitude chromoelectric field must point along the line joining the two charges. The potential that gives rise to this kind of field configuration is of course a linear potential. Also, for static field configurations the model \rf{flatmodel} yields the electric displacement field $\overrightarrow{D} = \overrightarrow{E} - \dfrac{f}{\sqrt{2}}\dfrac{\overrightarrow{E}}{|\overrightarrow{E}|}$. The pertinent energy density for the electrostatic case turns out to be $ \dfrac{1}{2} \overrightarrow{E}^2$, and for the case where $\overrightarrow{E}$ and $\overrightarrow{D}$ point in the same direction, which is satisfied if $E= |E|> \dfrac{f}{\sqrt{2}}$, we have $ \dfrac{1}{2} \overrightarrow{E}^2=\dfrac{1}{2} \overrightarrow{D}^2+\dfrac{f}{\sqrt{2}}D+\dfrac{f^2}{4} $, so that it indeed contains a term linear w.r.t. $D= |D|$, as argued by 't Hooft \ct{tHooft-03}.
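As a concrete flat-space illustration of the above, the following minimal Python sketch (the values of $f$ and of the Coulomb coefficient $q$ are purely illustrative, not taken from any fit) builds the constant-plus-Coulomb radial field of a static point source, integrates it to exhibit the Coulomb-plus-linear ``Cornell'' form of the potential, and numerically checks the energy density relation just quoted:
\begin{verbatim}
import numpy as np

# Illustrative values (not fits): f sets the confinement scale,
# q the strength of the Coulomb piece.
f, q = 1.0, 0.5

r = np.linspace(0.1, 10.0, 200)

# Radial electric field: Coulomb piece plus the constant piece
# proportional to f (both with the same sign, see Section 2).
E = q / r**2 + f / np.sqrt(2.0)

# Potential with E = -d(phi)/dr: Coulomb + linear ("Cornell") form,
# up to an additive constant.
phi = q / r - (f / np.sqrt(2.0)) * r

# Displacement field D = E - f/sqrt(2), parallel to E since E > f/sqrt(2).
D = E - f / np.sqrt(2.0)

# Check: (1/2) E^2 = (1/2) D^2 + (f/sqrt(2)) D + f^2/4 on this branch.
assert np.allclose(0.5 * E**2, 0.5 * D**2 + (f / np.sqrt(2.0)) * D + f**2 / 4.0)
print("slope of phi at large r:",
      (phi[-1] - phi[-2]) / (r[-1] - r[-2]))  # ~ -f/sqrt(2): the linear term
\end{verbatim}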
The vacuum state is degenerate and is defined by the states with $\overrightarrow{D} = 0$. Notice that a charge source generates $\overrightarrow{D}$, not $\overrightarrow{E}$; furthermore, the states with $\overrightarrow{D} = 0$ are solutions of the equations of motion, while states with $\overrightarrow{E} = 0$ are not -- in fact, at such a point in field space the equations of motion are not even well defined. However, for all the solutions studied in this paper the excitations ``over the vacuum'' will satisfy $E=|\overrightarrow{E}| > \dfrac{f}{\sqrt{2}}$, while $E= \dfrac{f}{\sqrt{2}}$ will correspond to the ``vacuum configuration'' of the electrostatic theory, as will be discussed. A similar connection between $\overrightarrow{D}$ and $\overrightarrow{E}$ has been considered as an example of a ``classical model of confinement'' in Ref. \ct{GG-7} and analyzed by generalizing the methods developed for the ``leading logarithm model'' in Ref. \ct{GG-8}.

The natural appearance of the ``square-root'' Maxwell term in the effective gauge field action \rf{flatmodel} was further motivated by 't Hooft \ct{tHooft-03}, who has proposed that such gauge field actions are adequate for describing confinement (see especially Eq.(5.10) in \ct{tHooft-03}). He has in particular described a consistent quantum approach in which these kinds of terms play the role of ``infrared counterterms''. Also, it has been shown in the first three of refs.\ct{GG-1}--\ct{GG-6} that the square root of the Maxwell term naturally arises as a result of spontaneous breakdown of scale symmetry of the original scale-invariant Maxwell theory, with $f$ appearing as an integration constant responsible for the latter spontaneous breakdown.

\smallskip

We will consider the simplest coupling to gravity of the nonlinear gauge field system with a square root of the Maxwell term. The relevant action is given by (we use units with Newton constant $G_N=1$):
\br
S = \int d^4 x \sqrt{-g} \Bigl\lb \frac{R(g)-2\Lambda}{16\pi} + L(F^2)\Bigr\rb \quad ,\quad
L(F^2) = - \frac{1}{4} F^2 - \frac{f}{2} \sqrt{- F^2} \; ,
\lab{gravity+GG} \\
F^2 \equiv F_{\k\l} F_{\m\n} g^{\k\m} g^{\l\n} \quad ,\quad 
F_{\m\n} = \pa_\m A_\n - \pa_\n A_\m \; .
\nonu
\er
Here $R(g)$ is the scalar curvature of the space-time metric $g_{\m\n}$, $g \equiv \det\Vert g_{\m\n}\Vert$, and $f$ is a positive coupling constant.

Let us note that the Lagrangian $L(F^2)$ in \rf{gravity+GG} contains both the usual Maxwell term and a non-analytic function of $F^2$, and thus it is a {\em non-standard} form of nonlinear electrodynamics. In this way it is significantly different from the original purely ``square-root'' Lagrangian $- \frac{f}{2}\sqrt{F^2}$ first proposed by Nielsen and Olesen \ct{N-O-1} to describe string dynamics (see also refs.\ct{N-O-2,N-O-3}). The Nielsen-Olesen action was designed to be applicable only to ``magnetically dominated'' configurations; for electrically dominated configurations its square root becomes imaginary.
\noindent
In contrast, here we will be interested in the ``electric'' sector of the theory, and we notice that now it is for magnetically dominated configurations that the argument inside the square root becomes negative and the square root itself imaginary.
This could be a real effect (i.e., gauge fields must be electrically dominated), or, since the action is not analytic, it is of course possible to construct a simple modification that allows magnetically dominated configurations by taking the absolute value inside the square root. This would not affect the electrostatic sector of the theory, but it would be harder to motivate (for example from spontaneous breaking of scale invariance); in any case, that discussion would go well beyond the purpose of the present paper, which is to see what a theory that provides confinement in flat space can do in the presence of horned space-times.

As done in \ct{tHooft-03}, we also mean to use the Lagrangian \rf{gravity+GG} to describe a truncation of the non-Abelian theory, where for simplicity we take the gauge field potential in a specific direction in color space (so commutator terms vanish). Let us also remark that one could start with the non-Abelian version of the gauge field action in \rf{gravity+GG}. Since we will be interested in static spherically symmetric solutions, the non-Abelian gauge theory effectively reduces to an Abelian one, as pointed out in ref.\ct{GG-1}.

The corresponding equations of motion of \rf{gravity+GG} read:
\be
R_{\m\n} - \h g_{\m\n} R+\Lambda g_{\m\n} = 8\pi T^{(F)}_{\m\n} \; ,
\lab{einstein-eqs}
\ee
where
\be
T^{(F)}_{\m\n} =
L(F^2)\,g_{\m\n} - 4 L^{\pr}(F^2) F_{\m\k} F_{\n\l} g^{\k\l} \; ,
\lab{T-F}
\ee
and
\be
\pa_\n \(\sqrt{-g} L^{\pr}(F^2) F_{\k\l} g^{\m\k} g^{\n\l}\)=0 \; ,
\lab{GG-eqs}
\ee
where $L^{\pr} (F^2)$ denotes the derivative w.r.t. $F^2$ of the function $L(F^2)$ in \rf{gravity+GG}.

A note concerning the vacuum of the theory is in order here. As opposed to standard Maxwell theory, the vacuum is not obtained for zero gauge field strength $F_{\m\n}=0$. Instead, the vacuum of the theory is obtained for configurations such that $L^{\pr}(F^2)=0$; notice that in such a case $T^{(F)}_{\m\n} \propto g_{\m\n}$, that is, it gives a ``cosmological constant type'' contribution, and notice also that this is obtained for $F^2 \neq 0$, which agrees with the notion that the vacuum of a confining theory contains gauge field condensates.

In our preceding paper \ct{grav-cornell} we have shown that the gravity-gauge-field system \rf{gravity+GG} possesses static spherically symmetric solutions with a radial electric field containing both Coulomb and {\em constant} vacuum pieces:
\be
F_{0r} = \frac{\vareps_F f}{\sqrt{2}} + \frac{Q}{\sqrt{4\pi}\, r^2} 
\quad ,\quad \vareps_F = \mathrm{sign}(Q) \; ,
\lab{cornell-sol}
\ee
\noindent
The sign of the first (constant) term is determined by $\vareps_F$, i.e., by the field strength $F_{0r}$ divided by its absolute value; looking at small enough $r$, where the Coulomb part dominates, we see that this sign is fixed by $Q$.
We see therefore that both contributions in \rf{cornell-sol} have the same sign, and therefore $|F_{0r}|>\frac{f}{\sqrt{2}}$, i.e., bigger than its vacuum value. For the space-time metric we have:
\br
ds^2 = - A(r) dt^2 + \frac{dr^2}{A(r)} + r^2 \bigl(d\th^2 + \sin^2 \th d\vp^2\bigr)
\; ,
\lab{spherical-static} \\
A(r) = 1 - \sqrt{8\pi}|Q|f - \frac{2m}{r} + \frac{Q^2}{r^2} - \frac{\Lambda_{eff}}{3} r^2 \; ,
\lab{RN-dS+const-electr}
\er
which is a Reissner-Nordstr{\"o}m-de-Sitter-type space, where $\Lambda_{eff}=2\pi f^2+\Lambda$.

The appearance in \rf{RN-dS+const-electr} of a ``leading'' constant term different from 1 resembles the effect on gravity produced by a spherically symmetric ``hedgehog'' configuration of a nonlinear sigma-model scalar field with $SO(3)$ symmetry, that is, the field of a global monopole \ct{BV-hedge} (cf. also \ct{Ed-Rab-hedge}).

\section{Generalized Levi-Civita-Bertotti-Robinson Space-Times}
\label{gen-BR}
Here we will look for static solutions of Levi-Civita-Bertotti-Robinson type \ct{LC-BR-1}--\ct{LC-BR-3} of the system \rf{einstein-eqs}--\rf{GG-eqs}; this was studied in \ct{hidingLLB},\ct{hide-confine}. Namely, we look for space-time geometries of the form $\cM_2 \times S^2$, where $\cM_2$ is some two-dimensional manifold:
\be
ds^2 = - A(\eta) dt^2 + \frac{d\eta^2}{A(\eta)} 
+ a^2 \bigl(d\th^2 + \sin^2 \th d\vp^2\bigr) \;\; ,\;\; 
-\infty < \eta <\infty \;\; ,\;\; a = \mathrm{const}
\; ,
\lab{gen-BR-metric}
\ee
and the electromagnetic field is given by
\be
F_{\m\n} = 0 \;\; \mathrm{for}\; \m,\n\neq 0,\eta \quad ,\quad
F_{0\eta} = F_{0\eta} (\eta) \; ;
\lab{electr-static}
\ee

The gauge field equations of motion become:
\be
\pa_\eta \Bigl( F_{0\eta} - \frac{\vareps_F f}{\sqrt{2}}\Bigr) = 0
\quad ,\quad \vareps_F \equiv \mathrm{sign}(F_{0\eta}) \; ,
\lab{GG-eqs-0}
\ee
yielding a constant vacuum electric field:
\be
F_{0\eta} = c_F = \mathrm{arbitrary ~const} \; .
\lab{const-electr}
\ee
The (mixed) components of the energy-momentum tensor \rf{T-F} read:
\be
{T^{(F)}}^0_0 = {T^{(F)}}^\eta_\eta = - \h F^2_{0\eta} \quad ,\quad
T^{(F)}_{ij} = g_{ij}\Bigl(\h F^2_{0\eta} - \frac{f}{\sqrt{2}}|F_{0\eta}|\Bigr)
\; .
\lab{T-F-electr}
\ee
Taking into account \rf{T-F-electr}, the Einstein eqs.\rf{einstein-eqs} for $(ij)$, where $R_{ij}=\frac{1}{a^2} g_{ij}$ because of the $S^2$ factor in \rf{gen-BR-metric}, yield:
\be
\frac{1}{a^2} = 4\pi c_F^2+\Lambda 
\lab{einstein-ij}
\ee
The $(00)$ Einstein eq.\rf{einstein-eqs}, using the expression $R^0_0 = - \h \pa_\eta^2 A$ (ref.\ct{Ed-Rab-1}; see also \ct{Ed-Rab-2}), becomes:
\be
\pa_\eta^2 A = 8\pi h(|c_F|) \quad ,\quad h(|c_F|) \equiv c_F^2-\sqrt{2}f|c_F|-\frac{\Lambda}{4\pi}
\lab{einstein-00}
\ee
In the particular case $\Lambda=0$, studied in \ct{hidingLLB}, we arrive at the following three distinct types of Levi-Civita-Bertotti-Robinson solutions for gravity coupled to the non-Maxwell gauge field system \rf{gravity+GG}:

(i) $AdS_2 \times S^2$ with strong constant vacuum electric field $|F_{0\eta}| = |c_F|>\sqrt{2}f$, where $AdS_2$ is two-dimensional anti-de Sitter space with:
\be
A(\eta) = 1+ 4\pi |c_F| \(|c_F| - \sqrt{2}f\)\,\eta^2
\lab{AdS2}
\ee
in the metric \rf{gen-BR-metric}.

(ii) $M_2 \times S^2$ with constant vacuum electric field $|F_{0\eta}| = |c_F|=\sqrt{2}f$, where $M_2$ is the
flat two-dimensional space with:
\be
A(\eta) = 1 
\lab{Rindler2}
\ee
in the metric \rf{gen-BR-metric}.

(iii) $dS_2 \times S^2$ with weak constant vacuum electric field $|F_{0\eta}| = |c_F|<\sqrt{2}f$, where $dS_2$ is two-dimensional de Sitter space with:
\be
A(\eta) = 1 - 4\pi |c_F| \(\sqrt{2}f - |c_F|\)\,\eta^2
\lab{dS2}
\ee
For the special value $|c_F| = \frac{f}{\sqrt{2}}$ we recover the Nariai solution \ct{nariai-1,nariai-2} with $A(\eta) = 1 - 2\pi f^2 \eta^2$ and equality (up to signs) among the energy density, radial and transverse pressures: $\rho = - p_r = - p_{\perp} = \frac{f^2}{4}$ (${T^{(F)}}^\m_\n = \mathrm{diag} \(-\rho,p_r,p_{\perp},p_{\perp}\)$).

In all three cases above the size of the $S^2$ factor is given by \rf{einstein-ij}. Solutions \rf{Rindler2} and \rf{dS2} are new ones and are specifically due to the presence of the non-Maxwell square-root term in the gauge field Lagrangian \rf{gravity+GG}.

In this paper we will consider the case $\Lambda\neq 0$ and demand that $\Lambda_{eff}=0$. This leaves us only with solutions similar to \rf{AdS2}, although with a different dependence on $|c_F|$ (see eq. \rf{AntiDeSitter}).

\section{Matching through a regular (time-like) thin shell}

Now we want to discuss the matching of \rf{spherical-static} to \rf{gen-BR-metric} at a spherically symmetric wall. The metric induced at the wall has to be well defined, and the coefficient of purely angular displacements $d\Omega^2$ in \rf{spherical-static} and \rf{gen-BR-metric} has to agree at the position of the wall, to give the same value of $ds^2$. Therefore, at the wall
\be
 r=a
 \lab{location}
\ee

The equations of motion of a thin layer in GR have been obtained by W. Israel \ct{Israel-66}; we now briefly review these results. To get these equations it is useful to define a Gaussian normal coordinate system in a neighbourhood of the wall as follows. Denoting the $2+1$ dimensional hypersurface by $\Sigma$, we introduce a coordinate system on $\Sigma$: two of the coordinates are taken to be the angular variables $\theta,\phi$, which are always well defined, up to an overall rotation, for a spherically symmetric configuration. For the other coordinate on the wall, one can use the proper time variable $\tau$ that would be measured by an observer moving along with the wall. The fourth coordinate $\xi$ is taken as the proper distance along the geodesics intersecting $\Sigma$ orthogonally. We adopt the convention that $\xi$ is positive in the Reissner-Nordstr{\"o}m-de-Sitter-type regime and negative in the generalized Levi-Civita-Bertotti-Robinson regime; $\xi=0$ is of course the position of the wall. Thus the full set of coordinates is given by $x^\mu=(\tau,\theta,\phi,\xi)$; in these coordinates
\be
g^{\xi\xi}=g_{\xi\xi}=1 \quad , \quad g^{\xi i}=g_{\xi i}=0 
\ee

Also, we define $n_{\mu}$ to be the normal to a $\xi=constant$ hypersurface, which in Gaussian normal coordinates has the simple form $n_{\mu}=n^{\mu}=(0,0,0,1)$.
We then define the extrinsic curvature corresponding to each $\xi=constant$ hypersurface, which is a $3$-dimensional tensor whose components are defined by
\be
K_{ij}=n_{i;j}=\frac{\pa n_i}{\pa x^j}-\G^{\alpha}_{ij}n_{\alpha}=-\G^{\xi}_{ij}=\frac{1}{2}\pa_{\xi}g_{ij} 
\lab{extrinsic curvature}
\ee
As we can see, the extrinsic curvature gives the change of the metric in the direction perpendicular to the surface.

In terms of these variables, Einstein's equations take the form
\begin{equation}
 \begin{array}{l l}
 G^{\xi}_{\xi}\equiv -\frac{1}{2}{^{(3)}R}+\frac{1}{2}\left[(Tr K)^2-Tr(K^2) \right]=8\pi G T^{\xi}_{\xi} & \\ \\
 G^{\xi}_{i}\equiv K^{m}_{i|m}-(TrK)_{|i}=8\pi G T^{\xi}_{i}
 \end{array} 
 \lab{first Einstein FQ}
\end{equation}
and 
\begin{equation}
 \begin{array}{l l}
G^{i}_{j}\equiv {^{(3)}G^{i}_{j}}-\left(K^{i}_{j}-\delta^{i}_{j} TrK\right)_{,\xi} -\left(Tr K\right)K^{i}_{j} &\\ \\ \qquad +\frac{1}{2}\delta^{i}_{j} \left[(Tr K)^2+Tr(K^2) \right]=8\pi G T^{i}_{j}
 \end{array} 
 \lab{second Einstein FQ}
\end{equation}
where the subscript vertical bar denotes the $3$-dimensional covariant derivative in the $2+1$ dimensional space of coordinates $(\tau,\theta,\phi)$, and a comma denotes an ordinary derivative. Also, quantities like $^{(3)}R$, $^{(3)}G^{i}_{j}$, etc. are to be evaluated as if they pertained to a purely $3$-dimensional metric $g_{ij}$, without any reference to how it is embedded in the higher, four-dimensional space.

By definition, for a thin wall the energy-momentum tensor $T^{\mu\nu}$ has a delta function singularity at the wall, so one can define a surface stress-energy tensor $S^{\mu\nu}$:
\be
T^{\mu\nu}=S^{\mu\nu}\delta(\xi)+\mathrm{regular~terms}
\lab{Energy momentum tensor}
\ee
When the energy-momentum tensor \rf{Energy momentum tensor} is inserted into the field equations \rf{first Einstein FQ},\rf{second Einstein FQ}, we find that \rf{first Einstein FQ} are satisfied automatically, provided that they are satisfied for $\xi\neq0$ and provided that $g_{ij}$ is continuous at $\xi=0$ (so that $K_{ij}$ does not acquire a $\delta$-function singularity). Eq.\rf{second Einstein FQ}, however, when integrated from $\xi=-\varepsilon$ to $\xi=\varepsilon$ ($\varepsilon\rightarrow 0$ and $\varepsilon >0$), leads to the discontinuity condition 
\be
S^i_j=-\frac{1}{8\pi G}\left[ \gamma^i_j-\delta^i_j Tr\gamma \right]
\ee
or equivalently
\be
\gamma^i_j=-8\pi G \left[ S^i_j-\frac{1}{2}\delta^i_j Tr S \right]
\lab{JCE}
\ee
where
\be
 \gamma_{ij}=\lim_{\varepsilon\rightarrow0}\left[K_{ij}(\xi=+\varepsilon)-K_{ij}(\xi=-\varepsilon) \right]
\ee
is the \emph{jump} of the extrinsic curvature across $\Sigma$.

Local conservation of $T_{\mu\nu}$ and the demand of spherical symmetry give a surface stress-energy tensor of the form
\be
S^{\mu\nu}=\sigma(\tau)U^{\mu}U^{\nu}-\omega(\tau)[h^{\mu\nu}+U^{\mu}U^{\nu}]
\lab{surface energy tensor}
\ee
where $h^{\mu\nu}=g^{\mu\nu}-n^{\mu}n^{\nu}$ is the metric projected onto the hypersurface of the wall, and $U^{\mu}=(1,0,0,0)$ is the four-velocity of the wall.

In \rf{surface energy tensor}, $\sigma$ has the interpretation of energy per unit surface, as detected by an observer at rest with respect to the wall, and $\omega$ has the interpretation of surface tension.
For a given equation of state $p=p(\sigma)$ (with $p=-\omega$), local energy-momentum conservation at the wall gives $d(r^2\sigma)=-p\,d(r^2)$; but since we have $r=Const$ at the matching point, we get $\sigma=Const$, and from the generic equation of state $p=p(\sigma)=Const$. For the shell at the junction $r=a$, the angular coordinates $(\theta, \phi)$ must be identified. Also, the radial coordinate of the shell, as seen from the outside, is clearly $r=a$; but this smooth sewing, that is, the requirement that the metrics induced on the shell from the inside and from the outside coincide (so that the induced metric is well defined), is not enough to determine $\eta$ in \rf{gen-BR-metric}, which will have a non-trivial time dependence $\eta=\eta(t)$ to be determined by the Israel junction conditions.

In addition to this, the timelike brane can also carry a delta function charge density $j^\mu=\delta ^{\mu}_0 q \delta(\xi)$, coming from the discontinuity of the gauge field strength across the matching at $r=a$,
\be
[F_{0\nu}]_{r=a}=q
\ee 
which can also be defined in terms of the electric flux or, more precisely, by means of the electric displacement field $D_{0\nu}$ in the two regions, which in the present case is significantly different from the electric field $F_{0\nu}$ due to the presence of the ``square-root'' Maxwell term:
\be
 D_{0r}|_{r=a}-D_{0\eta}=q 
\lab{gauge field discontinuity}
\ee
where $D_{0r}=\left(1-\frac{f}{\sqrt{2}|F_{0r}|}\right)F_{0r}$ and $D_{0\eta}=\left(1-\frac{f}{\sqrt{2}|F_{0\eta}|}\right)F_{0\eta}$.

\bigskip

We now consider the matching, through a thin spherical wall, of the Reissner-Nordstr{\"o}m-de-Sitter-type regime (denoted by ``+'') to the generalized Levi-Civita-Bertotti-Robinson regime (denoted by ``-''). First consider the discontinuity of $K_{\theta\theta}$. Using \rf{location},\rf{JCE} and \rf{surface energy tensor} we get 
\be
 K_{\theta \theta}^- - K_{\theta \theta}^+=4\pi \sigma a^2
\ee
For the Levi-Civita-Bertotti-Robinson space $g_{\theta\theta}=a^2=constant$, so that $K_{\theta \theta}^-=0$ according to \rf{extrinsic curvature}. Therefore $ K_{\theta \theta}^+=-4\pi \sigma a^2$; since on the outside $K_{\theta \theta}^+=\h \pa_\xi g_{\theta\theta}=a\sqrt{A_{o}(a)}$ (using $d\xi = dr/\sqrt{A_o}$ there), this gives
\be
 \sqrt{A_{o}(a)}=-4\pi \sigma a
 \lab{theta component}
\ee
where $-A_o$ denotes the $0$-$0$ component of the metric ``outside'', in the $r>a$ region. So, if we are dealing with a standard static space-time outside (for example, in the case of Schwarzschild space, using only region I), the above equation implies $\sigma<0$. That is, the matching of the tube space-time with the ``normal'' outside space requires negative energy densities. We could insist on $\sigma>0$, but this requires the use of all regions of Kruskal space \ct{eduardo-wh} (this allows the square root in \rf{theta component} to be negative \ct{eduardo-wh}), and then the resulting wormhole is not traversable; we will not follow this approach in this paper.
\smallskip

The $\phi\phi$ component of equation \rf{JCE} gives the same information, due to the spherical symmetry of the problem.
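As a simple numerical illustration of \rf{theta component} (a sketch with illustrative values of the mass and shell radius, in the $G_N=1$ units used throughout), one may evaluate the surface energy density required to glue a Schwarzschild exterior, i.e. the $Q=0$, $\Lambda_{eff}=0$ case of \rf{RN-dS+const-electr}, to the tube region:
\begin{verbatim}
import numpy as np

# Illustrative parameters (units G_N = 1): exterior mass m, shell radius a > 2m.
m, a = 1.0, 5.0

# Exterior metric function: Schwarzschild case of eq. (RN-dS+const-electr),
# i.e. Q = 0 and Lambda_eff = 0.
def A_out(r):
    return 1.0 - 2.0 * m / r

# theta-theta junction condition sqrt(A_o(a)) = -4 pi sigma a,
# using K^-_{theta theta} = 0 on the tube side.
sigma = -np.sqrt(A_out(a)) / (4.0 * np.pi * a)
print("required surface energy density sigma =", sigma)  # negative, as stated
\end{verbatim}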
The additional information will come from the $\tau\tau$ component, which reads 
\be
 K_{\tau \tau}^- - K_{\tau \tau}^+=4\pi (\sigma -2 \omega)
 \lab{tau component}
\ee
From $U^\mu n_\mu=0$ we have $U^\mu n_{\mu ; \nu}=-n_{\mu}U^{\mu}_{;\nu}$, and the $K_{\tau \tau}$ component then takes the form
\be
K_{\tau \tau}=-n_{\mu}\left(\frac{d U^\mu}{d\tau}+\Gamma^{\mu}_{\alpha\beta}U^{\alpha}U^{\beta}\right)
\lab{K tau tau}
\ee

Using this expression for the $\tau\tau$ component of the extrinsic curvature, its discontinuity equation gives
\be
 -\frac{1}{\dot{\eta}} \frac{d}{d \tau}\left(\sqrt{A_i(\eta)+\dot{\eta}^2} \right)+\frac{1}{2\sqrt{A_o(a)}}\frac{\partial A_o(r)}{\partial r} \Big\vert_{r=a}=4\pi (\sigma -2 \omega)
\ee
where $-A_i(\eta)$ denotes the $0$-$0$ component of the metric ``inside'', in the tube-like compactified region.

Using \rf{theta component},
\be
-\frac{1}{\dot{\eta}} \frac{d}{d \tau}\left(\sqrt{A_i(\eta)+\dot{\eta}^2} \right)-\frac{1}{8\pi \sigma a}\frac{\partial A_o(r)}{\partial r} \Big\vert_{r=a}=4\pi (\sigma -2 \omega)
\lab{equation for eta}
\ee
Multiplying by $\dot{\eta}$ and defining
\be
\Delta \equiv \frac{1}{8\pi \sigma a}\frac{\partial A_o(r)}{\partial r} \Big\vert_{r=a}+4\pi (\sigma -2 \omega) 
\ee
we can see that equation \rf{equation for eta} becomes a total derivative with respect to the proper time $\tau$,
\be
\frac{d}{d \tau}\left[\sqrt{A_i(\eta)+\dot{\eta}^2}+\Delta \eta \right]=0
\ee
which, upon integration, gives
\be
\sqrt{A_i(\eta)+\dot{\eta}^2}+\Delta \eta=E=const
\ee
Taking $\Delta \eta$ to the right-hand side and then squaring, we finally obtain
\be
\dot{\eta}^2=(\Delta \eta-E)^2-A_i(\eta)
\lab{eta potential}
\ee
where the general expression for $\Delta$, using the explicit form \rf{RN-dS+const-electr} for the uncompactified region, is 
\be
\Delta \equiv \frac{1}{4\pi \sigma a^4}\left(ma -Q^2-\frac{\Lambda_{eff}}{3}a^4\right)+4\pi (\sigma -2 \omega) 
\lab{diff of delta}
\ee

\smallskip

\section{Asymptotically flat, finite energy solutions imply hiding of the electric flux}

Let us assume that our ground state is just flat space, and that on top of it we would like to build finite energy excitations. In the first place this means that $\Lambda_{eff}=0$; but still, this is not enough to ensure asymptotic flatness or finite energy. Indeed, if $Q \neq 0$, the leading behaviour of the metric is not flat but rather of ``hedgehog'' type \ct{BV-hedge}, \ct{Ed-Rab-hedge}, with an energy-momentum tensor that decreases only as the inverse square of the radius at large distances; such solutions of course have infinite energy.

This is consistent with the notion that in a confining theory an isolated charge has an associated infinite energy. Here, if we add the horned particle to the problem, the isolated charge can have finite energy, provided it sends all the electric flux it produces to the tube region; this requires the vanishing of the external Coulomb part of the electric field, i.e., $Q=0$.

When $Q=0$, then, according to \rf{cornell-sol}, the electric field outside has magnitude $|F_{0r}|=\frac{f}{\sqrt{2}}$. Therefore, in the $r>a$ region the displacement field is equal to zero, $D_{0r}=0$.
Thus, with no Coulomb field, with our choice $\Lambda_{eff}=0$, and assuming the absence of magnetic fields, the outer region simply becomes a Schwarzschild solution supplemented by a vacuum electric field of constant magnitude $\frac{f}{\sqrt{2}}$. But for this value of the field strength the gauge field Lagrangian has a minimum, $L^{\pr}(F^2)=0$, so using only that $-F^2=f^2$ the equation of motion for the gauge field is satisfied automatically. That is to say, we no longer need the electric field to be radial: its orientation is completely arbitrary, once $-F^2=f^2$ is satisfied. In this disordered vacuum, where the electric field has constant magnitude but does not point in one fixed direction, a test charged particle will not be able to extract energy from the electric field; instead it will undergo a kind of Brownian motion, and therefore \emph{no} Schwinger pair-creation mechanism will take place.

\bigskip

From \rf{diff of delta} and the above discussion, $\Delta$ takes the following simple form
\be
\Delta\rightarrow \frac{m}{4\pi \sigma a^3}+4\pi (\sigma -2 \omega)
\ee
and from \rf{einstein-00}
\be
 \partial_\eta^2 A_i(\eta)=8\pi D(|c_F|)^2 
\ee
where $D(|c_F|)$ is the displacement field in the compactified region. This leads to a metric whose two-dimensional factor is of anti-de Sitter form,
\be
A_i(\eta)=1+4\pi D^2\eta^2
\lab{AntiDeSitter}
\ee
Equation \rf{eta potential} then becomes 
\be
 \dot{\eta}^2=E^2-1+(\Delta^2-4\pi D^2)\eta^2-2\Delta E\eta
\ee
which can be cast into the form 
\be
 \dot{\eta}^2+(4\pi D^2-\Delta^2)\left(\eta+\frac{\Delta E}{4\pi D^2-\Delta^2}\right)^2=\frac{4\pi D^2 E^2}{4\pi D^2-\Delta^2}-1
\ee

\begin{figure}[H]
 \centering
 \subfloat[stable]{\lab{fig:stable}\includegraphics[width=0.5\textwidth]{Sta}} 
 \subfloat[unstable]{\lab{fig:unstable}\includegraphics[width=0.5\textwidth]{Uns}}
 \caption{An illustration of the possible solutions. In the first one, (a), we have a stable solution with the ``energy'' bounded by $E^2>1-\left(\frac{\Delta}{\sqrt{4\pi}D}\right)^2\equiv E^2_{min}$. The second case, (b), presents an unstable solution; in this case there is no bound on the ``energy'', since $\frac{4\pi D^2 E^2}{4\pi D^2-\Delta^2}-1<0$ for all values of $E$.}
 \lab{potential}
\end{figure}

Going back to Eq.\rf{gauge field discontinuity}, the charge of the timelike shell, call it $K$, will be determined by the flux produced only into the tube region, $D(|c_F|)$. Using $D_{0r}=0$, we have $D_{0\eta}=-q=-K/4\pi a^2$. Notice that $D$, and therefore $q$, must be different from zero, since Eq.\rf{einstein-ij} together with $\Lambda_{eff}=0$ implies $|c_F|>\frac{f}{\sqrt{2}}$; otherwise the radius $a$ would be ill-defined or infinite.
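To make the stability classification of Fig.~\rf{potential} concrete, the following minimal Python sketch (the parameter values are purely illustrative) classifies the shell motion from the completed-square form above and, in the stable case, computes the turning points of $\dot{\eta}^2=E^2-1+(\Delta^2-4\pi D^2)\eta^2-2\Delta E\eta$:
\begin{verbatim}
import numpy as np

def turning_points(D, Delta, E):
    # Zeros of etadot^2 = E^2 - 1 + (Delta^2 - 4 pi D^2) eta^2 - 2 Delta E eta
    return np.roots([Delta**2 - 4.0 * np.pi * D**2, -2.0 * Delta * E, E**2 - 1.0])

def classify(D, Delta, E):
    """Illustrative helper: D is the tube displacement field, Delta the
    combination defined in the text, E the integration constant ("energy")."""
    if 4.0 * np.pi * D**2 > Delta**2:              # negative eta^2 coefficient
        E2min = 1.0 - Delta**2 / (4.0 * np.pi * D**2)
        if E**2 >= E2min:
            return "stable: eta oscillates between " + str(turning_points(D, Delta, E))
        return "forbidden: etadot^2 < 0 for all eta"
    return "unstable: runaway eta motion"

print(classify(D=1.0, Delta=1.0, E=2.0))   # 4 pi D^2 > Delta^2 -> bounded
print(classify(D=0.1, Delta=2.0, E=2.0))   # 4 pi D^2 < Delta^2 -> runaway
\end{verbatim}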
\smallskip

Using now the condition $\Lambda_{eff}=0$, we obtain from \rf{einstein-ij}
\be
\frac{1}{a^2} = 4\pi \left(|c_F| - \frac{f}{\sqrt{2}}\right)\left(|c_F| + \frac{f}{\sqrt{2}}\right) 
\lab{newa}
\ee

Expressing $|c_F|$ in terms of $D$, and then $D$ in terms of $|K|$, and taking into account the signs of $c_F$ and $q$, we see that \rf{newa} provides a quadratic equation for the charge $|K|$ in terms of $a$ and $f$,
\be
0=\frac{|K|^2}{4\pi a^2}+\sqrt{2}f|K|-1
\ee
which has a positive solution for all possible values of those parameters, since the discriminant of this quadratic equation is manifestly positive; explicitly, the positive root is
\be
|K|=2\sqrt{2}\,\pi a^2 f \left(\sqrt{1+\frac{1}{2\pi a^2 f^2}}-1\right) \; .
\ee
\begin{figure}[H]
 \centering
 {\lab{fig:charge}\includegraphics[width=0.5\textwidth]{charge}} 
 \caption{The charge of the timelike shell at the throat of the wormhole as a function of the radius $a$.}
 \lab{charge}
\end{figure}
This completes the proof that there are indeed consistent horned particle solutions where all the electric flux of the charged particle at the throat flows into the tube region; furthermore, these are the only finite energy solutions.

\section{Discussion and Perspectives for future Research}

The charge-hiding effect by a wormhole, which was studied for the case where a gravity/gauge-field system is self-consistently interacting with a charged lightlike brane (LLB) as a matter source, has now been studied for the case of a timelike brane. From the demand that no surfaces of infinite coordinate time redshift appear in the problem, we are led to a horned particle whose horn region is completely accessible from the outside region containing the $r\rightarrow\infty$ asymptotics, and vice versa, according not only to the traveller that goes through the shell (as was the case for the LLB), but also to a static external observer. This requires negative surface energy density for the shell sitting at the throat. We study a gauge field subsystem which is of a special non-linear form, containing a square root of the Maxwell term, and which has previously been shown to produce QCD-like confining gauge field dynamics in flat space-time. The condition of finite energy of the system, or of asymptotic flatness on one side of the horned particle, implies that the charged object sitting at the throat expels all the flux it produces into the other side of the horned particle, which turns out to be of a ``tube-like'' nature. An outside observer in the asymptotically flat universe therefore detects an apparently neutral object. The hiding of the electric flux in the horn region is the only possible way that a truly charged particle can still be of finite energy, which points to the physical relevance of such solutions, even though there is the need of negative energy density at the throat, which can be of quantum mechanical origin.

In addition to the ``hiding'' effect, one can also study ``confinement'' \ct{hide-confine}: instead of considering just the matching of an external uncompactified region with a compactified tube region, we consider two asymptotically flat regions connected by a tube region, with a brane at each of the two points where the corresponding uncompactified region is matched to the tube region; the system then contains a brane plus an associated ``antibrane'' at the other matching point.
In the case where lightlike branes are used at the matching points, the branes are, classically at least, located at fixed coordinate positions \ct{hide-confine}; for timelike branes this will generically not be the case, so the brane could in principle collide with its antibrane. These possibilities will be studied in a future publication.

We have seen, for zero $Q$, that in the exterior region any configuration satisfying ${\mathcal L^{\prime}(F^{2})} = 0$ (or $D = 0$ in the electrostatic case) is a solution of the gauge field equation, so that in the vacuum region we obtain a ``disordered ferroelectric state''. This degeneracy of the outside state is favorable from the point of view of the high entropy content of the configuration, since it means that there are a great many ways, infinitely many in fact, to achieve this matching with zero Coulomb field outside. This subject, and its presumably favorable consequences for the stability of this vacuum state with gauge field condensation, will be studied in future publications.

Going back to the hiding effect studied in this paper, it is interesting to consider whether these solutions, where the charged particle expels the flux exclusively in the direction of the tube region, can take place in nature, or whether this is just a mathematical exercise. This question naturally relates to whether the negative energy density at the throat is physically realizable. To start with, we know that negative energy densities can be achieved through quantum corrections, as has been discussed in \ct{FordRoman}, \ct{RomanFord}. These authors have found, however, by studying some field theory models, that to build up regions of negative energy density one must ``pay'' by building compensating regions with positive energy density elsewhere. More generically, this means that one cannot arbitrarily assign some negative energy density to some region of space and just blame quantum fluctuations for it. For explicit calculations showing that quantum fluctuations can be the origin of negative energy densities see \ct{Negative energy},\ct{Static negative energies}.

It is interesting that the gauge field model that produces confinement may also give the possibility of obtaining negative energy densities. In the solutions studied so far in this paper this has not been the case, because the electric fields involved in all of them are stronger than the vacuum value; recall, for example, that in the tube region $|c_F|>\frac{f}{\sqrt{2}}$. If field strengths with absolute values lower than the vacuum value could somehow be obtained (maybe as a result of quantum fluctuations), then negative energy densities could result. To see this, consider the flat space situation and recall that for static field configurations the model \rf{flatmodel} yields the electric displacement field $\overrightarrow{D} = \overrightarrow{E} - \dfrac{f}{\sqrt{2}}\dfrac{\overrightarrow{E}}{|\overrightarrow{E}|}$. The pertinent energy density for the electrostatic case is $ \dfrac{1}{2} \overrightarrow{E}^2$, and for the case where $\overrightarrow{E}$ and $\overrightarrow{D}$ point in opposite directions, which is satisfied if $E= |E|< \dfrac{f}{\sqrt{2}}$, we have $ \dfrac{1}{2} \overrightarrow{E}^2=\dfrac{1}{2} \overrightarrow{D}^2- \dfrac{f}{\sqrt{2}}D+\dfrac{f^2}{4} $, so that the term linear w.r.t. $D= |D|$ is now negative. For low values of $D$ this linear term dominates over the quadratic contribution, and we therefore get an energy density lower than that of the vacuum state, which is the state with $E=|E|= \dfrac{f}{\sqrt{2}}$ (or $D = 0$). With the appropriate bare cosmological constant chosen here, the energy density of the vacuum state is zero, so low electric field configurations then produce a negative energy density.
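A one-line numerical check of this statement (a sketch with an illustrative value of $f$, subtracting the vacuum energy density $f^2/4$ as implied by the choice of bare cosmological constant) is:
\begin{verbatim}
import numpy as np

# Antiparallel branch: E = f/sqrt(2) - D with 0 <= D <= f/sqrt(2); f illustrative.
f = 1.0
D = np.linspace(0.0, f / np.sqrt(2.0), 101)
u = 0.5 * D**2 - (f / np.sqrt(2.0)) * D + f**2 / 4.0   # energy density (1/2) E^2

# Subtract the vacuum value f^2/4 (the D = 0 state): the result is negative.
print("energy density below the vacuum everywhere on the branch:",
      np.all(u[1:] - f**2 / 4.0 < 0.0))
\end{verbatim}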
Whether this way of achieving negative energy densities can be used to achieve charge hiding by a horned particle remains an interesting subject for future research.

In the case of the charge hiding by a horned particle studied here, we have the special situation where hiding the electric flux behind the horn region, with no flux of $D$ going into the outside region, is the only possible way that a truly charged particle (in a model with confining dynamics) can still be of finite energy while remaining truly charged; this points to the physical relevance of such solutions, even though there is the need of negative energy density at the throat, which can be of quantum mechanical origin or perhaps even classical (by means of low electric field strengths). One can then argue that a variational approach to the problem, based on minimization of the energy, must indeed select the appropriate state that gives rise to the necessary negative energy density at the throat of the horned particle, so that the flux lines can be redirected into the horn region, making the finite energy solution possible.

\section*{Acknowledgments}
We would like to thank A. Kaganovich, E. Nissimov and S. Pacheva for very useful conversations.

\newpage

\section{Introduction}

The transverse Ising model (TIM) was developed by deGennes in 1963 to describe the ferroelectric transition in hydrogen-bonded materials like potassium dihydrogen phosphate (KDP) \cite{degennes63}. As suggested by its name, the model formally describes a system of magnetic Ising moments in a transverse magnetic field \cite{stinchcombe73}, and since its discovery it has become significant because it is one of the simplest models to exhibit a quantum phase transition \cite{Sachdev:2011}. The focus of this work is more practical; we explore the use of the TIM to describe the dielectric properties of SrTiO$_3$. Indeed, the TIM has been used widely to model the low-energy physics of systems in which local degrees of freedom can be represented by pseudospins \cite{stinchcombe73}. In KDP, for example, the $S=\frac{1}{2}$ Ising spin states represent the two degenerate positions available to each hydrogen atom, while the transverse field represents the quantum mechanical tunneling between the states.

Because the TIM starts from a picture of fluctuating local dipole moments, it naturally describes materials, like KDP, with order-disorder transitions. However, the model has also been applied to materials like SrTiO$_3$, which are close to a displacive ferroelectric transition.
While there are some clear discrepancies between the model and experiments \cite{Muller:1979wa}, the mean-field TIM nonetheless gives a useful quantitative phenomenology for the dielectric properties of both pure \cite{hemberger95,hemberger96} and doped \cite{kleemann00,kleemann02,kleemann98_di,wu03,guo12} SrTiO$_3$.

The local nature of the Ising pseudospins makes the TIM valuable as a model for inhomogeneous systems, including doped quantum paraelectrics \cite{kleemann00,kleemann02,kleemann98_di,wu03,guo12}, ferroelectric thin films \cite{wangcl92,sun08,oubelkacem09,wangCD10,lu13,li16}, superlattices \cite{wangCL00,yao02}, and various low-dimensional structures \cite{xin99,lang07,lu14}. However, we show here that the TIM, as it is conventionally formulated, fails to correctly describe SrTiO$_3$ whenever nanoscale inhomogeneity is important. Most egregiously, the TIM fails to predict the formation of a quantized two-dimensional electron gas (2DEG) at LaAlO$_3$/SrTiO$_3$ interfaces, in contradiction with both theory and experiments \cite{gariglio15}. The goal of this paper is to propose a modification that we believe captures the essential physics of spatial inhomogeneity, and to compare it to the conventional TIM for model SrTiO$_3$ thin films and interfaces.

In the TIM, the lattice polarization $P_{i}$ in unit cell $i$ is modelled by a pseudospin. This polarization is given by
\begin{equation} \label{P}
P_{i} = \mu \eta S^{(3)}_{i},
\end{equation}
where $\mu$ sets the scale of the electric dipole moment, $\eta = a^{-3}$ is the volume density of dipoles, and $a$ is the lattice constant. The pseudospin is usually taken to be $S=\frac 12$, and $S^{(3)}_{i}$ is the third component of the corresponding three-dimensional pseudospin vector $\bt{S}_{i}$. The other two components, $S^{(1)}_{i}$ and $S^{(2)}_{i}$, are fictitious degrees of freedom, with only the projection of ${\bf S}_i$ onto the $(3)$-axis corresponding to the physical polarization. (The unpolarized state is therefore described by the pseudospin lying entirely in the $(1)$-$(2)$ plane.) In a quantum model, $S^{(3)}_{i}$ is the expectation value of the operator $\hat{S}^{(3)}_{i}$, which is identical to the spin matrix $\hat{S}^{z}$ but which acts within pseudospin space.

The simplest version of the $S = \frac 12$ TIM is \cite{hemberger96}
\begin{equation} \label{TIM_orig}
\hat{H} = - \Omega \sum_{i} \hat{S}^{(1)}_{i} - J_{1} \sum_{\langle i, i' \rangle} \hat{S}^{(3)}_{i} \hat{S}^{(3)}_{i'} - \mu \sum_{i} E_{i} \hat{S}^{(3)}_{i},
\end{equation}
where $\Omega$ plays the role of a transverse magnetic field that flips the Ising spins, $J_1$ is a nearest-neighbour coupling constant with $\langle i,i' \rangle$ indicating nearest-neighbour sites, and $E_i$ is the electric field in unit cell $i$. For $J_1>0$, the model tends towards a ferroelectric state at low temperatures; however, this is limited by $\Omega$, which disorders the ferroelectric state. Under mean-field theory the model predicts a ferroelectric phase transition only if $\Omega < Z J_{1}$, where $Z$ is the coordination number of the lattice.

Although the TIM is only microscopically justified for order-disorder ferroelectrics, it is often used as a tool to characterize ferroelectrics of all types, and variations of this model have been applied to ferroelectricity in perovskites, including BaTiO$_{3}$ \cite{zhang00} and SrTiO$_3$ (STO) \cite{hemberger96}.
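To make the mean-field criterion $\Omega < ZJ_1$ quoted above concrete, the following minimal Python sketch (ours, not from the literature; arbitrary energy units with $k_\mathrm{B}=1$, and the Weiss field normalized as $h^{(3)} = 2ZJ_1 S^{(3)}$ so that the zero-temperature ordering condition is exactly $ZJ_1>\Omega$) iterates the $S=\frac 12$ self-consistency equation at $E=0$:
\begin{verbatim}
import numpy as np

# Illustrative parameters (arbitrary energy units, k_B = 1).
Omega, J1, Z = 1.0, 0.3, 6   # Z*J1 = 1.8 > Omega, so a transition is expected

def S3(T, guess=0.4, tol=1e-12, iters=200000):
    """Iterate S3 = (h3/2h) tanh(h/2T) with Weiss field
    h = (Omega, 0, h3), h3 = 2*Z*J1*S3, for pseudospin one-half."""
    m = guess
    for _ in range(iters):
        h3 = 2.0 * Z * J1 * m
        h = np.hypot(Omega, h3)
        new = 0.5 * (h3 / h) * np.tanh(h / (2.0 * T))
        if abs(new - m) < tol:
            break
        m = new
    return m

for T in (0.2, 0.6, 0.9):
    print(f"T = {T}: spontaneous S3 = {S3(T):.4f}")
# The polarization vanishes above the mean-field transition; with Z*J1 < Omega
# it vanishes at all T -- the quantum paraelectric regime relevant to SrTiO3.
\end{verbatim}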
As a phenomenological model, the TIM is more complex than simple Landau-Ginzburg-Devonshire theories; however, it is also more versatile. The TIM, for example, is particularly well-suited to doped quantum paraelectrics, namely Sr$_{1-x}$M$_x$TiO$_3$ with M typically representing Ca or Ba \cite{kleemann00,kleemann02,kleemann98_di,wu03,tao04,guo12}. In these materials, small dopant concentrations are sufficient to induce a ferroelectric transition. Several groups have successfully modeled these materials as binary alloys of SrTiO$_3$ and MTiO$_3$ with doping-independent model parameters \cite{kleemann02,kleemann98_di,wu03,tao04,guo12}.

The current work is motivated by the application of the TIM to metallic LaAlO$_{3}$/SrTiO$_{3}$ (LAO/STO) interfaces. These, and other related perovskite interfaces, have been widely studied since the discovery in 2004 that a 2DEG appears spontaneously at the interface when the LAO film is more than four unit cells thick \cite{ohtomo04}. This system is rich with interesting properties, including coexisting ferromagnetism and superconductivity \cite{Brinkman:2007fk,Reyren:2007gv,Dikin:2011gl}, nontrivial spin-orbit effects \cite{BenShalom:2010kv,Caviglia:2010jv}, a metal-insulator transition \cite{thiel06,Liao:2011bk}, gate-controlled superconductivity \cite{Caviglia:2008uh}, and a possible nematic transition at (111) interfaces \cite{Miao:2016hr,Davis:2017,Boudjada:2018,Boudjada:2019}. Furthermore, STO's proximity to the ferroelectric state has led to suggestions that quantum fluctuations shape its band structure \cite{atkinson17} and support superconductivity \cite{Edge:2015fj,Dunnett:2018}. More generally, there has been a growing appreciation that lattice degrees of freedom play a key role in shaping the electronic structure near LAO/STO interfaces \cite{Behtash:2016dt,Lee:2016dj,Gazquez:2017bu,raslan18}. With this in mind, the recent discovery that ferroelectric-like properties persist in some metallic perovskites \cite{Rischau:2017vj} naturally leads one to explore the effects of Ca or Ba doping on LAO/STO interfaces and, as described above, the TIM provides a natural framework in which to do this.

We found, however, that the TIM as it is usually formulated in equation~\eref{TIM_orig} cannot reproduce the interfacial 2DEG and therefore fails to describe even the simple LAO/STO interface. In this work, we explain the reason for this failure and propose a modification to the TIM. In \sref{sec:FE}, we introduce the modified model and, by comparison with the standard Landau-Ginzburg-Devonshire (LGD) expansion, illustrate why the failure arises and how we fix it. As a simple example, we apply the modified model to ferroelectric thin films. In \sref{sec:interface}, we then apply the model to the LAO/STO interface, and show explicitly how the modification allows for the formation of the 2DEG.

\section{Inhomogeneous Ferroelectrics}
\label{sec:FE}

We begin by describing a modified TIM (\sref{sec:TIM}) that contains an additional anisotropic interaction; depending on its sign, this interaction generates either a pseudospin easy axis or easy plane. We obtain mean-field equations for the pseudospin and susceptibility, and by comparison to the LGD theory (\sref{sec:LGD}) we show that the Landau parameters are under-determined by the conventional TIM.
Essentially, the problem is that equation~\eref{TIM_orig} contains three adjustable parameters ($\Omega$, $J_1$, and $\mu$), while the simplest LGD model requires four parameters to describe an inhomogeneous system. The additional interaction in the modified TIM fixes this discrepancy. In sections~\ref{sec:fit} and \ref{sec:J1} we obtain fits to the model parameters for the case of STO. As a simple application, in \sref{sec:FEfilm} we explore how the new term modifies the polarization distribution of a ferroelectric thin film.

\subsection{The Modified TIM}
\label{sec:TIM}

The modified Hamiltonian for general pseudospin $S$ is
\begin{eqnarray} \label{TIM_full}
\fl \hat{H} = - \Omega \sum_{i} \hat{S}^{(1)}_{i} - \frac{J_{1}}{2S} \sum_{\langle i,i' \rangle} \hat{S}^{(3)}_{i} \hat{S}^{(3)}_{i'} - \frac{J_\mathrm{an}}{2S} \sum_{i} \hat{S}^{(3)}_{i} \hat{S}^{(3)}_{i} - \mu \sum_{i} E_{i} \hat{S}^{(3)}_{i}.
\end{eqnarray}
This is equivalent to the Blume-Capel model in a transverse magnetic field \cite{Albayrak:2013}. The third term introduces an anisotropic pseudospin energy. If $J_\mathrm{an} > 0$, this term tends to align dipoles along the (3)-axis, making it an easy axis, which enhances the polarization; if $J_\mathrm{an} < 0$, the term tilts the dipole away from the (3)-axis, creating an easy plane and reducing the polarization.

The TIM is traditionally formulated with a spin-$\frac 12$ pseudospin. In that case, $\hat{S}^{(3)}_{i}$ is written in terms of a Pauli spin matrix, and $(\hat{S}^{(3)}_{i})^2$ is proportional to the identity operator. The new term therefore does not produce the desired anisotropy when $S = \frac{1}{2}$. This problem does not exist for higher spin models, and for this reason we formulate the TIM in terms of a general pseudospin $S$. However, we will show below that at the mean-field level, the model provides nearly the same results for any value of $S$, and for simplicity we revert to $S=1$ when we show results as a way of gaining insight into the general case.

Applying mean-field theory to equation~\eref{TIM_full} gives the following self-consistent expression for $S^{(3)}_{i}$:
\begin{equation} \label{S3}
S^{(3)}_i = \frac{S h^{(3)}_i}{h_i} f_{S}(h_i),
\end{equation}
where
\begin{equation} \label{f_S}
f_{S}(h_i) = \frac{1}{S} \frac{\sum\limits_{l=-S}^{S} l \rme^{\beta h_i l}}{\sum\limits_{n = -S}^{S} \rme^{\beta h_i n}} = B_S(\beta h_i S),
\end{equation}
$B_S(x)$ is the Brillouin function, $\beta = (k_\mathrm{B} T)^{-1}$, $T$ is temperature, $h_{i} = | \bt{h}_{i} |$, and $h_i^{(3)}$ is the $(3)$-component of the Weiss mean field for lattice site $i$,
\begin{equation} \label{h_i}
\textbf{h}_i = \left( \Omega, 0, \frac{J_{1}}{S} \sum_{i'} S^{(3)}_{i'} + \frac{J_\mathrm{an}}{S} S^{(3)}_i + \mu E_{i} \right).
\end{equation}
The summation $\sum_{i'}$ is a sum over the nearest neighbours of site $i$, and therefore depends on whether pseudospin $i$ is in a surface or bulk layer.

We linearize equation~\eref{S3} to obtain the condition that ensures ferroelectricity. In the uniform case,
\begin{equation} \label{h_uniform}
\bt{h} = \left( \Omega, 0, \frac{J_{0}}{S} S^{(3)} + \mu E \right),
\end{equation}
where
\begin{equation} \label{J0}
J_0 = ZJ_1 + J_\mathrm{an},
\end{equation}
for coordination number $Z$.
At zero temperature, $f_{S}(h_{i}) \rightarrow 1$, and from equation~\eref{S3} the model therefore predicts a paraelectric-ferroelectric phase transition when
\begin{equation}
S^{(3)} = \frac{J_0 S^{(3)}}{\Omega}.
\end{equation}
From this one sees that, for any $S$, a ferroelectric transition occurs at nonzero temperature only when $J_0 > \Omega$. In the case of a paraelectric like STO, $J_0 < \Omega$.

\begin{figure}[]
	\centering
	\includegraphics[width=0.7\linewidth]{figure1}
	\caption{Inverse dielectric susceptibility versus temperature for SrTiO$_{3}$, modelled using three-, four- and five-component pseudospins. The fitting parameters were found separately for each pseudospin (\tref{tab:pars}). {\it{Inset}}: The SrTiO$_3$ unit cell is illustrated, showing that the polarization is primarily due to the soft phonon mode (black arrows), in which the oxygen cage moves opposite to the titanium ions. The inset is re-published from \cite{atkinson17}.}
	\label{fig:STOX_compspin}
\end{figure}

To show that the choice of $S$ has a small effect at the mean-field level, the uniform inverse dielectric susceptibility of STO is plotted for different values of $S$ in \fref{fig:STOX_compspin}. From equation~\eref{P}, the susceptibility for a weak uniform electric field $E$ is 
\begin{equation} \label{X_gen}
\chi (T) = \left . \frac{1}{\epsilon_0} \frac{dP}{dE} \right|_{E=0} =\left . \frac{\mu \eta}{\epsilon_{0}} \frac{d S^{(3)}}{dE} \right|_{E=0},
\end{equation}
where $dS^{(3)}/dE$ is obtained from equation~\eref{S3} with $\bt{h}$ given by equation~\eref{h_uniform}. \Fref{fig:STOX_compspin} shows results for $S=1$, $S=\frac 32$ and $S=2$. The fitting parameters $J_0$, $\mu$, and $\Omega$ depend on the value of $S$ and were determined by fitting to the experimental susceptibility, as described in \sref{sec:fit} below. (Note that $J_1$ is not explicitly used here because the calculations are for bulk STO.) The values of all these parameters are listed in \tref{tab:pars}.

Because the model was fitted to low- and high-temperature susceptibilities, the curves in \fref{fig:STOX_compspin} are expected to be close in value in these limits. However, they also differ only slightly in between, indicating that STO is well-described by the simplest case shown, $S=1$, when using mean-field theory. In particular, the model accurately captures both Curie-Weiss behaviour at high temperature, and the saturation of the susceptibility at low temperature (where the ferroelectric transition is suppressed by quantum fluctuations).

\begin{table}
	\caption{Model parameters for SrTiO$_3$. Parameters are obtained from fits to the experimental susceptibility~\cite{sakudo71} and phonon dispersion \cite{cowley64}.}
	\centering
	\begin{indented}
		\item[]\begin{tabular}{c c c c}
			\br
			& Spin-1 & Spin-3/2 & Spin-2 \\
			\mr
			$\Omega$ (meV) & 4.41 & 3.53 & 2.94 \\
			$J_{0}$ (meV) & 3.88 & 3.10 & 2.58 \\
			$J_{1}$ (meV) & 30-130 & 40-160 & 50-200 \\
			$\mu$ ($e$\AA) & 1.88 & 1.37 & 1.09 \\
			\br
		\end{tabular}
	\end{indented}
	\label{tab:pars}
\end{table}

\subsection{Comparison to the Landau-Ginzburg-Devonshire Expansion}
\label{sec:LGD}

While equation~\eref{S3} is the fundamental self-consistent equation for $S^{(3)}_{i}$, the role each parameter plays in determining the pseudospin is not transparent.
For example, it is not immediately evident from this expression why the conventional TIM (with $J_\\mathrm{an}=0$) is unable to describe inhomogeneous systems. To explore this point, we expand equation~\\eref{S3} in powers of $h_{i}^{(3)}$ and compare the coefficients to those in a typical LGD expansion. We show that the transition temperature and correlation length cannot be set independently unless $J_\\mathrm{an}$ is nonzero.\n\nThe typical LGD free energy with order parameter $S^{(3)}(\\bt{r})$ has the form\n\\begin{eqnarray} \\label{F}\n\\mathcal{F} &=& \\eta \\int d^{3} r\\, \\Bigg[ \\frac{A}{2} \\left( S^{(3)}(\\bt{r}) \\right)^{2} + \\frac{B}{4} \\left( S^{(3)}(\\bt{r}) \\right)^{4} \\nonumber \\\\ && + \\frac{C}{2} \\left( \\nabla S^{(3)}(\\bt{r}) \\right)^{2} - D E(\\bt{r}) S^{(3)}(\\bt{r}) \\Bigg].\n\\end{eqnarray}\n$E(\\bt{r})$ is the electric field, $A$, $B$, $C$ and $D$ are the LGD coefficients that describe the material, and $\\eta$ is the inverse volume of a unit cell. Minimizing equation~\\eref{F} with respect to $S^{(3)}(\\bt{r})$ gives the familiar equation\n\\begin{equation} \\label{Fmin}\n0 = A S^{(3)}(\\bt{r}) + B \\left( S^{(3)}(\\bt{r})\\right)^{3} -C \\nabla^{2} S^{(3)}(\\bt{r}) - D E(\\bt{r}),\n\\end{equation}\nwhich can be solved for the pseudospin. The critical temperature is set by $A$, which changes sign at the ferroelectric transition, while $B$ determines the zero-temperature polarization. In the paraelectric phase, $D$ is determined by the dielectric susceptibility and $C$ and $A$ set the correlation length $\\xi=\\sqrt{C\/A}$.\n\nWe expand equation~\\eref{S3} in powers of $h_{i}^{(3)}$ to obtain\n\\begin{equation} \\label{expand_2}\nS^{(3)}_i = \\frac{S f_S (\\Omega)}{\\Omega} h^{(3)}_i + \\frac{1}{2 \\Omega} \\left( \\frac{d}{d\\Omega} \\frac{Sf_{S}(\\Omega)}{\\Omega} \\right) \\left( h^{(3)}_{i} \\right)^{3},\n\\end{equation}\nwhere ${h}_i$ and $h_i^{(3)}$ are defined by equation~\\eref{h_i}. To proceed further, we note that the discretized second derivative of a function $f_j=f(x_j)$ is\n\\begin{equation}\n\\left . \\frac{d^{2}f(x) }{dx^{2}} \\right |_{x=x_j} \\approx \\frac{f_{j-1} - 2f_j + f_{j+1}}{a^{2}}.\n\\end{equation}\nThen, equation~\\eref{h_i} can be re-written as\n\\begin{equation}\nh^{(3)}_i = \\frac{J_0}{S} S^{(3)}_i + \\frac{J_1}{S} a^{2} \\nabla^{2} S^{(3)}_i + \\mu E_i,\n\\end{equation}\nwith $J_0$ defined by equation~\\eref{J0}. This can now be substituted into equation~\\eref{expand_2}.\n\nKeeping only terms that are directly comparable to those in equation~\\eref{Fmin}, we obtain\n\\numparts\n\\begin{eqnarray}\nA & = \\frac{\\Omega}{Sf_{S}(\\Omega)} - \\frac{J_{0}}{S}, \\label{coeff_A} \\\\\nB & = -\\frac{1}{2 S f_{S}(\\Omega)} \\left( \\frac{J_{0}}{S} \\right)^{3} \\frac{d}{d\\Omega} \\left( \\frac{S f_S (\\Omega)}{\\Omega} \\right), \\label{coeff_B} \\\\\nC & = \\frac{J_{1} a^{2}}{S}, \\label{coeff_C} \\\\\nD & = \\mu. \\label{coeff_D}\n\\end{eqnarray}\n\\endnumparts\nThese equations show that $A$ and $B$ are determined by combinations of $J_0$ and $\\Omega$, while $C$ and $D$ are determined by $J_1$ and $\\mu$, respectively. The key point is that $J_0$ reduces to $ZJ_1$ for the conventional TIM, in which case $A$ and $C$ are not independent. Physically, this means that the correlation length, which sets the length scale over which the material responds to inhomogeneities, cannot be determined independently of the transition temperature and low-$T$ polarization. 
In other words, the four coefficients $A$, $B$, $C$ and $D$ are only described by three parameters, $\\Omega$, $J_{1}$ and $\\mu$.\n\nIn this case, the model predicts a significantly smaller correlation length at low temperatures than does the modified TIM. From equations~(\\ref{coeff_A}) and (\\ref{coeff_C}),\n\\begin{equation}\n\\xi = \\sqrt{\\frac{C}{A}} = \\sqrt{\\frac{J_1 a^2}{\\frac{\\Omega}{f_S(h)} - J_0}}.\n\\end{equation}\nAt low temperatures, $f_S(\\Omega) \\rightarrow 1$. In this case, the conventional TIM ($J_1 = J_0\/Z$) gives $\\xi\\approx4.3$~\\AA, independent of $S$. For $S=1$, the range of correlation lengths from the modified TIM, where the $J_1$ values are taken from \\tref{tab:pars}, is 2.9-6.1 nm, which is an order of magnitude larger. The pseudospin anisotropy $J_\\mathrm{an}$ is therefore an essential part of the TIM.\n\n\n\\subsection{Fitting $\\Omega$, $J_0$, and $\\mu$ for SrTiO$_3$}\n\\label{sec:fit}\n\nMost of the TIM parameters can be fit to existing susceptibility data. We do this for STO, as it will form the basis of our discussion in \\sref{sec:interface}. \n\nInserting equation~\\eref{S3} into equation~\\eref{X_gen}, we obtain the susceptibility\n\\begin{equation} \\label{X}\n\\chi(T,0) = \\frac{\\mu^{2} \\eta}{\\epsilon_{0}} \\frac{1}{L(h,T) - J_{0}\/S} \\Bigg\\vert_{E=0},\n\\end{equation}\nwhere $h = |\\bt{h}|$, $\\bt{h}$ is given by equation~\\eref{h_uniform}, and\n\\begin{eqnarray}\n\\fl L(h,T) = \\Bigg[ S \\left( \\frac{1}{h} - \\frac{\\left( h^{(3)} \\right)^{2}}{h^{3}} \\right) f_S(h) + S \\frac{\\left( h^{(3)} \\right)^{2}}{h^{2}} \\frac{\\partial f_S(h)}{\\partial h} \\Bigg]^{-1}.\n\\end{eqnarray}\n\nAt high temperatures, this expression simplifies. Taking $L(h,T)|_{T \\rightarrow \\infty}=[\\beta S(S+1)\/3]^{-1}$, equation~\\eref{X} obtains a Curie-Weiss form, \n\\begin{equation} \\label{X_0}\n\\chi(T,0) = \\frac{\\mu^{2} \\eta S(S+1) }{3 \\epsilon_{0} k_B} \\frac{1}{T - T_\\mathrm{CW}},\n\\end{equation}\nwhere $T_\\mathrm{CW} = (S+1) J_{0}\/3k_\\mathrm{B} \\approx 30$~K \\cite{sakudo71} is the transition temperature implied by the high-temperature susceptibility. (In STO, this transition is suppressed by quantum fluctuations.) $J_0$ and $\\mu$ are thus obtained by matching equation~\\eref{X_0} to high-$T$ experiments.\n\nAt low $T$, equation~\\eref{X} takes one of two forms depending on whether the system is ferroelectric or not. For a ferroelectric, $\\Omega$ can be found from the behaviour of the susceptibility at $T \\rightarrow T_\\mathrm{c}^{+}$ for critical temperature $T_\\mathrm{c}$. In this case, $h^{(3)}=0$, $h=\\Omega$, and setting the denominator of equation~\\eref{X} to zero gives a self-consistent equation for $\\Omega$,\n\\begin{equation}\n\\Omega = J_0 f_S(\\Omega).\n\\end{equation}\n\nFor a paraelectric like STO, on the other hand, we obtain $\\Omega$ from the zero-temperature susceptibility. In this limit $L(h,T)|_{T \\rightarrow 0} = \\Omega\/S$ and equation~\\eref{X} may easily be inverted for $\\Omega$. The values of $J_{0}$, $\\mu$ and $\\Omega$ for STO determined from equation~\\eref{X} are listed in \\tref{tab:pars}.\n\nThe closeness in value between $J_0$ and $\\Omega$ for STO can be understood from their physical meanings. $J_0$ sets the temperature at which a transition would occur in the absence of quantum fluctuations, while $\\Omega$ sets the scale of the quantum fluctuations; that these two are close in value is because STO is close to a ferroelectric transition. 
Further, since the Curie-Weiss temperature is small, both of these parameters are small.\n\n\n\\subsection{Estimating $J_{1}$ for SrTiO$_{3}$}\n\\label{sec:J1}\n\nAs was shown in \\sref{sec:LGD}, $J_{1}$ sets the scale of the gradient term $C$ in the LGD expansion, and it can therefore be obtained from quantities related to spatial gradients of the polarization. In perovskites, the polarization is closely connected to an optical phonon mode \\cite{cowley64,atkinson17}, pictured in \\fref{fig:STOX_compspin}. One can therefore obtain $J_1$ from the phonon dispersion.\n\nKey to this analysis is that the optical phonon has a large dipole moment that is represented by the TIM pseudospins. The phonon spectrum can therefore be obtained from the dynamical pseudospin correlation function. In the paraelectric phase, the term proportional to $\\Omega$ in equation~\\eref{TIM_full} ensures that the pseudospins lie primarily along the (1)-axis. Perturbations of this state can be viewed as the magnons of a fictitious ferromagnetic material in which the magnetic moments align along the (1)-axis. The phonons can then be described as spin-wave excitations.\n\nThe spin operators are difficult to work with, however, and it is useful to bosonize them. This is achieved with the Holstein-Primakoff transformation \\cite{holstein,holstein40}. This transformation maps the pseudospin operators on to the boson creation and annihilation operators, $\\hat{a}^{\\dag}_{i}$ and $\\hat{a}_{i}$. Pseudospin projections on the (3)-axis are then modelled as boson excitations, with a pseudospin that is entirely polarized along the (1)-axis represented by the vacuum state.\n\nIn this representation, the raising and lowering operators for site $i$ differ from the typical set by a cyclic permutation of the pseudospin axes. We then define \\cite{holstein}\n\\numparts\n\\begin{eqnarray} \\label{S+}\n\\hat{S}^{+}_{i} & = \\hat{S}^{(2)}_{i} + i \\hat{S}^{(3)}_{i} \\\\\n& = \\sqrt{2S} \\left( 1 - \\frac{1}{2S} \\hat{a}^{\\dag}_{i} \\hat{a}_{i} \\right)^{1\/2} \\ \\hat{a}_{i},\n\\end{eqnarray}\n\\endnumparts\n\\numparts\n\\begin{eqnarray} \\label{S-}\n\\hat{S}^{-}_{i} & = \\hat{S}^{(2)}_{i} - i \\hat{S}^{(3)}_{i} \\\\ \n& = \\sqrt{2S} \\hat{a}^{\\dag}_{i} \\left( 1 - \\frac{1}{2S} \\hat{a}^{\\dag}_{i} \\hat{a}_{i} \\right)^{1\/2}.\n\\end{eqnarray}\n\\endnumparts\nSince the polarization lies close to the (1)-axis in the paraelectric state, only low bosonic excitation states are relevant. In this case, $\\hat{S}^{+}_{i} \\approx \\sqrt{2S} \\hat{a}_{i}$ and $\\hat{S}^{-}_{i} \\approx \\sqrt{2S} \\hat{a}^{\\dag}_{i}$. Additionally, the (1)-component of the pseudospin is defined as \\cite{holstein}\n\\begin{equation} \\label{S1}\n\\hat{S}^{(1)}_{i} = S - \\hat{a}^{\\dag}_{i} \\hat{a}_{i},\n\\end{equation}\nand the (3)-component is\n\\begin{eqnarray} \\label{S3_a}\n\\hat{S}^{(3)}_{i} & = \\frac{1}{2i} \\left( \\hat{S}^{+}_{i} - \\hat{S}^{-}_{i} \\right) \\\\\n& = \\frac{\\sqrt{2S}}{2i} \\left( \\hat{a}_{i} - \\hat{a}^{\\dag}_{i} \\right).\n\\end{eqnarray}\nBecause $\\hat{S}^{(3)}_{i}$ represents atomic displacements, $\\hat{a}_i$ and $\\hat{a}^{\\dag}_i$ are therefore phonon operators.\n\nEquations~\\eref{S1} and \\eref{S3_a} can now be substituted into equation~\\eref{TIM_full}. 
We transform to reciprocal space using $\\hat{a}_{i}$~=~$\\frac{1}{\\sqrt{N}}\\sum_{\\bt{k}} e^{i\\bt{k} \\cdot \\bt{r}_{i}} \\hat{b}_{\\bt{k}}$:\n\\begin{eqnarray}\n\\fl \\hat{H} = - N \\left( \\Omega S + \\frac{J_\\mathrm{an}}{4} \\right) + \\sum_{\\bt{k}} \\Gamma_{\\bt{k}} \\hat{b}^{\\dag}_{\\bt{k}} \\hat{b}_{\\bt{k}} + \\sum_{\\bt{k}} \\frac{\\Delta_{\\bt{k}}}{2} \\left( \\hat{b}_{\\bt{k}} \\hat{b}_{-\\bt{k}} + \\hat{b}^{\\dag}_{\\bt{k}} \\hat{b}^{\\dag}_{-\\bt{k}} \\right),\n\\end{eqnarray}\nwhere $\\gamma_{\\bt{k}} = 2 \\cos (k_{x}a) + 2 \\cos (k_{y}a) + 2 \\cos (k_{z} a)$, $N$ is the total number of lattice sites, and\n\\numparts\n\\begin{eqnarray}\n\\Delta_{\\bt{k}} & = \\frac{J_{1}}{2} \\gamma_{\\bt{k}} + \\frac{J_\\mathrm{an}}{2}, \\\\\n\\Gamma_{\\bt{k}} & = \\Omega - \\Delta_{\\bt{k}}.\n\\end{eqnarray}\n\\endnumparts\nNote that we have set $E = 0$ here, since the phonon spectrum is measured at zero field.\n\nIt is convenient to formulate the dynamics of the pseudomagnons using Green's functions. The Green's functions are correlation functions between the pseudomagnon creation and annihilation operators, and the equations of motion of the Green's functions therefore include the equations of motion of $\\hat{b}_{\\bt{k}}$ and $\\hat{b}^{\\dag}_{\\bt{k}}$. The spin-wave excitation spectrum can then be obtained from the poles of the Green's function.\n\nThe Green's function and its equation of motion are, respectively,\n\\begin{equation} \\label{G1}\nD_{1}(\\bt{k}, t) = -i \\left\\langle \\left[ \\hat{b}_{\\bt{k}}(t), \\hat{b}^{\\dag}_{\\bt{k}}(0) \\right] \\right\\rangle \\theta (t),\n\\end{equation}\n\\begin{equation} \\label{G1_eom}\n\\frac{dD_{1}(\\bt{k}, t)}{dt} = -i \\delta (t) - i \\Gamma_{\\bt{k}} D_{1}(\\bt{k}, t) - i \\Delta_{\\bt{k}} D_{2}(\\bt{k}, t),\n\\end{equation}\nwhere $\\theta(t)$ is the step function. The second Green's function that appears in equation~\\eref{G1_eom} and its equation of motion are\n\\begin{equation} \\label{G2}\nD_{2}(\\bt{k}, t) = -i \\left\\langle \\left[ \\hat{b}^{\\dag}_{-\\bt{k}} (t), \\hat{b}^{\\dag}_{\\bt{k}} (0) \\right] \\right\\rangle \\theta (t),\n\\end{equation}\n\\begin{equation} \\label{G2_eom}\n\\frac{dD_{2}(\\bt{k}, t)}{dt} = i \\Gamma_{-\\bt{k}} D_{2}(\\bt{k}, t) + i \\Delta_{\\bt{k}} D_{1}(\\bt{k}, t).\n\\end{equation}\nFourier transforming equations~\\eref{G1_eom} and \\eref{G2_eom} in time and solving for $D_{1}(\\bt{k}, \\omega_{\\bt{k}})$ gives the following expression for the Green's function:\n\\begin{equation} \\label{G(om)}\nD_{1}(\\bt{k}, \\omega_{\\bt{k}}) = \\frac{\\omega_{\\bt{k}} + \\Gamma_{\\bt{k}}}{\\omega^{2}_{\\bt{k}} - \\Gamma_{\\bt{k}}^{2} + \\Delta_{\\bt{k}}^{2}}.\n\\end{equation}\nThe phonon dispersion is therefore given by\n\\numparts\n\\begin{eqnarray}\n\\omega_{\\bt{k}} & = \\sqrt{\\Gamma_{\\bt{k}}^{2} - \\Delta_{\\bt{k}}^{2}} \\\\\n& = \\sqrt{\\Omega \\left( \\Omega - 2\\Delta_{\\bt{k}} \\right)}.\n\\end{eqnarray}\n\\endnumparts\n\nWe obtain an expression for $J_{1}$ by comparing the frequency at $k_{x} = \\pi\/2$ and the zone centre:\n\\begin{equation} \\label{J1}\nJ_{1} = \\frac{\\hbar^{2} \\left( \\omega^{2}_{\\pi\/2} - \\omega^{2}_{0} \\right)}{\\Omega \\left( \\gamma_{0} - \\gamma_{\\pi\/2} \\right)},\n\\end{equation}\nwhere the subscripts $\\pi\/2$ and 0 indicate $\\bt{k}=(\\pi\/2,0,0)$ and $\\bt{k}=(0,0,0)$, respectively. Since $\\Omega$ is already known from bulk susceptibility data, $J_{1}$ can be estimated solely using the material's phonon dispersion. 
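To make this procedure concrete, the following sketch implements equation~\eref{J1} together with the dispersion derived above, and verifies on synthetic input that the two are mutually consistent. All numbers are placeholders chosen for illustration rather than the measured values of \cite{cowley64}; momenta are quoted in units of $1/a$, so the comparison point is $k_x a = \pi/2$.
\begin{verbatim}
import numpy as np

def gamma_k(kx, ky, kz):
    """Simple-cubic structure factor gamma_k (momenta in units of 1/a)."""
    return 2.0 * (np.cos(kx) + np.cos(ky) + np.cos(kz))

def soft_mode_energy(kx, ky, kz, Omega, J1, Jan):
    """Phonon energy hbar*omega_k = sqrt(Omega*(Omega - 2*Delta_k)), in meV."""
    Delta = 0.5 * J1 * gamma_k(kx, ky, kz) + 0.5 * Jan
    return np.sqrt(Omega * (Omega - 2.0 * Delta))

def J1_from_dispersion(E0, Epi2, Omega):
    """Equation (J1), with the phonon energies E_k = hbar*omega_k in meV."""
    g0, gpi2 = gamma_k(0, 0, 0), gamma_k(np.pi / 2, 0, 0)  # = 6 and 4
    return (Epi2**2 - E0**2) / (Omega * (g0 - gpi2))

# Round trip on made-up parameters (meV): generate a dispersion, refit J1.
Omega, J0, J1_true = 4.41, 3.88, 100.0
Jan = J0 - 6.0 * J1_true                   # equation (J0) with Z = 6
E0 = soft_mode_energy(0, 0, 0, Omega, J1_true, Jan)   # = sqrt(Omega*(Omega - J0))
Epi2 = soft_mode_energy(np.pi / 2, 0, 0, Omega, J1_true, Jan)
print(J1_from_dispersion(E0, Epi2, Omega))  # recovers J1_true = 100.0
\end{verbatim}
Note that the zone-centre energy reduces to $\sqrt{\Omega(\Omega - J_0)}$, which softens as $J_0 \rightarrow \Omega$, consistent with the proximity of STO to a ferroelectric transition.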
Using neutron scattering data from \\cite{cowley64}, we obtained a range of $J_{1}$ values between 30 and 200~meV depending on $S$ and on how the fit was made. As will be shown in \\sref{sec:interface}, these estimates are somewhat lower than the values required to produce a 2DEG at the LAO\/STO interface, which is likely a limitation of the TIM. Nonetheless, this calculation shows that $J_{1}$ is orders of magnitude larger than the value $J_1 = J_0\/Z$ that is implicit in the conventional TIM.\n\nThis large discrepancy between $J_1$ and $J_0$ is a key feature of STO, and that there is more than an order of magnitude difference between their values can be related to their different physical origins. Further, from equation~\\eref{J0} it follows that $J_\\mathrm{an}$ is not small; rather, it is negative and nearly cancels $ZJ_1$. $J_\\mathrm{an}$ would however play less of a role in a material with a high transition temperature, where $J_1$ and $J_0$ would be closer in value.\n\n\n\n\n\\subsection{Ferroelectric Thin Films}\n\\label{sec:FEfilm}\n\nWe first model the polarization in ferroelectric thin films as a simple application of the modified TIM. A ferroelectric's properties can vary drastically between the bulk and thin-film forms, and the origins and applications of these differences have been increasingly studied in recent years \\cite{setter06}. Ferroelectric thin films provide significant advantages in electronic devices such as increased efficiency in photovoltaic cells \\cite{liu16,zenkevich14,kutes14} and decreased power usage in non-volatile memory storage \\cite{muller11}.\n\nWe focus on weakly ferroelectric materials, like those obtained by doping STO with $^{18}$O, Ca, or Ba. We take $S=1$, and we thus fix the parameters $J_{0}=3.88$~meV and $\\mu=1.88$~$e$\\AA, which were determined in section~\\ref{sec:fit} for STO. To obtain a ferroelectric transition, we take $\\Omega=3.2$~meV, which yields a bulk transition temperature $T_\\mathrm{c}\\approx20$~K,\nsimilar to what is observed in Sr$_{1-x}$Ca$_x$TiO$_3$. We treat $J_1$ as an adjustable parameter.\n\n\nThin films have a layered geometry that simplifies calculations. Taking each layer to be one unit cell thick, and assuming translational invariance within the $xy$-plane, the pseudospin, electric field, and polarization depend only on the layer index $i_{z}$ (instead of site $i$). Equation~\\eref{S3} becomes\n\\begin{equation} \\label{S3_i}\nS^{(3)}_{i_z} = \\frac{S h^{(3)}_{i_z}}{h_{i_z}} f_{S}(h_{i_z}),\n\\end{equation}\nwhere $h_{i_z} = |\\bt{h}_{i_z}|$ and the Weiss mean field is \n\\begin{equation} \\label{eq:hz}\n\\textbf{h}_{i_z} = \\left( \\Omega, 0, \\frac{J_{1}}{S} \\sum_{i'} S^{(3)}_{i'} + \\frac{J_\\mathrm{an}}{S} S^{(3)}_{i_z} + \\mu E_{i_z} \\right),\n\\end{equation}\nwhere, for the cubic STO crystal structure, the sum over nearest neighbours of a pseudospin in layer $i_z$ is $\\sum_{i'}S_{i'} = 4S^{(3)}_{i_z} + S^{(3)}_{i_z-1} + S^{(3)}_{i_z+1}$. The lattice polarization in layer $i_{z}$ is then\n\\begin{equation} \\label{P_i}\nP_{i_{z}} = \\mu \\eta S^{(3)}_{i_{z}}.\n\\end{equation}\n(Recall that $\\mu S$ is the maximum dipole moment per unit cell and $\\eta$ is the dipole moment density.)\n\nWe assume a short-circuit geometry, in which the top and bottom surfaces of the film are connected by a wire that maintains a zero voltage difference between them. This geometry is commonly adopted to minimize the effects of depolarizing electric fields. 
We thus have two kinds of charge: a bound charge $\\rho^\\mathrm{b}(z) = - \\partial_z P_\\mathrm{tot}(z)$ coming from a sum of atomic and lattice polarizations, and the external charges $\\rho^\\mathrm{ext}(z)$ on the top and bottom electrodes.\n\nThe electric field in equation~\\eref{eq:hz} is obtained from these charges via Gauss' law, \n\\begin{equation}\n\\epsilon_{0} \\frac{d}{dz} E(z) = \\rho^\\mathrm{b}(z) + \\rho^\\mathrm{ext}(z).\n\\end{equation}\nWe break the polarization into lattice and atomic pieces, $P(z)$ and $\\epsilon_0 \\alpha E(z)$ respectively, with $\\alpha$ the atomic polarizability, and defining the optical dielectric constant $\\epsilon_\\infty=\\epsilon_0(1+\\alpha)\\approx 5.5 \\epsilon_0$ \\cite{raslan17,zollner00}, we obtain the usual expression \n\\begin{equation} \\label{Gauss}\n\\frac{d}{dz} \\left[ \\epsilon_{\\infty} E(z) + P(z) \\right] = \\rho^\\mathrm{ext}(z),\n\\end{equation}\nwhich can be integrated to find $E(z)$.\n\nThe charge density in the top and bottom electrodes is written as \n\\begin{equation}\n\\rho^\\mathrm{ext}(z) = \\frac{en}{a^{2}} [ \\delta(z) - \\delta(z-L) ],\n\\end{equation}\nwhere $L$ is the film thickness, and $n$ is the positive charge per 2D unit cell on the top electrode. Integrating equation~\\eref{Gauss} gives \n\\begin{equation} \\label{E_Gauss}\n\\epsilon_{\\infty} E(z) = - P(z) + \\frac{en}{a^{2}}.\n\\end{equation}\nA second integration, of equation~\\eref{E_Gauss} across the thickness of the film, gives\n\\begin{equation} \\label{eq:ena2}\n\\frac{en}{a^{2}} = \\frac{\\int_{0}^{L}dz P(z) - \\epsilon_{\\infty} V}{L},\n\\end{equation}\nwith $V$ the potential difference across the film. Using this to eliminate $en\/a^2$ in equation~\\eref{E_Gauss}, and setting $V=0$ for the short-circuit geometry, we obtain\n\\begin{equation} \\label{E}\nE(z) = \\frac{P_\\mathrm{ave} - P(z)}{\\epsilon_{\\infty}},\n\\end{equation}\nwith $P_\\mathrm{ave}$ the average polarization of the film. Equations~\\eref{eq:hz} and \\eref{E} are evaluated at discrete positions $z = i_z a$, and together with equation~\\eref{S3_i} form a closed set that can be solved self-consistently.\n\n\\begin{figure}[tb]\n\t\\includegraphics[width = 0.5\\linewidth]{figure2a}\n\t\\includegraphics[width = 0.5\\linewidth]{figure2b}\n\t\\caption{Polarization versus layer for 50-layer films (a) without and (b) with the depolarizing field included. Results are shown for $J_1$ values between 0.65~meV and 100~meV, with $T$~=~1~K, $J_{0}=3.88$~meV, $\\Omega =3.2$~meV and $\\mu=1.88$~$e$\\AA. $J_1=0.65$~meV corresponds to the conventional transverse Ising model ($J_\\mathrm{an} = 0$). For reference, the bulk polarization is $P_\\mathrm{bulk} = 29 \\ \\mu$C cm$^{-2}$. Here and throughout, results are shown for $S=1$.}\n\t\\label{fig:Ecomp}\n\\end{figure}\n\n\\Fref{fig:Ecomp} shows the results of simulations for a film that is $N_L = 50$ layers thick. The figure illustrates two main points: First, the results depend qualitatively on whether or not electric fields are included in the simulation, even in the short-circuit geometry (for which naive considerations suggest the field vanishes); second, for fixed $J_0$, the value of $J_1$ has a large impact on the polarization.\n\nThe effects of electric fields in thin films were discussed at length by Kretschmer and Binder \\cite{kretschmer79}, and the results in \\fref{fig:Ecomp} serve as a reminder of their importance. 
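Equations~\eref{S3_i}, \eref{eq:hz} and \eref{E} form a closed set that is simple to solve numerically. The sketch below illustrates one possible implementation for $S=1$ (parameters as in \fref{fig:Ecomp}); the generic root finder, the polarized initial guess, and the dropping of the missing neighbours at the two surfaces are implementation choices rather than part of the model.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

kB = 8.617333e-2                      # meV/K
e, eps0 = 1.602176634e-19, 8.8541878128e-12
a = 3.902e-10                         # STO lattice constant (m)
mu = 1.88 * e * 1e-10                 # 1.88 e*Angstrom, in C*m
eta, eps_inf = 1.0 / a**3, 5.5 * eps0
W = mu**2 * eta / eps_inf / e * 1e3   # strength of the mu*E term (meV), ~2e3

def B1(x):
    """Brillouin function for S = 1, vectorized, small-x limit included."""
    x = np.asarray(x, float)
    small = np.abs(x) < 1e-9
    xs = np.where(small, 1.0, x)      # dummy argument on the masked entries
    val = 1.5 / np.tanh(1.5 * xs) - 0.5 / np.tanh(0.5 * xs)
    return np.where(small, 2.0 * x / 3.0, val)

def residual(S3, J0, Omega, J1, T, include_E):
    """S3 minus the right-hand side of equation (S3_i), for S = 1."""
    beta, Jan = 1.0 / (kB * T), J0 - 6.0 * J1  # Jan from equation (J0), Z = 6
    pad = np.concatenate(([0.0], S3, [0.0]))   # missing neighbours at surfaces
    h3 = J1 * (4.0 * S3 + pad[:-2] + pad[2:]) + Jan * S3
    if include_E:
        h3 = h3 + W * (S3.mean() - S3)         # mu*E from equation (E), V = 0
    h = np.hypot(Omega, h3)
    return S3 - (h3 / h) * B1(beta * h)

NL = 50
S3 = fsolve(residual, 0.5 * np.ones(NL), args=(3.88, 3.2, 10.0, 1.0, True))
print(np.round(S3 * mu * eta * 100.0, 2))  # P per layer; 1 C/m^2 = 100 muC/cm^2
\end{verbatim}
Setting the flag \texttt{include\_E} to \texttt{False} reproduces profiles of the type shown in \fref{fig:Ecomp}(a), in which the polarization recovers its bulk value away from the surfaces.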
In \\fref{fig:Ecomp}(a), where electric fields are not included, the polarization is reduced at the surfaces and increases to its bulk value over a length scale set by the correlation length. In the ferroelectric phase, the correlation length is $\\xi=\\sqrt{-C\/2A}$ (in terms of LGD parameters), which is proportional to $\\sqrt{J_1}$. The conventional TIM with $J_\\mathrm{an}=0$ has $J_1=0.65$~meV, which corresponds to a correlation length of $\\xi=2.7$ \\AA. Consistent with this, \\fref{fig:Ecomp}(a) shows that for the conventional TIM, surface effects are confined to narrow regions near the edges of the film. Conversely, the modified TIM with a more realistic value of $J_1=100$~meV gives the correlation length $\\xi=3.3$~nm, which is comparable to the film thickness. In this case, the polarization is inhomogeneous throughout the film. In contrast to both of these cases, the polarization is nearly constant across the film when electric fields are included [\\fref{fig:Ecomp}(b)]; the polarization decreases with increasing $J_1$, and is suppressed completely for $J_1=100$~meV.\n\nThe apparent uniformity of the polarization across the film in \\fref{fig:Ecomp}(b) is because the correlation length is replaced by a shorter length scale $\\kappa^{-1}$ when electric fields are included, with \\cite{kretschmer79}\n\\begin{equation} \\label{kappa}\n\\kappa = \\sqrt{\\xi^{-2} + \\frac{\\mu^{2} \\eta}{\\epsilon_{0} C}}.\n\\end{equation}\nIn STO, this length scale is less than a unit cell, and the polarization is therefore nearly constant, with only a small reduction in the surface layer. This slight reduction is, nonetheless, enough that the depolarizing fields are incompletely screened by the electrodes. There is thus a residual depolarizing field in the STO film that reduces the overall polarization of the film.\n\nTo make the dependence of $\\kappa$ on the TIM parameters explicit, we substitute values for the LGD parameters from equations~\\eref{coeff_A}-\\eref{coeff_D} into equation~\\eref{kappa} in the limit $T\\rightarrow 0$. For spin-1 we find\n\\begin{equation}\n\\kappa = \\sqrt{-\\frac{2(\\Omega - J_{0})}{J_{1}a^{2}} + \\frac{\\mu^{2} \\eta}{\\epsilon_{0} J_{1} a^{2}}}.\n\\end{equation}\nFor fixed $\\Omega$ and $J_{0}$ (i.e.\\ for a fixed value of the bulk $T_\\mathrm{c}$), $\\kappa^{-1}$ increases as $\\sqrt{J_{1}}$. Because the difference between the polarizations at the film surface and interior depends on $\\kappa^{-1}$,\nthe depolarizing field also grows with $J_1$; it then follows immediately that $P_\\mathrm{ave}$ decreases as $J_1$ increases. This suppression is illustrated in \\fref{fig:Pdep}(a), which shows the dependence of both the average polarization and $\\kappa^{-1}$ on $J_1$. The polarization equals its bulk value when $J_1=0$ and drops as $J_1$ increases. Notably, there is a critical value of $J_1$ (which depends on the number of layers, $N_L$, in the film) above which ferroelectricity is completely suppressed. For the 50-layer film modelled here, this value is approximately 17~meV.\n\n\\begin{figure*}[tb]\n\t\\centering\n\t\\includegraphics[width = 0.3\\linewidth]{figure3a}\n\t\\includegraphics[width = 0.3\\linewidth]{figure3b}\n\t\\includegraphics[width = 0.3\\linewidth]{figure3c}\n\t\\caption{Average polarization $P_\\mathrm{ave}$ of a ferroelectric thin film in the short-circuit geometry, including the depolarizing field. 
Because the polarization is nearly uniform, $P_\\mathrm{ave}$ is almost the same as the polarization in each layer.\t(a) Dependence of $P_\\mathrm{ave}$ (blue) and $\\kappa^{-1}$ (green) on $J_{1}$; also shown for comparison is the bulk value $P_\\mathrm{bulk}$ of the polarization; (b) dependence of $P_\\mathrm{ave}$ on film thickness $N_{L}$ (where $N_L=L\/a$ is the total number of layers); (c) dependence of $P_\\mathrm{ave}$ on $J_{0}$. Except where otherwise indicated, parameters are the same as in \\fref{fig:Ecomp} and $N_L=50$. In (b), $J_1 = 10$~meV; in (c), $J_1 = 25$~meV.}\n\t\\label{fig:Pdep}\n\\end{figure*}\n\nAlternatively, one can fix $J_1$ and consider how $P_\\mathrm{ave}$ depends on film thickness, as shown in \\fref{fig:Pdep}(b). Here polarization increases and asymptotically approaches the bulk value with increasing $N_L$. Ferroelectricity is completely suppressed below a critical film thickness, with the value of this critical thickness depending on $J_1$. The results shown in \\fref{fig:Pdep}(b) are for $J_1=10$~meV, and give a critical thickness of 30 layers. For $J_1=100$~meV, the critical thickness is closer to 300 layers.\n\nFinally, the effect of increasing $J_{0}$ is shown in \\fref{fig:Pdep}(c). Because the bulk value of polarization $P_\\mathrm{bulk}$ depends on $J_0$, we show the ratio $P_\\mathrm{ave}\/P_\\mathrm{bulk}$ as a function of $J_0\/\\Omega$. In bulk materials, the threshold for ferroelectricity is $J_0=$~$\\Omega$, and this is increased by finite size effects in the 50-layer film as shown in \\fref{fig:Pdep}(c). Size effects quickly become unimportant with increasing $J_0$, as $P_\\mathrm{ave}$ rapidly increases towards its bulk value. Indeed, when $J_0$ is only twice $\\Omega$, $P_\\mathrm{ave}=0.93P_\\mathrm{bulk}$.\n\nThese calculations show that doped quantum paraelectrics such as Sr$_{1-x}$Ca$_x$TiO$_3$, which have $J_0$ close to $\\Omega$, should be highly sensitive to film thickness in the short-circuit geometry. While this might be naively anticipated based on the argument that the correlation length $\\xi$ is comparable to the film thickness near a ferroelectric transition, this argument is wrong because the relevant length $\\kappa^{-1}$ is actually rather short and does not diverge at the quantum critical point. Rather, the sensitivity is due to depolarizing fields, which can easily overwhelm the weak ferroelectricity.\n\n\n\n\n\\section{(001) LAO\/STO Interface}\n\\label{sec:interface}\n\nIn the final section of this work, we apply the modified TIM to the (001) LAO\/STO interface. For this calculation, the Hamiltonian must include an electronic term that describes the 2DEG that forms at the interface. The total Hamiltonian is thus\n\\begin{equation}\n\\hat{H} = \\hat{H}_\\mathrm{e} + \\hat{H}_\\mathrm{TIM},\n\\end{equation}\nwhere $\\hat{H}_\\mathrm{TIM}$ is given by equation~\\eref{TIM_full} and $\\hat{H}_\\mathrm{e}$ is the electronic term discussed below. These two terms are linked through the electric field, which appears explicitly in $\\hat{H}_\\mathrm{TIM}$, and appears implicitly in $\\hat{H}_\\mathrm{e}$ through the electrostatic potential.\n\nWe outline the calculations in \\sref{sec:int_method}, and show results for the effect of $J_{1}$ on the interfacial 2DEG in \\sref{sec:int_res}. 
The main result from this section is that the conventional and modified TIMs make very different predictions for the structure of the 2DEG.



\subsection{Method} \label{sec:int_method}
We assume that the 2DEG arises due to a combination of top gating and the polar catastrophe. In this case a total charge density $-en_\mathrm{LAO}$ is donated from the LAO surface to the interface, where $n_\mathrm{LAO}$ is the surface hole density, in order to neutralize the polar discontinuity between the two materials. Top gating gives control over the number of free electrons doped into the system.

\begin{figure}[]
	\centering
	\includegraphics[width = 0.7\linewidth]{figure4}
	\caption{Structure of the LaAlO$_{3}$/SrTiO$_{3}$ interface. (a) The SrTiO$_3$ substrate is discretized, with each layer a single unit cell thick. The electron density $n_{i_{z}}$ is confined to the $i_z$th TiO$_2$ plane, which makes up the left face of layer $i_z$. The regions between the TiO$_2$ layers are treated as a polarizable medium that is modelled by the TIM. The positive charge density $+en_\mathrm{LAO}$ at the LAO surface is compensated by an equal but opposite charge in the two-dimensional electron gas (2DEG). (b) Electrons in the 2DEG hop between neighbouring Ti $t_{2\mathrm{g}}$ orbitals of the same type. The hopping amplitude between orbitals is either strong ($t^{\parallel}$) if the hopping path is in the plane of the orbitals, or weak ($t^{\perp}$) if the hopping path is perpendicular to the plane of the orbitals. Panel (b) is re-published from \cite{atkinson17}.}
	\label{fig:LAOSTO}
\end{figure}

As shown in \fref{fig:LAOSTO}(a), we adopt a discretized model comprising alternating metallic TiO$_2$ layers with electron densities $n_{i_z}$ and dielectric layers with polarizations $P_{i_z}$. Translational invariance is assumed within the $xy$-plane, but not along the $z$-axis perpendicular to the interface. The system's properties therefore depend only on the layer index.

The 2DEG is composed of electrons that occupy titanium $t_{2\mathrm{g}}$ orbitals in the STO substrate. Although the unit cell is tetragonally distorted both by unit cell rotations about the $c$-axis and by interfacial strains, to a good approximation we can assume STO has the cubic structure typical of a perovskite material, as shown in the inset of \fref{fig:STOX_compspin}. We adopt a tight-binding model in which the conduction bands are made up of $t_{2\mathrm{g}}$ orbitals \cite{raslan17,Stengel:2011hy,Khalsa:2012fu}, and assume that electrons only hop between orbitals of the same type (i.e.,
from one $d_{xz}$ orbital to another $d_{xz}$ orbital; other hopping matrix elements vanish in the cubic phase by symmetry, and are generally small when lattice distortions are included).\n\n\\subsubsection{Electronic Hamiltonian}\n\nThe electronic Hamiltonian is made up of a hopping kinetic energy $\\hat{T}$ and an electrostatic potential energy $\\hat{U}$:\n\\begin{equation}\n\\hat{H}_\\mathrm{e} = \\hat{T} + \\hat{U}.\n\\end{equation}\nThe hopping energy is \n\\begin{eqnarray} \\label{T_k}\n\\hat{T} = \\sum_{i_{z} \\bt{k} \\alpha \\sigma} \\epsilon_{i_{z} \\alpha} \\hat{c}^{\\dag}_{i_{z} \\bt{k} \\alpha \\sigma} \\hat{c}_{i_{z} \\bt{k} \\alpha \\sigma} + \\sum_{\\langle i_{z},i_{z}' \\rangle \\alpha \\sigma} \\sum_{\\bt{k} \\bo{\\delta}} t^{\\alpha}_{\\bo{\\delta}} e^{-i \\bt{k} \\cdot \\bo{\\delta}} \\hat{c}^{\\dag}_{i_{z}' \\bt{k} \\alpha \\sigma} \\hat{c}_{i_{z} \\bt{k} \\alpha \\sigma},\n\\end{eqnarray}\nwhere $\\hat{c}^{\\dag}_{i_{z} \\bt{k} \\alpha \\sigma}$ creates an electron with spin $\\sigma$ and orbital type $\\alpha$ in the 2D plane-wave state $\\bt{k}=(k_{x}, k_{y})$ in layer $i_{z}$. $\\sum_{\\langle i_{z}, i_{z}' \\rangle}$ is a sum over nearest-neighbour layers $i_{z}$ and $i_{z}'$. $t^{\\alpha}_{\\bo{\\delta}}$ is the hopping matrix element for an electron in orbital type $\\alpha$ hopping along path $\\bo{\\delta}$ to a nearest-neighbour site. $\\epsilon_{i_{z} \\alpha}$ is the atomic energy of an orbital site in layer $i_{z}$, and can be set to zero in calculations.\n\nIn the tight-binding model, there are six possible hopping paths. Hopping along $\\hat{x}$ corresponds to a displacement $\\bo{\\delta}_{x}=(\\pm a, 0, 0)$ and hopping amplitude $t^{\\alpha}_{x}$, and so on for hopping along $\\hat{y}$ and $\\hat{z}$. Then, equation~\\eref{T_k} simplifies to\n\\begin{eqnarray} \\label{T_full}\n\\fl \\hat{T} = \\sum_{i_{z} \\bt{k} \\alpha \\sigma} \\Big( \\epsilon_{\\bt{k} \\alpha} \\hat{c}^{\\dag}_{i_{z}\\bt{k}\\alpha \\sigma} \\hat{c}_{i_{z}\\bt{k}\\alpha \\sigma} + t^{\\alpha}_{z} \\hat{c}^{\\dag}_{i_{z}+1, \\bt{k}\\alpha \\sigma} \\hat{c}_{i_{z}\\bt{k}\\alpha \\sigma} + t^{\\alpha}_{z} \\hat{c}^{\\dag}_{i_{z}-1, \\bt{k}\\alpha \\sigma} \\hat{c}_{i_{z}\\bt{k}\\alpha \\sigma} \\Big),\n\\end{eqnarray}\nwhere $\\epsilon_{\\bt{k} \\alpha} = 2 t^{\\alpha}_{x} \\cos (k_{x} a) + 2 t^{\\alpha}_{y} \\cos (k_{y} a)$. As illustrated in \\fref{fig:LAOSTO}(b), the amplitudes $t^{\\alpha}_{x}, t^{\\alpha}_{y}$ and $t^{\\alpha}_{z}$ are denoted by $t^\\|$ for hopping paths that lie in the plane defined by $\\alpha$, and $t^\\perp$ for hopping paths that are perpendicular to this plane. 
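Anticipating the notation introduced in the next paragraphs, the following sketch assembles the $N_L \times N_L$ matrix in the layer index for each orbital type and diagonalizes it. It is a minimal illustration: the on-site potential profile is a hypothetical fixed input that stands in for the self-consistent electrostatics described below, and the hopping amplitudes are the values quoted in the next sentence.
\begin{verbatim}
import numpy as np

def layer_matrix(t_z, V):
    """N_L x N_L matrix in the layer index: on-site energy -e*V_{i_z} on the
    diagonal (V in volts, energies in eV), inter-layer hopping t_z off it."""
    V = np.asarray(V, float)
    return np.diag(-V) + t_z * (np.eye(len(V), k=1) + np.eye(len(V), k=-1))

def eps_inplane(kx, ky, alpha, t_par, t_perp):
    """In-plane dispersion eps_{k,alpha} (momenta in units of 1/a)."""
    tx = t_perp if alpha == 'yz' else t_par
    ty = t_perp if alpha == 'xz' else t_par
    return 2.0 * tx * np.cos(kx) + 2.0 * ty * np.cos(ky)

t_par, t_perp = -0.236, -0.035           # eV; values quoted in the text
NL = 40
V = 0.25 * np.exp(-np.arange(NL) / 4.0)  # hypothetical potential profile (V)

for alpha in ('xy', 'xz', 'yz'):
    t_z = t_perp if alpha == 'xy' else t_par   # weak inter-layer hopping: d_xy
    lam, psi = np.linalg.eigh(layer_matrix(t_z, V))
    # lam[n] are the lambda_{n,alpha}; psi[:, n] are the layer wavefunctions.
    e0 = lam[0] + eps_inplane(0.0, 0.0, alpha, t_par, t_perp)
    print(alpha, 'lowest band bottom at k = 0 (eV):', round(e0, 3))
\end{verbatim}
Even in this toy setting, inspecting the first column of \texttt{psi} shows the lowest $d_{xy}$ state pinned to the interface while the $d_{xz}/d_{yz}$ states spread over many layers, foreshadowing the results of \sref{sec:int_res}.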
We take $t^{\parallel}=-0.236$~eV and $t^{\perp}=-0.035$~eV as in \cite{raslan17}.

The electrostatic potential energy is due to the charge on the LAO surface, the 2DEG, and the bound charge due to the polarization of the STO:
\begin{equation} \label{U_k}
\hat{U} = -e \sum_{i_{z} \bt{k} \alpha \sigma} V_{i_{z}} \hat{c}^{\dag}_{i_{z}\bt{k}\alpha \sigma} \hat{c}_{i_{z}\bt{k}\alpha \sigma},
\end{equation}
where $e$ is the electron charge and $V_{i_{z}}$ is the electrostatic potential in layer $i_{z}$.

Combining equations~\eref{T_full} and \eref{U_k} gives the full electronic Hamiltonian:
\begin{eqnarray} \label{Hel_full}
\hat{H}_\mathrm{e} &=& \sum_{i_{z} \bt{k} \alpha \sigma} \Big\lbrace \Big( \epsilon_{\bt{k} \alpha} - e V_{i_{z}} \Big) \hat{c}^{\dag}_{i_{z}\bt{k}\alpha \sigma} \hat{c}_{i_{z}\bt{k}\alpha \sigma} \nonumber \\ && + t^{\alpha}_{z} \hat{c}^{\dag}_{i_{z}+1, \bt{k}\alpha \sigma} \hat{c}_{i_{z}\bt{k}\alpha \sigma} + t^{\alpha}_{z} \hat{c}^{\dag}_{i_{z}-1, \bt{k}\alpha \sigma} \hat{c}_{i_{z}\bt{k}\alpha \sigma} \Big\rbrace.
\end{eqnarray}
The Hamiltonian can be written as an $N_L\times N_L$ matrix in the layer index, $\hat{H}_\mathrm{e} = \sum_{\bt{k} \alpha \sigma} \bo{\hat{c}}^{\dag}_{\bt{k} \alpha \sigma} \bt{H}_{\bt{k} \alpha \sigma} \bo{\hat{c}}_{\bt{k} \alpha \sigma}$, with $\bo{\hat{c}}_{\bt{k} \alpha \sigma} = (\hat c_{0\bt{k} \alpha \sigma}, \ldots, \hat c_{N_L-1, \bt{k}\alpha \sigma})$ and
\begin{equation}
\bt{H}_{\bt{k} \alpha \sigma} = \bt{H}_\alpha + \epsilon_{\bt{k} \alpha} \bt{I},
\end{equation}
where $\bt{H}_\alpha$ is independent of $\bt{k}$ and $\bt{I}$ is the identity matrix. The eigenenergies are particularly simple, with
\begin{equation}
\epsilon_{n\bt{k}\alpha} = \lambda_{n\alpha} + \epsilon_{\bt{k} \alpha},
\end{equation}
where $\lambda_{n\alpha}$ are the eigenvalues of $\bt{H}_\alpha$ and $n$ is the band index. The eigenvectors of $\bt{H}_{\bt{k} \alpha \sigma}$, which represent the layer-dependent wavefunctions, are $\bt{k}$-independent and satisfy
\begin{equation}
\sum_{j_z} [\bt{H}_\alpha]_{i_z j_z} \psi_{j_z n \alpha} = \lambda_{n\alpha} \psi_{i_z n \alpha}.
\end{equation}

From this, the free electron density (per unit cell) in layer $i_{z}$ is
\begin{equation} \label{n_i}
n_{i_{z}} = \frac{1}{ N} \sum_{n \bt{k} \alpha \sigma} f_\mathrm{FD} (\epsilon_{n \bt{k} \alpha}) | \psi_{i_{z} n \alpha} |^{2},
\end{equation}
where $N$ is the total number of $k_{x}$- and $k_{y}$-points, and $f_\mathrm{FD} (\epsilon_{n \bt{k} \alpha})$ is the Fermi-Dirac distribution.

We note that the mean-field equations described in this section neglect thermal fluctuations of both the lattice and the charge density. Both of these broaden the electronic spectral functions, as in Fermi liquid theory, and can in principle mix the bands. These are perturbative effects, however, and band structure calculations like the one outlined here generally provide a good quantitative description of the electronic structure, even at room temperature.


\subsubsection{Electric Field}

The electric potential in layer $i_{z}$ is obtained by integrating the electric field from layer 0 to layer $i_z$, which sets the interface to be the zero of potential.
Then,\n\\begin{equation} \\label{V}\nV_{i_{z}} = -a \\sum_{j_{z} < i_{z}} E_{j_{z}} + V_{0},\n\\end{equation}\nwith $a=3.902$~\\AA{} the STO lattice constant.\n\nJust as in \\sref{sec:FEfilm}, the electric field can be obtained using Gauss' law, \n\\begin{equation} \\label{Gauss_int}\n\\frac{d}{dz} (\\epsilon_{\\infty} E(z) + P(z)) = \\rho^\\mathrm{2DEG}(z) + \\rho^\\mathrm{ext}(z),\n\\end{equation}\nwhere $\\rho^\\mathrm{2DEG}(z)$ is the free charge density and $\\rho^\\mathrm{ext}$ is the external charge density along the LAO surface. The polarization $P(z)$ is obtained from the modified TIM.\n\nWithin the discretized model, the electrons are treated as if they are confined to two-dimensional TiO$_2$ layers, so\n\\begin{equation}\n\\rho^\\mathrm{2DEG}(z) = -\\frac{e}{a^{2}} \\sum_{i_{z}} n_{i_{z}} \\delta(z - i_z a),\n\\end{equation}\nwhere $n_{i_{z}}$ is given by equation~\\eref{n_i}. Similarly, the external charge density is confined to the top LAO layer,\n\\begin{equation}\n\\rho^\\mathrm{ext} = \\frac{en_\\mathrm{LAO}}{a^{2}} \\delta(z - z_\\mathrm{LAO}),\n\\end{equation}\nwhere $z_\\mathrm{LAO}$ is the distance from the interface to the LAO surface. Now, integrating equation~\\eref{Gauss_int} over $z$ gives the electric field in layer $i_{z}$:\n\\begin{equation} \\label{E_int}\n\\epsilon_{\\infty} E_{i_{z}} = - P_{i_{z}} - \\frac{e}{a^{2}} \\sum_{j_{z} \\leq i_{z}} n_{j_{z}} + \\frac{en_\\mathrm{LAO}}{a^{2}},\n\\end{equation}\nwhich is required for the TIM [equation~\\eref{TIM_full}] and the electric potential [equation~\\eref{V}].\n\n\n\\subsection{Results} \\label{sec:int_res}\n\nHere, we explore the effect that $J_{1}$ has on the electron distribution, eigenenergies, polarization and potential energy for the (001) LAO\/STO interface. As a key point of comparison, these calculations include the case $J_1=0.65$~meV ($J_\\mathrm{an}=0$), which corresponds to the conventional TIM, in order to clearly highlight why the modified TIM requires the term introduced in equation~\\eref{TIM_full} to correctly model interfaces.\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{figure5a_5f}\n\t\\caption{Electron density (per unit cell) for the first 20 layers of a 200-layer film. Results are for different $J_{1}$ values, at $T=10$~K and 300~K, and (a)-(b) $n_\\mathrm{LAO}=0.01\/a^{2}$, (c)-(d) $n_\\mathrm{LAO}=0.05\/a^{2}$, and (e)-(f) $n_\\mathrm{LAO}=0.1\/a^{2}$.}\n\t\\label{fig:int_nvsN}\n\\end{figure}\n\nPrevious work has established that the 2DEG is composed of both interfacial and tail components. The interfacial component is tightly confined to the interface, and appears as a peak in the electron density extending over the first few layers of the substrate, while the tail component extends far into the STO substrate \\cite{copie09,dubroka10,park13,gariglio15,raslan18}. Except at the very lowest dopings, the majority of the electrons are confined close to the interface, with as many as 70\\% of the electrons in the 2DEG found in approximately the first 10~nm \\cite{raslan17,copie09}. This interfacial peak in the electron density is strongly temperature- and electron doping-dependent, with the electrons spreading further out into the STO as temperature or doping decreases \\cite{raslan17}. 
The first $d_{xy}$ band contributes the most electrons to the interface states, while the first $d_{yz}$ and $d_{xz}$ bands make up the majority of the tail states and are seen to have the most temperature-dependence \\cite{raslan17}.\n\nThe electron density is plotted in figures~\\ref{fig:int_nvsN} and \\ref{fig:int_nvsN_one}. \\Fref{fig:int_nvsN} explores the effect $J_{1}$ has on the electron density, focusing particularly on the interface region, while \\fref{fig:int_nvsN_one} shows the full profile over the entire film for a typical set of model parameters.\n\nWe begin analyzing \\fref{fig:int_nvsN} by focusing on the results of the conventional TIM. When $J_1=0.65$~meV, there is no evidence that electrons are confined to the interface region at 10~K at any doping, in disagreement with experiments. Weak confinement does appear at 300~K due to the reduced dielectric susceptibility at high $T$, and the 2DEG does move towards the interface with increasing doping; however, the density is expected to be strongly peaked at the interface, and this is not seen. The conventional TIM, therefore, does not capture the physics of STO interfaces.\n\nThe remaining curves in \\fref{fig:int_nvsN} show how the charge profile changes with increasing $J_1$. These results are for fixed $J_0$ and $\\Omega$ (which determine the uniform dielectric susceptibility), and the only difference between the curves is therefore the correlation length $\\xi$. These curves show that increasing $J_{1}$ (or equivalently, increasing $\\xi$) tends to increase electron density at the interface, except at the lowest doping.\n\nAt the lowest doping, $n_\\mathrm{LAO}=0.01\/a^{2}$, $J_{1}$ has little effect on the electron density at both high and low temperature. Indeed, interface states are absent for all $J_1$ values up to 400~meV. While this lack of interface states is consistent with previous calculations \\cite{raslan18}, it is not consistent with experiments \\cite{yin19,joshua12}, and likely points to some additional missing physics in the model \\cite{raslan18}.\n\nAt intermediate doping, $n_\\mathrm{LAO}=0.05\/a^{2}$, the electron density does develop an interfacial component as $J_{1}$ increases. This interfacial state extends only a few unit cells from the interface, and is more tightly confined at large $J_1$. There is thus a clear qualitative distinction between the modified and conventional TIMs in this case. At high doping, $n_\\mathrm{LAO} = 0.1\/a^{2}$, the trends are similar. The electron density is confined closer to the interface and is less strongly temperature dependent than at lower doping, at least when $J_1\\geq200$~meV. Both of these trends are consistent with results reported in \\cite{raslan17}. \n\n\\begin{figure}[]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{figure6}\n\t\\caption{Profile of the electron density (solid) for $n_\\mathrm{LAO}=0.05\/a^{2}$ and $J_{1}=300$~meV, at both 10~K and 300~K. Probability distributions (dashed) for the two lowest-energy $d_{xy}$ bands ($n = 1,2$) are shown for 10~K.}\n\t\\label{fig:int_nvsN_one}\n\\end{figure}\n\n\\begin{figure}[]\n\t\\centering\n\t\\includegraphics[width=0.7\\linewidth]{figure7}\n\t\\caption{Band structure for $n_\\mathrm{LAO}=0.05\/a^{2}$ at $J_1=0.65$~meV and 300~meV. 
Results are shown for a 200-layer substrate at both 10~K (top) and 300~K (bottom).}\n\t\\label{fig:int_bands05}\n\\end{figure}\n\n\\Fref{fig:int_nvsN_one} shows the electron density across the full thickness of the STO film for a typical $J_1$ value at intermediate doping for both 10~K and 300~K. We choose the value of $J_{1}=300$~meV as physically reasonable based on the results in \\fref{fig:int_nvsN}. At 10~K, the charge profile shows a peak-dip-hump structure that has not been reported in previous calculations. To understand its origin, we plot also the wavefunctions $|\\psi_{i_z n \\alpha}|^2$ for the first ($n=1$) and second ($n=2$) $d_{xy}$ bands ($\\alpha=xy$) at 10 K. These show that the dip comes from the extremely tight confinement of the first $d_{xy}$ band to the interface.\n\nThe band structure is shown in \\fref{fig:int_bands05} for intermediate doping for both the conventional TIM and the modified TIM ($J_1= 300$~meV). At 10~K, the band structures of the two models are quasi-continuous, which is indicative of deconfined tail states, except for a single $d_{xy}$ band that sits below the continuum in the modified TIM, and which corresponds to the interface state discussed above. At high $T$, the band structures are discrete, which is indicative of confinement to the interface region. At this temperature, the effects of $J_1$ are quantitative, rather than qualitative.\n\n\\begin{figure}[tb]\n\t\\includegraphics[width=0.5\\linewidth]{figure8a}\n\t\\includegraphics[width=0.5\\linewidth]{figure8b}\n\t\\caption{(a) Polarization and (b) electron potential energy in the first 25 layers of a 200-layer SrTiO$_3$ substrate for different $J_{1}$ values at 300~K and $n_\\mathrm{LAO}=0.05\/a^{2}$.}\n\t\\label{fig:int_PvsN}\n\\end{figure}\n\nFinally, we plot in \\fref{fig:int_PvsN} the polarization and potential energy at 300~K for intermediate doping. These plots show that there are clear distinctions between the conventional and modified TIMs in the interfacial region. In particular, the polarization near the interface is reduced, by up to 25\\%, as $J_1$ increases. This reduction is similar to that discussed in the case of the thin film, with one key difference: because electric fields are screened by the 2DEG, the relevant length scale over which differences between the curves decay in \\fref{fig:int_PvsN}(a) is $\\xi$, and not $\\kappa^{-1}$ \\cite{atkinson17}.\n\nSimilar to the ferroelectric thin films discussed in \\sref{sec:FEfilm}, this reduced polarization incompletely screens the electric fields produced by the LAO surface charge and results in a large field at the interface. This is reflected in the potential energy profiles shown in \\fref{fig:int_PvsN}(b). In particular, large values of $J_1$ generate a deep potential well that confines the lowest $d_{xy}$ band tightly to the interface. On the other hand, $J_{1}$ has little effect on the electric field away from the interface, and so each potential energy curve has roughly the same slope for $i_{z} > 2$. In summary, \\fref{fig:int_PvsN} illustrates the mechanism by which the anisotropic pseudospin term in the modified TIM generates the interfacial component of the 2DEG that is observed at LAO\/STO interfaces.\n\n\\section{Conclusions}\n\nWe showed that the conventional transverse Ising model misses key features of spatially inhomogeneous STO-based nanostructures. To fix this we modified the TIM by adding an anisotropic pseudospin energy to the Hamiltonian. 
This corrects a deficiency of the TIM, namely that if one fits the model parameters to the bulk (homogeneous) susceptibility, the polarization correlation length is also fixed by the model and is at least an order of magnitude smaller than it should be.

To illustrate the effects of the new term, we considered two applications of the modified TIM: first, to thin films of an STO-like ferroelectric; and second, to a metallic LAO/STO interface. In both cases, the key point is that the conventional TIM underestimates the reduction of the polarization due to the surface; this reduced polarization leads to a reduced screening of electric fields in the interface region, which in turn has profound effects on the film or interface. In the case of the ferroelectric film, these fields suppress the polarization of the film; in the case of the interface, they create a confining potential that generates tightly bound interface states.



\ack
This work has been supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

\section*{References}
\bibliographystyle{iopart-num}

\section{Problem statement}
In a traditional sampling and reconstruction system, the sensors are designed so that the recovery of the signal(s) of interest is possible.
For instance, when considering the sparse recovery problem, one tries to find the sparsest solution $\hat{{\bf x}} \in {\mathbb K}^N$ from the noisy measurements ${\bf y} = {\mathbf A} {\bf x} +{\bf e} \in {\mathbb K}^m$, with $m \ll N$.
Here ${\mathbb K}$ denotes the field ${\mathbb R}$ or ${\mathbb C}$.
This is done by solving the mathematical program
\begin{equation} \label{eq:generalCS}
 \tag{${\boldsymbol\ell}^0$-min}
 \hat{{\bf x}} := \operatorname{argmin}\|{\bf z}\|_0, \quad \text{ subject to } \|{\mathbf A} {\bf z} - {\bf y}\|_2 \leq \eta.
\end{equation}
This problem is NP-hard and usually only approximately solved, for instance by solving its convex relaxation, known as Basis Pursuit denoising:
\begin{equation}
 \label{eq:bpdn}
 \tag{BPDN}
 \hat{{\bf x}} := \operatorname{argmin}\|{\bf z}\|_1, \quad \text{ subject to } \|{\mathbf A} {\bf z} - {\bf y}\|_2 \leq \eta.
\end{equation}
It is known that for a given \emph{complexity} (usually measured as the sparsity) $s$ of the signal ${\bf x}$, the number of random subgaussian linear measurements needs to grow as $m \gtrsim s \log(N/s)$ for $\widehat{{\bf x}}$ to be a good enough approximation to ${\bf x}$.
Said differently, if the design of a sensor can be made at will, then knowing the complexity of the signal, here characterized by the sparsity, is sufficient for a stable and robust recovery.
This paper looks at the problem of sampling potentially highly non-sparse signals when the quality of the sensors is constrained.
We emphasize in passing that throughout this paper, the sought-after signal will always be considered to be of high complexity.
We use the sparsity or density as a measure of complexity, but one could consider other models.
We investigate problems where the number of measurements $m$ cannot be chosen based on the complexity of the signals to recover.
In the context described above, one would have a limit on the sparsity of the vectors that can be recovered by $s \lesssim m/\log(N/m)$.
These constraints can be due to many reasons such as cost -- e.g.
using 10 sensors at a coarser resolution is cheaper than one at the finest --, frequency rate -- sensors at 2000 THz may not exist for a while --, or legal regulation -- e.g. in nuclear medicine, where one should not expose a patient to too high a radiation dose at once.
Problems arise when the signals being sampled are \emph{too dense} for the usual mathematical theories.
When one thinks about compressed sensing, the size of the sensor required is driven by a certain measure of complexity of the signals considered.
Allowing for the recovery of signals with a higher level of complexity entails the use of better sensors.
Here we look at the problem differently: first, we assume constraints on the sensor design that are fixed for external reasons. Under these assumptions, we take on the following challenge: split the information carried by the signal in a clever way so that a mathematical recovery is possible.

This paper revisits the theory of fusion frames and applies it to the dense signal recovery problem.
We show that by using advanced mathematical techniques stemming from applied harmonic analysis, it is possible to handle very high complexity signals in an efficient and stable manner.
Before we dig into the more technical details, we present some real-world scenarios where our framework appears useful, if not essential.

\subsection{Examples}\label{IntroExamplesSubSection}

\subsubsection{Unavailability of high quality observation devices}
\label{sssec:sampling}
A time-invariant bounded linear operator is represented by a circulant matrix ${\mathbf A}$. So suppose ${\mathbf A}$ represents a sensing device whose number of rows $m$ is physically limited by the sampling rate (or resolution) of the device. Suppose also that the sparsity of the sampled signal ${\bf x}$ is substantially larger than what a single observation by ${\mathbf A}$ could handle and recover by various compressed sensing techniques.

In this context, the limitations on the sensing devices combined with the (potentially) high number of non-zeros in the signals make it impossible for a state-of-the-art algorithm to recover the unknown signal ${\bf x}$. As illustrated in Figure~\ref{fig_1}, we suggest applying $n$ such devices in parallel after prefiltering. The fused compressed sensing technique introduced later resolves a problem that a single device alone cannot.
Such scenarios exist in practice and demonstrate the necessity of the fused compressed sensing technique presented below.

For example, suppose an application requires a sensing device of capacity $X$, described by a sensing matrix $\tilde {\mathbf A}$.
If such a device is either very expensive or not available, we may choose to combine $n$ parallel projections $\{P_j\}_{j=1}^{n}$ prior to measuring, and use $n$ low-quality sensing devices of capacity $\frac{1}{n}X$, each described by the sensing matrix ${\mathbf A}$.
\nThe (sparse) signal ${\\bf x}$ is then subsequently recovered by various techniques\nvia each channel and, through the theory of fusion frames~\\cite{Casazza2004framessubspaces,Casazza2008ff,Cahill2012nonOrthFF}, merged into a single vector.\nAs long as $\\{P_j\\}_{j=1}^{n}$ are projections - or any filtering operations - with the property that $C{\\mathbf I}\\le \\sum_i P_j^* P_j\\le D{\\mathbf I} $ for some $00$ such that\n \\begin{equation}\n\\label{eq:ffInequality}\n C \\Vert f \\Vert^2 \\leq \\sum_{i \\in I} \\Vert P_i(f) \\Vert^2 \\leq D\\Vert f \\Vert^2 \\; \\text{for all} \\; f \\in {\\mathcal H}. \n\\end{equation}\n \\end{adef}\n\\begin{rmk}\n\tFusion frames are often accompanied by respective weights.\nIn the weighted case, the frame condition reads $C \\Vert f \\Vert^2 \\leq \\sum_{i \\in I} v_i^2 \\Vert P_i(f) \\Vert^2 \\leq D\\Vert f \\Vert^2 \\; \\text{for all} \\; f \\in {\\mathcal H}$ and some positive weights $(v_i)_{i \\in I}$. \n\tAll (unweighted) results derived below apply mutatis mutandis to the weighted case.\n\\end{rmk}\n\t\n Given a fusion frame ${\\mathcal W}$ for a Hilbert space ${\\mathcal H}$, let ${\\mathcal F}_{i}:=\\{f_{ij} \\, | \\, j \\in J_i \\}$ be a frame for $W_i$, $i \\in I$. Then $\\{ \\(W_i, {\\mathcal F}_i \\) | i \\in I \\}$ is a FF system for ${\\mathcal H}$.\n The following theorem \\cite{Casazza2004framessubspaces} illustrates the relationship between the local and global properties of a FF: \n\n \\begin{atheorem}\n For each $i \\in I$, let $W_i$ be a closed subspace of ${\\mathcal H}$, and let ${\\mathcal F}_{i}=\\{f_{ij} \\, | \\, j \\in J_i \\}$ be a frame for $W_i$, with frame bounds $A_i$, $B_i$. If $0 < A = \\inf_{i \\in I} A_i \\leq \\sup_{i \\in I} B_i = B < \\infty$, then the following conditions are equivalent:\n \\begin{itemize}\n \\item $\\cup_{i \\in I} \\{f_{ij} \\, | \\, j \\in J_i \\}$ is a frame for ${\\mathcal H}$.\n \\item $\\{ W_i\\}_{i \\in I}$ is a fusion frame for ${\\mathcal H}$.\n \\end{itemize} In particular, if $\\{ \\( W_i, {\\mathcal F}_i \\) \\}_{i \\in I}$ is a fusion frame system for ${\\mathcal H}$ with frame bounds $C$ and $D$, then $\\cup_{i \\in I} \\{f_{ij} \\, | \\, j \\in J_i \\}$ is a frame for ${\\mathcal H}$ with frame bounds $AC$ and $BD$.\n \\end{atheorem}\n In the fusion frame theory, an input signal is represented by a collection of vector coefficients that represent the projection onto each subspace. \nThe representation space used in this setting is\n \\[ \\( \\sum_{i \\in I} \\oplus W_i \\)_{{\\boldsymbol\\ell}^2} =\n \\{ { \\{ f_i \\}_{i \\in I} \\, | \\, f_i \\in W_i } \\} \\]\n with $ \\{\\Vert f_i \\Vert \\} _{i \\in I} \\in {\\boldsymbol\\ell}^2(I)$.\n\n The analysis operator $T$ is then defined by\n $$T(f) := \\{ P_i(f)\\}_{i \\in I} \\; \\text{for all} \\; f \\in {\\mathcal H}, $$ while its adjoint operator is the synthesis operator $T^*: \\( \\sum_{i \\in I} \\oplus W_i \\)_{{\\boldsymbol\\ell}^2} \\to {\\mathcal H}$, defined by\n $$T^* (f) = \\sum_{i} f_i, \\; \\text{where} \\; f = \\{f_i\\}_{i \\in I} \\in \\( \\sum_{i \\in I} \\oplus W_i \\)_{{\\boldsymbol\\ell}^2}. $$\n The fusion frame operator $S = T^* T$ is given by\n $$S(f) = \\sum_{i \\in I} P_i(f).$$ It is easy to verify that $S$ is a positive and invertible operator on ${\\mathcal H}$. 
In particular, it holds $C {\\mathbf I} \\leq S \\leq D {\\mathbf I}$, with ${\\mathbf I}$ denoting the identity.\n\n If the dual frames ${\\mathcal G}_i$, $i \\in I$, for each local frames are known then the fusion frame operator can be expressed in terms of the local (dual) frames \\cite{Casazza2008ff}:\n \\[ S = \\sum_{i \\in I} T^*_{{\\mathcal G}_i} T_{{\\mathcal F}_i} = \\sum_{i \\in I} T^*_{ {\\mathcal F}_i} T_{{\\mathcal G}_i}.\\]\n\n For computational needs,\nwe may only consider the fusion frame operator in finite frame settings, where the fusion frame operator becomes a sum of matrices of each subspace frame operator. The evaluation of the fusion frame operator $S$ and its inverse $S^{-1}$ in finite frame settings are conveniently straightforward. By $F_i$ we denote the frame matrices formed by the frame vectors from ${\\mathcal F}_i$, in a column-by-column format. Let $G_i$ be defined in the same way from the dual frame $\\{ g_{ij}\\}_{j \\in J_i}$. Then the fusion frame operator is\n$$ S(f) = \\sum_{i \\in I} F_iG_i ^T f= \\sum_{i \\in I} G_iF_i ^T f. $$\n Hence a distributed fusion processing is feasible in an elegant way, since the reconstruction formula for all $f \\in {\\mathcal H}$ is\n \\[\n f = \\sum_{i \\in I} S^{-1} P_i(f). \\]\n %\n\n The standard distributed fusion procedure uses the local projections of each subspace. In this procedure, the local reconstruction takes place first in each subspace $W_i$, and the inverse fusion frame is applied to each local reconstruction and combined together:\n \\begin{equation}\n \\label{firstapproach}\n f = \\sum_{i \\in I} S^{-1} P_i(f) = \\sum_{i \\in I} S^{-1} \\(\\sum_{j \\in J_i} \\langle f,f_{ij} \\rangle g_{ij} \\) \\; \\text{for all} \\; f \\in {\\mathcal H}.\n\\end{equation}\n %\n Alternatively, one may use a global-like reconstruction procedure, which is possible if the coefficients of signal\/function decompositions are available:\n\\begin{equation}\n \\label{secondapproach} f = \\sum_{i \\in I} \\sum_{j \\in J_i} \\langle f,f_{ij} \\rangle (S^{-1} g_{ij}) \\; \\text{for all} \\; f \\in {\\mathcal H}.\n \\end{equation} The difference in procedure \\eqref{secondapproach}, compared with a global frame reconstruction lies in the fact that the (global) dual frame $\\{ S^{-1} g_{ij}\\}$ is first calculated at the local level, and then fused into the global dual frame by applying the inverse fusion frame operator. This potentially makes the evaluation of (global) duals much more efficient.\n\n\nAs stated above, computing the inverse (fusion) frame operator is often a challenging task. \nInstead, one can approximate the solution $\\widehat{{\\bf x}}$ by employing the so-called frame algorithm. \nAll we need to start the iterative algorithm is $S\\widehat{{\\bf x}}$, which we already have. We recall the relevant result below for completeness. \n\n\\begin{aprop}\\label{CasLiIterative}\\cite{Casazza2008ff}\n Let $(W_i)_{i\\in I}$ be a fusion frame in ${\\mathbb K}^N$, with fusion frame operator $S=S_W$, \nand fusion frame bounds $C, D$. Further, let ${\\bf x} \\in {\\mathbb K}^N$, and define the sequence $({\\bf x}_k)$\nby\n${\\bf x}_0 = 0$ and ${\\bf x}_k = {\\bf x}_{k-1} + \\frac{2}{C+D}S({\\bf x} - {\\bf x}_{k-1})$, $k \\geq 1$. Then we have ${\\bf x} = \\lim_{k\\rightarrow \\infty} {\\bf x}_k$, with the\nerror estimate $\\Vert {\\bf x} - {\\bf x}_{k} \\Vert \\leq \\left( \\frac{D-C}{D+C}\\right)^k \\Vert {\\bf x} \\Vert. 
$\n\\end{aprop}\n\nConcretely, using the fusion frame reconstruction, the updates read\n\\begin{equation*}\n{\\bf x}_k = {\\bf x}_{k-1} + \\frac{2}{C+D}\\sum_{i=1}^n\\widehat{{\\bf x}^{(i)}} - \\frac{2}{C+D}\\sum_{i=1}^nP_i {\\bf x}_{k-1}.\n\\end{equation*}\nThe middle term is computed once and for all. \nThe following updates then only require some basic matrix-vector multiplications. \nNote that starting the algorithm with $\\widehat{{\\bf x}}_0 = 0$ yields $\\widehat{{\\bf x}}_1 = \\frac{2}{C+D}S\\widehat{{\\bf x}}$. \n\n\nSimilarly, in Section~\\ref{ShidongSection}, we have $\\widehat{{\\bf x}}= S^{-1}\\left(\\sum_j \\widehat{{\\bf x}^{(j)}}\\right)$, and $\\widehat{f}=D \\widehat{{\\bf x}}$. \nWe can find $\\widehat{{\\bf x}}$ via the iterative approach of Proposition~\\ref{CasLiIterative}, by starting the algorithm with $\\widehat{{\\bf x}}_0 = 0$, $\\widehat{{\\bf x}}_1 = \\frac{2}{C+D}S(\\widehat{{\\bf x}})=\\frac{2}{C+D}\\sum_j \\widehat{{\\bf x}^{(j)}}$, and so on. \nThe $n$-th term approximation of $\\widehat{f}$ is then estimated with $\\widehat{f}_n=D \\widehat{{\\bf x}}_n$.\n\n\n\n\\subsection{Signal recovery in fusion frames}\nThis work describes an approach for sensing and reconstructing signals in a fusion frame structure. \nAs presented above, given some local information ${\\bf x}^{(i)} := P_{i}({\\bf x})$, for $1 \\leq i \\leq n$, a vector can easily be reconstructed by applying the inverse fusion frame operator\n\\begin{equation}\n\\label{eq:reconstructionFormula}\n{\\bf x} := S^{-1}S({\\bf x}) = S^{-1}\\sum_{i=1}^n P_{i}({\\bf x}) = S^{-1}\\(\\sum_{i = 1}^n {\\bf x}^{(i)}\\), \\text{ where ${\\bf x}^{(i)} := P_{i}({\\bf x})$, for $1 \\leq i \\leq n$}.\n\\end{equation}\nThroughout the work, we assume that the projected vectors are sampled independently from one another, with $n$ devices modeled by the same sensing matrix ${\\mathbf A}$. \nFormally, the problem is as follows: \\emph{from the measurements ${\\bf y}^{(i)} = {\\mathbf A} {\\bf x}^{(i)} + {\\bf e}^{(i)}$, reconstruct an estimate $\\widehat{{\\bf x}^{(i)}}$ of the local information to compute the approximation $\\widehat{{\\bf x}}$ of the signal ${\\bf x}$}. \n\n\nFrom this point on, there are two ways of thinking about the problem. \nIn a first scenario, the signals are measured in the subspaces and the recovery of the ${\\bf x}^{(i)}$ is done locally. \nIn other words, this amounts to solving $n$~\\eqref{eq:generalCS} problems (or their approximations via~\\eqref{eq:bpdn} for instance) in the subspaces, and then transmitting the estimated local signals to a central unit taking care of the fusion via Equation~\\eqref{eq:reconstructionFormula}.\nThe other approach consists of transmitting the local observations ${\\bf y}^{(i)} = {\\mathbf A} P_{i}{\\bf x}$ to a central processing station which takes care of the whole reconstruction process. \nIn this case, the vector ${\\bf x}$ can be recovered by solving a single \\eqref{eq:generalCS} problem directly: letting ${\\mathbf I}_n$ denote the $n$-dimensional identity matrix,\n\\begin{equation}\n\\label{eq:centralReconstruction}\n{\\bf y} = \\( \\begin{array}{c} {\\bf y}^{(1)} \\\\ \\vdots \\\\ {\\bf y}^{(n)} \\end{array}\\) = {\\mathbf I}_n \\otimes {\\mathbf A} \\left[ \\begin{array}{c} P_{1} \\\\ \\vdots \\\\ P_{n} \\end{array} \\right] {\\bf x}.\n\\end{equation}\nWhile the latter case is interesting on its own (see for instance~\\cite{adcock2016CSparallel}), we investigate here some results for the first case. 
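\nTo fix ideas, the following minimal sketch (in Python with numpy; the dimensions, the random coordinate projections, and the use of plain least squares as the local solver are illustrative assumptions, the local solver being replaced by an ${\\boldsymbol\\ell}^1$-based method whenever a subspace is undersampled) implements this first scenario, fusing the local estimates through the frame algorithm of Proposition~\\ref{CasLiIterative} instead of explicitly inverting $S$.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nN, m, r = 12, 8, 4\n\n# Shared sensing matrix for all devices.\nA = rng.standard_normal((m, N))\n\n# Coordinate projections: a partition (guaranteeing coverage) plus random supports.\nsupports = [np.arange(i, i + r) for i in range(0, N, r)]\nsupports += [rng.choice(N, size=r, replace=False) for _ in range(3)]\nP = []\nfor idx in supports:\n    Pi = np.zeros((N, N))\n    Pi[idx, idx] = 1.0\n    P.append(Pi)\n\nx = rng.standard_normal(N)\ny = [A @ (Pi @ x) for Pi in P]           # noiseless local measurements\n\n# Local recovery: here r < m, so least squares on the active columns suffices.\nx_loc = []\nfor idx, yi in zip(supports, y):\n    zi, *_ = np.linalg.lstsq(A[:, idx], yi, rcond=None)\n    xi = np.zeros(N)\n    xi[idx] = zi\n    x_loc.append(xi)\n\n# Fusion via the frame algorithm: x_k = x_{k-1} + 2\/(C+D) * S(x - x_{k-1}),\n# where S(x) = sum_i x^(i) is already known from the local estimates.\nS = sum(P)\nev = np.linalg.eigvalsh(S)\nC, D = ev[0], ev[-1]\nSx = sum(x_loc)                          # equals S(x) up to the local errors\nxk = np.zeros(N)\nfor _ in range(50):\n    xk = xk + (2.0 \/ (C + D)) * (Sx - S @ xk)\nprint("fusion recovery error:", np.linalg.norm(xk - x))\n\\end{verbatim}\n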
\nOur results can be investigated and generalized further, integrating ideas where the measurement matrices (here, the sensors) vary locally, as is the case in~\\cite{adcock2016CSparallel}, or where the acquisition is structured (see for instance~\\cite{boyer2015compressed}).\n\n\nWe would like to put our work in context. \nThis paper is not the first one to describe the use of fusion frames in sparse signal recovery. \nHowever, it is inherently different from the previous works~\\cite{boufounos2011csfs,ayaz2014ff}. \nIn~\\cite{boufounos2011csfs} the authors provide a framework for recovering sparse fusion frame coefficients. \nIn other words, given a fusion frame system $\\{ \\(W_i, P_i \\) \\}_{i=1}^n$, a vector is represented on this fusion frame as a set of $n$ vectors $({\\bf x}^{(i)})_{i=1}^n$, where each of the ${\\bf x}^{(i)}$ corresponds to the coefficient vector in subspace $W_i$.\nThe main idea of the authors is that the original signal may lie in only a few of the $n$ subspaces, implying that most of the ${\\bf x}^{(i)}$ should be $0$. To rephrase the problem, we can say that the vector has to lie in a sparse subset of the original fusion frame.\nWhile~\\cite{boufounos2011csfs} is concerned with some recovery guarantees under (fusion-)RIP and average case analysis, the paper~\\cite{ayaz2014ff} gives uniform results for subgaussian measurements and derives results on the minimum number of vector measurements required for robust and stable recovery.\n\nWe, on the other hand, exploit structures (or sparsity) locally\\footnote{This sparsity assumption is needed only when a given frame component has a large dimension. We hope that in most cases we can control this dimension to be small enough to avoid the need for sparsity assumptions. See the following section for more details.}. \nWe do not ask that only a few of the subspaces be active for a given signal, but that the signal only has a few active components per subspace. \nThis justifies the use of the local properties of fusion frame systems. \n\n\n\\section{Examples and applications}\n\\label{motivatingExamplesSection}\nThis section introduces some theoretical examples where the application of our fusion frame-based recovery shows increased performance over traditional recovery techniques. \nIn particular, we justify in Proposition~\\ref{prop:recoveryfixrank} that the fusion frame recovery may provide better robustness against noise.\n\n\\subsection{Orthogonal canonical projections}\n\nConsider the problem of recovering ${\\bf x} \\in {\\mathbb K}^N$ from the measurements:\n\\begin{equation*}\n{\\bf y}^{(i)} = {\\mathbf A}_i {\\bf x} + {\\bf e}^{(i)}, \\quad 1 \\leq i \\leq n,\n\\end{equation*}\nwhere the ${\\mathbf A}_i$'s are defined as ${\\mathbf A}_i = {\\mathbf A} P_i$ and $P_i$ is the orthogonal projection onto the span of the canonical basis vectors indexed by $\\Omega_i$, with $\\Omega_i \\subseteq \\{1,\\cdots, N\\}$.\nWe assume that the noise vectors ${\\bf e}^{(i)}$ are independent of one another, with $\\|{\\bf e}^{(i)}\\|_2 \\leq \\eta_i$. \n${\\mathbf A}$ corresponds to a matrix of linear measurements in ${\\mathbb K}^{m \\times N}$.\nNote that this framework is an example of recovery from Multiple Measurement Vectors (\\emph{MMV}) (see~\\cite{duarte2005distributed,gribonval2008atoms,rauhut2008csRdeundant} and references therein) where the signal is the same in every measurement,\nand the matrices differ for each set of measurements. 
\nWe consider the problem of recovering local vectors ${\\bf x}^{(i)}$, for $1 \\leq i \\leq n$, where the ${\\bf x}^{(i)}$ are defined as ${\\bf x}^{(i)} := P_i{\\bf x}$.\nIn this case, the sparsity of each local signal is at most $N_i := \\operatorname{rank}(P_i) = |\\Omega_i|$.\nOnce the local pieces of information $\\widehat{{\\bf x}^{(i)}}$ are recovered, the original signal ${\\bf x}$ can be estimated from a \\emph{fusion frame-like} reconstruction:\n\\begin{equation}\n \\label{eq:ffreconstructionfromProj}\n \\widehat{{\\bf x}} = S^{-1}\\( \\sum_{i=1}^n \\widehat{{\\bf x}^{(i)}} \\) = \\sum_{i=1}^n S^{-1}\\widehat{{\\bf x}^{(i)}},\n\\end{equation}\nwhere the fusion frame operator $S$ is defined in the usual way as $S: {\\mathbb K}^N \\to {\\mathbb K}^N$, $S({\\bf x}) = \\sum_{i = 1}^n P_i({\\bf x})$.\nThe problem is to recover the unknown vector ${\\bf x}$ from the measurements $\\{{\\bf y}^{(i)}\\}_{i=1}^{n}$ by solving local problems.\nClearly, a necessary condition for uniform recovery is that we have $\\bigcup_{i=1}^n \\Omega_i = \\{1,\\cdots,N\\}$ in a deterministic setting, or with high probability in a probabilistic setting.\n\nThe main idea of our approach is to solve the (very) high-dimensional, high-complexity, and demanding problem ${\\bf y} = {\\mathbf A} {\\bf x} + {\\bf e}$ by combining results obtained from $n$ problems that are much easier to solve.\nWe investigate first the case where the orthogonal projections do not overlap, i.e. $\\Omega_i \\cap \\Omega_j = \\varnothing$, for any $i,j \\in \\{1,\\cdots,n\\}$ with $i \\neq j$ and $\\bigcup_{i=1}^n \\Omega_i = \\{1,\\cdots, N\\}$. \nThe results developed are reminiscent of some work on partial sparse recovery, when part of the support is already known~\\cite{bandeira2013partialNSP}.\nIn a second step we analyze the use of random and deterministic projections that may overlap for the recovery of the signal ${\\bf x}$.\n\n\n\\begin{rmk}\n\\label{rmk:nbOfVariables}\nIt is important to pause here and understand the problem above. \nKnowing the supports of the projections, one could work on the local vectors ${\\bf x}^{(i)}$ of smaller sizes $N_i < N$. \nDoing this would allow faster computation of the solutions, but it also implies that the sparsity (per subspace) has to remain small. \nOn the other hand, considering large vectors with many zero components allows the sparsity to remain rather large (compared to the subspace dimension). \nWe choose to investigate mainly the latter case, as we try to break the limitation on the complexity for a given sensing matrix.\n\\end{rmk}\n\n\\subsubsection{Decomposition via direct sums}\n\\label{ssec:direcsums}\nIt is clear that the dimension $N$ of the ambient space will play an important role. 
Indeed, if we consider no overlap between the projections, we have two drastically different extreme cases.\nThese scenarios are important as they shed light on the main ideas behind our recovery results.\n\n\\underline{Case $n=N$.} This case is characterized by (up to a permutation of the indices) $\\Omega_i = \\{i\\}$ for all $1 \\leq i \\leq N$.\nIn other words, each (set of) measurements ${\\bf y}^{(i)} = {\\mathbf A} P_i{\\bf x} + {\\bf e}^{(i)}$ gives information about a single entry $x_i$ of the input vector ${\\bf x}$ via an overdetermined linear system.\nIn this case, assuming ${\\mathbf A}$ has full rank, we can compute an estimate $\\widehat{x_i}$ of the entry $x_i$ as the solution to the ${\\boldsymbol\\ell}^2$ minimization problem:\n\\begin{equation}\n \\label{eq:l2minNproj}\n \\widehat{x_i} := \\operatorname{argmin}_{x \\in {\\mathbb R}} \\|{\\bf y}^{(i)} - x {\\bf a}_i\\|_2^2 = \\frac{{\\bf a}_i^T{\\bf y}^{(i)}}{{\\bf a}_i^T{\\bf a}_i},\n\\end{equation}\nwhere ${\\bf a}_i$ denotes the $i^{th}$ column of the matrix ${\\mathbf A}$.\nThe $N$ independent ${\\boldsymbol\\ell}^2$ minimizations ensure that the final solution $\\widehat{{\\bf x}}$ satisfies the bound\n\\begin{equation}\n \\label{eq:l2bound}\n \\|{\\bf x} - \\widehat{{\\bf x}}\\|_2 = \\sqrt{\\sum_{i=1}^N\\frac{|\\langle {\\bf a}_i, {\\bf e}^{(i)}\\rangle|^2}{\\|{\\bf a}_i\\|_2^4}} \\leq \\sqrt{\\sum_{i=1}^N \\frac{ \\|{\\bf e}^{(i)}\\|_2^2 }{\\|{\\bf a}_i\\|_2^2} } \\leq \\sum_{i=1}^N \\frac{\\|{\\bf e}^{(i)}\\|_2}{\\|{\\bf a}_i\\|_2}.\n\\end{equation}\nIn particular, in the noiseless scenario (${\\bf e}^{(i)} =0$, for all $i$), the recovery is exact. For (normalized) Gaussian measurement matrices, which have full rank -- even numerically -- with high probability~\\cite{chen2005condition,rudelson2009smallestSingValue}, it holds ${\\mathbb E}\\|{\\bf a}_i\\|_2^2 = 1$ and it follows that ${\\mathbb E}\\|{\\bf x} - \\widehat{{\\bf x}}\\|_2 \\lesssim \\sum_{i=1}^N \\|{\\bf e}^{(i)}\\|_2$.\n\n\nNotice that even though we have only one fixed matrix ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$, (obviously) we are no longer limited to the recovery of sparse vectors, which is the motivation for using fusion frames as described below.\nHowever, this has a price, as the (actual) number of measurements drastically increases to $m \\cdot N$.\nThis is clearly too many measurements for any practical scenario, but similar ideas lead to practical applications, such as using multiple low-rate sampling devices to solve a problem that would otherwise require a much higher sampling rate (see Section~\\ref{sssec:sampling}). \nNote also that all the results here should be put in the context of MMV problems, and not of the usual CS setup.\n\n\n\n\\underline{Case $n=2$.}\nWe consider now two sets $\\Omega_1$ and $\\Omega_2$ such that $\\Omega_1 \\cup \\Omega_2 = \\{1,\\cdots, N\\}$ and $\\Omega_1 \\cap \\Omega_2 = \\varnothing$. \nWithout loss of generality we can assume that $\\Omega_1 = \\{1, \\cdots, N_1\\}$ and $\\Omega_2 = \\{N_1+1,\\cdots,N_1+N_2\\}$ with $N_1+N_2 = N$.\nWe measure the unknown vector ${\\bf x}$ twice, as ${\\bf y}^{(1)} = {\\mathbf A}_1 {\\bf x} + {\\bf e}^{(1)}= {\\mathbf A} P_{1} {\\bf x} + {\\bf e}^{(1)}$ and ${\\bf y}^{(2)} = {\\mathbf A}_2 {\\bf x} + {\\bf e}^{(2)} = {\\mathbf A} P_{2} {\\bf x} + {\\bf e}^{(2)}$. 
If both $N_1$ and $N_2$ are smaller than $m$, then two ${\\boldsymbol\\ell}^2$ minimizations recover $\\widehat{{\\bf x}^{(1)}}$ and $\\widehat{{\\bf x}^{(2)}}$ independently, and it follows that the solution $\\widehat{{\\bf x}} = \\widehat{{\\bf x}^{(1)}}+\\widehat{{\\bf x}^{(2)}}$ obeys the following error bound:\n\\begin{equation}\n \\|\\widehat{{\\bf x}} - {\\bf x}\\|_2^2 \\leq \\|{\\mathbf A}_{\\Omega_1}^+{\\bf e}^{(1)}\\|_2^2 + \\|{\\mathbf A}_{\\Omega_2}^+{\\bf e}^{(2)}\\|_2^2.\n\\end{equation}\nThe reconstruction is again perfect in the noiseless case.\nMoreover, as in the previous case, there is no need for a sparsity assumption on the original vector ${\\bf x}$.\nThe oversampling ratio is not very large, as we only have a total of $2m$ measurements.\n\nIf, however, at least one of $N_1$ or $N_2$ (say $N_1$) is larger than $m$, then we need to use other tools, as we are now solving an underdetermined linear system.\nDriven by ideas from CS, we can assume the vector to be sparse on $\\Omega_1$. The recovery problem becomes\n\\begin{equation}\n \\label{eq:l1CS}\n \\text{Find } \\widehat{{\\bf x}^{(1)}}\t\\text{ that minimizes } \\|{\\bf z}\\|_1, \\quad\n\t\t\t\\text{ subject to } \\|{\\mathbf A}_1 {\\bf z} - {\\bf y}^{(1)}\\|_2 \\leq \\eta_1. \n\\end{equation}\n\n\nThe ${\\boldsymbol\\ell}^1$ minimization problem is introduced as a convex relaxation of the NP-hard ${\\boldsymbol\\ell}^0$ minimization.\nNote that the constraints apply only on the support $\\Omega_1$ of the unknown vector.\nHence the usual sparsity requirements encountered in the CS literature need not apply to the whole vector.\nUnfortunately, the sparsity assumption being applied independently on $\\Omega_1$ and\/or $\\Omega_2$ restricts us to non-uniform recovery guarantees only, at least when considering the full set of sparse signals.\nThe following definition will come in handy in the analysis.\n\\begin{adef}[Partial null space property]\n\\label{def:prnsp}\nA matrix ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$ is said to satisfy a partial robust and stable null space property of order $s$ with respect to a subset $\\Omega \\subset \\{1, \\cdots, N\\}$ with $N \\geq |\\Omega | > m$, $P$ the orthogonal projection onto $\\Omega$, and constants $0 < \\rho < 1$, $0 < \\tau$ if \n\\begin{equation}\n\\|(P{\\bf v})_S\\|_1 \\leq \\rho \\|(P{\\bf v})_{\\bar{S}}\\|_1 + \\tau \\|{\\mathbf A}{\\bf v}\\|_2\n\\end{equation}\nholds for any vector ${\\bf v}$ and any set $S \\subset \\Omega$ such that $|S| \\leq s$.\n\\end{adef}\n\\begin{rmk}\nThe previous definition is a weakening of the usual robust null space property, in which what happens on the complement of the set $\\Omega$ is irrelevant.\nIt is worth noticing that it also coincides with the usual definition when $\\Omega = \\{1, \\cdots, N\\}$.\n\\end{rmk}\nThe partial robust null space property of the measurement matrix ${\\mathbf A}$ on $\\Omega_1$ ensures that the recovered local vector $\\widehat{{\\bf x}^{(1)}}$ obeys the following error bound~\\cite[Theorem 4.19]{foucart2013book}:\n\\begin{equation}\n\\label{eq:errorBoundMixedLargeSmallRanks}\n\\|\\widehat{{\\bf x}^{(1)}} - P_1{\\bf x}\\|_1 \\leq \\frac{2(1+\\rho)}{1-\\rho}\\sigma_{s}({\\bf x}^{(1)})_1 + \\frac{4\\tau}{1-\\rho}\\|{\\bf e}^{(1)}\\|_2.\n\\end{equation}\nEquation~\\eqref{eq:errorBoundMixedLargeSmallRanks} is obtained by modifying the proofs of~\\cite[Theorem 4.19, Lemma 4.15]{foucart2013book} and adapting them to the presence of the projection $P_1$. 
\nA formal proof of a more general statement is given with the proof of Theorem~\\ref{thm:rdnspRecovery} below.\nThis yields the following direct consequence.\n\\begin{aprop}\n \\label{prop:recovery2proj}\n Let $N$ and $m$ be positive integers with $N > m$ and ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$. \nLet $N_1$ and $N_2$ be two integers with $N_1 + N_2 = N$ such that, up to a permutation, $\\Omega_1 = \\{1, \\cdots, N_1\\}$ and $\\Omega_2 = \\{N_1+1, \\cdots, N_1+N_2\\}$.\n Assume that $N_2 \\leq m$ and $N_1 > m$.\n Assume that the matrix ${\\mathbf A}$ satisfies the partial robust null space property of order $s$ with respect to $\\Omega_1$ with constants $0 < \\rho < 1$ and $\\tau > 0$ and that ${\\mathbf A}_{\\Omega_2}$ has full rank.\n For ${\\bf x} \\in {\\mathbb K}^N$, let $\\widehat{{\\bf x}} := \\widehat{{\\bf x}^{(1)}} + \\widehat{{\\bf x}^{(2)}}$ with $\\widehat{{\\bf x}^{(1)}}$ solution to Problem~\\eqref{eq:l1CS} and $\\widehat{{\\bf x}^{(2)}}$ solution to the overdetermined ${\\boldsymbol\\ell}^2$ minimization problem on $\\Omega_2$. Then the solution obeys:\n \\begin{equation}\n \\label{eq:boundOverUnderRecovery}\n \\|{\\bf x} - \\widehat{{\\bf x}}\\|_2 \\leq \\|{\\mathbf A}_{\\Omega_2}^+{\\bf e}^{(2)}\\|_2 + \\frac{2(1+\\rho)}{1-\\rho}\\sigma_{s}({\\bf x}^{(1)})_1 + \\frac{4\\tau}{1-\\rho}\\|{\\bf e}^{(1)}\\|_2.\n \\end{equation}\n Moreover, the total number of measurements amounts to $2m$ for the recovery of an $(N_2+s)$-sparse vector.\n\\end{aprop}\nRecovering the vector ${\\bf x}^{(2)}$ does not create any problem, as long as the pseudo-inverse has a reasonable ${\\boldsymbol\\ell}^2$ norm, i.e., as soon as the rank of the projection is reasonably small (see the discussion below and Theorem~\\ref{thm:minSingValue} in particular).\nThe recovery of the vector ${\\bf x}^{(1)} \\in {\\mathbb K}^{N}$ is ensured provided the number of subgaussian measurements $m$ scales as~\\cite[Corollary 9.34]{foucart2013book}\n\\begin{equation}\n m \\gtrsim 2s \\ln(eN\/s)\\( 1+\\rho^{-1} \\)^2.\n\\end{equation}\nProposition~\\ref{prop:recovery2proj}, slightly reformulated, yields the following result:\n\\begin{acorol}\n Write ${\\mathbf A} = {\\mathbf A} P_{1} + {\\mathbf A} P_{2} \\in {\\mathbb K}^{m \\times N}$ with $\\Omega_1 \\cap \\Omega_2 = \\varnothing$ and $\\Omega_1 \\cup \\Omega_2 = \\{1, \\cdots, N\\}$ and $N_1 = |\\Omega_1|, N_2 = |\\Omega_2|$.\n If ${\\mathbf A}$ is a subgaussian matrix and ${\\mathbf A}_{\\Omega_2}$ has full rank, then provided that\n \\begin{equation*}\n m \\gtrsim 2s \\ln(eN\/s)\\( 1+\\rho^{-1} \\)^2,\n \\end{equation*}\n any vector ${\\bf x} \\in \\Theta := \\Sigma_{s}^{N_1} + {\\mathbb K}^{N_2} \\subset \\Sigma_{s+N_2}^N$ can be recovered with the bound~\\eqref{eq:boundOverUnderRecovery}.\n\\end{acorol}\nHere, $\\Sigma_s^{N_1}$ denotes the set of $s$-sparse vectors in ${\\mathbb K}^{N_1}$.\nThe previous corollary provides a uniform recovery result, provided the vector ${\\bf x}$ conforms to the model $\\Theta$. 
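\nA minimal numerical sketch of this two-block procedure follows (in Python with numpy\/scipy; the dimensions are arbitrary, and the noiseless basis pursuit on $\\Omega_1$ is solved through its standard linear-programming reformulation rather than a dedicated solver).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\nrng = np.random.default_rng(2)\nN1, N2, m, s = 60, 10, 30, 5   # |Omega_1|, |Omega_2|, measurements, sparsity on Omega_1\nN = N1 + N2\n\nA = rng.standard_normal((m, N)) \/ np.sqrt(m)\n\n# s-sparse part on Omega_1 = {0,...,N1-1}, dense part on Omega_2.\nx = np.zeros(N)\nx[rng.choice(N1, size=s, replace=False)] = rng.standard_normal(s)\nx[N1:] = rng.standard_normal(N2)\n\ny1 = A[:, :N1] @ x[:N1]        # measurements of P_1 x (noiseless)\ny2 = A[:, N1:] @ x[N1:]        # measurements of P_2 x (noiseless)\n\n# Omega_2: overdetermined, plain least squares.\nx2_hat, *_ = np.linalg.lstsq(A[:, N1:], y2, rcond=None)\n\n# Omega_1: underdetermined, basis pursuit  min ||z||_1  s.t.  A_1 z = y1,\n# rewritten as a linear program with z = u - v and u, v >= 0.\nc = np.ones(2 * N1)\nA_eq = np.hstack([A[:, :N1], -A[:, :N1]])\nres = linprog(c, A_eq=A_eq, b_eq=y1, bounds=(0, None))\nx1_hat = res.x[:N1] - res.x[N1:]\n\nx_hat = np.concatenate([x1_hat, x2_hat])\nprint("recovery error:", np.linalg.norm(x_hat - x))\n\\end{verbatim}\n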
\n\n\n\n\\subsubsection{Controlled deterministic projections}\n\\label{ssec:controlledDetProj}\n\nWe look here at a scenario where the sets $\\Omega_i$ (and hence the projections $P_i$) are completely and deterministically controlled.\nWe consider an integer $n > 0$ and subsets $\\Omega_i \\subset \\{1,\\cdots,N\\}$ with $|\\Omega_i| = N_i$, $\\cup_{i=1}^n \\Omega_i = \\{1,\\cdots,N\\}$ and $N_i \\leq m$, and assume again that ${\\mathbf A}$ has full rank.\nIn this case, we can always enforce the disjointness of the supports, $\\Omega_i \\cap \\Omega_j = \\varnothing$ for any $i \\neq j$.\nThis yields the trivial recovery of the input signal from its local information vectors:\n\\begin{equation}\n \\widehat{{\\bf x}} := \\sum_{i = 1}^n \\widehat{{\\bf x}^{(i)}}.\n\\end{equation}\nHere, all the local vectors are recovered via the Moore-Penrose pseudo-inverses of the submatrices ${\\mathbf A}_{\\Omega_i}$ composed of only the columns supported on $\\Omega_i$:\n\\begin{equation}\n \\widehat{{\\bf x}^{(i)}} = {\\mathbf A}_{\\Omega_i}^+ {\\bf y}^{(i)} = {\\bf x}^{(i)} + {\\mathbf A}_{\\Omega_i}^+ {\\bf e}^{(i)}.\n\\end{equation}\nThe error estimate follows directly:\n\\begin{equation}\n \\label{eq:errorControlledDeterministic}\n \\|{\\bf x} - \\widehat{{\\bf x}}\\|_2^2 \\leq \\sum_{i = 1}^n \\|{\\mathbf A}_{\\Omega_i}^+ {\\bf e}^{(i)}\\|_2^2.\n\\end{equation}\n\nNote that it holds $\\|{\\mathbf A}^+{\\bf e}\\|_2 \\leq \\|{\\mathbf A}^+\\|_{2 \\to 2} \\|{\\bf e}\\|_2$ and that $\\|{\\mathbf A}^+\\|_{2 \\to 2} = 1\/\\operatorname{min}_{1 \\leq i \\leq r}(\\sigma_i)$, with $r$ the rank of ${\\mathbf A}$ and $\\{\\sigma_i\\}_{i=1}^r$ its nonzero singular values.\nAs a consequence, when dealing with ``nice'' matrices, the bound~\\eqref{eq:errorControlledDeterministic} is reasonable.\nIn a CS setup, normalized sensing matrices generated at random from a Gaussian distribution have the property of being \\emph{well-behaved}, as suggested by~\\cite[Theorem 9.26]{foucart2013book}:\n\\begin{atheorem}\n\\label{thm:minSingValue}\nLet $\\widetilde{{\\mathbf A}}$ be an $m \\times s$ Gaussian matrix with $m > s$ and ${\\mathbf A} := \\frac{1}{\\sqrt{m}}\\widetilde{{\\mathbf A}}$ be its variance-normalized counterpart. \nThen for $t > 0$,\n \\begin{align*}\n {\\mathbb P}\\(\\sigma_\\text{max}({\\mathbf A}) \\geq 1 + \\sqrt{s\/m} + t\\) &\\leq e^{-\\frac{mt^2}{2}} \\\\\n {\\mathbb P}\\(\\sigma_\\text{min}({\\mathbf A}) \\leq 1 - \\sqrt{s\/m} - t\\) &\\leq e^{-\\frac{mt^2}{2}},\n \\end{align*}\nwhere $\\sigma_\\text{max}$ and $\\sigma_\\text{min}$ denote the maximum and the minimum singular values, respectively.\n\\end{atheorem}\nFor small projection ranks, applying Theorem~\\ref{thm:minSingValue} with $s = N_i \\leq r < m$ yields ${\\mathbb P}(\\|{\\mathbf A}_{\\Omega_i}^+\\|_{2 \\to 2} \\geq \\frac{1}{1-\\sqrt{\\frac{r}{m}} - t}) \\leq e^{-mt^2\/2}$, which justifies that the bound~\\eqref{eq:errorControlledDeterministic} is small.\n\n\n\n\n\n\n\\subsubsection{Rank-controlled projections}\n\\label{ssec:rankRdmSupport}\nThis scenario differs from the previous one in that we may control only the ranks $N_i$ of the projections, while their supports are random.\nThis example is motivated by SAR applications, where the rank of the projections is controlled by the aperture and resolution of the sensing device. 
\nIn this case, for uniform recovery of any vector ${\\bf x}$ we need to ensure that, with high probability, the whole support $\\{1,\\cdots,N\\}$ is covered by the random projections.\nWe assume that all the projections have the same rank $r$ for simplicity of calculations, but similar ideas apply if the ranks vary.\nWe pick uniformly at random $n$ sets $\\Omega_i$ of size $r$ in $\\{1,\\cdots,N\\}$. By a union bound, it holds\n\\begin{equation}\n\\label{eq:probaElementNotInOmega} \n{\\mathbb P}\\left[ \\exists i \\in \\{1,\\cdots,N\\}: i \\notin \\Omega := \\cup_{j = 1}^n \\Omega_j \\right] \\leq N {\\mathbb P}\\left[ 1 \\notin \\Omega \\right] = N\\({\\mathbb P}\\left[ 1 \\notin \\Omega_1 \\right] \\)^n \n = N\\( \\frac{N-r}{N} \\)^n.\n\\end{equation}\nThis gives that ${\\mathbb P}\\left[ \\exists i \\in \\{1,\\cdots,N\\}: i \\notin \\Omega := \\cup_{j = 1}^n \\Omega_j \\right] \\leq \\varepsilon$ whenever\n\\begin{equation}\n \\label{eq:nbMeasVectors}\n n \\geq \\frac{\\log(N\/\\varepsilon)}{\\log(N\/(N-r))}.\n\\end{equation}\n\nThis direct consequence follows.\n\\begin{aprop}\n \\label{prop:recoveryfixrank}\n Let ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$ with $m < N$, $r \\leq m$, $\\varepsilon > 0$, and $n$ be such that Equation~\\eqref{eq:nbMeasVectors} holds. \nIn addition, assume that every submatrix with $r$ columns extracted from ${\\mathbf A}$ has full rank. \nLet $\\Omega_1, \\Omega_2, \\cdots, \\Omega_n$ be $n$ subsets in $\\{1,\\cdots,N\\}$ of size $r$ chosen independently and uniformly at random.\n Any vector ${\\bf x}$ recovered from the fusion of local recoveries as $\\widehat{{\\bf x}} := S^{-1} \\sum_{i=1}^n \\widehat{{\\bf x}^{(i)}}$ from the measurements ${\\bf y}^{(i)} = {\\mathbf A} P_i{\\bf x} + {\\bf e}^{(i)}$, $1 \\leq i \\leq n$, satisfies, with probability at least $1-\\varepsilon$, \n \\begin{equation}\n \\label{eq:errorBoundProp2} \\| {\\bf x} - \\widehat{{\\bf x}} \\|_2^2 \\leq \\frac{1}{C} \\sum_{i=1}^n \\|{\\mathbf A}_{\\Omega_i}^+ {\\bf e}^{(i)}\\|_2^2,\n \\end{equation}\n where $S({\\bf x}) := \\sum_{i=1}^n P_i {\\bf x}$, $\\widehat{{\\bf x}^{(i)}} := {\\mathbf A}_{\\Omega_i}^+ {\\bf y}^{(i)}$, and $C$ denotes the lower frame bound of $S$.\n\\end{aprop}\nThe lower frame bound is defined for general fusion frame operators as in Definition~\\ref{def:fusionframes}. \nThe error bound above is applicable in case of any fusion\/filtering scenario. \nIn our particular set-up, we calculate this bound exactly.\n\\begin{alemma}\nGiven $n$ projections $P_i$ of rank $r$, the lower frame bound of the fusion frame operator $S := \\sum P_i$ is given by\n\\begin{equation}\n\\label{eq:lowerFrameBound}\nC := \\min_{1 \\leq k \\leq N} {\\mathcal M}(k) =: {\\mathcal M},\n\\end{equation}\nwhere ${\\mathcal M}(k)$ is the multiplicity of index $k$.\nFormally, let $\\Gamma_k := \\{ j \\, | \\, 1\\leq j \\leq n, \\, k \\in \\Omega_j \\}$, then ${\\mathcal M}(k) := |\\Gamma_k|$.\n\\end{alemma}\n\n\\begin{proof}\nThat $C \\leq {\\mathcal M}$ is clear. \nIndeed, let ${\\bf x} = {\\bf e}_{k_0}$, where $k_0$ is one of the indices achieving the minimum and ${\\bf e}_{k_0}$ denotes the corresponding canonical basis vector. \nThen it holds $S({\\bf e}_{k_0}) = \\sum_{j \\in \\Gamma_{k_0}} {\\bf e}_{k_0} = {\\mathcal M} {\\bf e}_{k_0}$ and it follows that $\\|S({\\bf x})\\|_2^2 \\leq {\\mathcal M}^2$.\n\nOn the other hand, for any ${\\bf x} \\in {\\mathbb K}^N$, the fusion frame operator can be rewritten as $S({\\bf x}) = \\sum_{i=1}^N {\\mathcal M}(i)\\, x_i {\\bf e}_i$. 
\nConsequently it holds that, for any ${\\bf x} \\in {\\mathbb K}^N$, $\\|S({\\bf x})\\|_2^2 \\geq {\\mathcal M}^2 \\sum_{i=1}^N|x_i|^2$ and hence $C \\geq {\\mathcal M}$, which concludes the proof.\n\\end{proof}\n\nAs a consequence, it is trivial to see that for any $1 \\leq k \\leq N$, ${\\mathcal M}(k)$ is a non-decreasing function of the number of projections $n$. Hence, the greater the number of projections (the redundancy, in fusion frame terms), the greater the bound $C$, and therefore the smaller the error in the recovery according to Equation~\\eqref{eq:errorBoundProp2}. \nThis, however, can only hold true as long as the noise per measurement vector remains small.\n\n\nNote that in the case of subsets selected independently and uniformly at random, we can make a more precise statement. \nThe lower frame bound is defined as the minimum number of occurrences of any index $k \\in \\{1,\\cdots, N\\}$. \nLet us consider ${\\mathcal M}(k)$ and $\\Gamma_k$ as in the previous lemma. \nLet $k \\in \\{1,\\cdots,N\\}$ be any index and $\\Omega_j$ denote a draw of a random set. \nIt holds ${\\mathbb P}[k \\in \\Omega_j] = r\/N$. \nThe subsets being independent of each other, ${\\mathcal M}(k)$ is a binomial random variable with probability of success $P = r\/N$ for each of the $n$ trials. \nIt follows that ${\\mathbb P}[{\\mathcal M}(k) = l] = {n \\choose l} P^l(1-P)^{n-l}$ for all $1 \\leq k \\leq N$ and $0 \\leq l \\leq n$.\n\nPutting everything together, and treating the multiplicities ${\\mathcal M}(k)$ as independent across the indices $k$ (an approximation, since the sets $\\Omega_j$ have fixed size $r$), we get that $C := \\min \\limits_{k \\in \\{1,\\cdots, N\\}}{\\mathcal M}(k)$ is a random variable such that \n\\begin{align}\n{\\mathbb P}[C \\geq l] &= {\\mathbb P}[\\forall 1 \\leq k \\leq N, {\\mathcal M}(k) \\geq l] \\\\ \n &\\approx \\( {\\mathbb P}[{\\mathcal M}(k) \\geq l] \\)^N = \\( 1 - \\sum_{j=0}^{l-1}{\\mathbb P}[{\\mathcal M}(k) = j] \\)^N = \\( 1 - F(l-1,n,P) \\)^N,\n\\end{align} \nwhere $F(l-1,n,P)$ denotes the cumulative distribution function of the binomial distribution with parameters $n$ and $P$, evaluated at $l-1$.\n\nThese expressions resemble the calculations involved in Equations~\\eqref{eq:probaElementNotInOmega} and ~\\eqref{eq:nbMeasVectors}, albeit with a much more complicated probability distribution. \nTo avoid unnecessarily tedious calculations that would impair the readability of the paper, we choose to leave the following result as a conjecture.\n\\begin{aconj}\n\\label{conj:expectedLowerFramebound}\nLet $\\Omega_1, \\Omega_2, \\cdots, \\Omega_n$ be $n$ subsets of $r$ elements taken uniformly at random in $\\{1,\\cdots,N\\}$. \nLet $\\Gamma_k := \\{ j \\, | \\, 1\\leq j \\leq n, \\, k \\in \\Omega_j \\}$, and ${\\mathcal M}(k) := |\\Gamma_k|$.\nThen $C := \\min_{1 \\leq k \\leq N} {\\mathcal M}(k) =: {\\mathcal M}$ grows at least linearly with $n$, i.e., there exist constants $c > 0$ and $b \\geq 0$, independent of $n$, such that \n\\begin{equation}\n\\label{eq:linearGrowthC}\n{\\mathbb E}[C] \\geq cn - b.\n\\end{equation}\n\\end{aconj}\nA direct consequence of this is that one can always find a multiplicative constant $c' \\leq c$ such that for a number of projections $n \\geq b\/(c-c')$, ${\\mathbb E}[C] \\geq c'n$ (the case $c = c'$ can only hold for $b = 0$, in which case there is no need for this remark).\nThough the proof is not given, the result is backed up by the numerical results shown in Figure~\\ref{fig:lowerFrameBounds}.\\footnote{It is also backed up by particular cases, e.g., when the ranks are exactly half the dimension; see http:\/\/math.stackexchange.com\/questions\/1135253\/mean-value-of-minimum-of-binomial-variables} \nThe graphs always show the expected linear growth, independently of the parameters of the problem. 
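\nA short Monte Carlo sketch of the kind used to produce Figure~\\ref{fig:lowerFrameBounds} can be written as follows (in Python with numpy; the parameters $N$, $r$, and the number of trials are illustrative and do not reproduce the exact settings of the figure).\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(3)\nN, r, trials = 100, 20, 300\n\ndef expected_lower_bound(n):\n    # Monte Carlo estimate of E[C], C = min_k M(k),\n    # where M(k) is the multiplicity of index k among n random supports.\n    vals = []\n    for _ in range(trials):\n        M = np.zeros(N, dtype=int)\n        for _ in range(n):\n            M[rng.choice(N, size=r, replace=False)] += 1\n        vals.append(M.min())\n    return np.mean(vals)\n\nfor n in (25, 50, 100, 200):\n    print(n, expected_lower_bound(n))   # roughly linear growth in n\n\\end{verbatim}\n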
\n\n\\begin{figure}[htb]\n\\centering\n\\subfigure[Large projections]{\n\\includegraphics[width=0.35\\linewidth]{lowerFrameBoundsLargeProjections.eps}\n\\label{fig:lowFrameLargeProj}\n}\n\\subfigure[Small projections]{\n\\includegraphics[width=0.35\\linewidth]{lowerFrameBoundsSmallProjections.eps}\n\\label{fig:lowFrameSmallProj}\n}\n\\caption{Linear growth of the expectation of the lower frame bound for various use cases. The averages are calculated from 300 random experiments. The left graph shows the average lower frame bound when dealing with rather large projections (of rank $r = N\/2$), while the right graph corresponds to much smaller projections ($r = N\/5$). The oversampling ratio is calculated as a linear function of the number of projections and is independent of the ambient dimension $N$; $\\zeta = rn\/N$.}\n\\label{fig:lowerFrameBounds}\n\\end{figure}\n\nAssuming the conjecture to be true, Equation~\\eqref{eq:errorBoundProp2} then simplifies to \n\\begin{equation}\n{\\mathbb E}\\|\\widehat{{\\bf x}} - {\\bf x}\\|_2^2 \\leq \\frac{1}{c'n}\\sum_{i=1}^n\\|{\\mathbf A}_{\\Omega_i}^+{\\bf e}^{(i)}\\|_2^2 \\leq \\frac{\\delta^2 \\nu}{c'},\n\\end{equation}\nwhere the expectation is taken over the draw of the projections, $c' > 0$ is the constant introduced above, $\\delta := \\max_i\\|{\\mathbf A}_{\\Omega_i}^+\\|_{2 \\to 2}$, and $\\nu := \\max_i\\|{\\bf e}^{(i)}\\|_2^2$.\nIt shows that, at least in expectation, the error should not grow as the number of measurements increases. \nMoreover, numerical results illustrated in Figure~\\ref{fig:nbProjections} suggest that the expected reconstruction error decreases. \nAn intuition for this behavior is provided in a simple example in Equations~\\eqref{eq:simpleCalculations1} and ~\\eqref{eq:simpleCalculations2}.\nFrom a compressed sensing perspective, it has been shown that the least favorable case occurs when the same measurement vectors are repeated again and again. \nThis translates in our scenario to the case where the additional measurements consist solely of repetitions of the same projections. \nConsidering the example of a SAR imaging process developed in Section~\\ref{ssec:examples:SAR}, this would correspond to a plane flying over a given region multiple times, with the spatial filtering being exactly the same. \n\n\n\n\n\\begin{rmk}\n It is important to note here that there is no sparsity assumption. 
It is possible to recover {\\bf any} vector ${\\bf x}$ with high probability, as long as the number of measurement vectors scales reasonably with the dimension of the input space.\n\\end{rmk}\n\n\\subsubsection{Constrained number of measurements and ranks of projections}\nIn certain scenarios, the physical measurement devices limit our freedom in choosing the ranks and size of the measurements.\nWe hence deal here with fixed ranks $N_i$ for the projections $P_i$.\nAssume, without loss of generality, that the first $0 \\leq k \\leq n$ projections have rank lower than the number of measurements $m$.\nFor these subspaces, the usual ${\\boldsymbol\\ell}^2$ minimization procedure yields perfect (or optimal in the noisy case) recovery of the local information ${\\bf x}^{(i)}$, for $1 \\leq i \\leq k$.\nFor the remaining $n-k$ subspaces, the pseudo-inverse is not sufficient, and we use tools from CS.\nWe here again use the partial robust null space property from Definition~\\ref{def:prnsp} on every subset $\\Omega_j$, for $k+1 \\leq j \\leq n$.\nCombining results from Propositions~\\ref{prop:recoveryfixrank} and~\\ref{prop:recovery2proj} yields the following corollary:\n\\begin{acorol}\n\\label{cor:constrainedRecovery}\nLet $m < N$ and ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$. \nAssume that we are given a set of $n$ sets $\\{\\Omega_i\\}_{i=1}^n$ in $\\{1,\\cdots,N\\}$ of respective sizes $\\{N_i\\}_{i=1}^n$ with $N_1 \\leq N_2 \\leq \\cdots \\leq N_n$.\nAssume in addition that $\\cup_{i}\\Omega_i = \\{1,\\cdots,N\\}$. \nThen there exists a unique $0\\leq k \\leq n$ such that $N_k \\leq m$ and $N_{k+1} > m$ (with the convention that $N_0 = 0$ and $N_{n+1} > m$). \nAssume that the submatrices ${\\mathbf A}_{\\Omega_i}$, for $1 \\leq i \\leq k$, have full rank.\nIf in addition the measurement matrix satisfies a partial robust null space property of order $s$ with respect to every subset $\\Omega_i$, $i \\geq k+1$, then the approximation $\\widehat{{\\bf x}}$ defined as $\\widehat{{\\bf x}}:= S^{-1}\\sum_{i=1}^n \\widehat{{\\bf x}^{(i)}}$, with $\\widehat{{\\bf x}^{(i)}} := {\\mathbf A}_{\\Omega_i}^+ {\\bf y}^{(i)}$ for $1 \\leq i \\leq k$ and $\\widehat{{\\bf x}^{(i)}}$ solutions to the ${\\boldsymbol\\ell}^2$ constrained ${\\boldsymbol\\ell}^1$ minimization problems~\\eqref{eq:bpdn} for $k +1 \\leq i \\leq n$, fulfills the following estimate:\n\\begin{equation}\n\\|{\\bf x} - \\widehat{{\\bf x}}\\|_2 \\leq \\frac{1}{{\\mathcal M}}\\( \\sum_{i=1}^k \\| {\\mathbf A}_{\\Omega_i}^+ {\\bf e}^{(i)} \\|_2 + \\frac{2(1+\\rho)}{1-\\rho}\\sum_{i = k+1}^n \\sigma_s({\\bf x}^{(i)})_1 + \\frac{4\\tau}{1-\\rho}\\sum_{i=k+1}^n\\|{\\bf e}^{(i)}\\|_2 \\).\n\\end{equation}\n\\end{acorol}\nNote that for simplicity we have assumed that the NSP is valid on every subspace for the same set of parameters. \nTo avoid notational encumbrance, we do not write results where $\\rho$, $\\tau$, and $s$ may depend on the subspace considered. \n\n\\begin{rmk}\nThis is a direct application of the previous results. \nIt is obtained by recovering every local piece of information independently from the others.\nIt is possible to improve these estimates by a sequential procedure where we first estimate the $x_j$ for $j \\in \\Omega_i$, for some $1 \\leq i \\leq k$, and then use this reliable estimate to improve the accuracy of the ${\\boldsymbol\\ell}^1$ minimization program.\n\\end{rmk}\n\nA final remark considers the case where the rank of the projection is also random. 
We can think of the sets $\\Omega_i$, for $1 \\leq i \\leq n$, as random sets in which each index is included independently with probability of success $p$, so that $|\\Omega_i|$ is a binomial random variable. In this case, the rank of the projection $P_i$ is controlled by the expectation of this random variable. An analysis similar to the previous one can be carried out to ensure recovery of any vector with high probability.\n\n\\subsection{Numerical examples}\n\\label{ssec:numerics}\nThis section describes numerical recovery results for the cases described above. \nWe first show some particular examples of recovery when dealing with dense signals, and show that we can break the traditional sparsity limit by adding a few sensors. \nThe next example shows the behavior of the quality of the reconstruction when slowly adding sensors. \nFinally, the last subsection validates our conjecture and shows that the effect of the noise tends to decrease as projections are added, and that the overall quality of recovery scales linearly with the noise level. \nIn another contribution, we have also verified that we can recover a very dense Fourier spectrum by using our approach with Fourier measurement matrices; see~\\cite{ABL2017Conm}. \n\\subsubsection{Examples of recovery}\n\n\n\\begin{figure}[htb]\n\\centering\n\\subfigure[Large ranks]{\n\\includegraphics[width=0.30\\linewidth]{largeRankSmallSparsityComparison.jpg}\n\\label{fig:lRSS}\n}\n\\subfigure[Small ranks, small sparsity]{\n\\includegraphics[width=0.30\\linewidth]{smallRankSmallSparsityComparison.jpg}\n\\label{fig:sRSS}\n}\n\\subfigure[Small ranks, large sparsity]{\n\\includegraphics[width=0.30\\linewidth]{smallRankLargeSparsityComparison.jpg}\n\\label{fig:sRLS}\n}\n\\caption{Examples of fusion reconstruction.}\n\\label{fig:examples}\n\\end{figure}\n\nThe examples of reconstruction depicted in Figure~\\ref{fig:examples} were created by generating random $s$-sparse Gaussian vectors with $s=200$ for Figures~\\ref{fig:lRSS} and~\\ref{fig:sRSS}, and $s = 500$ for Figure~\\ref{fig:sRLS}. \nIn each case, the $250 \\times 600$ matrix was generated at random with entries i.i.d. from a Gaussian distribution. \nIn the first example, the rank is set to $r = 300 > m$ and hence ${\\boldsymbol\\ell}^1$ minimization (traditional basis pursuit) is used on every subspace. \nThis yields a total of $13$ projections generated at random.\nFor the last two examples, the ranks of the projections are set to $r = 200 < m$. In this case, only a classical ${\\boldsymbol\\ell}^2$ inversion is needed on every subspace. \nHere, since the ranks are smaller, the number of projections has to be increased to $22$ in order to ensure that the whole set $\\{1,\\cdots, N\\}$ is covered with high probability. \nBy looking carefully at the last example, one can see that an index with non-zero magnitude has not been selected (around index 520). These cases, however, are rare. \nIn every figure, the red '+' crosses represent the true signal, the blue circles represent the reconstructed signal from our fusion approach, and the green 'x' marks correspond to the reconstruction with traditional basis pursuit. \nWhile the very last set-up (very high number of non-zero components) is clearly not suitable for usual compressed sensing, the advantage of projections and fusions can be seen even in the first two (where the number of non-zeros remains relatively small). \nAnother aspect worth noting is that, when dealing with small ranks, the solutions to the ${\\boldsymbol\\ell}^2$ minimization problems are computed efficiently. 
As a consequence, even when the number of projections increases, the calculation of the recovered $\\widehat{{\\bf x}}$ is still orders of magnitude faster. \n\\subsubsection{Recovery and number of projections}\n\\label{ssec:constraintedProj}\nGiven any scenario introduced above, it is expected from Conjecture \\ref{conj:expectedLowerFramebound} that increasing the number of projections (of a given rank) will increase the lower frame bound~\\eqref{eq:lowerFrameBound} at least linearly. \nThis linear increase of the lower frame bound compensates for the potential increase in the reconstruction error. \nOne could, however, hope for better results according to the numerical evidence illustrated in Figure~\\ref{fig:nbProjections}, where the average-case error seems to decrease as the number of projections increases.\nGiven a set of $n$ projections generating a fusion frame, the least favorable case when adding an extra $n$ projections (studied in~\\cite{eldar2010average} for the case of a single filtering operation) is when the exact same $\\Omega_i$ are repeated.\nThe exact behavior of the reconstruction error with respect to the number of projections is still under investigation. \n\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=0.75\\linewidth]{nbProjections.jpg}\n\\caption{${\\boldsymbol\\ell}^2$ error of recovery for various numbers of projections.}\n\\label{fig:nbProjections}\n\\end{figure}\nFor a total of 10 random Gaussian matrices of size $300 \\times 1000$, each paired with $10$ random Gaussian $200$-sparse vectors (note that this is a setting where usual compressed sensing has no recovery guarantees), we let the number of random projections of rank $r = 500$ go from $60\\%$ up to $5$ times the minimum given by Equation~\\eqref{eq:nbMeasVectors} ($17$ in our setting, for a probability of at least $99\\%$ of covering the whole index set).\nAs can be seen from the figure, the recovery is unlikely to be correct as long as the number of projections remains small. The trend is generally towards perfect recovery, though this can only be guaranteed with high probability, which explains the little spikes. Moreover, while the maximum error might be somewhat larger, it appears 1) that it remains within reasonable bounds (depending on the application) and 2) that the average error among the $100$ tests per number of projections is very small, suggesting that only a few of the recoveries are off. \nA way to understand this is to consider the case where we have two slightly overlapping projections. Given the global set $\\Omega = \\{1, \\cdots, N\\}$, assume we split it into $\\Omega_1$ and $\\Omega_2$ with some overlap, i.e. $\\Omega = \\Omega_1 \\cup \\Omega_2$ and $\\Omega_{12} := \\Omega_1 \\cap \\Omega_2 \\neq \\varnothing$. We independently solve\n\\begin{align}\n\\widehat{{\\bf x}^{(i)}} &:= \\operatorname{argmin}_{{\\bf z}} \\|{\\bf z} \\|_1 \\\\ \n\t\t\t\t\t\t\t&\\text{s.t. } \\|{\\mathbf A} P_i {\\bf z} - {\\bf y}^{(i)}\\|_2 \\leq \\eta_i\n\\end{align}\nfor $1 \\leq i \\leq 2$. For now we consider only the noiseless case, i.e. $\\eta_1 = \\eta_2 = 0$, which is equivalent to solving two basis pursuit optimization problems. \nWe can decompose the solutions recovered as $\\widehat{{\\bf x}^{(i)}} = \\widehat{{\\bf x}^{(i)}}|_{\\Omega_i \\backslash \\Omega_{12}} + \\widehat{{\\bf x}^{(i)}}|_{\\Omega_{12}}$. \nIf we assume, for simplicity of understanding, that the first component is recovered exactly, we have that $\\widehat{{\\bf x}^{(1)}} = {\\bf x}^{(1)}$. 
\nIf, however, the second component is not recovered accurately, we can write $\\widehat{{\\bf x}^{(2)}} = {\\bf x}^{(2)} + {\\bf e}^{(2)}$ (for further generalizations, we can always write $\\widehat{{\\bf x}^{(i)}} = {\\bf x}^{(i)} + {\\bf e}^{(i)}$, for $1 \\leq i \\leq n$, with ${\\bf e}^{(i)} = 0$ in case of successful recovery on the set $\\Omega_i$). \nIn this very simple example, we have $S({\\bf x}) = \\sum_{i=1}^n {\\bf x}^{(i)} = {\\bf x}^{(1)} + {\\bf x}^{(2)}$. \nSimilarly, applying the inverse fusion frame operator amounts to taking the pointwise empirical average of the evidence (i.e. the local recoveries): \n\\begin{equation}\n\\label{eq:simpleCalculations1}\nS^{-1}\\({\\bf x}^{(1)} + {\\bf x}^{(2)}\\) = {\\bf x}^{(1)}|_{\\Omega_1 \\backslash \\Omega_{12}} + {\\bf x}^{(2)}|_{\\Omega_2 \\backslash \\Omega_{12}} + \\frac{{\\bf x}^{(1)}|_{\\Omega_{12}} + {\\bf x}^{(2)}|_{\\Omega_{12}}}{2}.\n\\end{equation}\nNote that the $2$ in the denominator denotes the number of times that the indices in $\\Omega_{12}$ are picked in the creation of the (random) projections.\nIn other words, increasing the number of projections potentially increases the denominator for a given index and hence reduces the error:\n\\begin{equation}\n\\label{eq:simpleCalculations2}\n\\widehat{{\\bf x}} - {\\bf x} = S^{-1}\\( \\widehat{{\\bf x}^{(1)}} + \\widehat{{\\bf x}^{(2)}} - {\\bf x}^{(1)} - {\\bf x}^{(2)} \\) = {\\bf e}^{(2)}|_{\\Omega_2 \\backslash \\Omega_{12}} + \\frac{{\\bf e}^{(2)}|_{\\Omega_{12}}}{2}.\n\\end{equation}\nAs a consequence, the error (on the set $\\Omega_{12}$) is bounded by the maximum of the error from every recovery problem divided by the number of times a particular index appears. \n\\subsubsection{Robustness to noise}\n\n\\begin{figure}[htb]\n\\centering\n\\subfigure[Minimum number of projections]{\n\\includegraphics[width=0.30\\linewidth]{noise_minNbProj.jpg}\n\\label{fig:noise_minProj}\n}\n\\subfigure[Double number of projections]{\n\\includegraphics[width=0.30\\linewidth]{noise_doubleNbProj.jpg}\n\\label{fig:noise_doubleProj}\n}\n\\subfigure[Basis pursuit denoising]{\n\\includegraphics[width=0.30\\linewidth]{noise_l1.jpg}\n\\label{fig:noise_l1}\n}\n\\caption{Behavior in the presence of noise.}\n\\label{fig:noise}\n\\end{figure}\n\nIn the set of experiments illustrated in Figure~\\ref{fig:noise}, we compare the robustness of our algorithm to that of traditional basis pursuit denoising in the presence of noise. \nWe have set the parameters in a regime where~\\eqref{eq:bpdn} works, for a fair comparison. \nFor a $500$-dimensional ambient space and a $300$-dimensional measurement space, we generated 5 random $80$-sparse Gaussian vectors for each of the 10 randomly generated Gaussian matrices. \nFor every single test, additive Gaussian noise with a given ${\\boldsymbol\\ell}^2$ norm $\\theta$ was added. \nThis means that for the case of the distributed approach, $n$ independent noise components of norm $\\theta$ have been added in total; one for each channel. \nThe figures depict the evolution of the ${\\boldsymbol\\ell}^2$ recovery error as the energy of the noise component increases. \nThe black line represents the maximum of the error, the blue one the minimum, and the red one the average over all samples. \nThe left graph shows the result when the number of projections is exactly set as in Equation~\\eqref{eq:nbMeasVectors} while the second one doubles this number. \nThe third figure shows the result when using the usual Basis Pursuit Denoising. 
\nIt is important to notice that the high peak appearing in the first figure is due to an index carrying a non-zero component of the original vector ${\\bf x}$ not being selected at all by the random projections. \nThis, however, does not contradict the high-probability recovery guarantee. \nThe improvement in the noise behavior from the first figure to the second shows how the fusion frame operator tends to average out the local errors to yield a better estimate (as described in the previous section). \nFinally, all of the algorithms scale linearly with the norm of the noise per measurement (as suggested by Equation~\\eqref{eq:errorBoundProp2}), and even though the total noise increases as the number of projections is increased, the recovery tends to improve when considering more projections. \n\n\n\n\\section{An extension of traditional compressed sensing}\n\\label{sec:extensions}\n\nOur results so far ensure that robust and stable recovery of dense signals is possible, by smartly combining local information.\nIn this section we show that the ideas developed in Section~\\ref{ShidongSection} can also be considered as an extension of traditional CS.\nIn particular, this gives solid mathematical foundations to our work. \nIt is important to note that similar ideas have been developed in parallel in~\\cite{adcock2016CSparallel}. \nThere, the authors introduced a model similar to~\\eqref{eq:centralReconstruction}, albeit asking that the fusion frame be tight with $C = D= 1$, and reconstruct the whole signal globally, without the use of the fusion process. \nOther authors~\\cite{boyer2015compressed} looked at CS with structured acquisition and structured sparsity. \nIn simple words, they prove that adapting the sampling matrices to some prior knowledge of the sparsity pattern allows for wider applicability of the CS framework. \n\nIn this section we describe the recovery of a signal ${\\bf x}$ by means of CS in the local subspaces and fusion processing. \nThe local pieces of information are computed as solutions to the problems\n\\begin{equation}\n\\tag{${\\mathcal P}_{1,\\eta}$}\n\\label{eq:fcsbpdn}\n\\begin{array}{rl} \\min_{{\\bf z} \\in {\\mathbb K}^N} & \\|{\\bf z}\\|_1 \\\\ \\text{s.t. } & \\|{\\mathbf A} P_i{\\bf z} - {\\bf y}^{(i)}\\|_2 \\leq \\eta_i. \\end{array}\n\\end{equation}\n\nIn the noiseless case, the problem is solved by the basis pursuit\n\\begin{equation}\n\\tag{${\\mathcal P}_{1,0}$}\n\\label{eq:fcsbp}\n\\begin{array}{rl} \\min_{{\\bf z} \\in {\\mathbb K}^N} & \\|{\\bf z}\\|_1 \\\\ \\text{s.t. } & {\\mathbf A} P_i{\\bf z} = {\\bf y}^{(i)}. \\end{array}\n\\end{equation}\n\n\n\\begin{alemma}\n\\label{lemma:suppSolution}\nLet $\\widehat{{\\bf z}^{(i)}}$ be a solution to the noisy~\\eqref{eq:fcsbpdn} or noiseless~\\eqref{eq:fcsbp} basis pursuit problem. \nThen $\\widehat{{\\bf z}^{(i)}} \\in W_i$. \n\\end{alemma}\n\\begin{proof}\nLet $\\widehat{{\\bf z}^{(i)}}$ be a solution and let $\\tilde{{\\bf z}} = P_i\\widehat{{\\bf z}^{(i)}}$. Then $\\|\\tilde{{\\bf z}}\\|_1 \\leq \\|\\widehat{{\\bf z}^{(i)}}\\|_1$. The vector $\\tilde{{\\bf z}}$ being also admissible, the minimality of $\\widehat{{\\bf z}^{(i)}}$ forces $\\|\\tilde{{\\bf z}}\\|_1 = \\|\\widehat{{\\bf z}^{(i)}}\\|_1$ and $\\widehat{{\\bf z}^{(i)}} = P_i\\widehat{{\\bf z}^{(i)}}$. \n\\end{proof}\n\n\\subsection{Signal models and recovery conditions}\n\n\\subsubsection{Extension of the sparsity model}\nThe traditional sparsity model is not appropriate in this setting. 
\nAs an example, if all the $s$ nonzero components of a vector ${\\bf x}$ fall within a certain subspace (say $W_1$), there is, a priori, no hope of improving the recovery performance compared to a single sensor\/subspace problem. \nIndeed, in this case, the recovery is ensured (locally) by CS methods if the number of observations $m$ scales as \n\\begin{equation*}\nm \\asymp s\\log(N\/s).\n\\end{equation*}\nSince we are dealing here only with an identical sensing model, this yields a total number of observations $m_T$ scaling as\n\\begin{equation*}\nm_T \\asymp ns\\log(N\/s).\n\\end{equation*}\nThis may be acceptable if we consider only very few subspaces, but may become prohibitive in certain cases. \nTherefore the model of distributed sparsity may be more appropriate. \n\\begin{adef}\nA signal ${\\bf x} \\in {\\mathbb K}^N$ is said to be ${\\bf s} = (s_1, \\cdots, s_n)$-distributed sparse with respect to a fusion frame ${\\mathcal W} = (W_i, P_i)_{1\\leq i \\leq n}$, if $\\|P_{i}({\\bf x})\\|_0 \\leq s_i$, for every $1 \\leq i \\leq n$. \n${\\bf s}$ is called the \\emph{sparsity pattern} of ${\\bf x}$ with respect to ${\\mathcal W}$. \n\\end{adef}\nWe denote by $\\Sigma_{\\bf s}^{({\\mathcal W})}$ the set of all ${\\bf s}$-distributed sparse vectors with respect to the family of subspaces $(W_i)_{i}$. \nWe let $s = \\|{\\bf s}\\|_1$ denote the global sparsity of the vector, with respect to ${\\mathcal W}$. \nIn the case that the sparsity of the signal is uniformly distributed among the subspaces ($s_i = s\/n$), the usual CS recovery guarantees ensure that\n\\begin{equation*}\nm \\asymp s_i \\log(N\/s_i)\n\\end{equation*}\nobservations per subspace are required for a stable and robust recovery of the signal. \nThis accounts for a total number of measurements scaling as $m_T \\asymp n s_i \\log(N\/s_i) = s \\log(N\/s_i)$. \nIn other words, we are able to recover sparsity levels similar to those of the classical CS framework.\nBut in contrast to the classical theory, most of the computations can easily be carried out in a distributed setting, where only pieces of the information are available. \nOnly the fusion process requires all the local information to compute the final estimate of the signal. \n\nLocally, one only needs to solve some very small CS systems, which can be done faster than solving the original one. \nThis matches the findings obtained in parallel in~\\cite{adcock2016CSparallel}, where it is concluded that the number of measurements per sensor decreases linearly with the number of sensors. \nWe describe a similar problem, while looking at it from a different perspective. \nIn particular, we try to find the sparsity patterns that may be recovered for a given sensor design. \nThe motivation for this problem comes from applications in SAR imaging, where the sensor is given and identical everywhere, and where we may not have any control over the number of observations per subspace. \nAs they will become useful later, we also introduce the local best approximation errors.\n\\begin{adef}\nLet ${\\mathcal W} = (W_i,P_i)_{i=1}^n$ be a fusion frame, and let ${\\bf x} \\in {\\mathbb K}^N$. 
\nFor $p > 0$ and a sparsity pattern ${\\bf s} = (s_1, \\cdots, s_n)$ with $s_i \\in {\\mathbb N}$ for all $1 \\leq i \\leq n$, the ${\\boldsymbol\\ell}^p$ errors of best ${\\bf s}$-term approximations are defined as the vector\n\\begin{equation*}\n\\sigma_{{\\bf s}}^{{\\mathcal W}}({\\bf x})_p := \\( \\sigma_{s_1}(P_{1}{\\bf x})_p, \\sigma_{s_2}(P_{2}{\\bf x})_p, \\cdots, \\sigma_{s_n}(P_{n}{\\bf x})_p \\)^T.\n\\end{equation*}\n\\end{adef}\n\n\\subsubsection{Partial properties}\nThe \\emph{null space property} (NSP) has been used throughout the past decade in the CS literature as a necessary and sufficient condition for the sparse recovery problem via~\\eqref{eq:generalCS}. \nA matrix ${\\mathbf A}$ is said to satisfy the (robust) null space property with parameters $\\rho \\in (0,1)$ and $\\tau > 0$ relative to a set $S \\subset \\{1, \\cdots, N\\}$ if \n\\begin{equation*}\n\\|{\\bf v}_S\\|_1 \\leq \\rho\\|{\\bf v}_{\\overline{S}}\\|_1 + \\tau \\|{\\mathbf A} {\\bf v}\\|_{2}, \\quad \\text{for all } {\\bf v} \\in {\\mathbb K}^{N}.\n\\end{equation*}\nMore generally, we say that the matrix ${\\mathbf A}$ satisfies the NSP of order $s$ if it satisfies the NSP relative to all sets $S$ such that $|S| \\leq s$. \nWe extend this idea here to the context of distributed sparsity with respect to fusion frames, as already mentioned in Definition~\\ref{def:prnsp}. \nHere we speak of a sparsity pattern and ask that the NSP be valid on all local subspaces up to a certain (local) sparsity level. \n\\begin{adef}[Robust and stable partial null space property (RP-NSP)]\n\\label{def:rpnsp}\nLet $n$ be an integer and ${\\mathcal W} = (W_i, P_i)_{i=1}^n$ be a fusion frame for ${\\mathbb K}^N$.\nLet ${\\bf s} = (s_1,\\cdots,s_n)$ be a sequence of non-negative integers representing the sparsity pattern with respect to ${\\mathcal W}$.\nFor a number $q \\geq 1$, a sensing matrix ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$ is said to satisfy the ${\\boldsymbol\\ell}^q$-RP-NSP with pattern ${\\bf s}$ with respect to ${\\mathcal W}$ and with constants $\\rho_1, \\cdots, \\rho_n \\in (0,1)$ and $\\tau_1, \\cdots, \\tau_n >0$ if \n\\begin{equation*}\n\\|(P_{i}{\\bf v})_{S_i}\\|_q \\leq \\frac{\\rho_i}{s_i^{1-1\/q}}\\|(P_{i}{\\bf v})_{\\overline{S_i}}\\|_1 + \\tau_i \\|{\\mathbf A} {\\bf v}\\|_{2}, \\quad \\text{for all } {\\bf v} \\in {\\mathbb K}^N, 1 \\leq i \\leq n, S_i \\subset W_i, \\text{ and } |S_i| \\leq s_i.\n\\end{equation*} \n\\end{adef}\nThis definition is reminiscent of the work on sparse recovery with partially known support~\\cite{bandeira2013partialNSP}.\nThe difference here is that there is no need to enforce that the vector ${\\bf v}$ lie in the range of the other subspaces. \nIn a sense, this is taken care of by the fusion process and the fact that we have multiple measurement vectors. \n\n\\begin{rmk}\nNote that we could simplify the definition by asking that the parameters be uniform and independent of the local subspace. 
Namely, introducing $\\tau := \\max_{1 \\leq i \\leq n} \\tau_i$ and $\\rho := \\max_{1 \\leq i \\leq n}\\rho_i$, the above definition becomes\n\\begin{equation*}\n\\left\\|\\(P_{i}{\\bf v}\\)_{S_i}\\right\\|_q \\leq \\frac{\\rho}{s_i^{1-1\/q}}\\left\\|\\(P_{i}{\\bf v}\\)_{\\overline{S_i}}\\right\\|_1 + \\tau \\|{\\mathbf A} {\\bf v}\\|_{2}, \\quad \\text{for all } {\\bf v} \\in {\\mathbb K}^N, 1 \\leq i \\leq n, S_i \\subset W_i, \\text{ and } |S_i| \\leq s_i.\n\\end{equation*} \n\\end{rmk}\n\n\nA stronger but easier-to-verify condition, often used as a sufficient recovery condition, is the by now well-known \\emph{Restricted Isometry Property} (RIP). \nInformally speaking, a matrix is said to satisfy the $RIP(s,\\delta)$ if it behaves like an isometry (up to a constant $\\delta$) on every $s$-sparse vector ${\\bf v} \\in \\Sigma_s$. \nFormally speaking, ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$ satisfies $RIP(s,\\delta)$, for some $s \\geq 2$ and $\\delta \\in (0,1)$, if \n\\begin{equation}\n\\label{eq:traditionalRIP}\n(1-\\delta)\\|{\\bf v}\\|_2^2 \\leq \\|{\\mathbf A}{\\bf v}\\|_2^2 \\leq (1+\\delta)\\|{\\bf v}\\|_2^2, \\quad \\text{for every } {\\bf v} \\in \\Sigma_s.\n\\end{equation}\nThe lowest $\\delta$ satisfying the inequalities is called the \\emph{restricted isometry constant}. \nOnce again, we want to derive similar properties on our sensing matrix for the distributed sparse signal model. \n\n\n\\begin{adef}[Partial-RIP (P-RIP)] \n\\label{def:p-rip}\nLet ${\\mathcal W} = (W_i,P_i)_{i=1}^n$ be a fusion frame, and let ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$. \nAssume that ${\\mathbf A}$ satisfies the $RIP(s_i,\\delta_i)$ on $W_i$, with $\\delta_i \\in (0,1)$, $i \\in I=\\{1,\\cdots, n\\}$.\nThen, we say that ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$ satisfies the partial RIP with respect to ${\\mathcal W}$, with bounds $\\delta_1, \\cdots, \\delta_n$ and sparsity pattern ${\\bf s} =(s_1, \\cdots, s_n)$.\n\\end{adef}\nIn other words, ${\\mathbf A}$ satisfies a P-RIP condition if it satisfies RIP-like conditions on the sparse vectors in $\\operatorname{range}(P_i)$, for every $i$. \n\n\n\\begin{rmk}\nThis definition is consistent with the definition of the classical RIP in the sense that the case $n = 1$ (only one projection, one subspace) recovers the usual RIP. \n\\end{rmk}\n\nThe P-RIP can be written in a form similar to the traditional RIP, Equation~\\eqref{eq:traditionalRIP}.\n\n\\begin{aprop}\nLet ${\\mathcal W}=(W_i)_{i \\in I}$ be a fusion frame (with frame bounds $0 < C \\leq D < \\infty$). Let ${\\mathbf A} \\in {\\mathbb K}^{m \\times N}$ satisfy the P-RIP with respect to ${\\mathcal W}$, with bounds $\\delta_1, \\cdots, \\delta_n$ and sparsity pattern ${\\bf s} =(s_1, \\cdots, s_n)$, and let $C_o= C \\min_i \\{ 1-\\delta_i \\}$, $D_o= D\\max_i \\{ 1+\\delta_i \\}$. Then, writing ${\\bf v}_i := P_i {\\bf v}$, for any ${\\bf v} \\in \\Sigma_{{\\bf s}}^{({\\mathcal W})}$, \n\\begin{equation*}\\label{pr13}\n C_o \\|{\\bf v} \\|_2^2 \\leq \\sum_{i} \\|{\\mathbf A}{\\bf v}_i \\|_2^2 \\leq D_o \\| {\\bf v} \\|_2^2. 
A stronger, but easier to verify, condition often used as a sufficient recovery condition is the by-now well-known \emph{Restricted Isometry Property} (RIP).
Informally speaking, a matrix is said to satisfy the $RIP(s,\delta)$ if it behaves like an isometry (up to a constant $\delta$) on every $s$-sparse vector ${\bf v} \in \Sigma_s$.
Formally speaking, ${\mathbf A} \in {\mathbb K}^{m \times N}$ satisfies $RIP(s,\delta)$, for some $s \geq 2$ and $\delta \in (0,1)$, if
\begin{equation}
\label{eq:traditionalRIP}
(1-\delta)\|{\bf v}\|_2^2 \leq \|{\mathbf A}{\bf v}\|_2^2 \leq (1+\delta)\|{\bf v}\|_2^2, \quad \text{for every } {\bf v} \in \Sigma_s.
\end{equation}
The smallest $\delta$ satisfying the inequalities is called the \emph{restricted isometry constant}.
Once again, we want to derive similar properties on our sensing matrix for the distributed sparse signal model.


\begin{adef}[Partial-RIP (P-RIP)]
\label{def:p-rip}
Let ${\mathcal W} = (W_i,P_i)_{i=1}^n$ be a fusion frame, and let ${\mathbf A} \in {\mathbb K}^{m \times N}$.
Assume that ${\mathbf A}$ satisfies the $RIP(s_i,\delta_i)$ on $W_i$, with $\delta_i \in (0,1)$, $i \in I=\{1,\cdots, n\}$.
Then, we say that ${\mathbf A} \in {\mathbb K}^{m \times N}$ satisfies the partial RIP with respect to ${\mathcal W}$, with bounds $\delta_1, \cdots, \delta_n$ and sparsity pattern ${\bf s} =(s_1, \cdots, s_n)$.
\end{adef}
In other words, ${\mathbf A}$ satisfies a P-RIP condition if it satisfies an RIP-like condition on the $s_i$-sparse vectors of every subspace $\operatorname{range}(P_i)$.


\begin{rmk}
This definition is consistent with the definition of the classical RIP in the sense that the case $n = 1$ (only one projection, one subspace) recovers the usual RIP.
\end{rmk}

The P-RIP can be written in a form similar to the traditional RIP, Equation~\eqref{eq:traditionalRIP}.

\begin{aprop}
Let ${\mathcal W}=(W_i)_{i \in I}$ be a fusion frame (with frame bounds $0 < C \leq D < \infty$). Let ${\mathbf A} \in {\mathbb K}^{m \times N}$ satisfy the P-RIP with respect to ${\mathcal W}$, with bounds $\delta_1, \cdots, \delta_n$ and sparsity pattern ${\bf s} =(s_1, \cdots, s_n)$, and let $C_o= C \min_i \{ 1-\delta_i \}$, $D_o= D\max_i \{ 1+\delta_i \}$. Then, for any ${\bf v} \in {\mathbb K}^N$ whose local components ${\bf v}_i := P_i{\bf v}$ are $s_i$-sparse,
\begin{equation*}\label{pr13}
 C_o \|{\bf v} \|_2^2 \leq \sum_{i} \|{\mathbf A}{\bf v}_i \|_2^2 \leq D_o \| {\bf v} \|_2^2.
\end{equation*}
\end{aprop}


\begin{proof}
Using the fusion frame inequality, and inequalities \eqref{eq:traditionalRIP} for all $i \in I$, we obtain
\begin{align*}
C \min_i \{ 1-\delta_i \} \|{\bf v} \|_2^2 &\leq \min_i \{ 1-\delta_i \} \sum_{i} \| {\bf v}_i \|_2^2 \leq \sum_{i} (1-\delta_i)\|{\bf v}_i\|_2^2 \leq \sum_{i} \|{\mathbf A}{\bf v}_i \|_2^2 \\
&\leq \sum_{i}( 1+\delta_i ) \| {\bf v}_i \|_2^2 \leq \max_i \{ 1+\delta_i \}\sum_{i} \| {\bf v}_i \|_2^2 \leq D\max_i \{ 1+\delta_i \} \| {\bf v} \|_2^2.
\end{align*}
\end{proof}


\begin{atheorem}
\label{HFadaptDRIP}
Let $\varepsilon \in (0,1)$.
Let ${\mathcal W} = (W_i,P_i)_{i=1}^n$ be a fusion frame for ${\mathbb K}^N$, $N \geq 1$.
Let ${\mathbf A} \in {\mathbb K}^{m \times N}$ be a subgaussian matrix with parameters $\beta, k$.
Then, there exists a constant $C = C_{\beta,k}$ such that the P-RIP constants of $\frac{1}{\sqrt{m}}{\mathbf A}$ satisfy $\delta_{s_i} \leq \delta_i$, for $1 \leq i \leq n$, with probability at least $1-\varepsilon$, provided
\begin{equation*}
m \geq C \min_{1 \leq i \leq n}(\delta_i)^{-2}\( \max_{1 \leq i \leq n}(s_i)\ln(eN/\max_{1 \leq i \leq n}(s_i)) + \ln(2n\varepsilon^{-1}) \).
\end{equation*}
\end{atheorem}

Before we prove this result, we recall a standard RIP result for subgaussian matrices (Theorem 9.2 in \cite{foucart2013book}):
\begin{atheorem}\label{HF} Let ${\mathbf A}$ be an $m\times N$ subgaussian random matrix. Then there exists a constant $C>0$ (depending only on the subgaussian parameters $\beta$, $k$) such that the RIP constant of $\frac{1}{\sqrt{m}} {\mathbf A}$ satisfies $\delta_s \leq \delta$ with probability at least $1-\varepsilon$, if
\[ m \geq C \delta^{-2} \(s \ln(eN/s) +\ln(2\varepsilon^{-1})\).\]
\end{atheorem}


\begin{proof}(of Theorem~\ref{HFadaptDRIP})
Fix $\varepsilon_1, \cdots, \varepsilon_n > 0$ such that $\varepsilon_1 + \cdots + \varepsilon_n = \varepsilon \in (0,1)$, and let $E$ be the event ``$\frac{1}{\sqrt{m}}{\mathbf A}$ does not satisfy the P-RIP with respect to ${\mathcal W}$ with constants $\delta_1, \cdots, \delta_n$''.
Since the sensing matrix is the same for every sensor, Theorem~\ref{HF} provides a single constant $C > 0$ (depending on $\beta,k$) such that $\frac{1}{\sqrt{m}}{\mathbf A}$ satisfies the RIP locally within the subspace $W_i$, at sparsity level $s_i$ and with constant $\delta_{s_i} \leq \delta_i$, with probability at least $1-\varepsilon_i$, provided that $m \geq m_i$, where
\begin{equation*}\label{condGausrip}
m_i := C \delta_i^{-2} \( s_i \ln(eN/s_i) +\ln(2\varepsilon_i^{-1}) \).
\end{equation*}
Applying a union bound, it follows that, for $m \geq \max_{1 \leq i \leq n} m_i$,
\begin{equation*}
{\mathbb P}(E) = {\mathbb P}(\exists i \in \{1, \cdots, n\}: \delta_{s_i} > \delta_i) \leq \sum_{i = 1}^n {\mathbb P}(\delta_{s_i} > \delta_i) \leq \sum_{i = 1}^n \varepsilon_i = \varepsilon.
\end{equation*}
Additionally, since the function $s \mapsto s\ln(eN/s)$ is monotonically increasing on $(0,N)$ and $s_i \ll N$,
\begin{equation*}
\max_{1 \leq i \leq n}m_i \leq C \min_{1 \leq i \leq n}(\delta_i)^{-2}\( \max_{1 \leq i \leq n}(s_i)\ln(eN/\max_{1 \leq i \leq n}(s_i)) + \max_{1 \leq i \leq n}\ln(2\varepsilon_i^{-1}) \).
\end{equation*}
Choosing $\varepsilon_1 = \cdots = \varepsilon_n = \varepsilon/n$, so that $\ln(2\varepsilon_i^{-1}) = \ln(2n\varepsilon^{-1})$ for all $i$, concludes the proof.
\end{proof}
Since all Gaussian and Bernoulli random matrices are subgaussian, Theorem~\ref{HFadaptDRIP} holds in particular for Gaussian and Bernoulli random matrices.
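To make the sample-complexity condition concrete, the snippet below evaluates the right-hand side of the bound in Theorem~\ref{HFadaptDRIP} for example parameters. Note that the constant $C = C_{\beta,k}$ is not explicit, so the value is meaningful only up to that constant:

```python
import numpy as np

def prip_sample_bound(N, pattern, deltas, eps, C=1.0):
    """Sufficient number of measurements from the theorem above:
    m >= C * min(delta)^-2 * (max(s) * ln(eN/max(s)) + ln(2n/eps)).
    C is a placeholder for the unspecified subgaussian constant."""
    n = len(pattern)
    s_max = max(pattern)
    d_min = min(deltas)
    return C * d_min**-2 * (s_max * np.log(np.e * N / s_max)
                            + np.log(2 * n / eps))

# Example: N = 10_000, five sensors with local sparsities 10..50.
m = prip_sample_bound(N=10_000, pattern=[10, 20, 30, 40, 50],
                      deltas=[0.2] * 5, eps=0.01)
print(f"m >= {m:.0f} (up to the unspecified constant C)")
```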
\subsection{Recovery in general fusion frames settings}
With the tools introduced above, we show that any signal with sparsity pattern ${\bf s}$ can be recovered in a stable and robust manner via the fusion frame approach described in the previous sections.

\subsubsection{RP-NSP based results}
We give a first recovery guarantee based on the robust partial NSP, introduced in Definition~\ref{def:rpnsp}.

\begin{atheorem}
\label{thm:rdnspRecovery}
Let ${\mathbf A} \in {\mathbb K}^{m \times N}$ and let ${\mathcal W} = \( W_i, P_i \)_{i=1}^n$ be a fusion frame with frame bounds $0 < C \leq D < \infty$ and frame operator $S$.
Let $\left({\bf y}^{(i)}\right)_{i=1}^n$ be the linear measurements ${\bf y}^{(i)} = {\mathbf A} P_i {\bf x} + {\bf e}^{(i)}$, $1\leq i \leq n$, for some noise vectors ${\bf e}^{(i)}$ such that $\|{\bf e}^{(i)}\|_2 \leq \eta_i$.
Denote by $\widehat{{\bf x}^{(i)}}$ the solutions to the local Basis Pursuit problems~\eqref{eq:fcsbpdn} and let $\widehat{{\bf x}} = S^{-1} \sum_i \widehat{{\bf x}^{(i)}}$.
If the matrix ${\mathbf A}$ satisfies the ${\boldsymbol\ell}^1$-RP-NSP with sparsity pattern ${\bf s} = (s_1,\cdots,s_n)$ with constants $0 < \rho_1, \cdots, \rho_n < 1$ and $\tau_1, \cdots, \tau_n > 0$ with respect to ${\mathcal W}$, then the estimate $\widehat{{\bf x}}$ approximates ${\bf x}$ in the following sense:
\begin{equation}
\label{eq:rdnspRecovery}
\|\widehat{{\bf x}} - {\bf x} \|_2 \leq \frac{2}{C}\( \langle \vec{\rho}, \sigma_{{\bf s}}^{{\mathcal W}}({\bf x})_1 \rangle + \langle \vec{\tau}, \vec{\eta} \rangle\),
\end{equation}
where $\vec{\rho} = \(\frac{1+\rho_i}{1-\rho_i}\)_{i=1}^n$, $\vec{\tau} = \(\frac{2 \tau_i}{1-\rho_i}\)_{i=1}^n$, and $\vec{\eta} = \( \eta_i \)_{i=1}^n$.
\end{atheorem}

\begin{proof}
The solution is given by the fusion process $\widehat{{\bf x}} = S^{-1}\( \sum_{i=1}^n \widehat{{\bf x}^{(i)}} \)$ with $\widehat{{\bf x}^{(i)}}$ the solutions to the local problems~\eqref{eq:fcsbpdn}.
It holds that
\begin{equation*}
\|{\bf x} - \widehat{{\bf x}}\|_2 = \left\|S^{-1}\(\sum_{i = 1}^nP_{i}{\bf x} - \sum_{i = 1}^n\widehat{{\bf x}^{(i)}}\)\right\|_2 \leq C^{-1}\sum_{i = 1}^n \left\|P_{i}{\bf x} - \widehat{{\bf x}^{(i)}}\right\|_2 \leq C^{-1}\sum_{i = 1}^n \left\|P_{i}{\bf x} - \widehat{{\bf x}^{(i)}}\right\|_1.
\end{equation*}
For a particular $i \in \{1, \cdots, n\}$, we estimate the error on the subspace $W_i$ in the ${\boldsymbol\ell}^1$ sense, following the proof techniques from~\cite[Section 4.3]{foucart2013book} with the adequate changes.
With ${\bf v} := P_i{\bf x} - \widehat{{\bf x}^{(i)}}$ and $S_i \subset W_i$ the index set of the $s_i$ largest components of $P_i{\bf x}$, the ${\boldsymbol\ell}^1$-RP-NSP yields
\begin{equation*}
\|(P_{i}{\bf v})_{S_i}\|_1 \leq \rho_i\left\|\(P_{i}{\bf v}\)_{\overline{S_i}}\right\|_1 + \tau_i\|{\mathbf A} {\bf v}\|_2.
\end{equation*}
Combining this with~\cite[Lemma 4.15]{foucart2013book}, which states
\begin{equation*}
\left\|\(P_{i}{\bf v}\)_{\overline{S_i}}\right\|_1 \leq \left\|P_{i}\widehat{{\bf x}^{(i)}}\right\|_1 - \left\|P_{i}{\bf x}\right\|_1 + \left\|\(P_{i}{\bf v}\)_{S_i}\right\|_1 + 2 \left\|\(P_{i}{\bf x}\)_{\overline{S_i}}\right\|_1,
\end{equation*}
we arrive at
\begin{equation*}
(1-\rho_i)\left\|\(P_{i}{\bf v}\)_{\overline{S_i}}\right\|_1 \leq \|P_{i}\widehat{{\bf x}^{(i)}}\|_1 - \|P_{i}{\bf x}\|_1 + 2 \left\|\(P_{i}{\bf x}\)_{\overline{S_i}}\right\|_1 + \tau_i\|{\mathbf A}{\bf v}\|_2 .
\end{equation*}
Applying once again the ${\boldsymbol\ell}^1$-RP-NSP, it holds that
\begin{align*}
\left\|P_{i}{\bf v}\right\|_1 &= \left\| (P_{i}{\bf v})_{S_i} \right\|_1 + \left\| \(P_{i}{\bf v}\)_{\overline{S_i}} \right\|_1 \leq \rho_i\left\| \(P_{i}{\bf v}\)_{\overline{S_i}} \right\|_1 + \tau_i\|{\mathbf A} {\bf v}\|_2 + \left\| \(P_{i}{\bf v}\)_{\overline{S_i}} \right\|_1 \\
 &\leq \(1+\rho_i\)\left\| \(P_{i}{\bf v}\)_{\overline{S_i}} \right\|_1 + \tau_i\|{\mathbf A}{\bf v}\|_2 \\
 &\leq \frac{1+\rho_i}{1-\rho_i}\( \|P_{i}\widehat{{\bf x}^{(i)}}\|_1 - \|P_{i}{\bf x}\|_1 + 2 \left\|\(P_{i}{\bf x}\)_{\overline{S_i}}\right\|_1\) + \frac{2\tau_i}{1-\rho_i}\|{\mathbf A}{\bf v}\|_2.
\end{align*}
We now recall Lemma~\ref{lemma:suppSolution} and notice that $P_i \widehat{{\bf x}^{(i)}} = \widehat{{\bf x}^{(i)}}$.
Since $\widehat{{\bf x}^{(i)}}$ is the optimal solution to~\eqref{eq:fcsbpdn}, it is clear that $\|\widehat{{\bf x}^{(i)}}\|_1 \leq \|P_{i}{\bf x}\|_1$, from which we conclude that
\begin{equation*}
\left\|P_i {\bf x} - \widehat{{\bf x}^{(i)}}\right\|_1 = \|P_{i}{\bf v}\|_1 \leq 2\frac{1+\rho_i}{1-\rho_i}\sigma_{{\bf s}}^{{\mathcal W}}({\bf x})_{1,i} + \frac{2\tau_i}{1-\rho_i}\|{\mathbf A}{\bf v}\|_2.
\end{equation*}
Moreover, both $P_i{\bf x}$ and $\widehat{{\bf x}^{(i)}}$ are feasible for~\eqref{eq:fcsbpdn}, so that $\|{\mathbf A}{\bf v}\|_2 \leq \|{\mathbf A} P_i{\bf x} - {\bf y}^{(i)}\|_2 + \|{\bf y}^{(i)} - {\mathbf A}\widehat{{\bf x}^{(i)}}\|_2 \leq 2\eta_i$.
Summing up the contributions for all $i$ in $\{1, \cdots, n\}$ and applying the inverse frame operator finishes the proof.
\end{proof}
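The estimator analyzed in Theorem~\ref{thm:rdnspRecovery} is simple to implement. The following sketch assumes the convex solver \texttt{cvxpy} and renders the local problems~\eqref{eq:fcsbpdn} as basis pursuit denoising with an explicit subspace constraint (by Lemma~\ref{lemma:suppSolution} the solutions lie in $W_i$ in any case; the constraint is included here for safety):

```python
import numpy as np
import cvxpy as cp

def fusion_recover(A, projections, ys, etas):
    """Sketch of the estimator of the theorem above: solve one local
    basis-pursuit-denoising problem per subspace, then fuse with the
    inverse frame operator S^{-1}, where S = sum_i P_i here."""
    N = A.shape[1]
    local = []
    for P, y, eta in zip(projections, ys, etas):
        z = cp.Variable(N)
        # Local problem: min ||z||_1  s.t.  ||A z - y^(i)||_2 <= eta_i,
        # with z constrained to the subspace W_i (z = P_i z).
        cp.Problem(cp.Minimize(cp.norm1(z)),
                   [cp.norm(A @ z - y, 2) <= eta, P @ z == z]).solve()
        local.append(z.value)
    S = sum(projections)                   # frame operator of the fusion frame
    return np.linalg.solve(S, sum(local))  # x_hat = S^{-1} sum_i x_hat^(i)
```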
Similarly, assuming the ${\boldsymbol\ell}^q$-RP-NSP, one can adapt the proof techniques from~\cite[Theorems 4.22, 4.25]{foucart2013book} to the local problems.
This yields the following result.
\begin{atheorem}
\label{thm:rdnspRecoveryl2NSP}
Let ${\mathbf A} \in {\mathbb K}^{m \times N}$ and let ${\mathcal W} = \( W_i, P_i \)_{i=1}^n$ be a fusion frame with frame bounds $0 < C \leq D < \infty$ and frame operator $S$.
Let $\left({\bf y}^{(i)}\right)_{i=1}^n$ be the linear measurements ${\bf y}^{(i)} = {\mathbf A} P_i {\bf x} + {\bf e}^{(i)}$, $1\leq i \leq n$, for some noise vectors ${\bf e}^{(i)}$ such that $\|{\bf e}^{(i)}\|_2 \leq \eta_i$.
Denote by $\widehat{{\bf x}^{(i)}}$ the solutions to the local Basis Pursuit problems~\eqref{eq:fcsbpdn} and let $\widehat{{\bf x}} = S^{-1} \sum_i \widehat{{\bf x}^{(i)}}$.
If the matrix ${\mathbf A}$ satisfies the ${\boldsymbol\ell}^2$-RP-NSP with sparsity pattern ${\bf s} = (s_1,\cdots,s_n)$ with constants $0 < \rho_1, \cdots, \rho_n < 1$ and $\tau_1, \cdots, \tau_n > 0$ with respect to ${\mathcal W}$, then the estimate $\widehat{{\bf x}}$ approximates ${\bf x}$ in the following sense:
\begin{equation}
\label{eq:rdnspRecoveryl2NSP}
\|\widehat{{\bf x}} - {\bf x} \|_p \leq \frac{1}{C}\( \langle \vec{\rho}, \sigma_{{\bf s}}^{{\mathcal W}}({\bf x})_1 \rangle + \langle \vec{\tau}, \vec{\eta} \rangle \), \quad 1 \leq p \leq 2,
\end{equation}
where $\vec{\rho} = \(\frac{2(1+\rho_i)^2}{(1-\rho_i)\,s_i^{1-1/p}}\)_{i=1}^n$, $\vec{\tau} = \(\frac{(3-\rho_i)\,\tau_i}{(1-\rho_i)\,s_i^{1/2-1/p}}\)_{i=1}^n$, and $\vec{\eta} = \( \eta_i \)_{i=1}^n$.
\end{atheorem}


\subsubsection{P-RIP based recovery}
One can show that the P-RIP is sufficient for stable and robust recovery by combining Theorem~\ref{thm:rdnspRecoveryl2NSP} with the following result, which shows, via an RIP argument, that matrices satisfying the P-RIP (in particular, the random matrices of Theorem~\ref{HFadaptDRIP}) also satisfy the RP-NSP.
\begin{atheorem}\label{secondthm}
Let ${\mathbf A} \in {\mathbb K}^{m \times N}$ be a matrix satisfying the P-RIP($2{\bf s}$,$\delta$), with ${\bf s} = (s_1,\cdots,s_n)$ and $\delta = (\delta_1,\cdots,\delta_n)$ and $\delta_i < 4/\sqrt{41}$, for all $1 \leq i \leq n$.
Then, ${\mathbf A}$ satisfies the ${\boldsymbol\ell}^2$-RP-NSP with constants $(\rho_i,\tau_i)_{i=1}^n$, where
\begin{equation}
\label{eq:RIPtoNSP}
\begin{array}{l}
\rho_i := \dfrac{\delta_i}{\sqrt{1-\delta_i^2} - \delta_i/4} < 1, \\[2ex]
\tau_i := \dfrac{\sqrt{1+\delta_i}}{\sqrt{1-\delta_i^2} - \delta_i/4}.
\end{array}
\end{equation}
\end{atheorem}

\begin{proof}
The proof of this result consists of simply applying~\cite[Theorem 6.13]{foucart2013book} to every subspace independently.
\end{proof}
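The translation~\eqref{eq:RIPtoNSP} from P-RIP constants to RP-NSP constants is explicit and can be evaluated directly; for instance:

```python
import numpy as np

def rpnsp_constants(deltas):
    """Constants (rho_i, tau_i) of the l^2-RP-NSP implied by P-RIP
    constants delta_i < 4/sqrt(41), following Eq. (RIPtoNSP) above."""
    deltas = np.asarray(deltas, dtype=float)
    assert np.all(deltas < 4 / np.sqrt(41)), "requires delta_i < 4/sqrt(41)"
    denom = np.sqrt(1 - deltas**2) - deltas / 4
    return deltas / denom, np.sqrt(1 + deltas) / denom

rho, tau = rpnsp_constants([0.1, 0.3, 0.6])
print(rho)   # all entries < 1, as guaranteed by the theorem
```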
Combining Theorem~\ref{secondthm} with Theorem~\ref{thm:rdnspRecoveryl2NSP}, the following recovery guarantee follows.
\begin{atheorem}
\label{thm:fCS_RIPresults}
Let ${\mathcal W} = (W_i,P_i)_{i=1}^n$ be a fusion frame for ${\mathbb K}^N$ with frame operator $S$ and frame bounds $0 < C \leq D < \infty$.
Let ${\mathbf A} \in {\mathbb K}^{m \times N}$ be a matrix satisfying the P-RIP($2{\bf s}$, $\delta$), where ${\bf s} = (s_1, \cdots, s_n)$ and $\delta = (\delta_1, \cdots, \delta_n)$ with $\delta_i < 4/\sqrt{41}$, for all $1 \leq i \leq n$.
Then any distributed-sparse vector ${\bf x} \in \Sigma_{\bf s}^{({\mathcal W})}$ can be recovered by solving $n$ \eqref{eq:bpdn} problems.
Assume that the noise in each \eqref{eq:bpdn} problem is controlled by $\|{\bf e}^{(i)}\|_2 \leq \eta_i$, $1\leq i \leq n$, and set $\vec{\eta} = \( \eta_i \)_{i=1}^n$.
Let $\widehat{{\bf x}} = S^{-1}\sum_i \widehat{{\bf x}^{(i)}}$. Then
\begin{equation*}
\|\widehat{{\bf x}} - {\bf x} \|_2 \leq \frac{1}{C} \sum_{i=1}^n \( \alpha_i\frac{\sigma_{{\bf s}}^{{\mathcal W}}({\bf x})_{1,i}}{\sqrt{s_i}} + \beta_i\eta_i \),
\end{equation*}
where $\alpha_i$ and $\beta_i$ depend only on the RIP constants $\delta_i$.
\end{atheorem}




\section{Local sparsity in general dictionaries and frames}
\label{ShidongSection}

The redundancy inherent to frame structures (and their generalization to frames of subspaces) makes them appealing for signal analysis tasks.
So far, we have used the redundancy of the fusion frame process in order to increase the global sparsity of the original vector ${\bf x}$ as well as the robustness to noise.
We now investigate the use of local dictionaries in order to exploit the redundancy within subspaces, using the local frames for the representation of the partial information.
A common scenario in applications is when $f \in {\mathcal H}$ has a sparse frame representation $f=D{\bf x}$, i.e. ${\bf x}$ is sparse, and the multiple measurements are given by
\[
 {\bf y}^{(i)} = {\mathbf A} P_i f + {\bf e}^{(i)} = {\mathbf A} P_i D {\bf x} + {\bf e}^{(i)}, \ \ \ 1 \leq i \leq n.
\]
Here $P_i$ can be any projection onto a subspace of $\mathcal{H}$.
In practical applications such as SAR imaging, $P_i$ can simply be a projection or spatial filter onto $W_i\equiv \operatorname{span}\{{\bf d}_k\}_{k\in \Omega_i}$, where ${\bf d}_k$ is the $k^{th}$ column of $D$.
Such an operation can potentially reduce the number of nonzero entries of ${\bf x}$ in the $i^{th}$ observation, namely when some of the column vectors ${\bf d}_k$ lie in $\operatorname{ker} P_i$.
In particular, let us denote by $\Gamma_i\equiv\{k \; | \; k\not\in\Omega_i,\ {\bf d}_k\not\in \operatorname{ker} P_i\}$ the remaining active indices, and by $\Lambda_i\equiv \{l \; | \; {\bf d}_l\in \operatorname{ker} P_i\}$ the indices annihilated by $P_i$. Then
\begin{eqnarray*}
P_if &=& P_i D {\bf x} =P_i\left(\{{\bf d}_k\}_{k\in\Omega_i}, \{{\bf d}_j\}_{j\in\Gamma_i}, \{{\bf d}_l\}_{l\in\Lambda_i}\right) {\bf x} \\
 &=& \left(\{{\bf d}_k\}_{k\in\Omega_i}, \{\tilde{{\bf d}}_j=P_i {\bf d}_j\}_{j\in\Gamma_i}, \{{\bf 0}\}_{l\in\Lambda_i}\right)\left(\begin{array}{c}
 {\bf x}_{\Omega_i} \\
 {\bf x}_{\Gamma_i} \\
 {\bf x}_{\Lambda_i}
 \end{array}\right) \\
 &=& \left(\{{\bf d}_k\}_{k\in\Omega_i}, \{\tilde{{\bf d}}_j=P_i{\bf d}_j\}_{j\in\Gamma_i} \right)\left(\begin{array}{c}
 {\bf x}_{\Omega_i} \\
 {\bf x}_{\Gamma_i}
 \end{array}\right)\\
 &=& D_i {\bf x}^{(i)},
\end{eqnarray*}
where $D_i\equiv \left(\{{\bf d}_k\}_{k\in\Omega_i}, \{\tilde{{\bf d}}_j=P_i{\bf d}_j\}_{j\in\Gamma_i} \right)$ and
\[
{\bf x}^{(i)}\equiv \left(\begin{array}{c}
 {\bf x}_{\Omega_i} \\
 {\bf x}_{\Gamma_i}
 \end{array}\right).
\]
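The construction of the reduced dictionary $D_i$ and coefficient vector ${\bf x}^{(i)}$ is purely combinatorial and can be sketched as follows (illustrative only; the threshold \texttt{tol} decides numerically which columns are annihilated by $P_i$):

```python
import numpy as np

def local_dictionary(D, P, Omega, tol=1e-12):
    """Split the columns of D into the defining set Omega_i, the remaining
    active set Gamma_i (not annihilated by P_i), and the annihilated set
    Lambda_i; return the reduced dictionary D_i and the index list such
    that x^(i) = x[cols]."""
    N = D.shape[1]
    Omega = sorted(set(Omega))
    Gamma = [k for k in range(N)
             if k not in Omega and np.linalg.norm(P @ D[:, k]) > tol]
    D_i = np.hstack([D[:, Omega],        # columns spanning W_i, left untouched
                     P @ D[:, Gamma]])   # projected active columns
    return D_i, Omega + Gamma
```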
As a result of this decomposition, the $i^{th}$ measurement becomes
\begin{equation*}
\label{eqn_Ba}
 {\bf y}^{(i)} = {\mathbf A} D_i {\bf x}^{(i)} + {\bf e}^{(i)}, \quad 1 \leq i \leq n,
\end{equation*}
or
\begin{equation*}\label{eqn_Bb}
 {\bf y}^{(i)} = {\mathbf A} f^{(i)} + {\bf e}^{(i)}, \quad f^{(i)}=D_i{\bf x}^{(i)}, \quad 1 \leq i \leq n.
\end{equation*}
Note that the first version suggests the use of ${\boldsymbol\ell}^1$ synthesis methods, while the second one calls for ${\boldsymbol\ell}^1$ analysis tools.
${\boldsymbol\ell}^1$ synthesis corresponds to the usual sparse recovery via a dictionary $D$, stated here in its ideal ${\boldsymbol\ell}^0$ form:
\begin{equation*}
\label{eq:dict_l0}
\min_{{\bf z}} \|{\bf z}\|_0, \quad \text{ subject to } \|{\mathbf A} D{\bf z} - {\bf y}\|_2 \leq \eta.
\end{equation*}
The estimate $\widehat{f}$ is then computed from the minimizer $\widehat{{\bf z}}$ as $\widehat{f} = D\widehat{{\bf z}}$.
In the ${\boldsymbol\ell}^1$ analysis approach, we do not look for a particular (sparse) representation of $f$; instead, we ask that the analysis coefficients of the reconstruction be sparse while the reconstruction remains faithful to the data:
\begin{equation}
\label{eq:dict_analysis}
\min_{{\bf g}} \|D^*{\bf g}\|_0, \quad \text{ subject to } \|{\mathbf A} {\bf g} - {\bf y}\|_2 \leq \eta.
\end{equation}
Here, $D^*$ denotes the canonical dual frame.
Both approaches are further detailed in the next sections.
We comment that if the choice of $P_i$ is allowed, one strategy is again to use random projections by randomly selecting the index set $\Omega_i$ to define the subspaces $W_i=\operatorname{span}\{{\bf d}_j\}_{j\in\Omega_i}$.
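Both ${\boldsymbol\ell}^0$ problems above are NP-hard in general; in practice one solves their standard ${\boldsymbol\ell}^1$ relaxations, as detailed in the following subsections. A minimal sketch of the two relaxations, again assuming \texttt{cvxpy}:

```python
import cvxpy as cp

def l1_synthesis(A, D, y, eta):
    """l^1 relaxation of the synthesis problem: sparse coefficients z,
    signal recovered as f = D z."""
    z = cp.Variable(D.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(z)),
               [cp.norm(A @ D @ z - y, 2) <= eta]).solve()
    return D @ z.value

def l1_analysis(A, Dstar, y, eta):
    """l^1 relaxation of the analysis problem: optimize over the signal g
    directly, asking for sparse analysis coefficients Dstar @ g."""
    g = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm1(Dstar @ g)),
               [cp.norm(A @ g - y, 2) <= eta]).solve()
    return g.value
```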
\subsection{Recovery via general ${\boldsymbol\ell}^1$-analysis method}
As introduced above, we try to recover a signal $f$ that has a sparse representation by solving Problem~\eqref{eq:dict_analysis}.
While the problem is written in terms of the canonical dual frame, there is no obligation to use this particular dual frame.
One may instead optimize over the dual frame considered and use the sparsity-inducing dual frame \cite{liu2012optimaldual}, computed as part of the optimization problem:
\begin{equation}\label{eqn_B1}
 \widehat{f^{(i)}} =\operatorname{argmin}_{g,\, D\tilde{D}_i^*={\mathbf I}}\|\tilde D^*_i g\|_1 \quad \text{s.t. } \|{\mathbf A} g -{\bf y}^{(i)}\|_2\le \eta_i, \quad 1 \leq i \leq n.
\end{equation}
The sparsity-inducing frame $\tilde D_i$ can, but need not, be uniform across all $i$.
The following result is known to hold for any dual frame~\cite{liu2012optimaldual}.
\begin{atheorem}\label{thm1}
 Let $D$ be a general frame of $\mathbb{R}^{N}$ with frame bounds $0