\\section{Introduction}\nNetworks are excellent tools for describing complex biological, social, and infrastructural systems~\\cite{newmanbook,chauhan2016,trevino2012,Bhavnani2012}. \nMost real-world examples of complex networks are far from being random and have a community or modular structure within them~\\cite{newman2003,danon2005,fortunato2010}. Detecting this structure is crucial in understanding the function and dynamics of a complex network. Although there is no universally accepted definition of a community structure~\\cite{schaub2017,peel2017}, it is often characterized by dense connectivity within groups and sparser connectivity between different groups. Modularity, $Q$, is a widely used metric to quantify the presence of this type of structure~\\cite{newman2003,newman2002,newman2004,newman2004epj,sun2009,trevino2015,guo2019}. For a partition of the nodes of an unweighted network, $C = \\{c_1, c_2, c_3, ..\\}$, it is defined as \n\\begin{equation} \nQ = \\frac{1}{2m}\\sum_{c \\in C}\\left(2m_c -\\frac{K_c^2}{2m}\\right)\n\\label{Q}\n\\end{equation}\nwhere $m_c$ is the number of links in community $c$,\n$K_c$ is the sum of degrees of nodes in $c$,\nand $m$ is the total number of links in the network. $Q$ measures the difference between the fraction of links within communities and the expected fraction if the links were randomly placed. The partition that maximizes the metric $Q$ identifies the community structure of the network. \nDespite its intuitively appealing definition, there is a fundamental problem with using $Q$ to find community structure. Namely, communities smaller than a certain size in a large network may not be detected. 
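Eq.~\\ref{Q} translates directly into code. The following minimal sketch (illustrative only; pure Python, and the function name \texttt{modularity} is our own) computes $Q$ for a partition given as a list of node sets:

```python
def modularity(edges, communities):
    """Q = (1/2m) * sum_c [2*m_c - K_c^2/(2m)], cf. Eq. (1)."""
    m = len(edges)  # total number of links
    comm = {v: i for i, c in enumerate(communities) for v in c}
    total = 0.0
    for i, c in enumerate(communities):
        # links with both endpoints in community i
        m_c = sum(1 for u, v in edges if comm[u] == i and comm[v] == i)
        # sum of degrees of the nodes in community i
        K_c = sum((comm[u] == i) + (comm[v] == i) for u, v in edges)
        total += 2 * m_c - K_c**2 / (2 * m)
    return total / (2 * m)

# Two triangles joined by a single link.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))  # 5/14, about 0.357
```

Placing the whole network in a single community always gives $Q=0$, since then $2m_c=2m$ and $K_c=2m$.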
\nThis {\\it Resolution Limit} (RL) problem~\\cite{fortunato2007,traag2011} reduces the domain of applicability of $Q$ and\nis often a significant issue when analyzing empirical networks.\n\nAlternate metrics have been proposed in recent years~\\cite{ronhovde2010,arenas2008,granell2012,aldecova2011,mingming2013,mingming2014,charo2016,chen2018,haq2019} to mitigate the RL problem. In some of these metrics~\\cite{mingming2013,mingming2014,charo2016,chen2018,haq2019}, known as modularity density metrics, weights that are functions of the internal link density of communities are applied to the two terms in Eq.~\\ref{Q}.\nIn this paper we propose a new metric of this form, which we call {\\it generalized modularity density} $Q_g$. $Q_g$ is an extension of $Q$, as it reduces to $Q$ in a limit. The main reasons for introducing this new metric are as follows. \nFirst, it has an adjustable parameter $\\chi$ that controls the resolution density of the communities that are detected.\nSecond, $Q_g$ can be extended to detect communities in weighted networks in a way that has a clear interpretation and is independent of the scale of the link weights.\n\nThe RL problem can be seen in the simple example of cliques arranged in a ring connected to one another in series by single links~\\cite{fortunato2007}. The expectation in this case is that the cliques should be detected as separate communities. Unfortunately, with some metrics, pairs of cliques are merged into the same community. Of course, if all possible cross-links between two cliques are present, then it is sensible to merge them into one community as they simply form a clique of larger size.\nHowever, when cliques are connected by an intermediate number of links or when the network is weighted, it is unclear whether the cliques should be merged or separated~\\cite{lancichinetti2011}.\nIntuitively, it makes sense to merge two cliques at a sufficiently high density of cross-links. 
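This pathology is easy to reproduce numerically. The sketch below (illustrative; the helper names are ours, and \texttt{modularity} implements Eq.~\\ref{Q}) builds a ring of cliques and compares $Q$ for the natural clique partition with the partition that merges neighboring cliques in pairs; for many small cliques the merged partition wins:

```python
def modularity(edges, communities):
    """Q of Eq. (1) for an unweighted edge list."""
    m = len(edges)
    comm = {v: i for i, c in enumerate(communities) for v in c}
    total = 0.0
    for i, c in enumerate(communities):
        m_c = sum(1 for u, v in edges if comm[u] == i and comm[v] == i)
        K_c = sum((comm[u] == i) + (comm[v] == i) for u, v in edges)
        total += 2 * m_c - K_c**2 / (2 * m)
    return total / (2 * m)

def ring_of_cliques(n_cliques, size):
    """Cliques of `size` nodes, each joined to the next by a single link."""
    edges, cliques = [], []
    for k in range(n_cliques):
        nodes = list(range(k * size, (k + 1) * size))
        cliques.append(set(nodes))
        edges += [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
        edges.append((nodes[-1], ((k + 1) % n_cliques) * size))  # ring link
    return edges, cliques

def paired(cliques):
    """Partition that merges neighboring cliques in pairs."""
    return [cliques[i] | cliques[i + 1] for i in range(0, len(cliques), 2)]

e, c = ring_of_cliques(30, 3)
print(modularity(e, c), modularity(e, paired(c)))  # merged pairs score higher
e, c = ring_of_cliques(6, 5)
print(modularity(e, c), modularity(e, paired(c)))  # natural partition scores higher
```

With 30 triangles the merged-pair partition has higher $Q$ than the natural one, while for 6 larger cliques the natural partition wins, illustrating how the limit depends on network size.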
Generally, methods of community detection that use different metrics have a different critical value for this density. The answer may also depend on the specific application being considered. Thus, it is useful to have some flexibility in allowing the communities to be separated or merged. $Q_g$ achieves this goal by varying a parameter $\\chi$. We will show that for a properly chosen value of $\\chi$, the partition that maximizes $Q_g$ separates two cliques at any desired strength of inter-connectivity. This tunability of our metric is extremely useful for analyzing networks that exhibit hierarchical community structure~\\cite{newman2003}, which is found in many real-world networks. A common way to investigate these hierarchical structures is to iteratively perform community detection within detected communities~\\cite{park2019global}. Using our approach, one can simply vary $\\chi$.\n\nFinally, we compare the performance of our metric against other modularity density metrics by using them to find the structure in a more complex benchmark network than a simple ring of cliques. Our analysis indicates that $Q_g$ performs better than all other metrics considered. We then use $Q_g$ to find structure in a variety of empirical and artificial networks to demonstrate its ability to detect hidden community structure. We find that it eliminates the resolution limit problem that we consider and that it is applicable to a wider range of problems than other metrics. 
In addition, the network partition that maximizes $Q_g$ can be efficiently and accurately found using the recently introduced Reduced Network Extremal Ensemble Learning (RenEEL) scheme~\\cite{guo2019}.\n\n\\section{Methods}\n\\subsection{Generalized Modularity Density}\nWe define the \\emph{Generalized Modularity Density} of a node partition of an unweighted network as\n\\begin{equation} \\label{eq:Qg}\nQ_g=\\frac{1}{2m}\\sum_c \\left(2m_c-\\frac{K_c^2}{2m}\\right)\\rho_c^\\chi\n\\end{equation}\nwhere $m$ is the total number of links in the network, $m_c$ is the number of links within a community $c$, $K_c$ is the sum of degrees of all nodes in community $c$, $\\rho_c$ is the link density of community $c$, and the exponent $\\chi$ is a control parameter. Here we assume that $\\chi$ is a non-negative real number. The link density of a community is the ratio of the number of links that exist in $c$ to the number of possible links that can exist,\n\\begin{equation} \\label{eq:rel-density}\n \\rho_c=\\frac{2 m_c}{n_c(n_c-1)} \\; ,\n\\end{equation} where $n_c$ is the number of nodes in $c$. \n$Q_g$ is an extension of modularity, i.e., at $\\chi = 0$, $Q_g=Q$.\n\nThe metric $Q_g$, like the Modularity metric $Q$ (Eq.~\\ref{Q}), can be easily extended to weighted networks. For $Q$ this is done by simply replacing the number of links with the sum of link weights in $m$, $m_c$ and $K_c$~\\cite{newman2004weighted,newman2008directed}.\nExtending the definition of modularity density metrics to weighted networks is complicated by the fact that they depend on link density, and link density can be problematic to use with weighted networks.\nOne way to deal with these problems is to simply ignore the link weights and calculate the link density as if the network were unweighted~\\cite{mingming2013,mingming2014}. 
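For unweighted networks, Eqs.~\\ref{eq:Qg} and~\\ref{eq:rel-density} can be sketched as follows (illustrative Python; the convention $\rho_c=1$ for a single-node community is our own assumption, since such a community has no possible internal links):

```python
def generalized_modularity_density(edges, communities, chi):
    """Q_g, Eqs. (2)-(3); chi = 0 recovers the modularity Q of Eq. (1)."""
    m = len(edges)
    comm = {v: i for i, c in enumerate(communities) for v in c}
    total = 0.0
    for i, c in enumerate(communities):
        m_c = sum(1 for u, v in edges if comm[u] == i and comm[v] == i)
        K_c = sum((comm[u] == i) + (comm[v] == i) for u, v in edges)
        n_c = len(c)
        # rho_c = 1 for a single node is a convention (no possible links)
        rho_c = 2 * m_c / (n_c * (n_c - 1)) if n_c > 1 else 1.0
        total += (2 * m_c - K_c**2 / (2 * m)) * rho_c**chi
    return total / (2 * m)

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = [{0, 1, 2}, {3, 4, 5}]
print(generalized_modularity_density(edges, parts, chi=0.0))  # equals Q
print(generalized_modularity_density(edges, parts, chi=3.0))  # same here: cliques have rho_c = 1
```

Because both communities in this example are cliques ($\rho_c=1$), the value is independent of $\chi$; the exponent only reweights communities that are internally sparse.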
Unfortunately, this loses the information contained in the link weights.\nThe correct way is to use a normalized definition of link density, where the sum of the weight of all internal links divided by the maximum value that sum would have if the community were fully connected with links of weight equal to the maximum weight of any link in the network,\n\\begin{equation} \\label{eq:abs-density}\n \\rho_c=\\frac{2 m_c}{n_c(n_c-1)w_{\\max}}\n\\end{equation}\nwhere $m_c$ is the sum of the weights within community $c$, $n_c$ is the number of nodes in $c$, and $w_{\\max}$ is the maximum weight of any link in the network.\nThis definition of $\\rho_c$ is consistent with the definition for unweighted networks, \nbut it can be problematic because it involves the global variable $w_{\\max}$.\nThe community structure found using some metrics, such as those proposed in \nRefs.~\\citeonline{mingming2013,chen2018,haq2019}, can be very sensitive to the value of $w_{\\max}$. \nThis makes their use potentially troublesome, especially in empirical studies where the value of $w_{\\max}$ can be difficult to accurately measure. \nAdditionally, if there is a wide distribution of link weights and $w_{\\max}\\rightarrow \\infty$, then $\\rho_c \\rightarrow 0$ for all communities and the algorithms for finding the partition that maximizes the modularity density metric become numerically unstable. \n\nGeneralized Modularity Density, unlike other modularity density metrics, does not have problems with $w_{\\max}$. Both terms in Eq.~\\ref{eq:Qg} are weighted by the same function of $w_{\\max}$, which can factored out and simply modifies the value of $Q_g$ for every possible partition by the same constant factor. It is, thus, irrelevant for determining the partition that maximizes $Q_g$. 
So, instead of the absolute link density, Eq.~\\ref{eq:abs-density}, a relative link density, given by Eq.~\\ref{eq:rel-density} with $m_c$ being the sum of the weight of links in $c$,\ncan be used in the metric $Q_g$ without affecting results.\nThe community partitions found with Generalized Modularity Density are also independent of the scale of the link weights. As with Modularity, multiplying all link weights by a common factor does not affect the results obtained with $Q_g$. This important property is needed for preserving the information in the link weights.\n\n\\subsection{Resolution Density}\n\\label{sec:resolutionscale}\nThe RL problem can be viewed as a failure of a metric that occurs when using it yields a partition that merges two ``well separated\" communities. \nA resolution-limit-free metric is expected to resolve these communities. \nConversely, a metric should also avoid splitting two groups of nodes that are ``well connected\" to each other. The RL problem is clear at these two extremes. However, more generally, the notion of well separated\/connected communities is not well defined. \nIt is unclear whether two partially connected communities should be merged or not.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{fig1.pdf}\n \\caption{{\\bf Benchmark network for studying the resolution limit problem.} The network consists of two cliques of sizes $n_1$ and $n_2$ and an arbitrary component with $n_a$ nodes and $m_a$ links. The two cliques share $m_{1a}$ and $m_{2a}$ links with the arbitrary component, respectively, and have $m_{12}$ links between them. The links of the network can be weighted, in which case, $m_a$, $m_{1a}$, $m_{2a}$ and $m_{12}$ are the sums of link weights.}\n \\label{fig:template}\n\\end{figure}\nConsider the benchmark network shown in Fig.~\\ref{fig:template}. 
This network consists of three parts: two cliques and an external arbitrary component to which the cliques are weakly connected.\nAs the cliques are fully connected, they have no internal community structure.\nAssume clique 1 has $n_1$ nodes, clique 2 has $n_2$ nodes, and both $n_1$ and $n_2 \\geq 3$. Without loss of generality, we assume $n_2 \\ge n_1$. Let $m_{12}$ be the sum of weights of links between the two cliques, and let $m_{1a}$ and $m_{2a}$ be the sum of the weights of links that connect each clique with the arbitrary component. $n_a$ and $m_a$ are the number of nodes and the sum of weights of links within the arbitrary component, respectively.\nAlso, assume that $m_{1a} \\ll n_1^2w_{\\max}$ and $m_{2a} \\ll n_2^2w_{\\max}$, so that the cliques are only weakly connected to the arbitrary component.\nThe RL question concerning this network is whether the two cliques should be merged or split, and whether using a given metric will meet this expectation.\nThis choice of network gives greater flexibility to explore the RL problem than a simple ring of cliques, since the external component can have an arbitrary structure and the strength of inter-connectivity between the two cliques can be varied. \nGenerally, there is a threshold, or critical, value of $m_{12}$ below which the cliques are separated and above which they are merged.\nWe impose an arbitrary \\emph{expected critical value} $m_{\\rm exp}$ such that the cliques should be merged if $m_{12} \\geq m_{\\rm exp}$ and separated if $m_{12} < m_{\\rm exp}$. If $\\Delta Q_g$, the change in $Q_g$ upon merging the two cliques, satisfies $\\Delta Q_g>0$, merging is preferred. \nEq.~\\ref{eq:dQg} determines whether the use of the metric $Q_g$, for a given value of $\\chi$, will lead to the merged (M) or split (S) phase as a function of the clique size ratio $p$, the cross-link density $d$, and the external influence parameter $t$. 
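The merged-versus-split competition can be checked numerically in the simplest $t=0$ setting, i.e., with no external component. The sketch below (illustrative; unweighted links, helper names are ours) compares $Q_g$ for the merged and split partitions of two cliques as the number of cross-links grows:

```python
from itertools import combinations, product

def qg(edges, communities, chi):
    """Q_g of Eq. (2) for unweighted links."""
    m = len(edges)
    comm = {v: i for i, c in enumerate(communities) for v in c}
    total = 0.0
    for i, c in enumerate(communities):
        m_c = sum(1 for u, v in edges if comm[u] == i and comm[v] == i)
        K_c = sum((comm[u] == i) + (comm[v] == i) for u, v in edges)
        rho = 2 * m_c / (len(c) * (len(c) - 1))
        total += (2 * m_c - K_c**2 / (2 * m)) * rho**chi
    return total / (2 * m)

def two_cliques(n, n_cross):
    """Two n-cliques joined by the first n_cross of the n*n possible cross-links."""
    a, b = set(range(n)), set(range(n, 2 * n))
    edges = [e for s in (a, b) for e in combinations(sorted(s), 2)]
    edges += list(product(sorted(a), sorted(b)))[:n_cross]
    return edges, a, b

chi = 1.0
for n_cross in (1, 25):  # weakly vs fully cross-connected 5-cliques
    edges, a, b = two_cliques(5, n_cross)
    delta = qg(edges, [a | b], chi) - qg(edges, [a, b], chi)
    print(n_cross, "merge" if delta > 0 else "split")  # 1 -> split, 25 -> merge
```

With a single cross-link the split partition wins, while two fully cross-connected cliques (which together form one larger clique) are merged, consistent with the expected extremes of the benchmark.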
\nThe value of $d$ at which the phase boundary separating the M and S phases occurs is $\\delta_{Q_g}$.\n\n\\begin{figure}\n\\centering\n(a)\\includegraphics[width=0.45\\textwidth]{Qgchi0t10-6.pdf}\n(b)\\includegraphics[width=0.45\\textwidth]{Qg10-6.pdf}\n(c)\\includegraphics[width=0.45\\textwidth]{Qgchi3t10-6.pdf}\n(d)\\includegraphics[width=0.45\\textwidth]{Qgchi10t10-6.pdf}\n \\caption{{\\bf Phase diagram of clique splitting with generalized modularity density at large external influence as $\\chi$ is varied}. The values of the clique size ratio $p$ and link density $d$ where the M phase occurs are shown in orange and those where the S phase occurs are shown in blue. Results are for different values of the control parameter $\\chi$: (a) $\\chi=0$, (b) $\\chi=1$, (c) $\\chi=3$, (d) $\\chi=10$. The external influence parameter is $t=10^6$.}\n \\label{fig:Qgchi}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n(a)\\includegraphics[width=0.45\\textwidth]{Qg0.pdf}\n(b)\\includegraphics[width=0.45\\textwidth]{Qg1.pdf}\n(c)\\includegraphics[width=0.45\\textwidth]{Qg10.pdf}\n(d)\\includegraphics[width=0.45\\textwidth]{Qg10-6.pdf}\n\\caption{{\\bf Phase diagram of clique splitting with generalized modularity density at fixed $\\chi$ as the external influence is varied}. The values of\n the clique size ratio $p$ and link density $d$ where the M phase occurs are shown in orange and those where the S phase occurs are shown in blue. 
Results are for $\\chi = 1$ and different choices of the external influence parameter $t$: (a) $t=0$, (b) $t=1$, (c) $t=10$, (d) $t=10^6$.}\n \\label{fig:Qgphase}\n\\end{figure}\n\nIn the limit of a large external influence parameter $t$, which is often the situation encountered in empirical studies where RL problems are considered problematic, \nthe value of $\\delta_{Q_g}$ \nfor a given value of $\\chi$ is\n\\begin{equation}\n \\label{equ:Qgworst}\n \\lim_{t\\rightarrow\\infty}\\delta_{Q_g}=\\frac{r}{2}\\left[\\left(1+\\frac{2}{r}\\right)^{\\frac{\\chi}{\\chi+1}}-1\\right]\\; .\n\\end{equation}\nThis limit increases from $\\delta_{Q_g}=0$, when $\\chi=0$, to $\\delta_{Q_g}=1$, when $\\chi\\rightarrow \\infty$, for all values of $p$.\nAt intermediate values of $\\chi$ the result is only weakly dependent on $p$, being just slightly larger at small $p$, as can be seen in Fig.~\\ref{fig:Qgchi}.\nThe figure shows the phase diagram as a function of $p$ and $d$ at various values of $\\chi$ for large $t$.\nFor $\\chi = 0$, when $Q_g=Q$, the cliques are merged at all values of $p$ and $d$ as shown in Fig.~\\ref{fig:Qgchi}(a). \nFor $\\chi > 0$, at smaller values of $d$ the cliques separate and are, thus, resolved.\nAs $\\chi$ increases, $\\delta_{Q_g}$ also increases and approaches 1 in the limit of large $\\chi$, Figs.~\\ref{fig:Qgchi}(b)-(d), meaning that at large $\\chi$ the cliques are always resolved. \n\nThe effect of varying $t$ at fixed $\\chi$ on the ($p$, $d$) phase diagram is shown in Fig.~\\ref{fig:Qgphase}. As shown in Fig.~\\ref{fig:Qgphase}(a), at $t=0$ when there is no influence by the external component on the two cliques, the S phase occupies the entire space and $\\delta_{Q_g}=1$ for all $p$.\nIn this case, the cliques are always separated unless they are fully connected to each other.\nFor $t>0$, when there is some influence from an external component, the cliques are merged and, thus, not resolved for large values of $d$. 
As $t$ increases, shown in Figs.~\\ref{fig:Qgphase}(b)-(d), the M phase occupies an increasing area and $\\delta_{Q_g}$ decreases until reaching the limiting value given by Eq.~\\ref{equ:Qgworst}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\textwidth]{level.pdf}\n \\caption{{\\bf Number of communities found using $Q_g$ with different $\\chi$.} Communities in each level (from the largest to the smallest) in the hierarchy are revealed as $\\chi$ is varied.}\n \\label{fig:levelplot}\n\\end{figure}\n\nThese results show that, as the control exponent $\\chi$ is varied, a wide range of $\\delta_{Q_g}$ values results. The range increases \nwith $t$ and varies from 0 to 1, the complete possible range, in the limit of large $t$.\nThis freedom gives leeway in applications to choose $\\chi$ so that $\\delta_{Q_g}$ matches the expected critical resolution link density $\\delta_{\\rm exp}$.\n\nIn general, as $\\chi$ increases the number of communities found also increases, but the results are stable over ranges of $\\chi$. (See the example discussed in Sec.~\\ref{sec:artificial network}.)\nIncreasing $\\chi$ thus tends to result in smaller communities being detected.\nThe appropriate, or best, choice of $\\chi$ depends on the problem. \nIf there is some ``ground truth'' knowledge about the community structure in the network, or in similar networks, that knowledge can be used to select a $\\chi$ that results in communities that match the ground truth.\nIf there is no ground truth knowledge, then a default choice of $\\chi=1$ may be appropriate. 
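The large-$t$ limit of Eq.~\\ref{equ:Qgworst} is easy to tabulate. In this sketch (illustrative; \texttt{delta\_qg\_large\_t} is our own name, and $r$ is the size parameter appearing in that equation), the limit grows from 0 at $\chi=0$ toward 1 at large $\chi$, passing through $\approx 1/2$ at $\chi=1$ for large $r$:

```python
def delta_qg_large_t(chi, r):
    """Large-t limit of the critical resolution density delta_Qg,
    (r/2) * [(1 + 2/r)^(chi/(chi+1)) - 1], from the text."""
    return (r / 2) * ((1 + 2 / r) ** (chi / (chi + 1)) - 1)

for chi in (0.0, 1.0, 3.0, 10.0, 1000.0):
    print(chi, delta_qg_large_t(chi, r=1e6))  # increases from 0 toward 1
```

The monotonic growth with $\chi$ is what allows the resolution density to be tuned to a desired value.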
That choice results in a critical resolution density of $\\delta_{Q_g}=1\/2$ in the limit of large $t$ and $r$ (Eq.~\\ref{equ:Qgworst}).\nThus, an advantage of $Q_{g}$ is that even for the extreme values of $t$ and $r$, the metric has a positive lower bound of $\\delta_{Q_g}$ that can be controlled by $\\chi$.\n\nIn contrast to $Q$~\\cite{newman2002}, $Q_{ds}$~\\cite{mingming2013}, $Q_{x}$~\\cite{chen2018} and $Q_{AFG}$~\\cite{arenas2008} (see Supplemental Information~\\ref{sub:metric}), $Q_g$ has a finite non-zero lower limit of $\\delta$, which implies that for $d$ smaller than this value, the two cliques of the benchmark network are guaranteed to be split for all possible values of $(r,t)$. Thus, $Q_g$ can successfully avoid the resolution limit problem in these extreme cases (see the last paragraph in Section~\\ref{sec:resolutionscale}).\nWhile the metric $Q_w$ (see Supplemental Information~\\ref{sub:metric}) also shows this lower limit (Table~\\ref{tab:table1}), the advantage of $Q_g$ is that the lower limit of $\\delta_{Q_g}$ can be adjusted by tuning the parameter $\\chi$ to any desired resolution density. \nTable~\\ref{tab:table1} summarizes the kinds of resolution problems with $Q$, $Q_{ds}$, $Q_x$, $Q_w$ and $Q_{AFG}$ that would be encountered when tested on the benchmark network (see Supplemental Information Section~\\ref{sub:diagram} for details).\n\nIn principle, a reasonable $\\delta_{\\rm exp}$ is always in $[0,1]$, but a given metric can still have a $\\delta$ that is out of this range. Since $\\delta_{\\rm exp}$ is strictly positive (no matter how small), if it is possible to construct a network for which $\\delta\\rightarrow0$ then that metric presents a resolution limit problem. Even worse, if $\\delta < 0$, it would result in merging of disconnected communities. On the other hand, $\\delta\\rightarrow1$ does not pose a resolution problem as long as $\\delta\\le1$ and $\\delta\\ge\\delta_{\\rm exp}$ is satisfied. 
However, higher $\\delta_{\\rm exp}$ imposes a stricter criterion for merging. But if $\\delta>1$, it will have the unwanted consequence of cliques being subdivided. Thus, a metric is problematic if it can not avoid $\\delta\\rightarrow0$, $\\delta<0$ or $\\delta>1$.\n\n\\begin{table}[]\n \\centering\n \\begin{tabular}{||c|c||}\n \\hline\n {\\bf Metric} & {\\bf Resolution limit problem} \\\\ [0.5ex] \n \\hline \\hline\n $Q$ & $\\delta\\rightarrow0$ when $t\\rightarrow\\infty$\\\\\n \\hline\n $Q_{ds}$ & $\\delta<0$ when $p$ is small\\\\\n \\hline\n $Q_x$ & $\\delta<0$ when $p$ and $\\rho$ are small \\\\\n \\hline\n $Q_w$ & $\\delta_{min}=0.236$ when $t\\rightarrow\\infty$ and $p=1$ \\\\\n \\hline\n $Q_{AFG}$ &$\\delta<0$ when $s<0$ and $p$ is small \\\\\n & $\\delta>1$ when $s>0$ and $p$ is small\\\\\n \\hline\n \\end{tabular}\n \\caption{{\\bf Resolution limit problems of different metrics.} $Q,Q_{ds}$, $Q_x$, $Q_w$ and $Q_{AFG}$ have different resolution limits problems. $\\rho$, which appears in $Q_x$, is the global link density. $s$ is used in the metric $Q_{AFG}$ as a weight to every node (equivalent to adding a self-loop to every node) and thereby modifying the strength of a community. $Q_{AFG}$ reduces to modularity at $s=0$, and by controlling $s$ substructures ($s>0$) or superstructures ($s<0$) can be explored.}\n \\label{tab:table1}\n\\end{table}\n\n\\subsection{Applications}\n\n\\subsubsection{American college football network}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{football.pdf}\n \\caption{{\\bf Communities found in American college football network.} Blue blobs show the communities detected by modularity and gray blobs show the communities found by generalized modularity density.}\n \\label{fig:my_label}\n\\end{figure}\n\nWe use the $Q_g$ metric to detect communities in the network of American college football games between Division IA colleges during regular season of Fall 2000~\\cite{newman2002, evans2010}. 
A link between two colleges is present if they played a game against each other. Colleges play games within the same conference more frequently, thus, a community detection algorithm should be able to recover these conferences from the network data. First, we show the results of using modularity ($Q$), which are indicated by light blue blobs in Fig.~\\ref{fig:my_label}. The partition matches the conference memberships (distinguished by node color) well, except that the Independents are absorbed into three communities and that Big West and Mountain West are grouped into the same community. Using $Q_g$ ($\\chi=3$) on this network we find communities that are shown by gray blobs. There are some key differences between the $Q$ and $Q_g$ partitions. First, the $Q_g$ partition does not merge the Independents with other conferences. Instead, it divides them into three disjoint communities. Second, it successfully identifies the Big West and Mountain West as two different groups. But more interestingly, unlike modularity, it divides each of the Mid-American, Southeastern, and Big Twelve conferences into two communities. This apparent deviation from ground truth actually turns out to be a major advantage of using $Q_g$. Each of these three conferences has subdivisions within it that are in perfect agreement (considering their membership as of year 2000) with the partition found by $Q_g$. The Mid-American conference has an East Division and a West Division, the Southeastern conference has an Eastern Division and a Western Division, and the Big Twelve conference has a Northern Division and a Southern Division. These subdivisions are indicated by different node shapes (circles and squares) in Fig.~\\ref{fig:my_label}.\n\n\\subsubsection{Artificial network with hierarchical community structure}\n\\label{sec:artificial network}\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{levels.pdf}\n \\caption{{\\bf Example hierarchical network.} Level 1: A clique of five nodes. 
Level 2: A clique of five level 1 cliques. Level 3: A clique of five level 2 cliques. Level 4: A clique of five level 3 cliques.}\n \\label{fig:multilevel}\n\\end{figure}\n\nTo demonstrate the ability of $Q_g$ to detect community structure at different resolution densities, we construct a hierarchical network. Similar constructions have been used as a model for hierarchical network structure~\\cite{arenas2008}. We consider the structure shown in Fig.~\\ref{fig:multilevel}, which includes four levels of hierarchy, although it can be extended to include any number of levels. The elementary level (level 1) is a clique formed by fully connecting five nodes with links weighted $\\alpha_1$. To construct a level 2 network, we use the clique network from level 1 as a {\\it generalized node} to form a clique of size 5 with links weighted $\\alpha_2$. A link between two generalized nodes is realized by connecting all the internal nodes of one generalized node to those of the other. Similarly, a level $k$ network is constructed by using the level $k-1$ network as a generalized node to form a clique of size 5 with links weighted $\\alpha_k$. Here we keep $\\alpha_1>\\alpha_2>...>\\alpha_k$ so that the hierarchy of structure is preserved.\\\\\nWe use the metric $Q_g$ on a level 4 network with $\\alpha_k=5-k$ and show that it successfully detects the planted hierarchical communities at every level. The level 4 network consists of 125 level 1 cliques, and 625 nodes in total. The results obtained by maximizing $Q_g$ are shown in Fig.~\\ref{fig:levelplot}. We observe that the 5 level 3 cliques are detected when $\\chi<2.8$, the 25 level 2 cliques are detected when $2.9<\\chi<6.4$, and the 125 level 1 cliques are detected when $\\chi>6.5$. There are three stages, corresponding to the three levels of construction. By the nature of the problem, there should not be a single ``best\" choice of $\\chi$. 
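The recursive construction can be sketched directly (illustrative pure Python; \texttt{hierarchical\_network} is our own name). Links added at level $k$ carry weight \texttt{alphas[k-1]} $=\alpha_k$, and only node pairs that do not already share a lower-level unit are connected:

```python
def hierarchical_network(levels, base=5, alphas=None):
    """Weighted adjacency matrix of the clique-of-cliques construction."""
    n = base ** levels
    A = [[0.0] * n for _ in range(n)]
    for k in range(1, levels + 1):
        block, sub = base ** k, base ** (k - 1)  # level-k unit and its parts
        for start in range(0, n, block):
            for i in range(start, start + block):
                for j in range(i + 1, start + block):
                    # connect nodes lying in different level-(k-1) sub-units
                    if (i - start) // sub != (j - start) // sub:
                        A[i][j] = A[j][i] = alphas[k - 1]
    return A

A = hierarchical_network(4, alphas=[4, 3, 2, 1])  # alpha_k = 5 - k
print(len(A))  # 625 nodes
print(A[0][1], A[0][5], A[0][25], A[0][125])  # 4 3 2 1
```

Node pairs in the same level-1 clique get the strongest weight $\alpha_1=4$, and pairs that first meet at level 4 get the weakest weight $\alpha_4=1$, reproducing the hierarchy of Fig.~\\ref{fig:multilevel}.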
The choice of $\\chi$, or desired resolution density, should be based on the specific requirements and background information of the particular problem.\n\n\\section{Conclusion}\n\nCommunity detection in networks is commonly performed by finding the partition of the network nodes that maximizes an objective function. Such a partition can sometimes yield unexpected community structure. The resolution limit, for example, is an unwanted but inevitable consequence of modularity maximization. Other such metrics, namely the modularity density measures that attempt to fix this problem, also differ in the community structure that they obtain and can also violate our general expectations. While the number of cross-links at which two strongly connected groups of nodes should be considered a single community remains mostly subjective and vague, our metric $Q_g$ provides a quantifiable notion and solves the resolution limit problem. In particular, with the free parameter $\\chi$, one can control this threshold for merging two cliques. It is quite appealing to have a metric that can be adjusted to meet the specific requirements set by the user, because the idea of a community may vary from one application to another and may be specific to the network under consideration. At the same time, due to its ability to detect communities at many resolution densities, it is also useful in uncovering hierarchical community structure, an inherent characteristic observed in many complex networks.\nThe existing benchmarks, e.g., the ring of cliques, are too restrictive to evaluate and compare the performance of different metrics with respect to solving the specific resolution limit problem. In this paper we consider a more general yet simple network structure, which can be used to quantitatively examine the limits of metrics such as modularity. 
Using this general framework we demonstrated that our metric $Q_g$ eliminates the resolution limit problem at a desired resolution density, shows better performance, and is straightforward to extend for studying weighted and directed networks. Among other important problems, finding communities at high resolution is particularly useful in analyzing gene regulatory networks, where the goal of functional annotation of genes is to find very specific gene functions~\\cite{mentzen2008}. \n\n\\section*{Acknowledgments}\nThis work was supported by the NSF through grants DMR-1507371 and IOS-1546858.\n\n\\section*{References}\n\n\\section{Introduction\\label{Sec:Intro}}\nMany-body localization (MBL) is an example of a stable dynamical phase of matter that avoids thermal equilibrium. The existence of the MBL phase may be understood from the stability of localization in the Anderson Hamiltonian~\\cite{Anderson1958} with respect to weak but finite interactions~\\cite{Gornyi2005a,Basko2006,Abanin2019}. Recently a number of dynamical properties of MBL~\\cite{Bardarson2012,Serbyn2013a} were explained via the existence of an extensive number of quasilocal conserved quantities~\\cite{Serbyn2013,Huse2014,Imbrie2016}. At the same time, experiments provided strong support for the stability of the MBL phase on long timescales, verified a number of theoretical predictions, and started exploring new regimes where the theoretical understanding is incomplete~\\cite{Schreiber2015,Bloch2017a,Greiner2019a,Greiner2019b}. \nIn particular, experiments suggested the existence of MBL in higher dimensions~\\cite{Choi2016,Bloch2017c} and also probed the so-called many-body mobility edges~\\cite{Guo2019}, although theoretically their existence is subject to debate~\\cite{DeRoeck2016,Luitz2015,Brighi2020,Yao20,Pomata,DeRoeck2017,Altman2019,Pal2019,Mirlin2020,Eisert2020}. 
\n\nAnother open question concerns the stability of localization in an MBL system coupled to a so-called quantum bath, represented by another quantum system that thermalizes in the absence of the coupling. In the case of a thermodynamically large bath, where the back-action of the localized system can be neglected, the system-bath coupling is expected to result in delocalization. However, considering a bath whose dimension is comparable to that of the localized system, or smaller, can yield distinct outcomes, especially if the back-action on the bath is taken into account. The MBL degrees of freedom can localize the bath -- a phenomenon dubbed ``MBL proximity effect'' by~\\textcite{Nandkishore2015a}. Alternatively, the bath can thermalize the formerly localized system~\\cite{Luitz2017}.\n\nInspired by the avalanche mechanism~\\cite{DeRoeck2017} for the delocalization transition, a number of works studied the effect of a ``thermal grain'' coupled to a localized system~\\cite{Huse2015,Luitz2017,Pollmann2019}. These works described the bath by a random-matrix-type Hamiltonian and accounted for the quasi-local nature of integrals of motion while considering the coupling between the bath and the localized system. \nIn these models the bath lacks spatial structure, thus it cannot be localized by the MBL system, excluding \\emph{a priori} the MBL proximity effect. \nIn order to keep the microscopic structure of the bath, Refs.~\\cite{Nandkishore2015a,Hyatt2017,Goihl2019} represented it by a set of thermalizing particles. In this framework, thermal and localized degrees of freedom coexist and are coupled through local interaction. It has been numerically shown~\\cite{Hyatt2017} that localization can globally persist if the bandwidth of the thermal particles is small. 
However, the fingerprints of localization of the bath under the influence of the disordered degrees of freedom were not studied in detail and still remain an open question.\n\nAll the studies discussed so far relied on the use of exact diagonalization (ED), which dramatically limits the system sizes available. In contrast, recent experimental studies~\\cite{Rubio-Abadal2019,Leonard2020} have addressed this problem on bosonic quantum simulators, enabling the study of large systems. In Ref.~\\cite{Rubio-Abadal2019} a ``global\" setup was used, where the thermal degrees of freedom are homogeneously distributed through the $2$d lattice. Varying the number of thermal particles, the experiment showed evidence of the stability of localization when the thermalizing bosons are a small fraction of the total. On the other hand, the authors of Ref.~\\cite{Leonard2020} used a different approach. There, a $1$d chain is split into a disorder-free segment, which represents a bath, and a disordered segment to which it is connected. The experiment investigated the stability of localization while changing the size of the disorder-free segment. While localization is stable for small thermal chains, signs of delocalization were observed when the bath constitutes half of the whole system.\n\nIn this work, inspired by Ref.~\\cite{Rubio-Abadal2019}, we study a one-dimensional system of two hard-core bosonic species interacting with one another. The disordered particles form an Anderson insulator and are coupled locally to a single disorder-free boson representing the smallest possible bath. Analyzing the model under a mean-field type approximation, we formulate a perturbative criterion for the stability of localization, which suggests that localization remains robust at strong interaction and disorder. 
This is in contrast with the results discussed in Ref.~\\cite{Krause2021}, where ergodicity is introduced in the system by a stable doublon that can move freely in the strong-interaction limit, thereby facilitating delocalization when the single-particle localization length is long enough.\n\nIn an accompanying paper~\\cite{Brighi2021a}, we studied the dynamics of the model discussed here in large systems using matrix product state (MPS)~\\cite{Verstraete2006} techniques. There, at strong interactions and disorder we observed the localization of the bath through the back-action of the Anderson insulator. Here, besides analytic results supporting localization in such a regime, we provide a thorough comparison of the fully interacting dynamics with an approximate time-dependent Hartree method, which neglects entanglement and quantum correlations between the two particle species. This latter technique reveals delocalization at long times, supported by the diffusive spreading of the particle constituting the bath and decaying memory of the initial state of the Anderson insulator. This result highlights the fundamental role played by entanglement, which we study in detail in the present work.\n\nFinally, we use the density-matrix renormalization group for excited states (DMRG-X) method~\\cite{Serbyn2016,DMRG-X,Pollman2016} to study highly excited eigenstates of the Hamiltonian in large chains. This effectively allows us to probe localization at infinite times. Analyzing the expectation value of density in eigenstates, we observe localization of the small bath due to the interaction with the Anderson insulator. Furthermore, we find that eigenstates show area-law entanglement, thus providing complementary support for the persistence of localization at strong interactions.\n\nThe structure of this paper is as follows. In Section~\\ref{Sec:Model} we introduce the model and describe the typical initial state chosen for the dynamics. 
In Section~\\ref{Sec:Hartree} we analyze the Hartree limit and study perturbatively the effect of the interaction. Following that, in Section~\\ref{Sec:Dynamics}, we introduce the time-dependent Hartree approximation for the dynamics, showing its results and comparing them with quasi-exact time-evolving block decimation (TEBD) results. Finally, in Section~\\ref{Sec:DMRG-X} we present our DMRG-X study.\n\n\\section{Model\\label{Sec:Model}}\nIn this work, we study two bosonic species of particles, interacting through an on-site potential. The \\textit{disordered} bosons ($d$-bosons) are subject to a random potential drawn from a uniform distribution $\\epsilon_i\\in[-W,W]$ and are governed by the Hamiltonian $\\hat{H}_d$\n\\begin{equation}\n\\label{Eq:Hd}\n\\hat{H}_d = t_d\\sum_{i=1}^{L-1}(\\hat{d}^\\dagger_i\\hat{d}_{i+1} + \\text{h.c.}) + \\sum_{i=1}^L\\epsilon_i\\hat{n}_{d,i},\n\\end{equation}\nwhere $\\hat{d}_i$ is the annihilation operator for the $d$-bosons, $\\hat{n}_{d,i} = \\hat{d}^\\dagger_i \\hat{d}_i$ is their density operator, and $t_d$ is the hopping strength. These bosons realize an Anderson insulator~\\cite{Anderson1958}, which is localized at any boson density, thereby providing a specific system that avoids thermalization. \n\nThe small quantum bath will be represented by the \\textit{clean} bosons ($c$-bosons) described by $\\hat{H}_c$, \n\\begin{equation}\n\\label{Eq:Hc} \n\\hat{H}_c = t_c\\sum_{i=1}^{L-1}(\\hat{c}^\\dagger_i\\hat{c}_{i+1} + \\text{h.c.}),\n\\end{equation}\nwhich are characterized by a single hopping parameter $t_c$ and are not subject to a disorder potential. Finally, the two boson species are coupled via the on-site Hubbard interaction,\n\\begin{align}\n\\label{Eq:Hint}\n\\hat{H}_\\text{int} = U\\sum_{i=1}^L\\hat{n}_{c,i}\\hat{n}_{d,i},\n\\end{align}\nwhere $U$ is the interaction strength and $\\hat{n}_{c,i} = \\hat{c}^\\dagger_i \\hat{c}_i$ is the number operator of $c$-bosons. 
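In a numerical implementation, the single-particle sector of $\hat{H}_d$ is simply a tridiagonal matrix. The following sketch (our own illustrative code; the system size, seed, and disorder value are placeholders) constructs it and diagonalizes it to obtain the Anderson orbitals used throughout this section:

```python
import numpy as np

def anderson_hamiltonian(L, t_d=1.0, W=6.5, seed=None):
    """Single-particle matrix of H_d: nearest-neighbor hopping t_d
    plus on-site energies epsilon_i drawn uniformly from [-W, W]."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W, W, size=L)          # random potential epsilon_i
    H = np.diag(eps)
    off = t_d * np.ones(L - 1)                # open boundary conditions
    H += np.diag(off, 1) + np.diag(off, -1)
    return H

H = anderson_hamiltonian(L=100, seed=0)
E, psi = np.linalg.eigh(H)                    # orbital psi[:, l] has energy E[l]
```

The orthonormal eigenvectors returned by `eigh` play the role of the localized orbitals discussed in the Hartree analysis.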
\n\nThe full interacting Hamiltonian then reads:\n\\begin{equation}\n\\label{Eq:H}\n\\hat{H} = \\hat{H}_d+\\hat{H}_c+\\hat{H}_\\text{int}.\n\\end{equation}\nThe system described by the Hamiltonian~(\\ref{Eq:H}) has $U(1)\\times U(1)$ symmetry, as the number of both types of particles, $\\hat{N}_{c\/d} = \\sum_i\\hat{n}_{c\/d,i}$, is conserved. In what follows we restrict ourselves to a sector of finite $d$-boson density and to the case $t_d=t_c=1$. In particular, a density of $\\nu_d = N_d\/L = 1\/3$ will be used unless specified otherwise. At the same time, throughout this work we consider the presence of a \\emph{single} $c$-boson, which realizes the smallest possible quantum bath with non-trivial spatial structure and local coupling to a localized system. In this respect, our setting resembles the experimental setup of Ref.~\\cite{Rubio-Abadal2019}, which was, however, performed on a two-dimensional lattice and considered various densities of clean particles. \n\nAlthough the boson density is sufficient to specify a particular sector of the Hilbert space, we further restrict ourselves to states where $d$-bosons have a globally homogeneous distribution. In particular, we will study initial states corresponding to a $d$-boson density wave, with the single $c$-boson located on the central site, as exemplified by $\\ket{\\psi_0}$ below on a system of $L=18$ sites and with $\\nu_d=1\/3$\n\\begin{equation}\n\\label{Eq:psi0}\n\\ket{\\psi_0} = |{\\bullet}{\\circ}{\\circ}{\\bullet}{\\circ}{\\circ}{\\bullet}{\\circ}{\\color{red}\\bullet}{\\bullet}{\\circ}{\\circ} {\\bullet}{\\circ}{\\circ} {\\bullet}{\\circ}{\\circ}\\rangle,\n\\end{equation}\nwhere empty circles represent empty sites, and black (red) circles are sites occupied by a $d$-boson ($c$-boson), respectively. 
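For concreteness, an initial configuration of this type can be encoded by two occupation arrays (an illustrative sketch of ours; the zero-based indexing and the choice `L // 2` for the "central" site are conventions, not taken from the text):

```python
import numpy as np

L = 18
# d-bosons on every third site: period-3 density wave with nu_d = 1/3
n_d = np.array([1 if i % 3 == 0 else 0 for i in range(L)])
# a single c-boson near the center of the chain
n_c = np.zeros(L, dtype=int)
n_c[L // 2] = 1

N_d = n_d.sum()          # number of d-bosons in this sector
nu_d = N_d / L           # d-boson density
```

Both $\hat{N}_d$ and $\hat{N}_c$ are conserved under Eq.~(\ref{Eq:H}), so these two arrays fix the symmetry sector of the dynamics.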
Such an initial state resembles the configurations used in experiments and can be characterized by the so-called imbalance~\\cite{Bloch2017a,Bloch2017b}, quantifying the memory of the initial density-wave configuration in the system after a quench.\n\nWhile a strictly periodic arrangement of $d$-bosons akin to the state~(\\ref{Eq:psi0}) is not required, we assume that the density of $d$-bosons is on average distributed uniformly on a scale that is larger than a few lattice spacings. This assumption is important since the presence of large empty\/occupied regions in the chain would imply the effective absence of disorder for the $c$-boson in that region. While such configurations could be used to imitate another experimental study of MBL-bath coupling~\\cite{Leonard2020}, states where extensive regions of the chain are fully empty or occupied by $d$-bosons are far from typical initial product states. Moreover, we expect that if the localization of $d$-bosons persists, such states are nearly decoupled from spatially homogeneous initial states of the type~(\\ref{Eq:psi0}). Indeed, in order to connect a highly inhomogeneous state such as $|{\\bullet}{\\bullet}{\\bullet}{\\bullet}{\\bullet}{\\bullet}{\\circ}{\\circ}{\\circ}{\\circ}{\\circ}{\\color{red}\\bullet}{\\circ}{\\circ} {\\circ}{\\circ}{\\circ}{\\circ}\\rangle$ to the density-wave state in Eq.~(\\ref{Eq:psi0}), the tunneling of an extensive number of $d$-bosons over long distances is required. \n\n\\begin{figure*}[t]\n\\includegraphics[width=1.95\\columnwidth]{fig1.pdf}\n\\caption{\\label{Fig:effective disorder}\n(a) Subtracting deterministic components from the distribution of $\\langle \\hat{n}_{d,i}\\rangle$ (inset) results in an approximately Gaussian distribution of the effective disorder potential $\\tilde{\\epsilon}_i$, shown here for $\\nu_d=1\/3$. The standard deviation of this distribution, $\\sigma$, is used to define the effective disorder strength, $\\tilde{W}\/U=\\sigma$. 
\n(b) The effective disorder strength $\\tilde{W}$ shows a non-monotonic behavior as a function of $W$ with a maximum at $W^*\\approx 5$. The data shown are obtained for $L=500$ sites and averaged over $50$ disorder realizations.\n}\n\\end{figure*}\n\n\n\\section{Hartree approximation and criterion for stability of localization\\label{Sec:Hartree}}\nFirst, we formulate the Hartree approximation and use it to study the Hamiltonian~(\\ref{Eq:H}). This facilitates the choice of parameters in the Hamiltonian, and allows us to formulate an analytic criterion for stability of localization with respect to two-particle tunneling processes.\n\n\\subsection{Effective Disorder\\label{Sec:Disorder}}\nThe Hartree approximation adopted in this section consists of replacing the density operator $\\hat{n}_{d,i}$ in Eq.~(\\ref{Eq:H}) with its infinite-time average $\\langle\\hat{n}_{d,i}\\rangle$. This approximation would be fully justified in the case of instantaneous relaxation of $d$-bosons with respect to $c$-boson dynamics, i.e.\\ if $t_d,W\\gg t_c$. Moreover, since in an Anderson insulator the fluctuations of the expectation value $\\langle\\hat{n}_{d,i}\\rangle$ remain finite at all times, the Hartree approximation overestimates localization. 
In spite of these shortcomings, the Hartree approximation will assist us with the choice of model parameters and will also allow us to define an effective disorder strength, thus quantifying the analytic criterion for stability of localization.\n\nThe infinite-time average of the $d$-boson density is given by the diagonal ensemble generated by the eigenstates of $\\hat{H}_d$:\n\\begin{equation}\n\\label{Eq:nd-inf}\n\\begin{split}\n\\langle \\hat{n}_{d,i} \\rangle &= \\lim_{T\\to\\infty}\\frac{1}{T}\\int_0^Tdt\\,\\bra{\\psi(t)}\\hat{n}_{d,i}\\ket{\\psi(t)}\\\\\n&= \\sum_n |c_n|^2\\bra{E_n}\\hat{n}_{d,i}\\ket{E_n},\n\\end{split}\n\\end{equation}\nwhere $\\{\\ket{E_n}\\}$ is the eigenbasis of $\\hat{H}_d$, $c_n = \\langle E_n|\\psi_0\\rangle$ and $\\ket{\\psi(t)} = e^{-\\imath \\hat{H}_d t}\\ket{\\psi_0}$. The initial state $\\ket{\\psi_0}$ is taken to be a density wave, see Eq.~(\\ref{Eq:psi0}). Due to the random potential in the Anderson Hamiltonian, the $d$-boson occupation at infinite times acquires a quasi-random nature and is distributed according to $P(\\langle \\hat{n}_{d,i}\\rangle)$ in the range $[0,1]$. \n\nThanks to the noninteracting nature of the problem in the Hartree approximation, the eigenstates in Eq.~(\\ref{Eq:nd-inf}) can be written as product states of single-particle orbitals $\\ket{\\ell}$. Then, by going into the eigenbasis one can decouple the sum over $\\binom{L}{N_d}$ terms in Eq.~(\\ref{Eq:nd-inf}) into a double sum over the $N_d$ occupied sites in the initial state and the $L$ single-particle orbitals. The problem of finding the infinite-time occupation values then reduces to finding eigenstates of a single-particle Hamiltonian, whose Hilbert space dimension scales as $L$, thus greatly reducing the complexity of the problem. 
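The reduction just described can be sketched in a few lines: for non-interacting particles launched from a product of site states, the dephased density is $\langle \hat{n}_{d,i}\rangle = \sum_\ell |\psi_\ell(i)|^2\sum_{j\in\mathrm{occ}}|\psi_\ell(j)|^2$, a double sum over orbitals $\ell$ and initially occupied sites $j$ (an illustrative sketch of ours, with placeholder system size and disorder):

```python
import numpy as np

def dephased_density(H, occupied):
    """Diagonal-ensemble (infinite-time) density for non-interacting particles
    starting on the given sites: n_i = sum_l |psi_l(i)|^2 * f_l,
    with orbital occupations f_l = sum_{j in occ} |psi_l(j)|^2."""
    _, psi = np.linalg.eigh(H)                             # psi[:, l] = orbital l
    f = (np.abs(psi[occupied, :]) ** 2).sum(axis=0)        # weights f_l
    return (np.abs(psi) ** 2) @ f

L = 99
rng = np.random.default_rng(1)
eps = rng.uniform(-10, 10, size=L)
H = np.diag(eps) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
occ = np.arange(0, L, 3)                                   # period-3 density wave
n_inf = dephased_density(H, occ)
```

By construction the total density is conserved and each $\langle\hat{n}_{d,i}\rangle$ lies in $[0,1]$, as stated in the text.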
The infinite-time average of the $d$-boson density can hence be evaluated through Eq.~(\\ref{Eq:nd-inf}) for very large systems, $L=500$, where boundary effects on its probability distribution are negligible, and at various disorder strengths.\n\nThe infinite-time occupation of the $d$-bosons plays the role of an effective random chemical potential experienced by the $c$-boson in the Hartree approximation. As shown in the inset of Figure~\\ref{Fig:effective disorder}(a), when the disorder is strong compared to the hopping parameter, $t_d=1$, $P(\\langle \\hat{n}_{d,i}\\rangle)$ resembles a bimodal distribution, with two distinct peaks at $\\langle \\hat{n}_{d,i}\\rangle\\sim0,1$. This behavior can be understood as a consequence of the strong localization of $d$-bosons for disorder $W=10$. Thus, even at infinite time, the $d$-bosons remain close to their initial positions, and the expectation value $\\langle \\hat{n}_{d,i}\\rangle$ remains close to its initial value, i.e.~either $0$ or $1$, depending on the considered site. As disorder is decreased, the two-peak structure gradually disappears, and the distribution acquires a single peak around the average density $\\nu_d$. \n\nIn order to define the effective disorder $\\tilde{\\epsilon}_i$ generated by the $d$-bosons, we need to isolate the random part of the distribution of $\\langle \\hat{n}_{d,i}\\rangle$. To this end, we subtract from $\\langle \\hat{n}_{d,i}\\rangle$ the uniform and the period-$1\/\\nu_d$ contributions. 
In Fourier space this corresponds to modifying the Fourier harmonics of density, $\\tilde{n}_d(k) = \\sum_j \\langle\\hat{n}_{d,j}\\rangle e^{-\\imath k j}$, as follows:\n\\begin{equation}\n\\label{Eq:effective disorder}\n\\tilde{\\epsilon}(k) = \\tilde{n}_d(k) - (1-\\alpha)L\\nu_d\\delta_{k,0}-\\alpha\\sum_{j=1}^L f(j,\\nu_d) e^{-\\imath kj},\n\\end{equation}\nwhere $\\alpha = \\frac{1}{N_d}\\sum_j \\langle\\hat{n}_{d,j}\\rangle e^{-\\imath 2\\pi\\nu_d j}$ corresponds to the relative weight of the period-$1\/\\nu_d$ harmonic in the particle distribution and $f(j,\\nu_d) = \\sum_{n=0}^{N_d-1}\\delta_{j,n\/\\nu_d+1}$ is a functional representation of the initial density wave configuration. Transforming $\\tilde{\\epsilon}(k)$ back to the real space, we obtain the effective random potential $\\tilde{\\epsilon}_i$. Its distribution differs from that of the infinite-time density, as can be seen in Figure~\\ref{Fig:effective disorder}(a). In particular, $P(\\tilde{\\epsilon}_i)$ is centered around zero for all values of disorder, and has an approximately Gaussian shape. Thus, we use the standard deviation of this distribution as the effective disorder strength $\\tilde{W} = U \\mathop{\\rm std} P(\\tilde{\\epsilon}_i)$ experienced by the $c$-boson in the Hartree approximation.\n\nThe effective disorder strength measured in units of $U$, $\\tilde W\/U$, is shown as a function of the disorder strength experienced by $d$-bosons, $W$, in Figure~\\ref{Fig:effective disorder}(b). First, we note a non-monotonic dependence of $\\tilde W$ on the $d$-boson disorder strength, $W$. The effective disorder strength $\\tilde W$ exhibits a maximum around $W^*\\approx 5$, whose position depends weakly on the density of $d$-bosons. This behavior can be naturally explained by considering two opposite limits: at weak $W$ the localization length of $d$-bosons is much larger than the density-wave period, and the initial period-$1\/\\nu_d$ density-wave configuration is washed out at late times, resulting in a weak effective disorder $\\tilde W$. 
In the opposite limit of very strong $W$, the $d$-bosons remain frozen close to their initial positions, resulting in a nearly perfect periodic potential experienced by the $c$-boson. However, such a periodic potential is unable to localize the $c$-boson and is subtracted in Eq.~(\\ref{Eq:effective disorder}), thus again resulting in a weak effective disorder. Given that $\\tilde W$ is expected to decrease for both very large and very small $W$, we expect it to achieve a maximal value at some intermediate disorder $W$. \nFinally, we study in Fig.~\\ref{Fig:effective disorder}(b) the dependence of the effective disorder on the $d$-boson density $\\nu_d$. As $\\nu_d$ is increased, $\\tilde W$ increases accordingly due to the fact that the effective random potential $\\tilde{\\epsilon}_i$ is generated by the $d$-boson density. We note, however, that for $\\nu_d>1\/2$ the effective disorder would decrease again, because of the hard-core nature of the bosons.\n\n\\subsection{Localization length of the $c$-boson in Hartree approximation\\label{Sec:LocHartree}}\n\nWe demonstrated that in the Hartree approximation the $c$-boson experiences an approximately Gaussian-distributed random potential with disorder strength $\\tilde{W}$, which depends on the initial state and on the disorder experienced by the $d$-bosons. Since an arbitrarily weak disorder potential suffices to localize a single particle in a one-dimensional lattice, the $c$-boson in the Hartree approximation is always localized. We proceed with the calculation of its localization length $\\xi_c$, which provides a characteristic lengthscale of localization and can be compared with the localization length of the $d$-bosons, $\\xi_d$.\n\nTo obtain the localization length for the $c$-boson we use the weak-disorder approximation, justified at weak values of $U$ as the random potentials $\\tilde{\\epsilon}_i$ are restricted to $[-1,1]$. 
Perturbing around the tight-binding limit with the transfer matrix method~\\cite{Thouless1972}, we obtain $\\xi_c(k)\\approx {8t^2_c \\sin^2(k)}\/{U^2\\langle\\tilde{\\epsilon}^2_i\\rangle}$, which depends on the momentum $k$ determining the single-particle energy in the absence of disorder, $E(k) = -2t_c\\cos(k)$. The average localization length is calculated by performing the integral over the complete band\n\\begin{equation}\n\\label{Eq:xic ave}\n\\xi_c\\approx \\frac{8t^2_c}{U^2\\langle\\tilde{\\epsilon}^2_i\\rangle} \\int_{-2t_c}^{2t_c} d\\varepsilon \\left[1-\\left(\\frac{\\varepsilon}{2t_c}\\right)^2\\right]\\rho(\\varepsilon),\n\\end{equation}\nwhere $\\rho(\\varepsilon)$ is the usual density of states of the tight-binding Hamiltonian. Carrying out the integral, we find that it contributes only a numerical factor of $1\/2$, as the $c$-boson hopping terms cancel out. Recalling that, since $\\langle\\tilde{\\epsilon}_i\\rangle=0$, the variance $U^2\\langle \\tilde{\\epsilon}^2_i\\rangle$ corresponds to the square of the effective disorder strength, $\\tilde{W}^2$, we obtain\n\\begin{equation}\n\\label{Eq:xic final}\n\\xi_c\\approx \\frac{4t_c^2}{a\\tilde{W}^2},\n\\end{equation}\nwhere $a=1$ is the lattice spacing.\n\\begin{figure}[t]\n\\includegraphics[width=0.95\\columnwidth]{fig2.pdf}\n\\caption{\\label{Fig:xi Hartree}\nThe localization length of the $c$-boson from the Hartree approximation for an initial period-3 density wave state of $d$-bosons. The inset shows the bare data for different values of $U$, ranging from $U=0.5$ (dark blue) to $U=5$ (yellow). For a broad range of interaction strengths, $\\xi_c$ has a minimum at $W^*\\approx5$. 
The main plot shows the collapse of the data, confirming the scaling $\\xi_c\\sim U^{-2}$.\n}\n\\end{figure}\n\n\n\\begin{figure*}[t]\n\\includegraphics[width=1.95\\columnwidth]{fig3.pdf}\n\\caption{\\label{Fig:Resonances}\n(a) The typical ratio of matrix element to level spacing, shown as a function of disorder for different coupling strengths $U$, rapidly decreases as the interaction strength increases. In the left inset we compare the analytic expression for $\\mathcal{R}$ with its numerical estimate. The right inset shows the tunneling process induced by $\\hat{H}_\\text{int}$. \n(b) The phase diagram resulting from the criterion ${\\cal R}=1$ reveals a broad region where localization is perturbatively stable, which grows with increasing $U$ and $W$. \n}\n\\end{figure*}\n\nThe resulting simple expression for $\\xi_c$, Eq.~(\\ref{Eq:xic final}), has two main consequences. First, we expect that $\\xi_c$ inherits the non-monotonic dependence on $W$ and has a minimum approximately when the effective disorder strength is maximal in Fig.~\\ref{Fig:effective disorder}. Second, since $\\tilde{W}\\propto U$, we expect the localization length to diverge as $\\xi_c\\propto 1\/U^2$ at small values of $U$, where the weak-disorder approximation is controlled. To confirm these predictions, we extract the localization length of $c$-bosons from numerical simulations. Using the effective disorder obtained from the eigenstates of the $d$-boson Hamiltonian, we employ exact diagonalization to calculate the single-particle wave functions of $c$-bosons resulting from the Hamiltonian $H_c^\\text{Hartree} = H_c+\\sum_i U \\langle \\hat{n}_{d,i}\\rangle \\hat{n}_{c,i}$. Each eigenstate $\\ket{\\phi_\\varepsilon}$ of this Hamiltonian can be characterized by an (energy-dependent) localization length $\\xi_c(\\varepsilon)$. 
After obtaining $\\xi_c(\\varepsilon)$ for each eigenstate through an exponential fit of its probability distribution in real space, $|\\langle i|\\phi_\\varepsilon\\rangle|^2$, we average the localization length over all eigenstates and further over $50$ disorder realizations.\n\nThe localization length resulting from the numerical simulation is shown in Fig.~\\ref{Fig:xi Hartree}, where the non-monotonic behavior of $\\xi_c$ and its scaling with the interaction strength become apparent. We note that for the adopted choice of hoppings $t_d=t_c=1$, $\\xi_c$ always exceeds the localization length of $d$-bosons and is tunable by the interaction strength in a broad range. In particular, for disorder strength $W=6.5$, used in the accompanying paper~\\cite{Brighi2021a}, $\\xi_d\\approx 0.5$ is smaller than one lattice spacing, while $\\xi_c$ takes values from $1.5$ to $100$ lattice spacings, as the interaction strength is decreased from $U=5$ to $U=0.5$, respectively. This result also assists in the choice of disorder strength: in order to facilitate the numerical studies, we fix disorder at $W=6.5$ so that the localization length of $c$-bosons is close to its minimum.\n\n\n\n\n\\subsection{Analytic criterion for stability of localization\\label{Sec:Stability}}\nThe drawback of the Hartree approximation presented above is that it ignores fluctuations of the $d$-boson density, thus strongly favoring localization. However, it provides a useful starting point to address the perturbative stability of such a localized system. To this end, we use the basis of Anderson localized orbitals provided by the Hartree approximation to assess the stability of the system with respect to interactions between clean and disordered bosons. 
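The localization lengths entering this analysis can be estimated with the exponential-fit procedure described above. A minimal sketch (our own helper, assuming the convention $|\phi(i)|^2\sim e^{-|i-x_0|/\xi}$ for the orbital envelope; the disorder value is an illustrative placeholder):

```python
import numpy as np

def localization_length(phi):
    """Fit |phi(i)|^2 ~ exp(-|i - x0| / xi) around the orbital's maximum
    and return the localization length xi."""
    p = np.abs(phi) ** 2
    x0 = np.argmax(p)
    dist = np.abs(np.arange(len(p)) - x0)
    keep = p > 1e-12                      # drop numerically vanishing tails
    slope, _ = np.polyfit(dist[keep], np.log(p[keep]), 1)
    return -1.0 / slope

L = 200
rng = np.random.default_rng(2)
eps = rng.uniform(-6.5, 6.5, size=L)      # Anderson model at W = 6.5
H = np.diag(eps) + np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
_, states = np.linalg.eigh(H)
xi = np.median([localization_length(states[:, n]) for n in range(L)])
```

We use the median here for robustness against rare poorly fitted states; the text instead averages over eigenstates and disorder realizations.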
\n\nWe transform the interaction Hamiltonian~(\\ref{Eq:Hint}) to the basis of localized orbitals via the relation $\\hat{d}_\\alpha = \\sum_i \\psi_{d\\alpha}(i)\\hat{d}_i$, where $\\hat{d}_\\alpha$ is the annihilation operator of an Anderson orbital and $\\psi_{d\\alpha}(i)=\\langle i|\\alpha\\rangle$ is the corresponding wave function; similar expressions hold for the $c$-bosons. This leads to \n\\begin{equation}\n\\label{Eq:Interaction Anderson basis}\n\\hat{H}_\\text{int} = \\sum_{\\alpha\\beta\\gamma\\delta}V_{\\alpha\\beta\\gamma\\delta} \\hat{d}^\\dagger_\\alpha \\hat{c}^\\dagger_\\beta\\hat{d}_\\gamma\\hat{c}_\\delta.\n\\end{equation}\nAs shown in the schematic representation in the right inset of Figure~\\ref{Fig:Resonances}(a), the matrix element $V_{\\alpha\\beta\\gamma\\delta}$ corresponds to a correlated hopping process, with its value given by the overlap of the wave functions of the corresponding localized orbitals, \n\\begin{equation}\n\\label{Eq:V}\nV_{\\alpha\\beta\\gamma\\delta}= U\\sum_{i=1}^L\\psi^*_{d\\alpha}(i)\\psi^*_{c\\beta}(i)\\psi_{d\\gamma}(i)\\psi_{c\\delta}(i),\n\\end{equation} \nwhere the envelope of the wave functions decays on the scale of the corresponding localization length, $|\\psi_{d\\alpha} (i)|\\sim e^{-|i-x_\\alpha|\/(2\\xi_d)}\/{\\sqrt{2\\xi_d}}$, with $x_\\alpha$ the site around which the orbital is localized. In order to address the stability of localization with respect to such correlated hoppings triggered by the interaction, we compare the matrix element of this process to the corresponding level spacing. \n\nKeeping only the leading behavior of the wave functions, i.e.~the exponential decay, and neglecting their oscillatory behavior results in an upper bound for the matrix element in Eq.~(\\ref{Eq:V}) and thus favors delocalization. 
In this case, the matrix element can be easily estimated as \n\\begin{equation}\n\\label{Eq:V1}\nV_{\\alpha\\beta\\gamma\\delta} \\approx \\frac{U}{4\\xi_c\\xi_d}\\sum_{i=1}^L e^{-\\frac{|x_\\alpha-i|+|x_\\gamma-i|}{\\xi_d}}\n e^{-\\frac{|x_\\beta-i|+|x_\\delta-i|}{\\xi_c}},\n\\end{equation}\nfrom which one can immediately see that it is exponentially suppressed whenever any of the terms $|x-i|\/\\xi_{c\/d}$ in the exponents exceeds one. This restricts the localized orbitals that can efficiently participate in the hopping process:\n\\begin{equation}\n\\label{Eq:loc centers constraint}\n\\begin{cases}\n|x_\\alpha-x_\\gamma|&\\lesssim\\xi_d,\\quad |x_\\alpha-x_\\beta|\\lesssim\\max(\\xi_c,\\xi_d)\\\\\n|x_\\beta-x_\\delta|&\\lesssim\\xi_c,\\quad |x_\\gamma-x_\\delta|\\lesssim\\max(\\xi_c,\\xi_d).\n\\end{cases}\n\\end{equation}\nUsing the fact that $\\xi_c\\geq \\xi_d$, the summation cancels the $\\xi_d$ in the denominator, resulting in the simple estimate\n\\begin{equation}\n\\label{Eq:V res}\nV_{\\alpha\\beta\\gamma\\delta}\\approx \\frac{U}{4\\xi_c}.\n\\end{equation}\n\nThe second element needed to understand if such tunneling processes are resonant is the typical level spacing $\\delta_{cd}$, obtained as the ratio of the typical energy difference $\\Delta E$ to the number of states $\\mathcal{N}$ connected by the interaction. Since the two states differ in the positions of the two bosons, the typical energy difference is given by the sum of the two disorder strengths: $\\Delta E\\approx W+\\tilde{W}$. To account for the number of states $(\\ket{\\psi},\\ket{\\psi'})$ connected by the interaction we need to consider the constraints~(\\ref{Eq:loc centers constraint}), which restrict the possible configurations. First, in the initial state the two bosons must lie within a distance $\\xi_c$ from one another, as $\\xi_c\\geq \\xi_d$, thus contributing $2\\xi_c$ possible configurations. 
Due to the localized nature of wave functions, the positions of the bosons in the final state must lie within a distance $\\xi_c$ and $\\xi_d$ of the initial ones for the $c$-boson and the $d$-bosons, respectively. A na{\\\"i}ve computation would then give $\\mathcal{N} = (2\\xi_c)(2\\xi_c)(2\\xi_d) = 8\\xi^2_c\\xi_d$. However, one must be careful, since in the final state, too, the two bosons must be separated by at most $\\xi_c$ for the matrix element not to vanish. This reduces the total number of states, as not all the moves increasing the distance between the bosons are allowed. A careful computation reveals that for a given $d$-boson hopping there are only $(3\/2)\\xi_c$ possible choices, thus reducing the total number of states. Finally, one has to take into account the finite density of the $d$-bosons, together with their hard-core nature, which requires that an initially occupied site be left empty after the hopping and vice versa, leading to\n\\begin{equation}\n\\label{Eq:number of states}\n\\mathcal{N} \\approx 6\\xi^2_c\\xi_d\\nu_d(1-\\nu_d), \\quad \\delta_{cd} = (W+\\tilde W)\/\\mathcal{N} .\n\\end{equation}\n\nGathering the results of Eqs.~(\\ref{Eq:V res})-(\\ref{Eq:number of states}) we obtain the expression for the ratio of the matrix element to the level spacing,\n\\begin{equation}\n\\label{Eq:Res}\n\\mathcal{R} = \\frac{3}{2}U\\frac{\\xi_c\\xi_d\\nu_d(1-\\nu_d)}{W+\\tilde{W}}.\n\\end{equation}\nCondition $\\mathcal{R}<1$ provides a criterion for stability of localization. 
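For orientation, Eq.~(\ref{Eq:Res}) is easy to evaluate numerically. In the sketch below the values of $\xi_c^0$, $\xi_d$ and $\tilde W$ are illustrative placeholders rather than measured ones, and $\tilde W$ is held fixed for simplicity (in the text $\tilde W$ itself grows with $U$):

```python
import numpy as np

def resonance_ratio(U, W, W_eff, xi_c0, xi_d, nu_d):
    """Eq. (Res): R = (3/2) U xi_c xi_d nu_d (1 - nu_d) / (W + W_eff),
    with the Hartree scaling xi_c = xi_c0 / U**2."""
    xi_c = xi_c0 / U ** 2
    return 1.5 * U * xi_c * xi_d * nu_d * (1 - nu_d) / (W + W_eff)

# Illustrative placeholder parameters
common = dict(W=6.5, W_eff=1.0, xi_c0=40.0, xi_d=0.5, nu_d=1 / 3)
R_weak = resonance_ratio(U=0.5, **common)     # weak interaction
R_strong = resonance_ratio(U=12.0, **common)  # strong interaction
```

With $\tilde W$ fixed, $\mathcal{R}\propto 1/U$, so strong interactions push the system deeper into the perturbatively stable regime, consistent with the criterion above.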
\nRecalling that the $c$-boson localization length depends on the interaction strength as $\\xi_c\\approx\\xi_c^0\/U^2$, the criterion $\\mathcal{R}<1$ can be rearranged as \n\\begin{equation}\n\\label{Eq:Res Criterion}\n(W+\\tilde{W})U>\\frac{3}{2}\\xi_c^0\\xi_d\\nu_d(1-\\nu_d),\n\\end{equation}\nsuggesting that localization remains stable at strong interactions and large disorder.\n\n\nThe typical probability of resonance $\\mathcal{R}$ can also be evaluated numerically and compared with the prediction of Eq.~(\\ref{Eq:Res}). To this end, after diagonalizing the Hamiltonian in the Hartree approximation, for each $c$-boson eigenstate we select states lying within the localization lengths $\\xi_c$ and $\\xi_d$ of one another and evaluate for all of them the matrix element $V_{\\alpha\\beta\\gamma\\delta}$ and the energy difference. Then we define the resonance value $\\mathit{r}_\\beta$ by fixing the initial state of the $c$-boson, $\\beta$, and maximizing the ratio of the matrix element to the energy difference over all other available states. In a similar way we define the resonance $\\mathit{r}_{\\alpha\\beta}$, where both the initial state of the $c$-boson ($\\beta$) and of the $d$-boson ($\\alpha$) are fixed. Since $\\mathit{r}_{\\alpha\\beta}$ is a more constrained version of $\\mathit{r}_\\beta$, the following inequality holds\n\\begin{equation}\n\\label{Eq:maxima Res}\n\\mathit{r}_\\beta = \\max_{\\alpha\\gamma\\delta}\\Bigl[\\frac{V_{\\alpha\\beta\\gamma\\delta}}{\\Delta E}\\Bigr]\\geq \\mathit{r}_{\\alpha\\beta} = \\max_{\\gamma\\delta}\\Bigl[\\frac{V_{\\alpha\\beta\\gamma\\delta}}{\\Delta E}\\Bigr].\n\\end{equation}\nFinally, we obtain the typical resonance probability by taking the median of the distribution of $\\mathit{r}_\\beta$ and $\\mathit{r}_{\\alpha\\beta}$. The results of this numerical evaluation are shown in the left inset of Figure~\\ref{Fig:Resonances}(a) together with the analytic prediction~(\\ref{Eq:Res}) and show an overall agreement. 
In particular, we notice that $\\mathit{r}_{\\alpha\\beta}$ is always smaller than $\\mathit{r}_\\beta$.\n\nFrom Figure~\\ref{Fig:Resonances}(a) we observe that the disorder $W$ at which the condition~(\\ref{Eq:Res Criterion}) is satisfied decreases as the interaction strength $U$ increases. Taking the disorder value where ${\\cal R}=1$ as a critical point, we obtain an estimate for the phase diagram shown in Figure~\\ref{Fig:Resonances}(b), which predicts localization for strong interaction and strong disorder. Note that our considerations assume a homogeneous initial distribution of $d$-bosons, as discussed at the end of Sec.~\\ref{Sec:Model}. Figure~\\ref{Fig:Resonances}(b) suggests that at disorder $W=6.5$ the system is expected to be localized for sufficiently large interactions, $U\\gtrapprox 1.5$. In what follows we explore the properties of the localized system for $U=12$, a point that is located deep in the localized regime according to our stability condition.\n\n\\section{Quench dynamics as a probe of localization\\label{Sec:Dynamics}}\n\nThe static Hartree approximation introduced above allowed us to formulate a criterion for the stability of localization. Below we proceed with probing the dynamics of the full model in the localized regime. First, we extend the Hartree approximation in an attempt to include dynamical effects in the system. More importantly, we study the full quantum dynamics using TEBD. In this section we fix $U=12$ and $W=6.5$ and investigate the dynamics of a quench from the state~(\\ref{Eq:psi0}) with density $\\nu_d=1\/3$.\n\n\\subsection{Time-dependent Hartree approximation\\label{Sec:TDHF}}\n\nThe static Hartree approximation relied on the assumption of quick equilibration of the $d$-boson density. This overestimated localization in the system, as we shall demonstrate below. Here, we adopt a different approximation: we assume that the full many-body wave function can be decomposed into a product of $c$- and $d$-boson wave functions. 
Such a representation completely ignores entanglement between the two boson species. However, it allows fast and efficient simulation of the dynamics on long timescales for large systems. We note here that similar approaches to the approximation of dynamics in MBL systems have already been used~\\cite{Bera2021}, showing results qualitatively similar to what we present below. \n\nThe product state structure of the wave function of the full system allows us to reduce the full many-body Schr\\\"odinger equation to the simultaneous evolution under two non-interacting but time-dependent Hamiltonians. Specifically, the time evolution of $c$- and $d$-bosons is governed by the time-dependent Hamiltonians $\\hat{H}^\\text{dH}_c(t)$ and $\\hat{H}^\\text{dH}_d(t)$, respectively:\n\\begin{equation}\n\\label{Eq:Hd(t)}\n\\begin{split}\n\\hat{H}^\\text{dH}_d(t) &= t_d\\sum_{i=1}^{L-1}(\\hat{d}^\\dagger_{i+1}\\hat{d}_i + \\text{h.c.}) + \\sum_{i=1}^L V_{d,i}(t)\\hat{n}_{d,i},\n\\\\ &V_{d,i}(t) = \\epsilon_i + U\\langle\\hat{n}_{c,i}(t)\\rangle \n\\end{split}\n\\end{equation}\n\\begin{equation}\n\\label{Eq:Hc(t)}\n\\begin{split}\n\\hat{H}^\\text{dH}_c(t) &= t_c\\sum_{i=1}^{L-1}(\\hat{c}^\\dagger_{i+1}\\hat{c}_i + \\text{h.c.}) + \\sum_{i=1}^L V_{c,i}(t)\\hat{n}_{c,i},\n\\\\ &V_{c,i}(t) = U\\langle\\hat{n}_{d,i}(t)\\rangle,\n\\end{split}\n\\end{equation}\nwhere the expectation value of the densities of the corresponding boson species reads $\\langle\\hat{n}_{c\/d,i}(t)\\rangle = \\bra{\\psi_{c\/d}(t)}\\hat{n}_{c\/d,i}\\ket{\\psi_{c\/d}(t)}$, and $\\ket{\\psi_{c\/d}(t)}$ is the wave function obtained from the time-evolution of the initial state~(\\ref{Eq:psi0}) with the corresponding time-dependent Hamiltonian:\n\\begin{equation}\n\\label{Eq:psi_cd(t)}\n\\ket{\\psi_{c\/d}(t)} = \\hat{T}\\exp\\Bigl(-\\imath\\int_0^t dt'\\hat{H}^\\text{dH}_{c\/d}(t')\\Bigr)\\ket{\\psi^0_{c\/d}}.\n\\end{equation}\nAlthough the $d$-bosons are formally described by a many-body wave function due to their finite density, one 
can exploit their non-interacting nature to perform an efficient simulation of the dynamics. In fact, as explained after Eq.~(\\ref{Eq:nd-inf}), both the initial state and the eigenstates can be written as product states. After rotating the orbitals into the real space basis and back, the time evolution reduces to a double sum over initially occupied sites and orbitals. The $d$-bosons can then be treated separately as single particles, and their overall wave function can be reconstructed from the single-particle evolved states, thus reducing the complexity from exponential to linear in $L$. \n\n\\begin{figure}[b]\n\\includegraphics[width=.95\\columnwidth]{fig4.pdf}\n\\caption{\\label{Fig:nct tdHF}\nEvolution of the density profile of the clean boson in the dynamic Hartree approximation shows clear diffusive behavior. The red line corresponds to the diffusive ``lightcone'', $\\langle\\hat{n}_{c,x}(t)\\rangle\\sim\\sqrt{t}$. The data is averaged over $400$ disorder realizations, $L=90$.}\n\\end{figure}\n\n\nFirst, we study the dynamics of the $c$-boson, illustrated in Figure~\\ref{Fig:nct tdHF}, where we plot the density profile of the clean boson $\\langle \\hat{n}_{c,x}(t)\\rangle$ as a function of distance from its initial location, $x=i-L\/2$, and time. We observe that in contrast to the case of the static Hartree approximation, where the $c$-boson spreads within an exponentially localized envelope (see Appendix~\\ref{App:Hartree-Dynamics}), the dynamic Hartree approximation results in a diffusive spreading of the $c$-boson and its complete delocalization over the entire system on a timescale $t\\sim L^2$. 
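To make the propagation scheme above concrete, here is a minimal sketch of a single self-consistent step (our own illustrative code, not the implementation used in this work): the instantaneous Hartree potentials of Eqs.~(\\ref{Eq:Hd(t)}) and (\\ref{Eq:Hc(t)}) are built from the current densities, and both species are propagated with dense matrix exponentials, which is only practical for small $L$:

```python
import numpy as np
from scipy.linalg import expm

def hopping(L, t_hop):
    """Open-chain hopping matrix t_hop * (|i><i+1| + h.c.)."""
    H = np.zeros((L, L))
    idx = np.arange(L - 1)
    H[idx, idx + 1] = H[idx + 1, idx] = t_hop
    return H

def hartree_step(psi_c, orbitals_d, eps, U, t_c, t_d, dt):
    """One time step of the coupled mean-field evolution.

    psi_c      : (L,) single-particle wave function of the c-boson
    orbitals_d : (N_d, L) occupied single-particle d-orbitals
    eps        : (L,) on-site disorder
    """
    n_d = np.sum(np.abs(orbitals_d) ** 2, axis=0)            # <n_{d,i}(t)>
    n_c = np.abs(psi_c) ** 2                                 # <n_{c,i}(t)>
    H_c = hopping(len(psi_c), t_c) + np.diag(U * n_d)        # Eq. (Hc(t))
    H_d = hopping(len(psi_c), t_d) + np.diag(eps + U * n_c)  # Eq. (Hd(t))
    U_c, U_d = expm(-1j * dt * H_c), expm(-1j * dt * H_d)
    # propagate the c-boson and each occupied d-orbital by one step dt
    return U_c @ psi_c, orbitals_d @ U_d.T
```

Since both propagators are unitary, the norm of every single-particle state is preserved at each step; the two species interact only through the mutually generated potentials.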
Even after the $c$-boson spreads over the whole chain, it does not reach a steady state, which can be attributed to the fact that this approximation oversimplifies the true many-body character of the problem.\n\n\\begin{figure}[t]\n\\includegraphics[width=.95\\columnwidth]{fig5.pdf}\n\\caption{\\label{Fig:fluct c}\nFluctuations of the $c$-boson density decay slightly faster than polynomially in time. The inset shows that the saturation value of the density fluctuations reached at late times decreases monotonically with the system size as $1\/L^{b}$, with $b\\approx1.6$. This data is obtained by averaging over $400$ disorder realizations.}\n\\end{figure}\n\n\nThe fluctuations of the $c$-boson density result in a time-dependent potential acting on the disordered bosons, see Eq.~(\\ref{Eq:Hd(t)}). In order to quantify the density fluctuations of the clean boson, we average the absolute value of the deviation of its density from the mean value, $\\delta n_{c,i}(t) = |\\langle\\hat{n}_{c,i}(t)\\rangle-\\overline{\\langle\\hat{n}_{c,i}\\rangle}|$,\nwhere $\\overline{\\langle\\hat{n}_{c,i}\\rangle}$ corresponds to the average over the interval $t\\in[9.9\\times 10^3,10^4]$.\nAs shown in Figure~\\ref{Fig:fluct c}, the density fluctuations around the initial position of the $c$-boson decay in time, until they eventually reach a plateau. Note the long timescale involved in reaching the saturation value: even for a relatively small system of $L=30$ this happens at times $t\\gtrsim 10^3$.\n\n\nThe dynamics of the $d$-bosons can be intuitively understood as resulting from an Anderson insulator globally coupled to a weak, non-Markovian, local noise. While it is known~\\cite{Knap2017,Marino2018,Gefen1984,Logan1987,Scardicchio2021} that Anderson localization is unstable with respect to global noise, a recent work~\\cite{BarLev2021} has demonstrated that coupling of an Anderson insulator to a local Markovian white noise leads to a logarithmically slow particle transport and entanglement growth. 
Our dynamics differs from that of Ref.~\\cite{BarLev2021} in that the $d$-bosons are coupled to fluctuations of the $c$-boson density throughout the entire length of the chain. Another important difference is that the density fluctuations of the clean boson produce a temporally and spatially correlated noise. \n\nIn Figure~\\ref{Fig:nd Id tdHF}(a) we show the density profile of $d$-bosons at late times, $T=10^4$, for different system sizes $L=30$, $60$, and $90$. Although the relaxation is strongest in the middle of the chain, the memory of the initial density-wave configuration survives even at late times. \nTo understand the dynamics of the relaxation of the density profile, we consider the imbalance $I(t)$~\\cite{Schreiber2015}. The imbalance quantifies the memory of the initial state using the difference in occupation between initially occupied and initially empty sites ($N_o$ and $N_e$ respectively), \n\\begin{equation}\n\\label{Eq:Id}\nI(t) = \\frac{N_o(t)-N_e(t)}{N_o(t)+N_e(t)}.\n\\end{equation}\nFor the initial period-3 density wave state considered here, the explicit form of $N_{o\/e}$ reads:\n\\begin{equation}\n\\label{Eq:No Ne}\nN_o = \\sum_{i=1}^{L\/3}\\hat{n}_{d,3i-2}\\;,\\quad N_e = \\frac{1}{2}\\sum_{i=1}^{L\/3}\\Bigl(\\hat{n}_{d,3i}+\\hat{n}_{d,3i-1}\\Bigr).\n\\end{equation}\nFigure~\\ref{Fig:nd Id tdHF}(b) reveals that $I(t)$ decays without any signs of saturation even at times $T=10^4$. After rescaling the time axis with a factor of $1\/L^2$ we observe a collapse of the data, which further supports the intuition that although the imbalance is not a conserved quantity, its relaxation happens on a timescale that scales as $t\\propto L^2$ with the system size. The decay of the imbalance, after a plateau extending up to $t\\approx 0.1L^2$, follows a power-law scaling, $I(t)\\sim \\left({L^2}\/{t}\\right)^{\\beta}$, with $\\beta\\approx 0.4$ as shown in the inset of Fig.~\\ref{Fig:I comparison}(a). 
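As a sanity check of Eqs.~(\\ref{Eq:Id}) and (\\ref{Eq:No Ne}), the imbalance of the period-3 density wave can be computed from a density profile with a few lines (an illustrative sketch of our own, in 0-indexed notation, so the initially occupied sites $3i-2$ become $i = 0, 3, 6, \\dots$):

```python
import numpy as np

def imbalance(n_d):
    """Imbalance of Eq. (Id) for a period-3 initial density wave.

    n_d : on-site densities <n_{d,i}>, with len(n_d) divisible by 3;
          sites 0, 3, 6, ... were initially occupied.
    """
    n_d = np.asarray(n_d, dtype=float)
    N_o = n_d[0::3].sum()                            # initially occupied sublattice
    N_e = 0.5 * (n_d[1::3].sum() + n_d[2::3].sum())  # initially empty sublattices
    return (N_o - N_e) / (N_o + N_e)
```

At $t=0$ the imbalance equals one, while a fully relaxed profile at density $\\nu_d = 1\/3$ gives zero; the factor $1\/2$ in $N_e$ compensates for the two empty sublattices.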
\n\n\\begin{figure}[t]\n\\includegraphics[width=.95\\columnwidth]{fig6.pdf}\n\\caption{\\label{Fig:nd Id tdHF}\n(a) The density profile of $d$-bosons at late time $T=10^4$ retains memory of the initial state. Comparing data for different system sizes, we observe that larger systems keep a stronger memory of the initial state.\n(b) The inset shows that the imbalance in the middle of the chain follows an approximately power-law relaxation without any signs of saturation at late times. The main panel reveals that the imbalance dynamics collapses for different system sizes after rescaling of the time axis by $1\/L^2$. The averaging is performed over $400$ disorder realizations.\n}\n\\end{figure}\n\n\nIn summary, the dynamic Hartree approximation suggests a complete delocalization of the clean particle, which is followed by the melting of the imbalance of the disordered bosons. This does not agree with our expectations from the criterion for the stability of localization obtained in Sec.~\\ref{Sec:Stability}, and also disagrees with the results of the TEBD dynamics presented below. It is natural to attribute the delocalization observed here to the nature of the dynamic Hartree approximation, which completely neglects the many-body character of the problem and discards the correlations between the two species of bosons. The inclusion of such correlations is expected to lead to a more efficient relaxation of the density fluctuations and may stop the diffusive spreading of the $c$-boson, as we illustrate below. \n\n\n\\subsection{Stability of localization from TEBD dynamics}\n\nWe now turn our attention to the fully interacting system, thus reintroducing all the effects neglected in the Hartree approximation (both static and time-dependent). 
In order to simulate the dynamics of large systems we use the time-evolving block decimation (TEBD) method~\\cite{Vidal2003} with time step $dt=0.05$, truncation $\\varepsilon=10^{-9}$ and maximum bond dimension $\\chi=3000$, delegating all details of the numerical implementation to Appendix~\\ref{App:TEBD}.\n\n\\subsubsection{Stability of localization of clean and disordered bosons}\nStudying the dynamics of the $c$-boson in the Hartree limit, see Appendix~\\ref{App:Hartree-Dynamics}, we realized that it exhibits different behavior depending on the distance from the central site. Close to the center, within a time-dependent radius $R(t)$, the density is exponentially suppressed with a decay length $\\ell_c(t)$. Farther from the middle, on the other hand, it instead shows a Gaussian expansion. In the accompanying work, Ref.~\\cite{Brighi2021a}, we analyzed the dynamical localization of the $c$-boson observed in the TEBD dynamics. Observing features similar to the static Hartree case, there we proposed the following ansatz for the dynamics of the clean boson density, \n\\begin{equation}\n\\label{Eq:nc(x,t)}\nn_c(x,t) = \\mathcal{N}_c(t)\\exp\\Bigl(-\\frac{|x|}{\\ell_c(t)\\tanh\\bigl(R(t)\/|x|\\bigr)}\\Bigr),\n\\end{equation}\nwhere $\\mathcal{N}_c(t)$ is a normalization factor. Eq.~(\\ref{Eq:nc(x,t)}) smoothly interpolates between the two behaviors observed, as $|x|$ runs across $R(t)$. 
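For reference, the ansatz~(\\ref{Eq:nc(x,t)}) is straightforward to evaluate numerically; the sketch below (our own notation) treats the decay length and the radius as given parameters `ell` and `R`:

```python
import numpy as np

def nc_profile(x, ell, R):
    """Unnormalized ansatz of Eq. (nc(x,t)) for given ell_c(t) and R(t).

    Interpolates between exp(-|x|/ell) for |x| << R and a faster,
    Gaussian-like suppression for |x| >> R.
    """
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    # regularize |x| = 0, where tanh(R/|x|) -> 1 and the profile equals 1
    return np.exp(-ax / (ell * np.tanh(R / np.maximum(ax, 1e-12))))
```

Inside the radius ($|x|\\ll R$) the profile reduces to a pure exponential with decay length `ell`, which is the limit used later to describe the envelope once $R(t)$ saturates to $L\/2$.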
Based on phenomenological observations, we suggested the following form for the decay length\n\\begin{equation}\n\\label{Eq:ell_c(t)}\n\\ell_c(t) = \\xi_c\\frac{\\log\\bigl(1+t\/T_0\\bigr)}{1+\\log\\bigl(1+t\/T_0\\bigr)},\n\\end{equation}\nwhich implies saturation of the decay length to a constant localization length $\\xi_c$ on timescales $t\\gg T_0$.\n\n\\begin{figure}[t]\n\\includegraphics[width=.95\\columnwidth]{fig7.pdf}\n\\caption{\\label{Fig:nct TEBD}\nThe spreading of the $c$-boson density in the $L=60$ system is well described by the ansatz of Eq.~(\\ref{Eq:nc(x,t)}) (blue curve), while it deviates from the diffusive behavior (red curve). }\n\\end{figure}\n\nFigure~\\ref{Fig:nct TEBD} shows the evolution of the density profile of the clean boson with time. While at early times it approximately agrees with the diffusive profile extracted from the time-dependent Hartree approximation (red line and Fig.~\\ref{Fig:nct tdHF}), at times of order one the density spreading slows down. To describe the envelope of the density profile, we use the fact that when $R(t)$ saturates to $L\/2$, the ansatz~(\\ref{Eq:nc(x,t)}) reduces to $n_c(x,t)\\approx e^{-|x|\/\\ell_c(t)}$. By imposing the condition $n_c(x,t)=\\text{const}$, we obtain the scaling $|x|\\approx \\ell_c(t)$ shown in Fig.~\\ref{Fig:nct TEBD} by the blue line, which provides a better description of the envelope of the density profile.\n\n\\begin{figure}[b]\n\\includegraphics[width=0.95\\columnwidth]{fig8.pdf}\n\\caption{\\label{Fig:I comparison}\n(a) TEBD dynamics (blue line) reveals a broad plateau in the imbalance at late times, whereas the time-dependent Hartree approximation (red line) predicts a power-law decay.\n(b) The imbalance of the central six sites in the TEBD dynamics reveals a weak power-law decay. The stronger deviation between the TEBD and approximate time-dependent Hartree data highlights the importance of entanglement for an accurate description of the dynamics. 
Data is shown for $L=60$ and averaged over $400$ and $50$ disorder realizations for the time-dependent Hartree and TEBD simulations, respectively.\n}\n\\end{figure}\n\nNext, we consider the dynamics of the imbalance of $d$-bosons, defined in Eq.~(\\ref{Eq:Id}), which is shown in Fig.~\\ref{Fig:I comparison}(a). While TEBD and the time-dependent Hartree approximation approximately agree at times up to $t\\approx 10$, at later times the imbalance obtained from TEBD remains approximately constant, while the time-dependent Hartree approximation predicts a power-law decay of the imbalance. This is further highlighted by the exponent $\\beta$, shown in the inset, which is compatible within error bars with the value $\\beta=0$ for the TEBD dynamics, corresponding to a non-decaying imbalance. \n\nIn order to quantify the stronger relaxation close to the center of the chain, triggered by the presence of the clean boson, we also study the local imbalance in the middle of the chain, $I_\\text{mid}(t)$, defined according to Eq.~(\\ref{Eq:Id}) but restricting the sum in Eq.~(\\ref{Eq:No Ne}) to the six central sites. The quantity $I_\\text{mid}(t)$ shows a stronger decay in Fig.~\\ref{Fig:I comparison}(b) as compared to its global counterpart in Fig.~\\ref{Fig:I comparison}(a). However, the TEBD dynamics again reveals a much weaker effect of the $c$-boson compared to the Hartree approximation. Fitting the decay to a power-law form, the resulting exponent remains small and does not change with system size (inset). Hence, we expect the effect on the central region to be system-size independent, thus leaving the boundaries unaffected as $L\\to\\infty$. \n\n\\subsubsection{Entanglement dynamics}\n\n\nAfter demonstrating the stability of localization in the TEBD dynamics, we present additional details for the picture of entanglement spreading provided in Ref.~\\cite{Brighi2021a}. 
Thanks to $U(1)$ conservation, we can write the reduced density matrix of the first $i$ sites from the left, $\\rho_i$, in block-diagonal form, $\\rho_i = \\sum_n p_i^{(n)}\\rho_i^{(n)}$, where $p_i^{(n)}$ corresponds to the weight of the $n$-particle sector and $\\mathrm{tr}\\rho_i^{(n)}=1$. It is then convenient to separate the entanglement entropy, $S(i) = -\\mathrm{tr}\\rho_i\\log\\rho_i$, into two contributions: the configuration entanglement $S_C(i)$ and the particle entanglement $S_n(i)$~\\cite{Lukin2019}\n\\begin{align}\n\\label{Eq:S}\nS(i) = &S_n(i)+S_C(i) \\\\\n\\label{Eq:Sc}\nS_C(i) = -&\\sum_n p_i^{(n)} \\mathrm{tr}\\rho_i^{(n)}\\log\\rho_i^{(n)}\\\\\n\\label{Eq:Sn}\nS_n(i) = &-\\sum_n p_i^{(n)}\\log p_i^{(n)}.\n\\end{align}\n\n\n\\begin{figure}[b]\n\\includegraphics[width=.95\\columnwidth]{fig9.pdf}\n\\caption{\\label{Fig:S(t)}\n(a) The configuration entropy shows a steady logarithmic growth, whose onset is delayed as the cut gets farther from the center of the chain, due to the weaker effect of the interaction at the boundaries. \n(b) The real space profile of the configuration entropy is highly non-uniform, with the entropy being maximal in the center of the chain, where the $c$-boson is localized. Far from the center, $S_C(i)$ is close to zero, indicating the absence of interactions at the boundaries up to late times.\n(c) The growth of the number entropy is similarly delayed with increasing distance from the center of the chain. 
It shows a slow growth, compatible with the weak relaxation of $d$-bosons observed in the imbalance dynamics.\n(d) The number entropy also has a non-uniform profile, with its values far from the center being consistent with the particle entropy of the Anderson insulator.\nThese data refer to a system of $L=60$ sites, averaged over $50$ disorder realizations.\n}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\includegraphics[width=.95\\columnwidth]{fig10.pdf}\n\\caption{\\label{Fig:S_C spread}\nThe configuration entropy $S_C(i,t)$ at $U=12$ and $L=60$, characterizing the interacting nature of the system, presents an extremely inhomogeneous growth. In agreement with the picture of the propagation of MBL, its growth is constrained within the ``MBL light-front'' $x_\\text{MBL}(t)$ shown by the red line. When $x\\gg x_\\text{MBL}$, the configuration entropy $S_C$ is close to zero, suggesting that the system remains effectively non-interacting until late times. \n}\n\\end{figure}\n\n\nThe configuration entropy $S_C(i,t)$ arises from the superposition of different product states in $\\ket{\\psi(t)}$ and measures the correlation between particle configurations in the two subsystems. The growth of the configuration entropy can be attributed to the interacting nature of the system. Thus, the configuration entropy does not show interesting dynamics in an Anderson insulator, while displaying a logarithmic growth in a many-body localized phase~\\cite{Lukin2019}, providing the main contribution to the entanglement growth~\\cite{Serbyn2013a}. The particle number entanglement entropy $S_n(i)$ arises from the occupation of different sub-sectors of the density matrix $\\rho_i$ and hence accounts for particle transport in the system. Recent studies~\\cite{Kiefer-Emmanouilidis2020,Kiefer-Emmanouilidis2021} suggested an extremely slow, albeit finite, growth $S_n(i,t)\\sim\\log\\log t$, which they attributed to slow particle transport. 
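The decomposition of Eqs.~(\\ref{Eq:S})-(\\ref{Eq:Sn}) can be sketched numerically as follows (illustrative helper names of our own), given the sector weights $p_i^{(n)}$ and the normalized blocks $\\rho_i^{(n)}$:

```python
import numpy as np

def logm_psd(rho):
    """Matrix logarithm of a positive semi-definite density matrix."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 1e-15, None)  # regularize zero eigenvalues
    return (v * np.log(w)) @ v.conj().T

def entanglement_split(p, blocks):
    """Particle (S_n) and configuration (S_C) entropies, Eqs. (Sn) and (Sc).

    p      : weights p^(n) of the particle-number sectors (sum to 1)
    blocks : normalized sector density matrices rho^(n) (tr rho^(n) = 1)
    """
    S_n = -sum(pn * np.log(pn) for pn in p if pn > 0)
    S_C = -sum(pn * np.trace(r @ logm_psd(r)).real
               for pn, r in zip(p, blocks) if pn > 0)
    return S_n, S_C
```

The two contributions add up to the total entanglement entropy $S(i)$ of Eq.~(\\ref{Eq:S}).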
Subsequent work by \\textcite{Luitz2020}, however, suggested the absence of such transport and a saturating particle number entanglement. \n\nThe dynamics of the two contributions to the entanglement in our model is presented in Figure~\\ref{Fig:S(t)}, where the first row shows the configuration entanglement growth for different cuts in the chain~(a) and its profile at different times~(b). In the second row, we show the dynamics of the particle number entanglement. Similarly to the configuration entropy, $S_n(i,t)$ grows non-homogeneously across the system, as shown in Fig.~\\ref{Fig:S(t)}(c)-(d).\n\nSeparating the entanglement $S(i)$ into configurational and particle parts, it is clear that the logarithmic growth is due to the configuration entropy, $S_C(i)\\sim \\xi_S\\log(t)$, as clearly shown in Figure~\\ref{Fig:S(t)}(a). The configuration entanglement growth is non-homogeneous across the system, which is explained by the propagation of MBL phenomenology discussed in Ref.~\\cite{Brighi2021a}. Introducing an effective interaction $U_\\text{eff}(x,t) = Un_c(x,t)$, the dynamics of $S_C(i)$ can be described by the ansatz $S_C(i,t)\\approx\\xi_S\\log(1+U_\\text{eff}(i,t)t)$. This explains the delayed onset of growth of the configurational entropy in Fig.~\\ref{Fig:S(t)}(a) away from the center of the chain. In addition, this phenomenology predicts a linearly decreasing entanglement profile away from the center of the chain. Indeed, using an exponential profile for the density of the clean boson at late times we obtain the effective interaction strength $U_\\text{eff}\\approx Ue^{-|i-L\/2|\/\\ell_c(t)}$. This gives the following asymptotic behavior of the configurational entanglement, \n\\begin{equation}\\label{Eq:}\nS_C(i,t\\gg 1)\\sim \\text{const}-\\frac{\\xi_S}{\\ell_c(t)}|i-L\/2|,\n\\end{equation}\nwhich decays linearly away from the center of the chain with slope $\\xi_S\/\\ell_c(t)$. 
This prediction is confirmed by the numerical results shown in Fig.~\\ref{Fig:S(t)}(b), yielding the value $\\xi_S\\approx0.31$.\n\nAccording to the effective interaction picture describing the propagation of MBL, the interacting nature spreads through the system, producing a many-body localization lightcone. At late times, when $n_c(i,t)\\approx\\mathcal{N}_c(t)e^{-|i-L\/2|\/\\ell_c(t)}$, this can be captured analytically by solving $S_C(x,t)=\\text{const}$ for $x$, which defines the MBL ``light-front'',\n\\begin{equation}\n\\label{Eq:xMBL}\nx_\\text{MBL}(t) = \\ell_c(t)\\log\\bigl(\\mathcal{N}_c(t)Ut\\bigr).\n\\end{equation}\nAs shown in Figure~\\ref{Fig:S_C spread}, the prediction of the MBL lightcone (red curve) provides an accurate description of the actual spread of the configuration entropy, thus confirming the genuine propagation of the many-body nature through the chain.\n\n\\section{Probing eigenstates with DMRG-X\\label{Sec:DMRG-X}}\n\nAlthough TEBD enabled the simulation of dynamics for large systems, the maximal time remained limited by entanglement growth. Below we use the DMRG-X algorithm~\\cite{Serbyn2016,DMRG-X,Pollman2016} to probe eigenstates of large systems, providing effective insight into the infinite-time behavior. \n\n\n\\begin{figure}[b]\n\\includegraphics[width=.95\\columnwidth]{fig11.pdf}\n\\caption{\\label{Fig:nc DMRG}\nThe density profile of the $c$-boson averaged over different eigenstates as a function of the distance from $i_\\text{max}$ shows an exponential decay for both system sizes considered. 
Through fits to the density in different regions we obtain upper and lower bounds for the localization length, with $\\xi^\\text{low}_c\\approx 2.1$ (red dashed line) corresponding to the fit excluding the central site and $\\xi^\\text{up}_c\\approx 3$ (black dashed line) obtained from the fit that excludes the 12 central sites.}\n\\end{figure}\n\n\\subsection{$c$- and $d$-boson localization}\n\nWe extract highly excited eigenstates of the Hamiltonian~(\\ref{Eq:H}) with $L=30$ and $60$ sites, $U=12$, and $1\/3$ filling of $d$-bosons. We obtain in total $250$ ($500$) eigenstates in the middle of the spectrum by targeting $25$ different energies for each of the $10$ ($20$) disorder realizations considered for the $L=30$ ($60$) chain, respectively.~\\footnote{We note that the first 10 disorder realizations for the $L=60$ system are chosen in such a way that the values of the random energies of the $30$ central sites agree with those for the $L=30$ system.} The states are obtained as a result of $100$ sweeps, where the targeted eigenstate is variationally approximated by an MPS of maximum bond dimension $\\chi=500$ and $250$ for systems of size $30$ and $60$, respectively. The algorithm is initialized with different copies of the initial state~(\\ref{Eq:psi0}), where the initial position of the $c$-boson is chosen among empty sites in the central region. We refer the reader to Appendix~\\ref{App:DMRG-X} for details on the implementation of the algorithm as well as its performance metrics. \n\n\n\nFor each of the eigenstates we extract the peak position of the $c$-boson density, $i_\\text{max}$. The average $c$-boson density plotted as a function of the distance from its peak presents an exponential profile, as shown in Fig.~\\ref{Fig:nc DMRG}. We notice that the density profile around the peak is enhanced with respect to the tails, which can potentially be attributed to the large value of $U$ producing stable doublons. 
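A minimal version of the exponential-tail fits behind such bounds (the fit-window parameters `x_min` and `x_max` are our own illustrative choices) could look as follows:

```python
import numpy as np

def fit_loc_length(x, n, x_min=1, x_max=None):
    """Estimate xi in n(x) ~ exp(-|x|/xi) by a linear fit of log n(x).

    Excluding small |x| (e.g. the central site) or large |x| selects
    the fit window, yielding different bounds on xi.
    """
    x = np.abs(np.asarray(x, dtype=float))
    n = np.asarray(n, dtype=float)
    mask = x >= x_min
    if x_max is not None:
        mask &= x <= x_max
    slope, _ = np.polyfit(x[mask], np.log(n[mask]), 1)
    return -1.0 / slope
```

Varying the window over which the log-density is fitted is what turns a single profile into a pair of upper and lower estimates of the localization length.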
The exponentially decaying density confirms the localization of the clean particle and allows us to extract upper and lower bounds on the localization length, $\\xi^\\text{low}_c \\approx 2.1$ and $\\xi^\\text{up}_c \\approx 3$, resulting from fitting its profile close to the center (excluding the central site) and at the edges of the chain. The two bounds quantitatively agree with the lengthscale $\\ell_c(t\\to\\infty) = \\xi_c = 2.5$ \\emph{extrapolated} from TEBD dynamics in Ref.~\\cite{Brighi2021a}, suggesting that the localization of the $c$-boson observed in dynamics is not a transient effect, but a property of the system, provided the correct sector of the Hilbert space is explored.\n\n\\begin{figure}[t]\n\\includegraphics[width=.95\\columnwidth]{fig12.pdf}\n\\caption{\\label{Fig:nd fluct DMRG}\nStrong deviation of the $d$-boson density from the thermal value $\\nu_d=1\/3$, consistent between different system sizes, indicates the breakdown of thermalization in eigenstates. }\n\\end{figure}\n\n\n\nProbing the potential localization of $d$-bosons in eigenstates is more complicated: the finite particle density does not allow us to define a localization center. Therefore, we consider the average deviation of the $d$-boson density from the thermal value given by their average density $\\nu_d = 1\/3$, as a function of the distance from $i_\\text{max}$. In Figure~\\ref{Fig:nd fluct DMRG}, we show the averaged absolute value of the deviation of the density expectation value from $\\nu_d$, which is limited to the range of values $0<\\langle| \\hat{n}_{d,i}-\\nu_d|\\rangle<1-\\nu_d$. While in a thermalizing system we expect this quantity to be small and to decrease exponentially with the system size, the eigenstates of our problem have on average a large expectation value of $\\langle| \\hat{n}_{d,i}-\\nu_d|\\rangle$. 
In addition, increasing the system size does not lead to a decrease of the average distance from the thermal value, thus suggesting that the $d$-bosons density strongly fluctuates around $\\nu_d$ in eigenstates due to their localization. Interestingly, we notice a slight peak of this quantity at $i=i_\\text{max}$, which is in agreement with the enhancement of the $c$-boson density observed in Fig.~\\ref{Fig:nc DMRG} and may be attributed to the effect of doublons. In the region around the center, however, we observe a slight weakening of localization, corresponding to smaller values of $\\langle| \\hat{n}_{d,i}-\\nu|\\rangle$. This can be attributed to the fact that in the vicinity of the peak of the $c$-boson, the interactions are effectively stronger, leading to an increased relaxation of the $d$-bosons.\n\n\\begin{figure}[b]\n\\includegraphics[width=.95\\columnwidth]{fig13.pdf}\n\\caption{\\label{Fig:spDM spectrum}\nThe spectrum of $\\rho_d$ shows a step behavior, dropping quickly to small values at $i\/L=\\nu_d=1\/3$ (dashed line). This corresponds to the presence of $N_d$ occupied single-particle orbitals, thus confirming localization. We note that $w_{d,i}>1$ is due to the bosonic nature of these particles. The inset shows the log-averaged spectrum of $\\rho_c$ on a logarithmic scale, revealing that there is a single significantly occupied orbital, while all the others have an exponentially decaying weight.\n}\n\\end{figure}\n\n\\begin{figure}[b]\n\\includegraphics[width=.95\\columnwidth]{fig14.pdf}\n\\caption{\\label{Fig:S DMRG}\n(a) Average entanglement as a function of the cut position results in a featureless profile. The agreement in entanglement between different system sizes where they overlap is due to particular disorder choice.\n(b) Averaging the entanglement profile as a function of the distance from the center of localization of $c$-boson, $i_\\text{max}$, shows enhancement in entanglement caused by the presence of the clean boson. 
The dashed line shows the comparison to an exponential fit. The collapse of the entanglement profile between different system sizes suggests area-law entanglement scaling. \n}\n\\end{figure}\n\n\nFurther evidence of localization for both types of particles is found in the spectrum of the single-particle density matrices $\\rho_{c}=\\langle \\hat{c}^\\dagger_i\\hat{c}_j\\rangle$ ($\\rho_{d}=\\langle \\hat{d}^\\dagger_i\\hat{d}_j\\rangle$)~\\cite{Bardarson2015,Bardarson2017}. In the particular case treated here, the von Neumann entropy of $\\rho_c$ corresponds to the inter-species entanglement $S_{cd} = -\\operatorname{tr}\\rho_c\\log\\rho_c$. We study its distribution and average value among eigenstates, obtaining an average entropy $S_{cd}\\approx 0.85$ for both system sizes. Furthermore, the eigenvalues of $\\rho_{c\/d}$, $w_{c\/d,i}$, sorted by decreasing value, can be interpreted as occupation numbers of single-particle orbitals~\\cite{Bardarson2015,Bardarson2017}. In the case of an MBL system, particles are expected to sit on almost localized sites; hence only the first $N_{c\/d}$ orbitals should be significantly occupied. In Figure~\\ref{Fig:spDM spectrum} we show the average, $\\langle \\cdot\\rangle$, and log-average, $\\overline{x} = e^{\\langle\\ln x\\rangle}$, values of the ordered eigenvalues for $\\rho_d$ and $\\rho_c$ respectively. The scaling of $\\langle w_{d,i}\\rangle$ presents a step behavior, consistent with the localization of the $d$-bosons. Similarly, the typical value $\\overline{w}_{c,i}$ presents a single large eigenvalue, while the rest decay exponentially.\n\n\n\\subsection{Entanglement structure of eigenstates}\n\n \n\nIn Figure~\\ref{Fig:S DMRG}(a) we show the bipartite entanglement profile averaged as a function of the position of the cut $i$. 
While the entanglement profile of eigenstates suggests the area-law entanglement scaling characteristic of many-body localization~\\cite{Serbyn2013,Bauer2013}, this representation of the data does not show any effect of the presence of the clean boson. To observe the influence of the $c$-boson, we average the entanglement profile defined with respect to the distance from the localization site of the $c$-boson, $i-i_\\text{max}$. Figure~\\ref{Fig:S DMRG}(b) reveals a peak in $S(i-i_\\text{max})$ around zero, indicating that the clean boson is responsible for additional entanglement in eigenstates. Away from $i_\\text{max}$, the entanglement saturates to a constant value, revealing a clear area-law scaling. Fitting the decay of the entanglement away from the center with an exponential of the form $S_0+ce^{-|i-i_\\text{max}|\/\\zeta_S}$ allows us to extract the value $\\zeta_S\\approx4$, suggesting that the single $c$-boson is capable of generating non-trivial entanglement patterns on a scale larger than its localization length. \n\nThe increase in the entanglement of eigenstates, which decays exponentially with the distance from the location of the $c$-boson, provides further support for the localized nature of the eigenstates. Moreover, this provides a complementary view of the picture of the inhomogeneous effective interaction triggered by the presence of the $c$-boson and resulting in the dynamical ``propagation of MBL'' presented in~\\cite{Brighi2021a}. \n\n\n\\section{Discussion}\n\nWe investigated the effect of coupling a small local bath, represented by a single clean boson, to a one-dimensional Anderson insulator with finite particle density. First, we used a static Hartree approximation, which overestimates localization, to obtain the parameter range most favorable for localization. 
In addition, we analyzed the effect of two-particle resonances triggered by the interaction, estimating the region of parameters where localization is perturbatively stable with respect to such resonances. \n\nFocusing on the parameter values where the perturbative criterion predicts localization, we studied the dynamics of the coupled system. We demonstrated that the time-dependent Hartree approximation, which neglects the entanglement between the localized particles and the degree of freedom representing the bath, predicts a slow delocalization of the system consistent with diffusion. In contrast, the TEBD numerical simulation, which takes into account all quantum correlations between the two bosonic species, shows evidence of localization within the experimentally relevant timescales achieved. In particular, the imbalance of the disordered particles saturates, suggesting that memory of the initial density-wave configuration is retained even at long times. In addition, the simulations reveal a logarithmic growth of the configuration entanglement, in agreement with MBL phenomenology, and an extremely slow growth of the particle number entanglement, ascribed to the weak relaxation of the disordered particles due to the interaction with the small bath. In agreement with Ref.~\\cite{Brighi2021a}, we observed an enhanced effect of the bath on the $d$-bosons in the center of the system, while far from it the difference from the Anderson case is negligible. \n\nFinally, in order to probe the fate of localization at infinite times, we used the DMRG-X method to extract highly excited eigenstates of the Hamiltonian. We observe that the clean boson remains localized in eigenstates, confirming the intuition gained from the study of the dynamics. We further notice an area-law scaling of the entanglement entropy of eigenstates, with an enhancement in the vicinity of the localization site of the $c$-boson, $i_\\text{max}$. 
These results provide evidence for the effective stability of localization even at infinite times and further support for the picture proposed in the accompanying work~\\cite{Brighi2021a}.\n\nIn summary, our work provides evidence for the MBL proximity effect and a phenomenological picture of the dynamics~\\cite{Brighi2021a} for large systems that are beyond the reach of exact numerical diagonalization. Our predictions are readily verifiable in state-of-the-art experimental setups~\\cite{Rubio-Abadal2019,Leonard2020} that are capable of probing particle dynamics with single-site resolution. \n\nHowever, there remain many open questions. Describing a more ``powerful'' thermal bath requires a finite density of $c$-bosons. The persistence of the phenomenology proposed here in such a scenario is non-trivial, and its study can lead to a better understanding of the MBL proximity effect. In the same direction, a careful investigation of the role of the interaction strength is needed for an accurate description of MBL-bath systems. In particular, in the opposite limit of weak interaction the $c$-boson is expected to delocalize quickly, yielding a very weak and uniform effective coupling. This regime can provide a different point of view on the MBL-bath model. A further interesting direction would be to address the stability of our predictions to intra-species interactions, i.e.~replacing the Anderson insulator Hamiltonian $\\hat{H}_d$ with an MBL Hamiltonian. Although we expect the qualitative picture discussed above to hold, a different entanglement dynamics is expected to arise in this case. Finally, replacing the Hamiltonian evolution with a periodic Floquet drive may allow the investigation of the long-time behavior of the model, since it would avoid the large number of singular value decompositions required for reaching long times with the TEBD algorithm. 
In this framework, the saturation of the measured quantities could be observed, thus providing insight into the infinite-time fate of the system.\n\n\\begin{acknowledgments}\nWe thank M.~Ljubotina for insightful discussions. P.~B., A.~M.\\ and M.~S.\\ acknowledge support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No.~850899). D.~A.~was supported by the Swiss National Science Foundation and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No.~864597). The development of parallel TEBD code was supported by S.~Elefante from the Scientific Computing (SciComp) that is part of Scientific Service Units (SSU) of IST Austria. Some of the computations were performed on the Baobab cluster of the University of Geneva.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nConditional independence is a crucial notion that facilitates efficient inference and parameter learning in probabilistic models. Its logical and algorithmic properties as well as its graphical representations have led to the advent of graphical models as a discipline within artificial intelligence~\\cite{Koller:2009}. The notion of finite (partial) exchangeability~\\cite{Diaconis:1980}, on the other hand, has not yet been explored as a basic building block for tractable probabilistic models. A sequence of random variables is exchangeable if its distribution is invariant under variable permutations. Similar to conditional independence, partial exchangeability, a generalization of exchangeability, can reduce the complexity of parameter learning and is a concept that facilitates high tree-width graphical models with tractable inference. 
For instance, the graphical models (a)-(c) with Bernoulli variables in Figure~\\ref{fig:sym} depict typical low tree-width models based on the notion of (conditional) independence. Graphical models (d)-(f) have high tree-width but are tractable if we assume the variables with identical shades to be exchangeable. \nWe will see that EVMs are especially beneficial for high-dimensional and sparse domains such as text and collaborative filtering problems.\nWhile there exists work on tractable models, with a majority focusing on low tree-width graphical models, a framework for finite partial exchangeability as a basic building block of \\emph{tractable} probabilistic models seems natural but does not yet exist. \n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=0.475\\textwidth]{sym}\n\\caption{\\label{fig:sym} Illustration of low tree-width models exploiting independence (a)-(c) and exchangeable variable models (EVMs) exploiting finite exchangeability (variable nodes with identical shades are exchangeable) (d)-(f).}\n\\end{center}\n\\end{figure}\n\nWe propose exchangeable variable models (EVMs), a novel family of probabilistic models for classification and probability estimation. While most probabilistic models are built on the notion of conditional independence and its graphical representation, EVMs have finite partially exchangeable sequences as basic components. We show that EVMs can represent complex positive and negative correlations between large sets of variables with few parameters and without sacrificing tractable inference. \nThe parameters of EVMs are estimated under the maximum-likelihood principle and we assume the examples to be independent and identically distributed.\nWe develop methods for efficient probabilistic inference, maximum-likelihood estimation, and structure learning.\n\nWe introduce the mixtures of EVMs (MEVMs) family of models which is strictly more expressive than the naive Bayes family of models but as efficient to learn. 
MEVMs represent classifiers that are optimal under zero-one loss for a large class of Boolean functions including parity and threshold functions. Extensive experiments show that exchangeable variable models, when combined with the notion of conditional independence, are effective both for classification and probability estimation. The MEVM classifier significantly outperforms state-of-the-art classifiers on numerous high-dimensional and sparse data sets. MEVMs also outperform several tractable graphical model classes on typical probability estimation problems while being orders of magnitude more efficient.\n\n\\section{Background}\n\nWe begin by reviewing the statistical concepts of finite exchangeability and finite partial exchangeability.\n\n\\subsection{Finite Exchangeability}\n\nFinite exchangeability is best understood in the context of a finite sequence of binary random variables such as a finite number of coin tosses. Here, finite exchangeability means that it is only the number of heads that matters and not their particular order. Since exchangeable variables are not necessarily independent, finite exchangeability can model highly correlated variables, a graphical representation of which would be the fully connected graph with high tree-width (see Figure~\\ref{fig:sym}(d)). However, as we will later see, the number of parameters and the complexity of inference remain linear in the number of variables.\n\n\\begin{definition}[Exchangeability]\n\\label{full-exch}\nLet $X_1, ...,X_n$ be a sequence of random variables with joint distribution $P$ and let $S(n)$ be the group of all permutations acting on $\\{1, ..., n\\}$. We say that $X_1, ...,X_n$ is exchangeable if $P(X_1, ...,X_n) = P(X_{\\pi(1)}, ... ,X_{\\pi(n)})$\nfor all $\\pi \\in S(n)$.\n\\end{definition}\n\nIn this paper, we are concerned with exchangeable \\emph{variables} and iid \\emph{examples}. 
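As a minimal numerical check of the definition of exchangeability above (not from the paper), consider three draws without replacement from an urn containing two black balls and one white ball: the resulting variables are exchangeable yet negatively correlated, so they are not independent.

```python
from itertools import permutations, product

# Joint distribution over three binary variables given by three draws without
# replacement from the multiset {1, 1, 0}: every ordering with exactly two
# ones is equally likely, all other assignments are impossible.
def urn_prob(x):
    return 1.0 / 3.0 if sum(x) == 2 else 0.0

# Exchangeability: P is invariant under every permutation of the indices.
for x in product([0, 1], repeat=3):
    for pi in permutations(range(3)):
        assert urn_prob(x) == urn_prob(tuple(x[i] for i in pi))

# The variables are NOT independent: P(X1=1) = 2/3, but
# P(X1=1, X2=1, X3=1) = 0, which differs from (2/3)**3.
p_x1 = sum(urn_prob((1, a, b)) for a in (0, 1) for b in (0, 1))
print(p_x1)                  # 0.6666666666666666
print(urn_prob((1, 1, 1)))   # 0.0
```

This is exactly the kind of correlated yet permutation-invariant distribution that a fully connected graphical model would need high tree-width to represent.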
The literature has mostly focused on exchangeability of an \\emph{infinite} sequence of random variables. In this case, one can express the joint distribution as a mixture of iid sequences~\\cite{finetti:1938}. However, for finite sequences of exchangeable variables this representation is inadequate -- while finite exchangeable sequences can be approximated with de Finetti style mixtures of iid sequences, these approximations are not suitable for finite sequences of random variables not extendable to an infinite exchangeable sequence~\\cite{DnF:1980}. Moreover, negative correlations can only be modeled in the finite case. There are interesting connections between the automorphisms of graphical models and finite exchangeability~\\cite{niepertorbits}. An alternative approach to exchangeability considers its relationship to sufficiency~\\cite{Diaconis:1980,lauritzen:1984} which is at the core of our work.\n\n\\subsection{Finite Partial Exchangeability}\n\nThe assumption that all variables of a probabilistic model are exchangeable is often too strong. Fortunately, finite exchangeability can be generalized to the concept of finite partial exchangeability using the notion of a statistic. \n\n\\begin{definition}[Partial Exchangeability]\nLet $X_1, ...,X_n$ be a sequence of random variables with distribution $P$, let $\\mathbf{Val}(X_i)$ be the domain of $X_i$, and let $\\mathcal{T}$ be a finite set. 
The sequence $X_1, ...,X_n$ is partially exchangeable with respect to the statistic $T: \\mathbf{Val}(X_1)\\times ...\\times \\mathbf{Val}(X_n)\\rightarrow \\mathcal{T}$ if\n$$T(\\mathbf{x})=T(\\mathbf{x'}) \\mbox { implies } P(\\mathbf{x})=P(\\mathbf{x'}),$$ where $\\mathbf{x}$ and $\\mathbf{x'}$ are assignments to the sequence of random variables $X_1,...,X_n$.\n\\end{definition}\n\nThe following theorem states that the joint distribution of a sequence of random variables, which is partially exchangeable with respect to a statistic $T$, is a unique mixture of uniform distributions.\n\n\\begin{theorem}\\emph{\\cite{Diaconis:1980}}\n\\label{theorem-param-pe}\nLet $X_1, ...,X_n$ be a sequence of random variables with distribution $P$, let $\\mathcal{T}$ be a finite set, and let $T: \\mathbf{Val}(X_1)\\times ...\\times \\mathbf{Val}(X_n)\\rightarrow \\mathcal{T}$ be a statistic. Moreover, let $S_t = \\{\\mathbf{x} \\in \\mathbf{Val}(X_1)\\times ...\\times \\mathbf{Val}(X_n) \\mid T(\\mathbf{x})=t\\}$, let $U_t$ be the uniform distribution on $S_t$, and let $w_t = P(S_t)$. If $X_1, ...,X_n$ is partially exchangeable with respect to $T$, then\n\\begin{equation}\n\\label{equation-paramterization}\nP(\\mathbf{x}) = \\sum_{t \\in \\mathcal{T}} w_t U_t(\\mathbf{x}).\n\\end{equation}\n\\end{theorem}\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.37\\textwidth]{urnmodel}\n\\caption{\\label{fig:urn} A finite sequence of exchangeable variables can be parameterized as a unique mixture of urn processes. Each such urn process is a series of draws without replacement. }\n\\end{center}\n\\end{figure}\n\nThe theorem provides an implicit description of the distributions $U_t$. The challenge for specific families of random variables lies in finding a statistic $T$ with respect to which a sequence of variables is partially exchangeable and an efficient algorithm to compute the probabilities $U_t(\\mathbf{x})$. 
For the case of exchangeable sequences of discrete random variables and, in particular, exchangeable sequences of binary random variables, an explicit description does exist and is well-known in the statistics literature \\cite{Diaconis:1980,Stefanescu:2003}.\n\n\\begin{example}\n\\label{example-exch}\nLet $X_1,X_2,X_3$ be three exchangeable binary variables with joint distribution $P$. Then, the sequence $X_1,X_2,X_3$ is partially exchangeable with respect to the statistic $T: \\{0,1\\}^3 \\rightarrow \\mathcal{T} = \\{0,1,2,3\\}$ with $T(\\mathbf{x}=(x_1,x_2,x_3)) = x_1+x_2+x_3.$ Thus, we can write \n$$P(\\mathbf{x}) = \\sum_{t \\in \\mathcal{T}} w_t U_t(\\mathbf{x}),$$ where $w_t = P(T(\\mathbf{x}) = t)$, $U_t(\\mathbf{x}) = [[ T(\\mathbf{x})=t]]\\binom{3}{t}^{-1}$, and $[[ \\cdot ]]$ is the indicator function.\nHence, the distribution can be parameterized as a unique mixture of four urn processes, where $T$'s value is the number of black balls. Figure~\\ref{fig:urn} illustrates the mixture model. The generative process is as follows. First, choose one of the four urns according to the mixing weights $w_t$; then draw three consecutive balls from the chosen urn without replacement.\n\\end{example}\n\n\n\n\\section{Exchangeable Variable Models}\n\nWe propose exchangeable variable models (EVMs) as a novel family of tractable probabilistic models for classification and probability estimation. While probabilistic graphical models are built on the notion of (conditional) independence and its graphical representation, EVMs are built on the notion of finite (partial) exchangeability. EVMs can model both \\emph{negative} and \\emph{positive} correlations in what would be high tree-width graphical models without losing tractability of probabilistic inference. 
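As a concrete illustration, the urn-mixture parameterization of the example above can be evaluated directly. The mixing weights below are illustrative values, not taken from the paper.

```python
from itertools import product
from math import comb

# Mixture-of-urns parameterization for three exchangeable binary variables:
# P(x) = sum_t w_t * U_t(x), with T(x) the number of ones in x.
w = [0.1, 0.2, 0.3, 0.4]           # illustrative weights w_t, t = 0..3

def P(x):
    t = sum(x)                      # statistic T(x)
    return w[t] / comb(3, t)        # U_t(x) = [[T(x)=t]] * C(3,t)^{-1}

# The mixture defines a valid distribution ...
total = sum(P(x) for x in product([0, 1], repeat=3))
print(total)                        # 1.0 up to float rounding

# ... that depends on x only through T(x), hence is exchangeable:
assert P((1, 0, 0)) == P((0, 1, 0)) == P((0, 0, 1))
```

Note that the number of parameters is $|\mathcal{T}| = 4$, i.e.\ linear in the number of variables, even though the variables may be strongly correlated.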
\n\nThe basic components of EVMs are tuples $(\\mathbf{X}, T)$ where $\\mathbf{X}$ is a sequence of \\emph{discrete} random variables partially exchangeable with respect to the statistic $T$ with values $\\mathcal{T}$.\n\n\n\n\\subsection{Probabilistic Inference}\n\nWe can relate finite partial exchangeability to tractable probabilistic inference (see also \\cite{Niepert:2014}). We assume that for every joint assignment $\\mathbf{x}$, $P(\\mathbf{x})$ can be computed in time $\\mathbf{poly}(|\\mathbf{X}|)$.\n\n\\begin{prop}\n \\label{prop-eff}\n Let $\\mathbf{X}$ be partially exchangeable with respect to the statistic $T$ with values $\\mathcal{T}$, let $|\\mathcal{T}| = \\mathbf{poly}(|\\mathbf{X}|)$, and let, for any partial assignment $\\mathbf{e}$, $ S_{t,\\mathbf{e}} := \\left\\{\\mathbf{x} \\mid T(\\mathbf{x}) = t \\text{ and } \\mathbf{x} \\sim \\mathbf{e} \\right\\},$ where $\\mathbf{x} \\sim \\mathbf{e}$ denotes that $\\mathbf{x}$ and $\\mathbf{e}$ agree on the variables in their intersection~\\cite{Koller:2009}.\n If we can in time $\\mathbf{poly}(|\\mathbf{X}|)$,\n \\begin{enumerate}\n \\vspace{-2mm}\n \\item[(1)] for every $\\mathbf{e}$ and every $t \\in \\mathcal{T}$, decide if there exists an $\\mathbf{x} \\in S_{t,\\mathbf{e}}$ and, if so, construct such an $\\mathbf{x}$, \n \\end{enumerate}\n then the complexity of MAP inference, that is, computing $\\argmax_{\\mathbf{y}} P(\\mathbf{y},\\mathbf{e})$ for any partial assignment $\\mathbf{e}$, is $\\mathbf{poly}(|\\mathbf{X}|)$.\n If, in addition, we can in time $\\mathbf{poly}(|\\mathbf{X}|)$,\n \\begin{enumerate}\n \\vspace{-2mm}\n \t\\item[(2)] for every $\\mathbf{e}$ and every $t \\in \\mathcal{T}$, compute $|S_{t,\\mathbf{e}}|$,\n \\end{enumerate}\n then the complexity of marginal inference, that is, computing $P(\\mathbf{e})$ for any partial assignment $\\mathbf{e}$, is $\\mathbf{poly}(|\\mathbf{X}|)$.\n \\end{prop}\n\nProposition~\\ref{prop-eff} generalizes to probabilistic models where 
$P(\\mathbf{x})$ can only be computed up to a constant factor $Z$ such as undirected graphical models. Please note that computing conditional probabilities is tractable whenever conditions (1) and (2) are satisfied. We say a statistic is tractable if either of the conditions is fulfilled. \n\nProposition~\\ref{prop-eff} provides a theoretical framework for developing tractable non-local potentials.\nFor instance, for $n$ exchangeable Bernoulli variables, the complexity of MAP and marginal inference is polynomial in $n$. This follows since the statistic $T$ satisfies conditions (1) and (2) and $|\\mathcal{T}|=n+1$.\nRelated work on cardinality-based potentials has mostly focused on MAP inference~\\cite{Gupta:2007,tarlow2010hop}. Finite exchangeability also speaks to marginal inference via the tractability of computing $U_t(\\mathbf{e}) = |S_{t,\\mathbf{e}}|^{-1}$. EVMs can model unary potentials using singleton sets of exchangeable variables. \nWhile not all instances of finite partial exchangeability result in tractable probabilistic models, there exist several examples satisfying conditions (1) and (2) which go beyond finite exchangeability. In the supplementary material, in addition to the proofs of all theorems and propositions, we present examples of tractable statistics that are different from those associated with cardinality-based potentials~\\cite{Gupta:2007,tarlow2010hop,tarlow2012fast,Bui:2012}.\n\n\n\n\\subsection{Parameter Learning}\n\nThe parameters of finite sequences of partially exchangeable variables are the mixture weights of the parameterization given in Equation~\\ref{equation-paramterization} of Theorem~\\ref{theorem-param-pe}. Estimating the parameters of these basic components of EVMs is a crucial task. We derive the maximum-likelihood estimates for these mixture weight vectors. 
\n\n\n\\begin{theorem}\n\\label{theorem-mle}\nLet $X_1, ...,X_n$ be a sequence of random variables with joint distribution $P$, let $T$ be a statistic with distinct values $t_0,...,t_k$, and let $X_1,...,X_n$ be partially exchangeable with respect to $T$. The ML estimates for $N$ examples, $\\mathbf{x}^{(1)}, ..., \\mathbf{x}^{(N)}$, are $\\mathtt{MLE}[(w_0,...,w_k)] = \\left(\\frac{c_0}{N}, ..., \\frac{c_k}{N}\\right)$, where $c_i = \\sum_{j=1}^{N} [[T\\left(\\mathbf{x}^{(j)}\\right)=t_i]]$.\n\\end{theorem}\n\nHence, the statistical parameters to be estimated are identical to the statistical parameters of a multinomial distribution with $|\\mathcal{T}|$ distinct categories.\n\n\\subsection{Structure Learning}\n\nLet $\\mathbf{\\hat{X}}$ be a sequence of random variables and let $\\mathbf{\\hat{x}}^{(1)}, ..., \\mathbf{\\hat{x}}^{(N)}$ be $N$ iid examples drawn from the data-generating distribution.\nIn order to learn the structure of EVMs we need to address two problems. \n\n\\textbf{Problem 1:} Find subsequences $\\mathbf{X} \\subseteq \\mathbf{\\hat{X}}$ that are exchangeable with respect to a given tractable statistic $T$. This identifies individual EVM components $(\\mathbf{X}, T)$ for which tractable inference and learning is possible. We may utilize different tractable statistics for different components. \n\n\\textbf{Problem 2:} Construct graphical models whose potentials are the previously learned tractable EVM components. In order to preserve tractability of the global model, we have to restrict the class of possible graphical structures.\n\nWe now present approaches to these two problems that learn expressive EVMs while maintaining tractability.\n\n\nLet us first address \\textbf{Problem 1}. We focus on EVMs with finitely exchangeable components. Fortunately, there exist several \\emph{necessary} conditions for finite exchangeability (see Definition~\\ref{full-exch}) of a sequence of random variables. 
\n\n\\begin{proposition}\n\\label{prop-criteria}\nThe following statements are necessary conditions for exchangeability of a finite sequence of random variables $X_1,...,X_n$. For all $i,j,i',j' \\in \\{1,...,n\\}$ with $i\\neq j$ and $i' \\neq j'$ \n\\begin{enumerate}\n\\item[(1)] $\\mathtt{\\mathbf{E}}(X_i)=\\mathtt{\\mathbf{E}}(X_j)$; \n\\item[(2)] $\\mathtt{\\mathbf{Var}}(X_i)=\\mathtt{\\mathbf{Var}}(X_j)$; and\n\\item[(3)] $\\mathtt{\\mathbf{Cov}}(X_i,X_j) = \\mathtt{\\mathbf{Cov}}(X_{i'},X_{j'}) \\geq - \\frac{\\mathtt{\\mathbf{Var}}(X_i)}{(n-1)}$.\n\\end{enumerate}\n\\end{proposition}\n\nThe necessary conditions can be exploited to assess whether a sequence of variables is finitely exchangeable. In order to learn EVM components $(\\mathbf{X}, T)$ we assume that a sequence of variables is exchangeable unless a statistical test contradicts some or all of the necessary conditions for finite exchangeability. For instance, if a statistical test deemed the expectations $\\mathtt{\\mathbf{E}}(X)$ and $\\mathtt{\\mathbf{E}}(X')$ for two variables $X$ and $X'$ identical, we could assume $X$ and $X'$ to be exchangeable. If we wanted the statistical test for finite exchangeability to be more specific and less sensitive, we would also require conditions (2) and\/or (3) to hold. Please note the analogy to structure learning with conditional independence tests. Instead of identifying (conditional) independencies we identify finite exchangeability among random variables. For a sequence of identically distributed variables the assumption of exchangeability is \\emph{weaker} than that of independence. \nTesting whether two discrete variables have identical mean and variance is efficient algorithmically. Of course, the application of the necessary conditions for finite exchangeability is only one possible approach to learning the components of EVMs. \n\nLet us now turn to \\textbf{Problem 2}. 
To ensure tractability, the global graphical structure has to be restricted to tractable classes such as chains and trees. Here, we focus on mixture models where, conditioned on the values of the latent variable, $\\mathbf{\\hat{X}}$ is partitioned into exchangeable blocks (see Figure~\\ref{fig:spectrum}). Hence, for each value $y$ of the latent variable, we perform the statistical tests of \\textbf{Problem 1} with estimates of the conditional expectations $\\mathtt{\\mathbf{E}}(X \\mid y)$. We introduce this class of EVMs in the next section and leave more complex structures to future work.\n\nIn the context of longitudinal studies and repeated-measures experiments, where an observation is made at different times and under different conditions, there exist several models taking into account the correlation between these observations and assuming identical or similar covariance structure for subsets of the variables~\\cite{Jennrich:1986}. These compound symmetry models, however, do not make the assumption of exchangeability and, therefore, do not generally facilitate tractable inference. Nevertheless, finite exchangeability can be seen as a form of parameter tying, a method that has also been applied in the context of hidden Markov models, neural networks~\\cite{Rumelhart:1986} and, most notably, statistical relational learning~\\cite{Getoor:2007}. Collective graphical models~\\cite{Sheldon:2011} (CGMs) and high-order potentials~\\cite{tarlow2010hop,tarlow2012fast} (HOPs) are models based on non-local potentials. Proposition~\\ref{prop-criteria} can be applied for learning the structure of novel tractable instances of CGMs and HOPs.\n\n\n\n\\section{Exchangeable Variable Models for Classification and Probability Estimation}\n\nWe are now in the position to design model families that combine the notions of (partial) exchangeability with that of (conditional) independence. 
Instead of specifying a structure that solely models the (conditional) independence characteristics of the probabilistic model, EVMs also specify sequences of variables that are (partially) exchangeable. The previous results provide the necessary tools to learn both the structure and parameters of partially exchangeable sequences and to perform tractable probabilistic inference. \n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.48\\textwidth]{spectrum}\n\\caption{\\label{fig:spectrum} The combination of exchangeable and independent variables leads to a spectrum of models. On the one end is the model where, conditioned on the class, all variables are independent (but possibly not identically distributed; left). On the other end is the model where, conditioned on the class, all variables are exchangeable (but possibly correlated; right). The partition of the variables into exchangeable blocks can vary with the class value.}\n\\end{center}\n\\end{figure}\n\nThe possibilities for building families of exchangeable variable models (EVMs) are vast. Here, we focus on a family of mixtures of EVMs generalizing the widely used naive Bayes model. The family of probabilistic models is therefore also related to research on extending the naive Bayes classifier~\\cite{domingos:1997,rennie:2003}. \nThe motivation behind this novel class of EVMs is that it facilitates \\emph{both} tractable maximum-likelihood learning \\emph{and} tractable probabilistic inference.\n\nIn line with existing work on mixture models, we derive the maximum-likelihood estimates for the fully observed setting, that is, when there are no examples with missing class labels. 
We also discuss the expectation maximization (EM) algorithm for the case where the data is partially observed, that is, when examples with missing class labels exist.\n\n\\begin{definition}[Mixture of EVMs]\nThe mixture of EVMs (MEVM) model consists of a class variable $Y$ with $k$ possible values, a set of binary attributes $\\mathbf{\\hat{X}}=\\{X_1, ..., X_n\\}$ and, for each $y \\in \\{1, ..., k\\}$, a set $\\mathcal{X}_y$ specifying a partition of the attributes into blocks of exchangeable sequences. The structure of the model, therefore, is defined by $\\mathcal{X} = \\{\\mathcal{X}_i\\}_{i=1}^{k}$, the set of attribute partitions, one for each class value. The model has the following parameters:\n\n\\begin{enumerate}\n\\item A parameter $p(y)$ for every $y \\in \\{1, ..., k\\}$ specifying the prior probability\nof seeing class value $y$. \n\\item A parameter $q_{(\\mathbf{X})}(\\ell \\mid y)$ for every $y \\in \\{1, ..., k\\}$, every $\\mathbf{X} \\in \\mathcal{X}_y$, and every $\\ell \\in \\{0, 1, ..., |\\mathbf{X}|\\}$. The value of $q_{(\\mathbf{X})}(\\ell \\mid y)$ is the probability of the exchangeable sequence $\\mathbf{X} \\subseteq \\mathbf{\\hat{X}}$ having an assignment with $\\ell$ ones, conditioned on the class label being~$y$. \n\\end{enumerate}\nLet $\\mathtt{n}_{\\mathbf{X}}(\\mathbf{\\hat{x}})$ be the number of $1$s in the joint assignment $\\mathbf{\\hat{x}}$ projected onto the variable sequence $\\mathbf{X} \\subseteq \\mathbf{\\hat{X}}$. The probability for every $y, \\mathbf{\\hat{x}}= (x_1, ..., x_n)$ is then defined as $$\\mathtt{\\textbf{P}}(y, \\mathbf{\\hat{x}}) = p(y)\\prod_{\\mathbf{X} \\in \\mathcal{X}_y} q_{(\\mathbf{X})}(\\mathtt{n}_{\\mathbf{X}}(\\mathbf{\\hat{x}}) \\mid y)\\binom{|\\mathbf{X}|}{\\mathtt{n}_{\\mathbf{X}}(\\mathbf{\\hat{x}})}^{-1}.$$\n\\end{definition}\n\n\nHence, conditioned on the class, the attributes are partitioned into mutually independent and disjoint blocks of exchangeable sequences. 
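The joint probability in the definition above can be evaluated directly. The following sketch uses a hypothetical two-class model over four binary attributes; the partitions and all parameter values are illustrative, not taken from the paper.

```python
from math import comb

# Hypothetical MEVM: 2 classes, 4 binary attributes.  For class 0 the
# attributes form one fully exchangeable block; for class 1 they split into
# two blocks of two.  All numbers are illustrative.
p_y = [0.5, 0.5]
# blocks[y] is the partition X_y; q[y][b][l] = q_(X)(l | y) for block b.
blocks = [[(0, 1, 2, 3)], [(0, 1), (2, 3)]]
q = [
    [[0.1, 0.2, 0.3, 0.3, 0.1]],             # class 0: l = 0..4
    [[0.2, 0.5, 0.3], [0.6, 0.3, 0.1]],      # class 1: l = 0..2 per block
]

def joint(y, x):
    """P(y, x) = p(y) * prod_X q_(X)(n_X(x) | y) / C(|X|, n_X(x))."""
    prob = p_y[y]
    for b, block in enumerate(blocks[y]):
        n = sum(x[i] for i in block)         # n_X(x): ones within the block
        prob *= q[y][b][n] / comb(len(block), n)
    return prob

# MAP classification: argmax_y P(y | x) = argmax_y P(y, x).
x = (1, 0, 1, 1)
print(max(range(2), key=lambda y: joint(y, x)))   # 0
```

Because each block contributes only through its count of ones, evaluating the joint probability costs time linear in the number of attributes.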
\nFigure~\\ref{fig:spectrum} illustrates the model family with the naive Bayes model being positioned on one end of the spectrum. Here, $\\mathcal{X}_y = \\{\\{X_1\\},...,\\{X_n\\}\\}$ for all $y \\in \\{1, ..., k\\}$. On the other end of the spectrum is the model that assumes full exchangeability conditioned on the class. Here, $\\mathcal{X}_y = \\{\\{X_1,...,X_n\\}\\}$ for all $y \\in \\{1, ..., k\\}$. For binary attributes, the number of \\emph{free} parameters is $k + kn - 1$ for \\emph{each} member of the MEVM family. The following theorem provides the maximum-likelihood estimates for these parameters.\n \n\\begin{theorem}\n\\label{theorem-em-mevm}\nThe maximum-likelihood estimates for a MEVM with attributes $\\mathbf{\\hat{X}}$, structure $\\mathcal{X} = \\{\\mathcal{X}_i\\}_{i=1}^{k}$, and a sequence of examples $\\left( y^{(i)},\\mathbf{\\hat{x}}^{(i)}\\right), 1 \\leq i \\leq N,$ are\n$$p(y) = \\frac{\\sum_{i=1}^N [[y^{(i)} = y]]}{N}$$ and, for each $y$ and each $\\mathbf{X} \\in \\mathcal{X}_y$,\n$$q_{(\\mathbf{X})}(\\ell \\mid y) = \\frac{\\sum_{i=1}^{N}[[y^{(i)} = y \\mbox{ and } \\mathtt{n}_{\\mathbf{X}}\\hspace{-1mm}\\left(\\mathbf{\\hat{x}}^{(i)}\\right) = \\ell]]}{\\sum_{i=1}^{N}[[y^{(i)}=y]]}.$$\n\\end{theorem}\n\nWe utilize MEVMs for classification problems by learning the parameters and computing the MAP state of the class variable conditioned on assignments to the attribute variables. For probability estimation the class is latent and we can apply Algorithm~\\ref{algorithm-em}. \nThe expectation maximization (EM) algorithm is initialized by assigning random examples to the mixture components. In each EM iteration, the examples are fractionally assigned to the components, and the block structure and parameters are updated. Finally, either the previous or current structure is chosen based on the maximum likelihood. 
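The closed-form maximum-likelihood estimates above amount to simple counting. A minimal sketch for the fully observed case, assuming a single exchangeable block per class; the toy data and block structure are illustrative only.

```python
from collections import Counter

# Toy fully labeled data: (class label, attribute assignment) pairs.
data = [(0, (1, 1, 0)), (0, (1, 0, 1)), (0, (0, 0, 0)), (1, (1, 1, 1))]
block = (0, 1, 2)   # one exchangeable block covering all three attributes

N = len(data)
class_counts = Counter(y for y, _ in data)
pair_counts = Counter((y, sum(x[i] for i in block)) for y, x in data)

# p(y) = (# examples with label y) / N
p = {y: c / N for y, c in class_counts.items()}
# q_(X)(l | y) = (# label-y examples with n_X = l) / (# label-y examples);
# (y, l) pairs never seen in the data get estimate 0 and are omitted here.
q = {(y, l): pair_counts[(y, l)] / class_counts[y]
     for (y, l) in pair_counts}

print(p)           # {0: 0.75, 1: 0.25}
print(q[(0, 2)])   # 2/3: two of the three class-0 examples have two ones
```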
For the structure learning step we can, for instance, apply conditions from Proposition~\\ref{prop-criteria} where we use the conditional expectations $\\mathtt{\\mathbf{E}}(X_j \\mid y)$, estimated by $\\sum_{i=1}^{N}x_j^{(i)}\\delta(y \\mid i)\/N$, for the statistical tests to construct $\\mathcal{X}_y$. Since the new structure is chosen from a set containing the structure from the previous EM iteration, the convergence of Algorithm~\\ref{algorithm-em} follows from that of structural expectation maximization~\\cite{Friedman:1998}. \n\n\\begin{algorithm}[t]\n\\floatname{require}{Initialize}\n\\caption{Expectation Maximization for MEVMs}\n\\label{algorithm-em}\n\\begin{algorithmic}\n\\STATE \\textbf{Input:} The number of classes $k$. Training examples $\\langle \\mathbf{\\hat{x}}^{(i)}=(x_1^{(i)},...,x_n^{(i)})\\rangle, 1 \\leq i \\leq N$. A parameter specifying a stopping criterion.\n\\STATE \\textbf{Initialization:} Assign $\\lfloor N\/k\\rfloor$ random examples to each mixture component. For each class value $y \\in \\{1,...,k\\}$, partition the $n$ variables into exchangeable sequences $\\mathcal{X}^{(0)}_y$, and compute $p^{(0)}(y)$ and $q^{(0)}_{(\\mathbf{X})}(\\ell \\mid y)$ for each $\\mathbf{X} \\in \\mathcal{X}^{(0)}_y$ and $0 \\leq \\ell \\leq |\\mathbf{X}|$ using Theorem~\\ref{theorem-em-mevm}.\n\\STATE \\textbf{Iterate:} \\ \\ until stopping criterion is met\n\\STATE \\ \\ 1. For $i=1,...,N$ and $y = 1,...,k$ compute\n\\STATE $$\\delta(y \\mid i) = \\frac{\\mathtt{\\textbf{P}}^{(t-1)}\\hspace{-1mm}\\left(y,\\mathbf{\\hat{x}}^{(i)}\\right)}{\\sum_{j=1}^{k}\\mathtt{\\textbf{P}}^{(t-1)}\\hspace{-1mm}\\left(j,\\mathbf{\\hat{x}}^{(i)}\\right)}.$$\n\\STATE \\ \\ 2. For each $y \\in \\{1, ..., k\\}$, partition the variables \n\\STATE \\ \\ \\ \\ \\ \\ into blocks of exchangeable sequences $\\mathcal{X}^{(t)}_y$.\n\\STATE \\ \\ 3. 
Update parameters for both $\\mathcal{X}_y^{(t-1)}$ and $\\mathcal{X}_y^{(t)}$:\n\\STATE $$p^{(t)}(y) = \\frac{\\sum_{i=1}^{N} \\delta(y \\mid i)}{N},$$\n$$q^{(t)}_{(\\mathbf{X})}(\\ell \\mid y) = \\frac{\\sum_{i=1}^{N}[[\\mathtt{n}_{\\mathbf{X}}\\hspace{-1mm}\\left(\\mathbf{\\hat{x}}^{(i)}\\right) = \\ell]] \\ \\delta(y \\mid i)}{\\sum_{i=1}^{N} \\delta(y \\mid i)}.$$\n\\STATE \\ \\ 4. Select the new block structure according to the\n\\STATE \\ \\ \\ \\ \\ \\ maximum log-likelihood on training examples.\n\\STATE \\textbf{Output:} Structure and parameter estimates.\n\\end{algorithmic}\n\\end{algorithm}\n\nA crucial question is how \\emph{expressive} the novel model family is. We provide an analytic answer by showing that MEVMs are globally optimal under zero-one loss for a large class of Boolean functions, namely, conjunctions and disjunctions of attributes and symmetric Boolean functions. \\emph{Symmetric Boolean functions} are Boolean functions whose value depends only on the number of ones in the input~\\cite{canteaut:2005}. The class includes (a) Threshold functions, whose value is $1$ on input vectors with $k$ or more ones for a fixed $k$; (b) Exact-value functions, whose value is $1$ on input vectors with $k$ ones for a fixed $k$; (c) Counting functions, whose value is $1$ on input vectors with the number of ones congruent to $k\\ \\mathtt{mod}\\ m$ for fixed $k, m$; and (d) Parity functions, whose value is $1$ if the input vector has an odd number of ones. \n\n\\begin{definition}\\cite{domingos:1997}\nThe Bayes rate for an example is the lowest zero-one loss achievable by any classifier on that example. A classifier is \\emph{locally optimal} for an example iff its zero-one loss on that example is equal to the Bayes rate. A classifier is \\emph{globally optimal} for a sample iff it is locally optimal for every example in that sample. 
A classifier is globally optimal for a problem iff it is globally optimal for all possible samples of that problem.\n\\end{definition}\n\n\n\nWe can now state the following theorem.\n\n\\begin{theorem}\n\\label{optimality}\nThe mixtures of EVMs family is globally optimal under zero-one loss for \n\\begin{enumerate}\n\\item Conjunctions and disjunctions of attributes;\n\\item Symmetric Boolean functions such as\n\\begin{itemize}\n\\item Threshold (m-of-n) functions \n\\item Parity functions\n\\item Counting functions\n\\item Exact value functions\n\\end{itemize}\n\\end{enumerate}\n\\end{theorem}\n\nTheorem~\\ref{optimality} is striking as the parity function and its special case, the XOR function, are instances of functions that are not linearly separable, which are often used as examples of particularly challenging classification problems. \nThe optimality for symmetric Boolean functions holds even for the model that assumes \\emph{full} exchangeability of the attributes given the value of the class variable (see Figure~\\ref{fig:spectrum}, right). It is known that the naive Bayes classifier is \\emph{not} globally optimal for threshold (m-of-n) functions despite them being linearly separable~\\cite{domingos:1997}. Hence, combining conditional independence and exchangeability leads to highly tractable probabilistic models that are globally optimal for a broader class of Boolean functions. \n\n\\section{Experiments} \n\nWe conducted extensive experiments to assess the efficiency and effectiveness of MEVMs as tractable probabilistic models for classification and probability estimation. A major objective is the comparison of MEVMs and naive Bayes models. We also compare MEVMs with several state-of-the-art classification algorithms. 
For the probability estimation experiments, we compare MEVMs to latent naive Bayes models and several widely used tractable graphical model classes such as latent tree models.\n\n\\begin{table}[t!]\n\\caption{\\label{table-property-class} Properties of the classification data sets and mean and standard deviation of the number of MEVM blocks.}\n\\small\n\\begin{center}\n\\begin{tabular}{|l||r|r|r|r|}\n\\hline \nData set & $|V|$ & Train & Test & Blocks \\\\ \n\\hline \n\\hline\nParity & 1,000 & $10^6$ & 10,000 & $1.3 \\pm 0.3$ \\\\\nCounting & 1,000 & $10^6$ & 10,000 & $1.9 \\pm 0.9$ \\\\\nM-of-n & 1,000 & $10^6$ & 10,000 & $2.4 \\pm 1.6$ \\\\\nExact & 1,000 & $10^6$ & 10,000 & $3.2 \\pm 2.1$ \\\\\n\\hline\n\\hline\n20Newsgrp & 19,726.1 & 1,131.4 & 753.2 & $19.2 \\pm 1.5$ \\\\ \nReuters-8 & 19,398.0 & 1,371.3 & 547.2 & $16.9 \\pm 9.1$\\\\ \nPolarity & 38,045.8 & 1,800.0 & 200.0 & $34.1 \\pm 0.7$ \\\\ \nEnron & 43,813.6 & 4,000.0 & 1,000.0 & $30.2 \\pm 6.0$ \\\\\nWebKB & 7,290.0 & 1,401.5 & 698.0 & $19.3 \\pm 3.6$ \\\\ \nMNIST & 784.0 & 12,000.0 & 2,000.0 & $72.3 \\pm 3.1$\\\\\n\\hline\n\\end{tabular} \n\\end{center}\n\\end{table}\n\n\n\\subsection{Classification}\n\nWe evaluated the MEVM classifier using both synthetic and real-world data sets. Each synthetic data set consists of $10^6$ training and $10000$ test examples. Let $\\mathtt{n}(\\mathbf{x})$ be the number of ones of the example $\\mathbf{x}$. The parity data was generated by sampling uniformly at random an example $\\mathbf{x}$ from the set $\\{0,1\\}^{1000}$ and assigning it to the first class if $\\mathtt{n}(\\mathbf{x})\\ \\mathtt{ mod }\\ 2 = 1$, and to the second class otherwise. For the $10$-of-$1000$ data set we assigned an example $\\mathbf{x}$ to the first class if $\\mathtt{n}(\\mathbf{x}) \\geq 10$, and to the second class otherwise. 
For the counting data set we assigned an example $\\mathbf{x}$ to the first class if $\\mathtt{n}(\\mathbf{x})\\ \\mathtt{ mod }\\ 5 = 3$, and to the second class otherwise. For the exact data set we assigned an example $\\mathbf{x}$ to the first class if $\\mathtt{n}(\\mathbf{x}) \\in \\{0,200,400,600,800,1000\\}$, and to the second class otherwise.\n\nWe used the \\textsc{SciKit} $0.14$\\footnote{http:\/\/scikit-learn.org\/} functions to load the 20Newsgroup train and test samples. \nWe removed headers, footers, and quotes from the training and test documents. This renders the classification problem more difficult and leads to significantly higher zero-one loss for all classifiers. For the Reuters-8 data set we considered only the Reuters-21578 documents with a single topic and the top $8$ classes that have at least one train and one test example. For the WebKB text data set we considered the classes $\\mathtt{project}$, $\\mathtt{course}$, $\\mathtt{faculty}$, and $\\mathtt{student}$. For all text data sets we used the binary bag-of-words representation resulting in feature spaces with up to $45000$ dimensions. \nFor the MNIST data set, a collection of hand-written digits, we set a feature value to $1$ if the original feature value was greater than $50$, and to $0$ otherwise. The polarity data set is a well-known sentiment analysis problem based on movie reviews~\\cite{Pang:2004}. The problem is to classify movie reviews as either positive or negative. We used the cross-validation splits provided by the authors. \nThe Enron spam data set is a collection of e-mails from the Enron corpus that was divided into spam and no-spam messages~\\cite{Metsis:2006}. Here, we applied randomized $100$-fold cross-validation. 
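The synthetic labeling rules described above can be sketched compactly. The following is a minimal Python reconstruction for illustration only; the function names are ours, not from the paper:

```python
def n_ones(x):
    """Number of ones in a 0/1 example."""
    return sum(x)

def parity_label(x):
    # First class iff n(x) mod 2 == 1.
    return 1 if n_ones(x) % 2 == 1 else 0

def m_of_n_label(x, m=10):
    # First class iff n(x) >= m (the paper uses 10-of-1000).
    return 1 if n_ones(x) >= m else 0

def counting_label(x):
    # First class iff n(x) mod 5 == 3.
    return 1 if n_ones(x) % 5 == 3 else 0

def exact_label(x, values=frozenset({0, 200, 400, 600, 800, 1000})):
    # First class iff n(x) takes one of the listed exact values.
    return 1 if n_ones(x) in values else 0
```

Training examples are then drawn uniformly at random from $\{0,1\}^{1000}$ and labeled by the corresponding rule.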
We did not apply feature extraction algorithms to any of the data sets.\nTable~\\ref{table-property-class} lists the properties of the data sets and the mean and standard deviation of the number of blocks of the MEVMs.\nWe distinguished between two-class and multi-class (more than $2$ classes) problems. When the original data set had more than two classes, we created the two-class problems by considering every pair of classes as a separate cross-validation problem. We draw this distinction because we want to compare classification approaches independent of particular multi-class strategies (1-vs-n, 1-vs-1, etc.).\n\n\\begin{table}[t!]\n\\caption{\\label{table-results-twoclass} Accuracy values for the two-class experiments. Bold numbers indicate significance (paired t-test; $p < 0.01$) compared to non-bold results in the same row.}\n\\small\n\\begin{center}\n\\begin{tabular}{|l||c|c|c|c|c|}\n\\hline \nData set & MEVM & NB & DT & SVM & $5$-NN \\\\ \n\\hline \n\\hline\nParity & \\textbf{0.958} & 0.497 & 0.501 & 0.493 & 0.502 \\\\\nCounting & \\textbf{0.967} & 0.580 & 0.655 & 0.768 & 0.765 \\\\\nM-of-n & \\textbf{0.994} & 0.852 & 0.990 & \\textbf{0.995} & 0.715 \\\\\nExact & \\textbf{0.996} & 0.566 & 0.983 & \\textbf{0.995} & 0.974 \\\\\n\\hline\n\\hline\n20Newsgrp & \\textbf{0.905} & 0.829 & 0.803 & 0.867 & 0.582 \\\\ \nReuters-8 & \\textbf{0.968} & 0.940 & \\textbf{0.965} & \\textbf{0.982} & 0.881 \\\\ \nPolarity & 0.826 & 0.794 & 0.623 & \\textbf{0.859} & 0.520 \\\\ \nEnron & \\textbf{0.980} & 0.915 & 0.948 & 0.972 & 0.743 \\\\\nWebKB &\\textbf{0.943} & 0.907 & 0.899 & \\textbf{0.952} & 0.780 \\\\ \nMNIST & 0.969 & 0.964 & 0.981 & 0.983 & \\textbf{0.995}\\\\\n\\hline\n \\end{tabular} \n\\end{center}\n\\caption{\\label{table-results-multiclass} Accuracy values for the multi-class experiments. 
Bold numbers indicate significance (paired t-test; $p < 0.01$) compared to non-bold results in the same column.}\n \\small\n \\begin{center}\n \\begin{tabular}{|l||c|c|c|c|}\n \\hline \n Classifier & 20Newsgrp & Reuters-8 & WebKB & MNIST \\\\ \n \\hline \n \\hline\nMEVM & \\textbf{0.626} & \\textbf{0.911} & \\textbf{0.860} & \\textbf{0.855} \\\\\nNB & 0.537 & 0.862 & 0.783 & 0.842 \\\\\n\\hline\n \\end{tabular} \n\\end{center}\n\\end{table}\n\nWe exploited necessary condition (1) from Proposition~\\ref{prop-criteria} to learn the block structure of the MEVM classifiers. For each pair of variables $X, X'$ and each class value $y$, we applied Welch's t-test to test the null hypothesis $\\mathtt{\\mathbf{E}}(X \\mid y) = \\mathtt{\\mathbf{E}}(X' \\mid y)$. If, for two variables, the test's p-value was less than $0.1$, we rejected the null hypothesis and placed them in different blocks conditioned on $y$. We applied Laplace smoothing with a constant of $0.1$. The same parameter values were applied across \\emph{all} data sets and experiments.\nFor all other classifiers we used the \\textsc{SciKit} $0.14$ implementations naive\\_bayes.BernoulliNB, tree.DecisionTreeClassifier, svm.LinearSVC, and neighbors.KNeighborsClassifier. We used the classifiers' standard settings except for the naive Bayes classifier where we applied a Laplace smoothing constant (alpha) of $0.1$ to ensure a fair comparison (NB results deteriorated for alpha values of $1.0$ and $0.01$). The standard settings for the classifiers are available as part of the \\textsc{SciKit} $0.14$ documentation.\nAll implementations and data sets will be published.\n\nTable~\\ref{table-results-twoclass} lists the results for the two-class problems. The MEVM classifier was one of the best classifiers for $8$ out of the $10$ data sets. With the exception of the MNIST data set, where the difference was insignificant, MEVM significantly outperformed the naive Bayes classifier (NB) on all other data sets. 
The MEVM classifier outperformed SVMs on $4$ data sets, two of which are real-world text classification problems, and achieved a tie on $4$. For the parity data set only the MEVM classifier was better than random.\nTable~\\ref{table-results-multiclass} shows the results on the multi-class problems. Here, the MEVM classifier significantly outperforms naive Bayes on all data sets. The MEVM classifier outperformed all classifiers on the 20Newsgroup data set and was a close second on the Reuters-8 and WebKB data sets. The MEVM classifier is particularly suitable for high-dimensional and sparse data sets. We hypothesize that there are three reasons for this. First, MEVMs can model both negative and positive correlations between variables. Second, MEVMs perform a non-linear transformation of the feature space. Third, MEVMs cluster noisy variables into blocks of exchangeable sequences, which acts as a form of regularization in sparse domains.\n\n\\subsection{Probability Estimation}\n \nWe conducted experiments with a widely used collection of data sets~\\cite{Haaren:2012,Gens:2013,Lowd:2013}.\nTable~\\ref{table-de-dataset} lists the number of variables, training and test examples, and the number of blocks of the MEVM models. We set the latent variable's domain size to $20$ for each problem and applied the same EM initialization for MEVMs and NB models. This way we could compare NB and MEVM independent of the tuning parameters specific to EM. We implemented EM exactly as described in Algorithm~\\ref{algorithm-em}. For step (2), we exploited Proposition~\\ref{prop-criteria}~(1) and, for each $y$, partitioned the variables into exchangeable blocks by performing a series of Welch's t-tests on the expectations $\\mathtt{\\mathbf{E}}(X_j \\mid y)$, estimated by $\\sum_{i=1}^{N}x_j^{(i)}\\delta(y \\mid i)\/N$, assigning two variables to different blocks if the null hypothesis of identical means could be rejected at a significance level of $0.1$. 
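The block-partitioning step just described can be sketched as follows. This is only an illustrative reconstruction: we implement Welch's t-statistic directly for binary columns and use a normal approximation to its null distribution (reasonable for the large sample sizes used here), and the greedy grouping against a block representative is our assumption, since the text only specifies the pairwise tests and the $0.1$ level.

```python
import math

def welch_p_value(m1, v1, n1, m2, v2, n2):
    """Two-sided p-value of Welch's t-test from means, sample variances
    and sample sizes, using a normal approximation to the t-distribution
    (an assumption on our part, adequate for large samples)."""
    se = math.sqrt(v1 / n1 + v2 / n2)
    if se == 0.0:
        return 1.0 if m1 == m2 else 0.0
    t = (m1 - m2) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

def partition_into_blocks(columns, alpha=0.1):
    """Greedily group 0/1 columns into blocks: a variable joins the first
    block whose representative it does not differ from significantly."""
    stats = []
    for col in columns:
        n = len(col)
        m = sum(col) / n
        v = m * (1.0 - m) * n / (n - 1)  # sample variance of a 0/1 column
        stats.append((m, v, n))
    blocks, reps = [], []  # blocks of column indices, one representative each
    for j, (m, v, n) in enumerate(stats):
        for b, r in enumerate(reps):
            mr, vr, nr = stats[r]
            if welch_p_value(m, v, n, mr, vr, nr) >= alpha:
                blocks[b].append(j)
                break
        else:
            blocks.append([j])
            reps.append(j)
    return blocks

# Two columns with mean 0.5 fall in one block; a mean-0.95 column is split off.
example_columns = [[1] * 50 + [0] * 50, [0] * 50 + [1] * 50, [1] * 95 + [0] * 5]
example_blocks = partition_into_blocks(example_columns)
```

In the experiments this partitioning is done once per class value $y$, on the class-conditional expectations.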
For MEVM and NB we again used a Laplace smoothing constant of $0.1$. We ran EM until the average log-likelihood increase between iterations was less than $0.001$. We restarted EM $10$ times and chose the model with the maximal log-likelihood on the training examples. We did not use the validation data. For LTM~\\cite{Choi:2011}, we applied the four methods, CLRG, CLNJ, regCLRG, and regCLNJ, and chose the model with the highest validation log-likelihood.\n\n \\begin{table}[t!]\n \\caption{\\label{table-de-dataset} Properties of the data sets used for probability estimation and mean and standard deviation of the number of MEVM blocks.}\n \\small\n \\begin{center}\n \\begin{tabular}{|l|r|r|r|r|}\n \\hline \n Data set & $|V|$ & Train & Test & Blocks \\\\ \n \\hline \n NLTCS & 16 & 16,181 & 3,236 & $8.8 \\pm 1.9$ \\\\ \n MSNBC & 17 & 291,326 & 58,265 & $15.9 \\pm 1.1$\\\\\n KDDCup 2000& 64 & 180,092 & 34,955 & $15.8 \\pm 4.7$\\\\\n Plants & 69 & 17,412 & 3,482 & $15.9 \\pm 2.9$\\\\ \n Audio & 100 & 15,000 & 3,000 & $13.7 \\pm 3.0$ \\\\ \n Jester & 100 & 9,000 & 4,116 & $10.4 \\pm 2.0$ \\\\ \n Netflix & 100 & 15,000 & 3,000 & $14.8 \\pm 3.2$\\\\ \n MSWeb & 294 & 29,441 & 5,000 & $21.3 \\pm 2.0$ \\\\ \n Book & 500 & 8,700 & 1,739 & $12.4 \\pm 2.9$ \\\\ \n WebKB & 839 & 2,803 & 838 & $10.6 \\pm 2.3$ \\\\ \n Reuters-52 & 889 & 6,532 & 1,540 & $16.7 \\pm 3.1$ \\\\ \n 20Newsgroup & 910 & 11,293 & 3,764 & $17.9 \\pm 3.7$ \\\\ \n \\hline\n \\end{tabular}\n\\end{center}\n\\end{table}\n\nTable~\\ref{table-density-results-ll} lists the average log-likelihood of the test data for the MEVM, the latent naive Bayes~\\cite{Lowd:2005} (NB), the latent tree (LTM), and the Chow-Liu tree model~\\cite{Chow:2006} (CL). Even without exploiting the validation data for model tuning, the MEVM models outperformed the CL models on all data sets, and the LTMs on all but two. MEVMs achieve the highest log-likelihood score on $7$ of the $12$ data sets. 
With the exception of the Jester data set, MEVMs either outperformed or tied the NB model. While the results indicate that MEVMs are effective for higher-dimensional and sparse data sets, where the increase in log-likelihood was most significant, MEVMs also outperformed the NB models on $3$ data sets with fewer than $100$ variables. The MEVM and NB models have exactly the same number of free parameters. \nSince results on the same data sets are available for other tractable model classes, we also compared MEVMs with SPNs~\\cite{Gens:2013} and ACMNs~\\cite{Lowd:2013}.\nHere, MEVMs are outperformed by the more complex SPNs on $5$ and by ACMNs on $6$ data sets. However, MEVMs are competitive and outperform SPNs on $7$ and ACMNs on $6$ of the $12$ data sets. Following previous work~\\cite{Haaren:2012}, we applied the Wilcoxon signed-rank test. MEVM outperforms the other models at a significance level of $0.0124$ (NB), $0.0188$ (LTM), and $0.0022$ (CL). The difference is insignificant compared to ACMNs ($0.6384$) and SPNs ($0.7566$).\n\nTo compute the probability of one example, MEVMs require as many steps as there are blocks of exchangeable variables. Hence, EM for MEVM is significantly more efficient than EM for NB, both for a single EM iteration and to reach the stopping criterion. While the difference was less significant for problems with fewer than $100$ variables, the EM algorithm for MEVM was up to \\emph{two orders of magnitude} faster for data sets with $100$ or more variables. 
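To illustrate why evaluation is linear in the number of blocks: within an exchangeable block $b$, all $\binom{|b|}{k}$ configurations with $k$ ones share the probability mass assigned to the count $k$, so one term per block suffices. A minimal sketch under this standard count-based parameterization of finite exchangeable distributions follows; the parameter layout is ours, not the authors' implementation:

```python
import math

def evm_log_prob(x, blocks, block_count_probs):
    """Log-probability of a 0/1 example x under an exchangeable variable
    model: within a block b, every configuration with k ones has
    probability count_probs[k] / C(|b|, k), so the total cost is one
    term per block (plus summing the bits of each block)."""
    total = 0.0
    for b, count_probs in zip(blocks, block_count_probs):
        k = sum(x[j] for j in b)
        total += math.log(count_probs[k]) - math.log(math.comb(len(b), k))
    return total

# One block of two exchangeable bits with P(k ones) = C(2,k)/4,
# i.e. two fair independent coins: each configuration has probability 1/4.
example_lp = evm_log_prob([1, 0], [[0, 1]], [[0.25, 0.5, 0.25]])
```

The same per-block factorization is what makes each EM iteration cheap relative to NB when the number of blocks is much smaller than the number of variables.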
\n\n\n\n\\section{Discussion}\n\n\n\\begin{table}[t!]\n\\caption{\\label{table-density-results-ll} Average log-likelihood of the MEVM, the naive Bayes, the latent tree, and the Chow-Liu tree model.}\n\\small\n\\vspace{-0.38mm}\n\\begin{center}\n\\begin{tabular}{|l|r|r|r|r|}\n\\hline \nData set & MEVM & NB & LTM & CL \\\\ \n\\hline \nNLTCS & -6.04 & -6.04 & -6.46 & -6.76 \\\\ \nMSNBC & \\textbf{-6.23} & -6.71 & -6.52 & -6.54 \\\\\nKDDCup 2000& \\textbf{-2.13} & -2.15 & -2.18 & -2.29 \\\\\nPlants & \\textbf{-14.86} & -15.10 & -16.39 & -16.52 \\\\ \nAudio & \\textbf{-40.63} & -40.69 & -41.89 & -44.37 \\\\ \nJester & -53.22 & \\textbf{-53.19} & -55.17 & -58.23 \\\\ \nNetflix & \\textbf{-57.84} & -57.87 & -58.53 & -60.25 \\\\ \nMSWeb & -9.96 & -9.96 & -10.21 & -10.19 \\\\ \nBook & -34.63 & -34.80 & \\textbf{-34.23} & -34.70 \\\\ \nWebKB & -157.21 & -158.01 & \\textbf{-156.84} & -163.48 \\\\ \nReuters-52 & \\textbf{-86.98} & -87.32 & -91.25 & -94.37 \\\\ \n20Newsgroup & \\textbf{-152.69} & -152.78 & -156.77 & -164.13\\\\ \n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nExchangeable variable models (EVMs) provide a framework for probabilistic models combining the notions of conditional independence and partial exchangeability. As a result, it is possible to efficiently learn the parameters and structure of tractable high tree-width models. EVMs can model complex positive and negative correlations between large numbers of variables. We presented the theory of EVMs and showed that a particular subfamily is optimal for several important classes of Boolean functions. Experiments with a large number of data sets verified that mixtures of EVMs are powerful and highly efficient models for classification and probability estimation.\n\nEVMs are potential components in deep architectures such as sum-product networks~\\cite{Gens:2013}. 
In light of Theorem~\\ref{optimality}, exchangeable variable nodes, complementing sum and product nodes, can lead to more compact representations with fewer parameters to learn.\nEVMs are also related to graphical modeling with perfect graphs~\\cite{Jebara:2013}. In addition, EVMs provide an insightful connection to lifted probabilistic inference~\\cite{Kersting:2012}, an active research area concerned with exploiting symmetries for more efficient probabilistic inference. We have developed a principled framework based on partial exchangeability as an important notion of structural symmetry. There are numerous opportunities for cross-fertilization between EVMs, perfect graphical models, collective graphical models, and statistical relational models. \n\nDirections for future work include more sophisticated structure learning, EVMs with continuous variables, EVMs based on instances of partial exchangeability other than finite exchangeability, novel statistical relational formalisms incorporating EVMs, applications of EVMs, and a general theory of graphical models with exchangeable potentials.\n\n\n\\section*{Acknowledgments}\n\\small\nMany thanks to Guy Van den Broeck, Hung Bui, and Daniel Lowd for helpful discussions. This research was partly funded by ARO grant W911NF-08-1-0242,\nONR grants N00014-13-1-0720 and N00014-12-1-0312, and AFRL contract\nFA8750-13-2-0019. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, ONR, AFRL, or the United States Government.\n\n\\newpage\n\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $ \\Phi=(G, \\varphi) $ be a connected complex unit gain graph ($ \\mathbb{T} $-gain graph) on a simple graph $ G $ with $ n $ vertices. Let $ V(G)=\\{v_1, v_2, \\dots, v_n\\} $ and $ E(G) $ be the vertex set and the edge set of $ G $, respectively. 
If two vertices $ v_i $ and $ v_j $ are connected by an edge, then we write $ v_i\\sim v_j $. If $ v_i\\sim v_j $, then the edge between them is denoted by $ e_{i,j} $. The \\emph{adjacency matrix } $ A(G) $ of a graph $ G $ is a symmetric matrix whose $ (i,j)th $ entry is $ 1 $ if $ v_i\\sim v_j $ and zero otherwise. A \\emph{path} $ P $ in $ G $ between the vertices $ s$ and $t $ is denoted by $ sPt $.\nThe \\emph{distance} between two vertices $ s$ and $t$ in $ G $ is the length of the shortest path between $ s$ and $t $, and is denoted by $ d_{G}(s,t) $ (or simply $ d(s,t) $). The \\emph{distance matrix} of an undirected graph $ G $, denoted by $ D(G) $, is the symmetric $ n\\times n$ matrix whose $ (i,j) $th entry is $ d(v_i,v_j) $. The distance matrix of an undirected graph has been widely studied in the literature, see \\cite{Dist_Bapat,Graham, Lovasz, Graham_1971} and the references therein.\n\nLet $ G $ be a simple undirected graph. An oriented edge from the vertex $ v_s $ to the vertex $ v_t $ is denoted by $ \\overrightarrow{e}_{s,t} $. For each undirected edge $ e_{s,t}\\in E(G) $, there is a pair of oriented edges $ \\overrightarrow{e}_{s,t} $ and $ \\overrightarrow{e}_{t,s} $. The collection $ \\overrightarrow{E}(G):=\\{ \\overrightarrow{e}_{s,t},\\overrightarrow{e}_{t,s}: e_{s,t}\\in E(G)\\} $ is the \\emph{oriented edge set associated with $ G $}. Let $ \\mathbb{T}=\\{ z\\in \\mathbb{C}: |z|=1\\} $. A \\emph{complex unit gain graph (or $ \\mathbb{T} $-gain graph)} on a simple graph $ G $ is an ordered pair $ (G, \\varphi) $, where the gain function $ \\varphi: \\overrightarrow{E}(G) \\rightarrow \\mathbb{T} $ is a mapping such that $ \\varphi( \\overrightarrow{e}_{s,t}) =\\varphi(\\overrightarrow{e}_{t,s})^{-1}$, for every $ e_{s,t}\\in E(G) $. A $ \\mathbb{T} $-gain graph $ (G, \\varphi) $ is denoted by $ \\Phi $. 
The \\emph{adjacency matrix} of a $ \\mathbb{T} $-gain graph $ \\Phi=(G, \\varphi)$ is a Hermitian matrix, denoted by $ A(\\Phi)$, whose $ (s,t)th $ entry is defined as follows:\n$$a_{st}=\\begin{cases}\n\t\\varphi(\\overrightarrow{e}_{s,t})&\\text{if } \\mbox{$v_s\\sim v_t$},\\\\\n\t0&\\text{otherwise.}\\end{cases}$$\n The spectrum and the spectral radius of $ \\Phi $ are the spectrum and the spectral radius of $ A(\\Phi) $, and are denoted by $ \\spec(\\Phi) $ and $ \\rho(\\Phi) $, respectively.\n A \\emph{signed graph} is a graph $ G $ together with a signature function $ \\psi : E(G) \\rightarrow \\{\\pm 1\\} $, and is denoted by $\\Psi= (G, \\psi) $. The adjacency matrix of $ \\Psi $, denoted by $ A(\\Psi) $, is an $ n\\times n$ matrix whose $ (i,j) $th entry is $ \\psi(e_{i,j}) $. Therefore, a signed graph can be considered as a $ \\mathbb{T} $-gain graph $ \\Psi=(G, \\psi) $, where $ \\psi $ is a signature function. The notion of the adjacency matrix of $\\mathbb{T}$-gain graphs generalizes the notions of the adjacency matrix of undirected graphs, the adjacency matrix of signed graphs, and the Hermitian adjacency matrix of mixed graphs. For more information about the properties of gain graphs and $\\mathbb{T}$-gain graphs, we refer to \\cite{ reff1, Our-paper-2, gain-genesis2, Zaslav}.\n\n\n\nLet $ \\Psi=(G, \\psi ) $ be a signed graph. The sign of a path in $ \\Psi $ is the product of the signs of all edges of the path \\cite{Zas4}. Recently, in \\cite{Sign_distance} the authors introduced the notion of signed distance matrices $ D^{\\max}(\\Psi) $ and $ D^{\\min}(\\Psi) $ for a signed graph $ \\Psi $.\n\\begin{definition}[{\\cite[Definition 1.1]{Sign_distance}}]{\\rm\nLet\t$ \\Psi=(G, \\psi ) $ be a signed graph with vertex set $ V(G) =\\{ v_1, v_2, \\dots, v_n\\}$. 
Then two auxiliary signs are defined as follows:\n\\begin{itemize}\n\t\\item[(a)] $ \\psi_{\\max}(v_i,v_j)=-1 $ if all shortest $ v_iv_j $-paths are negative, $ +1 $ otherwise,\n\t\\item[(b)] $ \\psi_{\\min}(v_i,v_j)=+1 $ if all shortest $ v_iv_j $-paths are positive, $ -1 $ otherwise.\n\\end{itemize}\nThe two signed distance matrices are defined as follows:\n\\begin{itemize}\n\t\\item[(a)] $ D^{\\max}(\\Psi)=(d_{\\max}(v_i,v_j))_{n\\times n}$,\n\t\\item[(b)] $ D^{\\min}(\\Psi)=(d_{\\min}(v_i,v_j))_{n\\times n} $,\n\\end{itemize}\nwhere $ d_{\\max}(v_i,v_j)=\\psi_{\\max}(v_i,v_j)d(v_i,v_j)$ and $ d_{\\min}(v_i,v_j)=\\psi_{\\min}(v_i,v_j)d(v_i,v_j)$.}\n\\end{definition}\n\nA signed graph $ \\Psi $ is distance compatible if and only if $ D^{\\max}(\\Psi)=D^{\\min}(\\Psi) $. A characterization of balanced signed graphs in terms of signed distance matrices is obtained in \\cite{Sign_distance}. For more about signed distance matrices, see \\cite{Sign_distance, shijin2020signed}.\n\n\n\nIn this article, we introduce the notion of gain distance matrices $ D^{\\max}_{<} (\\Phi)$ and $ D^{\\min}_{<} (\\Phi) $ for a $ \\mathbb{T} $-gain graph $ \\Phi=(G, \\varphi) $ associated with an ordered vertex set $ (V(G), <) $. These concepts generalize the notions of signed distance matrices of signed graphs and distance matrices of undirected graphs. We define positively weighted $ \\mathbb{T} $-gain graphs and establish two new characterizations for the balance of gain graphs. Acharya's spectral criterion and Stani\\'c's spectral criterion are particular cases of these characterizations. Besides, we introduce two properties of a $ \\mathbb{T} $-gain graph related to its gain distance matrices, namely order-independence and distance compatibility. Thereupon, we establish two characterizations for the balance of $ \\mathbb{T} $-gain graphs in terms of gain distance matrices and distance compatibility properties. 
Subsequently, we present some results on the characterization of distance compatibility.\n\nThis paper is organized as follows: In section \\ref{prelim}, we collect the needed known definitions and results. In section \\ref{gain_distance}, we define the notions of gain distance matrices, order-independence, and distance compatibility, and discuss their properties. In section \\ref{Positively_weighted}, we discuss positively weighted $ \\mathbb{T} $-gain graphs and establish two spectral characterizations for the balance (Theorem \\ref{th4.2}, Theorem \\ref{Th3.4}). In section \\ref{Char_of_balanced}, we derive two characterizations for balanced $ \\mathbb{T} $-gain graphs in terms of the gain distance matrices (Theorem \\ref{Th5.1}, Theorem \\ref{th5.2}). In section \\ref{Dist_compt}, we obtain a couple of characterizations for distance compatible $ \\mathbb{T} $-gain graphs (Theorem \\ref{Th6.1}, Theorem \\ref{Th6.2}, Theorem \\ref{Th6.3}).\n\t\n\\section{Definitions, notation and preliminary results}\\label{prelim}\n\nLet $ G=(V(G),E(G))$ be a connected undirected graph with no loops or multiple edges, where $ V(G)=\\{ v_1, v_2, \\dots, v_n\\} $ is the vertex set and $ E(G) $ is the edge set of $ G $. A graph $ G $ is \\emph{geodetic} if there exists a unique shortest path between any two vertices of $ G $. Let $ \\Phi=(G, \\varphi) $ be a $ \\mathbb{T} $-gain graph on $ G $. For $ s,t \\in V(G) $, $ sPt $ denotes a path that starts at $ s $ and ends at $ t $ in $ G $. In the case of a gain graph, $ sPt $ denotes the oriented path from the vertex $ s $ to the vertex $ t $. The gain of the path $ sPt$ is $ \\varphi(sPt)=\\prod\\limits_{j=1}^{k}\\varphi(\\overrightarrow{e_j})$, where $ \\overrightarrow{e_1}, \\overrightarrow{e_2}, \\dots, \\overrightarrow{e_k} $ are the consecutive oriented edges in $ sPt$. Therefore, $\\varphi(tPs)= \\overline{\\varphi(sPt)}$. 
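As a quick sanity check of these identities (unit-modulus edge gains multiply along a path, and reversing the path inverts each edge gain, so the gain of the reversed path is the complex conjugate), consider the following small sketch; the numeric edge gains are arbitrary:

```python
import cmath

def path_gain(gains):
    """Product of the unit-modulus gains along an oriented path."""
    g = 1 + 0j
    for z in gains:
        g *= z
    return g

# Arbitrary unit gains phi(e_j) along an oriented path sPt.
edges = [cmath.exp(1j * t) for t in (0.3, -1.2, 2.0)]
forward = path_gain(edges)                               # phi(sPt)
backward = path_gain([1 / z for z in reversed(edges)])   # phi(tPs)
assert abs(backward - forward.conjugate()) < 1e-12       # phi(tPs) = conj(phi(sPt))
assert abs(abs(forward) - 1.0) < 1e-12                   # gains stay on the unit circle
```

For a unit complex number $z$, $z^{-1}=\bar{z}$, which is exactly why reversal conjugates the path gain.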
The gain of an oriented cycle $ \\overrightarrow{C_n} $ with edges $ \\overrightarrow{e_1}, \\overrightarrow{e_2}, \\dots, \\overrightarrow{e_n} $ is $\\varphi(\\overrightarrow{C_n})=\\prod\\limits_{j=1}^{n}\\varphi(\\overrightarrow{e_j}) $. A cycle $ C $ is \\emph{neutral} in $ \\Phi $ if $ \\varphi(\\overrightarrow{C})=1 $. A gain graph $ \\Phi $ is \\emph{balanced}, if all cycles in $ \\Phi $ are neutral. A $ \\mathbb{T} $-gain graph $ \\Phi $ is \\emph{anti-balanced} if $ -\\Phi $ is balanced. Let $ \\Rea(x) $ and $ \\Ima(x) $ denote the real and imaginary part of a complex number $ x $, respectively.\n\nA function $ \\zeta: V(G) \\rightarrow \\mathbb{T} $ is a \\emph{switching function}. Let $ \\Phi_1=(G, \\varphi_1) $ and $ \\Phi_2=(G, \\varphi_2) $ be two $ \\mathbb{T} $-gain graphs. Then $ \\Phi_1 $ and $ \\Phi_2 $ are \\emph{switching equivalent}, denoted by $ \\Phi_1 \\sim \\Phi_2 $, if there exists a switching function $ \\zeta $ such that $ \\varphi_1(\\overrightarrow{e}_{i,j})= \\zeta(v_i)^{-1} \\varphi_2(\\overrightarrow{e}_{i,j})\\zeta(v_j)$, for all $ e_{i,j}\\in E(G) $. If $ \\Phi_1 \\sim \\Phi_2 $, then $ A(\\Phi_1) $ and $ A(\\Phi_2) $ are diagonally similar and hence have the same spectra.\n\n\\begin{lemma}[{\\cite[Corollary 3.2]{Our-paper-2}}]\\label{lm2.1}\n\tLet $ \\Phi_1 $ and $ \\Phi_2 $ be two $ \\mathbb{T} $-gain graphs on a connected graph $ G $ with a normal spanning tree $ T $. Then $ \\Phi_1 \\sim \\Phi_2 $ if and only if $ \\varphi_1(\\overrightarrow{C_j})=\\varphi_2(\\overrightarrow{C_j}) $, for all fundamental cycles $ C_j $ with respect to $ T $.\n\\end{lemma}\n\n A signed graph is a $ \\mathbb{T} $-gain graph $ \\Psi=(G, \\psi) $, where $ \\psi(\\overrightarrow{e}_{i,j})=1$ or $ -1 $ for $ e_{i,j}\\in E(G) $. 
The \\emph{sign} of a path in $\\Psi$ is the product of the signs (the gains) of the edges in the path.\n\n\n\\begin{theorem}[Harary's path criterion \\cite{Harary} ]\\label{lm.2.3}\n\tLet $ \\Psi $ be a signed graph on an underlying graph $ G $. Then $ \\Psi $ is balanced if and only if, for any pair of vertices $ s,t $, all $ st $-paths have the same signature.\n\\end{theorem}\nLet $ \\mathbb{C}^{m\\times n} $ denote the set of all $ m\\times n $ matrices with complex entries. For $ A=(a_{ij})\\in \\mathbb{C}^{n \\times n} $, define $ |A|=(|a_{ij}|) $. For two matrices $ A=(a_{ij})$ and $ B=(b_{ij})$, we write $ A \\leq B $ if $ a_{ij} \\leq b_{ij} $ for all $ i,j $. A matrix is \\emph{non-negative} if all of its entries are non-negative. The spectral radius of a matrix $ A $ is denoted by $ \\rho(A) $.\n\\begin{theorem}[{\\cite[Theorem 8.4.5]{horn-john2}}]\\label{th2.2}\nLet $ A, B \\in \\mathbb{C}^{n \\times n}$. Suppose $ A $ is irreducible and non-negative and $ A \\geq |B| $. Let $ \\mu=e^{i\\theta} \\rho(B)$ be a given maximum modulus eigenvalue of $ B $. If $ \\rho(A)=\\rho(B) $, then there is a unitary diagonal matrix $ D $ such that $ B=e^{i\\theta}DAD^{-1} $.\n\\end{theorem}\n\nLet $ A=(a_{ij}), B=(b_{ij})\\in \\mathbb{C}^{m\\times n} $. The \\emph{Hadamard product} of $ A $ and $ B $, denoted by $ A\\circ B $, is defined as $ A\\circ B =(a_{ij}b_{ij})_{m\\times n}$. For any three matrices $ A, B, C $ of the same order, $ (A \\circ B) \\circ C=A \\circ (B \\circ C) $.\nLet us recall the following property of the Hadamard product of matrices.\n\\begin{prop}[{\\cite[Lemma 5.1.2]{Matrix_Analysis}}]\\label{prop3.1}\n\tLet $ A,B,C $ be three $n \\times n$ matrices and $ D,E $ be two $n \\times n$ diagonal matrices. 
Then\n\t\\begin{equation*}\n\t\t D(A\\circ B)E=(DAE)\\circ B=(DA)\\circ (BE)=(AE)\\circ (DB)=A\\circ(DBE).\n\t\\end{equation*}\n\\end{prop} \t\n\t\n\\section{Gain distance matrices}\\label{gain_distance}\nThis section introduces the notion of gain distance matrices of $ \\mathbb{T} $-gain graphs, which generalizes the notions of distance matrices of undirected graphs and signed distance matrices of signed graphs.\n\n Let $ \\Phi=(G, \\varphi) $ be a connected $ \\mathbb{T} $-gain graph on $ G $. For $ s,t\\in V(G) $, $sPt$ denotes the oriented path from the vertex $s$ to the vertex $t$. Define three sets of paths $ \\mathcal{P}(s,t), \\mathcal{P}^{\\max}(s,t) $ and $ \\mathcal{P}^{\\min}(s,t) $ as follows:\n\n\n\\begin{center}\n\t$ \\mathcal{P}(s,t)=\\left\\{ sPt: sPt \\text{ is a shortest path} \\right\\},$\n\\end{center}\n\n\\begin{center}\n\t$ \\mathcal{P}^{\\max}(s,t)=\\left\\{ sPt \\in \\mathcal{P}(s,t): \\Rea(\\varphi(sPt))=\\max\\limits_{s\\tilde{P}t \\in \\mathcal{P}(s,t)}\\Rea(\\varphi(s\\tilde{P}t)) \\right\\}$\n\\end{center}\nand\n\\begin{center}\n\t$ \\mathcal{P}^{\\min}(s,t)=\\left\\{ sPt \\in \\mathcal{P}(s,t): \\Rea(\\varphi(sPt))=\\min\\limits_{s\\tilde{P}t \\in \\mathcal{P}(s,t)}\\Rea(\\varphi(s\\tilde{P}t)) \\right\\}.$\n\\end{center}\nNote that $\\mathcal{P}^{\\max}(s,t) = \\mathcal{P}^{\\max}(t, s) $ and $\\mathcal{P}^{\\min}(s,t) = \\mathcal{P}^{\\min}(t,s)$.\n\nLet $ G $ be a simple graph with vertex set $ V(G) =\\{ v_1, v_2, \\dots, v_n\\} $. We denote $ \\left(V(G), <\\right)$ as an ordered vertex set, where $`<`$ is a total ordering of the vertices of $ G $. An ordering $ `<_r` $ is the \\emph{reverse ordering} of $ `<` $ if $ v_i<_r v_j $ if and only if $ v_j<v_i $. Hence $ (i) $ holds.\n\tSimilarly, $ D^{\\min}_{<}(\\Phi)\\ne D^{\\min}_{<_r}(\\Phi)$ implies $ (ii)$.\n\t\n\tConversely, suppose statement $ (i) $ holds. 
Then there exist two shortest $ \\overrightarrow{v_sv_t} $-paths $ v_sP_1v_t $ and $ v_sP_2v_t $ in $ \\mathcal{P}^{\\max}(v_s,v_t) $ with different gains. If $ \\varphi(v_sP_1v_t )=x+iy \\in \\mathbb{T}$, then $ \\varphi(v_sP_2v_t)=x-iy$, $ y\\neq0 $. Without loss of generality, assume that $y>0$. If $ v_s<v_t $, then $$ \\cdots>0, $$\n\ta contradiction. Thus $ \\Phi $ is balanced.\n\\end{proof}\n\n\nThe well-known spectral criterion of Acharya for the balance of signed graphs follows from Theorem \\ref{th4.2}.\n\\begin{corollary} [{\\cite[Corollary 1.1]{Acharya}}]\n\tLet $ \\Psi=(G, \\psi) $ be a signed graph. Then the spectra of $ \\Psi $ and $ G $ coincide if and only if $ \\Psi $ is balanced.\n\\end{corollary}\n\\begin{proof}\n\tBy taking $ \\varphi=\\pm 1 $ and $ w=1 $ in Theorem \\ref{th4.2}, we get the result.\n\\end{proof}\nAnother consequence is the following recent result about signed graphs.\n\\begin{corollary}[{\\cite[Theorem 2.4]{Sign_distance}}]\n\tLet $ \\Psi=(G, \\psi)$ be a signed graph and $ w $ be a positively weighted function, where $ \\psi=\\pm 1 $. Then $ \\Psi_w $ and $ G_w $ are cospectral if and only if $ \\Psi $ is balanced.\n\\end{corollary}\n\nNext, we prove one of the main results of this article.\n\n\\begin{theorem}\\label{Th3.4}\n\tLet $ \\Phi_w=(G, \\varphi, w) $ be a connected positively weighted $\\mathbb{T}$-gain graph. Then the spectral radii of $ \\Phi_w $ and $ G_w$ are equal if and only if either $ \\Phi $ or $ -\\Phi $ is balanced.\n\\end{theorem}\n\\begin{proof}\n\tSuppose either $ \\Phi $ or $ -\\Phi $ is balanced. Then it is easy to see that the spectral radii of $ \\Phi_{w} $ and $ G_w $ are equal.\n\tConversely, suppose $ \\rho(\\Phi_{w})=\\rho(G_w) $. Let $ \\mu_n\\leq \\mu_{n-1}\\leq \\dots \\leq \\mu_1 $ be the eigenvalues of $ \\Phi_{w} $. 
Then either $ \\rho(\\Phi_{w})=\\mu_1 $ or $ \\rho(\\Phi_{w})=-\\mu_n $.\n\t\n\t\\noindent{\\bf Case 1:} If $ \\rho(\\Phi_{w})=\\mu_1 $, then, by Theorem \\ref{th2.2}, there exists a diagonal unitary matrix $ D $ such that $ A(\\Phi_{w})=DA(G_w)D^{*} $. Now $ A(\\Phi)\\circ A(G_w)=D(A(G) \\circ A(G_w))D^{*} $. Then, by Proposition \\ref{prop3.1}, $ A(\\Phi)\\circ A(G_w)=(DA(G)D^{*} )\\circ A(G_w)$. Define $ B=(b_{ij}) $ as follows: $ b_{ij} $ is the inverse of the nonzero $ (i,j)th $-entry of $ A(G_w) $, otherwise zero. Then $( A(\\Phi)\\circ A(G_w))\\circ B=((DA(G)D^{*} )\\circ A(G_w))\\circ B$. Thus, by Proposition \\ref{prop3.1}, we have $A(\\Phi)=DA(G)D^{*}$. Thus $ \\Phi $ is balanced.\n\t\n\t\\noindent{\\bf Case 2:} If $ \\rho(\\Phi_w)=-\\mu_n$, then $ \\mu_n=e^{i\\pi} \\rho(\\Phi_w)$. By Theorem \\ref{th2.2}, there exists a diagonal unitary matrix $ D $, such that $ A(\\Phi_{w})=e^{i\\pi}DA(G_w)D^{*} $. That is, $-A(\\Phi_{w})=DA(G_w)D^{*} $. Then $ A(-\\Phi)\\circ A(G_w)=D(A(G) \\circ A(G_w))D^{*} $. By Proposition \\ref{prop3.1}, we have $A(-\\Phi)=DA(G)D^{*}$. Thus $ -\\Phi $ is balanced.\n\\end{proof}\nNow we present the following consequences of the above results.\n\\begin{corollary}\\label{Cor3.2}\n\tLet $ \\Phi_w=(G, \\varphi, w) $ be a connected positively weighted $\\mathbb{T}$-gain graph. Then the largest eigenvalue of $ \\Phi_w $ and $ G_w $ are equal if and only if $ \\Phi $ is balanced.\t\n\\end{corollary}\n\n\n\\begin{corollary}\\label{lm3.2}\n\tLet $ \\Phi=(G, \\varphi) $ be a connected $ \\mathbb{T} $-gain graph. Then the largest eigenvalue of $ \\Phi $ and $ G $ are equal if and only if $ \\Phi $ is balanced.\n\\end{corollary}\n\\begin{proof}\n\tThe proof follows from Corollary \\ref{Cor3.2} by assuming $ w=1 $.\n\\end{proof}\n\nAlso Theorem \\ref{Th3.4} unifies the following recent results.\n\\begin{corollary}[{\\cite[Corollary 2.7]{Sign_distance}}]\n\tLet $ \\Psi_w=(G, \\psi,w) $ be a connected positively weighted signed graph. 
Then $ \\Psi$ is balanced if and only if the largest eigenvalue of $ \\Psi_w $ and $ G_w $ coincide.\n\\end{corollary}\n\\begin{proof}\n\tBy taking $ \\varphi=\\pm1 $,\tthe result follows from Corollary \\ref{Cor3.2}.\n\\end{proof}\n\n\\begin{corollary}[{\\cite[Theorem 4.4]{Our-paper-1}}]\n\tLet $ \\Phi=(G, \\varphi) $ be a connected $ \\mathbb{T} $-gain graph. Then spectral radius of $ \\Phi $ and $ G $ coincide if and only if either $ \\Phi $ is balanced or $ -\\Phi $ is balanced.\n\\end{corollary}\n\\begin{proof}\n\tTake $ w=1 $ in Theorem \\ref{Th3.4}.\n\\end{proof}\n\n\n\n\\begin{corollary}[{(Stani\\'c's spectral criterion \\cite[Lemma 2.1]{Stanic})}]\n\tLet $\\Psi=(G, \\psi) $ be a connected signed graph. Then the largest eigenvalue of $ \\Psi $ and $ G $ coincide if and only if $ \\Psi $ is balanced.\n\\end{corollary}\n\\begin{proof}\n\tResult follows from Corollary \\ref{Cor3.2} by choosing $ \\varphi=\\pm 1 $ and $ w=1 $.\n\\end{proof}\n\\section{Characterizations of balanced $ \\mathbb{T} $-gain graphs in terms of gain distance matrices}\\label{Char_of_balanced}\n\nIn this section, we establish two characterizations for balanced $ \\mathbb{T} $-gain graphs using the gain distance matrices. Let us define two complete $ \\mathbb{T} $-gain graphs which are obtained from gain distance matrices $ D^{\\max}_{<}(\\Phi) $ and $ D^{\\min}_{<}(\\Phi) $.\n\n\n\\begin{definition}{\\rm\n\tLet $ \\Phi=(G, \\varphi) $ be a $ \\mathbb{T} $-gain graph and $`<`$ be an order on $V(G)$. The complete $ \\mathbb{T} $-gain graph with respect to $D^{\\max}_{<}(\\Phi) $, denoted by $ K^{D^{\\max}_{<}}(\\Phi) $, is defined as follows: keep the edges of $ \\Phi $ unchanged, and join non adjacent vertices $v_i$ and $v_j$ with gain $ \\varphi(\\overrightarrow{e}_{i,j})= \\varphi^{<}_{\\max}(v_i,v_j)$ for all $v_i,v_j$. 
Similarly,\n $ K^{D^{\\min}_{<}}(\\Phi) $ is defined using $D^{\\min}_{<}(\\Phi) $.}\n\t\\end{definition}\n\nFor a $ \\mathbb{T} $-gain graph $ \\Phi=(G, \\varphi) $ with order $<$, if $ D^{\\max}_{<}(\\Phi)=D^{\\min}_{<}(\\Phi) $, then the associated complete $ \\mathbb{T} $-gain graphs $ K^{D^{\\max}_{<}}(\\Phi) $ and $K^{D^{\\min}_{<}}(\\Phi) $ are the same; this common graph is denoted by $ K^{D}(\\Phi) $. The following theorem is then easy to verify.\n\n\\begin{theorem}\n\tFor a $ \\mathbb{T} $-gain graph $ \\Phi=(G, \\varphi) $, the following are equivalent:\n\t\\begin{enumerate}\n\t\t\\item [(1)] $ K^{D}(\\Phi) $ is well defined.\n\t\t\\item [(2)] $ K^{D^{\\max}}(\\Phi)=K^{D^{\\min}}(\\Phi) = K^{D^{\\max}_{<}}(\\Phi)=K^{D^{\\min}_{<}}(\\Phi)=K^{D}(\\Phi) $.\n\t\t\\item [(3)] $ D^{\\max}_{<}(\\Phi)=D^{\\min}_{<}(\\Phi)$, for some ordering $ < $.\n\t\\end{enumerate}\n\\end{theorem}\n\n\\begin{theorem}\\label{Th5.1}\n\tLet $ \\Phi=(G, \\varphi) $ be a $ \\mathbb{T} $-gain graph with vertex order $<$. Then the following statements are equivalent.\n\t\\begin{enumerate}\n\t\t\\item [(i)] $ \\Phi $ is balanced.\n\t\t\\item [(ii)] $ K^{D^{\\max}}(\\Phi) $ is balanced.\n\t\t\\item [(iii)] $ K^{D^{\\min}}(\\Phi) $ is balanced.\n\t\t\\item [(iv)] $ D^{\\max}(\\Phi) =D^{\\min}(\\Phi)$ and the associated complete $ \\mathbb{T} $-gain graph $ K^{D}(\\Phi) $ is balanced.\n\t\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n\t$ (i) \\implies (iv) $: Let $ V(G)=\\{v_1, v_2, \\dots, v_n\\} $, and suppose $ \\Phi $ is balanced. Let $ v_i, v_j\\in V(G) $. Then, by Lemma \\ref{lm3.1}, all shortest oriented paths $ v_iPv_j$ have the same gain. Thus $ \\varphi_{\\max}^{<}(v_i,v_j)=\\varphi_{\\min}^{<}(v_i,v_j) $. Therefore, $ D^{\\max}_{<}(\\Phi)=D^{\\min}_{<}(\\Phi) $. By Proposition \\ref{prop.3.1} and Corollary \\ref{cor3.1}, $ D^{\\max}(\\Phi)=D^{\\min}(\\Phi) $. 
Hence $ K^{D}(\\Phi) $ is well defined.\\\\\n\t{\\bf Claim:} $ K^{D}(\\Phi) $ is balanced.\\\\\n\tLet $ v_i\\nsim v_j $ in $ G $, and let $ e_{i,j} $ be the edge joining $v_i$ and $v_j$ in $ K^{D}(\\Phi)$. All shortest oriented paths $ v_iPv_j$ in $ \\Phi $ have the same gain, so every cycle passing through the edge $ e_{i,j} $ has gain $ 1 $. Let $ T $ be a normal spanning tree of $ G $. Adding the edge $ e_{i,j} $ to $ T $ creates a fundamental cycle of $K^{D}(\\Phi) $, say $ C_T $. By the previous observation, $ \\varphi(C_T)=1$. Thus all the fundamental cycles in $ K^{D}(\\Phi) $ are neutral, and hence, by Lemma \\ref{lm2.1}, $ K^{D}(\\Phi)$ is balanced.\n\t\n\n\nIf $ K^{D}(\\Phi) $ is balanced, then $ \\Phi $, being a subgraph of $ K^{D}(\\Phi) $, is also balanced. Therefore $ (iv) \\implies (i)$, $(iii) \\implies (i)$ and $ (ii) \\implies (i) $ follow.\n\n\tThe implications $ (iv) \\implies (iii) $ and $ (iv) \\implies (ii) $ are easy to see.\n\\end{proof}\n\\begin{theorem}\\label{th5.2}\nLet $ \\Phi=(G, \\varphi) $ be a $ \\mathbb{T} $-gain graph with vertex order $<$. Then the following statements are equivalent:\n\t\\begin{enumerate}\n\t\t\\item [(i)] $ \\Phi $ is balanced.\n\t\t\\item [(ii)] $ D^{\\max}(\\Phi) $ is cospectral with $ D(G) $.\n\t\t\\item [(iii)] $D^{\\min}(\\Phi) $ is cospectral with $ D(G) $.\n\t\t\\item [(iv)] The largest eigenvalues of $D^{\\max}(\\Phi) $ and $ D(G) $ are equal.\n\t\t\\item [(v)] The largest eigenvalues of $D^{\\min}(\\Phi) $ and $ D(G) $ are equal.\n\t\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\n\n\tLet $ V(G)=\\{ v_1, v_2, \\dots, v_n\\} $, and let $ \\Phi $ be balanced. Then, by Theorem \\ref{Th5.1}, $ K^{D^{\\max}}(\\Phi)$ is balanced. Note that $K^{D^{\\max}}(\\Phi)=(K_n, \\psi) $ with $ \\psi(\\overrightarrow{e}_{i,j})=\\varphi_{\\max}(v_i,v_j)=\\varphi(v_iPv_j)$, where $v_iPv_j$ is a shortest path in $ \\Phi $. 
Consider $ {D^{\\max}}(\\Phi)$ as the adjacency matrix of a positively weighted $ \\mathbb{T} $-gain graph $ (K_n, \\psi, w) $ with the weight function $w: E(K_n) \\rightarrow \\mathbb{R}^{+}$ defined by $ w(e_{i,j})=d(v_i,v_j) $, where $ d(v_i,v_j) $ is the distance between $ v_i$ and $ v_j $ in $ G $. Then the adjacency matrix of $ (K_n, w)$ is the same as $D(G) $. By Theorem \\ref{th4.2} and Theorem \\ref{Th5.1}, $ \\Phi $ is balanced if and only if $ D^{\\max}(\\Phi) $ is cospectral with $ D(G) $. Thus $(i) \\Leftrightarrow (ii)$.\n\t\n\tNow, by Corollary \\ref{Cor3.2}, $ \\Phi $ is balanced if and only if the largest eigenvalues of $D^{\\max}(\\Phi) $ and $ D(G) $ coincide. This proves $ (i) \\Leftrightarrow (iv) $.\n\t\n\tThe proofs of $ (i) \\Leftrightarrow (iii) $ and $ (i) \\Leftrightarrow (v) $ are similar.\n\\end{proof}\nBoth of the above characterizations extend the corresponding known characterizations \\cite[Theorem 3.1]{Sign_distance} and \\cite[Theorem 3.5]{Sign_distance} for signed graphs.\n\\begin{corollary}\\label{cor5.1}\n\tLet $ \\Phi=(G, \\varphi) $ be a $ \\mathbb{T} $-gain graph. Then $ \\Phi $ is balanced if and only if $ D(\\Phi) $ exists and is cospectral with $ D(G) $.\n\\end{corollary}\n\\section{Distance compatible gain graphs}\\label{Dist_compt}\n\t\nIn this final section, we establish a couple of characterizations of distance compatible $ \\mathbb{T} $-gain graphs. These results extend the corresponding known results for signed graphs \\cite{Sign_distance}.\n\\begin{theorem}\\label{Th6.1}\n\tLet $ \\Phi=(G, \\varphi) $ be any bipartite $ \\mathbb{T} $-gain graph. Then $ \\Phi $ is distance compatible if and only if $ \\Phi $ is balanced.\n\\end{theorem}\n\\begin{proof}\n\tIf $ \\Phi $ is balanced, then, by Theorem \\ref{th5.2}, $ \\Phi $ is distance compatible. Conversely, suppose $ \\Phi $ is distance compatible. Then, by Proposition \\ref{prop3.3}, $ \\Phi $ is order-independent and $ D^{\\max}(\\Phi)=D^{\\min}(\\Phi) $. 
Suppose that $ \\Phi $ is unbalanced. Since $ \\Phi $ is bipartite, there exists an unbalanced even cycle $ C $. Let $ v_i $ and $ v_j $ be two diametrically opposite vertices of $ C $. Then $ C $ contains two internally disjoint paths $ v_iP_1v_j $ and $ v_iP_2v_j $ of the same length. Since $ \\varphi(\\overrightarrow{C})=\\varphi(v_iP_1v_j)\\varphi(v_jP_2v_i)\\ne 1$, we have $ \\varphi(v_iP_1v_j)\\ne \\varphi(v_iP_2v_j) $.\\\\\n\t{\\bf Claim:} $ v_iP_1v_j $ and $ v_iP_2v_j $ are shortest paths between $ v_i$ and $ v_j $.\\\\\n\tSuppose $ v_iP_1v_j $ and $ v_iP_2v_j $ are not shortest paths, and let $ v_iPv_j $ be a shortest path. Then at least one of the even cycles formed by the pairs $ v_iP_1v_j $, $ v_iPv_j $ and $ v_iP_2v_j $, $ v_iPv_j $ is unbalanced and has length strictly smaller than that of $ C $, which is a contradiction. Since $ \\varphi(v_iP_1v_j)\\ne \\varphi(v_iP_2v_j) $, for any ordered vertex set $ (V(G),<) $ we get $ \\varphi_{\\max}^{<}(v_i,v_j)\\ne \\varphi_{\\min}^{<}(v_i,v_j) $, a contradiction. Thus $ \\Phi $ is balanced.\n\\end{proof}\nA \\emph{cut vertex} in a graph $G$ is a vertex whose removal increases the number of components of $G$. A \\emph{block} of a graph $ G $ is a maximal connected subgraph of $ G $ that has no cut vertex.\n\n\\begin{theorem}\\label{Th6.2}\n\tLet $ \\Phi=(G, \\varphi) $ be a $ \\mathbb{T} $-gain graph. Then $ \\Phi $ is distance compatible if and only if every block of $\\Phi$ is distance compatible.\n\\end{theorem}\n\\begin{proof}\n\tLet $ B_1,B_2, \\dots, B_k $ be the blocks of $\\Phi$. Suppose every block is distance compatible, and let $s,t\\in V(G)$. If $ s $ and $ t $ are in the same block, then they are distance compatible. Suppose $ s $ and $ t $ are in different blocks; without loss of generality, $ s $ is in $ B_1 $ and $ t $ is in $ B_2 $. 
Then any path $ sPt $ in $ G $ must pass through cut vertices $ v_i $ and $ v_j $, where $ v_i $ and $ v_j $ are in $ B_1 $ and $ B_2 $, respectively ($ v_i $ may coincide with $ v_j $). Any shortest path $ sPt$ can be decomposed as $ sPv_i\\cup v_iPv_j\\cup v_jPt $. Since $ B_1 $ is distance compatible, every shortest path $ sPv_i $ has the same gain. As the vertices $ v_i $ and $ v_j $ are connected by a unique path, the gain $ \\varphi(v_iPv_j) $ is unique. The proofs of the other cases are similar. Therefore, every shortest path from $ s $ to $ t $ has the same gain. Thus $ s $ and $ t $ are distance compatible, and hence $ \\Phi $ is distance compatible. The converse is easy to verify.\n\\end{proof}\n\nLet $ \\Phi=(G, \\varphi) $ be a $ \\mathbb{T} $-gain graph with the standard order $<$ on the vertex set, and let $ s,t \\in V( G) $. If $ \\varphi_{\\max}^{<}(s,t)= \\varphi_{\\min}^{<}(s,t)$, then $ s $ and $ t $ are called distance compatible. Note that $ \\varphi_{\\max}^{<}(s,t)= \\varphi_{\\min}^{<}(s,t)$ if and only if $ \\varphi_{\\max}^{<_a}(s,t)= \\varphi_{\\min}^{<_a}(s,t)$ for any other vertex order $<_a$. Therefore, the vertices $ s $ and $ t $ are called \\emph{distance-incompatible} if $ \\varphi_{\\max}^{<_a}(s,t)\\ne \\varphi_{\\min}^{<_a}(s,t) $ holds for some order $<_a$.\n\\begin{lemma}\\label{lm7.1}\n\tLet $ \\Phi=(G, \\varphi) $ be a $ 2 $-connected non-geodetic $ \\mathbb{T} $-gain graph. If $ s $ and $ t $ are two distance-incompatible vertices of least distance in $ G $, then there exist at least two internally disjoint shortest paths between $ s $ and $ t $ which have different gains.\n\\end{lemma}\n\\begin{proof}\n\tSince $ s,t $ are distance-incompatible and $ G $ is non-geodetic, there exist at least two shortest paths, say $sP_1t $ and $sP_2t$, such that $ \\varphi(sP_1t)\\ne \\varphi(sP_2t) $. If $sP_1t$ and $sP_2t $ are internally disjoint, then we are done. Suppose $sP_1t$ and $sP_2t $ are not internally disjoint. 
Let $ v_1, v_2, \\dots,v_p $ be the common internal vertices of the paths $sP_1t$ and $sP_2t $. Let $ C_1, C_2, \\dots, C_r $ be the cycles formed by $sP_1t$ and $sP_2t $. Thus $ \\varphi(sP_1t)\\varphi(tP_2s)=\\prod\\limits_{i=1}^{r} \\varphi(\\overrightarrow{C_i}) \\ne 1$. Then there exists a cycle $ C_j $ which is not balanced. Let $ C_j $ be formed by $ v_jP_1v_{j+1} $ and $ v_jP_2v_{j+1} $. Since $ sP_1t $ and $ sP_2t $ are shortest paths, $ v_jP_1v_{j+1} $ and $ v_jP_2v_{j+1} $ must be shortest paths between $ v_j $ and $ v_{j+1} $ of the same length. Also $ \\varphi(\\overrightarrow{C_j})=\\varphi(v_jP_1v_{j+1}) \\varphi(v_{j+1}P_2v_{j})\\ne 1$, and thus $ \\varphi(v_jP_1v_{j+1}) \\ne \\varphi(v_jP_2v_{j+1})$. Hence $ v_j $ and $ v_{j+1} $ are distance-incompatible, and the distance between them in $ G $ is smaller than the distance between the vertices $ s $ and $ t $ in $ G $, a contradiction.\n\\end{proof}\n\\begin{theorem}\\label{Th6.3}\n\tLet $ \\Phi=(G, \\varphi) $ be any $ 2 $-connected non-geodetic $ \\mathbb{T} $-gain graph. Then $ \\Phi $ is distance-incompatible if and only if there is an unbalanced even cycle containing two diametrically opposite vertices $ s $ and $ t $ with no shorter path between them.\n\\end{theorem}\n\\begin{proof}\n\tIf $ \\Phi $ is distance-incompatible, then there exist vertices $ s,t $ which are distance-incompatible and of least distance. Then, by Lemma \\ref{lm7.1}, there exists a pair of internally disjoint shortest paths between $ s $ and $ t $ which have different gains. Let $ C_{2l} $ be the cycle formed by the two disjoint paths. 
Therefore, $ \\varphi(\\overrightarrow{C_{2l}})\\ne 1$, and there is no shorter path between $ s $ and $ t $.\n\t\n\tThe converse is easy to verify.\n\\end{proof}\n\\section*{Acknowledgments}\nThe authors thank Prof. Thomas Zaslavsky, Binghamton University, for his comments and suggestions, which improved the paper's presentation. Aniruddha Samanta thanks the University Grants Commission (UGC) for financial support in the form of a Senior Research Fellowship (Ref.No: 19\/06\/2016(i)EU-V; Roll No. 423206). M. Rajesh Kannan would like to thank the SERB, Department of Science and Technology, India, for financial support through the projects MATRICS (MTR\/2018\/000986) and Early Career Research Award (ECR\/2017\/000643).\n\t\n\t\\bibliographystyle{amsplain}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nImaging of extrasolar planets is a very challenging goal because of the very large \nluminosity contrast (of the order of 10$^{-6}$ for young giant planets and of the order of \n10$^{-8}$-10$^{-10}$ for old giant and rocky planets) and the small angular separation (a few \ntenths of an arcsec for a planet at $\\sim$10 AU around a star at some tens of pc) between the host \nstar and the companion objects. \nHowever, a number of different projects are either already running (e.g. Project 1640 at the 5 m \nPalomar Telescope - see~\\cite[Crepp et al. 2011]{Crepp2011}) or about to begin, such as the \nGemini Planet Imager (GPI) at \nthe Gemini South Telescope~(\\cite[Macintosh et al. 2006]{Macintosh2006}) and SPHERE at the ESO \nVery Large Telescope~(\\cite[Beuzit et al. 2006]{Beuzit2006}).\nThis last instrument, in particular, includes three scientific channels: a \ndifferential imager and dual-band polarimeter called IRDIS, operating in the near infrared \nbetween the Y and Ks bands (\\cite[Dohlen et al. 2008]{Dohlen2008}); a polarimeter called \nZIMPOL, which will look for old planets at visible wavelengths (\\cite[Thalmann et al. 
2008]\n{Thalmann2008}) and an Integral Field Spectrograph (IFS), operating in the near infrared between\nthe Y and H bands (\\cite[Claudi et al. 2008]{Claudi2008}). In the following sections we\npresent the results of the laboratory tests of the IFS. \n\n\\section{Test description}\n\nTests on the IFS were held in January and February 2013 at the \\textit{Institut de \nPlanetologie et d'Astrophysique de Grenoble} (IPAG) facility, with the aim of validating the \nfunctionality of the science and calibration templates and of obtaining a preliminary estimate \nof the performance of the instrument. The tests were performed both in the YJ (0.95$\\div$1.35 \nmicron) and in the YH (0.95$\\div$1.65 micron) mode, using the appropriate combination of Lyot \ncoronagraph and apodized mask.\n\nData were then reduced with the Data Reduction and Handling (DRH) software, which \nperforms all the required calibrations and the speckle subtraction procedure through \nthe spectral deconvolution (SD) method (\\cite[Sparks \\& Ford 2002]{Sparks2002}). \nFurther speckle suppression can be obtained by applying angular differential imaging (ADI)\n(\\cite[Marois et al. 2006]{Marois2006}). Given that we cannot rotate the \nfield of view during our tests, we can only simulate the method, so our \nresults have to be regarded as an estimate of the contrast that we will be able to achieve. \n\n\\section{Results}\n\n\\begin{figure}[b]\n\\begin{center}\n \\subfloat{\\includegraphics[width=6.0cm]{mesa_fig1a.eps}} \\quad\n \\subfloat{\\includegraphics[width=6.0cm]{mesa_fig1b.eps}} \n \\caption{Contrast plots for the IFS operating in the YJ-mode (left panel) and in the YH-mode (right\n panel). }\n \\label{fig2}\n\\end{center}\n\\end{figure}\n\nIn Figure~\\ref{fig2} we display the 5$\\sigma$ contrast plots obtained for the IFS \noperating both in the YJ-mode (left panel) and in the YH-mode (right panel). 
A contrast better than 10$^{-6}$ can be obtained for both modes by appropriately combining SD and ADI. To \nfurther confirm these results, we added a number of simulated planets to the raw data at different separations, with luminosity contrasts of 10$^{-5}$ and 10$^{-6}$, and reduced these data following the same procedure. All simulated planets are visible with an S\/N greater than 5. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}