\\section{Introduction}\n\nNetwork data is ubiquitous, such as social networks, biological networks, and citation networks. Recently, Graph Convolutional Networks (GCNs), a class of neural networks designed to learn from graph data, have shown great popularity in tackling graph analytics problems, such as node classification~\\cite{abu-el-haija2019mixhop, wu2019demo}, graph classification~\\cite{gao2019graph, zhang2018an}, link prediction~\\cite{you2019position, kipf2016variational} and recommendation~\\cite{ying2018graph, fan2019graph}.\n\nThe typical GCN~\\cite{kipf2017semi} and its variants~\\cite{ve2018graph, hamilton2017inductive, ma2019disentangled, you2019position, wu2019simplifying} usually follow a message-passing manner. A key step is feature aggregation, i.e., a node aggregates feature information from its topological neighbors in each convolutional layer. In this way, feature information propagates over the network topology into node embeddings, and the node embeddings learned as such are then used in classification tasks. The whole process is partially supervised by the node labels. The enormous success of GCN is partially due to the fact that GCN provides a fusion strategy over topological structures and node features to learn node embeddings, and that the fusion process is supervised by an end-to-end learning framework.\n\nSome recent studies, however, disclose certain weaknesses of the state-of-the-art GCNs in fusing node features and topological structures. For example, Li~\\textit{et al.}~\\cite{li2018deeper} show that GCNs actually perform Laplacian smoothing on node features, making the node embeddings across the whole network gradually converge. 
Nt and Maehara~\\cite{nt2019revisiting} and Wu~\\textit{et al.}~\\cite{wu2019simplifying} prove that topological structures play the role of low-pass filtering on node features when the feature information propagates over the network topology. Gao~\\textit{et~al.}~\\cite{gao2019conditional} design a Conditional Random Field (CRF) layer in GCN to explicitly preserve connectivity between nodes.\n\n\\emph{What information do GCNs really learn and fuse from topological structures and node features?} This is a fundamental question since GCNs are often used as an end-to-end learning framework. A well-informed answer to this question can help us understand the capability and limitations of GCNs in a principled way. This directly motivates our study.\n\nAs the first contribution of this study, we present experiments assessing the capability of GCNs in fusing topological structures and node features. Surprisingly, our experiments show that the fusion capability of GCNs on network topological structures and node features is clearly distant from optimal, or even satisfactory. Even in simple situations where the correlation between node features (or topology) and node labels is very clear, GCNs still cannot adequately fuse node features and topological structures to extract the most correlated information (shown in Section \\ref{sec:capability}). 
The weakness may severely hinder the capability of GCNs in some classification tasks, since GCNs may not be able to adaptively learn the correlation information between topological structures and node features.\n\nOnce the weakness of the state-of-the-art GCNs in fusion is identified, a natural question is, ``\\emph{Can we remedy the weakness and design a new type of GCNs that can retain the advantages of the state-of-the-art GCNs and, at the same time, enhance the capability of fusing topological structures and node features substantially?}''\n\nA GCN with good fusion capability should substantially extract and fuse the most correlated information for the classification task. However, the biggest obstacle in reality is that the correlation between network data and the classification task is usually very complex and agnostic: the classification can be correlated with the topology, the node features, or their combinations. This paper tackles the challenge and proposes an Adaptive Multi-channel Graph Convolutional Network for semi-supervised classification (AM-GCN). The central idea is that we learn node embeddings based on node features, topological structures, and their combinations simultaneously. The rationale is that the similarity inferred from node features and that inferred from topological structures are complementary to each other and can be fused adaptively to derive deeper correlation information for classification tasks.\n\nTechnically, in order to fully exploit the information in feature space, we derive the \\textit{k}-nearest neighbor graph generated from node features as the feature structural graph.\nWith the feature graph and the topology graph, we propagate node features over both the topology space and the feature space, so as to extract two specific embeddings in these two spaces with two specific convolution modules. 
Considering the common characteristics between the two spaces, we design a common convolution module with a parameter sharing strategy to extract the common embedding shared by them. We further utilize an attention mechanism to automatically learn the importance weights of the different embeddings, so as to fuse them adaptively. In this way, node labels are able to supervise the learning process to adaptively adjust the weights and extract the most correlated information. Moreover, we design consistency and disparity constraints to ensure the consistency and disparity of the learned embeddings.\n\nWe summarize our main contributions as follows:\n\\begin{itemize}\n\\vspace{-\\topsep}\\item We present experiments assessing the capability of GCNs in fusing topological structures and node features and identify the weakness of GCN. We further study the key problem of how to substantially enhance the fusion capability of GCN for classification.\n\\item We propose a novel adaptive multi-channel GCN framework, AM-GCN, which performs the graph convolution operation over both topology and feature spaces. Combined with an attention mechanism, different information can be adequately fused.\n\\item Our extensive experiments on a series of benchmark data sets clearly show that AM-GCN outperforms the state-of-the-art GCNs and nicely extracts the most correlated information from both node features and topological structures for challenging classification tasks.\n\\vspace{-\\topsep}\\end{itemize}\n\nThe rest of the paper is organized as follows. In Section~\\ref{sec:capability} we experimentally investigate the capability of GCNs in fusing node features and topology. In Section~\\ref{sec:am-gcn}, we develop AM-GCN. We report experimental results in Section~\\ref{sec:exp}, and review related work in Section~\\ref{sec:related-work}. 
We conclude the paper in Section~\\ref{sec:con}.\n\n\\vspace{-\\topsep}\\section{Fusion Capability of GCNs: An Experimental Investigation}\\label{sec:capability}\n\nIn this section, we use two simple yet intuitive cases to examine whether the state-of-the-art GCNs can adaptively learn from node features and topological structures in graphs and fuse them sufficiently for classification tasks. The main idea is that we clearly establish a high correlation between node labels and network topology or node features, respectively, and then check the performance of GCN on these two simple cases. A GCN with good fusion capability should adaptively extract the correlated information under the supervision of node labels and provide a good result. However, if the performance drops sharply in comparison with baselines, this demonstrates that GCN cannot adaptively extract information from node features and topological structures, even when there is a high correlation between node features or topological structures and the node labels.\n\n\n\\vspace{-5pt}\\subsection{Case 1: Random Topology and Correlated Node Features}\n\nWe generate a random network consisting of 900 nodes, where the probability of building an edge between any two nodes is $0.03$. Each node has a feature vector of 50 dimensions. To generate node features, we randomly assign 3 labels to the 900 nodes, and for the nodes with the same label, we use one Gaussian distribution to generate the node features. The Gaussian distributions for the three classes of nodes have the same covariance matrix, but three different centers far away from each other. In this data set, the node labels are highly correlated with the node features, but not with the topological structures.\n\nWe apply GCN~\\cite{kipf2017semi} to this network. For each class we randomly select 20 nodes for training and another 200 nodes for testing. We carefully tune the hyper-parameters to report the best performance and avoid over-smoothing. 
Also, we apply MLP~\\cite{pal1992multilayer} to the node features only. The classification accuracies of GCN and MLP are $75.2\\%$ and $100\\%$, respectively.\n\nThe results meet the expectation. Since the node features are highly correlated with the node labels, MLP shows excellent performance. GCN extracts information from both the node features and the topological structures, but cannot adaptively fuse them to avoid the interference from topological structures. It cannot match the high performance of MLP.\n\n\\vspace{-5pt}\\subsection{Case 2: Correlated Topology and Random Node Features}\n\nWe generate another network with 900 nodes. This time, the node features, each of 50 dimensions, are randomly generated. For the topological structure, we employ the Stochastic Blockmodel (SBM)~\\cite{karrer2011stochastic} to split nodes into 3 communities (nodes 0-299, 300-599, 600-899, respectively). Within each community, the probability of building an edge is set to $0.03$, and the probability of building an edge between nodes in different communities is set to $0.0015$. In this data set, the node labels are determined by the communities, i.e., nodes in the same community have the same label.\n\nAgain we apply GCN to this network. We also apply DeepWalk~\\cite{perozzi2014deepwalk} to the topology of the network, that is, the features are ignored by DeepWalk. The classification accuracies of GCN and DeepWalk are $87\\%$ and $100\\%$, respectively.\n\n\nDeepWalk performs well because it models network topological structures thoroughly. GCN extracts information from both the node features and the topological structures, but cannot adaptively fuse them to avoid the interference from node features. It cannot match the high performance of DeepWalk.\n\n\n\\textbf{Summary}. These cases show that the current fusion mechanism of GCN~\\cite{kipf2017semi} is distant from optimal or even satisfactory. 
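For concreteness, the two synthetic data sets above can be generated in a few lines. The following is a minimal numpy sketch: the sizes and edge probabilities follow the text, while the random seed and the scale of the class centers are illustrative choices of ours, not the exact values used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is arbitrary
n, d, c = 900, 50, 3
labels = np.repeat(np.arange(c), n // c)

# Case 1: random topology (edge prob. 0.03) + correlated features,
# one Gaussian per class with shared covariance and far-apart centers.
centers = 10.0 * rng.standard_normal((c, d))        # center scale is illustrative
feat_corr = centers[labels] + rng.standard_normal((n, d))
adj_rand = np.triu(rng.random((n, n)) < 0.03, k=1).astype(int)
adj_rand = adj_rand + adj_rand.T                    # symmetric, no self-loops

# Case 2: SBM topology (intra-community 0.03, inter 0.0015) + random features.
p = np.where(labels[:, None] == labels[None, :], 0.03, 0.0015)
adj_sbm = np.triu(rng.random((n, n)) < p, k=1).astype(int)
adj_sbm = adj_sbm + adj_sbm.T
feat_rand = rng.standard_normal((n, d))
```

Feeding (adj_rand, feat_corr) and (adj_sbm, feat_rand) to any GCN implementation reproduces the two settings described above.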
Even when the correlation between node labels and network topology or node features is very high, the current GCN cannot make full use of the supervision by node labels to adaptively extract the most correlated information. The situation is even more complex in reality, because it is hard to know whether the topology or the node features are more correlated with the final task, which prompts us to rethink the current mechanism of GCN.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=0.48\\textwidth]{model.pdf}\n\t\\setlength{\\abovecaptionskip}{2pt}\n\t\\setlength{\\belowcaptionskip}{-13pt}\n\t\\caption{The framework of the AM-GCN model. The node feature matrix \\textbf{X} is used to construct a feature graph. AM-GCN consists of two \\textit{specific convolution modules}, one \\textit{common convolution module} and the \\textit{attention mechanism}.}\n\t\\Description{AM-GCN Model.}\n\t\\label{model}\n\\end{figure}\n\n\n\\vspace{-\\topsep}\\section{AM-GCN: the Proposed Model}\\label{sec:am-gcn}\n\n\\textbf{Problem Settings:} We focus on semi-supervised node classification in an attributed graph $G=(\\textbf{A},\\textbf{X})$, where $\\textbf{A}\\in\\mathbb{R}^{n\\times n}$ is the symmetric adjacency matrix with $n$ nodes and $\\textbf{X}\\in\\mathbb{R}^{n\\times d}$ is the node feature matrix, where \\textit{d} is the dimension of node features. Specifically, $A_{ij}=1$ indicates that there is an edge between nodes $i$ and $j$; otherwise, $A_{ij}=0$. We suppose each node belongs to one out of $C$ classes.\n\nThe overall framework of AM-GCN is shown in Figure \\ref{model}. The key idea is that AM-GCN permits node features to propagate not only in the topology space, but also in the feature space, and the most correlated information with respect to node labels should be extracted from both of these two spaces. To this end, we construct a feature graph based on node features $\\textbf{X}$. 
Then, with two specific convolution modules, $\\textbf{X}$ is able to propagate over both the feature graph and the topology graph to learn two specific embeddings $\\textbf{Z}_F$ and $\\textbf{Z}_T$, respectively. Further, considering that the information in these two spaces has common characteristics, we design a common convolution module with a parameter sharing strategy to learn the common embeddings $\\textbf{Z}_{CF}$ and $\\textbf{Z}_{CT}$; a consistency constraint $\\mathcal{L}_{c}$ is also employed to enhance the \"common\" property of $\\textbf{Z}_{CF}$ and $\\textbf{Z}_{CT}$. In addition, a disparity constraint $\\mathcal{L}_{d}$ is employed to ensure the independence between $\\textbf{Z}_F$ and $\\textbf{Z}_{CF}$, as well as between $\\textbf{Z}_T$ and $\\textbf{Z}_{CT}$. Considering that node labels may be correlated with topology, features, or both, AM-GCN utilizes an attention mechanism to adaptively fuse these embeddings with learned weights, so as to extract the most correlated information $\\textbf{Z}$ for the final classification task.\n\n\n\\subsection{Specific Convolution Module}\n\nFirst, in order to capture the underlying structure of nodes in feature space, we construct a \\textit{k}-nearest neighbor (\\textit{k}NN) graph $G_f = (\\textbf{A}_f, \\textbf{X})$ based on the node feature matrix \\textbf{X}, where $\\textbf{A}_f$ is the adjacency matrix of the \\textit{k}NN graph. Specifically, we first calculate the similarity matrix $\\textbf{S} \\in \\mathbb{R}^{n\\times n}$ among $n$ nodes. 
There are many ways to obtain \\textbf{S}, and we list two popular ones here, in which $\\textbf{x}_i$ and $\\textbf{x}_j$ are the feature vectors of nodes $i$ and $j$:\n\n1) \\textbf{Cosine Similarity}: It uses the cosine value of the angle between two vectors to measure the similarity:\n\\begin{equation}\n\\textbf{S}_{ij} = \\frac{{\\textbf{x}_i}\\cdot{\\textbf{x}_j}}{|{\\textbf{x}_i}||{\\textbf{x}_j}|}.\n\\end{equation}\n\n2) \\textbf{Heat Kernel}: The similarity is calculated by Eq. \\eqref{kernel}, where \\textit{t} is the time parameter in the heat conduction equation and we set $t=2$:\n\\begin{equation}\n\\textbf{S}_{ij} = e^{-\\frac{\\|{\\textbf{x}_i}-{\\textbf{x}_j}\\|^2}{t}}.\n\\label{kernel}\n\\end{equation}\nHere we uniformly choose cosine similarity to obtain the similarity matrix $\\textbf{S}$, and then we select the top-$k$ most similar nodes for each node to set edges, finally obtaining the adjacency matrix $\\textbf{A}_f$.\n\nThen, with the input graph $(\\textbf{A}_f, \\textbf{X})$ in feature space, the output $\\textbf{Z}^{(\\textit{l})}_f$ of the \\textit{l}-th layer can be represented as:\n\\begin{equation}\n\\textbf{Z}^{(\\textit{l})}_f = ReLU(\n\\tilde{\\textbf{D}}^{-\\frac{1}{2}}_f\n\\tilde{\\textbf{A}}_f\n\\tilde{\\textbf{D}}^{-\\frac{1}{2}}_f\n\\textbf{Z}^{(\\textit{l-1})}_f\n\\textbf{W}^{(\\textit{l})}_f),\n\\end{equation}\nwhere $ \\textbf{W}^{(\\textit{l})}_f$ is the weight matrix of the \\textit{l}-th layer in GCN, $ReLU(\\cdot)$ denotes the ReLU activation function, and the initial $\\textbf{Z}^{(0)}_f =\\textbf{X}$. Specifically, we have $\\tilde{\\textbf{A}}_f = \\textbf{A}_f + \\textbf{I}_f $, and $\\tilde{\\textbf{D}}_f$ is the diagonal degree matrix of $\\tilde{\\textbf{A}}_f$. 
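The module above can be sketched in numpy as follows: a cosine-similarity \textit{k}NN graph followed by one symmetrically normalized convolution layer. The toy sizes, random inputs, and random weight matrix are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def knn_graph(X, k):
    """Adjacency of the kNN graph built from the cosine-similarity matrix S."""
    Xn = X / np.clip(np.linalg.norm(X, axis=1, keepdims=True), 1e-12, None)
    S = Xn @ Xn.T                                   # cosine similarity of all pairs
    np.fill_diagonal(S, -np.inf)                    # exclude self-edges
    A = np.zeros(S.shape)
    top_k = np.argsort(-S, axis=1)[:, :k]           # k most similar nodes per node
    A[np.repeat(np.arange(len(X)), k), top_k.ravel()] = 1.0
    return np.maximum(A, A.T)                       # symmetrize

def gcn_layer(A, Z, W):
    """One layer: ReLU(D^-1/2 (A + I) D^-1/2 Z W)."""
    A_tilde = A + np.eye(len(A))                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ Z @ W, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))                    # toy feature matrix
A_f = knn_graph(X, k=3)                             # feature graph
Z1 = gcn_layer(A_f, X, rng.standard_normal((8, 4)))  # first-layer embedding
```

Replacing `A_f` with the original adjacency `A` in the last call gives the topology-space branch that produces Z_T.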
We denote the last-layer output embedding as $\\textbf{Z}_F$. In this way, we learn the node embedding $\\textbf{Z}_F$, which captures the specific information in feature space.\n\nAs for the topology space, we have the original input graph $G_t = (\\textbf{A}_t, \\textbf{X}_t)$, where $ \\textbf{A}_t = \\textbf{A} $ and $\\textbf{X}_t = \\textbf{X}$. The learned output embedding $\\textbf{Z}_T$ based on the topology graph can then be calculated in the same way as in feature space. Therefore, the specific information encoded in topology space can be extracted.\n\n\\subsection{\\textbf{Common Convolution Module}}\n\nIn reality, the feature and topology spaces are not completely irrelevant. Basically, the node classification task may be correlated with the information either in feature space, or in topology space, or in both of them, which is difficult to know beforehand. Therefore, we not only need to extract the node-specific embeddings in these two spaces, but also the common information shared by the two spaces. In this way, it becomes more flexible for the task to determine which parts of the information are the most correlated. 
To address this, we design a \\textit{Common}-GCN with a parameter sharing strategy to get the embedding shared in the two spaces.\n\nFirst, we utilize \\textit{Common}-GCN to extract the node embedding $\\textbf{Z}^{(\\textit{l})}_{ct}$ from the topology graph ($\\textbf{A}_t$, $\\textbf{X}$) as follows:\n\\begin{equation}\n\\label{tGCN}\n \\textbf{Z}^{(\\textit{l})}_{ct} = ReLU(\n \\tilde{\\textbf{D}}^{-\\frac{1}{2}}_t\n \\tilde{\\textbf{A}}_t\n \\tilde{\\textbf{D}}^{-\\frac{1}{2}}_t\n \\textbf{Z}^{(\\textit{l-1})}_{ct}\n \\textbf{W}^{(\\textit{l})}_c),\n\\end{equation}\nwhere $ \\textbf{W}^{(\\textit{l})}_c$ is the \\textit{l}-th layer weight matrix of \\textit{Common}-GCN, $\\textbf{Z}^{(\\textit{l-1})}_{ct}$ is the node embedding of the $(l-1)$-th layer, and $\\textbf{Z}^{(0)}_{ct}=\\textbf{X}$.\nWhen utilizing \\textit{Common}-GCN to learn the node embedding from the feature graph ($\\textbf{A}_f$, $\\textbf{X}$), in order to extract the shared information, we share the same weight matrix $\\textbf{W}^{(\\textit{l})}_c$ for every layer of \\textit{Common}-GCN as follows:\n\\begin{equation}\n\\label{fGCN}\n \\textbf{Z}^{(\\textit{l})}_{cf} = ReLU(\n \\tilde{\\textbf{D}}^{-\\frac{1}{2}}_f\n \\tilde{\\textbf{A}}_f\n \\tilde{\\textbf{D}}^{-\\frac{1}{2}}_f\n \\textbf{Z}^{(\\textit{l-1})}_{cf}\n \\textbf{W}^{(\\textit{l})}_c),\n\\end{equation}\nwhere $\\textbf{Z}^{(\\textit{l})}_{cf}$ is the output embedding of the \\textit{l}-th layer and $\\textbf{Z}^{(0)}_{cf}=\\textbf{X}$.\nThe shared weight matrix can filter out the shared characteristics of the two spaces. According to the different input graphs, we can get two output embeddings $\\textbf{Z}_{CT}$ and $\\textbf{Z}_{CF}$, and the common embedding $\\textbf{Z}_C$ of the two spaces is:\n\\begin{equation}\n\\textbf{Z}_C = (\\textbf{Z}_{CT} + \\textbf{Z}_{CF})\/2.\n\\end{equation}\n\n\\subsection{Attention Mechanism}\n\nNow we have two specific embeddings $\\textbf{Z}_T$ and $\\textbf{Z}_F$, and one common embedding $\\textbf{Z}_C$. 
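Before fusing them, note that the parameter-sharing idea of the common convolution module amounts to running identical weight matrices over the two graphs, with only the normalized adjacency changing. A self-contained numpy sketch, in which the toy graphs, dimensions, and random inputs are illustrative assumptions:

```python
import numpy as np

def gcn_layer(A, Z, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 Z W)."""
    A_tilde = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ Z @ W, 0.0)

def random_graph(rng, n, p):
    """Toy symmetric adjacency with edge probability p (illustration only)."""
    upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    return upper + upper.T

rng = np.random.default_rng(0)
n, d_in, d_out = 12, 6, 4
X = rng.standard_normal((n, d_in))
A_t = random_graph(rng, n, 0.3)            # stands in for the topology graph
A_f = random_graph(rng, n, 0.3)            # stands in for the feature (kNN) graph

W_c = rng.standard_normal((d_in, d_out))   # ONE weight matrix shared by both graphs
Z_ct = gcn_layer(A_t, X, W_c)              # Common-GCN on the topology graph
Z_cf = gcn_layer(A_f, X, W_c)              # Common-GCN on the feature graph
Z_c = (Z_ct + Z_cf) / 2                    # common embedding
```

Because `W_c` is the single trainable matrix in both calls, gradients from both branches update the same parameters, which is what forces the two outputs toward shared characteristics.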
Considering that the node labels can be correlated with one of them or even their combinations, we use the attention mechanism $\\textit{att}(\\textbf{Z}_{T}, \\textbf{Z}_{C}, \\textbf{Z}_{F})$ to learn their corresponding importance $(\\bm{\\alpha}_t, \\bm{\\alpha}_c, \\bm{\\alpha}_f)$ as follows:\n\\begin{equation}\n(\\bm{\\alpha}_t, \\bm{\\alpha}_c, \\bm{\\alpha}_f) = \\textit{att} (\\textbf{Z}_{T}, \\textbf{Z}_{C}, \\textbf{Z}_{F}),\n\\end{equation}\nwhere $\\bm{\\alpha}_t, \\bm{\\alpha}_c, \\bm{\\alpha}_f \\in \\mathbb{R}^{n\\times1}$ indicate the attention values of the $n$ nodes with embeddings $\\textbf{Z}_{T}, \\textbf{Z}_{C}, \\textbf{Z}_{F}$, respectively.\n\nHere we focus on node $i$, whose embedding in $\\textbf{Z}_{T}$ is $\\textbf{z}_{T}^i \\in \\mathbb{R}^{1\\times h} $ (i.e., the $i$-th row of $\\textbf{Z}_T$).\nWe first transform the embedding through a nonlinear transformation, and then use one shared attention vector $\\textbf{q} \\in \\mathbb{R}^{h'\\times 1}$ to get the attention value $\\omega_{T}^i$ as follows:\n\\begin{equation}\n\\omega_T^i = \\textbf{q}^T \\cdot tanh(\\textbf{W} \\cdot (\\textbf{z}_T^i)^T + \\textbf{b}).\n\\end{equation}\nHere $\\textbf{W} \\in \\mathbb{R}^{h'\\times h}$ is the weight matrix and $\\textbf{b}\\in \\mathbb{R}^{h'\\times 1}$ is the bias vector. Similarly, we can get the attention values $\\omega_{C}^i$ and $\\omega_{F}^i$ for node $i$ with the embedding matrices $\\textbf{Z}_C$ and $\\textbf{Z}_F$, respectively.\nWe then normalize the attention values $\\omega_{T}^i, \\omega_{C}^i, \\omega_{F}^i$ with the softmax function to get the final weight:\n\\begin{equation}\n\\alpha_T^i =softmax(\\omega_T^i) = \\frac{exp(\\omega_T^i)}{exp(\\omega_T^i)+exp(\\omega_C^i)+exp(\\omega_F^i)}.\n\\end{equation}\nA larger $\\alpha_T^i$ implies that the corresponding embedding is more important. 
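This scoring and normalization reduce to a shared linear map, a tanh, a projection onto the attention vector, and a per-node softmax over the three channels. A minimal numpy sketch with illustrative dimensions and random parameters (the final weighted combination is included for completeness):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, h_prime = 10, 8, 5                    # n nodes, embedding dim h, attention dim h'
Z_T = rng.standard_normal((n, h))           # toy stand-ins for the three embeddings
Z_C = rng.standard_normal((n, h))
Z_F = rng.standard_normal((n, h))

# Shared attention parameters: W (h' x h), b (h' x 1), q (h' x 1).
W = rng.standard_normal((h_prime, h))
b = rng.standard_normal((h_prime, 1))
q = rng.standard_normal((h_prime, 1))

def attn_score(Z):
    """omega^i = q^T tanh(W (z^i)^T + b), computed for every node at once."""
    return (q.T @ np.tanh(W @ Z.T + b)).ravel()   # shape (n,)

omega = np.stack([attn_score(Z_T), attn_score(Z_C), attn_score(Z_F)])  # (3, n)
alpha = np.exp(omega) / np.exp(omega).sum(axis=0, keepdims=True)       # per-node softmax
a_t, a_c, a_f = alpha                                                  # each (n,)

# Weighted combination of the three embeddings into the final embedding Z.
Z = a_t[:, None] * Z_T + a_c[:, None] * Z_C + a_f[:, None] * Z_F
```

Each column of `alpha` sums to 1, so every node distributes a unit budget of importance over the topology, common, and feature channels.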
Similarly, $\\alpha_C^i =softmax(\\omega_C^i)$ and $\\alpha_F^i =softmax(\\omega_F^i)$.\nFor all the $n$ nodes, we have the learned weights $\\bm{\\alpha}_t=[\\alpha_T^i], \\bm{\\alpha}_c=[\\alpha_C^i], \\bm{\\alpha}_f=[\\alpha_F^i] \\in \\mathbb{R}^{n\\times1}$, and denote $\\bm{\\alpha_T} = diag(\\bm{\\alpha}_t)$, $\\bm{\\alpha_C} = diag(\\bm{\\alpha}_c)$ and $\\bm{\\alpha_F} = diag(\\bm{\\alpha}_f)$. Then we combine these three embeddings to obtain the final embedding \\textbf{Z}:\n\\begin{equation}\n\\label{output}\n\\textbf{Z} = \\bm{\\alpha_T} \\cdot \\textbf{Z}_T + \\bm{\\alpha_C} \\cdot \\textbf{Z}_C + \\bm{\\alpha_F} \\cdot \\textbf{Z}_F.\n\\end{equation}\n\n\n\\subsection{Objective Function}\n\n\\subsubsection{\\textbf{Consistency Constraint}}\n\nFor the two output embeddings $\\textbf{Z}_{CT}$ and $\\textbf{Z}_{CF}$ of \\textit{Common}-GCN, although \\textit{Common}-GCN already uses a shared weight matrix, here we design a consistency constraint to further enhance their commonality.\n\nFirst, we use $L_2$-normalization to normalize the embedding matrices as $\\textbf{Z}_{CTnor}$, $\\textbf{Z}_{CFnor}$.\nThen, the two normalized matrices can be used to capture the similarities of the $n$ nodes as $\\textbf{S}_{T}$ and $\\textbf{S}_{F}$ as follows:\n\\begin{equation}\n\\begin{aligned}\n\\textbf{S}_{T} &= \\textbf{Z}_{CTnor}^{} \\cdot \\textbf{Z}_{CTnor}^T,\\\\\n\\textbf{S}_{F} &= \\textbf{Z}_{CFnor}^{} \\cdot \\textbf{Z}_{CFnor}^T.\n\\end{aligned}\n\\end{equation}\n\nThe consistency implies that the two similarity matrices should be similar, which gives rise to the following constraint:\n\\begin{equation}\n\\mathcal{L}_{c} = \\|\\textbf{S}_{T}-\\textbf{S}_{F}\\|_F^2.\n\\end{equation}\n\n\\subsubsection{\\textbf{Disparity Constraint}}\nBecause the embeddings $\\textbf{Z}_T$ and $\\textbf{Z}_{CT}$ are learned from the same graph $G_t = (\\textbf{A}_t, \\textbf{X}_t)$, to ensure they capture different information, we employ the Hilbert-Schmidt Independence 
Criterion (HSIC)~\\cite{song2007supervised}, a simple but effective measure of independence, to enhance the disparity of these two embeddings. Due to its simplicity and neat theoretical properties, HSIC has been applied to several machine learning tasks~\\cite{niu2010multiple, gretton2005measuring}. Formally, the HSIC constraint on $\\textbf{Z}_T$ and $\\textbf{Z}_{CT}$ is defined as:\n\\begin{equation}\nHSIC(\\textbf{Z}_{T}, \\textbf{Z}_{CT}) = (n-1)^{-2}tr(\\textbf{R}\\textbf{K}_{T}\\textbf{R}\\textbf{K}_{CT}),\n\\end{equation}\nwhere $\\textbf{K}_{T}$ and $\\textbf{K}_{CT}$ are the Gram matrices with $k_{T,ij}=k_T(\\textbf{z}_{T}^i,\\textbf{z}_{T}^j)$ and $k_{CT,ij}=k_{CT}(\\textbf{z}_{CT}^i,\\textbf{z}_{CT}^j)$. Here $\\textbf{R} = \\textbf{I} - \\frac{1}{n}\\textbf{e}\\textbf{e}^T$, where $\\textbf{I}$ is the identity matrix and $\\textbf{e}$ is an all-one column vector. In our implementation, we use the inner product kernel function for $\\textbf{K}_{T}$ and $\\textbf{K}_{CT}$.\n\nSimilarly, considering that the embeddings $\\textbf{Z}_F$ and $\\textbf{Z}_{CF}$ are also learned from the same graph $(\\textbf{A}_f, \\textbf{X})$, their disparity should also be enhanced by HSIC:\n\\begin{equation}\nHSIC(\\textbf{Z}_{F}, \\textbf{Z}_{CF}) = (n-1)^{-2}tr(\\textbf{R}\\textbf{K}_{F}\\textbf{R}\\textbf{K}_{CF}).\n\\end{equation}\n\nThen we set the disparity constraint as $\\mathcal{L}_{d}$, where:\n\\begin{equation}\n\\mathcal{L}_{d} = HSIC(\\textbf{Z}_{T}, \\textbf{Z}_{CT}) + HSIC(\\textbf{Z}_{F}, \\textbf{Z}_{CF}).\n\\end{equation}\n\n\\begin{table}[htbp]\n\t\\caption{The statistics of the 
datasets}\n\t\\label{dataset}\n\t\\setlength{\\tabcolsep}{0.5mm}{\n\t\t\\begin{tabular}{lcccccc}\n\t\t\t\\hline\n\t\t\tDataset&Nodes&Edges&Classes&Features&Training&Test\\\\\n\t\t\t\\hline\n\t\t\tCiteseer&3327&4732&6&3703&120\/240\/360&1000\\\\\n\t\t\tUAI2010&3067&28311&19&4973&380\/760\/1140&1000\\\\\n\t\t\tACM&3025&13128&3&1870&60\/120\/180&1000\\\\\n\t\t\tBlogCatalog&5196&171743&6&8189&120\/240\/360&1000\\\\\n\t\t\tFlickr&7575&239738&9&12047&180\/360\/540&1000\\\\\n\t\t\tCoraFull&19793&65311&70&8710&1400\/2800\/4200&1000\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\\end{table}\n\n\\subsubsection{\\textbf{Optimization Objective}}\n\nWe use the output embedding \\textbf{Z} in Eq. \\eqref{output} for semi-supervised multi-class classification with a linear transformation and a softmax function. Denote the class predictions for the $n$ nodes as $\\hat{\\textbf{Y}}=[\\hat{y}_{ic}]\\in \\mathbb{R}^{n\\times C}$, where $\\hat{y}_{ic}$ is the probability that node \\textit{i} belongs to class \\textit{c}. Then $\\hat{\\textbf{Y}}$ can be calculated as:\n\\begin{equation}\n\\hat{\\textbf{Y}} = softmax(\\textbf{W} \\cdot \\textbf{Z} + \\textbf{b}),\n\\end{equation}\nwhere $softmax(x_c)=\\frac{\\exp(x_c)}{\\sum_{c'=1}^{C}\\exp(x_{c'})}$ is a normalizer across all classes.\n\nSuppose the training set is $L$; for each $l\\in L$ the real label is $\\textbf{Y}_l$ and the predicted label is $\\hat{\\textbf{Y}}_l$. 
Then the cross-entropy loss for node classification over all training nodes, denoted $\\mathcal{L}_{t}$, is:\n\\begin{equation}\n\\mathcal{L}_{t} = -\\sum\\nolimits_{l\\in L}\\sum\\nolimits_{i=1}^{C}\\textbf{Y}_{li}\\ln\\hat{\\textbf{Y}}_\\textit{li}.\n\\end{equation}\n\nCombining the node classification task and the constraints, we have the following overall objective function:\n\\begin{equation}\n\\label{final_target}\n\\mathcal{L} = \\mathcal{L}_{t}+\\gamma\\mathcal{L}_{c}+\\beta\\mathcal{L}_{d},\n\\end{equation}\nwhere $\\gamma$ and $\\beta$ are the coefficients of the consistency and disparity constraint terms, respectively. Guided by the labeled data, we can optimize the proposed model via back propagation and learn the embedding of nodes for classification.\n\n\n\\section{Experiments}\\label{sec:exp}\n\\subsection{Experimental Setup}\n\n\\noindent{\\bfseries Datasets}\n\\quad Our proposed AM-GCN is evaluated on six real-world datasets summarized in Table \\ref{dataset}. Moreover, we provide all the dataset links in the supplement for reproducibility.\n\n\\begin{itemize}\n\\item \\textbf{Citeseer}~\\cite{kipf2017semi}: Citeseer is a research paper citation network, where nodes are publications and edges are citation links. Node attributes are bag-of-words representations of the papers, and all nodes are divided into six areas.\n\\item \\textbf{UAI2010}~\\cite{wang2018a}: We use this dataset with 3067 nodes and 28311 edges, which has been tested in graph convolutional networks for community detection in~\\cite{wang2018a}.\n\\item \\textbf{ACM}~\\cite{wang2019heterogeneous}: This network is extracted from the ACM dataset, where nodes represent papers and there is an edge between two papers if they share an author. All the papers are divided into 3 classes (\\textit{Database}, \\textit{Wireless Communication}, \\textit{Data Mining}). 
The features are the bag-of-words representations of paper keywords.\n\item \\textbf{BlogCatalog}~\\cite{meng2019co}: This is a social network of bloggers and their social relationships from the BlogCatalog website. Node attributes are constructed from the keywords of user profiles, the labels represent the topic categories provided by the authors, and all nodes are divided into 6 classes.\n\\item \\textbf{Flickr}~\\cite{meng2019co}: Flickr is an image and video hosting website, where users interact with each other via photo sharing. It is a social network where nodes represent users and edges represent their relationships, and all the nodes are divided into 9 classes according to the interest groups of users.\n\n\\item \\textbf{CoraFull}~\\cite{bojchevski2018deep}: This is the larger version of the well-known Cora citation network, where nodes represent papers and edges represent their citations, and the nodes are labeled based on the paper topics.\n\n\\end{itemize}\n\n\\noindent{\\bfseries Baselines} \\quad We compare AM-GCN with two types of state-of-the-art methods, covering two network embedding algorithms and six graph neural network based methods. Moreover, we provide all the code links in the supplement for reproducibility.\n\n\\begin{itemize}\n\\item \\textbf{DeepWalk}~\\cite{perozzi2014deepwalk} is a network embedding method which uses random walks to obtain contextual information and the skip-gram algorithm to learn network representations.\n\\item \\textbf{LINE}~\\cite{tang2015line} is a large-scale network embedding method preserving the first-order and second-order proximity of the network separately. 
Here we use LINE (1st+2nd).\n\\item \\textbf{Chebyshev}~\\cite{defferrard2016convolutional} is a GCN-based method utilizing Chebyshev filters.\n\\item \\textbf{GCN}~\\cite{kipf2017semi} is a semi-supervised graph convolutional network model which learns node representations by aggregating information from neighbors.\n\\item \\textbf{\\textit{k}NN-GCN}. For comparison, instead of the traditional topology graph, we use the sparse \\textit{k}-nearest neighbor graph calculated from the feature matrix as the input graph of GCN, and denote it as \\textit{k}NN-GCN.\n\\item \\textbf{GAT}~\\cite{ve2018graph} is a graph neural network model using an attention mechanism to aggregate node features.\n\\item \\textbf{DEMO-Net}~\\cite{wu2019demo} is a degree-specific graph neural network for node classification.\n\\item \\textbf{MixHop}~\\cite{abu-el-haija2019mixhop} is a GCN-based method which mixes the feature representations of higher-order neighbors in one graph convolution layer.\n\\end{itemize}\n\n\n\\begin{table*}\n\t\\caption{Node classification results (\\%). (Bold: best; Underline: runner-up.)}
\n\t\\label{node classification}\n\n\t\\begin{tabular}{c|c|c||cccccccc|c}\n\t\t\\hline\n\t\tDatasets&Metrics&L\/C &DeepWalk&LINE&Chebyshev&GCN&\\textit{k}NN-GCN&GAT&DEMO-Net&MixHop&AM-GCN\\\\\n\t\t\\hline\n\t\t\\multirow{6}{*}{Citeseer}&\n\t\t\\multirow{3}{*}{ACC}\n\t\t&20&43.47&32.71&69.80&70.30&61.35&\\underline{72.50}&69.50&71.40&\\textbf{73.10}\\\\\n\t\t& &40&45.15&33.32&71.64&\\underline{73.10}&61.54&73.04&70.44&71.48&\\textbf{74.70}\\\\\n\t\t& &60&48.86&35.39&73.26&74.48&62.38&\\underline{74.76}&71.86&72.16&\\textbf{75.56}\\\\\n\t\t\\cline{2-12}\n\t\t&\\multirow{3}{*}{F1}\n\t\t&20&38.09&31.75&65.92&67.50&58.86&\\underline{68.14}&67.84&66.96&\\textbf{68.42}\\\\\n\t\t& &40&43.18&32.42&68.31&\\underline{69.70}&59.33&69.58&66.97&67.40&\\textbf{69.81}\\\\\n\t\t& &60&48.01&34.37&70.31&\\underline{71.24}&60.07&\\textbf{71.60}&68.22&69.31&70.92\\\\\n\t\t\\hline\n\t\t\\multirow{6}{*}{UAI2010}&\n\t\t\\multirow{3}{*}{ACC}\n\t\t&20&42.02&43.47&50.02&49.88&\\underline{66.06}&56.92&23.45&61.56&\\textbf{70.10}\\\\\n\t\t& &40&51.26&45.37&58.18&51.80&\\underline{68.74}&63.74&30.29&65.05&\\textbf{73.14}\\\\\n\t\t& &60&54.37&51.05&59.82&54.40&\\underline{71.64}&68.44&34.11&67.66&\\textbf{74.40}\\\\\n\t\t\\cline{2-12}\n\t\t&\\multirow{3}{*}{F1}\n\t\t&20&32.93&37.01&33.65&32.86&\\underline{52.43}&39.61&16.82&49.19&\\textbf{55.61}\\\\\n\t\t& &40&46.01&39.62&38.80&33.80&\\underline{54.45}&45.08&26.36&53.86&\\textbf{64.88}\\\\\n\t\t& &60&44.43&43.76&40.60&34.12&54.78&48.97&29.05&\\underline{56.31}&\\textbf{65.99}\\\\\n\t\t\\hline\n\t\t\\multirow{6}{*}{ACM}&\n\t\t\\multirow{3}{*}{ACC}\n\t\t&20&62.69&41.28&75.24&\\underline{87.80}&78.52&87.36&84.48&81.08&\\textbf{90.40}\\\\\n\t\t& &40&63.00&45.83&81.64&\\underline{89.06}&81.66&88.60&85.70&82.34&\\textbf{90.76}\\\\\n\t\t& 
&60&67.03&50.41&85.43&\\underline{90.54}&82.00&90.40&86.55&83.09&\\textbf{91.42}\\\\\n\t\t\\cline{2-12}\n\t\t&\\multirow{3}{*}{F1}\n\t\t&20&62.11&40.12&74.86&\\underline{87.82}&78.14&87.44&84.16&81.40&\\textbf{90.43}\\\\\n\t\t& &40&61.88&45.79&81.26&\\underline{89.00}&81.53&88.55&84.83&81.13&\\textbf{90.66}\\\\\n\t\t& &60&66.99&49.92&85.26&\\underline{90.49}&81.95&90.39&84.05&82.24&\\textbf{91.36}\\\\\n\t\t\\hline\n\t\t\\multirow{6}{*}{BlogCatalog}&\n\t\t\\multirow{3}{*}{ACC}\n\t\t&20&38.67&58.75&38.08&69.84&\\underline{75.49}&64.08&54.19&65.46&\\textbf{81.98}\\\\\n\t\t& &40&50.80&61.12&56.28&71.28&\\underline{80.84}&67.40&63.47&71.66&\\textbf{84.94}\\\\\n\t\t& &60&55.02&64.53&70.06&72.66&\\underline{82.46}&69.95&76.81&77.44&\\textbf{87.30}\\\\\n\t\t\\cline{2-12}\n\t\t&\\multirow{3}{*}{F1}\n\t\t&20&34.96&57.75&33.39&68.73&\\underline{72.53}&63.38&52.79&64.89&\\textbf{81.36}\\\\\n\t\t& &40&48.61&60.72&53.86&70.71&\\underline{80.16}&66.39&63.09&70.84&\\textbf{84.32}\\\\\n\t\t& &60&53.56&63.81&68.37&71.80&\\underline{81.90}&69.08&76.73&76.38&\\textbf{86.94}\\\\\n\t\t\\hline\n\t\t\\multirow{6}{*}{Flickr}&\n\t\t\\multirow{3}{*}{ACC}\n\t\t&20&24.33&33.25&23.26&41.42&\\underline{69.28}&38.52&34.89&39.56&\\textbf{75.26}\\\\\n\t\t& &40&28.79&37.67&35.10&45.48&\\underline{75.08}&38.44&46.57&55.19&\\textbf{80.06}\\\\\n\t\t& &60&30.10&38.54&41.70&47.96&\\underline{77.94}&38.96&57.30&64.96&\\textbf{82.10}\\\\\n\t\t\\cline{2-12}\n\t\t&\\multirow{3}{*}{F1}\n\t\t&20&21.33&31.19&21.27&39.95&\\underline{70.33}&37.00&33.53&40.13&\\textbf{74.63}\\\\\n\t\t& &40&26.90&37.12&33.53&43.27&\\underline{75.40}&36.94&45.23&56.25&\\textbf{79.36}\\\\\n\t\t& &60&27.28&37.77&40.17&46.58&\\underline{77.97}&37.35&56.49&65.73&\\textbf{81.81}\\\\\n\t\t\\hline\n\t\t\\multirow{6}{*}{CoraFull}&\n\t\t\\multirow{3}{*}{ACC}\n\t\t&20&29.33&17.78&53.38&56.68&41.68&\\underline{58.44}&54.50&47.74&\\textbf{58.90}\\\\\n\t\t& 
&40&36.23&25.01&58.22&60.60&44.80&\\underline{62.98}&60.28&57.20&\\textbf{63.62}\\\\\n\t\t& &60&40.60&29.65&59.84&62.00&46.68&\\underline{64.38}&61.58&60.18&\\textbf{65.36}\\\\\n\t\t\\cline{2-12}\n\t\t&\\multirow{3}{*}{F1}\n\t\t&20&28.05&18.24&47.59&52.48&37.15&\\underline{54.44}&50.44&45.07&\\textbf{54.74}\\\\\n\t\t& &40&33.29&25.43&53.47&55.57&40.42&\\underline{58.30}&56.26&53.55&\\textbf{59.19}\\\\\n\t\t& &60&37.95&30.87&54.15&56.24&43.22&\\underline{59.61}&57.26&56.40&\\textbf{61.32}\\\\\n\t\t\\hline\n\t\t\n\t\\end{tabular}\n\\end{table*}\n\n\\noindent{\\bfseries Parameters Setting}\n\\quad To more comprehensively evaluate our model, we select three label rates for the training set (i.e., 20, 40, 60 labeled nodes per class) and choose 1000 nodes as the test set. All baselines are initialized with the same parameters suggested by their papers, and we further carefully tune the parameters to get optimal performance. For our model, we train three 2-layer GCNs with the same hidden layer dimension (\\textit{nhid1}) and the same output dimension (\\textit{nhid2}) simultaneously, where \\textit{nhid1} $\\in\\{512, 768\\}$ and \\textit{nhid2} $\\in\\{32, 128, 256\\}$. We use a learning rate of $0.0001\\thicksim0.0005$ with the Adam optimizer. In addition, the dropout rate is 0.5, weight decay $\\in\\{5e-3, 5e-4\\}$, and \\textit{k} $\\in\\{2\\ldots10\\}$ for the \\textit{k}-nearest neighbor graph. The coefficients of the consistency and disparity constraints are searched in $\\{0.01, 0.001, 0.0001\\}$ and $\\{1e-10, 5e-9, 1e-9, 5e-8, 1e-8\\}$, respectively. For all methods, we run 5 times with the same partition and report the average results. We use Accuracy (ACC) and macro F1-score (F1) to evaluate the performance of all models. 
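For concreteness, the two evaluation metrics can be sketched in plain Python; this is our illustration, the paper does not specify its metric implementation:

```python
def accuracy(y_true, y_pred):
    # fraction of test nodes whose predicted label matches the real label
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    # unweighted mean of the per-class F1 scores
    scores = []
    for c in set(y_true) | set(y_pred):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```

Macro averaging weights every class equally, which matters on label-imbalanced datasets such as BlogCatalog and Flickr.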
For reproducibility, we provide the specific parameter values in the supplement (Section~\\ref{sec:parameters}).\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\vspace{-5pt}\n\t\\subfigure{\n\t\t\\includegraphics[width=0.27\\linewidth]{citeseer_variants_new-eps-converted-to.pdf}\n\t\t\\label{citeseer}\n\t}\n\t\\vspace{-5pt}\n\t\\subfigure{\n\t\t\\includegraphics[width=0.27\\linewidth]{uai_variants_new-eps-converted-to.pdf}\n\t}\n\t\\subfigure{\n\t\t\\includegraphics[width=0.27\\linewidth]{acm_variants_new-eps-converted-to.pdf}\n\t}\n\t\\vspace{-5pt}\n\t\\subfigure{\n\t\t\\includegraphics[width=0.27\\linewidth]{blog_variants_new-eps-converted-to.pdf}\t\n\t}\n\t\\subfigure{\n\t\t\\includegraphics[width=0.27\\linewidth]{flickr_variants_new-eps-converted-to.pdf}\n\t}\n\t\\subfigure{\n\t\t\\includegraphics[width=0.27\\linewidth]{cora_variants_new-eps-converted-to.pdf}}\n\t\\vspace{-5pt}\n\t\\caption{The results (\\%) of AM-GCN and its variants on six datasets.}\n\t\\label{variants}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\t\\vspace{-15pt}\n\t\\centering\n\t\\subfigure[\\textbf{DeepWalk}]{\n\t\t\\includegraphics[width=0.23\\textwidth]{BlogCatalog_deepwalk.pdf}\n\t}\n\t\\subfigure[\\textbf{GCN}]{\n\t\t\\includegraphics[width=0.23\\textwidth]{BlogCatalog_GCN.pdf}\n\t}\n\t\\subfigure[\\textbf{GAT}]{\n\t\t\\includegraphics[width=0.23\\textwidth]{BlogCatalog_GAT.pdf}\n\t}\n\t\\subfigure[\\textbf{AM-GCN}]{\n\t\t\\includegraphics[width=0.23\\textwidth]{BlogCatalog_AM-GCN.pdf}\n\t}\n\t\\caption{Visualization of the learned node embeddings on the BlogCatalog dataset.}\n\t\\vspace{-5pt}\n\t\\label{visualization_BlogCatalog}\n\\end{figure*}\n\n\n\\subsection{Node Classification}\n\n\nThe node classification results are reported in Table \\ref{node classification}, where L\/C means the number of labeled nodes per class. 
We have the following observations:\n\\begin{itemize}\n\\vspace{-\\topsep}\\item Compared with all baselines, the proposed AM-GCN generally achieves the best performance on all datasets with all label rates. In particular, for ACC, AM-GCN achieves maximum relative improvements of 8.59\\% on BlogCatalog and 8.63\\% on Flickr. The results demonstrate the effectiveness of AM-GCN.\n\\item AM-GCN consistently outperforms GCN and \\textit{k}NN-GCN on all the datasets, indicating the effectiveness of the adaptive fusion mechanism in AM-GCN, which can extract more useful information than performing GCN or \\textit{k}NN-GCN alone.\n\\item Comparing GCN and \\textit{k}NN-GCN, we can see that there does exist a structural difference between the topology graph and the feature graph, and that performing GCN on the traditional topology graph does not always yield a better result than on the feature graph. For example, on BlogCatalog, Flickr and UAI2010, the feature graph performs better than the topology graph. This further confirms the necessity of introducing the feature graph into GCN.\n\\item Moreover, compared with GCN, the improvement of AM-GCN is more substantial on the datasets with a better feature graph (\\textit{k}NN), such as UAI2010, BlogCatalog, and Flickr. 
This implies that AM-GCN introduces a better and more suitable \\textit{k}NN graph for the labels to supervise feature propagation and node representation learning.\n\\end{itemize}\n\n\\vspace{-\\topsep}\\subsection{Analysis of Variants}\n\nIn this section, we compare AM-GCN with its three variants on all datasets to validate the effectiveness of the constraints.\n\\begin{itemize}\n\\item \\textbf{AM-GCN-w\/o}: AM-GCN without constraints $\\mathcal{L}_{c}$ and $\\mathcal{L}_{d}$.\n\\item \\textbf{AM-GCN-c}: AM-GCN with the consistency constraint $\\mathcal{L}_{c}$.\n\\item \\textbf{AM-GCN-d}: AM-GCN with the disparity constraint $\\mathcal{L}_{d}$.\n\\end{itemize}\n\nFrom the results in Figure \\ref{variants}, we can draw the following conclusions: (1) The results of AM-GCN are consistently better than those of the other three variants, indicating the effectiveness of using the two constraints together. (2) The results of AM-GCN-c and AM-GCN-d are usually better than AM-GCN-w\/o on all datasets with all label rates, verifying the usefulness of the two constraints. (3) AM-GCN-c is generally better than AM-GCN-d on all datasets, which implies that the consistency constraint plays a more vital role in this framework. (4) Comparing the results of Figure~\\ref{variants} and Table~\\ref{node classification}, we can find that AM-GCN-w\/o, although without any constraints, still achieves very competitive performance against baselines, demonstrating that our framework is stable and competitive.\n\n\\vspace{-\\topsep}\\subsection{Visualization}\n\nFor a more intuitive comparison and to further show the effectiveness of our proposed model, we conduct a visualization task on the BlogCatalog dataset. We use the output embedding of the last layer of AM-GCN (or GCN, GAT) before the $softmax$ and plot the learned embeddings of the test set using t-SNE~\\cite{maaten2008visualizing}. 
The results of BlogCatalog in Figure \\ref{visualization_BlogCatalog} are colored by the real labels.\n\nFrom Figure \\ref{visualization_BlogCatalog}, we can find that the results of DeepWalk, GCN, and GAT are not satisfactory, because the nodes with different labels are mixed together. Clearly, the visualization of AM-GCN performs best, where the learned embedding has a more compact structure, the highest intra-class similarity, and the clearest boundaries between different classes.\n\n\\subsection{Analysis of Attention Mechanism}\n\nIn order to investigate whether the attention values learned by our proposed model are meaningful, we analyze the attention distributions and the attention learning trends, respectively.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\subfigure[Citeseer.]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_citeseer-eps-converted-to.pdf}\n\t}\n\t\\vspace{-5pt}\n\t\\subfigure[UAI2010.]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_uai-eps-converted-to.pdf}\n\t}\n\t\\vspace{-5pt}\n\t\\subfigure[ACM.]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_acm-eps-converted-to.pdf}\n\t}\n\t\\subfigure[BlogCatalog.]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_blog-eps-converted-to.pdf}\n\t}\n\t\\vspace{-5pt}\n\t\\subfigure[Flickr.]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_flickr-eps-converted-to.pdf}\n\t\t\\label{attention_flickr}\n\t}\n\t\\subfigure[CoraFull.]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_coraml-eps-converted-to.pdf}\n\t\t\\label{attention_generate}\n\t}\n\t\\caption{Analysis of attention 
distribution.}\n\t\\vspace{-5pt}\n\t\\label{boxplot}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\vspace{-5pt}\n\t\\subfigure[\\textbf{Citeseer}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_change_citeseer-eps-converted-to.pdf}\n\t}\n\t\\subfigure[\\textbf{BlogCatalog}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{attention_change_blog-eps-converted-to.pdf}\n\t}\n\t\\caption{The attention changing trends w.r.t.\\ epochs.}\n\t\\vspace{-8pt}\n\t\\label{attention_change}\n\\end{figure}\n\n\n\\textbf{Analysis of attention distributions.}\nAM-GCN learns two specific embeddings and one common embedding, each of which is associated with the attention values. We conduct the attention distribution analysis on all datasets with a label rate of 20, where the results are shown in Figure \\ref{boxplot}. As we can see, for Citeseer, ACM, and CoraFull, the attention values of the specific embeddings in topology space are larger than the values in feature space, and the values of the common embeddings are between them. This implies that the information in topology space should be more important than the information in feature space. To verify this, we can see that the results of GCN are better than \\textit{k}NN-GCN on these datasets in Table \\ref{node classification}. Conversely, for UAI2010, BlogCatalog and Flickr, in comparison with Figure \\ref{boxplot} and Table \\ref{node classification}, we can find that \\textit{k}NN-GCN performs better than GCN; meanwhile, the attention values of the specific embeddings in feature space are also larger than those in topology space. In summary, the experiment demonstrates that our proposed AM-GCN is able to adaptively assign larger attention values to more important information.
Here we take Citeseer and BlogCatalog with a label rate of 20 as examples in Figure \\ref{attention_change}, where the \\textbf{x}-axis is the epoch and the \\textbf{y}-axis is the average attention value. More results are in the supplement (Section~\\ref{trends}). At the beginning, the average attention values of Topology, Feature, and Common are almost the same; as the training epochs increase, the attention values become different. For example, in BlogCatalog, the attention value for topology gradually decreases, while the attention value for feature keeps increasing. This phenomenon is consistent with the conclusions in Table \\ref{node classification} and Figure \\ref{boxplot}, i.e., \\textit{k}NN-GCN with the feature graph performs better than GCN, and the information in feature space is more important than in topology space. We can see that AM-GCN can learn the importance of different embeddings step by step.\n\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace{-5pt}\n\t\\subfigure[\\textbf{Citeseer}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{citeseer_alpha-eps-converted-to.pdf}\n\t}\n\t\\subfigure[\\textbf{BlogCatalog}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{BlogCatalog_alpha-eps-converted-to.pdf}\n\t}\n\t\\caption{Analysis of parameter \\textit{$\\gamma$}.}\n\t\\vspace{-5pt}\n\t\\label{alpha}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace{-5pt}\n\t\\subfigure[\\textbf{Citeseer}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{citeseer_beta-eps-converted-to.pdf}\n\t\t\\label{citeseer}\n\t}\n\t\\subfigure[\\textbf{BlogCatalog}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{BlogCatalog_beta-eps-converted-to.pdf}\n\t}\n\t\\caption{Analysis of parameter 
\\textit{$\\beta$}.}\n\t\\vspace{-5pt}\n\t\\label{beta}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\vspace{-5pt}\n\t\\subfigure[\\textbf{Citeseer}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{citeseer_k-eps-converted-to.pdf}\n\t\n\t}\n\t\\subfigure[\\textbf{BlogCatalog}]{\n\t\t\\includegraphics[width=0.22\\textwidth]{Blogcatalog_k-eps-converted-to.pdf}\n\t}\n\t\\caption{Analysis of parameter \\textit{k}.}\n\t\\vspace{-8pt}\n\t\\label{k}\n\\end{figure}\n\n\n\n\\vspace{-\\topsep}\\subsection{Parameter Study}\\label{sec:parameter study}\nIn this section, we investigate the sensitivity of parameters on the Citeseer and BlogCatalog datasets. More results are in Section~\\ref{ps}.\n\n\\textbf{Analysis of consistency coefficient} \\textit{$\\gamma$}.\nWe test the effect of the consistency constraint weight $\\gamma$ in Eq. \\eqref{final_target}, and vary it from 0 to 10000. The results are shown in Figure \\ref{alpha}. With the increase of the consistency coefficient, the performance rises first and then starts to drop slowly. Basically, AM-GCN is stable when \\textit{$\\gamma$} is within the range from 1e-4 to 1e+4 on all datasets. We can also see that the curves of the 20, 40, 60 label rates show a similar trend.\n\n\\textbf{Analysis of disparity constraint coefficient} \\textit{$\\beta$}.\nWe then test the effect of the disparity constraint weight $\\beta$ in Eq. \\eqref{final_target}, and vary it from 0 to 1e-5. The results are shown in Figure \\ref{beta}. 
Similarly, with the increase of \\textit{$\\beta$}, the performance also rises first, but it drops quickly once \\textit{$\\beta$} is larger than 1e-6 for Citeseer in Figure \\ref{citeseer}, while for BlogCatalog it is relatively stable.\n\n\\textbf{Analysis of \\textit{k}-nearest neighbor graph} \\textit{k}.\nIn order to check the impact of the top \\textit{k} neighborhoods in the \\textit{k}NN graph, we study the performance of AM-GCN with \\textit{k} ranging from 2 to 10 in Figure \\ref{k}. For Citeseer and BlogCatalog, the accuracies increase first and then start to decrease. This is probably because a denser graph makes the features easier to over-smooth, and a larger \\textit{k} may also introduce more noisy edges.\n\n\n\\section{Related Work}\\label{sec:related-work}\n\nRecently, graph convolutional network (GCN) models~\\cite{gao2018large, chen2018fastgcn, ma2019graph, qu2019gmnn, xu2019how, ying2019hyperbolic} have been widely studied. For example, \\cite{bruna2014spectral} first designs the graph convolution operation in the Fourier domain via the graph Laplacian. Then \\cite{defferrard2016convolutional} further employs the Chebyshev expansion of the graph Laplacian to improve the efficiency. \\cite{kipf2017semi} simplifies the convolution operation and proposes to only aggregate the node features from the one-hop neighbors. GAT \\cite{ve2018graph} introduces the attention mechanism to aggregate node features with the learned weights. GraphSAGE \\cite{hamilton2017inductive} proposes to sample and aggregate features from local neighborhoods of nodes with mean\/max\/LSTM pooling. DEMO-Net \\cite{wu2019demo} designs a degree-aware feature aggregation process. MixHop \\cite{abu-el-haija2019mixhop} aggregates feature information from both first-order and higher-order neighbors in each network layer simultaneously. 
Most of the current GCNs essentially focus on fusing network topology and node features to learn node embedding for classification. Also, there are some recent works on analyzing the fusion mechanism of GCN. For example, \\cite{li2018deeper} shows that GCNs actually perform Laplacian smoothing on node features, while \\cite{nt2019revisiting} and \\cite{wu2019simplifying} prove that topological structures play the role of low-pass filtering on node features. For more work on GCNs, please refer to the detailed surveys \\cite{zhang2018deep,wu2019comprehensive}. However, whether GCNs can adaptively extract the correlated information from node features and topological structures for classification remains unclear.\n\n\n\\section{Conclusion}\\label{sec:con}\n\\balance\nIn this paper, we rethink the fusion mechanism of network topology and node features in GCN and surprisingly discover that it is distant from optimal. Motivated by this fundamental problem, we study how to adaptively learn the most correlated information from topology and node features and sufficiently fuse them for classification. We propose a multi-channel model, AM-GCN, which is able to learn suitable importance weights when fusing topology and node feature information. Extensive experiments demonstrate its superior performance over state-of-the-art models on real-world datasets.\n\n\n\\section{Acknowledgments}\nThis work is supported in part by the National Natural Science Foundation of China (No. 61702296, 61772082, 61806020, U1936104, U1936219, 61772304, U1611461, 61972442), the National Key Research and Development Program of China (No. 2018YFB1402600, 2018AAA0102004), the CCF-Tencent Open Fund, Beijing Academy of Artificial Intelligence (BAAI), and a grant from the Institute for Guo Qiang, Tsinghua University. Jian Pei's research is supported in part by the NSERC Discovery Grant program. 
All opinions, findings, conclusions and recommendations are those of the authors and do not necessarily reflect the views of the funding agencies.\n\n\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Background}\n\nWe are interested in satisfiability of first-order formulas. Formulas are\nassumed to be in conjunctive normal form, i.e., of the form \n$\\phi = \\psi_1(x_1,\\dots,x_n)\\wedge\\dots\\wedge\\psi_m(x_1,\\dots x_n),$ where \n$V_\\phi = \\{x_1,\\dots,x_n\\}$ are the free variables of $\\phi$ and the $\\psi_i$\nare clauses (disjunctions of atoms).\n\nCraig's interpolation theorem provides a way to characterize the relationship\nbetween two formulas when one implies the other:\n\n\\begin{theorem}[\\label{thm:craig}Craig Interpolation~\\cite{Craig57}]\nLet $\\phi$ and $\\psi$ be first-order formulas. If $\\phi\\Rightarrow\\psi$\nthen there exists an \\emph{Interpolant} $I$ such that $\\phi\\Rightarrow I \n\\;\\wedge\\; I\\Rightarrow\\psi$ and $V_I\\subseteq V_\\phi\\cap V_\\psi$.\n\\end{theorem}\n\nEquivalently, there is an interpolant $I$ such that \n$\\phi\\Rightarrow I \\;\\wedge\\; I\\Rightarrow\\neg\\psi$\nwhenever $\\phi\\wedge\\psi$ is unsatisfiable, because\n$\\phi\\Rightarrow\\neg\\psi\\equiv \\neg(\\phi\\wedge\\psi)$.\nCraig's theorem guarantees the existence of an interpolant, but does not\nprovide an algorithm for obtaining it. However, such algorithms are known for\nmany logics. We refer to two interpolation algorithms for propositional logic\nand to describe them, we require some definitions: \n\nA literal $l$ is either a propositional variable $x$ or its negation $\\neg\nx$. A clause is a disjunction of literals, denoted $\\{ l_1, \\dots, l_n \\}$. A\nformula $\\phi$ is assumed to be in conjunctive normal form (CNF), i.e., it is a\nconjunction of clauses, denoted $\\phi = \\{ c_1, \\dots, c_n \\}$. \n\nA (partial) assignment $\\alpha$ is a consistent set of clauses of size 1. 
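The set-based definitions above can be made concrete with a short sketch; the DIMACS-style integer encoding of literals is our illustrative choice, not something the text prescribes:

```python
# Literals as nonzero integers (DIMACS style): variable x_v is v, its negation -v.
# A clause is a set of literals, a CNF formula is a list of clauses, and an
# assignment is a set of unit literals.

def consistent(assignment):
    # an assignment is consistent iff it contains no complementary pair
    return not any(-lit in assignment for lit in assignment)

def satisfies(formula, assignment):
    # phi AND alpha is valid iff every clause contains a literal set true by alpha
    return all(any(lit in assignment for lit in clause) for clause in formula)

phi = [{1, 2}, {-1, 2}, {-2, 3}]  # (x1 or x2) and (not x1 or x2) and (not x2 or x3)
```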
The\n(propositional) SAT problem is to determine whether for a given formula $\\phi$\nthere exists a total assignment $\\alpha$ such that $\\phi \\wedge \\alpha\\equiv\\top$. \nGiven two clauses of the form $C_1 \\cup \\{x\\}$ and $C_2 \\cup \\{\\neg x\\}$, \ntheir resolution is defined as $C_1\\cup C_2$; if it is not tautological, the\nresult is called a \\emph{resolvent}, and the variable $x$ is called the pivot\nvariable. A resolution refutation is a sequence of resolution operations such\nthat the final resolvent is empty, proving that the formula is unsatisfiable.\n\nThere are multiple techniques for propositional interpolation. Here,\nwe refer to two popular systems, one by McMillan~\\cite{McMillan03} and the\nother by Huang, Kraj\\'icek and Pudl\\'ak~\\cite{Huang95,Krajicek97,Pudlak97}\n(HKP). Both are methods that require time linear in the size of the resolution\nrefutation of $\\phi\\wedge\\psi$. Both interpolation methods construct\ninterpolants by associating an intermediate interpolant\nwith every resolvent in the resolution refutation of $\\phi\\wedge\\psi$. The\ninterpolant associated with the final resolvent (the empty clause) constitutes\nan interpolant $I$ for which Theorem~\\ref{thm:craig} holds. For a\ncharacterization of these and other interpolation systems see,\ne.g.,~\\cite{DSilva10}.\n\nMcMillan's interpolation system associates with every clause\n$C$ in $\\phi$ the intermediate interpolant $C \\setminus \\{ v, \\neg v \\mid v \\not\\in\nV_\\psi\\}$, i.e., the restriction of $C$ to the variables in $\\psi$. Every\nclause in $\\psi$ is associated with the intermediate interpolant $\\top$. Every other\ninterpolant is calculated depending on the corresponding resolution\nstep. Consider the derivation of resolvent $R$ from clauses $C_1$ and $C_2$\nwith pivot variable $x$, where $C_1$ and $C_2$ have previously been associated\nwith intermediate interpolants $I_{C_1}$ and $I_{C_2}$. 
The resulting clause $R$\nis then associated with the intermediate interpolant\n\\begin{displaymath}\nI_R := \\left\\{\n\\begin{array}{cl}\n I_{C_1} \\vee I_{C_2} & \\mbox{if } x\\in V_\\phi \\wedge x\\not\\in V_\\psi\\\\\n I_{C_1} \\wedge I_{C_2} & \\mbox{if } x\\in V_\\phi \\wedge x\\in V_\\psi\\\\\n I_{C_1} \\wedge I_{C_2} & \\mbox{if } x\\not\\in V_\\phi \\wedge x\\in V_\\psi\n\\end{array}\n\\right.\n\\;.\n\\end{displaymath}\n\nIn the HKP system, every clause in $\\phi$ is associated with the intermediate\ninterpolant $\\bot$, while the clauses in $\\psi$ are associated with $\\top$. Every\nresolvent $R$ obtained from clauses $C_1$ and $C_2$ with pivot variable $x$ is\nassociated with the intermediate interpolant \n\\begin{displaymath}\nI_R := \\left\\{\n\\begin{array}{cl}\n I_{C_1} \\vee I_{C_2} & \\mbox{if } x\\in V_\\phi \\wedge x\\not\\in V_\\psi\\\\\n (x \\vee I_{C_1}) \\wedge (\\neg x \\vee I_{C_2}) & \\mbox{if } x\\in V_\\phi \\wedge x\\in V_\\psi\\\\\n I_{C_1} \\wedge I_{C_2} & \\mbox{if } x\\not\\in V_\\phi \\wedge x\\in V_\\psi\n\\end{array}\n\\right.\n\\;.\n\\end{displaymath}\n\nFor every propositional interpolation system that computes an interpolant\nfor $\\phi\\Rightarrow\\psi$, the \\emph{dual} system is defined by the computation\nof an interpolant $I$ for $\\psi\\Rightarrow\\phi$, with the effect that \n$\\neg I$ is an interpolant for $\\phi\\Rightarrow\\psi$. It is known that the HKP\nsystem is self-dual and that McMillan's system is not~\\cite{DSilva10}.\n\n\\section{Conclusion}\n\nWe present the concept of lazy distribution for first-order decision\nprocedures. Formulas are decomposed into partitions without the need for\nquantifier elimination or any other method for logical disconnection of the\npartitions. Instead, local models for the partitions are reconciled globally\nthrough the use of Craig interpolation. 
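For intuition, the per-resolution-step labeling performed by McMillan's system described above can be sketched as follows; the tuple encoding of partial interpolants is our own illustration, not the solver's data structure:

```python
def mcmillan_initial(clause, in_phi, v_psi):
    # clauses of phi start from their restriction to psi's variables;
    # clauses of psi start from the constant True (top)
    if in_phi:
        return ('or',) + tuple(l for l in sorted(clause) if abs(l) in v_psi)
    return True

def mcmillan_resolve(i1, i2, pivot, v_phi, v_psi):
    # pivot local to phi: disjoin the partial interpolants; otherwise conjoin
    if pivot in v_phi and pivot not in v_psi:
        return ('or', i1, i2)
    return ('and', i1, i2)
```

For the trivial refutation of $\phi = \{x\}$ against $\psi = \{\neg x\}$, this labeling yields (a formula equivalent to) $x$, which indeed satisfies Theorem 2.1.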
Experiments using different\ninterpolation systems and decompositions for propositional formulas indicate\nthat our approach performs better than traditional solving methods even when\nsequentialized, i.e., when no additional resources are used. At the same time,\nour algorithm provides straightforward opportunities for parallelization and\ndistribution of the solving process.\n\n\n\n\n\n\n\\section{Experimental Evaluation}\n\\label{sec:experiments}\n\nAs a first step in evaluating our algorithm, we implemented a propositional\nsatisfiability solver based on the MiniSAT solver. We restrict ourselves to\nthe slightly outdated version 1.14p, because propositional interpolation\nmethods require proof production, which is not available in more recent\nversions of MiniSAT~\\cite{EenS03}. Interpolants are produced by iterating over\nthe resolution proof, which is saved (explicitly) in memory. We use Reduced\nBoolean Circuits (RBCs~\\cite{AbdullaBE00}) to represent interpolants such that\nrecurring structure is exploited. Furthermore, Algorithm~\\ref{alg:recon}\npermits the exploitation of state-of-the-art SAT solver technology, like\nincremental solving techniques in solving partitions. Moreover, every\nassignment to the globals is a set of clauses of size 1, which means that\nfacilities for solving a formula under assumptions can be used. \nThe lazy decomposition used by our implementation is indeed quite trivial: it\nsimply divides the clauses of the problem into a predefined number $p$ of\nequally sized partitions. Clauses are ordered as they appear in the input file\nand each partition $i$ is assigned the clauses numbered from $i \\cdot\n\\frac{n}{p}$ to $(i+1) \\cdot \\frac{n}{p}$, where $n$ is the total number of\nclauses.\n\nOur implementation is evaluated on a set of formulas which are small but hard to\nsolve~\\cite{AloulRMS02}\\footnote{\\url{http:\/\/www.aloul.net\/benchmarks.html}}. 
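The equal-size clause partitioning described above reduces to simple index arithmetic; the following is our sketch under that reading, not the actual solver code:

```python
def lazy_partitions(clauses, p):
    # Partition i receives the clauses numbered i*n//p .. (i+1)*n//p - 1,
    # in the order they appear in the input file (integer division assumed).
    n = len(clauses)
    return [clauses[i * n // p:(i + 1) * n // p] for i in range(p)]
```

Note that no semantic analysis of the formula happens here: the cut is purely positional, which is what makes the decomposition "lazy".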
These formulas\nare known to contain symmetries, which can potentially be exploited by\ninterpolation. For this evaluation, our implementation uses only a single\nprocessing element, i.e., the evaluation of the partitions of a decomposition\nis sequentialized. Through this, we are able to show that our algorithm\nperforms well even when using the same resources as a traditional\nsolver. Preliminary experiments have shown that an actual (shared-memory)\nparallelization of our algorithm performs better than the sequentialized\nversion, but not significantly so, which is due to the lack of a load-balancing\nmechanism to balance the runtime of partition evaluation.\n\nAll our experiments are executed on a Windows HPC cluster of dual Quad-Xeon\n2.5~GHz processors with 16~GB of memory each, using a timeout of 3600~seconds\nand a memory limit of 2~GB.\n\n\\begin{figure}[p]\n\\centering\n\\subfloat[][\\label{fig:symmetry-m}McMillan interpolants]\n{ \n \\input{data\/symmetry-m}\n}\n\\quad\n\\subfloat[][\\label{fig:symmetry-p}HKP interpolants]\n{\n \\input{data\/symmetry-p}\n}\n\\qquad\n\\subfloat[][\\label{fig:symmetry-im}Dual McMillan interpolants]\n{\n \\input{data\/symmetry-im}\n}\n\\caption{Decomposition into $\\{2,\\dots,50\\}$ partitions, using different\n interpolation systems.}\n\\end{figure}\n\n\\begin{table}[p]\n\\centering\n\\scriptsize\n\\input{data\/table.csv}\n\\caption{\\label{tbl:runtimes}Runtime comparison with MiniSAT (versions 2.2.0\n and 1.14p), using McMillan's interpolation system. Bold numbers indicate\n the smallest runtime in the decompositions shown here or the best\n decomposition into $\\{2,\\dots,50\\}$ partitions (right-most column).}\n\\end{table}\n\n\nTo assess the impact of the decomposition on the solver performance, we\ninvestigate all decompositions into 2 to 50 partitions for each of the three\ninterpolation methods (McMillan's, Dual McMillan's and\nHKP). 
Each line in Figures~\\ref{fig:symmetry-m},~\\ref{fig:symmetry-p},\nand~\\ref{fig:symmetry-im} corresponds to one benchmark file and indicates the\nchange in runtime required to solve the benchmark when using from 2 up to 50\npartitions in the decomposition. These figures provide strong evidence for an\nimprovement of the runtime behavior as the number of partitions increases. Note\nthat, as mentioned before, the evaluation of partitions was sequentialized for\nthis experiment, i.e., this effect is \\emph{not} due to an increasing number of\nresources being utilized.\nThe graphs in Figures~\\ref{fig:symmetry-m},~\\ref{fig:symmetry-p},\nand~\\ref{fig:symmetry-im} provide equally strong evidence for the utility of\nMcMillan's interpolation system: the impact on the runtime is the largest and\nmost consistent of all three interpolation systems.\n\n\nFinally, Table~\\ref{tbl:runtimes} provides a comparison of the runtime of\nMiniSAT versions 2.2.0 and 1.14p with a selection of different decompositions;\nthe right-most column indicates the time of the best decomposition found\namong all those evaluated. It is clear from this table that no single\npartitioning can be identified as the best overall method. However, some\ndecompositions, like the one into 50 partitions, perform consistently well and\nalmost always better than either version of MiniSAT.\n\n\\section{Introduction}\n\nDecision procedures for first-order logic problems, or fragments thereof, have\nseen a tremendous increase in popularity in the recent past. 
This is due to the\ngreat increase in performance of solvers for the propositional satisfiability\n(SAT) problem, as well as the increasing popularity of verification tools for\nboth soft- and hardware which extensively use first-order decision procedures\nlike SAT and SMT solvers.\n\nAs the decision problems that occur in large-scale verification problems become\nlarger, methods for distribution and parallelization are required to overcome\nmemory and runtime limitations of modern computer systems. Frequently,\ncomputing clusters and multi-core processors are employed to solve such\nproblems. The inherent parallelism in these systems is often used to solve\nmultiple problems concurrently, while distributed and parallel decision\nprocedures would allow for much better performance. This has led to the\ndevelopment of distributed verification tools, for example through the\ndistribution of Bounded-Model-Checking (BMC) problems (see,\ne.g.,~\\cite{GanaiGYA06,CamposNZS09}).\n\nIn the following sections we present a general method for distributed decision\nprocedures which is applicable to decision procedures of first-order\nfragments. The key component in this method is Craig's interpolation\ntheorem~\\cite{Craig57}. This theorem enables us to arbitrarily split formulas\ninto multiple parts without any restrictions on the nature or size of the\ncut. We propose to use this lazy formula decomposition because it does not\nrequire analysis of the semantics of a formula prior to the distribution of the\nproblem. In many other distributed algorithms, such a lazy decomposition\nclearly has a negative impact on the overall runtime of the decision\nprocedure. 
However, when using an interpolation scheme, the abstraction\nprovided by the interpolation algorithm is often strong enough to\ncounterbalance this.\n\nThrough an experimental evaluation of our algorithm on propositional formulas,\nwe are able to show that, at least in the propositional case, large speed-ups\nresult from using a lazy decomposition together with a suitable interpolation\nalgorithm. \n\n\n\n\\section{Related Work}\n\nOur work is most closely related to parallel and distributed decision\nprocedures. Many decision procedures exist for propositional\nsatisfiability, and some of them exploit parallelism. \n\n\\subsection{Parallel SAT Solving}\n\nIn parallel SAT, the objective is to simultaneously explore different parts of\nthe search space in order to quickly solve a problem. There are two main\napproaches to parallel SAT solving. The first is the classical concept of\ndivide-and-conquer, which divides the search space into subspaces and allocates\neach of them to a sequential SAT solver. The search space is divided thanks to \nguiding-path constraints (typically unit clauses). A formula is found\nsatisfiable if one worker is able to find a solution for its subspace, and\nunsatisfiable if all the subspaces have been proved unsatisfiable. Workers\nusually cooperate through a load balancing strategy which performs the dynamic\ntransfer of subspaces to idle workers, and through the exchange of\nconflict clauses~\\cite{gradsat,pminisat}.\n\nIn 2008, Hamadi et al.~\\cite{ManySAT08,ManySATJSAT09} introduced the parallel\nportfolio approach. This method exploits the complementarity between different\nsequential DPLL strategies to let them compete and cooperate on the original\nformula. Since each worker deals with the whole formula, there is no need for\nload balancing, and the cooperation is only achieved through the exchange of\nlearnt clauses. 
Moreover, the search process is not artificially influenced by\nthe original set of guiding-path constraints like in the first category of\ntechniques. With this approach, the crafting of the strategies is important,\nand the objective is to cover the space of the search strategies in the best\npossible way. \n\nThe main drawback of parallel SAT techniques comes from their required\nreplication of the formula. This is obvious for the parallel portfolio\napproach. It is also true for divide-and-conquer algorithms whose guiding-path\nconstraints do not produce significantly smaller subproblems (only $\\log_2 c$\nvariables have to be set to obtain $c$ subproblems). This makes these\ntechniques applicable only to problems which fit into the memory of a single\nmachine. \n\nIn the last two years, portfolio-based parallel solvers have become prominent,\nand the approach has been used in SMT decision procedures as\nwell~\\cite{WintersteigerHM09}. \nWe are not aware of any recently developed improvements on the divide-and-conquer\napproach (the latest being \\cite{pminisat}).\nWe give a brief description of the parallel solvers qualified for the 2010 SAT\nRace\\footnote{\\url{http:\/\/baldur.iti.uka.de\/sat-race-2010}}:\n \n\\begin{itemize}\n\\item In \\texttt{plingeling}~\\cite{plingeling}, the original SAT instance is\n  duplicated by a boss thread and allocated to worker threads. The strategies\n  used by these workers are mainly differentiated by the amount of\n  pre-processing, random seeds, and variable branching. Conflict clause\n  sharing is restricted to units, which are exchanged through the boss\n  thread. This solver won the parallel track of the 2010 SAT Race.\n\n\\item \\texttt{ManySAT} \\cite{ManySATJSAT09} was the first parallel SAT\n  portfolio. It duplicates the instance of the SAT problem to solve, and runs\n  independent SAT solvers differentiated on their restart policies, branching\n  heuristics, random seeds, conflict clause learning, etc. 
It exchanges clauses\n  through various policies. Two versions of this solver were presented at the\n  2010 SAT Race; they finished second and third.\n\n\\item In \\texttt{SArTagnan}~\\cite{sartagnan}, different SAT algorithms are\n  allocated to different threads, and differentiated with respect to restart\n  policies and VSIDS heuristics. Some threads apply a dynamic resolution\n  process~\\cite{plingeling,lazyhyperbinary} or exploit reference\n  points~\\cite{Kottler10}. Some others try to simplify a shared\n  clause database by performing dynamic variable elimination or\n  replacement. This solver finished fourth.\n\n\\item In \\texttt{Antom}~\\cite{antom}, the SAT algorithms are differentiated on\n  decision heuristic, restart strategy, conflict clause detection, lazy hyper\n  binary resolution \\cite{plingeling,lazyhyperbinary}, and dynamic unit\n  propagation lookahead. Conflict clause sharing is implemented. This solver\n  finished fifth. \n\\end{itemize}\n\n\\subsection{Distributed SAT Solving}\n\nContrary to parallel SAT, in distributed SAT the goal is to handle problems\nwhich are by nature distributed or, even more interestingly, problems\nwhich are too large to fit into the memory of a single computing\nnode. Therefore, the speed-up against a sequential execution is not necessarily\nthe main objective, and in some cases (large instances) cannot even be measured. \n\nTo the best of our knowledge, the only relevant work in the area presents an\narchitecture tailored for large distributed Bounded Model\nChecking~\\cite{GanaiGYA06}. The objective is to perform deep BMC unwindings\nusing a network of standard computers, where the SAT formulas become so\nlarge that they cannot be handled by any one of the machines. This approach\nuses a master\/slaves topology, and the unrolling of an instance \nprovides a natural partitioning of the problem in a chain of workers. Each\nworker has to reconcile its local solution with its neighbors. 
The master\ndistributes the parts and controls the search. First, based on proposals\ncoming from the slaves, it selects a globally best variable to branch on. From\nthat decision, each worker performs Boolean Constraint Propagation (BCP) on its\nsubproblem, and the master performs the same on the globally learnt\nclauses. The master maintains the global assignment and, to ensure the\nconsistency of the parallel BCP algorithms, propagates Boolean implications to\nthe slaves. The master also records the causality of these implications, which\nallows it to perform conflict analysis when required.\n\n\n\\subsection{Interpolation}\n\nMcMillan's propositional interpolation system~\\cite{McMillan03}, when employed\nin a suitable Model Checking algorithm, has been shown to perform competitively\nwith algorithms based purely on SAT-solving, i.e., McMillan showed that the\nabstraction obtained through interpolation for Model Checking problems is at\nleast as good as, and sometimes better than, previously known abstraction\nmethods.\n\n\\section{Lazy Decomposition}\n\nWhen considering distributed decision procedures, it is usually assumed that\nthe formulas which are to be solved are too large to be solved on a single\ncomputing node. Under this premise, strategies for distributing a formula have\nto be employed. If there exists a quantifier elimination algorithm for the\nfragment considered, then it is straightforward, but comparatively expensive,\nto distribute the problem: find sparsely connected partitions of the formula\nand eliminate the connections such that the partitions become\nindependent. For example, let formula $\\phi=\\phi_1\\wedge\\phi_2$ where the\npartitions $\\phi_1$ and $\\phi_2$ overlap on variables $X=V_{\\phi_1}\\cap\nV_{\\phi_2}$. 
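For Boolean formulas, this quantifier-elimination route can be sketched concretely: existential quantification over a variable amounts to Shannon expansion, i.e. "exists x. phi" equals phi with x set to false, or-ed with phi with x set to true. The snippet below is our own toy illustration (the predicates and variable names are invented for the example); it eliminates a variable shared by two partitions so that the result depends only on the remaining variables.

```python
from itertools import product

def exists(var, phi):
    """Existential quantification by Shannon expansion:
    (exists var. phi)(a) = phi(a[var:=False]) or phi(a[var:=True]).
    The returned predicate no longer depends on `var`."""
    return lambda a: phi({**a, var: False}) or phi({**a, var: True})

# Toy partitions phi1(x, s) and phi2(s, y) sharing the variable s.
phi1 = lambda a: a["x"] or a["s"]
phi2 = lambda a: (not a["s"]) or a["y"]
joint = lambda a: phi1(a) and phi2(a)

# Eliminating the shared variable s decouples the problem: the result
# can be examined over x and y alone.
decoupled = exists("s", joint)
models = [dict(zip(("x", "y"), bits))
          for bits in product([False, True], repeat=2)
          if decoupled(dict(zip(("x", "y"), bits)))]
```

For this example the eliminated formula simplifies to "x or y", so three of the four assignments over the remaining variables are models; the exponential cost of such expansions is exactly why the paper argues for the cheaper lazy decomposition instead.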
The elimination of $X$ from $\\exists X \\;.\\; \\phi_1\\wedge\\phi_2$\nproduces two independent parts $\\phi_1'$ and $\\phi_2'$, which, respectively,\ndepend on variables $V_{\\phi_1}\\setminus X$ and $V_{\\phi_2}\\setminus X$ and\ntherefore can be solved independently. While this distribution strategy is\nquite simple, it depends on the existence of a quantifier elimination\nalgorithm. Furthermore, the performance of such an algorithm in practice\ndepends greatly on the problem being sparsely connected, which is\nnot generally a given. We therefore use a different and cheaper method for\ndistribution: \n\n\\begin{definition}[Lazy Decomposition]\nLet $\\phi$ be in conjunctive normal form, i.e.,\n$\\phi=\\phi_1\\wedge\\dots\\wedge\\phi_n$. A \\emph{lazy decomposition} of $\\phi$\ninto $k$ partitions is an equivalent set of formulas $\\{\\psi_1,\\dots,\\psi_k\\}$\nsuch that each $\\psi_i$ is equivalent to some conjunction of clauses from\n$\\phi$, i.e., there exist $a,b$ $(a<b)$ such that\n$\\psi_i=\\phi_a\\wedge\\dots\\wedge\\phi_b$.\n\\end{definition}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"We quantify the degree of non-Markovianity of the dynamics through the trace\ndistance $D(\\rho_1,\\rho_2)=\\frac{1}{2}\\text{Tr}|\\rho_1-\\rho_2|$ and the measure\n\\begin{equation}\n\\label{measure}\n{\\cal N}=\\max_{\\rho_{1,2}(0)}\\int_{\\partial_t D>0}dt\\,\\partial_t D(\\rho_1(t),\\rho_2(t)),\n\\end{equation}\nwhere $t>0$, $\\rho_{1,2}(0)$ are two initial states of the system and $\\rho_{1,2}(t)$ is their time-evolved form. The maximisation is performed over all possible pairs of initial states. The function $\\partial_t D(\\rho_1(t),\\rho_2(t))$ encompasses the condition for revealing the non-Markovianity of an evolution: the existence of even a single region where $\\partial_t D(\\rho_1(t),\\rho_2(t))>0$ is sufficient to guarantee the non-Markovian nature of a dynamics. Conceptually, in fact, ${\\cal N}$ accounts for all the temporal regions where the distance between two arbitrary input states increases, thus witnessing a re-flux of information from the environment to the system under scrutiny. Such a re-flux magnifies the difference between two arbitrarily picked input states evolved up to the same instant of time by the same map. Contractivity of the trace distance under divisible maps ensures that such a re-flux never occurs, which in turn is equivalent to $\\partial_t D(\\rho_1(t),\\rho_2(t))\\le0$. 
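The contractivity property just mentioned is easy to check numerically. The sketch below is our own illustration (it uses a single-qubit depolarizing channel as a representative divisible map; nothing in it comes from the model of this paper): it tracks the trace distance between two evolved states and verifies that its discrete increments never become positive, so such a map never contributes to the measure.

```python
import numpy as np

def trace_distance(r1, r2):
    """D(r1, r2) = (1/2) Tr|r1 - r2| for Hermitian density matrices."""
    ev = np.linalg.eigvalsh(r1 - r2)
    return 0.5 * np.sum(np.abs(ev))

def bloch_state(theta):
    """Pure qubit state cos(theta)|0> + sin(theta)|1> as a density matrix."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def depolarize(rho, p):
    """One step of a depolarizing channel -- a divisible (Markovian) map."""
    return (1 - p) * rho + p * np.eye(2) / 2

# Two initially orthogonal preparations, trace distance 1.
r1, r2 = bloch_state(np.pi / 2), bloch_state(0.0)
distances = []
for _ in range(20):
    distances.append(trace_distance(r1, r2))
    r1, r2 = depolarize(r1, 0.1), depolarize(r2, 0.1)

# Under a divisible map the distance is non-increasing, so no interval
# with positive derivative exists and the measure vanishes.
increments = np.diff(distances)
```

Replacing `depolarize` with a map that feeds information back to the system would instead produce intervals of positive increment, which is precisely what the measure accumulates.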
As the evolution under scrutiny here proceeds in discrete temporal steps, we will employ the discretised version of Eq.~(\\ref{measure}), which is obtained as\n\\begin{equation}\n{\\cal N}=\\max\\sum_n[D(\\rho^S_{1,n},\\rho^S_{2,n})-D(\\rho^S_{1,n-1},\\rho^S_{2,n-1})]\n\\end{equation} \nwith $\\rho^S_{k,n}$ the state of system $S$ obtained starting from the initial state $\\rho^S_{k,0}$ after $n$ steps of our protocol, and the sum running over the steps at which the trace distance increases. In the following, we will use the system preparation\n\\begin{equation}\n\\rho^S_{k}=\n\\begin{pmatrix}\n\\cos^2\\theta_k&\\cos\\theta_k\\sin\\theta_k\\\\\n\\cos\\theta_k\\sin\\theta_k&\\sin^2\\theta_k\n\\end{pmatrix},~~~\\theta_k\\in[0,2\\pi]\n\\end{equation} \nstanding for a pure initial state of $S$ determined by the angle $\\theta_k$ in the Bloch sphere. \n\n\\subsection{Quantum homogenisation}\nArmed with such a tool, we start by noticing that, without introducing inter-ancilla collisions in our process, i.e. for $\\delta=0$, the model describes the process of {\\it quantum homogenisation}~\\cite{Buzek01} whereby, upon preparation of an identical fiducial state for all the elements of the super-environment, the state of the system eventually homogenises to it, thus realizing a microscopic model for Markovian decoherence.\n\\begin{figure}[t]\n{\\bf (a)}\\hskip4cm{\\bf (b)}\n\\includegraphics[width=\\columnwidth]{homogearray1.eps}\n\\caption{The trace distance between evolved system states [panel {\\bf (a)}] and the fidelity of the system state at step $n$ with the target one [panel {\\bf (b)}] are shown against the number of steps of the evolution for the case of the basic collision model that forbids inter-ancilla interactions and a super-environment prepared in $\\ket{\\{0\\}}$. We have taken the swap strength $\\gamma=0.05$, while the initial states used to calculate the trace distance in panel {\\bf (a)} correspond to $\\theta_1=\\pi\/2$ and $\\theta_2=0$. 
}\n\\label{homogeneous}\n\\end{figure}\nIn order to set a useful benchmark for the comparison with quantum homogenisation, in our assessment of the non-Markovian features arising from the inclusion of intra-environment interactions, we will also consider an identical fiducial state for the super-environment. In particular, we will consider the initialisation $\\ket{\\{0\\}}\\equiv\\otimes^N_{j=1}|0\\rangle_{j}$. In Fig.~\\ref{homogeneous}, we show the behaviour of the trace distance $D$ and the state fidelity $F={}_S\\langle0|\\rho^S_{n}|0\\rangle_S$ between the state of the system after $n$ collisions with the super-environment and the target state $\\ket{0}_S$ (i.e. the state in which each element of the environment is prepared). The trace distance decreases monotonically with the number of system-environment collisions, thus witnessing the complete absence of any back-flow mechanisms that might give rise to non-Markovian features. These conclusions hold qualitatively regardless of the initial preparation of the super-environment.\n\n\n\n\n\\subsection{Description of the strategies}\n\nWe are now in a position to attack the main goal of this paper and address the delicate point of tracking the reduced dynamics of the system $S$. We will focus on two inequivalent ways of tracing out the degrees of freedom of the super-environment. In turn, this will allow us to pinpoint the\nkey role played by SECs in the settling of non-Markovian features. \n\nThe first method that we use in order to compute the reduced dynamics of $S$ (which we dub {\\it \\text{\\it Strategy 1}\\,}) is that of tracing out each subenvironment as soon as the system has interacted with it. This corresponds to taking a ``utilitarian\" viewpoint according to which element $E_j$ of the super-environment is relevant to determining the evolution of $S$ only as far as their mutual interaction is concerned. 
This means that correlations between $S$ and $E_j$ are erased before the interaction between $E_j$ and $E_{j+1}$ occurs and cannot, therefore, affect $S$ during the next iteration of interactions. In this respect, the effect that the ``collision\" with $S$ has on the state of $E_j$ is carried over across the super-environment regardless of the actual state of $S$ itself. That is, the reduced state of the system qubit after the interaction with a set of $n$ subenvironments is described by the dynamical map \n\\begin{equation}\n\\label{one}\n\\rho^S_n=\\text{Tr}_{n-1,n}(\\hat{\\Psi}_{n-1,n}[\\hat{\\Phi}_{S,n}[\\rho^S_{n-1}\\otimes \\text{Tr}_{S}(\\rho^{SE}_{n-1})\\otimes \\ket{0}\\!\\bra{0}_n]]),\n\\end{equation}\nwith $\\text{Tr}_{S}(\\rho^{SE}_{n-1})$ the reduced state of the $(n-1)^\\text{th}$ element of the subenvironment after its interaction with the system qubit, which is in turn left in state $\\rho^S_{n-1}$.\n\nThe second approach (named {\\it \\text{\\it Strategy 2}\\,} hereafter) implies the tracing out of a subenvironment only after it has outlived its usefulness. In more explicit terms, we consider the various subenvironments in a time-non-local fashion: element $E_j$ will be traced out only after its active role in the collision model has expired, i.e. when it has interacted with the ordered triplet ($E_{j-1}, S, E_{j+1}$). 
The dynamical map resulting from the implementation of this strategy thus gives rise to the reduced state of the system,\n\\begin{equation}\n\\label{two}\n\\rho^S_n=\n\\text{Tr}_{n-1,n}(\\hat{\\Psi}_{n-1,n}[\\hat{\\Phi}_{S,n}[\\text{Tr}_{\\{n-2\\}}(\\rho^{SE}_{n-1})\\otimes\\ket{0}\\!\\bra{0}_n]]),\n\\end{equation}\nwhere $\\text{Tr}_{\\{n-2\\}}[\\cdot]$ denotes the partial trace over the whole set of $n-2$ elements of the super-environment prior to the interaction between $S$ and element $E_{n-1}$.\nQuite intuitively, the difference between the two strategies resides in the different way SECs are treated: while $\\text{\\it Strategy 1}\\,$ erases all the SECs established by a given system-subenvironment collision, the second one carries these over to the next intra-environment interaction. This results in considerable differences in the non-Markovianity features arising from the dynamical maps formalised by Eqs.~\\eqref{one} and \\eqref{two}. However, contrary to a naive expectation, $\\text{\\it Strategy 1}\\,$ does not give rise to a homogenisation process such as the one addressed earlier on, in light of a non-negligible environmental memory effect, and it leaves room for non-Markovian manifestations. We have checked that the qualitative features that will be showcased throughout our analysis depend critically on the degree of purity of the overall super-environmental state, but only weakly on its explicit form.\n\n\\section{Analysis of the degree of non-Markovianity}\n\\label{number}\n\nHere we present our analysis of the non-Markovian features of the collision model, addressing both of the strategies identified above. \n\n\\subsection{Non-Markovianity resulting from both Strategies}\n\n\\label{nonmd}\n\nWe start by analysing the behaviour of the trace distance $D(\\rho_{\\theta_1},\\rho_{\\theta_2})$ as the collision-based model for system-environment interaction is iterated.\nWhen using $\\text{\\it Strategy 2}\\,$ with $\\delta={\\pi}\/{2}$ (i.e. 
for a full state-swap between two consecutive super-environmental elements), the joint dynamics of $S$ and $E$ resembles that of an iterated two-qubit system.\n This is due to the complete exchange of information between environment qubits at every step, which causes the system to interact at step $n$ with a {\\it fresh} physical information carrier that, however, fully carries the effect of the collision that occurred at step $n-1$. This results, needless to say, in dynamics characterised by undamped oscillations of the trace distance, which would in turn give rise to a degree of non-Markovianity that would grow with the size of the temporal window of observation of the evolution, reaching, asymptotically, an infinite value [cf. Fig.~\\ref{nonmmeas1}{\\bf (a)}]. \n \n However, requiring $\\delta=\\pi\/2$ implies, in quite a general sense, a strong intra-environment interaction. Unsurprisingly, this results in the non-Markovian features highlighted above, in light of the pronounced dynamical nature of the corresponding super-environment. It is thus interesting to address the case of $\\delta<{\\pi}\/{2}$, i.e. a weaker subenvironment-subenvironment coupling strength. The expectation is that this would correspond to a loss of information over the state of the $n^{\\text{th}}$ environmental element, whose state is only partially carried over to the $(n+1)^{\\text{th}}$ one. This is well captured by the trace distance, which oscillates with a degraded amplitude, as illustrated in Fig.~\\ref{nonmmeas1}{\\bf(b)}.\n\n\n\\begin{figure}[!t]\n{\\bf (a)}\\hskip4cm{\\bf (b)}\\\\\n\\includegraphics[width=\\columnwidth]{NEWFIGURE1.eps}\\\\\n{\\bf (c)}\\hskip4cm{\\bf (d)}\\\\\n\\includegraphics[width=\\columnwidth]{NEWFIGURE2.eps}\n\\caption{The trace distance $D$ plotted against the number of iterations of the collision model for both the strategies and for different swap strengths put in place in our analysis. 
Panels {\\bf (a)} and {\\bf (b)} [{\\bf (c)} and {\\bf (d)}] report the results valid for \\text{\\it Strategy 2}\\, [\\text{\\it Strategy 1}\\,].\nWe have used $\\delta={\\pi}\/{2}$ in panels {\\bf{(a)}} and {\\bf{(c)}} and $\\delta=0.95 \\times {\\pi}\/{2}$ in panels {\\bf{(b)}} and {\\bf{(d)}} to show how varying the inter-ancilla swap strength changes substantially the degree of non-Markovianity. We have taken $\\gamma=$0.05 for the system-environment interaction strength, consistently with the assumption of weak $S$-$E$ coupling. }\n\\label{nonmmeas1}\n\\end{figure}\n\nThe trend shown in Figs.~\\ref{nonmmeas1}{\\bf (c)} and {\\bf (d)} highlights the differences between the two strategies addressed in our study. In fact, when \\text{\\it Strategy 1}\\, is employed to model the $S$-$E$ interaction, even the strongest intra-environmental coupling strength produces a depleted back-flow mechanism: \nthe initially significant oscillations of the trace distance gradually fade to a small yet non-null value. While the dynamics persists to be non-Markovian even under \\text{\\it Strategy 1}\\,, the qualitative features of the evolution are indeed strongly dependent on the way information is propagated across. As we will argue in the following Section, the differences between the two dynamical strategies are due to the different way SECs are accounted for.\nAlso, while for the case of {\\it strategy 2} the maximum inherent in the definition of the measure of non-Markovianity used here is achieved for the input states $|0\\rangle$ and $|1\\rangle$, for {\\it strategy 1} we require the pair $({|0\\rangle-|1\\rangle})\/{\\sqrt{2}}, ({|0\\rangle+|1\\rangle})\/{\\sqrt{2}}$. We can quantify the difference between the two schemes by putting in place the measure stated in Eq.~\\eqref{measure} for different values of $\\delta$. In Fig.~\\ref{logplot} the results associated with the two schemes are shown for 100 different values of $\\delta\\in[0,{\\pi}\/{2}]$. 
\\text{\\it Strategy 2}\\, is spectacularly superior to \\text{\\it Strategy 1}\\, in setting a non-zero degree of non-Markovianity even at moderate values of the $S$-$E$ interaction, and it has a comparatively smaller threshold in the value of $\\delta$ above which the dynamics is signalled as non-Markovian. For $\\delta=\\gamma={\\pi}\/{2}$, ${\\cal N}=n$. In this case, in fact, we have complete swaps at every iteration in both system-environment and intra-environment interactions. \n\n\n\\begin{figure}[t]\n\\includegraphics[width=0.8\\columnwidth]{markovianitylogplotMauro.eps}\n\\caption{Measure ${\\cal N}$ plotted against the intra-environment interaction strength $\\delta$, for both the strategies addressed in this work. We have used $\\gamma=0.05$, $n=3\\times10^4$, and the super-environment is initialised in $\\ket{\\{0\\}}$. By approaching the full-swap condition embodied by $\\delta={\\pi}\/{2}$, the non-Markovianity measure shoots up. The vertical axis of the plot has been truncated at ${\\cal N}=10$ so as to improve the visibility of the details of the plot. The qualitative and quantitative differences inherent in the different strategies for the modelling of $S$-$E$ interactions are evident in the different thresholds in the value of $\\delta$ above which ${\\cal N}>0$. The measure is optimised over all possible pairs of initial $S$ states.} \n\\label{logplot}\n\\end{figure}\n\n\n\\subsection{The role of SECs}\\label{SEC}\n\nSo far, the importance of SECs in establishing non-Markovian features in our model has been only hinted at, without a rigorous quantitative justification. We now fill this gap by using a recently proposed framework that provides an upper bound on the changes of the trace distance based on the amount of SECs in the state at hand~\\cite{Mazzola12,Smirne}. 
By calling $\\beta(t,\\rho^S_{1,2})=\\partial_t D(\\rho^S_1,\\rho^S_2)$ and dropping the iteration label for ease of notation, this upper bound is formally given by\n\\begin{equation}\\label{upper}\n\\begin{aligned}\n\\beta\\left(t,\\rho_{1,2}^S\\right)&\\leq\\frac{1}{2}\\left( \\min\\limits_{k=1,2} \\left\\|\\text{Tr}_E\\left[\\hat{H},\\rho_k^S(t)\\otimes(\\rho_1^E(t)-\\rho_2^E(t))\\right]\\right\\|\\right.\\\\\n&\\left.+ \\left\\|\\text{Tr}_E\\left[\\hat{H},\\left(\\chi_1^{SE}(t)-\\chi_2^{SE}(t)\\right)\\right]\\right\\|\\right).\n\\end{aligned}\n\\end{equation}\nHere $\\rho^E_k(t)\\equiv\\rho^E_{k,n}(t)$ is the reduced state of the environment after $n$ iterations corresponding to the preparation of state $\\rho^S_{k,0}$ for the system, and $\\chi_j^{SE}(t) = \\rho_j^{SE}(t) - \\rho_j^S(t) \\otimes \\rho_j^E(t)$ is the $S$-$E$ correlation matrix. The first term of Eq.~\\eqref{upper} contains information about the way the environment evolves when different initial states of the system are input. The second term deals with the effects due to non-null SECs. We have examined the respective contributions from the two terms to explore the origin of the non-Markovianity we observed in Section~\\ref{nonmd}. The corresponding results are shown in Fig.~\\ref{upperbound}:\n\\begin{figure}[b]\n\\includegraphics[width=0.9\\columnwidth]{non-mmeasureupperboundnew1.eps}\n\\caption{The derivative of the trace distance (dashed curve) and the part of the upper bound of Eq.~\\eqref{upper} dependent on SECs (solid curve) plotted against $n$ for $\\delta ={\\pi}\/{2}$ and $\\gamma=0.05$. }\n\\label{upperbound}\n\\end{figure}\nThe upper bound is found to be completely determined by SECs, the term corresponding to environmental differences being null when the input system states are mutually orthogonal. In turn, this explains why the derivative of the trace distance corresponding to \\text{\\it Strategy 1}\\, was maximised for states other than $|0\\rangle$ and $|1\\rangle$. 
Indeed, as the origin of non-Markovianity relies entirely on the establishment of SECs, orthogonal input states in the computational basis in \\text{\\it Strategy 1}\\,, which cancels all of them, would only give rise to ${\\cal N}=0$. It is also worth mentioning that, when Eq.~\\eqref{upper} is computed using \\text{\\it Strategy 2}\\, and the input states that are optimal for \\text{\\it Strategy 1}\\,, we do observe a contribution to the quantitative value of the upper bound on $\\beta\\left(t,\\rho_{1,2}^S\\right)$ coming from the dynamical nature of the environment [i.e. the first term in Eq.~\\eqref{upper}]. Such a contribution becomes irrelevant for the optimal case of orthogonal input $S$ states.\n\n\nWe have extended our numerical analysis by considering random system-environment interactions: in the spirit of Monte Carlo simulations, we have introduced a random variable in our iterative model: should such a variable take a value smaller than a chosen threshold (which we let span the range of values $[0,1]$), $S$ and $E_j$ would interact at step $j$ of the evolution. This process was examined for varying threshold values in Fig.~\\ref{montedelta}. Clearly, as the threshold is increased, we allow for more system-environment interactions to occur, thus increasing the resulting degree of non-Markovianity, which appears to depend linearly on the chosen threshold. \n\n\\begin{figure}[b]\n\\includegraphics[width=0.9\\columnwidth]{scanningptgraph.eps}\n\\caption{The non-Markovianity measure ${\\cal N}$ plotted against the threshold of the ``coin\". For this graph we used $\\delta=\\pi\/2$, $\\eta=0.01$ and $n=10000$. When we take $\\delta=\\eta=\\pi\/2$ we find that ${\\cal N}=n$ for a threshold value equal to one. }\n\\label{montedelta}\n\\end{figure}\n\n\nWe have found that, for $\\delta={\\pi}\/{2}$, such a numerical experiment yields changes only in the actual degree of non-Markovianity, which depends on the value taken by the threshold. 
This is due to the fact that the full swap occurring at the sub-environmental level is not at all affected by a ``missed\" system-environment collision: such an event would merely shift the feed-back of the environment into the system to the next ``allowed\" interaction. Decreasing the probability of $S$-$E$ interaction will only affect the period of the oscillations of the trace distance, leaving their amplitude unaffected.\n\n\n\n\n\n\\section{Conclusions and outlook}\n\\label{conc}\nWe have studied the non-Markovian phenomenology arising from a collision-based microscopic model for system-environment interaction. Our analysis focused on the role that SECs play in the settling of non-Markovian features in the system's dynamics: by putting in place recently proposed tools for the in-depth analysis of the trace distance-based measure of non-Markovianity, and addressing explicitly two non-equivalent iterative protocols for the joint evolution of the system and a multi-particle environment, we have been able to provide evidence of the crucial contribution of SECs to the determination of the actual degree of non-Markovianity and the characterisation of the details of such evolution. A Monte Carlo-inspired numerical modelling, built by biasing the chance that a given system-environment interaction actually occurs, showed the persistence of the non-Markovian character of the overall evolution. This analysis opens up interesting avenues for the thermodynamics-inspired exploration of non-Markovianity in collision-based models.\n\n\n\\acknowledgments\nWe thank F. Ciccarello for his helpful insight in the development of this project, Laura Mazzola for invaluable discussions and Andr\\'e Xuereb for his constructive suggestions. RMcC acknowledges financial support from the Northern Ireland DEL. 
MP thanks the UK EPSRC for support through a Career Acceleration Fellowship and a grant awarded under the ``New Directions for Research Leaders\" initiative (EP\/G004579\/1), the Alexander von Humboldt Stiftung, the John Templeton Foundation (grant 43467), and the EU project TherMiQ (grant agreement 618074). \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nAnomalous transport phenomena in Weyl metals \\cite{WM1,WM2,WM3,WM_Reviews}, resulting from the effects of both the Berry curvature and the chiral anomaly \\cite{Nielsen_Ninomiya}, have been investigated extensively in both experimental and theoretical respects. In particular, both anomalous Hall and chiral magnetic effects have been discussed theoretically based on the Boltzmann transport theoretical framework modified by the Berry curvature and the chiral anomaly \\cite{CME1,CME2,CME3,CME4,CME5,CME6,CME7, Boltzmann_Chiral_Anomaly1,Boltzmann_Chiral_Anomaly2,Boltzmann_Chiral_Anomaly3,Boltzmann_Chiral_Anomaly4,Boltzmann_Chiral_Anomaly5,Boltzmann_Chiral_Anomaly6,Boltzmann_Chiral_Anomaly7, Boltzmann_Chiral_Anomaly8,AHE1,AHE2,AHE3}. The so-called negative longitudinal magnetoresistivity has been observed experimentally in various types of Weyl metals \\cite{NLMR_First_Exp,NLMR_Followup_I,NLMR_Followup_II,NLMR_Followup_III,NLMR_Followup_IV, NLMR_Followup_V,NLMR_Followup_VI,NLMR_Followup_VII,NLMR_Followup_VIII}, and is also well understood within the topologically ``generalized\" Boltzmann transport theory \\cite{CME1,CME2,CME3,CME4,CME5,CME6,CME7, Boltzmann_Chiral_Anomaly1,Boltzmann_Chiral_Anomaly2,Boltzmann_Chiral_Anomaly3,Boltzmann_Chiral_Anomaly4,Boltzmann_Chiral_Anomaly5,Boltzmann_Chiral_Anomaly6,Boltzmann_Chiral_Anomaly7,Boltzmann_Chiral_Anomaly8}. However, not only the Boltzmann transport theory but also Maxwell's electrodynamics should be modified by such topological ingredients. 
The chiral-anomaly-modified Maxwell theory is referred to as the theory of axion electrodynamics, where an emergent $\\bm{E} \\cdot \\bm{B}$ term is introduced into the conventional Maxwell action of electromagnetic fields \\cite{Axion_EM}. The angle coefficient of the $\\bm{E} \\cdot \\bm{B}$ term turns out to be a function of space and time, originating from an effective Weyl band structure \\cite{Boltzmann_Chiral_Anomaly8,AHE1,AHE2,Disordered_Weyl_Metal1,Disordered_Weyl_Metal2,Anomaly_ISBWM}. Unfortunately, both theoretical and experimental studies of this axion electrodynamics have not yet been performed satisfactorily. In particular, light scattering measurements are still missing as far as we know, so the axion electrodynamics theory in Weyl metals with broken time-reversal symmetry has not been verified yet.\n\nIn this study we investigate both transmission\/reflection coefficients and Faraday\/Kerr rotation angles in spin-orbit coupled Dirac metals, in particular as a function of both the external magnetic field and the frequency of light for various configurations of light propagation. It is well known that the applied magnetic field splits the four-fold degeneracy into a pair of two-fold degeneracies, which gives rise to a Weyl band structure. Based on the chiral anomaly calculation, the $\\theta$ coefficient of the topological-in-origin $\\bm{E} \\cdot \\bm{B}$ term has been found, the gradient of which is given by the distance between the pair of Weyl points resulting from the applied magnetic field \\cite{AHE1,AHE2,Anomaly_ISBWM}. As a result, the electromagnetic properties of spin-orbit coupled Dirac metals under external magnetic fields are governed by axion electrodynamics. 
Our theoretical predictions on the magnetic-field dependence, more precisely the gradient-$\\theta$ dependence, can be regarded as a guideline for experiments, providing one of the fingerprints of axion electrodynamics in Weyl metals.\n\nBefore going into our work, we would like to discuss recent studies related to ours. Faraday\/Kerr rotation angles have been measured as a function of both frequencies and external magnetic fields for the surface state of a topological insulator, where anomalous Hall effects associated with the axion electrodynamics give rise to such rotations \\cite{Axion_EM_Exp_TI_I,Axion_EM_Exp_TI_II}. Surface plasmon modes were also observed at the interface between a topologically non-trivial cylindrical core and a topologically trivial surrounding material, involving the axion electrodynamics and modified constitutive relations \\cite{Axion_EM_Exp_TI_III}. Various interesting theoretical studies of electromagnetic waves in a Weyl metallic state have been performed during the past few years \\cite{AxionEMreferee1,AxionEMreferee2,AxionEMreferee3,Axion_EM_Th_WM,AxionEMreferee4}. In particular, transmission\/reflection coefficients and Faraday\/Kerr rotations in the axion electrodynamics have been discussed in Refs. \\cite{AxionEMreferee1,AxionEMreferee2,AxionEMreferee3,Axion_EM_Th_WM}. 
We would like to point out that our results differ from these previous studies in several aspects, although some parts are consistent with each other: i) our normal modes inside the Weyl metal phase differ from those of the previous studies when the pair of Weyl nodes is aligned along the direction of the oscillating magnetic field, ii) the reflectivity enhancement in the case of $\\v{E}\/\/\\v{B_{ext}}$ results from the longitudinal negative magnetoresistance, iii) we find eigenmodes depending on the parameter $\\eta$, which is the ratio between the conventional Hall conductivity from normal electrons and the anomalous Hall conductivity from Weyl electrons, and iv) the external magnetic-field dependencies of both transmission\/reflection coefficients and Faraday\/Kerr rotation angles are fingerprints of the Weyl metal phase.\n\n\\section{Axion electrodynamics in Weyl metals with broken time-reversal symmetry \\label{axion electrodynamics in weyl metals} \\label{generalsol}}\n\n\\subsection{Axion electrodynamics \\label{Axionem}}\n\nOur theoretical setup is shown in Fig. \\ref{incidenttoweyl}. The setup is similar to that used for the optical response of a ferromagnetic system. Indeed, some optical properties such as the Faraday\/Kerr rotation resemble those of a ferromagnetic system (see Fig. \\ref{eliptic}a). However, there exist significant differences between the two systems because the origins of their anomalous electromagnetic responses are totally different. For instance, the anomalous Hall conductivity ($\\sigma_{\\text{Hall}}$) in a ferromagnetic system mainly originates from three types of scattering mechanisms of conduction electrons due to magnetic moments \\cite{AHE_Review}, and the Faraday\/Kerr rotation is mainly due to a non-diagonal magnetic permeability tensor. 
However, in Weyl metals, the anomalous Hall conductivity ($\\sigma_{\\text{Weyl}}$) and the Faraday\/Kerr rotation both originate from $\\grad \\theta$, which is linearly proportional to the momentum-space distance between a Weyl pair. Therefore, not only should the magnetic-field dependence of the Hall effect differ between the two systems, but the Faraday\/Kerr rotation and an electromagnetic (EM) mode oscillating in the longitudinal direction also exist even with diagonal magnetic permeability and electric permittivity in a Weyl metal state, as described below.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=8cm]{incidenttoweyl.pdf}\n\\caption{Schematic diagram of the physical setup}\n\\label{incidenttoweyl}\n\\end{figure}\n\nNow, we present how $\\grad \\theta$ gives rise to the anomalous Hall conductivity ($\\sigma_{\\text{Weyl}}$) and the Faraday\/Kerr rotation by solving the modified Maxwell equations in a Weyl metal phase. An incident beam propagating along the $\\bm{\\hat{z}}-$direction shines homogeneously on an infinite $xy$ plane of a Weyl metal under an arbitrary external magnetic field, $\\v{B_\\text{ext}} = (B_x,B_y,B_z)$. Here, the Weyl-metal sample is semi-infinite in the $\\bm{\\hat{z}}-$direction. Electromagnetic properties of the Weyl metal state are described by the axion electrodynamics \\cite{Axion_EM}\n\\begin{eqnarray}\n\\div \\textbf{D} &=& \\rho + \\frac{2 \\alpha}{\\pi} \\sqrt{\\frac{\\epsilon_0}{\\mu_0}} \\grad \\theta \\cdot \\textbf{B} \\label{Maxwell1} \\\\\n\\div \\textbf{B} &=& 0 \\label{Maxwell2}\\\\\n\\curl \\v{E} &=& -\\pd{\\v{B}}{t} \\label{Maxwell3}\\\\\n\\curl \\v{H} &=& \\pd{\\v{D}}{t} + \\v{J} -\\frac{2\\alpha}{\\pi}\\sqrt{\\frac{\\epsilon_0}{\\mu_0}} \\grad{\\theta} \\times \\v{E} ,\n\\label{Maxwell4}\n\\end{eqnarray}\nmodified by the appearance of a position-dependent $\\theta-$term relative to the ordinary Maxwell dynamics. 
Actually, one finds $\\grad \\theta = g \\v{B}$ with a Land\\'e $g-$factor, where $2 \\grad \\theta$ gives the momentum space distance between a pair of Weyl points \\cite{AHE1,Disordered_Weyl_Metal1,Disordered_Weyl_Metal2}. Here, we resort to $\\v{D} = \\epsilon \\v{E}$ and $\\v{B} = \\mu \\v{H}$, where homogeneous permittivity $\\epsilon$ and permeability $\\mu \\approx \\mu_0$ are taken into account. For simplicity, we replace $\\frac{\\alpha}{\\pi}\\sqrt{\\frac{\\epsilon_0}{\\mu_0}}$ with $\\alpha$.\n\nTaking the curl of Eq. (\\ref{Maxwell3}), we obtain\n\\begin{eqnarray}\n\\curl \\curl \\v{E} &=& \\grad (\\div \\v{E}) - \\grad^2 \\v{E} \\nonumber \\\\\n &=& \\grad \\Big(\\frac{\\rho}{\\epsilon} + \\frac{2 \\alpha}{\\epsilon} \\grad \\theta \\cdot \\v{B}\\Big) - \\grad^2 \\v{E} \\nonumber \\\\\n &=& \\curl \\Big(-\\pd{\\v{B}}{t}\\Big) \\nonumber \\\\\n &=& -\\mu \\epsilon \\partial^2_t \\v{E} -\\mu \\partial_t \\v{J} + 2\\alpha \\mu \\partial_t (\\grad \\theta \\times \\v{E}), \\label{curl3}\n\\end{eqnarray}\nwhere Eqs. (\\ref{Maxwell1}) and (\\ref{Maxwell4}) have been utilized for the second and last lines, respectively.\nAssuming the constitutive relation $\\v{J} = \\boldsymbol{\\sigma} \\cdot \\v{E}$, we obtain an eigenvalue equation composed of the $\\v{E}$ field only,\n\\begin{eqnarray}\n\\grad (\\div{\\v{E}})-\\grad^2 \\v{E} &=& -\\mu \\epsilon \\partial^2_t\\v{E} -\\mu \\partial_t \\boldsymbol{\\sigma} \\cdot \\v{E} + 2\\alpha \\mu \\partial_t \\grad \\theta \\times \\v{E}, \\nonumber \\label{curlcurlE} \\\\\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\boldsymbol{\\sigma} = \\left(\\begin{array}{ccc} (1+c_{\\omega x}B_{x}^2)\\sigma & \\sigma_{Bz} & -\\sigma_{By} \\\\\n-\\sigma_{Bz} & {(1+c_{\\omega y}B_{y}^2)\\sigma} & \\sigma_{Bx} \\\\\n\\sigma_{By} & -\\sigma_{Bx} & (1+c_{\\omega z}B_{z}^2)\\sigma \\end{array}\\right) . \\nonumber\n\\end{equation}\nHere, the conductivity tensor $\\boldsymbol{\\sigma}$ consists of both diagonal and off-diagonal components. 
The diagonal component $\\sigma_{ii}=(1+c_{\\omega i}B_{i}^2)\\sigma$ is referred to as the chiral-anomaly enhanced longitudinal magnetoconductivity, where the Drude conductivity is modified by the chiral anomaly in the Weyl metal phase \\cite{WM1,WM2,WM3,WM_Reviews,Nielsen_Ninomiya}. $c_{\\omega i}$ is a positive numerical constant, referred to as the chiral anomaly coefficient in the longitudinal magnetoconductivity. The off-diagonal component $\\sigma_{ij} = \\epsilon_{ijk} \\sigma_{Bk}$ with the antisymmetric tensor $\\epsilon_{ijk}$ is the conventional Hall conductivity ($\\sigma_{Bk}$) driven by the $k-$th component of the external magnetic field $\\v{B_{ext}}$. This conventional Hall effect results from normal electrons, which can coexist with Weyl electrons. We point out that the anomalous Hall effect has been introduced into the axion electrodynamics via $\\grad{\\theta}$ terms, more precisely, the last term in Eq. (\\ref{curlcurlE}).\n\nIn order to solve Eq. (\\ref{curlcurlE}), it is essential to deal with $\\div{\\v{E}}$. Here, we discuss how to find $\\div{\\v{E}}$ carefully since the Weyl metal physics is encoded in this quantity. More concretely, this term plays an important role in determining our normal modes inside the Weyl metal state. Our careful treatment of this term is a distinguishing point compared with the previous studies \\cite{AxionEMreferee1,AxionEMreferee2,AxionEMreferee3,Axion_EM_Th_WM}. 
Resorting to the constitutive relation $\\v{J} = \\boldsymbol{\\sigma} \\cdot \\v{E}$, we rewrite the continuity equation $\\div{\\v{J}} = - \\pd{\\rho}{t}$ as\n\\begin{eqnarray}\n- \\pd{\\rho}{t} &=& \\div{(\\boldsymbol{\\sigma} \\cdot \\v{E})} \\nonumber \\\\\n&\\approx&\\frac{\\sigma_{zz}}{\\epsilon} \\rho + \\frac{2\\alpha \\sigma_{zz}}{\\epsilon} (\\grad \\theta \\cdot \\v{B}) ,\n\\label{continuity}\n\\end{eqnarray}\njustified when $\\sigma_{zz} \\gg \\sigma_{xy},~\\sigma_{yz},~\\sigma_{zx}$, which indeed holds in normal situations.\n\nConsidering the electromagnetic field given by $\\v{B}=\\v{B}_{ext}+\\v{B}_{dyn}$ and $\\v{B}_{dyn}=\\v{B_0} e^{i(kz-\\omega t)}$, one can solve Eq. (\\ref{continuity}) for $\\rho(t)$ as follows\n\\begin{eqnarray}\n\\rho(t) &=& -2\\alpha g \\sigma_{zz} \\Big(\\frac{\\v{B^2_{ext}}}{\\sigma_{zz}}+\\frac{2\\v{B_{ext}}\\cdot \\v{B_0}e^{i(kz-\\omega t)}}{\\sigma_{zz}-i\\epsilon \\omega}\\nonumber \\\\ &&+\\frac{\\v{B_0}^2 e^{2i(kz-\\omega t)}}{\\sigma_{zz} - 2i \\epsilon \\omega}\\Big) + C_0e^{\\frac{-\\sigma_{zz} t}{\\epsilon}}\\nonumber \\\\ &\\approx& -2 \\alpha g \\Big\\{B^2_{ext}+2B_{ext}B_0e^{i(kz-\\omega t)} \\Big(1+\\frac{\\omega i}{\\sigma_{zz}\/\\epsilon} \\Big) \\nonumber \\\\ \\label{rho}\n&&+B_0^2e^{2i(kz-\\omega t)} \\Big(1+\\frac{2\\omega i}{\\sigma_{zz}\/\\epsilon} \\Big)\\Big\\} \\nonumber.\n\\end{eqnarray}\nHere, the typical metallic condition $\\sigma_{zz}\/\\epsilon \\gg \\omega$ has been used \\cite{Jackson}, which allows us to keep only the first order in $\\frac{\\omega}{\\sigma_{zz}\/\\epsilon}$. In addition, we ignore the $e^{\\frac{-\\sigma_{zz} t}{\\epsilon}}$ term due to fast relaxation in metals \\cite{Jackson}. 
Then, this expression can be reformulated in the following way\n\\begin{eqnarray}\n\\rho(t) &=&-2\\alpha(\\grad{\\theta}\\cdot \\v{B})-4\\alpha g \\v{B_{ext}}\\cdot \\v{B_0}e^{i(kz-\\omega t)} \\frac{\\omega i}{\\sigma_{zz}\/\\epsilon} \\nonumber \\\\\n&&-2\\alpha g B_0^2 e^{2i(kz-\\omega t)}\\frac{2\\omega i}{\\sigma_{zz}\/\\epsilon} . \\label{Density_Harmonic_Approx}\n\\end{eqnarray}\nThe second harmonic term ($\\propto e^{2i(kz-\\omega t)}$) can also be neglected in the case of $|\\v{B_{ext}}| \\gg |\\v{B_{dyn}}|$.\n\nIf the external magnetic field $\\v{B_{ext}}$ is perpendicular to the oscillating field $\\v{B_{dyn}}$, i.e., $\\v{B_{ext}} \\cdot \\v{B_{dyn}}=0$, we have $\\rho(t) \\approx 0$ at the level of the harmonic approximation. As a result, we reach the conclusion\n\\begin{equation}\n\\div{\\v{E}} = \\frac{\\rho}{\\epsilon} + \\frac{2 \\alpha (\\grad \\theta \\cdot \\v{B})}{\\epsilon} \\approx 0.\n\\end{equation}\nIn this case, which corresponds to Figs. \\ref{eliptic}a and \\ref{eliptic}b, an eigenmode with a longitudinal component (oscillating in the $z$ direction) cannot appear due to the divergence-free condition, and only transverse modes of the electromagnetic field are observable within the harmonic solution. On this point, our result is consistent with the previous studies \\cite{AxionEMreferee1,AxionEMreferee2,AxionEMreferee3,Axion_EM_Th_WM}. On the other hand, if $\\v{B_{ext}} \\cdot \\v{B_{dyn}} \\neq 0$, which corresponds to Figs. \\ref{eliptic}c and \\ref{eliptic}d, the harmonic term in the charge density is nonvanishing [Eq. (\\ref{Density_Harmonic_Approx})], resulting in\n\\begin{equation}\n\\div{\\v{E}} \\approx -\\frac{4i\\alpha g \\omega}{\\sigma_{zz}} \\v{B_{ext}} \\cdot \\v{B_{0}}e^{i(kz-\\omega t)}. \\label{divE}\n\\end{equation}\nThis is a rather striking result since this condition allows the longitudinal mode inside the Weyl metal phase. 
We emphasize that this anomalous behavior originates from the chiral anomaly, regarded as a characteristic feature of the axion electrodynamics. This is where we go beyond the previous studies \\cite{AxionEMreferee1,AxionEMreferee2,AxionEMreferee3,Axion_EM_Th_WM}. Although this aspect has been discussed very briefly in Ref. \\cite{Axion_EM_Th_WM}, their eigenvectors differ from ours.\n\nEquation (\\ref{divE}) determines the first term in Eq. (\\ref{curlcurlE}) as\n\\begin{equation}\n\\grad (\\div{\\v{E}}) \\approx \\frac{4\\alpha g k^2}{\\sigma(1+c_{\\omega z}B_z^2)}\\left(\\begin{array}{ccc} 0 & 0 & 0 \\\\ 0 & 0 & 0 \\\\ B_y & -B_x & 0 \\\\ \\end{array} \\right) \\v{E}. \\label{graddiv}\n\\end{equation}\nThen, Eq. (\\ref{curlcurlE}) reads\n\\begin{eqnarray}\n&&\\left(\\begin{array}{ccc}\nk^2 & 0 & 0 \\\\\n0 & k^2 & 0 \\\\\n\\frac{4\\alpha gB_y}{\\sigma(1+c_{\\omega z}B_z^2)}k^2 & -\\frac{4\\alpha gB_x}{\\sigma(1+c_{\\omega z}B_z^2)}k^2 & k^2 \\\\\n\\end{array} \\right) \\v{E} \\nonumber \\\\\n&=&\\mu \\epsilon \\omega^2\\v{E} + (\\mu \\omega i)\\boldsymbol{\\sigma}\\cdot \\v{E} - 2\\alpha g \\mu \\omega i \\left(\\begin{array}{ccc}\n0 & -B_z & B_y \\\\\nB_z & 0 & -B_x \\\\\n-B_y & B_x & 0 \\\\\n\\end{array} \\right)\\v{E}. \\nonumber\n\\end{eqnarray}\nThis eigenvalue equation can be reduced to a more compact form\n\\begin{equation}\n\\left(\\begin{array}{ccc}\nk^2-\\lambda_x & id_z & -id_y \\\\\n-id_z & k^2-\\lambda_y & id_x \\\\\naB_y k^2 + id_y & -aB_x k^2-id_x & k^2-\\lambda_z \\\\\n\\end{array} \\right) \\v{E} = 0, \\label{matrixEq.0}\n\\end{equation}\nwhere $d_j \\equiv -\\mu \\omega (2\\alpha g B_j + \\sigma_{Bj})$, $\\lambda_j \\equiv \\mu \\epsilon \\omega^2 + i\\mu \\omega (1+c_{\\omega j}B_j^2)\\sigma$, and $a \\equiv \\frac{4\\alpha g}{\\sigma(1+c_{\\omega z}B_z^2)}$.\n\nAlthough Eq. (\\ref{matrixEq.0}) looks simple, it turns out that the general expressions for its eigenvalues and eigenvectors are too complicated to be useful for physical insight. 
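As a quick numerical cross-check of Eq. (\ref{matrixEq.0}) (a sketch under assumptions, not part of the derivation): switching off the conventional Hall and charge-density terms ($\sigma_{Bj}=0$, $a=0$, $c_{\omega j}=0$) and taking the field along $\hat{\v{z}}$, the matrix reduces to $(k^2-\lambda)\mathbb{1}+iK$ with $K\v{v}=-\v{d}\times\v{v}$, so the allowed $k^2$ are simply the eigenvalues of $\lambda\mathbb{1}-iK$. The parameter values below are illustrative assumptions only.

```python
import numpy as np

# Illustrative (assumed) parameter values, SI units; not fitted to any sample.
mu = 4e-7 * np.pi        # permeability ~ mu_0
eps = 10 * 8.854e-12     # permittivity ~ 10 eps_0
sig = 1e5                # Drude conductivity (Ohm^-1 m^-1)
om = 1e14                # angular frequency (rad/s)
agB = 0.1 * eps * om     # shorthand for 2*alpha*g*B_ext (assumed size); B // z

lam = mu * eps * om**2 + 1j * mu * om * sig   # lambda_j with c_{omega j} = 0
d = np.array([0.0, 0.0, -mu * om * agB])      # d_j = -mu*omega*(2 alpha g B_j)

# Cross-product matrix K with K v = -d x v; nontrivial solutions of
# [(k^2 - lam) I + i K] E = 0 mean k^2 is an eigenvalue of lam*I - i*K.
K = np.array([[0.0,   d[2], -d[1]],
              [-d[2], 0.0,   d[0]],
              [d[1], -d[0],  0.0]])
k2 = np.linalg.eigvals(lam * np.eye(3) - 1j * K)
k = np.sqrt(k2)          # principal branch: Re(k) >= 0, wave decays into the sample

# Closed-form "eigenvalue 1" quoted in the text, with 2*alpha*g*B_ext -> agB
A = 1 - agB / (eps * om)
S = np.sqrt(A**2 + (sig / (eps * om))**2)
k1 = om * np.sqrt(mu * eps / 2) * (np.sqrt(S + A) + 1j * np.sqrt(S - A))
```

One of the three numerically obtained momenta coincides with the closed-form $k_1$ of the simplified case presented below, since $k_1^2 = \lambda - \mu\omega\, (2\alpha g B_{ext})$.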
More concretely, the eigenvalue equation is given by a third-order algebraic equation for $k^{2}$, whose general solutions follow from Cardano's method \\cite{Cardano_Method}. However, these general solutions contain complex numbers inside the roots, and it is necessary to decompose their real and imaginary parts explicitly in order to obtain a physical interpretation. It turns out that this procedure is not possible, generally speaking, since the order of the algebraic equation is doubled by this decomposition. In this respect it is better to consider a concrete physical situation, which will be discussed in the next section. Here, we present a general solution as a reference for a specific case, given by $\\sigma_{Bj} \\approx 0$, $a \\ll \\textrm{Im}[\\lambda_j]$, and $c_{\\omega i} \\approx 0$. This corresponds to the absence of the conventional Hall effect, the charge density modulation, and the positive longitudinal magnetoconductivity. We keep only the anomalous Hall effect due to the Berry curvature, which makes the matrix symmetric. 
As a result, we find three eigenvalues:\n\\begin{widetext}\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\n\\textrm{eigenvalue 1)}\\quad k_1 = k_{r1} + i k_{i1}\\\\\n\\quad k_{r1} = \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{(1-\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}+(1-\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})]^{1\/2} \\\\\n\\quad k_{i1}=\\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{(1-\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}-(1-\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})]^{1\/2} \\\\\n\\textrm{eigenvalue 2)}\\quad k_2 = k_{r2} + i k_{i2}\\\\\n\\quad k_{r2} = \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{(1+\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}+(1+\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})]^{1\/2} \\\\\n\\quad k_{i2}=\\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{(1+\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}-(1+\\frac{2\\alpha g B_{ext}}{\\epsilon \\omega})]^{1\/2} \\\\\n\\textrm{eigenvalue 3)}\\quad k_3 = k_{r3} + i k_{i3}\\\\\n\\quad k_{r3} = \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2}+1]^{1\/2}\\\\\n\\quad k_{i3}=\\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2}-1]^{1\/2} \\\\\n\\end{array} \\right. 
\\label{eigenvaluescalarsigma}\n\\end{equation}\n\\end{widetext}\nThe corresponding eigenvectors are given by\n\\begin{equation}\n\\left\\{\n\\begin{array}{lll}\n\\textrm{mode 1)}\\quad & \\left(\\begin{array}{l} -B_xB_y - iB_z\\sqrt{B_x^2 +B_y^2 + B_z^2} \\\\ B_x^2 + B_z^2 \\\\ -B_yB_z + iB_x\\sqrt{B_x^2+B_y^2+B_z^2} \\end{array}\\right) \\\\\n\\textrm{mode 2)}\\quad & \\left(\\begin{array}{l} B_xB_y - iB_z\\sqrt{B_x^2 +B_y^2 + B_z^2} \\\\ -(B_x^2 + B_z^2) \\\\ B_yB_z + iB_x\\sqrt{B_x^2+B_y^2+B_z^2} \\end{array}\\right) \\\\\n\\textrm{mode 3)}\\quad & \\left(\\begin{array}{l} B_x \\\\ B_y \\\\ B_z \\end{array}\\right) \\\\\n\\end{array} \\right.\n\\end{equation}\nHere, the normalization is given by $B_{x}^{2} + B_{y}^{2} + B_{z}^{2}$.\nThese eigenvectors are self-consistent in the case of $\\v{B_{ext}}\/\/\\hat{\\v{z}}$, i.e., $B_{z} \\not= 0$ and $B_{x} = B_{y} = 0$, where $\\hat{\\v{z}}$ is the propagating direction of light. On the other hand, they are not self-consistent in the case of $\\v{B_{ext}}\/\/\\hat{\\v{x}}$, i.e., $B_{x} \\not= 0$ and $B_{y} = B_{z} = 0$, where the existence of the longitudinal component violates the divergence-free condition of the electric field. It turns out that $a = 0$ in the above approximation is not consistent for this situation. As discussed before, there appear longitudinal charge-density fluctuations, responsible for the existence of the longitudinal component of the eigenvector. In this respect, $a = 0$ is merely an approximation made to obtain an analytic expression. Below, we discuss full solutions for two specific directions of the external magnetic field.\n\n\\subsection{Solution in the case of $\\v{B_{ext}}\/\/\\hat{\\v{z}}$}\n\nThis configuration corresponds to Fig. \\ref{eliptic}a. The continuity equation allows us to take $\\div \\v{E} \\approx 0$ due to fast relaxation of electrons. As a result, Eq. 
(\\ref{matrixEq.0}) becomes\n\\begin{eqnarray}\n&& \\left(\\begin{array}{ccc} k^2-\\lambda & id_z & 0 \\\\ -id_z & k^2 - \\lambda & 0 \\\\ 0 & 0 & k^2 - \\lambda_z \\\\ \\end{array} \\right) \\v{E} = 0 , \\label{matrixequation}\n\\end{eqnarray}\nwhere $\\lambda \\equiv \\mu \\epsilon \\omega^2 + \\mu \\omega i\\sigma$. Obviously, the $\\sigma_\\text{Weyl}\\equiv 2\\alpha g B_z$ term in $d_z$, which is proportional to the external magnetic field, plays the role of an effective Hall conductivity, and it appears only when the field is oscillating, i.e., $\\omega \\neq 0$. $\\sigma_\\text{Weyl}$ is linear in the external magnetic field, whereas the conventional Hall effect or an anomalous Hall effect in a ferromagnet saturates with increasing external magnetic field. This distinct behavior of $\\sigma_\\text{Weyl}$ gives rise to characteristic magnetic-field dependencies for light propagating in a Weyl metal state.\n\nConsidering $\\v{E}=(E_x,E_y,E_z)e^{i(kz-\\omega t)}$, we can find eigenvalues and eigenvectors of Eq. (\\ref{matrixequation}). 
The eigenvalues are given by\n\\begin{widetext}\n\\begin{eqnarray}\nk_{r1} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}\\left[\\sqrt{\\left(1-\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}+\\left(1-\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)\\right]^{1\/2} , \\label{kr1}\\\\\nk_{i1} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}\\left[\\sqrt{\\left(1-\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}-\\left(1-\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)\\right]^{1\/2} \\label{ki1}\n\\end{eqnarray}\nfor the momentum $k_1 = k_{r1} + i k_{i1}$,\n\\begin{eqnarray}\nk_{r2} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}\\left[\\sqrt{\\left(1+\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}+\\left(1+\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)\\right]^{1\/2} \\label{kr2}, \\\\\nk_{i2} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}\\left[\\sqrt{\\left(1+\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)^2+(\\frac{\\sigma}{\\epsilon \\omega})^2}-\\left(1+\\frac{|1+\\eta|\\sigma_\\text{Weyl}}{\\epsilon \\omega}\\right)\\right]^{1\/2} \\label{ki2}\n\\end{eqnarray}\nfor the momentum $k_2 = k_{r2} + i k_{i2}$,\n\\end{widetext}\nand\n\\begin{eqnarray}\nk_{r3} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2(1+c_{\\omega}B_\\text{ext}^2)^2}+1]^{1\/2} \\label{kr3}, \\\\\nk_{i3} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2(1+c_{\\omega}B_\\text{ext}^2)^2}-1]^{1\/2} \\label{ki3}\n\\end{eqnarray}\nfor the momentum $k_3 = k_{r3} + i k_{i3}$, respectively. The real part of the eigenvalues ($k_{rn}$) describes the energy of a propagating mode inside a Weyl metal (dispersion relation), and the imaginary part ($k_{in}$) gives the inverse of skin depth. 
Here, we introduced $\\eta$ as a phenomenological parameter, defined as the ratio of the conventional Hall conductivity to the anomalous Hall conductivity (the latter being proportional to the momentum-space distance between the pair of Weyl points) and given by\n\\begin{equation}\n\\eta \\equiv \\frac{\\sigma_{Bz}}{\\sigma_{\\text{Weyl}}}.\n\\end{equation}\nThe Hall coefficient ($\\sigma_{Bz}$) can be modified, depending on the sample situation in experiments. In this respect one can investigate dispersion relations as a function of $\\eta$. For example, $\\eta \\rightarrow -1$ describes the dispersion relation of a normal metal state with $k_{rn} = \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2}+1]^{1\/2}$ and $k_{in} = \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2}-1]^{1\/2}$. On the other hand, $\\eta \\rightarrow 0$ gives rise to that of a pure Weyl metal phase without the conventional Hall effect.\n\n\n\nThe corresponding eigenvector for each eigenvalue is given by\n\\begin{equation}\n\\begin{array}{ccc}\n\\textrm{ mode 1)} \\left(\\begin{array}{c} 1 \\\\ i \\\\ 0\\\\ \\end{array}\\right) & \\textrm{ mode 2)} \\left(\\begin{array}{c} 1 \\\\ -i \\\\ 0 \\\\ \\end{array}\\right) & \\textrm{mode 3)}\n\\left(\\begin{array}{c} 0 \\\\ 0 \\\\ 1 \\\\ \\end{array}\\right) , \\\\\n\\label{eigenvectoreta}\n\\end{array}\n\\end{equation}\nrespectively, where the first two are circularly-polarized transverse modes with different chiralities and the last one is a linearly-polarized longitudinal mode. Note that for this specific configuration of $\\v{B_{ext}}\/\/\\hat{z}$, the third eigenmode does not exist in the first-order harmonic approximation, which results from the divergence-free condition of the electric field, i.e., $\\div{\\v{E}}=0 \\rightarrow E_z=0$. These eigenvectors are consistent with all of the previous studies \\cite{AxionEMreferee1,AxionEMreferee2,AxionEMreferee3,Axion_EM_Th_WM}. 
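The two limits of $\eta$ quoted above can be cross-checked numerically from Eqs. (\ref{kr1})$-$(\ref{ki2}); the following sketch (with assumed, illustrative parameter values and our own variable names) verifies that $\eta \rightarrow -1$ recovers the normal-metal dispersion, while $\eta = 0$ splits the two circular modes:

```python
import numpy as np

# Assumed illustrative parameters (SI units), not fitted to a real sample.
mu, eps, sig, om = 4e-7 * np.pi, 10 * 8.854e-12, 1e5, 1e14
s_weyl = 0.05 * eps * om   # anomalous Hall conductivity sigma_Weyl (assumed size)

def k_circular(sign, eta):
    """k_r, k_i of the circular modes, Eqs. (kr1)-(ki2); sign=-1: mode 1, +1: mode 2."""
    A = 1 + sign * abs(1 + eta) * s_weyl / (eps * om)
    S = np.sqrt(A**2 + (sig / (eps * om))**2)
    pref = om * np.sqrt(mu * eps / 2)
    return pref * np.sqrt(S + A), pref * np.sqrt(S - A)

# eta -> -1: both circular modes collapse onto the normal-metal dispersion
k_normal_r = om * np.sqrt(mu * eps / 2) * np.sqrt(np.sqrt(1 + (sig / (eps * om))**2) + 1)

# eta = 0: pure Weyl metal; the two chiralities acquire different wave numbers,
# which is the origin of the Faraday/Kerr rotation discussed later.
```

The splitting $k_{r2} > k_{r1}$ at $\eta = 0$ grows with $\sigma_{\text{Weyl}}$, i.e., linearly with the external magnetic field in this sketch.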
However, in general situations, for example, the case of $\\v{B_{ext}}\/\/\\hat{x}$, we emphasize that charge-density fluctuations driven by the electromagnetic field contribute ($\\div{\\v{E}} \\neq 0$), so the propagating light in the Weyl metal phase should have a longitudinal component.\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{kdependence.pdf}\n\\caption{Frequency and magnetic field dependencies for each eigenmode in a specific configuration (Fig. \\ref{eliptic}a). a, c, and e. The real parts (propagating wave number) of eigenvalues 1, 2, and 3 as a function of $\\omega$ and $\\grad \\theta$. We recall that the $k_{3}$ mode does not exist in this configuration, but we show it as a reference. The magenta-vertical linecuts correspond to Fig. \\ref{kdependence}g whereas the green-horizontal linecuts correspond to Fig. \\ref{kdependence}h, respectively. b, d, and f. The imaginary parts (inverse of skin depth) of eigenvalues 1, 2, and 3 as a function of $\\omega$ and $\\grad \\theta$. g. Frequency dependencies of the wave number $k_r$ (dispersion relation) for each eigenmode at a finite field, $\\grad \\theta = 10^{10}$ m$^{-1}$. h. Magnetic field dependencies of the wave number $k_r$ for each eigenmode at a finite frequency, $\\omega = 10^{14}$ Hz.}\n\\label{kdependence}\n\\end{figure}\nWe show frequency $\\omega$ and magnetic field $\\grad \\theta = g \\v{B_{ext}}$ dependencies of the eigenvalues for the situation $\\v{B_{ext}}\/\/\\hat{\\v{z}}$ in Fig. \\ref{kdependence}. We set several numerical parameters to plot the eigenvalues, for example, $\\sigma \\sim 10^5$ $\\Omega^{-1}$m$^{-1}$ and $\\epsilon \\sim 10\\epsilon_0$, but such parameters may differ in a real sample, so the appropriate frequency range in an experiment might deviate from our estimates. Note that a proper frequency range of our results should be $\\omega < \\sigma\/\\epsilon$. 
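For orientation, the scales just quoted can be estimated in a few lines (a sketch using the assumed values $\sigma = 10^5~\Omega^{-1}$m$^{-1}$, $\epsilon = 10\epsilon_0$, and $\omega = 10^{14}$ Hz); it checks the condition $\omega < \sigma\/\epsilon$ and estimates the skin depth of the normal-metal-like mode from $k_{i3}$ with $c_{\omega}B_{ext}^2 \rightarrow 0$:

```python
import numpy as np

# Order-of-magnitude check of the parameter scales quoted in the text.
mu, eps, sig, om = 4e-7 * np.pi, 10 * 8.854e-12, 1e5, 1e14

relax = sig / eps   # metallic relaxation rate sigma/eps, roughly 1.1e15 s^-1
# => omega = 1e14 rad/s sits below sigma/eps, inside the claimed validity window

# Inverse skin depth k_i3 of the normal-metal-like mode (c_omega B^2 -> 0)
S = np.sqrt(1 + (sig / (eps * om))**2)
k_i3 = om * np.sqrt(mu * eps / 2) * np.sqrt(S - 1)
delta = 1 / k_i3    # skin depth, of order a few hundred nanometers
```

With these assumed values the skin depth comes out in the sub-micron range, consistent with treating the sample as semi-infinite along $\hat{\v{z}}$.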
Real (imaginary) parts of the eigenvalues as a function of $\\omega$ and $\\grad \\theta$ are shown in Figs. \\ref{kdependence}a, \\ref{kdependence}c, and \\ref{kdependence}e (\\ref{kdependence}b, \\ref{kdependence}d, and \\ref{kdependence}f). Different features among a, c, and e (b, d, and f) show that the propagating wave numbers (skin depths) inside a Weyl metal differ among the modes. We recall that the $k_{3}$ mode does not exist in this configuration; we show it just as a reference (note that the dispersion of mode 3 in this configuration is almost the same as that of conventional metals except for the $c_\\omega B_{ext}^2$ term). Incident light from vacuum splits into right and left circularly polarized eigenmodes. For a clear presentation of this splitting, linecut graphs of the real part of $k$ at a finite magnetic field ($\\grad \\theta = g\\v{B_{ext}}$) and at a finite frequency ($\\omega$) are shown in Figs. \\ref{kdependence}g and \\ref{kdependence}h. The dispersion relation splits into two circularly polarized modes, but their dispersions become linear at high frequency ($\\omega \\sim \\sigma\/\\epsilon$), as shown in Fig. \\ref{kdependence}g. On the other hand, Fig. \\ref{kdependence}h shows quite different magnetic-field dependencies of the wave number for each mode. As the magnetic field increases, the differences of velocities among the modes grow, and this splitting of a beam is the main origin of the Faraday\/Kerr rotation in a Weyl metal. Note that we set $\\eta = 0$ for all plots in order to isolate the pure Weyl metal phase, where $\\eta$ is the phenomenological parameter introduced before.\n\n\\subsection{Solution in the case of $\\v{B_{ext}}\/\/\\hat{\\v{x}}$ \\label{B\/\/x}}\n\nSetting the specific configuration of the external magnetic field $\\v{B_{ext}}=(B_x,0,0)$ in Eq. (\\ref{matrixEq.0}), which corresponds to Figs. 
\\ref{eliptic}b, \\ref{eliptic}c and \\ref{eliptic}d, we find eigenvalues and eigenvectors as follows\n\\begin{eqnarray}\n\\left( \\begin{array}{ccc} k^2 - \\lambda_x & 0 & 0 \\\\ 0 & k^2-\\lambda & id_x \\\\ 0 & -aB_x k^2 -id_x & k^2-\\lambda \\end{array} \\right) \\v{E} =0. \\label{matrixEq.Bx}\n\\end{eqnarray}\nThree eigenvalues are given by\n\\begin{eqnarray}\nk^2_{1} &=& \\lambda+\\sqrt{d_x^2-i a B_x d_x k_1^2}\n\\end{eqnarray}\nfor the momentum $k_1 = k_{r1} + i k_{i1}$,\n\\begin{eqnarray}\nk^2_{2} &=& \\lambda-\\sqrt{d_x^2-i a B_x d_x k_2^2}\n\\end{eqnarray}\nfor the momentum $k_2 = k_{r2} + i k_{i2}$, and\n\\begin{eqnarray}\nk_{r3} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2(1+c_{\\omega}B_\\text{ext}^2)^2}+1]^{1\/2} \\label{k3rBx} \\\\\nk_{i3} &=& \\omega\\sqrt{\\frac{\\mu \\epsilon}{2}}[\\sqrt{1+(\\frac{\\sigma}{\\epsilon \\omega})^2(1+c_{\\omega}B_\\text{ext}^2)^2}-1]^{1\/2} \\label{k3iBx}\n\\end{eqnarray}\nfor the momentum $k_3 = k_{r3} + i k_{i3}$. Note that the equations for $k_{1}$ and $k_{2}$ are implicit, since each eigenvalue appears on both sides. Unfortunately, only the third eigenvalue $k_3$ can be expressed in a simple analytic form, which is the same as that of the previous case. It turns out that analytic expressions for both the $k_{1}$ and $k_{2}$ eigenmodes are quite complicated, which may not be useful for physical insight. The corresponding three eigenvectors are given by\n\\begin{eqnarray}\n\\begin{array}{ll}\n\\textrm{ mode 1)} \\left(\\begin{array}{c} 0 \\\\ \\frac{1}{\\sqrt{1-i\\frac{aB_xk_1^2}{d_x}}} \\\\ i \\\\ \\end{array}\\right) & \\textrm{ mode 2)} \\left(\\begin{array}{c} 0 \\\\ \\frac{1}{\\sqrt{1-i\\frac{aB_xk_2^2}{d_x}}} \\\\ -i \\\\ \\end{array}\\right) \\nonumber\n\\end{array}\n\\end{eqnarray}\n\\begin{eqnarray}\n\\begin{array}{l}\n\\textrm{mode 3)} \\left(\\begin{array}{c} 1 \\\\ 0 \\\\ 0\\\\ \\end{array}\\right) \\label{mode3},\n\\end{array}\n\\end{eqnarray}\nrespectively. We point out that a complex number appears inside the root in these eigenvectors. 
In addition, eigenmodes 1) $\\&$ 2) have longitudinal components.\n\nIn order to find their physical meaning, we adopt a simple approximation, ignoring the $\\div{\\v{E}}$ contribution for the eigenvalues (not for the eigenvectors) as the zeroth-order approximation, assuming that the electromagnetic field generated from $\\div{\\v{E}}$ is much smaller than that from the other terms. This approximation in the characteristic equation is realized by the condition $1 \\gg |\\frac{aB_xd_x}{\\text{Im}[\\lambda]}|=\\frac{\\sigma^2_{\\text{Weyl}}}{\\sigma^2}(1+\\eta)$, and is justified if the anomalous Hall conductivity due to Weyl electrons is much smaller than the Drude conductivity. Within the zeroth-order approximation in $\\frac{\\sigma^2_{\\text{Weyl}}}{\\sigma^2}(1+\\eta)$, we obtain the remaining eigenvalues as Eqs. (\\ref{kr1}) to (\\ref{ki2}), the same as those of the previous configuration. Based on these eigenvalues, the remaining eigenvectors can be approximated, depending on the range of the parameter $\\eta=\\frac{\\sigma_{Bx}}{\\sigma_{Weyl}}$.\n\nFirst, we consider $|\\eta| \\gg 1$ ($|\\sigma_{Bx}| \\gg |\\sigma_{\\text{Weyl}}|$), where the conventional Hall effect dominates over the anomalous Hall effect. Performing a Taylor expansion in the two small parameters $\\frac{\\omega}{\\sigma\/\\epsilon}$ and $\\frac{\\sigma_{\\text{Weyl}}}{\\sigma}$, we find the first and second eigenvectors given by\n\\begin{eqnarray}\n\\begin{array}{l}\n\\textrm{ mode 1)} \\left(\\begin{array}{c} 0 \\\\ \\pm 1 \\\\ i \\\\ \\end{array}\\right) \\pm i\\left(\\begin{array}{c} 0 \\\\ \\frac{\\sigma_{\\text{Weyl}}}{\\sigma}-\\frac{\\omega}{\\sigma\/\\epsilon(\\eta-1)} \\\\ 0 \\\\ \\end{array}\\right) \\\\\n\\\\ \\textrm{ mode 2)} \\left(\\begin{array}{c} 0 \\\\ \\pm 1 \\\\ -i \\\\ \\end{array}\\right) \\mp i\\left(\\begin{array}{c} 0 \\\\ \\frac{\\sigma_{\\text{Weyl}}}{\\sigma}+\\frac{\\omega}{\\sigma\/\\epsilon(\\eta-1)} \\\\ 0 \\\\ \\end{array}\\right) . 
\\label{bigeta}\\\\\n\\end{array}\n\\end{eqnarray}\nThese modes are elliptically polarized light on the $yz$ plane (see Fig. \\ref{eliptic}c), slightly deformed from circular polarization. It is rather unexpected that two polarizations are degenerate on the $yz$ plane for a given eigenvalue, as shown by the $\\pm$ signs in the corresponding eigenvectors. In other words, both left and right elliptically polarized light on the $yz$ plane can be an eigenvector for a given eigenvalue. This originates from the non-hermiticity of the eigenvalue equation [Eq. (\\ref{matrixEq.0})], given by the axion electrodynamics of $\\grad \\theta$.\n\nSecond, we consider $|\\eta| \\ll 1$ ($ |\\sigma_{Bx}| \\ll|\\sigma_{\\text{Weyl}}| $), where the anomalous Hall conductivity dominates over the conventional Hall effect. These modes are deformed in the longitudinal direction (see Fig. \\ref{eliptic}d), and the corresponding eigenvectors are approximated as\n\\begin{eqnarray}\n\\begin{array}{l}\n\\textrm{ mode 1)} \\left(\\begin{array}{c} 0 \\\\ \\pm 1 \\\\ 1 \\\\ \\end{array}\\right) \\mp i\\left(\\begin{array}{c} 0 \\\\ \\frac{\\sigma_{\\text{Weyl}}}{\\sigma} -\\frac{\\omega}{\\sigma\/\\epsilon} \\\\ 0 \\\\ \\end{array}\\right) \\\\\n\\\\ \\textrm{ mode 2)} \\left(\\begin{array}{c} 0 \\\\ \\pm1 \\\\ -1 \\\\ \\end{array}\\right) \\pm i\\left(\\begin{array}{c} 0 \\\\ \\frac{\\sigma_{\\text{Weyl}}}{\\sigma} +\\frac{\\omega}{\\sigma\/\\epsilon} \\\\ 0 \\\\ \\end{array}\\right) .\\label{smalleta}\\\\\n\\end{array}\n\\end{eqnarray}\nHere, we multiplied all components by $-i$ in order to make the expressions ``conventional", though this is not essential. The major mode is linearly polarized, and the minor mode is added out of phase by $\\pi\/2$ for each mode. 
Again, we find a two-fold degeneracy with the longitudinal component.\n\nThird, we consider $|\\eta|\\approx 1$ ($|\\sigma_{\\text{Weyl}}|\\approx |\\sigma_{Bx}|$), where the anomalous Hall conductivity is of the same order as the conventional one. Then, the eigenmodes cannot be Taylor-expanded. Instead, we find\n\\begin{eqnarray}\n\\begin{array}{l}\n\\textrm{ mode 1)} \\left(\\begin{array}{c} 0 \\\\ 1\/\\sqrt{\\frac{\\sigma_{\\text{Weyl}}}{\\sigma}-\\frac{2\\omega}{\\sigma\/\\epsilon}} \\\\ i\\sqrt{i} \\\\ \\end{array}\\right) \\\\\n\\\\ \\textrm{ mode 2)} \\left(\\begin{array}{c} 0 \\\\ 1\/\\sqrt{\\frac{\\sigma_{\\text{Weyl}}}{\\sigma}+\\frac{2\\omega}{\\sigma\/\\epsilon}} \\\\ -i\\sqrt{i} \\\\ \\end{array}\\right) .\\label{oneeta}\\\\\n\\end{array}\n\\end{eqnarray}\nHere, we multiplied all components by $\\sqrt{i}$ in order to transfer the $\\sqrt{i}$ contribution of the $y$ component into the $z$ component, though this is not essential. If a resonance condition ($|\\sigma_{\\text{Weyl}}\/\\epsilon|\\sim | 2\\omega|$) is fulfilled, at least one of the denominators in the $y$ components vanishes. This means that the $y$ component dominates over the $z$ component. Then, eigenmode 1 or 2 becomes ``consistent" with an incident beam of $\\v{E_i}=(0,1,0)e^{i(kz-\\omega t)}$. In this configuration, no splitting occurs and the propagating motion of the EM field is almost the same as in Fig. \\ref{eliptic}b except that the positions of $E$ and $B$ are switched. Note that $\\sqrt{i}=\\pm\\frac{i+1}{\\sqrt{2}}$ has both $\\pm$ signs. As a result, we again have a two-fold degeneracy with the longitudinal component at both eigenvalues $k_{1}$ and $k_{2}$.\n\nIn this configuration ($\\v{B_{ext}}\/\/\\hat{\\bm{x}}$), mode 1 and mode 2 have different polarization directions, given by different phases in their longitudinal oscillations. However, we emphasize that these solutions are completely consistent with $\\div{\\v{B}}=0$ and all the other Maxwell equations. 
In other words, the electric field oscillates with three degrees of freedom, whereas the magnetic field oscillates with only two transverse degrees of freedom. See Figs. \\ref{eliptic}c and \\ref{eliptic}d. Of course, all eigenvectors satisfy the modified Maxwell equations [Eqs. (\\ref{Maxwell1}) $\\sim$ (\\ref{Maxwell4})].\n\nTo visualize this point clearly, we present Fig. \\ref{eliptic}. Figure \\ref{eliptic}a corresponds to the case where the external magnetic field is applied along the $\\hat{\\v{z}}-$direction, where mode 3 does not exist due to the divergence-free conditions ($\\nabla \\cdot \\bold{E} = 0$ and $\\nabla \\cdot \\bold{B} = 0$). Both electric and magnetic fields have two transverse degrees of freedom, and the longitudinal oscillation in the $\\hat{\\v{z}}-$direction is forbidden. When the external magnetic field is applied along the $\\hat{\\v{x}}-$direction (Figs. \\ref{eliptic}b, \\ref{eliptic}c, and \\ref{eliptic}d), the longitudinal oscillation of the electric field is allowed due to the non-zero divergence, i.e., $\\nabla \\cdot \\bold{E} = \\frac{\\rho}{\\epsilon} + \\frac{2 \\alpha}{\\epsilon} \\nabla \\theta \\cdot \\bold{B} \\neq 0$, which is one essential modification in the axion electrodynamics. Here, mode 3 is just a conventional electromagnetic field configuration with only transverse oscillating components ($\\v{E}\/\/\\hat{\\bm{x}}$ and $\\v{B}\/\/\\hat{\\bm{y}}$), as shown in Fig. \\ref{eliptic}b. On the other hand, mode 1 and mode 2 have longitudinally oscillating components. For the $\\eta \\gg 1$ case (Fig. \\ref{eliptic}c), two elliptically polarized electric fields are possible for mode 1 and mode 2. The oscillation of the $B$ field is in the $\\hat{\\v{x}}-$direction for both modes. For the $\\eta \\ll 1$ case (Fig. \\ref{eliptic}d), two linearly polarized electric fields are possible for each mode. The oscillation of the $B$ field is in the $\\hat{\\v{x}}-$direction again for either mode. 
Consequently, there can exist three eigenvectors with two transverse degrees of freedom in the $B$ field oscillation. \n\n\n\n\\section{Transmission and reflection of light}\n\nWe are ready to calculate Faraday\/Kerr rotation angles and transmission\/reflection coefficients for specific situations of the alignment of $\\v{B_\\text{ext}}$ and the polarization of the incident beam. Here, we focus on three cases based on the solutions in the previous section: one with $\\v{B_\\text{ext}}\/\/\\bm{\\hat{z}}$ and the other two with $\\v{B_\\text{ext}}\/\/\\bm{\\hat{x}}$, where light always propagates along the $\\bm{\\hat{z}}-$direction (Figs. \\ref{eliptic}a $\\sim$ \\ref{eliptic}d).\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{deformrotation.pdf}\n\\caption{Polarizations and rotations of the electromagnetic field inside the Weyl metal phase. a. A deformation and the Faraday-Kerr rotation when $\\v{B_\\text{ext}}\/\/\\bm{\\hat{z}}$ and $\\v{E_i}\/\/\\bm{\\hat{x}}$: The rotation angle is on the $xy$ plane. The propagating wave is a linear combination of the two circularly polarized modes 1 and 2. There is no mode 3 when $\\v{B_\\text{ext}}\/\/\\bm{\\hat{z}}$ as discussed in the main text. b. No deformation occurs when $\\v{B_\\text{ext}}\/\/ \\bm{\\hat{x}}$ and $\\v{E_i}\/\/ \\bm{\\hat{x}}$. The incident beam and propagating wave are consistent with mode 3. In this configuration, no deformation occurs but the reflectivity changes as a function of the external magnetic field. This represents a unique property of Weyl metals as described in section \\ref{reflectivity change}. c. A deformation of the transmitted light when $\\v{B_\\text{ext}}\/\/ \\bm{\\hat{x}}$ and $\\v{E_i}\/\/ \\bm{\\hat{y}}$ with $\\eta \\gg 1$: The oscillating direction of the propagating electric field is rotating on the $yz$ plane.
The propagating wave is a linear combination of two elliptically polarized modes, where the polarization axis of the mode is not the propagating direction (not the $\\bm{\\hat{z}}$ axis), but the $\\grad \\theta$ direction (the $\\bm{\\hat{x}}$ axis). There exists a beat in the propagating wave due to different group velocities of the two modes. d. A deformation of the transmitted light when $\\v{B_\\text{ext}}\/\/ \\bm{\\hat{x}}$ and $\\v{E_i}\/\/ \\bm{\\hat{y}}$ with $\\eta \\ll 1$: The oscillating direction of the propagating electric field is rotating on the $yz$ plane. The propagating wave is a linear combination of two linearly polarized modes whose oscillating directions are approximately (0,1,1) and (0,1,-1). A beat also exists in the propagating wave due to different group velocities of the two modes. We point out that for the transmitted\/reflected beam of Figs. \\ref{eliptic}c and \\ref{eliptic}d in a vacuum, the longitudinal component can exist only in the near-field region because it radiates as a short-dipole-antenna field near the surface, completely consistent with boundary conditions.}\n\\label{eliptic}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{Maximum_amplitude_rotation_angle_for_a_linear_incident_beam.pdf}\n\\caption{Transmission-reflection and Faraday-Kerr rotation angles as a function of external magnetic fields (at a given frequency $\\omega=10^{14}$ Hz) and frequencies (at a given magnetic field $\\grad \\theta=10^{10}$ m$^{-1}$). a $\\&$ d. Amplitudes of transmission\/reflection (red lines\/black lines) depending on the applied magnetic field \\& frequency. Figure \\ref{eliptic}a shows the experimental situation. $N_{i} = N_{i1} + N_{i2}$ is the total transmission\/reflection amplitude, which is the sum of modes 1 and 2 ($i = t$ or $r$). $N_{ij}$ ($j= 1$ or $2$) is the amplitude of each transmitted\/reflected eigenmode. b $\\&$ e.
Amplitudes of transmission\/reflection of the third eigenmode depending on the applied magnetic field \\& frequency. Figure \\ref{eliptic}b shows the experimental situation. $N_{t3}$ and $N_{r3}$ are the amplitudes of the transmitted and reflected eigenmodes. All amplitudes of a, b, d, and e are normalized by the incident beam. c \\& f. Faraday\/Kerr rotation angles as a function of the applied magnetic field \\& frequency. The Faraday angle is an increasing function of the applied magnetic field at a given frequency and a decreasing function of frequency at a given external magnetic field, whereas the Kerr angle shows non-monotonic behaviors in both cases.}\n\\label{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}\n\\end{figure*}\n\n\\subsection{$\\v{B_{\\text{ext}}}$ \/\/ \\v{\\hat{z}}} \\label{B\/\/z}\n\nFirst, we consider the case when $\\v{B_\\text{ext}}$ is aligned with the propagation of light, as shown in Fig. \\ref{eliptic}a. Inside and outside of the Weyl metal, the electric and magnetic fields are given by\n\\begin{equation}\n\\textrm{$\\v{E}$ field} \\left \\{\n\\begin{array}{lll}\n\\v{E_{in}} &=& \\v{E}_{Ti}e^{i(k_{Ti} z -\\omega t)} \\\\\n\\v{E_{out}} &=& \\v{E}_Ie^{i(k_0 z -\\omega t)} + \\v{E}_{Ri}e^{i(k_{Ri} z -\\omega t)} \\\\\n\\end{array} \\right. \\nonumber\n\\end{equation}\nand\n\\begin{equation}\n\\textrm{$\\v{B}$ field} \\left \\{\n\\begin{array}{lll}\n\\v{B_{in}} &=& \\v{B}_{Ti}e^{i(k_{Ti} z -\\omega t)} \\\\\n\\v{B_{out}} &=& \\v{B}_Ie^{i(k_0 z -\\omega t)} + \\v{B}_{Ri}e^{i(k_{Ri} z -\\omega t)} \\\\\n\\end{array} \\right. , \\nonumber\n\\end{equation}\nrespectively.
$\\v{E_{in}}$ ($\\v{B_{in}}$) and $\\v{E_{out}}$ ($\\v{B_{out}}$) correspond to the electric (magnetic) field inside and outside of the Weyl metal, where $\\v{E}_I$ is the amplitude of the incident beam, $k_0$ is the wave number of the incident beam, $k_{Ti}$ ($k_{Ri}$) is the wave number of the transmitted (reflected) beam, and $\\v{E}_{Ti}$ ($\\v{E}_{Ri})$\/$\\v{B}_{Ti}$ ($\\v{B}_{Ri}$) is the transmitted (reflected) amplitude of the electric (magnetic) field with the $i$-th mode. Then, the amplitudes of these electric and magnetic fields at the interface ($z=0$) are given by\n\\begin{equation}\n\\textrm{$\\v{E}$ field} \\left \\{\n\\begin{array}{lll}\n\\textrm{Incident beam}&\\v{E}_I=& E_I^x \\hat{x} + E_I^y \\hat{y} \\\\\n\\textrm{Reflected beam}&\\v{E}_{Ri}=& E_{Ri}^x \\hat{x} + E_{Ri}^y \\hat{y} + E_{Ri}^z \\hat{z}\\\\\n\\textrm{Transmitted beam}&\\v{E}_{Ti}=& E_{Ti}^x \\hat{x} + E_{Ti}^y \\hat{y} + E_{Ti}^z \\hat{z} \\\\\n\\end{array} \\right. \\nonumber\n\\end{equation}\nand\n\\begin{equation}\n\\textrm{$\\v{B}$ field} \\left \\{\n\\begin{array}{lll}\n\\textrm{Incident beam}&\\v{B}_I=& B_I^x \\hat{x} + B_I^y \\hat{y} \\\\\n\\textrm{Reflected beam}&\\v{B}_{Ri}=& B_{Ri}^x \\hat{x} + B_{Ri}^y \\hat{y} \\\\\n\\textrm{Transmitted beam}&\\v{B}_{Ti}=& B_{Ti}^x \\hat{x} + B_{Ti}^y \\hat{y} \\\\\n\\end{array} \\right. , \\nonumber\n\\end{equation}\nrespectively. Note that only $E_T$ and $E_R$ can have an oscillating $z-$component (parallel to the propagating direction) because the incident beam is propagating in vacuum without a source term and Eq. (\\ref{Maxwell3}) does not allow the magnetic field to have a $z-$component. With the divergence-free condition and Eq.
(\\ref{Maxwell3}), the boundary conditions at the interface for each mode give rise to the following eight equations,\n\\begin{equation}\\label{Boundary_Condition_1}\n\\begin{array}{cccc}\n1)&\\quad E_I^x+E_{Ri}^x &=& E_{Ti}^x \\\\\n2)&\\quad E_I^y+E_{Ri}^y &=& E_{Ti}^y \\\\\n3)&\\quad B_I^x+B_{Ri}^x &=& B_{Ti}^x \\\\\n4)&\\quad B_I^y+B_{Ri}^y &=& B_{Ti}^y \\\\\n5)&\\quad B_{Ti}^x &=& -\\frac{k_i}{\\omega}E_{Ti}^y \\\\\n6)&\\quad B_{Ti}^y &=& \\frac{k_i}{\\omega}E_{Ti}^x \\\\\n7)&\\quad B_{Ri}^x &=& \\frac{1}{c}E_{Ri}^y \\\\\n8)&\\quad B_{Ri}^y &=& -\\frac{1}{c}E_{Ri}^x \\\\\n\\end{array} .\n\\end{equation}\nHere, $E_I^x$, $E_I^y$, $B_I^x$, and $B_I^y$ serve as initial conditions. These eight equations fix the eight unknown variables completely.\n\nApplying these boundary conditions to each $i$-th mode, we obtain the transmission\/reflection coefficient for each eigenmode\n\\begin{eqnarray}\n\\textrm{Transmission coefficient : } T_i &=& \\frac{2}{1+\\beta_i} \\label{Tscalar} \\\\\n\\textrm{Reflection coefficient : } R_i &=& \\frac{1-\\beta_i}{1+\\beta_i} \\label{Rscalar}\n\\end{eqnarray}\nwith $\\beta_i = k_i\/k_0$, where $k_i$ is the $i$-th eigenvalue. Now we have the transmission\/reflection coefficient for each mode. We can decompose an incident beam $(a,b,c)$ into a proper linear combination of the three eigenmodes and analyze the total transmission in each basis as follows,\n\\begin{eqnarray} \\label{decompose}\n\\v{T} \\left(\\begin{array}{l} a \\\\ b \\\\ c \\end{array}\\right) &=& T_1\\frac{\\left(a-bi\\right)}{2} \\left(\\begin{array}{l} 1 \\\\ i \\\\ 0 \\end{array}\\right) +\nT_2\\frac{\\left(a+bi\\right)}{2} \\left(\\begin{array}{l} 1 \\\\ -i \\\\ 0 \\end{array}\\right) \\nonumber \\\\ &+& T_3 c\\left(\\begin{array}{l} 0 \\\\ 0 \\\\ 1 \\end{array}\\right) .\n\\end{eqnarray}\n\nPutting each $T_i$ given by Eq. (\\ref{Tscalar}) into Eq.
(\\ref{decompose}), we find the total transmission matrix ($\\v{T}$) as\n\\begin{eqnarray}\n\\v{T} = \\left(\\begin{array}{ccc} \\frac{1}{1+\\beta_1} + \\frac{1}{1+\\beta_2} & -i(\\frac{1}{1+\\beta_1} - \\frac{1}{1+\\beta_2}) & 0 \\\\\ni(\\frac{1}{1+\\beta_1} - \\frac{1}{1+\\beta_2}) & \\frac{1}{1+\\beta_1} + \\frac{1}{1+\\beta_2} & 0 \\\\\n0 &0 & \\frac{2}{1+\\beta_3} \\end{array}\\right) . \\label{BzTgeneral}\n\\end{eqnarray}\nSimilarly, the total reflection matrix ($\\v{R}$) is given by\n\\begin{eqnarray}\n\\v{R} = \\frac{1}{2}\\left(\\begin{array}{ccc} \\frac{1-\\beta_1}{1+\\beta_1} + \\frac{1-\\beta_2}{1+\\beta_2} & -i(\\frac{1-\\beta_1}{1+\\beta_1} - \\frac{1-\\beta_2}{1+\\beta_2}) & 0 \\\\\ni(\\frac{1-\\beta_1}{1+\\beta_1} - \\frac{1-\\beta_2}{1+\\beta_2}) & \\frac{1-\\beta_1}{1+\\beta_1} + \\frac{1-\\beta_2}{1+\\beta_2} & 0 \\\\\n0 &0 & \\frac{2(1-\\beta_3)}{1+\\beta_3} \\end{array}\\right) . \\label{BzRgeneral} \\nonumber \\\\\n\\end{eqnarray}\n\nShining a linearly polarized beam into the Weyl metal state with $\\v{E_i}=(E_0 e^{i(kz-\\omega t)},0,0)$, we obtain\n\\begin{eqnarray}\n\\v{E_t} &=& \\left(\\frac{2}{1+\\beta_1}+\\frac{2}{1+\\beta_2}, i(\\frac{2}{1+\\beta_1}-\\frac{2}{1+\\beta_2}), 0 \\right) \\frac{E_0}{2}e^{i(-\\omega t)} \\label{BzT} \\nonumber \\\\ \\\\\n\\v{E_r} &=& \\left(\\frac{1-\\beta_1}{1+\\beta_1}+\\frac{1-\\beta_2}{1+\\beta_2}, i(\\frac{1-\\beta_1}{1+\\beta_1}-\\frac{1-\\beta_2}{1+\\beta_2}), 0 \\right) \\frac{E_0}{2}e^{i(-\\omega t)} \\label{BzR} \\nonumber \\\\\n\\end{eqnarray}\nat the boundary of $z = 0$. 
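As a quick numerical aside (illustrative only, with arbitrary sample values of the complex ratio $\\beta_i = k_i\/k_0$, not taken from the paper), the scalar coefficients of Eqs. (\\ref{Tscalar}) and (\\ref{Rscalar}) obey the familiar normal-incidence identity $T_i = 1 + R_i$:

```python
# Hedged sketch: verify T_i = 1 + R_i for the scalar coefficients
# T_i = 2/(1+beta_i) and R_i = (1-beta_i)/(1+beta_i); the beta values
# below are arbitrary illustrative samples.
def transmission(beta):
    return 2 / (1 + beta)

def reflection(beta):
    return (1 - beta) / (1 + beta)

for beta in (1.5 + 0.3j, 0.2 - 2.0j, 4.0 + 0.0j):
    assert abs(transmission(beta) - (1 + reflection(beta))) < 1e-12
```

The identity simply restates the continuity of the tangential electric field across the interface, $E_I + E_{Ri} = E_{Ti}$, mode by mode.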
Introducing $N_{ti}e^{i\\phi_{ti}} \\equiv T_i = \\frac{2}{1+\\beta_i}$ and $N_{ri}e^{i\\phi_{ri}}\\equiv R_i = \\frac{1-\\beta_i}{1+\\beta_i}$, where the imaginary parts can be easily eliminated, we find their real parts\n\\begin{eqnarray}\nRe(\\v{E_t}) &=& \\frac{E_0}{2}\\left( \\begin{array}{c}N_{t1}\\cos{(\\omega t - \\phi_{t1})}+N_{t2}\\cos{(\\omega t - \\phi_{t2})}\n\\\\ N_{t1}\\sin{(\\omega t - \\phi_{t1})}-N_{t2}\\sin{(\\omega t - \\phi_{t2})}\n\\\\ 0 \\end{array} \\right) \\label{ReEtBz} \\nonumber \\\\ \\\\\nRe(\\v{E_r}) &=& \\frac{E_0}{2}\\left( \\begin{array}{c}N_{r1}\\cos{(\\omega t - \\phi_{r1})}+N_{r2}\\cos{(\\omega t - \\phi_{r2})}\n\\\\ N_{r1}\\sin{(\\omega t - \\phi_{r1})}-N_{r2}\\sin{(\\omega t - \\phi_{r2})}\n\\\\ 0 \\end{array} \\right) . \\label{ReErBz} \\nonumber \\\\\n\\end{eqnarray}\nInstead of the linearly polarized incident beam, we obtain elliptically polarized transmitted (reflected) beams. The maximum amplitude is modified by a factor of $\\frac{(N_{t1}+N_{t2})}{2}$ ($\\frac{(N_{r1}+N_{r2})}{2}$), whereas the major axis is rotated by $\\phi = \\frac{\\phi_{t2}-\\phi_{t1}}{2}$ ($\\frac{\\phi_{r2}-\\phi_{r1}}{2}$) from that of the incident beam. Here, we introduced\n\\begin{eqnarray}\n\\phi_{tj} &\\equiv& \\arctan{\\left(\\frac{-\\text{Im}[\\beta_{j}]}{1+\\text{Re}[\\beta_{j}]}\\right)} \\\\\n\\phi_{rj} &\\equiv& \\arctan{\\left(\\frac{-2 \\text{Im}[\\beta_{j}]}{1-\\text{Re}[\\beta_{j}]^2-\\text{Im}[\\beta_{j}]^2}\\right)} \\\\\nN_{tj} &\\equiv& \\frac{2}{\\sqrt{(1+\\text{Re}[\\beta_{j}])^2+\\text{Im}[\\beta_{j}]^2}} \\\\\nN_{rj} &\\equiv& \\frac{\\sqrt{(1-\\text{Re}[\\beta_{j}]^2-\\text{Im}[\\beta_{j}]^2)^2+4\\text{Im}[\\beta_{j}]^2}}{(1+\\text{Re}[\\beta_{j}])^2+\\text{Im}[\\beta_{j}]^2}.\n\\end{eqnarray}\n\nThis elliptical shape of the electric field and the rotation of the major axis at the interface originate from the splitting of light into eigenmodes at the Weyl metal.
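The amplitudes $N_{tj}$, $N_{rj}$ and phases $\\phi_{tj}$, $\\phi_{rj}$ are nothing but the polar decompositions of the complex coefficients $T_j$ and $R_j$. A short numerical sketch (an arbitrary sample $\\beta_j$ only; not the paper's code) checks this for the transmission channel:

```python
import cmath
import math

# Illustrative check with an arbitrary sample beta_j: the modulus of
# T_j = 2/(1+beta_j) is 2/sqrt((1+Re beta)^2 + (Im beta)^2) and its phase
# is arctan(-Im beta / (1 + Re beta)), so N_t * exp(i*phi_t) recovers T_j.
beta = 0.8 + 0.5j
T = 2 / (1 + beta)
N_t = 2 / math.sqrt((1 + beta.real) ** 2 + beta.imag ** 2)
phi_t = math.atan2(-beta.imag, 1 + beta.real)
assert abs(N_t * cmath.exp(1j * phi_t) - T) < 1e-12
```

The reflection amplitude $N_{rj}$ and phase $\\phi_{rj}$ can be checked in the same way from $R_j = (1-\\beta_j)\/(1+\\beta_j)$.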
The incident beam $(1,0,0)$ is not an eigenmode inside the Weyl metal state, and is therefore decomposed into the modes $1$ and $2$ ($(1,i,0)$ and $(1,-i,0)$) at the interface. These (right- and left-handed) circularly polarized beams have different transmission (reflection) coefficients $T_i$ ($R_i$) in the Weyl metal phase. Therefore, we observe differences in their phases and amplitudes. Non-vanishing components perpendicular to the incident beam arise from the inhomogeneous $\\theta-$term, which produces the Faraday\/Kerr rotation together with the elliptical deformation of the beam.\n\nThe elliptical deformation and its rotation occur not only at the interface but also inside the Weyl metal. When the light propagates over a distance $D$, we get a transmitted electric field that is modified only in the exponential argument, from $i(-\\omega t)$ to $i(kD-\\omega t)$. Extracting out only the real part, we obtain\n\\begin{eqnarray}\nRe(\\v{E_D}) &=& \\frac{E_0}{2}\\left( \\begin{array}{c}N_{D1}\\cos{(\\omega t - \\phi_{D1})}+N_{D2}\\cos{(\\omega t - \\phi_{D2})}\n\\\\ N_{D1}\\sin{(\\omega t - \\phi_{D1})}-N_{D2}\\sin{(\\omega t - \\phi_{D2})}\n\\\\ 0 \\end{array} \\right) \\nonumber\n\\end{eqnarray}\nwhere $N_{Dj} = N_{tj}e^{-\\text{Im}[k_{j}]D}$ and $\\phi_{Dj} \\equiv \\phi_{tj}+\\text{Re}[k_{j}]D$. The amplitude $N_{Dj}$ is reduced by a factor of $e^{-\\text{Im}[k_{j}]D}$ from $N_{tj}$. The phase $\\phi_{Dj}$ is shifted further by $\\text{Re}[k_{j}]D$ from $\\phi_{tj}$, so the total rotation angle of the major axis is $\\phi=\\frac{\\phi_{t2}-\\phi_{t1}}{2}+\\frac{\\text{Re}[k_{2}]-\\text{Re}[k_{1}]}{2}D$.
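The effect of the propagation distance $D$ can be sketched numerically (arbitrary sample values of $\\beta_j$, $k_j$, and $D$; illustrative only): multiplying $T_j$ by $e^{ik_jD}$ damps the modulus by $e^{-\\text{Im}[k_j]D}$ and advances the phase by $\\text{Re}[k_j]D$:

```python
import cmath
import math

# Hedged sketch with sample numbers: propagating a mode over a distance D
# multiplies its complex amplitude by exp(i*k*D) with complex k, which
# damps the modulus by exp(-Im[k]*D) and shifts the phase by Re[k]*D.
beta = 0.8 + 0.5j
T = 2 / (1 + beta)              # interface transmission amplitude
k, D = 3.0 + 0.2j, 1.5          # sample complex wave number and distance
TD = T * cmath.exp(1j * k * D)  # amplitude after propagating a distance D
assert abs(abs(TD) - abs(T) * math.exp(-k.imag * D)) < 1e-12
assert abs(TD - abs(TD) * cmath.exp(1j * (cmath.phase(T) + k.real * D))) < 1e-12
```

Since the two modes carry different $k_j$, the difference of the accumulated phases directly yields the $D$-proportional part of the rotation angle.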
Here, the first term ($\\frac{\\phi_{t2}-\\phi_{t1}}{2}$) corresponds to the rotation at the interface, and the second term ($\\frac{\\text{Re}[k_{2}]-\\text{Re}[k_{1}]}{2}D$) is the rotation proportional to the propagation length inside the Weyl metal, analogous to the conventional Faraday rotation.\n\n\\subsection{$\\v{B_\\text{ext}} \/\/ \\v{\\hat{x}}$ $\\&$ $\\v{E_i} \/\/ \\v{\\hat{x}}$ \\label{reflectivity change}}\n\nWhen the external magnetic field is not parallel to the propagating direction of light, the divergence of the electric field can be non-vanishing, given by $\\div{\\v{E}} \\approx -\\frac{4\\alpha g \\omega}{\\sigma(1+c_{\\omega z} B_z^2)}i \\v{B_{ext}}\\cdot \\v{B_0}e^{i(kz-\\omega t)} \\neq 0$. Then, the polarization directions of $\\v{E_i}$ and $\\v{B_i}$ become important. We consider the incident beam $\\v{E_i}=(E_0 e^{i(kz-\\omega t)},0,0)$ (see Fig. \\ref{eliptic}b). In this case the incident beam is ``consistent\" with mode $3$ in Eq. (\\ref{mode3}). Boundary conditions can be applied in the same way as for the normal metal state. As a result, the transmission\/reflection coefficients are given by the scalar form in Eqs. (\\ref{Tscalar}) and (\\ref{Rscalar}). There is no Faraday\/Kerr rotation, and the eigenvalue $k_3$ (see Eqs. (\\ref{k3rBx}) and (\\ref{k3iBx})) resembles that of a normal metal except for the $B^{2}-$enhancement factor.
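As a hedged numerical aside (the $\\beta_3$ samples below merely mimic an increasing conductivity and are not taken from the paper), the scalar reflectance $|R_3|^2 = |(1-\\beta_3)\/(1+\\beta_3)|^2$ grows monotonically toward unity as $|\\beta_3|$ grows, which is the mechanism behind the field-enhanced reflectivity:

```python
# Illustrative only: the reflectance |R_3|^2 for R_3 = (1 - beta)/(1 + beta)
# approaches 1 as |beta_3| = |k_3/k_0| grows, so a B^2-enhanced
# longitudinal conductivity (larger |k_3|) makes the sample more reflective.
def reflectance(beta):
    return abs((1 - beta) / (1 + beta)) ** 2

# sample values mimicking an increasing conductivity, beta ~ (1+1j)*scale
r = [reflectance((1 + 1j) * s) for s in (2.0, 5.0, 20.0)]
assert r[0] < r[1] < r[2] < 1.0
```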
However, this $B^{2}$ magnetic-field dependence, which can be found from the topologically generalized Boltzmann transport theory \\cite{CME3,CME4,CME5,CME6,CME7,Boltzmann_Chiral_Anomaly1,Boltzmann_Chiral_Anomaly2,Boltzmann_Chiral_Anomaly3,Boltzmann_Chiral_Anomaly4,Boltzmann_Chiral_Anomaly5,Boltzmann_Chiral_Anomaly6,Boltzmann_Chiral_Anomaly7,\nBoltzmann_Chiral_Anomaly8,NLMR_First_Exp,NLMR_Followup_VIII,Disordered_Weyl_Metal2}, represents a unique property of the Weyl metal phase in this configuration ($\\v{E_i}\/\/\\v{B_{ext}}$); Weyl metals become more ``reflective\" with an increasing external magnetic field. Considering the configuration of Fig. \\ref{eliptic}b, the reflectivity of the Weyl metal is enhanced as a function of the applied magnetic field due to the longitudinal magnetoconductivity enhanced by the $B^2$ factor, as shown in Fig. \\ref{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}b. This originates from the existence of a perfect metallic channel, referred to as the chiral anomaly \\cite{WM1,WM2,WM3,WM_Reviews,Nielsen_Ninomiya}.\n\n\\subsection{$\\v{B_\\text{ext}}\/\/\\v{\\hat{x}}$ $\\&$ $\\v{E_i}\/\/\\v{\\hat{y}}$}\n\nFinally, we consider the case of Figs. \\ref{eliptic}c and \\ref{eliptic}d with an incident beam $\\v{E_i}=(0,E_0 e^{i(kz-\\omega t)},0)$. The incident beam is not an eigenmode in the Weyl metal state, and therefore must be decomposed into the modes $1$ and $2$ at the interface. Accordingly, we consider the following boundary conditions, which differ from those of Fig.
\\ref{eliptic}a,\n\\begin{displaymath}\n\\begin{array}{ccccc}\n1)&\\quad E_I^y+E_R^y &=& E_T^y& \\\\\n2)&\\quad \\epsilon_0(E_I^z+E_R^z) - \\epsilon E_T^z &=& \\rho_{s} & \\\\\n3)&\\quad (B_I^x+B_R^x)\/\\mu_0 -B_T^x\/\\mu &=& J_s& \\\\\n4)&\\quad B_T^x &=& -\\frac{k}{\\omega}E_T^y& \\\\\n5)&\\quad (-1)^{j-1}(i\\sqrt{1-i\\frac{aB_xk_j^2}{d_x}})E_T^y&=& E_T^z& (\\text{$j$=1 or 2})\\\\\n6)&\\quad (-1)^{j-1}(i\\sqrt{1-i\\frac{aB_xk_j^2}{d_x}})E_R^y&=& E_R^z& (\\text{$j$=1 or 2}),\n\\end{array}\n\\end{displaymath}\nwhere $\\rho_s$ and $J_s$ are the charge and current densities at the interface, respectively. The surface charge density is given by Eq. (\\ref{rho}), involved with the axion electrodynamics. One can obtain the surface current density via the constitutive relation, given by the surface conductivity $\\sigma_{s}$. For the surface current density, we refer the reader to Ref. \\cite{Axion_EM_Th_WM} for a detailed discussion. The parameters $a$, $B_x$, $k_j$, and $d_x$ were already introduced in section \\ref{Axionem}, where $j=1$ or $2$ corresponds to mode 1 or 2. Both electric and magnetic fields at the boundary should be considered as\n\\begin{equation}\n\\textrm{$\\v{E}$ field}\\left\\{\n\\begin{array}{lll}\n\\textrm{Incident beam}&=& E_I^y \\hat{y} + E_I^z \\hat{z} \\\\\n\\textrm{Reflected beam}&=& E_R^y \\hat{y} + E_R^z \\hat{z} \\\\\n\\textrm{Transmitted beam}&=& E_T^y \\hat{y} + E_T^z \\hat{z} \\\\\n\\end{array} \\right. \\nonumber\n\\end{equation}\nand\n\\begin{equation}\n\\textrm{$\\v{B}$ field}\\left\\{\n\\begin{array}{lll}\n\\textrm{Incident beam}&=& B_I^x \\hat{x} \\\\\n\\textrm{Reflected beam}&=& B_R^x \\hat{x}\\\\\n\\textrm{Transmitted beam}&=& B_T^x \\hat{x}\\\\\n\\end{array} \\right. , \\nonumber\n\\end{equation}\nrespectively.\n\nAs discussed in section \\ref{generalsol}, the explicit forms of the eigenvalues and eigenvectors are rather complicated to use.
For physical insight, we consider the $|\\eta|\\gg1$ condition, following the same strategy as in section \\ref{B\/\/z}, where the conventional Hall effect is dominant. Considering the major eigenmodes in Eq. (\\ref{bigeta}), we find the electric fields from the boundary conditions\n\\begin{eqnarray}\nRe(\\v{E_t}) &=& \\frac{E_0}{2}\\left( \\begin{array}{c} 0\n\\\\ N_{t1}\\cos{(\\omega t - \\phi_{t1})}+N_{t2}\\cos{(\\omega t - \\phi_{t2})}\n\\\\ N_{t1}\\sin{(\\omega t - \\phi_{t1})}-N_{t2}\\sin{(\\omega t - \\phi_{t2})}\n\\end{array} \\right) \\nonumber \\label{ReEtEy} \\\\ \\\\\nRe(\\v{E_r}) &=& \\frac{E_0}{2}\\left( \\begin{array}{c} 0\n\\\\ N_{r1}\\cos{(\\omega t - \\phi_{r1})}+N_{r2}\\cos{(\\omega t - \\phi_{r2})}\n\\\\ N_{r1}\\sin{(\\omega t - \\phi_{r1})}-N_{r2}\\sin{(\\omega t - \\phi_{r2})}\n\\end{array} \\right) \\nonumber . \\label{ReErEy} \\\\\n\\end{eqnarray}\nWe note that the zeroth-order approximation in $|\\eta|$ results in the same eigenmodes and eigenvalues, with the parameters $N_{ij}$ and $\\phi_{ij}$, as in the case of section \\ref{B\/\/z}. These electric fields are dominant, but there could exist minor corrections as described in Eqs. (\\ref{bigeta}) to (\\ref{oneeta}). Within this approximation, the only difference from the case of section \\ref{B\/\/z} is the rotation direction of the major axis. In particular, a longitudinal component turns out to appear in both transmission ($Re(\\v{E_t})$) and reflection ($Re(\\v{E_r})$). See Eqs. (\\ref{ReEtEy}) and (\\ref{ReErEy}). In order to match boundary conditions, the longitudinal component is indispensable for the transmitted\/reflected light ($Re(\\v{E_t})$\/$Re(\\v{E_r})$) in vacuum. Remember that the oscillating $\\hat{z}$ component is allowed if and only if there exist charge oscillations due to the divergence term. We recall Eqs. (\\ref{continuity}) to (\\ref{divE}).
It is natural to expect that the longitudinal component $(0,0,1)$ of the transmitted or reflected beam in vacuum should be observed only in the near-field region ($r \\ll \\lambda_0$) from the interface because the charge oscillation can exist only inside the Weyl metal. Here, $\\lambda_0$ is the wavelength of light in the vacuum. On the other hand, the $(0,1,0)$ component can propagate without radiation in vacuum just as the conventional electromagnetic wave.\n\nIn an experimental situation with a detector located in the far-field region ($r \\gg \\lambda_0$), the $\\bm{\\hat{z}}-$component of the transmitted\/reflected beam ($E_{rz}$) should be observed as the radiation pattern of a short dipole antenna due to the charge oscillation at the interface ($E_{rz}\\approx \\frac{\\sigma_s KE_0(N_{r1}+N_{r2})}{2\\omega r}\\sin{\\frac{\\phi_{r2}-\\phi_{r1}}{2}}\\sin{\\theta_z}\\cos{(kz-\\omega t-\\frac{\\phi_{r2}-\\phi_{r1}}{2})}$). Here, $E_0$ is the amplitude of the incident beam, $\\theta_z$ is a polar angle, $r$ is the distance from the center of the shined area, $K$ is a geometrical factor of the shined area, and $\\sigma_s$ is the conductivity at the surface \\cite{Jackson}. This dipole antenna solution comes from the oscillating charge accumulation at the surface. Suppose a periodic boundary of the sample \\cite{Jackson}. Then, the continuity equation should be satisfied as $\\div{\\v{J}}=\\nabla \\cdot \\sigma_s\\v{E}=-\\pd{\\rho}{t}$ at the interface. Applying Gauss's theorem to the whole sample surface, we get $\\int E_r^z dS = \\frac{i\\omega}{\\sigma_s}\\int \\rho dV$ $\\longrightarrow$ $E_r^z = \\frac{i\\omega}{\\sigma_s} \\int \\rho dz$.
Considering a surface charge density $n_q \\equiv \\int \\rho dz$ given by an oscillating surface charge density $Re(n_q) = \\frac{\\sigma E_0}{2\\omega}\\left(N_{r1}\\sin{(\\omega t-\\phi_{r1})}-N_{r2}\\sin{(\\omega t - \\phi_{r2})}\\right)$, we obtain the radiation electric field with the longitudinal component. In other words, we may consider this oscillating surface charge density as a source of the radiation of the $\\bm{\\hat{z}}-$directional field in vacuum.\n\nWe emphasize that the propagating wave shows beating inside the Weyl metal phase (see Figs. \\ref{eliptic}c and \\ref{eliptic}d). The beating phenomenon originates from the splitting of the incident beam into two different modes with different group velocities. One may point out that the Faraday rotation in Fig. \\ref{eliptic}a also occurs in ferromagnetic materials. In that case, however, only the transverse components of the propagating wave exist and rotate. In Weyl metals, by contrast, the longitudinal component of the propagating electric field shows beating, which can be regarded as a manifestation of longitudinal charge-density-wave fluctuations resulting from the axion electrodynamics.\n\nWe summarize all these features in Fig. \\ref{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}, showing transmission\/reflection coefficients and Faraday\/Kerr rotation angles as a function of both external magnetic fields and light frequencies in the experimental setups of Fig. \\ref{eliptic}a and Fig. \\ref{eliptic}b. Figures \\ref{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}a (b) and \\ref{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}d (e) show maximum amplitudes of transmission\/reflection coefficients for the situation of Fig. \\ref{eliptic}a (Fig. \\ref{eliptic}b).
$N_{t} = N_{t1} + N_{t2}$ ($N_{r} = N_{r1} + N_{r2}$) is the total maximum transmission (reflection) amplitude, where $N_{ti}$ ($N_{ri}$) with $i = 1 ~ \\& ~ 2$ is that of the transmitted (reflected) eigenmode. The external magnetic-field dependence of the reflectivity enhancement in the configuration of Fig. \\ref{eliptic}b originates from the chiral-anomaly-induced enhancement of the longitudinal magnetoconductivity, shown in Fig. \\ref{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}b. The total transmission coefficient depends quite strongly on the external magnetic field and shows a non-monotonic behavior as a function of frequency at a given magnetic field. On the other hand, the Faraday angle behaves monotonically, increasing with the applied magnetic field and decreasing with frequency, as shown in Figs. \\ref{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}c and \\ref{Maximum_amplitude_Rotation_angle_for_a_linear_incident_beam}f, whereas the Kerr angle shows non-monotonic behaviors in both cases, which is rather unexpected.\n\n\\section{Conclusion}\n\nLight-scattering experiments in the regime $\\omega < \\sigma\/\\epsilon$ were investigated theoretically in order to test the axion electrodynamics of Weyl metals. We uncovered the existence of longitudinal components in the transmitted\/reflected light, which result from longitudinal charge-density fluctuations allowed by the axion electrodynamics. The longitudinal beats in the propagating electric fields are a manifestation of the longitudinal charge-density fluctuations. In addition, we found a strong dependence on external magnetic fields of both the transmission\/reflection coefficients and the Faraday\/Kerr rotation angles for general configurations. The helicity and magnitude of the Faraday\/Kerr rotation angles are determined by $\\bm{\\nabla} \\theta \\times \\bm{E}_{light} = \\bm{B}_{ext} \\times \\bm{E}_{light}$.
In particular, we find various forms of eigenvectors depending on the range of the parameter $\\eta$ and a functional dependence of the reflectivity under the $\\v{E}\/\/\\v{B}$ condition, which can serve as special fingerprints of Weyl metals governed by the axion electrodynamics. Consequently, the light propagation in a Weyl metal phase can be controlled by engineering external magnetic fields.\n\n\\section*{ACKNOWLEDGEMENT}\n\nThis study was supported by the Ministry of Education, Science, and Technology (No. NRF-2015R1C1A1A01051629 and No. 2011-0030046) of the National Research Foundation of Korea (NRF). KS appreciates helpful discussions with J.-H. Kim.\n\n\\section*{\\normalsize\\bf I. INTRODUCTION}\n\nUnitary ensembles of (single) random matrices (UE) are the most well-known and thoroughly studied among random matrix ensembles (RME). Integrable properties possessed by their correlation functions and spectral gap probabilities, their ties with integrable hierarchies (see e.g. Ref.~\\cite{DJKM}) of partial differential equations (PDE) were extensively studied in the last two decades, see e.g. Ref.~\\cite{Me04}. Still the approaches used to derive equations satisfied by gap probabilities, e.g. Refs.~\\cite{TW1, ASvM, BorDei}, seem to be far from each other; moreover, the dissimilarities grow when considering more complicated RME like coupled or Pfaffian ensembles. Due to the aforementioned integrability, there are some PDE in every approach, which are universal, i.e. independent of the particular UE being studied. They may be equations satisfied by resolvent kernels of Fredholm integral operators and functions they are composed of. This was established as a general approach to study RME spectra in Ref.~\\cite{TW1} (we call it TW further on). They also may be well-known nonlinear integrable PDE --- the first members of KP or Toda lattice hierarchies, or their subhierarchies like KdV and others.
This is the case in the approach started in Ref.~\\cite{ASvM} (called ASvM from now on), see also Refs.~\\cite{AvM2, AvM7, AvMdKP, PvM2007}. But even this universal part of all methods, supplied by the integrable structure itself, and thus identical for different ensembles, is described differently in various approaches. As for the non-universal part, depending on the particular properties of a given probability measure, it practically leads to the necessity of an almost case-by-case study of various ensembles. However, one might hope that it should be possible after all to describe the universal properties in a universal way, incorporating all existing approaches and emphasizing the common underlying structure inherent in all cases. \n\\par First steps in this direction were made in Refs.~\\cite{Har} and \\cite{IR1}, where the important TW and ASvM approaches were matched for several interesting cases: Airy, Bessel and Sine in Ref.~\\cite{Har} and Gaussian (or Hermite) in Ref.~\\cite{IR1}. In the second paper some very simple relations between the dependent variables of the TW and ASvM approaches for the Gaussian UE (GUE) were revealed. Here we extend these relations to all UE with spectrum on the real line and expose some additional relations of the same kind, which seem to have been overlooked in all previous considerations. (Everything we do here can be repeated for circular UE or even for orthogonal-polynomial ensembles with eigenvalues on a contour in the complex plane, considered e.g. in Ref.~\\cite{BeEyHa-06}. Then, however, some specific formulas will differ from those presented here, because of a different form of the basic three-term recurrence relations -- our starting point, see below.) This allows us to derive some universal forms of PDE satisfied by gap probabilities of all UE irrespective of the specific spectral measure (or potential, in other words).
Besides, we make contact with the isomonodromic approach~\\cite{JMMS, JMU, JM2, IIKS, BorDei, BeEyHa-06} first applied to random matrices in Refs.~\\cite{JMMS, FIK1, FIK2, HarTW}. In connection with TW work, it was most clearly exposed by Palmer in Ref.~\\cite{Pal}, where the generalized Schlesinger isomonodromic deformation equations for UE were written with a common form of non-universal, i.e. potential dependent, terms. We demonstrate that combinations of these last equations in general give the form of ASvM equations for $\\tau$-functions obtained in Ref.~\\cite{IR1} for the Gaussian case. These combinations expose an amazing similarity in structure with our universal PDE claimed above. At the core of all our considerations is still, as it was in Ref.~\\cite{IR1}, the structure of orthogonal function bases, three-term recurrence relations they satisfy, and one-dimensional Toda lattice hierarchy whose flows are in fact the continuous transformations among different such bases. We would like to stress here that while the last sentence is, strictly speaking, directly related only to {\\it finite size} UE, the infinite ensembles can always be obtained as limits of finite ones. Moreover, the simple relations between TW and ASvM dependent variables, revealed here, allow one to {\\it define} some quantities, originally meaningful only for finite matrices, like the $\\tau$-ratios of matrix integrals of different matrix dimension, also for infinite ensembles.\n\\par The plan of the paper is as follows. Section II gives a brief summary of the key results in the paper. In section III we derive the three-term recurrence relations for resolvent kernel related orthogonal functions. The functions are identified with orthogonal functions for ensembles with measure restricted to the complement of the original measure support~\\cite{BorSosh}. 
Using these recurrences, in section IV we derive the universal relations between the TW and ASvM variables, extending and generalizing results of Ref.~\\cite{IR1} for the Gaussian case to all UE. In section V, from the results of the previous sections, we derive new universal PDE satisfied by all UE. We obtain three equations reflecting the fundamental Toda-AKNS structure and, in addition, another equation arising from the defining connection between the logarithmic derivative of the $\\tau$-function and the resolvent kernel. This allows us to obtain a single universal PDE for gap probabilities of UE modified by Toda times in the ASvM approach, separately for each endpoint of the spectrum. Besides, we obtain a new interesting recursion structure for finite size matrix integrals. In the next section we introduce a different framework for UE --- that of the isomonodromic deformations and Schlesinger equations, following the important work of Palmer~\\cite{Pal}. We show then that our Toda-AKNS type system, derived by the ASvM method in Ref.~\\cite{IR1}, can always be obtained also by summing up the Schlesinger equations. A partial summation of them also gives a Toda-AKNS-type system analogous to the one from the previous section, this time, however, describing {\\it unmodified} UE, i.e. ensembles with fixed couplings in the potential. The additional equation of the previous section is now a consequence of the summed Schlesinger system. Another consequence is two universal constraints for the non-universal terms in the TW\/Schlesinger equations. In section VII we consider the simplest examples from the perspective of the universal PDE (\\ref{eq:T}) of section V and explain the difficulty of its direct application to more complex cases. Section VIII is devoted to conclusions and open questions. \n\n\\section*{\\normalsize\\bf II.
BRIEF SUMMARY OF MAIN RESULTS}\n\nFor any UE on real line, given the corresponding orthogonal polynomial system and related Christoffel-Darboux kernel $K_n$, functions $Q_n(x)$, $P_n(x)$ ($n$ -- matrix size), defining the resolvent kernel $R_n$ of its restriction $K_n^J$ to a subset $J^c$ of $\\Re$ also satisfy certain three-term recurrence relations,\n\n\\begin{equation}\nxQ_n = b_nQ_{n+1}+(a_n+v_n-v_{n+1})Q_n+b_{n-1}(1-\\bar u_n)(1+\\bar w_n)Q_{n-1}, \\label{eq:3Q} \n\\end{equation}\n\n\n\\begin{equation}\nxP_n = b_{n-1}(1-\\bar u_n)(1+\\bar w_n)P_{n+1}+(a_{n-1}-v_n+v_{n-1})P_n+b_{n-2}P_{n-1}, \\label{eq:3P} \n\\end{equation}\n\n\\noindent with coefficients explicitly determined by inner products $u_n$, $v_n$ and $w_n$ (\\ref{eq:11}) over the subset, which were the additional auxiliary variables introduced by Tracy and Widom in Refs.~\\cite{TW-Airy, TW1} to derive PDE for level spacing probabilities. Here $\\bar u_n = u_n\/b_{n-1}$, $\\bar w_n = w_n\/b_{n-1}$, and $a_n$, $b_n$ are the coefficients in the original three-term relations (\\ref{eq:1}). There are also simple recursions among the above quantities:\n\n\\begin{equation}\nP_{n+1}(x) = \\frac{Q_n(x)}{1-\\bar u_n}, \\hspace{1.5cm} Q_{n-1}(x) = \\frac{P_n(x)}{1+\\bar w_n}, \\label{eq:16}\n\\end{equation}\n\n\\begin{equation}\n\\bar w_{n+1} = \\bar u_n\/(1-\\bar u_n). \\label{eq:17}\n\\end{equation}\n\n\\noindent Renormalized functions\n\n\\begin{equation}\nr_k(x) = \\frac{Q_k(x)}{(1-\\bar u_k)^{1\/2}}, \\label{eq:21}\n\\end{equation}\n\n\\noindent are orthonormal w.r.t. the original measure $\\exp(-V(x))dx$ restricted to the complement $J$ of the subset. 
Thus, they also satisfy the usual Christoffel-Darboux formula:\n\n\\begin{equation}\nR_n(x,y) = \\sum_{k=0}^{n-1}r_k(x)r_k(y) = b_{n-1}(1-\\bar u_n)^{1\/2}(1+\\bar w_n)^{1\/2}\\frac{r_n(x)r_{n-1}(y)-r_{n-1}(x)r_n(y)}{x-y}, \\label{eq:22}\n\\end{equation}\n\n\\noindent and the three-term relations with symmetric (Jacobi) matrix of coefficients\n\n$$\nxr_n(x) = b_n(1-\\bar u_{n+1})^{1\/2}(1+\\bar w_{n+1})^{1\/2}r_{n+1}(x)+(a_n+v_n-v_{n+1})r_n(x)+\n$$\n\n\\begin{equation}\n+ b_{n-1}(1-\\bar u_n)^{1\/2}(1+\\bar w_n)^{1\/2}r_{n-1}(x). \\label{eq:23}\n\\end{equation}\n\n\\noindent In fact, the above formulas add some more specific detail to the earlier result of Borodin and Soshnikov~\\cite{BorSosh} on restricted RME.\n\\par As a consequence of the above simple facts and of the general connection of orthogonal polynomial systems with the Toda lattice, simple universal relations hold between these quantities and the UE matrix integrals, which are Toda $\\tau$-functions:\n\n\\begin{equation}\nu_n \\equiv (\\varphi, (I-K_n^J)^{-1}\\varphi) = b_{n-1}\\left(1-\\frac{\\tau_{n+1}^J\/\\tau_{n+1}}{\\tau_n^J\/\\tau_n}\\right) = \\sqrt\\frac{\\tau_{n+1}\\tau_{n-1}}{(\\tau_n)^2}\\left(1-\\frac{\\tau_{n+1}^J\/\\tau_{n+1}}{\\tau_n^J\/\\tau_n}\\right), \\label{eq:30}\n\\end{equation}\n\n\\begin{equation}\nw_n \\equiv (\\psi, (I-K_n^J)^{-1}\\psi) = b_{n-1}\\left(\\frac{\\tau_{n-1}^J\/\\tau_{n-1}}{\\tau_n^J\/\\tau_n}-1\\right) = \\sqrt\\frac{\\tau_{n+1}\\tau_{n-1}}{(\\tau_n)^2}\\left(\\frac{\\tau_{n-1}^J\/\\tau_{n-1}}{\\tau_n^J\/\\tau_n}-1\\right), \\label{eq:32}\n\\end{equation}\n \n\\begin{equation}\n\\left. v_n \\equiv (\\varphi, (I-K_n^J)^{-1}\\psi) \\equiv (\\psi, (I-K_n^J)^{-1}\\varphi) = -\\frac{\\prt}{\\prt t_1}\\ln\\frac{\\tau_n^J}{\\tau_n}\\right|_{t=0}. 
\\label{eq:41a}\n\\end{equation}\n\n\\noindent where $\\tau_n^J$ is the matrix integral over $J^n$, $\\tau_n$ -- the same integral over $\\Re$, and $t_1$ is the first Toda time.\n\\par Due to the last relations, there is a universal system of PDE with respect to {\\it each single} endpoint of the spectrum $\\xi$ {\\it and} $t_1$ satisfied by $T \\equiv \\ln\\tau_n^J$ and $\\tau$-ratios $U_n \\equiv \\frac{\\tau_{n+1}^J}{\\tau_n^J}$ and $W_n \\equiv \\frac{\\tau_{n-1}^J}{\\tau_n^J}$, if we consider the original measure modified by a linear potential, $V(x) \\to V(x) - t_1x$:\n \n\\begin{equation}\n\\left(\\frac{\\prt\\ln\\tau_n^J}{\\prt\\xi}\\right)^2 = R_n^2(\\xi,\\xi) = -\\frac{1}{4}\\frac{\\prt U_n}{\\prt\\xi}\\frac{\\prt W_n}{\\prt \\xi}\\left(\\frac{\\prt}{\\prt \\xi}\\ln\\left(-\\frac{\\prt U_n\/\\prt \\xi}{\\prt W_n\/\\prt \\xi}\\right)\\right)^2. \\label{eq:51} \n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt^2 \\ln\\tau_n^J}{\\prt t_1^2} = -\\frac{\\prt v_n}{\\prt t_1} + \\frac{\\prt^2 \\ln\\tau_n}{\\prt t_1^2} = U_nW_n, \\label{eq:52}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt^2 U_n}{\\prt \\xi\\prt t_1} = \\xi\\frac{\\prt U_n}{\\prt\\xi} + 2\\frac{\\prt v_n}{\\prt\\xi}U_n, \\label{eq:53}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt^2 W_n}{\\prt \\xi\\prt t_1} = -\\xi\\frac{\\prt W_n}{\\prt\\xi} + 2\\frac{\\prt v_n}{\\prt\\xi}W_n, \\label{eq:54}\n\\end{equation}\n\n\\noindent Here (\\ref{eq:52}) is the well-known Toda equation, and the other three PDE satisfied by any such modified UE can be written equivalently as ($t \\equiv t_1$, $U = U_n$, $W = W_n$ below)\n\n\\begin{equation}\n\\prt_t(T_{\\xi})^2 = U_{\\xi}W_{\\xi\\xi} - W_{\\xi}U_{\\xi\\xi}, \\label{eq:57}\n\\end{equation}\n\n\\begin{equation}\n\\left(T_{\\xi t}\\right)^2 = -U_{\\xi}W_{\\xi}, \\label{eq:55}\n\\end{equation}\n\n\\begin{equation}\nWU_{\\xi t} - UW_{\\xi t} = \\xi T_{\\xi tt}. 
\\label{eq:56}\n\\end{equation}\n\n\\noindent It is possible to eliminate $U$ and $W$ from (\\ref{eq:57}), (\\ref{eq:55}) and (\\ref{eq:52}), and we obtain a {\\it universal PDE for the logarithm of the gap probability} $\\Pr$ ($T = \\ln\\tau_n^J = \\ln\\Pr + \\ln\\tau_n$):\n\n\\begin{equation}\n\\left(T_{\\xi t} T_{\\xi\\xi tt} - T_{\\xi tt}T_{\\xi\\xi t} + 2(T_{\\xi t})^3\\right)^2 = (T_{\\xi})^2\\left(4T_{tt}(T_{\\xi t})^2 + (T_{\\xi tt})^2\\right). \\label{eq:T} \n\\end{equation}\n\n\\noindent In addition, using also (\\ref{eq:56}) allows one to derive a universal PDE for the logarithm of the ratio $T_+ \\equiv \\ln U_n \\equiv \\ln(\\tau_{n+1}^J\/\\tau_n^J)$ (where `prime' stands for the derivative w.r.t. $\\xi$):\n\n\\begin{equation}\n(T_+'')_{tt} - \\frac{(T_+')_{tt}T_+''}{T_+'} - (T_+')^2(T_+)_{tt} - \\frac{(T_+')_{t}(T_+'')_t}{T_+'} + \\frac{(T_+')_{t}^2T_+''}{(T_+')^2} + (\\xi - (T_+)_t)T_+'(2(T_+')_t - 1) = 0. \\label{eq:T+} \n\\end{equation}\n \n\\par The universal Toda-AKNS-like structure of (\\ref{eq:52})--(\\ref{eq:54}) appears in a different guise in section VI, from the Schlesinger equations of the isomonodromic approach, where the Toda time $t_1$ is not involved. \n\n\n\\section*{\\normalsize\\bf III. RESOLVENT KERNEL${\\backslash}$RESTRICTED ENSEMBLES' 3-TERM RELATIONS}\n\nConsider a system of orthogonal functions (quasi-polynomials) $\\{\\varphi_n\\}$ that satisfy a 3-term relation ( $\\varphi_n(x) = \\pi_n(x)\\exp(-V(x)\/2)$, $\\{\\pi_n\\}$ are polynomials orthonormal w.r.t. a probability measure $\\exp(-V(x))dx$ on the real line $\\Re$ ):\n\n\\begin{equation}\nx\\varphi_n = b_n\\varphi_{n+1} + a_n\\varphi_n + b_{n-1}\\varphi_{n-1}, \\label{eq:1}\n\\end{equation}\n\n\\noindent which can be written as \n\n\\begin{equation}\nx\\varphi_n = (\\L\\varphi)_n, \\hspace{1cm} \\L = b\\Lambda^T + a\\Lambda^0 + b\\Lambda, \\label{eq:1a}\n\\end{equation}\n\n\\noindent i.e. 
$\\L$ - infinite matrix with $n$-th row $(\\L_n)_j = b_n\\delta_{n+1,j} + a_n\\delta_{n,j} + b_{n-1}\\delta_{n-1,j}$ ($\\Lambda_{ij} = \\delta_{i,j+1}$ - the infinite shift matrix, $a = diag(a_0, a_1, a_2, ...)$, $b = diag(b_0, b_1, b_2, ...)$). The corresponding Christoffel-Darboux formula reads:\n\n\\begin{equation}\nK_n(x, y) = \\frac{\\varphi(x)\\psi(y) - \\psi(x)\\varphi(y)}{x - y} = \\sum_{k=0}^{n-1}\\varphi_k(x)\\varphi_k(y), \\label{eq:2}\n\\end{equation}\n\n\\noindent where $\\varphi(x) = \\sqrt{b_{n-1}}\\varphi_n(x)$, $\\psi(x) = \\sqrt{b_{n-1}}\\varphi_{n-1}(x)$. Denote by $K_n^J$ the Fredholm operator with kernel (\\ref{eq:2}) acting {\\it on the complement} $J^c$ of subset $J$ of $\\Re$. The resolvent kernel (we will sometimes use short-hand notation $K$ for $K_n^J$), $R(x, y) \\equiv R_n^J(x, y)$, the kernel of $K(I-K)^{-1}$, is defined so that the operator identity holds:\n\n\\begin{equation}\n(I + R)(I - K) = I. \\label{eq:3}\n\\end{equation}\n\nLet us introduce the auxiliary functions playing a prominent role in TW approach~\\cite{TW1}:\n\n\\begin{equation}\nQ(x;J) = (I-K)^{-1}\\varphi(x), \\hspace{1cm} P(x;J) = (I-K)^{-1}\\psi(x). \\label{eq:4}\n\\end{equation}\n\n\\noindent It is known since the seminal work~\\cite{IIKS} that if $K$ is an {\\it integrable} operator, i.e. 
its kernel can be expressed by the left-hand side of the Christoffel-Darboux formula (\\ref{eq:2}), the resolvent operator is also integrable and its kernel can be written the same way in terms of the functions $Q$ and $P$:\n\n\\begin{equation}\nR_n(x, y) = \\frac{Q(x)P(y) - P(x)Q(y)}{x - y} = b_{n-1}\\frac{Q_n(x)P_n(y) - P_n(x)Q_n(y)}{x - y}, \\label{eq:5}\n\\end{equation}\n\n\\noindent where we let $Q = \\sqrt{b_{n-1}}Q_n$, $P = \\sqrt{b_{n-1}}P_n$, then\n\n\\begin{equation}\n\\varphi_n = (I-K)Q_n, \\hspace{2cm} \\varphi_{n-1} = (I-K)P_n \\label{eq:6}\n\\end{equation}\n\n\\noindent Important for our purposes will be inner products $u$, $v$ and $w$, introduced in Ref.~\\cite{TW1}:\n\n\\begin{equation}\n\\begin{array}{l} u \\equiv u_n = \\int_{J^c} Q(x;J)\\varphi(x)dx, \\ \\ \\ \\ \\ w \\equiv w_n = \\int_{J^c} P(x;J)\\psi(x)dx, \\\\ \\\\ v \\equiv v_n = \\int_{J^c} P(x;J)\\varphi(x)dx = \\int_{J^c} Q(x;J)\\psi(x)dx \\end{array} \\label{eq:11}\n\\end{equation}\n\n\\noindent the second equality in the definition of $v$ being true by symmetry of the operator $K_n^J$ and, consequently, also $R_n^J$. Let $\\bar u_n = u_n\/b_{n-1}$, $\\bar w_n = w_n\/b_{n-1}$, then\n\n\\begin{lemma}\n\n$$\nP_{n+1}(x) = \\frac{Q_n(x)}{1-\\bar u_n}, \\hspace{1.5cm} Q_{n-1}(x) = \\frac{P_n(x)}{1+\\bar w_n}. \\eqno(\\ref{eq:16})\n$$\n\n$$\n\\bar w_{n+1} = \\bar u_n\/(1-\\bar u_n). \\eqno(\\ref{eq:17})\n$$\n\n\\end{lemma}\n\n\\begin{proof}\n\\par Functions $P_{n+1}$ and $Q_n$ (as well as $Q_{n-1}$ and $P_n$) are not independent:\n\n\\begin{equation}\n\\varphi_n = (I-K)Q_n = (I-K_{n+1})P_{n+1} = (I-K)P_{n+1} - (K_{n+1}-K)P_{n+1}, \\label{eq:13}\n\\end{equation}\n\n\\noindent which can be rewritten as\n\n\\begin{equation}\n\\left(I - (I-K)^{-1}(K_{n+1}-K)\\right)P_{n+1} = Q_n. 
\\label{eq:14} \n\\end{equation}\n\n\\noindent On the other hand, the operator difference $K_{n+1}^J-K_n^J$ in (\\ref{eq:14}) is a simple projector (with $\\dot=$ meaning the kernel of the operator on the left):\n\n\\begin{equation}\nK_{n+1}-K\\ {\\dot=}\\ \\varphi_n(x)\\varphi_n(y)\\chi_{J^c}(y), \\label{eq:15} \n\\end{equation}\n\n\\noindent where $\\chi_{J^c}(y)$ is the characteristic functon of $J^c$, equal to $1$ on $J^c$ and $0$ outside. After applying (\\ref{eq:15}) in (\\ref{eq:14}) and using the definition of quantities $u_n$ and $w_n$, we find simple recursion formulas for $P_{n+1}$ and $Q_{n-1}$ in terms of $Q_n$ and $P_n$, i.e. (\\ref{eq:16}). Shifting indices in (\\ref{eq:16}), we find (\\ref{eq:17}) as their consistency condition.\n\n\\end{proof}\n\n\\begin{theorem}\n\nFunctions $Q_n$ and $P_n$ in the resolvent kernel of a Fredholm operator associated with finite size UE, satisfy three term recurrence relations:\n\n$$\nxQ_n = b_nQ_{n+1}+(a_n+v_n-v_{n+1})Q_n+b_{n-1}(1-\\bar u_n)(1+\\bar w_n)Q_{n-1}, \\eqno(\\ref{eq:3Q}) \n$$\n\n$$\nxP_n = b_{n-1}(1-\\bar u_n)(1+\\bar w_n)P_{n+1}+(a_{n-1}-v_n+v_{n-1})P_n+b_{n-2}P_{n-1}. \\eqno(\\ref{eq:3P}) \n$$\n\n\\end{theorem}\n\n\\begin{proof}\n\\noindent From (\\ref{eq:1a}) and (\\ref{eq:6}) we have\n\n\\begin{equation}\nx\\varphi_n = (\\L(I-K)Q)_n = (I-K)(\\L Q)_n - ([\\L,K]Q)_n, \\label{eq:7}\n\\end{equation}\n\n\\noindent where the commutator gives\n\n$$\n([\\L,K]Q)_n = (\\L KQ)_n - K(\\L Q)_n = \n$$\n\n$$\n= (b_nK_{n+1}Q_{n+1}+a_nKQ_n+b_{n-1}K_{n-1}Q_{n-1}) - K(b_nQ_{n+1}+a_nQ_n+b_{n-1}Q_{n-1}) = \n$$\n\n\\begin{equation}\n= b_n(K_{n+1}-K)Q_{n+1} + b_{n-1}(K_{n-1}-K)Q_{n-1}. \\label{eq:8}\n\\end{equation}\n\n\\noindent Similarly, \n\n$$\nx\\varphi_{n-1} = (I-K)(\\L P)_n - ([\\L,K]P)_n \\eqno(\\ref{eq:7}a)\n$$\n\n\\noindent with\n\n$$\n([\\L,K]P)_n = b_{n-1}(K_{n+1}-K)P_{n+1} + b_{n-2}(K_{n-1}-K)P_{n-1}. 
\\eqno(\\ref{eq:8}a)\n$$\n\n\\noindent Applying operator $(I-K)^{-1}$ to (\\ref{eq:7}), (\\ref{eq:7}a) we get\n\n\\begin{equation}\n(I-K)^{-1}x\\varphi_n = (\\L Q)_n - (I-K)^{-1}([\\L,K]Q)_n, \\label{eq:9}\n\\end{equation}\n\n$$\n(I-K)^{-1}x\\varphi_{n-1} = (\\L P)_n - (I-K)^{-1}([\\L,K]P)_n \\eqno(\\ref{eq:9}a)\n$$\n\n\\par For the left-hand side of (\\ref{eq:9}) we derive (imitating similar derivations of Refs.~\\cite{IIKS, TW1}, see also Ref.~\\cite{BIK}):\n\n$$\n(I-K)^{-1}x\\varphi_n = \\frac{1}{\\sqrt{b_{n-1}}}(I-K)^{-1}x\\varphi = \\frac{1}{\\sqrt{b_{n-1}}}(I+R)x\\varphi = \n$$\n\n$$\n= \\frac{1}{\\sqrt{b_{n-1}}}\\int_J\\left(\\delta(x-y)+\\frac{Q(x)P(y)-Q(y)P(x)}{x-y}\\right)\\left(x\\varphi(y) + (y-x)\\varphi(y)\\right)dy = \n$$\n\n\\begin{equation}\n= \\frac{1}{\\sqrt{b_{n-1}}}\\left(x(I+R)\\varphi - \\int_J(Q(x)P(y)-Q(y)P(x))\\varphi(y)dy\\right) = xQ_n - vQ_n + uP_n, \\label{eq:10}\n\\end{equation}\n\n\\noindent Completely analogously we get\n\n\\begin{equation}\n(I-K)^{-1}x\\varphi_{n-1} = xP_n - wQ_n + vP_n. \\label{eq:12}\n\\end{equation}\n\n\\noindent On the right-hand side of (\\ref{eq:9}) and (\\ref{eq:9}a), respectively, using (\\ref{eq:8}), (\\ref{eq:8}a), (\\ref{eq:15}) and the definitions (\\ref{eq:11}), we get\n\n\\begin{equation}\n(I-K)^{-1}([\\L,K]Q)_n = v_{n+1}Q_n - b_{n-1}\\u_{n-1}P_n, \\label{eq:18}\n\\end{equation}\n\n$$\n(I-K)^{-1}([\\L,K]P)_n = b_{n-1}\\w_{n+1}Q_n - v_{n-1}P_n. 
\\eqno(\\ref{eq:18}a)\n$$\n\n\\noindent Finally, plugging (\\ref{eq:10}), (\\ref{eq:18}) into (\\ref{eq:9}), and (\\ref{eq:12}), (\\ref{eq:18}a) into (\\ref{eq:9}a), we come to the three-term relations (\\ref{eq:3Q}), (\\ref{eq:3P}).\n\\end{proof}\n\n\\par Consider now the difference of resolvent kernels for the operators of consecutive rank:\n\n$$\nR_{n+1}(x,y)-R_n(x,y) = \\frac{Q_n(x)Q_n(y)}{1-\\bar u_n} = Q_n(x)P_{n+1}(y) = P_{n+1}(x)Q_n(y),\n$$\n\n\\noindent therefore\n\n$$\nR_n(x,y) = b_{n-1}\\frac{Q_n(x)P_n(y)-P_n(x)Q_n(y)}{x-y} = \\sum_{k=0}^{n-1}Q_k(x)P_{k+1}(y).\n$$\n\n\\noindent If we introduce the following normalization for the orthogonal functions:\n\n$$\nr_k(x) = \\frac{Q_k(x)}{(1-\\bar u_k)^{1\/2}}, \\eqno(\\ref{eq:21}) \n$$\n\n\\noindent then one can easily check that the new functions $r_k$ are orthonormal w.r.t. the measure $\\exp(-V(x))dx$ restricted to $J$, i.e. the original measure on $J$ and zero outside of it. Thus, in terms of these functions we can write out \n\n\\begin{theorem}\n\nThe Christoffel-Darboux formula for the resolvent kernel holds:\n\n$$\nR_n(x,y) = \\sum_{k=0}^{n-1}r_k(x)r_k(y) = b_{n-1}(1-\\bar u_n)^{1\/2}(1+\\bar w_n)^{1\/2}\\frac{r_n(x)r_{n-1}(y)-r_{n-1}(x)r_n(y)}{x-y}, \\eqno(\\ref{eq:22})\n$$\n\n\\noindent and the three-term relations with symmetric (Jacobi) matrix of coefficients are\n\n$$\nxr_n(x) = b_n(1-\\bar u_{n+1})^{1\/2}(1+\\bar w_{n+1})^{1\/2}r_{n+1}(x)+(a_n+v_n-v_{n+1})r_n(x)+\n$$\n \n$$\n+ b_{n-1}(1-\\bar u_n)^{1\/2}(1+\\bar w_n)^{1\/2}r_{n-1}(x). \\eqno(\\ref{eq:23})\n$$\n\n\\end{theorem}\n\nAt this point one should recall the contents of the paper by Borodin and Soshnikov~\\cite{BorSosh}. There the setup is completely identical to ours: two complementary subsets $J$ and $J^c = \\Re\\setminus J$ of $\\Re$ are considered, and the corresponding operator defined by (\\ref{eq:3}). The name ``resolvent kernel\" is just not used there; another name -- ``Janossy densities\" -- is applied instead. 
The authors, however, in fact give a simple proof of orthogonality of the above functions $Q$, $P$ or $r$, so we just restate it briefly here in terms of Refs.~\\cite{IIKS, TW1}. By definition of functions $Q_n$ we can write\n\n$$\n\\varphi_n = (I - K_n^J)Q_n = Q_n - \\sum_{k=0}^{n-1}\\varphi_k(\\varphi_k, Q_n)_{J^c},\n$$\n\n\\noindent or\n\n\\begin{equation}\nQ_n = \\varphi_n + \\sum_{k=0}^{n-1}A_{nk}\\varphi_k, \\label{eq:24}\n\\end{equation}\n\n\\noindent where $A_{nk} = (\\varphi_k, Q_n)_{J^c}$ -- the (matrix of) inner products, so that $A_{nn} = \\bar u_n$. Taking inner products of (\\ref{eq:24}) with $\\varphi_k$, $k \\le n$, over $\\Re$, we see that\n\n$$\n(\\varphi_k, Q_n)_{\\Re} = (\\varphi_k, Q_n)_{J^c}, \\text{ for } k < n,\n$$\n\n\\noindent so $\\{Q_n\\}$ are orthogonal on $J$ and\n\n$$\n(\\varphi_n, Q_n)_{\\Re} = (\\varphi_n, \\varphi_n)_{\\Re} = 1.\n$$\n\n\\noindent It follows that\n\n$$\n(Q_n, Q_n)_{J} = (\\varphi_n, Q_n)_{J} = (\\varphi_n, Q_n)_{\\Re} - (\\varphi_n, Q_n)_{J^c} = 1 - \\u_n,\n$$\n\n\\noindent which shows that functions $\\{r_n(x; J)\\}$ defined by (\\ref{eq:21}) are indeed orthonormal on $J \\subset \\Re$. Following Ref.~\\cite{BorSosh}, let us introduce the $n \\times n$ matrices with elements ($j, k = 0,1,\\dots,n-1$) \n\n$$\n(G_J)_{jk} = \\int_J \\varphi_j(x)\\varphi_k(x)dx, \\ \\ \\ (G_{J^c})_{jk} = \\delta_{jk}-(G_J)_{jk} = \\int_{J^c} \\varphi_j(x)\\varphi_k(x)dx.\n$$\n\n\\noindent Taking inner products of (\\ref{eq:24}) with $\\varphi_k$ over $J^c$, one can easily see that\n\n$$\n\\delta_{jk} + A_{jk} = \\left((I - G_{J^c})^{-1}\\right)_{jk} = \\left(G_J^{-1}\\right)_{jk},\n$$\n\n\\noindent i.e. the relation for the restriction of $R_n^J$ to the $n$-dimensional subspace $\\H = \\text{span} \\{\\varphi_j\\}_{j=0,\\dots,n-1}$ equal to $G_{J^c}(I-G_{J^c})^{-1}$, which was crucial in the argument of Ref.~\\cite{BorSosh}. 
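The matrix relation just derived is easy to probe numerically. The following pure-Python sketch is not part of the paper: it assumes, purely for illustration, the Gaussian weight $e^{-x^2}$ (so that the $\\varphi_j$ are the orthonormal Hermite functions), size $n = 3$, $J = [0, +\\infty)$, and a cutoff of the real line at $\\pm 8$. It builds the Gram matrix $G_J$ by quadrature, sets $A = G_J^{-1} - I$, and checks that the functions $Q_j = \\varphi_j + \\sum_k A_{jk}\\varphi_k$ (cf. (\\ref{eq:24}), with the full $n \\times n$ matrix $A$) indeed reproduce $A_{jk} = (\\varphi_k, Q_j)_{J^c}$:

```python
import math

def hermite_phi(n, x):
    # Orthonormal Hermite function phi_n(x) = H_n(x) e^{-x^2/2} / (2^n n! sqrt(pi))^{1/2},
    # i.e. pi_n(x) exp(-V(x)/2) for the Gaussian weight V(x) = x^2.
    if n == 0:
        h = 1.0
    else:
        h0, h1 = 1.0, 2.0 * x
        for k in range(1, n):
            h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
        h = h1
    return h * math.exp(-0.5 * x * x) / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))

def quad(f, a, b, m=2000):
    # composite Simpson rule (m even); accurate enough for this smoke test
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, m, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, m, 2))
    return s * h / 3.0

def inverse(M):
    # Gauss-Jordan inverse of a small matrix, with partial pivoting
    d = len(M)
    aug = [row[:] + [float(i == j) for j in range(d)] for i, row in enumerate(M)]
    for c in range(d):
        piv = max(range(c, d), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        aug[c] = [v / aug[c][c] for v in aug[c]]
        for r in range(d):
            if r != c:
                aug[r] = [v - aug[r][c] * w for v, w in zip(aug[r], aug[c])]
    return [row[d:] for row in aug]

n, INF = 3, 8.0              # cut the real line at +-INF; the weight is ~e^{-64} there
G_J = [[quad(lambda x, j=j, k=k: hermite_phi(j, x) * hermite_phi(k, x), 0.0, INF)
        for k in range(n)] for j in range(n)]   # Gram matrix of phi_j on J = [0, +inf)
Ginv = inverse(G_J)
A = [[Ginv[j][k] - float(j == k) for k in range(n)] for j in range(n)]

def Q(j, x):
    # Q_j = phi_j + sum_k A_{jk} phi_k, with the full n x n matrix A
    return hermite_phi(j, x) + sum(A[j][k] * hermite_phi(k, x) for k in range(n))

# the relation I + A = G_J^{-1} in action: A_{jk} = (phi_k, Q_j)_{J^c}
for j in range(n):
    for k in range(n):
        ip = quad(lambda x, j=j, k=k: Q(j, x) * hermite_phi(k, x), -INF, 0.0)
        assert abs(ip - A[j][k]) < 1e-5
print("I + A = G_J^{-1} verified numerically")
```

Any other weight, subset $J$ or size $n$ can be substituted here, as long as the quadrature grid and the cutoff resolve the functions involved.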
\\\\\n{\\it Remark.} Also, see Proposition 2.5 of Ref.~\\cite{Bor}, if we take the Gauss decomposition of $G_J$, $G_J = S^{-1}(S^T)^{-1}$, where $S$ is lower-triangular, then matrix $S$ gives the coefficients of expansion of $r_k$ w.r.t. the $\\varphi_j$ basis: $r_k(x; J) = \\sum_{j=0}^{n-1}S_{kj}(J)\\varphi_j(x)$, $k = 0, \\dots, n-1$.\n\\par A more recent consideration of Janossy densities for general UE at the spectral edge, using Riemann-Hilbert problem techniques was undertaken in Ref.~\\cite{RiZh}, also without making explicit connections with the framework of Ref.~\\cite{TW1}.\n\n\\section*{\\normalsize\\bf IV. UNIVERSAL RELATIONS BETWEEN TW VARIABLES AND TODA $\\tau$-FUNCTIONS}\n\nBy definitions of the matrix integral over $J^n$, $\\tau_n^J$, and the resolvent operator $R_n^J = K_n^J(I-K_n^J)^{-1}$, we have the fundamental relation between them:\n\n\\begin{equation}\nR_n^J(a_j,a_j) = (-1)^{j-1}\\frac{\\prt\\ln\\tau_n^J}{\\prt a_j}. \\label{eq:25}\n\\end{equation}\n\n\\noindent This is just a consequence of the defining connection, the equality of two different expressions for the probability of all eigenvalues to lie in $J$,\n\n\\begin{equation}\n\\det(I - K_n^J) = \\frac{\\tau_n^J}{\\tau_n}, \\label{eq:26}\n\\end{equation}\n\n\\noindent where $\\tau_n$ is the corresponding matrix integral over whole $\\Re^n$. Moreover, we have\n\n\\begin{theorem}\n\n$$\nu_n \\equiv (\\varphi, (I-K_n^J)^{-1}\\varphi) = b_{n-1}\\left(1-\\frac{\\tau_{n+1}^J\/\\tau_{n+1}}{\\tau_n^J\/\\tau_n}\\right) = \\sqrt\\frac{\\tau_{n+1}\\tau_{n-1}}{(\\tau_n)^2}\\left(1-\\frac{\\tau_{n+1}^J\/\\tau_{n+1}}{\\tau_n^J\/\\tau_n}\\right). \\eqno(\\ref{eq:30})\n$$\n\n$$\nw_n \\equiv (\\psi, (I-K_n^J)^{-1}\\psi) = b_{n-1}\\left(\\frac{\\tau_{n-1}^J\/\\tau_{n-1}}{\\tau_n^J\/\\tau_n}-1\\right) = \\sqrt\\frac{\\tau_{n+1}\\tau_{n-1}}{(\\tau_n)^2}\\left(\\frac{\\tau_{n-1}^J\/\\tau_{n-1}}{\\tau_n^J\/\\tau_n}-1\\right). 
\\eqno(\\ref{eq:32})\n$$\n\n\\end{theorem}\n\n\\begin{proof}\nLet us consider the values of our orthonormal functions $r_n(x)$ at the endpoints of $J$, $a_j$. From the expressions preceding the definition (\\ref{eq:21}) of functions $r_n$ and (\\ref{eq:25}) it follows that\n\n\\begin{equation}\nr_n(a_j)^2 = R_{n+1}^J(a_j,a_j)-R_n^J(a_j,a_j) = (-1)^{j-1}\\frac{\\prt\\ln(\\tau_{n+1}^J\/\\tau_n^J)}{\\prt a_j}. \\label{eq:27}\n\\end{equation}\n\n\\noindent On the other hand, let us recall the universal TW equations~\\cite{TW1}\n\n\\begin{equation}\n\\frac{\\prt u}{\\prt a_j} = (-1)^j Q(a_j; J)^2, \\label{eq:28}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt w}{\\prt a_j} = (-1)^j P(a_j; J)^2, \\label{eq:31}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt v}{\\prt a_j} = (-1)^j Q(a_j;J)P(a_j; J). \\label{eq:33}\n\\end{equation}\n\n\\noindent From (\\ref{eq:28}) and (\\ref{eq:21}) we have also \n\n\\begin{equation}\nr_n(a_j)^2 = (-1)^{j-1}\\frac{\\prt\\ln(1-\\bar u_n)}{\\prt a_j}. \\label{eq:29}\n\\end{equation}\n\n\\noindent Comparing the expressions (\\ref{eq:27}) and (\\ref{eq:29}), we get the simple universal relation (\\ref{eq:30}) between $\\bar u_n$ and $\\tau$-ratios. The other universal relation, for $\\bar w_n$ and corresponding $\\tau$-ratios is obtained completely similarly, using (\\ref{eq:31}) and Lemma 1.\n\\end{proof}\n \n\\noindent In fact, if we recall the recurrence relation between $\\bar w_n$ and $\\bar u_{n-1}$ (\\ref{eq:17}), we see that (\\ref{eq:32}) is the same relation as (\\ref{eq:30}), just written in terms of $\\bar w$ instead of $\\bar u$. The last two formulas were derived in Ref.~\\cite{IR1} for the Gaussian matrices only, using the specific properties of this case. Now it is clear that they hold for all UE. \n\\par For the Gaussian case we also obtained the following relation~\\cite{IR1}:\n\n\\begin{equation}\n2v \\equiv 2(\\varphi, (I-K_n^J)^{-1}\\psi) = \\B_{-1}\\ln\\tau_n^J. 
\\label{eq:34}\n\\end{equation}\n\n\\noindent This relation, as we will see, is not true for general UE. However, it turns out that it has a universal analogue. To see it, the framework of Toda lattice for UE, established in ASvM approach~\\cite{ASvM, AvM7}, see also Ref.~\\cite{PvM2007}, sections 5--8, is crucial. Following these works, let us consider the UE with modified probability measure: $\\rho(x) = e^{-V(x)} \\to \\rho_t(x) = e^{-V(x)+\\sum_{k=1}^{\\infty}t_kx^k}$, $t = (t_1, t_2, t_3, ...)$, then the matrix integral\n\n\\begin{equation}\n\\tau_n^J(t) = \\frac{1}{n!}\\int_{J^n} \\Delta_n^2(x) \\prod_1^n \\rho_t(x_i) dx_i \\label{eq:35}\n\\end{equation}\n\n\\noindent is a $\\tau$-function of integrable hierarchies -- KP and 1-dimensional Toda lattice -- as is the corresponding integral over whole $\\Re^n$, $\\tau_n(t)$. The orthogonal functions for $t$-deformed weight satisfy $t$-dependent 3-term recurrence relation\n\n$$\nx\\varphi_n(x;t) = b_n(t)\\varphi_{n+1}(x;t) + a_n(t)\\varphi_n(x;t) + b_{n-1}(t)\\varphi_{n-1}(x;t),\n$$\n\n\\noindent and the coefficients are expressed in terms of the Toda $\\tau$-functions:\n\n\\begin{equation}\na_n(t) = \\frac{\\prt}{\\prt t_1}\\ln\\frac{\\tau_{n+1}(t)}{\\tau_n(t)}, \\label{eq:36}\n\\end{equation}\n\n\\begin{equation}\nb_{n-1}^2(t) = \\frac{\\tau_{n+1}(t)\\tau_{n-1}(t)}{\\tau_n^2(t)}. \\label{eq:37}\n\\end{equation}\n\n\\noindent Let us denote $v_+ = v_{n+1}-v_n-a_n$, $v_- = v_{n-1}-v_n+a_{n-1}$. Then 3-term relation (\\ref{eq:23}) together with the same Toda lattice correspondence for restricted (``resolvent\") ensemble means:\n\n\\begin{equation}\nb_{n-1}^2 = \\frac{\\tau_{n+1}\\tau_{n-1}}{\\tau_n^2} \\rightarrow b_{n-1}^2(1-\\bar u_n)(1+\\bar w_n) = \\frac{\\tau_{n+1}^J\\tau_{n-1}^J}{(\\tau_n^J)^2}, \\label{eq:38}\n\\end{equation}\n\n\\begin{equation}\n\\left. 
a_n = \\frac{\\prt}{\\prt t_1}\\ln\\frac{\\tau_{n+1}}{\\tau_n}\\right|_{t=0}, \\hspace{0.5cm} a_n \\rightarrow -v_+, \\label{eq:39}\n\\end{equation}\n\n\\begin{equation}\n\\left. v_+ = -\\frac{\\prt}{\\prt t_1}\\ln\\frac{\\tau_{n+1}^J}{\\tau_n^J}\\right|_{t=0}, \\hspace{0.5cm} \\left. v_- = -\\frac{\\prt}{\\prt t_1}\\ln\\frac{\\tau_{n-1}^J}{\\tau_n^J}\\right|_{t=0}. \\label{eq:40}\n\\end{equation}\n\n\\noindent From the last relations one can immediately conclude that \n\n\\begin{theorem}\n\nThe inner product $v_n(t)$ can be expressed as\n\n\\begin{equation}\nv_n(t) = -\\frac{\\prt}{\\prt t_1}\\ln\\frac{\\tau_n^J(t)}{\\tau_n(t)}, \\label{eq:41}\n\\end{equation}\n\n\\noindent and the universal relation for the original (unmodified) ensemble follows by taking the point $t=0$:\n\n$$\n\\left. v_n = -\\frac{\\prt}{\\prt t_1}\\ln\\frac{\\tau_n^J}{\\tau_n}\\right|_{t=0}. \\eqno(\\ref{eq:41a})\n$$\n\n\\end{theorem}\n \n\\noindent Now, if we continue to consider the $t$-modified RME, the above universal relations together with the 3-term recursion relations for the restricted measure (or resolvent kernel) ensembles will lead us to some universal PDE for matrix integral $\\tau$-functions, containing derivatives w.r.t. the endpoints {\\it and} the first Toda time.\n\n\\section*{\\normalsize\\bf V. UNIVERSAL PDE FROM 3-TERM RELATIONS FOR RESOLVENT KERNEL}\n\nLet us consider our 3-term relations from section III taken at a spectral endpoint $\\xi$. Unlike the notation of Ref.~\\cite{TW1}, we denote $q_n = Q(\\xi; J)$, $p_n = P(\\xi; J)$, i.e. the subscript refers to the size of the matrix rather than to the order number of the endpoint as in Ref.~\\cite{TW1}. Also let $\\bar q_n = Q_n(\\xi; J)$, $\\bar p_n = P_n(\\xi; J)$ (see (\\ref{eq:5}), (\\ref{eq:6})). 
Then we have from 3-term relations (\\ref{eq:3Q}), (\\ref{eq:3P}):\n\n\\begin{equation}\nb_n\\bar q_{n+1} = (\\xi - a_n + v_{n+1} - v_n)\\bar q_n - b_{n-1}(1-\\bar u_n)\\bar p_n, \\label{eq:42}\n\\end{equation}\n\n\\begin{equation}\nb_{n-2}\\bar p_{n-1} = (\\xi - a_{n-1} + v_n - v_{n-1})\\bar p_n - b_{n-1}(1+\\bar w_n)\\bar q_n. \\label{eq:43} \n\\end{equation}\n\n\\begin{theorem}\n\nFor all finite size UE, the ratios of matrix integrals $U_n \\equiv \\tau_{n+1}^J\/\\tau_n^J$, $W_n \\equiv \\tau_{n-1}^J\/\\tau_n^J$ and the function $v_n(t) = -\\prt_{t_1}\\ln(\\tau_n^J(t)\/\\tau_n(t))$ satisfy the following PDE w.r.t. any specific spectral endpoint $\\xi \\in \\prt J$ and the first 1-Toda time $t_1$:\n\n$$\n\\frac{\\prt^2 U_n}{\\prt \\xi\\prt t_1} = \\xi\\frac{\\prt U_n}{\\prt\\xi} + 2\\frac{\\prt v_n}{\\prt\\xi}U_n, \\eqno(\\ref{eq:53})\n$$\n\n$$\n\\frac{\\prt^2 W_n}{\\prt \\xi\\prt t_1} = -\\xi\\frac{\\prt W_n}{\\prt\\xi} + 2\\frac{\\prt v_n}{\\prt\\xi}W_n. \\eqno(\\ref{eq:54}) \n$$\n\n\\end{theorem}\n\n\\begin{proof}\n\\noindent Using formulas (\\ref{eq:16}) and (\\ref{eq:33}), one gets for $\\xi = a_k$:\n\n\\begin{equation}\n\\frac{\\prt v_{n+1}}{\\prt a_k} = (-1)^k b_n\\bar q_{n+1}\\bar p_{n+1} = (-1)^k b_n\\bar q_{n+1}\\frac{\\bar q_n}{1-\\bar u_n}, \\label{eq:44}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt v_{n-1}}{\\prt a_k} = (-1)^k b_{n-2}\\bar q_{n-1}\\bar p_{n-1} = (-1)^k b_{n-2}\\bar p_{n-1}\\frac{\\bar p_n}{1+\\bar w_n}. \\label{eq:45}\n\\end{equation}\n\n\\noindent Substituting (\\ref{eq:42}) and (\\ref{eq:43}) into (\\ref{eq:44}) and (\\ref{eq:45}), respectively, we get\n\n$$\n\\frac{\\prt v_{n+1}}{\\prt \\xi} = -(-1)^k b_{n-1}\\bar q_n\\bar p_n + (\\xi - a_n + v_{n+1} - v_n)(-1)^k \\frac{\\bar q_n^2}{1-\\bar u_n} =\n$$\n\n\\begin{equation}\n= -\\frac{\\prt v_n}{\\prt\\xi} - (\\xi - a_n + v_{n+1} - v_n)\\frac{\\prt\\ln(1-\\bar u_n)}{\\prt\\xi}, \\label{eq:46}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt v_{n-1}}{\\prt \\xi} = -\\frac{\\prt 
v_n}{\\prt\\xi} + (\\xi - a_{n-1} + v_n - v_{n-1})\\frac{\\prt\\ln(1+\\bar w_n)}{\\prt\\xi}. \\label{eq:47}\n\\end{equation}\n\n\\noindent We introduce $U_n \\equiv \\tau_{n+1}^J\/\\tau_n^J$ and $W_n \\equiv \\tau_{n-1}^J\/\\tau_n^J$. Then, with the help of relations (\\ref{eq:30}), (\\ref{eq:32}) from the previous section, (\\ref{eq:46}) and (\\ref{eq:47}), respectively, can be written as (recall that $v_+ = v_{n+1}-v_n-a_n$, $v_- = v_{n-1}-v_n+a_{n-1}$)\n\n\\begin{equation}\n\\frac{\\prt(U_nv_+)}{\\prt\\xi} = -\\xi \\frac{\\prt U_n}{\\prt\\xi} - 2\\frac{\\prt v_n}{\\prt\\xi}U_n, \\label{eq:48}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt(W_nv_-)}{\\prt\\xi} = \\xi \\frac{\\prt W_n}{\\prt\\xi} - 2\\frac{\\prt v_n}{\\prt\\xi}W_n. \\label{eq:49}\n\\end{equation}\n\n\\noindent Since, by (\\ref{eq:41}), $v_+ = -\\prt_{t_1}\\ln U_n$, $v_- = -\\prt_{t_1}\\ln W_n$, the last two equations are equivalent to (\\ref{eq:53}) and (\\ref{eq:54}).\n\\end{proof}\n\n\\par Thus, on the one hand, there is always a universal system of PDE, valid for any UE and very similar in structure to the system derived in Ref.~\\cite{IR1} for the Gaussian case using the first equations of the 1-Toda lattice hierarchy~\\cite{AvM2, AvM7, PvM2007} -- the coupled Toda-AKNS system (see, e.g., Ref.~\\cite{Newell}). It consists of the Toda equation itself: \n\n$$\n\\frac{\\prt^2 \\ln\\tau_n^J}{\\prt t_1^2} = -\\frac{\\prt v_n}{\\prt t_1} + \\frac{\\prt^2 \\ln\\tau_n}{\\prt t_1^2} = U_nW_n, \\eqno(\\ref{eq:52}) \n$$\n\n\\noindent and the analog of AKNS derived above -- the equations (\\ref{eq:53}), (\\ref{eq:54}). \n\\par However, another important universal equation follows from matching with the TW approach, if we recall the TW formula~\\cite{TW1} for the diagonal element of the resolvent kernel taken at an endpoint $\\xi$, \n\n\\begin{equation}\nR_n^2(\\xi,\\xi) = p_n\\frac{\\prt q_n}{\\prt \\xi} - q_n\\frac{\\prt p_n}{\\prt \\xi}. 
\\label{eq:50}\n\\end{equation}\n\n\\begin{theorem}\n\nWith respect to any particular endpoint $\\xi$ of the spectrum for a finite size UE, the following universal PDE holds for matrix integrals -- $\\tau$-functions:\n\n$$\n\\left(\\frac{\\prt\\ln\\tau_n^J}{\\prt\\xi}\\right)^2 = R_n^2(\\xi,\\xi) = -\\frac{1}{4}\\frac{\\prt U_n}{\\prt\\xi}\\frac{\\prt W_n}{\\prt \\xi}\\left(\\frac{\\prt}{\\prt \\xi}\\ln\\left(-\\frac{\\prt U_n\/\\prt \\xi}{\\prt W_n\/\\prt \\xi}\\right)\\right)^2. \\eqno(\\ref{eq:51}) \n$$\n\n\\end{theorem}\n\n\\begin{proof}\n\nIf we substitute (\\ref{eq:50}) into the square of (\\ref{eq:25}), use TW formulas (\\ref{eq:28}), (\\ref{eq:31}), and then formulas (\\ref{eq:30}) and (\\ref{eq:32}) from previous section, we arrive at equation (\\ref{eq:51}).\n\n\\end{proof}\n\nThis will make it possible to derive something even more interesting -- a universal PDE for only one dependent variable, the logarithm of the $\\tau$-function $\\tau_n^J$.\nFrom the TW equations alone one can immediately get the simple bilinear relation:\n\n\\begin{equation}\n\\left(\\frac{\\prt v_n}{\\prt \\xi}\\right)^2 = \\frac{\\prt u_n}{\\prt \\xi} \\frac{\\prt w_n}{\\prt \\xi} \\label{eq:55a} \n\\end{equation}\n\n\\noindent From now on we will denote $T = \\ln\\tau_n^J$ and write $U$, $W$ instead of $U_n$, $W_n$ and just $t$ instead of $t_1$; also we will write just $f_{\\xi}$ for $\\prt f\/\\prt\\xi$, $f_t$ for $\\prt f\/\\prt t$ etc. Using the universal relations (\\ref{eq:30}), (\\ref{eq:32}) and (\\ref{eq:41}) with $\\tau$-functions, (\\ref{eq:55a}) can be rewritten as \n\n$$\n\\left(v_{\\xi}\\right)^2 = \\left(T_{\\xi t}\\right)^2 = -U_{\\xi}W_{\\xi}. \\eqno(\\ref{eq:55})\n$$\n\n\\noindent The last equation is also a consequence of the above (\\ref{eq:53}), (\\ref{eq:54}). 
Thus, (\\ref{eq:53}) and (\\ref{eq:54}) can be alternatively rewritten as a different pair of equations -- the equation (\\ref{eq:55}) and their other independent combination:\n\n$$\nWU_{\\xi t} - UW_{\\xi t} = \\xi\\prt_{\\xi}(UW) = \\xi T_{\\xi tt}. \\eqno(\\ref{eq:56}) \n$$\n\n\\noindent The equation (\\ref{eq:51}) can also be transformed, with the help of the universal relations, into another form:\n\n$$\n\\prt_t(T_{\\xi})^2 = U_{\\xi}W_{\\xi\\xi} - W_{\\xi}U_{\\xi\\xi}, \\eqno(\\ref{eq:57}) \n$$\n\n\\noindent which will be used in what follows. It is convenient to introduce functions $G$ and $H$,\n\n\\begin{equation}\nG = WU_{\\xi} - UW_{\\xi}, \\label{eq:58} \n\\end{equation}\n\n\\begin{equation}\nH = WU_t - UW_t. \\label{eq:59}\n\\end{equation}\n\n\\noindent We now proceed to prove\n\n\\begin{theorem}\nThe logarithm of the gap probability $\\Pr$ ($T = \\ln\\tau_n^J = \\ln\\Pr + \\ln\\tau_n$) for random matrix unitary ensembles (UE) satisfies a {\\it universal PDE}:\n\n$$\n\\left(T_{\\xi t} T_{\\xi\\xi tt} - T_{\\xi tt}T_{\\xi\\xi t} + 2(T_{\\xi t})^3\\right)^2 = (T_{\\xi})^2\\left(4T_{tt}(T_{\\xi t})^2 + (T_{\\xi tt})^2\\right). \\eqno(\\ref{eq:T})\n$$\n\n\\end{theorem}\n\n\\noindent It would be interesting to find out whether this equation has appeared before in the studies of nonlinear integrable PDE; so far the author is not aware of such an appearance.\n\n\\begin{proof}\n\nIn terms of $G$, (\\ref{eq:55}) takes the form (using the Toda equation (\\ref{eq:52}) to eliminate $U$ and $W$):\n\n\\begin{equation}\nG^2 = 4T_{tt}(T_{\\xi t})^2 + (T_{\\xi tt})^2. \\label{eq:61}\n\\end{equation}\n\n\\par Now turn to the equation (\\ref{eq:57}). One can express the combinations of derivatives of $U$ and $W$ in terms of $T$ and $G$. Using the $\\xi$-derivative of equation (\\ref{eq:52}) and the definition (\\ref{eq:58}), we get\n\n\\begin{equation}\n2WU_{\\xi} = T_{\\xi tt} + G, \\label{eq:63} \n\\end{equation}\n\n\\begin{equation}\n2UW_{\\xi} = T_{\\xi tt} - G. 
\\label{eq:64} \n\\end{equation}\n\n\\noindent Differentiating (\\ref{eq:63}) and (\\ref{eq:64}) w.r.t. $\\xi$ and using (\\ref{eq:55}) gives, respectively, \n\n\\begin{equation}\n2WU_{\\xi\\xi} = 2(T_{\\xi t})^2 + T_{\\xi\\xi tt} + G_{\\xi}, \\label{eq:65} \n\\end{equation}\n\n\\begin{equation}\n2UW_{\\xi\\xi} = 2(T_{\\xi t})^2 + T_{\\xi\\xi tt} - G_{\\xi}. \\label{eq:66}\n\\end{equation}\n\n\\noindent We now multiply (\\ref{eq:64}) by (\\ref{eq:65}) and subtract the product of (\\ref{eq:63}) and (\\ref{eq:66}) to get\n\n$$\n2UW(W_{\\xi}U_{\\xi\\xi} - U_{\\xi}W_{\\xi\\xi}) = T_{\\xi tt}G_{\\xi} - (2(T_{\\xi t})^2 + T_{\\xi\\xi tt})G,\n$$\n\n\\noindent or, using (\\ref{eq:52}) and (\\ref{eq:57}),\n\n\\begin{equation}\nT_{\\xi tt}G_{\\xi} - (2(T_{\\xi t})^2 + T_{\\xi\\xi tt})G = -2T_{tt}\\prt_t(T_{\\xi})^2. \\label{eq:67}\n\\end{equation}\n\n\\noindent Differentiating (\\ref{eq:61}) w.r.t. $\\xi$ one obtains\n\n\\begin{equation}\nGG_{\\xi} = T_{\\xi tt}(2(T_{\\xi t})^2 + T_{\\xi\\xi tt}) + 4T_{tt}T_{\\xi t}T_{\\xi\\xi t}. \\label{eq:68}\n\\end{equation}\n\n\n\\noindent After multiplying (\\ref{eq:67}) by $G$ and applying the expressions (\\ref{eq:61}) for $G^2$ and (\\ref{eq:68}) for $GG_{\\xi}$ on the left-hand side, some of the terms and common factors cancel out, and finally we arrive at\n\n\\begin{equation}\nT_{\\xi}G = T_{\\xi t} T_{\\xi\\xi tt} - T_{\\xi tt}T_{\\xi\\xi t} + 2(T_{\\xi t})^3. \\label{eq:69}\n\\end{equation}\n\nThus, we have two independent expressions for the function $G$ in terms of the derivatives of $T$ -- equations (\\ref{eq:69}) and (\\ref{eq:61}). Substituting $G$ from (\\ref{eq:69}) into (\\ref{eq:61}) we obtain (\\ref{eq:T}). \n\\end{proof}\n\n\\par Equation (\\ref{eq:56}) was not used to obtain (\\ref{eq:T}). It contains the $\\xi$-variable in the coefficient, while (\\ref{eq:T}) has the feature that its coefficients are constant. 
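The elimination step at the end of the proof can be double-checked symbolically. A sketch (assuming sympy; the single-letter names are mine and stand for derivatives of $T$): multiplying (\\ref{eq:67}) by $G$ and substituting (\\ref{eq:61}) for $G^2$ and (\\ref{eq:68}) for $GG_{\\xi}$ leaves exactly the identity (\\ref{eq:69}).

```python
# Check (assuming sympy) of the elimination leading to (69).
# Shorthand: a = T_{xi t}, b = T_{xi tt}, p = T_{xi xi t}, q = T_{xi xi tt},
#            m = T_{tt},   Tx = T_{xi}.
import sympy as sp

a, b, p, q, m, Tx = sp.symbols('a b p q m Tx')

Gsq = 4*m*a**2 + b**2               # (61): G^2
GGx = b*(2*a**2 + q) + 4*m*a*p      # (68): G*G_xi

# (67) multiplied by G: b*(G*G_xi) - (2a^2 + q)*G^2 = -4*m*Tx*a*G,
# and with (69), Tx*G = a*q - b*p + 2*a^3:
lhs = b*GGx - (2*a**2 + q)*Gsq
rhs = -4*m*a*(a*q - b*p + 2*a**3)
assert sp.expand(lhs - rhs) == 0
print("(61), (67) and (68) together reproduce (69)")
```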
It turns out that the additional information from (\\ref{eq:56}) can be used to obtain another universal PDE, this time for the logarithm of the ratio $T_+ \\equiv \\ln(\\tau_{n+1}^J\/\\tau_n^J)$ (now `prime' stands for the derivative w.r.t. $\\xi$):\n\n\\begin{theorem}\n\nThe logarithm of the ratio of matrix integrals $T_+ \\equiv \\ln(\\tau_{n+1}^J\/\\tau_n^J)$ for random matrix unitary ensembles (UE) satisfies a {\\it universal PDE}:\n\n$$\n(T_+'')_{tt} - \\frac{(T_+')_{tt}T_+''}{T_+'} - (T_+')^2(T_+)_{tt} - \\frac{(T_+')_{t}(T_+'')_t}{T_+'} + \\frac{(T_+')_{t}^2T_+''}{(T_+')^2} + (\\xi - (T_+)_t)T_+'(2(T_+')_t - 1) = 0. \\eqno(\\ref{eq:T+})\n$$\n\n\\end{theorem}\n\n\\begin{proof}\nIt is a tedious calculation. First one substitutes $W$ expressed from the Toda equation (\\ref{eq:52}) and $W_{\\xi}$ from (\\ref{eq:55}) into the $\\xi$-derivative of (\\ref{eq:52}). This gives\n\n$$\nT_{\\xi tt} = T_{tt}T_+' - \\frac{T_{\\xi t}^2}{T_+'}.\n$$\n\n\\noindent Besides, we use (\\ref{eq:53}), writing it as\n\n$$\n2T_{\\xi t} = -(T_+')_t + (\\xi - (T_+)_t)T_+',\n$$\n\n\\noindent and its $t$-derivative,\n\n$$\n2T_{\\xi tt} = -(T_+')_{tt} + (\\xi - (T_+)_t)(T_+')_t - T_+'(T_+)_{tt}.\n$$\n\n\\noindent We substitute the last two equations into the previous one, divide it by $T_+'\/4$ and get:\n\n$$\n4T_{tt} = -2\\frac{(T_+')_{tt}}{T_+'} - 2(T_+)_{tt} + \\left(\\frac{(T_+')_t}{T_+'}\\right)^2 + (\\xi - (T_+)_t)^2.\n$$\n\n\\noindent We differentiate the last equation once again w.r.t. $\\xi$ and compare the two expressions for $T_{\\xi tt}$ in terms of derivatives of $T_+$. This gives (\\ref{eq:T+}).\n\\end{proof} \n\n\\par In addition to the main results -- the universal equations (\\ref{eq:57})--(\\ref{eq:56}) and especially (\\ref{eq:T}) and (\\ref{eq:T+}) -- it seems worthwhile to reformulate the preceding findings of this section. In terms of $G$ and $H$ and the main function $T$, (\\ref{eq:56}) acquires a nice linear form:\n\n\\begin{equation}\nG_t + H_{\\xi} = 2\\xi T_{\\xi tt}. 
\\label{eq:60}\n\\end{equation}\n\n\\noindent Besides, if we use the three equations (\\ref{eq:52})--(\\ref{eq:54}), eliminating the mixed derivatives $U_{\\xi t}$ and $W_{\\xi t}$ by substituting their expressions from (\\ref{eq:53}) and (\\ref{eq:54}), respectively, into (\\ref{eq:52}), differentiated twice, once w.r.t. $\\xi$ and once w.r.t. $t$, we get (using also $v_{\\xi} = -T_{\\xi t}$)\n\n$$\nT_{\\xi ttt} = \\xi G + U_{\\xi}W_t + W_{\\xi}U_t - 4T_{tt}T_{\\xi t}.\n$$\n\n\\noindent This in turn can be rewritten as an expression of $H$ in terms of $G$ and derivatives of $T$:\n\n\\begin{equation}\nH = 2\\xi T_{tt} - \\frac{2T_{tt}T_{\\xi ttt} - T_{ttt}T_{\\xi tt} - 8(T_{tt})^2T_{\\xi t}}{G}. \\label{eq:62}\n\\end{equation}\n\n\\noindent We thus have general universal expressions for logarithmic derivatives of Toda $\\tau$-ratios $\\tau_{n \\pm 1}\/\\tau_n$ in terms of derivatives of the main ($n$-level) $\\tau$-function (due to explicit expressions (\\ref{eq:61}), (\\ref{eq:69}) for $G$ and (\\ref{eq:62}) for $H$):\n\n$$\n2\\prt_{\\xi}\\ln U = \\frac{T_{\\xi tt} + G}{T_{tt}}, \\hspace{2cm} 2\\prt_{\\xi}\\ln W = \\frac{T_{\\xi tt} - G}{T_{tt}},\n$$\n\n$$\n2\\prt_{t}\\ln U = \\frac{T_{ttt} + H}{T_{tt}}, \\hspace{2cm} 2\\prt_{t}\\ln W = \\frac{T_{ttt} - H}{T_{tt}}.\n$$\n\nTherefore we can write out two pairs of general recursion relations for the derivatives of UE $\\tau$-functions (restoring the subscript $n$ for the matrix size)\n\n$$\n2\\prt_{\\xi}T_{n+1} = 2\\prt_{\\xi}T_n + \\prt_{\\xi}\\ln \\prt^2_{tt}T_n + \\frac{G_n}{\\prt^2_{tt}T_n},\n$$\n\n$$\n2\\prt_{\\xi}T_n = 2\\prt_{\\xi}T_{n+1} + \\prt_{\\xi}\\ln \\prt^2_{tt}T_{n+1} - \\frac{G_{n+1}}{\\prt^2_{tt}T_{n+1}},\n$$\n\n\\noindent and\n\n$$\n2\\prt_{t}T_{n+1} = 2\\prt_{t}T_n + \\prt_{t}\\ln \\prt^2_{tt}T_n + \\frac{H_n}{\\prt^2_{tt}T_n},\n$$\n\n$$\n2\\prt_{t}T_n = 2\\prt_{t}T_{n+1} + \\prt_{t}\\ln \\prt^2_{tt}T_{n+1} - \\frac{H_{n+1}}{\\prt^2_{tt}T_{n+1}},\n$$\n\n\\noindent where $G_n \\equiv G$, $H_n 
\\equiv H$ at level (matrix size) $n$, and $G_{n+1}$, $H_{n+1}$ are given by the same formulas (\\ref{eq:69}), (\\ref{eq:62}), only in terms of $\\ln\\tau_{n+1}^J$ rather than $\\ln\\tau_n^J$. The first equations in the pairs follow from the previous formulas for $U$, while the second ones -- from the companion formulas for $W$ with the shift $n-1 \\to n$.\n\n\\section*{\\normalsize\\bf VI. ISOMONODROMIC DEFORMATIONS, SCHLESINGER EQUATIONS AND TODA-AKNS STRUCTURE} \n\nThe direct connection of the TW equations for random matrix UE with general Schlesinger equations -- the compatibility conditions for systems of linear ordinary differential equations (ODE) related to their isomonodromic deformations -- was first pointed out in Ref.~\\cite{HarTW}. There the matrices of the quadratic combinations of TW variables $Q(a_k)$, $P(a_k)$ ($a_k$ is the $k$-th endpoint of the spectrum) were introduced,\n\n\\begin{equation}\nA_k = (-1)^{k-1}\\left(\\begin{array}{cc} Q(a_k)P(a_k) & -Q(a_k)^2 \\\\ P(a_k)^2 & -Q(a_k)P(a_k) \\end{array}\\right), \\label{eq:72}\n\\end{equation}\n\n\\noindent and shown to satisfy the Schlesinger equations. A little later an important work by Palmer~\\cite{Pal} appeared, where the isomonodromic deformations for UE were studied in detail. Palmer considered the corresponding ``Cauchy-Riemann operator\" $\\prt_z$ problem similar to a Riemann-Hilbert problem (RHP) (for different or more general considerations of RHP in the context of matrix UE see e.g. the excellent book~\\cite{Dei} as well as the original papers~\\cite{FIK1, FIK2, DeiItsZhu, BorDei, DKLVZ}). 
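A small side observation, not needed for what follows but easy to confirm (a sketch assuming sympy): each matrix (\\ref{eq:72}) is traceless with vanishing determinant, hence nilpotent, $A_k^2 = 0$.

```python
# Check (assuming sympy): A_k from (72) is traceless, rank one and nilpotent.
import sympy as sp

Q, P = sp.symbols('Q P')
A = sp.Matrix([[Q*P, -Q**2], [P**2, -Q*P]])  # (72), up to the sign (-1)^(k-1)

assert A.trace() == 0
assert A.det() == 0
assert A*A == sp.zeros(2, 2)
print("A_k is traceless and nilpotent")
```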
He studied the deformation of a fundamental matrix $F(z)$,\n\n$$\nF(z) = \\left(\\begin{array}{cc} \\varphi & \\tilde\\varphi \\\\ \\psi & \\tilde\\psi \\end{array}\\right),\n$$\n\n\\noindent of solutions to a $2\\times 2$ linear system of ODE -- called ``differentiation formulas\" in Ref.~\\cite{TW1} (for another, more recent, study of general matrix UE based on deformations of such differential systems see Ref.~\\cite{BeEyHa-06}):\n\n$$\n\\frac{d}{dz}\\left(\\begin{array}{cc} \\varphi \\\\ \\psi \\end{array}\\right) = M(z)\\left(\\begin{array}{cc} \\varphi \\\\ \\psi \\end{array}\\right),\n$$\n\n\\begin{equation}\nM(z) = \\frac{1}{m(z)}\\left(\\begin{array}{cc} A(z) & B(z) \\\\ C(z) & -A(z) \\end{array}\\right), \\label{eq:73}\n\\end{equation}\n\n\\noindent where $m(z)$ is a polynomial in $z$, and the functions $A(z)$, $B(z)$ and $C(z)$ are determined~\\cite{Bauldry, TW1} by the orthogonal functions $\\varphi$, $\\psi$ (see (\\ref{eq:2})) and the corresponding potential $V(x)$:\n\n$$\nA(z) = -\\int \\varphi(y)\\psi(y)\\cdot \\frac{V'(z)-V'(y)}{z-y}dy - \\frac{V'(z)}{2},\n$$\n\n$$\nB(z) = \\int \\varphi^2(y)\\cdot \\frac{V'(z)-V'(y)}{z-y}dy,\n$$\n\n$$\nC(z) = \\int \\psi^2(y)\\cdot \\frac{V'(z)-V'(y)}{z-y}dy.\n$$\n\nThe matrix $F(z)$, whose determinant must be a constant, is normalized by the condition $\\det F(z) \\equiv 1$. The deformed matrix $Y(z; a)$ is uniquely determined by the analyticity requirement outside the real axis, the asymptotics near infinity, $Y(z; a) \\sim F(z), z\\to\\infty$, and the matching conditions on a finite number of cuts $[a_j, a_{j+1}]$ on $\\Re$:\n\n\\begin{equation}\nY_+(z) = Y_-(z)\\cdot(I - \\chi_{[a_j,a_{j+1}]} + \\chi_{[a_j,a_{j+1}]}\\Theta),\\ \\ \\ \\Theta = \\left(\\begin{array}{cc} 1 & 2\\pi i\\lambda \\\\ 0 & 1 \\end{array}\\right), \\label{eq:74}\n\\end{equation}\n\n\\noindent where $Y_+(z)$ and $Y_-(z)$ are analytic in the upper and lower half-plane, respectively, and define $Y(z)$ in their domains of analyticity. 
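A one-line check of the compatibility of (\\ref{eq:74}) with the normalization $\\det F \\equiv 1$ (a sketch assuming sympy; the indicator $\\chi_{[a_j,a_{j+1}]}$ is modelled by a plain symbol $\\chi$): the jump matrix is unimodular, so $\\det Y$ is continuous across the cuts.

```python
# Check (assuming sympy) that the jump matrix in (74) has determinant 1.
import sympy as sp

lam, chi = sp.symbols('lambda chi')
Theta = sp.Matrix([[1, 2*sp.pi*sp.I*lam], [0, 1]])
jump = (1 - chi)*sp.eye(2) + chi*Theta   # I - chi + chi*Theta

assert sp.simplify(jump.det()) == 1
print("jump matrix is unimodular")
```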
As shown in Ref.~\\cite{Pal}, the solution to the above problem can be expressed by the formula\n\n\\begin{equation}\nY = \\left(\\begin{array}{cc} Q & \\tilde\\varphi - \\lambda\\tilde KQ \\\\ P & \\tilde\\psi - \\lambda\\tilde KP \\end{array}\\right), \\label{eq:75}\n\\end{equation}\n\n\\noindent where $Q$ and $P$ are the functions in (\\ref{eq:4}), evaluated for the rescaled operator $\\lambda K^J$, i.e. $Q = (I-\\lambda K^J)^{-1}\\varphi$, $P = (I-\\lambda K^J)^{-1}\\psi$, and $\\tilde K$ is the operator with {\\it singular kernel} (i.e. turning to $\\infty$ as $x \\to y$) \n\n$$\n\\tilde K(x,y) = \\frac{\\tilde\\psi(x)\\varphi(y) - \\tilde\\varphi(x)\\psi(y)}{x-y}\\chi_{J}(y).\n$$\n\n\\noindent The normalization $\\det F = 1$ implies, by the analyticity of $\\det Y(z)$ and the asymptotic matching at infinity, the corresponding condition for $Y$, $\\det Y(z; a) \\equiv 1$. This leads to a nontrivial relation for the elements of $Y$:\n\n\\begin{equation}\n\\det Y = \\tilde\\psi Q - \\tilde\\varphi P + \\lambda(P\\cdot\\tilde K Q - Q\\cdot\\tilde K P) = 1. \\label{eq:76}\n\\end{equation}\n\nThen, following Palmer, introduce a matrix function $\\Delta(z)$ such that, by definition,\n\n\\begin{equation}\nF(z)Y_+^{-1}(z) = I + \\Delta(z). \\label{eq:77}\n\\end{equation}\n \n\\noindent Differentiation of (\\ref{eq:77}) w.r.t. $z$ combined again with analyticity considerations then yields the following ODE satisfied by $Y$:\n\n\\begin{equation}\n\\frac{dY}{dz}Y^{-1} = (I+\\Delta)^{-1}M(I+\\Delta) - (I+\\Delta)^{-1}\\frac{d\\Delta}{dz}, \\label{eq:78}\n\\end{equation}\n\n\\noindent where $M = M(z)$ is given by (\\ref{eq:73}).\nThe matrix $\\Delta$ can be explicitly expressed~\\cite{IIKS, Pal} in terms of the related functions $\\varphi$, $\\psi$, $Q$ and $P$. 
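For the reader's convenience, here is the short computation behind (\\ref{eq:78}) (a sketch; it uses only the definition (\\ref{eq:77}), $F' = MF$, and $(Y^{-1})' = -Y^{-1}Y'Y^{-1}$):

```latex
% Differentiate F Y^{-1} = I + \Delta w.r.t. z:
M F Y^{-1} - F Y^{-1}\,\frac{dY}{dz}\,Y^{-1} = \frac{d\Delta}{dz}
\;\Longrightarrow\;
M(I+\Delta) - (I+\Delta)\,\frac{dY}{dz}\,Y^{-1} = \frac{d\Delta}{dz},
% and solving for (dY/dz) Y^{-1} gives (78).
```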
It follows from (\\ref{eq:75}), (\\ref{eq:76}) and the definitions (\\ref{eq:77}) and (\\ref{eq:74}) that\n\n\\begin{equation}\n\\Delta(z; J) = \\intop \\frac{\\lambda\\chi_J(y)}{z-y}dy\\left(\\begin{array}{cc} -\\varphi(y)P(y) & \\varphi(y)Q(y) \\\\ \\\\ -\\psi(y)P(y) & \\psi(y)Q(y) \\end{array}\\right). \\label{eq:79}\n\\end{equation}\n\nConsideration of the monodromies around the ends of the branch cuts $a_j$ for $Y$ and the singular point at $\\infty$ leads to the conclusion~\\cite{Pal} that there exists a matrix function $\\Pi(z; J)$, analytic everywhere in the $z$-plane except for the point at $\\infty$ and possible zeros of $m(z)$ (see (\\ref{eq:73}), (\\ref{eq:78})), such that\n\n\\begin{equation}\n\\frac{dY}{dz}Y^{-1} = \\Pi(z; J) + \\lambda \\sum_k \\frac{A_k}{z-a_k}, \\label{eq:80}\n\\end{equation}\n\n\\noindent with the matrices $A_k$ given by (\\ref{eq:72}). In the simpler case, when $m(z) \\equiv 1$, the function $\\Pi(z)$ is polynomial in $z$. One can see from the general equations (\\ref{eq:78}) and (\\ref{eq:80}) that, for the principal part at $\\infty$,\n\n\\begin{equation}\n\\text{p. p. }(Y'Y^{-1}) = \\text{p. p. }(\\Pi) = \\text{p. p. }((I+\\Delta)^{-1}M(I+\\Delta)). \\label{eq:81} \n\\end{equation}\n\n\\noindent Also, in general $m(z)\\Pi(z)$ is a matrix polynomial having no singularities besides $\\infty$, so that\n\n\\begin{equation}\n\\text{p. p. }(m(z)\\Pi(z)) = m(z)\\Pi(z). \\label{eq:82}\n\\end{equation}\n\n\\noindent This in principle determines the matrix $\\Pi$ in general from $M$ and $\\Delta$ given by (\\ref{eq:73}) and (\\ref{eq:79}).\n\\par The equations for the derivatives of $Y$ w.r.t. 
the endpoints $a_j$ follow also from (\\ref{eq:77}):\n\n$$\n-FY^{-1}\\prt_aYY^{-1} = \\prt_a\\Delta(z; J)\n$$\n\n\\noindent or\n\n$$\n\\prt_aYY^{-1} = - (I+\\Delta)^{-1}\\prt_a\\Delta(z; J) = O(z^{-1}) \\ \\text{ as } z\\to \\infty,\n$$\n\n\\noindent due to the asymptotic expansion as $z \\to \\infty$,\n\n\\begin{equation}\n\\Delta(z) = \\frac{\\Delta_1}{z} + \\frac{\\Delta_2}{z^2} + \\dots \\label{eq:83}\n\\end{equation}\n\n\\noindent The last equations determine that near the endpoints $a_j$\n\n$$\n\\text{p. p. }(\\prt_aYY^{-1}) = -\\frac{\\lambda A_j}{z-a_j},\n$$\n\n\\noindent then by analyticity of $\\prt_aYY^{-1}$ in $z$ except for the poles at $a_j$, one has\n\n\\begin{equation}\n\\prt_aYY^{-1} = -\\frac{\\lambda A_j}{z-a_j}. \\label{eq:84}\n\\end{equation}\n\n\\noindent Calculating the mixed derivatives $\\prt^2_{za_j}Y$ and $\\prt^2_{a_ka_j}Y$ from equations (\\ref{eq:80}) and (\\ref{eq:84}) for different $j$ and equating the {\\it singular parts} when considering various limits $z \\to a_j$, gives Schlesinger equations modified~\\cite{Pal} by $\\Pi$, see also Ref.~\\cite{BorDei}:\n\n\\begin{equation}\n\\frac{\\prt A_j}{\\prt a_j} = [\\Pi(a_j; J), A_j] - \\lambda\\sum_{k\\ne j} \\frac{[A_j, A_k]}{a_j-a_k}, \\label{eq:85}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt A_j}{\\prt a_k} = \\lambda\\frac{[A_j, A_k]}{a_j-a_k} = \\frac{\\prt A_k}{\\prt a_j}. \\label{eq:86} \n\\end{equation}\n\n\\noindent Another nonstandard Schlesinger equation for $\\Pi$ arises from the same calculation~\\cite{Pal}:\n\n\\begin{equation}\n\\frac{\\prt \\Pi(z; J)}{\\prt a_j} = \\lambda\\frac{[\\Pi(z; J) - \\Pi(a_j; J), A_j]}{z-a_j}. 
\\label{eq:87}\n\\end{equation}\n\n\\noindent If we consider the limit $z \\to \\infty$ in the last equation as Palmer did and write $M(z) = M_0(z)(I + o(1))$ near infinity, then, using (\\ref{eq:81}) and the expansion (\\ref{eq:83}), we get\n\n$$\n[M_0, \\frac{\\prt\\Delta_1}{\\prt a_j} - \\lambda A_j] = 0.\n$$\n\n\\noindent Since the matrix $M_0$ is arbitrary, a completely universal equation follows:\n\n\\begin{equation}\n\\frac{\\prt\\Delta_1}{\\prt a_j} = \\lambda A_j, \\label{eq:88}\n\\end{equation}\n\n\\noindent with $\\Delta_1$ determined by the expansion of the formula (\\ref{eq:79}):\n\n\\begin{equation}\n\\Delta_1 = \\lambda\\left(\\begin{array}{cc} -v & u \\\\ -w & v \\end{array}\\right). \\label{eq:89}\n\\end{equation}\n\n\\noindent We see that the parameter $\\lambda$ drops out of (\\ref{eq:88}), and we obtain exactly all three universal TW equations (\\ref{eq:28}), (\\ref{eq:31}) and (\\ref{eq:33}) from Ref.~\\cite{TW1} for the ``auxiliary\" variables $u$, $v$ and $w$, which are now given special importance and meaning in terms of $\\tau$-functions, see (\\ref{eq:30}), (\\ref{eq:32}) and (\\ref{eq:41a}). There is also a general expression for the logarithmic derivatives of $\\tau_n^J$ w.r.t. the endpoints in terms of $A_j$ and $\\Pi$ in the isomonodromic approach (e.g. in Refs.~\\cite{Pal, BorDei}):\n\n\\begin{equation}\n\\frac{\\prt \\ln\\tau_n^J}{\\prt a_j} = \\text{Tr} \\left[\\Pi(a_j; J)A_j + \\sum_{k\\neq j}\\frac{A_kA_j}{a_j-a_k}\\right], \\label{eq:90}\n\\end{equation}\n\n\\noindent which is a consequence of the existence of the fundamental closed form $\\Omega$, first found for the most general case of systems of linear ODE with rational coefficients in Ref.~\\cite{JMU}:\n\n\\begin{equation}\n\\Omega = d_a \\ln\\tau^J = d_a \\ln\\det(I-K^J) = \\sum_j \\text{Tr} \\left[\\Pi(a_j; J)A_j + \\sum_{k\\neq j}\\frac{A_kA_j}{a_j-a_k}\\right]da_j. 
\\label{eq:91}\n\\end{equation}\n\n\\par If we now sum up the equations (\\ref{eq:85}) and (\\ref{eq:86}) over {\\it all} endpoints $a_j$ {\\it and} over {\\it all} matrices $A_j$ and use the equations (\\ref{eq:88}), we arrive at a system of three second-order PDE for the three functions $u$, $v$ and $w$ (the two equations for the diagonal components are the same due to the tracelessness of the matrices involved):\n\n\\begin{equation}\n\\B_{-1}^2 u = -\\sum_j [\\Pi(a_j; J), A_j]_{12}, \\label{eq:92}\n\\end{equation}\n\n\\begin{equation}\n\\B_{-1}^2 w = \\sum_j [\\Pi(a_j; J), A_j]_{21}, \\label{eq:93}\n\\end{equation}\n\n\\begin{equation}\n\\B_{-1}^2 v = -\\sum_j [\\Pi(a_j; J), A_j]_{11}. \\label{eq:94}\n\\end{equation}\n\n\\noindent Rewritten in terms of $\\tau$-functions with the help of the universal relations (\\ref{eq:30}), (\\ref{eq:32}), and (\\ref{eq:41}), this becomes the universal analogue of the system of ASvM type obtained in Ref.~\\cite{IR1} for the Gaussian case. Note, however, that the equation (\\ref{eq:94}) becomes in fact a {\\it third}-order PDE after using (\\ref{eq:41}). For the Gaussian case, though, it turns out to be the total derivative of the first integral obtained in Ref.~\\cite{TW1}. And the first integral itself is identified by (\\ref{eq:34}) with our ``boundary-Toda\" equation~\\cite{IR1}.\n\\par Let us now sum up only the equations (\\ref{eq:85}) and (\\ref{eq:86}) with derivatives w.r.t. a {\\it single particular endpoint} $a_j$, using again (\\ref{eq:88}) and (\\ref{eq:89}) so that \n\n\\begin{equation}\n\\sum_k A_k = \\frac{1}{\\lambda}\\sum_k \\frac{\\prt\\Delta_1}{\\prt a_k} = \\B_{-1}\\left(\\begin{array}{cc} -v & u \\\\ -w & v \\end{array}\\right). \\label{eq:95}\n\\end{equation}\n\n\\noindent Again only terms involving $\\Pi$ remain on the right-hand side. 
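The parenthetical remark about the diagonal components can be checked in one line (a sketch assuming sympy): the diagonal entries of any commutator sum to zero, so, since the diagonal of $\\Delta_1$ is $(-v, v)$, the $11$- and $22$-components of the summed Schlesinger equations give one and the same equation (\\ref{eq:94}).

```python
# Check (assuming sympy): the diagonal entries of a commutator [Pi, A]
# are opposite, so the 11- and 22-components of (92)-(94) coincide.
import sympy as sp

Pi = sp.Matrix(2, 2, sp.symbols('p0:4'))
A = sp.Matrix(2, 2, sp.symbols('a0:4'))
C = Pi*A - A*Pi

assert sp.expand(C[0, 0] + C[1, 1]) == 0
print("diagonal components of [Pi, A] are opposite")
```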
Expanding these commutators, we write down \n\n\\begin{prop}\nThe three independent components of the Schlesinger equations associated with UE, summed over all $a_k$ for every matrix $A_j$, have the form of three-term PDE, \n\n\\begin{equation}\n\\frac{\\prt}{\\prt a_j} \\B_{-1} u = (\\Pi_0)_j\\frac{\\prt u}{\\prt a_j} + 2(\\Pi_+)_j\\frac{\\prt v}{\\prt a_j}, \\label{eq:96}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt}{\\prt a_j} \\B_{-1} w = -(\\Pi_0)_j\\frac{\\prt w}{\\prt a_j} + 2(\\Pi_-)_j\\frac{\\prt v}{\\prt a_j}, \\label{eq:97}\n\\end{equation}\n\n\\begin{equation}\n\\frac{\\prt}{\\prt a_j} \\B_{-1} v = (\\Pi_-)_j\\frac{\\prt u}{\\prt a_j} + (\\Pi_+)_j\\frac{\\prt w}{\\prt a_j}, \\label{eq:98}\n\\end{equation}\n\n\\noindent where we denoted $\\Pi_j = \\Pi(a_j; J)$ and the matrix elements of $\\Pi$ as $\\Pi_+ = \\Pi_{12}$, $\\Pi_- = \\Pi_{21}$, $\\Pi_0 = \\Pi_{11} - \\Pi_{22} = 2\\Pi_{11}$.\n\n\\end{prop}\n\n\\noindent The last equality is again due to $\\Pi$ being traceless as a consequence of the tracelessness of the original matrix $M(z)$ of the differential relations. \n\\par One can see a striking similarity between the structure of the last equations and that of equations (\\ref{eq:52})--(\\ref{eq:54}) from the previous section. Only the current equation (\\ref{eq:98}) looks like a derivative of the Toda equation (\\ref{eq:52}), which it indeed is in the special Gaussian case. The Gaussian case is now seen as very special in the respect that there the general structural correspondence of the two systems holds literally term by term. This is not true for the other UE, but still the Toda-AKNS-like structure and three-term character are present in both forms {\\it as if such exact correspondence existed at some deeper level}. We believe this must actually be true, although a real explanation is still lacking.\n\\par Let us now consider further the simplest (though important) case of only one endpoint $\\xi \\equiv a_1$. 
Then $\\B_{-1} = \\prt_{\\xi}$ and we rewrite the system (\\ref{eq:98}), (\\ref{eq:96}), (\\ref{eq:97}), applying also the universal expressions of $u$ and $w$ in terms of $\\tau$-ratios. Then, if we introduce\n\n$$\n\\tilde\\Pi_+ = \\frac{\\tau_{n+1}}{\\tau_n b_{n-1}}\\Pi_+, \\hspace{2cm} \\tilde\\Pi_- = \\frac{\\tau_{n-1}}{\\tau_n b_{n-1}}\\Pi_-, \n$$\n\n\\noindent our system reads:\n\n\\begin{equation}\nU_{\\xi\\xi} = \\Pi_0U_{\\xi} - 2\\tilde\\Pi_+v_{\\xi}, \\label{eq:100}\n\\end{equation}\n\n\\begin{equation}\nW_{\\xi\\xi} = -\\Pi_0W_{\\xi} + 2\\tilde\\Pi_-v_{\\xi}, \\label{eq:101}\n\\end{equation}\n\n\\begin{equation}\nv_{\\xi\\xi} = -\\tilde\\Pi_-U_{\\xi} + \\tilde\\Pi_+W_{\\xi}. \\label{eq:102}\n\\end{equation}\n\n\\noindent We can add to this system equation (\\ref{eq:90}), which for the single-endpoint case can be written as\n\n$$\nT_{\\xi} = -\\Pi_0v_{\\xi} + \\Pi_-u_{\\xi} - \\Pi_+w_{\\xi} = -\\Pi_0v_{\\xi} - \\tilde\\Pi_-U_{\\xi} - \\tilde\\Pi_+W_{\\xi}. \\eqno(\\ref{eq:90}a)\n$$\n\n\\noindent Here so far there are only $\\xi$-derivatives and no $t$-derivatives; however, the elements of $\\Pi$, although known in principle from Palmer's considerations (see (\\ref{eq:81}) and (\\ref{eq:82})), still do not have nice explicit expressions in general. Therefore, in general we obtain only some {\\it universal constraints} for the non-universal $\\Pi$-quantities. The system (\\ref{eq:100})--(\\ref{eq:102}), if treated as a linear algebraic system w.r.t. $\\Pi_0$, $\\tilde\\Pi_+$ and $\\tilde\\Pi_-$, is degenerate, and equation (\\ref{eq:55}), $(v_{\\xi})^2 = -U_{\\xi}W_{\\xi}$, is also a consequence of (\\ref{eq:100})--(\\ref{eq:102}), as it was for (\\ref{eq:53}), (\\ref{eq:54}). 
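The degeneracy can be verified directly (a sketch assuming sympy; the shorthand symbols are mine and stand for the $\\xi$-derivatives of $U$, $W$, $v$ and for $\\Pi_0$, $\\tilde\\Pi_+$, $\\tilde\\Pi_-$): the combination $W_{\\xi}\\cdot(\\ref{eq:100}) + U_{\\xi}\\cdot(\\ref{eq:101}) + 2v_{\\xi}\\cdot(\\ref{eq:102})$ eliminates all $\\Pi$-quantities and yields $\\prt_{\\xi}\\left((v_{\\xi})^2 + U_{\\xi}W_{\\xi}\\right) = 0$, which is (\\ref{eq:55}) up to an integration constant.

```python
# Degeneracy check (assuming sympy) for the system (100)-(102).
import sympy as sp

Ux, Wx, vx, Uxx, Wxx, vxx, P0, Pp, Pm = sp.symbols(
    'Ux Wx vx Uxx Wxx vxx P0 Pp Pm')

eq100 = P0*Ux - 2*Pp*vx - Uxx    # (100), moved to one side
eq101 = -P0*Wx + 2*Pm*vx - Wxx   # (101)
eq102 = -Pm*Ux + Pp*Wx - vxx     # (102)

combo = sp.expand(Wx*eq100 + Ux*eq101 + 2*vx*eq102)
# All Pi-quantities cancel; what is left is -d/dxi[(v_xi)^2 + U_xi*W_xi]:
assert combo == sp.expand(-(Uxx*Wx + Ux*Wxx + 2*vx*vxx))
print("(100)-(102) imply d/dxi[(v_xi)^2 + U_xi*W_xi] = 0")
```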
The other independent combination of (\\ref{eq:100}) and (\\ref{eq:101}) together with (\\ref{eq:90}a) brings back our universal equation (\\ref{eq:51}):\n\n$$\nW_{\\xi}U_{\\xi\\xi} - U_{\\xi}W_{\\xi\\xi} = 2v_{\\xi}(-\\Pi_0v_{\\xi} - \\tilde\\Pi_-U_{\\xi} - \\tilde\\Pi_+W_{\\xi}) = 2v_{\\xi}T_{\\xi}. \n$$\n\n\\noindent Thus one can get only two rather than three universal constraints for the $\\Pi$-quantities, i.e. their expressions in terms of the derivatives of $T \\equiv \\ln\\tau_n^J$. They are in fact equation (\\ref{eq:90}a) and also (\\ref{eq:102}) itself.\n\n\n \n\\section*{\\normalsize\\bf VII. UNIVERSAL PDE (\\ref{eq:T}) AND EXAMPLES} \n\n{\\it Example 1: Gaussian ensemble.} \\\\\nThe Virasoro constraints~\\cite{ASvM},~\\cite{AvM7} give in this case (we consider the one-interval situation -- the distribution of the largest eigenvalue):\n\n$$\n\\prt_t T = -\\frac{1}{2}\\prt_{\\xi} T, \\ \\ \\ \\prt_{tt}^2 T = \\frac{1}{4}\\prt_{\\xi\\xi}^2 T + \\frac{n}{2}.\n$$\n\n\\noindent We substitute these expressions for the derivatives of $\\ln\\tau_n^J$ w.r.t. the first Toda time into our universal PDE (\\ref{eq:T}) and, denoting $T' = \\prt_{\\xi}T$, get a fourth-order ODE:\n\n\\begin{equation}\n\\left(T''T'''' - (T''')^2 + 2(T'')^3\\right)^2 = 4(T')^2\\left((T''')^2 + 4(T'')^2(T'' + 2n)\\right). \\label{eq:105}\n\\end{equation}\n\n\\noindent This is actually a third-order ODE for $r \\equiv T'$:\n\n\\begin{equation}\n\\left(r'r''' - (r'')^2 + 2(r')^3\\right)^2 = 4r^2\\left((r'')^2 + 4(r')^2(r' + 2n)\\right). 
\\label{eq:106}\n\\end{equation}\n\n\\noindent This third-order ODE is a nonlinear equation with coefficients not involving the independent variable $\\xi$, unlike the third-order equation for the Gaussian ensemble~\\cite{TW1}:\n\n\\begin{equation}\nr''' + 6(r')^2 + 8nr' = 4\\xi(\\xi r' - r), \\label{eq:107}\n\\end{equation}\n\n\\noindent which can be integrated once to give a form of the Painlev\\'e IV equation~\\cite{TW1}:\n\n\\begin{equation}\n(r'')^2 + 4(r')^2(r'+2n) = 4(\\xi r'-r)^2. \\label{eq:108}\n\\end{equation}\n\n\\noindent One can verify that our equation (\\ref{eq:106}) is in fact a consequence of (\\ref{eq:108}). The Painlev\\'e equation (\\ref{eq:108}) itself can also be obtained from (\\ref{eq:106}). To this end, define a function $g(\\xi)$ such that the right-hand side of (\\ref{eq:106}) equals $4r^2g^2$ and\n\n\\begin{equation}\nr'r''' - (r'')^2 + 2(r')^3 = 2rg. \\label{eq:109}\n\\end{equation}\n\n\\noindent Differentiating the expression on the right-hand side of (\\ref{eq:106}) defining $g^2(\\xi)$, we also have\n\n\\begin{equation}\n(g^2)' = 2gg' = 2r''r''' + 8r'r''(r'+2n) + 4(r')^2r'' = 2r''(r''' + 6(r')^2 + 8nr'). \\label{eq:110}\n\\end{equation}\n\n\\noindent Eliminating the third derivative $r'''$ from (\\ref{eq:109}) and (\\ref{eq:110}) gives\n\n$$\nr'gg' = r''g^2 + 2rr''g\n$$\n\n\\noindent or\n\n\\begin{equation}\nr'g' - r''g = 2rr''. 
\\label{eq:111}\n\\end{equation}\n\n\\noindent Substituting $g = r'\\Phi$ into (\\ref{eq:111}) and solving the ODE for $\\Phi$, then using the boundary condition $r \\to 0$ as $\\xi \\to \\infty$, one gets\n\n$$\ng = 2(\\xi r' - r),\n$$\n\n\\noindent and, finally, using again the definition of $g^2$ from the right-hand side of (\\ref{eq:106}), one arrives at (\\ref{eq:108}).\n\n\\medskip\n\n\\noindent {\\it Example 2: Laguerre ensemble.} \\\\\nHere the potential is $V(x) = x - \\alpha\\ln x$, and the relevant Virasoro constraints~\\cite{ASvM},~\\cite{AvM7} give expressions of Toda-time derivatives in terms of the endpoint ones:\n\n$$\n\\prt_t T = -\\xi T' + n(n+\\alpha), \\ \\ \\ \\prt_{tt} T = \\xi^2T'' + n(n+\\alpha),\n$$\n\n\\noindent so, for the function $r = T'$, one finds\n\n$$\nr_t = T_{\\xi t} = -\\xi r' - r, \\ \\ \\ r_{tt} = T_{\\xi tt} = \\xi^2r'' + 2\\xi r',\n$$\n\n$$\nr_{\\xi t} = T_{\\xi\\xi t} = -\\xi r'' - 2r', \\ \\ \\ r_{\\xi tt} = T_{\\xi\\xi tt} = \\xi^2r''' + 4\\xi r'' + 2r'.\n$$\n\n\\noindent For the Laguerre case it is convenient and standard to introduce the function $\\sigma = \\xi r$ instead of $r$, then one can see that\n\n$$\nr_t = -\\sigma', \\ \\ \\ r_{tt} = \\xi\\sigma'', \\ \\ \\ r_{\\xi t} = -\\sigma'', \\ \\ \\ r_{\\xi tt} = \\xi\\sigma''' + \\sigma'',\n$$\n\n\\noindent and the universal PDE (\\ref{eq:T}) reduces to the following ODE for $\\sigma$:\n\n\\begin{equation}\n\\xi^2\\left(-\\sigma'(\\xi\\sigma''' + \\sigma'') + \\xi(\\sigma'')^2 - 2(\\sigma')^3\\right)^2 = \\sigma^2\\left(4(\\xi\\sigma'-\\sigma+n(n+\\alpha))(\\sigma')^2 + \\xi^2(\\sigma'')^2\\right). \\label{eq:112}\n\\end{equation}\n\n\\noindent Again let us introduce function $g(\\xi)$ such that r.h.s. of (\\ref{eq:112}) $= \\sigma^2g^2$ and\n\n\\begin{equation}\n\\xi\\left(\\sigma'(\\xi\\sigma''' + \\sigma'') - \\xi(\\sigma'')^2 + 2(\\sigma')^3\\right) = \\sigma g. \\label{eq:113}\n\\end{equation}\n\n\\noindent Differentiating the expression for $g^2$ on the r.h.s. 
of (\\ref{eq:112}) gives\n\n\\begin{equation}\ngg' = \\sigma''\\left(\\xi^2\\sigma''' + \\xi\\sigma'' + 6\\xi(\\sigma')^2 - 4\\sigma\\sigma' + 4n(n+\\alpha)\\sigma'\\right). \\label{eq:114}\n\\end{equation}\n\n\\noindent Again combining (\\ref{eq:113}) and (\\ref{eq:114}) to eliminate the third derivative $\\sigma'''$ leads to\n\n$$\n\\sigma'gg' = \\sigma''g(g+\\sigma),\n$$\n\n\\noindent or\n\n\\begin{equation}\n\\sigma'g' - \\sigma''g = \\sigma\\sigma'', \\label{eq:115}\n\\end{equation}\n\n\\noindent which is the same as (\\ref{eq:111}) up to a factor of 2 on the right-hand side. So one gets\n\n\\begin{equation}\ng = \\xi\\sigma' - \\sigma. \\label{eq:116}\n\\end{equation}\n\n\\noindent Finally from the r.h.s. of (\\ref{eq:112}) and (\\ref{eq:116}) we arrive at\n\n\\begin{equation}\n\\xi^2(\\sigma'')^2 + 4(\\xi\\sigma'-\\sigma+n(n+\\alpha))(\\sigma')^2 - (\\xi\\sigma' - \\sigma)^2 = 0, \\label{eq:117}\n\\end{equation}\n\n\\noindent which is the standard form of the Painlev\\'e V equation for the Laguerre ensemble~\\cite{TW1, ASvM}.\n\n\\bigskip\n\nThe procedure just carried out for the two classical ensembles can be applied to equation (\\ref{eq:T}) in general. If we put $r = T_{\\xi}$, then the universal PDE (\\ref{eq:T}) reads\n\n\\begin{equation}\n\\left(r_t r_{\\xi tt} - r_{tt}r_{\\xi t} + 2(r_t)^3\\right)^2 = r^2\\left(4T_{tt}(r_t)^2 + (r_{tt})^2\\right). \\label{eq:118}\n\\end{equation}\n\n\\noindent We introduce a function $\\Phi$ such that (actually $r_t\\Phi = G$, with $G$ defined by (\\ref{eq:58}) and satisfying (\\ref{eq:61}), (\\ref{eq:69}))\n\n\\begin{equation}\n\\Phi^2 = \\left(\\frac{r_{tt}}{r_t}\\right)^2 + 4T_{tt} \\label{eq:119}\n\\end{equation}\n\n\\noindent and\n\n\\begin{equation}\nr_t r_{\\xi tt} - r_{tt}r_{\\xi t} + 2(r_t)^3 = rr_t\\Phi. \\label{eq:120}\n\\end{equation}\n\n\\noindent Now differentiate (\\ref{eq:119}) with respect to $\\xi$ and eliminate the highest-order derivative $r_{\\xi tt}$ with the help of (\\ref{eq:120}). 
Then one obtains\n\n\\begin{equation}\n\\Phi_{\\xi} = \\frac{rr_{tt}}{(r_t)^2}. \\label{eq:121}\n\\end{equation}\n\n\\noindent Two equations -- (\\ref{eq:119}) and (\\ref{eq:121}) -- together give an equivalent representation of the single equation (\\ref{eq:118}). If (\\ref{eq:121}) could be integrated in general to get an expression for $\\Phi$, then substituting it into (\\ref{eq:119}) would give a universal analog of the Painlev\\'e equations for all unitary ensembles. Unfortunately, this is not the case. And, most importantly, not only does there seem to be no general method to express the two $t$-derivatives -- first and second -- entering the universal PDE in terms of $\\xi$-derivatives and functions of $\\xi$ only, but even a case-by-case application of Virasoro constraints beyond the classical ensembles leads to infinitely many equations connecting derivatives w.r.t. all the different times of the Toda hierarchy, together with $\\xi$-derivatives. This seems to forbid simply eliminating the $t$-derivatives from the universal PDE for non-classical ensembles. To make this equation useful, a different point of view is needed. It might require some considerations of the differential systems of the previous section, see also e.g. their treatment in Ref.~\\cite{BeEyHa-06}, or a completely different perspective yet to be found. This open problem seems worth further effort, since the approach presented here seems rather appealing.\n\n\\section*{\\normalsize\\bf VIII. CONCLUSIONS}\n\nThe simple universal connections between the eigenvalue spacing probabilities and their ratios, represented as ratios of one-dimensional Toda $\\tau$-functions, on the one side, and the auxiliary variables naturally arising in the analytic approach to PDE for the probabilities using Fredholm determinants and resolvent operators, on the other side, are derived. They hold for all Hermitian random matrix ensembles with unitary symmetry of the probability measure. 
They are obtained as a consequence of the universal three-term recurrence relations, also derived here, for systems of functions orthogonal on subsets of the real line. These orthogonal bases for restricted ensembles were studied by Borodin and Soshnikov in Ref.~\\cite{BorSosh}, so we supply a missing connection between their work and Ref.~\\cite{TW1}. All our considerations can be generalized to unitarily invariant random matrix ensembles with eigenvalues on a circle or other simple contours in the complex plane. \n\\par The above relations of the functional-analytic approach with the structure of orthogonal functions and the one-dimensional Toda (or Toda-AKNS) integrable hierarchy allow us to derive some universal PDE for 1-Toda $\\tau$-functions -- matrix integrals -- and their ratios. Even a single universal PDE (\\ref{eq:T}) for the logarithm of the gap probability of various unitary ensembles is obtained. Although the difficulty of applying the universal PDE (\\ref{eq:T}) directly to obtain specific ODE for various non-classical ensembles is rather disappointing, the author believes that equation (\\ref{eq:T}), and also (\\ref{eq:T+}) for the ratio of consecutive-size matrix integrals, may be important and could somehow be used to describe both universal and model-specific properties of random matrix UE. How to use them remains an interesting question for future research.\n\n\\bigskip\n\\bigskip\n{\\noindent\\bf ACKNOWLEDGMENTS} \\\\\nThe author is especially grateful to C.A.Tracy for constant support and encouragement and for valuable comments on the text of the paper. Special thanks are also due to A.Borodin for his general interest in this work and for turning the author's attention to the paper of Palmer~\\cite{Pal}. The author also wishes to thank A.Its and A.Soshnikov for useful discussions, and E.Kanzieper for organizing the ISF Research Workshop on Random Matrices and Integrability, Yad Hashmona, Israel, March 2009, where a preliminary version of this work was presented. 
The author thanks the referee, whose comments helped improve the previous version of the manuscript.\n\\par This work was supported in part by the National Science Foundation under grant DMS-0906387 and VIGRE grant DMS-0636297.\n\n\\bigskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}