{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nMulti-view data contain information relevant for the identification of patterns or clusters that allow us to specify groups of subjects or objects. Our focus is on patients for which we have bio-medical and\/or clinical observations describing patient characteristics obtained from various diagnostic procedures or produced by different molecular technologies~\\cite{fu2020overview}. The different types of subject characteristics constitute views related to these patients. Integrative clustering of these views facilitates the detection of patient groups, with the consequence of improved clinical diagnostic and treatment schemes.\n\nSimple integration of single view clustering results is not appropriate for the diversity and complexity of available medical observations. Even state-of-the-art multi-view approaches have their limitations. Ensemble clustering has the potential to overcome some of them \\cite{ronan2018openensembles}\\cite{ alqurashi2019clustering}. For instance, while spectral clustering might be the optimal method for a specific image-based analysis, agglomerative clustering might be more appropriate for tabular data. This can be the case, where patient data reflect some hierarchical structure in a disease of interest and its subtypes \\cite{ciriello2013emerging}. Moreover, in real-world applications the data views originate from highly heterogeneous input sources. Thus, each view needs to be clustered with the best possible and most adequate strategy. Multi-view clustering methods are widely applied within the bio-medical domain. Molecular data from different biological layers are retrieved for the same set of patients. The clusters inferred from these multi-omics observations facilitate the stratification of cancer patients into sub-groups, paving the way towards precision medicine. \n\nHere, we present Parea, a generic and flexible methodology to build clustering ensembles of arbitrary complexity. To be precise, we introduce Parea$_{\\textit{hc}}$, an ensemble method which performs hierarchical clustering and data fusion for disease subtype discovery. The name of our method is derived from the Greek word {\\it Parea}, meaning a group of friends who gather to share experiences, values, and ideas.\n\nThe manuscript is structured as follows: Section~\\ref{sec:Parea_general} formally describes the ensemble structures we have developed. Section~\\ref{sec:approach} presents our multi-view hierarchical ensemble clustering approach for disease subtype detection. We discuss related work in Section~\\ref{sec:other_methods} and introduce the methods we used for benchmark comparisons. In Section~\\ref{sec:results} the results are presented and discussed. A brief introduction to the \\textit{Pyrea} Python package is given in Section~\\ref{sec:pyrea}. We conclude with Section~\\ref{sec:conclusion}.\n\n\\section{General ensemble architecture}\n\\label{sec:Parea_general}\n\nThe following concept for multi-view ensemble clustering is proposed. Each view $V \\in \\mathbb{R}^{n\\times p}$ is associated with a specific clustering method $c$, where $n$ is the number of samples and $p$ is the number of predictors, and in total we have $N$ data views. An ensemble, called $\\mathcal{E}$, can be modelled using a set of views $\\mathcal{V}$ and an associated fusion algorithm $f$. 
\n\n\\begin{equation}\n \\mathcal{V} \\mapsfrom \\{(V \\in \\mathbb{R}^{n\\times p}, c)\\} \n\\end{equation}\n\n\\begin{equation}\n \\mathcal{E}(\\mathcal{V}, f) \\mapsto \\widetilde{V}\\in \\mathbb{R}^{p\\times p}\n\\end{equation}\n\n\\begin{equation}\n \\mathcal{V} \\mapsfrom \\{(\\widetilde{V}\\in \\mathbb{R}^{p\\times p}, c)\\} \n\\end{equation}\n\nFrom the above equations we can see that a specified ensemble $\\mathcal{E}$ creates a view $\\widetilde{V} \\in \\mathbb{R}^{p\\times p}$ which again can be used to specify $\\mathcal{V}$, including an associated clustering algorithm $c$. With this concept it is possible to \\textit{layer-wise} stack views and ensembles into arbitrarily complex ensemble architectures. It should be noted, however, that the resulting view of a specified ensemble $\\mathcal{E}$ forms an affinity matrix of dimension $p \\times p$, and thus only those clustering methods which accept an affinity or a distance matrix as input are applicable. \n\n\\section{Proposed Ensemble Approach}\n\\label{sec:approach}\n\nThe Parea$_{\\textit{hc}}$ ensemble approach comprises two different strategies: Parea$_{\\textit{hc}}^{1}$ is limited to the application of two selected hierarchical clustering methods, whereas Parea$_{\\textit{hc}}^{2}$ allows the hierarchical clustering methods in the data fusion process to be varied. \n\nIn Parea$_{\\textit{hc}}^{1}$, all data views are clustered with two hierarchical clustering methods, $hc_{1}$ and $hc_{2}$, and the results are fused per method. The resulting fused matrices $\\widetilde{V}$ are clustered again with the same methods and the results are combined to form a final consensus (see Figure \\ref{fig:Parea_arch}, panel (a)). A formal description of Parea$_{\\textit{hc}}^{1}$ is given by:\n\n\\begin{equation}\n\\mathcal{V}_{1} \\mapsfrom \\{(V_{1},hc_{1}),(V_{2},hc_{1}),\\ldots, (V_{N},hc_{1})\\}, \n\\quad\n\\mathcal{V}_{2} \\mapsfrom \\{(V_{1},hc_{2}),(V_{2},hc_{2}),\\ldots, (V_{N},hc_{2})\\}\n\\end{equation}\n\n\\begin{equation}\n\\mathcal{E}_{1}(\\mathcal{V}_{1}, f) \\mapsto \\widetilde{V}_{1},\n\\quad\n\\mathcal{E}_{2}(\\mathcal{V}_{2}, f) \\mapsto \\widetilde{V}_{2}\n\\end{equation}\n\n\n\\begin{equation}\n \\mathcal{V}_{3} \\mapsfrom \\{(\\widetilde{V}_{1},hc_{1}),(\\widetilde{V}_{2},hc_{2})\\} \n\\end{equation}\n\n\n\\begin{equation}\n\\mathcal{E}_{3}(\\mathcal{V}_{3}, f) \\mapsto \\widetilde{V}_{3}.\n\\end{equation}\nThe affinity matrix $\\widetilde{V}_{3}$ is then clustered with $hc_{1}$ and $hc_{2}$ from the first layer, and the consensus of the obtained clustering solutions constitutes the final cluster assignments: \n\n\\begin{equation}\n\\mathcal{V}_{4} \\mapsfrom \\{(\\widetilde{V}_{3},hc_{1}),(\\widetilde{V}_{3},hc_{2})\\}\n\\end{equation}\n\n\n\\begin{equation}\n\\text{cons}(\\mathcal{V}_{4}) \n\\end{equation}\nGiven the proposed ensemble architecture, a genetic algorithm infers the optimal combination of $hc_{1}$ and $hc_{2}$ using the silhouette coefficient \\cite{rousseeuw1987silhouettes} as a fitness function. For the data fusion algorithm $f$, we utilize a method introduced in \\cite{pfeifer2021hierarchical}. See Figure \\ref{fig:Parea_arch} for a graphical illustration of the described ensemble architecture. 
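\n\nTo make the layering concrete, the following minimal Python sketch mimics the Parea$_{\\textit{hc}}^{1}$ structure on toy data. It is an illustration only and not the Pyrea implementation: the fusion step $f$ is approximated here by averaging co-association matrices rather than by the hierarchical fusion algorithm of \\cite{pfeifer2021hierarchical}, and the linkages \\texttt{ward} and \\texttt{average} stand in for the combination that the genetic algorithm would otherwise select.
\\begin{verbatim}
# Minimal sketch of the Parea_hc^1 layering (illustration, not Pyrea).
# Fusion is approximated by averaging co-association matrices.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coassociation(view, method, k):
    # Cluster one view, return its sample-by-sample co-association matrix.
    labels = fcluster(linkage(view, method=method), t=k, criterion='maxclust')
    return (labels[:, None] == labels[None, :]).astype(float)

def cluster_affinity(A, method, k):
    # Hierarchically cluster an affinity matrix via its distance form.
    D = squareform(1.0 - A, checks=False)
    return fcluster(linkage(D, method=method), t=k, criterion='maxclust')

rng = np.random.default_rng(0)
n, k = 60, 3
shift = np.repeat(np.arange(k), n // k)[:, None]
views = [rng.normal(size=(n, p)) + shift for p in (10, 25)]  # two toy views
hc1, hc2 = 'ward', 'average'   # stand-ins for the GA-selected combination

# Layer 1: every view clustered with hc1 (ensemble 1) and hc2 (ensemble 2).
V1_fused = np.mean([coassociation(V, hc1, k) for V in views], axis=0)
V2_fused = np.mean([coassociation(V, hc2, k) for V in views], axis=0)

# Layer 2: the fused matrices are clustered again and fused once more.
l1 = cluster_affinity(V1_fused, hc1, k)
l2 = cluster_affinity(V2_fused, hc2, k)
V3_fused = 0.5 * ((l1[:, None] == l1[None, :]).astype(float) +
                  (l2[:, None] == l2[None, :]).astype(float))

# Final layer: cluster V3_fused with hc1 and hc2 and take their consensus.
f1 = cluster_affinity(V3_fused, hc1, k)
f2 = cluster_affinity(V3_fused, hc2, k)
consensus = (f1[:, None] == f1[None, :]) & (f2[:, None] == f2[None, :])
print(consensus.sum())   # co-membership counts of the final consensus
\\end{verbatim}
In the actual method, the silhouette coefficient of such a consensus serves as the fitness value from which the genetic algorithm selects $hc_{1}$ and $hc_{2}$.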
\n\nIn the Parea$_{\\textit{hc}}^{2}$ approach the views are clustered with up to $N$ different hierarchical clustering methods $hc_{1}, hc_{2}, \\ldots, hc_{N}$, where $N$ is the number of data views.\nA formal description of Parea$_{\\textit{hc}}^{2}$ is given by:\n\n\n\\begin{equation}\n\\mathcal{V}_{1} \\mapsfrom \\{(V_{1},hc_{1}),(V_{2},hc_{2}),\\ldots, (V_{N},hc_{N})\\}\n\\end{equation}\n\n\n\\begin{equation}\n\\mathcal{E}_{1}(\\mathcal{V}_{1}, f) \\mapsto \\widetilde{V}_{1}\n\\end{equation}\nThe affinity matrix $\\widetilde{V}_{1}$ is then clustered with $hc_{1}$, $hc_{2}$, and $hc_{3}$ (here illustrated for $N=3$). The consensus of the obtained clustering results constitutes the final cluster assignments: \n\n\n\\begin{equation}\n \\mathcal{V}_{2} \\mapsfrom \\{(\\widetilde{V}_{1},hc_{1}),(\\widetilde{V}_{1},hc_{2}), (\\widetilde{V}_{1},hc_{3})\\} \n\\end{equation}\n\n\n\\begin{equation}\n\\text{cons}(\\mathcal{V}_{2}) \n\\end{equation}\nThe best combination of clustering methods ($hc_{1}$, $hc_{2}$, and $hc_{3}$) is again inferred by a genetic algorithm, where the silhouette coefficient \\cite{rousseeuw1987silhouettes} is deployed as a fitness function. We consider eight different hierarchical clustering methods, namely single-linkage and complete-linkage clustering \\cite{murtagh2012algorithms}, an unweighted pair-group method using arithmetic averages (UPGMA) \\cite{sokal1958statistical}, a weighted pair-group method using arithmetic averages (WPGMA) \\cite{sokal1958statistical} clustering, a weighted pair-group method using centroids (WPGMC) \\cite{gower1967comparison}, an unweighted pair-group method using centroids (UPGMC) \\cite{sokal1958statistical} clustering, and two variants of clustering based on Ward's minimum variance (ward.D and ward.D2) \\cite{ward1963hierarchical}\\cite{murtagh2014ward}.\n\n\\begin{figure*}[h!]\n\\begin{center}\n\\includegraphics[width=17.3cm]{Figures\/Parea_general_crop.pdf}\n\\end{center}\n\\caption{\\textbf{(a)} The Parea$_{\\textit{hc}}^{1}$ ensemble architecture. The views are organised in two ensembles. Each ensemble is associated with a specific clustering method. \\textbf{(b)} The Parea$_{\\textit{hc}}^{2}$ ensemble architecture. The views can be clustered using different hierarchical clustering techniques. }\\label{fig:Parea_arch}\n\\end{figure*} \n\n\\section{Alternative approaches}\n\\label{sec:other_methods}\n\nA comparison with alternative approaches was conducted using a set of state-of-the-art multi-view clustering methods implemented within the Python package mvlearn \\cite{perry2021mvlearn}. Part of this set is a multi-view spectral clustering approach with and without the use of a co-training framework \\cite{kumar2011co}. An additional method is based on multi-view $k$-means \\cite{chao2017survey} clustering, plus an implementation of multi-view spherical $k$-means using the co-EM framework as described in \\cite{bickel2004multi}.\n\nFor disease subtype detection we compared Parea$_{\\textit{hc}}$ with NEMO \\cite{rappoport2019nemo}, SNF (Similarity Network Fusion) \\cite{wang2014similarity}, HCfused \\cite{pfeifer2021hierarchical}, and PINSplus \\cite{nguyen2019pinsplus}. SNF models the similarity between subjects or objects as a network and then fuses these networks via an interchanging diffusion process. Spectral clustering is applied to the fused network to infer the final cluster assignments. 
NEMO builds upon SNF, but provides solutions to partial data and implements a novel \\textit{eigen-gap} method \\cite{von2007tutorial} to infer the optimal number of clusters.\nThe method implemented within PINSplus systematically adds noise to the data and infers the best number of clusters based on the stability against this noise. When the best $k$ (number of clusters) is detected, binary matrices are formulated reflecting the cluster solutions for each single-view contribution. A final agreement matrix is derived by counting the number of times two subjects or objects appear in the same cluster. This agreement matrix is then used for a standard clustering method, such as $k$-means. \n\nHCfused is a hierarchical clustering and data fusion algorithm. It is based on \\textit{bottom-up} agglomerative clustering to create a fused affinity matrix. At each step two clusters are merged within the view that provides the minimal distance between these clusters. The number of times two samples appear in the same cluster is reflected by a co-association matrix. The final cluster assignments are obtained from hierarchical clustering based on Ward's minimum variance \\cite{ward1963hierarchical}\\cite{murtagh2014ward}.\n\n\\section{Results and discussion}\n\\label{sec:results}\n\n\\subsection*{Evaluation on machine learning benchmark data sets}\n\nWe tested Parea$_{\\textit{hc}}$ on the IRIS data set available from the UCI Machine Learning Repository\\footnote{See \\url{https:\/\/archive.ics.uci.edu\/ml\/datasets\/iris}}. It contains three classes of 50 instances each, where each class refers to a type of iris plant.\nWe compared the results of the ensemble with each of the possible ensemble members, and also compared the outcomes with a consensus approach, where the cluster solutions of all single algorithms were combined. The data pool was sampled 50 times and the clustering methods were applied on each iteration. The number of clusters $k$ was set to three corresponding to the ground truth.\nThese sanity checks revealed superior results for the Parea$_{\\textit{hc}}^{1}$ ensemble method compared to a simple consensus approach, as judged by the Adjusted Rand Index (ARI) (see Figure \\ref{fig:IRIS}). We could further observe that the underlying genetic algorithm infers ward.D and ward.D2 as the best performing method combination (Figure \\ref{fig:IRIS}, panel (b)). \n\nIn an additional investigation, we evaluated the accuracy of the discussed methods on the nutrimouse data set \\cite{martin2007novel}. The data set originates from a nutrigenomic study of the mouse in which the effects of five regimens with contrasted fatty acid compositions on liver lipids and hepatic gene expression in mice were considered. Two views were acquired from forty mice. First, gene expressions of 120 genes were measured in liver cells, as potentially relevant for the nutrition study. Second, lipid concentrations of 21 hepatic fatty acids were measured by gas chromatography.\n\nFor the nutrimouse data set Parea$_{\\textit{hc}}^{2}$ performs better than Parea$_{\\textit{hc}}^{1}$ (see Figure \\ref{fig:100leaves}, panel (a)). This observation suggests that higher accuracy can be achieved when the views are analysed with disjoint clustering strategies. Parea$_{\\textit{hc}}^{2}$ performs best when the median NMI (\\textit{Normalised Mutual Information}) is used as a metric. However, at the same time we observed higher variance for the Parea ensembles compared to multi-view spectral clustering. 
It is worth noting that the alternative spectral-based approaches from the mvlearn Python package performed as well as Parea$_{\\textit{hc}}^{1}$. \n\nLast, we further studied Parea's performance on the one-hundred plant species leaves multi-view data set, available from the UCI Machine Learning Repository\\footnote{See \\url{https:\/\/archive.ics.uci.edu\/ml\/datasets\/One-hundred+plant+species+leaves+data+set}}. For each feature, a 64 element vector is observed per leaf sample. These vectors form contiguous descriptors for shape as well as texture and margin. In contrast to the IRIS data set it is composed of multiple views. As can be seen in Figure \\ref{fig:100leaves}, Parea$_{\\textit{hc}}^{1}$ and Parea$_{\\textit{hc}}^{2}$ compete well with the spectral-based approaches. Multi-view $k$-means clustering does not capture the ground truth class distribution (see Figure \\ref{fig:100leaves}, panel (b)).\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Figures\/Parea_vs_Single.pdf}\n \\caption{}\n \n \\end{subfigure}\n \n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Figures\/Heatmap.pdf}\n \\caption{}\n \n \\end{subfigure}\n \\caption{(a) Parea$_{\\textit{hc}}$ versus each of the available ensemble methods executed on the \\textit{single-view} IRIS data set. (b) The pairwise Parea$_{\\textit{hc}}$ ensembles inferred by the genetic algorithm for clustering the IRIS data set.}\n \\label{fig:IRIS}\n\\end{figure}\n\n\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Figures\/Mouse_diet.pdf}\n \\caption{}\n \n \\end{subfigure}\n \n \\begin{subfigure}[b]{0.50\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Figures\/100leaves.pdf}\n \\caption{}\n \n \\end{subfigure}\n \\caption{\n Parea$_{\\textit{hc}}$ versus a set of state-of-the-art multi-view methods implemented within the mvlearn package \\cite{perry2021mvlearn}.\n (a) The \\textit{multi-view} nutrimouse data set. (b) The \\textit{multi-view} 100 leaves data set.\n }\n \\label{fig:100leaves}\n\\end{figure}\n\n\n\\subsection*{Multi-omics clustering for disease subtype discovery}\n\nWe applied the aforementioned ensemble methods to patient data of seven different cancer types, namely glioblastoma multiforme (GBM), kidney renal clear cell carcinoma (KIRC), liver hepatocellular carcinoma (LIHC), skin cutaneous melanoma (SKCM), ovarian serous cystadenocarcinoma (OV), sarcoma (SARC), and acute myeloid leukemia (AML), aiming at the externally known survival outcome. The Parea$_{\\textit{hc}}$ ensemble approach was studied on multi-omics data, including gene expression data (mRNA), DNA methylation, and micro-RNAs. The obtained results were compared with the alternative approaches SNF, NEMO, PINSplus, and HCfused. All data were retrieved from the ACGT lab at Tel Aviv University\\footnote{See \\url{http:\/\/acgt.cs.tau.ac.il\/multi_omic_benchmark\/download.html}}, which makes available a data repository recently proposed as a convenient benchmark for multi-omics clustering approaches \\cite{rappoport2018multi}. The data were pre-processed as follows: patients and features with more than $20\\%$ missing values were removed and the remaining missing values were imputed with $k$-nearest neighbor imputation. In the methylation data, we selected those 5000 features with maximal variance in each data set. 
All features were then normalised to have mean zero and standard deviation one. \n\nThe resulting multi-omics views were then clustered with the discussed multi-view clustering approaches. It is important to mention that the cancer patients were exclusively analysed based on their genomic footprints. The survival statuses and times of all patients were retrieved to validate the quality of the inferred patient clusters. Here, a well performing clustering method is determined by its capability to separate patients into groups with similar event statuses in terms of survival. To this end we randomly sampled 100 patients 30 times from the data pool and performed the Cox log-rank test, which is an inferential procedure for the comparison of event time distributions among independent (i.e. clustered) patient groups.\n\nIn the case of Parea$_{\\textit{hc}}$, the optimal number of clusters was determined by the silhouette coefficient. We set the number of HCfused \\cite{pfeifer2021hierarchical} fusion iterations to 30. The maximum possible number of clusters was fixed to 10. The obtained Cox log-rank $p$-values ($\\alpha=0.05$ significance level) are displayed in Figure \\ref{fig:TCGA} and Table \\ref{tab:table1}. Our Parea$_{\\textit{hc}}$ ensembles outperformed the alternative approaches in six out of seven cases. SKCM is the only cancer type for which HCfused achieved better results. Notably, spectral-based clustering (see NEMO \\cite{rappoport2019nemo} and SNF \\cite{wang2014similarity}) does not perform as well as with the benchmark machine learning data sets. We further learned that Parea$_{\\textit{hc}}^{1}$ is more accurate than Parea$_{\\textit{hc}}^{2}$ (Figure \\ref{fig:TCGA}, Table \\ref{tab:table1}). \n\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[width=14cm]{Figures\/TCGA_median.pdf\n\\end{center}\n\\caption{Parea$_{\\textit{hc}}$ in comparison with the alternative approaches SNF, NEMO, PINSplus, and HCfused. Colored bars represent the method-specific median Cox log-rank $p$-values for the seven different cancer types. The vertical line refers to $\\alpha=0.05$ significance level.}\n\\label{fig:TCGA}\n\\end{figure}\n\n \n \n\n\n\\begin{table}[h!]\n \\begin{center}\n \\caption{Survival analysis of TCGA cancer data}\n \\label{tab:table1}\n \\begin{tabular}{l l l l l l l l}\n \\textbf{Cancer type} & \\textbf{Sample size} & SNF & PINSplus & NEMO & HCfused &Parea$_{\\textit{hc}}^{1}$ & Parea$_{\\textit{hc}}^{2}$ \\\\\n \\hline\n GBM & 538 & 0.1304 & 0.2223 & \\textbf{0.0347} & 0.0997 & \\textbf{0.0447}& \\textbf{0.0347}\\\\\n KIRC & 606 & 0.3962 & 0.4005 & 0.3464 & 0.0561 & \\textbf{0.0137} & \\textbf{0.0400}\\\\\n \n LIHC & 423 & 0.5357 & 0.6731 & 0.4354 & 0.2062 & \\textbf{0.0334} & \\textbf{0.0436}\\\\\n SKCM & 473 & 0.5153 & 0.3956 & 0.4565 & 0.0699 & 0.1677 & 0.1629\\\\\n OV & 307 & 0.4042 & 0.5300 & 0.3593 & 0.2594 & 0.1685 & 0.2870 \\\\\n SARC & 265 & 0.1622 & 0.2024 & 0.0979 & \\textbf{0.0408} & \\textbf{0.0076} & \\textbf{0.0109} \\\\\n AML & 173 & 0.0604 & 0.1973 & \\textbf{0.0440} & 0.1148 & \\textbf{0.0167} & 0.0502\\\\ \\hline\n \n \\end{tabular}\n \\end{center}\n \\centering\nDisplay of the median Cox log-rank p-values. Significant results ($\\alpha=0.05$) are highlighted in bold. \n\\end{table}\n\n\\section{Pyrea Software Library}\n\\label{sec:pyrea}\nPyrea is a software framework which allows for flexible ensembles to be built, layer-wise, using a variety of clustering algorithms and fusion techniques. 
It can be used to implement the ensemble methods discussed throughout this paper and any other custom ensemble structure that users may wish to create. It is made available as a Python package, which can be installed via the \\textit{pip} package manager. See the project's GitHub repository under \\url{https:\/\/github.com\/mdbloice\/Pyrea} for source code and usage examples, such as how to implement some of the ensembles mentioned above. Comprehensive documentation is available under \\url{https:\/\/pyrea.readthedocs.io}. Pyrea is MIT licensed and available for Windows, macOS, and Linux.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe have presented Parea$_{\\textit{hc}}$, an ensemble approach for multi-view hierarchical clustering for cancer subtype discovery. We have shown that Parea$_{\\textit{hc}}$ competes well with current state-of-the-art multi-view clustering techniques, both on classical machine learning data sets and on real-world multi-omics cancer patient data. The proposed methodology for building ensembles is highly versatile and allows for ensembles to be stacked layer-wise. Additionally, the Parea ensemble strategy is not limited to a specific clustering technique. Within our developed Python package \\textit{Pyrea}, we enable flexible ensemble building, while providing a wide range of clustering algorithms, data fusion techniques, and metrics to infer the best number of clusters. \n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\\section{Introduction}\nThe problem of group synchronization is critical for various tasks in data science, including structure from motion (SfM), simultaneous localization and mapping (SLAM), Cryo-electron microscopy imaging, sensor network localization, multi-object matching and community detection. Rotation synchronization, also known as rotation averaging, is the most common group synchronization problems in 3D reconstruction. It asks to recover camera rotations from \nmeasured\nrelative rotations \nbetween\npairs of cameras. Permutation synchronization, which has applications in multi-object matching, asks to obtain globally consistent matches of objects from possibly erroneous measurements of matches between some pairs of objects. The simplest example of group synchronization is $\\mathbb Z_2$ synchronization, which appears in community detection. \n\nThe general problem of group synchronization can be mathematically formulated as follows.\nAssume a graph $G([n],E)$ with $n$ vertices indexed by $[n]=\\{1,\\ldots,n\\}$, a group $\\mathcal{G}$, and a set of group elements $\\{g_i^*\\}_{i=1}^n \\subseteq \\mathcal{G}$. \nThe \nproblem asks to recover $\\{g_i^*\\}_{i=1}^n$ from noisy and corrupted measurements $\\{g_{ij}\\}_{ij \\in E}$ of the group ratios $\\{g_i^*g_j^{*-1}\\}_{ij \\in E}$.\nWe note that one can only recover, or approximate, the original group elements $\\{g_i^*\\}_{i\\in [n]}$ up to a right group action.\nIndeed, for any $g_0\\in \\mathcal G$, $g_{ij}^*$ can also be written as $g_i^*g_0(g_j^*g_0)^{-1}$ and thus $\\{g_i^*g_0\\}_{i\\in [n]}$ is also a solution. \nThe above mentioned synchronization problems (rotation, permutation, and $\\mathbb Z_2$ synchronization) correspond to the groups $SO(3)$, $S_N$, and $\\mathbb Z_2$, respectively.\n\nThe most challenging issue for group synchronization is the practical scenario of highly corrupted and noisy measurements. Traditional least squares solvers often fail to produce accurate results in such a scenario. \nMoreover, some basic estimators that seem to be robust to corruption\noften do not tolerate in practice high level of noise. \nWe aim to propose a general method for group synchronization that may tolerate high levels and different kinds of corruption and noise. While our basic ideas are formally general, in order to carefully refine and test them, we focus on the special problem of rotation synchronization, which is also known as rotation averaging \\cite{RotationAveraging13}.\nWe choose this problem as it is the most common, and possibly most difficult, synchronization problem in 3D computer vision.\n\n\\subsection{Related Works}\n\\label{sec:related}\nMost previous group synchronization solvers minimize an energy function. \nFor the discrete groups $\\mathbb Z_2$ and $S_N$, least squares energy minimization is commonly used.\nRelevant robustness results, under special corruption and noise models, are discussed in \\citet{Z2Afonso2, Z2abbe, Z2Afonso, chen_partial, Huang13, PPM_vahan, deepti}.\n\nFor Lie groups, such as $SO(D)$, that is, the group of $D \\times D$ orthogonal matrices with determinant 1, where $D\\geq 2$, least squares minimization was proposed to handle Gaussian noise \\citep{rotationNP,StrongDuality18,Govindu04_Lie}. However, when the measurements are also adversarially corrupted, this framework does not work well and other corruption-robust energy functions need to be used \\cite{ChatterjeeG13_rotation, L12, HartleyAT11_rotation, SO2ML, wang2013exact}. 
\nThe most common corruption-robust energy function uses least absolute deviations.\n\\citet{wang2013exact} prove that under a very special probabilistic setting with $\\mathcal G = SO(D)$, the pure minimizer of this energy function can exactly recover the underlying group elements with high probability. However, their assumptions are strong and they use convex relaxation, which changes the original problem and is expensive to compute. \n \\citet{SO2ML} apply a trimmed averaging procedure for robustly solving $SO(2)$ synchronization. They are able to recover the ground truth group elements under a special deterministic condition on the topology of the corrupted subgraph. However, the verification of this condition and its extension to $SO(D)$, where $D>2$, are nontrivial.\n\\citet{HartleyAT11_rotation} used the Weiszfeld algorithm\n to minimize the least-absolute-deviations energy function\nwith $\\mathcal G = SO(3)$. Their method iteratively computes geodesic medians. However, they update only one rotation matrix per iteration, which results in slow empirical convergence and may increase the possibility of getting stuck at local minima. \\citet{L12} proposed a robust Lie-algebraic averaging method over $\\mathcal G = SO(3)$. They apply an iteratively reweighted least squares (IRLS) procedure in the tangent space of $SO(3)$ to optimize different robust energy functions, including the one that uses least absolute deviations. They claim that the use of the $\\ell_{1\/2}$ norm for deviations results in highest empirical accuracy. The empirical robustness of the two latter works is not theoretically guaranteed, even in simple settings. A recent deep learning method \\cite{NeuroRA} solves a supervised version of rotation synchronization,\nbut it does not apply to the above unsupervised formulation of the problem.\n\n\\citet{truncatedLS} use least absolute deviations minimization for solving 1D translation synchronization, where $\\mathcal G=\\mathbb R$ with addition. \nThey propose a special version of IRLS and provide a deterministic exact recovery guarantee that depends on properties of the graph and its Laplacian. They do not explain their general result in an adversarial setting, but in a very special noisy setting.\n\n \nRobustness results were established for least absolute deviations minimization in camera location estimation, which is somewhat similar to group synchronization \\cite{HandLV15,LUDrecovery}. These results assume special probabilistic setting, however, they have relatively weak assumptions on the corruption model.\n\n\n\n Several energy minimization solutions have been proposed to $SE(3)$ synchronization \\cite{SE3_MCMC, SE3_SDP_jesus, SE3_Rosen, SE3_sync, SE3_RPCA}. This problem asks to jointly estimate camera rotations and locations from relative measurements of both. Neither of these solutions successfully address highly corrupted scenarios.\n\n\nOther works on group synchronization, which do not minimize energy functions but aim to robustly recover corrupted solutions,\nscreen corrupted edges using cycle consistency information. 
For a group $\\mathcal{G}$ with group identity denoted by $e$, any $m \\geq 3$, any cycle\n$L=\\{i_1i_2, i_2i_3,\\dots, i_m i_1\\}$ of length $m$ and any corresponding product of ground-truth group ratios along $L$, $g^*_L=g_{i_1i_2}^*g_{i_2i_3}^*\\cdots g_{i_mi_1}^*$, the cycle-consistency constraint is\n$g^*_L= e$.\nIn practice, one is given the product of measurements, that is, $g_L=g_{i_1i_2}g_{i_2i_3}\\cdots g_{i_mi_1}$, and in order to ``approximately satisfy the cycle-consistency constraint'' one tries to enforce $g_L$ to be sufficiently close to $e$.\n\\citet{Zach2010} uses the cycle-consistency constraint to detect corrupted relative rotations in $SO(3)$. It seeks to maximize a log likelihood function, which is based on the cycle-consistency constraint, using either belief propagation or convex relaxation. However, no theoretical guarantees are provided for the accuracy of outlier detection. Moreover, the log likelihood function implies very strong assumptions on the joint densities of the given relative rotations.\n\\citet{shen2016} classify the relative rotations as uncorrupted if they belong to any cycle that approximately satisfies the cycle-consistency constraint. However, this work only exploits local information and cannot handle the adversarial corruption case, where corrupted cycles can be approximately consistent. \n\nAn iterative reweighting strategy, IR-AAB \\cite{AAB}, was proposed to detect and remove corrupted pairwise directions for the different problem of camera location estimation. It utilizes another notion of cycle-consistency to infer the corruption level of each edge. \\citet{cemp} extend the latter idea, and interpret it as a message passing procedure, to solve group synchronization with any compact group. They refer to their new procedure as cycle-edge message passing (CEMP). While we follow ideas of \\citet{cemp,AAB}, we directly solve for the group elements, instead of merely estimating corruption levels, using these estimates for an initial cleaning of edges and then solving the cleaned problem with another method.\n\nTo the best of our knowledge, the only unified frameworks for group synchronization are those of \\citet{ICMLphase,cemp,AMP_compact}. However, \\citet{ICMLphase} and \\citet{AMP_compact} assume special probabilistic models that do not address adversarial corruption. Furthermore, \\citet{ICMLphase} only applies to Lie groups and the different setting of multi-frequencies.\n\n\\subsection{Contribution of This Work}\nCurrent group synchronization solvers often do not perform well with highly corrupted and noisy group ratios. In order to address this issue, we propose a rotation synchronization solver that can handle high levels of noise and corruption in practice. Our main ideas seem to generalize to group synchronization with any compact group, but more careful developments and testing are needed for other groups. We emphasize the following specific contributions of this work:\n\\begin{itemize}\n\\item\nWe propose the message passing least squares (MPLS) framework as an alternative paradigm to IRLS for group synchronization, and in particular, rotation synchronization. It uses the theoretically guaranteed CEMP algorithm\nfor estimating the underlying corruption levels.\nThese estimates are then used for learning better weights for the weighted least squares problem.\n\\item \nWe explain in Section \\ref{sec:issue} why the common IRLS solver may not be accurate enough and in Section \\ref{sec:mpls} why MPLS can overcome these obstacles. 
\n\\item While MPLS can be formally applied to any compact group, we refine and test it for the group $\\mathcal G = SO(3)$. We demonstrate state-of-the-art results for rotation synchronization with both synthetic data having nontrivial scenarios and real SfM data.\n\n\n\\end{itemize}\n\n\n\n\n \n\n\n\n\n\n\\section{Setting for Robust Group Synchronization}\\label{sec:adversarial}\nSome previous robustness theories for group synchronization typically assume a very special and often unrealistic corruption probabilistic model for very special groups \\cite{deepti,wang2013exact}. In general, simplistic probabilistic models for corruption,\nsuch as generating potentially corrupted group ratios according to the Haar measure on $\\mathcal G$ \\cite{wang2013exact},\nmay not generate some nontrivial scenarios that often occur in practice. \nFor example, in the application of rotation synchronization that arise in SfM, the corrupted camera relative rotations can be self-consistent due to the ambiguous scene structures \\citep{1dsfm14}. However, in common probabilistic models, such as the one in \\citet{wang2013exact}, cycles with corrupted edges are self-consistent with probability zero. A more realistic model is the adversarial corruption model for the different problem of camera location \\cite{LUDrecovery, HandLV15}. However, it also assumes very special probabilistic models for the graph and camera locations, which are not realistic. A more general model of adversarial corruption with noise is due to \\citet{cemp} and we review it here.\n\nWe assume a graph $G([n],E)$ \nand a compact group $\\mathcal G$\nwith a bi-invariant metric $d$, that is, for any $g_1$, $g_2$, $g_3\\in \\mathcal G$, \n $d(g_1,g_2)=$ $d(g_3g_1,g_3g_2)=$ $d(g_1g_3,g_2g_3)$. \n For $\\mathcal G = SO(3)$, or any Lie group, $d$ is commonly chosen to be the geodesic distance.\n\n Since $\\mathcal G$ is compact, we can scale $d$ and assume \n\n that $d(\\cdot)\\leq 1$.\n \n \n \nWe partition $E$ into $E_g$ and $E_b$, which represent sets of good (uncorrupted) and bad (corrupted) edges, respectively. \nWe will need a topological assumption on $E_b$, or equivalently, $E_g$. A necessary assumption is that $G([n],E_g)$ is connected, though further restrictions on $E_b$ may be needed for \nestablishing theoretical guarantees\n\\cite{cemp}.\n\nIn the noiseless case, the adversarial corruption model generates group ratios in the following way.\n\\begin{align}\\label{eq:model}\ng_{ij}=\\begin{cases}\ng^*_{ij}:=g_i^*g_j^{*-1}, & ij \\in E_g;\\\\\n\\tilde g_{ij} \\neq g^*_{ij}, & ij \\in E_b.\n\\end{cases}\n\\end{align}\nThat is, for edges $ij\\in E_b$, the corrupted group ratio $\\tilde g_{ij}\\neq g_{ij}^*$ can be arbitrarily chosen from $\\mathcal G$.\nThe corruption is called adversarial since \none can maliciously corrupt the group ratios for $ij\\in E_b$ and also maliciously choose $E_b$ as long as the needed assumptions on $E_b$ are satisfied. One can even form cycle-consistent corrupted edges, so that they can be confused with the good edges.\n\nIn the noisy case, we assume a noise model for $d(g_{ij},g_{ij}^*)$, where $ij\\in E_g$. In theory, one may need to restrict this model \\cite{cemp}, but in practice we test highly noisy scenarios.\n\nFor $ij\\in E$ we define the corruption level of $ij$ as \\[s_{ij}^* = d(g_{ij},g_{ij}^*).\\]\nWe use ideas of \\citet{cemp} to estimate $\\{s_{ij}^*\\}_{ij \\in E}$, but then we propose new ideas to estimate $\\{g_{i}^*\\}_{i \\in [n]}$. 
While exact estimation of both quantities is equivalent in the noiseless case \\cite{cemp}, this property is not valid when adding noise.\n\n \n\\section{Issues with the Common IRLS}\\label{sec:issue}\nWe first review the least squares minimization, least absolute and unsquared deviations minimization and IRLS for group synchronization. We then explain why IRLS may not form a good solution for the group synchronization problem, and in particular for Lie algebraic groups, such as the rotation group.\n\nThe least squares minimization \ncan be formulated as follows:\n\\begin{align}\n\\label{eq:l2}\n \\min_{\\{g_i\\}_{i=1}^n \\subseteq \\mathcal G}\\sum_{ij\\in E}d^2(g_{ij},g_ig_j^{-1}),\n\\end{align}\nwhere one often relaxes this formulation.\nThis formulation is generally sensitive to outliers and thus more robust energy functions are commonly used when considering corrupted group ratios. More specifically, one may choose a special function $\\rho(x) \\neq x^2$ and solve the following least unsquared deviation formulation\n\\begin{align}\\label{eq:lp}\n \\min_{\\{g_i\\}_{i=1}^n \\subseteq \\mathcal G}\\sum_{ij\\in E}\\rho\\left(d(g_{ij},g_ig_j^{-1})\\right).\n\\end{align}\nThe special case of $\\rho(x)=x$ \\cite{HandLV15,HartleyAT11_rotation,ozyesil2015robust, wang2013exact} is referred to as least absolute deviations. Some other common choices are $\\rho(x)=x^2\/(x^2+\\sigma^2)$ \\cite{ChatterjeeG13_rotation} and $\\rho(x)=\\sqrt x$ \\cite{L12}. \n\n\nThe least unsquared formulation is typically solved using IRLS, where at iteration $t$ one solves the weighted least squares problem:\n\\begin{align} \\label{eq:irls}\n \\{g_{i,t}\\}_{i\\in [n]}&= \\operatorname*{arg\\,min}_{\\{g_i\\}_{i=1}^n \\subseteq \\mathcal G}\\sum_{ij\\in E} w_{ij,t-1}d^2(g_{ij},g_ig_j^{-1}).\n\\end{align}\nIn the first iteration the weights can be initialized in a certain way, but in the next iterations the weights are updated using the residuals of this solution. Specifically, for $ij \\in E$ and iteration $t$, the residual is $r_{ij,t}=d(g_{ij},g_{i,t}g_{j,t}^{-1})$ and the weight $w_{ij,t}$ is \n\\begin{align}\nw_{ij,t}&=\nF(r_{ij,t}),\n\\label{eq:weight_irls}\n\\end{align}\nwhere the function $F$ depends on the choice of $\\rho$. For $\\rho(x)=x^p$, where $0<p<2$, $F(x)=\\min\\{x^{p-2},A\\}$, where $1\/A$ is a regularization parameter and here we fix $A=10^8$.\n\n\nThe above IRLS procedure poses the following three issues. First, its convergence to the solution $\\{g_i^*\\}_{i\\in [n]}$\nis not guaranteed, especially under severe corruption. Indeed, IRLS succeeds when it accurately estimates the correct weights $w_{ij,t}$ for each edge. Ideally, when the solution $\\{g_{i,t}\\}_{i\\in [n]}$ is close to the ground truth $\\{g_i^*\\}_{i\\in [n]}$, the residual $r_{ij,t}$ must be close to the corruption level $s_{ij}^*$ so that weight $w_{ij,t}$ must be close to $F(s_{ij}^*)$. \nHowever, if edge $ij \\in E_b$ is severely corrupted (or edge $ij \\in E_g$ has high noise) and either $g_i^*$ or $g_j^*$ is wrongly estimated, then the residual $r_{ij,t}$ might have a very small value. \nThus the weight $w_{ij,t}$ in \\eqref{eq:weight_irls} can be extremely large and may result in an inaccurate solution in the next iteration and possibly low-quality solution at the last iteration.\n\n\nThe second issue is that for common groups each iteration of \\eqref{eq:irls} requires either SDP relaxation or tangent space approximation (for Lie groups). 
However, if the weights of IRLS are wrongly estimated in the beginning, then they may affect the tightness of the SDP relaxation and the validity of tangent space approximation. Therefore, such procedures tend to make the IRLS scheme sensitive to corruption and initialization of weights and group elements. \n\nAt last, when dealing with noisy\ndata where most of $s_{ij}^*$, $ij \\in E$, are significantly greater than $0$, the current reweighting strategy usually gives non-negligible positive weights to outliers. This can be concluded from the expression of $F$ (e.g., for $\\ell_p$ minimization) and the fact that in a good scenario $r_{ij,t} \\approx s_{ij}^*$ and $s_{ij}^*$ can be away from 0. Therefore, outliers can be overweighed and this may lead to low-quality solutions. \nWe remark that this issue is more noticeable in Lie groups, such as the rotation group, as all measurements are often noisy and corrupted; whereas in discrete groups some measurements may be rather accurate \\cite{robust_multi_object2020}.\n\n\\section{Message Passing Least Squares (MPLS)}\n\\label{sec:mpls}\nIn view of the drawbacks of the common IRLS scheme, we \npropose the MPLS (Message Passing Least Squares), or Minneapolis, algorithm.\nIt carefully initializes and reevaluates the weights of a weighted least squares problem by our CEMP algorithm \\cite{cemp} or a modified version of it. \nWe first review the ideas of CEMP in Section \\ref{sec:cemp}. We remark that its goal is to estimate the corruption levels $\\{s_{ij}^*\\}_{ij \\in E}$ and not the group elements $\\{g_{i}^*\\}_{i \\in [n]}$. \nSection \\ref{sec:mpls2} formally describes MPLS for the general setting of group synchronization. Section \\ref{sec:so3} carefully refines MPLS for rotation synchronization. Section \\ref{sec:complexity} summarizes the complexity of the proposed algorithms.\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{illustration.pdf}\n \\caption{Demonstration of MPLS in comparison with IRLS. IRLS updates the graph weights directly from the residuals, which can be inaccurate. It is demonstrated in the upper loop of the figure, where the part different than MPLS is crossed out. In contrast, MPLS updates the graph weights by applying a CEMP-like procedure to the residuals, demonstrated in the ``message-passing unit''. Good edges, such as $jk_1$, are marked with green, and bad edges are marked with red. For $ij\\in E$ and $k\\in \\{k_1,k_2,\\dots k_{50}\\}$, $q_{ij,k}^t$ is updated using the two residuals $r_{ik,t}$ and $r_{jk,t}$ according to the indicated operation. The length of a bar around the computed value of each $q_{ij,k}^t$ is proportional to magnitude and the green or red colors designate good or bad corresponding cycles, respectively. \n \n The weighted sum $h_{ij,t}$ aims to approximate $s_{ij}^*$ and this approximation is good when the green $q_{ij,k}^t$ bars are much longer than the red bars. The weight $w_{ij,t}$ is formed as a convex combination of $r_{ij,t}$ and $h_{ij,t}$. The rest of the procedure is similar to IRLS. }\\label{fig:demo}\n\\end{figure*}\n\n\\subsection{Cycle-Edge Message Passing (CEMP)}\n\\label{sec:cemp}\nThe CEMP procedure aims to estimate the corruption levels\n$\\{s_{ij}^*\\}_{ij \\in E}$\nfrom the cycle inconsistencies, which we define next. \nFor simplicity and for ease of computation, we work here with 3-cycles, that is, triangles in the graph.\nFor each edge $ij\\in E$, we independently sample with replacement 50 \nnodes that form 3-cycles with $i$ and $j$. 
That is, if $k$ is such a node then $ik$ and $jk$ are in $E$.\nWe denote this set of nodes by $C_{ij}$. We remark that the original version of CEMP in \\citet{cemp} uses all 3-cycles and can be slower.\nWe define the cycle inconsistency of the 3-cycle $ijk$ associated with edge $ij$ and $k \\in C_{ij}$ as follows\n\\begin{equation}\n\\label{eq:def_dL}\nd_{ij,k} := d(g_{ij}g_{jk}g_{ki}, e), \\quad k\\in C_{ij}, ij\\in E.\n\\end{equation}\n\n\nThe idea of CEMP is to iteratively estimate each corruption level $s_{ij}^*$ for $ij \\in E$ from a weighted average of the cycle inconsistencies $\\{d_{ij,k}\\}_{k \\in C_{ij}}$. \nTo motivate this idea, we assume the noiseless adversarial corruption model\nand formulate the following proposition whose easy proof appears in \\citet{cemp}.\n\\begin{proposition}\\label{prop:good cycle}\nIf $s_{ik}^*=s_{jk}^*=0$, that is, $ik$, $jk \\in E_g$, then $$s_{ij}^*=d_{ij,k}.$$\n\\end{proposition}\nIf the condition of the above proposition holds, we call $ijk$ a good cycle with respect to $ij$,\notherwise we call it a bad cycle.\n\n\nCEMP also aims to estimate the conditional probability $p_{ij,k}^t$ that the cycle $ijk$ is good, so $s_{ij}^*=d_{ij,k}.$ The conditioning is on the estimates of corruption levels computed in the previous iteration. CEMP uses this probability as a weight for the cycle inconsistency $d_{ij,k}$. The whole weighted sum\nthus aims to estimate the conditional expectation of $s_{ij}^*$ at iteration $t$. This estimate is denoted by $s_{ij,t}$.\nThe iteration of CEMP thus contains two main steps:\n1) Computation of the weight $p_{ij,k}^t$ of $d_{ij,k}$; 2) Computation of $s_{ij,t}$ as a weighted average of the cycle inconsistencies $\\{d_{ij,k}\\}_{k \\in C_{ij}}$. In the former stage messages are passed from edges to cycles and in the latter stage messages are passed from cycles to edges. The simple procedure is summarized in Algorithm~\\ref{alg:cemp}. We formulate it in generality for a compact group $\\mathcal G$ with \na bi-invariant metric $d$ \nand a graph $G([n],E)$, for which the cycle inconsistencies, $\\{d_{ij,k}\\}_{ij\\in E, k\\in C_{ij}}$, were computed in advance.\nOur default parameters are later specified in Section \\ref{sec:implementation}. For generality, we write $|C_{ij}|$ instead of 50. \\newpage\n\\begin{algorithm}[h]\n\\caption{CEMP \\cite{cemp}}\n\\label{alg:cemp}\n\\begin{algorithmic}\n\\REQUIRE $\\{d_{ij,k}\\}_{ij\\in E, k\\in C_{ij}}$, time step $T$, increasing $\\{\\beta_t\\}_{t=0}^T$\\\\\n\\STATE \\textbf{Steps:}\n\\STATE $s_{ij,0} = \\frac{1}{|C_{ij}|}\\sum_{k\\in C_{ij}} d_{ij,k} $ \\hspace*{\\fill} $ij\\in E$\n\\FOR {$t=0:T$}\n\\STATE \\emph{Message passing from edges to cycles:}\n\\STATE $p_{ij,k}^{t} = \\exp\\left(-\\beta_t(s_{ik,t}+s_{jk,t})\\right)$ \\hspace*{\\fill} $k\\in C_{ij},\\, ij\\in E$\n\\STATE \\emph{Message passing from cycles to edges:}\n\\STATE $s_{ij,t+1}\n =\\frac{1}{Z_{ij,t}}\\sum\\limits_{k\\in C_{ij}}p_{ij,k}^t d_ {ij,k}$, \n\\hspace*{\\fill} $ij\\in E$\n\\STATE\nwhere $Z_{ij,t}=\\sum\\limits_{k\\in C_{ij}}p_{ij,k}^t$ is a normalization factor\n\\ENDFOR\n\\ENSURE $\\{s_{ij,T}\\}_{ij\\in E}$\n\\end{algorithmic}\n\\end{algorithm}\n\nAlgorithm \\ref{alg:cemp} can be explained in two different ways \\cite{cemp}. First of all, it can be theoretically guaranteed to be robust to adversarial corruption and stable to low level of noise (see Theorem 5.4 in \\citet{cemp}). Second of all, our above heuristic explanation can be made more rigorous using some statistical assumptions. 
Such assumptions are common in other message passing algorithms in statistical mechanics formulations and we thus find it important to motivate them here. We remark that our statistical assumptions are weaker than those of previous works on message passing \\cite{AMP_Donoho,BP}, and in particular, those for group synchronization \\cite{AMP_compact,Zach2010}. We also remark that they are not needed for establishing the above mentioned theory.\n\n\n\nOur first assumption is that $ijk$ is a good cycle if and only if $s_{ij}^*=d_{ij,k}$. Proposition \\ref{prop:good cycle} implies the only if part, but the other part is generally not true. However, under special random corruption models (e.g., models in \\citet{wang2013exact,ozyesil2015robust}), the assumed equivalence holds with probability $1$. \nWe further assume that $\\{s_{ij}^*\\}_{ij \\in E}$ and $\\{s_{ij,t}\\}_{ij \\in E}$ are both i.i.d.~random variables and that for any $ij \\in E$, $s_{ij}^{*}$ is independent of $s_{kl,t}$ for $kl \\neq ij \\in E$. We further assume that for any $ij\\in E$\n\\begin{equation}\n\\label{eq:pr_f}\n\\Pr(s_{ij}^*=0|s_{ij,t}=x) = \\exp(-\\beta_t x).\n\\end{equation}\nWe also assume the existence of good cycle $ijk$ for any $ij\\in E$. \n\nIn view of these assumptions, in particular the i.i.d.~sampling and \\eqref{eq:pr_f}, we obtain that the expression for $p_{ij,k}^t$ used in Algorithm \\ref{alg:cemp} coincides with the conditional probability that $ijk$ is a good cycle, that is, the conditional probability that $s_{ik}^*=s_{jk}^*=0$:\n\\begin{align}\\label{eq:pijk}\n p_{ij,k}^t &= \\Pr(s_{ik}^*=s_{jk}^*=0|\\{s_{ab,t}\\}_{ab\\in E})\\nonumber\\\\\n &= \\Pr(s_{ik}^*=0|s_{ik,t})\\Pr(s_{jk}^*=0|s_{jk,t})\\\\ \\nonumber\n &=\\exp(-\\beta_t (s_{ik,t} +s_{jk,t})).\n\\end{align}\n\nUsing the definition of conditional expectation, \n the equivalence assumption, the above i.i.d.~sampling assumptions and \\eqref{eq:pr_f},\n we show that the expression for $s_{ij,t}$ used in Algorithm \\ref{alg:cemp} coincides with the conditional expectation of $s_{ij}^*$:\n \\begin{align}\\label{eq:sijt}\n \n &\\mathbb E\\left(s_{ij}^*|\\{s_{ab,t}\\}_{ab\\in E}\\right)\\nonumber\\\\\n &=\\frac{1}{Z_{ij,t}}\\sum_{k\\in C_{ij}}\\Pr\\left(s_{ij}^*=d_{ij,k}|\\{s_{ab,t}\\}_{ab\\in E}\\right)d_{ij,k}\\nonumber\n \\end{align}\n \\begin{align}\n &=\\frac{1}{Z_{ij,t}}\\sum_{k\\in C_{ij}}\\Pr\\left(s_{ik}^*=s_{jk}^*=0|\\{s_{ab,t}\\}_{ab\\in E}\\right)d_{ij,k} \\nonumber\\\\\n &=\\frac{1}{Z_{ij,t}}\\sum_{k\\in C_{ij}}\\exp\\left(-\\beta_t(s_{ik,t}+s_{jk,t})\\right)d_{ij,k}\n \\\\ &= \n \\frac{1}{Z_{ij,t}}\\sum_{k\\in C_{ij}} p_{ij,k}^t d_{ij,k}.\\nonumber\n \\end{align}\nNote that our earlier motivation of Algorithm\n \\ref{alg:cemp} assumed both\n \\eqref{eq:pijk} and \\eqref{eq:sijt}.\n\nDemonstration of a procedure similar to CEMP, but with different notation, appears in the lower part of Figure \\ref{fig:demo}.\n\n\n\\subsection{General Formulation of MPLS}\n\\label{sec:mpls2}\nMPLS uses the basic ideas of CEMP in order to robustly estimate the residuals $\\{r_{ij,t}\\}_{ij \\in E}$ and weights $\\{w_{ij,t}\\}_{ij \\in E}$ of the IRLS scheme as well as to carefully initialize this scheme. It also incorporates a novel truncation idea. We explain in this and the next section how these new ideas address the drawbacks of the common IRLS procedure, which was reviewed in Section \\ref{sec:issue}.\nWe sketch MPLS in Algorithm \\ref{alg:MPLS}, demonstrate it in Figure \\ref{fig:demo} and explain it below. 
For generality, we assume in this section a compact group $\\mathcal G$, \na bi-invariant metric $d$ on $\\mathcal G$ and a graph $G([n],E)$ with given relative measurements and cycle inconsistencies computed in advance.\nOur default parameters are later specified in Section \\ref{sec:implementation}.\n\nInstead of using the traditional IRLS reweighting function $F(x)$ explained in Section \\ref{sec:issue}, we use its truncated version $F_{\\tau}(x)=F(x)\\mathbf{1}_{\\{x\\leq \\tau\\}}+10^{-8}\\mathbf{1}_{\\{x> \\tau\\}}$ with a parameter $\\tau>0$. We decrease $\\tau$ as the iteration number increases in order to avoid overweighing outliers. By doing this we aim to address the third drawback of IRLS mentioned in Section \\ref{sec:issue}. We remark that the truncated function is $F(x)\\mathbf{1}_{\\{x\\leq \\tau\\}}$ and the additional term $10^{-8}\\mathbf{1}_{\\{x> \\tau\\}}$ is needed to ensure that the graph with weights resulting from $F_{\\tau}$ is connected.\n\n\n\n\n\\begin{algorithm}[h]\n\\caption{Message Passing Least Squares (MPLS)}\\label{alg:MPLS}\n\\begin{algorithmic}\n\\REQUIRE $\\{g_{ij}\\}_{ij\\in E}$, $\\{d_{ij,k}\\}_{k\\in C_{ij}}$, nonincreasing $\\{\\tau_t\\}_{t\\geq 0}$, increasing $\\{\\beta_t\\}_{t=0}^T$, decreasing $\\{\\alpha_t\\}_{t\\geq 1}$\n\\STATE \\textbf{Steps:}\n\\STATE Compute $\\{s_{ij,T}\\}_{ij\\in E}$ by CEMP\n\\STATE $w_{ij,0}=F_{\\tau_0}(s_{ij,T})$\n\\hspace*{\\fill} $ij\\in E$\n\\STATE $t=0$\n\\WHILE {not convergent}\n\\STATE $t=t+1$\n\\STATE $\\{g_{i,t}\\}_{i\\in [n]}=\\operatorname*{arg\\,min}\\limits_{g_i\\in\\mathcal G}\\sum\\limits_{ij\\in E} w_{ij,t-1}d^2(g_{ij},g_ig_j^{-1})$\n\n\\STATE $r_{ij,t}=d(g_{ij}, g_{i,t}g_{j,t}^{-1})$\n\\hspace*{\\fill} $ij\\in E$\n\n\\STATE $q_{ij,k}^{t} = \\exp(-\\beta_T(r_{ik,t}+r_{jk,t}))$\n\\hspace*{\\fill} $k\\in C_{ij},\\, ij\\in E$\n\n\\STATE $h_{ij,t}\n =\\frac{\\sum_{k\\in C_{ij}}q_{ij,k}^t d_ {ij,k}}{\\sum_{k\\in C_{ij}}q_{ij,k}^t}$\n\\hspace*{\\fill} $ij\\in E$\n\\STATE $w_{ij,t}=F_{\\tau_t}(\\alpha_t h_{ij,t}+(1-\\alpha_t)r_{ij,t})$\n\\hspace*{\\fill} $ij\\in E$\n\\ENDWHILE\n\\ENSURE $\\left\\{g_{i,t}\\right\\}_{i\\in [n]}$\n\\end{algorithmic}\n\\end{algorithm}\nThe initial step of the algorithm estimates the corruption levels $\\{s_{ij}^*\\}_{ij \\in E}$ by CEMP. The initial weights for the IRLS procedure \nfollow \\eqref{eq:weight_irls} with additional truncation. \nAt each iteration, the group ratios $\\{g_{i,t}\\}_{i\\in [n]}$ are estimated from the weighted least squares procedure in \\eqref{eq:irls}. However, the weights $w_{ij,t}$ are updated in a very different way. First of all, for each $ij \\in E$ the corruption level $s_{ij}^*$ is re-estimated in two different ways and a convex combination of the two estimates is taken. The first estimate is a residual $r_{ij,t}$ computed with the newly updated estimates $\\{g_{i,t}\\}_{i\\in [n]}$. This is the error of approximating the given measurement $g_{ij}$ by the newly estimated group ratio. The other estimate practically applies CEMP to re-estimate the corruption levels. For edge $ij \\in E$, the latter estimate of $s_{ij}^*$ is denoted by $h_{ij,t}$. 
\nFor interpretation, we can replace \\eqref{eq:pr_f} with \n$\\Pr(s_{ij}^*|r_{ij,t})=\\exp(-\\beta_T x)$ and use it to derive analogs of \\eqref{eq:pijk}\nand \n\\eqref{eq:sijt}.\nUnlike CEMP, we use the single parameter, $\\beta_T$, as we assume that CEMP provides a sufficiently good initialization.\nAt last, a similar weight as in \\eqref{eq:weight_irls}, but truncated, is applied to the combined estimate $\\alpha_t h_{ij,t}+(1-\\alpha_t)r_{ij,t}$. \n\n\nWe remark that utilizing the estimate $h_{ij,t}$ for the corruption level addresses the first drawback of IRLS discussed in Section \\ref{sec:issue}. Indeed, assume the case where $ij\\in E_b$ and $r_{ij,t}$ is close to 0. Here, $w_{ij,t}$ computed by IRLS is relatively large; however, since $ij\\in E_b$, $w_{ij,t}$ needs to be small. Unlike $r_{ij,t}$ in IRLS, we expect that $h_{ij,t}$ in MPLS should not be too small as long as for some $k\\in C_{ij}$, $d_{ij,k}$ are sufficiently large. This happens as long as there exists some $k\\in C_{ij}$ for which the cycle $ijk$ is good. Indeed, in this case $s_{ij}^*$ is sufficiently large and for good cycles $d_{ij,k}=s_{ij}^*$.\n\n\nWe further remark that $h_{ij,t}$ is a good approximation of $s_{ij}^*$ under certain conditions. For example, if for all $k\\in C_{ij}$, $r_{ik,t}\\approx s_{ik}^*$ and $r_{jk,t}\\approx s_{jk}^*$, then plugging in the definition of $p_{ij,k}^t$ to the expression of $h_{ij,t}$, using the fact that $\\beta_T$ is sufficiently large and at last applying Proposition \\ref{prop:good cycle}, we obtain that\n\\begin{align}\n h_{ij,t}\n &= \\sum_{k\\in C_{ij}}\\frac{\\exp(-\\beta_T(r_{ik,t}+r_{jk,t}))}{\\sum_{k\\in C_{ij}}\\exp(-\\beta_T(r_{ik,t}+r_{jk,t}))}d_ {ij,k} \\nonumber\\\\\n &\\approx \\sum_{k\\in C_{ij}}\\frac{\\exp(-\\beta_T(s_{ik}^*+s_{jk}^*)) }{\\sum_{k\\in C_{ij}}\\exp(-\\beta_T(s_{ik}^*+s_{jk}^*))} d_ {ij,k} \\\\\n &\\approx \\sum_{k\\in C_{ij}}\\frac{\\mathbf{1}_{\\{ijk \\text{ is a good cycle}\\}}}{\\sum_{k\\in C_{ij}}\\mathbf{1}_{\\{ijk \\text{ is a good cycle}\\}}} d_ {ij,k}=s_{ij}^*.\\nonumber\n\\end{align}\nThis intuitive argument for a restricted case conveys the idea that ``local good information'' can be used to estimate $s_{ij}^*$. \nThe theory of CEMP \\cite{cemp} shows that under weaker conditions such information can propagate through the whole graph within a few iterations, but we cannot extend it to MPLS.\n\n\nIf the graph $G([n], E)$ is dense with sufficiently many good cycles, then we expect that this good information can propagate in few iterations and that $h_{ij,t}$ will have a significant advantage over $r_{ij,t}$. However, in real scenarios of rotation synchronization in SfM, one may encounter sparse graphs, which may not have enough cycles and, in particular, not enough good cycles. In this case, utilizing $h_{ij,t}$ is mainly useful in the early iterations of the algorithm. On the other hand, when $\\{g_{i,t}\\}_{i \\in [n]}$ are close to $\\{g_i^*\\}_{i \\in [n]}$, $\\{r_{ij,t}\\}_{i \\in [n]}$ will be sufficiently close to $\\{s_{ij}^*\\}_{i \\in [n]}$. Aiming to address rotation synchronization, we decrease $\\alpha_t$, the weight of $h_{ij,t}$, with $t$. In other applications, different choices of $\\alpha_t$ can be used \\cite{robust_multi_object2020}.\n\n\nThe second drawback of IRLS, discussed in Section \\ref{sec:issue}, is the possible difficulty of implementing the weighted least squares step of \\eqref{eq:irls}. 
This issue is application-dependent, and since in this work we focus on rotation synchronization (equivalently, $SO(3)$ synchronization), we show in the next subsection how MPLS can deal with the above issue in this specific problem. \nNevertheless, we claim that our framework can also be applied to other compact group synchronization problems and we demonstrate this claim in a follow up work \\cite{robust_multi_object2020}.\n\n\\subsection{MPLS for $SO(3)$ synchronization} \n\\label{sec:so3}\nRotation synchronization, or $SO(3)$ synchronization, aims to solve 3D rotations $\\{\\boldsymbol{R}_i^*\\}_{i\\in [n]} \\in SO(3)$ from measurements $\\{\\boldsymbol{R}_{ij}\\}_{ij\\in E} \\in SO(3)$ of the 3D relative rotations $\\{\\boldsymbol{R}_i^*\\boldsymbol{R}_j^{*-1}\\}_{ij\\in E} \\in SO(3)$. \nThroughout the rest of the paper, we use the following normalized geodesic distance for $\\boldsymbol{R}_1,\\boldsymbol{R}_2 \\in SO(3)$:\n\\begin{equation}\n\\label{eq:geo_distance}\n d(\\boldsymbol{R}_1,\\boldsymbol{R}_2)=\\|\\log(\\boldsymbol{R}_1\\boldsymbol{R}_2^{-1})\\|_F\/(\\sqrt2\\pi),\n\\end{equation}\nwhere $\\log$ is the matrix logarithm and the normalization factor ensures that the diameter of $SO(3)$ is $1$. \nWe provide some relevant preliminaries of the Riemannian geometry of $SO(3)$ in Section \\ref{sec:prelim_riem} and then describe the implementation of MPLS for $SO(3)$, which we refer to as MPLS-$SO(3)$, in Section \\ref{sec:mpls_so3_details}.\n\n\\subsubsection{Preliminaries: $SO(3)$ and $\\mathfrak{so}(3)$} \\label{sec:prelim_riem}\nWe note that $SO(3)$ is a Lie group, and its corresponding Lie algebra, $\\mathfrak{so}(3)$, is the space of all skew symmetric matrices, which is isomorphic to $\\mathbb R^3$. For each $\\boldsymbol{R}\\in SO(3)$, its corresponding element in $\\mathfrak{so}(3)$ is $\\boldsymbol\\Omega=\\log(\\boldsymbol{R})$, where $\\log$ denotes matrix logarithm. Each $\\boldsymbol\\Omega\\in \\mathfrak{so}(3)$ can be represented as $[\\boldsymbol{\\omega}]_{\\times}$ for some $\\boldsymbol\\omega =(\\omega_1, \\omega_2, \\omega_3)^T \\in \\mathbb R^3$ in the following way:\n\\begin{align*}\n [\\boldsymbol{\\omega}]_{\\times} :=\n \\left(\\begin{array}{ccc}\n 0 & - \\omega_3 & \\omega_2 \\\\\n \\omega_3 & 0& -\\omega_1 \\\\\n -\\omega_2 & \\omega_1 & 0\n \\end{array}\\right).\n\\end{align*}\nIn other words, we can map any $\\boldsymbol\\omega\\in \\mathbb R^3$ to \n$\\boldsymbol\\Omega=[\\boldsymbol{\\omega}]_{\\times} \\in \\mathfrak{so}(3)$ and $\\boldsymbol{R}=\\exp([\\boldsymbol{\\omega}]_{\\times}) \\in SO(3)$, where $\\exp$ denotes the matrix exponential function. 
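\n\nThese maps are easy to realize numerically. The following short Python sketch (an illustration under the stated conventions, not part of the MPLS implementation) computes the hat map $[\\boldsymbol{\\omega}]_{\\times}$, the exponential and logarithm between $\\mathfrak{so}(3)$ and $SO(3)$, and the normalized geodesic distance of \\eqref{eq:geo_distance}.
\\begin{verbatim}
# Illustrative sketch: so(3) <-> SO(3) maps and the normalized
# geodesic distance d(R1, R2) = ||log(R1 R2^{-1})||_F / (sqrt(2) pi).
import numpy as np
from scipy.linalg import expm, logm

def hat(omega):
    # Map omega in R^3 to the skew-symmetric matrix [omega]_x in so(3).
    w1, w2, w3 = omega
    return np.array([[0.0, -w3, w2],
                     [w3, 0.0, -w1],
                     [-w2, w1, 0.0]])

def exp_so3(omega):
    # Matrix exponential of [omega]_x, an element of SO(3).
    return expm(hat(omega))

def log_so3(R):
    # Matrix logarithm of R; real part taken for numerical stability.
    return np.real(logm(R))

def geodesic_distance(R1, R2):
    # Normalized geodesic distance; the diameter of SO(3) equals 1.
    return np.linalg.norm(log_so3(R1 @ R2.T), 'fro') / (np.sqrt(2) * np.pi)

R1 = exp_so3(np.array([0.1, -0.2, 0.3]))
R2 = exp_so3(np.array([0.0, 0.4, 0.1]))
print(geodesic_distance(R1, R2))   # value in [0, 1]
print(geodesic_distance(R1, R1))   # approximately 0
\\end{verbatim}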
We remark that geometrically $\\boldsymbol\\omega$ is the tangent vector at $\\boldsymbol I$ of the geodesic path from $\\boldsymbol I$ to $\\boldsymbol{R}$.\n\n\n\\subsubsection{Details of MPLS-$SO(3)$}\n\\label{sec:mpls_so3_details}\nWe note that in order to adapt MPLS to the group $SO(3)$, we only need a specific algorithm to solve the following formulation of the weighted least squares problem at iteration $t$:\n\\begin{align}\n \\nonumber &\\min\\limits_{\\boldsymbol{R}_{i,t}\\in SO(3)}\\sum\\limits_{ij\\in E} w_{ij,t}d^2(\\boldsymbol{R}_{ij},\\boldsymbol{R}_{i,t}\\boldsymbol{R}_{j,t}^{-1})\\\\\n =& \\min\\limits_{\\boldsymbol{R}_{i,t}\\in SO(3)}\\sum\\limits_{ij\\in E} w_{ij,t}d^2(\\boldsymbol{I}, \\boldsymbol{R}_{i,t}^{-1}\\boldsymbol{R}_{ij}\\boldsymbol{R}_{j,t}),\\label{eq:wlsSO3}\n\\end{align}\nwhere the last equality follows from the bi-invariance of $d$.\nThe constraints on orthogonality and determinant of $\\boldsymbol{R}_i$ are non-convex. If one relaxes those constraints, then the solution of the least squares problem in the relaxed Euclidean space often lies far away from the embedding of $SO(3)$ into that space, even for an appropriate choice of the metric $d$. \nFor this reason, we follow the common choice of $d$ according to \\eqref{eq:geo_distance} and implement the Lie-algebraic Averaging (LAA) procedure \\cite{Govindu04_Lie, ChatterjeeG13_rotation, L12,consensusSO3}. \nWe review LAA, explain why it may be problematic and why our overall implementation may overcome its problems. \nLAA aims to move from $\\boldsymbol{R}_{i,t-1}$ to $\\boldsymbol{R}_{i,t}$ along the manifold using the right group action $\\boldsymbol{R}_{i,t}=\\boldsymbol{R}_{i,t-1}\\Delta \\boldsymbol{R}_{i,t}$, where \n$\\Delta \\boldsymbol{R}_{i,t}\\in SO(3)$. \nFor this purpose, it defines $\\Delta\\boldsymbol{R}_{ij,t}=\\boldsymbol{R}_{i,t-1}^{-1}\\boldsymbol{R}_{ij}\\boldsymbol{R}_{j,t-1}$\nso that \n\\begin{multline*}\n(\\Delta \\boldsymbol{R}_{i,t})^{-1}\\Delta \\boldsymbol{R}_{ij,t} \\Delta \\boldsymbol{R}_{j,t} = \\\\(\\Delta \\boldsymbol{R}_{i,t})^{-1}\\boldsymbol{R}_{i,t-1}^{-1}\\boldsymbol{R}_{ij}\\boldsymbol{R}_{j,t-1} \\Delta \\boldsymbol{R}_{j,t}=\\boldsymbol{R}_{i,t}^{-1}\\boldsymbol{R}_{ij}\\boldsymbol{R}_{j,t} \n\\end{multline*}\nand \\eqref{eq:wlsSO3} \ncan be transformed into the following problem, which is still hard to solve:\n\\begin{equation}\\label{eq:deltaR}\n \\min\\limits_{\\Delta \\boldsymbol{R}_{i,t}\\in SO(3)}\\sum\\limits_{ij\\in E} w_{ij,t}d^2(\\boldsymbol I,(\\Delta \\boldsymbol{R}_{i,t})^{-1}\\Delta \\boldsymbol{R}_{ij,t} \\Delta \\boldsymbol{R}_{j,t}).\n\\end{equation}\nLAA then maps $\\{\\Delta \\boldsymbol{R}_{i,t}\\}_{i\\in [n]}$ and $\\{\\Delta \\boldsymbol{R}_{ij,t}\\}_{ij\\in E}$ to the tangent space of $\\boldsymbol{I}$ by $\\Delta \\boldsymbol\\Omega_{i,t}=\\log \\Delta \\boldsymbol{R}_{i,t}$\nand $\\Delta \\boldsymbol\\Omega_{ij,t}=\\log \\Delta \\boldsymbol{R}_{ij,t}$.\nApplying \\eqref{eq:geo_distance}, the fact that the Riemannian logarithmic map, which is represented by $\\log$, preserves the geodesic distance, and a ``naive approximation'' in which $\\log$ is treated as if it were additive over products, one can approximate $d(\\boldsymbol I,(\\Delta \\boldsymbol{R}_{i,t})^{-1}\\Delta \\boldsymbol{R}_{ij,t} \\Delta \\boldsymbol{R}_{j,t})$ in the tangent space.
Therefore, \nLAA uses the following approximation \n\\begin{multline}\n\\label{eq:app_laa}\n d(\\boldsymbol{I},(\\Delta \\boldsymbol{R}_{i,t})^{-1}\\Delta \\boldsymbol{R}_{ij,t}\\Delta \\boldsymbol{R}_{j,t}) \n = \\\\\n \\|\\log((\\Delta \\boldsymbol{R}_{i,t})^{-1}\\Delta \\boldsymbol{R}_{ij,t} \\Delta \\boldsymbol{R}_{j,t})\\|_F\/(\\sqrt2\\pi)\n \\approx \\\\\n\\|-\\log(\\Delta \\boldsymbol{R}_{i,t})+\\log(\\Delta \\boldsymbol{R}_{ij,t})+\\log(\\Delta \\boldsymbol{R}_{j,t})\\|_F\/(\\sqrt2\\pi) = \\\\\n\\|\\Delta \\boldsymbol\\Omega_{i,t}-\\Delta \\boldsymbol\\Omega_{j,t}-\\Delta \\boldsymbol\\Omega_{ij,t}\\|_F\/(\\sqrt2\\pi).\n\\end{multline}\nConsequently, LAA transforms \\eqref{eq:deltaR} as follows:\n\\begin{equation}\\label{eq:delta_omega}\n \\min\\limits_{\\Delta \\boldsymbol\\Omega_{i,t}\\in\\mathfrak{so}(3)}\\sum\\limits_{ij\\in E} w_{ij,t}\\|\\Delta \\boldsymbol\\Omega_{i,t}-\\Delta \\boldsymbol\\Omega_{j,t}-\\Delta \\boldsymbol\\Omega_{ij,t}\\|^2_F.\n\\end{equation}\nHowever, the approximation in \\eqref{eq:app_laa}\nis only valid when $\\Delta \\boldsymbol{R}_{ij,t}$, $\\Delta \\boldsymbol{R}_{i,t}$, $\\Delta \\boldsymbol{R}_{j,t}$ $\\approx \\boldsymbol{I}$, \nwhich is unrealistic. \n\nOne can check that the conditions $\\boldsymbol{R}_{ij}\\approx \\boldsymbol{R}_i^*\\boldsymbol{R}_j^{*-1}$ ($s_{ij}^*\\approx 0$), $\\boldsymbol{R}_{i,t} \\approx \\boldsymbol{R}_i^*$ and $\\boldsymbol{R}_{j,t} \\approx \\boldsymbol{R}_j^*$ for $t \\geq 0$ imply that $\\Delta \\boldsymbol{R}_{ij,t}$, $\\Delta \\boldsymbol{R}_{i,t}$, $\\Delta \\boldsymbol{R}_{j,t}$ $\\approx \\boldsymbol{I}$ and thus \nimply the validity of the approximation in \\eqref{eq:app_laa}. Therefore, to make LAA work we need to give large weights to edges $ij$ with small $s_{ij}^*$ and to provide a good initialization $\\{\\boldsymbol{R}_{i,0}\\}_{i\\in [n]}$ that is reasonably close to $\\{\\boldsymbol{R}_{i}^*\\}_{i\\in [n]}$, so that $\\{\\boldsymbol{R}_{i,t}\\}_{i\\in [n]}$ for all $t\\geq 1$ remain close to the ground truth. \nOur heuristic argument is that good approximation by CEMP, followed by MPLS, addresses these requirements. Indeed, to address the first requirement, we note that good estimation by CEMP can result in $s_{ij,T} \\approx s_{ij}^*$ and by the nature of $F$, $w_{ij,0}$ is large when $s_{ij,T}$ is small. \nAs for the second requirement, \nwe assign the weights $s_{ij,T}$, obtained by CEMP, to each $ij\\in E$ and find the minimum spanning tree (MST) for the weighted graph by Prim's algorithm. We initialize the rotations by fixing $\\boldsymbol{R}_{1,0}=\\boldsymbol{I}$, multiplying relative rotations along the computed MST and consequently obtaining $\\boldsymbol{R}_{i,0}$ for any node $i$. We summarize our MPLS version of rotation averaging in Algorithm~\\ref{alg:SO3}.
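\nFor readers who prefer code to pseudocode, the following Python sketch outlines one iteration of the scheme just described: the tangent-space weighted least squares step \\eqref{eq:delta_omega}, the residuals $r_{ij,t}$, the message-passing estimates $h_{ij,t}$ and the truncated reweighting. It is only a simplified illustration of the steps summarized in Algorithm~\\ref{alg:SO3}, not the implementation used in the experiments; in particular, the dense least squares solver, the fixed truncation fraction \\texttt{keep} and the assumption that every edge lies in at least one 3-cycle are simplifications made solely for this sketch.\n\\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(S):
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def mpls_so3_iteration(R, R_rel, edges, cycles, d, w, beta_T, alpha_t, keep=0.8):
    # R: node -> 3x3 rotation; R_rel: (i,j) -> measured R_ij; edges: list of (i,j);
    # cycles: (i,j) -> list of nodes k; d: (i,j,k) -> d_{ij,k}; w: (i,j) -> weight.
    nodes = sorted(R)
    idx = {i: m for m, i in enumerate(nodes)}

    # Tangent-space relative measurements Delta Omega_{ij,t}, stored as 3-vectors.
    dOm = {(i, j): vee(np.real(logm(R[i].T @ R_rel[(i, j)] @ R[j])))
           for (i, j) in edges}

    # Weighted least squares in the tangent space (eq. delta_omega), dense solve.
    A = np.zeros((len(edges), len(nodes)))
    b = np.zeros((len(edges), 3))
    for row, (i, j) in enumerate(edges):
        s = np.sqrt(w[(i, j)])
        A[row, idx[i]], A[row, idx[j]], b[row] = s, -s, s * dOm[(i, j)]
    Om = np.linalg.lstsq(A, b, rcond=None)[0]   # rows are Delta Omega_{i,t}

    # Rotation update and residuals r_{ij,t} (the sqrt(2) factors cancel).
    R_new = {i: R[i] @ expm(hat(Om[idx[i]])) for i in nodes}
    r = {(i, j): np.linalg.norm(Om[idx[i]] - Om[idx[j]] - dOm[(i, j)]) / np.pi
         for (i, j) in edges}
    def r_of(a, c):
        return r[(a, c)] if (a, c) in r else r[(c, a)]

    # Message passing: h_{ij,t} is a weighted average of the cycle inconsistencies.
    h = {}
    for (i, j) in edges:
        q = np.array([np.exp(-beta_T * (r_of(i, k) + r_of(j, k)))
                      for k in cycles[(i, j)]])
        dk = np.array([d[(i, j, k)] for k in cycles[(i, j)]])
        h[(i, j)] = float(q @ dk / q.sum())

    # Reweighting with F(x) = x^(-3/2), truncated above the keep-quantile.
    comb = {e: alpha_t * h[e] + (1.0 - alpha_t) * r[e] for e in edges}
    tau = np.quantile(list(comb.values()), keep)
    w_new = {e: 0.0 if comb[e] > tau else (comb[e] + 1e-12) ** (-1.5) for e in edges}
    return R_new, w_new
\\end{verbatim}\n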
\n\\begin{algorithm}[h]\n\\caption{MPLS-$SO(3)$}\\label{alg:SO3}\n\\begin{algorithmic}\n\\REQUIRE $\\{\\boldsymbol{R}_{ij}\\}_{ij\\in E}$, $\\{d_{ij,k}\\}_{k\\in C_{ij}}$, $\\{\\tau_t\\}_{t\\geq 0}$, $\\{\\beta_t\\}_{t=0}^T$, $\\{\\alpha_t\\}_{t\\geq 1}$\n\\STATE \\textbf{Steps:}\n\n\\STATE Compute $\\{s_{ij,T}\\}_{ij\\in E}$ by CEMP\n\\STATE Form an $n \\times n$ weight matrix $\\boldsymbol{W}$, where $W_{ij}=W_{ji}= s_{ij,T}$ for $ij\\in E$, and $W_{ij}=W_{ji}=0$ otherwise\n\\STATE $G([n],E_{ST})=$ minimum spanning tree of $G([n],W)$\n\\STATE $\\boldsymbol{R}_{1,0}=\\boldsymbol{I}$\n\\STATE find $\\{\\boldsymbol{R}_{i,0}\\}_{i>1}$ by $\\boldsymbol{R}_i=\\boldsymbol{R}_{ij}\\boldsymbol{R}_j$ for $ij\\in E_{ST}$ \n\\STATE $t=0$\n\\STATE $w_{ij,0}=F_{\\tau_0}(s_{ij,T})$ \n\\WHILE {not convergent}\n\\STATE $t=t+1$\n\\STATE $\\Delta \\boldsymbol\\Omega_{ij,t}=\\log(\\boldsymbol{R}_{i,t-1}^{-1}\\boldsymbol{R}_{ij}\\boldsymbol{R}_{j,t-1})$ \\hspace*{\\fill} $ij\\in E$\n\n\\STATE $\\{\\Delta \\boldsymbol\\Omega_{i,t}\\}_{i\\in [n]}=$\n\\STATE \\quad \\quad $\\operatorname*{arg\\,min}\\limits_{\\Delta \\boldsymbol\\Omega_{i,t}\\in\\mathfrak{so}(3)}\\sum\\limits_{ij\\in E} w_{ij,t}\\|\\Delta \\boldsymbol\\Omega_{i,t}-\\Delta \\boldsymbol\\Omega_{j,t}-\\Delta \\boldsymbol\\Omega_{ij,t}\\|^2_F$\n\\STATE $\\boldsymbol{R}_{i,t}=\\boldsymbol{R}_{i,t-1}\\exp(\\Delta \\boldsymbol\\Omega_{i,t})$ \\hspace*{\\fill} $i\\in [n]$\n\\STATE $r_{ij,t}=\\|\\Delta \\boldsymbol\\Omega_{i,t}-\\Delta \\boldsymbol\\Omega_{j,t}-\\Delta \\boldsymbol\\Omega_{ij,t}\\|_F\/(\\sqrt2\\pi)$ \\hspace*{\\fill} $ij\\in E$\n\n\\STATE $q_{ij,k}^{t} =\\exp(-\\beta_T(r_{ik,t}+r_{jk,t}))$\n\\hspace*{\\fill} $k\\in C_{ij},\\, ij\\in E$\n\\STATE $h_{ij,t}\n =\\frac{\\sum_{k\\in C_{ij}}q_{ij,k}^t d_ {ij,k}}{\\sum_{k\\in C_{ij}}q_{ij,k}^t}$ \\hspace*{\\fill} $ij\\in E$\n\\STATE $w_{ij,t}=F_{\\tau_t}(\\alpha_t h_{ij,t}+(1-\\alpha_t)r_{ij,t})$\n\\hspace*{\\fill} $ij\\in E$\n\\ENDWHILE\n\\ENSURE $\\left\\{\\boldsymbol{R}_{i,t}\\right\\}_{i\\in [n]}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsection{Computational Complexity}\n\\label{sec:complexity}\nCEMP requires the computation \nof $d_{ij,k}$ for $ij \\in E$ and $k\\in C_{ij}$. Its computational complexity per iteration is thus of order $O(|E|)$ as we use $|C_{ij}|=50$ for all $ij \\in E$. Since we advocate few iterations ($T=5$) of CEMP, or due to its fast convergence under special settings \\cite{cemp}, we can assume that its total complexity is $O(|E|)$.\nThe computational complexity of MPLS depends on the complexity of solving the weighted least squares problem, which depends on the group. For MPLS-$SO(3)$, the most expensive part is solving the weighted least squares problem in the tangent space, whose complexity is at most $O(n^3)$. This is thus also the complexity of MPLS-$SO(3)$ per iteration. Unlike CEMP, we have no convergence guarantees yet for MPLS.\n\n\n\n\\section{Numerical Experiments}\n\\label{sec:numerics}\nWe test the proposed MPLS algorithm on rotation synchronization, while comparing with state-of-the-art methods. We also try simpler ideas than MPLS that are based on the basic strategy of CEMP.\nAll computational tasks were implemented on a machine with 2.5GHz Intel i5 quad core processors and 8GB memory. \n\n\\subsection{Implementation}\n\\label{sec:implementation}\nWe use the following default parameters for Algorithm \\ref{alg:cemp}:\n$|C_{ij}|=50$ for $ij\\in E$; $T=5$; $\\beta_t=2^{t}$ and $t=0, \\ldots, 5$. 
If an edge is not contained in any 3-cycle, we set its corruption level as 1.\nFor MPLS-$SO(3)$, which we refer to in this section as MPLS, we use the above parameters of Algorithm \\ref{alg:cemp} and the following ones for $t\\geq 1$: \n $$\\alpha_t=1\/(t+1) \\ \\text{ and } \\ \\tau_{t}=\\inf_x\\left\\{\\hat P_t(x)> \\max\\{1-0.05t\\,,0.8\\}\\right\\}.$$\nHere, $\\hat P_t$ denotes the empirical distribution of \n$\\{\\alpha_t h_{ij,t}+$ $(1-\\alpha_t)r_{ij,t}\\}_{ij\\in E}$.\nThat is, for $t=0$, 1, 2, 3, we ignore $0\\%$, $5\\%$, $10\\%$, $15\\%$ of edges that have highest $\\alpha_t h_{ij,t}+$ $(1-\\alpha_t)r_{ij,t}$, and for $t \\geq 4$ we ignore $20\\%$ of such edges. \n$F(x)$ for MPLS is chosen as $x^{-3\/2}$ and it corresponds to $\\rho(x)=\\sqrt x$. For simplicity and consistency, we use these choices of parameters for all of our experiments. We remark that our choice of $\\beta_t$ in Algorithm \\ref{alg:cemp} is supported by the theory of \\citet{cemp}. \nWe found that MPLS is not so sensitive to its parameters. One can choose other values of $\\{\\beta_t\\}_{t\\geq 0}$, for example any geometric sequence with ratio 2 or less, and stop after several iterations. Similarly, one may replace 0.8 and 0.05\nin the definition of $\\tau_t$ with $0.7-0.9$ and $0.01-0.1$, respectively, and perform similarly on average.\n\nWe test two previous state-of-the-art IRLS methods: IRLS-GM \\cite{ChatterjeeG13_rotation} with $\\rho(x) = x^2\/(x^2+25)$, $F(x)=25\/(x^2+25)^2$\nand IRLS-$\\ell_{1\/2}$ \\cite{L12} with $\\rho(x)=\\sqrt x$, $F(x)=x^{-3\/2}$.\nWe use their implementation by \\citet{L12}. \n\nWe have also separately implemented the part of initializing the rotations \nof MPLS in Algorithm~\\ref{alg:SO3} and refer to it by CEMP+MST. Recall that it solves rotations by direct propagation along the minimum weighted spanning tree of the graph with weights obtained by \nAlgorithm \\ref{alg:cemp} (CEMP).\nWe also test the application of this initialization to \nthe main algorithms in \\citet{ChatterjeeG13_rotation} and \\citet{L12} and refer to the resulting methods by CEMP+IRLS-GM\nand CEMP+IRLS-$\\ell_{1\/2}$, respectively. We remark that the original algorithms initialize by a careful least absolute deviations minimization.\nWe use the convergence criterion\n$\\sum_{i\\in[n]}\\|\\Delta \\boldsymbol\\Omega_{i,t}\\|_F\/(\\sqrt2n)< 0.001$\nof \\citet{L12} for all the above algorithms.\n\nBecause the solution is determined up to a right group action, we align our estimated rotations $\\{\\hat\\boldsymbol{R}_i\\}$ with the ground truth ones $\\{ \\boldsymbol{R}_i^*\\}$. That is, we find a rotation matrix $\\boldsymbol{R}_\\text{align}$ so that $\\sum_{i\\in [n]}\\|\\hat\\boldsymbol{R}_i\\boldsymbol{R}_\\text{align}-\\boldsymbol{R}_i^*\\|_F^2$ is minimized. For synthetic data, we report the following mean estimation error in degrees: $180\\cdot\\sum_{i\\in [n]} d(\\hat \\boldsymbol{R}_i\\boldsymbol{R}_\\text{align}\\,, \\boldsymbol{R}_i^*)\/n$. For real data, we also report the median of $\\{180\\cdot d(\\hat \\boldsymbol{R}_i\\boldsymbol{R}_\\text{align}\\,, \\boldsymbol{R}_i^*)\\}_{i\\in [n]}$.\n\n\n\n\n\n\n\\subsection{Synthetic Settings}\nWe test the methods in the following two types of artificial scenarios. 
In both scenarios, the graph is generated by the Erd\\H{o}s-R\\'{e}nyi model $G(n,p)$ with $n=200$ and $p=0.5$.\n\\subsubsection{Uniform Corruption}\nWe consider the following random model for generating $\\boldsymbol{R}_{ij}$:\n\\begin{equation}\n \\boldsymbol{R}_{ij}=\\begin{cases}\n \\text{Proj}(\\boldsymbol{R}_{ij}^*+\\sigma \\boldsymbol{W}_{ij}),&\\text{w.p. } 1-q;\\\\\n \\tilde \\boldsymbol{R}_{ij}\\sim \\text{Haar}(SO(3)),& \\text{w.p. } q,\n \\end{cases}\n\\end{equation}\nwhere Proj denotes the projection onto $SO(3)$; $\\boldsymbol{W}_{ij}$ is a $3 \\times 3$ Wigner matrix whose elements are i.i.d.~standard normal; $\\sigma\\geq 0$ is a fixed noise level; $q$ is the probability that an edge is corrupted; and Haar$(SO(3))$ is the Haar probability measure on $SO(3)$. We clarify that for any $3 \\times 3$ matrix $\\boldsymbol{A}$, $\\text{Proj}(\\boldsymbol{A}) = \\operatorname*{arg\\,min}_{\\boldsymbol{R}\\in SO(3)} \\|\\boldsymbol{R}-\\boldsymbol{A}\\|_F$.\n\nWe test the algorithms with four values of $\\sigma$: $0$, $0.1$, $0.5$, and $1$. We average the mean error over 10 random samples from the uniform model and report it as a function of $q$ in Figure \\ref{fig:s1}.\n\n We note that MPLS consistently outperforms the other methods for all tested values of $q$ and $\\sigma$.\n In the noiseless case, MPLS exactly recovers the group ratios even when $70\\%$ of the edges are corrupted.\n It also nearly recovers them with $80\\%$ corrupted edges, where the estimation errors for IRLS-GM and IRLS-$\\ell_{1\/2}$ are higher than 30 degrees. MPLS is also shown to be stable under high levels of noise. Since all algorithms produce poor solutions when $q=0.9$, we only show results for $0\\leq q\\leq 0.8$.\n \n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8cm]{uniform.pdf}\n \\caption{Performance under uniform corruption. The mean error (in degrees) is plotted against the corruption probability $q$ for 4 values of $\\sigma$. }\n \\label{fig:s1}\n\\end{figure}\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8cm]{adv.pdf}\n \\caption{\n Performance under self-consistent corruption.
The mean error is plotted against the corruption probability $q$ for 4 values of $\\sigma$.}\n \\label{fig:adv}\n\\end{figure}\n\n\n\n\n\n\n\n\\begin{table*}[!htbp]\n\\centering\n\\resizebox{2\\columnwidth}{!}{\n\\renewcommand{\\arraystretch}{1.3}\n\\tabcolsep=0.1cm\n\\begin{tabular}{|l||c|c||c|c|c|c||c|c|c|c||c|c|c|c||c|c|c|c|}\n\\hline\nAlgorithms & \\multicolumn{2}{c||}{}& \\multicolumn{4}{c||}{IRLS-GM} &\n\\multicolumn{4}{c||}{IRLS-$\\ell_{\\frac12}$} &\n \\multicolumn{4}{c||}{CEMP+MST}&\n \\multicolumn{4}{c|}{MPLS} \\\\\n\\text{Dataset}& $n$ & $m$ & {\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$ \n& {\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$\n& {\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$ &{\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$\\\\\\hline\nAlamo& 564 & 71237 &\n3.64 & 1.30 & 14.2 & 10+8 & \n3.67 & 1.32 & 15.5 & 10+9 & \n4.05 & 1.62 & \\textbf{10.38} & 6 &\n\\textbf{3.44} & \\textbf{1.16} & 20.6 & 6+8\n\\\\\\hline\n\nEllis Island& 223 & 17309 &\n3.04 & 1.06 & 3.2 & 10+9 &\n2.71 & 0.93 & 2.8 & 10+13 &\n2.94 & 1.11 & \\textbf{2.4} & 6 &\n\\textbf{2.61} & \\textbf{0.88} & 4.0 & 6+11\n\\\\\\hline\n\nGendarmenmarkt& 655 & 32815&\n\\textbf{39.24} & \\textbf{7.07} & 6.5 & 10+14 &\n39.41 & 7.12 & 7.3 & 10+19 &\n45.33 & 8.62 & \\textbf{4.7} & 6 &\n44.94 & 9.87 & 17.8 & 6+25\n\\\\\\hline\n\nMadrid Metropolis & 315 & 14903 &\n5.30 & 1.78 & 3.8 & 10+30 &\n4.88 & 1.88 & 2.7 & 10+12 &\n5.10 & 1.66 & \\textbf{2.1} & 6 &\n\\textbf{4.65} & \\textbf{1.26} & 5.2 & 6+23\n\\\\\\hline\nMontreal N.D.& 442 & 44501 &\n\n1.25 & 0.58 & 6.5 & 10+6 &\n1.22 & 0.57 & 7.3 & 10+8 &\n1.33 & 0.79 & \\textbf{6.3} & 6 &\n\\textbf{1.04} & \\textbf{0.51} & 9.3 & 6+7\n\\\\\\hline\nNotre Dame & 547 & 88577 &\n\n2.63 & 0.78 & 17.2 & 10+7 &\n2.26 & 0.71 & 22.5 & 10+10 &\n2.35 & 0.94 & \\textbf{13.2} & 6 &\n\\textbf{2.06} & \\textbf{0.67} & 31.5 & 6+8\n\\\\\\hline\nNYC Library& 307 & 13814 &\n\n2.71 & 1.37 & 2.5 & 10+14 &\n2.66 & 1.30 & 2.6 & 10+15 &\n3.00 & 1.41 & \\textbf{1.9} & 6 &\n\\textbf{2.63} & \\textbf{1.24} & 4.5 & 6+14\n\\\\\\hline\nPiazza Del Popolo & 306 & 18915 &\n\n4.10 & 2.17 & 2.8 & 10+9 &\n3.99 & 2.09 & 3.1 & 10+13 &\n\\textbf{3.44} & \\textbf{1.57} & \\textbf{2.6} & 6 &\n3.73 & 1.93 & 3.5 & 6+3\n\\\\\\hline\nPiccadilly& 2031 & 186458 &\n 5.12 & 2.02 & 153.5 & 10+16 &\n 5.19 & 2.34 & 170.2 & 10+19 &\n 4.66 & 1.98 & \\textbf{45.8} & 6 &\n \\textbf{3.93} & \\textbf{1.81} & 191.9 & 6+21\n \n\\\\\\hline\nRoman Forum& 989 & 41836 &\n\n2.66 & 1.58 & 8.6 & 10+9 &\n2.69 & 1.57 & 11.4 & 10+17 &\n2.80 & 1.45 & \\textbf{6.1} & 6 &\n\\textbf{2.62} & \\textbf{1.37} & 8.8 & 6+8\n\n\\\\\\hline\nTower of London& 440 & 15918 &\n\n3.42 & 2.52 & 2.6 & 10+8 &\n3.41 & 2.50 & 2.4 & 10+12 &\n\\textbf{2.84} & \\textbf{1.57} & \\textbf{2.2} & 6 &\n3.16 & 2.20 & 2.7 & 6+7\n\n\\\\\\hline\nUnion Square& 680 & 17528 &\n\n6.77 & 3.66 & 5.0 & 10+32 &\n6.77 & 3.85 & 5.6 & 10+47 &\n7.47 & 3.64 & \\textbf{2.5} & 6 &\n\\textbf{6.54} & \\textbf{3.48} & 5.7 & 6+21\n\\\\\\hline\nVienna Cathedral& 770 & 87876 &\n\n8.13 & 1.92 & 28.3 & 10+13 &\n8.07 & \\textbf{1.76} & 45.4 & 10+23 &\n\\textbf{6.91} & 2.63 & \\textbf{13.1} & 6 &\n7.21 & 2.83 & 42.6 & 6 +19\n\n\\\\\\hline\nYorkminster & 410 & 20298 &\n\n2.60 & 1.59 & \\textbf{2.4} & 10+7 &\n\\textbf{2.45} & 1.53 & 3.3 & 10+9 &\n2.49 & \\textbf{1.37} & 2.8 & 6 &\n2.47 & 1.45 & 3.9 & 6+7\n\\\\\\hline\n\n\n\\end{tabular}}\n\\caption{Performance on the Photo Tourism datasets: $n$ and $m$ are the number of 
nodes and edges, respectively; $\\tilde e$ and $\\hat e$ indicate mean and median errors in degrees, respectively; runtime is in seconds; and the number of iterations is explained in the main text.}\\label{tab:real}\n\\end{table*}\n\n\n\\subsubsection{Self-Consistent Corruption}\nIn order to simulate self-consistent corruption, we independently draw from Haar($SO(3)$) two classes of rotations: $\\{\\boldsymbol{R}_i^*\\}_{i\\in [n]}$ and $\\{\\tilde \\boldsymbol{R}_i\\}_{i\\in [n]}$. We denote their corresponding relative rotations by $\\boldsymbol{R}_{ij}^*=\\boldsymbol{R}_i^*\\boldsymbol{R}_j^{*\\intercal}$ and $\\tilde\\boldsymbol{R}_{ij}=\\tilde\\boldsymbol{R}_i\\tilde\\boldsymbol{R}_j^{\\intercal}$ for $ij\\in E$. \nThe idea is to assign to edges in $E_g$ and $E_b$ relative rotations from two different classes, so cycle-consistency occurs in both\n$G([n],E_g)$ and $G([n],E_b)$. We also add noise to these relative rotations and assign them to the two classes according to a Bernoulli model, so that one class is more significant. \nMore specifically, for $ij \\in E$\n\\begin{equation}\n \\boldsymbol{R}_{ij}=\\begin{cases}\n \\text{Proj}(\\boldsymbol{R}^*_{ij}+\\sigma \\boldsymbol{W}_{ij}),&\\text{w.p. } 1-q;\\\\\n \\text{Proj}(\\tilde\\boldsymbol{R}_{ij}+\\sigma \\boldsymbol{W}_{ij}),& \\text{w.p. } q,\n \\end{cases}\n\\end{equation}\nwhere $q$, $\\sigma$, and $\\boldsymbol{W}_{ij}$ are the same as in the above uniform corruption model. We remark that the information-theoretic threshold for exact recovery when $\\sigma =0$ is $q=0.5$. That is, for $q\\geq 0.5$ there is no hope of exactly recovering $\\{\\boldsymbol{R}_{i}^*\\}_{i\\in [n]}$.\n\nWe test the algorithms with four values of $\\sigma$: $0$, $0.1$, $0.5$, and $1$. We average the mean error over 10 random samples from the self-consistent model and report it as a function of $q$ in Figure \\ref{fig:adv}.\nWe focus on values of $q$ approaching the information-theoretic bound $0.5$ ($q=0.4$, $0.45$ and $0.48$). We note that MPLS consistently outperforms the other algorithms and that when $\\sigma=0$ it can exactly recover the ground truth rotations even when $q=0.48$.\n\n\n\n\n\\subsection{Real Data}\nWe compare the performance of the different algorithms on the Photo Tourism datasets \\citep{1dsfm14}. Each of the 14 datasets consists of hundreds of 2D images of a 3D scene taken by cameras with different orientations and locations. For each pair of images of the same scene, we use the pipeline proposed by \\citet{ozyesil2015robust} to estimate the relative 3D rotations. The ground truth camera orientations are also provided. Table \\ref{tab:real}\ncompares the performance of IRLS-GM, IRLS-$\\ell_{1\/2}$, CEMP+MST and MPLS, while reporting \nmean and median errors, runtime and number of iterations. The number of iterations is the sum of the number of iterations to initialize the rotations and the number of iterations of the rest of the algorithm, where CEMP+MST only has iterations in the initialization step.\n\nMPLS achieves the lowest mean and median error on $9$ out of $14$ datasets with runtime comparable to both IRLS methods, while IRLS-GM only outperforms MPLS on the Gendarmenmarkt dataset. This dataset is relatively sparse and lacks cycle information. It contains a large amount of self-consistent corruption and none of the methods solve it reasonably well. Among the 4 tested methods, the fastest approach is CEMP+MST.\nIt achieves the shortest runtime on 13 out of 14 datasets.
Moreover, CEMP+MST is 3 times faster than other tested methods on the largest dataset (Piccadilly). We remark that CEMP+MST is able to achieve comparable results to common IRLS on most datasets, and has superior performance on 2 datasets, which have some perfectly estimated edges.\nIn summary, for most of the datasets, MPLS provides the highest accuracy and CEMP+MST obtains the fastest runtime. \n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe proposed a framework for solving group synchronization under high corruption and noise. \nThis general framework requires a successful solution of the weighted least squares problem, which depends on the group. For $SO(3)$, we explained how a well-known solution integrates well with our framework. We demonstrated state-of-the-art performance of our framework for $SO(3)$ synchronization. We have motivated our method as an alternative to IRLS and explained how it may overcome the limitations of IRLS when applied to group synchronization.\n\nThere are many directions to expand \nour work. One can carefully adapt and implement our proposed framework to other groups that occur in practice. \nOne may develop certain theoretical guarantees for convergence and exact recovery of MPLS.\n\n\\section*{Acknowledgement}\nThis work was supported by NSF award DMS-18-21266. We thank Tyler Maunu for his valuable comments\non an earlier version of this manuscript.\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nThe linear fractional maps from the complex plane into itself are among the very first objects of study in one-variable complex analysis since they have many good geometric properties (e.g. mapping real lines and circles among themselves) and they are also the only biholomorphisms of the unit disk and the Riemann sphere. It is thus very natural to look at the linear fractional maps in several complex variables and such explorations have been made by, for instance, Cowen-MacCluer~\\cite{CM} and Bisi-Bracci~\\cite{BB}. In their works, they have focused on the linear fractional maps that map the unit ball into itself. They obtained a variety of results for those linear fractional maps, including the classification, normal forms, and also fixed point theorems for such maps. From the point of view of function theory, every linear fractional self map of the unit ball also give rise to a composition operator in various function spaces of the unit ball and such an operator has been studied by many people, e.g. Cowen~\\cite{Co}, Bayart~\\cite{Ba} and Chen-Jiang~\\cite{CJ}.\n\nIn this article, we will try to extend the study of linear fractional self maps to a much more general class of domains, which in particular include the classical bounded symmetric domains of type I and also the so-called \\textit{generalized balls}. These two classes both contain the usual complex unit balls as special cases. Although here our major focus is on the generalized balls, our results should shed some light on the behavior of the linear fractional maps defined on the classical bounded symmetric domains of type-I since they are determined by the same set of matrices (as we will see in Theorem~\\ref{expansion matrix}).\n\nFor any two positive integers $p$ and $q$, let $H_{p,q}$ be the standard non-degenerate\nHermitian form of signature $(p,q)$ on $\\mathbb C^{p+q}$ where $p$ eigenvalues are $1$\nand $q$ eigenvalues are $-1$, represented by the matrix $\\begin{pmatrix}I_p&0\\\\0&-I_q\\end{pmatrix}$ under the standard coordinates.\nFor a positive integer $r< p+q$, denote by $Gr(r, \\mathbb C^{p+q})$ the Grassmannian of $r$-dimensional complex linear subspaces\n(or simply $r$-\\textit{planes}) of $\\mathbb C^{p+q}$.\nWhen $1\\leq r\\leq p$, we define the domain $D^r_{p,q}$ in $Gr(r, \\mathbb C^{p+q})$ \nto be the set of positive definite $r$-planes in $\\mathbb C^{p+q}$ with respect to $H_{p,q}$.\nWe call $D^r_{p,q}$ a {\\it generalized type-I domain}.\nThe generalized type-I domain $D_{p,q}^r$ is an $SU(p,q)$-orbit on $Gr(r, \\mathbb C^{p+q})$ under the natural action induced by that of $SL(p+q;\\mathbb C)$ on $Gr(r,\\mathbb C^{p+q})$. It is an example of \\textit{flag domains} in a general context, of which one can find a comprehensive reference \\cite{FHW}. Recently $D_{p,q}^r$ have been studied by Ng~\\cite{ng1, ng2} regarding the proper holomorphic mappings between them\nand by Kim~\\cite{Kim} regarding the CR maps on some CR manifolds in $\\partial D_{p,q}^r$. We remark that the case $r=p$ corresponds to the type-I classical bounded symmetric domains~\\cite{ng2}. 
On the other extreme, the case $r=1$, which will be of special interest to us, corresponds to the domains $D_{p,q}:=D^1_{p,q}$, which are called the generalized balls.\nIt follows immediately from our definition that $D_{p,q}$ can also be defined as the following domain on $\\mathbb P^{p+q-1}$:\n$$\nD_{p,q} = \\left\\{ [z_1,\\dots, z_{p+q}] \\in \\PP^{p+q-1} : |z_1|^2 +\\dots + |z_p|^2 > |z_{p+1}|^2 + \\dots + |z_{p+q}|^2\\right\\}.\n$$\nWhen $p=1$, it is biholomorphic to the unit ball in the Euclidean space $\\CC^q$.\n\nThe generalized balls are among the simplest kinds of domains on the projective space and their boundaries are smooth Levi non-degenerate (but not pseudoconvex in general) CR manifolds, of which detailed studies have been carried out by Baouendi-Huang~\\cite{BH} and Baouendi-Ebenfelt-Huang~\\cite{BEH}. More recently, Ng~\\cite{ng2} discovered that the proper holomorphic mapping problem for the generalized balls is deeply linked to that of the classical bounded symmetric domains of type-I.\n\nHere we recall that a linear fractional map on $\\mathbb C^n$ is in fact just a restriction of a linear map on $\\mathbb P^n$ expressed in terms of the inhomogeneous coordinates of a Euclidean coordinate chart $\\mathbb C^n$ in $\\mathbb P^n$. Similarly, if we follow our notations, a linear fractional self map of the unit ball $D_{1,q}$ simply comes from a linear map on $\\mathbb P^{q}$ that maps $D_{1,q}$ into itself. We have defined our generalized type-I domains as domains on the Grassmannians on which, just like the case of the projective space, there are homogeneous coordinates and the associated linear maps. Since we will always work with homogeneous coordinates, we will call a self map of a generalized type-I domain a \\textit{linear self map} if it is the restriction of a linear map of the ambient Grassmannian. \n\\newline\n\n\\noindent\\textbf{Remark.} Hence, according to our terminology, \\textit{a linear self map and a linear fractional self map are the same.} ``Fractions\" appear only because inhomogeneous coordinates are used.\n\\newline\n\nWe call a linear self map of a generalized type-I domain $D^r_{p,q}$ \\textit{non-minimal} if its range is not of minimal dimension (see Definition~\\ref{minimality}). In the case of the unit balls, a non-minimal linear self map is nothing but a non-constant linear self map. Roughly speaking, the minimal linear self maps, just like the constant maps for the unit balls, are the cases for which most statements become trivial or non-applicable. In Section~\\ref{type-I domain}, we will first establish a fundamental result for the study of linear self maps of generalized type-I domains. Namely, we will prove (Theorem~\\ref{expansion matrix}) that every non-minimal linear self map of a generalized type-I domain can be represented by a matrix $M$ satisfying the inequality\n\n\\begin{equation}\nM^H H_{p,q} M - H_{p,q}\\geq 0, \\label{expansioneq}\n\\end{equation}\nwhere $H_{p,q} = \\begin{pmatrix}I_p & 0 \\\\0 & -I_q\\end{pmatrix}$, in which $I_p$, $I_q$ are identity matrices of rank $p$ and $q$, $\\cdot^H$ denotes Hermitian transposition, and ``$\\geq$\" means Hermitian semi-positivity. Furthermore, we will prove (Theorem~\\ref{isometrythm}) that a surjective linear self map can be represented by a matrix which makes the equality hold, i.e. by a matrix in the indefinite unitary group $U(p,q)$.
For the case of the unit balls, these results have been established by Cowen-MacCluer~\\cite{CM} and we will modify their proof to work for any generalized type-I domain. As simple as it may look, such a matrix inequality is extremely useful in obtaining various kinds of results for the linear self maps of generalized type-I domains. In particular, we will use it to show that any non-minimal linear self map extends to a neighborhood of the closure of the domain (Theorem~\\ref{extension thm}).\n\nIn Section~\\ref{automorphisms}, we will study in detail the automorphism groups of the generalized balls, including the partial double transitivity on the boundary, fixed point theorems and normal forms. Here we remark that a generalized ball cannot be realized as a bounded convex domain in the Euclidean space unless it is a usual unit ball (see e.g.~\\cite{gn}) and hence one cannot apply Brouwer's fixed point theorem to get a fixed point in its closure. We will establish the existence of fixed points (in the closure) in Theorem~\\ref{existence of fixed points} and give a number of results regarding the behavior of the fixed points (Theorem~\\ref{fixed point number thm} and~Corollaries~\\ref{at most}, \\ref{hs generalization}). For obtaining a normal form for automorphisms, we will show that the subgroup $U(p)\\times U(q)$ and the ``non-isotropic dilations\" generate the full automorphism group $Aut(D_{p,q})$ (Theorem~\\ref{normal form}).\n\nAfter studying the automorphisms, we will then look at arbitrary linear self maps of the generalized balls in Section~\\ref{linear maps section}. We will again prove some results regarding their fixed points, including especially Theorem~\\ref{bb generalization}, which is about how the number of fixed points on the boundary of a generalized ball is related to the existence of interior fixed points. This generalizes a result for the unit ball (see Bisi-Bracci~\\cite{BB}) saying that any linear self map of the unit ball with more than two boundary fixed points must have an interior fixed point. We will also obtain in this section a relation between the linear self maps of the \\textit{real} generalized balls of $D_{p,q}$ and those of $D_{p,q}$ (Theorem~\\ref{real generalized ball}).\n\nFinally, in Section~\\ref{examples} we will collect some illustrating or extremal examples for the results obtained in the previous sections.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bigskip\n{\\bf Acknowledgements.} The first author was partially supported by the Key Program of NSFC, (No. 11531107). The second author\nwas partially supported by Thousand Talents Program of the Organization Department of the CPC Central Committee, and Science and Technology Commission of Shanghai Municipality (STCSM) (No. 13dz2260400). The third author was partially supported by Basic Science Research Program through the National Research \nFoundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1F1A1060175). \n\n\n\\section{Generalized type-I domains and their linear self maps}\\label{type-I domain}\n\n\\noindent\\textbf{Notations.} For what follows, for any $p,q\\in\\mathbb N^+$, we will equip $\\mathbb C^{p+q}$ with the standard non-degenerate indefinite Hermitian form $H_{p,q}$ and denote the resulting indefinite inner product space by $\\mathbb C^{p,q}$.\n\\newline\n\n\nDenote $M_n$ the set of $n$-by-$n$ matrices with complex entries. 
Let $M\\in M_{p+q}$ and consider the linear map, which will also be denoted by $M$, from $\\mathbb C^{p,q}$ into itself, given by $z\\mapsto Mz$, where $z\\in\\mathbb C^{p,q}$ is regarded as a column vector.\nLet the null space of $M$ be $ker(M):=\\{z\\in\\mathbb C^{p,q}:Mz=0\\}$. Then the image of every positive definite $r$-plane in $\\mathbb C^{p,q}$ under $M$ is still an $r$-plane if and only if $ker(M)$ is negative semi-definite with respect to $H_{p,q}$. In such case, $M$ gives rise to a holomorphic map from each $D^r_{p,q}$ to the Grassmannian $Gr(r,\\mathbb C^{p,q})$. It is clear that two matrices $M, M'\\in M_{p+q}$ induce the same map on $D^r_{p,q}$ if $M=\\lambda M'$ for some $\\lambda\\in\\mathbb C^*$.\nThe question of interest is whether or not such a map is actually a \\textit{self map} of $D^r_{p,q}$.\n\n\\begin{definition}\nA self map of $D^r_{p,q}$ is called a \\textit{linear self map}, if it is given by a matrix $M\\in M_{p+q}$ in the way described above. If there is no danger of confusion, we will also denote any such self map by the same symbol $M$. Conversely, any matrix $M\\in M_{p+q}$ inducing a given linear self map of $D^r_{p,q}$ is called a \\textit{matrix representation} of the linear self map.\n\\end{definition}\n\nLet $M\\in M_{p+q}$ be a matrix representation of a linear self map of some $D^r_{p,q}$. Then, we must have rank$(M)\\geq p$ since otherwise $\\dim_{\\mathbb C}(ker(M))>q$ and $ker(M)$ would not be negative semi-definite in $\\mathbb C^{p,q}$, contradicting the fact that $M$ induces a linear map on $D^r_{p,q}$. We \nmake the following definition in relation to this.\n\n\\begin{definition}\\label{minimality}\nA linear self map of $D^r_{p,q}$ is called \\textit{minimal} if it is given by a matrix $M\\in M_{p+q}$ with rank$(M)=p$. Otherwise, we say that the linear self map is \\textit{non-minimal}. \n\\end{definition}\n\n\\noindent\\textbf{Remark.} For the unit balls $D_{1,q}$, a non-minimal linear self map is simply a non-constant linear self map.\n\\newline\n\n\n\n\nIf $M\\in M_n$ satisfies Inequality~\\eqref{expansioneq}, then it follows immediately that the image of any positive definite $r$-plane is again a positive definite $r$-plane and thus $M$ induces a linear self map on each $D^r_{p,q}$. \nWe are now going to show that conversely any \\textit{non-minimal} linear self map of $D^r_{p,q}$ can be represented by a matrix satisfying Inequality~(\\ref{expansioneq}). For this purpose, we will use some terminologies and results by Cowen-MacCluer~\\cite{CM}. Let $(V,[\\cdot,\\cdot])$ be a finite dimensional complex vector space equipped with an indefinite Hermitian form $[\\cdot,\\cdot]$. Following~\\cite{CM}, we will say that a linear map $T$ of $V$ into itself is an \\textit{expansion} if $[Tv,Tv]\\geq [v,v]$ for all $v\\in V$, and an \\textit{isometry} if $[Tv,Tv]=[v,v]$ for all $v\\in V$. In particular, if we identify the linear maps of $\\mathbb C^{p,q}$ into itself with their matrix representations with respect to the standard basis, then $M\\in M_{p+q}$ is an expansion of $\\mathbb C^{p,q}$ if and only if it satisfies the inequality~(\\ref{expansioneq}) and is an isometry if $M$ makes the equality hold. 
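\nFor instance, the expansion and isometry conditions are straightforward to verify numerically; the following short Python sketch, included purely for illustration, tests whether a given matrix $M$ satisfies Inequality~(\\ref{expansioneq}) or the corresponding equality.\n\\begin{verbatim}
import numpy as np

def H_form(p, q):
    # The standard indefinite form H_{p,q} as a diagonal matrix.
    return np.diag([1.0] * p + [-1.0] * q)

def is_expansion(M, p, q, tol=1e-10):
    # True if M^H H_{p,q} M - H_{p,q} is Hermitian positive semidefinite.
    S = M.conj().T @ H_form(p, q) @ M - H_form(p, q)
    return np.min(np.linalg.eigvalsh(S)) >= -tol

def is_isometry(M, p, q, tol=1e-10):
    # True if M^H H_{p,q} M = H_{p,q}, i.e. M lies in U(p,q).
    S = M.conj().T @ H_form(p, q) @ M - H_form(p, q)
    return np.max(np.abs(S)) <= tol

# Example: a hyperbolic rotation lies in U(1,1), hence is both an
# expansion and an isometry of C^{1,1}.
a, b = np.cosh(1.0), np.sinh(1.0)
M = np.array([[a, b], [b, a]])
print(is_expansion(M, 1, 1), is_isometry(M, 1, 1))   # True True
\\end{verbatim}\n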
We are going to show that the non-minimal linear self maps of $D^r_{p,q}$ are precisely those given by the expansions of $\\mathbb C^{p,q}$, and the surjective linear self maps of $D^r_{p,q}$ are given by the isometries of $\\mathbb C^{p,q}$.\n\n\nThe following lemma can be found in \\cite{CM} but we rewrite it (reversing the signs) in a way more suitable for our purpose.\n\n\n\\begin{lem}[\\cite{CM}] \\label{expansion}\nSuppose $[\\cdot,\\cdot]_1$ and $[\\cdot,\\cdot]_2$ are indefinite Hermitian forms on the complex vector space $V$ such that $[x,x]_1=0$ implies $[x,x]_2 \\geq 0$. Then,\n$$\\lambda:=-\\inf_{[y,y]_1=-1}[y,y]_2 < \\infty $$\nand $$[x,x]_2\\geq \\lambda[x,x]_1$$\nfor all $x\\in V$.\n\\end{lem}\n\n\nWe are now in a position to show that the non-minimal linear self maps of $D^r_{p,q}$ are all given by the matrices satisfying Inequality~(\\ref{expansioneq}). (The case for the unit ball was obtained by Cowen-MacCluer~\\cite{CM} and part of our proof is taken from there.)\n\n\\begin{thm}\\label{expansion matrix}\nEvery non-minimal linear self map of $D^r_{p,q}$ can be represented by a matrix satisfying Inequality~(\\ref{expansioneq}).\n\\end{thm}\n\n\\begin{proof}\nLet $M\\in M_{p+q}$ be a matrix such that it induces a linear self map (also denoted by $M$) on some $D^r_{p,q}$. For $x,y\\in\\mathbb C^{p,q}$, let $[x,y]_1=y^H H_{p,q} x$ and let $[x,y]_2=(My)^HH_{p,q} Mx$. The hypothesis that $M$ maps\n $D^r_{p,q}$ into $D^r_{p,q}$ means that whenever $[x,x]_1>0$, we have $[x,x]_2>0$. By continuity, we get that if $[x,x]_1 \\geq 0$, then $[x,x]_2 \\geq 0$.\n Thus, the hypotheses of Lemma \\ref{expansion} are satisfied and it only remains to show that $\\la > 0$, for then $\\la^{-1\/2}M$ is an expansion of $\\mathbb C^{p,q}$.\n\nTo see that $\\lambda >0$, we let the range of $M$ be $R(M):=\\left\\{y\\in\\mathbb C^{p,q}: y=Mz \\textrm{\\,\\,for some\\,\\,} z\\right\\}$. Now if $M$ is non-minimal, then\n$\\dim_{\\mathbb C}(R(M))\\geq p+1$ (Definition~\\ref{minimality}). Thus, $R(M)$ must contain a negative vector $y$ and any preimage $z$ of $y$ must also be a negative vector since we have already seen in the previous paragraph that $[z,z]_1 \\geq 0$ would imply that $[y,y]_1=[z,z]_2 \\geq 0$.\nTherefore, we can find $z\\in \\mathbb{C}^{p,q}$ such that $[z,z]_1<0$ as well as $[z,z]_2<0$ and hence $\\la > 0$. \n\n\\end{proof}\n\n\\begin{thm}\\label{isometrythm}\nEvery surjective linear self map of $D^r_{p,q}$ can be represented by a matrix satisfying the equality in~(\\ref{expansioneq}). In particular, every surjective linear self map of $D^r_{p,q}$ is an automorphism.\n\\end{thm}\n\\begin{proof}\nIf a linear self map $M$ of a given $D^r_{p,q}$ is surjective, then $M$ is surjective as a linear map on $\\mathbb C^{p,q}$ since its image contains all the positive vectors (which constitute an open set in $\\mathbb C^{p,q}$). The inverse linear map $M^{-1}$ also maps positive vectors to positive vectors since each of the 1-planes generated by positive vectors can be regarded as the intersection of a set of positive $r$-planes and $M$ is surjective (as a self map of $D^r_{p,q}$). Hence, we see that $M^{-1}$ is also a surjective linear self map of $D^r_{p,q}$ and can thus be represented by an expansion. Therefore, there are non-zero scalars $\\alpha$ and $\\beta$ such that $\\alpha M$ and $\\beta M^{-1}$ are expansions.
Thus, for every $z\\in\\mathbb C^{p,q}$, if we write $\\|z\\|^2_{p,q}:=z^H H_{p,q} z$, then\n$$\n\t\\|\\beta z\\|^2_{p,q}\\geq \\|Mz\\|^2_{p,q}\\geq \\|\\alpha^{-1} z\\|^2_{p,q}\n$$\nand hence\n$$\n\t|\\alpha\\beta|^2\\|z\\|^2_{p,q}\\geq\\|\\alpha Mz\\|^2_{p,q}\\geq\\|z\\|^2_{p,q}.\n$$\n\nSince the inequality is true for both positive vectors and negative vectors, we deduce that $|\\alpha\\beta|=1$ and $\\|\\alpha Mz\\|^2_{p,q}=\\|z\\|^2_{p,q}$ for every $z$. That is, $\\alpha M$ is an isometry.\n\\end{proof}\n\nA linear self map $M$ of $D^r_{p,q}$ originally comes from a linear map $\\tilde M$ defined on the ambient Grassmannian $Gr(r,\\mathbb C^{p,q})$. If $M$ is not surjective, then there is a set of indeterminacy $Z\\subset Gr(r,\\mathbb C^{p,q})$ on which $\\tilde M$ is not defined. The set $Z$ is outside $D^r_{p,q}$ and consists of the points corresponding to the $r$-planes that intersect the kernel of a matrix representation of $\\tilde M$. A priori, $Z$ can intersect the boundary $\\partial D^r_{p,q}$ and obstructs the extension of $M$ across $\\partial D^r_{p,q}$, but we are now going to show that this does not happen for non-minimal linear self maps. \n\n\\begin{thm}\\label{extension thm}\nEvery non-minimal linear self map of $D^r_{p,q}$ extends holomorphically to an open neighborhood of the closure $\\overline{D^r_{p,q}}:=D^r_{p,q}\\cup\\partial D^r_{p,q}$.\n\\end{thm}\n\\begin{proof}\nLet $M$ be a non-minimal linear self map of $D^r_{p,q}$ and denote also by $M$ a matrix representation of it which satisfies the inequality $M^HH_{p,q}M-H_{p,q}\\geq 0$. As mentioned at the beginning of this section, $ker(M)$ must be negative semi-definite with respect to $H_{p,q}$. We are now going to show that $ker(M)$ does not contain any non-zero null vector if $M$ is non-minimal. Suppose on the contrary there is a non-zero null vector $\\eta\\in ker(M)$. Let $v\\in\\mathbb C^{p,q}$ and write $\\|v\\|_{p,q}^2=v^HH_{p,q}v$. Then for any $r\\in\\mathbb R$, we have $M(v+r\\eta)=Mv$ and\n$$\n\\|Mv\\|_{p,q}^2=\\|M(v+r\\eta)\\|_{p,q}^2\\geq \\|v+r\\eta\\|_{p,q}^2=\\|v\\|_{p,q}^2+r(\\eta^HH_{p,q}v+v^HH_{p,q}\\eta).\n$$\n\nNow if $v$ is chosen such that $\\textrm{Re}(v^HH_{p,q}\\eta)\\neq 0$, then the above inequality cannot hold for every $r\\in\\mathbb R$ and hence we get a contradiction. Consequently, $ker(M)$ does not contain any non-zero null vector and therefore the image of any positive semi-definite $r$-plane under $M$ is still an $r$-plane and the set of indeterminacy $Z\\subset Gr(r,\\mathbb C^{p,q})$ of $M$ (as a linear map on $Gr(r,\\mathbb C^{p,q}))$ is disjoint from $\\overline{D^r_{p,q}}$. Since both $Z$ and $\\overline{D^r_{p,q}}$ are closed in $Gr(r,\\mathbb C^{p,q})$ (and hence compact), there is an open neighborhood of $\\overline{D^r_{p,q}}$ disjoint from $Z$ and now the theorem follows.\n\\end{proof}\n\n\\noindent\\textbf{Remark.} The non-minimality is indeed necessary to guarantee the extension across the entire boundary. See Example~\\ref{no extension} for a minimal linear self map which does not extend across some boundary point.\n\n\n\\section{Automorphisms on generalized balls}\\label{automorphisms}\n\nIn this section, we are going to study in detail the automorphisms on the generalized balls $D_{p,q}$, regarding their fixed point sets and also their normal form. 
We begin by determining the automorphism group of $D_{p,q}$.\n\n\\begin{thm}\nEvery automorphism of $D_{p,q}$ is a linear self map and thus extends to an automorphism of the ambient projective space $\\mathbb P^{p+q-1}$. The automorphism group $\\textrm{Aut}(D_{p,q})$ of $D_{p,q}$ is isomorphic to $PU(p,q)$, the projectivization of the indefinite unitary group $U(p,q)$. In particular, every automorphism of $D_{p,q}$ can be represented by a matrix in $U(p,q)$.\n\\end{thm}\n\\begin{proof}\nThe statements are well-known for the complex unit balls $D_{1,q}$ and also for the complements of the complex unit balls $D_{p,1}$. Suppose now $p,q\\geq 2$. Then, it has been shown by Baouendi-Huang~\\cite{BH} (see Ng~\\cite{ng1} for a more geometric proof) that every automorphism of $D_{p,q}$ is necessarily a linear map. Now by Theorem~\\ref{isometrythm} here (or Lemma 2.13 in~\\cite{ng1}), we see that every automorphism can be represented by a matrix in $U(p,q)$. Since two elements in $U(p,q)$ represent the same automorphism of $D_{p,q}$ if and only if they are scalar multiples of each other, it follows now that $\\textrm{Aut}(D_{p,q})\\cong PU(p,q)$.\n\\end{proof}\n\n\\begin{cor}\\label{extension cor}\nThe action of $\\textrm{Aut}(D_{p,q})$ extends real-analytically to $\\partial D_{p,q}$.\n\\end{cor}\n\n\nThe following version of Witt's theorem is very useful in studying the various transitivities of $\\textrm{Aut}(D_{p,q})$.\n\n\\begin{lem}[Witt \\cite{Witt}] \\label{cor_Witt theorem}\nLet $X$ be a complex vector space equipped with a non-degenerate Hermitian form and $Y \\subset X$ be any complex vector subspace. Then any isometric embedding $f : Y \\rightarrow X$ extends to an isometry $F$ of $X$.\n\\end{lem}\n\n\n\\begin{thm}\\label{doubly transitive} For $u\\in \\mathbb C^{p,q}$, let $[u]$ be its projectivization in $\\mathbb P^{p+q-1}$.\n\\begin{enumerate}\n\\item\n $\\text{Aut}(D_{p,q})$ is transitive on $D_{p,q}$ and also on $\\partial D_{p,q}$.\n\\item\nLet $[v_1]$, $[v_2]$, $[w_1]$, $[w_2]\\in \\partial D_{p,q}$, where $[v_1]\\neq [v_2]$ and $[w_1]\\neq [w_2]$. Then, there exists $M\\in \\textrm{Aut}(D_{p,q})$ such that $M([v_j])=[w_j]$ for $j=1,2$ if and only if there exists non-zero $\\alpha\\in\\mathbb C$ such that $v_1^HH_{p,q}v_2=\\alpha w_1^HH_{p,q}w_2$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n1. Let $[u_1]$, $[u_2]\\in D_{p,q}$. Then we choose some $k>0$ such that the linear map $f:\\mathbb Cu_1\\rightarrow \\mathbb C^{p,q}$, defined by $u_1\\mapsto ku_2$, is an isometric embedding. By Lemma~\\ref{cor_Witt theorem}, $f$ extends to an isometry of $\\mathbb C^{p,q}$ and therefore there is an automorphism of $D_{p,q}$ mapping $[u_1]$ to $[u_2]$. Similarly, for any two points $[v_1]$, $[v_2] \\in \\partial D_{p,q}$, the map\n$i\\colon \\mathbb C v_1\\rightarrow \\mathbb C^{p,q}$ defined by $v_1\\mapsto v_2$ is an isometric embedding and hence there exists an automorphism of $D_{p,q}$ mapping $[v_1]$ to $[v_2]$.\n\n2. Suppose that there exists $\\alpha\\in\\mathbb C^*$ such that $v_1^HH_{p,q}v_2=\\alpha w_1^HH_{p,q}w_2$. By replacing $w_1$ by a suitable scalar multiple, we may assume that $v_1^HH_{p,q}v_2=w_1^HH_{p,q}w_2$. Let $\\phi : Y \\rightarrow \\CC^{p,q}$, where $Y=\\textrm{Span}\\{v_1, v_2\\}$, be the linear embedding defined by $\\phi(v_1)=w_1$ and $\\phi(v_2)=w_2$. Then $\\phi$ is an isometric embedding and hence $\\phi$ extends to an isometry of $\\mathbb C^{p,q}$ by Lemma \\ref{cor_Witt theorem} and the desired result follows.
The converse is trivial.\n\\end{proof}\n\n\\noindent\\textbf{Remark.}\nTheorem~\\ref{doubly transitive} is a generalization of the double transitivity of the automorphism groups of the complex unit balls on their boundaries. This is because for $[v_1], [v_2]\\in\\partial D_{1,q}$ with $[v_1]\\neq [v_2]$, we always have $v_1^HH_{1,q}v_2\\neq 0$ since otherwise we would get a two-dimensional isotropic subspace in $\\mathbb C^{1,q}$.\n\n\n\\subsection{Fixed points on $D_{p,q}$ and $\\partial D_{p,q}$}\n\nLet $A\\in U(p,q)$ and denote also by $A$ the corresponding automorphism of $D_{p,q}$.\nIt follows directly from the definition of the matrix representation of a linear self map that the fixed points of $A$ (as an automorphism on $D_{p,q}$) correspond precisely to the one-dimensional eigenspaces (or projectivized eigenvectors) of $A$ associated to the non-zero eigenvalues. The following simple observation regarding eigenvalues and eigenvectors of matrices in $U(p,q)$ will be very useful in studying the fixed points of the automorphisms of $D_{p,q}$.\n\n\\begin{lem}\\label{ortho lemma}\nLet $A\\in U(p,q)$. If $\\lambda_1$, $\\lambda_2$ are eigenvalues of $A$ and $v_1$, $v_2$ are two eigenvectors associated to them respectively, then either $v_1$ and $v_2$ are orthogonal with respect to $H_{p,q}$ or $\\overline{\\lambda_2}\\lambda_1=1$.\n\\end{lem}\n\\begin{proof}\nThe result follows from\n$\nv_2^H H_{p,q} v_1= v_2^HA^H H_{p,q} Av_1=\\left(\\ov{\\lambda}_2\\lambda_1\\right)\\,v^H_2 H_{p,q} v_1.\n$\n\\end{proof}\n\nLet $\\ov{D_{p,q}}:=D_{p,q}\\cup\\partial D_{p,q}$ be the closure of $D_{p,q}$. Since the closed complex unit ball $\\overline{\\mathbb B^q}\\cong\\overline{D_{1,q}}$ is a convex compact set in $\\mathbb C^q\\cong \\mathbb R^{2q}$, it follows from Corollary~\\ref{extension cor} and Brouwer's fixed point theorem that every element in $Aut(D_{1,q})$ has a fixed point in $\\overline{D_{1,q}}$. \nWhen $p\\geq 2$, any $D_{p,q}$ cannot be embedded as a convex compact set in some Euclidean space since it contains positive-dimensional projective subspaces (see~\\cite{ng1}). Nevertheless, with a bit of a ``detour\", we will still be able to use Brouwer's fixed point theorem to get a fixed point in the closure for \\textit{any} linear self map of $D_{p,q}$. The proof will be given in Section~\\ref{linear maps section}. Hence, we have\n\n\\begin{thm}\\label{existence of fixed points}\nEvery element in $\\textrm{Aut}(D_{p,q})$ has a fixed point on $\\ov{D_{p,q}}:=D_{p,q}\\cup\\partial D_{p,q}$.\n\\end{thm}\n\\begin{proof}\nFollows directly from Theorem~\\ref{general existence of fixed points}.\n\\end{proof}\n\n\nWe now recall some elementary linear algebra. Let $M\\in M_n$ and $\\lambda$ be an eigenvalue of $M$. For some $r\\in \\mathbb N$, a vector $v\\in\\mathbb C^n$ is called a generalized eigenvector of rank $r$ of $M$ associated to the eigenvalue $\\lambda$ if $(M-\\lambda I)^rv=0$ but $(M-\\lambda I)^{r-1}v\\neq 0$.\nIt turns out that both the absolute values of the eigenvalues and the existence of generalized eigenvectors of higher rank give information about the fixed-point set of the linear self map on a generalized ball represented by $M$.\n\nAs before, for a vector $v\\in\\mathbb C^{p,q}$, we will denote by $[v]\\in\\mathbb P^{p+q-1}$ its projectivization.
Similarly, for any complex vector subspace $W\\subset\\mathbb C^{p,q}$, we denote its projectivization by $[W]$.\n\n\\begin{prop}\\label{fixed point}\nLet $A\\in \\text{Aut}(D_{p,q})$ and choose a matrix representation in $U(p,q)$ and denote it also by $A$. Let $\\lambda$ be an eigenvalue of $A$ and $v$ be an associated generalized eigenvector of rank $r$.\n\\begin{enumerate}\\label{projective subspace on the boundary}\n\\item If $|\\lambda|\\neq 1$, then $[(A-\\lambda I)^{r-1}v]$ is a fixed point of $A$ on $\\partial D_{p,q}$. Furthermore, if $r\\geq 2$, then there is an $(r-1)$-dimensional projective linear subspace $[W]$ in $\\partial D_{p,q}$\ninvariant under $A$ and $[(A-\\lambda I)^{r-1}v]\\in [W]$ is a unique fixed point of $A$ in $[W]$.\n\\item If $|\\lambda|=1$ and $r\\geq 2$,\nthen $[(A-\\lambda I)^{r-1}v]$ is a fixed point of $A$ on $\\partial D_{p,q}$.\nFurthermore, there is an $\\left(\\left[ \\frac{r}{2} \\right]-1\\right) $-dimensional projective subspace\n$[W]$ in $\\partial D_{p,q}$ invariant under $A$ and $[(A-\\lambda I)^{r-1}v]$ is a unique fixed point of $A$ in $[W]$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n1. Let $A\\in U(p,q)$, $\\lambda$ be an eigenvalue of $A$ and $v$ be an associated generalized eigenvector of rank $r$. Let $v_r:=v$ and define inductively $v_{j-1}:=(A-\\lambda I)v_j$ for $j\\in\\{r,r-1,\\ldots, 2\\}$. In particular $v_1$ is an eigenvector of $A$ associated to $\\lambda$.\n\\begin{eqnarray*}\nAv_1&=&\\lambda v_1, \\\\\nAv_2 &=& v_1 + \\lambda v_2,\\\\\n&&\\vdots\\\\\nA v_r &=&v_{r-1} + \\lambda v_r.\n\\end{eqnarray*}\nSuppose that $|\\lambda|\\neq 1$.\nWe claim that\n\\begin{equation}\nv_i^HH_{p,q} v_j=0 \\text{ for all } 1\\leq i,j\\leq r.\n\\end{equation}\nWe will prove it using induction.\nBecause of $A^H H_{p,q} A = H_{p,q}$, we have\n$v_1^H H_{p,q}v_1 =0$\nand this implies that $[v_1]\\in \\partial D_{p,q}$.\nSuppose that $r\\geq 2$ and $v_1^H H_{p,q}v_{j'}=0$ for every $j'<j\\leq r$. Then\n\\begin{eqnarray*}\nv_1^H H_{p,q}v_{j} = (Av_1)^H H_{p,q}(Av_j)\n= \\ov\\lambda v_1^H H_{p,q}(v_{j-1}+\\lambda v_j)\n=|\\lambda|^2 v_1^HH_{p,q}v_j.\n\\end{eqnarray*}\nHence $v_1^HH_{p,q}v_j=0$ and as a consequence, \n\\begin{equation}\nv_1^HH_{p,q}v_j=0 \\quad\\text{ and }\\quad v_j^HH_{p,q}v_1=0\n\\quad\\text{ for all } j \\,\\text{ with } \\,1\\leq j\\leq r.\n\\end{equation}\nNow fix $i\\geq2$ and $j\\geq 2$.\nFor the induction, assume that $v_{i'}^HH_{p,q} v_{j'}=0$ for all $i'<i$ or $j'<j$.\nSince\n\\begin{eqnarray*}\nv_i^HH_{p,q} v_j &=& (v_{i-1}+\\lambda v_i)^H H_{p,q} (v_{j-1}+\\lambda v_j)\\\\\n&=& v_{i-1}^H H_{p,q}v_{j-1} + \\lambda v_{i-1}^HH_{p,q} v_j\n+ \\ov\\lambda v_i^HH_{p,q} v_{j-1} + |\\lambda|^2 v_i^HH_{p,q} v_j,\n\\end{eqnarray*}\nwe can obtain $v_i^HH_{p,q} v_j=0$ and we obtain the claim.\nDefine the subspace $W$ in $\\CC^{p,q}$ spanned by $v_1,\\ldots, v_r$.\nThen $[W]$ is a projective subspace in $\\mathbb P^{p+q-1}$ contained in $\\partial D_{p,q}$ which is invariant with respect to the action of $A$ and it has a unique fixed point $[v_1]$ since $v_1$ is the unique eigenvector in $W$ (up to scalar multiplication).\n\n\n\n\n2. 
Consider the case $|\\lambda|=1$.\nBy replacing $A$ with $\\frac{1}{\\lambda}A$,\nwe may assume that $\\lambda=1$ without any loss of generality.\nFor $j$ with $1\\leq j\\leq r-1$ we have\n$$v_1^H H_{p,q}v_{j+1} = (Av_1)^H H_{p,q} (Av_{j+1}) = v_1^H H_{p,q} (v_{j+1}+v_j).$$\nThis implies that\n\\begin{equation}\\label{1j}\nv_1^HH_{p,q}v_j=0 \\,\\text{ and }\\, v_j^HH_{p,q}v_1=0 \\,\\text{ for }\\, j\\, \\text{ with }\\, 1\\leq j\\leq r-1.\n\\end{equation}\nSince for $m$ with $2\\leq m\\leq r-1$ we have\n$$\nv_2^H H_{p,q}v_m = (Av_2)^H H_{p,q}(Av_m) = (v_1+v_2)^HH_{p,q}(v_{m-1}+v_m) = v_2^HH_{p,q}(v_{m-1}+v_m)\n$$\nby \\eqref{1j}, one obtains\n\\begin{equation}\nv_2^HH_{p,q}v_m =0 \\,\\,\\text{ and }\\,\\, v_m^HH_{p,q}v_2 =0 \\,\\,\\text{ for all }\\,\\,\nm \\,\\,\\text{ with }\\,\\,1\\leq m\\leq r- 2.\n\\end{equation}\nBy repeating this process, we obtain that for a fixed $j$ with $1\\leq j\\leq r-1$,\n\\begin{equation}\nv_m^H H_{p,q} v_j=0 \\,\\,\\text{ for all }\\,\\, m \\,\\,\\text{ with } \\,\\,1\\leq m\\leq r-j.\n\\end{equation}\nAs a result\n\\begin{equation}\nv_m^H H_{p,q}v_j=0 \\,\\,\\text{ for all }\\,\\, m,\\,j \\,\\,\\text{ with } \\,\\,1\\leq m,j\\leq \\left[ \\frac{r}{2} \\right].\n\\end{equation}\nDefine the complex vector subspace $W$ of $\\CC^{p,q}$ spanned by $v_1,\\ldots, v_{\\left[ \\frac{r}{2} \\right]}$.\nThen $[W]$ is a projective subspace in $\\mathbb P^{p+q-1}$ contained in $\\partial D_{p,q}$ which is invariant with respect to the action of $A$ and it contains a unique fixed point $[v_1]$.\n\\end{proof}\n\n\n\n\\begin{cor}\nLet $A\\in Aut(D_{p,q})$ and choose a matrix representation in $U(p,q)$ and denote it also by $A$. Then, in any Jordan canonical form of $A$, every higher-rank Jordan block or rank-one Jordan block with a non-unimodular eigenvalue corresponds to a fixed point on $\\partial D_{p,q}$.\n\\end{cor}\n\n\\noindent\\textbf{Remark.} There exist indeed non-diagonalizable elements in $U(p,q)$. We refer the reader to Example~\\ref{nondiagonalizable}.\n\\newline\n\nThrough the generalized Cayley transform one can map the complex unit balls $D_{1,q}$ biholomorphically onto some Siegel domains of the second kind. Thus, the dilations on the Siegel domains give elements in $\\textrm{Aut}(D_{1,q})$ which do not have fixed points in $D_{1,q}$ and hence there exist fixed points on the boundary by Brouwer's fixed point theorem. On the other hand, Hayden-Suffridge~\\cite{hs} showed that if an automorphism of a complex unit ball has more than two fixed points on the boundary then it must have fixed points in the interior (in fact, there is at least an affine line on which every point is fixed). For $p,q\\geq 2$, the generalized balls $D_{p,q}$ contain projective linear subspaces in their boundaries and this leads to a lot of differences between the complex unit balls and other generalized balls when studying their holomorphic mappings. Example~\\ref{hs example} in Section~\\ref{examples} will show that the result of Hayden-Suffridge cannot be generalized to $D_{p,q}$ by simply increasing the number of fixed points on the boundary in relation to the existence of these projective subspaces in the boundary. 
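\nThe eigenvector description of the fixed points is also easy to explore numerically: the sign of $v^HH_{p,q}v$ determines whether the fixed point $[v]$ lies in $D_{p,q}$, on $\\partial D_{p,q}$, or outside $\\overline{D_{p,q}}$. The following Python sketch, included purely for illustration and restricted to genuine (rather than generalized) eigenvectors, performs this classification for a matrix $A\\in U(p,q)$.\n\\begin{verbatim}
import numpy as np

def classify_fixed_points(A, p, q, tol=1e-9):
    # Classify the fixed points [v] of the linear self map induced by A
    # according to the sign of v^H H_{p,q} v.
    H = np.diag([1.0] * p + [-1.0] * q)
    eigvals, eigvecs = np.linalg.eig(A)
    report = []
    for lam, v in zip(eigvals, eigvecs.T):
        norm2 = np.real(v.conj() @ H @ v)
        if norm2 > tol:
            region = 'interior'
        elif norm2 < -tol:
            region = 'exterior'
        else:
            region = 'boundary'
        report.append((lam, region))
    return report

# Example: a diagonal element of U(1,1) with unimodular eigenvalues has
# one fixed point in D_{1,1} and one outside its closure.
A = np.diag([np.exp(0.3j), np.exp(-0.7j)])
for lam, region in classify_fixed_points(A, 1, 1):
    print(np.round(lam, 3), region)
\\end{verbatim}\n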
Before getting a suitable generalization of Hayden-Suffridge (in Corollary~\\ref{hs generalization}), we observe the following general behavior relating the fixed points and the projective lines on which every point is fixed.\n\n\n\\begin{thm} \\label{fixed point number thm}\nLet $A\\in \\text{Aut}(D_{p,q})$ and choose a matrix representation in $U(p,q)$ and denote it also by $A$.\n\\begin{enumerate}\n\\item\nIf $A$ has at least $p+1$ fixed points in $D_{p,q}$, then\nthere exists a projective line intersecting $D_{p,q}$ on which every point is fixed by $A$. In particular, if an element in $\\textrm{Aut}(D_{p,q})$ has only a finite number of fixed points in $D_{p,q}$, then it can have at most $p$ fixed points in $D_{p,q}$.\n\\item\nIf $A$ has at least $2p+1$ fixed points in $\\partial D_{p,q}$, then there exists a projective line on which every point is fixed by $A$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n\n(1) Suppose that $A$ has $p+1$ fixed points $\\{[v_1],\\ldots,[v_{p+1}]\\}$ in $D_{p,q}$ and suppose, in order to reach a contradiction, that they are associated to $p+1$ distinct eigenvalues $\\{\\lambda_1,\\ldots,\\lambda_{p+1}\\}$.\nBy Lemma~\\ref{ortho lemma}, for $i\\neq j$ we have either $\\overline{\\lambda_i}\\lambda_j=1$ or\n$v_i^HH_{p,q}v_j=0$.\nHowever, since we must have $|\\lambda_j|=1$ for all $j$ by (1) in Proposition \\ref{fixed point} and all the $\\lambda_j$ are distinct, we see that $\\overline{\\lambda_i}\\lambda_j=1$ is impossible for $i\\neq j$. This implies that\n$v_i^HH_{p,q}v_j=0$ whenever $i\\neq j$. Then $\\{v_1,\\ldots, v_{p+1}\\}$ span a positive definite $(p+1)$-dimensional subspace in $\\mathbb C^{p,q}$, which is a contradiction. Thus, at least two elements in $\\{v_1,\\ldots, v_{p+1}\\}$ are associated to the same eigenvalue and they span a 2-dimensional eigenspace, and this gives a projective line intersecting $D_{p,q}$ on which every point is fixed by $A$.\n\n(2) Suppose that $A$ has $2p+1$ fixed points in $\\partial D_{p,q}$ and suppose, in order to reach a contradiction, that they are associated to $2p+1$ distinct eigenvalues. Pick any one fixed point and denote it by $[v_1]$, with the associated eigenvalue denoted by $\\lambda_1$. Among the remaining $2p$ fixed points, at most one is associated to the eigenvalue $(\\overline{\\lambda_1})^{-1}$. Thus, there are at least $2p-1$ of them whose corresponding eigenvectors are orthogonal to $v_1$ by Lemma~\\ref{ortho lemma}. Pick any one such fixed point and denote it by $[v_2]$, with the associated eigenvalue denoted by $\\lambda_2$. By repeating this procedure we see that we can choose $p+1$ fixed points whose corresponding eigenvectors are pairwise orthogonal and thus we get a $(p+1)$-dimensional isotropic subspace in $\\mathbb C^{p,q}$, which is a contradiction. Thus, at least two fixed points are associated to the same eigenvalue and we again get a projective line on which every point is fixed by $A$, as in $(1)$.\n\\end{proof}\n\n\n\\noindent\\textbf{Remark.} The numbers $p+1$ and $2p+1$ in (1) and (2) are sharp. This is illustrated by Example~\\ref{sharp example 3} and Example~\\ref{sharp example 2} in Section~\\ref{examples}.\n\n\n\n\\begin{cor}\\label{at most}\nLet $A\\in \\text{Aut}(D_{p,q})$ and\nsuppose that there does not exist any projective line on which every point is fixed by $A$. Then $A$ has at most $p$, $q$ and $\\min\\{2p,2q\\}$ fixed points in $D_{p,q}$, $\\mathbb P^{p+q-1}\\setminus \\overline{ D_{p,q}}$ and $\\partial D_{p,q}$, respectively.\n\\end{cor}\n\nWe can now give the following generalization of Hayden-Suffridge's result for the generalized balls.
(This also gives an alternative proof for their result for the unit balls of finite dimension.)\n\n\n\n\\begin{cor}\\label{hs generalization}\nLet $A\\in\\textrm{Aut}(D_{p,q})$ be an automorphism such that there does not exist any projective line in $\\partial D_{p,q}$ on which every point is fixed by $A$. If $A$ has at least $2p+1$ fixed points on $\\partial D_{p,q}$, then $A$ must have fixed points in $D_{p,q}$.\n\\end{cor}\n\n\\begin{proof}\nBy (2) in Theorem~\\ref{fixed point number thm}, we get a projective line $L$ on which every point is fixed by $A$. Furthermore, from its proof we know that $L$ intersects $\\partial D_{p,q}$ at at least two points. This means we have two linearly independent null vectors in the 2-dimensional subspace of $\\mathbb C^{p,q}$ whose projectivization is $L$. Thus, this 2-dimensional subspace is either isotropic or contains positive vectors, and this in turn means that either $L$ is completely contained in $\\partial D_{p,q}$ or it intersects $D_{p,q}$. Since the first alternative is excluded by our hypothesis, $L$ intersects $D_{p,q}$ and the desired result follows.\n\\end{proof}\n\n\n\n\n\n\\subsection{Normal forms of automorphisms on $D_{p,q}$}\n\nIn this section we will find a normal form for elements of $U(p,q)$, the matrices representing the automorphisms of $D_{p,q}$. It is a generalization of a result given in\n\\cite[Proposition 5]{CM} for the unit ball case.\n\n\nFor a point $v\\in \\mathbb C^{p,q}$, write $v=(v', v'')$ with\n$v'\\in \\mathbb C^p$ and $v''\\in \\mathbb C^q$. Also write $\\|v\\|^2_{p,q}:=v^HH_{p,q}v=|v'|^2_p-|v''|^2_q$, where $|\\cdot|_p$ and $|\\cdot|_q$ denote the Euclidean norms on $\\mathbb C^p$ and $\\mathbb C^q$ respectively.\nIf we naturally identify $U(p)\\times U(q)$ as a subgroup of $U(p,q)$ (as block diagonal matrices), then there exists $U \\in U(p)\\times U(q)$ such that\n$Uv = (|v'|_p,0,\\ldots, 0, |v''|_q)$.\n\nNow suppose $v$ is a unit vector, i.e. $\\|v\\|_{p,q}^2 =1$. Define\n$$M = \\left( \\begin{array}{ccccc}\n|v'|_p & &&&-|v''|_q \\\\\n &1 &&& \\\\\n& & \\ddots&&\\\\\n & & &1 & \\\\\n|v''|_q & & &&-|v'|_p \\\\\n\\end{array}\\right),\n$$\nwhere the other entries are zero. Then, $M\\in U(p,q)$, $M^{-1}=M$ and we have $MUv=(1,0,\\ldots, 0)^t$.\nHence, we also have $v = U^{-1}M(1,0,\\ldots, 0)^t$.\n\nNow let $A\\in U(p,q)$ and let $v\\in\\mathbb C^{p,q}$ be the vector such that $Av = (1,0,\\ldots, 0)^t$.
In particular, $\\|v\\|^2_{p,q}=1$.\nThen from above we see that there exist $V_1\\in U(p)\\times U(q)$ and $M_1\\in U(p,q)$ such that $AV_1 M_1(1,0,\\ldots, 0)^t = (1,0,\\ldots, 0)^t$.\nThis implies that $AV_1M_1$ belongs to the isotropy subgroup at $(1,0,\\ldots,0)^t$, which is $\\{1\\}\\times U(p-1,q)$ (as block diagonal matrices in $U(p,q)$).\nSince\n$\\{1\\}\\times U(p-1,q)$ preserves the subspace $\\{v\\in\\mathbb C^{p,q}: v=(0,*,\\ldots,*)\\}$, which\nis isometric to $\\mathbb C^{p-1,q}$, we can repeat the argument for $p\\geq 2$ and find\n$V_2\\in \\{1\\}\\times U(p-1)\\times U(q)\\subset U(p)\\times U(q)$ and $M_2\\in\\{1\\}\\times U(p-1,q)\\subset U(p,q)$ \nsuch that $AV_1M_1V_2M_2$ fixes both $(1,0,\\ldots,0)^t$ and $(0,1,0,\\ldots,0)^t$, where $M_2$ is of the following form for some $a,b\\geq 0$ such that $a^2-b^2=1$,\n$$\\left( \\begin{array}{c|ccccccc}\n 1& &&&& &\\\\\\hline\n & a&&&& &-b\\\\\n & &1 &&& & \\\\\n& && \\ddots& && \\\\\n & & &&1& & \\\\\n &b && &&&-a \\\\\n\\end{array}\\right).\n$$\n\nHence, we see that $AV_1M_1V_2M_2\\in \\{I_2\\}\\times U(p-2,q)$.\nBy repeating this process, we can find $V_j\\in U(p)\\times U(q)$, $M_j\\in U(p,q)$, $1\\leq j\\leq p$, so that $A V_1M_1\\cdots V_pM_{p}\\in \\{I_p\\}\\times U(q)$.\nWe call the automorphisms associated to the matrices of the form as $M_j$, $1\\leq j\\leq p$, the {\\it non-isotropic dilations} of $D_{p,q}$.\nIn conclusion, we have obtained the following:\n\\begin{thm}\\label{normal form}\nThe subgroup $U(p)\\times U(q)$ and the non-isotropic dilations generate the full automorphism group of $D_{p,q}$. Furthermore, every element in $Aut(D_{p,q})$ can be written in the form $U_{p+1}M_pU_p\\cdots M_1U_1$, where $U_{1},\\ldots,U_{p+1}$ are automorphisms fixing the subspace $\\{[z]\\in D_{p,q}: [z]=[z_1,\\ldots,z_p,0,\\ldots,0]\\}$ and $M_1,\\ldots,M_p$ are non-isotropic dilations of $D_{p,q}$.\n\\end{thm}\n\n\n\n\n\n\n\n\n\n\\section{Linear self maps on generalized balls}\\label{linear maps section}\n\\subsection{Fixed points of general linear self maps}\n\nFor any linear self map of a unit ball $D_{1,q}$, one can apply Brouwer's fixed point theorem to conclude that there is at least one fixed point in the closure of the ball. However, the argument cannot be directly carried over to other generalized balls since they cannot be realized as bounded convex domains in the Euclidean space (see~\\cite{gn}). Nevertheless, we are going to prove that the same fixed point theorem still holds for the generalized balls. We first recall that the fixed points of a linear self map on a generalized ball correspond to the one-dimensional eigenspaces associated to the non-zero eigenvalues of any given matrix representation of the linear self map.\n\n\\begin{thm}\\label{general existence of fixed points}\nEvery linear self map of $D_{p,q}$ has at least one fixed point in $\\overline{D_{p,q}}:=D_{p,q}\\cup\\partial D_{p,q}$.\n\\end{thm}\n\\begin{proof}\nLet $F$ be a linear self map of $D_{p,q}$ and denote by $A$ one of its matrix representations. Then, $A$ maps the positive $p$-planes in $\\mathbb C^{p,q}$ onto positive $p$-planes since it maps positive vectors to positive vectors (for $F$ to be well defined on $D_{p,q}$). 
In other words, $A$ also induces a linear self map $\\tilde F$ of $D^p_{p,q}$, where the latter is a classical bounded symmetric domain of type~I embedded in $Gr(p,\\mathbb C^{p,q})$; if we use the inhomogeneous coordinates in a standard Euclidean chart $\\mathbb C^{pq}\\subset Gr(p,\\mathbb C^{p,q})$, then $D^p_{p,q}\\Subset\\mathbb C^{pq}$ is just the standard Harish-Chandra realization of the bounded symmetric domain (see~\\cite{ng2}) and hence is a bounded convex domain~\\cite{H}. Now, in terms of these Euclidean coordinates, $\\tilde F$ is a rational map, and since $\\tilde F(D^p_{p,q})\\subset D^p_{p,q}$ and the latter is a bounded domain in $\\mathbb C^{pq}$, we deduce that the rational map $\\tilde F$ is also well defined (holomorphic) in a neighborhood of the closure $D^p_{p,q}\\cup\\partial D^p_{p,q}$. We can now apply Brouwer's fixed point theorem to get a fixed point $x_0\\in D^p_{p,q}\\cup\\partial D^p_{p,q}$ of $\\tilde F$. Since $x_0$ corresponds to a positive semi-definite $p$-plane $E_0\\subset\\mathbb C^{p,q}$, we see from how $\\tilde F$ is constructed from $A$ that $A(E_0)=E_0$. Then $A$ has at least one eigenvector in $E_0$ associated to a non-zero eigenvalue. Such an eigenvector is either a positive vector or a null vector and it gives us a fixed point in $\\overline{D_{p,q}}$.\n\\end{proof}\n\nThe following lemma will be useful for further analyzing the fixed points of a linear self map on a generalized ball.\n\n\\begin{lem}\\label{semipositive}\nIf $A$ is a matrix satisfying $A^H H_{p,q}A-H_{p,q}\\geq 0$ and $v\\in\\mathbb C^{p,q}$ is such that $\\|Av\\|_{p,q}^2=\\|v\\|_{p,q}^2$, then $A(v^\\perp)\\subset (Av)^\\perp$, where $w^\\perp$ denotes the orthogonal complement of a vector $w$ with respect to $H_{p,q}$. In particular, if $v$ is an eigenvector of $A$ associated to a non-zero eigenvalue, then $A(v^\\perp)\\subset v^\\perp$.\n\\end{lem}\n\n\\begin{proof}\nSince $A^H H_{p,q}A-H_{p,q}$ is a positive semi-definite Hermitian matrix, there exists a unitary matrix $P$ such that $P^{H}(A^H H_{p,q}A-H_{p,q})P=\\text{diag}(\\lambda_1, \\lambda_2, \\ldots,\\lambda_{r},$ $0,\\ldots,0)$, with $\\lambda_{i}>0$ for all $i$.\n\nAs $\\|Av\\|_{p,q}^2=\\|v\\|_{p,q}^2$, we have $v^H (A^H H_{p,q}A-H_{p,q})v=0$. \nLet $v'=P^Hv=({x'}_1,x'_2,\\ldots, x'_{p+q})$. Then $x'_1=x'_2=\\cdots=x'_r=0$ since $v'^H \\text{diag}(\\lambda_1, \\lambda_2, \\ldots,\\lambda_{r},0,\\ldots,0)v'=v^H (A^H H_{p,q}A-H_{p,q})v=0$. It follows that $P^H(A^H H_{p,q}A-H_{p,q})PP^Hv=0$ and hence $(A^H H_{p,q}A-H_{p,q})v=0$. Then $u^H (A^H H_{p,q}A-H_{p,q})v=0$ for any $u\\in \\mathbb{C}^{p,q}$.\n Therefore $(Au)^HH_{p,q}(Av)=u^HH_{p,q}v$ for every $u\\in\\mathbb C^{p,q}$, so that $A(v^\\perp)\\subset (Av)^\\perp$. If moreover $Av=\\lambda v$ with $\\lambda\\neq 0$, then $(Av)^\\perp=v^\\perp$ and hence $A(v^\\perp)\\subset v^\\perp$.\n\\end{proof}\n\n\n\nThe following generalization of the fixed point theorem of Hayden-Suffridge to linear fractional maps of the complex unit ball was accomplished by Bisi-Bracci~\\cite{BB}: \\textit{If a linear fractional map of the complex unit ball has more than\ntwo fixed points on the boundary, then it has fixed points in the interior.} Just like what happens for automorphisms (as studied in Section~\\ref{automorphisms}), we will have to assume that no projective line in the boundary is fixed everywhere by the linear self map before we can generalize this result to $D_{p,q}$.\n\n\n\nWe have proven in Theorem~\\ref{extension thm} that any non-minimal linear self map of a generalized ball extends holomorphically across the boundary. For minimal linear self maps, we still have such an extension for a \\textit{general} boundary point since the set of indeterminacy cannot contain the entire boundary.
But even so, a minimal linear self map cannot have any boundary fixed point. In order to see this, let $F$ be a minimal linear self map of a generalized ball. The range $R$ of its matrix representation (also denoted by $F$) is of dimension $p$ (see Definition~\\ref{minimality}). On the other hand, since $F$ maps positive vectors to positive vectors, for any positive $p$-plane $P\\subset\\mathbb C^{p,q}$, we must have $\\dim_\\mathbb C F(P)=p$ and hence $F(P)=R$, which implies that $R$ is a positive $p$-plane. Hence, the image of any null vector under $F$ is either a positive vector or the zero vector. Therefore, a minimal linear self map cannot have any fixed point on the boundary.\n\n\nWe will first establish our fixed point theorem for the complex unit ball and its exterior, i.e. for $D_{1,q}$ and $D_{p,1}$. In particular, this will give a new proof for the result of Bisi-Bracci~\\cite{BB}.\n\n\n\\begin{thm}\\label{ball}\n\nLet $q\\geq 1$ and $p\\geq 2$. For $D_{1,q}$ (resp. $D_{p,1}$), any linear self map with at least three (resp. two) boundary fixed points has a fixed point in the interior.\n\n\\end{thm}\n\n\n\\begin{proof}\n\n\n\nWe will first prove the theorem for $D_{1,q}$. Let $F$ be a linear self map of $D_{1,q}$ with at least three fixed points in $\\partial D_{1,q}$. Let $A$ be a matrix representation of $F$; since $F$ has boundary fixed points, it is non-minimal by the discussion above, and so we may choose $A$ such that $A^H H_{1,q}A-H_{1,q}\\geq 0$ by Theorem~\\ref{expansion matrix}. By the hypotheses we can find three null vectors $v_1,v_2,v_3\\in\\mathbb C^{1,q}$ such that any two of them are not proportional and they are eigenvectors of $A$ associated to some non-zero eigenvalues.\n\nFirst of all, when $q=1$, we have $\\dim_\\mathbb C(\\mathbb C^{1,1})=2$. But $v_1,v_2,v_3$ are pairwise non-proportional eigenvectors and so we deduce that $A$ can only be a non-zero multiple of the identity matrix and the desired result follows in this case.\n\nNow we can assume that $q\\geq 2$. Let $E=span_\\mathbb C(v_1,v_2,v_3)$. Then $E$ is an invariant subspace of $A$. Moreover, the restriction $H_{1,q}|_E$ is non-degenerate since there is no isotropic subspace in $\\mathbb C^{1,q}$ of dimension greater than one.\n\n If $\\dim_\\mathbb C(E)=2$, then $H_{1,q}|_E$ must be of signature $(1,1)$ and we are back to the case $q=1$. Suppose now $\\dim_\\mathbb C(E)=3$. Then $H_{1,q}|_E$ is of signature $(1,2)$. Let $E_{12}=span_\\mathbb C(v_1,v_2)$. Then, the signature of $H_{1,q}|_{E_{12}}$ is again $(1,1)$. Hence, the orthogonal complement of $E_{12}$ in $E$, denoted by $N_{12}$, is complementary to $E_{12}$ and $\\dim_\\mathbb C(N_{12})=1$. Therefore, $E=E_{12}\\oplus N_{12}$. Note that since $\\|Av_j\\|^2_{1,q}=\\|v_j\\|_{1,q}^2=0$ for $j=1,2$, by Lemma~\\ref{semipositive}, we see that $N_{12}=v_1^\\perp\\cap v_2^\\perp$ is invariant by $A$ and thus is a one dimensional eigenspace of $A$. Choose now an eigenvector $v_4\\in N_{12}$. But on the other hand, since $H_{1,q}|_E$ is of signature $(1,2)$, we see that $H_{1,q}|_{N_{12}}$ is negative definite. This implies that $v_3$ and $v_4$ are not proportional. We now have four eigenvectors $v_1,v_2,v_3,v_4\\in E$ which are pairwise non-proportional. But $\\dim_\\mathbb C(E)=3$, so this is possible only if the restriction of $A$ on $E$ has at most two different eigenvalues.
Since these four eigenvectors carry at most two distinct eigenvalues, two of the null vectors $v_1,v_2,v_3$, say $v_j$ and $v_k$, must be associated to the same eigenvalue $\\lambda$ (which is non-zero). Then for $E_{jk}:=span_\\mathbb C\\{v_j,v_k\\}$, which contains two linearly independent null vectors, $H_{1,q}|_{E_{jk}}$ is of signature $(1,1)$, and hence we get an eigenvector $v\\in E_{jk}$ which is also a positive vector. The desired result now follows.\n\nFor the case of $D_{p,1}$ with $p\\geq 2$, we similarly let $G$ be a linear self map of $D_{p,1}$ with at least two fixed points in $\\partial D_{p,1}$ and let $B$ be a matrix representation of $G$, chosen, as before, such that $B^H H_{p,1}B-H_{p,1}\\geq 0$. By the hypotheses, we can find two null vectors $u_1,u_2\\in\\mathbb C^{p,1}$ such that they are not proportional and are eigenvectors of $B$ associated to some non-zero eigenvalues. Let $F_{12}:=span_\\mathbb C(u_1,u_2)$. Again, since $\\|Bu_j\\|^2_{p,1}=\\|u_j\\|_{p,1}^2=0$ for $j=1,2$, we see from Lemma~\\ref{semipositive} that $Q_{12}:=F_{12}^\\perp=u_1^\\perp\\cap u_2^\\perp$ is invariant by $B$. The signature of $H_{p,1}|_{F_{12}}$ is $(1,1)$ and thus $H_{p,1}|_{Q_{12}}$ is of signature $(p-1,0)$. But $\\dim_\\mathbb C(Q_{12})=p-1$ and hence $Q_{12}$ is a positive definite invariant subspace of $B$. It now follows that we can find an eigenvector of $B$ which is a positive vector. Therefore, we get a fixed point in $D_{p,1}$.\n\n\n\\end{proof}\n\n\n\n\nWe can now generalize our result to an arbitrary generalized ball.\n\n\\begin{thm}\\label{bb generalization}\nLet $F$ be a linear self map of $D_{p,q}$. Suppose there is no projective line in $\\partial D_{p,q}$ on which every point is fixed by $F$.\n\\begin{enumerate}\n\\item If $p\\leq q$ and $F$ has at least $2p+1$ fixed points on $\\partial D_{p,q}$, then $F$ must have a fixed point in $D_{p,q}$. Furthermore, there is a projective line fixed everywhere by $F$.\n\\item If $p>q$ and $F$ has at least $2q$ fixed points on $\\partial D_{p,q}$, then $F$ must have a fixed point in $D_{p,q}$.\n\\end{enumerate}\n\n\\end{thm}\n\n\\noindent\\textbf{Remark.} Note that every linear self map of the ball (or its exterior) satisfies the condition given in Theorem~\\ref{bb generalization}\nsince there is no projective line in the boundary.\nHowever, in the case of $D_{p,q}$ with $p,q>1$, the condition is necessary. For instance, there is an automorphism\nof $D_{2,2}$ which has infinitely many fixed points on the boundary but no fixed point in $D_{2,2}$ (Example~\\ref{hs example}). \nFurthermore, the number $2p+1$ in Theorem~\\ref{bb generalization} (1) is sharp since there is an automorphism having four fixed points on $\\partial D_{2,2}$ and no fixed point in $D_{2,2}$ and there is no projective line in $\\partial D_{2,2}$ on which every point is fixed (Example~\\ref{sharp example 2}).\n\n\\begin{proof}[Proof of Theorem~\\ref{bb generalization}]\nLet $A$ be a matrix representation of $F$. \nAs explained before Theorem~\\ref{ball}, we can assume that $F$ is non-minimal. Thus, we can choose $A$ such that $A^H H_{p,q}A-H_{p,q}\\geq 0$ by Theorem~\\ref{expansion matrix}. \n\n\nLet $s\\geq 2$ be a positive integer and for every $k\\in\\{1,\\ldots,s\\}$, let $v_k\\in\\mathbb C^{p,q}$ be a null vector which is also an eigenvector of $A$ associated to a non-zero eigenvalue $\\lambda_k$. Suppose that $\\{v_1,\\ldots,v_s\\}$ are linearly independent. Thus, $\\{[v_1],\\ldots,[v_s]\\}\\subset\\partial D_{p,q}$ are $s$ distinct boundary fixed points of $F$. For $i\\neq j$, define $E_{ij}:=span_\\mathbb C\\{v_i, v_j\\}\\subset\\mathbb C^{p,q}$.\n\n Assume that $\\lambda_k=\\lambda_\\ell$ for some $k\\neq\\ell$.
Then, as $v_k$, $v_\\ell$ are two linearly independent null vectors, $H_{p,q}|_{E_{k\\ell}}$ must be of signature $(1,1)$ or $E_{k\\ell}$ is isotropic. In the first situation, we can find a positive vector in $E_{k\\ell}$ while in the latter situation, the projectivization of $E_{k\\ell}$ gives a projective line in $\\partial D_{p,q}$. But $E_{k\\ell}$ is an eigenspace of $A$ associated to $\\lambda_k=\\lambda_\\ell$, so we either get a fixed point of $A$ in $D_{p,q}$ or we get a projective line in $\\partial D_{p,q}$ on which every point is fixed by $A$.\n\nSo now we only need to consider the case where $\\lambda_1,\\ldots,\\lambda_s$ are distinct.\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,$(\\star)$\n\n\nWe now divide the proof into two cases: \n\\begin{enumerate}\n\\item Every $E_{ij}$ is isotropic; \\label{i}\n\\item At least one $E_{ij}$ is not isotropic. \\label{ii}\n\\end{enumerate}\n\nIn case \\eqref{i}, $v_i$ is orthogonal (with respect to $H_{p,q}$) to $ v_j$ for every $i,j$ and it follows that $span_\\mathbb C \\{ v_1,\\ldots, v_{s}\\}$ is isotropic.\nIf $s\\geq 2p+1$ or $s\\geq 2q$, we get a contradiction since the maximal isotropic subspace in $\\mathbb C^{p,q}$ is of dimension $\\min(p,q)$.\n\nIn case \\eqref{ii}, without loss of generality, we may assume that $E_{12}$ is non-isotropic. \nThe restriction of $H_{p,q}$ on $E_{12}$ must be of signature $(1,1)$ (hence non-degenerate) as the null vectors $v_1$ and $v_2$ are linearly independent. \n Now take any $v_k$, where $k\\neq 1, 2$. Let $E_{12k}:=span_\\mathbb C\\{v_1, v_2, v_k\\}$, which is a 3-dimensional invariant subspace of $A$. Let $N$ be the orthogonal complement of $E_{12}$ in $E_{12k}$. Then $N\\cap E_{12}=\\{0\\}$ since $H_{p,q}|_{E_{12}}$ is non-degenerate. In particular, $\\dim_\\mathbb C(N)=1$. Moreover, as $\\|Av_1\\|_{p,q}^2=\\|v_1\\|_{p,q}^2=\\|Av_2\\|_{p,q}^2=\\|v_2\\|_{p,q}^2=0$, by Lemma \\ref{semipositive}, $N=v_1^\\perp\\cap v_2^\\perp$ is invariant under $A$. Hence, $N$ is a one-dimensional eigenspace of $A$. If $v_k\\not\\in N$, then we get four distinct one-dimensional eigenspaces in $E_{12k}$. Since $\\dim_\\mathbb C(E_{12k})=3$, at least three of these one-dimensional eigenspaces are associated to the same eigenvalue. \nIt implies that at least two elements of $\\{\\lambda_1,\\lambda_2,\\lambda_k\\}$ are equal, which contradicts ($\\star$).\n\nFinally, we just need to settle the situation where $v_k$ belongs to the orthogonal complement of $E_{12}$ for every $k\\neq 1,2$.\nLet $N_{12}$ be the orthogonal complement of $E_{12}$ in $\\mathbb C^{p,q}$. Then, as before, since $H_{p,q}|_{E_{12}}$ is non-degenerate, we have $\\mathbb C^{p,q}=E_{12}\\oplus N_{12}$ and hence the restriction of $H_{p,q}$ on $N_{12}$ is of signature $(p-1,q-1)$. As argued previously, $N_{12}$ is an invariant subspace of $A$ by Lemma~\\ref{semipositive}.\nFurthermore, $[N_{12}]\\cap D_{p,q}\\cong D_{p-1,q-1}$, where $[N_{12}]$ denotes the projectivization of $N_{12}$ and for every $k\\neq 1,2$, we have $[v_k]\\in [N_{12}]\\cap\\partial D_{p,q}\\cong\\partial D_{p-1,q-1}$. That is, we have $s-2$ fixed points on $\\partial D_{p-1,q-1}$ and the desired result now follows by induction together with Theorem \\ref{ball}, which serves as the initial step for the induction. 
The proof is complete.\n\\end{proof}\n\n\n\n\n\\subsection{Real generalized balls}\n\n\nLet $D_{p,q}^\\mathbb R$ be the subset of $D_{p,q}$\ndefined by \n$$\nD_{p,q}^\\mathbb R = \\left\\{ [x_1,\\ldots, x_{p+q}]\\in D_{p,q} : x_i\/x_j\\in \\mathbb R \\, \\text{ for all } i,j \\textrm{ whenever } x_j\\neq 0\\,\\right\\}.\n$$\n\nThus, for a point in $D_{p,q}^\\mathbb R$, we can choose homogeneous coordinates which are all real. We call $D_{p,q}^\\mathbb R$ a \\textit{real generalized ball}.\nWhen $p=1$, Cowen-MacCluer \\cite[Theorem 10]{CM} proved that any linear fractional map with real coefficients\nmaps $D_{1,q}^\\mathbb R$ into itself if and only if it maps $D_{1,q}$ into itself. We now show that the same holds true for any $p\\geq 1$.\n\n\n\\begin{thm}\\label{real generalized ball}\nLet $M \\in M_{p+q}$ be a matrix with real entries.\nThen $M$ defines a linear self map of $D_{p,q}$ if and only if it defines a linear self map of $D_{p,q}^\\mathbb R$ in the same manner.\n\\end{thm}\n\\begin{proof}\nSuppose that $M$ defines a linear self map of $D_{p,q}$.\nSince the entries of $M$ are all real, $D_{p,q}^\\mathbb R$ is mapped into $D_{p,q}^\\mathbb R$.\n\nSuppose that $M$ defines a linear self map of $D_{p,q}^\\mathbb R$. For $[z]\\in D_{p,q}$, write $z = x+iy$ with\n$x=(x', x'')\\in\\mathbb R^{p+q}$, $y=(y', y'')\\in\\mathbb R^{p+q}$\nfor $x'$, $y'\\in \\mathbb R^p$, $x''$, $y''\\in \\mathbb R^q$.\nNote that $|x'|^2 + |y'|^2 > |x''|^2 + |y''|^2$.\nIf $|x'|^2 >|x''|^2$ and $|y'|^2 >|y''|^2$, then $[x],[y]\\in D_{p,q}^\\mathbb R$, so that $\\|Mx\\|^2_{p,q}>0$ and $\\|My\\|^2_{p,q}>0$; since $Mx$ and $My$ are real, $\\|Mz\\|^2_{p,q}=\\|Mx\\|^2_{p,q}+\\|My\\|^2_{p,q}>0$ and $Mz = Mx+iMy$ belongs to $D_{p,q}$.\nIf otherwise, without loss of generality, we may assume that\n$|x'|^2 \\leq|x''|^2$ and $|y'|^2 > |y''|^2$.\nNow, for any $\\theta\\in\\mathbb R$, we have $[e^{i\\theta}z]=[z]$. And since $e^{i\\theta}z = x \\cos\\theta - y \\sin\\theta + i(y\\cos\\theta + x\\sin\\theta) $ and the expression\n\\begin{equation}\\label{real part}\n|x'\\cos\\theta - y'\\sin\\theta|^2 - |x'' \\cos\\theta - y'' \\sin\\theta|^2\n\\end{equation}\nequals $|x'|^2 - |x''|^2\\leq 0$ when $\\theta=0$ and\n$|y'|^2 - |y''|^2> 0$ when $\\theta=\\pi\/2$, by continuity there exists\n$\\theta$ such that the expression \\eqref{real part} vanishes.\nThis implies that for $[z]\\in D_{p,q}$, we can always choose homogeneous coordinates $z=x+iy$ such that $|x'|^2 = |x''|^2$ and $|y'|^2 > |y''|^2$. Here the last inequality is due to the fact that $e^{i\\theta }z\\in D_{p,q}$.\nSince $\\|Mx\\|^2_{p,q}\\geq 0$ by continuity (for $x\\neq 0$, the point $[x]$ lies in the closure of $D_{p,q}^\\mathbb R$) and $\\|My\\|^2_{p,q}>0$, we again have $\\|Mz\\|^2_{p,q}>0$. Hence $Mz\\in D_{p,q}$ and in particular $M$ induces a linear self map of $D_{p,q}$.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{Examples}\\label{examples}\n\nIn this section, we collect a number of examples to which the reader has been referred from various places in the article.\n\n\\begin{example}\\label{no extension}\nLet $F$ be the rational map on $\\mathbb P^3$ defined by $F[z_1,z_2,z_3,z_4]=[z_1+z_3, z_2, 0, 0]$. Then the indeterminacy of $F$ is the projective subspace spanned by $[1,0,-1,0]$ and $[0,0,0,1]$. As elements in $\\mathbb C^{2,2}$, these two vectors span a negative semi-definite 2-dimensional subspace and hence we see that the set of indeterminacy of $F$ intersects $\\partial D_{2,2}$ at $[1,0,-1,0]$ but lies outside $D_{2,2}$. Thus, $F$ is holomorphic on $D_{2,2}$ but cannot be extended across the boundary point $[1,0,-1,0]$. It is a minimal linear self map of $D_{2,2}$ because the range of $F$ (as a linear map on $\\mathbb C^{2,2}$) is positive definite and of dimension 2.
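As a quick numerical sanity check of these claims (a sketch of ours, not part of the original; Python with \\texttt{numpy} assumed), one may verify that the kernel of the matrix of $F$ is spanned by $(1,0,-1,0)^t$ and $(0,0,0,1)^t$, that this kernel is negative semi-definite with respect to $H_{2,2}$, and that the range is a positive definite $2$-plane:

\\begin{verbatim}
# Sketch: verify the claims of this example numerically.
import numpy as np

H = np.diag([1.0, 1.0, -1.0, -1.0])              # H_{2,2}
M = np.array([[1, 0, 1, 0],                      # [z1,z2,z3,z4] -> [z1+z3, z2, 0, 0]
              [0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)

# Indeterminacy = projectivized kernel, spanned by (1,0,-1,0)^t and (0,0,0,1)^t.
K = np.array([[1, 0, -1, 0], [0, 0, 0, 1]], dtype=float).T
assert np.allclose(M @ K, 0)
assert np.linalg.matrix_rank(M) == 2             # so the kernel is 2-dimensional

# The kernel is negative semi-definite w.r.t. H_{2,2} ...
assert np.all(np.linalg.eigvalsh(K.T @ H @ K) <= 1e-12)

# ... while the range, spanned by e1 and e2, is positive definite,
# so the induced map is a minimal linear self map of D_{2,2}.
R = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
assert np.all(np.linalg.eigvalsh(R.T @ H @ R) > 0)
\\end{verbatim}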
\n\n\\end{example}\n\n\\begin{example}\\label{hs example}\nLet $$A = \\left(\n \\begin{array}{cc}\n \\sqrt{2}I_2& I_2 \\\\\n I_2 & \\sqrt{2}I_2\\\\\n\n \\end{array}\n \\right)\\in U(2,2).$$\n$A$ induces an automorphism of $D_{2,2}$. The characteristic polynomial of $A$ is $(x-\\sqrt{2}+1)^2(x-\\sqrt{2}-1)^2$ and $A$ has two eigenvalues $\\sqrt{2} \\pm 1$. \nThe eigenspace of the eigenvalue $\\sqrt{2}-1$ is spanned by $v_1=(1,0,-1,0)^t$ and $v_2 = (0,1,0,-1)^t$ and that of $\\sqrt{2}+1$ is spanned by $v_3=(1,0,1,0)^t$ and $v_4=(0,1,0,1)^t$. One also sees immediately that both eigenspaces are isotropic in $\\mathbb C^{2,2}$ and thus their projectivizations lie inside $\\partial D_{2,2}$. This implies that $A$ has infinitely many fixed points on the boundary but no fixed point in $D_{2,2}$. \nThis example can be generalized to $D_{p,p}$ in a straightforward way.\n\\end{example}\n\n\\begin{example}\\label{sharp example 3}\nLet\t$$A = \\left(\n \\begin{array}{cccc}\n 1& 0 & 0& 0 \\\\\n 0 & -1 & 0 & 0\\\\\n 0 & 0 & i & 0\\\\\n 0 &0&0& -i\\\\\n \\end{array}\n \\right)\\in U(2,2).$$\nTrivially $A$ has four different eigenvalues and two of them correspond to fixed points in $D_{2,2}$\nand the other two of them corresponds to fixed points in $\\mathbb P^3 \\setminus \\overline{D_{2,2}}$. There is no projective line on which every point is fixed.\n\\end{example}\n\n\\begin{example}\\label{sharp example 2}\nLet\t$$A = \\left(\n \\begin{array}{cccc}\n 1& 1 & 1& 0 \\\\\n 1 & -1 & 0 & 1\\\\\n 1 & 0 & 1 & 1\\\\\n 0 &1&1& -1\\\\\n \\end{array}\n \\right)\\in U(2,2).$$\n\nThe matrix $A$ has four different eigenvalues. The eigenvalues and the corresponding eigenvectors are the following:\n\\begin{equation}\n\\begin{array}{ll}\n\\lambda_1=1+\\sqrt{2}, & \\alpha_1 =(\\frac{1}{4}+\\frac{\\sqrt{2}}{8},\\frac{\\sqrt{2}}{8},\\frac{1}{4}+\\frac{\\sqrt{2}}{8},\\frac{\\sqrt{2}}{8})^t \\\\\n\\lambda_2=-1+\\sqrt{2},& \\alpha_2=(\\frac{1}{4}+\\frac{\\sqrt{2}}{8},\\frac{\\sqrt{2}}{8},-\\frac{1}{4}-\\frac{\\sqrt{2}}{8},-\\frac{\\sqrt{2}}{8})^t \\\\\n\\lambda_3=1-\\sqrt{2},& \\alpha_3=(\\frac{1}{4}-\\frac{\\sqrt{2}}{8},-\\frac{\\sqrt{2}}{8}, \\frac{1}{4}-\\frac{\\sqrt{2}}{8},-\\frac{\\sqrt{2}}{8})^t \\\\\n\\lambda_4=-1-\\sqrt{2},& \\alpha_4=(\\frac{1}{4}-\\frac{\\sqrt{2}}{8},-\\frac{\\sqrt{2}}{8},-\\frac{1}{4}+\\frac{\\sqrt{2}}{8},\\frac{\\sqrt{2}}{8})^t \\\\\n\\end{array}\n\\end{equation}\nThe automorphism of $D_{2,2}$ induced from $A$ has four fixed points on $\\partial D_{2,2}$ but it has no fixed point in $D_{2,2}$. Moreover there is no projective line in $\\partial D_{2,2}$\non which $A$ fixes every point. \n\\end{example}\n\n\n\n\n\n\n\n\n\n\\begin{example} \\label{nondiagonalizable}\nFor any non-zero real number $\\alpha$ let $$A = \\left(\n \\begin{array}{cccc}\n 1& \\alpha & 0& \\alpha \\\\\n -\\alpha & 1 & \\alpha & 0\\\\\n 0 & \\alpha & 1 & \\alpha\\\\\n \\alpha &0&-\\alpha& 1\\\\\n \\end{array}\n \\right)\\in U(2,2).$$\nThen the characteristic polynomial of $A$ is $(1-x)^4$ and the minimal polynomial of $A$ is $(1-x)^2$. Thus, in any Jordan canonical form, the Jordan blocks of $A$ are at most of rank 2. By direct computation, one sees that $A$ only has two linearly independent eigenvectors, which are chosen to be $ (1,0,1,0)^t$ and $(0,1,0,-1)^t$, associated to the eigenvalue 1. In particular, there are two Jordan blocks of rank 2.\n\n\n\n\\end{example}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nThe study of opinion dynamics focuses on decision making process in\nmulti-agent systems. This line of research dates back to 1950s\n\\cite{french1956formal}. Since then, the study of opinion dynamics has\nattracted researchers from diverse areas, and hence various approaches have been\nproposed for modeling of the evolution of opinions \\cite{castellano2009statistical}. \nIt is natural to assume that an agent's decision making process is influenced by the information he\/she receives from the society. The influence of society has been considered in the form of the agent's interactions with his\/her ``neighbors.'' An agent's neighbors can be chosen based on the agent's opinion \\cite{stamoulas2018convergence,blondel2010continuous,hegselmann2002opinion,hendrickx2016symmetric,ceragioli2012continuous} or based on a given communication graph independent of the opinions \\cite{degroot1974reaching,jia2015opinion,friedkin1990social,ceragioli2012continuous,bartolozzi2005stochastic,holyst2000phase,yildiz2013binary,sznajd2000opinion}. \nThe set of possible opinions of a given \nagent at a given time may be regarded as a finite discrete set without any \nadditional structure, the simplest example being a binary set \\cite{bartolozzi2005stochastic,holyst2000phase,yildiz2013binary,sznajd2000opinion,holley1975ergodic,san2005binary,galam2013modeling,zanette2006opinion,mobilia2007role,lyst2002social,palombi2014stochastic}, or alternatively \nas a continuum of values in $\\mathbb{R}$ or $\\mathbb{R}^d$ \\cite{stamoulas2018convergence,blondel2010continuous,hegselmann2002opinion,degroot1974reaching,weisbuch2002meet}. The structure of the $\\mathbb{R}^d$ imposes a concept of nearby opinions \nand leads to the notion of confidence bounds where an agent may only interact \nwith other agents within his\/her confidence bound. See for instance \\cite{hegselmann2002opinion,blondel2010continuous,hendrickx2016symmetric,stamoulas2018convergence,ceragioli2012continuous}. In this paper, we shall \nfocus on the binary opinion case and as such the notion of bounded confidence\ndoes not apply. See \\cite{bartolozzi2005stochastic,holyst2000phase,yildiz2013binary,sznajd2000opinion,holley1975ergodic,san2005binary,galam2013modeling,zanette2006opinion,mobilia2007role,lyst2002social} for binary opinion models.\n\nOne simple way an agent can choose to update his\/her opinion would be by interacting with his\/her neighbor(s) and adopting the opinion of the neighbor or the leading opinion of the group of neighbors\n \\cite{holley1975ergodic,san2005binary,galam2013modeling,zanette2006opinion}. \nIt is important to note that this opinion updating rule assumes that agents conform to the neighbor(s) with whom they are interacting, and hence it does not take agents' personalities into account.\nIn this respect, \\cite{mobilia2007role,yildiz2013binary,galam2013modeling,sznajd2011phase,schneider2004influence,de2005spontaneous} and references therein introduce agents with \\emph{personalities}. \\cite{yildiz2013binary,galam2013modeling} include \\emph{stubborn} agents who do not change their opinion, however, influence other agents. \\emph{Independent} agents who change their opinions independent of the interactions are considered in \\cite{sznajd2011phase}. As a special case of independent agents, \\emph{zealots} are introduced in \\cite{mobilia2007role}. Zealots are independent agents who may favor one opinion. 
\\cite{galam2013modeling,schneider2004influence,de2005spontaneous} study \\emph{contrarian} agents who adopt the opposite of the leading opinion with a certain probability. The effects of contrarians and independent agents on the group's limiting behavior are discussed in \\cite{nyczka2013anticonformity}. The presence of \\emph{leaders} is considered in \\cite{ellero2013stochastic,lyst2002social,holyst2000phase}. \n\nIt is natural to focus on two possible extreme outcomes:\nbalance of opinions and consensus, where all agents agree. In reality, the situation is not ``black\nor white''; for instance, one may find that in the long term, one opinion is likely\nto be held by, say, $70\\%$ of the agents, resulting in dominance of one opinion. \nThe ideas of balance and dominance of opinions are explored in \\cite{galam2013modeling,mobilia2007role,sznajd2000opinion} in a discrete-time setting in relation to the personalities of agents. \n\nIn this study, we propose a continuous-time binary opinion model (with opinions $0$ or $1$) for a group in which agents\nare considered to have personalities defined by a monotonic function and a\nspontaneity coefficient. An agent's personality determines the effect of\nsocial influence on the agent (e.g., conformist, rebellious). Moreover, in our model, in contrast to pairwise\ninteractions between designated neighbors, agents are assumed\nto be informed of the distribution of opinions in the entire group at each time\n$t$. We investigate various personality traits and their effect on the group's limiting behavior for a large number of agents. \nIn particular, the personality traits that lead to dominance of one opinion are our main interest.\nWe note that our model can be thought of as the result of a situation \nwhere agents do not change their mind after one (pairwise or group) interaction, but rather after \nseveral interactions. In this case, assuming all agents can interact with all \nothers, an agent in a large population interacts with\nsufficiently many others before changing his\/her mind, and such a sample can be taken as a good approximation of sampling the entire\npopulation. \n\nWe also assume (as is done in other models) that there is no ``natural bias'' towards one of the opinions. \nAs an example, if the opinion is about whether the earth is flat or not, one\nwould expect that in an informed society, agents are more likely to believe\nthe truth. We focus on the long-term probability distribution for \nthe number of agents with opinion $1$ when the number $N$ of total agents is\nvery large. In particular, we study the effects of agents' personalities on balance \nand dominance of the opinions. We found that the shape of the so-called conformity function plays an important role.\n\n\nThe paper is organized as follows. In Section \\ref{sec:model} we introduce our\nbinary model and focus on a \\emph{homogeneous} group. In Section \\ref{sec:C1}\nwe study the effects of personality of the (homogeneous) group on the group's\nlimiting behavior. We extend our model to \\emph{heterogeneous} groups and\nexamine limiting decision behavior when the group is formed by two extreme\npersonality classes in Section \\ref{sec:heterogeneous}. Concluding remarks follow in Section\n\\ref{sec:conclusions}. \n\n\n\n\n\n\n\\section{\\label{sec:model}The model and the homogeneous case}\n\nWe consider a group of $N$ agents where each agent holds an opinion from the set $\\{0,1\\}$.
\nAn agent flips his\/her opinion based on the group's current configuration and his\/her \\emph{personality}. Here, \\emph{personality} of an agent $i$, $i = 1,\\dots,N$, is given by the pair $(\\phi_i, \\beta_i)$ where \n $\\phi_i: [0,1]\\to[0,\\infty)$ is a monotonic function that accounts for {\\em conformity}, $\\beta_i$ is a \nnonnegative quantity that accounts for {\\em spontaneity}. We call a group\n\\emph{homogeneous} if all agents in the group share the same personality,\n$\\phi = \\phi_i, \\beta = \\beta_i \\; \\forall i= 1,\\dots,N$. We first look at the\ncase when the group is homogeneous. \n\nWe define\n$X^N(t)$ to be the number of agents holding opinion 1\n at time $t \\in [0,\\infty)$ and assume that a given agent \nflips his\/her opinion during a time interval $(t,t+h]$ with a \nprobability\n\\begin{equation}\n\\label{eq:opinion-change-rate}\n(\\phi(n\/N) + \\beta) h + o(h) \\;\\; h \\to 0+,\n\\end{equation}\nwhere $n$ is the number of\nagents with opposite opinion and $N$ is the total number of agents at time\n$t$.\nThus we note that $\\phi(x)$ determines the rate of conformity where $x$ is the\nfraction of the population holding the opposite view. We note that the\npairwise interaction model is a special case of our model where $\\phi$ is\nlinear, that is\n$\\phi(x)=\\alpha x$ for some $\\alpha>0$. \n \nThis results in a Markov process model for $X^N(t)$ with the state space $\\{0,1,2,\\ldots,N\\}$. \n Moreover, we regard $X^N(t)$ as a birth-death process since when an agent flips his\/her opinion, the\nprocess increases\/decreases by one. \nUsing the rate of opinion change for an agent defined by \\eqref{eq:opinion-change-rate}, \n the birth rate $\\lambda_i^N$ and the death rate $\\mu_i^N$ at the state $X^N(t) = i$ can be written as follows:\n\\begin{eqnarray}\n\\label{eq:jump-up}\n\\lambda_i^N\n&=& \\left(\\phi \\left(\\frac{i}{N} \\right) + \\beta \\right)(N-i)\n= (N-i) \\phi \\left(\\frac{i}{N} \\right)+\\beta (N-i), \\nonumber \\\\\n\\mu_i^N \n\\label{eq:jump-down}\n&=& \\left( \\phi \\left(\\frac{N-i}{N} \\right) + \\beta \\right) i\n= i \\phi \\left( \\frac{N-i}{N} \\right) + \\beta i.\n\\end{eqnarray}\nSince the state space $\\{0,1,2,\\ldots,N\\}$ is finite, $\\lambda_N^N=0$ and $\\mu_0^N=0$. \nUsing these transition rates one can construct a transition rate matrix\n$Q^N=[q_{ij}^N]$, where $q_{i( i+1)}^N = \\lambda_i^N$, $q_{i( i-1)}^N = \\mu_i^N$, $q^N_{ii}= - \\sum_{j\\ne i} q^N_{ij}$\nand $q_{ij}^N = 0 \\; \\forall j \\notin \\{i-1,i, i+1\\}$. \n\nWe may rewrite the birth rate $ \\lambda_i^N $ and the death rate $\\mu_i^N$ as follows:\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:bd-rates}\n \\lambda_i^N &= N \\bar{\\lambda} \\left(\\frac{i}{N} \\right), \\quad \\\\\n \\mu_i^N &= N\\bar{\\mu}\\left(\\frac{i}{N} \\right) \\quad i =0, 1,\\dots, N,\n\\end{aligned}\n\\end{equation}\nwhere $\\bar{\\lambda},\\bar{\\mu}:[0,1] \\rightarrow [0,\\infty)$ are\n\n given by\n\\begin{equation}\n\\begin{aligned}\n\\label{lamda_mu_bar}\n\t\\bar{\\lambda}(x) =& (1-x) \\phi(x)+ \\beta (1-x), \\\\\n\t\\bar{\\mu}(x) =& x \\phi(1-x)+ \\beta x.\n\t\\end{aligned}\n\\end{equation} \nWe define the probability $p^N(t)=(p^N_0(t), p^N_1(t), \\ldots, p^N_N(t) )$,\nwhere $p^N_i(t)=\\mathbb{P}[X^N(t)=i]$ for each $i=0,1,\\ldots,N$. 
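The rates \\eqref{eq:jump-up} and the transition rate matrix $Q^N$ are straightforward to set up numerically. The following short sketch (ours, for illustration only, assuming Python with \\texttt{numpy}; the helper names are not from the paper) builds $Q^N$ for a given conformity function $\\phi$ and spontaneity coefficient $\\beta$:

\\begin{verbatim}
# Sketch: birth-death rates and rate matrix Q^N for the homogeneous model.
import numpy as np

def rates(N, phi, beta):
    """lam[i] = lambda_i^N and mu[i] = mu_i^N for i = 0, ..., N."""
    i = np.arange(N + 1)
    lam = (phi(i / N) + beta) * (N - i)      # birth rates; lam[N] = 0
    mu = (phi((N - i) / N) + beta) * i       # death rates; mu[0]  = 0
    return lam, mu

def rate_matrix(N, phi, beta):
    lam, mu = rates(N, phi, beta)
    Q = np.zeros((N + 1, N + 1))
    idx = np.arange(N)
    Q[idx, idx + 1] = lam[:-1]               # jumps i -> i + 1
    Q[idx + 1, idx] = mu[1:]                 # jumps i -> i - 1
    Q[np.arange(N + 1), np.arange(N + 1)] = -Q.sum(axis=1)
    return Q

Q = rate_matrix(50, phi=lambda x: x**2, beta=0.2)
assert np.allclose(Q.sum(axis=1), 0.0)       # rows of a rate matrix sum to zero
\\end{verbatim}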
The\nprobability mass function satisfies the Kolmogorov's forward equation \n\\begin{equation}\n\\label{eq:p-derivative}\n\\dot{p}^N_j(t) = \\sum_k p^N_k(t)q^N_{kj}.\n\\end{equation}\nIt should be noted that when \\emph{spontaneity} coefficient $\\beta =0$, \nthe states $i =0$ and $i= N$ are absorbing states since \n$\\lambda_0^N = \\mu_N^N\n= 0$. When $\\beta>0$, the birth-death process $X^N(t)$ is an irreducible\nMarkov process with the finite state space $\\{0,1,2,\\ldots,N\\}$ and thus,\n$X^N(t)$ is ergodic and attains a unique stationary probability distribution\nas $t \\to \\infty.$ The probability vector $p^N(t) \\to \\pi^N =\n(\\pi^N_0,\\pi^N_1,\\ldots,\\pi^N_N)$ as $t\\rightarrow \\infty$ and $\\pi^N$ does\nnot depend on the initial state $X^N(0)$. Using the detailed balance condition at stationarity, one can obtain\n\\begin{equation*}\n \\pi^N_n= \\frac{\\lambda^N_{n-1}\\lambda^N_{n-2}\\ldots\\lambda^N_0}\n {\\mu^N_n \\mu^N_{n-1}\\ldots \\mu^N_1} \\pi_0^N, \\quad n=1,2,\\ldots,N.\n\\end{equation*}\nHence,\n\\begin{eqnarray}\n\\label{eq:pi_n}\n \\pi_n^N &=& \\frac{R^N_n}{\\sum_{k=0}^N R^N_k}, \\quad n=0,1,\\ldots,N. \\\\\n\\label{eq:pi_0}\n \\pi_0^N &=& \\frac{1}{\\sum_{k=0}^N R^N_k}, \n\\end{eqnarray}\nwhere \n\\begin{equation}\n\\label{eq:R_n^N}\nR_n^N = r^N_n r^N_{n-1}\\ldots r^N_1, \\quad n = 1,2,\\ldots,N\n\\end{equation}\n for $r_n^N = \\frac{\\lambda^N_{n-1}}{\\mu^N_n}$ and $R^N_0 = 1$.\n \n\n\\section{\\label{sec:C1}The effects of the conformity function : The homogeneous case }\n\nIn order to study $X^N(t)$ for large $N$ and $t$, we shall consider the normalized process $X_N(t) = \\frac{X^N(t)}{N}$. As $N \\to \\infty$, in the fluid limit, one expects $X_N$ to converge to $x$ \nwhere $x$ satisfies the ODE \n\\begin{equation}\n\\label{eq:binary-ode-gen}\n\\dot{x}(t)= F(x(t)) = \\bar{\\lambda}\\big(x(t)\\big) - \\bar{\\mu}\\big(x(t)\\big),\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq:vectorfield-homo}\nF (x) = \\phi\\left(x\\right) \\left( 1-x \\right) - x \\phi\\left(1-x \\right) +\n \\beta (1 - 2 x).\n\\end{equation}\nIntuitively, when $N$ and $t$ are both large, one expects the peaks of the probability \ndistribution of $X_N(t)$ to occur near the stable equilibria of this ODE. \nThis observation will motivate the rest of the analysis in this paper. \n\nWhile we do not make new claims about rigorous limits as $t \\to \\infty$ and $N \\to \\infty$ jointly, some rigorous limits exist in literature that we \nmention here. A major result is that if $F$ is $C^1$ (continuously differentiable), then \ngiven any finite time interval $[0,T]$, as $N \\to \\infty$ $X_N \\to x$ uniformly \non $[0,T]$ with probability one, and moreover a diffusion approximation\nfor $X_N$ is also available \\cite{ethier2009markov}. Since this result\nonly considers the limit as $N \\to \\infty$ over finite intervals of time, \none needs to be cautious in interpreting the large $N$ and large $t$\napproximation. In particular, if one fixes any large final time $t$, and considers \nincreasing $N$, then one expects distributions at time $t$ to have peaks\naround the stable equilibria of the ODE. \nWhen $F$ has a unique globally attractive \nequilibrium $\\overline{x}$, under suitable conditions, as $N \\to \\infty$ one can\nrigorously justify a Gaussian approximation with mean $\\overline{x}$ for the\nstationary probability distribution (see Theorem 2.7 in \\cite{kurtz1976limit}).\n\nHenceforth, we shall study the system \\eqref{eq:binary-ode-gen} for its\nstable equilibria. 
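For readers who wish to experiment, the following sketch (again ours, assuming Python with \\texttt{numpy}) computes the stationary distribution $\\pi^N$ from \\eqref{eq:pi_n}--\\eqref{eq:pi_0} in log space and locates the equilibria of \\eqref{eq:binary-ode-gen} from the sign changes of $F$ in \\eqref{eq:vectorfield-homo}; for large $N$, the peaks of $\\pi^N$ should sit near the stable equilibria:

\\begin{verbatim}
# Sketch: stationary distribution via detailed balance and ODE equilibria.
import numpy as np

def stationary(N, phi, beta):
    """pi^N computed from the detailed-balance formulas, in log space."""
    n = np.arange(1, N + 1)
    lam = (phi((n - 1) / N) + beta) * (N - n + 1)     # lambda_{n-1}^N
    mu = (phi((N - n) / N) + beta) * n                # mu_n^N
    logR = np.concatenate(([0.0], np.cumsum(np.log(lam) - np.log(mu))))
    logR -= logR.max()                                # avoid overflow
    R = np.exp(logR)
    return R / R.sum()

def equilibria(phi, beta, grid=100001):
    """Roots of F(x) = (1-x)phi(x) - x phi(1-x) + beta(1-2x) on [0,1]."""
    x = np.linspace(0.0, 1.0, grid)
    F = (1 - x) * phi(x) - x * phi(1 - x) + beta * (1 - 2 * x)
    s = np.where(np.diff(np.sign(F)) != 0)[0]
    return x[s], x[s][np.sign(F[s]) > 0]              # all roots; stable ones (F: + -> -)

phi = lambda x: x ** 2
pi = stationary(1000, phi, beta=0.2)
roots, stable = equilibria(phi, beta=0.2)
print("stable equilibria near:", stable)              # approx. 0.276 and 0.724
print("largest peak of pi^N near:", np.argmax(pi) / 1000)
\\end{verbatim}

For $\\phi(x)=x^2$ and $\\beta=0.2$, this should reproduce the bimodal stationary distribution shown later in Fig.~\\ref{fig:exact_gen2_x2}(b).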
We note that $F(0)>0$, $F(1)<0$ and $\\overline{x}=\\frac{1}{2}$ is always an\nequilibrium for the dynamics \\eqref{eq:binary-ode-gen}. If\n$\\overline{x}=\\frac{1}{2}$ is the unique equilibrium, it will be globally\nattractive on $[0,1]$. Then, for very large $N$, the stationary probability\ndistribution will be a narrow Gaussian with mean $\\overline{x}$ and hence, the\nmodel leads to balance of opinions. \n\nWe shall see that the shape of the conformity function $\\phi$ plays an\nimportant role in deciding if dominance of an opinion is likely. \nIntuitively, one may expect that greater conformity leads to dominance of one\nopinion while greater spontaneity leads to balance of opinions via a law of\nlarge number effect. However, our examples suggest that the dependence on \n$\\phi$ is more subtle in that the shape of the function plays a crucial role. \nWe see that when $\\phi$ is strictly convex, for sufficiently small $\\beta$ \ndominance is observed. When $\\phi$ is not strictly convex or if it is concave, we do not \nsee dominance in our examples. \n\n{\\bf Remark:} For the sake of precision, we shall use the term {\\em balance} to\nmean the situation where there is only one stable equilibrium of\n\\eqref{eq:binary-ode-gen} which is $\\overline{x}=1\/2$. We shall use the \nterm {\\em dominance} rather loosely to stand for lack of balance. \n\nIn order to investigate the effects of $\\phi$, it makes sense to make some\nnatural assumptions. The most natural \nconditions on $\\phi:[0,1] \\to [0,\\infty)$ are that $\\phi(0)=0$ and $\\phi$ is\n increasing for conformity, and $\\phi(1)=0$ and decreasing for {\\em rebelliousness}\n (opposite of conformity). \n\n \n\\begin{thm}\nConsider conformity function $\\phi(x): [0,1] \\to [0, \\infty)$ such that\n $\\phi'(x)$ strictly increasing on $(0,1)$ and suppose that $\\phi(0)=0$.\nThen for sufficiently small $\\beta$, the equilibrium $\\overline{x}=1\/2$ is unstable and hence the model leads to dominance of one opinion for large $N$ and large $t$. \n\\end{thm} \n\\begin{proof}\nDefine\n$\nG(x) = \\phi(x) (1-x) - x \\phi(1-x).\n$\nThen, the vector field \\eqref{eq:vectorfield-homo} is \n\\[\nF(x) = G(x) + \\beta (1-2x).\n\\]\nWe note that $G(0)=G(1\/2)=G(1)=0$. \nWe note that $F'(1\/2) = G'(1\/2) - 2 \\beta$, and that \n \\[\n \tG'(1\/2) = \\phi'(1\/2) - 2 \\phi(1\/2).\n \\]\nUsing the mean value theorem for $\\phi(x)$ on $ [0,1\/2]$, we conclude that for some $c \\in (0,1\/2)$, \n\\[\n\t\\phi'(c) = \\frac {\\phi(1\/2)-\\phi(0) }{1\/2} = 2 \\phi(1\/2),\n\\]\nsince $\\phi(0) = 0$. Thus,\n\\[\n\tG'(1\/2) = \\phi'(1\/2) -\\phi'(c) >0, \\quad c\\in(0,1\/2)\n\\]\nsince $\\phi '(x)$ is strictly increasing on $(0,1)$. Then $F'(1\/2)=G'(1\/2) - 2 \\beta >0$ for sufficiently small $\\beta$ and thus $1\/2$ is unstable. On the other hand since $F(0)=\\beta>0$, $F(1)=-\\beta<0$ there must be at least two (symmetrically placed) equilibria, one in\n$(0,1\/2)$, \nand the other in $(1\/2,1)$. Moreover, generically, these equilibria \nwill be stable, and hence the model leads to dominance of one opinion for\nlarge $N$ and large $t$. \n\n\\end{proof}\n \n Next, we provide examples with various conformity functions $\\phi$ and investigate the stable\nequilibria to predict dominance or balance. We shall also check the prediction\nfrom the stable equilibria of the ODE model \nagainst computational results of the stationary probability distributions. \nWe note that, there are two methods to compute the stationary\ndistributions. 
One is to use the formulas \\eqref{eq:pi_n} and \\eqref{eq:pi_0}\nand the other is to use an ODE solver to compute the solution to\n\\eqref{eq:p-derivative}. For very large $N$ values, our numerical experiments\nsuggest that using \\eqref{eq:pi_n} and \\eqref{eq:pi_0} provide more accurate\nresults compared to the ODE solver. Hence, throughout this study, we refer to\n\\eqref{eq:pi_n} and \\eqref{eq:pi_0} to verify our predictions from the \nstable equilibria of \\eqref{eq:binary-ode-gen}.\n\n\n \n\\textbf{Example 1}\nConsider the simplest example of $\\phi(x) =x$. This is convex, but not\nstrictly so. \nIn this case, the rate that an agent changes his\/her opinion is\n$\\frac{n}{N} + \\beta$. \nUsing \\eqref{eq:binary-ode-gen}, we can conclude that as $N$ gets large $X_N(t)$ converges to $x(t)$, where \n\\begin{equation}\n\\label{eq:ode-x}\n\t\\dot{x}(t) = \\beta ( 1 -2 x(t) )\n\\end{equation}\nSince this ODE has a unique equilibrium at $\\overline{x} = \\frac{1}{2}$ that is globally attractive, regardless of spontaneity coefficient $ \\beta>0$, it is expected that the group will reach balance of opinions as can be observed in Fig.~\\ref{fig:exact_gen1_x}. We note that in Fig.~\\ref{fig:exact_gen1_x}(b) the bell shape curve becomes narrower as N gets larger. \n\nIt is thus interesting to note that as long as $\\beta>0$, no matter how small, one\nexpects balance of opinions. \n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact_gen1.eps}\n\\label{fig:exact_gen1}\n\\end{center}\n\\quad\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact_gen1_second.eps} \n\\label{fig:exact_gen1_second}\n\\caption{Exact values of the stationary probabilities calculated using \\eqref{eq:pi_n} and \\eqref{eq:pi_0} for $\\phi(x) = x$ and $N = 1000$. (a) $\\beta = 5$. (b) $\\beta = 0.01$. }\n\\label{fig:exact_gen1_x}\n\\end{center}\n\\end{figure}\n\n\\textbf{Example 2}\nConsider $\\phi(x) = x^2$ which is strictly convex. \nThe limiting ODE is\n\\begin{equation}\n\\label{eq:binary-ode_x2}\n\t\\dot{x}(t) =\\left(1-2x(t) \\right) ( x(t)^2- x(t) + \\beta).\n\\end{equation}\nIt is easy to see that \\eqref{eq:binary-ode_x2} has three possible\nequilibria; $\\overline{x}_1 = \\frac{1}{2}$ and $\\overline{x}_{2,3} =\n\\frac{1}{2} \\pm \\frac{\\sqrt{1 -4 \\beta}}{2 }$. Hence, based on the choice of\nspontaneity coefficient $ \\beta$, different scenarios are expected. When\n$\\beta \\geq \\frac{1}{4}$, the stable equilibrium is $\\overline{x}_1=\\frac{1}{2}$ and the model leads to balance of opinions. On the other hand, when $ \\beta < \\frac{1}{4}$, the stable\nequilibria are $ \\overline{x}_{2,3}$ and the model leads to dominance. In Fig.~\\ref{fig:exact_gen2_x2}(a) one can observe that the model leads to balance of opinions for $\\beta = 5$. On the other hand, when $\\beta =0.2$, as can be seen in Fig.~\\ref{fig:exact_gen2_x2}(b), the model leads to dominance of one opinion. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact_gen2.eps}\n\\end{center}\n\\quad\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact_gen2_second.eps} \n\\caption{ Exact values of the stationary probabilities calculated using \\eqref{eq:pi_n} and \\eqref{eq:pi_0} for $\\phi(x) = x^2$ and $N = 1000$. (a) $\\beta = 5 $. (b) $\\beta = 0.2$. }\n\\label{fig:exact_gen2_x2}\n\\end{center}\n\\end{figure}\n\n\\textbf{Example 3}\nWe consider $\\phi(x) = 1-x^2$ to explore the impact of rebelliousness in our model. 
In this case, the rate an agent changes his\/her opinion is $ \\frac{N^2-n^2}{N^2} + \\beta$. Hence, as the number of agents with the opposite opinion increases, the rate of opinion change decreases. \nThe limiting ODE for this model is\n\\begin{equation}\n\\label{eq:binary-ode_(1-x2)}\n\t\\dot{x}(t) =\\big( x(t)^2 - x(t) - 1 - \\beta \\big) \\big( 2x(t)-1 \\big).\n\\end{equation}\nOne can see that \\eqref{eq:binary-ode_(1-x2)} has a unique equilibrium at\n$\\overline{x} = \\frac{1}{2}$ regardless of the spontaneity coefficient $\\beta\n> 0$. Thus, we can conclude that the model leads to balance of opinions as \ncan also be observed in Fig.~\\ref{fig:exact_gen_(1-x2)}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact500_1-x2.eps}\n\\label{fig:exact500_(1-x2)}\n\\end{center}\n\\quad\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact500_1-x2_second.eps} \n\\label{fig:exact500_(1-x2)_second}\n\\caption{ Exact values of the stationary probabilities calculated using \\eqref{eq:pi_n} for $\\phi(x) = 1-x^2$ and and $N = 500$. (a) $\\beta = 7$. (b) $\\beta = 0.001$.}\n\\label{fig:exact_gen_(1-x2)}\n\\end{center}\n\\end{figure}\n\n\\textbf{Example 4}\n Let $\\phi(x) = \\sqrt{x}$ which is concave. Then \nthe corresponding ODE is \n\\begin{equation}\n\\label{eq:binary-ode_(sqrtx)}\n \\dot{x}(t) = (1-2x(t)) \\left( \\frac{ \\sqrt{x(t)}\\sqrt{1-x(t)} }{ \\sqrt{1-x(t)}+\\sqrt{x(t)} } + \\beta \\right) .\n\\end{equation}\nWe first observe that $F$ is not $C^1$ on $[0,1]$ and the fluid limit theorem \ndoes not apply. Nevertheless, we proceed heuristically to look for the stable equilibria. \nOne can observe that $\\overline{x} = 1\/2$ is the only equilibrium for \\eqref{eq:binary-ode_(sqrtx)}.\nThus, one may expect that the model will lead to balance of opinions regardless of the choice of the spontaneity coefficient $\\beta$ as can be observed in Fig.~\\ref{fig:exact_gen_sqrt}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact_gen1_sqrt.eps}\n\\end{center}\n\\quad\n\\begin{center}\n\\includegraphics[width = 8.4cm]{exact_gen2_sqrt.eps} \n\\caption{ Exact values of the stationary probabilities calculated using \\eqref{eq:pi_n} and \\eqref{eq:pi_0} for $\\phi(x) = \\sqrt{x}$ and $N = 1000$. (a) $\\beta = 10 $. (b) $\\beta = 0.1$. }\n\\label{fig:exact_gen_sqrt}\n\\end{center}\n\\end{figure}\n\n\\textbf{Example 5} Consider the monotone increasing, convex conformity\nfunction $\\phi(x) = \\frac{x}{1-x}$, so that the conformity function is the\nratio of the fraction of agents with opposite opinion to those with the same opinion. Then, the rate of change of one's opinion is \n$n\/(N-n) +\\beta$.\nWe note that $\\phi(x)$ has a singularity at $x=1$ and that \\eqref{eq:bd-rates} does not hold for $i = 0,N$, \nand hence the fluid limit theorem does not hold. Nevertheless, we proceed \nheuristically to \nconsider $F$ in \\eqref{eq:vectorfield-homo} which is given by\n\\[\nF(x) = (\\beta -1)(1-2x),\n\\]\nwhich has only one equilibrium $\\overline{x}=1\/2$ and it is (asymptotically) stable if and only\nif $\\beta >1$. Thus we expect balance for $\\beta>1$. When $\\beta <1$, we \nexpect dominance of one opinion. \n\nThis heuristic is verified both by our numerical computation of the stationary\nprobabilities as well as the asymptotic formulas for the stationary\nprobabilities \\eqref{eq:pi_n} and \\eqref{eq:pi_0} that we derive in Appendix \\ref{sec:asymptotic approximations}.\n\nIn Fig.~\\ref{fig:figure-ab}, we present an example for $\\beta <1$ case. 
In this experiment we use $\\beta = 0.2$ and $N=100$. Fig.~\\ref{fig:figure-ab}. shows the asymptotic approximation $\\tilde{\\pi}^N =(\\tilde{\\pi}^N_1, \\tilde{\\pi}^N_2,\\ldots,\\tilde{\\pi}^N_N)$ calculated in \\eqref{eq:asym_pi_n}\nAs shown in Fig.~\\ref{fig:figure-ab} our model leads to dominance of one opinion. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[width = 8.4cm]{asymp.eps} \n\\label{fig:asymp_pi}\n\\caption{ Asymptotic approximation $\\tilde{\\pi}^N$ when $\\phi(x) = \\frac{x}{1-x}$, $\\beta = 0.2$ and $N=100$. }\n\\label{fig:figure-ab}\n\\end{center}\n\\end{figure}\nOn the other hand, an example of $\\beta>1$ case is shown in Fig.~\\ref{fig:figure-ab_second} and it displays\nthe approximations for stationary probabilities, $\\tilde{\\pi}^N$,\ncalculated using \\eqref {eq:asym_pi_n_second}.\nAs seen in Fig.~\\ref{fig:figure-ab_second} the model leads to balance of opinions.\n \\begin{figure}\n\\begin{center}\n\\includegraphics[width = 8.4cm]{asymp100_second.eps} \n\\caption{Asymptotic approximation for $\\tilde{\\pi}^N$ when $\\phi(x) = \\frac{x}{1-x}$, $\\beta = 10$ and $N=100$.}\n\\label{fig:figure-ab_second}\n\\end{center}\n\\end{figure}\n\\section{\\label{sec:heterogeneous}Heterogeneous binary opinion dynamics}\nLet us consider the case where the group is \\emph{heterogeneous}. Namely, suppose we have $m$ personality classes of agents such that all agents \nwithin a class $i$ (where $i=1,\\dots, m$) have the same personality\n$(\\phi_i,\\beta_i)$, but personalities differ among the classes. \nThis results in a Markov process model where the state is \na vector $x=(x_1,\\dots, x_m)$ where $0 \\leq x_i \\leq N_i$ is the\nnumber of class $i$ agents who hold opinion $1$ with $N_i$ being the \ntotal number of class $i$ agents. We assume that the personalities of agents \nis fixed in time, thus $N_i$ is a constant for each $i$ and $N=N_1+\\dots+N_m$\nis the total number of all agents. We note that during a time interval\n$(t,t+h]$ an agent from class $i$ will flip with probability\n\\[\n(\\phi_i(n\/N) + \\beta_i)h + o(h) \\;\\; h \\to 0+,\n\\]\nwhere $n$ is the total number of all agents who have the opposite opinion \nto that of the given agent. We shall be concerned with the case of large $N$ \nwith the fractions $k_i=N_i\/N$ within classes being held constant. \n\nThis results in a family of Markov process $X^N(t)$ which undergo\na jump $e_i$ or $-e_i$ for $i=1,\\dots,m$ (here $e_i \\in \\mathbb{R}^m$ is the vector\nwith $i$th component equal to one and all others equal to zero) with corresponding class $i$ \nbirth and death rates given by\n\\begin{equation}\\label{eq-hetero-lambdamu}\n\\begin{aligned}\n\\lambda^N_i(x) &= \\phi_i\\left(\\frac{|x|}{N}\\right) (N_i-x_i) + \\beta_i (N_i - x_i),\\\\ \n\\mu^N_i(x) &= \\phi_i\\left(1-\\frac{|x|}{N}\\right) x_i + \\beta_i x_i, \n\\end{aligned}\n\\end{equation}\nwhere given the state $x=(x_1,\\dots,x_m)$ we denote by $|x|$ the total \nnumber of agents holding opinion $1$:\n\\[\n|x| = \\sum_{i=1}^m x_i.\n\\]\nWe note that $0 \\leq x_i \\leq N_i$. We shall consider the normalized process $X_N(t)=X^N(t)\/N$, \nwhere $X_{N,i}(t)$ is the fraction of class $i$ agents with opinion $1$ where the fraction is\nnormalized by $N$ and not $N_i$. 
We may write\n\\[\n\\begin{aligned}\n\\lambda^N_i(x) &= N \\bar{\\lambda}_i\\left( \\frac{x}{N}\\right), \\\\ \n\\mu^N_i(x) &= N \\bar{\\mu}_i \\left( \\frac{x}{N} \\right) \n\\end{aligned}\n\\]\nwhere \n\\begin{equation}\\label{eq-hetero-barlambdamu}\n\\begin{aligned}\n\\bar{\\lambda}_i(x) &= \\phi_i(|x|) (k_i-x_i) + \\beta_i (k_i-x_i),\\\\ \n\\bar{\\mu}_i(x) &= \\phi_i\\left(1-|x|\\right) x_i + \\beta_i x_i, \n\\end{aligned}\n\\end{equation}\nwhere as before $|x|=x_1 + \\dots x_m$.\n\nIn the fluid limit, as $N \\to \\infty$, one expects $X_N$ to converge to $x$ \nwhere $x$ satisfies the ODE \n\\[\n\\dot{x}(t)= F(x(t)),\n\\]\nwhere the $m$ dimensional vector field $F$ is given by\n\\[\nF_i(x) = \\bar{\\lambda}_i(x)-\\bar{\\mu}_i(x), \\quad i=1,\\dots,m,\n\\]\nwhich simplifies to\n\\begin{equation}\\label{eq_hetero_F}\nF_i(x) = \\beta_i (k_i - 2 x_i) + \\phi(|x|) (k_i-x_i) - \\phi(1-|x|) x_i\n\\end{equation}\nfor $ i=1,\\dots,m.$ When $N$ and $t$ are both large, we expect to see the peaks of the probability \ndistribution of $X_N(t)$ to occur near the stable equilibria of this ODE. \nWe note that $\\bar{x}=(k_1\/2,\\dots,k_m\/2)$ is always an equilibrium. \n\n{\\bf Example with two extreme personality classes}\n\nWe consider the case of two extreme personality classes $(\\phi_i,\\beta_i)$ \nfor $i=1,2$ where \n\\begin{equation}\\label{eq_two_types}\n\\begin{aligned}\n\\phi_1(\\xi) &= \\phi(\\xi), & \\beta_1&=0,\\\\\n\\phi_2(\\xi) &= 0, & \\beta_2&=\\beta>0,\\\\ \n\\end{aligned}\n\\end{equation}\nwhere $\\phi: [0,1] \\to \\mathbb{R}$ is a monotonic function.\nWe note that class $1$ corresponds to total conformity and class $2$ \ncorresponds to total spontaneity. Let us write $k_1= 2k$ (thus $k$ is half the\nfraction of class $1$) and thus $k_2=1- 2k$. This results in \n\\begin{equation}\\label{eq:vectorfields}\n\\begin{aligned}\nF_1(x) &= \\phi(x_1+x_2)(2k-x_1) - \\phi(1-x_1-x_2) x_1,\\\\\nF_2(x) &= \\beta (1-2k - 2 x_2).\n\\end{aligned}\n\\end{equation}\nAt an equilibrium, clearly $x_2=\\frac{1}{2}-k$ and $ \\bar{x}=(k,1\/2-k)$ is always an\nequilibrium. Additional equilibria are found by solving the equation\n\\begin{equation}\\label{eq:eq-x1bar}\nF_1(x) = \\phi(x_1 + \\frac{1}{2}-k) (2k-x_1) - \\phi(\\frac{1}{2}-x_1+k) x_1\n\\end{equation}\nfor $x_1$. We note that class 2 (spontaneous class) is always expected to reach a balance since at an equilibrium $x_2 = \\frac{1-2k}{2}=\\frac{k_2}{2}$. \n\n\\begin{thm}\nConsider the group with two extreme personality classes \\eqref{eq_two_types} such that the derivative of conformity function, $\\phi'(x)$, is strictly increasing on $(0,1)$ and the fraction of the class of conformists $2k > \\frac{2\\phi(1\/2)}{ \\phi'(1\/2)}$. Then the equilibrium $\\bar{x}= (k,1\/2-k)$ is unstable and hence the model \\eqref{eq_two_types} leads to dominance of one opinion for large $N$ and large $t$. \n\\end{thm} \n\\begin{proof}\nNote that it is sufficient to study the limiting behavior of the class of conformists to conclude the whole group's behavior for large $N$ and $t$. \nNext, we analyze the stability of the equilibrium $(k, 1\/2-k)$. 
The Jacobian at the equilibrium is\n\\[\n\tJ(k, 1\/2-k) = \n \\begin{bmatrix}\n \\frac{\\partial F_1}{\\partial x_1} & \\frac{\\partial F_1}{\\partial x_2} \\\\\n \\frac{\\partial F_2}{\\partial x_1} & \\frac{\\partial F_2}{\\partial x_2}\n \\end{bmatrix}.\n\\]\nSince $\\frac{\\partial F_2}{\\partial x_1} =0$, the eigenvalues at the equilibrium are \n\\[\n\\epsilon_1 = \\frac{\\partial F_1}{\\partial x_1}, \\quad \\epsilon_2 = \\frac{\\partial F_2}{\\partial x_2} = -2\\beta<0,\n\\]\nwhere\n\\[\n\\epsilon_1 =2k \\phi'(1\/2) -2 \\phi(1\/2).\n\\]\nExamining the sign of $\\epsilon_1$ as a function of $k$, we conclude that when $2k < \\frac{2\\phi(1\/2)}{ \\phi'(1\/2)}$, we have $\\epsilon_1 <0$ and hence the equilibrium $(k,1\/2-k)$ is stable, leading to balance for the conformists and thus for the entire group.\n\nHowever, when $2k > \\frac{2\\phi(1\/2)}{ \\phi'(1\/2)}$, the equilibrium $(k, 1\/2-k)$ is unstable. On the other hand, since $ 0 \\leq x_1+x_2 \\leq 1$, at an equilibrium $0 \\leq x_1 \\leq \\frac{1}{2}+k$ where $0 < k < 1\/2$. We note that there are no other equilibrium points except those on the line $x_2=\\frac{1}{2}-k$, as shown in Fig.~\\ref{fig:phase}.\n\nMoreover, from \\eqref{eq:eq-x1bar} one can conclude that $F_1(0,\n\\frac{1}{2}-k) = 2k\\phi(\\frac{1}{2} - k) >0$, $F_1(\\frac{1}{2}+k,\\frac{1}{2}-k\n) = (k-\\frac{1}{2}) \\phi(1) <0$. Thus, there have to be two other equilibria\non the line $x_2 = \\frac{1}{2}-k$ that are symmetrically placed. For one\nsuch equilibrium $ x_1 \\in(0,k)$ and for the other $x_1 \\in (k,\n\\frac{1}{2}+k)$. \nMoreover, generically, these equilibria will be stable and\nthe conformists, as well as the entire group, will reach dominance of one opinion for large $N$ and $t$. Table \\ref{Tab:class limits} summarizes the limiting behaviors of the classes with respect to the fraction of conformists, $2k$.\n\\end{proof}\n\\begin{figure}\\centering\n\\begin{tikzpicture}[\n scale=3,\n axis\/.style={thick, ->, >=stealth'},\n important line\/.style={thick},\n dashed line\/.style={dashed, thin},\n pile\/.style={thick, ->, >=stealth', shorten <=2pt, shorten\n >=2pt},\n every node\/.style={color=black,}\n ]\n \n \\draw[axis] (-0.1,0) -- (1.2,0) node(xline)[right]\n {\\tiny$x_1$};\n \\draw[axis] (0,-0.1) -- (0,1.2) node(yline)[above] {\\tiny$x_2$};\n \n \\draw[important line] (0,1) coordinate (A) node[left, text width=0.5em] {\\tiny$1$} -- (1,0)\n coordinate (B) node[below, text width=0.5em] {\\tiny$1$};\n \\draw[important line] (0,0.6) coordinate (C) node[left, text width=2em] {\\tiny$1-2k$}-- (0.4,0.6)\n coordinate (D); \n \\draw[important line] (0.4,0.6) coordinate (D) -- (0.4,0)\n coordinate (E) node[below, text width=0.5em] {\\tiny$2k$}; \n \\fill[gray!20, opacity =1] (C)--(D)--(E)--(0,0)--cycle;\n \\draw[important line] (0,0.3) coordinate (F)node[left, text width=2em]{\\tiny$\\frac{1}{2}-k$} -- (0.8,0.3)\n coordinate (G) node[right, text width=5em]{\\tiny$x_2=\\frac{1}{2}-k$};\n \\filldraw[black] (0.2,0.3) circle (0.3pt);\n \\draw [shorten <=0pt, dashed] (0.2,0.3) coordinate (H)-- (0.2,0) node[below, text width = 0.5em]{\\tiny$k$};\n \\draw[->] (0.3,0.4)--(0.3,0.35); \n \\draw[->] (0.2,0.4)--(0.2,0.35); \n \\draw[->] (0.1,0.4)--(0.1,0.35);\n \\draw[->] (0.3,0.2)--(0.3,0.25); \n \n \\draw[->] (0.1,0.2)--(0.1,0.25);\n \\end{tikzpicture}\n \\caption{ Phase portrait of the system $\\dot{X}= F$ with \\eqref{eq:vectorfields}. The equilibria lie on the line $x_2=\\frac{1}{2}-k$, and $(k, \\frac{1}{2}-k)$ is always an equilibrium.
\n\n\n\\begin{table}[H]\n\\begin{center}\n\\caption{Limiting behavior of the group classes with respect to the fraction of conformists, $2k$}\n\\begin{tabular}{ l | p{2.4cm} | p{2.4cm}}\n & $2k < \\frac{2\\phi(1\/2)}{ \\phi'(1\/2)}$ & $2k > \\frac{2\\phi(1\/2)}{ \\phi'(1\/2)}$ \\\\\n\\hline \nClass 1 (conformists)& balance & dominance \\\\\n\\hline\nClass 2 (spontaneous) & balance & balance \\\\\n\\hline\n\\textbf{Whole group} & \\textbf{balance} & \\textbf{dominance}\\\\\n\\hline\n \\end{tabular}\n \\label{Tab:class limits}\n \\end{center}\n\\end{table}\n\\textbf{Example 1} \nConsider $\\phi(x) = x$. The corresponding ODE has a unique equilibrium $(k, 1\/2-k)$ and it is stable ($\\epsilon_1 = 2k - 1<0$ since $2k<1$, and $\\epsilon_2 = -2\\beta<0$). Thus, regardless of the choice of $\\beta$ and the fraction of conformists $2k$, this model leads to balance in both classes, and therefore the whole group reaches balance. \n\n\\textbf{Example 2} Let $\\phi(x) = x^2$. In this case, when conformists form less than $50\\%$ of the population, i.e. $2k < \\frac{1}{2}$, the corresponding ODE has only one equilibrium, $(k, 1\/2-k)$, and it is stable ($\\epsilon_1 = 2k-1\/2<0$). Thus, both the conformists and the spontaneous class reach balance, and hence the entire group reaches balance for large $N$ and large $t$. On the other hand, when conformists form more than $50\\%$ of the population, i.e. $2k >\\frac{1}{2}$, the equilibrium $(k, 1\/2-k)$ is unstable and the class of conformists reaches dominance of one opinion. In fact, the other two equilibria are $(k \\pm \\frac{\\sqrt{4k-1}}{2}, 1\/2-k )$, and the stability analysis suggests that these equilibrium points are stable. Hence, the model leads to dominance for the whole group. One example is given in Fig.~\\ref{fig:TwoTypes}, where the probabilities are computed using Monte Carlo simulation of 10,000 trajectories. Here, the total number of agents is $N = 120$, of which $N_1=100$ ($2k = 5\/6$) are conformists with $\\phi(x) = x^2$, and the spontaneity coefficient is $\\beta = 0.02$.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width =8.4cm]{TwoTypes.eps} \n\\caption{Empirical probability mass function for the number of agents with opinion 1 at time $T=100$ for the case of two extreme classes with $\\phi(x)=x^2$ and $\\beta = 0.02$.}\n\\label{fig:TwoTypes}\n\\end{center}\n\\end{figure}
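\n\nThe empirical distribution in Fig.~\\ref{fig:TwoTypes} can be reproduced, at least qualitatively, by simulating the continuous-time Markov chain whose flip rates follow from \\eqref{eq-hetero-barlambdamu}. The following Gillespie-type sketch is only an illustration and is not the code used to produce the figure; it assumes the parameters above ($N=120$, $N_1=100$, $\\phi(x)=x^2$, $\\beta=0.02$, $T=100$), an initial condition in which half of each class holds opinion $1$, and fewer trajectories than the 10,000 used for Fig.~\\ref{fig:TwoTypes}.\n\\begin{verbatim}\n# Minimal sketch (not the authors' code): Gillespie simulation of the two-class\n# model.  From eq. (eq-hetero-barlambdamu), the flip rates in terms of the\n# counts are  lambda_i = (N_i - x_i)*(phi_i(|x|\/N) + beta_i)  and\n#             mu_i     = x_i*(phi_i(1 - |x|\/N) + beta_i).\nimport random\nfrom collections import Counter\n\nN, N1, beta, T = 120, 100, 0.02, 100.0    # parameters of Fig. TwoTypes\nN2 = N - N1\nphi = lambda s: s ** 2                    # conformity function of class 1\n\ndef one_run():\n    x1, x2 = N1 \/\/ 2, N2 \/\/ 2             # assumed initial condition\n    t = 0.0\n    while True:\n        frac = (x1 + x2) \/ N\n        rates = [(N1 - x1) * phi(frac),   # class-1 agent flips 0 -> 1\n                 x1 * phi(1.0 - frac),    # class-1 agent flips 1 -> 0\n                 (N2 - x2) * beta,        # class-2 agent flips 0 -> 1\n                 x2 * beta]               # class-2 agent flips 1 -> 0\n        total = sum(rates)                # > 0 here since N2*beta > 0\n        t += random.expovariate(total)\n        if t > T:\n            return x1 + x2                # agents with opinion 1 at time T\n        r = random.uniform(0.0, total)\n        if r < rates[0]:\n            x1 += 1\n        elif r < rates[0] + rates[1]:\n            x1 -= 1\n        elif r < rates[0] + rates[1] + rates[2]:\n            x2 += 1\n        else:\n            x2 -= 1\n\npmf = Counter(one_run() for _ in range(1000))\nprint(sorted(pmf.items()))\n\\end{verbatim}\nFor these parameters one expects a bimodal histogram with peaks near the two stable equilibria of the fluid limit, in line with Fig.~\\ref{fig:TwoTypes}.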
\n\\section{Conclusions}\n\\label{sec:conclusions}\nWe have proposed a simple binary model where agents hold an opinion from the set $\\{0,1\\}$ at any time $t \\geq 0$. An agent flips his\/her opinion based on the number of agents with opinion $1$ in the entire population. The influence of the group on an agent is determined by his\/her personality. A personality is formed by a conformity function $\\phi$ and a spontaneity coefficient $\\beta$. When all agents in the group share the same personality, we call the group homogeneous. \n\nInitially, focusing on a homogeneous group, we analyzed the long-time probabilities for large population sizes and for different personality characteristics of the group. The question of which personality characteristics lead to dominance of one opinion was studied. We found that the shape of the conformity function, namely strict convexity or the lack thereof, seems to be an important determining factor in whether dominance of one opinion occurs for sufficiently small $\\beta$. \n\nWe extended our model to a heterogeneous group, in which the group consists of different personality classes. In particular, when the group is formed by two extreme classes, complete conformity and complete spontaneity, we analyzed the dominance of the group opinion. In this example, we found that the fraction of pure conformists, along with the strict convexity of $\\phi$, was a key determining factor for dominance. \n\n\n\\bibliographystyle{plain} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nThe propagation of electromagnetic waves along periodic arrays of sub-wavelength apertures has been the subject of many studies in the last two decades. These structures have various applications, including sensors \\cite{yahiaoui2016terahertz,gordon2008new,dhawan2008plasmonic}, filters \\cite{panwar2017progress}, high refractive index metamaterials \\cite{shen2005mechanism}, negative refractive index metamaterials \\cite{torres2015accurate}, superlenses \\cite{huang2018superfocusing}, etc. Along with these novel applications, observation of the physical phenomena such as extraordinary transmission (EoT) \\cite{ebbesen1998extraordinary} and broadband Brewster transmission \\cite{edalatipour2015physics} has increased the importance of these structures.\n\nIn recent years, understanding and justifying the physical behaviors of the array of holes has been researched using various analytical methods \\cite{de2007colloquium,martin2001theory}. For instance, the extraordinary transmission at optical frequencies may be understood by coupling incident light with the surface plasmon polariton modes supported at the boundary between metals and dielectrics \\cite{liu2008microscopic}. On the other hand, this enhanced transmission at microwave frequencies through perforated PEC structures, has recently been linked to the role of proper complex modes \\cite{ansari2020local}. Providing an analytical method in the form of a circuit model can become a tool for designing and predicting these behaviors. This work focuses on microwave and terahertz frequencies, where the permittivity of the metals becomes high enough to justify perfect electric conductor (PEC) approximation. The transmission line model is considered in this paper due to its appropriate accuracy and high flexibility, which can be easily generalized to multilayer structures.\n\nModeling a waveguide structure in the form of a transmission line has been researched for a long time and has numerous applications in obtaining scattering parameters \\cite{marcuvitz1951waveguide,pozar2011microwave}. The transmission line model is used in a wide frequency range from microwave to infrared \\cite{costa2014overview,zhao2011homogenization,khavasi2014analytical}. With the help of the transmission line, the array of apertures in a PEC film has already been investigated \\cite{molero2021cross,borgese2020simple,kaipa2010circuit,medina2008extraordinary}.\nIn the first developed circuit model, the array of holes was analyzed by considering the thickness of the PEC film. In this case, the circuit model was created by considering only the dominant mode inside the apertures \\cite{medina2009extraordinary,khavasi2015corrections}.\n\nIn previous works, a circuit model has been developed for rectangular apertures by considering only $TE_{01}$ as the dominant mode \\cite{khavasi2015corrections,marques2009analytical}. In addition to two-dimensional arrays, one-dimensional arrays of slits have also been investigated by considering the $TEM$ mode as the dominant mode using the circuit model \\cite{medina2009extraordinary,yarmoghaddam2014circuit}. 
If the thickness of the aperture is large enough that the high-order modes excited inside the hole decay from one opening of the hole to the other, these models are highly accurate.\nIn these models, as the structure's thickness decreases, the error due to the high-order modes increases; additionally, the error grows more rapidly for circular apertures than for rectangular ones. Due to the widespread applications of metasurfaces and frequency selective surfaces, models for thin arrays are needed.\n\nFrequency selective surfaces (FSSs) are thin periodic surfaces that are important both for industrial applications and for fundamental science. The array of apertures perforated in a thin PEC film is a kind of FSS, and many powerful circuit models, suitable for the design and for understanding the behavior of these structures, have been proposed \\cite{mesa2018efficient,alex2021exploring}. In these works, the circuit model is constructed from the spatial profile of the transverse electric field at the opening of the aperture. Previous works have provided profiles for rectangular \\cite{alex2021exploring}, cross-shaped \\cite{molero2021cross}, and annular holes \\cite{rodriguez2017annular,alex2021exploring}. Initially, the circuit model was developed for a structure consisting of a single FSS layer \\cite{costa2014overview}; it was then improved for the case where the FSS is embedded in a layered medium \\cite{rodriguez2012analytical,rodriguez2015analytical}. Structures with multiple dielectric layers and multiple FSS layers were subsequently examined using circuit models \\cite{molero2021cross,alex2021exploring}, but all of these circuit models were developed without regard to the FSS's thickness. It should be noted that the effect of thickness can cause a considerable error even when the thickness is only about one-fiftieth of the structure's period.\n\nIn this paper, an extended circuit model is developed that is much faster than full-wave simulations and much more accurate than its predecessors. The circuit model is formulated so that it takes the high-order modes inside the holes into account; thus, it retains high accuracy for low-thickness structures. A closed-form expression is presented for the spatial profile of the transverse electric field on the circular aperture, which means that, unlike previous models, we do not need to extract the field profile from a full-wave simulator. It is shown that the previously used electric field profile for the square hole has a large error when the hole is large (the width of the square is greater than half the period) \\cite{molero2021cross}, and this problem is resolved by the new profile presented here. Unlike square and circular holes, an array of PEC pillars always supports a propagating mode, which gives rise to interesting properties \\cite{edalatipour2015physics}; these structures are also analyzed analytically with the help of the proposed circuit model in this work.\n\nThe paper is organized as follows: Section 2 describes how the proposed circuit model is developed and presents three different cases: the single-mode, multi-mode, and modified single-mode circuit models. In section 3, the low-thickness structures are investigated. 
In section 4, the proposed circuit model is extended to multilayered structures; finally, a conclusion is described in section 5.\n\n\\section{The proposed circuit model}\nPeriodic structures can be analyzed similar to other waveguide structures by considering a unit cell as a waveguide; thus, the formulation presented can be extended to any waveguide structure, but we focus only on the square array of sub-wavelength holes. In addition to the previous circuit models, environmental models for the desired structure have also been investigated \\cite{edalatipour2015physics,edalatipour2012creation}. Similarly, the isotropic\/anisotropic environmental model is created only by considering the dominant mode inside the aperture, which means that these methods also have a large error for a thin array of holes. The following formulation is presented by considering all the modes inside the hole.\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=13cm,height=5cm]{structures.pdf}\n\t\\caption{Figure (a) shows the array of perforated square apertures in a PEC film, and (b) shows the array of circular holes. In two structures, the period is P in both directions, the width of the square is W, and the circle's diameter is D. Refractive index of the upper and lower environments are $n_{1}$ and $n_{3}$, respectively, and the holes are filled with \n\t\trefractive index $n_{2}$.}\\label{periodic_array}\n\\end{figure}\n\nFigure \\ref{periodic_array} shows the schematic of the structures which are illuminated by a normal incident plane wave. The upper environment of the structure is region I ($n_{1}$), inside the holes is region II ($n_{2}$), and the lower environment is region III ($n_{3}$). For simplicity's sake, we first assume the structures depicted in Fig. \\ref{periodic_array} is semi-infinite, i.e., $L \\rightarrow \\infty$. Impinging of a plane-wave on a semi-infinite structure excites aperture modes in region II, and the Floquet modes in region I. The Floquet expansion of the transverse (x, y components) electric and magnetic field at the discontinuity ($z=0$) can be\nwritten as follow \\cite{molero2021cross,rodriguez2015analytical}:\n\\begin{equation}\\label{up_mode_expansion1}\n\t\\begin{cases}\n\t\tE^{1}(x,y)=(1 + R)e_{0}^{1}(x,y) + \\sum_{h}^{'}V_{h}^{1} e_{h}^{1}(x,y)\n\t\t\\vspace{.5cm}\\\\\n\t\tH^{1}(x,y)=(1 - R)Y_{0}^{1}(\\hat z \\times e_{0}^{1}(x,y)) - \\sum_{h}^{'}V_{h}^{1} Y_{h}^{1}(\\hat z \\times e_{h}^{1}(x,y))\n\t\\end{cases}\n\\end{equation}\nwhere the expression $e_{0}^{1}(x,y)$ is the tangential component of the incident wave whose reflection is R. The superscript refers to the region, and the prime in the series of Eq. (\\ref{up_mode_expansion1}) indicates that the incident wave is excluded from the summation. $e_{h}^{1}(x,y)$ is the normalized transverse electric field of $h$th Floquet harmonic ($h$ is associated with a pair of integer numbers $mn$). Similarly, expansion of aperture modes can be written as\n\\begin{equation}\\label{aperture_mode_expansion1}\n\t\\begin{cases}\n\t\tE^{2}(x,y)=V_{0}^{2}e_{0}^{2}(x,y) + \\sum_{h}^{'}V_{h}^{2} e_{h}^{2}(x,y)\n\t\t\\vspace{.5cm}\\\\\n\t\tH^{2}(x,y)=V_{0}^{2}Y_{0}^{2}(\\hat z \\times e_{0}^{2}(x,y)) + \\sum_{h}^{'}V_{h}^{2} Y_{h}^{2}(\\hat z \\times e_{h}^{2}(x,y)) \\hspace{.7cm}\n\t\\end{cases}\n\\end{equation}\nwhere $e_{0}^{2}(x,y)$ is the tangential component of the dominant mode inside the aperture. 
It should be noted that we want to use the dominant mode to create the transmission line; thus, we have written this mode separately from the other modes. Similarly, $e_{h}^{2}(x,y)$ is the transverse electric field of harmonic $h$ inside the aperture. Modes inside the aperture can be used in equations depending on the type of aperture but the Floquet modes in region I can be written as\n\\begin{equation}\\label{fl1}\n\t\\begin{aligned}\n\t\te_{h}^{1}(x,y)=\\frac{e^{-jK_{th}^{1}.\\hat{\\rho}}}{PP} \\hat{e_{h}}^{1} \\hspace{1cm} \\hat{\\rho}=x\\hat{x} + y\\hat{y}\n\t\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{fl2}\n\t\\begin{aligned}\n\t\tk_{th}^{1}=k_{xm}^{1}\\hat{x} + k_{yn}^{1}\\hat{y} = (k_{m}^{1} + k_{x0}^{1} )\\hat{x} + (k_{n}^{1} + k_{y0}^{1})\\hat{y}\n\t\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{fl3}\n\t\\begin{aligned}\n\t\tk_{x0}^{1}=k^{1}\\sin(\\theta)\\cos(\\phi) \\hspace{1cm} k_{y0}^{1}=k^{1}\\sin(\\theta)\\sin(\\phi)\n\t\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{fl4}\n\t\\begin{aligned}\n\t\tk_{m}^{1}=\\frac{2\\pi m}{P} \\hspace{1cm} k_{n}^{1}=\\frac{2\\pi n}{P}\n\t\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{fl5}\n\t\\hat{e_{h}}^{1}=\n\t\\begin{cases}\n\t\t\\hat{k_{th}}^{1} \\hspace{.5cm} & TM\n\t\t\\vspace{.5cm}\\\\\n\t\t(\\hat{k_{th}}^{1} \\times \\hat{z} ) \\hspace{.5cm} & TE\n\t\\end{cases}\n\t\\hspace{1cm} \\hat{k_{th}}^{1}=\\frac{k_{th}^{1}}{\\mid k_{th}^{1} \\mid} \\hspace{1cm}\n\\end{equation}\nThe modal admittances $Y_{h}^{1}$ are given by\n\\begin{equation}\\label{fl6}\n\tY_{h}^{1}=\\frac{1}{\\eta^{1}}\n\t\\begin{cases}\n\t\t\\frac{k^{1}}{k_{zh}^{1}} \\hspace{.5cm} & TM\n\t\t\\vspace{.5cm}\\\\\n\t\t\\frac{k_{zh}^{1}}{k^{1}} \\hspace{.5cm} & TE\n\t\\end{cases}\n\t\\hspace{.5cm} k_{zh}^{1}=\\sqrt{(k^{1})^{2} - \\mid k_{th}^{1} \\mid ^{2}}\n\\end{equation}\nwith \n\\begin{equation}\\label{f21}\n\t\\begin{aligned}\n\t\tk^{1}=n_{1}k_{0} \\hspace{1cm} \\eta^{1}=\\frac{\\eta^{0}}{n_{1}}\n\t\\end{aligned}\n\\end{equation}\nwhere $k_{0}$ is the vacuum wavenumber, $\\eta^{0}$ is the intrinsic impedance of free space. These equations are for any incident angle; however, we focus on normal incident waves in this article for simplicity in presentation. In the case of a normal incident wave ($\\theta = 0, \\phi = 0$), we can use modes of a waveguide with PEC and PMC walls instead of using the written Floquet modes which decrease computational volume significantly \\cite{marques2009analytical,kaipa2010circuit}. Aperture modes can be used in equations depending on the shape of the aperture. In this paper, square and circular aperture modes are used (the dominant modes are $TE_{01}$ and $TE_{11}$ for the square and circular aperture, respectively).\n\nTo create the circuit model, we use the transverse electric field at the opening of the aperture ($E_{a}$). The transverse electric field on the aperture can be written as $E_{a}(x,y)=f(\\omega)e_{a}(x,y)$. By changing the frequency, almost the spatial profile of the transverse electric field on the aperture doesn't change ($e_{a}(x,y)$), and only its factor changes ($f(\\omega)$). This assumption is the basis for creating many analytical models that were referred to in the introduction. To develop the circuit model, we must apply boundary conditions at the discontinuity. 
The boundary conditions of the tangential electric fields at the discontinuity are\n\\begin{equation}\\label{eq_Exa1}\n\t\\begin{aligned}\n\t\t(1 + R)e_{0}^{1}(x,y) + \\sum_{h}\\phantom{}^{'}V_{h}^{1} e_{h}^{1}(x,y)=E_{a}(x,y)\n\t\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{eq_Exa2}\n\t\\begin{aligned}\n\t\tV_{0}^{2}e_{0}^{2}(x,y) + \\sum_{h}\\phantom{}^{'}V_{h}^{2} e_{h}^{2}(x,y)=E_{a}(x,y)\n\t\\end{aligned}\n\\end{equation}\n\nWe can write the orthogonality of modes as\n\\begin{equation}\\label{ortho}\n\t\\iint\\limits_{c} (e_{h}^{1} \\times (\\hat{z}\\times e_{k}^{1})^{*}).\\hat{z} ds =0 \\hspace{.5cm} if \\hspace{.5cm} k\\neq h\n\\end{equation}\n\nIf we use the orthogonality of Floquet modes in Eq. (\\ref{eq_Exa1}), the coefficients of Floquet harmonics are calculated as:\n\\begin{equation}\\label{orth_e1}\n\t\\begin{cases}\n\t\t(1 + R)=\\frac{\\iint\\limits_{a} (E_{a}(x,y) \\times (\\hat{z}\\times e_{0}^{1})^{*}).\\hat{z} ds}{\\iint\\limits_{c} (e_{0}^{1} \\times (\\hat{z}\\times e_{0}^{1})^{*}).\\hat{z} ds}=\\frac{A_{0}}{C_{0}}\n\t\t\\vspace{.5cm}\\\\\n\t\tV_{h}^{1}=\\frac{\\iint\\limits_{a} (E_{a}(x,y) \\times (\\hat{z}\\times e_{h}^{1})^{*}).\\hat{z} ds}{\\iint\\limits_{c} (e_{h}^{1} \\times (\\hat{z}\\times e_{h}^{1})^{*}).\\hat{z} ds}=\\frac{A_{h}}{C_{h}}\n\t\t\\vspace{.5cm}\\\\\n\t\tV_{h}^{1}=(1 + R)\\frac{A_{h}C_{0}}{C_{h}A_{0}}\n\t\\end{cases}\n\\end{equation}\n\nSimilarly, using the orthogonality of aperture modes and Eq. (\\ref{eq_Exa2}) implies that:\n\\begin{equation}\n\t\\begin{cases}\n\t\tV_{0}^{2}=\\frac{\\iint\\limits_{a} (E_{a}(x,y) \\times (\\hat{z}\\times e_{0}^{2})^{*}).\\hat{z} ds}{\\iint\\limits_{a} (e_{0}^{2} \\times (\\hat{z}\\times e_{0}^{2})^{*}).\\hat{z} ds}=\\frac{B_{0}}{D_{0}}\n\t\t\\vspace{.5cm}\\\\\n\t\tV_{h}^{2}=\\frac{\\iint\\limits_{a} (E_{a}(x,y) \\times (\\hat{z}\\times e_{h}^{2})^{*}).\\hat{z} ds}{\\iint\\limits_{a} (e_{h}^{2} \\times (\\hat{z}\\times e_{h}^{2})^{*}).\\hat{z} ds}=\\frac{B_{h}}{D_{h}}\n\t\t\\vspace{.5cm}\\\\\n\t\tV_{h}^{2}=V_{0}^{2}\\frac{B_{h}D_{0}}{D_{h}B_{0}}\n\t\\end{cases}\n\\end{equation}\n\n\nThe continuity of the tangential magnetic field on the aperture's surface helps us to create the circuit model. The continuity of the magnetic field on the aperture can be written as:\n\\begin{equation}\\label{H_con}\n\t\\begin{aligned}\n\t\t(1 - R)Y_{0}^{1}(\\hat z \\times e_{0}^{1}(x,y)) - \\sum_{h}\\phantom{}^{'}V_{h}^{1} Y_{h}^{1}(\\hat z \\times e_{h}^{1}(x,y))= \n\t\tV_{0}^{2}Y_{0}^{2}(\\hat z \\times e_{0}^{2}(x,y)) + \\sum_{h}\\phantom{}^{'}V_{h}^{2} Y_{h}^{2}(\\hat z \\times e_{h}^{2}(x,y))\n\t\\end{aligned}\n\\end{equation}\n\nMultiplying both sides of Eq. (\\ref{H_con}) by $E_{a}(x,y)$ and a few mathematical operations yields\n\\begin{equation}\\label{H_con_2}\n\t\\begin{aligned}\n\t\tY_{0}^{1}(1 - R)A_{0}^{*} - \\sum_{h}\\phantom{}^{'}Y_{h}^{1} V_{h}^{1}A_{h}^{*} =V_{0}^{2}Y_{0}^{2} B_{0}^* + \\sum_{h}\\phantom{}^{'}Y_{h}^{2}V_{h}^{2} B_{h}^{*}\n\t\\end{aligned}\n\\end{equation}\n\nIf we now place $V_{h}^{1}$ and $V_{h}^{2}$ in Eq. (\\ref{H_con_2}), the following equation is obtained.\n\\begin{equation}\\label{circuit_eq}\n\t\\begin{aligned}\n\t\tY_{0}^{1}\\frac{1 - R}{1 + R} =Y_{0}^{2} \\frac{C_{0}B_{0}B_{0}^*}{D_{0}A_{0}A_{0}^*} +\\sum_{h}\\phantom{}^{'}Y_{h}^{2} \\frac{B_{h}B_{h}^{*}C_{0}}{A_{0}A_{0}^{*}D_{h}} +\\sum_{h}\\phantom{}^{'} Y_{h}^{1} \\frac{A_{h}A_{h}^{*}C_{0}}{A_{0}A_{0}^{*}C_{h}}\n\t\\end{aligned}\n\\end{equation}\n\nEquation \\ref{circuit_eq} can be converted to the circuit model shown in\nFig. 
\\ref{fig:circuit_1}, where the circuit model admittances are in the form of equations in Eq. (\\ref{circuit_parameter1}). In the obtained admittances, the frequency dependence of $E_{a}(x,y)$ is eliminated; Thus, the admittances depend only on the spatial profile of the transverse electric field at the opening of the aperture and the electric field value on the aperture does not affect the admittances. In this circuit model, $Y_{eq1}$ is the result of all the Floquet modes except the incident wave in region I (the incident mode is used as the transmission line $Y_{0}^{1}$). $Y_{st1}$ is the result of the dominant mode inside the aperture, and the effect of other modes inside the aperture is modeled as admittance $Y_{st2}$.\n\\begin{equation}\\label{circuit_parameter1}\n\t\\begin{cases}\n\t\tY_{eq1}=\\sum_{h}^{'} Y_{h}^{1} \\frac{A_{h}A_{h}^{*}C_{0}}{A_{0}A_{0}^{*}C_{h}}\n\t\t\\vspace{.5cm} \\\\\n\t\tY_{st1}=Y_{0}^{2} \\frac{C_{0}B_{0}B_{0}^*}{D_{0}A_{0}A_{0}^*}\n\t\t\\vspace{.5cm} \\\\\n\t\tY_{st2}=\\sum_{h}^{'}Y_{h}^{2} \\frac{B_{h}B_{h}^{*}C_{0}}{A_{0}A_{0}^{*}D_{h}} \\hspace{1.5cm}\n\t\\end{cases}\n\\end{equation}\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=4.5cm,height=3.5cm]{circuit_11.pdf}\n\t\\caption{Circuit model for a semi-infinite array of sub-wavelength apertures.\n\t\t\\label{fig:circuit_1}\n\t}\n\\end{figure}\n\n\nIn the developed circuit model, we assume that the structures are semi-infinite; therefore, we need to change the admittances to achieve a circuit model compatible with the finite-thickness array of holes. It should be noted that the evanescent modes are equivalent to the energy storage elements, so the TE modes are equivalent to the inductors, and the TM modes are equivalent to the capacitors. The corresponding admittance of each harmonic is proportional to the energy stored in that mode. Unlike the semi-infinite structure, the aperture modes do not extend to infinity in the finite-thickness array; therefore, we must modify the admittances related to aperture modes. We use the concept of energy to modify the admittances. Admittances related to aperture modes to represent the energy stored in the finite thickness must be multiplied by $(1 - e^{j2K_{zh}^{2}L})$. Admittance related to region I ($Y_{eq1}$) does not change; similarly, admittance related to region III is written. It should be noted that the dominant mode inside the hole does not need to be modified because it has become a transmission line. According to the above, the admittances of a finite-thickness array of holes in Fig. \\ref{periodic_array} are written as:\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=5cm,height=4.5cm]{circuit_22.pdf}\n\t\\caption{The final circuit model for finite-thickness arrays of sub-wavelength holes shown in Fig. 
\\ref{periodic_array}.}\\label{fig:circuit_3}\n\\end{figure}\n\n\\begin{equation}\\label{circuit_model_3}\n\t\\begin{cases}\n\t\tY_{eq1}=\\sum_{h}^{'} Y_{h}^{1} \\frac{A_{h}A_{h}^{*}C_{0}}{A_{0}A_{0}^{*}C_{h}}\n\t\t\\vspace{.5cm} \\\\\n\t\tY_{st1}=Y_{0}^{2} \\frac{C_{0}B_{0}B_{0}^*}{D_{0}A_{0}A_{0}^*}\n\t\t\\vspace{.5cm} \\\\\n\t\tY_{st2}=\\sum_{h}^{'}Y_{h}^{2} \\frac{B_{h}B_{h}^{*}C_{0}}{A_{0}A_{0}^{*}D_{h}}(1 - e^{j2K_{zh}^{2}L})\n\t\t\\vspace{.5cm} \\\\\n\t\tY_{eq3}=\\sum_{h}^{'} Y_{h}^{3} \\frac{A_{h}A_{h}^{*}C_{0}}{A_{0}A_{0}^{*}C_{h}}\n\t\\end{cases}\n\\end{equation}\n\nThe bottom opening of the aperture has the same profile as the top opening of the aperture, so in the bottom opening of the aperture, the admittance $Y_{st2}$ will be unchanged. Only the directions of the aperture modes at the top and bottom are opposite, which does not make a difference in the value of this admittance. Due to the similarity of the profiles at the top and bottom openings, $Y_{eq3}$ is computed like $Y_{eq1}$. The final circuit model is examined in 3 forms which are described below.\n\n\\subsection{Single-Mode Circuit Model (SMCM)}\nThis case has been discussed in previous works \\cite{khavasi2015corrections,marques2009analytical,medina2009extraordinary}, but additional explanations are provided to clarify the differences between the cases in this subsection. The SMCM considers all the Floquet modes in regions I and III, but it considers only the dominant mode inside the aperture ($Y_{st2}=0$). The formulation in previous articles has been written in such a way that the electric field profile at the aperture opening is considered the same as the electric field profile of the dominant mode. For instance, in square and rectangular holes, the electric field profile on the aperture is considered as follows:\n\\begin{equation}\\label{TE01_model}\n\tE_{a}(x,y)=e_{0}^{2}(x,y)=\\cos(\\frac{\\pi y}{W}) \\hat{x}\n\\end{equation}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=8cm,height=6cm]{pillar_structure.pdf}\n\t\\caption{A periodic array of square PEC pillars with period P. The width of the pillars is W and the thickness of the pillars is L.\\label{PEC_pillar}}\n\\end{figure}\n\nIn general, to find the realistic dominant mode profile inside the hole, we can increase the thickness of the array in full-wave simulations, and the electric field profile in the middle of the thickness can be used. For apertures with shapes such as circles, squares, and rectangles, the dominant mode has an analytical closed-form expression; however, a simulation profile can be used for unusual shapes. This case is suitable for arrays with a high thickness (almost thickness greater than $\\frac{P}{4}$).\n\nOne of the structures that can be examined with the help of this model is the array of PEC pillars (Fig. \\ref{PEC_pillar}). For this structure, the dominant mode which propagates between the pillars was obtained analytically \\cite{edalatipour2015physics}. Using this profile, isotropic\/anisotropic environmental models were presented and also if this profile is used in the SMCM, it gives accurate results (Fig. \\ref{pill_results}).\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=13cm,height=5cm]{pillar_results.pdf}\n\t\\caption{The results obtained from the single-mode circuit model for two arrays of PEC pillars (Figure (a) geometrical parameters: $W=.9P$,$L=2P$ and (b) geometrical parameters: $W=.6P$,$L=1P$). 
The blue curves are the results of the circuit model, and the red dashed curves are the full-wave simulation results.\\label{pill_results}}\n\\end{figure}\n\n\\subsection{Multi-Mode Circuit Model (MMCM)}\nIn the MMCM, we consider all the Floquet modes in regions I and III, as well as all the modes inside the apertures. In the MMCM, the electric field profile on aperture is not considered similar to the dominant mode and this profile is obtained from the full-wave simulator. First, let us examine an array of square holes in which the width of the squares is $\\frac{P}{2}$, and the thickness of the holes is $\\frac{P}{40}$. The electric field on the aperture at the resonance frequency obtained from the Comsol simulation is used to compute circuit model admittances (it should be noted that\nwe have used field profiles at the first resonance frequency). In Fig. \\ref{yeq_yst2}, the values $Y_{st2}$ and $Y_{eq1}$ are depicted. TE and TM modes in the circuit model are considered inductors and capacitors, respectively. Due to the predominance of TE modes at low frequencies, they show an inductive effect, which gradually decreases with increasing frequency, and eventually, the capacitive effect will be dominant (the amount of transmitted power through the array is shown in Fig. \\ref{cm_1}).\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=8cm,height=5.5cm]{Yeq1.pdf}\n\t\\caption{\n\t\tThe red curve shows the admittance resulting from the modes inside the aperture except the dominant mode. The blue curve shows the admittance resulting from Floquet modes except the incident mode in region I.}\\label{yeq_yst2}\n\\end{figure}\n\nThe MMCM can be more efficient in multilayer structures. First, we simulate the array at a frequency close to the resonance; then, with the help of the obtained profile, the multilayer structures with any number of layers can be examined. One of the disadvantages of the MMCM is that it requires an accurate electric field profile on the aperture, so we must obtain the field profile at a frequency close to the resonance frequency from the full-wave simulator and use it to compute admittances (of course, we can use the electric field profile at an arbitrary frequency, but the closer to the frequencies of interest e.g. resonance, the more accurate the results). Another complexity of the MMCM is that it is difficult to obtain high-order modes for unusual shapes; thus, we present another case that efficiently solves this problem.\n\n\\subsection{Modified Single-Mode Circuit Model (MSMCM)}\nThe MSMCM similar to MMCM and SMCM considers all the Floquet modes in regions I and III, but considers only the dominant mode inside the aperture. The difference between the MSMCM and the SMCM is in the electric field profile. In the SMCM, the electric field profile at aperture opening is considered equivalent to the electric field profile of the dominant mode; However, in the MSMCM, we use the electric field profile on aperture obtained from the full-wave simulator (in the next section, for circular and square holes, closed-form expressions are presented). The modified single-mode circuit model gives results obtained in Fig. \\ref{cm_1} for the array of square holes which was examined in the previous subsection.\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=8cm,height=5.5cm]{All_method.pdf}\n\t\\caption{Comparison of the results of circuit models and full-wave simulation.}\n\t\\label{cm_1}\n\\end{figure}\n\nAs clearly seen in Fig. 
\\ref{cm_1}, the circuit models predict extraordinary transmission (EoT). As can be seen, the accuracy of the MSMCM is lower than that of the MMCM, but higher than that of the SMCM. In the MSMCM, modifying the aperture profile changes $Y_{eq1}$ and $Y_{st1}$ compared to the SMCM, which significantly increases the accuracy. Since the MSMCM uses only the dominant mode, it is more flexible than the MMCM; we therefore use this model in the following sections.\n\n\\section{Thin arrays of sub-wavelength apertures}\nDue to the importance and applications of arrays of sub-wavelength apertures perforated in a thin film, we focus on these structures in this section. In the previous section, we saw that the MMCM and the MSMCM both have high accuracy; in these models, we need the aperture field profile obtained from a full-wave simulation. For zero-thickness structures, closed-form expressions are presented below for the electric field profiles of both circular and square apertures; these profiles also produce good results for arrays of non-zero thickness. It should be noted that the MMCM requires an accurate electric field profile on the aperture; therefore, we use the approximated profiles in the MSMCM.\n\n\\subsection{Circular aperture}\nApertures of different shapes have different properties, and changing the shape and size of the aperture helps us to control electromagnetic waves. Experimental results have been reported for holes of various shapes, one of which is the circular hole. A closed-form expression has not previously been provided for the electric field profile of circular holes. Equation (\\ref{app_cr}) is an excellent approximation of the electric field profile for circular apertures. The actual (top row) and approximated (bottom row) profiles are depicted in Fig. \\ref{pro_cr}. Compared to rectangular apertures, in circular apertures changes in the angle of the incident wave have little effect on the reflected or transmitted power, which makes these structures attractive from this perspective. In the case of circular apertures, the dominant mode within the aperture is $TE_{11}$. \n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=8cm,height=7cm]{profile_circle.pdf}\n\t\\caption{Transverse electric field profiles obtained from the full-wave simulation and Eq. (\\ref{app_cr}).}\\label{pro_cr}\n\\end{figure}\n\n\\begin{equation}\\label{app_cr}\n\t\\begin{cases}\n\t\tE_{xa}=.1936\\frac{1}{\\sqrt{1 - 3.85(\\frac{r}{D})^2}} \\frac{\\cos(\\frac{\\pi r\\sin(\\phi)}{D})}{\\sqrt{1 - 1.12(\\frac{r\\sin(\\phi)}{D})^2}}\n\t\t\\vspace{.5cm} \\\\\n\t\tE_{ya}=.1074\\frac{(\\frac{2r}{D})^2}{\\sqrt{1 - 3.68(\\frac{r}{D})^2}} \\sin(2\\phi)\n\t\\end{cases}\n\\end{equation}\n\nUsing the approximated profile for the circular aperture, the result in Fig. \\ref{cr_ll} is obtained. As can be seen, the thickness $L=\\frac{P}{40}$ in the full-wave results makes a significant difference at high frequencies compared to the zero-thickness full-wave simulation. However, the MSMCM predicts the non-zero-thickness result quite accurately, which was not previously possible with other methods. Since we use the MSMCM, the admittance resulting from the high-order modes inside the aperture is zero. If the thickness is zero, the middle transmission line resulting from the dominant mode inside the aperture is also removed. 
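\n\nTo indicate how the closed-form profile of Eq. (\\ref{app_cr}) enters the circuit model in practice, the following minimal sketch (which is not part of the model itself) samples the profile over one unit cell and approximates the overlap integral $A_{0}$ of Eq. (\\ref{orth_e1}) for an $x$-polarized, normally incident wave; for this polarization the integrand reduces to $E_{xa}$. A unit period $P=1$ with $D=0.5P$ is assumed, and the constant normalization of the incident Floquet mode is omitted.\n\\begin{verbatim}\n# Minimal sketch (assumptions: P = 1, D = 0.5P, x-polarized normal incidence,\n# Floquet-mode normalization constant omitted).  It samples the closed-form\n# circular-aperture profile of Eq. (app_cr) and approximates A_0, the aperture\n# integral of E_xa, which enters (1 + R) = A_0 \/ C_0 in Eq. (orth_e1).\nimport math\n\nP = 1.0\nD = 0.5 * P\nM = 400                        # grid points per side of the unit cell\nh = P \/ M                      # grid spacing\n\ndef profile(x, y):\n    # (x, y) measured from the centre of the hole; zero outside the aperture\n    r = math.hypot(x, y)\n    if 2.0 * r >= D:\n        return 0.0, 0.0\n    phi = math.atan2(y, x)\n    exa = (0.1936 \/ math.sqrt(1.0 - 3.85 * (r \/ D) ** 2)\n           * math.cos(math.pi * r * math.sin(phi) \/ D)\n           \/ math.sqrt(1.0 - 1.12 * (r * math.sin(phi) \/ D) ** 2))\n    eya = (0.1074 * (2.0 * r \/ D) ** 2\n           \/ math.sqrt(1.0 - 3.68 * (r \/ D) ** 2) * math.sin(2.0 * phi))\n    return exa, eya\n\nA0 = 0.0\nfor i in range(M):\n    for j in range(M):\n        x = -0.5 * P + (i + 0.5) * h\n        y = -0.5 * P + (j + 0.5) * h\n        exa, eya = profile(x, y)          # eya does not enter this overlap\n        A0 += exa * h * h                 # midpoint rule for the surface integral\n\nprint(A0)\n\\end{verbatim}\nThe remaining coefficients $A_{h}$ and $B_{h}$ in Eq. (\\ref{circuit_parameter1}) are obtained in the same way, with the corresponding Floquet or aperture mode in place of the incident one.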
\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=8cm,height=6cm]{circle_results.pdf}\n\t\\caption{Results of full-wave simulations for two different thicknesses and the MSMCM result (geometrical parameters used in the circuit model: $D=.5P$, $L=\\frac{P}{40}$).}\n\t\\label{cr_ll}\n\\end{figure}\n\n\\subsection{Square aperture}\nPrevious profiles for square apertures consider field variations in only one direction, which can cause errors, especially when the aperture is large (the width is more than half a period). The profiles used in previous studies produced accurate results for narrow rectangular apertures because the field variation along the breadth could be ignored. For square holes, we can use Eq. (\\ref{app_sq}) as the electric field profile. A comparison of the electric field profile obtained from the full-wave simulation with the closed-form expression is provided in Fig. \\ref{pro_sq}, and using the approximated profile yields the results in Fig. \\ref{sq_l0}.\n\\begin{equation}\\label{app_sq}\n\t\\begin{cases}\n\t\tE_{xa}=.1936\\frac{\\cos(\\frac{\\pi y}{W})}{\\sqrt{1 - 4(\\frac{y}{W})^2}} \\frac{1}{\\sqrt{1 - 3.85(\\frac{x}{W})^2}}\n\t\t\\vspace{.5cm} \\\\\n\t\tE_{ya}=.0928\\frac{\\frac{y}{W}}{\\sqrt{1 - 3.85(\\frac{y}{W})^2}} \\sin(\\frac{2\\pi x}{W})\n\t\\end{cases}\n\\end{equation}\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=8cm,height=7cm]{profile_square.pdf}\n\t\\caption{Transverse electric field profiles obtained from the full-wave simulation and Eq. (\\ref{app_sq}).} \\label{pro_sq}\n\\end{figure}\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=13cm,height=5cm]{square_results1.pdf}\n\t\\caption{Figure (a) compares the results from the previous profile and the new profile ($L=0$, $W=.75P$). Figure (b) shows the results obtained from the modified single-mode circuit model using the new profile.}\n\t\\label{sq_l0}\n\\end{figure}\n\n\n\\section{Multilayer structures}\nUsing an array of sub-wavelength apertures perforated in a film in practical applications requires placing the array on or between one or more dielectric layers. For example, to use these structures for applications such as sensors, typically a substrate is used, and eventually this two-layer structure is placed on a layer of glass for testing \\cite{yahiaoui2016terahertz,gordon2008new,dhawan2008plasmonic}. Adding dielectric layers increases the full-wave simulation time; moreover, since the thickness of the dielectric layers may be many times larger than the other dimensions, optimization becomes very time-consuming. One of the advantages of the circuit model is its flexibility and simplicity in analyzing multilayer structures, which further highlights the importance of developing accurate circuit models. In this section, additional layers are added to the upper and lower environments of the periodic structure. 
In this case, we do not change the admittance related to the inside of the hole, but for the top and bottom layer, we have to modify the admittances as follows:\n\\begin{equation}\n\t\\begin{cases}\n\t\tY_{eq12}=\\sum_{h}^{'}Y_{h}^{2} \\frac{(Y_{h}^{1} - jY_{h}^{2}\\tan(K_{zh}^{2}T1))}{(Y_{h}^{2} - jY_{h}^{1}\\tan(K_{zh}^{2}T1))} \\frac{A_{h}A_{h}^{*}C_{0}}{A_{0}A_{0}^{*}C_{h}}\n\t\t\\vspace{.5cm} \\\\\n\t\tY_{eq45}=\\sum_{h}^{'} Y_{h}^{4}\\frac{(Y_{h}^{5} - jY_{h}^{4}\\tan(K_{zh}^{4}T2))}{(Y_{h}^{4} - jY_{h}^{5}\\tan(K_{zh}^{4}T2))} \\frac{A_{h}A_{h}^{*}C_{0}}{A_{0}A_{0}^{*}C_{h}}\n\t\\end{cases}\n\\end{equation}\n\nFor a multilayer structure with an array of circular apertures like Fig. \\ref{multi_layer_1}, the result of the circuit model is shown in Fig. \\ref{mult1}. As can be seen, the reflection curve has undergone many changes compared to the monolayer structure, and many resonances have occurred.\nIn the examined multilayer structure, a layer was added to the upper environment of the structure, which can represent the sample under test within the sensor. As we know, in sensors, the goal is to detect the change in the refractive index of layer 2, which is the reason for adding this layer.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=7cm,height=6cm]{multi_circuit.pdf}\n\t\\caption{Modified single-mode circuit model for the array of circular apertures (Fig. \\ref{periodic_array}(b)) with two layers added to the top and bottom. The thickness of the circular holes is $\\frac{P}{40}$, and the thickness of the added dielectric layers is shown in the figure ( $n_{1}= n_{5}=1,n_{2}=1.8,n_{3}=2, n_{4}=1.4 , L=\\frac{P}{40} , D=\\frac{P}{2}$). \\label{multi_layer_1} }\n\\end{figure}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width=8.5cm,height=6cm]{multi_result.pdf}\n\t\\caption{The reflection curve corresponding to the circuit model shown in Fig. \\ref{multi_layer_1}. \\label{mult1}}\n\\end{figure}\n\n\n\n\n\n\\section{Conclusion}\nIn this paper, an extended circuit model was proposed for the arrays of sub-wavelength holes in a PEC film, by considering all the modes inside the aperture. The circuit model was examined in 3 general cases. The SMCM, due to the lack of need for electric fields profile at hole opening (dominant mode profile is considered as the profile at hole opening), is more useful than the others if the structure's thickness is large. If the thickness is large, for holes with common shapes such as circles, squares, and rectangles, the use of the SMCM is recommended. MMCM and MSMCM provide superior and accurate results, which is especially unprecedented for thin arrays. The MSMCM has higher performance than other models in thin arrays because it has high accuracy and, unlike the MMCM, does not need to obtain high-order modes inside the aperture, which means that the MSMCM has more flexibility. Using the approximated profiles and MSMCM, we obtained accurate results for thin arrays of circular and square holes.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}