{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nMulti-view data contain information relevant for the identification of patterns or clusters that allow us to specify groups of subjects or objects. Our focus is on patients for which we have bio-medical and\/or clinical observations describing patient characteristics obtained from various diagnostic procedures or produced by different molecular technologies~\\cite{fu2020overview}. The different types of subject characteristics constitute views related to these patients. Integrative clustering of these views facilitates the detection of patient groups, with the consequence of improved clinical diagnostic and treatment schemes.\n\nSimple integration of single view clustering results is not appropriate for the diversity and complexity of available medical observations. Even state-of-the-art multi-view approaches have their limitations. Ensemble clustering has the potential to overcome some of them \\cite{ronan2018openensembles}\\cite{ alqurashi2019clustering}. For instance, while spectral clustering might be the optimal method for a specific image-based analysis, agglomerative clustering might be more appropriate for tabular data. This can be the case, where patient data reflect some hierarchical structure in a disease of interest and its subtypes \\cite{ciriello2013emerging}. Moreover, in real-world applications the data views originate from highly heterogeneous input sources. Thus, each view needs to be clustered with the best possible and most adequate strategy. Multi-view clustering methods are widely applied within the bio-medical domain. Molecular data from different biological layers are retrieved for the same set of patients. The clusters inferred from these multi-omics observations facilitate the stratification of cancer patients into sub-groups, paving the way towards precision medicine. \n\nHere, we present Parea, a generic and flexible methodology to build clustering ensembles of arbitrary complexity. To be precise, we introduce Parea$_{\\textit{hc}}$, an ensemble method which performs hierarchical clustering and data fusion for disease subtype discovery. The name of our method is derived from the Greek word {\\it Parea}, meaning a group of friends who gather to share experiences, values, and ideas.\n\nThe manuscript is structured as follows: Section~\\ref{sec:Parea_general} formally describes the ensemble structures we have developed. Section~\\ref{sec:approach} presents our multi-view hierarchical ensemble clustering approach for disease subtype detection. We discuss related work in Section~\\ref{sec:other_methods} and introduce the methods we used for benchmark comparisons. In Section~\\ref{sec:results} the results are presented and discussed. A brief introduction to the \\textit{Pyrea} Python package is given in Section~\\ref{sec:pyrea}. We conclude with Section~\\ref{sec:conclusion}.\n\n\\section{General ensemble architecture}\n\\label{sec:Parea_general}\n\nThe following concept for multi-view ensemble clustering is proposed. Each view $V \\in \\mathbb{R}^{n\\times p}$ is associated with a specific clustering method $c$, where $n$ is the number of samples and $p$ is the number of predictors, and in total we have $N$ data views. An ensemble, called $\\mathcal{E}$, can be modelled using a set of views $\\mathcal{V}$ and an associated fusion algorithm $f$. 

\begin{equation}
 \mathcal{V} \mapsfrom \{(V \in \mathbb{R}^{n\times p}, c)\}
\end{equation}

\begin{equation}
 \mathcal{E}(\mathcal{V}, f) \mapsto \widetilde{V}\in \mathbb{R}^{n\times n}
\end{equation}

\begin{equation}
 \mathcal{V} \mapsfrom \{(\widetilde{V}\in \mathbb{R}^{n\times n}, c)\}
\end{equation}

From the above equations we can see that a specified ensemble $\mathcal{E}$ creates a view $\widetilde{V} \in \mathbb{R}^{n\times n}$, which again can be used to specify $\mathcal{V}$, including an associated clustering algorithm $c$. With this concept it is possible to stack views and ensembles \textit{layer-wise} into arbitrarily complex ensemble architectures. It should be noted, however, that the resulting view of a specified ensemble $\mathcal{E}$ forms an affinity matrix between the $n$ samples, and thus only those clustering methods which accept an affinity or a distance matrix as input are applicable.

\section{Proposed Ensemble Approach}
\label{sec:approach}

The Parea$_{\textit{hc}}$ ensemble approach comprises two different strategies: Parea$_{\textit{hc}}^{1}$ is limited to the application of two selected hierarchical clustering methods, while Parea$_{\textit{hc}}^{2}$ allows the hierarchical clustering methods in the data fusion process to be varied.

The two hierarchical clustering methods of Parea$_{\textit{hc}}^{1}$ for multiple data views are $hc_{1}$ and $hc_{2}$. The resulting fused matrices $\widetilde{V}$ are clustered again with the same methods and the results are combined to form a final consensus (see Figure \ref{fig:Parea_arch}, panel (a)). A formal description of Parea$_{\textit{hc}}^{1}$ is given by:

\begin{equation}
\mathcal{V}_{1} \mapsfrom \{(V_{1},hc_{1}),(V_{2},hc_{1}),\ldots, (V_{N},hc_{1})\},
\quad
\mathcal{V}_{2} \mapsfrom \{(V_{1},hc_{2}),(V_{2},hc_{2}),\ldots, (V_{N},hc_{2})\}
\end{equation}

\begin{equation}
\mathcal{E}_{1}(\mathcal{V}_{1}, f) \mapsto \widetilde{V}_{1},
\quad
\mathcal{E}_{2}(\mathcal{V}_{2}, f) \mapsto \widetilde{V}_{2}
\end{equation}

\begin{equation}
 \mathcal{V}_{3} \mapsfrom \{(\widetilde{V}_{1},hc_{1}),(\widetilde{V}_{2},hc_{2})\}
\end{equation}

\begin{equation}
\mathcal{E}_{3}(\mathcal{V}_{3}, f) \mapsto \widetilde{V}_{3}.
\end{equation}
The affinity matrix $\widetilde{V}_{3}$ is then clustered with $hc_{1}$ and $hc_{2}$ from the first layer, and the consensus of the obtained clustering solutions constitutes the final cluster assignments:

\begin{equation}
\mathcal{V}_{4} \mapsfrom \{(\widetilde{V}_{3},hc_{1}),(\widetilde{V}_{3},hc_{2})\}
\end{equation}

\begin{equation}
\text{cons}(\mathcal{V}_{4})
\end{equation}
Given the proposed ensemble architecture, a genetic algorithm infers the optimal combination of $hc_{1}$ and $hc_{2}$, using the silhouette coefficient \cite{rousseeuw1987silhouettes} as a fitness function. For the data fusion algorithm $f$, we utilize a method introduced in \cite{pfeifer2021hierarchical}. See Figure \ref{fig:Parea_arch} for a graphical illustration of the described ensemble architecture.
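
To make the layer-wise construction concrete, the following Python sketch mimics Parea$_{\textit{hc}}^{1}$ using SciPy's hierarchical clustering. It is only a minimal illustration: the co-association function \texttt{fuse} is a simplified stand-in for the fusion algorithm $f$ of \cite{pfeifer2021hierarchical}, clustering an affinity matrix via the condensed distances $1-\widetilde{V}$ is one simple choice among several, and the returned co-association matrix stands in for the consensus $\text{cons}(\mathcal{V}_4)$.

\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def labels(view, method, k):
    # hierarchically cluster one (n x p) view into k groups
    return fcluster(linkage(view, method=method), k, criterion='maxclust')

def fuse(label_sets):
    # simplified stand-in for f: co-association affinity whose (a, b)
    # entry is the fraction of label sets placing a and b together
    L = np.array(label_sets)
    return (L[:, :, None] == L[:, None, :]).mean(axis=0)

def affinity_labels(V, method, k):
    # cluster an (n x n) affinity matrix via condensed distances 1 - V
    D = squareform(1.0 - V, checks=False)
    return fcluster(linkage(D, method=method), k, criterion='maxclust')

def parea_hc1(views, hc1, hc2, k):
    V1 = fuse([labels(V, hc1, k) for V in views])    # ensemble E_1
    V2 = fuse([labels(V, hc2, k) for V in views])    # ensemble E_2
    V3 = fuse([affinity_labels(V1, hc1, k),
               affinity_labels(V2, hc2, k)])         # ensemble E_3
    # final consensus: cluster V_3 with hc1 and hc2 and fuse once more
    return fuse([affinity_labels(V3, m, k) for m in (hc1, hc2)])
\end{verbatim}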

In the Parea$_{\textit{hc}}^{2}$ approach the views are clustered with up to $N$ different hierarchical clustering methods $hc_{1}, hc_{2}, \ldots, hc_{N}$, where $N$ is the number of data views. A formal description of Parea$_{\textit{hc}}^{2}$ is given by:

\begin{equation}
\mathcal{V}_{1} \mapsfrom \{(V_{1},hc_{1}),(V_{2},hc_{2}),\ldots, (V_{N},hc_{N})\}
\end{equation}

\begin{equation}
\mathcal{E}_{1}(\mathcal{V}_{1}, f) \mapsto \widetilde{V}_{1}
\end{equation}
The affinity matrix $\widetilde{V}_{1}$ is then clustered with $hc_{1}$, $hc_{2}$, and $hc_{3}$. The consensus of the obtained clustering results constitutes the final cluster assignments:

\begin{equation}
 \mathcal{V}_{2} \mapsfrom \{(\widetilde{V}_{1},hc_{1}),(\widetilde{V}_{1},hc_{2}), (\widetilde{V}_{1},hc_{3})\}
\end{equation}

\begin{equation}
\text{cons}(\mathcal{V}_{2})
\end{equation}
The best combination of clustering methods ($hc_{1}$, $hc_{2}$, and $hc_{3}$) is again inferred by a genetic algorithm, where the silhouette coefficient \cite{rousseeuw1987silhouettes} is deployed as a fitness function. We consider eight different hierarchical clustering methods, namely single-linkage and complete-linkage clustering \cite{murtagh2012algorithms}, the unweighted pair-group method using arithmetic averages (UPGMA) \cite{sokal1958statistical}, the weighted pair-group method using arithmetic averages (WPGMA) \cite{sokal1958statistical}, the weighted pair-group method using centroids (WPGMC) \cite{gower1967comparison}, the unweighted pair-group method using centroids (UPGMC) \cite{sokal1958statistical}, and the two variants of clustering based on Ward's minimum variance (ward.D and ward.D2) \cite{ward1963hierarchical,murtagh2014ward}.

\begin{figure*}[h!]
\begin{center}
\includegraphics[width=17.3cm]{Figures/Parea_general_crop.pdf}
\end{center}
\caption{\textbf{(a)} The Parea$_{\textit{hc}}^{1}$ ensemble architecture. The views are organised in two ensembles. Each ensemble is associated with a specific clustering method. \textbf{(b)} The Parea$_{\textit{hc}}^{2}$ ensemble architecture. The views can be clustered using different hierarchical clustering techniques.}\label{fig:Parea_arch}
\end{figure*}

\section{Alternative approaches}
\label{sec:other_methods}

A comparison with alternative approaches was conducted using a set of state-of-the-art multi-view clustering methods implemented within the Python package mvlearn \cite{perry2021mvlearn}. Part of this set is a multi-view spectral clustering approach with and without the use of a co-training framework \cite{kumar2011co}. An additional method is based on multi-view $k$-means clustering \cite{chao2017survey}, plus an implementation of multi-view spherical $k$-means using the co-EM framework as described in \cite{bickel2004multi}.

For disease subtype detection we compared Parea$_{\textit{hc}}$ with NEMO \cite{rappoport2019nemo}, SNF (Similarity Network Fusion) \cite{wang2014similarity}, HCfused \cite{pfeifer2021hierarchical}, and PINSplus \cite{nguyen2019pinsplus}. SNF models the similarity between subjects or objects as a network and then fuses these networks via an interchanging diffusion process. Spectral clustering is applied to the fused network to infer the final cluster assignments.
NEMO builds upon SNF, but provides solutions for partial data and implements a novel \textit{eigen-gap} method \cite{von2007tutorial} to infer the optimal number of clusters. The method implemented within PINSplus systematically adds noise to the data and infers the best number of clusters based on the stability against this noise. Once the best $k$ (number of clusters) is detected, binary matrices are formulated reflecting the cluster solutions of each single-view contribution. A final agreement matrix is derived by counting the number of times two subjects or objects appear in the same cluster. This agreement matrix then serves as the input to a standard clustering method, such as $k$-means.

HCfused is a hierarchical clustering and data fusion algorithm. It is based on \textit{bottom-up} agglomerative clustering to create a fused affinity matrix. At each step, two clusters are merged within the view that provides the minimal distance between these clusters. The number of times two samples appear in the same cluster is reflected by a co-association matrix. The final cluster assignments are obtained from hierarchical clustering based on Ward's minimum variance \cite{ward1963hierarchical,murtagh2014ward}.

\section{Results and discussion}
\label{sec:results}

\subsection*{Evaluation on machine learning benchmark data sets}

We tested Parea$_{\textit{hc}}$ on the IRIS data set available from the UCI Machine Learning Repository\footnote{See \url{https://archive.ics.uci.edu/ml/datasets/iris}}. It contains three classes of 50 instances each, where each class refers to a type of iris plant. We compared the results of the ensemble with each of the possible ensemble members, and also compared the outcomes with a consensus approach, where the cluster solutions of all single algorithms were combined. The data pool was sampled 50 times and the clustering methods were applied in each iteration. The number of clusters $k$ was set to three, corresponding to the ground truth. These sanity checks revealed superior results for the Parea$_{\textit{hc}}^{1}$ ensemble method compared to a simple consensus approach, as judged by the Adjusted Rand Index (ARI) (see Figure \ref{fig:IRIS}). We could further observe that the underlying genetic algorithm infers ward.D and ward.D2 as the best performing method combination (Figure \ref{fig:IRIS}, panel (b)).

In an additional investigation, we evaluated the accuracy of the discussed methods on the nutrimouse data set \cite{martin2007novel}. The data set originates from a nutrigenomic study in mice, which considered the effects of five regimens with contrasting fatty acid compositions on liver lipids and hepatic gene expression. Two views were acquired from forty mice. First, the expressions of 120 genes, potentially relevant for the nutrition study, were measured in liver cells. Second, the lipid concentrations of 21 hepatic fatty acids were measured by gas chromatography.

For the nutrimouse data set, Parea$_{\textit{hc}}^{2}$ performs better than Parea$_{\textit{hc}}^{1}$ (see Figure \ref{fig:100leaves}, panel (a)). This observation suggests that higher accuracy can be achieved when the views are analysed with disjoint clustering strategies. Parea$_{\textit{hc}}^{2}$ performs best when the median NMI (\textit{Normalised Mutual Information}) is used as a metric. However, at the same time we observed higher variance for the Parea ensembles compared to multi-view spectral clustering.
It is worth noting that the alternative spectral-based approaches from the mvlearn Python package performed as well as Parea$_{\textit{hc}}^{1}$.

Lastly, we studied Parea's performance on the one-hundred plant species leaves multi-view data set, available from the UCI Machine Learning Repository\footnote{See \url{https://archive.ics.uci.edu/ml/datasets/One-hundred+plant+species+leaves+data+set}}. For each leaf sample, a 64-element vector is observed per view; these vectors form descriptors for shape, texture, and margin, respectively. In contrast to the IRIS data set, it is composed of multiple views. As can be seen in Figure \ref{fig:100leaves}, Parea$_{\textit{hc}}^{1}$ and Parea$_{\textit{hc}}^{2}$ compete well with the spectral-based approaches. Multi-view $k$-means clustering does not capture the ground truth class distribution (see Figure \ref{fig:100leaves}, panel (b)).

\begin{figure}
 \centering
 \begin{subfigure}[b]{0.49\textwidth}
 \centering
 \includegraphics[width=\textwidth]{Figures/Parea_vs_Single.pdf}
 \caption{}
 \end{subfigure}
 \begin{subfigure}[b]{0.49\textwidth}
 \centering
 \includegraphics[width=\textwidth]{Figures/Heatmap.pdf}
 \caption{}
 \end{subfigure}
 \caption{(a) Parea$_{\textit{hc}}$ versus each of the available ensemble methods executed on the \textit{single-view} IRIS data set. (b) The pairwise Parea$_{\textit{hc}}$ ensembles inferred by the genetic algorithm for clustering the IRIS data set.}
 \label{fig:IRIS}
\end{figure}

\begin{figure}
 \centering
 \begin{subfigure}[b]{0.49\textwidth}
 \centering
 \includegraphics[width=\textwidth]{Figures/Mouse_diet.pdf}
 \caption{}
 \end{subfigure}
 \begin{subfigure}[b]{0.50\textwidth}
 \centering
 \includegraphics[width=\textwidth]{Figures/100leaves.pdf}
 \caption{}
 \end{subfigure}
 \caption{Parea$_{\textit{hc}}$ versus a set of state-of-the-art multi-view methods implemented within the mvlearn package \cite{perry2021mvlearn}. (a) The \textit{multi-view} nutrimouse data set. (b) The \textit{multi-view} 100 leaves data set.}
 \label{fig:100leaves}
\end{figure}

\subsection*{Multi-omics clustering for disease subtype discovery}

We applied the aforementioned ensemble methods to patient data of seven different cancer types, namely glioblastoma multiforme (GBM), kidney renal clear cell carcinoma (KIRC), liver hepatocellular carcinoma (LIHC), skin cutaneous melanoma (SKCM), ovarian serous cystadenocarcinoma (OV), sarcoma (SARC), and acute myeloid leukemia (AML), using the externally known survival outcomes for validation. The Parea$_{\textit{hc}}$ ensemble approach was studied on multi-omics data, including gene expression data (mRNA), DNA methylation, and micro-RNAs. The obtained results were compared with the alternative approaches SNF, NEMO, PINSplus, and HCfused. All data were retrieved from the ACGT lab at Tel Aviv University\footnote{See \url{http://acgt.cs.tau.ac.il/multi_omic_benchmark/download.html}}, which makes available a data repository recently proposed as a convenient benchmark for multi-omics clustering approaches \cite{rappoport2018multi}. The data were pre-processed as follows: patients and features with more than $20\%$ missing values were removed, and the remaining missing values were imputed with $k$-nearest neighbor imputation. For the methylation data, we selected the 5000 features with maximal variance in each data set. All features were then normalised to have mean zero and standard deviation one.
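
A sketch of this preprocessing pipeline for a single view is given below; the number of imputation neighbours is not specified above and is an assumed parameter, and the variance filter (\texttt{top\_k}) applies only to the methylation view.

\begin{verbatim}
import numpy as np
from sklearn.impute import KNNImputer

def preprocess(X, top_k=None, n_neighbors=5):
    # X: patients x features, with NaNs marking missing values
    X = X[np.isnan(X).mean(axis=1) <= 0.2]     # drop patients >20% missing
    X = X[:, np.isnan(X).mean(axis=0) <= 0.2]  # drop features >20% missing
    X = KNNImputer(n_neighbors=n_neighbors).fit_transform(X)
    if top_k is not None:                      # e.g. top_k=5000 (methylation)
        X = X[:, np.argsort(X.var(axis=0))[::-1][:top_k]]
    return (X - X.mean(axis=0)) / X.std(axis=0)  # z-score normalisation
\end{verbatim}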

The resulting multi-omics views were then clustered with the discussed multi-view clustering approaches. It is important to mention that the cancer patients were exclusively analysed based on their genomic footprints. The survival statuses and times of all patients were retrieved to validate the quality of the inferred patient clusters. Here, a well-performing clustering method is determined by its capability to separate patients into groups with similar event statuses in terms of survival. To this end, we randomly sampled 100 patients 30 times from the data pool and performed the Cox log-rank test, which is an inferential procedure for the comparison of event time distributions among independent (i.e. clustered) patient groups.

In the case of Parea$_{\textit{hc}}$, the optimal number of clusters was determined by the silhouette coefficient. We set the number of HCfused \cite{pfeifer2021hierarchical} fusion iterations to 30. The maximum possible number of clusters was fixed to 10. The obtained Cox log-rank $p$-values ($\alpha=0.05$ significance level) are displayed in Figure \ref{fig:TCGA} and Table \ref{tab:table1}. Our Parea$_{\textit{hc}}$ ensembles outperformed the alternative approaches in six out of seven cases. SKCM is the only cancer type for which HCfused achieved better results. Notably, spectral-based clustering (see NEMO \cite{rappoport2019nemo} and SNF \cite{wang2014similarity}) does not perform as well as it did on the benchmark machine learning data sets. We further learned that Parea$_{\textit{hc}}^{1}$ is more accurate than Parea$_{\textit{hc}}^{2}$ (Figure \ref{fig:TCGA}, Table \ref{tab:table1}).

\begin{figure}[h!]
\begin{center}
\includegraphics[width=14cm]{Figures/TCGA_median.pdf}
\end{center}
\caption{Parea$_{\textit{hc}}$ in comparison with the alternative approaches SNF, NEMO, PINSplus, and HCfused. Colored bars represent the method-specific median Cox log-rank $p$-values for the seven different cancer types. The vertical line refers to the $\alpha=0.05$ significance level.}
\label{fig:TCGA}
\end{figure}

\begin{table}[h!]
 \begin{center}
 \caption{Survival analysis of TCGA cancer data}
 \label{tab:table1}
 \begin{tabular}{l l l l l l l l}
 \textbf{Cancer type} & \textbf{Sample size} & SNF & PINSplus & NEMO & HCfused & Parea$_{\textit{hc}}^{1}$ & Parea$_{\textit{hc}}^{2}$ \\
 \hline
 GBM & 538 & 0.1304 & 0.2223 & \textbf{0.0347} & 0.0997 & \textbf{0.0447} & \textbf{0.0347}\\
 KIRC & 606 & 0.3962 & 0.4005 & 0.3464 & 0.0561 & \textbf{0.0137} & \textbf{0.0400}\\
 LIHC & 423 & 0.5357 & 0.6731 & 0.4354 & 0.2062 & \textbf{0.0334} & \textbf{0.0436}\\
 SKCM & 473 & 0.5153 & 0.3956 & 0.4565 & 0.0699 & 0.1677 & 0.1629\\
 OV & 307 & 0.4042 & 0.5300 & 0.3593 & 0.2594 & 0.1685 & 0.2870 \\
 SARC & 265 & 0.1622 & 0.2024 & 0.0979 & \textbf{0.0408} & \textbf{0.0076} & \textbf{0.0109} \\
 AML & 173 & 0.0604 & 0.1973 & \textbf{0.0440} & 0.1148 & \textbf{0.0167} & 0.0502\\
 \hline
 \end{tabular}
 \end{center}
 \centering
Display of the median Cox log-rank $p$-values. Significant results ($\alpha=0.05$) are highlighted in bold.
\end{table}

\section{Pyrea Software Library}
\label{sec:pyrea}
Pyrea is a software framework which allows for flexible ensembles to be built, layer-wise, using a variety of clustering algorithms and fusion techniques.
It can be used to implement the ensemble methods discussed throughout this paper, as well as any other custom ensemble structure that users may wish to create. It is made available as a Python package, which can be installed via the \textit{pip} package manager. See the project's GitHub repository under \url{https://github.com/mdbloice/Pyrea} for source code and usage examples, such as how to implement some of the ensembles mentioned above. Comprehensive documentation is available under \url{https://pyrea.readthedocs.io}. Pyrea is MIT licensed, and available for Windows, macOS, and Linux.

\section{Conclusion}
\label{sec:conclusion}
We have presented Parea$_{\textit{hc}}$, an ensemble approach to multi-view hierarchical clustering for cancer subtype discovery. We could show that Parea$_{\textit{hc}}$ competes well with current state-of-the-art multi-view clustering techniques, on classical machine learning data sets as well as on real-world multi-omics cancer patient data. The proposed methodology for building ensembles is highly versatile and allows for ensembles to be stacked layer-wise. Additionally, the Parea ensemble strategy is not limited to a specific clustering technique. Within our developed Python package \textit{Pyrea}, we enable flexible ensemble building, while providing a wide range of clustering algorithms, data fusion techniques, and metrics to infer the best number of clusters.

\bibliographystyle{IEEEtran}

\section{Introduction}
The problem of group synchronization is critical for various tasks in data science, including structure from motion (SfM), simultaneous localization and mapping (SLAM), cryo-electron microscopy imaging, sensor network localization, multi-object matching and community detection. Rotation synchronization, also known as rotation averaging, is the most common group synchronization problem in 3D reconstruction. It asks to recover camera rotations from measured relative rotations between pairs of cameras. Permutation synchronization, which has applications in multi-object matching, asks to obtain globally consistent matches of objects from possibly erroneous measurements of matches between some pairs of objects. The simplest example of group synchronization is $\mathbb Z_2$ synchronization, which appears in community detection.

The general problem of group synchronization can be mathematically formulated as follows. Assume a graph $G([n],E)$ with $n$ vertices indexed by $[n]=\{1,\ldots,n\}$, a group $\mathcal{G}$, and a set of group elements $\{g_i^*\}_{i=1}^n \subseteq \mathcal{G}$. The problem asks to recover $\{g_i^*\}_{i=1}^n$ from noisy and corrupted measurements $\{g_{ij}\}_{ij \in E}$ of the group ratios $\{g_i^*g_j^{*-1}\}_{ij \in E}$. We note that one can only recover, or approximate, the original group elements $\{g_i^*\}_{i\in [n]}$ up to a right group action. Indeed, for any $g_0\in \mathcal G$, $g_{ij}^*$ can also be written as $g_i^*g_0(g_j^*g_0)^{-1}$, and thus $\{g_i^*g_0\}_{i\in [n]}$ is also a solution.

The most challenging issue for group synchronization is the practical scenario of highly corrupted and noisy measurements.
Traditional least squares solvers often fail to produce accurate results in such a scenario. Moreover, some basic estimators that seem to be robust to corruption often do not tolerate high levels of noise in practice. We aim to propose a general method for group synchronization that may tolerate high levels and different kinds of corruption and noise. While our basic ideas are formally general, in order to carefully refine and test them, we focus on the special problem of rotation synchronization, which is also known as rotation averaging \cite{RotationAveraging13}. We choose this problem as it is the most common, and possibly most difficult, synchronization problem in 3D computer vision.

\subsection{Related Works}
\label{sec:related}
Most previous group synchronization solvers minimize an energy function. For the discrete groups $\mathbb Z_2$ and $S_N$, least squares energy minimization is commonly used. Relevant robustness results, under special corruption and noise models, are discussed in \citet{Z2Afonso2, Z2abbe, Z2Afonso, chen_partial, Huang13, PPM_vahan, deepti}.

For Lie groups, such as $SO(D)$, that is, the group of $D \times D$ orthogonal matrices with determinant 1, where $D\geq 2$, least squares minimization was proposed to handle Gaussian noise \citep{rotationNP,StrongDuality18,Govindu04_Lie}. However, when the measurements are also adversarially corrupted, this framework does not work well and other corruption-robust energy functions need to be used \cite{ChatterjeeG13_rotation, L12, HartleyAT11_rotation, SO2ML, wang2013exact}. The most common corruption-robust energy function uses least absolute deviations. \citet{wang2013exact} prove that under a very special probabilistic setting with $\mathcal G = SO(D)$, the pure minimizer of this energy function can exactly recover the underlying group elements with high probability. However, their assumptions are strong, and they use convex relaxation, which changes the original problem and is expensive to compute. \citet{SO2ML} apply a trimmed averaging procedure for robustly solving $SO(2)$ synchronization. They are able to recover the ground truth group elements under a special deterministic condition on the topology of the corrupted subgraph. However, the verification of this condition and its extension to $SO(D)$, where $D>2$, are nontrivial. \citet{HartleyAT11_rotation} used the Weiszfeld algorithm to minimize the least-absolute-deviations energy function with $\mathcal G = SO(3)$. Their method iteratively computes geodesic medians. However, they update only one rotation matrix per iteration, which results in slow empirical convergence and may increase the possibility of getting stuck at local minima. \citet{L12} proposed a robust Lie-algebraic averaging method over $\mathcal G = SO(3)$. They apply an iteratively reweighted least squares (IRLS) procedure in the tangent space of $SO(3)$ to optimize different robust energy functions, including the one that uses least absolute deviations. They claim that the use of the $\ell_{1/2}$ norm for deviations results in the highest empirical accuracy. The empirical robustness of the two latter works is not theoretically guaranteed, even in simple settings.
A recent deep learning method \cite{NeuroRA} solves a supervised version of rotation synchronization, but it does not apply to the above unsupervised formulation of the problem.

\citet{truncatedLS} use least absolute deviations minimization for solving 1D translation synchronization, where $\mathcal G=\mathbb R$ with addition. They propose a special version of IRLS and provide a deterministic exact recovery guarantee that depends on properties of the graph and its Laplacian. They do not explain their general result in an adversarial setting, but only in a very special noisy setting.

Robustness results were established for least absolute deviations minimization in camera location estimation, which is somewhat similar to group synchronization \cite{HandLV15,LUDrecovery}. These results assume a special probabilistic setting; however, they have relatively weak assumptions on the corruption model.

Several energy minimization solutions have been proposed for $SE(3)$ synchronization \cite{SE3_MCMC, SE3_SDP_jesus, SE3_Rosen, SE3_sync, SE3_RPCA}. This problem asks to jointly estimate camera rotations and locations from relative measurements of both. None of these solutions successfully addresses highly corrupted scenarios.

Other works on group synchronization, which do not minimize energy functions but aim to robustly recover corrupted solutions, screen corrupted edges using cycle consistency information. For a group $\mathcal{G}$ with group identity denoted by $e$, any $m \geq 3$, any cycle $L=\{i_1i_2, i_2i_3, \dots, i_m i_1\}$ of length $m$ and any corresponding product of ground-truth group ratios along $L$, $g^*_L=g_{i_1i_2}^*g_{i_2i_3}^*\cdots g_{i_mi_1}^*$, the cycle-consistency constraint is $g^*_L= e$. In practice, one is given the product of measurements, that is, $g_L=g_{i_1i_2}g_{i_2i_3}\cdots g_{i_mi_1}$, and in order to ``approximately satisfy the cycle-consistency constraint'' one tries to enforce $g_L$ to be sufficiently close to $e$. \citet{Zach2010} uses the cycle-consistency constraint to detect corrupted relative rotations in $SO(3)$. It seeks to maximize a log-likelihood function, which is based on the cycle-consistency constraint, using either belief propagation or convex relaxation. However, no theoretical guarantees are provided for the accuracy of outlier detection. Moreover, the log-likelihood function implies very strong assumptions on the joint densities of the given relative rotations. \citet{shen2016} classify the relative rotations as uncorrupted if they belong to any cycle that approximately satisfies the cycle-consistency constraint. However, this work only exploits local information and cannot handle the adversarial corruption case, where corrupted cycles can be approximately consistent.

An iterative reweighting strategy, IR-AAB \cite{AAB}, was proposed to detect and remove corrupted pairwise directions for the different problem of camera location estimation. It utilizes another notion of cycle consistency to infer the corruption level of each edge. \citet{cemp} extend the latter idea, and interpret it as a message passing procedure, to solve group synchronization with any compact group. They refer to their new procedure as cycle-edge message passing (CEMP).
While we follow ideas of \citet{cemp,AAB}, we directly solve for the group elements, instead of estimating corruption levels, using these estimates for an initial cleaning of edges, and then solving the cleaned problem with another method.

To the best of our knowledge, the only unified frameworks for group synchronization are those of \citet{ICMLphase,cemp,AMP_compact}. However, \citet{ICMLphase} and \citet{AMP_compact} assume special probabilistic models that do not address adversarial corruption. Furthermore, \citet{ICMLphase} only applies to Lie groups and the different setting of multi-frequencies.

\subsection{Contribution of This Work}
Current group synchronization solvers often do not perform well with highly corrupted and noisy group ratios. In order to address this issue, we propose a rotation synchronization solver that can address in practice high levels and different kinds of corruption and noise. Our main ideas seem to generalize to group synchronization with any compact group, but more careful developments and testing are needed for other groups. We emphasize the following specific contributions of this work:
\begin{itemize}
\item We propose the message passing least squares (MPLS) framework as an alternative paradigm to IRLS for group synchronization, and in particular, rotation synchronization. It uses the theoretically guaranteed CEMP algorithm for estimating the underlying corruption levels. These estimates are then used for learning better weights for the weighted least squares problem.
\item We explain in Section \ref{sec:issue} why the common IRLS solver may not be accurate enough and in Section \ref{sec:mpls} why MPLS can overcome these obstacles.
\item While MPLS can be formally applied to any compact group, we refine and test it for the group $\mathcal G = SO(3)$. We demonstrate state-of-the-art results for rotation synchronization with both synthetic data having nontrivial scenarios and real SfM data.
\end{itemize}

\section{Setting for Robust Group Synchronization}\label{sec:adversarial}
Some previous robustness theories for group synchronization typically assume a very special, and often unrealistic, probabilistic model of corruption for very special groups \cite{deepti,wang2013exact}. In general, simplistic probabilistic models for corruption, such as generating potentially corrupted group ratios according to the Haar measure on $\mathcal G$ \cite{wang2013exact}, may not generate some nontrivial scenarios that often occur in practice. For example, in the application of rotation synchronization that arises in SfM, the corrupted camera relative rotations can be self-consistent due to ambiguous scene structures \citep{1dsfm14}. However, in common probabilistic models, such as the one in \citet{wang2013exact}, cycles with corrupted edges are self-consistent with probability zero. A more realistic model is the adversarial corruption model for the different problem of camera location estimation \cite{LUDrecovery, HandLV15}. However, it also assumes very special probabilistic models for the graph and camera locations, which are not realistic. A more general model of adversarial corruption with noise is due to \citet{cemp}, and we review it here.

We assume a graph $G([n],E)$ and a compact group $\mathcal G$ with a bi-invariant metric $d$, that is, for any $g_1$, $g_2$, $g_3\in \mathcal G$, $d(g_1,g_2)=d(g_3g_1,g_3g_2)=d(g_1g_3,g_2g_3)$.
For $\mathcal G = SO(3)$, or any Lie group, $d$ is commonly chosen to be the geodesic distance. Since $\mathcal G$ is compact, we can scale $d$ and assume that $d(\cdot,\cdot)\leq 1$.

We partition $E$ into $E_g$ and $E_b$, which represent the sets of good (uncorrupted) and bad (corrupted) edges, respectively. We will need a topological assumption on $E_b$, or equivalently, $E_g$. A necessary assumption is that $G([n],E_g)$ is connected, though further restrictions on $E_b$ may be needed for establishing theoretical guarantees \cite{cemp}.

In the noiseless case, the adversarial corruption model generates group ratios in the following way:
\begin{align}\label{eq:model}
g_{ij}=\begin{cases}
g^*_{ij}:=g_i^*g_j^{*-1}, & ij \in E_g;\\
\tilde g_{ij} \neq g^*_{ij}, & ij \in E_b.
\end{cases}
\end{align}
That is, for edges $ij\in E_b$, the corrupted group ratio $\tilde g_{ij}\neq g_{ij}^*$ can be arbitrarily chosen from $\mathcal G$. The corruption is called adversarial since one can maliciously corrupt the group ratios for $ij\in E_b$ and also maliciously choose $E_b$, as long as the needed assumptions on $E_b$ are satisfied. One can even form cycle-consistent corrupted edges, so that they can be confused with the good edges.

In the noisy case, we assume a noise model for $d(g_{ij},g_{ij}^*)$, where $ij\in E_g$. In theory, one may need to restrict this model \cite{cemp}, but in practice we test highly noisy scenarios.

For $ij\in E$ we define the corruption level of $ij$ as
\[s_{ij}^* = d(g_{ij},g_{ij}^*).\]
We use ideas of \citet{cemp} to estimate $\{s_{ij}^*\}_{ij \in E}$, but then we propose new ideas to estimate $\{g_{i}^*\}_{i \in [n]}$. While exact estimation of both quantities is equivalent in the noiseless case \cite{cemp}, this equivalence is no longer valid in the presence of noise.

\section{Issues with the Common IRLS}\label{sec:issue}
We first review least squares minimization, least absolute and unsquared deviations minimization, and IRLS for group synchronization. We then explain why IRLS may not form a good solution for the group synchronization problem, in particular for Lie groups, such as the rotation group.

The least squares minimization can be formulated as follows:
\begin{align}
\label{eq:l2}
 \min_{\{g_i\}_{i=1}^n \subseteq \mathcal G}\sum_{ij\in E}d^2(g_{ij},g_ig_j^{-1}),
\end{align}
where one often relaxes this formulation. This formulation is generally sensitive to outliers, and thus more robust energy functions are commonly used when considering corrupted group ratios. More specifically, one may choose a special function $\rho(x) \neq x^2$ and solve the following least unsquared deviations formulation:
\begin{align}\label{eq:lp}
 \min_{\{g_i\}_{i=1}^n \subseteq \mathcal G}\sum_{ij\in E}\rho\left(d(g_{ij},g_ig_j^{-1})\right).
\end{align}
The special case of $\rho(x)=x$ \cite{HandLV15,HartleyAT11_rotation,ozyesil2015robust, wang2013exact} is referred to as least absolute deviations. Some other common choices are $\rho(x)=x^2/(x^2+\sigma^2)$ \cite{ChatterjeeG13_rotation} and $\rho(x)=\sqrt x$ \cite{L12}.

The least unsquared deviations formulation is typically solved using IRLS, where at iteration $t$ one solves the weighted least squares problem:
\begin{align} \label{eq:irls}
 \{g_{i,t}\}_{i\in [n]}&= \operatorname*{arg\,min}_{\{g_i\}_{i=1}^n \subseteq \mathcal G}\sum_{ij\in E} w_{ij,t-1}d^2(g_{ij},g_ig_j^{-1}).
\end{align}
In the first iteration the weights can be initialized in a certain way, but in the next iterations the weights are updated using the residuals of this solution. Specifically, for $ij \in E$ and iteration $t$, the residual is $r_{ij,t}=d(g_{ij},g_{i,t}g_{j,t}^{-1})$ and the weight $w_{ij,t}$ is
\begin{align}
w_{ij,t}&=
F(r_{ij,t}),
\label{eq:weight_irls}
\end{align}
where the function $F$ depends on the choice of $\rho$. For $\rho(x)=x^p$, where $0<p<2$, one takes $F(x)=x^{p-2}$.
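
As an illustration, the reweighting functions of the two IRLS solvers compared later in Section~\ref{sec:numerics} can be sketched as follows (the small floor avoiding division by zero is an assumed safeguard):

\begin{verbatim}
import numpy as np

def F_lp(r, p=0.5, eps=1e-8):
    # F(x) = x^(p-2) for rho(x) = x^p; p = 1/2 gives the x^(-3/2)
    # weights of IRLS-l1/2
    return np.maximum(r, eps) ** (p - 2.0)

def F_gm(r, sigma=5.0):
    # F(x) = sigma^2/(x^2 + sigma^2)^2 for rho(x) = x^2/(x^2 + sigma^2);
    # sigma = 5 recovers the weights F(x) = 25/(x^2 + 25)^2 of IRLS-GM
    return sigma**2 / (r**2 + sigma**2) ** 2
\end{verbatim}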
\section{Message Passing Least Squares}
\label{sec:mpls}
In MPLS, we instead use the truncated reweighting function $F_{\tau}(x)=F(x)\mathbf{1}_{\{x\leq \tau\}}+10^{-8}\mathbf{1}_{\{x> \tau\}}$ with a parameter $\tau>0$. We decrease $\tau$ as the iteration number increases in order to avoid overweighing outliers. By doing this we aim to address the third drawback of IRLS mentioned in Section \ref{sec:issue}. We remark that the truncated function is $F(x)\mathbf{1}_{\{x\leq \tau\}}$ and the additional term $10^{-8}\mathbf{1}_{\{x> \tau\}}$ is needed to ensure that the graph with weights resulting from $F_{\tau}$ is connected.

\begin{algorithm}[h]
\caption{Message Passing Least Squares (MPLS)}\label{alg:MPLS}
\begin{algorithmic}
\REQUIRE $\{g_{ij}\}_{ij\in E}$, $\{d_{ij,k}\}_{k\in C_{ij}}$, nonincreasing $\{\tau_t\}_{t\geq 0}$, increasing $\{\beta_t\}_{t=0}^T$, decreasing $\{\alpha_t\}_{t\geq 1}$
\STATE \textbf{Steps:}
\STATE Compute $\{s_{ij,T}\}_{ij\in E}$ by CEMP
\STATE $w_{ij,0}=F_{\tau_0}(s_{ij,T})$
\hspace*{\fill} $ij\in E$
\STATE $t=0$
\WHILE {not convergent}
\STATE $t=t+1$
\STATE $\{g_{i,t}\}_{i\in [n]}=\operatorname*{arg\,min}\limits_{g_i\in\mathcal G}\sum\limits_{ij\in E} w_{ij,t-1}d^2(g_{ij},g_ig_j^{-1})$
\STATE $r_{ij,t}=d(g_{ij}, g_{i,t}g_{j,t}^{-1})$
\hspace*{\fill} $ij\in E$
\STATE $q_{ij,k}^{t} = \exp(-\beta_T(r_{ik,t}+r_{jk,t}))$
\hspace*{\fill} $k\in C_{ij},\, ij\in E$
\STATE $h_{ij,t}
 =\frac{\sum_{k\in C_{ij}}q_{ij,k}^t d_{ij,k}}{\sum_{k\in C_{ij}}q_{ij,k}^t}$
\hspace*{\fill} $ij\in E$
\STATE $w_{ij,t}=F_{\tau_t}(\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t})$
\hspace*{\fill} $ij\in E$
\ENDWHILE
\ENSURE $\left\{g_{i,t}\right\}_{i\in [n]}$
\end{algorithmic}
\end{algorithm}
The initial step of the algorithm estimates the corruption levels $\{s_{ij}^*\}_{ij \in E}$ by CEMP. The initial weights for the IRLS procedure follow \eqref{eq:weight_irls} with additional truncation. At each iteration, the group elements $\{g_{i,t}\}_{i\in [n]}$ are estimated from the weighted least squares procedure in \eqref{eq:irls}. However, the weights $w_{ij,t}$ are updated in a very different way. First of all, for each $ij \in E$ the corruption level $s_{ij}^*$ is re-estimated in two different ways, and a convex combination of the two estimates is taken. The first estimate is the residual $r_{ij,t}$, computed with the newly updated estimates $\{g_{i,t}\}_{i\in [n]}$. This is the error of approximating the given measurement $g_{ij}$ by the newly estimated group ratio. The other estimate practically applies CEMP to re-estimate the corruption levels. For edge $ij \in E$, the latter estimate of $s_{ij}^*$ is denoted by $h_{ij,t}$. For interpretation, we can replace \eqref{eq:pr_f} with $\Pr(s_{ij}^*|r_{ij,t})=\exp(-\beta_T x)$ and use it to derive analogs of \eqref{eq:pijk} and \eqref{eq:sijt}. Unlike CEMP, we use the single parameter $\beta_T$, as we assume that CEMP provides a sufficiently good initialization. Lastly, a weight similar to \eqref{eq:weight_irls}, but truncated, is applied to the combined estimate $\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t}$.

We remark that utilizing the estimate $h_{ij,t}$ for the corruption level addresses the first drawback of IRLS discussed in Section \ref{sec:issue}. Indeed, assume the case where $ij\in E_b$ and $r_{ij,t}$ is close to 0. Here, $w_{ij,t}$ computed by IRLS is relatively large; however, since $ij\in E_b$, $w_{ij,t}$ needs to be small.
Unlike $r_{ij,t}$ in IRLS, we expect that $h_{ij,t}$ in MPLS should not be too small, as long as $d_{ij,k}$ is sufficiently large for some $k\in C_{ij}$. This happens as long as there exists some $k\in C_{ij}$ for which the cycle $ijk$ is good. Indeed, in this case $s_{ij}^*$ is sufficiently large, and for good cycles $d_{ij,k}=s_{ij}^*$.

We further remark that $h_{ij,t}$ is a good approximation of $s_{ij}^*$ under certain conditions. For example, if for all $k\in C_{ij}$, $r_{ik,t}\approx s_{ik}^*$ and $r_{jk,t}\approx s_{jk}^*$, then plugging the definition of $q_{ij,k}^t$ into the expression for $h_{ij,t}$, using the fact that $\beta_T$ is sufficiently large, and at last applying Proposition \ref{prop:good cycle}, we obtain that
\begin{align}
 h_{ij,t}
 &= \sum_{k\in C_{ij}}\frac{\exp(-\beta_T(r_{ik,t}+r_{jk,t}))}{\sum_{k\in C_{ij}}\exp(-\beta_T(r_{ik,t}+r_{jk,t}))}d_{ij,k} \nonumber\\
 &\approx \sum_{k\in C_{ij}}\frac{\exp(-\beta_T(s_{ik}^*+s_{jk}^*)) }{\sum_{k\in C_{ij}}\exp(-\beta_T(s_{ik}^*+s_{jk}^*))} d_{ij,k} \\
 &\approx \sum_{k\in C_{ij}}\frac{\mathbf{1}_{\{ijk \text{ is a good cycle}\}}}{\sum_{k\in C_{ij}}\mathbf{1}_{\{ijk \text{ is a good cycle}\}}} d_{ij,k}=s_{ij}^*.\nonumber
\end{align}
This intuitive argument for a restricted case conveys the idea that ``local good information'' can be used to estimate $s_{ij}^*$. The theory of CEMP \cite{cemp} shows that under weaker conditions such information can propagate through the whole graph within a few iterations, but we cannot extend it to MPLS.

If the graph $G([n], E)$ is dense with sufficiently many good cycles, then we expect that this good information can propagate in few iterations and that $h_{ij,t}$ will have a significant advantage over $r_{ij,t}$. However, in real scenarios of rotation synchronization in SfM, one may encounter sparse graphs, which may not have enough cycles and, in particular, not enough good cycles. In this case, utilizing $h_{ij,t}$ is mainly useful in the early iterations of the algorithm. On the other hand, when $\{g_{i,t}\}_{i \in [n]}$ are close to $\{g_i^*\}_{i \in [n]}$, the residuals $\{r_{ij,t}\}_{ij \in E}$ will be sufficiently close to $\{s_{ij}^*\}_{ij \in E}$. Aiming to address rotation synchronization, we decrease $\alpha_t$, the weight of $h_{ij,t}$, with $t$. In other applications, different choices of $\alpha_t$ can be used \cite{robust_multi_object2020}.

The second drawback of IRLS, discussed in Section \ref{sec:issue}, is the possible difficulty of implementing the weighted least squares step of \eqref{eq:irls}. This issue is application-dependent, and since in this work we focus on rotation synchronization (equivalently, $SO(3)$ synchronization), we show in the next subsection how MPLS can deal with the above issue in this specific problem. Nevertheless, we claim that our framework can also be applied to other compact group synchronization problems, and we demonstrate this claim in a follow-up work \cite{robust_multi_object2020}.

\subsection{MPLS for $SO(3)$ synchronization}
\label{sec:so3}
Rotation synchronization, or $SO(3)$ synchronization, aims to recover the 3D rotations $\{\boldsymbol{R}_i^*\}_{i\in [n]} \subseteq SO(3)$ from measurements $\{\boldsymbol{R}_{ij}\}_{ij\in E} \subseteq SO(3)$ of the 3D relative rotations $\{\boldsymbol{R}_i^*\boldsymbol{R}_j^{*-1}\}_{ij\in E} \subseteq SO(3)$.
Throughout the rest of the paper, we use the following normalized geodesic distance for $\boldsymbol{R}_1,\boldsymbol{R}_2 \in SO(3)$:
\begin{equation}
\label{eq:geo_distance}
 d(\boldsymbol{R}_1,\boldsymbol{R}_2)=\|\log(\boldsymbol{R}_1\boldsymbol{R}_2^{-1})\|_F/(\sqrt2\pi),
\end{equation}
where $\log$ is the matrix logarithm and the normalization factor ensures that the diameter of $SO(3)$ is $1$. We provide some relevant preliminaries of the Riemannian geometry of $SO(3)$ in Section \ref{sec:prelim_riem} and then describe the implementation of MPLS for $SO(3)$, which we refer to as MPLS-$SO(3)$, in Section \ref{sec:mpls_so3_details}.

\subsubsection{Preliminaries: $SO(3)$ and $\mathfrak{so}(3)$} \label{sec:prelim_riem}
We note that $SO(3)$ is a Lie group, and its corresponding Lie algebra, $\mathfrak{so}(3)$, is the space of all skew-symmetric matrices, which is isomorphic to $\mathbb R^3$. For each $\boldsymbol{R}\in SO(3)$, its corresponding element in $\mathfrak{so}(3)$ is $\boldsymbol\Omega=\log(\boldsymbol{R})$, where $\log$ denotes the matrix logarithm. Each $\boldsymbol\Omega\in \mathfrak{so}(3)$ can be represented as $[\boldsymbol{\omega}]_{\times}$ for some $\boldsymbol\omega =(\omega_1, \omega_2, \omega_3)^T \in \mathbb R^3$ in the following way:
\begin{align*}
 [\boldsymbol{\omega}]_{\times} :=
 \left(\begin{array}{ccc}
 0 & - \omega_3 & \omega_2 \\
 \omega_3 & 0& -\omega_1 \\
 -\omega_2 & \omega_1 & 0
 \end{array}\right).
\end{align*}
In other words, we can map any $\boldsymbol\omega\in \mathbb R^3$ to $\boldsymbol\Omega=[\boldsymbol{\omega}]_{\times} \in \mathfrak{so}(3)$ and $\boldsymbol{R}=\exp([\boldsymbol{\omega}]_{\times}) \in SO(3)$, where $\exp$ denotes the matrix exponential function. We remark that geometrically $\boldsymbol\omega$ is the tangent vector at $\boldsymbol I$ of the geodesic path from $\boldsymbol I$ to $\boldsymbol{R}$.

\subsubsection{Details of MPLS-$SO(3)$}
\label{sec:mpls_so3_details}
We note that in order to adapt MPLS to the group $SO(3)$, we only need a specific algorithm to solve the following formulation of the weighted least squares problem at iteration $t$:
\begin{align}
 \nonumber &\min\limits_{\boldsymbol{R}_{i,t}\in SO(3)}\sum\limits_{ij\in E} w_{ij,t}d^2(\boldsymbol{R}_{ij},\boldsymbol{R}_{i,t}\boldsymbol{R}_{j,t}^{-1})\\
 =& \min\limits_{\boldsymbol{R}_{i,t}\in SO(3)}\sum\limits_{ij\in E} w_{ij,t}d^2(\boldsymbol{I}, \boldsymbol{R}_{i,t}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t}),\label{eq:wlsSO3}
\end{align}
where the last equality follows from the bi-invariance of $d$. The constraints on the orthogonality and determinant of $\boldsymbol{R}_i$ are non-convex. If one relaxes those constraints, with an appropriate choice of the metric $d$, then the solution of the least squares problem in the relaxed Euclidean space often lies away from the embedding of $SO(3)$ into that space. For this reason, we follow the common choice of $d$ according to \eqref{eq:geo_distance} and implement the Lie-algebraic Averaging (LAA) procedure \cite{Govindu04_Lie, ChatterjeeG13_rotation, L12, consensusSO3}. We review LAA, explain why it may be problematic, and why our overall implementation may overcome its problems.
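
As a quick aside, the maps of Section~\ref{sec:prelim_riem} and the distance \eqref{eq:geo_distance} are straightforward to compute numerically. A minimal NumPy sketch follows; it uses the identity $\|\log(\boldsymbol R)\|_F=\sqrt{2}\,\theta$ for a rotation by angle $\theta$, so that $d$ reduces to $\theta/\pi$.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def hat(w):
    # [w]_x: map w in R^3 to a skew-symmetric matrix in so(3)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def geodesic_distance(R1, R2):
    # normalized geodesic distance; for a rotation by angle theta,
    # ||log(R)||_F = sqrt(2) * theta, hence d = theta / pi in [0, 1]
    c = np.clip((np.trace(R1 @ R2.T) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c) / np.pi

R = expm(hat(np.array([0.0, 0.0, np.pi / 2])))  # rotation by pi/2 about z
print(geodesic_distance(R, np.eye(3)))          # prints 0.5
\end{verbatim}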
LAA aims to move from $\boldsymbol{R}_{i,t-1}$ to $\boldsymbol{R}_{i,t}$ along the manifold using the right group action $\boldsymbol{R}_{i,t}=\boldsymbol{R}_{i,t-1}\Delta \boldsymbol{R}_{i,t}$, where $\Delta \boldsymbol{R}_{i,t}\in SO(3)$. For this purpose, it defines $\Delta\boldsymbol{R}_{ij,t}=\boldsymbol{R}_{i,t-1}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t-1}$, so that
\begin{multline*}
(\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t} \Delta \boldsymbol{R}_{j,t} = \\(\Delta \boldsymbol{R}_{i,t})^{-1}\boldsymbol{R}_{i,t-1}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t-1} \Delta \boldsymbol{R}_{j,t}=\boldsymbol{R}_{i,t}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t},
\end{multline*}
and \eqref{eq:wlsSO3} can be transformed into the following problem, which is still hard to solve:
\begin{equation}\label{eq:deltaR}
 \min\limits_{\Delta \boldsymbol{R}_{i,t}\in SO(3)}\sum\limits_{ij\in E} w_{ij,t}d^2(\boldsymbol I,(\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t} \Delta \boldsymbol{R}_{j,t}).
\end{equation}
LAA then maps $\{\Delta \boldsymbol{R}_{i,t}\}_{i\in [n]}$ and $\{\Delta \boldsymbol{R}_{ij,t}\}_{ij\in E}$ to the tangent space of $\boldsymbol{I}$ by $\Delta \boldsymbol\Omega_{i,t}=\log \Delta \boldsymbol{R}_{i,t}$ and $\Delta \boldsymbol\Omega_{ij,t}=\log \Delta \boldsymbol{R}_{ij,t}$. Applying \eqref{eq:geo_distance}, the fact that the Riemannian logarithmic map, represented by $\log$, preserves the geodesic distance, and the ``naive approximation'' $\log(\boldsymbol A\boldsymbol B)\approx\log \boldsymbol A+\log \boldsymbol B$, LAA uses the following approximation:
\begin{multline}
\label{eq:app_laa}
 d(\boldsymbol{I},(\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t}\Delta \boldsymbol{R}_{j,t})
 = \\
 \|\log((\Delta \boldsymbol{R}_{i,t})^{-1}\Delta \boldsymbol{R}_{ij,t} \Delta \boldsymbol{R}_{j,t})\|_F/(\sqrt2\pi)
 \approx \\
\|-\log(\Delta \boldsymbol{R}_{i,t})+\log(
\Delta \boldsymbol{R}_{ij,t})+\log( \Delta \boldsymbol{R}_{j,t})\|_F/(\sqrt2\pi) = \\
\|\Delta \boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|_F/(\sqrt2\pi).
\end{multline}
Consequently, LAA transforms \eqref{eq:deltaR} as follows:
\begin{equation}\label{eq:delta_omega}
 \min\limits_{\Delta \boldsymbol\Omega_{i,t}\in\mathfrak{so}(3)}\sum\limits_{ij\in E} w_{ij,t}\|\Delta \boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|^2_F.
\end{equation}
However, the approximation in \eqref{eq:app_laa} is only valid when $\Delta \boldsymbol{R}_{ij,t}$, $\Delta \boldsymbol{R}_{i,t}$, $\Delta \boldsymbol{R}_{j,t}$ $\approx \boldsymbol{I}$, which is unrealistic.
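
Setting this validity issue aside, the tangent-space problem \eqref{eq:delta_omega} itself is a standard linear least squares problem. A minimal dense sketch follows; it represents each $\Delta\boldsymbol\Omega_{i,t}$ by its vector $\boldsymbol\omega_i\in\mathbb R^3$ and fixes $\boldsymbol\omega_0=\boldsymbol 0$ to remove the global degree of freedom (a sparse solver would be preferred at scale).

\begin{verbatim}
import numpy as np

def laa_step(edges, w, omega_ij, n):
    # minimize sum_ij w_ij ||omega_i - omega_j - omega_ij||^2 over
    # omega_1, ..., omega_{n-1}, with omega_0 = 0 fixed as the gauge
    A = np.zeros((len(edges), n))
    B = np.zeros((len(edges), 3))
    for e, (i, j) in enumerate(edges):
        s = np.sqrt(w[(i, j)])
        A[e, i], A[e, j] = s, -s
        B[e] = s * omega_ij[(i, j)]
    X = np.linalg.lstsq(A[:, 1:], B, rcond=None)[0]
    return np.vstack([np.zeros(3), X])  # omega_i for i = 0, ..., n-1
\end{verbatim}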
One can check that the conditions $\boldsymbol{R}_{ij}\approx \boldsymbol{R}_i^*\boldsymbol{R}_j^{*-1}$ (that is, $s_{ij}^*\approx 0$), $\boldsymbol{R}_{i,t} \approx \boldsymbol{R}_i^*$ and $\boldsymbol{R}_{j,t} \approx \boldsymbol{R}_j^*$ for $t \geq 0$ imply that $\Delta \boldsymbol{R}_{ij,t}$, $\Delta \boldsymbol{R}_{i,t}$, $\Delta \boldsymbol{R}_{j,t}$ $\approx \boldsymbol{I}$, and thus imply \eqref{eq:app_laa}. Therefore, to make LAA work, we need to give large weights to edges $ij$ with small $s_{ij}^*$ and provide a good initialization $\{\boldsymbol{R}_{i,0}\}_{i\in [n]}$ that is reasonably close to $\{\boldsymbol{R}_{i}^*\}_{i\in [n]}$, so that $\{\boldsymbol{R}_{i,t}\}_{i\in [n]}$ for all $t\geq 1$ remain close to the ground truth. Our heuristic argument is that good approximation by CEMP, followed by MPLS, addresses these requirements. Indeed, to address the first requirement, we note that good initialization by CEMP can result in $s_{ij,T} \approx s_{ij}^*$, and by the nature of $F$, $w_{ij,0}$ is large when $s_{ij,T}$ is small. As for the second requirement, we assign the weights $s_{ij,T}$, obtained by CEMP, to each $ij\in E$ and find the minimum spanning tree (MST) of the weighted graph by Prim's algorithm. We initialize the rotations by fixing $\boldsymbol{R}_{1,0}=\boldsymbol{I}$ and multiplying relative rotations along the computed MST, consequently obtaining $\boldsymbol{R}_{i,0}$ for every node $i$. We summarize our MPLS version of rotation averaging in Algorithm~\ref{alg:SO3}.
\begin{algorithm}[h]
\caption{MPLS-$SO(3)$}\label{alg:SO3}
\begin{algorithmic}
\REQUIRE $\{\boldsymbol{R}_{ij}\}_{ij\in E}$, $\{d_{ij,k}\}_{k\in C_{ij}}$, $\{\tau_t\}_{t\geq 0}$, $\{\beta_t\}_{t=0}^T$, $\{\alpha_t\}_{t\geq 1}$
\STATE \textbf{Steps:}
\STATE Compute $\{s_{ij,T}\}_{ij\in E}$ by CEMP
\STATE Form an $n \times n$ weight matrix $\boldsymbol{W}$, where $W_{ij}=W_{ji}= s_{ij,T}$ for $ij\in E$, and $W_{ij}=W_{ji}=0$ otherwise
\STATE $G([n],E_{ST})=$ minimum spanning tree of $G([n],W)$
\STATE $\boldsymbol{R}_{1,0}=\boldsymbol{I}$
\STATE Find $\{\boldsymbol{R}_{i,0}\}_{i>1}$ by $\boldsymbol{R}_i=\boldsymbol{R}_{ij}\boldsymbol{R}_j$ for $ij\in E_{ST}$
\STATE $t=0$
\STATE $w_{ij,0}=F_{\tau_0}(s_{ij,T})$
\WHILE {not convergent}
\STATE $t=t+1$
\STATE $\Delta \boldsymbol\Omega_{ij,t}=\log(\boldsymbol{R}_{i,t-1}^{-1}\boldsymbol{R}_{ij}\boldsymbol{R}_{j,t-1})$ \hspace*{\fill} $ij\in E$
\STATE $\{\Delta \boldsymbol\Omega_{i,t}\}_{i\in [n]}=$
\STATE \quad \quad $\operatorname*{arg\,min}\limits_{\Delta \boldsymbol\Omega_{i,t}\in\mathfrak{so}(3)}\sum\limits_{ij\in E} w_{ij,t}\|\Delta \boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|^2_F$
\STATE $\boldsymbol{R}_{i,t}=\boldsymbol{R}_{i,t-1}\exp(\Delta \boldsymbol\Omega_{i,t})$ \hspace*{\fill} $i\in [n]$
\STATE $r_{ij,t}=\|\Delta \boldsymbol\Omega_{i,t}-\Delta \boldsymbol\Omega_{j,t}-\Delta \boldsymbol\Omega_{ij,t}\|_F/(\sqrt2\pi)$ \hspace*{\fill} $ij\in E$
\STATE $q_{ij,k}^{t} =\exp(-\beta_T(r_{ik,t}+r_{jk,t}))$
\hspace*{\fill} $k\in C_{ij},\, ij\in E$
\STATE $h_{ij,t}
 =\frac{\sum_{k\in C_{ij}}q_{ij,k}^t d_{ij,k}}{\sum_{k\in C_{ij}}q_{ij,k}^t}$ \hspace*{\fill} $ij\in E$
\STATE $w_{ij,t}=F_{\tau_t}(\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t})$
\hspace*{\fill} $ij\in E$
\ENDWHILE
\ENSURE $\left\{\boldsymbol{R}_{i,t}\right\}_{i\in [n]}$
\end{algorithmic}
\end{algorithm}
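
The per-edge updates at the end of each iteration of Algorithm~\ref{alg:SO3} (computing $q_{ij,k}^{t}$, $h_{ij,t}$ and $w_{ij,t}$) reduce to a few array operations. A sketch follows, with edges stored as index pairs; the small floor guarding $s^{-3/2}$ near zero is an assumed safeguard.

\begin{verbatim}
import numpy as np

def mpls_weights(r, d_cycles, C, alpha_t, beta_T, tau_t):
    # r[(i, j)]: residual r_{ij,t}; C[(i, j)]: sampled 3-cycles of ij;
    # d_cycles[(i, j)][k]: the cycle inconsistency d_{ij,k}
    res = lambda a, b: r[(a, b)] if (a, b) in r else r[(b, a)]
    w = {}
    for (i, j) in r:
        q = np.array([np.exp(-beta_T * (res(i, k) + res(j, k)))
                      for k in C[(i, j)]])
        d = np.array([d_cycles[(i, j)][k] for k in C[(i, j)]])
        h = q.dot(d) / q.sum()                         # h_{ij,t}
        s = alpha_t * h + (1 - alpha_t) * r[(i, j)]    # combined estimate
        # truncated reweighting F_tau with F(x) = x^(-3/2)
        w[(i, j)] = max(s, 1e-8) ** -1.5 if s <= tau_t else 1e-8
    return w
\end{verbatim}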

\subsection{Computational Complexity}
\label{sec:complexity}
CEMP requires the computation of $d_{ij,k}$ for $ij \in E$ and $k\in C_{ij}$. Its computational complexity per iteration is thus of order $O(|E|)$, as we use $|C_{ij}|=50$ for all $ij \in E$. Since we advocate few iterations ($T=5$) of CEMP, or due to its fast convergence under special settings \cite{cemp}, we can assume that its total complexity is $O(|E|)$. The computational complexity of MPLS depends on the complexity of solving the weighted least squares problem, which depends on the group. For MPLS-$SO(3)$, the most expensive part is solving the weighted least squares problem in the tangent space, whose complexity is at most $O(n^3)$. This is thus also the complexity of MPLS-$SO(3)$ per iteration. Unlike CEMP, we have no convergence guarantees yet for MPLS.

\section{Numerical Experiments}
\label{sec:numerics}
We test the proposed MPLS algorithm on rotation synchronization, while comparing with state-of-the-art methods. We also try simpler ideas than MPLS that are based on the basic strategy of CEMP. All computational tasks were implemented on a machine with 2.5GHz Intel i5 quad-core processors and 8GB memory.

\subsection{Implementation}
\label{sec:implementation}
We use the following default parameters for Algorithm \ref{alg:cemp}: $|C_{ij}|=50$ for $ij\in E$; $T=5$; $\beta_t=2^{t}$ for $t=0, \ldots, 5$. If an edge is not contained in any 3-cycle, we set its corruption level to 1. For MPLS-$SO(3)$, which we refer to in this section as MPLS, we use the above parameters of Algorithm \ref{alg:cemp} and the following ones for $t\geq 1$:
 $$\alpha_t=1/(t+1) \ \text{ and } \ \tau_{t}=\inf\left\{x : \hat P_t(x)> \max\{1-0.05t\,,0.8\}\right\}.$$
Here, $\hat P_t$ denotes the empirical distribution of $\{\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t}\}_{ij\in E}$. That is, for $t=0$, 1, 2, 3, we ignore the $0\%$, $5\%$, $10\%$, $15\%$ of edges that have the highest $\alpha_t h_{ij,t}+(1-\alpha_t)r_{ij,t}$, and for $t \geq 4$ we ignore $20\%$ of such edges. $F(x)$ for MPLS is chosen as $x^{-3/2}$, which corresponds to $\rho(x)=\sqrt x$. For simplicity and consistency, we use these choices of parameters for all of our experiments. We remark that our choice of $\beta_t$ in Algorithm \ref{alg:cemp} is supported by the theory of \citet{cemp}. We found that MPLS is not very sensitive to its parameters. One can choose other values of $\{\beta_t\}_{t\geq 0}$, for example any geometric sequence with ratio 2 or less, and stop after several iterations. Similarly, one may replace 0.8 and 0.05 in the definition of $\tau_t$ with values in the ranges 0.7--0.9 and 0.01--0.1, respectively, and obtain similar performance on average.
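
In other words, $\tau_t$ is an empirical quantile of the combined corruption estimates. A one-line sketch of this rule:

\begin{verbatim}
import numpy as np

def compute_tau(s_combined, t):
    # tau_t = inf{x : P_t(x) > max(1 - 0.05 t, 0.8)}, i.e. the empirical
    # quantile discarding the largest 5t% of estimates (capped at 20%);
    # s_combined collects alpha_t*h + (1 - alpha_t)*r over all edges
    return np.quantile(s_combined, max(1.0 - 0.05 * t, 0.8))
\end{verbatim}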
\n\nWe test two previous state-of-the-art IRLS methods: IRLS-GM \\cite{ChatterjeeG13_rotation} with $\\rho(x) = x^2\/(x^2+25)$ and $F(x)=25\/(x^2+25)^2$, and IRLS-$\\ell_{1\/2}$ \\cite{L12} with $\\rho(x)=\\sqrt x$ and $F(x)=x^{-3\/2}$. We use the implementation of \\citet{L12} for both. \n\nWe have also separately implemented the initialization of the rotations of MPLS in Algorithm~\\ref{alg:SO3} and refer to it as CEMP+MST. Recall that it solves for the rotations by direct propagation along the minimum weighted spanning tree of the graph, with weights obtained by Algorithm \\ref{alg:cemp} (CEMP). We also test the application of this initialization to the main algorithms of \\citet{ChatterjeeG13_rotation} and \\citet{L12}, and refer to the resulting methods as CEMP+IRLS-GM and CEMP+IRLS-$\\ell_{1\/2}$, respectively. We remark that the original algorithms are initialized by a careful least absolute deviations minimization.\nWe use the convergence criterion\n$\\sum_{i\\in[n]}\\|\\Delta \\boldsymbol\\Omega_{i,t}\\|_F\/(\\sqrt2n)< 0.001$\nof \\citet{L12} for all the above algorithms.\n\nBecause the solution is determined up to a right group action, we align our estimated rotations $\\{\\hat{\\boldsymbol{R}}_i\\}$ with the ground truth ones $\\{\\boldsymbol{R}_i^*\\}$. That is, we find a rotation matrix $\\boldsymbol{R}_\\text{align}$ that minimizes $\\sum_{i\\in [n]}\\|\\hat{\\boldsymbol{R}}_i\\boldsymbol{R}_\\text{align}-\\boldsymbol{R}_i^*\\|_F^2$. For synthetic data, we report the following mean estimation error in degrees: $180\\cdot\\sum_{i\\in [n]} d(\\hat{\\boldsymbol{R}}_i\\boldsymbol{R}_\\text{align}\\,, \\boldsymbol{R}_i^*)\/n$. For real data, we also report the median of $\\{180\\cdot d(\\hat{\\boldsymbol{R}}_i\\boldsymbol{R}_\\text{align}\\,, \\boldsymbol{R}_i^*)\\}_{i\\in [n]}$.\n\n\\subsection{Synthetic Settings}\nWe test the methods in the following two types of artificial scenarios. In both scenarios, the graph is generated by the Erd\\H{o}s-R\\'{e}nyi model $G(n,p)$ with $n=200$ and $p=0.5$.\n\\subsubsection{Uniform Corruption}\nWe consider the following random model for generating $\\boldsymbol{R}_{ij}$:\n\\begin{equation}\n \\boldsymbol{R}_{ij}=\\begin{cases}\n \\text{Proj}(\\boldsymbol{R}_{ij}^*+\\sigma \\boldsymbol{W}_{ij}),&\\text{w.p. } 1-q;\\\\\n \\tilde{\\boldsymbol{R}}_{ij}\\sim \\text{Haar}(SO(3)),& \\text{w.p. } q,\n \\end{cases}\n\\end{equation}\nwhere Proj denotes the projection onto $SO(3)$; $\\boldsymbol{W}_{ij}$ is a $3 \\times 3$ Wigner matrix whose elements follow the i.i.d.~standard normal distribution; $\\sigma\\geq 0$ is a fixed noise level; $q$ is the probability that an edge is corrupted; and Haar$(SO(3))$ is the Haar probability measure on $SO(3)$. We clarify that for any $3 \\times 3$ matrix $\\boldsymbol{A}$, $\\text{Proj}(\\boldsymbol{A}) = \\operatorname*{arg\\,min}_{\\boldsymbol{R}\\in SO(3)} \\|\\boldsymbol{R}-\\boldsymbol{A}\\|_F$.\n\nWe test the algorithms with four values of $\\sigma$: $0$, $0.1$, $0.5$, and $1$. We average the mean error over 10 random samples from the uniform model and report it as a function of $q$ in Figure \\ref{fig:s1}.\n\nWe note that MPLS consistently outperforms the other methods for all tested values of $q$ and $\\sigma$. In the noiseless case, MPLS exactly recovers the group ratios even when $70\\%$ of the edges are corrupted. It also nearly recovers them when $80\\%$ of the edges are corrupted, where the estimation errors of IRLS-GM and IRLS-$\\ell_{1\/2}$ exceed 30 degrees. MPLS is also stable under high levels of noise. Since all algorithms produce poor solutions when $q=0.9$, we only show results for $0\\leq q\\leq 0.8$.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8cm]{uniform.pdf}\n \\caption{Performance under uniform corruption. The mean error (in degrees) is plotted against the corruption probability $q$ for 4 values of $\\sigma$. }\n \\label{fig:s1}\n\\end{figure}
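\n\nTo make the simulation setup concrete, the following Python sketch samples one edge measurement from the uniform corruption model; \\texttt{proj\\_so3} implements Proj via the SVD, and SciPy's \\texttt{Rotation.random} supplies the Haar$(SO(3))$ samples. The function names are our own, and the sketch assumes a NumPy random generator \\texttt{rng}.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.spatial.transform import Rotation\n\ndef proj_so3(A):\n    # Proj(A) = argmin over R in SO(3) of the Frobenius norm of R - A,\n    # computed via the SVD of A (with a determinant sign correction).\n    U, _, Vt = np.linalg.svd(A)\n    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])\n    return U @ D @ Vt\n\ndef sample_edge(Ri_star, Rj_star, q, sigma, rng):\n    # With probability q the edge is corrupted by a Haar-uniform\n    # rotation; otherwise it is a noisy version of R_ij^*.\n    if rng.random() < q:\n        return Rotation.random().as_matrix()\n    W = rng.standard_normal((3, 3))  # Wigner noise\n    return proj_so3(Ri_star @ Rj_star.T + sigma * W)\n\\end{verbatim}\n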
\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8cm]{adv.pdf}\n \\caption{Performance under self-consistent corruption. The mean error is plotted against the corruption probability $q$ for 4 values of $\\sigma$.}\n \\label{fig:adv}\n\\end{figure}\n\n\n\\begin{table*}[!htbp]\n\\centering\n\\resizebox{2\\columnwidth}{!}{\n\\renewcommand{\\arraystretch}{1.3}\n\\tabcolsep=0.1cm\n\\begin{tabular}{|l||c|c||c|c|c|c||c|c|c|c||c|c|c|c||c|c|c|c|}\n\\hline\nAlgorithms & \\multicolumn{2}{c||}{}& \\multicolumn{4}{c||}{IRLS-GM} &\n\\multicolumn{4}{c||}{IRLS-$\\ell_{\\frac12}$} &\n \\multicolumn{4}{c||}{CEMP+MST}&\n \\multicolumn{4}{c|}{MPLS} \\\\\n\\text{Dataset}& $n$ & $m$ & {\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$ \n& {\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$\n& {\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$ &{\\large$\\tilde{e}$} & {\\large $\\hat{e}$} & runtime & iter $\\#$\\\\\\hline\nAlamo& 564 & 71237 &\n3.64 & 1.30 & 14.2 & 10+8 & \n3.67 & 1.32 & 15.5 & 10+9 & \n4.05 & 1.62 & \\textbf{10.38} & 6 &\n\\textbf{3.44} & \\textbf{1.16} & 20.6 & 6+8\n\\\\\\hline\nEllis Island& 223 & 17309 &\n3.04 & 1.06 & 3.2 & 10+9 &\n2.71 & 0.93 & 2.8 & 10+13 &\n2.94 & 1.11 & \\textbf{2.4} & 6 &\n\\textbf{2.61} & \\textbf{0.88} & 4.0 & 6+11\n\\\\\\hline\nGendarmenmarkt& 655 & 32815&\n\\textbf{39.24} & \\textbf{7.07} & 6.5 & 10+14 &\n39.41 & 7.12 & 7.3 & 10+19 &\n45.33 & 8.62 & \\textbf{4.7} & 6 &\n44.94 & 9.87 & 17.8 & 6+25\n\\\\\\hline\nMadrid Metropolis & 315 & 14903 &\n5.30 & 1.78 & 3.8 & 10+30 &\n4.88 & 1.88 & 2.7 & 10+12 &\n5.10 & 1.66 & \\textbf{2.1} & 6 &\n\\textbf{4.65} & \\textbf{1.26} & 5.2 & 6+23\n\\\\\\hline\nMontreal N.D.& 442 & 44501 &\n1.25 & 0.58 & 6.5 & 10+6 &\n1.22 & 0.57 & 7.3 & 10+8 &\n1.33 & 0.79 & \\textbf{6.3} & 6 &\n\\textbf{1.04} & \\textbf{0.51} & 9.3 & 6+7\n\\\\\\hline\nNotre Dame & 547 & 88577 &\n2.63 & 0.78 & 17.2 & 10+7 &\n2.26 & 0.71 & 22.5 & 10+10 &\n2.35 & 0.94 & \\textbf{13.2} & 6 &\n\\textbf{2.06} & \\textbf{0.67} & 31.5 & 6+8\n\\\\\\hline\nNYC Library& 307 & 13814 &\n2.71 & 1.37 & 2.5 & 10+14 &\n2.66 & 1.30 & 2.6 & 10+15 &\n3.00 & 1.41 & \\textbf{1.9} & 6 &\n\\textbf{2.63} & \\textbf{1.24} & 4.5 & 6+14\n\\\\\\hline\nPiazza Del Popolo & 306 & 18915 &\n4.10 & 2.17 & 2.8 & 10+9 &\n3.99 & 2.09 & 3.1 & 10+13 &\n\\textbf{3.44} & \\textbf{1.57} & \\textbf{2.6} & 6 &\n3.73 & 1.93 & 3.5 & 6+3\n\\\\\\hline\nPiccadilly& 2031 & 186458 &\n 5.12 & 2.02 & 153.5 & 10+16 &\n 5.19 & 2.34 & 170.2 & 10+19 &\n 4.66 & 1.98 & \\textbf{45.8} & 6 &\n \\textbf{3.93} & \\textbf{1.81} & 191.9 & 6+21\n\\\\\\hline\nRoman Forum& 989 & 41836 &\n2.66 & 1.58 & 8.6 & 10+9 &\n2.69 & 1.57 & 11.4 & 10+17 &\n2.80 & 1.45 & \\textbf{6.1} & 6 &\n\\textbf{2.62} & \\textbf{1.37} & 8.8 & 6+8\n\\\\\\hline\nTower of London& 440 & 15918 &\n3.42 & 2.52 & 2.6 & 10+8 &\n3.41 & 2.50 & 2.4 & 10+12 &\n\\textbf{2.84} & \\textbf{1.57} & \\textbf{2.2} & 6 &\n3.16 & 2.20 & 2.7 & 6+7\n\\\\\\hline\nUnion Square& 680 & 17528 &\n6.77 & 3.66 & 5.0 & 10+32 &\n6.77 & 3.85 & 5.6 & 10+47 &\n7.47 & 3.64 & \\textbf{2.5} & 6 &\n\\textbf{6.54} & \\textbf{3.48} & 5.7 & 6+21\n\\\\\\hline\nVienna Cathedral& 770 & 87876 &\n8.13 & 1.92 & 28.3 & 10+13 &\n8.07 & \\textbf{1.76} & 45.4 & 10+23 &\n\\textbf{6.91} & 2.63 & \\textbf{13.1} & 6 &\n7.21 & 2.83 & 42.6 & 6+19\n\\\\\\hline\nYorkminster & 410 & 20298 &\n2.60 & 1.59 & \\textbf{2.4} & 10+7 &\n\\textbf{2.45} & 1.53 & 3.3 & 10+9 &\n2.49 & \\textbf{1.37} & 2.8 & 6 &\n2.47 & 1.45 & 3.9 & 6+7\n\\\\\\hline\n\\end{tabular}}\n\\caption{Performance on the Photo Tourism datasets: $n$ and $m$ are the number of 
nodes and edges, respectively; $\\tilde e$ and $\\hat e$ indicate mean and median errors in degrees, respectively; runtime is in seconds; and iter $\\#$ is the number of iterations (explained in the main text).}\\label{tab:real}\n\\end{table*}\n\n\n\\subsubsection{Self-Consistent Corruption}\nIn order to simulate self-consistent corruption, we independently draw from Haar($SO(3)$) two classes of rotations: $\\{\\boldsymbol{R}_i^*\\}_{i\\in [n]}$ and $\\{\\tilde{\\boldsymbol{R}}_i\\}_{i\\in [n]}$. We denote their corresponding relative rotations by $\\boldsymbol{R}_{ij}^*=\\boldsymbol{R}_i^*\\boldsymbol{R}_j^{*\\intercal}$ and $\\tilde{\\boldsymbol{R}}_{ij}=\\tilde{\\boldsymbol{R}}_i\\tilde{\\boldsymbol{R}}_j^{\\intercal}$ for $ij\\in E$. \nThe idea is to assign to the edges in $E_g$ and $E_b$ relative rotations from the two different classes, so that cycle-consistency occurs in both\n$G([n],E_g)$ and $G([n],E_b)$. We also add noise to these relative rotations, and we assign the edges to the two classes according to a Bernoulli model, so that one class is more significant. \nMore specifically, for $ij \\in E$,\n\\begin{equation}\n \\boldsymbol{R}_{ij}=\\begin{cases}\n \\text{Proj}(\\boldsymbol{R}^*_{ij}+\\sigma \\boldsymbol{W}_{ij}),&\\text{w.p. } 1-q;\\\\\n \\text{Proj}(\\tilde{\\boldsymbol{R}}_{ij}+\\sigma \\boldsymbol{W}_{ij}),& \\text{w.p. } q,\n \\end{cases}\n\\end{equation}\nwhere $q$, $\\sigma$, and $\\boldsymbol{W}_{ij}$ are the same as in the above uniform corruption model. We remark that the information-theoretic threshold for exact recovery when $\\sigma =0$ is $q=0.5$. That is, for $q\\geq 0.5$ there is no hope of exactly recovering $\\{\\boldsymbol{R}_{i}^*\\}_{i\\in [n]}$.\n\nWe test the algorithms with four values of $\\sigma$: $0$, $0.1$, $0.5$, and $1$. We average the mean error over 10 random samples from the self-consistent model and report it as a function of $q$ in Figure \\ref{fig:adv}.\nWe focus on values of $q$ approaching the information-theoretic bound $0.5$ ($q=0.4$, $0.45$ and $0.48$). We note that MPLS consistently outperforms the other algorithms, and that, when $\\sigma=0$, it exactly recovers the ground truth rotations even when $q=0.48$.\n\n\\subsection{Real Data}\nWe compare the performance of the different algorithms on the Photo Tourism datasets \\citep{1dsfm14}. Each of the 14 datasets consists of hundreds of 2D images of a 3D scene taken by cameras with different orientations and locations. For each pair of images of the same scene, we use the pipeline proposed by \\citet{ozyesil2015robust} to estimate the relative 3D rotations. The ground truth camera orientations are also provided. Table \\ref{tab:real}\ncompares the performance of IRLS-GM, IRLS-$\\ell_{1\/2}$, CEMP+MST and MPLS, reporting the\nmean and median errors, the runtime, and the number of iterations. The reported number of iterations is the sum of the iterations used to initialize the rotations and the iterations of the rest of the algorithm; CEMP+MST only has iterations in the initialization step.\n\nMPLS achieves the lowest mean and median errors on $9$ out of $14$ datasets, with runtime comparable to both IRLS methods, while IRLS-GM only outperforms MPLS on the Gendarmenmarkt dataset. This dataset is relatively sparse and lacks cycle information. It contains a large amount of self-consistent corruption, and none of the methods solves it reasonably well. Among the four tested methods, the fastest is CEMP+MST, which achieves the shortest runtime on 13 out of 14 datasets. 
Moreover, CEMP+MST is three times faster than the other tested methods on the largest dataset (Piccadilly). We remark that CEMP+MST achieves results comparable to the common IRLS methods on most datasets, and superior performance on two datasets, which contain some perfectly estimated edges.\nIn summary, for most of the datasets, MPLS provides the highest accuracy and CEMP+MST obtains the fastest runtime. \n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe proposed a framework for solving group synchronization under high corruption and noise. \nThis general framework requires a successful solution of the weighted least squares problem, which depends on the group. For $SO(3)$, we explained how a well-known solution integrates well with our framework. We demonstrated state-of-the-art performance of our framework for $SO(3)$ synchronization. We motivated our method as an alternative to IRLS and explained how it may overcome the limitations of IRLS when applied to group synchronization.\n\nThere are many directions in which to expand \nour work. One can carefully adapt and implement our proposed framework for other groups that occur in practice. \nOne may also develop theoretical guarantees for the convergence and exact recovery of MPLS.\n\n\\section*{Acknowledgement}\nThis work was supported by NSF award DMS-18-21266. We thank Tyler Maunu for his valuable comments\non an earlier version of this manuscript.\n\n
{"text":"\\section{Introduction}\nThe linear fractional maps from the complex plane into itself are among the very first objects of study in one-variable complex analysis since they have many good geometric properties (e.g. mapping real lines and circles among themselves) and they are also the only biholomorphisms of the unit disk and the Riemann sphere. It is thus very natural to look at the linear fractional maps in several complex variables and such explorations have been made by, for instance, Cowen-MacCluer~\\cite{CM} and Bisi-Bracci~\\cite{BB}. In their works, they have focused on the linear fractional maps that map the unit ball into itself. They obtained a variety of results for those linear fractional maps, including the classification, normal forms, and also fixed point theorems for such maps. From the point of view of function theory, every linear fractional self map of the unit ball also give rise to a composition operator in various function spaces of the unit ball and such an operator has been studied by many people, e.g. Cowen~\\cite{Co}, Bayart~\\cite{Ba} and Chen-Jiang~\\cite{CJ}.\n\nIn this article, we will try to extend the study of linear fractional self maps to a much more general class of domains, which in particular include the classical bounded symmetric domains of type I and also the so-called \\textit{generalized balls}. These two classes both contain the usual complex unit balls as special cases. Although here our major focus is on the generalized balls, our results should shed some light on the behavior of the linear fractional maps defined on the classical bounded symmetric domains of type-I since they are determined by the same set of matrices (as we will see in Theorem~\\ref{expansion matrix}).\n\nFor any two positive integers $p$ and $q$, let $H_{p,q}$ be the standard non-degenerate\nHermitian form of signature $(p,q)$ on $\\mathbb C^{p+q}$ where $p$ eigenvalues are $1$\nand $q$ eigenvalues are $-1$, represented by the matrix $\\begin{pmatrix}I_p&0\\\\0&-I_q\\end{pmatrix}$ under the standard coordinates.\nFor a positive integer $r< p+q$, denote by $Gr(r, \\mathbb C^{p+q})$ the Grassmannian of $r$-dimensional complex linear subspaces\n(or simply $r$-\\textit{planes}) of $\\mathbb C^{p+q}$.\nWhen $1\\leq r\\leq p$, we define the domain $D^r_{p,q}$ in $Gr(r, \\mathbb C^{p+q})$ \nto be the set of positive definite $r$-planes in $\\mathbb C^{p+q}$ with respect to $H_{p,q}$.\nWe call $D^r_{p,q}$ a {\\it generalized type-I domain}.\nThe generalized type-I domain $D_{p,q}^r$ is an $SU(p,q)$-orbit on $Gr(r, \\mathbb C^{p+q})$ under the natural action induced by that of $SL(p+q;\\mathbb C)$ on $Gr(r,\\mathbb C^{p+q})$. It is an example of \\textit{flag domains} in a general context, of which one can find a comprehensive reference \\cite{FHW}. Recently $D_{p,q}^r$ have been studied by Ng~\\cite{ng1, ng2} regarding the proper holomorphic mappings between them\nand by Kim~\\cite{Kim} regarding the CR maps on some CR manifolds in $\\partial D_{p,q}^r$. We remark that the case $r=p$ corresponds to the type-I classical bounded symmetric domains~\\cite{ng2}. 
\n\nThe generalized balls are among the simplest kinds of domains on the projective space, and their boundaries are smooth Levi non-degenerate (but in general not pseudoconvex) CR manifolds, of which detailed studies have been carried out by Baouendi-Huang~\\cite{BH} and Baouendi-Ebenfelt-Huang~\\cite{BEH}. More recently, Ng~\\cite{ng2} discovered that the proper holomorphic mapping problem for the generalized balls is deeply linked to that of the classical bounded symmetric domains of type I.\n\nHere we recall that a linear fractional map on $\\mathbb C^n$ is in fact just a restriction of a linear map on $\\mathbb P^n$ expressed in terms of the inhomogeneous coordinates of a Euclidean coordinate chart $\\mathbb C^n$ in $\\mathbb P^n$. Similarly, if we follow our notations, a linear fractional self map of the unit ball $D_{1,q}$ simply comes from a linear map on $\\mathbb P^{q}$ that maps $D_{1,q}$ into itself. We have defined our generalized type-I domains as domains on the Grassmannians, on which, just as for the projective space, there are homogeneous coordinates and the associated linear maps. Since we will always work with homogeneous coordinates, we will call a self map of a generalized type-I domain a \\textit{linear self map} if it is the restriction of a linear map of the ambient Grassmannian. \n\\newline\n\n\\noindent\\textbf{Remark.} Hence, according to our terminology, \\textit{a linear self map and a linear fractional self map are the same.} ``Fractions\" appear only because inhomogeneous coordinates are used.\n\\newline\n\nWe call a linear self map of a generalized type-I domain $D^r_{p,q}$ \\textit{non-minimal} if its range is not of minimal dimension (see Definition~\\ref{minimality}). In the case of the unit balls, a non-minimal linear self map is nothing but a non-constant linear self map. Roughly speaking, the minimal linear self maps, just like the constant maps for the unit balls, are the cases for which most statements become trivial or non-applicable. In Section~\\ref{type-I domain}, we will first establish a fundamental result for the study of linear self maps of generalized type-I domains. Namely, we will prove (Theorem~\\ref{expansion matrix}) that every non-minimal linear self map of a generalized type-I domain can be represented by a matrix $M$ satisfying the inequality\n\n\\begin{equation}\nM^H H_{p,q} M - H_{p,q}\\geq 0, \\label{expansioneq}\n\\end{equation}\nwhere $H_{p,q} = \\begin{pmatrix}I_p & 0 \\\\0 & -I_q\\end{pmatrix}$, in which $I_p$ and $I_q$ are identity matrices of rank $p$ and $q$, $\\cdot^H$ denotes Hermitian transposition, and ``$\\geq$\" means Hermitian semi-positivity. Furthermore, we will prove (Theorem~\\ref{isometrythm}) that a surjective linear self map can be represented by a matrix which makes the equality hold, i.e. by a matrix in the indefinite unitary group $U(p,q)$. For the case of the unit balls, these results were established by Cowen-MacCluer~\\cite{CM}, and we will modify their proof to work for any generalized type-I domain. As simple as it may look, this matrix inequality is extremely useful in obtaining various kinds of results for the linear self maps of generalized type-I domains. In particular, we will use it to show that any non-minimal linear self map extends to a neighborhood of the closure of the domain (Theorem~\\ref{extension thm}).
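\n\nFor readers who wish to experiment, the expansion and isometry conditions are easy to check numerically for a given matrix representation. A minimal Python sketch (the function names and tolerance are our own choices) reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef H(p, q):\n    return np.diag([1.0] * p + [-1.0] * q)\n\ndef is_expansion(M, p, q, tol=1e-10):\n    # Checks M^H H_{p,q} M - H_{p,q} >= 0 (Hermitian semi-positivity)\n    # via the smallest eigenvalue of the Hermitian matrix G.\n    G = M.conj().T @ H(p, q) @ M - H(p, q)\n    return np.min(np.linalg.eigvalsh(G)) >= -tol\n\ndef is_isometry(M, p, q, tol=1e-10):\n    # Checks equality, i.e. whether M belongs to U(p,q) (up to tol).\n    return np.max(np.abs(M.conj().T @ H(p, q) @ M - H(p, q))) <= tol\n\\end{verbatim}\n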
\n\nIn Section~\\ref{automorphisms}, we will study in detail the automorphism groups of the generalized balls, including the partial double transitivity on the boundary, fixed point theorems and normal forms. Here we remark that a generalized ball cannot be realized as a bounded convex domain in the Euclidean space unless it is a usual unit ball (see e.g.~\\cite{gn}), and hence one cannot apply Brouwer's fixed point theorem to get a fixed point in its closure. We will establish the existence of fixed points (in the closure) in Theorem~\\ref{existence of fixed points} and give a number of results regarding the behavior of the fixed points (Theorem~\\ref{fixed point number thm} and~Corollaries~\\ref{at most}, \\ref{hs generalization}). To obtain a normal form for automorphisms, we will show that the subgroup $U(p)\\times U(q)$ and the ``non-isotropic dilations\" generate the full automorphism group $Aut(D_{p,q})$ (Theorem~\\ref{normal form}).\n\nAfter studying the automorphisms, we will look at arbitrary linear self maps of the generalized balls in Section~\\ref{linear maps section}. We will again prove some results regarding their fixed points, including especially Theorem~\\ref{bb generalization}, which relates the number of fixed points on the boundary of a generalized ball to the existence of interior fixed points. This generalizes a result for the unit ball (see Bisi-Bracci~\\cite{BB}) saying that any linear self map of the unit ball with more than two boundary fixed points must have an interior fixed point. We will also obtain in this section a relation between the linear self maps of the \\textit{real} generalized balls of $D_{p,q}$ and those of $D_{p,q}$ (Theorem~\\ref{real generalized ball}).\n\nFinally, in Section~\\ref{examples} we will collect some illustrative or extremal examples for the results obtained in the previous sections.\n\n\\bigskip\n{\\bf Acknowledgements.} The first author was partially supported by the Key Program of NSFC (No. 11531107). The second author\nwas partially supported by the Thousand Talents Program of the Organization Department of the CPC Central Committee, and the Science and Technology Commission of Shanghai Municipality (STCSM) (No. 13dz2260400). The third author was partially supported by the Basic Science Research Program through the National Research \nFoundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1F1A1060175). \n\n\n\\section{Generalized type-I domains and their linear self maps}\\label{type-I domain}\n\n\\noindent\\textbf{Notations.} In what follows, for any $p,q\\in\\mathbb N^+$, we will equip $\\mathbb C^{p+q}$ with the standard non-degenerate indefinite Hermitian form $H_{p,q}$ and denote the resulting indefinite inner product space by $\\mathbb C^{p,q}$.\n\\newline\n\nDenote by $M_n$ the set of $n$-by-$n$ matrices with complex entries. 
Let $M\\in M_{p+q}$ and consider the linear map, which will also be denoted by $M$, from $\\mathbb C^{p,q}$ into itself, given by $z\\mapsto Mz$, where $z\\in\\mathbb C^{p,q}$ is regarded as a column vector.\nLet the null space of $M$ be $ker(M):=\\{z\\in\\mathbb C^{p,q}:Mz=0\\}$. Then the image of every positive definite $r$-plane in $\\mathbb C^{p,q}$ under $M$ is still an $r$-plane if and only if $ker(M)$ is negative semi-definite with respect to $H_{p,q}$. In such a case, $M$ gives rise to a holomorphic map from each $D^r_{p,q}$ to the Grassmannian $Gr(r,\\mathbb C^{p,q})$. It is clear that two matrices $M, M'\\in M_{p+q}$ induce the same map on $D^r_{p,q}$ if $M=\\lambda M'$ for some $\\lambda\\in\\mathbb C^*$.\nThe question of interest is whether or not such a map is actually a \\textit{self map} of $D^r_{p,q}$.\n\n\\begin{definition}\nA self map of $D^r_{p,q}$ is called a \\textit{linear self map} if it is given by a matrix $M\\in M_{p+q}$ in the way described above. If there is no danger of confusion, we will also denote any such self map by the same symbol $M$. Conversely, any matrix $M\\in M_{p+q}$ inducing a given linear self map of $D^r_{p,q}$ is called a \\textit{matrix representation} of the linear self map.\n\\end{definition}\n\nLet $M\\in M_{p+q}$ be a matrix representation of a linear self map of some $D^r_{p,q}$. Then we must have rank$(M)\\geq p$, since otherwise $\\dim_{\\mathbb C}(ker(M))>q$ and $ker(M)$ would not be negative semi-definite in $\\mathbb C^{p,q}$, contradicting the fact that $M$ induces a linear map on $D^r_{p,q}$. We \nmake the following definition in relation to this.\n\n\\begin{definition}\\label{minimality}\nA linear self map of $D^r_{p,q}$ is called \\textit{minimal} if it is given by a matrix $M\\in M_{p+q}$ with rank$(M)=p$. Otherwise, we say that the linear self map is \\textit{non-minimal}. \n\\end{definition}\n\n\\noindent\\textbf{Remark.} For the unit balls $D_{1,q}$, a non-minimal linear self map is simply a non-constant linear self map.\n\\newline\n\nIf $M\\in M_{p+q}$ satisfies Inequality~\\eqref{expansioneq}, then it follows immediately that the image of any positive definite $r$-plane is again a positive definite $r$-plane, and thus $M$ induces a linear self map on each $D^r_{p,q}$. \nWe are now going to show that, conversely, any \\textit{non-minimal} linear self map of $D^r_{p,q}$ can be represented by a matrix satisfying Inequality~(\\ref{expansioneq}). For this purpose, we will use some terminologies and results of Cowen-MacCluer~\\cite{CM}. Let $(V,[\\cdot,\\cdot])$ be a finite dimensional complex vector space equipped with an indefinite Hermitian form $[\\cdot,\\cdot]$. Following~\\cite{CM}, we will say that a linear map $T$ of $V$ into itself is an \\textit{expansion} if $[Tv,Tv]\\geq [v,v]$ for all $v\\in V$, and an \\textit{isometry} if $[Tv,Tv]=[v,v]$ for all $v\\in V$. In particular, if we identify the linear maps of $\\mathbb C^{p,q}$ into itself with their matrix representations with respect to the standard basis, then $M\\in M_{p+q}$ is an expansion of $\\mathbb C^{p,q}$ if and only if it satisfies Inequality~(\\ref{expansioneq}), and is an isometry if and only if $M$ makes the equality hold. 
We are going to show that the non-minimal linear self maps of $D^r_{p,q}$ are precisely those given by the expansions of $\\mathbb C^{p,q}$, and the surjective linear self maps of $D^r_{p,q}$ are given by the isometries of $\\mathbb C^{p,q}$.\n\nThe following lemma can be found in \\cite{CM}, but we rewrite it (reversing the signs) in a way more suitable for our purpose.\n\n\\begin{lem}[\\cite{CM}] \\label{expansion}\nSuppose $[\\cdot,\\cdot]_1$ and $[\\cdot,\\cdot]_2$ are indefinite Hermitian forms on the complex vector space $V$ such that $[x,x]_1=0$ implies $[x,x]_2 \\geq 0$. Then,\n$$\\lambda:=-\\inf_{[y,y]_1=-1}[y,y]_2 < \\infty $$\nand $$[x,x]_2\\geq \\lambda[x,x]_1$$\nfor all $x\\in V$.\n\\end{lem}\n\nWe are now in a position to show that the non-minimal linear self maps of $D^r_{p,q}$ are all given by matrices satisfying Inequality~(\\ref{expansioneq}). (The case of the unit ball was obtained by Cowen-MacCluer~\\cite{CM}, and part of our proof is taken from there.)\n\n\\begin{thm}\\label{expansion matrix}\nEvery non-minimal linear self map of $D^r_{p,q}$ can be represented by a matrix satisfying Inequality~(\\ref{expansioneq}).\n\\end{thm}\n\n\\begin{proof}\nLet $M\\in M_{p+q}$ be a matrix inducing a linear self map (also denoted by $M$) on some $D^r_{p,q}$. For $x,y\\in\\mathbb C^{p,q}$, let $[x,y]_1=y^H H_{p,q} x$ and let $[x,y]_2=(My)^HH_{p,q} Mx$. The hypothesis that $M$ maps\n $D^r_{p,q}$ into $D^r_{p,q}$ means that whenever $[x,x]_1>0$, we have $[x,x]_2>0$. By continuity, we get that if $[x,x]_1 \\geq 0$, then $[x,x]_2 \\geq 0$.\n Thus, the hypotheses of Lemma \\ref{expansion} are satisfied, and we only need to show that $\\la > 0$, since then $\\la^{-1\/2}M$ is an expansion of $\\mathbb C^{p,q}$.\n\nTo see that $\\lambda >0$, we let the range of $M$ be $R(M):=\\left\\{y\\in\\mathbb C^{p,q}: y=Mz \\textrm{\\,\\,for some\\,\\,} z\\right\\}$. Now if $M$ is non-minimal, then\n$\\dim_{\\mathbb C}(R(M))\\geq p+1$ (Definition~\\ref{minimality}). Thus, $R(M)$ must contain a negative vector $y$, and any preimage $z$ of $y$ must also be a negative vector, since we have already seen in the previous paragraph that $[z,z]_1 \\geq 0$ would imply that $[y,y]_1=[z,z]_2 \\geq 0$.\nTherefore, we can find $z\\in \\mathbb{C}^{p,q}$ such that $[z,z]_1<0$ as well as $[z,z]_2<0$, and hence $\\la > 0$. \n\\end{proof}\n\n\\begin{thm}\\label{isometrythm}\nEvery surjective linear self map of $D^r_{p,q}$ can be represented by a matrix satisfying the equality in~(\\ref{expansioneq}). In particular, every surjective linear self map of $D^r_{p,q}$ is an automorphism.\n\\end{thm}\n\\begin{proof}\nIf a linear self map $M$ of a given $D^r_{p,q}$ is surjective, then $M$ is surjective as a linear map on $\\mathbb C^{p,q}$, since its image contains all the positive vectors (which constitute an open set in $\\mathbb C^{p,q}$). The inverse linear map $M^{-1}$ also maps positive vectors to positive vectors, since each of the 1-planes generated by positive vectors can be regarded as the intersection of a set of positive $r$-planes and $M$ is surjective (as a self map of $D^r_{p,q}$). Hence, we see that $M^{-1}$ is also a surjective linear self map of $D^r_{p,q}$ and is thus an expansion. Therefore, there are non-zero scalars $\\alpha$ and $\\beta$ such that $\\alpha M$ and $\\beta M^{-1}$ are expansions. 
Thus, for every $z\\in\\mathbb C^{p,q}$, if we write $\\|z\\|^2_{p,q}:=z^H H_{p,q} z$, then\n$$\n\t\\|\\beta z\\|^2_{p,q}\\geq \\|Mz\\|^2_{p,q}\\geq \\|\\alpha^{-1} z\\|^2_{p,q}\n$$\nand hence\n$$\n\t|\\alpha\\beta|^2\\|z\\|^2_{p,q}\\geq\\|\\alpha Mz\\|^2_{p,q}\\geq\\|z\\|^2_{p,q}.\n$$\n\nSince the latter inequality holds for positive vectors (forcing $|\\alpha\\beta|\\geq 1$) as well as negative vectors (forcing $|\\alpha\\beta|\\leq 1$), we deduce that $|\\alpha\\beta|=1$ and $\\|\\alpha Mz\\|^2_{p,q}=\\|z\\|^2_{p,q}$ for every $z$. That is, $\\alpha M$ is an isometry.\n\\end{proof}\n\nA linear self map $M$ of $D^r_{p,q}$ originally comes from a linear map $\\tilde M$ defined on the ambient Grassmannian $Gr(r,\\mathbb C^{p,q})$. If $M$ is not surjective, then there is a set of indeterminacy $Z\\subset Gr(r,\\mathbb C^{p,q})$ on which $\\tilde M$ is not defined. The set $Z$ is outside $D^r_{p,q}$ and consists of the points corresponding to the $r$-planes that intersect the kernel of a matrix representation of $\\tilde M$. A priori, $Z$ can intersect the boundary $\\partial D^r_{p,q}$ and obstruct the extension of $M$ across $\\partial D^r_{p,q}$, but we are now going to show that this does not happen for non-minimal linear self maps. \n\n\\begin{thm}\\label{extension thm}\nEvery non-minimal linear self map of $D^r_{p,q}$ extends holomorphically to an open neighborhood of the closure $\\overline{D^r_{p,q}}:=D^r_{p,q}\\cup\\partial D^r_{p,q}$.\n\\end{thm}\n\\begin{proof}\nLet $M$ be a non-minimal linear self map of $D^r_{p,q}$ and denote also by $M$ a matrix representation of it which satisfies the inequality $M^HH_{p,q}M-H_{p,q}\\geq 0$. As mentioned at the beginning of this section, $ker(M)$ must be negative semi-definite with respect to $H_{p,q}$. We are now going to show that $ker(M)$ does not contain any non-zero null vector if $M$ is non-minimal. Suppose on the contrary that there is a non-zero null vector $\\eta\\in ker(M)$. Let $v\\in\\mathbb C^{p,q}$ and write $\\|v\\|_{p,q}^2=v^HH_{p,q}v$. Then for any $r\\in\\mathbb R$, we have $M(v+r\\eta)=Mv$ and\n$$\n\\|Mv\\|_{p,q}^2=\\|M(v+r\\eta)\\|_{p,q}^2\\geq \\|v+r\\eta\\|_{p,q}^2=\\|v\\|_{p,q}^2+r(\\eta^HH_{p,q}v+v^HH_{p,q}\\eta).\n$$\n\nNow if $v$ is chosen such that $\\textrm{Re}(v^HH_{p,q}\\eta)\\neq 0$, then the above inequality cannot hold for every $r\\in\\mathbb R$, and hence we get a contradiction. Consequently, $ker(M)$ does not contain any non-zero null vector. Therefore, the image of any positive semi-definite $r$-plane under $M$ is still an $r$-plane, and the set of indeterminacy $Z\\subset Gr(r,\\mathbb C^{p,q})$ of $M$ (as a linear map on $Gr(r,\\mathbb C^{p,q})$) is disjoint from $\\overline{D^r_{p,q}}$. Since both $Z$ and $\\overline{D^r_{p,q}}$ are closed in $Gr(r,\\mathbb C^{p,q})$ (and hence compact), there is an open neighborhood of $\\overline{D^r_{p,q}}$ disjoint from $Z$, and now the theorem follows.\n\\end{proof}\n\n\\noindent\\textbf{Remark.} The non-minimality is indeed necessary to guarantee the extension across the entire boundary. See Example~\\ref{no extension} for a minimal linear self map which does not extend across some boundary point.\n\n\n\\section{Automorphisms on generalized balls}\\label{automorphisms}\n\nIn this section, we are going to study in detail the automorphisms of the generalized balls $D_{p,q}$, regarding their fixed point sets and also their normal forms. 
We begin by determining the automorphism group of $D_{p,q}$.\n\n\\begin{thm}\nEvery automorphism of $D_{p,q}$ is a linear self map and thus extends to an automorphism of the ambient projective space $\\mathbb P^{p+q-1}$. The automorphism group $\\textrm{Aut}(D_{p,q})$ of $D_{p,q}$ is isomorphic to $PU(p,q)$, the projectivization of the indefinite unitary group $U(p,q)$. In particular, every automorphism of $D_{p,q}$ can be represented by a matrix in $U(p,q)$.\n\\end{thm}\n\\begin{proof}\nThe statements are well-known for the complex unit balls $D_{1,q}$ and also for the complements of the complex unit balls $D_{p,1}$. Suppose now $p,q\\geq 2$. Then it has been shown by Baouendi-Huang~\\cite{BH} (for a more geometric proof, see Ng~\\cite{ng1}) that every automorphism of $D_{p,q}$ is necessarily a linear map. Now by Theorem~\\ref{isometrythm} here (or Lemma 2.13 in~\\cite{ng1}), we see that every automorphism can be represented by a matrix in $U(p,q)$. Since two elements in $U(p,q)$ represent the same automorphism of $D_{p,q}$ if and only if they are scalar multiples of each other, it follows that $\\textrm{Aut}(D_{p,q})\\cong PU(p,q)$.\n\\end{proof}\n\n\\begin{cor}\\label{extension cor}\nThe action of $\\textrm{Aut}(D_{p,q})$ extends real-analytically to $\\partial D_{p,q}$.\n\\end{cor}\n\nThe following version of Witt's theorem is very useful in studying the various transitivities of $\\textrm{Aut}(D_{p,q})$.\n\n\\begin{lem}[Witt \\cite{Witt}] \\label{cor_Witt theorem}\nLet $X$ be a complex vector space equipped with a non-degenerate Hermitian form and $Y \\subset X$ be any complex vector subspace. Then any isometric embedding $f : Y \\rightarrow X$ extends to an isometry $F$ of $X$.\n\\end{lem}\n\n\\begin{thm}\\label{doubly transitive} For $u\\in \\mathbb C^{p,q}$, let $[u]$ be its projectivization in $\\mathbb P^{p+q-1}$.\n\\begin{enumerate}\n\\item\n $\\text{Aut}(D_{p,q})$ is transitive on $D_{p,q}$ and also on $\\partial D_{p,q}$.\n\\item\nLet $[v_1]$, $[v_2]$, $[w_1]$, $[w_2]\\in \\partial D_{p,q}$, where $[v_1]\\neq [v_2]$ and $[w_1]\\neq [w_2]$. Then there exists $M\\in \\textrm{Aut}(D_{p,q})$ such that $M([v_j])=[w_j]$ for $j=1,2$ if and only if there exists a non-zero $\\alpha\\in\\mathbb C$ such that $v_1^HH_{p,q}v_2=\\alpha w_1^HH_{p,q}w_2$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n1. Let $[u_1]$, $[u_2]\\in D_{p,q}$. We can choose some $k>0$ such that the linear map $f:\\mathbb Cu_1\\rightarrow \\mathbb C^{p,q}$ defined by $u_1\\mapsto ku_2$ is an isometric embedding. By Lemma~\\ref{cor_Witt theorem}, $f$ extends to an isometry of $\\mathbb C^{p,q}$, and therefore there is an automorphism of $D_{p,q}$ mapping $[u_1]$ to $[u_2]$. Similarly, for any two points $[v_1]$, $[v_2] \\in \\partial D_{p,q}$, the map\n$i\\colon \\mathbb C v_1\\rightarrow \\mathbb C^{p,q}$ defined by $v_1\\mapsto v_2$ is an isometric embedding, and hence there exists an automorphism of $D_{p,q}$ mapping $[v_1]$ to $[v_2]$.\n\n2. Suppose that there exists $\\alpha\\in\\mathbb C^*$ such that $v_1^HH_{p,q}v_2=\\alpha w_1^HH_{p,q}w_2$. Let $Y=\\textrm{Span}\\{v_1, v_2\\}$ and let $\\phi : Y \\rightarrow \\CC^{p,q}$ be the linear embedding defined by $\\phi(v_1)=w_1$ and $\\phi(v_2)=w_2$. By replacing $w_1$ by a suitable scalar multiple, we can arrange that $v_1^HH_{p,q}v_2=w_1^HH_{p,q}w_2$. Then $\\phi$ is an isometric embedding, and hence $\\phi$ extends to an isometry of $\\mathbb C^{p,q}$ by Lemma \\ref{cor_Witt theorem}, and the desired result follows. 
The converse is trivial.\n\\end{proof}\n\n\\noindent\\textbf{Remark.}\nTheorem~\\ref{doubly transitive} is a generalization of the double transitivity of the automorphism groups of the complex unit balls on their boundaries. This is because for $[v_1], [v_2]\\in\\partial D_{1,q}$ with $[v_1]\\neq [v_2]$, we always have $v_1^HH_{1,q}v_2\\neq 0$, since otherwise we would get a two dimensional isotropic subspace in $\\mathbb C^{1,q}$.\n\n\\subsection{Fixed points on $D_{p,q}$ and $\\partial D_{p,q}$}\n\nLet $A\\in U(p,q)$ and denote also by $A$ the corresponding automorphism of $D_{p,q}$.\nIt follows directly from the definition of the matrix representation of a linear self map that the fixed points of $A$ (as an automorphism of $D_{p,q}$) correspond precisely to the one-dimensional eigenspaces (or projectivized eigenvectors) of $A$ associated to the non-zero eigenvalues. The following simple observation regarding the eigenvalues and eigenvectors of matrices in $U(p,q)$ will be very useful in studying the fixed points of the automorphisms of $D_{p,q}$.\n\n\\begin{lem}\\label{ortho lemma}\nLet $A\\in U(p,q)$. If $\\lambda_1$, $\\lambda_2$ are eigenvalues of $A$ and $v_1$, $v_2$ are two eigenvectors associated to them respectively, then either $v_1$ and $v_2$ are orthogonal with respect to $H_{p,q}$ or $\\overline{\\lambda_2}\\lambda_1=1$.\n\\end{lem}\n\\begin{proof}\nThe result follows from\n$\nv_2^H H_{p,q} v_1= v_2^HA^H H_{p,q} Av_1=\\left(\\ov{\\lambda}_2\\lambda_1\\right)\\,v^H_2 H_{p,q} v_1.\n$\n\\end{proof}\n\nLet $\\ov{D_{p,q}}:=D_{p,q}\\cup\\partial D_{p,q}$ be the closure of $D_{p,q}$. Since the closed complex unit ball $\\overline{\\mathbb B^q}\\cong\\overline{D_{1,q}}$ is a convex compact set in $\\mathbb C^q\\cong \\mathbb R^{2q}$, it follows from Corollary~\\ref{extension cor} and Brouwer's fixed point theorem that every element in $Aut(D_{1,q})$ has a fixed point in $\\overline{D_{1,q}}$. \nWhen $p\\geq 2$, $D_{p,q}$ cannot be embedded as a convex compact set in any Euclidean space, since it contains positive dimensional projective subspaces (see~\\cite{ng1}). Nevertheless, with a bit of a ``detour\", we will still be able to use Brouwer's fixed point theorem to get a fixed point in the closure for \\textit{any} linear self map of $D_{p,q}$. The proof will be given in Section~\\ref{linear maps section}. Hence, we have\n\n\\begin{thm}\\label{existence of fixed points}\nEvery element in $\\textrm{Aut}(D_{p,q})$ has a fixed point on $\\ov{D_{p,q}}:=D_{p,q}\\cup\\partial D_{p,q}$.\n\\end{thm}\n\\begin{proof}\nThis follows directly from Theorem~\\ref{general existence of fixed points}.\n\\end{proof}\n\nWe now recall some elementary linear algebra. Let $M\\in M_n$ and let $\\lambda$ be an eigenvalue of $M$. For $r\\in \\mathbb N$, a vector $v\\in\\mathbb C^n$ is called a generalized eigenvector of rank $r$ of $M$ associated to the eigenvalue $\\lambda$ if $(M-\\lambda I)^rv=0$ but $(M-\\lambda I)^{r-1}v\\neq 0$.\nIt turns out that both the absolute values of the eigenvalues and the existence of generalized eigenvectors of higher rank give information about the fixed-point set of the linear self map on a generalized ball represented by $M$.\n\nAs before, for a vector $v\\in\\mathbb C^{p,q}$, we will denote by $[v]\\in\\mathbb P^{p+q-1}$ its projectivization. Similarly, for any complex vector subspace $W\\subset\\mathbb C^{p,q}$, we denote its projectivization by $[W]$.
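\n\nComputationally, the correspondence between fixed points and projectivized eigenvectors makes it straightforward to locate the fixed points of a given automorphism. A minimal Python sketch (the function name and tolerance are our own choices) is the following:\n\\begin{verbatim}\nimport numpy as np\n\ndef fixed_points(A, p, q, tol=1e-10):\n    # Fixed points of the self map of D_{p,q} induced by A correspond\n    # to projectivized eigenvectors of A with non-zero eigenvalue;\n    # classify each one by the sign of v^H H_{p,q} v.\n    H = np.diag([1.0] * p + [-1.0] * q)\n    vals, vecs = np.linalg.eig(A)\n    points = []\n    for lam, v in zip(vals, vecs.T):  # eigenvectors are the columns\n        if abs(lam) < tol:\n            continue  # cannot occur for the invertible A in U(p,q)\n        s = np.real(np.conj(v) @ H @ v)\n        kind = ('interior' if s > tol else\n                'boundary' if s >= -tol else 'exterior')\n        points.append((lam, v, kind))\n    return points\n\\end{verbatim}\n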
\n\n\\begin{prop}\\label{fixed point}\nLet $A\\in \\text{Aut}(D_{p,q})$, choose a matrix representation in $U(p,q)$, and denote it also by $A$. Let $\\lambda$ be an eigenvalue of $A$ and let $v$ be an associated generalized eigenvector of rank $r$.\n\\begin{enumerate}\\label{projective subspace on the boundary}\n\\item If $|\\lambda|\\neq 1$, then $[(A-\\lambda I)^{r-1}v]$ is a fixed point of $A$ on $\\partial D_{p,q}$. Furthermore, if $r\\geq 2$, then there is an $(r-1)$-dimensional projective linear subspace $[W]$ in $\\partial D_{p,q}$\ninvariant under $A$, and $[(A-\\lambda I)^{r-1}v]\\in [W]$ is the unique fixed point of $A$ in $[W]$.\n\\item If $|\\lambda|=1$ and $r\\geq 2$,\nthen $[(A-\\lambda I)^{r-1}v]$ is a fixed point of $A$ on $\\partial D_{p,q}$.\nFurthermore, there is an $\\left(\\left[ \\frac{r}{2} \\right]-1\\right)$-dimensional projective subspace\n$[W]$ in $\\partial D_{p,q}$ invariant under $A$, and $[(A-\\lambda I)^{r-1}v]$ is the unique fixed point of $A$ in $[W]$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\n1. Let $A\\in U(p,q)$, let $\\lambda$ be an eigenvalue of $A$ and let $v$ be an associated generalized eigenvector of rank $r$. Let $v_r:=v$ and define inductively $v_{j-1}:=(A-\\lambda I)v_j$ for $j\\in\\{r,r-1,\\ldots, 2\\}$. In particular, $v_1$ is an eigenvector of $A$ associated to $\\lambda$, and we have\n\\begin{eqnarray*}\nAv_1&=&\\lambda v_1, \\\\\nAv_2 &=& v_1 + \\lambda v_2,\\\\\n&&\\vdots\\\\\nA v_r &=&v_{r-1} + \\lambda v_r.\n\\end{eqnarray*}\nSuppose that $|\\lambda|\\neq 1$.\nWe claim that\n\\begin{equation}\nv_i^HH_{p,q} v_j=0 \\text{ for all } 1\\leq i,j\\leq r.\n\\end{equation}\nWe will prove this by induction.\nSince $A^H H_{p,q} A = H_{p,q}$, we have\n$v_1^H H_{p,q}v_1=(Av_1)^H H_{p,q} Av_1=|\\lambda|^2 v_1^H H_{p,q}v_1$, and since $|\\lambda|\\neq 1$ this forces $v_1^H H_{p,q}v_1 =0$, which implies that $[v_1]\\in \\partial D_{p,q}$.\nSuppose that $r\\geq 2$ and $v_1^H H_{p,q}v_{j'}=0$ for every $j'