diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbkcs" "b/data_all_eng_slimpj/shuffled/split2/finalzzbkcs"
new file mode 100644--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbkcs"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=1.0\\columnwidth]{Fig1_Motivation.pdf}\n\\caption{\\small\nThe statistics of common neighbors between different models.\nCNN-CNN and ViT-ViT curves denote the average number of common neighbors in the k nearest neighbors of each instance.\nThe features are extracted by two homogeneous networks trained with different initialization.\nSimilarly, CNN-ViT represents the common neighbors between CNN and ViT.\nFurthermore, Upbound refers to the maximum number of neighbors to consider.\nThese models are trained on the CUHK-SYSU dataset in a supervised manner and cluster on the Market1501 dataset.\nCompared to the CNN-CNN and ViT-ViT, the CNN-ViT contains fewer common neighbors, and the ways they distinguish two individuals are more different than homogeneous networks.\nIt demonstrates that heterogeneous networks address the task in different patterns.\n}\n\\label{fig1:Motivation}\n\\end{figure}\n\n\\IEEEPARstart{P}{erson} re-identification (Re-ID)~\\cite{series\/acvpr\/GongCLH14} aims at matching individual pedestrian images from images captured by different cameras according to identity.\n\\IEEEpubidadjcol\nThis task is challenging because the variations of viewpoints, body poses, illuminations, and backgrounds will influence a person's appearance.\nRecently, supervised person Re-ID methods~\\cite{journals\/ijcv\/LiZG20, journals\/pami\/SunZLYTW21, journals\/ijcv\/YinWZ20, conf\/eccv\/SunZYTW18, conf\/cvpr\/ZhangLZJ020, conf\/cvpr\/WangZGL18, conf\/iccv\/WuZL19, conf\/cvpr\/YuZ20, conf\/eccv\/ZhongZLY18, conf\/iccv\/HanYZTZGS19} made impressive progress.\nHowever, as the number of images increases and the ensuing scene changes, regular supervised learning approaches are losing their ability to adapt to complex scenarios.\nThe performance of person re-ID models trained on existing datasets will evidently suffer for person images from a new video surveillance system due to the domain gap.\nTo avoid time-consuming annotations on the new dataset, unsupervised domain adaptation (UDA) is proposed to adapt the model trained on the labeled source-domain dataset to the unlabeled target-domain dataset.\n\nGenerating trusted identity information on the target domain is seen as the core of the UDA task.\nSome UDA Re-ID methods~\\cite{conf\/cvpr\/Deng0YK0J18, conf\/iccv\/LiLLW19, conf\/cvpr\/WeiZ0018, conf\/eccv\/ZhongZLY18, conf\/eccv\/ZouYYKK20} directly apply GANs~\\cite{journals\/corr\/GoodfellowPMXWOCB14} to transfer the style of pedestrian images from the source domain to the target while keeping the identities to train the model.\nHowever, the complexity of the human form and the limited number of instance in a Re-ID dataset limit the quality of generated images.\nAfter abandoning the image generation, some methods~\\cite{conf\/aaai\/ChangYXH19, conf\/cvpr\/WangZGL18} introduce the attribute to bridge the domain gap.\nThese methods introduce additional annotation information which defeats the purpose of the UDA Re-ID task.\nLimited by the missing label on the target domain, others~\\cite{conf\/iccv\/QiWHZSG19, conf\/cvpr\/Zhong0LL019, conf\/cvpr\/YangZLCLLS21, journals\/tip\/DaiCWHYTLJ22} align the distributions of target and source domains while only learning classifying on the source.\nTo better adapt the distribution of the target domain and train with the target-domain identity knowledge, various methods~\\cite{conf\/aaai\/LinD00019, conf\/iccv\/ZhangCSY19, conf\/cvpr\/BaiWW0D21} apply a clustering 
algorithm in the target domain to generate the pseudo labels for training in a supervised manner. \nOne of the keys to improving performance is alleviating the influence of noisy labels. \nIn this context, many methods~\\cite{conf\/iclr\/GeCL20, conf\/cvpr\/ZhengLH0LZ21, conf\/eccv\/ZhaiYLJJ020} based on clustering algorithms have been proposed to mitigate the harm from the noisy labels by introducing more than one framework to predict pseudo labels.\nThey aim to generate knowledge with specific differences in samples and exchange the knowledge among the networks to enhance their ability.\nDespite encouraging progress, the benefits from the knowledge mined by homogeneous networks are limited.\nAs shown in Fig.~\\ref{fig1:Motivation} CNN-CNN and ViT-ViT, these homogeneous networks with similar structures identify pedestrians in a comparable manner, and the relations among the instances are similar.\nThis suggests that they use similar patterns to extract pedestrian features and that the networks may converge toward one another.\nSuch a design limits the knowledge the models can learn from the training set and allows the networks to repeat the same mistakes without being able to remedy them.\nAs a result, mining the information from different subspaces is required to broaden the scope of knowledge and generate reliable pseudo labels.\n\nTo tackle this problem, heterogeneous networks, as shown in Fig.~\\ref{fig1:Motivation} CNN-ViT, can discover the information from multiple subspaces and have more extensive latent knowledge.\nWe propose Dual-level Asymmetric Mutual Learning (DAML), a novel unsupervised domain adaptation method for person Re-ID that broadens the scope of knowledge for the network by exploiting information from two different subspaces and selectively transferring information between heterogeneous networks.\nThe proposed DAML consists of a CNN that focuses on identity learning as a teacher network and a ViT that concentrates on adapting knowledge from the target domain as a student network, embedding samples into different subspaces and setting the constraints among the classifiers for asymmetric mutual learning.\n\nIn particular, the CNN that works as a teacher is trained on both source and target datasets under the supervision of ground-truth source-domain labels and pseudo-target-domain labels.\nThe former can provide reliable identity information for extracting discriminative feature representations, while the latter assists the network in adapting to the distribution of the target domain.\nHowever, learning from the source domain will harm the distribution that the network has adapted to, limiting the performance.\nTo avoid this disadvantage, the ViT that works as a student only trains with the guidance of pseudo-target-domain labels and learns the knowledge from the teacher.\nIn the pseudo label generation stage, the relationship between two samples is weighted according to the similarity of their teacher and student features.\nMoreover, this process wholly exchanges the knowledge learned from two different subspaces.\nAfter predicting the identities of input images, the asymmetric constraints between the two heterogeneous networks selectively exchange the knowledge.\nThe student learns the identity knowledge from the teacher network under the constraints from the target-domain samples.\nFurthermore, so that the student can benefit more from the teacher and better utilize the 
ground-truth labels, the source-domain identity knowledge learned by the teacher is transferred to the target domain with the constraints based on source-domain samples.\nIn summary, the DAML employs diverse subspaces to generate reliable pseudo label in the target domain and help student adopt ground-truth knowledge in the source domain.\n\nOur main contributions are summarized below:\n\n\\begin{itemize}\n %\n \\item We address the diverse subspaces learning and target-domain identity learning for unsupervised domain adaptation person Re-ID with proposed Dual-level Asymmetric Mutual Learning (DAML). \n %\n The former has rarely been studied in the existing research, while the latter is crucial for retrieving person in the target domain.\n %\n \\item We propose a novel Dual-level Asymmetric Mutual Learning (DAML) method for unsupervised domain adaptation person Re-ID. \n %\n The asymmetric knowledge learning between the teacher and the student helps them play their roles better.\n %\n \\item To learn from diverse subspaces, the proposed DAML introduces two heterogeneous networks to mine valuable information from different subspaces and selectively exchange the information between them.\n %\n \\item To better utilize the knowledge mined by heterogeneous networks and ensure the networks orient to the task, the proposed DAML smoothly update the classifiers in a hard distillation manner and exchange knowledge during training in a soft distillation manner.\n %\n\\end{itemize}\n\n\\section{Related Work}\n\n\\subsection{Unsupervised Domain Adaptation Person Re-ID}\nUnsupervised Domain Adaptation Person Re-ID has attracted increasing attention in recent years due to its effectiveness in reducing manual annotation costs.\nThere are two main categories of methods are proposed to address this issue.\nFirstly, GAN-based methods aim to transfer samples from the source domain to the target domain without altering their identities. 
\nSPGAN~\\cite{conf\/cvpr\/Deng0YK0J18} and PDA-Net~\\cite{conf\/iccv\/LiLLW19} transfer images directly from the source domain to the target domain while maintaining the original identity knowledge.\nThe generated images have a similar style to the target-domain images and are used to train the model under the supervision of their original labels in the source domain.\nTo produce generated images that are more realistic and have more detail, DG-Net~\\cite{conf\/cvpr\/ZhengYY00K19} and DG-Net++~\\cite{conf\/eccv\/ZouYYKK20} introduce disentanglement for the generation stage.\nHowever, the generation is expensive, and the style of the generated images may not fit the target domain well.\nRather than transferring images from the source domain to the target domain, HHL~\\cite{conf\/eccv\/ZhongZLY18} transfers target-domain images among the cameras to generate images that keep the same identity while exhibiting camera-style differences.\nSecondly, the clustering-based methods do not require expensive GAN networks for generation and have achieved state-of-the-art performance to date.\nTo reduce the impact of noisy labels, MMT~\\cite{conf\/iclr\/GeCL20} proposed a mutual learning method providing soft labels.\nFor more reliable pseudo labels, SSG~\\cite{conf\/iccv\/FuWWZSUH19} clusters samples at three scales that validate each other.\nMEB-Net~\\cite{conf\/eccv\/ZhaiYLJJ020} introduces multiple homogeneous networks to jointly generate the pseudo labels.\nUNRN~\\cite{conf\/aaai\/ZhengLZZZ21} and GLT~\\cite{conf\/cvpr\/ZhengLH0LZ21} design a memory bank that saves anchors for aligning the distributions and learning identities in a contrastive learning manner.\nLimited by the feature-level constraints these methods rely on, the models that collaborate to generate pseudo labels are homogeneous.\nAs a result, each model can only learn similar knowledge from the others.\nNevertheless, these approaches alleviate the domain gap while considering only a single embedding space, which inevitably introduces mistakes.\n\n\\subsection{Knowledge Distillation}\nKnowledge distillation makes a student network learn from a strong teacher network to improve the student's ability.\nThe common approaches can be summarized as hard distillation and soft distillation.\nSoft distillation~\\cite{journals\/corr\/HintonVD15, conf\/eccv\/WeiXXZ0T20} minimizes the difference between the distributions predicted by the teacher and the student.\nThe soft labels generated by the teacher model can alleviate overfitting, just like label smoothing~\\cite{conf\/cvpr\/YuanTLWF20}.\nUnlike soft distillation, hard-label distillation regards the prediction result of the teacher as a valid label.\nPositive pairs predicted by the teacher are used to transfer identity knowledge from the teacher to a student network in semi-supervised and unsupervised learning tasks.\nTemporal ensembling~\\cite{conf\/iclr\/LaineA17} treats the earlier networks as the teacher and uses a memory to save the average prediction of each sample as supervision for the unlabeled samples.\nTo avoid storing predictions and thus save memory, Mean Teacher~\\cite{conf\/iclr\/TarvainenV17} averages the student model weights to form the parameters of the teacher.\nDuring the training, the predictions made by the teacher are seen as supervision for unlabeled samples.\nIn these methods, the models consider similar information because the teacher and the student have the same structure and similar initialization.\nThis makes the networks focus within a 
certain range and limit the knowledge student can learn.\nThe proposed DAML exchange knowledge utilizes both soft and hard distillation in the different training stages.\nThanks to the heterogeneous networks, the proposed DAML gives the student model a broader perspective and can generate pseudo labels from different views.\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=0.9\\textwidth]{Fig2_Framework.pdf}\n\\vspace{-3mm}\n\\caption{ \\small\nOverview of our Dual-level Asymmetric Mutual Learning method (DAML).\nThe teacher network is trained under the supervision of pseudo labels and ground-truth labels for target-domain and source-domain samples.\nAnd the student only directly learns knowledge from target-domain samples with pseudo labels.\nAt the beginning of epochs, we first generate the pseudo labels for target dataset, and update the classifiers based on the predictions of cluster centers.\nTo distill the different subspace knowledge from the teacher to the student, $\\mathcal{L}_{id}$ makes the student predictions of target-domain samples close to the teacher.\nMeanwhile, for student can better adopt the identity knowledge learned by the teacher, we minimize the distribution differences of the same source-domain samples with $\\mathcal{L}_{dom}$.\n}\n\\label{fig2:Framework}\n\\end{figure*}\n\n\\subsection{CNN and ViT}\nSince AlexNet~\\cite{conf\/nips\/KrizhevskySH12} achieve great success on ImageNet~\\cite{conf\/cvpr\/DengDSLL009}, a variety of convolutional neural networks (CNN)~\\cite{conf\/cvpr\/SzegedyVISW16, conf\/cvpr\/HeZRS16, conf\/cvpr\/HuangLMW17, conf\/eccv\/PanLST18} is proposed to solve different tasks.\nAs Transformers~\\cite{conf\/nips\/VaswaniSPUJGKP17} were proposed for machine translation and were seen with significant results in many NLP tasks, the application of self-attention to images is widely concerned. \nA new model without any convolution, Vision Transformers (ViT)~\\cite{conf\/iclr\/DosovitskiyB0WZ21}, has been proposed for computer vision tasks and shows its potential.\nDuring the calculation process, the CNN keeps the spatial information and can only focus on the surrounding area in one layer due to the nature of convolution.\nIn contrast, ViT emphasizes the correlation between two patches, and its receptive field involves the whole feature map.\nThese differences make the CNN and ViT learn different knowledge from the training set for the same task.\nAnd in our paper, we take advantage of this difference to achieve asymmetric distillation, making ViT a better performer with our DAML.\nThe ViT works as a student because the receptive field of a patch in the ViT covers the area that one convolution kernel can consider, not vice-versa. 
\n\n\\section{Methodology}\n\\label{Sec:Methodology}\n\\subsection{Overview}\n\nThe ultimate goal of the unsupervised domain adaptation (UDA) person Re-ID is to gain a model work on a target-domain dataset based on a labeled source-domain dataset and an unlabeled target-domain dataset.\nLet $\\mathcal{S}=\\{(\\mathbf{x}_s^i, \\mathbf{y}_s^i)\\}_{i=1}^{N_s}$ and $\\mathcal{T}=\\{\\mathbf{x}_t^i\\}_{i=1}^{N_t}$ respectively denote the source-domain images with ground-truth labels and the unlabeled target-domain images, where $N_s$ and $N_t$ are the numbers of samples from these two domains.\n\nAs shown in Fig.~\\ref{fig2:Framework}, the Dual-level Asymmetric Mutual Learning (DAML) method trains the student to extract discriminative representations from two different subspaces to perform the UDA person Re-ID task.\nFirstly, DAML adopts two heterogeneous networks: teacher CNN ${\\rm E}_T(\\cdot)$ and student ViT ${\\rm E}_S(\\cdot)$ which are pre-trained on the source-domain dataset in a supervised manner to extract features in different subspaces.\nAt each epoch, we first group target-domain samples into $K$ classes by the clustering algorithm.\nThe distance between two target-domain samples will be calculated according to the features ${\\rm E}_T(\\mathbf{x}_t^i)=\\mathbf{t}_i^T \\in \\mathbb{R}^{c_T}$ and ${\\rm E}_S(\\mathbf{x}_t^i)=\\mathbf{t}_i^S \\in \\mathbb{R}^{c_S}$ extracted by the teacher and student models with corresponding weights.\nThe $\\{\\hat{\\mathbf{y}}_i\\}_{i=1}^{N_t}$ are the pseudo labels for the target-domain samples.\nThen, for each class center $\\mathbf{c}_y$, we generate its prediction with the classifiers ${\\rm C}(\\cdot|\\mathbf{W}_t^S)$ and ${\\rm C}(\\cdot|\\mathbf{W}_t^T)$ for updating the parameter $\\mathbf{W}_t^S$ and $\\mathbf{W}_t^T$ in a smooth method.\n\nAfter that, we train the teacher and the student models with the pseudo labels in a supervised manner.\nFor the teacher model, classifier ${\\rm C}(\\cdot|[\\mathbf{W}_s^T, \\mathbf{W}_t^T])$ will learn both source-domain and target-domain knowledge.\nWhile the classifier ${\\rm C}(\\cdot|\\mathbf{W}_t^S)$ for the student model only directly learns the target-domain knowledge.\nThe constraints between two networks transfer the identity knowledge learned by the teacher to the target and help the student learn from diverse subspaces. 
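For concreteness, the pseudo-label generation step outlined above can be sketched as follows. This is a minimal illustration rather than the exact implementation: the function and variable names are ours, the mutual-neighbor filtering described in the next subsection is omitted, and the clustering settings simply follow the implementation details reported later.
\\begin{verbatim}
# Illustrative sketch: pseudo labels from two subspaces (not the
# authors' code). Assumes numpy and scikit-learn are available.
import numpy as np
from sklearn.cluster import DBSCAN

def generate_pseudo_labels(feats_teacher, feats_student,
                           eps=0.6, min_samples=4):
    # Concatenate teacher and student features so that both
    # subspaces contribute to the pairwise relations.
    feats = np.concatenate([feats_teacher, feats_student], axis=1)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    dist = 1.0 - feats @ feats.T          # cosine distance
    np.clip(dist, 0.0, None, out=dist)    # guard against round-off
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dist)
    return labels                         # -1 marks outliers
\\end{verbatim}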
\nFinally, we only adopt the features $\\mathbf{t}_i^S = {\\rm E}_S(\\mathbf{x}_t^i)$ extracted by the student model for testing.\n\n\\subsection{Smooth Classifier Update (SCU)}\n\nAt the beginning of epochs, we extract the target-domain features $\\mathbf{t}_i^T = E_T(\\mathbf{x}^i_t)$ and $\\mathbf{t}_i^S = E_S(\\mathbf{x}^i_t)$ with two heterogeneous networks.\nTo better utilize the knowledge from the two models, we first define the neighborhood of an instance according to its relations in two different subspaces:\n\\begin{equation}\n N_i = \\Bigg\\{x_j \\Bigg| 1 - \\frac{\\langle t_i^M, t_j^M \\rangle}{\\Vert t_i^M\\Vert_2 \\Vert t_j^M\\Vert_2} < \\alpha, M \\in \\{T, S\\} \\Bigg\\},\n\\end{equation}\nthe $\\alpha$ here is a hyper-parameter.\nWith the limitation of neighbor selection considering both teacher features and student features simultaneously, the neighbors of an instance should be close to it in both subspaces.\nThe above constraint ensures that instances with apparent differences will not be clustered as the same identity because the patterns of the two models applied to recognize an instance are different.\n\nTo exploit the information from two different subspaces and make the pseudo labels more reliable, we combine features from heterogeneous networks and define the distance between two samples as:\n\\begin{equation}\n d_{i,j} =\n\\begin{cases}\n\\begin{aligned}\n& 1 - \\frac{\\langle[\\mathbf{t}_i^T, \\mathbf{t}_i^S],[\\mathbf{t}_j^T, \\mathbf{t}_j^S]\\rangle}\n {\\Vert[\\mathbf{t}_i^T, \\mathbf{t}_i^S]\\Vert_2 \\Vert[\\mathbf{t}_j^T, \\mathbf{t}_j^S]\\Vert_2}, & x_i \\in N_j \\ and \\ x_j \\in N_i \\\\\n& \\rm{Inf}, & Others\n\\end{aligned}\n\\end{cases}\n\\end{equation}\nwhere $[\\cdot,\\cdot]$ represents the concatenation of two features, and $\\langle\\cdot,\\cdot\\rangle$ denotes the dot product between two features.\nIn short, we define the similarity between two samples as the cos similarity between the features constructed by concatenating their teacher feature and student feature.\nThen the pseudo labels can be generated based on the relationship among instances with the clustering algorithm.\n\nWith the pseudo labels, some methods \\cite{conf\/iclr\/GeCL20,conf\/aaai\/ZhengLZZZ21} directly update the classifier by replacing the classifier parameters with the new class centers to adapt the count of classes change.\nThese methods will make the knowledge lost because the class centers may not represent the corresponding class well.\nTo protect the knowledge involved in the classifiers, we update the classifiers more smoothly as follows:\n\\begin{equation}\n\\begin{aligned}\n\\mathbf{W}^i_t=\\sum_{k=1}^{\\hat{K}} \\mathbf{\\hat{W}}^k_t \\cdot \\frac{e^{\\mathbf{p}_i^k}}{\\sum_{j=1}^{\\hat{K}} e^{\\mathbf{p}_i^j}},\n\\end{aligned}\n\\label{update_the_target_classifiers}\n\\end{equation}\nwhere $\\mathbf{W}_t^i$ is the parameters for the $i^{th}$ target-domain identity in the next epoch, $\\mathbf{p}_i ={\\rm C}(\\mathbf{c}_i|\\mathbf{\\hat{W}}_t)$ is the prediction of class center $\\mathbf{c}_i$ with the parameters $\\mathbf{\\hat{W}}_t$ from the last epoch which includes $\\hat{K}$ classes.\nNote that the momentum for SGD is updated following the parameters in the process.\n\n\\subsection{Identity Learning}\n\nThe core of person re-identification is identifying the persons.\nFor two heterogeneous networks learning to extract discriminative representation, there are two level objective functions are applied.\nFirstly, at the feature level, the triplet 
loss:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{tri}(\\mathbf{f})=\\frac{1}{n}\\sum_{i=1}^n \\max \\{ \\rho + d_p - d_n, 0 \\},\n\\end{aligned}\n\\label{loss_triplet}\n\\end{equation}\nis applied to guarantee the features can well represent their corresponding samples.\nWhere $\\mathbf{f}$ represents a batch of the features, $n=|\\mathbf{f}|$ is the size of the batch, \n$\\rho$ is the tiniest margin between the distance to the furthest positive instance $d_p$ and the distance to the nearest negative instance $d_n$.\nThe relationship between two instances from the source domain depends on the ground-truth labels and the pseudo labels for target-domain samples.\nDue to the different dimensions of the features extracted by heterogeneous networks, the triplet loss can only be applied in a certain subspace.\n\nThen, in the logits level, we apply the cross-entropy loss with classifiers:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{Ttid}=-\\frac{1}{n}\\sum_{i=1}^n\\log{P(\\hat{\\mathbf{y}}_i|{\\rm C}(\\mathbf{t}_i^T | [\\mathbf{W}_s^T,\\mathbf{W}_t^T]))},\n\\end{aligned}\n\\label{loss_teacher_learn_target_identity}\n\\end{equation}\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{Stid}=-\\frac{1}{n}\\sum_{i=1}^n\\log{P(\\hat{\\mathbf{y}}_i|{\\rm C}(\\mathbf{t}_i^S | \\mathbf{W}_t^S))},\n\\end{aligned}\n\\label{loss_student_learn_target_identity}\n\\end{equation}\nwhere $\\hat{\\mathbf{y}}_i$ is the pseudo label for target-domain example $\\mathbf{x}_t^i$.\nThe trainable parameters $\\mathbf{W}_s^T$, $\\mathbf{W}_t^T$ and $\\mathbf{W}_t^S$ respectively denote the classifier parameters for the teacher classifying source-domain samples, the teacher classifying target-domain samples, and the student classifying target-domain samples.\nMeanwhile, to take advantage of the ground-truth label, the teacher also learns the source-domain knowledge by:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{Tsid}=-\\frac{1}{n}\\sum_{i=1}^n\\log{P(\\mathbf{y}_i|{\\rm C}(\\mathbf{s}_i^T | [\\mathbf{W}_s^T,\\mathbf{W}_t^T]))},\n\\end{aligned}\n\\label{loss_teacher_learn_source_identity}\n\\end{equation}\nhere, $\\mathbf{y}_i$ is the ground-truth label for source-domain sample $\\mathbf{x}_s^i$.\nNote that, with Eq.(\\ref{loss_teacher_learn_target_identity}) and Eq.(\\ref{loss_teacher_learn_source_identity}), the classifier ${\\rm C}(\\mathbf{t}_i^T | [\\mathbf{W}_s^T,\\mathbf{W}_t^T])$ in the teacher has learned both two domain knowledge while classifier ${\\rm C}(\\mathbf{t}_i^T | \\mathbf{W}_t^S)$ for the student learns the target-domain knowledge only.\n\n\\subsection{Asymmetric Mutual Learning (AML)}\n\nCompare the structure of the Convolutional Neural Network (CNN) and Vision Transformer (ViT), the most evident difference is that the ViT can capture long-range information with its cascaded self-attention modules. 
\nIn contrast, the CNN only focuses on local regions, limited by the size of the convolution kernel.\nIn addition, the inductive bias of the CNN, which encodes assumptions about the data, can involve information that the ViT may not consider, and the convolution kernel with a deterministic shape preserves spatial information.\nMore intuitively, the features extracted by the two networks have different dimensions.\nThis ensures that the subspaces learned by the heterogeneous networks are different, but it makes feature-level constraints unusable.\nThe asymmetric distillation benefits from the difference in the patterns with which the two heterogeneous networks predict identities, and it focuses on two goals: allowing the student access to knowledge from the different subspaces and transferring the knowledge from the source to the target.\n\nTo make the student learn from different subspaces and take advantage of the reliable source-domain labels, the proposed DAML transfers the identity knowledge from the teacher by reducing the Kullback-Leibler divergence between the predictions of the target-domain features as:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{id}=\\frac{1}{n}\\sum_{i=1}^n&{\\rm C}(\\mathbf{t}^T_i|\\mathbf{W}_t^T)\\log{\\frac{{\\rm C}(\\mathbf{t}^S_i|\\mathbf{W}_t^S)}{{\\rm C}(\\mathbf{t}^T_i|\\mathbf{W}_t^T)}},\n\\end{aligned}\n\\label{equation_knwoledge_distillation}\n\\end{equation}\nWith the above objective function, the student can learn the knowledge from the teacher, which adopts knowledge from both the source and target domains with Eq.(\\ref{loss_teacher_learn_target_identity}) and Eq.(\\ref{loss_teacher_learn_source_identity}).\nHowever, source-domain knowledge is also transferred to the student and may harm the performance in the target domain.\nThe ideal way to alleviate this distribution effect is to make the teacher predict identities under the target-domain distribution.\n\nLimited by the domain gap, the knowledge learned from the source domain cannot be directly applied to the target domain.\nThe goal of the proposed asymmetric mutual learning is to obtain a student network that adapts to the target domain while benefiting from the source-domain identity knowledge.\nMaking the teacher's source-domain predictions similar to the student's transfers the classifying knowledge learned from the source domain to the target domain:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}_{dom}=\\frac{1}{n}\\sum_{i=1}^n&{\\rm C}(\\mathbf{s}^S_i|\\mathbf{W}_t^S)\\log{\\frac{{\\rm C}(\\mathbf{s}^T_i|\\mathbf{W}_t^T)}{{\\rm C}(\\mathbf{s}^S_i|\\mathbf{W}_t^S)}},\n\\end{aligned}\n\\label{equation_eliminate_domain_gap}\n\\end{equation}\nHere, $\\mathbf{W}_t^T$ learns source-domain knowledge with Eq.(\\ref{loss_teacher_learn_source_identity}) while $\\mathbf{W}_t^S$ only learns the knowledge from the target domain.\nEq.(\\ref{equation_eliminate_domain_gap}) focuses on making the teacher predict source-domain samples in the same way as the student.\nIn this way, the student can adopt identity knowledge from the source domain with as little influence from the domain gap as possible.\nCompared with Eq.(\\ref{equation_knwoledge_distillation}), the above equation distills the knowledge in a different direction; together, they enable the student model to distinguish pedestrians in the target domain. 
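For reference, these two asymmetric distillation terms can be sketched in a few lines. This is an illustrative PyTorch-style fragment rather than the authors' implementation: the tensor names are ours, it uses the standard forward Kullback-Leibler form (so the sign convention may differ from the expressions above), and whether gradients are blocked on the side acting as the target distribution is left unspecified.
\\begin{verbatim}
import torch.nn.functional as F

def asymmetric_losses(logits_t_teacher, logits_t_student,
                      logits_s_teacher, logits_s_student):
    # L_id: pull the student's target-domain predictions
    # toward the teacher's predictions.
    l_id = F.kl_div(F.log_softmax(logits_t_student, dim=1),
                    F.softmax(logits_t_teacher, dim=1),
                    reduction="batchmean")
    # L_dom: pull the teacher's source-domain predictions
    # toward the student's predictions.
    l_dom = F.kl_div(F.log_softmax(logits_s_teacher, dim=1),
                     F.softmax(logits_s_student, dim=1),
                     reduction="batchmean")
    return l_id, l_dom
\\end{verbatim}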
\n\n\\subsection{Optimization}\nThe total loss $\\mathcal{L}$ of DAML can be summarized as:\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{L}=&\\big(\\mathcal{L}_{Ttid} + \\mathcal{L}_{tri}(\\mathbf{t}^T)\\big) +\n \\big(\\mathcal{L}_{Stid} + \\mathcal{L}_{tri}(\\mathbf{t}^S)\\big) \\\\\n &\\lambda_1\\big(\\mathcal{L}_{Tsid} + \\mathcal{L}_{tri}(\\mathbf{s}^T)\\big)\n +\\lambda_2\\mathcal{L}_{id} + \\lambda_3\\mathcal{L}_{dom}\n\\end{aligned}\n\\label{full_loss_functions}\n\\end{equation}\nwhere $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are hype-parameters to balance the contributions of individual loss terms.\n\n\\section{Experiments}\n\n\\subsection{Datasets}\n\n\\textbf{Datasets}\nWe evaluated our method on three public datasets \\textbf{Market-1501}~\\cite{conf\/iccv\/ZhengSTWWT15}, \\textbf{CUHK-SYSU}~\\cite{journals\/corr\/XiaoLWLW16} and \\textbf{MSMT17}~\\cite{conf\/cvpr\/WeiZ0018}.\n\n\\begin{itemize}\n \\item \\textbf{Market-1501} contains $32,668$ labeled images captured from $1,501$ identities by $6$ cameras.\n %\n The training set has $12,936$ images of $751$ identities. \n %\n In addition, $3,368$ query images and $19,732$ gallery images from the other $750$ identities are used as the testing set.\n %\n \\item \\textbf{CUHK-SYSU} includes $33,901$ labeled images of $8,432$ identities taken in diverse scenes.\n %\n The training set is constructed with $5,532$ identities having $15,088$ images, and the rest is used for testing.\n %\n There are $2,900$ images for the query and $6,978$ images for the gallery in the testing set.\n %\n \\item \\textbf{MSMT17} is a large-scale dataset consisting of $126,441$ bounding boxes of $4,101$ identities caught on $12$ outdoor and $3$ indoor cameras.\n %\n Among them, $32,621$ images of $1,041$ identities are used for training and $93,820$ of $3,060$ identities are used for testing.\n\\end{itemize}\n\n\\begin{table*}[t]\n\n\\caption{\nComparison of CMC (\\%) and \\emph{m}AP (\\%) performances with the SOTA methods on \\textbf{Market-1501}, \\textbf{CUHK-SYSU} and \\textbf{MSMT17}.\n}\n\n\\centering\n\\resizebox{1.0\\linewidth}{!}{\n\\setlength{\\tabcolsep}{2mm}\n\\begin{tabular}{l||c|ccc||c|ccc}\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{4}{c||}{Market-1501 $\\rightarrow$ CUHK-SYSU} & \\multicolumn{4}{c}{CUHK-SYSU $\\rightarrow$ Market-1501} \\\\\n\\cline{2-9} & \\emph{m}AP & R1 & R5 & R10 & \\emph{m}AP & R1 & R5 & R10 \\\\\n\\hline\nDirectly Transfer (IBN-ResNet-50) & 74.1 & 77.2 & 85.7 & 88.7 & 38.8 & 63.7 & 79.4 & 85.2 \\\\\nDirectly Transfer (ViT-Base) & 86.0 & 87.2 & 94.1 & 95.0 & 36.2 & 60.3 & 76.5 & 82.9 \\\\\n\\hline\nUNRN~\\cite{conf\/aaai\/ZhengLZZZ21}(AAAI'21) & 62.3 & 64.1 & 76.9 & 82.0 & 70.9 & 86.7 & 92.8 & 94.5 \\\\\nMMT(IBN-ResNet-50)~\\cite{conf\/iclr\/GeCL20}(ICLR'20) & 78.4 & 81.0 & 89.7 & 92.2 & 76.0 & 88.8 & 95.2 & 97.0 \\\\\nMEB-Net~\\cite{conf\/eccv\/ZhaiYLJJ020}(ECCV'20) & 81.1 & 83.2 & 90.9 & 93.1 & 69.3 & 84.0 & 92.9 & 95.2 \\\\\n\\hline\nDAML (Ours) & \\textbf{84.3} & \\textbf{86.2} & \\textbf{92.6} & \\textbf{94.6} & \\textbf{84.1} & \\textbf{93.1} & \\textbf{97.7} & \\textbf{98.2} \\\\\n\\hline\nSupervised (IBN-ResNet-50) & 90.8 & 95.2 & 96.6 & 89.0 & 83.0 & 94.1 & 97.4 & 98.4 \\\\\nSupervised (ViT-Base) & 93.1 & 97.2 & 97.8 & 92.1 & 82.3 & 93.2 & 97.9 & 98.8 \\\\\n\\hline\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{4}{c||}{Market-1501 $\\rightarrow$ MSMT17} & \\multicolumn{4}{c}{CUHK-SYSU $\\rightarrow$ MSMT17} \\\\\n\\cline{2-9} & \\emph{m}AP & R1 & R5 & R10 & \\emph{m}AP & R1 & R5 & R10 
\\\\\n\\hline\nDirectly Transfer (IBN-ResNet-50) & 8.4 & 23.8 & 34.5 & 39.6 & 10.3 & 26.3 & 38.3 & 44.3 \\\\\nDirectly Transfer (ViT-Base) & 13.0 & 33.3 & 45.3 & 51.1 & 12.5 & 28.2 & 41.1 & 47.5 \\\\\n\\hline\nMEB-Net~\\cite{conf\/eccv\/ZhaiYLJJ020}(ECCV'20) & 20.6 & 44.1 & 58.3 & 64.3 & 21.3 & 45.6 & 59.5 & 65.6 \\\\\nUNRN~\\cite{conf\/aaai\/ZhengLZZZ21}(AAAI'21) & 25.3 & 52.4 & 64.7 & 69.7 & 12.6 & 31.1 & 43.8 & 49.7 \\\\\nMMT(IBN-ResNet-50)~\\cite{conf\/iclr\/GeCL20}(ICLR'20) & 26.6 & 54.4 & 67.6 & 72.9 & 24.0 & 49.0 & 63.0 & 68.6 \\\\\n\\hline\nDAML (Ours) & \\textbf{41.4} & \\textbf{65.4} & \\textbf{76.0} & \\textbf{80.2} & \\textbf{44.0} & \\textbf{67.0} & \\textbf{78.0} & \\textbf{81.9} \\\\\n\\hline\nSupervised (ViT-Base) & 54.1 & 76.6 & 87.5 & 90.8 & 54.1 & 76.6 & 87.5 & 90.8 \\\\\nSupervised (IBN-ResNet-50) & 49.9 & 79.2 & 88.2 & 91.3 & 49.9 & 79.2 & 88.2 & 91.3 \\\\\n\\hline\n\\end{tabular}\n}\n\n\\label{table_comparison_SOTA}\n\\end{table*}\n\n\\begin{table}[!t]\n\\centering\n\\setlength{\\tabcolsep}{1.0mm}\n{\n\\caption{\\small\nAblation study in terms of \\emph{m}AP (\\%) and CMC (\\%) on \\textbf{CUHK-SYSU (CS) $\\rightarrow$ Market-1501 (M)}.\n}\n\\label{table_ablation}\n\\resizebox{1.0\\linewidth}{!}{\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{tabular}{l||c|c}\n\\hline\n\\multirow{2}{*}{\\ \\ \\ \\ \\ \\ \\ \\ \\ Method} & \\multicolumn{2}{c}{CS $\\rightarrow$ M} \\\\\n\\cline{2-3} & \\ \\ \\emph{m}AP \\ \\ & \\ \\ R1 \\ \\\\\n\\hline\nIBN-ResNet-50(Directly) & 38.8 & 63.7 \\\\\nViT-Base(Directly) & 36.2 & 60.3 \\\\\n\\hline\nDAML w\/o $\\mathcal{L}_{Tsid} + \\mathcal{L}_{tri}(\\mathbf{s}^T)$ & 83.6 & 92.8 \\\\\nDAML w\/o $\\mathcal{L}_{id}$ & 83.3 & 92.3 \\\\\nDAML w\/o $\\mathcal{L}_{dom}$ & 83.6 & 92.5 \\\\\nDAML w\/o SCU & 81.0 & 91.0 \\\\\n\\hline\nDAML & 84.1 & 93.1 \\\\\n\\hline\nIBN-ResNet-50(Supervised) & 83.0 & 94.1 \\\\\nViT-Base(Supervised) & 82.3 & 93.2 \\\\\n\\hline\n\\end{tabular}\n}\n}\n\\end{table}\n\n\n\\subsection{Experiment Setting}\n\n\\textbf{Performance Metric:}\nAs a UDA task, we select one dataset as the source-domain dataset and another as the target-domain dataset.\nThe model is trained with the labeled source-domain training set and adapts the target domain through the unlabeled target-domain training set.\nThen the performance is evaluated according to the student network which work on the target-domain testing set.\nIn our experiments, following the standard metrics, we employ the cumulative matching characteristic (CMC) curve and the mean average precision (\\emph{m}AP) score.\nOur experiments report rank-1, rank-5, and rank-10 accuracy and mAP scores.\n\n\\textbf{Implementation Details:} \nIn the most common setting, we select IBN-ResNet-50~\\cite{conf\/eccv\/PanLST18} as the teacher network and ViT-Base~\\cite{conf\/iclr\/DosovitskiyB0WZ21} as the student network.\nThe batch size is set to $64$ for both source-domain and target-domain datasets. 
\nIn one batch, the sampler will select $16$ identities and $4$ images for each identity according to the ground-truth label or pseudo label for two domains.\nThe input image has a fixed size of $256 \\times 128$.\n\nIn the pre-training stage, we first train models $120$ epochs on the source-domain dataset.\nThe teacher CNN model is optimized by SGD with an initial learning rate of $1\\times{10}^{-2}$ and weight decay of $5\\times{10}^{-4}$ with a learning rate decays at $40^{th}$ and $70^{th}$ epoch.\nThe SGD optimizer is employed with a momentum of $0.9$ and the weight decay of $1\\times{10}^{-4}$ for student ViT.\nThe learning rate is set to $8\\times{10}^{-3}$, and the cosine schedule is applied.\nThe input images are augmented with random flip and randomly erase with $50\\%$ probability.\n\nIn the fine-tuning stage, we adopted half the learning rate of the previous stage.\nSpecifically, the learning rate is set to $5\\times10^{-3}$ for teacher CNN and $4\\times10^{-3}$ for student ViT.\nAnd the total number of training epochs is set to $60$.\nThe input images for two heterogeneous networks are randomly flipped and erased with $50\\%$ probability.\nWhen calculating the neighbors of an instance, the maximum acceptable distance $\\alpha$ is $0.6$.\nWe generate the pseudo labels by DBSCAN~\\cite{conf\/kdd\/EsterKSX96}.\nFor DBSCAN, we select $0.6$ as the maximum distance between neighbors and set the minimal number of neighbors to $2$ for \\textbf{CUHK-SYSU} and $4$ for others.\nThe hype-parameters $\\alpha$, $\\lambda_1$, $\\lambda_2$ and $\\lambda_3$ are set to $0.5$, $0.1$, $0.7$ and $1.2$, respectively.\nAt the feature level, the margin $\\rho$ for triplet loss is set to $1.2$.\n\n\\subsection{Comparison with State-of-the-art Methods}\nSince Duke University terminated the \\textbf{DukeMTMC}~\\cite{conf\/eccv\/RistaniSZCT16} dataset, which has been widely used for evaluation of unsupervised domain adaptation person Re-ID task, the comparison becomes difficult.\nTo meet the moral and ethical requirements and provide a new baseline for comparison, we evaluate the performance of some representative works which have official open-source codes based on the \\textbf{CUHK-SYSU} dataset.\nAnd the results in \\textbf{Market-1501 $\\rightarrow$ MSMT17} setting is from the authors' reports.\nWe compare our DAML with state-of-the-art (SOTA) unsupervised domain adaptation person Re-ID approaches.\nMMT~\\cite{conf\/iclr\/GeCL20} applies two networks that have the same structure for learning from each other with both feature-level and logit-level constraints.\nMEB-Net~\\cite{conf\/eccv\/ZhaiYLJJ020} introduces three homogeneous networks, and the output of each network is considered comprehensively in the pseudo label generation.\nMoreover, UNRN~\\cite{conf\/aaai\/ZhengLZZZ21} designs a memory bank storing class centers from both source and target domains to mitigate the influence of noise labels.\nAs shown in Tab.~\\ref{table_comparison_SOTA}, we evaluate the performance in four different manners, \\textit{i.e.}, \\textbf{Market-1501 $\\rightarrow$ CUHK-SYSU}, \\textbf{CUHK-SYSU $\\rightarrow$ Market-1501}, \\textbf{Market-1501 $\\rightarrow$ MSMT17}, and \\textbf{CUHK-SYSU $\\rightarrow$ MSMT17}.\n\n\\textbf{Comparisons on large-scale datasets:}\nThe comparison results on \\textbf{Market-1501 $\\rightarrow$ MSMT17} and \\textbf{CUHK-SYSU $\\rightarrow$ MSMT17} are shown in the bottom of Table~\\ref{table_comparison_SOTA}.\nThe proposed DAML outperforms existing SOTAs by large margins.\nSpecifically, 
DAML achieves the Rank-1 accuracy of $65.4\\%$ and \\emph{m}AP of $41.4\\%$ in the \\textbf{Market-1501 $\\rightarrow$ MSMT17} setting, significantly improving the Rank-1 accuracy by $11.0\\%$ and \\emph{m}AP by $14.8\\%$ over the SOTA MMT.\nWhen compared to the SOTAs in \\textbf{CUHK-SYSU $\\rightarrow$ MSMT17} setting, the performance margin between our DAML and MMT is also significantly, e.g., the Rank-1 boost is $18.0\\%$, and the \\emph{m}AP boost is $20.0\\%$.\n\n\\begin{table*}[!t]\n\\centering\n\\setlength{\\tabcolsep}{1.0mm}\n{\n\\caption{\\small\nInfluence of different backbones in terms of \\emph{m}AP (\\%) and Rank-1 (\\%) on \\textbf{CUHK-SYSU (CS)} $\\rightarrow$ \\textbf{Market-1501 (M)}.\n}\n\\resizebox{1.0\\linewidth}{!}{\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{tabular}{l|l|c|c||c|ccc}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Backbone} & \\multirow{2}{*}{\\makecell{Training \\\\Parameter}} & \\multirow{2}{*}{\\makecell{Testing \\\\Parameter}} & \\multicolumn{4}{c}{CS $\\rightarrow$ M} \\\\\n\\cline{5-8} & & & & \\ \\emph{m}AP \\ & \\ R1 \\ & \\ R5 \\ & \\ R10 \\ \\\\\n\\hline\nMMT & IBN-ResNet-50 + IBN-ResNet-50 & 99.8M & 24.9M & 76.0 & 88.8 & 95.2 & 97.0 \\\\\nMMT & ViT-Base + ViT-Base & 345.2M & 86.3M & 75.2 & 86.7 & 94.2 & 96.4 \\\\\nUNRN & ResNet-50-NL & 77.1M & 38.5M & 70.9 & 86.7 & 92.8 & 94.5 \\\\\nUNRN & ViT-Base & 185.4M & 86.3M & 73.2 & 86.8 & 93.2 & 95.2 \\\\\n\\hline\nDAML & IBN-ResNet-50 + ViT-Base & 111.2M & 86.3M & 84.1 & 93.1 & 97.7 & 98.2 \\\\\n\\hline\n\\end{tabular}\n\\label{table_parameter}\n}\n\n}\n\\end{table*}\n\n\\textbf{Comparisons on small-scale dataset:}\nWe also evaluate DAML on two small-scale target-domain datasets settings, \\textbf{Market-1501 $\\rightarrow$ CUHK-SYSU} and \\textbf{CUHK-SYSU $\\rightarrow$ Market-1501}, as shown in the top of Table~\\ref{table_comparison_SOTA}.\nSimilar to the results on large-scale datasets, DAML consistently outperforms current SOTAs.\nSpecifically, we achieve Rank-1 accuracy of $86.2\\%$ and \\emph{m}AP of $84.3\\%$ in \\textbf{Market-1501 $\\rightarrow$ CUHK-SYSU} setting.\nCompared with the SOTA MEB-Net, the Rank-1 and \\emph{m}AP respectively improved by $3.0\\%$ and $3.2\n\\%$.\nMeanwhile Rank-1 accuracy of $93.1\\%$ and \\emph{m}AP of $84.1\\%$ are gained in \\textbf{CUHK-SYSU $\\rightarrow$ Market-1501} setting.\nIt improves the Rank-1 accuracy and \\emph{m}AP by $4.3\\%$ and $8.1\\%$ compared with the SOTA MMT.\nNote that the performance on \\textbf{Market-1501 $\\rightarrow$ CUHK-SYSU} setting is even worse than direct transfer. 
\nThis is because there are only about two samples per class in \\textbf{CUHK-SYSU} on average, which harms the pseudo label generation.\nWe will discuss this problem in section \\textbf{Samples Augment}.\n\nThe above results demonstrate the outstanding performance of DAML thanks to its ability to learn knowledge from different subspaces and selectively transfer knowledge between two heterogeneous networks for unsupervised domain adaptation person Re-ID.\n\n\\subsection{Ablation Study}\n\nIn this section, we conduct ablation experiments on the \\textbf{CUHK-SYSU $\\rightarrow$ Market-1501} setting to assess the contribution of each component by separately removing it from DAML for training and evaluation.\n\nAs shown in Table~\\ref{table_ablation}, when removing $\\mathcal{L}_{Tsid} + \\mathcal{L}_{tri}(\\mathbf{s}^T)$, the Rank-1 accuracy drops by $0.3\\%$ and \\emph{m}AP drops by $0.5\\%$, since the reliable identity information is underutilized.\nIt illustrates that the ability to use the information in the source domain effectively is an essential factor in determining the model's performance.\nWhen removing $\\mathcal{L}_{id}$, which helps the student network learn the identity knowledge from the teacher, the performance drops of Rank-1 and \\emph{m}AP are $0.8\\%$ and $0.8\\%$, respectively, compared with the full DAML.\nThe performance drops because the knowledge from the different subspaces is ignored at the logit level.\nHowever, the knowledge can still be exchanged through the pseudo label generation.\nSimilarly, to validate the effectiveness of $\\mathcal{L}_{dom}$, we remove it from DAML.\nThe result also shows a margin of $0.6\\%$ in Rank-1 accuracy and $0.5\\%$ in \\emph{m}AP to the complete DAML, which demonstrates that $\\mathcal{L}_{dom}$ effectively helps the teacher network predict source-domain samples in the same way as the student.\nThe smooth classifier update (SCU) saves the knowledge learned in the last epoch.\nWhen it is removed, the Rank-1 accuracy drops by $2.1\\%$, and \\emph{m}AP drops by $3.1\\%$.\nThe results prove that learning the knowledge from different subspaces and taking advantage of correct identity information from the source domain are the two keys to solving UDA person Re-ID.\n\n\\subsection{Discussions}\n\n\\subsubsection{Influence of Backbone}\nTo meet the requirement of heterogeneous networks in the proposed DAML, we introduce ViT-Base, which contains more trainable parameters, as the backbone.\nTo clarify the source of the performance growth, we repeat the experiments of MMT~\\cite{conf\/iclr\/GeCL20} and UNRN~\\cite{conf\/aaai\/ZhengLZZZ21} while replacing the backbone with ViT-Base.\nAs shown in Tab.~\\ref{table_parameter}, \"Backbone\" denotes the architecture used to extract features in the testing stage.\nWhen ViT-Base replaces the backbone, the optimization process follows the setting in TransReID \\cite{conf\/iccv\/He0WW0021}.\nAs shown in Table~\\ref{table_parameter}, after replacing the backbone with ViT-Base, there is no significant change in the results.\nLimited by the symmetrical design of the mutual learning manner and the high similarity in the classification patterns of the two ViTs, the Rank-1 and \\emph{m}AP of MMT dropped by $2.1\\%$ and $0.8\\%$, respectively.\nFor the UNRN method, which focuses on making pseudo labels reliable through the memory mechanism, the ViT brings gains of $0.1\\%$ and $2.3\\%$ in Rank-1 and \\emph{m}AP with many more parameters.\nBased on the above experiments, we can conclude 
that the heterogeneous networks and the asymmetric learning strategy play a major role in the growth of performance.\n\n\\subsubsection{Heterogeneous Networks Analysis}\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=2.0\\columnwidth]{Fig3_CamGrad.pdf}\n\\caption{\\small\nVisualization results of the models on \\textbf{Market-1501}.\nFor each line, we show an input image, the area considered by the pre-trained ViT, the pre-trained CNN, and the different combinations of teacher and student in turn.\n}\n\\label{fig3:CamGrad}\n\\end{figure*}\n\\begin{table}[!t]\n\\centering\n\\setlength{\\tabcolsep}{1.0mm}\n{\n\\caption{\\small\nAsymmetric distillation analysis in terms of \\emph{m}AP (\\%) and Rank-1 (\\%) on \\textbf{CUHK-SYSU (CS)} $\\rightarrow$ \\textbf{Market-1501 (M)}.\n}\n\\resizebox{1.0\\linewidth}{!}{\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{tabular}{l|l||c|c}\n\\hline\n\\multicolumn{2}{c||}{Method} & \\multicolumn{2}{c}{CS $\\rightarrow$ M} \\\\\n\\hline\n\\ \\ Teacher & \\ \\ Student & \\ \\emph{m}AP \\ & \\ \\ R1 \\ \\\\\n\\hline\nIBN-ResNet-50 & ViT-Base & 84.1 & 93.1 \\\\\nViT-Base & ViT-Base & 82.0 & 91.7 \\\\\nViT-Base & IBN-ResNet-50 & 80.1 & 91.3 \\\\\nIBN-ResNet-50 & IBN-ResNet-50 & 79.7 & 91.0 \\\\\n\\hline\n\\end{tabular}\n\\label{table_asymmetric}\n}\n}\n\\end{table}\nOne of the keys to improving UDA person Re-ID is learning knowledge from the different subspaces.\nTo illustrate the effect of heterogeneous networks, we train the proposed DAML with different combinations of teacher and student.\nFrom Table.~\\ref{table_asymmetric}, we can figure out that the student with a heterogeneous teacher will achieve better performance.\nSpecifically, the performance of student ViT improved by $0.7\\%$ and $1.1\\%$ in Rank-1 and \\emph{m}AP with the asymmetric teacher. 
\nThe results in the student CNN is similar, Rank-1 and \\emph{m}AP are enhanced by $2.5\\%$ and $5.3\\%$.\nThese experimental results strongly prove the necessity of using two heterogeneous networks to work as the teacher and student.\nAnd the knowledge from different subspaces has the capacity to help the student to learn broader knowledge.\n\nWhen ViT is seen as the student, the benefit from the heterogeneous teacher is more evident than the improvement that CNN works as the student.\nThe heterogeneous teacher for ViT brings in $1.4\\%$ and $2.1\\%$ on Rank-1 and \\emph{m}AP.\nWhile it only improves Rank-1 and \\emph{m}AP in $0.3\\%$ and $0.4\\%$ for the student CNN.\nThis phenomenon can be ascribed to the difference in the range of receptive field of these two networks that the former can consider the relationship between any two areas, but the size of convolution kernels limits the latter.\nIt gives ViT has the ability to learn the pattern that CNN applied to classify identities but not vice versa.\n\n\\subsubsection{Visualization}\nThe proposed DAML can make the student learn the knowledge from different subspaces.\nTo further illustrate the effectiveness of DAML, which can selectively transfer the knowledge between two networks, we apply Score-CAM \\cite{conf\/cvpr\/WangWDYZDMH20} to visualize the pixel-wise attention areas on \\textbf{CUHK-SYSU $\\rightarrow$ Market-1501} setting.\nFig.~\\ref{fig3:CamGrad} visualizes individual attention patterns for the three people from the target domain, where each column represents the attention area of the pre-trained CNN, ViT, and the different combinations of teacher-student.\nFrom the first two columns, we can observe that the classifying patterns of CNN and ViT are different, which states the difference between their embedding spaces.\nWith these discrepancies, the heterogeneous networks can be improved by learning knowledge from heterogeneous networks.\nIn the last four columns, we can find that the networks mutual learning with heterogeneous networks can better consider the individual by the whole pedestrian while also taking into account many details that identify the persons more efficiently.\nOn the contrary, the recognition patterns of the networks that mutual learning with the same network have no significant change.\nThe visualization demonstrates the function of DAML in learning the knowledge from different subspaces improving the performance of the student.\n\n\n\n\\subsubsection{Samples Augment}\n\\begin{table}[!t]\n\\centering\n\\setlength{\\tabcolsep}{1.0mm}\n{\n\\caption{\\small\nInfluence of Sample Augment for clustering algorithm in terms of \\emph{m}AP (\\%) and CMC (\\%) on \\textbf{Market-1501 (M) $\\rightarrow$ CUHK-SYSU (CS)}.\n}\n\\label{table_augment}\n\\resizebox{1.0\\linewidth}{!}{\n\\renewcommand{\\multirowsetup}{\\centering}\n\\begin{tabular}{l|c||c|ccc}\n\\hline\n\\multirow{2}{*}{\\ Method} & \\multirow{2}{*}{\\makecell{Repeat \\\\Times}} & \\multicolumn{4}{c}{M $\\rightarrow$ CS} \\\\\n\\cline{3-6} & & \\emph{m}AP & \\ R1 \\ & R5 & R10 \\\\\n\\hline\nDirectly Transfer & - & 86.0 & 87.2 & 94.1 & 95.0 \\\\\n\\hline\nDAML & 0 & 84.3 & 86.2 & 92.6 & 94.6 \\\\\n\\hline\n\\multirow{2}{*}{+ Random Crop} & 1 & 89.3 & 90.6 & 95.5 & 96.9 \\\\\n & 2 & 89.1 & 90.4 & 95.4 & 96.6 \\\\\n\\hline\n\\multirow{2}{*}{+ Random Erase} & 1 & 88.3 & 90.0 & 95.0 & 96.3 \\\\\n & 2 & 89.2 & 90.6 & 95.5 & 96.6 \\\\\n\\hline\n\\multirow{2}{*}{\\makecell{+ Random Crop \\\\+ Random Erase}} & 1 & 88.5 & 89.8 & 95.4 & 96.8 \\\\\n & 2 & 87.6 & 
89.0 & 95.2 & 96.6 \\\\\n\\hline\nSupervised & - & 90.8 & 95.2 & 96.6 & 89.0 \\\\\n\\hline\n\\end{tabular}\n}\n\n}\n\\end{table}\nDue to the small number of samples for each class in \\textbf{CUHK-SYSU}, the performance of the clustering algorithm is severely limited.\nThe simplest and most direct way to address this problem is by augmenting the samples with random erase~\\cite{conf\/aaai\/Zhong0KL020} and random crop, which can generate new samples while keeping the original identity when extracting features for the clustering algorithm.\nAs shown in Tab.~\\ref{table_augment}, the performance with augmented data is better than the original. \nSpecifically, the Rank-1 and \\emph{m}AP are enhanced by $4.4\\%$ and $5.0\\%$ when applying the random crop method.\nSimilarly, the random erase improves Rank-1 and \\emph{m}AP by $4.4\\%$ and $4.9\\%$.\nThe above experimental results demonstrate the necessity of having enough samples for each class in the clustering algorithm.\nNevertheless, applying both random crop and random erase is not as effective as applying only one. \nThe Rank-1 and \\emph{m}AP are only increased by $3.6\\%$ and $4.2\\%$.\nThis suggests that excessive augmentation may harm the identity knowledge and reduce the benefits from the augmented samples.\n\nUnlike datasets collected for research, the number of identities and the number of samples per identity are unknown in a real-world system.\nThis information cannot be counted from the raw data unless it is annotated.\nHowever, one of the advantages of unsupervised domain adaptation Re-ID is avoiding annotation on the target domain, which means the class-dependent hyper-parameters are not available.\nBecause clustering algorithms take a long time to run on large datasets and occupy most of the total training time, hyper-parameter selection experiments may be unacceptable for real-world systems.\nThus, a method that can mine identity knowledge from the target domain without any clustering algorithm may make more sense for real-world applications.\nOn the other hand, an efficient data augmentation method can rescue the UDA methods based on clustering algorithms.\n\n\\section{Conclusion}\nIn this paper, we proposed Dual-level Asymmetric Mutual Learning, termed DAML, to learn knowledge from a broader scope via asymmetric mutual learning with heterogeneous networks for unsupervised domain adaptation person Re-ID.\nOur method aims to learn the knowledge from various subspaces and transfer the identity knowledge from the source to the target domain.\nThe former can improve feature expressiveness while also rectifying potential faults during training.\nThe latter takes full advantage of the identity knowledge from the source domain to improve the performance in the target domain.\nSpecifically, DAML first generates the pseudo labels according to the features extracted by the heterogeneous networks, which are more reliable due to the consideration of various subspaces.\nIn this process, the knowledge from the two subspaces is exchanged in a hard distillation manner.\nWith the smooth classifier update, the classifiers can maintain the knowledge from the last epoch.\nThen, the teacher is trained on both the source-domain and target-domain datasets to utilize the ground-truth labels and transfer the knowledge to the target domain with the domain knowledge from the student.\nTo better adapt to the target domain, the student is only trained on the target-domain dataset and benefits from the guidance of the teacher, which has learned 
the source-domain knowledge.\nExperiments on four different experiment settings prove that it is essential to learn the knowledge from various subspaces and demonstrate the effectiveness of the proposed DAML for unsupervised domain adaptation person Re-ID.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Riemann-squared modified gravities}\n Let me not to list all papers which mention modified gravity\ntheories with Lagrangian composed of the three invariants\nquadratic in the Riemannian curvature.\n To my thinking, all Riemann-squared-gravities (as well\nas $f(R)-$gravities, $f(R)\\neq R$) are not appropriate.\n\n So,\n$(a+bR+R^2\/2)-$gravity ($R$ is the Ricci scalar) leads to\nincompatible PDE system (the trace part,\n$\\mathbf{E}_\\mu{}^\\mu=0$, can be added or subtracted freely):\n\\[\n\\mathbf{E}_{\\mu\\nu}=R_{;\\mu;\\nu} -\nR_{\\mu\\nu}(b+R)+g_{\\mu\\nu}(...R..R^2)=0;\n\\]\nsome people prefer to move the principal derivatives (4-th\norder) to RHS, perhaps trying to hide them in the\nenergy-momentum tensor; however it is not good way to deal with\nPDEs. The next combination of prolonged equations,\n\\[ \\mathbf{E}_{\\mu\\nu;\\lambda} - \\mathbf{E}_{\\mu\\lambda;\\nu } =0, \\]\nafter cancellation of principal derivatives (5-th order), gives\n new 3-d order equations which are irregular in the second jets:\n the term\n \\[ R_{;\\ve}R^{\\ve}{}_{\\mu\\nu\\lambda} \\]\n can not be cancelled by other terms which contain only Ricci tensor\n and scalar.\nThe rank of the new subsystem depends on the second derivatives,\n $g_{\\mu\\nu,\\lambda\\rho}$\n(for the definition of PDE regularity see \\cite{eima}).\n\n As a rule, researchers of modified gravities concentrate their\n efforts on most symmetrical problems including cosmological\n solutions with the spherical symmetry. In this case the irregularity\n of the above system is safely masked: the new subsystem becomes just\n identity due to skew-symmetry of its two indices.\n\nAlso irregular in second jets are equations of Gauss-Bonnet (or\nLovelock) gravity with extra dimension(s).\n\nThe most interesting case, $R_{\\mu\\nu} G^{\\mu\\nu}-$gravity\n(the Ricci tensor is contracted with the Einstein tensor), gives the\nfollowing compatible system:\n \\begin{equation}} \\def\\ee{\\end{equation} \\label{rg} -\\mathbf{D}_{\\mu\\nu}=\n G_{\\mu\\nu;\\lambda}{}^{;\\lambda}+\n G^{\\epsilon \\tau} (2R_{\\epsilon\n\\mu\\tau\\nu } - \\fr12g_{\\mu\\nu}R_{\\epsilon \\tau }) =0;\n \\ \\, \\mathbf{D}_{\\mu\\nu;\\lambda}g^{\\nu\\lambda}\\equiv 0\\,.\n\\ee\n In linear approximation, there are simple evolution\nequations for Ricci tensor and scalar (total $D(D-1)\/2\\,$\npolarizations---including one scalar polarization):\n\\[ \\square R=0, \\ \\square R_{\\mu\\nu}=0. \\]\nUsing the Bianchi identity,\n $R_{\\mu\\nu[\\lambda\\epsilon;\\tau]}\\equiv 0$,\n its prolongation and contractions,\n\\[ R_{\\mu\\nu[\\lambda\\epsilon;\\tau];\\rho} g^{\\tau\\rho}\\equiv0, \\\n R_{\\mu\\nu[\\lambda\\epsilon;\\tau]}g^{\\mu\\tau}\\equiv0, \\]\none can write the evolution equation\n for the Riemann (or Weyl) tensor\n(in linear approximation again):\n \\begin{equation}} \\def\\ee{\\end{equation} \\label{riem} \\square\n R_{\\mu\\epsilon\\nu\\tau}=R_{\\mu\\nu,\\epsilon\\tau}-\n R_{\\mu\\tau,\\nu\\epsilon}+R_{\\epsilon\\tau,\\mu\\nu}\n -R_{\\epsilon\\nu,\\mu\\tau}.\n \\ee\n This equation is more complex: it has the source term (in its\n RHS)\n composed from the Ricci tensor. 
As a result in general case, when\n the Ricci-polarizations do not vanish, the polarizations\n related to the Weyl tensor [and responsible for gravity, tidal\n forces; their number is $D(D-3)\/2$] should grow linearly with\n time,\n \\[a(t) = (c_0 + c_1 t)\\exp(- i \\omega t),\\]\n while linear\n approximation is valid.\n\n\nThis means that the regime of weak gravity is linearly\nunstable, as well as the trivial solution itself\n (i.e., \\emph{nothing is unreal} in this theory). Hence\nthe theory is physically irrelevant---we still live in very weak\ngravity. (Note that in General Relativity, when the Ricci tensor is\nexpressed\n through the energy-momentum tensor\n [which does not expand into plane waves---with dispersion of light in\n vacuum],\n the equation (\\ref{riem}) defines\n radiation of gravitation waves.)\n\nThe linear instability here does not contradict the correctness\nof Cauchy problem; the modern compatibility theory (e.g., the\nPommaret's book \\cite{eima}) gives easy answers about the Cauchy\nproblem, number of polarizations, and so on (especially easy for\nanalytical PDE systems).\n\nAnd the last remark. Let ${\\cal L}=\\sqrt{-g}L$ is a homogeneous\nLagrangian density of order $p$ in metrics, that is\n\\[ {\\cal L}(\\k g_{\\mu\\nu})=\\k^p {\\cal L}(g_{\\mu\\nu})\\, . \\]\nThe result of variation, the symmetric tensor\n \\[ \\mathbf{D}^{\\mu\\nu} = \\frac1 {\\sqrt{-g}}\n \\frac{\\delta} \\def\\eps{{\\epsilon}} \\def\\ve{\\varepsilon {\\cal L}}{\\delta} \\def\\eps{{\\epsilon}} \\def\\ve{\\varepsilon g_{\\mu\\nu}}, \\\n \\mathbf{D}^{\\mu\\nu}{}_{;\\nu}\\equiv0, \\]\n has the next relation to its Lagrangian (the trace is\n proportional to the Lagrangian scalar up to a covariant divergence):\n \\[ \\mathbf{D}^{\\mu\\nu} g_{\\mu\\nu} = p\\, L + A^{\\mu}{}_{;\\mu}. \\]\n\nIf $ \\mathbf{D}_{\\mu\\nu}$ (with\n$\\mathbf{D}^{\\mu\\nu}{}_{;\\nu}\\equiv0$) is found just by using the\nBianchi identity, this relation reveals the corresponding\nLagrangian, see {eg.}\\\n equation (\\ref{rg}).\n\n\\section{Frame field theory}\nThe theory of frame field, $h^a{}_\\mu$, also known as Absolute\nParallelism (AP), has large symmetry group\nwhich includes both global symmetries of Special Relativity (this\n defines the signature) and local symmetries, the\npseudogroup {\\it Diff}$(D)$, of General Relativity.\n\n AP is more appropriate as a\nmodified gravity, or just a good theory with topological charges and\nquasi-charges (their phenomenology, at some\nconditions and to a certain extent, can look\n like a quantum field theory) \\cite{tc}. In this\ncase, the Ricci tensor has very specific form (due to field\nequations; linear approximation): \n\\[ R_{\\mu\\nu}\\propto \\Phi_{(\\mu,\\nu)}, \\, \\\n \\\n \\Phi_\\mu=h_a{}^\\nu (h^a{}_{\\nu,\\mu}-h^a{}_{\\mu,\\nu})\n\\ \\mbox{-- trace of torsion};\n\\]\nthis form does not cause the Weyl polarizations growth, see\n(1); so the weak gravity regime (but not the trivial\nsolution!$\\cdot\\!$) is stable.\n\n \\subsection{{Co- and contra-singularities\n and unique equation }}\n\n\nThere is one unique equation of AP (non-Lagrangian, with the\nunique $D$) which solutions are free of arising singularities.\n The formal integrability test\n \\cite{pommaret} can be extended to the cases of degeneration of\neither co-frame matrix, $h^a{}_\\mu$ (co-singularities), or\ncontra-variant frame (or density of some weight), serving as a\nlocal and covariant test for singularities. 
This test singles out\nthe next equation (and $D$=5 \\cite{tc};\n$\\eta_{ab}=\\mbox{diag}(-1,1,\\ldots,1)$, then $h=\\det\nh^a{}_\\mu=\\sqrt{-g}$):\n \\begin{equation} \\label{ue}\n {\\bf E}_{a\\mu}=L_{a\\mu\\nu;\\nu}- \\fr13 (f_{a\\mu}\n +L_{a\\mu \\nu }\\Phi _{\\nu })=0\\, ;\n\\end{equation}\n here \\qquad \\qquad \\qquad $ L_{a\\mu \\nu }=L_{a[\\mu \\nu]}=\n\\Lambda_{a\\mu \\nu }-S_{a\\mu \\nu }-\\fr23 h_{a[\\mu }\\Phi_{\\nu]},$\n \\[\n\\Lambda_{a\\mu \\nu }=2h_{a[\\mu,\\nu]}, \\ S_{\\mu \\nu \\l}=3\\L_{[\\mu\n\\nu \\l]}, \\ \\Phi_\\mu=\\L_{aa \\mu}, \\ f_{\\mu\\nu}=2\\Phi_{[\\mu,\\nu]}\n. \\]\n Coma \",\" and semicolon \";\" denote partial derivative and\ncovariant differentiation with symmetric Levi-Civita connection,\nrespectively. One should retain the identities:\n \\begin{equation}} \\def\\ee{\\end{equation}\\label{iden}\n \\Lambda_{a[\\mu\\nu;\\lambda]} \\equiv 0\\,,\n \\ \\ h_{a\\l}\\Lambda_{abc;\\l}\\equiv f_{cb}\\,\n (= f_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}h_c^\\mu} \\def\\n{\\nu} \\def\\p{\\pi h_b^\\n), \\ f_{[\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n;\\l]}\\equiv0\n .\\ee\n Equation ${\\bf E}_{a\\mu;\\mu}=0$ gives\n `Maxwell-like equation' (I omit\n $\\eta_{ab}$ and $g^{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}=h_a^\\mu h_a^\\nu$\n in contractions):\n\\begin{equation}\\label{max}{\n(f_{a\\mu}\n +L_{a\\mu \\nu }\\Phi _{\\nu })_{;\\mu}=0, \\mbox{ or \\ }\n f_{\\mu\\nu;\\nu}=(S_{\\mu \\nu\\l }\\Phi _{\\l })_{;\\n} \\ \\\n(= -\\fr1 2 S_{\\mu \\nu\\l }f_{\\n\\l}, \\mbox{ see below}) \\, .}\n\\end{equation}\nReally (\\ref{max})\nfollows from the symmetric part, because\nskewsymmetric one gives the identity; the\ntrace part\n becomes irregular (principal\n derivatives vanish) if $D=4$ (forbidden $D$):\n \\[{\n2{\\bf E}_{[\\nu\\mu]}=S_{\\mu\\nu\\l;\\l}=0, \\ {\\bf\nE}_{[\\nu\\mu];\\nu}\\equiv 0; \\ \\ {\\bf E}_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\m}={\\bf\nE}_{a\\mu} \\def\\n{\\nu} \\def\\p{\\pi}h_b^\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\eta^{ab} =\\fr{4-D}3 \\Phi_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi;\\mu}+ (\\L^2)=0.\n } \\]\nSystem (\\ref{ue}) remains compatible under adding\n$f_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}=0$, see (\\ref{max}); this is not the case for other\ncovariants, $S, \\Phi$, or Riemannian curvature, which relates to\n$\\L$ as usually:\n \\[ R_{a\\mu\\nu\\lambda}= 2h_{a\\mu;[\\nu;\\lambda]}; \\\nh_{a\\mu} \\def\\n{\\nu} \\def\\p{\\pi}h_{a\\nu;\\l}=\\fr12 S_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n\\l}-\\L_{\\l\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}.\\]\n\nGR is a special case of AP. Using 3-minors (ie., co-rank 3) of\nco-metric,\n \\[ [\\mu \\nu ,\\varepsilon \\tau ,\\alpha \\beta ] \\equiv\n \\partial^3 (-g)\/(\\partial g_{\\mu \\nu }\n\\partial g_{\\varepsilon \\tau }\\partial g_{\\alpha \\beta }) ,\\,\\]\nand their skew-symmetry, one can write the vacuum GR equation\n as\nfollows: $\\, 2(-g)G^{\\mu \\nu }=$\n \\begin{equation}} \\def\\ee{\\end{equation} \\label{gr}{\n [\\mu \\nu ,\\varepsilon \\tau ]_{,\\ve\\tau}+ (g'^2)=\n [\\mu \\nu ,\\varepsilon \\tau , \\alpha\n\\beta ](g_{\\alpha \\beta ,\\varepsilon \\tau }+ g^{\\rho \\phi }\\Gamma\n_{\\rho ,\\varepsilon \\tau } \\Gamma _{\\phi ,\\alpha \\beta })=\n [\\mu \\nu ,\\varepsilon \\tau , \\alpha\n\\beta ]R_{\\alpha \\beta \\varepsilon \\tau } =0.}\\ee\n Similarly, all AP equations can be rewritten\n that 2-minors of co-frame,\n\\[{ \\pmatrix{\\mu\\; \\nu\n\\vspace{-2.mm}\\cr a\\;b} =\\frac{\\partial^2 h}{\\partial h^{a}{}_{\\mu}\n\\partial h^{b}{}_{\\nu}}\n= 2!\\, h h^{\\mu}_{[a} h^{\\nu}_{b]} \\ \\ \n\\mbox{i.e. 
} \\\n [\\mu_1 \\nu_1 ,\\ldots, \\mu_k \\nu_k ]=\\fr1{k!}\n \\pmatrix{\\mu_1\\,\\cdots\\, \\mu_k\n\\vspace{-2.mm}\\cr a_1\\,\\cdots\\, \\alpha} \\def\\b{\\beta} \\def\\g{\\gamma_k}\\!\\! \\pmatrix{\\nu_1\\,\\cdots\\,\n\\nu_k\n\\vspace{-2.mm}\\cr a_1\\,\\cdots\\, \\alpha} \\def\\b{\\beta} \\def\\g{\\gamma_k} \\\n),} \\] completely define the coefficients at the principal\nderivatives. The simplest compatible equation\n(see Einstein--Mayer classification of compatible\nequations in 4$D$ AP \\cite{eima}),\n \\begin{equation}} \\def\\ee{\\end{equation} \\label{estar} {\\bf\nE^*}_{a\\mu}=\\L_{a\\mu\\nu;\\nu}=0, \\ {\\bf\nE^*}_{a\\mu;\\mu}\\equiv0 \\ee\n gives\n \\[{ h^{2}{\\bf E^*}_{a}{}^{\\mu }=\n(h_{a\\alpha ,\\beta \\nu }-h_{a\\beta,\\alpha \\nu } )(-g)g^{\\alpha\n\\mu}g^{\\beta \\nu}+(h^\\prime{}^{2})\n= h_{a\\alpha ,\\beta \\nu }[\\vspace{1mm}\\alpha \\mu ,\\beta \\nu\n\\vspace{-1mm}] +(h^\\prime{}^{2})\\ .}\\]\n Like determinant,\n$k$-minors ($k\\leq D$)\n are multi-linear expressions in\n elements of $h^a{}_\\mu$-matrix, and some minors do not\nvanish when rank$\\,h^a{}_\\mu=D-1$.\n\n For any AP equation [including\n Eqs.~(\\ref{gr}) and (\\ref{estar})],\n with the \\emph{unique exception}, Eq.~(\\ref{ue}),\n (where only skew-symmetric\n part participates in identity and can be written with 2- and 3-minors,\n while symmetric part needs 1-minors vanishing too\n rapidly), the\n regularity of principal terms survives (and\n symbol $G_2$ keeps involutive)\n if\n rank$\\,g_{\\mu\\nu}=D-1$.\n\nThis observation is important and relevant to the problem of\nsingularities; it means seemingly that the unique equation\n(\\ref{ue}) does not suffer of nascent co-singularities in\nsolutions of general position.\n\nThe other case is contra-singularities \\cite{tc} relating to\ndegeneration of contra-variant density of some weight:\n \\begin{equation}} \\def\\ee{\\end{equation}\n\\label{dens} { H_a{}^\\mu= h^{1\/D_*} h_a{}^\\mu;\n H=\\det H^a{}_\\mu, \\\nh_a{}^\\mu= H^{1\/(D-D_*)} H_a{}^\\mu\\, .} \\ee\n Here $D_*$ depends on\nequation: $D_*=2$ for GR, $D_*=\\infty$ for Eq.~(\\ref{estar}), and\n$D_*=4$ for the unique equation (which can be written 3-linearly\nin $H_a{}^\\mu$ and its derivatives \\cite{tc}). %\n\nIf integer, $D_*$ is the forbidden spacetime dimension. The nearest\npossible $D=5$ is of special interest: in this case minor\n$H^{-1} H^a{}_\\mu$ simply coincides with $h^a{}_\\mu$; that is,\ncontra-singularity simultaneously implies co-singularity (of high\nco-rank), but that is impossible! 
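As a quick check of the weight bookkeeping in (\\ref{dens}), added here for the reader's
convenience (reading $H^a{}_\\mu$ as the matrix inverse of $H_a{}^\\mu$): one finds
$H^a{}_\\mu=h^{-1\/D_*}h^a{}_\\mu$, $H=h^{(D_*-D)\/D_*}$ and
$\\det H_a{}^\\mu=h^{(D-D_*)\/D_*}$. For $D_*=4$, $D=5$ this gives
$\\det H_a{}^\\mu=h^{1\/4}$, so a degeneration of $H_a{}^\\mu$ forces $h\\to0$; moreover the
matrix of $4\\times4$ minors of $H_a{}^\\mu$ equals
$\\det(H_a{}^\\mu)\,H^a{}_\\mu=h^{1\/4}h^{-1\/4}h^a{}_\\mu=h^a{}_\\mu$, so where
rank$\,H_a{}^\\mu=4$ the co-frame itself has rank $\\leq1$.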
The possible interpretation of\nthis observation is:\n for the unique equation,\n contra-singularities are impossible if $D=5$\n (perhaps due to some specifics of\n\\emph{Diff}-orbits on $H_a{}^\\mu$-space).\n\\ %\nThis leaves no room for changes in the theory (assuming\n Nature does not like singularities).\n\n \\subsection{{Tensor $T_{\\mu\\nu}$ and post-Newtonian effects\n (Pauli's questions to AP)}}\n One might rearrange ${\\bf E}_{(\\mu\\nu)}=0$ picking out\n (into LHS) the Einstein tensor,\n $G_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}=R_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}-\\fr12g_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}R$, but the rest terms are not\n proper energy-momentum tensor: they contain linear terms\n $\\Phi_{(\\mu} \\def\\n{\\nu} \\def\\p{\\pi;\\nu)}$ (no positive energy (!); instead one more\n presentation of `Maxwell equation' (\\ref{max}) is\n possible---as divergence of symmetrical tensor).\n\n However, the prolonged equation\n${\\bf E}_{(\\mu\\nu);\\l;\\l}=0$ can be written as $RG$-gravity:\n \\begin{equation}} \\def\\ee{\\end{equation} \\label{tmunu}{\n G_{\\mu \\nu\n;\\lambda ;\\lambda }+ G_{\\epsilon \\tau} (2R_{\\epsilon \\mu \\tau \\nu\n} - \\fr12g_{\\mu \\nu }R_{\\epsilon \\tau }) =T_{\\mu\\nu} (\\Lambda\n'^{2},\\cdots), \\ T_{\\mu\\nu;\\nu}=0; }\\ee\n up to\nquadratic terms, \\ \\ ${ T_{\\mu\\nu}=\n\\fr29(\\fr14g_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}f^2-f_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\l}f_{\\n\\l})\n+A_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\eps\\n\\tau}(\\L^2){}_{,\\eps\\tau};\n } $\\\ntensor $A$ has symmetries of Riemann tensor, so the term $A''$\nadds nothing to momentum and angular momentum.\n\nIt is worth noting that:\n\\\\\n(a) the theory does not match GR, but reveals $RG$-gravity\n(sure, (\\ref{tmunu}) does not contain all);\n\\\\\n(b) only $f$-component (three transverse polarizations in $D=5$)\ncarries $D$-momentum and angular momentum (`powerful' waves); other\n12 polarizations are `powerless', or `weightless'. This is a very\nunusual feature---impossible\n in Lagrangian tradition; how to quantize ?\n\n\\\\\n(c) $f$-component feels only metric and $S$-field,\n see (\\ref{max}), but $S$\nhas effect only on polarization of $f$: $S_{[\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n\\l]}$ does not\nenter eikonal equation, and $f$ moves along usual Riemannian\ngeodesic;\ntrace $T_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\m}=\\fr1{18}f_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}f_{\\mu} \\def\\n{\\nu} \\def\\p{\\pi\\n}$ can be non-zero\nif $f^2\\neq0$;\n\\\\\n(e) it should be stressed that $f$-component is\nnot usual (quantum) EM-field---just important covariant\nresponsible for energy-momentum (there is\nno gradient invariance for $f$; phenomenological quantum fields\nshould account somehow for topological (quasi)charges \\cite{tc}).\n\nAnother strange feature is the instability of trivial solution:\n some `powerless' polarizations grow linearly with time\n in presence of\n `powerful' $f$-polarizations. Really, the linearized\n Eq.~(\\ref{ue}) and identity (\\ref{iden}) give\n (following equations should be understood as linearized):\n \\[ {\n \\Phi_{a,a}=0 \\ (D\\neq 4), \\\n 3\\Lambda_{abd,d}= \\Phi_{a,b}-2\\Phi_{b,a},\n \\ \\Lambda_{a[bc,d],d}\\equiv0\\, \\ \\Rightarrow \\\n 3\\Lambda_{abc,dd}=-2 f_{bc,a}\\, . 
}\\]\n The last D`Alembert equation has the \\emph{source} in its RHS.\n Some components of $\\Lambda$\n (most symmetrical irreducible parts)\n do not grow (as well as curvature), because\n (linearized equations)\n \\[{ S_{abc,dd}=0, \\ \\Phi_{a,dd}=0, \\ \\,\n f_{ab,dd}=0, \\ R_{abcd,ee}=0. }\\]\n However the least symmetrical $\\L$-components\n do go up with time (three growing but powerless\n polarizations),\n if the ponderable waves (three $f$-polarizations)\n do not vanish; this should be the case for\n solutions of general position. Again, nothing is unreal!\n\n\n\\subsection{Expanding\n O$_4$-symmetrical solutions and cosmology}\nThe unique symmetry of AP equations gives scope for symmetrical\n solutions. In contrast to GR, Eqn.~(\\ref{ue}) has\nnon-stationary spherically\n symmetric solutions.\nThe $O_4$-symmetric field can be generally written \\cite{sn} as\n\\begin{equation} \\label{spsy}{\n h^{a}{}_{\\mu }(t,x^i)=\n \\pmatrix{a&bn_{i} \\cr\n cn_{i} & en_{i}n_{j}+d\\Delta _{ij} };\n\\ \\ i,j=(1,2,3,4), \\ n_i=\\frac{x^i}{r}. }\n\\end{equation}\n Here $a,\\ldots,e$ are functions of time, $t=x^0$, and radius\n $r$, $\\Delta_{ij}=\\delta_{ij}-n_i n_j, \\ r^2=x^i x^i$.\n As functions of radius, $b,c$ are odd,\n while the others are even;\n other boundary conditions: $e=d$ at $r=0$,\n and $h^a{}_\\mu\\to \\delta^{\\,a}_\\mu$ as $r\\to \\infty$.\nPlacing in (\\ref{spsy}) $b=0, e=d$ (the other interesting choice\nis $b{=}c{=}0$)\n and making integrations one can arrive to the next system\n (resembling dynamics of Chaplygin gas; dot and prime\n denote derivation on time and radius, resp.)\n\\begin{equation}\\label{gas}{\nA^{\\cdot}=AB^\\prime -BA^\\prime +\\frac{\\,3}r AB, \\ B^{\\,\\cdot\n}=AA^\\prime -BB^\\prime -\\frac{\\,2}r B^{2}, \\mbox{where }A=\n \\frac {\\,a} e=e^{1\/2},\\ B=-\\frac c {\\,e}\\, .}\n\\end{equation}\nThis system has non-stationary solutions, and a single-wave\nsolution (of proper `sign') might serve as a suitable (stable)\ncosmological expanding background.\n The condition $f_{\\mu\\nu}{=}0$ is a must for solutions with such\n a high symmetry (as well as\n $S_{\\mu\\nu\\l}{=}0$); so, these $O_4$-solutions\n carry no energy, weight nothing---some lack of \\emph{gravity} !\n\n More realistic cosmological model might look like a single\n $O_4$-wave\n (or a sequence of waves) moving along the radius and being\n filled with chaos, or stochastic waves, both\n powerful (\\emph{weak}, $\\Delta h\\ll1$) and\n powerless ($\\Delta h<1$, but intense enough that\n to give non-linear\n fluctuations with $\\Delta h\\sim1$).\n The development and examination of stability\n of this model is an interesting problem.\n The inhomogeneity of metric in giant $O_4$-wave\n can serve as\n a time-dependent `shallow dielectric guide' for that weak\n $f-$waves. The ponderable waves (which slow down\nthe large wave) should have wave-vectors almost tangent to the\n$S^3$-sphere of wave-front to be trapped inside this\nshallow wave-guide; the imponderable waves can grow up, and\npartly escape from the wave-guide,\n and their wave-vectors\ncan be less tangent to the $S^3$-sphere.\nThe waveguide thickness can be small for an `observer' in the center\nof $O_4$-symmetry, but in co-moving coordinates it can be very\nlarge (still $\\ll R$). This model can explain well the SNe1a redshift\n data \\cite{sn}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nLet $\\Gamma=\\operatorname{PSL}(2,\\Z)$ be the modular group and $\\Hb$ the upper half-plane.\nWe consider the (completed) \nnonholomorphic Eisenstein series $E^\\ast(z,s)$ for the modular\ngroup which is given for $z\\in\\Hb$ and $\\Re(s)>1$ by\n\\begin{equation}\nE^\\ast(z,s):=\\pi^{-s}\\Gamma(s)\\left(\\frac{1}{2}\\sum_{(m,n)\\in\\Z^2\\setminus (0,0)}\n\\frac{y^s}{|cz+d|^{2s}}\\right).\n\\end{equation}\nIt is well known that for fixed $z$, $E^\\ast(z,s)$ admits a meromorphic \ncontinuation to the whole $s$-plane, and satisfies the functional equation\n\\[E^\\ast(z,s)=E^\\ast(z,1-s).\\]\nIts only singularities are simple poles at $s=1$ and $s=0$. As a function of\n$z$, it is invariant under $\\Gamma$\n\\[E^\\ast(\\gamma z,s)=E^\\ast(z,s), \\quad \\gamma\\in\\Gamma.\\]\nIn particular $E^\\ast(z,s)$ is invariant under $z\\mapsto z+1$ and so it has a\nFourier expansion\n\\[E^\\ast(x+iy,s)=\\sum_{n\\in\\Z}a_n(y,s)e^{2\\pi ix}.\\]\nThe zeroth Fourier coefficient $\\varphi_0(y,s)$ is given by\n\\[\\varphi_0(y,s)=\\Lambda(2s)y^s+\\Lambda(2s-1)y^{1-s},\\]\nwhere $\\Lambda(s)$ is the completed zeta function\n\\[\\Lambda(s)=\\pi^{-s\/2}\\Gamma\\left(\\frac{s}{2}\\right)\\zeta(s).\\]\nLet $a>0$. The zeros of $\\varphi_0(a,s)$ \nhave been studied by various authors. Hejhal\n\\cite[Proposition 5.3(f)]{He} proved that for all $a\\ge 1$, the complex zeros\nof $\\varphi_0(a,s)$ are on the critical line $\\Re(s)=1\/2$. Lagarias and Suzuki \n\\cite{LS} reproved this result and also determined the occurrence of real\nzeros. Ki \\cite{Ki} proved that all complex zeros are simple.\nPutting these results together, we have following theorem.\n\\begin{theo}\\label{th0.1}\nFor each $a\\ge 1$ all complex zeros of $\\varphi_0(y,s)$ are simple and lie on\nthe \ncritical line $\\Re(s)=1\/2$. Moreover there is a critical value\n$a^\\ast=4\\pi e^{-\\gamma}=7.055...$\nsuch that the following holds:\n\\begin{itemize}\n\\item[1)] For $1\\le a\\le a^\\ast$ all zeros are on the critical line.\n\\item[2)] For $a>a^\\ast$ there are exactly two zeros off the critical\n line. These \nare real simple zeros $\\rho_a, 1-\\rho_a$ with $\\rho_a\\in(1\/2,1)$. The zero\n$\\rho_a$ is a nondecreasing function of $a$ and $\\lim_{a\\to\\infty}\\rho_a=1$.\n\\end{itemize}\n\\end{theo}\nThe first aim of this paper is to point out that there is actually a\nspectral interpretation of the zeros of $\\varphi_0(a,s)$, which \ngives another proof of this theorem. For $a> 0$ let $\\Delta_a$ be the\ncut-off Laplacian $\\Delta_a$ introduced by Lax and Phillips \\cite{LP}.\nIt acts in the subspace $\\H_a\\subset L^2(\\Gamma\\ba \\Hb)$ of all\n$f\\in L^2(\\Gamma\\ba \\Hb)$ satisfying\n$\\int_0^1f(x+iy)\\,dx=0 $ for almost all $y\\ge a$.\nThe cut-off Laplacian $\\Delta_a$ is a nonnegative self-adjoint operator with\npure point spectrum. The spectrum has been studied by Colin de Verdiere \n\\cite{CV}. 
Let \n\\begin{equation}\\label{0.2}\nc(s)=\\frac{\\Lambda(2s-1)}{\\Lambda(2s)},\\quad s\\in\\C.\n\\end{equation}\nThe following theorem is an immediate consequence of \\cite[Th\\'eor\\`eme 5]{CV}.\n\n\\begin{theo}\\label{th0.2}\nFor every $a>0$, the spectrum of $\\Delta_a$ is the union of the cuspidal\neigenvalues \n$0<\\lambda_1\\le\\lambda_2\\le\\cdots\\to\\infty$ of $\\Delta$ and a sequence of\neigenvalues\n$$0<\\mu_0(a)< \\mu_1(a)<\\cdots$$\nwith the following properties:\n\\begin{enumerate}\n\\item[1)] Each eigenvalue $\\mu_j(a)$ is a decreasing function of $a$.\n\\item[2)] If $a\\ge1$, each eigenvalue $\\mu_j(a)$ has multiplicity 1. Moreover\n $\\lim_{a\\to\\infty}\\mu_0(a)=0$ and if $j\\ge 1$, then $\\mu_j(a)\\ge 1\/4$ and\n $\\lim_{a\\to\\infty}\\lambda_j(a)= 1\/4$. \n\\item[3)] Let $a\\ge1$. Then the map $s\\mapsto s(1-s)$ is a bijection between\n the zeros $\\rho\\not=1\/2$ of $\\varphi_0(a,s)$ and the eigenvalues\n $\\mu_j(a)\\not=1\/4$ of $\\Delta_a$.\n\\item[4)] $1\/2$ is a zero of $\\varphi_0(a,s)$ for all $a>0$. \nThere is $j$ with $\\mu_j(a)=1\/4$ if and only if $c^\\prime(1\/2)=-2\\log a$.\n\\end{enumerate}\n\\end{theo}\nThis theorem implies that for $a\\ge 1$ there is at most one\n zero of $\\varphi_0(a,s)$ which is off the\nline $\\Re(s)=1\/2$. The simplicity of \nthe zeros follows from the consideration of the corresponding eigenfunctions\nof $\\Delta_a$. \n\nThe main purpose of this article is to extend these results to the constant\nterm of other Eisenstein series. In the \npresent paper we will consider the Eisenstein series attached to\n$\\operatorname{PSL}(2,\\cO_K)$, \nwhere $\\cO_K$ is the ring of integers of\nan imaginary quadratic field $K=\\Q(\\sqrt{-D})$ of class number one, $D$ being\na square free positive integer. These are exactly the fields\n$\\Q(\\sqrt{-D})$ with $D=1,2,3,7,11,19,43,67,163$ \\cite{St}. Let $d_K$ be the\ndiscriminant of $K$ and let $\\zeta_K(s)$ be the Dedekind zeta function of\n$K$. Let\n\\begin{equation}\n\\Lambda_K(s)=\\left(\\frac{\\sqrt{|d_K|}}{2\\pi}\\right)^s\\Gamma(s)\\zeta_K(s)\n\\end{equation}\nbe the completed zeta function. \nThen $\\Lambda_K(s)$ satisfies the functional equation \n$\\Lambda_K(s)=\\Lambda_K(1-s)$.\nFor $a>0$ let \n\\begin{equation}\n\\varphi_{K}(a,s):=a^s\\Lambda_K(s)+a^{2-s}\\Lambda_K(s-1).\n\\end{equation}\nNote that $\\varphi_{K}(a,s)$ is a Dirichlet series which satisfies the\nfunctional equation $\\varphi_{K}(a,s)=\\varphi_{K}(a,2-s)$.\nLet $\\xi_K(s)=s(s-1)\\Lambda_K(s)$. Then $\\xi_K(s)$ is entire. Put\n\\[a^\\ast_K:=\\exp\\left(1+\\frac{\\xi_K^\\prime(1)}{\\xi_K(1)}\\right). \\]\nThen our main result is the following theorem.\n\\begin{theo}\\label{th0.3}\nLet $K=\\Q(\\sqrt{-D})$ be an imaginary quadratic field of class number one. \nThen for each $a\\ge 1$ all complex zeros of \n$\\varphi_K(a,s)$ are simple and lie on the line $\\Re(s)=1$. Moreover \n\\begin{enumerate}\n\\item[1)] For $a>\\max\\{a^\\ast_K,1\\}$ there are exactly two zeros off the \ncritical line.\nThese are simple zeros $\\rho_a,2-\\rho_a$ with $\\rho_a\\in(1,2)$. The zero\n$\\rho_a$ is a nondecreasing function of $a$, and $\\lim_{a\\to\\infty}\\rho_a=2$. \n\\item[2)] If $1\\le a^\\ast_K$ and $1\\le a\\le a^\\ast_K$,\n all zeros of $\\varphi_K(a,s)$ are on the critical line.\n\\end{enumerate}\n\\end{theo} \nTo prove Theorem \\ref{th0.3}, we extend Colin de Verdiere's Theorem \n\\cite[Th\\'eor\\`eme 5]{CV} to our setting. Let $\\cO_K$ be the ring of integers\nof $K$ and let $\\Gamma=\\operatorname{PSL}(2,\\cO_K)$. 
Then $\\Gamma$ is a discrete subgroup of \n$\\operatorname{PSL}(2,\\C)$ which acts properly discontinuously on the 3-dimensional\nhyperbolic space $\\Hb^3$. The quotient $\\Gamma\\ba\\Hb^3$ has finite volume\nand a single cups at $\\infty$. The function $\\varphi_K(y,s)$ appears to be the \nzeroth Fourier coefficient of the modified Eisenstein series attached to the\ncusp $\\kappa=\\infty$. Let $\\Delta_a$ be the corresponding cut-off Laplacian\non $\\Gamma\\ba\\Hb^3$. Then we generalize Theorem \\ref{th0.2} to this setting.\nAs above, this leads to a spectral interpretation of the zeros of\n$\\varphi_K(a,s)$, and we deduce Theorem \\ref{th0.3} from this spectral\ninterpretation. \n\nThe constant $a^\\ast_K$ can be computed using the Kronecker limit formula \n\\cite[p. 273]{La} \nand the Chowla-Selberg formula \\cite[(2), p.110]{SC}.\nFor example, we get\n\\[a^\\ast_{\\Q(i)}=\\frac{4\\pi^2 e^{2+\\gamma}}{\\Gamma(1\/4)^4}\\approx 3.00681,\n\\quad\na^\\ast_{\\Q(\\sqrt{-2})}=\\frac{8\\pi^2e^{2+\\gamma}}\n{\\left(\\Gamma\\left(\\frac{1}{8}\\right)\\Gamma\\left(\\frac{3}{8}\\right)\\right)^{2}}\n\\approx 3.2581 .\\] \nThus $a^\\ast_{\\Q(\\sqrt{-D})}>1$ for $D=1,2$ and therefore, in the range \n$1\\le a\\le a^\\ast_{\\Q(\\sqrt{-D})}$ all zeros of\n$\\varphi_{\\Q(\\sqrt{-D})}(a,s)$ are \non the line $\\Re(s)=1$. \n\nThe method used to prove Theorem \\ref{th0.3} can be extended so that\nan arbitrary number field $K$ of class number one can be treated. \nThe underlying global Riemannian symmetric \nspace is $X=\\Hb^{r_1}\\times (\\Hb^3)^{r_2}$, where $r_1$ is the number of real\nplaces and $r_2$ the number of pairs of complex conjugate places of $K$. The\nstructure of $\\Gamma\\ba X$ is slightly more complicated, however\nthe proof of the corresponding statement is completely analogous. More\ngenerally, it seems to be possible to prove similar results for the\nconstant terms of rank one cuspidal Eisenstein series attached to\n$\\operatorname{PSL}(n,\\Z)$. \n \n\\section{The cut-off Laplacian}\n\\setcounter{equation}{0}\n\nThe cut-off Laplacian can be defined for every rank one locally symmetric \nspace. In the present paper we are dealing only with hyperbolic\nmanifolds of dimension 2 and 3. So we discuss only the case of a\nhyperbolic 3-manifold. The general case is similar.\n\nLet \n\\[\\Hb^3=\\{(x_1,x_{2},y)\\in\\R^3\\colon y>0\\}\\]\nbe the hyperbolic 3-space with its hyperbolic metric\n\\[ds^2=\\frac{dx_1^2+dx_{2}^2+dy^2}{y^2}.\\]\nThe hyperbolic Laplacian $\\Delta$ is given by\n\\[\\Delta=-y^2\\left(\\frac{\\partial^2}{\\partial x_1^2}\n+\\frac{\\partial^2}{\\partial x_2^2}+ \n\\frac{\\partial^2}{\\partial y^2}\\right)+y\\frac{\\partial}{\\partial y}.\\]\nIf we regard $\\Hb^3$ as the set of all quaternions $z+yj$ with $z\\in\\C$\nand $y>0$, then $G=\\operatorname{PSL}(2,\\C)$ is the group of all orientation preserving\nisometries of $\\Hb^3$. It acts by linear fractional transformations, i.e., for\n$\\gamma=\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\in G$,\n\\[\\gamma(w)=(aw+b)(cw+d)^{-1},\\quad w\\in\\Hb^3.\\]\nLet\n$$B(\\C):=\\left\\{\\begin{pmatrix}\\lambda&z\\\\0&\\lambda^{-1}\\end{pmatrix}\\colon\n \\lambda\\in\\C^\\ast,\\; z\\in\\C\\right\\}\/\\{\\pm \\operatorname{Id}\\},\\quad\nN(\\C):=\\left\\{\\begin{pmatrix}1&z\\\\0&1\\end{pmatrix}\\colon z\\in\\C\\right\\}.$$ \nLet $\\Gamma\\subset G$ be a discrete subgroup of finite co-volume. 
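Since the Eisenstein series introduced below are built from the height coordinate, it may
be useful to recall its transformation law (a standard computation, inserted here only for
convenience): for $\\gamma=\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}\\in G$ and
$w=z+yj\\in\\Hb^3$, the $y$-component of $\\gamma w$ is
\\[y(\\gamma w)=\\frac{y}{|cz+d|^2+|c|^2y^2}\,.\\]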
\nLet $\\kappa_1=\\infty, \\kappa_2,...,\\kappa_m\\in \\Pf^1(\\C)$ be a complete set of\n$\\Gamma$-inequivalent cusps, and let $\\Gamma_i$ be the stabilizer of\n$\\kappa_i$ in $\\Gamma$. Choose $\\sigma_i\\in G$ such that\n$\\sigma_i(\\kappa_i)=\\infty$, $i=1,...,m$. Then\n\\begin{equation}\\label{1.1}\n\\sigma_i\\Gamma_i\\sigma_i^{-1}\\cap\nN(\\C)=\\left\\{\\begin{pmatrix}1&\\lambda\\\\0&1\\end{pmatrix}\\colon \n\\lambda\\in L_i\\right\\},\n\\end{equation}\nwhere $L_i$ is a lattice in $\\C$ \\cite[Theorem 2.1.8]{EGM}. Note that\nfor $D\\not=1,3$ the intersection actually coincides with \n$\\sigma_i\\Gamma_i\\sigma_i^{-1}$. \nChoose closed fundamental sets $\\cP_i$ for\nthe action of $\\sigma_i\\Gamma_i \\sigma_i^{-1}$ \non $\\Pf^1(\\C)\\setminus\\{\\infty\\}=\\C$. For $T>0$ define\n\\[\\tilde \\cF_i(T):=\\left\\{ (z,y)\\in\\Hb^3\\colon z\\in\\cP_i,\\;\\; y\\ge T\\right\\}.\\] \nLet $\\cF_i(T)=\\sigma_i^{-1}(\\tilde \\cF_i(T))$. There exists $T_i>0$ such \nthat\nany two points in $\\cF_i(T_i)$ are $\\Gamma$-equivalent if and only if they are\n$\\Gamma_i$-equivalent. For such a choice of $T_1,...,T_m$ there exists a\ncompact subset $\\cF_0\\subset \\Hb^3$ such that\n\\begin{equation}\\label{1.2}\n\\cF:=\\cF_0\\cup \\cF_1(T_1)\\cup\\cdots\\cup \\cF_m(T_m)\n\\end{equation}\nis a fundamental domain for $\\Gamma$ \\cite[Proposition 2.3.9]{EGM}. Let\n\\begin{equation}\\label{1.3}\nb:=\\max\\{T_1,...,T_m\\}.\n\\end{equation}\nBy $C^\\infty_c(\\Gamma\\ba\\Hb^3)$ we denote\nthe space of $\\Gamma$-invariant $C^\\infty$-functions on $\\Hb^3$ with compact \nsupport in $\\cF$. For $f\\in C^\\infty_c(\\Gamma\\ba \\Hb^3)$ let\n$$\\parallel f\\parallel^2=\\int_{\\cF}|f(x_1,x_2,y)|^2\\;\\frac{dx_1dx_2dy}{y^3},$$\nand let $L^2(\\Gamma\\ba \\Hb^3)$ be the completion of $C^\\infty_c(\\Gamma\\ba\n\\Hb^3)$ with respect to this norm.\nSimilarly let\n$$\\parallel df\\parallel^2=\\int_{\\cF} df\\wedge\\ast \\overline{df}\n=\\int_{\\cF}\\parallel\ndf(x_1,x_2,y)\\parallel^2\\;\\frac{dx_1dx_2dy}{y}.$$ \nLet $H^1(\\Gamma\\ba \\Hb^3)$ denote the completion of $C^\\infty_c(\\Gamma\\ba\n\\Hb^3)$ with respect to the norm\n$$\\parallel f\\parallel^2_1:=\\parallel f\\parallel^2+\\parallel df\\parallel^2.$$\nDenote by $|\\cP_i|$ the Euclidean area of the fundamental domain \n$\\cP_i\\subset\\C$ of\n$\\sigma_i\\Gamma_i\\sigma_i^{-1}$.\nFor $f\\in L^2(\\Gamma\\ba \\Hb^3)$ let \n\\[f_{j,0}(y)=\\frac{1}{|\\cP_i|}\\int_{\\cP_i} f(\\sigma_i^{-1}(x_1,x_2,y))\\;dx_1dx_2\\]\nbe the zeroth coefficient of the Fourier expansion of $f$ at the cusp\n$\\kappa_j$. Note that for $f\\in H^1(\\Gamma\\ba \\Hb^3)$, each $f_{j,0}$ belongs to\n$H^1(\\R^+)$ and therefore, each $f_{j,0}$ is a continuous function on $\\R^+$. \nFor $a>0$ let\n\\[H^1_a(\\Gamma\\ba \\Hb^3):=\\left\\{f\\in H^1(\\Gamma\\ba \\Hb^3)\\colon f_{j,0}(y)=0\\;\n\\;{\\mathrm for}\\;y\\ge a,\\;j=1,...,m\\right\\}.\\]\nThen $H^1_a(\\Gamma\\ba\\Hb^3)$ is a closed subspace of $H^1(\\Gamma\\ba\\Hb^3)$. \nHence the quadratic form \n\\[q_a(f)=\\parallel df\\parallel^2,\\quad f\\in H^1_a(\\Gamma\\ba\\Hb^3),\\]\nis closed. Let $\\Delta_a$ denote the self-adjoint operator which represents\nthe quadratic form $q_a$. It acts in the Hilbert space $\\H_a$ which is the\nclosure of $H^1_a(\\Gamma\\ba\\Hb^3)$ in $L^2(\\Gamma\\ba\\Hb^3)$. By definition,\n$\\Delta_a$ is nonnegative. 
Its domain can be described as follows.\nLet $\\psi_{j,a}\\in\\cD^\\prime(\\Gamma\\ba\\Hb^3)$ be defined by \n\\[\\psi_{j,a}(f):=f_{j,0}(a),\\quad f\\in C^\\infty_c(\\Gamma\\ba\\Hb^3).\\]\nLet $b>0$ be defined by (\\ref{1.3}). \n\\begin{lem} Let $a\\ge b$. Then the domain of $\\Delta_a$ consists of all \n$f\\in H^1_a(\\Gamma\\ba \\Hb^3)$ for which there exist $C_1,...,C_m\\in\\C$ \nsuch that\n\\begin{equation}\\label{1.4}\n\\Delta f-\\sum_{j=1}^mC_j\\psi_{j,a}\\in L^2(\\Gamma\\ba \\Hb^3).\n\\end{equation}\n\\end{lem}\nThe proof of the lemma is analogous to the proof of Th\\'eor\\`em 1 in\n\\cite{CV}. \nLet $f\\in \\operatorname{dom}(\\Delta_a)$. By the lemma, there exist $C_1,...,C_m\\in\\C$ such \nthat (\\ref{1.4}) holds. Then $\\Delta_af$ is given by\n\\begin{equation}\\label{1.5}\n\\Delta_af=\\Delta f-\\sum_{j=1}^mC_j\\psi_{j,a}.\n\\end{equation}\n\n\nFurthermore, we have\n\\begin{lem}\\label{l1.2}\n$\\Delta_a$ has a compact resolvent.\n\\end{lem}\n\\begin{proof} The proof is a simple extension of the argument in \n\\cite[p.206]{LP}. \n\\end{proof}\nSo the spectrum of $\\Delta_a$ consists of a sequence of eigenvalues \n$0\\le \\lambda_1(a)\\le \\lambda_2(a)\\le\\cdots\\to\\infty$ with finite\nmultiplicities. To describe the eigenvalues and eigenfunctions of $\\Delta_a$\nmore explicitely, we need to consider the Eisenstein series. Let $\\sigma_j\\in\nG$, $j=1,...,m$, be as above. \nFor each cusp $\\kappa_i$, the Eisenstein series $E_i(w,s)$ attached to\n$\\kappa_i$ is defined to be\n\\[E_i(w,s):=\\sum_{\\gamma\\in\\Gamma_i\\ba \\Gamma}y(\\sigma_i\\gamma w)^s,\\quad\n\\Re(s)>2,\\]\nwhere $y(\\sigma_i\\gamma w)$ denotes the $y$-component of $\\sigma_i\\gamma w$. \nSelberg has shown that $E_i(w,s)$ can be meromorphically continued in $s$ to \nthe whole complex plane $\\C$, and it is an automorphic eigenfunction of\n$\\Delta$ with \n$$\\Delta E_i(w,s)=s(2-s)E_i(w,s).$$\nSince $E_i(w,s)$ is $\\Gamma$-invariant, it is invariant under the stabilizer\n$\\Gamma_j$ of the cusp $\\kappa_j$ and therefore, admits a Fourier expansion\nat $\\kappa_j$. The constant Fourier coefficient is given by\n\\begin{equation}\\label{1.3a}\n\\delta_{ij}y^s+C_{ij}(s)y^{2-s}.\n\\end{equation}\nPut\n\\[C(s):=\\left(C_{ij}(s)\\right)_{i,j=1,...,m}.\\]\nThis is the so called ``scattering matrix''. The scattering matrix and the \nEisenstein series satisfy the following system of functional equations.\n\\begin{equation}\\label{1.7}\n\\begin{split}\n&C(s)C(2-s)=\\operatorname{Id},\\\\\n&E_i(w,s)=\\sum_{j=1}^m C_{ij}(s)E_j(w,2-s), \\quad i=1,...,m.\n\\end{split}\n\\end{equation}\n\nNow recall that a square integrable eigenfunction $f$ of $\\Delta$ is called \ncuspidal, if the zeroth Fourier coefficient $f_{j,0}$ of $f$\nat the cusp $\\kappa_j$ vanishes for all $j=1,...,m$. Denote by\n$S(\\lambda;\\Gamma)$ the space of cuspidal eigenfunctions of $\\Delta$ with \neigenvalue $\\lambda$. A function $f\\in C^0(\\Gamma\\ba \\Hb^3)$ is called of \nmoderate growth, if there exists $R\\in\\R^+$ such that\n\\[f(\\sigma^{-1}_i(x_1,x_2,y))=O(y^R),\\quad {\\mathrm\n for}\\;\\;(x_1,x_2,y)\\in\\cF_i(b),\\; i=1,...,m.\\] \nIt follows from the Fourier expansion in the cusps \\cite[Section 3.3.3]{EGM}\nthat a cuspidal eigenfunction $f$ of $\\Delta$ is rapidly decreasing in each\ncusp. Therefore for $\\psi\\in\\E(\\lambda,\\Gamma)$ and $\\varphi\\in\nS(\\lambda,\\Gamma)$ , the inner product $\\langle \\psi, \\varphi\\rangle$ \nis well defined. 
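(Indeed, in each cusp the rapid decay of $\\varphi$ dominates the moderate growth
$O(y^R)$ of $\\psi$, so the integral defining $\\langle\\psi,\\varphi\\rangle$ converges
absolutely; this remark is added only for completeness.)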
For $\\lambda\\in\\C$ let\n\\begin{equation}\n\\E(\\lambda;\\Gamma):=\\left\\{\\psi\\in L^2_{\\loc}(\\Gamma\\ba\\Hb^3)\\colon\n \\Delta\\psi=\\lambda\\psi,\\;\\psi\\;\\;{\\mathrm is \\; of\\;\n moderate\\; growth},\\;\\psi\\perp S(\\lambda;\\Gamma)\\right\\}.\n\\end{equation}\n\\begin{lem}\\label{l1.3} Let $m$ be the number of cusps of $\\Gamma\\ba\\Hb^3$. \nThen \n\\begin{enumerate}\n\\item[1)] For every $\\lambda\\in\\C$ we have $\\dim\\E(\\lambda;\\Gamma)\\le m$. \n\\item[2)] Suppose that $\\lambda=s(2-s)$, where $\\Re(s)\\ge 1$, $s\\not=1$,\n and $s$ is not a pole of any Eisenstein series. Then\n$E_1(w,s),...,E_m(w,s)$ is a basis of $\\E(\\lambda;\\Gamma)$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nThe proof is analogous to the proof of Satz 11 in \\cite[p.171]{Ma}. Let \n$\\phi,\\psi\\in\\E(\\lambda;\\Gamma)$ and $\\lambda=s(2-s)$. The zeroth Fourier \ncoefficient of $\\phi$, $\\psi$ at the $j$-th cusp is given by\n\\[\\phi_{0,j}(y)=a_jy^s+b_jy^{2-s},\\quad\n\\psi_{0,j}(y)=c_jy^s+d_jy^{2-s},\\;s\\not=1,\\] \nand for $s=1$ by\n\\[\\phi_{0,j}(y)=a_jy+b_jy\\log y,\\quad \\psi_{0,j}(y)=c_jy+d_jy\\log y.\\]\nLet $\\chi_{j,[a,\\infty)}$ denote the characteristic function of $\\Gamma_i\\ba\n\\cF_i(a)$ in $\\Gamma\\ba\\Hb^3$. Let\n\\[\\phi_a=\\phi-\\sum_{j=1}^m\\chi_{j,[a,\\infty)}\\phi_{0,j},\\quad \n\\psi_a=\\psi- \\sum_{j=1}^m\\chi_{j,[a,\\infty)}\\psi_{0,j}\\] \nbe the truncations of $\\phi$ and $\\psi$, respectively, at level $a\\ge b$. \nLet $s\\not=1$. Integrating by parts, we get\n\\begin{equation}\\label{1.9}\n\\begin{split}\n0=\\int_{\\Gamma\\ba \\Hb^3}(\\phi_a\\Delta\\psi_a-\\psi_a\\Delta\\phi_a)\\;d{\\mathrm \nvol}&=\n\\sum_{j=1}^m\\left(\\psi_{0,j}(y)\\frac{\\partial\\phi_{0,j}}{\\partial y}(y)-\n\\phi_{0,j}(y)\\frac{\\partial\\psi_{0,j}}{\\partial y}(y)\\right)\\bigg|_{y=a}\\\\\n&=(2-2s)\\sum_{j=1}^m(a_jd_j-b_jc_j).\n\\end{split}\n\\end{equation}\nFor $s=1$ the right hand side equals $\\sum_{j=1}^m(a_jd_j-b_jc_j)$.\n\nAn element of $\\E(\\lambda)$ is uniquely determined by its zeroth Fourier \ncoefficients. Define $i\\colon \\E(\\lambda)\\to \\C^{2m}$ by\n$i(\\phi):=(a_1,b_1,...,a_m,b_m),$\nwhere $a_j,b_j$ are the $0$-th Fourier coefficients of $\\phi$ at the $j$-th\ncusp. Then $i$ is an \nembedding. Let \n\\[I(x,y):=\\sum_{j=1}^m(x_{2j-1}y_{2j}-x_{2j}y_{2j-1})\\]\nbe the standard symplectic form on $\\C^{2m}$. Then by (\\ref{1.9}) and the\ncorresponding statement for $s=1$,\n$\\E(\\lambda)$ is an isotropic subspace for $I$. Hence $\\dim\\E(\\lambda)\\le m$. \n\nIf $s\\not=1$ and $s$ is not a pole of $E_j(w,s)$, $j=1,...,m$, the Eisenstein \nseries have linearly independent $0$-th Fourier coefficients given by \n(\\ref{1.3a}). This implies that $E_1(w,s),...,E_m(w,s)$ are linearly\nindependent and hence, form a basis of $\\E(\\lambda,\\Gamma)$, where\n$\\lambda=s(2-s)$.\n\\end{proof}\nWe are now ready to describe the spectrum of $\\Delta_a$. Let\n$0=\\Lambda_0<\\Lambda_1<\\cdots<\\Lambda_N<1$ be the eigenvalues of $\\Delta$ with\nnon-cuspidal eigenfunctions. Then $\\Lambda_j$ has the form \n$\\Lambda_j=\\sigma_j(2-\\sigma_j)$ with $\\sigma_j\\in (1,2]$ and $\\sigma_j$ is a \nsimple pole of some \nEisenstein series $E_k(w,s)$. 
The corresponding residue is an \neigenfunction of $\\Delta$ with eigenvalue $\\Lambda_j$ \n\\cite[Proposition 6.2.2]{EGM}.\nThe following theorem\nis an extension of Theorem 5 in \\cite{CV} to the 3-dimensional case.\n\\begin{theo}\\label{th1.4}\nThe spectrum of $\\Delta_a$ is the union of the cuspidal \neigenvalues $(\\lambda_i)_i $ of $\\Delta$ and a sequence\n$0<\\mu_0(a)\\le \\mu_1(a)\\le \\cdots $\nwith the following properties:\n\\begin{enumerate}\n\\item[1)] Each $\\mu_j(a)$ is a decreasing function of $a$,\n\\item[2)] Let $a\\ge b$. Then each $\\mu_j(a)$ has multiplicity $\\le m$, and\n the map $s\\mapsto s(2-s)$ is a bijection between the\neigenvalues $\\mu_j(a)\\not\\in\\{1,\\Lambda_0,...,\\Lambda_N\\}$ and the zeros of\n$\\rho\\in\\C\\setminus\\{1,\\sigma_0,...,\\sigma_N\\}$ of \n$\\varphi(s):=\\det(C(s)+a^{2s-2}\\operatorname{Id})$.\n\\end{enumerate}\n\\end{theo}\n\\begin{proof}\nLet $f\\in L^2(\\Gamma\\ba\\Hb^3)$ be a cuspidal eigenfunction of $\\Delta$.\nThen $f$ belongs to\n$H^1_a(\\Gamma\\ba \\Hb^3)$ and $\\Delta f$ is square integrable. Hence\nby (\\ref{1.4}) and (\\ref{1.5}), $f\\in\\operatorname{dom}(\\Delta_a)$ and $\\Delta_af=\\Delta f$.\nThus all cuspidal eigenfunctions of $\\Delta$ are also eigenfunctions of\n$\\Delta_a$ with the same eigenvalues. \n\nFor 1)\nobserve that for $a\\le a^\\prime$ we have\n\\[H^1_a(\\Gamma \\ba\\Hb^3)\\subset H^1_{a^\\prime}(\\Gamma \\ba\\Hb^3).\\]\nSince $\\Delta_a$ is a non-negative self-adjoint operator with compact\nresolvent, 1) follows from the mini-max characterization of its eigenvalues.\n\n\n\n2) Let $\\Phi$ be an eigenfunction of $\\Delta_a$ with eigenvalue\n$\\mu_j(a)=s_j(2-s_j)$ and suppose that $\\Phi\\perp\nS(\\mu_j(a);\\Gamma)$. Let $\\Phi_{i,0}$ be the zeroth Fourier coefficient of\n$\\Phi$ at the cusp $\\kappa_i$. There exists $1\\le j\\le m$ such that\n$\\Phi_{j,0}\\not\\equiv 0$. Let $\\hat\\Phi_{i,0}$ denote the analytic continuation\nof $\\Phi_{i,0}$ from $(0,a)$ to $\\R^+$. Put\n\\[\\hat\\Phi(w):=\\Phi(w)+\\sum_{i=1}^m\\chi_{i,[a,\\infty)}\\hat\\Phi_{i,0}.\\]\nThen $\\hat\\Phi\\in C^\\infty(\\Gamma\\ba \\Hb^3)$ and $\\Delta \\hat\\Phi\n=\\mu_j(a)\\hat\\Phi$. \nMoreover $\\hat\\Phi$\nis of moderate growth and $\\hat\\Phi\\perp S(\\mu_j(a),\\Gamma)$. Therefore $\\hat\\Phi\\in\\E(\\mu_j(a);\\Gamma)$. \nBy Lemma \\ref{l1.3},\n1), it follows that the multiplicity of $\\mu_j(a)$ is bounded by $m$.\n\nNext suppose that\n$\\mu_j(a)=s_j(2-s_j)\\not\\in\\{1,\\Lambda_0,...,\\Lambda_N\\}$. We can \nassume that $\\Re(s_j)\\ge 1$. As explained above, $s_j$ is not a pole of any\nEisenstein series $E_k(w,s)$, $k=1,...,m$. Therefore, by Lemma \\ref{l1.3} \nthere exist \n$c_1,...,c_m\\in\\C$ such that \n\\[\\hat\\Phi(w)=\\sum_{i=1}^m c_i E_i(w,s_j).\\]\nBy construction, the zeroth Fourier coefficient $\\hat\\Phi_{l,0}(y)$ of \n$\\hat\\Phi(w)$ at\n$\\kappa_l$ vanishes at $y=a$ for all $l=1,...,m$. By (\\ref{1.3a}) this implies\n\\[\\sum_{i=1}^mc_i\\left(\\delta_{il}a^{s_j}+C_{il}(s_j)a^{2-s_j}\\right)=0,\\quad\nl=1,...,m.\\]\nHence $\\det(C(s_j)+a^{2s_j-2}\\operatorname{Id})=0$.\n\nNow assume that $\\det(C(s_j)+a^{2s_j-2}\\operatorname{Id})=0$, $\\Re(s_j)\\ge 1$,\n and $s_j\\not\\in\\{1,\\sigma_0,...,\\sigma_N\\}$. Then $s_j$ is not a pole of any\nEisenstein series. 
Let $0\\not=\\psi\\in\\C^m$ such\nthat\n\\begin{equation}\\label{1.10}\nC(s_j)\\psi=-a^{2s_j-2}\\psi.\n\\end{equation}\nLet $\\psi=(c_1,...,c_m)$ and set\n\\[\\hat\\Phi(w):=\\sum_{k=1}^m c_kE_k(w,s_j).\\] \nBy (\\ref{1.3a}) and (\\ref{1.10}), the $0$-th Fourier coefficient of \n$\\hat\\Phi(w)$ at the cusp $\\kappa_l$ is given by\n\\begin{equation}\n\\sum_{k=1}^m\nc_k\\left(\\delta_{kl}y^{s_j}+C_{kl}(s_j)y^{2-s_j}\\right)\n=c_l\\left(y^{s_j}-a^{2s_j-2}y^{2-s_j}\\right). \n\\end{equation}\nTherefore, the $0$-th Fourier coefficient $\\hat\\Phi_{l,0}(y)$ vanishes at\n$y=a$ for all $l=1,...,m$. Put\n\\[\\Phi(w)=\\hat\\Phi(w)-\\sum_{i=1}^m\\chi_{i,[a,\\infty)}\\hat\\Phi_{l,0}(y).\\]\nThen $\\Phi\\in H^1_a(\\Gamma\\ba\\Hb)$ and it follows from the description of\nthe domain of $\\Delta_a$ that $\\Phi\\in\\operatorname{dom}(\\Delta_a)$ and \n$\\Delta_a \\Phi=s_j(2-s_j)\\Phi$. \n\\end{proof}\n\nNow assume that $\\Gamma\\ba \\Hb^3$ has a single cusp $\\kappa=\\infty$. Then\nthere is a single \nEisenstein series $E(w,s)$, which is given by\n\\[E(w,s)=\\sum_{\\gamma\\in\\Gamma_\\infty\\ba\\Gamma}y(\\gamma w)^s,\\quad \\Re(s)>2.\\]\nThe zeroth Fourier coefficient of $E(w,s)$ at $\\infty$ equals\n\\begin{equation}\n\\varphi_0(y,s):=y^s+c(s)y^{2-s},\n\\end{equation}\nwhere $c(s)$ is a meromorphic function of $s\\in\\C$. \nThe following facts are well known \n\\cite[pp.243-245]{EGM} .\nThe poles of $E(w,s)$ in the half-plane $\\Re(s)>1$ are \ncontained in the interval $(1,2]$ and are all simple. The residue of $E(w,s)$\nat a pole \n$\\sigma\\in (1,2]$ is a square integrable eigenfunction of $\\Delta$ with \neigenvalue $\\sigma(2-\\sigma)$, which is non-cuspidal. Moreover all\nnon-cuspidal eigenfunctions of $\\Delta$ are obtained in this way. Thus\nthe non-cuspidal eigenvalues have multiplicity one. So in this case we get \nthe following refinement of Theorem \\ref{th1.4}.\n\\begin{theo}\\label{th1.5}\nAssume that $\\Gamma\\ba\\Hb^3$ has a single cusp.\nThen the spectrum of $\\Delta_a$ is the union of the cuspidal \neigenvalues $(\\lambda_i)_i $ of $\\Delta$ and a sequence\n$0<\\mu_0(a)< \\mu_1(a)< \\cdots $\nwith the following properties:\n\\begin{enumerate}\n\\item[1)] Each $\\mu_j(a)$ is a decreasing function of $a$;\n\\item[2)] Let $a\\ge b$. Then each $\\mu_j(a)$ has multiplicity 1 and the map\n$s\\mapsto s(2-s)$ is a bijection between the zeros $\\rho\\not=1$ of \n$\\varphi_0(a,s)$ and the eigenvalues $\\mu_j(a)\\not=1$ of $\\Delta_a$. \n\\item[3)] There is $j$ with $\\mu_j(a)=1$ if and only if $c(1)=-1$ and\n $c^\\prime(1)=-2\\log a$. \n\\item[4)] Let $0=\\Lambda_0<\\Lambda_1<\\cdots<\\Lambda_N<1$ be the\n eigenvalues of $\\Delta$ \nwith non-cuspidal eigenfunctions. If $a\\ge b$, then \n\\[0=\\Lambda_0<\\mu_0(a)<\\Lambda_1<\\mu_1(a)<\\cdots <\\Lambda_N< \n\\mu_N(a).\\]\nMoreover, \n\\begin{equation}\n\\lim_{a\\to\\infty}\\mu_j(a)=\\begin{cases}\\Lambda_j&,\\; 0\\le j\\le N;\\\\\n1&, \\;j>N.\n\\end{cases}\n\\end{equation}\n\\end{enumerate}\n\\end{theo}\n\\begin{proof}\n1) follows from Theorem \\ref{th1.4}. If $s\\not\\in\\{1,\\sigma_0,...,\\sigma_N\\}$,\nthen 2) follows also from 2) of Theorem \\ref{th1.4}. Now suppose that $s_0$\nis a pole of $E(w,s)$ in the half-plane $\\Re(s)\\ge 1$. Then the residue\n$\\psi$ of $E(w,s)$ at $s_0$ is an eigenfunction of $\\Delta$ with non-vanishing\n$0$-th Fourier coefficient. Moreover $\\psi\\perp S(\\lambda,\\Gamma)$.\nTherefore $\\psi\\in\\E(\\lambda,\\Gamma)$, where \n$\\lambda=s_0(2-s_0)$. 
By Lemma \\ref{l1.3} it follows that\n $\\dim\\E(\\lambda,\\Gamma)=1$.\nMoreover the constant term $\\psi_0(y)$ of $\\psi$ has the form \n$\\psi_0(y)=cy^{2-s_0}$. Especially, it never vanishes on $\\R^+$. On the other\nhand, if $\\Phi$ were an eigenfunction of $\\Delta_a$ with eigenvalue $\\lambda$,\nthen the corresponding eigenfunction $\\hat\\Phi\\in\\E(\\lambda,\\Gamma)$, \nconstructed in the proof of Theorem \\ref{th1.4} would have a constant term\nwhich \nvanishes at $y=a$. Hence the eigenvalues $\\Lambda_j$ can not be eigenvalues of\n$\\Delta_a$. This proves 2)\n\nFor 3) we note that $c(1)^2=1$ by the\nfunctional equation (\\ref{1.7}). Thus $c(1)=\\pm 1$. If $c(1)=-1$, then \n$E(w,1)\\equiv 0$. Put\n$\\psi(w):=\\frac{d}{ds}E(w,s)\\big|_{s=1}.$ Then $\\Delta \\psi=\\psi$ and the \nzeroth Fourier coefficient of $\\psi$ is given by\n\\[\\psi_0(y)=(2\\log y+c^\\prime(1))y.\\]\nSuppose that $c^\\prime(1)=-2\\log a$. Put \n$\\hat \\psi_a(w):=\\psi(w)-\\chi_{[a,\\infty)}\\psi_0(y(w))$. Then $\\hat \\psi_a$\nis in the domain of $\\Delta_a$ and $\\Delta_a\\hat\\psi_a=\\hat\\psi_a$. \n\nFor the other direction suppose that $\\hat\\psi$ is an eigenfunction of\n$\\Delta_a$ with eigenvalue \n$1$ and $\\hat\\psi\\perp S(1,\\Gamma)$. Let $\\hat\\psi_0(y)$ be the $0$-th Fourier\ncoefficient of $\\hat\\psi$. \nExtend $\\hat\\psi_0(y)$ in the obvious way from $(b,a)$ to a smooth function \n$\\psi_0$ on $(b,\\infty)$. Set\n\\[\\psi:=\\hat\\psi+\\chi_[a,\\infty)\\psi_{0}.\\]\nThen $\\psi$ is smooth, of moderate growth, satisfies $\\Delta\\psi=\\psi$ and\n$\\psi\\perp S(1,\\Gamma)$. Thus $\\psi\\in\\E(1,\\Gamma)$. Therefore by Lemma\n\\ref{l1.3} it follows that $\\dim\\E(1,\\Gamma)=1$. On the other hand, we have\n$0\\not=\\frac{d}{ds}E(s,w)\\big|_{s=1}\\in\\E(1,\\Gamma)$. \nTherefore it follows that there exists\n$c\\not=0$ such that $\\psi(w)=c\\frac{d}{ds}E(w,s)\\big|_{s=1}$. \nComparing the constant terms, it follows that $c^\\prime(1)=-2\\log a$. \n\n4) follows from the mini-max principle and the fact that $\\cup_{a\\ge b}\nH^1_a(\\Gamma \\ba\\Hb^3)$ is dense in $H^1(\\Gamma \\ba\\Hb^3)$.\n\\end{proof}\n\n\\section{Hyperbolic surfaces of finite area}\n\\setcounter{equation}{0}\n\nIn this section we prove Theorem \\ref{th0.2} and deduce Theorem\n\\ref{th0.1} from it.\n\nLet $\\Gamma=\\operatorname{PSL}(2,\\Z)$ be the modular group. Then $\\Gamma\\ba \\Hb$ has a single\ncusp $\\kappa=\\infty$. As fundamental domain we take the standard domain\n\\[F:=\\left\\{z\\in\\Hb\\colon |\\Re(z)|<1\/2,\\,|z|>1\\right\\}.\\]\nSo we can take $b=1$. \nThe Eisenstein series attached to the cusp $\\infty$ is given by\n\\[E(z,s)=\\sum_{\\gamma\\in\\Gamma_\\infty\\ba\\Gamma}\\Im(\\gamma z)^s=\\sum_{(m,n)=1}\n\\frac{y^s}{|mz+n|^{2s}},\\quad \\Re(s)>1.\\]\nThe constant term of $E(z,s)$ equals\n\\begin{equation}\\label{2.1}\ny^s+c(s)y^{1-s},\n\\end{equation}\nwhere $c(s)$ is the meromorphic function defined by (\\ref{0.2}). \n\n\\noindent\n{\\bf Proof of Theorem 0.2:}\nSince the \nthe zeros of the completed zeta function $\\Lambda(s)$ are all contained in the\nstrip $0<\\Re(s)<1$, it follows that $\\Lambda(2s-1)$ and $\\Lambda(2s)$ have no\ncommon zeros. This implies that the zeros of \n\\[\\varphi_0(a,s)=\\Lambda(2s)a^s+\\Lambda(2s-1)a^{1-s}\\]\ncoincide with the zeros of\n$a^s+c(s)a^{1-s}$. Now note that $\\Lambda(s)$ has poles of order 1 at $s=1,0$.\nThe residue at $s=1$ is 1 and the residue at $s=0$ is $-1$. 
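(Spelled out, with $w=2s-1$: these residues give $\\Lambda(2s-1)=\\Lambda(w)\\sim -1\/w$ and
$\\Lambda(2s)=\\Lambda(1+w)\\sim 1\/w$ as $w\\to0$, so the quotient appearing in the limit
below tends to $-1$.)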
Hence\n\\begin{equation}\\label{2.2}\nc(1\/2)=\\lim_{s\\to 1\/2}\\frac{\\Lambda(2s-1)}{\\Lambda(2s)}=-1.\n\\end{equation}\nNow the statements 1), 3) and 4) follow immediately from \\cite[Th\\'eor\\`em\n5]{CV}. For 2) we use that by Roelcke \\cite{Ro}, the smallest positive\neigenvalue $\\lambda_1$ of $\\Delta$ on $\\Gamma\\ba \\Hb$ satisfies\n$\\lambda_1>1\/4$. Then 2) follows from Th\\'eor\\`em 5, (iii), of \\cite{CV}. \n\n\\noindent\n{\\bf Proof of Theorem 0.1:} Let $a\\ge 1$ and let $\\rho\\not=1\/2$ be a zero of\n$\\varphi_0(a,s)$. Then by Theorem 0.2, 3), $\\rho(1-\\rho)$ is an eigenvalue of\n$\\Delta_a$. Hence $\\rho(1-\\rho)$ is a non-negative real number. If \n$\\rho(1-\\rho)> 1\/4$, then $\\rho$ is a complex zero with $\\Re(\\rho)=1\/2$. \nBy Theorem 0.2, 2), there exist at most two zeros, $\\rho_a$ with \n$0<\\rho_a(1-\\rho_a)<1\/4$. Thus $\\rho_a$ and $1-\\rho_a$ are zeros and we\nmay assume that \n$\\rho_a\\in (1\/2,1)$. Let $aa^\\ast$, and $\\mu_0(a)>1\/4$, if\n$1\\le a0$, the zeros of \n\\[\\varphi_K(a,s)=a^s\\Lambda_K(s-1)+a^{2-s}\\Lambda_K(s)\\]\ncoincide with the zeros of $E_0(a,s)$. \n\nWe can now apply Theorem \\ref{th1.5}. By (\\ref{3.2}) we can take $b=1$.\nLet $a\\ge 1$ and let $\\rho\\not=1$ be a zero of $\\psi_K(a,s)$. Then by\nTheorem \\ref{th1.5}, 2), $\\rho(2-\\rho)$ is an eigenvalue of $\\Delta_a$. This\nimplies \nthat if $\\rho(2-\\rho)\\ge 1$, then $\\rho=1+ir$ with $r\\in\\R$, $r\\not=0$. \nIf $\\rho(2-\\rho)< 1$, then $\\rho$ is real. Thus all complex zeros of \n$\\varphi_K(a,s)$ are on the line $\\Re(s)=1$. \n\nFor the real zeros we need to consider the non-cuspidal eigenvalues \n of $\\Delta$. By the spectral resolution of the Laplacian, these eigenvalues \nare in \none-to-one correspondence with the poles of the Eisenstein series $E(w,s)$ in \nthe interval $(1,2]$ \\cite[Proposition 6.2.2]{EGM}.\n On the other hand, using the\nMaass-Selberg relations, it follows that\nthe poles of $E(w,s)$ in $(1,2]$ coincide with the poles of the scattering\nmatrix $c_K(s)$ in $(1,2]$. Since $\\Lambda_K(s)$ has no zeros in $\\Re(s)>1$\nand the only pole in $\\Re(s)>0$ is a simple pole at $s=1$, it follows from\n(\\ref{3.4}) that the only pole of $c_K(s)$ in $\\Re(s)>1$ is a simple pole at \n$s=2$ which corresponds to the eigenvalue 0. This shows that $\\Delta$ has no\nnon-cuspidal eigenvalues in the interval $(0,1)$. \n By Theorem \\ref{th1.5}, 4), it follows that $\\varphi_K(a,s)$ has at most \ntwo real zeros $\\rho_a$ and $2-\\rho_a$ with\n$\\rho_a\\in (1,2)$. Let $aa^\\ast_K$, and $\\mu_0(a)>1$, if\n$agin}\nLet $\\sigma$ and $\\tau$ be two term orders on $S_{[n]}$.\nThen $\\mbox{\\upshape Gin}_{\\sigma}(I)\\geq_\\sigma \\mbox{\\upshape Gin}_{\\tau}(I)$\nfor any homogeneous ideal $I\\subset S_{[n]}$.\n\\end{lemma} \n\\smallskip\\noindent {\\it Proof: \\ }\nLet $f_1, \\ldots, f_t$ be a basis of $I_d$, and let \n$g$ be a generic $n \\times n$\nupper-triangular matrix. \nSince $M'_d:=\\mbox{\\upshape in}_{>_{\\tau}}(g(f_1)\\wedge\\ldots\\wedge g(f_t))$ appears\nin $g(f_1)\\wedge\\ldots\\wedge g(f_t)$ with a non-zero coefficient, it follows\nthat \n$M_d:=\\mbox{\\upshape in}_{\\sigma}(g(f_1)\\wedge\\ldots\\wedge g(f_t)) \\geq_{\\sigma} M'_d$\n(for every $d\\geq 0$).\nProposition \\ref{gin_constr} implies the lemma.\n\\hfill$\\square$\\medskip\n\nWe remark that a stronger version of Lemma \\ref{gin>gin} was proved in \n\\cite[Cor.~1.6]{Conca}.\n\nAnother ingredient needed for defining revlex shifting\nis the notion of the squarefree operation. 
This is a bijection $\\Phi$\nbetween the set of all monomials in $\\{x_i : i\\in \\mathbb{N}\\}$ and the set of all\nsquarefree monomials in $\\{x_i : i\\in \\mathbb{N}\\}$, defined by\n$$\\Phi(\\prod_{j=1}^k x_{i_j})=\\prod_{j=1}^k x_{i_j+j-1}, \\mbox{ where }\n i_1\\leq i_2\\leq \\ldots\\leq i_k.\n$$\n Note that for a monomial $m\\in S_{[n]}$, \n$\\Phi(m)$ may not belong to $S_{[n]}$. However the graded \nreverse lexicographic order \n has the following remarkable property \n\\cite[Lemma 6.3(ii)]{K91}, \\cite[Lemma 1.1]{AHH}: if \n$m$ is a minimal generator of $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}} I_\\Gamma$ \n(where $\\Gamma$ is a simplicial \ncomplex on $[n]$), then $\\Phi(m)$ is an element of $S_{[n]}$.\nThis leads to the following definition (due to Kalai):\n\\begin{definition} \\label{equiv}\nLet $\\Gamma$ be a simplicial complex on the vertex set $[n]$.\nThe reverse lexicographic shifting of $\\Gamma$,\n$\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma)$, is a simplicial complex on $[n]$ whose \nStanley-Reisner ideal is given by\n$$I_{\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma)}=\\langle \\Phi(m) \\; : \\; m\\in\nG(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}} I_\\Gamma) \\rangle,$$\nwhere $G(I)$ denotes the set of\nthe minimal generators of a monomial ideal $I$.\n\\end{definition}\n\nWe now provide a new and simple proof of Eq.~(\\ref{P5})\n (due originally to Aramova, Herzog, and Hibi \\cite{AHH}).\n\\begin{theorem} \\label{AHH}\nThe revlex shifting $\\Delta_{\\mbox{\\upshape {\\tiny rl}}}$ satisfies all the conditions of \nTheorem \\ref{stable}. Thus $\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma)=\\Gamma$ for every\n shifted complex $\\Gamma$.\n\\end{theorem} \n\\smallskip\\noindent {\\it Proof: \\ } It is well-known that (symmetric) revlex shifting satisfies all\nthe conditions of Theorem \\ref{stable}, except possibly for property\n(2) whose proof appears to be missing in the literature (for the\nexterior version of algebraic shifting it was recently verified by\nNevo \\cite{Nevo}): the fact that $\\Delta(\\Gamma)$ is a shifted\nsimplicial complex follows from Lemma~\\ref{cone}(1); property (1) is\n\\cite[Lemma 6.3(i)]{K91}; property (3) is a consequence of \nLemma~\\ref{cone}(3); property (4) follows from \\cite[Cor.~8.25]{H} asserting\nthat $\\beta_i(\\Gamma)=\\beta_i(\\Delta(\\Gamma))$ for all $i$. To prove\nproperty (2) it suffices to check that $\\Delta(\\Gamma)$ and\n$\\Delta(\\Gamma\\star \\{n+1\\})$ have the same set of minimal nonfaces\n(equivalently, $I_{\\Delta(\\Gamma)}\\subset S_{[n]}$ and\n$I_{\\Delta(\\Gamma\\star \\{n+1\\})}\\subset S_{[n+1]}$ have the same set\nof minimal generators). This follows from Definition \\ref{equiv} and\nLemma \\ref{cone}(4). \\hfill$\\square$\\medskip\n\n\\paragraph{\\bf Remarks} \\quad\n\n(1)\n We note that to verify the inequality $\\sum \\beta_i(\\Gamma)\\leq \\sum\n \\beta_i(\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma))$ one does not need to use the fact\n that $\\beta_i(\\Gamma)=\\beta_i(\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma))$ for all\n $i$, which is a consequence of the deep result due to\n Bayer--Charalambous--Popescu \\cite{BCP} and Aramova--Herzog~\\cite{AH} \nthat revlex shifting preserves extremal (algebraic) Betti\n numbers. 
Instead one can use the standard flatness argument (see\n \\cite[Thm.~3.1]{H}) to show that $\\beta_{i,j}(I_\\Gamma) \\leq\n \\beta_{i,j}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma)) = \\beta_{i,j}(I_{\\Delta(\\Gamma)})$ \nfor all $i$, $j$, where the equality comes from\n the fact that $\\Phi$ applied to (minimal generators of)\n a strongly stable ideal\n $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma)$ preserves algebraic Betti numbers (see\n \\cite[Lemma 2.2]{AHH}). The Hochster formula \\cite{Hoc} then asserts\n that the reduced Betti numbers of a simplicial complex are equal to\n certain algebraic graded Betti numbers of its Stanley-Reisner ideal.\n\n(2)\n In algebraic terms, the statement of\nTheorem \\ref{AHH} translates to the fact that if\n$I\\subset S_{[n]}$ is a squarefree strongly stable ideal, then\n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I))=I$, where \n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I)):=\\langle \\Phi(m): m \\in G(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I)) \\rangle\n$.\nHence $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I)=\\langle\\Phi^{-1}(\\mu) : \\mu\\in G(I)\\rangle$, that is,\n computing the revlex Gin of a squarefree strongly stable ideal $I$\nsimply amounts to applying $\\Phi^{-1}$ to the minimal generators\nof $I$.\n\n(3) Our proof (as well as the original proof in \\cite{AHH})\nof the equation $\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I))=I$ for a squarefree strongly\nstable ideal $I$ works only over a field ${\\bf k}$ of characteristic zero.\nWe however do not know of any counterexamples in the case of a field\nof positive characteristic.\n\n\n\n\\section{Combinatorics of USLIs, almost USLIs, and lex Gins} \n \\label{USLI_section}\nIn this section we introduce and study \nthe class of universal squarefree lexsegment ideals (USLIs)\nand the class of almost USLIs. These notions turn out to be crucial \nin the proof of Theorem \\ref{Thm2}.\nTo allow for infinitely generated ideals \n(as we need in the following section) we \nconsider the system of rings \n$S_{[n]}$, $n\\in\\mathbb{N}$,\nendowed with natural embeddings $S_{[n]}\\subseteq S_{[m]}$ for $m\\geq n$, and\nprovide definitions suitable\nfor the direct limit \nring $S=\\lim_{n\\rightarrow\\infty}S_{[n]}={\\bf k}[x_i : i\\in\\mathbb{N}]$.\n\n\nRecall that a squarefree monomial ideal \n$I\\subset S$ ($I\\subset S_{[n]}$, respectively) is a \n{\\em squarefree lexsegment ideal} of $S$ ($S_{[n]}$, respectively)\n if for every\nmonomial $m\\in I$ and every\nsquarefree monomial $m'\\in S$ ($m'\\in S_{[n]}$, respectively)\nsuch that $\\deg(m')=\\deg(m)$ and $m'>_{\\mbox{\\upshape {\\tiny lex}}} m$, \n$m'$ is an element of $I$ as well.\n\\begin{definition} \\label{USLI}\nAn ideal $L$ of $S$ (or of $S_{[n]}$)\nis\na {\\em universal squarefree lexsegment ideal} (abbreviated USLI)\nif it is finitely generated in each degree and $LS$ is\na squarefree lexsegment ideal of $S$. \nEquivalently, an ideal \n$L=L(k_\\bullet)$ (here $k_\\bullet=\\{k_i\\}_{i\\in\\mathbb{N}}$ is a \nsequence of nonnegative integers) is a \nUSLI with $k_i$ minimal generators\nof degree $i$ (for $i\\in\\mathbb{N}$) if and only if\n the set of minimal generators of $L$, \n$G(L)$, is given by\n$$G(L)=\n\\bigcup_{r=1}^{\\infty}\\left\\{\n(\\prod_{j=1}^{r-1} x_{R_j})\\cdot x_l \\;:\\; R_{r-1}+1 \n\\leq l \\leq R_r-1\\right\\}, \\mbox{ where } R_j=j+\\sum_{i=1}^j k_i. 
\n \n$$\n\\end{definition}\n\nThe easiest way to verify the description of the set\n$G(L)=\\{m_1 >_{\\mbox{\\upshape {\\tiny lex}}} m_2 >_{\\mbox{\\upshape {\\tiny lex}}} \\cdots \\ >_{\\mbox{\\upshape {\\tiny lex}}} m_s >_{\\mbox{\\upshape {\\tiny lex}}} \\cdots \\}$ \nof a USLI $L$ is by induction on $s$. Indeed, if $m_1, \\cdots, m_s$ satisfy\nthe above description and\n$m_s=(\\prod_{j=1}^{r-1} x_{R_j})\\cdot x_l$, \nthen there are two possibilities for\n$m_{s+1}$: either $\\deg(m_{s+1})=\\deg(m_s)=r$ (equivalently, $lr$ (equivalently, $l=R_r-1$ and $k_i=0$ for all $r_{\\mbox{\\upshape {\\tiny lex}}} m_{s+1}$ and since $m_s$ is the \nimmediate lex-predecessor of\n$m':=(\\prod_{j=1}^{r-1} x_{R_j})\\cdot x_{l+1}$,\nit follows that\n$m'\\geq_{\\mbox{\\upshape {\\tiny lex}}} m_{s+1}\\in L$ \nwhich together\nwith $L$ being a USLI implies that \n$m'\\in L$. Since $m'$ is not divisible by any of $m_1, \\cdots, m_s$,\nthis yields\n$m_{s+1}=m'$. The treatment\nof the latter case is similar: just observe that every squarefree monomial\nof degree $r'$ that is lex-smaller than\n $m':=(\\prod_{j=1}^{r-1} x_{R_j})\\cdot (\\prod_{j=1}^{r'-r+1} x_{l+j})=\n (\\prod_{j=1}^{r'-1} x_{R_j})\\cdot x_{R_{r'-1}+1}$ is divisible by at least\none of\n$m_1, \\ldots, m_s$ and hence is in $L-G(L)$, while $m'$ is not divisible\nby any of $m_1, \\cdots, m_s$.\n\n\\begin{example} \\quad\n\\begin{enumerate} \n\\item\n The ideal $\\langle x_1x_2, x_1x_3,\n x_2x_3\\rangle$ (the Stanley-Reisner ideal of three isolated points)\n is a lexsegment in $S_{[3]}$, but is not a lexsegment in $S$, and\n hence is not a USLI. \n\n\\item\n The ideal $I = \\langle x_1x_2, x_1x_3,\n x_1x_4x_5x_6x_7\\rangle$ is the USLI with $k_1 = 0, k_2 = 2, k_3 =\n k_4 = 0, k_5 = 1$ and $k_i = 0$ for all $i > 5$. In this example,\n check that $R_1 = 1, R_2 = 4, R_3 = 5, R_4 = 6$ and $R_5 = 8$. \n\\end{enumerate}\n\\end{example}\n\nNote that every USLI is a squarefree strongly stable ideal, and hence\nis the Stanley-Reisner ideal of a shifted (possibly infinite)\nsimplicial complex (we refer to such complex as a {\\em USLI complex}).\nAll complexes considered in this section are assumed to be finite.\n\nThe following lemma describes certain combinatorial properties of \nUSLI complexes. This lemma together with\n Lemmas \\ref{main_lemma} and \\ref{Pardue_lemma}\nbelow provides a key step in the proof of \nTheorem \\ref{Thm2}.\n\\begin{lemma} \\label{comb_USLI}\nLet $\\Gamma$ be a USLI complex on the vertex set $[n]$\nwith $I_\\Gamma=L(k_\\bullet)$. \n\\begin{enumerate}\n\\item If $I_\\Gamma\\neq 0$ and $k_d$ is the last nonzero entry\nin the sequence $k_\\bullet$, then $\\Gamma$ has exactly $d$ facets. \nThey are given by\n$$F_i=\\left\\{\\begin{array}{ll}\n \\{R_j : 1\\leq j \\leq i-1\\}\\cup [R_i+1, n] & \\mbox{ if $1\\leq\n i\\leq d-1$,}\\\\ \n \\{R_1, \\ldots, R_{d-1}\\}\\cup [R_d, n] & \\mbox{ if $i=d$.}\n\\end{array}\\right.\n$$\n\\item If $\\Gamma'$ is a shifted complex on $[n]$ such that \n$f(\\Gamma)=f(\\Gamma')$, then $\\Gamma=\\Gamma'$. (In other words\nevery USLI complex is the only shifted complex in its $f$-class).\n\\end{enumerate}\n\\end{lemma} \n\\smallskip\\noindent {\\it Proof: \\ } We verify part (1) by induction on $n+d+\\sum k_i$. The assertion\nclearly holds if $d=1$ or if $\\sum k_i=1$. 
For instance, if $d=1$ and $k_1=n$\n(equivalently, $R_1=n+1$), then $F_1=[n+1, n]=\\emptyset$ is the only \nfacet of $\\Gamma$.\n\nNote that $R_d$\nis the index of the first variable that does\nnot divide any of the minimal generators of $I_\\Gamma$.\nThus if $R_d\\leq n$, then $\\Gamma=\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}$, and we are done\nby applying induction hypothesis to the USLI complex $\\mbox{\\upshape lk}\\,_\\Gamma(n)$.\nSo assume that $R_d=n+1$.\nThen $\\mbox{\\upshape lk}\\,_\\Gamma(n)$ and $\\mbox{\\upshape ast}\\,_\\Gamma(n)$\nare easily seen to be the USLI complexes on the vertex set $[n-1]$\nwhose Stanley-Reisner ideals are given\nby $L_1=L(k_1, \\ldots, k_{d-2}, k_{d-1}+1)$ and \n$L_2=L(k_1, \\ldots, k_{d-1}, k_d-1)$,\nrespectively. \nHence by induction hypothesis the complex\n$\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}$ has exactly $d-1$ facets, namely\nthe sets $F_1, \\ldots, F_{d-1}$ from the list above.\nNow if $k_d>1$, then by induction hypothesis\nthe facets of $\\mbox{\\upshape ast}\\,_\\Gamma(n)$ are the sets $F_1-\\{n\\}, \\ldots,\nF_{d-1}-\\{n\\}, F_d$. Since\n$\\Gamma= (\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}) \\cup \\mbox{\\upshape ast}\\,_\\Gamma(n)$,\nit follows that $\\max(\\Gamma)=\\{F_1, \\ldots, F_d\\}$.\nSimilarly, if $k_d=1$ and $k_j$ is the last nonzero\nentry in the sequence $(k_1, \\ldots, k_{d-1})$, then \n the facets of $\\mbox{\\upshape ast}\\,_\\Gamma(n)$ are the sets $F_1-\\{n\\}, \\ldots,\nF_{j-1}-\\{n\\}, F_d$, and the result follows in this case as well.\n\nTo prove part (2) we induct on $n$. The assertion is obvious for $n=1$.\nFor $n>1$ we consider two cases.\n\n{\\bf Case 1:} $R_d\\leq n$. In this case \n$\\Gamma=\\mbox{\\upshape lk}\\,_\\Gamma(n)\\star\\{n\\}$, so $\\beta_i(\\Gamma)=0$ for all $i$.\nSince among all squarefree strongly stable\nideals with the same Hilbert function\nthe squarefree lexsegment ideal has the largest \nalgebraic Betti numbers \\cite[Thm.~4.4]{AHHlex}, \nand since by Hochster's formula \\cite{Hoc}, \n$\\beta_{n-i-1}(\\Lambda)=\\beta_{i-1,n}(I_\\Lambda)$ \nfor any simplicial complex $\\Lambda$ on the vertex set $[n]$,\n it follows that $\\beta_i(\\Gamma')\\leq\\beta_i(\\Gamma)=0$,\nand so $\\beta_i(\\Gamma')=0$ for all $i$. Since $\\Gamma'$ is shifted,\nLemma \\ref{betti} implies that all facets of $\\Gamma'$ contain $n$.\nThus $\\Gamma'=\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)\\star\\{n\\}$, and the assertion follows\nfrom induction hypothesis applied to \n$\\mbox{\\upshape lk}\\,_\\Gamma(n)$ and $\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)$.\n\n{\\bf Case 2:} $R_d=n+1$. In this case all facets of $\\Gamma$ but $F_d$ \ncontain vertex $n$ (this follows from part (1) of the Lemma), and we infer\nfrom Lemma \\ref{betti} that\n$$\\beta_i(\\Gamma)=\\left\\{\\begin{array}{ll} 0, \\mbox{ if $i\\neq d-2$} \\\\\n 1, \\mbox{ if $i= d-2$.} \\\\\n \\end{array}\n \\right.\n$$\nRecall the Euler-Poincar\\'e formula asserting that for any simplicial\ncomplex $\\Lambda$, \n$$\\sum_{j\\geq -1}(-1)^j f_j(\\Lambda) \n= \\sum_{j\\geq -1}(-1)^j \\beta_j(\\Lambda)\n=:\\widetilde{\\chi}(\\Lambda).$$\nTherefore, $\\widetilde{\\chi}(\\Gamma')=\\sum_{j\\geq -1}(-1)^j f_j(\\Gamma')=\n\\sum_{j\\geq -1}(-1)^j f_j(\\Gamma)=\\widetilde{\\chi}(\\Gamma)=(-1)^{d-2}$, and\nhence not all Betti numbers of $\\Gamma'$ vanish. The\nsame reasoning as in Case 1 then shows that \n$\\beta_i(\\Gamma')=\\beta_i(\\Gamma)$ for all $i$. 
Applying\nLemma \\ref{betti} once again, we obtain that \n$\\Gamma'=(\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)\\star\\{n\\})\\cup\\{ F'\\}$, where $|F'|=d-1$\nand $F'$\nis the only facet of $\\Gamma'$ that does not contain $n$. Thus \n$f(\\mbox{\\upshape lk}\\,_{\\Gamma}(n))= f(\\mbox{\\upshape lk}\\,_{\\Gamma'}(n))$ and\n $f(\\mbox{\\upshape ast}\\,_{\\Gamma}(n))= f(\\mbox{\\upshape ast}\\,_{\\Gamma'}(n))$, and so\n$\\mbox{\\upshape lk}\\,_{\\Gamma}(n)=\\mbox{\\upshape lk}\\,_{\\Gamma'}(n)$ and \n$\\mbox{\\upshape ast}\\,_{\\Gamma}(n)=\\mbox{\\upshape ast}\\,_{\\Gamma'}(n)$ (by induction hypothesis),\nyielding that $\\Gamma=\\Gamma'$.\n\\hfill$\\square$\\medskip\n\nWe now turn to the class of {\\em almost USLIs}. \n(Recall our convention that lower degree monomials are \nlex-larger than higher degree monomials.)\n\n\n\\begin{definition}\nLet $I\\subset S$ (or $I\\subset S_{[n]}$)\nbe a squarefree strongly stable monomial ideal with \n$G(I)=\\{m_1>_{\\mbox{\\upshape {\\tiny lex}}} \\ldots >_{\\mbox{\\upshape {\\tiny lex}}} m_l>_{\\mbox{\\upshape {\\tiny lex}}} m_{l+1} \\}$. \nWe say that $I$ is {\\em an almost USLI}\nif $I$ is not a USLI, but $L=\\langle m_1, \\ldots, m_l\\rangle$ is a USLI.\nWe say that a simplicial complex $\\Gamma$ is {\\em an almost USLI complex} \nif $I_\\Gamma$ is an almost USLI.\n\\end{definition}\n\nAs we will see in the next section (see also Lemma \\ref{Pardue_lemma} below),\n what makes almost USLI complexes noninvariant under\nlex shifting is the following combinatorial property. (We recall that the \n{\\em regularity} of a finitely generated stable monomial ideal $I$, $\\mbox{\\upshape reg}(I)$,\nis the maximal degree of its minimal generators.)\n\n\\begin{lemma} \\label{main_lemma}\nLet $\\Gamma$ be an almost USLI complex.\nThen $|\\max(\\Gamma)|>\\mbox{\\upshape reg}(I_\\Gamma)$.\n\\end{lemma}\n\n\\smallskip\\noindent {\\it Proof: \\ } \nAssume $\\Gamma$ is a simplicial complex on $[n]$\nwith $G(I_\\Gamma)=\\{m_1>_{\\mbox{\\upshape {\\tiny lex}}}\\ldots>_{\\mbox{\\upshape {\\tiny lex}}}m_l>_{\\mbox{\\upshape {\\tiny lex}}}m_{l+1}\\}$.\nWe have to show that $|\\max(\\Gamma)|>\\deg(m_{l+1})=:d$.\nWe verify this by induction on $d$. To simplify the notation\n assume without loss of generality that every singleton \n$\\{i\\}\\subset[n]$ is a vertex of $\\Gamma$\n(equivalently, $I_\\Gamma$ has no generators of degree 1).\nIf there are generators of degree 1 then the proof given below can\nbe modified by letting the index $R_1$ play the role of the index $1$. \nAs $I_\\Gamma$ is an almost USLI, and so\n$\\langle m_1, \\ldots, m_l\\rangle$ is a USLI,\nthis leaves two possible cases:\n\n{\\bf Case 1:}\n{\\em $m_1, \\ldots, m_l$ are divisible by $x_1$, but\n$m_{l+1}$ is not divisible by $x_1$.}\nSince $I_\\Gamma$ is squarefree strongly stable, it follows that \n$m_{l+1}=\\prod_{j=2}^{d+1}x_j$. In this case each set $F_i=[n]-\\{1, i\\}$,\n$i=2, \\ldots, d+1$, is a facet of $\\Gamma$. \n(Indeed the product $\\prod\\{x_j : j\\in F_i\\}$\n is not divisible by $m_{l+1}$,\nand it is also not divisible by $x_1$, and hence\nby $m_1, \\ldots, m_{l}$, implying that $F_i$ is a face. 
To show that $F_i$\nis a maximal face observe that \n $F_i\\cup \\{i\\}$ \ncontains the support of $m_{l+1}$, and hence is not a face,\nbut then shiftedness of $\\Gamma$ implies that\nneither is $F_i\\cup\\{1\\}$.)\nSince there also should be a facet containing $1$, we conclude\nthat $\\max(\\Gamma)\\geq d+1>\\deg(m_{l+1})$, \ncompleting the proof of this case.\n\n{\\bf Case 2:} \n{\\em All minimal generators of $I$ are divisible by $x_1$.}\nIn this case consider an almost USLI\n$I_\\Gamma':=\\langle x_1, m_1\/x_1, \\ldots, m_{l+1}\/x_1 \\rangle$.\n By induction hypothesis $\\Gamma'$\nhas $s>\\deg(m_{l+1})-1$ facets which we denote by $F_1, \\ldots, F_s$.\nOne easily verifies that\n$\\max(\\Gamma)=\\left\\{\\{1\\}\\cup F_1, \\ldots, \\{1\\}\\cup F_s, [2,n]\\right\\},\n$\nand so $|\\max(\\Gamma)|=s+1>\\deg(m_{l+1})$.\n\\hfill$\\square$\\medskip\n\n\nWe close this section with an algebraic lemma that relates regularity of\n$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma)$ to the number of facets of $\\Gamma$ (for an arbitrary\ncomplex $\\Gamma$).\n\n\\begin{lemma} \\label{Pardue_lemma}\nFor a (finite) simplicial complex $\\Gamma$, \n$\\mbox{\\upshape reg}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_{\\Gamma}))\\geq |\\max(\\Gamma)|$.\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ }\nThis fact is a corollary of \\cite[Lemma 23]{Pardue}\napplied to squarefree (and hence radical) ideal $I_\\Gamma\\in S_{[n]}$. \nFor $\\sigma\\subseteq[n]$, we denote by $P_\\sigma$ the (prime)\nideal in $S_{[n]}$ generated by $\\{x_j : j\\notin\\sigma\\}$. It is well known\nthat $I_\\Gamma$ has the following prime decomposition:\n$\nI_\\Gamma=\\cap_{\\sigma\\in\\max(\\Gamma)} P_\\sigma.\n$\n Thus the variety of $I_\\Gamma$, $\\mathcal{V}(I_\\Gamma)$,\nis the union (over $\\sigma\\in\\max(\\Gamma)$)\nof the irreducible subvarieties $\\mathcal{V}(P_\\sigma)$.\nEach such subvariety is a \nlinear subspace of ${\\bf k}^n$ of codimension $n-|\\sigma|$.\n\\cite[Lemma 23]{Pardue} then implies that the monomial $m:=\\prod x_i^{r_i}$,\nwhere $r_i=|\\{\\sigma\\in\\max(\\Gamma): |\\sigma|=n-i\\}|$,\nis a minimal generator of $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma)$.\nHence $\\mbox{\\upshape reg}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma))\\geq \\deg(m)=|\\max(\\Gamma)|$.\n\\hfill$\\square$\\medskip\n\n\n\n\n\n\n\\section{Lex shifting, $B$-numbers and the limit complex} \n \\label{infinite_section}\nIn this section after defining the notion of\nlexicographic shifting and the notion of $B$-numbers \n(a certain analog of the Hilbert function) we prove Theorem~\\ref{Thm2}. \nWe remark that extending the notion of algebraic shifting to an arbitrary term \norder\n$\\succ$ is not entirely automatic since the $\\Phi$-image of the set of\nminimal generators of $\\mbox{\\upshape Gin}_{\\succ}(I_\\Gamma)\\subset S_{[n]}$, \n$G(\\mbox{\\upshape Gin}_{\\succ}(I_\\Gamma))$, may not be a subset of $S_{[n]}$.\n This however can be easily corrected if one considers the system of rings \n$S_{[n]}$, $n\\in\\mathbb{N}$,\nendowed with natural embeddings $S_{[n]}\\subseteq S_{[m]}$ \nfor $m\\geq n$, and makes \nall the computations in the direct limit ring \n$S=\\lim_{n\\rightarrow\\infty}S_{[n]}={\\bf k}[x_i : i\\in\\mathbb{N}]$. 
This is the approach\nwe adopt here.\n We work with the class of monomial ideals $I\\subset S$\nfinitely generated in each degree.\nThroughout this section we use the graded \nlexicographic term order on $S$.\n\n\\begin{definition} \\label{gin_def}\nLet $I$ be a monomial ideal of $S$ that is finitely generated\nin each degree.\nDefine \n$$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I):=\\lim_{n\\rightarrow\\infty}\\,\n\\left(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]})\\right)S,\n$$ \nwhere we consider $I\\cap S_{[n]}$ as an ideal of $S_{[n]}$.\n\\end{definition}\nSince the $d$-th component of $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]})$ depends only on the\n$d$-th component of $I\\cap S_{[n]}$,\n or equivalently on the minimal generators of \n$I\\cap S_{[n]}$ of degree $\\leq d$,\nLemma \\ref{cone}(4) implies that \n$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ is well-defined and that for every\n$d$ there is $n(d)$ such that\n$(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}I)_d=((\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]}))S)_d$ \nfor all $n\\geq n(d)$.\nThus $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ is a monomial ideal finitely generated\nin each degree. (It is finitely generated if $I$ is.)\n Moreover, it follows from Lemma \\ref{cone}(1) that\n$\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ is a strongly stable ideal. \n\n\nRecall that the squarefree operation $\\Phi$ \ntakes monomials of $S$ to squarefree\nmonomials of $S$. \nIf $I\\subset S$ is a monomial ideal finitely generated in each degree,\nwe define $\\Phi(I):=\\langle \\Phi(m) : m\\in G(I)\\rangle$, where $G(I)$ is\nthe set of minimal generators of $I$.\n\\begin{definition}\nLet $I$ be a homogeneous ideal of $S$ that is finitely generated\nin each degree. The {\\em lexicographic shifting} \nof $I$ is the squarefree strongly stable ideal \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)=\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))$. \nThe {\\em $i$-th lexicographic shifting} of $I$\nis the ideal $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}^i(I)$, where $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}^i$ stands \nfor $i$ successive applications of $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}$. \nWe also define the {\\em limit ideal}\n$\\overline{\\Delta}(I):=\\lim_{k\\rightarrow\\infty}\\Delta_{\\mbox{\\upshape {\\tiny lex}}}^k(I)$.\n\\end{definition}\n\nThe rest of the section is devoted to the proof of Theorem \\ref{Thm2}.\nFirst however we digress and review\nseveral facts on algebraic Betti numbers (defined by Eq.~(\\ref{alg_betti})).\n\n\n\\begin{lemma} \\label{betti-prop}\nLet $I$ and $J$ be monomial ideals of $S_{[n]}$.\n\\begin{enumerate}\n\\item If $I_j=J_j$ for all $0\\leq j\\leq j_0$, then \n$\\beta_{i,j}(I)=\\beta_{i,j}(J)$ for all $i$ and all $j\\leq j_0$. \n\\item The Betti numbers of $I\\subset S_{[n]}$ coincide with those\nof $IS_{[n+1]}\\subset S_{[n+1]}$, that is,\n $\\beta_{i,j}(I)=\\beta_{i,j}(IS_{[n+1]})$ for all $i, j$.\n\\end{enumerate}\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ }\nPart (1) follows from the standard facts that\n$$\\beta_{i,j}(I)=\n\\dim_{\\bf k} \\mbox{\\upshape Tor}_i^{S_{[n]}}({\\bf k}, I)_{j}=\n\\dim_{\\bf k} \\mbox{\\upshape Tor}_i^{S_{[n]}}(I, {\\bf k})_{j},$$ \nwhere we identify ${\\bf k}$ with the $S_{[n]}$-module \n$S_{[n]}\/\\langle x_1, \\ldots, x_n\\rangle$. 
\nFor part (2) note that if $\\mathbb{F}$ is\nthe free minimal \nresolution of $I$ over $S_{[n]}$, then $\\mathbb{F}\\otimes_{S_{[n]}} S_{[n+1]}$ \nis the free minimal resolution of $IS_{[n+1]}$ over $S_{[n+1]}$,\nyielding the lemma.\n\\hfill$\\square$\\medskip\n\nThe above properties allow to extend the definition\nof the Betti numbers to the class of monomial ideals of $S$\nthat are finitely generated in each degree.\n\n\\begin{definition} \\label{betti_def}\nLet $I\\subset S$ be a monomial ideal finitely generated in each\ndegree.\n Define \n$$\\beta_{i,j}(I):=\n\\lim_{n\\rightarrow\\infty}\\beta_{i,j}(I\\cap S_{[n]}) \\quad \\mbox{for all } \ni, j\\geq0,\n$$\nwhere we consider $I\\cap S_{[n]}$ as an ideal of $S_{[n]}$.\n\\end{definition}\nWe remark that since $I$ is finitely generated in each degree,\nfor a fixed $j_0$\nthere exists $n_0$ such that $(I\\cap S_{[n+1]})_j=((I\\cap S_{[n]})S_{[n+1]})_j$\nfor all $0\\leq j \\leq j_0$ and $n\\geq n_0$.\n Hence it follows from Lemma \\ref{betti-prop}\nthat (for a fixed $i$)\nthe sequence $\\{\\beta_{i,j_0}(I\\cap S_{[n]})\\}_{n\\in\\mathbb{N}}$\nis a constant for indices starting with $n_0$, and thus\n $\\beta_{i,j_0}(I)$ is well-defined. \n\n\nThe Betti numbers of strongly stable ideals (of $S_{[n]}$) were computed by\nEliahou and Kervaire \\cite{ElKer}, and the analog of this\nformula for squarefree strongly stable ideals (of $S_{[n]}$) was established\nby Aramova, Herzog, and Hibi \\cite{AHHlex}. Definition \\ref{betti_def}\nallows to state these results as follows. (For a monomial $u$\ndefine $m(u):=\\max\\{i : x_i|u\\}$.)\n\\begin{lemma} \\label{EK}\nLet $I\\subset S$ be a monomial ideal finitely generated in each degree, \nlet $G(I)$ denote its set of minimal generators, and let \n$G(I)_j=\\{u\\in G(I): \\deg u=j\\}$. \n\\begin{enumerate}\n\\item If $I$ is strongly stable, then\n$\\beta_{i, i+j}(I)=\\sum_{u\\in G(I)_j} {m(u)-1 \\choose i}$;\n\\item If $I$ is squarefree strongly stable, then\n$\\beta_{i, i+j}(I)=\\sum_{u\\in G(I)_j} {m(u)-j \\choose i}$.\nIn particular, if $I=L(k_\\bullet)$ is a USLI, then\n$\\beta_{i, i+j}(I)=\\sum_{l=1}^{k_j}{k_1+\\ldots+k_{j-1}+l-1 \\choose i}$.\n\\end{enumerate}\n\\end{lemma}\n\nUsing the notion of the Betti numbers, one can define \na certain analog of the Hilbert function ---\n the $B$-numbers --- of a monomial ideal $I$ of $S$ that is finitely generated \nin each degree.\n\n\\begin{definition} \\label{B-definition}\nLet $I\\subset S$ (or $I\\subset S_{[n]}$)\nbe a monomial ideal finitely generated in each degree, and \nlet $\\beta_{i,j}(I)$ be its graded Betti numbers. Define\n$$\nB_j(I):=\\sum_{i=0}^j (-1)^i\\beta_{i,j}(I) \\quad \\mbox{ for all }\nj\\geq 0 \\quad (\\mbox{e.g., $B_0=0$ and $B_1(I)=|G(I)_1|$}).\n$$\n The sequence \n$B(I):=\\{B_j(I): j\\geq 1\\}$ is called the {\\bf $B$-sequence} of $I$. \n\\end{definition}\n\\begin{remark}\nIt is well known and is easy to prove (see \\cite[Section 1B.3]{Eis2})\n that for every $n\\in\\mathbb{N}$ the polynomial\n $\\sum_j B_j(I\\cap S_{[n]})x^j$\nequals $(1-x)^n \\text{Hilb}(I\\cap S_n, x)$, where \n$\\text{Hilb}(I\\cap S_n, x)$ is the Hilbert series of $I\\cap S_{[n]}$. 
\nIn particular, if $\\Gamma$ is a $(d-1)$-dimensional\nsimplicial complex on $[n]$ and \n$I_{\\Gamma}\\subset S_{[n]}$\nis its Stanley-Reisner ideal then \n\\[\n\\frac{1-\\sum_j B_j(I_\\Gamma)x^j}{(1-x)^n} = \\text{Hilb}(S_{[n]}\/I_\\Gamma, x)\n=\\sum_{i=0}^{d} \\frac{f_{i-1}(\\Gamma)x^i}{(1-x)^i}=\n\\frac{\\sum_{i=0}^d h_i(\\Gamma)x^i}{(1-x)^{d}}, \n\\]\nwhere $\\{h_i(\\Gamma)\\}_{i=0}^d$ is the $h$-vector of $\\Gamma$ \\cite{St}.\n(Recall that\n$h_j=\\sum_{i=0}^j (-1)^{j-i}{d-i \\choose j-i}f_{i-1}$ for\n$0\\leq j \\leq d$. In particular, $h_1=f_0-d$.)\nThus \n$\\sum_j B_j(I_\\Gamma)x^j=1-(1-x)^{h_1}\\sum_i h_ix^i$\n(if one assumes that $\\{i\\}\\in\\Gamma$ for every $i\\in[n]$), and so\nthe $h$-vector of $\\Gamma$ \ndefines the $B$-sequence of $I_\\Gamma$.\n\\end{remark}\n\nThe following lemma provides the\nanalog of the ``$f(\\Gamma)=f(\\Delta_{\\mbox{\\upshape {\\tiny rl}}}(\\Gamma))$-property\".\n\\begin{lemma} \\label{cone2}\nIf $I\\subset S$ is a monomial ideal that is\n finitely generated in each degree, then \nthe ideals $I$ and $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)$ have the same $B$-sequence.\nIn particular, if $I$ is finitely generated, then for a sufficiently large $n$,\nthe ideals $I\\cap S_{[n]}$ and $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)\\cap S_{[n]}$ \nhave the same Hilbert function\n(in $S_{[n]}$).\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ }\nSince for every $n\\in\\mathbb{N}$ the ideals\n$I\\cap S_{[n]}$ and $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I\\cap S_{[n]})$ have the same Hilbert function\n(in $S_{[n]}$) (see Lemma \\ref{cone}), and since \n$B_i(I)=\\lim_{n\\rightarrow\\infty} B_i(I\\cap S_{[n]})$, \nthe above remark implies that $B(I)=B(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))$. Finally,\n since $\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I)$\nis a strongly stable ideal (Lemma \\ref{cone}), we infer (by comparing\nthe two formulas of Lemma \\ref{EK})\nthat \n$\\beta_{i,j}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))=\\beta_{i,j}(\\Phi\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))=\\beta_{i,j}\n(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))$ for all $i, j$,\nand so $B(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I))=B(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))$.\nThe result follows. \\hfill$\\square$\\medskip\n\n\n\n\nNow we are ready to verify the first part of Theorem \\ref{Thm2}. In fact\nwe prove the following slightly more general result.\n\n\\begin{theorem} \\label{main}\nLet $I$ be a squarefree strongly stable ideal of $S$ finitely \ngenerated in each degree. \nThen $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)>_{\\mbox{\\upshape {\\tiny lex}}} I$ unless $I$ is a USLI in which case\n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)= I$. \nMoreover if $I$ is finitely\ngenerated and is not a USLI, then \nall ideals in the sequence\n$\\{\\Delta^i_{\\mbox{\\upshape {\\tiny lex}}}(I)\\}_{i\\geq 0}$ are distinct. \n\\end{theorem}\n\n\n\\smallskip\\noindent {\\it Proof: \\ } There are several possible cases.\n\n{\\bf Case 1:} $I=L(k_\\bullet)$ is a USLI. To prove that \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)=I$,\nit suffices to show that for every $d\\geq 1$, \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(L(k^{(d)})= L(k^{(d)})$, where \n$k^{(d)}:=\\{k_1, \\ldots, k_d, 0,0,\\ldots\\}$ is the sequence $k_\\bullet$\ntruncated at $k_d$.\nBut this is immediate from Lemmas \\ref{comb_USLI}(2) and \\ref{cone2}. 
\nIndeed, for $n=n(d)$ sufficiently large\nthe simplicial complexes on the vertex set $[n]$ whose Stanley-Reisner\nideals are given by $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(L(k^{(d)})\\cap S_{[n]}$ and \n$L(k^{(d)})\\cap S_{[n]}$, respectively,\nare shifted and have the same $f$-numbers. \nSince the second complex is a USLI complex,\nit follows that those complexes, and hence their ideals, coincide. \n \n{\\bf Case 2:} $I=\\langle m_1, \\ldots, m_l, m_{l+1} \\rangle$\n is an almost USLI.\nLet $n$ be the largest index of a variable appearing in $\\prod_{i=1}^{l+1}m_i$,\nand let $\\Gamma$ be a simplicial complex on $[n]$ with \n$I_\\Gamma = I\\cap S_{[n]}$.\nThen\n$$\\mbox{\\upshape reg}(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))=\\mbox{\\upshape reg}(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma)) \n\\stackrel{\\mbox{\\tiny {Lemma \\ref{Pardue_lemma}}}}{\\geq} |\\max(\\Gamma)|\n\\stackrel{\\mbox{\\tiny {Lemma \\ref{main_lemma}}}}{>}\\mbox{\\upshape reg}(I_\\Gamma)=\\mbox{\\upshape reg}(I),\n$$\nyielding that $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)\\neq I$ in this case.\nMoreover, since by Eq.~(\\ref{P5}), \n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma))=I_\\Gamma$ and since $\\Phi$ is a \nlex-order preserving map,\nwe infer from Lemma \\ref{gin>gin} that \n$\\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny lex}}}(I_\\Gamma))\\geq_{\\mbox{\\upshape {\\tiny lex}}} \\Phi(\\mbox{\\upshape Gin}_{\\mbox{\\upshape {\\tiny rl}}}(I_\\Gamma))\n=I_\\Gamma$,\nand hence that $\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)>_{\\mbox{\\upshape {\\tiny lex}}} I$.\n\n{\\bf Case 3:} I is squarefree strongly stable, but is not a USLI. \nIn this case we sort $G(I)=\\{m_1, \\ldots, m_l, m_{l+1}, \\ldots\\}$ by graded lex-order\nand assume that $m_{l+1}$ is the first non-USLI generator of $I$.\nLet \n$I_1=\\langle m_1, \\ldots, m_l \\rangle$ and let \n$I_2=\\langle m_1, \\ldots, m_{l+1} \\rangle$. \nThen $I_1$ is a USLI, $I_2$ is an almost USLI, and $I_1\\subset I_2\\subseteq I$. \nHence by the previous two cases \n$I_1=\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_1)\\subset\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_2)$ \nand \n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_2)>_{\\mbox{\\upshape {\\tiny lex}}} I_2$, and so \n there exists a monomial $m$, $m_l>_{\\mbox{\\upshape {\\tiny lex}}} m>_{\\mbox{\\upshape {\\tiny lex}}} m_{l+1}$, such that\n$m \\in G(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I_2)) \\subseteq G(\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I))$. \nThus\n$\\Delta_{\\mbox{\\upshape {\\tiny lex}}}(I)>_{\\mbox{\\upshape {\\tiny lex}}} I$. \n\nFinally to show that for a finitely generated ideal $I$,\nall ideals in the sequence $\\{\\Delta^i_{\\mbox{\\upshape {\\tiny lex}}}(I)\\}_{i\\geq 0}$ are distinct, \nit suffices to check that none of those ideals is a USLI. \nThis is an immediate corollary of Lemmas \\ref{comb_USLI}(2) and \\ref{cone2}. \\hfill$\\square$\\medskip\n\n\n\nOur next goal is to prove the second part of Theorem \\ref{Thm2}. \nTo do that we fix a sequence of integers\n$B=\\{B_j : j\\geq 1\\}$ and study the class $\\mathcal{M}(B)$ of all monomial ideals $I\\subset S$ \nthat are finitely generated in each degree and satisfy $B(I)=B$.\n\n\\begin{lemma}\nThere is at most one USLI in the class $\\mathcal{M}(B)$.\n\\end{lemma}\n\\smallskip\\noindent {\\it Proof: \\ } \nRecall that a USLI $L=L(k_\\bullet)$ \nis uniquely defined by its $k$-sequence\n$k_\\bullet=\\{k_i : i\\geq 1\\}$, where $k_i=\\beta_{0,i}(L)=|G(L)_i|$. 
\nRecall also that \n$B(L)$ is a function of \n$k_\\bullet$ (see Lemma \\ref{EK}(2)), and so to complete the proof it suffices\nto show that this function is\none-to-one, or more precisely that\n $k_j$ is determined by\n$k_1, \\ldots, k_{j-1}, B_j$ (for every $j\\geq 1$). \nAnd indeed, \n\\begin{eqnarray*}\nk_j&=&\\beta_{0,j}(L)=B_j-\\sum_{i=1}^j (-1)^i\\beta_{i,j}(L) \n \\quad (\\mbox{by definition of } B_j)\\\\\n&=& B_j-\\sum_{i=1}^j(-1)^i \\sum_{l=1}^{k_{j-i}}{k_1+\\ldots+k_{j-i-1}+l-1 \\choose i}\n\\quad (\\mbox{by Lemma } \\ref{EK}(2)). \n\\end{eqnarray*}\n\\hfill$\\square$\\medskip\n\nNow we are ready to prove (the slightly more general \nversion of) the second part of Theorem \\ref{Thm2}.\n\\begin{theorem}\nFor every ideal $I\\in\\mathcal{M}(B)$, the limit ideal $\\overline{\\Delta}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ \nis well defined and is the unique USLI of $\\mathcal{M}(B)$.\n\\end{theorem}\n\\smallskip\\noindent {\\it Proof: \\ }\nFix $I\\in \\mathcal{M}(B)$. To show that $\\overline{\\Delta}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ \nis well defined, it suffices to check that for every $d\\geq 0$,\nthere exists $s=s(d)$ such that \n\\begin{equation} \\label{stab}\nG(\\Delta^{s}_{\\mbox{\\upshape {\\tiny lex}}}(I))_{\\leq d}=G(\\Delta^{s+1}_{\\mbox{\\upshape {\\tiny lex}}}(I))_{\\leq d} \n\\end{equation}\n(where $G(J)_{\\leq d}:=\\cup_{j\\leq d} G(J)_j$),\nand hence that all ideals $\\Delta^{i}_{\\mbox{\\upshape {\\tiny lex}}}(I)$, $i\\geq s$,\nhave the same $d$-th homogeneous component.\nWe verify this fact by showing that the collection of all \npossible sets of minimal generators \n\\begin{equation} \\label{finite}\n \\mathcal{G}_{\\leq d}:=\\{ G(J)_{\\leq d} : \n J\\in\\mathcal{M}(B), J \\mbox{ is squarefree strongly stable}\\} \n \\quad \\mbox{is finite}.\n\\end{equation}\n(This yields (\\ref{stab}), since all ideals $\\Delta^{i}_{\\mbox{\\upshape {\\tiny lex}}}(I)$, $i\\geq 1$,\nare squarefree strongly stable, and since \n$\\Delta^{i}_{\\mbox{\\upshape {\\tiny lex}}}(I)\\leq_{\\mbox{\\upshape {\\tiny lex}}} \\Delta^{i+1}_{\\mbox{\\upshape {\\tiny lex}}}(I)$ \nby Theorem \\ref{main}.)\nEq.~(\\ref{finite}) can be easily proved by induction.\n It clearly holds for \n$d=0$. Now if $J\\in\\mathcal{M}(B)$ is squarefree strongly stable, then\nby Lemma \\ref{EK}(2) and Definition \\ref{B-definition}, \n$$ |G(J)_d|=\\beta_{0,d}(J)=\nB_d-\\sum_{i=1}^{d}(-1)^i\\sum_{u\\in G(J)_{d-i}}{m(u)-(d-i) \\choose i},\n$$\nso assuming that the collection \n$\\mathcal{G}_{\\leq d-1}$ is finite, or equivalently that the set of integers\n$\\{m(u): u\\in G(J)_{\\leq d-1}\\in\\mathcal{G}_{\\leq d-1}\\}$ is bounded \n(say by $n(d)$), \nwe obtain that \nthere exists a constant $g(d)$ such that\n$|G(J)_d|\\leq g(d)$ for all squarefree strongly stable ideals $J\\in\\mathcal{M}(B)$.\nBut then the squarefree strongly stable property implies that\n$m(u)< n(d)+g(d)+d$ for every $u\\in G(J)_{\\leq d}\\in \\mathcal{G}_{\\leq d}$,\nand (\\ref{finite}) follows.\n\nThe second part of the statement is now immediate:\nindeed if $G(\\Delta^s(I))_{\\leq d} = G(\\Delta^{s+1}(I))_{\\leq d}$,\n then by Theorem \\ref{main}, \n$G(\\Delta^s(I))_{\\leq d}= G(\\overline{\\Delta}(I))_{\\leq d}$ \nis the set of minimal generators of a USLI.\n\\hfill$\\square$\\medskip\n\n\\section{Remarks on other term orders}\nWe close the paper by discussing several results and conjectures\nrelated to algebraic shifting with respect to arbitrary term orders. 
\n To this end, we say that an order $\\succ$ \non monomials of $S$ is a {\\em term order} if \n$x_i\\succ x_{i+1}$ for $i\\geq 1$, \n$m\\succ m'$ as long as $\\deg(m)<\\deg(m')$,\nand the restriction of $\\succ$\n to $S_{[n]}$\nis a term order on $S_{[n]}$ for all $n\\geq 1$. In addition,\nwe restrict our discussion\nonly to those term orders on $S$ that are compatible with the squarefree\noperation $\\Phi$, that is, $\\Phi(m)\\succ\\Phi(m')$ if $m\\succ m'$.\n\nSimilarly to Definition \\ref{gin_def}, for a term order $\\succ$ on $S$ and\na homogeneous ideal $I\\subset S$ that is finitely generated in each degree,\nwe define $\\Delta_\\succ(I):=\\Phi(\\mbox{\\upshape Gin}_\\succ(I))$. Thus $\\Delta_\\succ(I)$\nis a squarefree strongly stable ideal that has the same $B$-sequence as $I$.\n(Indeed, the proof of Lemma \\ref{cone2} carries over to this more \ngeneral case.)\n\nWe say that a squarefree monomial ideal $I\\subset S$ is a {\\em US$\\succ$I}\nif for every monomial $m\\in I$ and every squarefree monomial $m'$\nsuch that\n$\\deg(m)=\\deg(m')$ and $m'\\succ m$, $m'$ is an element of $I$ as well. \nBeing US$\\succ$I implies being squarefree strongly stable.\n\nIn view of Theorems \\ref{Thm2} and \\ref{AHH}\nit is natural to ask the following:\n\\begin{enumerate}\n\\item Does $\\Delta_\\succ(I)=I$ hold for every US$\\succ$I I?\n\\item Is there a term order $\\succ$ other than the lexicographic order\nfor which the equality $\\Delta_\\succ(I)=I$ implies that $I$ is a \nUS$\\succ$I?\n\\item Is there a term order $\\succ$ other than the \nreverse lexicographic order such that the equation $\\Delta_\\succ(I)=I$ \nholds for all squarefree strongly stable ideals $I$?\n\\end{enumerate}\n\nThe next proposition answers the first question in the affirmative.\n\\begin{proposition}\nIf $I$ is a US$\\succ$I, then $\\Delta_\\succ(I)=I$\nfor every term order on $S$ that is compatible with $\\Phi$.\n\\end{proposition}\n\\smallskip\\noindent {\\it Proof: \\ } Exactly as in the proof of Theorem \\ref{main} (see the\nlast three lines of Case 2), \none can show that $\\Delta_\\succ(I)\\succeq I$. Hence either\n$\\Delta_\\succ(I)= I$, in which case we are done, or the \n$\\succ$-largest monomial, $m$, in the symmetric difference of \n$G(\\Delta_\\succ(I))$ and $G(I)$ is an element of $G(\\Delta_\\succ(I))$.\nSince $I$ is a US$\\succ$I, we obtain in the latter case that \n$G(\\Delta_\\succ(I))_i=G(I)_i$ for all $i<\\deg(m)$ and\n$$\nG(I)_{i_0}=\\{m'\\in G(\\Delta_\\succ(I))_{i_0} : m'\\succ m\\}\n\\quad \\mbox{ for } i_0=\\deg(m),\n$$\nthat is, $G(I)_{i_0}$ is a strict subset of $ G(\\Delta_\\succ(I))_{i_0}$.\nThis is however impossible, since it contradicts the fact that\nthe ideals $I$ and $\\Delta_\\succ(I)$ have the same $B$-sequence.\n\\hfill$\\square$\\medskip\n\nThe answer to the second question is negative as follows from \nthe following result.\n\n\\begin{proposition}\nIf $I$ is a USLI, then $\\Delta_\\succ(I)=I$ for all term orders $\\succ$.\n\\end{proposition}\nWe omit the proof as it is completely analogous to that of \nTheorem \\ref{main}, Case 1.\n\nWhile we do not know the answer to the third question, we believe that it\nis negative. In fact it is tempting to conjecture that the following holds.\nLet $\\succ$ be a term order on $S$ other than the (graded) \nreverse lexicographic order, and let $k\\geq 2$ be the smallest degree\non which $\\succ$ and revlex disagree. 
Write $m_i$ to denote the $i$th \nsquarefree monomial of $S$ of degree $k$ with respect to the revlex order.\n(It is a fundamental property of the revlex order that every squarefree \nmonomial of $S$ of degree $k$ is of the form $m_i$ for some finite $i$.)\n\n\\begin{conjecture}\nLet $i_0\\geq 1$ be the smallest index for which \n$I_{i_0}:=\\langle m_1, \\cdots, m_{i_0}\\rangle$ is not a US$\\succ$I. \nThen $\\Delta_{\\succ}(I_{i_0})\\neq I_{i_0}$.\n\\end{conjecture}\n\n\n\\section*{Acknowledgments} We are grateful to Aldo Conca for\nhelpful discussions and to the anonymous referees for insightful comments.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{INTRODUCTION}\n\nMonocular Visual Odometry (MVO) is a popular method for camera pose estimation, but due to the scale ambiguity \\cite{song2015high,zhou2016reliable, wu2020eao,2009Absolute,2010scale}, the MVO system cannot provide real odometry data. Therefore, an accurate and robust scale recovery algorithm is of great significance in the application of MVO \\cite{hawkeye-zhu}.\nThe key of scale recovery is to integrate absolute reference information, such as the gravity orientation from IMU \\cite{qin2018vins} or the depth measured by Lidar \\cite{zhang2014real, li2017hybrid}. The baseline of stereo cameras can also serve as such a reference \\cite{mur2017orb}. However, these sensors are not always available in real-world applications. Moreover, a complicated sensor calibration and fusion process is needed to align the MVO system with other sensors. \n\nAnother frequently used method for removing scale ambiguity is to take as reference the height of a mounted camera above the ground plane, which remains a stable signal during the navigation of vehicles. The idea is to estimate a ratio between the real camera height and the relative one calculated from image features. The ratio then can be used to recover the real scale. The advantages of this method are significant since it does not depend on other sensors and is with high feasibility. The method is also regarded as one of the most promising solutions in this research area.\n\nPrior work like \\cite{zhou2019ground,zhou2016reliable,song2015high,1211380} typically leverages the results of feature matching to calculate the homography matrix and then decompose the matrix to estimate the parameters of the ground plane, based on which the relative camera height can be obtained. The major deficiency of this method is that the decomposition is very sensitive to noises and multiple solutions exist, which requires additional operations to eliminate the ambiguity. Some other work like \\cite{grater2015robust,1211380} chooses to directly fit the ground plane using feature points that lie on the ground, e.g., the center regions of a road. By removing low-quality image features outside the target region, the robustness is improved. However, the target region may be occluded sometimes, which interferes with the detection of image features, thus degrading the accuracy of scale recovery.\n\n\nIn recent work \\cite{yin2017scale,andraghetti2019enhancing,xue2020toward,wagstaff2020self, wang2020tartanair}, deep learning based MVO algorithms are proposed, in which the camera pose with a real scale is directly predicted by the neural network in an end-to-end manner. Such methods have received much attention in recent years, but their generalization ability across different scenarios is very limited \\cite{wang2020tartanair}. Some other deep learning based methods take scale recovery as an independent problem. For instance, DNet \\cite{xue2020toward} is proposed to perform ground segmentation and depth regression simultaneously. Based on the predicted depth points within the ground region, a dense geometrical constraint is then formulated to help recover the scale. In \\cite{wagstaff2020self}, a scale-recovery loss is developed based on the idea of enforcing the consistency between the known camera height and the predicted one. Constrained by this loss, the neural network can predict more accurate ego poses. 
Nonetheless, these methods usually require a large-scale training process, and the computational cost is prohibitively expensive.\n\nIn this paper, we propose a light-weight method for accurate and robust scale recovery based on plane geometry. The method includes an efficient Ground Point Extraction (GPE) algorithm based on Delaunay triangulation \\cite{shewchuk1996triangle} and a Ground Points Aggregation (GPA) algorithm for aggregating ground points from consecutive frames. Based on these two algorithms, a large number of high-quality ground points are selected. For scale recovery, we first formulate a least-square problem to fit the ground plane and then estimate the relative camera height to calculate the real scale. By leveraging the high-quality points and a RANSAC-based optimizer, the scale can be estimated accurately and robustly. Benefiting from the light-weight design of the algorithms, our method can achieve a 20Hz running frequency on the benchmark dataset. \n\nThe main contributions of this work are as follows:\n\\begin{itemize}\n\t\\item We propose a GPE algorithm based on Delaunay triangulation, which can accurately extract ground points. \n\t\\item We propose a GPA algorithm that can effectively aggregate local ground points and perform robust optimization of the ground plane parameters. \n\t\\item Based on the proposed algorithms, we implement a real-time MVO system with accurate and robust scale recovery, aiming to reduce scale drift and provide accurate odometry in long-distance navigations without loop closure. \n\n\\end{itemize}\n\n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[height=6.3cm]{sys2.pdf}\n\t\\caption{Overview of the proposed system. There are two parallel threads in the system: 1) The MVO thread takes image frames as input and estimates the current camera pose, 2) The GPE-GPA thread fetches image features from the MVO thread and selects high-quality points for ground plane estimation.}\n\n\t\\label{overview}\n\t\\vspace{-2mm}\n\\end{figure*}\n\n\n\n\\section{System Overview}\nThe notations used in this paper are as follows:\n\\begin{itemize}\n\t\\item $T_t\\in{R^{4\\times4}}$ - The camera pose of image frame $I_t$ in the global frame, which is composed of a camera orientation $R_t\\in{R^{3\\times3}}$ and a translation $\\boldsymbol{t}_t\\in{R^{3\\times1}}$.\n\t\\item $T_{t,t-1}$ - The relative pose of frame $I_{t-1}$ w.r.t. frame $I_t$.\n\t\\item $K$ - The intrinsic matrix of a pinhole camera model.\n\t\\item $\\mathbf{x}_i, \\mathbf{u}_i$ - The 3D map point in camera frame and its corresponded 2D point on image plane after projection.\n\n\t\\item $\\mathbf{n}_t, h_t$ - The plane parameters, i.e., $\\mathbf{n}_t \\cdot \\mathbf{x}_i-h_t=\\mathbf{0}$, where $\\mathbf{n}_t$ is the normal vector and $h_t$ is the distance to the plane.\n\t\\item $h, {h}^{\\dagger}, h^*$ -- The calculated camera height form image features, the estimated camera height after scale recovery, and the real camera height.\n\\end{itemize}\n\n\n\\subsection{Problem Definition}\nGiven consecutive image frames from a calibrated monocular camera, our goal is to estimate the absolute scale of camera poses and then recover the real camera trajectory by making use of the prior known camera height $h^*$. Under scale ambiguity, the camera height $h$ calculated from image features maintains a ratio with the real one, i.e., $s=h^*\/h$. 
Therefore, scale recovery is essentially to compute $s$, and the key lies in the accurate estimation of the ground plane.\n\n\n\n\n\n\n\\subsection{System Architecture}\nThe proposed system in this work is shown in Fig. \\ref{overview}. There are two parallel threads in the system: The first one is the MVO thread, which takes consecutive images as input and estimates the current camera pose, e.g., the ORB-SLAM2 framework. The second thread is used to run the GPE and GPA algorithms for scale recovery. The proposed system is based on such an assumption that the ground is locally flat and can be approximated by a plane with a surface normal. The workflow of the second thread is as follows.\n\nAs shown in the red block in Fig.\\ref{overview}, for each image frame from the MVO thread, the Delaunay triangulation is first applied to segment the matched feature points into a set of triangles. Each triangle is then back-projected into the camera frame, and the associated plane parameters are also estimated. After that, several geometrical constraints are leveraged to select and then refine ground points. \n\nNote that selected ground points are not enough for an accurate estimation of the plane parameters. We thus propose the GPA algorithm to aggregate ground points from multiple frames using a sliding windows method, as shown in the orange block of Fig.\\ref{overview}. Based on the aggregated local points, a robust parameter estimation procedure is then performed to fit the ground plane. Accordingly, the relative camera height of each frame can be estimated, and the absolute camera trajectory is recovered, shown in the blue block of Fig.\\ref{overview}.\n\n\n\n\\section{Ground Plane Estimation}\n\n\\subsection{Ground Point Extraction}\nFor a given set of matched feature points $\\mathbf{u}^t_i$, $i\\in\\{1,2,\\dots,N\\}$, in the current image frame $I_t$, the Delaunay triangulation uses each of the feature points as a triangle vertex. We back-project the triangles from the image plane into the current camera frame and denote them by $\\Delta_i^t, i\\in\\{1,2,\\dots M\\}$ associated with a set of vertices $\\mathbf{x}^t_{ij},j\\in\\{1,2,3\\}$. The normal vector $\\mathbf{n}_i^t = ({n}_{i,x}^t, {n}_{i,y}^t, {n}_{i,z}^t)$ of each triangle can be obtained by the cross product,\n\\begin{equation}\n\t\\mathbf{n}_i^t=\\frac{(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i2})\\times(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i3})}{||(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i2})\\times(\\mathbf{x}^t_{i1}-\\mathbf{x}^t_{i3})||_2},\n\t\\label{est-n}\n\\end{equation}\nwhere $\\mathbf{n}_i^t$ has an unit length. For each vertex of the triangle, the following geometrical constraint then holds:\n\\begin{equation}\n\t\\mathbf{n}^t_i \\cdot \\mathbf{x}^t_{ij}-h^t_i=0.\n\t\\label{est-h}\n\\end{equation}\n\nTherefore, $h^t_i$ can also be estimated. Here, we also add two additional constraints, i.e., ${n}_{i,y}^t>0$ and $h^t_i>0$, based on the fact that the camera is mounted on the top of the vehicle and is above the ground plane, as shown in Fig.\\ref{overview}.\n\nNote that the triangles are scattered in the whole image plane, hence we need to identify the ones located on the ground plane, named \\textit{ground triangles}, for estimating the plane parameters. 
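To make the per-triangle computation above concrete, the following minimal Python sketch evaluates Eq.~(\ref{est-n}) and Eq.~(\ref{est-h}) for one back-projected triangle and applies the two sign checks ${n}_{i,y}^t>0$ and $h^t_i>0$. The function name and the toy coordinates are illustrative only and are not part of the proposed system; the three vertices are assumed to be expressed in a camera frame whose $y$-axis points downwards.
\begin{verbatim}
import numpy as np

def triangle_plane(x1, x2, x3):
    """Unit normal n and offset h of the plane spanned by one triangle:
    n = (x1-x2) x (x1-x3) / ||.||  (Eq. (1)),  h = n . x1  (Eq. (2))."""
    n = np.cross(x1 - x2, x1 - x3)
    n = n / np.linalg.norm(n)
    return n, float(np.dot(n, x1))

# Toy triangle lying on a flat ground 1.7 m below the camera.
v1 = np.array([0.0, 1.7, 5.0])
v2 = np.array([-1.0, 1.7, 7.0])
v3 = np.array([1.0, 1.7, 6.0])
n, h = triangle_plane(v1, v2, v3)
# The sign of n depends on the vertex ordering; a triangle is kept as a
# ground candidate only if n_y > 0 and h > 0, mirroring the two constraints.
print(n, h, bool(n[1] > 0 and h > 0))   # approx. [0. 1. 0.]  1.7  True
\end{verbatim}
In the full pipeline this computation is carried out for every triangle produced by the Delaunay triangulation, and only the vertices of the surviving triangles are considered further.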
Based on the fact that the normal of a ground triangle is orthogonal to the camera translation $\boldsymbol{t}_{t,t-1}$, and that the pitch angle of the camera is zero, the ground triangles can be identified by testing with the following constraints,
\begin{equation}
	\begin{aligned}
		\arccos(\mathbf{n}^t_i, \boldsymbol{t}_{t,t-1}) &= 0, \\
		|\arctan(-\frac{R_{32}}{R_{33}})| = 0,\, &R_{33}\neq0. \\
	\end{aligned}
\end{equation}

In practice, the equality conditions cannot be strictly satisfied. We thus set a tolerance value of $5^\circ$ in the actual implementation.


For ground triangles that satisfy the above constraints, their vertices are categorized into a new point set $\tilde{\mathbf{x}}^t_{ij}, i\in\{1,2,\dots K\}, j\in\{1,2,3\}, K\small{<}M$. Since the same vertex point may be shared by multiple triangles, we also need to remove the repetitive ones from the point set. This will ensure the same contribution of each point to the ground plane estimation. 

The ground points are now initially segmented out, denoted by $\tilde{\mathbf{x}}^t_{k} \in \mathcal{G}$, but there may still exist some outliers introduced by moving objects and some remote points.
To further improve the quality of $\mathcal{G}$, a RANSAC-based method is leveraged to optimize $\tilde{\mathbf{x}}^t_{k}$, which minimizes a plane-distance error as follows,
\begin{equation}
	\min_{\tilde{\mathbf{x}}^t_{g} \in \mathcal{G}} \;\sum_{g=1}^{|\mathcal{G}|}||\mathbf{n}^t\cdot\tilde{\mathbf{x}}^t_{g}-h^t||_2.
	\label{ransac}
\end{equation}

In the implementation, we randomly sample three points to estimate a new plane with eq. \eqref{est-n}-\eqref{est-h}, and then we calculate the total distance of the remaining points to the estimated plane. Such a process repeats $Z$ times, and the plane that induces the minimum total distance error is reserved. The points with a distance larger than $0.01$m to the reserved plane are then removed.
This is a stricter criterion for ground point selection. After this process, only high-quality ground points are reserved. In Alg. 
\ref{ag1}, we present a complete procedure of the GPE algorithm, which gives more details about the proposed implementation.


\begin{algorithm}[t]
	\caption{ Ground Point Extraction (GPE) }
	\label{ag1}

	\KwIn{ $\mathbf{u}^t_i$ , $R_{t,t-1}$, $\boldsymbol{t}_{t,t-1}$}
	\KwOut{$\{\tilde{\mathbf{x}}^t_{g}\}$}

	$\{\Delta_i^t\}$$\gets$ \textsc{DelaunayTriangulation}($\mathbf{u}^t_i$) 

	triangle points set $\{\tilde{\mathbf{x}}^t_{ij}\}\gets\emptyset$

	segmented points set $\{\tilde{\mathbf{x}}^t_{k}\}\gets\emptyset$

	ground points set $\mathcal{G}_{best}\gets\emptyset$

	temp ground points set $\mathcal{G}_k\gets\emptyset$

	\For{each $\mathbf{u}_{ij}^t\in\{\Delta_i^t\}, j=\{1,2,3\}$ }{
		back-project $\mathbf{x}^t_{ij}$=$K^{-1}\mathbf{u}_{ij}^t$

		calculate $\mathbf{n}_i^t$ by Eq.(1)

		calculate $h_i^t$ by Eq.(2)

		\If{$|\arctan(-\frac{R_{32}}{R_{33}})|<\theta_{1}$ \& $h^t_i>0$}
		{
			\If{$\arccos(\mathbf{n}^t_i, \boldsymbol{t}_{t,t-1})<\theta_{2}$}
			{
				$\{\tilde{\mathbf{x}}^t_{ij}\}\cup\mathbf{x}^t_{ij}$
			}
		}
	}

	delete repeated vertices and get $ \{\tilde{\mathbf{x}}^t_{k}\}\subset\{\tilde{\mathbf{x}}^t_{ij}\}$

	{/* \emph{ensure enough points} */} \\
	\While{size ($ \{\tilde{\mathbf{x}}^t_{k}\}$) $>$ 5 }{ 
		\For{iterations $<Z$}{
			randomly sample 3 points from $\{\tilde{\mathbf{x}}^t_{k}\}$ and estimate a plane by Eq.(1)-(2)

			$\mathcal{G}_k\gets$ points with distance to the plane smaller than $d_{thresh}$ \tcp{collect inliers}

			\If{size $(\mathcal{G}_k)$ $>$ size $(\mathcal{G}_{best})$ }
			{
				$\mathcal{G}_{best}\gets\mathcal{G}_k$ \tcp{best model select}
			}
		}
		$\{\tilde{\mathbf{x}}^t_{g}\}\gets\mathcal{G}_{best}$

		\Return {$\{\tilde{\mathbf{x}}^t_{g}\}$}
	}
\end{algorithm}



\subsection{Ground Point Aggregation}

Due to the strict criteria of the GPE algorithm, the inliers from a single frame are not enough for an accurate estimation of the ground plane. Therefore, we propose the GPA algorithm to aggregate ground points from consecutive image frames. As shown in Fig. \ref{lpg}, we leverage the sliding window method to select image frames, and a frame buffer is maintained to store the camera poses and ground points in the current window. At each time step, with the arrival of a new image frame, we update the buffer and then estimate the ground plane by solving a least-squares problem. 



\begin{figure}[t]
	\centering
	\includegraphics[height=5.5cm]{SW.pdf}
	\caption{Illustration of the GPA algorithm. In the bottom-left figure, the red dots indicate ground points, and the green line segment is the normal of each triangle. In the bottom-right figure, the red quadrilateral is the estimated ground plane based on the aggregated ground points in the current window.}
	\label{lpg}
\end{figure}

From the MVO thread, we can get the pose $T_t$ and the inliers $\mathbf{x}^t_g$ of each frame from the buffer. 
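Returning briefly to the extraction step, the RANSAC refinement of Eq.~(\ref{ransac}) summarized in Alg.~\ref{ag1} can be written compactly in Python. The sketch below is an illustration rather than the implementation used in our system: the function names are arbitrary, the hypothesis is scored by its inlier count (the accumulated point-to-plane distance of Eq.~(\ref{ransac}) can be used in the same way), and the $0.01$\,m threshold is the value quoted in the text.
\begin{verbatim}
import numpy as np

def plane_from_3pts(p1, p2, p3):
    """Plane (n, h) through three points, cf. Eq. (1)-(2); None if degenerate."""
    n = np.cross(p1 - p2, p1 - p3)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None
    n = n / norm
    return n, float(np.dot(n, p1))

def ransac_refine(points, iters=100, dist_thresh=0.01, seed=0):
    """Keep only candidate ground points close to the best sampled plane."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(points), size=3, replace=False)
        plane = plane_from_3pts(*points[idx])
        if plane is None:          # skip collinear samples
            continue
        n, h = plane
        inliers = np.abs(points @ n - h) < dist_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return points[best]

# Usage: pts is an (N, 3) array of candidate ground points in the camera frame;
# ransac_refine(pts, iters=Z) returns the refined set after Z repetitions.
\end{verbatim}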
We then can transform each of the inliers, denoted by $\\mathbf{x}^t_{gi}$, into the global frame,\n\\begin{equation}\n\t\\mathbf{p}_i=T_t \\,[\\mathbf{x}^t_{gi}, 1]^T.\n\\end{equation}\n\nSuppose there are $N$ local ground points in the buffer, the least-squares problem that minimizes the plane-distance error of these points is formulated as follows:\n\\begin{equation}\n\t\\begin{aligned}\n\t\t\\min_{\\boldsymbol{\\mu}}\\sum_{i=1}^{N}\\|\\boldsymbol{\\mu}^T \\mathbf{p}_i\\|_2&=\\min_{\\boldsymbol{\\mu}}\\boldsymbol{\\mu}{P_t}{P_t}^T\\boldsymbol{\\mu}^T, \\\\\n\t\t\\boldsymbol{\\mu} = &[\\mathbf{n}_t, -h_t]^T,\n\t\\end{aligned}\n\t\\label{mat}\n\\end{equation}\nwhere ${P_t}=[\\mathbf{p}_1, \\mathbf{p}_2, \\cdots, \\mathbf{p}_N ]\\in R^{4\\times N}$. Equation (\\ref{mat}) can be rewritten as follows:\n\\begin{equation}\n\t\\begin{aligned}\n\t\t&\\min_{\\boldsymbol{\\mu}} \\boldsymbol{\\mu} Q_t\\boldsymbol{\\mu}^T, \\\\ \n\t\tQ_t&={P}_t{P}_t^T\\in R^{4\\times4},\t\n\t\\end{aligned}\n\t\\label{opt}\n\\end{equation}\nwhich can then be efficiently solved by the SVD method.\n\nTo further improve the estimation accuracy of $\\boldsymbol{\\mu}$, we also introduce a weighting matrix $\\Sigma={\\sigma^{-2}_z}I$, where $\\sigma_z$ is the normalized distance of the point depth to their mean value. As a result, the matrix $Q$ in Eq. \\eqref{opt} becomes,\n\\begin{equation}\n\tQ_t={P}_t \\Sigma {P}_t^T\\in R^{4\\times4}.\n\\end{equation}\n\nAnother important refinement on the estimated plane parameter is to conduct a RANSAC-based optimization, which shares the same idea as Eq. \\eqref{ransac}. In each iteration of the optimization, we first estimate $\\boldsymbol{\\mu}$, and then calculate the distance between $\\mathbf{p}_i$ and the estimated plane. Points with a distance larger than $0.01$m are removed, and the remaining is then leveraged to estimate a new $\\boldsymbol{\\mu}$. Such a process continues until convergence. We denote the final plane normal by $\\mathbf{n}_t^*$ and the reserved ground points by $\\mathbf{p}_k^*, k\\in\\{1, 2, \\cdots, K\\}$. The relative camera height then can be calculated by projecting the camera center to the ground plane:\n\\begin{equation}\n\th^t_j=\\mathbf{n}_t^* \\cdot (\\mathbf{p}_c-\\mathbf{p}_k^*),\n\t\\label{multi-h}\n\\end{equation}\nwhere $\\mathbf{p}_c$ is the camera center of frame $I_t$ in the global frame. It is worth noting that there are $K$ estimated camera heights, which will be further processed to recover a smooth scale. Details of the GPA algorithm are presented in Alg. 
\ref{alg2}.

\begin{algorithm}[t]
	\label{alg2}
	\caption{Ground Points Aggregation (GPA)}

	\KwIn{ $I_t$, $\{\mathbf{x}^t_{g}\}$, $T_t$}

	\KwOut{$\{h^t\}$}

	buffer $\{queue\}\gets \emptyset$

	inliers in global frame $\{\mathbf{p}_t\} \gets \emptyset$

	inliers in current frame $\{\mathbf{x}_g^t\} \gets \emptyset$

	reserved ground points $\{\mathbf{p}_k^*\}\gets \emptyset$

	camera heights $\{h^t\}\gets \emptyset$

	\While{$I_t$ is not empty}{
		$\{queue\} \cup \{I_t\} $

		\If{$size(\{queue\})>4$}{
			\textsc{pop$\_$front}($\{queue\}$) \tcp{fixed number of frames}

			\For{$each$ $I_t$ in $\{queue\}$}{
				calculate $\mathbf{p}_i$ by Eq.(5)

				$\{\mathbf{p}_t\} \cup \{\mathbf{p}_i\} $
			}

			Matrix $P_t$

			calculate $\Sigma={\sigma^{-2}_z}I$ \tcp{Weighting Matrix}

			calculate $Q_t$ by Eq.(8)

			$\boldsymbol{\mu} \gets \textsc{SVDDecomposition}(Q)$
		}

		\For{iteration $<Z$}{
			\For{each $\mathbf{p}_s$ in $\{\mathbf{p}_t\}$}{
				\If{distance$(\mathbf{p}_s, \boldsymbol{\mu}) > d_{thresh}$}{
					Remove $\mathbf{p}_s$ in $\{\mathbf{p}_t\}$ \tcp{drop outliers}
				}
			}
			$\{\mathbf{p}_k^*\}\gets \{\mathbf{p}_t\}$

			calculate $Q_s$ by Eq.(8)

			$Q \gets Q_s$

			$\boldsymbol{\mu_s} \gets \textsc{SVDDecomposition}(Q)$

			$\boldsymbol{\mu} \gets \boldsymbol{\mu_s}$ \tcp{update model}
		}

		\For{$\mathbf{p}^*$ in $\{\mathbf{p}_k^*\}$}{
			calculate $h_j^t$ by Eq.(9)

			$\{h^t\}\cup \{h_j^t\}$
		}
		\Return{$\{h^t\}$}
	}

\end{algorithm}

\subsection{Filter-Based Scale Recovery}
After we compute the relative camera height $h$ of each frame, the scale factor is obtained as $s_t=h^*/h$, and the motion scale of each frame is recovered by
\begin{equation}
	\boldsymbol{t}^{\dagger}_{t,t-1}=s_t \cdot \boldsymbol{t}_{t,t-1}.
\end{equation}

Corresponding to the multiple $h$ values in Eq. \eqref{multi-h}, there are also multiple estimated scales.


By plotting the scaled camera heights of each frame, as shown in Fig. \ref{gaussian}, we find that the data do not strictly follow a Gaussian distribution. Therefore, we choose the median point as the scale of the current frame. In the time domain, a moving average filter is applied, shown in Fig. \ref{filter-b}, which gives a smoother result.

\begin{figure}[!htbp]
	\centering	
	\subfigure[The distribution of the scaled camera heights.]{
		\includegraphics[width=0.90\linewidth]{distribution.pdf}
		\label{gaussian}
	}
	\subfigure[The estimated camera height on sequence-02 and -05.]{
		\includegraphics[width=0.50\linewidth]{Figure01.png}\hspace{-5mm}
		\includegraphics[width=0.50\linewidth]{Figure02.png}
		\label{filter-b}
	}	
	\centering
	\caption{Demonstration of the filter-based scale recovery. The green points are the scaled camera heights. The red curve is the smoothed one.}
	\label{filter}
\end{figure}


\section{Experiments}
We conduct experiments to evaluate the performance of our proposed method. The MVO system used in the experiments is implemented based on ORB-SLAM2, and the proposed scale recovery method is integrated as an independent thread. The system architecture is demonstrated in Fig. \ref{overview}. 
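For concreteness, the plane fit of Eqs.~(\ref{mat})-(\ref{opt}) with the weighting of Eq.~(8), followed by the height computation of Eq.~(\ref{multi-h}) and the scale $s_t=h^*/h$, can be condensed into the following Python sketch. It is an illustration under simplifying assumptions rather than the implementation evaluated in this section: the function names are arbitrary, the weights default to uniform (the depth-based weights $\sigma_z^{-2}$ may be supplied by the caller), and the moving-average smoothing of Fig.~\ref{filter} is omitted.
\begin{verbatim}
import numpy as np

def fit_ground_plane(points, weights=None):
    """Minimize mu Q mu^T over mu = [n, -h] up to scale (Eqs. (6)-(8)).
    points: (N, 3) aggregated ground points in the global frame."""
    N = len(points)
    w = np.ones(N) if weights is None else np.asarray(weights)
    P = np.hstack([points, np.ones((N, 1))]).T        # 4 x N, cf. Eq. (6)
    Q = P @ np.diag(w) @ P.T                          # cf. Eq. (8)
    _, _, vt = np.linalg.svd(Q)
    mu = vt[-1]                        # singular vector of the smallest value
    mu = mu / np.linalg.norm(mu[:3])   # normalize so that ||n|| = 1
    n, h = mu[:3], -mu[3]
    if h < 0:                          # keep the camera above the plane
        n, h = -n, -h
    return n, h

def scale_from_heights(n, ground_pts, cam_center, real_height):
    """Camera height for every reserved point (Eq. (9)), median value,
    and the resulting scale factor s = h* / h."""
    heights = (cam_center - ground_pts) @ n
    return real_height / float(np.median(heights))

# The relative translation of the current frame is then rescaled as
# t_scaled = s * t_rel, i.e., the recovery step described above.
\end{verbatim}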
The KITTI dataset \\cite{geiger2012we} is adopted as the benchmark dataset, in which sequence-01 is not used since it fails most feature-based VO systems. All the experiments are conducted using a laptop with Intel(R) Core(TM) i5-6300HQ CPU @ 2.30 GHz. \n\n\\begin{figure*}[ht]\n\t\\centering\n\t\\includegraphics[height=8.2cm]{trajectoryv2.pdf}\n\t\\caption{The re-scaled trajectories on KITTI dataset. The blue and green trajectories are generated by our system and ORB-SLAM2-noLC, respectively.}\n\t\\label{fig3}\n\\end{figure*}\n\n\\subsection{Qualitative Evaluation}\n\nThe qualitative evaluation results of the proposed method are visualized in Fig. \\ref{fig3}. The trajectories outputted by the system are recovered using the proposed method, which means similarity transformation is not necessary when compared with ground-truth trajectories\\cite{grupp2017evo}. The baseline trajectories, indicated by green color in Fig. \\ref{fig3}, are generated by ORB-SLAM2 with loop closure detection disabled, denoted by ORB-SLAM2-noLC. We can see that our re-scaled trajectories can eliminate scale drift and form correct loops, which demonstrates the effectiveness of the proposed method. \n\nThe comparison of trajectory length between the ground truth and the proposed method is shown in Table \\ref{tabel-1}, in which the Relative Length Error (RLE) is computed by $e=|l_{gt}-l_{ours}|\/|l_{gt}|$, where $l$ is the length of a trajectory.\nFor sequence-02, -04, -06, -07, and -10, the RLE is less than 1$\\%$. The high performance is due to the fact that most roads in these sequences are straight lines and contain rich features. Sequence-00 and -08 are more complicated cases, in which the scenario is composed of a lot of curves and turns. The path lengths are relatively longer than other sequences. The RLE is thus slightly higher, 2.17$\\%$ and 2.72$\\%$, respectively. 
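For reference, the RLE values can be reproduced directly from the trajectory lengths of Table~\ref{tabel-1}; the following two-line check for sequence-00 is illustrative only:
\begin{verbatim}
l_gt, l_ours = 3645.213, 3724.419           # sequence-00 trajectory lengths
print(100.0 * abs(l_gt - l_ours) / l_gt)    # ~2.173 (%), matching the table
\end{verbatim}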
Nevertheless, the results show that the proposed system can estimate an accurate scale of the entire trajectory.\n\n\\begin{table}[t]\n\t\\renewcommand\\tabcolsep{16.0pt} \n\t\\caption{Comparison of re-scaled trajectory \\protect\\\\length with ground truth}\n\t\\begin{center}\n\t\t\\label{table_time}\t\n\t\t\\begin{tabular}{lllllll}\n\t\t\t\\toprule\n\t\t\tSeq & GT (m)&Ours (m) & RLE (\\%)\\\\\n\t\t\t\\midrule\n\t\t\t00 & 3645.213 & 3724.419 & 2.173 \\\\\n\t\t\t02 & 5071.151 &5067.233 &0.757 \\\\\n\t\t\t03 &547.774 &560.888 &0.558 \\\\\n\t\t\t04 &391.861 &393.645 &0.455 \\\\\n\t\t\t05 &2175.159 &2205.576 &1.398 \\\\\n\t\t\t06 &1235.982 &1232.876 &0.251 \\\\\n\t\t\t07 &709.065 &694.697 &0.767 \\\\\n\t\t\t08 &3137.398 &3222.795 &2.722 \\\\\n\t\t\t09 &1738.601 &1705.051 &1.929 \\\\\n\t\t\t10 &911.399 &919.518 &0.890 \\\\\n\t\t\t\\bottomrule \t\t\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\label{tabel-1}\n\\end{table}\n\\renewcommand{\\arraystretch}{1} \n\\begin{table*}[t]\n\t\\centering\n\t\\fontsize{6}{8.4}\\selectfont\n\t\\begin{threeparttable}\n\t\t\\caption{comparison of average translation errors and rotation errors with the latest visual odometry methods on kitti dataset}\n\t\t\\label{table2}\n\t\t\\begin{tabular}{ccccccccccccccccccc}\n\t\t\t\\toprule\n\t\t\t\\multirow{3}{*}{Seq}& \t\n\t\t\t\\multicolumn{2}{c}{ORB-SLAM2-noLC \\cite{mur2017orb}}\n\t\t\t&\\multicolumn{2}{c}{ VISO2-M\\cite{2011StereoScan}}\n\t\t\t&\\multicolumn{2}{c}{VISO2-Stereo\\cite{2011StereoScan}}\n\t\t\t&\\multicolumn{2}{c}{Song et.al\\cite{song2015high}}\n\t\t\t&\\multicolumn{2}{c}{Zhou et.al\\cite{zhou2019ground}}\n\t\t\t&\\multicolumn{2}{c}{Brandon et.al\\cite{wagstaff2020self}}\n\t\t\t&\\multicolumn{2}{c}{DNet\\cite{xue2020toward}}\n\t\t\t\n\t\t\t&\\multicolumn{2}{c}{Ours}\\cr\n\t\t\t\\cmidrule(lr){2-3} \\cmidrule(lr){4-5}\n\t\t\t\\cmidrule(lr){5-6} \\cmidrule(lr){6-7}\n\t\t\t\\cmidrule(lr){8-9} \\cmidrule(lr){10-11}\n\t\t\t\\cmidrule(lr){12-13} \\cmidrule(lr){14-15}\n\t\t\t\\cmidrule(lr){16-17}\n\t\t\t&Trans&Rot&Trans&Rot&Trans&Rot&Trans&Rot\n\t\t\t&Trans&Rot&Trans&Rot&Trans&Rot&Trans&Rot\\cr\n\t\t\t&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)\n\t\t\t&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)&(\\%)&(deg\/m)\\cr\n\t\t\t\\midrule\n\t\t\t00&20.8&$-$&11.91&0.0209&2.32&0.0109&2.04&0.0048&2.17&0.0039&1.86& &1.94&$-$ &\\textbf{1.41}&0.0054\\cr\n\t\t\t\n\t\t\t01&$-$&$-$&$-$&$-$&$-$&$-$&$-$ &$-$ & $-$&$-$ &$-$ &$-$ &$-$&$-$&$-$&$-$\\cr\n\t\t\t\n\t\t\t02&9.52&$-$&3.33&0.0114&\\textbf{2.01}&0.0074&1.50&0.0035& $-$&$-$ &2.27&$-$ &3.07&$-$ &2.18&0.0046\\cr\n\t\t\t\n\t\t\t03&$-$&$-$&10.66&0.0197&2.32&0.0107&3.37&0.0021&2.70&0.0044&$-$& $-$&$-$&$-$ &\\textbf{1.79}&0.0041\\cr\n\t\t\t\n\t\t\t04&$-$&$-$&7.40&0.0093&0.99&0.0081&2.19&0.0028&$-$ &$-$ &$-$&$-$ &$-$&$-$ &\\textbf{1.91}&0.0021\\cr\n\t\t\t\n\t\t\t05&18.63&$-$&12.67&0.0328&1.78&0.0098&1.43&0.0038&$-$ &$-$ &\\textbf{1.50}&$-$ &3.32&$-$ &1.61&0.0064\\cr\n\t\t\t\n\t\t\t06&18.98&$-$&4.74&0.0157&\\textbf{1.17}&0.0072&2.09&0.0081& $-$& $-$&2.05&$-$ &2.74&$-$ &2.03&0.0044\\cr\n\t\t\t\n\t\t\t07&13.82&$-$&$-$&$-$&$-$&$-$&$-$&$-$&$-$ &$-$ &1.78& $-$&2.74& $-$ &\\textbf{1.77} &0.0230\\cr\n\t\t\t\n\t\t\t08&22.06&$-$&13.94&0.0203&2.35&0.0104&2.37&0.0044& $-$& $-$&2.05&$-$ &2.72&$-$ &\\textbf{1.51}&0.0076\\cr\n\t\t\t\n\t\t\t09&12.74&$-$&4.04&0.0143&2.36&0.0094&1.76&0.0047&$-$ &$-$ &\\textbf{1.50}&$-$ &3.70& $-$ &1.77&0.0118\\cr\n\t\t\t\n\t\t\t10&4.86&$-$&25.20&0.0388&1.37&0.0086&2.12&0.0085&2.09&0.0054&3.70& &5.09& &\\textbf{1.25} &0.0031& 
\\cr\n\t\t\t\\midrule\n\t\t\tAvg&18.17& $-$&14.39&0.0245&2.32&0.0095&2.03&\\textbf{0.0045}&2.32&0.045&2.03 & $-$&3.17&$-$&\\textbf{1.72} & 0.0068&\\cr\n\t\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{threeparttable}\n\\end{table*}\n\n\n\n\n\\subsection{Quantitative Evaluation}\n\n{The quantitative comparison} between our method and the baseline methods, including \\cite{wagstaff2020self,xue2020toward,song2015high,zhou2019ground,2011StereoScan,mur2017orb}, is presented in Table \\ref{table2}. The average translation error and rotation error are adopted as evaluation metrics.\n\nWe can see that ORB-SLAM2-noLC and VISO2-M have the worst performance due to the lack of loop closure detection. The scale drift of the two methods induces a large translation error, 18.17$\\%$ and 14.39$\\%$ respectively, while the VO systems with scale recovery all maintain a low translation error, $<4\\%$. It can also be seen that a MVO system with scale recovery \\cite{song2015high, zhou2019ground, xue2020toward, wagstaff2020self} can exhibit competitive performance with a stereo VO system like VISO2-M \\cite{2011StereoScan}, which significantly demonstrates the importance of scale recovery for MVO. \n\nIn terms of monocular systems, we can see our proposed method achieves the minimum translation error while maintaining a competitive performance on rotation error. The methods proposed by Song \\textit{et al.} \\cite{song2015high} and Zhou \\textit{et al.} \\cite{zhou2019ground} can not work with sequence-07, because they both rely on a fixed region to extract ground points, whereas occlusions by moving vehicles occur frequently in this sequence. In contrast with \\cite{song2015high, zhou2019ground}, the proposed method works well with sequence-07 with a translation error of 1.77$\\%$, benefiting from the GPA algorithm. \n\n\nIn \\cite{xue2020toward}, a deep neural network, named DNet, is proposed for monocular depth prediction and scale recovery. Compared with this method, our method shows a better accuracy in all the sequences. In \\cite{wagstaff2020self}, a real-time MVO system is implemented based on self-supervised learning techniques. This method can slightly outperform our proposed method in sequence-05 and -09, but has a much lower accuracy in sequence-00, -08, and -10. A similar phenomenon can be observed when comparing with \\cite{song2015high}. This indicates a significant variance on the performance of \\cite{wagstaff2020self}. Actually, this is the limitation of most deep learning based methods, which has been discussed in detail by \\cite{wang2020tartanair}. \n\nThe comparative experiments in Table \\ref{table2} significantly verify the effectiveness of our method and demonstrates the advantages over the latest methods in the literature.\n\\subsection{Efficiency Evaluation}\nAnother significant advantage of our method lies in its high efficiency. We evaluate the run-time of our system on all the KITTI sequences mentioned above, and the experiment repeats five times. The median run-time is reported in Fig. \\ref{time}. In all the experiments, the MVO thread requires $50$-$55$ ms, while the GPE and GPA requires less than $10$ ms, which makes the system suitable for real-time applications. \n\\section{CONCLUSIONS}\n\nIn this work, we present a light-weight MVO system with accurate and robust scale recovery, aiming to reduce scale drift and provide accurate odometry in long-distance navigations without loop closure. 
We solve the scale ambiguity for MVO by implementing our GPE-GPA algorithm for selecting high-quality points and optimizing them in a local sliding window. Sufficient data and a robust optimizer yield an accurate metric trajectory by leveraging the ratio between the estimated camera height and the real one. Extensive experiments show that our proposed framework can achieve state-of-the-art accuracy and recover a metric trajectory without additional sensors. The system is designed to be a light-weight framework, which achieves real-time performance with a running frequency of 20 Hz. Our proposed light-weight MVO system facilitates the localization and navigation of low-cost autonomous vehicles in long-distance scenarios.

Future work will consider integrating the uncertainty of the plane estimation, which is expected to further improve the accuracy of scale recovery. A light-weight neural network for ground segmentation will also be considered to help constrain the extraction of high-quality ground points.



\begin{figure}[ht]
	\centering
	\includegraphics[scale=0.45]{time-consum2.pdf}
	\caption{Run-time of our MVO system on the KITTI dataset. The time costs of the MVO thread, the GPE algorithm, and the GPA algorithm are reported.}
	\label{time}
\end{figure}
\enlargethispage{-7.8cm}


\bibliographystyle{IEEEtran}
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}