diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzaixh" "b/data_all_eng_slimpj/shuffled/split2/finalzzaixh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzaixh" @@ -0,0 +1,5 @@ +{"text":"\\subsection*{Acknowledgments}\nWe thank Denny Wu, Xiaoyu Liu, Dongruo Zhou, Vedant Nanda, Ziyan Yang, Xiaoxiao Li, Jiahao Su, Wei Hu, Bo Han, Simon S. Du, Justin Brody, and Don Perlis for helpful feedback and insightful discussions. \nAdditionally, we thank Houze Wang, Qin Yang, Xin Li, Guodong Zhang, Yixuan Ren, and Kai Wang for help with computing resources.\nThis research is partially performed while Jingling Li is a remote research intern at the Vector Institute and the University of Toronto.\nLi and Dickerson were supported by an ARPA-E DIFFERENTIATE Award, NSF CAREER IIS-1846237, NSF CCF-1852352, NSF D-ISN \\#2039862, NIST MSE \\#20126334, NIH R01 NLM-013039-01, DARPA GARD \\#HR00112020007, DoD WHS \\#HQ003420F0035, and a Google Faculty Research Award.\nBa is supported in part by the CIFAR AI Chairs program, LG Electronics, and NSERC.\nXu is supported by NSF CAREER award 1553284 and NSF III 1900933. Xu is also partially supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. \nZhang is supported by ODNI, IARPA, via the BETTER Program contract \\#2019-19051600005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. \n\n{\n\\small\n\\bibliographystyle{unsrtnat}\n\\typeout{}\n\n\\subsection{Related Work}\n\\label{sec:related}\nA commonly studied type of noisy label is the random label noise, where the noisy labels are drawn i.i.d.~from a uniform distribution.\nWhile neural networks trained with random labels easily overfit~\\citep{zhang2016understanding}, it has been observed that networks learn simple patterns first~\\cite{arpit2017closer}, converge faster on downstream tasks~\\cite{maennel2020neural}, and benefit from memorizing atypical training samples~\\cite{feldman2020neural}.\n\nAccordingly, many recent works on noisy label training are based on the assumption that when trained with noisy labels, neural networks would first fit to clean labels~\\cite{ lyu2019curriculum, han2018co, jiang2018mentornet,li2020dividemix, liu2020early} and learn useful feature patterns~\\cite{hendrycks2018using, lee2019robust, bahri2020deep, wu2020topological}.\nYet, these methods are often more effective on random label noise than on more realistic label noise (i.e., class-dependent and instance-dependent label noise).\n\nMany works on representation learning have investigated the features preferred by a network during training~\\cite{arpit2017closer, hermann2020shapes, shah2020pitfalls, sanyal2020benign}, and how to interpret or control the learned representations on clean data~\\cite{alain2016understanding, hermann2020shapes, hermann2019origins, montavon2018methods, yuan2020clime}. 
\nOur paper focuses more on the predictive power rather than the explanatory power in the learned representations.\nWe adapt the method in~\\cite{alain2016understanding} to measure the predictive power in representations, and we study learning from noisy labels rather than from a clean distribution.\n\nOn noiseless settings, prior works show that neural networks have the inductive bias to learn simple patterns~\\cite{arpit2017closer, hermann2020shapes, shah2020pitfalls, sanyal2020benign}. \nOur work formalizes what is considered as a simple pattern for a given network via architectural alignments, and we extend the definition of alignment in~\\cite{Xu2020What} to noisy settings.\n\n\\section{Introduction}\n\\label{sec:intro}\nSupervised learning starts with collecting labeled data.\nYet, high-quality labels are often expensive.\nTo reduce annotation cost, we collect labels from non-experts~\\cite{snow2008cheap, welinder2010multidimensional, yan2014learning, yu2018learning} or online queries~\\cite{blum2003noise, jiang2020beyond, liu2011noise}, which are inevitably noisy. \nTo learn from these noisy labels, previous works propose many techniques, including modeling the label noise~\\cite{natarajan2013learning, liu2015classification, yao2020dual}, designing robust losses~\\cite{ghosh2017robust, lyu2019curriculum, wang2019symmetric, zhang2018generalized}, adjusting loss before gradient updates~\\cite{arazo2019unsupervised, chang2017active, han2020sigua, hendrycks2018using, ma2018dimensionality, patrini2017making, reed2014training, song2019selfie, wang2017multiclass}, selecting trust-worthy samples~\\cite{lyu2019curriculum, song2019selfie, chen2019understanding, han2018co, jiang2018mentornet, malach2017decoupling, nguyen2019self, shen2019learning, wang2018iterative, yu2019does}, designing robust architectures~\\cite{bekker2016training, chen2015webly, goldberger2016training, han2018masking, jindal2016learning, li2020understanding, sukhbaatar2014training, yao2018deep}, applying robust regularization in training~\\cite{goodfellow2014explaining, hendrycks2019using, jenni2018deep, pereyra2017regularizing, tanno2019learning, zhang2017mixup}, using meta-learning to avoid over-fitting~\\cite{garcia2016noise, li2019learning}, and applying semi-supervised learning~\\cite{nguyen2019self, ding2018semi, li2020dividemix, liu2020early, yan2016robust} to learn better representations.\n\nWhile these methods improve some networks' robustness to noisy labels, we observe that their effectiveness depends on how well the network's architecture aligns with the target\/noise functions, and they are less effective when encountering more realistic label noise that is class-dependent or instance-dependent.\nThis motivates us to investigate an understudied topic: how the network's architecture impacts its robustness to noisy labels.\n\nWe formally answer this question by analyzing how a network's architecture aligns with the target function and the noise.\nTo start, we measure the robustness of a network via the predictive power in its learned representations (Definition~\\ref{def:predict_power}), as\nmodels with large test errors may still learn useful predictive hidden representations~\\cite{arpit2017closer, maennel2020neural}. \nIntuitively, the predictive power measures how well the representations can predict the target function. 
\nIn practice, we measure it by training a linear model on top of the learned representations using a small set of clean labels and evaluate the linear model's test performance~\\cite{alain2016understanding}.\n\nWe find that a network having a more aligned architecture with the target function is more robust to noisy labels due to its more predictive representations, whereas a network having an architecture more aligned with the noise function is less robust.\nIntuitively, a \\textit{good} alignment between a network's architecture and a function exists if the architecture can be decomposed into several modules such that each module can simulate one part of the function with a \\textit{small} sample complexity.\nThe formal definition of alignment is in Section~\\ref{subsec:alignment_formal}, adapted from~\\cite{Xu2020What}. \n\nOur proposed framework provides initial theoretical support for our findings on a simplified noisy setting (Theorem~\\ref{thm:main}).\nEmpirically, we validate our findings on synthetic graph algorithmic tasks by designing several variants of Graph Neural Networks (GNNs), whose theoretical properties and alignment with algorithmic functions have been well-studied~\\cite{Xu2020What, du2019graph, xu2020neural}. \nMany noisy label training methods are applied to image classification datasets, so we also validate our findings on image domains using different architectures.\n\nMost of our analysis and experiments use standard neural network training. \nInterestingly, we find similar results when using DivideMix~\\cite{li2020dividemix}, a SOTA method for learning with noisy labels:\nfor networks less aligned with the target function, the SOTA method barely helps and sometimes even hurts test accuracy; whereas for more aligned networks, it helps greatly.\n\nFor well-aligned networks, the predictive power of their learned representation could further improve the test performance of SOTA methods, especially on class-dependent or instance-dependent label noise where current methods on noisy label training are less effective. \nMoreover, on Clothing1M~\\cite{xiao2015learning}, a large-scale dataset with real-world label noise, the predictive power of a well-aligned network's learned representations could even outperform some sophisticated methods that use clean labels. \n\nIn summary, we investigate how an architecture's alignments with different (target and noise) functions affect the network's robustness to noisy labels, in which we discover that despite having large test errors, networks well-aligned with the target function can still be robust to noisy labels when evaluating their predictive power in learned representations. \nTo formalize our finding, we provide a theoretical framework to illustrate the above connections.\nAt the same time, we conduct empirical experiments on various datasets with various network architectures to validate this finding.\nBesides, this finding further leads to improvements over SOTA noisy-label-training methods on various datasets and under various kinds of noisy labels (Tables~\\ref{table:cifar10_sym}-\\ref{table:webvision} in Appendix~\\ref{suppsec:add_exp_results}).\n\\section{Theoretical Framework} \n\\label{sec:prelim}\nIn this section, we introduce our problem settings, give formal definitions for ``predictive power'' and ``alignment,'' and present our main hypothesis as well as our main theorem. 
\n\n\\subsection{Problem Settings}\nLet $\\mathcal{X}$ denote the input domain, which can be vectors, images, or graphs.\nThe task is to learn an underlying target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ on a noisy training dataset $S := \\lbrace (\\x_i, y_i) \\rbrace_{i \\in \\mathcal{I}} \\bigcup \\ \\lbrace (\\x_i, \\hat{y}_i) \\rbrace_{i \\in \\mathcal{I}'}$, \nwhere $y := f(\\x)$ denotes the true label for an input $\\x$, and $\\hat{y}$ denotes the noisy label. \nHere, the set $\\mathcal{I}$ contains indices with clean labels, and $\\mathcal{I}'$ contains indices with noisy labels.\nWe denote $\\frac{|\\mathcal{I}'|}{|S|}$ as the \\textit{noise ratio} in the dataset $S$.\nWe consider both regression and classification problems.\n\n\\textbf{Regression settings.} We consider a label space $\\mathcal{Y} \\subseteq \\mathbb{R}$ and two types of label noise: \na) \\textbf{additive label noise}~\\cite{hu2019simple}: $\\hat{y} := y + \\epsilon$, where $\\epsilon$ is a random variable independent from $\\x$; \nb) \\textbf{instance-dependent label noise}: $\\hat{y} := g(\\x)$ where $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ is a noise function dependent on the input.\n\n\\textbf{Classification settings.} We consider a discrete label space with $C$ classes: $\\mathcal{Y} = \\{1,2, \\cdots, C\\}$, and three types of label noise: \na) \\textbf{uniform label noise}: $\\hat{y} \\sim \\text{Unif}(1, C)$, where the noisy label is drawn from a discrete uniform distribution with values between $1$ and $C$, and thus is independent of the true label; \nb) \\textbf{flipped label noise}: $\\hat{y}$ is generated based on the value of the true label $y$ and does not consider other input structures;\nc) \\textbf{instance-dependent label noise}: $\\hat{y} := g(\\x)$ where $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ is a function dependent on the input $\\x$'s internal structures.\nPrevious works on noisy label learning commonly study uniform and flipped label noise. \nA few recent works~\\cite{cheng2020learning, wang2020proselflc} explore the instance-dependent label noise as it is more realistic.\n\n\\subsection{Predictive Power in Representations}\nA network's robustness is often measured by its test performance after trained with noisy labels. \nYet, since models with large test errors may still learn useful representations, we measure the robustness of a network by how good the learned representations are at predicting the target function --- the predictive power in representations.\nTo formalize this definition, we decompose a neural network $\\mathcal{N}$ into different modules $\\mathcal{N}_1, \\mathcal{N}_2, \\cdots$, where each module can be a single layer (e.g., a convolutional layer) or a block of layers (e.g., a residual block).\n\n\\begin{definition}\n\\label{def:predict_power}\n\\textit{(Predictive power).} Let $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ denote the underlying target function where the input $\\x \\in \\mathcal{X}$ is drawn from a distribution $\\mathcal{D}$. \nLet $\\mathcal{C} := \\lbrace (\\x_i, y_i) \\rbrace_{i=1}^{m}$ denote a small set of clean data (i.e., $y_i = f(\\x_i)$). \nGiven a network $\\mathcal{N}$ with $n$ modules $\\mathcal{N}_j$, let $h^{(j)}(\\x)$ denote the representation from module $\\mathcal{N}_j$ on the input $\\x$ (i.e., the output of $\\mathcal{N}_j$). 
\nLet ${L}$ denote the linear model trained on the clean set $\\mathcal{C}$, where we use $h^{(j)}(\\x_i)$ as the input and $y_i$ as the target value during training.\nThen the predictive power of representations from the module $\\mathcal{N}_j$ is defined as \n\\begin{align}\n \\pred_{j}(f, \\mathcal{N}, \\mathcal{C}) := \\mathop{\\mathbb{E}}_{\\x \\sim \\mathcal{D}} \n\\left[l\\left(f(\\x), {L}(h^{(j)}(\\x))\\right) \\right],\n\\end{align}\nwhere $l$ is a loss function used to evaluate the test performance on the learning task. \n\\end{definition}\n\\textbf{Remark.} Notice that a smaller $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C})$ indicates better predictive power; i.e., the representations are better at predicting the target function. \nWe empirically evaluate the predictive power using linear regression to obtain a trained linear model ${L}$, which avoids the issue of local minima as we are solving a convex problem; then we evaluate ${L}$ on the test set.\n\n\\subsection{Formalization of Alignment}\n\\label{subsec:alignment_formal}\nOur analysis stems from the intuition that a network would be more robust to noisy labels if it could learn the target function more easily than the noise function.\nThus, we use architectural alignment to formalize what is easy to learn for a given network. \\citet{Xu2020What} define the alignment between a network and a deterministic function via a sample complexity measure (i.e., the number of samples needed to ensure low test error with high probability) in a PAC learning framework (Definition 3.3 in~\\citet{Xu2020What}). \nIntuitively, a network aligns well with a function if each network module can easily learn one part of the function with a small sample complexity.\n\n\\begin{definition}\n\\label{def:alignment}\n\\textit{(Alignment, simplified based on~\\citet{Xu2020What}).} \nLet $\\mathcal{N}$ denote a neural network with $n$ modules $\\mathcal{N}_j$. \nGiven a function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ which can be decomposed into $n$ functions $f_j$ (e.g., $f(\\x) = f_1(f_2(...f_n(\\x)))$), the alignment between the network $\\mathcal{N}$ and $f$ is defined via \n\\begin{align}\n \\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) := \\max_{j} \\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta),\n\\end{align}\nwhere $\\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta)$ denotes the sample complexity measure for $\\mathcal{N}_j$ to learn $f_j$ with $\\epsilon$ precision at a failure\nprobability $\\delta$ under a learning algorithm $A_j$.\n\n\\textbf{Remark.} \nNotice that a smaller $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta)$ indicates better alignment between the network $\\mathcal{N}$ and the function $f$. \nIf $f$ is opaque or does not have a structural decomposition, we can choose $n=1$, and the definition of alignment degenerates into the sample complexity measure for $\\mathcal{N}$ to learn $f$.\nAlthough it is sometimes non-trivial to compute the exact alignment for a task without a clear algorithmic structure, we can break such a complicated task into sub-tasks, for which the sample complexity of learning each sub-task is easier to measure.
\n\n\\end{definition}\n\\citet{Xu2020What} further prove that better alignment implies better sample complexity and vice versa.\n\\begin{theorem} \n\\label{thm:xu2020}\n\\textit{(Informal;~\\cite{Xu2020What})} \nFix $\\epsilon$ and $\\delta$.\nGiven a target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and a network $\\mathcal{N}$, suppose $\\left\\lbrace x_i \\right\\rbrace_{i=1}^M$ are i.i.d. samples drawn from a distribution $\\mathcal{D}$, and let $y_i := f(x_i)$. \nThen $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) \\leq M$\nif and only if there exists a learning algorithm $A$ such that\n\\begin{align}\n \\mathbb{P}_{x \\sim \\mathcal{D}} \\left[ \\| f_{\\mathcal{N}, A}(x) - f(x) \\| \\leq \\epsilon \\right] \\geq 1- \\delta,\n\\end{align}\nwhere $f_{\\mathcal{N}, A}$ is the function generated by $A$ on the training data $\\left\\lbrace x_i, y_i \\right\\rbrace_{i=1}^M$.\n\\end{theorem}\n\\textbf{Remark.} Intuitively, a function $f$ (with a decomposition $\\{f_j\\}_j$) can be efficiently learned by a network $\\mathcal{N}$ (with modules $\\{\\mathcal{N}_j\\}_j$) iff each $f_j$ can be efficiently learned by $\\mathcal{N}_j$. \n\nWe further extend Definition~\\ref{def:alignment} to work with a random process $\\mathcal{F}$ (i.e., a set of all possible sample functions that describes the noisy label distribution). \n\\begin{definition}\n\\label{def:alignment_extend}\n\\textit{(Alignment, extension to various noise functions).} \nGiven a neural network $\\mathcal{N}$ and a random process $\\mathcal{F}$,\nfor each $f \\in \\mathcal{F}$, the alignment between $\\mathcal{N}$ and $f$ is measured via $\\max_{j} \\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta)$ based on Definition~\\ref{def:alignment}.\nThen the alignment between $\\mathcal{N}$ and $\\mathcal{F}$ is defined as\n\\[\n \\textit{Alignment}^{*}(\\mathcal{N}, \\mathcal{F}, \\epsilon, \\delta) := \\sup_{f\\in\\mathcal{F}} \\max_{j} \\mathcal{M}_{{A}_j}(f_j, \\mathcal{N}_j, \\epsilon, \\delta),\n \\vspace{-0.5em}\n\\]\nwhere $\\mathcal{N}$ can be decomposed differently for various $f$.\n\\end{definition}\n\n\\subsection{Better Alignment Implies Better Robustness (Better Predictive Power)}\nBuilding on the definitions of \\textit{predictive power} and \\textit{alignment}, we hypothesize that\na network better-aligned with the target function (smaller $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta)$) would learn more predictive representations (smaller $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C})$) when trained on a given noisy dataset.\n\\begin{hypothesis}\n\\label{thm:hypothesis} (Main Hypothesis). \nLet $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ denote the target function. 
\nFix $\\epsilon$, $\\delta$, a learning algorithm $A$, a noise ratio, and a noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ (which may be a drawn from a random process).\nLet S denote a noisy training dataset and $\\mathcal{C}$ denote a small set of clean data.\nThen for a network $\\mathcal{N}$ trained on $S$ with the learning algorithm $A$, \n\\begin{align}\n\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta)\\downarrow \\ \\Longrightarrow \\pred_{j}(f, \\mathcal{N}, \\mathcal{C})\\downarrow,\n\\end{align}\nwhere $j$ is selected based on the network's architectural alignment with the target function (for simplicity, we consider $j = n-1$ in this work).\n\\end{hypothesis}\n\nWe prove this hypothesis for a simplified case where the target function shares some common structures with the noise function (e.g., class-dependent label noise). \nWe refer the readers to Appendix~\\ref{suppapp:thm} for a full statement of our main theorem with detailed assumptions. \n\\begin{theorem}\n\\label{thm:main}\n\\textit{(Main Theorem; informal)} \nFor a target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and a noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$, consider a neural network $\\mathcal{N}$ well-aligned with $f$ such that $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C})$ is small when training $\\mathcal{N}$ on clean data (i.e., $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C}) < c$ for some small constant $c$). If there exists a function $h$ on the input domain $\\mathcal{X}$ such that $f$ and $g$ can be decomposed as follows: $\\forall x \\in \\mathcal{X}$, $f(\\x) = f_r(h(\\x))$ with $f_r$ being a linear function, and $g(\\x) = g_r(h(\\x))$ for some function $g_r$, then the representations learned by $\\mathcal{N}$ on the noisy dataset still have a good predictive power with $\\pred_{j}(f, \\mathcal{N}, \\mathcal{C}) < c$.\n\\end{theorem}\n\nWe further provide empirical support for our hypothesis via systematic experiments on various architectures, target and noise functions across both regression and classification settings.\n\\section{Experiments on Graph Neural Networks}\n\\label{sec:gnn}\nWe first validate our hypothesis on synthetic graph algorithmic tasks by designing GNNs with different levels of alignments to the underlying target\/noise functions. \nWe consider regression tasks.\nThe theoretical properties of GNNs and their alignment with algorithmic regression tasks are well-studied~\\cite{Xu2020What, du2019graph, xu2020neural, sato2019approximation}.\nTo start, we conduct experiments on different types of additive label noise and extend our experiments to instance-dependent label noise, which is closer to real-life noisy labels.\n\n\\textbf{Common Experimental Settings.} \nThe training and validation sets always have the same noise ratio, the percentage of data with noisy labels. \nWe choose mean squared error (MSE) and Mean Absolute Error (MAE) as our loss functions.\nDue to space limit, the results using MAE are in Appendix~\\ref{appsec:mae}.\nAll training details are in Appendix~\\ref{suppsec:gnn_training_details}.\nThe test error is measured by mean absolute percentage error (MAPE), a relative error metric.\n\n\\subsection{Background: Graph Neural Networks}\nGNNs are structured networks operating on graphs with MLP modules~\\cite{battaglia2018relational, scarselli2009graph, xu2018how, xu2018representation, xu2021optimization, liao2021information, cai2021graphnorm}. 
\nThe input is a graph ${\\mathcal{G}}=(V,E)$ where each node $u \\in V$ has a feature vector $\\x_u$, and we use $\\mathcal{N}(u)$ to denote the set of neighbors of $u$.\nGNNs iteratively compute the node representations via message passing: (1) the node representation $\\bm{h}_u$ is initialized as the node feature: $\\bm{h}_u^{(0)} = \\x_u$; (2) in iteration $k=1..K$, the node representations $\\bm{h}_u^{(k)}$ are updated by aggregating the neighboring nodes' representations with MLP modules~\\cite{gilmer2017neural}. \nWe can optionally compute a graph representation $\\bm{h}_{{\\mathcal{G}}}$ by aggregating the final node representations with another MLP module.\nFormally,\n\\begin{align}\n\\vspace{-0.5em}\n\\bm{h}_u^{(k)} &:= \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(k)} \\Big( \\bm{h}_u^{(k - 1)}, \\bm{h}_v^{(k - 1)} \\Big), \\\\\n\\bm{h}_{{\\mathcal{G}}} &:= \\text{MLP}^{(K+1)} \\Big( \\sum_{u \\in {\\mathcal{G}}} \\bm{h}_u^{(K)} \\Big).\n\\vspace{-0.5em}\n\\end{align}\nDepending on the task, the output is either the graph representation $\\bm{h}_{{\\mathcal{G}}}$ or the final node representations $\\bm{h}_u^{(K)}$.\nWe refer to the neighbor aggregation step for $\\bm{h}_u^{(k)} $ as \\emph{aggregation} and the pooling step for $\\bm{h}_{{\\mathcal{G}}}$ as \\emph{readout}. Different tasks require different aggregation and readout functions.\n\n\\subsection{Additive Label Noise}\n\\label{subsubsec:unstructured_noise}\n\n\\begin{figure}\n\\vspace{-1em}\n\\centering\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{max_sum_gnn_alignment.pdf}\n \\captionof{figure}{\\textbf{Max-sum GNN aligns well with the task maximum degree.} \n Max-sum GNN $h_{{\\mathcal{G}}}$ can be decomposed into two modules: $\\text{Module}^{(1)}$ and $\\text{Module}^{(2)}$, and the target function $f({\\mathcal{G}})$ can be similarly divided as $f({\\mathcal{G}}) = f_2(f_1({\\mathcal{G}}))$. As the nonlinearities of the target function have been encoded in the GNN's architecture, $f({\\mathcal{G}})$ can be easily learned by $h_{{\\mathcal{G}}}$: $f_1(\\cdot)$ can be easily learned by $\\text{Module}^{(1)}$, and $f_2(\\cdot)$ is the same as $\\text{Module}^{(2)}$. }\n \\label{fig:maxdeg_illustrate}\n\\end{minipage}\n\\hspace{1.0em}\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{GNN_max_sum_gaussian_mean10_reg_true_label.pdf}\n \\captionof{figure}{\\textbf{PCA visualization of hidden representations} from a max-sum GNN trained with additive label noise drawn from $\\mathcal{N}(10,\\,15)$ at 100\\% noise ratio. Each dot denotes a single training example and is colored with its true label. The x-axis and y-axis denote the projected values at the first and second principal components. As the colors change gradually from left to right, the largest principal component of the representations have a clear linear relationship with the true labels. }\n \\label{fig:maxdeg_graph_embed}\n\\end{minipage}\n\\end{figure}\n\n\\citet{hu2019simple} prove that MLPs are robust to additive label noises with zero mean, if the labels are drawn i.i.d.~from a Sub-Gaussian distribution. 
\n\\citet{wu2020optimal} also show that linear models are robust to zero-mean additive label noise even in the absence of explicit regularization.\nIn this section, we show that a GNN \\textit{well-aligned} to the target function not only achieves low test errors on additive label noise with zero-mean, but also learns \\textit{predictive} representations on noisy labels that are drawn from non-zero-mean distributions despite having large test error.\n\n\\textbf{Task and Architecture.} The task is to compute the maximum node degree:\n\\begin{align}\n\\label{eq:maxdeg}\nf({\\mathcal{G}}) &:= \\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} 1.\n\\end{align}\nWe choose this task as we know which GNN architecture aligns well with this target function---a 2-layer GNN with max-aggregation and sum-readout (max-sum GNN):\n\\begin{align}\n\\label{eq:max_sum_gnn}\n\\bm{h}_{{\\mathcal{G}}} &:= \\text{MLP}^{(2)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(1)} \\Big({\\bm{h}}_u, {\\bm{h}}_v\\Big) \\Big), \\\\\n{\\bm{h}}_u &:= \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(0)} \\Big({\\bm{x}}_u, {\\bm{x}}_v\\Big).\n\\end{align}\nFigure~\\ref{fig:maxdeg_illustrate} demonstrates how exactly the max-sum GNN aligns with $f({\\mathcal{G}})$.\nIntuitively, they are well-aligned as the MLP modules of max-sum GNN only need to learn simple constant functions to simulate $f({\\mathcal{G}})$. \nBased on Figure~\\ref{fig:maxdeg_illustrate}, we take the output of $\\text{Module}^{(2)}$ as the learned representations for max-sum GNNs when evaluating the predictive power.\n\n\\textbf{Label Noise.} We corrupt labels by adding independent noise ${\\epsilon}$ drawn from three distributions: Gaussian distributions with zero mean $\\mathcal{N}(0,\\,40)$ and non-zero mean $\\mathcal{N}(10,\\,15)$, and a long-tailed Gamma distribution with zero-mean $\\Gamma(2,\\,1\/15)-30$. \nWe also consider more distributions with non-zero mean in Appendix~\\ref{appsec:additive}. \n\n\\textbf{Findings.} In Figure~\\ref{fig:maxdeg_unstructured}, while the max-sum GNN is robust to \\textit{zero-mean} additive label noise (dotted yellow and purple lines), its test error is much higher under non-zero-mean noise $\\mathcal{N}(10,\\,15)$ (dotted red line) as the learned signal may be ``shifted'' by the non-centered label noise. \nYet, max-sum GNNs' learned representations under these three types of label noise all predict the target function well when evaluating their predictive powers with 10\\% clean labels (solid lines in Figure~\\ref{fig:maxdeg_unstructured}).\n\nMoreover, when we plot the representations (using PCA) from a max-sum GNN trained under 100\\% noise ratio with $\\epsilon \\sim \\mathcal{N}(10,\\,15)$, the representations indeed correlate well with true labels (Figure~\\ref{fig:maxdeg_graph_embed}). 
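\nA small sketch of this check is given below; it assumes the learned graph representations have been collected into a matrix together with the true labels, and the variable names and the use of scikit-learn are illustrative assumptions rather than the exact analysis code behind Figure~\\ref{fig:maxdeg_graph_embed}.\n\\begin{verbatim}\n# Illustrative sketch of the PCA visualization described above: project the\n# learned graph representations and measure how well the first principal\n# component tracks the true labels.\n# reps: (num_graphs, dim) array; true_labels: (num_graphs,) array\nimport numpy as np\nfrom sklearn.decomposition import PCA\n\ndef first_pc_vs_labels(reps, true_labels):\n    proj = PCA(n_components=2).fit_transform(reps)     # 2-D points for plotting\n    corr = np.corrcoef(proj[:, 0], true_labels)[0, 1]  # first PC vs. true labels\n    return proj, corr\n\\end{verbatim}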
\nThis explains why the representation learned under noisy labels can recover surprisingly good test performance despite that the original model has large test errors.\n\nThe predictive power of randomly-initialized max-sum GNNs is in Table~\\ref{table:unstructured_noise} (Appendix~\\ref{appsec:random}).\n\n\\begin{figure}\n\\vspace{-1em}\n\\centering\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{maxdeg_unstructured_noise_early_stop_reg_improve.pdf}\n \\captionof{figure}{\\textbf{Representations are very predictive for a GNN well-aligned with the target function under additive label noise.} On the maximum degree task, the representations' predictive power (solid lines) achieves low test MAPE ($< 5\\%$) across all three types of noise for the max-sum GNN, despite that the model's test MAPE (dotted lines) may be quite large (for non-zero-mean noise). We average the statistics over 3 runs using different random seeds.}\n \\label{fig:maxdeg_unstructured}\n\\end{minipage}\n\\hspace{1.0em}\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{maxdeg_structured_noise_reg_loss_improve_on_last_epoch.pdf}\n \\captionof{figure}{\\textbf{Representations are more predictive for GNNs more aligned with the target function, and less predictive for GNNs more aligned with the noise function}. \n On the maximum node feature task, while all three GNNs have large test errors under high noise ratios (dotted lines), the predictive power (solid lines) in representations from Deepset (yellow) and max-max GNN (red) greatly reduces the test MAPE. \n In contrast, the representation's predictive power for max-sum GNN barely reduces the model's test MAPE (tiny gap between dotted and solid purple lines).}\n \\label{fig:gnn_structured}\n\\end{minipage}\n\\vspace{-1em}\n\\end{figure}\n\n\\subsection{Instance-Dependent Label Noise}\n\\label{subsubsec:structured_noise}\nRealistic label noise is often instance-dependent. For example, an option is often incorrectly priced in the market, but its incorrect price (i.e., the noisy label) should depend on properties of the underlying stock.\nSuch instance-dependent label noise is more challenging, as it may contain \\textit{spurious signals} that are easy to learn by certain architectures. \nIn this section, we evaluate the representation' predictive power for three different GNNs trained with instance-dependent label noise.\n\n\\textbf{Task and Label Noise.}\nWe experiment with a new task---computing the maximum node feature: \n\\begin{align}\n\\label{eq:max_node_feature}\nf({\\mathcal{G}}) := \\text{max}_{u\\in {\\mathcal{G}}} ||{\\bm{x}}_u||_{\\infty}.\n\\end{align}\nTo create a instance-dependent noise, we randomly replace the label with the maximum degree:\n\\begin{align}\ng({\\mathcal{G}}) :=\\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} 1.\n\\end{align}\n\n\\textbf{Architecture.} We consider three GNNs: DeepSet~\\cite{zaheer2017deep}, max-max GNN, and max-sum GNN. \nDeepSet can be interpreted as a special GNN that does not use neighborhood information: \n\\begin{align}\n h_{{\\mathcal{G}}} = \\text{MLP}^{(1)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\text{MLP}^{(0)} \\Big({\\bm{x}}_u\\Big) \\Big).\n\\end{align}\nMax-max GNN is a 2-layer GNN with max-aggregation and max-readout. 
Max-sum GNN is the same as the one in the previous section.\n\nDeepSet and max-max GNN are well-aligned with the target function $f({\\mathcal{G}})$, as their MLP modules only need to learn simple linear functions. \nIn contrast, max-sum GNN is more aligned with $g({\\mathcal{G}})$ than $f({\\mathcal{G}})$ since neither its MLP modules nor its sum-aggregation module can efficiently learn the max operation in $f({\\mathcal{G}})$~\\cite{Xu2020What, xu2020neural}.\n\nMoreover, DeepSet cannot learn $g({\\mathcal{G}})$ as the model ignores \\textit{edge information}.\nWe take the hidden representations before the last MLP modules in all three GNNs and compare their predictive power.\n\n\\textbf{Findings.}\nWhile all three GNNs have large test errors under high noise ratios (dotted lines in Figure~\\ref{fig:gnn_structured}), the predictive power in representations from GNNs more aligned with the target function --- DeepSet (solid yellow line) and max-max GNN (solid red line) --- significantly reduces the original models' test errors by 10 and 1000 times, respectively. \nYet, for the max-sum GNN, which is more aligned with the noise function, training with noisy labels indeed destroys the internal representations such that they are no longer able to predict the target function --- its representations' predictive power (solid purple line) barely decreases the test error. \nWe also evaluate the predictive power of these three types of randomly-initialized GNNs, and the results are in Table~\\ref{table:structured_noise} (Appendix~\\ref{appsec:random}).\n\\section{Experiments on Vision Datasets}\nMany noisy label training methods are benchmarked on image classification; \nthus, we also validate our hypothesis on image domains.\nWe compare the representations' predictive power between MLPs and \\mbox{CNN-based} networks using 10\\% clean labels (all models are trained until they perfectly fit the noisy labels, i.e., achieve close to 100\\% training accuracy). \nWe further evaluate the predictive power in representations learned with SOTA methods.\nThe predictive power of networks that align well with the target function can further improve the SOTA method's test performance (Section~\\ref{sec:eval_sota}).\nThe final model also outperforms some sophisticated noisy label training methods that also use clean labels (Appendix~\\ref{appsec:compare_sota}). \nAll our experimental details are in Appendix~\\ref{suppsec:vision_training_details}.\n\\vspace{-0.5em}\n\\subsection{MLPs vs. \\mbox{CNN-based} networks}\nTo validate our hypothesis, we consider several target functions with different levels of alignment to MLPs and CNN-based networks. \nAll models in this section are trained with standard procedures without any robust training methods or robust losses.\n\\paragraph{Datasets and Label Noise.} We consider two types of target functions: one aligns better with \\mbox{CNN-based} models than MLPs, and the other aligns better with MLPs than \\mbox{CNN-based} networks.
\n\n\\textbf{1).} \\textbf{CIFAR-10} and \\textbf{CIFAR-100}~\\cite{krizhevsky2009learning} come with clean labels.\nTherefore, we generate two types of noisy labels following existing works: (1) \\textbf{uniform label noise} randomly replaces the true labels with all possible labels, and (2) \\textbf{flipped label noise} swaps the labels between similar classes (e.g., deer$\\leftrightarrow$horse, dog$\\leftrightarrow$cat) on CIFAR-10~\\cite{li2020dividemix}, or flips the labels to the next class on CIFAR-100~\\cite{natarajan2013learning}.\n\n\\textbf{2).} \\textbf{CIFAR-Easy} is a dataset modified on CIFAR-10 with labels generated by procedures in Figure~\\ref{fig:our_cifar10_demo} --- the class\/label of each image depends on the location of a special pixel.\nWe consider three types of noisy labels on CIFAR-Easy: (1) \\textbf{uniform label noise} and (2) \\textbf{flipped label noise} (described as above); and (3) \\textbf{instance-dependent label noise} which takes the original image classification label as the noisy label.\n\n\\begin{figure}\n\\centering\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{cat.pdf}\n \\captionof{figure}{\\textbf{Synthetic Labels on CIFAR-Easy.} For each image, we mask a pixel at the top left corner with pink color. Then the synthetic label for this image is the location of the pink pixel\/mask (i.e., the cat image in the above example has label 4).}\n \\label{fig:our_cifar10_demo}\n\\end{minipage}%\n\\hspace{1.0em}\n\\begin{minipage}{.45\\textwidth}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{sample_complexity.pdf}\n \\captionof{figure}{\\textbf{Sample complexity of MLPs and CNNs on CIFAR-Easy.} Both MLPs and CNNs can achieve 100\\% test accuracy given sufficient training examples, but MLPs need far fewer examples than CNNs and thus are more sample-efficient on CIFAR-Easy.}\n \\label{fig:sample_complex}\n\\end{minipage}\n\\vspace{-1em}\n\\end{figure}\n\n\\input{cifar_baseline_compare}\n\\input{our_cifar10_compare}\n\\input{baseline}\n\\input{our_cifar10}\n\n\\paragraph{Architectures.}\n\\label{exp:our_cifar10}\nOn CIFAR-10\/100, we evaluate the predictive power in representations for three architectures: 4-layer MLPs, 9-layer CNNs, and 18-layer PreAct ResNets~\\cite{he2016identity}.\nOn CIFAR-Easy, we compare between MLPs and CNNs. 
\nWe take the representations before the penultimate layer when evaluating the predictive power for these networks.\n\nAs the designs of \\mbox{CNN-based} networks (e.g., CNNs and ResNets) are similar to human perception system because of the receptive fields in convolutional layers and a hierarchical extraction of more and more abstracted features~\\cite{lecun1995convolutional, kheradpisheh2016deep}, \\mbox{CNN-based} networks are expected to \\textit{align better} with the target functions than MLPs on image classification datasets (e.g., \\mbox{CIFAR-10\/100}).\n\nOn the other hand, on CIFAR-Easy, while both CNNs and MLPs can generalize perfectly given sufficient training examples, MLPs have a much smaller sample complexity than CNNs (Figure~\\ref{fig:sample_complex}).\nThus, both MLP and CNN are \\textit{well-aligned} with the target function on CIFAR-Easy, but MLP is \\textit{better-aligned} than CNN according to Theorem~\\ref{thm:main}.\nMoreover, since the instance-dependent label on CIFAR-Easy \\ is the original image classification label, CNN is also \\textit{aligned} with this instance-dependent noise function on CIFAR-Easy.\n\n\\paragraph{Experimental Results.}\nFirst, we empirically verify our hypothesis that \\emph{networks better-aligned with the target function have more predictive representations}. \nAs expected, across most noise ratios on CIFAR-10\/100, the representations in \\mbox{CNN-based} networks (i.e., CNN and ResNet) are more predictive than those in MLPs (Figure~\\ref{fig:cifar_baseline_compare}) under both types of label noise.\nMoreover, the predictive power in representations learned by less aligned networks (i.e., MLPs) sometimes are even worse than the vanilla-trained models' test performance, suggesting that the noisy representations on less aligned networks may be more corrupted and less linearly separable.\nOn the other hand, across all three types of label noise on CIFAR-Easy, MLPs, which align better with the target function, have more predictive representations than CNNs (Figure~\\ref{fig:our_cifar10_compare}). \n\nWe also observe that \\emph{models with similar test performance could have various levels of predictive powers in their learned representations}. \nFor example, in Figure~\\ref{fig:our_cifar10_compare}, while the test accuracies of MLPs and CNNs are very similar on CIFAR-Easy\\ under flipped label noise (i.e., dotted purple and yellow lines overlap), the predictive power in representations from MLPs is much stronger than the one from CNNs (i.e., solid purple lines are much higher than yellow lines).\nThis also suggests that when trained with noisy labels, if we do not know which architecture is more aligned with the underlying target function, we can evaluate the predictive power in their representations to test alignment.\n\nWe further discover that \\textit{for networks well-aligned with the target function, its learned representations are more predictive when the noise function shares more mutual information with the target function}. \nWe compute the empirical mutual information between the noisy training labels and the original clean labels across different noise ratios on various types of label noise. 
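\nA small sketch of how such an empirical estimate can be computed from the two integer label arrays is shown below; the use of scikit-learn's discrete mutual information estimator is an illustrative assumption and may differ from the exact estimator used for Figure~\\ref{fig:mutual_info}.\n\\begin{verbatim}\n# Empirical mutual information between clean and noisy label arrays (in nats).\nimport numpy as np\nfrom sklearn.metrics import mutual_info_score\n\ndef empirical_label_mi(clean_labels, noisy_labels):\n    # builds the empirical joint distribution (contingency table) of the two\n    # discrete label arrays and returns their mutual information\n    return mutual_info_score(clean_labels, noisy_labels)\n\n# example: 40% uniform label noise over 10 classes\nrng = np.random.default_rng(0)\ny = rng.integers(0, 10, size=50000)\ny_noisy = np.where(rng.random(50000) < 0.4, rng.integers(0, 10, size=50000), y)\nprint(empirical_label_mi(y, y_noisy))\n\\end{verbatim}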
\nThe predictive power in representations improves as the mutual information increases (Figure~\\ref{fig:mutual_info} in Appendix~\\ref{suppsec:add_exp_results}).\nThis explains why the predictive power for a network is often higher under flipped noise than under uniform noise: at the same noise ratio, flipped noise has higher mutual information than uniform noise. \nMoreover, comparing across the three datasets in Figure~\\ref{fig:mutual_info}, we observe that the growth rate of a network's predictive power w.r.t. the mutual information depends on both the intrinsic difficulty of the learning task and the alignment between the network and the target function.\n\n\\vspace{-0.5em}\n\\subsection{Predictive Power in Representations for Models Trained with SOTA Methods}\n\\label{sec:eval_sota}\n\\vspace{-0.5em}\nAs the previous experiments use standard training procedures, we also validate our hypothesis on models learned with SOTA methods for noisy label training. \nWe evaluate the representations' predictive power for models trained with the SOTA method, DivideMix~\\cite{li2020dividemix}, which leverages techniques from semi-supervised learning to treat examples with unreliable labels as unlabeled data. \n\nWe compare (1) the test performance of models trained with standard procedures on noisy labels (denoted as \\textbf{Vanilla training}), (2) the SOTA method's test performance (denoted as \\textbf{DivideMix}), and (3) the predictive power in representations from the models trained with DivideMix in (2) (denoted as \\textbf{DivideMix's Predictive Power}).\n\nWe discover that \\textit{the effectiveness of DivideMix also depends on the alignment between the network and the target\/noise functions}.\nDivideMix only slightly improves the test accuracy of MLPs on CIFAR-10\/100 (Table~\\ref{table:cifar_baseline}), and DivideMix's predictive power does not improve the test performance of MLPs, either.\nIn Table~\\ref{table:mlp_ourcifar}, DivideMix also barely helps CNNs, as they are well-aligned with the instance-dependent noise, where the noisy label is the original image classification label.\n\nMoreover, we observe that \\textit{even for networks well-aligned with the target function, DivideMix may only slightly improve or not improve its test performance at all} (e.g., red entries of DivideMix on MLPs in Table~\\ref{table:mlp_ourcifar}).\nYet, the representations learned with DivideMix can still be very predictive: the predictive power can achieve over 50\\% improvements over DivideMix for CNN-based models on CIFAR-10\/100 (e.g., 80\\% flipped noise), and the improvements can be over 80\\% for MLPs on CIFAR-Easy\\ (e.g., 90\\% uniform noise).\n\nTables~\\ref{table:cifar_baseline} and~\\ref{table:mlp_ourcifar} show that the representations' predictive power on networks well-aligned with the target function could further improve SOTA test performance. \nAppendix~\\ref{appsec:compare_sota} further demonstrates that on large-scale datasets with real-world noisy labels, the predictive power in well-aligned networks could outperform sophisticated methods that also use clean labels (Table~\\ref{table:clothing} and Table~\\ref{table:webvision}). \n\\section{Concluding Remarks}\n\\vspace{-0.5em}\nThis paper is an initial step towards formally understanding how a network's architecture impacts its robustness to noisy labels. \nWe formalize our intuitions and hypothesize that a network better-aligned with the target function would learn more predictive representations under noisy label training.
\nWe prove our hypothesis on a simplified noisy setting and conduct systematic experiments across various noisy settings to further validate our hypothesis.\n\nOur empirical results along with Theorem~\\ref{thm:main} suggest that knowing more structures of the target function can help design more robust architectures. \nIn practice, although an exact mathematical formula for a decomposition of a given target function is often hard to obtain, a high-level decomposition of the target function often exists for real-world tasks \nand will be helpful in designing robust architectures --- a direction undervalued by existing works on learning with noisy labels. \n\n\n\\section{Additional Experimental Results}\n\\label{suppsec:add_exp_results}\nIn this section, we include additional experimental results for the predictive power in (a) representations from randomly initialized models (Appendix~\\ref{appsec:random}), (b) representations learned under different types off additive label noise (Appendix~\\ref{appsec:additive}) and (c) representations learned with a robust loss function (Appendix~\\ref{appsec:mae}). \nWe further demonstrates that the predictive power in well-aligned networks could even outperform sophisticated methods that also utilize clean labels (Appendix~\\ref{appsec:compare_sota}). \n\n\\subsection{Predictive Power of Randomly Initialized Models}\n\\label{appsec:random}\nWe first evaluate the predictive power of randomly initialized models (a.k.a., untrained models), and we compare their results with GNNs trained on clean data (a.k.a., 0\\% noise ratio).\n\n\\begin{table}[ht]\n \\vspace{-0.5em}\n\t\\centering\n\t\\small\n\t\\caption{\n\t\tPredictive power in representations from random and trained max-sum GNNs on the maximum degree task (Section~\\ref{subsubsec:unstructured_noise}). Notice that lower test MAPE denotes better test performance.\n\t\t}\n\t\\label{table:unstructured_noise}\n\t\\vskip 0.1in\n\t\\begin{tabular}\t{l |c|c }\n\t\t\\toprule\t \t\n\t\t\t\\multirow{2}{*}{\\bf Model} & \\multicolumn{2}{c}{\\bf Test MAPE} \\\\\n\t\t\t\\cmidrule{2-3}\n\t\t\t& Random & Trained \\\\\n\t\t\t\\midrule\t\t\t\n\t\t\tMax-sum GNN & 12.74 $\\pm$ 0.57 & 0.37 $\\pm$ 0.08 \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{-0.5em}\n\\end{table}\t\t\n\n\\begin{table}[ht]\n \\vspace{-0.5em}\n\t\\centering\n\t\\small\n\t\\caption{\n\t\tPredictive power in representations from various types of random and trained GNNs on the maximum node feature task (Section~\\ref{subsubsec:structured_noise}). Notice that lower test MAPE denotes better test performance.\n\t\t}\n\t\\label{table:structured_noise}\n\t\\vskip 0.1in\n\t\\begin{tabular}\t{l |c|c|c|c }\n\t\t\\toprule\t \t\n\t\t\t\\multirow{2}{*}{\\bf Model} & \\multicolumn{2}{c|}{\\bf Test MAPE} & \\multicolumn{2}{c}{\\bf Test MAPE (log scale)} \\\\\n\t\t\t\\cmidrule{2-5}\n\t\t\t& Random & Trained & Random & Trained \\\\\n\t\t\t\\midrule\t\t\t\n\t\t\tDeepSet & 5.14e-05 & 1.06e-05 & -4.29 & -4.97 \\\\ \\hline\n\t\t\tMax-max GNN & 0.794 & 0.0099 & -0.10 & -2.00 \\\\ \\hline\n\t\t\tMax-sum GNN & 54.28 & 3.08 & 1.73 & 0.488 \\\\ \n\t\t\\bottomrule\n\t\\end{tabular}\n\t\\vspace{-0.5em}\n\\end{table}\t\n\n\\subsection{Additive Label Noise on Graph Algorithmic Datasets}\n\\label{appsec:additive}\n We conduct additional experiments on additive label noise drawn from distributions with larger mean and larger variance. 
We consider four such distributions: Gaussian distributions $\\mathcal{N}(10,\\,30)$ and $\\mathcal{N}(20,\\,15)$, a long-tailed Gamma distribution with mean equal to $10$: $\\Gamma(2, \\dfrac{1}{15}) - 20$, and another long-tailed t-distribution with mean equal to $10$: $\\mathcal{T}(\\nu=1) + 10$. \n Figure~\\ref{fig:app_maxdeg_unstructured} demonstrates that for a GNN well aligned to the target function, its representations are still very predictive even under non-zero mean distributions with larger mean and large variance. \n\\input{app_gnn_unstructured}\n\n\\vspace{-1em}\n\\subsection{Training with a Robust Loss Function}\n\\label{appsec:mae}\nWe also train the models with a robust loss function--Mean Absolute Error (MAE), and we observe similar trends in the representations' predictive power as training the models using MSE (Figure~\\ref{fig:app_maxdeg_l1}).\n\\begin{align}\n \\text{loss} = \\sum_{i=1}^n |y_\\text{true} - y_\\text{pred}|.\n\\end{align}\n\n\\begin{figure*}[h!]\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{maxdeg_unstructured_noise_early_stop_l1_improve.pdf}\n \\vspace{-1em}\n \\caption{Test errors of max-sum GNNs on the maximum degree task with additive label noise}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.44\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{maxdeg_structured_noise_l1_loss_improve_on_last_epoch.pdf}\n \\caption{Test errors of three different GNNs on the maximum node feature task with instance-dependent label noise}\n \\end{subfigure}\n \\caption{\\textbf{Predictive power in representations trained with MAE}. For GNNs trained with MAE, the predictive power in representations exhibits similar trends as models trained with MSE. The robust loss function, MAE, is more helpful in learning more predictive representations under smaller noise ratios. \n }\n \\label{fig:app_maxdeg_l1}\n \\vspace{-0.25em}\n\\end{figure*}\n\n\\input{mutual_information}\n\n\\subsection{Comparing with Sophisticated Methods Using Clean Labels}\n\\label{appsec:compare_sota}\n\\input{cifar10_sym_noise}\n\\input{cifar10_asym_noise}\n\\input{cifar100_sym_noise}\n\\input{cifar100_asym_noise}\n\\input{clothing}\n\\input{webvision}\nIn previous experiments (section~\\ref{sec:eval_sota}), we have shown that the predictive power in well-aligned models could further improve the test performance of SOTA methods on noisy label training. 
\nAs we use a small set of clean labels to measure the predictive power, we also wonder how the improvements obtained by the predictive power compare with those of sophisticated methods that also use clean labels.\n\n\\subsubsection{Sophisticated Methods Using Clean Labels}\nIn our experiments, we consider the following methods that use clean labels: L2R~\\cite{ren2018learning}, MentorNet~\\cite{jiang2018mentornet}, SELF~\\cite{nguyen2019self}, GLC~\\cite{hendrycks2018using}, Meta-Weight-Net~\\cite{shu2019meta}, and IEG~\\cite{zhang2019ieg}.\nBesides, as the SOTA method, DivideMix, keeps dividing the training data into labeled and unlabeled sets during training, we also compare with directly using clean labels in DivideMix: we mark the small set of clean data as labeled data during the semi-supervised learning step in DivideMix.\nWe denote this method as \\textit{DivideMix w\/ Clean Labels (DwC)} and further measure the predictive power in representations learned by DwC.\n\n\\subsubsection{Datasets}\nWe conduct experiments on CIFAR-10\/100 with synthetic noisy labels and on two large-scale datasets with real-world noisy labels: Clothing1M and WebVision.\n\n\\textbf{Clothing1M}~\\cite{xiao2015learning} has real-world noisy labels with an estimated 38.5\\% noise ratio.\nThe dataset comes with a small set of human-verified training data, which we use as clean data.\nFollowing a recent method~\\cite{li2020dividemix}, we use 1000 mini-batches in each epoch to train models on Clothing1M. \n\n\\textbf{WebVision}~\\cite{li2017webvision} also has real-world noisy labels with an estimated 20\\% noise ratio. \nIt shares the same 1000 classes as ImageNet~\\cite{deng2009imagenet}.\nFor a fair comparison, we follow~\\cite{jiang2018mentornet} to create a mini WebVision dataset with the top 50 classes from the Google image subset of WebVision. \nWe train all models on the mini WebVision dataset and evaluate them on both the WebVision and ImageNet validation sets.\nWe select 100 images per class from the ImageNet training data as clean data.\n\n\\subsubsection{Experimental Settings}\nWe use the same architectures and hyperparameters as DivideMix: an 18-layer {PreAct ResNet~\\cite{he2016identity}} for \\mbox{CIFAR-10\/100}, a ResNet-50 pre-trained on ImageNet for Clothing1M, and Inception-ResNet-V2~\\cite{szegedy2016inception} for WebVision.\nWe use the test accuracy reported in the original papers whenever possible, and the accuracy for L2R~\\cite{ren2018learning} is from~\\cite{zhang2019ieg}. For IEG, we use the reported test accuracy obtained by ResNet-29 rather than WRN28-10, because ResNet-29 has a comparable number of parameters to the \\mbox{PreAct ResNet-18} we use.\n\nAs CIFAR-10\/100 do not have a validation set, we follow previous works to report the test accuracy averaged over the last 10 epochs: we measure the predictive power in representations for models from these epochs and report the averaged test accuracy.
\nFor Clothing1M and WebVision, we use the associated validation set to select the best model and measure the predictive power in its representations.\n\n\\subsubsection{Results}\nTables~\\ref{table:cifar10_sym}-\\ref{table:cifar100_asym} show the results on CIFAR-10 and CIFAR-100 with uniform and flipped label noise, where \\textbf{boldfaced numbers} denote test accuracies better than all methods we compared with.\nWe see that across different noise ratios on CIFAR-10\/100 with flipped label noise, the predictive power in representations remains roughly the same as the test performance of the model trained on clean data for a network well-aligned with the target function, which matches Lemma~\\ref{thm:main_extend}. For CIFAR-10 with uniform label noise, the predictive power in representations achieves better test performance using only 10 clean labels per class on most noise ratios; for CIFAR-100 with uniform label noise, the predictive power in representations could achieve better test performance using only 50 labels per class.\n\nMoreover, we observe that adding clean data to the labeled set in DivideMix (DwC) may barely improve the model's test performance when the noise ratio is small and under flipped label noise.\nAt 90\\% uniform label noise, DwC can greatly improve the model's test performance, and the predictive power in representations can achieve an even higher test accuracy with the same set of clean data used to train DwC.\n\nOn Clothing1M, we compare the predictive power in representations learned by DivideMix with existing methods that use the small set of human-verified data: CleanNet~\\cite{lee2018cleannet}, F-correction~\\cite{patrini2017making}, and Self-learning~\\cite{han2019deep}. \nAs these methods also use the clean subset to fine-tune the whole model, we follow similar procedures to fine-tune the model (trained by DivideMix) for 10 epochs and then select the best model based on the validation accuracy to measure the predictive power in its representations.\nThe predictive power in representations could further improve the test accuracy of DivideMix by around 6\\% and outperform IEG, CleanNet, and F-correction (Table~\\ref{table:clothing}). The improved test accuracy is also competitive with~\\cite{han2019deep}, which uses a much more complicated learning framework. \n\nOn WebVision, the predictive power also improves the model's test performance (Table~\\ref{table:webvision}). \nThe improvement is less significant than on Clothing1M, as the estimated noise ratio on WebVision (20\\%) is smaller than that of Clothing1M (38.5\\%).\n\\section{Experimental Details}\n\\label{suppsec:exp_details}\n\n\\subsection{Computing Resources}\nWe conduct all the experiments on one NVIDIA RTX 2080 Ti GPU, except for the experiment on the WebVision dataset~\\cite{li2017webvision} (Table~\\ref{table:webvision}), which uses 4 GPUs concurrently.\n\n\\subsection{Measuring the Predictive Power}\nWe use linear regression to train the linear model when measuring the predictive power in representations. For representations from all models except MLPs, we use ordinary least squares linear regression (OLS). When the learned representations are from MLPs, we use ridge regression with \\mbox{penalty = 1} since we find the linear models trained by OLS may easily overfit to the small set of clean labels.
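\nTo make this probing step concrete, a minimal sketch of one possible implementation is given below; the array names, the scikit-learn estimators, and the MAPE evaluation are illustrative assumptions rather than the exact code used in our experiments.\n\\begin{verbatim}\n# Linear probe on frozen representations (sketch).\n# feats_clean, y_clean: representations and labels of the small clean set\n# feats_test,  y_test : held-out test representations and true labels\nimport numpy as np\nfrom sklearn.linear_model import LinearRegression, Ridge\n\ndef predictive_power(feats_clean, y_clean, feats_test, y_test, use_ridge=False):\n    # OLS for most models; ridge (penalty = 1) when probing MLP representations\n    probe = Ridge(alpha=1.0) if use_ridge else LinearRegression()\n    probe.fit(feats_clean, y_clean)        # only the linear model is trained\n    pred = probe.predict(feats_test)\n    return np.mean(np.abs((y_test - pred) / y_test))  # e.g., MAPE for regression\n\\end{verbatim}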
\n\n\\subsection{Experimental Details on GNNs}\n\\label{suppsec:gnn_training_details}\n\\paragraph{Common settings.} In the generated datasets, each graph ${\\mathcal{G}}$ is sampled from Erd\\H{o}s-R\\'enyi random graphs with an edge probability uniformly chosen from $\\{0.1, 0.2, \\cdots, 0.9\\}$. This sampling procedure generates diverse graph structures. The training and validation sets contain 10,000 and 2,000 graphs respectively, and the number of nodes in each graph is randomly picked from $\\{20, 21, \\cdots, 40\\}$. The test set contains 10,000 graphs, and the number of nodes in each graph is randomly picked from $\\{50, 51, \\cdots, 70\\}$. \n\n\\subsubsection{Additive Label Noise} \n\\paragraph{Dataset Details.} In each graph, the node feature $\\x_u$ is a scalar randomly drawn from $\\{1, 2, \\cdots, 100\\}$ for all $u \\in {\\mathcal{G}}$.\n\n\\paragraph{Model and hyperparameter settings.} We consider a 2-layer GNN with max-aggregation and sum-readout (max-sum GNN):\n\\[\n\\bm{h}_G = \\text{MLP}^{(2)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(1)} \\Big({\\bm{h}}_u, {\\bm{h}}_v\\Big) \\Big), \\\\\n{\\bm{h}}_u = \\sum\\limits_{v \\in \\mathcal{N}(u)} \\text{MLP}^{(0)} \\Big({\\bm{x}}_u, {\\bm{x}}_v\\Big).\n\\]\nThe width of all MLP modules are set to $128$. The number of layers are set to $3$ for $\\text{MLP}^{(0)}$ and $\\text{MLP}^{(1)}$. The number of layers are set to $1$ for $\\text{MLP}^{(2)}$.\nWe train the max-sum GNNs with loss function MSE or MAE for 200 epochs. We use the Adam optimizer with default parameters, zero weight decay, and initial learning rate set to 0.001. The batch size is set to 64. We early-stop based on a noisy validation set. \n\n\\subsubsection{Instance-Dependent Label Noise.} \n\\paragraph{Dataset Details.} \nSince the task is to predict the maximum node feature and we use the maximum degree as the noisy label, the correlation between true labels and noisy labels are very high on large and dense graphs if the node features are uniformly sampled from $\\{1, 2, \\cdots, 100\\}$.\nTo avoid this, we use a two-step method to sample the node features.\nFor each graph ${\\mathcal{G}}$, we first sample a constant upper-bound $M_{\\mathcal{G}}$ uniformly from $\\{20, 21, \\cdots, 100\\}$. For each node $u \\in {\\mathcal{G}}$, the node feature ${\\bm{x}}_u$ is then drawn from $\\{1, 2, \\cdots, M_{\\mathcal{G}}\\}$.\n\n\\paragraph{Model and hyperparameter settings.} We consider a 2-layer GNN with max-aggregation and sum-readout (max-sum GNN), a 2-layer GNN with max-aggregation and max-readout (max-max GNN), and a special GNN (DeepSet) that does not use edge information: \n\\[ h_G = \\text{MLP}^{(1)} \\Big( \\text{max}_{u \\in {\\mathcal{G}}} \\ \\text{MLP}^{(0)} \\Big({\\bm{x}}_u\\Big) \\Big). \\] \nThe width of all MLP modules are set to $128$. The number of layers is set to $3$ for $\\text{MLP}^{(0)}, \\text{MLP}^{(1)}$ in max-max and max-sum GNNs and for $\\text{MLP}^{(0)}$ in DeepSet. The number of layers is set to $1$ for $\\text{MLP}^{(2)}$ in max-max and max-sum GNNs and for $\\text{MLP}^{(1)}$ in DeepSet.\nWe train these GNNs with MSE or MAE as the loss function for 600 epochs. We use the Adam optimizer with zero weight decay. \nWe set the initial learning rate to $0.005$ for DeepSet and $0.001$ for max-max GNNs and max-sum GNNs. 
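\nFor concreteness, a minimal PyTorch-style sketch of the max-sum GNN defined above is given below; it assumes a dense binary adjacency matrix and a single graph per forward pass, and all names other than the 128-unit MLP widths are illustrative assumptions rather than the exact implementation.\n\\begin{verbatim}\n# Sketch of the max-sum GNN (dense adjacency, single graph).\nimport torch\nimport torch.nn as nn\n\ndef mlp(din, dout, layers, width=128):\n    mods, d = [], din\n    for _ in range(layers - 1):\n        mods += [nn.Linear(d, width), nn.ReLU()]\n        d = width\n    return nn.Sequential(*mods, nn.Linear(d, dout))\n\nclass MaxSumGNN(nn.Module):\n    def __init__(self, in_dim=1, hid=128):\n        super().__init__()\n        self.mlp0 = mlp(2 * in_dim, hid, 3)  # MLP^(0) on pairs (x_u, x_v)\n        self.mlp1 = mlp(2 * hid, hid, 3)     # MLP^(1) on pairs (h_u, h_v)\n        self.mlp2 = mlp(hid, 1, 1)           # MLP^(2) readout head\n\n    def forward(self, x, adj):\n        # x: (n, in_dim) node features; adj: (n, n) binary adjacency matrix\n        n = x.size(0)\n        def pairs(z):  # concatenate [z_u, z_v] for every ordered pair (u, v)\n            return torch.cat([z.unsqueeze(1).expand(n, n, -1),\n                              z.unsqueeze(0).expand(n, n, -1)], dim=-1)\n        mask = adj.unsqueeze(-1)\n        h = (self.mlp0(pairs(x)) * mask).sum(dim=1)  # h_u = sum_v MLP0(x_u, x_v)\n        s = (self.mlp1(pairs(h)) * mask).sum(dim=1)  # s_u = sum_v MLP1(h_u, h_v)\n        return self.mlp2(s.max(dim=0).values)        # h_G = MLP2(max_u s_u)\n\\end{verbatim}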
\nThe models are selected from the last epoch so that they can overfit the noisy labels more.\n\n\\subsection{Experimental Details on Vision Datasets}\n\\label{suppsec:vision_training_details}\n\\paragraph{Neural Network Architectures.} Table~\\ref{tab:models-cnn} describes the 9-layer CNN~\\cite{miyato2018virtual} used on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}, which contains 9 convolutional layers and 19 trainable layers in total. Table~\\ref{tab:models-mlp} describes the 4-layer MLP used on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}, which has 4 linear layers and ReLU as the activation function.\n\n\\begin{table}[ht]\n \\centering\n \\begin{minipage}[t]{0.45\\textwidth}\n \\caption{9-layer CNN on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}.}\n \\label{tab:models-cnn}\n \\vskip 0.1in\n \\centering\\small\n \\begin{tabular*}{\\textwidth}{l@{\\extracolsep{\\fill}}c}\n \\toprule\n \\multirow{1}*{Input}\n & 32$\\times$32 Color Image \\\\\n \\midrule\n \\multirow{5}*{Block 1}\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & MaxPool(2$\\times$2, stride = 2) \\\\\n & Dropout(p = 0.25) \\\\\n \\midrule\n \\multirow{5}*{Block 2}\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & MaxPool(2$\\times$2, stride = 2) \\\\\n & Dropout(p = 0.25) \\\\\n \\midrule\n \\multirow{4}*{Block 3}\n & Conv(3$\\times$3, 512)-BN-LReLU \\\\\n & Conv(3$\\times$3, 256)-BN-LReLU \\\\\n & Conv(3$\\times$3, 128)-BN-LReLU \\\\\n & GlobalAvgPool(128) \\\\\n \\midrule\n Score & Linear(128, 10 or 100) \\\\\n \\bottomrule\n \\end{tabular*}\n \n \\end{minipage}\n \\hfill\n \\begin{minipage}[t]{0.45\\textwidth}\n \\caption{4-layer FC on CIFAR-Easy\\ and \\mbox{CIFAR-10\/100}.}\n \\label{tab:models-mlp}\n \\vskip 0.1in\n \\begin{tabular*}{\\textwidth}{lc}\n \\toprule\n \\multirow{1}*{Input}\n & 32$\\times$32 Color Image \\\\\n \\midrule\n \\multirow{3}*{Block 1}\n & Linear(32$\\times$32$\\times$3, 512)-ReLU \\\\\n & Linear(512, 512)-ReLU \\\\\n & Linear(512, 512-ReLU \\\\\n \\midrule\n Score & Linear(512, 10 or 100) \\\\\n \\bottomrule\n \\end{tabular*}\n \\end{minipage}\n \\vspace{-1ex}\n\\end{table}\n\n\\paragraph{Vanilla Training.} For models trained with standard procedures, we use SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. For ResNets and CNNs, the initial learning rate is set to $0.1$ on CIFAR-10\/100 and $0.01$ on CIFAR-Easy. For MLPs, the initial learning rate is set to $0.01$ on CIFAR-10\/100 and $0.001$ on CIFAR-Easy. The initial learning rate is multiplied by 0.99 per epoch on CIFAR-10\/100, and it is decayed by 10 after 150 and 225 epochs on CIFAR-Easy.\n\n\\paragraph{Train Models with SOTA Methods.} We use the same set of hyperparameter settings from DivideMix~\\cite{li2020dividemix} to obtain corresponding trained models and measure the predictive power in representations from these models.\n\nOn CIFAR-10\/100 with flipped noise, we only use the small set of clean labels to train the linear model in our method, and the clean subset is randomly selected from the training data.\nOn CIFAR-10\/100 with uniform noise, the clean labels we use are from examples with highest model uncertainty~\\cite{lewis1994sequential}. \nBesides the clean set, we also use randomly-sampled training examples labeled with the model's original predictions to train the linear model. 
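A minimal sketch of how such a probe training set could be assembled is given below (our illustration; the uncertainty score and the exact sampling procedure used in the experiments may differ, and all names are hypothetical).
\begin{verbatim}
# Sketch: take the clean-label budget from the most uncertain training
# examples (here: highest predictive entropy) and add randomly chosen
# examples carrying the model's own predictions as labels.
import numpy as np

def build_probe_set(probs, clean_labels, n_clean, n_pseudo, seed=0):
    rng = np.random.default_rng(seed)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    clean_idx = np.argsort(-entropy)[:n_clean]   # most uncertain examples
    rest = np.setdiff1d(np.arange(len(probs)), clean_idx)
    pseudo_idx = rng.choice(rest, size=n_pseudo, replace=False)
    idx = np.concatenate([clean_idx, pseudo_idx])
    labels = np.concatenate([clean_labels[clean_idx],
                             probs[pseudo_idx].argmax(axis=1)])
    return idx, labels
\end{verbatim}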
\nWe use 5,000 such samples under 20\%, 40\%, 50\%, and 80\% noise ratios, and we use 500 such samples under the 90\% noise ratio. \n\section{Theoretical Results}\n\label{suppapp:thm}\nWe first provide a formal version of Theorem~\ref{thm:xu2020} based on~\cite{Xu2020What}. Theorem~\ref{thm:xu2020} connects a network's architectural alignment with the target function to its learned representations' predictive power \textit{when trained on clean data}. \n\begin{theorem} \n\label{thm:main_formal}\n{(Better alignment implies better predictive power on clean training data; \cite{Xu2020What}). } \nFix $\epsilon$ and $\delta$.\nGiven a target function $f: \mathcal{X} \rightarrow \mathcal{Y}$ that can be decomposed into functions $f_1, ..., f_n$ and given a network $\mathcal{N}$, where $\mathcal{N}_1, ..., \mathcal{N}_n$ are $\mathcal{N}$'s modules in sequential order,\nsuppose the training dataset $\mathcal{S} := \left\lbrace \x_j, y_j \right\rbrace_{j=1}^M$ contains $M$ i.i.d. samples drawn from a distribution with clean labels $y_j := f(\x_j)$. \nThen under the following assumptions, $\textit{Alignment}(\mathcal{N}, f, \epsilon, \delta) \leq M$\nif and only if there exists a learning algorithm $A$ such that the network's last module $\mathcal{N}_n$'s representations learned by $A$ on the training data $\mathcal{S}$ have predictive power $\pred_{n}(f, \mathcal{N}, \mathcal{S}) \leq \epsilon$ with probability $1-\delta$.\n \vspace{0.05in} \\\n Assumptions: \vspace{0.05in} \\\n \textbf{(a)} We train each module $\mathcal{N}_i$ sequentially: for each $\mathcal{N}_i$, the input samples are $\{h^{(i-1)}(\x_j),f_i(h^{(i-1)}(\x_j))\}_{j=1}^M$ with $h^{(0)}(\x) =\x$.\n Notice that each input $h^{(i-1)}(\x_j)$ is the output from the previous modules, but its label is generated by the function $f_{i}$ on $h^{(i-1)}(\x_j)$. \\\n \textbf{(b)} For the clean training set $\mathcal{S}$, let $\mathcal{S}' := \left\lbrace \hat{\x}_j, y_j \right\rbrace_{j=1}^M$ denote the perturbed training data ($\hat{\x}_j$ and $\x_j$ share the same label $y_j$).\n Let $f_{\mathcal{N}, A}$ and $f'_{\mathcal{N}, A}$ denote the functions obtained by the learning algorithm $A$ operating on $\mathcal{S}$ and $\mathcal{S}'$ respectively.\n Then for any $\x \in \mathcal{X}$, $\| f_{\mathcal{N}, A}(\x) - f'_{\mathcal{N}, A}(\x) \| \leq L_0 \cdot \max_{\x_j \in \mathcal{S}} \| \x_j - \hat{\x}_j \|$, for some constant $L_0$. \\\n \textbf{(c) } For each module $\mathcal{N}_i$, let $\hat{f}_i$ denote its corresponding function learned by the algorithm $A$. Then for any $\x, \hat{\x} \in \mathcal{X}$, $ \| \hat{f}_i (\x) - \hat{f}_i (\hat{\x}) \| \leq L_1 \| \x - \hat{\x} \|$, for some constant $L_1$. \n\end{theorem}\n\nWe have empirically shown that Theorem~\ref{thm:main_formal} also holds when we train the models on noisy data. \nMeanwhile, we prove an analogue of Theorem~\ref{thm:main_formal} (Theorem~\ref{thm:main_extend} below) for a simplified noisy setting where the target function and noise function share a common feature space, but have different prediction rules. \nFor example, the target function and noise function share the same feature space under flipped label noise (in the classification setting). Yet, their mappings from the learned features to the associated labels are different.\n\n\begin{theorem}\n\label{thm:main_extend}\n{(Better alignment implies better predictive power on noisy training data). } \nFix $\epsilon$ and $\delta$. 
\nLet $\\left\\lbrace \\x_j \\right\\rbrace_{j=1}^M$ be i.i.d. samples drawn from a distribution. \nGiven a target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and a noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$, let $y := f(\\x)$ denote the true label for an input $\\x$, and $\\hat{y} := g(\\x)$ denote the noisy label of $\\x$. \nLet $\\hat{\\mathcal{S}} := \\lbrace (\\x_j, y_j) \\rbrace_{j=1}^N \\bigcup \\ \\lbrace (\\x_j, \\hat{y}_j) \\rbrace_{j=N+1}^M$ denote a noisy training set with $M-N$ noisy samples for some $N \\in \\{1,2,\\cdots,M\\}$.\nGiven a network $\\mathcal{N}$ with modules $\\mathcal{N}_i$, suppose $\\mathcal{N}$ is well-aligned with the target function $f$ (i.e., the alignment between $\\mathcal{N}$ and $f$ is less than $M$ --- $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) \\leq M$). \nThen under the same assumptions in Theorem~\\ref{thm:main_formal} and the additional assumptions below, there exists a learning algorithm $A$ and a module $\\mathcal{N}_i$ such that when training the network $\\mathcal{N}$ on the noisy data $\\hat{\\mathcal{S}}$ with algorithm $A$, the representations from its $i$-th module have predictive power $\\pred_{i}(f, \\mathcal{N}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$, where $\\mathcal{C}$ is a small set of clean data with a size greater than the number of dimensions in the output of module $\\mathcal{N}_i$. \n\n\\vspace{0.05in} \nAdditional assumptions (a simplified noisy setting): \\vspace{0.05in} \\\\\n\\textbf{(a)} There exists a function $h$ on the input domain $\\mathcal{X}$ such that the target function $f: \\mathcal{X} \\rightarrow \\mathcal{Y}$ and the noise function $g: \\mathcal{X} \\rightarrow \\mathcal{Y}$ can be decomposed as: $f(\\x) = f_r(h(\\x))$ with $f_r$ being a linear function and $g(\\x) = g_r(h(\\x))$ for some function $g_r$. \\\\\n\\textbf{(b)} $f_r$ is a linear map from a high-dimensional space to a low-dimensional space. \\\\ \n\\textbf{(c)} The loss function used in measuring the predictive power is mean squared error (denoted as $\\|\\cdot\\|$) . \n\\end{theorem}\n\n\\textbf{Remark.} Theorem~\\ref{thm:main_extend} suggests that the representations' predictive power for models well aligned with the target function should remain roughly similar across different noise ratios under flipped label noise. Empirically, we observe similar phenomenons in Figures~\\ref{fig:cifar_baseline_compare}-\\ref{fig:our_cifar10_compare}, and in Tables~\\ref{table:cifar10_asym} and~\\ref{table:cifar100_asym}. \nSome discrepancy between the experimental and theoretical results could exist under vanilla training as Theorem~\\ref{thm:main_extend} assumes sequential training, which is different from standard training procedures.\n\n\\paragraph{Proof of Theorem~\\ref{thm:main_extend}.} According to the definition of alignment in Definition~\\ref{def:alignment}, since $\\textit{Alignment}(\\mathcal{N}, f, \\epsilon, \\delta) \\leq M$ and $f(\\x) = f_r(h(\\x))$, we can find a sub-structure (denoted as $\\mathcal{N}_{sub}$) in the network $\\mathcal{N}$ with sequential modules $\\{\\mathcal{N}_1, \\cdots, \\mathcal{N}_i\\}$ such that $\\mathcal{N}_{sub}$ can efficiently learn the function $h$ (i.e., the sample complexity for $\\mathcal{N}_{sub}$ to learn $h$ is no larger than $M$). 
\nAccording to Theorem~\\ref{thm:main_formal}, applying sequential learning to train $\\mathcal{N}_{sub}$ with labels $h(\\x)$, the representations of $\\mathcal{N}_{sub}$ will have predictive power $\\pred_{i}(h, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$. \n\nSince for each input $\\x$ in the noisy training data $\\hat{\\mathcal{S}}$, its label can be written as $f_r(h(\\x))$ (if it is clean) or $g_r(h(\\x))$ (if it is noisy), when the network $\\mathcal{N}$ is trained on $\\hat{\\mathcal{S}}$ using sequential learning, its sub-structure $\\mathcal{N}_{sub}$ can still learn $h$ efficiently (i.e., $\\mathcal{M}_{A} (h, \\mathcal{N}_{sub}, \\epsilon, \\delta) \\leq M$ for some learning algorithm $A$). Thus, the representations learned from the noisy training data $\\hat{\\mathcal{S}}$ can still be very predictive (i.e., $\\pred_{i}(h, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$). \n\nSince $f_r$ is a linear map from a high-dimensional space to a low-dimensional space, and the clean data $\\mathcal{C}$ has enough samples to learn $f_r$ ($\n|\\mathcal{C}|$ is larger than the input dimension of $f_r$), the linear model $L$ learned by linear regression can also generalize $f_r$ (since linear regression has a closed form solution in this case as the problem is over-complete).\nTherefore, as $\\pred_{i}(h, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$, $\\pred_{i}(f, \\mathcal{N}_{sub}, \\mathcal{C}) \\leq \\epsilon$ also holds.\nNotice that $\\pred_{i}(f, \\mathcal{N}_{sub}, \\mathcal{C}) = \\pred_{i}(f, \\mathcal{N}, \\mathcal{C})$ as $\\mathcal{N}_{i}$ is also the $i$-th module in $\\mathcal{N}$.\nHence, we have shown that there exist some module $\\mathcal{N}_i$ such that $\\pred_{i}(f, \\mathcal{N}, \\mathcal{C}) \\leq \\epsilon$ with probability $1-\\delta$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\n\nArea and orientation preserving diffeomorphisms of the standard 2-disc, referred to as symplectomorphisms of $\\disc$, allow decompositions in terms of \\emph{positive} twist diffeomorphisms.\nUsing the latter decomposition we utilize the Conley index theory of discrete braid classes as introduced in \\cite{BraidConleyIndex,GVV-pre} in order to obtain a\nMorse type forcing theory of periodic points: a priori information about periodic points determines a mapping class which may force additional periodic points.\n\n\n\n{\\let\\thefootnote\\relax\\footnotetext{* \\textit{Institute of Computer Science and Computational Mathematics, Jagiellonian University, Krak\\'ow}}}\n\n{\\let\\thefootnote\\relax\\footnotetext{** \\textit{Department of Mathematics, VU University, Amsterdam}}}\n\n\n\\newpage\n\n\n\\begin{sloppypar}\n\n\\section{Prelude}\nLet $\\disc\\subset\\plane$ be the standard unit 2-disc with coordinates $z=(x,y)\\in \\plane$, and let $\\omega=dx\\wedge dy$ be the standard area 2-form on $\\plane$.\nA diffeomorphism $F\\colon\\disc \\to\\disc$ is said to be symplectic if $F^*\\omega = \\omega$ --- area and orientation preserving ---\nand is referred to as a \\emph{symplectomorphism} of $\\disc$.\nSymplectomorphisms of the 2-disc form a group which is denoted by $\\Symp(\\disc)$. 
\nA diffeomorphism $F$\nis \emph{Hamiltonian} if it is given as the time-1 mapping of a Hamiltonian system\n\begin{equation}\label{HE} \n  \begin{aligned}\n \dot{x}&=\partial_y H(t,x,y); \\ \dot{y}&=-\partial_xH(t,x,y),\n  \end{aligned}\n\end{equation}\nwhere $ H \in C^{\infty}(\rr\times \disc)$ is the Hamiltonian function, with the additional property that $H(t, \cdot)|_{\partial \disc} = const.$ for all $ t \in \rr$. The set of Hamiltonians satisfying these requirements is denoted by ${\mathcal{H}}(\disc)$ and\nthe associated flow of \eqref{HE} is denoted by $\psi_{t,H}$. The group $\Ham(\disc)$ signifies the group of Hamiltonian diffeomorphisms of $\disc$.\n Hamiltonian diffeomorphisms are symplectic by construction. For the 2-disc these notions are equivalent, i.e. $\Symp(\disc) = \Ham(\disc)$, and we may therefore study Hamiltonian systems in order to prove properties about symplectomorphisms of $\disc$, cf.\ \cite{Boyland2005}, and Appendix \ref{sec:sympMCG}.\n\nA subset $B\subset \disc$ is an invariant set for $F$ if $F(B) = B$.\n We are interested in finite invariant sets.\n Such invariant sets consist of periodic points, i.e.\npoints $z\in\disc$ such that $F^k(z) = z$ for some $k\ge 1$.\nSince $\partial \disc$ is also invariant, periodic points are either in ${\rm int~}\disc$, or on $\partial \disc$.\n\nThe main result of this paper concerns a\n \emph{forcing} problem. Given a finite invariant set $B\subset \inter \disc$ for $F \in \Symp(\disc)$, do there \nexist additional periodic points? More generally, does there exist a finite invariant set $A\subset\inter \disc$, with $A\cap B=\varnothing$? This is much like the analogous question for discrete dynamics on an interval, where \nthe famous Sharkovskii theorem establishes a forcing order among periodic points based on their period.\nIn the 2-dimensional situation such an order is much harder to establish, cf.\ \cite{Boyland2005}.\nThe main results are based on braiding properties of periodic points and are stated and proved in Section \ref{sec:main1}.\nThe braid invariants introduced in this paper add additional information to existing invariants in the area-preserving case.\nFor instance, Example \ref{exm:exist3} describes a braid class which forces additional invariant sets solely in the area-preserving case,\nhence extending the non-symplectic methods described in \cite{JiangZheng}.\n\n\nThe theory in this paper can be further\ngeneralized to include symplectomorphisms of bounded subsets of $\rr^2$ with smooth boundary, e.g.\ annuli, and symplectomorphisms of $\rr^2$.\n\n\vskip.3cm\n\n{\bf Acknowledgements:} AC was supported by the Foundation for Polish Science under the MPD Programme `Geometry\nand Topology in Physical Models', co-financed by the EU European Regional Development Fund,\nOperational Program Innovative Economy 2007-2013.\n\n\section{Mapping class groups}\n\label{sec:discinv}\nA priori knowledge of finite invariant sets $B$ for $F$\ncategorizes mappings into so-called mapping classes.\nTraditionally mapping class groups are defined for orientation preserving homeomorphisms,\n cf.\ \cite{G3} for an overview. 
\nDenote by $\\Homeo^+(\\disc)$ the space of orientation preserving homeomorphisms and by $\\Homeo^+_0(\\disc)$ the homeomorphisms that leave the boundary point wise invariant.\nTwo homeomorphisms $F,G\\in \\Homeo^+(\\disc)$ are isotopic if there exists an isotopy $\\phi_t$, with $\\phi_t\\in\\Homeo^+(\\disc)$ for all $t\\in [0,1]$,\nsuch that $\\phi_0=F$ and $\\phi_1 = G$.\nThe equivalence classes in $\\pi_0\\bigl(\\Homeo^+(\\disc)\\bigr) = \\Homeo^+(\\disc)\/\\!\\!\\sim$ are called \\emph{mapping classes} and form a group under composition. The latter is referred to as the \\emph{mapping class group} of the 2-disc and is denoted by $\\Mod(\\disc)$. \nFor homeomorphisms that leave the boundary point wise invariant the mapping class group is denoted by $\\Mod_0(\\disc) = \\pi_0\\bigl(\\Homeo^+_0(\\disc)\\bigr)$. In Appendix \\ref{sec:MCGclbr} we provide proofs of the relevant facts about mapping class groups.\n\\begin{prop}\n\\label{prop:MCG11}\nBoth mapping class groups $\\Mod(\\disc)$ and $\\Mod_0(\\disc)$ are trivial.\n\\end{prop}\nThe mapping class groups $\\Mod(\\disc)$ and $\\Mod_0(\\disc)$ may also be defined using diffeomorphisms, cf.\\ Appendix \\ref{sec:MCGclbr}. \nIn Proposition \\ref{prop:mapclass1}, we show that $\\pi_0\\bigl(\\Symp(\\disc)\\bigr) = \\Mod(\\disc)$ and in Propositon\n \\ref{prop:mapclass3} we show that $\\Ham(\\disc) = \\Symp(\\disc)$, which implies that every homeomorphism, or diffeomorphism is isotopic to\n a Hamiltonian symplectomorphism.\n\n\\begin{prop}\n\\label{prop:MCG11a}\n$\\pi_0\\bigl(\\Symp(\\disc)\\bigr) = \\pi_0\\bigl(\\Ham(\\disc)\\bigr) =\\Mod(\\disc)\\cong 1$.\n\\end{prop}\n\n\n\nMore refined information about mapping classes is obtained by considering finite invariant sets $B$.\nThis leads to the notion of the \\emph{relative mapping classes}. \nTwo homeomorphisms $F,G\\in \\Homeo^+(\\disc)$ are of the same mapping class \\emph{relative to} $B$ if there\nexists an isotopy $\\phi_t$, with $\\phi_t\\in\\Homeo^+(\\disc)$ and $\\phi_t(B) = B$ for all $t\\in [0,1]$, such that\n$\\phi_0=F$ and $\\phi_1=G$. The subgroup of such homeomorphisms is denoted by $\\Homeo^+(\\disc\\rel B)$ and $\\Homeo^+_0(\\disc\\rel B)$\nin case $\\partial\\disc$ is point wise invariant.\nThe associated mapping class groups are denoted by \n$\\Mod(\\disc\\rel B) = \\pi_0\\bigl(\\Homeo^+(\\disc\\rel B) \\bigr)$ and\n$\\Mod_0(\\disc\\rel B) = \\pi_0\\bigl(\\Homeo^+_0(\\disc\\rel B) \\bigr)$ respectively.\n\n\\begin{prop}\n\\label{prop:MCG12}\n$\\Mod(\\disc\\rel B)\\cong {\\mathscr{B}}_m\/Z({\\mathscr{B}}_m)$ and $\\Mod_0(\\disc\\rel B)\\cong {\\mathscr{B}}_m$,\nwhere ${\\mathscr{B}}_m$ is the Artin braid group, with $m = \\# B$ and $Z({\\mathscr{B}}_m)$ is the center of the braid group.\n\\end{prop}\n\nLet $\\C_m\\disc$ be the \\emph{configuration space} of unordered configurations of $m$ points in\n$\\disc$. 
\n\\emph{Geometric braids} on $m$ strands on $\\disc$ are closed loops in $\\C_m\\disc$ based at $B_0 = \\{z_1,\\cdots,z_m\\}$, where the points $z_i$ are defined\nas follows: $z_i = (x_i,0)$, $x_0=-1$, and $x_{i+1}= x_i + 2\/(m+1)$.\nThe \\emph{classical braid group} on $\\disc$ is the fundamental group $\\pi_1\\bigl(\\C_m\\disc,B_0\\bigr)$ and is denote by ${\\mathcal{B}}_m\\disc$.\nThe (algebraic) \\emph{Artin braid group} ${\\mathscr{B}}_{m}$ is a free group spanned by the $m-1$ generators $\\sigma_{i}$, modulo\nfollowing relations:\n\\begin{align}\\label{eqn:braidrel}\n \\begin{cases}\n \\sigma_{i} \\sigma_{j} = \\sigma_{j} \\sigma_{i}, & \\ |i-j| \\geq 2,\\ i,j \\in \\{1, \\dots ,m-1\\} \\\\\n \\sigma_{i} \\sigma_{i+1} \\sigma_{i} = \\sigma_{i+1} \\sigma_{i} \\sigma_{i+1}, & \\ 1\\le i \\le m-2.\n \\end{cases}\n\\end{align} \nFull twists are denoted algebraically by $\\square= (\\sigma_{1} \\dots \\sigma_{m-1})^{m}$ and generate the center of the braid group ${\\mathscr{B}}_m$.\nPresentation of words consisting only of the $\\sigma_i$'s (not the inverses) and the relations in \\eqref{eqn:braidrel} form a monoid\nwhich is called the \\emph{positive braid monoid} ${\\mathscr{B}}_m^+$.\n\nThere exists a canonical isomorphism ${\\bm{i}}_m\\colon {\\mathscr{B}}_m \\to {\\mathcal{B}}_m\\disc$, cf.\\ \\cite[Sect.\\ 1.4]{Birman}.\nFor closed loops ${\\bm{\\beta}}(t)$ based at $B\\in \\C_m\\disc$ we have a canonical isomorphism ${\\bm{j}}_B\\colon\\pi_1\\bigl(\\C_m\\disc,B\\bigr)\\to \\pi_1\\bigl(\\C_m\\disc,B_0\\bigr) ={\\mathcal{B}}_m\\disc$.\nLet $p\\colon [0,1]\\to \\C_m\\disc$ be a path connecting $B_0$ to $B$, then define ${\\bm{j}}_B\\bigl([{\\bm{\\beta}}]_B\\bigr) := [(p\\cdot {\\bm{\\beta}})\\cdot p^*]_{B_0}\n= [p\\cdot({\\bm{\\beta}}\\cdot p^*)]_{B_0}$, where $p^*$ is the inverse path connecting $B$ to $B_0$. The definition of ${\\bm{j}}_B$ is independent of the chosen path $p$.\nThis yields the isomorphism \n$\n\\imath_B = {\\bm{i}}_m^{-1}\\circ {\\bm{j}}_B\\colon \\pi_1\\bigl(\\C_m\\disc,B\\bigr) \\to\n{\\mathscr{B}}_m.\n$\n\nThe construction of the isomorphism $\\Mod_0(\\disc\\rel B) \\cong {\\mathcal{B}}_{m}\\disc$ can be understood as follows, cf.\\ \\cite{Birman}, \\cite{Birman2}. For \n$F\\in \\Homeo^+_0(\\disc\\rel B)$\nchoose an isotopy $\\phi_t\\in \\Homeo^+_0(\\disc), \\ t \\in [0,1]$, \nsuch that $\\phi_1=F$.\nSuch an isotopy exists since $\\Homeo^+_0(\\disc)$ is contractible, cf.\\ Propostion \\ref{prop:MCG11}.\nFor $G\\in [F]\\in \\Mod_0(\\disc\\rel B)$, the composition and scaling of the isotopies defines isotopic braids based at $B\\in \\C_m\\disc$.\nThe isomorphism $\\jmath_B\\colon \\Mod_0(\\disc\\rel B) \\to {\\mathscr{B}}_m$ is given by $\\jmath_B([F]) = \\iota_B\\bigl(d_*^{-1}([F])\\bigr)= \\imath_B([{\\bm{\\beta}}]_B)$, with ${\\bm{\\beta}}(t) = \\phi_t(B)$\nthe geometric braid generated by $\\phi_t$. The isomorphism $d_*$ is given in Appendix \\ref{subsec:braidMCG} and $[{\\bm{\\beta}}]_B$ denotes the homotopy class in $\\pi_1\\bigl(\\C_m\\disc,B\\bigr)$.\nFor $\\Mod(\\disc\\rel B)$ we use the same notation for the isomorphism which is given by\n\\[\n\\jmath_B\\colon\\Mod(\\disc\\rel B) \\cong {\\mathscr{B}}_{m}\/Z({\\mathscr{B}}_m),\\quad [F] \\mapsto \\jmath_B([F])= \\beta\\!\\!\\!\\!\\mod\\square,\n\\]\nwhere $\\beta = \\imath_B\\bigl( [{\\bm{\\beta}}]_B\\bigr)$.\nThe above mapping class groups can also be defined using diffeomorphisms and symplectomorphisms. 
\n\n\\begin{prop}\n\\label{prop:MCG12a} \n $ \\pi_0\\bigl(\\Ham(\\disc\\rel B)\\bigr) =\\Mod(\\disc\\rel B)\\cong {\\mathscr{B}}_m\/Z({\\mathscr{B}}_m)$.\n\\end{prop}\nIn Appendix \\ref{sec:sympMCG} we show that $\\pi_0\\bigl(\\Symp(\\disc\\rel B)\\bigr) = \\Mod(\\disc\\rel B)$ and that $\\Symp(\\disc\\rel B) = \\Ham(\\disc\\rel B)$\nand therefore that every mapping class can be represented by Hamiltonian symplectomorphisms.\n\n\n\n\n\n\\section{Braid classes}\n\\label{subsec:2color}\nConsidering free loops in a configuration space as opposed to based loops leads to classes of closed braids, which are the key tool for studying periodic points.\n\n\n\n\\subsection{Discretized braids}\n\\label{subsec:discbr}\nFrom \\cite{BraidConleyIndex} we recall the notion of positive piecewise linear braid diagrams and discretized braids.\n\\begin{defn}\n\\label{PL}\nThe space of {\\em discretized period $d$ closed braids on $n$ strands},\ndenoted $\\Conf^d_m$, is the space of all pairs $({\\bm{b}},\\tau)$ where\n$\\tau\\in S_m$ is a permutation on $m$ elements, and ${\\bm{b}}$ is an \nunordered set of $m$ {\\em strands}, ${\\bm{b}}=\\{{\\bm{b}}^\\mu\\}_{\\mu=1}^m$,\ndefined as follows:\n\\begin{enumerate}\n\\item[(a)]\n\teach strand \n\t${\\bm{b}}^\\mu=(x^\\mu_0,x^\\mu_1,\\ldots,x^\\mu_d)\\in\\rr^{d+1}$\n\tconsists of $d+1$ {\\em anchor points} $x_j^\\mu$;\n\\item[(b)] $x^\\mu_d = x^{\\tau(\\mu)}_0$\n\tfor all $\\mu=1,\\ldots,m$;\n\\item[(c)]\n\tfor any pair of distinct strands ${\\bm{b}}^\\mu$ and ${\\bm{b}}^{\\mu'}$\n\tsuch that $x^\\mu_j=x^{\\mu'}_j$ for some $j$,\n\tthe \\emph{ transversality} condition\n\t $\\bigl(x^\\mu_{j-1}-x^{\\mu'}_{j-1}\\bigr)\n\t\\bigl(x^\\mu_{j+1}-x^{\\mu'}_{j+1}\\bigr) < 0$ holds.\n\\end{enumerate}\n\\end{defn}\n\n\\begin{rem}\nTwo discrete braids $({\\bm{b}},\\tau)$ and $( {\\bm{b}}', \\tau')$ are close if the strands ${\\bm{b}}^{\\zeta(\\mu)}$ and $ {\\bm{b}}'^\\mu$\nare close in $\\rr^{md}$ for some permutation $\\zeta$ such that $ \\tau'=\\zeta\\tau\\zeta^{-1}$.\nWe suppress the permutation $\\tau$ from the notation. \nPresentations via the braid monoid ${\\mathscr{B}}_m^+$ store the permutations.\n\\end{rem}\n\n\\begin{defn}[cf.\\ \\cite{BraidConleyIndex}]\n\\label{defn:closure}\nThe closure $\\bar\\Conf_m^d$ of the space $\\Conf_m^d$ consists of pairs $({\\bm{b}},\\tau)$ for which (a)-(b) in Definition \\ref{PL} are satisfied.\n\\end{defn}\n\nThe path components of $\\Conf_m^d$ are the \\emph{discretized braids classes} $[{\\bm{b}}]$.\nBeing in the same path connected component is an equivalence relation on $\\Conf_m^d$, where the braid classes are the\nequivalence classes expressed by the notation ${\\bm{b}},{\\bm{b}}'\\in [{\\bm{b}}]$, and ${\\bm{b}}\\sim{\\bm{b}}'$. \nThe associated permutations $\\tau$ and $\\tau'$ are conjugate. A path connecting ${\\bm{b}}$ and ${\\bm{b}}'$ is called a \\emph{positive isotopy} and the equivalence relation is referred to \\emph{positively isotopic}.\n\nTo a configuration ${\\bm{b}}\\in\\Conf_m^d$ one can associate a \n\\emph{piecewise linear braid diagram} $\\Bd({\\bm{b}})$. 
For \neach strand ${\\bm{b}}^\\mu\\in {\\bm{b}}$, consider the piecewise-linear (PL) \ninterpolation\n\\begin{equation}\\label{interpolate1}\n\\Bd^{\\mu}(t) := x^\\mu_{\\floor{d\\cdot t}}+(d\\cdot t-\\floor{d\\cdot t})\n\t(x^\\mu_{\\ceil{d\\cdot t}}-x^\\mu_{\\floor{d\\cdot t}}),\n\\end{equation}\nfor $t\\in[0,1]$.\nThe braid diagram $\\Bd({\\bm{b}})$ is then defined to be the \nsuperimposed graphs of all the functions $\\Bd^{\\mu}(t)$.\nA braid diagram $\\Bd({\\bm{b}})$ is not only a good bookkeeping tool for keeping track of the strands\nin $\\Bd({\\bm{b}})$, but also plays natural the role of a braid diagram projection with only positive intersections, cf.\\ Section \\ref{subsec:discbrinv12}.\n\nThe set of $t$-coordinates of intersection points in $\\Bd({\\bm{b}})$ is denoted by $\\{t_i\\}$, $i=1,\\cdots,|{\\bm{b}}|$, where $|{\\bm{b}}|$ is the total number of\nintersections in $\\Bd({\\bm{b}})$ counted with multiplicity. The latter is also referred to as the \\emph{word metric} and is an invariant for ${\\bm{b}}$.\nA discrete braid ${\\bm{b}}$ is \\emph{regular} if all points $t_i$ and anchor points $x_j^\\mu$ are distinct.\nThe regular discrete braids in $[{\\bm{b}}]$ form a dense subset and every discrete braid is positively isotopic to a regular discrete braid. \nTo a regular discrete braid ${\\bm{b}}$ one can assign a unique positive word $\\beta = \\beta({\\bm{b}})$ defined as follows:\n\\begin{equation}\n\\label{eqn:word1}\n{\\bm{b}} \\mapsto \\beta({\\bm{b}}) = \\sigma_{k_1} \\cdots \\sigma_{k_\\ell},\n\\end{equation}\n where $k_i$ and $k_i +1$ are the positions that intersect at $t_i$, cf.\\ \\cite[Def.\\ 1.13]{Dehornoy1}. \nOn the positive braid monoid ${\\mathscr{B}}_m^+$ two positive words $\\beta$ and $\\beta'$ are positively equal,\nnotation $\\beta \\doteq\\beta'$, if they represent the same element in ${\\mathscr{B}}_m^+$ using the relations in \\eqref{eqn:braidrel}.\nOn ${\\mathscr{B}}_m^+$ we define an equivalence relation which acts as an analogue of conjugacy in the braid group, cf.\\ \\cite[Sect.\\ 2.2]{BDV}.\nFor a given word $\\sigma_{i_1}\\cdots\\sigma_{i_n}$, define the relation\n\\[\n\\sigma_{i_1}\\sigma_{i_2}\\cdots\\sigma_{i_n} \\equiv \\sigma_{i_2}\\cdots\\sigma_{i_n}\\sigma_{i_1}.\n\\]\n\n\\begin{defn}\n\\label{defn:equiv12}\nTwo positive words $\\beta,\\beta'\\in {\\mathscr{B}}_m^+$ are \\emph{positively conjugate}, notation\n$\\beta \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} \\beta'$, if there exists a sequence of words $\\beta_0,\\cdots,\\beta_\\ell\\in {\\mathscr{B}}_m^+$, with $\\beta_0=\\beta$ and $\\beta_\\ell =\\beta'$, such\nthat for all $k$, either $\\beta_k\\doteq \\beta_{k+1}$, or $\\beta_k\\equiv \\beta_{k+1}$\n\\end{defn}\n\nPositive conjugacy is an equivalence relation on ${\\mathscr{B}}_m^+$ and the set of positive conjugacy classes $\\llbracket\\beta\\rrbracket$ of the braid monoid ${\\mathscr{B}}_m^+$ is denoted by\n$\\CC {\\mathscr{B}}_m^+$. \n\nThe above defined assignment ${\\bm{b}} \\mapsto \\beta({\\bm{b}})$ can be extended to all discrete braids. 
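For regular discretized braids the assignment \eqref{eqn:word1} is entirely algorithmic; the following sketch (ours, for illustration only) reads off the word directly from the anchor points, before we return to the general, not necessarily regular, case.
\begin{verbatim}
# Sketch: compute beta(b) for a *regular* discretized braid.
# x[mu][j] is the j-th anchor point of strand mu, j = 0, ..., d.
def braid_word(x):
    m, d = len(x), len(x[0]) - 1
    word = []
    for j in range(d):                       # one discretization step at a time
        crossings = []
        for mu in range(m):
            for nu in range(mu + 1, m):
                a0, a1 = x[mu][j], x[mu][j + 1]
                b0, b1 = x[nu][j], x[nu][j + 1]
                if (a0 - b0) * (a1 - b1) < 0:                # transversal crossing
                    s = (b0 - a0) / ((a1 - a0) - (b1 - b0))  # crossing time in (0,1)
                    v = a0 + s * (a1 - a0)                   # crossing height
                    below = sum(1 for lam in range(m) if lam not in (mu, nu)
                                and x[lam][j] + s * (x[lam][j + 1] - x[lam][j]) < v)
                    crossings.append((s, below + 1))         # generator index
        word += [k for _, k in sorted(crossings)]
    return word   # e.g. [1, 2, 2, 1] encodes sigma_1 sigma_2^2 sigma_1
\end{verbatim}
For the braid ${\bm{b}}$ of Example \ref{exm:free2} below this returns $\sigma_1\sigma_2^2\sigma_1$, in accordance with \eqref{eqn:word1}.
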
A discrete braid ${\\bm{b}}$ is positively isotopic to a regular braid ${\\bm{b}}'$ and the mapping $\\Conf_m^d \\to \\CC{\\mathscr{B}}_m^+$, given by ${\\bm{b}} \\mapsto \\llbracket\\beta({\\bm{b}})\\rrbracket$, is well-defined\nby choosing $\\beta({\\bm{b}})$ to be any representative in the positive conjugacy class $\\llbracket\\beta({\\bm{b}}')\\rrbracket$.\nObserve that for fixed $d$ the mapping $\\Conf_m^d \\to \\CC{\\mathscr{B}}_m^+$ is not surjective.\n\n\\begin{rem}\nThe positive conjugacy relation defined in Definition \\ref{defn:equiv12} is symmetric by construction since it is defined on finite words.\nFor instance, consider $\\sigma_1\\sigma_2\\sigma_3 \\equiv \\sigma_2\\sigma_3\\sigma_1$.\nThe question whether the $\\sigma_2\\sigma_3\\sigma_1 \\equiv \\sigma_1\\sigma_2\\sigma_3$ is answered as follows:\n$\\sigma_2\\sigma_3\\sigma_1 \\equiv \\sigma_3\\sigma_1\\sigma_2 \\equiv \\sigma_1\\sigma_2\\sigma_3$, which, by Definition \\ref{defn:equiv12},\nshows that $\\sigma_2\\sigma_3\\sigma_1 \\equiv \\sigma_1\\sigma_2\\sigma_3$.\n\\end{rem}\n\n\n\nThe presentation of discrete braids via words in ${\\mathscr{B}}_m^+$ yields the \n following alternative equivalence relation.\n\\begin{defn}\n\\label{defn:topeq}\nTwo discretized braids ${\\bm{b}}, {\\bm{b}}'\\in \\Conf_m^d$ are \\emph{topologically equivalent} if $\\beta({\\bm{b}}) \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} \\beta({\\bm{b}}')$ in ${\\mathscr{B}}_m^+$, i.e. \n$\\beta({\\bm{b}})$ and $\\beta({\\bm{b}}')$ are positively conjugate.\nNotation: ${\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{b}}'$.\n\\end{defn}\n\nSummarizing, ${\\bm{b}}\\sim{\\bm{b}}'$ implies ${\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{b}}'$\nand $\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}$ defines a coarser equivalence relation on $\\Conf_m^d$.\nThe equivalence classes with respect to $\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}$ are denote by $[{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\nThe converse is not true in general, cf.\\ \\cite[Fig.\\ 8]{BraidConleyIndex}.\nFollowing \\cite[Def.\\ 17]{BraidConleyIndex}, a discretized braid class $[{\\bm{b}}]$ is \\emph{free} if\n$[{\\bm{b}}] = [{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\n\\begin{prop}[\\cite{BraidConleyIndex}, Prop.\\ 27]\n\\label{prop:free}\nIf $d>|{\\bm{b}}|$, then $[{\\bm{b}}]$ is a free braid class.\n\\end{prop}\n\nTaking $d$ sufficiently large is a sufficient condition to ensure free braid classes, but this condition is not a necessary condition.\n\n\\begin{figure}[tb]\n\\centering\n\\begin{tikzpicture}[xscale=0.4, yscale=0.4, line width = 1.5pt]\n \\draw[-] (0,0) -- (2,6) -- (4,0); \n \\draw[fill] (0,0) circle (0.07)\n (2,6) circle (0.07)\n (4,0) circle (0.07);\n\n \\draw[-] (0,2) -- (2,2) -- (4,2); \n \\draw[fill] (0,2) circle (0.07)\n (2,2) circle (0.07)\n (4,2) circle (0.07);\n\n\n \\draw[-] (0,4) -- (2,4) -- (4,4);\n \\draw[fill] (0,4) circle (0.07)\n (2,4) circle (0.07)\n (4,4) circle (0.07);\n\\foreach \\x in {0,4}\n \\draw[line width=1.0pt, -] (\\x,-1) -- (\\x,7); \n\n\\end{tikzpicture}\n\\qquad\n\\qquad\n\\begin{tikzpicture}[xscale=0.4, yscale=0.4, line width = 1.5pt]\n \\draw[-] (0,6) -- (2,0) -- (4,6); \n \\draw[fill] (0,6) circle (0.07)\n (2,0) circle (0.07)\n (4,6) circle 
(0.07);\n\n \draw[-] (0,2) -- (2,2) -- (4,2); \n \draw[fill] (0,2) circle (0.07)\n                    (2,2) circle (0.07)\n                    (4,2) circle (0.07);\n\n\n \draw[-] (0,4) -- (2,4) -- (4,4);\n \draw[fill] (0,4) circle (0.07)\n                    (2,4) circle (0.07)\n                    (4,4) circle (0.07);\n\foreach \x in {0,4} \n  \draw[line width=1.0pt, -] (\x,-1) -- (\x,7); \n\n\end{tikzpicture}\n\qquad\n\qquad\n\begin{tikzpicture}[xscale=0.4, yscale=0.4, line width = 1.5pt]\n \draw[-] (0,0) -- (2,6) -- (4,3.5) -- (6,0); \n \draw[fill] (0,0) circle (0.07)\n                    (2,6) circle (0.07)\n                    (4,3.5) circle (0.07)\n                    (6,0) circle (0.07);\n\n \draw[-] (0,2) -- (2,2) -- (4,2) -- (6,2); \n \draw[fill] (0,2) circle (0.07)\n                    (2,2) circle (0.07)\n                    (4,2) circle (0.07)\n                    (6,2) circle (0.07);\n\n\n \draw[-] (0,4) -- (2,4) -- (4,4) -- (6,4);\n \draw[fill] (0,4) circle (0.07)\n                    (2,4) circle (0.07)\n                    (4,4) circle (0.07)\n                    (6,4) circle (0.07);\n\foreach \x in {0,6}\n  \draw[line width=1.0pt, -] (\x,-1) -- (\x,7); \n\end{tikzpicture}\n\caption{The left and middle diagrams show representatives ${\bm{b}},{\bm{b}}'\in \Conf_3^2$ in Example \ref{exm:free2}.\nThe right diagram shows a representative of the same topological braid class in $\Conf_3^3$ (free).}\n\label{fig:free1}\n\end{figure}\n\n\n\begin{exm}\label{exm:free2}\nGiven the braid ${\bm{b}}\in \Conf_3^2$ with ${\bm{b}}^1 = (1,4,1)$, ${\bm{b}}^2 = (2,2,2)$ and ${\bm{b}}^3 = (3,3,3)$, consider the braid\nclass $[{\bm{b}}]$, see Figure \ref{fig:free1}[left and middle]. Since ${\bm{b}}$ is regular, $\beta({\bm{b}})$ is uniquely defined and $\beta({\bm{b}}) = \sigma_1\sigma_2^2\sigma_1$.\nAlso define ${\bm{b}}'\in \Conf_3^2$ with ${\bm{b}}'^1 = (4,1,4)$, ${\bm{b}}'^2={\bm{b}}^2$ and ${\bm{b}}'^3={\bm{b}}^3$ and the braid\nclass $[{\bm{b}}']$. Since ${\bm{b}}'$ is also regular we have the unique braid word $\beta({\bm{b}}') = \sigma_2\sigma_1^2\sigma_2$.\nObserve that $ \sigma_1\sigma_2^2\sigma_1\stackrel{\mbox{\tiny\textnormal{\raisebox{0ex}[0ex][0ex]{$+$}}}}{\sim} \sigma_2\sigma_1^2\sigma_2$, which implies that ${\bm{b}}$ and\n${\bm{b}}'$ are topologically equivalent. However, ${\bm{b}}$ and ${\bm{b}}'$ are not positively isotopic in $\Conf_3^2$, and $[{\bm{b}}]$ and\n$[{\bm{b}}']$ are two different path components of $\Conf_3^2$.\nThe positive conjugacy class of $\sigma_1\sigma_2^2\sigma_1$ is given by $\llbracket\sigma_1\sigma_2^2\sigma_1\rrbracket = \{\sigma_1\sigma_2^2\sigma_1,\sigma_2^2\sigma_1^2,\n\sigma_2\sigma_1^2\sigma_2,\sigma_1^2\sigma_2^2\}$. The words $\sigma_2^2\sigma_1^2$ and $\sigma_1^2\sigma_2^2$\nare not represented in $\Conf_3^2$.\nIf we consider $ {\bm{b}}''\in \Conf_3^3$ given by ${\bm{b}}'' = \bigl\{ (1,4,1,1), (2,2,2,2), (3,3,3,3)\bigr\}$, then\nthe associated braid class $[{\bm{b}}'']$ is free, which confirms that the condition in Proposition \ref{prop:free} is not a necessary\ncondition; see Figure \ref{fig:free1}[right].\n\end{exm}\n\n\n\n Let \n$\beta = \sigma_{i_1} \cdots \sigma_{i_d}\in {\mathscr{B}}_m^+$ be a positive braid word and define\n\[\n\ev_q(\beta) := {\bm{b}} = \{ {\bm{b}}^\mu\} \in \Conf_m^{d+q},\quad {\bm{b}}^\mu = (x_j^\mu),\quad\mu=1,\cdots, m,~~q\ge 0,\n\]\nwith $x_0^\mu = \mu$, $x_j^\mu = x_0^{\sigma_{i_1} \cdots \sigma_{i_j}(\mu)}$, $j=1,\cdots,d$, and $x_{d+q}^\mu =\cdots = x_d^\mu$. 
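As a small illustration of this construction: for $\beta=\sigma_1\sigma_2\in{\mathscr{B}}_3^+$ and $q=1$ one obtains $\ev_1(\beta)\in\Conf_3^3$ with strands ${\bm{b}}^1=(1,2,3,3)$, ${\bm{b}}^2=(2,1,1,1)$ and ${\bm{b}}^3=(3,3,2,2)$, permutation $\tau(1)=3$, $\tau(2)=1$, $\tau(3)=2$, and $\beta\bigl(\ev_1(\beta)\bigr)=\sigma_1\sigma_2$ again; the permutation convention used here is made precise next.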
\nThe expression $\\sigma_{i_1} \\cdots \\sigma_{i_j}(\\mu)$, $\\mu = 1,\\cdots,m$ describes the permutation of the set $\\{1,\\cdots,m\\}$, where\n$\\sigma_{i_1} \\cdots \\sigma_{i_j}$ is regarded as a concatenation of permutations given by the generators $\\sigma_i$ interpreted as\na basic permutation of $i$ and $i+1$.\nBy Proposition \\ref{prop:free}, $[\\ev_q(\\beta)]$ is free for all $q\\ge 1$, and every\n $\\llbracket\\beta\\rrbracket\\in \\CC{\\mathscr{B}}_m^+$ defines a free discrete braid class $[\\ev_q(\\beta)]$ in $\\Conf_m^{d+q}$\n for all $q\\ge 1$. \n\n\n\\subsection{Discrete 2-colored braid classes}\n\\label{subsec:discbr2}\nOn closed configuration spaces we define the following product:\n\\[ \n\\bar\\Conf_n^d\\times\\bar\\Conf_m^d \\to \\bar\\Conf_{n+m}^d,\\quad ({\\bm{a}},{\\bm{b}}) \\mapsto {\\bm{a}}\\sqcup{\\bm{b}},\n\\]\nwhere ${\\bm{a}}\\sqcup{\\bm{b}}$ is the disjoint union of the strands in ${\\bm{a}}$ and ${\\bm{b}}$ regarded as an element in $\\bar\\Conf_{n+m}^d$.\nThe definition yields a canonical permutation on the labels in ${\\bm{a}}\\sqcup{\\bm{b}}$.\nDefine the space of \\emph{2-colored discretized braids} as the space of ordered pairs\n\\begin{equation}\n\\label{eqn:2color}\n\\Conf_{n,m}^d := \\bigl\\{ {\\bm{a}}\\rel{\\bm{b}}:=({\\bm{a}},{\\bm{b}})~|~ {\\bm{a}}\\sqcup{\\bm{b}} \\in \\Conf_{n+m}^d\\bigr\\}.\n\\end{equation}\nThe strand labels in ${\\bm{a}}$ range from $\\mu=1,\\cdots,n$ and the strand labels in ${\\bm{b}}$ range from $\\mu=n+1,\\cdots,n+m$.\nThe associated permutation $\\tau_{{\\bm{a}},{\\bm{b}}} = \\tau_{\\bm{a}} \\oplus\\tau_{\\bm{b}} \\in S_{n+m}$, where\n $\\tau_{\\bm{a}}\\in S_n$ and $\\tau_{\\bm{b}} \\in S_m$, and $\\tau_{\\bm{a}}$ acts on the labels\n$\\{1,\\cdots,n\\}$ and $\\tau_{\\bm{b}}$ acts on the labels $\\{{n+1},\\cdots,{n+m}\\}$.\nThe strands ${\\bm{a}} = \\{x_j^{\\mu}\\}$, $\\mu=1,\\cdots,n$ are the \\emph{red}, or \\emph{free} strands and\nthe strands ${\\bm{b}} = \\{x_j^{\\mu}\\}$, $\\mu=n+1,\\cdots,n+m$ are the \\emph{black}, or \\emph{skeletal} strands.\nA path component $[{\\bm{a}}\\rel{\\bm{b}}]$ in $\\Conf_{n,m}^d$ is called a \\emph{2-colored discretized braid class}.\nThe canonical projections are given by\n$\\varpi\\colon\\Conf_{n,m}^d \\to \\Conf_m^d$ with ${\\bm{a}}\\rel{\\bm{b}} \\mapsto {\\bm{b}}$ and by\n$\\varpi^*\\colon\\Conf_{n,m}^d \\to \\Conf_n^d$ with ${\\bm{a}}\\rel{\\bm{b}} \\mapsto {\\bm{a}}$.\nThe mapping $\\varpi$ yields a fibration\n\\begin{equation}\n\\label{eqn:fiberbundle2}\n[{\\bm{a}}]\\rel {\\bm{b}} \\to [{\\bm{a}}\\rel {\\bm{b}}] \\to [{\\bm{b}}].\n\\end{equation}\n The pre-images\n$\\varpi^{-1}({\\bm{b}}) = [{\\bm{a}}]\\rel {\\bm{b}}\\subset \\Conf_{n}^d$, are called the \\emph{relative discretized braid class fibers}.\n\nThere exists a natural embedding $\\Conf_{n,m}^d \\hookrightarrow \\Conf_{n+m}^d$, defined by ${\\bm{a}}\\rel{\\bm{b}} \\mapsto {\\bm{a}}\\sqcup{\\bm{b}}$.\nVia the embedding we define the notion of topological equivalence of two 2-colored discretized braids:\n${\\bm{a}}\\rel{\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\rel{\\bm{b}}'$ if ${\\bm{a}}\\sqcup{\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\sqcup{\\bm{b}}'$. The associated equivalence classes are denoted by $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$, which are\nnot necessarily connected sets in $\\Conf_{n,m}^d$. 
A 2-colored discretized braid class $[{\\bm{a}}\\rel{\\bm{b}}]$ is free if $[{\\bm{a}}\\rel{\\bm{b}}] = [{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\nIf $d>|{\\bm{a}}\\sqcup{\\bm{b}}|$, then $[{\\bm{a}}\\rel{\\bm{b}}]$ is free by Proposition \\ref{prop:free}. \n\nThe set of collapsed singular braids in $\\bar \\Conf_{n,m}^d$ is given by:\n\\[\n\\begin{aligned}\n\\Sigma^- := \\{{\\bm{a}}\\rel{\\bm{b}}\\in \\bar\\Conf_{n,m}^d~|~ {\\bm{a}}^\\mu =~&{\\bm{a}}^{\\mu'}, \\hbox{or~} {\\bm{a}}^\\mu={\\bm{b}}^{\\mu'}\\\\\n&\\hbox{for some~~}\\mu\\not=\\mu',~\\hbox{and~} {\\bm{b}}\\in \\Conf_m^d\\}.\n\\end{aligned}\n\\]\nA 2-colored discretized braid class $[{\\bm{a}}\\rel{\\bm{b}}]$ is \\emph{proper} if \n$\\partial [{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}} \\cap \\Sigma^- = \\varnothing$.\nIf a braid class $[{\\bm{a}}\\rel{\\bm{b}}]$ is not proper it is called \\emph{improper}.\nIn \\cite{BDV} properness is considered in a more general setting. The notion of properness in this paper coincides with weak properness in \\cite{BDV}.\n\nA 2-colored discretized braid class $[{\\bm{a}}\\rel{\\bm{b}}]$\nis called \\emph{bounded} if its fibers are bounded as sets in $\\rr^{nd}$.\nNote that $[{\\bm{a}}\\rel{\\bm{b}}]$\nis \\emph{not} a bounded set in $\\rr^{(n+m)d}$.\n\n\n\\subsection{Algebraic presentations}\n\\label{subsec:algpres}\nDiscretized braid classes are presented via the positive conjugacy classes of the positive braid monoid ${\\mathscr{B}}_m^+$.\nFor 2-colored discretized braids we seek a similar presentation.\n\nIn order to keep track of colors we define coloring on words in ${\\mathscr{B}}_{n+m}^+$.\nWords in ${\\mathscr{B}}_{n+m}^+$ define associated permutations $\\tau$ and the permutations $\\tau$ yield partitions of the set $\\{1,\\cdots,n+m\\}$.\nLet $\\gamma \\in {\\mathscr{B}}_{n+m}^+$ be a word for which the induced partition contains a union of equivalence classes $\\aset \\subset \\{1,\\cdots,n+m\\}$\nconsisting of $n$ elements. The set $\\aset$ is the \\emph{red coloring} of length $n$ and the remaining partitions are colored black, denoted by $\\bset$. \nThe pair $(\\gamma,\\aset)$ is \ncalled a 2-colored positive braid word, see Figure \\ref{fig:relative1}.\nFor a given coloring $\\aset \\subset \\{1,\\cdots,n+m\\}$ of length $n$ the set of all words $(\\gamma,\\aset)$ forms a monoid which is denoted by ${\\mathscr{B}}^+_{n,m,\\aset}$ and is referred as\nthe \\emph{2-colored braid monoid} with coloring $\\aset$.\n\n\nTwo pairs $(\\gamma,\\aset)$ and $(\\gamma',\\aset')$ are positively conjugate if $\\gamma\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} \\gamma'$ and\n$\\aset' = \\zeta^{-1}(\\aset)$, where $\\zeta$ is a permutation conjugating the induced permutations $\\tau_\\gamma$ and $\\tau_{\\gamma'}$, i.e. $\\tau_{\\gamma'} = \\zeta\\tau_\\gamma\\zeta^{-1}$.\nIf $\\xi$ is another permutation such that $\\tau_{\\gamma'} = \\xi\\tau_\\gamma\\xi^{-1}$, then \n$ \\zeta\\tau_\\gamma\\zeta^{-1} = \\xi\\tau_\\gamma\\xi^{-1}$. This implies that $\\tau_\\gamma = \\zeta^{-1}\\xi\\tau_\\gamma\\xi^{-1}\\zeta$\nand thus $\\xi^{-1}\\zeta(\\aset) = \\aset$, which is equivalent to \n$\\zeta^{-1}(\\aset) = \\xi^{-1}(\\aset)$. 
This shows\nthat the conjugacy relation in well-defined.\nPositive conjugacy for 2-colored braid words is again denoted by $(\\gamma,\\aset) \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} (\\gamma', \\aset')$\nand a conjugacy class is denoted by $\\llbracket\\gamma,\\aset\\rrbracket$.\nThe set of 2-colored positive conjugacy classes \nwith red colorings of length $n$ is denoted by $\\CC{\\mathscr{B}}_{n,m}^+$.\n\nThe words corresponding to the different colors $(\\gamma,\\aset)$ can be derived from the information in $(\\gamma,\\aset)$.\nLet $\\aset_0\\subset \\aset$ be a cycle of length $\\ell\\le n$ and let $k \\in \\aset_0$. If $\\gamma = \\sigma_{i_1}\\cdots \\sigma_{i_d}$, then we define\nan $\\ell$-periodic sequence $\\{k_j\\}$, with \n\\[\nk_0 = k,\\quad\\hbox{and}\\quad k_{j} = \\sigma_{i_j}(k_{j-1}), ~~~j=1,\\cdots,\\ell d,\n\\]\nby considering the word $\\gamma^\\ell$. Now use the following rule: if $k_j-k_{j-1} \\not = 0$, remove \n$\\sigma_{i_{j'}}$ from $\\gamma$, for $j=1,\\cdots, \\ell d$, where $j' =j \\!\\!\\mod d \\in \\{1,\\cdots,d\\}$.\nMoreover, $\\sigma_{i_j}$ is replaced by $\\sigma_{i_j-1}$, if $k_j=k_{j-1}0$ and $\\partial_3\\mathcal{R}_j>0$ is called a \\emph{parabolic recurrence relation}.\nFrom \\cite[Lem.\\ 55-57]{BraidConleyIndex} there exists a parabolic recurrence relation ${\\mathcal{R}}=\\{{\\mathcal{R}}_j\\}$\nsuch that ${\\bm{b}}$ is a zero for ${\\mathcal{R}}$, i.e. ${\\mathcal{R}}_j(x_{j-1}^{\\mu_\\nu},x_j^{\\mu_\\nu},x_{j+1}^{\\mu_\\nu})=0$ for all $j\\in \\zz$ and for all ${\\nu}=1,\\cdots,m$.\nThe recurrence relation ${\\mathcal{R}}$ may regarded as vector field and is integrated via the equations\n\\begin{equation}\n\\label{parabolicvectorfield}\n \\frac{d}{ds}x^{{\\mu_\\nu}}_{j}=\\mathcal{R}_{j}(x^{{\\mu_\\nu}}_{j-1},x^{{\\mu_\\nu}}_{j},x^{{\\mu_\\nu}}_{j+1}),\\quad \\nu=1,\\cdots,m.\n\\end{equation}\nLet $N$ denoted the closure in $\\rr^{nd}$ of $[{\\bm{a}}]\\rel {\\bm{b}}$.\nBy \\cite[ Prop.\\ 11 and Thm.\\ 15]{BraidConleyIndex}, the set $N$ is an isolating neighborhood for the parabolic flow generated by\nEquation \\eqref{parabolicvectorfield}.\nWe define ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ as the homotopy Conley index of $\\Inv(N,{\\mathcal{R}})$, cf.\\ \\cite{BraidConleyIndex}, \\cite{ConleyIndex}.\nThe Conley index is independent of the choice of parabolic recurrence relations ${\\mathcal{R}}$ for which ${\\mathcal{R}}({\\bm{b}})=0$, cf.\\ \\cite[Thm.\\ 15(a)-(b)]{BraidConleyIndex},\nas well as the choice of the fiber, i.e. 
${\\bm{a}}\\rel{\\bm{b}} \\sim {\\bm{a}}'\\rel{\\bm{b}}'$, then ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}}) = {\\mathrm{h}}({\\bm{a}}'\\rel{\\bm{b}}')$, cf.\\ \\cite[Thm.\\ 15(c)]{BraidConleyIndex}.\nThis makes ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ an invariant of the discrete 2-colored braid class $[{\\bm{a}}\\rel{\\bm{b}}]$.\n\nThere is an intrinsic way to define ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ without using parabolic recurrence relations.\nWe define $N^-\\subset \\partial N$ to be the set of boundary points for which the word\nmetric is locally maximal.\nThe pair $(N,N^-)$ is an index\npair for any parabolic system ${\\mathcal{R}}$ such that ${\\mathcal{R}}({\\bm{b}})=0$, and\nthus by the independence of Conley index on ${\\mathcal{R}}$, the pointed homotopy type\nof\n$N\/N^-$ gives the Conley index: \n$\n{\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}}) = [N\/N^-],\n$\nsee Figure \\ref{fig:conley1} and \\cite[Sect.\\ 4.4]{BraidConleyIndex} for more details on the construction.\n\n\\begin{figure}[hbt]\n\\centering\n\\begin{tikzpicture}[xscale=0.5, yscale=0.4, line width = 1.5pt]\n \\draw[-] (0,4) -- (2,5) -- (4,4); \n \\draw[fill] (0,4) circle (0.07)\n (2,5) circle (0.07)\n (4,4) circle (0.07);\n \\draw[-] (0,1) -- (2,0) -- (4,1);\n \\draw[fill] (0,1) circle (0.07)\n (2,0) circle (0.07)\n (4,1) circle (0.07);\n\n \\draw[-] (0,5) -- (2,1) -- (4,5); \n \\draw[fill] (0,5) circle (0.07)\n (2,1) circle (0.07)\n (4,5) circle (0.07);\n \\draw[-] (0,0) -- (2,4) -- (4,0);\n \\draw[fill] (0,0) circle (0.07)\n (2,4) circle (0.07)\n (4,0) circle (0.07);\n\n \\draw[-, color=red] (0,2) -- (2,2.5) -- (4,2);\n \\draw[color=red,fill] (0,2) circle (0.07)\n (2,2.5) circle (0.07)\n (4,2) circle (0.07);\n\\foreach \\x in {0,4}\n \\draw[line width=1.0pt, -] (\\x,-1) -- (\\x,6); \n\\end{tikzpicture}\n\\qquad\n\\qquad\n\\begin{tikzpicture}[xscale=0.5, yscale=0.5, line width = 1.5pt]\n \\draw[-,color=white] (1,1);\n \\fill[color=red!90] (2,6) -- (6,6) -- (6,2) -- (2,2) -- (2,6);\n \\draw[-] (2,6) -- (6,6) -- (6,2) -- (2,2) -- (2,6); \n \\draw[->] (1.5,5) -- (2.5,5);\n \\draw[->] (1.5,4) -- (2.5,4);\n \\draw[->] (1.5,3) -- (2.5,3);\n\n \\draw[<-] (1.5+4,5) -- (2.5+4,5);\n \\draw[<-] (1.5+4,4) -- (2.5+4,4);\n \\draw[<-] (1.5+4,3) -- (2.5+4,3);\n \n \\draw[<-] (5,1.5) -- (5,2.5);\n \\draw[<-] (4,1.5) -- (4,2.5);\n \\draw[<-] (3,1.5) -- (3,2.5);\n\n \\draw[->] (5,1.5+4) -- (5,2.5+4);\n \\draw[->] (4,1.5+4) -- (4,2.5+4);\n \\draw[->] (3,1.5+4) -- (3,2.5+4);\n\\end{tikzpicture}\n\\qquad\n\\qquad\n\\begin{tikzpicture}[xscale=0.6, yscale=0.8, line width = 1.5pt]\n \\draw[-,color=white] (1,1);\n \\draw[fill=red!90] (1,3.5) ellipse (2.0 and 1.0);\n \\draw[fill=white] (0.5,3.15) ellipse (1.0 and 0.6);\n \\draw[fill] (0.07,2.63) circle (0.07);\n\\end{tikzpicture}\n\\caption{The Conley index for the braid in Example \\ref{exm:exist2}.\nThe homotopy of the pionted space in given by $h({\\bm{a}}\\rel{\\bm{b}}) = \\sbb^1$.}\n\\label{fig:conley1}\n\\end{figure}\n\nThe invariant ${\\mathrm{h}}({\\bm{a}}\\rel{\\bm{b}})$ is not necessarily invariant with respect to the number of discretization points $d$.\nIn order to have invariance also with respect to $d$, another invariant for discrete braid classes was introduced in \\cite{BraidConleyIndex}.\nConsider the equivalence class induced by the relation ${\\bm{a}}\\rel{\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\rel{\\bm{b}}'$ on $\\Conf_{n,m}^d$, which defines the 
class\n$[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ of proper discrete 2-colored braids.\nVia the projection $\\varpi\\colon [{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}} \\to [{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ we obtain fibers $\\varpi^{-1}({\\bm{b}})$.\nSuppose $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ is a bounded class, i.e. all fibers $\\varpi^{-1}({\\bm{b}})$ are bounded sets in $\\rr^{nd}$.\nFollowing \\cite[Def.\\ 18]{BraidConleyIndex}\nthe closure $N$ of a fiber\n$\\varpi^{-1}({\\bm{b}})$ is an isolating neighborhood since ${\\bm{a}}\\rel{\\bm{b}}$ is proper.\nDefine ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}})$ as the homotopy Conley index of $N$.\nIf $[{\\bm{a}}_k]\\rel {\\bm{b}}$ are the fibers belonging to the components $[{\\bm{a}}_k\\rel{\\bm{b}}]$ of $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$, then \n\\begin{equation}\n\\label{eqn:braidindex22}\n{\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}) := \\bigvee_{k} {\\mathrm{h}}([{\\bm{a}}_k]\\rel{\\bm{b}}).\n\\end{equation}\n\nDefine the following extension mapping\n $\\E:\\Conf_m^d\\to\\Conf_m^{d+1}$, cf.\\ \\cite{BraidConleyIndex},\nvia concatenation with the trivial braid of period one: \n\\begin{equation}\n\t(\\E{\\bm{b}})^\\mu := \\left\\{\n\t\\begin{array}{cl}\n\t\tx_j^{\\mu} &\tj=0,\\ldots,d; \t\\\\\n\t\tx_d^{\\mu} & j=d+1 .\n\t\\end{array}\\right.\n\\end{equation}\nProperness remains unchanged under the extension mapping $\\E$, however boundedness may not be preserved.\nDefine the skeletal augmentation:\n\\[\n\\A\\colon \\Conf_m^d \\to \\Conf_{m+2}^d,\\quad {\\bm{b}} \\mapsto \\A{\\bm{b}} = {\\bm{b}}^* = {\\bm{b}}\\cup {\\bm{b}}^-\\cup{\\bm{b}}^+,\n\\]\nwhere ${\\bm{b}}^- =\\{\\min_{\\mu} \\{x_j^\\mu\\} - 1\\}_j$ and ${\\bm{b}}^+ =\\{\\max_{\\mu} \\{x_j^\\mu\\} + 1\\}_j$.\nIf $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ is bounded, then\n ${\\mathrm{h}}([{\\bm{a}}_k]\\rel{\\bm{b}}) = {\\mathrm{h}}([{\\bm{a}}_k]\\rel {\\bm{b}}^*)$ for all $k$ and therefore ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}) = {\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*)$.\nOne can define second skeletal augmentation:\n\\[\n\\B\\colon \\Conf_m^d \\to \\Conf_{m+2}^d,\\quad {\\bm{b}} \\mapsto \\B{\\bm{b}}={\\bm{b}}^\\# = {\\bm{b}}\\cup {\\bm{b}}^s\\cup{\\bm{b}}^n,\n\\]\nwhere ${\\bm{b}}^s =\\{(-1)^j\\min_{\\mu} \\{x_j^\\mu\\} - (-1)^j\\}_j$ and ${\\bm{b}}^n =\\{(-1)^j\\max_{\\mu} \\{x_j^\\mu\\} + (-1)^j\\}_j$.\nAs before, if $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ is bounded, then\n ${\\mathrm{h}}([{\\bm{a}}_k]\\rel{\\bm{b}}) = {\\mathrm{h}}([{\\bm{a}}_k]\\rel {\\bm{b}}^\\#)$ for all $k$ and therefore ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}) = {\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^\\#)$.\n\n\nConsider the proper, bounded 2-colored braid classes $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ and $[\\E{\\bm{a}}\\rel\\E{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$.\nThe main result in \\cite[Thm.\\ 20]{BraidConleyIndex} is the Stabilization Theorem which states 
that\n\\begin{equation}\n\\label{eqn:braidindex13}\n{\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*) = {\\mathrm{H}}(\\E{\\bm{a}}\\rel \\E{\\bm{b}}^*).\n\\end{equation}\nThe independence of ${\\mathrm{H}}$ on the skeleton ${\\bm{b}}$ can be derived from the Stabilization Theorem.\nSince a 2-colored discretized braid class is free when $d$ is sufficiently large, we have that $[\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*]$ is free\nfor some $p>0$ sufficiently large, and by stabilization ${\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*) = {\\mathrm{H}}(\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*)$.\nLet ${\\bm{a}} \\rel {\\bm{b}} \\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim} {\\bm{a}}'\\rel{\\bm{b}}'$, then $\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^* \\sim \\E^p{\\bm{a}}'\\rel \\E^p{\\bm{b}}'^*$.\nBy \\cite[Thm.\\ 15(c)]{BraidConleyIndex}, a continuation can be constructed which proves that\n${\\mathrm{H}}(\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*) ={\\mathrm{H}}(\\E^p{\\bm{a}}'\\rel \\E^p{\\bm{b}}'^*)$. Consequently,\n\\[\n{\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*) = {\\mathrm{H}}(\\E^p{\\bm{a}}\\rel \\E^p{\\bm{b}}^*) = {\\mathrm{H}}(\\E^p{\\bm{a}}'\\rel \\E^p{\\bm{b}}'^*) = {\\mathrm{H}}({\\bm{a}}'\\rel{\\bm{b}}'^*),\n\\]\nwhich shows that the index ${\\mathrm{H}}$ only depends on the topological type $\\llbracket \\gamma,\\aset\\rrbracket$, with $\\gamma=\\beta({\\bm{a}}\\rel{\\bm{b}})$.\n\\begin{defn}\n\\label{defn:discrbrinv}\nLet $\\llbracket\\gamma,\\aset\\rrbracket$ be proper, positive conjugacy class. Then, the \\emph{braid Conley index} is defined as\n\\begin{equation}\n\\label{eqn:relbrinv}\n{\\mathrm{H}}\\llbracket\\gamma,\\aset\\rrbracket := {\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*).\n\\end{equation}\n\\end{defn}\nThe braid Conley index ${\\mathrm{H}}$ may be computed using any\nrepresentative ${\\bm{a}}\\rel{\\bm{b}}^*$ for any sufficiently large $d$ and any associated recurrence relation ${\\mathcal{R}}$.\n\nFinally, we mention that besides the extension $\\E$, we also have a \\emph{half twist} extension operator $\\T$:\n\\begin{equation}\n\t(\\T{\\bm{b}})^\\mu := \\left\\{\n\t\\begin{array}{cl}\n\t\tx_j^{\\mu} &\tj=0,\\ldots,d \t\\\\\n\t\t-x_d^{\\mu} & j=d+1 .\n\t\\end{array}\\right.\n\\end{equation}\nEvery discretized braid can be dualized via the mapping $\\{x^\\mu_j\\} \\mapsto \\{(-1)^j x_j^\\mu\\}$. 
On $\\Conf_m^{2d}$ this yields\na well-defined operator $\\D\\colon\\Conf_m^{2d}\\to \\Conf_m^{2d}$ mapping proper, bounded discretized braid classes $[{\\bm{a}}\\rel{\\bm{b}}]$ to proper, bounded discretized braid classes\n$[\\D{\\bm{a}}\\rel\\D{\\bm{b}}]$.\nFrom \\cite[Cor.\\ 31]{BraidConleyIndex} we recall the following result.\nLet ${\\bm{a}}\\rel{\\bm{b}}\\in \\Conf_{n,m}^{2d}$ be proper, then\n\\begin{equation}\n\\label{eqn:thedual}\n{\\mathrm{H}}\\bigl(\\T^2\\circ \\D({\\bm{a}}\\rel{\\bm{b}}^*)\\bigr) = {\\mathrm{H}}\\bigl(\\D({\\bm{a}}\\rel{\\bm{b}}^*)\\bigr) \\wedge \\sbb^{2n},\n\\end{equation}\nwhere the wedge is the $2n$-suspension of the Conley index.\n\nFrom the singular homology $H_*({\\mathrm{H}}({\\bm{a}}\\rel{\\bm{b}}^*))$ the Poincar\\'e polynomial \nis denoted by $P_t({\\bm{a}}\\rel{\\bm{b}}^*)$, or $P_t\\llbracket\\gamma,\\aset\\rrbracket$ in terms of the topological type.\nThis yields an important invariant: $|P_t({\\bm{a}}\\rel{\\bm{b}}^*)| = |P_t\\llbracket\\gamma,\\aset\\rrbracket|$, which is the number of monomial term in the Poincar\\'e polynomial.\n\n\n\n\n\\section{The variational formulation}\nFor a given symplectomorphism $F\\in \\Symp(\\disc)$\n the problem of finding periodic points can be reformulated in terms of parabolic recurrence relations.\n\n\n\\subsection{Twist symplectomorphisms}\n\n\nLet $F(x,y) = \\bigl(f(x,y),g(x,y)\\bigr)$ be a symplectomorphism of $\\plane$, with $f,g$ smooth functions on $\\plane$.\nRecall that $F\\in \\Symp(\\plane)$ is a \\emph{positive} twist symplectomorphism if\n\\[\n\\frac{\\partial f(x,y)}{\\partial y} >0.\n\\]\nFor twist symplectomorphisms there exists a variational principle for finding periodic points, cf.\\ \\cite{LeCalvez}, \\cite{Moser}.\nSuch a variational principle also applies to symplectomorphisms that are given as a composition:\n\\[\nF= F_d\\circ \\cdots \\circ F_1,\n\\]\nwith $F_j\\in \\Symp(\\plane)$ positive twist symplectomorphisms for all $j$.\nIt is important to point out that $F$ itself is \\emph{not} twist in general.\nAn important question is whether every mapping $F\\in \\Symp(\\plane)$ can be written as a composition of (positive) twist symplectomorphisms, cf.\\ \\cite{LeCalvez}.\nSuppose $F\\in \\Ham(\\plane)$,\nand $F$ allows a Hamiltonian isotopy $\\psi_{t,H}$ with appropriate asymptotic conditions near infinity,\nsuch that $\\psi_{t_i,H}\\circ\\psi_{t_{i-1},H}^{-1}$ is close to the identity mapping in the $C^1$-norm for sufficiently small time steps $t_i-t_{i-1}$. Then, define $G_i = \\psi_{t_i,H}\\circ\\psi_{t_{i-1},H}^{-1}$, $i=1,\\cdots,k$,\nand $F = G_k\\circ \\cdots \\circ G_1$.\nWe remark that in this construction\n the individual mappings $G_i$ are not twist necessarily.\nThe following observation provides a decomposition consisting solely of positive twist symplectomorphisms.\nConsider the $90^o$ degree clockwise rotation\n\\[\n\\psi(x,y) = (y,-x), \\quad \\psi^4={\\rm id},\n\\]\nwhich is positive twist symplectomorphism.\nThis yields the decomposition:\n\\begin{equation}\n\\label{eqn:decomp12}\nF = (G_k\\circ \\psi) \\circ\\psi\\circ\\psi\\circ\\psi\\circ \\cdots \\circ (G_1\\circ\\psi)\\circ\\psi\\circ\\psi\\circ\\psi,\n\\end{equation}\nwhere $F_{4i} = G_{i}\\circ \\psi$ and $F_j=\\psi$ for $j\\not = 4i$ for some $i$ and $d=4k$.\nSince the mappings $G_i$ are close to the identity, the compositions $G_i\\circ \\psi$ are positive twist symplectomorphisms.\nThe above procedure intertwines symplectomorphisms with $k$ full rotations. 
As we will see later on this results in\npositive braid representations of mapping classes.\nThe choice of $\\psi$ is arbitrary since other rational rotations also yield twist symplectomorphisms.\n\nFor symplectomorphisms $F \\in \\Symp(\\disc)$ we establish a similar decomposition in terms of positive twist symplectomorphisms, with the additional property that the decomposition can be extended to symplectomorphisms of $\\plane$,\nwhich is necessary to apply the variational techniques in \\cite{BraidConleyIndex}.\n\n\\subsection{Interpolation}\\label{subsec:interpl}\n\nA symplectomorphism $F\\in \\Symp(\\plane)$ satisfies the \\emph{uniform twist condition} if there exists a $\\delta>0$ such that\n\\begin{equation}\n\\label{eqn:twist}\n\\delta^{-1} \\ge \\frac{\\partial f(x,y)}{\\partial y} \\ge \\delta >0,\\quad \\forall (x,y)\\in \\plane.\n\\end{equation}\nThe subset of such symplectomorphisms is denoted by $SV(\\plane)$, cf.\\ \\cite{LeCalvez}.\nA result by Moser implies that all symplectomorphisms of $\\plane$ with a uniform twist condition \nare Hamiltonian.\n\\begin{prop}[cf.\\ \\cite{Moser}]\\label{MoserThm}\n Let $F \\in SV(\\plane)$. Then, there exists a Hamiltonian $H \\in \\mathcal{H}(\\plane)$\n such that $0<\\delta \\le H_{yy} \\le \\delta^{-1}$ and\n $\\psi_{1,H} = F$, where $\\psi_{t,H}$ is the associated Hamiltonian flow.\nAll orbits of $\\psi_{t,H}$ project to straight lines in the $(t,x)$-plane, and $\\psi_{t,H}\\in SV(\\plane)$ for all $t\\in (0,1]$.\n\\end{prop}\n\nFor completeness we give a self-contained proof of Proposition \\ref{MoserThm}, which is the same as the proof\n in \\cite{Moser} modulo a few alterations.\n\n\\begin{proof}\nFollowing \\cite{Moser} we consider the action\n integral $\\int_{0}^{1} L(t,x(t),\\dot{x}(t)) dt$\n for functions $x(t)$ with $x(0)=x_0$ and $x(1) = x_1$.\n We require that extremals are affine lines, i.e. $\\ddot x(t)=0$. \nFor extremals the action is given by $S(x_0,x_1) = \\int_{0}^{1} L(t,x(t),\\dot{x}(t)) dt$ and we seek\nLagrangians such that $S={\\bm{h}}$, where ${\\bm{h}}$ is the generating function for $F$.\n %\n For Lagrangians this implies \n\\begin{equation}\n\\tfrac{d}{dt}(\\partial_pL)-\\partial_xL = ( \\partial_{t}+p \\partial_{x} )\\partial_p L-\\partial_x L=0,\n \\label{straightEL}\n\\end{equation}\nwhere $p=\\dot x$. Solving the first order partial differential equation yields\n $L=L_{0}(t,x,p)+p \\partial_x m + \\partial_t m$,\nwith\n\\begin{equation}\n L_{0}:=-\\int_{0}^{p}(p-p')\\partial^2_{x_0 x_1}{\\bm{h}}(x-p't,x+p'(1-t)) dp' ,\n \\label{Fzero}\n\\end{equation}\nand $m= m(t,x)$ to be specified later, cf.\\ \\cite{Moser} for details.\nThe extremals $x(t)$ are also extremals for $L_0$. 
Let $S_{0}(x_{0},x_{1}) = \\int_{0}^{1} L_0(t,x(t),\\dot{x}(t)) dt$, then\n\\begin{equation}\n \\int_{0}^{1}p \\partial_x m(t,x(t))+\\partial_t m(t,x(t)) dt = m(1,x_{1})-m(0,x_{0})\n\\end{equation}\nand hence \n %\n\\begin{equation}\n S(x_{0},x_{1})=S_{0}(x_{0},x_{1})+m(1,x_{1})-m(0,x_{0}).\n\\end{equation}\nDifferentiating $S$ yields\n\\begin{equation*}\n\n \\partial_{x_0} S = -\\partial_p L(0,x_{0},x_{1}-x_{0}),\\quad\n\n \\partial_{x_1} S = \\partial_p L(0,x_{1},x_{1}-x_{0})\n \\label{Sdiff}\n\\end{equation*}\nand for the mixed derivat\n\\begin{equation}\n \\partial^2_{x_0 x_1} S_0(x_0,x_1)=-\\partial^2_{pp}L(0,x_{0},x_{1}-x_{0})=\\partial^2_{x_{0} x_{1}}{\\bm{h}}(x_{0},x_{1}).\n \\label{FppRel}\n\\end{equation}\nThen, $S_{0}(x_{0},x_{1})-h(x_{0},x_{1})=u(x_{0})+v(x_{1})$ and the choice\n\\begin{equation*}\n m(t,x):=(1-t)u(x)-tv(x)\n\\end{equation*}\nimplies $S={\\bm{h}}$. Differentiating the relation $y=-\\partial_x {\\bm{h}}(x,x_{1})$ with respect to $y$ and using the fact that\n$x_1 = f(x,y)$, yields\n\\begin{equation*}\n\\begin{aligned}\n1&= -\\partial^2_{y x}{\\bm{h}} (x,x_1) = - \\partial^2_{x x_{1}}{\\bm{h}} (x,x_{1})\\partial_yf(x, x_1)\\\\\n&= -\\partial^2_{x x_{1}}{\\bm{h}}(x,x_{1})\\partial_y f(x, x_{1})\n \\end{aligned}\n\\end{equation*}\nand thus $\\delta\\le \\partial_y f \\le \\delta^{-1}$ if and only if $-\\delta^{-1} \\leq \\partial^2_{x x_{1}}{\\bm{h}} \\leq -\\delta$.\nBy relation \\eqref{FppRel} we have $\\partial^2_{pp}L \\in [ \\delta, \\delta^{-1} ]$.\n\nThe Hamiltonian is obtained via the Legendre transform\n\\begin{equation}\n H(t,x,y):=yp-L(t,x,p),\n \\label{Ltransform}\n\\end{equation}\nwhere \n\\begin{equation}\n y=\\partial_p L(t,x,p),\n \\label{Ltransform2}\n\\end{equation}\nand we can solve for $p$, i.e. $p = \\lambda(x,y)$. As before, differentiating \\eqref{Ltransform2} gives\n$1=\\partial^2_{pp}L\\cdot \\partial_y \\lambda$ and differentiating \\eqref{Ltransform} gives $\\partial_yH = \\lambda$. Combining these two identities yields $\\partial^2_{pp}L\\cdot \\partial^2_{yy}H=1$, from which the desired property $\\partial^2_{yy}H \\in [ \\delta, \\delta^{-1} ]$ follows.\n\nFrom the above analysis we obtain the following expression for the isotopy $\\psi_{t,H}$:\n\\begin{equation}\n\\label{eqn:MoserIso}\n\\psi_{t,H}(x,y) = \\Bigl(x+\\lambda(x,y)t,\\partial_pL\\bigl(t,x+\\lambda(x,y)t,\\lambda(x,y)\\bigr) \\Bigr).\n\\end{equation}\nLet $\\pi_x$ denote the projection onto the $x$-coordinate.~Then, $\\partial_y \\pi_x \\psi_{t,H}(x,y) =\\partial_y \\lambda(x,y) t = \\partial^2_{yy}H t$, which proves that\n$\\psi_{t,H}$ is positive twist for all $t\\in (0,1]$.\n\\end{proof}\n\n\nUsing Proposition \\ref{MoserThm} we obtain\na decomposition of symplectomorphisms $F\\in \\Symp(\\plane)$ as given in \\eqref{eqn:decomp12}\nand which satisfy additional properties such that the discrete braid invariants in \\cite{BraidConleyIndex} are applicable.\n\n\\begin{prop}\\label{Interpolation2} \nLet $F \\in \\Symp(\\disc)$. 
Then, there exists an isotopy $\\phi_{t} \\subset \\Symp(\\plane)$ for all $t\\in [0,1]$, an integer $d\\in \\nn$ and \n a sequence $\\{t_{j} \\}_{j=0}^{d} \\subset [0,1]$ with $t_j = j\/d$,\n such that\n \\begin{enumerate}\n \\item[(i)] $\\phi_{0}= \\id$, $\\phi_{1}|_{\\disc}=F$;\n \\item[(ii)] $\\phi_{t}$ is smooth with respect to $t$ on the intervals $[t_j,t_{j+1}]$ (piecewise smooth);\n \\item[(iii)] $\\widehat F_j :=\\phi_{t_{j}} \\circ \\phi_{t_{j-1}}^{-1} \\in SV(\\plane)$ for all $1\\le j \\le d$, and $F_j:=\\widehat F_j|_{\\disc}$;\n \\item[(iv)] the projection of the graph of $\\phi_{t}(x,y)$ onto $(t,x)$-plane is linear on the intervals $t \\in (t_{j-1},t_{j})$ for all $1\\le j \\le d$, and for all $(x,y) \\in \\plane$;\n \\item[(v)] $\\phi_{t} (\\disc) \\subset [-1,1] \\times \\rr $ for all $t\\in [0,1]$;\n \\item[(vi)] the points $z_\\pm = (\\pm 2,0)$ \n\n are fixed points of $\\widehat F_j = \\phi_{t_{j}} \\circ \\phi_{t_{j-1}}^{-1}$ for all $1\\le j \\le d$;\n \\item[(vii)] the points $z'_\\pm = (\\pm 4,0)$ are period-2 points of $\\widehat F_j = \\phi_{t_{j}} \\circ \\phi_{t_{j-1}}^{-1}$ for all $1\\le j \\le d$,\n i.e. $\\widehat F_j(z'_\\pm) = z'_\\mp = -z'_\\pm$, for all $j$.\n \\end{enumerate}\nThe decomposition\n\\begin{equation}\n\\label{eqn:decomp14}\n\\widehat F = \\widehat F_d\\circ \\cdots \\circ \\widehat F_1,\n\\end{equation}\n is a generalization of the decomposition given in \\eqref{eqn:decomp12}.\n\\end{prop}\n\nThe isotopy constructed in Proposition \\ref{Interpolation2} is called a \\emph{chained Moser isotopy}.\nBefore proving Proposition \\ref{Interpolation2} we construct analogues of the rotation mapping used in \\eqref{eqn:decomp12}.\n\n\\begin{lem}\\label{AlternatePsi} \nFor every integer $\\ell\\ge 3$ there exists \n a positive Hamiltonian twist diffeomorphism $\\Psi$ of the plane $\\plane$,\nsuch that: \n\\begin{enumerate}\n\\item[(i)] the restriction $\\Psi|_{\\disc}$ is a rotation over angle $2\\pi\/\\ell$ and $\\Psi|_{\\disc}^\\ell = \\id$;\n\\item[(ii)] the points $z_\\pm=(\\pm 2,0)$ are fixed points for $\\Psi$;\n\\item[(iii)] the points $z'_\\pm=(\\pm 4,0)$ are period-2 points for $\\Psi$, i.e. 
$\\Psi(z'_\\pm) = z'_\\mp$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nA linear rotation mapping on $\\plane$ is a positive twist mapping for all rotation angles $\\vartheta\\in (0,\\pi)$.\nThe generating function for a rotation is given by\n\\begin{equation}\n\\label{eqn:rot12}\n{\\bm{h}}_\\vartheta(x,x') = \\tfrac{1}{2}\\cot(\\vartheta) x^2 - \\csc(\\vartheta) x x' + \\tfrac{1}{2}\\cot(\\vartheta) x'^2.\n\\end{equation}\nIn order to construct the mappings $\\Psi$ we use special generating functions.\nLet $\\ell\\ge 3$ be an integer and let $\\vartheta_\\ell = 2\\pi\/\\ell \\in (0,\\pi)$.\nConsider generating functions of the form\n\\begin{equation}\n{\\bm{h}}_\\Psi(x,x') = \\xi_\\ell(x) - \\csc(\\vartheta_\\ell) x x' + \\xi_\\ell(x'),\n\\end{equation}\nwhich generate positive twist mappings for all $\\ell\\ge 3$.\nWe choose $\\xi_\\ell$ as follows: $\\xi_\\ell(x) = \\frac{1}{2}\\cot(\\vartheta_\\ell) x^2$ for all $|x|\\le 1$, $\\xi_\\ell(x) = \\frac{1}{2}\\csc(\\vartheta_\\ell) x^2$ for all $3\/2\\le |x|\\le 5\/2$, and $\\xi_\\ell(x) = -\\frac{1}{2}\\csc(\\vartheta_\\ell) x^2$ for all $7\/2\\le |x|\\le 9\/2$.\nThe mapping $\\Psi$ is defined by ${\\bm{h}}_\\Psi$ and \n$y = -\\partial_1 {\\bm{h}}_\\Psi(x,x')$ and $y' = \\partial_2 {\\bm{h}}_\\Psi(x,x')$.\\footnote{To simplify notation we express the derivatives of ${\\bm{h}}$ with respect to its two coordinates \n by $\\partial_1{\\bm{h}}$ and $\\partial_2{\\bm{h}}$.} \nFor $|x|,|x'|\\le 1$, the generating function restricts to \\eqref{eqn:rot12} which yields the rotation over $\\vartheta_\\ell$ on $\\disc$ and establishes (i).\nFor $3\/2\\le x,x'\\le 5\/2$ we have \n\\[\ny = \\csc(\\vartheta_\\ell)(x'-x),\\quad\\hbox{and}\\quad y' = \\csc(\\vartheta_\\ell)(x'-x),\n\\]\nwhich verifies that $z_+$ is a fixed point and the same holds for $z_-$, completing the verification of (ii).\nFor $7\/2\\le x \\le 9\/2$ and $-9\/2\\le x'\\le -7\/2$, we have \n\\[\ny = \\csc(\\vartheta_\\ell)(x'+x),\\quad\\hbox{and}\\quad y' = -\\csc(\\vartheta_\\ell)(x'+x),\n\\]\nthen $z'_+$ is mapped to $z'_-$ and similarly $z'_-$ is mapped to $z'_+$, which completes (iii) and the proof of the lemma.\n\\end{proof}\n\n\nIn order to extend chained Moser isotopies, yet another type of Hamiltonian twist diffeomorphism is needed.\n\n\\begin{lem}\\label{AlternatePsi3} \nFor every integer $\\ell\\ge 3$ there exists \n a positive Hamiltonian twist symplectomorphism $\\Upsilon$ of the plane $\\plane$,\nsuch that: \n\\begin{enumerate}\n\\item[(i)] the restriction $\\Upsilon|_{\\disc}$ is a rotation over angle $2\\pi\/\\ell$, i.e. $\\Upsilon|_{\\disc}^{\\ell} = \\id$;\n\\item[(ii)] the points $z_\\pm=(\\pm 2,0)$ and $z'_\\pm=(\\pm 4,0)$ are period-2 points for $\\Upsilon$, i.e. 
$\\Upsilon(z'_\\pm) = z'_\\mp$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nAs before, consider\n generating functions of the form\n\\begin{equation}\n\\label{eqn:rot14}\n{\\bm{h}}_\\Upsilon(x,x') = \\xi_\\ell(x) - \\csc(\\vartheta_\\ell) x x' + \\xi_\\ell(x'),\n\\end{equation}\nwhich generate positive twist mappings for all $\\ell\\ge 3$.\nWe choose $\\xi_\\ell$ as follows: $\\xi_\\ell(x) = \\frac{1}{2}\\cot(\\vartheta_\\ell) x^2$ for all $|x|\\le 1$, and $\\xi_\\ell(x) = -\\frac{1}{2}\\csc(\\vartheta_\\ell) x^2$ for all $3\/2\\le |x|\\le 9\/2$.\nThe mapping $\\Upsilon$ is defined by ${\\bm{h}}_\\Upsilon$.\nFor $|x|,|x'|\\le 1$, the generating function restricts to \\eqref{eqn:rot12} which yields the rotation over $\\vartheta_\\ell$ on $\\disc$ and establishes (i).\nFor $3\/2\\le x\\le 9\/2$ and $-9\/2\\le x'\\le -3\/2$, we have \n\\[\ny = \\csc(\\vartheta_\\ell)(x'+x),\\quad\\hbox{and}\\quad y' = -\\csc(\\vartheta_\\ell)(x'+x),\n\\]\nthen $z_+$ and $z'_+$ are mapped to $z_-$ and $z'_-$ respectively and similarly $z_-$ and $z'_-$ are mapped to $z_+$ and $z'_+$ respectively, which completes the proof.\n\\end{proof}\n\n\n\n\n\\begin{proof}[Proof of Proposition \\ref{Interpolation2}]\nConsider the subgroup $\\Symp_{c}(\\plane)$ formed by compactly supported symplectomorphisms of the plane.\\footnote{A symplectomorphism is compactly supported in $\\plane$ if it is the identity outside a compact subset of $\\plane$.\n}\n Recall that due to the uniform twist property the set $SV(\\plane)$ is open in the topology given by $C^{1}$-convergence on compact sets, cf.\\ \\cite{LeCalvez}. \n Let $\\Psi\\in \\Ham(\\plane)$ be given by Lemma \\ref{AlternatePsi} for some $\\ell\\ge 3$.\n Then, there exists\nan open neighborhood ${\\mathscr{W}} \\subset \\Symp_{c}(\\plane)$ of the identity, such that $\\varphi\\circ \\Psi\\in SV(\\plane)$ for all $\\varphi \\in {\\mathscr{W}}$.\n\nFor $F\\in \\Symp(\\disc)$, Proposition \\ref{prop:MCG11a}\nprovides a Hamiltonian\n $H_{} \\in \\mathcal{H}(\\disc)$ such that $F = \\psi_{1,H}$. \n Let $H^\\dagger$ be a smooth extension to $\\rr\\times\\plane$ and ${\\mathscr{U}}_\\epsilon(\\disc) = \\{z\\in \\plane~|~|z|<1+\\epsilon\\}$ and\nlet $\\alpha \\colon\\plane \\rightarrow \\rr$ \nbe a smooth bump function satisfying $\\alpha|_{\\disc}=1$, $\\alpha = 0$ on $\\plane\\setminus{\\mathscr{U}}_\\epsilon(\\disc)$.\nTake $\\epsilon\\in (0,1\/2)$ and define $\\widetilde H = \\alpha H^\\dagger$ with $\\widetilde H \\in {\\mathcal{H}}(\\plane)$.\nThe associated Hamiltonian isotopy is denoted by $\\psi_{t,\\widetilde H}$ and $\\widetilde F = \\psi_{1,\\widetilde H}\n\\in \\Ham(\\plane)$. Moreover, $\\psi_{t,\\widetilde H}$\nequals the identity on ${\\plane\\setminus{\\mathscr{U}}_\\epsilon(\\disc)}$, i.e. $\\psi_{t,\\widetilde H}$ is supported in ${\\mathscr{U}}_\\epsilon(\\disc)$,\nand $\\widetilde F|_{\\disc} = F$.\n\nFix $\\ell\\ge 3$ and choose $k>0$\n sufficiently large such that\nthe symplectomorphisms \n\\begin{equation}\n G_{i}=\\psi_{i\/k,\\widetilde H} \\circ \\psi_{(i-1)\/k,\\widetilde H}^{-1}, \\ i \\in \\{1, \\dots,k \\}\n\\end{equation}\nare elements of ${\\mathscr{W}}$. 
Each $G_{i}$ restricted to $\\disc$ can be decomposed as follows:\n\\begin{equation}\n G_{i}|_\\disc = \\bigl(G_{i}|_\\disc \\circ \\Psi\\bigr) \\circ \\underbrace{\\Psi'\\circ\\cdots\\circ\\Psi'}_{\\kappa(\\ell-1)},\\quad \\ell\\ge3,~\\kappa\\in \\nn,\n\\end{equation}\nwhere $\\Psi$ and $\\Psi'$ are obtained from Lemma \\ref{AlternatePsi} by choosing rotation angles $2\\pi\/\\ell$ and $2\\pi\/\\kappa\\ell$ respectively. \nObserve that $\\Psi \\circ \\Psi'^{\\kappa(\\ell-1)}|_{\\disc} = \\id$.\nFrom $\\widetilde F$ we define the mapping $\\widehat F\\in \\Symp(\\plane)$:\n\\begin{equation} \n \\widehat{F}=(G_{k} \\circ \\Psi) \\circ \\underbrace{\\Psi' \\circ \\dots \\circ \\Psi'}_{\\kappa(\\ell-1)}\\circ\\cdots\\circ (G_{1} \\circ \\Psi) \\circ \\underbrace{\\Psi'\\circ\\cdots\\circ \\Psi'}_{\\kappa(\\ell-1)}.\n\\end{equation}\nBy construction we have $\\widehat F|_\\disc = F$.\nLet $\\ell_\\kappa= \\kappa(\\ell-1) +1$ and $d = \\ell_\\kappa k$ and put \n\\begin{equation}\n \\widehat F_{j} = \\begin{cases} G_{j\/\\ell_\\kappa} \\circ \\Psi &\\mbox{for }~~ j \\in \\{\\ell_\\kappa, 2\\ell_\\kappa,\\cdots, d\\}\n \\\\ \\Psi' &\\mbox{for } j \\in \\{1, \\dots, d \\} \\backslash \\{\\ell_\\kappa, 2\\ell_\\kappa, \\dots, d \\}. \\end{cases}\n\\end{equation} \nwith $\\widehat F_j \\in SV(\\plane)$ for $j \\in \\{1, \\dots, d \\}$ and $F_j = \\widehat F_j|_{\\disc}$.\nUsing the latter we obtain a decomposition of $F$ as given in \\eqref{eqn:decomp12},\nand with the additional property that the mappings $F_j$ extend to twist symplectomorphisms of $\\plane$,\nwhich proves \\eqref{eqn:decomp14}.\n\nEach symplectomorphism $\\widehat F_j$ can be connected to the identity by a Hamiltonian path.\nLet $H^j$ be the Hamiltonian given by Proposition \\ref{MoserThm}, which connects\n$\\widehat F_j$ to the identity via the Moser isotopy $\\psi_{s,H^j}$, $s\\in [0,1]$.\nLet $t_{j}=j\/d$ for all $j \\in \\{0, \\dots, d \\}$ and define \n\\begin{equation}\n\\label{eqn:theisotopy}\n\\phi_t = \\psi_{s^j(t),H^j} \\circ \\widehat F_{j-1} \\circ\\cdots\\circ \\widehat F_0,\\quad t\\in [t_{j-1},t_{j}], ~~j\\in \\{1,\\cdots,d\\},\n\\end{equation}\nwith $s^j(t) = d(t-t_{j-1})$ and $\\widehat F_0 = \\id$.\nObserve that, by construction, $\\phi_{t_j}\\circ \\phi_{t_{j-1}}^{-1} = \\widehat F_j$, for all\n$j=1,\\cdots,d$ and (i) - (iv) are satisfied.\nCondition (v) follows from (iv) and from the fact that each $\\widehat F_j$ leaves the disc $\\disc$ invariant.\n\n All the symplectomorphisms in the decomposition are supported in the disc ${\\mathscr{U}}_\\epsilon(\\disc)$, hence Conditions (ii) and (iii) of Lemma\n \\ref{AlternatePsi} \n imply Properties (vi) and (vii).\n\\end{proof}\n\n\\begin{rem}\n\\label{rem:extendedMI}\nThe chained Moser isotopies in Proposition \\ref{MoserThm} can be extended with two more parameters $r\\ge 0$ and $\\rho\\ge 0$.\nConsider the decomposition\n\\begin{equation}\n\\label{eqn:decomp16}\n\\widehat F = \\widehat F_d\\circ \\cdots \\circ \\widehat F_1 \\circ \\underbrace{\\Psi^{\\ell_r}_r\\circ\\cdots\\circ\\Psi^{\\ell_r}_r}_r \\circ\n\\underbrace{\\Upsilon^{\\ell_\\rho}_\\rho\\circ\\cdots\\circ\\Upsilon^{\\ell_\\rho}_\\rho}_\\rho, \n\\end{equation}\nwhere $\\Psi_r^{\\ell_r}|_\\disc = \\id$ and $\\ell_r\\ge 3$, and $\\Upsilon_\\rho^{\\ell_\\rho}|_\\disc = \\id$ and $\\ell_\\rho\\ge 3$.\nWe can again define a Moser isotopy as in \\eqref{eqn:theisotopy} with $d$ replaced by $d+r\\ell_r+\\rho\\ell_\\rho$. 
\nThe isotopy is again called a chained Moser isotopy and denoted by $\\phi_t$, and the extended period will again be denoted by $d$.\nThe strands $\\phi_t(z_\\pm)$ link with the cylinder $[0,1]\\times\\disc$ and with each other with linking number $2\\rho$.\n\\end{rem}\n\n\n\n\n\n\n\n\\subsection{The discrete action functional}\\label{subsec:genfun}\n\nLet $F \\in \\Symp(\\disc)$ be the given symplectomorphism of the 2-disc and let $\\{ \\phi_{t} \\}_{t \\in \\rr}$ and $\\{t_{j} \\}_{j \\in \\{ 0, \\dots, d \\} }$ \nbe the associated continuous isotopy and sequence of discretization times as given in Proposition~\\ref{Interpolation2}\nfor the extension $\\widehat F$.\nThe isotopy is extended periodically, that is $ \\phi_{t+s} = \\phi_{t} \\circ \\phi_{1}^{s} $ and $t_{j+s d} = s + t_{j}$ for all $s \\in \\mathbb{Z}$.\nThe decomposition of $\\widehat F$ given by Proposition \\ref{Interpolation2} yields a periodic sequence of positive twist symplectomorphisms $\\{\\widehat F_j\\}$, with $\\widehat F_j =\n\\phi_{t_{j+1}} \\circ \\phi_{t_{j}}^{-1} \\in SV(\\plane)$ and $\\widehat F_{j+d} = \\widehat F_j$.\n\n\n\\begin{defn}\nA sequence $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$ is a \\textit{full orbit} for the system $\\{ \\widehat F_j\\}$ if\n\\[\n(x_{j+1},y_{j+1}) = \\widehat F_j (x_{j},y_{j}), \\quad j\\in \\zz.\n\\]\nIf $(x_{j+d},y_{j+d}) = (x_j,y_j)$ for all $j$, then $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$ is called a \\emph{$d$-periodic sequence}\nfor the system $\\{ \\widehat F_j\\}$.\n\\end{defn}\n\n\n\n For every twist symplectomorphism $\\widehat F_j \\in SV(\\plane)$ \nwe assign a generating function\n ${\\bm{h}}_{j}={\\bm{h}}_{j}(x_{j}, x_{j+1})$ on the $x$-coordinates, which implies that\n $y_{j}= -\\partial_1 {\\bm{h}}_j$\n and $y_{j+1}=\\partial_2 {\\bm{h}}_j$.\n From the twist property it follows that\n \\begin{equation}\\label{hmonotonicity}\n \\partial_{1} \\partial_{2} {\\bm{h}}_{j} < 0,\\quad \\forall j\\in \\zz.\n \\end{equation}\n Note that the sequence $\\{{\\bm{h}}_{j} \\}$ is $d$-periodic. \n \n Define the \\textit{action functional} $W_{d}:\\rr^{\\mathbb{Z}\/d\\mathbb{Z}} \\rightarrow \\rr$ by\n \\begin{equation}\n \\label{eqn:action1}\n W_{d}\\bigl(\\{x_j\\}\\bigr):= \\sum\\limits_{j=0}^{d-1} {\\bm{h}}_{j} (x_j, x_{j+1}).\n \\end{equation}\nA sequence $\\{x_j\\}$ is a critical point of $W_d$ if and only if\n\\begin{equation}\n\\label{eqn:parabrec}\n \\mathcal{R}_{j}(x_{j-1}, x_{j}, x_{j+1}) := -\\partial_{2} {\\bm{h}}_{j-1} \\left(x_{j-1}, x_{j}\\right) - \\partial_{1} {\\bm{h}}_j \\left(x_{j}, x_{j+1}\\right)=0,\n\\end{equation}\n for all $j \\in \\mathbb{Z}$.\n The $y$-coordinates satisfy $y_{j}=\\partial_{2} {\\bm{h}}_{j-1}\\left(x_{j-1},x_{j}\\right)$.\n \n Periodicity and exactness of $\\mathcal{R}_j$ are immediate. The monotonicity follows directly from inequality~\\eqref{hmonotonicity}.\nA periodic point $z$, i.e. $F^d(z) = z$, is equivalent to the periodic sequence $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$, with $z=(x_0,y_0)\\in \\disc$.\nSince $z=(x_0,y_0)\\in \\disc$, the invariance of $\\disc$ under $F$ implies that $(x_{j},y_{j})\\in \\disc$ for all $j$.\nThe above considerations yield the following variational principle.\n\\begin{prop}\n\\label{prop:varprin1}\nA $d$-periodic sequence $\\{ (x_{j},y_{j}) \\}_{j \\in \\mathbb{Z}}$ is a $d$-periodic orbit for the system $\\{ \\widehat F_j\\}$ if and only if\nthe sequence of $x$-coordinates $\\{ x_{j} \\}$ is a critical point of $W_d$. 
\n\\end{prop}\n\nThe idea of periodic sequences can be generalized to periodic configurations.\nLet $\\{B_j\\}_{j\\in \\zz}$ be a sequence of configurations $B_j = \\{(x_j^\\mu,y_j^\\mu)~|~\\mu=1,\\cdots,m\\}\\in \\C_m(\\plane)$ with $B_{j+d} = B_j$ for all $j$. \nSuch a sequence $\\{B_j\\}$ is a $d$-periodic sequence for $\\{\\widehat F_j\\}$ if $\\widehat F_j(B_j) = B_{j+1}$ for all $j\\in \\zz$.\n\nFor a $d$-periodic sequence $\\{B_j\\}$, the $x$-projection yields a discretized braid ${\\bm{b}} = \\{{\\bm{b}}^\\mu\\} = \\{x_j^\\mu\\}$, cf.\\ Definition \\ref{PL}.\nThe above action functional can be extended to the space of discretized braids $\\Conf_m^d$:\n\\begin{equation}\n\\label{eqn:action2}\nW_d({\\bm{b}}) := \\sum_{\\mu=1}^m W_d({\\bm{b}}^\\mu), \n\\end{equation}\nwhere $W_d({\\bm{b}}^\\mu)$ is given by \\eqref{eqn:action1}. This yields the following extension of the variational principle.\n\\begin{prop}\n\\label{prop:varprin2}\nA $d$-periodic sequence $\\{ B_j \\}_{j \\in \\mathbb{Z}}$, $B_j\\in \\C_m(\\plane)$, is a $d$-periodic sequence of configurations for the system $\\{ \\widehat F_j\\}$ if and only if\nthe sequence of $x$-coordinates ${\\bm{b}}=\\{ x^\\mu_{j} \\}$ is a critical point of $W_d$ on $\\Conf_m^d$. \n\\end{prop}\n\nA discretized braid ${\\bm{b}}$ is stationary for $W_d$ if it satisfies the parabolic recurrence relations in \\eqref{eqn:parabrec} for all $\\mu$\nand the periodicity condition in Definition \\ref{PL}(b).\nIn Section \\ref{sec:braiding} we show that $d$-periodic sequences of configurations $\\{B_j\\}$ for the system $\\{ \\widehat F_j\\}$\nyield geometric braids.\n\n\n\n\n\n\n\\section{Braiding of periodic points}\n\\label{sec:braiding}\nFor symplectomorphisms $F\\in \\Symp(\\disc)$, with a finite invariant set $B\\subset \\inter \\disc$, the mapping class\ncan be identified via a chained Moser isotopy.\n\n\\begin{prop}\n\\label{prop:traceout1}\nLet $B\\subset\\inter\\disc$, with $\\# B=m$, be a finite invariant set for $F\\in \\Symp(\\disc)$ and let\n$\\phi_t$ be a chained Moser isotopy given in Proposition \\ref{Interpolation2}. Then, ${\\bm{\\beta}}(t) = \\phi_t(B)$ represents a geometric braid based at $B\\in \\C_m\\disc$ with only positive crossings and $\\beta = \\imath_B\\bigl([{\\bm{\\beta}}]_B\\bigr)$ is a\npositive word in\nthe braid monoid ${\\mathscr{B}}_m^+$.\nThe $x$-projection ${\\bm{b}}(t) = \\pi_x{\\bm{\\beta}}(t)$ on the $(t,x)$-plane is a (continuous) piecewise linear braid diagram.\n\\end{prop}\n\n Proposition \\ref{prop:MCG12} implies that the associated positive braid word $\\beta\\in \\mathcal{B}_m$, derived from the braid diagram $\\pi_x\\phi_t(B)$, determines the mapping class of $F$\nrelative to $B$.\nIf the based path ${\\bm{\\beta}}(t)=\\phi_t(B)$ is regarded as a \\emph{free loop} $\\sbb^1 \\to \\C_m\\disc$, i.e. discarding the base point, then ${\\bm{\\beta}}$ is referred to as a \\emph{closed} geometric braid in $\\disc$.\n\\begin{defn}\n\\label{defn:acylindrical}\nLet ${\\bm{\\beta}}$ be a geometric braid in $\\disc$. A component ${\\bm{\\beta}}'\\subset {\\bm{\\beta}}$ is called \\emph{cylindrical in} ${\\bm{\\beta}}$ if ${\\bm{\\beta}}'$ can be deformed onto $\\partial \\disc$ as a closed geometric braid. Otherwise ${\\bm{\\beta}}'$ is called \\emph{acylindrical}. 
A union of components ${\\bm{\\beta}}'$ is called cylindrical\/acylindrical in ${\\bm{\\beta}}$ if all members are.\n\\end{defn}\n\n\\begin{rem}\n\\label{rmk:acylindrical}\nA positive conjugacy class $\\llbracket \\gamma,\\aset\\rrbracket$ is associated with braid classes in $[{\\bm{a}}\\rel{\\bm{b}}]_{\\stackrel{\\mbox{\\tiny\\textnormal{\\raisebox{0ex}[0ex][0ex]{$+$}}}}{\\sim}}$ \nin $\\Conf_{n,m}^{d+q}$, cf.\\ Section \\ref{subsec:algpres}. If for a representative ${\\bm{a}}\\rel{\\bm{b}}$ it holds that $\\Bd({\\bm{a}})$ is cylindrical\/acylindrical\nin ${\\bm{\\gamma}} =\\Bd({\\bm{a}})\\rel\\Bd({\\bm{b}})$,\nthen $\\llbracket \\gamma,\\aset\\rrbracket$ is said to cylindrical\/acylindrical, cf.\\ Definition \\ref{defn:proper2}.\n\\end{rem}\n\n\nLet $z,z'\\in \\plane$ be distinct points with the property that $\\widehat F^n(z) = z$ and $\\widehat F^n(z') = z'$, for some $n\\ge 1$, and where $\\widehat F = \\phi_1$ and\n$\\phi_t$ a chained Moser isotopy constructed in Proposition \\ref{Interpolation2}.\nDefine the continuous functions $z(t) = \\phi_t(z)$ and $z'(t) = \\phi_t(z')$ and let $x(t)$ and $x'(t)$ the $x$-projection of $z(t)$ and $z'(t)$ respectively.\nBy Proposition \\ref{Interpolation2}, $x(t)$ and $x'(t)$ are (continuous) piecewise linear functions that are uniquely determined by the sequence\n$\\{t_j\\}_{j=0}^{nd}$, $t_j = j\/d$.\n\n\\begin{lem}[cf.\\ \\cite{BraidConleyIndex}]\n\\label{lem:propergraphs}\nThe two $x$-projections $x(t)$ and $x'(t)$ form a (piecewise linear) braid diagram, i.e.\\ no tangencies.\nThe intersection number $\\iota\\bigl(x(t),x'(t)\\bigr)$, given as the total number of intersections of the graphs of $x(t)$ and $x'(t)$\non the interval $t\\in [0,n]$,\n is well-defined and even.\n\\end{lem}\n\n\\begin{proof}\nLet $x_j = x(t_j)$ and $x_j' = x'(t_j)$, $j=0,\\cdots,nd$ and by the theory in Section \\ref{subsec:genfun} the sequences satisfy the parabolic recurrence relations\n$\\mathcal{R}_{j}(x_{j-1}, x_{j}, x_{j+1}) =0$ and $\\mathcal{R}_{j}(x'_{j-1}, x'_{j}, x'_{j+1}) =0$.\nSuppose the sequences $\\{x_j\\}$ and $\\{x'_j\\}$ have a tangency at $x_j = x_j'$ (but are not identically equal). Then, either $x'_{j-1}x'_{j+1}$.\nLet $\\tau\\in [t_j,t_{j+1}]$ be the intersection point and $x(\\tau) = x'(\\tau) = x_*$.\nAfter rescaling and shifting to the interval $[0,1]$ we have\n$x(s(t)) = x_j + (x_{j+1}-x_j)s(t)$, $s(t) = d(t-t_j) \\in [0,1]$ and the same for $x'(s(t))$.\nRecall that $\\phi_t$ is given by \n\\eqref{eqn:theisotopy} and therefore by \\eqref{Ltransform2}, \n\\[\ny(s(\\tau)) = \\partial_p L^j\\bigl(s(\\tau),x_*,x_{j+1}-x_j), \\quad y'(s(\\tau)) = \\partial_pL^j\\bigl(s(\\tau),x_*,x'_{j+1}-x'_j),\n\\]\nwhere $L^j$ are the Lagrangians for the Moser isotopies $\\psi_{t,H^j}$ in Proposition \\ref{MoserThm}.\nSince $\\partial^2_{pp}L^j\\ge \\delta>0$ and $x_{j+1}-x_j > x'_{j+1}-x'_j$, we conclude that $y(s(\\tau)) > y'(s(\\tau))$.\nBy reversing the role of $x$ and $x'$, i.e.\\ $x_j>x'_j$ and $x_{j+1}x'_{j+1}$.\nAs in the previous case\n\\[\n\\begin{aligned}\ny(s(\\tau)) &= \\partial_pL^{j-1} \\bigl(1,x_*,x_*-x_{j-1}) = \\partial_pL^j\\bigl(0,x_*,x_{j+1}-x_*);\\\\\ny'(s(\\tau)) &= \\partial_pL^{j-1} \\bigl(1,x_*,x_*-x'_{j-1}) = \\partial_pL^j\\bigl(0,x_*,x'_{j+1}-x_*),\n\\end{aligned}\n\\]\nand since $x_*-x_{j-1}>x_*-x'_{j-1}$ (and $x_{j+1}-x_* > x'_{j+1}-x_*$) we conclude that $y(s(\\tau)) >y'(s(\\tau))$. 
Reversing the role of $x$ and $x'$ yields\n$y(s(\\tau))1$, we\nuse $F^k$ instead.\nConsider three different 2-colored braid words: $\\gamma_0 = \\gamma$ as above, $\\gamma_{-1} = \n \\sigma_4\\sigma_1\\sigma_2\\sigma_3^2\\sigma_2\\sigma_1\\sigma_4$, and\n$\\gamma_1 = \\sigma_4\\sigma_1\\sigma_3\\sigma_2^2\\sigma_3\\sigma_1\\sigma_4$.\nFor all three cases the skeletal word is given by $\\beta$, the coloring is given by $\\aset=\\{3\\}$.\nConsider a symbolic sequence $\\{a_i\\}_{i=1}^k$, $a_i\\in \\{-1,0,1\\}$, then\nthe positive conjugacy class $(\\gamma,\\aset)$, with $\\aset = \\{3\\}$, and\n\\[\n\\gamma = \\gamma_{a_0} \\cdot\\gamma_{a_1} \\cdots \\gamma_{a_{k-1}}\\cdot \\gamma_{a_k},\n\\]\nis proper and acylindrical, except for $a_i=-1$, or $a_i=1$ for all $i$.\nIf follows that $(\\gamma,\\aset) \\mapsto \\beta^k$. In \\cite[Sect.\\ 4.5]{BraidConleyIndex} the braid invariant is given by ${\\mathrm{H}}\\llbracket\\gamma,\\aset\\rrbracket = \\sbb^r$,\nwhere $r$ is the number of zeroes in $\\{a_i\\}$.\nThis procedure produces many $k$-periodic point for $F$ that are forced by the invariant set $B$. As matter of fact one obtains a lower bound on the \ntopological entropy of $F$.\n\\end{exm}\n\n\\begin{exm}\n\\label{exm:exist3}\nLet $F\\in \\Symp(\\disc)$ possess an invariant set $B$ consisting of three points,\nand let the mapping class of $[F]$ relative to\n$B$ be represented by a positive braid word $\\beta = {\\bm{i}}_m^{-1}[{\\bm{\\beta}}]$,\nwith ${\\bm{\\beta}} = \\bigl\\{{\\bm{\\beta}}^1(t), {\\bm{\\beta}}^2(t), {\\bm{\\beta}}^3(t)\\bigr\\}$ being a geometric representative of \n\\[\n\\beta = \\sigma_1\\sigma_2^2\\sigma_1^2\\sigma_2^2\\sigma_1\\sigma_2\\sigma_1\\sigma_2^2\\sigma_1,\n\\]\ncf.\\ Figure \\ref{ex:nontriv1}.\nFor the intersection numbers it holds that $\\iota\\bigl({\\bm{\\beta}}^1(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\beta}}^1(t),{\\bm{\\beta}}^3(t)\\bigr) = 6$ and\n $\\iota\\bigl({\\bm{\\beta}}^2(t),{\\bm{\\beta}}^3(t)\\bigr) = 1 < 6$.\nFrom considerations in~\\cite[Sect.\\ 9.2]{BraidConleyIndex} it follows that through the black strands ${\\bm{\\beta}}$ one can plot a single red strand ${\\bm{\\alpha}}$, \nsuch that the 2-colored braid class \n$\\llbracket\\gamma,\\aset\\rrbracket$ represented by their union \n\\[\n\\gamma = \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_1\n\\sigma_2 \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_1 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_3 \\sigma_2 \\sigma_1 \\sigma_2,\n\\]\nwith $\\aset = \\{2\\}$, cf.\\ Figure \\ref{ex:nontriv1}, \nis proper, acylindrical and of nontrivial index.\nThe intersection numbers for ${\\bm{\\alpha}}$ are\n $\\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^3(t)\\bigr) = 4$.\nIf the intersection numbers are chosen more generally, i.e.\\\n$\\iota\\bigl({\\bm{\\beta}}^1(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\beta}}^3(t),{\\bm{\\beta}}^3(t)\\bigr) = 2p$ and\n $\\iota\\bigl({\\bm{\\beta}}^2(t),{\\bm{\\beta}}^3(t)\\bigr) = r < 2p$, and $\\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^2(t)\\bigr) = \\iota\\bigl({\\bm{\\alpha}}(t),{\\bm{\\beta}}^3(t)\\bigr) = 2q$,\nwhere $r > 0$ and $p \\geq 2$. 
If $r < 2q < 2p$, then \nthe singular homology of $\\llbracket\\gamma,\\aset\\rrbracket$ is given by\n$$\nH_{k}\\bigl({\\mathrm{H}}\\llbracket\\gamma,\\aset\\rrbracket\\bigr) = \\begin{cases}\n \\mathbb{R}: k= 2q,\\ 2q+1,\\\\\n 0: \\text{elsewise}.\\\\\n\\end{cases}\n$$\nBy Theorem~\\ref{thm:main1} we conclude that there are at least two additional distinct fixed points $A_1, A_2$\nwith $\\jmath_{A_i\\cup B}([F]) = \\gamma\\!\\!\\mod \\square, \\ i = 1,2$. \nIn addition,\nvia concatenating braid diagrams one can produce an infinite number of periodic solutions of different periods, cf.\\ \\cite[Lemma 47]{BraidConleyIndex}.\nThe above \nforcing result\nis specific for area-preserving mapping of the 2-disc, or $\\rr^2$ and \\emph{not} true for arbitrary diffeomorphisms of the 2-disc.\n\nFor example consider the time-1 mapping $F\\colon \\disc\\to\\disc$ given by the differential equation $\\dot{r} = r(r-a_1)(r-a_2)(r-1)$ and $\\dot{\\theta} = g(r)>0$,\nwith $g(a_1) = \\pi$ and $g(a_2) = 6\\pi$, and $0 d_{ij}^v$\\\\\nThe scheduled maintenance date is later than the sample means that the maintenance date is too late and the defect occurs on the use. Still, $d_{ij}^v$ is ``Sampled Due date'' in Figure~\\ref{f1}, but the scheduled maintenance date $D_{ij}$ is ``Maintenance date b''. In this case, $D_{ij}$ is later than $d_{ij}^v$, and the vehicle will break down on the road. In our algorithm, the number of failures will be increased by one.\n\\\\\nCase 3) $~D_{ij} = d_{ij}^v$\\\\\nThe ideal situation is that the maintenance date is scheduled on the due date. The component can be maintained exactly at the date that the component is broken. In this case, there is no penalty or failure.\n\nThe averages of the penalty costs and the number of failures from $1000$ due date samples will be used as the penalty cost and expected number of failures for the scheduled maintenance date of the component. For each operation (the single-component operation or group operation), its cost consists of three parts: the set-up cost of the car, the maintenance costs and the penalty costs of all components of the operation. The penalty cost of components is a part of the total cost, and the expected number of failures of components is the third objective to be minimized in our multi-objective optimization.\n\n\n\\subsection{Implementation of Evolutionary Algorithm Operators}\nTo solve our application problem with an EA, there are several basic issues we need to deal with, such as, how to represent an individual or solution in the population (Chromosome Encoding); how to take these chromosomes into a process of evolution (Genotype-Phenotype Mapping); how to create variations of solutions in each iteration (Genetic Operators). 
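Before turning to these operators, we illustrate the sampling-based evaluation described above for a single component. In the Python sketch below the scheduled maintenance date is compared against repeated samples of the due date; early maintenance accumulates a penalty cost and late maintenance counts as a failure. The normal due-date distribution and the linear per-day penalty rate are assumptions made for illustration; in the experiments the due dates are sampled from the predicted RUL distributions of the instances.
\begin{verbatim}
import random

def evaluate_component(scheduled_date, sample_due_date, n_samples=1000,
                       early_penalty_per_day=1.0):
    """Average penalty cost and expected number of failures for one component."""
    total_penalty, failures = 0.0, 0
    for _ in range(n_samples):
        due = sample_due_date()
        if scheduled_date < due:          # maintained too early: penalty cost
            total_penalty += early_penalty_per_day * (due - scheduled_date)
        elif scheduled_date > due:        # maintained too late: the component fails
            failures += 1
        # scheduled_date == due: neither penalty nor failure
    return total_penalty / n_samples, failures / n_samples

if __name__ == "__main__":
    due_date = lambda: random.gauss(100.0, 10.0)   # hypothetical RUL-based due date
    penalty, expected_failures = evaluate_component(95.0, due_date)
    print(penalty, expected_failures)
\end{verbatim}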
Details of these topics are given in the following subsections.\n\n\\subsubsection{Chromosome Encoding}\nIn our algorithm, a three-vector chromosome (Figure~\\ref{f0}) is proposed to represent an individual, and the three vectors are:\n\\begin{itemize\n \\item Group structure vector: the group structures of components.\n \\item Starting time vector: the starting times of operations.\n \\item Workshop assignment vector: the workshops for operations.\n\\end{itemize}\n\n\\begin{figure*}[!htbp]\n\\hspace{-0.7cm}\n\\includegraphics[height=0.45in, width=5.2in]{chrom1.png}\n\\caption{Three-vector chromosome.}\n\\label{f0}\n\\end{figure*}\n\nThe group structure vector gives the information of which components are in the same group, it is initialized by randomly picking a feasible group structure for each car (check the details in \\cite{wang2019vehicle}). The generation of the starting time vector should be later than the generation of the group structure vector because the starting time of each operation is determined by the execution window which is the entire execution window of the component for a single-component operation or the execution window intersection for a group operation. A time spot is randomly selected from the execution window or execution window intersection for each operation in order to initialize the starting time vector. \n\nA workshop is considered as ``several workshops\" based on its capacity (the number of teams). By this way, the schedule for each workshop team can be achieved from the solution. For example, consider that two workshops have three and four repairing teams respectively. Then, group operations can be randomly assigned to seven ``workshops'', the former three and the latter four represent the corresponding teams in two workshops.\n\n\\subsubsection{Genotype-Phenotype Mapping}\nTo use the power of EAs to obtain a better population, we need to evaluate each chromosome and give the better ones higher probabilities to produce offspring. This is done by genotype-phenotype mapping or decoding the chromosome. In our problem, it is to convert an individual into a feasible schedule to calculate the objectives and constraints which represent the relative superiority of a chromosome. The genotype-phenotype mapping can be easily achieved in our algorithm because the group structure, the starting time and the workshop team of the operations can be acquired directly from each individual. When converting an individual into a schedule, it is possible that the processing times of two or more operations assigned to the same workshop team are overlapping since the starting time of each operation is decided in the starting time vector. In this situation, the principle of first-come-first-served is followed: the starting time and processing time of the earlier started operation remain the same; the starting time of the later started operation is delayed until the completion of the previous operation; the processing time of the later started operation remains the same; while, an extra waiting time is added to the later started operation as a penalty because the vehicle waits in the workshop for the maintenance.\n\n\\subsubsection{Genetic Operators}\nIn accordance with the problem and its encoding, specific crossover and mutation operators have been designed for our problem (check the details in \\cite{wang2019vehicle}). 
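Before detailing these operators, the representation and the decoding step can be summarized in a short sketch. The field names, the flat list of operations and the numerical example below are simplifications made for illustration; only the first-come-first-served rule and the accumulation of waiting time are reproduced, not the full schedule construction.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Chromosome:
    groups: list        # group structure vector: the components of each operation
    start_times: list   # starting time vector: one starting time per operation
    workshops: list     # workshop assignment vector: one team index per operation

def decode(chrom, processing_times):
    """Per-team schedule with the first-come-first-served repair of overlaps."""
    schedule, waiting_time = {}, 0.0
    order = sorted(range(len(chrom.start_times)), key=lambda i: chrom.start_times[i])
    team_free_at = {}                          # earliest free time of each team
    for i in order:                            # earlier started operations go first
        team = chrom.workshops[i]
        start = max(chrom.start_times[i], team_free_at.get(team, 0.0))
        waiting_time += start - chrom.start_times[i]   # vehicle waits in the workshop
        schedule[i] = (team, start, start + processing_times[i])
        team_free_at[team] = start + processing_times[i]
    return schedule, waiting_time

if __name__ == "__main__":
    chrom = Chromosome(groups=[[0, 1], [2]], start_times=[5.0, 6.0], workshops=[0, 0])
    print(decode(chrom, processing_times=[4.0, 3.0]))
\end{verbatim}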
Both operators are applied separately to the three parts of the chromosome.\n\nFor the group structure vector, multi-point crossover can be used as crossover operator and the number of cutting points depends on the length of the vector. The same cutting points can be applied to the starting time vector when performing crossover. However, the change on the group structure vector as a consequence of the crossover may result in the invalidity of genes in the starting time vector because it is possible that the group members and execution window intersections have changed due to the new group structure. Therefore, when performing the crossover on the starting time vector, the starting times of all operations should be checked based on the new group structure and a new starting time is produced randomly from the correct intersection in the case that the starting time of an operation is invalid. The multi-point crossover can be applied to the workshop assignment vector as well. \n\nThe mutation operator alters one or more gene values in a chromosome. Similarly, the mutation should be operated on the group structure vector first due to its impact on the starting time vector; the starting time of operations should be checked and corrected after the mutation is done on the group structure vector. Afterwards, several gene values can be altered in the staring time vector and workshop assignment vector to generate a new individual.\n\n\n\\section{Proposed Preference based Algorithm}\n\\label{sec:ap-di-moea}\nAs the number of objectives and decision variables increases, the number of non-dominated solutions tends to grow exponentially \\cite{pal2018decor}. This brings more challenges on achieving efficiently a solution set with satisfactory convergence and diversity. At the same time, a huge number of solutions is needed to approximate the entire Pareto front. However, a big population means more computational time and resources. To overcome these difficulties, we propose an automatic preference based MOEA, which can generate the preference region or the region of interest (ROI) automatically and find non-dominated solutions in the preference region instead of the entire Pareto front. The automatic preference based MOEA is developed based on the framework of DI-MOEA (Diversity-Indicator based Multi-Objective Evolutionary Algorithm) \\cite{wang2019diversity}. We call our new algorithm AP-DI-MOEA.\n \nDI-MOEA is an indicator-based MOEA, it has shown to be competitive to other MOEAs on common multi-objective benchmark problems. Moreover, it is invariant to the shape of the Pareto front and can achieve evenly spread Pareto front approximations.\nDI-MOEA adopts a hybrid selection scheme:\n\\begin{itemize\n \\item The ($\\mu$ + $\\mu$) generational selection operator is used when the parent population can be layered into multiple dominance ranks. The intention is to accelerate convergence until all solutions are non-dominated.\n \\item The ($\\mu$ + 1) steady state selection operator is adopted in the case that all solutions in the parent population are mutually non-dominated and the diversity is the main selection criterion to achieve a uniform distribution of the solutions on the Pareto front.\n\\end{itemize}\n\nDI-MOEA employs non-dominated sorting as the first ranking criterion; the diversity indicator, i.e., the Euclidean distance based geometric mean gap indicator, as the second, diversity-based ranking criterion to guide the search. 
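To fix ideas, this hybrid selection scheme can be sketched as follows for minimization objective vectors. Crowding distance stands in for the secondary criterion (it is the one used by the DI-1 variant introduced next); the Euclidean distance based geometric mean gap indicator itself is not reproduced here, and the $(\mu+1)$ steady state step is only indicated in a comment.
\begin{verbatim}
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_ranks(pop):
    """Simple O(n^2) layering: rank 0 = non-dominated, rank 1 = next layer, ..."""
    remaining, ranks, r = list(range(len(pop))), {}, 0
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        for i in front:
            ranks[i] = r
        remaining = [i for i in remaining if i not in front]
        r += 1
    return ranks

def crowding_distance(front, pop):
    dist = {i: 0.0 for i in front}
    for k in range(len(pop[front[0]])):
        srt = sorted(front, key=lambda i: pop[i][k])
        span = (pop[srt[-1]][k] - pop[srt[0]][k]) or 1.0
        dist[srt[0]] = dist[srt[-1]] = float("inf")
        for a, b, c in zip(srt, srt[1:], srt[2:]):
            dist[b] += (pop[c][k] - pop[a][k]) / span
    return dist

def generational_truncation(pop, mu):
    """(mu + mu)-style survival: fill by rank, break ties by crowding distance.
    If the population is a single non-dominated front, DI-MOEA would instead
    switch to the (mu + 1) steady state step driven by the diversity indicator."""
    ranks = non_dominated_ranks(pop)
    order = sorted(range(len(pop)),
                   key=lambda i: (ranks[i],
                                  -crowding_distance([j for j in ranks
                                                      if ranks[j] == ranks[i]], pop)[i]))
    return order[:mu]

if __name__ == "__main__":
    pop = [(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (4.0, 1.0), (3.5, 3.5), (5.0, 5.0)]
    print(generational_truncation(pop, mu=4))   # extreme solutions are kept first
\end{verbatim}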
Two variants of DI-MOEA, denoted as DI-1 and DI-2, exist, which use the crowding distance and diversity indicator, respectively, as the second criteria in the ($\\mu$ + $\\mu$) generational selection operator. While, to ensure the uniformity of the final solution set, the diversity indicator is used by both variants in the ($\\mu$ + 1) steady state selection operator. Analogously, two variants of AP-DI-MOEA, i.e., AP-DI-1 and AP-DI-2, are derived from the two variants of DI-MOEA\n\n\n\nThe workings of AP-DI-MOEA are outlined in Algorithm 1. (Exceedance of) $Enum\\_P$ is a predefined condition (In our algorithm, $Enum\\_P$ is the number of evaluations.) to divide the algorithm into two phases: learning phase and decision phase. In the learning phase, the algorithm explores the possible area of Pareto optimal solutions and finds the rough approximations of the Pareto front. In the decision phase, the algorithm identifies the preference region and finds preferred solutions. When the algorithm starts running and satisfies $Enum\\_P$ at some moment, the first preference region will be generated and $Enum\\_P$ will be updated for determining a new future moment when the preference region needs to be updated. The process of updating $Enum\\_P$ continues until the end. The first $Enum\\_P$ is a boundary line. Before it is satisfied, AP-DI-MOEA runs exactly like DI-MOEA to approximate the whole Pareto front; while, after it is satisfied, the preference region is generated automatically and AP-DI-MOEA finds solutions focusing on the preference region. The subsequent values of $Enum\\_P$ define the later moments to update the preference region step by step, eventually, a precise ROI with a proper size can be achieved.\n\n\\begin{figure*}[!htbp]\n\\vspace{-3.5cm}\n\\hspace{-3cm}\n\\includegraphics[height=10in]{pg_0002.pdf}\n\\end{figure*}\n\n\\begin{figure*}[!htbp]\n\\hspace{-2.5cm}\n\\includegraphics[width=7in]{al2.pdf}\n\\end{figure*}\n\n\n\\iffalse \n\\addtocounter{algorithm}{1}\n\\begin{algorithm}[!htbp]\n\\setstretch{0.8}\n \t\\caption{Finding the knee point and defining the preference region.}\n \\label{algorithm:2}\n \t\\begin{algorithmic}[1]\n \\STATE $n \\leftarrow$ the number of objectives;\n \\STATE $P_t \\leftarrow$ current population;\n \\STATE $\\epsilon$; \/\/parameter ($>$0) for distinguishing convex\/concave shape;\n \\STATE $popsize \\leftarrow |P_t|$; \/\/population size\n \n \\STATE Declare $Q[n]$; \/\/upper quartile objective values of $P_t$ \n \\STATE Declare $L[n]$; \/\/worst objective values of $P_t$\n \\STATE Declare $knee[n]$; \/\/knee point of $P_t$\n \\STATE Declare $P\\_region[n]$; \/\/preference region of $P_t$\n \\STATE Declare $Expoints[n][n]$; \/\/extreme points\n \\STATE $foundknee \\leftarrow false$;\n \n \n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE sort($P_t$) by the $i$th objective in ascending order;\n \\STATE $Q[i] \\leftarrow P_t$.get\\_index$(\\frac{3}{4}\\times popsize)$.get\\_obj($i$); \/\/upper quartile value of the $i$th objective \n \\STATE $L[i] \\leftarrow P_t$.get\\_index$(popsize)$.get\\_obj($i$);\/\/the largest (worst) value of the $i$th objective\n \\ENDFOR\n \\FORALL{solution $s \\in P_t$}\n \\IF{$s$.get\\_obj($i=1,...,n) > Q[i]$ }\n \\STATE remove $s$ from $P_t$;\n \\ENDIF\n \\ENDFOR\n \\STATE $Expoints[\\centerdot][\\centerdot] \\leftarrow$ extreme points in $P_t$;\n \\STATE $num_a\\leftarrow$ the number of points in concave region of hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE $num_v \\leftarrow 
|P_t|-num_a$; \/\/the number of points in convex region\n \\IF{$(num_v - num_a > \\epsilon)$}\n \\STATE \/\/roughly convex shape\n \\STATE remove solutions in concave region from $P_t$;\n \\ELSIF{$(num_a - num_v > \\epsilon)$}\n \\STATE \/\/roughly concave shape\n \\STATE remove solutions in convex region from $P_t$;\n \\ELSE\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate hypervolume of $s$ with reference point $L[\\centerdot]$;\n \\STATE update the largest hypervolume value ($max\\_h$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_h$;\n \\STATE $foundknee \\leftarrow true$;\n \\ENDIF\n \\IF{($foundknee == false$)}\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate distance between $s$ and hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE update the largest distance ($max\\_d$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_d$;\n \\ENDIF\n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE $P\\_region[i] \\leftarrow knee[i] + (L[i]-knee[i]) \\times 85\\%$\n \\ENDFOR\n \t\\end{algorithmic}\n\\end{algorithm}\n\\fi\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=3.3in]{knee-explain.png} \n\\caption{Finding the knee point in bi-dimensional space.} \n\\label{knee}\n\\end{figure}\n\nThe first\/new preference region is formed based on the population at the moment when the condition of $Enum\\_P$ is satisfied, especially the knee point of the population. Algorithm 2 gives the details of line 41 in Algorithm 1, it introduces the steps of finding the knee point of a non-dominated solution set and constituting a hypercube shaped preference region according to the knee point. Figure~\\ref{knee} also gives an illustration of finding the knee point in bi-dimensional space. Firstly, the upper quartile objective values (line 13 in Algorithm 2) in the solution set are used as a boundary to define outliers and solutions outside this boundary are removed (line 16-20 in Algorithm 2). The extreme solutions (the solutions with the maximum value in one objective) (line 21 in Algorithm 2) are then found inside the boundary and a hyperplane is formed based on the extreme solutions. In a bi-dimensional space (Figure~\\ref{knee}), the hyperplane is only a line connecting two extreme solutions. According to the numbers of points below and above the hyperplane (line 22 - 23 in Algorithm 2), the shape of the solution set can be roughly perceived.\nWe will distinguish between ``convex'' and ``concave'' regions. Points in the \\textit{convex} (\\textit{concave}) \\textit{region} are dominating (dominated by) at least one point in the hyperplane spanned by the extreme points. However, when the number of the points in the convex region and the number of points in the concave region is close enough, it implies that the shape of the current solution set is almost linear. This occurs both when the true Pareto front is linear and when the solution set is converged very well in a small area of the Pareto front. A parameter $\\epsilon$ then is used to represent the closeness and it is a small number decided by the size of the solution set. In the case that the shape of the current solution set is (almost) linear, the solution with the largest hypervolume value with regards to the worst objective vector (line 14 in Algorithm 2) is adopted as the knee point (line 32 - 36 in Algorithm 2). 
While, under the condition that the shape of the current solution set is convex or concave, the solution in the convex or concave region with the largest Euclidean distance to the hyperplane is chosen as the knee point (line 39 - 42 in Algorithm 2). After the knee point is found, the preference region can be determined based on the knee point by the following formula:\n\\begin{align}\nP\\_region[i] = knee[i] + (L[i]-knee[i]) \\times 85\\%\n\\end{align}\n\nLet $i$ denotes the $i$th objective, as in Algorithm 2, $L[i]$ is the worst value of the $i$th objective in the population, $knee[i]$ is the $i$th objective value of the knee point and $P\\_region[i]$ is the upper bound of the $i$th objective. W.l.o.g. we assume the objectives are to be minimized and the lower bound of preference region is the origin point. According to the formula, we can see that the first preference region is relatively large (roughly 85\\% of the entire Pareto front). With the increase in the number of iteration, the preference region will be updated and becomes smaller and smaller because every preference region picks 85\\% of the current Pareto front. Eventually, we want the preference region can reach a proper range, say, 15\\% of the initial Pareto front. The process of narrowing down the preference region step by step can benefit the accuracy of the preference region.\n\nIn the interest of clarity, Algorithm 1 only shows the workings of AP-DI-1, the workings of AP-DI-2 can be obtained by replacing crowding distance with the diversity indicator contribution.\nIn the ($\\mu$ + $\\mu$) generational selection operator (line 14 - 36 in Algorithm 1), when there is no preference region, the second ranking criteria (the crowding distance for AP-DI-1; the diversity indicator for AP-DI-2) for all solutions on the last front are calculated and the population will be truncated based on non dominated sorting and the second ranking criteria (line 28 - 29 in Algorithm 1). While, if a preference region already exists, both the second ranking criteria and Euclidean distance to the knee point for all solutions on the last front are calculated and the population will be truncated based on first non dominated sorting, then the second ranking criteria, lastly, Euclidean distance to the knee point (line 31 - 32 in Algorithm 1). In the ($\\mu$ + 1) steady state selection operator (line 38 - 59 in Algorithm 1), firstly, the value of $Enum\\_P$ is compared with the current number of evaluations to determine if a (new) preference region should be generated. When it is time to do so, the preference region is generated through Algorithm 2 (line 41 in Algorithm 1), at the same time, the value of $Enum\\_P$ is updated to the next moment when the preference region is to be updated (line 42 in Algorithm 1). There are different strategies to assign the values of $Enum\\_P$. In our algorithm, we divide the whole computing budget into two parts, the first half is used to find an initial entire Pareto front approximation, and the second half is used to update the preference region and find solutions in the preference region. Assume the total computing budget is $Enum\\_T$ (the number of evaluations), then the first value of $Enum\\_P$ is $\\frac{1}{2}\\times Enum\\_T$. 
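To make the preference-region update tangible, the sketch below implements a simplified bi-objective version of the rule described above: the outlier removal, the $\epsilon$-test and the hypervolume-based fallback of Algorithm 2 are omitted, and only the distance-to-the-extreme-point-line selection of the knee and the 85\% shrinking formula are reproduced.
\begin{verbatim}
import numpy as np

def knee_and_region(front, shrink=0.85):
    """Bi-objective sketch: both objectives are minimized."""
    F = np.asarray(front, dtype=float)
    worst = F.max(axis=0)                      # worst objective vector L[.]
    e1 = F[F[:, 0].argmax()]                   # extreme point of objective 1
    e2 = F[F[:, 1].argmax()]                   # extreme point of objective 2
    v, rel = e2 - e1, F - e1
    # signed distance to the line through the extreme points; with this
    # orientation, positive values lie on the convex side of the front
    d = (v[0] * rel[:, 1] - v[1] * rel[:, 0]) / np.linalg.norm(v)
    convex = d > 0
    if convex.sum() >= (~convex).sum():        # roughly convex (or linear) shape
        knee = F[np.where(convex, d, -np.inf).argmax()]
    else:                                      # roughly concave shape
        knee = F[np.where(~convex, -d, -np.inf).argmax()]
    upper = knee + shrink * (worst - knee)     # P_region[i] = knee[i] + (L[i]-knee[i]) * 85%
    return knee, upper

if __name__ == "__main__":
    front = [(4.0, 1.0), (3.0, 1.2), (2.0, 1.6), (1.5, 2.5), (1.2, 3.2), (1.0, 4.0)]
    knee, upper = knee_and_region(front)
    print("knee:", knee, " preference-region upper bound:", upper)
    # twelve successive updates shrink the box to about 0.85**12 ~ 0.14 of the
    # initial range, in line with the update schedule discussed next
\end{verbatim}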
Due to the reason that we expect a final preference region with a size of around 15\\% of the initial entire Pareto front and each new preference region takes 85\\% of the current Pareto front, according to the formula: $0.85^{12} \\approx 0.14$, the value of $Enum\\_P$ can be updated by the following formula:\n\\begin{align}\nEnum\\_P = Enum\\_P + (Enum\\_T\/2)\/12\n\\end{align}\n\nAnother half of budget can be divided into $12$ partial-budgets and a new preference region is constituted after each partial-budget. In the end, the final preference region is achieved and solutions focusing on this preference region are obtained. For the rest part of the ($\\mu$ + 1) steady state selection operator, likewise, when there is a preference region, three ranking criteria (1. non-dominated sorting; 2. diversity indicator; 3. the Euclidean distance to the knee point) work together to achieve a well-converged and well-distributed set of Pareto optimal solutions in the preference region.\n\n\\section{Experimental Results}\n\\label{sec:experiments}\n\\subsection{Experimental Design}\nIn this section, simulations are conducted to demonstrate the performance of proposed algorithms on both benchmark problems and our real-world application problems. All experiments are implemented based on the MOEA Framework (\\url{http:\/\/www.moeaframework.org\/}), which is a Java-based framework for multi-objective optimization.\n\nFor the two variants of AP-DI-MOEA: AP-DI-1 and AP-DI-2, their performances have been compared with DI-MOEA: DI-1, DI-2 and NSGA-III \\cite{deb2014evolutionary}. We compare our algorithm with NSGA-III because NSGA-III is a representative state-of-the-art evolutionary multi-objective algorithm and it is very powerful to handle problems with non-linear characteristics. For bi-objective benchmark problems, algorithms are tested on ZDT1 and ZDT2 with 30 variables. For three objective benchmark problems, DTLZ1 with 7 variables and DTLZ2 with 12 variables are tested. For the real-world application problem of VFMSO, experiments have been conducted on two instances with different sizes. The configurations of the two instances, such as the predicted RUL probability distribution, the processing time and maintenance cost of each component, the set-up time and cost of each car, are made available on \\url{http:\/\/moda.liacs.nl}. On every problem, we run each algorithm $30$ times with different seeds, while the same $30$ different seeds are used for all algorithms. All the experiments are performed with a population size of $100$; and for bi-objective problems, experiments are run with a budget of $22000$ (objective function) evaluations, DTLZ three objective problems with a budget of $120000$ evaluations, the VFMSO problems with a budget of $1200000$ evaluations. This setting is chosen to be more realistic in the light of the applications in scheduling that we ultimately want to solve.\n\n\n\n\n\n\n\\subsection{Experiments on bi-objective problems}\n\nBi-objective problems are optimized with a total budget of $22000$ evaluations, when the number of evaluations reaches $10000$ times, the first preference region is generated, then after every $1200$ evaluations, the preference region will be updated. Figure~\\ref{fig:ZDT1} shows the Pareto front approximations from a typical run on ZDT1 (left column) and ZDT2 (right column). The graphs on the upper row are obtained from DI-1 and AP-DI-1, while the graphs on the lower row are from DI-2 and AP-DI-2. 
In each graph, the entire Pareto front approximation from DI-MOEA and the preferred solutions from AP-DI-MOEA (or \\textit{AP solutions}) are presented, at the same time, the preference region of AP-DI-MOEA is also shown by the gray area.\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on ZDT1 and ZDT2.}\n\\label{fig:ZDT1}\n\\end{figure}\n\nBesides the visualization of the Pareto fronts, we also compute the knee point of the entire final Pareto front approximation from DI-MOEA via the strategy described in Algorithm 2. For each run of DI-MOEA and AP-DI-MOEA with the same seed, the following two issues have been checked: \n\\begin{itemize\n\\item If the knee point from DI-MOEA is in the preference region achieved by its derived AP-DI-MOEA;\n\\item If the knee point from DI-MOEA is dominated by or dominating AP solutions; or if it is a non-dominated solution (mutually non-dominated with all AP solutions).\n\\end{itemize}\n\nTable~\\ref{table-kneezdt1} shows the results of 30 runs. For ZDT1 problem, all 30 knee points from DI-1 and DI-2 are in the preference regions from AP-DI-1 and AP-DI-2 respectively; in all these knee points, 10 from DI-1 and 7 from DI-2 are dominated by AP solutions. For ZDT2 problem, most knee points are not in corresponding preference regions, but for those in the preference regions, almost all of them are dominated by AP solutions. Please note that when a knee point from DI-MOEA is outside of the preference region from AP-DI-MOEA, it is not possible that it can dominate any AP solutions because all AP solutions are in the preference region and only solutions in the left side of the gray area can dominate AP solutions. \n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 20 & 23 & 1 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 10 & 7 & 9 & 9 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 20 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1}\n\\end{table} \n\nWe also perform the same comparison between AP-DI-MOEA and NSGA-III, the results are shown in Table~\\ref{table-kneezdt1-nsga3}. For ZDT1 problem, all knee points from NSGA-III are in the preference regions from AP-DI-MOEA. Some of these knee points dominate AP solutions. For ZDT2 problem, most knee points from NSGA-III are not in the preference regions and these knee points are incomparable with AP solutions. For the knee points in the preference regions, all three dominating relations with AP solutions appear. 
For both problems, when the knee point from NSGA-III is dominating AP solutions, it only dominates one AP solution.\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 14 & 19 & 3 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 2 & 3 \\\\\n \\cline{2-6}\nregion & Dominating & 16 & 11 & 4 & 6 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 21 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1-nsga3}\n\\end{table} \n\nInstead of spreading the population across the entire Pareto front, we only focus on the preference region. To ensure that our algorithm can guide the search towards the preference region and the achieved solution set is distributed across the preference region, we compare the performance of AP-DI-MOEA, DI-MOEA and NSGA-III in the preference region. For each Pareto front approximation from DI-MOEA and NSGA-III, the solutions in the corresponding preference region from AP-DI-MOEA are picked, and we compare these solutions with AP solutions through the hypervolume indicator. The point formed by the largest objective values over all solutions in the preference region is adopted as the reference point when calculating the hypervolume indicator. It has been found that all hypervolume values of new solution sets from DI-MOEA and NSGA-III in the preference region are worse than the hypervolume values of the solution sets from AP-DI-MOEA, which proves that the mechanism indeed works in practice. Figure~\\ref{box:ZDT} shows box plots of the distribution of hypervolume indicators over 30 runs.\n\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\caption{Boxplots comparing the hypervolume values on ZDT1 and ZDT2.}\n\\label{box:ZDT}\n\\end{figure}\n\n\n\\subsection{Experiments on three objective problems}\nDTLZ1 and DTLZ2 are chosen as three objective benchmark problems to investigate our algorithms. They are performed with a total budget of $120000$ fitness evaluations, when the evaluation reaches $60000$ times, the first preference region is formed, then after every $5000$ evaluations, the preference region is updated. Figure~\\ref{fig:dtlz} shows the Pareto front approximations from a typical run on DTLZ1 (left column) and DTLZ2 (right column). The upper graphs are obtained from DI-1 and AP-DI-1, while the lower graphs are from DI-2 and AP-DI-2. In each graph, the Pareto front approximations from DI-MOEA and corresponding AP-DI-MOEA are given. 
Since the target region is actually an axis aligned box, the obtained knee region (i.e., the intersection of the axis aligned box with the Pareto front) has an inverted triangle shape for these two benchmark problems.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di1-tdi1-w.png}\\\\\n \\vspace{0.45cm}\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{2cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di1-tdi1-w.png}\\\\\n \n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on DTLZ1 and DTLZ2.}\n\\label{fig:dtlz}\n\\end{figure*}\n\n\n\nTable~\\ref{table-kneedtlz} shows the space and dominance relation of the knee point from DI-MOEA and the solution set from AP-DI-MOEA over 30 runs. For DTLZ1 problem, most knee points from DI-MOEA are in their respective preference regions and all knee points are mutually non-dominated with AP solutions. For DTLZ2 problem, we observed that more knee points are not in the corresponding preference regions. This is because too few solutions from DI-MOEA are in the preference region. For DTLZ1 problem, six solutions from DI-MOEA are in the corresponding preference region on average for each run, while, for DTLZ2 problem, only less than two solutions are in the corresponding preference region on average. Therefore, we can see that on the one side, it is normal that many knee points from the entire Pareto fronts are not in their corresponding preference regions; on the other side, our aim of finding more fine-grained resolution in the preference region has been well achieved because only few solutions can be obtained in the preference region if we spread the population across the entire Pareto front. At the same time, one knee point from DI-1 on DTLZ2 is dominated by solutions from the corresponding AP-DI-1, which proves that AP-DI-MOEA can converge better than DI-MOEA because AP-DI-MOEA focuses on the preference region.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 29 & 27 & 10 & 13\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 0 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 1 & 3 & 19 & 17 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz}\n\\end{table} \n\n\nAP-DI-1 and AP-DI-2 have also been compared with NSGA-III in the same way. Table~\\ref{table-kneedtlz_nsga3} shows the comparison result. For DTLZ1, the average number of solutions from NSGA-III in the corresponding preference regions from AP-DI-MOEA is six. Still, almost all knee solutions from NSGA-III are in the preference region. 
For DTLZ2, the average number of solutions from NSGA-III in the corresponding preference region from AP-DI-MOEA is less than one, while, in more than half of 30 runs, the knee points from NSGA-III are still in the preference region. To some extent, it can be concluded that the preference regions from AP-DI-MOEA are accurate. It can also be observed that AP-DI-1 behaves better than AP-DI-2 on DTLZ2, because two knee points from NSGA-III dominate the solutions from AP-DI-2.\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 &AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 30 & 29 & 14 & 17\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 1 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 2 \\\\\n\\hline\nOutside & Incomparable & 0 & 1 & 15 & 10 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz_nsga3}\n\\end{table} \n\nSimilarly, we pick from DI-MOEA and NSGA-III solutions which are in the corresponding preference region of AP-DI-MOEA, and the hypervolume indicator value is compared between these solutions and AP solutions. It has been found that all hypervolume values of solutions from AP-DI-MOEA are better than those of solutions from DI-MOEA and NSGA-III. The left column of Figure~\\ref{box:dtlz} shows box plots of the distribution of hypervolume values over 30 runs on DTLZ1, and the right column shows the hypervolume comparison on DTLZ2.\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[DTLZ1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtzl1-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz1-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[DTLZ2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtlz2-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz2-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Boxplots comparing the hypervolume values on DTLZ1 and DTLZ2.}\n\\label{box:dtlz}\n\\end{figure}\n\nIn our experiments, we decide half of the total budget is used to find an initial Pareto front because it turned out to be a good compromise: half budget for the initial Pareto front and another half budget for the solutions focusing on the preference region. We also run experiments using 25\\% and 75\\% of the total budget for the initial Pareto front. Figure~\\ref{fig:dtlz-budget} presents the entire Pareto front from DI-MOEA and the Pareto front from AP-DI-MOEA with different budgets for the initial Pareto front. The left two images are on DTLZ1 and the right two images are on DTLZ2. The uppper two images are from DI-1 and AP-DI-1; the lower two images are from DI-2 and AP-DI-2. In the legend labels, 50\\%, 25\\% and 75\\% indicate the budgets which are utilized to find the initial entire Pareto front. It can be observed that the preference region from AP-DI-MOEA with 50\\% of budget are located on a better position than with 25\\% and 75\\% budgets, and the position of the preference region from AP-DI-MOEA with 50\\% of budget is more stable. 
Therefore, in our algorithm, 50\\% of budget is used before the generation of preference region.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di1-tdi1-3combine-1.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di2-tdi2-3combine-3.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{1.5cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di1-tdi1-3combine-16.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di2-tdi2-3combine-16.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation by different budgets generating initial Pareto front.}\n\\label{fig:dtlz-budget}\n\\end{figure*}\n\n\\subsection{Experiments on Vehicle Fleet Maintenance Scheduling Optimization}\nThe budget of $1200000$ evaluations has been used on the real-world application problems, and $600000$ of them are for the initial Pareto front. After that, the preference region is updated after every $50000$ evaluations.\nThe VFMSO problem has been tested with different sizes. Figure~\\ref{fig:20cars} shows Pareto front approximations of a problem with $20$ cars and $3$ workshops (V1), and each car contains $13$ components: one engine, four springs, four brakes and four tires \\cite{van2019modeling}. It can be observed that AP-DI-1 and AP-DI-2 can zoom in the entire Pareto front and find solutions in the preference region, at the same time, both AP-DI-1 and AP-DI-2 converge better than their corresponding DI-1 and DI-2. A similar conclusion can be drawn from Pareto fronts approximations of the problem with $30$ cars and $5$ workshops (V2) in Figure~\\ref{fig:30cars}.\n\n\nIn Figure~\\ref{fig:2030cars}, We put the Pareto front approximations from DI-MOEA, AP-DI-MOEA and NSGA-III on V1 (left) and V2 (right) together. The behaviours of DI-1, DI-2 and NSGA-III are similar on V1, so are the behaviours of AP-DI-1 and AP-DI-2 on this problem. While, DI-2 and AP-DI-2 converge better than DI-1 and AP-DI-1 on V2 problem. 
The behaviour of NSGA-III is between that of DI-1 and DI-2.\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 20 cars and 3 workshops.}\n\\label{fig:20cars}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 30 cars and 5 workshops.}\n\\label{fig:30cars}\n\\end{figure*}\n\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[V1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3cm}\n\\subfigure[V2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on VFMSO problem by DI-MOEA, AP-DI-MOEA and NSGA-III.}\n\\label{fig:2030cars}\n\\end{figure*}\n\n\nTable~\\ref{table-v1} gives the space and dominance relation of knee points from DI-MOEA and solutions from AP-DI-MOEA on these two VFMSO problems. For both problems, only few knee points from DI-MOEA are in the preference regions of AP-DI-MOEA, and the main reason is that the Pareto front of AP-DI-MOEA converges better than that of DI-MOEA, in some cases, the Pareto front of DI-MOEA cannot even reach the corresponding preference region. More importantly, it can be observed that most knee points from DI-MOEA, no matter whether in the preference region or outside of the preference region, are dominated by the solutions from AP-DI-MOEA. This phenomenon is even more obvious for the application problem with bigger size and run with the same budget as the smaller one: for V2, 90\\% of knee points from DI-MOEA are dominated by the solutions from AP-DI-MOEA.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 0\\\\\n\\cline{2-6}\npreference & Dominated & 9 & 7 & 9 & 6 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 4 & 9 & 3 & 3 \\\\\n\\cline{2-6}\np-region & Dominated & 17 & 14 & 18 & 21 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1}\n\\end{table} \n\n\nTable~\\ref{table-v1-nsga3} gives the space and dominance relation of knee points from NSGA-III and AP solutions. 
For both problems, again, most knee points from NSGA-III are not in the preference regions of AP-DI-MOEA. Some knee points from NSGA-III are dominated by AP solutions and most of them are incomparable with AP solutions. \n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 1 & 3& 2 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 1 & 1 \\\\\n\\hline\nOutside & Incomparable & 23 & 24 & 21 & 18 \\\\\n\\cline{2-6}\np-region & Dominated & 7 & 5 & 5 & 8 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1-nsga3}\n\\end{table} \n\n\\iffalse \n The right image of Figure~\\ref{fig:30cars-2} presents the entire Pareto front approximations of V2 from four different MOEAs: DI-1, DI-2, RVEA and NSGA-III. It can be seen that DI-MOEA (both DI-1 and DI-2) and NSGA-III converge to the similar area in the end, while, RVEA reaches another area of the objective space. Table~\\ref{table_hy} provides the average hypervolume value of the four Pareto fronts from 30 runs and the reference point for each run is formed by the largest objective value from all solutions. It can be seen that DI-2 behaves the best and RVEA the worst.\n\n\n\\begin{table}[htbp]\n\\caption{Hypervolume values}\n\\label{table_hy}\n\\begin{center}\n\\begin{tabular}{l|c}\n\\hline\nDI-1 & 0.0525\\\\\n\\hline\nDI-2 & 0.0576\\\\\n\\hline\nRVEA & 0.0202\\\\\n\\hline\nNSGAIII & 0.0534\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\fi\n\n\n\n\n\n\\section{CONCLUSIONS}\n\\label{sec:conclusion}\nIn this paper, a preference based multi-objective evolutionary algorithm, AP-DI-MOEA, is proposed. In the absence of explicitly provided preferences, the knee region is usually treated as the region of interest or preference region. Given this, AP-DI-MOEA can generate the knee region automatically and can find solutions with a more fine-grained resolution in the knee region. This has been demonstrated on the bi-objective problems ZDT1 and ZDT2, and the three objective problems DTLZ1 and DTLZ2. In the benchmark, the new approach was also proven to perform better than NSGA-III which was included in the benchmark as a state-of-the-art reference algorithm.\n\nThe research for the preference based algorithm was originally motivated by a real-world optimization problem, namely, Vehicle Fleet Maintenance Scheduling Optimization (VFMSO), which is described in this paper in a new formulation as a three objective discrete optimization problem. A customized set of operators (initialization, recombination, and mutation) is proposed for a multi-objective evolutionary algorithm with a selection strategy based on DI-MOEA and, respectively, AP-DI-MOEA. The experimental results of AP-DI-MOEA on two real-world application problem instances of different scales show that the newly proposed algorithm can generate preference regions automatically and it (in both cases) finds clearly better and more concentrated solution sets in the preference region than DI-MOEA. 
For completeness, it was also tested against NSGA-III, and a better approximation in the preference region was observed for AP-DI-MOEA.\n\nSince the real-world VFMSO problem is the core problem to be solved and its Pareto front is convex, we did not consider problems with irregularly shaped Pareto fronts.\nHow to adapt the algorithm to problems with more irregular shapes remains an interesting open question. Besides, the proposed approach requires a definition of knee points. Future work will provide a more detailed comparison of different variants of methods to generate knee points, as they are briefly introduced in Section \\ref{sec:literature}. In the application of maintenance scheduling, it will also be important to integrate robustness and uncertainty in the problem definition. It is desirable to generate schedules that are robust within a reasonable range of disruptions and uncertainties such as machine breakdowns and processing time variability.\n\n\n\n\\section*{Acknowledgment}\nThis work is part of the research programme Smart Industry SI2016 with project name CIMPLO and project number 15465, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO).\n\n\n\n\n\\section{Introduction}\n\\label{sec:introduction}\nThe vehicle fleet maintenance scheduling optimization (VFMSO) problem was initially proposed in \\cite{wang2019vehicle} in response to the increasing demand from companies, corporations, and organizations of all sorts that rely on vehicle fleets to deliver products and services and need to maintain their vehicles for safety reasons. In the problem, a vehicle fleet, such as a taxi fleet or a bus fleet, can be maintained in multiple separate workshops according to a maintenance schedule. To be specific, each workshop has its own capacity and capabilities: on the one hand, each workshop has its own teams and each team can work on only one car at a time; on the other hand, each workshop is limited to the maintenance of specific component(s) due to restrictions in the equipment or the skill level of the staff. The maintenance schedule is optimized for each component based on its remaining useful lifetime (RUL), which has been predicted by predictive approaches or models \\cite{elattar2016prognostics}. Furthermore, the cost and time required by different workshops are taken into account because maintaining the same component can produce different costs and workloads depending on the workshop in which the operation is performed. The VFMSO problem is essential because solving it not only ensures that vehicles are safe to use, but also leads to lower maintenance costs and longer vehicle lifetimes.\n\nTo enhance the approach in \\cite{wang2019vehicle}, specifically to handle the uncertainty in the problem and to apply it to new application scenarios, we improve it in this paper in the following two aspects:\n\n\\begin{itemize}\n\\item[1.] Using the predicted RUL of each component as its due date involves a lot of uncertainty: no matter how accurate the predictive model is, the component may still break before or after the predicted due date. Therefore, instead of only the RUL, we use the RUL probability distribution as the basis for assigning maintenance times in the scheduling optimization.\n\n\\item[2.] The VFMSO problem usually leads to a large and complex solution space; however, finding the most preferred solution is the ultimate goal. 
To this end, AP-DI-MOEA (Automatic Preference based DI-MOEA) is developed based on the framework of DI-MOEA (Diversity-Indicator based Multi-Objective Evolutionary Algorithm) \\cite{wang2019diversity}. The new algorithm can generate the preference region automatically and find solutions with a more fine-grained resolution in the preference region.\n\\end{itemize}\n\nThis paper is organized as follows. Section \\ref{sec:formulation} formulates the enhanced VFMSO problem. A literature review on preference based optimization is provided in Section \\ref{sec:literature}. The customized multi-objective evolutionary algorithm for the enhanced VFMSO is introduced in Section \\ref{sec:customizedalg}, and in Section \\ref{sec:ap-di-moea}, we explain AP-DI-MOEA. Section \\ref{sec:experiments} presents and discusses experiments and their results. Lastly, Section \\ref{sec:conclusion} concludes the paper and outlines directions for future work.\n\n\n\\section{Problem Formulation}\n\\label{sec:formulation}\nFor a car fleet operated by an operator, the components of cars (e.g., springs, brakes, tires or the engine) can fail and should be maintained regularly. Some separate workshops are available for the maintenance of the car fleet, and the repair time and maintenance cost are known for each component in each workshop. Beside the time and cost for repairing the car component, a fixed set-up cost and set-up time are also considered for each visit of a car to a workshop, which correspond to the cost and time required for the preparation of the maintenance operation. \n\nThe enhanced VFMSO problem addressed in this paper is defined as follows:\n\\begin{enumerate}\n\\item There are $n$ cars $C=\\{{C_{1},C_{2}},\\cdots,C_{n}\\}$ and $m$ workshops $W=\\{{W_{1},W_{2}},\\\\\\cdots,W_{m}\\}$.\n\n\\item Each car $C_i$ comprises $l_i$ components to be maintained for $i=1,\\cdots,n$.\n\n\\item For each component $O_{ij}$ ($j=1,\\cdots,l_i$), i.e., the $j$th component of car $C_i$, there is a set of workshops capable of repairing it. The set of workshops is represented by $W_{ij}$ which is a subset of $W$.\n\n\\item The processing time for maintaining component $O_{ij}$ in workshop $W_k$ is predefined and denoted by $p_{ijk}$.\n\n\\item The cost for maintaining component $O_{ij}$ in workshop $W_k$ is predefined and denoted by $q_{ijk}$.\n\n\\item The set-up time of car $C_i$ in workshop $W_k$ is predefined and denoted by $x_{ik}$.\n\n\\item The set-up cost of car $C_i$ in workshop $W_k$ is predefined and denoted by $y_{ik}$.\n\n\\item The number of teams in workshop $W_k$ is predefined and denoted by $z_k$.\n\n\\item The previous repair time of component $O_{ij}$ is recorded and denoted by $L_{ij}$.\n\\end{enumerate}\n\nAt the same time, the following assumptions are made:\n\\begin{enumerate}\n\\item All workshops and teams are available at the time of the optimization and assumed to be continuously available.\n\n\\item All the components are independent from each other.\n\n\\item Times required for transport of cars from\/to workshops are included in the maintenance time and cost of cars, and the set-up time\n\n\\item Environmental changes (such as car accidents) are not considered here.\n\n\\item There are no precedence constraints among the components of different cars. 
Cars are maintained on a first-come-first-served basis.\n\n\\item Each team can only work on one operation at a time and an operation, once started, must run to completion.\n\n\\item No operation can start before the completion of the previous operation.\n\\end{enumerate}\n\nTwo constraints are considered in the problem. As mentioned earlier, each workshop can only repair specific components, and this is the first constraint. Another constraint is that the maintenance periods of different operations for the same car should not overlap. It is obviously wrong if two overlapping maintenance operations of a car are assigned to different workshops because one car cannot be in two different workshops at the same time. If two overlapping maintenance operations of a car are assigned to the same workshop, it is not correct either because these two maintenance operations should be grouped together as one operation in this case. The grouping strategy will be explained in Section \\ref{sec:customizedalg}.\n\nThree objectives are assumed to be relevant for the vehicle fleet operator, which are the total workload, total cost and expected number of failures. In a multi-objective optimization problem, the objectives typically are conflicting, i.e., achieving the optimal value for one objective requires some compromise on other objectives. In our problem, the fact that a faster maintenance usually is more expensive leads to the conflict between the first two objectives. The expected number of failures counts the times when the vehicles are broken on the road. Here, the expected value is used because the actual value is unknown at the time of the optimization due to uncertainties in the predictions. When the expected number of failures is large, less maintenance tasks are performed, therefore, the workload and cost can drop. \n\nLet $T_k$ denote the sum of the times spent for all operations that are processed in workshop $W_k$; $M_i$ the sum of all costs spent for all maintenance operations of car $C_i$; $F_{ij}$ the number of failures of component $O_{ij}$. Three objectives can be defined as: \n\\begin{flalign}\n&\\text{Minimize the total workload:} ~~ f_1 = \\sum_{k=1}^{m}T_k &&\\\\\n&\\text{Minimize the total cost:} ~~ f_2 = \\sum_{i=1}^{n}M_i &&\\\\\n\\nonumber\n&\\text{Minimize the expected number of failures:} \\\\\n&f_3 = \\sum_{i=1}^{n}\\sum_{j=1}^{l_i} \\mathbb{E}(F_{ij})\n\\end{flalign}\n\n\n\\section{LITERATURE REVIEW}\n\\label{sec:literature}\n\nMulti-objective scheduling optimization is a major topic in the research of manufacturing systems. Its fundamental task is to organize work and workloads to achieve comprehensive optimization in multiple aspects, such as the processing time, processing cost and production safety, by deploying resources, setting maintenance time and processing sequence. 
In the past decades, this issue has received a great deal of interest and research in different fields, such as scheduling of charging\/discharging for electric vehicles \\cite{zakariazadeh2014multi}; scheduling in cloud computing \\cite{ramezani2015evolutionary}; scheduling of crude oil operations \\cite{hou2015pareto}; scheduling in the manufacturing industry to reduce carbon emissions \\cite{ding2016carbon}; scheduling medical treatments for resident patients in a hospital \\cite{jeric2012multi}; scheduling for Internet service providers \\cite{bhamare2017multi}, and so on.\n\n\nAs a typical workshop-style scheduling problem, the flexible job shop scheduling problem (FJSP) is an essential branch of production planning problems. The FJSP consists of a set of independent jobs to be processed on multiple machines, and each job contains several operations with a predetermined order. It is assumed that each operation must be processed with a specified processing time on a specific machine chosen from multiple alternatives. The problem has been extensively studied in the literature (for example, \\cite{chiang2013simple}, \\cite{yuan2015multiobjective}, \\cite{gao2019review}). The FJSP is the research basis of the maintenance scheduling optimization problem, and many real-world problems extend the standard FJSP by adding specific features. \\cite{ozguven2010mathematical} considers the FJSP-PPF (process plan flexibility), where jobs can have alternative process plans. It is assumed that the process plans are known in advance and that they are represented by linear precedence relationships. Because only one of the alternative plans has to be adopted for each job, the FJSP-PPF deals not only with routing and sequencing sub-problems, but also with the process plan selection sub-problem. In that work, a mixed-integer linear programming model is developed for the FJSP-PPF. In \\cite{demir2014effective}, a mathematical model and a genetic algorithm are proposed to handle overlapping operations. In the standard setting, a lot which contains a batch of identical items is transferred from one machine to the next only when all items in the lot have completed their processing; with overlapping, sublots are transferred from one machine to the next for processing without waiting for the entire lot to be processed at the predecessor machine, meaning that a successor operation of a job can start before its predecessor has been completed entirely. Three features are considered in \\cite{yu2017extended}, which are (1) job priority; (2) parallel operations: some operations can be processed simultaneously; (3) sequence flexibility: the sequence of some operations can be exchanged. A mixed integer linear programming (MILP) model is established to formulate the problem and an improved differential evolution algorithm is designed. Because unexpected events occur in most real manufacturing systems, a further type of scheduling problem, known as the dynamic scheduling problem, has emerged. This type of problem considers random machine breakdowns, the addition of new machines, new job arrivals, job cancellations, changing processing times, rush orders, rework or quality problems, due date changes, etc. Corresponding works on the FJSP include \\cite{fattahi2010dynamic}, \\cite{al2011robust}, \\cite{shen2015mathematical}, \\cite{ahmadi2016multi}. 
Compared with the standard FJSP, our VFMSO problem has some special properties: (1) flexible sequence: the sequence of the components is not predefined, but mainly influenced by the RUL probability distribution. (2) multiple problem parameters: besides the processing time, other problem parameters like the maintenance cost, set-up time, set-up cost, repair teams, etc, also have impacts on the result.\n\n\n\nOur real-world problem, like many other multi-objective optimization problems, can lead to a large objective space. However, finding a well-distributed set of solutions on the Pareto front requests a large population size and computational effort. Therefore, instead of spreading a limited size of individuals across the entire Pareto front, we decide to only focus on a part of the Pareto front, to be specific, the search for solutions will be only guided towards the preference region which, in our algorithm, is determined by the knee point. It has been argued in the literature that knee points are most interesting solutions, naturally preferred solutions and most likely the optimal choice of the decision maker (DM) \\cite{das1999characterizing, mattson2002minimal, deb2003multi, branke2004finding}. \n\n\n\n\nThe knee point is a point for which a small improvement in any objective would lead to a large deterioration in at least one other objective. In the last decade, several methods have been presented to identify knee points or knee regions. Das \\cite{das1999characterizing} refers the point where the Pareto surface ``bulges\" the most as the knee point, and this point corresponds to the farthest solution from the convex hull of individual minima which is the minima of the single objective functions. Zitzler \\cite{zitzler2004tutorial} defines $\\epsilon$-dominance: a solution $a$ is said to $\\epsilon$-dominate a solution $b$ if and only if $f_i(a)+\\epsilon \\geq f_i(b) ~\\forall i=1,...,m$ where $m$ is the number of objectives. A solution with a higher $\\epsilon$-dominance value with respect to the other solutions in the Pareto front approximation, is a solution having higher trade-offs and in this definition corresponds to a knee point. The authors of \\cite{yu2018method} propose to calculate the density of solutions projected onto the hyperplane constructed by the extreme points of the non-dominated solutions, then identify the knee regions based on the solution density. \n\nDifferent algorithms of applying knee points in MOEA have also been proposed.\nBranke \\cite{branke2004finding} modifies the second criterion in NSGA-II \\cite{deb2002fast}, and replaces the crowding distance by either an angle-based measure or a utility-based measure. The angle-based method calculates the angle between an individual and its two neighbors in the objective space. The smaller the angle, the more clearly the individual can be classified as a knee point. However, this method can only be used for two objective problems. In the utility-based method, a marginal utility function is suggested to approximate the angle-based measure in the case of more than two objectives. The larger the external angle between a solution and its neighbors, the larger the gain in terms of linear utility obtained from substituting the neighbors with the solution of interest. 
However, the utility-based measure is not suited for finding knees in concave regions of the Pareto front.\n\nRachmawati \\cite{rachmawati2006multi, rachmawati2006multi2} proposes a knee-based MOEA which computes a transformation of original objective values based on a weighted sum niching approach. The extent and the density of coverage of the knee regions are controllable by the parameters for the niche strength and pool size. The strategy is susceptible to the loss of less pronounced knee regions.\n\nSch{\\\"u}tze \\cite{schutze2008approximating} investigates two strategies for the approximation of knees of bi-objective optimization problems with stochastic search algorithms. Several new definitions for identifying knee points and knee regions for bi-objective optimization problems has been suggested in \\cite{deb2011understanding} and the possibility of applying them has also been discussed.\n\nBesides the knee points, the reference points, which are normally provided by the DM, have also been used to find a set of solutions near reference points. Deb \\cite{deb2006reference} proposes an MOEA, called R-NSGA-II, by which a set of Pareto optimal solutions near a supplied set of reference points can be found. The dominance relation together with a modified crowding distance operator is used in this methodology. For all solutions of the population, the distances to all reference points are calculated and ranked. The lowest rank (over all reference points) of a solution is used as its crowding distance. Besides, a parameter $\\epsilon$ is used to control the spread of obtained solutions. Bechikh proposes KR-NSGA-II \\cite{bechikh2010searching} by extending R-NSGA-II. Instead of obtaining the reference points from the DM, in KR-NSGA-II, the knee points are used as mobile reference points and the search of the algorithm was guided towards these points. The number of knee points of the optimization problem is needed as prior information in KR-NSGA-II.\n\nGaudrie \\cite{gaudrie2019targeting} uses the projection (intersection in case of a continuous front) of the closest non-dominated point on the line connecting the estimated ideal and nadir points as default preference. Conditional Gaussian process simulations are performed to create possible Pareto fronts, each of which defines a sample for the ideal and the nadir point, and the estimated ideal and nadir are the medians of the samples.\n\nRachmawati and Srinivasan \\cite{rachmawati2009multiobjective} evaluate the worthiness of each non-dominated solution in terms of compromise between the objectives. The local maxima is then identified as potential knee solutions and the linear weighted-sums of the original objective functions are optimized to guide solutions toward the knee regions. \n\nAnother idea of incorporating preference information into evolutionary multi-objective optimization is proposed in \\cite{thiele2009preference}. They combine the fitness function and an achievement scalarizing function containing the reference point. In this approach, the preference information is given in the form of a reference point and an indicator-based evolutionary algorithm IBEA \\cite{zitzler2004indicator} is modified by embedding the preference information into the indicator. Various further preference based MOEAs have been suggested, e.g., \\cite{braun2011preference, ramirez2017knee, wang2017new}. 
\n\nIn our proposed algorithm, i.e., AP-DI-MOEA, we adopt the method from \\cite{das1999characterizing} to identify the knee point, design the preference region based on the knee point, and guide the search towards the preference region. The advantages of our algorithm are: (1) no prior knowledge is used in identifying the knee point and knee region; (2) the preference region is generated automatically and narrowed down step by step to benefit its accuracy; (3) our strategy cannot only handle bi-objective optimization problems, but also tri- and many-objective problems; (4) although we integrate the strategy with DI-MOEA, it may be integrated with any standard MOEAs (such as NSGA-II \\cite{deb2002fast}, SMS-EMOA \\cite{beume2007sms} and others); (5) the proposed algorithm is capable of finding preferred solutions for multi-objective optimization problems with linear, convex, concave Pareto fronts and discrete problems.\n\n\\section{Customized Algorithm for Vehicle Fleet Maintenance Scheduling Optimization}\n\\label{sec:customizedalg}\nFor our real-world VFMSO problem, we first define the execution window for each component based on its predicted RUL probability distribution which is assumed to be a normal distribution. The execution window suggests that the maintenance of the component can only start at a time spot inside the window. The mean ($\\mu$) and standard deviation ($\\sigma$) of the predicted RUL probability distribution determine the interval of the execution window, which is defined as: [$\\mu -2\\times \\sigma$, $\\mu +2\\times \\sigma$]. The interval is chosen relatively long because 95\\% of the values are within two standard deviations of the mean, therefore, maintenance before or after the interval hardly makes sense.\n\nAfter the determination of the execution window, the following two special strategies have been taken to improve the process of scheduling optimization: \n\\begin{itemize}\n\\item Grouping components.\n\\item Obtaining the penalty cost and expected number of failures by Monte Carlo simulation.\n\\end{itemize}\nLastly, evolutionary algorithm (EA) is chosen to solve this real-world application problem due to its powerful characteristics of robustness and flexibility to capture global solutions of complex combinatorial optimization problems. Moreover, EAs are well suited to solve multi-objective optimization problems due to their ability to approximate the entire Pareto front in a single run.\n\n\\subsection{Grouping Components}\nIt would be troublesome and also a waste of time and effort to send a car to workshops repeatedly in a short period of time to repair different components. In our algorithm, since each component has its execution window for its maintenance, it is possible to combine the maintenance of several components to one visit if their execution windows overlap. Especially, by grouping the maintenance of multiple components into one maintenance operation, the set-up cost and set-up time are charged only once for the complete group of components. \n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[height=120pt, width=260pt]{newgroup.png} \n\\caption{Possible groups for a car with eight components.} \n\\label{f2}\n\\end{figure}\n\nFigure~\\ref{f2} represents the execution windows of eight components of a car. The overlap of the execution windows shows the possibility of grouping these components. 
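\n\nAs a small illustration of the execution windows and the grouping idea described above, the following sketch (illustrative Python with hypothetical names, independent of our implementation) derives the execution window of a component from the mean and standard deviation of its predicted RUL distribution and checks whether several components of the same car share a common overlap in which a group operation could start:\n
\\begin{verbatim}\n
def execution_window(mu, sigma):\n
    # window covering roughly 95% of the predicted RUL distribution\n
    return (mu - 2.0 * sigma, mu + 2.0 * sigma)\n
\n
def common_overlap(windows):\n
    # intersection of execution windows; None means the components cannot be grouped\n
    start = max(w[0] for w in windows)\n
    end = min(w[1] for w in windows)\n
    return (start, end) if start <= end else None\n
\n
# e.g. three components of one car, (mu, sigma) of their predicted RULs in days\n
windows = [execution_window(100, 10), execution_window(115, 8), execution_window(130, 12)]\n
overlap = common_overlap(windows)  # a group operation must start inside this interval\n
\\end{verbatim}\n\n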
Combining components is only effective if there is a common overlap of the execution windows of components from the same car, and the starting time of the group operation must lie within this common overlap. In this example, component $c1$ can be grouped with $c2$ and\/or $c3$ due to the overlap between their execution windows. Other possible group structures can be deduced in the same manner. \n\n\\subsection{Monte Carlo Simulation}\nWithin the execution window of a component, an arbitrary time can be chosen as the starting time for maintaining the component. The maintenance time of each component should be as close as possible to its real due date, because:\n\\begin{itemize}\n\\item Performing the maintenance too early results in higher maintenance costs in the long term, because more maintenance tasks have to be done. \n\\item The risk of breaking down on the road increases if the maintenance date is too late.\n\\end{itemize}\nTherefore, we use Monte Carlo simulation to simulate the ``real'' due dates of each component. In our experiments, stable estimates are already obtained with a few hundred samples; in our case, $1000$ samples of the due date are generated in the execution window of each component according to its predicted RUL probability distribution (see Section IV). Figure~\\ref{f1} shows an example of the execution window derived from the predicted RUL probability distribution of a component. After the 1000 sampled due dates are generated in the execution window, the scheduled maintenance date of the component is compared with these samples one by one, and each comparison can lead to one of three situations. Let us use $d_{ij}^v$ to denote the $v$th due date sample of component $O_{ij}$, and $D_{ij}$ the scheduled maintenance date of component $O_{ij}$. The three possibilities after the comparison are:\n\\\\\nCase 1) $~D_{ij} < d_{ij}^v$\\\\\nThe scheduled maintenance date is earlier than the sample (the ``real'' due date), which means that the component will be maintained before it breaks. In this case, its useful life between the maintenance date and the due date will be wasted. Therefore, a corresponding penalty cost is imposed to reflect the waste. To calculate the penalty cost, a linear penalty function is suggested based on the following assumptions:\n\\begin{itemize}\n\\item If a component is maintained when it is new or the previous maintenance has just been completed, the penalty cost would be the full cost of maintaining it, which is $c+s$: the maintenance cost of the component plus the set-up cost of the car;\n\\item If a component is maintained exactly at its due date, the penalty cost would be 0.\n\\end{itemize}\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[ width=4in]{newpenalty.png} \n\\caption{Execution window of a component.} \n\\label{f1}\n\\end{figure}\n\\vspace{-0.0cm}\n\nAssume $d_{ij}^v$ is the ``Sampled Due date'' in Figure~\\ref{f1} and $D_{ij}$ is ``Maintenance date a''; in this case, $D_{ij}$ is earlier than $d_{ij}^v$. The penalty cost of ``Maintenance date a'' for ``Sampled Due date'' corresponds to the vertical dotted line above ``Maintenance date a''.\n\\\\\nCase 2) $~D_{ij} > d_{ij}^v$\\\\\nThe scheduled maintenance date is later than the sample, which means that the maintenance is too late and the defect occurs during use. Again, $d_{ij}^v$ is the ``Sampled Due date'' in Figure~\\ref{f1}, but the scheduled maintenance date $D_{ij}$ is now ``Maintenance date b''. In this case, $D_{ij}$ is later than $d_{ij}^v$, and the vehicle will break down on the road. 
In our algorithm, the number of failures will be increased by one.\n\\\\\nCase 3) $~D_{ij} = d_{ij}^v$\\\\\nThe ideal situation is that the maintenance date is scheduled on the due date. The component can be maintained exactly at the date that the component is broken. In this case, there is no penalty or failure.\n\nThe averages of the penalty costs and the number of failures from $1000$ due date samples will be used as the penalty cost and expected number of failures for the scheduled maintenance date of the component. For each operation (the single-component operation or group operation), its cost consists of three parts: the set-up cost of the car, the maintenance costs and the penalty costs of all components of the operation. The penalty cost of components is a part of the total cost, and the expected number of failures of components is the third objective to be minimized in our multi-objective optimization.\n\n\n\\subsection{Implementation of Evolutionary Algorithm Operators}\nTo solve our application problem with an EA, there are several basic issues we need to deal with, such as, how to represent an individual or solution in the population (Chromosome Encoding); how to take these chromosomes into a process of evolution (Genotype-Phenotype Mapping); how to create variations of solutions in each iteration (Genetic Operators). Details of these topics are given in the following subsections.\n\n\\subsubsection{Chromosome Encoding}\nIn our algorithm, a three-vector chromosome (Figure~\\ref{f0}) is proposed to represent an individual, and the three vectors are:\n\\begin{itemize\n \\item Group structure vector: the group structures of components.\n \\item Starting time vector: the starting times of operations.\n \\item Workshop assignment vector: the workshops for operations.\n\\end{itemize}\n\n\\begin{figure*}[!htbp]\n\\hspace{-0.7cm}\n\\includegraphics[height=0.45in, width=5.2in]{chrom1.png}\n\\caption{Three-vector chromosome.}\n\\label{f0}\n\\end{figure*}\n\nThe group structure vector gives the information of which components are in the same group, it is initialized by randomly picking a feasible group structure for each car (check the details in \\cite{wang2019vehicle}). The generation of the starting time vector should be later than the generation of the group structure vector because the starting time of each operation is determined by the execution window which is the entire execution window of the component for a single-component operation or the execution window intersection for a group operation. A time spot is randomly selected from the execution window or execution window intersection for each operation in order to initialize the starting time vector. \n\nA workshop is considered as ``several workshops\" based on its capacity (the number of teams). By this way, the schedule for each workshop team can be achieved from the solution. For example, consider that two workshops have three and four repairing teams respectively. Then, group operations can be randomly assigned to seven ``workshops'', the former three and the latter four represent the corresponding teams in two workshops.\n\n\\subsubsection{Genotype-Phenotype Mapping}\nTo use the power of EAs to obtain a better population, we need to evaluate each chromosome and give the better ones higher probabilities to produce offspring. This is done by genotype-phenotype mapping or decoding the chromosome. 
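\n\nTo make the three-vector encoding and its decoding concrete, a minimal sketch is given below (in Python, with hypothetical field names; the actual implementation differs). One individual stores the group structure, the starting times and the workshop-team assignments, and decoding simply reads one (components, starting time, workshop team) triple per operation:\n
\\begin{verbatim}\n
from dataclasses import dataclass\n
from typing import List, Tuple\n
\n
@dataclass\n
class Chromosome:\n
    groups: List[List[int]]     # group structure vector: component indices per operation\n
    start_times: List[float]    # starting time vector: one starting time per operation\n
    teams: List[int]            # workshop assignment vector: one workshop team per operation\n
\n
def decode(ind: Chromosome) -> List[Tuple[List[int], float, int]]:\n
    # one (components, starting time, workshop team) triple per operation\n
    return list(zip(ind.groups, ind.start_times, ind.teams))\n
\n
# e.g. two operations: components 0 and 1 grouped into one visit, component 2 alone\n
ind = Chromosome(groups=[[0, 1], [2]], start_times=[12.0, 40.5], teams=[3, 0])\n
operations = decode(ind)\n
\\end{verbatim}\n\n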
In our problem, decoding converts an individual into a feasible schedule so that the objectives and constraints, which reflect the relative quality of a chromosome, can be calculated. The genotype-phenotype mapping can be achieved easily in our algorithm because the group structure, the starting time and the workshop team of the operations can be read directly from each individual. When converting an individual into a schedule, it is possible that the processing times of two or more operations assigned to the same workshop team overlap, since the starting time of each operation is fixed in the starting time vector. In this situation, the principle of first-come-first-served is followed: the starting time and processing time of the earlier started operation remain the same; the starting time of the later started operation is delayed until the completion of the previous operation; the processing time of the later started operation remains the same; in addition, an extra waiting time is added to the later started operation as a penalty because the vehicle has to wait in the workshop for its maintenance.\n\n\\subsubsection{Genetic Operators}\nIn accordance with the problem and its encoding, specific crossover and mutation operators have been designed for our problem (check the details in \\cite{wang2019vehicle}). Both operators are applied separately to the three parts of the chromosome.\n\nFor the group structure vector, multi-point crossover can be used as the crossover operator, and the number of cutting points depends on the length of the vector. The same cutting points can be applied to the starting time vector when performing crossover. However, the change of the group structure vector as a consequence of the crossover may invalidate genes in the starting time vector, because the group members and execution window intersections may have changed under the new group structure. Therefore, when performing the crossover on the starting time vector, the starting times of all operations should be checked against the new group structure, and a new starting time is drawn randomly from the correct intersection whenever the starting time of an operation is invalid. The multi-point crossover can be applied to the workshop assignment vector as well. \n\nThe mutation operator alters one or more gene values in a chromosome. Similarly, the mutation should be applied to the group structure vector first because of its impact on the starting time vector; the starting times of operations should be checked and corrected after the mutation of the group structure vector. Afterwards, several gene values can be altered in the starting time vector and the workshop assignment vector to generate a new individual.\n\n\n\\section{Proposed Preference based Algorithm}\n\\label{sec:ap-di-moea}\nAs the number of objectives and decision variables increases, the number of non-dominated solutions tends to grow exponentially \\cite{pal2018decor}. This makes it more challenging to efficiently achieve a solution set with satisfactory convergence and diversity. At the same time, a huge number of solutions is needed to approximate the entire Pareto front; a large population, however, means more computational time and resources. To overcome these difficulties, we propose an automatic preference based MOEA, which can generate the preference region, or region of interest (ROI), automatically and find non-dominated solutions in the preference region instead of on the entire Pareto front. 
The automatic preference based MOEA is developed based on the framework of DI-MOEA (Diversity-Indicator based Multi-Objective Evolutionary Algorithm) \\cite{wang2019diversity}. We call our new algorithm AP-DI-MOEA.\n \nDI-MOEA is an indicator-based MOEA, it has shown to be competitive to other MOEAs on common multi-objective benchmark problems. Moreover, it is invariant to the shape of the Pareto front and can achieve evenly spread Pareto front approximations.\nDI-MOEA adopts a hybrid selection scheme:\n\\begin{itemize\n \\item The ($\\mu$ + $\\mu$) generational selection operator is used when the parent population can be layered into multiple dominance ranks. The intention is to accelerate convergence until all solutions are non-dominated.\n \\item The ($\\mu$ + 1) steady state selection operator is adopted in the case that all solutions in the parent population are mutually non-dominated and the diversity is the main selection criterion to achieve a uniform distribution of the solutions on the Pareto front.\n\\end{itemize}\n\nDI-MOEA employs non-dominated sorting as the first ranking criterion; the diversity indicator, i.e., the Euclidean distance based geometric mean gap indicator, as the second, diversity-based ranking criterion to guide the search. Two variants of DI-MOEA, denoted as DI-1 and DI-2, exist, which use the crowding distance and diversity indicator, respectively, as the second criteria in the ($\\mu$ + $\\mu$) generational selection operator. While, to ensure the uniformity of the final solution set, the diversity indicator is used by both variants in the ($\\mu$ + 1) steady state selection operator. Analogously, two variants of AP-DI-MOEA, i.e., AP-DI-1 and AP-DI-2, are derived from the two variants of DI-MOEA\n\n\n\nThe workings of AP-DI-MOEA are outlined in Algorithm 1. (Exceedance of) $Enum\\_P$ is a predefined condition (In our algorithm, $Enum\\_P$ is the number of evaluations.) to divide the algorithm into two phases: learning phase and decision phase. In the learning phase, the algorithm explores the possible area of Pareto optimal solutions and finds the rough approximations of the Pareto front. In the decision phase, the algorithm identifies the preference region and finds preferred solutions. When the algorithm starts running and satisfies $Enum\\_P$ at some moment, the first preference region will be generated and $Enum\\_P$ will be updated for determining a new future moment when the preference region needs to be updated. The process of updating $Enum\\_P$ continues until the end. The first $Enum\\_P$ is a boundary line. Before it is satisfied, AP-DI-MOEA runs exactly like DI-MOEA to approximate the whole Pareto front; while, after it is satisfied, the preference region is generated automatically and AP-DI-MOEA finds solutions focusing on the preference region. 
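\n\nAs a minimal illustration of the ranking criteria used in the selection operators, the sketch below (illustrative Python; the solution fields and helper names are hypothetical) shows a lexicographic sort key of the kind that can be used once a preference region exists: solutions are compared first by non-domination rank, then by the diversity-based criterion, and lastly by their Euclidean distance to the knee point:\n
\\begin{verbatim}\n
def ranking_key(solution, knee_point):\n
    rank = solution["rank"]              # from non-dominated sorting (lower is better)\n
    diversity = -solution["diversity"]   # larger diversity contribution is better\n
    knee_dist = sum((a - b) ** 2 for a, b in\n
                    zip(solution["objectives"], knee_point)) ** 0.5\n
    return (rank, diversity, knee_dist)\n
\n
# population.sort(key=lambda s: ranking_key(s, knee_point)) keeps preferred solutions first\n
\\end{verbatim}\n\n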
The subsequent values of $Enum\\_P$ define the later moments to update the preference region step by step, eventually, a precise ROI with a proper size can be achieved.\n\n\\begin{figure*}[!htbp]\n\\vspace{-3.5cm}\n\\hspace{-3cm}\n\\includegraphics[height=10in]{pg_0002.pdf}\n\\end{figure*}\n\n\\begin{figure*}[!htbp]\n\\hspace{-2.5cm}\n\\includegraphics[width=7in]{al2.pdf}\n\\end{figure*}\n\n\n\\iffalse \n\\addtocounter{algorithm}{1}\n\\begin{algorithm}[!htbp]\n\\setstretch{0.8}\n \t\\caption{Finding the knee point and defining the preference region.}\n \\label{algorithm:2}\n \t\\begin{algorithmic}[1]\n \\STATE $n \\leftarrow$ the number of objectives;\n \\STATE $P_t \\leftarrow$ current population;\n \\STATE $\\epsilon$; \/\/parameter ($>$0) for distinguishing convex\/concave shape;\n \\STATE $popsize \\leftarrow |P_t|$; \/\/population size\n \n \\STATE Declare $Q[n]$; \/\/upper quartile objective values of $P_t$ \n \\STATE Declare $L[n]$; \/\/worst objective values of $P_t$\n \\STATE Declare $knee[n]$; \/\/knee point of $P_t$\n \\STATE Declare $P\\_region[n]$; \/\/preference region of $P_t$\n \\STATE Declare $Expoints[n][n]$; \/\/extreme points\n \\STATE $foundknee \\leftarrow false$;\n \n \n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE sort($P_t$) by the $i$th objective in ascending order;\n \\STATE $Q[i] \\leftarrow P_t$.get\\_index$(\\frac{3}{4}\\times popsize)$.get\\_obj($i$); \/\/upper quartile value of the $i$th objective \n \\STATE $L[i] \\leftarrow P_t$.get\\_index$(popsize)$.get\\_obj($i$);\/\/the largest (worst) value of the $i$th objective\n \\ENDFOR\n \\FORALL{solution $s \\in P_t$}\n \\IF{$s$.get\\_obj($i=1,...,n) > Q[i]$ }\n \\STATE remove $s$ from $P_t$;\n \\ENDIF\n \\ENDFOR\n \\STATE $Expoints[\\centerdot][\\centerdot] \\leftarrow$ extreme points in $P_t$;\n \\STATE $num_a\\leftarrow$ the number of points in concave region of hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE $num_v \\leftarrow |P_t|-num_a$; \/\/the number of points in convex region\n \\IF{$(num_v - num_a > \\epsilon)$}\n \\STATE \/\/roughly convex shape\n \\STATE remove solutions in concave region from $P_t$;\n \\ELSIF{$(num_a - num_v > \\epsilon)$}\n \\STATE \/\/roughly concave shape\n \\STATE remove solutions in convex region from $P_t$;\n \\ELSE\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate hypervolume of $s$ with reference point $L[\\centerdot]$;\n \\STATE update the largest hypervolume value ($max\\_h$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_h$;\n \\STATE $foundknee \\leftarrow true$;\n \\ENDIF\n \\IF{($foundknee == false$)}\n \\FORALL{solution $s\\in P_t$}\n \\STATE calculate distance between $s$ and hyperplane formed by $Expoints[\\centerdot][\\centerdot]$;\n \\STATE update the largest distance ($max\\_d$);\n \\ENDFOR\n \\STATE $knee[\\centerdot] \\leftarrow$ solution with $max\\_d$;\n \\ENDIF\n \\FOR{{\\bf each} $i \\in \\{ 1, \\dots, n\\}$}\n \\STATE $P\\_region[i] \\leftarrow knee[i] + (L[i]-knee[i]) \\times 85\\%$\n \\ENDFOR\n \t\\end{algorithmic}\n\\end{algorithm}\n\\fi\n\n\\begin{figure}[!htbp]\n\\centering\n\\includegraphics[width=3.3in]{knee-explain.png} \n\\caption{Finding the knee point in bi-dimensional space.} \n\\label{knee}\n\\end{figure}\n\nThe first\/new preference region is formed based on the population at the moment when the condition of $Enum\\_P$ is satisfied, especially the knee point of the population. 
Algorithm 2 gives the details of line 41 in Algorithm 1: it describes how to find the knee point of a non-dominated solution set and how to construct a hypercube-shaped preference region around it. Figure~\\ref{knee} illustrates the procedure for finding the knee point in bi-dimensional space. First, the upper quartile values of the objectives (line 13 in Algorithm 2) in the solution set are used as a boundary to define outliers, and solutions outside this boundary are removed (lines 16-20 in Algorithm 2). The extreme solutions (the solutions with the maximum value in one objective) are then identified inside the boundary (line 21 in Algorithm 2), and a hyperplane is formed through these extreme solutions. In a bi-dimensional space (Figure~\\ref{knee}), the hyperplane is simply the line connecting the two extreme solutions. From the numbers of points below and above the hyperplane (lines 22-23 in Algorithm 2), the shape of the solution set can be roughly assessed.\nWe distinguish between ``convex'' and ``concave'' regions. Points in the \\textit{convex} (\\textit{concave}) \\textit{region} dominate (are dominated by) at least one point in the hyperplane spanned by the extreme points. When the numbers of points in the convex and concave regions are sufficiently close, the shape of the current solution set is almost linear. This occurs both when the true Pareto front is linear and when the solution set has converged very well in a small area of the Pareto front. A parameter $\\epsilon$, a small number chosen according to the size of the solution set, is used to quantify this closeness. When the shape of the current solution set is (almost) linear, the solution with the largest hypervolume value with regard to the worst objective vector (line 14 in Algorithm 2) is adopted as the knee point (lines 32-36 in Algorithm 2). When instead the shape of the current solution set is convex or concave, the solution in the convex or concave region with the largest Euclidean distance to the hyperplane is chosen as the knee point (lines 39-42 in Algorithm 2). After the knee point is found, the preference region is determined from the knee point by the following formula:\n\\begin{align}\nP\\_region[i] = knee[i] + (L[i]-knee[i]) \\times 85\\%\n\\end{align}\n\nHere $i$ denotes the $i$th objective; as in Algorithm 2, $L[i]$ is the worst value of the $i$th objective in the population, $knee[i]$ is the $i$th objective value of the knee point, and $P\\_region[i]$ is the upper bound of the preference region in the $i$th objective. W.l.o.g.\\ we assume that all objectives are to be minimized and that the lower bound of the preference region is the origin. According to this formula, the first preference region is relatively large (roughly 85\\% of the entire Pareto front). As the number of iterations increases, the preference region is updated and becomes smaller and smaller, because each new preference region covers 85\\% of the current Pareto front. Eventually, we want the preference region to reach a proper size, say, 15\\% of the initial Pareto front. 
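\n\nTo make the construction concrete, the following is a simplified sketch for the bi-objective minimization case (the array layout, the NumPy helpers and the handling of ties are ours; the authoritative procedure is the one of Algorithm 2):\n\\begin{verbatim}\n# Simplified bi-objective sketch of the knee point and preference region;\n# names and conventions are ours, not the paper's implementation.\nimport numpy as np\n\ndef preference_region(points, eps):\n    P = np.asarray(points, dtype=float)      # non-dominated set, shape (n, 2)\n    Q = np.quantile(P, 0.75, axis=0)         # upper quartile of each objective\n    L = P.max(axis=0)                        # worst value of each objective\n    P = P[np.all(P <= Q, axis=1)]            # remove outliers\n    e1, e2 = P[P[:, 0].argmax()], P[P[:, 1].argmax()]   # extreme points\n    # signed side, w.r.t. the line through the extreme points, of every solution\n    d = (e2[0]-e1[0])*(P[:, 1]-e1[1]) - (e2[1]-e1[1])*(P[:, 0]-e1[0])\n    o = (e2[0]-e1[0])*(0.0-e1[1]) - (e2[1]-e1[1])*(0.0-e1[0])  # side of the origin\n    convex = np.sign(d) == np.sign(o)        # origin side = "convex" region\n    n_v, n_a = int(convex.sum()), int((~convex).sum())\n    if abs(n_v - n_a) <= eps:                # (almost) linear front\n        hv = (L[0]-P[:, 0])*(L[1]-P[:, 1])   # hypervolume w.r.t. the worst point\n        knee = P[hv.argmax()]\n    else:                                    # convex or concave front\n        side = np.sign(o) if n_v > n_a else -np.sign(o)\n        knee = P[(side*d).argmax()]          # farthest from the line, majority side\n    upper = knee + 0.85*(L - knee)           # upper bound of the preference region\n    return knee, upper\n\\end{verbatim}\nApplied to the current non-dominated set, the returned upper bound plays the role of $P\\_region[\\cdot]$, with the origin as the lower bound.\n\n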
Narrowing down the preference region step by step benefits the accuracy of the final preference region.\n\nIn the interest of clarity, Algorithm 1 only shows the workings of AP-DI-1; the workings of AP-DI-2 are obtained by replacing the crowding distance with the diversity indicator contribution.\nIn the ($\\mu$ + $\\mu$) generational selection operator (lines 14-36 in Algorithm 1), when there is no preference region, the second ranking criterion (the crowding distance for AP-DI-1; the diversity indicator for AP-DI-2) is calculated for all solutions on the last front, and the population is truncated based on non-dominated sorting and this second ranking criterion (lines 28-29 in Algorithm 1). If a preference region already exists, both the second ranking criterion and the Euclidean distance to the knee point are calculated for all solutions on the last front, and the population is truncated based first on non-dominated sorting, then on the second ranking criterion, and lastly on the Euclidean distance to the knee point (lines 31-32 in Algorithm 1). In the ($\\mu$ + 1) steady state selection operator (lines 38-59 in Algorithm 1), the value of $Enum\\_P$ is first compared with the current number of evaluations to determine whether a (new) preference region should be generated. When it is time to do so, the preference region is generated through Algorithm 2 (line 41 in Algorithm 1) and, at the same time, the value of $Enum\\_P$ is updated to the next moment at which the preference region is to be updated (line 42 in Algorithm 1). Different strategies can be used to assign the values of $Enum\\_P$. In our algorithm, we divide the total computing budget into two parts: the first half is used to find an initial approximation of the entire Pareto front, and the second half is used to update the preference region and find solutions inside it. Let the total computing budget be $Enum\\_T$ (the number of evaluations); then the first value of $Enum\\_P$ is $\\frac{1}{2}\\times Enum\\_T$. Since we expect a final preference region with a size of around 15\\% of the initial entire Pareto front, and each new preference region covers 85\\% of the current Pareto front ($0.85^{12} \\approx 0.14$), the value of $Enum\\_P$ is updated by the following formula:\n\\begin{align}\nEnum\\_P = Enum\\_P + (Enum\\_T\/2)\/12\n\\end{align}\n\nThe second half of the budget is thus divided into $12$ partial budgets, and a new preference region is constructed after each partial budget. In the end, the final preference region is reached and solutions concentrating on this preference region are obtained. In the remaining part of the ($\\mu$ + 1) steady state selection operator, likewise, once a preference region exists, three ranking criteria (1.\\ non-dominated sorting; 2.\\ the diversity indicator; 3.\\ the Euclidean distance to the knee point) work together to achieve a well-converged and well-distributed set of Pareto optimal solutions in the preference region.\n\n\\section{Experimental Results}\n\\label{sec:experiments}\n\\subsection{Experimental Design}\nIn this section, simulations are conducted to demonstrate the performance of the proposed algorithms on both benchmark problems and our real-world application problems. 
All experiments are implemented based on the MOEA Framework (\\url{http:\/\/www.moeaframework.org\/}), which is a Java-based framework for multi-objective optimization.\n\nFor the two variants of AP-DI-MOEA: AP-DI-1 and AP-DI-2, their performances have been compared with DI-MOEA: DI-1, DI-2 and NSGA-III \\cite{deb2014evolutionary}. We compare our algorithm with NSGA-III because NSGA-III is a representative state-of-the-art evolutionary multi-objective algorithm and it is very powerful to handle problems with non-linear characteristics. For bi-objective benchmark problems, algorithms are tested on ZDT1 and ZDT2 with 30 variables. For three objective benchmark problems, DTLZ1 with 7 variables and DTLZ2 with 12 variables are tested. For the real-world application problem of VFMSO, experiments have been conducted on two instances with different sizes. The configurations of the two instances, such as the predicted RUL probability distribution, the processing time and maintenance cost of each component, the set-up time and cost of each car, are made available on \\url{http:\/\/moda.liacs.nl}. On every problem, we run each algorithm $30$ times with different seeds, while the same $30$ different seeds are used for all algorithms. All the experiments are performed with a population size of $100$; and for bi-objective problems, experiments are run with a budget of $22000$ (objective function) evaluations, DTLZ three objective problems with a budget of $120000$ evaluations, the VFMSO problems with a budget of $1200000$ evaluations. This setting is chosen to be more realistic in the light of the applications in scheduling that we ultimately want to solve.\n\n\n\n\n\n\n\\subsection{Experiments on bi-objective problems}\n\nBi-objective problems are optimized with a total budget of $22000$ evaluations, when the number of evaluations reaches $10000$ times, the first preference region is generated, then after every $1200$ evaluations, the preference region will be updated. Figure~\\ref{fig:ZDT1} shows the Pareto front approximations from a typical run on ZDT1 (left column) and ZDT2 (right column). The graphs on the upper row are obtained from DI-1 and AP-DI-1, while the graphs on the lower row are from DI-2 and AP-DI-2. In each graph, the entire Pareto front approximation from DI-MOEA and the preferred solutions from AP-DI-MOEA (or \\textit{AP solutions}) are presented, at the same time, the preference region of AP-DI-MOEA is also shown by the gray area.\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-tdi1-w.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-tdi2-w.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on ZDT1 and ZDT2.}\n\\label{fig:ZDT1}\n\\end{figure}\n\nBesides the visualization of the Pareto fronts, we also compute the knee point of the entire final Pareto front approximation from DI-MOEA via the strategy described in Algorithm 2. 
For each run of DI-MOEA and AP-DI-MOEA with the same seed, the following two issues have been checked: \n\\begin{itemize}\n\\item whether the knee point from DI-MOEA lies in the preference region achieved by the corresponding AP-DI-MOEA;\n\\item whether the knee point from DI-MOEA is dominated by or dominates AP solutions, or whether it is mutually non-dominated with all AP solutions.\n\\end{itemize}\n\nTable~\\ref{table-kneezdt1} shows the results of 30 runs. For the ZDT1 problem, all 30 knee points from DI-1 and DI-2 lie in the preference regions from AP-DI-1 and AP-DI-2, respectively; of these knee points, 10 from DI-1 and 7 from DI-2 are dominated by AP solutions. For the ZDT2 problem, most knee points are not in the corresponding preference regions, but almost all of those inside the preference regions are dominated by AP solutions. Note that when a knee point from DI-MOEA lies outside of the preference region from AP-DI-MOEA, it cannot dominate any AP solution, because all AP solutions are in the preference region and only solutions on the left side of the gray area can dominate AP solutions. \n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 20 & 23 & 1 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 10 & 7 & 9 & 9 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 20 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1}\n\\end{table} \n\nWe also perform the same comparison between AP-DI-MOEA and NSGA-III; the results are shown in Table~\\ref{table-kneezdt1-nsga3}. For the ZDT1 problem, all knee points from NSGA-III are in the preference regions from AP-DI-MOEA, and some of these knee points dominate AP solutions. For the ZDT2 problem, most knee points from NSGA-III are not in the preference regions, and these knee points are incomparable with AP solutions. For the knee points inside the preference regions, all three dominance relations with AP solutions occur. For both problems, when a knee point from NSGA-III dominates AP solutions, it dominates only one AP solution.\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on ZDT problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{ZDT1} & \\multicolumn{2}{|c|}{ZDT2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 14 & 19 & 3 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 2 & 3 \\\\\n \\cline{2-6}\nregion & Dominating & 16 & 11 & 4 & 6 \\\\\n\\hline\nOutside & Incomparable & 0 & 0 & 21 & 20 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneezdt1-nsga3}\n\\end{table} \n\nInstead of spreading the population across the entire Pareto front, we only focus on the preference region. 
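\n\nThe containment and dominance checks underlying Tables~\\ref{table-kneezdt1} and \\ref{table-kneezdt1-nsga3} reduce to a box test and a standard Pareto-dominance comparison. A minimal sketch for the minimization case (the array conventions and function names are ours, not code of the MOEA Framework) is:\n\\begin{verbatim}\n# Illustrative checks of a knee point against the AP preference region and\n# the AP solution set (minimization assumed); names are ours.\nimport numpy as np\n\ndef in_preference_region(knee, upper):\n    # the preference region is the axis-aligned box [0, upper] in objective space\n    return bool(np.all(np.asarray(knee) <= np.asarray(upper)))\n\ndef dominance_relation(knee, ap_front):\n    knee = np.asarray(knee, dtype=float)\n    front = np.asarray(ap_front, dtype=float)\n    if any(np.all(p <= knee) and np.any(p < knee) for p in front):\n        return "dominated"      # some AP solution dominates the knee point\n    if any(np.all(knee <= p) and np.any(knee < p) for p in front):\n        return "dominating"     # the knee point dominates some AP solution\n    return "incomparable"       # mutually non-dominated with all AP solutions\n\\end{verbatim}\n\n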
To ensure that our algorithm can guide the search towards the preference region and the achieved solution set is distributed across the preference region, we compare the performance of AP-DI-MOEA, DI-MOEA and NSGA-III in the preference region. For each Pareto front approximation from DI-MOEA and NSGA-III, the solutions in the corresponding preference region from AP-DI-MOEA are picked, and we compare these solutions with AP solutions through the hypervolume indicator. The point formed by the largest objective values over all solutions in the preference region is adopted as the reference point when calculating the hypervolume indicator. It has been found that all hypervolume values of new solution sets from DI-MOEA and NSGA-III in the preference region are worse than the hypervolume values of the solution sets from AP-DI-MOEA, which proves that the mechanism indeed works in practice. Figure~\\ref{box:ZDT} shows box plots of the distribution of hypervolume indicators over 30 runs.\n\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[ZDT1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt1-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt1-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[ZDT2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{zdt2-di1-box1.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{zdt2-di2-box2.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\caption{Boxplots comparing the hypervolume values on ZDT1 and ZDT2.}\n\\label{box:ZDT}\n\\end{figure}\n\n\n\\subsection{Experiments on three objective problems}\nDTLZ1 and DTLZ2 are chosen as three objective benchmark problems to investigate our algorithms. They are performed with a total budget of $120000$ fitness evaluations, when the evaluation reaches $60000$ times, the first preference region is formed, then after every $5000$ evaluations, the preference region is updated. Figure~\\ref{fig:dtlz} shows the Pareto front approximations from a typical run on DTLZ1 (left column) and DTLZ2 (right column). The upper graphs are obtained from DI-1 and AP-DI-1, while the lower graphs are from DI-2 and AP-DI-2. In each graph, the Pareto front approximations from DI-MOEA and corresponding AP-DI-MOEA are given. Since the target region is actually an axis aligned box, the obtained knee region (i.e., the intersection of the axis aligned box with the Pareto front) has an inverted triangle shape for these two benchmark problems.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di1-tdi1-w.png}\\\\\n \\vspace{0.45cm}\n \\includegraphics[height=2.5in,width=4.4in]{dtlz1-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{2cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di1-tdi1-w.png}\\\\\n \n \\includegraphics[height=2.7in,width=4.4in]{dtlz2-di2-tdi2-w.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on DTLZ1 and DTLZ2.}\n\\label{fig:dtlz}\n\\end{figure*}\n\n\n\nTable~\\ref{table-kneedtlz} shows the space and dominance relation of the knee point from DI-MOEA and the solution set from AP-DI-MOEA over 30 runs. 
For the DTLZ1 problem, most knee points from DI-MOEA are in their respective preference regions and all knee points are mutually non-dominated with AP solutions. For the DTLZ2 problem, more knee points lie outside the corresponding preference regions. This is because too few solutions from DI-MOEA fall inside the preference region: for DTLZ1, on average six solutions from DI-MOEA per run lie in the corresponding preference region, whereas for DTLZ2 fewer than two do on average. Therefore, on the one hand, it is not surprising that many knee points of the entire Pareto fronts are not in their corresponding preference regions; on the other hand, our aim of obtaining a more fine-grained resolution in the preference region has clearly been achieved, since only a few solutions end up in the preference region if the population is spread across the entire Pareto front. At the same time, one knee point from DI-1 on DTLZ2 is dominated by solutions from the corresponding AP-DI-1, which shows that AP-DI-MOEA converges better than DI-MOEA because AP-DI-MOEA focuses on the preference region.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 29 & 27 & 10 & 13\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 0 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 1 & 3 & 19 & 17 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz}\n\\end{table} \n\n\nAP-DI-1 and AP-DI-2 have also been compared with NSGA-III in the same way. Table~\\ref{table-kneedtlz_nsga3} shows the comparison result. For DTLZ1, the average number of solutions from NSGA-III in the corresponding preference regions from AP-DI-MOEA is six; still, almost all knee points from NSGA-III are in the preference region. For DTLZ2, the average number of solutions from NSGA-III in the corresponding preference region from AP-DI-MOEA is less than one; nevertheless, in more than half of the 30 runs the knee point from NSGA-III is still in the preference region. To some extent, it can be concluded that the preference regions from AP-DI-MOEA are accurate. 
It can also be observed that AP-DI-1 behaves better than AP-DI-2 on DTLZ2, because two knee points from NSGA-III dominate the solutions from AP-DI-2.\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on DTLZ problems.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{DTLZ1} & \\multicolumn{2}{|c|}{DTLZ2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 &AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 30 & 29 & 14 & 17\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 0 & 1 & 1 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 2 \\\\\n\\hline\nOutside & Incomparable & 0 & 1 & 15 & 10 \\\\\n\\cline{2-6}\np-region & Dominated & 0 & 0 & 0 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{table-kneedtlz_nsga3}\n\\end{table} \n\nSimilarly, we pick from DI-MOEA and NSGA-III solutions which are in the corresponding preference region of AP-DI-MOEA, and the hypervolume indicator value is compared between these solutions and AP solutions. It has been found that all hypervolume values of solutions from AP-DI-MOEA are better than those of solutions from DI-MOEA and NSGA-III. The left column of Figure~\\ref{box:dtlz} shows box plots of the distribution of hypervolume values over 30 runs on DTLZ1, and the right column shows the hypervolume comparison on DTLZ2.\n\n\n\\begin{figure}[htbp]\n\\hspace{-0.58cm}\n\\subfigure[DTLZ1]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtzl1-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz1-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\\subfigure[DTLZ2]{\n \\begin{minipage}[t]{0.53\\linewidth}\n \\centering\n \\includegraphics[width=2in]{dtlz2-di1-box.png}\\\\\n \\vspace{0.02cm}\n \\includegraphics[width=2in]{dtlz2-di2-box.png}\\\\\n \\vspace{0.02cm}\n \n \\end{minipage}%\n}%\n\n\\caption{Boxplots comparing the hypervolume values on DTLZ1 and DTLZ2.}\n\\label{box:dtlz}\n\\end{figure}\n\nIn our experiments, we decide half of the total budget is used to find an initial Pareto front because it turned out to be a good compromise: half budget for the initial Pareto front and another half budget for the solutions focusing on the preference region. We also run experiments using 25\\% and 75\\% of the total budget for the initial Pareto front. Figure~\\ref{fig:dtlz-budget} presents the entire Pareto front from DI-MOEA and the Pareto front from AP-DI-MOEA with different budgets for the initial Pareto front. The left two images are on DTLZ1 and the right two images are on DTLZ2. The uppper two images are from DI-1 and AP-DI-1; the lower two images are from DI-2 and AP-DI-2. In the legend labels, 50\\%, 25\\% and 75\\% indicate the budgets which are utilized to find the initial entire Pareto front. It can be observed that the preference region from AP-DI-MOEA with 50\\% of budget are located on a better position than with 25\\% and 75\\% budgets, and the position of the preference region from AP-DI-MOEA with 50\\% of budget is more stable. 
Therefore, in our algorithm, 50\\% of budget is used before the generation of preference region.\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[DTLZ1 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di1-tdi1-3combine-1.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.28in]{dtlz1-di2-tdi2-3combine-3.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\hspace{1.5cm}\n\\subfigure[DTLZ2 3 objective problem]{\n \\begin{minipage}[t]{0.50\\linewidth}\n \\centering\n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di1-tdi1-3combine-16.png}\\\\\n \n \\includegraphics[height=2.6in,width=4.2in]{dtlz2-di2-tdi2-3combine-16.png}\\\\\n \n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation by different budgets generating initial Pareto front.}\n\\label{fig:dtlz-budget}\n\\end{figure*}\n\n\\subsection{Experiments on Vehicle Fleet Maintenance Scheduling Optimization}\nThe budget of $1200000$ evaluations has been used on the real-world application problems, and $600000$ of them are for the initial Pareto front. After that, the preference region is updated after every $50000$ evaluations.\nThe VFMSO problem has been tested with different sizes. Figure~\\ref{fig:20cars} shows Pareto front approximations of a problem with $20$ cars and $3$ workshops (V1), and each car contains $13$ components: one engine, four springs, four brakes and four tires \\cite{van2019modeling}. It can be observed that AP-DI-1 and AP-DI-2 can zoom in the entire Pareto front and find solutions in the preference region, at the same time, both AP-DI-1 and AP-DI-2 converge better than their corresponding DI-1 and DI-2. A similar conclusion can be drawn from Pareto fronts approximations of the problem with $30$ cars and $5$ workshops (V2) in Figure~\\ref{fig:30cars}.\n\n\nIn Figure~\\ref{fig:2030cars}, We put the Pareto front approximations from DI-MOEA, AP-DI-MOEA and NSGA-III on V1 (left) and V2 (right) together. The behaviours of DI-1, DI-2 and NSGA-III are similar on V1, so are the behaviours of AP-DI-1 and AP-DI-2 on this problem. While, DI-2 and AP-DI-2 converge better than DI-1 and AP-DI-1 on V2 problem. 
The behaviour of NSGA-III is between that of DI-1 and DI-2.\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 20 cars and 3 workshops.}\n\\label{fig:20cars}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\\hspace{-3.8cm}\n\\subfigure[DI-1 \\& AP-DI-1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3.2cm}\n\\subfigure[DI-2 \\& AP-DI-2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di2-tdi2-legend.png}\\\\\n \n \\end{minipage}%\n}%\n\\caption{Pareto front approximation on VFMSO problem with 30 cars and 5 workshops.}\n\\label{fig:30cars}\n\\end{figure*}\n\n\n\n\\begin{figure*}[htbp]\n\\hspace{-3.5cm}\n\\subfigure[V1]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{20car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\\hspace{3cm}\n\\subfigure[V2]{\n \\begin{minipage}[t]{0.5\\linewidth}\n \\centering\n \\includegraphics[height=2.7in,width=4in]{30car-di1-tdi1-di2-tdi2-ns3.png}\\\\\n \n \\end{minipage}%\n}%\n\n\\caption{Pareto front approximation on VFMSO problem by DI-MOEA, AP-DI-MOEA and NSGA-III.}\n\\label{fig:2030cars}\n\\end{figure*}\n\n\nTable~\\ref{table-v1} gives the space and dominance relation of knee points from DI-MOEA and solutions from AP-DI-MOEA on these two VFMSO problems. For both problems, only few knee points from DI-MOEA are in the preference regions of AP-DI-MOEA, and the main reason is that the Pareto front of AP-DI-MOEA converges better than that of DI-MOEA, in some cases, the Pareto front of DI-MOEA cannot even reach the corresponding preference region. More importantly, it can be observed that most knee points from DI-MOEA, no matter whether in the preference region or outside of the preference region, are dominated by the solutions from AP-DI-MOEA. This phenomenon is even more obvious for the application problem with bigger size and run with the same budget as the smaller one: for V2, 90\\% of knee points from DI-MOEA are dominated by the solutions from AP-DI-MOEA.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from DI-MOEA and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& DI-1\/ & DI-2\/ & DI-1\/ & DI-2\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 0\\\\\n\\cline{2-6}\npreference & Dominated & 9 & 7 & 9 & 6 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 0 & 0 \\\\\n\\hline\nOutside & Incomparable & 4 & 9 & 3 & 3 \\\\\n\\cline{2-6}\np-region & Dominated & 17 & 14 & 18 & 21 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1}\n\\end{table} \n\n\nTable~\\ref{table-v1-nsga3} gives the space and dominance relation of knee points from NSGA-III and AP solutions. 
For both problems, again, most knee points from NSGA-III are not in the preference regions of AP-DI-MOEA. Some knee points from NSGA-III are dominated by AP solutions and most of them are incomparable with AP solutions. \n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Space and dominance relation of knee point from NSGA-III and AP solutions on V1 and V2.}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{2}{|c|}{Problem} & \\multicolumn{2}{|c|}{V1} & \\multicolumn{2}{|c|}{V2}\\\\ \n\\cline{1-6}\n\\multicolumn{2}{|c|}{\\multirow{2}*{Algorithm}}& NSGA-III\/ & NSGA-III\/ & NSGA-III\/ & NSGA-III\/ \\\\\n\\multicolumn{2}{|c|}{ }& AP-DI-1 & AP-DI-2 & AP-DI-1 & AP-DI-2 \\\\\n\\hline\nIn & Incomparable & 0 & 0 & 0 & 1\\\\\n\\cline{2-6}\npreference & Dominated & 0 & 1 & 3& 2 \\\\\n \\cline{2-6}\nregion & Dominating & 0 & 0 & 1 & 1 \\\\\n\\hline\nOutside & Incomparable & 23 & 24 & 21 & 18 \\\\\n\\cline{2-6}\np-region & Dominated & 7 & 5 & 5 & 8 \\\\\n\\hline\n\\end{tabular}\n\\label{table-v1-nsga3}\n\\end{table} \n\n\\iffalse \n The right image of Figure~\\ref{fig:30cars-2} presents the entire Pareto front approximations of V2 from four different MOEAs: DI-1, DI-2, RVEA and NSGA-III. It can be seen that DI-MOEA (both DI-1 and DI-2) and NSGA-III converge to the similar area in the end, while, RVEA reaches another area of the objective space. Table~\\ref{table_hy} provides the average hypervolume value of the four Pareto fronts from 30 runs and the reference point for each run is formed by the largest objective value from all solutions. It can be seen that DI-2 behaves the best and RVEA the worst.\n\n\n\\begin{table}[htbp]\n\\caption{Hypervolume values}\n\\label{table_hy}\n\\begin{center}\n\\begin{tabular}{l|c}\n\\hline\nDI-1 & 0.0525\\\\\n\\hline\nDI-2 & 0.0576\\\\\n\\hline\nRVEA & 0.0202\\\\\n\\hline\nNSGAIII & 0.0534\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\fi\n\n\n\n\n\n\\section{CONCLUSIONS}\n\\label{sec:conclusion}\nIn this paper, a preference based multi-objective evolutionary algorithm, AP-DI-MOEA, is proposed. In the absence of explicitly provided preferences, the knee region is usually treated as the region of interest or preference region. Given this, AP-DI-MOEA can generate the knee region automatically and can find solutions with a more fine-grained resolution in the knee region. This has been demonstrated on the bi-objective problems ZDT1 and ZDT2, and the three objective problems DTLZ1 and DTLZ2. In the benchmark, the new approach was also proven to perform better than NSGA-III which was included in the benchmark as a state-of-the-art reference algorithm.\n\nThe research for the preference based algorithm was originally motivated by a real-world optimization problem, namely, Vehicle Fleet Maintenance Scheduling Optimization (VFMSO), which is described in this paper in a new formulation as a three objective discrete optimization problem. A customized set of operators (initialization, recombination, and mutation) is proposed for a multi-objective evolutionary algorithm with a selection strategy based on DI-MOEA and, respectively, AP-DI-MOEA. The experimental results of AP-DI-MOEA on two real-world application problem instances of different scales show that the newly proposed algorithm can generate preference regions automatically and it (in both cases) finds clearly better and more concentrated solution sets in the preference region than DI-MOEA. 
For completeness, it was also tested against NSGA-III and a better approximation in the preference region was observed by AP-DI-MOEA .\n\nSince our real-world VFMSO problem is our core issue to be solved, and its Pareto front is convex, we did not consider problems with an irregular shape.\nIt would be an interesting question how to adapt the algorithm to problems with more irregular shapes. Besides, the proposed approach requires a definition of knee points. Future work will provide a more detailed comparison of different variants of methods to generate knee points, as they are briefly introduced in Section \\ref{sec:literature}. In the application of maintenance scheduling, it will also be important to integrate robustness and uncertainty in the problem definition. It is desirable to generate schedules that are robust within a reasonable range of disruptions and uncertainties such as machine breakdowns and processing time variability.\n\n\n\n\\section*{Acknowledgment}\nThis work is part of the research programme Smart Industry SI2016 with project name CIMPLO and project number 15465, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{introduction}\n\nThe properties of physical systems in the vicinity of a critical\npoint, such as critical exponents and amplitude ratios, can be\nextracted by a variety of methods, ranging from exact solutions to\nMonte Carlo simulations.\n\nIn the absence of exact results, one of the most successful approaches\nis based on the investigation of the strong-coupling series expansion,\nwhich enjoys the property of a finite radius of convergence, often\n(but not necessarily) coinciding with the extent of the\nhigh-temperature phase. More generally, when no singular points occur\non the real axis of the complex coupling plane, it is possible to\nexploit strong-coupling results even beyond the convergence radius by\nanalytic continuations, which are based on appropriate resummation\nmethods. Extending the length of the strong-coupling series and\nimproving the accuracy of the resummations are therefore the two most\ncompelling tasks within this approach to the study of the behavior of\nsystems in the critical region.\n\nAs part of an extended program of strong-coupling calculations we have\nrecently computed an extended series expansion of all nontrivial\ntwo-point Green's functions\n\\begin{equation}\nG(x) = \\left<{\\vec s}(0)\\cdot{\\vec s}(x)\\right>\n\\end{equation}\nfor the nearest-neighbor lattice formulation of two-dimensional \n${\\rm O}(N)$ $\\sigma$ models on the square, triangular, and honeycomb\nlattices, respectively up to 21st, 15th, and 30th order in the\nstrong-coupling expansion parameter $\\beta$. A complete presentation\nof our strong-coupling computations for ${\\rm O}(N)$ $\\sigma$ models\nin two and three dimensions will appear in a forthcoming paper.\nA preliminary report of our calculations can be found in \nRef.~\\cite{lattice95}.\n\nThe relevance of a better understanding of 2-$d$ ${\\rm O}(N)$ $\\sigma$\nmodels cannot be overestimated. They appear in condensed matter\nliterature as prototype models for critical phenomena that are\nessentially restricted to two-dimensional layers, including some\ninstances of high-$T_c$ superconductivity. 
Moreover, they can be\nemployed as model field theories sharing some of the most peculiar\nfeatures of four-dimensional gauge theories, such as asymptotic\nfreedom and spontaneous mass generation. This last statement must\nhowever be qualified, since the above-mentioned properties, according\nto common lore, are possessed only by those 2-$d$ ${\\rm O}(N)$ models\nsuch that $N>2$.\n\nWe focus here on these asymptotically free models, analyzing their\nstrong-coupling expansion in order to extract information that may be\nrelevant to the description of their continuum limit\n($\\beta\\to\\infty$), assuming $\\beta_c=\\infty$ to be the only\nsingularity on the real axis. This hypothesis is favored by all\nnumerical evidence as well as by the successful application of the\nextrapolation techniques that we shall discuss in the present paper.\nThe analysis of our strong-coupling series for \nmodels with $N\\geq 2$ is presented in Ref.~\\cite{Nm2}.\n\nIt is obviously quite hard to imagine that strong-coupling techniques\nmay be really accurate in describing the divergent behavior of such\nquantities as the correlation length and the magnetic susceptibility.\nNevertheless, as our calculations will explicitly confirm, the\nstrong-coupling analysis may provide quite accurate continuum-limit\nestimates when applied directly to dimensionless,\nrenormalization-group invariant ratios of physical quantities. Two\nbasic ideas will make this statement more convincing.\n\n(i) For any dimensionless, renormalization-group invariant ratio\n$R(\\beta)$, when $\\beta$ is sufficiently large we may expect a\nbehavior \n\\begin{equation}\nR(\\beta)-R^*\\sim {1\\over \\xi^2(\\beta)},\n\\label{scalR}\n\\end{equation}\nwhere $R^*$ is the fixed point (continuum) value and $\\xi$ is the\n(diverging) correlation length. Hence a reasonable estimate of $R^*$\nmay be obtained at the values of $\\beta$ corresponding to large but\nfinite correlation lengths, where the function $R(\\beta)$ flattens.\nThis is essentially the same idea underlying Monte Carlo studies of\nasymptotically free theories, based on the identification of the\nso-called scaling region.\n\n(ii) On physical grounds, it is understandable that $\\beta$ is not\nnecessarily the most convenient variable to parameterize phenomena\noccuring around $\\beta=\\infty$. An interesting alternative is based\non the observation that the strong-coupling series of the internal\nenergy\n\\begin{equation}\nE = \\beta + O(\\beta^3)\n\\end{equation}\nmay be inverted to give $\\beta$ as a series in $E$. This series may\nbe substituted into other strong-coupling expansions, obtaining\nexpressions for physical quantities as power series in $E$. 
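\n\nAs a toy illustration of this change of variables, the inversion and the substitution can be carried out order by order with a symbolic manipulation program. In the sketch below the low-order coefficients are invented for the purpose of the example and are not the series of the models considered here:\n\\begin{verbatim}\n# Toy illustration of the energy-variable trick; the coefficients are made up.\nimport sympy as sp\n\nbeta, E = sp.symbols('beta E')\norder = 7\nE_of_beta = beta + sp.Rational(1, 3)*beta**3 + sp.Rational(2, 15)*beta**5\nchi_of_beta = 1 + 4*beta + 12*beta**2 + sp.Rational(104, 3)*beta**3\n\n# invert E(beta) order by order: beta = E - [E(beta) - beta] at beta -> beta(E)\nbeta_of_E = E\nfor _ in range(order):\n    beta_of_E = sp.series(E - (E_of_beta - beta).subs(beta, beta_of_E),\n                          E, 0, order).removeO()\n\n# substitute to obtain chi as a power series in the energy E\nchi_of_E = sp.series(chi_of_beta.subs(beta, beta_of_E), E, 0, order).removeO()\nprint(sp.expand(chi_of_E))\n\\end{verbatim}\n\n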
It might\nnow be easier to reach the continuum limit, since it now occurs at a\nfinite value of the expansion variable, i.e., $E\\to1$.\n\nWe hope to convince the reader that, by exploiting these ideas,\nstate-of-the-art strong-coupling calculations can be made at least as\naccurate as the best Monte Carlo simulations presently available,\nwhen applied to dimensionless renormalization-group invariant quantities.\n\nWe must stress that the analysis of the strong-coupling series\ncalculated on different lattices offers a possibility of testing\nuniversality, and, on the other side, once universality is assumed, it\nrepresents a further check for possible systematic errors and allows\ntheir quantitative estimate; this estimate is usually a difficult task\nin strong-coupling extrapolation methods such as those based on Pad\\'e\napproximants and their generalizations.\n\nOur physical intuition of the behavior of ${\\rm O}(N)$ models is\nstrongly guided by our knowledge of their large-$N$ behavior, and by\nthe evidence of a very weak dependence on $N$ of the dimensionless\nratios. In order to extend our understanding to those lattices that\nhave not till now received a systematic treatment, and also in order\nto establish a benchmark for the strong-coupling analysis, we decided\nto start our presentation with a detailed comparative study of the\nlarge-$N$ limit of various lattices, in the nearest-neighbor\nformulation. To the best of our knowledge, only the large-$N$\nsolution on the square lattice was already known explicitly\n\\cite{sqNi}.\n\nThe paper is organized as follows:\n\nIn Sec.~\\ref{secNi} we present the large-$N$ limit solution of ${\\rm\n O}(N)$ $\\sigma$ models on the square, triangular and honeycomb\nlattices, in the nearest-neighbor formulation, calculating several\nphysical quantities and showing explicitly the expected universality\nproperties. The triangular- and honeycomb-lattice results are\noriginal, and possess some intrinsic reasons of interest. However,\nreaders willing to focus on square-lattice results are advised to jump\nto Sec.~\\ref{SCA} after reading Subs.~\\ref{secse} and \\ref{secsqNi},\nwhere the notation is fixed.\n\nSec.~\\ref{SCA} is devoted to a detailed analysis of the available\nstrong-coupling series of $G(x)$ and other physical quantities on the\nsquare, triangular, and honeycomb lattices. Most of the results we\nshall show there concern the $N=3$ model. The basic motivation for\nthis choice lies in the observation that all dependence in $N$ is\nmonotonic between 3 and $\\infty$; hence the discussion of higher-$N$\nresults would be only a boring repetition of the considerations\npresented here. The reader not interested in the analysis of\ntriangular and honeycomb lattices may skip most of the discussion, by\nfocusing on Subs.~\\ref{scsq}, where further definitions are introduced\nand the square-lattice series are analyzed, and on Subs.~\\ref{concl},\nwhere all conclusions are drawn.\n\nApps.~\\ref{apptr} and \\ref{appex} provide the derivation and the\ntechnical details of the large-$N$ calculations on the triangular and\nhoneycomb lattices respectively. 
We present as well the calculation\nof the $\\Lambda$-parameters.\n\nApp.~\\ref{singNinf} is a study of the complex temperature\nsingularities of the $N=\\infty$ partition functions on the \ntriangular and honeycomb lattices.\n\nIn Apps.~\\ref{appscsq}, \\ref{appsctr} and \\ref{appscex} we present,\nfor selected values of $N$, the strong-coupling series of some\nrelevant quantities on the square, triangular, and honeycomb lattice\nrespectively.\n\n\n\\section{The large-$\\protect\\bbox{N}$ limit of lattice \n$\\protect\\bbox{{\\rm O}(N)}$ $\\protect\\bbox{\\sigma}$ models}\n\\label{secNi}\n\n\n\\subsection{The large-$\\protect\\bbox{N}$ saddle point equation}\n\\label{secse}\n\nThe nearest-neighbor lattice formulations on square, triangular and\nhoneycomb lattices are defined by the action\n\\begin{equation}\nS_L= -N\\beta\\sum_{\\rm links} {\\vec s}_{x_l}\\cdot {\\vec s}_{x_r},\n\\qquad {\\vec s}_x\\cdot {\\vec s}_x = 1,\n\\label{lattaction}\n\\end{equation}\nwhere $\\vec s$ is a $N$-component vector, the sum is performed over\nall links of the lattice and $x_l,x_r$ indicate the sites at the ends\nof each link. The coordination number is $c=4,6,3$ respectively for\nthe square, triangular and honeycomb lattice. The lattice spacing\n$a$, which represents the length unit, is defined to be the length of\na link. The volume per site is then $v_s=1,\\sqrt{3}\/2, 3\\sqrt{3}\/4$\n(in unit of $a^2$) respectively for the square, triangular, and\nhoneycomb lattice.\n\nStraightforward calculations show that the correct continuum\nlimit of ${\\rm O}(N)$ $\\sigma$ models, \n\\begin{equation}\nS= {N\\over 2t} \\int d^2x\\, \\partial_\\mu {\\vec s}(x)\\cdot\n\\partial_\\mu {\\vec s}(x),\n\\qquad {\\vec s}(x)\\cdot {\\vec s}(x) = 1,\n\\label{contaction}\n\\end{equation}\nis obtained by identifying \n\\begin{equation}\nt={1\\over \\beta}, \\ {1\\over \\sqrt{3}\\beta},\\ \n{\\sqrt{3}\\over \\beta},\n\\label{temp}\n\\end{equation}\nrespectively for the square, triangular and honeycomb lattice.\nNotice that \n\\begin{equation}\n\\lambda\\equiv t\\beta = {4v_s\\over c}\n\\label{tbeta}\n\\end{equation}\nis the distance between nearest-neighbor\nsites of the dual lattice in unit of the lattice spacing $a$.\n\nWhen the number of field components $N$ per site goes to infinity,\none can use a saddle point equation to evaluate the partition \nfunction. 
Replacing the constraint $\\vec s_x^{\\,2}=1$\nby a Fourier integral over \na conjugate variable $\\alpha_x$, we write the partition\nfunction as\n\\begin{eqnarray}\nZ&&\\propto \\int \\prod_x d{\\vec s}_x \\,\\delta( \\vec s_x^{\\,2}-1)\\,\n\\exp N\\beta \\sum_{\\rm links} {\\vec s}_{x_l}\\cdot {\\vec s}_{x_r}\n\\nonumber \\\\\n&&\\propto\\int \\prod_x d\\phi_x d\\alpha_x \\,\\exp \nN\\left[ \\sum_x i{\\alpha_x\\over 2}\\left( 1 - \\phi_x^2\\right)\n-{\\beta\\over 2}\\sum_{\\rm links} \\left( \n\\phi_{x_l}-\\phi_{x_r}\\right)^2\\right].\n\\label{fp}\n\\end{eqnarray}\nIntegrating out the $\\phi$ variables we arrive at\nthe expression \n\\begin{equation}\nZ\\propto \\int d\\alpha_x \\,\\exp {N\\over 2}\n\\left( \\sum_x i\\alpha_x - {\\rm Tr}\\,\\ln R\\right),\n\\label{fp2}\n\\end{equation}\nwhere \n\\begin{equation}\nR_{xy}= -{1\\over t} \\Delta_{xy} + i\\alpha_x\\delta_{xy},\n\\label{R}\n\\end{equation}\nand $\\Delta_{xy}$ is a generalized Laplacian operator, such\nthat \n\\begin{equation}\n\\lambda\\, \\sum_{\\rm links} \\left(\\phi_{x_l}-\\phi_{x_r}\\right)^2\n= -\\sum_{x,y} \\phi_x \\Delta_{xy} \\phi_y.\n\\label{Q}\n\\end{equation}\n\nThe large-$N$ limit solution is obtained from the variational \nequation with respect to $\\alpha_x$.\nLooking for a translation invariant solution we set\n\\begin{equation}\ni\\alpha_x={v_s\\over t}\\,z.\n\\label{costalp}\n\\end{equation}\nThe matrix $R$ then becomes\n\\begin{equation}\nR_{xy}= {1\\over t} \\left[ -\\Delta_{xy}+zv_s\\delta_{xy}\\right],\n\\label{R2}\n\\end{equation}\nand the saddle point equation is written as\n\\begin{equation}\n1=\\lim_{N_s\\rightarrow\\infty}\n{1\\over N_s} {\\rm Tr}\\,R^{-1},\n\\label{spe}\n\\end{equation}\nwhere $N_s$ is the number of sites.\n\nThe large-$N$ fundamental two-point Green's function is\nobtained by\n\\begin{equation}\nG(x-y)= R^{-1}_{xy}.\n\\label{NiGx}\n\\end{equation}\n\nIn order to calculate the trace of $R^{-1}$, the easiest\nprocedure consists in Fourier transforming the operator\n$R$. Such transformation is straightforward on lattices,\nlike square and triangular lattices, whose sites\nare related by a translation group, and in these cases it\nyields the diagonalization of the matrix $R_{xy}$.\nThe honeycomb lattice, not possessing a full translation\nsymmetry, presents some complications. 
\nIn this case a partial diagonalization of $R_{xy}$ can be \nachieved following the procedure outlined in Ref.~\\cite{SCUN2}.\n\n\n\\subsection{The square lattice}\n\\label{secsqNi}\n\nTurning to the momentum space the variational equation\nbecomes\n\\begin{equation}\n{1\\over t}=\\beta =\\int_{-\\pi}^\\pi {d^2 k\\over (2\\pi)^2}\n{1\\over \\widehat{k}^2+z}={1\\over 2\\pi} \n\\rho_{\\rm s}(z) K\\left( \\rho_{\\rm s}(z) \\right), \n\\label{sesq}\n\\end{equation}\nwhere \n\\begin{equation}\n\\rho_{\\rm s}(z)=\\left(1 + {1\\over 4}z\\right)^{-1},\n\\label{rhos}\n\\end{equation}\nand $K$ is the complete integral of the first kind.\n\nLet's define the moments of $G(x)$\n\\begin{equation}\nm_{2j}\\equiv\\sum_x (x^2)^j \\,G(x).\n\\label{momgx}\n\\end{equation}\nStraightforward calculations lead to the following\nresults\n\\begin{equation}\n\\chi\\equiv m_0 = {t\\over z},\n\\label{chisqin}\n\\end{equation}\n\\begin{equation}\n\\xi_{G}^2 \\equiv M_{G}^{-2} \\equiv \n{m_2\\over 4\\chi}= {1\\over z},\n\\label{xigsqin}\n\\end{equation}\n\\begin{equation}\n u\\equiv {m_2^2\\over \\chi m_4}=\n{1\\over 4}\\left( 1 + {z\\over 16}\\right)^{-1}.\n\\label{omsqin}\n\\end{equation}\nNotice that in the large-$N$ limit the renormalization constant of the\nfundamental field is $Z=t$. $u$ is a renormalization-group invariant\nquantity.\n\nThe mass-gap should be extracted from the long distance\nbehavior of the two-point Green's function, which is also\nrelated to the imaginary momentum singularity of the \nFourier transform of $G(x)$.\nIn the absence of a strict rotation invariance, one actually\nmay define different estimators of the mass-gap having \nthe same continuum limit.\nOn the square lattice one may consider $\\mu_{\\rm s}$ and \n$\\mu_{\\rm d}$ obtained respectively by the equations\n\\begin{eqnarray}\n&&\\tilde{G}^{-1}(p_1=i\\mu_{\\rm s},p_2=0)=0,\\nonumber \\\\\n&&\\tilde{G}^{-1}\\left(p_1=i{\\mu_{\\rm d}\\over\\sqrt{2}},\np_2=i{\\mu_{\\rm d}\\over\\sqrt{2}}\\right) = 0.\n\\label{msmd}\n\\end{eqnarray}\n$\\mu_{\\rm s}$ and $\\mu_{\\rm d}$ \ndetermine respectively the long\ndistance behavior of the side and diagonal wall-wall\ncorrelations constructed with $G(x)$. \nIn generalized Gaussian models, such as the large-$N$ limit\nof ${\\rm O}(N)$ models, it turns out convenient to define\nthe following quantities \n\\begin{eqnarray}\n&&M_{\\rm s}^2= 2\\left( {\\rm cosh} \n\\mu_{\\rm s} - 1\\right),\\nonumber \\\\\n&&M_{\\rm d}^2= 4\\left( {\\rm cosh} \n{\\mu_{\\rm d}\\over \\sqrt{2}} -1\\right).\n\\label{MsMd}\n\\end{eqnarray}\nIn the continuum limit \n\\begin{equation}\n{M_{\\rm s}\\over \\mu_{\\rm s}}\\,,{M_{\\rm d}\n\\over \\mu_{\\rm d}}\\rightarrow 1,\n\\label{msmd2}\n\\end{equation}\ntherefore $M_{\\rm s}$ and $M_{\\rm d}$ may be also used\nas estimators of the mass-gap. 
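\n\nFor orientation, the saddle point equation (\\ref{sesq}) and the expressions (\\ref{chisqin})--(\\ref{omsqin}) are easily evaluated numerically. A minimal sketch (assuming SciPy, whose \\texttt{ellipk} takes the parameter $m=k^2$ rather than the modulus $k$) is:\n\\begin{verbatim}\n# Numerical solution of the square-lattice gap equation at a given beta.\nimport numpy as np\nfrom scipy.optimize import brentq\nfrom scipy.special import ellipk\n\ndef gap_equation(z, beta):\n    rho = 1.0/(1.0 + z/4.0)\n    return rho*ellipk(rho**2)/(2.0*np.pi) - beta\n\nbeta = 0.7                      # an illustrative value of the coupling\nz = brentq(gap_equation, 1e-12, 16.0, args=(beta,))\nxi_G = 1.0/np.sqrt(z)           # correlation length\nchi  = 1.0/(beta*z)             # magnetic susceptibility, with t = 1/beta\nu    = 0.25/(1.0 + z/16.0)      # renormalization-group invariant ratio\nprint(z, xi_G, chi, u)\n\\end{verbatim}\nThe gap equations of the triangular and honeycomb lattices, Eqs.~(\\ref{setr2}) and (\\ref{seex2}), can be treated in the same way.\n\n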
\n\nIn the large-$N$ limit \n\\begin{equation}\nM_{\\rm s}^2 = M_{\\rm d}^2 = z=M_{G}^2.\n\\label{msmdmg}\n\\end{equation}\n\nThe rotational invariance of $G(x)$ at large distance,\n$d\\gg\\xi$, is checked by the ratios $\\mu_{\\rm s}\/\\mu_{\\rm d}$.\nUsing the above results one can evaluate the scaling violation terms:\n\\begin{equation}\n{\\mu_{\\rm s}\\over \\mu_{\\rm d}}=\n{ \n\\ln\\left( {1\\over 2}\\sqrt{z} + \\sqrt{ 1 + \n{1\\over 4}z}\\right)\\over\n\\sqrt{2}\\ln\\left( {1\\over 2\\sqrt{2}}\\sqrt{z} + \\sqrt{ 1 + \n{1\\over 8}z}\\right)}\n = 1 - {1\\over 48}z + {71\\over 23040}z^2+\nO\\left(z^3\\right).\n\\label{rotviol}\n\\end{equation}\n \nAnother test of scaling is provided by the ratio\n\\begin{equation}\n{\\mu_{\\rm s}\\over M_{G}}\n= {2\\over \\sqrt{z}}\n\\ln\\left( {\\sqrt{z}\\over 2} + \\sqrt{ 1 + \n{z\\over 4}}\\right)=\n1 - {1\\over 24}z + {3\\over 640}z^2+\nO\\left(z^3\\right).\n\\label{scalviol}\n\\end{equation}\n\nThe internal energy can be easily calculated obtaining\n\\begin{equation}\nE \\equiv \\langle {\\vec s}_x\\cdot {\\vec s}_{x+\\mu} \\rangle =\nR^{-1}_{x,x+\\mu}=\n1\\,-\\,{1\\over 4\\beta}\\,+\\,{z\\over 4}.\n\\label{energysq}\n\\end{equation}\nTherefore\n\\begin{equation}\n{1\\over 2}\\sum_\\mu\n\\langle ({\\vec s}_{x+\\mu}-{\\vec s}_x)^2 \\rangle =\n{1\\over 2\\beta}\\,-\\,{z\\over 2},\n\\label{condlatt}\n\\end{equation}\nwhere the term proportional to $z$ is related to the condensate $T$ of\nthe trace of the energy-momentum tensor~\\cite{CRcond}\n\\begin{equation}\n{\\beta(t)\\over 2t^2}\n\\partial_\\mu {\\vec s}(x)\\cdot\\partial_\\mu {\\vec s}(x).\n\\label{temtr}\n\\end{equation}\nIn the large-$N$ limit\n\\begin{equation}\n\\beta(t)=-{1\\over 2\\pi}t^2,\n\\label{beta}\n\\end{equation}\ntherefore from the expression (\\ref{energysq}) we deduce\n\\begin{equation}\n{T\\over M_{G}^2}={1\\over 4\\pi}.\n\\label{condsq}\n\\end{equation}\n\nAnother interesting quantity which can be evaluated in the large-$N$\nlimit is the zero-momentum four-point renormalized coupling constant,\ndefined by\n\\begin{equation}\ng_r = -{\\chi_4\\over \\chi^2\\xi_{G}^2}\n\\label{gr0}\n\\end{equation}\nwhere\n\\begin{equation}\n\\chi_4 = \\sum_{x,y,z} \\langle {\\vec s}_0\\cdot {\\vec s}_x \n\\, {\\vec s}_y\\cdot {\\vec s}_z \\rangle_c.\n\\label{chi4}\n\\end{equation}\n$g_r$ is zero in the large-$N$ limit, where the theory is Gaussian-like\nand thus $\\chi_4=0$. 
\nIts value in the continuum limit\n\\begin{equation}\ng_r^* = {8\\pi\\over N}+ O\\left({1\\over N^2}\\right)\n\\label{gr1}\n\\end{equation}\ncan be also evaluated in the large-$N$ expansion of\nthe continuum formulation of the ${\\rm O}(N)$ models~\\cite{gr}.\nOn the square lattice, by using the saddle point equation we find\n\\begin{equation}\nNg_r = -2 {\\partial \\ln z\\over \\partial \\beta},\n\\label{gr2}\n\\end{equation}\nwhich can be made more explicit by writing\n\\begin{equation}\nNg_r = 4\\pi{1+\\rho_{\\rm s}\\over \\rho_{\\rm s} E(\\rho_{\\rm s})} =\n8\\pi\\left[1 + {z\\over 8}\\left(\\ln {z\\over 32}\\,+\\,2\\right) \n+ O(z^2)\\right],\n\\label{gr3}\n\\end{equation}\nwhere $E$ is an elliptic function.\n\nAll the above results can be expressed as functions of $\\beta$\nby solving the saddle point equation.\nConcerning asymptotic scaling, and therefore \nsolving the saddle point equation at large $\\beta$, one finds\n\\begin{equation}\nM_{G}\\simeq 4\\sqrt{2}\\,\\exp \\left(-{2\\pi\\over t}\\right).\n\\label{asysq}\n\\end{equation}\n\nThe analytic structure of the various observables\nhas been investigated in Ref.~\\cite{BCMO}.\nThe complex $\\beta$-singularities are square-root branch points,\nindeed quantities like $\\chi$ and $\\xi_G^2$\nbehave as \n\\begin{equation}\nA(\\beta)+B(\\beta) \\sqrt{\\beta-\\beta_s}\n\\end{equation}\naround a singular point $\\beta_s$, where $A(\\beta)$ and $B(\\beta)$ are\nregular in the neighborhood of $\\beta_s$. The singularities closest\nto the origin are located at $\\bar{\\beta}=0.32162\\,(\\pm 1\\pm i)$.\nSuch singularities determine the convergence radius of the\nstrong-coupling expansion, which is therefore $\\beta_r=0.45484$,\ncorresponding to a correlation length $\\xi_{G}=3.17160$.\n\n\n\\subsection{The triangular lattice}\n\\label{sectrNi}\n\nOn the triangular lattice, using the results of App.~\\ref{apptr}, \nthe saddle point equation can be written as\n\\begin{equation}\n{1\\over t}=\\sqrt{3}\\beta = \\int^\\pi_{-\\pi} {dk_1\\over 2\\pi}\n\\int^{2\\pi\/\\sqrt{3}}_{-2\\pi\/\\sqrt{3}}\n{dk_2\\over 2\\pi} {1\\over \\Delta(k)+z}\n\\label{setr}\n\\end{equation}\nwhere \n\\begin{equation}\n\\Delta(k)=4\\left[ 1 - {1\\over 3}\\left(\n\\cos k_1+2\\cos {k_1\\over 2}\\cos {\\sqrt{3}k_2\\over 2}\\right)\\right]\n\\label{deltatr}\n\\end{equation}\nand the momentum integration is performed over the Brillouin\nzone corresponding to a triangular lattice.\nBy rather straightforward calculations (making also use\nof some of the formulas of Ref.~\\cite{Gradshteyn})\nthe saddle point equation can be written as\n\\begin{equation}\n{1\\over t}=\\sqrt{3}\\beta =\n{1\\over 2\\pi} \\left( 1 + {z\\over 6}\\right)^{-1\/4}\\,\\rho_{\\rm t}(z)\n\\,K(\\rho_{\\rm t}(z)),\n\\label{setr2}\n\\end{equation}\nwhere\n\\begin{equation}\n \\rho_{\\rm t}(z)= \\left( 1+{z\\over 6}\\right)^{1\/4}\\,\n\\left[ {1\\over 2} + {z\\over 8} + {1\\over 2}\n\\left( 1 + {z\\over 6}\\right)^{1\/2} \\right]^{-1\/2}\\, \n\\left[ {5\\over 2} + {3z\\over 8} - {3\\over 2}\n\\left( 1 + {z\\over 6}\\right)^{1\/2} \\right]^{-1\/2}. 
\n\\label{setr3}\n\\end{equation}\n\nUsing the results of App.~\\ref{apptr} one can find\n\\begin{equation}\n\\chi={t\\over v_s z}={2\\over 3\\beta z},\n\\label{chitrin}\n\\end{equation}\n\\begin{equation}\n \\xi_{G}^2 \\equiv M_{G}^{-2} = {1\\over z}\\, ,\n\\label{xitrin}\n\\end{equation}\n\\begin{equation}\nu\\equiv {m_2^2\\over \\chi m_4}\n={1\\over 4}\\left( 1 + {z\\over 16}\\right)^{-1} .\n\\label{omtrin}\n\\end{equation}\n\nAn estimator of the mass-gap $\\mu_{\\rm t}$ can be extracted from the\nlong distance behavior of the \nwall-wall correlation function defined in\nEq.~(\\ref{walldeftr}), indeed for $x\\gg 1$\n\\begin{equation}\nG_{\\rm t}^{(\\rm w)}(x)\\propto e^{-\\mu_{\\rm t} x}.\n\\label{ldgtr}\n\\end{equation}\nIn the large-$N$ limit one finds\n\\begin{equation}\nM^2_{\\rm t}\\equiv{8\\over 3}\\left( {\\rm cosh} {\\sqrt{3}\\over 2}\n\\mu_{\\rm t} -1\\right)=z=M^2_{G}.\n\\label{Mtr}\n\\end{equation}\nA test of scaling is provided by the ratio \n\\begin{equation}\n{\\mu_{\\rm t}\\over M_{G}}=\n{2\\over \\sqrt{3z}}\n{\\rm Arccosh}\\,\\left[ 1 + {3\\over 8}z \\right]\n = 1-{1\\over 32}z + {9\\over 10240} z^2\n+O\\left( z^3\\right),\n\\label{scalvioltr}\n\\end{equation}\nwhere scaling violations are of the same order as those \nfound on the square lattice for the corresponding\nquantity, cfr.\\ Eq.~(\\ref{scalviol}).\n\nThe internal energy is given by the following\nexpression\n\\begin{equation}\nE=\\langle {\\vec s}_{x_l}\\cdot {\\vec s}_{x_r}\\rangle =\n1 - {1\\over 6\\beta} + {z\\over 4} ,\n\\label{etr}\n\\end{equation}\nleading again to the result (\\ref{condsq}) \nfor the condensate of the trace of the\nenergy-momentum tensor, in agreement with universality. \n\nWe calculated $g_r$ on the triangular lattice, finding \nthe following expression (in the derivation we made use of the\nsaddle point equation (\\ref{setr}))\n\\begin{equation}\nNg_r = - {2\\over \\sqrt{3}} {\\partial \\ln z\\over \\partial \\beta},\n\\label{gr2t}\n\\end{equation}\nwhich can be written in a more explicit form using \nEq.~(\\ref{setr2}): \n\\begin{eqnarray}\n Ng_r&=&4\\pi\\left( 1 + {1\\over 6}z\\right)^{1\/4}\n{1\\over z}\\left[ {E(\\rho_{\\rm t})\\over 1-\\rho_{\\rm t}^2} \n{\\partial \\rho_{\\rm t}\\over \\partial z}- \n{1\\over 24}\\left(1+{1\\over 6}z\\right)^{-1}\\rho_{\\rm t} \n K(\\rho_{\\rm t})\\right]^{-1}\n\\nonumber \\\\\n&=&\n8\\pi\\left[1 + {z\\over 8}\\left(\\ln {z\\over 48}\\,+\\,{11\\over 6}\\right) \n+ O(z^2)\\right],\n\\label{gr3t}\n\\end{eqnarray}\nwhere the continuum value of $Ng_r$, obtained for $z\\rightarrow 0$, is\nin agreement with the results (\\ref{gr1}) and (\\ref{gr3}).\n\nIn the weak coupling region $t\\rightarrow 0$ the\nsaddle point equation leads to the asymptotic\nscaling formula\n\\begin{equation}\nM_{G}\\simeq 4\\sqrt{3} \\exp \\left( \n-{ 2\\pi\\over t}\\right).\n\\label{asytr}\n\\end{equation}\nThe equations (\\ref{asysq}) and\n(\\ref{asytr}) are in agreement with the large-$N$ limit of the \nratio of the $\\Lambda$-parameters of the square \nand triangular lattice formulations\ncalculated in App.~\\ref{apptr}, cfr.\\ Eq.~(\\ref{ratioltr2}),\nusing perturbation theory.\n\nWe have investigated the analytic structure in the complex\n$\\beta$-plane. Details of such study are presented in\nApp.~\\ref{singNinf}. As on the square lattice, the singularities are\nsquare-root branch points. 
Those closest to the origin are placed at\n$\bar{\beta}= 0.206711\pm \,i\,0.181627$, leading to a convergence\nradius for the strong-coupling expansion $\beta_r=0.275169$, which\ncorresponds to a correlation length $\xi_{G}=2.98925$.\n\n\n\subsection{The honeycomb lattice}\n\label{iNhl}\n\nThe analysis of models defined on the honeycomb lattice presents a few\nsubtleties caused by the fact that, unlike the square and triangular lattices,\nnot all sites are related by a translation, which prevents a\nstraightforward definition of a Fourier transform. Nevertheless,\nobserving that sites at even distance in the number of lattice links\nform triangular lattices, one can define a Fourier-like transformation\nthat partially diagonalizes the Gaussian propagator (up to $2\times 2$\nmatrices)~\cite{SCUN2}. In this section we present the relevant\nresults; some details of their derivation are reported in\nApp.~\ref{appex}.\n\nUsing the expression of $R^{-1}$ of Eq.~(\ref{greengex}) \nwe write the saddle point equation in the following form\n\begin{equation}\n{1\over t}={\beta\over\sqrt{3}}=\n\int^{{2\over3}\pi}_{-{2\over3}\pi} {dk_1\over 2\pi}\n\int^{\pi\/\sqrt{3}}_{-\pi\/\sqrt{3}}\n{dk_2\over 2\pi} {1+{1\over 4}z\over \n\Delta(k)+z\left(1+{1\over 8}z\right)}\n\label{seex}\n\end{equation}\nwhere \n\begin{equation}\n\Delta(k)={8\over 9}\left[ 2 - \cos {\sqrt{3}\over 2}k_2\n\left( \cos {3\over 2}k_1 + \cos {\sqrt{3}\over 2}k_2\right) \n\right],\n\label{deltaex}\n\end{equation}\nand integrating over the momentum we arrive at\n\begin{equation}\n{1\over t}={\beta\over\sqrt{3}}=\n{1\over 2\pi} \left( 1 + {z\over 4}\right)^{1\/2}\n\rho_{\rm h}(z) \,K(\rho_{\rm h}(z)),\n\label{seex2}\n\end{equation}\nwhere\n\begin{equation}\n\rho_{\rm h}(z)= \left( 1 + {z\over 4}\right)^{1\/2} \n\left( 1 + {3z\over 8}\right)^{-3\/2} \n\left( 1 + {z\over 8}\right)^{-1\/2}. \n\label{seex3}\n\end{equation}\n\nFrom Eq.~(\ref{greengex}) we also derive\n\begin{equation}\n\chi ={t\over v_s z}={4\over 3\beta z},\n\label{chihoin}\n\end{equation}\n\begin{equation}\n\xi_{G}^2 \equiv M^{-2}_{G}=\n{1\over z},\n\label{xihoin}\n\end{equation}\n\begin{equation}\nu ={1\over 4}\left( 1 + {z\over 16}\right)^{-1}.\n\label{omhoin}\n\end{equation}\n\nThe two orthogonal wall-wall correlation functions \n$G^{(\rm w)}_{\rm v}(x)$ and $G^{(\rm w)}_{\rm h}(x)$ defined in\nEqs.~(\ref{g1}) and (\ref{g2}) allow one to define two estimators of\nthe mass-gap from their long distance behavior\n\begin{eqnarray}\nG^{(\rm w)}_{\rm v}(x)\propto e^{-\mu_{\rm v} x},\nonumber\\ \nG^{(\rm w)}_{\rm h}(x)\propto e^{-\mu_{\rm h} x},\n\label{ldbg}\n\end{eqnarray}\nwhere $x$ is the distance between the two walls in \nunits of the lattice spacing.\nIn the continuum limit $\mu_{\rm v}=\mu_{\rm h}$ \nand they both reproduce\nthe physical mass propagating in the fundamental channel.\nAs on the square and triangular lattices, it is convenient\nto define the quantities\n\begin{eqnarray}\n&&M_{\rm v}^2 = {8\over 9}\left( {\rm cosh} {3\n\mu_{\rm v}\over 2} - 1\right),\nonumber \\\n&&M_{\rm h}^2 = {8\over 3}\left( {\rm cosh} \n{\sqrt{3}\mu_{\rm h}\over 2} -1\right),\n\label{M1M2}\n\end{eqnarray}\nwhich, in the continuum limit, \nare also estimators of the mass gap. 
\nIn the large-$N$ limit one finds\n\begin{eqnarray}\n&&M_{\rm v}^2 = z\left( 1+ {z\over 8}\right),\nonumber \\\n&&M_{\rm h}^2=z.\n\label{M1M22}\n\end{eqnarray}\nNotice that in the continuum large-$N$ limit the result \n\begin{equation}\n{M\over M_{G}}=1,\n\label{momg}\n\end{equation}\nwhere $M$ is any mass-gap\nestimator, is found for all lattice formulations considered.\n\nOn the honeycomb lattice the maximal violation of full rotational\nsymmetry occurs for directions differing by a $\pi\/6$ angle, and\ntherefore, taking into account its discrete rotational symmetry, also\nby a $\pi\/2$ angle. So a good test of rotation invariance of $G(x)$ at\nlarge distance is provided by the ratio $\mu_{\rm v}\/\mu_{\rm h}$:\n\begin{equation}\n{\mu_{\rm v}\over \mu_{\rm h}}=\n{\n{\rm Arccosh}\,\left[ 1 + {9\over 8}z \left( 1 + {1\over 8}\nz\right)\right]\n\over\n\sqrt{3}{\rm Arccosh}\,\left[ 1 + {3\over 8}z \right]}\n= 1+{1\over 640} z^2\n+O\left( z^3\right).\n\label{rotscalho}\n\end{equation}\nAs expected from the better rotational symmetry of the honeycomb\nlattice, rotation invariance sets in earlier than on the square\nlattice; indeed, the $O(z)$ scaling violation is absent.\n\nA test of scaling is provided by the ratio \n\begin{equation}\n{\mu_{\rm h}\over M_{G}}=\n{2\over \sqrt{3z}}\n{\rm Arccosh}\,\left[ 1 + {3\over 8}z \right]\n= 1-{1\over 32}z + {9\over 10240} z^2\n+O\left( z^3\right),\n\label{scalviolho}\n\end{equation}\nwhere scaling violations are of the same order as those \nfound on the square lattice for the corresponding\nquantity, cfr.\ Eq.~(\ref{scalviol}).\n\nThe internal energy is given by\n\begin{equation}\nE=1 - {1\over 3\beta} + {z\over 4}\n=1 - {1\over 3\beta} + {M_{G}^2\over 4},\n\label{eneex}\n\end{equation}\nwhere the term proportional to $M_{G}^2$\nagain verifies universality.\n\nIn the weak coupling region $t\rightarrow 0$ the\nsaddle point equation leads to the asymptotic\nscaling formula\n\begin{equation}\nM_{G}\simeq 4 \exp \left( \n-{ 2\pi\over t}\right).\n\label{asyex}\n\end{equation}\nThe equations (\ref{asysq}) and\n(\ref{asyex}) are in agreement with the large-$N$ limit of the \nratio of the $\Lambda$-parameters of the square \nand honeycomb lattice formulations\ncalculated in App.~\ref{appex}, cfr.\ Eq.~(\ref{ratiolho2}),\nusing perturbation theory.\n\nIn Fig.~\ref{asyiN} we compare asymptotic scaling from\nthe various lattices considered, plotting the ratio\nbetween $M_{G}$ and the corresponding\nasymptotic formula (cfr.\ Eqs.~(\ref{asysq}),\n(\ref{asytr}) and (\ref{asyex})).\nNotice that in the large-$N$ limit corrections to \nasymptotic scaling are $O(M^2_{G})$, in that corrections\n$O(1\/\ln M_{G})$ are suppressed by a factor $1\/N$.\n\nWe have investigated the analytic structure in the complex \ntemperature-plane of the $N=\infty$ model on the honeycomb lattice\n(details are reported in App.~\ref{singNinf}).\nAs on the square and triangular lattices,\nsingularities are square-root branch points, and those closest to the\norigin are placed on the imaginary axis\nat $\bar{\beta}=\pm i 0.362095$. 
\nThe convergence radius for the strong-coupling expansion\nis associated to a quite small correlation length:\n$\\xi_{G}=1.00002$.\n\n\n\\section{Continuum results from strong coupling}\n\\label{SCA}\n\n\n\\subsection{Analysis of the series}\n\\label{analysis}\n\nIn this section we analyze the strong-coupling series of some of the\nphysical quantities which can be extracted from the two-point\nfundamental Green's function. We especially consider dimensionless\nrenormalization-group invariant ratios, whose value in the scaling\nregion, i.e., their asymptotic value for $\\beta\\rightarrow \\infty$,\nconcerns the continuum physics. Some strong-coupling series for\nselected values of $N$ are reported in the Apps.~\\ref{appscsq},\n\\ref{appsctr} and \\ref{appscex} respectively for the square,\ntriangular and honeycomb lattices. The series in the energy are\nobtained by inverting the strong-coupling series of the energy\n$E=\\beta+O(\\beta^3)$ and substituting into the original series in\n$\\beta$.\n\nOur analysis of the series of dimensionless renormalization \ngroup invariant ratios of physical quantities,\nsuch as those defined in the previous section, is \nbased on Pad\\'e approximant (PA) techniques.\nFor a review on the resummation techniques cfr.\\ \nRef.~\\cite{Guttmann}.\n\nPA's are expected to converge well to meromorphic\nanalytic functions. More flexibility is achieved by applying\nthe PA analysis to the logarithmic derivative \n(Dlog-PA analysis), and therefore enlarging the class\nof functions which can be reproduced to those having\nbranch-point singularities.\nThe accuracy and the convergence of the PA's \ndepend on how well the function considered, \nor its logarithmic derivative, can be reproduced by a \nmeromorphic analytic function, and may change when considering\ndifferent representations of the same quantity.\nBy comparing the results from different series representations\nof the same quantity one may check for possible\nsystematic errors in the resummation procedure employed.\n\nIn our analysis we constructed $[l\/m]$ PA's and Dlog-PA's\nof both the series in $\\beta$ and in the energy.\n$l,m$ are the orders of the polynomials\nrespectively at the numerator and at the denominator\nof the ratio forming the $[l\/m]$ PA of the series at hand, \nor of its logarithmic derivative (Dlog-PA).\nWhile $[l\/m]$ PA's provide directly \nthe quantity at hand, in a Dlog-PA analysis one gets \na $[l\/m]$ approximant by reconstructing the original quantity\nfrom the $[l\/m]$ PA of its logarithmic derivative,\ni.e., a $[l\/m]$ Dlog-PA of the series $A(x)=\\sum_{i=0}^\\infty a_i x^i$\nis obtained by\n\\begin{equation}\nA_{l\/m}(x) = a_0\\exp \\int_0^x \ndx' \\, {\\rm Dlog}_{l\/m} A(x').\n\\label{appA}\n\\end{equation}\nwhere ${\\rm Dlog}_{l\/m} A(x)$ indicates the $[l\/m]$ PA\nof the logarithmic derivative of $A(x)$.\n\nWe recall that a $[l\/m]$ PA uses $n=l+m$ terms of the series,\nwhile a $[l\/m]$ Dlog-PA requires $n=l+m+1$ terms.\nContinuum estimates are then obtained by evaluating the approximants\nof the energy series at $E=1$, and those of the $\\beta$\nseries at a value of $\\beta$ corresponding to a reasonably\nlarge correlation length. \n\nAs final estimates we take the average of the results from the\nquasi-diagonal (i.e., with $l\\simeq m$) PA's using all available terms\nof the series. 
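\n\nTo make the procedure concrete, the following sketch (ours, and purely\nillustrative: it is not the code used for the present analysis, and the input\ncoefficients below belong to the toy function $(1-x)^{-1\/2}$ rather than to a\nstrong-coupling series of these models) shows how an $[l\/m]$ PA can be built\nfrom the coefficients of a series and how the corresponding Dlog-PA is\nreconstructed according to Eq.~(\ref{appA}).\n\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef pade(a, l, m):\n    # [l/m] Pade approximant of sum_i a[i] x^i\n    # (needs len(a) >= l+m+1; l >= m-1 is assumed for simplicity)\n    a = np.asarray(a, dtype=float)\n    A = np.array([[a[l + i - j] for j in range(1, m + 1)]\n                  for i in range(1, m + 1)])\n    # least squares, so that (near-)degenerate systems do not abort\n    q1 = np.linalg.lstsq(A, -a[l + 1:l + m + 1], rcond=None)[0]\n    q = np.concatenate(([1.0], q1))\n    p = np.array([sum(a[i - j] * q[j] for j in range(min(i, m) + 1))\n                  for i in range(l + 1)])\n    return p, q        # numerator and denominator, lowest order first\n\ndef pade_eval(p, q, x):\n    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)\n\ndef dlog_pa(a, l, m, x):\n    # Eq. (appA): A_{l/m}(x) = a_0 exp( int_0^x [l/m] PA of A'/A )\n    a = np.asarray(a, dtype=float)\n    d = np.arange(1, len(a)) * a[1:]     # coefficients of A'(x)\n    c = np.zeros(len(a) - 1)             # coefficients of A'(x)/A(x)\n    for n in range(len(c)):\n        c[n] = (d[n] - sum(c[k] * a[n - k] for k in range(n))) / a[0]\n    p, q = pade(c, l, m)\n    integral, _ = quad(lambda t: pade_eval(p, q, t), 0.0, x)\n    return a[0] * np.exp(integral)\n\n# toy check: series of (1-x)^(-1/2), which has a square-root branch point\na = [1.0, 0.5, 0.375, 0.3125, 0.2734375, 0.24609375,\n     0.2255859375, 0.20947265625]\nprint(dlog_pa(a, 3, 3, 0.5))   # close to (1-0.5)^(-1/2) = 1.41421...\n\end{verbatim}\n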
The errors we will display are just indicative, and\ngive an idea of the spread of the results coming from different PA's.\nThey are the square root of the variance around the estimate of the\nresults, using also quasi-diagonal PA's constructed from shorter\nseries. Such errors do not always provide a reliable estimate of the\nuncertainty, which may be underestimated especially when the structure\nof the function (or of its logarithmic derivative) is not well\napproximated by a meromorphic analytic function. In such cases a more\nreliable estimate of the real uncertainty should come from the\ncomparison of results from the analysis of different series\nrepresenting the same quantity, which in general are not expected to\nhave the same analytic structure.\n\nIn the remainder of this section we present \nthe main results obtained from our strong-coupling analysis.\nMost of them will concern the $N=3$ case.\n\n\n\subsection{The square lattice}\n\label{scsq}\n\nOn the square lattice we have calculated the two-point Green's\nfunction up to $O(\beta^{21})$, from which we have extracted\nstrong-coupling series of the quantities $E$, $\chi$, $\xi_{G}^2$,\n$u$, $M_{\rm s}^2$, $M_{\rm d}^2$, already introduced in\nSec.~\ref{secsqNi}, and of the ratios $r\equiv M_{\rm s}^2\/M_{\rm\n d}^2$, $s\equiv M_{\rm s}^2\/M_{G}^2$. Some of the above series for\nselected values of $N$ are reported in App.~\ref{appscsq}. Our\nstrong-coupling series represent a considerable extension of the 14th\norder calculations of Ref.~\cite{Luscher}, performed by means of a\nlinked cluster expansion, which have been re-examined and analyzed in\nRef.~\cite{Butera}. We also mention recent works where the linked\ncluster expansion technique has been further developed and\ncalculations of series up to 18th order~\cite{Reisz} and 19th\norder~\cite{Butera2} for $d=2,3,4$ have been announced.\n\nIn order to investigate the analytic structure in the complex\n$\beta$-plane we have performed a study of the singularities of the\nDlog-PA's of the strong-coupling series of $\chi$ and $\xi_{G}^2$. As\nexpected from asymptotic freedom, no indication of the presence of a\ncritical point at a finite real value of $\beta$ emerges from the\nstrong-coupling analysis of $N\geq 3$ models, confirming earlier\nstrong-coupling studies~\cite{Butera}. The singularities closest to\nthe origin, emerging from the Dlog-PA analysis of $\chi$ and\n$\xi_{G}^2$, are located at a pair of complex conjugate points, rather\nclose to the real axis in the $N=3$ case (where $\bar{\beta}\simeq\n0.59\pm i 0.16$) and moving, when increasing $N$, toward the\n$N=\infty$ limiting points $\bar{\beta}=0.32162\,(1\pm i)$. In Table\n\ref{zeroes} such singularities are reported for some values of $N$.\nThe singularity closest to the origin determines the convergence\nradius of the corresponding strong-coupling series. For example, for\n$N=3$ the strong-coupling convergence radius turns out to be\n$\beta_r\simeq 0.61$, which corresponds to a quite large correlation\nlength $\xi\simeq 65$. We recall that the partition function on the\nsquare lattice has the symmetry $\beta \rightarrow -\beta$, which must\nalso be reflected in the locations of its complex singularities.\n\nBy rotation invariance the ratio \n$r\equiv M_{\rm s}^2\/M_{\rm d}^2$ should go to one\nin the continuum limit. Therefore the analysis of this ratio\nshould be considered as a test of the procedure employed \nto estimate continuum physical quantities. 
\nIn the large-$N$ limit $r=1$ at all values of $\\beta$.\nThis is not anymore true \nat finite $N$, where the strong-coupling series of \n$M_{\\rm s}^2$ and $M_{\\rm d}^2$ differ from each other, \nas shown in App.~\\ref{appscsq}.\nFrom $G(x)$ up to $O(\\beta^{21})$ we could calculate\nthe ratio $r$ up to $O(\\beta^{14})$. \nThe results of our analysis of the series of $r$ for $N=3$\nare summarized in Table \\ref{sqr}. There we report\nthe values of the PA's and Dlog-PA's of the $E$-series at $E=1$, \nand of those of the $\\beta$-series at $\\beta=0.55$,\nwhich corresponds to a reasonably large \ncorrelation length $\\xi\\simeq 25$.\nWe considered PA's and Dlog-PA's with \n$l+m\\geq 11$ and $m\\geq l\\geq 5$.\nThe most precise determinations of $r^*$,\nthe value of $r$ at the continuum limit, come from Dlog-PA's,\nwhose final estimates are $r^*=1.0000(12)$ from the $E$-approximants,\nand $r^*=1.0002(6)$ from the $\\beta$-approximants (at $\\beta=0.55$).\nThe precision of these results is remarkable.\n\nFor all $N\\geq 3$ the violation of rotation invariance in the large\ndistance behavior of $G(x)$, monitored by the ratio \n$\\mu_{\\rm s}\/\\mu_{\\rm d}$, turns out quantitatively very close to that\nat $N=\\infty$ when considered as function of $\\xi_G$ (in a plot the\n$N=3$ curve of $\\mu_{\\rm s}\/\\mu_{\\rm d}$ versus $\\xi_{G}$ as obtained\nfrom the strong-coupling analysis would be hardly distinguishable from\nthe exact $N=\\infty$ one). $\\mu_{\\rm s}\/\\mu_{\\rm d}$ is one within\nabout one per mille already at $\\xi\\simeq 4$.\n\nCalculating a few more components of $G(x)$ at larger orders\n(i.e., those involved by the wall-wall correlation\nfunction at distance 6 and 7 respectively up to $O(\\beta^{22})$\nand $O(\\beta^{23})$),\nwe computed the ratio \n\\begin{equation}\ns\\equiv {M_{\\rm s}^2\\over M_{G}^2}\n\\label{sdef}\n\\end{equation}\nup to $O(\\beta^{16})$, by applying the technique described in\nRefs.~\\cite{SCUN1,RV}. We recall that at $N=\\infty$ we found $s=1$\nindependently of $\\beta$. No exact results are known about the\ncontinuum limit $s^*$ of the ratio $s$, except for its large-$N$\nlimit: $s^*=1$. Both large-$N$ and Monte Carlo estimates indicate a\nvalue very close to one. From a $1\/N$ expansion~\\cite{Flyv,CR}:\n\\begin{equation}\ns^*= 1 - {0.006450\\over N} + O\\left( {1\\over N^2}\\right).\n\\label{largeNs}\n\\end{equation}\nMonte Carlo simulations at $N=3$~\\cite{Meyer}\ngave $s=0.9988(16)$ at $\\beta={1.7\/3}=0.5666...$ ($\\xi\\simeq 35$), and\n$s=0.9982(18)$ at $\\beta=0.6$ ($\\xi\\simeq 65$),\nleading to the estimate $s^*=0.9985(12)$.\n\nIn Table \\ref{sqs} we report, for $N=3$, the values of PA's and\nDlog-PA's of the energy and $\\beta$ series of $s$ respectively at\n$E=1$ and at $\\beta=0.55$. We considered PA's and Dlog-PA's with\n$l+m\\geq 13$ and $m\\geq l\\geq 5$. Combining PA and Dlog-PA results,\nour final estimates are $s^*=0.998(3)$ from the $E$-approximants, and\n$s^*=0.998(1)$ from the $\\beta$ approximants evaluated at\n$\\beta=0.55$, in full agreement with the estimates from the $1\/N$\nexpansion and Monte Carlo simulations. 
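\nFor orientation, truncating Eq.~(\ref{largeNs}) at order $1\/N$ gives\n$s^*\simeq 1-0.006450\/3\simeq 0.9979$ at $N=3$, to be compared with the\nMonte Carlo and strong-coupling determinations quoted above.\n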
With increasing $N$, the\ncentral estimate of $s^*$ tends to be closer to one.\n\nThe scaling-violation pattern of the quantity $\mu_{\rm s}\/M_{G}$ for\n$N=3$ is similar to the pattern for $N=\infty$ (cfr.\ \nEq.~(\ref{scalviol})), i.e., it is stable within a few per\nmille for $\xi\gtrsim 5$.\n\nAnother dimensionless renormalization-group invariant quantity we have\nconsidered is $u\equiv m_2^2\/(\chi m_4)$, whose large-$N$ limit has\nbeen calculated in the previous section, cfr.\ Eq.~(\ref{omsqin}). \nAt finite $N$ its continuum limit $u^*$ is not known. \nFrom the expression of the self-energy calculated up to\n$O(1\/N)$~\cite{Flyv,CR,CRselfenergy}, one can obtain\n\begin{equation}\nu^*= {1\over 4}\left[ 1 - {0.006198\over N} \n + O\left( {1\over N^2}\right) \right].\n\label{largeNu}\n\end{equation}\nIt is interesting to note that the $O(1\/N)$ correction in\nEqs.~(\ref{largeNs}) and (\ref{largeNu}) is very small.\n\nAt $N=3$ the analysis of the $O(\beta^{21})$ strong-coupling series of\n$u$ detected a simple pole close to the origin at $\beta_0=-0.085545$\nfor the $\beta$-series, and at $E_0=-0.086418$ for the energy series,\ncorresponding to $M^2_{G}=-16.000$, which, within the precision of our\nstrong-coupling estimate, is also the location of the pole in the\ncorresponding $N=\infty$ expression (\ref{omsqin}). Being a simple\npole, this singularity can be perfectly reproduced by a standard PA\nanalysis, and indeed we found PA's to be slightly more stable than\nDlog-PA's in the analysis of $u$. The results concerning $N=3$,\nreported in Table \ref{sqom} (for PA's with $l+m\geq 16$ and $m\geq\nl\geq 8$), lead to the estimates $u^*=0.2498(6)$ from the energy\nanalysis, and $u^*=0.2499(5)$ from the $\beta$ analysis (at\n$\beta=0.55$). The agreement with the large-$N$ formula (\ref{largeNu})\nis satisfactory.\nIn Fig.~\ref{figomsq} the curve $u(E)$ as obtained\nfrom the $[10\/10]$ PA and the exact curve $u(E)$ at $N=\infty$ (cfr.\ \nEq.~(\ref{omsqin})) are plotted, showing almost no differences.\n\nIn Table \ref{sum} we give a\nsummary of the determinations of $r^*$, $s^*$, and $u^*$ \nfrom PA's and Dlog-PA's of the energy and $\beta$-series.\n\nWe mention that we also tried to analyze series in the variable\n\begin{equation}\nz={I_{N\/2}(N\beta)\over I_{N\/2-1}(N\beta)},\n\label{chcoeff}\n\end{equation}\nwhich is the character coefficient of the fundamental representation.\nAs for the $E$-series, the continuum limit should be reached at a\nfinite value $z\rightarrow 1$, and estimates of $r^*$, $s^*$ and $u^*$\nmay be obtained by evaluating the approximants of the corresponding\n$z$-series at $z=1$. We obtained results much less precise than those\nfrom the analysis of the $E$-series. Perhaps because of the\nthermodynamic meaning of the internal energy, resummations by PA's\nand Dlog-PA's of the $E$-series turn out to be much more effective,\nproviding rather precise results even at the continuum limit $E=1$.\n\nThe strong-coupling approach turns out to be less effective for the\npurpose of checking asymptotic scaling. In Table \ref{mc}, we\ncompare, for $N=3,4,8$, $\xi_{G}$ as obtained from the plain \n21st order series of $\xi_{G}^2$ and from its Dlog-PA's with\nsome Monte Carlo results available in the literature. Resummation\nby integral approximants~\cite{IA} provides results substantially\nequivalent to those of Dlog-PA's. 
For $N=3$ Dlog-PA's follow\n Monte Carlo data reasonably well up to about the convergence radius\n$\\beta_r\\simeq 0.6$ of the strong-coupling expansion, but they fail\nbeyond $\\beta_r$. On the other hand it is well known that for $N=3$\nthe asymptotic scaling regime is set at larger\n$\\beta$-values~\\cite{CEPS}. More sophisticated analysis can be found\nin Refs.~\\cite{Butera,Bonnier}, but they do not seem to lead to a\nconclusive result about the asymptotic freedom prediction in the\n$O(3)$ $\\sigma$ model. At larger $N$, the convergence radius\ndecreases, but on the other hand the asymptotic scaling regime should\nbe reached earlier. At $N=4$ and $N=8$ the 21st order plain\nseries of $\\xi_{G}^2$ provides already quite good estimates of $\\xi_G$\nwithin the convergence radius when compared with Monte Carlo results.\nAgain Pad\\'e-type resummation fails for $\\beta>\\beta_r$. We mention\nthat at $N=4$ the convergence radius $\\beta_r\\simeq 0.60$ corresponds\nto $\\xi_G\\simeq 25$, and at $N=8$ $\\beta_r\\simeq 0.55$ corresponds to\n$\\xi_G\\simeq 8$.\n\nIn order to check asymptotic scaling we consider the ratio\n$\\Lambda_{\\rm s}\/\\Lambda_{2l}$, where\n$\\Lambda_{\\rm s}$ is the effective $\\Lambda$-parameter which\ncan be extracted by\n\\begin{equation}\n\\Lambda_{\\rm s}\\equiv\\left( {\\Lambda_{\\rm s}\\over M}\\right)\nM={M\\over R_{\\rm s}},\n\\label{effla}\n\\end{equation}\nwhere $M$ is \nan estimator of the mass-gap,\n$R_{\\rm s}$ is the mass-$\\Lambda$ parameter ratio\nin the square lattice nearest-neighbor formulation~\\cite{Hasenfratz}\n\\begin{equation}\nR_{\\rm s}= R_{\\overline{\\rm MS}}\\times\n\\left( {\\Lambda_{\\overline{\\rm MS}}\\over \n\\Lambda_{\\rm s}}\\right)=\\left( {8\\over e}\\right)^{1\\over N-2}\n{1\\over \\Gamma\\left( 1 + {1\\over N-2}\\right)}\\times\n\\sqrt{32} \\exp\\left[ {\\pi\\over 2(N-2)}\\right],\n\\label{RL}\n\\end{equation}\nand $\\Lambda_{2l}$ is the corresponding two-loop formula\n\\begin{equation}\n\\Lambda_{2l}= \\left( {2\\pi N \\over N-2}\\beta\\right)^{1\\over N-2}\n\\exp \\left( -{2\\pi N\\over N-2}\\beta\\right).\n\\label{twoloopla}\n\\end{equation}\nThe ratio $\\Lambda_{\\rm s}\/\\Lambda_{2l}$ should go to one in the\ncontinuum limit, according to asymptotic scaling. The available\nseries of $M_G^2$ are longer than any series of the mass-gap\nestimators; therefore, neglecting the very small difference between\n$M_{G}$ and $M$ (we have seen that for $N\\geq 3$ $(M_{G}-M)\/M\\lesssim\n10^{-3}$ in the continuum limit), for which formula (\\ref{RL})\nholds, we use $M_{G}$ as estimator of $M$. In Fig.~\\ref{asysc} we\nplot $\\Lambda_{\\rm s}\/ \\Lambda_{2l}$ for various values of $N$,\n$N=3,4,8$, and for comparison the exact curve for $N=\\infty$. 
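\nIn practice, the ratio can be evaluated as in the following sketch (ours, for\nillustration only; the input values are round numbers suggested by the\nstrong-coupling estimates above, not entries of the figure):\n\begin{verbatim}\nimport math\n\ndef lambda_2l(beta, N):\n    # two-loop lattice scale, Eq. (twoloopla)\n    b = 2.0 * math.pi * N / (N - 2) * beta\n    return b ** (1.0 / (N - 2)) * math.exp(-b)\n\ndef R_s(N):\n    # mass-gap / Lambda-parameter ratio of Eq. (RL)\n    return ((8.0 / math.e) ** (1.0 / (N - 2))\n            / math.gamma(1.0 + 1.0 / (N - 2))\n            * math.sqrt(32.0) * math.exp(math.pi / (2.0 * (N - 2))))\n\ndef scaling_ratio(beta, M_G, N):\n    # Lambda_s / Lambda_2l, cfr. Eq. (effla); -> 1 if asymptotic scaling holds\n    return (M_G / R_s(N)) / lambda_2l(beta, N)\n\n# illustrative call: N = 3 at beta = 0.55, with M_G = 1/xi_G and xi_G ~ 25\nprint(scaling_ratio(0.55, 1.0 / 25.0, N=3))\n\end{verbatim}\n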
As\nalready noted in Ref.~\cite{Wolff} by a Monte Carlo study, for\n$N=3,4$ at $\xi\simeq 10$ the asymptotic scaling regime is still far away\n(about 50\% off at $N=3$ and $15\%$ at $N=4$), while for $N=8$ it is\nverified for $\xi\gtrsim 4$ within a few per cent, and notice that the\nconvergence radius $\beta_r\simeq 0.55$ corresponds to $\xi\simeq 8$.\nIn any case, with increasing $N$ the curves of $\Lambda_{\rm s}\/ \Lambda_{2l}$\nclearly approach the exact $N=\infty$ limit.\n \n\n\subsection{The triangular lattice}\n\label{sctr}\n\nOn the triangular lattice we have calculated the two-point Green's\nfunction up to $O(\beta^{15})$, from which we have extracted\nstrong-coupling series of the quantities $E$, $\chi$, $\xi_{G}^2$,\n$u$, $M_{\rm t}^2$, already introduced in Sec.~\ref{sectrNi}, and of\nthe ratio $s\equiv M_{\rm t}^2\/M_{G}^2$. Some of the above series\nfor $N=3$ are reported in App.~\ref{appsctr}.\n\nAs for ${\rm O}(N)$ $\sigma$ models on the square lattice, \nno indication of the presence\nof a critical point at a finite real value of $\beta$ emerges\nfrom the strong-coupling analysis for $N\geq 3$. \nBy a Dlog-PA analysis of the $O(\beta^{15})$\nstrong-coupling series of $\chi$ and $\xi_{G}^2$ at $N=3$, \nwe found that the singularities closest to the origin are located at \n$\bar{\beta}\simeq 0.358\pm i 0.085$, giving rise to a convergence\nradius $\beta_r\simeq 0.37$, which should correspond to a rather\nlarge correlation length: $\xi_{G}\simeq 70$.\nWith increasing $N$ such singularities move toward their\n$N=\infty$ limit $\bar{\beta}=0.206711\pm\,i\,0.181627$.\nSome details of this analysis are given in Table \ref{zeroes}.\n\nIn our analysis of dimensionless quantities we considered, as on the\nsquare lattice, both the series in the energy and in $\beta$. The\nestimates concerning the continuum limit are obtained by evaluating\nthe approximants of the energy series at $E=1$, and those of the\n$\beta$-series at a $\beta$ associated with a reasonably large\ncorrelation length. For $N=3$ we chose $\beta=0.33$, whose\ncorresponding correlation length should be $\xi\simeq 22$, according\nto a strong-coupling estimate.\n\nCalculating a few more components of $G(x)$ at larger orders (i.e.,\nthose involved by the wall-wall correlation function at distance\n$\sqrt{3}\/2 \times 5$ up to $O(\beta^{16})$), we computed the\nratio $s\equiv M_{\rm t}^2\/M_{G}^2$ up to $O(\beta^{11})$\n\cite{SCUN1,RV}. For $N=3$ the analysis of the strong-coupling series\nof $s$ (some details are given in Table \ref{trs}) leads to the\nestimate $s^*=0.998(3)$ from the energy approach, and $s^*=0.998(1)$\nby evaluating the approximants at $\beta=0.33$ (we considered PA's and\nDlog-PA's with $l+m\geq 8$ and $m\geq l\geq 4$). Such results are in\nperfect agreement with those found for the square lattice.\n\nPA's and Dlog-PA's (with \n$l+m\geq 11$ and $m\geq l\geq 5$)\nof the strong-coupling series of $u$ expressed \nin terms of the energy, evaluated at $E=1$, lead to the estimate\n$u^*=0.249(1)$ at $N=3$. The analysis of the series in $\beta$\ngives $u^*=0.2502(4)$.\nAgain universality is satisfied. \n\nA summary of the results on the triangular lattice can be\nfound in Table \ref{sum}.\n\nAs on the square lattice we checked asymptotic scaling by looking at\nthe ratio $\Lambda_{\rm t}\/ \Lambda_{2l}$, where $\Lambda_{\rm t}$ is\nthe effective $\Lambda$-parameter on the triangular lattice, defined\nin analogy with Eq.~(\ref{effla}). 
Besides\nthe formulas concerning asymptotic scaling given for the square\nlattice case, cfr.\ Eqs.~(\ref{effla}-\ref{twoloopla}), we need here\nthe $\Lambda$-parameter ratio $\Lambda_{\rm t}\/\Lambda_{\rm s}$\ncalculated in App.~\ref{apptr}, cfr.\ Eq.~(\ref{ratioltr2}). We again\nused $M_G$ as approximate estimator of the mass-gap $M$.\nFig.~\ref{asysctr} shows curves of $\Lambda_{\rm t}\/ \Lambda_{2l}$ for\nvarious values of $N$, $N=3,4,8$, and for comparison the exact curve\nfor $N=\infty$. Such results are similar to those found on the square\nlattice: for $N=3,4$ the asymptotic scaling regime is still far away at\n$\xi_G\simeq 10$, but it is verified within a few per cent at $N=8$,\nwhere the correlation length corresponding to the strong-coupling\nconvergence radius is $\xi\simeq 8$.\n\n\n\subsection{The honeycomb lattice}\n\label{scho}\n\nOn the honeycomb lattice we have calculated \nthe two-point Green's function \nup to $O(\beta^{30})$, from which we extracted \nstrong-coupling series of the quantities $E$, $\chi$, \n$\xi_{G}^2$, $u$, $M_{\rm v}^2$, $M_{\rm h}^2$, \nalready introduced in Sec.~\ref{iNhl},\nand of the ratios $r\equiv M_{\rm v}^2\/M_{\rm h}^2$,\n$s\equiv M_{\rm h}^2\/M_{G}^2$.\nSome of the above series for $N=3$ are reported \nin App.~\ref{appscex}.\n\nAt $N=3$ a Dlog-PA analysis of the $O(\beta^{30})$\nstrong-coupling series of $\chi$ and $\xi_{G}^2$ \ndetected two pairs of complex conjugate singularities,\none on the imaginary axis at $\bar{\beta}\simeq\pm i0.460$, \nquite close to the origin, and the other \nat $\bar{\beta}\simeq 0.93\pm i 0.29$.\nThe singularity on the imaginary axis\nleads to a rather small convergence radius in terms\nof the correlation length; indeed at $\beta\simeq 0.46$ \nwe estimate $\xi\simeq 2.6$. \nAt $N=4$ we found $\bar{\beta}\simeq\pm i0.444$, \nand $\bar{\beta}\simeq 0.88\pm i 0.41$.\nAt larger $N$ the singularities closest\nto the origin converge toward the $N=\infty$ value\n$\bar{\beta}=\pm i \,0.362095$.\nNotice that, as on the square lattice, the partition function on the\nhoneycomb lattice enjoys the symmetry $\beta\rightarrow -\beta$.\n\nAgain we analyzed both the series in the energy and in $\beta$.\nThe estimates concerning the continuum limit are obtained by evaluating\nthe approximants of the energy series at $E=1$, and \nthose of the $\beta$-series at $\beta=0.85$ for the $N=3$ case,\nwhich should correspond to $\xi\simeq 22$.\n\nBy rotation invariance the ratio \n$r\equiv M_{\rm v}^2\/M_{\rm h}^2$ should go to one\nin the continuum limit. \nFrom $G(x)$ up to $O(\beta^{30})$ we extracted\nthe ratio $r$ up to $O(\beta^{20})$. \nAgain PA's and Dlog-PA's \nof the energy series evaluated at $E=1$\nand of the $\beta$-series evaluated at $\beta=0.85$\n(some details are given in Table \ref{hor})\ngive the correct result in the continuum limit: \nrespectively $r^*=1.00(1)$ and $r^*=1.001(1)$ at $N=3$\n(we considered PA's and Dlog-PA's with \n$l+m\geq 16$ and $m\geq l\geq 7$).\n\nCalculating a few more components of $G(x)$ at larger orders (i.e.,\nthose involved by $G^{({\rm w})}_{\rm h}(x)$, defined in\nEq.~(\ref{g2}), at distances $x=\sqrt{3}\/2 \times 9$ and\n$x=\sqrt{3}\/2 \times 10$ respectively at $O(\beta^{34})$ and\n$O(\beta^{35})$), we computed the ratio $s\equiv M_{\rm h}^2\/M_{G}^2$\nup to $O(\beta^{25})$ \cite{SCUN1,RV}. 
For $N=3$ the analysis of the\nstrong-coupling series of $s$ gives $s^*=0.999(3)$ from the\n$E$-approximants and $s^*=0.9987(5)$ from the $\beta$-approximants\nevaluated at $\beta=0.85$ (some details are given in Table \ref{hos}),\nin agreement with the result found on the other lattices. We\nconsidered PA's and Dlog-PA's with $l+m\geq 22$, $m\geq l\geq 10$.\n\nThe analysis of the energy series of $u$ confirms universality:\nPA's and Dlog-PA's (with $l\leq m$,\n$l+m\geq 26$, $l\geq 12$) of the energy series\nevaluated at $E=1$ give $u^*=0.249(3)$,\nand those of the $\beta$-series at $\beta=0.85$ give $u^*=0.2491(3)$.\nAs for the square lattice, the curve $u(E)$ obtained \nfrom the PA's at $N=3$ and the exact curve $u(E)$ \nat $N=\infty$, cfr.\ Eq.~(\ref{omhoin}), \nwould be hardly distinguishable if plotted together.\n\nAs noted above, the convergence radius $\beta_r$ is small in terms of\nthe correlation length for all values of $N$: it goes from $\xi\simeq 1.0$\nat $N=\infty$ to $\xi\simeq 2.6$ at $N=3$. Nevertheless in this case\nDlog-PA resummations seem to give reasonable estimates of $\xi_G$ even\nbeyond $\beta_r$ (apparently up to about the next-closest singularity\nto the origin). In Fig.~\ref{asyscho} we show curves of \n$\Lambda_{\rm h}\/ \Lambda_{2l}$, where $\Lambda_{\rm h}$ is the\neffective $\Lambda$-parameter on the honeycomb lattice, for various\nvalues of $N$, $N=3,4,8$, and for comparison the exact curve for\n$N=\infty$. The necessary ratio of $\Lambda$-parameters has been\ncalculated in App.~\ref{appex}, cfr.\ Eqs.~(\ref{ratiolho}) and\n(\ref{ratiolho2}).\n\n\n\subsection{Conclusions}\n\label{concl}\n\nWe have shown that quite accurate continuum limit estimates of\ndimensionless renormalization-group invariant quantities, such as $s$\nand $u$ (cfr.\ Eqs.~(\ref{sdef}) and (\ref{omsqin})), can be obtained\nby analyzing their strong-coupling series and applying resummation\ntechniques both in the inverse temperature variable $\beta$ and in the\nenergy variable $E$. In particular, in order to get continuum\nestimates from the analysis of the energy series, we evaluated the\ncorresponding PA's and Dlog-PA's at $E=1$, i.e., at the continuum\nlimit. This idea was already applied to the calculation of the\ncontinuum limit of the zero-momentum four-point coupling $g_r$,\nobtaining accurate results~\cite{gr}. 
These results look very\npromising in view of a possible application of such strong-coupling\nanalysis to four-dimensional gauge theories.\n\nThe summary in Table \\ref{sum} of our $N=3$ strong-coupling results\nfor the continuum values $r^*$, $s^*$ and $u^*$,\nfor all lattices we have considered, shows that\nuniversality is verified within a precision of few per mille, leading\n to the final estimates $s^*\\simeq 0.9985$ and $u^*\\simeq 0.2495$\nwith an uncertainty of about one per mille.\nThe comparison with the exact $N=\\infty$ results, $s^*=1$ and\n$u^*=1\/4$, shows that quantities like $s^*$ and $u^*$, which describe the\nsmall momentum universal behavior of $\\widetilde{G}(p)$ in the\ncontinuum limit, change very little and apparently monotonically from\n$N=3$ to $N=\\infty$, suggesting that\nat $N=3$ $\\widetilde{G}(p)$ is essentially Gaussian at small momentum.\n\nLet us make this statement more precise.\nIn the critical region\none can expand the dimensionless \nrenormalization-group invariant function\n\\begin{equation}\nL(p^2\/M_G^2)\\equiv {\\widetilde{G}(0)\\over \\widetilde{G}(p)}\n\\label{elle}\n\\end{equation}\naround $y\\equiv p^2\/M_G^2=0$, writing\n\\begin{eqnarray}\nL(y)&=&1 + y + l(y)\\nonumber \\\\\nl(y)&=&\\sum_{i=2}^\\infty c_i y^i.\n\\label{lexp}\n\\end{eqnarray}\n$l(y)$ parameterizes the difference from a generalized Gaussian\npropagator. One can easily relate the coefficients $c_i$ of the\nexpansion (\\ref{lexp}) to dimensionless renormalization-group\ninvariant ratios involving the moments $m_{2j}$ of $G(x)$.\n\nIt is worth observing that\n\\begin{equation}\nu^* = {1\\over 4 (1 - c_2)}.\n\\label{uc2}\n\\end{equation} \nIn the large-$N$ limit the function $l(y)$ is\ndepressed by a factor $1\/N$.\nMoreover the coefficients of its low-momentum expansion are very small.\nThey can be derived from the $1\/N$ expansion of the \nself-energy~\\cite{Flyv,CR,CRselfenergy}. \nIn the leading order in the $1\/N$ expansion one finds \n\\begin{eqnarray}\nc_{2}&\\simeq&-{0.00619816...\\over N},\\nonumber \\\\ \nc_{3}&\\simeq&{0.00023845...\\over N},\\nonumber \\\\\nc_{4}&\\simeq&-{0.00001344...\\over N},\\nonumber \\\\ \nc_{5}&\\simeq&{0.00000090...\\over N},\n\\label{c1N}\n\\end{eqnarray}\netc.. For sufficiently large $N$ we then expect\n\\begin{equation}\nc_i\\ll c_2\\ll 1 \\;\\;\\;\\;\\;\\;\\;\\;\\;{\\rm for}\\;\\;\\;\\; i\\geq 3.\n\\label{crel}\n\\end{equation} \nAs a consequence, since \nthe zero of $L(y)$ closest \nto the origin is $y_0=-s^*$, the value of $s^*$ \nis substantially fixed by the term proportional to\n$(p^2)^2$ in the inverse propagator, through the approximate relation \n\\begin{equation}\ns^*-1\\simeq c_2 \\simeq 4 u^* - 1 .\n\\label{s4u}\n\\end{equation}\nIndeed in the large-$N$ limit one finds from Eqs.~(\\ref{largeNs}) and \n(\\ref{largeNu})\n\\begin{equation}\ns^*-4u^*={-0.000252\\over N}+O\\left( {1\\over N^2}\\right),\n\\end{equation}\nwhere the coefficient of the $1\/N$ term is much\nsmaller than those of $s^*$ and $u^*$.\n\nFrom this large-$N$ analysis one expects \nthat even at $N=3$ the function $l(y)$ be small in a relatively large\nregion around $y=0$. 
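\nA rough numerical illustration, obtained by keeping only the leading $1\/N$\ncoefficients of Eq.~(\ref{c1N}) (and therefore merely indicative at $N=3$),\nis the following:\n\begin{verbatim}\n# l(y) at N = 3 from the leading 1/N coefficients of Eq. (c1N);\n# higher orders in 1/N and the terms with i > 5 are neglected\nN = 3.0\nc = {2: -0.00619816 / N, 3: 0.00023845 / N,\n     4: -0.00001344 / N, 5: 0.00000090 / N}\nfor y in (0.25, 0.5, 1.0):\n    print(y, sum(ci * y**i for i, ci in c.items()))\n# |l(y)| stays of order 10^-3 or below for |y| <~ 1\n\end{verbatim}\n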
\nThis is confirmed by the strong-coupling estimate of $u^*$,\nwhich, using Eq.~(\\ref{uc2}), leads to $c_2 \\simeq -0.002$.\nFurthermore, the comparison of the estimates of $s^*$ and $u^*$\nshows that $s^*-4u^*\\simeq 0$ within the precision of our analysis,\nconsistently with Eq.~(\\ref{crel}).\nIt is interesting to note that similar results have been\nobtained for the models with $N\\leq 2$,\nand in particular for the Ising Model, i.e., for $N=1$, where the\nstrong-coupling analysis turns out to be very precise~\\cite{Nm2}.\n\nWe can conclude that the two-point Green function for all $N\\geq 3$ is\nalmost Gaussian in a large region around $p^2=0$, i.e.,\n$|p^2\/M_G^2|\\lesssim 1$, and the small corrections to Gaussian\nbehavior are essentially determined by the $(p^2)^2$ term in the\nexpansion of the inverse propagator.\n\n\n\nDifferences from Gaussian behavior will become important at\nsufficiently large momenta, as predicted by simple weak coupling\ncalculations supplemented by a renormalization group resummation.\nIndeed the asymptotic behavior of $G(x)$ for $x\\ll1\/M$ (where $M$ is\nthe mass gap) turns out to be\n\\begin{equation}\nG(x) \\sim \\left(\\ln{1\\over xM}\\right)^{\\gamma_1\/b_0}, \\qquad\n{\\gamma_1\\over b_0} = {N-1\\over N-2} \\,;\n\\end{equation}\n$b_0$ and $\\gamma_1$ are the first coefficients of the\n$\\beta$-function and of the anomalous dimension of the fundamental\nfield $\\vec s$ respectively. Let us remind that a free Gaussian\nGreen's function behaves like $\\ln (1\/x)$.\nImportant differences are\npresent in other Green's functions even at small momentum, as shown in\nthe analysis of the four-point zero-momentum renormalized coupling,\nwhose definition involves the zero-momentum four-point correlation\nfunction (\\ref{chi4})~\\cite{gr}.\nHowever monotonicity in $N$ seems to be a persistent feature.\n\nOur strong-coupling calculations allow also a check of asymptotic\nscaling for a relatively large range of correlation lengths. For all\nlattices considered the ratio between the effective\n$\\Lambda$-parameter extracted from the mass-gap and its two-loop\napproximation, $\\Lambda\/\\Lambda_{2l}$, when considered as function of\n$\\xi_G$, shows similar patterns with changing $N$. Confirming earlier\nMonte Carlo studies, large discrepancies from asymptotic scaling are\nobserved for $N=3$ in the range of correlation lengths we could\nreliably investigate, i.e., $\\xi\\lesssim 50$. At $N=8$ and for all\nlattices considered, asymptotic scaling within a few per cent is\nverified for $\\xi\\gtrsim 4$, and increasing $N$ the ratio\n$\\Lambda\/\\Lambda_{2l}$ approaches smoothly its $N=\\infty$ limit.\n\n\n\\acknowledgments\n\nIt is a pleasure to thank B.~Alles\nfor useful and stimulating discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}