diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeeyy" "b/data_all_eng_slimpj/shuffled/split2/finalzzeeyy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeeyy" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nCitizen science projects \\cite{sullivan2014ebird,larrivee2014ebutterfly,seibert2017engaging} play a critical role in collecting rich datasets for scientific research, especially in computational sustainability \\cite{gomes2009computational}, because they offer an effective low-cost way to collect large datasets for non-commercial research.\nThe success of these projects depends heavily on the public's intrinsic motivations as well as the enjoyment of the participants, which engages them to volunteer their efforts \\cite{bonney2009citizen}.\nTherefore, citizen science projects usually have few restrictions, providing as much freedom as possible to engage volunteers, so that they can decide where, when, and how to collect data, based on their interests. \nAs a result, the data collected by volunteers are often biased, \nand align more with their personal preferences, instead of providing systematic observations across various experimental settings.\nFor example, personal convenience has a significant impact on the data collection process, since the participants contribute their time and effort voluntarily.\nConsequently, most data are collected in or near urban areas and along major roads.\nOn the other hand, most machine learning algorithms are constructed under the assumption that the training data are governed by the same data distribution as that on which the model will later be tested.\nAs a result, the model trained with biased data would perform poorly when it is evaluated with unbiased test data \ndesigned for the scientific objectives.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=7cm]{map_light.png}\n\\caption{Highly biased distribution of \\textit{eBird} observations in the continental U.S. Submissions are concentrated in or near urban areas and along major roads.}\n\\label{fig:map}\n\\end{figure}\n\nIncentive mechanisms to shift the efforts of volunteers into the more unexplored areas have been proposed \\cite{xue2016avicaching}, in order to improve the scientific quality of the data.\nHowever, the scalability of those mechanisms is limited by the budget, and it takes a long time to realize the payback. \nFurthermore, the type of locality also restricts the distribution of collected data. For example, it is difficult to incentivize volunteers to go to remote places, such as deserts or primal forests, to collect data.\nTherefore, a tactical learning scheme is needed to bridge the gap between biased data and the desired scientific objectives.\n\nIn general, given only the labeled training data (collected by volunteers) and the unlabeled test data (designed for evaluating the scientific objectives), we set out to: \n(i) learn the shift between the training data distribution $P$ (associated with PDF $p(\\mathbf{x},y)$) and the test data distribution $Q$ (associated with PDF $q(\\mathbf{x},y)$), and\n(ii) compensate for that shift so that the model will perform well on the test data. 
\nTo achieve these objectives, we needed to make assumptions on how to bring the training distribution into alignment with the test distribution.\nTwo candidates are \\textit{covariate shift} \\cite{bickel2009discriminative}, where $p(y|\\mathbf{x})=q(y|\\mathbf{x})$, and \\textit{label shift} \\cite {lipton2018detecting}, where $p(\\mathbf{x}|y)=q(\\mathbf{x}|y)$. \nMotivated by the field observations in the \\textit{eBird} project, where the habitat preference $p(y|\\mathbf{x})$ of a given species remains the same throughout a season, while the occurrence records $p(\\mathbf{x}|y)$ vary significantly because of the volunteers' preferences, we focused on the \\textit{covariate shift} setting.\nInformally, covariate shift captures the change in the distribution of the feature (covariate) vector $\\mathbf{x}$.\nFormally, under covariate shift, we can factor the distributions as follows:\n\\allowdisplaybreaks\n\\begin{gather}\n p(\\mathbf{x},y) = p(y|\\mathbf{x})p(\\mathbf{x}) \\nonumber\\\\\n q(\\mathbf{x},y) = q(y|\\mathbf{x})q(\\mathbf{x}) \\nonumber\\\\\n p(y|\\mathbf{x})=q(y|\\mathbf{x}) \\Longrightarrow \n \\frac{q(\\mathbf{x},y)}{p(\\mathbf{x},y)}=\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}\n \\label{eqn:covariate}\n\\end{gather}\nThus we were able to learn the shift from $P$ to $Q$ and correct our model by quantifying the test-to-training \\textbf{shift factor} $q(\\mathbf{x})\/p(\\mathbf{x})$.\n\n\\textbf{Our contribution is} \\textbf{an end-to-end learning scheme, which we call the Shift Compensation Network (SCN), that estimates the shift factor while re-weighting the training data to correct the model.} \nSpecifically, SCN (i) estimates the shift factor by learning a discriminator that distinguishes between the samples drawn from the training distribution and those drawn from the test distribution, and (ii) aligns the mean of the weighted feature space of the training data with the feature space of the test data, which guides the discriminator to improve the quality of the shift factor.\nGiven the shift factor learned from the discriminator, SCN also compensates for the shift by re-weighting the training samples obtained in the training process to optimize classification loss under the test distribution.\nWe worked with data from \\textit{eBird} \\cite{sullivan2014ebird}, which is the world's largest biodiversity-related citizen science project.\nApplying SCN to the \\textit{eBird} observational data, we demonstrate that it significantly improves multi-species distribution modeling by detecting and correcting for the data bias, thereby providing a better approach for monitoring species distribution as well as for inferring migration changes and global climate fluctuation.\nWe further demonstrate the advantage of combining the power of discriminative learning and feature space matching, by showing that SCN outperforms all competing models in our experiments.\n\n\\section{Preliminaries}\n\\subsection{Notation}\nWe use $\\mathbf{x} \\in \\mathcal{X}\\subseteq\\mathbb{R}^d$ and $y \\in \\mathcal{Y}=\\{0,1,...,l\\}$ for the feature and label variables. 
\nFor ease of notation, we use $P$ and $Q$ for the training data and test data distributions, respectively, defined on $\\mathcal{X} \\times \\mathcal{Y}$.\nWe use $p(\\mathbf{x},y)$ and $q(\\mathbf{x},y)$ for the probability density functions (PDFs) associated with $P$ and $Q$, respectively, and $p(\\mathbf{x})$ and $q(\\mathbf{x})$ for the marginal PDFs of $P$ and $Q$.\n\\subsection{Problem Setting}\nWe have labeled training data $D_{P}=\\{(\\mathbf{x}_1, y_1)$, $(\\mathbf{x}_2, y_2)$ ...,$(\\mathbf{x}_n, y_n)\\}$ drawn iid from a training distribution $P$ and unlabeled test data $D_{Q}=\\{\\mathbf{x}'_1;\\mathbf{x}'_2;...;\\mathbf{x}'_n\\}$ drawn iid from a test distribution $Q$, where $P$ denotes the data collected by volunteers and $Q$ denotes the data designed for evaluation of the scientific objectives. \nOur goal is to yield good predictions for samples drawn from $Q$.\nFurthermore, we make the following (realistic) assumptions:\n\\begin{itemize}\n \\item $p(y|\\mathbf{x}) = q(y|\\mathbf{x})$\n \\item $p(\\mathbf{x})>0$ for every $\\mathbf{x} \\in \\mathcal{X}$ with $q(\\mathbf{x})>0$.\n\\end{itemize}\nThe first assumption expresses the use of \\textit{covariate shift}, which is consistent with the field observations in the \\textit{eBird} project. The second assumption ensures that the support of $P$ contains the support of $Q$; without this assumption, this task would not be feasible, as there would be a lack of information on some samples $\\mathbf{x}$.\n\nAs illustrated in \\cite{shimodaira2000improving}, in the covariate shift setting the loss $\\ell(f(\\mathbf{x}),y)$ on the test distribution $Q$ can be minimized by re-weighting the loss on the training distribution $P$ with the shift factor $q(\\mathbf{x})\/p(\\mathbf{x})$, that is,\n\\begin{equation}\n \\mathbb{E}_{(\\mathbf{x},y)\\sim Q}[\\ell(f(\\mathbf{x}),y)] = \\mathbb{E}_{(\\mathbf{x},y)\\sim P}\\Bigg[\\ell(f(\\mathbf{x}),y)\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}\\Bigg]\n\\end{equation}\nTherefore, our goal is to estimate the shift factor $q(\\mathbf{x})\/p(\\mathbf{x})$ and correct the model so that it performs well on $Q$.\n\n\\section{End-to-End Shift Learning}\n\\subsection{Shift Compensation Network}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=7cm]{model.png}\n\\caption{Overview of the Shift Compensation Network}\n\\label{fig:model}\n\\end{figure}\n\nFig. \\ref{fig:model} depicts the end-to-end learning framework implemented in the Shift Compensation Network (SCN).\nA feature extractor $G$ is first applied to encode both the raw training features $X_{P}$ and the raw test features $X_{Q}$ into a high-level feature space. \nLater, we introduce three different losses to estimate the shift factor $q(\\mathbf{x})\/p(\\mathbf{x})$ and optimize the classification task. 
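Concretely, compensating for the shift amounts to re-weighting each training sample's loss by its estimated shift factor, as in the Preliminaries. The snippet below is a minimal sketch assuming a PyTorch-style implementation; the names are illustrative and, for the multi-label \\textit{eBird} task, the single-label cross-entropy shown here would be replaced by a per-species binary cross-entropy.
\\begin{verbatim}
# Minimal sketch (PyTorch-style, illustrative names): the expected test
# loss is estimated by re-weighting each training sample's loss with the
# shift factor w(x) = q(x)/p(x).
import torch
import torch.nn.functional as F

def weighted_classification_loss(logits, labels, w):
    # per-sample cross-entropy on training samples drawn from P
    per_sample = F.cross_entropy(logits, labels, reduction='none')
    # re-weight by the estimated shift factor and average
    return (w * per_sample).mean()
\\end{verbatim}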
\n\nWe first introduce a discriminative network (with discriminator $D$), together with a discriminative loss, to distinguish between the samples coming from the training data and those coming from the test data.\nSpecifically, the discriminator $D$ is learned by maximizing the log-likelihood of distinguishing between samples from the training distribution and those from the test distribution, that is, \n\\begin{align}\n \\mathcal{L}_{D}=&\\frac{1}{2}\\mathbb{E}_{\\mathbf{x}\\sim P}[\\log(D(G(\\mathbf{x})))]+ \\\\ \\nonumber\n &\\frac{1}{2}\\mathbb{E}_{\\mathbf{x}\\sim Q}[\\log(1-D(G(\\mathbf{x})))]\n \\label{eqn:lossD}\n\\end{align}\n\\begin{proposition}\n\\label{prop:one}\nFor any fixed feature extractor $G$, the optimal discriminator $D^{*}$ for maximizing $\\mathcal{L}_{D}$ is\n\\begin{equation}\n \\small D^{*}(G(\\mathbf{x}))=\\frac{p(\\mathbf{x})}{p(\\mathbf{x})+q(\\mathbf{x})}.\n \\nonumber\n\\end{equation}\nThus we can estimate the shift factor $\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}$ by $\\frac{1-D(G(\\mathbf{x}))}{D(G(\\mathbf{x}))}$.\nProof.\n \\begin{align}\n &\\resizebox{1\\hsize}{!}{$\n D^{*}\n =\\argmax\\limits_{D}\n \\frac{1}{2}\\mathbb{E}_{\\mathbf{x}\\sim P}[\\log(D(G(\\mathbf{x})))]+\n \\frac{1}{2}\\mathbb{E}_{\\mathbf{x}\\sim Q}[\\log(1-D(G(\\mathbf{x})))]\n $}\n \\nonumber \\\\\n &\\resizebox{1\\hsize}{!}{$\n \\quad\\,\\,\\,=\\argmax\\limits_{D}\\int{p(\\mathbf{x})\\log(D(G(\\mathbf{x})))+q(\\mathbf{x})\\log(1-D(G(\\mathbf{x})))\\mbox{d}\\mathbf{x}} \n $}\\nonumber\\\\\n &\\small \\Longrightarrow \\mbox{(maximizing the integrand)}\n \\nonumber\\\\\n &\\resizebox{1\\hsize}{!}{$\n D^{*}(G(\\mathbf{x}))=\\argmax\\limits_{D(G(\\mathbf{x}))}\n p(\\mathbf{x})\\log(D(G(\\mathbf{x})))+q(\\mathbf{x})\\log(1-D(G(\\mathbf{x})))\n $}\\nonumber\\\\\n %\n \n \n &\\small \\Longrightarrow \\mbox{ (the function $d\\rightarrow p\\log(d)+q\\log(1-d)$ achieves its } \n \\nonumber\\\\\n &\\quad \\quad \\,\\, \\small \\mbox{maximum in $(0,1)$ at $\\frac{p}{p+q}$)}\n \\nonumber \\\\\n &\\resizebox{0.35\\hsize}{!}{$\n D^{*}(G(\\mathbf{x}))=\\frac{p(\\mathbf{x})}{p(\\mathbf{x})+q(\\mathbf{x})}\n $}\n \\nonumber\n\\end{align}\n\\label{prop1}\n\\end{proposition}\n\nOur use of a discriminative loss is inspired by the generative adversarial nets (GANs) \\cite{goodfellow2014generative}, which have been applied to many areas. \nThe fundamental idea of GANs is to train a generator and a discriminator in an adversarial way, where the generator is trying to generate data (e.g., an image) that are as similar to the source data as possible, to fool the discriminator, while the discriminator is trying to distinguish between the generated data and the source data. \nThis idea has recently been used in domain adaptation \\cite{tzeng2017adversarial,hoffman2017cycada}, where two generators are trained to align the source data and the target data into a common feature space so that the discriminator cannot distinguish them. 
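In an implementation, Proposition \\ref{prop:one} translates directly into an estimate of the shift factor from the discriminator's output. The snippet below is a minimal sketch assuming a PyTorch-style implementation in which $D$ maps extracted features to a probability in $(0,1)$ (e.g., through a sigmoid), returned as a one-dimensional tensor; the names are illustrative.
\\begin{verbatim}
# Sketch of the discriminative loss L_D and the shift-factor estimate of
# Proposition 1 (PyTorch-style, illustrative names). D is assumed to map
# extracted features to a probability in (0,1) as a 1-D tensor.
import torch

def discriminative_loss(D, G, x_train, x_test, eps=1e-6):
    d_p = D(G(x_train)).clamp(eps, 1.0 - eps)  # D(G(x)) for x ~ P
    d_q = D(G(x_test)).clamp(eps, 1.0 - eps)   # D(G(x)) for x ~ Q
    # L_D is maximized in the first step of the training loop
    return 0.5 * torch.log(d_p).mean() + 0.5 * torch.log(1.0 - d_q).mean()

def shift_factor(D, G, x_train, eps=1e-6):
    d = D(G(x_train)).clamp(eps, 1.0 - eps)
    return (1.0 - d) / d   # estimate of q(x)/p(x) via Proposition 1
\\end{verbatim}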
\nIn contrast to those applications, however, SCN does not have an adversarial training process, where the training and test samples share the same extractor $G$.\nIn our setting, the training and test distributions share the same feature domain, and they differ only in the frequencies of the sample.\nTherefore, instead of trying to fool the discriminator, we want the discriminator to distinguish the training samples from the test samples to the greatest extent possible, so that we can infer the shift factor between the two distributions reversely as in Proposition \\ref{prop:one}.\n\nUse of a feature space mean matching (FSMM) loss comes from the notion of kernel mean matching (KMM) \\cite{huang2007correcting,gretton2009covariate}, in which the shift factor $w(\\mathbf{x})=\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}$ is estimated directly by matching the distributions $P$ and $Q$ in a reproducing kernel Hilbert space (RKHS) $\\Phi_{\\mathcal{H}}: \\mathcal{X}\\xrightarrow{} \\mathcal{F}$, that is, \n\\begin{gather}\n \\minimize\\limits_{w}{\\norm{\\mathbb{E}_{\\mathbf{x}\\sim Q}[\\Phi_{\\mathcal{H}}(\\mathbf{x})]-\n \\mathbb{E}_{\\mathbf{x}\\sim P}[w(\\mathbf{x})\\Phi_{\\mathcal{H}}(\\mathbf{x})]\n }_2} \\nonumber\\\\\n \\mbox{subject to $w(\\mathbf{x})\\geq0$ and $\\mathbb{E}_{\\mathbf{x}\\sim P}\n[w(\\mathbf{x})] = 1$} \n\\label{eqn:KMM}\n\\end{gather}\nThough Gretton (\\citeyear{gretton2009covariate}) proved that the optimization problem \n(\\ref{eqn:KMM}) is convex and has a unique global optimal solution $w(\\mathbf{x})=\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}$, the time complexity of KMM is cubic in the size of the training dataset, which is prohibitive when dealing with very large datasets.\nWe note that even though we do not use the universal kernel \\cite{steinwart2002support} in an RKHS, $w(\\mathbf{x})=q(\\mathbf{x})\/p(\\mathbf{x})$ still implies that\n$$\n\\norm{\\mathbb{E}_{\\mathbf{x}\\sim Q}[\\Phi(\\mathbf{x})]-\n \\mathbb{E}_{\\mathbf{x}\\sim P}[w(\\mathbf{x})\\Phi(\\mathbf{x})]\n }_2=0\n$$\nfor any mapping $\\Phi(\\cdot)$. \nTherefore, our insight is to replace \nthe $\\Phi_{\\mathcal{H}(\\cdot)}$ with a deep feature extractor $G(\\cdot)$ and derive the FSMM loss to further guide the discriminator and improve the quality of $w(\\mathbf{x})$.\n\\begin{gather}\n \\mathcal{L}_{FSMM}=\\norm{\\mathbb{E}_{\\mathbf{x}\\sim Q}[G(\\mathbf{x})]-\n \\mathbb{E}_{\\mathbf{x}\\sim P}[w(\\mathbf{x})G(\\mathbf{x})]\n }_2 \\nonumber\\\\\n =\\norm{\\mathbb{E}_{\\mathbf{x}\\sim Q}[G(\\mathbf{x})]-\n \\mathbb{E}_{\\mathbf{x}\\sim P}\\Bigg[\\frac{1-D(G(\\mathbf{x}))}{D(G(\\mathbf{x}))}G(\\mathbf{x})\\Bigg]\n }_2 \n \\label{eqn:lossFSMM}\n\\end{gather}\nOne advantage of combining the power of $\\mathcal{L}_{D}$ and $\\mathcal{L}_{FSMM}$ is to prevent overfitting.\nSpecifically, if we were learning the shift factor $w(\\mathbf{x})$ by using only $\\mathcal{L}_{FSMM}$, we could end up getting some ill-posed weights $w(\\mathbf{x})$ which potentially would not even be relevant to $q(\\mathbf{x})\/p(\\mathbf{x})$. \nThis is because \nthe dimensionality of $G(\\mathbf{x})$ is usually smaller than the number of training data. 
\nTherefore, there could be infinitely many solutions of $w(\\mathbf{x})$ that achieve zero loss if we consider minimizing $\\mathcal{L}_{FSMM}$ as solving a linear equation.\nHowever, with the help of the discriminative loss, which constrains the solution space of equation (\\ref{eqn:lossFSMM}), we are able to get good weights which minimize $\\mathcal{L}_{FSMM}$ while preventing overfitting.\nOn the other hand, the feature space mean matching loss also plays the role of a regularizer for the discriminative loss, to prevent the discriminator from distinguishing the two distributions by simply memorizing all the samples.\nInterestingly, in our experiments, we found that the feature space mean matching loss works well empirically even without the discriminative loss.\nA detailed comparison will be shown in the Experiments section.\n\nUsing the shift factor learned from $\\mathcal{L}_{D}$ and $\\mathcal{L}_{FSMM}$, we derive the weighted classification loss, that is,\n\\begin{gather}\n \\mathcal{L}_{C}=\\mathbb{E}_{\\mathbf{x}\\sim P}[w(\\mathbf{x})\\ell(C(G(\\mathbf{x})),y)],\n \\label{eqn:lossC}\n\\end{gather}\nwhere $\\ell(\\cdot,\\cdot)$ is typically the cross-entropy loss. \nThe classification loss $\\mathcal{L}_{C}$ not only is used for the classification task but also ensures that the feature space given by the feature extractor $G$ represents the important characteristics of the raw data.\n\n\\subsection{End-to-End Learning for SCN}\n\n\\begin{algorithm}[t]\n \\caption{Two-step learning in iteration $t$ for SCN \n \\label{alg:SCN}}\n \\begin{algorithmic}[1]\n \\Require{$X_P$ and $X_Q$ are raw features sampled iid from the training distribution $P$ and the test distribution $Q$; $Y_P$ is the label corresponding to $X_P$;\n $M^{t-1}_P$ and $M^{t-1}_Q$ are the moving averages from the previous iteration.\n }\n \\Statex\n \\Ensure{\\textbf{Step 1}}\n \\\\\n $m^t_Q = \\sum_{\\mathbf{x}_i\\in X_Q}{G(\\mathbf{x}_i)}\/|X_Q|$\n \\\\\n $m^t_P = \\sum_{\\mathbf{x}_j\\in X_P}\n \\frac{1-D(G(\\mathbf{x}_j))}{D(G(\\mathbf{x}_j))}\n G(\\mathbf{x}_j)\n \/|X_P|$ \n \\\\\n $M^{t}_Q \\xleftarrow{} \\alpha M^{t-1}_Q + (1-\\alpha)m^t_Q$\n \\\\\n $M^{t}_P \\xleftarrow{} \\alpha M^{t-1}_P + (1-\\alpha)m^t_P$\n \\\\\n $\\widehat{M^{t}_Q}\\xleftarrow{} M^{t}_Q\/(1-\\alpha^t)$\n \\\\\n $\\widehat{M^{t}_P}\\xleftarrow{} M^{t}_P\/(1-\\alpha^t)$\n \\\\\n $\\mathcal{L}_{FSMM}\\xleftarrow{}\\norm{\\widehat{M^{t}_Q}-\\widehat{M^{t}_P}}_2$\n \\\\\n $\\mathcal{L}_{D}\\xleftarrow{} \\frac{\\sum_{\\mathbf{x}_j\\in X_P}\\log(D(G(\\mathbf{x}_j)))}{2|X_P|}+\n \\frac{\\sum_{\\mathbf{x}_i\\in X_Q}\\log(1-D(G(\\mathbf{x}_i)))}{2|X_Q|}$\n \\\\\n Update the discriminator $D$ and the feature extractor $G$ by ascending along the gradients:\n $$\n \\triangledown_{\\theta_D}(\\lambda_1 \\mathcal{L}_{D}-\\lambda_2 \\mathcal{L}_{FSMM}) \\mbox{ and } \\triangledown_{\\theta_G}(\\lambda_1 \\mathcal{L}_{D}-\\lambda_2 \\mathcal{L}_{FSMM})\n $$\n \\Statex\n \\Ensure{\\textbf{Step 2}}\n \\\\\n For $\\mathbf{x}_j\\in X_P$, $w(\\mathbf{x}_j) \\xleftarrow{} \\frac{1-D(G(\\mathbf{x}_j))}{D(G(\\mathbf{x}_j))}$\n \\\\\n $\\mathcal{L}_{C}=\\frac{\\sum_{\\mathbf{x}_j\\in X_P}w(\\mathbf{x}_j)\\ell(C(G(\\mathbf{x})),y)}{|X_P|}$\n \\\\\n Update the classifier $C$ and the feature extractor $G$ by ascending along the gradients:\n $$\n \\triangledown_{\\theta_C}\\mathcal{L}_{C} \\mbox{ and } \\triangledown_{\\theta_G}\\mathcal{L}_{C}\n $$\n Here we ignore the gradients coming from the weights $w(\\mathbf{x}_j)$, that is, we consider the 
$w(\\mathbf{x}_j)$ as constants.\n \\end{algorithmic}\n\\end{algorithm}\n\nOne straightforward way to train SCN is to use mini-batch stochastic gradient descent (SGD) for optimization. \nHowever, the feature space mean matching loss $\\mathcal{L}_{FSMM}$ could have a large variance with small batch sizes. \nFor example, if the two sampled batches $X_P$ and $X_Q$ have very few similar features $G(\\mathbf{x}_i)$, the $\\mathcal{L}_{FSMM}$ could be very large even with the optimal shift factor.\nTherefore, instead of estimating $\\mathcal{L}_{FSMM}$ based on each mini batch, we maintain the moving averages of both the weighted training features $M_P$ and the test features $M_Q$.\nAlgorithm \\ref{alg:SCN} shows the pseudocode of our two-step learning scheme for SCN.\nIn the first step, we update the moving averages of both the training data and the test data using the features extracted by $G$ with hyperparameter $\\alpha$. \nThen we use the losses $\\mathcal{L}_{D}$ and $\\mathcal{L}_{FSMM}$ to update the parameters in the feature extractor $G$ and the discriminator $D$ with hyperparameters $\\lambda_1$ and $\\lambda_2$, respectively, which adjusts the importance of $\\mathcal{L}_{D}$ and $\\mathcal{L}_{FSMM}$. (We set $\\lambda_1=1$ and $\\lambda_2=0.1$ in our experiments.)\nIn the second step, we update the classifier $C$ and the feature extractor $G$ using the estimated shift factor $w(\\mathbf{x})$ from the first step.\nWe initialize the moving averages $M_P$ and $M_Q$ to 0, so that operations (5) and (6) in Algorithm (\\ref{alg:SCN}) are applied to compensate for the bias caused by initialization to 0, that is, \n\\allowdisplaybreaks\n\\small\n\\begin{align}\n \\mathbb{E}[M^t_Q]&=\\mathbb{E}[\\alpha M^{t-1}_Q + (1-\\alpha)m^t_Q]\n \\nonumber\\\\\n \n \n &=\\sum^t_{i=1}(1-\\alpha)\\alpha^{i-1}\\mathbb{E}[m^i_Q]\n \\nonumber\\\\\n &\\approx \\widetilde{\\mathbb{E}}[m^i_Q](1-\\alpha^t)\n \\label{eqn:bias}\n\\end{align}\n\\normalsize\nFurther, since the mini batches are drawn independently, we show that\n\\small\n\\allowdisplaybreaks\n\\begin{align}\n \\mbox{Var}[M^t_Q]&=\\mbox{Var}[\\alpha M^{t-1}_Q + (1-\\alpha)m^t_Q]\n \\nonumber\\\\\n \n \n &=\\sum^t_{i=1}(1-\\alpha)^2\\alpha^{2i-2}\\mbox{Var}[m^i_Q]\n \\nonumber\\\\\n &\\approx \\widetilde{\\mbox{Var}}[m^i_Q]\\frac{1-\\alpha}{1+\\alpha}(1-\\alpha^{2t})\n \\label{eqn:var}\n\\end{align}\n\\normalsize\nThat is, by using moving-average estimation, the variance can be reduced by approximately $\\frac{1-\\alpha}{1+\\alpha}$.\nConsequently, we can apply an $\\alpha$ close to 1 to significantly reduce the variance of $\\mathcal{L}_{FSMM}$. \nHowever, an $\\alpha$ close to 1 implies a strong momentum which is too high for the early-stage training. 
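The moving-average bookkeeping and the two-step update can be made concrete as follows. The snippet below is a minimal sketch of one iteration of Algorithm \\ref{alg:SCN}, assuming a PyTorch-style implementation; the optimizers, the \\texttt{state} dictionary, and all names are illustrative. The stored moving averages are detached from the computation graph, and in the second step the shift factor is detached so that it is treated as a constant.
\\begin{verbatim}
# Minimal sketch of one SCN iteration (PyTorch-style, illustrative names).
# state = {'t': 0, 'M_p': torch.zeros(feat_dim), 'M_q': torch.zeros(feat_dim)}
# opt_gd optimizes the parameters of G and D; opt_cg those of C and G.
import torch
import torch.nn.functional as F

def scn_iteration(G, D, C, opt_gd, opt_cg, x_p, y_p, x_q, state,
                  alpha=0.9, lam1=1.0, lam2=0.1, eps=1e-6):
    t = state['t'] = state['t'] + 1
    # ---- Step 1: update G and D with L_D and L_FSMM ----
    f_p, f_q = G(x_p), G(x_q)
    d_p = D(f_p).clamp(eps, 1 - eps)           # probabilities, 1-D tensors
    d_q = D(f_q).clamp(eps, 1 - eps)
    w = (1 - d_p) / d_p                        # shift factor estimate (Prop. 1)
    m_q = f_q.mean(dim=0)                      # batch mean of test features
    m_p = (w.unsqueeze(1) * f_p).mean(dim=0)   # weighted mean of training features
    M_q = alpha * state['M_q'] + (1 - alpha) * m_q   # moving averages
    M_p = alpha * state['M_p'] + (1 - alpha) * m_p
    state['M_q'], state['M_p'] = M_q.detach(), M_p.detach()
    # initialization-bias correction, operations (5)-(6) of Algorithm 1
    fsmm = torch.norm(M_q / (1 - alpha ** t) - M_p / (1 - alpha ** t))
    l_d = 0.5 * torch.log(d_p).mean() + 0.5 * torch.log(1 - d_q).mean()
    opt_gd.zero_grad()
    (-(lam1 * l_d - lam2 * fsmm)).backward()   # ascend lam1*L_D - lam2*L_FSMM
    opt_gd.step()
    # ---- Step 2: update C and G with the weighted classification loss ----
    f_p = G(x_p)
    with torch.no_grad():                      # w(x) is treated as a constant
        d_p = D(f_p).clamp(eps, 1 - eps)
        w = (1 - d_p) / d_p
    loss_c = (w * F.cross_entropy(C(f_p), y_p, reduction='none')).mean()
    opt_cg.zero_grad()
    loss_c.backward()
    opt_cg.step()
\\end{verbatim}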
Empirically, we chose $\\alpha=0.9$ in our experiments.\n\nIn the second step of the training process, the shift factor $w(\\mathbf{x})$ is used to compensate for the bias between training and test.\nNote that we must consider the shift factor as a constant instead of as a function of the discriminator $D$.\nOtherwise, minimizing the classification loss $\\mathcal{L}_{C}$ would end up trivially causing all the $w(\\mathbf{x})$ to be reduced to zero.\n\n\\section{Related Work}\nDifferent approaches for reducing the data bias problem have been proposed.\nIn mechanism design, \n\\citeauthor{xue2016avicaching} (\\citeyear{xue2016avicaching}) proposed a two-stage game for providing incentives to shift the efforts of volunteers to more unexplored tasks in order to improve the scientific quality of the data.\nIn ecology, \\citeauthor{phillips2009sample} (\\citeyear{phillips2009sample}) improved the modeling of presence-only data by aligning the biases of both training data and background samples. \n\\citeauthor{fithian2015bias} (\\citeyear{fithian2015bias}) explored the complementary strengths of doing a joint analysis of data coming from different sources to reduce the bias. \nIn domain adaptation, various methods \\cite{jiang2007instance,shinnou2015learning,tzeng2017adversarial,hoffman2017cycada} have been proposed to reduce the bias between the source domain and the target domain by mapping them to a common feature space while reserving the critical characteristics.\n\nOur work is most closely related to the approaches of \\cite{zadrozny2004learning,huang2007correcting,sugiyama2008direct,gretton2009covariate} developed under the names of \\textit{covariate shift} and \\textit{sample selection bias}, where the shift factor is learned in order to align the training distribution with the test distribution. \nThe earliest work in this domain came from the statistics and econometrics communities, where they addressed the use of non-random samples to estimate behavior.\n\\citeauthor{heckman1977sample} (\\citeyear{heckman1977sample}) addressed sample selection bias, and \\citeauthor{manski1977estimation} (\\citeyear{manski1977estimation}) investigated estimating parameters under \\textit{choice-based bias}, cases that are analogous to a shift in the data distribution.\nLater, \\cite{shimodaira2000improving} proposed correcting models via weighting of samples in empirical risk minimization (ERM) by the shift factor $q(\\mathbf{x})\/p(\\mathbf{x})$.\n\n\n\nOne straightforward approach to learning the weights is to directly estimate the distributions $p(\\mathbf{x})$ and $q(\\mathbf{x})$ from the training and test data respectively, using kernel density estimation \\cite{shimodaira2000improving,sugiyama2005input}.\nHowever, learning the data distribution $p(\\mathbf{x})$ is intrinsically model based and performs poorly with high-dimensional data.\n\\citeauthor{huang2007correcting} (\\citeyear{huang2007correcting}) and \\citeauthor{gretton2009covariate} (\\citeyear{gretton2009covariate})\nproposed kernel mean matching (KMM), which estimates the shift factor $w(\\mathbf{x}) = q(\\mathbf{x})\/p(\\mathbf{x})$ directly via matching the first moment of the covariate distributions of the training and test data in a reproducing kernel Hilbert space (RKHS) using quadratic programming. 
\nKLIEP \\cite{sugiyama2008direct} estimates $w(\\mathbf{x})$ by minimizing the \\mbox{Kullback-Leibler} (KL) divergence between the test distribution and the weighted training distribution.\nLater, \\citeauthor{tsuboi2009direct} (\\citeyear{tsuboi2009direct}) derived an extension of KLIEP for applications with a large test set and revealed a close relationship of that approach to kernel mean matching.\nAlso, \\citeauthor{rosenbaum1983central} (\\citeyear{rosenbaum1983central}) and \\citeauthor{lunceford2004stratification} (\\citeyear{lunceford2004stratification}) introduced propensity scoring to design unbiased experiments, which they applied in settings related to sample selection bias.\n\nWhile the problem of covariate shift has received much attention in the past, it has been used mainly in settings where the size of the dataset is relatively small and the dimensionality of the data is relatively low. \nTherefore, it has not been adequately addressed in settings with massive high-dimensional data, such as hundreds of thousands of high-resolution images.\nAmong the studies in this area, \\cite{bickel2009discriminative} is the one most closely related to ours.\nThey tackled this task by modeling the sample selection process using Bayesian inference, where the shift factor is learned by modeling the probability that a sample is selected into training data. \nThough we both use a discriminative model to detect the shift, SCN provides an end-to-end deep learning scheme, where the shift factor and the classification model are learned simultaneously, providing a smoother compensation process, which has considerable advantages for work with massive high-dimensional data and deep learning models.\nIn addition, SCN introduces the feature space mean matching loss, which further improves the quality of the shift factor and leads to a better predictive performance.\nFor the sake of fairness, we adapted the work of \\cite{bickel2009discriminative} to the deep learning context in our experiments.\n\n\n\\section{Experiments}\n\n\\subsection{Datasets and Implementation Details}\nWe worked with a crowd-sourced bird observation dataset from the successful citizen science project \\textit{eBird} \\cite{sullivan2014ebird}, which is the world's largest biodiversity-related citizen science project, with more than 100 million bird sightings contributed each year by eBirders around the world.\nEven though \\textit{eBird} amasses large amounts of citizen science data, \nthe locations of the collected observation records are highly concentrated in urban areas and along major roads, as shown in Fig. \\ref{fig:map}.\nThis hinders our understanding of species distribution as well as inference of migration changes and global climate fluctuation.\nTherefore, we evaluated our SCN\n\\footnote{Code to reproduce the experiments can be found at {https:\/\/bitbucket.org\/DiChen9412\/aaai2019-scn\/}.} \napproach by measuring how we could improve multi-species distribution modeling given biased observational data.\n\nOne record in the \\textit{eBird} dataset is referred to as a checklist, in which the bird observer records all the species he\/she detects as well as the time and the geographical location of the observation site. \nCrossed with the National Land Cover Dataset for the U.S. 
(NLCD) \\cite{homer2015completion}, we obtained a 16-dimensional feature vector for each observation site, which describes the landscape composition with respect to 16 different land types such as water and forest.\nWe also collected satellite images for each observation site by matching the geographical location of a site to Google Earth, where several preprocesses have been conducted, including cloud removal.\nEach satellite image covers an area of 17.8 $\\mbox{k}\\mbox{m}^2$ near the observation site and has 256$\\times$256 pixels.\nThe dataset for our experiments was formed by using all the observation checklists from Bird Conservation Regions (BCRs) 13 and 14 in May from 2002 to 2014, which contains 100,978 observations \\cite{us2000north}.\nMay is a migration period for BCR 13 and 14; therefore a lot of non-native birds pass over this region, which gives us excellent opportunities to observe their choice of habitats during the migration.\nWe chose the 50 most frequently observed birds as the target species, which cover over 97.4\\% of the records in our dataset. \nBecause our goal was to learn and predict multi-species distributions across landscapes,\nwe formed the unbiased test set and the unbiased validation set by overlaying a grid on the map and choosing observation records spatially uniformly. \nWe used the rest of the observations to form the spatially biased training set.\nTable \\ref{table:dataset} presents details of the dataset configuration.\n\nIn the experiments, we applied two types of neural networks for the feature extractor $G$: multi-layer fully connected networks (MLPs) and convolutional neural networks (CNNs).\nFor the NLCD features, we used a three-layer fully connected neural network with hidden units of size 512, 1024 and 512, and with ReLU \\cite{nair2010rectified} as the activation function.\nFor the Google Earth images, we used DenseNet \\cite{huang2017densely} with minor adjustments to fit the image size. 
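For concreteness, the snippet below is a minimal PyTorch-style sketch of the fully connected feature extractor for the NLCD features; only the layer sizes follow the text, and the remaining details (e.g., whether an activation follows the last hidden layer) are illustrative.
\\begin{verbatim}
# Sketch of the MLP feature extractor G for the 16-dimensional NLCD
# features (hidden sizes 512, 1024, 512 with ReLU); illustrative only.
import torch.nn as nn

nlcd_extractor = nn.Sequential(
    nn.Linear(16, 512), nn.ReLU(),
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 512), nn.ReLU(),
)
\\end{verbatim}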
\nThe discriminator $D$ and Classifier $C$ in SCN were all formed by three-layer fully connected networks with hidden units of size 512, 256, and $\\#outcome$, and with ReLU as the activation function for the first two layers; there was no activation function for the third layer.\nFor all models in our experiments, the training process was done for 200 epochs, using a batch size of 128, cross-entropy loss, and an Adam optimizer \\cite{kingma2014adam} with a learning rate of 0.0001, and utilized batch normalization \\cite{ioffe2015batch}, a 0.8 dropout rate \\cite{srivastava2014dropout}, and early stopping to accelerate the training process and prevent overfitting.\n\n\\subsection{Analysis of Performance of the SCN} \n\n\\begin{table}[t]\n\\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\n\\textbf{Feature Type} & \\textbf{NLCD} & \\textbf{Google Earth Image} \\\\ \\hline\n\\textbf{Dimensionality} & $16$ & $256\\times256\\times3$ \\\\ \\hline\n\\textbf{\\#Training Set} & 79060 & 79060 \\\\\n\\hline\n\\textbf{\\#Validation Set} & 10959 & 10959 \\\\\n\\hline\n\\textbf{\\#Test Set} & 10959 & 10959 \\\\\n\\hline\n\\textbf{\\#Labels} & 50 & 50 \\\\\n\\hline\n\\end{tabular}\n\\caption{Statistics of the \\textit{eBird} dataset}\n\\label{table:dataset}\n\\end{table}\n\n\n\nWe compared the performance of SCN with baseline models from two different groups.\nThe first group included \\textit{models that ignore the covariate shift} ( which we refer to as vanilla models), that is, models are trained directly by using batches sampled uniformly from the training set without correcting for the bias. \nThe second group included \\textit{different competitive models for solving the covariate shift problem}: \n(1) kernel density estimation (KDE) methods \\cite{shimodaira2000improving,sugiyama2005input}, \n(2) the \\mbox{Kullback-Leibler} Importance Estimation Procedure (KLIEP) \\cite{sugiyama2008direct}, and (3) discriminative factor weighting (DFW) \\cite{bickel2009discriminative}.\nThe DFW method was implemented initially by using a Bayesian model, which we adapted to the deep learning model in order to use it with the $eBird$ dataset. \nWe did not compare SCN with the kernel mean matching (KMM) methods \\cite{huang2007correcting,gretton2009covariate}, because KMM, like many kernel methods, requires the construction and inversion of an $n \\times n$ Gram matrix, which has a complexity of $\\mathcal{O}(n^3)$. \nThis hinders its application to real-life applications, where the value of $n$ will often be in the hundreds of thousands. 
\nIn our experiments, we found that the largest $n$ for which we could feasibly run the KMM code is roughly 10,000 (even with SGD), which is only $10\\%$ of our dataset.\nTo make a fair comparison, we did a grid search for the hyperparameters of all the baseline models to saturate their performance.\nMoreover, the structure of the networks for the feature extractor and the classifier used in all the baseline models, were the same as those in our SCN (i.e., $G$ and $C$), while the shift factors for those models were learned using their methods.\n\n\n\\begin{table}[t]\n\\footnotesize\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n \\multicolumn{4}{|c|}{\\textbf{NLCD Feature}} \\\\\n\\hline\n\\textbf{Test Metrics (\\%)} & \\textbf{AUC} & \\textbf{AP} & \\textbf{F1 score} \\\\ \n\\hline\nvanilla model & 77.86 & 63.31 & 54.90 \\\\\n\\hline\nSCN & \\textbf{80.34} & \\textbf{66.17} & \\textbf{57.06} \\\\\n\\hline\nKLIEP & 78.87 & 64.33 & 55.63 \\\\\n\\hline\nKDE & 78.96 & 64.42 & 55.27 \\\\\n\\hline\nDFW & 79.38 & 64.98 & 55.79 \\\\\n\\hline\n\n \\multicolumn{4}{|c|}{\\textbf{Google Earth Image}} \\\\\n\\hline\nvanilla model & 80.93 & 67.33 & 59.97 \\\\\n\\hline\nSCN & \\textbf{83.80} & \\textbf{70.39} & \\textbf{62.37} \\\\\n\\hline\nKLIEP & 81.17 & 67.86 & 60.23 \\\\\n\\hline\nKDE & 80.95 & 67.42 & 60.01 \\\\\n\\hline\nDFW & 81.99 & 68.44 & 60.77 \\\\\n\\hline\n\\end{tabular}\n\\caption{Comparison of predictive performance of different methods under three different metrics. (The larger, the better.)\n}\n\\label{table:results}\n\\end{table}\n\nTable \\ref{table:results} shows the average performance of SCN and other baseline models with respect to three different metrics: (1) \\textbf{AUC}, area under the ROC curve; (2) \\textbf{AP}, area under the precision--recall curve; (3) \\textbf{F1 score}, the harmonic mean of precision and recall.\nBecause our task is a multi-label classification problem, these three metrics were averaged over all 50 species in the datasets.\nIn our experiments, the standard error of all the models was less than $0.2\\%$ under all three metrics.\nThere are two key results in Table \\ref{table:results}: \n\\textbf{(1) All bias-correction models outperformed the vanilla models under all metrics, which shows a significant advantage of correcting for the covariate shift.}\n\\textbf{(2) SCN outperformed all the other bias-correcting models, especially on high-dimensional Google Earth images.}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=8cm]{learning_curve2.png}\n\\caption{The learning curves of all models. The vertical axis shows the cross-entropy loss, and the horizontal axis shows the number of iterations. }\n\\label{fig:curve}\n\\end{figure}\nThe kernel density estimation (KDE) models had the worst performance, especially on Google Earth images.\nThis is not only because of the difficulty of modeling high-dimensional data distributions, but also because of the sensitivity of the KDE approach. When $p(\\mathbf{x}) \\ll q(\\mathbf{x})$, a tiny perturbation of $p(\\mathbf{x})$ could result in a huge fluctuation in the shift factor $q(\\mathbf{x})\/p(\\mathbf{x})$.\nKLIEP performed slightly better than KDE, by learning the shift factor $w(\\mathbf{x})=\\frac{q(\\mathbf{x})}{p(\\mathbf{x})}$ directly, where it minimized the KL divergence between the weighted training distribution and the test distribution. 
\nHowever, it showed only a slight improvement over the vanilla models on Google Earth images.\nDFW performed better than the other two baseline models, which is not surprising, given that DFW learns the shift factor by using a discriminator similar to the one in SCN.\nSCN outperformed DFW not only because it uses an additional loss, the feature space mean matching loss, but also because of its end-to-end training process. \nDFW first learns the shift factor by optimizing the discriminator, and then it trains the classification model using samples weighted by the shift factor. \nHowever, SCN learns the classifier $C$ and the discriminator $D$ simultaneously, where the weighted training distribution approaches the test distribution smoothly through the training process, which performed better empirically than directly adding the optimized weights to the training samples. \n\\citeauthor{wang2018cost} (\\citeyear{wang2018cost}) also discovered a similar phenomenon in cost-sensitive learning, where pre-training of the neural network with unweighted samples significantly improved the model performance.\nOne possible explanation of this phenomenon is that the training of deep neural networks depends highly on mini-batch SGD, so that the fluctuation of gradients caused by the weights may hinder the stability of the training process, especially during the early stage of the training.\nFig. \\ref{fig:curve} shows the learning curves of all five models, where we used the same batch size and an Adam optimizer with the same learning rate.\nAs seen there, SCN had a milder oscillation curve than DFW, which is consistent with the conjecture we stated earlier. \nIn our experiments, we pre-trained the baseline models with unweighted samples for 20 epochs in order to achieve a stable initialization. 
\nOtherwise, some of them would end up falling into a bad local minimum, where they would perform even worse than the vanilla models.\n\n\n\nTo further explore the functionality of the discriminative loss and the feature mean matching loss in SCN, we implemented several variants of the original SCN model:\n\\begin{itemize}\n \\item SCN: The original Shift Compensation Network\n \\item SCN\\_D: The Shift Compensation Network without the feature space mean matching loss ($\\lambda_2=0$)\n \\item SCN\\_FSMM: The Shift Compensation Network without the discriminative loss ($\\lambda_1=0$)\n \\item SCN$^{-}$: The Shift Compensation Network without using moving-average estimation for the feature space mean matching loss ($\\alpha=0$)\n\\end{itemize}\n\\begin{table}[t]\n\\footnotesize\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n \\multicolumn{4}{|c|}{\\textbf{NLCD Feature}} \\\\\n\\hline\n\\textbf{Test Metrics (\\%)} & \\textbf{AUC} & \\textbf{AP} & \\textbf{F1-score} \\\\ \n\\hline\nSCN & \\textbf{80.34} & \\textbf{66.17} & \\textbf{57.06} \\\\\n\\hline\nSCN\\_D & 79.53 & 65.11 & 56.11 \\\\\n\\hline\nSCN\\_FSMM & 79.58 & 65.17 & 56.26 \\\\\n\\hline\nSCN$^{-}$ & 80.09 & 65.97 & 56.83 \\\\\n\\hline\n\\multicolumn{4}{|c|}{\\textbf{Google Earth Image}} \\\\\n\\hline\nSCN & \\textbf{83.80} & \\textbf{70.39} & \\textbf{62.37} \\\\\n\\hline\nSCN\\_D & 82.35 & 68.96 & 61.23 \\\\\n\\hline\nSCN\\_FSMM & 82.49 & 69.05 & 61.51 \\\\\n\\hline\nSCN$^{-}$ & 83.44 & 69.72 & 62.01 \\\\\n\\hline\n\\end{tabular}\n\\caption{Comparison of predictive performance of the different variants of SCN\n}\n\\label{table:variants}\n\\end{table}\nTable \\ref{table:variants} compares the performance of the different variants of SCN, where we observe the following:\n(1) Both the discriminative loss and the feature space mean matching loss play an important role in learning the shift factor. \n(2)\nThe moving-average estimation for the feature space mean matching loss shows an advantage over the batch-wise estimation (compare SCN to SCN$^{-}$).\n(3) Crossed with Table \\ref{table:results}, SCN performs better than DFW, even with only the discriminative loss, which shows the benefit of fitting the shift factor gradually through the training process.\n(4) Surprisingly, even if we use only the feature space mean matching loss, which would potentially lead to ill-posed weights, SCN\\_FSMM still shows much better performance than the other baselines.\n \n \n\n\n\n\\subsection{Shift Factor Analysis} \nWe visualized the heatmap\nof the observation records for the month of May in New York State (Fig. \\ref{fig:factor}), where the left panel shows the distribution of the original samples and the right one shows the distribution weighted with the shift factor.\nThe colors from white to brown indicates the sample popularity from low to high using a logarithmic scale from 1 to 256.\nAs seen there, the original samples are concentrated in the southeastern portion and Long Island, while the weighted one is more balanced over the whole state after applying the shift factor. 
\nThis illustrates that SCN learns the shift correctly and provides a more balanced sample distribution by compensating for the shift.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=8cm]{heatmap2.png}\n\\caption{Heatmap of the observation records for the month of May in New York State, where the left panel shows the distribution of the original samples and the right one shows the distribution weighted with the shift factor}\n\\label{fig:factor}\n\\end{figure}\n\nWe investigated the shift factors learned from the different models (Table \\ref{table:dis}) by analyzing the ratio of the feature space mean matching loss to the dimensionality of the feature space using equation (\\ref{eqn:dis}). \n\\begin{gather}\n \\norm{\\mathbb{E}_{\\mathbf{x}\\sim Q}[\\Phi(\\mathbf{x})]-\n \\mathbb{E}_{\\mathbf{x}\\sim P}[w(\\mathbf{x})\\Phi(\\mathbf{x})]\n }_2\/\\mbox{dim}(\\Phi(\\mathbf{x})) \\nonumber\\\\\n \\resizebox{0.95\\hsize}{!}{$ \n \\approx\\norm{\\frac{1}{|X_Q|}\\sum_{i}\\Phi(\\mathbf{x}_i)-\n \\frac{1}{|X_P|}\\sum_{i}w(\\mathbf{x}_i)\\Phi(\\mathbf{x}_i)\n }_2\/\\mbox{dim}(\\Phi(\\mathbf{x})) \n $}\\label{eqn:dis}\n\\end{gather}\nHere, we chose the output of the feature extractor in each model (such as $G(\\mathbf{x})$ in SCN) as the feature space $\\Phi(\\mathbf{x})$. \nCompared to the vanilla models, all the shift correction models significantly reduced the discrepancy between the weighted training distribution and the test distribution.\nHowever, crossed with Table \\ref{table:results}, it is interesting to see the counter-intuitive result that the models with the smaller feature space discrepancies (KDE \\& KLIEP) did not necessarily perform better. \n\\begin{table}[t]\n\\footnotesize\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n \\multicolumn{2}{|c|}{\\textbf{Averaged Feature Space Mean Matching Loss}} \\\\\n\\hline\nvanilla model & 0.8006 \\\\\n\\hline\nSCN & 0.0182 \\\\\n\\hline\nKLIEP & 0.0015 \\\\\n\\hline\nKDE & 0.0028 \\\\\n\\hline\nDFW & 0.0109 \\\\\n\\hline\n\\end{tabular}\n\\caption{Feature space discrepancy between the weighted training data and the test data\n}\n\\label{table:dis}\n\\end{table}\n\n\\section{Conclusion}\nIn this paper, we proposed the Shift Compensation Network (SCN) along with an end-to-end learning scheme for solving the covariate shift problem in citizen science.\nWe incorporated the discriminative loss and the feature space mean matching loss to learn the shift factor.\nTested on a real-world biodiversity-related citizen science project, \\textit{eBird},\nwe show how SCN significantly improves multi-species distribution modeling by learning and correcting for the data bias, and that it consistently performs better than previous models.\nWe also discovered the importance of fitting the shift factor gradually through the training process, which raises an interesting question for future research: How do the weights affect the performance of models learned by stochastic gradient descent?\nFuture directions include exploring the best way to learn and apply shift factors in deep learning models.\n\n\\section*{Acknowledgments}\nWe are grateful for the work of Wenting Zhao, the Cornell Lab of Ornithology and thousands of \\textit{eBird} participants.\nThis research was supported by the National Science Foundation (Grants Number 0832782,1522054, 1059284, 1356308), and ARO grant W911-NF-14-1-0498.\n\n\n\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\textbf{Introduction}}\nLet $d\\geq 2$ and let\n$ 2\\leq n_10$. 
Two points $x,y\\in E$ are said to be \\emph{$\\delta$-equivalent} if there exists a sequence $\\{x_1=x,x_2,\\dots,x_{k-1},x_k=y\\}\\subset E$ such that $\\rho(x_{i},x_{i+1})\\le\\delta$ for $1\\le i\\le k-1$.\nA $\\delta$-equivalent class of $E$ is called a \\emph{$\\delta$-connected component} of $E$.\nWe denote by $h_E(\\delta)$ the cardinality of the set of $\\delta$-connected components of $E$.\n\n\nTwo positive sequences $\\{a_i\\}_{i\\ge 1}$ and $\\{b_i\\}_{i\\ge 1}$ are said to be \\emph{comparable}, and denoted by $a_i\\asymp b_i$, if there exists a constant $c>1$ such that $c^{-1}\\le b_i\/a_i\\le c$ for all $i\\geq 1$.\nWe define the maximal power law property as following.\n\n\\begin{defn}[Maximal power law]\n\\emph{Let $(E, \\rho)$ be a compact metric space.\nLet $\\gamma>0$. We say $E$ satisfies the \\emph{power law} with \\emph{index} $\\gamma$ if $h_E(\\delta)\\asymp\\delta^{-\\gamma}$ ; if $\\gamma=\\dim_BE$ in addition, we say $E$ satisfies the \\emph{maximal power law}.}\n\\end{defn}\n\nThe above definition is motivated by the notion of gap sequence.\n Gap sequence of a set on $\\R$ was widely used by many mathematicians, for instance, \\cite{BT54,K95,LP93}.\nUsing the function $h_E(\\delta)$, Rao, Ruan and Yang \\cite{RRY08} generalized the notion of gap sequence to\n$E\\subset \\R^d$, denoted by $\\{g_i(E)\\}_{i\\geq 1}$. It is shown in \\cite{RRY08} that if two compact subsets $E,E'\\subset\\mathbb{R}^d$ are Lipschitz equivalent,\nthen $g_i(E)\\asymp g_i(E')$.\nActually, the definition and result in \\cite{RRY08} are also valid for metric spaces, see Section 2.\n\nMiao, Xi and Xiong \\cite{MXX17} observed that $E$ satisfies the power law with index $\\gamma$ if and only if $g_i(E)\\asymp i^{-1\/\\gamma}$ (see Lemma \\ref{MXX}).\nConsequently, the (maximal) power law property is invariant under bi-Lipschitz maps.\n\nDeng, Wang and Xi \\cite{DWX15} proved that if $E\\subset\\mathbb{R}^d$ is a $C^{1+\\alpha}$ conformal set satisfying the strong separation condition, then $E$ satisfies the maximal power law.\nMiao \\emph{et al.} \\cite{MXX17} proved that a totally disconnected Bedford-McMullen carpet satisfies the maximal power law if and only if it possesses vacant rows.\n Liang, Miao and Ruan \\cite{LMR2020} completely characterized the gap sequences of Bedford-McMullen carpets.\n For higher dimensional fractal cubes (see Section 3 for the definition), we show that\n\n\\begin{thm}\\label{thm:cube} A fractal cube satisfies the maximal power law if and only if it has trivial points.\n\\end{thm}\n\nLet $E$ be a self-affine Sierpi$\\acute{\\text{n}}$ski sponge defined in \\eqref{Ssponge}.\nA point $z\\in E$ is called \\emph{a trivial point} of $E$ if $\\{z\\}$ is a connected component of $E$.\nFor $x=(x_1,\\dots, x_d)\\in E$ and $1\\leq j\\leq d-1$, we set\n$$\\pi_j(x)=(x_{1},\\dots, x_j).$$\nWe call $\\pi_j(E)$ the $j$-th major projection of $E$.\nWe say $E$ is \\emph{degenerated} if $E$ is contained in a face of $[0,1]^d$ with dimension $d-1$.\n\n\\begin{thm}\\label{thm:law} Let $E$ be a non-degenerated self-affine Sierpi$\\acute{\\text{n}}$ski sponge defined in \\eqref{Ssponge}. Then $E$\nsatisfies the maximal power law if and only if $E$ and all $\\pi_j(E)(1\\le j\\le d-1)$ possess trivial points.\n\\end{thm}\n\n The following definition characterizes a class of fractals that all $\\delta$-connected components\n are small. 
Let $\\text{diam}\\, U$ denote the diameter of a set $U$.\n\n\\begin{defn}[Perfectly disconnectedness]\n\\emph{Let $(E,\\rho)$ be a compact metric space. We say $E$ is \\emph{perfectly disconnected},\nif there is a constant $M_0>0$ such that for any $\\delta$-connected component $U$ of $E$ with $0<\\delta<\\text{diam}(E)$,\n $\\text{diam}\\,U\\leq M_0\\delta$. }\n\\end{defn}\n\nIt is clear that perfectly disconnectedness implies totally disconnectedness,\nand the perfectly disconnectedness property is invariant under bi-Lipschitz maps.\n\n\\begin{remark}\\label{perfectly_prop}\n\\emph{\n It is essentially shown in Xi and Xiong \\cite{XX10} that\n a fractal cube is perfectly disconnected if and only if it is totally disconnected.\n We guess this may be true for a large class of self-similar sets.}\n\\end{remark}\n\n\\begin{thm}\\label{thm:perfect}\nLet $E$ be a non-degenerated self-affine Sierpi$\\acute{\\text{n}}$ski sponge defined in \\eqref{Ssponge}. Then $E$ is\n perfectly disconnected if and only if $E$ and all $\\pi_j(E)(1\\le j\\le d-1)$ are totally\n disconnected.\n\\end{thm}\n\n\nAs a consequence of Theorem \\ref{thm:law} and Theorem \\ref{thm:perfect},\nwe obtain\n\n\\begin{coro}\\label{Lip1}\nSuppose $E$ and $E'$ are two non-degenerated self-affine Sierpi$\\acute{\\text{n}}$ski sponges\nin ${\\mathbb R}^d$. If $E$ and $E'$ are Lipschitz equivalent, then\n \\\\\n\\indent\\emph{(i)} if $E$ and all $\\pi_j(E)$, $1\\leq j\\leq d-1$ possess trivial points, then so do $E'$ and $\\pi_j(E')$,\n $1\\le j\\le d-1$;\\\\\n\\indent\\emph{(ii)} if $E$ and all $\\pi_j(E)$, $1\\leq j\\leq d-1$ are totally disconnected, so are $E'$ and\n $\\pi_j(E')$, $1\\leq j\\leq d-1$.\n\\end{coro}\n\n\n\nThis paper is organized as follows: In Section 2, we give some basic facts about gap sequences of metric spaces.\nThen we prove Theorem \\ref{thm:cube}, Theorem \\ref{thm:law} and Theroem \\ref{thm:perfect} in Section 3, 4, 5 respectively.\n\n\n\n\\section{\\textbf{Gap sequences of metric spaces}}\nLet $(E,\\rho)$ be a metric space. Recall that $h_E(\\delta)$ is the cardinality of the set of $\\delta$-connected components of $E$.\nIt is clear that $h_E:(0,+\\infty)\\rightarrow\\mathbb{Z}_{\\geq 1}$ is non-increasing. Let $\\{\\delta_k\\}_{k\\ge 1}$ be the set of discontinuous points of $h_E$ in decreasing order. Then $h_E(\\delta)=1$ on $[\\delta_1,\\infty)$, and is constant on $[\\delta_{k+1},\\delta_k)$ for $k\\ge 1$.\n We call $m_k=h_E(\\delta_{k+1})-h_E(\\delta_k)$ the \\emph{multiplicity} of $\\delta_k$ and define the \\emph{gap sequence} of $E$, denoted by $\\{g_i(E)\\}_{i\\ge 1}$, to be the sequence\n$$\n\\underbrace{\\delta_1,\\dots,\\delta_1}_{m_1},\\underbrace{\\delta_2,\\dots,\\delta_2}_{m_2},\n\\dots\\underbrace{\\delta_k,\\dots,\\delta_k}_{m_k},\\dots\n$$\nIn other words,\n\\begin{equation}\\label{gap}\ng_i(E)=\\delta_k,\\quad\\text{if } h_E(\\delta_k)\\le i0$. 
Then\n$\ng_i(E)\\asymp i^{-1\/\\gamma}\\Leftrightarrow h_E(\\delta)\\asymp\\delta^{-\\gamma}.\n$\n\\end{lem}\n\n\\begin{proof} Let $\\{\\delta_k\\}_{k\\ge 1}$ be the set of discontinuous points of $h_E$ in decreasing order.\nFirst, we show that either\n$g_i(E)\\asymp i^{-1\/\\gamma}, i\\geq 1 $ or $h_E(\\delta)\\asymp\\delta^{-\\gamma}, \\delta\\in (0,1)$ will imply that\n\\begin{equation}\\label{bound-gap}\nM=\\sup_{k\\geq 1}\\frac{\\delta_k}{\\delta_{k+1}}<\\infty.\n\\end{equation}\nIf $g_i(E)\\asymp i^{-1\/\\gamma}, i\\geq 1, $ holds, then for any $k\\geq 1$, there exists $i$ such\nthat $g_i(E)=\\delta_k$ and $g_{i+1}(E)=\\delta_{k+1}$, so \\eqref{bound-gap} holds in this case.\nIf $h_E(\\delta)\\asymp\\delta^{-\\gamma}, \\delta\\in (0,1),$ holds, then this together with\n$\\lim_{\\delta\\to (\\delta_{k+1})+} h_E(\\delta)=h_E(\\delta_k)$ imply \\eqref{bound-gap} again.\n\nFinally, using \\eqref{gap} and \\eqref{bound-gap}, we obtain the lemma by a routine estimation.\n\\end{proof}\n\n\n \\begin{remark}\\label{equalF}\n\\emph{\n Let $m>1$ be an integer and $\\gamma>0$.\n Since $h_E(\\delta)$ is non-increasing, we see that $h_E(m^{-k})\\asymp m^{k\\gamma}\n (k\\ge 0)$, implies that\n $E$ satisfies the power law property.\n }\n\\end{remark}\n\n\n\\section{\\textbf{Proof of Theorem \\ref{thm:cube}}}\n\nLet $m\\ge 2$ be an integer. Let $\\bD_F=\\{\\bj_1,\\cdots,\\bj_r\\}\\subset\\{0,1,\\dots,m-1\\}^d$.\nFor $\\bj\\in \\bD_F$ and $y\\in\\mathbb{R}^d$, we define $\\varphi_\\bj(y)=\\frac{1}{m}(y+\\bj)$,\nthen $\\{\\varphi_\\bj\\}_{\\bj\\in \\bD_F}$ is an IFS.\nThe unique non-empty compact set $F=F(m,\\bD_F)$ satisfying\n\\begin{equation}\\label{F}\nF=\\underset{\\bj\\in\\bD_F}{\\bigcup}\\varphi_\\bj(F)\n\\end{equation}\nis called a \\emph{$d$-dimensional fractal cube}, see \\cite{XX10}.\n\n\nFor $\\boldsymbol{\\sigma}=\\sigma_1\\dots\\sigma_k\\in\\bD_F^k$, we define $\\varphi_{\\boldsymbol{\\sigma}}(z)=\\varphi_{\\sigma_1}\\circ\\dots\\circ \\varphi_{\\sigma_k}(z)$.\nWe call\n$$\nF_k=\\bigcup_{\\boldsymbol{\\sigma}\\in\\bD_F^k}\\varphi_{\\boldsymbol{\\sigma}}([0,1]^d)\n$$\nthe $k$-th approximation of $F$. We call $\\varphi_{\\boldsymbol{\\sigma}}([0,1]^d)$ a \\emph{$k$-th basic cube}, and call it a \\emph{$k$-th boundary cube} if in addition $\\varphi_{\\boldsymbol{\\sigma}}([0,1]^d)\\cap\\partial[0,1]^d\\ne\\emptyset$.\nClearly, $F_k\\subset F_{k-1}$ for all $k\\ge 1$ and $F=\\bigcap_{k=0}^\\infty F_k$.\n\n\nRecall that $F$ is degenerated if $F$ is contained in a face of $[0,1]^d$ with dimension $d-1$.\nA connected component $C$ of $F_k$ is called a \\emph{$k$-th island} of $F_k$ if $C\\cap\\partial[0,1]^d=\\emptyset$, see \\cite{HR20}.\nHuang and Rao (\\cite{HR20},Theorem 3.1) proved that\n\n\\begin{prop}[\\cite{HR20}]\\label{hasisland}\nLet $F$ be a $d$-dimensional fractal cube which is non-degenerated. Then $F$ has trivial points if and only if there is an integer $p\\ge 1$ such that $F_p$ contains an island.\n\\end{prop}\n\n\n\nFor $A\\subset\\mathbb{R}^d$, we denote by $N_c(A)$ the number of connected components of $A$.\n\n\\begin{lem}\\label{inverse}\nLet $F$ be a $d$-dimensional fractal cube defined in \\eqref{F}. 
If $F$ has no trivial point, then for any $\\delta\\in(0,1)$ we have\n\\begin{equation}\\label{eqlem1}\nh_F(\\delta)\\le c_1\\delta^{-\\log_m(r-1)},\n\\end{equation} where $c_1>0$ is a constant.\n\\end{lem}\n\\begin{proof\nSince a degenerated fractal cube $F=F(m, \\SD_F)$ is always isometric to a non-degenerated fractal cube\n$F'=F(m, \\SD_{F'})$ such that $\\#\\SD_F=\\#\\SD_{F'}$, so we only need to consider the case that $F$ is non-degenerated.\n\n\nLet $q=\\lfloor\\log_m\\sqrt{d}\\rfloor+1$, where $\\lfloor a\\rfloor$ denotes the greatest integer no larger than $a$.\nWe will prove\n\\begin{equation}\\label{unstable}\nh_F(m^{-k})\\le 2d(r-1)^{k+q},\\quad\\text{for all }k\\ge 1\n\\end{equation}\nby induction on $d$. Notice that $r\\ge m$ since $F$ has no trivial point.\n\nIf $d=1$, then $r=m$, $F=[0,1]$ and $h_F(m^{-k})=1$ for all $k\\geq 1$, so \\eqref{unstable} holds in this case.\n\nAssume that \\eqref{unstable} holds for all $d'$-dimensional fractal cubes which have no trivial point, where $d'p$. Clearly any $m^{-k}$-connected component of $F$ is contained in a connected component of $F_{k-1}$.\nIt is easy to see that $\\varphi_{\\boldsymbol{\\sigma}}(C)$ is a $(k-1)$-th island of $F_{k-1}$ for any $\\boldsymbol{\\sigma}\\in\\bD_F^{k-p-1}$, and the distance of\nany two distinct $(k-1)$-th islands of the form $\\varphi_{\\boldsymbol{\\sigma}}(C)$ is no less than $m^{1-k}$, so\n\\begin{equation}\\label{less\nh_F(m^{-k})\\ge N_c(F_{k-1})\\geq (\\#\\bD_F)^{k-p-1}=r^{k-p-1}.\n\\end{equation\n\nLet $q=\\lfloor\\log_m \\sqrt{d}\\rfloor+1$. Then the points of $F$ in a $(k+q)$-th basic cube is contained in a $m^{-k}$-connected component of $F$, which implies that\n$h_F(m^{-k}) \\le r^{k+q}$.\nThis together with \\eqref{less} imply $h_F(m^{-k})\\asymp r^k=m^{k\\dim_BF}$.\nThe theorem is proved.\n\\end{proof}\n\n\\begin{remark}\n\\emph{It is shown in \\cite{HR20} that if a fractal cube $F$ has a trivial point, then the Hausdorff dimension of the collection of its non-trivial points is strictly less than $\\dim_HF$.}\n\\end{remark}\n\n\n\n\n\n\n\n\\section{\\textbf{Proof of Theorem \\ref{thm:law}}}\nIn this section, we always assume that $E$ is a self-affine Sierpi$\\acute{\\text{n}}$ski sponge defined in \\eqref{Ssponge}.\n We call\n\\begin{equation}\\label{eq-E_k}\nE_k=\\underset{\\bomega\\in\\bD^k }\\bigcup S_\\bomega([0,1]^d)\n\\end{equation}\nthe \\emph{$k$-th approximation} of $E$, and call each $S_\\bomega([0,1]^d)$ a \\emph{$k$-th basic pillar} of $E_k$.\n\n\nRecall that $\\pi_j(x_1,\\dots, x_d)=(x_1,\\dots, x_j)$, $1\\le j \\le d-1$ and\nby convention we set $\\#\\pi_0(\\bD)=1$ and $\\pi_d=id$.\nNote that $\\pi_j(E)$ is a self-affine Sierpi$\\acute{\\text{n}}$ski sponge which determined by $\\{n_\\ell\\}_{\\ell=1}^{j}$ and $\\pi_j(\\bD)$.\nBy \\cite{KP96}, the box-counting dimension of $E$ is\n\\begin{equation}\\label{boxdim}\n\\dim_BE=\\sum_{j=1}^d\\frac{1}{\\log n_j}\\log\\frac{\\#\\pi_j(\\bD)}{\\#\\pi_{j-1}(\\bD)},\n\\end{equation}\n\nRecall that $\\Lambda=\\text{diag}(n_1,\\dots, n_d)$.\n A $k$-th basic pillar of $E$ can be represented by\n\\begin{equation}\\label{eq_pillar}\nS_{\\omega_1\\dots\\omega_k}([0,1]^d)\n =\\sum_{l=1}^k \\Lambda^{-l}\\omega_l+\\prod_{j=1}^d[0,n_j^{-k}],\n\\end{equation}\nwhere $\\omega_l\\dots \\omega_k\\in\\bD^k$.\nFor $1\\le j\\le d$, denote $\\ell_j(k)=\\lfloor k\\log n_d\/\\log n_j\\rfloor$.\nClearly $n_j^{-\\ell_j(k)}\\approx n_d^{-k}$ and $\\ell_1(k)\\geq \\ell_2(k)\\geq \\cdots \\geq \\ell_d(k)=k$.\nWe 
call\n\\begin{equation}\\label{eq_approx_cube}\nQ_k:=\\left(\\sum_{l=1}^{\\ell_1(k)}\\frac{i_1(\\omega_l)}{n_1^l},\\dots,\n\\sum_{l=1}^{\\ell_d(k)}\\frac{i_d(\\omega_l)}{n_d^l}\\right)+\\prod_{j=1}^d[0,n_j^{-\\ell_j(k)}],\n\\end{equation}\na \\emph{$k$-th approximate box} of $E$, if $\\omega_l=(i_1(\\omega_l),\\dots,i_d(\\omega_l))\\in\\bD$ for $1\\le l\\le \\ell_1(k)$.\nLet $\\widetilde{E}_k$ be the union of all $k$-th approximate boxes. It is clear that $\\widetilde{E}_k\\subset E_k$.\n\nLet $\\mu$ be the uniform Bernoulli measure on $E$, that is, every $k$-th basic pillar has measure $1\/N^k$ in $\\mu$.\nThe following lemma illustrates a nice covering property of self-affine Sierpi$\\acute{\\text{n}}$ski sponges;\nit is contained implicitly in \\cite{KP96} and it is a special case of a result in \\cite{HRWX}.\n\n\\begin{lem}[\\cite{KP96, HRWX}] \\label{lem:box} Let $E$ be a self-affine Sierpi$\\acute{\\text{n}}$ski sponge.\nLet $R$ be a $k$-th basic pillar of $E$. Then the number of $(k+p)$-th approximate boxes contained in $R$\nis comparable to\n$$\\frac{n_d^{(k+p)\\dim_B E}}{N^k}, \\quad p\\geq 1.$$\n\\end{lem}\n\n\\begin{coro}\\label{cor:upper}\nLet $V$ be a union of some $k$-th cylinders of $E$.\n Then there exists a constant $M_1>0$ such that\n\\begin{equation}\\label{eq:V-1}\nh_V(\\delta)\\leq M_1\\mu(V)\\delta^{-\\dim_B E}, \\quad \\delta\\in (0, n_d^{-k}).\n\\end{equation}\n\\end{coro}\n\n\\begin{proof}\nLet $p\\geq 1$ and $\\delta=n_d^{-k-p}$. By Lemma \\ref{lem:box},\n the number of $(k+p)$-th\napproximate boxes contained in the union of the corresponding $k$-th basic pillars of $V$ is comparable to $\\mu(V)n_d^{(k+p)\\dim_B E}$.\nSince every $(k+p)$-th approximate box can intersect a bounded number of $\\delta$-connected component of $E$,\nwe obtain the lemma.\n\\end{proof}\n\nSimilar as Section 3, a connected component $C$ of $E_k$ (resp. $\\widetilde{E}_k$) is called a \\emph{$k$-th island of $E_k$} (resp. $\\widetilde{E}_k$) if $C\\cap\\partial[0,1]^d=\\emptyset$. It is easy to see\nthat $E_k$ has islands if and only if $\\widetilde{E}_k$ has islands.\nRecall that $E$ is said to be degenerated if $E$ is contained in a face of $[0,1]^d$ with dimension $d-1$.\nZhang and Xu \\cite[Theorem 4.1]{ZX21} proved that\n\n\n\\begin{prop}\\label{has-island2}\nLet $E$ be a non-degenerated self-affine Sierpi$\\acute{\\text{n}}$ski sponge.\n Then $E$ has trivial points if and only if there is an integer $q\\ge 1$ such that $E_q$ has islands.\n\\end{prop}\n\n\n\n\nLet $Q_k\\in \\widetilde{E}_k$,\nwe call $Q_k$ a \\emph{$k$-th boundary approximate box} of $E$ if $Q_k\\cap \\partial[0,1]^d\\neq \\emptyset.$\n\nLet $W$ be a $k$-th cylinder of $E$. Write $W=f(E)$, then $f([0,1]^d)$ is the corresponding $k$-th\nbasic pillar of $W$.\nA $\\delta$-connected component $U$ of $W$ is called an\n\\emph{inner $\\delta$-connected component}, if\n $$\n \\dist\\left(U, \\partial(f([0,1]^d))\\right)>\\delta,\n $$\notherwise, we call $U$ a \\emph{boundary $\\delta$-connected component}.\nWe denote by $h^b_W(\\delta)$ the number of boundary $\\delta$-connected components of $W$,\nand by $h^i_W(\\delta)$ the number of inner $\\delta$-connected components of $W$.\n\n\n\n\\begin{lem}\\label{lem:key}\nLet $E$ be a non-degenerated self-affine Sierpi$\\acute{\\text{n}}$ski sponge.\n\n\\emph{(i)} Let $W$ be a $k$-th cylinder of $E$. 
Then\n\\begin{equation}\\label{eq:Wb}\nh^b_W(\\delta)=o(\\mu(W)\\delta^{-\\dim_B E}),\\quad \\delta\\to 0.\n\\end{equation}\n\n\\emph{(ii)} If $E$ satisfies the maximal power law,\nthen $E$ possesses trivial points; moreover, there exists an integer $p_0\\geq 1$ and a constant $c'>0$ such that for any $k\\geq 1$, any $k$-th cylinder $W$ of $E$\nand any $\\delta\\leq n_d^{-(k+p_0)}$,\n $$h^i_W(\\delta)\\geq c'h_W(\\delta).$$\n\\end{lem}\n\\begin{proof} (i)\nWrite $W=f(E)$. Let $W_p^*$ be the union of $(k+p)$-th cylinders of $W$ whose corresponding basic pillars\n intersecting the boundary of $f([0,1]^d)$. Since $E$ is non-degenerated, we have\n\\begin{equation}\\label{eq:Wp}\n\\mu(W_p^*) =o(\\mu(W)), \\quad p\\to \\infty.\n\\end{equation}\n Set $\\delta=n_d^{-(k+p)}$. If $U$ is a boundary $\\delta$-connected component of $W$,\n then $U\\cap W_p^*\\neq \\emptyset$.\nTherefore,\n $$\n h^b_{W}(\\delta)\\leq h_{W_p^*}(\\delta) \\leq M_1\\mu(W_p^*)\n \\delta^{-\\dim_B E},\n $$\nwhere the last inequality is due to Corollary \\ref{cor:upper}.\n This together with \\eqref{eq:Wp} imply \\eqref{eq:Wb}.\n\n(ii)\nThe assumption that $E$ satisfies the maximal power law implies that there exists $M_2>0$ such that\n \\begin{equation}\\label{eq:V-2}\n h_E(\\delta)\\geq M_2\\cdot \\delta^{-\\dim_B E},\\quad \\text{for any}~~ \\delta\\in(0,1).\n \\end{equation}\n If $E$ does not possess trivial points, then for all $k\\geq 1$,\n $E_k$ does not contain any $k$-th island by Proposition \\ref{has-island2},\n furthermore, $\\widetilde{E}_k$ does not contain any $k$-th island.\n Thus each $\\delta$-connected component of $E$ contains points of $E\\cap \\partial([0,1]^d)$,\n so $h_E(\\delta)=h^b_E(\\delta)$.\n On the other hand, by (i) we have $h^b_E(\\delta)=o(\\delta^{-\\dim_B E})$ as $\\delta\\rightarrow 0$,\n which contradicts \\eqref{eq:V-2}. This proves that $E$ possesses trivial points.\n\n\n\n\n\n Let $W$ be a $k$-th cylinder of $E$.\n Since $h_{A\\cup B}(\\delta)\\leq h_A(\\delta)+h_B(\\delta)$, by \\eqref{eq:V-2} we have\n\\begin{equation}\\label{eq:V333}\n h_W(\\delta)\\geq N^{-k}h_E(\\delta)\\geq M_2\\mu(W) \\delta^{-\\dim_B E}, \\quad \\text{for any}~\\delta\\in (0,1).\n\\end{equation}\nTake $\\varepsilon\\leq M_2\/2$. By \\eqref{eq:Wb}, there exists an integer $p_0\\geq 1$ such that\n $\n h^b_W(\\delta)\\leq \\varepsilon \\mu(W)\\delta^{-\\dim_B E} \\text{ for } \\delta\\leq n_d^{-(k+p_0)}.\n $\n This together with \\eqref{eq:V333} and \\eqref{eq:V-1} imply that\n for $\\delta\\leq n_d^{-(k+p_0)}$,\n $\n h^i_{W}(\\delta)=h_W(\\delta)-h^b_W(\\delta)\\geq \\frac{M_2}{2} \\mu(W)\\delta^{-\\dim_B E}\\geq c'h_{W}(\\delta),\n $\n where $c'=M_2\/(2M_1)$ and $M_1$ is the constant in Corollary \\ref{cor:upper}.\n The lemma is proved.\n\\end{proof}\n\n\n\n\\medskip\n\\begin{proof}[\\textbf{Proof of Theorem \\ref{thm:law}}]\nWe will prove this theorem by induction on $d$. Notice that $E$ is a $1$-dimensional fractal cube if $d=1$,\n and in this case the theorem holds by Theorem \\ref{thm:cube}.\nAssume that the theorem holds for all $d'$-dimensional self-affine Sierpi$\\acute{\\text{n}}$ski sponge with $d'\\le d-1$.\n\nNow we consider the $d$-dimensional self-affine Sierpi$\\acute{\\text{n}}$ski sponge $E$.\nDenote $G:=\\pi_{d-1}(E)$. First, $G$ is non-degenerated since $E$ is. 
Secondly, by \\eqref{boxdim} we have\n\\begin{equation}\\label{eq:dimG}\n\\log_{n_d} \\frac{N}{\\#\\pi_{d-1}(\\bD)}+\\dim_B G=\\dim_B E.\n\\end{equation}\n\n``$\\Leftarrow$\": Suppose that $E$ and all $\\pi_j(E)~(1\\leq j \\leq d-1)$ possess trivial points.\nThen $G=\\pi_{d-1}(E)$ satisfies the maximal power law by the induction hypothesis.\nWe will show that $E$ satisfies the maximal power law.\n\nFirstly,\nby Corollary \\ref{cor:upper},\n\\begin{equation}\\label{uppbound1}\nh_E(\\delta)\\leq M_1\\delta^{-\\dim_B E} \\quad \\text{ for all }\\delta\\in (0,1).\n\\end{equation}\nNow we consider the lower bound of $h_E(\\delta)$.\n\nSince $E$ possesses trivial points, by Proposition \\ref{has-island2}, there exists an integer $q_0\\geq 1$ such that\n$E_{q_0}$ has a $q_0$-th island, which we denote by $I$.\nLet $p_0$ be the constant in Lemma \\ref{lem:key} (ii).\nLet $k\\geq q_0$ and $\\delta=n_d^{-(k+p_0)}$.\nSince $G$ satisfies the maximal power law, there exists a constant $c>0$ such that\n\\begin{equation}\\label{uppbound_4}\nc^{-1}\\delta^{-\\text{dim}_B G} \\leq h_G(\\delta)\\leq c\\delta^{-\\text{dim}_B G}.\n\\end{equation}\n\nIt is easy to see that\n$S_{\\boldsymbol{\\tau}}(I)$ is a $k$-th island of $E_k$ for any $\\boldsymbol{\\tau}\\in \\SD^{k-q_0}$.\nSo $E_k$ has $N^{k-q_0}$ $k$-th islands of the form $I':=S_\\bomega(I)$ with $\\bomega\\in \\SD^{k-q_0}$.\nSince the distance between any two $k$-th islands of $E_k$ is at least $n_d^{-k}$,\nwe have\n\\begin{equation}\\label{lowerbound_1}\nh_E(\\delta)\\geq N^{k-q_0}\\cdot h_{E\\cap I'}(\\delta).\n\\end{equation}\nLet $W$ be any $k$-th cylinder of $E$ contained in $I'$. Since $\\pi_{d-1}$ is contractive,\nwe obtain\n\\begin{equation}\\label{lowerbound_a}\n h_{E\\cap I'}(\\delta)\\geq h_{G\\cap \\pi_{d-1}(I')}( \\delta)\\geq c' h_{ \\pi_{d-1}(W)}(\\delta),\n\\end{equation}\n where the last inequality is due to Lemma \\ref{lem:key}(ii), with the constant $c'$ depending on $G$.\n Furthermore, from $h_{A\\cup B}(\\delta)\\leq h_A(\\delta)+h_B(\\delta)$ we infer that\n\\begin{equation}\\label{lowerbound_2}\n (\\#\\pi_{d-1}(\\bD))^k \\cdot h_{\\pi_{d-1}(W)}(\\delta)\\geq h_G(\\delta).\n \\end{equation}\nBy \\eqref{uppbound_4}--\\eqref{lowerbound_2}, we have\n\\begin{eqnarray*}\nh_E(\\delta)&\\geq& N^{k-q_0} \\cdot \\frac{c'h_G(\\delta)}{(\\#\\pi_{d-1}(\\bD))^k}\n\\geq c'c^{-1} N^{-q_0}\\cdot\n\\left(\\frac{ N}{\\#\\pi_{d-1}(\\bD)}\\right)^k\\cdot \\delta^{-\\text{dim}_B G}\\\\\n&=& c'c^{-1} N^{-q_0} \\cdot \\left(\\frac{\\#\\pi_{d-1}(\\bD)}{N}\\right)^{p_0} \\cdot \\delta^{-\\text{dim}_B E},\n\\end{eqnarray*}\nwhere the last equality holds by \\eqref{eq:dimG}.\nThis together with \\eqref{uppbound1} implies that $E$ satisfies the maximal power law.\n\n\n\n ``$\\Rightarrow$\": Suppose $E$ satisfies the maximal power law.
Then $E$ possesses trivial points by Lemma \\ref{lem:key} (ii).\nSo, by the induction hypothesis, it is sufficient to show that $G=\\pi_{d-1}(E)$ satisfies the maximal power law.\n\n\nSuppose, on the contrary, that this is false.\n Then, given $\\varepsilon>0$, there exist arbitrarily small $\\delta$ such that\n \\begin{equation}\\label{ineq-h_G}\n h_G(\\delta)\\leq\\varepsilon \\delta^{-\\dim_B G}.\n \\end{equation}\n\nLet $W$ be a $k$-th cylinder of $E$;\nthen $V:=\\pi_{d-1}(W)$ is a $k$-th cylinder of $G$.\nLet $\\mu'$ be the uniform Bernoulli measure on $G$.\nBy Lemma \\ref{lem:key} (i), there exists an integer $p_1\\geq 1$ such that\n$$\nh^b_V(\\eta)\\leq \\varepsilon \\mu'(V) \\eta^{-\\dim_B G} \\text{ for } \\eta\\leq n_{d-1}^{-(k+p_1)}.\n$$\nWe choose $\\delta$ small and thus $k$ large so that\n$n_d^{-(k+1)}\\leq \\delta< n_d^{-k}$, and set $\\delta'=\\sqrt{\\delta^2+(n_d^{-k})^2}\\leq\\sqrt{2}\\,n_d^{-k}$.\nSince $W$ is contained in $\\pi_{d-1}(W)\\times [b, b+n_d^{-k}]$ for some $b\\in [0,1]$, we deduce that\nif two points $\\pi_{d-1}(x)$ and $\\pi_{d-1}(y)$ belong to the same $\\delta$-connected component of $G$,\nthen $x$ and $y$ belong to the same $\\delta'$-connected component of $E$.\n Therefore,\n\\begin{equation}\\label{eq:W-2}\n h_W(\\delta')\\leq h_{V}(\\delta),\n\\end{equation}\nand consequently,\n$$\nh_{E}(\\delta') \\leq N^k\\cdot h_W(\\delta') \\leq N^k\\cdot \\frac{2\\varepsilon \\delta^{-\\dim_B G}}{(N')^{k}}\n \\leq M'\\varepsilon (\\delta')^{-\\dim_B E}\n$$\nfor $N'=\\#\\pi_{d-1}(\\bD)$ and $M'=2(\\sqrt{2})^{\\dim_B E}n_d^{\\dim_B G}$.\nThis is a contradiction since $E$ satisfies the maximal power law.\nThe theorem is proved.\n\\end{proof}\n\n\n\n\\section{\\textbf{Proof of Theorem \\ref{thm:perfect}}}\n\n\nBefore proving Theorem \\ref{thm:perfect}, we prove a finite type property of totally disconnected self-affine Sierpi$\\acute{\\text{n}}$ski sponges.
The proof is similar to \\cite{XX10} and \\cite{MXX17}, which dealt with\n fractal cubes and Bedford-McMullen carpets, respectively.\n\n\n\\begin{thm}\\label{finite-type}\nLet $E$ be a totally disconnected self-affine Sierpi$\\acute{\\text{n}}$ski sponge, then there is an integer $M_3>0$ such that\nfor every integer $k\\geq 1$, each connected component of $E_k$ contains at most $M_3$ $k$-th basic pillars.\n\\end{thm}\n\n\n\n\nDenote $d_H(A,B)$ the \\emph{Hausdorff metric} between two subsets $A$ and $B$ of $\\mathbb{R}^d$.\nThe following lemma is obvious, see for instance \\cite{XX10}.\n\n\n\\begin{lem}\\label{pre_lem_1}\nSuppose $\\{X_k\\}_{k\\ge 1}$ is a collection of connected compact subsets of $[0,1]^d$.\nThen there exist a subsequence $\\{k_i\\}_{i\\ge 1}$ and a connected compact set $X$\nsuch that $X_{k_i}\\stackrel{d_H}\\longrightarrow X$ as $i\\rightarrow \\infty$.\n\\end{lem}\n\n\n\n\n\n\n\\begin{proof}[\\textbf{Proof of Theorem \\ref{finite-type}}]\nLet $E$ be a totally disconnected self-affine Sierpi$\\acute{\\text{n}}$ski sponge.\nWe set\n$$\n\\Xi_k=\\underset{h\\in\\{-1,0,1\\}^d}{\\bigcup} \\left({E_k}+h\\right).\n$$\nFirst, we claim that there exists an\ninteger $k_0$ such that for any connected component $X$ of $\\Xi_{k_0}$\nwith $X\\cap[0,1]^d\\neq\\emptyset$, $X$ is contained in $(-1,2)^d$.\n\nSuppose on the contrary that for any $k$ there are connected components $X_k\\subset \\Xi_k$ and points $x_k\\in[0,1]^d\\cap X_k$ and $y_k\\in\\partial[-1,2]^d\\cap X_k$.\nBy Lemma \\ref{pre_lem_1}, we can take a subsequence $\\{k_i\\}_i$ such that $x_{k_i}\\rightarrow x^*\\in[0,1]^d,\ny_{k_i}\\rightarrow y^*\\in \\partial[-1,2]^d$ and $X_{k_i}\\stackrel{d_H}\\longrightarrow X$ for some connected compact set $X$ with\n$$X\\subset \\underset{h\\in\\{-1,0,1\\}^d}\\bigcup (E+h), \\ x^*\\in X\\cap [0,1]^d, \\text{ and } y^*\\in X\\cap\\partial[-1,2]^d.$$\nClearly $\\underset{h\\in\\{-1,0,1\\}^d}\\bigcup (E+h)$ is totally disconnected\n since it is a finite union\nof totally disconnected compact sets. (A carefully proof of this fact can be found in \\cite{XX10}).\nThis contradiction proves our claim.\n\nFor any $k\\ge 1$, let $U$ be a connected component of $E_{k+k_0}$ such that $U\\cap S_\\bomega([0,1]^d)\\ne\\emptyset$ for some $\\bomega\\in\\bD^k$. Notice that $S_\\bomega([-1,2]^d)\\cap E_{k+k_0}\\subset S_\\bomega(\\Xi_{k_0})$,\nthen $[-1,2]^d\\cap S_\\bomega^{-1}(U)\\subset \\Xi_{k_0}$.\nBy the claim above, every connected component of $[-1,2]^d\\cap S_\\bomega^{-1}(U)$ which intersects $[0,1]^d$ is contained in $(-1,2)^d$.\nThis implies that\n$$S_\\bomega^{-1}(U) \\subset (-1,2)^d$$\nsince $U$ is connected.\nIt follows that $U\\subset S_\\bomega((-1,2)^d)$, so $U$ contains at most $3^dN^{k_0}$ $(k+k_0)$-th basic pillars\nof $E$. Thus $E$ is of finite type.\n\\end{proof}\n\n\n\n\nRecall that $\\pi_j(x_1,\\dots,x_d)=(x_1,\\dots, x_j)$ for $1\\leq j \\leq d-1$.\nDenote $\\hat \\pi_d(x_1,\\dots, x_d)=x_d$. 
Let $z_1,\\dots, z_p\\in E$.\nFor $0< \\delta<1$, we call $z_1,\\dots, z_p$ a \\emph{$\\delta$-chain} of $E$ if $|z_{i+1}-z_i|\\leq \\delta$ for $1\\leq i \\leq p-1$,\nand define the \\emph{size} of the chain as\n$$\nL=\\frac{\\text{diam} \\{z_1,\\dots, z_p\\}}{\\delta}.\n$$\n\n\\begin{proof}[\\textbf{Proof of Theorem \\ref{thm:perfect}}]\nWe will prove the theorem by induction on $d$.\nIf $d=1$, $E$ is a $1$-dimensional fractal cube, and the theorem holds by Remark \\ref{perfectly_prop}.\n\n\\medskip\n\n \\emph{Totally disconnected $\\Rightarrow$ perfectly disconnected:}\n Suppose that $E$ and $\\pi_j(E)$, $j=1,\\dots, d-1$,\n are totally disconnected. By induction hypothesis, $\\pi_{d-1}(E)$\n is perfectly disconnected.\n We will show that $E$ is perfectly disconnected.\n\nTake $\\delta\\in(0,1)$. Let $U$ be a $\\delta$-connected component of $E$. Let $k$ be the\n integer such\n that $n_d^{-(k+1)}\\leq \\delta \\sqrt{\\frac{L^2}{4n_d^2}-M_3^2}.\n $$\n\n If $E$ is not perfectly disconnected, then we can choose $U$ such that $L$ is arbitrarily large,\n so $\\diam(\\pi_{d-1}(U))\/\\delta\\geq L'$ can be arbitrarily large,\n which contradicts\n the fact that $\\pi_{d-1}(E)$ is perfectly disconnected.\n\n \\medskip\n\n \\emph{Perfectly disconnected $\\Rightarrow$ totally disconnected:}\n Assume that $E$ is perfectly disconnected.\n Clearly $E$ is totally disconnected. Hence, by induction hypothesis,\n we only need to show that $G:=\\pi_{d-1}(E)$ is perfectly disconnected.\n\n Firstly, we claim that $G$ must be totally disconnected.\n\n Suppose on the contrary that the claim is false.\n Let $\\Gamma$ be a connected subset of $(0,1)^{d-1}\\cap G$. Fix two points $a, b\\in \\Gamma$.\n Take any $\\bomega\\in (\\pi_{d-1}(\\SD))^k$, then $S'_\\bomega(\\Gamma)$ is contained\n in the interior of the $k$-th basic pillar $V=S'_\\bomega([0,1]^{d-1})$,\n where $\\{S'_\\bi\\}_{\\bi\\in \\pi_{d-1}(\\SD)}$ is the IFS of $G$.\n Denote $a'=S'_\\bomega(a)$, $b'=S'_\\bomega(b)$.\n Then $|b'-a'|\\geq |b-a|\/n_{d-1}^k$.\n\n Let $\\delta=n_{d}^{-k}$. Let $z_1=a', \\dots, z_p=b'$ be a $\\delta$-chain in $S'_\\bomega(\\Gamma)\\subset V$. Let $W$ be a $k$-th cylinder of $E$ such that $\\pi_{d-1}(W)=V\\cap G$.\n Let $y_1,\\dots, y_p$ be a sequence in $W$ such that $\\pi_{d-1}(y_j)=z_j,~1\\leq j \\leq p$, then it is a $(\\sqrt{2}\\delta)$-chain.\n Since\n $$\n L=\\frac{|y_1-y_p|}{\\sqrt{2}\\delta}\\geq \\frac{|z_1-z_p|}{\\sqrt{2}\\delta}\n \\geq \\frac{|b-a|}{\\sqrt{2}} \\cdot\\left(\\frac{n_d}{n_{d-1}}\\right)^k\n $$\n can be arbitrarily large when $k$ tends to $\\infty$, we conclude that $E$ is not perfectly disconnected,\n which is a contradiction. The claim is proved.\n\n\n\n Secondly, suppose on the contrary that $G$ is not perfectly disconnected.\n\nTake $\\delta\\in(0,1)$. Let $U$ be a $\\delta$-connected component of $G$. Denote $L=\\text{diam}(U)\/\\delta$. 
Let $k$ be the\n integer such\n that $n_d^{-(k+1)}\\leq \\delta \\sqrt{L}\\delta$.\n\n Let $1\\leq j^*\\leq \\ell-1$ be the index such that $|x_{j^*+1}-x_{j^*}|=\\Delta$.\n \n Denote the sub-chain of $z_1,\\dots,z_p$ between $x_{j^*}$ and $x_{j^*+1}$ by\n $$z_1'=z_{m+1},\\dots, z'_s=z_{m+s}.$$\n Then $(z'_j)_{j=1}^s$ belong to $V_2\\cup \\cdots \\cup V_h$ and $|z'_1-z'_s|\\geq (\\sqrt{L}-2)\\delta$.\n\n Now we repeat the above argument by considering the $\\delta$-chain\n $(z'_j)_{j=1}^s$ in $V_2\\cup \\cdots \\cup V_h$.\n In at most $M_3-1$ steps, we will obtain a $\\Delta'$-chain in $E$ with arbitrarily large size when $L\\to \\infty$.\nThis finishes the proof of Case 2 and the theorem is proved.\n\\end{proof}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Preliminaries}\n\n\\subsection{Summary on Galois covers}\n\n\\textit{1. Generalities}\n\\medskip\n\nLet $G$ be a finite group. Let $X$ and $Y$ be a normal projective varieties. We call $X$ a $G$-cover, if there exists a finite\nsurjective morphism $\\pi : X \\to Y$ such that the finite field extension given by\n$\\pi^* : \\mathbb C(Y) \\hookrightarrow \\mathbb C(X)$ is a Galois extension with \n$\\mathop {\\rm Gal}\\nolimits(\\mathbb C(X)\/\\mathbb C(Y)) \\cong G$. We denote the branch locus of $\\pi$ by\n$\\Delta_{\\pi}$. $\\Delta_{\\pi}$ is a reduced divisor if $Y$ is smooth(\\cite{zariski}).\n Let $B$ be a reduced divisor on $Y$ and we denote its irreducible decomposition by $B = \\sum_{i=1}^r B_i$. We say that a $G$-cover \n$\\pi : X \\to Y$ is branched at $\\sum_{i=1}^r e_iB_i$ if \n\n\\begin{enumerate}\n\\item[(i)] $\\Delta_{\\pi} = \\mathop {\\rm Supp}\\nolimits (\\sum_{i=1}^r e_i B_i)$ and\n\\item[(ii)] the ramification index along $B_i$ is $e_i$ for $1 \\le i \\le r$.\n\\end{enumerate}\n\n\\bigskip\n\n\\textit{2. Cyclic covers and double covers}\n\n\\medskip\n\nLet $\\mathbb Z\/n\\mathbb Z$ be a cyclic group of order $n$. We call a $\\mathbb Z\/n\\mathbb Z$- (resp. a $\\mathbb Z\/2\\mathbb Z$-) cover\nby an $n$-cyclic (resp. a double) cover. \nWe here summarize some facts\non cyclic and double covers. We first remark the following fact on cyclic covers.\n\n\\medskip\n\n\\textbf{Fact:} Let $Y$ be a smooth projective variety and $B$ a reduced divisor on $Y$. If\nthere exists a line bundle ${\\mathcal L}$ on $Y$ such that \n$B \\sim n{\\mathcal L}$, then we can construct a hypersurface $X$ in the total space, $L$, of \n${\\mathcal L}$ such that \n\n\\begin{itemize}\n\n\\item $X$ is irreducible and normal, and\n\n\\item $\\pi := \\mbox{pr}|_X$ gives rise to an $n$-cyclic cover, pr being the canonical projection\n$\\mbox{pr} : L \\to Y$.\n\\end{itemize}\n(See \\cite{bpv} for the above fact.)\n\n\\medskip\n\nAs we see in \\cite{tokunaga90}, cyclic covers are not always realized as a hypersurface of the\ntotal space of a certain line bundle. \nAs for double covers, however, the following lemma holds.\n\n\\begin{lem}\\label{lem:db-1}{Let $f : X \\to Y$ be a double cover of a smooth projective\nvariety with $\\Delta_f = B$, then there exists a line bundle ${\\mathcal L}$ such that\n$B \\sim 2{\\mathcal L}$ and $X$ is obtained as a hypersurface of the total space, $L$, of\n${\\mathcal L}$ as above.\n}\n\\end{lem}\n\n\\noindent{\\textsl {Proof}.}\\hskip 3pt Let $\\varphi$ be a rational function in $\\mathbb C(Y)$ such that \n$\\mathbb C(X) = \\mathbb C(Y)(\\sqrt{\\varphi})$. By our assumption, the divisor of $\\varphi$ is of the form\n\\[\n(\\varphi) = B + 2D,\n\\]\nwhere $D$ is a divisor on $Y$. 
Choose ${\\mathcal L}$ as the line bundle determined by $-D$. This implies our \nstatement.\n\\qed \\par\\smallskip\\noindent\n\n\\medskip\n\nBy Lemma~\\ref{lem:db-1}, note that any double cover $X$ over $Y$ is determined by the pair\n$(B, {\\mathcal L})$ as above. \n\n\\bigskip\n\n\\textit{3. Dihedral covers}\n\n\\medskip\n \n We next explain dihedral covers briefly.\n Let $D_{2n}$ be a dihedral group of order $2n$ given by\n$\\langle \\sigma, \\tau \\mid \\sigma^2 = \\tau^n = (\\sigma\\tau)^2 = 1\\rangle$. In \\cite{tokunaga94},\nwe developed a method of dealing with $D_{2n}$-covers. We need to introduce some notation\nin order to describe it.\n\nLet $\\pi : X \\to Y$ be a $D_{2n}$-cover. By its definition, $\\mathbb C(X)$ is a $D_{2n}$-extension\nof $\\mathbb C(Y)$. Let $\\mathbb C(X)^{\\tau}$ be the fixed field by $\\tau$. We denote the $\\mathbb C(X)^{\\tau}$-\nnormalization by $D(X\/Y)$. We denote the induced morphisms by\n$\\beta_1(\\pi) : D(X\/Y) \\to Y$ and $\\beta_2(\\pi) : X \\to D(X\/Y)$. Note that $X$ is a $\\mathbb Z\/n\\mathbb Z$-cover \nof $D(X\/Y)$ and $D(X\/Y)$ is a double cover of $Y$ such that $\\pi = \\beta_1(\\pi)\\circ\\beta_2(\\pi)$:\n\n\\[\n\\begin{diagram}\n\\node{X}\\arrow[2]{s,l}{\\pi}\\arrow{se,t}{\\beta_2(\\pi)}\\\\\n \\node{\n\\node[1]{D(X\/Y)}\\arrow{sw,r}{\\beta_1(\\pi)} \\\\\n\\node{Y}\n\\end{diagram}\n\\]\n\nIn \\cite{tokunaga94}, we have the following results for $D_{2n}$-covers ($n$:odd).\n\n\\begin{prop}\\label{prop:di-suf}{Let $n$ be an odd integer with $n \\ge 3$. \nLet $f : Z \\to Y$ be a double cover of a smooth projective variety $Y$, and assume that $Z$ is smooth. \n Let $\\sigma_f$ be the covering transformation of $f$. Suppose that there exists a pair \n $(D, {\\mathcal L})$ of an effective divisor and a \nline bundle on $Y$ such that\n\\begin{enumerate}\n\\item[(i)] $D$ and $\\sigma_f^*D$ have no common components,\n\n\\item[(ii)] if $D = \\sum_ia_iD_i$ denotes the irreducible decomposition of $D$, then \n$0 < a_i \\le (n-1)\/2$ for every $i$;and the greatest common divisor of the $a_i$'s and $n$ is $1$, and\n\n\\item[(iii)] $D - \\sigma_f^*D$ is linearly equivalent to $n{\\mathcal L}$.\n\\end{enumerate}\nThen there exists a $D_{2n}$-cover, $X$, of $Y$ such that (a) $D(X\/Y) = Z$,\n(b) $\\Delta(X\/Y) =\\Delta_f \\cup f(\\mathop {\\rm Supp}\\nolimits D)$ and (c) the ramification index along $D_i$ is\n$n\/\\gcd(a_i, n)$ for $\\forall i$. \n}\n\\end{prop}\n\nFor a proof, see \\cite{tokunaga94}. A corollary related to a splitting divisor on $Y$, we have\nthe following:\n\n\\begin{cor}\\label{cor:di-suf}{Let ${\\mathcal D}$ be a splitting divisor on $Y$ with respect to $f :\nZ \\to Y$. Put $f^*{\\mathcal D} = {\\mathcal D}^+ + {\\mathcal D}^-$. If there exists a line bundle\n${\\mathcal L}$ on $Z$ such that ${\\mathcal D}^+ - {\\mathcal D}^- \\sim n{\\mathcal L}$ for an\nodd number $n$,\nthen there exists a $D_{2n}$-cover $\\pi : X \\to Y$ branched at $2\\Delta_f + n{\\mathcal D}$.\n}\n\\end{cor}\n\nConversely we have the following:\n\n\\begin{prop}\\label{prop:dinec}{Let $\\pi : X \\to Y$ be a $D_{2n}$-cover ($n \\ge 3$, $n$: odd) of $Y$\nand let $\\sigma_{\\beta_1(\\pi)}$ be the involution on $D(X\/Y)$ determined by\nthe covering transformation of $\\beta_1(\\pi)$. Suppose that\n$D(X\/Y)$ is smooth. 
Then there exists a pair of an effective divisor and a line bundle $(D, {\\mathcal L})$\non\n$D(X\/Y)$ such that \n\n\\begin{enumerate}\n\\item[(i)] $D$ and $\\sigma_{\\beta_1(\\pi)}^*D$ have no common component,\n\\item[(ii)] if $D = \\sum_ia_iD_i$ denotes the decomposition into irreducible components, then\n$0 \\le a_i \\le (n-1)\/2$ for every $i$,\n\n\\item[(iii)] $D - \\sigma_{\\beta_1(\\pi)}^*D \\sim n{\\mathcal L}$, and\n\n\\item[(iv)] $\\Delta_{\\beta_2(\\pi)} = \\mathop {\\rm Supp}\\nolimits(D+\\sigma_{\\beta_1(\\pi)}^*D)$.\n\\end{enumerate}\n}\n\\end{prop}\n\nFor a proof, see \\cite{tokunaga94}.\n\n\\begin{cor}\\label{cor:di-nec}{ Let $D_i$ be an arbitrary irreducible component of $D$ in \nProposition~\\ref{prop:dinec}. The image $\\beta_1(\\pi)(D_i)$ is a splitting divisor with respect\nto $\\beta_1(\\pi) : D(X\/Y) \\to Y$\n}\n\\end{cor}\n\n\n\\subsection{A review on the Mordell-Weil groups for fibrations over curves}\n\n\nIn this section, we review the results on the Mordell-Weil group and the Mordell-Weil\nlattices studied by Shioda in \\cite{shioda90, shioda99}.\n\nLet $S$ be a smooth algebraic surface with fibration $\\varphi : S \\to C$ of genus $g (\\ge 1)$ curves over a smooth curve $C$. Throughout this section, we assume that\n\n\\begin{itemize}\n \\item $\\varphi$ has a section $O$ and\n \\item $\\varphi$ is relatively minimal, i.e., no $(-1)$ curve is contained\n in any fiber.\n \\end{itemize}\n \n Let $S_{\\eta}$ be the generic fiber of $\\varphi$ and let $K =\\mathbb C(C)$ be the rational\n function field of $C$. $S_{\\eta}$ is regarded as a curve of genus $g$ over $K$.\n \n Let ${\\mathcal J}_S:= J(S_{\\eta})$ be the Jacobian variety of $S_{\\eta}$. We denote the set of rational points\n over $K$ by $\\mathop {\\rm MW}\\nolimits({\\mathcal J}_S)$. By our assumption, $\\mathop {\\rm MW}\\nolimits({\\mathcal J}_S) \\neq \\emptyset$ and it is \n well-known that $\\mathop {\\rm MW}\\nolimits({\\mathcal J}_S)$ has the structure of an abelian group. \n \n \n Let $\\mathop {\\rm NS}\\nolimits(S)$ be the N\\'eron-Severi group of $S$ and let $Tr(\\varphi)$ be the subgroup\n of $\\mathop {\\rm NS}\\nolimits(S)$ generated by $O$ and irreducible components of fibers of $\\varphi$. Under these\n notation, we have: \n \n \\begin{thm}\\label{thm:shioda}{ If the irregularity of $S$ is equal to $C$, then\n $\\mathop {\\rm MW}\\nolimits({\\mathcal J}_S)$ is a finitely generated abelian group such that\n \\[\n \\mathop {\\rm MW}\\nolimits({\\mathcal J}_S) \\cong \\mathop {\\rm NS}\\nolimits(S)\/Tr(\\varphi).\n \\]\n }\n \\end{thm}\n \n See \\cite{shioda90, shioda99} for a proof.\n \n \\bigskip\n \n Let $p_d :S_d \\to \\Sigma_d$ be the double cover of $\\Sigma_d$ with branch locus\n $\\Delta_{0,d} + T_d$ as in Introduction. Then we have \n \n \n \\begin{lem}\\label{lem:non-etale}{Let $p_d : S_d \\to \\Sigma_d$ be the double cover as before.\n There exists no unramified cover of $S_d$. In particular, $\\mathop {\\rm Pic}\\nolimits(S_d)$ has no torsion element.\n }\n \\end{lem}\n \n \\noindent{\\textsl {Proof}.}\\hskip 3pt By Brieskorn's results on the simultaneous resolution of rational double points, we\n may assume that $T_d$ is smooth. Since the linear system $|T_d|$ is base point free,\n it is enough to prove our statement for one special case. 
Chose an affine open set $U_d$\n of $\\Sigma_d$ isomorphic to $\\mathbb C^2$ with a coordinate $(x, t)$ so that a curve $x = 0$ gives\n rise to a section linear equivalent to $\\Delta_{\\infty}$.\n Choose $T_d$ whose defining equation in $U_d$ is\n \\[\n T_d: F_{T_d} = x^{2g+1} - \\Pi_{i=1}^{(2g+1)d}(t - \\alpha_i) = 0,\n \\]\n where $\\alpha_i$ ($i = 1,\\ldots, (2g+1)d$) are distinct complex numbers. Note that\n \\begin{itemize}\n \n \\item $T_d$ is smooth,\n \n\\item singulair fibers of $\\varphi$ are over $\\alpha_i$ ($i = 1, \\ldots, (2g+1)d$), and\n\n\\item all the singular fibers are irreducible rational curves with unique singularity isomorphic to\n $y^2 - x^{2g+1}= 0$. \n \n \\end{itemize}\n Suppose that $\\widehat{S}_d \\to S_d$ be any unramified cover, and let $\\hat {g} : \\widehat{S}_d\n \\to \\mathbb P^1$ be the induced fiberation. We claim that $\\hat {g}$ has a connected fiber. Let\n $\\widehat{S}_d \\miya{{\\rho_1}} C \\miya{{\\rho_2}} \\mathbb P^1$ be the Stein factorization and \n let $\\widehat {O}$ be a section coming from $O$. Then $\\deg (\\rho_2\\circ\\rho_1)|_{\\hat {g}} = \\deg \\hat {g}|_{\\widehat {O}} = 1$, and $\\hat g$ has a connected fiber. \n \n On the other hand, since all the singular fibers of $g$ are simply connected, all fibers over\n $\\alpha_i$ ($i = 1, \\ldots, (2g+1)d$) are disconnected. This leads us to a contradiction.\n \\qed \\par\\smallskip\\noindent\n \n \n \\begin{cor}\\label{cor:mw}{\n The irreguarity $h^1(S_d, {\\mathcal O}_{S_d})$ of $S_d$ is $0$. In particular, \n \\[\n \\mathop {\\rm MW}\\nolimits({\\mathcal J}_{S_d}) \\cong \\mathop {\\rm NS}\\nolimits(S_d)\/Tr(\\varphi),\n \\]\n where $Tr(\\varphi)$ denotes the subgroup of $\\mathop {\\rm NS}\\nolimits(S_d)$ introduced as above.\n }\n \\end{cor}\n \n \\noindent{\\textsl {Proof}.}\\hskip 3pt By Lemma~\\ref{lem:non-etale}, we infer that $H^1(S_d, \\mathbb Z) = \\{0\\}$. Hence\n the irregularity of $S_d$ is $0$.\n \n \\qed \\par\\smallskip\\noindent\n \n\n\n\\section{Proof of Theorem~\\ref{thm:qr-1}}\n\n\n\nLet us start with the following lemma:\n\n\\begin{lem}\\label{lem:db-2}{ $f : X \\to Y$ be the double cover of $Y$ determined by \n$(B, {\\mathcal L})$ as in Lemma~\\ref{lem:db-1}. Let $Z$ be a smooth subvariety of $Y$ such that $(i)$ \n$\\dim Z > 0$ and $(ii)$ $Z \\not\\subset B$. We denote the inclusion morphism\n$Z \\hookrightarrow Y$ by $\\iota$. If there exists a divisor $B_1$ on $Z$ such that\n\\begin{itemize}\n \\item $\\iota^*B = 2B_1$ and\n \\item $\\iota^*{\\mathcal L} \\sim B_1$,\n\\end{itemize}\nthen the preimage $f^{-1}Z$ splits into two irreducible components $Z^+$ and $Z^-$.\n}\n\\end{lem} \n\n\n\\noindent{\\textsl {Proof}.}\\hskip 3pt Let $f|_{f^{-1}(Z)} : f^{-1}(Z) \\to Z$ be the induced morphism. $f^{-1}(Z)$ is realized \nas a hypersurface in the total space of $\\iota^*L$ as in usual manner (see \\cite[Chapter I, \\S 17]{bpv}, for example).\nOur condition implies that $f^{-1}(Z)$ is reducible. 
Since $\\deg f = 2$, our statement holds.\n\\qed \\par\\smallskip\\noindent\n\n\\begin{lem}\\label{lem:db-3}{Let $Y$ be a smooth projective variety, let \n$\\sigma : Y \\to Y$ be an involution on $Y$, let $R$ be a smooth irreducible\ndivisor on $Y$ such that $\\sigma|_R$ is the identity, and let $B$ be a reduced divisor on $Y$\nsuch that $\\sigma^*B$ and $B$ have no common component.\n\nIf there exists a $\\sigma$-invariant divisor $D$ on $Y$ (i.e., $\\sigma^*D = D$) such that\n\n\\begin{itemize}\n\\item $B+D$ is $2$-divisible in $\\mathop {\\rm Pic}\\nolimits(Y)$, and\n\\item $R$ is not contained in $\\mathop {\\rm Supp}\\nolimits(D)$,\n\\end{itemize}\nthen there exists a double cover $f: X \\to Y$ branched at $2(B + \\sigma^*B)$ such that\n$R$ is a splitting divisor with respect to $f$.\n\nMoreover, if there is no $2$-torsion in $\\mathop {\\rm Pic}\\nolimits(Y)$, then $R$ is a splitting divisor with \nrespect to $B + \\sigma^*B$.\n}\n\\end{lem}\n\n\\noindent{\\textsl {Proof}.}\\hskip 3pt By our assumption and $Y$ is projective, there exists a divisor $D_o$ on $Y$ such that \n\\begin{enumerate}\n\\item\n$R$ is not contained in $\\mathop {\\rm Supp}\\nolimits(D_o)$, and\n\n\\item $B+D \\sim 2D_o$.\n\\end{enumerate}\nHence $B+\\sigma^*B \\sim 2(D_o + \\sigma^*D_o - D)$ Let $f : X \\to Y$ be a double cover determined\nby $(Y, B + \\sigma^*B, D_o + \\sigma^*D_o - D)$ and let $\\iota :\nR \\hookrightarrow Y$ denote the inclusion morphism. Since $\\sigma |_R = \\mathop {{\\rm id}}\\nolimits_R$,\n\\[\n\\iota^*B = \\iota^*\\sigma^*B, \\qquad \\iota^*(D_o - D) = \\iota^*(\\sigma^*D_o - D),\n\\]\nwe have\n\\begin{eqnarray*}\n\\iota^*B & \\sim & \\iota^*(2D_o - D) \\\\\n & = & \\iota^*D_o + \\iota^*(\\sigma^*D_o - D) \\\\\n & = & \\iota^*(D_o + \\sigma^*D_o-D).\n \\end{eqnarray*}\n \n\nHence, by Lemma~\\ref{lem:db-2}, $R$ is a splitting divisor with respect to $f$. Moreover,\nif there is no $2$-torsion in $\\mathop {\\rm Pic}\\nolimits(Y)$, $f$ is determined by $B+\\sigma^*B$. Hence $R$ is \na splitting divisor with respect to $B + \\sigma^*B$.\n\\qed \\par\\smallskip\\noindent\n\n\n\n\n\\begin{prop}\\label{prop:main}\n{\nLet $p_d : S_d \\to \\Sigma_d$ and $q_d : W_d \\to \\Sigma_d$ be the double covers as in\nIntroduction. If there exists a $\\sigma_{p_d}$-invariant divisor $D$ on $S_d$ such that\n$s^+ + D$ is $2$-divisible in $\\mathop {\\rm Pic}\\nolimits(S_d)$, $\\sigma_{p_d}$ being the covering transformation of\n$p_d$, then $T_d$ is a splitting divisor with respect to $\\Delta_{0,d} + \\Delta$.\n}\n\\end{prop}\n\n\\noindent{\\textsl {Proof}.}\\hskip 3pt Let $\\psi_1$ and $\\psi_2$ be rational function on $\\Sigma_d$ such that\n$\\mathbb C(W_d) = \\mathbb C(\\Sigma_d)(\\sqrt{\\psi_1})$ and\n $\\mathbb C(S'_d) (= \\mathbb C(S_d)) = \\mathbb C(\\Sigma_d)(\\sqrt{\\psi_2})$, respectively. Note that\n $(\\psi_1) = \\Delta_{0,d} + \\Delta + 2D_1$ and $(\\psi_2) = \\Delta_{0,d} + T_d + 2D_2$ for \n some divisors $D_1$ and $D_2$ on $\\Sigma_d$. Let $X'_d$ be the \n $\\mathbb C(\\Sigma_d)(\\sqrt{\\psi_1}, \\sqrt{\\psi_2})$-normalization of $\\Sigma_d$ and let $\\tilde {p}_d: X_d\n \\to S_d$ be\n the induced double cover of $S_d$ by the quadratic extension\n $\\mathbb C(\\Sigma_d)(\\sqrt{\\psi_1}, \\sqrt{\\psi_2})\/\\mathbb C(\\Sigma_d)(\\sqrt{\\psi_2})$ and let \n $\\tilde{\\mu} : X_d \\to X'_d$ be the induced morphsim. $X'_d$ is a bi-double\n cover of $\\Sigma_d$ as well as a double cover of both $W_d$ and $S'_d$. 
We denote the induced\n covering morphisms by $\\tilde{q}_d : X'_d \\to W_d$.\n\n \\[\n \\begin{CD}\n\tW_d@<{\\tilde{q}_d}<0$ telle que, pour tout couple d'entiers $n\\geq2$ et $D\\geq3$, le nombre $\\mathcal{N}(D,n)$ de rationnels $z\\in[n-1,n]$ de d\u00e9nominateur au plus $D$ et tels que $\\Gamma(z)$ soit \u00e9galement un rationnel de d\u00e9nominateur au plus $D$ v\u00e9rifie:\n \\[\\mathcal{N}(D,n)\\leq c n^4\\log^3(n)\\frac{\\log^2(D)}{\\log\\log(D)}.\\]\n \\item Il existe une constante absolue $c^\\prime>0$ telle que, pour tout couple d'entiers $n\\geq2$ et $D\\geq3$, le nombre $\\mathcal{N}^\\prime(D,n)$ de rationnels $z\\in[n-1,n]$ de d\u00e9nominateur au plus $D$ et tels que $1\/\\Gamma(z)$ soit \u00e9galement un rationnel de d\u00e9nominateur au plus $D$ v\u00e9rifie:\n \\[\\mathcal{N}^\\prime(D,n)\\leq c^\\prime n^2\\log^3(n)\\frac{\\log^2(D)}{\\log\\log(D)}.\\]\n \\end{enumerate}\n\\end{theorem*}\n\nIl n'est pas surprenant d'obtenir des r\u00e9sultats de ce type. Par exemple, les r\u00e9sultats de l'article \\cite{Surroca} de Surroca, valables pour une classe plus g\u00e9n\u00e9rale de fonctions, conduisent facilement pour la fonction Gamma \u00e0 la borne $\\mathcal{N}(D,n)\\leq\\delta\\cdot(n\\log(n)\\log(D))^2$, avec une constante $\\delta>0$ pouvant \u00eatre choisie arbitrairement petite, mais seulement pour une sous-suite (non explicite) de valeurs de $D$.\n\nSimultan\u00e9ment \u00e0 la r\u00e9daction de cet article, Boxall et Jones ont \u00e9galement obtenu des r\u00e9sultats plus g\u00e9n\u00e9raux dans \\cite{BoxallJones}. En particulier, ils obtiennent une borne en $C\\log^3(D)$ pour le nombre de points rationnels de d\u00e9nominateur born\u00e9 par $D$ de la restriction de la fonction Gamma \u00e0 un compact quelconque (la constante $C$ d\u00e9pendant du compact).\n\nLa strat\u00e9gie employ\u00e9e ici suit la m\u00e9thode de Masser et peut \u00eatre r\u00e9sum\u00e9e ainsi: dans la partie \\ref{sectionlemmezeros}, on utilise les propri\u00e9t\u00e9s analytiques de la fonction enti\u00e8re $1\/\\Gamma$ pour d\u00e9montrer un lemme de z\u00e9ros pour les polyn\u00f4mes en $z$ et $1\/\\Gamma(z)$. Dans la partie \\ref{sectiontheoreme}, on utilise la Proposition 2 de \\cite{Masser} et le lemme de z\u00e9ros pour en d\u00e9duire un r\u00e9sultat plus g\u00e9n\u00e9ral sur la r\u00e9partition des points de degr\u00e9 et de hauteur fix\u00e9s sur le graphe de $1\/\\Gamma$. 
Le th\u00e9or\u00e8me annonc\u00e9 est alors un corollaire de ce r\u00e9sultat.\n\nJe tiens \u00e0 remercier Tanguy Rivoal, sans l'aide duquel cet article n'aurait pas pu voir le jour.\n\n\\section{Propri\u00e9t\u00e9s classiques de la fonction Gamma}\n\nLes m\u00e9thodes employ\u00e9es dans la suite s'appliquent \u00e0 des fonctions enti\u00e8res avec une infinit\u00e9 de z\u00e9ros \\og{}bien distribu\u00e9s\\fg{} en un certain sens, c'est pourquoi on va en g\u00e9n\u00e9ral travailler avec $1\/\\Gamma$ plut\u00f4t qu'avec la fonction Gamma elle-m\u00eame (sachant que d\u00e9duire les points rationnels de $\\Gamma$ de ceux de $1\/\\Gamma$ ne pose pas de probl\u00e8me).\n\nCette partie est consacr\u00e9e \u00e0 un bref rappel des propri\u00e9t\u00e9s de la fonction $1\/\\Gamma$ (on pourra se r\u00e9f\u00e9rer \u00e0 \\cite{WhittakerWatson} ou \\cite{Campbell} pour plus de d\u00e9tails).\n\n\\begin{lemma}\\label{defgamma}Pour tout $z\\in\\mathbf{C}$, on a \\[\\frac{1}{\\Gamma(z)}=ze^{\\gamma z}\\prod_{n\\geq1}\\left(1+\\frac{z}{n}\\right)e^{-z\/n},\\]\n o\u00f9 $\\gamma$ est la constante d'Euler-Mascheroni.\n\\end{lemma}\n\nOn d\u00e9duit de ce produit absolument convergent que $1\/\\Gamma$ est une fonction enti\u00e8re admettant un z\u00e9ro d'ordre un en chacun des entiers n\u00e9gatifs ou nuls.\n\nLe lien entre le comportement de $1\/\\Gamma$ dans la partie droite et dans la partie gauche du plan complexe est donn\u00e9 par la formule des compl\u00e9ments \\cite[12.14]{WhittakerWatson}:\n\\begin{lemma}[Formule des compl\u00e9ments]\n Pour tout $z\\in\\mathbf{C}$, on a \\[\\frac{1}{\\Gamma(z)\\Gamma(1-z)}=\\frac{\\sin(\\pi z)}{\\pi}.\\]\n\\end{lemma}\nUn d\u00e9veloppement asymptotique de la fonction pour des valeurs de $z$ \u00e9loign\u00e9es de la demi-droite r\u00e9elle n\u00e9gative est donn\u00e9 par la formule de Stirling \\cite[12.33]{WhittakerWatson}:\n\\begin{lemma}[Formule de Stirling]\n Pour $z\\in\\mathbf{C}$, $|\\arg(z)|\\leq\\delta<\\pi$ et $|z|\\to\\infty$, on a:\n \\[\\frac{1}{\\Gamma(z)}=\\frac{1}{\\sqrt{2\\pi}}e^{z}z^{-z+1\/2}\\left(1+O\\left(z^{-1}\\right)\\right).\\]\n\\end{lemma}\n\nEn particulier, la formule est valable dans tout le demi-plan $\\Re(z)>0$. 
En utilisant la formule des compl\u00e9ments, on en d\u00e9duit une seconde estimation, valable dans le demi-plan $\\Re(z)<1$:\n\n\\begin{lemma}\\label{stirling}\n Pour $z\\in\\mathbf{C}$, $|z|\\to\\infty$, on a:\n \\[\\frac{1}{\\Gamma(z)}=\\left\\{\\begin{array}{ll}\n \\displaystyle\\frac{e^{z}z^{-z+1\/2}}{\\sqrt{2\\pi}}\\left(1+O\\left(z^{-1}\\right)\\right) & \\text{ si }\\Re(z)>0\\\\\n \\displaystyle\\sqrt{\\frac{2}{\\pi}}e^{z-1}(1-z)^{-z+1\/2}\\sin(\\pi z)\\left(1+O\\left(z^{-1}\\right)\\right) & \\text{ si }\\Re(z)<1\n \\end{array}\\right.\\]\n\\end{lemma}\n \nDe cette estimation, on tire une majoration de la croissance radiale de $1\/\\Gamma$:\n\n\\begin{lemma}\\label{croissance}\n Il existe une constante $c$ telle que, pour tout r\u00e9el positif $R$ assez grand, on ait $\\left|1\/\\Gamma\\right|_R\\leq R^{cR}$, o\u00f9 $|1\/\\Gamma|_R$ d\u00e9signe le maximum du module de $1\/\\Gamma$ sur le disque ferm\u00e9 centr\u00e9 en $0$ et de rayon $R$.\n\\end{lemma}\n\n\\begin{proof}\n D'apr\u00e8s le lemme \\ref{stirling}, pour $\\Re(z)>0$ et $|z|=R$ assez grand, on a:\n \\[\\left|\\frac{1}{\\Gamma(z)}\\right|\\leq \\left|e^{z}z^{-z+1\/2}\\right|\\leq e^R R^{R+1\/2} e^{\\pi R\/2}\\leq R^{cR}.\\]\n Par ailleurs, pour $\\Re(z)<1$ et $|z|=R$ assez grand, on a:\n \\[\\left|\\frac{1}{\\Gamma(z)}\\right|\\leq\\left|e^{z-1}(1-z)^{-z+1\/2}\\sin(\\pi z)\\right|\\leq(1+R)^{R+1\/2}e^{\\pi R\/2}\\leq R^{c^\\prime R}.\\]\n\n\n\\end{proof}\n\n\n\\section{Lemme de z\u00e9ros pour les polyn\u00f4mes en $z$ et $1\/\\Gamma(z)$}\\label{sectionlemmezeros}\n\nDans toute la suite, on notera $G(z)=1\/\\Gamma(z)$ pour all\u00e9ger les formules. L'objectif de cette partie est la d\u00e9monstration de l'estimation suivante:\n\\begin{theorem}\\label{lemmedezeros}\n Il existe une constante absolue $c$ telle que, pour tout entier $L\\geq1$, pour tout r\u00e9el $R\\geq2$ et pour tout polyn\u00f4me non nul $P(z,w)$ de degr\u00e9 au plus $L$ suivant chaque variable, la fonction $P(z,G(z))$ admet au plus $cL(L+R)\\log(L+R)$ z\u00e9ros (compt\u00e9s avec multiplicit\u00e9) dans le disque $|z|\\leq R$.\n\\end{theorem}\n\nDe mani\u00e8re analogue au lemme de \\cite{Masser} pour les polyn\u00f4mes en $z$ et $\\zeta(z)$, on \u00e9tablit ce r\u00e9sultat par des techniques d'analyse complexe faisant intervenir la distribution des valeurs de la fonction $G$. 
Ceci diff\u00e8re fondamentalement des lemmes de z\u00e9ros obtenus par Gabrielov et Vorobyov dans \\cite{GV} et utilis\u00e9s notamment par Pila dans \\cite{Pila3}, qui font intervenir des arguments d'analyse r\u00e9elle et s'appliquent \u00e0 des courbes, dites pfaffiennes, satisfaisant certaines \u00e9quations diff\u00e9rentielles alg\u00e9briques -- alors qu'il est bien connu que la fonction Gamma n'est solution d'aucune telle \u00e9quation diff\u00e9rentielle (voir \\cite{Campbell} ou \\cite{WhittakerWatson}).\n\nDans toute la suite de cette partie, pour $X$ et $Y$ des r\u00e9els sup\u00e9rieurs \u00e0 $0$, on d\u00e9finit $\\mathcal{Z}(X,Y)$ comme \u00e9tant l'ensemble des $z=x+iy\\in\\mathbf{C}$ avec $-X\\leq x\\leq 1$ et $-Y\\leq y\\leq Y$.\n\nLe lemme suivant nous donne la distribution dans $\\mathcal{Z}(X,Y)$ des solutions des \u00e9quations $G(z)=w$ pour les petites valeurs de $w$.\n\\begin{lemma}\\label{lemmetech4}\n Il existe une constante absolue $r_0>0$ telle que, pour tout $w\\in\\mathbf{C}$ avec $|w|\\leq r_0$ et pour $X\\geq0$ tendant vers l'infini, le nombre $\\mathcal{N}(X,w)$ de solutions (compt\u00e9es avec multiplicit\u00e9) \u00e0 l'\u00e9quation $G(z)=w$ avec $z\\in\\mathcal{Z}(X,1)$ v\u00e9rifie\n$\\mathcal{N}(X,w)=X+O(1)$,\net cela uniform\u00e9ment en $w$.\n\\end{lemma}\n\nCe lemme est l'analogue pour $G$ de la proposition (22) de l'article \\cite{BLL} de Bohr, Landau et Littlewood sur la distribution des solutions de $\\zeta(s)=a$. Contrairement au cas de la fonction z\u00eata, l'existence de la formule de Stirling nous permet d'estimer avec pr\u00e9cision le comportement de la fonction, ce qui donne lieu \u00e0 une d\u00e9monstration beaucoup plus directe.\n\n\\begin{proof}\n Soit $w\\in\\mathbf{C}$. D'apr\u00e8s le th\u00e9or\u00e8me des r\u00e9sidus, le nombre de solutions \u00e0 l'\u00e9quation $G(z)=w$ avec $z\\in\\mathcal{Z}(X,1)$ est donn\u00e9 par:\n \\begin{equation}\\label{residupourtech4}\\mathcal{N}(X,w)=\\frac{1}{2\\pi}\\Im\\int_\\gamma\\frac{G^\\prime(z)}{G(z)-w}\\mathrm{d}z,\\end{equation}\n o\u00f9 $\\gamma$ est le contour de $\\mathcal{Z}(X,1)$, orient\u00e9 positivement et contournant les \u00e9ventuels z\u00e9ros de $G(z)-w$ par de petits arcs de cercle passant \u00e0 l'ext\u00e9rieur de $\\mathcal{Z}(X,1)$.\n\nRemarquons que, d'apr\u00e8s la proposition \\ref{defgamma}, les solutions de l'\u00e9quation $G(z)=0$ sont tous les entiers n\u00e9gatifs ou nuls, si bien qu'on a, pour $w=0$, l'estimation :\n\\begin{equation}\\label{NX0}\\mathcal{N}(X,0)=\\frac{1}{2\\pi}\\Im\\int_\\gamma\\frac{G^\\prime(z)}{G(z)}\\mathrm{d}z=X+O(1).\\end{equation}\n\nOn va montrer que, pour peu qu'on choisisse $w$ de module inf\u00e9rieur \u00e0 un $r_0$ qu'on d\u00e9terminera par la suite, il n'y a jamais de z\u00e9ros de $G(z)-w$ sur le segment vertical $\\Re(z)=1$, ni sur les segments horizontaux $\\Im(z)=\\pm 1$. On va ensuite montrer que, quitte \u00e0 ajuster l\u00e9g\u00e8rement la valeur de $X$, il n'y en a pas non plus sur le segment vertical $\\Re(z)=-X$. On obtiendra alors le r\u00e9sultat souhait\u00e9 en comparant les int\u00e9grales \\eqref{residupourtech4} et \\eqref{NX0}.\n\n \\subsection*{Estimation de $G$ sur le segment $\\Re(z)=1$.}\n\n Soit $z\\in[1-i,1+i]$. 
On a\n \\[G(z)=ze^{\\gamma z}\\prod_{n\\geq1}\\left(1+\\frac{z}{n}\\right)e^{-\\frac{z}{n}},\\]\n donc\n \\[|G(z)| = |z|e^{\\gamma x}\\prod_{n\\geq1}\\left|1+\\frac{z}{n}\\right|e^{-\\frac{x}{n}}\n \\geq |x|e^{\\gamma x}\\prod_{n\\geq1}\\left|1+\\frac{x}{n}\\right|e^{-\\frac{x}{n}},\\]\n o\u00f9 $x=\\Re(z)=1$. On a:\n \\begin{align*}\n \\log\\prod_{n\\geq1}\\left|1+\\frac{x}{n}\\right|e^{-\\frac{x}{n}} \n & = \\sum_{n\\geq1}\\left(\\log\\left|1+\\frac{x}{n}\\right|-\\frac{x}{n}\\right)\n = \\sum_{n\\geq1}\\left(\\log\\left(1+\\frac{x}{n}\\right)-\\frac{x}{n}\\right)\\\\\n & \\geq -\\frac{1}{2}\\sum_{n\\geq1}\\left(\\frac{x}{n}\\right)^2\n \\geq -\\frac{x^2}{2}\\zeta(2)\n > -1,\n \\end{align*}\n d'o\u00f9\n $|G(z)| \\geq |x|e^{\\gamma x}e^{-3} > \\frac{1}{2}$.\n\n Ainsi, si on prend $|w|\\leq r_0\\leq\\frac{1}{2}$, il n'y aura pas de z\u00e9ro de $G(z)-w$ \u00e0 contourner sur le segment $\\Re(z)=1$.\n\n \\subsection*{Estimation de $G$ sur les segments horizontaux $\\Im(z)=\\pm 1$.}\n\n La formule de Stirling pour le demi-plan $\\Re(z)<1$ (lemme \\ref{stirling}) nous permet d'obtenir des minorations valables pour $\\Re(z)$ suffisamment \u00e9loign\u00e9e de $0$. Resteront \u00e0 consid\u00e9rer les points \u00e0 distance born\u00e9e de l'origine, que l'on pourra traiter par un calcul direct.\n \n D'apr\u00e8s le lemme \\ref{stirling}, on a, pour $y=\\Im(z)=\\pm1$ et $x=\\Re(z)<1$ tendant vers $-\\infty$,\n$G(z)=\\sqrt{\\frac{\\pi}{2}}e^{z-1}(1-z)^{-z+1\/2}\\sin(\\pi z)\\left(1+O\\left(z^{-1}\\right)\\right)$.\n\n En particulier, il existe $R\\geq1$ tel que, pour $x\\leq-R$, on ait:\n\\begin{align*}\n \\left|G(z)\\right| & \\geq\\frac{1}{2}\\left|\\sqrt{\\frac{2}{\\pi}}e^{z-1}(1-z)^{-z+1\/2}\\sin(\\pi z)\\right|\\\\\n & \\geq(2\\pi)^{-1\/2}e^{x-1}(1-x)^{-x+1\/2}e^{y\\arg(1-z)}\\sinh|\\pi y|\\\\\n & \\geq(2\\pi e)^{-1\/2}\\left(\\frac{1-x}{e}\\right)^{-x+1\/2}e^{y\\arg(1-z)}\\sinh|\\pi y|\\\\ \n & \\geq(2\\pi e)^{-1\/2}\\sinh(\\pi) e^{-\\pi\/2}\\left(\\frac{1+R}{e}\\right)^{R+1\/2}.\n\\end{align*}\nPar ailleurs, pour $-R\\leq x<1$, on revient au lemme \\ref{defgamma}:\n$G(z) = \\allowbreak ze^{\\gamma z}\\prod_{n\\geq1}\\allowbreak \\left(1+\\frac{z}{n}\\right)e^{-\\frac{z}{n}}$,\ndonc\n\n\\begin{equation}\\label{modulegamma}\\left|G(z)\\right| = |z|e^{\\gamma x}\\prod_{n\\geq1}\\left|1+\\frac{z}{n}\\right|e^{-\\frac{x}{n}}\\end{equation}\n\nIci, $x\\in[-R,1[$ et $y$ est fix\u00e9 \u00e9gal \u00e0 $\\pm 1$ donc $z$ est born\u00e9. Soit $n_0$ un entier fix\u00e9 sup\u00e9rieur \u00e0 cette borne; alors, pour $n\\geq n_0$, on a \n$\\left|1+\\frac{z}{n}\\right| \\geq \\left|1+\\frac{x}{n}\\right| = 1+\\frac{x}{n} \\geq 0$,\nd'o\u00f9\n\\[\\prod_{n\\geq n_0}\\left|1+\\frac{z}{n}\\right|e^{-\\frac{x}{n}} \\geq \\prod_{n\\geq n_0}\\left(1+\\frac{x}{n}\\right)e^{-\\frac{x}{n}}\n \\geq e^{-x^2\/2\\,\\zeta(2)}\n \\geq e^{-R^2\/2\\,\\zeta(2)}.\\]\n\nEt, pour $n0$ tel que, pour $X\\geq cL$, on a au moins $L+1$ solutions $z_0,\\dots,z_L$ (qui sont donc deux \u00e0 deux distinctes) \u00e0 l'\u00e9quation $G(z)=w$ dans la bande $\\mathcal{Z}(X,1)$. Ces solutions v\u00e9rifient alors la condition \\ref{zpetits} pour une certaine constante $c_0$. De plus, les $z_i$ sont pas trop proches les uns des autres. En effet, quitte \u00e0 prendre pour $X$ un multiple suffisamment grand de $L$ (ce qui se traduit par une augmentation de $c_0$ dans \\ref{zpetits}), on peut choisir les $z_i$ dans des disques de rayon $\\frac{1}{4}$ centr\u00e9s autour d'entiers n\u00e9gatifs deux \u00e0 deux distincts. 
On va donc avoir, pour tout $k$:\n \\[\\prod_{j\\neq k}|z_j-z_k| \\geq \\frac{1}{2}\\times\\left(1+\\frac{1}{2}\\right)\\times\\left(2+\\frac{1}{2}\\right)\\times\\cdots\\times\\left(L-1+\\frac{1}{2}\\right)\\\\\n \\geq (L-1)!,\n \\]\nce qui d\u00e9montre la condition \\ref{zloin}. La condition \\ref{zloinde1} est imm\u00e9diate si $X$ est un assez grand multiple de $L$ (ce \\og{}assez grand\\fg{} \u00e9tant uniforme en $w$).\n \n Pour d\u00e9montrer le lemme, il suffit donc de choisir arbitrairement des $w_l$ dans le disque d\u00e9fini par la condition \\ref{wpetits}, v\u00e9rifiant la propri\u00e9t\u00e9 de s\u00e9paration \\ref{wloin} et \u00e9vitant un nombre au plus d\u00e9nombrable de points pathologiques, ce qui ne pose aucune difficult\u00e9 en choisissant $c$ suffisamment petit par rapport \u00e0 $r_0$.\\end{proof}\n\nPassons \u00e0 la d\u00e9monstration du lemme de z\u00e9ros proprement dit (le th\u00e9or\u00e8me \\ref{lemmedezeros}).\n\n\\begin{proof}\n Quitte \u00e0 multiplier $P$ par une constante, on peut supposer que le maximum des modules de ses coefficients est $1$. Posons $F(z) = P(z,G(z))$ et notons $z_1,\\dots,z_N$ les z\u00e9ros de $F$ compt\u00e9s avec multiplicit\u00e9. Consid\u00e9rons alors la fonction enti\u00e8re\n $\\phi(z)=F(z)\\prod_{n=1}^N(z-z_n)^{-1}$.\n \n Notons ${S}=R+c_0L$ pour le $c_0$ du lemme \\ref{wpoints}. D'apr\u00e8s le principe du maximum, on a:\n \\begin{equation}\\label{phirleqphi5r}|\\phi|_{{S}}\\leq|\\phi|_{5{S}}.\\end{equation}\n \n D'une part, d'apr\u00e8s la proposition \\ref{croissance}, $|G|_{5{S}}\\leq (5{S})^{5c{S}}\\leq{S}^{c^\\prime{S}}$, d'o\u00f9 $|F|_{5{S}}\\leq {S}^{c{S}L}$, puis:\n \\begin{equation}\\label{phi5r}|\\phi|_{5{S}}\\leq{S}^{c{S}L}(4{S})^{-N}.\\end{equation}\n \n D'autre part, pour tout $z$ v\u00e9rifiant $|z-1|\\geq1$ et $|z|\\leq{S}$, on aura\n $|F(z)| = |\\phi(z)|\\prod_{n=1}^N|z-z_n| \\leq |\\phi|_{{S}}(2{S})^N$.\n On d\u00e9duit alors es in\u00e9galit\u00e9s \\eqref{phirleqphi5r} et \\eqref{phi5r} que $|F(z)|\\leq{S}^{c{S}L}2^{-N}$, et ceci s'appliquant en particulier au cas des $z=z_{k,l}$ du lemme \\ref{wpoints}, on a donc, pour tous $k,l=0,\\dots,L$:\n \\begin{equation}\\label{majoreP}|P(z_{k,l},w_l)|\\leq{S}^{c{S}L}2^{-N}.\\end{equation}\n \n On va maintenant utiliser deux fois la formule d'interpolation de Lagrange pour les points du lemme \\ref{wpoints}. 
D'abord:\n \\[P(z,w)=\\sum_{l=0}^L\\left(\\prod_{0\\leq i\\leq L, i\\neq l}\\frac{w-w_i}{w_l-w_i}\\right)P(z,w_l),\\]\n puis\n \\[P(z,w_l)=\\sum_{k=0}^L\\left(\\prod_{0\\leq j\\leq L, j\\neq k}\\frac{z-z_{j,l}}{z_{k,l}-z_{j,l}}\\right)P(z_{k,l},w_l).\\]\n Au final:\n \\[P(z,w)=\\sum_{k,l=0}^L\\left(\\prod_{0\\leq i\\leq L, i\\neq l}\\frac{w-w_i}{w_l-w_i}\\cdot\\prod_{0\\leq j\\leq L, j\\neq k}\\frac{z-z_{j,l}}{z_{k,l}-z_{j,l}}\\right)P(z_{k,l},w_l).\\]\n \n Le coefficient d'indices $(p,q)$ de $P$ est donc donn\u00e9 par:\n \\begin{align*}a_{p,q}= \\sum_{k,l=0}^L& \\left((-1)^{p+q}\\, \\sigma_{L-p}(w_0,\\dots,\\widehat{w_l},\\dots,w_L)\\,\\sigma_{L-q}(z_{0,l},\\dots,\\widehat{z_{k,l}},\\dots,z_{L,l})\\hspace{-3ex}\\phantom{\\prod_a}\\right.\\\\\n & \\left.\\prod_{0\\leq i\\leq L, i\\neq l}(w_l-w_i)^{-1}\\prod_{0\\leq j\\leq L, j\\neq k}(z_{k,l}-z_{j,l})^{-1}\\right)P(z_{k,l},w_l),\\end{align*}\n o\u00f9 $(x_0,\\dots,\\widehat{x_k},\\dots,x_L)$ d\u00e9signe le $L$-uplet obtenu en omettant l'entr\u00e9e $x_k$ dans $(x_0,\\dots,x_L)$ et o\u00f9 les $\\sigma_p$ sont les fonctions sym\u00e9triques \u00e9l\u00e9mentaires\n $\\sigma_p(x_1,\\dots,x_n)=\\sum_{1\\leq i_1<\\cdots0$, $Z>0$, $M>0$ et $H\\geq 1$ des r\u00e9els, $f_1$ et $f_2$ des fonctions analytiques sur un voisinage ouvert du disque $|z|\\leq 2Z$ et born\u00e9es par $M$ sur ce disque. Soit $\\mathcal{Z}$ une partie finie de $\\mathbf{C}$ v\u00e9rifiant les propri\u00e9t\u00e9s suivantes:\n \\begin{enumerate}\n \\item $|z|\\leq Z$ pour tout $z\\in\\mathcal{Z}$;\n \\item $|z-z^\\prime|\\leq A^{-1}$ pour tous $z,z^\\prime\\in\\mathcal{Z}$;\n \\item $[\\mathbf{Q}(f_1(z),f_2(z)):\\mathbf{Q}]\\leq d$ pour tout $z\\in\\mathcal{Z}$;\n \\item $H(f_1(z))\\leq H$, $H(f_2(z))\\leq H$ pour tout $z\\in\\mathcal{Z}$.\n \\end{enumerate}\n On suppose enfin que la condition suivante est v\u00e9rifi\u00e9e:\n \\[(AZ)^T>(4T)^{96d^2\/T}(M+1)^{16d}H^{48d^2}.\\]\n Alors, en notant $\\mathbf{f}=(f_1,f_2)$, on a $\\omega(\\mathbf{f}(\\mathcal{Z}))\\leq T$.\n\\end{proposition}\n\n\\begin{proof}[D\u00e9monstration du th\u00e9or\u00e8me]\n Soit $T\\geq\\sqrt{8d}$ un entier dont on pr\u00e9cisera le choix par la suite. On va appliquer la proposition \\ref{bombieripila} \u00e0 l'ensemble $\\mathcal{Z}$ des points $z\\in\\mathbf{C}$ satisfaisant les hypoth\u00e8ses du th\u00e9or\u00e8me et aux fonctions enti\u00e8res $f_1(z)=z$ et $f_2(z)=G(z)$. 
On va avoir $\\omega(\\mathbf{f}(\\mathcal{Z}))\\leq T$, sous r\u00e9serve que les conditions suivantes soient v\u00e9rifi\u00e9es:\n \\begin{enumerate}\n \\item $\\mathcal{Z}$ est contenu dans un disque de diam\u00e8tre $A^{-1}$;\n \\item $[\\mathbf{Q}(z,G(z)):\\mathbf{Q}]\\leq d$ pour tout $z\\in \\mathcal{Z}$;\n \\item $H(z)\\leq H$, $H(G(z))\\leq H$ pour tout $z\\in \\mathcal{Z}$;\n \\item $(AZ)^T>(4T)^{96d^2\/T}(M+1)^{16d}H^{48d^2}$,\n \\end{enumerate}\n o\u00f9 $Z\\geq n$ et $M$ est un majorant de $|z|$ et de $|G(z)|$ sur le disque $|z|\\leq2Z$.\n \n D'apr\u00e8s les hypoth\u00e8ses du th\u00e9or\u00e8me, on a directement la condition 1 avec $A=1$, la condition 2 ainsi que la condition 3.\n D'apr\u00e8s la proposition \\ref{croissance}, on peut prendre $M\\leq Z^{cZ}$ pour une certaine constante $c$, donc:\n \\[ (4T)^{96d^2\/T}(M+1)^{16d}H^{48d^2} \\leq (4T)^{96d^2\/T}Z^{c^\\prime d Z}H^{48d^2}\n \\leq c^{d^2}Z^{c^\\prime d Z}H^{48d^2}\\]\n et, pour que la condition 4 soit r\u00e9alis\u00e9e, il suffit donc d'avoir\n $cd^2+(cdZ-T)\\log(Z) + 48d^2\\log(H)\\leq 0$.\n Si on prend $T\\geq2cdZ$, il suffit d\u00e9sormais d'avoir $d(c+48\\log(H))\\leq cZ\\log(Z)$, ce qui se ram\u00e8ne \u00e0:\n \\begin{equation}\\label{choixdeZ}cd\\log(H)\\leq Z\\log(Z),\\end{equation}\n ce qui peut \u00eatre obtenu en choisissant $Z$ sup\u00e9rieur \u00e0 un multiple constant assez grand de $\\frac{d\\log(H)}{\\log(d\\log(H))}$. En effet, si $Z\\geq\\lambda\\frac{d\\log(H)}{\\log(d\\log(H))}$, alors:\n \\begin{align*}\n Z\\log(Z) & \\geq \\lambda\\frac{d\\log(H)}{\\log(d\\log(H))}\\left(\\log(d\\log(H)) + \\log(\\lambda)- \\log\\log(d\\log(H))\\right)\\\\\n & \\geq \\lambda d\\log(H)\\left(1+\\frac{\\log(\\lambda)}{\\log(d\\log(H))} - \\frac{\\log\\log(d\\log(H))}{\\log(d\\log (H))}\\right)\\\\\n & \\geq \\lambda d\\log(H)\\left(1- \\frac{\\log\\log(d\\log(H))}{\\log(d\\log(H))}\\right)\n \\geq \\lambda d\\log(H)\\left(1-\\frac{1}{e}\\right).\n \\end{align*}\n \n On pose donc $Z=\\lambda\\frac{d\\log(H)}{\\log(d\\log(H))}+n$ pour une valeur de $\\lambda$ suffisamment grande, de fa\u00e7on \u00e0 assurer simultan\u00e9ment \\eqref{choixdeZ} et la condition $Z\\geq n$. On prend alors pour $T$ un entier sup\u00e9rieur ou \u00e9gal \u00e0 $2cdZ$, mais du m\u00eame ordre de grandeur (par exemple, la partie enti\u00e8re sup\u00e9rieure de $2cdZ$). La condition $T\\geq\\sqrt{8d}$ sera alors v\u00e9rifi\u00e9e, quitte \u00e0 augmenter encore la valeur de $\\lambda$.\n \n On peut alors appliquer la proposition \\ref{bombieripila}, qui nous dit que $\\omega(\\mathbf{f}(\\mathcal{Z}))\\leq T$, c'est-\u00e0-dire qu'il existe un polyn\u00f4me $P$ en deux variables, de degr\u00e9 total au plus $T$, tel que tous les points de $\\mathbf{f}(\\mathcal{Z})$ soient des z\u00e9ros de $P$. 
Autrement dit, pour tout $z$ v\u00e9rifiant les hypoth\u00e8ses du th\u00e9or\u00e8me, on a $P(z,G(z))=0$.\n \n Le degr\u00e9 de $P$ en chaque variable est major\u00e9 par $T$, donc on peut utiliser le th\u00e9or\u00e8me \\ref{lemmedezeros} pour borner le nombre de z\u00e9ros de $P(z,G(z))$ dans un disque de rayon $n$, donc a fortiori le nombre de points de l'ensemble $\\mathcal{Z}$, c'est-\u00e0-dire les points satisfaisant les hypoth\u00e8ses du th\u00e9or\u00e8me:\n \\begin{align*}\n \\mathcal{N}(n,d,H) & \\leq cT(T+n)\\log(T+n)\\\\\n & \\leq c\\left(\\frac{d^2\\log(H)}{\\log(d\\log(H))}+n\\right)^2\\log\\left(\\frac{d^2\\log(H)}{\\log(d\\log(H))}+n\\right)\\\\\n & \\leq c (n^2\\log(n))\\frac{\\left(d^2\\log(H)\\right)^2}{\\log(d\\log(H))}.\\qedhere\n \\end{align*} \n\\end{proof}\n\nSi l'on parvenait \u00e0 supprimer le facteur $\\log(L+R)$ dans la proposition \\ref{lemmedezeros}, cela se traduirait ici par un d\u00e9nominateur en $(\\log(d\\log(H)))^2$ au lieu de $\\log(d\\log(H))$.\n\nOn peut maintenant d\u00e9montrer les r\u00e9sultats annonc\u00e9s comme corollaires du th\u00e9or\u00e8me \\ref{maintheorem}.\n\n\\begin{proof}\n \\begin{enumerate}\n \\item Si $z$ est un rationnel compris entre $n-1$ et $n$ et de d\u00e9nominateur au plus $D$, le num\u00e9rateur de $z$ est au plus $nD$ donc on a $H(z)\\leq nD$. Par ailleurs, $0\\leq \\Gamma(z)\\leq\\Gamma(n)=(n-1)!$ donc $H(G(z))\\leq (n-1)!D$.\n \n On applique donc le th\u00e9or\u00e8me \\ref{maintheorem} avec $d=1$ et $H=(n-1)!D$:\n \\begin{align*}\n \\mathcal{N}(D,n) & \\leq c (n^2\\log(n))\\frac{\\log^2 ((n-1)!D)}{\\log\\log ((n-1)!D)}\\\\\n & \\leq c (n^2\\log(n))\\frac{(n\\log(n))^2\\log^2 (D)}{\\log\\log (D)}\n \\leq c (n^4\\log^3(n))\\frac{\\log^2(D)}{\\log\\log(D)}.\n \\end{align*}\n \n \\item On fait le m\u00eame calcul, mais $G$ \u00e9tant born\u00e9e par $2$ sur $[1,+\\infty[$, la borne pour le num\u00e9rateur de $G(z)$ est cette fois $2D\\leq nD$, donc on applique le th\u00e9or\u00e8me avec $H=nD$, ce qui donne une borne en $c^\\prime n^2\\log^3 (n)\\frac{\\log^2(D)}{\\log\\log(D)}$. \\end{enumerate}\n\\end{proof}\n\n\n\\nocite{*}\n\n\\bibliographystyle{smfalpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\t\n\tApproximate Bayesian computation (ABC) is now a well-known method for conducting Bayesian statistical inference for stochastic simulation models that do not possess a computationally tractable likelihood function \\citep{Sisson2018}. ABC bypasses likelihood evaluations by preferring parameter configurations that generate simulated data close to the observed data, typically on the basis of a carefully chosen summarisation of the data.\n\t\n\tAlthough ABC has allowed practitioners in many diverse scientific fields to consider more realistic models, it still has several drawbacks. ABC is effectively a nonparametric procedure, and scales poorly with summary statistic dimension. Consequently, most ABC analyses resort to a low dimensional summary statistic to maintain a manageable level of computation, which could lead to significant information loss. Secondly, ABC requires the user to select values for various tuning parameters, which can impact on the approximation.\n\t\n\tAn alternative approach, called synthetic likelihood \\citep[SL,][]{Wood2010,Price2018}, assumes that the model statistic for a given parameter value has a multivariate normal distribution. 
The mean and covariance matrix of this distribution are estimated via independent simulations of the statistic, which is used to approximate the summary statistic likelihood at the observed statistic. \\citet{Price2018} developed the first Bayesian approach for SL, called Bayesian synthetic likelihood (BSL). The parametric assumption made by the SL allows it to scale more efficiently to increasing dimension of the summary statistic \\citep{Price2018}. Further, the BSL approach requires very little tuning and is ideal for exploiting parallel computing architectures compared to ABC.\n\t\n\tThe SL method has been tested successfully and shows great potential in a wide range of application areas such as epidemiology and ecology \\citep{Barbu2017,Price2018}. Further, \\citet{Price2018} and \\citet{Everitt2017} demonstrate that BSL exhibits some robustness to a departure from normality. However, a barrier to the ubiquitous use of SL is its strong normality assumption. We seek an approach that maintains the efficiency gains of BSL but enhances its robustness.\n\t\n\tThe main aim of this paper is to develop a more robust approach to BSL. We do this by developing a semi-parametric method for approximating the summary statistic likelihood, called semiBSL, which involves using kernel density estimates for the marginals and combining them with a Gaussian copula. An incidental contribution of this paper is a thorough empirical investigation into the robustness of BSL. We demonstrate through several simulated and real examples that semiBSL can offer significantly improved posterior approximations compared to BSL. Furthermore, we find that the number of model simulations required for semiBSL does not appear to increase significantly compared to BSL, and does not require any additional tuning parameters. However, we also find that the standard BSL approach can be remarkably robust on occasions. We also identify the limits of semiBSL by considering an example with nonlinear dependence structures between summaries. \n\t\n\tThere have been other approaches developed for robustifying the synthetic likelihood. \\citet{Fasiolo2018} developed an extended empirical saddlepoint (EES) estimation method, which involves shrinking the empirical saddlepoint approximation towards the multivariate normal distribution. The regularisation helps ensure that their likelihood estimator is well-defined even when the observed statistic lies in the tail of the model summary statistic distribution. The shrinkage parameter (called the ``decay\") needs to be selected by the user, and effectively represents a bias\/variance trade-off. More shrinkage towards the Gaussian decreases variance but increases bias. Although \\citet{Fasiolo2018} consider frequentist estimation only, we demonstrate that it is straightforward to consider it within a Bayesian algorithm, allowing direct comparison with our new approach. We demonstrate how our approach tends to outperform the saddlepoint approach in terms of accuracy, at least in our test examples we consider, with less tuning.\n \n\tAnother alternative to synthetic likelihood is the approach of \\citet{Dutta2017}, which frames the problem of estimating an intractable likelihood as ratio estimation. After choosing an appropriate density for the denominator, the likelihood is estimated via supervised binary classification methods where pseudo datasets drawn from the likelihood are labelled with a ``1'' and pseudo datasets drawn from the denominator distribution are labelled with a ``0''. 
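	A toy sketch of this ratio-estimation idea is given below. The two-dimensional summaries, the standard normal reference (denominator) density, the unpenalised logistic regression classifier and all variable names are illustrative assumptions only, and are not the exact choices of \\citet{Dutta2017}.
	
\\begin{verbatim}
# Toy sketch of classification-based likelihood-ratio estimation.
# Illustrative assumptions: 2-dimensional summaries, a standard normal
# reference ("denominator") distribution, plain logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
m = 2000                                     # simulations per class

s_model = rng.normal([1.0, -0.5], [1.0, 2.0], size=(m, 2))   # label 1
s_ref = rng.standard_normal((m, 2))                          # label 0

X = np.vstack([s_model, s_ref])
y = np.concatenate([np.ones(m), np.zeros(m)])
clf = LogisticRegression().fit(X, y)

# With balanced classes, the fitted log-odds at a point estimates
# log[ p(s | theta) / p_ref(s) ], i.e. the log likelihood-ratio.
s_obs = np.array([[0.8, -0.3]])              # hypothetical observed summary
log_ratio = clf.decision_function(s_obs)[0]
\\end{verbatim}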
The features of the classification problem are the chosen summary statistics and possibly transformations of them. \\citet{Dutta2017} use logistic regression for the classification task. This approach has the potential to be more robust than the synthetic likelihood. However, we do not compare with this approach for several reasons. Firstly, \\citet{Dutta2017} consider point estimation, and extension to Bayesian algorithms is not trivial. Secondly, their method requires the practitioner to make several choices, making it difficult to perform a fair comparison. Finally, their approach needs to perform a potentially expensive penalised logistic regression for each proposed parameter, making it more valuable for very expensive model simulators.\n\t\n\t\n\t\\citet{Li2017} also use copula modelling within ABC. However, they use a Gaussian copula for approximating the ABC posterior for a high-dimensional parameter, whereas we use it for modelling a high-dimensional summary statistic.\n\t\n\tThe article is structured as follows. Section \\ref{sec:BSL} introduces relevant background and previous research on BSL. Section \\ref{sec:semiBSL} presents our new method, semiBSL, for robustifying BSL. Section \\ref{sec:app} shows the comparison of the performance of BSL and semiBSL with four simulated examples and one real data example with models of varying complexity and dimension. Section \\ref{sec:conc} concludes the paper, and points out the limitations of our approach and directions for future work.\n\t\n\t\n\t\n\t\\section{Bayesian Synthetic Likelihood} \\label{sec:BSL}\n\t\n\tIn ABC and BSL, the objective is to simulate from the summary statistic posterior given by\n\t\\begin{align*}\n\t\tp(\\vect{\\theta}|\\vect{s_y}) \\propto p(\\vect{s_y}|\\vect{\\theta})p(\\vect{\\theta}),\n\t\\end{align*}\n\twhere $\\vect{\\theta} \\in \\Theta \\subseteq \\mathbb{R}^p$ is the parameter that requires estimation with corresponding prior distribution $p(\\vect{\\theta})$. Here, $\\vect{y} \\in \\mathcal{Y}$ is the observed data that are subsequently reduced to a summary statistic $\\vect{s_y} = S(\\vect{y})$ where $S(\\cdot): \\mathcal{Y} \\rightarrow \\mathbb{R}^d$ is the summary statistic function. The dimension of the statistic $d$ must be at least the same size as the parameter dimension, i.e.\\ $d \\geq p$.\n\t\n\tThe SL \\citep{Wood2010} involves approximating $p(\\vect{s_y}|\\vect{\\theta})$ with\n\t\\begin{align}\n\t\tp(\\vect{s_y}|\\vect{\\theta}) \\approx p_A(\\vect{s_y}|\\vect{\\theta}) = \\mathcal{N}(\\vect{s_{y}} | \\vect{\\mu}(\\vect{\\theta}),\\vect{\\Sigma}(\\vect{\\theta})). \\label{eq:sl}\n\t\\end{align}\n\tThe mean and covariance $\\vect{\\mu}(\\vect{\\theta})$ and $\\vect{\\Sigma}(\\vect{\\theta})$ are not available in closed form but can be estimated via independent model simulations at $\\vect{\\theta}$. The procedure involves drawing $\\vect{x}_{1:n} = (\\vect{x}_1,\\ldots,\\vect{x}_n)$, where $\\vect{x}_i \\stackrel{\\mathrm{iid}}{\\sim} p(\\cdot|\\vect{\\theta})$ for $i=1,\\ldots,n$, and calculating the summary statistic for each dataset, $\\vect{s}_{1:n} = (\\vect{s}_1,\\ldots,\\vect{s}_n)$, where $\\vect{s}_i$ is the summary statistic for $\\vect{x}_i,\\ i=1,\\ldots,n$. 
These simulations can be used to estimate $\\vect{\\mu}$ and $\\vect{\\Sigma}$ unbiasedly:
	\\small
	\\begin{align}
		\\begin{split}
			\\vect{\\mu}_n(\\vect{\\theta}) & = \\frac{1}{n}\\sum_{i=1}^{n}\\vect{s}_{i}, \\\\ \\vect{\\Sigma}_n(\\vect{\\theta}) &= \\frac{1}{n-1}\\sum_{i=1}^{n}(\\vect{s}_{i}-\\vect{\\mu}_n(\\vect{\\theta}))(\\vect{s}_{i}-\\vect{\\mu}_n(\\vect{\\theta}))^{\\top}.
			\\label{eq:SLparams}
		\\end{split}
	\\end{align}
	\\normalsize
	The estimates in \\eqref{eq:SLparams} can be substituted into the SL in \\eqref{eq:sl} to estimate the SL as $\\mathcal{N}(\\vect{s_{y}} | \\vect{\\mu}_n(\\vect{\\theta}),\\vect{\\Sigma}_n(\\vect{\\theta}))$. We can sample from the approximate posterior using MCMC; see Algorithm \\ref{alg:MCMCBSL}. Theoretically, the corresponding MCMC algorithm targets the approximate posterior $p_{A,n}(\\vect{\\theta}|\\vect{s_y}) \\propto \\mathsf{E}[\\mathcal{N}(\\vect{s_{y}} | \\vect{\\mu}_n(\\vect{\\theta}),\\vect{\\Sigma}_n(\\vect{\\theta}))]p(\\vect{\\theta})$. For finite $n$, $\\mathcal{N}(\\vect{s_{y}} | \\vect{\\mu}_n(\\vect{\\theta}),\\vect{\\Sigma}_n(\\vect{\\theta}))$ is a biased estimate of $\\mathcal{N}(\\vect{s_{y}} | \\vect{\\mu}(\\vect{\\theta}),\\vect{\\Sigma}(\\vect{\\theta}))$, thus resulting in $p_{A,n}(\\vect{\\theta}|\\vect{s_y})$ being theoretically dependent on $n$ \\citep{Andrieu2009}. However, \\citet{Price2018} demonstrate empirically that the BSL posterior $p_{A,n}(\\vect{\\theta}|\\vect{s_y})$ is remarkably insensitive to its only tuning parameter, $n$. Thus we can choose $n$ to maximise computational efficiency. 
	
	\\begin{algorithm}[htp]
		\\SetKwInOut{Input}{Input}
		\\SetKwInOut{Output}{Output}
		\\Input{Summary statistic of the data, $\\vect{s_y}$, the prior distribution, $p(\\vect{\\theta})$, the proposal distribution $q$, the number of iterations, $T$, and the initial value of the chain $\\vect{\\theta}^{0}$.}
		\\Output{MCMC sample $(\\vect{\\theta}^{0},\\vect{\\theta}^{1}, \\ldots, \\vect{\\theta}^{T})$ from the BSL posterior, $p_{A,n}(\\vect{\\theta}|\\vect{s_{y}})$. Some samples can be discarded as burn-in if required. }
		\\vspace{0.5cm}
		Simulate $\\vect{x}_{1:n} \\stackrel{\\mathrm{iid}}{\\sim} p(\\cdot|\\vect{\\theta}^{0})$ and compute $\\vect{s}_{1:n}$\\\\
		Compute $\\vect{\\phi}^{0}=(\\vect{\\mu}_n(\\vect{\\theta}^{0}),\\vect{\\Sigma}_n(\\vect{\\theta}^{0}))$ using \\eqref{eq:SLparams} \\\\
		\\For{$i = 1$ to $T$}{
			Draw $\\vect{\\theta}^{*} \\sim q(\\cdot|\\vect{\\theta}^{i-1})$\\\\
			Simulate $\\vect{x}_{1:n}^{*} \\stackrel{\\mathrm{iid}}{\\sim} p(\\cdot|\\vect{\\theta}^{*})$ and compute $\\vect{s}_{1:n}^{*}$\\\\
			Compute $\\vect{\\phi}^{*}=(\\vect{\\mu}_n(\\vect{\\theta}^{*}),\\vect{\\Sigma}_n(\\vect{\\theta}^{*}))$ using \\eqref{eq:SLparams} \\\\
			Compute \\tiny$r=\\min\\left(1,\\frac{\\mathcal{N}(\\vect{s_{y}} | \\vect{\\mu}_n(\\vect{\\theta}^*),\\vect{\\Sigma}_n(\\vect{\\theta}^*))p(\\vect{\\theta}^{*})q(\\vect{\\theta}^{i-1}|\\vect{\\theta}^{*})}{\\mathcal{N}(\\vect{s_{y}} | \\vect{\\mu}_n(\\vect{\\theta}^{i-1}),\\vect{\\Sigma}_n(\\vect{\\theta}^{i-1}))p(\\vect{\\theta}^{i-1})q(\\vect{\\theta}^{*}|\\vect{\\theta}^{i-1})}\\right)$\\normalsize\\\\
			\\eIf{$\\mathcal{U}(0,1)<r$}{
				Set $\\vect{\\theta}^{i}=\\vect{\\theta}^{*}$ and $\\vect{\\phi}^{i}=\\vect{\\phi}^{*}$\\\\
			}{
				Set $\\vect{\\theta}^{i}=\\vect{\\theta}^{i-1}$ and $\\vect{\\phi}^{i}=\\vect{\\phi}^{i-1}$\\\\
			}
		}
		\\caption{MCMC BSL \\label{alg:MCMCBSL}}
	\\end{algorithm}
	
	\\subsection{MA(2)} \\label{subsec:ma2}
	
	Our first example is the MA(2) time series model, $y_t = z_t + \\theta_1 z_{t-1} + \\theta_2 z_{t-2}$, where the $z_t$ are independent standard normal innovations, and we place a uniform prior on $(\\theta_1,\\theta_2)$ over the invertibility region defined by $-2<\\theta_1<2$, $\\theta_1+\\theta_2>-1$ and $\\theta_1-\\theta_2<1$. 
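	As a concrete illustration of the estimator in \\eqref{eq:SLparams}, the sketch below simulates the MA(2) model and evaluates the Gaussian synthetic log-likelihood at the observed data, using the full dataset as the summary statistic. The number of simulations $n=200$, the series length of $50$, the random seed and the function names are illustrative choices only.
	
\\begin{verbatim}
# Sketch: Gaussian synthetic log-likelihood for the MA(2) model.
import numpy as np
from scipy.stats import multivariate_normal

def simulate_ma2(theta1, theta2, T, rng):
    z = rng.standard_normal(T + 2)               # iid N(0,1) innovations
    return z[2:] + theta1 * z[1:-1] + theta2 * z[:-2]

def gaussian_sl_loglike(s_obs, theta, n, rng):
    sims = np.array([simulate_ma2(*theta, len(s_obs), rng) for _ in range(n)])
    mu_n = sims.mean(axis=0)                     # sample mean
    Sigma_n = np.cov(sims, rowvar=False)         # unbiased sample covariance
    return multivariate_normal.logpdf(s_obs, mean=mu_n, cov=Sigma_n)

rng = np.random.default_rng(1)
s_obs = simulate_ma2(0.6, 0.2, 50, rng)          # "observed" statistic
loglike = gaussian_sl_loglike(s_obs, (0.6, 0.2), n=200, rng=rng)
\\end{verbatim}
	Within Algorithm \\ref{alg:MCMCBSL}, an estimate of this kind replaces the intractable likelihood in the Metropolis--Hastings ratio.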
Here, the likelihood function is multivariate normal with $\\mathrm{Var}(y_t)=1+\\theta_1^2+\\theta_2^2$, $\\mathrm{Cov}(y_t,y_{t-1})=\\theta_1+\\theta_1\\theta_2$, $\\mathrm{Cov}(y_t,y_{t-2})=\\theta_2$, with all other covariances equal to $0$. Here, the observed dataset is simulated with parameters $\\theta_1 = 0.6$ and $\\theta_2 = 0.2$.\n\t\n\tWe do not perform any data summarisation here, i.e.\\ the summary statistic is the full dataset. Given the likelihood is multivariate normal, standard BSL will perform well. To explore its performance when normality is violated, we apply the sinh-arcsinh transformation \\citep{Jones2009} to each individual data point to test different amounts of skewness and kurtosis,\n\t\n\t\t\\begin{equation}\n\t\t\tf(x) = \\sinh \\Big(\\dfrac{1}{\\delta} \\sinh^{-1}(x + \\epsilon) \\Big),\n\t\t\t\\label{eq:transformation}\n\t\t\\end{equation}\n\twhere $\\epsilon$ and $\\delta$ control the level of skewness and kurtosis, respectively. We tested the following four scenarios: no transformation, skewness only, kurtosis only and both skewness and kurtosis (see Figure \\ref{fig:summStat_ma2} in Appendix \\ref{app:computational_efficiency_example} for an example of the distributions). In the last experiment, we use different transformation parameters for each marginal summary statistic to aggravate estimation difficulty, i.e.\\ $\\epsilon_i \\stackrel{\\mathrm{iid}}{\\sim} \\mathrm{U}(-2,2)$, $\\delta_i = u_i ^ {(-1)^{v_i}}$, where $u_i \\stackrel{\\mathrm{iid}}{\\sim} \\mathrm{U}(1,2)$, $v_i \\stackrel{\\mathrm{iid}}{\\sim} \\mathrm{Binary}(0.5)$. \n\t\n\tEstimated posterior densities are presented in Figure \\ref{fig:scatter_ma2}. The results are shown for four different transformations and three different posterior approximations (BSL, semiBSL and EES). The contour plots represent the ``true\" posterior (obtained from a long run of MCMC using the exact likelihood). For the posterior approximations, bivariate scatterplots of the posterior samples are shown. We use the total variation distance to quantify the disparity between the approximated posterior distribution and the true posterior, i.e.\\ $\\text{tv}(f_1,f_2) = \\dfrac{1}{2} \\int_{\\vect{\\Theta}} |f_1(\\vect{\\theta}) - f_2(\\vect{\\theta})| d \\vect{\\theta}$. This can be estimated via numerical integration with a 2-D grid covering most of the posterior mass. The choice of $n$ and the total variation distance are shown in Figure \\ref{fig:scatter_ma2}.\n\t\n\tIn order to show the stability and performance of our approximate posterior results, we also run BSL and semiBSL for additional datasets generated from the MA(2) model with the same true parameter configuration. We compute the total variation using nonparametric density estimation for each case and summarise the results in Table \\ref{tab:tv_ma2}. It is apparent that semiBSL always outperforms BSL in terms of the total variation distance to the true posterior.\n\t\n\t\\begin{table} \\label{tab:tv_ma2}\n\t\t\\centering\n\t\t\\caption{Total variation distance from the approximate posterior to the true posterior for the MA(2) example. The values shown in the table are the mean of the total variation distances of $30$ different replications. 
The standard errors are given in the parentheses.}\n\t\t\\vspace{0.5cm}\n\t\t\\begin{tabular}{lll}\n\t\t\t\\hline\\noalign{\\smallskip}\n\t\t\t$\\epsilon$ and $\\delta$ & tv (BSL) & tv (semiBSL) \\\\\n\t\t\t\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\t\t\t$\\epsilon = 0$, $\\delta = 1$ & 0.07 (0.02) & 0.07 (0.02) \\\\\n\t\t\t$\\epsilon = 2$, $\\delta = 1$ & 0.41 (0.16) & 0.22 (0.09) \\\\\n\t\t\t$\\epsilon = 0$, $\\delta = 0.5$ & 0.42 (0.18) & 0.11 (0.03) \\\\\n\t\t\trandom $\\epsilon$ and $\\delta$ & 0.47 (0.19) & 0.16 (0.08) \\\\\n\t\t\t\\noalign{\\smallskip}\\hline\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\tThe bivariate posteriors and total variations indicate that the semiBSL posteriors are robust to all four transformed summary statistics, while BSL fails to get close to the true posterior. The results also suggest that the EES provides little robustness in this example. We also tested the EES method with a much larger $n$, here $n=5000$, for the randomised $\\epsilon, \\delta$ dataset to see whether the posterior accuracy improves. With $n=5000$, the decay parameter is reduced to 45 (roughly one third of the value obtained with $n=300$). We find that even with significant additional computational effort, the EES posterior approximation shows little improvement.\n\t\n\tIt is important to note that the dependence structure in the data (after back-transformation) is Gaussian. Therefore, the Copula assumption made by semiBSL is correct in this example up to estimation of the marginals, which are done using KDE without knowledge of true marginals. Thus, we would expect semiBSL to provide good posterior approximations here and it is necessary to test its performance in other examples. However, the example does serve to illustrate that BSL can be impacted by non-normality and that the EES may not provide sufficient robustness to non-normality.\n\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig1_1.png}\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig1_2.png}\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig1_3.png}\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig1_4.png}\n\t\t\\caption{Bivariate scatter plots of the posterior distributions of the MA(2) example using different pairs of transformation parameters for the summary statistic. The overlaid contour plots represent the true posterior. The number of simulations per iteration $n$ and the total variation distance compared to the gold standard are shown in the corner of each plot.}\n\t\t\\label{fig:scatter_ma2}\n\t\\end{figure*}\n\t\n\t\\subsection{M\/G\/1} \\label{subsec:mg1}\n\t\n\tThe M\/G\/1 (single-server queuing system with Poisson arrivals and general service times) queuing model has been investigated previously in the context of ABC by \\citet{Blum2010} and \\citet{Fearnhead2012}. Here, the observed data, $\\vect{y}_{1:50}$, are $50$ inter-departure times from $51$ customers. In this model, the likelihood is cumbersome to calculate whereas simulation is trivial. The distribution of the service time is uniform $\\mathrm{U}(\\theta_1,\\theta_2)$, and the distribution of the inter-arrival time is exponential with rate parameter $\\theta_3$. We take $\\vect{\\theta} = (\\theta_1,\\theta_2,\\theta_3)$ as the parameter of interest and put a uniform prior $\\mathrm{U}(0,10) \\times \\mathrm{U}(0,10) \\times \\mathrm{U}(0,0.5)$ on $(\\theta_1,\\theta_2-\\theta_1,\\theta_3)$. 
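	Simulation from this model is indeed trivial. The sketch below uses the standard single-server queue recursion $D_i = \\max(D_{i-1}, A_i) + U_i$ for the departure times, where $A_i$ is the arrival time and $U_i$ the service time of customer $i$; the recursion layout, the function name and the random seed are one illustrative way of writing the simulator rather than the exact code used in the studies cited above.
	
\\begin{verbatim}
# Sketch of an M/G/1 simulator: 50 inter-departure times from 51 customers.
# Service times ~ U(theta1, theta2); inter-arrival times ~ Exp(rate theta3).
import numpy as np

def simulate_mg1(theta1, theta2, theta3, n_customers=51, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    service = rng.uniform(theta1, theta2, n_customers)
    arrivals = np.cumsum(rng.exponential(1.0 / theta3, n_customers))
    departures = np.empty(n_customers)
    prev = 0.0
    for i in range(n_customers):
        prev = max(prev, arrivals[i]) + service[i]  # D_i = max(D_{i-1}, A_i) + U_i
        departures[i] = prev
    return np.diff(departures)                      # 50 inter-departure times

y = simulate_mg1(1.0, 5.0, 0.2, rng=np.random.default_rng(2))
\\end{verbatim}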
The observed data are generated from the model with true parameter $\\vect{\\theta} = (1,5,0.2)$.\n\t\n\tHere, we take the log of the inter-departure times as our summary statistic. It is interesting that the distribution of the summary statistics does not resemble any common distribution, see Figure \\ref{fig:summStat_mg1} of Appendix \\ref{app:subsec:dist_summStat}. With the Bayesian approach given by \\citet{Shestopaloff2014}, we obtained a ``true'' posterior distribution of the model (sampled with MCMC). Figure \\ref{fig:scatter_mg1} shows the bivariate scatterplot of posterior samples using BSL, semiBSL and EES. Here the number of simulations per iteration is $n=1000$ for all methods. The coloured contour plot indicates the true posterior. It is evident that both BSL and EES exhibit an ``L'' shape in the bivariate posterior and can hardly reflect the true posterior. The semiBSL posterior is significantly more accurate and provides reasonable estimates of the posterior means.\n\t\n\tGiven that the posterior distribution of the M\/G\/1 example can be sensitive to the observed dataset, we repeat the MCMC BSL and MCMC semiBSL methods for $50$ different observed datasets using the same true parameter configuration and computed the 2D total variation distance to each true posterior for each pair of parameters. From Table \\ref{tab:tv_mg1}, it is evident that semiBSL outperforms BSL in terms of posterior accuracy.\n\t\n\t\\begin{table} \\label{tab:tv_mg1}\n\t\t\\centering\n\t\t\\caption{Total variation distance from the approximate posterior to the true posterior for the M\/G\/1 example. The values shown in the table are the mean of the total variation distances of $50$ different replications. The standard errors are given in the parentheses.}\n\t\t\\vspace{0.5cm}\n\t\t\\begin{tabular}{lll}\n\t\t\t\\hline\\noalign{\\smallskip}\n\t\t\tpair of parameters & tv (BSL) & tv (semiBSL) \\\\\n\t\t\t\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\t\t\t$\\theta_1$, $\\theta_2$ & 0.89 (0.07) & 0.55 (0.12) \\\\\n\t\t\t$\\theta_1$, $\\theta_3$ & 0.72 (0.05) & 0.38 (0.12) \\\\\n\t\t\t$\\theta_2$, $\\theta_3$ & 0.80 (0.10) & 0.42 (0.12) \\\\\n\t\t\t\\noalign{\\smallskip}\\hline\n\t\t\\end{tabular}\n\t\\end{table}\n \n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig2.png}\n\t\t\\caption{Bivariate scatter and contour plots of the posterior distributions of the M\/G\/1 example. The scatter plot is generated with a thinned approximate sample obtained by different approaches. The contour plot is drawn based on the ``true'' posterior distribution. The number of simulations per iteration $n$ and the total variation distance compared to the gold standard are shown in the corner of each plot.}\n\t\t\\label{fig:scatter_mg1}\n\t\\end{figure*}\n\n\t\\subsection{Stereological Extremes} \\label{subsec:stereo}\n\t\n\tDuring the process of steel production, the occurrence of microscopic particles, called inclusions, is a critical measure of the quality of steel. It is desirable that the inclusions are kept under a certain threshold, since steel fatigue is believed to start from the largest inclusion within the block. Direct observation of $3$-dimensional inclusions is inaccessible, so that inclusion analysis typically relies on examining $2$-dimensional planar slices. \\citet{Anderson2002} establish a mathematical model to formulate the relationship between observed cross-section samples, $S$, and the real diameter of inclusions, $V$, assuming that the inclusions are spherical. 
The model focuses on large inclusions, i.e.\\ $V > \\nu_0$, where $\\nu_0$ is a certain threshold, which is endowed with a generalised Pareto distribution such that\n\t\n\t\\begin{equation*}\n\t\tP(V \\leq \\nu | V > \\nu_0) = 1 - \\left\\{1 + \\dfrac{\\xi (\\nu - \\nu_0)}{\\sigma}\\right\\} ^{-1\/\\xi} _{+},\n\t\\end{equation*}\n\twhere $\\sigma > 0$ and $\\xi \\in \\mathbb{R}$ are the scale and shape parameters, respectively, and $\\{\\cdot\\}_{+} = \\max\\{0,\\cdot\\}$. The inclusions are mutually independent and locations of them follow a homogeneous Poisson process with rate parameter $\\lambda$. While the spherical model possesses a likelihood function that is easily computable, the spherical assumption itself might be inappropriate. This leads to the ellipsoidal model proposed by \\citet{Bortot2007}, who use ABC for likelihood-free inference due to the intractable likelihood function that it inherits. The new model assumes that inclusions are ellipsoidal with principal diameters $(V_1,V_2,V_3)$, where $V_3$ is the largest diameter without loss of generality. Here, $V_1 = U_1V_3$ and $V_2 = U_2V_3$, where $U_1$ and $U_2$ are independent uniform $\\mathrm{U}(0,1)$ random variables. The observed value $S$ is the largest principal diameter of an ellipse in the two-dimensional cross-section.\n\t\n\tHere we consider the ellipsoidal model with parameter of interest $\\vect{\\theta} = (\\lambda,\\sigma,\\xi)$. The prior distribution is $\\mathrm{U}(30,200) \\times \\mathrm{U}(0,15) \\times \\mathrm{U}(-3,3)$. Denoting the observed samples as $\\vect{S}$, we consider four summary statistics: the number of inclusions, $\\log(\\min(\\vect{S}))$, $\\log(\\text{mean}(\\vect{S}))$, $\\log(\\max(\\vect{S}))$. Figure \\ref{fig:summStat_stereo} in Appendix \\ref{app:subsec:dist_summStat} shows the distribution of the chosen summary statistic simulated at a point estimate of $\\vect{\\theta}$. The last three summary statistics have a significantly heavy right tail, strongly invalidating the normality assumption of BSL.\n\t\n\tFigure \\ref{fig:scatter_stereo} shows the bivariate scatterplot of posteriors obtained by BSL, semiBSL and EES. The number of simulations is $n=50$ for all methods. The overlaying contour plot is drawn with an MCMC ABC result with tolerance $1$. A Mahalanobis distance is used to compare summary statistics. The covariance used in the Mahalanobis distance is taken to be the sample covariance of summary statistics independently generated from the model at a point estimate of the parameter. Note that the outliers are removed before computing the Mahalanobis covariance matrix. Given that there are only four summary statistics in this example, we take ABC as the gold standard approximation. It is apparent that both BSL and EES results are accepting parameter values in the tails that are rejected by ABC and semiBSL. We use the boxplot (Figure \\ref{fig:box_stereo}) to explain the impact of outlier simulations in BSL. The first column uses a parameter value that has high posterior support in the semiBSL result (referred to here as ``good\") and a medium number of simulations, the second column uses a parameter value that should have negligible posterior support (referred to here as ``poor\") and a medium number of simulations, and the last column uses a poor parameter value and a large number of simulations. In each test, we simulate $300$ independent BSL and semiBSL log-likelihood estimates. Negative infinities are ignored in the second and third tests in semiBSL. 
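	These infinities arise because a kernel density estimate of a marginal summary can underflow to zero when the observed statistic lies far in the tail of the simulated statistics. The sketch below of a semiparametric synthetic log-likelihood estimator, written from the description of semiBSL in Section \\ref{sec:intro} (Gaussian KDE marginals combined with a Gaussian copula whose correlation matrix is estimated by the Gaussian rank correlation), makes this mechanism explicit; the default KDE bandwidth, the clipping constant and the early return of $-\\infty$ are illustrative numerical choices and need not match the estimator used for the results reported here.
	
\\begin{verbatim}
# Sketch of a semiparametric synthetic log-likelihood:
# Gaussian KDE marginals + Gaussian copula (Gaussian rank correlation).
import numpy as np
from scipy.stats import gaussian_kde, norm, rankdata

def semi_sl_loglike(s_obs, sims):
    n, d = sims.shape
    log_marg, u = 0.0, np.empty(d)
    for j in range(d):
        kde = gaussian_kde(sims[:, j])                  # marginal KDE
        dens = kde(s_obs[j])[0]
        if dens <= 0.0:                                 # numerical underflow
            return -np.inf                              # source of -inf estimates
        log_marg += np.log(dens)
        u[j] = kde.integrate_box_1d(-np.inf, s_obs[j])  # estimated marginal CDF
    eta = norm.ppf(np.clip(u, 1e-12, 1.0 - 1e-12))      # Gaussian scores
    # Gaussian rank correlation: Pearson correlation of the normal scores
    z = norm.ppf(rankdata(sims, axis=0) / (n + 1.0))
    R = np.corrcoef(z, rowvar=False)
    Rinv = np.linalg.inv(R)
    _, logdetR = np.linalg.slogdet(R)
    log_copula = -0.5 * logdetR - 0.5 * eta @ (Rinv - np.eye(d)) @ eta
    return log_copula + log_marg
\\end{verbatim}
	Replacing the Gaussian density in Algorithm \\ref{alg:MCMCBSL} with an estimator of this form gives an MCMC semiBSL sampler of the kind used in the comparisons above.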
It is worth noting that semiBSL produces a lot of negative infinity log-likelihood estimates for simulations at the poor parameter value, which are ignored in the boxplot. The overestimated log-likelihoods at the ``poor'' parameter value by BSL are competitive to those at the ``good'' parameter value. Thus, when BSL grossly overestimates the likelihood at a poor parameter value, the algorithm may get stuck in the tails for long periods. Overall, semiBSL is the only method that gives an approximate posterior consistent with that of ABC.\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=0.5\\textwidth]{Fig3.png}\n\t\t\\caption{Boxplot of BSL and semiBSL log-likelihoods of the stereological extreme example. Negative infinite semiBSL log-likelihood values are ignored. The figure at top is the whole view of the boxplot. The figure at the bottom is zoomed in vertically to -40 to 7 to show more clearly highlight the outliers for BSL.}\n\t\t\\label{fig:box_stereo}\n\t\\end{figure}\n\t\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig4.png}\n\t\t\\caption{Bivariate scatter plots of the posterior distributions of the stereological extreme example. The scatter plot is generated with thinned approximate sample by BSL (run no.$2$ in Figure \\ref{fig:post_stereo}), semiBSL and EES approach. The contour plot corresponds to the gold standard in this example (MCMC ABC). The number of simulations per iteration $n$ and the total variation distance compared to the gold standard are shown in the corner of each plot.}\n\t\t\\label{fig:scatter_stereo}\n\t\\end{figure*}\n\n\t\\subsection{Fowler's Toads} \\label{subsec:toads}\n\t\n\tMovements of amphibian animals exhibit patterns of site fidelity and long-distance dispersal at the same time. Modelling such patterns helps in understanding amphibian's travel behaviour and contributes to amphibian conservation. \\citet{Marchand2017} develop an individual-based model for a species called Fowler's Toads (\\textit{Anaxyrus fowleri}), and collect data via radiotracking in Ontario, Canada. The comprehensive experimental and modelling details are stated in the original paper. Here, we only present a brief description of the model.\n\t\n\tThe model assumes that a toad hides in its refuge site in the daytime and moves to a randomly chosen foraging place at night. After its geographical position is collected via a transmitter, the toad either takes refuge at the current location or returns to one of the previous sites. For simplicity, the refuge locations are projected to a single axis, thus can be represented by a single-dimensional spatial process. GPS location data are collected on $n_t$ toads for $n_d$ days, i.e.\\ the observation matrix $\\vect{Y}$ is of dimension $n_d \\times n_t$. For the synthetic data we use here, $n_t=66$, $n_d=63$, and missingness is not considered. Then $\\vect{Y}$ is summarised down to four sets comprising the relative moving distances for time lags of $1,2,4,8$ days. For instance, $\\vect{y}_1$ consists of the displacement information of lag $1$ day, $\\vect{y}_1 = \\{|\\Delta y| = |\\vect{Y}_{i,j}-\\vect{Y}_{i+1,j}| ; 1 \\leq i \\leq n_d-1, 1 \\leq j \\leq n_t \\}$.\n\t\n\tSimulation from the model involves two distinct processes. For each single toad, we first generate an overnight displacement, $\\Delta y$, then mimic the returning behaviour with a simplified model. 
The overnight displacement is deemed to have significant heavy tails, assumed to belong to the L\\'evy-alpha stable distribution family, with stability parameter $\\alpha$ and scale parameter $\\gamma$. This distribution has no closed form, while simulation from it is straightforward \\citep{Chambers1976}, making simulation-based approaches appealing. The original paper provided three returning models with different complexity. We adopt the random return model here as it has the best performance among the three and is the easiest for simulation. The total returning probability is a constant $p_0$, and if a return occurs on day $1 \\leq i \\leq m$, then the return site is the same as the refuge site on day $i$, where $i$ is selected randomly from ${1,2,\\ldots,m}$ with equal probability. Here we take the observed data as synthetically generated with true parameter $\\vect{\\theta}=(\\alpha,\\gamma,p_0)=(1.7,35,0.6)$. We use a uniform prior over $(1,2) \\times (0,100) \\times (0,0.9)$ here.\n\t\n\tWe fit a four-component Gaussian mixture model to each set of $\\log(|\\Delta y|)$. Figure \\ref{fig:kde_deltay_toads} shows the distributions of $\\log(|\\Delta y|)$ for lags of $1,2,4,8$. As the summary statistic we use the 11-dimensional score of this fitted auxiliary model (corresponding to three component weights, four means and four standard deviations). This corresponds to the indirect inference approach for selecting summary statistics \\citep[see][]{Drovandi2011,Gleim2013,Drovandi2015}. Accommodating the four different lags, there are $44$ summary statistics in total. The scores do not seem to depart a large amount from normality (see Figure \\ref{fig:summStat_scores_toads} in Appendix \\ref{app:subsec:dist_summStat}), thus standard BSL may be suitable for this model. To further explore the robustness of BSL and test our semiBSL approach, we include a power transformation of $\\vect{s}_{\\vect{x}}$ to push the irregularity of the summary statistic further. The transformation function is given by $f_{p}(\\cdot) = \\mathrm{sgn}(\\cdot) \\times (|\\cdot|)^{p}$. It retains the sign of the input and creates a sharp peak near $0$. The distribution of the transformed summary statistics using $p=1.5$ is also included in Appendix \\ref{app:subsec:dist_summStat}.\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=0.75\\textwidth]{Fig5}\n\t\t\\caption{Density plot of $\\log(|\\Delta y|)$ for lags of $1,2,4,8$ days of the Fowler's toads example.}\n\t\t\\label{fig:kde_deltay_toads}\n\t\\end{figure}\n\t\n\tIn Figure \\ref{fig:post_comp_toads}, we show the posterior approximations produced by different approaches and transformation parameters. The posterior distributions obtained by different approaches are reasonably close to each other using the original score summary statistics. The BSL marginal posterior shows significant shift horizontally as the transformation power grows, whist semiBSL and EES are more robust to the change.\n\t\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig6_1}\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig6_2}\n\t\t\\includegraphics[width=0.8\\textwidth]{Fig6_3}\n\t\t\\caption{Comparing the approximate posterior distributions for the BSL, semiBSL and EES approaches at different transformation powers of the Fowler's toads example. 
The vertical line indicates the true parameter value.}\n\t\t\\label{fig:post_comp_toads}\n\t\\end{figure*}\n\t\n\t\\subsection{Simple Recruitment, Boom and Bust} \\label{subsec:bnb}\n\n\tHere we consider an example that tests the limits of our semiBSL method. The simple recruitment, boom and bust model was used in \\citet{Fasiolo2018} to investigate the performance of the saddlepoint approximation to a non-normal summary statistic. This is a discrete stochastic temporal model that can be used to represent the fluctuation of the population size of a certain group over time. Given the population size $N_t$ and parameter $\\vect{\\theta}=(r,\\kappa,\\alpha,\\beta)$, the next value $N_{t+1}$ follows the following distribution\n\t\n\t\\begin{align*}\n\t\tN_{t+1} \\sim \n\t\t\\begin{cases}\n\t\t\t\\mathrm{Poisson}(N_t(1+r)) + \\epsilon_t, & \\text{ if }\\quad N_t \\leq \\kappa \\\\\n\t\t\t\\mathrm{Binom}(N_t,\\alpha) + \\epsilon_t, & \\text{ if}\\quad N_t > \\kappa\n\t\t\\end{cases},\n\t\\end{align*}\n\twhere $\\epsilon_t \\sim \\mathrm{Pois}(\\beta)$ is a stochastic term. The population oscillates between high and low level population sizes for several cycles. The true parameters are $r=0.4$, $\\kappa=50$, $\\alpha=0.09$ and $\\beta=0.05$, and the prior distribution is $\\mathrm{U}(0,1) \\times \\mathrm{U}(10,80) \\times \\mathrm{U}(0,1) \\times \\mathrm{U}(0,1)$. There are $250$ values in the observed data. We use $50$ burn-in values to remove the transient phase of the process.\n\t\n\tWe construct the summary statistics as follows. Consider a dataset $\\vect{x}$, define the differences and ratios as $\\vect{d}_{\\vect{x}} = \\{x_i - x_{i-1} ; i=2,\\ldots,250\\}$ and $\\vect{r}_{\\vect{x}} = \\{x_i \/ x_{i-1} ; i=2,\\ldots,250\\}$, respectively. We use the sample mean, variance, skewness and kurtosis of $\\vect{x}$, $\\vect{d}_{\\vect{x}}$ and $\\vect{r}_{\\vect{x}}$ as our summary statistic, $\\vect{s}_{\\vect{x}}$. We also tested the statistics used in \\citet{Fasiolo2018} but we found our choice to be more informative about the model parameters. The parameter $\\beta$ seems to have a strong impact on the model statistic distribution. Small values of $\\beta$ tend to generate statistics that are highly non-normal so we consider such a case here.\n\t\n\tDistributions of the $12$-dimensional summary statistic (based on the true parameter value) are shown in Figure \\ref{fig:summStat_bnb} in Appendix \\ref{app:subsec:dist_summStat}. None of the chosen summary statistics are close to normal. Marginal posterior distributions by BSL, semiBSL, EES and ABC are shown in Figure \\ref{fig:post_bnb}. The values of $n$ are given in the legend. With only four parameters and twelve summary statistics in this example, ABC can perform well and is treated as the gold standard. In the MCMC ABC result, we manage to get $14.5$ thousand accepted samples out of 18 million iterations at tolerance $=2$. A Mahalanobis distance is used to compare summary statistics. The covariance used in the Mahalanobis distance is taken to be the sample covariance of summary statistics independently generated from the model at $\\vect{\\theta} = (0.4,50,0.09,0.1)$. Figure \\ref{fig:scatter_bnb} shows the bivariate scatterplot with overlaying contour plot (ABC result). It is evident that the semiBSL procedure is producing an approximation that is closer to ABC compared to BSL. It does suggest that semiBSL is providing some robustness. However, it is also evident that there is some difference between the ABC and semiBSL results. 
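	The model and the statistics above are straightforward to reproduce. The sketch below simulates one dataset at the true parameter value and computes the twelve summaries; the initial population size, the random seed and the guard against zero-valued populations in the ratio statistic are illustrative choices only. Repeated draws of this kind underlie the summary-statistic diagnostics considered next.
	
\\begin{verbatim}
# Sketch: simulate the boom-and-bust model and compute the 12 summaries
# (mean, variance, skewness, kurtosis of x, its differences and its ratios).
import numpy as np
from scipy.stats import skew, kurtosis

def simulate_boom_bust(r, kappa, alpha, beta, n_obs=250, burn_in=50, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n, path = 10, []                       # illustrative initial population
    for _ in range(n_obs + burn_in):
        if n <= kappa:
            n = rng.poisson(n * (1.0 + r)) + rng.poisson(beta)
        else:
            n = rng.binomial(n, alpha) + rng.poisson(beta)
        path.append(n)
    return np.array(path[burn_in:], dtype=float)

def summaries(x):
    d = np.diff(x)                                   # x_i - x_{i-1}
    ratio = x[1:] / np.clip(x[:-1], 1.0, None)       # guard against zeros
    return np.concatenate([[s.mean(), s.var(), skew(s), kurtosis(s)]
                           for s in (x, d, ratio)])

x = simulate_boom_bust(0.4, 50, 0.09, 0.05, rng=np.random.default_rng(3))
s_x = summaries(x)
\\end{verbatim}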
To gain some insight into the dependence structure between summaries, we consider bivariate scatterplots of the summaries, shown in Figure \\ref{fig:summStat_corr_bnb} in Appendix \\ref{app:subsec:scatter_summStat_bnb}. It is clear that there is a high degree of nonlinear dependence between many of the summaries, which cannot be captured by our Gaussian copula. Therefore, this example is highly challenging for our semiBSL approach.
	
 \\begin{figure}
 	\\centering
 	\\includegraphics[width=0.75\\textwidth]{Fig7}
 	\\caption{Approximate marginal posterior distributions for the simple recruitment, boom and bust example. The tolerance used in MCMC ABC is $2$. The vertical lines indicate the true parameter values. The number of simulations used for each approach is given in the legend.}
 	\\label{fig:post_bnb}
 \\end{figure}

	\\begin{figure*}
		\\centering
		\\includegraphics[width=0.8\\textwidth]{Fig8.png}
		\\caption{Bivariate scatter and contour plots of the posterior distributions of the simple recruitment, boom and bust example. The scatter plots are generated from thinned approximate samples obtained by the BSL, semiBSL and EES methods. The contour plot is based on MCMC ABC with a tolerance of $2$. Only three pairs of parameters are shown here: $r$ versus $\\alpha$, $r$ versus $\\beta$ and $\\kappa$ versus $\\beta$. The number of simulations per iteration $n$ and the total variation distance compared to the gold standard are shown in the corner of each plot.}
		\\label{fig:scatter_bnb}
	\\end{figure*}

	\\section{Discussion} \\label{sec:conc}
	
	In this paper, we proposed a new method to relax the normality assumption in BSL and presented several examples of varying complexity to test the empirical performance of BSL and semiBSL. The new approach offered additional robustness in all of the examples considered. Further, given the semi-parametric nature of the method, the computational gains of the fully parametric BSL are largely retained in terms of the number of model simulations required. When the cost of the model simulations required per iteration by BSL is non-negligible, the additional cost incurred by semiBSL will be small: estimating the marginal KDEs and the Gaussian rank correlation matrix is relatively straightforward.
	
	However, we did observe situations where standard BSL was remarkably robust to a lack of normality, which is consistent with previous literature including \\citet{Price2018} and \\citet{Everitt2017}. Developing some theory around when we can expect standard BSL to work well would be useful.
	
	Previous BSL research \\citep{Price2018} showed that the approximate posterior is very insensitive to the choice of $n$. Surprisingly, we also found the semiBSL posterior to be relatively insensitive to $n$, albeit not as insensitive as BSL. We expect semiBSL to be more sensitive to $n$, since the choice of $n$ is more likely to impact the kernel density estimates than the Gaussian synthetic likelihood. The sensitivity-to-$n$ results for semiBSL are presented in Appendix \\ref{app:sensitivity2n}.
	
	The new approach was also compared with another robustified synthetic likelihood method, the EES \\citep{Fasiolo2018}. Because of potential numerical issues with the standard empirical saddlepoint approximation, the EES has to resort to a tuning parameter called the decay, which shifts the estimator between a flexible saddlepoint approximation and a rigid normal distribution. 
For roughly the same number of simulations, the posterior approximation from the EES generally shows some improvement over BSL, but less than that of semiBSL in the examples tested.
	
	Since our approach can return synthetic likelihood estimates of $-\\infty$ for parameter values in the far tails of the posterior, we recommend that the practitioner first find a parameter value whose summary statistic distribution has reasonable support at the observed statistic, and initialise the MCMC there. We note that standard BSL can also exhibit slow convergence when initialised in the tail of the posterior \\citep{Price2018}.
	
	Note that we use an unrestricted correlation matrix $\\vect{R}$ in equation \\eqref{eq:pdf_semiBSL} throughout the main paper. However, it is possible to improve the computational efficiency with shrinkage estimation of $\\vect{R}$. \\citet{An2018} used the graphical lasso \\citep{Friedman2008} in the standard BSL algorithm and proposed a novel penalty selection method so that the Markov chain mixes efficiently. We present results using a straightforward shrinkage estimator proposed by \\citet{Warton2008} in Appendix \\ref{app:semiBSL_warton} and show a significant computational gain in the M\/G\/1 example.
	
	One limitation of our approach is that, if the true underlying marginal distribution of a statistic is highly irregular, the number of simulations required for a KDE to capture it will be large. We point out that we only require estimates of the marginal distributions of the summary statistics at the observed statistic values, rather than over their entire support. We note that future work could revolve around improving the performance of the KDE. It could be beneficial to deliberately undersmooth in the kernel estimation to reduce bias and obtain more accurate results in some cases. Another direction would be to consider other adaptive kernel density estimation approaches, such as the balloon estimator and the sample point estimator \\citep[e.g.][]{Terrell1992}, which may provide more stability and require fewer model simulations.
	
	There is still research to be done in improving the robustness of BSL. Our semiBSL method relies on the Gaussian copula dependence structure. The results in the boom and bust example show that the performance of semiBSL is compromised when there are strong nonlinear dependence structures between summary statistics. For future work, we plan to investigate more flexible copula structures, such as the multivariate skew normal \\citep{Sahu2008} and vine copulas \\citep{Bedford2002}. Another direction with great potential is to incorporate the semiBSL likelihood estimator into variational Bayes synthetic likelihood approaches \\citep[see][]{Ong2018b,Ong2018a} to speed up computation for a high-dimensional statistic and\/or parameter.

	Overall, we have demonstrated that our semiBSL approach can provide a significant amount of robustness relative to BSL with little or no additional computational cost in terms of the number of model simulations, while requiring no additional tuning parameters.
	
	\\section*{Acknowledgements}
	CD was supported by an Australian Research Council Discovery Early Career Researcher Award (DE160100741). ZA was supported by a scholarship under CD's Grant DE160100741 and a top-up scholarship from the Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS). 
DJN was supported by a Singapore Ministry of Education Academic Research Fund Tier 1 Grant (R-155-000-189-114). Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. The authors thank Alex Shestopaloff for sharing his code on exact MCMC for the M\/G\/1 model.\n\t\n\t\\bibliographystyle{apalike} \n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}}