{"text":"\\section{Introduction}\n\\label{intro}\nWhile many inventions are granted a patent, only a small fraction of them represent ``important'' technological advances or will have a significant impact on the market.\nAs a result, a key problem in technological forecasting is to detect which patents are important as early as possible. The literature has designed various indicators of patent importance based on patent data analysis, and it has been found quite consistently (see Section \\ref{sec:related}) that at least on average, important patents tend to receive more citations. However, this relationship is typically noisy, which suggests that more sophisticated metrics could outperform simple citation count in identifying important patents. Importantly, it takes time for a patent to accumulate citations, which implies that simply counting the number of citations received by a patent may be effective for uncovering old important patents, but not to detect important patents shortly after they are granted. \n\nIn this paper, we propose a network-based metric that identifies important patents better and earlier than citation count. Our metric, time-rescaled PageRank, was introduced by \\citet{mariani2016identification} to identify expert-selected important papers in physics. It is built on Google's PageRank algorithm \\citep{brin1998anatomy} by requiring that node score is not biased by node age. This metric is computationally efficient and thus can be applied on very large datasets~\\citep{vaccario2017quantifying}.\nHere we validate this metric on the US patent citation network (1926-2010), by evaluating its ability to detect the expert-selected ``important'' patents from \\citet{strumsky2015identifying}.\n\nWe find that Google's PageRank outperforms raw citation count in identifying the important patents, which supports the idea that important patents tend to be cited by other important patents. \nThis idea is further supported by the strong assortative degree-degree correlations observed in the network (Fig. \\ref{degree_correlations} below): highly-cited patents are typically cited by other highly-cited patents significantly more than what we would expect by chance; at the same time, highly-cited patents tend to cite other highly-cited patents.\nHowever, both PageRank and citation count are biased towards old patents; removing this bias is crucial to compare young and old patents on the same scale~\\citep{mariani2016identification}.\n\nTo demonstrate the usefulness of removing the age bias\nin the context of technological forecasting, we evaluate the metrics' performance in identifying the important patents shortly after they are issued, and find that time-rescaled PageRank significantly outperforms citation count and original PageRank in the first $10$ years after issuing, approximately.\nFinally, we use a time-respecting network null model \\citep{ren2017time} to generate randomized networks where the individual patents' citation count dynamics is the same as in the original network. We find that \nboth the observed degree-degree correlations and the performance advantage of PageRank-related metrics over citation count cannot be found in the randomized networks, which indicates that these properties emerge as a result of network effects that go beyond patents' neighborhood.\n\nOur findings demonstrate that in order to timely identify the significant patents, both network topology and time information play an essential role. 
In more general terms, the advantage of PageRank-related metrics over citation-counting metrics, together with the strong degree-degree correlations of the patent citation network, supports the hypothesis that significant technological advances ``lean on the shoulders of giants'' in a similar way as scientific advances \\citep{bornmann2010scientific}.\nYet, we find that the citation dynamics of scientific papers and patents are characterized by substantially different timescales. As a result, because patents (on average) take more time than papers to accumulate citations, the early identification of significant patents is more challenging than that of significant papers.\n\n\n\\section{Related work}\n\\label{sec:related}\nBroadly speaking, our work is related to studies of the relation between popularity metrics and significance in creative works such as scientific papers \\citep{mariani2016identification,comins2017citation}, movies \\citep{spitz2014measuring,wasserman2015cross,ren2017time}, or music albums \\citep{monechi2017significance}.\n\nIn the context of patent analysis, it is well known that patents are of extremely different quality \\citep{silverberg2007size}. While a direct measure of patent value is unavailable, patent data are very rich, and there have been many attempts at providing indicators of patent value or novelty based on data contained in patent documents, such as the number of claims, the number and type of technology categories, the size of the patent family, and renewal fees, to name just a few major examples. By far the most widely used patent impact indicator is the number of citations received, and many studies have established a correlation between patent citations and patent value. For instance, \\citet{trajtenberg1990penny} found that to understand the evolution of the social value generated by the CT scan industry, it was better to count the citations received by patents than to simply count patents.\n\\citet{albert1991direct} asked experts to rate the technical impact of patents in the area of silver halide technology, and found that highly cited patents received higher ratings. \\citet{harhoff1999citation}, \\citet{jaffe2000knowledge} and \\citet{harhoff2003citations}, using survey data, found that citations were correlated with the value reported by the inventors.\n\\citet{lanjouw2004patent} collected several indicators of patent quality and concluded that citations and the number of claims were the most important indicators of quality. \nRecently, \\citet{zhang2017entropy} proposed to weight $11$ indicators of patent value using the Shannon entropy, and selected forward citations as one of the most important indicators of technological value.\n\\citet{hall2005market} found that firm market value (Tobin's Q ratio) was correlated with the citation-weighted patent portfolio of the firms. 
\n\\cite{carpenter1981citation} and \\citet{fontana2013reassessing} compared patents associated with inventions that received a prize and patents from a control group, finding again evidence that ``important'' patents are more cited (the mean number of citations received was found to be about 50\\% higher for important patents).\n\nBut in spite of the repeated evidence of the positive relationship between citations received and different indicators of value or quality, it is often\nacknowledged that this relationship is very noisy \\citep{harhoff1999citation}, thus leaving open the possibility that more elaborate indicators could outperform simple citation count in predicting patent value. Here we address two basic (and well-known) problems of evaluation by simply counting citations: when evaluating a given patent's score, it fails to take into account the importance of the citing patents \\citep{narin1976evaluative}; and it fails to correct for the fact that young but potentially important patents did not have the time to accumulate a high number of citations.\n\nThe basic motivation for using citations received as an indicator of quality is that citations indicate some form of knowledge spillovers. As argued by \\citet{jaffe2000meaning}, citations reflect the fact either that a new technology builds on an existing one, or that the two serve a similar purpose. As a result, chains of citations allow us to trace the technological evolution, and hence patent centrality in the citation network can be used to score the patents. But not all measures of centrality are appropriate. For instance, in the case of patents, we want to value positively how many citations are received, but not necessarily how many citations are made. \n\nWhether the references made by a given patent can be used to infer the patent's importance is a delicate issue.\nIn principle, one could argue that a patent with many references has high potential, because it draws from many existing inventions. But an opposite argument could be made as well, because a patent with many references also makes it (legally) clear that its claims are somewhat limited by the claims of the cited patents -- in that sense, references indicate a limitation of novelty. It is not yet well understood which of these two arguments is the more appropriate, and the empirical evidence so far is inconclusive \\citep{jaffe2017patent}; here, we will consider that citations received are a weaker signal of importance when they come from patents that make a lot of references.\n\nBased on these considerations, Google's PageRank centrality \\citep{brin1998anatomy} is especially suited for identifying important patents for three reasons: (i) it takes into account how many citations are received by a patent, (ii) it takes into account how many citations are received by the citing patents, and (iii) it takes into account that citations from patents that have many references are less indicative of the cited patent's quality.\n\nWe are not the first to suggest that PageRank \\citep{lukach2007ranking,bedau2011evidence,shaffer2011entrepreneurial,dechezlepretre2014knowledge,bruck2016recognition} and similar eigenvector-based metrics \\citep{corredoira2015measuring} can be computed on patent citation networks to identify important patents. However, robust evidence that PageRank is more effective than citation count in identifying the key patents is still lacking. 
In addition, both citation counts and PageRank fail to take into account the dynamic, evolving nature of the citation network. Because the patent system grows with time, older patents tend to have more citations simply because they have been there for a longer time and, on top of that, the preferential attachment mechanism \\citep{valverde2007topology} further magnifies their advantage. This problem has long been acknowledged, and the usual solution is either to limit citation counts to a fixed citation ``time span'', such as the first five years after issuing (e.g., \\citet{lanjouw2004patent}), or to control for the grant year in regressions (e.g., \\citet{kogan2012technological}).\n\nHere, we propose an alternative approach, put forward recently by \\citet{mariani2016identification} in the context of scientific publications, which can be applied equally well to citation counts and other centrality metrics, and produces a single score without (or with dramatically reduced) age bias.\n\nOur work complements other efforts to identify important items using citation networks. For instance, \\citet{comins2017citation} report that Reference Publication Year Spectroscopy, a method that looks at the temporal distribution of cited references, is able to identify the biomedical research milestones listed by experts. In the patent literature, \\citet{castaldi2015related} proposed to identify ``superstar'' patents as those in the extreme right tail of the citation count distribution, where a power law behavior was observed. Another popular approach, main path analysis, was introduced in the bibliometric literature by \\citet{hummon1989connectivity}, and further developed and applied to patents by \\citet{verspagen2007mapping}. In the spirit of the betweenness centrality, it seeks to extract important nodes and edges based on how often geodesic paths pass through them, thus revealing continuity or disruption in technological trajectories. This aspect was exemplified by \\citet{martinelli2012emerging} for the telecommunication switching industry, and by \\citet{epicoco2013knowledge} for the green chemistry sector. \\citet{triulzi2017predicting} measured patent centrality using a normalized version of centrality metrics, and found that technological domains with central patents also tend to have faster technological improvement rates (a separately measured indicator of progress in technological performance). Finally, as a last example of this rich literature, \\citet{martinelli2014measuring} proposed to measure knowledge persistence by giving higher value to patents that are cited by patents that do not cite many patents -- an idea that we will use here too, as PageRank normalizes the received citations by the outdegree of the citing nodes.\n\nIn this work, we focus on comparing PageRank with citation counts, and age-rescaled metrics with non-rescaled metrics. This allows us to evaluate whether network-based metrics outperform raw citation counts, and to determine over which range of time the rescaling procedure allows us to better identify the significant patents. In addition, because our analysis closely follows the study of milestone physics papers by \\cite{mariani2016identification}, we are able to evaluate the similarities and differences between the scholarly and patent citation data. We find that patents take much longer than papers to receive citations, which makes it harder to identify important patents early on. 
\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.6]{assortativity-DCM}\n\\caption{Degree-degree correlations in the US patents' citation network. The gray circles represent the observed average neighbors' indegree for all the indegree values; the blue circles represent the same information in a histogram with bins of equal length on a logarithmic scale; the green squares represent the mean behavior observed within ten realizations of the dynamic configuration model (see Section \\ref{sec:dcm} for a description of the model); the shaded areas around the line connecting the green squares represent one standard deviation around the mean.\n}\n\\label{degree_correlations}\n\\end{figure*}\n\n\\section{Data}\n\n\\subsection{The U.S. patent citation network}\nWe analyzed the US patents dataset collected by \\cite{kogan2012technological}, which spans the period between 01-01-1926 and 11-02-2010. Compared to the well-known NBER patent data, this dataset has a vastly improved coverage.\nWe pre-processed the data to only keep the citations between patents that were issued within this temporal period, thereby removing the citations to patents issued before 01-01-1926.\nThe resulting citation network analyzed in this paper is composed of $N=6,237,625$ nodes (patents) and $E=45,962,301$ directed edges (citations) between them. \n\nIn this dataset, the in- and out-degree distributions of the US patent citation network are in agreement with previous findings \\citep{valverde2007topology,csardi2007modeling,silverberg2007size}: the two distributions are relatively broad and span more than three orders of magnitude.\n\nIn previous works, \\cite{mariani2016identification} found that PageRank-related metrics outperform citation-counting metrics in identifying significant nodes in a scientific paper citation network, whereas the same does not happen in a movie citation network \\citep{ren2017time}.\nAdditionally, \\cite{ren2017time} found remarkably different degree-degree correlations for the two networks: the papers' citation network is strongly assortative, whereas the movies' citation network is basically uncorrelated.\nThis observation led \\cite{ren2017time} to suggest that the relative performance of PageRank-related and citation-counting metrics may be related to the network correlation patterns: when the network is uncorrelated, PageRank and indegree bring basically the same information \\citep{fortunato2008approximating}; when there are significant structural correlations, PageRank brings additional information that may be valuable to improve ranking performance.\n\nFig.~\\ref{degree_correlations} shows that the US patent network exhibits strong degree-degree correlations\\footnote{The assortativity plot used here is arguably the simplest method to visualize network structural correlations, as it simply represents the average (in- or out-)degree of nodes' neighbors as a function of node (in- or out-)degree. Since node centrality is related to incoming connections, we focus here on the average indegree of citing and cited nodes as a function of node indegree.}: highly-cited patents tend to be cited by other highly-cited patents, and to cite other highly-cited patents. 
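\nThe quantity shown in Fig.~\\ref{degree_correlations} can be computed directly from the network's adjacency matrix. The following minimal sketch (assuming a SciPy sparse matrix with the convention $A_{ij}=1$ if patent $j$ cites patent $i$; an illustration only, not the exact implementation used here) computes the average indegree of the citing patents as a function of the cited patents' indegree:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import sparse\n\ndef mean_indegree_of_citing_patents(A):\n    # A[i, j] = 1 if patent j cites patent i (sparse CSR matrix).\n    indeg = np.asarray(A.sum(axis=1)).ravel()  # citations received\n    citing_sum = A.dot(indeg)                  # sum of citers' indegrees\n    cited = indeg > 0\n    mean_nn = np.zeros(A.shape[0])\n    mean_nn[cited] = citing_sum[cited] / indeg[cited]\n    # Average over all patents with the same indegree k.\n    return [(k, mean_nn[indeg == k].mean()) for k in np.unique(indeg[cited])]\n\\end{verbatim}\nAn increasing trend of this curve corresponds to the assortative pattern just described. 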
This assortative pattern cannot be explained by a null model that preserves the temporal evolution of node degree (\\citealp{ren2017time}; see Section \\ref{sec:dcm} for details), which suggests that it is a genuine network effect.\n\nIn agreement with similar findings for scientific papers \\citep{ren2017time,bornmann2010scientific}, Fig.~\\ref{degree_correlations}A suggests that high-impact patents are able to inspire other high-impact patents more than expected by chance, whereas low-impact patents tend to be cited by other low-impact patents; at the same time (Fig. \\ref{degree_correlations}B), high-impact patents rely on other high-impact patents more heavily than expected by chance.\nIn line with \\cite{ren2017time}, the strong degree-degree correlations in the patent citation network open the door to the possibility that metrics that take higher-order network effects into account outperform simple citation counts.\n\n\n\n\n\n\\subsection{Expert-selected historically significant patents}\n\\label{sec:strumsky}\nIn a recent work, \\citet{strumsky2015identifying} listed $175$ patents carefully selected ``on the basis of consultation with engineers, scientists, historians, textbooks, magazine articles, and internet searches''. The patents in the list ``all started technological pathways which affected society, individuals and the economy in a historically significant manner'' \\citep{strumsky2015identifying}. These significant patents thus provide a good ``ground-truth'' set of patents that can be used to discern the ability of different metrics to uncover the significant patents. The complete list of significant patents can be found in Appendix C of \\cite{strumsky2015identifying}; the list is quite heterogeneous and comprises patents ranging from simple objects \nthat are part of our everyday life (like the toothbrush and the post-it note)\nto more sophisticated inventions (like the Game Boy and the Desk Jet printer).\n\nPresence in the list of significant patents by Strumsky and Lobo is a binary variable: a patent is either in the list or not; we can therefore study the ability of the metrics to rank these outstanding patents as high as possible, in line with the main goals of this paper.\nWhile there are $175$ significant patents in the Strumsky-Lobo list, we restrict our analysis to those patents that were issued within our dataset's temporal span, and remove the design patents, which are absent from our dataset. 
This leaves us with $M_0=112$ significant patents.\n\n\n\\begin{table*}[t]\n\\centering\n\\begin{tabular}{lp{5cm}p{5cm}}\n\\toprule\n & Static & Age-rescaled \\\\\n\\midrule\nCitation-counting &\n\\textbf{Citation count}, $c$\\newline\nA patent is important if it is cited by many other patents &\n\\textbf{Rescaled citation count}, $R(c)$\\newline\nBuilt on citation count by requiring that patent score is not biased by node age\\\\[4pt]\nNetwork-based &\n\\textbf{PageRank score}, $p$\\newline\nA patent is important if it is cited by other important patents &\n\\textbf{Rescaled PageRank score}, $R(p)$\\newline\nBuilt on PageRank score by requiring that patent score is not biased by node age\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Metrics considered in this paper, together with their main assumptions.}\n\\label{table:metrics}\n\\end{table*}\n\n\\section{Methods}\nIn this section, we define the metrics used to identify important patents, and the indicators of performance that we use to evaluate them.\nMany network centrality metrics \\citep{lu2016vital,liao2017ranking} and bibliometric indicators \\citep{waltman2016review} have been devised in the literature. Here, we narrow our focus to four metrics (see Table \\ref{table:metrics} for a summary): citation count $c$, PageRank score $p$, (age-)rescaled citation count $R(c)$, and (age-)rescaled PageRank $R(p)$. Unlike citation count, PageRank score takes the whole network structure into account and weights citations differently according to the centrality of the citing nodes. Rescaled citation count and rescaled PageRank score are obtained from citation count and PageRank score, respectively, by explicitly requiring that node score is not biased by node age (see details below).\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=7.5cm]{bias}\n\\caption{Bias by node age of the rankings by the metrics. Patents are divided by their age into 40 equally-sized groups; the bias by patent age is represented by the number $n_{0.005}$ of patents from each age group in the top $f=0.5\\%$ of the overall patent ranking. The black horizontal line represents the expected unbiased value $n^{(0)}_{0.005}=0.005\\,N\/40$. Results for different (small) values of $f$ are qualitatively similar.}\n\\label{bias}\n\\end{figure}\n\n\n\\subsection{Static patent-level metrics}\n\\label{sec:1}\n\n\\paragraph{Citation count, c}\nThe citation count of a given patent is simply the total number of citations the patent has received so far. In terms of the patent citation network's adjacency matrix $\\mathsf{A}$ ($A_{ij}$ is equal to one if patent $j$ cites patent $i$, zero otherwise), the citation count $c_i$ of patent $i$ is defined as $c_i=\\sum_{j}A_{ij}$; $c_i$ is referred to as node $i$'s \\emph{indegree} in the language of network science \\citep{newman2010networks}. Ranking the patents by citation count assumes that \\emph{a patent is important if it is cited by many other patents}. \n\nThe ranking by citation count is strongly biased by node age in our dataset. To visualize and quantify this bias, we divide the $N$ patents into $40$ equally-sized groups based on their age.\nWe then count how many patents from each age group are in the top-$f$ fraction of the patent ranking by $c$. 
For an ideal unbiased ranking, we would expect, for each age group, $n_{f}^{(0)}=f\\,N\/40$ patents in the top-$f$ fraction, with small deviations.\nThe result is strikingly different for citation count (see Fig.~\\ref{bias}), which underestimates both the oldest and the most recent patents in the dataset. \nWhile the bias against recent patents is expected (they have had less time to accumulate citations), the bias against older patents is more surprising; it can be due to a variety of factors, such as a higher propensity to cite patents available electronically, a prevalence of patents in technological domains for which fewer citations tend to be made, or patent citation patterns changing with time -- for a discussion of these and other reasons for citation bias, see \\citet{jaffe2017patent}. \nTo counteract this bias, we use a simple normalization procedure, described in Section~\\ref{sec:rescaled}. \n\n\n\n\\paragraph{PageRank, p}\nGoogle's PageRank is a node ranking algorithm introduced by \\citet{brin1998anatomy} with the original goal of ranking websites in the World Wide Web. Since then, the algorithm has found applications in a broad range of real systems \\citep{gleich2015pagerank,liao2017ranking}. \nThe PageRank score $p_i$ of node $i$ is defined through the equation \\citep{berkhin2005survey}\n\\begin{equation}\np_i=\\alpha\\,\\sum_{j:k^{out}>0}\\frac{A_{ij}}{k^{out}_j}\\,p_j+\\alpha\\,\\sum_{j:k^{out}=0}\\frac{p_j}{N}+\\frac{1-\\alpha}{N}\n\\label{pr}\n\\end{equation}\nwhere $k^{out}_j=\\sum_{l}A_{lj}$ is the number of references made by patent $j$ ($k^{out}_i$ is referred to as node $i$'s \\emph{outdegree} in the language of network science) and the term $(1-\\alpha)\/N$ represents the so-called ``teleportation term'' \\citep{berkhin2005survey,gleich2015pagerank}.\nThe algorithm is built on the thesis that \\emph{a node is important if it is cited by other important nodes} \\citep{franceschet2011pagerank}: the score of a given patent $i$ depends linearly on the scores of the patents that cited patent $i$. We set $\\alpha=0.5$, which is the common choice in citation networks \\citep{chen2007finding,walker2007ranking,bruck2016recognition}.\n\nIn practice, the vector of PageRank scores can be obtained from Eq.~\\eqref{pr} by the power iteration method. Starting from a uniform score vector $p_i^{(0)}=1\/N\\,\\forall i$, we iteratively update the scores according to the equation \\citep{berkhin2005survey} \n\\begin{equation}\np_i^{(n+1)}=\\alpha\\,\\sum_{j:k^{out}>0}\\frac{A_{ij}}{k^{out}_j}\\,p_j^{(n)}+\\alpha\\,\\sum_{j:k^{out}=0}\\frac{p_j^{(n)}}{N}+\\frac{1-\\alpha}{N}.\n\\end{equation}\nNote that the previous equation is the master equation of a two-fold stochastic process on the network where at each step $n$, a random walker either performs a jump along the network edges (with probability $\\alpha$), or ``teleports'' to a randomly chosen node in the network (with probability $1-\\alpha$).\nThe PageRank vector of scores $\\mathbf{p}=\\{p_i\\}$ can therefore be interpreted as the stationary state of this Markov process.\nWe halt the iterations when \n\\begin{equation}\n\\sum_i \\big\\lvert p^{(n)}_{i}-p^{(n-1)}_{i}\\big\\rvert < \\epsilon,\n\\end{equation}\nwhere we set $\\epsilon=10^{-9}$. This procedure guarantees convergence after a number of iterations smaller than $\\log\\epsilon\/\\log\\alpha$, independently of $N$ \\citep{berkhin2005survey}.\n
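\nFor concreteness, the power iteration described above can be written in a few lines of code. The following sketch is a minimal illustration, assuming that the citation network is stored as a SciPy sparse matrix with the convention $A_{ij}=1$ if patent $j$ cites patent $i$; it is not the exact implementation used in our analysis.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import sparse\n\ndef pagerank(A, alpha=0.5, eps=1e-9):\n    # A[i, j] = 1 if patent j cites patent i (sparse CSR matrix).\n    N = A.shape[0]\n    k_out = np.asarray(A.sum(axis=0)).ravel()   # references made\n    dangling = k_out == 0\n    # M_ij = A_ij / k_out_j for patents with at least one reference.\n    M = A.multiply(np.where(dangling, 0.0, 1.0 / np.maximum(k_out, 1)))\n    M = M.tocsr()\n    p = np.full(N, 1.0 / N)\n    while True:\n        p_next = alpha * (M.dot(p) + p[dangling].sum() / N) + (1.0 - alpha) / N\n        if np.abs(p_next - p).sum() < eps:      # L1 halting criterion\n            return p_next\n        p = p_next\n\\end{verbatim}\n\nWhile PageRank's basic premise is plausible, the algorithm is \\emph{static}, whereas real networks evolve in time. 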
This causes the ranking by the algorithm to be severely biased by node age in growing networks \\citep{chen2007finding,mariani2015ranking,mariani2016identification,vaccario2017quantifying}.\nIn our dataset, too, the ranking by PageRank is strongly biased by node age (see Fig.~\\ref{bias}). \nPageRank's bias in the patent citation network has different features than its bias in the papers' citation network reported by \\cite{mariani2016identification}.\nWhile in both datasets recent nodes are strongly disadvantaged by the algorithm, the oldest patents are not the most overvalued by the PageRank algorithm, in contrast to what has been observed for papers \\citep{mariani2016identification}.\nThis is a direct consequence of the significantly smaller citation count of the oldest patents. The peak of $n_{0.005}(p)$ is shifted to the left with respect to the peak of $n_{0.005}(c)$, which means that PageRank nevertheless tends to favor older nodes with respect to citation count.\n\n\n\n\\subsection{Time-rescaled metrics R(p) and R(c)}\n\\label{sec:rescaled}\nThe strong age bias of the rankings by citation count and PageRank score implies that patents that appeared in some time periods are much more likely to rank well than other patents, independently of their properties such as novelty and significance. In bibliometrics \\citep{radicchi2008universality,waltman2016review} and patent analysis \\citep{triulzi2017predicting}, it is common to attempt to suppress this bias by age through various normalization procedures.\n\nHere, we apply the rescaling procedure proposed by \\citet{mariani2016identification} to citation count and PageRank.\nThe rescaling procedure consists of comparing the score $s_i$ of a given patent $i$ with the scores of the patents that belong to a reference set of patents of similar age\\footnote{A potential limitation of this approach is that by comparing each patent's score with only the scores of patents of similar age, it may underestimate the importance of patents that happened to be issued in periods during which many breakthrough inventions took place. However, despite the well-known theory of Kondratiev waves and innovation clustering, robust empirical evidence for the existence of such periods is weak and debated. For instance, \\cite{silverberg2003breaking} found no evidence for innovation clustering in a list of basic inventions, whereas \\citet{korotayev2011kondratieff} found evidence of Kondratiev cycles in the world-level patent output per inhabitant.} as patent $i$. \nBy labeling the patents in order of decreasing age\\footnote{We order by increasing ID those patents that are issued on the same day.}, the reference set is the set of $\\Delta$ patents $j$ such that $\\lvert i-j\\rvert < \\Delta\/2$.\\footnote{The temporal window is defined in a slightly different way for patents close to the beginning and the end of the dataset. 
For the $\\Delta\/2$ patents closest to the beginning (end) of the dataset, the temporal window is given by the $\\Delta$ oldest (most recent) patents in the dataset.} Constructing the set of comparable patents based on a continuously moving window centered on a focal patent is advantageous compared to grouping the patents by year, as the latter imposes a sharp distinction between patents granted very closely in time but on either side of the January 1st boundary.\n\nDenoting with $\\mu_i(s)$ and $\\sigma_i(s)$ the mean value and the standard deviation, respectively, of score $s$ over patent $i$'s reference set, the rescaled score $R_i(s)$ of patent $i$ is given by \n\\begin{equation}\nR_i(s)=\\frac{s_i-\\mu_i(s)}{\\sigma_i(s)}.\n\\end{equation}\nIn this work, we set $\\Delta=15,000$, yet our results are robust with respect to other choices of $\\Delta$ (not shown here).\n\nAs shown in Fig.~\\ref{bias}, the rescaled scores $R(c)$ and $R(p)$ are much less biased by node age than the original scores $c$ and $p$: $n_{0.005}(R(c))$ and $n_{0.005}(R(p))$ are remarkably stable across different age groups, and their value is always close to the expected unbiased value $n_{0.005}^{(0)}$. In agreement with \\cite{mariani2016identification,vaccario2017quantifying,liao2017ranking}, this shows that the proposed rescaling procedure is effective in suppressing the temporal biases of the static metrics. \nBy giving old and recent patents the same chance of appearing at the top of the ranking, we expect the rescaled metrics to bring a substantial advantage in identifying valuable patents shortly after issuing. As the rankings by static metrics are biased toward old patents, we also expect the rescaled metrics' advantage in identifying significant patents to shrink (and eventually vanish) as we consider older significant patents. \nThese hypotheses are validated in the next Section.\n\n\n\n\\subsection{Evaluation of the metrics' performance in identifying the significant patents}\n\n\n\nTo make quantitative statements on the ability of the metrics to single out the significant patents of different age, we introduce two evaluation metrics: the average ranking ratio and the identification rate.\n\n\\paragraph{Average ranking ratio}\nA straightforward way to assess the metrics' performance in identifying the significant patents would be to calculate the average ranking position of the significant patents $t$ years after they are issued. However, this simple measure is highly sensitive to the ranking position of the lowest-ranked items \\citep{mariani2016identification}. \n\nTo avoid this shortcoming, it is preferable to measure the \\emph{average ranking ratio} of the target items by the different metrics \\citep{mariani2016identification}, which is defined as follows. Denoting the rank of patent $i$ by metric $m$ as $r_i(m)$,\nthe ranking ratio of metric $m$ for patent $i$ is defined as $\\hat{r}_{i}(m)=r_{i}(m)\/\\min_{m'}\\{r_{i}(m')\\}$. The metric achieves the best-possible ranking ratio of one if it ranks a given significant patent best of all metrics; the lower the value, the better.\nThe \\emph{average ranking ratio} $\\braket{\\hat{r}}(m)$ of metric $m$ is the average of the ranking ratios $\\hat{r}_{i}(m)$ of all significant patents, and it quantifies how much the metric underperforms, on average, with respect to the best-performing metric. 
A metric that outperforms all the other metrics for all the target nodes achieves an average ranking ratio $\\braket{\\hat{r}}=1$; larger values of $\\braket{\\hat{r}}$ indicate a worse performance.\n\n\\paragraph{Identification rate} The identification rate $f_z(m)$ -- commonly referred to as \\emph{recall} in the information filtering community \\citep{lu2012recommender} -- of a given metric $m$ is defined as the fraction of significant patents that are ranked among the top $z\\,N$ patents by metric $m$. Hence, while the average ranking ratio takes all significant patents and their ranking into account, the identification rate focuses on the top items of each ranking. \n\n\n\\subsection{Evaluating the evolution of the metrics' performance with patent age}\n\\label{sec:age_evaluation}\nTo uncover the metrics' ability to identify the significant patents early, \nwe evaluate the metrics' average ranking ratio and identification rate as a function of patent age. In this way, we are able to untangle the role of patent age in determining the metrics' performance; for example, a metric that is biased toward old patents only performs well in detecting \\emph{old} important patents, whereas we expect good early-identification metrics to perform well in detecting \\emph{recent} important patents.\n\n\n\n\nTo untangle the role of patent age in determining the metrics' performance, we dissect the network evolution by computing the rankings by the metrics every six months. At each ranking computation time $t^{(c)}$, only the patents issued before $t^{(c)}$ are included in the analysis. For significant patent $i$ (issued at time $t_i$), we measure the age $\\Delta t=t^{(c)}-t_i$ of the significant patent at time $t^{(c)}$. \nThen, we determine its ranking ratio values $\\hat{r}_i(m;\\Delta t)$ for all considered metrics $m$.\nPatent $i$'s ranking ratio $\\hat{r}_i(m;\\Delta t)$ contributes to metric $m$'s average ranking ratio $\\braket{\\hat{r}}(m;\\Delta t)$ for $\\Delta t$-year-old patents.\nAfter analyzing the whole network history, we can thus determine the average ranking ratio $\\braket{\\hat{r}}(m;\\Delta t)$ of metric $m$ for $\\Delta t$-year-old patents as the average of $\\hat{r}_i(m;\\Delta t)$ over all the significant patents included in the analysis.\n\n\n\nIn the same way, we define the identification rate $f_z(m;\\Delta t)$ of metric $m$ for $\\Delta t$-year-old patents as the fraction of significant patents that were ranked among the top $z\\,N$ patents by metric $m$ when they were $\\Delta t$ years old. \n\n\\section{Results}\n\\label{section:results}\n\n\\subsection{Metrics' performance on the time-aggregate network}\n\\label{sec:aggregate}\nWe start by assessing the average ranking ratio (Fig.~\\ref{histo}A) and the identification rate (Fig. \\ref{histo}B) of the metrics on the whole dataset.\nThe results show a clear advantage of the network-based metrics, $p$ and $R(p)$, over the citation-counting metrics. According to the average ranking ratio (Fig.~\\ref{histo}A), rescaled PageRank is the best-performing metric, with a small margin over PageRank and a large margin over raw and rescaled citation count. Rescaled PageRank and PageRank also achieve the highest identification rates (Fig.~\\ref{histo}B). 
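\nBoth evaluation measures are straightforward to compute from the patents' ranking positions. The following minimal sketch (assuming that, for each metric, the ranks of the significant patents are stored in a NumPy array; shown for illustration only) summarizes the computation:\n\\begin{verbatim}\nimport numpy as np\n\ndef average_ranking_ratio(ranks):\n    # ranks: dict mapping each metric m to the array of ranking\n    # positions r_i(m) of the significant patents under metric m.\n    best = np.min(np.vstack(list(ranks.values())), axis=0)\n    return {m: np.mean(r / best) for m, r in ranks.items()}\n\ndef identification_rate(ranks, N, z=0.005):\n    # Fraction of significant patents ranked in the top z*N positions.\n    return {m: np.mean(r <= z * N) for m, r in ranks.items()}\n\\end{verbatim}\n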
\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=15cm]{histogram_all}\n\\caption{Performance of the metrics in identifying the significant patents from the list by Strumsky and Lobo, as measured by the metrics' average ranking ratio (panel A, the lower the better) and their identification rate (panel B, the higher the better), evaluated on the complete patent citation dataset.}\n\\label{histo}\n\\end{figure*}\n\nTo understand where the gaps between the metrics originate, we inspect the patents that give the largest contribution to $c$ and $R(c)$'s ranking ratio -- i.e., the patents that are ranked much better by $p$ and $R(p)$ than by $c$ and $R(c)$. We find a significant contribution coming from patent $4,237,224$ (``Process for producing biologically functional molecular chimeras'', $c=285$), which is ranked $2$nd by $R(p)$ ($\\hat{r}=1$), $3$rd by $p$ ($\\hat{r}=1.5$), $1079$th by $R(c)$ ($\\hat{r}=539.5$), and $1181$st by $c$ ($\\hat{r}=590.5$). \nImportantly, this patent gives the same contribution (equal to one) to all metrics' identification rate, as all the metrics rank it among the top-$0.5\\%$ patents. This example clearly shows that patents that are ranked at the top of the ranking by all metrics can have very different ranking ratio values.\nThe second largest contribution to $c$'s and $R(c)$'s average ranking ratio comes from patent $4,438,032$ (``Unique T-lymphocyte line and products derived therefrom'', $c=73$), which is ranked $253$rd by $p$ ($\\hat{r}=1$), $562$nd by $R(p)$ ($\\hat{r}=2.2$), $48,742$nd by $c$ ($\\hat{r}=192.7$), and $66,014$th by $R(c)$ ($\\hat{r}=260.9$). \nTo check that the advantage of network-based metrics was not entirely due to these two patents, we have excluded them from the analysis and recalculated the metrics' average ranking ratio. PageRank and rescaled PageRank remain the two best-performing metrics ($\\braket{\\hat{r}}(p)=4.2$, $\\braket{\\hat{r}}(R(p))=6.1$), yet their edge over the link-counting metrics ($\\braket{\\hat{r}}(c)=8.1$, $\\braket{\\hat{r}}(R(c))=10.0$) is significantly reduced.\n\n\n\n\\subsection{Age-rescaling matters most for young patents}\n\\label{dynamics}\nWhile the analysis of the previous Section reveals important differences among the metrics, the main goal of this manuscript is to reveal the dependence of the metrics' performance on patent age, and to assess the metrics' ability to identify the significant patents early. To this end, following the procedure described in Section \\ref{sec:age_evaluation}, we consider the ranking positions\\footnote{The ranking positions considered in this paper are always normalized by the size of the system at the time when the ranking is computed.} of the group of expert-selected significant patents by \\cite{strumsky2015identifying} one (Figs.~\\ref{rankings}A,D), five (Figs.~\\ref{rankings}B,E) and ten (Figs.~\\ref{rankings}C,F) years after issuing. Due to their lack of time bias, the rescaled metrics rank the significant patents much better than the corresponding static metrics one year after issuing (see Figs.~\\ref{rankings}A,D). On the other hand, the ranking positions by rescaled and static metrics are comparable ten years after issuing (see Figs.~\\ref{rankings}C,F).\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=15cm]{ranking_time_effect}\n\\caption{A comparison of the relative rankings (the lower, the better) of the significant patents by $c$ and $R(c)$ one (panel A), five (panel B) and ten (panel C) years after issuing. 
Only the patents that received at least one citation at a given age are included. The same comparison between $p$ and $R(p)$ is shown in panels D--F.}\n\\label{rankings}\n\\end{figure*}\n\nThe evolution of the ranking position of the significant patents as evaluated by $p$ and $R(p)$ is shown in Supplementary Movie M1; the same for $c$ and $R(c)$ is shown in Supplementary Movie M2. The moving dots in these movies represent the significant patents, and the displacements of the dots represent the change in the significant patents' ranking position as they get older\\footnote{We only represent the significant patents after they have received their first citation. This is the reason why, during the dynamics, some dots appear on the plane out of nowhere.}. \nMovies M1 and M2 show that shortly after issuing, all significant patents are ranked higher by rescaled PageRank than by PageRank and citation count, respectively, consistent with Figs.~\\ref{rankings}A,D. In Movie M1, which compares the rankings by $p$ and $R(p)$, as the significant patents get older, the magnitude of their displacements in the ranking plane diminishes, and they gradually drift toward the diagonal of the plane, which means that the gap between their ranking positions by $p$ and $R(p)$ shrinks. \nAfter ten years, most of the significant patents lie close to the diagonal, which indicates that the rankings of the significant patents by $p$ and $R(p)$ are comparable. \n\n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{age_performance-control20y.pdf}\n\\caption{Performance of the metrics in identifying the significant patents from the list by Strumsky and Lobo over a $20$-year time window after their issuing, as explained in the main text. \\emph{(A)} Average ranking ratio as a function of patent age. \\emph{(B)} Identification rate as a function of patent age.}\n\\label{short} \n\\end{figure*}\n\n\\subsection{Comparison of the four metrics' performance for different patent ages}\nThe above-discussed Fig.~\\ref{rankings} and Supplementary Movies M1--M2 show that the age of the significant patents has a large impact on the ability of the metrics to identify them.\nThe goal of this section is to quantify the magnitude and the duration of the advantage of rescaled metrics in identifying the significant patents early, and to compare the obtained results with known results for scientific papers \\citep{mariani2016identification}.\n\nTo quantify how well the different metrics recognize the significant patents shortly after their issuing, we focus on the $M_{20}=77$ patents that are at least $20$ years old at the end of the dataset. By performing the evaluation procedure described in Section \\ref{sec:age_evaluation}, we study how their average ranking ratio and identification rate depend on their age up to $20$ years after issuing. We thus focus on a fixed group of target patents, which allows us to gauge the impact of time on the metrics' performance\\footnote{Patents that are less than $10$ years old, for example, could not contribute to the age bins from $10$ to $20$ years after issuing. Were we to include them in the control group of significant patents as well, we would have ended up with a control group with a different composition for different age bins, which would have confounded the temporal effects that we focus on here.}.\n\n\n\\subsubsection{Average ranking ratio}\nIn qualitative agreement with Fig.~\\ref{rankings}, Fig.~\\ref{short}A shows striking differences between the metrics' performance. 
Shortly after issuing, the rescaled metrics achieve an average ranking ratio much lower than that of the non-rescaled metrics. For example, one year after issuing, PageRank's and rescaled PageRank's average ranking ratios are equal to $20.8$ and $1.6$, respectively, which indicates a performance advantage of one order of magnitude in favor of $R(p)$.\nThe gap between rescaled PageRank and PageRank (rescaled citation count and citation count) closes $12$ ($7$) years after issuing.\nThere is therefore a medium-term temporal window over which the rescaled metrics rank the significant patents remarkably better than the non-rescaled metrics.\n\nImportantly, once we have suppressed the age bias of $c$ and $p$, we are able to reveal the advantage of using (higher-order) network information to rank the significant patents instead of simply counting citations, which manifests itself in the performance advantage of $R(p)$ over $R(c)$.\n\n\n\\subsubsection{Identification rate}\nFig.~\\ref{short}B shows the metrics' identification rate $f_{0.005}(\\Delta t)$ as a function of patent age. This evaluation measure quantifies the fraction of significant patents ranked in the top $0.5\\%$ by the metrics when they were $\\Delta t$ years old.\nThe rescaled metrics outperform the non-rescaled metrics shortly after issuing; the gap between rescaled and non-rescaled metrics eventually closes:\n$p$'s performance reaches $R(p)$'s performance $12$ years after issuing, and $c$'s performance reaches $R(c)$'s performance $6$ years after issuing.\nThese two timescales are consistent with those observed for the average ranking ratio, and they define a temporal window over which the rescaled metrics achieve an improved identification of the significant patents.\n\n\n\\subsection{The role of the network structure}\n\\label{sec:dcm}\nIn this Section, we address the following question: to what extent can the improved performance of PageRank-related metrics be explained by citation count dynamics alone?\nIn other words, once we control for the effect of citation count dynamics and randomize the rest, can we reproduce the results in Fig.~\\ref{short}?\n\nTo address this question, we use the Dynamic Configuration Model (DCM) introduced by \\cite{ren2017time} to generate random networks that preserve the individual nodes' citation count time-series observed in the original network. Unlike the widely used configuration model \\citep{molloy1995critical}, the DCM preserves the original network's temporal linking patterns \\citep{ren2017time}.\nIn the DCM, the total system time span $T$ is divided into $L$ layers of equal duration $\\Delta t=T\/L$. The randomized networks are thus generated by rewiring the existing connections within each layer, while preserving each node's indegree and outdegree variation in that layer (see \\cite{ren2017time} for the details). The expected number of edges $E_{ij}(n)$ from node $j$ to node $i$ at layer $n$ is given by\n\\begin{equation}\nE_{ij}(n)=\\frac{\\Delta k_i^{in}(n)\\,\\Delta k_j^{out}(n)}{E(n)},\n\\end{equation}\nwhere $\\Delta k_i^{in}(n)$ ($\\Delta k_j^{out}(n)$) denotes the indegree (outdegree) increase of node $i$ ($j$) in layer $n$, and $E(n)$ denotes the total number of edges in layer $n$. 
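\nIn practice, each layer can be randomized by a simple stub-matching step: the targets of all the citations created within a layer are randomly permuted among those citations, which preserves every node's indegree and outdegree increase in the layer and yields the expected edge numbers $E_{ij}(n)$ given above. The following minimal sketch illustrates the core step (our illustration; self-citations and repeated edges occasionally produced by the permutation are ignored here):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(seed=0)\n\ndef rewire_layer(sources, targets):\n    # sources[e], targets[e]: citing and cited patent of edge e,\n    # restricted to the citations created within one layer n.\n    # Permuting the target stubs preserves each node's indegree\n    # and outdegree variation within the layer.\n    return sources, rng.permutation(targets)\n\\end{verbatim}\n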
In our work, we set $L=100$, which results in $\\Delta t = 310$ days.\n\nWe compare the metrics' performance in the thus-generated random networks with the performance observed in the real data.\nBy construction, the model preserves the indegree time-series of the original network; as a consequence, the performance of the citation count and rescaled citation count is the same as in the real data (Fig.~\\ref{performance-rand}).\nThe model allows us to assess whether the advantage of network-based metrics (Fig.~\\ref{performance-rand}) is a genuine network effect or if it can be explained by random fluctuations.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=18cm]{age_performance-control20y-dcm-av}\n\\caption{\\emph{(A)} Metrics' performance in identifying the significant patents in a random network generated with the Dynamic Configuration Model. The lines and the shaded areas around the lines represent the mean and the standard error, respectively, measured over $12$ realizations of the randomization process. \\emph{(B)} Difference between the performance observed in the real data and that observed in the randomized networks generated with the Dynamic Configuration Model.}\n\\label{performance-rand}\n\\end{figure*}\n\nIn the randomized networks, the network-based metrics have no advantage with respect to citation-counting metrics in identifying the significant patents (Fig.~\\ref{performance-rand}A). In fact, $R(p)$ falls slightly below $R(c)$ for almost every patent age. Fig.~\\ref{performance-rand}B shows that the difference between the performance of PageRank-related metrics in the real and in the randomized networks is significantly positive. We conclude that controlling for the individual nodes' citation count dynamics is not sufficient to explain our findings. Therefore, (higher-order) network structure plays a significant role in the advantage of network-based metrics with respect to citation-counting metrics in identifying the significant patents.\n\n\n\n\n\n\\subsection{Top patents}\n\\label{sec:top_patents}\nIn this section we inspect the top-ranked patents. For simplicity, we focus on the top-$15$ patents as ranked by PageRank (Table~\\ref{tab:pr}) and rescaled PageRank score (Table~\\ref{tab:rpr}).\n\nTable~\\ref{tab:pr} shows that even patents with relatively few citations can reach the top of the ranking by PageRank score, which confirms the idea that in citation networks, the PageRank algorithm can identify ``hidden gems'' \\citep{chen2007finding} that are underestimated by citation count. A paradigmatic example in this sense is patent $3813316$ (``Microorganisms having multiple compatible degradative energy-generating plasmids and preparation thereof''). The patent is ranked $6$th by PageRank despite having been cited only $16$ times. By inspecting the patent's neighborhood, it emerges that the reason for this is that the patent has been cited by patents with relatively large citation count and, additionally, small outdegree. For example, patent $3813316$ is the only patent cited by patent $4237224$ (``Process for producing biologically functional molecular chimeras'', $c=285$, included in the Strumsky-Lobo list of significant patents), which is ranked $3$rd by PageRank. The high-scoring patent $3813316$ itself cites only patent $3723248$ (``Method for producing ketoglucaric acid''), which is consequently ranked $38$th by PageRank despite having received only one citation. 
A small outdegree of the citing patents is crucial because it implies that a large portion of the citing patents' score will contribute to the cited patent's score in the score redistribution process defined by Eq.~(\\ref{pr}).\n\nTable~\\ref{tab:rpr} shows that the top-$15$ patents by rescaled PageRank span a wider temporal range (1934-2010) than the top-$15$ by PageRank (1942-1996), which is a direct consequence of the age-bias removal. \nOn the other hand, Table~\\ref{tab:rpr} also points out a potential limitation of the rescaling procedure. Among the $15$ top-ranked patents, four are indeed from $2010$ (the last year in the dataset) and received only one citation. This happens because only a few among the most recent patents received citations, which results in temporal windows with a large fraction of patents with zero citations. Within such a temporal window, a patent can achieve a large rescaled score thanks to a single citation. A possible solution to this issue is to only include the patents whose temporal windows contain a certain minimal number of incoming citations. However, we prefer to show the scores of all the patents in order to highlight the subtleties associated with the evaluation of very recent patents.\n\n\n\n\\section{A comparison of the APS papers' and the US patents' citation network dynamics}\n\\label{sec:comparison}\nSection~\\ref{section:results} validates the rescaled metrics as better indicators of significance of recent patents than the non-rescaled metrics. Yet, there is a remarkable difference between the behavior of the identification rate observed in our analysis of the US patent dataset (Fig.~\\ref{short}B) and that reported by \\cite{mariani2016identification} in their analysis of the American Physical Society (APS) paper citation network: \\cite{mariani2016identification} found that $R(p)$ ranks more than $30\\%$ of the Physical Review Letters milestone letters in the top $0.5\\%$ already one year after publication, whereas it only ranks $1\\%$ of the Strumsky-Lobo significant patents in the top $1\\%$ one year after issuing.\n\nThe qualitative difference between Fig.~\\ref{short}B and Fig.~3B in \\cite{mariani2016identification} for significant papers motivated us to explore the differences between the dynamics of (significant) patents and those of (significant) papers. To this end, we analyzed an extension of the dataset\\footnote{In particular, we analyzed the APS citation network from $1893$ to $2016$, which comprises $593,443$ papers and $7,031,030$ citations between them.} used by \\cite{mariani2016identification}, and compared the obtained results with those obtained for the US patent citation network. The results of our analysis are summarized in Table~\\ref{table:average}. The table shows that both the significant papers and the significant patents: (1) tend to be cited more than ordinary papers and ordinary patents, respectively, in the respective datasets; (2) tend to accrue citations faster than ordinary papers and ordinary patents, respectively.\nLike patents of high economic value \\citep{lee2017makes}, the Strumsky-Lobo significant patents tend to receive the first few citations more quickly than ordinary patents. 
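\nThe timescales $\\tau_3$ and $\\tau_5$ reported in Table~\\ref{table:average} below denote the average time needed by a patent (or paper) to receive its first three and five citations, respectively. They can be computed from citation timestamps as in the following sketch (our illustration, not code from the original analyses):\n\\begin{verbatim}\nimport numpy as np\n\ndef tau_k(issue_time, citation_times, k):\n    # Time from issuing to the k-th received citation; defined only\n    # for items that received at least k citations in total.\n    if len(citation_times) < k:\n        return None\n    return np.sort(np.asarray(citation_times))[k - 1] - issue_time\n\ndef mean_tau(items, k):\n    # items: list of (issue_time, citation_times) pairs.\n    taus = [tau_k(t0, ts, k) for t0, ts in items]\n    return np.mean([t for t in taus if t is not None])\n\\end{verbatim}\n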
\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{llrrr}\n\\toprule\nDataset & Group of nodes & Citations & $\\tau_3$ & $\\tau_5$ \\\\\n\\midrule\n \\textbf{US patents} & Significant patents & 105.6 & 9.6 y & 12.0 y \\\\\n & All patents & 7.4 & 24.9 y & 31.4 y \\\\\n\\midrule\n \\textbf{APS papers} & Significant papers & 457.0 & 1.0 y & 1.4 y \\\\\n & All papers & 11.8 & 3.6 y & 4.8 y \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{A comparison between the average properties of all nodes and the average properties of the significant nodes in the APS paper and US patent citation network. The significant nodes are the milestone letters and the Strumsky-Lobo significant patents in the two datasets, respectively.}\n\\label{table:average}\n\\end{table}\n\n\n\nHowever, there is a striking difference between the dynamics of the two datasets: the APS papers tend to accrue citations much more quickly than US patents. For example, among the items that received at least three citations in total, the average time needed to receive the first three citations is much smaller for papers ($3.6$ years) than for patents ($24.9$ years). The same is true if we restrict the analysis to the significant papers ($1.0$ years) and patents ($9.6$ years), respectively.\nThese results indicate that the smaller identification rate for patents shortly after issuing is partly a manifestation of the slower citation dynamics of patents with respect to the citation dynamics of papers.\\footnote{\nWe emphasize that the APS papers dataset contains only citations among the APS papers. By considering the citations to APS papers from non-APS papers, $\\tau_3$ and $\\tau_5$ would further decrease, and the difference from the patent dataset (which, in contrast to the APS dataset, is comparatively complete) would be further magnified.}\n\n\n\n\\section{Conclusions}\n\nOur paper has two main messages.\n\nFirst, we find that using the whole network topology instead of only counting citations brings a substantial advantage in identifying the significant patents. Both the observed degree-degree correlations (Fig.~\\ref{degree_correlations}) and the performance edge of PageRank-related metrics over citation-counting metrics (Fig.~\\ref{short}) suggest that important patents build on other important patents. 
\nThis supports the hypothesis that high-impact patents ``stand on the shoulders of giants'', in a similar way to scientific papers \\citep{bornmann2010scientific}, although the high prevalence of examiner-added citations in patents makes the analogy imperfect.\n\nSecond, we show that removing the time bias of static centrality metrics allows one to identify significant patents much earlier than is possible with conventional static metrics. \nThe rescaling procedure that we use to remove the time bias is efficient and thus applicable even to large-scale datasets \\citep{vaccario2017quantifying}.\n\nThere are some limitations to our work that deserve to be discussed. \nFirst of all, we have pointed out that the early identification of significant patents is more difficult than that of significant papers, because patents take more time to accumulate citations (Section~\\ref{sec:comparison}). Second, the time-rescaled metrics are based on the assumption that a good ranking of the patents should give the patents from different age periods the same chance of reaching the top of the ranking. While this assumption is customary in paper citation analysis \\citep{waltman2016review}, it creates a bias against patents that appear in periods of intensive breakthrough inventive activity, if such periods exist.\nThird, the rescaled metrics evaluate the most recent patents on the basis of citations received in a relatively short time period. While this may be justified by the finding that patents in rapidly growing domains are highly cited shortly after issuing \\citep{benson2015quantitative}, it potentially misses ``sleeping-beauty'' \\citep{ke2015defining} patents that receive a substantial number of citations only many years after issuing.\n\nWe see three major directions for extending this research.\nThe most obvious is to acknowledge that there are different citation practices across technological fields, just as different scientific fields exhibit different citation patterns \\citep{waltman2016review}. Based on the results by \\citet{vaccario2017quantifying}, we know that the rescaling procedure can in principle be extended to suppress the bias by technological field as well. However, while it is natural to suppress paper-level metrics' biases by scientific field due to their use in research evaluation, it remains unclear whether a similar approach would be the most effective strategy to rank patents. Besides, using patent classification information is problematic when the goal is to rigorously test predictive ability, because the classification system changes often and many patents are reclassified \\citep{lafond2017long}. Second, while there exist theoretical explanations for how the broad citation count distribution and the bias of citation-based metrics by node age emerge as a result of the dynamics of the system \\citep{valverde2007topology,newman2009first,medo2011temporal,mariani2015ranking}, a model-based explanation of the strong degree-degree correlations and the improved PageRank performance observed in our dataset is still lacking. 
\nThird, while we studied PageRank as a paradigmatic network-based metric because of its plausible assumption (``a node is important if it is cited by other important nodes''), other network-based metrics \\citep{liao2017ranking} can be analyzed in a similar way to improve our understanding of which metrics best identify important patents.\n\n\n\\section*{Acknowledgments}\nMSM and MM acknowledge financial support from the Swiss National Science Foundation Grant No. 200020-156188. FL acknowledges financial support from Partners for a New Economy and the Institute for New Economic Thinking.\n\n\n\\bibliographystyle{spbasic} \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction} \\label{sec:introduction}\nParton distribution functions (PDFs) determined from global analysis~\\cite{Martin:2009iq,Martin:2007bv,Lai:2010vv,Nadolsky:2008zw,Ball:2010de} of fixed-target and collider data, mainly from deep-inelastic scattering (DIS), are an essential input to theory calculations for hard processes at hadron colliders such as the Tevatron and LHC. In addition to the fitted input PDFs, several other parameters enter into the global fits, which affect both the PDFs obtained and predictions for hadronic processes. One important example is the value of the strong coupling $\\alpha_S$ and its uncertainty~\\cite{Martin:2009bu,Lai:2010nw,Demartin:2010er}. Others, which are the focus of the present paper, are the values of the heavy-quark masses and the scheme used to calculate heavy-quark contributions to observables. In particular, while the precise values taken for $m_c$ and $m_b$ may not be crucial for the processes included in the global PDF analyses, calculations of processes with explicit $c$- and $b$-quarks might be more sensitive to these values, and it is therefore desirable to have PDFs available which have been consistently fitted and evolved with corresponding values of $m_c$ and $m_b$.\n\nWe will study two topics, both of which concern the treatment of the heavy charm and bottom quarks in global parton analyses. The first topic, which is the subject of Section~\\ref{sec:massdep}, concerns the sensitivity of the MSTW~2008 global parton analysis~\\cite{Martin:2009iq} to the values of the heavy-quark masses $m_h$, with $h=c,b$. In Ref.~\\cite{Martin:2009iq} these masses were taken to be $m_c=1.40$~GeV and $m_b=4.75$~GeV. Here we perform global fits for a range of $m_c$ and $m_b$ about these values, with $\\alpha_S(M_Z^2)$ allowed to be a free parameter. In this way, we determine the values of $m_c$ and $m_b$ preferred by the data, and the correlations between these values of $m_h$ and $\\alpha_S(M_Z^2)$. Due to a significant correlation between $m_c$ and $\\alpha_S(M_Z^2)$, we also perform fits with varying $m_c$ but with $\\alpha_S(M_Z^2)$ held fixed. We study how the cross sections for $W$, $Z$ and Higgs boson production at the Tevatron and the LHC depend on the choice of $m_c$ and $m_b$, and we provide a recommendation on how to include the uncertainty arising from $m_h$ in a general cross-section calculation.\n\nThe second topic, described in Section~\\ref{sec:FFNS}, is the generation of sets of 3- and 4-flavour PDFs corresponding to the MSTW~2008~\\cite{Martin:2009iq} 5-flavour sets of PDFs. We follow the same procedure previously used to generate the 3- and 4-flavour versions~\\cite{Martin:2006qz} of the 5-flavour MRST 2004 PDFs~\\cite{Martin:2004ir}, i.e.~the input PDFs (and $\\alpha_S$) at $Q_0^2=1$~GeV$^2$ determined from fits in the 5-flavour scheme are used as the initial condition for 3- or 4-flavour evolution. However, going beyond Ref.~\\cite{Martin:2006qz}, we also provide 3- and 4-flavour \\emph{eigenvector} PDF sets to enable calculation of PDF uncertainties, and we provide 3- and 4-flavour PDF sets for a wide range of $m_c$ and $m_b$ values, respectively. As an example of the use of the central 4-flavour PDFs for the default quark masses, we compare the total cross sections for $Z$ production at the Tevatron and LHC in the 4- and 5-flavour schemes. 
First, however, we give a short r\\'{e}sum\\'{e} of the alternative ``schemes'' to treat heavy flavours\\footnote{Strictly speaking, it would be better to say alternative ``techniques'', since the use of the word ``scheme'' is usually reserved for an alternative choice in the ordering of the perturbative expansion, or a particular separation of contributions between the coefficient functions and parton densities---an ambiguity inherent in QCD.}, and in particular, an explanation of what precisely we mean by 3-, 4-, and 5-flavour PDFs.\n\n\n\n\\section{Schemes for the treatment of heavy flavours}\nIt is appropriate to briefly recall the various schemes for the treatment of heavy flavours in global parton analyses. In PDF analyses it is common to start evolving upwards from $Q^2=Q^2_0\\sim 1$~GeV$^2$ with the distributions of the three light quarks $(u,d,s)$, assuming that they are massless. As we evolve upwards, we have the choice to turn on the heavy-quark $(c,b,t)$ distributions as we pass through their respective transition points, for which a convenient choice is $\\mu_F^2=m_h^2$. As we pass through a transition point, the number of active quarks is increased from $n$ to $n+1$, and the parton densities are related to each other perturbatively, i.e.\n\\begin{equation}\n f^{n+1}_j(\\mu_F^2)=\\sum_k A_{jk}(\\mu_F^2\/m_h^2)\\otimes f^{n}_k(\\mu_F^2),\n \\label{eq:transition}\n\\end{equation}\n where the perturbative matrix elements $A_{jk}(\\mu_F^2\/m_h^2)$ contain $\\ln(\\mu_F^2\/m_h^2)$ terms which are known to ${\\cal O} (\\alpha_S^2)$~\\cite{Buza:1996wv} and ${\\cal O} (\\alpha_S^3)$~\\cite{Bierenbaum:2009mv}. The ``$x$'' arguments have been suppressed in Eq.~(\\ref{eq:transition}), and the symbol $\\otimes$ is shorthand for the convolution\n\\begin{equation}\n f \\otimes g \\equiv \\int^1_x \\frac{{\\rm d}x'}{x'}f(x')g(x\/x').\n\\end{equation}\nEq.~(\\ref{eq:transition}) relates the $f^{n}_i(\\mu_F^2)$ set of partons to the $f^{n+1}_i(\\mu_F^2)$ set, guaranteeing the correct evolution for both the $n$ and $n+1$ regimes. We make the simplest choice, $\\mu_F^2=Q^2$, for the factorisation scale.\n\nHence, for each heavy quark we have to decide whether to treat it only as a final-state particle, or also as a parton within the proton. We may choose to keep just the three light flavours as parton distributions. We will call this the 3-flavour scheme (3FS), though it is often referred to as the fixed flavour number scheme (FFNS). Alternatively, we may include the $c$-quark in the evolution above $Q^2=m_c^2$ and generate 4-flavour PDFs in a 4-flavour scheme (4FS). Actually, in the global MRST\/MSTW parton analyses we also include the $b$-quark distribution in the evolution above $Q^2=m_b^2$, but \\emph{not} the $t$-quark above $Q^2=m_t^2$, so we generate 5-flavour sets of parton distributions in a 5-flavour scheme (5FS). So to be precise, in our definition of $n_f$-flavour parton sets, $n_f$ refers to the {\\it maximum} number of quark flavours in the evolution.\n\nIn each $n_f$-flavour scheme ($n_f$FS) the structure functions are given by the usual convolution of coefficient functions and parton distributions:\n\\begin{equation}\n F(x,Q^2)=\\sum_{j} C^{n_f\\rm FS}_j(Q^2\/m_h^2)\\otimes f^n_j(Q^2),\n \\label{general}\n\\end{equation}\nwhere the sum $j$ is over the gluon and the (variable, depending on $Q^2$) number of active quark flavours, $n\\le n_f$. There is freedom in how to choose $n_f$ and how to define the coefficient functions. One simple choice is to fix $n=n_f=3$.
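\n\nBefore proceeding, we make the convolution notation concrete. The following minimal Python sketch is ours and purely illustrative: the logarithmic grid, the trapezoidal quadrature and the toy integrands are assumptions, not the procedure used in any of the fits discussed here.\n\\begin{verbatim}\n# minimal sketch: (f otimes g)(x) = int_x^1 dz\/z f(z) g(x\/z)\nimport numpy as np\n\ndef convolve(f, g, x, n=2000):\n    z = np.exp(np.linspace(np.log(x), 0.0, n))  # log-spaced nodes in [x, 1]\n    y = f(z) * g(x \/ z) \/ z                     # integrand of the convolution\n    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(z))  # trapezoidal rule\n\nf = lambda z: z * (1.0 - z)     # toy integrands, not real PDFs\ng = lambda z: (1.0 - z)**3\nprint(convolve(f, g, x=0.01))\n\\end{verbatim}\nProduction evolution codes instead use interpolation grids and higher-order quadrature; the sketch merely fixes the notation used below.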
With the choice $n=n_f=3$, all the $m_h$ dependence for the heavy flavours occurs in the coefficient functions, and these are called the FFNS coefficient functions. The structure functions may be written in the form\n\\begin{equation}\n F(x,Q^2)=\\sum_{j=u,d,s,g} C^{\\rm FF,3}_j(Q^2\/m_h^2)\\otimes f^3_j(Q^2).\n \\label{ffns}\n\\end{equation}\nHowever, the sum of the $\\alpha_S^k\\ln^m(Q^2\/m_h^2)$ terms, with $m\\leq k$, is not included in the perturbative expansion. Thus the accuracy of the expansion becomes increasingly unreliable as $Q^2$ increases above $m_h^2$. In addition, there is the problem that the full mass dependence of the coefficient functions is known up to NLO~\\cite{Laenen:1992zk}, but is not fully calculated at NNLO, i.e.~the $\\alpha_S^3$ coefficient, $C^{{\\rm FF},3,(3)}_{2, hg}$, for $F_2$ is not completely known, see~Ref.~\\cite{Bierenbaum:2009mv}. (Here, the outer subscript ``$hg$'' denotes the $g\\to h$ contribution to the heavy-flavour structure function $F_2^h$.)\n\nAs an aside, we note that one could treat the charm quark as light and the bottom quark as heavy. The structure functions and cross sections could then be expressed in terms of four light quarks, with all the mass dependence of the bottom quark contained in the coefficient functions. This is sometimes called the 4-flavour FFNS. However, if one needs to use scales $Q^2<m^2_b$ and in the vicinity of $m_c^2$, as for example in the global PDF analysis, then the charm-quark mass dependence, and the charm transition point (if $Q^2 \\leq m_c^2$), should be included. Thus, this 4-flavour FFNS is only applicable in a restricted range of $Q^2$, and it will not be considered further here.\n\nThe alternative, and better, approach is to use the 4-flavour (and 5-flavour) PDFs of the variable flavour number schemes (4FS, 5FS). The simplest variable flavour number evolution procedure is to evolve treating some (or all) of the heavy quarks as massless, but to turn on the distributions at the appropriate transition points, $Q^2=m_h^2$. That is, one assumes that these heavy-quark distributions evolve according to the splitting functions for massless quarks. Thus, the resummation of the large logarithms in $Q^2\/m_h^2$ is achieved by the introduction of heavy-flavour parton distributions and the solution of the evolution equations. The procedure is motivated by the observation that at high scales the massive quarks behave like massless partons, and the coefficient functions are simply those in the massless limit, e.g.~for structure functions\n\\begin{equation}\n F(x,Q^2) =\\sum_j C^{n_f{\\rm ZMVF}}_j\\otimes f^{n}_j(Q^2),\n \\label{ZMVFNS}\n\\end{equation}\nwhere, as in Eq.~\\eqref{general}, the sum $j$ is over the gluon and the (variable) number $n\\le n_f$ of quarks that are active during the evolution. This is the so-called zero-mass variable flavour number scheme (ZM-VFNS). However, the ZM-VFNS has the failing that it simply ignores ${\\cal O}(m_h^2\/Q^2)$ corrections to the coefficient functions, and hence it is inaccurate in the region where $Q^2$ is not so much greater than $m_h^2$. Thus, strictly speaking, the ZM-VFNS does not define a scheme, but rather an approximation.\n\nSo we have two approaches where the treatment of heavy flavours is relatively simple: the 3-flavour scheme (or FFNS), appropriate to the region $Q^2 \\lesssim m_c^2$, and the ZM-VFNS, appropriate to the region $Q^2\\gg m_h^2$.
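\n\nTo get a rough feel for the size of the neglected terms, take $m_c=1.4$~GeV as an example (the estimate is ours, purely for orientation):\n\\begin{equation}\n \\frac{m_c^2}{Q^2}~\\simeq~\\frac{1.96~{\\rm GeV}^2}{10~{\\rm GeV}^2}~\\approx~0.2, \\qquad \\frac{m_c^2}{Q^2}~\\simeq~\\frac{1.96~{\\rm GeV}^2}{10^4~{\\rm GeV}^2}~\\approx~2\\times 10^{-4},\n\\end{equation}\ni.e.~the mass corrections ignored by the ZM-VFNS are a $\\sim 20\\%$ effect at $Q^2=10$~GeV$^2$, but negligible at typical electroweak scales.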
Clearly, for precision parton analyses, we must use a so-called general-mass variable flavour number scheme (GM-VFNS) which smoothly connects these two well-defined regions, so as to reduce to the (3-flavour) FFNS in the low $Q^2$ limit and to the ZM-VFNS in the high $Q^2$ limit (up to possible terms of higher order in $\\alpha_S$). In particular, in Eq.~(\\ref{general}) the value of $n$ increases by one each time we reach $Q^2=m_h^2$ in the evolution, and the coefficient functions $C^{n\\rm GMVF}_j(Q^2\/m_h^2)$ interpolate smoothly from $C^{\\rm FF,3}_j(Q^2\/m_h^2)$ to $C^{n_f\\rm ZMVF}_j$, for the maximum value of $n=n_f$. There is some freedom in how one does this, and in the MSTW~2008 analysis~\\cite{Martin:2009iq} we use the definition of the GM-VFNS as described in Ref.~\\cite{Thorne:2006qt}.\\footnote{At NNLO this involves some modelling of the ${\\cal O}(\\alpha_S^3)$ FFNS coefficient functions which are not fully calculated. In the GM-VFNS we only need these in the low-$Q^2$ regime and we approximate them using the known results of the small-$x$~\\cite{Catani:1990eg} and threshold limits~\\cite{Laenen:1998kp} (see Refs.~\\cite{Alekhin:2008hc,Presti:2010pd} for recent refinements).} We also note that it is actually a 5-flavour GM-VFNS that is used, since we do not include the top quark in the evolution. Note that this means that processes such as associated production of Higgs bosons with top quarks should be calculated using the 5-flavour PDFs with the full $2\\to 3$ subprocess matrix elements, i.e.~$gg, q\\bar q \\to Ht\\bar t$. In a 6FS, one would calculate this via $t\\bar{t}\\to H$, introducing a top-quark PDF. Although in principle this method would include resummed higher-order corrections $\\sim [\\alpha_S \\ln(M_H^2\/m_t^2)]^n$, we would expect that for all practical purposes\\footnote{Note that for $M_H\\lesssim 1$~TeV, $\\alpha_S\\ln(M_H^2\/m_t^2)\\lesssim 0.35$.} at the LHC, the 5FS approach, $gg, q\\bar q \\to Ht\\bar t$, supplemented by NLO corrections, would be adequate, see also Section~\\ref{sec:Ztot4FS}.\n\nAs just noted, the use of a GM-VFNS is nowadays essential for the determination of a set of precision PDFs from the data.\\footnote{In fact we only use the GM-VFNS for structure functions in DIS, whereas hadron collider cross sections are calculated in the ZM-VFNS under the assumption that mass effects are negligible at the typically large scales. (Low-mass fixed-target Drell--Yan production is also calculated in the ZM-VFNS, but here heavy flavours contribute $\\ll 1\\%$ of the total, so inaccuracies induced by using the ZM-VFNS are in practice irrelevant.)} However, there are processes for which the full mass dependence of the coefficient functions for $c$- and $b$-quark production is only known in the case where the heavy quark is treated as a final-state particle, and not as a parton in the proton. For example, FFNS parton distributions are needed for use with the \\textsc{hvqdis}~\\cite{Harris:1995tu,Harris:1997zq} and \\textsc{mc@nlo}~\\cite{Frixione:2003ei} programs. Thus, in this paper we make available 3- and 4-flavour PDFs, and we also give an example of their use. This is the subject of Section~\\ref{sec:FFNS}.\n\n\n\n\\section{Dependence of PDFs on the heavy-quark masses~\\label{sec:massdep}}\nIn the MSTW~2008 global parton analysis~\\cite{Martin:2009iq} we used $m_c=1.40$~GeV and $m_b=4.75$~GeV, where these are the pole mass values. Some limited justification for choosing these values was briefly given at the end of Section~3 of Ref.~\\cite{Martin:2009iq}.
To summarise, there was little data constraint on $m_b$, so it was simply fixed at a value of $4.75$~GeV, close to the calculated $\\overline{\\rm MS}$ mass transformed to the pole mass. The fixed value of $m_c=1.40$~GeV was close to the best-fit value at NLO if treated as a free parameter, but a little higher than the best-fit value at NNLO, and lower than the calculated $\\overline{\\rm MS}$ mass transformed to the pole mass. We discuss limitations in constraints on the pole masses by transforming from the calculated $\\overline{\\rm MS}$ masses below in Section~\\ref{sec:polemasses}.\n\nHere, using {\\it exactly} the same data sets, we repeat the global analysis for different fixed values of $m_c$ and $m_b$, but with $\\alpha_S(M_Z^2)$ left as a free parameter. The MSTW 2008 global fit~\\cite{Martin:2009iq} used a wide variety of data from both fixed-target experiments and the HERA $ep$ and Tevatron $p\\bar{p}$ colliders. Neutral-current structure functions ($F_2$ and $F_L$) were included from fixed-target lepton--nucleon scattering experiments (BCDMS~\\cite{Benvenuti:1989rh,Benvenuti:1989fm}, NMC~\\cite{Arneodo:1996qe,Arneodo:1996kd}, E665~\\cite{Adams:1996gu} and SLAC~\\cite{Whitlow:1991uw,Whitlow:1990dr,Whitlow:1990gk}), low-mass Drell--Yan cross sections from the E866\/NuSea experiment~\\cite{Webb:2003bj,Towell:2001nh}, and charged-current structure functions ($F_2$ and $xF_3$) and dimuon cross sections from neutrino--nucleon scattering experiments (CCFR\/NuTeV~\\cite{Goncharov:2001qe,Tzanov:2005kr} and CHORUS~\\cite{Onengut:2005kv}). From the HERA experiments, H1 and ZEUS, data were included on neutral- and charged-current reduced cross sections ($\\sigma_r^{\\rm NC}$ and $\\sigma_r^{\\rm CC}$)~\\cite{Lobodzinska:2003yd,Adloff:2000qk,Adloff:2000qj,Adloff:2003uh,Breitweg:1998dz,Chekanov:2001qu,Chekanov:2002ej,Chekanov:2003yv,Chekanov:2003vw}, the charm structure function ($F_2^c$)~\\cite{Adloff:1996xq,Adloff:2001zj,Aktas:2005iw,Aktas:2004az,Breitweg:1999ad,Chekanov:2003rb,Chekanov:2007ch}, and inclusive jet production in DIS~\\cite{Aktas:2007pb,Chekanov:2002be,Chekanov:2006xr}. From the Tevatron experiments, CDF and D{\\O}, Run II data were included on inclusive jet production~\\cite{Abazov:2008hu,Abulencia:2007ez}, the lepton charge asymmetry from $W$ decays~\\cite{Abazov:2007pm,Acosta:2005ud} and the $Z$ rapidity distribution~\\cite{Abazov:2007jy,Han:2008}. A more detailed description of the treatment of each of these data sets can be found in Ref.~\\cite{Martin:2009iq}. Note that more precise H1 and ZEUS data on $F_2^c$ (and the beauty structure function, $F_2^b$, not included originally) are now available, but we stick to fitting exactly the same data sets as in the MSTW 2008 analysis.\n\n\n\\subsection{Dependence on charm-quark mass $m_c$}\n\\begin{table}\n \\centering\n{\\footnotesize\n \\begin{minipage}{0.5\\textwidth}\n (a) NLO:\\\\ \\\\\n \\begin{tabular}{l|r|r|r}\n \\hline\\hline\n $m_c$ (GeV) & $\\chi^2_{\\rm global}$ & $\\chi^2_{F_2^c}$ & $\\alpha_S(M_Z^2)$ \\\\\n & (2699 pts.) & (83 pts.) 
& \\\\\n \\hline\n & & & \\\\\n 1.1 & 2729 & 263 & 0.1182 \\\\\n 1.2 & 2625 & 188 & 0.1188 \\\\\n 1.3 & 2563 & 134 & 0.1195 \\\\\n {\\bf 1.4} & 2543 & 107 & 0.1202 \\\\\n 1.45 & 2541 & 100 & 0.1205 \\\\\n 1.5 & 2545 & 97 & 0.1209 \\\\\n 1.6 & 2574 & 104 & 0.1216 \\\\\n 1.7 & 2627 & 128 & 0.1223 \\\\\n \\hline\\hline\n \\end{tabular}\n \\end{minipage}%\n \\hfill\n \\begin{minipage}{0.5\\textwidth}\n (b) NNLO:\\\\ \\\\\n \\begin{tabular}{l|r|r|r}\n \\hline\\hline\n $m_c$ (GeV) & $\\chi^2_{\\rm global}$ & $\\chi^2_{F_2^c}$ & $\\alpha_S(M_Z^2)$ \\\\\n & (2615 pts.) & (83 pts.) & \\\\\n \\hline\n & & & \\\\\n 1.1 & 2498 & 113 & 0.1159 \\\\\n 1.2 & 2463 & 88 & 0.1162 \\\\\n 1.26 & 2456 & 82 & 0.1165 \\\\\n 1.3 & 2458 & 82 & 0.1166 \\\\\n {\\bf 1.4} & 2480 & 95 & 0.1171 \\\\\n 1.5 & 2528 & 126 & 0.1175 \\\\\n 1.6 & 2589 & 167 & 0.1180 \\\\\n 1.7 & 2666 & 217 & 0.1184 \\\\\n \\hline\\hline\n \\end{tabular}\n \\end{minipage}\n}\n \\caption{\\sf Fit quality and $\\alpha_S(M_Z^2)$ for different $m_c$ values at (a) NLO and (b) NNLO.}\n \\label{tab:mc}\n\\end{table}\nThe sensitivity to the charm mass of the data in the global PDF analysis, and the subset of $F_2^c$ data, in terms of the goodness-of-fit measure $\\chi^2$, is shown in Table~\\ref{tab:mc} at NLO and NNLO. The global best-fit values of $m_c$, found by varying $m_c$ in steps of 0.01 GeV, are $m_c=1.45$ GeV at NLO and $m_c=1.26$ GeV at NNLO. Recall that in Section~3 of Ref.~\\cite{Martin:2009iq} we instead quoted best-fit values of $m_c=1.39$~GeV at NLO and $m_c=1.27$~GeV at NNLO. In fact, the more detailed analysis performed here, by scanning $m_c$ in steps of 0.01 GeV, revealed a rather flat minimum of the global $\\chi^2$ versus $m_c$, with a slight double-minimum structure at NLO (the two minima being at 1.39~GeV and 1.45~GeV). Hence the best-fit values we quote here, of $m_c=1.45$~GeV at NLO and $m_c=1.26$~GeV at NNLO, differ slightly from the best-fit values given in Ref.~\\cite{Martin:2009iq}. Note from Table~\\ref{tab:mc}(a) that there is a clear correlation between $m_c$ and the value of $\\alpha_S(M_Z^2)$ obtained from the NLO fit, and that from Table~\\ref{tab:mc}(b) this correlation is reduced at NNLO.\n\n\\begin{figure}\n \\centering\n \\begin{minipage}{0.8\\textwidth}\n (a)\\\\\n \\includegraphics[width=\\textwidth]{nloglobalchisqMCharm.eps}\n \\end{minipage}\n \\begin{minipage}{0.8\\textwidth}\n (b)\\\\\n \\includegraphics[width=\\textwidth]{nnloglobalchisqMCharm.eps}\n \\end{minipage}\n \\caption{\\sf Dependence on $m_c$ of the global $\\chi^2$, relative to the value at the best-fit $m_c$, at (a)~NLO and (b)~NNLO, fitting to the same data sets used in the MSTW~2008 global analysis~\\cite{Martin:2009iq}. The values quoted for $m_c$ correspond to treating $\\alpha_S(M_Z^2)$ as a free parameter, and the corresponding global $\\chi^2$ values are shown by the \\emph{closed points} ($\\bullet$) joined by the \\emph{continuous curves}. The $68\\%$ ($1\\sigma$) and $90\\%$ confidence-level (C.L.) uncertainties on $m_c$ are indicated by the \\emph{vertical dashed lines}. The \\emph{open points} ($\\circ$), joined by the \\emph{dashed curves}, are the global $\\chi^2$ values with $\\alpha_S(M_Z^2)$ held fixed.}\n \\label{fig:globalmc}\n\\end{figure}\nIn Fig.~\\ref{fig:globalmc} we show the NLO and NNLO global $\\chi^2$ profiles as a function of $m_c$. We see that at NLO there is a preference for $m_c=1.45$~GeV, with a 1$\\sigma$ uncertainty of $^{+0.14}_{-0.06}$~GeV. 
The global data prefer a lower value of $m_c$ at NNLO, namely $m_c=1.26$~GeV, with a 1$\\sigma$ uncertainty of $^{+0.09}_{-0.04}$~GeV. (For the higher values of $m_c$, the NNLO fit considerably undershoots the moderate $Q^2$ deep-inelastic data.) Here, the 1$\\sigma$ uncertainty is obtained in exactly the same way as described for the uncertainty on $\\alpha_S(M_Z^2)$ in Ref.~\\cite{Martin:2009bu}, i.e.~by examining the $\\chi^2$ profile of each data set included in the global fit.\n\\begin{figure}\n \\centering\n \\begin{minipage}{0.8\\textwidth}\n (a)\\\\\n \\includegraphics[width=\\textwidth]{nlorangechisqMCharm.eps}\n \\end{minipage}\n \\begin{minipage}{0.8\\textwidth}\n (b)\\\\\n \\includegraphics[width=\\textwidth]{nnlorangechisqMCharm.eps}\n \\end{minipage}\n \\caption{\\sf Ranges of $m_c$ for which data sets are described within their $90\\%$ C.L.~limit (\\emph{outer error bars}) or $68\\%$ C.L.~limit (\\emph{inner error bars}) in the (a)~NLO and (b)~NNLO global fits, with $\\alpha_S(M_Z^2)$ as a free parameter. The \\emph{points} ($\\bullet$) indicate the values of $m_c$ favoured by each individual data set, that is, the values for which the $\\chi^2$ is minimised. The experimental uncertainty on $m_c$, indicated by the \\emph{horizontal dashed lines}, is chosen to ensure that all data sets are described within their $68\\%$ or $90\\%$ C.L.~limits defined by Eq.~(2) of Ref.~\\cite{Martin:2009bu}.}\n \\label{fig:rangemc}\n\\end{figure}\nThe distinguishing power of the various data sets is shown in Fig.~\\ref{fig:rangemc}. Here, the points ($\\bullet$) indicate the values of $m_c$ for which the $\\chi^2$ for each data set is minimised (within the context of the global fit), while the inner error bars extend across the 68\\% confidence-level (C.L.)~region and the outer error bars extend across the 90\\% C.L.~region (see Section 4 of Ref.~\\cite{Martin:2009bu} for the precise definitions of the 68\\% and 90\\% C.L.~limits). As may be expected, the $F_2^c(x,Q^2)$ data of the H1 and ZEUS collaborations are the most discriminating.\n\\begin{figure} \n \\centering\n \\begin{minipage}{0.5\\textwidth}\n (a) MSTW 2008 NLO:\\\\\n \\includegraphics[width=\\textwidth]{nloscanchisqMCharm_F2charm.eps}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n (b) MSTW 2008 NNLO:\\\\\n \\includegraphics[width=\\textwidth]{nnloscanchisqMCharm_F2charm.eps}\n \\end{minipage}\n \\caption{\\sf Dependence on $m_c$ of the $\\chi^2$ for HERA data~\\cite{Adloff:1996xq,Adloff:2001zj,Aktas:2005iw,Aktas:2004az,Breitweg:1999ad,Chekanov:2003rb,Chekanov:2007ch} on the charm structure function, $F_2^c$, relative to the value at the global best-fit $m_c$, at (a)~NLO and (b)~NNLO. The \\emph{closed points} ($\\bullet$) have $\\alpha_S(M_Z^2)$ as a free parameter, and are fitted to a quadratic function of $m_c$ shown by the continuous curves, while the \\emph{open points} ($\\circ$, mostly hidden) have $\\alpha_S(M_Z^2)$ held fixed at the MSTW 2008 value. The \\emph{horizontal dashed lines} in the plots indicate the $68\\%$ and $90\\%$ C.L.~limits, determined according to a ``hypothesis-testing'' criterion, see Section 4 of Ref.~\\cite{Martin:2009bu}.}\n \\label{fig:chi2charm}\n\\end{figure}\nIndeed, they dominate the determination of $m_c$, as can be seen from Fig.~\\ref{fig:chi2charm}, which shows $\\chi^2$ versus $m_c$ for the $F_2^c$ data alone. For the HERA inclusive data, changes in the value of $m_c$ are partially compensated by changes in the gluon and in $\\alpha_S$. 
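\n\nTo make the scanning procedure concrete, a quadratic fit to the NNLO global $\\chi^2$ profile of Table~\\ref{tab:mc}(b) already locates the best-fit mass well. The following minimal Python sketch is ours and for illustration only: the quoted uncertainties are instead obtained from the dynamical-tolerance procedure of Ref.~\\cite{Martin:2009bu}, not from a $\\Delta\\chi^2=1$ rule.\n\\begin{verbatim}\nimport numpy as np\n\n# NNLO global chi^2 values versus m_c from the scan above\nmc   = np.array([1.1, 1.2, 1.26, 1.3, 1.4, 1.5, 1.6, 1.7])\nchi2 = np.array([2498, 2463, 2456, 2458, 2480, 2528, 2589, 2666])\n\na, b, c = np.polyfit(mc, chi2, 2)  # chi2 ~ a*mc**2 + b*mc + c\nprint(-b \/ (2 * a))                # vertex of the parabola, approx 1.27\n\\end{verbatim}\nThe vertex of the fitted parabola lands close to the quoted best-fit value of $1.26$~GeV; the small residual difference reflects the mild asymmetry of the profile.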
The NMC data prefer a lower value of $m_c$ giving a quicker evolution near threshold. Similarly the BCDMS data mainly prefer a lower $m_c$, but only due to the correlation with a lower value of $\\alpha_S$. Supplementary plots of the $\\chi^2$ profiles versus $m_c$ for all data sets in the global fit are available from Ref.~\\cite{mstwpdf}. The experimental uncertainty on $m_c$, given in Fig.~\\ref{fig:globalmc} and indicated by the horizontal dashed lines in Fig.~\\ref{fig:rangemc}, is chosen to ensure that all data sets are described within their 68\\% or 90\\% C.L.~limits.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{nloratiogluon_varymc_qsq4GeV2.eps}%\n \\includegraphics[width=0.5\\textwidth]{nloratiogluon_varymc.eps}\\\\\n \\includegraphics[width=0.5\\textwidth]{nloratiosinglet_varymc_qsq4GeV2.eps}%\n \\includegraphics[width=0.5\\textwidth]{nloratiosinglet_varymc.eps}\\\\\n \\includegraphics[width=0.5\\textwidth]{nloratiocharm_varymc_qsq4GeV2.eps}%\n \\includegraphics[width=0.5\\textwidth]{nloratiocharm_varymc.eps}\n \\caption{\\sf $m_c$ dependence of gluon, singlet and charm distributions at NLO at two different $Q^2$ values, $4$~GeV$^2$~(\\emph{left}) and $10^4$~GeV$^2$~(\\emph{right}), compared to the $90\\%$ C.L.~PDF uncertainties.}\n \\label{fig:mcdep}\n\\end{figure}\nThe dependence of the PDFs on the value taken for $m_c$ is shown in Fig.~\\ref{fig:mcdep}. The plots show the ratio of the NLO PDFs obtained with $m_c=1.3$~GeV and $m_c=1.5$~GeV to those of the MSTW~2008 analysis (which assumed $m_c=1.4$ GeV) at scales of $Q^2=4$~GeV$^2$ and $Q^2=10^4$~GeV$^2$. We see that the ratios lie well within the 90\\% C.L.~PDF uncertainty. The exception is the charm distribution at $Q^2=4$~GeV$^2$ where, as expected, the different charm transition points have an appreciable effect.\n\n\n\\subsection{Dependence on bottom-quark mass $m_b$}\n\\begin{table}\n \\centering\n{\\footnotesize\n \\begin{tabular}{l||r|r|r|r||r|r|r|r}\n \\hline\\hline\n & \\multicolumn{4}{c||}{NLO} & \\multicolumn{4}{c}{NNLO} \\\\\n \\hline\n $m_b$ (GeV) & $\\chi^2_{\\rm global}$ & $\\chi^2_{F_2^b}$ & $\\chi^2_{{\\rm global}+F_2^b}$ & $\\alpha_S(M_Z^2)$ & $\\chi^2_{\\rm global}$ & $\\chi^2_{F_2^b}$ & $\\chi^2_{{\\rm global}+F_2^b}$ & $\\alpha_S(M_Z^2)$ \\\\\n & (2699 pts.) & (12 pts.) & (2711 pts.) & & (2615 pts.) & (12 pts.) & (2627 pts.) & \\\\\n \\hline\n & & & & & & & & \\\\\n 4.00 & 2537 & 20 & 2557 & 0.1202 & 2477 & 21 & 2498 & 0.1171 \\\\\n 4.25 & 2539 & 13 & 2552 & 0.1202 & 2478 & 15 & 2493 & 0.1171 \\\\\n 4.50 & 2541 & 8.9 & 2550 & 0.1202 & 2478 & 11 & 2489 & 0.1171 \\\\\n {\\bf 4.75} & 2543 & 7.4 & 2550 & 0.1202 & 2480 & 8.8 & 2489 & 0.1171 \\\\\n 5.00 & 2545 & 7.6 & 2553 & 0.1202 & 2481 & 6.9 & 2488 & 0.1170 \\\\\n 5.25 & 2547 & 7.6 & 2555 & 0.1201 & 2483 & 7.7 & 2491 & 0.1170 \\\\\n 5.50 & 2549 & 8.0 & 2557 & 0.1201 & 2485 & 7.9 & 2493 & 0.1170 \\\\\n \\hline\\hline\n \\end{tabular}\n}\n \\caption{\\sf Fit quality and $\\alpha_S(M_Z^2)$ for different $m_b$ values at NLO and NNLO. Note that the $F_2^b$ data~\\cite{Aaron:2009ut} are not included in the global fit.}\n \\label{tab:mb}\n\\end{table}\nWe repeated the whole exercise for $m_b$. That is, we performed a series of global fits for different values of $m_b$. The results are shown in Table~\\ref{tab:mb}. Now there is less sensitivity to the value of the quark mass, and essentially no correlation between $m_b$ and $\\alpha_S(M_Z^2)$. Indeed, the global $\\chi^2$ stays fairly flat all the way down to $m_b=3$~GeV. 
For the lower values of $m_b$, there is a slightly better description of the HERA data, including $F_2^c(x,Q^2)$. A similar conclusion holds at NNLO, but with about half the change in global $\\chi^2$, that is, even less sensitivity to $m_b$.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=\\textwidth]{bottmbvar.eps}\n \\caption{\\sf Comparison of the HERA $F_2^b(x,Q^2)$ data~\\cite{Aaron:2009ut,Abramowicz:2010zq} with NNLO predictions obtained using PDFs from global fits with different values of $m_b$. Note that the $F_2^b$ data are not included in these global fits. The \\emph{curves}, in decreasing order, correspond to $m_b=4,~4.5, ~4.75, ~5, ~5.5$ GeV.}\n \\label{fig:F2b}\n \\end{center}\n\\end{figure}\nNote that the HERA $F_2^b(x,Q^2)$ data~\\cite{Aktas:2004az,Aktas:2005iw,Aaron:2009ut,Chekanov:2009kj,Abramowicz:2010zq} are not included in the global fits, either in MSTW~2008~\\cite{Martin:2009iq} or in the fits described above. In Fig.~\\ref{fig:F2b} we compare the predictions of the NNLO fits with varying $m_b$ values to the $F_2^b$ data from H1~\\cite{Aaron:2009ut}. We also show a few of the (less precise) ZEUS data points~\\cite{Abramowicz:2010zq}. In Table~\\ref{tab:mb} we give the $\\chi^2$ for the H1 $F_2^b$ data accounting\\footnote{We use Eqs.~(38--40) of Ref.~\\cite{Martin:2009iq}, noting that there is a typo in Eq.~(40) of that paper and $\\sigma_{n,i}^{\\rm uncorr.}$ should appear squared in the expression for $A_{kk^\\prime}$.} for all 24 sources of correlated systematic uncertainty, and we also give the simple addition with the global $\\chi^2$. We see that the $F_2^b$ data show a slight preference for $m_b\\approx 4.75$~GeV at NLO and $m_b\\approx 5.00$~GeV at NNLO. The global fits (including the $F_2^b$ data, obtained by simply adding the $\\chi^2$ values) would prefer $m_b\\approx 4.50$--4.75~GeV at NLO and $m_b\\approx 4.50$--5.00~GeV at NNLO, although admittedly the $\\chi^2$ profiles are quite flat, see Table~\\ref{tab:mb}. We conclude that the global data do not meaningfully constrain $m_b$, and that, in view of Table~\\ref{tab:mb} and Fig.~\\ref{fig:F2b}, the choice $m_b=4.75$~GeV made in the MSTW~2008~\\cite{Martin:2009iq} analysis is completely satisfactory.\n\n\\begin{figure}\n \\centering\n \\begin{minipage}{0.5\\textwidth}\n (a)\\\\\n \\includegraphics[width=\\textwidth]{nloratiobottom_varymb_qsq40GeV2.eps}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n (b)\\\\\n \\includegraphics[width=\\textwidth]{nloratiobottom_varymb.eps}\n \\end{minipage}\n \\caption{\\sf $m_b$ dependence of the bottom-quark distribution at NLO at two different $Q^2$ values, (a)~$40$~GeV$^2$ and (b)~$10^4$~GeV$^2$, compared to the $90\\%$ C.L.~PDF uncertainties.}\n \\label{fig:mbdep}\n\\end{figure}\nThe dependence of the bottom-quark distribution on the value taken for $m_b$ is shown in Fig.~\\ref{fig:mbdep}. The plots show the ratio of the NLO PDFs obtained with $m_b=4.25$~GeV and $m_b=5.25$~GeV to those of the MSTW~2008 analysis (which assumed $m_b=4.75$ GeV) at scales of $Q^2=40$~GeV$^2$ and $Q^2=10^4$~GeV$^2$. We see that the $b$-quark ratios lie well outside the 90\\%~C.L.~PDF uncertainty at $Q^2=40$~GeV$^2$ and even at $Q^2=10^4$~GeV$^2$. The gluon distribution and the other quark flavours are hardly affected by the value of $m_b$ (not shown here).\n\n\n\\subsection{Pole masses of heavy quarks} \\label{sec:polemasses}\nNote that in our analyses $m_c$ refers to the {\\it pole} mass of the charm quark.
Both the transition matrix elements defining the boundary conditions for heavy-quark evolution~\\cite{Buza:1996wv} and the heavy-quark coefficient functions used to define the GM-VFNS~\\cite{Buza:1996wv} are defined using the on-shell renormalisation scheme for the heavy quark. Unlike $\\alpha_S(\\mu^2)$, the pole mass $m_c$ is, in principle, a physically-defined quantity (spoilt only by confinement, which gives power corrections, see below) and is therefore stable with respect to the order of perturbation theory. So the values that we obtain in the NLO and NNLO analyses should, in principle, be the same. However, we see from Fig.~\\ref{fig:globalmc} that the values are only marginally compatible. There are a variety of reasons for this. The perturbative expansion of heavy-flavour coefficient functions is not as convergent as one might hope, with large corrections near threshold and in the small-$x$ limit. So even though the pole mass might, in principle, be the same quantity at two different perturbative orders, the calculation at lower orders is missing significant higher-order corrections, which will affect the value of the pole mass obtained in an extraction from data. Perhaps most important in this respect is the large negative boundary condition (at $Q^2=m_h^2$) for the heavy-flavour distribution at NNLO, which has the effect of flattening the slope $\\partial F_2^h\/\\partial\\ln Q^2$ (see Ref.~\\cite{Thorne:2006qt} for details), so that a lower mass is required to compensate. There is also more potential variation in the GM-VFNS definition at NLO than at NNLO, and there would be slightly more stability in the preferred pole mass in, for example, the ``optimal'' scheme introduced in Ref.~\\cite{Thorne:2010pa} (although this does not remove the variation and is the subject of a future study). Finally, as briefly noted above and discussed further below, the definition of the pole mass is contaminated by non-perturbative corrections. These are reflected in the divergence of the perturbation series; power corrections are, in practice, difficult to disentangle from higher-order corrections, so extractions at different orders will also vary from this source. We now examine this issue in more detail.\n\nLet us consider the values we extract and use as default. In the \\emph{Review of Particle Physics}~\\cite{Amsler:2008zzb}, the authors quote values for the ``{\\it running}'' masses, $m_c(\\mu=m_c)$ and $m_b(m_b)$, in the $\\overline{\\rm MS}$ scheme:\n\\begin{equation}\n m_c(\\mu=m_c)~=~1.27^{~+0.07}_{~-0.11}~~{\\rm GeV},\\qquad m_b(m_b)~=~4.20~^{+0.17}_{-0.07}~~{\\rm GeV},\n \\label{eq:running}\n\\end{equation}\nwhich they evaluate from a whole series of independent determinations, implying a precise determination of the masses.\\footnote{There are numerous individual determinations of both charm- and bottom-quark masses in the $\\overline{\\rm MS}$ scheme which are actually even more precise than the above---for recent examples see~Refs.~\\cite{Chetyrkin:2009fv,Narison:2010cg}. However, these can vary from each other by considerably more than the quoted uncertainties, so in the \\emph{Review of Particle Physics}~\\cite{Amsler:2008zzb} account is taken of the spread, as well as the individual uncertainties.} The most precise pole masses should then be obtained by using these $\\overline{\\rm MS}$ values (the best determinations coming from NNNLO calculations) with the most accurate conversion factor, also at NNNLO.
However, the conversion of the ``{\\it running}'' mass to the pole mass is problematic, since it relies on a very weakly convergent perturbative expansion. Using our NNLO value of $\\alpha_S$, the term-by-term conversion factor (obtained from the formula in Ref.~\\cite{Amsler:2008zzb}), starting at zeroth order and going to the highest known order of $\\alpha_S^3$~\\cite{Chetyrkin:1999qi,Melnikov:2000qh}, for the bottom-quark mass is $(1+0.095+0.045+0.036)$ and for charm is $(1+0.16+0.14+0.18)$.\\footnote{Use of an NNNLO value of $\\alpha_S$ would make little difference. As shown in Ref.~\\cite{Kuhn:2008zzd}, NNNLO extractions of $\\alpha_S(M_Z^2)$ are very similar to the corresponding NNLO values, and the slightly quicker running at NNNLO would marginally increase $\\alpha_S$ at the scale of the charm- and bottom-quark masses, i.e.~the convergence of the perturbative series would likely be very slightly worse.} The former implies that convergence is ceasing, while the latter has no real convergence at all, since it relies on a much higher value of $\\alpha_S$ (and larger coefficients due to less cancellation between individual terms as the number of light quark flavours decreases~\\cite{Chetyrkin:1999qi,Melnikov:2000qh}).\n\nIndeed, the contamination by renormalons of the perturbative series for the transformation of a running quark mass in the $\\overline{\\rm MS}$ scheme to the pole mass has been known for a long time~\\cite{Bigi:1994em,Beneke:1994sw}. These papers estimate the intrinsic ambiguity of the perturbative series due to infrared contributions, translating into an uncertainty on the pole mass of at least $0.05~{\\rm GeV}$, but more likely of order 0.1--0.2~GeV. The best estimate one can obtain from the perturbative series is usually defined by including all terms in the series until they cease to fall in magnitude from one order to the next. For the bottom quark, under the plausible assumption that the unknown ${\\cal O}(\\alpha_S^4)$ term is similar in size to the calculated ${\\cal O}(\\alpha_S^3)$ term, this would give a pole mass of $m_b = 4.9$~GeV. The uncertainty on this value from the conversion is either the estimate from Ref.~\\cite{Beneke:1994sw} or roughly the size of the last term included in the series, i.e.~$0.15$~GeV. The good correspondence of the two estimates of the uncertainty is expected, and suggests that the series is indeed truncated at the correct point. If this uncertainty is added (approximately) in quadrature with the uncertainty on the $\\overline{\\rm MS}$ scheme mass, Eq.~\\eqref{eq:running}, we obtain $m_b = 4.9\\pm0.2$~GeV.\n\nThe determination of the charm-quark pole mass by conversion from the $\\overline{\\rm MS}$ scheme mass is more problematic since, as noted above, the perturbative series displays no clear convergence at all. However, as discussed in Ref.~\\cite{Hoang:2005zw}, the leading renormalon contribution is the same for different masses, i.e.~$\\delta m^2 \\propto \\Lambda_{\\rm QCD}^2$, independent of quark flavour. This means the difference between $m_b$ and $m_c$ can be well determined, and turns out to be $3.4$~GeV with a very small uncertainty~\\cite{Hoang:2005zw}. Using this result we obtain a best determination of the charm-quark pole mass of $m_c=1.5\\pm 0.2$~GeV. There is an uncertainty on $m_b-m_c$ due to the fact that the non-dominant renormalon contributions, $\\delta m^2 \\propto \\Lambda_{\\rm QCD}^4\/m^2$ and beyond, do not cancel. 
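\n\nFor orientation, the arithmetic behind these two determinations is simply\n\\begin{equation}\n m_b~\\simeq~4.20\\,(1+0.095+0.045+0.036)~{\\rm GeV}~\\simeq~4.9~{\\rm GeV}, \\qquad m_c~\\simeq~m_b-3.4~{\\rm GeV}~\\simeq~1.5~{\\rm GeV},\n\\end{equation}\nwhere the first bracket is the truncated conversion series quoted above and $3.4$~GeV is the renormalon-insensitive difference $m_b-m_c$.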
However, the uncertainty on $m_b-m_c$ induced by these non-cancelling contributions is small compared to the $0.2$~GeV uncertainty on $m_b$ and hence does not affect the uncertainty on $m_c$ at the quoted accuracy when added in quadrature.\n\nThese pole mass values obtained from $\\overline{\\rm MS}$ conversion:\n\\begin{equation}\n m_c~=~1.5~\\pm0.2~~{\\rm GeV},\\qquad m_b~=~4.9~\\pm0.2~~{\\rm GeV},\n \\label{eq:polemasses}\n\\end{equation}\nare slightly high compared with our default values ($1.4$~GeV and $4.75$~GeV) and our best-fit values of $m_c$, particularly at NNLO, but even in the most discrepant case the error bars overlap. Combining our best-fit $m_c$ values and the pole mass determination given in Eq.~\\eqref{eq:polemasses}, a range for $m_c$ of about 1.25--1.55~GeV at 68\\%~C.L.~and 1.15--1.65~GeV at 90\\%~C.L.~seems reasonable at present. With improved HERA data on $F_2^c$ to come~\\cite{HERAF2charm} this range can hopefully be narrowed in the near future. Turning now to the bottom mass, the limited information that we obtain from comparison to data, and from the determination of the pole mass given in Eq.~\\eqref{eq:polemasses}, suggests that a range of $m_b$ from about 4.65--5.05~GeV at 68\\%~C.L.~is sensible. As we will see in the next section, LHC cross sections that do not involve explicit $b$-quarks are not very sensitive to the PDF variation with $m_b$, so using our grids at 4.5 and 5~GeV should give a good estimate of the uncertainty due to $m_b$ at 68\\%~C.L., and the grids at 4.25 and 5.25~GeV should give a conservative estimate of this source of uncertainty at 90\\%~C.L. As with charm, improved HERA data on $F_2^b$ are likely to limit the allowed spread in future.\n\nIf we reach the stage where we become confident that the data are starting to constrain the pole masses to an accuracy clearly better than the renormalon ambiguity, it may become preferable to transform to a scheme where $\\overline{\\rm MS}$ definitions are used instead, even though the mass is less directly related to a physical variable in this case. This would be appropriate if we become strongly constrained by data with $Q^2\\sim m_h^2$ and data close to the kinematic threshold, $W^2=Q^2(1\/x-1)=4m_h^2$ (for neutral-current DIS), where non-perturbative effects and the interplay between the leading-twist perturbative series and the power corrections become very important, and the ambiguities may be reduced in a different renormalisation scheme.
Indeed, we have already noted in Section 9.2 of Ref.~\\cite{Martin:2009iq} that the lowest-$Q^2$ EMC data on $F_2^c$~\\cite{Aubert:1982tt} imply a non-perturbative correction to the cross section.\n\n\n\\subsection{Impact on $W$, $Z$ and Higgs production at the Tevatron and LHC}\n\\begin{table}\n \\centering\n (a)\\hfill$\\,$\\\\\n{\\footnotesize\n \\begin{tabular}{c|c||r|r|r||r|r|r||r|r|r}\n \\hline\\hline\n \\multicolumn{2}{c||}{Variable $\\alpha_S(M_Z^2)$} & \\multicolumn{3}{c||}{Tevatron} & \\multicolumn{3}{c||}{LHC} & \\multicolumn{3}{c}{LHC} \\\\\n \\multicolumn{2}{c||}{} & \\multicolumn{3}{c||}{($\\sqrt{s}=1.96$ TeV)} & \\multicolumn{3}{c||}{($\\sqrt{s}=7$ TeV)} & \\multicolumn{3}{c}{($\\sqrt{s}=14$ TeV)} \\\\\n \\hline\n $m_c$~(GeV) & $m_b$~(GeV) &\n $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ & $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ & $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ \\\\\n \\hline\n 1.05 & & -2.6 & -2.8 & +0.4 & -4.1 & -4.6 & -2.4 & -5.1 & -5.5 & -3.8 \\\\\n 1.10 & & -2.2 & -2.4 & +0.2 & -3.5 & -3.9 & -2.1 & -4.3 & -4.7 & -3.3 \\\\\n 1.15 & & -1.8 & -1.9 & +0.1 & -2.9 & -3.3 & -1.8 & -3.6 & -3.9 & -2.8 \\\\\n 1.20 & & -1.4 & -1.5 & +0.1 & -2.3 & -2.6 & -1.5 & -2.8 & -3.1 & -2.3 \\\\\n 1.25 & & -1.0 & -1.1 & 0.0 & -1.7 & -1.9 & -1.2 & -2.1 & -2.3 & -1.7 \\\\\n 1.30 & & -0.7 & -0.7 & 0.0 & -1.1 & -1.3 & -0.8 & -1.4 & -1.5 & -1.2 \\\\\n 1.35 & & -0.3 & -0.4 & 0.0 & -0.6 & -0.6 & -0.4 & -0.7 & -0.8 & -0.6 \\\\\n {\\bf 1.40} & {\\bf 4.75} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\\\\n 1.45 & & +0.3 & +0.3 & 0.0 & +0.6 & +0.6 & +0.4 & +0.7 & +0.8 & +0.6 \\\\\n 1.50 & & +0.6 & +0.6 & 0.0 & +1.1 & +1.3 & +0.8 & +1.3 & +1.5 & +1.2 \\\\\n 1.55 & & +0.8 & +0.9 & +0.1 & +1.6 & +1.9 & +1.2 & +2.0 & +2.3 & +1.8 \\\\\n 1.60 & & +1.1 & +1.2 & +0.2 & +2.1 & +2.5 & +1.8 & +2.6 & +3.0 & +2.5 \\\\\n 1.65 & & +1.3 & +1.5 & +0.1 & +2.6 & +3.0 & +2.0 & +3.2 & +3.7 & +2.9 \\\\\n 1.70 & & +1.5 & +1.8 & +0.2 & +3.1 & +3.6 & +2.5 & +3.8 & +4.4 & +3.6 \\\\\n 1.75 & & +1.8 & +2.0 & +0.3 & +3.5 & +4.2 & +2.9 & +4.3 & +5.1 & +4.1 \\\\\n \\hline\\hline\n \\end{tabular}\n}\n\\\\[1cm]\n(b)\\hfill$\\,$\\\\\n{\\footnotesize\n \\begin{tabular}{c|c||r|r|r||r|r|r||r|r|r}\n \\hline\\hline\n \\multicolumn{2}{c||}{Fixed $\\alpha_S(M_Z^2)$} & \\multicolumn{3}{c||}{Tevatron} & \\multicolumn{3}{c||}{LHC} & \\multicolumn{3}{c}{LHC} \\\\\n \\multicolumn{2}{c||}{} & \\multicolumn{3}{c||}{($\\sqrt{s}=1.96$ TeV)} & \\multicolumn{3}{c||}{($\\sqrt{s}=7$ TeV)} & \\multicolumn{3}{c}{($\\sqrt{s}=14$ TeV)} \\\\\n \\hline\n $m_c$~(GeV) & $m_b$~(GeV) &\n $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ & $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ & $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ \\\\\n \\hline\n 1.05 & & -1.9 & -2.2 & +3.8 & -2.9 & -3.4 & 0.0 & -3.9 & -4.3 & -1.5 \\\\\n 1.10 & & -1.5 & -1.8 & +3.3 & -2.5 & -2.9 & 0.0 & -3.3 & -3.6 & -1.3 \\\\\n 1.15 & & -1.2 & -1.4 & +2.7 & -2.1 & -2.4 & 0.0 & -2.7 & -3.0 & -1.1 \\\\\n 1.20 & & -0.9 & -1.1 & +2.2 & -1.6 & -1.9 & 0.0 & -2.1 & -2.4 & -0.9 \\\\\n 1.25 & & -0.7 & -0.8 & +1.6 & -1.2 & -1.4 & 0.0 & -1.6 & -1.8 & -0.7 \\\\\n 1.30 & & -0.4 & -0.5 & +1.1 & -0.8 & -1.0 & 0.0 & -1.0 & -1.2 & -0.4 \\\\\n 1.35 & & -0.2 & -0.3 & +0.5 & -0.4 & -0.5 & 0.0 & -0.5 & -0.6 & -0.2 \\\\\n {\\bf 1.40} & {\\bf 4.75} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\\\\n 1.45 & & +0.2 & +0.2 & -0.6 & +0.4 & +0.5 & 0.0 & +0.5 & +0.6 
& +0.3 \\\\\n 1.50 & & +0.3 & +0.4 & -1.2 & +0.8 & +0.9 & 0.0 & +1.0 & +1.2 & +0.4 \\\\\n 1.55 & & +0.5 & +0.7 & -1.7 & +1.1 & +1.4 & 0.0 & +1.4 & +1.7 & +0.6 \\\\\n 1.60 & & +0.7 & +0.8 & -2.3 & +1.4 & +1.8 & 0.0 & +1.9 & +2.2 & +0.9 \\\\\n 1.65 & & +0.8 & +1.1 & -2.9 & +1.8 & +2.3 & -0.1 & +2.3 & +2.8 & +1.0 \\\\\n 1.70 & & +0.9 & +1.2 & -3.5 & +2.1 & +2.7 & -0.1 & +2.7 & +3.3 & +1.2 \\\\\n 1.75 & & +1.1 & +1.4 & -4.0 & +2.4 & +3.1 & -0.1 & +3.1 & +3.8 & +1.3 \\\\\n \\hline\\hline\n \\end{tabular}\n}\n \\caption{\\sf (a) Difference (in percent) between the predictions for the $W$, $Z$ and Standard Model Higgs ($M_H = 120$~GeV) NNLO production cross sections at the Tevatron and LHC, calculated using the variable-$m_c$ PDF sets, and the standard MSTW~2008 NNLO predictions with $m_c=1.40$~GeV. (b) Also shown are the corresponding results for variable $m_c$ with \\emph{fixed} $\\alpha_S(M_Z^2)$.}\n \\label{tab:WZHmc}\n\\end{table}\n\\begin{table}\n \\centering\n{\\footnotesize\n \\begin{tabular}{c|c||r|r|r||r|r|r||r|r|r}\n \\hline\\hline\n \\multicolumn{2}{c||}{Variable $\\alpha_S(M_Z^2)$} & \\multicolumn{3}{c||}{Tevatron} & \\multicolumn{3}{c||}{LHC} & \\multicolumn{3}{c}{LHC} \\\\\n \\multicolumn{2}{c||}{} & \\multicolumn{3}{c||}{($\\sqrt{s}=1.96$ TeV)} & \\multicolumn{3}{c||}{($\\sqrt{s}=7$ TeV)} & \\multicolumn{3}{c}{($\\sqrt{s}=14$ TeV)} \\\\\n \\hline\n $m_c$~(GeV) & $m_b$~(GeV) &\n $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ & $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ & $\\delta\\sigma^{W}$ & $\\delta\\sigma^{Z}$ & $\\delta\\sigma^{H}$ \\\\\n \\hline\n & 4.00 & -0.2 & -0.1 & -0.2 & -0.4 & -0.1 & -0.4 & -0.5 & 0.0 & -0.5 \\\\\n & 4.25 & -0.1 & -0.1 & -0.1 & -0.3 & 0.0 & -0.3 & -0.3 & 0.0 & -0.3 \\\\\n & 4.50 & -0.1 & 0.0 & -0.1 & -0.1 & 0.0 & -0.1 & -0.1 & 0.0 & -0.1 \\\\\n {\\bf 1.40} & {\\bf 4.75} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\\\\n & 5.00 & 0.0 & 0.0 & 0.0 & +0.1 & 0.0 & +0.1 & +0.1 & 0.0 & +0.1 \\\\\n & 5.25 & +0.1 & 0.0 & +0.1 & +0.2 & 0.0 & +0.2 & +0.2 & 0.0 & +0.2 \\\\\n & 5.50 & +0.1 & 0.0 & +0.1 & +0.3 & 0.0 & +0.2 & +0.3 & -0.1 & +0.3 \\\\\n \\hline\\hline\n \\end{tabular}\n}\n \\caption{\\sf Difference (in percent) between the predictions for the $W$, $Z$ and Standard Model Higgs ($M_H = 120$~GeV) NNLO production cross sections at the Tevatron and LHC, calculated using the variable-$m_b$ PDF sets, and the standard MSTW~2008 NNLO predictions with $m_b=4.75$~GeV. Here $\\alpha_S(M_Z^2)$ is a free parameter in each fit, although the correlation with $m_b$ is negligible, see Table~\\ref{tab:mb}.}\n \\label{tab:WZHmb}\n\\end{table}\n\nTables~\\ref{tab:WZHmc} and \\ref{tab:WZHmb} show how the predictions for the ``standard candle'' NNLO $W$ and $Z$ cross sections at the Tevatron and LHC change using the PDFs obtained when the charm- and bottom-quark masses are varied in the global fit. The cross sections are calculated in the 5-flavour ZM-VFNS as described in Section~15 of Ref.~\\cite{Martin:2009iq}, e.g.~using the PDG 2008~\\cite{Amsler:2008zzb} electroweak parameters. These changes are not due primarily to the changes in the heavy-quark PDFs themselves, which would in any case be in the opposite direction. Rather they are due to the changes in the light quarks, which evolve slightly more rapidly at small $x$ to compensate for the slower turn-on of the heavy-flavour contribution to the structure functions when the heavy-quark masses are increased. 
In the free coupling case this is achieved mainly by an increase in $\\alpha_S$, while for fixed coupling it arises from an increase in the small-$x$ input gluon distribution. Additionally, the input sea quarks, mainly the less well-constrained strange quark distributions, also increase with increasing $m_h$ to compensate for the suppressed heavy-flavour contribution to small-$x$ structure functions. At the Tevatron, varying the charm mass by $\\pm 0.15$ GeV from its default MSTW~2008 value leads to fairly small (${\\cal O}(1\\%)$ or less) changes in the $W$ and $Z$ cross sections, with heavier values of the charm mass giving slightly larger cross sections. The effects are a little more pronounced at the LHC: the cross sections vary by about $\\pm 2\\%$ for $\\sqrt{s}=14$~TeV. These changes reflect the behaviour of the up-quark distribution shown in Fig.~\\ref{fig:mcdepnnlo}(a) at the relevant $x$ values (indicated). The sensitivity of the $W$ and $Z$ cross sections at the LHC to the value of $m_c$ has previously been noted in Ref.~\\cite{CooperSarkar:2010ik}.\n\n\\begin{figure}\n \\centering\n (a)\\hfill$\\,$\\\\\n \\includegraphics[width=0.8\\textwidth]{nnloratioup_varymcby0.25_MZ.eps}\\\\\n (b)\\hfill$\\,$\\\\\n \\includegraphics[width=0.8\\textwidth]{nnloratiogluon_varymcby0.25_MH120.eps}\n \\caption{\\sf $m_c$ dependence of the NNLO (a) up-quark and (b) gluon distributions at the scales relevant for $Z$ and Higgs boson ($M_H=120$ GeV) production, respectively. Also shown is the $m_c$ dependence when $\\alpha_S(M_Z^2)$ is held fixed. The values of $x=M_{Z,H}\/\\sqrt{s}$ probed at central rapidity (and $p_T^{Z,H}=0$) are indicated for the Tevatron ($\\sqrt{s} = 1.96$ TeV) and LHC ($7$~TeV and $14$~TeV).}\n \\label{fig:mcdepnnlo}\n\\end{figure}\n\nIn Tables~\\ref{tab:WZHmc} and \\ref{tab:WZHmb} we also give the changes to the cross section for production of a Standard Model Higgs boson of mass 120 GeV via gluon--gluon fusion through a top-quark loop, calculated in the 5-flavour ZM-VFNS as described in Section 6.2 of Ref.~\\cite{Martin:2009bu}. From Table~\\ref{tab:WZHmc} we see that the variation of the Higgs cross sections with $m_c$ is different depending on whether $\\alpha_S$ is (a)~allowed to vary or (b)~kept fixed. This can be understood from the behaviour of the gluon distribution as $m_c$ is varied, see Fig.~\\ref{fig:mcdepnnlo}(b). When $\\alpha_S$ is fixed, the change in the Higgs cross section reflects the change in the gluon distribution at the relevant $x$ values (indicated). Thus the \\{14 TeV LHC, 7 TeV LHC, Tevatron\\} Higgs cross sections are \\{correlated, uncorrelated, anticorrelated\\} with $m_c$. On the other hand, for variable $\\alpha_S$, the additional correlation of $\\alpha_S$ with $m_c$ (see Table~\\ref{tab:mc}) results in \\{14 TeV LHC, 7 TeV LHC, Tevatron\\} Higgs cross sections that are \\{correlated, correlated, almost uncorrelated\\} with $m_c$, i.e.~at the Tevatron the anticorrelation of the gluon distribution largely cancels the correlation of $\\alpha_S$.\n\nFrom Table~\\ref{tab:WZHmb} we see that the $W$, $Z$ and Higgs production cross sections are much less dependent on the value of $m_b$. As seen from Table~\\ref{tab:mb}, the correlation between $m_b$ and $\\alpha_S(M_Z^2)$ is negligible, so we do not explicitly show results with fixed $\\alpha_S(M_Z^2)$ since they would be almost the same as in Table~\\ref{tab:WZHmb}.
In all cases, for both varying-$m_c$ and varying-$m_b$, the cross-section ratios $\\sigma^W\/\\sigma^Z$ (and also $\\sigma^{W^+}\/\\sigma^{W^-}$ at the LHC) are almost completely unaffected.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{l|c|c|c}\n \\hline\\hline\n Tevatron, $\\sqrt{s} = 1.96$ TeV & $B_{\\ell\\nu} \\cdot \\sigma^W$ & $B_{\\ell^+\\ell^-}\\cdot\\sigma^Z$ & $\\sigma^H$ \\\\\n \\hline\n Central value & 2.747~nb & 0.2507~nb & 0.9550~pb \\\\ \\hline\n PDF only uncertainty & $^{+1.8\\%}_{-1.5\\%}$ & $^{+1.9\\%}_{-1.6\\%}$ & $^{+3.1\\%}_{-3.3\\%}$ \\\\\n PDF+$\\alpha_S$ uncertainty & $^{+2.2\\%}_{-1.7\\%}$ & $^{+2.2\\%}_{-1.8\\%}$ & $^{+5.4\\%}_{-4.8\\%}$ \\\\ \\hline\n PDF+$\\alpha_S$+$m_{c,b}$ uncertainty & $^{+2.3\\%}_{-1.8\\%}$ & $^{+2.3\\%}_{-2.0\\%}$ & $^{+5.6\\%}_{-5.1\\%}$ \\\\\n \\hline\\hline\\multicolumn{4}{c}{}\\\\\\hline\\hline\n LHC, $\\sqrt{s} = 7$ TeV & $B_{\\ell\\nu} \\cdot \\sigma^W$ & $B_{\\ell^+\\ell^-}\\cdot\\sigma^Z$ & $\\sigma^H$ \\\\\n \\hline\n Central value & 10.47~nb & 0.958~nb & 15.50~pb \\\\ \\hline\n PDF only uncertainty & $^{+1.7\\%}_{-1.6\\%}$ & $^{+1.7\\%}_{-1.5\\%}$ & $^{+1.1\\%}_{-1.6\\%}$ \\\\\n PDF+$\\alpha_S$ uncertainty & $^{+2.5\\%}_{-1.9\\%}$ & $^{+2.5\\%}_{-1.9\\%}$ & $^{+3.7\\%}_{-2.9\\%}$ \\\\ \\hline\n PDF+$\\alpha_S$+$m_{c,b}$ uncertainty & $^{+2.7\\%}_{-2.2\\%}$ & $^{+2.9\\%}_{-2.4\\%}$ & $^{+3.7\\%}_{-2.9\\%}$ \\\\\n \\hline\\hline\\multicolumn{4}{c}{}\\\\\\hline\\hline\n LHC, $\\sqrt{s} = 14$ TeV & $B_{\\ell\\nu} \\cdot \\sigma^W$ & $B_{\\ell^+\\ell^-}\\cdot\\sigma^Z$ & $\\sigma^H$ \\\\\n \\hline\n Central value & 21.72~nb & 2.051~nb & 50.51~pb \\\\ \\hline\n PDF only uncertainty & $^{+1.7\\%}_{-1.7\\%}$ & $^{+1.7\\%}_{-1.6\\%}$ & $^{+1.0\\%}_{-1.6\\%}$ \\\\\n PDF+$\\alpha_S$ uncertainty & $^{+2.6\\%}_{-2.2\\%}$ & $^{+2.6\\%}_{-2.1\\%}$ & $^{+3.6\\%}_{-2.7\\%}$ \\\\ \\hline\n PDF+$\\alpha_S$+$m_{c,b}$ uncertainty & $^{+3.0\\%}_{-2.7\\%}$ & $^{+3.1\\%}_{-2.8\\%}$ & $^{+3.7\\%}_{-2.8\\%}$ \\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{\\sf NNLO predictions for $W$, $Z$ and Higgs ($M_H=120$~GeV) total cross sections at the Tevatron, $7$~TeV LHC and $14$~TeV LHC, with PDF uncertainties only~\\cite{Martin:2009iq}, with the combined ``PDF+$\\alpha_S$'' uncertainty~\\cite{Martin:2009bu}, then finally also including the uncertainty due to $m_c$ and $m_b$. The $68\\%$ C.L.~uncertainties are given in all cases. We take $\\mu_R=\\mu_F=M_{W,Z,H}$.}\n \\label{tab:wzhtot}\n\\end{table}\nIn order to assess the {\\em total} uncertainty on a hadron collider cross section arising from variations in PDFs, $\\alpha_S$ and the heavy-quark masses, we need to define a prescription for combining the latter with the former. Our recommendation is to vary $m_c$ in the range $m_c = 1.40\\pm 0.15$~GeV at 68\\%~C.L.~and $\\pm 0.25$~GeV at 90\\%~C.L., centred on the default value for fixed $\\alpha_S$, and to vary $m_b$ in the range $m_b = 4.75\\pm0.25$~GeV at 68\\%~C.L.~and $\\pm0.50$~GeV at 90\\%~C.L., and then to separately add the resulting cross-section variations in quadrature with the usual ``PDF+$\\alpha_S$'' uncertainty. Of course, this prescription does not account for all possible correlations between PDFs, $\\alpha_S$ and $m_{c,b}$, but it should be a sufficiently good approximation. The range of $m_c$ values is based on the slightly contrasting pulls of the pole mass determination from the $\\overline{\\rm MS}$ conversion and our fit. 
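\n\nA minimal sketch of this recommendation, assuming our reading of the quadrature combination (the helper function is ours; the per-mass shifts are read off Tables~\\ref{tab:WZHmc}(b) and \\ref{tab:WZHmb} for $W$ production at the 14~TeV LHC):\n\\begin{verbatim}\nimport math\n\ndef total_uncertainty(pdf_alphas, shift_mc, shift_mb):\n    # add the m_c and m_b cross-section shifts in quadrature\n    # with the 'PDF+alpha_S' uncertainty (all in percent)\n    return math.sqrt(pdf_alphas**2 + shift_mc**2 + shift_mb**2)\n\n# W at 14 TeV: +2.6% (PDF+alpha_S), +1.4% (m_c = 1.55 GeV,\n# fixed alpha_S) and +0.1% (m_b = 5.00 GeV)\nprint(total_uncertainty(2.6, 1.4, 0.1))  # approx 3.0\n\\end{verbatim}\nwhich reproduces the $+3.0\\%$ entry for the 14~TeV $W$ cross section in Table~\\ref{tab:wzhtot}.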
As an example, in Table~\\ref{tab:wzhtot} we show the increase in uncertainty on the $W$, $Z$ and Higgs total cross sections due to $m_c$ and $m_b$ variations, compared to the ``PDF only''~\\cite{Martin:2009iq} and ``PDF+$\\alpha_S$''~\\cite{Martin:2009bu} uncertainties.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{l|l|l}\n \\hline\\hline\n PDF set & $m_c$~(GeV) & $m_b$~(GeV) \\\\\n \\hline\n MSTW08~\\cite[this work]{Martin:2009iq} & $1.40\\pm0.15$ & $4.75\\pm0.25$ \\\\\n MRST06~\\cite{Martin:2007bv} & $1.43$ & $4.30$ \\\\\n CT10~\\cite{Lai:2010vv} & $1.30$ & $4.75$ \\\\\n CTEQ6.6~\\cite{Nadolsky:2008zw} & $1.30$ & $4.50$ \\\\\n NNPDF2.0~\\cite{Ball:2010de} & $\\sqrt{2}$ & $4.30$ \\\\\n GJR08\/JR09~\\cite{Gluck:2007ck,JimenezDelgado:2008hf} & $1.30$ & $4.20$ \\\\\n HERAPDF1.0~\\cite{Aaron:2009wt} & $1.40^{+0.20}_{-0.05}$ & $4.75^{+0.25}_{-0.45}$ \\\\\n ABKM09~\\cite{Alekhin:2009ni} & $1.50\\pm0.10$ & $4.50\\pm0.50$ \\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{\\sf Values of $m_c$ and $m_b$ used in various PDF fits. The $1\\sigma$ uncertainties are given only for the three PDF groups that attempt to account for uncertainties on $m_{c,b}$ in their analyses.}\n \\label{tab:mhcompare}\n\\end{table}\nIn Table~\\ref{tab:mhcompare} we compare the values of the heavy-quark masses, $m_c$ and $m_b$, used in the MRST\/MSTW analyses, with the values taken in the most recent public PDF fits of other groups. The other two \\emph{global} PDF fitting groups, CTEQ~\\cite{Lai:2010vv,Nadolsky:2008zw} and NNPDF~\\cite{Ball:2010de}, do not attempt to quantify the uncertainty coming from $m_{c,b}$, nor do the fits of ``dynamical'' PDFs by GJR\/JR~\\cite{Gluck:2007ck,JimenezDelgado:2008hf}. The HERAPDF1.0 fit~\\cite{Aaron:2009wt} uses the same central values as MSTW08~\\cite{Martin:2009iq}, but additional PDF fits are also provided with $m_c=\\{1.35,1.65\\}$~GeV and $m_b=\\{4.3,5.0\\}$~GeV, and this additional model uncertainty is recommended to be added in quadrature with the other uncertainties. The ABKM09 analysis~\\cite{Alekhin:2009ni} uses fixed values of $m_c=1.5$~GeV and $m_b=4.5$~GeV to determine the central fit. However, additional pseudo-measurements of $m_{c,b}$ are then added, with values given in the last line of Table~\\ref{tab:mhcompare}, and $m_c$ and $m_b$ are taken as free parameters to calculate the covariance matrix used for the final error propagation. This means that each of the public eigenvector PDF sets will be associated with different values of $m_c$ and $m_b$, but these values are not readily accessible.\n\n\n\n\\section{3-flavour and 4-flavour scheme parton distributions} \\label{sec:FFNS}\nAs well as looking at the variation in the parton distributions as a function of the heavy-quark masses, it is also interesting to consider the PDF sets obtained in the framework of a different maximum number of active quark flavours. Hence, in this section we will consider our PDFs when charm becomes an active parton but bottom does not---the 4-flavour scheme---and when both the charm and bottom quarks appear only in the final state---the 3-flavour scheme. We have argued on various occasions (e.g.~in Section 4 of Ref.~\\cite{Martin:2009iq}) that the use of a GM-VFNS, i.e.~with up to five active quarks (or even six if we include top), is preferable.
However, there are cases where the cross section has only been calculated with finite mass dependence for the fixed flavour number case; see, for example, the \\textsc{hqvdis} program~\\cite{Harris:1995tu,Harris:1997zq} for details of final states in heavy-quark production in DIS. For this reason we make available sets of 3FS and 4FS distributions with a variety of both charm and bottom masses.\n\n\n\\subsection{Obtaining 3-flavour and 4-flavour scheme parton distributions}\nIt might be thought preferable to obtain these lower active quark number PDF sets from a fit performed using FFNS coefficient functions. However, as argued in Refs.~\\cite{Martin:2006qz,Thorne:2008xf}, it is not actually so obvious that this is the case. This is largely because rather few of the data sets included in a truly global PDF fit can be kept, even at NLO, in a fit using the FFNS, due to the lack of full coefficient functions (even charged-current DIS coefficients are not known with full mass dependence at order $\\alpha_S^2$ except for $Q^2 \\gg m_h^2$~\\cite{Buza:1997mg}). Hence, the central values of the PDFs are likely to be influenced as much by the lack of data as by the change of scheme. However, it is also a consideration that the lack of resummation of the large logarithms in $Q^2\/m_h^2$ potentially affects the stability of the fit compared to the presumably more stable GM-VFNS fit. Ultimately a correct GM-VFNS will provide results very similar to the FFNS near the transition points $Q^2=m_h^2$ anyway, so we deem it best to simply obtain the 3FS and 4FS PDFs from the inputs for the full fits performed using the GM-VFNS. \n\nWhen obtaining the 3FS and 4FS PDFs it is vital to make a self-consistent definition of the strong coupling constant $\\alpha_S$. It is generally the case that coefficient functions calculated in the 3- or 4-flavour schemes are defined in a renormalisation scheme where the contribution of the heavy quark decouples and the coupling itself does not include the heavy quark as an active flavour. On this basis we define the PDFs using this definition of the coupling in the splitting functions. It is certainly possible to use a different definition of the coupling, but this must be applied universally, in both the PDFs and the coefficient functions. As illustrated in Ref.~\\cite{Martin:2006qz}, the error made from not doing so can be a few percent. Indeed, the change in the coupling obtained by altering the number of heavy flavours is quite dramatic. In our default NLO fit we start with a value $\\alpha_S(Q_0^2) = 0.49128$ (with $Q_0^2=1$~GeV$^2$), which for a variable flavour number results in $\\alpha_S^{(5)}(M_Z^2) = 0.12018$. Restricting ourselves to a maximum of four active flavours, the same boundary condition at $Q_0^2$ gives $\\alpha_S^{(4)}(M_Z^2) = 0.11490$, and for three active flavours gives $\\alpha_S^{(3)}(M_Z^2) = 0.10809$. 
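The ordering of these values is simply one-loop decoupling at work: in the convention $\\mu^2\\,{\\rm d}\\alpha_S\/{\\rm d}\\mu^2 = -\\beta_0\\,\\alpha_S^2\/(4\\pi)+\\ldots$, the leading coefficient\n\\begin{equation}\n \\beta_0 = 11 - \\frac{2}{3}\\,n_f\n\\end{equation}\nincreases as the number of active flavours $n_f$ decreases, so that, evolving upwards from a common value of $\\alpha_S(Q_0^2)$, the coupling falls more rapidly with $Q^2$ in the 4FS and, especially, the 3FS, and hence ends up smaller at $M_Z^2$. 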
This is illustrated in Fig.~\\ref{fig:alphas}, and more clearly in Fig.~\\ref{fig:ratioasg}(a), which shows the ratio of the 3- and 4-flavour $\\alpha_S$ to the 5-flavour one.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{nloalphas_vs_q2.eps}\n \\caption{\\sf The NLO strong coupling $\\alpha_S(Q^2)$ versus $Q^2$, in the default 5FS with the $90\\%$~C.L.~experimental uncertainty, and in the 4FS and 3FS, taking the same input value of $\\alpha_S(Q_0^2=1\\,{\\rm GeV}^2)$.}\n \\label{fig:alphas}\n\\end{figure}\n\\begin{figure}\n \\centering\n (a)\\hfill$\\,$\\\\\n \\includegraphics[width=0.8\\textwidth]{nloratioalphas_vs_q2.eps}\\\\\n (b)\\hfill$\\,$\\\\\n \\includegraphics[width=0.8\\textwidth]{nloratiogluon_vs_q2.eps}\n \\caption{\\sf Ratio to the 5FS values of (a)~strong coupling $\\alpha_S$ and (b)~gluon distribution at $x=0.01$, versus $Q^2$ at NLO in the 4FS and 3FS, compared to the $90\\%$~C.L.~experimental uncertainty bands.}\n \\label{fig:ratioasg}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{partons_nlo_345.eps}\n \\caption{\\sf Ratio of 3FS and 4FS NLO PDFs to the standard 5FS NLO PDFs, at $Q^2 = 10^4$~GeV$^2$.}\n \\label{fig:partons345}\n\\end{figure}\nWe make available both the 3FS and 4FS NLO PDF sets for the full variety of fits with varying charm and bottom masses discussed in the previous section, including the case of varying $m_c$ with fixed $\\alpha_S(M_Z^2)$. We do not vary the quark mass at LO since the quality of the fit is already poor at LO. However, we do provide the 3FS and 4FS PDFs for the default masses at LO since these may be useful in some Monte Carlo event generators. We also make available the full range of 3FS and 4FS sets at NNLO, although the application of these is distinctly limited at present due to the lack of cross sections with full mass dependence calculated at this order. We will comment more on this in the following section. The ratios of the 3FS and 4FS NLO PDFs to those in the 5FS are shown in Fig.~\\ref{fig:partons345} at $Q^2=10^4$~GeV$^2$. In both cases, the gluon is larger in the 4FS and, particularly, in the 3FS, due to the fact that it splits into a smaller number of quark--antiquark pairs; see also Fig.~\\ref{fig:ratioasg}(b). The effect of the increased growth of the gluon distribution is countered exactly for the leading term in $\\alpha_S(Q^2)\\ln(Q^2\/m_h^2)$ by the quicker decrease of the coupling. This is illustrated in Fig.~\\ref{fig:ratioasg}, where the change in the gluon distribution comparing different flavour numbers is, to a good approximation, the inverse of that for $\\alpha_S$. This behaviour is only violated by higher-order corrections, so the gluon-driven change in the light quarks at small $x$ is rather minimal. At higher $x$, the onset of the smaller coupling in the lower flavour number schemes means a slower evolution and consequently larger light-quark distributions, as seen in Fig.~\\ref{fig:partons345}.\n\nFinally, we also make available new eigenvector PDF sets for the 3- and 4-flavour PDFs using the default quark masses at LO, NLO and NNLO. These are evolved from the saved PDF parameters at $Q_0^2=1$~GeV$^2$ for the default MSTW fits, i.e.~the ``dynamical'' tolerance values, $T=(\\Delta\\chi^2_{\\rm global})^{1\/2}$, are those determined from the 5-flavour fit~\\cite{Martin:2009iq}. For the first time, this will allow PDF uncertainties to be consistently included in a 3FS or 4FS calculation. 
We have not yet generated the additional eigenvector PDF sets with varying $\\alpha_S$ values needed for the ``PDF+$\\alpha_S$'' uncertainty calculation~\\cite{Martin:2009bu}, although these could be provided at a future point if necessary. Note, however, that 3FS or 4FS calculations generally have large higher-order corrections, and an associated large (renormalisation and factorisation) scale dependence, which is common in processes with multiple hard scales. The theory uncertainty due to neglected higher-order corrections is therefore likely to dominate over any PDF or ``PDF+$\\alpha_S$'' uncertainty. Precise calculations of the latter quantities are therefore relatively less important than in typical 5FS calculations where higher-order corrections are more readily available.\n\n\n\n\\subsection{Comparison of $Z$ total cross-section predictions in 4FS and 5FS} \\label{sec:Ztot4FS}\nWe have already noted in Section~2 that while the parton distributions of an $n$FS, with appropriate coefficient functions, can give an adequate description of the production of an $(n+1)$th parton near threshold, the perturbative expansion becomes increasingly unreliable as the scale of the hard scattering process, $Q^2$, increases above $m_h^2$. In this section we illustrate this by considering the total $Z$ cross section at hadron colliders. In particular, we compare the cross-section predictions in the 5FS ($\\equiv$ MSTW~2008~\\cite{Martin:2009iq}) and 4FS defined previously. Since in this case $Q^2 = M_Z^2 \\gg m_b^2$, we would expect to see quantitative differences between the predictions due to the resummed $[\\alpha_S \\log(M_Z^2\/m_b^2)]^n$ terms which are implicit in the 5FS, via the evolution of the $b$-quark PDF and $\\alpha_S^{(5)}$, but absent from the 4FS. For reasons which will become apparent below, we consider the $Z$ cross section at both NLO and NNLO.\n\nIn the 4FS the $b$ quarks are not considered as partons in the initial state, but contribute to the $Z$ cross section via real and virtual contributions which first appear at ${\\cal O}(\\alpha_S^2)$ in perturbation theory. Therefore the only differences in the predicted cross sections at NLO are (i) explicit $b$-parton contributions $b + \\bar b \\to Z ( + g)$ and $g + b (\\bar b) \\to Z + b (\\bar b)$ which only contribute in the $n_f = 5$ scheme (5FS), (ii) small differences in the light-quark and gluon distributions arising from the slightly different evolution in the two schemes, see Fig.~\\ref{fig:partons345}, and (iii) the difference in the values of $\\alpha_S^{(4)}(M_Z^2)$ and $\\alpha_S^{(5)}(M_Z^2)$, see Figs.~\\ref{fig:alphas} and \\ref{fig:ratioasg}(a), which affects the size of the NLO $K$-factor. We would expect the Tevatron $Z$ cross sections in the two schemes to be more similar than those at the LHC, since the $b$-quark contributions to the 5FS cross section are much smaller at the lower collider energy. We do not explicitly include PDF and\/or $\\alpha_S$ uncertainties on the calculated cross sections, as these are presumed to be more or less the same for the 4FS and 5FS calculations. 
Contributions from top quarks are also not included, as these have been shown to be very small in Ref.~\\cite{Rijken:1995gi}, and in any case should be added to both the 4FS and 5FS cross sections.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{l|c|c|c}\n \\hline\\hline\n Tevatron, $\\sqrt{s} = 1.96$ TeV & $B\\cdot\\sigma_{\\rm NLO}^Z(4{\\rm FS})$ (nb) & $B\\cdot\\sigma_{\\rm NLO}^Z(5{\\rm FS})$ (nb)\n & $B\\cdot\\sigma_{\\rm NLO}^Z(5{\\rm FS},b)$ (nb) \\\\ \\hline\n$\\sigma_0^Z$ & 0.1989 & 0.1990 & 0.0012\\\\\n$\\sigma_1^Z$ & 0.0413 & 0.0436 & -0.0002\\\\\n \\hline\ntotal & 0.2402 & 0.2426 & 0.0010\\\\ \n \\hline\\hline\\multicolumn{4}{c}{}\\\\\\hline\\hline\n LHC, $\\sqrt{s} = 7$ TeV & $B\\cdot\\sigma_{\\rm NLO}^Z(4{\\rm FS})$ (nb) & $B\\cdot\\sigma_{\\rm NLO}^Z(5{\\rm FS})$ (nb)\n & $B\\cdot\\sigma_{\\rm NLO}^Z(5{\\rm FS},b)$ (nb) \\\\ \\hline\n$\\sigma_0^Z$ & 0.7846 & 0.8023 & 0.0205 \\\\\n$\\sigma_1^Z$ & 0.1206 & 0.1285 & -0.0020\\\\\n \\hline\ntotal & 0.9052 & 0.9308 & 0.0185\\\\ \n \\hline\\hline\\multicolumn{4}{c}{}\\\\\\hline\\hline\n LHC, $\\sqrt{s} = 14$ TeV & $B\\cdot\\sigma_{\\rm NLO}^Z(4{\\rm FS})$ (nb) & $B\\cdot\\sigma_{\\rm NLO}^Z(5{\\rm FS})$ (nb)\n & $B\\cdot\\sigma_{\\rm NLO}^Z(5{\\rm FS},b)$ (nb) \\\\ \\hline\n$\\sigma_0^Z$ & 1.6922 & 1.7545 & 0.0656\\\\\n$\\sigma_1^Z$ & 0.2303 & 0.2465 & -0.0050\\\\\n \\hline\ntotal & 1.9225 & 2.0009 & 0.0601\\\\ \n\\hline\\hline\n \\end{tabular}\n \\caption{\\sf NLO predictions for the total $Z$ cross section (multiplied by leptonic branching ratio $B$) at the Tevatron and LHC using MSTW~2008 NLO PDFs~\\cite{Martin:2009iq} as input, broken down into the $\\alpha_S^n$ ($n=0,1$) contributions, with $\\{q=u,d,s,c;\\; \\alpha_S^{(4)};\\; $4-flavour MSTW~2008 NLO PDFs$\\}$ in the 4FS calculation and $\\{q=u,d,s,c,b;\\; \\alpha_S^{(5)};\\; $5-flavour MSTW~2008 NLO PDFs$\\}$ in the 5FS calculation. The final column gives the contribution to the 5FS cross sections from processes where the $Z$ couples directly to $b$ quarks.}\n \\label{tab:z01nlotot}\n\\end{table}\nThe results at NLO are shown in Table~\\ref{tab:z01nlotot}.\\footnote{The 5FS cross sections are identical to those reported in Ref.~\\cite{Martin:2009iq}.} The 4FS cross section is smaller by $1.0\\%$, $2.8\\%$ and $3.9\\%$ at the Tevatron, LHC(7 TeV) and LHC(14 TeV) respectively. The ${\\cal O}(\\alpha_S^1)$ contributions differ by more than the ${\\cal O}(\\alpha_S^0)$ contributions, due to the $\\sim5\\%$ differences in the $\\alpha_S(M_Z^2)$ values in the two schemes. Also shown in Table~\\ref{tab:z01nlotot} (final column) are the contributions to the 5FS cross sections from the $b + \\bar b \\to Z ( + g)$ and $g + b (\\bar b) \\to Z + b (\\bar b)$ processes, which evidently account for $0.4\\%$, $2.0\\%$ and $3.0\\%$ of the total at the Tevatron, LHC(7 TeV) and LHC(14 TeV) respectively. Thus these contributions account for the bulk of the differences in the 4FS and 5FS cross sections at the LHC energies.\n\nAs noted above, when making a comparison between the 4FS and 5FS calculations at NNLO, we must include in the former the explicit real and virtual $b$-quark contributions which first appear at ${\\cal O}(\\alpha_S^2)$ in perturbation theory. 
When calculated with a non-zero $b$-quark mass\\footnote{For consistency, we use $m_b = 4.75$~GeV, the same value used in the 5FS~\\cite{Martin:2009iq} to generate the $b$-quark PDFs via the DGLAP equation.} these contributions are finite, and simply add to the 4-flavour contributions.\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=0.7\\textwidth]{diagrams.eps}\n \\caption{\\sf Sample Feynman diagrams for the various (a,b,c,d) virtual and (e,f) real $b$-quark contributions to the $Z$ cross section. The $b$-quarks are shown as the thicker fermion lines.}\n \\label{fig:diagrams}\n \\end{center}\n\\end{figure}\nSample Feynman diagrams for the various real and virtual $b$-quark contributions are shown in Fig.~\\ref{fig:diagrams}, and analytic expressions are given in Refs.~\\cite{Rijken:1995gi,Gonsalves:1991qn}. We summarise these in Appendix~\\ref{sec:appendix}, and derive small-mass (i.e.~$m_b^2\/M_Z^2 \\ll 1$) expansions which are useful in practice.\n\\begin{table}\n \\centering\n \\begin{tabular}{l|r|r|r}\n \\hline\\hline\nsubprocess & $\\Delta_b\\sigma^Z$ (Tevatron) & $\\Delta_b\\sigma^Z$ (LHC, 7 TeV) & $\\Delta_b\\sigma^Z$ (LHC, 14 TeV) \\\\\n\\hline\n$q + \\bar q \\to Z$ & $5.230 \\times 10^{-6}$ & $-2.124 \\times 10^{-5}$ & $-6.440 \\times 10^{-5}$ \\\\ \\hline\n$q + \\bar q \\to Z + g$ & $4.901 \\times 10^{-5}$ & $6.185 \\times 10^{-5}$ & $9.701 \\times 10^{-5}$ \\\\ \\hline\n$q(\\bar q) + g \\to Z + q(\\bar q)$ & $-2.862 \\times 10^{-5}$ & $-1.456 \\times 10^{-4}$ & $-2.632 \\times 10^{-4}$ \\\\ \\hline\n$q + \\bar q \\to Z + b + \\bar b$ & $3.754 \\times 10^{-4}$ & $1.450 \\times 10^{-3}$ & $3.382 \\times 10^{-3}$ \\\\ \\hline\n$ g + g \\to Z + b + \\bar b$ & $2.090 \\times 10^{-4}$ & $5.287 \\times 10^{-3}$ & $1.997 \\times 10^{-2}$ \\\\ \\hline\ntotal & $6.100 \\times 10^{-4}$ & $6.632 \\times 10^{-3}$ & $2.312 \\times 10^{-2}$\\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{\\sf Additional ${\\cal O}(\\alpha_S^2)$ contributions to the total $Z$ 4FS NNLO cross section in nb (multiplied by leptonic branching ratio) at the Tevatron and LHC arising from real and virtual $b$-quark processes.}\n \\label{tab:ztotb}\n\\end{table}\nThe results are presented in Table~\\ref{tab:ztotb}, for Tevatron and ($\\sqrt{s} = 7, 14$~TeV) LHC energies. Evidently the $2\\to 3$ process contributions, Eq.~(\\ref{eq:3}), are by far the dominant ones, and especially so at the LHC where they are two orders of magnitude larger than the rest of the ${\\cal O}(\\alpha_S^2)$ contributions combined. 
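As a simple cross-check of Table~\\ref{tab:ztotb}, the quoted totals and the dominance of the $2\\to3$ channels can be reproduced directly from the tabulated subprocess values; a minimal Python sketch (with the numbers transcribed from the table, in nb) is\n\\begin{verbatim}\n# Delta_b sigma^Z subprocess contributions in nb, in the order listed in\n# the table: qqbar->Z, qqbar->Zg, qg->Zq, qqbar->Zbb, gg->Zbb\ncontrib = {\n    'Tevatron':   [5.230e-6, 4.901e-5, -2.862e-5, 3.754e-4, 2.090e-4],\n    'LHC 7 TeV':  [-2.124e-5, 6.185e-5, -1.456e-4, 1.450e-3, 5.287e-3],\n    'LHC 14 TeV': [-6.440e-5, 9.701e-5, -2.632e-4, 3.382e-3, 1.997e-2],\n}\nfor collider, c in contrib.items():\n    total = sum(c)\n    share = (c[3] + c[4]) \/ total   # the two 2->3 subprocesses\n    print(f'{collider}: total = {total:.3e} nb, 2->3 share = {share:.1%}')\n\\end{verbatim}\nwhich reproduces the totals in the last row of the table; the $2\\to3$ share can even exceed $100\\%$ at the LHC because the remaining ${\\cal O}(\\alpha_S^2)$ terms there sum to a negative value. 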
\n\\begin{table}\n \\centering\n \\begin{tabular}{l|c|c|c}\n \\hline\\hline\n Tevatron, $\\sqrt{s} = 1.96$ TeV & $B\\cdot\\sigma_{\\rm NNLO}^Z(4{\\rm FS})$ (nb) & $B\\cdot\\sigma_{\\rm NNLO}^Z(5{\\rm FS})$ (nb)\n & $B\\cdot\\sigma_{\\rm NNLO}^Z(5{\\rm FS},b)$ (nb)\\\\ \\hline\n$\\sigma_0^Z$ & 0.2013 & 0.2016 & 0.0012 \\\\\n$\\sigma_1^Z$ & 0.0409 & 0.0431 & -0.0002\\\\\n$\\sigma_2^Z$ & 0.0063 & 0.0060 & -0.0003\\\\ \\hline\ntotal & 0.2485 & 0.2507 & 0.0008 \\\\ \\hline\n$\\Delta_b\\sigma^Z$ & 0.0006 & $-$ & \\\\ \\hline\ntotal + $\\Delta_b\\sigma^Z$ & 0.2491 & 0.2507 & \\\\\n \\hline\\hline\\multicolumn{4}{c}{}\\\\\\hline\\hline\n LHC, $\\sqrt{s} = 7$ TeV & $B\\cdot\\sigma_{\\rm NNLO}^Z(4{\\rm FS})$ (nb) & $B\\cdot\\sigma_{\\rm NNLO}^Z(5{\\rm FS})$ (nb)\n & $B\\cdot\\sigma_{\\rm NNLO}^Z(5{\\rm FS},b)$ (nb)\\\\ \\hline\n$\\sigma_0^Z$ & 0.8083 & 0.8266 & 0.0202\\\\\n$\\sigma_1^Z$ & 0.1239 & 0.1322 & -0.0020\\\\\n$\\sigma_2^Z$ & 0.0037 & -0.0002 & -0.0037\\\\ \\hline\ntotal & 0.9359 & 0.9586 & 0.0145\\\\ \\hline\n$\\Delta_b\\sigma^Z$ & 0.0066 & $-$ & \\\\ \\hline\ntotal + $\\Delta_b\\sigma^Z$ & 0.9426 & 0.9586 & \\\\\n \\hline\\hline\\multicolumn{4}{c}{}\\\\\\hline\\hline\n LHC, $\\sqrt{s} = 14$ TeV & $B\\cdot\\sigma_{\\rm NNLO}^Z(4{\\rm FS})$ (nb) & $B\\cdot\\sigma_{\\rm NNLO}^Z(5{\\rm FS})$ (nb)\n & $B\\cdot\\sigma_{\\rm NNLO}^Z(5{\\rm FS},b)$ (nb)\\\\ \\hline\n$\\sigma_0^Z$ & 1.7472 & 1.8110 & 0.0641\\\\\n$\\sigma_1^Z$ & 0.2384 & 0.2557 & -0.0050\\\\\n$\\sigma_2^Z$ & -0.0047 & -0.0153 & -0.0107\\\\ \\hline\ntotal & 1.9809 & 2.0514 & 0.0484\\\\ \\hline\n$\\Delta_b\\sigma^Z$ & 0.0231 & $-$ & \\\\ \\hline\ntotal + $\\Delta_b\\sigma^Z$ & 2.0040 & 2.0514 & \\\\\n\\hline\\hline\n \\end{tabular}\n \\caption{\\sf NNLO predictions for the total $Z$ cross section (multiplied by leptonic branching ratio $B$) at the Tevatron and LHC using MSTW~2008 NNLO PDFs~\\cite{Martin:2009iq} as input, broken down into the $\\alpha_S^n$ ($n=0,1,2$) contributions, with $\\{q=u,d,s,c;\\;\\alpha_S^{(4)};\\;$4-flavour MSTW~2008~NNLO PDFs$\\}$ in the 4FS calculation and $\\{q=u,d,s,c,b;\\; \\alpha_S^{(5)};\\;$5-flavour MSTW~2008~NNLO PDFs$\\}$ in the 5FS calculation. The final column gives the contribution to the 5FS cross sections from processes where the $Z$ couples directly to $b$ quarks. The additional ${\\cal O}(\\alpha_S^2)$ contributions to the cross section arising from real and virtual $b$-quark processes, taken from Table~\\ref{tab:ztotb}, are added to the 4FS cross section in the last line of each sub-table.}\n \\label{tab:z012nnlotot}\n\\end{table}\nIn Table~\\ref{tab:z012nnlotot} we add these real and virtual $b$-quark contributions to the bulk of the NNLO cross section coming from 4FS light quarks and gluons, and compare the total with the benchmark 5FS NNLO results presented in Ref.~\\cite{Martin:2009iq}, in which the $b$ quark is treated as a massless parton in the subprocess cross sections, i.e.~both in the initial and final states and in loop contributions. 
Also shown in Table~\\ref{tab:z012nnlotot} are the ($b\\bar b, bg, \\ldots$) contributions in which the $Z$ couples directly to a $b$ quark and in which there is at least one $b$ or $\\bar b$ quark in the initial state.\\footnote{This excludes small ${\\cal O}(\\alpha_S^2)$ contributions initiated by light quarks and gluons, e.g.~$q \\bar q, gg \\to Z b\\bar b$, in which the $Z$ couples to $b$ quarks.} These represent $0.3\\%$, $1.5\\%$ and $2.4\\%$ of the total 5FS NNLO cross sections at the Tevatron, LHC(7 TeV) and LHC(14 TeV) respectively.\n\nFrom Table~\\ref{tab:z012nnlotot}, we see that at the Tevatron the 5FS and 4FS$ + b$-quark total cross sections are the same to within 1$\\%$. At the LHC, however, the additional $b$-quark contributions to the 4FS cross section are at the $+1\\%$ level and so do not completely compensate the ``missing'' $b \\bar b \\to Z$ contributions to the 5FS cross section. To be precise,\n\\begin{equation}\n B\\cdot\\sigma_{\\rm NNLO}^Z(5{\\rm FS},b) = 48.4\\; (14.5) \\ {\\rm pb} > \\Delta_b\\sigma^Z = 23.1\\; (6.6)\\ {\\rm pb}\n \\label{eq:45diff}\n\\end{equation}\nat $\\sqrt{s} = 14$~TeV (7~TeV), which results in the ``full'' 4FS total cross section being $2.3\\%$ ($1.7\\%$) smaller than the 5FS cross section. We interpret this as meaning that the DGLAP-resummed $[\\alpha_S \\ln(M_Z^2\/m_b^2)]^n$ contributions that are absorbed into the $b$ parton distribution are numerically important. \n\nOur results are in broad agreement with the study of Ref.~\\cite{Maltoni:2005wd}, in which the different calculational methods for heavy particles produced in association with $b$ quarks, $gg\\to Xb\\bar b$ (4FS) versus $b \\bar b \\to X$ (5FS), were studied in detail for $X=H,Z$. In particular, it was shown that with the canonical choice of scale $\\mu=M_Z$, the LO $gg\\to Zb \\bar b$ cross section at the LHC was a factor of two smaller than the NNLO $b \\bar b \\to Z$ cross section, consistent with our results in Eq.~(\\ref{eq:45diff}) above. It was also shown in Ref.~\\cite{Maltoni:2005wd} that the agreement between the 4FS and 5FS calculations improves if the scale $\\mu$ is reduced to around $M_Z\/3$ (found by choosing $\\mu\\sim \\sqrt{-t}$ near the end of the collinear plateau in the quantity $-t\\,{\\rm d}\\sigma\/{\\rm d}t$ for the process $gb\\to Zb$). The NNLO $b \\bar b \\to Z$ cross section is approximately scale independent, while the LO $gg\\to Zb \\bar b$ cross section increases with decreasing scale, primarily because of the overall $\\alpha_S^2$ factor.\n\nOf course the explicit $gg \\to Z b \\bar b$ contribution corresponds only to the $n=1$ term in the resummed $[\\alpha_S \\ln(M_Z^2\/m_b^2)]^n$ perturbation series implicit in the $b\\bar{b}\\to Z$ 5FS calculation. Complete agreement between the two schemes {\\it would} be obtained in a fully all-orders perturbative calculation. Note that since at all collider energies $\\Delta_b\\sigma^Z$ is dominated by the contributions involving the $Zb \\bar b$ final state, we would expect that higher-order corrections to the $Z b \\bar b$ production process, which is here calculated only at leading order, will generate the $[\\alpha_S \\ln(M_Z^2\/m_b^2)]^n$ terms implicit in the $b$-quark PDF. The NLO (i.e.~${\\cal O}(\\alpha_S^3)$) corrections to the $Z b \\bar b$ total cross sections with $m_b \\neq 0$ have recently been calculated~\\cite{FebresCordero:2006sj,FebresCordero:2008ci,Cordero:2009kv,zbbCMS}. 
Although the results presented in Ref.~\\cite{zbbCMS} impose a minimum $p_T^b$ ($>5$~GeV), the $K$-factor\\footnote{The $K$-factor is here defined as the ratio of cross sections calculated using the dynamical scale $\\mu^2 = M_Z^2 + (p_T^{b,1})^2 + (p_T^{b,2})^2$, and with the standard 5-flavour CTEQ6M\/CTEQ6L1 PDFs~\\cite{Pumplin:2002vw} at NLO\/LO.} is evidently rather independent of $p_T^b$ at small $p_T^b$, suggesting that the NLO\/LO $K$-factor for the fully inclusive $Zb\\bar b$ cross section is approximately 1.5 for the LHC at $\\sqrt{s} = 14$~TeV. It is therefore plausible that still higher-order perturbative corrections can account for the factor of 2 difference in the 4FS and 5FS cross sections, Eq.~(\\ref{eq:45diff}). This conclusion is supported by the fact that the scale dependence of the 4FS calculation for $Zb\\bar b$ production at NLO is only mildly weaker than at LO~\\cite{FebresCordero:2006sj,FebresCordero:2008ci,Cordero:2009kv,zbbCMS}.\n\nIn Ref.~\\cite{Thorne:1997ga} it was shown that for $x$ in the region of 0.01--0.05, the relevant region for $Z+b\\bar{b}$ production at the 14~TeV LHC, the ratio of the GM-VFNS structure function $F_2^h$ to the FFNS structure function was $\\sim 1.5$ at LO at high scales. This represents the effect of resumming the $[\\alpha_S\\ln(Q^2\/m_h^2)]^n$ contributions, rather than keeping only the contribution at fixed order in $\\alpha_S$, for a single incoming parton. For hadron--hadron processes we would expect the difference to be about $1.5^2>2$, exactly as observed. At NLO for structure functions at this $x$ the ratio is reduced to $\\sim1.1$, so inclusion of the extra $\\ln(Q^2\/m_h^2)$ removes much of the discrepancy present at LO. However, for hadron--hadron processes, NLO in the fixed flavour scheme only contains the extra large logarithm for one of the two incoming partons, so the ratio between the 5FS and 4FS would be roughly $1.5\\times 1.1 \\approx 1.6$, again as we expect to see in practice. It would only be at NNLO in the 4FS, when the double logarithm for both incoming PDFs is included, that we would expect to see the reduction to roughly $1.1^2 \\approx 1.2$ in the ratio of the 5FS to 4FS cross sections. This is a general feature, i.e.~the 4FS (or 3FS) will converge to the resummed 5FS results more slowly for hadronic processes than for those in DIS.\n\nIn summary, the 5FS PDFs are clearly the most appropriate to use for inclusive quantities such as $\\sigma^Z$ (or $\\sigma^{\\rm dijet}$, etc.) at high $Q^2$ scales, where resummation of the $[\\alpha_S \\ln(Q^2\/m_b^2)]^n$ contributions is evidently important. However, for more exclusive quantities like $\\sigma^{Zb\\bar b}$, where the $b$ quarks are measured in the final state, the 4FS parton distributions are more appropriate since the 5FS calculation gives no information on the spectator $b$ and $\\bar b$ quarks which must be present in the final state for the $b$- and $\\bar{b}$-initiated processes. Note that if only the \\emph{total} cross section is required, without cuts imposed on the $b$-quarks, then a 5FS is still better, e.g.~for $Zb\\bar{b}$ a 5FS calculation can be used for $b\\bar{b}\\to Z$ at NNLO, where the $b$-quarks couple directly to the $Z$, and so there are implicitly also two $b$-quarks in the final state~\\cite{Maltoni:2005wd}. However, if cuts must be applied to the $b$-quarks, as is the case in the experimental measurement, then a 4FS calculation is more appropriate. 
Similar remarks apply to the calculation of $Hb\\bar{b}$ production~\\cite{Dittmaier:2003ej,Dawson:2003kb,Campbell:2004pu} and other processes where $b$-quarks are detected in the final state. In a recent study~\\cite{Dittmaier:2009np}, the production of a charged Higgs boson in association with a $t$ quark was considered in both the 4FS ($gg \\to t \\bar b H^-$ etc.) and 5FS ($g b \\to t H^-$ etc.) to NLO in pQCD, using the appropriate MRST 2004 PDF sets~\\cite{Martin:2006qz,Martin:2004ir}. The central predictions in the 5FS were shown to be approximately 40$\\%$ larger than those in the 4FS. Even taking the scale uncertainty into account, the 4FS and 5FS NLO cross sections are barely consistent.\n\nAn ideal calculation would combine the best features of the 4FS and 5FS so as to resum $[\\alpha_S\\ln(Q^2\/m_b^2)]^n$ terms while also retaining a finite $m_b$ dependence in the partonic cross section (rather than setting $m_b$ to zero as done in the 5FS). This matching has, of course, been done for structure functions in DIS using different variants of the GM-VFNS, but applications to hadron collider cross sections are more involved and have so far been limited (see Ref.~\\cite{Cacciari:1998it} for an application of the GM-VFNS to the $p_T$ spectrum in heavy-flavour hadroproduction). However, for processes where the hard scale is, for example, $Q^2\\sim M_Z^2$, the GM-VFNS calculation will differ from the ZM-VFNS (5FS) only by terms $\\mathcal{O}(m_b^2\/M_Z^2\\sim 0.3\\%)$. We would therefore expect the complete GM-VFNS calculation to give results very close to the pure ZM-VFNS (5FS) for the total cross section.\n\nNote that rather than producing separate 4-flavour PDFs for use in a 4FS calculation, an alternative approach (e.g.~used at NLO in Refs.~\\cite{Campbell:2009ss,Campbell:2009gj}) is to use the conventional 5-flavour PDFs, then pass to the 4-flavour scheme using counterterms given in Ref.~\\cite{Cacciari:1998it}. However, these counterterms~\\cite{Cacciari:1998it} are equivalent to using the inverse of transition matrix elements, but only out to order $\\alpha_S$. One could indeed use the transition matrix elements themselves to go from 4-flavour to 5-flavour PDFs, except this would not sum the logarithmic terms, $[\\alpha_S \\ln(Q^2\/m_b^2)]^n$, in the PDF evolution.\\footnote{See Fig.~9 of Ref.~\\cite{Alekhin:2009ni} for a comparison of 5-flavour NNLO PDFs obtained from 3-flavour PDFs either by evolution or by applying fixed-order matching conditions; the differences will be larger at NLO.} Hence, the use of counterterms~\\cite{Cacciari:1998it} is a less complete way of going from a 5FS to a 4FS, and we instead recommend that dedicated 4-flavour PDFs be used in 4FS calculations.\\footnote{However, for the 4FS calculation of $t$-channel single-top production at the Tevatron and LHC~\\cite{Campbell:2009ss}, it was explicitly checked that results obtained with the dedicated 4-flavour MRST set~\\cite{Martin:2006qz} were consistent (within the numerical integration precision) with those obtained with the corresponding 5-flavour MRST set~\\cite{Martin:2004ir} plus appropriate counterterms~\\cite{Cacciari:1998it}. We thank R.~Frederix and F.~Tramontano for discussions on this issue.} Previously, a major advantage of using 5-flavour PDFs with counterterms was that eigenvector PDF sets to calculate PDF uncertainties were not made available for existing 4-flavour PDFs~\\cite{Martin:2006qz}. 
However, we have now provided eigenvector PDF sets also for the 4-flavour PDFs, and this advantage therefore no longer holds.\n\n\n\n\\section{Conclusions} \\label{sec:conclusions}\n\nWe have repeated the NLO and NNLO MSTW~2008 global PDF analyses~\\cite{Martin:2009iq} for a range of heavy-quark masses about their ``default'' values $m_c=1.40$~GeV and $m_b=4.75$~GeV. For the charm quark, we found that the global data prefer the values $m_c = 1.45$ (1.26)~GeV at NLO (NNLO). The most discriminating data are, as anticipated, the HERA data for $F_2^c$~\\cite{Adloff:1996xq,Adloff:2001zj,Aktas:2005iw,Aktas:2004az,Breitweg:1999ad,Chekanov:2003rb,Chekanov:2007ch}. On the other hand, for the bottom quark, the data included in the global fit (excluding $F_2^b$) do not put a meaningful constraint on the value of $m_b$, while the HERA $F_2^b$ data slightly favour $m_b \\approx 4.75$--5~GeV. We pointed out that precise determinations of the heavy-quark masses in the $\\overline{\\rm MS}$ scheme are affected by poorly convergent perturbative series in the conversion to the pole masses, particularly for the case of the charm quark. Recent precise combined HERA data on $\\sigma_r^{\\rm NC}$~\\cite{Aaron:2009wt} and $F_2^c$~\\cite{HERAF2charm} will in future be able to narrow the favoured range of the charm-quark pole mass $m_c$. Note, however, that uncertainties from the choice of GM-VFNS~\\cite{Thorne:2010pa} mean that the favoured value of $m_c$ will be correlated to some extent with the particular choice of GM-VFNS, although this correlation will be much smaller at NNLO than at NLO, as will other uncertainties arising from the choice of GM-VFNS~\\cite{Thorne:2010pa}.\n\nWe explored the effect of the values of the heavy-quark masses on $W$, $Z$ and Higgs production at the Tevatron and LHC. Varying the charm mass by $\\pm 0.15$ GeV changes the cross sections by $\\pm 1\\%$ or less at Tevatron energies and by $\\pm 2\\%$ at the LHC energy of $\\sqrt{s}=14$ TeV. The various weak boson cross-section \\emph{ratios} are essentially unchanged. The predictions for $W$, $Z$ and Higgs cross sections are much less dependent on the value taken for $m_b$. We provided a recommendation on how to include the uncertainty arising from the choices of $m_c$ and $m_b$ in a generic cross-section calculation.\n\nWe also presented PDF sets obtained in a framework with different numbers of active quark flavours, as done previously in the context of the MRST~2004 analysis~\\cite{Martin:2006qz}. Explicitly, we determined 4-flavour PDF sets in which charm becomes an active parton, but bottom does not, and 3-flavour PDF sets where charm and bottom are not partons, but only appear in the final state. The analogous 5-flavour parton sets are simply those of MSTW~2008~\\cite{Martin:2009iq}. Of course, the latter, which in the absence of top correspond to PDFs of a (general-mass) variable flavour number scheme, are generally to be preferred, particularly for inclusive quantities at high $Q^2$ scales where the resummation of the $[\\alpha_S\\ln(Q^2\/m_b^2)]^n$ contributions is essential. However, for more exclusive processes, such as $Zb\\bar{b},~Hb\\bar{b},~\\ldots$, where $b$-quarks are observed in the final state, the 4-flavour parton distributions are more appropriate. 
For illustration, we computed the various components of the $Z$ production cross section to ${\\cal O}(\\alpha_S^2)$ at the Tevatron and LHC, and compared the predictions obtained using the 4FS and 5FS ($\\equiv$ MSTW~2008~\\cite{Martin:2009iq}) parton sets.\n\nThe additional grids for all PDF sets discussed in this paper are made publicly available~\\cite{mstwpdf}, for use either with the standalone MSTW interpolation code or via the \\textsc{lhapdf} interface~\\cite{Whalley:2005nh}. To be precise, grids for the following PDF sets are made available:\n\\begin{itemize}\n \\item For the default quark masses, $m_c=1.40$~GeV and $m_b=4.75$~GeV, we provide LO, NLO and NNLO grids for 3- and 4-flavour PDFs (central set and 40 eigenvector sets at both 68\\% and 90\\% C.L.). These grids complement the existing grids for the 5-flavour PDFs.\n \\item For $m_c$ in the range 1.05~GeV to 1.75~GeV (in steps of 0.05~GeV), we provide NLO and NNLO grids for 3- and 5-flavour PDFs (central set only) for both free and fixed $\\alpha_S(M_Z^2)$.\n \\item For $m_b$ in the range 4.00~GeV to 5.50~GeV (in steps of 0.25~GeV), we provide NLO and NNLO grids for 4- and 5-flavour PDFs (central set only) for free $\\alpha_S(M_Z^2)$ only.\n\\end{itemize}\nThese additional grids should prove to be useful in future for detailed studies of a variety of collider processes involving heavy quarks.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nThe three-gluon vertex is one of the fundamental Green's functions of QCD. This vertex allows the computation of the strong coupling constant and the measurement of a static potential between color charges. Herein we report on an upgrade of the lattice computation of this vertex performed by some of the authors in \\cite{duarte2016, proc2016}.\n\nThe three-gluon correlation function $G^{a_1 a_2 a_3}_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)$ is given by\n\\begin{equation}\n \\langle A^{a_1}_{\\mu_1} (p_1) \\, A^{a_2}_{\\mu_2} (p_2) \\, A^{a_3}_{\\mu_3} (p_3) \\rangle = V \\, \\delta( p_1 + p_2 + p_3) ~\n {G^{a_1 a_2 a_3}_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)}\n\\end{equation}\nand can be written in terms of the gluon propagator $D^{ab}_{\\mu\\nu}(p^2)$ and the one-particle irreducible (1PI) vertex $\\Gamma$ using\n\\begin{equation}\n {G^{a_1a_2a_3}_{\\mu_1\\mu_2\\mu_3} (p_1, p_2, p_3)} = D^{a_1b_1}_{\\mu_1\\nu_1}(p_1) ~ D^{a_2b_2}_{\\mu_2\\nu_2}(p_2) ~ D^{a_3b_3}_{\\mu_3\\nu_3}(p_3) \n {\\Gamma^{b_1b_2b_3}_{\\nu_1\\nu_2\\nu_3} (p_1, p_2, p_3)} .\n\\end{equation}\nBose symmetry requires the 1PI vertex to be symmetric under permutations of any pair $(p_i, a_i, \\mu_i)$. Given that\n\\begin{equation}\n \\Gamma^{a_1 a_2 a_3}_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3) = f_{a_1 a_2 a_3} \\Gamma_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)\n\\end{equation}\nthen the function $\\Gamma_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)$ must be antisymmetric under the interchange of any pair $(p_i, \\mu_i)$.\n\nA complete description of $\\Gamma_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)$ in the continuum requires six Lorentz invariant form factors, two associated with the transverse component $\\Gamma^{(t)}$ \nand the remaining four associated with the longitudinal component $\\Gamma^{(l)}$ \\cite{ballchiu}.\n\n\n\n \n\\section{Asymmetric momentum configuration}\n\nIn this work we consider the computation of the three-gluon vertex in the asymmetric momentum configuration $p_2=0$, as in \\cite{alles, duarte2016}. In this case, the correlation function can be written as\n\\begin{equation}\n G_{\\mu_1\\mu_2\\mu_3} (p, 0, -p) = V \\frac{N_c(N^2_c-1)}{4} \\left[D(p^2)\\right]^2 \\, D(0) \\frac{\\Gamma (p^2)}{3} ~ ~ p_{\\mu_2} ~T_{\\mu_1\\mu_3} (p).\n\\end{equation}\nThe contraction of the Lorentz $\\mu_1$ and $\\mu_3$ indices, together with the contraction with the momentum $p_\\alpha$, gives\n\\begin{equation}\n G_{\\mu \\, \\alpha \\,\\mu} (p, 0, -p) \\, p_\\alpha = V \\frac{N_c(N^2_c-1)}{4}\n \\, \\left[D(p^2)\\right]^2 \\, D(0) ~~\\Gamma (p^2) ~~ p^2 .\n\\end{equation}\nFrom this expression it is possible to extract the form factor $\\Gamma (p^2)$. However, a lattice measurement of $\\Gamma (p^2)$ requires the computation of the ratio\n\\begin{equation}\n G_{\\mu \\alpha \\mu} (p, 0, -p) p_\\alpha \/ \\left[D(p^2)\\right]^2 \\, D(0) \n\\end{equation}\nand the extraction of $\\Gamma (p^2)$ from this ratio gives rise to large statistical fluctuations at high momenta, where $D(p^2)$ becomes quite small. 
In fact, assuming Gaussian error propagation, it is possible to show that the statistical error on $\\Gamma (p^2)$ behaves as $\\Delta \\Gamma(p^2) \\sim p^4$ in the UV regime \\cite{duarte2016}.\n\n\n\n\n\\section{Handling of noise, lattice artefacts}\n\nIn order to try to deal with the large statistical fluctuations at high momenta, we considered a few strategies \\cite{guitese}:\n\n\\begin{itemize}\n\\item explore the ambiguity in the scale setting and perform a binning in the momentum --- all data points in each bin are replaced by a weighted average of the data points;\n\\item perform an $H(4)$ extrapolation of the lattice data \\cite{becirevic1999, soto2009} --- such a procedure is based on the remnant $H(4)$ symmetry group\nassociated with a hypercubic lattice. On the lattice, a scalar quantity $F$ is a function of the $H(4)$ invariants\n\\begin{displaymath}\n p^2 = p^{[2]} = \\sum_\\mu p^2_\\mu , \\quad\n p^{[4]} = \\sum_\\mu p^4_\\mu , \\quad\n p^{[6]} = \\sum_\\mu p^6_\\mu , \\quad\n p^{[8]} = \\sum_\\mu p^8_\\mu ,\n\\end{displaymath}\ni.e. $F_{Lat} = F(p^{[2]}, p^{[4]}, p^{[6]}, p^{[8]})$. The continuum limit will be given by $F(p^{[2]}, 0, 0, 0)$ up to corrections $\\mathcal{O}(a^2)$. Having several data points for the same $p^2$ but different $p^{[4]}$, $p^{[6]}$, $p^{[8]}$, an extrapolation of $F_{Lat}$ to the continuum limit can be done, assuming that it can be written as a power series of the H(4) invariants. Note that, in this work, only a linear extrapolation in $p^{[4]}$ is considered.\n\n\n\\end{itemize}\n\n \n\n\\section{Lattice setup}\n\nIn this work we consider the $64^4$ ensemble of 2000 configurations already studied in \\cite{duarte2016}, together with an $80^4$ ensemble of 1800 configurations, both generated with the Wilson gauge action at $\\beta=6.0$. The rotation to the Landau gauge has been performed using the Fourier accelerated Steepest Descent method \\cite{davies} implemented with the help of Chroma \\cite{chroma} and PFFT \\cite{pfft} libraries. The gluon field is computed using the definition\n\\begin{equation}\na g_0 A_\\mu (x + a \\hat{e}_\\mu) = \\frac{ U_\\mu (x) - U^\\dagger_\\mu (x)}{ 2 i } \n - \\frac{\\mbox{Tr} \\left[ U_\\mu (x) - U^\\dagger_\\mu (x) \\right]}{6 i} \n\\end{equation}\nwith the momentum space gluon field given by\n\\begin{equation}\nA_\\mu (\\hat{p}) = \\sum_x e^{- i \\hat{p} (x + a \\hat{e}_\\mu) } \\, A_\\mu (x + a \\hat{e}_\\mu) \\,\\,,\\,\\, \\hat{p}_\\mu = \\frac{2 \\, \\pi \\, n_\\mu}{a \\, L_\\mu}.\n\\end{equation}\n\n\n\\section{Results}\n\nIn Figure \\ref{binned} we compare the original and binned data for $\\Gamma (p^2)$. The binning of the data suppresses the large statistical errors in the high momentum region and produces a well defined and smooth curve.\n\n\\begin{figure}[h]\n\\vspace{0.55cm}\n \\centering\n \\subfigure[$64^4$ lattice.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma_64x4.eps} \\label{binn64}} \\qquad\n \\subfigure[$80^4$ lattice.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma_80x4.eps} \\label{binn80}}\n \\caption{Original and binned data for $\\Gamma (p^2)$.}\n \\label{binned}\n\\end{figure}\n\nNext, in Figure \\ref{binnedoverp2} we compare the binned data for both lattices. The results of the two volumes agree within errors, suggesting that finite volume effects are small. 
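As an aside, the linear $H(4)$ extrapolation described in the previous section is straightforward to implement; a minimal sketch (the function name and data layout are illustrative only, not those of our actual analysis code) reads\n\\begin{verbatim}\nimport numpy as np\n\ndef h4_extrapolate(p2, p4, F, dF):\n    # For each group of lattice points sharing the same p^2 but having\n    # different p^[4], fit F = F0 + c * p^[4] by weighted least squares\n    # and keep the intercept F0 as the continuum-limit estimate.\n    result = {}\n    for val in np.unique(p2):\n        sel = (p2 == val)\n        if sel.sum() < 2:   # single H(4) orbit: no extrapolation possible\n            result[val] = F[sel][0]\n            continue\n        w = 1.0 \/ dF[sel]**2\n        A = np.vstack([np.ones(sel.sum()), p4[sel]]).T\n        cov = np.linalg.inv(A.T @ (w[:, None] * A))\n        F0, c = cov @ A.T @ (w * F[sel])\n        result[val] = F0\n    return result\n\\end{verbatim}\nwith the statistical error on $F_0$ given by the corresponding diagonal element of the parameter covariance matrix. 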
\n\n\\begin{figure}[h]\n\\vspace{0.65cm}\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{plots\/gamma_over_p2_compare.eps}\n\\end{center}\n \\caption{Comparison of binned data for $\\Gamma (p^2)$.}\n \\label{binnedoverp2}\n\\end{figure}\n\nIn Figure \\ref{H4extr} we compare the H(4) extrapolation of the $64^4$ lattice data with the binning of the original data. We observe that the H(4) extrapolation pushes the vertex to higher values in the high momentum regime. Nevertheless, in the infrared region, the extrapolated data is compatible with the original data, for both lattice volumes --- see Figure \\ref{H4infra}.\n\n\\begin{figure}[h]\n\\vspace{0.65cm}\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{plots\/all_gamma_over_p2_64_H4.eps}\n\\end{center}\n \\caption{Results of the H(4) extrapolation of $\\Gamma (p^2)$ on the $64^4$ lattice volume.}\n \\label{H4extr}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\vspace{0.55cm}\n \\centering\n \\subfigure[$p^2 \\Gamma(p^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/all_gamma.eps} \\label{H4infra-p2G}} \\qquad\n \\subfigure[$\\Gamma(p^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/all_gamma_over_p2.eps} \\label{H4infra-G}}\n \\caption{Original and H(4) data for both lattice volumes for low momenta.}\n \\label{H4infra}\n\\end{figure}\n\n\\section{Infrared behaviour of $\\Gamma(p^2)$}\n\nNo zero crossing of $\\Gamma(p^2)$, an indication of ghost dominance in the\ninfrared, is seen in the lattice data reported here. In order to check for\na change of sign in $\\Gamma(p^2)$, in this section we explore the infrared\nbehaviour of the lattice $\\Gamma(p^2)$, using the $80^4$ data for momenta below 1~GeV, and fit the data to $\\Gamma_1(p^2)=A + Z \\ln(p^2)$ and $ \\Gamma_2(p^2)=A + Z \\ln(p^2+m^2)$. The first is a typical ansatz considered in recent studies of the zero crossing, see \\cite{guitese} for details, and the second is a variant of the first which includes an infrared logarithmic regularizing mass. \n\nIn Figure \\ref{zerocrossing} we plot the best fits of the lattice data for both fitting functions, obtained through the minimization of $\\chi^2\/d.o.f.$\nFor $\\Gamma_1(p^2)$, we obtained $\\chi^2\/d.o.f. = 1.23$ with $A=0.2395(16)$ and $Z=0.0646(21)$. Accordingly, the zero crossing occurs at $p_0=\\exp[-A\/(2Z)]=157$~MeV.\nFor $\\Gamma_2(p^2)$, the parameters take the values $A=0.208(24)$, $Z=0.124(27)$ and $m=0.61(15)$~GeV, with a $\\chi^2\/d.o.f. = 0.95$. As shown in the right plot of Figure \\ref{zerocrossing}, in this case there is no zero crossing. \n\n\\begin{figure}[h]\n\\vspace{0.55cm}\n \\centering\n \\subfigure[$\\Gamma (p^2) = A + Z \\ln(p^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma80-fit1.eps} \\label{zerocrossing-fit1}} \\qquad\n \\subfigure[$\\Gamma (p^2) = A + Z \\ln(p^2+m^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma80-fit2.eps} \\label{zerocrossing-fit2}}\n \\caption{Infrared $80^4$ lattice data for $\\Gamma(p^2)$ together with some fitting functions. }\n \\label{zerocrossing}\n\\end{figure}\n\n\n\\section{Conclusions and outlook}\n\nIn this paper we describe an improved calculation of the three-gluon vertex on the lattice, for the asymmetric momentum configuration. We use two different lattice volumes $(6.5$ fm$)^4$ and $(8.2$ fm$)^4$, with a common lattice spacing of $a = 0.102$ fm. We show that an H(4) extrapolation of the lattice data pushes the vertex to higher values in the UV regime. 
We proceed with a functional study in the infrared region, considering some functional forms compatible with a zero crossing and with an IR divergence.\n\nFurther momentum configurations will be explored in the near future.\n\n\n\n\n\n\n\\acknowledgments\n\n\n\nThis work was supported by national funds from FCT\nFunda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia, I. P., within the\nProjects UIDB\/04564\/2020, UIDP\/04564\/2020, and CERN\/FIS-COM\/0029\/2017.\nG. T. R. C. acknowledges financial support from FCT\n under Project UIDB\/04564\/2020, and also from the Generalitat Valenciana\n (genT program CIDEGENT\/2019\/040) and Ministerio de Ciencia e\n Innovacion PID2020-113644GB-I00.\nP. J. S. acknowledges financial support from FCT\n under Contract CEECIND\/00488\/2017.\nThis work was granted access to the HPC resources of\nthe PDC Center for High Performance Computing at the\nKTH Royal Institute of Technology, Sweden, made\navailable within the Distributed European Computing\nInitiative by the PRACE-2IP, receiving funding from the\nEuropean Community's Seventh Framework Programme\n(FP7\/2007-2013) under Grant agreement no. RI-283493.\nThe use of Lindgren has been provided under DECI-9\nproject COIMBRALATT. We acknowledge that the results\nof this research have been achieved using the PRACE-3IP\nproject (FP7 RI312763) resource Sisu based in Finland at\nCSC. The use of Sisu has been provided under DECI-12\nproject COIMBRALATT2. We acknowledge the\nLaboratory for Advanced Computing at the University of\nCoimbra \\cite{lca} for providing access to the HPC resource\nNavigator. The authors acknowledge Minho Advanced Computing Center\n\\cite{macc} for providing HPC resources that have contributed to\nthe research results reported within this paper. This work was\nproduced with the support of MACC and it was funded by FCT I.P.\nunder the Advanced Computing Project CPCA\/A2\/6816\/2020, platform Bob.\nThis work was produced with the support of INCD \\cite{incd} funded by FCT and\nFEDER under the project 01\/SAICT\/2016 n\u00ba 022153.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction} \\label{sec:intro}\n\n\n\nThe one-dimensional Ising model has played an important role in statistical mechanics since its introduction in 1920 \\cite{Lenz1920}. This model possesses the virtue of exact solvability in the ensemble of fixed ordering field, $h$, and it also represents an important example of an infinite system with short range interactions that is in addition experimentally realizable \\cite{Armitage1,Armitage2,Armitage3,Armitage4}. Additionally, it manifests scaling behavior in the vicinity of its zero-temperature ordered state, and it is a test-bed for exploring the influence of finite-size scaling on critical properties, including the connection between the behavior of fluctuation-induced forces in the critical regime and scaling predictions \\cite{rudzanabshak2010}. The one-dimensional Ising model in a transverse field has proven an important experimental realization of a system with a quantum phase transition \\cite{Armitage4}. While the properties of the Ising model in the fixed ordering field ensemble are well-studied, we are aware of no corresponding work having been done in the conjugate ensemble in which the total magnetization is held fixed---except in the limit of an infinite Ising chain, in which a Legendre transformation suffices to derive the fixed-$M$, or Helmholtz, free energy from the fixed $h$, or Gibbs free energy, and in the case of $M=0$ \\cite{Wang2017}. Utilizing an analysis of domain wall statistics, we have obtained closed-form expressions for the fixed-$M$ partition function in the case of periodic, antiperiodic and Dirichlet, or free, boundary conditions. The resulting expressions differ non-trivially from corresponding expressions in the fixed-$h$ ensemble. These results should extend the applicability of the one-dimensional Ising model to physical systems in which finite size effects play an important role. \nThe analysis reported here can also be viewed as a useful addition to approaches based on Ginzburg-Landau-Wilson Hamiltonians \\cite{Dietrich1,Dietrich2,Dietrich3}. \n\n\n\\section{Ising chain with periodic boundary conditions: the combinatorics of domains} \\label{sec:periodicdomains}\n\nAs in the case of all boundary conditions considered here, the partition function to be evaluated is the sum over spin states of the Boltzmann factor\n\\begin{equation}\n\\exp\\left[K \\sum_{i=1}^{N-1}s_i s_{i+1}\\right] \\label{eq:bf}\n\\end{equation}\nwhere each spin variable takes on the values $\\pm 1$; for periodic boundary conditions the sum also includes the closing bond $s_N s_1$. Fixing the total magnetization amounts to the constraint that the difference between the number of up spins, $N_+$, and the number of down spins, $N_-$, is equal to $M$. \n \n\n \nThe key step in the calculation of the partition function is the determination of the number of ways in which the spins can arrange themselves into alternating spin-up and spin-down domains, subject to the requirement of a fixed value of the total magnetization, $M$. We start with equations that express the relationships between $M$, the number of up spins, $N_+$, and the number of down spins, $N_-$, along with the total number of spins, $N$:\n \\begin{equation}\n \tN = N_+ + N_-, \\, \\qquad \\mbox{and}\\qquad \n \tM = N_+-N_- \\label{eq:cf1}.\n \\end{equation}\n Inverting these equations we find\n \\begin{equation}\n \tN_+ = \\frac{N+M}{2} \\, \\qquad \\mbox{and}\\qquad \n \tN_- = \\frac{N-M}{2}. 
\\label{eq:cf2}\n \\end{equation}\n \nFor insight into the determination of domain statistics, we look at, say, the third leading contribution in an expansion of the partition function in powers of $\\exp[-K]$. We start with a domain of $N_+$ up spins. We then partition that domain into four smaller domains. We do this by inserting three ``slices'' into the domain, effectively three walls between adjacent spins. \n We then partition a domain of $N_-$ down spins into four smaller domains, which we insert between the domains of up spins. The process is illustrated in Fig. \\ref{fig:steps}.\n \\begin{figure}[htbp]\n \t\t\\includegraphics[width=3in]{step1.pdf}\n\t\t\\includegraphics[width=3in]{step2.pdf}\n\t\t\\includegraphics[width=3in]{step3.pdf}\n \t\t\\caption{Top portion: the domain of up spins (in green) and of down spins (in orange); middle portion: the domains are each divided into four smaller domains; bottom portion: the smaller domains are now interspersed. }\n \t\t\\label{fig:steps}\n \\end{figure}\n\n \n \n We now calculate how many ways there are of subdividing each domain into four subdomains. In the case of the spin-up domain, that quantity is \n \\begin{equation}\n \t(N_+-1)(N_+-2)(N_+-3)\/3! \\label{eq:cf5}\n \\end{equation}\n which is the number of ways of inserting three partitions between adjacent spins in a linear array of $N_+$ up spins. A similar expression holds for the number of ways of subdividing the domain of down spins. Making use of the relations (\\ref{eq:cf2}) and multiplying the resulting expressions to obtain the number of ways of subdividing both domains, we end up with the factor \n \\begin{equation}\n \t((N-2)^2-M^2)((N-4)^2-M^2)((N-6)^2-M^2)\/(4^3(3!)^2). \\label{eq:cf9}\n \\end{equation}\n \n We now join up the ends of the set of domains so that they form a ring, consistent with periodic boundary conditions, and we rotate the ring around to find out how many ways we can arrange the subdomains. This yields a factor of $N$. However, because we take all possible lengths for the set of subdomains we are overcounting by a factor of four, the number of pairs of domains. \n The overall factor is thus\n \\begin{equation}\n \t\\frac{N}{4}\\frac{((N-2)^2-M^2)((N-4)^2-M^2)((N-6)^2-M^2)}{ 4^3 (3!)^2}. \\label{eq:cf10}\n \\end{equation}\n To obtain the complete expression, we multiply the above by $\\exp(-16K)$, corresponding to the energy cost of the eight walls between the eight domains of the periodically continued array in Fig. \\ref{fig:steps}. \n \n In the general case of $2k$ alternating domains, the first factor of 4 in the denominator of (\\ref{eq:cf10}) is replaced by $k$, while the two other factors become $4^{k-1}$ and $(k-1)!^2$. 
Thus, the general form of the denominator is\n \\begin{equation}\n \t\\label{eq:gen-form-denomnator}\n 4^{k-1} k ((k-1)!)^2.\n \\end{equation}\nThen, for the numerator one has\n\\begin{equation}\n\t\\label{eq:gen-form-nominator}\n\tN\\prod _{p=1}^{k-1} \\left((N-2 p)^2-M^2\\right).\n\\end{equation}\nTaking into account that the energy of a configuration with $2k$ domains is $\\exp [K (N-4 k)]$, for the contribution of this configuration to the partition function one obtains \n\\begin{eqnarray}\n\t\\label{eq:stat-sum-expansion}\n\t\\lefteqn{\\mathrm{Zterm}(N,M,K,k)} \\\\&=& \\frac{N \\exp (K (N-4 k)) \\prod _{p=1}^{k-1} \\left((N-2 p)^2-M^2\\right)}{4^{k-1} k ((k-1)!)^2}.\\nonumber \n\\end{eqnarray}\nThe form of the right hand side of (\\ref{eq:stat-sum-expansion}) allows us to sum over $k$ from $1$ to $\\infty$ to obtain the partition function $Z^{({\\rm per})}(N,M,K)$. The result is a closed-form expression that is exact when $N$ and $M$ are either both even or both odd integers with $|M|<N$, and that smoothly interpolates between these exact values for all other values of $N$ and $M$ with $|M|<N$. The case $|M|=N$ is exceptional, but trivial to determine. The result is \n\\begin{eqnarray}\n\t\\label{eq:statistical-sum}\n&&\tZ^{(\\rm per)}(N,K,M) = N e^{K (N-4)}\\\\&& \\times \\,_2F_1\\left(\\frac{1}{2} (-M-N+2),\\frac{1}{2} (M-N+2);2;e^{-4 K}\\right), \\nonumber\n\\end{eqnarray}\nwhere the generalized hypergeometric function $_2F_1(\\alpha,\\beta;\\gamma;z)$ is often denoted simply as $F(\\alpha,\\beta;\\gamma;z)$, with the latter being the ordinary hypergeometric function \\cite{A&S1970}. \n\nSimilar calculations~\\cite{supmat} lead to expressions for the partition functions in the case of antiperiodic and Dirichlet boundary conditions. \nWhen the boundary conditions are Dirichlet, we have\n\\begin{widetext}\n\\begin{eqnarray}\n\tZ^{(D)}(N,K,M) \n\t& = & e^{K (N-1)} \\Bigg[2 e^{-2 K} \\, _2F_1\\left(\\frac{1}{2} (-M-N+2),\\frac{1}{2}\n\t(M-N+2);1;e^{-4 K}\\right) \\nonumber \\\\ &&-\\frac{1}{2} e^{-4 K} (M-N+2) \\, _2F_1\\left(\\frac{1}{2}\n\t(-M-N+2),\\frac{1}{2} (M-N+4);2;e^{-4 K}\\right) \\nonumber \\\\ && + \\frac{1}{2} e^{-4 K} (M+N-2) \\,\n\t_2F_1\\left(\\frac{1}{2} (-M-N+4),\\frac{1}{2} (M-N+2);2;e^{-4 K}\\right)\\Bigg] \\nonumber \\\\ \\label{eq:DDbc-main-text-appendix}\n\\end{eqnarray}\n\\end{widetext}\nand when the boundary conditions are antiperiodic, the partition function is given by \n\\begin{widetext}\n\\begin{eqnarray}\nZ^{(\\rm anti)}(N,K,M)\n& = &e^{K (N-6)} \\left[2 \\left(e^{4 K}-1\\right) \\, _2F_1\\left(\\frac{1}{2} (-M-N+2),\\frac{1}{2}\n (M-N+2);1;e^{-4 K}\\right) \\right. \\nonumber \\\\ & & \\left. +N \\, _2F_1\\left(\\frac{1}{2} (-M-N+2),\\frac{1}{2}\n (M-N+2);2;e^{-4 K}\\right)\\right] \\label{eq:cf33}\n\\end{eqnarray}\n\\end{widetext}\nAs in the case of periodic boundary conditions, the expressions above for the partition function when boundary conditions are Dirichlet and antiperiodic are exact except in the case of perfect alignment of the spins, when $M= \\pm N$. \n\nIf we write $M=mN$ and focus on the case $N \\gg 1$, then the exact expressions above approach different forms. 
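As a check on the combinatorics, (\\ref{eq:statistical-sum}) is readily verified against a brute-force evaluation of the constrained spin sum for small chains; a minimal sketch (illustrative only, using the hypergeometric function from scipy) is\n\\begin{verbatim}\nimport itertools\nfrom math import exp\nfrom scipy.special import hyp2f1\n\ndef Z_direct(N, M, K):\n    # Sum exp(K * sum_i s_i s_{i+1}) over all periodic spin\n    # configurations of N spins with fixed total magnetization M.\n    Z = 0.0\n    for s in itertools.product((1, -1), repeat=N):\n        if sum(s) == M:\n            E = sum(s[i] * s[(i + 1) % N] for i in range(N))\n            Z += exp(K * E)\n    return Z\n\ndef Z_formula(N, M, K):\n    return (N * exp(K * (N - 4))\n            * hyp2f1((2 - M - N) \/ 2, (2 + M - N) \/ 2, 2, exp(-4 * K)))\n\nprint(Z_direct(8, 2, 0.7), Z_formula(8, 2, 0.7))  # identical values\n\\end{verbatim}\nWe now return to the large-$N$ limit. 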
In the case of periodic boundary conditions, the partition function becomes\n\\begin{equation}\nZ^{(\\rm per)}_{\\rm \\lim}(N,K,m) = \\frac{2}{N}\\frac{e^{NK}x_t}{\\sqrt{1-m^2}}I_1(x_t \\sqrt{1-m^2}) \\label{eq:scalingform}\n\\end{equation}\nwhere $I_1$ is the modified Bessel function of order 1, $x_t=Ne^{-2K}$ is the scaling combination $N\/\\xi_t$, $\\xi_t $ being the correlation length \\cite{Baxter1982} in the vicinity of the zero temperature critical point. This allows us to explore the scaling behavior of thermodynamic quantities close to $T=0$. Limiting forms for the antiperiodic and Dirichlet partition functions can also be obtained.\n\nThe explicit formulas (\\ref{eq:statistical-sum})--(\\ref{eq:cf33}) allow one to obtain expressions involving derivatives with respect to the size, $N$, and the total magnetization, $M$, of the Ising chain. This is useful in the calculation of fluctuation-induced forces in the one-dimensional Ising system. Because of the nature of the ensemble in which the forces are generated, we refer to them as \\textit{Helmholtz} forces. The determination of this kind of force requires that we specify precisely what is held constant in the finite Ising strip (the ``film'') and the infinite Ising system that borders it (the ``bulk''). Three of the possibilities are 1) constant total magnetization, $M$, 2) constant magnetization per site, $m=M\/N$, and 3) constant number of up-spins, $N_+$. The last is relevant to lattice gas models. Consideration of the model leads to the following observations.\n\n\\begin{enumerate}\n\\item In the case of periodic and antiperiodic boundary conditions the degeneracy in the position of the borders between the domains with respect to translation results in a contribution to the Helmholtz free energy that is logarithmic in $N$. This implies a lack of perfect scaling. \n\\item When $m$ in the fixed $m$ ensemble is not equal to $\\pm1$, the interfacial energy between the domains with coexisting phases plays a key role in the statistical mechanics of the system. \n\\end{enumerate}\n\nIt is well known that in the grand canonical ensemble, i.e., fixed $h$, the Gibbs free energy of the finite system approaches the bulk limit \\textit{exponentially} in $N$ (i.e. as $e^{- \\alpha N}$ with $\\alpha>0$) as $N\\to \\infty$ for periodic boundary conditions. The properties listed above imply that in systems with fixed $m$ the Helmholtz free energy possesses non-scaling contributions that vanish significantly more slowly than this exponential approach to the bulk behavior. \n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_2.pdf}\n\t\\caption{The behavior of the function $X_{\\rm H}^{(\\rm per)}(K,m|N)$ (see \\eq{eq:figeq3}) with $N=100$ and for $m=0.1, 0.3$, and $m=0.5$. We observe that the function is \\textit{positive} for large values of $K$ and \\textit{negative} for relatively small values of $K$ provided $m$ is also relatively small. For large $m$ the force is always repulsive, irrespective of the value of $K$. The same is also true for very small values of $K$, independent of the value of $m$. The logarithmic behavior of the free energy of the finite Ising chain with periodic boundary conditions noted in item 1 of the comments above leads to the limit $X_{\\rm H}^{(\\rm per)}(K\\to\\infty,m|N)=1$. 
}\n\t\\label{fig:Helmholtz}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_3.pdf}\n\t\\caption{The behavior of the function $X_{\\rm H}^{(\\rm per)}(K,m|N)$ (see \\eq{eq:figeq3}) with $N=100,200, 300, 400$ and $N=500$. We observe that the function is \\textit{positive} for large and for small enough values of $K$, while being \\textit{negative} for relatively moderate values of $K$, \\textit{irrespective} of the value of $N$. Larger $N$ stronger is the repulsion for small enough $K$; the force in the latter regime is strongly repulsive, irrespective on the value of $N$. }\n\t\\label{fig:Helmholtz2}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_4.pdf}\n\t\\caption{The behavior of the scaling function $X_{\\rm H}^{(\\rm per)}(x_t,m)$ for $m=0.1$. The inspection of the results obtained numerically from \\eq{eq:statistical-sum} with $N=100,200, 300, 400$ and $N=500$, and that one from \\eq{eq:scalingform} demonstrate perfect scaling and agreement between each other. We observe that the function is \\textit{positive} for large values of $x_t$, \\textit{negative} for relatively moderate values of $x_t$, and again strongly repulsive for small values of $x_t$. }\n\t\\label{fig:Helmholtz3}\n\\end{figure}\n\nNote that $m$ can be also seen as a sort of generalized ``charge,'' or symmetry value, which is conserved both inside and outside the system. Given the free energy derivable from the partition function, one is in a position to determine the fluctuation-induced Helmholtz force on a finite Ising chain in contact with a ``bulk,'' chain of infinite extent. The results of such a calculation are shown in Figs. \\ref{fig:Helmholtz}---\\ref{fig:Helmholtz3}. The force is minus the derivative with respect to $N$ of the combined Helmholtz free energy\n\\begin{equation}\n\\mathcal{F} = -\\ln\\left( \\mathcal{Z}^{({\\rm per})}(N,K,M)\\right) + (\\mathcal{N}-N)F_H(K,m). \\label{eq:figeq1}\n\\end{equation}\nHere $F_H$ is the Helmholtz free energy density of a ``bulk'' neighboring Ising chain. The term proportional to $\\mathcal{N}$ can be ignored as a background contribution to the overall free energy. The quantities $M$, $m$ and $K$ are kept constant in the process of differentiation, after which $M$ is set equal to $mN$. This yields the fluctuation induced Helmholtz force\n\\begin{equation}\nf_H^{({\\rm per})}(K,m,N) \\label{eq:figeq2}.\n\\end{equation}\nMultiplying the result for $f_H^{({\\rm per})}(K,m,N)$ by $N$ provides the function $X_H^{({\\rm per})}(K,m|N)$\n\\begin{equation}\nX_H^{({\\rm per})}(K,m|N)=N\tf_H^{({\\rm per})}(K,m,N) \\label{eq:figeq3}.\n\\end{equation} \nIts behavior is shown in Figs. \\ref{fig:Helmholtz} and \\ref{fig:Helmholtz2}. Fig \\ref{fig:Helmholtz} shows its behavior as a function of $K$ for $N=100$, and $m=0.1, 0.3$ and $m=0.5$, while Fig. \\ref{fig:Helmholtz2} shows it for $m=0.1$ and $N=100, 200, 300, 400$, and $N=500$. Focusing on the scaling regime ($K$ and $N$ both large compared to 1) we end up with the $N$-independent scaling function $X^{({\\rm per})}(x_t,m)$. \nFigure \\ref{fig:Helmholtz3} shows the behavior of this quantity as a function of $x_t$ for $m=0.1$.\n\n\n\n\n\nThe plots in Fig. 
\section{Introduction} \label{sec:intro}\n\n\n\nThe one-dimensional Ising model has played an important role in statistical mechanics since its introduction in 1920 \cite{Lenz1920}. This model possesses the virtue of exact solvability in the ensemble of fixed ordering field, $h$, and it also represents an important example of an infinite system with short range interactions that is in addition experimentally realizable \cite{Armitage1,Armitage2,Armitage3,Armitage4}. Additionally, it manifests scaling behavior in the vicinity of its zero-temperature ordered state, and it is a test-bed for exploring the influence of finite-size scaling on critical properties, including the connection between the behavior of fluctuation-induced forces in the critical regime and scaling predictions \cite{rudzanabshak2010}. The one-dimensional Ising model in a transverse field has proven to be an important experimental realization of a system with a quantum phase transition \cite{Armitage4}. While the properties of the Ising model in the fixed ordering field ensemble are well-studied, we are aware of no corresponding work having been done in the conjugate ensemble in which the total magnetization is held fixed---except in the limit of an infinite Ising chain, in which a Legendre transformation suffices to derive the fixed-$M$, or Helmholtz, free energy from the fixed-$h$, or Gibbs, free energy, and in the case of $M=0$ \cite{Wang2017}. Utilizing an analysis of domain wall statistics, we have obtained closed-form expressions for the fixed-$M$ partition function in the case of periodic, antiperiodic and Dirichlet, or free, boundary conditions. The resulting expressions differ non-trivially from corresponding expressions in the fixed-$h$ ensemble. These results should extend the applicability of the one-dimensional Ising model to physical systems in which finite size effects play an important role. \nThe analysis reported here can also be viewed as a useful addition to approaches based on Ginzburg-Landau-Wilson Hamiltonians \cite{Dietrich1,Dietrich2,Dietrich3}. \n\n\n\section{Ising chain with periodic boundary conditions: the combinatorics of domains} \label{sec:periodicdomains}\n\nAs in the case of all boundary conditions considered here, the partition function to be evaluated is the sum over spin states of the Boltzmann factor\n\begin{equation}\n\exp\left[K \sum_{i=1}^{N-1}s_i s_{i+1}\right] \label{eq:bf}\n\end{equation}\nwhere each spin variable takes on the values $\pm 1$. Fixing the total magnetization amounts to the constraint that the difference between the number of up spins, $N_+$, and the number of down spins, $N_-$, is equal to $M$.
\n \n\n \nThe key step in the calculation of the partition function is the determination of the number of ways in which the spins can arrange themselves into alternating spin-up and spin-down domains, subject to the requirement of a fixed value of the total magnetization, $M$. We start with equations that express the relationships between $M$, the number of up spins, $N_+$ and the number of down spins, $N_-$, along with the total number of spins, $N$:\n \begin{equation}\n \tN = N_+ + N_-, \, \qquad \mbox{and}\qquad \n \tM = N_+-N_- \label{eq:cf1}.\n \end{equation}\n Inverting these equations we find\n \begin{equation}\n \tN_+ = \frac{N+M}{2} \, \qquad \mbox{and}\qquad \n \tN_- = \frac{N-M}{2}. \label{eq:cf2}\n \end{equation}\n \nFor insight into the determination of domain statistics, we look at, say, the third leading contribution in an expansion of the partition function in powers of $\exp[-K]$. We start with a domain of $N_+$ up spins. We then partition that domain into four smaller domains. We do this by inserting three ``slices'' into the domain, effectively three walls between adjacent spins. \n We then partition a domain of $N_-$ down spins into four smaller domains, which we insert between the domains of up spins. The process is illustrated in Fig. \ref{fig:steps}.\n \begin{figure}[htbp]\n \t\t\includegraphics[width=3in]{step1.pdf}\n\t\t\includegraphics[width=3in]{step2.pdf}\n\t\t\includegraphics[width=3in]{step3.pdf}\n \t\t\caption{Top portion: the domain of up spins (in green) and of down spins (in orange); middle portion: the domains are each divided into four smaller domains; bottom portion: the smaller domains are now interspersed. }\n \t\t\label{fig:steps}\n \end{figure}\n\n \n \n We now calculate how many ways there are of subdividing each domain into four subdomains. In the case of the spin-up domain, that quantity is \n \begin{equation}\n \t(N_+-1)(N_+-2)(N_+-3)\/3! \label{eq:cf5}\n \end{equation}\n which is the number of ways of inserting three partitions between adjacent spins in a linear array of $N_+$ up spins. A similar expression holds for the number of ways of subdividing the domain of down spins. Making use of the relations (\ref{eq:cf2}) and multiplying the resulting expressions to obtain the number of ways of subdividing both domains, we end up with the factor \n \begin{equation}\n \t((N-2)^2-M^2)((N-4)^2-M^2)((N-6)^2-M^2)\/(4^3(3!)^2). \label{eq:cf9}\n \end{equation}\n \n We now join the ends of the set of domains so that they form a ring, consistent with periodic boundary conditions, and rotate the ring to find out in how many ways we can arrange the subdomains. This yields a factor of $N$. However, because we take all possible lengths for the set of subdomains we are overcounting by a factor of four, the number of pairs of domains. \n The overall factor is thus\n \begin{equation}\n \t\frac{N}{4}\frac{((N-2)^2-M^2)((N-4)^2-M^2)((N-6)^2-M^2)}{ 4^3 (3!)^2}. \label{eq:cf10}\n \end{equation}\n To obtain the complete expression, we multiply the above by $\exp(-16K)$, corresponding to the energy cost of the eight walls between the eight domains of the periodically continued array in Fig. \ref{fig:steps}. \n \n In the general case of $2k$ alternating domains, the first factor of 4 in the denominator of (\ref{eq:cf10}) is replaced by $k$, while the two other factors become $4^{k-1}$ and $(k-1)!^2$.
Thus, the general form of the denominator is\n \begin{equation}\n \t\label{eq:gen-form-denominator}\n 4^{k-1} k ((k-1)!)^2.\n \end{equation}\nThen, for the numerator one has\n\begin{equation}\n\t\label{eq:gen-form-numerator}\n\tN\prod _{p=1}^{k-1} \left((N-2 p)^2-M^2\right).\n\end{equation}\nTaking into account that the energy of a configuration with $2k$ domains is $\exp [K (N-4 k)]$, for the contribution of this configuration to the statistical sum one obtains \n\begin{eqnarray}\n\t\label{eq:stat-sum-expansion}\n\t\lefteqn{\mathrm{Zterm}(N,M,K,k)} \\&=& \frac{N \exp (K (N-4 k)) \prod _{p=1}^{k-1} \left((N-2 p)^2-M^2\right)}{4^{k-1} k ((k-1)!)^2}.\nonumber \n\end{eqnarray}\nThe form of the right hand side of (\ref{eq:stat-sum-expansion}) allows us to sum over $k$ from 0 to $\infty$ to obtain the partition function $Z^{({\rm per})}(N,M,K)$. The result is a closed-form expression that is exact when $N$ and $M$ are integers of the same parity (both even or both odd) with $|M|<N$, and that smoothly interpolates between the exact values for all other values of $N$ and $M$ with $|M|<N$. The case $|M|=N$ is exceptional, but trivial to determine. The result is \n\begin{eqnarray}\n\t\label{eq:statistical-sum}\n&&\tZ^{(\rm per)}(N,K,M) = N e^{K (N-4)}\\\\&& \times \,_2F_1\left(\frac{1}{2} (-M-N+2),\frac{1}{2} (M-N+2);2;e^{-4 K}\right), \nonumber\n\end{eqnarray}\nwhere $_2F_1(\alpha,\beta;\gamma;z)$ is the Gauss hypergeometric function, often denoted simply as $F(\alpha,\beta;\gamma;z)$ \cite{A&S1970}. \n\n\nSimilar calculations \cite{supmat} lead to expressions for the partition functions in the case of antiperiodic and Dirichlet boundary conditions. \nWhen the boundary conditions are Dirichlet, we have\n\begin{widetext}\n\begin{eqnarray}\n\tZ^{(D)}(N,K,M) \n\t& = & e^{K (N-1)} \Bigg[2 e^{-2 K} \, _2F_1\left(\frac{1}{2} (-M-N+2),\frac{1}{2}\n\t(M-N+2);1;e^{-4 K}\right) \nonumber \\ &&-\frac{1}{2} e^{-4 K} (M-N+2) \, _2F_1\left(\frac{1}{2}\n\t(-M-N+2),\frac{1}{2} (M-N+4);2;e^{-4 K}\right) \nonumber \\ && + \frac{1}{2} e^{-4 K} (M+N-2) \,\n\t_2F_1\left(\frac{1}{2} (-M-N+4),\frac{1}{2} (M-N+2);2;e^{-4 K}\right)\Bigg] \nonumber \\ \label{eq:DDbc-main-text-appendix}\n\end{eqnarray}\n\end{widetext}\nand when the boundary conditions are antiperiodic, the partition function is given by \n\begin{widetext}\n\begin{eqnarray}\nZ^{(\rm anti)}(N,K,M)\n& = &e^{K (N-6)} \left[2 \left(e^{4 K}-1\right) \, _2F_1\left(\frac{1}{2} (-M-N+2),\frac{1}{2}\n (M-N+2);1;e^{-4 K}\right) \right. \nonumber \\ & & \left. +N \, _2F_1\left(\frac{1}{2} (-M-N+2),\frac{1}{2}\n (M-N+2);2;e^{-4 K}\right)\right]. \label{eq:cf33}\n\end{eqnarray}\n\end{widetext}\nAs in the case of periodic boundary conditions, the expressions above for the partition function when the boundary conditions are Dirichlet and antiperiodic are exact except in the case of perfect alignment of the spins, when $M= \pm N$. 
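\n\nAs a quick consistency check of (\ref{eq:statistical-sum}), one can compare it against a brute-force enumeration of all fixed-$M$ spin configurations on small rings. The following Python sketch (our own illustration, using SciPy's \texttt{hyp2f1}; it is not part of the analysis above):\n\begin{verbatim}\nimport itertools\nimport numpy as np\nfrom scipy.special import hyp2f1\n\ndef z_per_closed(N, K, M):\n    # Closed-form partition function for periodic boundary conditions\n    a, b = (2 - M - N) \/ 2, (2 + M - N) \/ 2\n    return N * np.exp(K * (N - 4)) * hyp2f1(a, b, 2, np.exp(-4 * K))\n\ndef z_per_brute(N, K, M):\n    # Direct sum over all spin configurations of the ring with sum(s) == M\n    total = 0.0\n    for s in itertools.product((-1, 1), repeat=N):\n        if sum(s) == M:\n            total += np.exp(K * sum(s[i] * s[(i + 1) % N] for i in range(N)))\n    return total\n\nfor N, M in [(4, 0), (6, 2), (8, 0), (9, 3)]:\n    print(N, M, z_per_closed(N, 0.7, M), z_per_brute(N, 0.7, M))\n\end{verbatim}\nFor $N$ and $M$ of equal parity with $|M|<N$, the two evaluations should agree up to floating-point error, reflecting the exactness discussed above.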
\n\nIf we write $M=mN$ and focus on the case $N \gg 1$, then the exact expressions above approach simpler limiting forms. In the case of periodic boundary conditions, the partition function becomes\n\begin{equation}\nZ^{(\rm per)}_{\rm \lim}(N,K,m) = \frac{2}{N}\frac{e^{NK}x_t}{\sqrt{1-m^2}}I_1(x_t \sqrt{1-m^2}) \label{eq:scalingform}\n\end{equation}\nwhere $I_1$ is the modified Bessel function of the first kind of order 1 and $x_t=Ne^{-2K}$ is the scaling combination $N\/\xi_t$, $\xi_t $ being the correlation length \cite{Baxter1982} in the vicinity of the zero-temperature critical point. This allows us to explore the scaling behavior of thermodynamic quantities close to $T=0$. Limiting forms for the antiperiodic and Dirichlet partition functions can also be obtained. \n\nThe explicit formulas (\ref{eq:statistical-sum})--(\ref{eq:cf33}) allow one to obtain expressions involving derivatives with respect to the size, $N$, and the total magnetization, $M$, of the Ising chain. This is useful in the calculation of fluctuation-induced forces in the one-dimensional Ising system. Because of the nature of the ensemble in which the forces are generated, we refer to them as \textit{Helmholtz} forces. The determination of this kind of force requires that we specify precisely what is held constant in the finite Ising strip (the ``film'') and the infinite Ising system that borders it (the ``bulk''). Three of the possibilities are 1) constant total magnetization, $M$, 2) constant magnetization per site, $m=M\/N$, and 3) constant number of up-spins, $N_+$. The last is relevant to lattice gas models. Consideration of the model leads to the following observations.\n\n\begin{enumerate}\n\item In the case of periodic and antiperiodic boundary conditions, the translational degeneracy in the positions of the borders between the domains results in a contribution to the Helmholtz free energy that is logarithmic in $N$. This implies a lack of perfect scaling. \n\item When $m$ in the fixed-$m$ ensemble is not equal to $\pm1$, the interfacial energy between the domains with coexisting phases plays a key role in the statistical mechanics of the system. \n\end{enumerate}\n\nIt is well known that in the grand canonical ensemble, i.e., at fixed $h$, the Gibbs free energy of the finite system approaches the bulk limit \textit{exponentially} in $N$ (i.e., as $e^{- \alpha N}$ with $\alpha>0$) as $N\to \infty$ for periodic boundary conditions. The properties listed above imply that in systems with fixed $m$ the Helmholtz free energy possesses non-scaling contributions that vanish significantly more slowly than this exponential approach to the bulk behavior. \n\n\begin{figure}[htbp]\n\t\includegraphics[width=\columnwidth]{new_fig_2.pdf}\n\t\caption{The behavior of the function $X_{\rm H}^{(\rm per)}(K,m|N)$ (see \eq{eq:figeq3}) with $N=100$ and for $m=0.1, 0.3$, and $m=0.5$. We observe that the function is \textit{positive} for large values of $K$ and \textit{negative} for relatively small values of $K$, provided $m$ is also relatively small. For large $m$ the force is always repulsive, irrespective of the value of $K$. The same is also true for very small values of $K$, independent of the value of $m$. The logarithmic behavior of the free energy of the finite Ising chain with periodic boundary conditions, noted in item 1 of the comments above, leads to the limit $X_{\rm H}^{(\rm per)}(K\to\infty,m|N)=1$. 
}\n\t\\label{fig:Helmholtz}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_3.pdf}\n\t\\caption{The behavior of the function $X_{\\rm H}^{(\\rm per)}(K,m|N)$ (see \\eq{eq:figeq3}) with $N=100,200, 300, 400$ and $N=500$. We observe that the function is \\textit{positive} for large and for small enough values of $K$, while being \\textit{negative} for relatively moderate values of $K$, \\textit{irrespective} of the value of $N$. Larger $N$ stronger is the repulsion for small enough $K$; the force in the latter regime is strongly repulsive, irrespective on the value of $N$. }\n\t\\label{fig:Helmholtz2}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_4.pdf}\n\t\\caption{The behavior of the scaling function $X_{\\rm H}^{(\\rm per)}(x_t,m)$ for $m=0.1$. The inspection of the results obtained numerically from \\eq{eq:statistical-sum} with $N=100,200, 300, 400$ and $N=500$, and that one from \\eq{eq:scalingform} demonstrate perfect scaling and agreement between each other. We observe that the function is \\textit{positive} for large values of $x_t$, \\textit{negative} for relatively moderate values of $x_t$, and again strongly repulsive for small values of $x_t$. }\n\t\\label{fig:Helmholtz3}\n\\end{figure}\n\nNote that $m$ can be also seen as a sort of generalized ``charge,'' or symmetry value, which is conserved both inside and outside the system. Given the free energy derivable from the partition function, one is in a position to determine the fluctuation-induced Helmholtz force on a finite Ising chain in contact with a ``bulk,'' chain of infinite extent. The results of such a calculation are shown in Figs. \\ref{fig:Helmholtz}---\\ref{fig:Helmholtz3}. The force is minus the derivative with respect to $N$ of the combined Helmholtz free energy\n\\begin{equation}\n\\mathcal{F} = -\\ln\\left( \\mathcal{Z}^{({\\rm per})}(N,K,M)\\right) + (\\mathcal{N}-N)F_H(K,m). \\label{eq:figeq1}\n\\end{equation}\nHere $F_H$ is the Helmholtz free energy density of a ``bulk'' neighboring Ising chain. The term proportional to $\\mathcal{N}$ can be ignored as a background contribution to the overall free energy. The quantities $M$, $m$ and $K$ are kept constant in the process of differentiation, after which $M$ is set equal to $mN$. This yields the fluctuation induced Helmholtz force\n\\begin{equation}\nf_H^{({\\rm per})}(K,m,N) \\label{eq:figeq2}.\n\\end{equation}\nMultiplying the result for $f_H^{({\\rm per})}(K,m,N)$ by $N$ provides the function $X_H^{({\\rm per})}(K,m|N)$\n\\begin{equation}\nX_H^{({\\rm per})}(K,m|N)=N\tf_H^{({\\rm per})}(K,m,N) \\label{eq:figeq3}.\n\\end{equation} \nIts behavior is shown in Figs. \\ref{fig:Helmholtz} and \\ref{fig:Helmholtz2}. Fig \\ref{fig:Helmholtz} shows its behavior as a function of $K$ for $N=100$, and $m=0.1, 0.3$ and $m=0.5$, while Fig. \\ref{fig:Helmholtz2} shows it for $m=0.1$ and $N=100, 200, 300, 400$, and $N=500$. Focusing on the scaling regime ($K$ and $N$ both large compared to 1) we end up with the $N$-independent scaling function $X^{({\\rm per})}(x_t,m)$. \nFigure \\ref{fig:Helmholtz3} shows the behavior of this quantity as a function of $x_t$ for $m=0.1$.\n\n\n\n\n\nThe plots in Fig. 
\n\nThe plots in Fig.~\ref{fig:Helmholtz} show that the fluctuation-induced force studied here behaves similarly to the force appearing in some versions of the Big Bang theory: strong repulsion at high temperatures, transitioning to moderate attraction for intermediate values of the temperature, and then back to repulsion, albeit much weaker than during the initial period of highest temperature \cite{inflationary}. \n\n\n\n\section*{Acknowledgements}\nDD gratefully acknowledges the discussions and exchange of information with Prof. Siegfried Dietrich and Dr. Markus Gross on some aspects of the current work.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\n\\section{Problem statement}\n\n\\label{sec:statement}\n\n\\begin{figure}\n\\centering\n \\resizebox{\\linewidth}{!}{\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\resizebox{\\linewidth}{!}{\\input{frame_diagram}}\n \n \\caption{}\n \\label{fig:sim_scene}\n \\end{subfigure}\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\resizebox{\\linewidth}{!}{\\includegraphics{image\/ur5_setup.jpeg}}\n \\caption{}\n \\label{fig:real_scene}\n \\end{subfigure}}\n \\caption{Our benchmark scene for multi-fingered grasping. \nThe pose of the hand $(\\mathbf{x}, \\mathbf{R})$ is defined in the local object frame. The depth camera produces an image $i$ of the scene. }\n \\label{fig:frame_diagram}\n\\end{figure}\n\n\\paragraph{Description}\nWe consider the problem of grasping a rigid and stable body with a multi-fingered gripper, as illustrated in Fig.~\\ref{fig:frame_diagram}.\nThe object $\\mathcal{O}$ is modelled as a 3D surface mesh and its centroid stands on a table at a location $(x_{\\mathcal{O}}, y_{\\mathcal{O}}, z_{\\mathcal{O}})$ with a rotation $\\varphi_{z,\\mathcal{O}}$ around the $z$-axis in the world reference frame $\\mathcal{F}_{W}$.\nWe refer to its 2D pose $(x_\\mathcal{O}, y_\\mathcal{O}, \\varphi_{z,\\mathcal{O}})$ as $\\mathbf{p}_{\\mathcal{O}} \\in \\mathbb{R}^{2}\\times \\text{SO(2)}$. \nThe hand configuration $\\mathbf{h} \\in \\mathcal{H} = \\mathbb{R}^{3}\\times \\text{SO(3)} \\times \\mathcal{G}$ is defined as the combination of the pose $(\\mathbf{x},\\mathbf{R}) \\in \\mathbb{R}^{3}\\times \\text{SO(3)}$ of the hand and the type $g \\in \\mathcal{G} = \\{ \\text{basic}, \\text{wide}, \\text{pinch} \\}$ of the grasp.\nThe hand pose $(\\mathbf{x}, \\mathbf{R})$ is defined with respect to the world frame coordinate.\nThe robot evolves in a 3D workspace observed with a fixed depth camera producing images $i \\in \\mathcal{I}$.\nThe goal is to find a robust hand configuration $\\mathbf{h^{*}}$ with respect to a given binary metric $S = \\{0, 1\\}$, where $S=1$ indicates a successful grasp.\n\n\n\\paragraph{Probabilistic modeling}\nWe model the scene and the grasping task according to the Bayesian network shown in Fig.~\\ref{fig:graphical_model}.\nThe variables $S, i, \\mathbf{h}, \\mathcal{O}$ and $\\mathbf{p}_{\\mathcal{O}}$ are modelled as random variables in order to capture the noise in the robot or in the depth camera, as well as our prior beliefs about the hand configuration, the object or its pose. \nThe structure of the Bayesian network is motivated by the fact that $\\mathbf{h}$, $\\mathcal{O}$ and $\\mathbf{p}_\\mathcal{O}$ are independent, while $S$ is dependent on $\\mathbf{h}$, $\\mathcal{O}$ and $\\mathbf{p}_\\mathcal{O}$ and $i$ is dependent on $\\mathcal{O}$ and $\\mathbf{p}_\\mathcal{O}$. This structure also enables a direct and straightforward way to generate data: $\\mathbf{h}$, $\\mathcal{O}$ and $\\mathbf{p}_\\mathcal{O}$ are sampled from their respective prior distributions while $S$ and $i$ can be generated using forward physical simulators for the grasping and the camera.\n\nThe prior distribution $p(\\mathbf{x})$ of the spatial position is uniformly distributed between the extreme values $\\mathbf{x}_{\\text{lim}}=(x_{\\text{low}},y_{\\text{low}}, z_{\\text{low}}, x_{\\text{high}},y_{\\text{high}},z_{\\text{high}})$, chosen to be within the range of physical dimensions of the gripper and the biggest object. It emphasizes our ignorance about interesting regions of space for grasping. 
The rotation $\\mathbf{R}$ is parameterized with a quaternion. A quaternion $\\mathbf{q}$ is an element of the quaternion group $\\mathbb{H}$, in the form $\\mathbf{q}= q_{0}\\mathbf{1} + q_{1}\\mathbf{i} + q_{2}\\mathbf{j} + q_{3}\\mathbf{k} = (q_{0}, q_{1}, q_{2}, q_{3})^{T}$ with $(q_{0}, q_{1}, q_{2}, q_{3})^{T} \\in \\mathbb{R}^{4}$ and $\\mathbf{i}^{2}=\\mathbf{j}^{2}=\\mathbf{k}^{2}=\\mathbf{ijk}=-1$. The conjugate $\\mathbf{\\bar{q}}$ of quaterion $\\mathbf{q}$ is given by $\\mathbf{\\bar{q}}:= q_{0}\\mathbf{1} - q_{1}\\mathbf{i} - q_{2}\\mathbf{j} - q_{3}\\mathbf{k}$. A unit quaternion, called \\textit{versor}, $\\mathbf{q}_{1} \\in \\mathbb{H}_{1}$ has a unit norm defined as $\\|\\mathbf{q}\\| = \\sqrt{ \\mathbf{q}\\mathbf{\\bar{q}}}=1$. They give a more compact representation than rotation matrices and avoid gimbal lock and singularities. Unit quaternions can be identified with the elements of a hyperspherical manifold $\\mathbb{S}^{3}$ embedded into $\\mathbb{R}^{4}$. Moreover, $\\mathbb{S}^{3}$ is a double covering of $\\text{SO(3)}$, meaning that antipodal points $\\pm\\mathbf{q}$ represent the same rotation, which implies that $p(\\mathbf{q};\\cdot)=p(-\\mathbf{q};\\cdot)$. We define the prior $p(\\mathbf{q})$ as a mixture of \\textit{power-spherical} distributions \\cite{de2020power} with 4 modes $\\mathbf{\\mu}_{i}$. Each mode is a mixture that satisfies $p(\\mathbf{q};\\cdot)=p(-\\mathbf{q};\\cdot)$. In total, we have\n\\begin{equation}p(\\mathbf{q}) = \\frac{1}{N} \\sum_{i=1}^{N=4}\\frac{\\text{PowerSpherical}(\\mathbf{q}; \\mathbf{\\mu}_{i}, \\kappa)}{2} + \\frac{\\text{PowerSpherical}(\\mathbf{q}; -\\mathbf{\\mu}_{i}, \\kappa)}{2}.\n\\end{equation}\nThese modes $\\mathbf{\\mu}_{i}$ encode information about the orientation of the gripper and share the same concentration factor $\\kappa=30$.\nTo grasp an object, the gripper point toward the table and thus toward the object -- an informed prior which indeed results in sufficiently many successful grasps. \nWe then define four rotations, separated by a rotation of $\\frac{\\pi}{2}$ around the $z$-axis (see Fig.~\\ref{fig:prior_quat_mode} in Appendix~\\ref{appendix:distribution}).\nIn this way, our prior covers a large part of the rotation space and is sufficiently informative by contrast to a uniform prior over the unit sphere $\\mathbb{S}^{3}$.\nThe grasp type $g$ is uniformly distributed between the three types basic, wide and pinch. These three modes modulates the spacing between the fingers in the opposite side of the thumb. Finally, $p(\\mathbf{h}) = p(\\mathbf{x})p(\\mathbf{R})p(g)$.\n\n\nThe prior $p(\\mathcal{O})=p(\\mathrm{Mesh})p(\\beta)$ is a discrete uniform distribution over a fixed set of object meshes and a uniform distribution for the scaling factor $\\beta$. \nFinally, the prior $p(\\mathbf{p}_{\\mathcal{O}})$ captures our belief that the object can be everywhere on the table with any rotation around the vertical axis. 
\n\nThe prior $p(\mathcal{O})=p(\mathrm{Mesh})p(\beta)$ is a discrete uniform distribution over a fixed set of object meshes and a uniform distribution for the scaling factor $\beta$. \nLastly, the prior $p(\mathbf{p}_{\mathcal{O}})$ captures our belief that the object can be anywhere on the table with any rotation around the vertical axis. For this reason, uniform distributions are used for all three parameters $x_{\mathcal{O}},y_{\mathcal{O}}, \varphi_{z, \mathcal{O}}$.\nTable~\ref{tab:prior_distribution} summarizes the prior distributions.\n\n\begin{figure}\n \begin{subfigure}[b]{.55\linewidth}\n \centering\n \resizebox{\linewidth}{!}{\input{bayesian_network.tex}}\n \vspace{0.5em}\n \n \caption{}\n \label{fig:graphical_model}\n\end{subfigure}\n\begin{subfigure}[b]{.45\linewidth}\n\centering\n\resizebox{0.85\linewidth}{!}{\n\begin{tabular}{cl}\n \hline\n Variable & Prior \\\n \hline\n $x$ & $\text{uniform}(-0.15,0.15)$ \\\n $y$ & $\text{uniform}(-0.15,0.15)$ \\\n $z$ & $\text{uniform}(0.12,0.34)$ \\\n $\mathbf{R}$ & $\text{mixture of power spherical}(\mathbf{\mu}_{i}, \kappa)$ \\\n $g$ & $\text{categorical}(\{\frac{1}{3},\frac{1}{3},\frac{1}{3}\})$ \\\n $x_{\mathcal{O}}$ & $\text{uniform}(-0.05,0.05)$ \\\n $y_{\mathcal{O}}$ & $\text{uniform}(-0.05,0.05)$ \\\n $\varphi_{z, \mathcal{O}}$ & $\text{uniform}(-\pi,\pi)$ \\\n \n $\mathrm{Mesh}$ & $\text{uniform in the set of objects}$ \\\n $\beta$ & $\text{uniform}(0.9, 1.1)$ \\\n \hline\n \end{tabular}}\n \caption{}\n \label{tab:prior_distribution}\n\end{subfigure}\n\caption{(a) Probabilistic graphical model of the environment. Gray nodes correspond to observed variables and white nodes to unobserved variables. (b) Prior distributions.}\n\end{figure}\n\nGiven our probabilistic graphical model, we finally formulate the problem of grasping as the Bayesian inference of the hand configuration $\mathbf{h}^{*}$ that is a posteriori the most likely given a successful grasp and an observation $i$. That is, we seek the maximum a posteriori (MAP) estimate\n\begin{equation}\n\label{eq:map}\n\mathbf{h}^{*} = \argmax_{\mathbf{h}}~p(\mathbf{h}|S=1, i).\n\end{equation}\n\n\n\section{Likelihood-free Bayesian inference for multi-fingered grasping}\n\label{sec:method}\n\n\subsection{Density ratio estimation}\nFrom Bayes' rule, the posterior of the hand configuration is\n\begin{equation}\n\label{eq:proba_cond}\n\begin{split}\np(\mathbf{h}|S, i) = \frac{p(S,i| \mathbf{h})}{p(S,i)}p(\mathbf{h}). \\\n\end{split}\n\end{equation}\nThe likelihood function $p(S, i|\mathbf{h})$ and the evidence $p(S, i)$ are both intractable, which makes standard Bayesian inference procedures such as Markov chain Monte Carlo unusable. \nHowever, drawing samples from forward models remains feasible with physical simulators, hence enabling likelihood-free Bayesian inference algorithms. \n\nFirst, we express the likelihood-to-evidence ratio as a product of two individual ratios,\n\begin{align}\n r(S, i|\mathbf{h}) &= \frac{p(S, i|\mathbf{h})}{p(S, i)}= \frac{p(S|\mathbf{h})}{p(S)} \frac{p(i|S, \mathbf{h})}{p(i|S)}= r(S|\mathbf{h}) r(i|S, \mathbf{h}).\n \label{eq:ratio_decomposition}\n\end{align}\nBy adapting the approach described in \cite{pmlr-v119-hermans20a, Brehmer:2019jyt} for likelihood ratio estimation, we train two neural network classifiers $d_\phi$ and $d_\theta$ that we will use to approximate $r(S|\mathbf{h})$ and $r(i|S, \mathbf{h})$.\nThe first network $d_\phi$ is trained to distinguish positive tuples $(S, \mathbf{h})$ (labeled $y=1$) sampled from the joint distribution $p(S, \mathbf{h})$ against negative tuples (labeled $y=0$) sampled from the product of marginals $p(S)p(\mathbf{h})$. 
The Bayes optimal classifier $d^{*}(S,\mathbf{h})$ that minimizes the cross-entropy loss is given by\n\begin{equation}\n\label{eq:discriminator}\nd^{*}(S, \mathbf{h}) = \frac{p(S, \mathbf{h} )}{p(S)p(\mathbf{h})+ p(S, \mathbf{h})},\n\end{equation}\nwhich recovers the likelihood ratio $r(S|\mathbf{h})$ as\n\begin{equation}\n \label{eq:d_to_r}\n \begin{split}\n \frac{d^{*}(S,\mathbf{h})}{1-d^{*}(S,\mathbf{h})} & = \frac{p(S,\mathbf{h})}{p(S)p(\mathbf{h})} = \frac{p(S|\mathbf{h})}{p(S)}.\n \end{split}\n\end{equation}\nTherefore, by modelling the classifier with a neural network $d_\phi$ trained on the binary classification problem, we obtain an approximate but amortized and differentiable likelihood ratio \n\begin{equation}\n \hat{r}(S|\mathbf{h}) = \frac{d_\phi(S,\mathbf{h})}{1-d_\phi(S,\mathbf{h})}.\n\end{equation}\nThe second network $d_\theta$ is trained similarly, over positive tuples $(i, \mathbf{h})$ (labeled $y=1$) sampled from the conditional joint distribution $p(i, \mathbf{h}|S=1)$ against negative tuples $(i, \mathbf{h})$ (labeled $y=0$) sampled from the product of marginals $p(i|S=1)p(\mathbf{h}|S=1)$. Using the same likelihood ratio trick, we obtain \n\begin{equation}\n \hat{r}(i|S=1, \mathbf{h}) = \frac{d_\theta(i, \mathbf{h})}{1-d_\theta(i, \mathbf{h})}.\n\end{equation}\nFinally, the likelihood ratios are combined with the prior to approximate the posterior as\n\begin{equation}\n\hat{p}(\mathbf{h}|S=1, i) = \hat{r}(i|S=1, \mathbf{h})\hat{r}(S=1|\mathbf{h}) p(\mathbf{h}),\n\end{equation}\nwhich enables immediate posterior inference despite the initial intractability of the likelihood function $p(S, i|\mathbf{h})$ and of the evidence $p(S, i)$.\n\n\n\n\begin{figure}\n \centering\n \input{nn_architecture}\n \n \label{fig:image_estimator}\n \caption{Neural network architectures of the classifiers $d_\phi$ and $d_\theta$ used to respectively approximate the likelihood ratios $r(S|\mathbf{h})$ and $r(i|S=1,\mathbf{h})$. }\n \label{fig:pipeline_architecture}\n\end{figure}\n\n\nThe neural network classifiers $d_\phi$ and $d_\theta$ are structured as shown in Fig.~\ref{fig:pipeline_architecture}. \nIn $d_\theta$, the camera image $i$ of size $640\times 480\times 1$ is pre-processed by scaling the depths into the interval $\{0\} \cup [0.45, 1]$ and by resizing it to $256\times 160\times 1$. Then, $i$ is fed to a convolutional network made of four convolutional layers followed by a fully connected layer, whose goal is to produce a vector embedding of the image.\nThe image embedding and $\mathbf{h}$ are then both fed to a subsequent network made of 2 fully connected layers. \nThe hand configuration $\mathbf{h}$ enters the neural network as a $1\times13$ vector where the rotation matrix $\mathbf{R}$ is flattened \cite{murphy2021implicit, zhou2019continuity} and the grasp type $g$ is passed through an embedding.\nIn $d_\phi$, both $S$ and $\mathbf{h}$ are directly fed to a network made of 2 fully connected layers. \nThe parameters $\phi$ and $\theta$ are optimized by following Algorithm~\ref{alg:learning_procedure} (Appendix~\ref{appendix:algorithms}), using Adam as the optimizer.\n\nFinally, we note that the factorization of the likelihood-to-evidence ratio forces the two ratio estimators to focus their respective capacity on the information brought by $S$ and $i$. Because of the high discriminative power of $S$, training instead a single ratio taking both $S$ and $i$ as inputs would lead to an estimator that usually discards the weaker information brought by $i$.
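\n\nThe ratio trick can be illustrated end-to-end on a toy one-dimensional analogue (our own example, unrelated to the grasping simulator), for which the exact likelihood-to-evidence ratio is known in closed form:\n\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.preprocessing import PolynomialFeatures\n\n# Toy model: theta ~ N(0, 1) and x | theta ~ N(theta, 1)\nrng = np.random.default_rng(0)\nn = 50_000\ntheta = rng.normal(size=n)\nx = theta + rng.normal(size=n)\n\n# Positives from the joint p(x, theta); negatives from p(x)p(theta)\npos = np.stack([x, theta], axis=1)\nneg = np.stack([rng.permutation(x), theta], axis=1)\nX = np.vstack([pos, neg])\ny = np.concatenate([np.ones(n), np.zeros(n)])\n\n# Degree-2 features suffice here because the true log-ratio is quadratic\nclf = make_pipeline(PolynomialFeatures(2), LogisticRegression(max_iter=1000))\nclf.fit(X, y)\n\nd = clf.predict_proba([[0.5, 0.5]])[0, 1]\nprint('estimated ratio:', d \/ (1 - d))  # exact value is about 1.5\n\end{verbatim}\nThe quantity $d\/(1-d)$ output by the classifier approximates $p(x|\theta)\/p(x)$, in direct analogy with Eq.~(\ref{eq:d_to_r}).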
\n\n\subsection{Maximum a posteriori estimation}\n\label{subsec:optimization}\n\nDue to the intractability of the likelihood function and of the evidence,\nEq.~(\ref{eq:map}) can be solved neither analytically nor numerically. \nWe rely instead on the approximation given by the likelihood-to-evidence ratio $\hat{r}$ to find an approximation of the maximum a posteriori (MAP) estimate as\n\begin{align}\n \hat{\mathbf{h}}^{*} &= \argmax_{\mathbf{h}}\hat{r}(S=1,i|\mathbf{h})p(\mathbf{h}) \\\n &= \argmin_{\mathbf{h}} -\log \hat{r}(S=1,i|\mathbf{h})p(\mathbf{h})\n \label{eq:approximate_map},\n\end{align}\nwhich we solve using gradient descent. \nFor a given $g$, the gradient of Eq.~(\ref{eq:approximate_map}) decomposes as\n\begin{equation}\n\begin{aligned}\n\label{eq:euclidean_grad}\n -\nabla_{(\mathbf{x},\mathbf{R})}\log \hat{r}(S,i|\mathbf{h})p(\mathbf{h}) = -\nabla_{(\mathbf{x},\mathbf{R})}\log &\,\hat{r}(S,i|\mathbf{h}) - \nabla_{(\mathbf{x},\mathbf{R})}\log p(\mathbf{h}).\n\end{aligned}\n\end{equation}\nOur closed-form prior $p(\mathbf{h})$ has analytical gradients. Indeed, the uniform distributions are set to have a null gradient everywhere in their domain; therefore, $\nabla_{\mathbf{x}}p(\mathbf{h}) = \mathbf{0}$. By contrast, $p(\mathbf{R})$ is a weakly informative prior and has a non-null gradient coming from the power-spherical distribution. Its derivative with respect to $\mathbf{q}$ is\n\begin{equation}\n\label{eq:grad_power_spherical}\n\begin{split}\n\nabla_{\mathbf{q}}p(\mathbf{q};\mu, \kappa) &= C(\kappa)\kappa(1+\mu^{T}\mathbf{q})^{\kappa-1}\nabla_{\mathbf{q}}(1+\mu^{T}\mathbf{q})\\\n &= C(\kappa)\kappa\mathbf{\mu}(1+\mu^{T}\mathbf{q})^{\kappa-1},\n\end{split}\n\end{equation}\nwhere $C(\kappa)$ is the normalization term.\nSince the likelihood-to-evidence ratio estimator $\hat{r}$ is modelled by a neural network, it is fully differentiable with respect to its inputs and its gradients can be computed by automatic differentiation. \nHowever, not all variables of the problem are Euclidean, and naively performing gradient descent would violate our geometric assumptions. \nLet us consider a variable $\mathcal{Z}$ on the smooth Riemannian manifold $\mathcal{M}=\mathbb{R}^{3} \times \text{SO(3)}$ with tangent space $\mathcal{T}_{\mathcal{Z}}\mathcal{M}$ and a function $f : \mathcal{M} \rightarrow \mathbb{R}$. Since SO(3) is embedded in the set of $3\times 3$ matrices $\mathbb{R}^{3\times 3}$, $f$ can be evaluated on $\mathbb{R}^{3} \times \mathbb{R}^{3\times3}$, leading to the definition of the Euclidean gradients $\nabla f(\mathcal{Z}) \in \mathbb{R}^{3} \times \mathbb{R}^{3\times3}$. In turn, these Euclidean gradients can be transformed into their Riemannian counterparts $\text{grad}f(\mathcal{Z})$ via an orthogonal projection $\mathbf{P}_{\mathcal{Z}}$ onto the tangent space $\mathcal{T}_{\mathcal{Z}}\mathcal{M}$ \cite{absil2009optimization, hu2019brief}. Therefore,\n\begin{equation}\n \text{grad} f(\mathcal{Z}) = \mathbf{P}_{\mathcal{Z}}(\nabla f(\mathcal{Z}))\n\end{equation}\nwhere the orthogonal projection onto $\mathbb{R}^{3}$ is the identity $\mathbb{I}_{3}$ and the orthogonal projection onto SO(3) at $\xi \in \text{SO(3)}$ is $\xi\text{skew}(\xi^{T}\nabla f(\xi))$, where $\text{skew}(A) \coloneqq \frac{1}{2}(A-A^{T})$.
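\n\nIn code, this projection amounts to only a couple of lines. A minimal NumPy sketch (our own illustration; the function names are not from the paper):\n\begin{verbatim}\nimport numpy as np\n\ndef skew(A):\n    # skew(A) := 0.5 * (A - A.T)\n    return 0.5 * (A - A.T)\n\ndef riemannian_grad(x, R, grad_x, grad_R):\n    # Project Euclidean gradients on R^3 x SO(3) onto the tangent space:\n    # identity on the translation part, R skew(R^T grad_R) on SO(3)\n    return grad_x, R @ skew(R.T @ grad_R)\n\end{verbatim}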
Therefore,\n\\begin{equation}\n \\text{grad} f(\\mathcal{Z}) = \\mathbf{P}_{\\mathcal{Z}}(\\nabla f(\\mathcal{Z}))\n\\end{equation}\nwhere the orthogonal projection onto $\\mathbb{R}^{3}$ is the identity $\\mathbb{I}_{3}$ and the orthogonal projection onto SO(3) at $\\xi \\in \\text{SO(3)}$ is $\\xi\\text{skew}(\\xi^{T}\\nabla f(\\xi))$ where $\\text{skew}(A) \\coloneqq \\frac{1}{2}(A-A^{T})$. Thus, we can solve Eq.~(\\ref{eq:approximate_map}) by projecting Euclidean gradients of Eq.~(\\ref{eq:euclidean_grad}) to the tangent space $\\mathcal{T}_{\\mathcal{Z}}\\mathcal{M}$ and plug them to a manifold optimization procedure. In our experiments, we use the geometrical conjugate gradient method \\cite{absil2009optimization} implemented in Pymanopt~\\cite{townsend2016pymanopt} to perform 20 optimization steps and we scan for the best value of $g$. \nFor completeness, the full optimization algorithm is provided in Algorithm~\\ref{alg:optimization_procedure} (Appendix~\\ref{appendix:algorithms}).\n\n\n\\section{Experiments}\n\\label{sec:experiment}\n\nFor training, we use 19 objects from the YCB data \\cite{7251504} (see Fig.~\\ref{fig:training_set} in Appendix~\\ref{appendix:objects}) together with 5 objects from the ShapeNet dataset \\cite{shapenet2015}, for a total of 24 types of objects. We selected a diverse range of objects compatible with the geometry of our gripper.\nIn simulation, the success rate was evaluated on the 19 objects used for training, as well as on \n5 new unseen objects from the YCB data (see Fig.~\\ref{fig:testing_set} in Appendix~\\ref{appendix:objects}).\nOnly the 19 objects from YCB are used in the real setup.\n\n\n\n\\subsection{Data generation}\n\\paragraph{Grasp generative model}\nA physical simulator is used to sample from $p(S|\\mathbf{h}, \\mathcal{O}, \\mathbf{p}_{\\mathcal{O}})$. Hand configurations, objects and object poses are sampled from their priors $p(\\mathbf{h})$, $p(\\mathcal{O})$ and $p(\\mathbf{p}_{\\mathcal{O}})$, and are then submitted to a lift test. First, a planner generates a trajectory in the joint space to bring the gripper to the hand configuration $\\mathbf{h}$. If the pose is not reachable, the test fails. Otherwise, the gripper closes its fingers until contact with the object or the gripper itself. Then, the robot lifts possibly the object to a given height. If the object is held in the gripper, the grasp is considered as successful. Simulations are performed using Pybullet~\\cite{coumans2020}.\nWe use volumetric hierarchical approximate decomposition to get convex meshes of objects from \\textit{obj} files for collision detection \\cite{mamou2009simple}.\n\n\\paragraph{Sensor generative model}\nThe sensor generative model $p(i|\\mathbf{p}_{\\mathcal{O}}, \\mathcal{O})$ is implemented in Pybullet, with an approach similar to the Blensor sensor framework \\cite{gschwandtner2011blensor} used to render depth images from a Kinect model sensor. Simulating a structured-light sensor allows a better transfer to the real setup. \nObjects and poses are sampled from their priors, $\\mathcal{O} \\sim p(\\mathcal{O}), \\mathbf{p}_{\\mathcal{O}}\\sim p(\\mathbf{p}_{\\mathcal{O}})$. Then, the object is placed and an image $i \\sim p(i|\\mathbf{p}_{\\mathcal{O}}, \\mathcal{O})$ is generated.\n\n\\paragraph{Domain randomization for sim-to-real transfer}\nGenerative models in simulation differ from their real world counterparts due to incorrect physical modelling and inaccurate physical parameters. 
This \\textit{reality gap} may lead to failures from our model because the synthetic data and the real data distributions are different. To overcome this issue, we use \\textit{domain randomization} \\cite{tobin2017domain} with nuisance parameters on the position and the orientation of the camera, the minimum and maximum distance of the depth, the field of view, and the coefficient of lateral and spinning frictions $\\mu$ and $\\gamma$.\nDomain randomization avoids the precise calibration of both the grasp and the image simulators, which can be very difficult and costly. \nWe use uniform distributions for the nuisance parameters which are difficult to measure with $\\pm 2\\%$ error and Gaussian distributions for easily measurable parameters. For the orientation of the camera, a multivariate normal variable $\\eta \\sim \\mathcal{N}(\\mathbf{0}, \\Sigma), \\Sigma=\\text{diag}(\\sigma_{\\alpha}=0.002, \\sigma_{\\beta}=0.01, \\sigma_{\\gamma}=0.002)$ is drawn and then mapped to $\\text{SO}(3)$ using the exponential map.\n\n\n\n\\subsection{Simulation benchmarks}\nWe evaluate the performance of our approach incrementally, adding algorithmic components one by one to assess their respective marginal benefits.\nFor each inference strategy, we estimate the success rate over $1000$ grasping attempts for randomly sampled objects and camera images. Nuisances parameters are resampled in simulation when evaluating the success.\nOur general results are summarized in Table~\\ref{tab:sim_metric}, while supplementary results for each individual category of objects can be found in Appendix~\\ref{appendix:supplementary-results}.\nWe first report results for strategies maximizing the (conditional) densities $p(\\mathbf{h}|\\cdot)$ of hand configurations.\nOptimizing for the maximum a priori estimate $\\mathbf{h} = \\argmax p(\\mathbf{h})$, without conditioning on success or an observation of the scene, leads to a very low success rate of $0.6\\%$. \nAs expected, these results are too poor to be usable but they should underline the informativeness of the prior. Sampling hand configurations from a uniform prior would instead result in a much smaller success rate, by about one order of magnitude (less than $0.1\\%$).\nWhen conditioning on the expected success $S=1$, performance increases to $44\\%$.\nTaking both the expected success $S=1$ and the image $i$ into account and following the inference methodology proposed in Section~\\ref{sec:method}, leads to an even larger success rate of $71\\%$ for the maximum a posteriori estimates. For the 5 new objects, we reach a comparable success rate of $75\\%$, which demonstrates the good generalization of the approach.\nIn comparison, had the properties $\\mathcal{O}$, $\\mu$, and $\\beta$ of the object been perfectly known, the success rate would reach $85\\%$, which shows that the convolutional network manages to extract most of the relevant information from the observation $i$.\nTable~\\ref{tab:sim_metric} also reports results for maximum likelihood estimates, achieving success rates of $43\\%$, $64\\%$ and $80\\%$ when maximizing the likelihoods $p(S=1|\\mathbf{h})$, $p(S=1,i|\\mathbf{h})$, and $p(S=1, \\mathcal{O}, \\mu, \\beta|\\mathbf{h})$ respectively. 
\n\n\subsection{Simulation benchmarks}\nWe evaluate the performance of our approach incrementally, adding algorithmic components one by one to assess their respective marginal benefits.\nFor each inference strategy, we estimate the success rate over $1000$ grasping attempts for randomly sampled objects and camera images. Nuisance parameters are resampled in simulation when evaluating success.\nOur general results are summarized in Table~\ref{tab:sim_metric}, while supplementary results for each individual category of objects can be found in Appendix~\ref{appendix:supplementary-results}.\nWe first report results for strategies maximizing the (conditional) densities $p(\mathbf{h}|\cdot)$ of hand configurations.\nOptimizing for the maximum a priori estimate $\mathbf{h} = \argmax p(\mathbf{h})$, without conditioning on success or an observation of the scene, leads to a very low success rate of $0.6\%$. \nAs expected, these results are too poor to be usable, but they underline the informativeness of the prior: sampling hand configurations from a uniform prior would instead result in a success rate smaller by about one order of magnitude (less than $0.1\%$).\nWhen conditioning on the expected success $S=1$, performance increases to $44\%$.\nTaking both the expected success $S=1$ and the image $i$ into account and following the inference methodology proposed in Section~\ref{sec:method} leads to an even larger success rate of $71\%$ for the maximum a posteriori estimates. For the 5 new objects, we reach a comparable success rate of $75\%$, which demonstrates the good generalization of the approach.\nIn comparison, had the properties $\mathcal{O}$, $\mu$, and $\beta$ of the object been perfectly known, the success rate would reach $85\%$, which shows that the convolutional network manages to extract most of the relevant information from the observation $i$.\nTable~\ref{tab:sim_metric} also reports results for maximum likelihood estimates, achieving success rates of $43\%$, $64\%$ and $80\%$ when maximizing the likelihoods $p(S=1|\mathbf{h})$, $p(S=1,i|\mathbf{h})$, and $p(S=1, \mathcal{O}, \mu, \beta|\mathbf{h})$, respectively. Note that maximizing $p(S=1|i, \mathbf{h})$ and $p(S=1| \mathcal{O}, \mu, \beta,\mathbf{h})$ would give the same results since $i$ and $\mathcal{O}, \mu, \beta$ are independent of $\mathbf{h}$.\nOur informative prior can explain the difference in success rates between the MAP and the MLE estimates, and motivates the use of a Bayesian approach.\n\nNot only can our framework be used for grasp planning, it also provides immediate access to the full posterior $p(\mathbf{h}|S=1, i)$ of hand configurations.\nAs an illustrative example, we extract the marginal posterior densities $p(\mathbf{x}|S=1, i), p(\mathbf{R}|S=1, i)$ and $p(g|S=1, i)$ for the simulated scene of Fig.~\ref{fig:sim_scene}, with the box centered at $(0, 0)$ without any rotation. \nThe resulting posterior is shown in Fig.~\ref{fig:posterior}.\nFirst, $p(\mathbf{x}|S=1, i)$ shows the distribution in space of the hand configuration $\mathbf{h}$. \nThe concentration along the $x$-axis and $z$-axis is high, meaning that high-density regions are located slightly behind and in front of the box, at a given height related to the geometrical dimensions of the box. \nConcerning the $y$-axis, the posterior fails to capture the symmetry and places all the density at the right of the box. \nOverall, the positions $\mathbf{x}$ which are the most likely to give a successful grasp are on the right corner of the box. \nThis is underlined by the posterior $p(\mathbf{R}|S=1, i)$. The red dots correspond to the density of the $x$-axis, the green dots to the $y$-axis and the blue dots to the $z$-axis. \nThe $x$-axis has one mode, directed toward the table, inherited from the prior, and slightly deviates to the right. The $y$-axis, however, has only two antipodal modes, by contrast to the prior. These modes correspond to the situation in which the fingers are placed on the front surface or the back surface. The $z$-axis can be constructed by taking the cross product between $x$ and $y$. Uncertainties from $x$ and $y$ are propagated, leading to two antipodal modes with lower concentration than $y$. \nFinally, the posterior $p(g|S=1, i)$ for the grasp type indicates a preference towards the pinch and wide modes over the basic mode. 
The pinch mode is preferred when the position $\mathbf{x}$ is far from the right corner, while the wide mode is mainly used when $\mathbf{x}$ is located near the right corner.\n\n\begin{table}\n\centering\n \resizebox{0.80\linewidth}{!}{\n \begin{tabular}{llcc}\n \hline\n \hline\n Grasping inference strategy & & \multicolumn{2}{c}{Success rate}\\\n \hline\n & & {\it Sim} & {\it Real} \\\n \hline\n \multirow{2}{*}{Prior based} & $\mathbf{h} \sim p(\mathbf{h})$ & $0.8\%$ & - \\\n & $\mathbf{h}=\argmax p(\mathbf{h})$ & $0.6\%$ & - \\\n \hline\n \multirow{2}{*}{Metric based} & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(S=1| \mathbf{h})$ & $43\%$ & - \\\n & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(\mathbf{h}| S=1)$ & $44\%$ & $46\%$ \\\n \hline\n \multirow{2}{*}{Partial observation based}\n & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(S=1, i | \mathbf{h})$ & $64\%$$\/71\%$ & - \\\n & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(\mathbf{h}| S=1, i)$ & $71\%$$\/\mathbf{75}\%$ & $70\%$ \\\n \hline\n \multirow{2}{*}{Full observation based (ideal)} &\n $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(S=1, \mathcal{O}, \mu, \beta| \mathbf{h})$ & $80\%$ & -\\\n & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(\mathbf{h}|S=1, \mathcal{O}, \mu, \beta)$ & $85\%$ & - \\\n \hline\n \hline\n \end{tabular}}\n \vspace{1em}\n \caption{Grasping success rate for various inference strategies of the hand configuration. Success rates separated by a slash refer to objects seen during training and to the 5 new objects, respectively. The success rate obtained by performing Bayesian posterior inference through the full forward simulation reaches $71\%$ for objects seen during training and $75\%$ for 5 new objects. In real experiments, the success rate reaches $70\%$. \n }\n \label{tab:sim_metric}\n\end{table}\n\n\begin{figure}\n \centering\n \resizebox{0.70\linewidth}{!}{\includegraphics{image\/results\/posterior_all.pdf}}\n \caption{Posterior $p(\mathbf{h}|S=1, i)$ for the setup in Fig.~\ref{fig:sim_scene} with the box centered at $(0, 0)$. (Left) Posterior $p(\mathbf{x}|S=1, i)$, (Middle) Posterior $p(\mathbf{R}|S=1, i)$, (Right) Posterior $p(g|S=1, i)$.}\n \label{fig:posterior}\n\end{figure}\n\n\subsection{Physical benchmarks}\nWe carried out experiments with a Robotiq 3-finger gripper attached to a UR5 robotic arm, as shown in Figure~\ref{fig:real_scene}. A Kinect v1 camera provides depth images and is rigidly linked to the robot base. The robotic arm is controlled in position in the joint space. \nCommunications are performed within the ROS framework. \nWe calibrate the centre of the table by computing the trajectory in the simulator and then sending it to the real robot. \nWe perform 10 trials per object, for a total of 190 grasps. Objects are placed as accurately as possible at $(x_{\mathcal{O}}=0, y_{\mathcal{O}}=0, \varphi_{z,\mathcal{O}}=0)$. \nAs shown in Table~\ref{tab:sim_metric}, our success rate of 70\% is similar to the one obtained in simulation, which indicates that the simulation-to-reality transfer works well, at least on average. \nThese results also demonstrate competitive performance with respect to related works (see Section~\ref{sec:related_work}), although this remains difficult to establish because of the distinct hardware setups.\nWe observe that failures are mainly behavioral and geometric.\nBehavioral failures arise when the simulation does not model the physics correctly. For example, in the real setup, the bowl slides on the table when the gripper closes its fingers, while in simulation, the bowl is just lifted. 
We could reduce these errors by using a more accurate simulator. Geometric failures arise when there is a shift in the location or in the orientation of the object. Most of the time, the robot either collides with the object or, for smaller objects, misses it. These failures could be avoided using a more precise calibration, additional sensors, or more extensive domain randomization. \nFinally, the computation time is reasonable (from 5 to 10 s), but could be decreased by tuning the architecture of the neural network, lowering the number of optimization steps, or using more powerful hardware. We leave this for future work.\n\n\n\section{Related work}\n\label{sec:related_work}\n\n\nOver the last decade, progress in multi-fingered robotic grasping has been steady \cite{varley2015generating,lu2019modeling,lu2020planning,lundell2020multi, lundell2021ddgc,wu2020generative, merzic2019leveraging}, thanks to differentiable models such as neural networks. Unfortunately, the variety of robotic hands, their actuation modes and sensor inputs, and the lack of standard benchmarks make it difficult to compare these advances fairly against one another.\n\nAmong early works, \cite{varley2015generating} identify poses and fingertip positions of stable grasps with a deep neural network from RGB-D images and use a planner (GraspIt!) based on simulated annealing to determine the best hand configurations. They reach a success rate of $75\%$ over a set of 8 objects but suffer from slow execution times (16.6 s on average). By contrast, our method is faster and reaches comparable performance over a larger set of objects.\n\cite{lundell2020multi} use generative adversarial networks to sample both grasp poses and finger joints efficiently based on RGB-D images. While fast, their approach reaches only a $60\%$ success rate in real experiments.\nThe work of \cite{lu2019modeling} is the most similar to ours. They perform grasp planning as probabilistic inference via a classifier trained to predict the success of a grasp. They retrieve maximum a posteriori estimates using gradient ascent and the fact that the classifier is fully differentiable. The prior distribution is fitted with a Gaussian mixture model from the dataset. \nIn contrast, our method uses an analytical prior based on power-spherical distributions, does not require an external grasp sampler, and relies on a neural classifier to approximate the likelihood-to-evidence ratio.\nSimilarly, \cite{lu2020planning} compute maximum likelihood estimates of the hand configuration by making use of gradients provided by a neural network. \nFinally, both of these works treat the rotations with Euler angles and optimize them as real numbers with boundary constraints. 
\nThis representation is not suitable for a neural network according to \cite{zhou2019continuity}.\nInstead, our optimization relies on Riemannian conjugate gradients, which preserve the geometrical structure of the rotation group.\nOther interesting approaches to multi-fingered grasping include the use of deep reinforcement learning based on vision and tactile sensors~\cite{wu2020generative}, or the use of tactile information only for learning a closed-loop controller~\cite{merzic2019leveraging}.\n\nFrom a statistical perspective, several Bayesian likelihood-free inference algorithms~\cite{marin2012approximate, beaumont2002approximate, Papamakarios2019SequentialNL, SNPEA, SNPEB, APT, pmlr-v119-hermans20a} have been developed to carry out inference when the likelihood function is implicit and intractable. \nThese methods operate by approximating the posterior through rejection sampling or by learning parts of Bayes' rule, such as the likelihood function, the likelihood-to-evidence ratio, or the posterior itself. \nThese algorithms have been used across a wide range of scientific disciplines such as particle physics, neuroscience, biology, or cosmology~\cite{cranmer2020frontier}.\nTo the best of our knowledge, our work is one of the first to apply such methods to the direct planning of successful grasps.\nMore specifically, we rely here on the amortized inference approach of \cite{pmlr-v119-hermans20a} to carry out inference within seconds for any new observation $i$. In contrast, an approach such as ABC \cite{marin2012approximate, beaumont2002approximate} could take up to hours to determine a single hand configuration $\mathbf{h}$, since data would need to be simulated on-the-fly for each observation $i$ due to the lack of amortization of ABC. \nNeural posterior estimation~\cite{APT} is also amortizable but would have required new methodological developments to be applicable to distributions defined on manifolds, such as those needed here for the rotational part of the pose.\n\n\n\n\n\section{Summary and future work}\n\label{sec:summary}\n\nWe demonstrate the usefulness and applicability of simulation-based Bayesian inference to multi-fingered grasping.\nThe approach is generic yet powerful because it can work with any simulator, thereby incorporating domain knowledge ranging from the simplest to the most sophisticated, while leveraging recent developments in deep learning to solve the Bayesian inference problem. \nMaximum a posteriori hand configurations are found by directly optimizing through the resulting amortized and differentiable expression for the posterior.\nThe geometry of the configuration space is accounted for by a Riemannian manifold optimization procedure through the neural posterior.\nWe demonstrate a working proof-of-concept achieving robust multi-fingered grasping, both in simulation and in real experiments, thanks to domain randomization. \nOur success rate is comparable to previous works.\n\n\n\section*{Acknowledgments}Norman Marlier would like to acknowledge the Belgian Fund for Research training in Industry and Agriculture for its financial support (FRIA grant). Computational resources have been provided by the Consortium des \u00c9quipements de Calcul Intensif (C\u00c9CI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No. 
2.5020.11 and by the Walloon Region.\nGilles Louppe is the recipient of the ULi\u00e8ge - NRB Chair on Big data and is thankful for the support of NRB.\n\n\n\n\bibliographystyle{unsrt} \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}