{"text":"\\section{Introduction}\n\\label{intro}\nWhile many inventions are granted a patent, only a small fraction of them represent ``important'' technological advances or will have a significant impact on the market.\nAs a result, a key problem in technological forecasting is to detect which patents are important as early as possible. The literature has designed various indicators of patent importance based on patent data analysis, and it has been found quite consistently (see Section \\ref{sec:related}) that at least on average, important patents tend to receive more citations. However, this relationship is typically noisy, which suggests that more sophisticated metrics could outperform simple citation count in identifying important patents. Importantly, it takes time for a patent to accumulate citations, which implies that simply counting the number of citations received by a patent may be effective for uncovering old important patents, but not to detect important patents shortly after they are granted. \n\nIn this paper, we propose a network-based metric that identifies important patents better and earlier than citation count. Our metric, time-rescaled PageRank, was introduced by \\citet{mariani2016identification} to identify expert-selected important papers in physics. It is built on Google's PageRank algorithm \\citep{brin1998anatomy} by requiring that node score is not biased by node age. This metric is computationally efficient and thus can be applied on very large datasets~\\citep{vaccario2017quantifying}.\nHere we validate this metric on the US patent citation network (1926-2010), by evaluating its ability to detect the expert-selected ``important'' patents from \\citet{strumsky2015identifying}.\n\nWe find that Google's PageRank outperforms raw citation count in identifying the important patents, which supports the idea that important patents tend to be cited by other important patents. \nThis idea is further supported by the strong assortative degree-degree correlations observed in the network (Fig. \\ref{degree_correlations} below): highly-cited patents are typically cited by other highly-cited patents significantly more than what we would expect by chance; at the same time, highly-cited patents tend to cite other highly-cited patents.\nHowever, both PageRank and citation count are biased towards old patents; removing this bias is crucial to compare young and old patents on the same scale~\\citep{mariani2016identification}.\n\nTo demonstrate the usefulness of removing the age bias\nin the context of technological forecasting, we evaluate the metrics' performance in identifying the important patents shortly after they are issued, and find that time-rescaled PageRank significantly outperforms citation count and original PageRank in the first $10$ years after issuing, approximately.\nFinally, we use a time-respecting network null model \\citep{ren2017time} to generate randomized networks where the individual patents' citation count dynamics is the same as in the original network. We find that \nboth the observed degree-degree correlations and the performance advantage of PageRank-related metrics over citation count cannot be found in the randomized networks, which indicates that these properties emerge as a result of network effects that go beyond patents' neighborhood.\n\nOur findings demonstrate that in order to timely identify the significant patents, both network topology and time information play an essential role. 
In more general terms, the advantage of PageRank-related metrics over citation-counting metrics, together with the strong degree-degree correlations of the patent citation network, supports the hypothesis that significant technological advances ``lean on the shoulders of giants'' in a similar way as scientific advances \citep{bornmann2010scientific}.
Yet, we find that the citation dynamics of scientific papers and patents are characterized by substantially different timescales. As a result, because patents (on average) take more time than papers to accumulate citations, the early identification of significant patents is more challenging than that of significant papers.


\section{Related work}
\label{sec:related}
Broadly speaking, our work is related to studies of the relation between popularity metrics and significance in creative works such as scientific papers \citep{mariani2016identification,comins2017citation}, movies \citep{spitz2014measuring,wasserman2015cross,ren2017time}, or music albums \citep{monechi2017significance}.

In the context of patent analysis, it is well known that patents are of extremely different quality \citep{silverberg2007size}. While a direct measure of patent value is unavailable, patent data are very rich and there have been many attempts at providing indicators of patent value or novelty based on data contained in patent documents, such as the number of claims, the number and type of technology categories, the size of the patent family, and renewal fees, to give just a few major examples. By far the most widely used patent impact indicator is the number of citations received, and many studies have established a correlation between patent citations and patent value. For instance, \citet{trajtenberg1990penny} found that to understand the evolution of the social value generated by the CT scan industry, it was better to count the citations received by patents than to simply count patents.
\citet{albert1991direct} asked experts to rate the technical impact of patents in the area of silver halide technology, and found that highly cited patents received higher ratings. \citet{harhoff1999citation}, \citet{jaffe2000knowledge} and \citet{harhoff2003citations}, using survey data, found that citations were correlated with the value reported by the inventors.
\citet{lanjouw2004patent} collected several indicators of patent quality and concluded that citations and the number of claims were the most important indicators of quality.
Recently, \citet{zhang2017entropy} proposed to weight $11$ indicators of patent value using the Shannon entropy, and selected forward citations as one of the most important indicators of technological value.
\citet{hall2005market} found that firm market value (Tobin's Q ratio) was correlated with the citation-weighted patent portfolio of the firms.
\citet{carpenter1981citation} and \citet{fontana2013reassessing} compared patents associated with inventions that received a prize with patents from a control group, finding again evidence that ``important'' patents are more cited (the mean number of citations received was found to be about 50\% higher for important patents).

But in spite of the repeated evidence of the positive relationship between citations received and different indicators of value or quality, it is often acknowledged that this relationship is very noisy \citep{harhoff1999citation}, thus leaving open the possibility that more elaborate indicators could outperform simple citation count in predicting patent value. Here we address two basic (and well-known) problems of evaluation by simply counting citations: it fails to take into account the importance of the citing patents \citep{narin1976evaluative}; and it fails to correct for the fact that young but potentially important patents have not had the time to accumulate a high number of citations.

The basic motivation for using citations received as an indicator of quality is that citations indicate some form of knowledge spillovers. As argued by \citet{jaffe2000meaning}, citations reflect the fact that either a new technology builds on an existing one, or that the two serve a similar purpose. As a result, chains of citations allow us to trace technological evolution, and hence patent centrality in the citation network can be used to score the patents. But not all measures of centrality are appropriate. For instance, in the case of patents, we want to value positively how many citations are received, but not necessarily how many citations are made.

Whether the references made by a given patent can be used to infer the patent's importance is a delicate issue.
In principle, one could argue that a patent with many references has high potential, because it draws from many existing inventions. But an opposite argument could be made as well, because a patent with many references also makes it (legally) clear that its claims are somewhat limited by the claims of the cited patents -- in that sense, references indicate a limitation of novelty. It is not yet well understood which of these two arguments is the more appropriate, and the empirical evidence so far is inconclusive \citep{jaffe2017patent}; here, we will consider that citations received are a weaker signal of importance when they come from patents that make a lot of references.

Based upon the aforementioned considerations, Google's PageRank centrality \citep{brin1998anatomy} is especially suited for identifying important patents for three reasons: (i) it takes into account how many citations are received by a patent, (ii) it takes into account how many citations are received by the citing patents, and (iii) it takes into account that citations from patents that have many references are less indicative of the cited patent's quality.

We are not the first to suggest that PageRank \citep{lukach2007ranking,bedau2011evidence,shaffer2011entrepreneurial,dechezlepretre2014knowledge,bruck2016recognition} and similar eigenvector-based metrics \citep{corredoira2015measuring} can be computed on patent citation networks to identify important patents. However, robust evidence that PageRank is more effective than citation count in identifying the key patents is still lacking.
In addition, both citation counts and PageRank fail to take into account the dynamic, evolving nature of the citation network. Because the patent system grows with time, older patents tend to have more citations simply because they have been there for a longer time and, on top of that, the preferential attachment mechanism \citep{valverde2007topology} further magnifies their advantage. This problem has long been acknowledged, and the usual solution is either to limit citation counts to a fixed citation ``time span'', such as the first five years after issuing (e.g., \citet{lanjouw2004patent}), or to control for the grant year in regressions (e.g., \citet{kogan2012technological}).

Here, we propose an alternative approach, put forward recently by \citet{mariani2016identification} in the context of scientific publications, which can be applied equally well to citation counts and other centrality metrics, and produces a single score without (or with dramatically reduced) age bias.

Our work complements other efforts to identify important items using citation networks. For instance, \citet{comins2017citation} report that Reference Publication Year Spectroscopy, a method that looks at the temporal distribution of cited references, is able to identify the biomedical research milestones listed by experts. In the patent literature, \citet{castaldi2015related} proposed to identify ``superstar'' patents as those in the extreme right tail of the citation count distribution, where a power-law behavior was observed. Another popular approach, main path analysis, was introduced in the bibliometric literature by \citet{hummon1989connectivity}, and further developed and applied to patents by \citet{verspagen2007mapping}. In the spirit of betweenness centrality, it seeks to extract important nodes and edges based on how often geodesic paths pass through them, thus revealing continuity or disruption in technological trajectories. This aspect was exemplified by \citet{martinelli2012emerging} for the telecommunication switching industry, and by \citet{epicoco2013knowledge} for the green chemistry sector. \citet{triulzi2017predicting} measured patent centrality using a normalized version of centrality metrics, and found that technological domains with central patents also tend to have faster technological improvement rates (a separately measured indicator of progress in technological performance). Finally, as a last example of this rich literature, \citet{martinelli2014measuring} proposed to measure knowledge persistence by giving higher value to patents that are cited by patents that do not cite many patents -- an idea that we will use here too, as PageRank normalizes the received citations by the outdegree of the citing nodes.

In this work, we focus on comparing PageRank with citation counts, and age-rescaled metrics with non-rescaled metrics. This allows us to evaluate whether network-based metrics outperform raw citation counts, and to determine over which range of time the rescaling procedure allows us to better identify the significant patents. In addition, because our analysis follows closely the study of milestone physics papers by \citet{mariani2016identification}, we are able to evaluate the similarities and differences between scholarly and patent citation data. We find that patents take much longer than papers to receive citations, which makes it harder to identify important patents early on.
\begin{figure*}
\centering
\includegraphics[scale=0.6]{assortativity-DCM}
\caption{Degree-degree correlations in the US patents' citation network. The gray circles represent the observed average neighbors' indegree for all the indegree values; the blue circles represent the same information in a histogram with equal bin length on a logarithmic scale; the green squares represent the mean behavior observed within ten realizations of the dynamic configuration model (see Section \ref{sec:dcm} for a description of the model); the shaded areas around the line connecting the green squares represent one standard deviation around the mean.
}
\label{degree_correlations}
\end{figure*}

\section{Data}

\subsection{The U.S. patent citation network}
We analyzed the US patent dataset collected by \citet{kogan2012technological}, which spans the period between 01-01-1926 and 11-02-2010. As compared to the well-known NBER patent data, this dataset has a vastly improved coverage.
We pre-processed the data to only keep the citations between patents that were issued within this temporal period, thereby removing the citations to patents issued before 01-01-1926.
The resulting citation network analyzed in this paper is composed of $N=6,237,625$ nodes (patents) and $E=45,962,301$ directed edges (citations) between them.

In this dataset, the in- and out-degree distributions of the US patent citation network are in agreement with previous findings \citep{valverde2007topology,csardi2007modeling,silverberg2007size}: the two distributions are relatively broad and span more than three orders of magnitude.

In previous works, \citet{mariani2016identification} found that PageRank-related metrics outperform citation-counting metrics in identifying significant nodes in a scientific paper citation network, whereas the same does not happen in a movie citation network \citep{ren2017time}.
Additionally, \citet{ren2017time} found remarkably different degree-degree correlations for the two networks: the papers' citation network is strongly assortative, whereas the movies' citation network is basically uncorrelated.
This observation led \citet{ren2017time} to suggest that the relative performance of PageRank-related and citation-counting metrics may be related to the network correlation patterns: when the network is uncorrelated, PageRank and indegree bring basically the same information \citep{fortunato2008approximating}; when there are significant structural correlations, PageRank brings additional information that may be valuable to improve ranking performance.

Fig.~\ref{degree_correlations} shows that the US patent network exhibits strong degree-degree correlations\footnote{The assortativity plot used here is arguably the simplest method to visualize network structural correlations, as it simply represents the average (in- or out-)degree of nodes' neighbors as a function of node (in- or out-)degree. Since node centrality is related to incoming connections, we focus here on the average indegree of citing and cited nodes as a function of node indegree.}: highly-cited patents tend to be cited by other highly-cited patents, and to cite other highly-cited patents.
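As an illustration, the quantity plotted in Fig.~\ref{degree_correlations}A can be computed with a few lines of code. The following Python sketch assumes the network is given as a list of \texttt{(citing, cited)} pairs with integer node labels; the representation and variable names are our own illustrative choices, not part of the original analysis pipeline.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def mean_citing_indegree(edges, n_nodes):
    # edges: list of (citing, cited) pairs.
    indeg = np.zeros(n_nodes, dtype=int)
    for citing, cited in edges:
        indeg[cited] += 1
    # For each indegree value k, accumulate the indegrees of the
    # patents that cite a patent of indegree k.
    neigh_sum = defaultdict(float)
    neigh_cnt = defaultdict(int)
    for citing, cited in edges:
        k = indeg[cited]
        neigh_sum[k] += indeg[citing]
        neigh_cnt[k] += 1
    return {k: neigh_sum[k] / neigh_cnt[k] for k in neigh_sum}
\end{verbatim}
The analogous quantity for the cited neighbors (Fig.~\ref{degree_correlations}B) is obtained by exchanging the roles of the citing and cited nodes.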
This assortative pattern cannot be explained by a null model that preserves the temporal evolution of node degree (\citealp{ren2017time}; see Section \ref{sec:dcm} for details), which suggests that it is a genuine network effect.

In agreement with similar findings for scientific papers \citep{ren2017time,bornmann2010scientific}, Fig.~\ref{degree_correlations}A suggests that high-impact patents are able to inspire other high-impact patents more than expected by chance, whereas low-impact patents tend to be cited by other low-impact patents; at the same time (Fig. \ref{degree_correlations}B), high-impact patents rely on other high-impact patents more heavily than expected by chance.
Following \citet{ren2017time}, the strong degree-degree correlations in the patent citation network open the door to the possibility that metrics that take higher-order network effects into account outperform simple citation counts.


\subsection{Expert-selected historically significant patents}
\label{sec:strumsky}
In a recent work, \citet{strumsky2015identifying} listed $175$ patents carefully selected ``on the basis of consultation with engineers, scientists, historians, textbooks, magazine articles, and internet searches''. The patents in the list ``all started technological pathways which affected society, individuals and the economy in a historically significant manner'' \citep{strumsky2015identifying}. These significant patents thus provide a good ``ground-truth'' set of patents that can be used to discern the ability of different metrics to uncover the significant patents. The complete list of significant patents can be found in Appendix C of \citet{strumsky2015identifying}; the list is quite heterogeneous and comprises patents ranging from simple objects that are part of our everyday life (like the toothbrush and the Post-it note) to more sophisticated inventions (like the Game Boy and the DeskJet printer).

Presence in the list of significant patents by Strumsky and Lobo is a binary variable: a patent is either in the list or not; we can therefore study the ability of the metrics to rank these outstanding patents as high as possible, in agreement with the main goals of this paper.
While there are $175$ significant patents in the Strumsky-Lobo list, we restrict our analysis to those patents that were issued within our dataset's temporal span, and remove the design patents, which are absent from our dataset.
This leaves us with $M_0=112$ significant patents.


\begin{table*}[t]
\centering
\begin{tabular}{lp{5cm}p{5cm}}
\toprule
 & Static & Age-rescaled \\
\midrule
Citation-counting &
\textbf{Citation count}, $c$\newline
A patent is important if it is cited by many other patents &
\textbf{Rescaled citation count}, $R(c)$\newline
Built on citation count by requiring that patent score is not biased by node age\\[4pt]
Network-based &
\textbf{PageRank score}, $p$\newline
A patent is important if it is cited by other important patents &
\textbf{Rescaled PageRank score}, $R(p)$\newline
Built on PageRank score by requiring that patent score is not biased by node age\\
\bottomrule
\end{tabular}
\caption{Metrics considered in this paper, together with their main assumptions.}
\label{table:metrics}
\end{table*}

\section{Methods}
In this section, we define the metrics used to identify important patents, and the indicators of performance that we use to evaluate them.
Many network centrality metrics \citep{lu2016vital,liao2017ranking} and bibliometric indicators \citep{waltman2016review} have been devised in the literature. Here, we narrow our focus to four metrics (see Table \ref{table:metrics} for a summary): citation count $c$, PageRank score $p$, (age-)rescaled citation count $R(c)$ and (age-)rescaled PageRank $R(p)$. Differently from citation count, PageRank score takes the whole network structure into account and weights citations differently according to the centrality of the citing nodes. Rescaled citation count and rescaled PageRank score are obtained from citation count and PageRank score, respectively, by explicitly requiring that node score is not biased by node age (see details below).

\begin{figure}
\centering
\includegraphics[width=7.5cm]{bias}
\caption{Bias by node age of the rankings by the metrics. Patents are divided by their age into 40 equally-sized groups; the bias by patent age is represented by the number $n_{0.005}$ of patents from each age group in the top $f=0.5\%$ of the overall patent ranking. The black horizontal line represents the expected unbiased value $n^{(0)}_{0.005}=0.005\,N/40$. Results for different (small) values of $f$ are qualitatively similar.}
\label{bias}
\end{figure}


\subsection{Static patent-level metrics}
\label{sec:1}

\paragraph{Citation count, c}
The citation count of a given patent is simply the total number of citations the patent has received so far. In terms of the patent citation network's adjacency matrix $\mathsf{A}$ ($A_{ij}$ is equal to one if patent $j$ cites patent $i$, zero otherwise), the citation count $c_i$ of patent $i$ is defined as $c_i=\sum_{j}A_{ij}$; $c_i$ is referred to as node $i$'s indegree in the language of network science \citep{newman2010networks}. Ranking the patents by citation count assumes that \emph{a patent is important if it is cited by many other patents}.

The ranking by citation count is strongly biased by node age in our dataset. To visualize and quantify this bias, we divide the $N$ patents into $40$ equally-sized groups based on their age.
We then count how many patents from each age group are in the top-$f$ fraction of the patent ranking by $c$. For an ideal unbiased ranking, we would expect $n_{f}^{(0)}=f\,N/40$ patents from each age group in the top-$f$ fraction, with small deviations.
The result is strikingly different for citation count (see Fig.~\ref{bias}), which underestimates both the oldest and the most recent patents in the dataset.
While the bias against recent patents is expected (they have had less time to accumulate citations), the bias against older patents is more surprising; it may be due to a variety of factors, such as a higher propensity to cite patents available electronically, a prevalence of patents in technological domains for which fewer citations tend to be made, or patent citation patterns changing with time -- for a discussion of these and other reasons for citation bias, see \citet{jaffe2017patent}.
To counteract this bias, we use a simple normalization procedure, described in Section~\ref{sec:rescaled}.
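The bias measure $n_f$ can be computed along the lines of the following Python sketch; the inputs (an array of metric scores and the patent indices ordered from oldest to newest) are hypothetical names used here only for illustration.
\begin{verbatim}
import numpy as np

def age_bias(scores, order_by_age, f=0.005, n_groups=40):
    # Mark the patents that belong to the top-f fraction of the
    # ranking by decreasing score.
    N = len(scores)
    in_top = np.zeros(N, dtype=bool)
    in_top[np.argsort(-scores)[:int(f * N)]] = True
    # Split the patents, ordered from oldest to newest, into
    # n_groups equally-sized age groups and count top-f members.
    groups = np.array_split(np.asarray(order_by_age), n_groups)
    return [int(in_top[g].sum()) for g in groups]
\end{verbatim}
An unbiased metric would return values close to $f\,N/40$ for every age group.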
\paragraph{PageRank, p}
Google's PageRank is a node ranking algorithm introduced by \citet{brin1998anatomy} with the original goal of ranking websites in the World Wide Web. Since then, the algorithm has found applications in a broad range of real systems \citep{gleich2015pagerank,liao2017ranking}.
The PageRank score $p_i$ of node $i$ is defined through the equation \citep{berkhin2005survey}
\begin{equation}
p_i=\alpha\,\sum_{j:k^{out}_j>0}\frac{A_{ij}}{k^{out}_j}\,p_j+\alpha\,\sum_{j:k^{out}_j=0}\frac{p_j}{N}+\frac{1-\alpha}{N},
\label{pr}
\end{equation}
where $k^{out}_j=\sum_{l}A_{lj}$ is the number of references made by patent $j$ ($k^{out}_j$ is referred to as node $j$'s \emph{outdegree} in the language of network science) and the term $(1-\alpha)/N$ represents the so-called ``teleportation term'' \citep{berkhin2005survey,gleich2015pagerank}.
The algorithm is built on the thesis that \emph{a node is important if it is cited by other important nodes} \citep{franceschet2011pagerank}: the score of a given patent $i$ depends linearly on the scores of the patents that cited patent $i$. We set $\alpha=0.5$, which is the common choice in citation networks \citep{chen2007finding,walker2007ranking,bruck2016recognition}.

In practice, the vector of PageRank scores can be obtained from Eq.~\eqref{pr} by the power iteration method. Starting from a uniform score vector $p_i^{(0)}=1/N\,\forall i$, we iteratively update the scores according to the equation \citep{berkhin2005survey}
\begin{equation}
p_i^{(n+1)}=\alpha\,\sum_{j:k^{out}_j>0}\frac{A_{ij}}{k^{out}_j}\,p_j^{(n)}+\alpha\,\sum_{j:k^{out}_j=0}\frac{p_j^{(n)}}{N}+\frac{1-\alpha}{N}.
\end{equation}
Note that the previous equation is the master equation of a two-fold stochastic process on the network where at each step $n$, a random walker either performs a jump along the network edges (with probability $\alpha$), or ``teleports'' to a randomly chosen node in the network (with probability $1-\alpha$).
The PageRank vector of scores $\mathbf{p}=\{p_i\}$ can therefore be interpreted as the stationary state of this Markov process.
We halt the iterations when
\begin{equation}
\sum_i \big\lvert p^{(n)}_{i}-p^{(n-1)}_{i}\big\rvert < \epsilon,
\end{equation}
where we set $\epsilon=10^{-9}$. This procedure guarantees convergence after a number of iterations smaller than $\log\epsilon/\log\alpha$, independently of $N$ \citep{berkhin2005survey}.
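For concreteness, a minimal Python rendering of the power iteration of Eq.~\eqref{pr} could look as follows. This is an unoptimized sketch; a production implementation on $N\sim 6\times 10^6$ nodes would use sparse matrix--vector products.
\begin{verbatim}
import numpy as np

def pagerank(edges, N, alpha=0.5, eps=1e-9):
    # edges: list of (j, i) pairs meaning "patent j cites patent i".
    out_deg = np.zeros(N)
    for j, i in edges:
        out_deg[j] += 1
    p = np.full(N, 1.0 / N)          # uniform initial scores
    while True:
        p_new = np.zeros(N)
        for j, i in edges:           # score spread along citations
            p_new[i] += alpha * p[j] / out_deg[j]
        dangling = p[out_deg == 0].sum()   # nodes with no references
        p_new += alpha * dangling / N + (1.0 - alpha) / N
        if np.abs(p_new - p).sum() < eps:  # halting criterion
            return p_new
        p = p_new
\end{verbatim}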
While PageRank's basic premise is plausible, the algorithm is \emph{static}, whereas real networks evolve in time. This causes the ranking by the algorithm to be severely biased by node age in growing networks \citep{chen2007finding,mariani2015ranking,mariani2016identification,vaccario2017quantifying}.
The ranking by PageRank is strongly biased by node age also in our dataset (see Fig.~\ref{bias}).
PageRank's bias in the patent citation network has different features with respect to its bias in the papers' citation network reported by \citet{mariani2016identification}.
While in both datasets recent nodes are strongly disadvantaged by the algorithm, the oldest patents are not the most overvalued by the PageRank algorithm, as opposed to what has been observed for papers \citep{mariani2016identification}.
This is a direct consequence of the significantly smaller citation count of the oldest patents. The peak of $n_{0.005}(p)$ is nevertheless shifted to the left with respect to the peak of $n_{0.005}(c)$, which means that PageRank tends to favor older nodes with respect to citation count.


\subsection{Time-rescaled metrics R(p) and R(c)}
\label{sec:rescaled}
The strong age bias of the rankings by citation count and PageRank score implies that patents that appeared in some time periods are much more likely to rank well than other patents, independently of their properties such as novelty and significance. In bibliometrics \citep{radicchi2008universality,waltman2016review} and patent analysis \citep{triulzi2017predicting}, it is common to attempt to suppress this bias by age through various normalization procedures.

Here, we apply the rescaling procedure proposed by \citet{mariani2016identification} to citation count and PageRank.
The rescaling procedure consists of comparing the score $s_i$ of a given patent $i$ with the scores of the patents that belong to a reference set of patents of similar age\footnote{A potential limitation of this approach is that by comparing each patent's score with only the scores of patents of similar age, it may underestimate the importance of patents that happened to be issued in periods during which many breakthrough inventions took place. However, despite the well-known theory of Kondratiev waves and innovation clustering, empirical evidence for the existence of such periods is weak and debated. For instance, \citet{silverberg2003breaking} found no evidence for innovation clustering in a list of basic inventions, whereas \citet{korotayev2011kondratieff} found evidence of Kondratiev cycles in the world-level patent output per inhabitant.} as patent $i$.
By labeling the patents in order of decreasing age\footnote{We order by increasing ID those patents that are issued on the same day.}, the reference set is the set of $\Delta$ patents $j$ such that $\lvert i-j\rvert < \Delta/2$.\footnote{The temporal window is defined in a slightly different way for patents close to the beginning and the end of the dataset.
For the $\Delta/2$ patents closest to the beginning (end) of the dataset, the temporal window is given by the $\Delta$ oldest (most recent) patents in the dataset.} Constructing the set of comparable patents based on a continuously moving window centered on a focal patent is advantageous with respect to grouping the patents by year, as the latter imposes a sharp distinction between patents granted very closely in time but on either side of the January 1st boundary.

Denoting by $\mu_i(s)$ and $\sigma_i(s)$ the mean value and the standard deviation, respectively, of score $s$ over patent $i$'s reference set, the rescaled score $R_i(s)$ of patent $i$ is given by
\begin{equation}
R_i(s)=\frac{s_i-\mu_i(s)}{\sigma_i(s)}.
\end{equation}
In this work, we set $\Delta=15,000$; our results are robust with respect to other choices of $\Delta$ (not shown here).

As shown in Fig.~\ref{bias}, the rescaled scores $R(c)$ and $R(p)$ are much less biased by node age than the original scores $c$ and $p$: $n_{0.005}(R(c))$ and $n_{0.005}(R(p))$ are remarkably stable across different age groups, and their value is always close to the expected unbiased value $n_{0.005}^{(0)}$. In agreement with \citet{mariani2016identification,vaccario2017quantifying,liao2017ranking}, this shows that the proposed rescaling procedure is effective in suppressing the temporal biases of the static metrics.
By giving old and recent patents the same chance of appearing at the top of the ranking, we expect the rescaled metrics to bring a substantial advantage in identifying valuable patents shortly after issuing. As the rankings by static metrics are biased toward old patents, we also expect the rescaled metrics' advantage in identifying significant patents to shrink (and eventually vanish) as we consider older significant patents.
These hypotheses are validated in the next Section.
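A minimal sketch of the rescaling procedure described in this section, assuming that \texttt{scores} is an array ordered by decreasing patent age (ties broken by increasing patent ID), is:
\begin{verbatim}
import numpy as np

def rescale(scores, delta=15000):
    # z-score of each patent against a moving window of delta
    # patents of similar age; the window is clamped at the
    # dataset boundaries, as described in the text.
    N, half = len(scores), delta // 2
    R = np.empty(N)
    for i in range(N):
        lo = max(0, min(i - half, N - delta))
        window = scores[lo:lo + delta]
        R[i] = (scores[i] - window.mean()) / window.std()
    return R
\end{verbatim}
This direct loop costs $O(N\Delta)$ operations; running updates of the window mean and variance would reduce the cost to $O(N)$.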
\subsection{Evaluation of the metrics' performance in identifying the significant patents}

To make quantitative statements about the ability of the metrics to single out the significant patents of different age, we introduce two evaluation metrics: the average ranking ratio and the identification rate.

\paragraph{Average ranking ratio}
A straightforward way to assess the metrics' performance in identifying the significant patents would consist in calculating the average ranking position of the significant patents $t$ years after they are issued. However, this simple measure is highly sensitive to the ranking position of the lowest-ranked items \citep{mariani2016identification}.

To prevent this shortcoming, it is preferable to measure the \emph{average ranking ratio} of the target items by the different metrics \citep{mariani2016identification}, which is defined as follows. Denoting the rank of patent $i$ by metric $m$ as $r_i(m)$, the ranking ratio of metric $m$ is defined as $\hat{r}_{i}(m)=r_{i}(m)/\min_{m'}\{r_{i}(m')\}$. A metric achieves the best-possible ranking ratio of one for a given significant patent if it ranks that patent best of all metrics; the lower the value, the better.
The \emph{average ranking ratio} $\braket{\hat{r}}(m)$ of metric $m$ is the average of the ranking ratios $\hat{r}_{i}(m)$ over all significant patents, and it quantifies how much the metric underperforms, on average, with respect to the best-performing metric. A metric that outperforms all the other metrics for all the target nodes achieves an average ranking ratio $\braket{\hat{r}}=1$; larger values of $\braket{\hat{r}}$ indicate a worse performance.

\paragraph{Identification rate} The identification rate $f_z(m)$ -- commonly referred to as \emph{recall} in the information filtering community \citep{lu2012recommender} -- of a given metric $m$ is defined as the fraction of significant patents that are ranked among the top $z\,N$ patents by metric $m$. Hence, while the average ranking ratio takes all significant patents and their ranking into account, the identification rate focuses on the top items of each ranking.
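Both evaluation measures are simple to compute once the rankings are known; the following sketch uses illustrative input conventions (\texttt{ranks[m]} is the array of ranking positions of the significant patents by metric \texttt{m}):
\begin{verbatim}
import numpy as np

def average_ranking_ratio(ranks):
    # ranks: dict metric -> array of ranks of the significant patents.
    mat = np.array([ranks[m] for m in ranks], dtype=float)
    best = mat.min(axis=0)           # best rank of each patent
    return {m: (np.asarray(ranks[m]) / best).mean() for m in ranks}

def identification_rate(ranks_m, N, z=0.005):
    # Fraction of significant patents ranked in the top z*N.
    return float(np.mean(np.asarray(ranks_m) <= z * N))
\end{verbatim}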
\subsection{Evaluating the evolution of the metrics' performance with patent age}
\label{sec:age_evaluation}
To uncover the metrics' ability to identify the significant patents early, we evaluate the metrics' average ranking ratio and identification rate as a function of patent age. In this way, we are able to untangle the role of patent age in determining the metrics' performance; for example, a metric that is biased toward old patents only performs well in detecting \emph{old} important patents, whereas we expect good early-identification metrics to perform well in detecting \emph{recent} important patents.

To this end, we dissect the network evolution by computing the rankings by the metrics every six months. At each ranking computation time $t^{(c)}$, only the patents issued before $t^{(c)}$ are included in the analysis. For a significant patent $i$ (issued at time $t_i$), we measure the age $\Delta t=t^{(c)}-t_i$ of the significant patent at time $t^{(c)}$.
Then, we determine its ranking ratio values $\hat{r}_i(m;\Delta t)$ for all considered metrics $m$.
Patent $i$'s ranking ratio $\hat{r}_i(m;\Delta t)$ contributes to metric $m$'s average ranking ratio $\braket{\hat{r}}(m;\Delta t)$ for $\Delta t$-year-old patents.
After having analyzed the whole network history, we can thus determine the average ranking ratio $\braket{\hat{r}}(m;\Delta t)$ of metric $m$ for $\Delta t$-year-old patents as the average of $\hat{r}_i(m;\Delta t)$ over all the significant patents included in the analysis.

In the same way, we define the identification rate $f_z(m;\Delta t)$ of metric $m$ for $\Delta t$-year-old patents as the fraction of significant patents that were ranked among the top $z\,N$ patents by metric $m$ when they were $\Delta t$ years old.

\section{Results}
\label{section:results}

\subsection{Metrics' performance on the time-aggregate network}
\label{sec:aggregate}
We start by assessing the average ranking ratio (Fig.~\ref{histo}A) and the identification rate (Fig.~\ref{histo}B) of the metrics on the whole dataset.
The results show a clear advantage of the network-based metrics, $p$ and $R(p)$, over the citation-counting metrics. According to the average ranking ratio (Fig.~\ref{histo}A), rescaled PageRank is the best-performing metric, with a small margin over PageRank and a large margin over raw and rescaled citation count. Rescaled PageRank and PageRank also achieve the highest identification rates (Fig.~\ref{histo}B).

\begin{figure*}[t]
\centering
\includegraphics[width=15cm]{histogram_all}
\caption{Performance of the metrics in identifying the significant patents from the list by Strumsky and Lobo, as measured by the metrics' average ranking ratio (panel A, the lower the better) and their identification rate (panel B, the higher the better), evaluated on the complete patent citation dataset.}
\label{histo}
\end{figure*}

To understand where the gaps between the metrics originate, we inspect the patents that give the largest contribution to $c$'s and $R(c)$'s ranking ratio -- i.e., the patents that are ranked much better by $p$ and $R(p)$ than by $c$ and $R(c)$. We find a significant contribution coming from patent $4,237,224$ (``Process for producing biologically functional molecular chimeras'', $c=285$), which is ranked $2$nd by $R(p)$ ($\hat{r}=1$), $3$rd by $p$ ($\hat{r}=1.5$), $1079$th by $R(c)$ ($\hat{r}=539.5$), and $1181$st by $c$ ($\hat{r}=590.5$).
Importantly, this patent gives the same contribution (equal to one) to all metrics' identification rate, as all the metrics rank it among the top-$0.5\%$ patents. This example shows well that patents that are ranked at the top by all metrics can have very different ranking ratio values.
The second largest contribution to $c$'s and $R(c)$'s average ranking ratio comes from patent $4,438,032$ (``Unique T-lymphocyte line and products derived therefrom'', $c=73$), which is ranked $253$rd by $p$ ($\hat{r}=1$), $562$nd by $R(p)$ ($\hat{r}=2.2$), $48,742$nd by $c$ ($\hat{r}=192.7$), and $66,014$th by $R(c)$ ($\hat{r}=260.9$).
To check that the advantage of network-based metrics is not entirely due to these two patents, we have excluded them from the analysis and recalculated the metrics' average ranking ratio. PageRank and rescaled PageRank remain the two best-performing metrics ($\braket{\hat{r}}(p)=4.2$, $\braket{\hat{r}}(R(p))=6.1$), yet their edge over the citation-counting metrics ($\braket{\hat{r}}(c)=8.1$, $\braket{\hat{r}}(R(c))=10.0$) significantly decreases.


\subsection{Age-rescaling matters most for young patents}
\label{dynamics}
While the analysis of the previous Section reveals important differences among the metrics, the main goal of this manuscript is to reveal the dependence of the metrics' performance on patent age, and to assess the metrics' ability to identify the significant patents early. To this end, following the procedure described in Section \ref{sec:age_evaluation}, we consider the ranking positions\footnote{The ranking positions considered in this paper are always normalized by the size of the system at the time when the ranking is computed.} of the group of expert-selected significant patents by \citet{strumsky2015identifying} one (Figs.~\ref{rankings}A,D), five (Figs.~\ref{rankings}B,E) and ten (Figs.~\ref{rankings}C,F) years after issuing. Due to their lack of time bias, the rescaled metrics rank the significant patents much better than the corresponding static metrics one year after issuing (see Figs.~\ref{rankings}A,D). On the other hand, the ranking positions by rescaled and static metrics are comparable ten years after issuing (see Figs.~\ref{rankings}C,F).

\begin{figure*}
\centering
\includegraphics[width=15cm]{ranking_time_effect}
\caption{A comparison of the relative rankings (the lower, the better) of the significant patents by $c$ and $R(c)$ one (panel A), five (panel B) and ten (panel C) years after issuing.
Only the patents that received at least one citation at a given age are included. The same comparison between $p$ and $R(p)$ is shown in panels D--F.}
\label{rankings}
\end{figure*}

The evolution of the ranking position of the significant patents as evaluated by $p$ and $R(p)$ is shown in Supplementary Movie M1; the same for $c$ and $R(p)$ is shown in Supplementary Movie M2. The moving dots in these movies represent the significant patents, and the displacements of the dots represent the change in the significant patents' ranking position as they get older\footnote{We only represent the significant patents after they have received their first citation. This is the reason why, during the dynamics, some dots appear on the plane out of nowhere.}.
Movies M1 and M2 show that shortly after issuing, all significant patents are ranked higher by rescaled PageRank than by PageRank and citation count, respectively, consistent with Figs.~\ref{rankings}A,D. In Movie M1, which compares the rankings by $p$ and $R(p)$, as the significant patents get older, the magnitude of their displacements in the ranking plane diminishes, and they gradually drift toward the diagonal of the plane, which means that the gap between their ranking positions by $p$ and $R(p)$ shrinks.
After ten years, most of the significant patents lie close to the diagonal, which indicates that the rankings of the significant patents by $p$ and $R(p)$ are comparable.


\begin{figure*}
\centering
\includegraphics[width=18cm]{age_performance-control20y.pdf}
\caption{Performance of the metrics in identifying the significant patents from the list by Strumsky and Lobo over a $20$-year time window after their issuing, as explained in the main text. \emph{(A)} Average ranking ratio as a function of patent age. \emph{(B)} Identification rate as a function of patent age.}
\label{short}
\end{figure*}

\subsection{Comparison of the four metrics' performance for different patent ages}
The above-discussed Fig.~\ref{rankings} and Supplementary Movies M1--M2 show that the age of the significant patents has a large impact on the ability of the metrics to identify them.
The goal of this section is to quantify the magnitude and the duration of the advantage of the rescaled metrics in identifying the significant patents early, and to compare the obtained results with known results for scientific papers \citep{mariani2016identification}.

To quantify how well the different metrics recognize the significant patents shortly after their issuing, we focus on the $M_{20}= 77$ patents that are at least $20$ years old at the end of the dataset. By performing the evaluation procedure described in Section \ref{sec:age_evaluation}, we study how their average ranking ratio and identification rate depend on their age up to $20$ years after issuing. We thus focus on a fixed group of target patents, which allows us to gauge the impact of time on the metrics' performance\footnote{Patents that are less than $10$ years old, for example, cannot contribute to the age bins from $10$ to $20$ years after issuing. Were we to include them in the group of significant patents, we would end up with a group whose composition differs across age bins, which would confound the temporal effects that we focus on here.}.


\subsubsection{Average ranking ratio}
In qualitative agreement with Fig.~\ref{rankings}, Fig.~\ref{short}A shows striking differences between the metrics' performance.
Shortly after issuing, the rescaled metrics achieve an average ranking ratio much lower than that of the non-rescaled metrics. For example, one year after issuing, PageRank's and rescaled PageRank's average ranking ratios are equal to $20.8$ and $1.6$, respectively, which indicates a performance advantage of one order of magnitude in favor of $R(p)$.
The gap between rescaled PageRank and PageRank (rescaled citation count and citation count) closes $12$ ($7$) years after issuing.
There is therefore a medium-term temporal window over which the rescaled metrics rank the significant patents remarkably better than the non-rescaled metrics.

Importantly, once we have suppressed the age bias of $c$ and $p$, we are able to reveal the advantage of using (higher-order) network information to rank the significant patents instead of simply counting citations, which manifests itself in the performance advantage of $R(p)$ over $R(c)$.


\subsubsection{Identification rate}
Fig.~\ref{short}B shows the dependence of the metrics' identification rate $f_{0.005}(\Delta t)$ on patent age. This evaluation measure quantifies the fraction of significant patents ranked in the top $0.5\%$ by the metrics when they were $\Delta t$ years old.
The rescaled metrics outperform the non-rescaled metrics shortly after issuing; the gap between rescaled and non-rescaled metrics eventually closes:
$p$'s performance reaches $R(p)$'s performance $12$ years after issuing, and $c$'s performance reaches $R(c)$'s performance $6$ years after issuing.
These two timescales are consistent with those observed for the average ranking ratio, and they define a temporal window over which the rescaled metrics achieve an improved identification of the significant patents.


\subsection{The role of the network structure}
\label{sec:dcm}
In this Section, we address the following question: to what extent can the improved performance of PageRank-related metrics be explained by citation count dynamics alone?
In other words, once we control for the effect of citation count dynamics and randomize the rest, can we reproduce the results in Fig.~\ref{short}?

To address this question, we use the Dynamic Configuration Model (DCM) introduced by \citet{ren2017time} to generate random networks that preserve the individual nodes' citation count time-series observed in the original network. Differently from the widely-used configuration model \citep{molloy1995critical}, the DCM preserves the original network's temporal linking patterns \citep{ren2017time}.
In the DCM, the total system time span $T$ is divided into $L$ layers of equal duration $\Delta t=T/L$. The randomized networks are generated by rewiring the existing connections within each layer, preserving each node's indegree and outdegree variation in that layer (see \citet{ren2017time} for the details). The expected number of edges $E_{ij}(n)$ from node $j$ to node $i$ in layer $n$ is given by
\begin{equation}
E_{ij}(n)=\frac{\Delta k_i^{in}(n)\,\Delta k_j^{out}(n)}{E(n)},
\end{equation}
where $\Delta k_i^{in}(n)$ ($\Delta k_j^{out}(n)$) denotes the indegree (outdegree) increase of node $i$ ($j$) in layer $n$, and $E(n)$ denotes the total number of edges in layer $n$. In our work, we set $L=100$, which results in $\Delta t = 310$ days.
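The rewiring within a single layer can be sketched as a stub-matching step; this is our own simplified rendering of the DCM of \citet{ren2017time}, which omits refinements such as forbidding multiple edges between the same pair of nodes.
\begin{verbatim}
import numpy as np

def dcm_layer(dk_in, dk_out, rng=None):
    # dk_in[i], dk_out[j]: indegree/outdegree increase of each
    # node in the layer; sum(dk_in) == sum(dk_out) == E(n).
    rng = rng or np.random.default_rng()
    in_stubs = np.repeat(np.arange(len(dk_in)), dk_in)
    out_stubs = np.repeat(np.arange(len(dk_out)), dk_out)
    rng.shuffle(out_stubs)      # random matching of the E(n) stubs
    return list(zip(out_stubs, in_stubs))  # (citing, cited) pairs
\end{verbatim}
Random stub matching reproduces, in expectation, the edge count $E_{ij}(n)=\Delta k_i^{in}(n)\,\Delta k_j^{out}(n)/E(n)$ given above.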
We compare the metrics' performance in the random networks generated in this way with the performance observed in the real data.
By construction, the model preserves the indegree time-series of the original network; as a consequence, the performance of citation count and rescaled citation count is the same as in the real data (Fig.~\ref{performance-rand}).
The model allows us to assess whether the advantage of network-based metrics (Fig.~\ref{performance-rand}) is a genuine network effect or whether it can be explained by random fluctuations.

\begin{figure*}
\centering
\includegraphics[width=18cm]{age_performance-control20y-dcm-av}
\caption{\emph{(A)} Metrics' performance in identifying the significant patents in a random network generated with the Dynamic Configuration Model. The lines and the shaded areas around the lines represent the mean and the standard error, respectively, measured over $12$ realizations of the randomization process. \emph{(B)} Difference between the performance observed in the real data and that observed in the randomized networks generated with the Dynamic Configuration Model.}
\label{performance-rand}
\end{figure*}

In the randomized networks, the network-based metrics have no advantage with respect to citation-counting metrics in identifying the significant patents (Fig.~\ref{performance-rand}A). In fact, $R(p)$ falls slightly below $R(c)$ for almost every patent age. Fig.~\ref{performance-rand}B shows that the difference between the performance of PageRank-related metrics in the real and in the randomized networks is significantly positive. We conclude that controlling for the individual nodes' citation count dynamics is not sufficient to explain our findings. Therefore, (higher-order) network structure plays a significant role in the advantage of network-based metrics with respect to citation-counting metrics in identifying the significant patents.


\subsection{Top patents}
\label{sec:top_patents}
In this section we inspect the top-ranked patents. For simplicity, we focus on the top-$15$ patents as ranked by PageRank (Table~\ref{tab:pr}) and rescaled PageRank score (Table~\ref{tab:rpr}).

Table~\ref{tab:pr} shows that patents with relatively few citations can also reach the top of the ranking by PageRank score, which confirms the idea that in citation networks, the PageRank algorithm can identify ``hidden gems'' \citep{chen2007finding} that are underestimated by citation count. A paradigmatic example in this sense is patent $3,813,316$ (``Microorganisms having multiple compatible degradative energy-generating plasmids and preparation thereof''). The patent is ranked $6$th by PageRank despite having been cited only $16$ times. By inspecting the patent's neighborhood, it emerges that the reason for this is that the patent has been cited by patents with relatively large citation count and, additionally, small outdegree. For example, patent $3,813,316$ is the only patent cited by patent $4,237,224$ (``Process for producing biologically functional molecular chimeras'', $c=285$, included in the Strumsky-Lobo list of significant patents), which is ranked $3$rd by PageRank. Patent $3,813,316$ itself refers only to patent $3,723,248$ (``Method for producing ketoglucaric acid''), which is consequently ranked $38$th by PageRank despite having received only one citation.
The small outdegree of the citing patents is crucial because it implies that a large portion of the citing patents' score contributes to the cited patent's score in the score redistribution process defined by Eq.~(\ref{pr}).

Table~\ref{tab:rpr} shows that the top-$15$ patents by rescaled PageRank span a wider temporal range (1934--2010) than the top-$15$ by PageRank (1942--1996), which is a direct consequence of the age-bias removal.
On the other hand, Table~\ref{tab:rpr} also points out a potential limitation of the rescaling procedure. Among the $15$ top-ranked patents, four are from $2010$ (the last year in the dataset) and received only one citation. This happens because only a few of the most recent patents have received citations, which results in temporal windows with a large fraction of patents with zero citations. Within such a temporal window, a patent can achieve a large rescaled score thanks to a single citation. A possible solution to this issue is to only include the patents whose temporal windows contain a certain minimal number of incoming citations. However, we prefer to show the scores of all the patents in order to highlight the subtleties associated with the evaluation of very recent patents.


\section{A comparison of the APS papers' and the US patents' citation network dynamics}
\label{sec:comparison}
Section~\ref{section:results} validates the rescaled metrics as better indicators of the significance of recent patents than the non-rescaled metrics. Yet, there is a remarkable difference between the behavior of the identification rate observed in our analysis of the US patent dataset (Fig.~\ref{short}B) and that reported by \citet{mariani2016identification} in their analysis of the American Physical Society (APS) paper citation network: \citet{mariani2016identification} found that $R(p)$ ranks more than $30\%$ of the Physical Review Letters milestone letters in the top $0.5\%$ already one year after publication, whereas it only ranks $1\%$ of the Strumsky-Lobo significant patents in the top $1\%$ one year after issuing.

The qualitative difference between Fig.~\ref{short}B and Fig.~3B in \citet{mariani2016identification} for significant papers motivated us to explore the differences between the dynamics of (significant) patents and that of (significant) papers. To this end, we analyzed an extension of the dataset\footnote{In particular, we analyzed the APS citation network from $1893$ to $2016$, which comprises $593,443$ papers and $7,031,030$ citations between them.} used by \citet{mariani2016identification}, and compared the obtained results with those obtained for the US patent citation network. The results of our analysis are summarized in Table \ref{table:average}. The table shows that both the significant papers and the significant patents: (1) tend to be cited more than ordinary papers and ordinary patents, respectively; (2) tend to accrue citations faster than ordinary papers and ordinary patents, respectively.
Like patents of high economic value \citep{lee2017makes}, the Strumsky-Lobo significant patents tend to receive their first few citations more quickly than ordinary patents.
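The quantities $\tau_3$ and $\tau_5$ reported in Table~\ref{table:average} below can be computed from the citation timestamps as in the following sketch (function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def tau_n(citation_times, issue_time, n):
    # Time needed to accumulate the first n citations; nodes with
    # fewer than n citations do not contribute to the average.
    t = np.sort(np.asarray(citation_times))
    return t[n - 1] - issue_time if t.size >= n else None
\end{verbatim}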
\begin{table}[t]
\centering
\begin{tabular}{llrrr}
\toprule
Dataset & Group of nodes & Citations & $\tau_3$ & $\tau_5$ \\
\midrule
 \textbf{US patents} & Significant patents & 105.6 & 9.6 y & 12.0 y \\
 & All patents & 7.4 & 24.9 y & 31.4 y \\
\midrule
 \textbf{APS papers} & Significant papers & 457.0 & 1.0 y & 1.4 y \\
 & All papers & 11.8 & 3.6 y & 4.8 y \\
\bottomrule
\end{tabular}
\caption{A comparison between the average properties of all nodes and the average properties of the significant nodes in the APS paper and US patent citation networks. The significant nodes are the milestone letters and the Strumsky-Lobo significant patents in the two datasets, respectively. The column ``Citations'' reports the mean number of received citations; $\tau_n$ denotes the mean time needed to receive the first $n$ citations, computed over the nodes that received at least $n$ citations.}
\label{table:average}
\end{table}

However, there is a striking difference between the dynamics of the two datasets: the APS papers tend to accrue citations much more quickly than the US patents. For example, the mean time needed to receive the first three citations (computed over the nodes that received at least three citations) is much smaller for papers ($3.6$ years) than for patents ($24.9$ years). The same is true if we restrict the analysis to the significant papers ($1.0$ years) and patents ($9.6$ years), respectively.
These results indicate that the smaller identification rate for patents shortly after issuing is partly a manifestation of the slower citation dynamics of patents with respect to that of papers.\footnote{We emphasize that the APS dataset contains only citations among APS papers. By considering the citations to APS papers from non-APS papers, $\tau_3$ and $\tau_5$ would further decrease, and the difference from the patent dataset (which, in contrast to the APS dataset, is comparably complete) would be further magnified.}


\section{Conclusions}

Our paper has two main messages.

First, we find that using the whole network topology instead of only counting citations brings a substantial advantage in identifying the significant patents. Both the observed degree-degree correlations (Fig.~\ref{degree_correlations}) and the performance edge of PageRank-related metrics over citation-counting metrics (Fig.~\ref{short}) suggest that important patents build on other important patents.
This supports the hypothesis that high-impact patents ``stand on the shoulders of giants'', in a similar way as scientific papers \citep{bornmann2010scientific}, although the high prevalence of examiner-added citations in patents makes the analogy imperfect.

Second, we show that removing the time bias of static centrality metrics allows one to identify significant patents much earlier than is possible with conventional static metrics.
The rescaling procedure which we use to remove the time bias is efficient and thus applicable even to large-scale datasets \citep{vaccario2017quantifying}.

There are some limitations to our work that deserve to be discussed.
First of all, we have pointed out that the early identification of significant patents is more difficult than that of significant papers, because patents take more time to accumulate citations (Section~\ref{sec:comparison}). Second, the time-rescaled metrics are based on the assumption that a good ranking of the patents should give patents from different age periods the same chance to get to the top of the ranking. While this assumption is customary in paper citation analysis \citep{waltman2016review}, it creates a bias against patents that appeared in periods of intensive breakthrough inventive activity, if such periods exist.
Third, the rescaled metrics evaluate the most recent patents on the basis of citations received in a relatively short time period. While this may be justified by the finding that patents in rapidly growing domains are highly cited shortly after issuing \citep{benson2015quantitative}, it potentially misses ``sleeping-beauty'' \citep{ke2015defining} patents that receive a substantial number of citations only many years after issuing.

We see three major directions for extending this research.
The most obvious is to acknowledge that there are different citation practices across technological fields, just as different scientific fields exhibit different citation patterns \citep{waltman2016review}. Based on the results by \citet{vaccario2017quantifying}, we know that the rescaling procedure can in principle be extended to suppress the bias by technological field as well. However, while it is natural to suppress field biases of paper-level metrics due to their use in research evaluation, it remains unclear whether a similar approach would be the most effective strategy to rank patents. Besides, using patent classification information is problematic when the goal is to rigorously test predictive ability, because the classification system changes often and many patents are reclassified \citep{lafond2017long}. Second, while there exist theoretical explanations for how the broad citation count distribution and the age bias of citation-based metrics emerge as a result of the dynamics of the system \citep{valverde2007topology,newman2009first,medo2011temporal,mariani2015ranking}, a model-based explanation of the strong degree-degree correlations and the improved PageRank performance observed in our dataset is still lacking.
\nThird, while we studied PageRank as a paradigmatic network-based metric because of its plausible assumption (``a node is important if it is cited by other important nodes''), other network-based metrics \\citep{liao2017ranking} can be analyzed in a similar way to improve our understanding of which metrics best identify important patents.\n\n\n\\section*{Acknowledgments}\nMSM and MM acknowledge financial support from the Swiss National Science Foundation Grant No. 200020-156188. FL acknowledges financial support from Partners for a New Economy and the Institute for New Economic Thinking.\n\n\n\\bibliographystyle{spbasic} \n\n\n\\section{Introduction} \\label{sec:introduction}\nParton distribution functions (PDFs) determined from global analysis~\\cite{Martin:2009iq,Martin:2007bv,Lai:2010vv,Nadolsky:2008zw,Ball:2010de} of fixed-target and collider data, mainly from deep-inelastic scattering (DIS), are an essential input to theory calculations for hard processes at hadron colliders such as the Tevatron and LHC. In addition to the fitted input PDFs, several other parameters enter into the global fits, which affect both the PDFs obtained and predictions for hadronic processes. One important example is the value of the strong coupling $\\alpha_S$ and its uncertainty~\\cite{Martin:2009bu,Lai:2010nw,Demartin:2010er}. Others, which are the focus of the present paper, are the values of the heavy-quark masses and the scheme used to calculate heavy-quark contributions to observables. In particular, while the precise values taken for $m_c$ and $m_b$ may not be crucial for the processes included in the global PDF analyses, calculations of processes with explicit $c$- and $b$-quarks might be more sensitive to these values, and it is therefore desirable to have PDFs available which have been consistently fitted and evolved with corresponding values of $m_c$ and $m_b$.\n\nWe will study two topics, both of which concern the treatment of the heavy charm and bottom quarks in global parton analyses. The first topic, which is the subject of Section~\\ref{sec:massdep}, concerns the sensitivity of the MSTW~2008 global parton analysis~\\cite{Martin:2009iq} to the values of the heavy-quark masses $m_h$, with $h=c,b$. In Ref.~\\cite{Martin:2009iq} these masses were taken to be $m_c=1.40$~GeV and $m_b=4.75$~GeV. Here we perform global fits for a range of $m_c$ and $m_b$ about these values, with $\\alpha_S(M_Z^2)$ allowed to be a free parameter. In this way, we determine the values of $m_c$ and $m_b$ preferred by the data, and the correlations between these values of $m_h$ and $\\alpha_S(M_Z^2)$. Due to a significant correlation between $m_c$ and $\\alpha_S(M_Z^2)$, we also perform fits with varying $m_c$ but with $\\alpha_S(M_Z^2)$ held fixed. We study how the cross sections for $W$, $Z$ and Higgs boson production at the Tevatron and the LHC depend on the choice of $m_c$ and $m_b$, and we provide a recommendation on how to include the uncertainty arising from $m_h$ in a general cross-section calculation.\n\nThe second topic, described in Section~\\ref{sec:FFNS}, is the generation of sets of 3- and 4-flavour PDFs corresponding to the MSTW~2008~\\cite{Martin:2009iq} 5-flavour sets of PDFs. 
We follow the same procedure previously used to generate the 3- and 4-flavour versions~\\cite{Martin:2006qz} of the 5-flavour MRST 2004 PDFs~\\cite{Martin:2004ir}, i.e.~the input PDFs (and $\\alpha_S$) at $Q_0^2=1$~GeV$^2$ determined from fits in the 5-flavour scheme are used as the initial condition for 3- or 4-flavour evolution. However, going beyond Ref.~\\cite{Martin:2006qz}, we also provide 3- and 4-flavour \\emph{eigenvector} PDF sets to enable calculation of PDF uncertainties, and we provide 3- and 4-flavour PDF sets for a wide range of $m_c$ and $m_b$ values, respectively. As an example of the use of the central 4-flavour PDFs for the default quark masses, we compare the total cross sections for $Z$ production at the Tevatron and LHC in the 4- and 5-flavour schemes. However, first we begin with a short r\\'{e}sum\\'{e} of the alternative ``schemes'' to treat heavy flavours\\footnote{Strictly speaking, it would be better to say alternative ``techniques'', since the use of the word ``scheme'' is usually reserved for an alternative choice in the ordering of the perturbative expansion, or a particular separation of contributions between the coefficient functions and parton densities---an ambiguity inherent in QCD.}, and in particular, an explanation of what precisely we mean by 3-, 4-, and 5-flavour PDFs.\n\n\n\n\\section{Schemes for the treatment of heavy flavours}\nIt is appropriate to briefly recall the various schemes for the treatment of heavy flavours in global parton analyses. In PDF analyses it is common to start evolving upwards from $Q^2=Q^2_0\\sim 1$~GeV$^2$ with the distributions of the three light quarks $(u,d,s)$, assuming that they are massless. As we evolve upwards, we have the choice to turn on the heavy-quark $(c,b,t)$ distributions as we pass through their respective transition point, for which a convenient choice is $\\mu_F^2=m_h^2$. As we pass through a transition point, the number of active quarks is increased from $n$ to $n+1$, and the parton densities are related to each other perturbatively, i.e.\n\\begin{equation}\n f^{n+1}_j(\\mu_F^2)=\\sum_k A_{jk}(\\mu_F^2\/m_h^2)\\otimes f^{n}_k(\\mu_F^2),\n \\label{eq:transition}\n\\end{equation}\n where the perturbative matrix elements $A_{jk}(\\mu_F^2\/m_h^2)$ contain $\\ln(\\mu_F^2\/m_h^2)$ terms which are known to ${\\cal O} (\\alpha_S^2)$~\\cite{Buza:1996wv} and ${\\cal O} (\\alpha_S^3)$~\\cite{Bierenbaum:2009mv}. The ``$x$'' arguments have been suppressed in Eq.~(\\ref{eq:transition}), and the symbol $\\otimes$ is shorthand for the convolution\n\\begin{equation}\n f \\otimes g \\equiv \\int^1_x \\frac{{\\rm d}x'}{x'}f(x')g(x\/x').\n\\end{equation}\nEq.~(\\ref{eq:transition}) relates the $f^{n}_i(\\mu_F^2)$ set of partons to the $f^{n+1}_i(\\mu_F^2)$ set, guaranteeing the correct evolution for both the $n$ and $n+1$ regimes. We make the simplest choice, $\\mu_F^2=Q^2$, for the factorisation scale.\n\nHence, we have to decide whether or not to keep a heavy quark as just a final-state particle, and not as a parton within the proton. We may choose to keep just the three light flavours as parton distributions. We will call this the 3-flavour scheme (3FS), though it is often referred to as the fixed flavour number scheme (FFNS). Alternatively, we may include the $c$-quark in the evolution above $Q^2=m_c^2$ and generate 4-flavour PDFs in a 4-flavour scheme (4FS). 
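As a concrete numerical illustration of the convolution $\\otimes$ defined above, the following minimal Python sketch (our own illustration; the toy integrands and the logarithmic quadrature grid are assumptions, not part of the MSTW analysis) evaluates $(f \\otimes g)(x)$ directly from its definition:
\\begin{verbatim}
import numpy as np

def mellin_convolve(f, g, x, n=2000):
    # (f (x) g)(x) = int_x^1 dx'/x' f(x') g(x/x').
    # Substituting t = ln x' gives dx'/x' = dt, with t in [ln x, 0].
    t = np.linspace(np.log(x), 0.0, n)
    xp = np.exp(t)
    return np.trapz(f(xp) * g(x / xp), t)

# toy integrands, chosen only for illustration (not realistic PDFs)
f = lambda z: z * (1.0 - z)
g = lambda z: 1.0 - z
print(mellin_convolve(f, g, 0.01))
\\end{verbatim}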
Actually, in the global MRST\/MSTW parton analyses we also include the $b$-quark distribution in the evolution above $Q^2=m_b^2$, but \\emph{not} the $t$-quark above $Q^2=m_t^2$, so we generate 5-flavour sets of parton distributions in a 5-flavour scheme (5FS). So to be precise, in our definition of $n_f$-flavour parton sets, $n_f$ refers to the {\\it maximum} number of quark flavours in the evolution.\n\nIn each $n_f$-flavour scheme ($n_f$FS) the structure functions are given by the usual convolution of coefficient functions and parton distributions:\n\\begin{equation}\n F(x,Q^2)=\\sum_{j} C^{n_f\\rm FS}_j(Q^2\/m_h^2)\\otimes f^n_j(Q^2),\n \\label{general}\n\\end{equation}\nwhere the sum $j$ is over the gluon and the (variable, depending on $Q^2$) number of active quark flavours, $n\\le n_f$. We have a choice in how to choose $n_f$ and define the coefficient functions. One simple choice is to fix $n=n_f=3$. For the heavy flavours, all the $m_h$ dependence then occurs in the coefficient functions, and these are called the FFNS coefficient functions. The structure functions may be written in the form\n\\begin{equation}\n F(x,Q^2)=\\sum_{j=u,d,s,g} C^{\\rm FF,3}_j(Q^2\/m_h^2)\\otimes f^3_j(Q^2).\n \\label{ffns}\n\\end{equation}\nHowever, the sum of the $\\alpha_S^k\\ln^m(Q^2\/m_h^2)$ terms, with $m\\leq k$, is not included in the perturbative expansion. Thus the accuracy of the expansion becomes increasingly unreliable as $Q^2$ increases above $m_h^2$. In addition, there is the problem that the full mass dependence of the coefficient functions is known up to NLO~\\cite{Laenen:1992zk}, but is not completely defined at NNLO, i.e.~the $\\alpha_S^3$ coefficient, $C^{{\\rm FF},3,(3)}_{2, hg}$, for $F_2$ is not fully known, see~Ref.~\\cite{Bierenbaum:2009mv}. (Here, the outer subscript ``$hg$'' denotes the $g\\to h$ contribution to the heavy-flavour structure function $F_2^h$.)\n\nAs an aside, we note that it would be possible to treat the charm quark as light, and the bottom quark as heavy. Then the structure functions and cross sections could be expressed in terms of four light quarks, with all the mass dependence of the bottom quark contained in the coefficient functions. This is sometimes called the 4-flavour FFNS.\n\nThe difference between the 5FS and 4FS results for the $b$-quark contribution to the total $Z$ cross section is\n\\begin{equation}\n \\Delta_b\\sigma^Z = 23.1\\; (6.6)\\ {\\rm pb}\n \\label{eq:45diff}\n\\end{equation}\nat $\\sqrt{s} = 14$~TeV (7~TeV), which results in the ``full'' 4FS total cross section being $2.3\\%$ ($1.7\\%$) smaller than the 5FS cross section. We interpret this as meaning that the DGLAP-resummed $[\\alpha_S \\ln(M_Z^2\/m_b^2)]^n$ contributions that are absorbed into the $b$ parton distribution are numerically important. \n\nOur results are in broad agreement with the study of Ref.~\\cite{Maltoni:2005wd}, in which the different calculational methods for heavy particles produced in association with $b$ quarks, $gg\\to Xb\\bar b$ (4FS) versus $b \\bar b \\to X$ (5FS), were studied in detail for $X=H,Z$. In particular, it was shown that with the canonical choice of scale $\\mu=M_Z$, the LO $gg\\to Zb \\bar b$ cross section at the LHC was a factor of two smaller than the NNLO $b \\bar b \\to Z$ cross section, consistent with our results in Eq.~(\\ref{eq:45diff}) above. 
It was also shown in Ref.~\\cite{Maltoni:2005wd} that the agreement between the 4FS and 5FS calculations improves if the scale $\\mu$ is reduced to around $M_Z\/3$ (found by choosing $\\mu\\sim \\sqrt{-t}$ near the end of the collinear plateau in the quantity $-t\\,{\\rm d}\\sigma\/{\\rm d}t$ for the process $gb\\to Zb$). The NNLO $b \\bar b \\to Z$ cross section is approximately scale independent, while the LO $gg\\to Zb \\bar b$ cross section increases with decreasing scale, primarily because of the overall $\\alpha_S^2$ factor.\n\nOf course the explicit $gg \\to Z b \\bar b$ contribution corresponds only to the $n=1$ term in the resummed $[\\alpha_S \\ln(M_Z^2\/m_b^2)]^n$ perturbation series implicit in the $b\\bar{b}\\to Z$ 5FS calculation. Complete agreement between the two schemes {\\it would} be obtained in a fully all-orders perturbative calculation. Note that since at all collider energies $\\Delta_b\\sigma^Z$ is dominated by the contributions involving the $Zb \\bar b$ final state, we would expect that higher-order corrections to the $Z b \\bar b$ production process, which is here calculated only at leading order, will generate the $[\\alpha_S \\ln(M_Z^2\/m_b^2)]^n$ terms implicit in the $b$-quark PDF. The NLO (i.e.~${\\cal O}(\\alpha_S^3)$) corrections to the $Z b \\bar b$ total cross sections with $m_b \\neq 0$ have recently been calculated~\\cite{FebresCordero:2006sj,FebresCordero:2008ci,Cordero:2009kv,zbbCMS}. Although the results presented in Ref.~\\cite{zbbCMS} impose a minimum $p_T^b$ ($>5$~GeV), the $K$-factor\\footnote{The $K$-factor is here defined as the ratio of cross sections calculated using the dynamical scale $\\mu^2 = M_Z^2 + (p_T^{b,1})^2 + (p_T^{b,2})^2$, and with the standard 5-flavour CTEQ6M\/CTEQ6L1 PDFs~\\cite{Pumplin:2002vw} at NLO\/LO.} is evidently rather independent of $p_T^b$ at small $p_T^b$, suggesting that the NLO\/LO $K$-factor for the fully inclusive $Zb\\bar b$ cross section is approximately 1.5 for the LHC at $\\sqrt{s} = 14$~TeV. It is therefore plausible that even higher-order perturbative corrections can account for the factor of 2 difference in the 4FS and 5FS cross sections, Eq.~(\\ref{eq:45diff}). This conclusion is supported by the fact that the scale dependence of the 4FS calculation for $Zb\\bar b$ production at NLO is only mildly weaker than at LO~\\cite{FebresCordero:2006sj,FebresCordero:2008ci,Cordero:2009kv,zbbCMS}.\n\nIn Ref.~\\cite{Thorne:1997ga} it was shown that for $x$ in the region of 0.01--0.05, the relevant region for $Z+b\\bar{b}$ production at the 14~TeV LHC, the ratio of the GM-VFNS structure function $F_2^h$ to the FFNS structure function was $\\sim 1.5$ at LO at high scales. This represents the effect of either resumming the $[\\alpha_S\\ln(Q^2\/m_h^2)]^n$ contributions, or keeping only the contribution of fixed order in $\\alpha_S$ for one parton. For hadron--hadron processes we would expect the difference to be about $1.5^2>2$, exactly as observed. At NLO for structure functions at this $x$ the ratio is reduced to $\\sim1.1$, so inclusion of the extra $\\ln(Q^2\/m_h^2)$ removes much of the discrepancy present at LO. However, for hadron--hadron processes, NLO in the fixed flavour scheme only contains the extra large logarithm for one of the two incoming partons, so the ratio between the 5FS and 4FS would be roughly $1.5\\times 1.1 \\approx 1.6$, again as we expect to see in practice. 
It would only be at NNLO in the 4FS, when the double logarithm for both incoming PDFs is included, that we would expect to see the reduction to roughly $1.1^2 \\approx 1.2$ in the ratio of the 5FS to 4FS cross sections. This is a general feature, i.e.~the 4FS (or 3FS) will converge to the resummed 5FS results more slowly for hadronic processes than for those in DIS.\n\nIn summary, the 5FS PDFs are clearly the most appropriate to use for inclusive quantities such as $\\sigma^Z$ (or $\\sigma^{\\rm dijet}$, etc.) at high $Q^2$ scales, where resummation of the $[\\alpha_S \\ln(Q^2\/m_b^2)]^n$ contributions is evidently important. However, for more exclusive quantities like $\\sigma^{Zb\\bar b}$, where the $b$ quarks are measured in the final state, the 4FS parton distributions are more appropriate since the 5FS calculation gives no information on the spectator $b$ and $\\bar b$ quarks which must be present in the final state for the $b$- and $\\bar{b}$-initiated processes. Note that if only the \\emph{total} cross section is required, without cuts imposed on the $b$-quarks, then a 5FS is still better, e.g.~for $Zb\\bar{b}$ a 5FS calculation can be used for $b\\bar{b}\\to Z$ at NNLO, where the $b$-quarks couple directly to the $Z$, and so there are implicitly also two $b$-quarks in the final state~\\cite{Maltoni:2005wd}. However, if cuts must be applied to the $b$-quarks, as is the case in the experimental measurement, then a 4FS calculation is more appropriate. Similar remarks apply to the calculation of $Hb\\bar{b}$ production~\\cite{Dittmaier:2003ej,Dawson:2003kb,Campbell:2004pu} and other processes where $b$-quarks are detected in the final state. In a recent study~\\cite{Dittmaier:2009np}, the production of a charged Higgs boson in association with a $t$ quark was considered in both the 4FS ($gg \\to t \\bar b H^-$ etc.) and 5FS ($g b \\to t H^-$ etc.) to NLO in pQCD, using the appropriate MRST 2004 PDF sets~\\cite{Martin:2006qz,Martin:2004ir}. The central predictions in the 5FS were shown to be approximately 40$\\%$ larger than those in the 4FS. Even taking the scale uncertainty into account the 4FS and 5FS NLO cross sections are barely consistent.\n\nAn ideal calculation would combine the best features of the 4FS and 5FS so as to resum $[\\alpha_S\\ln(Q^2\/m_b^2)]^n$ terms while also retaining a finite $m_b$ dependence in the partonic cross section (rather than setting $m_b$ to zero as done in the 5FS). This matching has, of course, been done for structure functions in DIS using different variants of the GM-VFNS, but applications to hadron collider cross sections are more involved and have so far been limited (see Ref.~\\cite{Cacciari:1998it} for an application of the GM-VFNS to the $p_T$ spectrum in heavy-flavour hadroproduction). However, for processes where the hard scale is, for example, $Q^2\\sim M_Z^2$, then the GM-VFNS calculation will differ from the ZM-VFNS (5FS) only by terms $\\mathcal{O}(m_b^2\/M_Z^2)\\sim 0.3\\%$. We would therefore expect the complete GM-VFNS calculation to give results very close to the pure ZM-VFNS (5FS) for the total cross section.\n\nNote that rather than producing separate 4-flavour PDFs for use in a 4FS calculation, an alternative approach (e.g.~used at NLO in Refs.~\\cite{Campbell:2009ss,Campbell:2009gj}) is to use the conventional 5-flavour PDFs, then pass to the 4-flavour scheme using counterterms given in Ref.~\\cite{Cacciari:1998it}. 
However, these counterterms~\\cite{Cacciari:1998it} are equivalent to using the inverse of transition matrix elements, but only out to order $\\alpha_S$. One could indeed use the transition matrix elements themselves to go from 4-flavour to 5-flavour PDFs, except that this would not sum the logarithmic terms, $[\\alpha_S \\ln(Q^2\/m_b^2)]^n$, in the PDF evolution.\\footnote{See Fig.~9 of Ref.~\\cite{Alekhin:2009ni} for a comparison of 5-flavour NNLO PDFs obtained from 3-flavour PDFs either by evolution or by applying fixed-order matching conditions; the differences will be larger at NLO.} Hence, the use of counterterms~\\cite{Cacciari:1998it} is a less complete way of going from a 5FS to a 4FS, and instead, we recommend that dedicated 4-flavour PDFs be used in 4FS calculations.\\footnote{However, for the 4FS calculation of $t$-channel single-top production at the Tevatron and LHC~\\cite{Campbell:2009ss}, it was explicitly checked that results obtained with the dedicated 4-flavour MRST set~\\cite{Martin:2006qz} were consistent (within the numerical integration precision) with those obtained with the corresponding 5-flavour MRST set~\\cite{Martin:2004ir} plus appropriate counterterms~\\cite{Cacciari:1998it}. We thank R.~Frederix and F.~Tramontano for discussions on this issue.} Previously, a major advantage of using 5-flavour PDFs with counterterms was that eigenvector PDF sets to calculate PDF uncertainties were not made available for existing 4-flavour PDFs~\\cite{Martin:2006qz}. However, we have now provided eigenvector PDF sets also for the 4-flavour PDFs, so this advantage no longer holds.\n\n\n\n\\section{Conclusions} \\label{sec:conclusions}\n\nWe have repeated the NLO and NNLO MSTW~2008 global PDF analyses~\\cite{Martin:2009iq} for a range of heavy-quark masses about their ``default'' values $m_c=1.40$~GeV and $m_b=4.75$~GeV. For the charm quark, we found that the global data prefer the values $m_c = 1.45$ (1.26)~GeV at NLO (NNLO). The most discriminating data are, as anticipated, the HERA data for $F_2^c$~\\cite{Adloff:1996xq,Adloff:2001zj,Aktas:2005iw,Aktas:2004az,Breitweg:1999ad,Chekanov:2003rb,Chekanov:2007ch}. On the other hand, for the bottom quark, the data included in the global fit (excluding $F_2^b$) do not put a meaningful constraint on the value of $m_b$, while the HERA $F_2^b$ data slightly favour $m_b \\approx 4.75$--5~GeV. We pointed out that precise determinations of the heavy-quark masses in the $\\overline{\\rm MS}$ scheme are affected by poorly convergent perturbative series in the conversion to the pole masses, particularly for the case of the charm quark. Recent precise combined HERA data on $\\sigma_r^{\\rm NC}$~\\cite{Aaron:2009wt} and $F_2^c$~\\cite{HERAF2charm} will in future be able to narrow the favoured range of the charm-quark pole mass $m_c$. Note, however, that uncertainties from the choice of GM-VFNS~\\cite{Thorne:2010pa} mean that the favoured value of $m_c$ will be correlated to some extent with the particular choice of GM-VFNS, although this correlation will be much smaller at NNLO than at NLO, as will other uncertainties arising from the choice of GM-VFNS~\\cite{Thorne:2010pa}.\n\nWe explored the effect of the values of the heavy-quark masses on $W$, $Z$ and Higgs production at the Tevatron and LHC. Varying the charm mass by $\\pm 0.15$ GeV changes the cross sections by $\\pm 1\\%$ or less at Tevatron energies and by $\\pm 2\\%$ at the LHC energy of $\\sqrt{s}=14$ TeV. The various weak boson cross-section \\emph{ratios} are essentially unchanged. 
The predictions for $W$, $Z$ and Higgs cross sections are much less dependent on the value taken for $m_b$. We provided a recommendation on how to include the uncertainty arising from the choices of $m_c$ and $m_b$ in a generic cross-section calculation.\n\nWe also presented PDF sets obtained in a framework with different active numbers of quark flavours, as done previously in the context of the MRST~2004 analysis~\\cite{Martin:2006qz}. Explicitly, we determined 4-flavour PDF sets in which charm becomes an active parton, but bottom does not, and 3-flavour PDF sets where charm and bottom are not partons, but only appear in the final state. The analogous 5-flavour parton sets are simply those of MSTW~2008~\\cite{Martin:2009iq}. Of course, the latter, which in the absence of top corresponds to PDFs of a (general-mass) variable flavour number scheme, are generally to be preferred, particularly for inclusive quantities at high $Q^2$ scales where the resummation of the $[\\alpha_S\\ln(Q^2\/m_b^2)]^n$ contributions is essential. However, for more exclusive processes, such as $Zb\\bar{b},~Hb\\bar{b},~\\ldots$, where $b$-quarks are observed in the final state, the 4-flavour parton distributions are more appropriate. For illustration, we computed the various components of the $Z$ production cross section to ${\\cal O}(\\alpha_S^2)$ at the Tevatron and LHC, and compared the predictions obtained using the 4FS and 5FS ($\\equiv$ MSTW~2008~\\cite{Martin:2009iq}) parton sets.\n\nThe additional grids for all PDF sets discussed in this paper are made publicly available~\\cite{mstwpdf}, for use either with the standalone MSTW interpolation code or via the \\textsc{lhapdf} interface~\\cite{Whalley:2005nh}. To be precise, grids for the following PDF sets are made available:\n\\begin{itemize}\n \\item For the default quark masses, $m_c=1.40$~GeV and $m_b=4.75$~GeV, we provide LO, NLO and NNLO grids for 3- and 4-flavour PDFs (central set and 40 eigenvector sets at both 68\\% and 90\\% C.L.). These grids complement the existing grids for the 5-flavour PDFs.\n \\item For $m_c$ in the range 1.05~GeV to 1.75~GeV (in steps of 0.05~GeV), we provide NLO and NNLO grids for 3- and 5-flavour PDFs (central set only) for both free and fixed $\\alpha_S(M_Z^2)$.\n \\item For $m_b$ in the range 4.00~GeV to 5.50~GeV (in steps of 0.25~GeV), we provide NLO and NNLO grids for 4- and 5-flavour PDFs (central set only) for free $\\alpha_S(M_Z^2)$ only.\n\\end{itemize}\nThese additional grids should prove to be useful in future for detailed studies of a variety of collider processes involving heavy quarks.\n\n\n\n\\section{Introduction}\n\nThe three-gluon vertex is one of the fundamental Green's functions of QCD. This vertex allows the computation of the strong coupling constant and the measurement of a static potential between color charges. 
Herein we report on an upgrade of the lattice computation of this vertex performed by some of the authors in \\cite{duarte2016, proc2016}.\n\nThe three-gluon correlation function $G^{a_1 a_2 a_3}_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)$ is given by\n\\begin{equation}\n \\langle A^{a_1}_{\\mu_1} (p_1) \\, A^{a_2}_{\\mu_2} (p_2) \\, A^{a_3}_{\\mu_3} (p_3) \\rangle = V \\, \\delta( p_1 + p_2 + p_3) ~\n {G^{a_1 a_2 a_3}_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)}\n\\end{equation}\nand can be written in terms of the gluon propagator $D^{ab}_{\\mu\\nu}(p^2)$ and the one-particle irreducible (1PI) vertex $\\Gamma$ using\n\\begin{equation}\n {G^{a_1a_2a_3}_{\\mu_1\\mu_2\\mu_3} (p_1, p_2, p_3)} = D^{a_1b_1}_{\\mu_1\\nu_1}(p_1) ~ D^{a_2b_2}_{\\mu_2\\nu_2}(p_2) ~ D^{a_3b_3}_{\\mu_3\\nu_3}(p_3) \n {\\Gamma^{b_1b_2b_3}_{\\nu_1\\nu_2\\nu_3} (p_1, p_2, p_3)} .\n\\end{equation}\nBose symmetry requires the 1PI vertex to be symmetric under permutations of any pair $(p_i, a_i, \\mu_i)$. Given that\n\\begin{equation}\n \\Gamma^{a_1 a_2 a_3}_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3) = f_{a_1 a_2 a_3} \\Gamma_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)\n\\end{equation}\nthen the function $\\Gamma_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)$ must be antisymmetric under the interchange of any pair $(p_i, \\mu_i)$.\n\nA complete description of $\\Gamma_{\\mu_1 \\mu_2 \\mu_3} (p_1, p_2, p_3)$ in the continuum requires six Lorentz invariant form factors, two associated with the transverse component $\\Gamma^{(t)}$ \nand the remaining four associated with the longitudinal one $\\Gamma^{(l)}$ \\cite{ballchiu}.\n\n\n\n \n\\section{Asymmetric momentum configuration}\n\nIn this work we consider the computation of the three-gluon vertex in the asymmetric momentum configuration $p_2=0$, as in \\cite{alles, duarte2016}. In this case, the correlation function can be written as\n\\begin{equation}\n G_{\\mu_1\\mu_2\\mu_3} (p, 0, -p) = V \\frac{N_c(N^2_c-1)}{4} \\left[D(p^2)\\right]^2 \\, D(0) \\frac{\\Gamma (p^2)}{3} ~ ~ p_{\\mu_2} ~T_{\\mu_1\\mu_3} (p).\n\\end{equation}\nThe contraction of the Lorentz $\\mu_1$ and $\\mu_3$ indices, together with the contraction with the momentum $p_\\alpha$, gives\n\\begin{equation}\n G_{\\mu \\, \\alpha \\,\\mu} (p, 0, -p) \\, p_\\alpha = V \\frac{N_c(N^2_c-1)}{4}\n \\, \\left[D(p^2)\\right]^2 \\, D(0) ~~\\Gamma (p^2) ~~ p^2 .\n\\end{equation}\nFrom this expression it is possible to extract the form factor $\\Gamma (p^2)$. However, a lattice measurement of $\\Gamma (p^2)$ requires the computation of the ratio\n\\begin{equation}\n G_{\\mu \\alpha \\mu} (p, 0, -p) p_\\alpha \/ \\left[D(p^2)\\right]^2 \\, D(0) \n\\end{equation}\nand the extraction of $\\Gamma (p^2)$ from this ratio will give rise to large statistical fluctuations at high momenta, where $D(p^2)$ becomes quite small. 
In fact, assuming Gaussian error propagation, it is possible to show that the statistical error on $\\Gamma (p^2)$ behaves as $\\Delta \\Gamma(p^2) \\sim p^4$ in the UV regime \\cite{duarte2016}.\n\n\n\n\n\\section{Handling of noise, lattice artefacts}\n\nIn order to try to deal with the large statistical fluctuations at high momenta, we considered a few strategies \\cite{guitese}:\n\n\\begin{itemize}\n\\item explore the ambiguity in the scale setting and perform a binning in the momentum --- all data points in each bin are replaced by a weighted average of the data points;\n\\item perform a $H(4)$ extrapolation of the lattice data \\cite{becirevic1999, soto2009} --- such a procedure is based on the remnant $H(4)$ symmetry group\nassociated with a hypercubic lattice. On the lattice, a scalar quantity $F$ is a function of the $H(4)$ invariants\n\\begin{displaymath}\n p^2 = p^{[2]} = \\sum_\\mu p^2_\\mu , \\quad\n p^{[4]} = \\sum_\\mu p^4_\\mu , \\quad\n p^{[6]} = \\sum_\\mu p^6_\\mu , \\quad\n p^{[8]} = \\sum_\\mu p^8_\\mu ,\n\\end{displaymath}\ni.e. $F_{Lat} = F(p^{[2]}, p^{[4]}, p^{[6]}, p^{[8]})$. The continuum limit will be given by $F(p^{[2]}, 0, 0, 0)$ up to corrections $\\mathcal{O}(a^2)$. Having several data points for the same $p^2$ but different $p^{[4]}$, $p^{[6]}$, $p^{[8]}$, an extrapolation of $F_{Lat}$ to the continuum limit can be done, assuming that it can be written as a power series of the H(4) invariants. Note that, in this work, only a linear extrapolation in $p^{[4]}$ is considered (an illustrative code sketch is given in the Results section below).\n\n\n\\end{itemize}\n\n \n\n\\section{Lattice setup}\n\nIn this work we consider the $64^4$ ensemble of 2000 configurations already studied in \\cite{duarte2016}, together with an $80^4$ ensemble of 1800 configurations, both generated with the Wilson gauge action at $\\beta=6.0$. The rotation to the Landau gauge has been performed using the Fourier accelerated Steepest Descent method \\cite{davies} implemented with the help of Chroma \\cite{chroma} and PFFT \\cite{pfft} libraries. The gluon field is computed using the definition\n\\begin{equation}\na g_0 A_\\mu (x + a \\hat{e}_\\mu) = \\frac{ U_\\mu (x) - U^\\dagger_\\mu (x)}{ 2 i g_0} \n - \\frac{\\mbox{Tr} \\left[ U_\\mu (x) - U^\\dagger_\\mu (x) \\right]}{6 i g_0} \n\\end{equation}\nwith the momentum space gluon field given by\n\\begin{equation}\nA_\\mu (\\hat{p}) = \\sum_x e^{- i \\hat{p} (x + a \\hat{e}_\\mu) } \\, A_\\mu (x + a \\hat{e}_\\mu) \\,\\,,\\,\\, \\hat{p}_\\mu = \\frac{2 \\, \\pi \\, n_\\mu}{a \\, L_\\mu}.\n\\end{equation}\n\n\n\\section{Results}\n\nIn Figure \\ref{binned} we compare the original and binned data for $\\Gamma (p^2)$. The binning of the data suppresses the large statistical errors in the high momentum region and produces a well defined and smooth curve.\n\n\\begin{figure}[h]\n\\vspace{0.55cm}\n \\centering\n \\subfigure[$64^4$ lattice.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma_64x4.eps} \\label{binn64}} \\qquad\n \\subfigure[$80^4$ lattice.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma_80x4.eps} \\label{binn80}}\n \\caption{Original and binned data for $\\Gamma (p^2)$.}\n \\label{binned}\n\\end{figure}\n\nNext, in Figure \\ref{binnedoverp2} we compare the binned data for both lattices. The results of the two volumes agree within errors, suggesting that finite volume effects are small. 
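As announced above, we give here a minimal Python sketch of the H(4) extrapolation (our own illustration, assuming a simple weighted linear fit in $p^{[4]}$ at fixed $p^{[2]}$; this is not the production analysis code):
\\begin{verbatim}
import numpy as np
from collections import defaultdict

def h4_extrapolate(p2, p4, F, dF):
    # Group data points by p^[2]; within each group, fit
    # F = c0 + c1 * p^[4] and keep the intercept c0, i.e. the
    # estimate of the continuum value F(p^[2], 0, 0, 0).
    groups = defaultdict(list)
    for a, b, f, e in zip(p2, p4, F, dF):
        groups[round(a, 12)].append((b, f, e))
    out = {}
    for a, pts in groups.items():
        if len(pts) < 2:          # no lever arm in p^[4]: keep raw value
            out[a] = pts[0][1]
            continue
        b, f, e = (np.array(v) for v in zip(*pts))
        c1, c0 = np.polyfit(b, f, 1, w=1.0 / e)  # numpy weights ~ 1/sigma
        out[a] = c0
    return out
\\end{verbatim}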
\n\n\\begin{figure}[h]\n\\vspace{0.65cm}\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{plots\/gamma_over_p2_compare.eps}\n\\end{center}\n \\caption{Comparison of binned data for $\\Gamma (p^2)$.}\n \\label{binnedoverp2}\n\\end{figure}\n\nIn Figure \\ref{H4extr} we compare the H(4) extrapolation of the $64^4$ lattice data with the binning of the original data. We observe that the H(4) extrapolation pushes the vertex to higher values in the high momentum regime. Nevertheless, in the infrared region, the extrapolated data is compatible with the original data, for both lattice volumes --- see Figure \\ref{H4infra}.\n\n\\begin{figure}[h]\n\\vspace{0.65cm}\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{plots\/all_gamma_over_p2_64_H4.eps}\n\\end{center}\n \\caption{Results of the H(4) extrapolation of $\\Gamma (p^2)$ on the $64^4$ lattice volume.}\n \\label{H4extr}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\vspace{0.55cm}\n \\centering\n \\subfigure[$p^2 \\Gamma(p^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/all_gamma.eps} \\label{H4infra-p2G}} \\qquad\n \\subfigure[$\\Gamma(p^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/all_gamma_over_p2.eps} \\label{H4infra-G}}\n \\caption{Original and H(4) data for both lattice volumes for low momenta.}\n \\label{H4infra}\n\\end{figure}\n\n\\section{Infrared behaviour of $\\Gamma(p^2)$}\n\nNo zero crossing of $\\Gamma(p^2)$, which would be an indication of ghost dominance in the\ninfrared, is seen in the lattice data reported here. In order to check for\na change of sign in $\\Gamma(p^2)$, in this section we explore the infrared\nbehaviour of the lattice $\\Gamma(p^2)$, using the $80^4$ data for momenta below 1~GeV, and fit the data to $\\Gamma_1(p^2)=A + Z \\ln(p^2)$ and $ \\Gamma_2(p^2)=A + Z \\ln(p^2+m^2)$. The first is a typical ansatz considered in recent studies of the zero crossing, see \\cite{guitese} for details, and the second is a variant of the first which includes an infrared logarithmic regularizing mass. \n\nIn Figure \\ref{zerocrossing} we plot the best fits of the lattice data for both fitting functions, obtained through the minimization of $\\chi^2\/d.o.f.$\nFor $\\Gamma_1(p^2)$, we obtained $\\chi^2\/d.o.f. = 1.23$ with $A=0.2395(16)$ and $Z=0.0646(21)$. Accordingly, since $\\Gamma_1(p_o^2)=0$ implies $p_o=\\exp(-A\/(2Z))$, the zero crossing occurs at $p_o=157$~MeV.\nFor $\\Gamma_2(p^2)$, the parameters take the values $A=0.208(24)$, $Z=0.124(27)$ and $m=0.61(15)$~GeV, with a $\\chi^2\/d.o.f. = 0.95$. As shown in the right plot of Figure \\ref{zerocrossing}, in this case there is no zero crossing. \n\n\\begin{figure}[h]\n\\vspace{0.55cm}\n \\centering\n \\subfigure[$\\Gamma (p^2) = A + Z \\ln(p^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma80-fit1.eps} \\label{zerocrossing-fit1}} \\qquad\n \\subfigure[$\\Gamma (p^2) = A + Z \\ln(p^2+m^2)$.]{ \\includegraphics[width=0.42\\textwidth]{plots\/gamma80-fit2.eps} \\label{zerocrossing-fit2}}\n \\caption{Infrared $80^4$ lattice data for $\\Gamma(p^2)$ together with some fitting functions. }\n \\label{zerocrossing}\n\\end{figure}\n\n\n\\section{Conclusions and outlook}\n\nIn this paper we describe an improved calculation of the three-gluon vertex on the lattice, for the asymmetric momentum configuration. We use two different lattice volumes $(6.5$ fm$)^4$ and $(8.2$ fm$)^4$, with a common lattice spacing of $a = 0.102$ fm. We show that an H(4) extrapolation of the lattice data pushes the vertex to higher values in the UV regime. 
We proceed with a functional study in the infrared region, considering functional forms compatible with a zero crossing and with an IR divergence.\n\nFurther momentum configurations will be explored in the near future.\n\n\n\n\n\n\n\\acknowledgments\n\n\n\nThis work was supported by national funds from FCT\nFundação para a Ciência e a Tecnologia, I. P., within the\nProjects UIDB\/04564\/2020, UIDP\/04564\/2020, and CERN\/FIS-COM\/0029\/2017.\nG. T. R. C. acknowledges financial support from FCT\n under Project UIDB\/04564\/2020, and also from the Generalitat Valenciana\n (genT program CIDEGENT\/2019\/040) and Ministerio de Ciencia e\n Innovacion PID2020-113644GB-I00.\nP. J. S. acknowledges financial support from FCT\n under Contract CEECIND\/00488\/2017.\nThis work was granted access to the HPC resources of\nthe PDC Center for High Performance Computing at the\nKTH Royal Institute of Technology, Sweden, made\navailable within the Distributed European Computing\nInitiative by the PRACE-2IP, receiving funding from the\nEuropean Community's Seventh Framework Programme\n(FP7\/2007-2013) under Grant agreement no. RI-283493.\nThe use of Lindgren has been provided under DECI-9\nproject COIMBRALATT. We acknowledge that the results\nof this research have been achieved using the PRACE-3IP\nproject (FP7 RI312763) resource Sisu based in Finland at\nCSC. The use of Sisu has been provided under DECI-12\nproject COIMBRALATT2. We acknowledge the\nLaboratory for Advanced Computing at the University of\nCoimbra \\cite{lca} for providing access to the HPC resource\nNavigator. The authors acknowledge Minho Advanced Computing Center\n\\cite{macc} for providing HPC resources that have contributed to\nthe research results reported within this paper. This work was\nproduced with the support of MACC and it was funded by FCT I.P.\nunder the Advanced Computing Project CPCA\/A2\/6816\/2020, platform Bob.\nThis work was produced with the support of INCD \\cite{incd} funded by FCT and\nFEDER under the project 01\/SAICT\/2016 nº 022153.\n\n\n\n\n\\section{Introduction} \\label{sec:intro}\n\n\n\nThe one-dimensional Ising model has played an important role in statistical mechanics since its introduction in 1920 \\cite{Lenz1920}. This model possesses the virtue of exact solvability in the ensemble of fixed ordering field, $h$, and it also represents an important example of an infinite system with short-range interactions that is, in addition, experimentally realizable \\cite{Armitage1,Armitage2,Armitage3,Armitage4}. Additionally, it manifests scaling behavior in the vicinity of its zero-temperature ordered state, and it is a test-bed for exploring the influence of finite-size scaling on critical properties, including the connection between the behavior of fluctuation-induced forces in the critical regime and scaling predictions \\cite{rudzanabshak2010}. The one-dimensional Ising model in a transverse field has proven an important experimental realization of a system with a quantum phase transition \\cite{Armitage4}. 
While the properties of the Ising model in the fixed ordering field ensemble are well-studied, we are aware of no corresponding work having been done in the conjugate ensemble in which the total magnetization is held fixed---except in the limit of an infinite Ising chain, in which a Legendre transformation suffices to derive the fixed-$M$, or Helmholtz, free energy from the fixed $h$, or Gibbs free energy, and in the case of $M=0$ \\cite{Wang2017}. Utilizing an analysis of domain wall statistics, we have obtained closed-form expressions for the fixed-$M$ partition function in the case of periodic, antiperiodic and Dirichlet, or free, boundary conditions. The resulting expressions differ non-trivially from corresponding expressions in the fixed-$h$ ensemble. These results should extend the applicability of the one-dimensional Ising model to physical systems in which finite size effects play an important role. \nThe analysis reported here can also be viewed as a useful addition to approaches based on Ginzburg-Landau-Wilson Hamiltonians \\cite{Dietrich1,Dietrich2,Dietrich3}. \n\n\n\\section{Ising chain with periodic boundary conditions: the combinatorics of domains} \\label{sec:periodicdomains}\n\nAs in the case of all boundary conditions considered here, the partition function to be evaluated is the sum over spin states of the Boltzmann factor\n\\begin{equation}\n\\exp\\left[K \\sum_{i=1}^{N-1}s_i s_{i+1}\\right] \\label{eq:bf}\n\\end{equation}\nwhere each spin variable takes on the values $\\pm 1$. Fixing the total magnetization amounts to the constraint that the difference between the number of up spins, $N_+$, and the number of down spins, $N_-$, is equal to $M$. \n \n\n \nThe key step in the calculation of the partition function is the determination of the number of ways in which the spins can arrange themselves into alternating spin-up and spin-down domains, subject to the requirement of a fixed value of the total magnetization, $M$. We start with equations that express the relationships between $M$, the number of up spins, $N_+$ and the number of down spins, $N_-$, along with the total number of spins, $N$:\n \\begin{equation}\n \tN = N_+ + N_-, \\, \\qquad \\mbox{and}\\qquad \n \tM = N_+-N_- \\label{eq:cf1}.\n \\end{equation}\n Inverting these equations we find\n \\begin{equation}\n \tN_+ = \\frac{N+M}{2} \\, \\qquad \\mbox{and}\\qquad \n \tN_- = \\frac{N-M}{2}. \\label{eq:cf2}\n \\end{equation}\n \nFor insight into the determination of domain statistics, we look at, say, the third leading contribution in an expansion of the partition function in powers of $\\exp[-K]$. We start with a domain of $N_+$ up spins. We then partition that domain into four smaller domains. We do this by inserting three ``slices'' into the domain, effectively three walls between adjacent spins. \n We then partition a domain of $N_-$ down spins into four smaller domains, which we insert between the domains of up spins. The process is illustrated in Fig. \\ref{fig:steps}.\n \\begin{figure}[htbp]\n \t\t\\includegraphics[width=3in]{step1.pdf}\n\t\t\\includegraphics[width=3in]{step2.pdf}\n\t\t\\includegraphics[width=3in]{step3.pdf}\n \t\t\\caption{Top portion: the domain of up spins (in green) and of down spins (in orange); middle portion: the domains are each divided into four smaller domains; bottom portion: the smaller domains are now interspersed. }\n \t\t\\label{fig:steps}\n \\end{figure}\n\n \n \n We now calculate how many ways there are of subdividing each domain into four subdomains. 
In the case of the spin up domain, that quantity is \n \\begin{equation}\n \t(N_+-1)(N_+-2)(N_+-3)\/3! \\label{eq:cf5}\n \\end{equation}\n which is the number of ways of inserting three partitions between adjacent spins in a linear array of $N_+$ up spins. A similar expression holds for the number of ways of subdividing the domain of down spins. Making use of the relations (\\ref{eq:cf2}) and multiplying resulting expressions to obtain the number of ways of subdividing both domains we end up with the factor \n \\begin{equation}\n \t((N-2)^2-M^2)((N-4)^2-M^2)((N-6)^2-M^2)\/(4^3(3!)^2). \\label{eq:cf9}\n \\end{equation}\n \n We now join the ends of the set of domains up so that they form a ring, consistent with periodic boundary conditions, and we rotate the ring around to find out how many ways we can arrange the subdomains. This yields a factor of $N$. However, because we take all possible lengths for the set of subdomains we are overcounting by a factor of four, the number of pairs of domains. \n The overall factor is thus\n \\begin{equation}\n \t\\frac{N}{4}\\frac{((N-2)^2-M^2)((N-4)^2-M^2)((N-6)^2-M^2)}{ 4^3 (3!)^2}. \\label{eq:cf10}\n \\end{equation}\n To obtain the complete expression, we multiply the above by $\\exp(-16K)$, corresponding to the energy cost of the eight walls between the eight domains of the periodically continued array in Fig. \\ref{fig:steps}. \n \n In the general case of $2k$ alternating domains, the first factor of 4 in the denominator of (\\ref{eq:cf10}) is replaced by $k$, while the two other factors become $4^{k-1}$ and $(k-1)!^2$. Thus, the general form of the denominator is\n \\begin{equation}\n \t\\label{eq:gen-form-denomnator}\n 4^{k-1} k ((k-1)!)^2.\n \\end{equation}\nThen, for the numerator one has\n\\begin{equation}\n\t\\label{eq:gen-form-nominator}\n\tN\\prod _{p=1}^{k-1} \\left((N-2 p)^2-M^2\\right).\n\\end{equation}\nTaking into account that the Boltzmann weight of a configuration with $2k$ domains is $\\exp [K (N-4 k)]$, for the contribution of this configuration to the statistical sum one obtains \n\\begin{eqnarray}\n\t\\label{eq:stat-sum-expansion}\n\t\\lefteqn{\\mathrm{Zterm}(N,M,K,k)} \\\\&=& \\frac{N \\exp (K (N-4 k)) \\prod _{p=1}^{k-1} \\left((N-2 p)^2-M^2\\right)}{4^{k-1} k ((k-1)!)^2}.\\nonumber \n\\end{eqnarray}\nThe form of the right hand side of (\\ref{eq:stat-sum-expansion}) allows us to sum over $k$ from 0 to $\\infty$ to obtain the partition function $Z^{({\\rm per})}(N,M,K)$. The result is a closed-form expression that is exact when $N$ and $M$ are both even or both odd integers with $|M| \\le N$. For periodic boundary conditions, the fixed-$h$ (Gibbs) free energy approaches its bulk value exponentially fast, governed by a finite correlation length ($\\xi>0$), as $N\\to \\infty$. By contrast, in systems with fixed $m$ the Helmholtz free energy possesses non-scaling contributions that vanish significantly more slowly than this exponential approach to the bulk behavior. \n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_2.pdf}\n\t\\caption{The behavior of the function $X_{\\rm H}^{(\\rm per)}(K,m|N)$ (see \\eq{eq:figeq3}) with $N=100$ and for $m=0.1, 0.3$, and $m=0.5$. We observe that the function is \\textit{positive} for large values of $K$ and \\textit{negative} for relatively small values of $K$ provided $m$ is also relatively small. For large $m$ the force is always repulsive, irrespective of the value of $K$. The same is also true for very small values of $K$, independent of the value of $m$. 
The logarithmic behavior of the free energy of the finite Ising chain with periodic boundary conditions leads to the limit $X_{\\rm H}^{(\\rm per)}(K\\to\\infty,m|N)=1$. }\n\t\\label{fig:Helmholtz}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_3.pdf}\n\t\\caption{The behavior of the function $X_{\\rm H}^{(\\rm per)}(K,m|N)$ (see \\eq{eq:figeq3}) with $N=100,200, 300, 400$ and $N=500$. We observe that the function is \\textit{positive} for large and for small enough values of $K$, while being \\textit{negative} for relatively moderate values of $K$, \\textit{irrespective} of the value of $N$. The larger $N$ is, the stronger the repulsion for small enough $K$; the force in the latter regime is strongly repulsive, irrespective of the value of $N$. }\n\t\\label{fig:Helmholtz2}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\t\\includegraphics[width=\\columnwidth]{new_fig_4.pdf}\n\t\\caption{The behavior of the scaling function $X_{\\rm H}^{(\\rm per)}(x_t,m)$ for $m=0.1$. Inspection of the results obtained numerically from \\eq{eq:statistical-sum} with $N=100,200, 300, 400$ and $N=500$, and of that from \\eq{eq:scalingform}, demonstrates perfect scaling and mutual agreement. We observe that the function is \\textit{positive} for large values of $x_t$, \\textit{negative} for relatively moderate values of $x_t$, and again strongly repulsive for small values of $x_t$. }\n\t\\label{fig:Helmholtz3}\n\\end{figure}\n\nNote that $m$ can also be seen as a sort of generalized ``charge,'' or symmetry value, which is conserved both inside and outside the system. Given the free energy derivable from the partition function, one is in a position to determine the fluctuation-induced Helmholtz force on a finite Ising chain in contact with a ``bulk'' chain of infinite extent. The results of such a calculation are shown in Figs. \\ref{fig:Helmholtz}---\\ref{fig:Helmholtz3}. The force is minus the derivative with respect to $N$ of the combined Helmholtz free energy\n\\begin{equation}\n\\mathcal{F} = -\\ln\\left( \\mathcal{Z}^{({\\rm per})}(N,K,M)\\right) + (\\mathcal{N}-N)F_H(K,m). \\label{eq:figeq1}\n\\end{equation}\nHere $F_H$ is the Helmholtz free energy density of a ``bulk'' neighboring Ising chain. The term proportional to $\\mathcal{N}$ can be ignored as a background contribution to the overall free energy. The quantities $M$, $m$ and $K$ are kept constant in the process of differentiation, after which $M$ is set equal to $mN$. This yields the fluctuation-induced Helmholtz force $f_H^{({\\rm per})}(K,m,N)$.\nMultiplying the result for $f_H^{({\\rm per})}(K,m,N)$ by $N$ provides the function $X_H^{({\\rm per})}(K,m|N)$:\n\\begin{equation}\nX_H^{({\\rm per})}(K,m|N)=N\\, f_H^{({\\rm per})}(K,m,N) \\label{eq:figeq3}.\n\\end{equation} \nIts behavior is shown in Figs. \\ref{fig:Helmholtz} and \\ref{fig:Helmholtz2}. Fig.~\\ref{fig:Helmholtz} shows its behavior as a function of $K$ for $N=100$, and $m=0.1, 0.3$ and $m=0.5$, while Fig.~\\ref{fig:Helmholtz2} shows it for $m=0.1$ and $N=100, 200, 300, 400$, and $N=500$. Focusing on the scaling regime ($K$ and $N$ both large compared to 1) we end up with the $N$-independent scaling function $X^{({\\rm per})}(x_t,m)$. \nFigure \\ref{fig:Helmholtz3} shows the behavior of this quantity as a function of $x_t$ for $m=0.1$.\n\n\n\n\n\nThe plots in Fig. 
\\ref{fig:Helmholtz} show that the fluctuation induced force studied has a behavior that is similar to one appearing in some versions of the Big Bang theory---strong repulsion at high temperatures, transitioning to moderate attraction for intermediate values of the temperature, and then back to repulsion, albeit much weaker than during the initial period of highest temperature \\cite{inflationary}. \n\n\n\n\\section*{Acknowledgements}\nDD gratefully acknowledges the discussions and exchange of information with Prof. Siegfried Dietrich and Dr. Markus Gross on some aspects of the current work.\n
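As an independent cross-check of the domain-counting result (\\ref{eq:stat-sum-expansion}), the following minimal Python sketch (our own illustration, not part of the original derivation) compares the fixed-$M$ partition function of a periodic ring, obtained by brute-force enumeration, with the sum of $\\mathrm{Zterm}(N,M,K,k)$ over $k$; for $|M|<N$ every configuration contains at least one pair of domain walls, so the sum can be started at $k=1$:
\\begin{verbatim}
import itertools, math

def Z_bruteforce(N, M, K):
    # exact fixed-M partition function of the periodic Ising ring
    Z = 0.0
    for s in itertools.product([-1, 1], repeat=N):
        if sum(s) == M:
            E = sum(s[i] * s[(i + 1) % N] for i in range(N))
            Z += math.exp(K * E)
    return Z

def Zterm(N, M, K, k):
    # contribution of configurations with 2k alternating domains
    w = N * math.exp(K * (N - 4 * k))
    for p in range(1, k):
        w *= (N - 2 * p) ** 2 - M ** 2
    return w / (4 ** (k - 1) * k * math.factorial(k - 1) ** 2)

N, M, K = 8, 2, 0.7
# terms with k > min(N_+, N_-) vanish: the product contains a zero factor
Z_domains = sum(Zterm(N, M, K, k) for k in range(1, N // 2 + 1))
print(Z_bruteforce(N, M, K), Z_domains)  # the two values agree
\\end{verbatim}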
\\section{Problem statement}\n\n\\label{sec:statement}\n\n\\begin{figure}\n\\centering\n \\resizebox{\\linewidth}{!}{\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\resizebox{\\linewidth}{!}{\\input{frame_diagram}}\n \n \\caption{}\n \\label{fig:sim_scene}\n \\end{subfigure}\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\resizebox{\\linewidth}{!}{\\includegraphics{image\/ur5_setup.jpeg}}\n \\caption{}\n \\label{fig:real_scene}\n \\end{subfigure}}\n \\caption{Our benchmark scene for multi-fingered grasping. \nThe pose of the hand $(\\mathbf{x}, \\mathbf{R})$ is defined in the local object frame. The depth camera produces an image $i$ of the scene. 
\section*{Acknowledgements}
DD gratefully acknowledges the discussions and exchange of information with Prof. Siegfried Dietrich and Dr. Markus Gross on some aspects of the current work.

\section{Problem statement}
\label{sec:statement}

\begin{figure}
\centering
 \resizebox{\linewidth}{!}{
 \begin{subfigure}{.5\textwidth}
 \centering
 \resizebox{\linewidth}{!}{\input{frame_diagram}}
 \caption{}
 \label{fig:sim_scene}
 \end{subfigure}
 \begin{subfigure}{.5\textwidth}
 \centering
 \resizebox{\linewidth}{!}{\includegraphics{image/ur5_setup.jpeg}}
 \caption{}
 \label{fig:real_scene}
 \end{subfigure}}
 \caption{Our benchmark scene for multi-fingered grasping. The pose of the hand $(\mathbf{x}, \mathbf{R})$ is defined in the local object frame. The depth camera produces an image $i$ of the scene.}
 \label{fig:frame_diagram}
\end{figure}

\paragraph{Description}
We consider the problem of grasping a rigid and stable body with a multi-fingered gripper, as illustrated in Fig.~\ref{fig:frame_diagram}. The object $\mathcal{O}$ is modelled as a 3D surface mesh and its centroid stands on a table at a location $(x_{\mathcal{O}}, y_{\mathcal{O}}, z_{\mathcal{O}})$ with a rotation $\varphi_{z,\mathcal{O}}$ around the $z$-axis in the world reference frame $\mathcal{F}_{W}$. We refer to its 2D pose $(x_\mathcal{O}, y_\mathcal{O}, \varphi_{z,\mathcal{O}})$ as $\mathbf{p}_{\mathcal{O}} \in \mathbb{R}^{2}\times \text{SO(2)}$. The hand configuration $\mathbf{h} \in \mathcal{H} = \mathbb{R}^{3}\times \text{SO(3)} \times \mathcal{G}$ is defined as the combination of the pose $(\mathbf{x},\mathbf{R}) \in \mathbb{R}^{3}\times \text{SO(3)}$ of the hand and the type $g \in \mathcal{G} = \{ \text{basic}, \text{wide}, \text{pinch} \}$ of the grasp. The hand pose $(\mathbf{x}, \mathbf{R})$ is defined with respect to the world coordinate frame. The robot operates in a 3D workspace observed with a fixed depth camera producing images $i \in \mathcal{I}$. The goal is to find a robust hand configuration $\mathbf{h^{*}}$ with respect to a given binary metric $S = \{0, 1\}$, where $S=1$ indicates a successful grasp.

\paragraph{Probabilistic modeling}
We model the scene and the grasping task according to the Bayesian network shown in Fig.~\ref{fig:graphical_model}. The variables $S, i, \mathbf{h}, \mathcal{O}$ and $\mathbf{p}_{\mathcal{O}}$ are modelled as random variables in order to capture the noise in the robot or in the depth camera, as well as our prior beliefs about the hand configuration, the object or its pose. The structure of the Bayesian network is motivated by the fact that $\mathbf{h}$, $\mathcal{O}$ and $\mathbf{p}_\mathcal{O}$ are independent, while $S$ depends on $\mathbf{h}$, $\mathcal{O}$ and $\mathbf{p}_\mathcal{O}$, and $i$ depends on $\mathcal{O}$ and $\mathbf{p}_\mathcal{O}$. This structure also enables a direct and straightforward way to generate data: $\mathbf{h}$, $\mathcal{O}$ and $\mathbf{p}_\mathcal{O}$ are sampled from their respective prior distributions, while $S$ and $i$ can be generated using forward physical simulators for the grasping and the camera.

The prior distribution $p(\mathbf{x})$ of the spatial position is uniform between the extreme values $\mathbf{x}_{\text{lim}}=(x_{\text{low}},y_{\text{low}}, z_{\text{low}}, x_{\text{high}},y_{\text{high}},z_{\text{high}})$, chosen to be within the range of physical dimensions of the gripper and the biggest object. It emphasizes our ignorance about interesting regions of space for grasping. The rotation $\mathbf{R}$ is parameterized with a quaternion. A quaternion $\mathbf{q}$ is an element of the quaternion group $\mathbb{H}$, of the form $\mathbf{q}= q_{0}\mathbf{1} + q_{1}\mathbf{i} + q_{2}\mathbf{j} + q_{3}\mathbf{k} = (q_{0}, q_{1}, q_{2}, q_{3})^{T}$ with $(q_{0}, q_{1}, q_{2}, q_{3})^{T} \in \mathbb{R}^{4}$ and $\mathbf{i}^{2}=\mathbf{j}^{2}=\mathbf{k}^{2}=\mathbf{ijk}=-1$. The conjugate $\mathbf{\bar{q}}$ of a quaternion $\mathbf{q}$ is given by $\mathbf{\bar{q}}:= q_{0}\mathbf{1} - q_{1}\mathbf{i} - q_{2}\mathbf{j} - q_{3}\mathbf{k}$. A unit quaternion, called a \textit{versor}, $\mathbf{q}_{1} \in \mathbb{H}_{1}$, has unit norm, $\|\mathbf{q}\| = \sqrt{ \mathbf{q}\mathbf{\bar{q}}}=1$. Versors give a more compact representation than rotation matrices and avoid gimbal lock and singularities. Unit quaternions can be identified with the elements of a hyperspherical manifold $\mathbb{S}^{3}$ embedded in $\mathbb{R}^{4}$. Moreover, $\mathbb{S}^{3}$ is a double covering of $\text{SO(3)}$, meaning that antipodal points $\pm\mathbf{q}$ represent the same rotation, which implies that $p(\mathbf{q};\cdot)=p(-\mathbf{q};\cdot)$.
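Since the double covering $\mathbb{S}^{3}\to\text{SO(3)}$ and the induced antipodal symmetry are central to the construction of the rotation prior below, the short self-contained sketch that follows illustrates both properties numerically; the conversion formula is the standard Hamilton-convention map, and variable names are illustrative.

\begin{verbatim}
# A unit quaternion q and its antipode -q map to the same rotation.
import numpy as np

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion q = (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

rng = np.random.default_rng(0)
q = rng.normal(size=4)
q /= np.linalg.norm(q)                    # a random point on S^3
R = quat_to_rot(q)
assert np.allclose(R, quat_to_rot(-q))    # antipodes give the same rotation
assert np.allclose(R @ R.T, np.eye(3))    # R is orthogonal...
assert np.isclose(np.linalg.det(R), 1.0)  # ...and proper: R is in SO(3)
\end{verbatim}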
We define the prior $p(\mathbf{q})$ as a mixture of \textit{power-spherical} distributions \cite{de2020power} with 4 modes $\mathbf{\mu}_{i}$. Each mode contributes a two-component mixture that satisfies $p(\mathbf{q};\cdot)=p(-\mathbf{q};\cdot)$. In total, we have
\begin{equation}
p(\mathbf{q}) = \frac{1}{N} \sum_{i=1}^{N=4}\frac{\text{PowerSpherical}(\mathbf{q}; \mathbf{\mu}_{i}, \kappa)}{2} + \frac{\text{PowerSpherical}(\mathbf{q}; -\mathbf{\mu}_{i}, \kappa)}{2}.
\end{equation}
These modes $\mathbf{\mu}_{i}$ encode information about the orientation of the gripper and share the same concentration factor $\kappa=30$. To grasp an object, the gripper points toward the table and thus toward the object, an informed prior which indeed results in sufficiently many successful grasps. We then define four rotations, separated by a rotation of $\frac{\pi}{2}$ around the $z$-axis (see Fig.~\ref{fig:prior_quat_mode} in Appendix~\ref{appendix:distribution}). In this way, our prior covers a large part of the rotation space while remaining sufficiently informative, by contrast to a uniform prior over the unit sphere $\mathbb{S}^{3}$. The grasp type $g$ is uniformly distributed over the three types basic, wide and pinch. These three modes modulate the spacing between the fingers on the side opposite to the thumb. Finally, $p(\mathbf{h}) = p(\mathbf{x})p(\mathbf{R})p(g)$.
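To make the rotation prior concrete, the sketch below evaluates the mixture density up to the normalization constant $C(\kappa)$, which is all that is needed for the MAP optimization of Section~\ref{subsec:optimization}. The base orientation used to build the four modes is a hypothetical stand-in for the downward-pointing gripper orientation; only the structure of the mixture (four modes $\frac{\pi}{2}$ apart about the $z$-axis, each paired with its antipode) follows the text.

\begin{verbatim}
# Unnormalized antipodally symmetric prior over unit quaternions.
import numpy as np

KAPPA = 30.0  # concentration factor shared by all modes

def quat_mul(r, s):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = r
    w2, x2, y2, z2 = s
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Hypothetical reference orientation (gripper pointing down), rotated by
# multiples of pi/2 about the z-axis to produce the four modes mu_i.
BASE = np.array([0.0, 1.0, 0.0, 0.0])
MODES = [
    quat_mul(np.array([np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)]), BASE)
    for a in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
]

def power_spherical_unnorm(q, mu, kappa=KAPPA):
    """Power-spherical density up to C(kappa): (1 + mu^T q)^kappa."""
    return (1.0 + mu @ q) ** kappa

def prior_density_unnorm(q):
    """Equal-weight mixture over the four modes and their antipodes."""
    return sum(0.5 * power_spherical_unnorm(q, mu)
               + 0.5 * power_spherical_unnorm(q, -mu)
               for mu in MODES) / len(MODES)

q = np.array([0.5, 0.5, 0.5, 0.5])  # an arbitrary versor
assert np.isclose(prior_density_unnorm(q), prior_density_unnorm(-q))
\end{verbatim}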
The prior $p(\mathcal{O})=p(\mathrm{Mesh})p(\beta)$ is a discrete uniform distribution over a fixed set of object meshes combined with a uniform distribution for the scaling factor $\beta$. Finally, the prior $p(\mathbf{p}_{\mathcal{O}})$ captures our belief that the object can be anywhere on the table with any rotation around the vertical axis. For this reason, uniform distributions are used for all three parameters $x_{\mathcal{O}},y_{\mathcal{O}}, \varphi_{z, \mathcal{O}}$. Table~\ref{tab:prior_distribution} summarizes the prior distributions.

\begin{figure}
 \begin{subfigure}[b]{.55\linewidth}
 \centering
 \resizebox{\linewidth}{!}{\input{bayesian_network.tex}}
 \vspace{0.5em}
 \caption{}
 \label{fig:graphical_model}
\end{subfigure}
\begin{subfigure}[b]{.45\linewidth}
\centering
\resizebox{0.85\linewidth}{!}{
\begin{tabular}{cl}
 \hline
 Variable & Prior \\
 \hline
 $x$ & $\text{uniform}(-0.15,0.15)$ \\
 $y$ & $\text{uniform}(-0.15,0.15)$ \\
 $z$ & $\text{uniform}(0.12,0.34)$ \\
 $\mathbf{R}$ & $\text{mixture of power spherical}(\mathbf{\mu}_{i}, \kappa)$ \\
 $g$ & $\text{categorical}(\{\frac{1}{3},\frac{1}{3},\frac{1}{3}\})$ \\
 $x_{\mathcal{O}}$ & $\text{uniform}(-0.05,0.05)$ \\
 $y_{\mathcal{O}}$ & $\text{uniform}(-0.05,0.05)$ \\
 $\varphi_{z, \mathcal{O}}$ & $\text{uniform}(-\pi,\pi)$ \\
 $\mathrm{Mesh}$ & $\text{uniform in the set of objects}$ \\
 $\beta$ & $\text{uniform}(0.9, 1.1)$\\
 \hline
 \end{tabular}}
 \caption{}
 \label{tab:prior_distribution}
\end{subfigure}
\caption{(a) Probabilistic graphical model of the environment. Gray nodes correspond to observed variables and white nodes to unobserved variables. (b) Prior distributions.}
\end{figure}

Given our probabilistic graphical model, we finally formulate grasping as the Bayesian inference of the hand configuration $\mathbf{h}^{*}$ that is a posteriori the most likely given a successful grasp and an observation $i$. That is, we seek the maximum a posteriori (MAP) estimate
\begin{equation}
\label{eq:map}
\mathbf{h}^{*} = \argmax_{\mathbf{h}}~p(\mathbf{h}|S=1, i).
\end{equation}

\section{Likelihood-free Bayesian inference for multi-fingered grasping}
\label{sec:method}

\subsection{Density ratio estimation}
From Bayes' rule, the posterior of the hand configuration is
\begin{equation}
\label{eq:proba_cond}
p(\mathbf{h}|S, i) = \frac{p(S,i| \mathbf{h})}{p(S,i)}p(\mathbf{h}).
\end{equation}
The likelihood function $p(S, i|\mathbf{h})$ and the evidence $p(S, i)$ are both intractable, which makes standard Bayesian inference procedures such as Markov chain Monte Carlo unusable. However, drawing samples from forward models remains feasible with physical simulators, hence enabling likelihood-free Bayesian inference algorithms.

First, we express the likelihood-to-evidence ratio as a product of two individual ratios,
\begin{align}
 r(S, i|\mathbf{h}) &= \frac{p(S, i|\mathbf{h})}{p(S, i)}= \frac{p(S|\mathbf{h})}{p(S)} \frac{p(i|S, \mathbf{h})}{p(i|S)}= r(S|\mathbf{h}) r(i|S, \mathbf{h}).
 \label{eq:ratio_decomposition}
\end{align}
By adapting the approach described in \cite{pmlr-v119-hermans20a, Brehmer:2019jyt} for likelihood ratio estimation, we train two neural network classifiers $d_\phi$ and $d_\theta$ that we use to approximate $r(S|\mathbf{h})$ and $r(i|S, \mathbf{h})$. The first network $d_\phi$ is trained to distinguish positive tuples $(S, \mathbf{h})$ (labeled $y=1$) sampled from the joint distribution $p(S, \mathbf{h})$ from negative tuples (labeled $y=0$) sampled from the product of marginals $p(S)p(\mathbf{h})$.
The Bayes-optimal classifier $d^{*}(S,\mathbf{h})$ that minimizes the cross-entropy loss is given by
\begin{equation}
\label{eq:discriminator}
d^{*}(S, \mathbf{h}) = \frac{p(S, \mathbf{h} )}{p(S)p(\mathbf{h})+ p(S, \mathbf{h})},
\end{equation}
which recovers the likelihood ratio $r(S|\mathbf{h})$ as
\begin{equation}
 \label{eq:d_to_r}
 \frac{d^{*}(S,\mathbf{h})}{1-d^{*}(S,\mathbf{h})} = \frac{p(S,\mathbf{h})}{p(S)p(\mathbf{h})} = \frac{p(S|\mathbf{h})}{p(S)}.
\end{equation}
Therefore, by modelling the classifier with a neural network $d_\phi$ trained on this binary classification problem, we obtain an approximate but amortized and differentiable likelihood ratio
\begin{equation}
 \hat{r}(S|\mathbf{h}) = \frac{d_\phi(S,\mathbf{h})}{1-d_\phi(S,\mathbf{h})}.
\end{equation}
The second network $d_\theta$ is trained similarly, over positive tuples $(i, \mathbf{h})$ (labeled $y=1$) sampled from the conditional joint distribution $p(i, \mathbf{h}|S=1)$ and negative tuples $(i, \mathbf{h})$ (labeled $y=0$) sampled from the product of marginals $p(i|S=1)p(\mathbf{h}|S=1)$. Using the same likelihood-ratio trick, we obtain
\begin{equation}
 \hat{r}(i|S=1, \mathbf{h}) = \frac{d_\theta(i, \mathbf{h})}{1-d_\theta(i, \mathbf{h})}.
\end{equation}
Finally, the likelihood ratios are combined with the prior to approximate the posterior as
\begin{equation}
\hat{p}(\mathbf{h}|S=1, i) = \hat{r}(i|S=1, \mathbf{h})\hat{r}(S=1|\mathbf{h}) p(\mathbf{h}),
\end{equation}
which enables immediate posterior inference despite the initial intractability of the likelihood function $p(S, i|\mathbf{h})$ and of the evidence $p(S, i)$.

\begin{figure}
 \centering
 \input{nn_architecture}
 \caption{Neural network architectures of the classifiers $d_\phi$ and $d_\theta$ used to respectively approximate the likelihood ratios $r(S|\mathbf{h})$ and $r(i|S=1,\mathbf{h})$.}
 \label{fig:pipeline_architecture}
\end{figure}

The neural network classifiers $d_\phi$ and $d_\theta$ follow the architectures shown in Fig.~\ref{fig:pipeline_architecture}. In $d_\theta$, the camera image $i$ of size $640\times 480\times 1$ is pre-processed by scaling the depths to the interval $\{0\} \cup [0.45, 1]$ and resizing to $256\times 160\times 1$. Then, $i$ is fed to a convolutional network made of four convolutional layers followed by a fully connected layer, whose goal is to produce a vector embedding of the image. The image embedding and $\mathbf{h}$ are then both fed to a subsequent network made of 2 fully connected layers. The hand configuration $\mathbf{h}$ enters the neural network as a $1\times13$ vector where the rotation matrix $\mathbf{R}$ is flattened \cite{murphy2021implicit, zhou2019continuity} and the grasp type $g$ is passed through an embedding. In $d_\phi$, both $S$ and $\mathbf{h}$ are directly fed to a network made of 2 fully connected layers. The parameters $\phi$ and $\theta$ are optimized by following Algorithm~\ref{alg:learning_procedure} (Appendix~\ref{appendix:algorithms}), using Adam as the optimizer.

Finally, we note that the factorization of the likelihood-to-evidence ratio forces the two ratio estimators to focus their respective capacity on the information brought by $S$ and $i$. Because of the high discriminative power of $S$, training instead a single ratio taking both $S$ and $i$ as inputs would lead to an estimator that usually discards the smaller amount of information brought by $i$.
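The likelihood-ratio trick for $\hat{r}(S|\mathbf{h})$ can be summarized in a few lines of PyTorch. The sketch below assumes batches of pre-simulated pairs $(\mathbf{h}, S)$ and uses a small multilayer perceptron as a stand-in for $d_\phi$; shuffling $S$ within the batch provides the samples from the product of marginals $p(S)p(\mathbf{h})$.

\begin{verbatim}
# Training d_phi by classifying joint samples against marginal samples,
# then turning the classifier output into a likelihood ratio estimate.
import torch
import torch.nn as nn

d_phi = nn.Sequential(  # input: 13-dim h concatenated with the label S
    nn.Linear(14, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(d_phi.parameters(), lr=1e-3)
bce = nn.BCELoss()

def training_step(h, S):
    """h: (B, 13) hand configurations; S: (B, 1) simulated labels."""
    S_marginal = S[torch.randperm(S.shape[0])]  # breaks the (h, S) coupling
    pos = d_phi(torch.cat([h, S], dim=1))           # ~ p(S, h), label 1
    neg = d_phi(torch.cat([h, S_marginal], dim=1))  # ~ p(S)p(h), label 0
    loss = bce(pos, torch.ones_like(pos)) + bce(neg, torch.zeros_like(neg))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def likelihood_ratio(h, S):
    """r_hat(S | h) = d / (1 - d), amortized and differentiable in h."""
    d = d_phi(torch.cat([h, S], dim=1))
    return d / (1 - d)
\end{verbatim}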
\subsection{Maximum a posteriori estimation}
\label{subsec:optimization}

Due to the intractability of the likelihood function and of the evidence, Eq.~(\ref{eq:map}) cannot be solved analytically or numerically. We rely instead on the approximation given by the likelihood-to-evidence ratio $\hat{r}$ to find an approximation of the maximum a posteriori (MAP) estimate as
\begin{align}
 \hat{\mathbf{h}}^{*} &= \argmax_{\mathbf{h}}\hat{r}(S=1,i|\mathbf{h})p(\mathbf{h}) \\
 &= \argmin_{\mathbf{h}} -\log \hat{r}(S=1,i|\mathbf{h})p(\mathbf{h})
 \label{eq:approximate_map},
\end{align}
which we solve using gradient descent. For a given $g$, the gradient of Eq.~(\ref{eq:approximate_map}) decomposes as
\begin{equation}
\label{eq:euclidean_grad}
 -\nabla_{(\mathbf{x},\mathbf{R})}\log \hat{r}(S,i|\mathbf{h})p(\mathbf{h}) = -\nabla_{(\mathbf{x},\mathbf{R})}\log \hat{r}(S,i|\mathbf{h}) - \nabla_{(\mathbf{x},\mathbf{R})}\log p(\mathbf{h}).
\end{equation}
Our closed-form prior $p(\mathbf{h})$ has analytical gradients. In fact, uniform distributions have null gradient everywhere in their domain, so $\nabla_{\mathbf{x}}p(\mathbf{h}) = \mathbf{0}$. By contrast, $p(\mathbf{R})$ is a weakly informative prior with a non-null gradient inherited from the power-spherical distribution. Its derivative with respect to $\mathbf{q}$ is
\begin{equation}
\label{eq:grad_power_spherical}
\begin{split}
\nabla_{\mathbf{q}}p(\mathbf{q};\mu, \kappa) &= C(\kappa)\kappa(1+\mu^{T}\mathbf{q})^{\kappa-1}\nabla_{\mathbf{q}}(1+\mu^{T}\mathbf{q})\\
 &= C(\kappa)\kappa\mathbf{\mu}(1+\mu^{T}\mathbf{q})^{\kappa-1},
\end{split}
\end{equation}
where $C(\kappa)$ is the normalization term. Since the likelihood-to-evidence ratio estimator $\hat{r}$ is modelled by a neural network, it is fully differentiable with respect to its inputs and its gradients can be computed by automatic differentiation. However, not all variables of the problem are Euclidean, and naively performing gradient descent would violate our geometric assumptions. Let us consider a variable $\mathcal{Z}$ on the smooth Riemannian manifold $\mathcal{M}=\mathbb{R}^{3} \times \text{SO(3)}$ with tangent space $\mathcal{T}_{\mathcal{Z}}\mathcal{M}$ and a function $f : \mathcal{M} \rightarrow \mathbb{R}$. Since SO(3) is embedded in the set of $3\times 3$ matrices $\mathbb{R}^{3\times 3}$, $f$ can be evaluated on $\mathbb{R}^{3} \times \mathbb{R}^{3\times3}$, leading to the definition of the Euclidean gradient $\nabla f(\mathcal{Z}) \in \mathbb{R}^{3} \times \mathbb{R}^{3\times3}$. In turn, this Euclidean gradient can be transformed into its Riemannian counterpart $\text{grad}f(\mathcal{Z})$ via the orthogonal projection $\mathbf{P}_{\mathcal{Z}}$ onto the tangent space $\mathcal{T}_{\mathcal{Z}}\mathcal{M}$ \cite{absil2009optimization, hu2019brief}. Therefore,
\begin{equation}
 \text{grad} f(\mathcal{Z}) = \mathbf{P}_{\mathcal{Z}}(\nabla f(\mathcal{Z})),
\end{equation}
where the orthogonal projection onto $\mathbb{R}^{3}$ is the identity $\mathbb{I}_{3}$ and the orthogonal projection onto SO(3) at $\xi \in \text{SO(3)}$ is $\xi\,\text{skew}(\xi^{T}\nabla f(\xi))$, with $\text{skew}(A) \coloneqq \frac{1}{2}(A-A^{T})$. Thus, we can solve Eq.~(\ref{eq:approximate_map}) by projecting the Euclidean gradients of Eq.~(\ref{eq:euclidean_grad}) onto the tangent space $\mathcal{T}_{\mathcal{Z}}\mathcal{M}$ and plugging them into a manifold optimization procedure. In our experiments, we use the geometric conjugate gradient method \cite{absil2009optimization} implemented in Pymanopt~\cite{townsend2016pymanopt} to perform 20 optimization steps, and we scan for the best value of $g$. For completeness, the full optimization algorithm is provided in Algorithm~\ref{alg:optimization_procedure} (Appendix~\ref{appendix:algorithms}).
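The projection and retraction that underlie this procedure fit in a few lines. The sketch below implements a single plain Riemannian gradient step on $\mathbb{R}^{3}\times\text{SO(3)}$, rather than the conjugate gradient method used in our experiments, which combines the same ingredients with a direction-update rule; \texttt{grad\_x} and \texttt{grad\_R} stand for the Euclidean gradients of the negated log-posterior.

\begin{verbatim}
# One Riemannian gradient-descent step on R^3 x SO(3).
import numpy as np
from scipy.linalg import expm

def skew(A):
    return 0.5 * (A - A.T)

def riemannian_step(x, R, grad_x, grad_R, step=0.1):
    # Euclidean part: the projection is the identity.
    x_new = x - step * grad_x
    # SO(3) part: project onto the tangent space at R, then retract.
    Omega = skew(R.T @ grad_R)       # R @ Omega = grad f(R) in T_R SO(3)
    R_new = R @ expm(-step * Omega)  # geodesic retraction stays on SO(3)
    return x_new, R_new

# The retraction preserves orthogonality up to numerical precision:
rng = np.random.default_rng(1)
x, R = np.zeros(3), np.eye(3)
x, R = riemannian_step(x, R, np.ones(3), rng.normal(size=(3, 3)))
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
\end{verbatim}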
\section{Experiments}
\label{sec:experiment}

For training, we use 19 objects from the YCB dataset \cite{7251504} (see Fig.~\ref{fig:training_set} in Appendix~\ref{appendix:objects}) together with 5 objects from the ShapeNet dataset \cite{shapenet2015}, for a total of 24 types of objects. We selected a diverse range of objects compatible with the geometry of our gripper. In simulation, the success rate was evaluated on the 19 objects used for training, as well as on 5 new unseen objects from the YCB dataset (see Fig.~\ref{fig:testing_set} in Appendix~\ref{appendix:objects}). Only the 19 objects from YCB are used in the real setup.

\subsection{Data generation}
\paragraph{Grasp generative model}
A physical simulator is used to sample from $p(S|\mathbf{h}, \mathcal{O}, \mathbf{p}_{\mathcal{O}})$. Hand configurations, objects and object poses are sampled from their priors $p(\mathbf{h})$, $p(\mathcal{O})$ and $p(\mathbf{p}_{\mathcal{O}})$, and are then submitted to a lift test. First, a planner generates a trajectory in the joint space to bring the gripper to the hand configuration $\mathbf{h}$. If the pose is not reachable, the test fails. Otherwise, the gripper closes its fingers until contact with the object or with itself. Then, the robot attempts to lift the object to a given height. If the object is held in the gripper, the grasp is considered successful. Simulations are performed using Pybullet~\cite{coumans2020}. We use volumetric hierarchical approximate decomposition to obtain convex meshes of objects from \textit{obj} files for collision detection \cite{mamou2009simple}.

\paragraph{Sensor generative model}
The sensor generative model $p(i|\mathbf{p}_{\mathcal{O}}, \mathcal{O})$ is implemented in Pybullet, with an approach similar to the Blensor sensor framework \cite{gschwandtner2011blensor} used to render depth images from a Kinect sensor model. Simulating a structured-light sensor allows a better transfer to the real setup. Objects and poses are sampled from their priors, $\mathcal{O} \sim p(\mathcal{O}), \mathbf{p}_{\mathcal{O}}\sim p(\mathbf{p}_{\mathcal{O}})$. Then, the object is placed and an image $i \sim p(i|\mathbf{p}_{\mathcal{O}}, \mathcal{O})$ is generated.
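The essence of the sensor model is the synthetic rendering of depth images. The self-contained sketch below renders one depth image with PyBullet's built-in camera and converts the OpenGL depth buffer to metric depth; the asset, camera placement and clipping planes are illustrative, and the Kinect-style noise model used in our pipeline is omitted.

\begin{verbatim}
# Rendering a synthetic depth image of a tabletop scene with PyBullet.
import numpy as np
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.loadURDF("plane.urdf")                                  # table surface
p.loadURDF("duck_vhacd.urdf", basePosition=[0, 0, 0.05])  # stand-in object

near, far = 0.45, 1.0
view = p.computeViewMatrix(cameraEyePosition=[0.0, -0.6, 0.8],
                           cameraTargetPosition=[0.0, 0.0, 0.0],
                           cameraUpVector=[0.0, 0.0, 1.0])
proj = p.computeProjectionMatrixFOV(fov=60.0, aspect=640 / 480,
                                    nearVal=near, farVal=far)
_, _, _, depth_buffer, _ = p.getCameraImage(640, 480, viewMatrix=view,
                                            projectionMatrix=proj)
depth_buffer = np.reshape(depth_buffer, (480, 640))
# Convert the non-linear OpenGL depth buffer to metric depth values.
depth = far * near / (far - (far - near) * depth_buffer)
\end{verbatim}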
\paragraph{Domain randomization for sim-to-real transfer}
Generative models in simulation differ from their real-world counterparts due to incorrect physical modelling and inaccurate physical parameters. This \textit{reality gap} may lead to failures of our model because the synthetic and real data distributions are different. To overcome this issue, we use \textit{domain randomization} \cite{tobin2017domain} with nuisance parameters on the position and orientation of the camera, the minimum and maximum distance of the depth, the field of view, and the coefficients of lateral and spinning friction $\mu$ and $\gamma$. Domain randomization avoids the precise calibration of both the grasp and image simulators, which can be very difficult and costly. We use uniform distributions spanning $\pm 2\%$ for the nuisance parameters that are difficult to measure, and Gaussian distributions for easily measurable parameters. For the orientation of the camera, a multivariate normal variable $\eta \sim \mathcal{N}(\mathbf{0}, \Sigma), \Sigma=\text{diag}(\sigma_{\alpha}=0.002, \sigma_{\beta}=0.01, \sigma_{\gamma}=0.002)$ is drawn and then mapped to $\text{SO}(3)$ using the exponential map.
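The camera-orientation nuisance is the only one that lives on SO(3), and a minimal sketch of its sampling is given below. It treats the quoted $\sigma$ values as standard deviations of the Gaussian in the Lie algebra $\mathfrak{so}(3)$, which is our reading of the covariance quoted above.

\begin{verbatim}
# Randomizing the camera orientation: Gaussian noise in so(3), pushed
# onto SO(3) through the exponential map.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)
SIGMA = np.array([0.002, 0.01, 0.002])  # (sigma_alpha, sigma_beta, sigma_gamma)

def random_camera_rotation():
    eta = rng.normal(0.0, SIGMA)              # eta ~ N(0, diag(SIGMA^2))
    hat = np.array([[0.0,    -eta[2], eta[1]],
                    [eta[2],  0.0,   -eta[0]],
                    [-eta[1], eta[0], 0.0]])  # hat map: R^3 -> so(3)
    return expm(hat)                          # exponential map onto SO(3)

R_noise = random_camera_rotation()
assert np.allclose(R_noise @ R_noise.T, np.eye(3))
\end{verbatim}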
\subsection{Simulation benchmarks}
We evaluate the performance of our approach incrementally, adding algorithmic components one by one to assess their respective marginal benefits. For each inference strategy, we estimate the success rate over $1000$ grasping attempts for randomly sampled objects and camera images. Nuisance parameters are resampled in simulation when evaluating the success. Our general results are summarized in Table~\ref{tab:sim_metric}, while supplementary results for each individual category of objects can be found in Appendix~\ref{appendix:supplementary-results}.

We first report results for strategies maximizing the (conditional) densities $p(\mathbf{h}|\cdot)$ of hand configurations. Optimizing for the maximum a priori estimate $\mathbf{h} = \argmax p(\mathbf{h})$, without conditioning on success or an observation of the scene, leads to a very low success rate of $0.6\%$. As expected, these results are too poor to be usable, but they underline the informativeness of the prior: sampling hand configurations from a uniform prior would instead result in a success rate about one order of magnitude smaller (less than $0.1\%$). When conditioning on the expected success $S=1$, performance increases to $44\%$. Taking both the expected success $S=1$ and the image $i$ into account, following the inference methodology proposed in Section~\ref{sec:method}, leads to an even larger success rate of $71\%$ for the maximum a posteriori estimates. For the 5 new objects, we reach a comparable success rate of $75\%$, which demonstrates the good generalization of the approach. In comparison, had the properties $\mathcal{O}$, $\mu$, and $\beta$ of the object been perfectly known, the success rate would reach $85\%$, which shows that the convolutional network manages to extract most of the relevant information from the observation $i$. Table~\ref{tab:sim_metric} also reports results for maximum likelihood estimates, achieving success rates of $43\%$, $64\%$ and $80\%$ when maximizing the likelihoods $p(S=1|\mathbf{h})$, $p(S=1,i|\mathbf{h})$, and $p(S=1, \mathcal{O}, \mu, \beta|\mathbf{h})$, respectively. Note that maximizing $p(S=1|i, \mathbf{h})$ and $p(S=1| \mathcal{O}, \mu, \beta,\mathbf{h})$ would give the same results since $i$ and $\mathcal{O}, \mu, \beta$ are independent of $\mathbf{h}$. Our informative prior explains the difference in success rates between the MAP and MLE estimates and motivates the use of a Bayesian approach.

Not only can our framework be used for grasp planning, it also provides immediate access to the full posterior $p(\mathbf{h}|S=1, i)$ of hand configurations. As an illustrative example, we extract the marginal posterior densities $p(\mathbf{x}|S=1, i), p(\mathbf{R}|S=1, i)$ and $p(g|S=1, i)$ for the simulated scene of Fig.~\ref{fig:sim_scene}, with the box centered at $(0, 0)$ without any rotation. The resulting posterior is shown in Fig.~\ref{fig:posterior}. First, $p(\mathbf{x}|S=1, i)$ shows the spatial distribution of the hand configuration $\mathbf{h}$. The concentrations along the $x$-axis and $z$-axis are high, meaning that high-density regions are located slightly behind and in front of the box, at a height related to the geometrical dimensions of the box. Concerning the $y$-axis, the posterior fails to capture the symmetry and places all the density to the right of the box. Overall, the positions $\mathbf{x}$ most likely to give a successful grasp are at the right corner of the box. This is underlined by the posterior $p(\mathbf{R}|S=1, i)$, where the red dots correspond to the density of the $x$-axis, the green dots to the $y$-axis and the blue dots to the $z$-axis. The $x$-axis has one mode, directed toward the table, inherited from the prior and slightly deviating to the right. The $y$-axis, however, has only two antipodal modes, by contrast to the prior. These modes correspond to the situations in which the fingers are placed on the front surface or the back surface. The $z$-axis can be constructed by taking the cross product between $x$ and $y$. Uncertainties from $x$ and $y$ are propagated, leading to two antipodal modes with lower concentration than for $y$. Finally, the posterior $p(g|S=1, i)$ for the grasp type indicates a preference towards the pinch and wide modes over the basic mode.
The pinch mode is preferred when the position $\mathbf{x}$ is far from the right corner, while the wide mode is mainly used when $\mathbf{x}$ is located near the right corner.

\begin{table}
\centering
 \resizebox{0.80\linewidth}{!}{
 \begin{tabular}{llcc}
 \hline
 \hline
 Grasping inference strategy & & \multicolumn{2}{c}{Success rate}\\
 \hline
 & & {\it Sim} & {\it Real} \\
 \hline
 \multirow{2}{*}{Prior based} & $\mathbf{h} \sim p(\mathbf{h})$ & $0.8\%$ & - \\
 & $\mathbf{h}=\argmax p(\mathbf{h})$ & $0.6\%$ & - \\
 \hline
 \multirow{2}{*}{Metric based} & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(S=1| \mathbf{h})$ & $43\%$ & - \\
 & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(\mathbf{h}| S=1)$ & $44\%$ & $46\%$ \\
 \hline
 \multirow{2}{*}{Partial observation based}
 & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(S=1, i | \mathbf{h})$ & $64\%/71\%$ & - \\
 & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(\mathbf{h}| S=1, i)$ & $71\%/\mathbf{75}\%$ & $70\%$ \\
 \hline
 \multirow{2}{*}{Full observation based (ideal)} &
 $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(S=1, \mathcal{O}, \mu, \beta| \mathbf{h})$ & $80\%$ & -\\
 & $\mathbf{h}=\argmax_{\mathbf{h}} \hat{p}(\mathbf{h}|S=1, \mathcal{O}, \mu, \beta)$ & $85\%$ & - \\
 \hline
 \hline
 \end{tabular}}
 \vspace{1em}
 \caption{Grasping success rates for various inference strategies of the hand configuration. Entries of the form $a\%/b\%$ report the success rate on the training objects and on the 5 new objects, respectively. The success rate obtained by performing Bayesian posterior inference through the full forward simulation reaches $71\%$ for objects seen during training and $75\%$ for the 5 new objects. In real experiments, the success rate reaches $70\%$.}
 \label{tab:sim_metric}
\end{table}

\begin{figure}
 \centering
 \resizebox{0.70\linewidth}{!}{\includegraphics{image/results/posterior_all.pdf}}
 \caption{Posterior $p(\mathbf{h}|S=1, i)$ for the setup in Fig.~\ref{fig:sim_scene} with the box centered at $(0, 0)$. (Left) Posterior $p(\mathbf{x}|S=1, i)$. (Middle) Posterior $p(\mathbf{R}|S=1, i)$. (Right) Posterior $p(g|S=1, i)$.}
 \label{fig:posterior}
\end{figure}

\subsection{Physical benchmarks}
We carried out experiments with a Robotiq 3-finger gripper attached to a UR5 robotic arm, as shown in Fig.~\ref{fig:real_scene}. A Kinect v1, rigidly linked to the robot base, provides depth images. The robotic arm is position-controlled in the joint space. Communications are performed within the ROS framework. We calibrate the centre of the table by computing a trajectory in the simulator and then sending it to the real robot. We perform 10 trials per object, for a total of 190 grasps. Objects are placed as accurately as possible at $(x_{\mathcal{O}}=0, y_{\mathcal{O}}=0, \varphi_{z,\mathcal{O}}=0)$. As shown in Table~\ref{tab:sim_metric}, our success rate of 70\% is similar to that obtained in simulation, which indicates that the simulation-to-reality transfer works well, at least on average. These results also demonstrate competitive performance with respect to related works (see Section~\ref{sec:related_work}), although comparisons remain difficult to establish because of the distinct hardware setups. We observe that failures are mainly behavioral and geometric. Behavioral failures arise when the simulation does not model the physics correctly. For example, in the real setup, the bowl slides on the table when the gripper closes its fingers, while in simulation, the bowl is simply lifted.
We could reduce these errors by using a more accurate simulator. Geometric failures arise when there is a shift in the location or orientation of the object. Most of the time, the robot either collides with the object or, for smaller objects, misses it. These failures could be avoided using a more precise calibration, additional sensors, or more extensive domain randomization. Finally, the computation time is reasonable (from 5 to 10\,s), but could be decreased by tuning the architecture of the neural network, lowering the number of optimization steps, or using more powerful hardware. We leave this for future work.

\section{Related work}
\label{sec:related_work}

Over the last decade, progress in multi-fingered robotic grasping has been steady \cite{varley2015generating,lu2019modeling,lu2020planning,lundell2020multi, lundell2021ddgc,wu2020generative, merzic2019leveraging}, thanks to differentiable models such as neural networks. Unfortunately, the variety of robotic hands, their actuation modes and sensor inputs, and the lack of standard benchmarks make it difficult to compare these advances fairly against one another.

Among early works, \cite{varley2015generating} identify poses and fingertip positions of stable grasps with a deep neural network from RGB-D images and use a planner (GraspIt!) based on simulated annealing to determine the best hand configurations. They reach a success rate of $75\%$ over a set of 8 objects but suffer from slow execution times (16.6\,s on average). By contrast, our method is faster and reaches comparable performance over a larger set of objects. \cite{lundell2020multi} use generative adversarial networks to sample both grasp poses and finger joints efficiently based on RGB-D images. While fast, their approach reaches only a $60\%$ success rate in real experiments. The work of \cite{lu2019modeling} is the most similar to ours. They perform grasp planning as probabilistic inference via a classifier trained to predict the success of a grasp. They retrieve maximum a posteriori estimates using gradient ascent, exploiting the fact that the classifier is fully differentiable. Their prior distribution is fitted with a Gaussian mixture model on the dataset. In contrast, our method uses an analytical prior based on power-spherical distributions, does not require an external grasp sampler, and relies on a neural classifier to approximate the likelihood-to-evidence ratio. Similarly, \cite{lu2020planning} compute maximum likelihood estimates of the hand configuration by making use of gradients provided by a neural network. Finally, both of these works treat rotations with Euler angles and optimize them as real numbers with boundary constraints.
This representation is not suitable for a neural network, according to \cite{zhou2019continuity}. Instead, our optimization relies on Riemannian conjugate gradients, which preserve the geometrical structure of the rotation group. Other interesting approaches to multi-fingered grasping include the use of deep reinforcement learning based on vision and tactile sensors~\cite{wu2020generative}, or the use of tactile information only for learning a closed-loop controller~\cite{merzic2019leveraging}.

From a statistical perspective, several Bayesian likelihood-free inference algorithms~\cite{marin2012approximate, beaumont2002approximate, Papamakarios2019SequentialNL, SNPEA, SNPEB, APT, pmlr-v119-hermans20a} have been developed to carry out inference when the likelihood function is implicit and intractable. These methods operate by approximating the posterior through rejection sampling or by learning parts of Bayes' rule, such as the likelihood function, the likelihood-to-evidence ratio, or the posterior itself. These algorithms have been used across a wide range of scientific disciplines such as particle physics, neuroscience, biology, or cosmology~\cite{cranmer2020frontier}. To the best of our knowledge, our work is one of the first to apply such methods to the direct planning of successful grasps. More specifically, we rely here on the amortized inference approach of \cite{pmlr-v119-hermans20a} to carry out inference within seconds for any new observation $i$. In contrast, an approach such as ABC \cite{marin2012approximate, beaumont2002approximate} could take up to hours to determine a single hand configuration $\mathbf{h}$, since data would need to be simulated on-the-fly for each observation $i$ due to the lack of amortization in ABC. Neural posterior estimation~\cite{APT} is also amortizable but would have required new methodological developments to be applicable to distributions defined on manifolds, such as those needed here for the rotational part of the pose.

\section{Summary and future work}
\label{sec:summary}

We demonstrate the usefulness and applicability of simulation-based Bayesian inference to multi-fingered grasping. The approach is generic yet powerful because it can work with any simulator, thereby incorporating domain knowledge ranging from the simplest to the most sophisticated, while leveraging recent developments in deep learning to solve the Bayesian inference problem. Maximum a posteriori hand configurations are found by directly optimizing through the resulting amortized and differentiable expression for the posterior. The geometry of the configuration space is accounted for by a Riemannian manifold optimization procedure through the neural posterior. We demonstrate a working proof of concept achieving robust multi-fingered grasping, both in simulation and in real experiments, thanks to domain randomization. Our success rate is comparable to previous works.

\section*{Acknowledgments}
Norman Marlier would like to acknowledge the Belgian Fund for Research training in Industry and Agriculture for its financial support (FRIA grant). Computational resources have been provided by the Consortium des Équipements de Calcul Intensif (CÉCI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No.
2.5020.11 and by the Walloon Region. Gilles Louppe is recipient of the ULiège - NRB Chair on Big data and is thankful for the support of NRB.

\bibliographystyle{unsrt}