diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhvce" "b/data_all_eng_slimpj/shuffled/split2/finalzzhvce"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhvce"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\n\\noindent\nDifferent sports have different game rules, frequency of games, style, and so on, features that make each sport unique and interesting. In particular, how often games are held varies considerably across sports. Baseball, whose professional leagues are very popular in countries such as the United States (Major League Baseball, or MLB), South Korea (Korea Baseball Organization league, or KBO league), and Japan (Nippon Professional Baseball, or NPB), has the feature that each team plays a game nearly every day. This is in contrast to, say, the English Premier League (the soccer league in England), where each team has a game once or twice a week. In particular, in the KBO league, each of the ten teams has a match every day except Mondays. The fact that there is a game nearly every day could be associated with a team's performance in a given game. For instance, a team might be more likely to do well in a game if it had won its five most recent games than if it had lost its three most recent games. In fact, when previewing or predicting a game, many KBO league news articles mention the number of consecutive wins or losses the team currently has.\n\nThis paper focuses on the KBO league, in particular how the consecutive nature of the schedule is related to each team's game outcomes. More specifically, this paper examines, for each of the ten teams in the KBO league, if we were to model the game outcomes (represented as a single sequence) as a $k^{\\text{th}}$ order Markov chain, which value of $k$ is the most effective. In section 2, we introduce the KBO league in general, and in section 3, the higher-order Markov chain model whose possible states are win ($``W\"$), draw ($``D\"$), and loss ($``L\"$), particularly the one used in the \\textsf{markovchain} R package$^{4}$, is discussed. 
Then in sections 4 and 5, we report how we assess the model fit and the actual model results, and lastly in section 6 we discuss conclusions and potential future work.\n\n\\section{KBO League Introduction}\n\\noindent\nIn this section, we introduce the KBO league in general. The KBO league began in 1982 with six teams: Haitai Tigers, Lotte Giants, MBC Blue Dragons, OB Bears, Sammi Superstars, and Samsung Lions$^{1}$. With some of the current teams being successors of those original teams, the KBO league currently has ten teams competing: Doosan Bears, SK Wyverns, Hanwha Eagles, Nexen Heroes, LG Twins, Samsung Lions, Lotte Giants, KIA Tigers, KT Wiz, and NC Dinos. Unlike in the MLB (where team names represent the home location: e.g. New York Yankees), the KBO league team names represent the sponsor corporation. Furthermore, unlike in the MLB, where a game doesn't end as a draw (or a tie) except for exceptional reasons like weather (or sometimes darkness), in a KBO league game, if the two teams have the same score after the 12th inning, the game ends as a draw.\n\nThe league does not have sub-leagues. Rather, the ten teams together compete in the pennant race in such a way that each of the ten teams faces the other nine teams 16 times, eight home games and eight away games, thereby playing a total of 144 games in the regular season. In the post-season, the 5th place team competes in a wild-card round against the 4th place team. In the wild-card round, if the 4th place team wins the first game, then the round is immediately over with the 4th place team going to the semi-playoffs, but if the 5th place team wins the first game, then the two teams compete in a second game whose winner goes to the semi-playoffs. The wild-card round victor faces the 3rd place team in the semi-playoffs with a best-of-five format, and the semi-playoffs victor faces the 2nd place team in the playoffs, also in a best-of-five. 
Finally, the playoffs victor plays against the 1st place team in the final round called the Korean Series with a best-of-seven format. Note that the rules mentioned in this paragraph could change in future seasons (for example, in the 2015 season the total number of games changed from 128 to 144), but at least in the 2018 season, those rules are applied$^{2}$.\n\nTable 1 shows the ranking of the KBO league 2018 as of August 18th, 2018, which is right before the Asian Games break (the KBO league in 2018 has a break of approximately 3 weeks since some of the players go to the Jakarta-Palembang Asian Games 2018 as part of the South Korean national team).\n\n\\begin{table}[htbp]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\nRank & Team & Games & Wins & Draws & Losses & Winning rate & Games behind \\\\ \\hline\n1 & Doosan Bears & 113 & 73 & 0 & 40 & 0.646 & 0.0 \\\\ \\hline\n2 & SK Wyverns & 112 & 62 & 1 & 49 & 0.559 & 10.0 \\\\ \\hline\n3 & Hanwha Eagles & 114 & 62 & 0 & 52 & 0.544 & 11.5 \\\\ \\hline\n4 & Nexen Heroes & 118 & 61 & 0 & 57 & 0.517 & 14.5 \\\\ \\hline\n5 & LG Twins & 116 & 56 & 1 & 59 & 0.487 & 18.0 \\\\ \\hline\n6 & Samsung Lions & 116 & 54 & 3 & 59 & 0.478 & 19.0 \\\\ \\hline\n7 & Lotte Giants & 110 & 51 & 2 & 57 & 0.472 & 19.5 \\\\ \\hline\n8 & KIA Tigers & 110 & 51 & 0 & 59 & 0.464 & 20.5 \\\\ \\hline\n9 & KT Wiz & 113 & 47 & 2 & 64 & 0.423 & 25.0 \\\\ \\hline\n10 & NC Dinos & 116 & 47 & 1 & 68 & 0.409 & 27.0 \\\\ \\hline\n\\end{tabular}\n\t\\caption{KBO League 2018 Rank (as of August 18th, 2018)}\n\t\\label{tab:num1}\n\\end{center}\n\\end{table}\n\n\\section{Higher-Order Markov Chain Model}\n\\noindent\nThe goal of this paper is to use higher-order Markov chains to model the game outcomes for each team in the KBO league. This section introduces the higher-order Markov chain model and parameter estimation methods. 
The notations and formulations of the model discussed in this section follow Chapter 6 in Ching, Huang, Ng, and Siu (2013)$^{3}$, on which the \\textsf{markovchain} R package$^{4}$, which we use for the computation in this study, is based.\n\n\\subsection{First-order Markov Chain}\n\nFirst, we briefly introduce the (discrete time) first-order Markov chain, usually referred to simply as a ``Markov chain\". Let the data sequence $\\{ X^{(i)} \\}_{i=1}^{n} = \\{ X^{(1)}, X^{(2)}, \\cdots, X^{(n)} \\} $ be a stochastic process where each $X^{(i)}$ can take a finite or countable number of possible values. We call such possible values `states'. For example, a stochastic process whose possible states are ``sunny'', ``cloudy'', and ``rainy'' may produce a data sequence of $\\{ \\text{``cloudy''}, \\text{``cloudy''}, \\text{``rainy''}, \\text{``sunny''}, \\text{``rainy''} \\}$. The set of all possible states is called the `state space', which can consist of essentially anything: numbers, letters, weather conditions, baseball game outcomes, etc. In this paper, we let $S$ denote the state space, and let $m$ denote the number of possible states (i.e. $m = |S|$).\n\nThe name ``Markov chain\" comes from the (first-order) Markov property. The property states, or assumes, that the next state $X^{(n)}$ is conditionally independent of all the states so far (i.e. $X^{(n-1)}, X^{(n-2)}, \\cdots, X^{(1)}$) given the current state $X^{(n-1)}$. That is: for any timestep $n$,\n\n\\begin{equation}\nP(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \\cdots, X^{(1)}=x_{1}) = P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1})\n\\end{equation}\nIntuitively, we wander around the states in the state space, and only the most recent past matters for the present.\n\nMoreover, the model assumes that for each pair of states $(i, j)$, there is a fixed transition probability $p_{ij}$, which is the probability that the process moves to state $i$ given that it's currently at state $j$. 
The chain always decides its state at the next timestep according to these transition probabilities, which can be represented as a single $m \\times m$ matrix called the ``transition matrix\". In our notation, the row $i$ \\& column $j$ entry of the transition matrix has $p_{ij}$, the transition probability from state $j$ to state $i$. Intuitively, we can think of each column of the transition matrix representing the ``from\" state, and each row being the ``to\" state. Clearly, each column of the transition matrix must sum to 1.\n\n\\subsection{Higher-order Markov Chain}\n\nIn the first-order Markov chain model, the assumption was that the state at timestep $n$ only depends on the state at the timestep immediately before (i.e. $n-1$) and the more distant past is irrelevant. We can relax the assumption in such a way that the state at a timestep depends on more of the recent past. Formally, a $k^{th}$ order Markov chain assumes that the state at timestep $n$ only depends on the states at the $k$ most recent timesteps (i.e. $n-1, n-2, \\cdots, n-k$). That is: for any timestep $n$:\n\n\\begin{equation}\n\\begin{split}\n& \\quad \\enskip P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \\cdots, X^{(1)}=x_{1}) \\\\\n&= P(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \\cdots, X^{(n-k)}=x_{n-k})\n\\end{split}\n\\end{equation}\nNotice that if we set $k=1$, then the model is equivalent to what was introduced in Section 3.1, and this is why it is called the ``first-order Markov chain\".\n\nFurthermore, the $k^{th}$ order Markov chain model assumes that there is an $m \\times m$ transition matrix $Q^{(l)}$ defined for each lag $l \\in \\{1, \\cdots, k\\}$. The row $i$ \\& column $j$ entry of the $l$-step transition matrix $Q^{(l)}$ has the probability that the process will move to state $i$ after $l$ timesteps given that currently it's at state $j$. Again, clearly it must be true that each column of $Q^{(l)}$ sums to 1, $\\forall l \\in \\{1, \\cdots, k\\}$. 
Also, each lag $l \\in \\{1, \\cdots, k\\}$ has a non-negative weight $\\lambda_{l}$ with: \n\n\\begin{equation}\n\\sum_{l=1}^{k} \\lambda_{l} = 1\n\\end{equation}\n\nThen, the model says:\n\n\\begin{equation}\n\\mathbf{X}^{(n+k+1)} = \\sum_{l=1}^{k} \\lambda_{l} Q^{(l)} \\mathbf{X}^{(n+k+1-l)}\n\\end{equation}\nwhere $\\mathbf{X}^{(n+k+1-l)}$ is an $m \\times 1$ vector that shows the probability distribution of the $m$ states at timestep $n+k+1-l$, which essentially shows, for each state $i$, if we draw this Markov chain process many times, what proportion of those simulations will be at state $i$ at timestep $n+k+1-l$.\n\nEquation (4) can be rewritten as:\n\n\\begin{equation}\nP(X^{(n)} = x_{new} | X^{(n-1)}=x_{n-1}, X^{(n-2)}=x_{n-2}, \\cdots, X^{(n-k)}=x_{n-k}) = \\sum_{l=1}^{k} \\lambda_{l} q_{x_{new}, x_{n-l}}^{(l)}\n\\end{equation}\nwhere $q_{x_{new}, x_{n-l}}^{(l)}$ denotes the row $x_{new}$ \\& column $x_{n-l}$ entry of the matrix $Q^{(l)}$. It can be shown that if $Q^{(l)}$ is irreducible and aperiodic, $\\lambda_{l} > 0$, and $\\sum_{l=1}^{k} \\lambda_{l} = 1$, then this model has a stationary distribution $\\mathbf{X}$ that satisfies $\\Big( \\mathbf{I} - \\sum_{l=1}^{k} \\lambda_{l} Q^{(l)} \\Big) \\mathbf{X} = \\mathbf{0}$ and also $\\text{lim}_{n \\rightarrow \\infty} \\mathbf{X}^{(n)} = \\mathbf{X}$, where $\\mathbf{I}$ denotes the $m \\times m$ identity matrix, and $\\mathbf{0}$ is the length-$m$ vector of all $0$'s.\n\nNow we discuss the methods for estimating the model parameters: $Q^{(l)}$ and $\\lambda_{l}$ for each $l \\in \\{1, \\cdots, k\\}$. Notice that this higher-order Markov chain model has $k + km^{2}$ parameters since each transition matrix $Q^{(l)}$ has $m^{2}$ entries.\n\nAgain, assume we observe a data sequence of length $n$: $\\{ X^{(t)} \\}_{t=1}^{n} = \\{ X^{(1)}, X^{(2)}, \\cdots, X^{(n)} \\} $. 
For every ordered pair of states $(i,j)$, for each lag $l \\in \\{1, \\cdots, k\\}$, we define the transition frequency $f_{ji}^{(l)}$ as the number of times in the given data sequence such that the process is at state $i$ and then after $l$ steps it is at state $j$. Naturally, we can write these altogether in matrix form: we define the $l$-step transition frequency matrix $F^{(l)}$ (of size $m \\times m$) as:\n\n\\begin{equation}\nF^{(l)} = \\begin{bmatrix}\nf_{11}^{(l)} & f_{12}^{(l)} & \\cdots & f_{1m}^{(l)} \\\\\nf_{21}^{(l)} & f_{22}^{(l)} & \\cdots & f_{2m}^{(l)} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\nf_{m1}^{(l)} & f_{m2}^{(l)} & \\cdots & f_{mm}^{(l)} \\\\\n\\end{bmatrix}\n\\end{equation}\nOf course, this matrix is defined for every lag $l \\in \\{1, \\cdots, k\\}$.\n\nThen, for each lag $l \\in \\{1, \\cdots, k\\}$, we can estimate the $l$-step transition matrix $Q^{(l)}$ as:\n\n\\begin{equation}\n\\hat{Q}^{(l)} = \\begin{bmatrix}\n\\hat{q}_{11}^{(l)} & \\hat{q}_{12}^{(l)} & \\cdots & \\hat{q}_{1m}^{(l)} \\\\\n\\hat{q}_{21}^{(l)} & \\hat{q}_{22}^{(l)} & \\cdots & \\hat{q}_{2m}^{(l)} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\hat{q}_{m1}^{(l)} & \\hat{q}_{m2}^{(l)} & \\cdots & \\hat{q}_{mm}^{(l)} \\\\\n\\end{bmatrix}\n\\end{equation}\nwhere\n\n\\begin{equation}\n\\hat{q}_{ij}^{(l)} = \\left\\{\n \\begin{array}{ll}\n \\frac{f_{ij}^{(l)}}{\\sum_{i=1}^{m}f_{ij}^{(l)}} & \\quad \\text{if } \\sum_{i=1}^{m}f_{ij}^{(l)} \\neq 0 \\\\\n 0 & \\quad \\text{otherwise}\n \\end{array}\n \\right.\n\\end{equation}\nNote that $\\hat{q}_{ij}^{(l)} = 0$ if there is no observation such that the process is at state $j$ and then after $l$ steps it is at some state, which happens when state $j$ appears only at the last $l$ timesteps of the observed data sequence.\n\n\nAlso, the stationary distribution $\\mathbf{X}$ can be estimated from the observed data sequence as the proportion of the occurrence of each state in the sequence. 
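As a concrete illustration of Equations (6) to (8), the following Python sketch estimates the $l$-step transition matrices from a sequence of ``W''/``D''/``L'' outcomes. This is an illustration only (the actual computation in this study uses the \textsf{markovchain} R package); the function name and state encoding are our own. Following the convention above, column $j$ is the ``from'' state and row $i$ the ``to'' state, so each nonzero column sums to 1.

```python
def transition_matrices(seq, k, states=("W", "D", "L")):
    """Estimate the l-step transition matrices Q^(l), l = 1..k.

    Entry [i][j] of the returned Q^(l) estimates the probability of
    moving to states[i] after l steps, given the process is currently
    at states[j] (columns are "from" states, so each nonzero column
    sums to 1, matching Equation (8)).
    """
    m = len(states)
    idx = {s: t for t, s in enumerate(states)}
    Q = []
    for l in range(1, k + 1):
        # F[i][j]: number of times state j is followed, l steps later, by state i
        F = [[0] * m for _ in range(m)]
        for t in range(len(seq) - l):
            F[idx[seq[t + l]]][idx[seq[t]]] += 1
        # Column-normalize F to get Q-hat; all-zero columns are left as 0
        Qhat = [[0.0] * m for _ in range(m)]
        for j in range(m):
            col_sum = sum(F[i][j] for i in range(m))
            if col_sum:
                for i in range(m):
                    Qhat[i][j] = F[i][j] / col_sum
        Q.append(Qhat)
    return Q
```

For example, on the short sequence W, W, L, W, L, the lag-1 estimate for the ``from W'' column is $(1/3, 0, 2/3)$, since W is followed once by W and twice by L.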
That is: for each state $i$, our estimate of the corresponding entry in the stationary distribution is just the number of times state $i$ appears in our length-$n$ sequence divided by $n$. Let's denote such an estimate by $\\hat{\\mathbf{X}}$.\n\nGiven the estimated transition matrices $\\hat{Q}^{(1)}, \\cdots, \\hat{Q}^{(k)}$ and the estimated stationary distribution $\\hat{\\mathbf{X}}$, we can estimate the $\\lambda_{l}$ parameters via solving the following linear programming problem:\n\n\\begin{equation}\n\\underset{\\lambda}{\\text{min}} \\sum_{i=1}^{m} w_{i}\n\\end{equation}\n\\text{subject to}\n\n\\begin{equation}\n\\begin{bmatrix}\nw_{1} \\\\ w_{2} \\\\ \\vdots \\\\ w_{m}\n\\end{bmatrix}\n\\ge \\hat{\\mathbf{X}} - \\Big[ \\hat{Q}^{(1)} \\hat{\\mathbf{X}} \\enskip | \\enskip \\hat{Q}^{(2)} \\hat{\\mathbf{X}} \\enskip | \\cdots | \\enskip \\hat{Q}^{(k)} \\hat{\\mathbf{X}} \\Big] \\begin{bmatrix}\n\\lambda_{1} \\\\ \\lambda_{2} \\\\ \\vdots \\\\ \\lambda_{k}\n\\end{bmatrix} ,\n\\end{equation} \n\\begin{equation}\n\\begin{bmatrix}\nw_{1} \\\\ w_{2} \\\\ \\vdots \\\\ w_{m}\n\\end{bmatrix}\n\\ge - \\hat{\\mathbf{X}} + \\Big[ \\hat{Q}^{(1)} \\hat{\\mathbf{X}} \\enskip | \\enskip \\hat{Q}^{(2)} \\hat{\\mathbf{X}} \\enskip | \\cdots | \\enskip \\hat{Q}^{(k)} \\hat{\\mathbf{X}} \\Big] \\begin{bmatrix}\n\\lambda_{1} \\\\ \\lambda_{2} \\\\ \\vdots \\\\ \\lambda_{k}\n\\end{bmatrix} ,\n\\end{equation}\n\\begin{equation}\n\\forall i \\in \\{ 1, \\cdots, m \\}. \\enskip w_{i} \\ge 0,\n\\end{equation}\n\\begin{equation}\n\\forall l \\in \\{ 1, \\cdots, k \\}. 
\\lambda_{l} \\ge 0,\n\\end{equation}\n\\begin{equation}\n\\sum_{l=1}^{k} \\lambda_{l} = 1\n\\end{equation}\n\n\\section{Method for Assessing Model Fit}\n\\noindent\nNow that we know how the model is defined and how the parameters are estimated (in the \\textsf{markovchain} R package$^{4}$), in this section, we introduce how we assess the quality of the model fit, given a fitted higher-order Markov chain model.\n\nFor each of the ten teams in the KBO league, we fit a $k^{th}$ order Markov chain on its data sequence of the outcomes of the 100 most recent games, for $k = 1, \\cdots, 13$. Here, the state space is $\\{ ``W\", ``D\", ``L\" \\}$ where each state (in the listed order) represents win, draw, and loss, respectively. Each fitted object in the \\textsf{markovchain} R package$^{4}$ returns the estimated $\\lambda_{l}$ parameters, the estimated $Q^{(l)}$ matrices, and the estimated stationary distribution $\\mathbf{X}$. We assess which value of $k$ yields the $k^{th}$ order Markov chain model that best describes the team's data sequence via the following procedure. For each team:\n\n\\noindent\\rule{8cm}{0.4pt}\n\n\\begin{algorithmic}\n\\State $tenGames \\gets \\text{Randomly choose 10 out of the 100 games in the team's data sequence}$\n\\For {$k \\text{ in } \\{1, \\cdots, 13 \\}$}\n\\For {$game \\text{ in } tenGames$}\n\\For {$state \\text{ in } \\{ ``W\", ``D\", ``L\" \\}$}\n\\State $p_{state} \\gets P(game=state | \\text{recent $k$ observations})$ computed via Equation (5) \\enskip (We'll get $p_{W}, p_{D}, p_{L}$)\n\\EndFor\n\\State $predict \\gets X \\sim Categorical(p_{W}, p_{D}, p_{L})$\n\\EndFor\n\\State $team\\_k\\_acc \\gets (\\text{number of correct predictions}) \/ 10$\n\\EndFor\n\\end{algorithmic}\n\n\\noindent\\rule{8cm}{0.4pt}\n\nIn words, for each team, we first randomly select 10 games out of the 100 present in the team's sequence. 
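The per-game prediction step in the procedure above can be sketched in Python. This is a sketch under the assumption that the $\lambda_l$'s and $Q^{(l)}$'s have already been estimated (the helper names are our own, not the \textsf{markovchain} package's API); it applies Equation (5) and then draws one Categorical sample as the prediction.

```python
import random

def outcome_probabilities(recent, lambdas, Q, states=("W", "D", "L")):
    """Equation (5): P(next = state) = sum_l lambda_l * q^(l)[state, x_{n-l}].

    `recent` lists the k most recent outcomes, oldest first, so
    recent[-l] is the outcome l timesteps ago.  Q[l-1] is the
    estimated l-step transition matrix with columns as "from" states.
    """
    idx = {s: t for t, s in enumerate(states)}
    probs = []
    for i, _ in enumerate(states):
        p = sum(lam * Q[l][i][idx[recent[-(l + 1)]]]
                for l, lam in enumerate(lambdas))
        probs.append(p)
    return probs

def predict(recent, lambdas, Q, states=("W", "D", "L"), rng=random):
    """Draw one Categorical sample (p_W, p_D, p_L): our predicted outcome."""
    return rng.choices(states, weights=outcome_probabilities(recent, lambdas, Q))[0]
```

Comparing `predict(...)` against the actual recorded outcome for each of the ten sampled games gives the accuracy $team\_k\_acc$ for that $(team, k)$ pair.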
We evaluate every value of $k$ (corresponding to the $k^{th}$ order Markov chain fitted to this team's sequence) as follows: \n\n\\begin{enumerate}\n\\item For each of the ten games, for each of the three possible states, compute the estimated probability that the game's outcome was that particular state given the recent $k$ observations, using Equation (5) and the estimated $\\lambda_{l}$'s and the $Q^{(l)}$'s.\n\\item Then, run a simulation from a Categorical distribution (which is essentially a generalization of the Bernoulli distribution where there can be more than two categories) that has three categories (``W'', ``D'', and ``L'') with the computed probabilities. The sampled outcome is our prediction of this game's result. Compare our prediction with the actual game outcome in the team's data sequence.\n\\item Calculate the prediction accuracy: Out of the ten predictions, how many are correct?\n\\end{enumerate}\n\nAfter this process, for each team, we have the prediction accuracy of each of the 13 values of $k$. We assess the fit of the $k^{th}$ order Markov chain model applied to this team's sequence via how high the prediction accuracy is. That is: for each team, we rank the 13 values of $k$ by how well the $k^{th}$ order Markov chain modeled, or described, the observed length-$100$ sequence of the team.\n\n\n\\section{Results}\n\\noindent\nHere we present the model fit results. For each team, we execute the process described in Section 4 and draw a barplot where each bar on the vertical axis represents a value of $k$, and the horizontal axis, of course, shows the prediction accuracy of the corresponding $k^{th}$ order Markov chain model fitted on that team's sequence. 
The barplots are shown in Figure 1.\n\n\\begin{figure}[h!]\n\\centering\n \\includegraphics[width=16cm, height=20cm]{\"barcharts\".png}\n \\caption{Model Fit Result for Each Team}\n\\end{figure}\n\nIntuitively, if a particular value of $k$ has the highest prediction accuracy compared to the other values, we can think of it as: for predicting this team's performance on an arbitrary day, considering exactly the recent $k$ games' outcomes works the best, compared to any smaller or larger value of $k$. \n\nOne of the most interesting patterns we can see in the plots is that both Doosan Bears and SK Wyverns, which are the 1st and 2nd ranked teams in the league, have a skewed-to-the-right shape (except that the lowest value $k=1$ has a low accuracy), meaning that lower values of $k$ tend to predict better than higher values. In particular, for both teams $k = 2$ has the highest accuracy. This means that taking only the recent two matches into account best describes the team's performance in general. The 3rd ranked team Hanwha Eagles also has a right-skewed shape except that the accuracy rises again around $k = 9$. So overall, for the top 3 teams in the league, the outcomes of the few most recent games (say, one, two, or three) are most strongly associated with the performance in a new game. Considering the fact that these teams all have relatively high winning rates, we could interpret this result as: a characteristic of the top teams is that once they have had a good run over a few recent games in a row, it is likely that they will perform well again. On the other hand, once we additionally incorporate earlier games as well, the prediction tends to become poorer.\n\nThe remaining seven teams (that is: the lower seven) tend to have a reasonably symmetric barplot, with the exception of Lotte Giants (7th ranked), which has a right-skewed shape. 
Such symmetries indicate that these teams do not have a particular value of $k$ for which the $k^{th}$ order Markov chain models their outcomes better than other orders. So whether we consider only the few most recent games or also take outcomes further in the past into account appears to make little difference in predicting the outcome. Perhaps one could think of this characteristic as: for these teams, the performance in recent games (regardless of which value $k$ takes out of $\\{ 1, \\cdots, 13 \\}$) tends not to influence their performance today in the first place.\n\n\n\n\\section{Discussions and Future Work}\n\\noindent\nThrough our results we saw that the top three teams in the KBO league (Doosan, SK, and Hanwha) have a common characteristic: overall, lower values of $k$ tend to yield $k^{th}$ order Markov chains that better model their outcomes. On the other hand, the remaining teams except Lotte have a reasonably symmetric shape in their barplots, meaning there is not really a particular value of $k$ that works considerably better than other values.\n\nHowever, our analysis has limitations, which suggest potential future work that could improve the study. First, all of the interpretations were based on exploratory analyses. We plotted a barchart for each team, visually observing how well each value of $k$ did in terms of its $k^{th}$ order Markov chain predicting the match outcomes and thereby modeling the team's performance. We cannot make any formal conclusions at this point. To do so, we could utilize statistical tests on our data, but unfortunately, the size of our data is currently too small. For example, consider applying some kind of two-sample t-test for comparing the top-half teams and the bottom-half teams: we currently only have sample sizes of $n_{1} = 5$ and $n_{2} = 5$. 
One way of obtaining a larger sample would be to use observations from past years of the KBO league: we go through each year's data, include the top-half ranked teams' sequences in group 1, and the other teams' sequences in group 2. This approach has the drawback that we must assume that observations across different years for the same team are independent (by definition of the t-test). That is: we have to assume that the 2018 edition of the Doosan Bears is independent of the 2017 edition of the Doosan Bears, which, according to common sense, is not really a valid assumption to make. Another way would be to incorporate data from other leagues such as the MLB and the NPB, since those leagues also have the characteristic that each team plays a game nearly every day.\n\n\n\nIn addition, given the task of predicting a team's game outcome, depending solely on the team's recent game results is perhaps an oversimplification of the task. Common sense tells us that there are numerous other factors that affect a team's performance in a game: e.g. the team's winning rate against the opponent in this season, statistics regarding the starting pitchers of both teams, whether it's a home game or an away game, etc. So we could perhaps use a classical regression \/ classification model such as linear regression, support vector machines, deep neural networks, etc., where we include those canonical features and additionally the results of the recent $k$ games as the predictors. Furthermore, if we want to stay with higher-order Markov chains but improve the modeling, we could consider using higher-order multivariate Markov chains, where we are given $s$ separate categorical sequences that all have the same state space, instead of just one. 
The $k^{th}$ order multivariate Markov chain model says that the probability distribution (across the $m$ states) for the $j^{th}$ sequence at an arbitrary timestep depends on the probability distributions of all the $s$ sequences, including its own, at the recent $k$ timesteps. This model is also implemented in the \\textsf{markovchain} R package$^{4}$. In our study, this model could be utilized in such a way that, given an arbitrary baseball game between two teams, the data sequences of both teams (so $s=2$ sequences in total) are incorporated to better model the game result. That is: we consider the recent trend, or flow, of both of the two competing teams.\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nThe knowledge of the dynamical state of galaxy clusters could provide important constraints\non cosmological scenarios. It is widely assumed that the dynamics of clusters is mostly governed by infall models and the\ntheory of caustics (Reg\\'{o}s \\& Geller [1]; van Haarlem et al. [2]; Diaferio \\& Geller [3];\nDiaferio [4]; Rines et al. [5]). If galaxy clusters are formed by hierarchical merging of\ngroups of galaxies, no rotation of a cluster, as a whole, is possible. However, the formation \nof a cluster from a giant primordial gas cloud should not be overlooked. If so, the formed cluster may preserve the rotation of the primordial gas cloud, if it has not afterwards merged with\nother clusters and groups. The problem of possible rotation of galaxy clusters has been\ndiscussed by many (Kalinkov [6], Gregory [7], Gregory \\& Tifft [8], Gregory \\&\nThompson [9], Materne \\& Hopp [10], Materne [11], Williams [12], Oegerle \\& Hill [13],\nSodr\\'{e} et al. [14], Biviano et al. [15], Den Hartog \\& Katgert [16], Dupke \\& Bregman\n[17, 18], Tovmassian [19], Burgett et al. [20], Kalinkov et al. [21], Hwang \\& Lee [22]). Though indications of rotation were found in some clusters\n[10, 16, 20-22], the generally accepted opinion is that galaxy clusters do not rotate. \nThe few rotating clusters were found among arbitrarily selected cluster samples. Den\nHartog \\& Katgert [16] found 13 possibly rotating ones out of 72 studied clusters. Hwang \n\\& Lee [22] detected only 13 tentatively rotating clusters among 899 studied Abell clusters. \n\nTo detect cluster rotation, Den Hartog \\& Katgert [16] plotted the line-of-sight velocity\ndispersion against the projected radial distance of the galaxy from the cluster center.\nHwang \\& Lee [22] fitted the observed radial velocities $v_p$ of the cluster galaxies with a\nfunction of position angle, $v_p(\\theta)$. 
In both methods it was assumed that the observed\ndistance of a galaxy from the cluster center is the real distance, and that the galaxy observed at\nthis position has the velocity corresponding to the rotation model. However, this approach \ncannot be applied to all observed galaxies. Tovmassian \\& Mnatsakanian [23] studied \nthe 3D-distribution of galaxies in clusters and showed that the number of observed\ngalaxies over the cluster area within a certain radius is substantially higher than the number of galaxies\nwithin the sphere of the same radius. For example, the number of galaxies observed over the\nsmall central area of the cluster, with a radius about five times smaller than the cluster Abell radius,\nis $4\\div 5$ times higher than the number of galaxies within the corresponding sphere. Most \nof the observed galaxies here are projected galaxies from the outer spherical shells of the\ncluster. Some of them could be located at the cluster border, and their rotational velocities\nwould be significantly different from the assumed velocities. Also, it was shown [23] that on average about 25\\% of galaxies observed over the cluster are\nprojected galaxies of the cluster environment that have the same velocities as the cluster\nproper members. They will introduce additional errors in the analysis. It follows that the\npossible rotation of the cluster could hardly be revealed by the study of the correlation between\nthe radial velocity dispersion and the projected galaxy position in the cluster.\n\nI propose a simple method for detecting cluster rotation that does not depend on the\nprojected position of galaxies in relation to the rotation axis. First, in order to minimize the\ninfluence of projected environmental galaxies on the results, it is desirable to study the smallest\npossible central area of the cluster. I limited the radius of the studied area so that it contains not \nless than about 20 galaxies. 
Then, I counted galaxies with velocities lower, $V_l$, and\nhigher, $V_h$, than the mean velocity $V^*$ of galaxies in the studied area. If a cluster is\nexperiencing merging, the numbers $n_l$ and $n_h$ of galaxies with velocities lower and\nhigher than the mean velocity $V^*$ will differ substantially from each other. I assume that a \ncluster is in the state of merging if the numbers $n_l$ and $n_h$ differ from each other by more than a factor of 1.2. If a cluster is in dynamical equilibrium,\nthe numbers of galaxies with velocities lower and higher than the cluster mean velocity will be\napproximately the same (the ratio of their numbers will be smaller than 1.2) in any half of the\ncluster image. In a rotating cluster the assumed rotation axis will pass through or be located\nclose to the adopted cluster center. The numbers of galaxies on the two sides of the rotation axis\nwill be about the same (the ratio of numbers <1.2), but galaxies on one side of the rotation \naxis will have velocities higher than the mean velocity of galaxies in the studied area, and\ngalaxies on the other side will have velocities lower than the mean velocity. However, due to\ninteractions between close neighbors, especially in the central dense regions, there could be\nmember galaxies with non-rotational velocities. As a result, the regularities that are\ncharacteristic of a rotating cluster would be somewhat degraded. Environmental galaxies\nprojected over the cluster and interlopers will further mask the effect of rotation. Anyhow, in a\nrotating cluster the majority of galaxies on one side of the rotation axis will move in the direction\nopposite to that of the majority of galaxies on the other side. We assume\nthat the cluster is rotating if the galaxies that rotate in the cluster constitute more\nthan 60\\% of all galaxies on each side of the rotation axis. 
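The counting criteria just described can be summarized in a short Python sketch. This is a schematic illustration with hypothetical inputs, not the code used in the paper: it takes the radial velocities of galaxies already split by the two sides of a candidate rotation axis, and applies the 1.2 merging threshold and the 60\% rotation threshold from the text (the axis search itself and the membership selection are omitted).

```python
def classify(velocities_by_side):
    """Classify a cluster from galaxy radial velocities on the two sides
    of a candidate rotation axis.

    `velocities_by_side` is a pair (left, right) of lists of radial
    velocities in the studied central area.  Merging: n_l and n_h
    (counts below/above the mean velocity) differ by more than a
    factor of 1.2.  Rotating: on each side, more than 60% of galaxies
    move in that side's dominant direction, and the two dominant
    directions are opposite.
    """
    left, right = velocities_by_side
    all_v = left + right
    v_mean = sum(all_v) / len(all_v)
    n_l = sum(v < v_mean for v in all_v)
    n_h = len(all_v) - n_l
    if n_l > 1.2 * n_h or n_h > 1.2 * n_l:
        return "merging"
    # Fraction of galaxies below the mean velocity on each side
    frac_low = [sum(v < v_mean for v in side) / len(side)
                for side in (left, right)]
    if ((frac_low[0] > 0.6 and frac_low[1] < 0.4) or
            (frac_low[0] < 0.4 and frac_low[1] > 0.6)):
        return "rotating"
    return "not rotating"
```

For instance, if nearly all galaxies on one side are blueshifted relative to the cluster mean and nearly all on the other side are redshifted, the sketch reports "rotating".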
Hence, by analyzing the distribution \nof galaxies with velocities lower and higher than the cluster mean velocity it is possible to\ndetect rotating clusters. \n\nIn this paper I checked the proposed method for detecting cluster rotation by using four\nsamples of clusters. The first sample consists of highly flattened ACO [24] clusters from Struble and Ftaclas [25] with $f=a\/b$ exceeding 1.8. Here $a$\nand $b$ are the cluster major and minor axes, respectively. The high flatness could be evidence\nof the merging of two clusters. Clusters of high flatness could also be rotating. For comparison I\nused the sample of round clusters with $f<1.2$ [25]. The probability of\ndetecting rotation in such clusters is presumably smaller than in flat ones.\n\nI also used the samples of BMI and non-BMI clusters introduced by Tovmassian \\&\nAndernach [26], who compared the Abell number count $N_A$ of clusters hosting a cD\ngalaxy (type I clusters according to Bautz \\& Morgan [27]), their\nvelocity dispersion $\\sigma_v$, the peculiar velocity of the cD galaxy, and the cluster X-ray\nbrightness with the absolute $K_{s-total}$ magnitude of the cD galaxy, and divided the clusters \ninto two types. Clusters in which the $K_{s-total}$ 2MASS magnitude of the cD galaxy is\nmore than $1^m$ brighter than that of the second-brightest galaxy in the cluster were\nclassified as BMI. Clusters in which the second-brightest galaxy\nis fainter than the cD galaxy by less than $0.7^m$ were classified as non-BMI (NBMI)\ntype. Tovmassian \\& Andernach [26] suggested that clusters of BMI and NBMI types have\ndifferent evolution histories. Clusters of BMI type evolved preferentially without merging with\nother clusters. Meanwhile, clusters of NBMI type experienced mergers in their history. 
Therefore,\none could expect that rotating clusters could be found among BMI clusters, whereas NBMI clusters will hardly be rotating.\n\n\\section{Analysis and Results}\n\n\\subsection{Data}\n\nFor our study we used redshifts of cluster members from SDSS-DR9 [28], which\nprovides uniform coverage of radial velocities of member galaxies in the whole target area.\nPositions of clusters and their redshifts are taken from NED. The galaxies with radial velocities\nwithin $\\pm1500$ km s$^{-1}$ of the cluster mean velocity were selected as cluster\nmembers. In order to minimize the influence of projected environmental galaxies, I collected\ndata for the smallest possible central area of the cluster. Depending on the richness of the cluster,\nthe counts were made within areas with radii from 0.25 to 0.75 of the Abell radius, \n$R_A$\\footnote{$R_A $=1.7\/$z$ ~arcmin [29]} \nof the cluster. For reliability of the results, the size of the studied area was chosen so\nas to contain at least 20 galaxies with known redshifts. The compiled lists of the flat and round\nclusters contain 18 and 13 clusters respectively. The lists of BMI and NBMI clusters consist of\n20 and 16 clusters respectively. The cluster A1663 is included in two samples: the round\nclusters and the BMI-type clusters. The cluster A2147 is also included in two samples: the\nflat and the BMI samples. Hence, the total number of studied clusters is 65.\n\n\n\\subsection{Merging clusters}\n\n\nIn some clusters of all four studied samples the ratio $n_l$\/$n_h$ of the numbers of galaxies with\nvelocities lower and higher than the mean velocity of galaxies in the studied area exceeds 1.2\nor is smaller than 0.8. We assume that these clusters are in a state of merging. The results \nof the counts in merging clusters of all four samples are presented in Table 1. In the first column \nof Table 1 the designation of the cluster is presented. 
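A minimal sketch of the membership cut and the angular Abell radius used above; the $\pm1500$ km s$^{-1}$ window and $R_A = 1.7\/z$ arcmin are taken from the text, while the function names are illustrative assumptions.

```python
def abell_radius_arcmin(z):
    """Angular Abell radius, R_A = 1.7 / z arcmin, for a cluster at redshift z
    (the footnote's formula)."""
    return 1.7 / z


def select_members(velocities, v_cluster, window=1500.0):
    """Keep galaxies whose radial velocities (km/s) lie within
    +/- `window` km/s of the cluster mean velocity, as in the text."""
    return [v for v in velocities if abs(v - v_cluster) <= window]
```

For instance, a cluster at $z=0.1$ has $R_A = 17$ arcmin, and a galaxy offset by 1501 km s$^{-1}$ from the cluster mean is rejected.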
In the second column the size of the studied\narea as a fraction of the Abell radius $R_A$ is shown. The numbers $n_l$ and $n_h$ are presented\nin columns 3 and 4 respectively. In the last column the ratio of the numbers $n_l$\/$n_h$ is\ngiven.\n\n\\subsection{Non-rotating clusters}\t\t\n\nIn some other clusters of all four samples the ratio of the numbers of galaxies moving towards\nthe observer and in the opposite direction is within $0.8\\div 1.2$. These clusters are not \nexperiencing merging with other clusters. In all of them it was not possible to determine\na dividing line that could serve as a rotation axis. Hence, these clusters are not rotating. Their list\nis presented in Table 2, analogous to Table 1.\n\n\n\n\\subsection{Rotating clusters}\n\nIn the rest of the studied clusters of all four samples the ratios of the numbers of galaxies\nmoving towards the observer and in the opposite direction are, as in the previous group,\nwithin $0.8\\div 1.2$. For these clusters possible rotation axes are determined. The number\nof galaxies on the two sides of the assumed rotation axis in the studied area of\neach cluster is about the same, the difference being less than 20\\%. The majority of galaxies\non one side of these clusters move in one direction, and the majority of galaxies on the other\nside in the opposite direction, which is evidence of cluster rotation. The number of galaxies with\nrotational motion in these clusters is $1.5\\div3.3$ times (median 2.2) higher than the\nnumber of other galaxies observed in the cluster area. Therefore, we conclude that these\nclusters are rotating. Examples of maps of rotating clusters are presented in Figures 1 and 2 (flat clusters),\nFigure 3 (round cluster), and Figures 4 and 5 (clusters of BMI type). The results of the counts for rotating clusters are presented in Table 3. 
In consecutive\ncolumns of Table 3 the following data are presented: column 1 - the Abell designation of the\ncluster; column 2 - the designation of the half of the cluster area for which the information is\npresented (W-West, E-East, NE-North-East, etc.); columns 3 and 4 - the numbers of galaxies\nwith velocities respectively lower and higher than the mean velocity of galaxies in the studied\narea of the cluster. In the upper and lower lines for each cluster the numbers of galaxies in the\ncorresponding areas are presented. \n\n\n\\section{Conclusions} \n\nA simple method for the detection of rotating clusters is proposed. The essence of the method is\nthe counting of galaxies with velocities lower and higher than the cluster mean velocity in different\nhalves of the cluster. The method does not depend on the distance of member galaxies along\nthe line of sight within the cluster, which affects other methods of searching for cluster rotation. The\napplied simple method allowed us to detect 17 rotating clusters among the 65 studied, i.e. more than\na quarter of the studied clusters are rotating. Note that rotation may not be detected for\nclusters whose rotation axis is oriented close to the line of sight. The detection rate of rotating\nclusters is much higher than in other attempts to find rotation (e.g. [16-18; 20-22]). \n\nThe detection rate is especially high for flat clusters with $f=a\/b>1.8$, which were assumed\nto be rotating. In seven out of 18 flat clusters the numbers of galaxies moving in opposite\ndirections significantly differ from each other. Most probably they are two clusters in a state \nof merging. Of the remaining 11 genuinely flat clusters, seven, i.e. about 64\\%, are rotating.\nMeanwhile, only two rotating clusters, $\\approx 15$\\%, are found among the 13 round clusters.\nThe rate of rotating clusters is also very high among clusters of BMI type, in which the cD galaxy\nis brighter than the second-brightest galaxy by more than 1 magnitude. 
These clusters\npreferentially did not experience merging in their lifetime, as suggested by Tovmassian \\&\nAndernach [26]. Seven out of the studied 20 BMI clusters, i.e. 35\\%, are found to be rotating.\nOnly one rotating cluster, i.e. $\\approx 6$\\%, was found among the 16 NBMI clusters, which\nmost probably have experienced mergers in the past [26]. \nThe single rotating NBMI cluster is A2147, which is also included in the sample of flat clusters. The high percentage of rotating clusters among clusters of BMI type proves that they\nare indeed systems that have not experienced mergers and have preserved the rotation of the\nprimordial gas clouds from which they were formed. \n\nThe high rate of rotating clusters found supports the view that clusters were originally formed\nin rotating primordial gas clouds. Most of them then became richer as a result of\nhierarchical assembly of other groups and clusters of galaxies and, as a result, lost their rotation.\n\n\n\\section{Acknowledgements}\n\nThis research has made use of the NASA\/IPAC Extragalactic Database (NED) which \nis operated by the Jet Propulsion Laboratory, California Institute of Technology, under \ncontract with the National Aeronautics and Space Administration. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nWe consider\nthe Euler system related to an incompressible inviscid fluid with constant\ndensity, namely\n\\begin{equation}\n\\label{E}\n \\left\\{ \n\\begin{array}{ll} \n\\partial_t u+u\\cdot\\nabla u+\\nabla P=0,\\qquad x\\in \\mathbb R^d, t>0, \\\\\n\\nabla.u=0,\\\\\nu_{\\mid t=0}= u_{0}.\n\\end{array} \\right. \n \\end{equation}\n Here, the vector field \\mbox{$u=(u_1,u_2,...,u_d)$} is a function of \\mbox{$(t,x)\\in \\mathbb R_+\\times\\mathbb R^d$} denoting the velocity of the fluid and the scalar function \n\\mbox{$P$} stands for the pressure.\nThe second equation of the system, \\mbox{$\\nabla.u=0$}, is the \n condition of incompressibility. \nMathematically, it guarantees the preservation of the Lebesgue measure by the particle-trajectory mapping (the classical flow associated to the \nvelocity vector field).\nIt is worth noting that the pressure can be recovered from the velocity via an explicit Calder\\'on-Zygmund type operator (see \\cite{Ch1} for instance).\n\nThe question of local well-posedness of \\eqref{E} with smooth data was resolved by many authors in different spaces (see for instance \\cite{Ch1,Maj}). In this context, the vorticity \\mbox{$\\omega={\\rm curl}\\, u$} plays a fundamental role. In fact, the well-known BKM criterion \\cite{Beale} ensures that the development of finite-time singularities for these solutions is related to the blow-up of the \\mbox{$L^\\infty$} norm of the vorticity near the maximal existence time. A direct consequence of this result is the global well-posedness of the two-dimensional Euler equations with smooth initial data, since the vorticity satisfies\nthe transport equation\n\\begin{equation}\n\\label{tourbillon}\n\\partial_t\\omega+(u \\cdot \\nabla)\\omega=0,\n\\end{equation}\nand then all its \\mbox{$L^p$} norms are conserved. 
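The conservation of the $L^p$ norms is the standard computation below, sketched here under the assumption that $\omega$ is smooth and decays at infinity, with $1<p<\infty$:

```latex
\frac{d}{dt}\int_{\mathbb R^2}|\omega|^p\,dx
  = -p\int_{\mathbb R^2}|\omega|^{p-2}\omega\,(u\cdot\nabla)\omega\,dx
  = -\int_{\mathbb R^2}u\cdot\nabla\big(|\omega|^p\big)\,dx
  = \int_{\mathbb R^2}(\nabla\cdot u)\,|\omega|^p\,dx = 0,
```

since $\nabla\cdot u=0$, so that $\|\omega(t)\|_{L^p}=\|\omega_0\|_{L^p}$ for every $t\geq 0$.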
\n\nAnother class of solutions requiring lower regularity on the velocity can be considered: the weak solutions (see for instance \\cite[Chap 4]{lions1}). They solve a\nweak form of the equation in the distribution sense, placing the equations in large\nspaces and using duality. The divergence form of the Euler equations allows one to put all the derivatives on the test functions and so to obtain\n$$\n\\int_0^\\infty\\int_{{\\mathbb R}^d}(\\partial_t\\varphi+(u \\cdot \\nabla)\\varphi).u\\,dxdt+\\int_{{\\mathbb R}^d}\\varphi(0,x)u_0(x)\\,dx=0,\n$$\nfor all \\mbox{$\\varphi\\in C^\\infty_0({\\mathbb R}_+\\times{\\mathbb R}^d, {\\mathbb R}^d)$} with \\mbox{$ \\nabla.\\varphi=0$}. In two dimensions, when the regularity is sufficient to give a sense to the Biot-Savart law, one can consider an alternative weak formulation: the vorticity-stream weak formulation. It consists in solving the weak form of \\eqref{tourbillon} supplemented with the Biot-Savart law:\n\\begin{equation}\n\\label{bs}\nu=K\\ast\\omega,\\quad \\hbox{with}\\quad K(x)=\\frac{x^\\perp}{2\\pi|x|^2}.\n\\end{equation}\nIn this case, \\mbox{$(v,\\omega)$} is a weak solution to the vorticity-stream formulation of the 2D Euler equation with initial data \\mbox{$\\omega_0$} if \\eqref{bs} is satisfied and \n$$\n\\int_0^\\infty\\int_{{\\mathbb R}^2}(\\partial_t\\varphi+u.\\nabla\\varphi)\\omega(t,x) dxdt+\\int_{{\\mathbb R}^2}\\varphi(0,x)\\omega_0(x)dx=0,\n$$\nfor all \\mbox{$\\varphi\\in C^\\infty_0({\\mathbb R}_+\\times{\\mathbb R}^2,{\\mathbb R})$}.\n\nThe questions of existence\/uniqueness of weak solutions have been extensively studied and a detailed\naccount can be found in the books \\cite{Ch1, Maj, lions1}. We emphasize that, unlike the fixed-point argument, the compactness method does not guarantee the uniqueness of the solutions, and then the two issues (existence\/uniqueness) are usually dealt with separately. 
These questions were originally addressed by Yudovich in \\cite{Y1}, where the existence and uniqueness of weak solutions to the 2D Euler system (in a bounded domain) are proved under the assumptions \\mbox{$u_0\\in L^2$} and \\mbox{$\\omega_0\\in L^\\infty$}. \nSerfati \\cite{Ser} proved the uniqueness and existence of a solution with initial velocity and vorticity which are only bounded (without any integrability condition). There is an extensive literature on the existence of weak solutions to the Euler system, possibly without uniqueness, with unbounded vorticity. DiPerna-Majda \\cite{DM} proved the existence of weak solutions for \\mbox{$\\omega_0\\in L^1\\cap L^p$} with \\mbox{$2
0$}, \\mbox{$\\lambda B$} denotes the ball that is concentric with \\mbox{$B$} and whose radius is \\mbox{$\\lambda$} times the radius of \\mbox{$B$}.\n\\end{Defin}\nWe recall that \n$$\n\\|f\\|_{{\\rm BMO}}:=\\sup_{{\\rm ball}\\,\\, B}\\av_{B}|f-\\av_{B}(f)|.\n$$\nIt is worth noting that if \\mbox{$B_2$} and \\mbox{$B_1$} are two balls such that \\mbox{$2B_2\\subset B_1$} then\\footnote{ Throughout this paper the notation \\mbox{$A \\lesssim B$} means that there exists a positive universal constant \\mbox{$C$} such that \\mbox{$A\\le CB$}. }\n\\begin{equation}\n\\label{22}\n{|\\av_{B_2}(f)-\\av_{B_1}(f)|} \\lesssim {\\ln(1+\\frac{r_1}{r_2})} \\|f\\|_{{\\rm BMO}}.\n\\end{equation}\nIn the definition of \\mbox{${{\\rm LBMO}}$} we replace the term \\mbox{$\\ln(1+\\frac{r_1}{r_2})$} by \\mbox{$\\ln\\big(\\frac{ 1-\\ln r_2 }{1-\\ln r_1 }\\big)$}, which is smaller. This puts more constraints on the functions belonging to this space\\footnote{ Here, we identify all functions whose difference is a constant. In section 2, we will prove that \\mbox{${{\\rm LBMO}}$} is complete and strictly intermediate between \\mbox{${\\rm BMO}$} and \\mbox{$L^\\infty$}. {The \"$L$\" in \\mbox{${{\\rm LBMO}}$} stands for \"logarithmic\".}\n} and allows us to derive a crucial property on the composition of such functions with Lebesgue-measure-preserving homeomorphisms, which is the heart of our analysis.\n\n\n\n The following statement is the main result of the paper.\n\\begin{Theo} \n\\label{main}Assume \\mbox{$\\omega_0\\in L^p\\cap {{\\rm LBMO}}$} with \\mbox{$p\\in ]1,2[$}. Then there exists a unique global weak solution \\mbox{$(v,\\omega)$} to the vorticity-stream formulation of the 2D Euler equation. 
Besides, there exists a constant \\mbox{$C_0$}, depending only on the \\mbox{$L^p\\cap {{\\rm LBMO}}$}-norm of \\mbox{$\\omega_0$}, such that\n\\begin{equation}\n\\label{bound}\n\\|\\omega(t)\\|_{ L^p\\cap {{\\rm LBMO}} }\\leq C_0\\exp({C_0t}),\\qquad\\forall\\, t\\in{\\mathbb R}_{+}.\n\\end{equation}\n\\end{Theo}\nSome remarks are in order.\n\\begin{rema} {\\rm The proof gives more, namely \\mbox{$\n\\omega\\in \\mathcal C({\\mathbb R}_+, L^q)$} for all \\mbox{$p\\leq q<\\infty$}. \nCombined with the Biot-Savart law\\footnote{If \\mbox{$\\omega_0\\in L^p$} with \\mbox{$p\\in ]1,2[$} then a classical Hardy-Littlewood-Sobolev inequality gives \\mbox{$u\\in L^q$} with \\mbox{$\\frac1q=\\frac1p-\\frac12$}. } this yields\n\\mbox{$\nu\\in \\mathcal C({\\mathbb R}_+, W^{1,r})\\cap \\mathcal C({\\mathbb R}_+, L^\\infty)$} for all \\mbox{$\\frac{2p}{2-p}\\leq r<\\infty$}.\n}\n\\end{rema}\n\n\n\n\n\\begin{rema}\n\\label{r2}\n{\\rm The essential point of Theorem \\ref{main} is that it provides an initial space which is strictly larger than \\mbox{$L^p\\cap L^\\infty$} (it contains unbounded elements) and which is a space of existence, uniqueness and persistence of regularity at once. We emphasize that the bound \\eqref{bound} is crucial since it implies that \\mbox{$u$} is, uniformly in time, \\mbox{$\\log$}-Lipschitzian, which is the main ingredient for the uniqueness. Once this bound is established, the uniqueness follows from the work of Vishik \\cite{Vishik1}. In that paper Vishik also gave a result of existence (possibly without regularity persistence) in some large space characterized by the growth of the partial sums of the \\mbox{$L^\\infty$}-norms of its dyadic blocks. \n We should also mention the uniqueness result of Yudovich \\cite{Y2}, which establishes uniqueness (in a bounded domain) for some space which contains unbounded functions. 
Note also that the example of an unbounded function given in \\cite{Y2} actually belongs to the space \\mbox{${{\\rm LBMO}}$} (see Proposition \\ref{pro3} below). Our approach is different from those in \\cite{Vishik1} and \\cite{Y2} and uses classical harmonic analysis ``\\`a la Stein\" without making appeal to Fourier analysis (para-differential calculus). }\n \\end{rema}\n \\begin{rema} {\\rm The main ingredient of the proof of \\eqref{bound} is a logarithmic estimate in the space \\mbox{$L^p\\cap {{\\rm LBMO}}$} (see Theorem \\ref{decom} below). It would be desirable to prove this result for \\mbox{${\\rm BMO}$} instead of \\mbox{${{\\rm LBMO}}$}. \n Unfortunately, as proved in \\cite{BK}, the corresponding estimate for \\mbox{${\\rm BMO}$} is optimal (with the bi-Lipschitzian norm instead of the \\mbox{$\\log$}-Lipschitzian norm of the homeomorphism) and so the argument presented here does not seem to be extendable to \\mbox{${\\rm BMO}$}. }\n\\end{rema}\n\n\nThe remainder of this paper is organized as follows. In the next two sections we introduce some functional spaces and prove a logarithmic estimate which is crucial to the proof of Theorem \\ref{main}. The fourth and last section is dedicated to the proof of Theorem \\ref{main}. \n\n\n \\section{Functional spaces}\n \n Let us first recall that the set of \\mbox{$\\log$}-Lipschitzian vector fields on ${\\mathbb R}^2$, denoted by $LL$, is the\nset of bounded vector fields $v$ such that\n $$\n \\|v\\|_{LL}:=\\sup_{x\\neq y}\\frac{|v(x)-v(y)|}{|x-y|\\big(1+\\big|\\ln|x-y|\\big|\\big)}<\\infty.\n $$\n The importance of this notion lies in the fact that if the vorticity belongs to a Yudovich-type space (say \\mbox{$L^1\\cap L^\\infty$}) then the velocity is no longer Lipschitzian, but \\mbox{$\\log$}-Lipschitzian. In this case we still have existence and uniqueness of the flow, but a \nloss of regularity may occur. 
Actually, this loss of regularity is unavoidable and its degree is \n related to the norm \\mbox{$L^1_t(LL)$} of the velocity. The reader is referred to section 3.3 in \\cite{bah-ch-dan} for more details about this issue.\n \n To capture this behavior, and \n overcome the difficulty generated by it, we introduce the following definition. \n \\begin{Defin} For every homeomorphism \\mbox{$\\psi$}, we set\n $$\n \\|\\psi\\|_*:=\\sup_{x\\neq y}\\Phi\\big(|\\psi(x)-\\psi(y)|, |x-y|\\big),\n $$\n where \\mbox{$\\Phi$} is defined on \\mbox{$]0,+\\infty[\\times]0,+\\infty[$} by\n \\begin{equation*}\n\\Phi(r,s)=\\left\\{ \n\\begin{array}{ll} \n\\max\\{\\frac{1+|\\ln s| }{ 1+|\\ln r| };\\frac{ 1+|\\ln r| }{1+|\\ln s| }\\},\\quad {\\rm if}\\quad (1-s)(1-r)\\geq 0, \\\\\n{(1+|\\ln s|) }{ (1+|\\ln r|) },\\quad {\\rm if}\\quad (1-s)(1-r)\\leq 0.\n\\end{array} \\right. \n \\end{equation*}\n \\end{Defin}\nSince \\mbox{$\\Phi$} is symmetric, \\mbox{$\\|\\psi\\|_*=\\|\\psi^{-1}\\|_*\\geq 1$}. It is also clear that every homeomorphism \\mbox{$\\psi$} satisfying\n$$\n\\frac{1}C|x-y|^\\alpha\\leq |\\psi(x)-\\psi(y)|\\leq C|x-y|^\\beta,\n$$\nfor some \\mbox{$\\alpha,\\beta,C>0$} has \\mbox{$\\|\\psi\\|_*$} finite (see Proposition \\ref{p1} for a reciprocal property). 
\n\nThe definition above is motivated by the following proposition (and by Theorem \\ref{decom} below as well).\n\\begin{Prop} \\label{prop} Let \\mbox{$u$} be a smooth divergence-free vector field and \\mbox{$\\psi$} be its flow:\n$$\n\\partial_t{\\psi}(t,x)=u(t,\\psi(t,x)),\\qquad {\\psi}(0,x)=x.\n$$\nThen, for every \\mbox{$t\\geq 0$},\n$$\n\\|\\psi(t,\\cdot)\\|_*\\leq \\exp(\\int_0^t\\|u(\\tau)\\|_{LL}d\\tau).\n$$\n\\end{Prop}\n\\begin{proof} It is well-known that for every \\mbox{$t\\geq 0$} the mapping \\mbox{$ x\\mapsto \\psi(t,x)$} is a Lebesgue-measure-preserving homeomorphism (see \\cite{Ch1} for instance). We fix \\mbox{$t\\geq 0$} and \\mbox{$x\\neq y$} and set \n$$\nz(t)=|\\psi(t,x)-\\psi(t,y)|.\n$$\nClearly the function \\mbox{$z$} is strictly positive and satisfies \n$$\n|\\dot{z}(t)|\\leq \\|u(t)\\|_{LL}(1+|\\ln z(t)|)z(t).\n$$ \nAccordingly, we infer\n$$\n|g(z(t))-g(z(0))|\\leq \\int_0^t\\|u(\\tau)\\|_{LL}d\\tau\n$$\nwhere \n\\begin{equation*}\ng(\\tau):=\\left\\{ \n\\begin{array}{ll} \n\\ln(1+\\ln(\\tau)),\\quad {\\rm if}\\quad \\tau\\geq 1, \\\\\n-\\ln(1-\\ln(\\tau)),\\quad {\\rm if}\\quad 0<\\tau<1.\n\\end{array} \\right. \n \\end{equation*}\n This yields in particular that\n \\mbox{$\n \\frac{\\exp(g(z(t)))}{\\exp(g(z(0)))}$} and \\mbox{$\\frac{\\exp(g(z(0)))}{\\exp(g(z(t)))}$} are both controlled by \\mbox{$\\exp(\\int_0^t\\|u(\\tau)\\|_{LL}d\\tau)$}, leading to \n $$\n \\Phi(z(t), z(0))\\leq \\exp(\\int_0^t\\|u(\\tau)\\|_{LL}d\\tau),\n $$\n as claimed. \n\\end{proof}\nThe following proposition follows directly from the definition by a straightforward computation.\n\\begin{Prop} \n\\label{p1}\nLet \\mbox{$\\psi$} be a homeomorphism with \\mbox{$\\|\\psi\\|_{*}<\\infty$}. 
Then for every \\mbox{$(x,y)\\in \\mathbb R^2\\times\\mathbb R^2$} one has\n\\begin{enumerate}\n\\item If \\mbox{$|x-y|\\geq 1$} and \\mbox{$|\\psi(x)-\\psi(y)|\\geq 1$}\n$$\ne^{-1}|x-y|^{\\frac{1}{\\|\\psi\\|_*}}\\leq |\\psi(x)-\\psi(y)|\\leq e^{\\|\\psi\\|_*}|x-y|^{\\|\\psi\\|_*}.\n$$\n\\item If \\mbox{$|x-y|\\leq 1$} and \\mbox{$|\\psi(x)-\\psi(y)|\\leq 1$}\n$$\ne^{-\\|\\psi\\|_*}|x-y|^{{\\|\\psi\\|_*}}\\leq |\\psi(x)-\\psi(y)|\\leq e |x-y|^{\\frac{1}{\\|\\psi\\|_*}}.\n$$\n\\item In the other cases \n$$\ne^{-\\|\\psi\\|_*}|x-y|\\leq |\\psi(x)-\\psi(y)|\\leq e^{\\|\\psi\\|_*}|x-y|.\n$$\n\\end{enumerate}\n\\end{Prop}\nAs an application we obtain the following useful lemma.\n\\begin{Lemm}\n\\label{g}\n For every \\mbox{$r>0$} and a homeomorphism \\mbox{$\\psi$} one has\n$$4\\psi(B(x_0,r))\\subset B(\\psi(x_0), g_\\psi(r)),\n$$ \nwhere\\footnote{This notation means that for every ball \\mbox{$B\\subset\\psi(B(x_0,r))$} we have \\mbox{$4B \\subset B(\\psi(x_0), g_\\psi(r))$}.},\n\\begin{equation*}\ng_\\psi(r):=\\left\\{ \n\\begin{array}{ll} 4e^{ \\|\\psi\\|_{*}}r^{ \\|\\psi\\|_{*}},\\quad {\\rm if}\\quad r\\geq 1, \\\\\n4\\max\\{ er^{\\frac{1}{\\|\\psi\\|_{*}}}; e^{\\|\\psi\\|_{*}}r\\}, \\quad {\\rm if}\\quad 00$} then \n$$\n\\sup(\\beta, \\frac1\\beta)\\leq \\alpha^\\gamma \\Longleftrightarrow |\\ln(\\beta)|\\leq\\gamma \\ln(\\alpha).\n$$\n\\mbox{$\\bullet$} If \\mbox{$r\\geq 1$} then \n$$\n1\\leq \\frac{ 1+|\\ln g_\\psi(r)| }{1+|\\ln r|}=\\frac{ 1+\\ln4+\\|\\psi\\|_{*}+\\ln r }{1+\\ln r}\\leq 3+\\|\\psi\\|_{*}.\n$$\n\\mbox{$\\bullet$} If \\mbox{$r< 1$} then we have to deal with two possible values of \\mbox{$g_\\psi(r)$}.\n\n\\underline{\\it Case 1:} If \\mbox{$g_\\psi(r)=4er^{\\frac{1}{\\|\\psi\\|_{*}}}$} then\n$$\n|\\ln g_\\psi(r)| =|\\ln 4+1+\\|\\psi\\|_{*}^{-1}\\ln(r)|.\n$$\nSince 
\\mbox{$\\|\\psi\\|_{*}\\geq 1$}, we get\n$$\n\\frac{ 1+|\\ln g_\\psi(r)| }{1+|\\ln r|}\\leq \\frac{ 3+\\frac1{ \\|\\psi\\|_{*} }|\\ln r| }{1+|\\ln r|}\\leq \\frac{ 3+|\\ln r| }{1+|\\ln r|}\\leq 3.\n$$\nTo estimate \\mbox{$\\frac{1+|\\ln r|}{ 1+|\\ln g_\\psi(r)| }$} we consider two possibilities.\n\n- If \\mbox{$|\\ln(r)|\\leq 8 \\|\\psi\\|_{*}$} then \n$$\n\\frac{1+|\\ln r|}{ 1+|\\ln g_\\psi(r)| }\\leq 1+|\\ln r|\\leq 1+8 \\|\\psi\\|_{*}.\n$$\n- If \\mbox{$|\\ln(r)|\\geq 8 \\|\\psi\\|_{*}$} then \n$$ \n|\\ln(4)+1+\\|\\psi\\|_{*}^{-1}\\ln(r)|\\geq \\frac12 \\|\\psi\\|_{*}^{-1} |\\ln(r)|,\n$$\nand so \n$$\n\\frac{1+|\\ln r|}{ 1+|\\ln g_\\psi(r)| }\\leq \\frac{1+|\\ln r|}{1+\\frac12 \\|\\psi\\|_{*}^{-1} |\\ln(r)|}\\leq 2(1+\\|\\psi\\|_{*}).\n$$\n\\underline{\\it Case 2:} If \\mbox{$g_\\psi(r)=4e^{\\|\\psi\\|_{*}}r$} then\n$$\n|\\ln g_\\psi(r)| =|\\ln 4+\\|\\psi\\|_{*}+\\ln(r)|.\n$$\nThus,\n$$\n\\frac{ 1+|\\ln g_\\psi(r)| }{1+|\\ln r|}\\leq \\frac{ 3+\\|\\psi\\|_{*} +|\\ln r| }{1+|\\ln r|}\\leq 3+\\|\\psi\\|_{*}.\n$$\nAs previously, for estimating \\mbox{$\\frac{1+|\\ln r|}{ 1+|\\ln g_\\psi(r)| }$}, we consider two possibilities.\n\n- If \\mbox{$|\\ln(r)|\\leq 2(\\ln(4)+ \\|\\psi\\|_{*})$} then \n$$\n\\frac{1+|\\ln r|}{ 1+|\\ln g_\\psi(r)| }\\leq 1+|\\ln r|\\leq 5+2 \\|\\psi\\|_{*}.\n$$\n- If 
\\mbox{$|\\ln(r)|\\geq 2(\\ln 4+ \\|\\psi\\|_{*})$} then \\mbox{$|\\ln(4)+\\|\\psi\\|_{*}+\\ln r|\\geq \\frac12|\\ln(r)|$}\nand so \n$$\n\\frac{1+|\\ln r|}{ 1+|\\ln g_\\psi(r)| } \\leq \\frac{1+|\\ln r|}{1+\\frac12|\\ln(r)|}\\leq 2.\n$$\n\\end{proof}\n\n\\begin{rema}\n\\label{sss} The estimate \\eqref{ss} remains valid when we multiply \\mbox{$g_\\psi(r)$} by any positive constant. \n\\end{rema}\n\n\\section{ The \\mbox{${{\\rm LBMO}}$} space}\nLet us now detail some properties of the space \\mbox{${{\\rm LBMO}}$} introduced in the first section of this paper.\n\\begin{Prop} \n\\label{pro3}\nThe following properties hold true.\\\\\n{\\rm (i)} The space \\mbox{${{\\rm LBMO}}$} is a Banach space included in \\mbox{${\\rm BMO}$} and strictly containing \\mbox{$L^\\infty({\\mathbb R}^2)$}.\\\\\n{\\rm (ii)} For every \\mbox{$g\\in \\mathcal C^\\infty_0({\\mathbb R}^2)$} and \\mbox{$f\\in {{\\rm LBMO}}$} one has\n\\begin{equation}\n\\| g\\ast f\\|_{{{\\rm LBMO}}}\\leq \\|g\\|_{L^1}\\| f\\|_{{{\\rm LBMO}}}.\n\\label{eq:comp} \\end{equation}\n\\end{Prop}\n\\begin{proof}\n(i) Completeness of the space. Let \\mbox{$(f_n)_n$} be a Cauchy sequence in \\mbox{${{\\rm LBMO}}$}. Since \\mbox{${\\rm BMO}$} is complete, this sequence converges in \\mbox{${\\rm BMO}$} and hence in \\mbox{$L^1_{\\text{loc}}$}. \nUsing the definition and the convergence in \\mbox{$L^1_{\\text{loc}}$}, we get that the convergence holds in \\mbox{${{\\rm LBMO}}$}.\n\n It remains to check that \\mbox{$L^\\infty \\subsetneq {{\\rm LBMO}}$}. Since \\mbox{$L^\\infty$} is obviously embedded into \\mbox{${{\\rm LBMO}}$}, we have just to build an unbounded function belonging to \\mbox{${{\\rm LBMO}}$}. 
Take \n\\begin{equation*}\nf(x)=\\left\\{ \n\\begin{array}{ll} \\ln(1-\\ln|x|) \\qquad {\\rm if}\\quad |x|\\leq 1\\\\\n0,\\qquad \\qquad {\\rm if}\\quad |x|\\geq 1.\n\\end{array} \\right. \n \\end{equation*}\n \nIt is clear that both \\mbox{$f$} and \\mbox{$\\nabla f$} belong to \\mbox{$L^2({\\mathbb R}^2)$}, meaning that \\mbox{$f\\in H^1({\\mathbb R}^2)\\subset {\\rm BMO}$}.\n\n\nBefore going further, three preliminary remarks are necessary. \n\n\\mbox{$\\bullet$} Since \\mbox{$f$} is radially symmetric and decreasing, for every \\mbox{$r>0$} the mapping \\mbox{$x\\mapsto \\av_{B(x,r)}f$} is radial and decreasing.\n\n\\mbox{$\\bullet$} For the same reasons the mapping \\mbox{$r\\mapsto \\av_{B(0,r)}(f)$} is decreasing.\n\n\\mbox{$\\bullet$} Take \\mbox{$(r,\\rho) \\in ]0,+\\infty[^2$} and consider the problem of maximizing\n \\mbox{$\\av_{B(x_1,r)}(f)-\\av_{B(x_2,r)}(f)$} when \\mbox{$|x_1-x_2|=\\rho$}. The convexity of \\mbox{$f$} implies that \n\\mbox{$x_1=0$} and \\mbox{$|x_2|=\\rho$} are solutions of this problem.\n\n\\\n\nWe fix \\mbox{$r_1$} and \\mbox{$r_2$} such that \\mbox{$r_1\\leq 1$} and \\mbox{$2r_2\\leq r_1$}.\nFor every \\mbox{$x_1\\in{\\mathbb R}^2$} one defines \\mbox{$\\tilde x_1$} and \\mbox{$\\hat x_1$} as follows:\n\\begin{equation*}\n\\tilde x_1=\\left\\{ \n\\begin{array}{ll} x_1(1-\\frac{r_2+r_1}{|x_1|}) \\qquad {\\rm if}\\quad |x_1|\\geq r_2+r_1\\\\\n0,\\qquad \\qquad {\\rm if}\\quad |x_1|\\leq r_2+r_1,\n\\end{array} \\right. \n \\end{equation*}\nand\n\\begin{equation*}\n\\hat x_1=\\left\\{ \n\\begin{array}{ll} x_1(1+\\frac{r_2+r_1}{|x_1|}) \\qquad {\\rm if}\\quad |x_1|\\neq 0\\\\\n({r_2+r_1},0)\\qquad \\qquad {\\rm if}\\quad |x_1|=0.\n\\end{array} \\right. \n \\end{equation*}\nLet\n\\mbox{$A(x_1)$} be the set of admissible \\mbox{$x_2$}: the set of \\mbox{$x_2$} such that \\mbox{$2B(x_2,r_2)\\subset B(x_1,r_1)$}. 
\nUsing the two preliminary remarks above, we see that \n$$\n\\sup_{ x_2\\in A(x_1)}|\\av_{B(x_2,r_2)}(f)-\\av_{B(x_1,r_1)}(f)|\\leq \\max\\{J_{1},J_{2}\\},\n$$\nwith\n\\begin{eqnarray*}\nJ_{1}&=&\\av_{B(\\tilde x_1,r_2)}(f)-\\av_{B(x_1,r_1)}(f),\n\\\\\nJ_{2}&=&\\av_{B(x_1,r_1)}(f)-\\av_{B(\\hat{x}_1,r_2)}(f).\n\\end{eqnarray*} \nIn fact, if \\mbox{$\\av_{B(x_2,r_2)}(f)-\\av_{B(x_1,r_1)}(f)$} is positive (resp. negative) then it is obviously dominated by \\mbox{$J_{1}$} (resp. \\mbox{$J_{2}$}).\nThus, we obtain\n$$\n\\sup_{ x_2\\in A(x_1)}|\\av_{B(x_2,r_2)}(f)-\\av_{B(x_1,r_1)}(f)|\\leq J_{1}+J_{2}= \\av_{B(\\tilde x_1,r_2)}(f)-\\av_{B(\\hat{x}_1,r_2)}(f).\n$$\nThe right-hand side is maximal in the configuration where \\mbox{$\\tilde x_1=0$} and \\mbox{$\\hat{x}_1$} is the furthest away from \\mbox{$0$}.\nThis means \n\\mbox{$|x_1|=r_1+r_2$}, \\mbox{$\\tilde x_1=0$} and \\mbox{$|\\hat{x}_1|=2(r_1+r_2)$}.\n\nSince \\mbox{$f$} is increasing towards the origin, we have\n$$\n \\av_{B(\\hat{x}_1,r_2)}(f)\\geq f(4r_1).\n $$\n Finally, we get for all \\mbox{$x_1\\in\\mathbb R^2$} and \\mbox{$ x_2\\in A(x_1)$}\n \\begin{eqnarray*}\n|\\av_{B(x_2,r_2)}(f)-\\av_{B(x_1,r_1)}(f)|\\leq \\av_{B(0,r_2)}(f)- f(4r_1).\n\\end{eqnarray*}\nNow it is easy to see that\n$$\nf(4r_1)= \\ln(1-\\ln(r_1))+ {\\mathcal O}(1),\n$$\nand (with an integration by parts)\n\\begin{eqnarray*}\n\\av_{B(0,r_2)}(f) \n& =& \\ln(1-\\ln(r_2)) + \\frac{1}{r_2^2}\\int_0^{r_2} \\frac{1}{1-\\ln(\\rho)} \\rho d\\rho \n\\\\\n& =& \\ln(1-\\ln(r_2)) + {\\mathcal O}(1).\n\\end{eqnarray*}\nThis yields\n$$\n|\\av_{B(x_2,r_2)}(f)-\\av_{B(x_1,r_1)}(f)|\\leq \\ln\\Big(\\frac{1-\\ln(r_2)}{1-\\ln(r_1)}\\Big)+ {\\mathcal O}(1),\n$$\nas desired.\n\n\\\n\n(ii) Stability by convolution. 
(\\ref{eq:comp}) follows from the fact that for all \\mbox{$r>0$} \n$$\nx\\mapsto\\av_{B(x,r)}(g\\ast f)=(g\\ast\\av_{B(\\cdot,r)}(f))(x).\n$$\n\n\\end{proof}\nThe advantage of using the space \\mbox{${{\\rm LBMO}}$} lies in the following logarithmic estimate, which is the main ingredient in proving Theorem \\ref{main}.\n\\begin{Theo}\n\\label{decom}\nThere exists a universal constant \\mbox{$C>0$} such that \n$$\n\\|f{\\rm o}\\psi\\|_{{{\\rm LBMO}}\\cap L^p}\\leq C\\ln(1+\\|\\psi\\|_*)\\|f\\|_{{{\\rm LBMO}}\\cap L^p},\n$$\nfor any Lebesgue-measure-preserving homeomorphism \\mbox{$\\psi$}.\n\\end{Theo}\n\\begin{proof}[Proof of Theorem \\ref{decom}] \nOf course we are concerned with $\\psi$ such that $\\|\\psi\\|_*$ is finite (if not, the inequality is empty).\n Without loss of generality one can assume that \\mbox{$\\|f\\|_{{{\\rm LBMO}}\\cap L^p}=1$}. Since \\mbox{$\\psi$} preserves the Lebesgue measure, the \\mbox{$L^p$}-part of the norm is conserved. For the two other parts of the norm, we will proceed in two steps. In the first step we consider the \\mbox{${\\rm BMO}$} term of the norm and in the second one we deal with the other term. \n\\subsection*{ Step 1} Let \\mbox{$B=B(x_0,r)$} be a given ball of \\mbox{${\\mathbb R}^2$}. \nBy using the \\mbox{$L^p$}-norm we need only to deal with balls whose radius is smaller than a universal constant \\mbox{$\\delta_0$} (we want \\mbox{$r$} to be small with respect to the constants appearing in the Whitney covering lemma below). Since \\mbox{$\\psi$} is a Lebesgue-measure-preserving homeomorphism, \\mbox{$\\psi(B)$} is an open connected\\footnote{We have also that \\mbox{$ \\psi(B)^C=\\psi(B^C)$} and \\mbox{$\\psi(\\partial B)=\\partial(\\psi(B)).$} } set with \\mbox{$|\\psi(B)|=|B|$}. 
By the Whitney covering lemma, there exists a collection of balls \\mbox{$(O_j)_j$} such that: \n\n- The collection of doubled balls is a bounded covering:\n$$\n\\psi(B)\\subset \\bigcup 2O_j.\n$$\n\n- The collection is disjoint and, for all \\mbox{$j$}, \n$$\nO_j\\subset \\psi(B).\n$$\n\n- The Whitney property is verified:\n$$\nr_{O_j}\\simeq d(O_j, \\psi(B)^c).\n$$\n\n\n\\\n\n\\mbox{$\\bullet$} {\\it Case 1}: \\mbox{$r\\leq\\frac14 e^{-\\|\\psi\\|_*}$}. In this case\n$$\ng_\\psi(r)\\leq 1.\n$$\nWe set \\mbox{$\\tilde B:= B(\\psi(x_0), g_\\psi(r))$}. \nSince \\mbox{$\\psi$} preserves the Lebesgue measure we get\n\n\\begin{eqnarray*}\n \\av_{B}|f{\\rm o}\\psi- \\av_{B}(f{\\rm o}\\psi)|&=&\\av_{\\psi(B)}|f- \\av_{\\psi(B)}(f)|\n\\\\\n&\\leq & 2 \\av_{\\psi(B)}|f- \\av_{\\tilde B}(f)|.\n\\end{eqnarray*}\nUsing the notations above \n\\begin{eqnarray*}\n\\av_{\\psi(B)}|f- \\av_{\\tilde B}(f)|&\\lesssim & \\frac{1}{|B|}\\sum_j |O_j|\\av_{2O_j}\\big|f- \\av_{\\tilde B}(f)\\big|\n\\\\\n&\\lesssim & I_1+I_2,\n\\end{eqnarray*}\nwith\n\\begin{eqnarray*}\nI_1&=& \\frac{1}{|B|}\\sum_j |O_j|\\av_{2O_j}\\big |f- \\av_{2O_j}(f)\\big |\\\\\nI_2&= & \\frac{1}{|B|}\\sum_j |O_j|\\big |\\av_{2O_j}(f)- \\av_{\\tilde B}(f)\\big |.\n\\end{eqnarray*}\nOn one hand, since \\mbox{$\\sum|O_j|\\leq |B|$} then\n\\begin{eqnarray*}\nI_1&\\leq& \\frac{1}{|B|}\\sum_j |O_j|\\|f\\|_{{\\rm BMO}}\n\\\\\n&\\leq & \\|f\\|_{{\\rm BMO}}.\n\\end{eqnarray*}\nOn the other hand, since \\mbox{$4O_j\\subset \\tilde B$} (remember Lemma \\ref{g}) and \\mbox{$r_{\\tilde B}\\leq 1$}, it ensues that\n\\begin{eqnarray*}\nI_2&\\lesssim& \\frac{1}{|B|}\\sum_j |O_j|\\big(1+\\ln\\Big(\\frac{1-\\ln 2r_j}{ 1-\\ln g_\\psi(r) } \\Big)\\big)\n\\\\\n&\\lesssim& \\frac{1}{|B|}\\sum_j |O_j|(1+\\ln\\big(\\frac{1-\\ln r_j}{ 1-\\ln g_\\psi(r) } \\big)).\n\\end{eqnarray*}\nThanks to \\eqref{ss} we 
get\n\\begin{eqnarray}\n\\nonumber\n\\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{1-\\ln\\hspace{-3.8mm}{}^+ r_j}{ 1-\\ln\\hspace{-3.8mm}{}^+ g_\\psi(r) } \\Big)&\\leq& \\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{1-\\ln\\hspace{-3.8mm}{}^+ r_j}{ 1-\\ln\\hspace{-3.8mm}{}^+ r } \\Big)+\\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{1-\\ln\\hspace{-3.8mm}{}^+ r}{ 1-\\ln\\hspace{-3.8mm}{}^+ g_\\psi(r) } \\Big)\n\\\\\n\\label{s}\n&\\lesssim& 1+ \\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{1-\\ln\\hspace{-3.8mm}{}^+ r_j}{ 1-\\ln\\hspace{-3.8mm}{}^+ r } \\Big)+\\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi\\|_{*}).\n\\end{eqnarray}\nThus it remains to prove that\n\\begin{eqnarray}\n\\label{ef}\n II:=\\frac{1}{|B|}\\sum_j |O_j|(1+\\ln\\hspace{-3.8mm}{}^+\\big(\\frac{1-\\ln\\hspace{-3.8mm}{}^+ r_j} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\big))\\lesssim 1+\\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi\\|_{*}).\n\\end{eqnarray}\nFor every \\mbox{$k\\in\\mathbb N$} we set \n$$\nu_k:=\\sum_{e^{-(k+1)}r< r_j\\leq e^{-k}r}|O_j|,\n$$\nso that\n\\begin{eqnarray}\n\\label{eff}\nII\\leq \\frac{1}{|B|}\\sum_{k\\geq 0} u_k\\big(1+\\ln\\hspace{-3.8mm}{}^+\\big(\\frac{k+2-\\ln\\hspace{-3.8mm}{}^+ r} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\big)\\big).\n\\end{eqnarray}\n\nWe need the following lemma.\n\\begin{Lemm} \n\\label{equivalence}\nThere exists a universal constant \\mbox{$C>0$} such that\n$$\nu_k\\leq Ce^{-\\frac{k}{\\|\\psi\\|_*}}r^{1+\\frac{1}{\\|\\psi\\|_*}},\n$$\nfor every \\mbox{$k\\in\\mathbb N$}.\n\\end{Lemm}\n\\begin{proof}[Proof of Lemma \\ref{equivalence}]\nIf we denote by \\mbox{$C\\geq 1$} the implicit constant appearing in Whitney Lemma, then \n$$\nu_k\\leq |\\{ y\\in \\psi(B): d(y, \\psi(B)^c)\\leq Ce^{-k}r\\}|.\n$$\nThe preservation of Lebesgue measure by \\mbox{$\\psi$} yields\n$$\n |\\{ y\\in \\psi(B): d(y, \\psi(B)^c)\\leq Ce^{-k}r\\}|=|\\{ x\\in B: d(\\psi(x), \\psi(B)^c)\\leq Ce^{-k}r\\}|,\n$$\n\nSince \\mbox{$ \\psi(B)^c=\\psi(B^c)$} then\n$$\nu_k\\leq |\\{ x\\in B: d(\\psi(x), \\psi(B^c))\\leq Ce^{-k}r\\}|.\n$$\nWe set\n 
$$D_k=\\{ x\\in B: d(\\psi(x), \\psi(B^c))\\leq Ce^{-k}r\\}.\n $$\nSince \\mbox{$\\psi(\\partial B)$} is the boundary of \\mbox{$\\psi(B)$} and \\mbox{$d(\\psi(x), \\psi(B^c))=d(\\psi(x), \\partial \\psi(B))$} then\n$$\nD_k\\subset \\{ x\\in B: \\exists y\\in \\partial B \\;{\\rm with}\\; |\\psi(x)- \\psi(y)|\\leq Ce^{-k}r\\}.\n$$\nThe condition on \\mbox{$\\delta_0$} is just to ensure that \\mbox{$Cr\\leq 1$} for all \\mbox{$r\\leq\\delta_0$}.\nIn this case Proposition \\ref{p1} gives\n$$\nD_k\\subset \\{ x\\in B: \\exists y\\in \\partial B: |x- y|\\leq Ce^{1-\\frac{k}{\\|\\psi\\|_*}}r^{\\frac{1}{\\|\\psi\\|_*}}\\}.\n$$\nThus, \\mbox{$D_k$} is contained in the annulus \\mbox{$\\mathcal A=\\{ x\\in B: d(x,\\partial B) \\leq Ce^{1-\\frac{k}{\\|\\psi\\|_*}}r^{\\frac{1}{\\|\\psi\\|_*}}\\}$} and so \n$$\nu_k\\leq |D_k|\\lesssim e^{-\\frac{k}{\\|\\psi\\|_*}}r^{1+\\frac{1}{\\|\\psi\\|_*}},\n$$\nas claimed.\n\\end{proof}\n\nWe now come back to \\eqref{eff}. Let \\mbox{$N$} be a large integer to be chosen later. 
We split the sum in the right-hand side of \\eqref{eff} into two parts\n$$\nII\\lesssim \\sum_{k\\leq N}(...)+\\sum_{k> N}(.....):=II_{1}+II_{2}.\n$$\nSince \\mbox{$\\sum u_k\\leq |B|$} then\n\\begin{eqnarray}\n\\label{ff}\nII_{1}\\leq 1+\\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{N+2-\\ln\\hspace{-3.8mm}{}^+ r} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\Big).\n\\end{eqnarray}\nOn the other hand\n$$\nII_{2}\\leq \\sum_{k> N}e^{-\\frac{k}{\\|\\psi\\|_*}}r^{\\frac{1}{\\|\\psi\\|_*}-1}(1+\\ln\\hspace{-3.8mm}{}^+\\big(\\frac{k+2-\\ln\\hspace{-3.8mm}{}^+ r} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\big)).\n$$\nThe parameter \\mbox{$N$} will be taken bigger than \\mbox{$\\|\\psi\\|_*$} so that the function in \\mbox{$k$} inside the sum is decreasing and an easy comparison with an integral yields\n\\begin{eqnarray}\n\\label{fff}\n II_{2}\\lesssim e^{-\\frac{N}{\\|\\psi\\|_*}}\\|\\psi\\|_*^2r^{\\frac{1}{\\|\\psi\\|_*}-1}\\big(1+\\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{N+2-\\ln\\hspace{-3.8mm}{}^+ r} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\Big)\\big).\n\\end{eqnarray}\n\n\nPutting \\eqref{ff} and \\eqref{fff} together we obtain\n\\begin{eqnarray*}\nII\\lesssim \\big(1+ e^{-\\frac{N}{\\|\\psi\\|_*}}\\|\\psi\\|_*^2r^{\\frac{1}{\\|\\psi\\|_*}-1}\\big)\\big(1+\\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{N+2-\\ln\\hspace{-3.8mm}{}^+ r} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\Big)\\big).\n\\end{eqnarray*}\nTaking \\mbox{$N= [\\|\\psi\\|_*(\\|\\psi\\|_*-\\ln\\hspace{-3.8mm}{}^+ r)]+1$} \n\\begin{eqnarray*}\nII&\\lesssim& (1+ e^{-\\|\\psi\\|_*}\\|\\psi\\|_*^2r^{\\frac{1}{\\|\\psi\\|_*}})\\big(1+\\ln\\hspace{-3.8mm}{}^+\\big(\\frac{\\|\\psi\\|_*(\\|\\psi\\|_*-\\ln\\hspace{-3.8mm}{}^+ r)+2-\\ln\\hspace{-3.8mm}{}^+ r} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\big)\\big).\n\\\\\n&\\lesssim& 1+\\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi\\|_*),\n\\end{eqnarray*}\nwhere we have used the fact that \\mbox{$r\\leq 1$} and the obvious inequality 
\n$$\n\\frac{\\|\\psi\\|_*(\\|\\psi\\|_*-\\ln\\hspace{-3.8mm}{}^+ r)+2-\\ln\\hspace{-3.8mm}{}^+ r} { 1-\\ln\\hspace{-3.8mm}{}^+ r }\\lesssim (1+\\|\\psi\\|_*)^2.\n$$\nThis ends the proof of \\eqref{ef}.\n\n \\mbox{$\\bullet$} {\\it Case 2:} \\mbox{$\\delta_0\\geq r \\geq \\frac14 e^{-\\|\\psi\\|_*}$.} In this case \n $$\n |\\ln\\hspace{-3.8mm}{}^+ r|\\lesssim \\|\\psi\\|_*.\n $$\n Since \\mbox{$\\psi$} preserves Lebesgue measure, we get\n\\begin{eqnarray*}\nI&:=&\\av_{B}|f{\\rm o}\\psi- \\av_{B}(f{\\rm o}\\psi)|\n\\\\\n&\\leq & 2 \\av_{\\psi(B)}|f|.\n\\end{eqnarray*}\nLet \\mbox{$\\tilde O_j$} denote the ball which is concentric to \\mbox{$O_j$} and whose radius is equal to \\mbox{$1$} (we use the same Whitney covering as above). Without loss of generality we can assume \\mbox{$\\delta_0\\leq\\frac14$}. This guarantees \\mbox{$4O_j\\subset\\tilde O_j$} and yields by definition\n\\begin{eqnarray*}\nI&\\lesssim & \\frac{1}{|B|}\\sum_j |O_j|\\av_{2O_j}|f-\\av_{\\tilde O_j}(f)|+ \\frac{1}{|B|}\\sum_j |O_j| |\\av_{\\tilde O_j}(f)|\n\\\\\n&\\lesssim& \\frac{1}{|B|}\\sum_j |O_j|\\Big(1+\\ln\\hspace{-3.8mm}{}^+\\big({1-\\ln\\hspace{-3.8mm}{}^+ 2r_j} \\big)\\Big)\\|f\\|_{{{\\rm LBMO}}}+\\frac{1}{|B|}\\sum_j |O_j| \\|f\\|_{L^p}\n\\\\\n&\\lesssim& \n 1+ \\frac{1}{|B|}\\sum_j |O_j|\\big(1+\\ln\\hspace{-3.8mm}{}^+\\big({1-\\ln\\hspace{-3.8mm}{}^+ r_j} \\big)\\big).\n\\end{eqnarray*}\nAs before one writes\n\\begin{eqnarray*}\nI&\\lesssim& \\frac{1}{|B|}\\sum_{k\\geq 0} u_k\\big(1+\\ln\\hspace{-3.8mm}{}^+\\big(k+2-\\ln\\hspace{-3.8mm}{}^+ r\\big)\\big)\n\\\\\n&\\lesssim&1+\\ln\\hspace{-3.8mm}{}^+\\big(N+2-\\ln\\hspace{-3.8mm}{}^+ r\\big)+ e^{-\\frac{N}{\\|\\psi\\|_*}} \\|\\psi\\|_*^2 r^{\\frac{1}{\\|\\psi\\|_*}-1}\\big(1+\\ln\\hspace{-3.8mm}{}^+\\big(N+2-\\ln\\hspace{-3.8mm}{}^+ r)\\big).\n\\end{eqnarray*}\nTaking \\mbox{$N=[ \\|\\psi\\|_*(\\|\\psi\\|_*-\\ln\\hspace{-3.8mm}{}^+ r)]+1$} and using the fact that \\mbox{$|\\ln\\hspace{-3.8mm}{}^+ r|\\lesssim \\|\\psi\\|_*$} leads to 
the desired result.\n\nThe outcome of this first step of the proof is\n$$\n\\|f{\\rm o}\\psi\\|_{{\\rm BMO}\\cap L^p}\\lesssim\\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi\\|_*)\\|f\\|_{{{\\rm LBMO}}\\cap L^p}.\n$$\n\\subsection*{ Step 2} This step of the proof deals with the second term in the \\mbox{${{\\rm LBMO}}$}-norm. It is shorter than the first step because it makes use of the arguments developed above.\nTake \\mbox{$B_2=B(x_2,r_2)$} and \\mbox{$B_1=B(x_1,r_1)$} in \\mbox{${\\mathbb R}^2$} with \\mbox{$r_1\\leq 1$} and \\mbox{$2B_2\\subset B_1$}.\nThere are three cases to consider.\n\n\n\n\n\\mbox{$\\bullet$} {\\it Case 1:} \\mbox{$ r_1\\lesssim e^{-\\|\\psi\\|_*}$} (so that \\mbox{$g_\\psi(r_2)\\leq g_\\psi(r_1) \\leq \\frac12$}).\n\nWe set \\mbox{$\\tilde B_i:= B(\\psi(x_i), g_\\psi(r_i)), i=1,2$} and\n$$\nJ:=\\frac{|\\av_{B_2}(f{\\rm o}\\psi)-\\av_{B_1}(f{\\rm o}\\psi)|}{1+ \\ln\\hspace{-3.8mm}{}^+\\big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ r_2 }{1-\\ln\\hspace{-3.8mm}{}^+ r_1}\\big)}.\n$$\nSince the denominator is bigger than \\mbox{$1$} one gets\n$$\nJ\\leq J_{1}+J_{2}+J_3,\n$$\nwith \n\\begin{eqnarray*}\nJ_{1}&=& |\\av_{\\psi(B_2)}(f)-\\av_{\\tilde B_2}(f)|+ |\\av_{\\psi(B_1)}(f)-\\av_{\\tilde B_1}(f)| \\\\\nJ_{2}&=&\\frac{|\\av_{\\tilde B_2}(f)-\\av_{2\\tilde B_1}(f)|}{1+ \\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ r_2 }{1-\\ln\\hspace{-3.8mm}{}^+ r_1}\\Big)}\n \\\\\nJ_{3}&=&|\\av_{\\tilde B_1}(f)-\\av_{2\\tilde B_1}(f)|.\n\\end{eqnarray*}\nSince \\mbox{$2\\tilde B_2\\subset 2\\tilde B_1$} and \\mbox{$r_{2\\tilde B_1}\\leq1$} then\n$$\nJ_{2}\\leq \\frac{1+ \\ln\\hspace{-3.8mm}{}^+\\big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ g_\\psi(r_2) }{1-\\ln\\hspace{-3.8mm}{}^+(2g_\\psi(r_1))}\\big)}{1+ \\ln\\hspace{-3.8mm}{}^+\\big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ r_2 }{1-\\ln\\hspace{-3.8mm}{}^+ r_1}\\big)}\\|f\\|_{{{\\rm LBMO}}}.\n$$\nUsing a similar argument to \\eqref{s} (and remembering Remark \\ref{sss}) we infer\n\\begin{eqnarray*}\n 
\\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ g_\\psi(r_2) }{1-\\ln\\hspace{-3.8mm}{}^+ (2g_\\psi(r_1)) }\\Big)\n\\lesssim 1+\\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi\\|_{*})+\\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ r_2 }{1-\\ln\\hspace{-3.8mm}{}^+ r_1}\\Big).\n\\end{eqnarray*}\nThus,\n$$\nJ_{2}\\lesssim1+\\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi\\|_{*}).\n$$\nThe estimation \\eqref{22} yields\n$$\nJ_3\\lesssim \\|f\\|_{{\\rm BMO}}.\n$$ \nThe term \\mbox{$J_{1}$} can be handled exactly as in the analysis of \\mbox{case 1} of \\mbox{step 1}. \n\n\\\n\n\n\n\\mbox{$\\bullet$} {\\it Case 2:} \\mbox{$e^{-\\|\\psi\\|_*}\\lesssim r_2$}. In this case we write\n$$\nJ\\leq \\av_{\\psi(B_2)}|f|+\\av_{\\psi(B_1)}|f|.\n$$\nBoth terms can be handled as in the analysis of \\mbox{case 2} of the proof of \\mbox{${\\rm BMO}$}-part in \\mbox{step 1.} \n\n\\mbox{$\\bullet$} {\\it Case 3:} \\mbox{$r_2\\lesssim e^{-\\|\\psi\\|_*}$} and \\mbox{$r_1\\gtrsim e^{-\\|\\psi\\|_*} $}. Again since the denominator is bigger than \\mbox{$1$} we get\n$$\nJ\\leq \\av_{\\psi(B_2)}|f- \\av_{\\tilde B_2}(f) |+\\frac{|\\av_{\\tilde B_2}(f)|}{1+\\ln\\hspace{-3.8mm}{}^+\\big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ r_2 }{1-\\ln\\hspace{-3.8mm}{}^+ r_1}\\big)} +\\av_{\\psi(B_1)}|f|=J_{1}+J_{2}+J_3.\n$$\nThe terms \\mbox{$J_{1}$} and \\mbox{$J_3$} can be controlled as before. 
The second term is controlled as follows (we introduce the average over \\mbox{$B(\\psi(x_2),1)$} and use Lemma \\ref{g} with \\mbox{$\\|f\\|_{L^p}\\leq 1$})\n\\begin{eqnarray*}\nJ_{2}&\\leq& \\frac{1+ \\ln\\hspace{-3.8mm}{}^+(1-\\ln\\hspace{-3.8mm}{}^+ r_2) }{1+ \\ln\\hspace{-3.8mm}{}^+\\Big(\\frac{ 1-\\ln\\hspace{-3.8mm}{}^+ r_2 }{1-\\ln\\hspace{-3.8mm}{}^+ r_1} \\Big)} \n\\\\\n&\\leq& {1+ \\ln\\hspace{-3.8mm}{}^+(1+|\\ln\\hspace{-3.8mm}{}^+ r_1|) }\n \\\\\n&\\leq& {1+ \\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi\\|_*) }.\n\\end{eqnarray*}\n\\end{proof}\n\n\\section{Proof of Theorem \\ref{main}}\nThe proof falls naturally into three parts.\n\\subsection{{ A priori} estimates}\nThe following estimates follow directly from Proposition \\ref{prop} and Theorem \\ref{decom}.\n \\begin{Prop}\n \\label{apriori} Let \\mbox{$u$} be a smooth solution of \\eqref{E} and \\mbox{$\\omega$} its vorticity. Then, there exists a constant \\mbox{$C_0$} depending only on the \\mbox{$L^p\\cap {{\\rm LBMO}}$} norm of \\mbox{$\\omega_0$} such that\n $$\n \\|u(t)\\|_{LL}+\\|\\omega(t)\\|_{{{\\rm LBMO}}}\\leq C_0\\exp{(C_0t)},\n $$\n for every \\mbox{$t\\geq 0$}.\n \\end{Prop}\n \\begin{proof} One has \\mbox{$\\omega(t,x) =\\omega_0(\\psi_t^{-1}(x))$} where \\mbox{$\\psi_t$} is the flow associated to the velocity \\mbox{$u$}. Since \\mbox{$u$} is smooth then \\mbox{$\\psi_t^{\\pm 1}$} is Lipschitzian for every \\mbox{$t\\geq 0$}. This implies in particular that\n \\mbox{$\\|\\psi_t^{\\pm 1}\\|_*$} is finite for every \\mbox{$t\\geq 0$}. 
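For later use we record, purely schematically, the classical estimate relating \\mbox{$\\|\\psi_t^{\\pm 1}\\|_*$} to the log-Lipschitz norm of the velocity; it is the bound invoked in the next display, stated here only as a sketch (the precise constants depend on the definition of \\mbox{$\\|\\cdot\\|_*$}):

```latex
% Sketch: an Osgood/Gronwall argument for flows of log-Lipschitz
% vector fields yields a Holder-type modulus of continuity for
% \psi_t^{\pm 1}, which translates into the bound
\|\psi_t^{\pm 1}\|_{*}\;\leq\;\exp\Big(\int_0^t\|u(\tau)\|_{LL}\,d\tau\Big).
```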
\n Theorem \\ref{decom} and Proposition \\ref{prop} yield together\n\\begin{eqnarray*}\n \\|\\omega(t)\\|_{{{\\rm LBMO}}}&\\leq& C\\|\\omega_0\\|_{{{\\rm LBMO}}\\cap L^p}\\ln\\hspace{-3.8mm}{}^+(1+\\|\\psi_t^{-1}\\|_*)\n \\\\\n &\\leq& C\\|\\omega_0\\|_{{{\\rm LBMO}}\\cap L^p}\\ln\\hspace{-3.8mm}{}^+(1+\\exp(\\int_0^t\\|u(\\tau)\\|_{LL}d\\tau))\n \\\\\n &\\leq& C_0(1+\\int_0^t\\|u(\\tau)\\|_{LL}d\\tau).\n \\end{eqnarray*}\nOn the other hand, one has\n \\begin{eqnarray*}\n \\|u(t)\\|_{LL}&\\leq&\\|\\omega(t)\\|_{L^2}+ \\|\\omega(t)\\|_{B_{\\infty,\\infty}^0}\n \\\\\n &\\leq& C(\\|\\omega_0\\|_{L^2}+ \\|\\omega(t)\\|_{{\\rm BMO}}).\n\\end{eqnarray*}\nThe first estimate is classical (see \\cite{bah-ch-dan} for instance) and the second one is just the conservation of the \\mbox{$L^2$}-norm of the vorticity and the continuity of the embedding \\mbox{${\\rm BMO}\\hookrightarrow B_{\\infty,\\infty}^0$}.\n\nConsequently, we deduce that\n \\begin{eqnarray*}\n \\|u(t)\\|_{LL}\\leq C_0(1+\\int_0^t\\|u(\\tau)\\|_{LL}d\\tau), \n\\end{eqnarray*}\nand by Gronwall's Lemma \n$$\n\\|u(t)\\|_{LL}\\leq C_0\\exp(C_0t),\\qquad\\forall\\, t\\geq 0.\n$$\nThis yields in particular\n$$\n \\|\\omega(t)\\|_{{{\\rm LBMO}}}\\leq C_0\\exp{(C_0t)},\\qquad\\forall\\, t\\geq 0,\n $$\n as claimed.\n\n\\end{proof}\n\n\\subsection{ Existence} Let \\mbox{$\\omega_0\\in L^p\\cap {{\\rm LBMO}}$} and $u_0=K\\ast \\omega_0, $\n with $K(x)=\\frac{x^\\perp}{2\\pi|x|^2}.$ We take \\mbox{$\\rho\\in \\mathcal C^\\infty_0$}, with \\mbox{$\\rho\\geq 0$} and \\mbox{$\\int\\rho(x)dx=1$} and set\n$$\n\\omega_0^n=\\rho_n\\ast \\omega_0,\\qquad u_0^n= \\rho_n\\ast u_0,\n$$\n where \\mbox{$\\rho_n(x)=n^2\\rho(nx)$}. Obviously, \\mbox{$\\omega_0^n$} is a \\mbox{$C^\\infty$} bounded function for every \\mbox{$n\\in\\mathbb N^*$}. 
Furthermore, thanks to \\eqref{eq:comp}, \n $$\n \\|\\omega_0^n\\|_{L^p}\\leq \\|\\omega_0\\|_{L^p}\\qquad{\\rm and}\\qquad \\|\\omega_0^n\\|_{{{\\rm LBMO}}}\\leq \\|\\omega_0\\|_{{{\\rm LBMO}}}.\n $$\nThe classical interpolation result between Lebesgue and \\mbox{${\\rm BMO}$} spaces (see \\cite{GR} for more details) implies that \n$$\n \\|\\omega_0^n\\|_{L^q}\\leq \\|\\omega_0^n\\|_{L^p\\cap {\\rm BMO}}\\leq \\|\\omega_0\\|_{L^p\\cap {\\rm BMO}} , \\qquad \\forall\\, q\\in[p,+\\infty[.\n$$\nSince \\mbox{$\\omega_0^n\\in L^p\\cap L^\\infty$}, there exists, according to the classical result of Yudovich \\cite{Y1}, a unique weak solution \\mbox{$u^n$} with\n $$\n\\omega^n\\in L^\\infty({\\mathbb R}_+, L^p\\cap L^\\infty).\n$$\nAccording to Proposition \\ref{apriori} one has\n \\begin{eqnarray}\n \\label{44}\n \\|u^n(t)\\|_{LL}+ \\|\\omega^n(t)\\|_{L^p\\cap{{\\rm LBMO}}}\\leq C_0\\exp(C_0t),\\qquad\\forall\\, t\\in{\\mathbb R}_+.\n \\end{eqnarray}\n With this uniform estimate in hand, we can perform the same analysis as in the case \\mbox{$\\omega_0\\in L^p\\cap L^\\infty$} (see paragraph 8.2.2 in \\cite{Maj} for more explanation). 
For the convenience of the reader we briefly outline the main arguments of the proof.\n \n If one denotes by \\mbox{$\\psi_n(t,x)$} the flow associated to \\mbox{$u^n$} then \n \\begin{equation}\n \\label{tt}\n \\|\\psi_n^{\\pm1}(t)\\|_{*}\\leq C_0\\exp(C_0t),\\qquad\\forall\\, t\\in{\\mathbb R}_+.\n \\end{equation}\n This yields the existence of explicit time-continuous functions \\mbox{$\\beta(t)>0$} and \\mbox{$C(t)$} such that\n $$\n |\\psi_n^{\\pm1}(t,x_2)-\\psi_n^{\\pm1}(t,x_1)|\\leq C(t)|x_2-x_1|^{\\beta(t)},\\qquad \\forall\\, (x_1,x_2)\\in{\\mathbb R}^2\\times{\\mathbb R}^2.\n $$\n Moreover,\n $$\n |\\psi_n^{\\pm1}(t_2,x)-\\psi_n^{\\pm1}(t_1,x)|\\leq |t_2-t_1|\\|u^n\\|_{L^\\infty}\\leq C_0|t_2-t_1|,\\qquad \\forall\\, (t_1,t_2)\\in{\\mathbb R}_+\\times{\\mathbb R}_+.\n $$\nHere, we have used the Biot-Savart law to get\n$$\n \\|u^n(t)\\|_{L^\\infty}\\lesssim \\|\\omega^n(t)\\|_{L^p\\cap L^3}\\leq\\|\\omega_0\\|_{L^p\\cap L^3}.\n $$\n The family \\mbox{$\\{\\psi_n,\\, n\\in\\mathbb N\\}$} is bounded and equicontinuous on every compact \\mbox{$[0,T]\\times \\bar B(0,R)\\subset {\\mathbb R}_+\\times{\\mathbb R}^2$}. The Arzela-Ascoli\ntheorem implies\n the existence of limiting particle trajectories \\mbox{$\\psi(t,x)$}. Performing the same analysis for \\mbox{$\\{\\psi_n^{-1},\\, n\\in\\mathbb N\\}$} we conclude that \\mbox{$\\psi(t,x)$} is a Lebesgue measure preserving homeomorphism. 
Also, passing to the limit\\footnote{ We take the pointwise limit in the definition formula and then take the supremum.} in \\eqref{tt} leads to\n $$\n \\|\\psi_t\\|_{*}=\\|\\psi^{-1}_t\\|_{*}\\leq C_0\\exp(C_0t),\\qquad \\forall\\, t\\in{\\mathbb R}_+.\n $$\n One defines\n $$\n \\omega(t,x)=\\omega_0(\\psi^{-1}_t(x)),\\qquad u(t,x)=(K\\ast \\omega(t,.))(x).\n $$\nWe easily check that for every \\mbox{$q\\in [p,+\\infty[$} one has\n \\begin{eqnarray*}\n \\omega^n(.,t)&\\longrightarrow& \\omega(.,t)\\quad {\\rm in }\\,\\, L^q.\n \\\\\n u^n(.,t)&\\longrightarrow& u(.,t)\\quad {\\rm uniformly}. \n \\end{eqnarray*}\nThe last claim follows from the fact that\n $$\n \\|u^n(t)-u(t)\\|_{L^\\infty}\\lesssim \\|\\omega^n(t)-\\omega(t)\\|_{ L^p\\cap L^3}.\n $$\n All this allows us to pass to the limit in the integral equation on \\mbox{$\\omega^n$} and then to prove that \\mbox{$(u,\\omega)$} is a weak solution to the vorticity-stream formulation of the 2D Euler system. Furthermore, the convergence of \n \\mbox{$\\{\\omega^n(t)\\}$} in \\mbox{$L^1_{\\text{loc}}$} and \\eqref{44} imply together that\n $$\n \\|\\omega(t)\\|_{L^p\\cap{{\\rm LBMO}}}\\leq C_0\\exp(C_0t),\\qquad \\forall\\,t\\in{\\mathbb R}_+,\n $$\n as claimed.\n \n The continuity of \\mbox{$\\psi$} and the preservation of Lebesgue measure imply that \\mbox{$t\\mapsto \\omega(t)$} is continuous\\footnote{ By approximation we are reduced to the following situation: \\mbox{$g_n(x)\\to g(x)$} pointwise and \n $$\\|g_n\\|_{L^q}=\\|g\\|_{L^q}.\n $$\n This is enough to deduce that \\mbox{$g_n\\to g$} in \\mbox{$L^q$} (see Theorem 1.9 in \\cite{LL} for instance). } with values in \\mbox{$L^q$} for all \\mbox{$q\\in [p,+\\infty[$}. This implies in particular that\n \\mbox{$u\\in \\mathcal C([0,+\\infty[, L^r({\\mathbb R}^d))$} for every \\mbox{$r\\in [\\frac{2p}{2-p},+\\infty]$}. 
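The convergence statement used in the footnote above can be recorded explicitly; it is the standard fact that pointwise convergence together with convergence of norms upgrades to strong convergence (Theorem 1.9 in \\cite{LL}):

```latex
% For 1 <= q < \infty: almost-everywhere convergence together with
% convergence of the L^q norms implies strong L^q convergence.
g_n\to g\ \ {\rm a.e.}\quad{\rm and}\quad \|g_n\|_{L^q}\to\|g\|_{L^q}
\;\Longrightarrow\;\|g_n-g\|_{L^q}\to 0.
```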
\n \\subsection{ Uniqueness} Since the vorticity remains bounded in \\mbox{${\\rm BMO}$} space then the uniqueness of the solutions follows from Theorem 7.1 in \\cite{Vishik1}. \n Another way to prove this is to add the information \\mbox{$u\\in \\mathcal C([0,+\\infty[, L^\\infty({\\mathbb R}^d))$} (which is satisfied for the solution constructed above) to the theorem and in this case the uniqueness follows from Theorem 7.17 in \\cite{bah-ch-dan}.\n \n\\\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nStudies of jet substructure have become an area of great interest and \nmuch activity in the context of LHC phenomenology \\cite{boost1,boost2}. The \nprimary reason for this relatively recent explosion of interest has been the \nobservation that massive particles at the LHC can emerge with large boosts \neither due to the fact that one has an unprecedentedly large centre-of--mass energy $\\sqrt{s}$, which consequently enables access to a large range of particle transverse momenta, or due to the possibility that one may have very heavy new BSM particles that could decay to lighter Standard Model particles which thus emerge highly boosted. Such particles with energies or, equivalently, transverse momenta much greater than their mass $p_T \\gg m$, would decay to products that are collimated and a significant fraction of the time would be recombined by \njet algorithms into single jets. One is then faced with the problem of \ndisentangling more interesting signal jets from those that are \npurely QCD background jets. In this context, detailed studies of jet substructure had been proposed some time ago \\cite{Seymour} as having the potential to help separate signal jets (for instance, those arising from hadronic Higgs decays) from QCD background. More recent and detailed analyses \\cite{BDRS,atlas_HZ,PSS} subsequently revealed the great potential offered by substructure studies in the specific case of boosted Higgs searches and paved the way for other substructure based studies in many different new physics contexts, which continue to emerge.\n\nHowever, one important issue that is a significant hindrance to the success of studies that try to exploit detailed knowledge of jet substructure is the complexity of the LHC as a physics environment, particularly in the context of strong interactions. 
For example, while looking for signal mass-peaks in the vicinity of a certain value of mass, $m$, one is inevitably led to consider the distribution of background QCD jets around that value. In fact, a precise knowledge of jet-mass distributions and the related role of cuts on jet masses are crucial for a wide variety of substructure studies. At the LHC one faces formidable obstacles as far as an accurate description of such spectra is concerned. Firstly, from the viewpoint of QCD perturbation theory, the most reliable tool at our disposal, one is confronted with large logarithms due to the multi-scale nature of the distribution at hand, which are as singular as $\\frac{1}{m_J} \\alpha_s^n \\ln^{2n-1} \\frac{p_T}{m_J}$, i.e. double logarithmic, plus less singular terms. For boosted studies involving values $p_T \\gg m_J$ such logarithms dominate the perturbative expansion and hence fixed-order tools like \\textsc{Nlojet}{\\footnotesize ++} \\cite{nlojet}, which are typically heavily relied upon for accurate QCD predictions, become at worst invalid or at best of severely limited use. To make matters worse, there are non-perturbative effects such as the underlying event and also the issue of pile-up, which leads to a contamination of jets resulting in an often very significant worsening of any perturbative description. \n\nIn light of the above discussion, it is clear that the tools one can most \nreadily use to estimate distributions of quantities such as jet masses are in \nfact Monte Carlo event generators. The parton showers encoded in these event generators derive from first principles of QCD and offer a partial resummation of the large logarithms we mentioned before. They can be combined with fixed-order NLO results \\cite{POWHEG,MC@NLO} to yield descriptions that also accurately describe regions of phase space where the logarithms in question may not be entirely dominant. 
Moreover, they include hadronisation models as well as a model-dependent treatment of other effects such as the underlying event where the parameters of the model are extensively tuned to various data sets, to render them more accurate descriptions. While such tools are very general and hence of immense value in addressing the complexity of the LHC environment, the level of precision they offer may be considered a still open question. On the perturbative \nside the accuracy of the logarithmic resummation represented by parton showers is not clear. While leading logarithms (double logarithms in the present case) are understood to be guaranteed, at the level of next-to--leading or \nsingle logarithms (NLL level) the showers are not expected to provide a complete description. For example it is well known that parton showers work in the leading colour (large $N_c$) approximation as far as large-angle soft radiation is concerned, while the jet mass distributions we discuss here receive single \nlogarithmic contributions from such effects which start already at leading \norder in $\\alpha_s$ and contain subleading colour terms. As we shall show, \nsingle-logarithmic terms on the whole have a large impact on the final result \nand thus it is a little disconcerting to note that they will not be fully accommodated in current shower descriptions. \n\nAlso the event generators we are considering have different models both for parton showers as well as for non-perturbative effects such as the underlying event and hadronisation and while it is always possible to tune the parameters in each event generator to experimental data, a comparison of the separate physics ingredients of each program often reveals large differences which, in certain cases, do not inspire much confidence in the accuracy of the final description, as far as QCD predictions are concerned. 
As one example of such differences we can refer the reader to the studies of non-perturbative effects in jets at hadron colliders~\\cite{Dassalmag}, where large differences were pointed out between the \\textsc{Herwig} and \\textsc{Pythia} underlying event estimates at Tevatron energies. A perhaps even more directly relevant example is the recent comparison by the ATLAS collaboration of their data on jet masses and shapes \\cite{ATLASjetmass, ATLASjetshape, ATLASboosted} to predictions from \\textsc{Herwig}{\\footnotesize++} and \\textsc{Pythia} which for the jet mass case, for example, do not agree very well with one another, with \\textsc{Pythia} describing the data better. An understanding of the origin of such differences is certainly an important issue in order to gain confidence in the use of Monte Carlo tools for future LHC phenomenology. Hence the somewhat black-box nature of event generators is an issue which can make sole reliance on these tools dangerous in the long run. \n\nAnother avenue that can be explored in terms of theoretical predictions is the possibility for analytical resummation of large logarithms. While the \ntechniques needed to achieve such resummation currently apply to a more limited number of observables than event generators, which are far more general purpose tools, where possible, resummation alleviates some of the present difficulties inherent in an event generator based approach. The typical accuracy of analytical resummation is usually at least NLL with some results having been obtained even up to the $\\mathrm{N^3LL}$ level \\cite{SchwaBec}. It is also possible and \nstraightforward in principle to match these resummed calculations to NLO estimates so as to have an accurate prediction over a wide range of observable values. 
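To fix ideas, the double logarithms that such a resummation controls exponentiate, at leading double-logarithmic accuracy, into a Sudakov-type suppression. The following form is a purely illustrative sketch for a quark-initiated jet (fixed coupling, and ignoring single logarithms, non-global terms and jet-radius effects); it is not the full result derived later in this paper:

```latex
% Schematic leading double-log (DL) form of the integrated jet-mass
% distribution; expanding the exponential generates the
% \alpha_s^n \ln^{2n-1}(p_T/m_J) terms of the differential spectrum.
\Sigma_{\rm DL}(m_J)\;\equiv\;\int_0^{m_J^2}\frac{dm^2}{\sigma}\,
\frac{d\sigma}{dm^2}\;\simeq\;\exp\left[-\frac{\alpha_s\,C_F}{2\pi}\,
\ln^2\frac{p_T^2}{m_J^2}\right].
```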
Resummation has been a valuable tool in achieving high precision in several QCD studies ranging from pioneering studies of LEP event shapes \\cite{CTTW} to current studies involving hadron collider event shapes \\cite{BSZpheno}. In this paper we carry out such a resummation for the case of the jet mass in Z+jet and dijet processes at the LHC.\n\nIf one takes the study of jet-mass distributions as a particular case of studying shapes of jets produced in multi-jet events (see for instance Refs.~\\cite{EHLVW1, EHLVW2}) then it is clear that for substructure studies, resummation of the kind that we perform here for the jet mass, will be an important tool in achieving a precise description of the internal structure of jets. For the same reason, all the issues one encounters in the present study and the solutions we propose here to those problems, will also be of general relevance in the wider context of resummed calculations as substructure tools. There have in fact been several recent attempts to address the issue of resummation for jet masses and other substructure observables such as angularities \\cite{EHLVW1,EHLVW2, yuan1,yuan2}. In Refs.~\\cite{EHLVW1,EHLVW2} it was proposed to study angularities of one or more jets produced in multijet events but calculations were only carried out for jets produced in $e^{+}e^{-}$ annihilation. The calculations carried out in these references also omitted important contributions at single-logarithmic level, the so called non-global logarithms \\cite{DassalNG1,DassalNG2}, as was explained in some detail in Ref.~\\cite{BDKM}. Further calculations for jet-mass observable definitions in $e^{+}e^{-}$ annihilation were carried out in Ref.~\\cite{KSZ1,KSZ2}. 
References \\cite{yuan1,yuan2} on the other hand attempt to address the issue for the case of hadron collisions but employ calculations that are not complete to next-to--leading logarithmic accuracy, taking only into account the collinear branchings that generate the so called process independent jet function approximation to the resummed result. In this approach one does not treat the soft large-angle effects that arise from initial state radiation (ISR) or address the important issue of non-global logarithms or the dependence on the jet algorithm explained in Ref.~\\cite{BDKM}, and hence cannot be considered sufficient for a reasonable phenomenological description of data. \n\nIn our current paper we address both issues of ISR and non-global contributions and demonstrate their significance not just as formal NLL terms but also from the perspective of numerics and the accuracy of the final description for potential comparisons to data. We consider specifically the jet mass distribution of jets produced in two different hard processes: Z+jet production where it will be a background to the case of associated boosted Higgs production, \nwith Higgs decay to $b \\bar{b}$ and the case of jet production in dijet LHC \nevents. We carry out a resummed calculation including the ISR contributions as a power series in jet radius $R$, while the non-global logarithms are calculated exactly at leading order (i.e. order $\\alpha_s^2$) and then resummed in the leading $N_c$ approximation as was the case for DIS single hemisphere event shapes studied phenomenologically in Ref.~\\cite{DasSalDIS}. We demonstrate that developing calculations for $e^{+}e^{-}$ variables and carrying them over to the LHC with neglect of process dependent ISR and non-global logarithms can yield large differences with the full resummation which correctly includes these effects. 
Moreover, our calculations have an advantage also over Monte Carlo event generators in that we retain the full colour structure of the ISR terms resorting to the leading $N_c$ approximation only for the non-global terms starting from order $\\alpha_s^3$. The accuracy that we achieve in our resummed exponent should then be comparable to that which yielded a good description of DIS event shape data \\cite{DasSalDIS}. We also match our results to leading order QCD predictions so as \nto account for those terms which are not enhanced by large logarithms but may be important at larger values of jet mass, i.e. away from the peak of the distribution. \nThe calculations of this paper are valid for jets defined in the anti-$k_t$ algorithm \\cite{antikt}. For jets defined in other algorithms such as the $k_t$ \\cite{kt1,kt2}, and Cambridge--Aachen algorithms \\cite{CAM,CA} \nthe role of gluon self-clustering effects greatly complicates the single-logarithmic resummation (see e.g the discussions in Refs.~\\cite{BanDas05,BanDasDel,KKK, KWZ1,KWZ2}). For such algorithms analytical resummation to single logarithmic accuracy is still beyond the current state-of--the art and we shall hence not treat them. \n\nThe calculations in the present paper also stop short of achieving the accuracy that was obtained for single-hemisphere DIS event shapes in one aspect. \nWhile we achieve the same accuracy as the DIS case for the resummed exponent, we do not yet obtain the NNLL accuracy in the expansion of the resummation as is achieved for most global event shapes in $e^{+}e^{-}$ and DIS in the leading $N_c$ limit \\cite{Dassalrev} as well as in hadron collisions \\cite{BSZpheno}. 
In other words our current resummation would not guarantee obtaining the $\\alpha_s^2 L^2$ terms in the expansion, that arise from a cross-talk between a constant coefficient function $ \\alpha_s C_1$, which corrects the resummation off just the Born configuration, and the $\\alpha_s L^2$ term of the Sudakov form factors one obtains for jet masses. A proper treatment of such constant coefficients requires further work which we shall carry out in a subsequent article. At that stage we will also be in a position to carry out an NLO matching which, at least from the perturbative viewpoint, will give us an answer that will represent the state of the art for non-global observables. We do however estimate in this paper the possible effect of correcting for the coefficient function on our present predictions for the case of Z+jet production. In order to proceed to full NLL accuracy one would also need to understand non-global logarithms beyond the leading $N_c$ approximation but this is a much longer term goal. In the meantime we believe that the predictions we obtain here and certainly after forthcoming NLO matching will be sufficiently accurate so as to render them valuable for phenomenological studies. For the present moment we compare our resummed results to results from shower Monte Carlos and comment on the interesting features that emerge.\n\nWe organise this article as follows: in the following section we outline the \ngeneral framework for our resummed results indicating the separation between \nthe global piece and the non-global terms. Following this, we derive in more \ndetail the results for the global terms for both the case of Z+jet and dijet production. In section~\\ref{sec:nglogs} we detail the results of our calculation for the \nnon-global component at fixed-order and at all orders in the large $N_c$ limit. 
In section~\\ref{sec:Zjet} we plot our final results for the case of Z+jet production, having carried out a leading-order matching, and comment on the impact of various contributions to the resummed exponent such as ISR and non-global logarithms. We also discuss the expected effect on our results of a proper treatment of the coefficient $C_1$, by treating its contribution in different approximations. Lastly, for the Z+jet case we compare our results to Monte Carlo estimates from a variety of event generators. In section~\\ref{sec:dijets} we discuss final results for the case of dijet production with matching to leading-order calculations. Finally, in \nsection~\\ref{sec:conclusions} we present our conclusions. Detailed calculations and explicit resummation formulae are collected in the Appendices. \n\n\n\n\n\n\n\n\\section{General framework} \\label{sec:framework}\nThe purpose of this section is to outline the overall structure of the resummed results that we have computed for both jet production in association with a \nvector boson and dijet processes at hadron colliders. The notation we find most convenient to adopt is the one developed and used in Refs.~\\cite{BSZcaesar,BSZpheno} for the case of global event shape variables in hadron collisions. The key new ingredient in our calculation of jet masses is that \nthe observables we address are non-global; certain specifics therefore differ from the case of global event shapes, and we shall highlight these where relevant.\n\n\n\nFor the case of jet production in association with a Z boson we shall thus \nexamine a distribution of the form \n\\begin{equation}\n\\frac{1}{\\sigma} \\frac{d \\sigma}{dm_J^2}\n\\end{equation}\nwhere $m_J^2$ is the jet-mass squared of the highest $p_T$ jet recoiling against the Z boson. 
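As an elementary aside, the squared jet mass appearing in this distribution is just the invariant mass of the summed constituent four-momenta, $m_J^2=(\\sum_i p_i)^2$. A minimal sketch (the constituent momenta below are invented purely for illustration):

```python
import math

def jet_mass_sq(constituents):
    """Invariant mass squared m_J^2 = (sum_i p_i)^2 of a jet,
    with four-vectors given as (E, px, py, pz) and metric (+,-,-,-)."""
    E = sum(p[0] for p in constituents)
    px = sum(p[1] for p in constituents)
    py = sum(p[2] for p in constituents)
    pz = sum(p[3] for p in constituents)
    return E * E - px * px - py * py - pz * pz

# Two massless constituents: a hard parton plus a softer gluon at a small angle.
p_hard = (100.0, 100.0, 0.0, 0.0)
k_soft = (5.0, 5.0 * math.cos(0.3), 5.0 * math.sin(0.3), 0.0)
mJ2 = jet_mass_sq([p_hard, k_soft])  # equals 2 p.k for massless momenta
```

For massless constituents this reduces to $m_J^2 = 2\\,p\\cdot k$, the starting point of the soft-gluon analysis below.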
For dijet production we shall instead adopt a different \nobservable definition and study essentially the jet mass distribution averaged over the highest and second highest transverse momentum jets:\n\\begin{equation}\n\\frac{1}{\\sigma} \\left(\\frac{d \\sigma}{dm_{J1}^2}+\\frac{d \\sigma}{dm_{J2}^2} \\right)\n\\end{equation}\nwhere $m_{J1}^2$ and $m_{J2}^2$ are the squared masses of the highest and second highest $p_T$ jets, respectively. \n\nAt Born level, for the Z+jet case, we have a single parton recoiling against the Z boson. If we restrict ourselves to a single-jet final state, then the jet-mass distribution for small jet masses is generated by soft and collinear \nemission off the hard Born configuration whose colour content is provided by the two incoming partons and the final-state parton that initiates the jet. \nIn contrast, for the case of dijet production soft and collinear emissions \naround the Born configuration involve an ensemble of four coloured particles, with two incoming partons and two outgoing partons corresponding to the jets. \n\nWhile for global observables, such as event shapes, resummation off the hard Born configuration is all that matters, in the present case it is obvious that small jet masses can be produced in events with any jet multiplicity, which represent higher-order corrections to the basic Born configurations we address. To restrict oneself to addressing just the Born configuration one can, for example, impose a veto scale $p_{T0}$, as suggested in Ref.~\\cite{EHLVW1}. However, depending on the value of this scale one may then also need to resum the consequent logarithms involving the veto scale (see Refs.~\\cite{EHLVW1, EHLVW2} for a discussion). Even if one chooses to adopt this procedure, the calculations we carry out and report in this paper shall still form the basis of the resummed answer, but will need to be modified to account additionally for the imposition of a veto. 
In the present article, we do not impose a veto but note that any additional production of non-soft, non-collinear particles (e.g. the Z+2 jet correction terms to the leading Z+jet process) will be associated with a suppression factor of $\\alpha_s(p_T)$ relative to the Born term, for each additional jet, where $p_T$ is the typical transverse momentum of the additional jet. This means that to the accuracy of our present paper (and indeed the accuracy of most current resummed calculations) we shall need to account for only the order $\\alpha_s$ correction to the Born term supported by a form factor involving only the double logarithmic (soft {\\emph{and}} collinear) component of the jet mass resummation. Thus, we never have to discuss, to our accuracy, the complex issue of soft wide-angle gluon resummation off an ensemble other than the Born configuration. The role of correction terms to the basic Born level resummation shall be discussed in more detail later in the article. \n\nNext, following the notation of Refs.~\\cite{BSZcaesar,BSZpheno} and denoting \nthe Born kinematical configuration by $\\mathcal{B}$, we write, for a fixed Born configuration, the cross-section for the squared jet mass to \nbe below some value $v p_T^2$, as \n\\begin{equation} \\label{sigmadef}\n\\frac{d\\Sigma^{(\\delta)}(v)}{d \\mathcal{B}}= \\int d m_J^2 \\frac{d^2\\sigma^{(\\delta)}}{d \\mathcal{B} d m_J^2 } \\Theta(v p_T^2 -m_J^2) . \n\\end{equation}\nThe label $\\delta$ corresponds to the relevant production channel at Born level, i.e. the flavour structure of the underlying $2 \\to 2$ Born process. We have also introduced the dimensionless variable $v=m_J^2\/p_T^2$, with $p_T$ the transverse momentum of the measured jet. 
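In practice Eq.~(\\ref{sigmadef}) is nothing but the cumulative distribution of the dimensionless variable $v$: the $\\Theta$-function keeps events whose squared jet mass lies below $v\\,p_T^2$. A toy numerical sketch (the sample of jet masses and weights below is invented for illustration only):

```python
def sigma_cumulative(mJ2_samples, weights, v, pT):
    """Toy version of Eq. (sigmadef): sum the weights of events whose
    squared jet mass satisfies m_J^2 < v * pT^2 (the Theta function)."""
    return sum(w for m2, w in zip(mJ2_samples, weights) if m2 < v * pT * pT)

# Invented sample: squared jet masses (GeV^2) for jets with pT = 100 GeV.
masses2 = [25.0, 100.0, 400.0, 900.0, 2500.0]
weights = [1.0] * len(masses2)
pT = 100.0
# With v = 0.05 the cut is m_J^2 < 500 GeV^2, keeping the first three entries.
```

The resummed prediction derived below is precisely for this cumulative quantity; the differential distribution follows by differentiation in $v$.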
One can integrate over the Born configuration \nwith a set of kinematical cuts denoted by $\\mathcal{H}(\\mathcal{B})$ to obtain the integrated cross-section \n\\begin{equation}\n\\Sigma^{(\\delta)}(v) = \\int d\\mathcal{B} \\frac{d \\Sigma^{(\\delta)}(v)}{d\\mathcal{B}} {\\mathcal{H}}(\\mathcal{B}),\n\\end{equation} \nwhere, as should be clear from the notation, $\\frac{d\\Sigma^{\\left(\\delta \\right)}(v)}{d \\mathcal{B}}$ is the fully differential jet-mass cross-section at fixed Born kinematics for the subprocess labelled by $\\delta$. \nWe can then sum over Born channels $\\delta$ to obtain $\\Sigma(v)$, the integrated jet-mass cross-section. However, the above result differs from the case of global observables considered in Refs.~\\cite{BSZcaesar, BSZpheno}, in that it is correct only as far as the resummation of logarithms off the Born configuration is concerned and not at the level of constant terms which can arise from the higher jet topologies we mentioned before, which are not related to the Born configuration. When addressing the issue of constant corrections we shall thus need to account, in addition to the above, for jet production beyond the Born level. 
For the present we focus on the basic resummation and hence work only with the Born level production channels as detailed above.\n\n\nFollowing Ref.~\\cite{BSZcaesar} (for $v \\ll 1$) we then write\n\\begin{equation}\n\\label{eq:fact}\n\\frac{d\\Sigma^{\\left(\\delta \\right)}(v)}{d \\mathcal{B}} = \\frac{d\\sigma_0^{\\left(\\delta \\right)}}{d \\mathcal{B}} f_{\\mathcal{B}}^{(\\delta)}\\left(1+\\mathcal{O} \\left(\\alpha_s \\right) \\right).\n\\end{equation}\nThe resummation is included in the function $f_{\\mathcal{B}} ^{(\\delta)}$ and has the usual form \\cite{CTTW}:\n\\begin{equation}\nf_{\\mathcal{B}}^{(\\delta)} = \\exp \\left [Lg_1(\\alpha_s L)+g_2(\\alpha_s L)+\\alpha_s g_3(\\alpha_s L) +\\cdots \\right]\n\\end{equation} \nwhere $g_1$, $g_2$ and $g_3$ are leading, next-to--leading and next-to--next-to--leading logarithmic functions, with further subleading terms indicated by the \nellipses, and $L=\\ln \\frac{1}{v}$. \n\nFor the observable we study here, namely non-global jet mass distributions, \nthe function $g_1$ is generated simply by the time-like soft and collinear branching of an outgoing parton and depends only on the colour Casimir operator of the parton initiating the jet, while being independent of the rest of the event. The function $g_2$ is much more complicated. It has a piece of pure hard-collinear origin, which, like the leading logarithmic function $g_1$, only depends on the colour charge of the parton initiating the jet and factorises from the rest of the event. In the collinear approximation, combining the soft-collinear terms of $g_1$ and the hard-collinear terms included in $g_2$, we recover essentially the jet functions first computed for quark jets in $e^{+}e^{-}$ annihilation in \\cite{CTTW}. However, for complete single-logarithmic accuracy one has to consider also the role of soft wide-angle radiation. 
The function $g_2$ receives a pure soft large-angle contribution also due to emissions from hard partons other than the one initiating the jet. For the Z+jet case this piece would be generated by coherent soft wide-angle emission from a three hard parton ensemble, consisting of the incoming partons and the outgoing hard parton (jet). For the case of dijet production, we have instead to consider an ensemble of four hard partons and the consequent soft wide-angle radiation has a non-trivial colour matrix structure~\\cite{KOS}, as for global hadronic dijet event shapes. \n\nOther than the above effects, which are all present for global event shapes and which are all generated by a single soft emission, the function $g_2$ receives another kind of soft contribution, starting from the level of two soft gluons. \nSince we are looking in the interior of a jet rather than the whole of phase \nspace, our observable is sensitive to soft gluons outside the jet region \nemitting a much softer gluon into the jet. While for a global observable such a much softer emission would cancel against virtual corrections, in the present case it makes an essential contribution to the jet mass, triggering single logarithms in the jet mass distribution. These single logarithms (non-global logarithms) cannot be resummed by traditional methods which are based essentially on single gluon exponentiation. In fact a resummation of non-global terms, valid in the large $N_c$ limit, which corresponds to solving a non-linear evolution equation~\\cite{BMS}, can be obtained by means of a dipole evolution code \\cite{DassalNG1}. We carry out such a resummation in this article but do not attempt to \naddress the issue of the subleading $N_c$ corrections which are as yet an \nunsolved problem. 
Since non-global logarithms are next-to--leading and we are in fact able to obtain the full colour structure for them up to order $\\alpha_s^2$, it is only for single-logarithmic terms starting at order $\\alpha_s^3$ that one needs to use the large $N_c$ approximation. One may thus expect that for \nphenomenological purposes an adequate description of non-global effects will be provided by our treatment here, as was the case for DIS event shape variables studied in Ref.~\\cite{DasSalDIS}.\n\nIn the next section we shall generate the entire result, except for the non-global terms, which we shall correct for in a subsequent section. The results of the \nnext section correspond to the answer that would be obtained if the observable were a global observable and include the process-independent soft and \nhard-collinear terms alluded to above, as well as a process-dependent soft wide-angle piece, which also depends on the jet radius $R$. This soft wide-angle piece, which starts at order $\\alpha_s$, is calculated with full colour structure, whereas one would expect that in Monte Carlo event generators only the leading $N_c$ terms are retained, thus implying a higher accuracy for the results we obtain here. \n\n\n\n\n\n\\section{The eikonal approximation and resummation} \\label{sec:eikonal}\nIn the current section we shall consider the emission of a soft gluon by an ensemble of hard partons in the eikonal approximation. In this limit one can consider the radiation pattern to be a sum over dipole emission terms \\cite{QCDESW}. Our strategy is to calculate the individual dipole contributions to the jet mass distribution and then to sum over dipoles to obtain results for both the \nZ+jet case as well as the dijet case. While the sum over dipoles shall \ngenerate both the soft-collinear and soft wide-angle terms we mentioned in the preceding section, we shall need to extend our answer \nto include also the relevant hard-collinear terms. 
Once this is done, the only remaining source of single logarithmic terms will be the non-global contribution to the single-logarithmic function $g_2$, which we shall address in detail in the following section.\n\nWe shall consider the most general situation that we need to address, with all dipoles formed from two incoming partons and two outgoing hard partons. Clearly,\nfor the Z+jet case one would have only a single hard parton in the final state, with the recoiling parton replaced by a massive vector boson, and hence for this case we shall exclude the dipole contributions involving the second outgoing parton.\n\nThe squared matrix element for emission of a soft gluon $k$ by a system of hard dipoles is described, in the eikonal approximation, as a sum over contributions from all possible colour dipoles:\n\\begin{equation}\n \\label{eq:dipsum}\n \\left| {\\mathcal{M}}_{\\delta}\\right|^2 = \\left| {\\cal M}_{\\mathcal{B},\\delta} \\right|^2 \n \\sum_{(ij)\\in \\delta } C_{ij} \\, W_{ij}(k)~,\n\\end{equation} \nwhere the sum runs over all distinct pairs $(ij)$ of hard partons present in the flavour configuration $\\delta$, or equivalently, as stated before, over all dipoles. The quantity $\\left| {\\cal M}_{\\mathcal{B},\\delta}\\right|^2$ is the squared matrix element for the Born level hard scattering, which in our case has to be computed for each separate partonic subprocess $\\delta$ contributing to the jet distribution and contains also the dependence on parton distribution functions. 
The contribution of each dipole is weighted by the colour factor $C_{ij}$, which we shall specify later, while the kinematic factor $W_{ij}(k)$ is given explicitly by the classical antenna function\n\\begin{equation}\n W_{ij} (k) = \\frac{\\alpha_s \\left( \\kappa_{t, i j} \\right)}{2 \\pi}\n \\frac{p_i \\cdot p_j}{(p_i \\cdot k)(p_j \\cdot k)}.\n \\label{eikant}\n\\end{equation}\nIn the above equation $\\alpha_s$ is defined in the bremsstrahlung scheme \\cite{Catani:1990rr}, and its argument is the invariant quantity $\\kappa_{t, i j}$, with $\\kappa_{t, i j}^2 = 2 (p_i \\cdot k)(p_j \\cdot k)\/(p_i \\cdot p_j)$, which is just the squared transverse momentum with respect\nto the dipole axis, in the dipole rest frame.\nWe note that in the eikonal approximation, as is well known, the Born level production of hard partons in the relevant subprocess $\\delta$ factorises from the production of soft gluons described by the antenna functions $W$. The squared matrix element $\\left|{\\cal M}_{\\mathcal{B},\\delta}\\right|^2$ essentially produces the quantity \n$\\frac{d\\sigma_0^{\\left(\\delta \\right)}}{d \\mathcal{B}}$ while the $W$ functions \nstart to build up the exponential resummation factor $f_{\\mathcal{B}}^{(\\delta)}$ referred to in Eq.~\\eqref{eq:fact}. In what follows we shall examine the various components of the resummation in more detail and in particular carry out the calculations of the individual dipole terms. \n\n\\subsection{Exponentiation: the Z+jet case}\nWe have mentioned above the antenna structure of soft gluon emission from a system of hard emitting dipoles. 
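Before exploiting this structure, the interpretation of $\\kappa_{t,ij}$ given in the previous section is easy to verify directly: for a back-to-back massless dipole aligned along the $z$ axis in its rest frame, $2(p_i\\cdot k)(p_j\\cdot k)\/(p_i\\cdot p_j)$ reduces exactly to $k_x^2+k_y^2$. A short numerical sketch (momenta invented for illustration):

```python
import math

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-); vectors are (E, px, py, pz)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# Back-to-back massless dipole along z, i.e. the dipole rest frame.
E = 50.0
p_i = (E, 0.0, 0.0, E)
p_j = (E, 0.0, 0.0, -E)

# Massless emission k with transverse momentum kt, rapidity eta, azimuth phi.
kt, eta, phi = 7.0, 1.3, 0.6
k = (kt*math.cosh(eta), kt*math.cos(phi), kt*math.sin(phi), kt*math.sinh(eta))

# Invariant kappa_t^2 of Eq. (eikant); equals kt^2 w.r.t. the dipole axis.
kappa2 = 2.0 * mdot(p_i, k) * mdot(p_j, k) / mdot(p_i, p_j)
```

Here $p_i\\cdot k = E k_t e^{-\\eta}$ and $p_j\\cdot k = E k_t e^{\\eta}$, so the rapidity dependence cancels and `kappa2` equals $k_t^2$ identically.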
It is well understood by now that if one ignores configurations corresponding to non-global logarithms (in other words those that stem from emission regions where gluons are ordered in energy but not in angle), then to single-logarithmic accuracy there is an exponentiation of the \none-gluon emission terms described in the preceding section, as well as the corresponding virtual corrections that have the same colour and dynamical \nstructure but contribute with an opposite sign so as to cancel the divergences of real emission. We note that for the case of Z+jet we are dealing with a hard parton ensemble with three partons, two incoming and one corresponding to the triggered jet. In this case the colour factors $C_{i j} = -2\\left({\\bf T}_i . {\\bf T}_j \\right)$ (with the ${\\bf T}_i$ being SU(3) generators) that accompany the dipole contributions can be straightforwardly expressed in terms of quark and gluon colour charges. Taking account of virtual corrections and the role of multiple emissions generating the jet mass, one can write the result in a form that is familiar from the earliest studies of jet masses in $e^{+}e^{-}$ annihilation \\cite{CTTW}\n\\begin{equation} \\label{fglobalZ}\nf_{\\mathcal{B}, { \\rm global}}^{(\\delta )} = \\frac{\\exp[-\\mathcal{R}_\\delta-\\gamma_E \\mathcal{R}_\\delta']}{\\Gamma(1+\\mathcal{R}_\\delta')},\n\\end{equation}\nwhere the subscript above denotes that we are considering the global term only, i.e. ignoring all non-global corrections to $f_{\\mathcal{B}}^{(\\delta)}$.\n\nThe function $\\mathcal{R}_\\delta$ is the resummed exponent obtained from the single-gluon contribution after cancellation of real--virtual divergences, while $\\mathcal{R}_\\delta'$ is its logarithmic derivative, $\\partial_L {\\mathcal{R}_\\delta}$, to be evaluated to our accuracy simply by retaining the leading logarithmic terms in $\\mathcal{R}_\\delta$. 
The terms involving $\\mathcal{R}'$ arise due to the fact that direct exponentiation only occurs for the Mellin conjugate of the variable $v$. To single-logarithmic accuracy one can invert the Mellin transform analytically by performing a Taylor expansion of the Mellin space result to first order and integrating over the Mellin variable, resulting in the form written above \\cite{CTTW}.\n\nOne has for $\\mathcal{R}_\\delta$ the result:\n\\begin{equation}\n\\mathcal{R}_\\delta = \\sum_{(ij)\\in \\delta}\\int C_{ij} \\, k_t \\, {dk_t} \\, d\\eta \\, \\frac{d\\phi}{2\\pi} \\, W_{ij}(k) \\, \\Theta \\left(v(k)-v \\right),\n\\end{equation} \nwhere we have introduced the integral over the momentum of the emitted gluon $k$ and the step function accounts for the fact that real--virtual cancellations \noccur below a value $v$ of the normalised squared jet mass, while uncancelled virtual corrections remain above $v$. The function $v(k)$ is just the dependence of the jet mass on the emission $k$. Letting the hard initiating parton have rapidity $y$ and transverse momentum $p_t$, and denoting by $k_t$, $\\eta$ and $\\phi$ the transverse \nmomentum, rapidity and azimuth of the soft gluon $k$, we have (when the hard parton and gluon are recombined to form a massive jet)\n\\begin{equation} \\label{eikonalv}\nv(k) = \\frac{m_J^2}{|\\underline{p}_t+\\underline{k}_t|^2} = \\frac{2 k_t}{p_t} \\left [ \\cosh \\left ( \n\\eta- y \\right )- \\cos \\phi \\right ]+\\mathcal{O} \\left ( \\frac{k_t^2}{p_t^2} \\right)\n\\end{equation}\nwhere we neglect terms quadratic in the small quantity $k_t$.\n\nIt now remains to carry out the dipole calculations for the Z+jet case. We have a hard process with two coloured fermions and a gluon, i.e. a three-hard-parton antenna, irrespective of the Born channel $\\delta$. Let us call $\\delta_1$ the Born subprocess with an incoming quark (or anti-quark) and an incoming gluon, which results in a final state coloured quark or antiquark recoiling against the Z boson. 
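The eikonal expression in Eq.~(\\ref{eikonalv}) is straightforward to check numerically: building the jet from a hard massless parton and a soft massless gluon, the exact $m_J^2\/|\\underline{p}_t+\\underline{k}_t|^2$ approaches $\\frac{2k_t}{p_t}[\\cosh(\\eta-y)-\\cos\\phi]$ as $k_t\/p_t \\to 0$. A quick sketch (all momenta invented for illustration):

```python
import math

def four_vec(pt, y, phi):
    """Massless four-vector (E, px, py, pz) from pt, rapidity and azimuth."""
    return (pt*math.cosh(y), pt*math.cos(phi), pt*math.sin(phi), pt*math.sinh(y))

def v_exact(pt, y, kt, eta, phi):
    """Exact v = m_J^2 / |pt_vec + kt_vec|^2 for a hard parton (azimuth 0)
    recombined with a soft gluon of transverse momentum kt."""
    p, k = four_vec(pt, y, 0.0), four_vec(kt, eta, phi)
    E, px, py, pz = (p[i] + k[i] for i in range(4))
    mJ2 = E*E - px*px - py*py - pz*pz
    return mJ2 / (px*px + py*py)

def v_eikonal(pt, y, kt, eta, phi):
    """Leading term of Eq. (eikonalv)."""
    return 2.0*kt/pt * (math.cosh(eta - y) - math.cos(phi))

pt, y, eta, phi = 200.0, 0.5, 0.9, 0.7
# The two expressions agree up to relative O(kt/pt) corrections as kt -> 0.
```

For massless momenta $m_J^2 = 2p\\cdot k$ is exact, so the only approximation is the replacement $|\\underline{p}_t+\\underline{k}_t|^2 \\to p_t^2$ in the denominator.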
Labelling the incoming partons as $1$ (fermion) and $2$ (gluon) and the measured jet as $3$, we have the following colour factors:\n\\begin{equation}\n\\label{eq:colfac}\nC_{12} = N_c, \\; C_{23} = N_c, \\; C_{13} = -\\frac{1}{N_c}.\n\\end{equation} \nFor the remaining Born channel with an incoming $q \\bar{q}$ pair we obtain the same set of colour factors as above but with an interchange of $2$ and $3$, to correspond to the fact that it is always the quark--(anti)quark dipole which is colour suppressed.\n\n\nThe calculation of individual dipole terms is carried out in \nAppendix~\\ref{app:global}. We use the results obtained there to construct the final answer. Let us focus on the Born channel $\\delta_1$ corresponding to an incoming gluon and quark with a \nmeasured quark jet. To construct the resummed exponent we combine the pieces $\\mathcal{R}_{ij}$ computed in the appendix, weighting them appropriately by the colour factors:\n\\begin{equation}\n\\mathcal{R}_{\\delta_1} = C_{12} \\, \\mathcal{R}_{12}+C_{13} \\, \\left( \\mathcal{R}_{13}^{\\mathrm{soft}} +\\mathcal{R}_{13}^{\\mathrm{coll.}} \n\\right)+C_{23} \\, \\left ( \\mathcal{R}_{23}^{\\mathrm{soft}} +\\mathcal{R}_{23}^{\\mathrm{coll.}} \\right).\n\\end{equation}\n\nUsing the dipole results obtained in the eikonal approximation in Appendix~\\ref{app:global}, together with the colour factors in Eq.~\\eqref{eq:colfac}, we obtain \n\\begin{equation}\n\\mathcal{R}_{\\delta_1} = \\frac{N_c^2-1}{N_c} \\mathcal{R}^{\\mathrm{coll.}} + N_c \\mathcal{R}^{\\mathrm{soft}}_{12}+\\frac{N_c^2-1}{N_c}\\mathcal{R}^{\\mathrm{soft}},\n\\end{equation}\nwhere we used the fact that $\\mathcal{R}_{13}^{\\mathrm{coll.}}=\\mathcal{R}_{23}^{\\mathrm{coll.}}= \\mathcal{R}^{\\mathrm{coll.}}$ and $\\mathcal{R}_{13}^{\\mathrm{soft}}=\\mathcal{R}_{23}^{\\mathrm{soft}}= \\mathcal{R}^{\\mathrm{soft}}$. 
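The colour factors of Eq.~(\\ref{eq:colfac}), and the combination $(N_c^2-1)\/N_c = 2C_F$ multiplying the collinear pieces, follow from colour conservation, ${\\bf T}_1+{\\bf T}_2+{\\bf T}_3=0$, which gives $C_{ij}=-2\\,{\\bf T}_i\\cdot{\\bf T}_j={\\bf T}_i^2+{\\bf T}_j^2-{\\bf T}_k^2$ in terms of the Casimirs. A quick exact cross-check (standard group theory, done here with rational arithmetic):

```python
from fractions import Fraction

Nc = 3
CF = Fraction(Nc * Nc - 1, 2 * Nc)   # quark Casimir, 4/3 for N_c = 3
CA = Fraction(Nc)                    # gluon Casimir

# Legs: 1 = incoming quark, 2 = incoming gluon, 3 = outgoing quark jet.
T1sq, T2sq, T3sq = CF, CA, CF

# Colour conservation T1 + T2 + T3 = 0 implies
# C_ij = -2 T_i.T_j = T_i^2 + T_j^2 - T_k^2 (k the remaining leg).
C12 = T1sq + T2sq - T3sq
C13 = T1sq + T3sq - T2sq
C23 = T2sq + T3sq - T1sq
```

One finds $C_{12}=C_{23}=N_c$, $C_{13}=2C_F-C_A=-1/N_c$, and $C_{13}+C_{23}=2C_F=(N_c^2-1)/N_c$, exactly the weights appearing above.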
Writing the result in terms of the colour factors $C_F$ and $C_A$, and inserting the explicit results for the various dipoles, we arrive at the following simple form:\n\\begin{multline} \\label{radiatorCF}\n\\mathcal{R}_{\\delta_1}(v)=2C_F \\int \\frac{\\alpha_s \\left( k_{t,J} \\right)}{2\\pi} \\frac{d k_{t,J}^2}{k_{t,J}^2} \\ln \\left (\\frac{R p_t e^{-3\/2}}{k_{t,J}} \\right) \\Theta \\left (\\frac{k_{t,J}^2}{p_t^2} -v \\right) \\Theta \\left(R^2 -\\frac{k_{t,J}^2}{p_t^2} \\right) \\\\ +2 C_F \\int \\frac{\\alpha_s \\left( k_{t,J}\\right)}{2\\pi} \\frac{d k_{t,J}^2}{k_{t,J}^2} \\ln \\left (\\frac{R k_{t,J}}{v p_{t}} \\right) \\Theta \\left (v-\\frac{k_{t,J}^2}{p_t^2} \\right ) \\Theta \\left(\\frac{k_{t,J}^2}{p_t^2} -\\frac{v^2}{R^2}\\right) \\\\ \n+{R^2} \\left(C_A+\\frac{C_F}{2} \\right) \\int_v^1\\frac{dx}{x} \\frac{\\alpha_s(x p_t)}{2 \\pi}+\\frac{R^4}{144} C_F \\int_v^1\\frac{dx}{x} \\frac{\\alpha_s(x p_t)}{2 \\pi},\n\\end{multline}\nwhere $k_{t,J}$ is the transverse momentum of the emitted gluon with respect to the jet.\n\nThe above result represents the decomposition of the resummed exponent into a collinear piece, contained in the first two lines of the above equation, and a soft wide-angle piece. We have included in the collinear piece a term $e^{-3\/2}$ in the argument of the logarithm, which corrects the eikonal approximation for hard collinear splittings of the final state quark jet. This correction term emerges from replacing the infrared-singular (pole) part of the $q \\to qg$ splitting function, to which the eikonal approximation is restricted, by the full splitting function. As one would expect, this collinear piece, which also contains the leading double logarithms, involves only the colour charge $C_F$ of the parton that initiates the measured massive jet, in this case a quark. The remaining part of the result above is a process-dependent soft large-angle piece that has a power-series expansion in the jet radius $R$, which we truncate at the $R^4$ term. 
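As a check on the structure of Eq.~(\\ref{radiatorCF}), at fixed coupling the two collinear integrals can be done in closed form and give $C_F\\frac{\\alpha_s}{2\\pi}\\left(L^2-3L\\right)$ with $L=\\ln\\frac{R^2}{v}$, exhibiting both the double logarithm and the hard-collinear $-3L$ term produced by the $e^{-3\/2}$ factor. A numerical sketch of this check (fixed coupling, stripped of the overall $2C_F\\alpha_s\/2\\pi$, purely illustrative):

```python
import math

def trapz(f, a, b, n=20000):
    """Simple trapezoidal rule on [a, b]; exact here since both
    integrands are linear in the integration variable."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

R, v = 0.6, 1e-3
L = math.log(R * R / v)

# Integration variable t = ln(kt_J^2 / pt^2), so dk^2/k^2 = dt.
# First line of Eq. (radiatorCF): ln(R pt e^{-3/2} / kt) = (ln R^2 - t)/2 - 3/2.
line1 = trapz(lambda t: 0.5 * (math.log(R * R) - t) - 1.5,
              math.log(v), math.log(R * R))
# Second line: ln(R kt / (v pt)) = (ln R^2 + t - 2 ln v)/2.
line2 = trapz(lambda t: 0.5 * (math.log(R * R) + t - 2.0 * math.log(v)),
              math.log(v * v / (R * R)), math.log(v))

closed_form = 0.5 * (L * L - 3.0 * L)   # (L^2 - 3L)/2 before the prefactor of 2
```

The two theta-function regions contribute $L^2/4-3L/2$ and $L^2/4$ respectively, summing to $(L^2-3L)/2$; the running coupling then converts these into the functions $f_1$, $f_2$ and $f_{{\\rm coll},q}$ quoted below.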
We note that the $R^4$ term emerges with a numerically small coefficient and shall make a negligible impact on our final results which can thus be essentially obtained by considering the ${\\cal O}(R^2)$ corrections alone. We also note that the calculation of the soft large-angle component of the result should mean that our results are more accurate than those obtained from MC event generators, which would only treat such pieces in a leading $N_c$ approximation.\n\nThe case of the other subprocess, which generates a gluon jet in the final state, is totally analogous. The result for the resummed exponent is\n\\begin{multline} \\label{radiatorCA}\n\\mathcal{R}_{\\delta_2}(v)=2C_A \\int \\frac{\\alpha_s \\left( k_{t,J} \\right)}{2\\pi} \\frac{d k_{t,J}^2}{k_{t,J}^2} \\ln \\left (\\frac{R p_te^{-2 \\pi \\beta_0\/C_A}}{k_{t,J}} \\right) \\Theta \\left (\\frac{k_{t,J}^2}{p_t^2} -v \\right) \\Theta \\left(R^2 -\\frac{k_{t,J}^2}{p_t^2} \\right) \\\\ +2 C_A \\int \\frac{\\alpha_s \\left( k_{t,J}\\right)}{2\\pi} \\frac{d k_{t,J}^2}{k_{t,J}^2} \\ln \\left (\\frac{R k_{t,J}}{v p_{t}} \\right) \\Theta \\left (v-\\frac{k_{t,J}^2}{p_t^2} \\right ) \\Theta \\left(\\frac{k_{t,J}^2}{p_t^2} -\\frac{v^2}{R^2}\\right) \\\\ \n+{R^2} \\left(2 C_F-\\frac{C_A}{2} \\right) \\int_v^1\\frac{dx}{x} \\frac{\\alpha_s(x p_t)}{2 \\pi}+{\\cal O}(R^4).\n\\end{multline}\nIn order to achieve NLL accuracy, the remaining integrals in Eq.~(\\ref{radiatorCF}) and Eq.~(\\ref{radiatorCA}) must be performed with the two-loop expression for the running coupling. We obtain:\n\\begin{eqnarray} \\label{Zjet_rad}\n\\mathcal{R}_{\\delta_1}&=& -C_F \\left (L f_1+f_2 + f_{{\\rm coll},q} \\right) - R^2 f_{\\rm l.a.} \\left (C_A+\\frac{C_F}{2} \\right) + {\\cal O}(R^4), \\nonumber \\\\\n\\mathcal{R}_{\\delta_2}&=& -C_A \\left (L f_1+f_2 + f_{{\\rm coll},g} \\right) -R^2 f_{\\rm l.a.} \\left ( 2 C_F -\\frac{C_A}{2}\\right) + {\\cal O}(R^4), \\nonumber \\\\\n\\end{eqnarray}\nwith $L=\\ln \\frac{R^2}{v}$. 
Explicit results for the functions $f_i$ are collected in Appendix~\\ref{app:resum}.\n\n\n\\subsection{Exponentiation: the dijet case}\nWe now turn our attention to the process\n\\begin{equation}\np(P_1) + p(P_2) \\to J (p_3)+J(p_4) + X,\n\\end{equation}\nwhere we want to measure the mass of the two leading jets. For convenience, we fix the kinematics of the two (back-to-back, in the eikonal limit) leading jets, i.e. their transverse momentum $p_T$ and their rapidity separation $|\\Delta y|$.\nThe calculation of the dipoles in the eikonal limit proceeds in the same way as in the Z+jet case that we have previously analysed. The main difference is \nthe more complicated colour algebra, which leads to a matrix structure for the resummed result. The formalism to perform the resummation in the presence of more than three hard partons was developed in~\\cite{KOS}. For each partonic subprocess we need to fix a colour basis and find the corresponding representations of the colour factors ${\\bf T}_i . {\\bf T}_j$; the resummed exponent then takes the form\n\\begin{equation} \\label{dijets_rc}\nf_{\\mathcal{B}, { \\rm global}}^{(\\delta )} = \\frac{1}{ {\\rm tr} \\, H_{\\delta} } \\sum_{J=3,4} {\\rm tr} \\left[ \\frac{ H_{\\delta} e^{-\\left( \\mathcal{G}_{\\delta,J}+\\gamma_E \\mathcal{G}'_{\\delta,J}\\right)^{\\dagger}}S_{\\delta,J} e^{-\\mathcal{G}_{\\delta,J}-\\gamma_E \\mathcal{G}'_{\\delta,J}}+(\\Delta y \\leftrightarrow - \\Delta y) }{\\Gamma\\left(1+2 \\mathcal{G}'_{\\delta,J} \\right)} \\right].\n\\end{equation}\nThe matrices $H_\\delta$ correspond to the different Born subprocesses and ${\\rm tr} \\, H_\\delta=\\frac{d \\sigma_0^{(\\delta)}}{ d\\mathcal{B}}$. 
We note that the resummed expression in Eq.~(\\ref{dijets_rc}) is written in terms of exponentials that describe the colour evolution of the amplitude~\\footnote{In the literature this resummed expression is written in terms of an anomalous dimension $\\Gamma$, where $\\mathcal{G}= \\Gamma \\xi$ and $\\xi$ is the appropriate evolution variable.}, rather than of the cross-section as in the Z+jet case. We obtain\n\\begin{eqnarray} \\label{dijets_rad}\n\\mathcal{G}_{\\delta,3}&=& -\\frac{{\\bf T}_3^2}{2}\\left (L f_1+f_2 + f_{{\\rm coll},3} \\right) + {\\bf T}_1 . {\\bf T}_2 f_{\\rm l.a.}(2 \\pi i +R^2)\n\\nonumber \\\\ &&+R^2f_{\\rm l.a.} \\left( \\frac{1}{4} {\\bf T}_3.{\\bf T}_4 \\tanh^2 \\frac{\\Delta y}{2}+ \\frac{1}{4} ({\\bf T}_1.{\\bf T}_3+ {\\bf T}_2.{\\bf T}_3) \\right. \\nonumber \\\\ &&+\\frac{1}{2} {\\bf T}_1.{\\bf T}_4 \\frac{e^{\\Delta y}}{1+\\cosh \\Delta y}+ \\left. \\frac{1}{2} {\\bf T}_2.{\\bf T}_4 \\frac{e^{-\\Delta y}}{1+\\cosh \\Delta y} \\right) \\nonumber +{\\cal O}(R^4), \\nonumber \\\\\n\\mathcal{G}_{\\delta,4}&=& -\\frac{{\\bf T}_4^2}{2}\\left (L f_1+f_2 + f_{{\\rm coll},4} \\right) + {\\bf T}_1 . {\\bf T}_2 f_{\\rm l.a.}(2 \\pi i +R^2)\n\\nonumber \\\\ &&+R^2f_{\\rm l.a.} \\left( \\frac{1}{4} {\\bf T}_3.{\\bf T}_4 \\tanh^2 \\frac{\\Delta y}{2}+ \\frac{1}{4} ({\\bf T}_1.{\\bf T}_4+ {\\bf T}_2.{\\bf T}_4) \\right. \\nonumber \\\\ &&+\\frac{1}{2} {\\bf T}_2.{\\bf T}_3 \\frac{e^{\\Delta y}}{1+\\cosh \\Delta y}+ \\left. \\frac{1}{2} {\\bf T}_1.{\\bf T}_3 \\frac{e^{-\\Delta y}}{1+\\cosh \\Delta y} \\right) \\nonumber +{\\cal O}(R^4). \\nonumber \\\\\n \\end{eqnarray}\nwhere the functions $f_i$ are reported in Appendix~\\ref{app:resum} and, as before, $L=\\ln \\frac{R^2}{v}$, $\\mathcal{G}'=\\partial_L \\mathcal{G}$. The collinear part of the result is diagonal in colour space, with a coefficient which is the Casimir of the jet. Large-angle radiation is instead characterised by a more complicated colour structure. 
We also note the presence of the imaginary phase due to Coulomb gluon exchange. We choose to work in the set of orthonormal bases specified in~\\cite{FKM}, to which we refer the reader for the explicit expressions. As a result, all the colour matrices are symmetric and the soft matrix appearing in Eq.~(\\ref{dijets_rc}) is the identity $S_{\\delta,J}=1$.\n\nAs an example, we report explicit results for the scattering of quarks with different flavours $q(i) q'(j) \\to q(k) q'(l)$. We work in a normalised singlet-octet basis:\n\\begin{eqnarray} \\label{qqqqbasis}\nc_1 &= & \\frac{1}{N_c}\\delta_{ik} \\delta_{jl}\\,, \\nonumber \\\\\nc_2 &=& \\frac{1}{\\sqrt{N_c^2-1}} \\left(\\delta_{il} \\delta_{jk}-\\frac{1}{N_c}\\delta_{ik} \\delta_{jl} \\right).\n\\end{eqnarray}\nIn the $t$ channel ($\\Delta y>0$), the hard scattering matrix is given by\n\\begin{equation} \\label{qqpqqpH} \nH(t,u)= \\frac{4}{N_c^2}\\left(\\begin{array}{cc}\n0 & 0\\\\\n 0 & \\frac{u^2+s^2}{t^2}\n \\end{array}\n \\right)\\,.\n\\end{equation}\nWe have that ${\\bf T}_3^2={\\bf T}_4^2=C_F$ and the other colour matrices are\n\\begin{eqnarray}\n{\\bf T}_1 . {\\bf T}_2= {\\bf T}_3. {\\bf T}_4&=& \\left(\\begin{array}{cc}\n0 & \\frac{\\sqrt{N_c^2-1}}{2 N_c}\\\\\n \\frac{\\sqrt{N_c^2-1}}{2 N_c} &-\\frac{1}{N_c}\n \\end{array} \\right)\\,, \\nonumber \\\\\n {\\bf T}_1 . {\\bf T}_3= {\\bf T}_2. {\\bf T}_4&=& \\left(\\begin{array}{cc}\n-C_F & 0\\\\\n 0 &\\frac{1}{2N_c}\n \\end{array} \\right)\\,, \\nonumber \\\\\n {\\bf T}_2 . {\\bf T}_3= {\\bf T}_1. {\\bf T}_4&=& \\left(\\begin{array}{cc}\n0 & -\\frac{\\sqrt{N_c^2-1}}{2 N_c}\\\\\n -\\frac{\\sqrt{N_c^2-1}}{2 N_c} &\\frac{1}{2N_c}-C_F\n \\end{array} \\right)\\,.\n \\end{eqnarray}\n\n\n\\subsection{The constant term $C_1$} \\label{sec:C1}\nIn order to achieve the NNLL accuracy in the perturbative expansion that is standard in most resummation studies of event-shape variables, one must also consider the $\\mathcal{O}(\\alpha_s)$ corrections, which are not logarithmically enhanced in the small jet-mass limit, and their cross-talk with the double-logarithmic terms arising from the Sudakov form factors. The constant terms can be expressed as:\n\\begin{eqnarray} \\label{C1def}\n\\alpha_s \\mathcal{C}_{1}^{(\\delta)}&=& \\lim_{v \\to 0} \\left[ \\Sigma_{\\rm NLO}^{(\\delta)}(v)-\\Sigma^{(\\delta)}_{{\\rm NLL},\\alpha_s}(v)\\right]=\n \\lim_{v \\to 0} \\left[ \\int_0^{v}\\frac{d \\sigma^{(\\delta)}}{d v}d v-\\Sigma^{(\\delta)}_{{\\rm NLL},\\alpha_s}(v)\\right] \\nonumber \\\\ &=&\n \\lim_{v \\to 0} \\left[ \\sigma^{(\\delta)}_{\\rm NLO}-\\int_{v}^{v_{\\rm max}}\\frac{d \\sigma^{(\\delta)}}{d v}d v-\\Sigma^{(\\delta)}_{{\\rm NLL},\\alpha_s}(v)\\right] \\nonumber \\\\ &=&\n\\sigma^{(\\delta)}_{\\rm NLO}+\\lim_{v \\to 0} \\left[\\int^{v}_{v_{\\rm max}}\\frac{d \\sigma^{(\\delta)}}{d v} d v-\\Sigma^{(\\delta)}_{{\\rm NLL},\\alpha_s}(v)\\right].\n\\end{eqnarray}\nIf $\\delta$ is a partonic channel that is also present at Born level, then we can recover the usual definition of the constant term:\n\\begin{equation}\n{C}_{1}^{(\\delta)}=\\frac{\\mathcal{C}_{1}^{(\\delta)}}{\\sigma_0^{(\\delta)}}.\n\\end{equation}\n\nThe general kinematic, colour and flavour structure of $\\mathcal{C}_{1}^{(\\delta)}$ can be rather complicated. 
However, as discussed for global event-shapes in Refs.~\\cite{BSZcaesar, BSZpheno}, one can simply multiply together this constant and the appropriate resummed exponent $f_{\\mathcal{B}}^{(\\delta)}$ previously discussed, essentially because the only relevant terms at NLL originate from the product of $C_{1}^{(\\delta)}$ times the double logarithms (soft and collinear) in the exponent, which depend neither on the colour flow in the hard scattering nor on the parton distribution functions. This is also true in the case of the jet-mass we are considering in this paper, with the further complication that we need to specify the flavour of the jet we are measuring, because quark and gluon jets will receive different suppression factors. In particular, new channels open up at relative order $\\alpha_s$ which are not related to the Born channels, and hence obtaining the contribution of the constant terms separately for these channels is an involved exercise. We leave the complete determination of $C_1$ in different experimental set-ups, as well as an analysis of the dijet case, for future work. However, for the case of the jet mass in Z+jet events, where we measure the mass of the hardest jet, we shall later provide an estimate of the contribution of the constant terms $C_1$ to the resummed distribution. \n\n\n\n\\section{Non-global logarithms} \\label{sec:nglogs}\n\\subsection{Fixed order}\nAs we emphasised before, the jet mass that we study here is a non-global observable, which means that the results presented in the previous section are not sufficient to obtain the next-to-leading logarithmic accuracy that is common for event shape variables in $e^{+}e^{-}$ annihilation. One is also required to correct the results for the effect of soft wide-angle emissions arising from gluon branching; in other words, correlated gluon emission as opposed to independent emission of soft gluons by the hard particle ensemble. 
Calculations involving correlated gluon emission have been carried out at fixed order and at all orders (in the leading $N_c$ limit), in the simpler cases of non-global event shapes in $e^{+}e^{-}$ annihilation (such as the light jet-mass) and DIS. To date, a full calculation, even at fixed order, has not been carried out for hadron collisions; we shall perform one below, in the context of the jet-mass distribution. We also note that in our previous work we carried out a calculation of non-global logarithms for jet masses of jets produced in $e^{+}e^{-}$ annihilation, in the limit of small jet radius $R \\ll 1$, which we argued could serve as a model for the calculation in the hadron collision case. However, in the current work we shall lift the approximation of small $R$, meaning that our calculations should be useful for jets of any radius. \n\nTo address the correlated emission term at leading order we need to consider the case where one has two energy-ordered soft gluons $k_1$ and $k_2$ such that, for instance, $\\omega_1 \\gg \\omega_2$, where $\\omega_1$ and $\\omega_2$ are the respective energies. In previous work involving $e^{+}e^{-}$ annihilation we have addressed this situation by using the fact that the emission of such a two-gluon system off a hard $q\\bar{q}$ dipole is described by an independent emission term with colour factor $C_F^2$ and a correlated emission term with colour factor $C_F C_A$. However, in the present case of multiple hard partons, the emission of a two-gluon system is in principle more involved, since there are several emitting dipoles. In practice, the structure of two-parton emission for the cases we study in this paper (i.e. up to $n=4$ hard legs) is known to be remarkably simple~\\cite{CatGraz2gluon,BMDZ3jet}.\n\nAs an illustrative example we consider again the Z+jet case, with three hard legs and the Born channel $\\delta_1$, with an outgoing quark jet. 
\nThe squared matrix element once again contains an independent emission piece, which contributes to the exponentiation of the single gluon result, described by our function $\\mathcal{R}$. This leaves the correlated parton emission term, which has the structure\n\\begin{equation}\nW^{\\mathrm{corr.}}(k_1,k_2) = \\frac{N_c}{2} w_{12}^{\\left(2\\right)}+\\frac{N_c}{2} w_{23}^{\\left(2\\right)}-\\frac{1}{2N_c} w_{13}^{\\left(2 \\right)},\n\\end{equation}\nwhere $w_{ij}^{\\left(2 \\right)}$ is the correlated two-gluon emission by the dipole $ij$, which is the same as for the $q \\bar{q}$ case studied in $e^{+}e^{-}$ annihilation. Hence the dipole emission and associated colour structure for a correlated two-parton system is precisely the same as for a single gluon emission \\cite{BMDZ3jet}. \n\nNext we note that, as described for example in Ref.~\\cite{BMDZ3jet}, a piece of the correlated two-parton emission contribution actually goes to build up the running coupling we have already considered in the exponentiated single gluon contribution $\\mathcal{R}$. This piece comprises gluon splittings into equally hard gluons or into a $q \\bar q$ pair, which together produce the leading term of the QCD $\\beta$ function \\footnote{For a non-global observable such hard splittings will also give rise to non-global logarithms below the single logarithmic level, which are subleading from our point of view~\\cite{KSZ2,KKK}.}. This leaves us to consider only the soft part of the correlated emission $\\mathcal{S}$~\\cite{milan}, which describes the production of an energy-ordered two-gluon system. For a global observable, as is well known, this term produces no effect at single-logarithmic accuracy, whereas in the present case it provides us with the first term of the non-global contribution. 
For a general dipole $ij$ we can explicitly write\n\\begin{equation}\n\\label{eq:coll}\n\\mathcal{S}_{ij} = 2 C_A {\\bf T}_i.{\\bf T}_j \\left (A_{ab}+A_{ba}-A_a A_b \\right),\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq:ngdip}\nA_{ab} = \\frac{(ij)}{(i k_a)(k_a k_b)(k_b j)}\\,, \\quad A_{a} = \\frac{(ij)}{(i k_a) (j k_a)}.\n\\end{equation} \nWe note that this term is free of collinear singularities along the hard legs $i$ and $j$, due to cancellations between the various terms of Eq.~\\eqref{eq:coll}. The remaining collinear singularity, arising from the $1\/\\left( k_1.k_2 \\right)$ factor involving the soft gluons, will turn out to be integrable for the non-global configurations considered below. Thus, to obtain the leading non-global term for the jet mass under consideration, it suffices to carry out dipole calculations using Eqs.~\\eqref{eq:coll} and \\eqref{eq:ngdip}, for each of the hard emitting dipoles, just as we did for the single gluon emission case. The results we obtain are presented and commented on below, with the details of the calculation in Appendix~\\ref{app:nonglobal}. We report below the coefficients $I_{ij}$ of the non-global single logarithms:\n\\begin{equation}\n\\int [d k_1] [d k_2] \\mathcal{S}_{ij} \\Theta_{k_1 \\notin J } \\Theta_{k_2 \\in J } \\equiv C_A {\\bf T}_i.{\\bf T}_j I_{ij} \\ln^2 \\frac{1}{v}.\n\\end{equation}\n\n\\begin{itemize}\n\n\\item {\\bf{dipoles involving the measured jet}}\n\nWe first consider dipoles involving the measured jet:\n\\begin{equation}\nI_{13}=I_{23}\\simeq I_{34} = \\frac{\\pi^2}{3}+ \\alpha_2 R^2+\\alpha_4 R^4+{\\cal O}(R^6).\n\\end{equation}\nWe find that the corrections to the small-$R$ approximation~\\cite{BDKM} are numerically small, with $\\alpha_2\\approx 0$ and $\\alpha_4\\approx 0.013$. \nThe dipole $I_{34}$, which is relevant for the dijet calculation, depends, in principle, on the rapidity separation $\\Delta y$ between the leading jets. 
\nThis dependence will be associated with powers of $R$ that appear to make a negligible contribution. \n\nWe note that the result $\\frac{\\pi^2}{3}$ is exact for $e^{+}e^{-}$ collisions, where one defines the jet in terms of a polar angle. This result is precisely the same as was obtained for hemisphere jet masses in $e^{+}e^{-}$ annihilation, and implies that it does not depend on the position of the jet boundary, a feature that has also been observed in~\\cite{HLWZinout}.\n\n\n\n\n \\item {\\bf{in-in dipole}}\n \nHere we are considering a situation where the emitting dipole legs are away from the interior of the jet region into which the softest gluon $k_2$ is emitted. This situation is reminiscent of the much-studied case of non-global logarithms in energy flow into gaps between jets \\cite{DassalNG2,BMS}. In Ref.~\\cite{DassalNG2}, for instance, energy flow into an inter-jet region (i.e. a region between, but away from, the hard legs of an emitting dipole) was considered for specific choices of the geometry of this region, such as a rapidity slice and a patch in rapidity and azimuth. In the present case we are of course studying the same problem, but the region being considered here is a circle $(\\eta-y)^2+\\phi^2 \\le R^2$, centred on the measured jet. The non-global contribution due to the in-in dipole was computed numerically and has the small-$R$ behaviour\n\n\\begin{equation}\nI_{12} \\approx 4 \\left[1.17 R^2-R^2 \\ln 2R +\\mathcal{O} \\left(R^4 \\right)\\right ].\n\\end{equation}\nWe note that the $R^2$ behaviour arises simply from integrating the emitted softest gluon $k_2$ over the interior of the jet region, while the $\\ln R$ behaviour is a reflection of the collinear singularity we mentioned above, between the two soft gluons $k_1$ and $k_2$. 
In practice, for larger values of $R \\sim 1$, the above small-$R$ form is not a good approximation to the full result obtained numerically; hence we eventually use the numerical answer rather than the form above.\n\n\n\\item {\\bf{in-recoil dipole}}\n\nHere again the situation is similar to the case of the interjet energy flow, with the only difference from the previous dipole being the geometry of the hard dipole legs, which are now formed by an incoming parton and an outgoing parton at a finite rapidity with respect to the beam.\n The result reads\n\\begin{equation}\nI_{14} \\approx\\frac{\\left(1+e^{\\Delta y}\\right)^2}{\\left(1+\\cosh \\Delta y \\right)^2} \\left(1.17 R^2 -R^2 \\ln 2 R\\right)+\\frac{(1+e^{\\Delta y})}{1+\\cosh \\Delta y} \\kappa (\\Delta y) R^2\\,,\n\\end{equation}\nwhere $\\kappa$ depends on the rapidity difference $\\Delta y =y-y_r$ and is evaluated numerically. In the limit $\\Delta y \\to \\infty$, $\\kappa(\\Delta y) \\to 0$ and one recovers the previous case.\nThe dipole $I_{24}$ is easily obtained as \n\\begin{equation}\nI_{24}= I_{14}(-\\Delta y).\n\\end{equation}\n\n\\end{itemize}\n\n\\subsection{All-order treatment}\nIn order to perform a complete (NLL) resummation of non-global logarithms, one would need to consider the emission of a soft gluon from an ensemble of any number of gluons. This problem can be treated only in the large-$N_c$ limit~\\cite{DassalNG1,BMS}\\footnote{An alternative approach that can be found in the literature consists of an expansion in the number of out-of-jet gluons (in this case), keeping the full colour structure; see for instance~\\cite{FKS1,FKS2,FKM,DFMS}.}. For this study, we have adapted the dipole-evolution code used in~\\cite{DassalNG1} to perform the resummation in the case of jet masses. \n\nThe code developed in Ref.~\\cite{DassalNG1} handled the case of evolution off a hard primary dipole in the leading $N_c$ limit. 
The result for the non-global contribution $S(t)$ was obtained by dividing the resummed result by the contribution of primary emissions alone, off the hard emitting dipole. For our work here we have a situation with several possible emitting dipoles. In this situation one has to resort to the large $N_c$ limit, in which one can treat the problem as independent evolution of only the {\\emph{leading colour-connected dipoles in the hard process}}. Detailed formulae can be found in Ref.~\\cite{BMS}.\n\nIn the Z+jet case, which is simpler, we have noted that there is no visible difference arising from considering just the leading colour-connected dipoles in the hard process relative to the case where one evolves \\emph{all} hard dipoles using the evolution code. We choose the latter option here and hence write the full resummed expression, with the exception of the contributions coming from the constant terms $C_1$, as\n\\begin{equation} \\label{masterZj}\n\\Sigma(v)= \\sum_\\delta \\int d \\mathcal{B} \\frac{d \\sigma^{(\\delta)}_0}{d \\mathcal{B}} f^{(\\delta)}_{\\mathcal{B},{\\rm global}} f^{(\\delta)}_{\\mathcal{B}, {\\rm non-global}} \\mathcal{H(B)}.\n\\end{equation}\nThe resummation of non-global logarithms, including contributions from the colour-suppressed hard dipoles, is encoded in the two terms\n\\begin{eqnarray} \\label{NGZjet}\nf^{(\\delta_1)}_{\\rm non-global}&=& \\exp \\left (-C_A C_F I_{13}(R) f_{13}(t) -\\frac{C_A^2}{2} I_{12}(R )f_{12}(t) \\right), \\\\\nf^{(\\delta_2)}_{\\rm non-global}&=& \\exp \\left (-C_A^2 I_{13} (R)f_{13}(t)-C_A \\left( C_F -\\frac{C_A}{2}\\right) I_{12}( R ) f_{12}(t) \\right),\n\\end{eqnarray}\nwhere we have used that $I_{13}=I_{23}$; the other contribution $I_{12}$ does depend on $R$ and vanishes in the $R\\to 0$ limit, where one recovers the picture of jets evolving independently. 
As we stated before, taking the large $N_c$ limit of the non-global contributions would amount to switching off the contribution from the colour-suppressed dipoles or, equivalently, choosing $C_F =C_A\/2$ in the above. This produces no significant difference in our results, but the result written above has the advantage that it includes the full contribution to the ${\\cal O}(\\alpha_s^2)$ non-global coefficient.\nWe have defined\n\\begin{equation}\nf_{ij}(t)= \\frac{1+(a_{ij} t)^2}{1+(b_{ij}t)^{c_{ij}} } t^2\\,, \\quad t=\\frac{1}{4 \\pi \\beta_0} \\ln \\left(1-2 \\alpha_s(p_T) \\beta_0 \\ln\\frac{R^2}{v}\\right)\\,.\n\\end{equation}\nThe coefficients $a_{ij}, b_{ij}, c_{ij}$ are obtained by fitting the functional form above to the numerical results from the large-$N_c$ dipole-evolution code. Numerical results are reported in Appendix~\\ref{app:resum}.\n\nFollowing a similar method, we obtain the corresponding result for the dijet case:\n\\begin{eqnarray} \\label{masterdijets}\n\\Sigma(v)&=& \\sum_{\\delta,J=3,4} \\int d \\mathcal{B} \\,{\\rm tr} \\left[ \\frac{ H_{\\delta} e^{-\\left( \\mathcal{G}_{\\delta,J}+\\gamma_E \\mathcal{G}'_{\\delta,J}+ \\mathcal{S}_{\\delta,J}\\right)^{\\dagger}} e^{-\\mathcal{G}_{\\delta,J}-\\gamma_E \\mathcal{G}'_{\\delta,J}-\\mathcal{S}_{\\delta,J}}+(\\Delta y \\leftrightarrow - \\Delta y) }{\\Gamma\\left(1+ 2 \\mathcal{G}'_{\\delta,J} \\right)} \\right]\\mathcal{H(B)}.\\nonumber \\\\\n\\end{eqnarray}\nUp to small ${\\cal O}\\left(R^4 \\right)$ corrections, we have\n\\begin{eqnarray}\n\\mathcal{S}_{\\delta,3}&=& \\Big [ \\frac{C_A}{2} \\Big (I_{13}(R){\\bf T}_3^2 f_{13}(t)- {\\bf T}_1.{\\bf T}_2 I_{12} ( R )f_{12}(t)-\n {\\bf T}_1 . {\\bf T}_4 I_{14}(R,\\Delta y) f_{14}(t) \\nonumber \\\\ &&-{\\bf T}_2.{\\bf T}_4 I_{24}(R, \\Delta y)f_{24}(t) \\Big) \\Big],\\nonumber \\\\\n\\mathcal{S}_{\\delta,4}&=& \\Big [ \\frac{C_A}{2} \\Big (I_{13}(R){\\bf T}_4^2 f_{13}(t)- {\\bf T}_1.{\\bf T}_2 I_{12} ( R )f_{12}(t)-\n {\\bf T}_2 . 
{\\bf T}_3 I_{14}(R,\\Delta y) f_{14}(t) \\nonumber \\\\ && -{\\bf T}_1. {\\bf T}_3 I_{24}(R, \\Delta y)f_{24}(t) \\Big) \\Big],\n\\end{eqnarray}\nwhere we have used that $I_{13}=I_{23}\\simeq I_{34}$.\nAs before, the above expressions capture the full colour structure of the non-global contribution at ${\\cal O}(\\alpha_s^2)$, but beyond that are valid only in the large-$N_c$ limit.\n\n\\section{Z+jet at the LHC} \\label{sec:Zjet}\nIn this section we investigate the numerical impact of the different contributions which are relevant in order to achieve NLL accuracy. We choose to study the differential distribution\n$\\frac{d \\sigma}{d \\zeta} $, where $\\zeta =\\sqrt{v}=\\frac{m_{ J}}{p_{ T}}$, so as to consider the jet mass distribution directly rather than the squared jet mass. We also find it useful to work with a dimensionless ratio to better separate soft physics contributions. In fact, a fairly large value of the jet mass can be generated by the emission of a very soft gluon, if the transverse momentum of the hard jet is large, while small values of $\\zeta$ always correspond to the emission of soft and\/or collinear gluons. If not stated otherwise, we normalise our curves to the Born cross-section. We use the matrix element generator \\textsc{Comix}~\\cite{comix}, included in \\textsc{Sherpa}~\\cite{sherpa}, to produce all the tree-level cross-sections and distributions. We consider proton-proton collisions at $\\sqrt{s}=7$~TeV and we select events requiring $p_{T}>200$~GeV; the Z boson is produced on-shell and does not decay. Jets are defined using the anti-$k_t$ algorithm~\\cite{antikt}.\nIn our calculation we use the set of parton distribution functions \\textsc{Cteq}6m~\\cite{cteq6m}, with renormalisation and factorisation scales fixed at $\\mu_R=\\mu_F=200$~GeV, to ease the comparison with different Monte Carlo parton showers, which we are going to perform in Section~\\ref{sec:showers}. 
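As a simple illustration of the observable just defined, the following Python sketch computes $\zeta = m_J/p_T$ for a jet built from massless constituent four-momenta. The constituent momenta are invented purely for illustration and are not taken from our event samples.

```python
import math

def four_momentum(pt, y, phi):
    """Massless four-momentum (E, px, py, pz) from transverse momentum, rapidity, azimuth."""
    return (pt * math.cosh(y), pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(y))

def jet_zeta(constituents):
    """zeta = m_J / p_T for a jet made of the given massless constituents."""
    E, px, py, pz = (sum(c[i] for c in constituents) for i in range(4))
    m2 = max(E * E - px * px - py * py - pz * pz, 0.0)  # guard against round-off
    return math.sqrt(m2) / math.hypot(px, py)

# A 200 GeV hard particle plus one soft (5 GeV) emission inside the jet:
jet = [four_momentum(200.0, 0.0, 0.0), four_momentum(5.0, 0.3, 0.25)]
print(jet_zeta(jet))
```

A single-particle jet gives $\zeta = 0$; soft and collinear emissions populate small $\zeta$, in line with the discussion above.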
\n\n\\subsection{Different approximations to the resummed exponent} \\label{sec:exponent}\n\nWe start by considering different approximations to the resummed exponent $f^{(\\delta)}_{\\mathcal{B}}$. \nWe present our results for two different jet radii: $R=0.6$, in Fig.~\\ref{fig:comparisonR06}, and $R=1.0$, in Fig.~\\ref{fig:comparisonR10}.\nThe blue curve corresponds to the simplest approximation to the NLL resummed exponent: the jet-function. This approximation correctly resums soft and collinear radiation as well as hard collinear radiation, but does not capture all soft radiation at large angles. In particular, this corresponds to neglecting terms that are suppressed by powers of the jet radius in the resummed exponent Eq.~(\\ref{Zjet_rad}). These terms are included in the resummation of all global contributions (green line). We have checked that inclusion of ${\\cal O}(R^2)$ terms is enough, because the ${\\cal O}(R^4)$ corrections are below the percent level even for $R=1.0$. We stress that up to this point no approximation of the colour structure has been made, although we have checked that sub-leading colour corrections are small, once the collinear part has been properly treated. Finally, in the red curve we also take into account the resummation of non-global logarithms as described by Eq.~(\\ref{masterZj}). The first, ${\\cal O}(\\alpha_s^2)$, coefficient of the non-global contribution is computed exactly, while the subsequent resummation is performed using a numerical dipole-evolution code in the large-$N_c$ limit. \nWe note that the inclusion of ${\\cal O}(R^2)$ terms in the resummed exponent, as well as of non-global logarithms, noticeably corrects the simple jet-function picture, based on collinear evolution. The peak height is reduced by more than 30\\% for $R=0.6$ and it is nearly halved for $R=1.0$. 
The effect of non-global logarithms is reduced in the latter case, but the ${\\cal O}(R^2)$ corrections to the jet-function approximation become larger.\n\\begin{figure} \n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{plots\/comparison_zeta_R06}\n\\caption{Comparison between different approximations to the resummed exponent: jet functions (blue), with full resummation of the global contribution (green) and with non-global logarithms as well (red). The jet radius is $R=0.6$.} \\label{fig:comparisonR06}\n\\end{center}\n\\end{figure}\n\n\\begin{figure} \n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{plots\/comparison_zeta_R10}\n\\caption{Comparison between different approximations to the resummed exponent: jet functions (blue), with full resummation of the global contribution (green) and with non-global logarithms as well (red). The jet radius is $R=1.0$.} \\label{fig:comparisonR10}\n\\end{center}\n\\end{figure}\n \\FloatBarrier\n\n\\subsection{Matching to fixed order} \\label{sec:matching}\nWe now turn our attention to obtaining a jet-mass distribution which is reliable for all values of $\\zeta$. We achieve this by matching to a fixed-order (FO) calculation. \nAlthough a complete phenomenological analysis would require matching to a next-to-leading order (NLO) QCD calculation, for the purpose of this paper we limit ourselves to LO matching. \nWe compute the differential jet-mass distribution at ${\\cal O}(\\alpha_s)$ using \\textsc{Comix}. The result is plotted in Fig.~\\ref{fig:matching} (dashed black line): the differential distribution $\\frac{d \\sigma}{d \\ln \\zeta}$ diverges logarithmically in the small-mass limit. The dotted green line instead represents the resummed result. The matched curve (shown in solid red) is obtained by straightforwardly adding the two contributions and removing the double-counted terms, i.e. 
the expansion of the resummation to ${\\cal O}(\\alpha_s)$:\n\\begin{equation} \\label{matching}\n\\frac{1}{\\sigma}\\frac{{ d \\sigma}_{\\rm NLL+LO}}{d \\ln \\zeta}= \\frac{1}{\\sigma}\\left[ \\frac{d \\sigma_{\\rm LO}}{d \\ln \\zeta}+\\frac{d \\sigma_{\\rm NLL}}{d \\ln \\zeta}-\\frac{d \\sigma_{{\\rm NLL},\\alpha_s}}{d \\ln \\zeta} \\right].\n\\end{equation}\nWe note that the matched result coincides with the resummation at small $\\zeta$, because the logarithmically divergent contributions to the LO distribution are cancelled by the expansion of the resummation. Moreover, the matched distribution follows the LO one in the opposite limit. In particular, we note that the LO result exhibits an end-point:\n\\begin{equation}\n\\zeta^2 =\\frac{m_{ J}^2}{p_{T}^2} = \\frac{2 p_t k_t}{|\\underline{p}_t+\\underline{k}_t|^2} \\left(\\cosh y - \\cos \\phi \\right)=\\frac{2 p_t k_t}{p_t^2+k_t^2+2 p_t k_t \\cos \\phi} \\left(\\cosh y - \\cos \\phi \\right),\n\\end{equation}\nwhich leads to \n\\begin{equation}\n\\zeta_{\\rm max}=\\sqrt{\\max_{y^2+\\phi^2\\le R^2} \\zeta^2}=\\tan \\frac{R}{2}=\\frac{R}{2}+O(R^3).\n\\end{equation}\nThis feature is not present in the resummed distribution, because the jet does not recoil against the emission of the eikonal gluon, but it is restored by the matching.\n\\begin{figure} \n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{plots\/matched_log_zeta_R_06_200GeV}\n\\includegraphics[width=0.49\\textwidth]{plots\/matched_log_zeta_R_10_200GeV} \n\\caption{Matching of the NLL resummed distribution to the LO one for $R=0.6$ (on the left) and $R=1.0$ (on the right).}\\label{fig:matching}\n\\end{center}\n\\end{figure}\n\n \\FloatBarrier\n\n\\subsection{Numerical estimate of the constant term $C_1$} \\label{sec:C1num}\n\nIn this section we investigate the numerical impact of the constant $C_1$ defined in Eq.~(\\ref{C1def}), in the case of Z+jet events, where we measure the mass of the highest $p_{\\rm T}$ jet.\nDifferent contributions 
build up the constant terms at $\\mathcal{O}(\\alpha_s)$: we have one-loop virtual corrections, terms that cancel the renormalisation and factorisation scale dependence of the Born pieces, and contributions with multiple partons in the final state that may, or may not, end up in the same jet. \nOne-loop virtual corrections have the same kinematical and flavour structure as the corresponding Born subprocess $\\delta$ and, consequently, they need to be suppressed by the same resummed exponent. In order to cancel infrared divergences, one needs to consider real emissions in the soft and collinear limit as well. The final-state singularities are precisely the source of the logarithms we are resumming and these configurations can be mapped onto one of the Born subprocesses $(\\delta)$. Initial-state collinear singularities, which do not give rise to any logarithms of the jet mass, must be absorbed into the parton densities, leaving behind terms that depend on the factorisation scale. Finally, we also need to consider kinematic configurations where the two final-state partons are not recombined, resulting in Z+2 jet events, and suppress the hardest jet with the appropriate exponent. Thus, we should perform the calculation of the NLO cross-section that appears in Eq.~(\\ref{C1def}) keeping track of the kinematics of the final-state partons, in order to separate the $g$ and $q$ components. Although this could clearly be done by computing these corrections analytically, for the current analysis we use the program MCFM~\\cite{mcfm}, which does not trivially allow us to do so. Nevertheless, we are able to compute the finite part of the virtual corrections, for the different initial-state channels. \nWe suppress the virtual corrections, as well as the integrated term in Eq.~(\\ref{C1def}), with their appropriate form factor $f^{(\\delta)}$. We then multiply all the remaining real corrections by either a gluon or a quark form factor, producing a band. 
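The structure of the extraction in Eq.~(\ref{C1def}) can be mimicked with a toy cumulative distribution. The coefficients below are invented and serve only to show that subtracting the expansion of the resummation isolates a finite constant as $v \to 0$.

```python
import math

A, B, C = -2.0, 3.0, 0.7  # toy double-log, single-log and constant coefficients

def sigma_nlo(v):
    """Toy NLO cumulative cross-section: logarithms, a constant, and power corrections."""
    L = math.log(1.0 / v)
    return A * L * L + B * L + C + 5.0 * v  # the 5*v term mimics power-suppressed pieces

def sigma_nll_alphas(v):
    """First-order expansion of the toy resummation: the logarithms only."""
    L = math.log(1.0 / v)
    return A * L * L + B * L

# The difference tends to the constant C as v -> 0:
for v in (1e-2, 1e-4, 1e-8):
    print(v, sigma_nlo(v) - sigma_nll_alphas(v))
```

The logarithms cancel identically in the subtraction, leaving the constant plus power-suppressed remainders that vanish in the limit, which is the content of the definition above.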
\n\nOur findings are plotted in Fig.~\\ref{fig:C1}, for $R=0.6$, on the left, and $R=1.0$, on the right\\footnote{We have suppressed $C_1$ with the full global resummed exponent, producing terms beyond our NLL accuracy, which we do not control. A more precise analysis would involve the complete determination of $C_1$, suppressed only with double-logarithmic terms, together with an uncertainty band, assessing the impact of higher logarithmic orders.}.\nWhen $C_1$ is included, we normalise the distribution to the total NLO rate, rather than the usual Born cross-section. In order to avoid large NLO corrections~\\cite{giantKfact}, we impose a cut on the Z boson transverse momentum $p_{ TZ}>150$~GeV. We have found that this leads to $K$-factors $K=1.45$ and $K=1.57$, for $R=0.6$ and $R=1.0$, respectively. With this set of cuts, the impact of $C_1$ is moderate, but certainly not negligible. The complete calculation of this constant is therefore necessary in order to be able to perform accurate phenomenology. \n\n\\begin{figure} \n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{plots\/C1_zeta_R_06_200GeV}\n\\includegraphics[width=0.49\\textwidth]{plots\/C1_zeta_R_10_200GeV}\n\\caption{The impact of the NLL constant $C_1$, for $R=0.6$ jets (on the left) and $R=1.0$ (on the right). The band is produced by suppressing the real radiation contributions with a quark or gluon jet form factor, as explained in the text.}\\label{fig:C1}\n\\end{center}\n\\end{figure}\n\n \\FloatBarrier\n\\subsection{Comparison to Monte Carlo event generators}\\label{sec:showers}\nIn this section we compare our resummed and matched result NLL+LO to three standard Monte Carlo event generators: \\textsc{Sherpa}~\\cite{sherpa}, \\textsc{Pythia}~\\cite{pythia} and \\textsc{Herwig}{\\footnotesize ++}~\\cite{herwig}. \nMonte Carlo parton showers are powerful tools to simulate complicated final states in particle collisions. 
When interfaced with hadronisation models, they are able to describe the transition between partons and hadrons. Moreover, they provide events which are fully differential in the particles' momenta. They also provide models of other non-perturbative effects at hadron colliders, such as the underlying event, for which phenomenological estimates from first principles of QCD are not possible. As stated before, however, despite their usefulness and successes, it is quite difficult to assess the theoretical precision of these tools. For this reason, comparisons between parton showers and analytic calculations, which have a well-defined theoretical accuracy, form an important part of QCD phenomenology. It has been noted in recent ATLAS studies~\\cite{ATLASjetmass} of jet masses that \\textsc{Pythia} with hadronisation and the underlying event gives a reasonably good description of the data. We would therefore expect our resummation to be in reasonable accord with \\textsc{Pythia}, though we should stress that we do not include any non-perturbative effects. Hence we compare our results to the parton shower aspect of the various event generators on its own. While this is in principle correct, in practice one should be aware that there can be considerable interplay between the shower level and the non-perturbative effects in various event generators, so that these programs may only return more meaningful results when all physical effects (perturbative and non-perturbative) are considered together. We should bear this caveat in mind while attempting to compare a resummed prediction with just the parton shower models in event generators. \n\nThe results of the comparison are shown in Fig.~\\ref{fig:shower}. Our NLL+LO result for $R=0.6$ is shown in red (the band represents the uncertainty due to the incomplete determination of $C_1$). 
The Monte Carlo results are obtained with the same parton densities as in the resummed calculation and the same set of cuts. For \\textsc{Sherpa} (blue line) and \\textsc{Pythia} (green line) we fix $\\mu_R=\\mu_F=200$~GeV, while for \\textsc{Herwig}{\\footnotesize ++} (magenta line) we use the default transverse mass of the Z boson. \nAt the shower level, \\textsc{Sherpa} and \\textsc{Herwig}{\\footnotesize ++} appear to perform quite similarly. They produce fairly broad distributions, which are not very much suppressed as $\\zeta \\to 0$. \n\\textsc{Pythia} instead produces a curve which is much closer to our resummed result. Although the positions of the peaks differ by $ \\delta \\zeta =0.01$ ($\\delta m_{\\rm J}\\sim2$~GeV), the heights and general shapes appear in agreement. \n\nThe agreement between the different Monte Carlo generators is restored when hadronisation corrections are included, as demonstrated in Fig.~\\ref{fig:hadro}: \\textsc{Pythia} and \\textsc{Herwig}{\\footnotesize ++} produce very similar results, while the distribution obtained with \\textsc{Sherpa} is broader, but not too different. Clearly, in order to compare to collider data, one must also include the contribution from the underlying event. \n\nIt is also interesting to compare the resummed prediction computed in this paper, shifted to the right to account for hadronisation corrections, to the event generator results after hadronisation. The shift approximation, initially suggested in \\cite{DokWeb97}, should be valid to the right of the Sudakov peak but will certainly break down in the vicinity of the peak (see also the discussion in \\cite{DasSalDIS} and references therein). The amount of the shift is related to the non-perturbative correction to mean values of jet masses derived in \\cite{Dassalmag}. 
From that reference, we note that the $v=m_J^2\/p_T^2$ distribution should be shifted by an amount $\\alpha R\/p_T$ (the correction to the mean value), with a dimensionful coefficient $\\alpha$ that one can take to be of order a few times $\\Lambda_{\\rm QCD}$. The results of our calculations with a non-perturbative shift are shown in Fig.~\\ref{fig:hadro}. From there one notes that a shift of $1.5\\,\\mathrm{GeV}\\, R\/p_T$ in the $v$ distribution (which we carry over to the $\\zeta$ distribution plotted in Fig.~\\ref{fig:hadro}), where we take $p_T$ to be the lower bound on the transverse momentum (200 GeV in this case), yields excellent agreement with the \\textsc{Herwig}++ result after hadronisation. On the other hand, a slightly larger shift of $2.0 \\, \\mathrm{GeV}\\, R\/p_T $ yields good agreement with \\textsc{Pythia}. We have truncated the shifted resummed result near the peak of the resummed distribution, as we would not expect the shift to be meaningful beyond this region. Although we have made only a crude estimate of hadronisation effects, and one may be able to compute these corrections more accurately, we do note that within non-perturbative uncertainties our results are compatible with the most widely used event generator models. We therefore anticipate that after the further improvements we have in mind are accomplished, our results may be directly used for phenomenological purposes. 
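The shift just described can be sketched as follows; the numbers mirror the text ($\alpha = 1.5$ GeV, $R=0.6$, $p_T = 200$ GeV), but the function itself is only an illustrative mapping of a perturbative $\zeta$ value to its shifted one.

```python
import math

def shifted_zeta(zeta, alpha_gev=1.5, R=0.6, pt_gev=200.0):
    """Shift v = zeta^2 by alpha*R/p_T, then return the shifted zeta = sqrt(v)."""
    return math.sqrt(zeta * zeta + alpha_gev * R / pt_gev)

# Near the Sudakov peak (zeta ~ 0.05-0.1) the shift is a sizeable fraction of zeta:
for z in (0.05, 0.10, 0.20):
    print(z, shifted_zeta(z))
```

Since the shift is additive in $v$, its relative effect on $\zeta$ shrinks at large jet masses, consistent with the shift approximation being most relevant near, and to the right of, the peak.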
\n\n\\begin{figure} \n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{plots\/shower_ps}\n\\caption{Comparison of our resummed and matched result NLL+LO (in red) to standard Monte Carlo event generators, at the parton level.} \\label{fig:shower}\n\\end{center}\n\\end{figure}\n\n\\begin{figure} \n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{plots\/shower_hadr.eps}\n\\caption{Results for the $\\zeta$ distributions obtained with standard Monte Carlo parton showers, with hadronisation corrections (dashed lines) compared to analytical resummation with non-perturbative shifts (shaded bands) as explained in the main text.} \\label{fig:hadro}\n\\end{center}\n\\end{figure}\n\n \\FloatBarrier\n\\section{Dijets at the LHC} \\label{sec:dijets}\n\n\nIn this section we provide numerical predictions for the jet mass distribution in dijet events. As before, we consider proton-proton collisions at $\\sqrt{s}=7$~TeV, with jets defined according to the anti-$k_t$ algorithm~\\cite{antikt}.\nThe main complication with respect to the Z+jet case previously discussed is the more involved colour algebra, which results in a matrix structure for large-angle soft gluon radiation. In order to simplify our resummed calculation, we work at fixed kinematics, i.e. we demand the jets' transverse momentum to be $p_{T}=200$~GeV and their rapidity separation to be $|\\Delta y| = 2$ (at Born level we only have two jets). We remind the reader that we consider\n\\begin{equation}\n\\frac{1}{\\sigma}\\frac{{ d \\sigma}}{d \\zeta}= \\frac{1}{\\sigma} \\left( \\frac{d \\sigma}{d \\zeta_1} +\\frac{d \\sigma}{d \\zeta_2}\\right)_{\\zeta_1=\\zeta_2=\\zeta}.\n\\end{equation}\nWe match the resummation to a LO calculation of the jet mass distribution obtained with \\textsc{Nlojet}{\\footnotesize++}~\\cite{nlojet}, according to Eq.~(\\ref{matching}). 
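The additive matching of Eq.~(\ref{matching}) can be illustrated with toy ingredients; the functions below are invented and only mimic the relevant limits in $L = \ln(1/\zeta)$, with a Sudakov-suppressed resummation and a logarithmically growing fixed-order piece.

```python
import math

a = 0.1  # toy coupling

def lo(L):
    """Toy fixed-order distribution: grows logarithmically at large L (small zeta)."""
    return a * (4.0 * L + 1.0)

def nll(L):
    """Toy resummed distribution: Sudakov-suppressed at large L."""
    return a * 4.0 * L * math.exp(-a * 2.0 * L * L)

def nll_exp(L):
    """Expansion of the toy resummation to first order in the coupling."""
    return a * 4.0 * L

def matched(L):
    return lo(L) + nll(L) - nll_exp(L)

# At large L the matched result tracks the resummation: the logarithmic growth
# of lo(L) cancels against nll_exp(L), leaving only the constant a.
for L in (1.0, 5.0, 10.0):
    print(L, matched(L) - nll(L))
```

The residual difference is exactly the non-logarithmic constant of the toy fixed-order piece, which is the counterpart of the $C_1$-type terms discussed earlier.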
In Fig.~\\ref{fig:dijets} we plot our NLL+LO result with different approximations for the resummed exponent, Eq.~(\\ref{masterdijets}): jet-functions (blue), with the inclusion of ${\\cal O}(R^2)$ corrections (green) and non-global logarithms (red), as explained in detail for the Z+jet case. Although the corrections to the jet-function approximation are less pronounced than in the Z+jet case, they are still sizeable and must be taken into account. In our understanding, the perturbative part of the resummed result of Refs.~\\cite{yuan1,yuan2} is precisely the one captured by the jet-functions (plus an approximate treatment of the one-loop constant $C_1$). \n\nIn order to obtain a more realistic prediction, one would need to integrate over the appropriate cuts in transverse momentum and rapidity and match to NLO, which, in principle, should not pose any difficulties. The determination of the constant $C_1$ is more delicate, although this issue has been addressed in Refs.~\\cite{BSZcaesar, BSZpheno} for the case of global event-shapes.\n\n\\begin{figure} \n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{plots\/dijets_zeta}\n\\caption{The NLL+LO jet mass distribution for dijets, with different approximations for the resummed exponent.}\\label{fig:dijets}\n\\end{center}\n\\end{figure}\n\n\\section{Conclusions and Outlook}\\label{sec:conclusions}\n\nIn this paper we have provided resummed expressions to NLL accuracy for jet-mass distributions in hadron-hadron collisions, for jets of arbitrary radius defined in the anti-$k_t$ algorithm. In particular, we have considered Z boson production in association with a jet and jet masses in dijet production. We have improved upon existing studies of jet masses and shapes (many of which use $e^{+}e^{-}$ annihilation as a model for jets produced in hadron collision processes) by incorporating initial state radiation effects as well as accounting for non-global logarithms in the large $N_c$ limit.
We have matched our results to leading-order predictions to account for non-logarithmic terms at this order, and have commented on the role of the coefficient function $C_1 \\alpha_s$ (corrections of order $\\alpha_s$ to the basic resummation off the Born configuration). Finally, we have compared our results to the leading Monte Carlo event generators \\textsc{Pythia}, \\textsc{Sherpa} and \\textsc{Herwig}{\\footnotesize++}, with and without non-perturbative hadronisation corrections. \n\nWe have found, firstly, that ISR and non-global logarithms play an important role even at relatively small values of the jet radius such as $R=0.6$, and hence cannot be ignored when discussing inclusive jet shapes at hadron colliders. Our calculations of non-global logarithms, both at fixed order and at all orders, represent to our knowledge the first attempt to calculate these terms beyond the simpler cases of $e^{+}e^{-}$ annihilation and DIS. On comparing our results to event generators widely used for phenomenology, we find that at the purely perturbative level the best agreement is with the \\textsc{Pythia} shower, with a small apparent shift accounting for much of the difference between the analytical resummation and the parton shower estimate. The differences with \\textsc{Sherpa} and \\textsc{Herwig}{\\footnotesize++}, on the other hand, are more marked, especially towards smaller jet masses as one approaches the peak of the distributions. After hadronisation corrections are applied in the event generators, and a corresponding shift is applied to the analytical resummation to account for hadronisation, we obtain very good agreement with \\textsc{Pythia} and \\textsc{Herwig}{\\footnotesize++}, with slightly different shifts required in either case, for jet mass values to the right of the Sudakov peak of the distribution.
For smaller jet masses we do not expect a simple shift of the analytical resummation to reproduce the correct result, and here we observe a discrepancy with all Monte Carlo generators, which is to be expected. However, this region of very small jet masses, of the order of a few GeV, will in any case not be of interest for LHC phenomenology. \nWe may thus expect our results to be of direct phenomenological value even pending some of the improvements we intend to make in the near future. \n\nFor the immediate future we aim to improve our results chiefly by taking proper account of the order $\\alpha_s$ coefficient function $C_1$: computing the various pieces of $C_1$ which originate from different regions and suppressing these with an appropriate form factor, rather than the cruder treatment reported in the text, which was aimed at producing an uncertainty band associated with the lack of a full treatment of $C_1$. When this is done we will be in a position to carry out \nan NLO matching and to estimate the uncertainty of our theoretical calculations accurately, which will be important in the context of phenomenology. \n\nWe have not directly taken non-perturbative effects into account in this paper, instead incorporating them as a simple shift of the perturbative spectra, as for the case of global event shapes~\\cite{Dassalrev}. We can study non-perturbative corrections in more detail using the methods outlined in~\\cite{Dassalmag}; for the moment we note that the Monte Carlo event generators we have studied contain differences in their estimates of hadronisation, which should be explored further along with the underlying event (UE) contributions. Since our predictions are valid for any value of the jet radius $R$, one can hope that phenomenological studies selecting a small value of $R$ would help to better isolate the perturbative contributions, since the hadronisation and UE corrections to $m_J^2\/p_T^2$ scale respectively as $R$ and $R^4$~\\cite{Dassalmag}.
On the other hand, moving to a larger $R$ after pinning down the perturbative content would help to constrain the non-perturbative models in various approaches more accurately.\n\nAdditionally, although in this article we have treated a single variable, the jet mass, it is straightforward to extend our treatment to the entire range of angularities, such as those explored for jets in \\cite{EHLVW1,EHLVW2}. The basic calculations we carried out here can easily be extended to include those variables, as well as variants of the jet mass itself, such as the jet mass with an additional jet veto. Once we have accomplished an NLO matching, we therefore intend to generalise our approach to accommodate a range of substructure variables in different hard processes. We hope that our work will eventually lead to improved estimates of the theoretical accuracy, and hence to greater confidence in a detailed understanding of jet substructure than is presently the case, which could in turn be important for a variety of substructure applications at the LHC.\n\n\n\\section*{Acknowledgements}\nWe wish to thank Andrea Banfi and Frank Krauss for many useful discussions about resummation and parton showers.\nIn particular, we thank Peter Richardson and Marek Schoenherr for \\textsc{Herwig}{\\footnotesize++} and \\textsc{Sherpa} support.\n MD would like to thank the IPPP for support via the IPPP associateship award, which facilitated his visit to Durham and those of SM to Manchester during the course of this work. \nThis work is supported by the UK's STFC. The work of MD is supported by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics, under STFC grant ST\/J000418\/1.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nWe study the problem of sampling from a target distribution using algorithms based on Langevin dynamics \\citep{langevin1908theory}. Mathematically, Langevin dynamics (a.k.a. overdamped Langevin dynamics) is defined by the following stochastic differential equation (SDE)\n\\begin{align}\\label{eq:langevin_dynamics}\n\\text{d} \\bX(t) = -\\nabla f\\big(\\bX(t)\\big) \\text{d} t + \\sqrt{2\\beta^{-1}}\\text{d} \\bB(t),\n\\end{align}\nwhere $\\beta>0$ is called the inverse temperature parameter and $\\bB(t)\\in\\RR^d$ is the Brownian motion at time $t$. It has been proved in \\cite{chiang1987diffusion,roberts1996exponential} that, under certain conditions on the drift term $-\\nabla f(\\bX(t))$, the Langevin dynamics converges to a unique stationary distribution $\\pi(\\text{d} \\xb)\\propto e^{-\\beta f(\\xb)}\\text{d} \\xb$. To approximately sample from such a target distribution $\\pi$, we can apply the Euler-Maruyama discretization to \\eqref{eq:langevin_dynamics}, leading to the Langevin Monte Carlo (LMC) algorithm, which iteratively updates the parameter $\\xb_k$ as follows:\n\\begin{align}\\label{eq:def_lmc}\n\\xb_{k+1} = \\xb_k - \\eta\\nabla f(\\xb_k) + \\sqrt{2\\eta\\beta^{-1}}\\cdot\\bepsilon_k,\n\\end{align}\nwhere $k=0,1,\\ldots$ denotes the time step, $\\{\\bepsilon_k\\}_{k=0,1,\\ldots}$ are i.i.d. standard Gaussian random vectors in $\\RR^d$, and $\\eta>0$ is the step size of the discretization. \n\n\nIn large scale machine learning problems that involve a large amount of training data, the log-density function $f(\\xb)$ can typically be formulated as the average of the log-density functions over all the training data points, i.e., $f(\\xb) = n^{-1}\\sum_{i=1}^n f_i(\\xb)$\\footnote{In some cases, the log-density function $f(\\xb)$ is formulated as the sum of the log-density functions for the training data points instead of the average.
To cover these cases, we can simply transform the temperature parameter $\\beta\\rightarrow n\\beta$, and the target distribution remains the same.\n}, where $n$ is the size of the training dataset and $f_i(\\xb)$ denotes the log-density function for the $i$-th training data point. In these problems, the computation of the full gradient over the entire dataset can be very time-consuming. In order to save the cost of gradient computation, one can replace the full gradient $\\nabla f(\\xb)$ with a stochastic gradient computed only over a small subset of the dataset, which gives rise to stochastic gradient Langevin dynamics (SGLD) \\citep{welling2011bayesian}. \n\nWhen the target distribution $\\pi$ is log-concave, SGLD provably converges to $\\pi$ at a sublinear rate in the $2$-Wasserstein distance \\citep{dalalyan2017user,dalalyan2017further,wang2019laplacian}. \nHowever, it becomes much more challenging to establish the convergence of SGLD when the target distribution is not log-concave. When the negative log-density function $f(\\xb)$ is smooth and dissipative, the global convergence guarantee of SGLD was first established in \\citet{raginsky2017non}\\footnote{Although this paper mainly focuses on the convergence analysis of SGLD for nonconvex optimization, part of its theoretical results also reveal the convergence rate for sampling from a target distribution.} via optimal control theory, and further improved in \\citet{xu2018global} by a direct analysis of the ergodicity of LMC. Nonetheless, these two works require an extremely large mini-batch size (e.g., $B=\\Omega(\\epsilon^{-4})$) to ensure a sufficiently small sampling error, which is prohibitively large or even unrealistic compared with practical settings. \\citet{zhang2017hitting} studied the hitting time of SGLD for nonconvex optimization, but could only provide a convergence guarantee for finding a local minimum rather than for converging to the target distribution.
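The LMC update in \eqref{eq:def_lmc} is straightforward to implement; the following minimal sketch (an illustration under our own naming, not code from the paper) runs the discretized dynamics for a user-supplied gradient oracle.

```python
import numpy as np

def lmc(grad_f, x0, eta, beta, n_steps, rng):
    """Langevin Monte Carlo: Euler-Maruyama discretization of
    dX = -grad f(X) dt + sqrt(2/beta) dB, with step size eta."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        eps = rng.standard_normal(x.shape)  # i.i.d. standard Gaussian vector
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta / beta) * eps
    return x

# Example: f(x) = ||x||^2 / 2 with beta = 1 targets the standard Gaussian.
rng = np.random.default_rng(0)
sample = lmc(lambda x: x, np.zeros(2), eta=0.01, beta=1.0, n_steps=1000, rng=rng)
```

For this quadratic $f$, the iterates form an autoregressive process whose stationary distribution is close to $N(\zero, \Ib)$ up to an $O(\eta)$ discretization bias.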
Recently, \\citet{chau2019stochastic, zhang2019nonasymptotic} studied the global convergence of SGLD for nonconvex stochastic optimization problems and proved faster convergence rates than those in \\citet{raginsky2017non,xu2018global}. However, their convergence results require an additional Lipschitz condition on the stochastic gradients in terms of the input data (rather than the model parameter), which restricts their applicability to a small class of SGLD-based sampling problems.\n\nIn this paper, we consider the same setting as in \\citet{raginsky2017non,xu2018global} and aim to establish faster convergence rates for SGLD with an arbitrary mini-batch size. \nIn particular, we provide a new convergence analysis for SGLD based on an auxiliary time-reversible Markov chain called Metropolized SGLD \\citep{zhang2017hitting}, which is constructed by adding a Metropolis-Hastings step to SGLD\\footnote{This Markov chain is practically intractable and is only used for the sake of theoretical analysis.}. The key idea is that, as long as the transition kernel of the constructed Metropolized SGLD chain is sufficiently close to that of SGLD, we can prove the convergence of SGLD to the target distribution. Compared with existing proof techniques, which typically take LMC or Langevin dynamics as the auxiliary sequence, the advantage of using Metropolized SGLD as the auxiliary sequence is that it is closer to SGLD in distribution, since its transition distribution also covers the randomness of the stochastic gradients; it can thus better characterize the convergence behavior of SGLD and lead to sharper convergence guarantees. \nTo sum up, we highlight our main contributions as follows:\n\n\\begin{itemize}[leftmargin=*]\n \\item We provide a new convergence analysis of SGLD for sampling from a large class of distributions that can be non-log-concave.
In contrast to \\cite{raginsky2017non,xu2018global}, which require a very large mini-batch size, our convergence guarantee holds for an arbitrary choice of mini-batch size.\n\n \n \n\\item We prove that SGLD can achieve $\\epsilon$-sampling error in total variation distance within $\\tilde O(d^4\\beta^2\\rho^{-4}\\epsilon^{-2})$ stochastic gradient evaluations, where $d$ is the problem dimension, $\\beta$ is the inverse temperature parameter, and $\\rho$ is the Cheeger constant (see Definition~\\ref{def:cheeger}) of a truncated version of the target distribution.\n \n \nWe also prove the convergence of SGLD under the measure of polynomial growth functions, which shows that the number of required stochastic gradient evaluations is $\\tilde O(\\epsilon^{-2})$. This improves the state-of-the-art result proved in \\cite{xu2018global} by a factor of $\\tilde O(\\epsilon^{-3})$.\n \n \\item We further establish sharper convergence guarantees for SGLD under an additional Hessian Lipschitz condition on the negative log density function $f(\\xb)$. We show that $\\tilde O(d^{15\/4}\\beta^{7\/4}\\rho^{-7\/2}\\epsilon^{-3\/2})$ stochastic gradient evaluations suffice to achieve $\\epsilon$-sampling error in total variation distance. Our proof technique is much simpler and more intuitive than the existing analyses for proving the convergence of Langevin algorithms under the Hessian Lipschitz condition \\citep{dalalyan2017user,mou2019improved,vempala2019rapid}, which may be of independent interest. \n \\end{itemize}\n\n\n\n\\noindent\\textbf{Notation.} \nWe use the notation $x\\wedge y$ and $x\\vee y$ to denote $\\min\\{x,y\\}$ and $\\max\\{x, y\\}$ respectively.\nWe denote by $\\cB(\\ub,r)$ the Euclidean ball of radius $r>0$ centered at $\\ub\\in\\RR^d$. For any distribution $\\mu$ and set $\\cA$, we use $\\mu(\\cA)$ to denote the probability measure of $\\cA$ under the distribution $\\mu$.
For any two distributions $\\mu$ and $\\nu$, we use $\\|\\mu-\\nu\\|_{TV}$ and $D_{KL}(\\mu,\\nu)$ to denote the total variation distance and the Kullback\u2013Leibler divergence between $\\mu$ and $\\nu$ respectively. For $\\ub,\\vb\\in\\RR^d$, we use $\\cT_{\\ub}(\\vb)$ to denote the probability of transiting to $\\vb$ after one SGLD update step from $\\ub$. Similarly, $\\cT_{\\ub}(\\cA)$ and $\\cT_{\\cA'}(\\cA)$ are the probabilities of transiting to a set $\\cA\\subseteq\\RR^d$ after one SGLD update step starting from $\\ub$ and from the set $\\cA'$ respectively. For any two sequences $\\{a_n\\}$ and $\\{b_n\\}$, we write $a_n = O(b_n)$ or $a_n = \\Omega(b_n)$ if $a_n\\le C_1b_n$ or $a_n\\ge C_2 b_n$ for some absolute constants $C_1$ and $C_2$. We use the notations $\\tilde O(\\cdot)$ and $\\tilde \\Omega(\\cdot)$ to hide polylogarithmic factors in $O(\\cdot)$ and $\\Omega(\\cdot)$ respectively.\n\n\\section{Related Work}\nMarkov Chain Monte Carlo (MCMC) methods, such as random walk Metropolis \\citep{mengersen1996rates}, ball walk \\citep{lovasz1990mixing}, hit-and-run \\citep{smith1984efficient} and Langevin algorithms \\citep{parisi1981correlation}, have been extensively studied for sampling from a target distribution, and are widely used in many machine learning applications. There is a large body of work focusing on developing fast MCMC algorithms and establishing sharp theoretical guarantees; due to space constraints, we review only the most closely related works.\n\nAlgorithms based on Langevin dynamics \\eqref{eq:langevin_dynamics} have recently emerged as a promising approach to accurate and efficient Bayesian sampling in both theory and practice \\citep{welling2011bayesian,dalalyan2017theoretical}.
\nThe non-asymptotic convergence rate of LMC has been extensively investigated in the literature when the target distribution is strongly log-concave \\citep{durmus2016sampling,dalalyan2017theoretical,durmus2017nonasymptotic}, weakly log-concave\n\\citep{dalalyan2017further,mangoubi2019nonconvex}, and non-log-concave but admitting certain good isoperimetric properties \\citep{raginsky2017non,ma2018sampling,lee2018beyond,xu2018global,vempala2019rapid}, to mention a few. \nThe stochastic variant of LMC, i.e., SGLD, is often studied alongside LMC in the above literature and in the convex\/nonconvex optimization literature \\citep{raginsky2017non,zhang2017hitting,xu2018global,gao2018global,chen2018accelerating,deng2020non,chen2020stationary}.\nAnother important Langevin-based algorithm is the Metropolis Adjusted Langevin Algorithm (MALA) \\citep{roberts1996exponential}, which is developed by introducing a Metropolis-Hastings step into LMC. Theoretically, it has been proved that MALA converges to the target distribution at a linear rate when sampling from both strongly log-concave \\citep{dwivedi2018log} and non-log-concave \\citep{bou2013nonasymptotic} distributions. \n\nBeyond first-order MCMC methods, extensive work has also emerged on high-order MCMC methods. One popular algorithm among them is Hamiltonian Monte Carlo (HMC) \\citep{neal2011mcmc}, which introduces a Hamiltonian momentum and a leapfrog integrator to accelerate the mixing rate. \nFrom the theoretical perspective,\n\\citet{durmus2017convergence} established general conditions under which HMC can be guaranteed to be geometrically ergodic.\n\\citet{mangoubi2018dimensionally,mangoubi2019nonconvex} proved convergence rates of HMC for sampling from both log-concave and non-log-concave distributions. \\citet{bou2018coupling,chen2019fast} studied the convergence of Metropolized HMC (MHMC) for sampling from strongly log-concave distributions.
Another important high-order MCMC method is built upon the underdamped Langevin dynamics, which incorporates a velocity variable into the Langevin dynamics \\eqref{eq:langevin_dynamics}. \nFor the continuous-time underdamped Langevin dynamics, the mixing rate has been studied in \\cite{eberle2016reflection,eberle2017couplings}.\nThe convergence of its discrete version has also been widely studied for sampling from both log-concave \\citep{chen2017convergence} \nand non-log-concave distributions \\citep{chen2015convergence,cheng2018sharp,gao2018global,gao2018breaking}.\n\n\n \n\\section{Review of the SGLD Algorithm}\n\nFor completeness, we present the SGLD algorithm \\citep{welling2011bayesian} in Algorithm \\ref{alg:sgld}, which is built upon the Euler-Maruyama discretization of the continuous-time Langevin dynamics \\eqref{eq:langevin_dynamics} while using mini-batch stochastic gradients in each iteration.\n\nIn the $k$-th iteration, SGLD samples a mini-batch of data points without replacement, denoted by $\\cI$, and computes the stochastic gradient at the current iterate $\\xb_k$, i.e., $\\gb(\\xb_k,\\cI) = 1\/B\\sum_{i\\in\\cI}\\nabla f_i(\\xb_k)$, where $B = |\\cI|$ is the mini-batch size. Based on this stochastic gradient, the model parameter is updated using the following rule:\n\\begin{align*}\n\\xb_{k+1} = \\xb_k - \\eta\\gb(\\xb_k,\\cI) + \\sqrt{2\\eta\/\\beta}\\cdot\\bepsilon_k,\n\\end{align*}\nwhere $\\bepsilon_k$ is randomly drawn from the standard normal distribution $N(\\zero, \\Ib)$ and $\\eta>0$ is the step size.
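The update above differs from LMC only in the mini-batch gradient. A minimal sketch of this procedure (our own illustrative code on a toy finite-sum target, not the authors' implementation) is:

```python
import numpy as np

def sgld(grad_fis, x0, eta, beta, batch_size, n_steps, rng):
    """SGLD sketch: each step draws a mini-batch I without replacement,
    forms g = (1/B) * sum_{i in I} grad f_i(x), and updates
    x <- x - eta * g + sqrt(2 * eta / beta) * eps with eps ~ N(0, I).
    Returns the whole trajectory of iterates."""
    n = len(grad_fis)
    x = np.array(x0, dtype=float)
    traj = []
    for _ in range(n_steps):
        batch = rng.choice(n, size=batch_size, replace=False)
        g = sum(grad_fis[i](x) for i in batch) / batch_size
        x = x - eta * g + np.sqrt(2.0 * eta / beta) * rng.standard_normal(x.shape)
        traj.append(x)
    return np.array(traj)

# Toy example: f_i(x) = (x - a_i)^2 / 2, so f = (1/n) sum_i f_i and the target
# pi proportional to exp(-beta * f) is Gaussian with mean mean(a), variance 1/beta.
rng = np.random.default_rng(1)
a = np.linspace(-1.0, 1.0, 10)
grads = [lambda x, ai=ai: x - ai for ai in a]
traj = sgld(grads, np.zeros(1), eta=0.05, beta=1.0, batch_size=2,
            n_steps=8000, rng=rng)
```

After a burn-in, the empirical mean and variance of the trajectory approach those of the Gaussian target, up to discretization and mini-batch noise of order controlled by the step size.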
\n\n\n\\begin{algorithm}[!t]\n\t\\caption{Stochastic Gradient Langevin Dynamics (SGLD)}\n\t\\label{alg:sgld}\n\t\\begin{algorithmic}\n\t\t\\STATE \\textbf{input:} step size $\\eta$; mini-batch size $B$; inverse temperature parameter $\\beta$;\n\t\n\t \\STATE Randomly draw $\\xb_0$ from initial distribution $\\mu_0$.\n \t\t\\FOR {$k = 0,1,\\ldots, K$}\n\t\t\\STATE Randomly pick a subset $\\cI$ from $\\{1,\\ldots,n\\}$ of size $|\\cI|=B$; randomly draw $\\bepsilon_k\\sim N(\\zero,\\Ib)$\n\t\t\\STATE Compute the stochastic gradient $\\gb(\\xb_k,\\cI) = 1\/B\\sum_{i\\in\\cI}\\nabla f_i(\\xb_k)$\n\t\t\\STATE Update: $\\xb_{k+1}=\\xb_k-\\eta \\gb(\\xb_k,\\cI)+\\sqrt{2\\eta\/\\beta}\\bepsilon_k$\n\t\t\\ENDFOR \n\t\t\\STATE \\textbf{output: $\\xb_K$} \n\t\\end{algorithmic}\n\n\\end{algorithm}\n\n\n\n\n\n\\section{Main Results}\nIn this section, we present our main theoretical results. We start with the following two definitions. The first one quantifies the goodness of the initial distribution compared with the target distribution, and the second one characterizes the isoperimetric profile of a given distribution. Both definitions are widely used in the convergence analysis of MCMC methods \\citep{lovasz1993random,vempala2007geometric,dwivedi2018log,mangoubi2019nonconvex}.\n\n\\begin{definition}[$\\lambda$-warm start]\\label{eq:def_warm_start}\nLet $\\nu$ be a distribution on $\\Omega$. We say the initial distribution $\\mu_0$ is a $\\lambda$-warm start with respect to $\\nu$ if\n\\begin{align*}\n\\sup_{\\cA:\\cA\\subseteq\\Omega} \\frac{\\mu_0(\\cA)}{\\nu(\\cA)}\\le \\lambda. \n\\end{align*}\n\\end{definition}\n\n\n\n\\begin{definition}[Cheeger constant]\\label{def:cheeger} Let $\\mu$ be a probability measure on $\\Omega$. 
We say $\\mu$ satisfies the isoperimetric inequality with Cheeger constant $\\rho$ if for any $\\cA\\subseteq\\Omega$, it holds that\n\\begin{align*}\n\\liminf_{h\\rightarrow 0^+} \\frac{\\mu(\\cA_h)-\\mu(\\cA)}{h}\\ge \\rho\\min\\big\\{\\mu(\\cA), 1-\\mu(\\cA)\\big\\},\n\\end{align*}\nwhere $\\cA_h = \\{\\xb\\in\\Omega: \\exists \\yb\\in\\cA, \\|\\xb-\\yb\\|_2\\le h\\}$.\n\\end{definition}\n\n\nNext, we introduce some common\nassumptions on the negative log density function $f(\\xb)$ and the stochastic gradient $\\gb(\\xb,\\cI)$. \n\\begin{assumption}[Dissipativeness]\\label{assump:diss}\nThere exist absolute constants $m>0$ and $b\\ge 0$ such that \n\\begin{align*}\n\\langle\\nabla f(\\xb), \\xb\\rangle\\ge m\\|\\xb\\|_2^2 - b, \\quad\\text{for all }\\xb\\in\\RR^d. \n\\end{align*}\n\\end{assumption}\nThis assumption is conventionally made in the convergence analysis for sampling from non-log-concave distributions \\citep{raginsky2017non,xu2018global,zou2019sampling}. Basically, this assumption implies that the negative log density function $f(\\xb)$ grows like a quadratic function when $\\xb$ is outside a ball centered at the origin.
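As a quick illustration (our own toy example, not from the paper), the non-convex one-dimensional double well $f(x) = x^4/4 - x^2/2$ is dissipative with $m = b = 1$: here $\langle\nabla f(x), x\rangle = x^4 - x^2$, and $x^4 - x^2 - (x^2 - 1) = (x^2-1)^2 \ge 0$. The sketch below checks this numerically on a grid.

```python
import numpy as np

# Toy check of dissipativeness <grad f(x), x> >= m * x^2 - b for the
# non-convex double well f(x) = x**4 / 4 - x**2 / 2, with f'(x) = x**3 - x.
# Taking m = 1 and b = 1 works, since x**4 - 2*x**2 + 1 = (x**2 - 1)**2 >= 0.
def dissipative_gap(x, m=1.0, b=1.0):
    grad = x ** 3 - x
    return grad * x - (m * x ** 2 - b)

xs = np.linspace(-5.0, 5.0, 1001)
# Nonnegative everywhere, up to floating-point error near the roots x = +-1.
all_nonnegative = bool(np.all(dissipative_gap(xs) >= -1e-9))  # True
```

The example also illustrates the remark that dissipativeness does not imply convexity: this $f$ has two separate minima at $x = \pm 1$.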
Note that a strongly convex function $f(\\xb)$ satisfies Assumption~\\ref{assump:diss}, but not vice versa.\n\n\\begin{assumption}[Smoothness]\\label{assump:smooth}\nThere exists a positive constant $L$ such that for any $\\xb,\\yb\\in\\RR^d$ and all functions $f_i(\\xb)$, $i=1,\\ldots,n$, it holds that\n\\begin{align*}\n\\|\\nabla f_i(\\xb) - \\nabla f_i(\\yb)\\|_2\\le L\\|\\xb - \\yb\\|_2.\n\\end{align*}\n\\end{assumption}\nThis assumption has also been made in many prior works \\citep{raginsky2017non,zhang2017hitting,xu2018global}.\n\n\nWe now define the following function, which will be used repeatedly in the subsequent theoretical results:\n{\n\\begin{align}\\label{eq:def_barR}\n\\bar R(z) &= \\bigg[\\max\\bigg\\{\\frac{625d\\log(4\/z)}{m\\beta},\\frac{4d\\log(4L\/m)+4\\beta b}{m\\beta}, \\frac{4d + 8 \\sqrt{d\\log(1\/z)}+8\\log(1\/z)}{m\\beta}\\bigg\\}\\bigg]^{1\/2}.\n\\end{align}}%\nBased on all the aforementioned assumptions, we present the convergence result of SGLD in the following theorem.\n\\begin{theorem}\\label{thm:main_thm}\n For any $\\epsilon\\in(0,1)$, let $\\pi^*\\propto e^{-\\beta f(\\xb)}\\ind\\big(\\xb\\in\\cB(0,R)\\big)$ be the truncated target distribution on $\\Omega = \\cB(0,R)$ with $R = \\bar R(\\epsilon K^{-1}\/12)$, and $\\rho$ be the Cheeger constant of $\\pi^*$. Under Assumptions \\ref{assump:diss} and \\ref{assump:smooth}, we suppose $\\PP(\\|\\xb_0\\|_2\\ge R\/2)\\le \\epsilon\/16$, and set the step size as $\\eta=\\tilde O( \\rho^2d^{-2}\\beta^{-1}\\wedge B^2\\rho^2d^{-4}\\beta^{-1})$.
Then for any $\\lambda$-warm start with respect to $\\pi$, the output of Algorithm \\ref{alg:sgld} satisfies\n\\begin{align}\\label{eq:main_thm_bound}\n\\|\\mu_K^{\\text{SGLD}}- \\pi\\|_{TV}\\le \\lambda(1-C_0\\eta)^{K} +\\frac{C_1\\eta^{1\/2}}{B}+C_2\\eta^{1\/2}+\\frac{\\epsilon}{2},\n\\end{align}\nwhere $C_0 = \\tilde O\\big(\\rho^2\\beta^{-1}\\big)$, \n$C_1 = \\tilde O\\big(Rd\\rho^{-1}\\beta^{3\/2}\\big)$ and $C_2 = \\tilde O\\big(d\\rho^{-1}\\beta^{1\/2}\\big)$ are problem-dependent constants.\n\\end{theorem}\n\nTo prove Theorem \\ref{thm:main_thm}, we construct an auxiliary sequence $\\{\\xb_k^{\\text{MH}}\\}_{k=0,1,\\ldots}$ by adding a Metropolis-Hastings accept\/reject step to each iteration of SGLD. We call this auxiliary sequence Metropolized SGLD; it is only used in the analysis (please refer to Section \\ref{sec:metropolized_SGLD} for the rigorous definition).\nTherefore, the terms in \\eqref{eq:main_thm_bound} can be categorized into two types: (1) the approximation error between the SGLD iterates $\\{\\xb_k\\}_{k=0,1,\\ldots}$ and the auxiliary sequence $\\{\\xb_k^{\\text{MH}}\\}_{k=0,1,\\ldots}$; and (2) the convergence of Metropolized SGLD to the target distribution $\\pi$.\n\nIn particular, the four terms on the right-hand side of \\eqref{eq:main_thm_bound} are interpreted as follows. The first term corresponds to the sampling error of the auxiliary sequence generated by Metropolized SGLD, which converges to zero at a linear rate. The second through fourth terms in \\eqref{eq:main_thm_bound} together\nreflect the approximation error between SGLD and Metropolized SGLD, which is of order $O(\\eta^{1\/2}+\\epsilon)$. More specifically, the second and third terms correspond to the rejection probability of Metropolized SGLD (since SGLD does not have this accept\/reject step), which is contributed by the randomness of the stochastic gradients and of the Brownian motion respectively.
The last term is related to our choice of $R$, since the constructed Metropolized SGLD restricts all iterates to the region $\\cB(0,R)$ while SGLD places no constraint on its iterates (see Section \\ref{sec:metropolized_SGLD} for more details).\n\n\n\n\n\n\\begin{remark}\\label{rmk:comment_cheeger}\nFor a general non-log-concave distribution, it is difficult to prove a tight bound on the Cheeger constant $\\rho$. One possible lower bound on $\\rho$ can be obtained via Buser's inequality \\citep{buser1982note,ledoux1994simple}, which shows that the Cheeger constant $\\rho$ can be lower bounded by $\\Omega(d^{-1\/2}c_p)$ under Assumption \\ref{assump:smooth}, where $c_p$ is the Poincar\\'e constant of the distribution $\\pi^{\\star}$. Moreover, \\citet{bakry2008simple} gave a simple lower bound on $c_p$, showing that $c_p \\ge e^{-\\beta\\text{Osc}_{R}f}\/(2R^2)$, where $\\text{Osc}_{R}f = \\sup_{\\xb\\in\\cB(\\zero,R)}f(\\xb) -\\inf_{\\xb\\in\\cB(\\zero,R)}f(\\xb)\\le LR^2\/2$. Assuming $R = \\tilde O(d^{1\/2})$, this further implies that $\\rho= \\Omega(d^{-1})\\cdot e^{-O(R^2)} = e^{-\\tilde O(d)}$. In addition, better lower bounds on $\\rho$ can be proved when the target distribution enjoys better properties. When the target distribution is a mixture of strongly log-concave distributions, the lower bound on $\\rho$ can be improved to $1\/\\text{poly}(d)$ \\citep{lee2018beyond}. Strengthening Assumption \\ref{assump:diss} to a local nonconvexity condition yields $\\rho=e^{-O(L)}$ \\citep{ma2018sampling}. For log-concave distributions, \\citet{lee2017eldan} proved that the Cheeger constant $\\rho$ can be lower bounded by $\\rho = \\Omega\\big(1\/(\\text{Tr}(\\bSigma^2))^{1\/4}\\big)$, where $\\bSigma$ is the covariance matrix of the distribution $\\pi^{\\star}$.
\nWhen the target distribution is $m$-strongly log-concave, based on \\cite{cousins2014cubic,dwivedi2018log}, it can be shown that $\\rho = \\Omega(\\sqrt{m})$.\n\\end{remark}\n\n\\iffalse\n\\begin{remark}\nIf $\\epsilon = O(\\eta^{1\/2})$, then it can be observed that when $k\\rightarrow \\infty$, SGLD achieves sampling error at most $O(\\eta^{1\/2})$, suggesting that the stationary distribution of SGLD (if exists) is $\\eta^{1\/2}$-close to the stationary distribution of the Langevin dynamics \\eqref{eq:langevin_dynamics}. This bypasses the drawback of a large body of existing works \\citep{raginsky2017non,xu2018global,gao2018global,zou2019sampling,nguyen2019non,zhang2019cyclical} where the sampling error bound usually diverges when the number of iterations goes to infinity.\n\\end{remark}\n\\fi\n\n\n\n\n\n\n\n\n\nNote that the upper bound of the sampling error proved in Theorem \\ref{thm:main_thm} relies on the step size, mini-batch size, and the goodness of the initialization (i.e., $\\lambda$). In order to guarantee $\\epsilon$-sampling error of SGLD, we need to specify the choices of these hyper-parameters. In particular, we present the iteration complexity of SGLD in the following corollary.\n\n\n\n\\begin{corollary}\\label{coro:SGLD1}\nUnder the same assumptions made in Theorem \\ref{thm:main_thm}, we use Gaussian initialization $\\mu_0 = N\\big(\\zero, \\Ib\/(2\\beta L)\\big)$. For any mini-batch size $B\\le n$ and $\\epsilon\\in(0,1)$, if we set the step size and the maximum iteration number as\n\\begin{align*}\n\\eta &= \\tilde O\\bigg(\\frac{\\rho^2\\epsilon^2}{d^2\\beta} \\wedge \\frac{B^2\\rho^2\\epsilon^2}{d^4\\beta}\\bigg), \\\\\nK&= \\tilde O\\bigg(\\frac{d^3\\beta^2}{\\rho^4\\epsilon^2}\\vee \\frac{d^5\\beta^2}{B^2\\rho^4\\epsilon^2}\\bigg),\n\\end{align*}\nthen SGLD can achieve an $\\epsilon$ sampling error in total variation distance. 
\n\\end{corollary}\nIt is worth noting that the iteration complexity in Corollary \\ref{coro:SGLD1} holds for any mini-batch size $1\\leq B\\le n$, as opposed to \\cite{raginsky2017non,xu2018global}, which require the mini-batch size to be $\\text{poly}(\\epsilon^{-1})$ in order to guarantee a vanishing sampling error. Moreover, if we set the mini-batch size to be $B = O(d)$, the number of stochastic gradient evaluations needed to achieve $\\epsilon$-sampling error is $K\\cdot B=\\tilde O(d^4\\beta^2\\rho^{-4}\\epsilon^{-2})$.\n\n\n\nBased on Corollary \\ref{coro:SGLD1}, we further prove the convergence of SGLD under the measure of any polynomial growth function. \n\\begin{corollary}\\label{coro:weak_converge}\nUnder the same assumptions and hyper-parameter configurations as in Corollary~\\ref{coro:SGLD1}, \nlet $h(\\xb)$ be a polynomial growth function of degree $D$, i.e., $h(\\xb)\\le C(1+\\|\\xb\\|_2^D)$ for some constant $C$, and let $K$ be as defined in Corollary \\ref{coro:SGLD1}; then the output of SGLD satisfies\n\\begin{align*}\n\\EE[h(\\xb_K)] - \\EE[h(\\xb^\\pi)]\\le C' \\epsilon,\n\\end{align*}\nwhere $\\xb^\\pi\\sim\\pi$ denotes a random vector sampled from $\\pi$ and $C' = \\tilde O\\big(d^{D\/2}\\big)$ is a problem-dependent constant.\n\\end{corollary}\n\\begin{remark}\nSimilar results have been presented in \\cite{sato2014approximation,chen2015convergence,vollmer2016exploration,erdogdu2018global}. However, \\citet{sato2014approximation} only analyzed the finite-time approximation error between SGLD and the SDE \\eqref{eq:langevin_dynamics} rather than the convergence to the target distribution. The convergence results in \\cite{chen2015convergence,vollmer2016exploration,erdogdu2018global} also differ from ours, as their guarantees are made on the sample path average rather than the last iterate.
In addition, these works assume that the solution of the Poisson equation associated with the SDE \\eqref{eq:langevin_dynamics} has polynomially bounded $i$-th order derivatives ($i\\in\\{2,3,4\\}$), which is not required in our result.\n\\end{remark}\n\nLet us consider the special case $h(\\cdot) = f(\\cdot)$, which was studied in \\cite{raginsky2017non,xu2018global}. Assumption \\ref{assump:smooth} implies that $h(\\xb)$ is a quadratic growth function. Then Corollary~\\ref{coro:weak_converge} shows that in order to guarantee $\\EE[f(\\xb_k)] - \\EE[f(\\xb^\\pi)]\\le \\epsilon$, SGLD requires $\\tilde O(\\epsilon^{-2})$ stochastic gradient evaluations. In contrast, in order to achieve the same error, \\citet{raginsky2017non,xu2018global} require $\\tilde O(\\epsilon^{-8})$ and $\\tilde O(\\epsilon^{-5})$ stochastic gradient evaluations, respectively, both of which are worse than ours.\n\n\\section{Improved Convergence Rates under Hessian Lipschitz Condition}\n\nIn this section, we will show that the convergence rate of SGLD can be improved if the log-density function additionally satisfies the Hessian Lipschitz condition, which is defined as follows.\n\\begin{assumption}[Hessian Lipschitz]\\label{assump:hessian_lip}\nThere exists a positive constant $H$ such that for any $\\xb,\\yb\\in\\RR^d$, it holds that\n\\begin{align*}\n\\big\\|\\nabla^2 f(\\xb) - \\nabla^2 f(\\yb)\\big\\|_{\\textrm{op}}\\le H\\|\\xb - \\yb\\|_2.\n\\end{align*}\n\n\\end{assumption}\nThis assumption appears in many recent papers that aim to prove faster convergence rates of LMC \\citep{dalalyan2017user,vempala2019rapid,mou2019improved} for sampling from both log-concave and non-log-concave distributions. 
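The Hessian Lipschitz condition can be probed numerically. Below is a minimal sketch (not from the paper; the two-mode Gaussian-mixture potential and all numeric values are illustrative assumptions) that estimates a lower bound on the constant $H$ via finite-difference Hessians:

```python
import numpy as np

# Illustrative non-log-concave potential (an assumption, not from the paper):
# f(x) = -log of an equal-weight two-mode Gaussian mixture with modes at +/- mu.
def f(x, mu=2.0):
    a = -0.5 * np.sum((x - mu) ** 2)
    b = -0.5 * np.sum((x + mu) ** 2)
    m = max(a, b)                      # log-sum-exp for numerical stability
    return -(m + np.log(np.exp(a - m) + np.exp(b - m)))

def hessian(x, h=1e-4):
    """Central finite-difference Hessian of f at x."""
    d = x.size
    H = np.zeros((d, d))
    I = np.eye(d)
    for i in range(d):
        for j in range(d):
            H[i, j] = (f(x + h * I[i] + h * I[j]) - f(x + h * I[i] - h * I[j])
                       - f(x - h * I[i] + h * I[j]) + f(x - h * I[i] - h * I[j])) / (4 * h * h)
    return H

# Empirical lower bound on H: the largest observed ratio
# ||Hess f(x) - Hess f(y)||_op / ||x - y||_2 over random pairs.
rng = np.random.default_rng(0)
d = 2
ratios = []
for _ in range(200):
    x, y = rng.normal(size=d), rng.normal(size=d)
    gap = np.linalg.norm(hessian(x) - hessian(y), ord=2)   # matrix operator norm
    ratios.append(gap / np.linalg.norm(x - y))
print(f"empirical lower bound on the Hessian Lipschitz constant: {max(ratios):.3f}")
```

Sampling random pairs only certifies a lower bound on $H$; verifying the condition globally requires an analytic argument, as in the assumption above.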
\n\nWith this additional assumption, we state the convergence result of SGLD in the following theorem.\n\n\\begin{theorem}\\label{thm:main_thm_hessian}\nFor any $\\epsilon\\in(0,1)$, let $\\pi^*\\propto e^{-\\beta f(\\xb)}\\ind\\big(\\xb\\in\\cB(0,R)\\big)$ be the truncated target distribution in $\\Omega = \\cB(0,R)$ with $R = \\bar R(\\epsilon K^{-1}\/12)$, and let $\\rho$ be the Cheeger constant of $\\pi^*$.\nUnder Assumptions~\\ref{assump:diss}, \\ref{assump:smooth}, and \\ref{assump:hessian_lip}, we suppose $\\PP(\\|\\xb_0\\|_2\\ge R\/2)\\le \\epsilon\/16$. If we set the step size $\\eta=\\tilde O\\big( \\rho^2d^{-2}\\beta^{-1}B^2\\wedge \\rho\/(d^{3\/2}+d\\beta^{1\/2})\\big)$, then for any $\\lambda$-warm start with respect to $\\pi$, the output of Algorithm \\ref{alg:sgld} satisfies\n\\begin{align*}\n\\|\\mu_K^{\\text{SGLD}}- \\pi\\|_{TV}\\le \\lambda(1-C_0\\eta)^{K} +\\frac{C_1\\eta^{1\/2}}{B}+C_2\\eta+ \\frac{\\epsilon}{2},\n\\end{align*}\nwhere $C_0 = O(\\beta^{-1}\\rho^2)$, $C_1 = \\tilde O(R^2d\\rho^{-1}\\beta^{3\/2})$, and $C_2 = \\tilde O(d^{3\/2}\\rho^{-1}+Rd^{1\/2}\\beta \\rho^{-1})$ are problem-dependent constants.\n\\end{theorem}\nThe four terms in the upper bound in Theorem \\ref{thm:main_thm_hessian} have the same interpretation as those in Theorem \\ref{thm:main_thm}.\nCompared with the convergence result in Theorem \\ref{thm:main_thm}, the improvement brought by the Hessian Lipschitz condition lies in the approximation error between the transition distributions of SGLD and Metropolized SGLD, which is improved from $O(\\eta^{1\/2})$ to $O\\big(B^{-1}\\eta^{1\/2}+\\eta\\big)$.\n\nUnder the same Hessian Lipschitz condition, \\citet{dalalyan2017user,mou2019improved,vempala2019rapid} improved the convergence rate of LMC \\eqref{eq:def_lmc}. However, \\citet{dalalyan2017user} only focused on strongly log-concave distributions, and the theoretical results in \\citet{mou2019improved,vempala2019rapid} cannot be easily extended to SGLD. 
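To make the roles of the four terms concrete, the following sketch evaluates the upper bound of Theorem 2; the numeric constants $C_0$, $C_1$, $C_2$ and all hyper-parameter values are hypothetical illustrations, since the paper only specifies their orders:

```python
import math

def sgld_tv_bound(eta, K, B, lam, C0, C1, C2, eps):
    """Four-term TV upper bound: warm-start decay + stochastic-gradient error
    (the B^{-1} eta^{1/2} term improved by the Hessian Lipschitz condition)
    + discretization error + truncation error."""
    return (lam * (1 - C0 * eta) ** K      # lambda * (1 - C0*eta)^K
            + C1 * math.sqrt(eta) / B      # C1 * eta^{1/2} / B
            + C2 * eta                     # C2 * eta
            + eps / 2)                     # epsilon / 2

# Hypothetical values for illustration only; the paper gives the orders of
# the constants (e.g. C0 = O(rho^2 / beta)), not numeric values.
bound = sgld_tv_bound(eta=1e-4, K=200_000, B=32, lam=10.0,
                      C0=0.5, C1=1.0, C2=1.0, eps=0.01)
print(f"illustrative TV bound: {bound:.4f}")
```

Note that enlarging the mini-batch size $B$ shrinks only the stochastic-gradient term, which mirrors the improvement from $O(\eta^{1/2})$ to $O(B^{-1}\eta^{1/2}+\eta)$ discussed above.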
\n\n\n\\begin{corollary}\\label{coro:sgld2}\nUnder the same assumptions made in Theorem \\ref{thm:main_thm_hessian}, we use Gaussian initialization $\\mu_0 = N\\big(\\zero, \\Ib\/(2\\beta L)\\big)$. For any mini-batch size $B\\le n$ and $\\epsilon\\in(0,1)$, if we\nset the step size and the maximum iteration number as\n\\begin{align*}\n\\eta &= \\tilde O\\bigg(\\frac{\\rho^2B^2\\epsilon^2}{d^2\\beta}\\wedge \\frac{\\rho\\epsilon}{d^{3\/2}+d\\beta^{1\/2}}\\bigg),\\\\\nK&= \\tilde O\\bigg(\\frac{d^5\\beta^2}{\\rho^4B^2\\epsilon^2}+\\frac{d^{5\/2}\\beta + d^2\\beta^{3\/2}}{\\rho^3\\epsilon}\\bigg),\n\\end{align*}\nthen SGLD can achieve an $\\epsilon$ sampling error in terms of total variation distance. \n\\end{corollary}\n\nNote that the required number of stochastic gradient evaluations is $K\\cdot B = \\tilde O\\big(d^5\\beta^{2}\/(B\\rho^4\\epsilon^2)+Bd^{5\/2}\\beta^{3\/2}\/(\\rho^3\\epsilon)\\big)$. Therefore, if we set the mini-batch size as $B=\\tilde O\\big([d^{5\/2}\\beta^{1\/2}\/(\\rho\\epsilon)]^{1\/2}\\big)$, which balances the two terms, it can be derived that the gradient complexity of SGLD is $\\tilde O(d^{15\/4}\\beta^{7\/4}\\rho^{-7\/2}\\epsilon^{-3\/2})$. This strictly improves the stochastic gradient complexity (i.e., the number of stochastic gradient evaluations to achieve $\\epsilon$-sampling error) of SGLD without Assumption \\ref{assump:hessian_lip} by a factor of $\\tilde O(d^{1\/4}\\beta^{1\/4}\\rho^{-1\/2}\\epsilon^{-1\/2})$.\n\n\\section{Proof Outline}\\label{sec:proof_main}\nIn this section, we sketch the proof of the main result (Theorem \\ref{thm:main_thm}). The missing proofs for the other theorems, corollaries, and lemmas are deferred to the appendix. We first highlight the key proof technique and its novelty compared with prior works. 
Then we will go over each of the key steps in detail.\n\n\n\n\\subsection{Proof Technique and Novelty}\n\n\n\\paragraph{Proof Technique.} \nOur proof relies on two sequences (green arrows in Figure \\ref{fig:decomposition}): \\textbf{Projected SGLD} ($\\xb_k^{\\text{\\tiny Proj-SGLD}}$) and \\textbf{Metropolized SGLD} ($\\xb_k^{\\text{\\tiny MH}}$). Projected SGLD is constructed by adding an accept\/reject step to the standard SGLD algorithm, and was first studied in \\citet{zhang2017hitting}. Metropolized SGLD is a ``virtual'' sequence constructed by further adding a Metropolis-Hastings step to Projected SGLD (the Metropolis-Hastings step is computationally intractable, so Metropolized SGLD is not a practical algorithm; we use it only for theoretical analysis). Due to this Metropolis-Hastings step, Metropolized SGLD is a time-reversible Markov chain and thus enjoys good conductance properties. Based on these two auxiliary sequences, we prove the convergence of SGLD in three steps: (1) show that the output of Projected SGLD is close to that of SGLD in distribution (see Lemma \\ref{lemma:connection_SGLD}); (2) show that the transition distribution of Projected SGLD is close to that of Metropolized SGLD (see Lemma \\ref{lemma:approximation}); and (3) prove the convergence of Projected SGLD based on the conductance of Metropolized SGLD (see Lemma \\ref{lemma:convergence_approximate}). \n\n\\paragraph{Technical Novelty. }\nIn order to prove the convergence rate of SGLD, prior works \\citep{raginsky2017non,xu2018global} typically make use of the LMC iterates $\\xb_k^{\\text{\\tiny LMC}}$ and decompose the sampling error of SGLD (the error between $\\xb_k$ and $\\xb^\\pi$) into two parts: (1) the error between SGLD iterates and LMC iterates; and (2) the sampling error of LMC (though \\citet{raginsky2017non,xu2018global} bound the sampling error of $\\xb_k^{\\text{\\tiny LMC}}$ in different ways). 
We illustrate the roadmap of different proof techniques in Figure \\ref{fig:decomposition}.\nNote that their results on the error between $\\xb_k$ and $\\xb_k^{\\text{\\tiny LMC}}$ diverge as $k$ increases, due to the uncertainty of stochastic gradients. This suggests that LMC may not be a good enough auxiliary chain for studying SGLD. In contrast, our constructed auxiliary sequences (i.e., Projected SGLD and Metropolized SGLD) are closer to SGLD since they also cover the randomness of stochastic gradients (this randomness can be included as part of the transition distribution; see Section \\ref{sec:metropolized_SGLD} for more details). Therefore, our proof technique can lead to a sharper convergence analysis than those in \\citet{raginsky2017non,xu2018global}, which consequently gives a faster convergence rate of SGLD for sampling from non-log-concave distributions.\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[scale=0.4]{flowchart.pdf}\n \n \\caption{Illustration of the analysis framework of SGLD in different works: \\textcolor{blue}{\\textbf{Raginsky et al. (2017)}}, \\textcolor{red}{\\textbf{Xu et al. (2018)}}, \\textcolor{ForestGreen}{\\textbf{this work}}. The goal is to prove the convergence of the SGLD iterates $\\xb_k$ to the point following the target distribution $\\xb^\\pi$. Note that $\\xb_{k}^{\\text{\\tiny LMC}}$, $\\xb_{k}^{\\text{\\tiny Proj-SGLD}}$, and $\\xb_{k}^{\\text{\\tiny MH}}$ denote the $k$-th iterates of LMC, Projected SGLD, and Metropolized SGLD, respectively; $\\xb_t^{\\text{\\tiny LD}}$ denotes the solution of \\eqref{eq:langevin_dynamics} at time $t$; $\\xb^{\\pi_{\\text{\\tiny LMC}}}$ denotes the point following the stationary distribution of LMC. }\n \\label{fig:decomposition}\n \n\\end{figure}\n\nWe would also like to point out that while the construction of Metropolized SGLD follows the same spirit as \\citet{zhang2017hitting}, it has a different goal, and thus the corresponding analysis is not the same. 
Specifically, \\citet{zhang2017hitting} only characterizes the hitting time of SGLD to a certain set by lower bounding the restricted conductance of SGLD, but does not prove its convergence to $\\pi$. In contrast, we focus on the ability of SGLD to sample from a certain target distribution. Thus we not only need to analyze the conductance of SGLD, but also need to bound the approximation error between the distribution of $\\xb_k$ and the target one (see Lemmas \\ref{lemma:convergence_approximate} and \\ref{lemma:contraction} and their proofs for more details), which is more challenging.\nAs a consequence, we prove that the sampling error of SGLD with respect to the target distribution can be upper bounded by $O(\\sqrt{\\eta})$, while the analysis in \\citet{zhang2017hitting} can only give an $O(1)$ sampling error.\n\n\n\n\n\n\n\\subsection{Projected SGLD and Its Equivalence to SGLD}\n\nProjected SGLD is constructed by adding an extra step to standard SGLD, with the following accept\/reject rule:\n\\begin{align}\\label{eq:def_alg_accept}\n\\begin{split}\n\\xb_{k+1} =\n \\begin{cases}\n \\xb_{k+1} & \\xb_{k+1}\\in\\cB(\\xb_{k},r)\\cap\\cB(\\zero,R); \\\\\n \\xb_k & \\mbox{otherwise}.\n \\end{cases}\n \\end{split}\n\\end{align}\nThis step ensures that each new iterate $\\xb_{k+1}$ does not move too far away from the current iterate and that all iterates are restricted to a (relatively) large region $\\cB(\\zero,R)$. The entire algorithm is summarized in Algorithm \\ref{alg:projected_sgld}.\nDue to the above accept\/reject rule, Projected SGLD is slightly different from the standard SGLD algorithm (see Algorithm \\ref{alg:sgld}). However, we can show that Projected SGLD is nearly the same as SGLD given proper choices of $R$ and $r$. In particular, in the following lemma, we show that the total variation distance between the distributions of the outputs of the two algorithms can be made arbitrarily small. 
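The update and accept/reject rule above can be sketched in code as follows; the quadratic component functions $f_i$ and all hyper-parameter values are illustrative assumptions, not from the paper:

```python
import numpy as np

def projected_sgld(grad_fs, x0, eta, beta, B, R, r, K, rng):
    """Sketch of Projected SGLD: a standard SGLD update followed by the
    accept/reject rule: keep the proposal only if it moved less than r
    from x_k and stayed inside the ball B(0, R); otherwise stay at x_k."""
    n = len(grad_fs)
    x = x0.copy()
    for _ in range(K):
        idx = rng.choice(n, size=B, replace=False)            # mini-batch I
        g = np.mean([grad_fs[i](x) for i in idx], axis=0)     # stochastic gradient
        noise = rng.normal(size=x.shape)
        prop = x - eta * g + np.sqrt(2 * eta / beta) * noise  # SGLD proposal
        if np.linalg.norm(prop - x) <= r and np.linalg.norm(prop) <= R:
            x = prop                                          # accept
    return x

# Toy target (an assumption, not from the paper): f_i(x) = 0.5 ||x - a_i||^2,
# so f is an average of quadratics with stochastic gradient x - a_i.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(20, 2))
grad_fs = [lambda x, a=a: x - a for a in anchors]
sample = projected_sgld(grad_fs, x0=np.zeros(2), eta=0.01, beta=1.0,
                        B=5, R=10.0, r=1.0, K=2000, rng=rng)
print("one (approximate) sample:", sample)
```

Since the initial point lies inside $\cB(\zero,R)$ and only in-ball proposals are accepted, every iterate stays inside $\cB(\zero,R)$ by construction.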
\n\n\n\\begin{algorithm}[!t]\n\t\\caption{Projected SGLD }\n\t\\label{alg:projected_sgld}\n\t\\begin{algorithmic}\n\t\t\\STATE \\textbf{input:} step size $\\eta$; mini-batch size $B$; inverse temperature parameter $\\beta$; radius $R$, $r$; \n\t \\STATE Randomly draw $\\xb_0$ from initial distribution $\\mu_0$.\n \t\t\\FOR {$k = 0,1,\\ldots, K$}\n\t\t\\STATE Randomly pick a subset $\\cI$ from $\\{1,\\ldots,n\\}$ of size $|\\cI|=B$; randomly draw $\\bepsilon_k\\sim N(\\zero,\\Ib)$\n\t\t\\STATE Compute the stochastic gradient $\\gb(\\xb_k,\\cI) = 1\/B\\sum_{i\\in\\cI}\\nabla f_i(\\xb_k)$\n\t\t\\STATE Update: $\\xb_{k+1}=\\xb_k-\\eta \\gb(\\xb_k,\\cI)+\\sqrt{2\\eta\/\\beta}\\bepsilon_k$\n\t\t\\IF{$\\xb_{k+1}\\not\\in \\cB(\\xb_k,r)\\cap\\cB(\\zero,R)$} \n\t\t\\STATE $\\xb_{k+1}= \\xb_k$\n\t\t\\ENDIF\n\t\t\\ENDFOR \n\t\t\\STATE \\textbf{output: $\\xb_K$} \n\t\\end{algorithmic}\n\n\\end{algorithm}\n\n\\begin{lemma}\\label{lemma:connection_SGLD}\nLet $\\mu_K^{\\text{SGLD}}$ and $\\mu_K^{\\text{Proj-SGLD}}$ be distributions of the outputs of standard SGLD (Algorithm \\ref{alg:sgld}) and projected SGLD (Algorithm \\ref{alg:projected_sgld}). 
For any $\\epsilon\\in(0,1)$, we set\n\\begin{align*}\nR = \\bar R(\\epsilon K^{-1}\/4),\\quad\nr = \\sqrt{2\\eta d \/\\beta}\\big(2+\\sqrt{2\\log(8K\/\\epsilon)\/d}\\big).\n\\end{align*}\nSuppose $\\PP(\\|\\xb_0\\|_2\\ge R\/2)\\le \\epsilon\/16$ and set $\\eta\\le (LR+G)^{-2}\\beta^{-1}d$. Then we have\n\\begin{align*}\n\\big\\|\\mu_K^{\\text{SGLD}}-\\mu_K^{\\text{Proj-SGLD}}\\big\\|_{TV}\\le \\frac{\\epsilon}{4}.\n\\end{align*}\n\\end{lemma}\n\n\n\\subsection{Construction of Metropolized SGLD}\\label{sec:metropolized_SGLD}\nSince Projected SGLD restricts all iterates to the region $\\Omega:=\\cB(0,R)$, it approximately generates samples from the following truncated target distribution,\n\\begin{align}\\label{eq:def_restrict_target}\n\\begin{split}\n\\pi^{\\star}(\\text{d} \\xb) = \\begin{cases}\n \\frac{e^{-\\beta f(\\xb)}}{\\int_{\\Omega}e^{-\\beta f(\\yb)}\\text{d}\\yb}\\text{d}\\xb & \\xb\\in\\Omega; \\\\\n 0 & \\mbox{otherwise}.\n \\end{cases}\n\\end{split}\n\\end{align}\nWe will then characterize the convergence of Projected SGLD to $\\pi^{\\star}$. In particular, we will introduce a useful auxiliary Markov chain called Metropolized SGLD, i.e., SGLD with a Metropolis-Hastings step. We first give the transition distribution of the Markov chain corresponding to Projected SGLD.\n\n\n\n\n\n\n\\noindent\\textbf{Transition distribution of Projected SGLD.} Let $\\gb\\big(\\xb,\\cI\\big)$ be the stochastic gradient computed at the point $\\xb$, where $\\cI$ denotes the mini-batch of data points queried in the stochastic gradient computation. Then it is clear that Algorithm \\ref{alg:projected_sgld} can be described as a Markov process. 
More specifically, let $\\ub$ and $\\wb$ be the starting point and the point obtained after one iteration of Algorithm \\ref{alg:projected_sgld}, respectively. The Markov chain in this iteration can be written as $\\ub\\rightarrow \\vb\\rightarrow \\wb$, where $\\vb$ is generated based on the following conditional probability density function,\n\\begin{align}\\label{eq:trans_SGLD}\nP(\\vb|\\ub)= \\EE_{\\cI}[P(\\vb|\\ub,\\cI)] = \\EE_{\\cI} \\bigg[\\frac{1}{(4\\pi\\eta\/\\beta)^{d\/2}}\\exp\\bigg(-\\frac{\\|\\vb - \\ub + \\eta\\gb(\\ub,\\cI)\\|_2^2}{4\\eta\/\\beta}\\bigg)\\bigg|\\ub\\bigg],\n\\end{align}%\nwhich is exactly the transition probability of standard SGLD (i.e., without any accept\/reject step). Let $R>0$ be a tunable radius and recall that $\\Omega = \\cB(\\zero,R)$. The process $\\vb\\rightarrow \\wb$ can be formulated as\n\\begin{align}\\label{eq:markov_chain_v2w}\n\\wb =\n \\begin{cases}\n \\vb & \\vb\\in\\cB(\\ub,r)\\cap\\Omega; \\\\\n \\ub & \\mbox{otherwise}.\n \\end{cases}\n\\end{align}\nLet $p(\\ub) = \\PP_{\\vb\\sim P(\\cdot|\\ub)}[\\vb\\in\\cB(\\ub,r)\\cap\\Omega]$ be the acceptance probability in \\eqref{eq:markov_chain_v2w}, and let $Q(\\wb|\\ub)$ be the conditional PDF that describes $\\ub\\rightarrow \\wb$. 
\nThen we have\n\\begin{align*}\nQ(\\wb|\\ub) &= (1-p(\\ub))\\delta_{\\ub}(\\wb)+ P(\\wb|\\ub)\\cdot\\ind\\big[\\wb\\in\\cB(\\ub,r)\\cap\\Omega\\big],\n\\end{align*}\nwhere $P(\\wb|\\ub)$ is computed by replacing $\\vb$ with $\\wb$ in \\eqref{eq:trans_SGLD}.\nSimilar to \\cite{zhang2017hitting,dwivedi2018log}, we consider the $1\/2$-lazy version of the above Markov process, i.e., a Markov process with the following transition distribution\n\\begin{align}\\label{eq:def_trans_lz_sgld}\n\\cT_{\\ub}(\\wb) = \\frac{1}{2}\\delta_{\\ub}(\\wb) + \\frac{1}{2}Q(\\wb|\\ub),\n\\end{align}\nwhere $\\delta_\\ub(\\cdot)$ is the Dirac-delta distribution at $\\ub$.\nHowever, it is difficult to directly prove the ergodicity of the Markov process with transition distribution $\\cT_\\ub(\\wb)$, and it is also hard to tell whether its stationary distribution exists. Besides, SGLD is known to be asymptotically biased \\citep{teh2016consistency,vollmer2016exploration}, i.e., it does not converge to the target distribution $\\pi$ even when run for infinitely many steps. It thus remains unclear whether Projected SGLD can converge to the target distribution given the formula of its transition distribution. \n\n\n\\noindent\\textbf{Metropolized SGLD.}\nIn order to quantify the sampling error of the output of Projected SGLD in Algorithm \\ref{alg:projected_sgld} and prove its convergence, we follow the idea of \\cite{zhang2017hitting}, which constructs an auxiliary Markov process by adding an extra Metropolis-Hastings correction step to Algorithm \\ref{alg:projected_sgld}. We call it Metropolized SGLD.\nGiven the starting point $\\ub$, let $\\wb$ be the candidate state generated from the distribution $\\cT_\\ub(\\cdot)$. 
Metropolized SGLD will accept the candidate $\\wb$ with the following probability, \n\\begin{align*}\n\\alpha_{\\ub}(\\wb) = \\min\\bigg\\{1, \\frac{\\cT_{\\wb}(\\ub)}{\\cT_{\\ub}(\\wb)}\\cdot\\exp\\big[-\\beta\\big(f(\\wb) - f(\\ub)\\big)\\big]\\bigg\\}.\n\\end{align*}\nLet $\\cT^{\\star}_\\ub(\\cdot)$ denote the transition distribution of this auxiliary Markov process, i.e.,\n\\begin{align*}\n\\cT^{\\star}_\\ub(\\wb) = (1-\\alpha_\\ub(\\wb))\\delta_{\\ub}(\\wb) + \\alpha_\\ub(\\wb)\\cT_\\ub(\\wb).\n\\end{align*}\nIt is easy to verify that this Markov process is time-reversible. Due to the Metropolis-Hastings correction step, the Markov chain converges to a unique stationary distribution $\\pi^{\\star}\\propto e^{-\\beta f(\\xb)}\\cdot\\ind(\\xb\\in\\Omega)$ \\citep{zhang2017hitting}. It is worth pointing out that Metropolized SGLD cannot be implemented in practice: since we are only allowed to query a subset of the training data in each iteration of SGLD, we are not able to precisely calculate the acceptance probability $\\alpha_\\ub(\\wb)$, which involves an expectation over the random mini-batch of data points.\nNevertheless, we only use this auxiliary Markov chain in our theoretical analysis to show the convergence of Algorithm \\ref{alg:projected_sgld}.\n\nWe will further show that the transition distribution of Projected SGLD ($\\cT_\\ub(\\cdot)$) is $\\delta$-close to that of Metropolized SGLD ($\\cT^{\\star}_\\ub(\\cdot)$) for some small quantity $\\delta$ governed by $\\eta$, as stated in the following lemma. \n\\begin{lemma}\\label{lemma:approximation}\nUnder Assumption \\ref{assump:smooth}, let $G = \\max_{i\\in[n]} \\|\\nabla f_i(\\zero)\\|_2$, and set $r = \\sqrt{10\\eta d\/\\beta}\\big(1+\\sqrt{\\log(8K\/\\epsilon)\/d}\\big)$, where $K$ is the total number of iterations of Projected SGLD. 
Then \nthere exists a constant \\begin{align*}\n\\delta &= \\Big[10Ld\\eta +10L(LR+G)d^{1\/2}\\beta^{1\/2}\\eta^{3\/2}+ 12\\beta(LR+G)^2 d\\eta\/B+ 2\\beta^2(LR+G)^4\\eta^2\/B\\Big]\\notag\\\\\n&\\qquad \\cdot \\big(1+\\sqrt{\\log(8K\/\\epsilon)\/d}\\big)^2\n\\end{align*}\nsuch that for any set $\\cA\\subseteq \\Omega$ and any point $\\ub\\in \\Omega$,\n\\begin{align}\\label{eq:delta_approximation}\n(1-\\delta)\\cT^{\\star}_\\ub(\\cA)\\le \\cT_\\ub(\\cA)\\le (1+\\delta)\\cT^{\\star}_\\ub(\\cA). \n\\end{align}\n\\end{lemma}\n\n\n \n\n \n\\subsection{Convergence of Projected SGLD}\nIn this part, we will characterize the convergence of Projected SGLD, which consists of two steps:\n(1) given the $\\delta$-closeness result in Lemma \\ref{lemma:approximation}, we prove that Projected SGLD can converge to the truncated target distribution $\\pi^{\\star}$ up to some approximation error determined by $\\delta$; and (2) we prove that with a proper choice of the truncation radius $R$, the total variation distance between $\\pi^{\\star}$ and the target distribution $\\pi$ can be sufficiently small.\n\n\n\n\n\n\n\n\\noindent\\textbf{Convergence of Projected SGLD to $\\pi^{\\star}$.}\nWe first provide the definition of the conductance for a time-reversible Markov chain as follows.\n\\begin{definition}[Conductance]\\label{def:s-conductance}\nThe conductance of a time-reversible Markov chain with transition distribution $\\cT^{\\star}_\\ub(\\cdot)$ and stationary distribution $\\pi^{\\star}$ is defined by,\n\\begin{align*}\n\\phi: = \\inf_{\\cA: \\cA\\subseteq\\Omega, \\pi^{\\star}(\\cA)\\in(0,1)}\\frac{\\int_{\\cA}\\cT^{\\star}_\\ub(\\Omega\\backslash\\cA)\\pi^{\\star}(\\text{d} \\ub)}{\\min\\{\\pi^{\\star}(\\cA), \\pi^{\\star}(\\Omega\\backslash\\cA)\\}},\n\\end{align*}\nwhere $\\Omega$ is the support of the state of the Markov chain.\n\\end{definition}\nIn Lemma \\ref{lemma:approximation}, we have already shown that the transition distribution of Algorithm 
\\ref{alg:projected_sgld}, i.e., $\\cT_\\ub(\\cdot)$, is $\\delta$-close to that of Metropolized SGLD, i.e., $\\cT^{\\star}_\\ub(\\cdot)$, for some small quantity $\\delta$. \nBesides, from \\cite{lovasz1993random,vempala2007geometric}, we know that a time-reversible Markov chain converges to its stationary distribution at a linear rate depending on its conductance.\nTherefore, we aim to characterize the convergence rate of $\\cT_\\ub(\\cdot)$ based on the ergodicity of $\\cT^{\\star}_\\ub(\\cdot)$.\nWe utilize the conductance parameter of $\\cT^{\\star}_\\ub(\\cdot)$, denoted by $\\phi$, and establish the convergence of $\\cT_\\ub(\\cdot)$ in total variation distance in the following lemma.\n\\begin{lemma}\\label{lemma:convergence_approximate}\nLet $\\mu_K^{\\text{Proj-SGLD}}$ be the distribution of the output of Algorithm \\ref{alg:projected_sgld}. Under Assumption~\\ref{assump:smooth}, if $\\cT_\\ub(\\cdot)$ is $\\delta$-close to $\\cT^{\\star}_\\ub(\\cdot)$ with $\\delta\\le\\min\\{1-\\sqrt{2}\/2, \\phi\/16\\}$, then for any $\\lambda$-warm start initial distribution with respect to $\\pi^{\\star}$, it holds that \n\\begin{align*}\n\\|\\mu_K^{\\text{Proj-SGLD}} -\\pi^{\\star}\\|_{TV}\\le \\lambda\\big(1 - \\phi^2\/8\\big)^K + 16\\delta\/\\phi. \n\\end{align*}\n\\end{lemma}\nLemma \\ref{lemma:convergence_approximate} shows that Projected SGLD converges to $\\pi^{\\star}$ in total variation distance with an approximation error of up to $16\\delta\/\\phi$. 
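The geometric decay in this bound can be turned into a rough iteration-count calculation; the function below and all numeric values are hypothetical illustrations (note that the residual $16\delta/\phi$ acts as a bias floor that only a smaller step size, hence a smaller $\delta$, can reduce):

```python
import math

def iterations_for_tv(lam, phi, delta, target):
    """Smallest K (roughly) with lam * (1 - phi^2/8)^K + 16*delta/phi <= target.
    If the bias floor 16*delta/phi already exceeds the target, no number of
    iterations helps, and the step size (hence delta) must be reduced first."""
    floor = 16 * delta / phi
    if floor >= target:
        return None
    K = math.log(lam / (target - floor)) / (-math.log(1 - phi ** 2 / 8))
    return math.ceil(K)

# Hypothetical values for illustration only; the conductance satisfies
# phi = Omega(rho * sqrt(eta / beta)), and delta shrinks with eta.
print(iterations_for_tv(lam=10.0, phi=0.01, delta=1e-6, target=0.05))
```

Since $\phi$ scales with $\sqrt{\eta}$, shrinking the step size lowers the bias floor but inflates the required $K$, which is the trade-off resolved by the hyper-parameter choices in the corollaries.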
The next step is to characterize the conductance parameter $\\phi$ and reveal its dependence on the problem-dependent parameters, which we state in the following lemma.\n\\begin{lemma}\\label{lemma:lowerbound_phi}\nUnder Assumptions \\ref{assump:diss} and \\ref{assump:smooth}, if the step size satisfies $\\eta\\le \\big[35(Ld+(LR+G)^2\\beta d\/B)\\big]^{-1}\\wedge [25\\beta(LR+G)^2]^{-1}$, there exists an absolute constant $c_0$ such that\n\\begin{align*}\n\\phi\\ge c_0\\rho\\sqrt{\\eta\/\\beta},\n\\end{align*}\nwhere $\\rho$ is the Cheeger constant of the distribution $\\pi^{\\star}$. \n\\end{lemma}\n\\noindent\\textbf{Bounding the difference between $\\pi$ and $\\pi^{\\star}$.} \nLemmas \\ref{lemma:convergence_approximate} and \\ref{lemma:lowerbound_phi} together guarantee that Algorithm \\ref{alg:projected_sgld} converges to the truncated target distribution $\\pi^{\\star}$. Thus it only remains to ensure that $\\pi^{\\star}$ is sufficiently close to $\\pi$. The following lemma characterizes the total variation distance between the target distribution $\\pi$ and its truncated version $\\pi^{\\star}$ in $\\cB(\\zero,R)$.\n\n\\begin{lemma}\\label{lemma:approximate_target}\nFor any $\\epsilon\\in(0,1)$, set $R=\\bar R(\\epsilon\/12)$ and let $\\Omega = \\cB(\\zero,R)$ and $\\pi^{\\star}$ be the truncated target distribution in $\\Omega$. Then the total variation distance between $\\pi^{\\star}$ and $\\pi$ can be upper bounded by\n$\\|\\pi^{\\star} - \\pi\\|_{TV}\\le \\epsilon\/4$.\n\\end{lemma}\n\\begin{proof}[Proof of Theorem \\ref{thm:main_thm}]\nThe rest of the proof of Theorem \\ref{thm:main_thm} is straightforward: combine Lemmas~\\ref{lemma:connection_SGLD},~\\ref{lemma:convergence_approximate}, and \\ref{lemma:approximate_target} using the triangle inequality. 
We defer the detailed proof to Appendix \\ref{sec:proof_remaining}.\n\\end{proof}\n\n\\section{Conclusion}\nIn this paper, we proved a faster convergence rate of SGLD for sampling from a broad class of distributions that can be non-log-concave. \nIn particular, we developed a new proof technique for characterizing the convergence of SGLD. Different from existing works that mainly study the convergence of SGLD using full-gradient-based Markov chains such as LMC or the continuous Langevin dynamics,\nour proof technique relies on two auxiliary Markov chains, Projected SGLD and Metropolized SGLD, which can better capture the behavior of SGLD since they also cover the randomness of the stochastic gradients. Our proof technique is of independent technical interest and can potentially be adapted to study the convergence of other stochastic gradient-based sampling algorithms.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}