\section{Introduction}

In this paper we study the dynamics of L\'evy flights in an external multi-well
potential in the annealed regime.
We are motivated by the problem of the random search for the global
minimum of an unknown function
$U$ with the help of simulated annealing. For simplicity, we consider the one-dimensional case.
Let $U$ be a multi-well potential satisfying some regularity conditions.
Classical continuous-time simulated annealing consists in studying a time non-homogeneous Smoluchowski
diffusion
\begin{equation}
\label{eq:1}
d\hat{Z}(t)=-U'(\hat{Z}(t))dt+\hat{\sigma}(t)dW(t)
\end{equation}
with a temperature $\hat{\sigma}(t)\to 0$ as $t\to+\infty$. For small values of $\hat{\sigma}(t)$,
the process $\hat{Z}$ spends most of the time in small neighbourhoods of the
potential's local
minima and makes occasional transitions between the adjacent wells. It is possible to choose
an appropriate cooling schedule $\hat{\sigma}(t)$ such that the diffusion
settles down near the
global minimum of $U$. Indeed, one should take
$\hat{\sigma}^2(t)\approx \frac{\theta}{\ln(\lambda+t)}$, the parameter $\theta>0$ being a
cooling rate and $\lambda>1$ parameterising the initial temperature. Then there is a critical
value $\hat{\theta}>0$ such that $\hat{Z}(t)$ converges in probability to the global
minimum of $U$ if $\theta>\hat{\theta}$, and the convergence fails if $0<\theta<\hat{\theta}$.
Moreover, the critical value $\hat{\theta}$ is the logarithmic decay rate of the principal
non-zero eigenvalue $\lambda^1(\sigma)$ of the generator of the time-homogeneous diffusion
\begin{equation}
d\hat{X}(t)=-U'(\hat{X}(t))dt+\sigma dW(t),
\end{equation}
i.e.\ $|\lambda^1(\sigma)|\propto \exp(-\hat{\theta}/\sigma^2)$.
A heuristic justification for the convergence is as follows.
The principal non-zero eigenvalue $\lambda^1(\sigma)$
determines the convergence rate of $\hat{X}$ to its invariant measure
$\mu_\sigma(dy)=c_\sigma\exp(-2U(y)/\sigma^2)dy$, $c_\sigma$ being a normalising factor.
Thus, for any continuous positive function $f$ we have the estimate
\begin{equation}
\Big|\mathbf{E}_x f(\hat{X}(t)) - \int f(y)\mu_\sigma(dy)\Big|
\leq C e^{-|\lambda^1(\sigma)|t}.
\end{equation}
The weak limit of the invariant measures $\mu_\sigma(dy)$ as $\sigma\to 0$
is a Dirac mass at the potential's global minimum.
For small values of $\sigma(t)$, the dynamics of $\hat{Z}$ resembles that of $\hat{X}$.
Thus $\hat{Z}(t)$ has enough time to settle down in the deepest potential well
if $\sigma(t)$ is such that
\begin{equation}
t|\lambda^1(\sigma(t))|\to \infty \quad
\Leftrightarrow\quad
\frac{t}{(\lambda+t)^{\hat{\theta}/\theta}}\to \infty
\quad
\Leftrightarrow\quad
\theta>\hat{\theta},\quad t\to+\infty.
\end{equation}
The logarithmic decrease rate of $\sigma$ was obtained in the seminal paper
\cite{GemanG-84}.
Further mathematical results on classical continuous-time simulated
annealing can be found in \cite{ChiangHS-87,HwangS-90,HwangS-92,HolleyS-88,HolleyKS-89}.
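As a quick illustration of the classical scheme, the following is a minimal Euler--Maruyama sketch of \eqref{eq:1} with the logarithmic cooling schedule. The double-well potential $U(x)=(x^2-1)^2$ and all numerical parameters are illustrative choices, not taken from the references above.

```python
import numpy as np

rng = np.random.default_rng(0)

def U_prime(x):
    # derivative of the illustrative double-well U(x) = (x^2 - 1)^2, minima at -1 and +1
    return 4.0 * x * (x * x - 1.0)

theta, lam = 2.0, np.e          # cooling rate theta and initial-temperature parameter lambda
dt, n_steps = 1e-3, 200_000
x = 2.5                          # start away from both minima
for k in range(n_steps):
    sigma2 = theta / np.log(lam + k * dt)   # sigma^2(t) = theta / ln(lambda + t)
    x += -U_prime(x) * dt + np.sqrt(sigma2 * dt) * rng.standard_normal()
# after cooling, x fluctuates in a neighbourhood of one of the minima -1, +1
```

As the temperature decreases, excursions over the barrier become rare and the trajectory is pinned near a well bottom.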
We also refer the reader to the review paper \cite{Gidas-95} and further references
therein.



Our research is motivated by the paper \cite{SzuH-87} by Szu and Hartley, where they
introduced the
so-called \textit{fast simulated annealing},
which allows one to perform a non-local search for the deepest well.
The fast simulated annealing process in the sense of \cite{SzuH-87} is a discrete-time
Markov chain whose states are
obtained from the Euler approximation of \eqref{eq:1} driven not by Gaussian noise but by
\textit{Cauchy} noise.
The new state is accepted according to the Metropolis algorithm
(see \cite{MetropolisRRT-53}) with the Boltzmann acceptance probability, which equals $1$
if the potential value at this state is smaller, i.e.\ the new position is `lower'
in the potential landscape. If the new position is `higher', it is accepted with probability
$\sim \exp(-\Delta U/\sigma)$, where $\Delta U$ is the difference of the potential values at the
new and the old states, and $\sigma$ is a decreasing `temperature' parameter. The
advantage of this method consists in faster transitions between the potential wells due to
the
heavy tails of the Cauchy distribution. Moreover, the authors claim that the optimal cooling
rate is algebraic, $\sigma(t)\approx t^{-1}$, which also accelerates convergence.

In this paper, we consider a continuous-time
L\'evy flights counterpart of the diffusion \eqref{eq:1}.
Our goal is to study the asymptotic properties of the system as a function of the
cooling schedule. We point out that in regimes where a L\'evy flights
process converges to some limiting distribution, it does not locate the global minimum
of $U$, but reveals the spatial structure of the potential.

This paper contains a heuristic derivation of results
which are proved rigorously in \cite{Pavlyukevich-06a}.
It can be seen as a sequel to \cite{ImkellerP-06,ImkellerP-06a,ImkellerP-06b}, where the
small-noise dynamics of L\'evy flights in external potentials was studied. We emphasise
that our methods are purely probabilistic.


\section{Object of study and results}

\subsection{L\'evy flights}

Let $L=(L(t))_{t\geq 0}$ be a L\'evy flights (LF) process of index $\alpha\in(0,2)$,
i.e.\ a non-Gaussian stable symmetric L\'evy
process with marginals having the Fourier transform
\begin{equation}
\label{eq:f}
\mathbf{E} e^{i\omega L(t)}=e^{-c(\alpha)t|\omega|^\alpha},
\quad c(\alpha)=2\int_0^\infty\frac{1-\cos{y}}{y^{1+\alpha}}\,dy=2|\cos(\tfrac{\pi\alpha}{2})\Gamma(-\alpha)|.
\end{equation}
In our analysis we shall use the L\'evy--Khinchin representation of the characteristic function of $L(t)$, namely
\begin{equation}
\label{eq:lh}
\mathbf E e^{i\omega L(t)}=
\exp\left\lbrace t
\int_{\mathbb R\backslash\{0\}} \left[ e^{i\omega y}-1
-i\omega y \I{|y|\leq 1}\right]
\frac{dy}{|y|^{1+\alpha}}\right\rbrace,
\end{equation}
where $\I{A}$ denotes the indicator function of a set $A$.
The most important ingredient of the representation
\eqref{eq:lh} is the so-called \textit{L\'evy (jump) measure} of the random process
$L$ given by
\begin{equation}
\nu(A)=\int_{A\backslash\{0\}}\frac{dy}{|y|^{1+\alpha}}, \quad
A\,\,\mbox{ a Borel set in }\,\,\mathbb{R}.
\end{equation}
Note that some authors prefer another parametrisation of LFs, with the Fourier transform $\exp(-t|\omega|^\alpha)$.
In this paper we use the representation \eqref{eq:f}
because of the simpler form of the L\'evy measure.

The measure $\nu$ controls the intensity and sizes of the jumps of
the L\'evy flights process. Let $\Delta L(t)=L(t)-L(t-)$ be the random jump
size of $L$ at a time instant $t$, $t>0$, and let the number of jumps with sizes in the set $A$ on the time interval $(0,t]$ be denoted
by $N(t, A)$, i.e.\
\begin{equation}
N(t,A)=\sharp\{s: (s,\Delta L(s))\in (0,t]\times A\}.
\end{equation}
Then the random variable $N(t, A)$ has a Poisson distribution with mean
$t\nu(A)$ (which can possibly be infinite).
Note that for any stability index $\alpha\in(0,2)$, the L\'evy measure of
any neighbourhood of $0$ is infinite, hence LFs make infinitely many very
small jumps on any time interval.
The tails of the L\'evy measure $|y|^{-1-\alpha}\,dy$
determine the big jumps of LFs. Thus, $\mathbf{E}|L(t)|^\delta<\infty$, $t>0$,
iff $\int_{|y|\geq 1}|y|^\delta\nu(dy)<\infty$
iff $\delta<\alpha$.


\subsection{External potential}

We assume that the external potential $U$ is smooth and has $n$ local minima $m_i$ and $n-1$ local maxima $s_i$, enumerated in increasing order, i.e.\
\begin{equation}
-\infty=s_0<m_1<s_1<m_2<\cdots<s_{n-1}<m_n<s_n=+\infty.
\end{equation}
All extrema are supposed to be non-degenerate, i.e.\ $U''(m_i)>0$ and $U''(s_i)<0$, and the potential
increases fast at infinity, i.e.\
$|U'(x)|>|x|^{1+c}$, $|x|\to\infty$, for some $c>0$.

Under these assumptions on $U$, the deterministic dynamical system
\begin{equation}
\label{eq:x0}
X^0_x(t)=x-\int_0^t U'(X^0_x(u))\, du
\end{equation}
has $n$ domains of attraction $\Omega_i=(s_{i-1},s_i)$ with asymptotically stable
attractors $m_i$. We note that if $x\in \Omega_i$ then $X^0_x(t)\in \Omega_i$ for all $t\geq 0$,
i.e.\ the deterministic trajectory cannot pass between different domains of attraction.
Denote by $B_i=\{x:|m_i-x|\leq \Delta\}$ the $\Delta$-neighbourhood of the attractor $m_i$.
We suppose that $\Delta$ is small enough so that $B_i\subset \Omega_i$, $1\leq i\leq n$.
Due to the rapid increase of $U'$ at infinity, the return of $X^0_x(t)$ from $\pm\infty$
to $B_1$ or $B_n$ occurs in finite time.
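The relaxation of the deterministic system \eqref{eq:x0} to the well bottoms is easy to check numerically. A minimal Euler sketch, assuming the illustrative choice $U'(x)=(x+2)(x+1)x(x-1)(x-2)$, i.e.\ minima $m=(-2,0,2)$ and saddles $s=(-1,1)$:

```python
import numpy as np

def U_prime(x):
    # illustrative triple-well landscape: minima at -2, 0, 2, saddles at -1, 1
    return (x + 2) * (x + 1) * x * (x - 1) * (x - 2)

def flow(x, dt=1e-3, n_steps=50_000):
    """Euler integration of X^0_x(t) = x - int_0^t U'(X^0_x(u)) du."""
    for _ in range(n_steps):
        x -= U_prime(x) * dt
    return x
```

Starting anywhere inside a domain of attraction $\Omega_i$, the trajectory settles at the corresponding minimum $m_i$ and never crosses a saddle: `flow(-2.7)`, `flow(0.6)` and `flow(1.4)` end up at $-2$, $0$ and $2$ respectively.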
\subsection{Small constant temperature}

First, we consider L\'evy flights $L$ in the potential $U$ in the regime of
small constant temperature.
The resulting random dynamics is described by the stochastic differential equation
\begin{equation}
X^\varepsilon_x(t)=x-\int_0^t U'(X^\varepsilon_x(u-))\, du+\varepsilon L(t), \quad x\in\mathbb{R},\,\,t\geq 0.
\end{equation}
The properties of $X^\varepsilon$ are studied in our previous
paper \cite{ImkellerP-06a} in the case of a double-well potential.
The general multi-well case is studied in \cite{ImkellerP-06b}. Below we formulate
the results.

For any sufficiently small $\Delta>0$, in the limit $\varepsilon\to 0$ the process $X^\varepsilon$ spends an overwhelming
proportion of time in the set $\cup_{i=1}^n B_i$, making occasional abrupt jumps between
different neighbourhoods $B_i$.
Thus, knowledge of the transition times and probabilities is essential
for understanding the asymptotic properties of $X^\varepsilon$.
Let $T^i_x(\varepsilon)=\inf\{t\geq 0: X^\varepsilon_x(t)\in \cup_{j\neq i}B_j\}$.
For $x\in B_i$, the stopping time
$T^i_x(\varepsilon)$ denotes the first transition time to a $\Delta$-neighbourhood of
a minimum of a different well.
Then we have the following result.
\begin{theorem}[constant temperature, transitions]
\label{th:T}
For $x\in B_i$, $1\leq i\leq n$, the following estimates hold in the limit $\varepsilon\to 0$:
\begin{align}
\label{eq:t1}
&\mathbf{P}_x\left( X^\varepsilon(T^i(\varepsilon))\in B_j\right) \to\frac{q_{ij}}{q_i}, \quad i\neq j,\\
\label{eq:t2}
&\varepsilon^\alpha T^i_x(\varepsilon)\stackrel{d}{\to} \exp(q_i),\\
\label{eq:t3}
& \varepsilon^\alpha \mathbf{E}_x T^i(\varepsilon)\to \frac{1}{q_i},
\end{align}
where
\begin{align}
\label{eq:q}
q_{ij}&
=\frac{1}{\alpha}
\left|\frac{1}{|s_{j-1}-m_i|^\alpha}-\frac{1}{|s_j-m_i|^\alpha}\right|,\quad i\neq j,\\
q_i&=\sum_{j\neq i}q_{ij}=\frac{1}{\alpha}\left( \frac{1}{|s_{i-1}-m_i|^\alpha}
+\frac{1}{|s_i-m_i|^\alpha}\right),
\end{align}
and ``$\stackrel{d}{\to}$'' denotes convergence in distribution.
\end{theorem}

As we see, the transition times between the wells of $X^\varepsilon$ are asymptotically
exponentially distributed in the limit of small noise, and hence \textit{unpredictable},
due to the memoryless property of the exponential law.
The transition probabilities between the wells are noise-independent and strictly positive.
Thus, $X^\varepsilon$ resembles a Markov process on a finite state space. Indeed, the following theorem
holds.
\begin{theorem}[constant temperature, metastability]
\label{th:meta}
If $x\in \Omega_i$,
$1\leq i\leq n$, then for $t>0$
\begin{equation}
X^\varepsilon_x\left( \frac{t}{\varepsilon^\alpha}\right) \to Y_{m_i}(t),\quad \varepsilon\to 0,
\end{equation}
in the sense of finite-dimensional distributions, where $Y=(Y_y(t))_{t\geq 0}$ is a Markov process on the state space $\{m_1,\dots, m_n\}$ with the infinitesimal generator $Q=(q_{ij})_{i,j=1}^n$,
$q_{ij}$ being defined in \eqref{eq:q}, $q_{ii}=-q_i$.
\end{theorem}

Since none of the entries $q_{ij}$ vanishes, the limiting Markov process $Y$ has a unique invariant distribution
$\pi=(\pi_1,\dots,\pi_n)^T$, which can be calculated from the matrix equation $Q^T\pi=0$.


\subsection{Decreasing temperature}

In the annealed regime, the dynamics of L\'evy flights is characterised by
the time non-homogeneous equation
\begin{equation}
\label{eq:Z}
Z^\lambda_{s,z}(t)=z-\int_s^t U'(Z^\lambda_{s,z}(u-))\, du+\int_s^t \frac{dL(u)}{(\lambda+u)^\theta},
\quad z\in\mathbb{R},\,\,0\leq s\leq t,
\end{equation}
where the positive parameter $\theta$ denotes the \textit{cooling rate}, and $\lambda>0$ determines the initial
temperature, which equals $(\lambda+s)^{-\theta}$.

It is easily seen from \eqref{eq:Z} that the evolution of the process starting
at time $s\geq 0$ is the same as that of the process starting at time zero with a different initial temperature, namely
\begin{equation}
\label{eq:mp}
(Z^\lambda_{s,z}(s+t))_{t\geq 0}\stackrel{d}{=}(Z^{\lambda+s}_{0,z}(t))_{t\geq 0},
\end{equation}
and thus the particular values of $s$ or $\lambda$ do not influence the asymptotic
properties of the
process in the limit $t\to\infty$.
However, since our theory will work
for low temperatures, it is often convenient to study
the dynamics not for large values of $s$ and $t$ but for large
values of $\lambda$.

The goal of this paper is to study the limiting behaviour of $Z^\lambda_{0,z}(t)$ as $t\to \infty$
as a function of the cooling rate $\theta$, the `initial temperature' $\lambda$ and the initial
point $z$.



Similarly to the classical Gaussian case discussed in the introduction,
the candidate for the limiting law of $Z^\lambda_{0,z}(t)$ is the invariant
distribution $\pi$ of the Markov process $Y$ from Theorem~\ref{th:meta}.
Furthermore, we have to distinguish between two different cooling regimes.

As in the previous section, for $1\leq i\leq n$, consider the stopping times
\begin{equation}
\tau_{s,z}^{i,\lambda}=\inf\{u\geq s:Z_{s,z}^\lambda(u)\in \cup_{j\neq i}B_j\}.
\end{equation}
If $z\in B_i$ then $\tau_{s,z}^{i,\lambda}$ denotes the \textit{transition time} from the
$\Delta$-neighbourhood of $m_i$ to a $\Delta$-neighbourhood of some other minimum of the potential.
For all $j\neq i$ we also consider the corresponding
\textit{transition probabilities}
$\mathbf{P}_{s,z}(Z^\lambda(\tau^{i,\lambda})\in B_j)$. Then the following analogue of Theorem~\ref{th:T}
holds.
\begin{theorem}[slow cooling, transitions]
\label{th:tau}
Let $\theta<1/\alpha$. For $z\in B_i$, $1\leq i\leq n$, the following estimates hold in the
limit of small initial temperature, i.e.\ as $\lambda\to+\infty$:
\begin{align}
\label{eq:p}
&\mathbf{P}_{0,z}(Z^\lambda(\tau^{i,\lambda})\in B_j)
\to\frac{q_{ij}}{q_i},\quad i\neq j,\\
\label{eq:tau}
&\frac{\mathbf{E}_{0,z} \tau^{i,\lambda}}{\lambda^{\alpha\theta}}\to \frac{1}{q_i},
\end{align}
$q_i$ and $q_{ij}$ being defined in \eqref{eq:q}.
\end{theorem}

\begin{theorem}[slow cooling, convergence]
\label{th:slow}
Let $\theta<1/\alpha$.
Then for any $\lambda>0$ and $z\in\mathbb{R}$, the law of $Z_{0,z}^\lambda(t)$ converges weakly to
the measure $\pi$, i.e.\
for any continuous and bounded function $f$ we have
\begin{equation}
\mathbf{E}_{0,z}f(Z^\lambda(t))\to \sum_{j=1}^n f(m_j)\pi_j, \quad t\to\infty.
\end{equation}
\end{theorem}

If the cooling rate $\theta$ is above the threshold $1/\alpha$, the solution $Z^\lambda$ gets trapped in one of the wells, and thus the convergence fails. Consider the first exit time
from the $i$-th well
\begin{equation}
\sigma^{i,\lambda}_{s,z}=\inf\{t\geq s: Z^\lambda_{s,z}(t)\notin \Omega_i \}.
\end{equation}

Then the following trapping result holds.
\begin{theorem}[fast cooling, trapping]
\label{th:fast}
Let $\theta>1/\alpha$. For $z\in B_i$, $1\leq i\leq n$,
\begin{equation}
\mathbf{P}_{0,z}(\sigma^{i,\lambda}<\infty)
=\mathcal{O}\left(\frac{1}{\lambda^{\alpha\theta-1}}\right),\quad\lambda\to\infty.
\end{equation}
Consequently, $\mathbf{E}_{0,z}\sigma^{i,\lambda}=\infty$.
\end{theorem}

In the subsequent sections we sketch the proofs of Theorems~\ref{th:tau}--\ref{th:fast} and discuss the results.

\section{Predominant behaviour of the annealed process}

Our study of the random process $Z^\lambda$ is based on a probabilistic analysis of its
sample paths. We use a decomposition
of the process $L$ into small- and big-jump parts similar to the one used in \cite{ImkellerP-06a}.
Thus we refer the reader to that paper for details, and sketch the idea briefly.

\subsection{Big and small jumps of a L\'evy flights process}

With the help of the L\'evy--Khinchin formula \eqref{eq:lh}, we decompose the
process $L$ into a sum of two independent L\'evy
processes with relatively small and big jumps.
For any cooling rate $\theta>0$, we
introduce two new L\'evy measures by setting
\begin{align}
\nu_\xi^\lambda(A) &= \nu\left(A\cap \{x:|x|\leq \lambda^{\theta/2}\}\right),\\
\nu_\eta^\lambda(A) &= \nu\left(A\cap \{x:|x|> \lambda^{\theta/2}\}\right),
\end{align}
and two L\'evy processes $\xi^\lambda$ and $\eta^\lambda$
with the corresponding Fourier transforms
\begin{align}
\mathbf E e^{i\omega \xi^\lambda_t}&=
\exp\left\lbrace
t\int_{\mathbb R\backslash\{0\}} \left[e^{i\omega y}-1-i\omega y \I{|y|\leq 1}\right]
\nu_\xi^\lambda(dy) \right\rbrace,\\
\mathbf E e^{i\omega \eta^\lambda_t}&=
\exp\left\lbrace
t\int_{\mathbb R\backslash\{0\}} \left[ e^{i\omega y}-1-i\omega y \I{|y|\leq 1}\right]
\nu_\eta^\lambda(dy)\right\rbrace.
\end{align}
It is clear that the processes $\xi^\lambda$ and $\eta^\lambda$ are independent and
$L=\xi^\lambda+\eta^\lambda$.

Since $\nu^\lambda_\xi(\mathbb{R})=\infty$, the process
$\xi^\lambda$ makes infinitely many jumps on each time
interval. Its jumps are, however, bounded by the threshold
$\lambda^{\theta/2}$, i.e.\ $|\Delta\xi_t^\lambda|\leq \lambda^{\theta/2}$.
Thus $\xi^\lambda_t$ has a finite variance and, more generally, moments of all
orders.


On the contrary, the L\'evy measure of the process
$\eta^\lambda$ is finite, and its total mass equals
\begin{equation}
\beta_\lambda=\nu_\eta^\lambda(\mathbb{R})
=\int_{-\infty}^{-\lambda^{\theta/2}} \frac{dy}{|y|^{1+\alpha}}
+\int_{\lambda^{\theta/2}}^\infty \frac{dy}{y^{1+\alpha}}
=2\int_{\lambda^{\theta/2}}^\infty \frac{dy}{y^{1+\alpha}}
=\frac{2}{\alpha}\lambda^{-\alpha\theta/2}.
\end{equation}
Hence, $\eta^\lambda$ is a compound Poisson process with jumps
of absolute value larger than $\lambda^{\theta/2}$.
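Being compound Poisson, $\eta^\lambda$ is elementary to simulate: inter-arrival times are exponential with mean $\beta_\lambda^{-1}$, and jump sizes follow the normalised tail of $\nu$, which can be sampled by inverse transform as $|W|=\lambda^{\theta/2}U^{-1/\alpha}$ with $U$ uniform on $(0,1)$ and an independent random sign. A minimal sketch (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, theta, lam = 1.0, 0.5, 1e4
b = lam ** (theta / 2)                                # jump threshold lambda^(theta/2)
beta = (2.0 / alpha) * lam ** (-alpha * theta / 2)    # total mass beta_lambda of nu_eta

def sample_big_jumps(t_max):
    """Arrival times and jump sizes of the compound Poisson part eta^lambda on (0, t_max]."""
    taus, jumps = [], []
    t = rng.exponential(1.0 / beta)
    while t <= t_max:
        u = rng.uniform()
        # inverse transform for the Pareto tail: P(|W| > x) = (x / b)^(-alpha), x >= b
        w = rng.choice([-1.0, 1.0]) * b * u ** (-1.0 / alpha)
        taus.append(t)
        jumps.append(w)
        t += rng.exponential(1.0 / beta)
    return np.array(taus), np.array(jumps)
```

Every sampled jump exceeds the threshold $\lambda^{\theta/2}$ in absolute value, and the empirical mean inter-arrival time matches $\beta_\lambda^{-1}$.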
Let $\tau^\lambda_k$ and
$W^\lambda_k$, $k\geq 0$, be the jump arrival times and jump sizes of $\eta^\lambda$
under the
convention $\tau^\lambda_0=W^\lambda_0=0$. Then the inter-arrival times
$T^\lambda_k=\tau^\lambda_k-\tau^\lambda_{k-1}$, $k\geq 1$, are independent and
exponentially distributed with mean $\beta_\lambda^{-1}$. The jump sizes $W^\lambda_k$
are also independent random variables, with
the probability distribution function given by
\begin{equation}
\label{eq:w}
\mathbf{P}(W^\lambda_k< u)
=\frac{\nu_\eta^\lambda((-\infty,u))}{\nu_\eta^\lambda(\mathbb{R})}
=\frac{1}{\beta_\lambda}\int_{-\infty}^u \I{|y|> \lambda^{\theta/2}}
\frac{dy}{|y|^{1+\alpha}}.
\end{equation}
Finally, we can represent the random perturbation in \eqref{eq:Z} as a sum of two processes, namely,
\begin{equation}
\int_0^t\frac{dL(u)}{(\lambda+u)^\theta}
=\int_0^t\frac{d\xi^\lambda_u}{(\lambda+u)^\theta}
+\sum_{k=1}^\infty \frac{W^\lambda_k}{(\lambda+\tau^\lambda_k)^\theta}
\I{t\geq \tau^\lambda_k}.
\end{equation}



\subsection{Predominant behaviour}

Consider now the process $Z^\lambda_{0,z}$ given by equation
\eqref{eq:Z}. On the inter-arrival intervals $[\tau^\lambda_{k-1}, \tau^\lambda_k)$,
$k\geq 1$, it is driven only by the process
$\varphi_t^\lambda=\int_0^t (\lambda+u)^{-\theta}d\xi^\lambda_u$,
and at the time instants $\tau^\lambda_k$ it
makes jumps of size $W^\lambda_k/(\lambda+\tau^\lambda_k)^\theta$.
Recall that the jumps of
$\xi^\lambda$ are bounded by $\lambda^{\theta/2}$, hence the jump sizes
of $\varphi^\lambda$
tend to zero as $\lambda\to\infty$ uniformly in $t\geq 0$, i.e.\
\begin{equation}
|\Delta \varphi^\lambda_t|\leq \frac{\lambda^{\theta/2}}{(\lambda+t)^\theta}\leq \frac{1}{\lambda^{\theta/2}}.
\end{equation}
The variance of $\varphi_t^\lambda$ tends to zero in the
limit of large $\lambda$, and the random trajectory $Z^\lambda_{0,z}(t)$
can be seen as a small random perturbation of the deterministic
trajectory $X^0_z(t)$ of the underlying dynamical system on the
intervals $[\tau^\lambda_{k-1},\tau^\lambda_k)$.
Consider a well $\Omega_i$ with minimum $m_i$, and let the initial point $z$ be
away from the unstable points $s_{i-1}$ and $s_i$, namely
$z\in(s_{i-1}+\lambda^{-\gamma}, s_i-\lambda^{-\gamma})$ for some positive $\gamma$.
Then the deterministic trajectory $X_z^0(t)$ reaches the $\lambda^{-\gamma}$-neighbourhood of $m_i$ in at most logarithmic time $\mathcal{O}(\ln\lambda)$.
Since the periods between the big jumps are substantially longer, i.e.\
\begin{equation}
\mathbf{E} T_k^\lambda=\mathbf{E}(\tau_{k}^\lambda-\tau_{k-1}^\lambda)
= \frac{\alpha}{2}\lambda^{\alpha\theta/2} \gg \mathcal{O}(\ln\lambda),
\end{equation}
we can show that, with probability close to $1$, the random trajectory $Z^\lambda$ is located
in a small neighbourhood of $m_i$ before the big jump.
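The timescale separation behind this argument is easy to quantify: the mean waiting time between big jumps grows polynomially in $\lambda$, while the relaxation time is only logarithmic. A quick numerical check (the values of $\alpha$ and $\theta$ are illustrative, chosen in the slow cooling regime):

```python
import numpy as np

alpha, theta = 1.5, 0.6                 # slow cooling: alpha * theta = 0.9 < 1
ratios = []
for lam in (1e4, 1e8, 1e12):
    mean_wait = (alpha / 2.0) * lam ** (alpha * theta / 2.0)  # E T_k = 1 / beta_lambda
    relax = np.log(lam)                 # deterministic relaxation time is O(ln lambda)
    ratios.append(mean_wait / relax)
```

The ratio of waiting time to relaxation time grows without bound as $\lambda\to\infty$, so between consecutive big jumps the trajectory has ample time to settle near the bottom of its well.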
Thus we can summarise the pathwise behaviour of $Z^\lambda_{0,z}$ for large values of
$\lambda$ as follows:
\begin{equation}
\begin{aligned}
\label{eq:t}
&Z^\lambda_{0,z}(0)=z\in (s_{i-1}+\lambda^{-\gamma}, s_i-\lambda^{-\gamma}),\\
&Z^\lambda_{0,z}(\tau^\lambda_1-)\approx m_i,\\
&Z^\lambda_{0,z}(\tau^\lambda_1)\approx
m_i+\frac{W_1^\lambda}{(\lambda+\tau_1^\lambda)^\theta}\in (s_{i-1}+\lambda^{-\gamma}, s_i-\lambda^{-\gamma}),\\
&Z^\lambda_{0,z}(\tau^\lambda_2-)\approx m_i,\\
&\cdots\\
&Z^\lambda_{0,z}(\tau^\lambda_k-)\approx m_i,\\
&Z^\lambda_{0,z}(\tau^\lambda_k)\approx
m_i+\frac{W_k^\lambda}{(\lambda+\tau_k^\lambda)^\theta}\in (s_{j-1}+\lambda^{-\gamma}, s_j-\lambda^{-\gamma}),
\,\,j\neq i,\\
&Z^\lambda_{0,z}(\tau^\lambda_{k+1}-)\approx m_j,\\
&\cdots
\end{aligned}
\end{equation}
whereas on the intervals $[\tau^\lambda_{k-1},\tau^\lambda_k)$ the process $Z^\lambda$ follows the
deterministic trajectory $X^0$.
Thus, since we know the initial location of the particle,
as well as the jump sizes $W_k^\lambda$ and jump times $\tau_k^\lambda$,
we can capture the essential features of the random path $Z^\lambda$.


Of course, we have to be careful when dealing with trajectories which occasionally enter
the $\lambda^{-\gamma}$-neighbourhoods of the saddle points $s_i$, where the force field
$U'$ becomes insignificant. In these neighbourhoods, the L\'evy particle has no strong
deterministic drift which brings it to a certain well's minimum. Thus we cannot decide
whether $Z^\lambda$ converges to $m_i$ or to $m_{i+1}$. However, in the limit
$\lambda\to\infty$, the probability that $Z^\lambda$ jumps from a
neighbourhood of $m_i$ to a
$\lambda^{-\gamma}$-neighbourhood of some $s_j$ is negligible.
In our further
exposition, we do not consider the unstable dynamics in these
$\lambda^{-\gamma}$-neighbourhoods
and assume that \eqref{eq:t} holds for all $z\in\Omega_i$.
Interested readers can find rigorous arguments in \cite{Pavlyukevich-06a}.


\section{Transitions between the wells in the slow cooling regime\label{s:kl}}

In this section we justify the limits \eqref{eq:p} and \eqref{eq:tau} from
Theorem~\ref{th:tau}.

First, we obtain the mean value of the first exit time $\sigma^{i,\lambda}$
from the well $\Omega_i$ in the limit $\lambda\to\infty$.
Indeed, $Z^\lambda$ can, roughly speaking, leave $\Omega_i$ only
at one of the time instants $\tau^\lambda_k$ when
$m_i+W^\lambda_k/(\lambda+\tau^\lambda_k)^\theta\notin \Omega_i$.
We can therefore calculate the mean value of $\sigma^{i,\lambda}$
using the law of total probability:
\begin{equation}
\label{eq:m}
\begin{aligned}
&\mathbf{E}_{0,z}\sigma^{i,\lambda}
\approx \sum_{k=1}^{\infty}
\mathbf{E}\left[\tau^\lambda_k \I{ \sigma^{i,\lambda}=\tau^\lambda_k} \right] \\
&\approx\sum_{k=1}^{\infty}
\mathbf{E}\left[\tau^\lambda_k\cdot
\I{m_i+ \frac{W^\lambda_1}{(\lambda+\tau_1^\lambda)^\theta}\in \Omega_i,\dots,
m_i+ \frac{W^\lambda_{k-1}}{(\lambda+\tau_{k-1}^\lambda)^\theta}\in \Omega_i,
m_i+ \frac{W^\lambda_k}{(\lambda+\tau_k^\lambda)^\theta}\notin \Omega_i
}\right].
\end{aligned}
\end{equation}
Since the arrival times $\tau_1^\lambda,\tau_2^\lambda,\dots, \tau_k^\lambda$ are dependent, no straightforward
calculation of the
expectations in the latter sum seems possible.
However, we can estimate these expectations
from above and below.
Our argument is based on the inequalities
$0<\tau_1^\lambda< \tau_2^\lambda<\cdots<\tau_k^\lambda$ and the obvious inclusions
\begin{equation}
\label{eq:inc}
\left\lbrace m_i+ \frac{W^\lambda_{j}}{\lambda^\theta}\in \Omega_i\right\rbrace
\subseteq
\left\lbrace m_i+ \frac{W^\lambda_{j}}{(\lambda+\tau_{j}^\lambda)^\theta}\in \Omega_i\right\rbrace
\subseteq
\left\lbrace m_i+ \frac{W^\lambda_{j}}{(\lambda+\tau_{k}^\lambda)^\theta}\in \Omega_i\right\rbrace,
\,\,
1\leq j\leq k-1,
\end{equation}
where the probabilities of such events can be
calculated explicitly from \eqref{eq:w}; for instance,
\begin{equation}
\mathbf{P}\left( m_i+\frac{W^\lambda_1}{(\lambda+t)^\theta}
\notin \Omega_i\right)
=\frac{1}{\beta_\lambda}
\left( \int_{-\infty}^{-|m_i-s_{i-1}|(\lambda+t)^\theta}
+\int_{(s_i-m_i)(\lambda+t)^\theta}^\infty \right) \frac{dy}{|y|^{1+\alpha}}
=\frac{q_i}{\beta_\lambda(\lambda+t)^{\alpha\theta}}.
\end{equation}


\subsection{Mean transition time}

Let us obtain an estimate from above.
Note that for each $k\geq 1$, the arrival time
$\tau^\lambda_k$ is a sum of $k$ independent
exponentially distributed random variables $T^\lambda_j$ and thus has a $\mbox{Gamma}(k,\beta_\lambda)$ distribution
with probability density
$\beta_\lambda e^{-\beta_\lambda t}(\beta_\lambda t)^{k-1}/(k-1)!$, $t\geq 0$.
Then, applying the second inclusion in \eqref{eq:inc}, we obtain
\begin{equation}
\begin{aligned}
&\mathbf{E}\left[\tau^\lambda_k\cdot
\I{m_i+ \frac{W^\lambda_1}{(\lambda+\tau_1^\lambda)^\theta}\in \Omega_i,\dots,
m_i+ \frac{W^\lambda_{k-1}}{(\lambda+\tau_{k-1}^\lambda)^\theta}\in \Omega_i,
m_i+ \frac{W^\lambda_k}{(\lambda+\tau_k^\lambda)^\theta}\notin \Omega_i
}\right]\\
&\leq \mathbf{E}\left[\tau^\lambda_k\cdot
\I{m_i+ \frac{W^\lambda_1}{(\lambda+\tau_k^\lambda)^\theta}\in \Omega_i,\dots,
m_i+ \frac{W^\lambda_{k-1}}{(\lambda+\tau_{k}^\lambda)^\theta}\in \Omega_i,
m_i+ \frac{W^\lambda_k}{(\lambda+\tau_k^\lambda)^\theta}\notin \Omega_i
}\right]\\
&=\int_0^\infty \beta_\lambda t e^{-\beta_\lambda t}\frac{(\beta_\lambda t)^{k-1}}{(k-1)!}
\mathbf{P}\left( m_i+ \frac{W^\lambda_1}{(\lambda+t)^\theta}
\in \Omega_i\right)^{k-1}
\mathbf{P}\left( m_i+ \frac{W^\lambda_1}{(\lambda+t)^\theta}
\notin \Omega_i\right) dt\\
&=\int_0^\infty \beta_\lambda t e^{-\beta_\lambda t}\frac{(\beta_\lambda t)^{k-1}}{(k-1)!}
\left[1-\frac{q_i}{\beta_\lambda(\lambda+t)^{\alpha\theta}} \right]^{k-1}
\frac{q_i}{\beta_\lambda(\lambda+t)^{\alpha\theta}} dt.
\end{aligned}
\end{equation}
Summation over $k$ yields
\begin{equation}
\begin{aligned}
\mathbf{E}_{0,z}\sigma^{i,\lambda}
&\lesssim
\int_0^\infty \beta_\lambda t e^{-\beta_\lambda t} \frac{q_i}{\beta_\lambda(\lambda+t)^{\alpha\theta}} \sum_{k=1}^{\infty}
\frac{(\beta_\lambda t)^{k-1}}{(k-1)!}
\left[1-\frac{q_i}{\beta_\lambda(\lambda+t)^{\alpha\theta}} \right]^{k-1} dt\\
&=\int_0^\infty \frac{q_i t }{(\lambda+t)^{\alpha\theta}}
\exp\left( -\frac{q_i t }{(\lambda+t)^{\alpha\theta}} \right) dt,
\end{aligned}
\end{equation}
where `$\lesssim$' denotes an inequality up to negligible error terms.
Since $\alpha\theta<1$, the latter integral converges for all $\lambda>0$, and it is
possible to evaluate it in the limit $\lambda\to+\infty$.
Introducing the new variable $u=\frac{\lambda+t}{\lambda}$, we transform it
to a Laplace type integral with a large parameter,
which can be evaluated asymptotically (see \cite[Chapter 3]{Olver-74}), i.e.\
\begin{equation}
\begin{aligned}
\mathbf{E}_{0,z}\sigma^{i,\lambda}
&\lesssim
\lambda^{2-\alpha\theta}\int_1^\infty \frac{q_i(u-1)}{u^{\alpha\theta}}
\exp\left( -\frac{q_i (u-1)\lambda^{1-\alpha\theta} }{u^{\alpha\theta}} \right) du\\
&\approx \lambda^{2-\alpha\theta}\int_1^\infty q_i(u-1)
\exp\left( -q_i (u-1)\lambda^{1-\alpha\theta} \right) du\\
&=q_i^{-1}\lambda^{\alpha\theta}.
\end{aligned}
\end{equation}
Applying analogously the first inclusion from \eqref{eq:inc} to the first $k-1$ jumps,
we obtain the estimate from below:
\begin{equation}
\begin{aligned}
&\mathbf{E}_{0,z}\sigma^{i,\lambda}
\gtrsim \sum_{k=1}^{\infty}
\mathbf{E}\left[\tau^\lambda_k\cdot
\I{m_i+ \frac{W^\lambda_1}{\lambda^\theta}\in \Omega_i,\dots,
m_i+ \frac{W^\lambda_{k-1}}{\lambda^\theta}\in \Omega_i,
m_i+ \frac{W^\lambda_k}{(\lambda+\tau_k^\lambda)^\theta}\notin \Omega_i
}\right]\\
&\geq \sum_{k=1}^{\infty}\int_0^\infty t\beta_\lambda e^{-\beta_\lambda t}
\frac{(\beta_\lambda t)^{k-1}}{(k-1)!}
\left[1-\frac{q_i}{\beta_\lambda \lambda^{\alpha\theta}} \right]^{k-1}
\frac{q_i}{\beta_\lambda(\lambda+t)^{\alpha\theta}} dt\\
&=\int_0^\infty \frac{q_i t }{(\lambda+t)^{\alpha\theta}}
\exp\left( -\frac{q_i t }{\lambda^{\alpha\theta}} \right) dt
= \lambda^{2-\alpha\theta} \int_1^\infty \frac{q_i(u-1)}{u^{\alpha\theta}}
\exp\left( -q_i(u-1)\lambda^{1-\alpha\theta} \right) du\\
&\approx \lambda^{2-\alpha\theta}\int_1^\infty q_i(u-1)
\exp\left( -q_i (u-1)\lambda^{1-\alpha\theta} \right) du
=q_i^{-1}\lambda^{\alpha\theta} .
\end{aligned}
\end{equation}
Remarkably, the estimates from below and above coincide asymptotically, and thus give the asymptotic value
of the mean lifetime of the slowly cooled L\'evy particle in a potential well.

To obtain the limit \eqref{eq:tau} for the mean transition time $\tau^{i,\lambda}$
between the sets $B_i$ and $\cup_{j\neq i}B_j$,
we note that
at the exit time $\sigma^{i,\lambda}$ the process $Z^\lambda$ enters one of the
wells $\Omega_j$, $j\neq i$, then with high probability follows the deterministic trajectory and reaches the $\Delta$-neighbourhood
of the well's minimum in a time of order $\mathcal{O}(\ln\lambda)$,
which is negligible in comparison with
$\lambda^{\alpha\theta}$. Thus the limit \eqref{eq:tau} holds.

\subsection{Transition probability}


To calculate the transition probabilities between the wells, it suffices to obtain an estimate from
below.
Similarly to the estimate of the mean exit time, we have\n\\begin{equation}\n\\begin{aligned}\n&\\mathbf{P}_{0,z}(Z^\\lambda(\\sigma^{i,\\lambda})\\in\\Omega_j)\n\\gtrsim \\sum_{k=1}^{\\infty}\n\\mathbf{P}\\left(m_i+ \\frac{W^\\lambda_1}{\\lambda^\\theta}\\in \\Omega_i,\\dots, \nm_i+ \\frac{W^\\lambda_{k-1}}{\\lambda^\\theta}\\in \\Omega_i,\nm_i+ \\frac{W^\\lambda_k}{(\\lambda+\\tau_k^\\lambda)^\\theta}\\in \\Omega_j\\right)\\\\\n&\\geq \\sum_{k=1}^{\\infty}\\int_0^\\infty \\beta_\\lambda e^{-\\beta_\\lambda t}\n\\frac{(\\beta_\\lambda t)^{k-1}}{(k-1)!}\n\\left[1-\\frac{q_i}{\\beta_\\lambda \\lambda^{\\alpha\\theta}} \\right]^{k-1} \n\\frac{q_{ij}}{\\beta_\\lambda(\\lambda+t)^{\\alpha\\theta}} dt\\\\\n&=\\int_0^\\infty \\frac{q_{ij} }{(\\lambda+t)^{\\alpha\\theta}} \n\\exp\\left( -\\frac{q_i t }{\\lambda^{\\alpha\\theta}} \\right) dt\n= \\lambda^{1-\\alpha\\theta} \\int_1^\\infty \\frac{q_{ij}}{u^{\\alpha\\theta}}\n\\exp\\left( -q_{i}(u-1)\\lambda^{1-\\alpha\\theta} \\right) du\\\\\n&\\approx \\lambda^{1-\\alpha\\theta}\\int_1^\\infty q_{ij}\n\\exp\\left( -q_i (u-1)\\lambda^{1-\\alpha\\theta} \\right) du\n=q_{ij}q_i^{-1}.\n\\end{aligned}\n\\end{equation}\nWith the help of the equality $\\sum_{j\\neq i}q_{ij}q_i^{-1}=1$, we conclude that\n$\\mathbf{P}_{0,z}(Z^\\lambda(\\sigma^{i,\\lambda})\\in\\Omega_j)\\to q_{ij}q_i^{-1}$.\nFinally, since after entering $\\Omega_j$, the process $Z^\\lambda$ reaches $B_j$ with high probability,\n$\\mathbf{P}_{0,z}(Z^\\lambda(\\sigma^{i,\\lambda})\\in\\Omega_j)\\approx\n \\mathbf{P}_{0,z}(Z^\\lambda(\\tau^{i,\\lambda})\\in B_j)$, and \\eqref{eq:p} holds.\n\n\\section{Convergence in the slow cooling regime}\n\nFigure~\\ref{f:slow} illustrates the typical behaviour of $Z$ in the slow cooling \nregime, $\\alpha\\theta<1$. 
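The limit $q_{ij}q_i^{-1}$ of the transition probability can be checked the same way; the sketch below (illustrative values, not from the paper) evaluates the integral $\\int_0^\\infty q_{ij}(\\lambda+t)^{-\\alpha\\theta} \\exp( -q_i t \\lambda^{-\\alpha\\theta}) dt$ numerically.

```python
import math

def transition_prob_integral(q_ij, q_i, at, lam, steps=400000):
    """Trapezoidal evaluation of
    int_0^inf q_ij/(lam+t)**at * exp(-q_i*t/lam**at) dt,
    which should approach q_ij/q_i for large lam (at < 1)."""
    t_max = 40.0 * lam**at / q_i    # exponential cutoff scale
    h = t_max / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        f = q_ij / (lam + t)**at * math.exp(-q_i * t / lam**at)
        total += f / 2.0 if k in (0, steps) else f
    return total * h

# hypothetical rates with q_ij/q_i = 0.4
p_est = transition_prob_integral(0.4, 1.0, 0.5, 1.0e6)
print(p_est)  # close to 0.4
```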
\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.9\\textwidth]{150-049.eps}\n\\end{center}\n\\caption{A slow cooling of a L\\'evy particle in a potential with \nlocal minima at $-4$, $0$ and $3$.\\label{f:slow}}\n\\end{figure}\nRoughly speaking, we can distinguish two different behaviours: chaotic and regular.\n\n1.\\ In the general case, the initial temperature $\\lambda^{-\\theta}$ can be high, so that \nthe asymptotics of Theorem~\\ref{th:tau} does not hold. \nThus, the transitions of $Z^\\lambda$ are chaotic until some time instant $T$ at which \nthe temperature $(\\lambda+T)^{-\\theta}$ becomes low enough for Theorem~\\ref{th:tau} \nto apply.\nMoreover, by choosing the time $T$ sufficiently large, we\nmake the transition probabilities of $Z$ between the neighbourhoods $B_i$ \nclose to $q_{ij}\/q_i$ with any prescribed precision.\nFor simplicity, we also assume that \n$Z^\\lambda_{0,z}(T)\\in B_i$ for some $i$. \n\n2.\\\nDenote by $\\tau(k)$, $k\\geq 0$, the successive transition times after $T$ between \ndifferent $B_j$, $1\\leq j\\leq n$, with $\\tau(0)=T$ by convention. \nThe mean values $\\mathbf{E}_{0,z}\\tau(k)$ are finite and can be calculated from Theorem~\\ref{th:tau}. \nIndeed, if $\\tau(k-1)=t_{k-1}$ and $Z^\\lambda(\\tau(k-1))=z_{k-1}\\in B_i$, then\nthe conditional expectation of $\\tau(k)$ equals\n\\begin{equation}\n\\begin{aligned}\n&\\mathbf{E}_{0,z}\\left[\\tau(k)|\\tau(k-1)=t_{k-1},Z^\\lambda(\\tau(k-1))=z_{k-1}\\in B_i\\right]\n=t_{k-1}+\\mathbf{E}_{t_{k-1},z_{k-1}}\\tau^{i,\\lambda}\\\\\n&=t_{k-1}+\\mathbf{E}_{0,z_{k-1}}\\tau^{i,\\lambda+t_{k-1}}\n\\approx t_{k-1}+q_i^{-1}(\\lambda+t_{k-1})^{\\alpha\\theta}<\\infty.\n\\end{aligned}\n\\end{equation}\nFrom the time instant $T$ on, $Z^\\lambda$ makes transitions between \nthe wells with probabilities close to $p_{ij}=q_{ij}\/q_i$, $i\\neq j$, where \n$p_{ii}=0$ by convention. 
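The conditional-expectation recursion above can be iterated directly as $t_k = t_{k-1}+q_i^{-1}(\\lambda+t_{k-1})^{\\alpha\\theta}$; the minimal sketch below (hypothetical values $\\lambda=100$, $q_i=1$, $\\alpha\\theta=0.5$) illustrates that every mean inter-transition time is finite while the sojourns in the wells become longer and longer.

```python
# Iterate the mean-transition-time recursion from the text,
#   t_k = t_{k-1} + (1/q) * (lam + t_{k-1})**at,
# with hypothetical parameters (q plays the role of q_i).
lam, q, at = 100.0, 1.0, 0.5
times = [0.0]
for _ in range(50):
    times.append(times[-1] + (lam + times[-1])**at / q)

gaps = [b - a for a, b in zip(times, times[1:])]
assert all(g > 0 for g in gaps)   # each mean is finite
assert gaps[-1] > gaps[0]         # ...but the sojourns grow
print(round(times[1], 3), round(times[-1], 3))
```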
The probabilities $p_{ij}$ determine a discrete time Markov chain $V(k)$ on \n$\\{m_1,\\dots, m_n\\}$, such that $\\mathbf{P}(V(k)=m_j|V(k-1)=m_i)=p_{ij}$. \nIt is clear that\n$V$ has the unique invariant distribution $\\pi$. Moreover, $V(k)$ converges to the\ninvariant distribution geometrically fast, i.e.\\ there is $0<\\rho<1$ such \nthat for all $1\\leq i,j\\leq n$ and $k\\geq 0$\n\\begin{equation}\n\\label{eq:V}\n|\\mathbf{P}_{m_i}(V(k)=m_j)-\\pi_j|=\\mathcal{O}({\\rho^k}).\n\\end{equation}\nWith the help of the asymptotic relation \n\\begin{equation}\n\\mathbf{P}(Z^\\lambda(\\tau(k))\\in B_j|Z^\\lambda(\\tau(k-1))\\in B_i)\\approx p_{ij}=\\mathbf{P}(V(k)=m_j|V(k-1)=m_i),\n\\end{equation}\none can show that the distributions of $Z(\\tau(k))$ and $V(k)$ are also close\nfor $k\\geq 1$, i.e.\\ \n\\begin{equation}\n\\mathbf{P}(Z^\\lambda_{0,z}(\\tau(k))\\in B_j|Z^\\lambda_{0,z}(T)\\in B_i)\\approx\\mathbf{P}_{m_i}(V(k)=m_j).\n\\end{equation}\nHence, with the help of \\eqref{eq:V}, for any prescribed accuracy level we can find $k_0\\geq 1$ such that\nfor $k\\geq k_0$ we have\n\\begin{equation}\n\\mathbf{P}_{0,z}(Z^\\lambda(\\tau(k))\\in B_j)\\approx\\pi_j\n\\end{equation}\nindependently of the initial point $z$.\n\nFinally, we note that\nafter time $T$, the process $Z^\\lambda$ spends most of the time in the \nneighbourhoods $B_i$, and \n\\begin{equation}\nZ^\\lambda_{0,z}(t)\\approx Z^\\lambda_{0,z}(\\tau(k)),\\quad \\mbox{for }\\, \nt\\in [\\tau(k),\\tau(k+1)). \n\\end{equation}\nThus, if $t\\geq \\tau(k_0)$, then $\\mathbf{P}_{0,z}(Z^\\lambda(t)\\in B_j)\\approx \\pi_j$.\n\nAs we see, if $\\alpha\\theta<1$,\nthe process $Z^\\lambda$ resembles a piecewise constant jump process on the state space\n$\\{m_1,\\dots, m_n\\}$. 
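The geometric convergence \\eqref{eq:V} of the embedded chain $V(k)$ is easy to observe numerically. In the sketch below the rates $q_{ij}$ for three wells are hypothetical (not data from the paper); $\\pi$ is obtained by power iteration, and the distance of the distribution of $V(k)$ to $\\pi$ is tracked over $k$.

```python
# Embedded chain V(k): p_ij = q_ij / q_i with p_ii = 0.
# The rates q_ij below are hypothetical (three wells).
q = {(0, 1): 2.0, (0, 2): 1.0,
     (1, 0): 1.0, (1, 2): 3.0,
     (2, 0): 2.0, (2, 1): 2.0}
n = 3
q_i = [sum(q[i, j] for j in range(n) if j != i) for i in range(n)]
P = [[q[i, j] / q_i[i] if j != i else 0.0 for j in range(n)]
     for i in range(n)]

def step(dist):
    # one step of the chain: dist <- dist * P
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# invariant distribution pi by power iteration
pi = [1.0, 0.0, 0.0]
for _ in range(200):
    pi = step(pi)

# distance to pi decays like rho^k (for these rates, |rho| is about 0.54)
dist, errs = [1.0, 0.0, 0.0], []
for _ in range(7):
    dist = step(dist)
    errs.append(max(abs(dist[j] - pi[j]) for j in range(n)))
assert errs[-1] < 0.05 and errs[-1] < errs[0]
print([round(x, 4) for x in pi])
```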
It\nnever stops jumping between the wells of $U$, and the \nrandom sequence $Z^\\lambda(\\tau(k))$, $k\\geq k_0$, behaves as a stationary \ndiscrete time Markov chain with distribution $\\pi$.\n\n\n\\section{Trapping in the fast cooling regime}\n\nThe regime of fast cooling, $\\theta>1\/\\alpha$, is simpler.\nWe estimate the probability of exit from a well. \nSince the exit occurs with high\nprobability only at the arrival times of the big jump process $\\eta^\\lambda$, we estimate\n\\begin{equation}\n\\begin{aligned}\n&\\mathbf{P}_{0,z}(\\sigma^{i,\\lambda}<\\infty)\\approx\n\\sum_{k=1}^\\infty\\mathbf{P}_{0,z}(\\sigma^{i,\\lambda}=\\tau_k^\\lambda)\\\\\n&\\lesssim\\sum_{k=1}^\\infty\n\\mathbf{P}\\left(\nm_i+ \\frac{W^\\lambda_1}{(\\lambda+\\tau_1^\\lambda)^\\theta}\\in \\Omega_i,\\dots, \nm_i+ \\frac{W^\\lambda_{k-1}}{(\\lambda+\\tau_{k-1}^\\lambda)^\\theta}\\in \\Omega_i,\nm_i+ \\frac{W^\\lambda_k}{(\\lambda+\\tau_k^\\lambda)^\\theta}\\notin \\Omega_i\n\\right)\\\\\n&\\leq\\sum_{k=1}^\\infty\n\\mathbf{P}\\left(m_i+ \\frac{W^\\lambda_k}{(\\lambda+\\tau_k^\\lambda)^\\theta}\\notin \\Omega_i\n\\right)\n=\\int_0^\\infty \\beta_\\lambda e^{-\\beta_\\lambda t} \\frac{q_i}{\\beta_\\lambda(\\lambda+t)^{\\alpha\\theta}} \\sum_{k=1}^\\infty\\frac{(\\beta_\\lambda t)^{k-1}}{(k-1)!} dt\\\\\n&=\\int_0^\\infty \\frac{q_i }{(\\lambda+t)^{\\alpha\\theta}} dt\n=\\frac{q_i}{\\alpha\\theta-1}\\frac{1}{\\lambda^{\\alpha\\theta-1}}\\to 0, \\quad \\lambda\\to\\infty.\n\\end{aligned}\n\\end{equation} \nAs a consequence, we have infinite mean exit times $\\mathbf{E}_{0,z}\\sigma^{i,\\lambda}=\\infty$.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.9\\textwidth]{150-067.eps}\n\\end{center}\n\\caption{A fast cooling of a L\\'evy particle in a potential with \nlocal minima at $-4$, $0$ and $3$.\\label{f:fast}}\n\\end{figure}\nIn other words, if $\\theta>1\/\\alpha$, the dynamics of $Z^\\lambda$ has two\nqualitatively different regimes. 
First, for high temperatures, the\ntransitions between the wells are chaotic. Second, when the temperature is low enough,\nthe particle gets trapped in one of the wells, see Figure~\\ref{f:fast}. \nIn this case, there is no convergence\nto the invariant measure $\\pi$.\n\n\n\\section{Conclusion and discussion}\n\nIn this paper we studied the large time dynamics of a L\\'evy particle \nin a multi-well external potential, with the temperature decreasing with time\nas $1\/t^\\theta$. \n\nWe discovered \nthat if the cooling is slow, i.e.\\ $\\theta<1\/\\alpha$, then the system reaches a\nquasi-stationary regime where the transition probabilities between \nthe wells converge to certain values which are explicitly\ndetermined in terms of the potential's spatial geometry. \nMoreover, the mean transition times are finite, \nand between the transitions the process lives in small neighbourhoods of the wells' minima. \nAs opposed to the Gaussian simulated annealing, the L\\'evy flights process does not settle\ndown near the global minimum of $U$. However, our results can be applied \nto the search for the global minimum of \npotentials which possess the so-called\n``large-rims-have-deep-wells'' property, see \\cite{Schoen-97,Locatelli-02}, i.e.\\ when the \nspatially largest well is at the same time the deepest. \nThen, having empirical estimates of the local minima locations $m_i$ and the invariant distribution $\\pi$, we\ncan derive the coordinates of the saddle points $s_i$, and thus reconstruct the sizes of the wells.\n\nOn the other hand, if the cooling is fast, i.e.\\ $\\theta>1\/\\alpha$, the L\\'evy particle\ngets trapped in one of the wells when the temperature decreases below some critical level.\n\nIn this paper we do not answer the most interesting question: \nis it possible to detect the global minimum of $U$ with\nthe help of a non-local search and L\\'evy flights? 
The answer to this question is affirmative, and the results will be presented \nin our forthcoming paper. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section*{References}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\chapter*{Supplementary Material}\n\\section*{Part A}\n\\textit{In this section we show how the Euler-Lagrange equation (8)\ngenerates the Bose condensate excitation modes (10).}\\\\\nIt is convenient to define the following time independent spinors\n\\begin{equation}\n\\tag{A.1}\nu=\\begin{pmatrix}\n\\cos\\frac{\\theta_0}{2}\\\\\n\\sin\\frac{\\theta_0}{2}e^{i\\varphi_0}\n\\end{pmatrix} \\ , \\ \\ \\ \n\\bar{u}= \\begin{pmatrix}\n-\\sin\\frac{\\theta_0}{2}\\\\\n\\cos\\frac{\\theta_0}{2}e^{i\\varphi_0}\n\\end{pmatrix}\\ .\n\\end{equation}\nHence the Bose condensate field (6) is $z_0=A_0 u$.\nSubstitution of ansatz (9) in the Euler-Lagrange equation (8)\nresults in the following algebraic equation for the ``amplitude'' spinors\n$z_{+}$ and $z_{-}$\n\\begin{align*}\n\\tag{A.2}\n(-\\omega^2+k^2)z_{+} + 2\\omega (\\vec{\\sigma}\\cdot\\vec{{\\cal B}})z_{+} + ({\\cal B}^2-m^2)(z_{-}^\\dag u + u^\\dag z_{+})u&=0\\\\\n(-\\omega^2+k^2)z_{-} - 2\\omega (\\vec{\\sigma}\\cdot\\vec{{\\cal B}})z_{-} + ({\\cal B}^2-m^2)(z_{+}^\\dag u + u^\\dag z_{-})u&=0.\n\\end{align*}\nProjecting each equation (A.2), onto both $u$ and $\\bar{u}$, and using the following definitions;\n\\begin{align*}\n\\tag{A.3}\nu^\\dag z_{+} &= a_1 &&a_1 + a_2^*= a_+\\\\\nu^\\dag z_{-} &= a_2 &&a_1 - a_2^*=a_-\\\\\n\\bar{u}^\\dag z_{+} &= b_1 &&b_1+b_2^*\\,=b_+\\\\\n\\bar{u}^\\dag z_{-} &= b_2 &&b_1-b_2^*\\,=b_-\\\\\n\\end{align*}\nwe get the following matrix equation for the amplitudes\n$a_+$, $a_-$, $b_+$, $b_-$,\n \\begin{align*}\n\\tag{A.4}\\begin{pmatrix} \nk^2 - \\omega^2 + 2({\\cal B}^2-m^2) & 2\\omega {\\cal B}\\cos\\theta_0& 0 & -2\\omega {\\cal B}\\sin\\theta_0 \\\\\n2\\omega {\\cal B}\\cos\\theta_0& k^2 - \\omega^2& -2\\omega {\\cal B}\\sin\\theta_0&0\\\\\n0& -2\\omega {\\cal 
B}\\sin\\theta_0 & k^2 - \\omega^2& -2\\omega {\\cal B}\\cos\\theta_0\\\\\n-2\\omega {\\cal B}\\sin\\theta_0 & 0 &- 2\\omega{\\cal B}\\cos\\theta_0&k^2 - \\omega^2\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\na_+\\\\\na_-\\\\\nb_+\\\\\nb_-\\\\\n\\end{pmatrix}\n=0.\n\\end{align*}\nFrequencies of normal modes (10) immediately follow from this matrix \nequation.\nAmplitude vectors $( a_+, a_-, b_+, b_-)$,\ncorresponding to the normal modes are of the following form\n\\begin{align*}\n&|1\\rangle \\propto \n\\begin{pmatrix}\n0\\\\\n\\sin\\theta_0\\\\\n1\\\\\n\\cos\\theta_0\\\\\n\\end{pmatrix}; \\ \\ \\ \\\n|2\\rangle \\propto \n\\begin{pmatrix}\nb_2\\\\\n-\\cos\\theta_0\\\\\n0\\\\\n\\sin\\theta_0\\\\\n\\end{pmatrix}; \\ \\ \\ \\\n|3\\rangle \\propto\n\\begin{pmatrix}\n0\\\\\n\\sin\\theta_0\\\\\n-1\\\\\n\\cos\\theta_0\\\\\n\\end{pmatrix}; \\ \\ \\ \\\n|4\\rangle \\propto\n\\begin{pmatrix}\nb_4\\\\\n-\\cos\\theta_0\\\\\n0\\\\\n\\sin\\theta_0\\\\\n\\end{pmatrix}\\\\\n\\tag{A.5}\n&b_2=\\frac{\\sqrt{(3{\\cal B}^2-m^2)^2+4{\\cal B}^2k^2}-(3{\\cal B}^2-m^2)}\n{2{\\cal B}\\omega_2}\\nonumber\\\\\n&b_4=\\frac{-\\sqrt{(3{\\cal B}^2-m^2)^2+4{\\cal B}^2k^2}-(3{\\cal B}^2-m^2)}\n{2{\\cal B}\\omega_4}\\nonumber\n\\end{align*}\nAt small momenta, $k\\to 0$, the coefficients behave as\n$b_2 \\to 0$, $b_4 \\to const \\ne 0$.\nNote that there is no naive orthogonality of ``eigenvectors'', in particular \n$\\langle 2|4\\rangle \\ne 0$.\nA similar nonorthogonality is typical for classical (non-quantum)\ncoupled oscillators in magnetic field.\nIn spite of the nonorthogonality, after an appropriate canonical transformation\nthe Hamiltonian is transformed to the sum of independent Hamiltonians\nof the normal modes.\n\n\\newpage\n\\section*{Part B}\n\\textit{\nIn this section we calculate spin vibrations (11),(12)\ncorresponding to different excitation modes.}\\\\\nUsing Eqs. 
(A.1) and (A.3) we find\n\\begin{align*}\n\\tag{B.1}\nz_+&\\propto \\begin{pmatrix}\n[(a_{+}+a_{-})\\cos\\frac{\\theta_0}{2} - (b_{+}+b_{-})\\sin\\frac{\\theta_0}{2}]\\\\\n[(a_{+}+a_{-})\\sin\\frac{\\theta_0}{2} + (b_{+}+b_{-})\\cos\\frac{\\theta_0}{2}]e^{i\\varphi_0}\\\\\n\\end{pmatrix}\\\\\nz_-&\\propto \\begin{pmatrix}\n[(a_{+}-a_{-})\\cos\\frac{\\theta_0}{2} - (b_{+}-b_{-})\\sin\\frac{\\theta_0}{2}]\\\\\n[(a_{+}-a_{-})\\sin\\frac{\\theta_0}{2} + (b_{+}-b_{-})\\cos\\frac{\\theta_0}{2}]e^{i\\varphi_0}\n\\end{pmatrix}\\\\\n\\end{align*}\nFrom here, using\n\\begin{align*}\n\\tag{B.2}\n\\delta\\vec{\\zeta}&=\\delta z^{\\dag}\\vec{\\sigma}z_0 + z_0^{\\dag}\\vec{\\sigma}\\delta z,\n\\end{align*}\ntogether with Eqs.(9) and(A.5) we find explicit formulas for\n the spin vibration vectors \n$\\delta {\\vec \\zeta}$ presented in Eqs. (11) and (12).\n\n\n\n\\end{document}\n\n\n\n\n\n\n\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAhlswede et~al.\\ \\cite{ahlswede00}\nproposed the notion of network coding\nthat multicasts data from a single sender to\nmultiple receivers at a rate at which\nthe ordinary store and forward routing cannot\nmulticast the data.\nSuch high rate multicast becomes feasible by allowing\nintermediate nodes to encode and decode the data.\nA sender is usually called a source and a receiver is called\na sink.\nA network coding is said to be linear if\nevery intermediate node outputs a linear combination\nof its inputs \\cite{li03}.\n\nA study of network coding usually assumes that\nan error does not occur in networks.\nRecently, Cai and Yeung \\cite{yeung06a,yeung06b}\nconsidered errors in network coding,\nand proposed the network error correcting codes\nthat allow sinks to recover the information even\nwhen errors occur on intermediate edges in the network.\nAfter formulating the network error correction,\nthey proposed the lower and upper bounds\non the number of messages in a network $\\alpha$-error 
correcting\ncode, \\change{and one of their upper bounds was a\nnatural generalization of the Singleton bound\nfor the ordinary error-correcting codes.\nRecently,\nZhang \\cite{zhang06} and Yang et~al.~\\cite{yang07b}\nindependently observed that the Singleton bound\ncan be refined.\nWe note that the problem formulation in \\cite{yeung06a,yeung06b}\nwas later independently presented in \\cite{jaggi05}.\n(The proceedings paper of \\cite{yeung06a,yeung06b} appeared\nin 2002.)}\n\nCai and Yeung mostly considered the case that\nintermediate nodes perform only simple encoding and\ndecoding without delay, such as computing the output of the node\nas a linear combination of its inputs, and the sinks\nperform complex decoding computation.\nThe network error correcting codes\n\\change{can} avoid introducing decoding computation and delay into\nintermediate nodes, which is an advantage \nover the use of ordinary error correcting codes\nbetween nodes.\n\nNote that\na similar type of network\nfailure in a slightly different context was considered in\n\\cite[Sect.\\ V]{koetter03}\nand\n\\cite[Sect.\\ VI]{sanders05}\nin which\nevery sink is assumed to know the set of failed edges\nand failed edges are assumed to emit zero symbols.\nNetwork error correction does not assume\nthe knowledge of edges causing errors,\nand the problem formulation is different from \\cite{koetter03,sanders05}.\nNote also that Kurihara \\cite{kurihara06} considered a different\nnotion of robustness. 
In his paper, he considered network coding\nthat allows sinks to recover partial information with edge failures.\n\n\nFor the construction of the network error-correcting\ncodes, Jaggi et~al.~\\cite{jaggi07}\nproposed a randomized construction that uses\ncoding among different time intervals.\nTheir method produces\ncodes that attain the Singleton bound \nwith high probability for sufficiently long\nblock lengths, where the block length\nrefers to the number of time intervals\namong which coding is done.\nIt is desirable to have a network\nerror-correcting code that does not\ncode among different time intervals and\nthus does not introduce delay.\nConcurrently with this paper,\nYang et~al.~\\cite{yang07b}\nproposed an explicit construction algorithm\nthat produces codes attaining the\nrefined Singleton bound.\nThe idea in \\cite{yang07b} is similar to ours\nin the sense that they also regard errors as information\nfrom the source and add extra components to the global\nencoding vectors corresponding to errors.\n\n\nIn this paper,\nwe give a \\change{deterministic and\ncentralized} algorithm that constructs\na network error-correcting code that \\change{attains}\nthe Singleton bound \\change{of} network error-correcting codes\nobtained in \\cite{yeung06a}.\n\\change{We also give a relationship between\nthe success probability and the field size\nfor successful construction of network error-correcting\ncodes when intermediate nodes choose their encoding\ncoefficients randomly and independently.}\nThe proposed algorithms are based on \\cite{sanders05}.\nOur network error-correcting codes\nmake multicast robust to errors without introducing\ndelay in the transmission, which is very attractive to\ndelay sensitive multicast applications, such as\nmulticast of video or audio.\nOur method is also useful for cryptographic applications,\nbecause it can tolerate modification and deletion of data\nby an adversary.\n\nThis paper is organized as follows.\nSection 2 introduces 
notations and the model of errors.\nSection 3 proposes an algorithm for constructing\nnetwork error-correcting codes \\change{attaining}\nthe Singleton bound.\n\\change{Section 4 shows how to modify the algorithm\nin Sect.\\ 3 to attain the refined Singleton bound,\nthe success probability of the random\nconstruction of network error-correcting codes,\nand the relationship between\nthe robust network coding \\cite{koetter03,sanders05}\nand the network error-correcting\ncodes with known locations of errors \\cite{yang07}.}\nSection 5 gives concluding remarks.\n\n\\section{Preliminary}\n\\subsection{Basic notations}\nWe consider an acyclic directed graph $G=(V,E)$ with possible parallel\nedges of unit capacity.\n$V \\ni s$ denotes the source and $V \\supset T$ denotes\nthe set of sinks.\nLet $n$ be the smallest min-cut separating $s$ from any $t\\in T$\n\\change{throughout this paper.}\nFor $v \\in V$, $\\Gamma^+(v)$ (resp.\\ $\\Gamma^-(v)$) denotes the \nset of edges leaving (resp.\\ reaching) the node $v$,\nand $\\mathrm{start}(e)$ (resp.\\ $\\mathrm{end}(e)$)\ndenotes the node at which the edge $e$ starts (resp.\\ ends).\n\nWe consider linear coding over a finite field $\\mathbf{F}_q$ with\n$q$ elements.\nThe source $s$ gets $k$ ($\\leq n$) input symbols from $\\mathbf{F}_q$.\nThe symbol $y(e)\\in \\mathbf{F}_q$ carried by an edge $e$ is a linear combination of\nthe symbols carried by the edges entering $\\mathrm{start}(e)$.\nThe \\emph{local \\change{encoding} vector\n$m_e: \\Gamma^-(\\mathrm{start}(e))\n\\rightarrow\\mathbf{F}_q$} determines the coefficients of\nthis linear combination, that is,\n\\[\ny(e) = \\sum_{e'\\in \\Gamma^-(\\mathrm{start}(e))} m_e(e')y(e').\n\\]\n\nIn this paper,\na nonsink node performs only the computation of linear combination\nof its inputs, and they do not correct errors.\nAn error is assumed to occur always at an edge.\nWhen an error occurs at an edge $e$,\nthe symbol received by $\\mathrm{end}(e)$ is different from\none sent by 
the one sent by $\\mathrm{start}(e)$,\nand $\\mathrm{end}(e)$ computes its outputs as if there were no error\nat $e$. The error value at an edge $e$\nis defined by the received symbol minus\nthe transmitted symbol at $e$.\nNote that we express\na failure of a node $v \\in V$ in a real network as\nerrors on edges in $\\Gamma^+(v)$ in our model.\nThe number of errors is the number of edges at which errors occur.\nA network code is said to correct $\\alpha$ errors\nif every sink can recover the original information\nsent by the source when $\\alpha$ or less\nerrors occur at arbitrary edges.\n\\change{We call the recovery of information by a sink \\emph{decoding}.}\n\nWe represent the errors that occur in the whole network\nby a vector $\\vec{e}$ in $\\mathbf{F}_q^{|E|}$,\nwhere $|E|$ denotes the number of elements in $E$.\nFix some total ordering on $E$; then enumerating\nthe error values gives $\\vec{e}$.\n\nRegarding the number of messages in a network $\\alpha$-error\ncorrecting code, Cai and Yeung obtained the following result.\n\\begin{proposition}\\label{lemsingleton} \\textnormal{\\cite{yeung06a}}\nThe number $M$ of messages\nin a network $\\alpha$-error\ncorrecting code, not necessarily linear, is upper bounded by\n\\[\nM \\leq q^{n-2\\alpha}.\n\\]\n\\end{proposition}\n\n\nVery recently, Zhang~\\cite{zhang06} and\nYang et~al.~\\cite{yang07b} observed that the above\nproposition can be refined as follows.\n\n\\begin{proposition}\\label{lemsingleton2} \\textnormal{\\cite{zhang06,yang07b}}\nLet $n_t$ be the min-cut from the source $s$ to a sink $t$.\nIf the sink $t$ can correct any $\\alpha_t$ errors then\nthe number $M$ of messages\nin the network error \ncorrecting code, not necessarily linear, is upper bounded by\n\\[\nM \\leq q^{n_t-2\\alpha_t}.\n\\]\n\\end{proposition}\n\n\n\\subsection{Jaggi et~al.'s algorithm for construction of\nan ordinary network \\change{code}}\nIn this subsection,\nwe review Jaggi et~al.'s algorithm \\cite{sanders05} for construction of\nan ordinary network 
code.\nThe proposed algorithm uses a modified version of their algorithm.\n\nSince linear coding is used,\nthe information carried by an edge $e$\nis a linear combination of $k$ information symbols in $\\mathbf{F}_q$.\nWe can characterize the effect of all the local\n\\change{encoding} vectors on an edge $e$ independently of the\nconcrete $k$ information symbols using \\emph{global\n\\change{encoding} vectors $\\vec{b}(e) \\in \\mathbf{F}_q^k$}.\nWhen the information from the source is $\\vec{i} \\in \\mathbf{F}_q^k$,\nthe transmitted symbol on an edge $e$ is equal to the inner product\nof $\\vec{i}$ and $\\vec{b}(e)$.\n\\change{In order to decide the encoding at the source node $s$,\nwe have to introduce an imaginary source $s'$ and\n$k$ edges of unit capacity from $s'$ to $s$.\nWe regard $s'$ as sending $k$ symbols to $s$ over the\n$k$ edges.}\n\nWe initially compute an \\change{$s'$-$t$} flow $f^t$\nof magnitude $k$ for each $t\\in T$ and decompose this flow\ninto $k$ edge disjoint paths from \\change{$s'$} to $t$.\nIf an edge $e$ is on some flow path $W$ from \\change{$s'$} to\n$t$, let $f_\\leftarrow^t(e)$ denote the predecessor edge of\nthe edge $e$ on the path $W$.\nJaggi et~al.'s algorithm steps through \nthe nodes $v\\in V$ in a topological order\ninduced by the directed graph $G$.\nThis ensures that the global \\change{encoding} vectors of all\nedges reaching $v$ are known when the local \\change{encoding} vectors\nof the edges leaving $v$ are determined.\nThe algorithm defines the coefficients of $m_e$\nfor one edge $e\\in\\Gamma^+(v)$ after the other.\nThere might be multiple flow paths to different sinks\nthrough an edge $e$. 
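Since $y(e)$ is a linear combination of the inputs, the global encoding vectors obey the same recursion, $\\vec{b}(e)=\\sum_{e'} m_e(e')\\vec{b}(e')$, and can be computed edge by edge in topological order. A minimal sketch over $\\mathbf{F}_2$ on the classical butterfly network (an illustration, not an example from this paper; all local coefficients chosen as $1$):

```python
# Global encoding vectors b(e) over GF(2) on the butterfly network:
# b(e) = sum over predecessor edges e' of m_e(e') * b(e').
k = 2  # two source symbols

# edges listed in topological order; each maps to its
# predecessor edges (empty list = source edge with fixed b)
edges = {
    "s->a": [], "s->b": [],
    "a->t1": ["s->a"], "a->c": ["s->a"],
    "b->t2": ["s->b"], "b->c": ["s->b"],
    "c->d": ["a->c", "b->c"],          # the XOR happens here
    "d->t1": ["c->d"], "d->t2": ["c->d"],
}
b = {"s->a": (1, 0), "s->b": (0, 1)}
for e, preds in edges.items():
    if preds:  # all local coefficients m_e(e') set to 1
        vec = [0] * k
        for p in preds:
            vec = [(x + y) % 2 for x, y in zip(vec, b[p])]
        b[e] = tuple(vec)

def independent_gf2(v, w):
    # rank-2 test over GF(2) for two length-2 vectors
    return (v[0] * w[1] - v[1] * w[0]) % 2 == 1

assert b["c->d"] == (1, 1)
assert independent_gf2(b["a->t1"], b["d->t1"])  # sink t1 decodes
assert independent_gf2(b["b->t2"], b["d->t2"])  # sink t2 decodes
print(b["c->d"])
```

Both sinks end up with linearly independent global encoding vectors, which is exactly the invariant Jaggi et~al.'s algorithm maintains through the sets $B_t$.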
Let $T(e)$ denote the set of sinks\nusing $e$ in some flow $f^t$ and\nlet $P(e) = \\{f_\\leftarrow^t(e)\n\\mid t\\in T(e)\\}$ denote the set of\npredecessors edges of $e$ in some flow path.\nThe value $0$ is chosen for $m_e(e')$ with edges\n$e' \\notin P(e)$.\n\nWe introduce two algorithmic variables $B_t$ and $C_t$\nthat are updated by Jaggi et~al.'s algorithm.\n$C_t$ contains one edge from each path in $f^t$,\nnamely the edge whose global \\change{encoding}\nvector was defined most recently in the path.\n$B_t = \\{\\vec{b}(e) \\mid e \\in C_t\\}$ is updated\nwhen $C_t$ is updated.\nThe algorithm determines $m_e$ so that\nfor all $t \\in T$,\n$B_t$ \nis linearly independent.\n\nAfter finishing the algorithm,\nevery sink can \\change{decode} the original information because $B_t$ is\nlinearly independent.\n\n\\section{Construction algorithm}\\label{sec:const}\nWe shall propose an algorithm constructing a network $\\alpha$-error\ncorrecting code carrying $k$ information\nsymbols in $\\mathbf{F}_q$ with $n-k\\geq 2\\alpha$,\nwhich is equivalent to the Singleton bound (Proposition \\ref{lemsingleton}).\nThe proposed construction is based on \\cite{sanders05}.\nWe assume that the size of alphabet $\\mathbf{F}_q$ satisfies\n\\begin{equation}\nq > |T| \\cdot {|E| \\choose 2\\alpha}. 
\\label{qassumption}\n\\end{equation}\n\n\\begin{definition}\\label{def1}\nFor the original information $\\vec{i} \\in \\mathbf{F}_q^k$ and\nthe error $\\vec{e} \\in \\mathbf{F}_q^{|E|}$,\nlet \\change{$\\phi_t(\\vec{i},\\vec{e}) \\in \\mathbf{F}_q^{|\\Gamma^-(t)|}$}\nbe the vector of symbols carried by the input edges to $t$.\n\\end{definition}\n\n\\begin{lemma}\nIf a sink $t$ can \\change{decode} the original information $\\vec{i}$\nwith any $2\\alpha$ or less errors whose locations are known to the sink $t$,\nthen the sink $t$ can \\change{decode} the original information with any\n$\\alpha$ or less errors without the knowledge of the error locations\nunder the assumption that the number of errors is $\\leq \\alpha$.\n\\end{lemma}\nNote that errors with known locations are called erasures in\n\\cite{yang07} and the properties of erasures are also studied\nin \\cite{yang07}.\n\n\\noindent\\emph{Proof.} \nDenote the Hamming weight of a vector $\\vec{x}$ by $w(\\vec{x})$.\nThe assumption of the lemma implies that\nfor any $\\vec{i} \\neq \\vec{j}$ and $\\vec{e}$ with $w(\\vec{e})\n\\leq 2\\alpha$ we have\n\\begin{equation}\n\\phi_t(\\vec{i},\\vec{e})\\neq \n\\phi_t(\\vec{j},\\vec{0}). 
\\label{eq10}\n\\end{equation}\nEquation (\\ref{eq10}) implies that for any $\\vec{i} \\neq \\vec{j}$ and\n$\\vec{e}_1$, $\\vec{e}_2$ with $w(\\vec{e}_1) \\leq \\alpha$ and\n$w(\\vec{e}_2)\\leq \\alpha$ we have\n\\[\n\\phi_t(\\vec{i},\\vec{e}_1)\\neq \n\\phi_t(\\vec{j},\\vec{e}_2),\n\\]\nwhich guarantees that $t$ can \\change{decode} the original information\nunder the assumption that the number of errors is $\\leq \\alpha$\nby exhaustive search.\nIndeed, by the linearity of the network code,\n$\\phi_t(\\vec{i},\\vec{e}_1)=\\phi_t(\\vec{j},\\vec{e}_2)$ would imply\n$\\phi_t(\\vec{i},\\vec{e}_1-\\vec{e}_2)=\\phi_t(\\vec{j},\\vec{0})$ with\n$w(\\vec{e}_1-\\vec{e}_2)\\leq 2\\alpha$, which contradicts (\\ref{eq10}).\n\\qed\n\n\n\\begin{remark}\nThe above lemma does not guarantee the existence of\nan efficient decoding algorithm.\n\\end{remark}\n\n\nFix $F \\subset E$ with $|F| = 2\\alpha$.\nWe shall show how to construct a network error-correcting code that\nallows every sink to \\change{decode} the original information when the errors can\noccur only at $F$.\n\\change{We call $F$ the \\emph{error pattern}.\nThe following description is a condensed version of the\nproposed algorithm, which is equivalent to the full description with $\\mathcal{F} = \\{ F \\}$ in Fig.~\\ref{fig2} on p.~\\pageref{fig2}.}\n\\begin{enumerate}\n\n\\item\\label{step0} Add the imaginary source $s'$ and draw $k$ edges from\n$s'$ to $s$.\n\n\\item\\label{step1} Add an \\change{imaginary} node $v$ at the midpoint of each $e \\in F$ and add\nan edge of unit capacity from \\change{$s'$} to each $v$.\n\\item\\label{step2} For each sink $t$, do the following:\n\\begin{enumerate}\n\\item\\label{step2a} Draw as many edge disjoint paths\nfrom \\change{$s'$} to $t$ passing through the \\change{imaginary} edges\nadded at Step \\ref{step1}\nas possible.\nLet \\change{$m_t^F (\\leq 2\\alpha)$} be the number of paths.\n\\item\\label{step2b} Draw $k$ edge disjoint paths passing\nthrough $s$ that are also edge disjoint from\nthe \\change{$m_t^F$} paths drawn in the previous step.\n\\end{enumerate}\n\\item\\label{finalstep} Execute the algorithm by Jaggi et~al.\\ \nwith $\\sum_{t\\in T}(k+m_t^F)$ edge disjoint paths constructed in Step 
\\ref{step2}.\n\\end{enumerate}\n\n\n\\begin{figure}[t!]\n\\psset{unit=0.1\\linewidth}\n\\begin{pspicture}(0,0)(10,13)\n\\rput(5,12){\\circlenode[]{sd}{\\large $s'$}}\n\\rput(5,10){\\circlenode[]{s}{\\large $s$}}\n\\rput(2,8){\\circlenode[]{1}{\\large $1$}}\n\\rput(4,8){\\circlenode[]{2}{\\large $2$}}\n\\rput(6,8){\\circlenode[]{3}{\\large $3$}}\n\\rput(8,8){\\circlenode[]{4}{\\large $4$}}\n\n\\rput(3,9.5){\\dianode[]{A}{\\large $A$}}\n\\rput(2.5,6.5){\\dianode[]{B}{\\large $B$}}\n\n\\rput(3,5){\\circlenode[]{5}{\\large $5$}}\n\\rput(5,5){\\circlenode[]{6}{\\large $6$}}\n\\rput(7,5){\\circlenode[]{7}{\\large $7$}}\n\\rput(3,3){\\circlenode[]{8}{\\large $8$}}\n\\rput(5,3){\\circlenode[]{9}{\\large $9$}}\n\\rput(7,3){\\circlenode[]{10}{\\large $10$}}\n\\rput(1,1){\\circlenode[]{11}{\\large $t_1$}}\n\\rput(9,1){\\circlenode[]{12}{\\large $t_2$}}\n\n\\ncline[linestyle=dashed]{->}{sd}{A}\n\\ncline[linestyle=dashed]{->}{sd}{B}\n\n\\ncarc[arcangle=15.000000]{->}{sd}{s}\n\\ncarc[arcangle=-15.000000]{->}{sd}{s}\n\n\n\\ncline{->}{s}{A}\n\\ncline{->}{A}{1}\n\\ncline{->}{s}{2}\n\\ncline{->}{s}{3}\n\\ncline{->}{s}{4}\n\\ncline{->}{1}{11}\n\\ncline{->}{4}{12}\n\\ncline{->}{1}{B}\n\\ncline{->}{B}{5}\n\\ncline{->}{2}{5}\n\\ncline{->}{2}{6}\n\\ncline{->}{3}{6}\n\\ncline{->}{3}{7}\n\\ncline{->}{4}{7}\n\n\n\\ncline{->}{5}{8}\n\\ncline{->}{6}{9}\n\\ncline{->}{7}{10}\n\\ncline{->}{8}{11}\n\\ncline{->}{9}{11}\n\\ncline{->}{10}{11}\n\\ncline{->}{8}{12}\n\\ncline{->}{9}{12}\n\\ncline{->}{10}{12}\n\\end{pspicture}\n\n\\caption{Example of a network with \\change{imaginary} nodes and edges.\nNodes $A$ and $B$ are the \\change{imaginary} nodes added in Step~\\ref{step1}\nand the dashed lines from \\change{$s'$} to $A$ and $B$ represent\nthe \\change{imaginary} edges added in Step~\\ref{step1}. 
See Example~\\ref{ex1}\nfor explanation.}\\label{fig1}\n\n\\end{figure}\n\n\n\n\\begin{example}\\label{ex1}\nIn Fig.~\\ref{fig1},\nwe give an example of addition of \\change{imaginary} nodes and edges.\nThe network structure in Fig.~\\ref{fig1}\nis taken from \\cite[Fig.~2]{harada05}.\nNodes $A$ and $B$ are the \\change{imaginary} nodes added in Step~\\ref{step1}\nand the dashed lines from \\change{$s'$} to $A$ and $B$ represent\nthe \\change{imaginary} edges added in Step~\\ref{step1}.\n\nThe min-cut from $s$ to every sink is $4$ in the original network.\nThe set $F$ of edges with errors consists of the edge\nfrom $s$ to node $1$ and the edge from node $1$ to node $5$.\n\nWe denote a path\nby enumerating nodes on the path.\nIn Step~\\ref{step2a} for $t_1$\nwe can find two edge disjoint paths,\nnamely $(s', A,1,t_1)$ and $(s',B,5,8,t_1)$.\nOn the other hand, in Step~\\ref{step2a} for $t_2$,\nwe can find only one edge disjoint path,\nnamely $(s',A,1,B,5,8,t_2)$ \\emph{or} $(s',B,5,8,t_2)$.\nTherefore $m_{t_1}^F=2$ while $m_{t_2}^F = 1$.\n\nIn Step~\\ref{step2b} for $t_1$,\nwe find two edge disjoint paths as\n$(s',s,3,6,9,t_1)$ and $(s',s,4,7,10,t_1)$.\nIn Step~\\ref{step2b} for $t_2$,\nwe find \\emph{three} edge disjoint paths as\n$(s',s,2,6,9,t_2)$, $(s',s,3,7,10,t_2)$, and $(s',s,4,t_2)$.\nWe can use arbitrary two paths among the three paths.\nIn either case, we can find $n-m_t^F$ paths in Step~\\ref{step2b}. 
\\qed\n\\end{example}\n\nIn Step~\\ref{step2b}, we can guarantee the existence of\n$k$ paths as follows:\nSuppose that the edges in the \\change{$m_t^F$} paths used in Step~\\ref{step2a}\nare removed from the original network $(V,E)$.\nThen the min-cut from $s$ to a sink $t$ in the original network $(V,E)$\nis at least $n - m_t^F$, which is larger than or equal to\n$k$.\n\nIn Step \\ref{finalstep}\nwe use the algorithm by Jaggi et~al.\\ as if\nthe \\change{imaginary} source\n\\change{$s'$} sent information on the $2\\alpha$ \\change{imaginary} edges added\nin Step \\ref{step1}.\nWe denote by $B_t^F$ the set $B_t$ of global encoding vectors\nfor the $k+m_t^F$ edge disjoint paths.\n$B_t^F$ consists of $k+m_t^F$ vectors of length $k+2\\alpha$.\nWe require that every sink $t$ is able to decode\nthe $k$ information symbols, while $t$ may be unable to\ndecode the $2\\alpha$ error symbols in general\nbecause $m_t^F \\leq 2\\alpha$.\n\n\nThere are always two edges ending at the added \\change{imaginary} node $v$\nand one edge starting from $v$ in Step \\ref{step1}. 
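Since $B_t^F$ is kept linearly independent, recovering the $k$ information symbols at a sink amounts to solving a linear system over $\\mathbf{F}_q$ and discarding the error coordinates. A stripped-down toy example (hypothetical numbers; $q=7$, $k=2$, and a single error coordinate so that the system is square):

```python
# Toy decoding at a sink over F_7 (hypothetical numbers): three
# input edges carry r_j = <(i1, i2, e), b_j> with extended global
# encoding vectors b_j whose last coordinate multiplies the error e.
p = 7
B = [[1, 0, 2],   # b_1
     [0, 1, 3],   # b_2
     [1, 1, 1]]   # b_3  -- linearly independent mod 7
i_true, e_val = (4, 6), 5
x = [i_true[0], i_true[1], e_val]
r = [sum(B[j][l] * x[l] for l in range(3)) % p for j in range(3)]

def solve_mod_p(A, rhs, p):
    """Gaussian elimination over the prime field F_p."""
    A = [row[:] + [v] for row, v in zip(A, rhs)]
    n = len(A)
    for col in range(n):
        piv = next(rw for rw in range(col, n) if A[rw][col] % p)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)  # Fermat inverse
        A[col] = [a * inv % p for a in A[col]]
        for rw in range(n):
            if rw != col and A[rw][col]:
                f = A[rw][col]
                A[rw] = [(a - f * b) % p for a, b in zip(A[rw], A[col])]
    return [A[rw][n] for rw in range(n)]

sol = solve_mod_p(B, r, p)
assert (sol[0], sol[1]) == i_true  # message recovered, error removed
print(sol)
```

In the general construction the system is rectangular (some of the $2\\alpha$ error symbols stay undetermined), but the message part is still unique because the message and error spaces intersect trivially.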
Since $v$ is \\change{imaginary},\nwe cannot choose the local \\change{encoding} vectors at $v$.\nTherefore, in Step \\ref{finalstep},\nall components in the local \\change{encoding} vector at $v$ must be set\nto $1$, which keeps $B_t$ linearly independent.\n\nThe reason is as follows:\nLet $e$ be the edge from \\change{$s'$} to $v$ added in Step~\\ref{step1}.\nThe global \\change{encoding} vector of $e$ is of the form\n\\[\n(0^{j-1}, 1, 0^{n-j}),\n\\]\nthat is, it has $1$ only at the $j$-th component.\nAll other global \\change{encoding} vectors in $B_t^F$ have zero\nat the $j$-th component, since they are not downstream\nof $e$ when we choose the local \\change{encoding} vectors at $v$.\nTherefore, the added \\change{imaginary} node $v$\ndoes not interfere with the execution of\nJaggi et~al.'s algorithm.\n\n\nObserve also that $q > |T|$ guarantees the successful execution of\nthe algorithm, as with the original version of Jaggi et~al.'s algorithm.\n\nWe shall show how each sink $t$ can \\change{decode} the\noriginal information sent from the source $s$.\n\\change{After executing Step~\\ref{finalstep} we have decided all the\nlocal \\change{encoding} vectors in the original network $(V,E)$.}\nConsider the three linear spaces defined by\n\\begin{eqnarray*}\nV_1 &=& \\{ \\phi_t(\\vec{i},\\vec{e}) \\mid \\vec{i}\\in \\mathbf{F}_q^k,\n\\vec{e}\\in\\mathbf{F}_q^{|E|} \\},\\\\\nV_2 &=& \\{ \\phi_t(\\vec{i},\\vec{0}) \\mid \\vec{i}\\in \\mathbf{F}_q^k \\},\\\\\nV_3 &=& \\{ \\phi_t(\\vec{0},\\vec{e}) \\mid \\vec{e}\\in\\mathbf{F}_q^{|E|} \\},\n\\end{eqnarray*}\nwhere the components of $\\vec{e}$ corresponding to $E \\setminus F$ are zero\nand $\\phi_t$ is as defined in Definition~\\ref{def1}.\nWe consider $V_1$, $V_2$, and $V_3$ in the original network $(V,E)$\nwithout the added \\change{imaginary} nodes and edges.\nThen we have\n\\begin{equation}\nV_1 = V_2 + V_3,\\, \\dim V_2 \\leq k.
\\label{eq21}\n\\end{equation}\nSince we keep $B_t^F$ linearly independent,\n\\begin{equation}\n\\dim V_1 \\geq k+m_t^F. \\label{eq22}\n\\end{equation}\nSince the maximum number of edge disjoint paths passing through\nthe \\change{imaginary} edges added in Step \\ref{step1}\nis \\change{$m_t^F$}, we have\n\\begin{equation}\n\\dim V_3 \\leq m_t^F. \\label{eq23}\n\\end{equation}\nEquations (\\ref{eq21}--\\ref{eq23}) imply\n\\begin{eqnarray}\n\\dim V_1 &=& k+m_t^F,\\nonumber\\\\\n\\dim V_2 &=& k,\\label{eq33}\\\\\n\\dim V_3 &=& m_t^F,\\nonumber\\\\\n\\dim V_2 \\cap V_3 &=& 0. \\label{eq12}\n\\end{eqnarray}\nThe number of nonzero components in $\\phi_t(\\vec{i},\\vec{e})$ is $k+m_t^F$\nand the number of unknowns in $\\phi_t(\\vec{i},\\vec{e})$ is $k+2\\alpha$,\nwhich can be larger than $k+m_t^F$.\nHowever, by Eq.~(\\ref{eq12}), the sink $t$ can compute\n$\\phi_t(\\vec{i},\\vec{0})$\nfrom $\\phi_t(\\vec{i},\\vec{e})$ as follows:\nWrite $\\phi_t(\\vec{i},\\vec{e})$ as $\\vec{u} + \\vec{v}$\nsuch that $\\vec{u} \\in V_2$ and $\\vec{v} \\in V_3$.\nBy Eq.~(\\ref{eq12}) $\\vec{u}$ and $\\vec{v}$ are uniquely\ndetermined \\cite[p.19, Theorem 4.1]{lang87}.\nWe have $\\vec{u} = \\phi_t(\\vec{i},\\vec{0})$ and\nthe effect of errors is removed.\nThe sink $t$ can also compute the original\ninformation $\\vec{i}$ from $\\phi_t(\\vec{i},\\vec{0})$\nby Eq.~(\\ref{eq33}).\n\n\nWe shall describe how to construct a network error-correcting\ncode that can correct errors in any edge set $F \\subset E$ with\n$|F| = 2\\alpha$.\n\\change{Let $\\mathcal{F} = \\{ F \\subset E \\,:\\, |F| = 2\\alpha \\}$.}\nThe idea in this paragraph is almost the same as the construction\nof the robust network coding in \\cite[Sect.\\ VI]{sanders05}.\n\\change{Recall that} $B_t^F$ is the set of global \\change{encoding} vectors\non edge disjoint paths to a sink $t$ with an edge set $F$ of errors.\nExecute Jaggi et~al.'s algorithm keeping $B_t^F$ linearly\nindependent for all $t\\in T$ and all $F\\in 
\\mathcal{F}$. Then every sink $t$ can \\change{decode} the original information\nwith the knowledge of the edge set $F$ on which errors actually occur.\nAs in \\cite[Sect.\\ VI]{sanders05},\n\\[\nq > |T| \\cdot |\\mathcal{F}| = |T| {|E| \\choose 2\\alpha}\n\\]\nguarantees the successful execution of the algorithm.\n\n\nWe present pseudocode\nfor the proposed algorithm in Fig.\\ \\ref{fig2}.\nIn order to present a detailed description,\nwe introduce some new notation.\n$G_F=(V_F,E_F)$ denotes the network\nwith the \\change{imaginary} nodes and edges added in Steps~\\ref{step0}\nand \\ref{step1}\nfor the error pattern $F \\subset E$.\nLet $f^{t,F}$ be the flow established in Steps~\\ref{step2a} and\n\\ref{step2b} in $G_F$.\nLet $f_\\leftarrow^{t,F}(e)$ denote the set of predecessor edges of\nthe edge $e$ in a flow path in $f^{t,F}$.\nLet $T^F(e)$ denote the set of sinks\nusing $e$ in some flow $f^{t,F}$ and\nlet $P^F(e) = \\{f_\\leftarrow^{t,F}(e)\n\\mid t\\in T^F(e)\\}$.\n\n\n\\begin{figure}[t!]\n\n\\begin{tabbing}\n(* Initialization *)\\\\\nAdd an \\change{imaginary} node \\change{$s'$} and edges $e_1$, \\ldots, $e_{k}$\nfrom \\change{$s'$} to $s$.
$O(k)$\\\\\n\\textbf{foreach} error pattern $F \\in \\mathcal{F}$ \\textbf{do}\\\\\n\\hspace*{4ex}\\= Initialize the global \\change{encoding} vectors\\\\\n\\>$\\vec{b}^F(e_i) = (0^{i-1},1,0^{k+2\\alpha-i})\\in\\mathbf{F}_q^{k+2\\alpha}$.\\`$O((k+2\\alpha)^2)$\\\\\n\\> \\textbf{foreach} edge $e \\in F$ \\textbf{do}\\\\\n\\>\\hspace*{4ex}\\= Add an \\change{imaginary} node $v$ at the midpoint of $e \\in F$.\\`$O(1)$\\\\\n\\>\\> Divide $e$ into an edge to $v$ and an edge from $v$.\\`$O(1)$\\\\\n\\>\\> Draw an \\change{imaginary} edge from \\change{$s'$} to $v$.\\` (*) $O(1)$\\\\\n\\>\\textbf{endforeach}\\\\\n\\>\\textbf{foreach} sink $t\\in T$ \\textbf{do}\\\\\n\\>\\> Draw as many edge disjoint paths from \\change{$s'$} to $t$ as possible\\\\\n\\>\\>passing through the edges\nadded in (*).\\\\\n\\` $O(2\\alpha(|E|+k+4\\alpha))$\\\\\n\\>\\> Draw $k$ edge disjoint paths from \\change{$s'$} to $t$\npassing through\\\\\n\\>\\>$s$ and\nalso disjoint from the\npaths made in the previous step.\\\\\n\\` $O(k(|E|+k+4\\alpha))$\\\\\n\\>\\> Initialize the basis $B_t^F = \\{ \\vec{b}^F(e_i) \\mid e_i$ is on a path to $t \\}$.\\\\\n\\` $O((k+2\\alpha)^2)$\\\\\n\\>\\textbf{endforeach}\\\\\n\\textbf{endforeach}\\\\\n(* Main loop *)\\\\\n\\textbf{foreach} edge $e\\in \\bigcup_{F\\in\\mathcal{F}} E_F\n\\setminus \\{e_1, \\ldots, e_k\\}$ in a topological order \\textbf{do}\\\\\n\\>\\textbf{if} $\\mathrm{start}(e) \\in V$ \\textbf{then}\\\\\n\\>\\>Choose a linear combination $\\vec{b}^F(e) = \\sum_{p\\in P^F(e)}m_e(p)\\vec{b}(p)$\\\\\n\\>\\>such that $B_t^F$ remains linearly independent for all $t$\\\\\n\\>\\>and $F$ by the method in \\cite[Sect.\\ III.B]{sanders05}.
\\` (**)\\\\\n\\>\\textbf{else}\\\\\n\\>\\>$m_e(p) = 1$ for all $p\\in P^F(e)$ and\n$\\vec{b}^F(e) = \\sum_{p\\in P^F(e)}\\vec{b}(p).$\\\\\n\\` $O(k+2\\alpha)$\\\\\n\\>\\textbf{endif}\\\\\n\\textbf{endforeach}\\\\\n\\textbf{return} $\\{ m_e(\\cdot) \\mid \\mathrm{start}(e) \\in V \\}$.\n\\end{tabbing}\n\\caption{Construction algorithm for a network $\\alpha$-error correcting code.\nThe rightmost $O(\\cdot)$ indicates the time complexity of executing\nthe step.}\\label{fig2}\n\n\\end{figure}\n\nWe shall analyze the time complexity of the proposed\nalgorithm in Fig.~\\ref{fig2}.\nAs in \\cite{sanders05}, we assume that any arithmetic operation\nin the finite field takes $O(1)$ time regardless of the field size.\nFirst we analyze the complexity of the initialization part.\nObserve that $|E_F| = |E| + k + 2|F| = |E| + k + 4\\alpha$ because\neach edge in $F$ adds two edges to $E$ and\nthere are $k$ edges from \\change{$s'$} to $s$.\nThe most time consuming part of the initialization\nis the construction of edge disjoint paths,\nwhose overall time complexity\nis $O((|E|+k+4\\alpha)|\\mathcal{F}||T|(k+2\\alpha))$.\n\nNext we analyze the time complexity of the main loop.\nBy \\cite[Proof of Lemma 8]{sanders05},\nthe time complexity of choosing the local \\change{encoding}\nvector $m_e(p)$ in Step (**) is \n$O((|\\mathcal{F}||T|)^2 (k+2\\alpha))$,\nwhich is the most time consuming part of the main loop.\nThe choice of $m_e(p)$\nis executed for the $|E|$ edges starting from a real node in $V$.\nThus, the time complexity of the main loop is\n$O(|E|(|\\mathcal{F}||T|)^2 (k+2\\alpha))$,\nand the overall time complexity is\n$\nO(\n|\\mathcal{F}||T|(k+2\\alpha)[\n|E|+k+4\\alpha + |\\mathcal{F}||T|])\n$.\nNote that $|\\mathcal{F}| = {|E| \\choose 2\\alpha}$.\n\nA sink \\change{decode}s the information by exhaustive search.\nSpecifically, the sink enumerates all the possible information\nand all the possible errors for all $F\\in\\mathcal{F}$,\nthen compares the resulting symbols on its incoming edges with\nthe actual received
symbols by the sink.\nThe computation of the resulting symbols can be\ndone by a matrix multiplication in $O((k+\\alpha)^2)$ time complexity.\nThe number of possible information vectors is $q^k$ and\nthe number of possible errors is $\\sum_{j=0}^{\\alpha}{|E| \\choose j}(q-1)^j$.\nThus, the time complexity of decoding by a sink is\n$O(q^k \\sum_{j=0}^{\\alpha}{|E| \\choose j}(q-1)^j(k+\\alpha)^2)$.\n\n\\section{Variants of the proposed method and its relation to\nthe robust network coding}\nWe shall introduce two variants of the proposed method in this section.\n\n\\subsection{Attaining the refined Singleton bound}\nNetwork error-correcting codes constructed by the proposed\nmethod attain the Singleton bound (Proposition \\ref{lemsingleton}),\nwhile they do not necessarily attain the refined Singleton bound\n(Proposition \\ref{lemsingleton2}).\nYang et~al.\\ \\cite{yang07b} concurrently proposed a construction algorithm\nthat produces a code attaining the refined Singleton bound.\nIn this subsection we modify the proposed method so that\nit can produce a code attaining the refined Singleton bound.\n\nLet $n_t$ be the min-cut from $s$ to $t$,\nand suppose that the source $s$ emits $k$ symbols within a unit\ntime interval.\nA sink $t$ can correct $\\alpha$ errors if $2\\alpha \\leq n_t -k$.\nLet $\\mathcal{F}_t = \\{ F\\subset E \\,:\\, |F| = n_t - k\\}$ and\n$\\mathcal{F} = \\bigcup_{t\\in T}\\mathcal{F}_t$.\nFor fixed $F \\in \\mathcal{F}$ and $t\\in T$,\nwe cannot guarantee that there exist $k$ edge disjoint\npaths in Step~\\ref{step2b}.\nFor such $F$, the sink $t$ cannot \\change{decode} information\nwith errors occurring at $F$.
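The families $\mathcal{F}_t$ and $\mathcal{F}$, together with the check $|F| \leq n_t - k$ that decides which pairs $(t,F)$ are usable, can be enumerated directly. The following is a minimal Python sketch on a hypothetical small instance (the edge names, min-cut values, and $k$ below are illustrative assumptions, not taken from the paper):

```python
from itertools import combinations
from math import comb

# Hypothetical toy instance: 6 edges, 2 sinks, k = 2 information symbols.
edges = ["e1", "e2", "e3", "e4", "e5", "e6"]
min_cut = {"t1": 4, "t2": 3}  # n_t for each sink t
k = 2

# F_t = all edge subsets of size n_t - k; F = union over all sinks.
F_t = {t: [frozenset(c) for c in combinations(edges, n - k)]
       for t, n in min_cut.items()}
F_all = set().union(*F_t.values())

# A pair (t, F) is kept only if |F| <= n_t - k, i.e. the k edge
# disjoint paths in Step 2b are guaranteed to exist for that sink.
kept = {(t, F) for t in min_cut for F in F_all if len(F) <= min_cut[t] - k}

print(len(F_t["t1"]), len(F_t["t2"]), len(kept))  # -> 15 6 27
```

Here $|\mathcal{F}_{t_1}| = \binom{6}{2} = 15$ and $|\mathcal{F}_{t_2}| = \binom{6}{1} = 6$; the sink $t_2$ with the smaller min-cut keeps only the size-one patterns.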
We exclude $B_t^F$ with such $(t,F)$\nfrom the\nalgorithm.\nNote that if $|F| \\leq n_t - k$, then there always exist\n$k$ edge disjoint paths in Step~\\ref{step2b}.\n\nIn order to attain the refined Singleton bound,\nwe keep the linear independence of\nall bases in $\\{B_t^F \\mid t\\in T$, $F\\in\\mathcal{F}$,\n$|F| \\leq n_t - k\\}$ in Step (**) in Fig.~\\ref{fig2}.\nBy exactly the same argument, we see that the produced code attains\nthe refined Singleton bound.\n\nBy almost the same argument as in Sect.\\ \\ref{sec:const},\nwe see that the modified algorithm runs\nin time complexity \n$\nO(\n|\\mathcal{F}||T|(k+2\\alpha_\\mathrm{max})[\n|E|+k+4\\alpha_\\mathrm{max} + |\\mathcal{F}||T|])\n$,\nwhere $\\alpha_\\mathrm{max} = \\lfloor (\\max_{t\\in T} n_t - k)\/2\\rfloor$. The required field size for successful execution of the algorithm\nis $|T| \\cdot |\\mathcal{F}|$, and in this case\n$|\\mathcal{F}|$ depends on the structure of the network $(V,E)$.\n\n\\begin{table*}[t!]\n\n\\caption{Comparison among the proposed methods and \\cite{yang07b,jaggi07}.\nWe assume that the min-cut is $n$ for all $t\\in T$\nand $k = n-2\\alpha$.
$I$ denotes the maximum in-degree of the nodes.}\\label{tab1}\n\\begin{tabular}{|p{0.1\\textwidth}|p{0.1\\textwidth}|p{0.2\\textwidth}|p{0.25\\textwidth}|p{0.2\\textwidth}|}\\hline\n&delay&required field size for the success probability of code construction to be\n$\\geq 1-\\delta$&\ntime complexity of code construction&time complexity of decoding by sinks\\\\\\hline\\hline\nFigure~\\ref{fig2}&none&$|T| {|E| \\choose 2\\alpha}$&\n $O(\n{|E| \\choose 2\\alpha}|T|(k+2\\alpha)[\n|E|+k+4\\alpha + {|E| \\choose 2\\alpha}|T|])$&$O(q^k \\sum_{j=0}^{\\alpha}{|E| \\choose j}(q-1)^j(k+\\alpha)^2)$\\\\\\hline\nSect.\\ \\ref{sec:random}&none&$|E||T|{|E| \\choose 2\\alpha}\/\\delta$&\n$O(I)$&$O(q^k \\sum_{j=0}^{\\alpha}{|E| \\choose j}(q-1)^j(k+\\alpha)^2)$\\\\\\hline\nPaper \\cite{yang07b}&none&$|T|{n + |E| -2 \\choose 2\\alpha}$\n&\n$O(|E| |T| q^k\\sum_{j=0}^{2\\alpha} {|E| \\choose j}(q-1)^j)$&$O(q^k \\sum_{j=0}^{\\alpha}{|E| \\choose j}(q-1)^j(k+\\alpha)^2)$\\\\\\hline\nPaper \\cite{jaggi07}&large&not estimated&$O(I)$&\n$O((n \\times \\mathrm{delay})^3)$\\\\\\hline\n\\end{tabular}\n\n\\end{table*}\n\n\n\nOn the other hand,\nthe time complexity of constructing\nthe local \\change{encoding} vectors by the method of Yang et~al.~\\cite{yang07b}\nis\n\\[\nO\\left( |E| q^k \\sum_{t\\in T} \\sum_{j=0}^{n_t-k} {|E| \\choose j}(q-1)^j\\right),\n\\]\nand the required field size is\n\\[\n\\sum_{t\\in T} {n_t + |E| -2 \\choose n_t-k}.\n\\]\nThe time complexity of the proposed algorithm\ncan be smaller or larger than that of Yang et~al.~\\cite{yang07b},\ndepending on the network structure and $q$.\nThe required field size of the proposed algorithm\ncan also be smaller or larger, depending on the network structure.\nHowever, for the special case $n_t = n$ for all $t\\in T$,\nthe required field size of the proposed method is\nsmaller than that of Yang et~al.~\\cite{yang07b}.\n\n\\subsection{Completely randomized construction}\\label{sec:random}\nUsing the idea in the previous section,\nwe can estimate the
success probability of\nconstructing a network error-correcting code\nby randomly choosing the local \\change{encoding} vectors\nas follows.\nThe idea behind the proof is almost the same as that of\n\\cite[Theorem 12]{sanders05}.\nObserve that the random choice of local \\change{encoding} vectors\ncompletely removes the time complexity\nof selecting \\change{encoding} vectors in a centralized manner,\nat the expense of a larger required field size $q$.\n\n\\begin{proposition}\nSuppose that the source $s$ transmits $k$ symbols within a\nunit time interval, and let $\\mathcal{F} =\\{F\\subset E \\,:$\n$|F| = 2\\alpha\\}$\nbe the family of edge sets on which errors can occur.\nSuppose also that the local \\change{encoding} vector coefficients\nare generated at random independently and uniformly over\n$\\mathbf{F}_q$.\nWith this network error-correcting code,\nall sinks can correct errors in any edge set $F\\in \\mathcal{F}$\nwith probability at least $1-\\delta$\nif $q \\geq |E||T||\\mathcal{F}|\/\\delta$.\n\\end{proposition}\n\n\\noindent\\emph{Proof.} \nFirst pick independent random local \\change{encoding}\nvectors for all edges in the network simultaneously.\nThen pick an error pattern $F\\in\\mathcal{F}$.\nFor this $F$, execute Steps 1 and 2 on page~\\pageref{step1}\nand compute the global \\change{encoding} vectors $\\vec{b}^F(e)$ belonging\nto $\\mathbf{F}_q^{k+|F|}$.\nFor each cut in the network,\ntest whether the $B_t^F$'s are linearly independent for all $t$.\nThis test fails with probability at most $|T|\/q$\nby the proof of \\cite[Theorem 9]{sanders05},\nprovided that these tests succeed on all the upstream cuts\nand $n\\geq k+2\\alpha$.\n\nIn the proposed algorithm in Fig.~\\ref{fig2},\nwe test the linear independence of the $B_t^F$'s on\n$|E|$ cuts in Step (**), which is sufficient to\nguarantee the decodability of the information by every sink.\nFor the same reason, for each sink to be able to correct errors in $F$,\none needs to consider linear independence only on\nat most $|E|$ such cuts
with random\nchoice of the local \\change{encoding} vectors.\nBy the union bound, the probability that\nthe independence tests fail for any of the $|T|$ sinks\nin any of the $|E|$ cuts in any of the $|\\mathcal{F}|$\nerror patterns is at most $\\delta$\nif $q \\geq |E||T||\\mathcal{F}|\/\\delta$.\n\\qed\n\nJaggi et~al.~\\cite{jaggi07} do not provide an estimate\nof the relation between the success probability of\ntheir algorithm and the field size $q$.\nTheir method \\cite{jaggi07} uses coding among\ndifferent time intervals and thus introduces\ndelays, while our methods do not introduce extra delay.\nIn addition,\n$\\alpha$-error correcting codes constructed\nby the proposed methods allow sinks to correct\nfewer than $\\alpha$ errors, while the method\nin \\cite{jaggi07} does not.\nThe advantage of the method in \\cite{jaggi07}\nover the proposed methods in this paper\nis that their method allows efficient decoding of\ninformation by every sink, while our proposed\nmethods require an exhaustive search over the transmitted\ninformation.\n\nWe summarize the comparison among the proposed algorithms and\n\\cite{jaggi07,yang07b} in Table~\\ref{tab1}.\n\n\n\n\\subsection{Relation to the robust network coding}\nWe clarify the difference between the robust network\ncoding in \\cite[Sect.\\ V]{koetter03},\\cite[Sect.\\ VI]{sanders05} and\nthe network error-correcting codes with known locations\nof errors \\cite{yang07}.\nA network error-correcting code that can correct errors\nat known locations $F\\subset E$\nis a robust network code tolerating \nedge failures on $F$.\nHowever, the converse is not always true.\nConsider the network consisting of three nodes $\\{s,t,v\\}$\nwith two directed edges from $s$ to $v$ and\none directed edge from $v$ to $t$.\nThe source is $s$ and the sink is $t$.\nThe intermediate node $v$ sends to $t$ the sum of the two inputs\nfrom $s$.
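This example can be brute-forced in a few lines of Python. The sketch below works over the hypothetical field $\mathbf{F}_5$ and makes two modeling assumptions not spelled out in the text: the source sends the same symbol $x$ on both $s \to v$ edges, a failure on a known edge zeroes that edge's contribution at $v$ (so $t$ receives $x$), and an error on a known edge adds an unknown value $e$ (so $t$ receives $2x + e$):

```python
q = 5  # a hypothetical small prime field F_5

def received_after_failure(x):
    # One s->v edge fails (contributes 0); v forwards the sum, so t gets x.
    return (x + 0) % q

def received_after_error(x, e):
    # An unknown error value e is added on one s->v edge; t gets 2x + e.
    return (x + x + e) % q

# Failure on a known edge: the map x -> received symbol is injective,
# so the sink t can always recover x.
assert len({received_after_failure(x) for x in range(q)}) == q

# Error on a known edge but of unknown value: different source symbols
# can yield the same received symbol, so t cannot recover x.
ambiguous = any(
    received_after_error(x1, e1) == received_after_error(x2, e2)
    for x1 in range(q) for e1 in range(q)
    for x2 in range(q) for e2 in range(q)
    if x1 != x2
)
assert ambiguous
print("failure: decodable; error: ambiguous")
```

The single received symbol $2x + e$ carries one unknown too many, which is exactly why knowing the error location alone does not suffice here.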
This network code tolerates a single edge failure\nbetween $s$ and $v$ but cannot correct a single error between\n$s$ and $v$.\n\n\n\\section{Concluding remarks}\nIn this paper,\nwe proposed an algorithm for constructing\nnetwork error-correcting\ncodes attaining the Singleton bound, and\nclarified its relation to the robust network\ncoding \\cite[Sect.\\ VI]{sanders05}.\n\nThere are several research problems that have\nnot been addressed in this paper.\nFirstly, the proposed deterministic algorithm requires\ntests of linear independence against\n${|E| \\choose 2\\alpha}$ sets consisting of \\change{$k+m_t^F$} vectors,\nwhich is highly time consuming.\nIt is desirable to have a more efficient \n\\change{deterministic} construction algorithm.\n\nSecondly, since there seems to be no structure in the constructed\ncode, the \\change{decoding} of the original information at a sink $t$\nrequires an exhaustive search by $t$\nover the possible information from the source and the possible errors.\nIt is desirable to have a code with structure that allows\nefficient decoding.\n\nFinally, the case $|T|=1$ and $|E|=n$ includes the ordinary\nerror correcting codes as\na special case.\nSubstituting $|T|=1$, $|E|=n$, and $2\\alpha = n-k$ into Eq.~(\\ref{qassumption})\ngives $q > {n \\choose n-k}$,\nwhich can be regarded as a sufficient condition\nfor\nthe existence of an MDS linear code.\nOn the other hand, a well-known sufficient condition for\nthe existence of an MDS linear code is $q > n-2$,\nwhich suggests that Eq.~(\\ref{qassumption})\nis loose and that there is room for improvement\nin Eq.~(\\ref{qassumption}).\n\n\\section*{Acknowledgment}\nThe author thanks the reviewers for their constructive criticisms,\nwhich greatly improved the presentation of the results.
He also thanks Dr.\\ Masazumi Kurihara\nfor pointing out an ambiguity in an earlier manuscript.\nHe would like to thank Prof.~Kaoru Kurosawa\nfor drawing his attention to network error correction,\nProf.\\ Olav Geil,\nProf.\\ Toshiya Itoh, Prof.\\ Tomohiko Uyematsu,\nMr.\\ Akisato Kimura, and Dr.\\ Shigeaki Kuzuoka \nfor helpful discussions.\nHe also would like to thank Dr.\\ Sidharth Jaggi and Mr.\\ Allen Min Tan\nfor informing him of the papers \\cite{jaggi07,yang07b}.\nPart of this research was conducted during the author's\nstay in the Department of Mathematical Sciences,\nAalborg University.\n\n\n\\section{Introduction}\n\\label{sec:intro}\n Recent advances in generative models including GANs \\cite{NIPS2014_5ca3e9b1, DBLP:conf\/iclr\/BrockDS19, Karras_2019_CVPR, Karras_2020_CVPR, Karras2021}, variational autoencoders (VAEs) \\cite{kingma2013auto, rezende2014stochastic, vahdat2020nvae}, and autoregressive models \\cite{van2016conditional,chen2018pixelsnail,henighan2020scaling} have realized high-quality image generation with great diversity. Diffusion probabilistic models \\cite{sohl2015deep} are introduced to match data distributions by learning to reverse multi-step noising processes. Ho et al. \\cite{NEURIPS2020_4c5bcfec} demonstrate the capability of DDPMs to produce high-quality results. Subsequent works \\cite{song2020improved, dhariwal2021diffusion, nichol2021improved,kingma2021variational} further optimize the noise addition schedules, network architectures, and optimization targets of DDPMs. Moreover, classifier guidance has been added to realize DDPM-based conditional image generation \\cite{dhariwal2021diffusion}. DDPMs have shown excellent results competitive with GANs \\cite{Karras_2020_CVPR, DBLP:conf\/iclr\/BrockDS19} on datasets including CIFAR-10 \\cite{krizhevsky2009learning}, LSUN \\cite{yu2015lsun}, and ImageNet \\cite{van2016conditional}.
Moreover, DDPMs have also achieved compelling results in generating videos \\cite{ho2022video, harvey2022flexible, yang2022diffusion, zhang2022motiondiffuse}, audio \\cite{kong2020diffwave, austin2021structured}, point clouds \\cite{zhou20213d, luo2021diffusion, lyu2021conditional, liu2022let}, and biological structures \\cite{xu2022geodiff,hoogeboom2022equivariant}. \n\nLike other modern generative models, DDPMs depend on large amounts of data to train the millions of parameters in their networks; they tend to overfit seriously and fail to produce high-quality images with considerable diversity when training data is limited. Unfortunately, it is not always possible to obtain abundant data. A series of GAN-based approaches \\cite{wang2018transferring, ada,mo2020freeze, wang2020minegan, ewc, ojha2021few-shot-gan, zhao2022closer} have been proposed to adapt models pre-trained on large-scale source datasets to target datasets using a few available training samples (e.g., 10 images). These approaches utilize knowledge from the source models to relieve overfitting and achieve improvements in generation quality and diversity. Nevertheless, the performance of DDPMs trained on limited data and effective DDPM-based few-shot image generation approaches remain to be investigated.\n\n \n\nTherefore, we first evaluate the performance of DDPMs trained on small-scale datasets and show that DDPMs suffer from overfitting problems similar to those of other modern generative models when trained on limited data. Then we directly fine-tune pre-trained DDPMs on target domains using the limited data. The fine-tuned DDPMs achieve faster convergence and greater generation diversity than DDPMs trained from scratch but still fail to maintain the relative distances between generated samples during domain adaptation.
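One way to quantify this drift in relative distances is to compare, between the source and the adapted model, the softmax-normalized distribution of each sample's pairwise similarities to the other samples. The plain-Python sketch below is a hypothetical illustration of that quantity on toy feature vectors; it is not necessarily the exact DDPM-PA formulation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def pairwise_sim_distributions(feats):
    # For each sample i, softmax over its similarities to all j != i.
    dists = []
    for i, fi in enumerate(feats):
        sims = [cosine(fi, fj) for j, fj in enumerate(feats) if j != i]
        exps = [math.exp(s) for s in sims]
        z = sum(exps)
        dists.append([e / z for e in exps])
    return dists

def kl(p, r):
    # KL divergence D(p || r) between two discrete distributions.
    return sum(pi * math.log(pi / ri) for pi, ri in zip(p, r))

# Toy "features" of 3 generated samples under source and adapted models.
source_feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
target_feats = [[1.0, 0.1], [0.1, 1.0], [1.0, 0.9]]

# A pairwise similarity loss penalizes the mismatch between the
# corresponding distributions, pushing the adapted model to preserve
# the source model's relative distances.
loss = sum(kl(p, r) for p, r in zip(
    pairwise_sim_distributions(source_feats),
    pairwise_sim_distributions(target_feats))) / len(source_feats)
print(round(loss, 6))
```

When the adapted model collapses toward a few training images, these distributions sharpen and the divergence grows, which is what such a loss penalizes.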
To this end, we introduce a pairwise similarity loss to keep the distributions of relative distances in target models similar to those in source models for greater generation diversity, as shown in Fig. \\ref{pairwise}. The main contributions of this paper can be summarized as follows:\n\n\\begin{itemize}\n \\item We make the first attempt to study when DDPMs overfit as training data become scarce and propose a Nearest-LPIPS metric to evaluate generation diversity.\n \\item We propose a DDPM-PA approach based on the pairwise similarity loss designed for DDPMs to preserve the relative distances between samples during domain adaptation and achieve great generation diversity.\n \\item We demonstrate the effectiveness of DDPM-PA qualitatively and quantitatively on a series of few-shot image generation tasks and show that DDPM-PA achieves better generation quality and diversity than current state-of-the-art GAN-based approaches.\n \n\\end{itemize}\n\n\n\\section{Related Work}\n\\subsection{Denoising Diffusion Probabilistic Models}\n\\textbf{DDPMs Formulation} Given training images $x_0$ following the distribution $q(x_0)$, DDPMs define a forward noising (diffusion) process $q$ that adds Gaussian noise with variance $\\beta_t \\in (0,1)$ at diffusion step $t$ and produces the noised image sequence $x_1,x_2,...,x_T$ as follows:\n\\begin{align}\n\\label{eq1}\n q(x_1,x_2,...,x_T|x_0) &:= \\prod_{t=1}^{T} q(x_t|x_{t-1}), \\\\\n\\label{eq2}\n q(x_t|x_{t-1}) &:= \\mathcal{N}(x_t;\\sqrt{1-\\beta_t}x_{t-1},\\beta_t \\mathbf{I}).\n\\end{align}\nWe can sample an arbitrary step of the noising process conditioned on $x_0$ as follows:\n\\begin{align}\n\\label{eq3}\n q(x_t|x_0) &= \\mathcal{N}(x_t;\\sqrt{\\overline{\\alpha}_t}x_0,(1-\\overline{\\alpha}_t)\\mathbf{I}), \\\\\n x_t &= \\sqrt{\\overline{\\alpha}_t}x_0 + \\sqrt{1-\\overline{\\alpha}_t}\\epsilon, \\label{eq4}\n\\end{align}\nwhere $\\alpha_t:=1-\\beta_t$, $\\overline{\\alpha}_t:=\\prod_{s=0}^{t}\\alpha_s$, and $\\epsilon$ represents
Gaussian noise following $\\mathcal{N}(0,\\mathbf{I})$. Based on Bayes' theorem, we have the posterior $q(x_{t-1}|x_t,x_0)$:\n\\begin{equation}\n q(x_{t-1}|x_t,x_0)=\\mathcal{N}(x_{t-1};\\hat{\\mu}_t(x_t,x_0),\\hat{\\beta}_t\\mathbf{I}),\n\\end{equation}\nwhere $\\hat{\\mu}_t(x_t,x_0)$ and $\\hat{\\beta}_t$ are defined in terms of $x_0, x_t$ and the variance as follows:\n\\begin{align}\n \\hat{\\mu}_t(x_t,x_0)&:=\\frac{\\sqrt{\\overline{\\alpha}_{t-1}}\\beta_t}{1-\\overline{\\alpha}_t}x_0 + \\frac{\\sqrt{\\alpha_t}(1-\\overline{\\alpha}_{t-1})}{1-\\overline{\\alpha}_t}x_t, \\label{eq6}\\\\ \n \\hat{\\beta}_t&:=\\frac{1-\\overline{\\alpha}_{t-1}}{1-\\overline{{\\alpha}}_t}\\beta_t.\n\\end{align}\n\nWith a large enough $T$, the noised image $x_T$ almost follows an isotropic Gaussian distribution. Therefore, we can randomly sample $x_T$ from $\\mathcal{N}(0,\\mathbf{I})$ and apply the reverse distribution $q(x_{t-1}|x_t)$ to reverse the diffusion process and obtain a sample following $q(x_0)$. DDPMs employ a UNet-based neural network to approximate the reverse distribution $q(x_{t-1}|x_t)$ as follows:\n\\begin{align}\n\\label{eq8}\n p_{\\theta}(x_{t-1}|x_t):=\\mathcal{N}(x_{t-1};\\mu_{\\theta}(x_t,t),\\Sigma_{\\theta}(x_t,t)).\n\\end{align}\nNaturally, we can train the network to predict $x_0$ and apply it to Equation \\ref{eq6} to parameterize the reverse distribution mean $\\mu_{\\theta}(x_t,t)$. Besides, we can also derive the prediction of $\\mu_{\\theta}(x_t,t)$ combining Equations \\ref{eq4} and \\ref{eq6} as:\n\\begin{align}\n \\mu_{\\theta}(x_t,t)=\\frac{1}{\\sqrt{\\alpha_t}}(x_t-\\frac{\\beta_t}{\\sqrt{1-\\overline{\\alpha}_t}}\\epsilon_{\\theta}(x_t,t)),\n\\end{align}\n if we train the network as a function approximator $\\epsilon_{\\theta}(x_t,t)$ to predict the noise $\\epsilon$ in Equation \\ref{eq4}. Ho et al.
\\cite{NEURIPS2020_4c5bcfec} demonstrate that predicting $\\epsilon$ performs well and achieves high-quality results using a reweighted loss function:\n\\begin{align}\n\\label{loss_simple}\n \\mathcal{L}_{simple}=E_{t,x_0,\\epsilon}\\left[||\\epsilon-\\epsilon_{\\theta}(x_t,t)||^2\\right].\n\\end{align}\nIn Ho et al.'s work \\cite{NEURIPS2020_4c5bcfec}, the variance $\\Sigma_{\\theta}(x_t,t)$ is fixed as a constant $\\sigma_t^2 \\mathbf{I}$, where $\\sigma_t^2=\\beta_t$, and is not learned. The network is only trained to learn the model mean $\\mu_{\\theta}(x_t,t)$ by predicting noise with $\\epsilon_\\theta(x_t,t)$. Subsequent works \\cite{nichol2021improved} propose to add an additional optimization term $L_{vlb}$ to optimize the variational lower bound (VLB) and guide the learning of $\\Sigma_{\\theta}(x_t,t)$ as follows:\n\\begin{align}\n L_{vlb}:&=L_0+L_1+...+L_{T-1}+L_T, \\\\\n L_0:&=-\\log p_{\\theta}(x_0|x_1), \\\\\n L_{t-1}:&=D_{KL}(q(x_{t-1}|x_t,x_0)\\,||\\,p_{\\theta}(x_{t-1}|x_t)), \\\\\n L_T:&=D_{KL}(q(x_T|x_0)\\,||\\,p(x_T)),\n\\end{align}\nwhere $D_{KL}$ represents the Kullback-Leibler (KL) divergence used to evaluate the distance between distributions.\n\n\\textbf{Fast Sampling} DDPMs require a time-consuming iterative process to sample following Equation \\ref{eq8}. Recent works including DDIM and gDDIM \\cite{song2020denoising, zhang2022gddim} extend the original DDPMs to non-Markovian cases for fast sampling. DPM-solver \\cite{lu2022dpm,lu2022dpm2} presents a theoretical formulation for the solution of probability flow ordinary differential equations (ODEs) and achieves a fast ODE solver needing only 10 steps for DDPM sampling. Diffusion Exponential Integrator Sampler (DEIS) \\cite{zhang2022fast} makes use of an exponential integrator to approximately calculate the solution of ODEs. Karras et al.
\\cite{karras2022elucidating} present a design space to identify changes in both the sampling and training processes and achieve state-of-the-art FID \\cite{heusel2017gans} on CIFAR-10 \\cite{krizhevsky2009learning} under a class-conditional setting.\n\n\\textbf{Applications} DDPMs have already been applied to many aspects of applications such as image super-resolution \\cite{li2022srdiff, saharia2022image, rombach2022high,ho2022cascaded}, image translation \\cite{saharia2022palette, ozbey2022unsupervised}, semantic segmentation \\cite{baranchuk2021label, asiedu2022decoder}, few-shot generation for unseen classes \\cite{giannone2022few,sinha2021d2c}, and natural language processing \\cite{austin2021structured, li2022diffusion, chen2022analog}. Besides, DDPMs are combined with other generative models including GANs \\cite{xiao2021tackling, wang2022diffusion}, VAEs \\cite{vahdat2021score, huang2021variational, luo2022understanding}, and autoregressive models \\cite{rasul2021autoregressive, hoogeboom2021autoregressive}. Different from existing works, this paper focuses on model-level, unconditional, few-shot image generation with DDPM-based approaches.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{scratch2.jpg}\n \\caption{ For small-scale datasets including Babies, Sunglasses, and LSUN Church containing 10, 100, and 1000 images, \\textbf{Left}: samples picked from the small-scale datasets, \\textbf{Right}: samples produced by DDPMs trained on the small-scale datasets from scratch.}\n \\label{scratch}\n\\end{figure*}\n\n\\subsection{Few-shot Image Generation}\n Few-shot image generation aims to achieve high-quality generation with great diversity using only a few available training samples. However, modern generative models easily overfit and suffer severe diversity degradation when trained on limited data (e.g., 10 images). They tend to replicate training samples instead of generating diverse images following similar distributions. 
GAN-based few-shot image generation approaches mainly follow TGAN \\cite{wang2018transferring} to adapt GANs pre-trained on large source domains, including ImageNet \\cite{van2016conditional}, LSUN \\cite{yu2015lsun}, and FFHQ \\cite{Karras_2020_CVPR}, to target domains with limited data. Augmentation approaches \\cite{tran2021data, zhao2020differentiable, zhao2020image, ada} also help improve generation diversity with diverse augmented training samples. BSA \\cite{noguchi2019image} fixes all the parameters except for the scale and shift parameters in the generator. FreezeD \\cite{mo2020freeze} freezes parameters in high-resolution layers of the discriminator to relieve overfitting. MineGAN \\cite{wang2020minegan} uses additional fully connected networks to modify noise inputs for the generator. EWC \\cite{ewc} makes use of elastic weight consolidation to regularize the generator by making it harder to change the critical weights which have higher Fisher information values. CDC \\cite{ojha2021few-shot-gan} proposes a cross-domain consistency loss and patch-level discrimination to build a correspondence between source and target domains. DCL \\cite{zhao2022closer} utilizes contrastive learning to push away the generated samples from real images and maximize the similarity between corresponding image pairs in the source and target domain. The proposed DDPM-based approach follows similar strategies to adapt models pre-trained on large source domains to target domains. Our experiments show that DDPMs are appropriate for few-shot image generation tasks and can achieve better results than current state-of-the-art GAN-based approaches in quality and diversity. \n\n\n\\section{DDPMs Trained on Small-scale Datasets}\n\\label{section3}\n\nTo evaluate the performance of DDPMs when training data is limited, we train DDPMs on small-scale datasets containing various numbers of images from scratch. 
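Training a DDPM from scratch only requires the closed-form corruption of Equation \ref{eq4} and the reweighted objective of Equation \ref{loss_simple}: draw $x_0$, a random step $t$, and noise $\epsilon$, form $x_t$, and regress $\epsilon_\theta(x_t,t)$ onto $\epsilon$. Below is a minimal pure-Python sketch of one such step; the linear $\beta$ schedule, toy 1-D "image", and dummy predictor standing in for the UNet are all illustrative assumptions, not the paper's setup:

```python
import math, random

T = 1000
# Assumed linear beta schedule from 1e-4 to 0.02 over T steps.
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def noise_step(x0, t, eps):
    # Closed-form forward process, Eq. (4):
    # x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    a = alpha_bar[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * e
            for x, e in zip(x0, eps)]

def eps_theta(x_t, t):
    # Dummy stand-in for the UNet noise predictor.
    return [0.0 for _ in x_t]

def l_simple(x0):
    # One Monte-Carlo draw of the reweighted loss, Eq. (10).
    t = random.randrange(T)
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    x_t = noise_step(x0, t, eps)
    pred = eps_theta(x_t, t)
    return sum((e - p) ** 2 for e, p in zip(eps, pred))

random.seed(0)
loss = l_simple([0.5, -0.2, 0.1, 0.3])  # a toy 4-pixel "image"
print(loss >= 0.0, alpha_bar[0] > alpha_bar[-1])
```

In an actual training loop, the gradient of this loss would update the network parameters; with only 10 or 100 distinct $x_0$, the same images are revisited so often that the network can memorize them, which is the overfitting examined next.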
We analyze generation diversity qualitatively and quantitatively to study when DDPMs overfit as the number of training samples decreases. \n\n\\textbf{Basic Setups} We sample 10, 100, and 1000 images from FFHQ-babies (Babies), FFHQ-sunglasses (Sunglasses) \\cite{ojha2021few-shot-gan}, and LSUN Church \\cite{yu2015lsun} respectively as small-scale training datasets. The image resolution of all the datasets is set to $256\\times 256$. We follow the model setups in prior works \\cite{nichol2021improved, dhariwal2021diffusion} used for LSUN $256^2$ \\cite{yu2015lsun}. The maximum diffusion step $T$ is set to 1000. Our experiments are carried out on 8 NVIDIA RTX A6000 GPUs (with 48 GB of memory each). We use a learning rate of 1e-4 and a batch size of 48. We train DDPMs for $40K$ iterations on datasets containing 10 or 100 images and $60K$ iterations on datasets containing 1000 images.\n\n\\textbf{Qualitative Evaluation} In Fig. \\ref{scratch}, we visualize the generated samples of DDPMs trained from scratch on small-scale datasets and provide some training samples for comparison (more generated and training samples are provided in Appendix \\ref{appendix_scratch}). We observe that DDPMs overfit and tend to replicate training samples when the datasets are limited to 10 or 100 images. Since some training samples are flipped during training as a step of data augmentation, we can also find some generated images that are mirror images of the training samples. For datasets containing 1000 images, in contrast, DDPMs can generate samples that follow distributions similar to the training samples instead of replicating them, and the overfitting problem is relatively alleviated. As shown in the bottom row of Fig. 
\\ref{scratch}, all kinds of babies, people wearing sunglasses, and churches different from the training samples can be found in the generated images.\n\n\n\\textbf{Quantitative Evaluation} LPIPS \\cite{zhang2018unreasonable} was proposed to evaluate the perceptual distances between images. We propose a Nearest-LPIPS metric based on LPIPS to evaluate the generation diversity of DDPMs trained on small-scale datasets. More specifically, we first randomly generate 1000 images and, for each generated sample, find the most similar training sample, i.e., the one with the lowest LPIPS distance to it. Nearest-LPIPS is then defined as this lowest LPIPS distance averaged over all the generated samples. If a generative model reproduces the training samples exactly, the Nearest-LPIPS metric will have a score of zero. Larger Nearest-LPIPS values indicate lower replication rates and greater diversity compared with the training samples. \n\n\n\\begin{table}[htbp]\n\\centering\n\\begin{tabular}{c|c|c|c}\nNumber of Samples & Babies & Sunglasses & Church \\\\\n\\hline\n$10$ & $0.2875$ & $0.3030$ & $0.3136$ \\\\\n$100$ & $0.3152$ & $0.3310$ & $0.3327$ \\\\\n$1000$ & $\\pmb{0.4658}$ & $\\pmb{0.4819}$ & $\\pmb{0.5707}$ \\\\\n\\hline\n$10$ (+flip) & $0.1206$ & $0.1217$ & $0.0445$\\\\\n$100$ (+flip) & $0.1556$ & $0.1297$ & $0.1177$ \\\\\n$1000$ (+flip) & $\\pmb{0.4611}$ & $\\pmb{0.4726}$ & $\\pmb{0.5625}$ \\\\\n\\end{tabular}\n\\caption{Nearest-LPIPS ($\\uparrow$) results of DDPMs trained from scratch on several small-scale datasets.}\n\\label{nearlpips}\n\\end{table}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{result1.jpg}\n \\caption{DDPM-based image generation samples on 10-shot FFHQ $\\rightarrow$ Babies, FFHQ $\\rightarrow$ Sketches, and LSUN Church $\\rightarrow$ Haunted houses.}\n \\label{result1}\n\\end{figure*}\n\nWe provide the Nearest-LPIPS results of DDPMs trained from scratch on small-scale datasets 
in the top part of Table \\ref{nearlpips}. For datasets containing 10 or 100 images, we obtain lower Nearest-LPIPS values, while for datasets containing 1000 images, we get measurably improved Nearest-LPIPS values. To avoid the influence of generated images that mirror training samples, we flip all the training samples as supplements to the original datasets and recalculate the Nearest-LPIPS metric. The results are listed in the bottom part of Table \\ref{nearlpips}. With the addition of flipped training samples, we find markedly lower Nearest-LPIPS values for datasets containing 10 or 100 images. However, we get almost the same Nearest-LPIPS results for DDPMs trained on larger datasets containing 1000 images, indicating that these models can generate diverse samples different from the original or flipped training samples.\n\n\nBased on the above analysis of DDPMs trained from scratch on limited data, we conclude that it becomes harder for DDPMs to learn the representations of the datasets as training data becomes scarce. With extremely limited training samples like 10 or 100 images, DDPMs overfit and can only replicate them, failing to match the data distributions or generate diverse results different from the training samples. \n\n\n\n\n\n\n\n\\section{Few-shot Pairwise DDPM Adaptation}\nFor large-scale datasets with great diversity, DDPMs can learn the representations of the datasets and generate high-quality and diverse images following distributions similar to the training samples. However, when the datasets are limited to 10 or 100 samples, DDPMs trained from scratch need a large number of iterations to generate reasonable images but still lack diversity and can only replicate the training samples, as illustrated in Sec. \\ref{section3}. 
Unfortunately, there is no guarantee of obtaining abundant data in many cases, such as artists' paintings and some other corner cases.\n\nTo realize DDPM-based few-shot image generation, we first fine-tune DDPMs pre-trained on large-scale source datasets on target domains using limited data directly. The fine-tuned models only need about $3K$ iterations to converge. As shown in the middle row of Fig. \\ref{result1}, they can produce diverse samples for target domains utilizing only 10 training samples. However, there still exists room for greater generation diversity. For example, similar hairstyles and facial expressions can be found in the samples produced by the fine-tuned DDPM trained on FFHQ $\\rightarrow$ Babies.\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{degrade.jpg}\n \\caption{Samples synthesized from fixed noise inputs by the fine-tuned DDPM on 10-shot FFHQ $\\rightarrow$ Amedeo's paintings throughout training.}\n \\label{degrade}\n\\end{figure}\n\n\n\n\nCompared with pre-trained models, the diversity degradation of fine-tuned models mainly comes from the shortened relative distances between generated samples. As shown in Fig. \\ref{degrade}, two samples synthesized from fixed noise inputs by the fine-tuned DDPM become more and more similar (e.g., eyes, ears, and facial expressions) throughout training. Therefore, we design a pairwise similarity loss to preserve the relative distances between generated samples during domain adaptation and propose the DDPM-PA approach based on that to achieve greater generation diversity.\n\n\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{pairwise.jpg}\n \\caption{The proposed DDPM-PA adds a pairwise similarity loss $\\mathcal{L}_{pair}$ to keep the relative distances between samples generated by the target models similar to the source models. 
The pairwise similarity between predicted images $\\tilde{x}_0^{i}, \\tilde{x}_0^{j}\\ (0\\leq i,j \\leq N)$ is converted into probability distributions for $\\mathcal{L}_{pair}$ computation. Taking 10-shot FFHQ $\\rightarrow$ Amedeo's paintings as an example, DDPM-PA can generate diverse samples sharing similar styles with the training samples by utilizing the knowledge provided by the source model pre-trained on FFHQ.}\n \\label{pairwise}\n\\end{figure*}\n\n\\subsection{Pairwise Similarity Loss}\n\\label{42}\n\nWe illustrate the pairwise similarity loss in Fig. \\ref{pairwise} using 10-shot FFHQ $\\rightarrow$ Amedeo's paintings as an example. The weights of the target model $\\epsilon_{tar}$ are initialized with those of the source model $\\epsilon_{sou}$ and adapted to the target domain. To construct N-way probability distributions for each image, we sample a batch of noised images $\\left\\lbrace x_t^{n} \\right\\rbrace_{n=0}^{N}$ by randomly adding Gaussian noise to training samples $x_0\\sim q(x_0)$ following Equations \\ref{eq1} and \\ref{eq2}. Then the source and target models are applied to predict the fully denoised images $\\left\\lbrace \\tilde{x}_0^{n} \\right\\rbrace_{n=0}^{N}$. We have the prediction of $\\tilde{x}_0$ in terms of $x_t$ and $\\epsilon_{\\theta}(x_t,t)$ as follows:\n\\begin{align}\n\\label{eq11}\n \\tilde{x}_0 = \\frac{1}{\\sqrt{\\overline{\\alpha}_t}}x_t-\\frac{\\sqrt{1-\\overline{\\alpha}_t}}{\\sqrt{\\overline{\\alpha}_t}}\\epsilon_{\\theta}(x_t,t),\n\\end{align}\nwhich can be derived from Equation \\ref{eq4}. Cosine similarity is employed to measure the relative distances between the predicted samples $\\tilde{x}_0$. 
The probability distributions for $\\tilde{x}_0^{i}\\ (0\\leq i \\leq N)$ in the source and target models can be expressed as follows:\n\\begin{align}\n p_{i}^{sou} = Softmax(\\left\\lbrace sim(\\tilde{x}_{0_{sou}}^{i},\\tilde{x}_{0_{sou}}^{j})\\right\\rbrace_{\\forall i\\neq j}), \\\\\n p_{i}^{tar} = Softmax(\\left\\lbrace sim(\\tilde{x}_{0_{tar}}^{i},\\tilde{x}_{0_{tar}}^{j})\\right\\rbrace_{\\forall i\\neq j}),\n\\end{align}\nwhere $sim$ denotes cosine similarity. Then we have the pairwise similarity loss $\\mathcal{L}_{pair}$ as follows:\n\\begin{align}\n \\mathcal{L}_{pair}(\\epsilon_{sou},\\epsilon_{tar}) = \\mathbb{E}_{t,x_0,\\epsilon} \\sum_{i} D_{KL} (p_{i}^{tar}\\,||\\, p_{i}^{sou}),\n\\end{align}\nwhere $D_{KL}$ denotes the KL-divergence. The pairwise similarity loss guides the target models to preserve the relative distances between samples and keep distributions similar to those of the source models, leading to greater generation diversity. \n\n\n\n\nThe overall loss function of DDPM-PA is the weighted sum of three terms, including $\\mathcal{L}_{simple}$, $\\mathcal{L}_{vlb}$, and $\\mathcal{L}_{pair}$: \n\\begin{align}\n\\label{loss}\n \\mathcal{L} = \\mathcal{L}_{simple} + \\lambda_1 \\mathcal{L}_{vlb} + \\lambda_2 \\mathcal{L}_{pair}.\n\\end{align}\nWe follow prior works \\cite{nichol2021improved} in setting $\\lambda_1$ to 0.001 to prevent $\\mathcal{L}_{vlb}$ from overwhelming $\\mathcal{L}_{simple}$. The proposed pairwise similarity loss $\\mathcal{L}_{pair}$ is added to relieve overfitting and preserve generation diversity during adaptation when training data is limited. We empirically find $\\lambda_2$ values ranging between 0.2 and 5.0 to be effective for few-shot adaptation setups. 
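As a concrete illustration, the prediction of $\\tilde{x}_0$ (Equation \\ref{eq11}) and the pairwise similarity loss can be sketched in plain NumPy as follows; the function names and the flattened-array setting are illustrative assumptions, while in practice both terms are computed on image tensors inside the training loop:

```python
import numpy as np

def predict_x0(x_t, eps_pred, alpha_bar_t):
    """Recover the fully denoised prediction from x_t and the predicted noise:
    x0~ = x_t / sqrt(a_bar_t) - sqrt(1 - a_bar_t) / sqrt(a_bar_t) * eps."""
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

def pairwise_similarity_loss(x0_sou, x0_tar):
    """L_pair: sum_i KL(p_i^tar || p_i^sou) over softmaxed cosine similarities.
    x0_sou, x0_tar: (N, D) arrays of flattened predictions from the source
    and target models (illustrative layout)."""
    def probs(x):
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
        sim = x @ x.T                                         # cosine similarities
        n = sim.shape[0]
        rows = sim[~np.eye(n, dtype=bool)].reshape(n, n - 1)  # drop i == j
        rows = rows - rows.max(axis=1, keepdims=True)         # stable softmax
        e = np.exp(rows)
        return e / e.sum(axis=1, keepdims=True)

    p_sou, p_tar = probs(x0_sou), probs(x0_tar)
    # Sum over i of KL(p_i^tar || p_i^sou).
    return float(np.sum(p_tar * (np.log(p_tar) - np.log(p_sou))))
```

When the two models agree on all relative distances, the loss is zero; it grows as the target model's similarity structure drifts away from the source model's.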
\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{result2.jpg}\n \\caption{DDPM-PA image generation samples on 10-shot FFHQ $\\rightarrow$ Raphael's paintings, FFHQ $\\rightarrow$ Sunglasses, and LSUN Church $\\rightarrow$ Landscape drawings.}\n \\label{result2}\n\\end{figure*}\n\n\\begin{table*}[htbp]\n\\centering\n\\setlength{\\tabcolsep}{2mm}{\n\\begin{tabular}{l|c|c|c|c|c}\n Approaches & \\makecell[c]{FFHQ $\\rightarrow$ \\\\ Sketches} & \\makecell[c]{FFHQ $\\rightarrow$ \\\\ Babies} & \\makecell[c]{FFHQ $\\rightarrow$ \\\\ Raphael's paintings} & \\makecell[c]{LSUN Church $\\rightarrow$ \\\\ Haunted houses} & \\makecell[c]{LSUN Church $\\rightarrow$ \\\\ Landscape drawings} \n \\\\\n\\hline\nTGAN \\cite{wang2018transferring} & $0.394 \\pm 0.023$ & $0.510 \\pm 0.026$ & $0.533 \\pm 0.023$ & $0.585 \\pm 0.007$ & $0.601 \\pm 0.030$ \\\\\nTGAN+ADA \\cite{ada} & $0.427 \\pm 0.022$ & $0.546 \\pm 0.033$ & $0.546 \\pm 0.037$ & $0.615 \\pm 0.018$ & $0.643 \\pm 0.060$ \\\\\nFreezeD \\cite{mo2020freeze} & $0.406 \\pm 0.017$ & $0.535 \\pm 0.021$ & $0.537 \\pm 0.026$ & $0.558 \\pm 0.019$ & $0.597 \\pm 0.032$ \\\\\nMineGAN \\cite{wang2020minegan} & $0.407 \\pm 0.020$ & $0.514 \\pm 0.034$ & $0.559 \\pm 0.031$ & $0.586 \\pm 0.041$ & $0.614 \\pm 0.027$ \\\\\nEWC \\cite{ewc} & $0.430 \\pm 0.018$ & $0.560 \\pm 0.019$ & $0.541 \\pm 0.023$ & $0.579 \\pm 0.035$ & $0.596 \\pm 0.052$ \\\\\nCDC \\cite{ojha2021few-shot-gan} & $0.454 \\pm 0.017$ & $0.583 \\pm 0.014$ & $0.564 \\pm 0.010$ & $0.620 \\pm 0.029$ & $0.674 \\pm 0.024$ \\\\\nDCL \\cite{zhao2022closer} & $0.461 \\pm 0.021$ & $0.579 \\pm 0.018$ & $0.558 \\pm 0.033$ & $0.616 \\pm 0.043$ & $0.626 \\pm 0.021$ \\\\\n\\hline\nFine-tuned DDPMs & $0.473 \\pm 0.022$ & $0.513 \\pm 0.019$ & $0.466 \\pm 0.018$ & $0.590 \\pm 0.045$ & $0.666 \\pm 0.044$ \\\\\nDDPM-PA (ours) & $\\pmb{0.509 \\pm 0.054}$ & $\\pmb{0.603 \\pm 0.017}$ & $\\pmb{0.579 \\pm 0.027}$ & $\\pmb{0.627 \\pm 0.050}$ & $\\pmb{0.686 \\pm 0.032 }$ 
\\\\\n\\end{tabular}}\n\\caption{Intra-LPIPS ($\\uparrow$) results of DDPM-based approaches and GAN-based baselines on 10-shot image generation tasks adapted from the source domains FFHQ and LSUN Church. Standard deviations are computed across 10 clusters (the same number as training samples). The proposed DDPM-PA achieves state-of-the-art performance in generation diversity compared with modern GAN-based approaches.}\n\\label{intralpips}\n\\end{table*}\n\n\n\n\n\n\n\n\\subsection{Quality and Diversity Evaluation}\n\\label{422}\n\n\\textbf{Basic Setups} To demonstrate the effectiveness of DDPM-PA, we evaluate it with a series of few-shot image generation tasks using extremely few training samples (10 images). The performance of DDPM-PA on relatively abundant data is reported in Appendix \\ref{appendix_adaptation}. We choose FFHQ \\cite{Karras_2020_CVPR} and LSUN Church \\cite{yu2015lsun} as source datasets and train DDPMs from scratch on these two datasets for $300K$ and $250K$ iterations, respectively, as source models. As for the target datasets, we employ 10-shot Sketches \\cite{wang2008face}, FFHQ-babies (Babies), FFHQ-sunglasses (Sunglasses) \\cite{Karras_2020_CVPR}, and face paintings by Amedeo Modigliani and Raphael Peale \\cite{yaniv2019face} corresponding to the source domain FFHQ. In addition, 10-shot Haunted houses and Landscape drawings are used as the target datasets corresponding to LSUN Church. The model setups are consistent with the experiments on small-scale datasets in Sec. \\ref{section3}. We train DDPM-PA models for $4K$-$5K$ iterations with a batch size of 24 on 8 NVIDIA RTX A6000 GPUs.\n\n\n\\textbf{Evaluation Metrics} We follow Ojha et al.'s work \\cite{ojha2021few-shot-gan} in using Intra-LPIPS to evaluate generation diversity. To be more specific, we generate 1000 images and assign each of them to the training sample with the lowest LPIPS \\cite{zhang2018unreasonable} distance. 
Intra-LPIPS is defined as the average pairwise LPIPS distance within members of the same cluster, averaged over all the clusters. If a model exactly replicates the training samples, its Intra-LPIPS will have a score of zero. Larger Intra-LPIPS values correspond to greater generation diversity. We fix the noise inputs for DDPM-based approaches and GAN-based baselines, respectively, to synthesize samples for a fair comparison of generation diversity in terms of Intra-LPIPS.\n\nFID \\cite{heusel2017gans} is widely used to evaluate the generation quality of generative models by computing the distribution distance between generated images and training datasets. However, FID becomes unstable and unreliable for datasets containing only a few samples (e.g., the 10-shot datasets used in this paper). Therefore, we provide visualized samples to evaluate the generation quality of DDPM-PA compared with prior works.\n\n\\textbf{Baselines}\nSince few prior works realize few-shot image generation with DDPM-based models, we employ 7 GAN-based baselines sharing goals similar to ours, i.e., adapting pre-trained models to target domains using only a few available samples, for comparison: TGAN \\cite{wang2018transferring}, TGAN+ADA \\cite{ada}, FreezeD \\cite{mo2020freeze}, MineGAN \\cite{wang2020minegan}, EWC \\cite{ewc}, CDC \\cite{ojha2021few-shot-gan}, and DCL \\cite{zhao2022closer}. All the methods are implemented based on the same StyleGAN2 \\cite{Karras_2020_CVPR} codebase. DDPMs directly fine-tuned on limited data are included for comparison as well. The StyleGAN2 models and DDPMs trained on the large source datasets share similar generation quality and diversity (see more details in Appendix \\ref{appendix_source}).\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{sunglasses.jpg}\n \\caption{10-shot image generation samples on FFHQ $\\rightarrow$ Sunglasses. 
All the visualized samples of GAN-based approaches are synthesized from fixed noise inputs (rows 1-7). Visualized samples of the fine-tuned DDPM and DDPM-PA are synthesized from fixed noise inputs as well (rows 8-9). Our approach generates high-quality results with fewer blurs and artifacts and achieves considerable generation diversity.}\n \\label{sunglass}\n\\end{figure*}\n\n\\textbf{Qualitative Evaluation}\nIn the bottom row of Fig. \\ref{result1}, we visualize the samples of DDPM-PA on 10-shot FFHQ $\\rightarrow$ Babies, FFHQ $\\rightarrow$ Sketches, and LSUN Church $\\rightarrow$ Haunted houses. Compared with fine-tuned DDPMs, DDPM-PA produces more diverse samples different from the training samples under the guidance of $\\mathcal{L}_{pair}$. For example, DDPM-PA generates babies with more diverse hairstyles and facial expressions. More visualized samples for other adaptation setups are provided in Fig. \\ref{result2}. We observe that DDPM-PA generates samples containing hats that cannot be found in the training samples when adapting FFHQ to Babies and Sunglasses. The adaptation from LSUN Church to Haunted houses and Landscape drawings retains all kinds of building structures. Moreover, DDPM-PA also performs better at learning target distributions. The target models are encouraged to learn the common features of the limited training samples instead of generating samples similar to one of them. As shown in the samples of FFHQ $\\rightarrow$ Sketches, DDPM-PA generates images whose style is more similar to the training samples while maintaining diversity.\n \n\nFig. \\ref{sunglass} shows the results of GAN-based baselines and DDPM-based approaches on 10-shot FFHQ $\\rightarrow$ Sunglasses. For a fair comparison, we fix the noise inputs for GAN-based and DDPM-based approaches, respectively. We observe that GAN-based approaches generate samples containing unnatural blurs and artifacts, while DDPM-based approaches achieve more realistic results. 
Compared with the fine-tuned DDPM, DDPM-PA generates more diverse images different from the training samples, with greater variation in, e.g., hairstyles and the color of sunglasses. Samples produced by DDPM-PA models trained for different numbers of iterations, together with more visualized comparisons on 10-shot FFHQ $\\rightarrow$ Babies, FFHQ $\\rightarrow$ Raphael's paintings, and LSUN Church $\\rightarrow$ Landscape drawings, can be found in Appendix \\ref{appendix_results}. \n\n\n\\textbf{Quantitative Evaluation}\nWe provide the Intra-LPIPS results of DDPM-PA under a series of 10-shot adaptation setups in Table \\ref{intralpips}. Benefiting from the pairwise similarity loss $\\mathcal{L}_{pair}$, DDPM-PA realizes a significant improvement in Intra-LPIPS compared with fine-tuned DDPMs. Moreover, DDPM-PA outperforms state-of-the-art GAN-based approaches on Intra-LPIPS, indicating its strong capability of maintaining generation diversity. Additional Intra-LPIPS results for other adaptation setups are provided in Appendix \\ref{appendix_results}. \n\n\n\n\\subsection{Ablation Analysis}\n\\label{43}\nWe evaluate the generation diversity of DDPM-PA with different values of $\\lambda_2$, the weight of the proposed $\\mathcal{L}_{pair}$, using 10-shot FFHQ $\\rightarrow$ Raphael's paintings as an example. The quantitative results are listed in Table \\ref{ablation}. We add visualized comparison results in Appendix \\ref{appendix_ablation}. DDPM-PA models with larger $\\lambda_2$ values tend to produce more diverse results. 
However, excessively large $\\lambda_2$ values may cause $\\mathcal{L}_{pair}$ to overwhelm $\\mathcal{L}_{simple}$ and prevent the adapted DDPMs from learning the distributions of target domains, leading to negative visual effects and degraded diversity.\n\n\\begin{table}[htbp]\n \\centering\n \\small\n \\begin{tabular}{c|c|c|c}\n $\\lambda_2$ & $0.01$ & $0.04$ & $0.2$ \\\\\n \\hline\n Intra-LPIPS & $0.47 \\pm 0.04 $ & $0.50 \\pm 0.04$ & $0.51 \\pm 0.04$ \\\\\n \\hline\n $\\lambda_2$ & $1.0$ & $5.0$ & $10.0$ \\\\\n \\hline\n Intra-LPIPS & $0.52 \\pm 0.03$ & $\\pmb{0.58 \\pm 0.03}$ & $0.55 \\pm 0.05$ \\\\\n \\end{tabular} \n \\caption{Intra-LPIPS ($\\uparrow$) results of DDPM-PA trained on 10-shot FFHQ $\\rightarrow$ Raphael's paintings with different $\\lambda_2$.}\n \\label{ablation}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\nThis work focuses on DDPM-based few-shot image generation. We first evaluate the performance of DDPMs on small-scale datasets. When trained on a few samples from scratch, DDPMs need a large number of iterations to generate reasonable results but can only replicate training samples instead of generating diverse images. Next, we fine-tune pre-trained DDPMs directly on target domains using limited data but still fail to achieve results competitive in diversity. Therefore, we present DDPM-PA, which uses a pairwise similarity loss to preserve the relative distances between samples during domain adaptation. DDPM-PA realizes a significant improvement in generation diversity and outperforms current state-of-the-art GAN-based approaches. In addition, DDPM-PA avoids generating blurs or artifacts and achieves more realistic results. We consider our work to be an important step towards more data-efficient diffusion models. 
The limitations are discussed in Appendix \\ref{appendix_limitations}.\n\n\\bibliographystyle{abbrv}\n\n\n\n\\section{Deep Hawkes Process}\n\\label{sec:DHP_DHP}\nIn this section, we propose that the Deep Hawkes Process (DHP) be used\nto concurrently model the order book event timings and associated event\ntypes. By assimilating it into the order arrival process, market\nmakers gain control over the sending of different orders at specific\npoints in time. The basic idea behind our approach is to view the\nconditional intensity of the Hawkes process as a nonlinear deterministic\nfunction of past history, and to use DLSTM to automatically learn a\nhigh-dimensional representation from the data. We believe that\ncapturing the constraints of the Hawkes process within a recurrent network\narchitecture enables a replication of the success of natural language\nprocessing \\citep{Du2016, Mei2017} and time series prediction \\citep{Sagheer2019A, Sagheer2019B} in market making. \n\n\\subsection{Deep Long Short-Term Memory}\n\nDespite the success of recurrent marked temporal point processes and the\nNeural Hawkes process across disciplines, the conventional shallow\nrecurrent network architecture and its variant, long short-term memory\n(LSTM), might be inadequate when it comes to modelling noisy\nasynchronous order book events. Lately, the DLSTM architecture \\citep{Sagheer2019A, Sagheer2019B, Zhu2016} used in action modelling\nand multivariate time series forecasting demonstrated its advantage over\nthe traditional LSTM architecture. The architecture of the DLSTM is the same\nas that of the previously introduced LSTM, apart from the fact that it involves\nmultiple LSTM layers stacked on top of each other. \n\n\\subsection{DLSTM-SDAE}\n\nIn an ideal setting, the performance of the DLSTM model proved to be\nempirically better than existing contemporary statistical and deep models. 
However, as the number of non-linear layers is scaled up, the overall learning of the models suffers greatly because the back-propagation algorithm becomes trapped within multiple local minima \\citep{Sagheer2019B}. This may be the biggest hurdle when it comes to modelling limit order book events that comprise complex, asynchronous, non-linear multivariate time series. The ultra-noisy order book data also makes processing more challenging. To circumvent the limitations of the conventional stacked LSTM model, we use the stacked denoising auto-encoder (SDAE) together with DLSTM. The SDAE enables deep neural networks with multiple nonlinear hidden layers to learn complex features from noisy limit order book data \\citep{Li2019, Vincent2010} and resolves the problem of random weight initialization of the LSTM units in DLSTM \\citep{Sagheer2019B}. Figure \\ref{fig:DLSTM} shows the proposed DLSTM-based SDAE architecture for DHP.\n\n\\begin{figure}[H]\n\\centering\n\\resizebox{\\textwidth}{!}{\\begin{tikzpicture}[roundnode\/.style={circle,draw,thick, align=center, text width=6mm, minimum size=6mm}]\n\t \n \t\n\t\n\t\n\t\\node(h3) at (8,8) {$h^{N}_{t+1}$};\n \\node(h2) at (4,8) {$h^{N}_{t}$};\n\t\\node(h1) at (0,8) {$h^{N}_{t-1}$};\t\n\t\n\t\\node[draw] (LSTM9) at (8,6) {LSTM N};\n\t\\node[draw] (LSTM8) at (8,4) {LSTM 2};\n\t\\node[draw] (LSTM7) at (8,2) {LSTM 1};\n \\node[draw] (LSTM6) at (4,6) {LSTM N};\t\n\t\\node[draw] (LSTM5) at (0,6) {LSTM N};\n\t\\node[draw] (LSTM4) at (4,4) {LSTM 2};\n\t\\node[draw] (LSTM3) at (0,4) {LSTM 2};\n\t\\node[draw] (LSTM2) at (4,2) {LSTM 1};\n\t\\node[draw] (LSTM1) at (0,2) {LSTM 1};\n\t\n\t\\node[draw] (SDAE1) at (0,0) {SDAE};\n\t\\node[draw] (SDAE2) at (4,0) {SDAE};\n\t\\node[draw] (SDAE3) at (8,0) {SDAE};\n\t\n\t\\node(H19) at (7.5,5) {$h^{2}_{t+1}$};\n\t\\node(H18) at (7.5,3) {$h^{1}_{t+1}$};\n\t\n\t\\node(H17) at (10,6) {$h^{N}_{t+1}$};\n\t\\node(H16) at (10,4) {$h^{2}_{t+1}$};\n\t\\node(H15) at (10,2) 
{$h^{1}_{t+1}$};\n\t\\node(H14) at (2.0,6.3) {$h^{N}_{t-1}$};\n\t\\node(H13) at (2.0,4.3) {$h^{2}_{t-1}$};\n\t\\node(H12) at (2.0,2.3) {$h^{1}_{t-1}$};\n\t\\node(H11) at (3.5,5) {$h^{2}_{t}$};\n\t\\node(H10) at (3.5,3) {$h^{1}_{t}$};\n\t\\node(H8) at (-0.5,5) {$h^{2}_{t-1}$};\n\t\\node(H7) at (-0.5,3) {$h^{1}_{t-1}$};\n\t\\node(H6) at (6,6.3) {$h^{N}_{t}$};\n\t\\node(H5) at (-2,6) {$h^{N}_{t-2}$};\n\t\\node(H4) at (6,4.3) {$h^{2}_{t}$};\n\t\\node(H3) at (-2,4) {$h^{2}_{t-2}$};\n\t\\node(H2) at (6,2.3) {$h^{1}_{t}$};\n\t\\node(H1) at (-2,2) {$h^{1}_{t-2}$};\t\n\t\n\t\\draw[->] (LSTM9) to (H17);\n\t\\draw[->] (H5) to (LSTM5);\n\t\\draw[->] (LSTM8) to (H16);\n\t\\draw[->] (H3) to (LSTM3);\n\t\\draw[->] (LSTM7) to (H15);\n\t\\draw[->] (H1) to (LSTM1);\n\t\\draw[->] (LSTM5) to (LSTM6);\n\t\\draw[->] (LSTM3) to (LSTM4);\n\t\\draw[->] (LSTM1) to (LSTM2);\n\t\\draw[->] (LSTM9) to (h3);\n\t\\draw[->] (LSTM6) to (h2);\n\t\\draw[->] (LSTM5) to (h1);\n\t\\draw[dotted, very thick] (LSTM4) to (LSTM6);\n\t\\draw[dotted, very thick] (LSTM3) to (LSTM5);\n\t\\draw[dotted, very thick] (LSTM8) to (LSTM9);\n\t\n\t\\draw[->] (LSTM7) to (LSTM8);\n\t\\draw[->] (LSTM6) to (LSTM9);\n\t\\draw[->] (LSTM4) to (LSTM8);\n\t\\draw[->] (LSTM2) to (LSTM7);\n\t\\draw[->] (LSTM2) to (LSTM4);\n\t\\draw[->] (LSTM1) to (LSTM3);\n\t\n\t\n\t\\draw[->] (SDAE3) to (LSTM7);\n\t\\draw[->] (SDAE2) to (LSTM2);\n\t\\draw[->] (SDAE1) to (LSTM1);\n\t\n\t\n\t\\node[roundnode](x1) at (0,-2) {$\\bf{x}_{t-1}$};\n\t\\node[roundnode](x2) at (4,-2) {$\\bf{x}_{t}$};\n\t\\node[roundnode](x3) at (8,-2) {$\\bf{x}_{t+1}$};\n\t\n\t\\draw[->] (x1) to (SDAE1);\n\t\\draw[->] (x2) to (SDAE2);\n\t\\draw[->] (x3) to (SDAE3);\n\t\n\\end{tikzpicture}\n}\n\\caption{The DLSTM-SDAE architecture.}\n\\label{fig:DLSTM}\n\\end{figure}\n\nAs shown in Figure \\ref{fig:DLSTM}, the noisy order book data is first denoised and reconstructed by the SDAE layers in the DLSTM-SDAE architecture. 
In this method, at the first layer the input $\\bf{x_t}$ is corrupted into $\\bf{\\tilde{x_t}}$ using the stochastic mapping $\\bf{\\tilde{x_t}} \\sim \\mathcal{S_{\\mathcal{D}}}(\\bf{\\tilde{x_t}}\\mid \\bf{x_t} )$. Then, the autoencoder maps the corrupted input $\\bf{\\tilde{x_t}}$ to a hidden representation $\\mathit{h} = f_{\\theta}(\\bf{\\tilde{x_t}})$ with encoder $f_{\\theta}(\\bf{\\tilde{x_t}})= \\sigma(W \\bf{\\tilde{x_t}} + b)$. Lastly, the decoder $g_{\\theta^{'}}$ reconstructs $\\bf{z}=g_{\\theta^{'}}(\\mathit{h})$ from the hidden representation $\\mathit{h}$. The parameters $\\theta$ and $\\theta^{'}$ are trained using stochastic gradient descent to minimize the reconstruction error measured by the squared error loss $L_{2}(\\bf{x},\\bf{z})=\\norm{\\bf{x} -\\bf{z} }^{2}$. Once the mapping is learned, the high-level hidden state $\\mathit{h}$ is used to train the next layer. For the detailed SDAE learning procedure, we refer the reader to the seminal paper on the subject \\citep{Vincent2010}.\n\nAt time $t$, the denoised input $\\bf{x_t}$ from the SDAE is passed to the first LSTM layer together with the previous hidden state $h^{1}_{t-1}$. The hidden state at time $t$, $h^{1}_{t}$, is calculated using the recursive LSTM procedure discussed in Equation \\ref{eqn:lstm}. It is then propagated to the next time step and the next LSTM layer. In the second layer, the hidden state $h^{1}_{t}$ and the previous state $h^{2}_{t-1}$ are used to compute $h^{2}_{t}$, and the procedure repeats until the last layer is reached.\n\n\\subsection{Model Formulation}\n\\label{ss:MF}\n\nLet $\\{t_n, \\varkappa_n, k_n\\}_{n \\in \\mathbb{N}}$ be a stream of order book events, where $t_n$ is the time of occurrence of an event, $\\varkappa_n$ its component, and $k_n \\in \\{1, \\dots , K\\}$ its corresponding mark. Then, the probability that the next event occurs at time $t_n$ and is of type $k_n$ is $\\mathbb{P}\\{(t_n,\\varkappa_n, k_n) \\mid \\mathcal{H}_n, (t_n - t_{n-1})\\} dt$. 
We are interested in a model that predicts the next event $\\{t_n, \\varkappa_n, k_n\\}$ given the past history of events, evaluates its likelihood, and simulates future event streams by learning from past ones. Equation \\ref{eqn:hawkes_mod_a} shows the associated intensity function\nof the DHP with relaxed positivity constraints: \n\n\\begin{equation}\n\\lambda_k(t) = f_{k}(\\bf{w}_{k}^{\\top}\\bf{h}(t))\n\\label{eqn:hawkes_mod_a}\n\\end{equation}\nThe hidden state $\\bf{h}(t)$ is updated from the memory cell $\\bf{c}(t)$ as in Equation \\ref{eqn:hawkes_mod_b}.\n \\begin{equation}\n \\bf{h}(t) = \\bf{o}_{n} \\odot \\phi (\\bf{c}(t)) \\text{ for } t \\in (t_{n-1}, t_{n}]\n \\label{eqn:hawkes_mod_b}\n \\end{equation}\n \nThe life of the interval $(t_{n-1},t_n]$ ends with the next event $k_n$ at $t_n$: the DLSTM reads $\\{t_n, \\varkappa_n, k_n\\}$ and updates the current memory cells $\\bf{c}(t)$ to $\\bf{c}_{n+1}$, together with the associated hidden state $\\bf{h}(t_n)$. The other parameters of the DLSTM are recursively updated according to Equation \\ref{eqn:dlstm}.\n\n\\begin{eqnarray}\n\\label{eqn:dlstm}\n\\begin{aligned}\n\\!&{\\bf{i}}_{n+1}= \\sigma \\left( {{\\bf{W}}_{xi}}{{\\bf{x}}_{n}} + {{\\bf{W}}_{hi}}{\\bf{h}}(t_n) + {{\\bf{W}}_{ci}}{\\bf{c}} (t_n) + {\\bf{b}}_i \\right), \\\\ \n\\!&{\\bf{f}}_{n+1} = \\sigma \\left( {{\\bf{W}}_{xf}}{{\\bf{x}}_{n}} + {{\\bf{W}}_{hf}}{\\bf{h}} (t_n) + {{\\bf{W}}_{cf}}{\\bf{c}} (t_n) + {\\bf{b}}_f \\right), \\\\\n\\!&{\\bf{\\overline{c}}}_{n+1} = \\phi \\left( {{\\bf{W}}_{xc}}{{\\bf{x}}_{n}}\\! +\\! {{\\bf{W}}_{hc}}{\\bf{h}}(t_n) \\!+\\! 
{\\bf{b}}_c \\right), \\\\ \n\\!&{\\bf{c}}_{n+1} = {\\bf{f}}_{n+1}\\!\\odot {\\bf{c}}(t_n)\\!+ {\\bf{i}}_{n+1}\\!\\odot {\\bf{\\overline{c}}}_{n+1}, \\\\ \n\\!&{\\bf{\\widehat{c}}}_{n+1} = {\\bf{\\widehat{f}}}_{n+1}\\!\\odot {\\bf{\\widehat{c}}} (t_n)\\!+ {\\bf{\\widehat{i}}}_{n+1}\\!\\odot {\\bf{\\overline{c}}}_{n+1}, \\\\ \n\\!&{\\bf{o}}_{n+1} = \\sigma \\left( {{\\bf{W}}_{xo}}{{\\bf{x}}_{n}} + {{\\bf{W}}_{ho}}{\\bf{h}}(t_n) + {{\\bf{W}}_{co}}{\\bf{c}}(t_n) + {\\bf{b}}_o \\right), \\\\\n\\!&{\\bf{\\Bar{\\Bar{p}}}}_{n+1} = f \\left( {{\\bf{W}}_{xd}}{{\\bf{x}}_{n}} + {{\\bf{W}}_{hd}}{\\bf{h}}(t_n) + {{\\bf{W}}_{cd}}{\\bf{c}} (t_n) + {\\bf{b}}_d \\right),\n\\end{aligned}\n\\end{eqnarray}\n\nwhere $\\bf{x}_n$ is the $n^{th}$ input vector, given by a one-hot encoding of the new order book event $k_n$; the activation functions $\\sigma \\left(x \\right)$ \/ $\\phi \\left(x \\right)$ are the sigmoid \/ hyperbolic tangent functions, respectively; ${\\bf{W}}_{A B}$ denotes a weight matrix (e.g., ${\\bf{W}}_{c i}$, the weight matrix from the memory cell to the input gate); ${\\bf{b}}_{B}$ denotes the bias term of $B$ with $B \\in \\{i,f,c,o,d\\}$; ${\\bf{\\Bar{\\Bar{p}}}}$ is the power-law decay parameter used in Equation \\ref{eqn:c_decay}; and $f(x)=s \\log(1+\\exp(x\/s))$, $s > 0$, is the scaled softplus function. As can be seen in Equation \\ref{eqn:dlstm}, the parameters are updated using the hidden state $\\bf{h}(t_n)$ at time $t_n$, after its decay over the interval $t_{n} - t_{n-1}$, rather than the previous hidden state (Equation \\ref{eqn:lstm}). 
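As a small illustration, the scaled softplus $f$ and the resulting intensity $\\lambda_k(t) = f_{k}(\\bf{w}_{k}^{\\top}\\bf{h}(t))$ of Equation \\ref{eqn:hawkes_mod_a} can be sketched in plain NumPy; the vector names here are illustrative, not the actual implementation:

```python
import numpy as np

def scaled_softplus(x, s=1.0):
    """f(x) = s * log(1 + exp(x / s)), s > 0.
    Strictly positive and approximately linear for large x, so it
    softly enforces the positivity of the intensity."""
    # logaddexp(0, z) computes log(1 + exp(z)) without overflow.
    return s * np.logaddexp(0.0, np.asarray(x) / s)

def intensity(w_k, h_t, s=1.0):
    """lambda_k(t) = f_k(w_k^T h(t)), with h_t the decayed hidden state."""
    return float(scaled_softplus(w_k @ h_t, s))
```

Because $f$ is strictly positive, the resulting intensity is a valid event rate even when $\\bf{w}_{k}^{\\top}\\bf{h}(t)$ is negative.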
The memory cell ${\\bf{c}}(t)$ on the interval $(t_{n}, t_{n+1}]$ follows a power-law decay from ${\\bf{c}}_{n+1}$ towards ${\\bf{\\widehat{c}}}_{n+1}$ and is defined as:\n\n\\begin{equation}\n\\label{eqn:c_decay}\n\\bf{c}(t) = {\\bf{\\widehat{c}}}_{n+1} + \\left( {\\bf{c}}_{n+1} - {\\bf{\\widehat{c}}}_{n+1} \\right) \\left( \\left( t - t_n \\right)^{{- \\bf{\\Bar{\\Bar{s}}}}_{n+1}} \\right) \\text{ for } t \\in (t_n,t_{n+1}]\n\\end{equation}\n\nThe DHP, with its novel discrete update of the stacked LSTM state, allows the model to capture delayed responses, fit non-interacting event pairs, and cope with partially observed event streams. \\cite{Mei2017} discuss these benefits of the neural version of the model in detail. To keep the exposition tractable, we have illustrated the parameter updates for one layer of the stacked LSTM, but this extends easily to a deep architecture. For example, the hidden state ${\\bf{h}}^b$ at block $b$ in the stacked LSTM is recursively computed for $b = 1\\,\\text{:}\\,N$ and $t = 1\\,\\text{:}\\,T$ using ${\\bf{h}}^{b}_{t}= \\sigma ({{\\bf{W}}_{h^{b-1}h^{b}}}{\\bf{h}}^{b-1}_t + {{\\bf{W}}_{h^{b}h^{b}}}{\\bf{h}}^{b}_{t-1} + {\\bf{b}}_h)$.\n\n\\subsection{Feedback Loop Exploration}\n\nThe empirical results \\citep{Morariupatrichi2018, Gonzalez2017} indicate the existence of a feedback loop between the order flow and the shape of the LOB, together with self- and cross-excitation effects. In order to capture this feedback effect efficiently in a high-dimensional parameter space, we infuse the feedback loop exploration process into the DLSTM-SDAE architecture, as illustrated in Figure \\ref{fig:DLSTM}. In designing the deep network architecture and the appropriate regularisation, we ensure that the network can automatically explore distinct feedback loops for different types of events and their codependency. 
One example is the feedback effect of market buy and sell orders on the price, volume and bid-ask spread of the LOB. \n\nConsequently, we design a fully connected deep network in which\neach neuron represents an LOB state or a feedback effect of the preceding\nlayer, in order to explore the feedback loop automatically. Furthermore,\nthe neurons in the same layer are partitioned into $\\mathfrak{B}$ blocks to take into account different combinations of feedback loops. The corresponding regularisation is incorporated into the loss function and described by:\n\n\\begin{equation}\n\\label{eqn:loss}\n\\min_{{\\bf{W}}_{xB}}{\\mathfrak{L}}\\!+{\\lambda}_1\\!\\!\\sum_{\\substack{B \\in S}}{\\left\\Vert {\\bf{W}}_{xB}\\right\\Vert}_{1}\\!\\!+\\!{\\lambda}_2\\!\\sum_{\\substack{B \\in S}}\\sum_{{\\mathfrak{b}}=1}^{\\mathfrak{B}} \\left\\Vert {{\\bf{W}}_{xB,{\\mathfrak{b}}}}^T \\right\\Vert _{2,1}\\!,\n\\end{equation}\n\nwhere $\\mathfrak{L}$ is the loss function of the DLSTM, and the other two terms are the feedback loop regularisation applied to each block in the network \\citep{Zhu2016}. ${\\bf{W}}_{xB}\\!\\in {\\mathbb{R}}^{N_{N} \\times K_{J}} $ is the weight matrix, where $N_{N}$ is the number of neurons and $K_{J}$ the input dimension. $S$ denotes the set of gates and the cell in the LSTM neurons for each block in the DLSTM. Lastly, $\\left\\Vert {\\bf{W}}\\right\\Vert_{2,1}\\!\\!=\\!\\!\\sum_i\\!\\!\\sqrt{\\sum_j\\!\\!w_{i,j}^2}$ is the structural $\\ell_{2,1}$ norm. The regularised loss (Equation \\ref{eqn:loss}) is minimised using Adaptive Moment Estimation (Adam) \\citep{Kingma2015}. Adam is a variant of stochastic gradient descent that is memory efficient, relatively insensitive to hyperparameters, works with sparse gradients, is appropriate for non-stationary objectives, and adapts its learning rates on a per-parameter basis. 
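As a concrete illustration, the block-wise penalty in Equation \\ref{eqn:loss} can be sketched as follows in NumPy; the equal row-partition of each ${\\bf{W}}_{xB}$ into $\\mathfrak{B}$ blocks and the function name are our assumptions.

```python
import numpy as np

def feedback_penalty(weights, lam1, lam2, n_blocks):
    """Sketch of the l1 + block-wise l2,1 regulariser added to the DLSTM loss.

    weights  : dict mapping each gate/cell B in S to its matrix W_xB
    n_blocks : number of blocks the rows of each W_xB are partitioned into
    """
    penalty = 0.0
    for W in weights.values():
        penalty += lam1 * np.abs(W).sum()                  # lasso term ||W_xB||_1
        for block in np.array_split(W, n_blocks, axis=0):  # blocks W_xB,b
            # ||block^T||_{2,1}: l2 norm of each column of the block, summed
            penalty += lam2 * np.sqrt((block ** 2).sum(axis=0)).sum()
    return penalty
```

Group sparsity over the blocks lets entire candidate feedback pathways be switched off, while the lasso term prunes individual connections.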
It is well suited to highly noisy and\/or sparse-gradient order book data.\n\n\\subsection{Parameters Estimation}\n\nGiven a collection of sequences of order book events $\\mathcal{S}$, $\\{t_n, \\varkappa_n, k_n\\}_{n \\in \\mathbb{N}}$, the log-likelihood of the model can be expressed as the sum of the log-intensities of the observed events minus the integral of the aggregate intensity over the observation interval $[0, T]$:\n\\begin{equation}\\label{eqn:loglike}\n\\mathcal{L}\\!=\\!\\ \\sum_{n: t_n \\leq T}\\!\\!\\log \\lambda_{k_n}(t_n)\\! - \\int_{t=0}^{T}\\!\\!\\lambda(t) dt.\n\\end{equation} \n \nThe parameters are estimated by maximising $\\mathcal{L}$ using Adam \\citep{Kingma2015} and Monte Carlo methods \\citep{Mei2017}. The thinning algorithm, adapted from the multivariate Hawkes process, is then used to sample random sequences from the model.\n\\section*{Abstract}\n\nHigh-frequency market making is a liquidity-providing trading strategy that simultaneously generates many bids and asks for a security at\nultra-low latency while maintaining a relatively neutral position. The\nstrategy makes a profit from the bid-ask spread for every buy and sell\ntransaction, against the risk of adverse selection, uncertain execution\nand inventory risk. We design realistic simulations of limit order\nmarkets and develop a high-frequency market making strategy in which\nagents process order book information to post the optimal price, order\ntype and execution time. By introducing the Deep Hawkes process\nto the high-frequency market making strategy, we allow a feedback loop to be created between order arrival and the state of the limit order book,\ntogether with self- and cross-excitation effects. Our high-frequency market making strategy accounts for the cancellation of orders that influence order queue position, profitability, bid-ask spread and the value of the order. 
The experimental results show that our trading agent outperforms the baseline strategy, which uses a probability density estimate of the fundamental price. We investigate the effect of cancellations on market quality and the agent's profitability. We validate how closely the simulation framework approximates reality by reproducing stylised facts from the empirical analysis of the simulated order book data.\n\n \n\n\\section{Introduction}\nTechnological innovations and regulatory initiatives in the financial\nmarket have led to the traditional exchange floor being displaced by the\nelectronic exchange. The electronic exchange is a fully automated\ntrading system programmed to incisively enforce order precedence,\npricing and the matching of buy and sell orders. Each order's pricing,\nsubmission and execution is performed using sophisticated algorithmic\ntrading strategies, which account for 85\\% of the equity market's trading\nvolume \\citep{Mukerji2019}. High-frequency trading (HFT, or high-frequency trader), a subset of algorithmic trading, is characterised by exceptionally high speeds, minuscule timeframes and complex programs for initiating and liquidating positions \\citep{SEC2014}. The critical discussion on the role of HFT in a fragmented market was reignited after the Flash Crash of 6 May 2010 \\citep{Kirilenko2017}. This systemic intra-day anomaly only lasted for a couple of minutes, but temporarily wiped away a trillion dollars in market value. The analysis of agent-resolved, transaction-level data from the E-mini by \\cite{Kirilenko2017} also looks at the behaviour of market makers, whose inventory dynamics remain stationary in conditions of fluctuating liquidity. 
Even though the market design of the E-mini has no high-frequency market maker liability, unlike equity markets, this seminal paper \\citep{Kirilenko2017} gave a boost to research aimed at understanding high-frequency market making or other liquidity-providing strategies in an algorithmic trading setting.\n\nMarket making is a liquidity-providing trading strategy that quotes numerous bids and asks for a security in anticipation of making a profit from the bid-ask spread, while maintaining a relatively neutral position \\citep{Chakrqborty2011}. The high-frequency market making strategy can be characterised as a subset of HFT that uses latency, at a scale of nanoseconds, to trade in a fragmented market \\citep{Menkveld2013}. The growing literature reports that market makers provide quality liquidity, improve market quality, contribute to price efficiency and\nhave a positive but moderate welfare effect \\citep{Kirilenko2017, Brogaard2014, Menkveld2013}. However, there is another strand in the literature that argues that the quality of liquidity is deceptive. The orders are characterised as phantom liquidity, which quickly disappears before other market participants can access it. The optimal design of market making strategies is therefore an important question for practical applicability, market design and security exchange regulations.\n\nThe research on market making spans numerous disciplines, including finance \\citep{Ho1981, Glosten1985, Hara1986, Avellaneda2008, Gueant2013, Cartea2014, Ait2017}, agent-based modeling \\citep{Das2005, Preis2006, Wah2017, Chao2018}, and artificial intelligence \\citep{Spooner2018, Ganesh2019, Kumar2020}. Inspired by the seminal work of \\cite{Ho1981} and its mathematical formulation \\citep{Avellaneda2008}, the quintessential research in finance considers market making as a stochastic optimal control problem. 
In a simplistic setting, the market is modelled as a stochastic process, in which market makers try to maximise the expected utility of their profit and loss under inventory constraints \\citep{Gueant2013}. In parallel to inventory-based models, \\cite{Glosten1985} proposed information-based models, in which market makers face adverse selection risk emerging from informed traders. The unrealistic assumptions placed on market models to mathematically extract the market maker's asset pricing force researchers to look beyond stochastic optimal control approaches.\n\nMarket making has also been extensively investigated in the agent-based modelling (ABM, or agent-based model) literature \\citep{Das2005, Wah2017}. The ABMs in market making evolved from zero intelligence to an intelligent variant by incorporating order book microstructure for order placement, execution and pricing policy. For example, \\cite{Toke2011} reinforced the zero-intelligence market maker model with order arrival following mutually exciting Hawkes processes. The Hawkes process has\nbeen exhaustively used in the empirical estimation and calibration of\nmarket microstructure models deemed essential for designing optimal\nmarket-making strategies \\citep{Toke2011, Alan2018, Morariupatrichi2018}. In these models, the arrival rate of orders is not dependent on the state of the limit order book. However, the empirical results suggest the existence of a feedback loop between order arrival and the state of the limit order book, together with self- and cross-excitation effects, for which current models fail to account \\citep{Gonzalez2017, Morariupatrichi2018}. In addition, the Hawkes process constrains the parametric specification for the conditional intensity, which\nlimits the model's expressiveness. 
To tackle the parametric specification\nproblem, \\cite{Mei2017} proposed the Neural Hawkes process, in which the Hawkes process is generalised by calculating the event intensities from the hidden state of a long short-term memory (LSTM). Despite the success of the Neural Hawkes process in natural language processing \\citep{Mei2017}, the simple LSTM architecture might be inadequate when it comes to modelling noisy, asynchronous order book events.\n\nIn recent years, deep learning has made significant inroads into high-\nfrequency finance. The Convolutional Neural Network (CNN)\narchitecture and its variants were used to model price-formation\nmechanisms using order book events as input \\citep{Sirignano2019, Cont2019, Tashiro2019, Tsantekidis2017}. However, the CNN architectures are not sophisticated enough to capture self- and cross-excitation effects in the limit order book (LOB) \\citep{Zhang2019}. The deep long short-term memory (DLSTM) architecture performs a hierarchical processing of complex order book events, and as such is able to capture the temporal structures of the LOB \\citep{Sagheer2019A, Sagheer2019B}. However, training the DLSTM model directly through stochastic gradient descent, initialised with random parameters, may leave the backpropagation algorithm trapped in one of multiple local minima \\citep{Sagheer2019B, Vincent2010}. To circumvent the aforementioned limitations, the literature proposes an unsupervised pre-training of each layer and a stacking of many convolutional layers \\citep{Vincent2010}. We use Stacked Denoising Autoencoders (SDAEs) together with DLSTM to resolve the random weight initialisation problem in the base architecture \\citep{Sagheer2019B}. 
In addition, SDAEs are quite effective at filtering out noisy order-level data at minuscule resolutions.\n\nThe exemplary predictive performance of deep-learning models has\nencouraged researchers to augment order book data with agent-based\nartificial market simulation, for the purpose of investigating algorithmic trading strategies \\citep{Maeda2019}. The success of the model depends on the financial market simulation framework being close to reality. However, algorithmic trading research is still waiting for market simulators that could be used for developing, training, and testing algorithms in a manner similar to the classic Atari 2600 game simulator \\citep{Mnih2015}. In this paper, we develop realistic simulations of the financial market and use them to design a high-frequency market making agent using the Deep Hawkes process (DHP). The DHP models the streams of order book events by constructing a self-exciting multivariate Hawkes process and a limit order state process, which are coupled and interact with each other. Based on a long stream of high-frequency transaction-level order book data for the different events (e.g.\\ buy, sell, cancel), the high-frequency market makers use the DHP\nto accurately predict every held-out event.\n\n\\subsection{Contribution}\n\nThis paper is the first to incorporate the DHP into the market making\nstrategy, which allows feedback loops between order arrival and the\nstate of the limit order book, together with self- and cross-excitation\neffects. We extend the neurally self-modulating multivariate point\nprocess \\citep{Mei2017} to the deep framework by stacking SDAE with DLSTM, resulting in DLSTM-SDAE. The SDAE resolves the problems associated with weight initialisation, multiple local minima and ultra-noisy order book data that the stacked recurrent network fails to address. Our approach outperforms the NHP in predicting the next order type and its time. 
The gained predictive power helps agents to outperform the benchmark market making strategy, which uses a probability density estimate of the fundamental price. We outline our contribution below:\n\\begin{enumerate}\n\\item We design a multi-asset simulation framework that is scalable and\ncan simulate markets of substantial size. The framework is built on\nrealistic market architecture, interface kernels, a matching engine and\nthe Financial Information eXchange (FIX) protocol. \n\\item We are the first to introduce a feedback loop between order arrival and the state of the order book using the DHP in the high-frequency market making setting.\n\\item We investigate the predictive and trading performance of the agents\nagainst the benchmark.\n\\item We explore the effect of cancellation on order queue position, the agent's profitability, the bid-ask spread, and the value of an order relative to its queue position, in order to verify the existing empirical findings \\citep{Moallemi2017, Dahlstrm2018}.\n\\end{enumerate}\n\n\\subsection{Structure}\n\nThe paper is organised as follows. Section \\ref{sec:DHP_B} presents the background to the limit order book, market making, the Hawkes process, the Neural Hawkes process, and long short-term memory. Section \\ref{sec:DHP_LR} explores the research streams in market making strategies. Section \\ref{sec:DHP_DHP} explains the novel deep Hawkes process. Section \\ref{sec:DHP_MASF} illustrates the multi-agent simulation framework. Section \\ref{sec:DHP_E} elaborates on the experimental configuration. Section \\ref{sec:DHP_R} provides the results of the experiments. Section \\ref{sec:DHP_C} presents our conclusions.\n\\section{Background}\n\\label{sec:DHP_B}\nIn this section, we first introduce the limit order book together with basic definitions. We then briefly present examples of market making\nstrategy, and review essential tools for designing and investigating the\nclassical market making model. 
We start with the Hawkes process, the\nassumptions of which are violated in the financial markets. To address the\nmissing links of the Hawkes process, we describe the Neural Hawkes\nprocess. Finally, we discuss the LSTM framework, which represents a\ndivergence from selecting a parametric form for the conditional intensity\nin the Hawkes process variants.\n\n\\subsection{Limit Order Books} \n\nThe LOB is a centralised database for outstanding orders submitted by traders to buy or sell a specified number of securities on an exchange. Figure \\ref{fig:DHP_LOB} illustrates an example of the reconstructed LOB for Apple securities traded on NASDAQ. The smallest increment by which the price of the security can move is called a \\emph{tick}. The highest price at time $t$ for which there is an outstanding buy order is called the \\emph{bid price} ($168.50$), while the lowest sell price is called the \\emph{ask price} ($168.60$). The \\emph{bid-ask spread} ($0.10$) at time $t$ is defined as the difference between the ask and bid prices. The \\emph{mid price} ($168.55$) at time $t$ is the arithmetic average of the bid and the ask. For an in-depth review of definitions, mechanisms and\nnomenclature, please refer to \\cite{Gould2013}.\n\n\\begin{figure}[H]\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{img\/LOBA}}\n\\caption{ Limit order book at NASDAQ for Apple Inc. (AAPL) on March 29th, 2018.}\\label{fig:DHP_LOB}\n\\end{center}\n\\end{figure}\n\nIn an exchange, traders can primarily submit three different types\nof orders: \\emph{limit orders}, \\emph{market orders}, and \\emph{cancellation orders}. A \\emph{limit order} is an order to buy or sell a particular number of securities at a specified price or better. A \\emph{market order} is an order to immediately buy or sell a certain number of securities at the best available price in the LOB. Market orders are executed instantly against the best available price in the limit order book. 
Orders that exceed the size available at the best price automatically spill over to the next best available price. Unlike market orders, limit orders rest in the order book, pending\nexecution against a market order or being partially or fully \\emph{cancelled} by traders. Limit orders posted near the bid and ask are likely to be executed quickly, but execution may be delayed if market prices diverge from the requested price. Market makers utilise this attribute to design optimal trading strategies \\citep{Spooner2018}. \n \nIn an exchange, several orders posted by traders can have the same\nprice at a given time $t$. To effectively match the orders within each\ndiscrete price level, the LOBs employ various priority mechanism\nalgorithms. The algorithm most commonly employed by various\nexchanges is \\emph{price-time}. In this case, for buy or sell orders, the matching algorithms give priority to the orders with the highest or lowest price, respectively. In the event of ties, preference is given to orders with the earliest submission time relative to other orders \\citep{Gould2013}. Other prominent priority mechanism algorithms include \\emph{pro-rata} and \\emph{price-size}. Under the \\emph{pro-rata} algorithms, the ties at a given price are broken by distributing orders according to the depth in the LOB, while \\emph{price-size} mechanism algorithms break the ties by giving higher priority to larger orders. Our study makes use of \\emph{price-time} priority mechanism algorithms, as these encourage market makers to submit limit orders when designing their trading strategies. \n\n\\subsection{High-Frequency Market Making}\nMarket making is a liquidity-providing trading strategy that quotes\nnumerous bids and asks for a security in anticipation of making a profit\nin the form of the bid-ask spread, while maintaining a relatively neutral\nposition \\citep{Chakrqborty2011}. 
The modern, or high-frequency, market making strategy can also be characterised as a subset of HFT in which the trader is \"lightning fast with a latency (inter-message time) upper bound of 1 millisecond, only engages in proprietary trading, generates many trades (it participates in 14.4\\% of all trades, split almost evenly across both markets), and starts and ends most trading days with a zero net position\" \\citep{Menkveld2013}. The design of optimal market making strategies rests on optimising order-handling costs, the costs of adversely selected bid or ask quotes, and the risk-aversion costs of a non-zero position \\citep{Menkveld2013}.\n\nIn order to better understand high-frequency market making, let us consider a simple example from the LOB illustrated in Figure \\ref{fig:DHP_LOB}. The market making strategy places a buy order at $168.50$ and a sell order at $168.60$. The execution of both orders gives the trader a profit of $0.10$, which is the spread. As market makers execute vast numbers of such trades, these small profits accumulate. \n\nHowever, high-frequency market making strategies are always exposed to the risk of adverse price movements, uncertain execution and adverse\nselection \\citep{Penalva2015}. To mitigate these risks, high-frequency market making strategies seek to compete in the market through lower order-processing costs, fast matching engines and low latency. The fast matching engine reduces the adverse selection risk by enabling a\nmarket-making strategy to update quotes immediately on the arrival of\norder book information, while low latency reduces the search across market\nvenues to mitigate a costly non-zero inventory position \\citep{Penalva2015, Menkveld2013}. 
The array of sophisticated statistical and machine learning models used on real-time order book data serves to\noptimise the transaction costs, which are directly related to profitability.\n\n\\subsection{Hawkes Process}\nTo set the stage for the Hawkes process, we first explicate key concepts related to counting processes, point processes, and the conditional intensity function from the classical literature \\citep{Gao2018, Alan2018, Bacry2015, Embrechts2011}.\n\nLet us consider a positive, increasing sequence of event arrival times, $\\{t_n\\}_{n \\in \\mathbb{N}}$, such that $\\forall \\ n \\in \\mathbb{N}$, $t_n < t_{n+1}$, defined on a probability space $(\\Omega,\\mathcal{F},\\mathbb{P})$ equipped with the natural filtration $\\mathbb{F} = (\\mathcal{F}_{t})_{t\\geq 0}$ of the process; the associated counting process is an almost surely finite, right-continuous step function defined for all $t \\in \\mathbb{R}_+$. The counting process $N(t)$ and its analogous point process $L(t)$ are defined as:\n\n\\begin{equation}\nN(t)=\\sum_{n \\in \\mathbb{N}} \\mathbb{I}_{t_n \\leq t} \\quad \\text{and} \\quad L(t)=\\sum_{n \\in \\mathbb{N}} \\ell_{n}\\cdot \\mathbb{I}_{t_n \\leq t}\n\\end{equation}\nwhere $\\mathbb{I}_{\\{.\\}}$ is the indicator function and $\\{ \\ell_{n}: n \\ge 1 \\}$ is a sequence of independent and identically distributed non-negative random variables.\n\nIn the academic literature, counting and point process terminology is often used interchangeably; the reader is expected to infer the nature of the process from the context. For example, point processes are often characterised by the distribution function of the occurrence of the next event conditioned on the past, but problems associated with the conditional arrival distribution mean that this is not very practical. Instead, the conditional intensity function is used. 
For a counting process $N(t)$ with associated history $\\mathcal{H}(t)$ and adapted to a filtration $\\mathbb{F}$, we define the conditional intensity as: \n\n\\begin{equation}\\label{eq:CIF}\n \\lambda(t|\\mathbb{F}) = \\lim_{h \\rightarrow 0} \\mathbb{E} \\bigg[ \\frac{N(t+h) - N(t)}{h} \\bigg| \\mathcal{H}(t) \\bigg]\n\\end{equation}\n\nWe now define the Hawkes process, characterised by its intensity $\\lambda(t)$ with respect to its natural filtration, as:\n\n\\begin{equation}\\label{eq:HP}\n \\lambda(t|\\mathbb{F}) = \\mu(t) + \\int_{-\\infty}^{t} \\phi(t-\\tau) \\ {dN}(\\tau) \n\\end{equation}\nwhere $\\mu$ is the baseline intensity and $\\phi$ is a non-negative kernel function such that $||\\phi||_1 = \\int_0^{\\infty}\\phi(\\tau)d\\tau < 1$. A prominent kernel function used in finance is the exponential decay $\\phi(t) = \\alpha e^{-\\beta t}$, where the parameter $\\alpha$ represents the weight of a past event and $\\beta$ the decay rate of its influence. \n\nNow, we consider $N (t) = \\{N_t^i\\}_{i=1}^{M}$ as an M-dimensional counting process, and its analogous point process as $\\{L_t^i\\}_{i=1}^{M}$. Similarly, we define the multidimensional Hawkes process as:\n\\begin{equation}\\label{eq:MHP}\n \\lambda_{i}(t|\\mathbb{F})=\\mu_i+\\sum_{j=1}^{M}\\int_0^t \\phi_{i,j}(t-\\tau)d N_j(\\tau),\n \\end{equation}\nwhere $\\mu = \\{\\mu_i\\}_{i=1}^{M}$ is a baseline intensity vector and $\\phi(t) = \\{\\phi_{i,j}(t)\\}_{i,j=1}^{M}$ is a matrix-valued kernel that is component-wise non-negative, causal, and $L^1$-integrable \\citep{Bacry2015}.\n\nWe can also enrich Equation \\ref{eq:MHP} by associating each event with its time $t_n$, its component $\\varkappa _n$ and its mark $k_n$, for example when modelling trades performed at event times $t_n$ with different volumes $k_n$ and drawdown intensities $\\varkappa_n$ \\citep{Bacry2015}. 
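For concreteness, the conditional intensity of Equation \\ref{eq:MHP} with the exponential kernel $\\phi_{i,j}(t)=\\alpha_{i,j}e^{-\\beta_{i,j}t}$ can be evaluated directly from the event history, as in the following sketch (the event-list representation and function name are illustrative assumptions):

```python
import numpy as np

def multivariate_hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of an M-variate Hawkes process with
    exponential kernels phi_ij(u) = alpha_ij * exp(-beta_ij * u).

    events : list of (t_n, j_n) pairs of event times and components
    mu     : (M,) baseline intensities
    alpha, beta : (M, M) kernel weight and decay matrices
    Returns the (M,) vector lambda(t).
    """
    lam = np.array(mu, dtype=float)
    for t_n, j in events:
        if t_n < t:  # only strictly past events excite the intensity
            lam += alpha[:, j] * np.exp(-beta[:, j] * (t - t_n))
    return lam
```

Since each past event contributes additively, this runs in time linear in the number of past events; for long streams, the exponential kernel also admits the usual recursive evaluation between events.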
The vector intensity function of the multivariate marked Hawkes process is then defined as: \n\n\\begin{equation}\n \\lambda_{i}(t,n |\\mathbb{F})=\\mu_i+\\sum_{j=1}^{M} \\int_0^t \\phi_{i,j}(t-\\tau)\\psi_{j}(n) N_j(d\\tau \\times dn),\n \\end{equation}\nwhere $\\mu = \\{\\mu_i\\}_{i=1}^{M}$ is a vector of exogenous intensities, $\\phi(t) = \\{\\phi_{i,j}(t)\\}_{i,j=1}^{M}$ a matrix-valued kernel and $\\psi_{j}(n)$ an impact function of the marks.\n\nThe properties of the Hawkes process can be characterised thoroughly in an analytical manner, owing to the linear structure of the stochastic intensity \\citep{Alan2018, Bacry2015}. The linear properties of a Hawkes process enable linear predictions from the models, given their base intensity and kernels as parameters (the kernels are non-parametrically calibrated from the data). A Hawkes process can be suitably approximated by auto-regressive and Wiener processes, and admits a clustering representation \\citep{Bacry2015}. Simply put by \\cite{Mei2017}, \"the Hawkes process supposes that past events can temporarily raise the probability of future events, assuming that such excitation is positive, additive over the past events, and exponentially decaying with time\". In a simplified setting, these properties might be useful for modelling constrained processes, but they might not apply to real-world examples. For example, at large scales, the microscopic variables of the price formation process, as they relate to order book events, do not diffuse toward Wiener processes. Similarly, the massive cancellation of orders in the LOB might inhibit the price rather than exciting it.\n\n\\subsection{Neural Hawkes Process} \\label{ss:NHP}\n\nThe Neural Hawkes Process (NHP) was introduced to fill the gaps left by the Hawkes process's unrealistic assumptions. Building on the earlier\nformulation, the positivity constraints on the baseline intensity vector $\\mu$, the kernels $\\phi$, and the decay rate $\\delta$ limit the expressiveness of the Hawkes process. 
It fails to capture the inhibition and inherent inertia effects, which are characteristic of realistic financial markets \\citep{Mei2017}.\n\nLet $\\{t_n, \\varkappa_n, k_n\\}_{n \\in \\mathbb{N}}$ be an event stream, where $t_n$ is the time of occurrence of an event, $\\varkappa_n$ its component, and $k_n \\in \\{1, \\dots ,K\\}$ its corresponding mark. Then, the probability that the next event occurs at time $t_n$ and is of type $k_n$ is $\\mathbb{P}\\{(t_n,\\varkappa_n, k_n) \\mid \\mathcal{H}_n, (t_n - t_{n-1})\\} dt$. The associated intensity function conditioned on the past events $\\overline{h}$, for the self-exciting multivariate point process (Hawkes process) with an exponentially decaying kernel function, is:\n\\begin{equation}\n \\lambda_k(t)= \\mu_k + \\sum_{\\overline{h}: t_{\\overline{h}} < t} \\alpha_{k_{\\overline{h}},k} \\exp (-\\beta_{k_{\\overline{h}},k} (t-t_{\\overline{h}}) ).\n \\end{equation}\nThe inhibition and inherent inertia effects are introduced in the self-modulating model, in which we relax the positivity constraints on the parameters $\\alpha$ and $\\mu$. The possibly negative total activation is then passed through a non-linear transfer function (e.g.\\ the rectified linear unit (ReLU) or the softplus function) such that:\n\n\\begin{equation}\n \\lambda_k(t) = f_k(\\tilde{\\lambda}_k(t)),\n \\end{equation}\n\n\\begin{equation} \\label{eq:NHP}\n \\tilde{\\lambda}_k(t)= \\mu_k + \\sum_{\\overline{h}: t_{\\overline{h}} < t} \\alpha_{k_{\\overline{h}},k} \\exp (-\\beta_{k_{\\overline{h}},k} (t-t_{\\overline{h}}) ),\n \\end{equation}\nwhere $\\mu_k < 0$ and $\\alpha_{j,k} < 0$ allow for the inertia and inhibition effects, respectively \\citep{Mei2017}.\n \nThe summation in Equation \\ref{eq:NHP} places a restriction on $\\tilde{\\lambda}_k(t)$: past events have an independent and additive influence. 
This deviates from reality, which is characterised by the existence of complex dependence between the intensities in terms of the number of order event types and past event timings \\citep{Bacry2015}. \\cite{Mei2017} proposed the \\emph{Neural Hawkes Process} to learn and predict such complex dependencies by replacing the summation with a novel recurrent neural network. In this process, the hidden state vector $\\mathbf{h}(t)$ controls the dynamics of the time-varying event intensities and in turn depends on a vector $\\mathbf{c}(t)$ of memory cells in a continuous-time long short-term memory \\citep{Hochreiter1997}.\n \n\\subsection{Long Short-Term Memory} \\label{ss:lstm}\n\nThe central idea of the LSTM is the use of a {\\em memory cell}, which overcomes the {\\em vanishing gradient} problem \\citep{Arras2019}. The memory cell of the LSTM is a complex unit, built from simple nodes in a distinct connectivity pattern, with the novel inclusion of multiplicative nodes, as represented in Figure \\ref{fig:lstm}. A typical LSTM memory cell architecture contains a cell input activation vector ${{x}}_t$, an input gate ${{i}}_t$, a forget gate ${{f}}_t$, a cell ${{c}}_t$, an output gate ${{o}}_t$ and an output response $h_t$. The distinctive features of the LSTM, the input and forget gates, govern the information flow into and out of the cell according to the gate logic, while the output gate controls the amount of information flowing from the cell to the output ${{h}}_t$. 
A self-connected recurrent edge with fixed unit weight in the memory cell ensures that error can flow across many time steps without vanishing or exploding.\n\n\\begin{figure}[H]\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{img\/lstm}}\n\\caption{ LSTM Memory Cell \\citep{Zhu2016}.}\\label{fig:lstm}\n\\end{center}\n\\end{figure}\n\nAt each time step $t$, the recursive computation in the LSTM model proceeds according to the following equations:\n\n\\newcommand{\\operatorname{tanh}}{\\operatorname{tanh}}\n\\newcommand{\\mathring} %{\\bar}{\\mathring}\n\n\\begin{eqnarray}\n\\label{eqn:lstm}\n\\begin{aligned}\n\\!&{\\bf{i}}_t = \\sigma \\left( {{\\bf{W}}_{xi}}{{\\bf{x}}_{t}} + {{\\bf{W}}_{hi}}{\\bf{h}}_{t-1} + {{\\bf{W}}_{ci}}{\\bf{c}}_{t-1} + {\\bf{b}}_i \\right), \\\\ \n\\!&{\\bf{f}}_t = \\sigma \\left( {{\\bf{W}}_{xf}}{{\\bf{x}}_{t}} + {{\\bf{W}}_{hf}}{\\bf{h}}_{t-1} + {{\\bf{W}}_{cf}}{\\bf{c}}_{t-1} + {\\bf{b}}_f \\right), \\\\\n\\!&{\\bf{\\overline{c}}}_t = \\phi \\left( {{\\bf{W}}_{xc}}{{\\bf{x}}_{t}}\\! +\\! {{\\bf{W}}_{hc}}{\\bf{h}}_{t-1} \\!+\\! 
{\\bf{b}}_c \\right), \\\\ \n\\!&{\\bf{c}}_t = {\\bf{f}}_t\\!\\odot {\\bf{c}}_{t-1}\\!+ {\\bf{i}}_t\\!\\odot {\\bf{\\overline{c}}}_t, \\\\ \n\\!&{\\bf{o}}_t = \\sigma \\left( {{\\bf{W}}_{xo}}{{\\bf{x}}_{t}} + {{\\bf{W}}_{ho}}{\\bf{h}}_{t-1} + {{\\bf{W}}_{co}}{\\bf{c}}_{t} + {\\bf{b}}_o \\right), \\\\\n\\!&{\\bf{h}}_t = {\\bf{o}}_t \\odot \\phi \\left( {\\bf{c}}_t \\right),\n\\end{aligned}\n\\end{eqnarray}\nwhere ${\\bf{x}}_t$ is the input vector at time $t$; the activation functions are the sigmoid $\\sigma\\left(x\\right)=1\/(1+e^{-x})$ and the hyperbolic tangent $\\phi \\left(x \\right)=\\tanh(x)$; ${\\bf{W}}_{A B}$ is the weight matrix between $A$ and $B$ (e.g., ${\\bf{W}}_{x i}$ is the weight matrix from the inputs ${\\bf{x}}_{t}$ to the input gates ${\\bf{i}}_t$); ${\\bf{b}}_{B}$ denotes the bias term of $B$ with $B \\in \\{i,f,c,o\\}$; and $\\odot$ denotes point-wise multiplication of two vectors. For consistency with the standard literature, we follow the naming conventions of \\cite{Zhu2016}.\n\n\\section{Literature Review}\n\\label{sec:DHP_LR}\n\nIn this section, we briefly review three prominent streams of research into how the Hawkes process is used to design market making strategies. The boundaries between these streams are inevitably blurred and overlapping, but our aim is to capture the essence of Hawkes models in market making.\n\n\\subsection{Stochastic Optimal Control}\n\nThe classical finance literature employs a stochastic optimal control framework to determine the market maker's optimal bid and ask quotes in the presence of inventory risk, adverse selection, information asymmetry and latency \\citep{Ho1981, Sandas2001, Avellaneda2008, Penalva2015}. The market maker aims to maximise the expected utility of the profit and loss at the closing time. 
By integrating a utility framework into the microstructure of the LOB, \\cite{Avellaneda2008} were able to analytically derive optimal bid and ask quotes. In their continuous-time model, the mid-price evolves according to Brownian motion and order arrivals follow a Poisson process; the Poisson arrival assumption can be generalised to a Hawkes process \\citep{Bacry2015}. These unrealistic assumptions limit the model's ability to capture adverse selection effects, market impact and autocorrelation structures in the security being traded. Building on the seminal work of \\cite{Avellaneda2008}, numerous researchers have looked at price impact, adverse selection effects and latency, together with different objective functions \\citep{Penalva2015}. Most of these refinements rely on numerical approximations of the associated stochastic differential equations. However, problems related to model ambiguity have yet to be addressed \\citep{Nystrom2014}. \n\nThe profusion of transaction-level data at minuscule resolution provides an unprecedented opportunity to apply Hawkes processes to the study of market microstructure, with a view to deciphering the price formation mechanism, liquidity dynamics and volatility \\citep{Morariupatrichi2018}. The use of Hawkes processes to model extreme price moves and order flow is well integrated with high-frequency market making models \\citep{Nystrom2014, Bacry2015}. For example, \\cite{Nystrom2014} proposed a market making model based on model risk or uncertainty, in which inventory dynamics, fill rates and price formation are modelled using two independent Hawkes processes. 
While market making models based on Hawkes processes have been moderately successful in integrating a realistic order arrival process, they fail to incorporate the complex interaction between the self- and cross-excitation effects of the endogenous state variables that describe the LOB \\citep{Morariupatrichi2018}. For the classical ``buy low and sell high'' market making strategy, \\cite{Cartea2014} used a multivariate Hawkes process to capture the interaction between market orders and the state of the LOB. However, the constraints placed on the parametric specification of the conditional intensity by the Hawkes process limit the model's expressiveness.\n \n\\subsection{Agent-Based Models}\n\nMarket making has also been extensively investigated using agent-based models (ABMs) \\citep{Das2005, Wah2017}. Using a bottom-up approach, ABMs artificially simulate a system's aggregate behaviour as it emerges from the actions and interactions of heterogeneous autonomous agents \\citep{Samanidou2007}. ABMs of market making range from the simple ``zero intelligence'' models of mainstream economics to detailed representations of the full complexity of the order book and market microstructure. The seminal work of \\cite{Toke2011} assimilated a microscopic, dynamical statistical model of the continuous double auction \\citep{Smith2003} into the Hawkes framework. The resulting zero-intelligence market making model, populated with a liquidity provider and a liquidity taker, uses mutually exciting Hawkes processes for the order arrival process. However, the parametric specification of the exponential kernels contradicts the empirical results on the existence of a feedback loop between order arrival and the state of the LOB \\citep{Gonzalez2017, Morariupatrichi2018}. 
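To make the exponential-kernel parametric form concrete, the following is a minimal sketch of simulating a univariate Hawkes process with conditional intensity $\\lambda(t)=\\mu+\\sum_{t_i<t}\\alpha e^{-\\beta(t-t_i)}$ via Ogata's thinning algorithm; the parameter values in the test below are hypothetical, and stationarity requires the branching ratio $\\alpha/\\beta<1$.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a univariate Hawkes process with conditional intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    via Ogata's thinning algorithm (requires alpha < beta)."""
    rng = random.Random(seed)
    events = []
    t = 0.0

    def intensity(s):
        # Evaluate lambda(s) given all accepted events up to time s.
        return mu + sum(alpha * math.exp(-beta * (s - ti)) for ti in events if ti <= s)

    while True:
        # Upper bound: between events the intensity only decays, so the
        # current value (plus a safety margin of alpha) dominates it.
        lam_bar = intensity(t) + alpha
        t += rng.expovariate(lam_bar)          # propose the next candidate time
        if t >= horizon:
            break
        if rng.random() * lam_bar <= intensity(t):  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events
```

Each accepted event raises the intensity by $\\alpha$, reproducing the clustering of order arrivals that the exponential kernel encodes.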
\\cite{Mei2017} applied the Neural Hawkes process to natural language processing in order to tackle this parametric specification problem, generalising the Hawkes process by calculating the event intensities from the hidden state of an LSTM.\n\nAnother prominent ABM in market making research \\citep{Wah2017} uses empirical game-theoretic analysis to examine the effect of market making on market performance in different scenarios. In the paper, the authors use multiple background traders (e.g. traditional zero-intelligence agents) to produce realistic market microstructure. Bayesian market makers were then used to investigate welfare effects, trading gains and strategic behaviour. The simplistic adaptive trading strategies adopted by the market makers are sufficient to investigate market equilibria, but would face serious challenges in developing realistic market making strategies that also take account of market microstructure. The academic literature needs to assimilate the success of artificial intelligence in designing trading strategies that interact with close-to-reality market simulations.\n\n\\subsection{Artificial Intelligence}\n\nThe success of deep reinforcement learning in board games \\citep{Silver2017} and video games \\citep{Mnih2015} soon sent ripples through the world of finance. Drawing on the classical mathematical setup of \\cite{Avellaneda2008}, \\cite{Gueant2019} used a model-based deep actor-critic algorithm to find the optimal bid and ask quotes across a high-dimensional space of corporate bonds. \\cite{Spooner2018} reconstructed a limit order book from historical data and used it to construct a market making agent using temporal-difference reinforcement learning. The next natural progression, a multi-agent simulation of a dealer market, was developed by \\cite{Ganesh2019} to understand the behaviour of market making agents. 
The success of this model depends on a realistic simulation framework for the financial market. However, algorithmic trading research is still evolving, and is only beginning to exploit market simulators that could be used to train and test algorithms in a similar vein to the classic Atari 2600 games simulator \\citep{Mnih2015}.\n\nDespite the exemplary predictive performance of deep learning models, market making has not yet been comprehensively addressed. Returning to stochastic optimal control in market making, we notice that the dynamics of order flow are mostly described by variants of Hawkes processes, while the resulting control algorithms are associated with continuous semi-martingales, which are hard to solve recursively \\citep{Gueant2019}. In addition, the unavailability of a realistic simulation framework and of high-quality order-level data restricts researchers' ability to replicate the successes of deep learning in market making.\n\nNevertheless, deep learning models (based on CNN architectures) have done moderately well when it comes to modelling price-formation mechanisms using order book events as input \\citep{Sirignano2019, Cont2019, Tashiro2019, Tsantekidis2017}. This literature shows that empirical investigation of order-level data has consistently strengthened the model of choice. In short, the ABM, statistical modelling and artificial intelligence approaches to market making all have desirable attributes that the others lack. For example, CNN architectures lack self- and cross-excitation effects when modelling LOB dynamics \\citep{Zhang2019}. The DLSTM architecture performs hierarchical processing of complex order book events, and as such is able to efficiently capture the temporal structures of the LOB \\citep{Sagheer2019A, Sagheer2019B}. 
As market making involves placing optimal bids and asks, the DLSTM architecture, together with the Hawkes process, can efficiently model the order arrival and price-formation mechanisms by constructing a self-exciting multivariate Hawkes process and a limit order state process that are coupled and interact with each other.\n\\section{Multi-Agent Simulation Framework}\n\\label{sec:DHP_MASF}\nMulti-agent-based modelling is a bottom-up computational modelling approach intended to artificially simulate the aggregate behaviours of systems caused by the actions and interactions of heterogeneous autonomous agents in one or more environments \\citep{Samanidou2007}. In this section, we describe the important components of multi-agent-based modelling, including the environment (market simulator), the agent ecology (trading strategies) and the reward (profit and loss), used to study the behaviour of market making agents whose strategies employ Deep Hawkes processes.\n\n\\subsection{Market Simulator}\n\nThe market simulator is an essential tool for designing, evaluating and backtesting algorithmic trading strategies under various market scenarios. The market is flooded with proprietary and open-source financial simulation frameworks, but the constraints associated with licences, application interfaces and software design limit their usability \\citep{Maeda2019, Izumi2009}. In this paper, we have designed a multi-asset market simulator from scratch, which is scalable to markets of substantial size. The asynchronous event-based interface is built over a realistic market architecture, interface kernels, a matching engine and the Financial Information eXchange (FIX) protocol -- an open electronic communications protocol standard used to carry out trades on electronic exchanges. \n\n\\subsubsection{Market Architecture}\n\nThe market architecture consolidates the communication interface, market and matching engine. 
Figure \\ref{fig:DHP_MS} outlines the key components of the market architecture and their interaction from a high-level perspective. The agents connect to the market via a kernel that hosts order management details. The kernel acts as a transmission channel between agents and markets, providing high throughput and low latency for order transactions. It also throttles the number of transactions according to market requirements, which guarantees fairness between agents waiting to place orders. The markets represent an information interchange in which heterogeneous agents communicate through kernels for order transactions, processing and execution according to the matching engine for each financial instrument. The markets report order status by sending an execution report that covers the period from the start to the market reset event. This gives agents the opportunity to tweak the parameters of their trading strategies after every trading period if required.\n\n\\tikzstyle{AM} = [rectangle, rounded corners, minimum width=2cm, minimum height=1cm,text centered, draw=black]\n\n\\tikzstyle{arrow} = [thick,->,>=stealth]\n\n\\begin{figure}[H]\n\\centering \n\\resizebox{\\textwidth}{!}{\\begin{tikzpicture}[node distance=2.0cm]\n\\node (A) [AM] {Agents};\n\\node (K) [AM, right of=A, xshift=1.0cm]{Kernel};\n\\draw [arrow] (A) -- node[anchor=south] {} (K);\n\\draw [arrow] (K) -- node[anchor=north] {} (A);\n\\node (M) [AM, right of=K, xshift=1.0cm]{Markets};\n\\draw [arrow] (K) -- node[anchor=south] {} (M);\n\\draw [arrow] (M) -- node[anchor=north] {} (K);\n\\node (E) [AM, right of=M, xshift=1.0cm]{Matching Engines};\n\\draw [arrow] (M) -- node[anchor=south] {} (E);\n\\draw [arrow] (E) -- node[anchor=north] {} (M);\n\\end{tikzpicture}\n}\n\\caption{Market Simulator.}\n\n\\label{fig:DHP_MS}\n\\end{figure}\n\n\\subsubsection{Communication Interface}\nThe communication interface between agents and markets follows the FIX standard protocol to 
increase efficiency, competition and innovation. FIX is an electronic communications protocol designed for the real-time exchange of information between agents and exchanges, including agent identifiers, order identifiers, order handling, trade notifications, broadcasts and execution reports. The widespread adoption of the FIX protocol across financial markets reduces the costs associated with connectivity, regulatory compliance, liquidity searches and transactions. Using the FIX protocol, multiple agents can interact with the market via kernels, simultaneously and independently.\n\n\\subsubsection{Matching Engine}\nAt the core of the market simulator are several \\emph{matching engines} for different financial instruments. Each \\emph{matching engine} matches bids and asks to execute trades in a specific instrument. Orders are matched using a \\emph{price-time} priority mechanism: among bids, the matching algorithm gives priority to orders with the highest price; among asks, to orders with the lowest price. Ties are broken by giving preference to the order with the earliest submission time. Other well-known priority mechanisms are \\emph{pro-rata} and \\emph{price-size} \\citep{Gould2013}.\n\n\\subsection{Market Ecology}\nThe success of a financial simulation framework depends on a precise representation of market ecology that adequately mimics real market design. In the finance literature, the term ``market ecology'' \\citep{Farmer2002} refers to the composition of heterogeneous trading strategies that keep evolving over time in response to competitive market pressure. The correct mapping of financial market ecology requires the availability of agent-resolved order-level data, which enables identification of sources and events in the market. 
In the absence of agent-resolved data, trading strategies are classified using theoretical considerations, survey results, direct investigations of the trading profiles of classes of investors and proxies for HFT \\citep{Kirilenko2017, Mankad2013}. In this paper, we adapt the market ecology from several strands of the academic literature \\citep{McGroarty2019, Paulin2019, Musciotto2018, Kirilenko2017, Leal2016, Mankad2013, Toth2012}. \n \n\\subsubsection{Deep Hawkes Market Makers}\n\nDespite the existence of sophisticated market making strategies, the classical ``buy low and sell high'' strategy \\citep{Cartea2014} remains a preferred way to make money in the securities market. The success of making a profit from short-term price predictions on bids and asks hinges on placing orders at precisely the right time. The complicated order arrival process is classically modelled using the Poisson process \\citep{Chakraborti2011E}, which assumes that orders arrive randomly. However, this is contrary to empirical findings, which show that order arrival times are strongly correlated \\citep{Rambaldi2017}, exhibit self- and cross-excitation effects \\citep{Morariupatrichi2018}, and that a feedback loop exists between order arrival and the state of the LOB \\citep{Gonzalez2017}. In this article, we incorporate the feedback loop between order arrival and the state of the limit order book, together with self- and cross-excitation effects, using the DHP, to design a high-frequency market making strategy.\n\nThe order book events in the securities market are stochastically excited or impeded by patterns in past event streams. Market makers are interested in learning the distribution and structure of the order book event stream to accurately predict the next order (limit order, market order, cancellation, etc.) together with its associated labels (price, volume, etc.). 
Given a stream of order book events $\\{t_n, \\varkappa_n, k_n\\}_{n \\in \\mathbb{N}}$, the market makers compute the probability that the next event occurs at time $t_n$ and is of type $k_n$, together with its probability density conditioned on the history of events $\\mathcal{H}_n$:\n\\begin{equation}\n\\lambda(t)dt = \\mathbb{P}\\{(t_n,\\varkappa_n, k_n) \\mid \\mathcal{H}_n, (t_n - t_{n-1})\\},\n\\end{equation}\n\n\n\\begin{equation}\nf_n(t) = \\lambda(t) \\exp \\left( -\\int_{t_{n-1}}^{t} \\lambda(\\tau) d\\tau \\right).\n\\end{equation}\n\nTo predict the time and type of the next event with minimum loss, without information about the time $t_n$, we choose $ \\hat{t}_{n} = \\int_{t_{n-1}}^{\\infty} t f_n(t) dt$ and $\\hat{k}_{n} = \\argmax_{k} \\int_{t_{n-1}}^{\\infty} \\frac{\\lambda_k(t)}{\\lambda(t)} f_n(t) dt$. The associated intensity function for calculating the next order book events is the same as for the DHP, as described in Equation \\ref{eqn:hawkes_mod_a}. \n\nThe real securities market has numerous distinct order book events, which are difficult to incorporate in full in our simulation framework. For the sake of computational tractability, we decided to include only limit order buy\/sell, market order buy\/sell, and partial\/full cancellations. We also assume that the high-frequency market maker trades a single security whose market price at time $t$ is denoted by $p_t$. Unlike traditional market making strategies \\citep{Spooner2018}, the DHP market makers can trade with realistic limit and market order quantity, price jump and bid-ask spread distributions. \n\nThe Deep Hawkes market maker (DHMM) places orders at a specified depth relative to the mid-price, $\\overline{p_t}$. 
At each time step $t$, the DHMM agent's pricing mechanism is given by:\n\n\\begin{equation}\np^{a,b}_t = \\overline{p_t} + \\sum_{i=1}^{\\overline{\\delta}}i\\cdot\\mathbf{J}^{i,u}_t - \\sum_{i=1}^{\\overline{\\delta}}i\\cdot\\mathbf{J}^{i,d}_t ,\n\\end{equation}\n\nwhere $\\overline{p_t}$ is the mid-price at time $t$, $\\mathbf{J}^{i,u}$ is the number of upward jumps of $i$ ticks, and $\\mathbf{J}^{i,d}$ is the number of downward jumps of $i$ ticks between $0$ and $t$, $i = 1,\\dotsc,\\overline{\\delta}$. The intensities of $\\mathbf{J}^{i,u}$ and $\\mathbf{J}^{i,d}$ are $\\lambda_{k,u}(t)$ and $\\lambda_{k,d}(t)$, respectively,\n\n\\begin{equation}\n\\lambda_{k,i}(t) = f_{k,i}(\\bf{w}_{k,i}^{\\top}\\bf{h}(t)), \\; \\; k=u,d.\n\\label{eqn:hawkes_mod_A}\n\\end{equation}\n\nThe parameters of Equation \\ref{eqn:hawkes_mod_A} are calculated as discussed under model formulation in Section \\ref{ss:MF}.\n\nMost quantitative finance research into the high-frequency market making problem is based on the assumption of constant order size \\citep{Huang2015}. However, empirical analyses suggest that order sizes have striking statistical distributions at different timescales \\citep{Lu2018, Rambaldi2017, Mu2009}. The limit order size follows a $\\mathfrak{q}$-Gamma distribution \\citep{Mu2009}. 
The market maker's willingness to sell or buy specified quantities of the security is defined as: \n\n\\begin{equation}\n q^{a,b}_{lt}= \\left[\\Gamma (\\alpha, \\beta)\\right]^{q_{max}}_{q_{min}} \\cdot \\left(\\frac{I_t \\pm \\bar{I}}{\\bar{I}}\\right),\n \\label{Eq:G}\n\\end{equation}\nwhere $I_t$ is the inventory at time $t$, $\\bar{I}$ the maximum inventory, and $\\Gamma (\\alpha, \\beta)$ the $\\mathfrak{q}$-Gamma distribution, described as:\n\n\\begin{equation*}\n \\Gamma (\\alpha, \\beta; q)= \\frac{1}{Z} \\left(\\frac{q}{\\alpha}\\right)^{\\beta}\n \\left[1-(1-\\mathfrak{q}){\\frac{q}{\\alpha}}\\right]^{\\frac{1}{1-\\mathfrak{q}}}~,\n \\label{Eq:qGamma}\n\\end{equation*}\n\n\\begin{equation*}\n Z=\\int_{0}^{\\infty}\\left(\\frac{q}{\\alpha}\\right)^{\\beta}\n \\left[1-(1-\\mathfrak{q}){\\frac{q}{\\alpha}}\\right]^{\\frac{1}{1-\\mathfrak{q}}}{\\rm{d}}q.\n \\label{Eq:Z}\n\\end{equation*}\n\nOne striking feature of equity markets is the existence of short-lived limit orders that are modified or cancelled once every 50 milliseconds \\citep{Dahlstrm2018}. Limit order cancellation is an important characteristic of market making strategies, related to expected profit, the bid-ask spread and order queue position. We model cancellation sizes as follows:\n\n\\begin{equation}\n q^{a,b}_{ct}= \\left[P_c(q;Q)\\right]^{q_{max}}_{q_{min}} \\cdot \\left(\\frac{I_t \\pm \\bar{I}}{\\bar{I}}\\right),\n \\label{Eq:TG}\n\\end{equation}\nwhere $P_c(q;Q)$ is the truncated geometric distribution \\citep{Lu2018}. The LOB is represented as $[Q_{-i}:i = 1,\\ldots,L]$ and $[Q_{i}:i=1,\\ldots,L]$ with corresponding quantities $q_i$. The truncated geometric distribution is defined as:\n\\begin{equation*}\nP_c(q;Q)= \\mathbb{P}[q|Q] = \\frac{p_{c}^{0}(1-p^{0}_{c})^{q-1}}{1-(1-p^{0}_{c})^Q}\\mathbbm{1}_{\\{q\\leq Q\\}}.\n\\end{equation*}\n\nFinally, the market order size follows a mixture of a truncated geometric distribution and Dirac delta distributions \\citep{Lu2018}. 
The market order size that a market maker is willing to buy or sell is described as: \n\n \\begin{equation}\n q^{a,b}_{mt}= \\left[P_m(q;Q)\\right]^{q_{max}}_{q_{min}} \\cdot \\left(\\frac{I_t \\pm \\bar{I}}{\\bar{I}}\\right),\n \\label{Eq:TGD}\n \\end{equation}\n \n \\begin{align*}\n P_m(q;Q) &= \\theta_0\\frac{p_m^0(1-p_m^0)^{q-1}}{1-(1-p_m^0)^Q}\\mathbbm{1}_{\\{ q\\leq Q\\}} \\\\\n &+ \\sum_{k=1}^{\\lfloor \\frac{Q-1}{5} \\rfloor} \\theta_k\\mathbbm{1}_{\\{q=5k+1\\}}+\\theta_\\infty \\mathbbm{1}_{\\{q=Q, Q \\neq 5n+1\\}}.\n \\end{align*}\n\nThe parameters $\\{p_c^0, p_m^0, \\theta_0, \\theta_k, \\theta_\\infty\\}$ are estimated using a maximum likelihood method. The details of the estimation and calibration can be retrieved from \\cite{Lu2018}. Market orders are used to clear the unexecuted inventory at the end of trading.\n\n\\subsubsection{Probabilistic Market Makers}\nTo ensure fair competition with the DHMM and to incorporate the existing state of the art, we include a probabilistic estimate-based benchmark strategy \\citep{Das2005} adapted to our simulation framework. In this market making strategy, the agent attempts to track the fundamental price of the security by maintaining a probability density estimate of the fundamental price.\n\nThe probabilistic market makers (PMM) intend to sell or buy $q$ units of the security at time $t$ for price $p^{a,b}_t$ in a market populated with uninformed, informed and noisy informed agents. Let $f_t$ be the fundamental price of the security at time $t$, $\\xi$ the fraction of informed agents and $\\zeta$ the probability of a buy or sell order by the uninformed agents. The noisy informed agents assume that the price of the security follows a normal distribution, $p_t \\; = \\; f_t + \\mathcal{N}_s(0,\\sigma^{2}_{n})$, whereas the fundamental price evolves according to a jump process, in which order book events define the jumps and prices follow a normal distribution. 
The PMM ask and bid prices at time $t$ are then defined as:\n \n\n\\begin{equation} \n\\begin{split}\np^{a,b}_{t} & = \\frac{1}{P_{Buy, Sell}}\\sum_{f_{t}=f_{min}}^{f_{t}=p^{a,b}_{t}} \\left[ \\left( (1 - \\xi) \\zeta + \\mathsf{Pr}(\\mathcal{N}_s(0,\\sigma^{2}_{n})\\gtrless(p^{a,b}_{t} - f_{t})) \\right) f_{t} \\mathsf{Pr}(f = f_t) \\right] \\\\\n & + \\frac{1}{P_{Buy, Sell}}\\sum_{f_{t}=p_{t}^{a,b}+1}^{f_{t}=f_{max}} \\left[ \\left( (1 - \\xi) \\zeta + \\mathsf{Pr}(\\mathcal{N}_s(0,\\sigma^{2}_{n})\\lessgtr(f_{t} - p^{a,b}_{t})) \\right) f_{t} \\mathsf{Pr}(f = f_t) \\right] \n\\end{split}\n\\label{Eq:PMM_ab}\n\\end{equation}\n\n\\begin{equation*} \n\\begin{split}\nP_{Buy, Sell} & = \\sum_{f_{t}=f_{min}}^{f_{t}=p^{a,b}_{t}} \\left[ (1 - \\xi) \\zeta + \\mathsf{Pr}(\\mathcal{N}_s(0,\\sigma^{2}_{n})\\gtrless(p^{a,b}_{t} - f_{t})) \\right] \\mathsf{Pr}(f = f_t) \\\\\n & + \\sum_{f_{t}=p_{t}^{a,b}+1}^{f_{t}=f_{max}} \\left[ (1 - \\xi) \\zeta + \\mathsf{Pr}(\\mathcal{N}_s(0,\\sigma^{2}_{n})\\lessgtr(f_{t} - p^{a,b}_{t})) \\right] \\mathsf{Pr}(f = f_t) \n\\end{split}\n\\label{Eq:P_ab}\n\\end{equation*}\n\nwhere $P_{Buy, Sell}$ is the a priori probability of a buy or sell order and $\\mathcal{N}_s(0,\\sigma_{n})$ is a sample from the normal distribution. The derivation of the bid\/ask equations, their approximate solutions, the density estimate update and the algorithm are discussed in the benchmark paper \\citep{Das2005}. We add a layer of complexity to the benchmark algorithm by allowing the PMM to sample order and cancellation sizes from a normal distribution. 
The order or cancellation size at time $t$ is determined as follows:\n\n\\begin{equation}\nq_{t;o,c}^{a,b} = \\eta_{o,c}(\\bar{p}_{t}-\\bar{p}_{t-1})+\\mathcal{N}_s(0,\\sigma^{2}_{o,c}), \\; \\; 0 < \\eta_{o,c} < 1.\n\\label{Eq:P_oc}\n\\end{equation}\n\\subsubsection{Fundamental Traders}\n\nFundamental traders trade on the presumption that securities prices will eventually return to their basic, intrinsic or fundamental value. They therefore strive to buy (sell) the security when the price at time $t$ is below (above) its fundamental value. Fundamental traders are predominantly categorised as buyers or sellers, depending on their inventory at the end of a trading day. The accumulation of directional net positions is an important element in identifying buyers or sellers, since the latter acquire sizable net positions by executing numerous small-size orders, while the former execute only a couple of large orders \\citep{Kirilenko2017, Mankad2013}. Following the agent ecology literature, the fundamental traders assume that the fundamental value of a security follows a random walk:\n\n\\begin{equation}\nf_t = f_{t-1}(1+\\delta_f)(1+x_t), \\; \\; \\; \\delta_f > 0; \\; x_t \\sim \\mathcal{N}(0, \\sigma^{2}_x).\n\\end{equation}\n\nGiven the last mid-price, the limit order price of the fundamental traders is determined by:\n\n\\begin{equation}\np_t = \\bar{p}_{t-1}(1+\\delta_f)(1+z_t), \\; \\; z_t \\sim \\mathcal{N}(0, \\sigma^{2}_z).\n\\end{equation}\n\nFinally, the order size under the fundamental trader's strategy is calculated as follows:\n\n\\begin{equation}\nq_{t;f} = \\eta_{f}(f_{t}-\\bar{p}_{t-1})+\\mathcal{N}_s(0,\\sigma^{2}_{f}), \\; \\; 0 < \\eta_{f} < 1.\n\\end{equation}\n\nThe decision to buy or sell is governed by the following logic:\n\n\\begin{equation}\n \\mathcal{D}_t= \n\\begin{cases}\n Buy ,& q_{t;f} \\geq 0\\\\\n Sell, & q_{t;f} < 0\n\\end{cases}\n\\end{equation}\n\n\n\\subsubsection{Chartist Traders}\nUnlike fundamental traders, the chartist 
or technical trader's strategy depends on predicting the future price direction from past price movements. The chartist traders in our simulation framework use a simple trend-following strategy described in \\cite{Leal2016}. The price, order size and trade direction are described below:\n\n\\begin{equation}\np_t = \\bar{p}_{t-1}(1+\\delta_c)(1+z_t), \\; \\; z_t \\sim \\mathcal{N}(0, \\sigma^{2}_c)\n\\end{equation}\n\n\\begin{equation}\nq_{t;c} = \\eta_{c}(\\bar{p}_{t-1}-\\bar{p}_{t-2})+\\mathcal{N}_s(0,\\sigma^{2}_{c}), \\; \\; 0 < \\eta_{c} < 1\n\\end{equation}\n\n\\begin{equation}\n \\mathcal{D}_t= \n\\begin{cases}\n Buy ,& q_{t;c} \\geq 0\\\\\n Sell, & q_{t;c} < 0\n\\end{cases}\n\\end{equation}\n\n\\subsubsection{Noise Traders}\nIn the securities market, noise traders make trading decisions based solely on non-information. In our models, they serve as an essential proxy for randomness, no trade and no speculation. We incorporate the slightly more evolved noise or background traders from the seminal paper by \\cite{Wah2017}. A noise trader's ask or bid price is determined by its private valuation of the fundamental and its trading strategy. The fundamental value evolves according to a mean-reverting stochastic process \\citep{Wah2017}:\n\n\\begin{equation}\nf_t = \\max\\left[0, \\eta_{n}\\bar{f}+(1-\\eta_{n})f_{t-1}+y_t \\right], \\; 0 < \\eta_{n} < 1; \\; y_t \\sim \\mathcal{N}(0, \\sigma^{2}_n). \n\\end{equation}\n\nThe private valuation of a noise trader at time $t$ is given by:\n\\begin{equation}\np_v = \\max \\left[ 0, d_f\\right], \\; d_f \\sim \\mathcal{N}(f_t, \\sigma^{2}_v).\n\\end{equation}\n\nThe noise trader computes its private valuation and decides to buy or sell an order of size $q_{t;n}$ sampled from a normal distribution, $\\mathcal{N}(0, \\sigma^{2}_n)$, with equal probability of $1\/2$. 
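The noise trader's valuation dynamics can be sketched as follows; the update uses the mean-reverting form $f_t=\\max[0,\\eta_n\\bar{f}+(1-\\eta_n)f_{t-1}+y_t]$, and all parameter values in the test are hypothetical.

```python
import random

def noise_trader_step(f_prev, f_bar, eta, sigma_n, sigma_v, rng):
    """One update of a noise trader: mean-revert the fundamental value
    towards f_bar, then draw a non-negative private valuation around it
    and a trade side with equal probability 1/2."""
    y = rng.gauss(0.0, sigma_n)                         # fundamental shock y_t
    f_t = max(0.0, eta * f_bar + (1.0 - eta) * f_prev + y)
    p_v = max(0.0, rng.gauss(f_t, sigma_v))             # private valuation p_v
    side = "buy" if rng.random() < 0.5 else "sell"
    return f_t, p_v, side
```

Iterating this step produces a fundamental value that fluctuates around $\\bar{f}$, while the truncation at zero keeps prices non-negative.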
\n\n\\subsection{Reward Design}\nUnlike traditional reward design, in which an agent's performance is assessed at the end of a trading period, we calculate the agent's instantaneous reward at each timestep $t$. The reward function for agent $j$ comprises the profit \\& loss (PnL), the inventory cost (IC) and the transaction cost (TC). The PnL is the profit or loss made by the agent through buying or selling the security at the exchange. It is defined as:\n\n\\begin{equation}\nPnL_{t;j} = q^{a}_{t;j}\\left(p^{a}_{t;j} - \\bar{p}_{t} \\right)+q^{b}_{t;j}\\left(\\bar{p}_{t} - p^{b}_{t;j} \\right).\n\\end{equation} \n\nAs an agent's inventory is exposed to the volatility of the market price, we incorporate this into the reward design through an inventory cost term, given by:\n\n\\begin{equation}\nIC_{t;j} = I_{t;j}\\left(\\bar{p}_{t} - \\bar{p}_{t-1} \\right).\n\\end{equation} \n\nFinally, we add a quadratic penalty on the number of shares executed to account for the transaction cost. Specifically, the transaction cost for the quantity $q_{t}^{e}$ executed by agent $j$ up to time $t$ is:\n\n\\begin{equation}\nTC_{t;j} = \\daleth \\left( q_{t;j}^{e}\\right)^{2}, \\; 0<\\daleth<1.\n\\end{equation}\n\nThe reward function is the PnL plus the inventory cost less the transaction cost penalty:\n\n\\begin{equation}\nR_{t;j} = PnL_{t;j}+IC_{t;j}-TC_{t;j}.\n\\end{equation}\n\n\\subsection{Capital Allocation}\n\\label{sec:CA}\n\nThe amount of currency units held by an agent is represented by its capital. Before the securities market opens in the simulation framework, every heterogeneous agent is endowed with a different amount of capital drawn from a power law distribution. The agent's initial capital $c_a$ follows a power law if it is drawn from a probability distribution $p(c_a) \\propto {c_a}^{- \\alpha_a}$. 
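Initial capital draws from such a power law can be generated by inverse-transform sampling; the lower bound $c_{min}$ below is an assumption introduced for illustration (a power law density needs a finite lower cut-off), and the parameter values in the test are hypothetical.

```python
import random

def sample_capital(c_min, alpha_a, n, seed=0):
    """Draw n initial capital endowments from the power law
    p(c) proportional to c**(-alpha_a) on [c_min, infinity)
    by inverse-transform sampling."""
    rng = random.Random(seed)
    # For this Pareto-type density, the inverse CDF at u in [0, 1) is
    # c_min * (1 - u)**(-1 / (alpha_a - 1)), valid for alpha_a > 1.
    return [c_min * (1.0 - rng.random()) ** (-1.0 / (alpha_a - 1.0)) for _ in range(n)]
```

Every draw is at least $c_{min}$, and smaller values of $\\alpha_a$ produce heavier-tailed capital endowments.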
The exponent $\\alpha_a$ is referred to as the scaling parameter, and ordinarily lies between 2 and 3 \\citep{Clauset2009}.\n\n\\section{Conclusions}\n\\label{sec:DHP_C}\n\nWe have developed a market making strategy that takes account of the feedback loop between the order arrival and the state of the LOB, with self- and cross-excitation effects, while placing orders in our realistic simulation framework. The strategy was designed by integrating the self-modulating multivariate Hawkes process with DLSTM-SDAE. The data-driven approach performed poorly at predicting the next order type and its timestamp when fitted to reconstructed order book data at nanosecond resolution. When trained with millisecond-resolution data, it outperforms the NHP in prediction tasks and benchmark market making strategies in trading performance. We have demonstrated that extending the DHP to a market making setting achieves better performance when validating empirical claims about the effect of cancellation on the determinants of order size. Our modelling approach is still far from inferring causality, but it paves the way for exploring a range of diverse research avenues. The most important and immediate of these are listed below:\n\\begin{enumerate}\n\\item Apply more advanced pre-processing, architectures and learning algorithms to filter out ultra-noisy order book data at nanosecond timestamps.\n\\item Explore an intensity-free approach for Hawkes processes with learning mechanisms other than the maximum likelihood approach.\n\\item Embed the DHP within the reinforcement learning framework to learn the optimal policy.\n\\item Extend the model to the deep reinforcement learning framework, in which the agent's trading actions and the rewards from the simulator are asynchronous stochastic events characterised by marked multivariate Hawkes processes. 
\n\\item Extract the agent's trading algorithm parameters directly from the order book data, rather than from random seeds or the empirical literature.\n\\end{enumerate}\n\n\n\n\n\\section{Experiments}\n\\label{sec:DHP_E}\nIn this section, we elaborate on the data, its processing, performance\nmetrics, benchmarks, training and the parameter configuration for the\nproposed model.\n\\subsection{Data}\n\nWe use the publicly available historical Nasdaq TotalView-ITCH 5.0 data feed sample\\footnote{\\url{ftp:\/\/emi.nasdaq.com\/ITCH\/Nasdaq_ITCH\/}} to reconstruct the limit order book \\citep{Huang2011}. The reconstructed database provides tick-by-tick details of full order book depth by listing every quote and order at each price level of a specific security in Nasdaq, NYSE, and regional-listed securities on Nasdaq. The raw data feed, in binary format, has a series of sequenced messages describing system, security, order, and trade events at nanosecond resolution. The event stream at nanosecond timestamps guarantees the inclusion of stochastically missing events, which might increase the predictive accuracy of the Deep Hawkes model. Although the Neural Hawkes model \\citep{Mei2017} is expressive enough to take account of missing event streams, it makes sense to assess the performance of the Deep Hawkes model at millisecond resolution as compared to nanosecond resolution. Nasdaq uses multiple messages to indicate the current order, trading, system, and circuit breaker event status, as discussed in the technical report \\citep{Nasdaq2020}. For mathematical tractability, we have sampled high-frequency data for the hundred most liquid securities over eight days from the reconstructed order book. The extracted sample data consists of approximately a billion transaction records at nanosecond resolution, together with the possible events of limit order buy\/sell, market order buy\/sell, and partial\/full cancellations. 
The reconstructed limit order book for Apple at 11:21 on March 29th, 2018 is shown in Figure \\ref{fig:DHP_LOB}. \n\nThe reconstructed order book data is divided into training, validation and test sets. For a single security at nanosecond and millisecond resolution, the descriptive statistics are given in Table \\ref{tab:DHP_DS}. The validation set is included to optimise the model's hyper-parameters during training, thus controlling over-fitting. To avoid high variance in the data set, we only record the average value over multiple splits, denoted by $\\approx$ in Table \\ref{tab:DHP_DS}.\n \n\\begin{table}[H]\n\\caption{Descriptive statistics of the order book data}\\label{tab:DHP_DS}\n \\resizebox{\\textwidth}{!}{ \\begin{tabular}{clrrrrr}\n \\toprule\n \\toprule\n \\multirow{2}{*}{Data} & \n \\multicolumn{3}{c}{\\# Orderbook Event Token} &\n \\multicolumn{3}{c}{Stream Length} \\\\\n \\cmidrule(l){2-4} \\cmidrule(l){5-7}\n & Train & Val & Test & Min & Mean & Max \\\\\n \\midrule\n Nanosecond & $\\approx 9210480$ & 921000 & 2302620 & 26752 & 85874 & 116217 \\\\\n \\midrule\n Millisecond & $\\approx 432116$ & 42252 & 108029 & 1167 & 3670 & 5216 \\\\\n \n \\bottomrule\n \\bottomrule\n \\end{tabular}\n }\n \n\\end{table}\n\n\\subsection{Performance Metrics}\n\nDHMM agents use DHP to accurately predict order book events (buy, sell or cancel) and their timing. The accuracy of DHMM's predictions in terms of events and time is an important determinant of its trading profitability. The performance metrics are vital components for measuring the trained model's prediction reliability against observed test data. The widely used scale-dependent metrics \\citep{Hyndman2006}, root mean square error (RMSE) and classification error rate (ER), were used to evaluate the models' prediction performance. 
Following \\cite{Mei2017}, we predict each prevailing order book event stream $\\{t_n, \\varkappa_n, k_n\\}$ from the past event stream $\\mathcal{H}_n$ and evaluate the predictions using RMSE and ER.\n\nThe predominant metric for evaluating the performance of the agents\nis profit and loss at the end of the trading period. However, this approach may be misleading, as agents are tested across heterogeneous securities with varying pricing and liquidity structures. Alternatively, to efficiently capture spread, we use a normalised PnL (NPnL) with inventory and quadratic transaction costs. The NPnL is calculated every hour by dividing the total reward by the weighted average market spread. To take account of the small inventories maintained by the market maker, \\cite{Spooner2018} introduced the mean absolute position (MAP)\nmetric. An extreme score under this metric indicates a risky speculative\nstrategy, while a moderate one indicates a strategy based on a stagnant\nmarket. We record the mean and standard deviation of both NPnL and MAP.\n\n\\subsection{Benchmarks}\nWe aim to evaluate the performance of the DHMM against a modified\nprobabilistic estimate-based benchmark strategy \\citep{Das2005}. The\nmarket making strategy is an extension of the classic information-based\nmodel \\citep{Glosten1985}, in which agents use probability\nestimates of the fundamental price of securities to set bid and ask prices. The agents can sample limit, market or cancellation orders from a normal distribution, in contrast to a unit market order. We\nimplement the probabilistic estimate-based strategy on top of our\nsimulation framework as a continuous-time simulation rather than a\ndiscrete-time simulation. This provides the perfect test-bed to assess the performance of the simulation framework in extending discrete-time mechanisms to continuous time, where heterogeneous agents interact\nasynchronously. 
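The two evaluation metrics from the previous subsection can be sketched as follows. The helper names and the hourly reward, spread and inventory series are hypothetical; for simplicity the sketch uses an unweighted average spread, whereas the framework uses a weighted average.

```python
def normalised_pnl(rewards, spreads):
    """NPnL: total reward over the period divided by the average
    market spread (unweighted here for simplicity)."""
    return sum(rewards) / (sum(spreads) / len(spreads))

def mean_absolute_position(inventories):
    """MAP of Spooner et al.: mean of |inventory| over the period."""
    return sum(abs(q) for q in inventories) / len(inventories)

# Hypothetical hourly data for one agent.
rewards = [120.0, -40.0, 75.0, 10.0]
spreads = [0.02, 0.03, 0.02, 0.01]
inventories = [5, -12, 3, -2]

npnl = normalised_pnl(rewards, spreads)      # approx. 165.0 / 0.02 = 8250
map_ = mean_absolute_position(inventories)   # (5 + 12 + 3 + 2) / 4 = 5.5
```

Dividing by the spread makes the PnL comparable across securities with different tick and liquidity structures, which is exactly why the raw end-of-period PnL can mislead.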
\n\nThe DHP extends the seminal Neural Hawkes process to the deep\nlearning framework in a market making setting. We introduce the novel\narchitecture to circumvent complications related to random weight\ninitialisation, training and noisy order-level data \\citep{Sagheer2019B}. Given that the Neural Hawkes process is the kernel of our proposed deep model, we evaluate the performance of market making agents that use the earlier model in their trading strategies. For comparison purposes, we use the same architecture and training mechanism as discussed in the seminal paper \\citep{Mei2017}. The neurally self-modulating multivariate Hawkes process also acts as a benchmark model for evaluating DHP's performance in predicting order book events and their times on the reconstructed limit order book data.\n\n\\subsection{Training}\n\nThe high-frequency market making agents use DHP to learn from reconstructed limit order book data to place bids, asks or cancellations at suitable times. The learned predictions are then infused into the market making strategy to trade within the simulation framework. The agents learn the system parameters in a two-step process. Firstly, the preprocessed order book stream's n-th event, $k_n$, is embedded into a latent space before being passed into the SDAE layer together with its timing $t_n$. The deep network, consisting of a stack of multiple DAEs, generates a higher-level representation of convoluted order book event interactions. The high-level denoised representations are then fed into the DLSTM to predict the next order's type and time, and to evaluate the loss. The DLSTM-SDAE learns the deep representation in two phases: pre-training and fine-tuning. In pre-training, a greedy layer-wise structure is used to train each layer of the DAE iteratively, to form a three-layer SDAE. At the end of pre-training, a stack of three LSTMs is produced as an output of the SDAE. 
Secondly, the parameters of the DLSTM-SDAE are fine-tuned to minimise the error in predicting events and time, using stochastic gradient-descent and Adam optimisation algorithms. Early stopping based on the log-likelihood performance on the held-out validation set was used to avoid overfitting. We also add isotropic Gaussian noise to improve generalisation in the events' classification performance. Table \\ref{tab:DHP_PC} lists the hyper-parameters tuned by validation set performance for the DLSTM-SDAE network architecture. The other non-LSTM parameters include $s_n \\in \\mathbb{R}$ and $\\mathbf{W}_n \\in \\mathbb{R}^D$, as discussed in Section \\ref{sec:DHP_DHP}. The market making agent using NHP uses a single-layer LSTM with the number of hidden nodes chosen from a small set $\\left(64, 128, 256, 512, 1024 \\right)$, as described in the base paper \\citep{Mei2017}. The hyperparameters are optimised based on validation set performance.\n\nThe high-frequency market making agents are trained in the simulation framework for 1000 trading days. Each trading day starts at 9:30 and lasts until 16:00. Two hundred trading days were used to fine-tune the hyper-parameters using random search. We acknowledge the existence of\ndifferences between the real data and simulated data, but firmly believe\nthat they are generated from the same mechanisms \u2013 a claim substantiated by agent-based models that reproduce stylised facts similar to empirical findings. Taking the above into consideration, we train market making agents using DHP and NHP for 100 trading days five times. The aim of this exercise is to synchronise the agents' learning over different sets of data generated from the same stochastic process. We then test the performance of the agents against the benchmark for 300 trading days. 
To ensure fair competition with the market making agents, we use a heterogeneous market ecology consisting of fundamental, chartist and random agents. The important parameters pertaining to the trading agents in the simulation framework are given in Table \\ref{tab:DHP_PC}. \n\n\\begin{table}[H]\n \\centering\n \\caption{Parameter configuration.}\\label{tab:DHP_PC}\n \\resizebox{\\textwidth}{!}{ \\begin{tabular}{lll}\n \\toprule\\toprule\n Description & Parameter\/Hyperparameter & Value \\\\\n \\midrule\n Total sessions size & $\\mathsf{T}_{total}$ & 1300 days \\\\\n Training sample size & $\\mathsf{T}_{train}$ & 1000 days \\\\\n Testing sample size & $\\mathsf{T}_{test}$ & 300 days \\\\\n \\midrule\n Total number of traders &$a_t$& $ \\approx 10^4$ \\\\\n Number of market makers &$a_m$& 3 \\\\\n Number of fundamental traders &$a_f$& 3000 \\\\\n Number of chartist traders &$a_c$& 6000 \\\\\n Number of noise traders &$a_n$& 4000 \\\\\n \n \\midrule\n Initial capital & $\\mathsf{c}_{a}$ & $ \\sim p(2.3) \\; \\times \\; 10^4$ \\\\\n Min inventory & $\\min \\Inventory$ & $ \\sim - p(2.3) \\; \\times \\; 10^5$ \\\\\n Max inventory & $\\max \\Inventory$ & $ \\sim p(2.3) \\; \\times \\;10^5$ \\\\\n \\midrule\n Scale factor & $s_n $ & 1 \\\\\n LSTM weights & $\\mathbf{W}_n$ & $\\sim \\mathcal{N}(0,0.01) $\\\\\n DHMM limit order parameters & $\\Gamma (\\alpha, \\beta)$ & $\\Gamma (0.07, 1.52)$\\\\\n DHMM limit order cancellation parameters &$P_c(q;Q)$ & 0.60\\\\\n \n \n Fraction of informed agents & $\\xi$ & 0.33\\\\\n Probability of buy\/sell by uninformed agents &$\\zeta$ & 0.33\\\\\n PMM order cancellation size parameter & $\\eta_{o,c}$ & 0.04\\\\\n Fundamental order size parameter & $\\eta_{f}$ & 0.04\\\\\n Chartist order size parameter & $\\eta_{c}$ & 0.04\\\\\n Noise price parameter &$\\eta_{u}$& 0.04 \\\\\n Transaction cost penalty & $\\daleth$& 0.06\\\\\n \n \n \\midrule \n Number of layers (DAE\/LSTM) & $N_{L}$ & 3\/3\\\\\n Number of hidden units per layer & 
$N_{H}$ & 1024\\\\\n Learning rate for pretraining & $\\alpha_{LPT}$ &0.05 \\\\\n Learning rate for fine-tuning &$\\alpha_{LFT}$& 0.10\\\\\n Number of pretraining epochs & $\\epsilon_{PT}$&100\\\\\n Additive isotropic Gaussian noise & $\\eta_{noise}$& $\\sim \\mathcal{N}(0,0.50)$ \\\\\n \n \\bottomrule\\bottomrule\n \n \\end{tabular}}\n\\end{table}\n\n\\section{Results}\n\\label{sec:DHP_R}\nIn this section, we investigate the performance of market making agents\nin predicting types of order book events and their timestamps. Having\nlearned which orders to send and at what time, we evaluate the agent's\ntrading performance in the simulation framework. We tweak order\ncancellations to examine the impact on the agent's profitability and the\nmicrostructure of the order book. We then check the robustness of the\nmodel by performing sensitivity analysis. Finally, we validate our\nsimulation framework by reproducing stylised facts with our simulated\ndata.\n\n\\subsection{Predictive Performance}\nGiven a stream of order book events $\\{t_n, \\varkappa_n, k_n\\}_{n \\in \\mathbb{N}}$, the market makers seek to predict the next event type and its time. We evaluate the predictive performance of $\\hat{t}_{n}$ and $\\hat{k}_{n}$ using RMSE and ER, respectively. To avoid getting entangled in the problem of overfitting, we divide the training set into sub-training and validation sets. We train the DHP and NHP models on the sub-training set and choose hyperparameters based on validation set performance. Following the training procedure of \\cite{Mei2017}, we report the predictive performance of the market making agents on reconstructed order book data at nanosecond resolution in Figure \\ref{fig:PTEN}. As is evident from the figure, neither model is invariably better at predicting events or, in particular, time. It seems that both models fail to explicitly address the complex dynamics of asynchronous order book data at nanosecond resolution. 
The event dynamics at nanosecond timestamps need much more sophisticated models to filter noise and to model event interaction and non-linearity.\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/IET_N}\n \\caption{Time Prediction Graph}\\label{fig:IETN}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/RMSE_N}\n \\caption{Time Prediction Error}\\label{fig:RMSEN}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/ER_N}\n \\caption{Event Prediction Error}\\label{fig:ERN}\n \\end{subfigure}\n \n \\caption{Performance evaluation of high-frequency market making agents in predicting order book events and time at nanosecond resolution. Error bars denote the standard deviation over 10 experiments using different train-val-test samples. }\\label{fig:PTEN}\n \\end{figure} \n\n\nIn Figure \\ref{fig:PTEM}, we evaluate the predictive performance of the high-frequency market making agents using millisecond data sampled from the reconstructed order book. Compared to the earlier results, the DHP model's performance has improved drastically. In addition, its time prediction is consistently better than that of the NHP. The deep model, with its novel architecture and pre-training module, is sophisticated enough to capture excitation and feedback effects in the order book. The results also substantiate the claim of the NHP model regarding stochastically missing data. The order book events at millisecond timestamps theoretically omit the events at the finer time resolution, but the latter are generated by a different mechanism. This is the reason why the results at the two time resolutions are completely different. 
The Deep Hawkes model presented here is expressive enough to learn the true predictive distribution with stochastically missing events, but only if they are generated from the same mechanism. By integrating the predictive capabilities into the market making strategies, the agents trade in a simulation framework populated with heterogeneous trading strategies. In the next section, we explore the agents' trading performance.\n \n \\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/IET_M}\n \\caption{Time Prediction Graph}\\label{fig:IETM}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/RMSE_M}\n \\caption{Time Prediction Error}\\label{fig:RMSEM}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/ER_M}\n \\caption{Event Prediction Error}\\label{fig:ERM}\n \\end{subfigure}\n \n \\caption{Performance evaluation of high-frequency market making agents in predicting order book events and time at millisecond resolution. Error bars denote the standard deviation over 10 experiments using different train-val-test samples. }\\label{fig:PTEM}\n \n\\end{figure} \n\n\\subsection{Trading Performance}\n\nThe trading performance of the high-frequency market making agents is summarised in Table \\ref{tab:DHP_AP}. Our simulation framework evaluates various models, including the Neural Hawkes model, the benchmark probabilistic estimate model, and our proposed Deep Hawkes model. According to the performance metrics specified in Table \\ref{tab:DHP_AP}, our proposed agent using DHP (DHMM) consistently outperforms the PMM and NHMM, which suggests that the proposed trading agent benefits by learning the robust microstructure of order book data. 
Further, the DHP lets the agent capture the self- and cross-excitation effects of the limit order book together with a feedback loop, to place the right order at the right time. We discuss the performance of each agent in detail below.\n\nAs shown in Figure \\ref{fig:PTEM}, the DHMM is better at predicting the type of order and its time compared to the NHMM. This is an important element\nof the market making strategy. The novel DLSTM-SDAE architecture allows the agents to learn the hidden representation of the noisy order book data, and therefore to place orders that add to their profitability. Furthermore, the DHMM exhibits a faster convergence rate compared to the\nNHMM, as shown in Figure \\ref{fig:DHP_TD}.\n\nThe baseline strategy used by the PMM maintains a probability density\nestimated on the basis of the fundamental price. The fundamental price\nevolves according to a jump process, following a normal distribution.\nThis works in favour of the PMM, which makes more profit at the\nbeginning, as verified in Figure \\ref{fig:DHP_TD}. Over time, however, the DHMM and NHMM learn the art of placing the right order with the right\nintensity. Afterwards, the profitability of the PMM falls dramatically.\nThe PMM might perform better if it took a long position over several\ndays, rather than trading intraday. It would be interesting to check the\nperformance of the PMM agents with different probability density\nestimates conditioned on the joint distribution of microstructure features. 
\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/RTR}\n \\caption{Train}\\label{fig:DHP_RTR}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/RTS}\n \\caption{Test}\\label{fig:DHP_RTS}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/RDH}\n \\caption{Random day}\\label{fig:DHP_RDH}\n \\end{subfigure}\n \n \\caption{Performance of trading agents with DHMM, NHMM and PMM during training, testing and a random day. }\\label{fig:DHP_TD}\n \\end{figure} \n \n \\begin{table}[H]\n \\centering\n\\caption{Mean and standard deviation of the daily normalised PnL (NPnL) and mean absolute positions (MAP) for different market makers }\\label{tab:DHP_AP}\n \\resizebox{\\textwidth}{!}{ \\begin{tabular}{cllllll}\n \\toprule\n \\toprule\n \\multirow{2}{*}{Agents} & \n \\multicolumn{2}{c}{NPnL [$10^5$]} &\n \\multicolumn{2}{c}{MAP [unit]} \\\\\n \\cmidrule(l){2-3} \\cmidrule(l){4-5}\n & Mean & Std.Dev. & Mean & Std.Dev. \\\\\n \\midrule\n DHMM & 2.1 & $\\pm 18.26$ & 17 & $\\pm 20$ \\\\\n \\midrule\n NHMM & 1.1 & $\\pm 4.09$ & 4 & $\\pm 6$ \\\\\n \\midrule\n PMM & -1.6 & $\\pm 79.55$ & 41 & $\\pm 74$ \\\\\n \\bottomrule\n \\bottomrule\n \\end{tabular}\n} \n\\end{table}\n \n\\subsection{Order Cancellation Effect} \n\nMassive numbers of order cancellations in a short period are a\ndistinctive attribute of the equity market. For example, at Nasdaq\nNordic, order cancellations typically account for 40\\% of submitted limit\norders on a particular trading day. Market making strategies using limit\norder cancellations contribute to the market maker's profit, bid-ask\nspread and order queue position \\citep{Dahlstrm2018}. We study the\ndistribution of profit, bid-ask spread and order queue position by\nremoving the cancellation mechanism in the base simulation\nframework. 
We estimate the intrinsic value of the order relative to the\nqueue position by applying the model developed by \\cite{Moallemi2017}. The agent's order queue position provides an estimate of the number\nof orders ahead of the agent's order at a particular price. A position at\nthe front of the queue guarantees prompt execution, a higher fill rate, low\nlatency and lower adverse selection cost. We estimate the queue position\nin the order book by reconstructing the limit order book from the\nsimulated data feed.\n\nLet us suppose that the high-frequency market making agent places a limit order at time $t\\!=0$ seeking the best ask price $p_a$, which gets filled or cancelled at time $\\tau$. Filling the order pays the agent $p_a$, while cancellations pay nothing. We describe the value of the order perceived by agents relative to the queue position as:\n\n\\begin{equation}\n\\begin{split}\nV_t &= \\mathbb{E}[ (p_a - p_t)\\mathbb{I}_{\\mathsf{FILL}} - (p_{\\tau} - p_t)\\mathbb{I}_{\\mathsf{FILL}} \\mid \\mathcal{F}_t ] \\\\\n&= \\mathsf{FP}_t (\\mathsf{LSP}_t - \\mathsf{ASC}_t) \\\\\n&= \\text{fill probability} \\left( \\text{liquidity spread premium} - \\text{adverse selection cost} \\right)\n\\end{split}\n\\label{EQ:COE}\n\\end{equation} \n\nwhere \n\\begin{eqnarray*}\n\\begin{aligned}\n\\!& \\mathsf{FP}_t \\triangleq \\mathbb{P} \\left( \\mathsf{FILL} \\mid \\mathcal{F}_t \\right), \\\\\n\\!& \\mathsf{LSP}_t \\triangleq \\left( p_a - p_t \\right), \\\\\n\\!& \\mathsf{ASC}_t \\triangleq \\mathbb{E} \\left[ (p_{\\tau} - p_t) \\mid \\mathcal{F}_t, \\mathsf{FILL} \\right]\n\\end{aligned}\n\\end{eqnarray*} \n\nTo empirically calibrate the model (Equation \\ref{EQ:COE}), we take the same parameters used by \\cite{Moallemi2017}. 
These are the exponential order size distribution, trade arrival rate (TAR), average trade size (ATS), trade size in the standard lot (TSS), cancellation arrival rate (CAR), average cancellation size (ACS), price jump arrival rate (PJR), average jump size (AJS), market impact (MI) and average queue size (AQS). The trade size is identified as the limit order or market order, contrary to the aggressive market orders described by \\cite{Moallemi2017}. Table \\ref{tab:OCE} specifies the estimated parameters for simulated data with no cancellation mechanisms (Simulated NC), with cancellation mechanisms (Simulated WC) and an average (Simulated AV) over 21 days. The paper itself provides more detail regarding the parameters, calibration and model fitting \\citep{Moallemi2017}.\n\n\\begin{table}[H]\n\\caption{Estimated parameters for simulated order book data.}\\label{tab:OCE}\n\\resizebox{\\textwidth}{!}{ \\begin{tabular}{cccccccccc}\n \\toprule\n \\toprule\n \n Data & TAR (\/min)& ATS (shares) & TSS (shares) & CAR (\/min) & ACS (shares) & PJR (\/min) & AJS (ticks) & MI & AQS (shares) \\\\ [0.5ex]\n \\midrule\n Simulated NC & 2.53 & 3467 &7664 & 92.72 & 5061 & 1.26 & 0.32 & 10.91 & 16416 \\\\\n Simulated AV & 2.04 & 4037 & 6901 & 82.21 & 4022 & 1.01 & 0.46 & 8.76 & 23554 \\\\\n \\midrule\n Simulated WC & 5.26 & 6329 & 8083 & 43.71 & 1107 & 3.75 & 2.06 & 11.92 & 40191\\\\\n Simulated AV & 4.07 & 5463 & 9147 & 40.31 & 1560 & 3.01 & 2.06 & 13.02 & 46815 \\\\\n \\bottomrule\n \\bottomrule\n \\end{tabular}}\n \n\\end{table}\n\nTable \\ref{tab:OCE} shows that an absence of cancellation mechanisms at the high-frequency market maker's end leads to a drastic increase in the average queue size. The decrease in the cancellation rate increases the queue size, which affects the high-frequency market maker's profitability, bid-ask spread and market impact. 
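The decomposition in Equation \ref{EQ:COE} can be sketched numerically as follows. The exponential decay of the fill probability with queue position, the `scale` constant, and the tick-denominated numbers are purely illustrative assumptions; the calibrated model estimates the fill probability from the fitted parameters instead.

```python
import math

def order_value(fill_prob, lsp, asc):
    """V_t = FP_t * (LSP_t - ASC_t); an order that is never filled
    is worth zero, since the fill probability is zero."""
    return fill_prob * (lsp - asc)

def fill_probability(queue_position, scale=5_000.0):
    """Illustrative only: fill probability decaying exponentially in
    the number of shares ahead of the order in the queue."""
    return math.exp(-queue_position / scale)

# Value profile across queue positions for a fixed liquidity spread
# premium and adverse selection cost (hypothetical numbers, in ticks).
lsp, asc = 1.0, 0.4
values = [order_value(fill_probability(q), lsp, asc)
          for q in (0, 5_000, 20_000, 50_000)]
# The deeper the order sits in the queue, the lower its fill
# probability and hence its value, flattening towards zero.
```

This reproduces the qualitative behaviour discussed above: longer queues depress the execution probability, and for extremely long queues the order value is essentially flat at zero.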
The value of the order as a function of the queue position, bid-ask spread, and the agent's profit efficiently captures the claim illustrated by the data in Figure \\ref{fig:OCE}. The wider bid-ask spread when agents are unable to cancel the limit order in Figure \\ref{fig:BAD_A} has a negative effect on profitability (Figure \\ref{fig:RDH_A}) as compared to scenarios with cancellations (Figures \\ref{fig:BAD_B} and \\ref{fig:RDH_B}). As stated in the model in Equation \\ref{EQ:COE}, the value of an order that is not filled is zero. Figure \\ref{fig:VO_A} shows that an increase in queue length decreases the probability of execution, and therefore the value. The value of the order becomes flat when the queue length is extremely large. Our results are consistent with the findings of \\cite{Dahlstrm2018} on the determinants of order cancellations. It is difficult to infer causal relationships between cancellations and market microstructure variables based on artificially created scenarios, but this approach nonetheless paves the way for future investigation using order-level data.\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/BAD_B}\n \\caption{BAD WC}\\label{fig:BAD_B}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/RDH_B}\n \\caption{DHMMP WC}\\label{fig:RDH_B}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/VO_B}\n \\caption{VOPQ WC}\\label{fig:VO_B}\n \\end{subfigure}\n \n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/BAD_A}\n \\caption{BAD NC}\\label{fig:BAD_A}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/RDH_A}\n \\caption{DHMMP NC }\\label{fig:RDH_A}\n \\end{subfigure}\n 
\\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/VO_A}\n \\caption{VOPQ NC}\\label{fig:VO_A}\n \\end{subfigure}\n \n \\caption{Effect of limit order cancellations on the market. The top\nrow (marked WC) represents the distribution when the market maker's\nagents can cancel the limit orders. The bottom row (marked NC)\nrepresents a situation with no cancellations. BAD is the intraday bid-ask\ndistribution, DHMMP is the profit distribution of the DHMM over the trading\nday, and VOPQ is the value of the orders relative to queue position. The\naverage queue length on a particular trading day is represented by a\nblack triangle.}\\label{fig:OCE}\n \\end{figure} \n \n\n\\subsection{Sensitivity Analysis}\nWe perform sensitivity analysis on the proposed model by changing the\nhyperparameters \u2013 specifically, the number of layers, the number of\nhidden units per layer and the noise level. The aim is to confirm the\nrobustness of the model rather than to overfit parameters. Table \\ref{tab:SA} shows the performance of our proposed model as the number of layers, hidden units and noise level increase. The model is robust to the noise level at the optimal choice of number of layers and hidden units. \n\n\\begin{table}[H]\n\\caption{Sensitivity to the number of hidden units and Gaussian noise.\nThe DLSTM-SDAE used in our model has 3 DAE layers and 3 LSTM layers. In performing sensitivity analysis, we fix the 3 LSTM layers and\nchange only the DAE layer. 
}\\label{tab:SA}\n \\resizebox{\\textwidth}{!}{ \\begin{tabular}{llll}\n \\toprule\n \\toprule\n & \\# DAE layers: 1 & \\# DAE layers: 2 & \\# DAE layers: 3 \\\\\n \\midrule \n Number of hidden units per layer & $\\left(64, 128, 256, 512, 1024 \\right)$ & $\\left(64, 128, 256, 512, 1024 \\right)$ & $\\left(64, 128, 256, 512, 1024 \\right)$ \\\\\n \n \\midrule\n RMSE & $\\left( 4.6, 4.4, 4.1, 4.1, 4.0 \\right)$ & $\\left( 3.5, 3.0, 3.0, 2.5, 2.5 \\right)$ & $\\left( 2.0, 2.0, 2.0, 1.5, 1.5 \\right)$ \\\\\n \\midrule\n ER & $\\left( 55.7, 55.7, 55.7, 50.4, 54.0 \\right)$ & $\\left( 47.2, 44.4, 44.1, 44.1, 44.1 \\right)$ & $\\left( 37.5, 37.0, 35.0, 35.0, 34.0 \\right)$ \\\\\n \\midrule\n Gaussian noise & $\\left(0.10, 0.20, 0.30, 0.40, 0.50 \\right)$ & $\\left(0.10, 0.20, 0.30, 0.40, 0.50 \\right)$& $\\left(0.10, 0.20, 0.30, 0.40, 0.50 \\right)$\\\\\n \\midrule\n RMSE & $\\left(7.2,5.9, 4.3, 5.4, 6.5 \\right)$& $\\left(4.1,3.9, 3.0, 4.0, 4.6 \\right)$&$\\left(2.2,2.2, 2.0, 2.1, 2.1 \\right)$ \\\\\n \\midrule\n ER & $\\left(60.2,55.9, 50.1, 60.0, 63.5 \\right)$ & $\\left(55.6,52.2, 49.1, 55.3, 67.5 \\right)$& $\\left(37.2,37.9, 37.3, 37.1, 37.2 \\right)$ \\\\\n \n \\bottomrule\n \\bottomrule\n \\end{tabular}}\n \n\\end{table}\n\n\n\\subsection{Validation}\n\nThe validation of the trading simulation framework is performed by\nmeasuring how successfully the simulation's output exhibits persistent\nempirical patterns in the order book data. Such empirical patterns are\ncommon across various markets, instruments and even timescales,\nand are often classified as \"stylised facts\" \\citep{Cont2001}. We present a\nnominal set of stylised facts, reproduced from the empirical analysis of\nsimulated order book data, as shown in Figure \\ref{fig:DHP_SF}.\n\nLet $p(t)$ be the price of a security at time $t$. Given a timescale $\\Delta t$, we define the log return at $\\Delta t $ as $ r (t, \\Delta t) = \\ln p(t+\\Delta t) - \\ln p(t)$. 
The cumulative distribution function (CDF) of returns is given by $F_{\\Delta t}(x) = \\mathbb{P}[r(t, \\Delta t) \\leq x]$. Its derivative gives the probability density function (PDF) $F^{'}_{\\Delta t}=f_{\\Delta t}$, empirically estimated for normalised simulated returns, as illustrated in Figure \\ref{fig:DHP_N}. The cumulative distribution of returns follows a power law $F_{\\Delta t} \\sim \\left|r\\right|^{-\\alpha }$ with $2<\\alpha<5$. In Figure \\ref{fig:DHP_NCDF}, the positive tail $F^{+}_{\\Delta t}(x) = \\mathbb{P}[r(t,\\Delta t) \\geq x]$ and the negative tail $F^{-}_{\\Delta t}(x) = \\mathbb{P}[r(t,\\Delta t) \\leq x]$ of the cumulative distribution, shown as yellow circles and green squares, exhibit a power law, as denoted by the red line with $\\alpha=2.8$. In Figure \\ref{fig:DHP_SFAC}, we show the absence of autocorrelation of price changes, defined as $\\rho (\\tau) = Corr \\left( r (t, \\Delta t ), r (t + \\tau, \\Delta t ) \\right) $. The autocorrelation function (ACF) decays drastically to zero within a few lags. \n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/SF_Normal}\n \\caption{PDF}\\label{fig:DHP_N}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/SF_NCDF}\n \\caption{CDF}\\label{fig:DHP_NCDF}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{img\/SF_AC}\n \\caption{ACF}\\label{fig:DHP_SFAC}\n \\end{subfigure}\n \n \\caption{Stylised facts reproduced from simulated order book data. All graphs were generated on the basis of $\\Delta t = 1$ minute. }\\label{fig:DHP_SF}\n \\end{figure} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}