\section*{Introduction}\nIn recent years unstable periodic orbits have\nbeen shown to be an effective tool in the description of \ndeterministic dynamical systems of low intrinsic dimension\cite{CHAOS92}, \nin diagnosing deterministic chaos in noisy biological \nsystems\cite{Moss94}, and in many other applications. \nThe theory has been successfully applied to low dimensional ordinary \ndifferential equations (deterministic chaos) and linear partial differential \nequations (semiclassical quantization). It is an open question whether\n the theory has anything to say about nonlinear partial differential \nequations (hydrodynamics, field theory). \nIn this paper we show that \n the periodic orbit theory can be used to describe \nspatially extended systems by applying it to the Kuramoto-Sivashinsky\nequation\cite{Kur,Siv}.\n\nIn what follows we shall refer to a periodic solution \n as a ``cycle'', and to the closure of the union of all \nperiodic solutions as the ``invariant set''.\nPeriodic solutions are important because they form the {\em skeleton}\nof the invariant set\cite{cycprl,AACI}, with\ncycles ordered {\em hierarchically}; short cycles give good\napproximations to the invariant set, longer cycles refinements.\nErrors due to neglecting long cycles can be bounded, and for nice hyperbolic\nsystems they fall off exponentially or even superexponentially \nwith the cutoff cycle length\cite{Rugh92,CRR93}.\nFurthermore, cycles are {\em structurally robust}: for smooth flows,\neigenvalues of short cycles vary slowly with smooth parameter changes;\nshort cycles can be accurately extracted from experimental\nor numerical data; and\nglobal averages (such as \ncorrelation exponents, escape rates \nand other ``thermodynamic'' averages) can be 
efficiently computed from short\ncycles by means of {\em cycle expansions}.\n\nWhile the role of periodic solutions in elucidating the asymptotics of\nordinary differential equations was already appreciated by\nPoincar\'e\cite{poincare}, allegedly Hopf\cite{Hopf42}\nand, more demonstrably, Spiegel and\ncollaborators\cite{MS66,BMS71,EAS87} have argued that the asymptotics of\npartial differential equations should also\nbe described in terms of recurrent spatiotemporal patterns.\nPictorially, dynamics drives a given spatially extended system through a\nrepertoire of unstable patterns; as we watch \na given ``turbulent'' system evolve, \nevery so often we catch a glimpse of a familiar pattern. \nFor any finite spatial resolution,\nthe system follows approximately for a finite time \na pattern belonging to a finite \nalphabet of admissible patterns, and the long term dynamics can be thought\nof as a walk through the space of such patterns,\njust as chaotic dynamics with a low dimensional\nattractor can be thought of as a succession of nearly periodic (but\nunstable) motions.\n\n\section{Kuramoto-Sivashinsky system}\n\nWe offer here a modest implementation of the above program on\na prototype spatially extended dynamical system defined by\nthe Kuramoto-Sivashinsky equation\cite{Kur,Siv}\n\begin{equation}\nu_t=(u^2)_x-u_{xx}-\nu u_{xxxx} \n\label{ks}\n\end{equation}\nwhich arises as a model amplitude equation for interfacial instabilities\nin a variety of contexts; see e.g. \refref{KNS90}.\nHere $t \geq 0$ is the time, $x \in [0,2\pi]$ is the space coordinate,\nand $\nu$ is a fourth-order ``viscosity'' damping parameter.\nThe subscripts $x$ and $t$ denote the partial derivatives with respect to \n$x$ and $t$. 
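Equation \refeq{ks} can be evaluated pseudospectrally, taking the derivatives in Fourier space and the nonlinear product on the grid. A minimal sketch (our own illustration, with a hypothetical function name; this is not the integration routine used for the results reported below):

```python
import numpy as np

def ks_rhs_spectral(u, nu):
    """Pseudospectral evaluation of u_t = (u^2)_x - u_xx - nu u_xxxx
    for a real 2*pi-periodic u sampled on a uniform grid: derivatives
    are taken in Fourier space, the product u^2 on the grid."""
    n = len(u)
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers
    u_hat = np.fft.fft(u)
    u2_hat = np.fft.fft(u * u)
    # (u^2)_x -> ik * FFT(u^2); -u_xx - nu u_xxxx -> (k^2 - nu k^4) u_hat
    ut_hat = 1j * k * u2_hat + (k**2 - nu * k**4) * u_hat
    return np.real(np.fft.ifft(ut_hat))

# consistency check on a single harmonic u = sin(3x):
# (u^2)_x = 3 sin 6x, and -u_xx - nu u_xxxx = (9 - 81 nu) sin 3x
nu = 0.029910
x = 2 * np.pi * np.arange(128) / 128
rhs = ks_rhs_spectral(np.sin(3 * x), nu)
```

The check reproduces the exact right-hand side for a single harmonic to spectral accuracy.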
\nWe take the Kuramoto-Sivashinsky system because it is one of the\nsimplest physically interesting spatially extended nonlinear systems,\nbut in the present context the \ninterpretation of the equation, or the equation itself, is not the most \nimportant ingredient;\nthe approach should be applicable to a wide class of \nspatially extended nonlinear systems. The salient feature of such \npartial differential equations is that\nfor any finite value of $\nu$ their asymptotics is in principle \ndescribable by a\n{\em finite} set of ``inertial manifold''\n ordinary differential equations\cite{Foias88}.\n\nThe program of studying unstable solutions in this context\nwas initiated by Goren, Eckmann and Procaccia\cite{GEP},\nwho used a 2-unstable-modes truncation of \nthe Kuramoto-Sivashinsky equation\nto study the dynamics connecting coexisting unstable \n{\em temporally stationary} solutions.\nWe shall study here\nunstable {\em spatiotemporally periodic} solutions of the {\em full}\nKuramoto-Sivashinsky system.\nOur main result is that in the limit of weak turbulence or \n``spatiotemporal chaos'', we can determine hierarchically and exhaustively\ncycles of longer and longer periods, and apply these data \nto the evaluation of global averages. \n\nThe function $u(x,t)=u(x+2\pi,t)$ is assumed periodic on \nthe $x \in [0,2\pi]$ interval. \nAs $u(x,t)$ is periodic, the standard\nstrategy is to expand it in a discrete spatial Fourier series: \n\begin{equation}\n u(x,t)= \sum_{k=-\infty}^{+ \infty} b_k(t) \e^{\i k x}\n\, . \n\label{fseries}\n\end{equation}\nSince $u(x,t)$ is real, $b_k=b_{-k}^*$.\nSubstituting (\ref{fseries}) into (\ref{ks}) yields \nthe infinite ladder of evolution equations for the Fourier coefficients $b_k$: \n\begin{equation}\n\dot{b}_k=(k^2-\nu k^4)b_k +\i k \sum_{m=-\infty}^{\infty}\nb_m b_{k-m}\n\,. 
\n\\label{expanfull}\n\\end{equation}\nAs $\\dot{b}_0=0$, the average (the mean drift) of the solution is \nan integral of motion. In what follows we shall assume that this average is \nzero, $\\int \\d x \\, u(x,t) =0$. \n\nThe coefficients $b_k$ are in general complex functions of time. \nWe can simplify the system (\\ref{expanfull}) further by assuming \nthat $b_k$ are pure imaginary, $b_k= \\i a_k$, \nwhere $a_k$ are real. \nAs we shall see below, this picks out the\nsubspace of odd solutions $u(x,t)=-u(-x,t)$, with \nthe evolution equations \n\\begin{equation}\n\\dot{a}_k=(k^2-\\nu k^4)a_k - k \\sum_{m=-\\infty}^{\\infty} a_m a_{k-m} \n\\,.\n\\label{expan}\n\\end{equation}\nWe shall determine \nthe periodic solutions in the space of Fourier coefficients,\nand then reconstitute from them the unstable spatiotemporally \nperiodic solutions of\n\\refeq{ks}. \n\nThe trivial solution $u(x,t)=0$ is a fixed point of (\\ref{ks}). From \n(\\ref{expan}) it follows that the $|k|<1\/ \\sqrt{\\nu}$ \nlong wavelength modes of this fixed point \nare linearly unstable, and the \n$|k|>1\/ \\sqrt{\\nu}$ short wavelength modes are stable. \nFor $\\nu > 1$, $u(x,t)=0$ is the globally attractive stable fixed point;\nstarting with $\\nu =1$ the solutions go through a rich sequence of\nbifurcations, studied e.g. in \\refref{KNS90}. \nDetailed knowledge of the parameter dependence of bifurcations sequences\nis not needed for our\npurposes; we shall take $\\sqrt{\\nu}$ sufficiently small so that the\ndynamics can be spatiotemporally chaotic, but not so small that we would be\noverwhelmed by too many short wavelength modes needed in order to accurately\nrepresent the dynamics.\n\nThe growth of the unstable long wavelengths (low $|k|$) excites\nthe short wavelengths \nthrough the nonlinear term in (\\ref{expan}). \nThe excitations thus transferred are dissipated by the strongly damped \nshort wavelengths, and a sort of ``chaotic equilibrium'' \ncan emerge. 
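For numerical work the ladder \refeq{expan} is integrated in truncated form. A minimal sketch of the truncated right-hand side (our own illustration, with a hypothetical function name, not the code used for the results below); it also counts the linearly unstable modes of the $u=0$ solution:

```python
import numpy as np

def ks_rhs(a, nu):
    """Right-hand side of the truncated ladder
    da_k/dt = (k^2 - nu k^4) a_k - k sum_m a_m a_{k-m},
    with a = (a_1, ..., a_N) in the odd subspace: a_0 = 0, a_{-k} = -a_k."""
    N = len(a)
    full = np.zeros(2 * N + 1)           # index N + k holds a_k, k = -N..N
    full[N + 1:] = a
    full[:N] = -a[::-1]                  # a_{-k} = -a_k
    adot = np.empty(N)
    for k in range(1, N + 1):
        conv = sum(full[N + m] * full[N + k - m]   # both |m|, |k-m| <= N
                   for m in range(k - N, N + 1))
        adot[k - 1] = (k**2 - nu * k**4) * a[k - 1] - k * conv
    return adot

nu = 0.029910
# modes with k < 1/sqrt(nu) ~ 5.78 grow, the rest are damped
unstable = [k for k in range(1, 17) if k**2 - nu * k**4 > 0]
```

The linear part reproduces the growth rates $k^2-\nu k^4$, so for $\nu=0.029910$ precisely the modes $k=1,\dots,5$ are linearly unstable.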
The very short wavelengths $|k| \gg 1 \/ \sqrt{\nu}$ \nwill remain small \nfor all times, but the intermediate wavelengths of order\n$|k| \sim 1 \/ \sqrt{\nu}$ \nwill play an important role in maintaining the dynamical equilibrium. \nAs the damping parameter decreases, the solutions increasingly take on \na Burgers-type shock-front\ncharacter, which is poorly represented by the Fourier basis, and many\nhigher harmonics need to be kept\cite{KNS90,GEP} in truncations of\n(\ref{expan}).\nHence, while one may truncate the high modes in the expansion (\ref{expan}), \ncare has to be exercised to ensure that no\nmodes essential to the dynamics are chopped away.\n\nBefore proceeding with the calculations, we take into account\nthe symmetries of the solutions and describe our criterion for reliable\ntruncations of the infinite ladder of \nordinary differential equations (\ref{expan}).\n\n\section{Symmetry decomposition}\nAs usual, the first step in the analysis of such dynamical flows\nis to restrict the dynamics to a Poincar\'e section. We\nshall fix the Poincar\'e section to be the hyperplane\n$a_1=0$. We integrate (\ref{expan}) with the initial\n conditions \n$a_1=0$ and arbitrary values of the coordinates $a_2, \ldots, a_N$, where \n$N$ is the truncation order. When $a_1$ next returns to \n$0$, the coordinates $a_2, \ldots, a_N$ are mapped \ninto $(a_2', \ldots a_N')=P(a_2, \ldots, a_N)$, where $P$ is the Poincar\'e \nmap. $P$ defines a mapping of an $(N-1)$-dimensional hyperplane into itself. \nUnder successive iterations of $P$, any trajectory\napproaches the attractor ${\cal A}$, which itself is an invariant \nset under $P$. \n\nA trajectory of \n (\ref{expan}) can cross the plane $a_1=0$ in two possible ways: \n either when \n$\dot{a}_1>0$ (``up'' intersection) \nor when $\dot{a}_1<0$ (``down'' intersection),\n with the ``down'' and ``up'' crossings \nalternating. 
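In practice the section crossings are detected during the integration: one watches for a sign change of the section coordinate with positive time derivative and interpolates the crossing time within the step. A toy sketch of this bookkeeping (our own illustration; a harmonic oscillator stands in for \refeq{expan}, and the helper names are hypothetical):

```python
import numpy as np

def rk4_step(f, y, h):
    """One classical Runge-Kutta step for y' = f(y)."""
    k1 = f(y); k2 = f(y + 0.5 * h * k1)
    k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def up_crossings(f, y0, t_max, h=1e-3):
    """Times of 'up' crossings of the section y[0] = 0 (y[0] increasing),
    located by linear interpolation within the step."""
    t, y, times = 0.0, np.asarray(y0, float), []
    while t < t_max:
        y_new = rk4_step(f, y, h)
        if y[0] < 0.0 <= y_new[0]:          # sign change while increasing
            s = -y[0] / (y_new[0] - y[0])   # fraction of the step
            times.append(t + s * h)
        t, y = t + h, y_new
    return times

# toy flow: harmonic oscillator x' = v, v' = -x; section x = 0, "up" means v > 0
f = lambda y: np.array([y[1], -y[0]])
times = up_crossings(f, [1.0, 0.0], 10.0)
```

For the oscillator started at $(x,v)=(1,0)$ the first ``up'' crossing occurs at $t=3\pi/2$; replacing the toy right-hand side by the truncated ladder and the section coordinate by $a_1$ gives the map $P$ described above.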
\nIt then makes sense to define the Poincar\'e map $P$ as a transition between, \nsay, ``up'' and ``up'' crossings. \nWith the Poincar\'e section defined as the ``up-up'' transition, \nit is natural to define a ``down-up'' transition map $\Theta$. Since \n$\Theta$ describes the transition from a ``down'' to an ``up'' crossing (or vice versa), \nthe map $\Theta^2$ describes the transition up-down-up, that is \n$\Theta^2=P$.\n\nConsider the spatial flip and\nshift symmetry operations $Ru(x)=u(-x)$, $Su(x)=u(x+\pi)$. \n The latter symmetry reflects the invariance under\nthe shift $u(x,t) \rightarrow u(x+ \pi,t)$, and is a particular case of the \ntranslational invariance of the Kuramoto-Sivashinsky equation (\ref{ks}). \nIn the Fourier modes decomposition (\ref{expan}) this \nsymmetry acts as\n$S: a_{2k} \rightarrow a_{2k}, a_{2k+1} \rightarrow -a_{2k+1}$. \nThe relations $R^2=S^2=1$\ninduce a decomposition of the space of solutions into 4 invariant \nsubspaces\cite{KNS90};\nthe above restriction to $b_k= \i a_k$ amounts to specializing\nto a subspace of odd solutions $u(x,t)=-u(-x,t)$.\n\nNow, with the help of the symmetry $S$ \nthe whole attractor ${\cal A}_{tot}$ can be \ndecomposed into two pieces: ${\cal A}_{tot}={\cal A}_0 \cup S \n{\cal A}_0 $ for some set ${\cal A}_0$.\n It can happen that the set ${\cal A}_0$\n (the symmetrically decomposed attractor) \ncan be decomposed even further by the action of the map $\Theta$. In this \ncase the attractor will consist of four disjoint sets: \n ${\cal A}_{tot}={\cal A} \cup S {\cal A} \cup \Theta {\cal A} \n \cup \Theta S {\cal A} $. As we shall see, \nthis decomposition \nis not always possible, since sometimes $ {\cal A}$ overlaps with \n$\Theta S{\cal A} $ (in this case $\Theta {\cal A}$ will also overlap with \n$S {\cal A} $). \nWe shall carry out our calculations in the regime where \nthe decomposition into four disjoint pieces \nis possible. 
In this case the set $ {\cal A}$ can be taken as\nthe fundamental \ndomain of the Poincar{\'e} map, with $S {\cal A} $, \n$\Theta {\cal A} $ and $\Theta S {\cal A} $ its images under the \n$S$ and $\Theta$ mappings.\n\nThis reduction of the dynamics to the fundamental domain is particularly\nuseful in periodic orbit calculations, as it simplifies symbolic dynamics\nand improves the convergence of cycle expansions\cite{CEsym}.\n\n\section{Fourier modes truncations}\n \nWhen we simulate the equation (\ref{expan}) on a computer, we have \nto truncate the ladder of equations to a finite length $N$, i.e., set \n $a_k=0$ for $k>N$. \n$N$ has to be sufficiently large that none of the truncated harmonics \n$a_k$, $k>N$, \nare important for the dynamics. On the other hand,\ncomputation time increases dramatically with $N$:\nsince we will be evaluating the stability matrices for the flow,\nthe computation time will grow at least as $N^2$. \n\n\nAdding an extra dimension to a truncation of the system (\ref{expan})\nintroduces a small\nperturbation, and this can (and often will) \nthrow the system into a totally different asymptotic state. \nA chaotic attractor for $N=15$ can become a period-three \nwindow for $N=16$, and so on. \nIf we compute, for example, the Lyapunov exponent\n$\lambda(\nu,N)$ for the strange attractor of the \nsystem (\ref{expan}), there is no reason to \nexpect $\lambda(\nu,N)$ to smoothly converge to the limit \nvalue $\lambda(\nu,\infty)$ as $N \rightarrow \infty$. \nThe situation is different in the periodic windows, \nwhere the system is structurally stable, and it makes sense to compute \n Lyapunov exponents, escape rates, etc. for the \n{\em repeller}, i.e. the closure of the set of all \n{\em unstable} periodic orbits. 
\nHere the power of cycle expansions comes in: \nto compute quantities on the repeller by direct averaging methods is \ngenerally more difficult, because the motion quickly collapses to the \nstable cycle. \n\n\begin{figure}\n\centerline{\epsfig{file=feig16bm.ps,width=8cm}}\n\caption[]{\nFeigenbaum tree for coordinate $a_6$, $N=16$ Fourier modes \ntruncation of (\ref{expan}). The two upper arrows indicate \nthe values of the damping parameter that we use in our \nnumerical investigations; $\nu=0.029910$ \n(chaotic) and $\nu=0.029924$ (period-3 window). The lower arrow indicates\nthe kink where the invariant set ${\cal A}$ \nstarts to overlap with $\Theta S {\cal A}$.\nTruncation to $N=17$ modes yields a similar figure, with values for \nspecific bifurcation points shifted by $\sim 10^{-5}$ with respect to the \n$N=16$ values. The choice of the coordinate $a_6$ is arbitrary;\nprojected down to any coordinate, the tree is qualitatively the same.\n}\n\label{feig16}\n\end{figure}\n\n\nWe have found that the minimum value of $N$ to get any chaotic behavior at all \nwas $N=9$. However, the dynamics for the $N=9$ truncated system is rather\ndifferent from the full system dynamics, and therefore we have performed our\nnumerical calculations for $N=15$, $N=16$ and $N=17$. \n\refFig{feig16} is a representative plot of the Feigenbaum \ntree for the Poincar\'e map $P$. 
\nTo obtain this figure, we took a random \ninitial point, iterated it for some time to let it settle on the\nattractor and then plotted the $a_6$ coordinate of the next 1000 intersections\nwith the Poincar{\'e} section.\nRepeating this for different values of the damping parameter $\nu$, one can \nobtain a picture of the attractor as a function of $\nu$.\nFor an intermediate range of values of\n$\nu$, the dynamics exhibits a rich variety of behaviours, such\nas strange attractors, stable limit cycles, and so on.\nThe Feigenbaum trees for different values of $N$ resemble \neach other, but the precise \nvalues of $\nu$ corresponding to the various bifurcations \ndepend on the order of truncation $N$. \n\nBased on the observed numerical similarity\nbetween the Feigenbaum trees for $N=16$ and $N=17$ (cf. \reffig{feig16}),\n we choose $N=16$ as a reasonable cutoff \nand will use only this truncation throughout the remainder of this \npaper. We will\nexamine two values of the damping parameter: $\nu=0.029910$, \nfor which\nthe system is chaotic, and $\nu=0.029924$, for which the system has a\nstable period-3 cycle.\nIn our numerical work we use both the pseudospectral\cite{Laurette} and the \n$4$th-order variable-step Runge-Kutta integration routines\cite{NAG};\ntheir results are in satisfactory agreement. As will be seen below, the\ngood control of symbolic dynamics guarantees that we do not miss\nany short periodic orbits generated by the bifurcation sequence indicated\nby the Feigenbaum tree of \reffig{feig16}. 
However, even though\nwe are fairly sure that for this parameter value we have all\nshort periodic orbits, the possibility that\nother sets of periodic solutions exist somewhere else in the\nphase space has not been excluded.\n\n\n\begin{figure}\n\centerline{\epsfig{file=solution123.ps,width=8cm}\n\t \epsfig{file=solution124.ps,width=8cm}}\n\caption[]{Projections of a typical 16-dimensional trajectory onto \n\t\tdifferent 3-dimensional subspaces, coordinates \n\t\t(a) $\{a_1, a_2, a_3\}$,\n\t\t(b) $\{a_1, a_2, a_4\}$. $N=16$ Fourier modes truncation with\n\t\t $\nu=0.029910$.}\n\label{plot123}\n\end{figure}\n\n\nThe problem with such high dimensional truncations of (\ref{expan})\nis that the dynamics is difficult to visualize. We can \nexamine its projections onto any three axes\n$a_i,a_j,a_k$, as in \reffig{plot123},\nor, alternatively, study \na return map for a given coordinate,\n$a_k \rightarrow a_k' = P_k(a_2, \ldots, a_{N})$,\nsuch as the one plotted in \reffig{returnmap}.\nThe full return map,\n${\bf a} \rightarrow {\bf P}(a_2, \dots, a_{N})={\bf a}'$,\nis $(N-1)$-dimensional\nand single-valued, and for the values of $\nu$ used here\nthe attractor is essentially 1-dimensional,\nbut its projection into the $\{a_k,P_k(a_2,\ldots,a_{N})\}$ plane \ncan be multi-valued and self-intersecting.\nOne can imagine a situation where no\n``good'' projection is possible,\nthat is, where the projection onto every two-dimensional\nplane is multiple-valued.\nThe question is how to treat such a map.\n \n\n\begin{figure}\n\centerline{\epsfig{file=return6.ps,width=8cm}}\n\caption[]{The attractor of the system \refeq{expan}, plotted as the $a_6$\ncomponent of the $a_1=0$ \nPoincar\'e section return map, 10,000 Poincar\'e section returns of\na typical trajectory. 
Indicated are the periodic points $\\overline{0}$, \n$\\overline{1}$ and $\\overline{01}$; as this is an arbitrary projection of\nthe invariant set, they exhibit no good spatial ordering.\n$N=16$ Fourier modes truncation with $\\nu=0.029910$. \n}\n\\label{returnmap}\n\\end{figure}\n\n\\section{One-dimensional visualization of the dynamics} \n\nWe now describe an \napproach which simplifies matters a lot by reducing the \nmap to an approximate one-dimensional map. The multiple-valuedness in \n\\reffig{returnmap} arises from the fact that the return map \nis a 2-dimensional\nprojection of a convoluted \n1-dimensional curve embedded into a high-dimensional space.\nWe shall show that it is possible to find an {\\em intrinsic} \nparametrization $s$ along the unstable manifold, \nsuch that the map $s \\rightarrow f(s)$ induced by the\nfull $d$-dimensional flow is approximately {\\em $1$-dimensional}. \nStrictly speaking, the attractor on \\reffig{returnmap} has a certain \nthickness transverse to it, but the contraction in the transverse\ndirections is so strong that the invariant set is effectively\n$1$-dimensional. \n\nSuppose we already have determined some of the shorter cycles for our\nsystem, i.e. \nthe fixed points of the Poincar\\'e map and its iterates. This is \naccomplished relatively easily \nby checking a trajectory of a random initial point for close returns\nand then using these as initial guesses for a cycle search algorithm.\nWe now assume that the invariant set can be approximated by a curve\npassing close to all periodic points,\nand determine the order of periodic points along such curve.\nThis is done in the following way: there exists a fixed point which\nis not connected to the attractor (the point \n$\\overline{0}$ in \\reffig{returnmap}) - we\nchoose this fixed point \nas the starting point and assign it number $1$. 
\nPoint number $2$ is the periodic point in the sample\nwhich is closest (in the full space) \nto this fixed point, and the $n$-th point is determined as the point \nwhich has the minimum distance from the point number $n-1$ among \nall the periodic points which have not yet been enumerated. \nProceeding this way, we order all the periodic points that we have found\nso far.\n\nSince all periodic points belong to cycles, \ntheir images are known and are simply the successive periodic points\nalong the cycle. We use this fact to recursively construct\na $1$-dimensional mapping $s_i \rightarrow f(s_i)$. We approximate\nthe parametrization length $s$ along the invariant set \nby computing the Euclidean inter-distances between the\nsuccessive periodic points in the full dynamical\nspace,\n$s_1=0, s_2=\| {\bf a}_2-{\bf a}_1 \|, s_i-s_{i-1}=\| \n{\bf a}_i-{\bf a}_{i-1} \| $. \nThe $i$-th cycle point $s_i$ is mapped onto its image\n$s_{\sigma(i)} = f(s_i)$, \nwhere $\sigma(i)$ denotes the label of the next periodic point \nin the cycle.\nWe can now find longer \nperiodic orbits of the 1-dimensional map $f$ by standard methods such\nas inverse iteration, and guess the location of \nthe corresponding points in the full $N$-dimensional\nspace by interpolating between the nearest known periodic points.\nThese will not be exact periodic orbits of the full system,\nbut they are very useful as good starting guesses in a search for the exact\nperiodic orbits. Iteratively, more and more periodic orbits can be computed. 
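The ordering and parametrization steps can be sketched as follows (our own illustration with a hypothetical helper name; a smooth space curve stands in for the set of periodic points):

```python
import numpy as np

def order_points(points, start=0):
    """Greedy nearest-neighbour ordering of points lying near a curve:
    begin at `start`, repeatedly append the closest unvisited point,
    and accumulate the Euclidean inter-distances as the parameter s_i."""
    pts = np.asarray(points, float)
    n = len(pts)
    order, used = [start], np.zeros(n, bool)
    used[start] = True
    s = [0.0]
    while len(order) < n:
        d = np.linalg.norm(pts - pts[order[-1]], axis=1)
        d[used] = np.inf                 # never revisit a point
        nxt = int(np.argmin(d))
        order.append(nxt)
        used[nxt] = True
        s.append(s[-1] + d[nxt])
    return order, np.array(s)

# demo: scrambled samples of a curve embedded in R^3
t = np.linspace(0.0, 1.0, 20)
curve = np.c_[t, t**2, np.sin(t)]
perm = np.random.default_rng(1).permutation(20)
order, s = order_points(curve[perm], start=int(np.argmin(perm)))
```

Applied to the actual periodic points, the accumulated $s_i$ together with the images $s_{\sigma(i)}$ of successive points along each cycle define the approximate $1$-dimensional map $f$.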
\nWhile it only pays to refine the 1-dimensional parametrization \nuntil the density\nof the periodic points becomes so high that the width of the attractor\nbecomes noticeable, the 1-dimensional map continues to provide good\ninitial guesses for longer periodic orbits.\nMore sophisticated methods are needed only if \nhigh accuracy around the folding region of $f(s)$ is required\nin order to distinguish between long cycles.\n\n\begin{figure}\n\centerline{\epsfig{file=unfolded.ps,width=8cm}}\n\caption[]{The return map $s_{n+1} = f(s_n)$ constructed\nfrom the images of periodic points. The diamonds were obtained by \nusing $34$ periodic points, the dots were obtained by using $240$ periodic \npoints. \nWe have indicated the periodic points $\overline{0}$, \n$\overline{1}$ and $\overline{01}$. \nNote that the transverse fractal structure of the map shows up when \nthe number of points is increased. \n$N=16$ Fourier modes truncation with $\nu=0.029910$. }\n\label{unfolded}\n\end{figure}\nFor the values of $\nu$ we are working with, the attractor consists \nof four disjoint sets, the fundamental domain ${\cal A}$ and \nits images under the maps $S$ and $\Theta$. \nIn this case the approximate return map \n$s \rightarrow f(s)$ is unimodal.\nThe corresponding map on the symmetric \npart of the attractor, $S \Theta {\cal A} $, is likewise unimodal,\nand rotated by $180$ degrees about the origin. For the values of $\nu$ we\nwork with, the\ntwo maps do not interact and their domains are separate. 
\nHowever, if the value of the damping parameter $\nu$ is decreased\nsufficiently, the domains of the\nmaps join and together they form a connected invariant set\ndescribed by a bimodal map\cite{Glbtsky94}.\nThis joining of the fundamental domain ${\cal A}$ and \nits symmetric image $\Theta S {\cal A} $ is visible \nin \reffig{feig16} at\n$\nu \simeq 0.0299$, where the range of the $a_6$ coordinate \nincreases discontinuously.\n\n\begin{figure}\n\centerline{\epsfig{file=timeceiling.ps,width=8cm}}\n\caption[]{The return time $T(s)$ as a function of the parameter $s$,\nevaluated on the periodic points, as in \reffig{unfolded}, with\nthe diamonds obtained by \n$34$ periodic points and the dots by $240$ periodic points. \nThe fine structure is due to the fractal structure of the attractor.\n}\n\label{rettime}\n\end{figure}\n\nWe use the unimodal map $s \rightarrow f(s)$ to construct binary symbolic \ndynamics for the system in the usual way: assign the symbol '0' to points\nto the left of the maximum, '1' to the points to the right. \nIn the period-3 window with the stable cycle $\overline{001}$, \nthe pruning rules are very easy: except for the stable $\overline{001}$ cycle\nand the $\overline{0}$ fixed point (both disjoint from the invariant set), \ntwo 0's in a row are forbidden. \nIn this case it is convenient to redefine the alphabet by\ndenoting the symbol $1$ by $a$ and the symbol pair $10$ by $b$.\nThis renders the symbolic dynamics of the points on the repeller\ncomplete binary: all sequences of\nthe letters $a$ and $b$ are admissible.\n\nA flow in $N$ dimensions \ncan be reduced to an $(N-1)$-dimensional \nreturn map by suspension on a Poincar\'e section,\nprovided that the Poincar\'e return map is \nsupplemented by a ``time ceiling function''\cite{bowen} which accounts\nfor the variation in the section return times. 
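The recoding into the complete alphabet is mechanical; a small sketch (function name ours), with $a=1$, $b=10$ as in \reftab{t_orbits}:

```python
def recode(itinerary):
    """Recode a binary itinerary obeying the 'no two 0s in a row'
    pruning rule into the complete alphabet a = 1, b = 10."""
    out, i = [], 0
    while i < len(itinerary):
        if itinerary[i] != '1':
            raise ValueError("inadmissible word: every block starts with 1")
        if i + 1 < len(itinerary) and itinerary[i + 1] == '0':
            out.append('b')     # the pair 10
            i += 2
        else:
            out.append('a')     # a lone 1
            i += 1
    return ''.join(out)
```

Any admissible word, read as starting with a $1$, decomposes uniquely into the blocks $1$ and $10$, e.g. $\overline{11010} \to \overline{abb}$.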
\nHence we also determine the return time $T(s_i)$ \nfor each periodic point $i$, and use those to construct recursively\nthe periodic orbit approximations to the time ceiling function,\n\reffig{rettime}.\nThe mean Poincar\'e section return time \nis of order $\overline{T} \approx 0.88$. \n\n\n\subsection{Numerical results}\n\nWe have found all cycles \nup to topological length 10 (the redefined topological length in the case of\nthe period-3 window), 92 cycles in the chaotic regime and 228 \nin the period-3 window,\nby using the 1-dimensional parametrization $f(s)$ to find initial guesses for\nperiodic points of the full $N=16$ Fourier modes truncation \nand then determining the cycles by\na multi-shooting Newton routine. It is worth\nnoting that the effectiveness of using the \n$1$-dimensional $f(s)$ approximation to the dynamics\nto determine initial guesses is\nsuch that for a typical cycle it takes only 2-3 Newton iterations to find\nthe cycle with an accuracy of $10^{-10}$.\n\n\begin{table}\n\caption[]{All cycles up to topological length\n5 for the $N=16$ Fourier modes truncation of the Kuramoto-Sivashinsky equation \n(\ref{expan}),\ndamping parameter \n$ \nu =0.029910$ (chaotic attractor) and $\nu=0.029924$ (period-3\nwindow), their itineraries, periods, and \nthe first four stability eigenvalues. For the chaotic attractor,\npruning shows up at \ntopological length $4$; the $\overline{0001}$ \nand $\overline{0011}$ cycles are pruned. \nThe deviation from unity of $\Lambda_2$, the eigenvalue along the flow, \nis an indication of the accuracy of the numerical integration. 
\nFor the period-3 window we also give the itineraries in the redefined alphabet\nwhere $a=1$ and $b=10$.}\n\\begin{indented}\n{\\small \n\\lineup\n\\item[]\\begin{tabular}{@{}lllllll}\n\\br\n$p$ & & $T_p$ & $\\Lambda_1$ & $\\Lambda_2-1$ & $\\Lambda_3$ & $\\Lambda_4$ \\\\ \\mr\n\\multicolumn{7}{l}{Chaotic, $\\nu=0.029910$} \\\\ \\mr\n0 & & 0.897653 & \\03.298183 & 5$\\cdot10^{-12}$ \n& \\-2.793085$\\cdot 10^{-3}$ & \\-2.793085$\\cdot 10^{-3}$ \\\\\n1 & & 0.870729 & \\0\\-2.014326 & \\-5$\\cdot10^{-12}$ \n & 6.579608$\\cdot 10^{-3}$ & \\-3.653655$\\cdot 10^{-4}$ \\\\\n10 & & 1.751810 & \\0\\-3.801854 & 8$\\cdot10^{-12}$ &\n \\-3.892045$\\cdot 10^{-5}$ & 2.576621$\\cdot 10^{-7}$ \\\\\n100 & & 2.639954 &\\0\\-4.852486 & 1$\\cdot10^{-11}$\n & 3.044730$\\cdot 10^{-7}$ &\\-3.297996$\\cdot 10^{-10}$ \\\\\n110 & & 2.632544 & \\06.062332 & 2$\\cdot10^{-11}$ &\n \\-2.721273$\\cdot 10^{-7}$ & \\-1.961928$\\cdot 10^{-10}$ \\\\ \n1000 & & - & - & - & - & - \\\\\n1100 & & - & - & - & - & - \\\\\n1110 & & 3.497622 & \\-14.76756 & 2$\\cdot10^{-11}$ &\n \\-1.629532$\\cdot 10^{-9}$ & 6.041192$\\cdot 10^{-14}$ \\\\\n10100 & & 4.393973 & 19.64397 & 2$\\cdot10^{-11}$ & \n\\-1.083266$\\cdot 10^{-11}$ & 3.796396$\\cdot 10^{-15}$ \\\\\n11100 & & 4.391976 & \\-18.93979 & 2$\\cdot10^{-11}$ & \n 1.162713$\\cdot 10^{-11}$ &\\-1.247149$\\cdot 10^{-14}$ \\\\\n11010 & & 4.380100 & \\-26.11626 & 2$\\cdot10^{-11}$ & \n 1.005397$\\cdot 10^{-11}$ & 8.161650$\\cdot 10^{-15}$ \\\\\n11110 & & 4.370895 & 28.53133 & 2$\\cdot10^{-11}$ & \n1.706568$\\cdot 10^{-11}$ & 1.706568$\\cdot 10^{-14}$ \\\\ \\mr\n\\multicolumn{7}{l}{Period-3 window, $\\nu=0.029924$} \\\\ \\mr\n 0 & & 0.897809 & \\03.185997 & 7$\\cdot10^{-13}$ & \n\\-2.772435$\\cdot10^{-3}$ & \\-2.772435$\\cdot10^{-3}$ \\\\ \n 1 & $a$ & 0.871737 & \\0\\-1.914257 & 5$\\cdot10^{-13}$ &\n 6.913449$\\cdot10^{-3}$ & \\-3.676167$\\cdot10^{-4}$ \\\\ \n 10 & $b$ & 1.752821 & \\0\\-3.250080 & 1$\\cdot10^{-12}$ &\n\\-4.563478$\\cdot10^{-5}$ & 
2.468647$\cdot10^{-7}$ \\ \n 100 & & 2.638794 & \0\-0.315134 & \-4$\cdot10^{-13}$ &\n 4.821809$\cdot10^{-6}$ & \-2.576341$\cdot10^{-10}$ \\ \n 110 & $ab$ & 2.636903 & \02.263744 & 3$\cdot10^{-12}$ &\n\-6.923648$\cdot10^{-7}$ & \-2.251226$\cdot10^{-10}$ \\ \n 1110 & $aab$ & 3.500743 & \-10.87103 & 2$\cdot10^{-12}$ &\n\-2.198314$\cdot10^{-9}$ & 3.302367$\cdot10^{-14}$ \\ \n 11010 & $abb$ & 4.382927 & \-15.84102 & 2$\cdot10^{-12}$ &\n 1.656690$\cdot10^{-11}$ & 1.388232$\cdot10^{-14}$ \\ \n 11110 & $aaab$ & 4.375712 & 18.52766 & 3$\cdot10^{-12}$ &\n\-1.604898$\cdot10^{-11}$ & 2.831886$\cdot10^{-14}$ \\ \br\n\end{tabular} \n}\n\end{indented}\n\label{t_orbits}\n\end{table}\n\n\begin{figure}\n\centerline{\epsfig{file=eigenvalues.ps,width=8cm}}\n\caption[]{Lyapunov exponents $\lambda_k$ versus $k$ for the \nperiodic orbit $\overline{1}$ compared with the stability eigenvalues \nof the $u(x,t)=0$ stationary solution $k^2- \nu k^4$. \n$\lambda_k$ for $k \geq 8$ fall below the numerical accuracy of integration \nand are not meaningful. \n$N=16$ Fourier modes, $\nu=0.029910$, chaotic regime.\n}\n\label{eigenvalues}\n\end{figure}\n\nIn \reftab{t_orbits} we list the periodic orbits \nup to topological length 5 found by our method.\nThe value of $\Lambda_2$ serves as an indication of the \naccuracy of our numerics, as $\Lambda_2$ corresponds to the marginal\neigenvalue along the periodic orbit, which is strictly equal to $1$.\nAll cycles seem to have \nreal eigenvalues (to within the numerical accuracy) except for the \n$\overline{0}$-cycle, \nwhich has a pair of complex eigenvalues, $\Lambda_3$ and $\Lambda_4$.\nWe therefore do not list the corresponding imaginary parts of the eigenvalues. \nTo illustrate the rapid contraction in the nonleading eigendirections\nwe plot all the eigenvalues of the \n$\overline{1}$-cycle in \reffig{eigenvalues}. 
\nAs the length of the orbit increases, the magnitude of the\ncontracting eigenvalues falls quickly \nbelow the attainable numerical \naccuracy $\approx 10^{-16}$, and our numerical results for \n$\Lambda_k$ are not meaningful for $ k \geq 8$.\n\nHaving determined the periodic solutions $p$ in the Fourier modes space, \nwe now go back to the configuration space and plot the corresponding\nspatiotemporally periodic solutions $u_p(x,t)$: they are the\nrepertoire of the recurrent spatiotemporal patterns that Hopf wanted to\nsee in turbulent dynamics.\nDifferent spatiotemporally periodic solutions are qualitatively\nvery much alike, but differ in detail, as closer inspection reveals. \nIn \reffig{orbit0fig} we plot \n$u_0(x,t)$ corresponding to the Fourier space $\overline{0}$-cycle. \nOther solutions, plotted in the configuration space, exhibit the same\noverall gross structure. For this reason we find it more informative\nto plot the difference $u_0(x,t'T_0)-u_p(x,t''T_p\/n_p)$\nrather than $u_p(x,t)$ itself. \nHere $p$ labels a given prime (non-repeating) cycle, \n$n_p$ is the topological cycle length, $T_p$ its period,\nand the time is rescaled to make this difference periodic in \ntime: $t'=t \/T_0$ and $t''=n_p t\/T_p$, so that $t''$ ranges from $0$ to $n_p$. \n$u_0(x,t'T_0)-u_1(x,t''T_1)$ is given in \reffig{diff1fig}, and\n$u_0(x,t'T_0)-u_{01}(x,t''T_{01}\/2)$ in \reffig{diff01fig}.\n\n\begin{figure}\n\centerline{\epsfig{file=orbit0.ps,width=8cm}}\n\caption[]{Spatiotemporally periodic solution $u_0(x,t)$.\nWe have divided $x$ by $\pi$ and plotted only the $x>0$ part, since we work in\nthe subspace of the odd solutions, $u(x,t)=-u(-x,t)$. 
\n$N=16$ Fourier modes truncation with $\nu=0.029910$.\n\t }\n\label{orbit0fig}\n\end{figure}\n\n\begin{figure}\n\centerline{\epsfig{file=diff1.ps,width=8cm}}\n\caption[]{The difference between the two shortest period\n\t spatiotemporally periodic solutions \n\t $u_0(x,t'T_0)$ and $u_1(x,t''T_1)$.\n\t }\n\label{diff1fig}\n\end{figure}\n\n\begin{figure}\n\centerline{\epsfig{file=diff01.ps,width=8cm}}\n\caption[]{The difference between solution $u_0(x,t'T_0)$ repeated\n\t twice and the $n_p=2$ period spatiotemporally periodic solution\n $u_{01}(x,t''T_{01}\/2)$.\n }\n\label{diff01fig}\n\end{figure}\n\n\section{Global averaging: periodic orbits in action} \n\nThe above investigation of the Kuramoto-Sivashinsky\nsystem demonstrates that it is possible to\nconstruct recursively and exhaustively\na hierarchy of spatiotemporally periodic unstable solutions\nof a spatially extended nonlinear system.\n\nNow we turn to the central issue of this paper: qualitatively, these\nsolutions are indeed an implementation of Hopf's program, but how\nis this information to be used quantitatively? This is precisely\nwhat the periodic orbit theory is about;\nit offers machinery that puts together\nthe topological and the quantitative information about individual\nsolutions, such as their periods and stabilities, into predictions \nabout measurable global averages, such as the Lyapunov exponents,\ncorrelation functions, and so on. The proper tools for\ncomputing such global characterizations of the dynamics are the trace\nand determinant formulas of the periodic orbit theory.\n\nWe shall briefly summarize the aspects of the\nperiodic orbit theory relevant to the present application;\nfor a complete exposition of the theory \nthe reader is referred to \refref{cycl_book}. 
The key idea is to\nreplace a time average $\\Phi^t (x)\/t$\nof an ``observable\" $\\phi$ measured along\na dynamical trajectory $x(t) = f^t(x)$\n\\[\n\\Phi^t (x) = \\int_0^t \\d\\tau \\, \\phi(x(\\tau))\n\\]\nby the spatial average $\\left<\\e^{\\beta \\Phi^t}\\right>$\nof the quantity $\\e^{\\beta \\Phi^t (x)}$. Here $\\beta$ is\na dummy variable, used to recover the desired expectation value\n$\\left<\\phi\\right> = \\lim_{t\\to\\infty} \\left<\\Phi^t\/t \\right>$ \nby taking $\\frac{\\d}{\\d\\beta}$ derivatives\nof $\\left<\\e^{\\beta \\Phi^t}\\right>$ and then setting $\\beta=0$.\nFor large $t$ the average $\\left<\\e^{\\beta \\Phi^t}\\right>$ behaves as\nthe trace \n\\begin{equation}\n{\\rm tr}\\, {\\cal L}^{t}\n = \\sum_{p} \\period{p} \\sum_{r=1}^\\infty\n { \\e^{r \\beta \\Phi_p} \\over \\oneMinJ{r} }\n \\prpgtr{t-r \\period{p}}\n\\ee{tr-L1}\nof the evolution operator\n\\[\n{\\cal L}^t (x,y) = \\delta(y-f^{t}(x))\\e^{\\beta \\Phi^t(x)} .\n\\]\nand is dominated by its largest eigenvalue $\\e^{ts(\\beta)}$.\n\nThe trace formula \\refeq{tr-L1} has an intuitive geometrical interpretation.\nThe sums in \\refeq{tr-L1}\nare over prime periodic orbits $p$ \nand their repeats $r$, $T_p$ are their periods,\nand ${\\bf J_p}$ are their stability matrices. \nPrime cycles partition the dynamical space into closed tubes of\nlength $\\period{p}$ and thickness\n$\\left|\\det({\\bf 1}-{\\bf J}_p)\\right|^{-1}$, \nand the trace picks up a periodic orbit contribution only when the\ntime $t$ equals a prime period or its repeat, hence the time\ndelta function $\\prpgtr{t- r\\period{p}}$. Finally,\n$\\e^{\\beta \\Phi_p }$\nis the mean value of\n$ \\e^{\\beta \\Phi^t(x)}$\nevaluated on this part of dynamical space, so the trace formula is \nthe average of $\\left< \\e^{\\beta \\Phi^t}\\right>$ expressed as a partition\nof the space of solutions into a repertoire of spatiotemporally\nperiodic solutions, each weighted by its stability, i.e. 
likelihood of its\noccurrence in a long time evolution of the system.\n\nIn applications of the periodic orbit theory \nthe related Fredholm determinant\n\\begin{equation}\nF(\\beta,s)=\n\\exp \\left ( \n{ \\displaystyle - \\sum_p \\sum_{r=1}^{\\infty} z ^{n_p r} \n\\frac{\\displaystyle \\e^{r( \\beta \\Phi_p -s T_p) } }\n{ \\displaystyle \nr \\left | \\det \\left ( {\\bf 1}- {\\bf J}_p^r \\right ) \\right | } } \\right )\n\\label{fredholm} \n\\end{equation}\nhas better convergence as a function of\nthe maximal cycle length truncation, so that is the function whose\nleading zero $F(\\beta,s(\\beta))=0$ we determine here in order\nto evaluate the leading eigenvalue $s(\\beta)$. \n\nThe dummy variable $z$ in \\refeq{fredholm} keeps track of \nthe topological lengths $n_p$ (number of the Poincar\\'e section crossings),\nand is used to expand $F$ as a series in\n$z$. If we know all cycles up to topological length $l$ we truncate $F$\nto $l$-th order polynomial:\n\\begin{equation}\nF(\\beta,s)=1-\\sum_{1}^{l} c_k z^k + (\\mbox{remainder})\n\\label{Fred_exp}\n\\end{equation}\nand set $z=1$. The general theory\\cite{Ruelle,Rugh92,CRR93} then \nguarantees that\nfor a hyperbolic dynamical system the coefficients $c_k$ fall\noff in magnitude exponentially or faster with increasing $k$.\nWe now calculate the leading eigenvalue $s(\\beta)$ \nby determining the smallest zero of $F(\\beta,s)$,\nand check the convergence of this estimate by studying it as a \nfunction of the maximal cycle length truncation $l$. \nIf the flow conserves all trajectories, the leading eigenvalue\nmust satisfy $s(0)=0$; if the invariant set is repelling, the\nleading eigenvalue yields $\\gamma= -s(0)$, the escape rate from the repeller.\nOnce the leading eigenvalue is determined we can\ncalculate the desired average $\\left<\\phi \\right>$ using formula\\cite{AACI}:\n\\begin{equation}\n\\left<\\phi \\right> =\n\\left. -\\frac{\\partial s}{\\partial \\beta}\\right|_{\\beta=0} =\n\\left. 
-\\frac{\\partial F}{\\partial \\beta} \n \\left\/\n \\frac{ \\partial F}{\\partial s }\n \\right. \\right|_{\\beta=0 \\atop s=s(0)}.\n\\label{cyc_aver}\n\\end{equation}\nFor example, if we take as our ``observable'' $\\log |\\Lambda_{1}^t(x)|$,\nthe largest eigenvalue of the linearized stability of the flow,\n$\\Phi_p$ will be\n$\\log |\\Lambda_{1,p}|$ where $\\Lambda_{1,p}$ is the largest eigenvalue\nof stability matrix of the cycle $p$, and the above \nformula yields the Lyapunov exponent $\\left<\\lambda\\right>$.\n \nBoth the numerator and the denominator in \\refeq{cyc_aver} have a cycle\nexpansion analogous to \\refeq{Fred_exp} (cf. \\refref{cycl_book}), \nand the same periodic orbit data suffices for their evaluation.\n\nConceptually the most\nimportant lesson of the periodic orbit theory \nis that the spatiotemporally periodic\nsolutions are {\\em not} to be thought of as eigenmodes to be used as a linear\nbasis for expressing solutions of the equations of motion - as the equations\nare nonlinear, the periodic solutions are in no sense additive.\nNevertheless, the trace formulas and determinants of the periodic\norbit theory give a precise prescription for how \nto systematically explore the repertoire of admissible spatiotemporal patterns,\nand how to put them together in order to predict measurable observables. \n\n\\subsection{Numerical results}\n\nOne of the objectives of a theory of turbulence is\nto predict measurable global averages over turbulent flows, such as\nvelocity-velocity correlations and transport coefficients. 
While in\nprinciple the periodic orbit averaging formulas should be applicable\nto such averages, with the present parameter values\nwe are far from any strongly turbulent regime, and here we shall\nrestrict ourselves to the simplest tests of chaotic dynamics: we shall\ntest the theory by evaluating Lyapunov exponents\nand escape rates.\n\n\\begin{figure}\n\\centerline{\\epsfig{file=coeff.ps,width=8cm}}\n\\caption[]{$\\log_{10}$ of the coefficients $|c_k|$ in the cycle expansion \n\\refeq{Fred_exp} of $F(0,0)$ versus $k$ \nfor the period-$3$ window case (crosses) and the chaotic case\n(diamonds). $N=16$ Fourier modes truncation. }\n\\label{coeffval}\n\\end{figure}\n\nWe compute the periodic orbits, escape rates and Lyapunov exponents \nboth for the period-$3$ window and a chaotic regime.\nIn the case of period-$3$ window the complete symbolics dynamics\nand grammar rules\nare known and good convergence of cycle expansions is expected \nboth for the escape rate from the repeller\nand the Lyapunov exponent. Parenthetically, the stable\nperiod-3 orbit is separated from the rest of the invariant set\nby its immediate basin of attraction window, and its eigenvalues \nbear no immediate relation to the escape rate and the Lyapunov\nexponent of the repelling set.\n\nIn the case of a generic ``strange attractor'',\nthe convergence is not expected to be nearly as good, \nsince in this case there exist no finite description of the symbolic dynamics. \nFor closed systems (no escape) $\\gamma=0$ and $F(0,0)=0$. \nThe discrepancy of the value $F(0,0)$ from $0$ for a closed system \nallows us to estimate the accuracy of \nfinite cycle length approximations to the \nFredholm determinant. 
\n\nThe analytic properties of the Fredholm determinant are illustrated by \nthe decay rate of the coefficients $c_k$ as a function of $k$ in the expansion \n\\refeq{Fred_exp}.\nIf the complete symbolic dynamics is known and the system is hyperbolic, the \ndecay of $c_k$ should be superexponential\\cite{Rugh92}. \nThis is illustrated in \\reffig{coeffval}, where\nwe plot the coefficients $c_k$ for the $16$-dimensional \nsystem for the chaotic case and for the period-$3$ window. \nWe can clearly see the \nsuperexponential decay for the period-$3$ \nwindow case and at best exponential decay \nfor the chaotic case. \n\nOur results are presented in \\reftab{t_16_chaotic}. One \nobserves that when the symbolic dynamics is known \n(period-$3$ window), the convergence is much better than\nin the generic case, in accordance with the periodic orbit theory\nexpectations. \n\n\\begin{table}\n\\caption[]{\nThe escape rate $\\gamma$ and the leading Lyapunov exponent\nas a function of the cycle expansion truncation $n_{max}$ \nfor the $N=16$ Fourier modes truncation, chaotic regime \n$(\\nu=0.029910$) and period-3 window $(\\nu=0.029924)$. 
In the \nperiod-3 window the Fredholm determinant starts converging only\nfor $n_{max}>4$; for $n_{max}=4$ it has no real zero at all.\nA numerical simulation \nestimate for the Lyapunov exponent in the chaotic regime is given in the \nlast line; for this parameter value the escape rate, $\\gamma$,\nshould strictly equal zero.}\n\\begin{indented}\n{\\small \n\\lineup\n\\item[]\\begin{tabular}{lllll} \\br\n & \\multicolumn{2}{l}{chaotic} & \n \\multicolumn{2}{l}{period-3 window} \\\\ \\mr\n$n_{max}$ & $\\gamma$ & $\\lambda_1$ & $\\gamma$ & $\\lambda_1$ \\\\ \\mr\n 1 & & & 0.428143 & 0.703010 \\\\ \\hline\n 2 & 0.441948 & 0.981267 & \\-0.187882 & 0.430485 \\\\ \\hline\n 3 & 0.080117 & 0.765050 & \\-0.049325 & 0.469350 \\\\ \\hline\n 4 & 0.148583 & 0.703072 & & \\\\ \\hline\n 5 & 0.068513 & 0.727498 & 1.072468 & 0.585506 \\\\ \\hline\n 6 & 0.027724 & 0.699907 & 0.078008 & 0.547005 \\\\ \\hline\n 7 & 0.035137 & 0.693852 & 0.088132 & 0.598977 \\\\ \\hline\n 8 & 0.007104 & 0.675529 & 0.090425 & 0.631551 \\\\ \\hline\n 9 & 0.021066 & 0.673144 & 0.090101 & 0.618160 \\\\ \\hline\n10 & 0.007367 & 0.646233 & 0.090065 & 0.621271 \\\\ \\hline\nnumer. & & 0.629 & & \\\\ \\br\n\\end{tabular}\n}\n\\end{indented}\n\\label{table16_chaotic}\n\\label{t_16_chaotic}\n\\end{table}\n\n\\section{Summary}\n\nHopf's proposal for a theory of turbulence was, as we understand it, to think \nof turbulence as a sequence of near recurrences of a repertoire of unstable \nspatiotemporal patterns. Hopf's \nproposal is in its spirit very different from most ideas that animate\ncurrent turbulence research. 
\nIt is distinct from\nthe Landau quasiperiodic picture of turbulence as a sum of \ninfinite number of incommensurate frequencies, with dynamics taking place on a \nlarge-dimensional torus.\nIt is not the\nKolmogorov's 1941 homogeneous turbulence with no \ncoherent structures fixing the length scale, here all the action is \nin specific coherent structures.\nIt is emphatically {\\em not} universal; spatiotemporally periodic solutions \nare specific to the particular set of equations and boundary conditions.\nAnd it is {\\em not} probabilistic; everything is fixed by the deterministic\ndynamics with no probabilistic assumptions on the velocity distributions \nor external stochastic forcing. \n\nOur investigation of the Kuramoto-Sivashinsky system is a\nstep in the direction of implementing Hopf's program. \nWe have constructed \na complete and exhaustive hierarchy of spatiotemporally periodic solutions \nof spatially extended nonlinear system and applied the periodic orbit theory \nto evaluation of global averages for such system. Conceptually the most \nimportant lesson of this theory is that \nthe unstable spatiotemporally periodic\nsolutions serve to explore systematically the\nrepertoire of admissible spatiotemporal patterns, with the trace\nand determinant formulas and their cycle expansions being the proper tools for\nextraction of quantitative predictions from the periodic orbits data.\n\nWe have applied the\ntheory to a low dimensional attractor, not larger than the Lorenz's original \nstrange attractor\\cite{Lorenz}. As our aim was to solve the given equations \naccurately, we were forced to work with a high dimensional\nFourier modes truncations, and we succeeded in determining the\nperiodic orbits for flows of much higher dimension than in\nprevious applications of the periodic orbit theory. As something new, we \nhave developed an intrinsic parametrization of the invariant set that\nprovided the key to finding the periodic orbits. 
\n\nIn practice, the method \nof averaging by means of periodic orbits produced best results when \nthe complete symbolic dynamics was known. \nFor generic parameter values we cannot claim that the periodic\norbit approach is\ncomputationally superior to a direct numerical simulation. \nA program to find periodic orbits up to length 10 for one value of \nthe damping parameter $\\nu$ requires a day of CPU on a fast workstation, \nmuch longer than the time used in the direct numerical\nsimulations. \n\nThe parameter $\\nu$ values that we work with correspond to\nthe weakest nontrivial ``turbulence'', and it is an open \nquestion to what extent the approach remains implementable as the system goes \nmore turbulent. Our hope is that the unstable structures captured so far \ncan be adiabatically tracked to the ``intermediate turbulence'' regime, \nand still remain sufficiently representative of the space of admissible \npatterns to allow meaningful estimates of global averages. \nAs long as no effective coordinatization of the ``inertial\nmanifold'' exists and we rely on the spatial Fourier decomposition, the\npresent approach is \nbound to fail in the ``strong turbulence'' $\\nu \\rightarrow 0$ limit,\nwhere the dominant structures are Burgers-type shocks \nand truncations of the spatial Fourier modes \nexpansions are increasingly uncontrollable. \n\n\\ack\nWe are grateful to L. Tuckerman for patient instruction,\nE.A. Spiegel,\nG. Goren, \nR. Zeitek,\nand \nI. Procaccia\nfor inspiring conversations, P. Dahlqvist for a critical\nreading of an early version of the paper, and E. Bogomolny for the\ncatchy but all too ephemeral title for the paper.\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe connection between deep neural network and the ordinary differential equation has been studied and discussed in different recent works [1,2,3,4,5,6]. 
It has been shown that residual networks such as ResNet [7] and recurrent neural network decoders can be modeled as a discretization of a continuous ODE model. An ODE-based model and its relation to the residual network can be shown as follows:\n\\begin{equation} \\label{Eq1}\n\\begin{aligned}\n \\ \\ \\text L_{t_1} =\\text L_{t_0} + f(\\text L_{t_0},\\theta) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (ResNet)\n\\end{aligned}\n\\end{equation}\n\\begin{equation} \\label{Eq2}\n\\begin{aligned}\n\\text L_{t_1} =\\text L_{t_0} + \\int_{t_0}^{t_1} f(\\text L_{t},\\theta)\\ dt \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ (ODE)\n\\end{aligned}\n\\end{equation}\nwhere $\\text L_{t_0}, \\text L_{t_1}$ are the residual block input and output. $f$ represents the network-defined nonlinear operator which preserves the dimensionality of $\\text L_{t_0}$ and $\\theta$ represents the network weights. The defined ODE ($\\frac{d\\text L}{dt} = f(\\text L_{t},\\theta)$) is described in terms of its solution in $t=t_1$. \nThe forward step of ODE Euler discretization is as follows:\n\\begin{equation} \\label{Eq3}\n\\begin{aligned}\n \\ \\ \\text L_{t_{0}+h} =\\text L_{t_0} + hf(\\text L_{t_0},\\theta) \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\end{aligned}\n\\end{equation}\nIt can be observed that the single forward step of Eq~\\ref{Eq3}, can be considered as the equivalent to the formulation of the residual block. Therefore, The ODE discretization model can lead to different ODE-inspired network architectures. In this paper, we present an ODE-based deep network for MRI reconstruction which extends the conventional reconstruction framework to its data-adaptive variant using the ODE-based network and provides an end-to-end reconstruction scheme. We evaluate the reconstruction performance of our method to the reconstruction methods based on the standard UNet network [8] and Residual network. 
\n\\section{Method}\nThe discretized version of MR imaging model given by\t\t\n\\begin{equation} \\label{Eq4}\n\\begin{aligned}\n\\text d=\\text E \\text x + \\text n.\n\\end{aligned}\n\\end{equation}\nwhere $\\text x$ is the samples of unknown MR image, and $\\text d$ is the undersampled k-space data. $\\text E =\\text {FS}$ is an encoding matrix, and $\\text F$ is an undersampled Fourier operator. $\\text S$ is a matrix representing the sensitivity map of the coils, and $\\text n$ is noise. Assuming that the interchannel noise covariance has been whitened, the reconstruction relies on the least-square approach:\n\\begin{equation} \\label{Eq5}\n\\begin{aligned}\n\\hat{\\text x} =\\underset{\\text x}{ argmin} \\ \\|\\text d-\\text E\\text x\\|_{2}^{2}\n\\end{aligned}\n\\end{equation}\nThe ODE-based reconstruction framework we used for solving the Eq~\\ref{Eq5} is shown in Fig~\\ref{fig1}. \\\\\nFor a conventional neural network, we minimize the loss function ($l$) over a set of training pairs and we search for the weights ($\\theta$) that minimize that loss function:\n\\begin{equation} \\label{Eq6}\n\\begin{aligned}\n\\underset{\\theta}{ minimize} \\ \\frac{1}{M} \\sum\\limits_{i=1}^M l(L(\\theta;x_i,y_i)) + \\text R(\\theta)\n\\end{aligned}\n\\end{equation}\nwhere $(x_i,y_i)$ is the $i_{th}$ training pairs (input and ground truth). $R$ is a regularization operator and $M$ is the number of training pairs. The loss function depends implicitly on $\\theta$. This optimization is usually solved through Stochastic Gradient Descent (SGD) and backpropagation to compute the gradient of $L$ with respect to $\\theta$. In our ODE-based network, besides the $L$s, the network weights also change with respect to time as well. 
In this case, we need to solve the following constrained optimization problem :\n\\begin{equation} \\label{Eq7}\n\\begin{aligned}\n\\underset{p,\\theta}{ minimize} \\ \\frac{1}{M} \\sum\\limits_{i=1}^M l(L_{t_1};x_i,y_i) + \\text R(p,\\theta)\n\\end{aligned}\n\\end{equation}\n\\begin{equation} \\label{Eq8}\n\\begin{aligned}\n\\frac{d\\text L}{dt} = f(\\text L_{t},\\theta_{t})\n\\end{aligned}\n\\end{equation}\n\\begin{equation} \\label{Eq9}\n\\begin{aligned}\n\\frac{d\\theta}{dt} = w(\\theta_{t},p)\n\\end{aligned}\n\\end{equation}\nwhere $\\theta_t$ is parameterized by the learnable dynamics of\n\\begin{equation} \\label{Eq10}\n\\begin{aligned}\n\\theta_{t_1} =\\theta_{t_0} + \\int_{t_0}^{t_1} w(\\theta_{t},p)\\ dt\n\\end{aligned}\n\\end{equation}\nwhere $w$ is a nonlinear operator responsible for the network weights dynamics and $p$ is the parameters for $w$.\nWe also augment the learnable space and solve the ODE flow as follows so that the learned ODE representation won't preserve the input space topology [5].\n\\begin{equation} \\label{Eq11}\n\\begin{aligned}\n\\frac{d}{dt} \\begin{bmatrix}\n \\text L\\\\\n a\n\\end{bmatrix} = f(\\begin{bmatrix}\n \\text L_{t}\\\\\n a_{t}\n\\end{bmatrix},\\theta_{t})\n\\end{aligned}\n\\end{equation}\nwhere $a_{0}=0$. We use the discretize-then-optimize method [4,6] to calculate the gradients for backpropagating through ODE layers. Figure \\ref{fig2} shows the proposed ODE-based deep network. Five residual blocks have been used in our method (N=5).\n\\section{Results and Discussion}\nIn our experiments, we have tested our method with our MPRAGE brain datasets. The data on ten volunteers with a total of 750 brain images used as the training set. Images from fifteen different volunteers have used as the testing set. The sensitivity maps were computed from a block of size 24x24 using the ESPIRiT [9] method. Reconstruction results with the undersampling factor of 2x2 for different approaches are shown in Fig \\ref{fig3}. 
ResidualNet includes the same number of residual blocks as our proposed method (without ODE layers).\nTable \\ref{table1} shows\nthat our method consistently has higher Peak Signal-to-noise Ratios (PSNR) and structural similarities (SSIM) compared to the reconstructions using the other two networks. In conclusion, an ODE-based deep network for MRI reconstruction is proposed. It enables the rapid acquisition of MR images with improved image quality. The proposed ODE-based network can be easily adopted by unrolled optimization schemes for better MRI reconstruction accuracy.\n\n\n\\begin{figure}\n\\begin{floatrow}\n\\ffigbox{%\n \\includegraphics[scale=0.37]{fig2.jpg\n }{%\n \\caption{The reconstruction framework}%\n \\label{fig1}}\n\\capbtabbox{%\n\\begin{tabular}{rllll}\n\\hline\n\\multicolumn{3}{c} {Brain Dataset} \\\\\n\\cline{2-3} \n\\cline{4-5} \nMethod & PSNR & SSIM \\\\\n\\hline\nProposed & $54.5\\pm1.37$ & $0.99\\pm0.0063$ \\\\\nUNet & $52.4\\pm1.54$ & $0.98\\pm0.0075$ \\\\\nResidualNet & $50.1\\pm1.65$ & $0.978\\pm0.0097$ \\\\\n\\hline\n\\end{tabular}\n}\n{\n\\caption{PSNR and SSIM variations on Brain dataset}%\n\\label{table1}\n}\n\\end{floatrow}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\centerline{\\includegraphics[scale=0.5]{fig1.jpg}}\n \\caption{The proposed ODE-based deep network.}\n \\label{fig2}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\centerline{\\includegraphics[scale=0.15]{fig3.jpg}}\n \\caption{First row (left to right): Reference image using fully sampled data, ResidualNet reconstruction, UNet reconstruction, and our reconstruction all with undersampling factor of 2x2. 
Second row includes error maps correspond to each reconstruction results for comparison.}\n \\label{fig3}\n\\end{figure}\n\\newpage\n\n\\subsubsection*{Acknowledgments}\n\nThis research was supported in part by NIH grants R01 NS079788, R01 EB019483, R01 DK100404, R44 MH086984, IDDRC U54 HD090255, and by a research grant from the Boston Children's Hospital Translational Research Program.\n\n\n\\section*{References}\n\n[1] Haber, E. and Ruthotto, L., \"Stable architectures for deep neural networks,\" Inverse Problems, 34(1), 014004 (2017).\n\n[2] Ruthotto, L. and Haber, E., \"Deep neural networks motivated by partial differential equations,\" arXiv preprint arXiv:1804.04272 (2018).\n\n[3] Chen, T. Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D. K., \"Neural ordinary differential equations,\" in [Advances in neural information processing systems], 6571\u20136583 (2018).\n\n[4] Gholami, A., Keutzer, K., and Biros, G., \"Anode: Unconditionally accurate memory-efficient gradients for neural odes,\" arXiv preprint arXiv:1902.10298 (2019).\n\n[5] Dupont, E., Doucet, A., and Teh, Y. W., \"Augmented neural odes,\" arXiv preprint arXiv:1904.01681 (2019).\n\n[6] Zhang, T., Yao, Z., Gholami, A., Keutzer, K., Gonzalez, J., Biros, G., and Mahoney, M., \"Anodev2: A coupled neural ode evolution framework,\" arXiv preprint arXiv:1906.04596 (2019).\n\n[7] He, K., Zhang, X., Ren, S., and Sun, J., \"Deep residual learning for image recognition,\" in [Proceedings of the IEEE conference on computer vision and pattern recognition], 770\u2013778 (2016).\n\n[8] Ronneberger, O., Fischer, P., and Brox, T., \"U-net: Convolutional networks for biomedical image segmentation,\" in [International Conference on Medical image computing and computer-assisted intervention], 234\u2013241, Springer (2015).\n\n[9] M. Uecker, P. Lai, M. J. Murphy, P. Virtue, M. Elad, J. M.Pauly, S. S. Vasanawala, and M. 
Lustig, \"Espirit an eigenvalue approach to autocalibrating parallel mri: where sense meets grappa,\" Magnetic resonance in medicine, 71(3), 990\u20131001 (2014).\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe duration of particle collisions is an interesting and important aspect of\ngeneral scattering theory which is in a sense complementary to the energy\nrepresentation ordinarily used. A collision is characterized in this case by\nthe time delay of particles in the region of interaction. Wigner \\cite{W-55}\nwas the first who considered the time on which a monochromatic particle with\ngiven\nangular momentum is delayed during its elastic scattering. He established\nthe connection of this time delay to the energy derivative of the scattering\nphase shift. The sharper the energy dependence of the phase shift is the\nlonger is the time delay.\n\nLater on Smith \\cite{S-60} extended the time delay concept on many channel\nproblems introducing the time delay matrix\n\\begin{equation}\\label{q-matr}\n Q^{ab}(E) = -i\\hbar\\left\\{\\frac{d}{d\\,\\varepsilon}\\,\n \\sum_{c}S^{ac}(E\\!+\\!\\frac{\\varepsilon}{2})\n S^{\\ast\\,cb}(E\\!-\\!\\frac{\\varepsilon}{2})\n \\right\\}_{\\varepsilon=0}\\,\\,,\n\\end{equation}\nin the channel space. Here $S$ is the scattering matrix and the summation\nindex $c$ runs over all the $M$ open scattering channels. The matrix $Q$ is\nhermitian; its diagonal element $Q^{cc}$ coincides with the mean duration of\ncollision (time delay) in the $c$-th entrance channel. Generally speaking,\nthe delays are different in different channels $c$. Taking the trace of the\nSmith matrix, one arrives at the simple weighted-mean characteristic\n\\begin{equation}\\label{q1}\nQ(E) = \\frac{1}{M}\\,\\sum_{c} Q^{cc}\n = -\\frac{i}{M}\\,\\frac{d}{dE}\\,\\ln \\det S(E)\n\\end{equation}\nof the duration of collisions. Eq. 
(\\ref{q1}) is just the many-channel\nversion of the well-known simple Wigner formula. (Here and below we set\n$\\hbar=1$.)\n\nThe time delay turns out to be an especially pertinent concept for the chaotic\nresonance scattering encountered in atomic, molecular and nuclear physics\n\\cite{L-77, LW-91}, as well as in the scattering of electromagnetic\nmicrowaves \\cite{Sr-91, GHLLRRSW-92, SS-92} in resonance billiard-like\ncavities. The quantity $Q(E)$, being closely connected to the complex energy\nspectrum of resonance states, shows in its energy dependence strong\nfluctuations around a smooth regular variation. The two kinds of variation on\ndifferent energy scales are naturally decomposed\n\\begin{equation}\\label{atd}\nQ(E)=\\langle Q(E)\\rangle+Q_{fl}(E)\\,\\,,\n\\end{equation}\nwith an energy or ensemble averaging. By this, the slow energy dependence of\ntime delay is revealed whereas the two-point autocorrelation function\n\\begin{equation}\\label{dcf}\nC_Q(E,\\varepsilon) =\n\\langle Q_{fl}(E\\!+\\!\\frac{\\varepsilon}{2})\nQ_{fl}(E\\!-\\!\\frac{\\varepsilon}{2})\\rangle=\n\\langle Q(E\\!+\\!\\frac{\\varepsilon}{2})\nQ(E\\!-\\!\\frac{\\varepsilon}{2})\\rangle -\n\\langle Q(E\\!+\\!\\frac{\\varepsilon}{2})\\rangle\n\\langle Q(E\\!-\\!\\frac{\\varepsilon}{2})\\rangle\n\\end{equation}\nis used to characterize the time delay fluctuations.\n\nTo the best of our knowledge, the first consideration of these fluctuations\nhas been made numerically as well as analytically in \\cite{WJ-89} and\n\\cite{SW-92} in the framework of rather peculiar model of resonance elastic\nquantum scattering on a leaky surface of constant negative curvature. The\nnoteworthy feature of this model is that the poles of the scattering\namplitude turn out to correspond to zeros of the famous Riemann\n$\\zeta$-function. 
The real parts of the poles are therefore supposed\n\\cite{Mo-73} to be chaotically distributed similar to the eigenvalues of the\nGaussian Unitary Ensemble whereas all their imaginary parts (the widths of\nresonances) are the same. The latter very specific property partly deprives\nthe model of its interest since actual single-channel widths are known to\nexhibit quite large fluctuations \\cite{Po-65}.\n\nThe width fluctuations are suppressed when many channels are open. In this\ncase semiclassical approximation can be as a rule expected to be valid. The\nsemiclassical analysis of the time delay in terms of closed periodic orbits\nis given in \\cite{Ec-93}. It is in particular emphasized there that only the\ntail of the correlation function (\\ref{dcf}) corresponding to the very large\nvalues of $\\varepsilon$ can immediately be related to the (short) periodic\norbits. Quite opposite, the central peak near the point $\\varepsilon=0$ is\nformed as a result of a strong interference of many orbits. Therefore, its\nwidth describing the long-time asymptotic behaviour of the Fourier transform\nhas no direct connection to the classical escape rate and has rather to be\ncalculated on the pure quantum ground. This is in line with the results of\nthe analysis \\cite{GR-89} of distribution of the resonance widths in the\nthree discs scattering problem.\n\nIt is now generally acknowledged that the random matrix theory \\cite{Me-67}\nrepresents a suitable and reliable foundation for description of local\nproperties of dynamical quantum chaos \\cite{BG-84}. We therefore use below a\nrandom matrix model of chaotic scattering to calculate the time delay\nautocorrelation function. We suppose as usual that the number $N$ of\nresonances is asymptotically large and use the powerful supersymmetry\ntechnique \\cite{E-83}\nfirst applied to chaotic scattering problems in \\cite{VWZ-85}. 
The number $M$\nof the (statistically equivalent) scattering channels\ncan be small or large or can even scale with the number of resonance states.\nOne can treat the latter two cases \\cite{LW-91, HILSS-92, LSSS-94} as a\n\"semiclassical limit\" in the matrix model. We show here that the time-delay\nlocal fluctuations are governed, similar to those of the $S$-matrix\n\\cite{LSSS-94}, by the gap between the real axis and the upper edge of the\ndistribution of resonance energies in the complex energy plane. We compare\nthis result with that obtained in the framework of the periodic orbit\napproach.\n\nIn the next section our statistical matrix model is briefly presented.\nThe connections of average time delay with the resonance spectrum and S-matrix\nfluctuations are elucidated in sec.~3. After a short description in sec.~4\nof the supersymmetry method which we use the main analytical results for the\ntime delay correlation function are given and discussed in detail in sec.~5.\nSome numerical results shedding additional light upon properties of the time\ndelay correlations are gathered in sec. 6. We close with a brief summary\nin sec.~7.\n\n\\section{The Resonance Matrix Model}\nAccording to the general scattering theory, the evolution of the $N$-level\nunstable system formed on intermediate stage of a resonance collision is\ndescribed \\cite{MW-69, KNO-69, SZ-89} by the effective Hamiltonian\n\\begin{equation}\\label{hamil}\n{\\cal H} = H - i\\gamma\\, W,\\; \\; \\;\\; W = VV^{T}\\,\\,.\n\\end{equation}\nThe Hamiltonian (\\ref{hamil}) acts within the intrinsic $N$-dimensional space\nbut acquires, due to the elimination of continuum variables, an antihermitian\npart. The hermitian matrix $H$ is the internal Hamiltonian with a discrete\nspectrum whereas the rectangular $N\\times M$ matrix $V$ consists of the\namplitudes $V_m^c$ of transitions between $N$ internal and $M$ channel\nstates. 
These amplitudes are real in T-invariant theory, so that the matrix\n$W$, similar to $H$, is real and symmetric. As usual, we neglect the smooth\nenergy dependence of $V$ and $W$. The dimensionless parameter $\\gamma$\ncharacterizes the strength of the coupling of the internal motion to the\ncontinuum.\n\nThe poles of the resonance scattering matrix in the complex energy plane\nare those of the Green's function \\cite{MW-69, KNO-69, SZ-89}\n\\begin{equation}\\label{green}\n{\\cal G}(E) = (E-{\\cal H})^{-1}\\,\\,.\n\\end{equation}\nThey coincide with the eigenvalues ${\\cal E}_n=E_n-\\frac{i}{2}\\Gamma_n$ of\nthe effective Hamiltonian ${\\cal H}$ with $E_n$ and $\\Gamma_n$ being the\nenergy and width of $n$-th resonance state. It what follows, the properties\nof the spectrum of complex energies ${\\cal E}_n$ play the crucial role.\n\nThe intrinsic chaoticity of the internal motion of long-lived intermediate\nsystem manifests itself by chaotic fluctuations in resonance scattering and\ndemands a statistical consideration. Therefore the random matrix approach\nextending\nthe well-known \\cite{Po-65, Me-67} description of chaotic bounded systems\nhas been worked out in \\cite{W-84, VWZ-85, SZ-89}. It is usually\nassumed that the hermitian part $H$ of the effective Hamiltonian belongs to\nthe Gaussian Orthogonal Ensemble (GOE),\n\\begin{equation}\\label{goe}\n\\langle H_{nm} \\rangle = 0,\\ \\ \\ \\langle H_{nm}H_{n'm'} \\rangle =\n\\frac{\\lambda^2}{N}(\\delta_{nn'}\\delta_{mm'}+\\delta_{nm'}\\delta_{mn'})\\,\\,.\n\\end{equation}\nIn the limit $N\\rightarrow\\infty$ eigenvalues of $H$ are situated in the\ninterval $[-2\\lambda,2\\lambda]$ with the density given by Wigner's\nsemicircle law. 
Following \cite{SZ-89}, we take the transition amplitudes\n$V_n^c$ to be statistically independent Gaussian variables as well,\n\begin{equation}\label{rand}\n\langle V^a_n \rangle = 0,\, \, \,\n\langle V^a_nV^b_m \rangle = \frac{\lambda}{N}\delta^{ab}\delta_{nm}\,\,.\n\end{equation}\nWe will use below the ensemble (\ref{goe},\ref{rand}) to calculate the\naverage quantities defined in (\ref{atd},\ref{dcf}).\n\n\section{Time Delay and Resonance Spectrum}\n\nSince we have neglected the smooth energy dependence of the effective\nHamiltonian (\ref{hamil}),\nthe poles ${\cal E}_n$ in the lower part of the complex energy\nplane are the only singularities of the resonance scattering matrix. Due to\nthe unitarity condition their complex conjugates ${\cal E}_n^*$ serve as the\n$S$-matrix zeros. These two conditions result in the representation\n\begin{equation}\label{det_s}\n\det\,S(E) = \prod_{n} \frac{E-{\cal E}^{\ast}_{n}}{E-{\cal E}_{n}}\,\,.\n\end{equation}\nSubstituting eq.(\ref{det_s}) in eq.(\ref{q1}), we arrive at the important\nconnection\n\begin{equation}\label{q2}\n Q(E) = -2\,\mbox{Im}\,\left\{\frac{1}{M}\mbox{tr\,} {\cal G(E)}\right\}\n = \frac{1}{M}\,\sum_n \frac{\Gamma_n}{(E-E_n)^2+\frac{1}{4}\Gamma_n^2}\n\end{equation}\nbetween the time delay and the trace of the Green's function (\ref{green}) of\nthe intermediate unstable system. The time delay is entirely determined by\nthe spectrum of complex energies of this system. The collision duration\ndirectly reflects the statistical properties of resonances. 
This is in\ncontrast to the scattering amplitudes $S^{cc'}$, which explicitly depend also\non the transition amplitudes $V_n^c$.\n\nThe ensemble averaging of eq.(\ref{q2}) gives\n\begin{equation}\label{qav}\n \langle Q(E) \rangle = \frac{2}{m\lambda}\,\mbox{Re}\,{\sl g}(E)\n\end{equation}\nwhere $m < 1$ is the ratio $M\/N$ and the function\n\begin{equation}\label{gav}\n {\sl g}(E) = i\lambda\frac{1}{N}\,\langle \mbox{tr\,}\,{\cal G}(E)\rangle\n\end{equation}\nsatisfies the cubic equation \cite{LSSS-94}\n\begin{equation}\label{qeq}\n {\sl g}(E)-\frac{1}{{\sl g}(E)}+\frac{m\gamma}{1+\gamma{\sl g}(E)}-\n i\frac{E}{\lambda}=0\,\,.\n\end{equation}\nThe (unique) solution with a positive real part has to be chosen. It can be\nseen from the consideration given in \cite{LSSS-94} that this real part is\nclose to $\frac{\lambda}{N}\pi\rho(E)$, with $\rho(E)$ being the density of\nresonance levels in the complex energy plane projected onto the real energy\naxis near the scattering energy $E$.\n\nOn the other hand, averaging eq.(\ref{q-matr}) directly, we express\n$\langle Q \rangle$ in terms of the two-point $S$-matrix correlation function\n\cite{VWZ-85, LSSS-94}. 
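The cubic equation (\ref{qeq}) is easy to handle numerically: multiplying it by ${\sl g}\,(1+\gamma{\sl g})$ turns it into a polynomial whose roots can be scanned for the physical branch with positive real part. The sketch below uses illustrative parameter values ($\lambda$, $\gamma$, $m$ are assumptions, not taken from the text):

```python
import numpy as np

def g_of_E(E, lam=1.0, gamma=0.5, m=0.25):
    # Multiplying eq. (qeq) by g*(1 + gamma*g) gives the cubic polynomial
    #   gamma*g^3 + (1 - i*e*gamma)*g^2 + (m*gamma - gamma - i*e)*g - 1 = 0,
    # where e = E/lam; the physical branch has a positive real part.
    e = E / lam
    roots = np.roots([gamma, 1 - 1j * e * gamma, m * gamma - gamma - 1j * e, -1])
    return next(g for g in roots if g.real > 0)

lam, gamma, m = 1.0, 0.5, 0.25       # illustrative parameter values
g0 = g_of_E(0.0, lam, gamma, m)
residual = g0 - 1 / g0 + m * gamma / (1 + gamma * g0)   # eq. (qeq) at E = 0
Q_avg = 2 * g0.real / (m * lam)                          # eq. (qav)
```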
In the limit of a large number of statistically\nequivalent channels, $M\gg 1$, scaling with the number of resonances $N$,\none finds\n\begin{equation}\label{q-ss}\n\langle Q \rangle =\n-i\frac{dC_S(\varepsilon)}{d\varepsilon}\Bigg|_{\varepsilon=0} +\ni\frac{d{\cal T}(\varepsilon)}{d\varepsilon}\Bigg|_{\varepsilon=0}\,\,.\n\end{equation}\nHere \cite{LSSS-94}\n\begin{equation}\label{scor}\n C_S(\varepsilon) = \frac{i\Gamma(\varepsilon)}{\varepsilon +\n i\Gamma(\varepsilon)}\,{\cal T}(\varepsilon) \equiv\n K(\varepsilon)\,{\cal T}(\varepsilon)\n\end{equation}\nwith the two smooth functions defined by\n\begin{equation}\label{G,T}\n \Gamma(\varepsilon) = \frac{m}{2}\lambda\,\n \frac{{\cal T}(\varepsilon)}{{\sl g}(\varepsilon\/2)}\;\;\;,\n \;\;\;{\cal T}(\varepsilon) = \frac{4\gamma{\sl g}(\varepsilon\/2)}\n {\left[1+\gamma{\sl g}(\varepsilon\/2)\right]^2}\n\end{equation}\nand we set $E=0$ for the sake of simplicity. The quantity\n\begin{equation}\label{trc}\nC_S(0) = {\cal T}(0)\equiv T\n\end{equation}\ncoincides with the transmission coefficient $T=1-|\langle S\rangle|^2$. With\neq.(\ref{scor}) taken into account we obtain from (\ref{q-ss})\n\begin{equation}\label{qav2}\n\langle Q \rangle =\n-iT\frac{dK(\varepsilon)}{d\varepsilon}\Bigg|_{\varepsilon=0}\n= \frac{T}{\Gamma_0}\,\,,\n\end{equation}\nwhere we have designated $\Gamma(0)$ as $\Gamma_0$.\n\nAs long as the typical values of the quantity $\Gamma(\varepsilon)$ are small\ncompared to the parameter $\lambda$ characterizing the scale of the smooth\n$\varepsilon$-dependence of the function ${\cal T}(\varepsilon)$, the two\nfactors on the r.h.s. of eq.(\ref{scor}) have quite different energy scales.\nOnly the first, rapidly varying factor $K(\varepsilon)$ describes the local\nfluctuations, whereas the second corresponds to the joint influence of all\nresonances, giving rise to processes of very short duration. 
The\nlatter is what appeared in eq.(\ref{qav2}). The average time delay of a\nnon-monochromatic, spatially narrow wave packet caused by the formation of a\nlong-lived intermediate state \cite{L-77, DHM-92} is determined just by the\nfactor $K(\varepsilon)$ \cite{LSSS-94}\n\begin{equation}\label{dstd}\n\langle \tau \rangle =\n-i\frac{dK(\varepsilon)}{d\varepsilon}\Bigg|_{\varepsilon=0}\n= \Gamma_0^{-1}\,\,.\n\end{equation}\nThis implies the connection \cite{DHM-92, LSSS-94}\n\begin{equation}\label{std}\n\langle\tau\rangle = \langle Q\rangle\/T =\n\frac{2N}{\lambda MT}\,{\sl g}(0)\approx\frac{2\pi\rho}{MT}\,\,.\n\end{equation}\n\n\section{The Supersymmetry Method}\n\nNow we calculate the correlation function (\ref{dcf}). Taking into account\nthe relation (\ref{q2}), one can cast eq.(\ref{dcf}) into the form\n\begin{equation}\label{qq2}\nC_Q(E,\varepsilon)= \frac{2}{M^2}\,\mbox{Re} \left\{\n\langle \mbox{tr\,}{\cal G}(E\!+\!\frac{\varepsilon}{2})\n\mbox{tr\,}{\cal G}^{\dagger}(E\!-\!\frac{\varepsilon}{2}) \rangle\n- \langle \mbox{tr\,}{\cal G}(E\!+\!\frac{\varepsilon}{2}) \rangle\n\langle \mbox{tr\,}{\cal G^{\dagger}}(E\!-\!\frac{\varepsilon}{2})\n\rangle \right\}\,\,.\n\end{equation}\nWe also define the normalized quantity\n\begin{equation}\label{Nqq}\nK_Q(E,\varepsilon)=\frac{C_Q(E,\varepsilon)}{{\langle Q(E)\rangle}^2}\,\,.\n\end{equation}\nThe terms containing two Green's functions with poles on the same side of\nthe real energy axis are omitted in (\ref{qq2}). We will briefly return to\nthis point later.\n\nIn the limit $\gamma=0$, when the system becomes closed, the correlation\nfunction (\ref{qq2}) becomes proportional to the GOE density-density\ncorrelation, which consists \cite{Me-67} of the singular term\n$\delta(\pi\rho\varepsilon)$ and Dyson's smooth function\n$-Y_2(\pi\rho\varepsilon)$. 
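The identity (\ref{q2}) between the two representations of $Q(E)$ can be verified directly on a finite-$N$ realization of the ensemble (\ref{goe}), (\ref{rand}); the sketch below (with arbitrary illustrative values of $N$, $M$, $\gamma$ and $E$, which are our assumptions) builds the effective Hamiltonian (\ref{hamil}) and compares the trace of the Green's function with the sum over resonances:

```python
import numpy as np

# Finite-N realization of the ensemble (goe), (rand); all sizes and the
# coupling strength are illustrative assumptions, not values from the text.
rng = np.random.default_rng(1)
N, M, lam, gamma = 200, 10, 1.0, 0.3

A = rng.normal(scale=lam / np.sqrt(N), size=(N, N))
H = (A + A.T) / np.sqrt(2.0)                          # GOE matrix, eq. (goe)
V = rng.normal(scale=np.sqrt(lam / N), size=(N, M))   # amplitudes, eq. (rand)
H_eff = H - 1j * gamma * (V @ V.T)                    # eq. (hamil)

E = 0.1
evals = np.linalg.eigvals(H_eff)                      # E_n - (i/2)*Gamma_n
E_n, Gamma_n = evals.real, -2.0 * evals.imag

# The two sides of eq. (q2):
Q_sum = np.sum(Gamma_n / ((E - E_n) ** 2 + Gamma_n ** 2 / 4)) / M
G = np.linalg.inv(E * np.eye(N) - H_eff)              # Green's function (green)
Q_tr = -2.0 * np.imag(np.trace(G)) / M
```

Since $W=VV^{T}$ is positive semidefinite, all widths $\Gamma_n$ come out non-negative, and the two expressions agree to machine precision.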
Coupling to the continuum\nleads to the appearance of a new energy scale set by the decay processes.\nThis scale is defined \cite{LSSS-94} by the quantity $\Gamma(\varepsilon)$\nfrom eq.(\ref{G,T}). One can anticipate a qualitative change of the\ncorrelation function on this scale. For larger distances the influence of the\nantihermitian part should fade away, and the asymptotics of $C_Q$ for\n$\varepsilon\rightarrow\infty$ is expected to coincide with that of\nDyson's function $-Y_2(\pi\rho\varepsilon)$.\n\nTo perform the ensemble averaging in (\ref{qq2}) we use the modification\nworked out in \cite{LSSS-94} of the supersymmetry technique \cite{VWZ-85}.\nUsing the integral representation of the Green's function as a multivariate\nGaussian integral over commuting and anticommuting variables, one can carry\nout the averaging exactly. With the help of a Fourier\ntransformation in the supermatrix space, the integration over the initial\nauxiliary supervectors is then carried out. 
Proceeding along these lines, one finally\narrives at\n\begin{equation}\label{grgr}\n \langle \mbox{tr\,}{\cal G}(E\!+\!\frac{\varepsilon}{2})\,\n \mbox{tr\,}{\cal G^{\dagger}}(E\!-\!\frac{\varepsilon}{2}) \rangle =\n -\frac{N^2}{4}\langle\mbox{str\,}(\sigma \eta_1)\,\mbox{str\,}\n (\sigma \eta_2) \rangle_{{\cal L}}\,\,.\n\end{equation}\nHere the shorthand $\langle\ldots\rangle_{{\cal L}}$ is used to denote\nthe integral\n\begin{equation}\label{shand}\n \langle \ldots \rangle_{{\cal L}} =\n \int\!d[\sigma]\,d[\hat\sigma]\,\exp \{\n - N{\cal L}(\sigma,\hat\sigma) \} (\ldots)\n\end{equation}\nover two $8\times8$ supermatrices $\sigma$ and $\hat\sigma$ with the\nmeasure defined by the Lagrangian \cite{LSSS-94}\n\begin{equation}\label{Lg}\n {\cal L}(\sigma,\hat\sigma) = \frac{1}{4}\,\mbox{str\,}\sigma^2\n - \frac{i}{2}E\,\mbox{str\,}\sigma -\n \frac{i}{2}\mbox{str\,}(\sigma\hat\sigma) +\n \frac{1}{2}\mbox{str\,}\ln(\hat\sigma) + \frac{m}{2}\,\mbox{str\,} \ln\n (1\!+\!\gamma\sigma \eta) - \frac{i}{4}\varepsilon\,\mbox{str\,}(\sigma\n \eta)\,\,.\n\end{equation}\nThe diagonal supermatrices appearing above are equal to\n\[\eta=\mbox{diag}(1,1,-1,-1,1,1,-1,-1)\,\,,\]\n\[\eta_1=\mbox{diag}(1,1,0,0,-1,-1,0,0)\;\;\;,\;\;\;\n\eta_2=\mbox{diag}(0,0,1,1,0,0,-1,-1)\,\,. \]\nHere we have set the GOE parameter $\lambda$ equal to one.\n\nThe supermatrix $\sigma$ can be decomposed in the following way \cite{VWZ-85}\n\begin{equation}\n\sigma=T_0\,\sigma_R\,T_0^{-1}\n\end{equation}\nwhere $T_0$ is a transformation from a non-compact manifold whereas the matrix\n$\sigma_R$ is\ndiagonalized by transformations from a compact one. This implies a\ncorresponding decomposition of the integrals on the r.h.s. 
of (\ref{shand})\n\begin{equation}\label{z4}\n \langle \ldots \rangle_{{\cal L}} =\n \int\!{\cal F}(\sigma_R)\,d[\sigma_R]\,d[\hat\sigma]\n \exp\{-N{\cal L}_R(\sigma_R,\hat\sigma) \}\n \int\!d\mu\exp\{-N{\cal L}_{\mu}(\sigma_R,T_0)\}(\ldots)\,\,.\n\end{equation}\nThe Berezinian ${\cal F}(\sigma_R)$ depends only on the eigenvalues of\n$\sigma_R$; $d\mu$ is the invariant measure of the manifold of non-compact\ntransformations $T_0$. Finally, the Lagrangian (\ref{Lg}) is split into\ntwo parts, ${\cal L}_R$ and ${\cal L}_{\mu}$, given by\n\begin{equation}\label{dLg}\n\begin{array}{l}\n {\cal L}_R(\sigma,\hat\sigma) = \frac{1}{4}\mbox{str\,}\sigma_R^2 -\n \frac{i}{2}E\,\mbox{str\,}\sigma_R -\n \frac{i}{2}\mbox{str\,}(\sigma_R\hat\sigma) +\n \frac{1}{2}\mbox{str\,}\ln(\hat\sigma)\,\,, \\ \\ {\cal\n L}_{\mu}(\sigma_R,T_0) = -\frac{i}{4}\varepsilon\,\mbox{str\,}(\sigma_R\n T_0^{-1}\eta T_0) + \frac{m}{2}\,\mbox{str\,}\ln(1\!+\!\gamma\sigma_R\n T_0^{-1}\eta T_0)\,\,. \end{array}\n\end{equation}\nOnly the second part\n${\cal L}_{\mu}$ depends on the non-compact variables. The first one,\n${\cal L}_R$, is invariant under a transformation by\n$T_0$, since the transformation is fully absorbed by an appropriate\ntransformation of $\hat\sigma$. One can easily verify that the corresponding\nBerezinian is equal to unity.\n\nSince the number of resonances $N\rightarrow \infty$, the integrations over\n$\sigma_R$ and $\hat\sigma$ can be carried out in the saddle-point\napproximation. At the same time, one has to integrate exactly over the\nnon-compact variables as long as the number of channels $M$ is finite\n($m=0$). The saddle-point approximation becomes valid for the latter\nintegration when the number $M$ also tends to infinity ($m$ is finite). We\nwill consider both of these cases. 
To simplify the formulae we restrict\nour further consideration to the center of the GOE spectrum, $E=0$.\n\n\section{Time Delay Correlation Function}\n\nLet us first consider collisions with a fixed number of channels $M$. The\nlogarithmic term in ${\cal L}_{\mu}$, being proportional to the small ratio\n$m$, then does not influence the saddle-point equations in the\n$(\sigma_R,\hat\sigma)$-sector. In particular, the term in (\ref{qeq})\ncontaining this ratio has to be omitted. The saddle-point equations are\ntrivially solved in this case and at the point $E=0$\n\begin{equation}\label{sps1}\n\hat\sigma=-i\sigma_R^{-1}\;\;,\;\;\sigma_R=\eta\,\,.\n\end{equation}\nWith the integrations over $\sigma_R$ and $\hat\sigma$ done, the\ncorrelation function (\ref{dcf}) reduces to the integral\n\begin{equation}\label{Cqq}\n K_Q(\varepsilon)=2\,\mbox{Re}\int\!d\mu\,\n \mbox{str\,}(\kappa\alpha_1)\,\mbox{str\,}(\kappa\alpha_2)\,\n \exp\Bigl\{ \frac{i}{2}\pi\rho\varepsilon\,\mbox{str\,}\alpha_1\n-\frac{M}{2}\mbox{str\,}\ln(1\!+\!\frac{1}{2}T\alpha_1) \Bigr\}\n\end{equation}\nover the invariant measure of the non-compact manifold of $T_0$-matrices.\nHere $\alpha_{1,2}$ are the $4\times 4$ supermatrices defined in\n\cite{VWZ-85}, the supermatrix $\kappa=\mbox{diag}(1,1,-1,-1)$ and\n\begin{equation}\label{trc0}\nT=\frac{4\gamma}{(1+\gamma)^2}\n\end{equation}\nis the transmission coefficient (\ref{trc}) calculated in the limit $m=0$.\n\nThe further calculations proceed along the lines described in detail in\n\cite{VWZ-85} and lead to the 
result\n\[K_Q(\varepsilon)=\frac{1}{4}\,\int\limits_0^1\!d\lambda_0\!\n\int\limits_0^{\infty}\!d\lambda_1\!\int\limits_0^{\infty}\!d\lambda_2\,\n\mu(\lambda_0,\lambda_1,\lambda_2)(2\lambda_0+\!\lambda_1\!+\!\lambda_2\!)^2\,\n\mbox{cos}\{\pi\rho\varepsilon(2\lambda_0+\!\lambda_1\!+\!\lambda_2\!)\}\]\n\begin{equation}\label{K-f}\n\times\left[ \frac{(1\!-\!T\lambda_0)^2}\n{(1\!+\!T\lambda_1)(1\!+\!T\lambda_2) }\right]^{M\/2}\n\end{equation}\nwhere\n\[\mu(\lambda_0,\lambda_1,\lambda_2)=\n\frac{(1\!-\!\lambda_0)\lambda_0|\lambda_1-\lambda_2|}\n{[(1+\lambda_1)\lambda_1(1+\lambda_2)\lambda_2]^{1\/2}\n(\lambda_0\!+\!\lambda_1)^2(\lambda_0\!+\!\lambda_2)^2 }\,\,.\]\n\nThe dependence of the function $K_Q$ on the openness of the unstable system\nis fully contained in the last factor in (\ref{K-f}). If at least one of the\nquantities $M$ or $T$ is equal to zero, the threefold integral reduces to a\nsingle one \cite{E-83}\n\begin{equation}\label{d-d}\nK_Q^{(0)}(\varepsilon)=\int\limits_0^2\!dt\,\nt\left(1-\frac{1}{2}\ln(t\!+\!1)\right)\n\mbox{cos}(\pi\rho\varepsilon t)+\n\int\limits_2^{\infty}\!dt\,\n\left(2-\frac{t}{2}\ln\frac{t\!+\!1}{t\!-\!1}\right)\n\mbox{cos}(\pi\rho\varepsilon t)\n\end{equation}\n\[=\delta(\pi\rho\varepsilon)-Y_2(\pi\rho\varepsilon)\]\nwhich is just the normalized GOE density-density correlation function.\n\nGenerally speaking, the threefold integral in (\ref{K-f}) can be investigated\nfor an arbitrary number of channels $M$ only numerically, using the methods\ndeveloped in \cite{V-86} (see the next section). However, this integral can\nbe simplified if $M$ becomes large enough. Let the number $M$ grow while\nkeeping the ratio $m=0$ and the product $MT=2\pi\rho\Gamma_W$ (compare with\n(\ref{std})) fixed. 
The quantity $\Gamma_W$ is just the limiting value of\n$\Gamma_0$ with $T$ and ${\sl g}$ calculated in the limit $m=0$. It coincides\nwith the well-known semiclassical Weisskopf estimate \cite{BW-79} of the\ncorrelation length of Ericson fluctuations. Then\n\begin{equation}\label{t-exp}\n\left[\frac{(1\!-\!T\lambda_0)^2}\n{(1\!+\!T\lambda_1)(1\!+\!T\lambda_2)}\right]^{M\/2}\n\rightarrow\exp\{-\pi\rho\Gamma_W\,\n(2\lambda_0+\!\lambda_1\!+\!\lambda_2\!)\},\n\end{equation}\nand one obtains, similarly to eq.(\ref{d-d}),\n\[K_Q(\varepsilon)=\int\limits_0^2\!dt\,t e^{(-\pi\rho\Gamma_W t)}\,\n\left(1-\frac{1}{2}\ln(t\!+\!1)\right)\n\mbox{cos}(\pi\rho\varepsilon t)\]\n\begin{equation}\label{lM}\n+\int\limits_2^{\infty}\!dt\,e^{(-\pi\rho\Gamma_W t)}\,\n\left(2-\frac{t}{2}\ln\frac{t\!+\!1}{t\!-\!1}\right)\n\mbox{cos}(\pi\rho\varepsilon t)\,\,.\n\end{equation}\nThis is in close analogy with the consideration of the S-matrix correlation\nfunction made in \cite{V-86}.\n\nA new convergence factor has appeared in the integrals in (\ref{lM}) as\ncompared to (\ref{d-d}), where only the oscillating cosine cuts off the\nintegral in the region\nof asymptotically large $t$. This makes the function $K_Q$ finite for all\nvalues of $\varepsilon$ including zero, so that the $\delta$-function is now\nsmeared out. The behaviour of $K_Q(\varepsilon)$ is quite different in the\nregions $\varepsilon\ll\Gamma_W$ and $\varepsilon\gg \Gamma_W$. In the first\none it is determined by decays and therefore is sensitive to the coupling to\nthe continuum. Quite the opposite, for large $\varepsilon$ the behaviour\nbecomes universal since the GOE fluctuations described by Dyson's function\n$Y_2$ are restored. 
This is perfectly reasonable since an open system cannot be\ndistinguished from a closed one during a short time $t\ll \Gamma_W^{-1}$.\n\nThe first, $\gamma$-sensitive domain widens as the width $\Gamma_W$\ngrows.\nIn the case of small $\rho\Gamma_W\ll 1$ (isolated resonances) it is natural\nto separate out the contribution of the asymptotics of the integrand by\npresenting (\ref{lM}) in the form\n\begin{equation}\label{isol}\n K_Q(\varepsilon) = \frac{1}{\pi\rho}\,\frac{\Gamma_W}\n{(\varepsilon^2\!+\!\Gamma_W^2)}\, +\n\end{equation}\n\[\int\limits_0^2\!dt\,e^{-\pi\rho\Gamma_W t}\,\n\left(t-\frac{t}{2}\ln(t\!+\!1)-1\right)\n\mbox{cos}(\pi\rho\varepsilon t) +\n\int\limits_2^{\infty}\!dt\,e^{-\pi\rho\Gamma_W t}\,\n\left(1-\frac{t}{2}\ln\frac{t\!+\!1}{t\!-\!1}\right)\n\mbox{cos}(\pi\rho\varepsilon t)\,\,.\]\nThe Lorentzian contribution with the width $\Gamma_W$, directly traceable to\nthe GOE $\delta$-function, dominates in the domain $\varepsilon\mathrel{\mathpalette\fun <}\Gamma_W$.\nThe sum of the integrals in the second line is negative for all values of\n$\varepsilon$ and approaches asymptotically the function $Y_2$ from above.\nWe thus come to the conclusion that the correlation function vanishes at some\nintermediate point $\varepsilon_0$ which can be estimated as\n\begin{equation}\label{isoz}\n\varepsilon_0\simeq \sqrt{\frac{\Gamma_W}{\pi\rho}}\n\end{equation}\nusing the condition\n\[\frac{1}{\pi\rho}\,\frac{\Gamma_W}{(\varepsilon_0^2\!+\!\Gamma_W^2)}\n\sim |Y_2(\rho\varepsilon_0)|\sim 1\,\,.\]\n\nThe regime of strongly overlapping resonances, $\rho\Gamma_W\gg 1$, is the\nmost interesting. In this case the\nmain contribution to $K_Q$ comes from the region of small $t$. Therefore, the\nsecond integral in (\ref{lM}) can be neglected. 
Dropping the small\nlogarithmic term in the first integral and extending its upper limit to\ninfinity, we arrive at\n\begin{equation}\label{overl}\nK_Q(\varepsilon)\approx\int\limits_0^\infty\!dt\,t e^{(-\pi\rho\Gamma_W t)}\,\n\mbox{cos}(\pi\rho\varepsilon t) =\n\frac{1}{\pi^2\rho^2}\frac{\Gamma_W^2-\varepsilon^2}\n{(\varepsilon^2+\Gamma_W^2)^2}\,\,.\n\end{equation}\nCorrections to this result are of higher order with respect to the parameter\n$(\rho\Gamma_W)^{-1}$. The function (\ref{overl}) is not a Lorentzian at all.\nDecreasing quadratically in a small vicinity of the point $\varepsilon=0$, it\nsubsequently deviates from a Lorentzian, becomes zero at the point\n$\varepsilon=\Gamma_W$, reaches a negative minimum and finally approaches\nzero from below. A correlation function of just this form, with $\Gamma_W$\nreplaced by the classical escape rate, was conjectured in \cite{Ec-93}\nas the limiting classical expression following from the periodic orbit\npicture. However, there is no room for the classical escape rate in the\nmatrix models considered here. One can see that the form found here has, in\nfact, a quantum origin.\n\nOne should return to the exact expressions (\ref{z4},\ref{dLg}) if the ratio\n$m$ is finite. The resonances strongly overlap in this case. The\nsaddle-point is now found to be\n\begin{equation}\label{sps2}\nT_0=1\;\,,\;\, \hat\sigma=-i\sigma_R^{-1}\;\,,\n\;\, \sigma_R={\sl g}(\varepsilon\/2)\,\eta\,\,,\n\end{equation}\nwhere ${\sl g}$ is the solution of the cubic equation (\ref{qeq}) chosen\nabove. 
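The non-Lorentzian shape of (\ref{overl}) is easy to exhibit numerically; in the sketch below the values of $\rho$ and $\Gamma_W$ are illustrative assumptions, and the closed form is checked against the defining integral:

```python
import numpy as np

rho, Gamma_W = 1.0, 0.5        # illustrative values, not from the text

def K_Q(eps):
    # closed form of eq. (overl)
    return (Gamma_W**2 - eps**2) / (np.pi**2 * rho**2 * (eps**2 + Gamma_W**2)**2)

def K_Q_integral(eps, t_max=60.0, n=400001):
    # defining integral of eq. (overl), truncated where the exponential
    # factor is negligible and evaluated by the trapezoidal rule
    t = np.linspace(0.0, t_max, n)
    f = t * np.exp(-np.pi * rho * Gamma_W * t) * np.cos(np.pi * rho * eps * t)
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

# K_Q is positive at eps = 0, vanishes exactly at eps = Gamma_W and is
# negative beyond it -- clearly not a Lorentzian.
```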
The sequential saddle-point integrations over\n$\sigma_R,\hat\sigma$ and then over the non-compact manifold result in the\nexpression\n\begin{equation}\label{Cqqm}\nK_Q(\varepsilon)=-\frac{4}{M^2T^2}\,\mbox{Re}\,\n\frac{\Gamma_0^2}{\left[\varepsilon+i\Gamma(\varepsilon)\right]^2}\n\end{equation}\nwhere the function $\Gamma(\varepsilon)$ defined in (\ref{G,T}) is just the\none appearing when the $S$-matrix fluctuations are considered \cite{LSSS-94}.\n\nThe explicit dependence on $\varepsilon$ gives rise to a sharp variation of\nthe correlation function (\ref{Cqqm}) in the vicinity of zero if the typical\nvalues $|\Gamma(\varepsilon)|\ll 1$ (see eq.(\ref{scor}) and the discussion\nbelow it). As long as the ratio $m$ is small, the quantity\n$\Gamma(\varepsilon)$ is indeed small and we can neglect its smooth\n$\varepsilon$-dependence for all $\varepsilon\mathrel{\mathpalette\fun <}\Gamma_0\approx\Gamma_W$. Eq.(\ref{Cqqm}) is equivalent\nto eq.~(\ref{overl}) within this domain. The asymptotic behaviour for large\n$\varepsilon$ also does not change since $\Gamma(\varepsilon)$ remains\nbounded for all $\varepsilon$. A small difference can appear only for\nintermediate values of $\varepsilon$.\n\nHowever, for larger values of $m$ the deviation can become noticeable even\nnear the point $\varepsilon=0$. In this case the next term in the power\nexpansion\n\begin{equation}\label{pex}\n\Gamma(\varepsilon)\approx\Gamma_0+\Gamma_0'\,\varepsilon\n\end{equation}\nwith respect to the smooth $\varepsilon$-dependence should be taken into\naccount \cite{LSSS-94}. Because of the smoothness, the derivative $\Gamma_0'$\nis small. One can see from eq.(\ref{qeq}) that this derivative is purely\nimaginary. 
The form (\ref{overl}) is now reproduced again for sufficiently\nsmall $\varepsilon$,\n\begin{equation}\label{Cqqmap}\nK_Q(\varepsilon) = \frac{4\Gamma_g^2}{M^2T^2}\,\n\frac{\Gamma_g^2-\varepsilon^2}{(\varepsilon^2+\Gamma_g^2)^2 }\,\,,\n\end{equation}\nwith\n\begin{equation}\label{Gg}\n\Gamma_g=\frac{\Gamma_0}{1+i\Gamma_0'}\,\,.\n\end{equation}\nIt has been proven in \cite{LSSS-94} that $\Gamma_g$, playing the role of the\ncorrelation length of the Ericson fluctuations, coincides with the gap between\nthe distribution of resonance energies in the complex energy plane and the\nreal energy axis. Therefore we come to the conclusion that the fluctuations\nboth of the $S$-matrix and of the time delay are described by the same\nquantity, the gap $\Gamma_g$, rather than by the classical escape rate.\n\nUntil now we have neglected the ``one-sided'' contribution\n\[\widetilde{C}_Q(\varepsilon)=\langle Q\rangle^2\,\n\widetilde{K}_Q(\varepsilon)=\]\n\begin{equation}\label{tqq2}\n-\frac{2}{M^2}\,\mbox{Re}\left\{\langle\mbox{tr\,}{\cal G}(\frac{\varepsilon}\n{2})\mbox{tr\,}{\cal G}(-\frac{\varepsilon}{2})\rangle\n- \langle \mbox{tr\,}{\cal G}(\frac{\varepsilon}{2})\rangle\n\langle\mbox{tr\,}{\cal G}(-\frac{\varepsilon}{2})\n\rangle \right\}\n\end{equation}\nto the correlation function (\ref{dcf}). As long as $m=0$, this contribution\nis of higher order in the parameter $N^{-1}$. However, this is not the case\nwhen the ratio $M\/N$ is finite, so one has to calculate (\ref{tqq2})\nexplicitly. The well-known replica method \cite{EA-75} turns out to be\nsufficient for this purpose. Omitting the corresponding rather cumbersome\nexpressions, we only note that the function $\widetilde{K}_Q(\varepsilon)$ is\nentirely expressed in terms of the slowly varying ${\sl\ng}(\frac{\varepsilon}{2})$ and varies slowly itself. 
It has no pronounced\nresonance behaviour around the point $\varepsilon=0$ and constitutes a\nsmooth background for the correlation function. Its value at the point\n$\varepsilon=0$ is approximately equal to\n\[\widetilde{K}_Q(0)\approx - \frac{1}{8N^2}\]\nso that\n\[\Bigg|\widetilde K_Q(0)\/{K_Q(0)}\Bigg|\n\approx\frac{1}{2}\,\left(\frac{\pi\rho\Gamma_0}{2N}\right)^2\,\,.\]\nThe ratio is small under the condition\n\begin{equation}\label{con}\n\pi\rho\Gamma_0\ll N \quad\mbox{or}\quad \Gamma_0\ll 1\n\end{equation}\nimplying a clear-cut separation of the local and global scales\n\cite{LSSS-94}. Such a scale separation is necessary for matrix models to\nbe valid as far as the fluctuations are concerned.\n\nThe form obtained for the $\varepsilon$-dependence of the many-channel\ncorrelation function $C_Q$ is close to that found in \cite{SW-92} for\nGutzwiller's model of single-channel chaotic scattering on a space of\nnegative curvature. The equality of all resonance widths and the resulting\npossibility for resonances to overlap are two specific features\nof that model which are in fact in strong disagreement with properties of\nthe resonance spectra represented by matrix models. In particular, the\nsingle-channel resonances cannot overlap at all in the latter models\n\cite{SZ-89} and their widths fluctuate strongly. That is why our result\nfor $M=1$ (see below) differs noticeably from the correlation function\nof ref.\cite{SW-92}. The situation changes when the number of channels is\nlarge. The width fluctuations diminish as the number $M$ of channels\ngrows. Since the time delay depends, according to (\ref{q2}), only on\nproperties of the complex energies of resonances and not on the number of\nchannels directly, the correlation functions become similar in the two\nquite different cases compared.\n\nIt is worth noting that strong overlap of the resonances suppresses the\ntime delay fluctuations. 
Indeed, eq.(\ref{isol}) gives for isolated\nresonances\n\[K_Q(0)=\frac{1}{\pi\rho\Gamma_W}\gg 1\]\nwhereas\n\[K_Q(0)=\frac{1}{\pi^2\rho^2\Gamma_W^2}\ll 1\]\nwhen they overlap. The duration of a collision thus becomes a well-defined\nquantity in the ``quasiclassical'' limit.\n\n\section{Numerical results}\nExcept for the few limiting cases considered above, further analytical study\nof (\ref{K-f}) is not possible and one has to use numerical methods. However,\nthe threefold integral as it stands is not suitable for numerical computation.\nA very convenient substitution of the integration variables has been proposed\nin \cite{V-86} to overcome the difficulties that appear. Following this author\nwe reduce the expression (\ref{K-f}) to the Fourier integral\n\begin{equation}\label{fourier}\n K_Q(\varepsilon) =\n \int\limits_0^{\infty}\!dt\, F(t)\cos(\pi\rho\varepsilon t)\n\end{equation}\nwith the Fourier transform $F(t)$ given by a double integral of a smooth\nfunction, which is quite convenient for numerical work. The asymptotic\nbehaviour of $F(t)$ can easily be found explicitly,\n\begin{equation}\label{F-asymp}\n F(t) \sim \left\{ \begin{array}{ll} t\n &\ \ \mbox{for}\ t\ll 1 \\ (1+Tt)^{-M\/2} &\ \\n \mbox{for}\ t\gg 1 \end{array} \right.\,\,.\n\end{equation}\n\nFor a closed system ($T=0$) the Fourier transform $F(t)$ tends to unity in\nthe large-$t$ asymptotics. This results in the $\delta$-term in the GOE\ndensity-density correlation. A singularity still survives even for an open\nsystem with one or two decay channels: the asymptotics (\ref{F-asymp})\nimplies a square-root or logarithmic divergence, respectively, at the\npoint $\varepsilon=0$ in these two cases.\n\nIn Fig. 1 the function $K_Q(x)$ versus $x=\rho\varepsilon$ is plotted for the\ncase of a single open channel. The singular behaviour near zero as well as\nthe GOE-like asymptotics are shown. The dashed line represents Dyson's\nfunction $-Y_2(\pi x)$. 
The calculation was made for the value $\gamma=1$;\nonly a small domain around zero is sensitive to the choice of $\gamma$.\nThe correlation function in Fig. 1 has little in common with that found in\n\cite{SW-92}. This discrepancy is due to the strong fluctuations of the\nsingle-channel widths in our model, in contrast to the identical widths of\nall resonances in Gutzwiller's model.\n\nFor $M>2$ the quantity $K_Q(0)$ is finite and the correlation function\napproaches, as the number of channels grows, the asymptotics given by\n(\ref{lM}). Fig. 2 demonstrates this for the ratio\n$K_Q(\varepsilon)\/K_Q(0)$ in the\ncase of overlapping resonances. In the asymptotic regime (\ref{overl}) this\nratio is a universal function of the single variable $\varepsilon\/\Gamma_W$.\nOne can see how the exact result (\ref{K-f}) comes closer and closer to this\nuniversal behaviour.\n\nThe Lorentzian peak should dominate the ratio\n$K_Q(\varepsilon)\/K_Q(0)$ in the domain\n$\varepsilon\/\Gamma_W\mathrel{\mathpalette\fun <} (\pi\rho\Gamma_W)^{-\frac{1}{2}}\gg 1$ when\nthe resonances are isolated (see (\ref{isoz})). Fig. 3 demonstrates this for\ntwo values of the coupling constant $\gamma$.\n\nAs has been mentioned above, the function $K_Q(\varepsilon)$ vanishes at\nsome point $\varepsilon_0$. The position of this point as a function of\nthe number of channels $M$ is shown in\nFig. 4 for three different values of the coupling constant $\gamma$. It is\nclearly seen that the square-root dependence for isolated resonances (see\n(\ref{isoz})) is replaced by a linear one for overlapping resonances.\n\n\n\n\section{Summary}\nIn this paper we have considered the fluctuations of the characteristic time\nof collisions in the framework of a random matrix model of resonance chaotic\nscattering. These fluctuations are entirely due to the fluctuations of the\nspectrum of complex resonance energies. 
We calculate analytically the time\ndelay correlation function and investigate its properties analytically and\nnumerically for different values of the number of channels and the strength\nof the coupling to the continuum. For all values of these parameters this\nfunction is far from being a Lorentzian. In particular, it vanishes at some\npoint which plays the role of the characteristic correlation length of the\nfluctuations. In the ``quasiclassical'' limit of a large number of strongly\noverlapping resonances this length is given, similarly to that of the\nS-matrix fluctuations, by the gap between the upper edge of the distribution\nof complex energies of resonances and the real energy axis. We do not expect\nthis quantity to be connected to the escape rate appearing in the\nclassical theory of chaotic scattering. The latter has been conjectured in\n\cite{Sm-91} to be the semiclassical limit for the correlation length in\nchaotic scattering.\n\n\begin{center}\n{\large\bf Acknowledgements}\n\end{center}\nWe are grateful to F.~Izrailev for his continued interest in this work.\nFinancial support by the Deutsche Forschungsgemeinschaft through the SFB 237\nis acknowledged. For two of us (V.V.S. and D.V.S.) the research described in\nthis publication was made possible in part by Grant No RB7000 from the\nInternational Science Foundation.\n\n\section{Introduction}\n\label{sec:intro}\n\nCrowd counting aims to count the number of people in a crowded scene, whereas density estimation aims to map an input crowd image to its corresponding density map, which indicates the number of people per pixel present in the image (as illustrated in Fig. \ref{fig:task_illustration}); the two problems have been jointly addressed by researchers. 
The problem of crowd counting and density estimation is of paramount importance, as it is essential for building higher level cognitive abilities in crowded scenarios such as crowd monitoring \cite{chan2008privacy} and scene understanding \cite{shao2015deeply,zhou2012understanding}. Crowd analysis has attracted significant attention from researchers in the recent past due to a variety of reasons. Exponential growth in the world population and the resulting urbanization have led to an increased number of activities such as sporting events, political rallies, public demonstrations etc. (shown in Fig. \ref{fig:crowd_scenes}), thereby resulting in more frequent crowd gatherings in recent years. In such scenarios, it is essential to analyze crowd behavior for better management, safety and security. \n\nLike any other computer vision problem, crowd analysis comes with many challenges such as occlusions, high clutter, non-uniform distribution of people, non-uniform illumination, and intra-scene and inter-scene variations in appearance, scale and perspective, making the problem extremely difficult. Some of these challenges are illustrated in Fig. \ref{fig:crowd_scenes}. The complexity of the problem together with the wide range of applications for crowd analysis has led to an increased focus by researchers in the recent past. \n\n\begin{figure}[t]\n\begin{center}\n\begin{minipage}{0.49\linewidth}\n\includegraphics[width=\linewidth]{Fig1}\n\vskip-6pt\n\captionof*{figure}{(a)}\n\end{minipage}%\n\hfill\n\begin{minipage}{0.49\linewidth}\n\includegraphics[width=\linewidth]{Fig2}\n\vskip-6pt\n\captionof*{figure}{(b)}\n\end{minipage}\n\vskip-10pt\n\captionof{figure}{Illustration of density map estimation. 
(a) Input image (b) Corresponding density map with count.}
\label{fig:task_illustration}
\end{center}

\end{figure}

Crowd analysis is an inherently inter-disciplinary research topic in which researchers from different communities (such as sociology \cite{moussaid2010walking,blumer1951collective}, psychology \cite{aveni1977not}, physics \cite{castellano2009statistical,1971HendersonStatistics}, biology \cite{parrish1999complexity,zhang2010collective}, computer vision and public safety) have addressed the issue from different viewpoints. Crowd analysis has a variety of critical applications of an inter-disciplinary nature:

\noindent \textit{Safety monitoring}: The widespread usage of video surveillance cameras for security and safety purposes in places such as sports stadiums, tourist spots, shopping malls and airports has enabled easier monitoring of crowds in such scenarios. However, traditional surveillance algorithms may break down as they are unable to process high density crowds due to limitations in their design. In such scenarios, we can leverage the results of algorithms specially designed for crowd analysis related tasks such as behavior analysis \cite{saxena2008crowd,ko2008survey}, congestion analysis \cite{zhou2015learning,huang2015congestion}, anomaly detection \cite{li2014anomaly,chaker2017social} and event detection \cite{benabbas2010motion}.

\noindent \textit{Disaster management}: Many scenarios involving crowd gatherings such as sports events, music concerts, public demonstrations and political rallies face the risk of crowd related disasters such as stampedes, which can be life threatening. In such cases, crowd analysis can be used as an effective tool for early overcrowding detection and appropriate management of the crowd and, hence, eventual aversion of any disaster \cite{abdelghany2014modeling,almeida2013crowd}.
\n\n\\noindent \\textit{Design of public spaces}: Crowd analysis on existing public spots such as airport terminals, train stations, shopping malls and other public buildings \\cite{chow2008waiting,sime1995crowd} can reveal important design shortcomings from crowd safety and convenience point of view. These studies can be used for design of public spaces that are optimized for better safety and crowd movement \\cite{lu2016study,al2013crowd}. \n\n\\noindent \\textit{Intelligence gathering and analysis}: Crowd counting techniques can be used to gather intelligence for further analysis and inference. For instance, in retail sector, crowd counting can be used to gauge people's interest in a product in a store and this information can be used for appropriate product placement \\cite{lipton2015video,mongeon2015busyness}. Similarly, crowd counting can be used to measure queue lengths to optimize staff numbers at different times of the day. Furthermore, crowd counting can be used to analyze pedestrian flow at signals at different times of the day and this information can be used for optimizing signal-wait times \\cite{bernal2014system}. \n\n\\noindent \\textit{Virtual environments}: Crowd analysis methods can be used to understand the underlying phenomenon thereby enabling us to establish mathematical models that can provide accurate simulations. These mathematical models can be further used for simulation of crowd phenomena for various applications such as computer games, inserting visual effects in film scenes and designing evacuation plans \\cite{gustafson2016mure,perez2016task}.\n\n\\noindent \\textit{Forensic search}: Crowd analysis can be used to search for suspects and victims in events such as bombing, shooting or accidents in large gatherings. 
Traditional face detection and recognition algorithms can be sped up using crowd analysis techniques, which are more adept at handling such scenarios \cite{klontz2013case,barr2014effectiveness}.

This variety of applications has motivated researchers across various fields to develop sophisticated methods for crowd analysis and related tasks such as counting \citep{chan2008privacy,chan2009bayesian,chen2012feature,idrees2013multi,chan2012counting,skaug2016end,ge2009marked}, density estimation \citep{lempitsky2010learning,chen2013cumulative,zhang2016single,zhang2015cross,pham2015count,wang2016fast,boominathan2016crowdnet}, segmentation \citep{kang2014fully,dong2007fast}, behaviour analysis \citep{bandini2014towards,shao2014scene,cheng2014recognizing,zhou2012understanding,zhou2015learning,yi2016l0}, tracking \citep{rodriguez2011density,zhu2014crowd}, scene understanding \citep{shao2015deeply,zhou2012understanding} and anomaly detection \citep{mahadevan2010anomaly,li2014anomaly}. Among these, crowd counting and density estimation are a set of fundamental tasks and they form basic building blocks for various other applications discussed earlier.
Additionally, methods developed for crowd counting can be easily extended to counting tasks in other fields such as cell microscopy \\cite{wang2016fast,walach2016learning,lempitsky2010learning,chen2012feature}, vehicle counting \\cite{onoro2016towards}, environmental survey \\cite{french2015convolutional,zhan2008crowd}, etc.\n\n\\begin{figure}[t]\n\\begin{center}\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig3}\n\\vskip-6pt \\captionof*{figure}{(a)}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig4}\n\\vskip-6pt \\captionof*{figure}{(b)}\n\\end{minipage}\n\\end{center}\n\n\\begin{center}\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig5}\n\\vskip-6pt \\captionof*{figure}{(c)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.49\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig6}\n\\vskip-6pt \\captionof*{figure}{(d)}\n\\end{minipage}\n\\vskip-10pt \\captionof{figure}{Illustration of various crowded scenes and the associated challenges. (a) Parade (b) Musical concert (c) Public demonstration (d) Sports stadium. High clutter, overlapping of subjects, variation in scale and perspective can be observed across images.}\n\\label{fig:crowd_scenes}\n\\end{center}\n\\end{figure}\n\nOver the last few years, researchers have attempted to address the issue of crowd counting and density estimation using a variety of approaches such as detection-based counting, clustering-based counting and regression-based counting \\cite{loy2013crowd}. The initial work on regression-based methods mainly use hand-crafted features and the more recent works use Convolutional Neural Network (CNN) based approaches. The CNN-based approaches have demonstrated significant improvements over previous hand-crafted feature-based methods, thus, motivating more researchers to explore CNN-based approaches further for related crowd analysis problems. 
In this paper, we review various single image crowd counting and density estimation methods with a specific focus on recent CNN-based approaches. \n\n\nResearchers have attempted to provide a comprehensive survey and evaluation of existing techniques for various aspects of crowd analysis \\citep{zhan2008crowd,ferryman2014performance,junior2010crowd,li2015crowded,zitouni2016advances}. Zhan \\textit{et al. } \\cite{zhan2008crowd} and Junior \\textit{et al. } \\cite{junior2010crowd} were among the first ones to study and review existing methods for general crowd analysis. Li \\textit{et al. } \\cite{li2015crowded} surveyed different methods for crowded scene analysis tasks such as crowd motion pattern learning, crowd behavior, activity analysis and anomaly detection in crowds. More recently, Zitouni \\textit{et al. } \\cite{zitouni2016advances} evaluated existing methods across different research disciplines by inferring key statistical evidence from existing literature and provided suggestions towards the general aspects of techniques rather than any specific algorithm. While these works focussed on the general aspects of crowd analysis, researchers have studied in detail crowd counting and density estimation methods specifically \\cite{loy2013crowd,saleh2015recent,ryan2015evaluation}. Loy \\textit{et al. } \\cite{loy2013crowd} provided a detailed description and comparison of video imagery-based crowd counting and evaluation of different methods using the same protocol. They also analyzed each processing module to identify potential bottlenecks to provide new directions for further research. In another work, Ryan \\textit{et al. } \\cite{ryan2015evaluation} presented an evaluation of regression-based methods for crowd counting across multiple datasets and provided a detailed analysis of performance of various hand-crafted features. Recently, Saleh \\textit{et al. 
} \cite{saleh2015recent} surveyed two main approaches: the direct approach (i.e., object-based target detection) and the indirect approach (e.g., pixel-based, texture-based, and corner-point based analysis).

Though existing surveys analyze various methods for crowd analysis and counting, they, however, cover only traditional methods that use hand-crafted features and do not take into account the recent advancements driven primarily by CNN-based approaches \cite{shao2015deeply,hu2016dense,zhao2016crossing,boominathan2016crowdnet,skaug2016end,walach2016learning,arteta2016counting,wang2015deep,zhang2016single,zhang2015cross,onoro2016towards,shao2016slicing} and the creation of new challenging crowd datasets \cite{zhang2016data,zhang2015cross,zhang2016single}. While CNN-based approaches have achieved drastically lower error rates, the creation of new datasets has enabled learning of more generalized models. To keep up with the rapidly advancing research in crowd counting, we believe it is necessary to analyze these methods in detail in order to understand the trends. Hence, in this paper, we provide a survey of recent state-of-the-art CNN-based approaches for crowd counting and density estimation for single images.

The rest of the paper is organized as follows: Section \ref{sec:review_traditional} briefly reviews the traditional crowd counting and density estimation approaches with an emphasis on the most recent methods. This is followed by a detailed survey on CNN-based methods along with a discussion on their merits and drawbacks in Section \ref{sec:survey_cnn}. In Section \ref{sec:datasets_and_results}, recently published challenging datasets for crowd counting are discussed in detail along with results of the state-of-the-art methods. We discuss several promising avenues for achieving further progress in Section \ref{sec:future_research}.
Finally, concluding remarks are made in Section \\ref{sec:conclusion}.\n\n\\section{Review of traditional approaches}\n\\label{sec:review_traditional}\nVarious approaches have been proposed to tackle the problem of crowd counting in images \\cite{idrees2013multi,chen2013cumulative,lempitsky2010learning,zhang2015cross,zhang2016single} and videos \\cite{brostow2006unsupervised,ge2009marked,rodriguez2011density,chen2015person}. \nLoy \\textit{et al. } \\cite{loy2013crowd} broadly classified traditional crowd counting methods based on the approach into the following categories: (1) Detection-based approaches, (2) Regression-based approaches, and (3) Density estimation-based approaches. \n\nSince the focus of this work is on CNN-based approaches, in this section, we briefly review the detection and regression-based approaches using hand-crafted features for the sake of completeness. In addition, we present a review of the recent traditional methods \\cite{idrees2013multi,lempitsky2010learning,pham2015count,wang2016fast,xu2016crowd} that have not been analyzed in earlier surveys. \n\n\\subsection{Detection-based approaches}\nMost of the initial research was focussed on detection style framework, where a sliding window detector is used to detect people in the scene \\cite{dollar2012pedestrian} and this information is used to count the number of people \\cite{li2008estimating}. Detection is usually performed either in the monolithic style or parts-based detection. Monolithic detection approaches \\cite{dalal2005histograms,leibe2005pedestrian,tuzel2008pedestrian,enzweiler2009monocular} typically are traditional pedestrian detection methods which train a classifier using features (such as Haar wavelets \\cite{viola2004robust}, histogram oriented gradients \\cite{dalal2005histograms}, edgelet \\cite{wu2005detection} and shapelet \\cite{sabzmeydani2007detecting}) extracted from a full body. 
Various learning approaches such as Support Vector Machines, boosting \\cite{viola2005detecting} and random forest \\cite{gall2011hough} have been used with varying degree of success. Though successful in low density crowd scenes, these methods are adversely affected by the presence of high density crowds. Researchers have attempted to address this issue by adopting part-based detection methods \\cite{felzenszwalb2010object,lin2001estimation,wu2007detection}, where one constructs boosted classifiers for specific body parts such as\nthe head and shoulder to estimate the people counts in a designated area \\cite{li2008estimating}. In another approach using shape learning, Zhao et al. \\cite{zhao2008segmentation} modelled humans using 3D shapes composed of ellipsoids, and employed a stochastic process to estimate the number and shape configuration that best explains a given foreground mask in a scene. Ge and Collins \\cite{ge2009marked} further extended the idea by using flexible and practical shape models.\n\n\\subsection{Regression-based approaches}\nThough parts-based and shape-based detectors were used to mitigate the issues of occlusion, these methods were not successful in the presence of extremely dense crowds and\nhigh background clutter. To overcome these issues, researchers attempted to count by regression where they learn a mapping between features extracted from local image patches to their counts \\cite{chan2009bayesian,ryan2009crowd,chen2012feature}. By counting using regression, these methods avoid dependency on learning detectors which is a relatively complex task. These methods have two major components: low-level feature extraction and regression modelling. A variety of features such as foreground features, edge features, texture and gradient features have been used for encoding low-level information. Foreground features are extracted from foreground segments in a video using standard background subtraction techniques. 
Blob-based holistic features such as area, perimeter, perimeter-area ratio, etc. have demonstrated encouraging results \cite{chan2008privacy,chen2012feature,ryan2009crowd}. While these methods capture global properties of the scene, local features such as edges and texture/gradient features such as local binary patterns (LBP), histograms of oriented gradients (HOG) and gray level co-occurrence matrices (GLCM) have been used to further improve the results. Once these global and local features are extracted, different regression techniques such as linear regression \cite{paragios2001mrf}, piecewise linear regression \cite{chan2008privacy}, ridge regression \cite{chen2012feature}, Gaussian process regression and neural networks \cite{marana1998efficacy} are used to learn a mapping from low-level features to the crowd count.

In a recent approach, Idrees \textit{et al. } \cite{idrees2013multi} identified that no single feature or detection method is reliable enough to provide sufficient information for accurate counting in the presence of high density crowds due to various reasons such as low resolution, severe occlusion, foreshortening and perspective. Additionally, they observed that there exists a spatial relationship that can be used to constrain the count estimates in neighboring local regions. With these observations in mind, they proposed to extract features using different methods that capture different information. By treating densely packed crowds of individuals as irregular and non-homogeneous texture, they employed Fourier analysis along with head detections and SIFT interest-point based counting in local neighborhoods. The count estimates from this localized multi-scale analysis are then aggregated subject to global consistency constraints. The three sources, i.e., Fourier, interest points and head detection, are then combined with their respective confidences and counts at localized patches are computed independently.
These local counts are then globally constrained in a multi-scale Markov Random Field (MRF) framework to get an estimate of count for the entire image. The authors also introduced an annotated dataset (UCF\\textunderscore CC\\textunderscore 50) of 50 images containing 64000 humans.\n\nChen \\textit{et al. } \\cite{chen2013cumulative} introduced a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. Considering that the challenges of inconsistent features along with sparse and imbalanced (encountered during learning a regression function) are related, cumulative attribute-based representation for learning a regression model is proposed. Specifically, features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space. The method is based on the notion of discriminative attributes used for addressing sparse training data. This method is inherently capable of handling imbalanced data.\n\n\\subsection{Density estimation-based approaches}\nWhile the earlier methods were successful in addressing the issues of occlusion and clutter, most of them ignored important spatial information as they were regressing on the global count. In contrast, Lempitsky \\textit{et al. } \\cite{lempitsky2010learning} proposed to learn a linear mapping between local patch features and corresponding object density maps, thereby incorporating spatial information in the learning process. In doing so, they avoided the hard task of learning to detect and localize individual object instances by introducing a new approach of estimating image density whose integral over any region in the density map gives the count of objects within that region. The problem of learning density maps is formulated as a minimization of a regularized risk quadratic cost function. A new loss function appropriate for learning density maps is introduced. 
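The defining property of such density maps — that integrating (summing) the map over any region yields the object count in that region — can be illustrated with a minimal NumPy sketch (illustrative only, not the authors' implementation; kernel size and spread are arbitrary choices here):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=3.0):
    """Normalized 2-D Gaussian kernel (sums to 1, i.e. unit mass)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_map(shape, points, size=15, sigma=3.0):
    """Place a unit-mass Gaussian at each annotated head location."""
    dmap = np.zeros(shape, dtype=np.float64)
    k = gaussian_kernel(size, sigma)
    r = size // 2
    H, W = shape
    for (y, x) in points:
        # clip the kernel at the image borders
        y0, y1 = max(y - r, 0), min(y + r + 1, H)
        x0, x1 = max(x - r, 0), min(x + r + 1, W)
        ky0, kx0 = y0 - (y - r), x0 - (x - r)
        dmap[y0:y1, x0:x1] += k[ky0:ky0 + (y1 - y0), kx0:kx0 + (x1 - x0)]
    return dmap

heads = [(20, 30), (40, 50), (25, 70)]        # annotated head centers (y, x)
d = density_map((64, 96), heads)
# integral over the whole image recovers the annotated count
print(round(d.sum(), 3))                      # → 3.0
```

Because each person contributes unit mass, the sum over any sub-window estimates the count inside that window, which is what makes regressing density maps (rather than a single global count) attractive.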
The entire problem is posed as a convex optimization task which they solve using cutting-plane optimization.\n\nObserving that it is difficult to learn a linear mapping, Pham \\textit{et al. } \\cite{pham2015count} proposed to learn a non-linear mapping between local patch features and density maps. They used random forest regression from multiple image patches to vote for densities of multiple target objects to learn a non-linear mapping. In addition, they tackled the problem of large variation in appearance and shape between crowded image patches and non-crowded ones by proposing a crowdedness prior and they trained two different forests corresponding to this prior. Furthermore, they were able to successfully speed up the estimation process for real-time performance by proposing an effective forest reduction that uses permutation of decision trees. Apart from achieving real-time performance, another advantage of their method is that it requires relatively less memory to build and store the forest.\n\nSimilar to the above approach, Wang and Zou \\cite{wang2016fast} identified that though existing methods are effective, they were inefficient from computational complexity point of view. To this effect, they proposed a fast method for density estimation based on subspace learning. Instead of learning a mapping between dense features and their corresponding density maps, they learned to compute the embedding of each subspace formed by image patches. Essentially, they exploited the relationship between images and their corresponding density maps in the respective feature spaces. The feature space of image patches are clustered and examples of each subspace are collected to learn its embedding. Their assumption that local image patches and their corresponding density maps share similar local geometry enables them to learn locally linear embedding using which the density map of an image patch can be estimated by preserving the geometry. 
Since implementing locally linear embedding (LLE) is time-consuming, they divided the feature spaces of image patches and their counterpart density maps into subspaces, and computed the embedding of each subspace formed by image patches. The density map of an input patch is then estimated by simple classification and mapping with the corresponding embedding matrix.

In a more recent approach, Xu and Qiu \cite{xu2016crowd} observed that the existing crowd density estimation methods used a small set of features, thereby limiting their ability to perform better. Inspired by the success of high-dimensional features in other domains such as face recognition, they proposed to boost the performance of crowd density estimation by using a much more extensive and richer set of features. However, since the regression techniques used by earlier methods (based on Gaussian process regression or ridge regression) are computationally complex and are unable to process very high-dimensional features, they used random forest as the regression model, whose tree structure is intrinsically fast and scalable. Unlike traditional approaches to random forest construction, they embedded random projection in the tree nodes to combat the curse of dimensionality and to introduce randomness in the tree construction.

\begin{figure}[t]
\begin{center}
\begin{minipage}{0.95\linewidth}
\includegraphics[width=\linewidth]{Fig7}
\end{minipage}%
\vskip-10pt
\captionof{figure}{Categorization of existing CNN-based approaches.}
\label{fig:classification}
\end{center}
\end{figure}

\section{CNN-based methods}
\label{sec:survey_cnn}
The success of CNNs in numerous computer vision tasks has inspired researchers to exploit their abilities for learning non-linear functions from crowd images to their corresponding density maps or counts. A variety of CNN-based methods have been proposed in the literature.
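As a concrete reference point for the pipeline shared by the methods surveyed below — convolutional layers mapping an image to a density map whose integral is the count — the following is a toy, untrained, single-channel sketch in plain NumPy (all layer sizes and weights are arbitrary illustrative choices, not any particular published architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive 'valid' 2-D convolution: x is (H, W), w is (k, k)."""
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def tiny_density_net(img, w1, w2):
    """Two conv layers with ReLU; the predicted count is the
    integral (sum) of the output density map."""
    h = np.maximum(conv2d(img, w1), 0.0)   # conv + ReLU
    d = np.maximum(conv2d(h, w2), 0.0)     # non-negative density map
    return d, d.sum()

img = rng.random((32, 32))                 # toy grayscale crowd image
w1 = rng.standard_normal((5, 5)) * 0.1
w2 = rng.standard_normal((3, 3)) * 0.1
density, count = tiny_density_net(img, w1, w2)
print(density.shape)                       # → (26, 26)
```

In practice, the networks discussed below replace these hand-rolled loops with learned multi-channel convolutions and are trained with a pixel-wise loss between the predicted and ground truth density maps.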
We broadly categorize these methods based on the network property and the training approach, as shown in Fig. \ref{fig:classification}. Based on the property of the networks, we classify the approaches into the following categories:
\begin{itemize}[noitemsep]
\item \textbf{Basic CNNs}: Approaches that involve basic CNN layers in their networks fall into this category. These methods are amongst the initial deep learning approaches for crowd counting and density estimation.
\item \textbf{Scale-aware models}: The basic CNN-based approaches evolved into more sophisticated models that were robust to variations in scale. This robustness is achieved through different techniques such as multi-column or multi-resolution architectures.
\item \textbf{Context-aware models}: Another set of approaches attempted to incorporate local and global contextual information present in the image into the CNN framework for achieving lower estimation errors.
\item \textbf{Multi-task frameworks}: Motivated by the success of multi-task learning for various computer vision tasks, various approaches have been developed to combine crowd counting and density estimation with other tasks such as foreground-background subtraction and crowd velocity estimation.
\end{itemize}

In yet another categorization, we classify the CNN-based approaches based on the inference methodology into the following two categories:
\begin{itemize}[noitemsep]
\item \textbf{Patch-based inference}: In this approach, the CNNs are trained using patches cropped from the input images. Different methods use different crop sizes. During the prediction phase, a sliding window is run over the test image, predictions are obtained for each window and finally aggregated to obtain the total count in the image.
\item \textbf{Whole image-based inference}: Methods in this category perform whole image-based inference. These methods avoid computationally expensive sliding windows.
\end{itemize}
Table~\ref{tab:classification} presents a categorization of various CNN-based crowd counting methods based on their network property and inference process.

\begin{table}[htp!]
\centering
\caption{Categorization of existing CNN-based approaches.}
\label{tab:classification}
\resizebox{0.99\linewidth}{!}{%
\begin{tabular}{|l|l|l|}
\hline
 & \multicolumn{2}{c|}{Category} \\ \hline
Method & \begin{tabular}[c]{@{}l@{}}Network \\ property\end{tabular} & \begin{tabular}[c]{@{}l@{}}Inference\\ process\end{tabular} \\ \hline
Fu \textit{et al. } \cite{fu2015fast} & Basic & Patch-based \\ \hline
Wang \textit{et al. } \cite{wang2015deep} & Basic & Patch-based \\ \hline
Zhang \textit{et al. } \cite{zhang2015cross} & Multi-task & Patch-based \\ \hline
Boominathan \textit{et al. } \cite{boominathan2016crowdnet} & Scale-aware & Patch-based \\ \hline
Zhang \textit{et al. } \cite{zhang2016single} & Scale-aware & Whole image-based \\ \hline
Walach and Wolf \cite{walach2016learning} & Basic & Patch-based \\ \hline
Onoro \textit{et al. } \cite{onoro2016towards} & Scale-aware & Patch-based \\ \hline
Shang \textit{et al. } \cite{skaug2016end} & Context-aware & Whole image-based \\ \hline
Sheng \textit{et al. } \cite{sheng2016crowd} & Context-aware & Whole image-based \\ \hline
Kumagai \textit{et al. } \cite{kumagai2017mixture} & Scale-aware & Patch-based \\ \hline
Marsden \textit{et al. } \cite{marsden2016fully} & Scale-aware & Whole image-based \\ \hline
Mundhenk \textit{et al. } \cite{mundhenk2016large} & Basic & Patch-based \\ \hline
Arteta \textit{et al. } \cite{arteta2016counting} & Multi-task & Patch-based \\ \hline
Zhao \textit{et al. } \cite{zhao2016crossing} & Multi-task & Patch-based \\ \hline
Sindagi \textit{et al. } \cite{sindagi2017cnnbased} & Multi-task & Whole image-based \\ \hline
Sam \textit{et al. 
} \cite{sam2017switching} & Scale-aware & Patch-based \\ \hline
Kang \textit{et al. } \cite{zhao2016crossing} & Basic & Patch-based \\ \hline
\end{tabular}
}
\end{table}

\subsection{Survey of CNN-based methods}
In this section, we review various CNN-based crowd counting and density estimation methods along with their merits and drawbacks.

Wang \textit{et al. } \cite{wang2015deep} and Fu \textit{et al. } \cite{fu2015fast} were among the first ones to apply CNNs to the task of crowd density estimation. Wang \textit{et al. } proposed an end-to-end deep CNN regression model for counting people from images in extremely dense crowds. They adopted the AlexNet architecture \cite{krizhevsky2012imagenet}, where the final fully connected layer of 4096 neurons is replaced with a single neuron layer for predicting the count. In addition, to reduce false responses caused by background objects such as buildings and trees, the training data is augmented with additional negative samples whose ground truth count is set to zero. In a different approach, Fu \textit{et al. } proposed to classify the image into one of five classes: very high density, high density, medium density, low density and very low density, instead of estimating density maps. A multi-stage ConvNet from the work of Sermanet \textit{et al. } \cite{sermanet2012convolutional} was adopted for better shift, scale and distortion invariance. In addition, they used a cascade of two classifiers to achieve boosting, in which the first one specifically samples misclassified images whereas the second one reclassifies rejected samples.

Zhang \textit{et al. } \cite{zhang2015cross} analyzed existing methods and identified that their performance reduces drastically when applied to a new scene that is different from the training dataset.
To overcome this issue, they proposed to learn a mapping from images to crowd counts and to adapt this mapping to new target scenes for cross-scene counting. To achieve this, they first learned their network by alternatively training on two objective functions: crowd count and density estimation which are related objectives. By alternatively optimizing over these objective functions one is able to obtain better local optima. In order to adapt this network to a new scene, the network is fine-tuned using training samples that are similar to the target scene. It is important to note that the network is adapted to new target scenes without any extra label information. The overview of their approach is shown in Fig. \\ref{fig:cross_scene}. Also, in contrast to earlier methods that use the sum of Gaussian kernels centered on the locations of objects, a new method for generating ground truth density map is proposed that incorporates perspective information. In doing so, the network is able to perform perspective normalization thereby achieving robustness to scale and perspective variations. Additionally, they introduced a new dataset for the purpose of evaluating cross-scene crowd counting. The network is evaluated for cross-scene crowd counting as well as single scene crowd counting and superior results are demonstrated for both scenarios.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig8}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of cross scene crowd counting proposed by Zhang \\textit{et al. } \\cite{zhang2015cross}.}\n\\label{fig:cross_scene}\n\\end{center}\n\\end{figure}\n\n\nInspired by the success of cross-scene crowd counting \\cite{zhang2015cross}, Walach and Wolf \\cite{walach2016learning} performed layered boosting and selective sampling. 
Layered boosting involves iteratively adding CNN layers to the model such that every new layer is trained to estimate the residual error of the earlier prediction. For instance, after the first CNN layer is trained, the second CNN layer is trained on the difference between the estimation and ground truth. This layered boosting approach is based on the notion of Gradient Boosting Machines (GBM) \\cite{friedman2001greedy} which are a subset of powerful ensemble techniques. An overview of their boosting approach is presented in Fig. \\ref{fig:learning_to_count_arch}. The other contribution made by the authors is the use of sample selection algorithm to improve the training process by reducing the effect of low quality samples such as trivial samples or outliers. According to the authors, the samples that are correctly classified early on are trivial samples. Presenting such samples for training even after the networks have learned to classify them tends to introduce bias in the network for such samples, thereby affecting its generalization performance. Another source of training inefficiency is the presence of outliers such as mislabeled samples. Apart from affecting the network's performance, these samples increase the training time. To overcome this issue, such samples are eliminated out of the training process for a number of epochs. The authors demonstrated that their method reduces the count estimation error by 20\\% to 30\\% over existing state-of-the-art methods at that time on different datasets. \n\\begin{figure}[t]\n\\begin{center}\n\\begin{minipage}{0.7\\linewidth}\n\\includegraphics[width=1\\linewidth]{Fig9}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of learning to count using boosting by Walach and Wolf \\cite{walach2016learning}.}\n\\label{fig:learning_to_count_arch}\n\\end{center}\n\\end{figure}\n\n\nIn contrast to the above methods that use patch-based training, Shang \\textit{et al. 
} \cite{skaug2016end} proposed an end-to-end count estimation method using CNNs (Fig. \ref{fig:end_to_end_arch}). Instead of dividing the image into patches, their method takes the entire image as input and directly outputs the final crowd count. As a result, computations on overlapping regions are shared by combining multiple stages of processing, leading to a reduction in complexity. The network simultaneously learns to estimate local counts and can be viewed as learning a patch-level counting model, which enables faster training. By doing so, contextual information is incorporated into the network, enabling it to ignore background noise and achieve better performance. The network is composed of three parts: (1) a pre-trained GoogLeNet model \cite{szegedy2015going}, (2) long short-term memory (LSTM) decoders for local counts, and (3) fully connected layers for the final count. The network takes an image as input and computes high-dimensional CNN feature maps using the GoogLeNet network. Local blocks in these high-dimensional features are decoded into local counts using an LSTM unit. A set of fully connected layers after the LSTM unit maps the local counts to the global count. The two counting objectives are jointly optimized during training.

\begin{figure}[htp!]
\begin{center}
\begin{minipage}{1\linewidth}
\includegraphics[width=\linewidth]{Fig10}
\end{minipage}%
\vskip-8pt
\captionof{figure}{Overview of the end-to-end counting method proposed by Shang \textit{et al. } \cite{skaug2016end}. GoogLeNet is used to compute high-dimensional features which are further decoded into local counts using LSTM units. }
\label{fig:end_to_end_arch}
\end{center}
\end{figure}

In an effort to capture semantic information in the image, Boominathan \textit{et al. } \cite{boominathan2016crowdnet} combined deep and shallow fully convolutional networks to predict the density map for a given crowd image.
The combination of the two networks makes the model robust to non-uniform scaling of the crowd and variations in perspective. Furthermore, an extensive augmentation of the training dataset is performed in two ways; in one of them, patches from the multi-scale image representation are sampled to make the system robust to scale variations. Fig. \\ref{fig:boominathan_arch} shows an overview of this method. \n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig11}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of the counting method proposed by Boominathan \\textit{et al. } \\cite{boominathan2016crowdnet}. A deep network is used in combination with a shallow network to address scale variations across images. }\n\\label{fig:boominathan_arch}\n\\end{center}\n\\end{figure}\n\nIn another approach, Zhang \\textit{et al. } \\cite{zhang2016single} proposed a multi-column based architecture (MCNN) for images with arbitrary crowd density and arbitrary perspective. Inspired by the success of multi-column networks for image recognition \\cite{ciregan2012multi}, the proposed method ensures robustness to large variations in object scales by constructing a network that comprises three columns corresponding to filters with receptive fields of different sizes (large, medium, small), as shown in Fig. \\ref{fig:single_image_arch}. These different columns are designed to cater to the different object scales present in the images. Additionally, a new method for generating ground truth crowd density maps is proposed. In contrast to existing methods that either use a sum of Gaussian kernels with a fixed variance or perspective maps, Zhang \\textit{et al. } proposed to take perspective distortion into account by estimating the spread parameter of the Gaussian kernel based on the size of the head of each person within the image. However, it is impractical to estimate head sizes and their underlying relationship with density maps. 
Instead, they used an important property observed in high density crowd images: the head size is related to the distance between the centers of two neighboring persons. The spread parameter for each person is therefore determined data-adaptively based on its average distance to its neighbors. Note that the ground truth density maps created using this technique incorporate distortion information without the use of perspective maps. Finally, considering that existing crowd counting datasets do not cater to all the challenging situations encountered in real world scenarios, a new ShanghaiTech crowd dataset is constructed. This new dataset includes 1198 images with about 330,000 annotated heads. \n\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig12}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of single image crowd counting via multi-column network by Zhang \\textit{et al. } \\cite{zhang2016single}.}\n\\label{fig:single_image_arch}\n\\end{center}\n\\end{figure}\n \n\n\nSimilar to the above approach, Onoro and Sastre \\citep{onoro2016towards} developed a scale-aware counting model called Hydra CNN that is able to estimate object densities in a variety of crowded scenarios without any explicit geometric information of the scene. First, a deep fully convolutional neural network (which they call the Counting CNN) with six convolutional layers is employed. Motivated by the observation of earlier work \\cite{zhang2015cross,loy2013crowd} that incorporating perspective information for geometric correction of the input features results in better accuracy, geometric information is incorporated into the Counting CNN (CCNN). To this end, they developed the Hydra CNN, which learns a multi-scale non-linear regression model. As shown in Fig. \\ref{fig:hydra_cnn_arch}, the network consists of 3 heads and a body, with each head learning features for a particular scale. 
Each head of the Hydra-CNN is constructed using the CCNN model; the outputs of the heads are concatenated and fed to the body. The body consists of a set of two fully connected layers followed by a rectified linear unit (ReLU), a dropout layer and a final fully connected layer that estimates the object density map. While the different heads extract image descriptors at different scales, the body learns a high-dimensional representation that fuses the multi-scale information provided by the heads. This network design of the Hydra CNN is inspired by the work of Li et al. \\cite{li2015visual}. Finally, the network is trained with a pyramid of image patches extracted at multiple scales. The authors demonstrated through their experiments that the Hydra CNN performs successfully in scenarios and datasets with significant variations in the scene. \n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig13}\n\\includegraphics[width=\\linewidth]{Fig14}\n\\end{minipage}%\n\\vskip-10pt\n\\captionof{figure}{Overview of Hydra-CNN by Onoro \\textit{et al. } \\citep{onoro2016towards}.}\n\\label{fig:hydra_cnn_arch}\n\\end{center}\n\\end{figure}\n\n\nInstead of training all regressors of a multi-column network \\cite{zhang2016single} on all the input patches, Sam \\textit{et al. } \\cite{sam2017switching} argue that better performance is obtained by training each regressor on a particular set of training patches, leveraging the variation of crowd density within an image. To this end, they proposed a switching CNN that cleverly selects an optimal regressor suited for a particular input patch. As shown in Fig. \\ref{fig:switching_cnn}, the proposed network consists of multiple independent regressors, similar to the multi-column network \\cite{zhang2016single}, with different receptive fields and a switch classifier. The switch classifier is trained to select the optimal regressor for a particular input patch. 
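At inference time, the selection just described amounts to running only the regressor that the switch prefers. A minimal Python sketch, where plain callables stand in for the CNN regressors and the switch classifier (all names here are hypothetical, not from the original implementation):

```python
def switching_count(patch, regressors, switch_scores):
    """Inference sketch for a switching architecture: the switch scores
    every regressor for this patch, only the highest-scoring regressor is
    run, and its density map is summed to obtain the patch count."""
    scores = switch_scores(patch)          # one score per regressor
    best = max(range(len(regressors)), key=scores.__getitem__)
    density = regressors[best](patch)      # density map for this patch
    return sum(density)                    # count = integral of the density
```

The image-level count would then be obtained by aggregating the counts of all patches.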
Independent CNN crowd density regressors are trained on patches sampled from a grid in a given crowd scene. The switch classifier and the independent regressors are alternately trained. The authors describe multiple stages of training their network. First, the independent regressors are pretrained on image patches to minimize the Euclidean distance between the estimated density map and the ground truth. This is followed by a differential training stage where the count error is factored in to improve the counting performance, by backpropagating through the regressor with the minimum count error for a given training patch. After training the multiple regressors, a switch classifier based on the VGG-16 architecture \\cite{simonyan2014very} is trained to select an optimal regressor for accurate counting. Finally, the switch classifier and the CNN regressors are co-adapted in the coupled training stage.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig32}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of Switching CNN by Sam \\textit{et al. } \\cite{sam2017switching}.}\n\\label{fig:switching_cnn}\n\\end{center}\n\\end{figure}\n \n\nWhile the above methods concentrated on incorporating scale information into the network, Sheng \\textit{et al. } in \\cite{sheng2016crowd} proposed to integrate semantic information by learning locality-aware feature sets. Noting that earlier methods that use hand-crafted features ignored key semantic and spatial information, the authors proposed a new image representation which incorporates semantic attributes as well as spatial cues to improve the discriminative power of feature representations. They defined semantic attributes at the pixel level and learned semantic feature maps via a deep CNN. The spatial information in the image is encoded using locality-aware features in the semantic attribute feature map space. 
The locality-aware features (LAF) are built on the idea of spatial pyramids on neighboring patches, thereby encoding spatial context and local information. The local descriptors from adjacent cells are then encoded into image representations using the weighted VLAD encoding method. \n\n\n\n\nSimilar to \\cite{zhang2016single,onoro2016towards}, Kumagai \\textit{et al. } \\cite{kumagai2017mixture} observed that a single predictor is insufficient to appropriately predict the count in the presence of large appearance changes, and proposed a Mixture of CNNs (MoCNN) whose experts are each specialized to a different scene appearance. As shown in Fig. \\ref{fig:mixture_cnn_arch}, the architecture consists of a mixture of expert CNNs and a gating CNN that adaptively selects the appropriate CNN among the experts according to the appearance of the input image. For prediction, the expert CNNs predict the crowd count in the image while the gating CNN predicts appropriate probabilities for each of the expert CNNs. These probabilities are then used as weighting factors to compute the weighted average of the counts predicted by all the expert CNNs.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{.8\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig15}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of MoCNN (Mixture of CNNs) for crowd counting by Kumagai \\textit{et al. } \\cite{kumagai2017mixture}.}\n\\label{fig:mixture_cnn_arch}\n\\end{center}\n\\end{figure}\n\n\n\n\nMotivated by the success of scale-aware models \\cite{zhang2016single,onoro2016towards}, Marsden \\textit{et al. } \\cite{marsden2016fully} proposed to incorporate scale into the models with far fewer model parameters. Observing that the earlier scale-aware models \\cite{zhang2016single,onoro2016towards} are difficult to optimize and computationally complex, Marsden \\textit{et al. 
} \\cite{marsden2016fully} proposed a single-column fully convolutional network where the scale information is incorporated into the model using a simple yet effective multi-scale averaging step during prediction, without any increase in the model parameters. The method addresses the issues of scale and perspective changes by feeding multiple scales of the test image into the network during the prediction phase. The crowd count is estimated for each scale and the final count is obtained by taking the average of all the estimates. Additionally, a new training set augmentation scheme is developed to reduce redundancy among the training samples. In contrast to the earlier methods that use randomly cropped patches with a high degree of overlap, the training set in this work is constructed using the four image quadrants as well as their horizontal flips, ensuring no overlap. This technique avoids the potential overfitting that occurs when the network is continuously exposed to the same set of pixels during training, thereby improving the generalization performance of the network. In addition, the generalization performance of the proposed method is studied by measuring cross-dataset performance. \n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{0.8\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig16}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of Fully Convolutional Network for crowd counting by Marsden \\textit{et al. } \\cite{marsden2016fully}.}\n\\label{fig:congested_arch}\n\\end{center}\n\\end{figure}\n\n\nInspired by the superior results achieved by simultaneous learning of related tasks \\cite{ranjan2016hyperface,yu2017iprivacy}, Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased} and Marsden et al. \\cite{marsden2017resnetcrowd} explored multi-task learning to boost individual task performance. Marsden et al. 
} \\cite{marsden2017resnetcrowd} proposed a ResNet-18 \\cite{he2016deep} based architecture for simultaneous crowd counting, violent behaviour detection and crowd density level classification. The initial 5 convolutional layers of ResNet-18, including batch normalisation layers and skip connections, form the primary module of the network. These convolutional layers are followed by a set of task-specific layers. Finally, the sum of all the losses corresponding to the different tasks is minimized. Additionally, the authors constructed a new 100-image dataset specifically designed for multi-task learning of crowd count and behaviour. In a different approach, Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased} proposed a cascaded CNN architecture that incorporates the learning of a high-level prior to boost the density estimation performance. Inspired by \\cite{chen2016cascaded}, the proposed network simultaneously learns to classify the crowd count into various density levels and to estimate the density map (as shown in Fig. \\ref{fig:cascaded_mtcnn}). Classifying the crowd count into various levels is equivalent to coarsely estimating the total count in the image, thereby incorporating a high-level prior into the density estimation network. This enables the layers in the network to learn globally relevant discriminative features. Additionally, in contrast to most recent work, they make use of transposed convolutional layers to generate high-resolution density maps.\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig33}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of Cascaded Multi-task CNN by Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased}.}\n\\label{fig:cascaded_mtcnn}\n\\end{center}\n\\end{figure}\n\n\n\nIn a recent work, Kang \\textit{et al. 
} \\cite{kang2017beyond} explored the maps generated by density estimation methods for various crowd analysis tasks such as counting, detection and tracking. They performed a detailed analysis of the effect of using full-resolution density maps on the performance of these tasks. They demonstrated through their experiments that full-resolution density maps improve the performance of localization tasks such as detection and tracking. Two different approaches are considered for generating full-resolution maps. In the first approach, a sliding-window based CNN regressor is used for pixel-wise density prediction. In the second approach, Fully Convolutional Networks \\cite{long2015fully} along with skip connections are used to learn a non-linear mapping between the input image and the corresponding density map.\n\n\n\n\nIn a slightly different application context of counting, Mundhenk \\textit{et al. } \\cite{mundhenk2016large} and Arteta \\textit{et al. } \\cite{arteta2016counting} proposed to count other types of objects, namely cars and penguins respectively. Mundhenk \\textit{et al. } \\cite{mundhenk2016large} addressed the problem of automated counting of automobiles from satellite\/aerial platforms. Their primary contribution is the creation of a large, diverse dataset of cars from overhead images. Along with the large dataset, they present a deep CNN-based network to recognize the number of cars in patches. The network is trained in a classification setting where the output of the network is a class that is indicative of the number of objects in the input image. Also, they incorporated contextual information by including additional regions around the cars in the training patches. Three different networks based on AlexNet \\cite{krizhevsky2012imagenet}, GoogLeNet \\cite{szegedy2015going} and ResNet \\cite{he2016deep} with Inception are evaluated. For a different application of counting penguins in images, Arteta \\textit{et al. 
} \\cite{arteta2016counting} proposed a deep multi-task architecture for accurate counting even in the presence of labeling errors. The network is trained in a multi-task setting where the tasks of foreground-background subtraction and uncertainty estimation, along with counting, are jointly learned. The authors demonstrated that the joint learning especially helps in learning a counting model that is robust to labeling errors. Additionally, they exploited scale variations and count variability across the annotations to incorporate scale information of the object and prediction of annotation difficulty, respectively, into the model. The network was evaluated on a newly created Penguin dataset. \n\n\nZhao \\textit{et al. } addressed a higher-level cognitive task of counting people that cross a line in \\citep{zhao2016crossing}. Though the task is a video-based application, it comprises a CNN-based model that is trained with pixel-level supervision maps similar to single image crowd density estimation methods, making it a relevant approach to include in this article. Their method consists of a two-phase training scheme (as shown in Fig. \\ref{fig:crossing_line_arch}) that decomposes the original counting problem into two sub-problems: estimating the crowd density map and the crowd velocity map, where the two tasks share the initial set of layers, enabling them to learn more effectively. The estimated crowd density and crowd velocity maps are then multiplied element-wise to generate the crowd counting maps. Additionally, they contributed a large-scale dataset for evaluating crossing-line crowd counting algorithms, which includes 5 different scenes, 3,100 annotated frames and 5,900 annotated pedestrians. \n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig17}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Overview of the method proposed by Zhao \\textit{et al. 
} \\cite{zhao2016crossing} for counting people crossing a line.}\n\\label{fig:crossing_line_arch}\n\\end{center}\n\\end{figure}\n\n\\section{Discussion}\n\nHaving discussed a variety of methods in Section \\ref{sec:survey_cnn}, in this section we analyze the advantages and disadvantages of the broad approaches followed by these methods. \n\nZhang \\textit{et al. } \\cite{zhang2015cross} were among the first to address the problem of adapting models to new unlabelled datasets using a simple and effective method based on finding similar patches across datasets. However, their method is heavily dependent on accurate perspective maps, which may not necessarily be available for all the datasets. Additionally, the use of 72$\\times$72 sized patches for training and evaluation ignores global context, which is necessary for an accurate estimation of the count. Walach and Wolf \\cite{walach2016learning} successfully addressed training inefficiencies in earlier methods using a layered boosting approach and a simple sample selection method. However, similar to Zhang \\textit{et al. } \\cite{zhang2015cross}, their method involves patch-based training and evaluation, resulting in a loss of global context information along with inefficiency during evaluation due to the use of a sliding-window approach. Additionally, these methods tend to ignore scale variations within the dataset, assuming that their models will implicitly learn the invariance.\n\nIn an effort to explicitly model scale invariance, several methods involving combinations of networks were proposed (\\cite{zhang2016single,onoro2016towards,sam2017switching,kumagai2017mixture,boominathan2016crowdnet}). 
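The idea these combinations share (several feature extractors with different receptive fields whose outputs are fused into one density map) can be sketched in a few lines of NumPy. Here simple mean filters of different window sizes stand in for the learned columns, and fixed fusion weights stand in for the learned fusion layer; all sizes and weights are illustrative and not taken from any of the cited models:

```python
import numpy as np

def column(img, k):
    """One 'column': a single k-by-k mean filter standing in for a CNN
    branch with receptive field k (real columns stack many conv layers)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def multi_column_density(img, ks=(3, 5, 9), weights=None):
    """Fuse columns with different receptive fields into one density map;
    the fusion weights (a learned layer in the real models) are fixed here."""
    weights = weights or [1.0 / len(ks)] * len(ks)
    return sum(w * column(img, k) for w, k in zip(weights, ks))
```

The point of the sketch is structural: each column sees the image at a different effective scale, and the fusion step decides how much each scale contributes to the final map.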
\nWhile these methods demonstrated significant performance improvements using multi-column networks and combinations of deep and shallow networks, the invariance achieved is limited by the number of columns present in the network and the receptive field sizes, which are chosen based on the scales present in the dataset. Additionally, these methods do not explicitly model global context information, which is crucial for a task such as crowd counting. In a different approach, Marsden \\textit{et al. } \\cite{marsden2016fully} attempt to address the scale issue by performing multi-scale averaging during the prediction phase. While simple and effective, this results in an inefficient inference stage. Additionally, these methods do not explicitly encode the global context present in an image, which can be crucial for improving the count performance. To this end, a few approaches model local and global context \\cite{sheng2016crowd,skaug2016end} by considering key spatial and semantic information present in the image. \n\nIn an entirely different approach, a few methods \\cite{marsden2017resnetcrowd,sindagi2017cnnbased} take advantage of multi-task learning and incorporate high-level priors into the network. For instance, Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased} simultaneously learn density estimation and a high-level prior in the form of crowd count classification. While they demonstrated a high performance gain by learning the additional task of crowd density level classification, the number of density levels is dataset dependent and needs to be carefully chosen based on the density levels present in the dataset. \n\n\n\n\n\n\n\n\\section{Datasets and results}\n\\label{sec:datasets_and_results}\nA variety of datasets have been created over the last few years, driving researchers to create models with better generalization abilities. 
While the earlier datasets usually contain low density crowd images, the most recent ones focus on high density crowds, thus posing numerous challenges such as scale variations, clutter and severe occlusion. The creation of these large-scale datasets has motivated recent approaches to develop methods that cater to such challenges. In this section, we review five key datasets \\cite{chan2008privacy,chen2012feature,idrees2013multi,zhang2015cross,zhang2016single} followed by a discussion on the results of CNN-based approaches and recent traditional methods that were not included in the earlier surveys. \n\n\\begin{table*}[t!]\n\\caption{Summary of various datasets.}\n\\begin{center}\n\\vskip-15pt\\begin{tabular}{|l|c|c|c|c|c|c|}\n\\hline\nDataset & No. of images & Resolution & Min & Ave & Max & Total count \\\\\n\\hline\nUCSD \\cite{chan2008privacy} & 2000 & 158x238 & 11 & 25 & 46 & 49,885\\\\\n\\hline\nMall \\cite{chen2012feature} & 2000 & 320x240 & 13 & - & 53 & 62,325\\\\\n\\hline\nUCF\\textunderscore CC\\textunderscore 50 \\cite{idrees2013multi} & 50 & Varied & 94 & 1279 & 4543 & 63,974\\\\\n\\hline\nWorldExpo '10 \\cite{zhang2016data,zhang2015cross} & 3980 & 576x720 & 1 & 50 & 253 & 199,923\\\\\n\\hline\nShanghaiTech Part A \\cite{zhang2016single} & 482 & Varied & 33 & 501 & 3139 & 241,677\\\\\n\\hline\nShanghaiTech Part B \\cite{zhang2016single} & 716 & 768x1024 & 9 & 123 & 578 & 88,488\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:datasetsummary}\n\\end{table*}\n\n\\begin{figure*}[ht!]\n\\begin{center}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig18}\n\\captionof*{figure}{(a)}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig19}\n\\captionof*{figure}{(b)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig20}\n 
\\captionof*{figure}{(c)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig21}\n\\captionof*{figure}{(d)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig22}\n\\captionof*{figure}{(e)}\n\\end{minipage}\n\\begin{minipage}{0.163\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig23}\n\\captionof*{figure}{(f)}\n\\end{minipage}\n\\end{center}\n\\vskip-10pt \\captionof{figure}{Sample images from various datasets. (a) UCSD \\cite{chan2008privacy} (b) Mall \\cite{chen2012feature} (c) UCF\\textunderscore CC\\textunderscore 50 \\cite{idrees2013multi} (d) WorldExpo '10 \\cite{zhang2015cross} (e) Shanghai Tech Part A \\cite{zhang2016single} (f) Shanghai Tech Part B \\cite{zhang2016single}. It can be observed that in the case of the UCSD and Mall datasets, the images come from the same video sequence, providing no variation in perspective across images.}\n\\label{fig:dataset}\n\\end{figure*}\n\n\n\\subsection{Datasets}\n\\textbf{UCSD dataset}: The UCSD dataset \\cite{chan2008privacy} was among the first datasets to be created for counting people. The dataset was collected from a video camera at a pedestrian walkway. The dataset consists of 2000 frames of size 238$\\times$158 from a video sequence along with ground truth annotations of each pedestrian in every fifth frame. For the rest of the frames, linear interpolation is used to create the annotations. A region-of-interest is also provided to ignore unnecessary moving objects such as trees. The dataset contains a total of 49,885 pedestrian instances and is split into training and test sets. While the training set contains the frames with indices 600 to 1399, the test set contains the remaining 1200 frames. This dataset has a relatively low density crowd with an average of around 25 people per frame (see Table \\ref{tab:datasetsummary}) and, since the dataset was collected from a single location, there is no variation in the scene perspective across images. 
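A small sketch of the index bookkeeping implied by this split (a hypothetical illustration, using the frame indices stated above):

```python
# UCSD protocol as described above: frames 600-1399 form the training
# set; the remaining frames of the 2000-frame sequence form the test set.
frames = range(2000)
train = [i for i in frames if 600 <= i <= 1399]
test = [i for i in frames if not (600 <= i <= 1399)]
assert len(train) == 800 and len(test) == 1200
```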
\n\\linebreak\n\n\n\\noindent \\textbf{Mall dataset}: Considering the limited scene variation in the UCSD dataset, Chen \\textit{et al. } in \\cite{chen2012feature} collected a new Mall dataset with diverse illumination conditions and crowd densities. The dataset was collected using a surveillance camera installed in a shopping mall. Along with having various density levels, it also has different activity patterns (static and moving crowds). Additionally, the scene contained in the dataset has severe perspective distortion, resulting in large variations in the size and appearance of objects. The dataset also presents the challenge of severe occlusions caused by scene objects, e.g. stalls and indoor plants along the walking path. The video sequence in the dataset consists of 2000 frames of size 320$\\times$240 with 62,325 instances of labelled pedestrians (see Table \\ref{tab:datasetsummary}). The first 800 frames are used for training and the remaining 1200 frames are used for evaluation. In comparison to the UCSD dataset, the Mall dataset has images of relatively higher crowd density. However, neither dataset has any variation in the scene perspective across images, since each is part of a single continuous video sequence. \n\\linebreak\n\n\\noindent \\textbf{UCF\\textunderscore CC\\textunderscore 50 dataset}: The UCF\\textunderscore CC\\textunderscore 50 dataset \\cite{idrees2013multi} is the first truly challenging dataset, constructed to include a wide range of densities and diverse scenes with varying perspective distortion. The dataset was created from publicly available web images. In order to capture diversity in the scene types, the authors collected images with different tags such as concerts, protests, stadiums and marathons. It contains a total of 50 images of varying resolutions with an average of 1280 individuals per image. A total of 63,974 individuals were labelled in the entire dataset. The number of individuals varies from 94 to 4543, indicating a large variation across the images. 
The only drawback of this dataset is that only a limited number of images are available for training and evaluation. Considering the low number of images, the authors defined a cross-validation protocol for training and testing their approach, in which the dataset is divided into sets of 10 and five-fold cross-validation is performed. The challenges posed by this dataset are so enormous that even the results of recent CNN-based state-of-the-art approaches on this dataset are far from optimal.\n\\linebreak\n\n\\noindent \\textbf{WorldExpo '10 dataset}: Since some of the earlier approaches and datasets focused primarily on single-scene counting, Zhang \\textit{et al. } \\cite{zhang2015cross} introduced a dataset for the purpose of cross-scene crowd counting. The authors attempted to perform data-driven cross-scene crowd counting, for which they collected a new large-scale dataset that includes 1132 annotated video sequences captured by 108 surveillance cameras, all from the Shanghai 2010 WorldExpo event. Large diversity in the scene types is ensured by collecting videos from cameras with disjoint bird's-eye views. The dataset consists of a total of 3980 frames of size 576 $\\times$ 720 with 199,923 labelled pedestrians. The dataset is split into two parts: a training set consisting of 1,127 one-minute long video sequences from 103 scenes, and a test set consisting of 5 one-hour long video sequences from 5 different scenes. Each test scene consists of 120 labelled frames with the crowd count varying from 1 to 220. Though an attempt is made to capture diverse scenes with varying density levels, the diversity is limited to only 5 scenes in the test set and the maximum crowd count is limited to 220. Hence, the dataset is not sufficient for evaluating approaches designed for extremely dense crowds in a variety of scenes. \n\\linebreak\n\n\n\\begin{table*}[htp!]\n\\centering\n\\caption{Comparison of results on various datasets. 
The CNN-based approaches provide significant improvements over traditional approaches that rely on hand-crafted representations. Further, among the CNN-based methods, scale-aware and context-aware approaches tend to achieve lower count errors.}\n\\label{tab:results}\n\\resizebox{0.70\\textwidth}{!}{%\n\\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{1}{|l|}{} & Dataset & \\multicolumn{2}{c|}{UCSD} & \\multicolumn{2}{c|}{Mall} & \\multicolumn{2}{c|}{UCF CC 50} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}WorldExpo\\\\ '10\\end{tabular}} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}Shanghai\\\\ Tech-A\\end{tabular}} & \\multicolumn{2}{c|}{\\begin{tabular}[c]{@{}c@{}}Shanghai\\\\ Tech-B\\end{tabular}} \\\\ \\hline\n\\multicolumn{1}{|l|}{\\begin{tabular}[c]{@{}l@{}}Approach\\\\ type\\end{tabular}} & Method & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE & MAE & MSE \\\\ \\hline\n\\multirow{6}{*}{\\rotatebox[origin=c]{90}{\\parbox[c]{4.5cm}{\\centering Traditional approaches}}} & \\begin{tabular}[c]{@{}l@{}}Multi-source multi-scale\\\\ Idrees \\textit{et al. } \\cite{idrees2013multi}\\end{tabular} & & & & & 468.0 & 590.3 & & & & & & \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}Cumulative Attributes\\\\ Chen \\textit{et al. } \\cite{chen2013cumulative}\\end{tabular} & 2.07 & 6.86 & 3.43 & 17.07 & & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Density learning\\\\ Lempitsky \\textit{et al. } \\cite{lempitsky2010learning}\\end{tabular} & 1.7 & & & & 493.4 & 487.1 & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Count forest\\\\ Pham \\textit{et al. } \\cite{pham2015count}\\end{tabular} & 1.61 & 4.40 & 2.5 & 10.0 & & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Exemplar density\\\\ Wang \\textit{et al. 
} \\cite{wang2016fast}\\end{tabular} & 1.98 & 1.82 & 2.74 & \\underline{\\textbf{2.10}} & & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Random projection forest\\\\ Xu \\textit{et al. } \\cite{xu2016crowd}\\end{tabular} & 1.90 & 6.01 & 3.22 & 15.5 & & & & & & & & \\\\ \\hline\n\\multirow{12}{*}{\\rotatebox[origin=c]{90}{\\parbox[c]{9.5cm}{\\centering CNN-based approaches}}} & \\begin{tabular}[c]{@{}l@{}}Cross-scene\\\\ Zhang \\textit{et al. } \\cite{zhang2015cross}\\end{tabular} & 1.60 & 3.31 & & & 467.0 & 498.5 & 12.9 & & 181.8 & 277.7 & 32.0 & 49.8 \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Deep + shallow\\\\ Boominathan \\textit{et al. } \\cite{boominathan2016crowdnet}\\end{tabular} & & & & & 452.5 & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}M-CNN\\\\ Zhang \\textit{et al. } \\cite{zhang2016single}\\end{tabular} & \\underline{\\textbf{1.07}} & \\underline{\\textbf{1.35}} & & & 377.6 & 509.1 & 11.6 & & 110.2 & 173.2 & 26.4 & 41.3 \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}CNN-boosting\\\\ Walach and Wolf \\cite{walach2016learning}\\end{tabular} & 1.10 & & \\underline{\\textbf{2.01}} & & 364.4 & & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Hydra-CNN\\\\ Onoro \\textit{et al. } \\cite{onoro2016towards}\\end{tabular} & & & & & 333.7 & 425.2 & & & & & & \\\\ \\cline{2-14} \n & \\begin{tabular}[c]{@{}l@{}}Joint local \\& global count\\\\ Shang \\textit{et al. } \\cite{skaug2016end}\\end{tabular} & & & & & \\underline{\\textbf{270.3}} & & 11.7 & & & & & \\\\ \\cline{2-14} \n \n \n & \\begin{tabular}[c]{@{}l@{}}MoCNN \\\\ Kumagai \\textit{et al. } \\cite{kumagai2017mixture}\\end{tabular} & & & 2.75 & 13.4 & 361.7 & 493.3 & & & & & & \n \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}FCN \\\\ Marsden \\textit{et al. 
} \\cite{marsden2016fully}\\end{tabular} & & & & & 338.6 & 424.5 & & & 126.5 & 173.5 & 23.76 & 33.12\n \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}CNN-pixel \\\\ Kang \\textit{et al. } \\cite{kang2017beyond}\\end{tabular} & 1.12 & 2.06 & & & 406.2 & 404.0 & 13.4 & & & & & \n \\\\ \\cline{2-14} \n \n & \\begin{tabular}[c]{@{}l@{}}Weighted V-LAD\\\\ Sheng \\textit{et al. } \\cite{sheng2016crowd}\\end{tabular} & 2.86 & 13.0 & 2.41 & 9.12 & & & & & & & & \n \\\\ \\cline{2-14}\n \n & \\begin{tabular}[c]{@{}l@{}}Cascaded-MTL\\\\ Sindagi \\textit{et al. } \\cite{sindagi2017cnnbased}\\end{tabular} & & & & & 322.8 & {\\underline \\textbf{341.4 }} & & & 101.3 & 152.4 & {\\underline \\textbf{20.0}} & {\\underline \\textbf{31.1}}\n \\\\ \\cline{2-14} \n \n \n \n & \\begin{tabular}[c]{@{}l@{}}Switching-CNN\\\\ Sam \\textit{et al. } \\cite{sam2017switching}\\end{tabular} & 1.62 & 2.10 & & & 318.1 & 439.2 & {\\underline \\textbf{9.4}} & & {\\underline \\textbf{90.4} } & {\\underline \\textbf{ 135.0}} & 21.6 & 33.4 \\\\ \\hline\n \n \n \n \n\\end{tabular}%\n}\n\\end{table*}\n\n\n\\noindent \\textbf{Shanghai Tech dataset}: Zhang \\textit{et al. } \\cite{zhang2016single} introduced a new large-scale crowd counting dataset consisting of 1198 images with 330,165 annotated heads. The dataset is among the largest ones in terms of the number of annotated people and it contains two parts: Part A and Part B. Part A consists of 482 images that are randomly chosen from the Internet whereas Part B consists of images taken from the streets of metropolitan areas in Shanghai. Part A contains images of considerably higher density than Part B. Both parts are further divided into training and evaluation sets. The training and test sets of Part A contain 300 and 182 images, respectively, whereas those of Part B contain 400 and 316 images, respectively. The dataset succeeds in providing a challenging benchmark with diverse scene types and varying density levels. 
However, the number of images per density level is not uniform, making the training and evaluation biased towards low density levels. Nevertheless, the complexities present in this dataset, such as varying scales and perspective distortion, have created new opportunities for more complex CNN network designs. \n\nSample images from the five datasets are shown in Fig. \\ref{fig:dataset}. The datasets are also summarized in Table \\ref{tab:datasetsummary}. It can be observed that the UCSD and Mall datasets have relatively low density images and typically focus on a single scene type. In contrast, the other datasets have significant variations in the density levels along with different perspectives across images. \n\n\\subsection{Discussion on results}\nResults of the recent traditional approaches along with CNN-based methods are tabulated in Table \\ref{tab:results}. The count estimation errors are reported directly from the respective original works. The following standard metrics are used to compare different methods:\n\\begin{equation}\nMAE = \\frac{1}{N}\\sum_{i=1}^{N}|y_i-y'_i|,\n\\end{equation}\n\\begin{equation}\nMSE = \\sqrt{\\frac{1}{N}\\sum_{i=1}^{N}|y_i-y'_i|^2},\n\\end{equation}\nwhere MAE is the mean absolute error, MSE is the mean squared error, $N$ is the number of test samples, $y_i$ is the ground truth count and $y'_i$ is the estimated count corresponding to the $i^{th}$ sample. We make the following observations regarding the results:\n\n\\begin{itemize}[noitemsep]\n\\item In general, CNN-based methods outperform the traditional approaches across all datasets. \n\\item While the CNN-based methods are especially effective in large density crowds with diverse scene conditions, the traditional approaches suffer from high error rates in such scenarios. \n\\item Among the CNN-based methods, the most performance improvement is achieved by scale-aware and context-aware models. 
It can be observed from Table \\ref{tab:results} that a reduction in count error is largely driven by the increase in the complexity of CNN models (due to the addition of context and scale information). \n\\item While the multi-column CNN architecture \\cite{zhang2016single} achieves state-of-the-art results on three datasets (UCSD, WorldExpo '10 and ShanghaiTech), the CNN-boosting approach of \\cite{walach2016learning} achieves the best results on the Mall dataset. The best results on the UCF\\textunderscore CC\\textunderscore 50 dataset are achieved by the joint local and global count approach \\cite{skaug2016end} and Hydra-CNN \\cite{onoro2016towards}. \n\\item The work in \\cite{walach2016learning} suggests that layered boosting can achieve performance comparable to that of scale-aware models. \n\\item The improvements obtained by selective sampling in \\cite{wang2015deep} and \\cite{walach2016learning} suggest that it helps to obtain unbiased performance. \n\\item Whole image-based methods such as Zhang \\textit{et al. } \\cite{zhang2016single} and Shang \\textit{et al. } \\cite{skaug2016end} are computationally less complex at prediction time and have been shown to achieve better results than patch-based techniques. \n\\item Finally, techniques such as layered boosting and selective sampling \\cite{onoro2016towards,wang2016fast} not only reduce the estimation error but also shorten the training time significantly. 
\n\\end{itemize}\n\n\n\n\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig24}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig25}\n\\end{minipage}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig26}\n\\end{minipage}\n\\end{center}\n\n\\begin{center}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig27}\n\\captionof*{figure}{(a)}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig28}\n\\captionof*{figure}{(b)}\n\\end{minipage}\n\\begin{minipage}{0.33\\linewidth}\n\\includegraphics[width=\\linewidth]{Fig29}\n\\captionof*{figure}{(c)}\n\\end{minipage}\n\\end{center}\n\\vskip-10pt \\captionof{figure}{Results of Zhang \\textit{et al. } \\cite{zhang2016single} on the ShanghaiTech dataset. (a) Input image, (b) ground-truth density map, (c) estimated density map. It can be observed that though the method produces accurate crowd count estimates, the estimated density maps are of poor quality.}\n\\label{fig:quality}\n\\end{figure}\n\n\n\\begin{figure}[htp!]\n\\begin{center}\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=0.47\\linewidth]{Fig30}\n\\includegraphics[width=0.47\\linewidth]{Fig31}\n\\end{minipage}%\n\\vskip-8pt\n\\captionof{figure}{Distribution of crowd counts in the ShanghaiTech dataset. It can be observed that the dataset is highly imbalanced.}\n\\label{fig:shanghaitech}\n\\end{center}\n\\end{figure}\n\n\\section{Future research directions}\n\\label{sec:future_research}\n\nBased on the analysis of various methods and results from Sections \\ref{sec:survey_cnn} and \\ref{sec:datasets_and_results} and the trend of other developments in computer vision, we believe that deeper CNN-based architectures will dominate further research in the field of crowd counting and density estimation. 
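As an aside, the two evaluation metrics defined earlier are straightforward to compute. The following is a minimal Python sketch (the array names and example counts are illustrative; note that the MSE convention used throughout this survey is the root of the mean squared error):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between ground-truth and estimated counts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    """MSE as defined in this survey: the root of the mean squared error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative example: three test images with ground-truth and estimated counts.
print(mae([100, 250, 400], [110, 240, 430]))  # (10 + 10 + 30) / 3
print(mse([100, 250, 400], [110, 240, 430]))  # sqrt((100 + 100 + 900) / 3)
```

Because MSE squares the residuals before averaging, a single badly mis-estimated high-density image dominates it much more than it dominates MAE, which is why the two metrics are reported together.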
We make the following observations regarding future trends in research on crowd counting:\n\n\\begin{enumerate}[noitemsep]\n\\item Given the requirement of large datasets for training deep networks, collection of large scale datasets (especially for extremely dense crowds) is essential. Though many datasets exist currently, only one of them (the UCF\\textunderscore CC\\textunderscore 50 \\cite{idrees2013multi}) caters to large density crowds. However, the size of that dataset is too small for training deeper networks. Though Shanghai Tech \\cite{zhang2016single} attempts to capture large density crowds, the number of images per density level is non-uniform, with a large number of images available for low density levels and very few samples for high density levels (as shown in Fig. \\ref{fig:shanghaitech}). \n\n\\item Considering the difficulty of training deep networks for new scenes, it would be important to explore how to leverage models trained on existing sources. Most of the existing methods retrain their models on a new scene; this is impractical in real world scenarios as it would be expensive to obtain annotations for every new scene. Zhang \\textit{et al. } \\cite{zhang2015cross} attempted to address this issue by performing data-driven training without the need for labelled data for new scenes. In another approach, Liu \\textit{et al. } \\cite{liu2015bayesian} considered the problem of transfer learning for crowd counting and introduced a model adaptation technique for a Gaussian process counting model. Considering the source model as a prior and the target dataset as a set of observations, the components are combined into a predictive distribution that captures information in both the source and target datasets. 
However, the idea of transfer learning or domain adaptation \\cite{VMP_SPM_DA_2015} for crowd scenes is relatively unexplored and is a nascent area of research.\n\n\\item Most crowd counting and density estimation methods have been designed for, and evaluated on, either single images or videos alone. Combining the techniques developed separately for these two settings is a non-trivial task. Development of low-latency methods that can operate in real-time for counting people in crowds from videos is another interesting problem to be addressed in the future. \n\n\n\n\\item Another key issue ignored by earlier research is the quality of the estimated crowd density maps. Many existing CNN-based approaches have a number of max-pooling layers in their networks, compelling them to regress on down-sampled density maps. Also, most methods optimize the traditional Euclidean loss, which is known to have certain disadvantages \\cite{johnson2016perceptual1}. Regressing on down-sampled density maps using the Euclidean loss results in low quality density maps. Fig. \\ref{fig:quality} demonstrates the results obtained using the state-of-the-art method \\cite{zhang2016single}. It can be observed that though accurate count estimates are obtained, the quality of the density maps is poor. As a result, these poor quality maps adversely affect other higher level cognition tasks which depend on them. Recent work on style-transfer \\cite{zhang17ICCV}, image de-raining \\cite{zhang2017image} and image-to-image translation \\cite{pix2pix2016} has demonstrated promising results from the use of additional loss functions such as adversarial loss and perceptual loss. In principle, density estimation can be considered an image-to-image translation problem and it would be interesting to see the effect of these recent loss functions. 
Generating high quality density maps along with low count estimation error would be another important issue to be addressed in the future.\n\n\n\n\n\n\\item Finally, considering the advances achieved by scale-aware \\cite{zhang2016single,onoro2016towards} and context-aware models \\cite{skaug2016end}, we believe designing networks that incorporate additional contextual and scale information will enable further progress. \n\n\n\\end{enumerate}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nThis article presented an overview of recent advances in CNN-based methods for crowd counting and density estimation. In particular, we grouped the various crowd counting methods into traditional approaches (that use hand-crafted features) and CNN-based approaches. The CNN-based approaches are further categorized based on the training process and the network properties. Obviously, not all of the literature on crowd counting can be covered; hence, we have chosen a representative subset of the latest approaches for a detailed analysis and review. We also reviewed the results demonstrated by various traditional and CNN-based approaches to conclude that CNN-based methods are more adept at handling large density crowds with variations in object scales and scene perspective. Additionally, we observed that incorporating scale and contextual information in the CNN-based methods drastically reduces the estimation error. Finally, we identified some of the most compelling challenges and issues that confront research in crowd counting and density estimation using computer vision and machine learning approaches. \n\n\\vspace{5pt}\n\\noindent \\textbf{Acknowledgement}\nThis work was supported by US Office of Naval Research (ONR) Grant YIP N00014-16-1-3134.\n\n\n\\bibliographystyle{model2-names}\n\n\\section{Introduction to dibosons}\nProcesses involving multiple massive bosons probe right at the\ncore of the SM, i.e. 
its electro-weak symmetries and the resulting\ngauge boson structure. Our current understanding requires some -- yet\nundetected --\nmechanism to regularize processes like $W_L W_L \\rightarrow W_L W_L$ at the\n$O(1)$ TeV scale in order to maintain unitarity \\cite{Chanowitz:2004gk}.\nThis hints at an experimental no-lose situation where we will either\nfind the scalar\nresponsible for the Higgs mechanism, or find new dynamics\ninteracting strongly with the longitudinal gauge modes. If neither case\nis seen we will be obliged to go back to the sub-TeV region and ask ourselves\nwhat it is that we do not understand at the fundamental level.\n\nOur lack of knowledge concerning the underlying electro-weak symmetry\nbreaking (EWSB) mechanism of the SM desperately calls for experimental input to\nguide us. Clearly, as indicated by the discussion above, diboson\nscattering is likely to provide the ultimate EWSB exploration tool.\nUnfortunately, the cross-section for diboson scattering is expected to\nbe unobservable at the Tevatron, in contrast to the LHC \\cite{Butterworth:2002tt}.\nAnother way to get a handle on the EWSB is to reconstruct massive triple gauge\nbosons, but here too the expected Tevatron cross-section is beyond reach.\nIt is primarily for inclusive dibosons that we are currently gaining\nsensitivity to anomalous gauge interactions, where new phase-space regions\nare opening up with the increase of integrated Tevatron luminosity.\n\nLEP 2 puts stringent limits on\nthe anomalous triple gauge couplings (ATGC) \\cite{Bruneliere:2004ab}, which\nare the lowest-order general effective theory operator couplings related\nto the diboson final state. However, the LEP 2 energy scale\n$\\sqrt{\\hat{s}} \\sim 200$ GeV is sufficiently small compared to the new\nphysics scale $\\Lambda$ that higher-order operators can be neglected. 
At\nthe Tevatron this is no longer the case, since $O(1)$ TeV resonances can\nbe produced on-shell and a form factor must be\nintroduced in order to maintain unitarity of the model \\cite{Hagiwara:1989mx}.\nThe form factor is equivalent to an infinite series of higher-order operators\nand implies a non-linear extension of the linear model commonly used\nat LEP. For this reason the quoted limits on the couplings refer to\nslightly different assumptions at the two accelerators. One should be\naware that Tevatron measurements are in many ways complementary to\nthose made at LEP; e.g. new unexpected\non-shell dynamics may kick in at higher energies and with different\nproduction mechanisms.\n\n\\section{Tevatron measurements}\nAs can be seen from table \\ref{tab:diboxs}, it is only recently that\nmassive diboson signals have become statistically significant at hadron colliders.\n\\begin{table}[t]\n\\caption{New (**) and recent diboson measurements at the Tevatron. The\nphoton ($\\gamma$) has a transverse momentum cut of 7 (8) GeV for the\nCDF (D0) measurements. 
All cross-sections are within the theoretical\nexpectations.\n\\label{tab:diboxs}}\n\\vspace{0.4cm}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n& \\multicolumn{2}{|c|}{CDF} & \\multicolumn{2}{|c|}{D0} \\\\\n\\hline\nDiboson & Events & $\\sigma$[pb] & Events & $\\sigma$[pb] \\\\\n\\hline\n$W(l\\nu)\\gamma \\times BR(l\\nu)$ & 208(200\/pb)& $18.1 \\pm 3.1$ &\n141(162\/pb) & $14.8 \\pm 1.9$ \\\\\n$Z(ll)\\gamma \\times BR(ll)$ & 66 (200\/pb)& $4.6 \\pm 0.6$ &\n244(320\/pb) & $4.2 \\pm 0.5$ \\\\\n$W(l\\nu)W(l\\nu)$ & 12(200\/pb) &\n\\begin{minipage}{1in}\n$14.6^{+5.8}_{-5.1}$(stat) $^{+1.8}_{-3.0}$(sys) $\\pm 0.9$(lum)\n\\end{minipage}\n& 17(252\/pb) &\n\\begin{minipage}{1in}\n$13.8^{+4.3}_{-3.8}$(stat) $^{+1.2}_{-0.9}$(sys) $\\pm 0.9$(lum)\n\\end{minipage}\n\\\\\n$W(l\\nu)(W(jj)+Z(jj))$ & n.a.(350\/pb) & $<36$ (**) & & \\\\\n$W(l\\nu)Z(ll)$ & n.a.(825\/pb)& $<6.4$ (**) & n.a.(320\/pb) & $<13.3$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\nThe measured cross-sections decrease with the diboson mass, and the first\nprocess to have been firmly established during the last two years is WW to\nleptons. This is very different from LEP 2, where thousands of WW pairs have\nbeen observed in many decay channels, both leptonic and purely hadronic\nmodes. However, constraints on new physics are dominated by high\nenergy scales, and the inclusive cross-section is not the most efficient\nobservable at the Tevatron for anomalous gauge interactions. A closer\nlook at the anomalous cross-section contributions \\cite{Hagiwara:1989mx}\nreveals that they, apart from a boson angular dependence, scale with\nthe parton center-of-mass energy ($\\sqrt{\\hat{s}}$). Thus the very high energy\ntail of the boson transverse momentum turns out to be an extremely sensitive and\nrobust observable for ATGC.\n\nIn the following subsections I will report on two brand-new measurements\nof WW and WZ dibosons from CDF. 
It is interesting to note that even though the\nprocesses themselves are just below the threshold of observation, they\nhave significant constraining power for ATGC.\n\n\\subsection{The process $WZ \\rightarrow l\\nu ll$}\n\n\\begin{figure}\n\\begin{center}\n\\psfig{figure=feyn_wz.eps,width=12cm}\n\\caption{Tree level graphs for WZ.\n\\label{fig:wz}}\n\\end{center}\n\\end{figure}\n\nThe WZ process, see figure \\ref{fig:wz}, is particularly interesting among\nthe dibosons.\nSince it could not be produced at tree level at LEP, we have so far no\ndirect experimental measurement of the WWZ vertex. Furthermore,\nthe absence of interference with other triple gauge vertices at tree level\nmakes this channel unique, since it can provide an unambiguous\nhandle on the WWZ couplings in case of any observed anomaly that needs to be\ndisentangled in e.g. the WW production.\n\nA new CDF search for WZ to leptons has been made using data\nequivalent to 825\/pb of integrated luminosity of proton-antiproton collisions\nat $\\sqrt{s}=1.96$ TeV. The events are triggered by a lepton (isolated\nelectron or muon) with at least 20 GeV transverse momentum. Two more leptons\nare then required with at least 10 GeV transverse momentum. A neutrino-like\nsignature is selected by requiring at least 25 GeV, see figure\n\\ref{fig:wzmet}, of missing transverse\nenergy. One Z is selected from an opposite-sign lepton pair within the\ndilepton mass window $76 < M_{ll} < 106$ GeV, see figure \\ref{fig:wzmll},\nand ZZ events are vetoed by removing\nevents with tracks that fall within $76 < M_{trk,l} < 106$ GeV using the\nremaining unmatched lepton. Due to the specific multilepton signature the\nselected candidates have a relatively small expected background contamination.\nAn overview of measured and estimated events is shown in\ntable \\ref{tab:wz}. 
Only two candidates are observed and a limit is\nderived on the total cross-section\n$$\n\\sigma(p\\bar{p} \\rightarrow WZ)< 6.4 \\textrm{ pb (95\\% C.L.)}.\n$$\nThis is intriguingly close to the theoretical NLO value of around 4 pb.\nLimits on the WWZ ATGC have not yet been extracted, but we know\nfrom a recent and similar D0 measurement \\cite{Abazov:2005ys}\n(see also the corresponding entry in table 1)\nthat this analysis has the potential to provide strong constraints.\n\n\\begin{table}[t]\n\\caption{Measured and expected number of events for WZ to leptons\nat CDF using 825\/pb of data.\n\\label{tab:wz}}\n\\vspace{0.4cm}\n\\begin{center}\n\\begin{tabular}{|l|l|}\n\\hline\nProcess & Events \\\\\n\\hline\n$WZ$ & $3.72 \\pm 0.02$(stat) $\\pm 0.15$(sys) \\\\\n\\hline\n$ZZ$ & $0.50 \\pm 0.01$(stat) $\\pm 0.05$(sys) \\\\\n$Z\\gamma$ & $0.03 \\pm 0.01$(stat) $\\pm 0.01$(sys) \\\\\n$t\\bar{t}$ & $0.05 \\pm 0.01$(stat) $\\pm 0.01$(sys) \\\\\n$Z+jets$ & $0.34 \\pm 0.07$(stat) $^{+0.16}_{-0.10}$(sys) \\\\\n\\hline\nTotal background: & $0.92 \\pm 0.07$(stat) $\\pm 0.15$(sys) \\\\\n\\hline\nTotal data: & 2 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[p]\n\\begin{center}\n\\psfig{figure=h_Met_noMetCut_wsf.eps,width=13cm}\n\\caption{Missing transverse energy in the WZ events. The cut for the signal\nregion is also shown.\n\\label{fig:wzmet}}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[p]\n\\begin{center}\n\\psfig{figure=trileptons_Mll_Met.eps,width=13cm}\n\\caption{Scatter plot in the missing transverse energy and dilepton mass\nplane for WZ. 
Data is compared to expectation.\n\\label{fig:wzmll}}\n\\end{center}\n\\end{figure}\n\n\\subsection{The process $WW+WZ \\rightarrow l\\nu jj$}\n\nSemi-leptonic decay modes of WW events have not yet been observed at hadron\ncolliders, see figure \\ref{fig:ww}, where one of the bosons decays into quarks\nand the other one into leptons.\n\\begin{figure}\n\\begin{center}\n\\psfig{figure=feyn_ww.eps,width=13cm}\n\\caption{Tree level and LO Higgs graphs for WW.\n\\label{fig:ww}}\n\\end{center}\n\\end{figure}\nThe reconstruction is challenging due\nto the limited dijet mass resolution of about 10\\% and the W+jets background,\nwhich is $O(100)$ times larger than the signal after event preselection\nat the Tevatron. Note that events from diagrams in figure \\ref{fig:wz} (replace\nthe leptonic Z decay with quarks) cannot be excluded, hence they are included\nin the signal. Still, the hadronic decay modes are interesting since\nthey have a larger event yield than the purely leptonic modes and they offer\nthe important ability to reconstruct the W transverse momentum, which we\nknow is a sensitive handle on the ATGC.\n\nIn order to constrain the ATGC and to verify the SM rate,\na new CDF search for WW+WZ to leptons, missing transverse energy and jets\nhas been made using data equivalent to 350\/pb of integrated luminosity\nof proton-antiproton collisions at $\\sqrt{s}=1.96$ TeV. The events are\ntriggered by an isolated lepton with 25 GeV transverse energy (20 GeV\ntransverse momentum for muons). An inclusive set of leptonic W decays is\nselected by requiring the missing transverse energy to be at least 25 GeV.\nThe event selection then proceeds by requiring two jets with at least\n15 GeV transverse energy and with a dijet mass in the range $32 < M_{jj} <\n184$ GeV. Events with a W transverse mass below 25 GeV are additionally\nrejected to reduce the multijet background. Proximity cuts are also applied between the\nlepton and jets. 
However, care must be taken not to reject narrow jets, since\nthe interesting high transverse momentum bosons have narrow jets due to the\nboost. The expected number of events is shown in table \\ref{tab:ww}.\n\n\\begin{table}[t]\n\\caption{Measured and expected number of events for WW+WZ to leptons,\nmissing transverse energy and jets at CDF using 350\/pb of data.\n\\label{tab:ww}}\n\\vspace{0.4cm}\n\\begin{center}\n\\begin{tabular}{|l|c|r|}\n\\hline\nProcess & Uncertainty & Events \\\\\n\\hline\n$WW$ & 15\\% & 142.0 \\\\\n$WZ$ & 15\\% & 18.2 \\\\\n\\hline\n$W+jets$ & 20\\% & 6261.0 \\\\\n$Multijets$ & 40\\% & 263.4 \\\\\n$W(\\tau)+jets$ & 20\\% & 171.0 \\\\\n$Z+jets$ & 20\\% & 154.0 \\\\\n$t\\bar{t}$ & 25\\% & 171.6 \\\\\n$t(t-ch)$ & 25\\% & 14.4 \\\\\n$t(s-ch)$ & 25\\% & 8.2 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nSince the theoretical uncertainty on the W+jets background is huge, we extract\nthe WW+WZ signal by assuming the SM ratio of WW to WZ and fitting the expected\nsignal and background dijet mass shapes to data, see figure \\ref{fig:wwxsfit}.\nThe fit gives 109 signal events with a statistical uncertainty of 110 events.\nSystematic uncertainties contribute an additional 54 events. From this\nwe estimate the upper limit on the WW+WZ cross-section to be\n$$\n\\sigma(p\\bar{p} \\rightarrow WW+WZ) < 36 \\textrm{ pb}.\n$$\nThe SM expectation is 16 pb.\n\nThe ATGC are extracted from the signal region defined to be the\ndijet mass window $56