diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhuze" "b/data_all_eng_slimpj/shuffled/split2/finalzzhuze" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhuze" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{intro} \n\n\n\nSee \\cite{parti,KV,EGT} for definitions and notation. \n\nWe shall need the following theorem from \\cite{nd}. \n\n\\begin{theorem} \\label{limit} \nIf $ \\lambda $ is a singular cardinal, then \nan ultrafilter is $(\\lambda ,\\lambda )$-regular\nif and only if it is either \n$(\\cf\\lambda ,\\cf\\lambda )$-regular\nor\n$(\\lambda^+ ,\\lambda^+ )$-regular. \n\\end{theorem}\n\n\\begin{corollary} \\label{cor}\nSuppose that $ \\lambda $ is a singular cardinal,\nand consider the topological space $X$ \nobtained by forming the disjoint union of the topological \nspaces $ \\lambda ^+$ and $\\cf \\lambda $, both\nendowed with the order topology. \n\nThen, for every ultrafilter $D$, the space $X$\nis $D$-compact if and only if $D$ is not \n$( \\lambda, \\lambda )$-regular. \n \nThus, $X$ is productively $[ \\lambda', \\mu']$-compact if and only if\nthere exists a \n$( \\lambda', \\mu' )$-regular\n not $( \\lambda, \\lambda )$-regular\nultrafilter. In particular, $X$ is not productively $[ \\lambda, \\lambda ]$-compact. \n \\end{corollary} \n\n\\begin{proof}\nBy Theorem \\ref{limit}, $D$ is not \n$( \\lambda, \\lambda )$-regular\nif and only if it is neither \n$(\\cf\\lambda ,\\cf\\lambda )$-regular\nnor $(\\lambda^+ ,\\lambda^+ )$-regular. \n\nHence, by \\cite[Proposition 1]{topproc}, and since both\n$ \\lambda ^+$ and $\\cf \\lambda $ are regular cardinals,\n$D$ is not \n$( \\lambda, \\lambda )$-regular\nif and only if both $ \\lambda ^+$\nand $\\cf \\lambda $ are $D$-compact.\n This is clearly equivalent to $X$\nbeing $D$-compact.\n\nThe last statement is immediate from \\cite[Theorem 1.7]{C}, also stated\nin \\cite[Theorem 2]{topproc}. \n \\end{proof} \n\nLet $\\mathbf{2}= \\{0,1 \\} $ denote the two-elements topological space\nwith the discrete topology. \nIf $ \\lambda \\leq \\mu$ are cardinals, let \n$\\mathbf{2}^ \\mu $\nbe the Tychonoff product of $ \\mu $-many copies\nof $\\mathbf{2}$, and let\n$\\mathbf{2}^ \\mu_ \\lambda $\ndenote the subset of\n$\\mathbf{2}^ \\mu $\nconsisting of all those functions\n$h: \\mu \\to \\mathbf{2}$ such that\n$ \\left| \\{ \\alpha \\in \\mu| h( \\alpha )=1 \\} \\right| < \\lambda $.\n\nIn passing, let us mention that, when $ \\mu= \\aleph_\\omega$,\nthe space $\\mathbf{2}^ \\mu_ \\mu $ provides\nan example of a linearly Lindel\\\"of not Lindel\\\"of space.\nSee \\cite[Example 4.1]{AB}. Compare also \\cite[Example 4.2]{St}.\n\nNotice that \n$\\mathbf{2}^ \\mu_ \\lambda $ is a Tychonoff topological group\nwith a base of clopen sets.\n\nSet theoretically, $\\mathbf{2}^ \\mu_ \\lambda $\nis in a one to one correspondence (via characteristic functions) with\n$S_\\lambda(\\mu)$, the set of all subsets of $ \\mu$\nof cardinality $ < \\lambda $. \nSince many properties of ultrafilters are defined in terms of $S_\\lambda(\\mu)$,\nfor sake of convenience, in what follows we shall deal with\n$S_\\lambda(\\mu)$, rather than $\\mathbf{2}^ \\mu_ \\lambda $.\nHenceforth, we shall deal with the topology induced on $S_\\lambda(\\mu)$\nby the above correspondence.\n\nIn detail, $S_\\lambda(\\mu)$ is endowed with the smallest topology containing, as open sets, all sets of the form $X_\\alpha = \\{x\\in S_\\lambda(\\mu) | \\alpha\\in x\\}$ ($\\alpha$ varying in $\\mu$), as well as their complements. 
Thus, a base for the topology consists of all finite intersections of the above sets; that is, the elements of the base are the sets $\\{x\\in S_\\lambda(\\mu) | \\alpha_1\\in x, \\alpha_2\\in x , \\dots, \\alpha_n\\in x , \\beta_1\\not\\in x, \\beta_2\\not\\in x , \\dots, \\beta_m\\not\\in x \\}$, with $n,m$ varying in $\\omega$ and $\\alpha_1, \\dots, \\alpha_n, \\beta_1 , \\dots, \\beta_m$ varying in $\\mu$. \n\nNotice that this topology is finer than the topology on $S_\\lambda(\\mu)$ used in\n\\cite{topproc}.\n\nWith the above topology, $S_\\lambda(\\mu)$ and\n$\\mathbf{2}^ \\mu_ \\lambda $ are homeomorphic,\nthus $S_\\lambda(\\mu)$ can be given the structure \nof a Tychonoff topological group.\n\nNotice that if $ \\lambda \\leq \\mu$ then $S_\\lambda(\\mu)$\nis not $[\\lambda ,\\lambda ]$-compact. Indeed, for $ \\alpha \\in \\mu$,\nlet $Y_\\alpha = \\{ x \\in S_\\lambda(\\mu)| \\alpha \\not\\in x \\} $.\nIf $Z \\subseteq \\mu$ and $|Z|= \\lambda $ then\n$(Y_\\alpha )_{\\alpha\\in Z}$ is an open cover of \n$S_\\lambda(\\mu)$ by $\\lambda $-many sets,\nno subfamily of cardinality $<\\lambda $ of which covers $S_\\lambda(\\mu)$. \n\n\\begin{proposition} \\label{dsll} \nFor every ultrafilter $D$ and every cardinal $\\lambda$, \nthe topological space $S_\\lambda(\\lambda)$ is $D$-compact if and only if \n $D$ is not ($\\lambda,\\lambda$)-regular.\n\\end{proposition}\n\n\\begin{proof} \nSuppose that \n$D$ is an ultrafilter over $I$ and that\n$S_\\lambda(\\lambda)$ is $D$-compact. For every $f:I\\to S_\\lambda(\\lambda)$ there exists $x\\in S_\\lambda(\\lambda)$ such that $(f(i))_{i\\in I}$ $D$-converges to $x$. If $\\alpha\\in\\lambda $ and $\\{i\\in I|\\alpha\\in f(i)\\}\\in D$ then $\\alpha\\in x $, since \n otherwise $Y=\\{z\\in S_\\lambda(\\lambda) | \\alpha\\not\\in z\\}$ is an open set containing $x$, and $\\{i\\in I|f(i)\\in Y\\}= \\{i\\in I|\\alpha\\not\\in f(i)\\}\\not\\in D$, contradicting $D$-convergence.\n\nHence, $\\{\\alpha\\in\\lambda |\\{i\\in I|\\alpha\\in f(i)\\}\\in D\\}\\subseteq x\\in S_\\lambda(\\lambda)$, and thus this set has cardinality $<\\lambda$; that is, $f$\ndoes not witness ($\\lambda,\\lambda$)-regularity of $D$.\nSince $f$ has been chosen arbitrarily,\n$D$ is not ($\\lambda,\\lambda$)-regular.\n\nConversely, suppose that $D$ over $I$ is not ($\\lambda,\\lambda$)-regular, and let \n$f:I\\to S_\\lambda(\\lambda)$. Then $x=\\{\\alpha\\in\\lambda |\\{i\\in I|\\alpha\\in f(i)\\}\\in D\\}$ has cardinality $<\\lambda$ and hence is in $S_\\lambda(\\lambda)$. We show that $f$ $D$-converges to $x$. Indeed, let $Y$ be a neighborhood of $x$: we have to show that \n $\\{i\\in I|f(i)\\in Y\\}\\in D$. \nWithout loss of generality, we can suppose that $Y$ is an element of the base \nof $S_\\lambda(\\lambda)$, that is, $Y$ has the\n form $\\{z\\in S_\\lambda(\\lambda) | \\alpha_1\\in z, \\alpha_2\\in z , \\dots, \\alpha_n\\in z , \\beta_1\\not\\in z, \\beta_2\\not\\in z , \\dots, \\beta_m\\not\\in z \\}$. Since $D$ is closed under finite intersections,\n $\\{i\\in I|f(i)\\in Y\\}\\in D$ if and only if $\\{i\\in I|\\alpha_1\\in f(i)\\}\\in D$ and\n $\\{i\\in I|\\alpha_2\\in f(i)\\}\\in D$ and\\dots\\ and $\\{i\\in I|\\alpha_n\\in f(i)\\}\\in D$ and \n $\\{i\\in I|\\beta_1\\not\\in f(i)\\}\\in D$ and\\dots\\ and $\\{i\\in I|\\beta_m\\not\\in f(i)\\}\\in D$. 
But all \nthe above sets are actually in $D$, by the definition of $x$ and since $x\\in Y$ and \n$D$ is an ultrafilter; thus $f$ $D$-converges to $x$.\n\nSince $f$ was arbitrary, every $f:I\\to S_\\lambda(\\lambda)$ $D$-converges, and thus \n$S_\\lambda(\\lambda)$ is $D$-compact. \n\\end{proof}\n\n\\begin{corollary} \\label{psll} \n The space $S_\\lambda(\\lambda)$ is productively $[\\lambda',\\mu']$-compact if and only if there exists a $(\\lambda',\\mu')$-regular, not $(\\lambda,\\lambda)$-regular ultrafilter. \n\\end{corollary}\n\n\\begin{proof}\nImmediate from Proposition \\ref{dsll} and \\cite[Theorem 1.7]{C}.\n\\end{proof} \n\nIn the statements of the next theorems\nthe word ``productively'', when included within parentheses, can be equivalently inserted or omitted.\n\n\\begin{theorem} \\label{topprocsing}\nFor all infinite cardinals $ \\lambda $, $ \\mu$, $ \\kappa $,\n the following are equivalent:\n\n(i) Every productively $[ \\lambda, \\mu]$-compact topological space\nis (productively) $[ \\kappa , \\kappa ]$-compact.\n\n(ii) Every productively $[ \\lambda, \\mu]$-compact family of topological spaces\nis productively $[ \\kappa , \\kappa ]$-compact.\n\n(iii) Every $( \\lambda, \\mu)$-regular ultrafilter is $( \\kappa , \\kappa )$-regular.\n\n(iv) Every productively $[ \\lambda, \\mu]$-compact Hausdorff normal \ntopological space with a base of clopen sets\nis productively $[ \\kappa , \\kappa ]$-compact.\n\n(v) Every productively $[ \\lambda, \\mu]$-compact \nTychonoff topological group with a base of clopen sets\nis (productively) $[ \\kappa , \\kappa ]$-compact.\n\nIf $ \\kappa $ is regular, then the preceding conditions are also\nequivalent to:\n\n(vi) Every productively $[ \\lambda, \\mu]$-compact Hausdorff normal \ntopological space with a base of clopen sets\nis $[ \\kappa , \\kappa ]$-compact.\n\\end{theorem}\n\n \\begin{proof} \nLet us denote by (i)$_{\\mathrm p}$ Condition (i) when the second occurrence of the word \n``productively'' is included, and simply by (i) when it is omitted. Similarly for Condition (v).\n\nThe equivalence of (i)-(iii) has been proved in \\cite[Theorem 1]{topproc},\nwhere it has also been proved that, for $ \\kappa $ regular, they are equivalent\nto (vi).\n\nSince (ii) $ \\Rightarrow $ (i)$_{\\mathrm p}$ $ \\Rightarrow $ (i) are trivial,\nwe get that (i), (ii), (iii), (i)$_{\\mathrm p}$ are all equivalent, and equivalent to\n(vi) for $\\kappa $ regular.\n\n(ii) $\\Rightarrow $ (iv) and (ii) $\\Rightarrow $ (v)$_{\\mathrm p}$ $ \\Rightarrow $ (v) are trivial.\n\nIf (iii) fails, then there is a\n$( \\lambda, \\mu)$-regular ultrafilter which is not $( \\kappa , \\kappa )$-regular, thus,\nfor $\\kappa $ singular, \nthe space $X$ of Corollary \\ref{cor} is \nproductively $[ \\lambda, \\mu]$-compact. \nFor $\\kappa $ regular, take $X= \\kappa $ with the order topology\n(see \\cite{topproc}).\n In either case, $X$ \nis Hausdorff, normal, \n with a base of clopen sets, but not \nproductively $[ \\kappa , \\kappa ]$-compact, again by Corollary \\ref{cor}, \nthus (iv) fails. We have proved (iv) $\\Rightarrow $ (iii). 
\n\n(v) $\\Rightarrow $ (iii) is similar, using \nCorollary \\ref{psll}, since $S_\\kappa (\\kappa )$ is not\n$[\\kappa ,\\kappa ]$-compact.\n\\end{proof} \n\n\\begin{theorem} \\label{topproc2sing}\nFor all infinite cardinals $ \\lambda $, $ \\mu$, \nand for any family $ (\\kappa_i)_{i \\in I} $ of infinite cardinals,\n the following are equivalent:\n\n(i) Every productively $[ \\lambda, \\mu]$-compact topological space\nis (productively) \n$[ \\kappa_i , \\kappa_i ]$-compact for some $i \\in I$.\n\n(ii) Every productively $[ \\lambda, \\mu]$-compact family of topological spaces\nis productively \n$[ \\kappa_i , \\kappa_i ]$-compact for some $i \\in I$.\n\n(iii) Every $( \\lambda, \\mu)$-regular ultrafilter is $( \\kappa_i , \\kappa_i )$-regular\n for some $i \\in I$.\n\n(iv) Every productively $[ \\lambda, \\mu]$-compact Hausdorff normal \ntopological space with a base of clopen sets\nis productively \n$[ \\kappa_i , \\kappa_i ]$-compact for some $i \\in I$.\n\n(v) Every productively $[ \\lambda, \\mu]$-compact \nTychonoff topological group with a base of clopen sets\nis (productively) \n$[ \\kappa_i , \\kappa_i ]$-compact for some $i \\in I$.\n\nIf every $ \\kappa_i $ is regular, then the preceding conditions are also\nequivalent to:\n\n(vi) Every productively $[ \\lambda, \\mu]$-compact Hausdorff normal \ntopological space with a base of clopen sets\nis \n$[ \\kappa_i , \\kappa_i ]$-compact for some $i \\in I$.\n\\end{theorem}\n\n \\begin{proof} \nThe equivalence of (i)-(iii) has been proved in \n \\cite[Theorem 3]{topproc}, thus, arguing as in the proof of \nTheorem \\ref{topprocsing}, we get that \n(i), (ii), (iii), (i)$_{\\mathrm p}$ are all equivalent. \n\n(ii) $\\Rightarrow $ (iv) $\\Rightarrow $ (vi) and (ii) $\\Rightarrow $ (v)$_{\\mathrm p}$ $ \\Rightarrow $ (v) are trivial.\n\nIf (iii) fails, then there is a\n$( \\lambda, \\mu)$-regular ultrafilter $D$ which for no $i \\in I$ is \n$( \\kappa_i , \\kappa_i )$-regular.\nBy Proposition \\ref{dsll},\nfor every $i \\in I$\nthe topological space \n$S_{\\kappa_i} (\\kappa_i )$ is \n $D$-compact.\nHence $X = \\prod_{i \\in I} S_{\\kappa_i} (\\kappa_i )$\nis $D$-compact, thus\nproductively $[ \\lambda, \\mu]$-compact, by \n\\cite[Theorem 1.7]{C}.\nHowever, \n$X$ is a Tychonoff topological group with a base of clopen sets\nwhich for no $i \\in I$ is\n$[ \\kappa_i , \\kappa_i ]$-compact,\nthus (v) fails.\n We have proved (v) $\\Rightarrow $ (iii). \n\nThe proofs of (iv) $\\Rightarrow $ (iii) \nand \n(vi) $\\Rightarrow $ (iii) \nare similar, using the next proposition.\nAgain, if (iii) fails, take a\n$( \\lambda, \\mu)$-regular ultrafilter $D$ which for no $i \\in I$ is \n$( \\kappa_i , \\kappa_i )$-regular.\nBy the proof of Theorem \\ref{topprocsing},\nfor every $i \\in I$\nwe have a $D$-compact topological space \n$X_i $ which falsifies \\ref{topprocsing}(iv),\nresp., \\ref{topprocsing}(vi). 
\nThen the space $X = \\{x\\} \\cupdoty \\bigcupdoty _{i \\in I} X_i $\nwe shall construct in the next definition\nis $D$-compact, thus\nproductively $[ \\lambda, \\mu]$-compact, by \n\\cite[Theorem 1.7]{C},\nand makes (iv), resp., (vi), fail.\n\\end{proof} \n\n\\begin{definition}\\label{fr}\nGiven a family $(X_i) _{i \\in I} $\nof topological spaces, construct their\n\\emph{Fr\\'echet disjoint union}\n$X = \\{x\\} \\cupdoty \\bigcupdoty _{i \\in I} X_i $\nas follows.\n\nSet theoretically, $X$ is the union of\ndisjoint copies of the $X_i$'s, plus a new element\n$x$ which belongs to no $X_i$.\nThe topology on $X$ is the smallest topology which\ncontains each open set of each $X_i$,\nand which contains \n$\\{x\\} \\cupdoty \\bigcupdoty _{i \\in E} X_i $,\nfor every $E \\subseteq I$ such that $I \\setminus E$\nis finite.\n\\end{definition}\n\n\n\\begin{proposition}\\label{disj}\nIf $(X_i) _{i \\in I} $ is a family of\ntopological spaces, then\ntheir Fr\\'echet disjoint union\n$X = \\{x\\} \\cupdoty \\bigcupdoty _{i \\in I} X_i $\nis $T_0$, $T_1$, Hausdorff, regular, normal, \n$D$-compact (for a given ultrafilter $D$), $[\\lambda,\\mu]$-compact\n(for given infinite cardinals $\\lambda $ and $\\mu$), \nor has a base of clopen sets if and only if each $X_i$ is (respectively, has).\n\\end{proposition}\n\n\\begin{proof}\nStraightforward. We shall comment only on regularity, normality\nand $D$-compactness.\n\nFor regularity and normality, just observe that \nif $C$ is closed in $X$ and $C$\nhas nonempty intersection with infinitely many\n$X_i$'s, then $x \\in C$.\n\nAs for $D$-compactness, \nsuppose $D$ is over $J$ \nand that each $X_i$ is $D$-compact.\nLet $(y_j) _{j \\in J} $ be a sequence of elements \nof $X$. If \n$ \\{j \\in J| y_j \\in \\bigcup _{i \\in F} X_i \\} \\not\\in D $ holds\nfor every finite $F \\subseteq I$,\n then $(y_j) _{j \\in J} $ $D$-converges to $x$.\nOtherwise, there is a finite $F \\subseteq I$ with\n$ \\{j \\in J| y_j \\in \\bigcup _{i \\in F} X_i \\} \\in D $;\nsince $D$ is an ultrafilter and $F$ is finite, there exists some\n$i \\in F$ such that \n$ \\{j \\in J| y_j \\in X_i \\} \\in D $.\nBut then $(y_j) _{j \\in J} $ $D$-converges to \nsome point of $X_i$, since $X_i$ is supposed to be\n$D$-compact. \n\\end{proof}\n\n\nWhen $ \\kappa $ is singular of cofinality $ \\omega $,\nCondition (vi) in Theorem \\ref{topprocsing} is equivalent\nto the other conditions.\nWhen each $ \\kappa_i $ is either a regular cardinal, or a singular\ncardinal of cofinality $ \\omega $,\nthen Condition (vi) in Theorem \\ref{topproc2sing}\nis equivalent\nto the other conditions. Proofs shall be given elsewhere.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\\label{sec:intro}\n\n\nSpeech enhancement is very important in many real-world applications, such as telecommunication, robust automatic speech recognition (ASR), and hearing aids.\nFor better speech quality and intelligibility, most devices, e.g., 
mobile phones and smart home devices, are equipped with multiple microphones, which make spatial information available.\nThe dual-microphone setup is the most common configuration.\n\n\nTraditionally, multi-channel speech enhancement can be divided into two categories.\nOne is the blind source separation (BSS) method \\cite{bbs1}\\cite{bbs2},\nwhich assumes that the source signals are independent.\nBSS-based speech enhancement separates signals by adaptively optimizing the cost function of the independent component analysis (ICA) process.\n\nThe other one is beamforming \\cite{BF1}\\cite{BF2}\\cite{GSC_dc}, which utilizes the direction of arrival (DOA) and second-order statistics of the signals.\n\n\n\nRecently, deep learning has achieved great progress in multi-channel speech enhancement.\nGenerally, the deep-learning-based methods can be divided into two categories.\nOne way is to combine deep learning with the traditional methods.\nThe representative method is mask-based beamforming \\cite{mask_BF} \\cite{mask_BF2}, which calculates beamformer coefficients with the help of a mask estimated by a deep neural network (DNN).\n\nInstead of estimating a mask, Wang and Wang used a deep neural network to directly estimate the complex spectrum, which is then utilized to compute a minimum variance distortionless response (MVDR) beamformer \\cite{cs_MVDR}.\nZhang and Wang used spectral features extracted by fixed beamforming, together with spatial features, as the input of a DNN for binaural speech enhancement \\cite{xueliangzhang}.\nLi et al. used two fixed differential beamformers with opposite directions to provide a robust discriminative feature for a neural network that directly estimates the amplitude mask \\cite{BF_feat}.\n\n\n\n\nThe other is the fully neural-network-based, or end-to-end, method.\nWang and Wang proposed an all-neural multi-channel speech enhancement approach \\cite{all_NN}.\nTan et al. utilized a convolutional recurrent network for dual-microphone speech enhancement \\cite{crn_mobile}.\nGu et al. proposed an end-to-end network architecture for multi-channel speech separation in the time domain, which aims to learn spatial information directly from the multi-channel waveform instead of the widely used short-time Fourier transform (STFT) \\cite{BF_feat2}.\nMost of the algorithms mentioned above are carefully designed with multistage training or processing. However, separately trained modules may not cooperate well, because the hand-crafted interfaces may introduce information distortions and limit the ability of the neural networks. 
Instead, a well-designed end-to-end system naturally fits the solution manifold of the original task.\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{pipeline.pdf}\n \\caption{Inplace-GCRN-based end-to-end speech enhancement pipeline, with the module functions compared to the traditional processing pipeline.}\n\\vspace{-5mm}\n\\end{figure}\n\nInspired by the three steps of the beamforming technique, namely DOA estimation, beamforming, and post-filtering, we propose an end-to-end dual-channel speech enhancement approach, as shown in Figure 1.\nThe pipeline consists of speech signal perception, spatial cue processing, and speech signal reconstruction, which are implemented with an architecture similar to the CRN \\cite{gcrn_mapping}.\nIt should be mentioned that these three steps do not exactly correspond to the traditional array processing pipeline, due to the end-to-end nature of the model.\nThe typical CRN utilizes strided convolutions to shrink and expand the features along the frequency dimension in the encoder and decoder stages, respectively.\nHowever, wideband beamforming processes each frequency bin independently; we call this an inplace process. Therefore, we propose an inplace GCRN model which consists of an inplace encoder, a channel-wise LSTM shared by all frequency bins, and an inplace decoder. Experimental results show that the proposed inplace GCRN can dramatically improve the performance.\n\n\nThe paper is organized as follows.\nIn Section 2, we describe the core ideas and key details of the inplace GCRN model and the feature design.\nIn Section 3, we present the experimental setup, results, and analysis.\nWe draw conclusions in Section 4.\n\n\\vspace{2mm}\n\\section{Algorithm}\n\n\nFor a dual-channel microphone array system, the received signal $x_m(k)$ can be modeled as follows:\n\n\n\\begin{equation}\n\\footnotesize\n\\begin{split}\nx_{m}(k) = s(k) *h_{s,m}(k) + n(k)*h_{n,m}(k) \\\\\n\\end{split}\n\\vspace{1mm}\n\\end{equation}\nwhere $m$ denotes the channel index, $ s(k) $ and $ n(k) $ denote the speech and noise signals,\n$ h_{s,m}(k) $ and $ h_{n,m}(k) $ are the acoustic impulse responses from the speech source and the noise source to the $m$-th microphone, respectively,\nand '$*$' is the convolution operation.\n\n\\subsection{Inplace GCRN}\n\nThe inplace GCRN is mainly constructed from inplace convolution gated linear units (GLUs) and a channel-wise LSTM, which analyze the noisy input features and synthesize clean speech features.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\linewidth]{model2.pdf}\n \\caption{The proposed dual-channel speech enhancement system and the room simulation setup.}\n\\end{figure}\n\n\n\\subsubsection{Inplace convolution}\n\nInplace convolution is a convolution whose kernel stride is set to one, so that it does not downsample the features along the frequency dimension.
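\nTo make the shape behavior concrete, the following minimal PyTorch-style sketch (our own illustration; the layer parameters and tensor sizes are assumptions, not the authors' released code) contrasts an inplace convolution with a strided one:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nx = torch.randn(1, 4, 256, 100)          # [batch, channel, freq, time]\n\ninplace_conv = nn.Conv2d(4, 64, kernel_size=(5, 1),\n                         stride=(1, 1), padding=(2, 0))\nstrided_conv = nn.Conv2d(4, 64, kernel_size=(5, 1),\n                         stride=(2, 1), padding=(2, 0))\n\nprint(inplace_conv(x).shape)             # [1, 64, 256, 100]: all bins kept\nprint(strided_conv(x).shape)             # [1, 64, 128, 100]: bins halved\n\\end{verbatim}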
Because the frequency resolution is preserved, the spatial correlations are naturally and explicitly maintained in each frequency bin.\nIn the conventional CRN structure, the stride of the convolution along the frequency dimension is normally set to 2, which shrinks the feature in that dimension.\nBy stacking the convolutional layers several times, the patterns lying in the frequency dimension are encoded into the channel dimension.\nThis is very effective for the single-channel task, where the goal is to model the harmonic structural patterns of speech and track their variations over time.\nBut for multi-channel speech enhancement,\nthe downsampling convolution aliases spatial cues with speech patterns in the channel dimension, which makes it hard for the subsequent LSTM to extract the spatial information.\n\n\n\n\n\n\\subsubsection{Channel-wise LSTM with model reuse mechanism}\nThe conventional CRN uses an LSTM to process all frequency bins jointly.\nIn contrast, we apply the LSTM to each frequency bin separately, so that its input contains only channel-wise features, without a frequency dimension.\n\nDue to the inplace characteristic of the encoder, the spatial cue is explicitly maintained inside each frequency bin, without being blurred with neighbouring bins by an encoding process along the frequency dimension.\nHence, the extraction of spatial information can be done independently for each frequency bin, which is similar to the beamforming method.\nOne aspect, however, differs from beamforming: since the wavelength varies across frequency bands, forming the same beam pattern at different frequencies requires a different phase compensation, and hence a different beamformer weight, per band.\nThe LSTM does not pick up speech by phase compensation; it only needs to analyze spatial information through the time delay, and the time delay for a given look direction is the same in all frequency bins. We can therefore process all frequency bins by reusing one LSTM (a shape-level sketch is given below). This LSTM reuse mechanism makes the whole model very compact.
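\nThe reuse can be expressed as a reshape that folds the frequency dimension into the batch dimension, so that a single shared Bi-LSTM sees every frequency bin as an independent sequence (our own PyTorch-style illustration; names and sizes are assumptions):\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nB, C, F, T = 2, 64, 256, 100\nfeat = torch.randn(B, C, F, T)                     # encoder output\n\nx = feat.permute(0, 2, 3, 1).reshape(B * F, T, C)  # [B*F, T, C]\nblstm = nn.LSTM(input_size=C, hidden_size=C, num_layers=2,\n                batch_first=True, bidirectional=True)\ny, _ = blstm(x)                                    # [B*F, T, 2*C]\ny = nn.Linear(2 * C, C)(y)                         # halve channel number\ny = y.reshape(B, F, T, C).permute(0, 3, 1, 2)      # back to [B, C, F, T]\n\\end{verbatim}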
\n\n\\subsection{Amplitude and phase prediction}\nFor phase prediction, Yin et al. \\cite{phasen} showed that estimating the amplitude and phase separately is beneficial compared with the complex ratio mask \\cite{CRM}.\nWhen it comes to amplitude prediction, mask and mapping are two common ways. Mask estimation works well in high-SNR conditions because it can directly exploit the input features, while the mapping method performs better in low-SNR conditions.\nThe characteristics of the spectrograms recovered by the two approaches differ and are to some extent complementary.\nZhang et al. \\cite{mask_mapping} use two networks to predict the amplitude mask and the amplitude itself, respectively, and then use another network to combine the outputs of the two models to achieve better performance.\nIn our model, two decoders are used: a single decoder predicts both the amplitude mask and the amplitude mapping, and another decoder predicts the phase.\n\n\\subsection{System construction}\nThe proposed system is shown in Figure 2.\nWe use the short-time Fourier transform (STFT) to extract the complex spectra of the two channels, and concatenate their real and imaginary parts along the channel dimension of the model input.\nThe input feature, of shape [batch, channel=4, frequency, time], is first processed by six cascaded inplace GLUs with 5x1 kernels, each constructed from inplace convolutions as follows:\n\\begin{equation}\n\\footnotesize\nY = ELU(BN(iConv(X)\\otimes Sigmoid(iConv(X))))\n\\end{equation}\nwhere $ELU(.)$ and $Sigmoid(.)$ are the activation functions, $BN(.)$ is batch normalization, $iConv$ is the inplace convolution, and $ \\otimes$ denotes element-wise multiplication.\n\n\nAfter the encoder, we use the channel-wise LSTM to refine the spatial information.\nTechnically, we merge the frequency dimension of the encoder output into the batch dimension through a reshape operation, giving the shape [batch x frequency, time, channel=64], and feed it into a Bi-LSTM with two layers and a feature size of 64.\nAfter the Bi-LSTM, the output feature passes through a linear layer that halves its channel number and is reshaped back; finally, the feature is duplicated as the input of the two decoders.\n\nEach decoder is constructed from six cascaded inplace transpose GLUs, defined as follows:\n\\begin{equation}\n\\footnotesize\nY = ELU(BN(iTConv(X)\\otimes Sigmoid(iTConv(X))))\n\\end{equation}\nwhere $iTConv$ is the inplace transpose convolution.\nThe output of the $i$-th GLU is concatenated with the output of the $(i-1)$-th transpose GLU to form the input of the $i$-th transpose GLU; this yields the skip connections.\nThe transpose GLUs have 128 input channels, and the output channel number of both the GLUs and the transpose GLUs is constantly 64, except for the output layers of the decoders.\nA more detailed description of the network hyperparameters is provided in Table 1.\n\n\nBoth the amplitude decoder and the phase decoder output two channels; each output feature passes through a linear layer with 256 units to form the final output. The two outputs of the amplitude decoder predict the amplitude mask and the amplitude mapping, respectively.\nWe generate the estimated amplitude and phase spectrograms as follows:\n\\vspace{0mm}\n\\begin{footnotesize}\n\\begin{align}\n\\footnotesize\n& A_{est} = A_{msk} \\otimes A_{nsy}+A_{map} \\\\\n& P_{est} = \\frac{P_{est_{r}}+jP_{est_{i}}}{\\sqrt{P_{est_{r}}^2+P_{est_{i}}^2}} \\\\\n& X_{est} = A_{est} \\otimes P_{est}\n\\end{align}\n\\end{footnotesize}\nwhere $A_{msk}$ and $A_{map}$ are the two outputs of the amplitude decoder, used as the amplitude mask and the amplitude mapping, $A_{nsy}$ is the noisy speech amplitude, $P_{est_{r}}$ and $P_{est_{i}}$ are the two outputs of the phase decoder, used as the real and imaginary parts of the phase $P_{est}$, and $X_{est}$ denotes the estimated complex spectrogram.
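\nEquations (4)-(6) amount to the following small synthesis step (a PyTorch-style sketch under our own naming assumptions; the eps term is added for numerical stability and is not stated in the equations):\n\\begin{verbatim}\nimport torch\n\ndef synthesize(a_msk, a_map, a_nsy, p_r, p_i, eps=1e-8):\n    a_est = a_msk * a_nsy + a_map                  # Eq. (4)\n    norm = torch.sqrt(p_r ** 2 + p_i ** 2 + eps)\n    p_est = torch.complex(p_r / norm, p_i / norm)  # Eq. (5)\n    return a_est * p_est                           # Eq. (6)\n\\end{verbatim}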
\nIn the training stage, we use the Phasen loss function \\cite{phasen}:\n\\vspace{-3mm}\n\n\\begin{scriptsize}\n\\begin{equation}\n\\begin{aligned}\n&&L = & \\frac{1}{F} \\sum_{i=1}^{F} ((A_{s}[i])^{\\frac{1}{3}}-(A_{est}[i])^{\\frac{1}{3}})^2 + \\\\\n&& & \\frac{1}{F} \\sum_{i=1}^{F} ((A_{s}[i])^{\\frac{1}{3}} \\otimes P_{s_{r}}[i] - (A_{est}[i])^{\\frac{1}{3}}\\otimes P_{est_{r}}[i])^2 + \\\\\n&& & \\frac{1}{F} \\sum_{i=1}^{F} ((A_{s}[i])^{\\frac{1}{3}} \\otimes P_{s_{i}}[i] - (A_{est}[i])^{\\frac{1}{3}}\\otimes P_{est_{i}}[i])^2\n\\end{aligned}\n\\end{equation}\n\\end{scriptsize}\nwhere $i$ is the index of the frequency bins and $F$ is the total number of frequency bins. $A_s$, $P_{s_{r}}$ and $P_{s_{i}}$ are the amplitude and the real and imaginary parts of the phase of the clean speech spectrogram, respectively.\n\n\\begin{table}[htbp]\n\\scriptsize\n \\centering\n \\caption{ Architecture of our proposed IGCRN. T denotes the\nnumber of time frames, B is the batch size.}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n layer name & input size & hyperparameters & output size \\\\\n \\hline\n iGLU1 & [B,2,256,T] & 5x1, (1,1), 64 & [B,64,256,T] \\\\\n \\hline\n iGLU2 $\\sim$ 6 & [B,64,256,T] & 5x1, (1,1), 64 & [B,64,256,T] \\\\\n \\hline\n reshape & [B,64,256,T] & & [Bx256,T,64] \\\\\n \\hline\n B-LSTM(2layer) & [Bx256,T,64] & 64 & [Bx256,T,128] \\\\\n \\hline\n linear & [Bx256,T,128] & (128,64) & [Bx256,T,64] \\\\\n \\hline\n reshape & [Bx256,T,64] & & [B,64,256,T] \\\\\n \\hline\n iTGLU6 $\\sim$ 2 & [B,128,256,T] & 5x1, (1,1), 64 & [B,64,256,T] \\\\\n \\hline\n iTGLU1 & [B,128,256,T] & 5x1, (1,1), 64 & [B,2,256,T] \\\\\n \\hline\n \\end{tabular}%\n \\label{tab:addlabel}%\n\\vspace{0mm}\n\\end{table}%\n\n\\section{Experiment and Evaluation}\n\\subsection{Experimental setup}\nFor the speech corpus, we randomly select 29 hours and 1 hour of speech from the Mandarin dataset AISHELL-1 \\cite{aishell-1} as the training and validation sets, respectively. To evaluate the generalization ability, we selected a 1-hour speech corpus from the TIMIT dataset for the test. The noises are from NOISEX92. We choose destroyerops, white, and babble for the test, and the remaining 12 noises are used for training.\nWe simulate room impulse responses (RIRs) with the image method \\cite{rir_gen}.\nSpecifically, two microphones with a $2$ cm spacing are placed at the center of a $5m (length) \\times 5m (width) \\times 3m (height)$ room.\nWe use 9 source positions for training, which are placed $1.5m$ away from the center of the two microphones and range from $-90^{\\circ}$ to $90^{\\circ}$ with a $22.5^{\\circ}$ interval. Another 17 positions, placed at the same distance with an $11.25^{\\circ}$ interval, are used for testing.\nFor each mixture, we first randomly choose an utterance and a slice of noise, then place them at two different positions, and mix the speech and noise at a selected SNR of -3 dB, 0 dB, or 3 dB.\nThe frame length is 32 ms and the frame shift is 16 ms. A square-root Hann window is used as the analysis window. The sampling rate is 16 kHz. A 512-point discrete Fourier transform is used to\nextract complex STFT spectrograms.\n\nAll models are trained using the Adam optimizer with a fixed learning rate of 0.0002, and the minibatch size is set to 4. 
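\nFor reference, the training loss of Eq. (7) can be sketched as follows (a PyTorch-style illustration under our own naming assumptions, not the authors' released code; we average over all time-frequency elements for simplicity):\n\\begin{verbatim}\nimport torch\n\ndef phasen_loss(a_s, p_s_r, p_s_i, a_est, p_est_r, p_est_i):\n    cs = a_s ** (1.0 / 3.0)      # compressed clean amplitude\n    ce = a_est ** (1.0 / 3.0)    # compressed estimated amplitude\n    l_amp = torch.mean((cs - ce) ** 2)\n    l_ph_r = torch.mean((cs * p_s_r - ce * p_est_r) ** 2)\n    l_ph_i = torch.mean((cs * p_s_i - ce * p_est_i) ** 2)\n    return l_amp + l_ph_r + l_ph_i\n\\end{verbatim}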
The detailed structure of the proposed inplace GCRN is shown in Figure 2.\n\n\\subsection{Experimental result}\n\nIn this study, the short-time objective intelligibility (STOI) \\cite{stoi}, the perceptual evaluation of speech quality (PESQ) \\cite{pesq}, and the signal-to-distortion ratio (SDR) are employed as the evaluation metrics.\nThe best results in each case are highlighted in boldface.\n\n\n\\begin{table*}[!tbp]\n \\caption{Comparisons of different approaches in terms of STOI, PESQ, and SDR in -3 dB, 0 dB, and 3 dB directional noise.}\n \\centering\n\n \\footnotesize\n\\begin{tabular}{cc|ccc|ccc|ccc}\n\\hline\n & & \\multicolumn{3}{c|}{STOI} & \\multicolumn{3}{c|}{PESQ} & \\multicolumn{3}{c}{SDR} \\\\ \\hline\n SNR & method & white & destroyerops & babble & white & destroyerops & babble & white & destroyerops & babble \\\\ \\hline\n\\multirow{4}{*}{3dB} & noisy & 0.78 & 0.73 & 0.71 & 1.71 & 1.93 & 1.9 & 3 & 3 & 3 \\\\\n & MVDR & 0.88 & 0.87 & 0.87 & 2.63 & 2.59 & 2.65 & 11.6 & 11.1 & 11.7 \\\\\n & GCRN & 0.90 & 0.90 & 0.91 & 2.71 & 2.81 & 2.91 & 9.3 & 9.1 & 9.0 \\\\\n & IGCRN & \\textbf{0.97} & \\textbf{0.98} & \\textbf{0.98} & \\textbf{3.75} & \\textbf{3.96} & \\textbf{3.95} & \\textbf{19.6} & \\textbf{21.6} & \\textbf{21.4} \\\\ \\hline\n\\multirow{4}{*}{0dB} & noisy & 0.71 & 0.67 & 0.65 & 1.49 & 1.7 & 1.69 & 0 & 0 & 0 \\\\\n & MVDR & 0.87 & 0.85 & 0.85 & 2.55 & 2.51 & 2.54 & 8.6 & 7.8 & 8.4 \\\\\n & GCRN & 0.88 & 0.89 & 0.89 & 2.57 & 2.75 & 2.79 & 6.3 & 6.5 & 6.1 \\\\\n & IGCRN & \\textbf{0.96} & \\textbf{0.97} & \\textbf{0.97} & \\textbf{3.59} & \\textbf{3.87} & \\textbf{3.89} & \\textbf{18.4} & \\textbf{20.6} & \\textbf{20.5} \\\\ \\hline\n\\multirow{4}{*}{-3dB} & noisy & 0.64 & 0.61 & 0.58 & 1.29 & 1.46 & 1.49 & -3 & -3 & -3 \\\\\n & MVDR & 0.85 & 0.84 & 0.83 & 2.49 & 2.45 & 2.46 & 5.4 & 4.4 & 5.3 \\\\\n & GCRN & 0.85 & 0.84 & 0.85 & 2.35 & 2.54 & 2.59 & 3.4 & 3.5 & 3.3 \\\\\n & IGCRN & \\textbf{0.94} & \\textbf{0.95} & \\textbf{0.96} & \\textbf{3.36} & \\textbf{3.68} & \\textbf{3.75} & \\textbf{15.6} & \\textbf{18.6} & \\textbf{19.2} \\\\ \\hline\n\n\\end{tabular}\n\\vspace{0mm}\n\\end{table*}\n\nFirst, we compare the proposed IGCRN with the conventional MVDR beamformer and the gated CRN in different noise conditions at different SNRs. It should be mentioned that the true direction is given to the MVDR beamformer, whereas in practice it would have to be estimated. The results are shown in Table 2. It can be seen that the proposed IGCRN significantly and consistently outperforms the comparison methods in all conditions. 
The average STOI and PESQ gains are over $30\\%$ and $2.0$ compared to the unprocessed noisy speech.\n\n\n\\begin{table*}[!tbp]\n \\caption{Comparisons of different approaches in terms of STOI, PESQ, and SDR in -3 dB directional noise.}\n \\centering\n \\footnotesize\n\\begin{tabular}{c|ccc|ccc|ccc}\n\\hline\n & \\multicolumn{3}{c|}{STOI} & \\multicolumn{3}{c|}{PESQ} & \\multicolumn{3}{c}{SDR} \\\\ \\hline\n method & white & destroyerops & babble & white & destroyerops & babble & white & destroyerops & babble \\\\ \\hline\nnoisy & 0.64 & 0.61 & 0.58 & 1.29 & 1.46 & 1.49 & -3 & -3 & -3 \\\\\nGCRN(CS) & 0.85 & 0.84 & 0.85 & 2.35 & 2.54 & 2.59 & 3.4 & 3.5 & 3.3 \\\\\nGCRN(Msk+Ps) & 0.90 & 0.87 & 0.85 & 2.74 & 2.75 & 2.62 & 11.6 & 10 & 8.5 \\\\\nGCRN(Msk+Map+Ps) & 0.90 & 0.88 & 0.87 & 2.89 & 2.87 & 2.77 & 11.8 & 11.3 & 10.4 \\\\\nIGCRN & \\textbf{0.94} & \\textbf{0.95} & \\textbf{0.96} & \\textbf{3.36} & \\textbf{3.68} & \\textbf{3.75} & \\textbf{15.6} & \\textbf{18.6} & \\textbf{19.2} \\\\ \\hline\n\\end{tabular}\n\\vspace{0mm}\n\\end{table*}\n\nAnother contribution of this work is the proposed training target.\nIn order to evaluate its effectiveness, we compare the performance of GCRN with different outputs. The results are shown in Table 3, where GCRN(CS) denotes the original complex spectral mapping, GCRN(Msk+Ps) estimates the amplitude mask and the clean phase, and GCRN(Msk+Map+Ps) is the proposed target.\nIt can be seen that predicting the mask and phase outperforms the original GCRN.\nThis is because the amplitude is more important than the phase, and the amplitude and phase are coupled in the complex spectrum. Similar results are observed in \\cite{wangzhongqiu1}\\cite{wangzhongqiu2}, where both the complex and the magnitude spectra are constrained.\nFor GCRN(Msk+Map+Ps), the introduced amplitude mapping term further improves the performance, since it pays more attention to the spectral amplitude than the other targets. However, GCRN(Msk+Map+Ps) is still much worse than the proposed IGCRN.\n\n\\setlength{\\tabcolsep}{0.75mm}{\n\\begin{table*}[!t]\n \\caption{Comparisons of different methods for different DOAs, with an $11^{\\circ}$ included angle between speech and noise, in -3 dB babble directional noise. S and N are the DOAs of speech and noise, respectively.}\n \\centering\n \\footnotesize\n\\begin{tabular}{c|ccc|ccc|ccc}\n \\hline\n & \\multicolumn{3}{c|}{STOI(0.58)} & \\multicolumn{3}{c|}{PESQ(1.49)} & \\multicolumn{3}{c}{SDR(-3)} \\\\ \\hline\n DOA & MVDR & GCRN(Msk+Map+Ps) & IGCRN & MVDR & GCRN(Msk+Map+Ps) & IGCRN & MVDR & GCRN(Msk+Map+Ps) & IGCRN \\\\ \\hline\n$S=0^{\\circ}, \\ \\ N=11^{\\circ}$ & 0.87 & 0.85 & \\textbf{0.95} & 2.74 & 2.64 & \\textbf{3.54} & 11.6 & 9.2 & \\textbf{17.5} \\\\\n$S=23^{\\circ},N=34^{\\circ}$ & 0.76 & 0.74 & \\textbf{0.94} & 2.37 & 2.16 & \\textbf{3.40} & 6.4 & 3.7 & \\textbf{15.4} \\\\\n$S=45^{\\circ},N=56^{\\circ}$ & 0.69 & 0.66 & \\textbf{0.89} & 1.80 & 1.89 & \\textbf{2.86} & 0.2 & 1.3 & \\textbf{9.7 } \\\\\n$S=68^{\\circ},N=79^{\\circ}$ & 0.60 & 0.61 & \\textbf{0.73} & 1.55 & 1.74 & \\textbf{2.17} & -1.7 & 0.1 & \\textbf{3.6 } \\\\\n$S=79^{\\circ},N=90^{\\circ}$ & 0.54 & \\textbf{0.58} & {0.57} & 1.34 & 1.69 & \\textbf{1.70} & -6.4 & -0.5 & \\textbf{0.4 } \\\\ \\hline\n\\end{tabular}\n\\vspace{-1mm}\n\\end{table*}\n}\n\n\nIt is known that the spatial information is reflected in the time delay between the two microphones, and the resolution of this time delay is non-uniform across directions (see the numerical illustration below). 
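\nFor intuition (our own numerical illustration, not part of the original experiments), the far-field delay between the two microphones for a source at angle $\\theta$ is $\\tau(\\theta) = d \\sin(\\theta)/c$, so the delay change produced by a fixed angular step shrinks towards $90^{\\circ}$:\n\\begin{verbatim}\nimport math\n\nd, c = 0.02, 343.0   # mic spacing [m], speed of sound [m/s]\nfor theta in [0, 23, 45, 68, 79]:\n    t0 = d * math.sin(math.radians(theta)) / c\n    t1 = d * math.sin(math.radians(theta + 11)) / c\n    print(theta, (t1 - t0) * 1e6)   # delay change in microseconds\n\\end{verbatim}\nThe same $11^{\\circ}$ step yields a delay change of roughly $11$ microseconds near $0^{\\circ}$ but only about $1$ microsecond near $90^{\\circ}$, which matches the performance decay over the DOAs in Table 4.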
Motivated by this, we investigate the performance of the methods when the target speech comes from different directions. In Table 4, it can be seen that the performance gradually decays as the direction moves from $0^\\circ$ to $90^\\circ$, because the difference between the time delays of speech and noise becomes small. Compared with MVDR, GCRN is not good in high-resolution conditions, e.g., $S=0^\\circ$ and $23^\\circ$. In low-resolution conditions, however, GCRN outperforms the MVDR, because GCRN utilizes both spectral and spatial information. The proposed IGCRN outperforms both the MVDR and GCRN in all conditions, which implies that IGCRN makes better use of the spatial information than GCRN.\n\n\n\\setlength{\\tabcolsep}{2mm}{\n\\begin{table}[!t]\n\n \\caption{Investigation of the influence of downsampling in the -3 dB babble noise condition.}\n \\centering\n \\scriptsize\n\\linespread{1}\n\n \\begin{tabular}{lccccc}\n\\hline\n method & STOI & PESQ & MAC(G) & Params(M) & LSTM \\\\\n\\hline\n noisy & 0.583 & 1.49 & & & \\\\\n\n GCRN & 0.847 & 2.59 & 28.8 & 71.8 & 1024 \\\\\n IGCRN64 & 0.968 & 3.83 & 19.9 & 1.4 & 64 \\\\\n IGCRN80 & \\textbf{0.982} & \\textbf{4.02} & 31.1 & 2.3 & 80 \\\\\n IGCRN64-1DS & 0.982 & 3.94 & 32.1 & 3.5 & 128 \\\\\n IGCRN64-2DS & 0.981 & 3.91 & 53.3 & 9.5 & 256 \\\\\n IGCRN64-3DS & 0.974 & 3.73 & 85.3 & 24.1 & 512 \\\\\n IGCRN64-4DS & 0.961 & 3.58 & 149.5 & 82.5 & 1024 \\\\\n IGCRN64-5DS & 0.954 & 3.52 & 277.8 & 316.3 & 2048 \\\\\n IGCRN64-6DS & 0.949 & 3.51 & 430.8 & 777.3 & 2048 \\\\\n\\hline\n \\end{tabular}%\n \\label{tab:addlabel}%\n\\vspace{-2mm}\n\\end{table}%\n}\n\nIn Table 5, we show how the downsampling operation affects the performance; the multiply-accumulate operations (MAC) and the total number of trainable parameters (Params) are also listed.\nFor IGCRN$n$-$k$DS, $n$ and $k$ denote the number of channels of the first GLU output feature and the number of downsampling operations in the convolutional layers, respectively.\nEach downsampling operation doubles the channel dimension of its output feature.\nWe expand the IGCRN channel number from the original 64 to 80, so that the MAC of IGCRN80 is similar to that of the 1DS model.\nFrom Table 5, we can see that the performance gradually drops as the number of downsampling operations increases, even though the complexity of the model grows significantly.\nThis result shows the importance of the inplace characteristic when performing multi-channel enhancement in the time-frequency domain.\n\nWhen it comes to parameter efficiency, the reuse mechanism of the channel-wise LSTM makes the inplace GCRN model extremely compact, with only 1.4 million parameters, and its computational complexity is also lower than that of the conventional GCRN.\n\n\\section{Conclusions}\nIn this study, we propose a compact inplace GCRN model for dual-channel speech enhancement. Experimental results show that the proposed method can effectively exploit the spatial source information, which is guaranteed by the inplace characteristic of the model, and they reveal the great potential of designing a neural network that matches the specific structure of a given task.\n\n\n\\vspace{1mm}\n\\section{Acknowledgements}\nThis research is supported by the National Natural Science Foundation of China (No. 61876214).\n\n\n\\newpage\n\\input{paper_IGCRN.bbl}\n\\bibliographystyle{IEEEtran}\n\n\n\\end{document}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{S:1}\nGrapevine is an important crop, historically and economically. 
In contrast to many other crops, the recent development in breeding is not directed towards yield enhancement; the goal is to breed robust varieties which satisfy the same quality standards as the traditional ones \\cite{Toepfer11}. \nTo improve wine quality, it is not desirable to grow as many grapes as possible, because a higher yield leads to a decrease in quality. Therefore, thinning procedures are applied to produce high quality grapes. \n\nGrapevine is a perennial crop, which means that the monitoring of the vines needs to be carried out in the field. Traditional phenotyping is performed by skilled experts who apply labour intensive and subjective methods (OIV \\cite{OIV01}, BBCH \\cite{Lorenz95}). The methods range from visual screening to manual counting or weighing.\nThe experts only sample small regions of large grapevine plots and extrapolate the results to the whole field, which leads to error-prone estimates.\n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=0.8\\linewidth]{images\/1DC24E73_1533025530972_predi.png}\n\\caption{Prediction after application of the neural network. The image shows a Riesling plant in the semi minimal pruned hedges training system. The resulting berry mask is put over the original image to give a visual impression of the result.}\n\\label{fig:Predi}\n\\end{figure}\n\nThe development of high-throughput phenotyping, i.e., the acquisition of large amounts of phenotypic data and the automatic analysis and extraction of phenotypic traits, was driven by the development of sensors and new image analysis techniques. RGB, RGBD or multi-spectral cameras can be used, as well as laser scanners.\nGongal et al. \\cite{Gongal15} reviewed different sensors and algorithms used to detect and localize fruits in robotic applications. \nOther authors use high-throughput phenotyping for crop breeding \\cite{Araus14}, precision agriculture \\cite{Kipp14}, the detection of diseases (e.g. \\cite{Behmann15}, \\cite{foerster2019}) or anomalies \\cite{strothmann19}, or the classification of different plant types to identify weeds (e.g., \\cite{Milioto18_2}, \\cite{lottes19}). Some of the main advantages of automatic procedures are objectivity, repeatability and high quality results.\n\nEarly approaches for high-throughput phenotyping of grapevine aimed at detecting grapes in images taken by handheld consumer cameras. The main focus lay in the recognition of geometrical structures.\nThis gives information about the spatial arrangement of the objects and in some cases even about the object size.\nFor example, the approaches used in \\cite{Roscher13} and \\cite{Nuske11} define berries as circular objects and use the Hough transform or the radial symmetry transform to detect them. In \\cite{Nyarko18}, convex surfaces are identified and used for fruit recognition.\n \nLater, Nuske et al. \\cite{Nuske14} presented a large-scale experiment for berry detection with a moving platform and semi-automatic image acquisition with illumination in a realistic environment. They apply a circular Hough transform to detect berry candidates and classify the candidates with texture, colour and shape features. Afterwards they group neighbouring berries into clusters.\nOther approaches investigate the 3D structure of berries.\nRose et al. \\cite{Rose16} used a stereo camera system to reconstruct point clouds from image sequences. 
They used colour and shape features to distinguish between canopy and berries.\n\\cite{Rist18} used a handheld laser scanner to produce high-resolution scans of bunches under laboratory conditions. They extracted several parameters of single bunches.\n\nSince 2012, with the work of Krizhevsky et al. \\cite{Krizhevsky12}, neural networks (NNs) have become state-of-the-art for image classification tasks.\nMoreover, by introducing convolutions to capture the spatial characteristics of objects in images and extending NNs to convolutional neural networks (CNNs) \\cite{Long15}, applications such as pixel-wise semantic segmentation advanced. However, solving the task as semantic segmentation does not allow for a distinction of single instances in image regions, which is most obvious for neighbouring image objects of the same semantic class. \nA distinction between single objects is realized by extending a semantic segmentation task to instance segmentation, which outputs bounding boxes and individual segmentation masks for single objects. One of the most famous approaches is Mask-RCNN \\cite{He17}; a major disadvantage is that the algorithm needs a predefined number of object proposals. \n\nIn contrast to the detection approaches, several other ideas for counting objects in images exist.\nOne possibility is to count objects in an image without detecting their exact locations. This is realized, for example, by regression methods as presented in \\cite{Lemptisky10} and \\cite{Cohen17}. \nThe areas of application are diverse, from counting penguins in colonies \\cite{Arteta16}, cells in microscopy images (e.g., \\cite{Xie16}, \\cite{Guo19}), and buildings in high-resolution overhead images \\cite{Lobry19}, to nearly class-agnostic approaches \\cite{Lu18}. These works focus on counting objects in images while avoiding explicit detection. They output either a single number for each image or, in some cases, a density map, which retains spatial information while counting.\n\nIn the field of remote sensing, the automatic detection of buildings is often realized with the classical detection approach. \nYang et al. \\cite{Yang18} proposed the combination of SegNet with signed-distance labels to improve the detection of buildings from images. Marmanis et al. \\cite{Marmanis18}, on the other hand, refine their building outlines by adding information from an edge-detector network.\n\nAn overview of the usage of neural networks and deep learning in agricultural applications was given by Kamilaris et al. \\cite{Kamilaris2018}. Similar techniques were used by several researchers for the problem of yield estimation of grapevine. Aquino et al. first proposed a smartphone application where single bunches had to be surrounded by a black background \\cite{Aquino16}. They detected circular light reflections and classified the results with a neural network. Later, they disregarded the need for a background box \\cite{Aquino17}. Another approach aimed at detecting regions containing grapevine inflorescences in images with neural networks \\cite{Rudolph18}, applying a circular Hough transform in a second step. An adaptive network for semantic segmentation, which was evaluated on different growth stages and data sets, was proposed by \\cite{Grimm19}. They detect either regions containing berries or dot-wise berry positions.\nAn example of instance segmentation via Mask-RCNN for grapevine was given by \\cite{Nellithimaru19}. 
They applied the Mask-RCNN to images which were simultaneously used for a 3D reconstruction.\n\nWe present a novel and objective approach to determine the number of berries as a decision basis for thinning methods, by providing berry numbers for whole rows.\nThe data collection is performed with a modified grapevine harvester called Phenoliner \\cite{Kicherer17}. The harvesting equipment is replaced by a camera system which continuously records lateral images of the canopy while the harvester drives along the rows.\n\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[height = 7cm]{images\/Phenoliner.JPG}\n \\caption[]%\n {{\\small Phenoliner}} \n \\label{fig:Phenoliner}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.48\\textwidth} \n \\centering \n \\includegraphics[height = 7cm]{images\/CameraSystem_red.png}\n \\caption[]%\n {{\\small Camera System}} \n \\label{fig:CameraSyst}\n \\end{subfigure}\n \\caption[Phenoliner and camera system]\n {\\small Fig. \\ref{fig:Phenoliner} shows the phenotyping platform ``Phenoliner'', which was first introduced by Kicherer et al. \\cite{Kicherer17}. It is based on a grapevine harvester where the harvesting equipment is replaced by a camera system. The system can be seen in Fig. \\ref{fig:CameraSyst} and consists of 5 cameras which deliver overlapping images of the canopy. The vertical cameras are positioned 35 cm apart, resulting in a distance of approximately 70 cm between the two outer cameras. 1.2 m of the canopy are covered vertically. \n \\label{fig:R2}}\n\\end{figure*}\n\nTo avoid a computationally intensive instance segmentation, we reformulate a semantic segmentation task in a way that results in single object instances without object proposals. We define three classes, 'berry', 'edge' and 'background', so that every single berry is separated from neighbouring berries by an edge and can therefore be identified as a single instance. There is no need to explicitly perform an instance segmentation.\\\\\n\nThe contributions of this paper are the following:\n\\begin{itemize}\n \\item we present a novel, accurate and efficient way of counting by reformulating an instance segmentation into a semantic segmentation, introducing an additional class 'edge'\n \\item we evaluate our algorithm thoroughly for two different pruning systems and show that it handles both convincingly\n \\item we compare our algorithm with two different methods: first, a state-of-the-art instance segmentation with Mask-RCNN; second, regression of a density map with U-Net\n\\end{itemize}\n\n\n\\section{Materials and Methods}\n\\subsection{Data}\nOur data set is part of a large campaign which was carried out in 2018 in an experimental vineyard at the JKI Geilweilerhof located in Siebeldingen, Germany. Data were acquired on three different dates in 2018. The first images were taken before the application of a thinning procedure, and the second set of images was taken shortly after the thinning. Further images were taken shortly before harvest.\n\nWe observed two different training systems, the vertical shoot positioned (VSP) system and the semi minimal pruned hedges (SMPH). The two training systems pose different challenges. 
\n\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.44\\textwidth}\n \\centering\n \\includegraphics[height = 6cm]{images\/VSP.png}\n \\caption[]%\n {{\\small Vertical Shoot Positioned\\newline (VSP)}} \n \\label{fig:VSP}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.52\\textwidth} \n \\centering \n \\includegraphics[height = 6cm]{images\/SMPH.png}\n \\caption[]%\n {{\\small Semi Minimal Pruned Hedge\\newline (SMPH)}} \n \\label{fig:SMPH}\n \\end{subfigure}\n \\caption[Training systems]\n {\\small Depiction of the two different training systems. Fig. \\ref{fig:VSP} shows an example of the vertical shoot positioned (VSP) system. It features one main branch which grows over multiple years, while the others are removed annually. The grape bunches mainly grow in the bottom part of the canopy and feature a compact and homogeneous structure. \n In contrast, Fig. \\ref{fig:SMPH} shows the semi minimal pruned hedges (SMPH). More branches are allowed to grow and the canopy is thicker. The grape bunches are positioned all over the canopy but often grow in the top part. The bunches themselves are smaller, looser in structure, and the berry size is inhomogeneous.\n \\label{fig:TrainingSystems}}\n\\end{figure*}\n\nThe VSP is the traditional training system (see Fig. \\ref{fig:TrainingSystems}). It features one main branch with several thinner shoots branching off (see Fig. \\ref{fig:VSP}). Small branches and leaves are drastically reduced at the end of each season. Grape bunches mainly occur in the bottom part of the canopy, are seldom covered by leaves, and feature a compact and homogeneous berry structure. \n\nThe SMPH has a thick canopy which occludes many bunches (see Fig. \\ref{fig:SMPH}). Due to the minimal pruning, more than one main branch exists. The grape bunches are spread through the whole canopy, although they mainly occur in the upper part of the plant. The bunches themselves have a loose structure and the berries have inhomogeneous sizes.\n\nIn each training system, three different wine varieties were observed, namely Riesling, Felicia and Regent. The first two are white varieties, while Regent is a red one. All varieties are part of the training set, but for the evaluation we focus on the variety Riesling (see Fig. \\ref{fig:Predi}).\n\n\\subsection{Sensor System}\nWe acquired images of grapevine with a field phenotyping platform called Phenoliner \\cite{Kicherer17}, which is shown in Fig. \\ref{fig:Phenoliner}. The Phenoliner is a modified grapevine harvester in which the harvesting equipment is removed and replaced by a camera and illumination system. \nThe system consists of 5 cameras, which can be seen in Fig. \\ref{fig:CameraSyst}. \nThree RGB cameras are vertically aligned to cover the canopy of each vine. \nTwo additional cameras are installed in alignment with the bottom camera, forming an L-shape. \nA near-infrared camera is positioned in the middle of the horizontal cameras, while the surrounding ones are RGB cameras.\nThe vertical cameras allow a 3D reconstruction, but this is not within the scope of this paper. \nFurther equipment of the Phenoliner includes a real-time kinematic (RTK) GPS system, which enables the geo-referencing of each image. 
\nThe cameras are triggered simultaneously, since they are synchronized with the GPS clock.\nThe geo-reference allows the identification of objects which occur more than once in overlapping images, though this is out of scope for this paper.\n\nThe cameras have a distance of approximately 75 cm to the canopy, which results in a vertical coverage of 1.2 m of the canopy. Each image has dimensions of 2592 $\\times$ 2048 pixels and a real-world spatial resolution of 0.3 mm. \nFor further information about the camera system we refer the reader to \\cite{Kicherer17}.\n\nWe observed 10 plants in both training systems with the three vertical cameras; each plant was covered by 3 overlapping images. Each image is processed individually.\nImages featuring the VSP show on average 329 and a maximum of 890 berries per image. \nThe number of berries per image is higher for the SMPH, with 556 berries on average and a maximum of more than 1100 per image. \n\n\\subsection{Algorithms}\nThe main contribution of this paper is the reformulation of an instance segmentation. More specifically, we tackle a counting task with a semantic segmentation. We evade the detection and segmentation of every berry instance in an image by turning it into a pixel-wise classification with the classes 'berry', 'edge' and 'background'. Fig. \\ref{fig:Net} shows an inferred mask on the right side. \\\\\n\nThe cameras mounted on the Phenoliner record images in DSLR quality. Their high resolution makes them expensive to work with in terms of memory consumption. \nTwo ways exist to handle high-resolution images with convolutional neural networks (CNNs), namely downsampling or cutting the image into patches. Due to the fine structure and small size of the berries, downsampling leads to performance losses, resulting in missed berries or wrong classifications. Therefore, we crop each image into overlapping patches. The overlap is important to minimize edge artifacts. We use a majority vote over each overlapping image region. The result is a full segmentation mask with the same size and resolution as the original image, without hurting the performance; a sketch of this patch-wise inference is given below.\nThis means we have to run the CNN multiple times per image, which calls for a lightweight yet powerful network architecture to process each patch efficiently.
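\nThe following minimal sketch (our own Python\/NumPy illustration; the function name, patch size and stride are assumptions) shows how overlapping patch predictions can be merged into a full-resolution mask by a per-pixel majority vote over class counts:\n\\begin{verbatim}\nimport numpy as np\n\ndef predict_full_mask(image, model, patch=256, stride=192, n_classes=3):\n    # model(p) returns an integer label mask of shape [patch, patch]\n    H, W = image.shape[:2]\n    votes = np.zeros((H, W, n_classes), dtype=np.int32)\n    for y in range(0, H - patch + 1, stride):\n        for x in range(0, W - patch + 1, stride):\n            pred = model(image[y:y + patch, x:x + patch])\n            for c in range(n_classes):\n                votes[y:y + patch, x:x + patch, c] += (pred == c)\n    return votes.argmax(axis=-1)   # [H, W] majority-vote label mask\n\\end{verbatim}\nBorder handling (when the image size is not a multiple of the stride) is omitted for brevity.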
\n\\subsubsection{Network Structure}\n\\begin{figure*}[h]\n\\centering\\includegraphics[width=1\\linewidth]{images\/Network.png}\n\\caption{Berry segmentation framework. For computational reasons each image is cut into overlapping image patches. Each image patch is classified by an encoder-decoder network. The encoder backbone consists of a MobileNetV2 \\cite{Sandler18} and the decoder head of a DeepLabV3+ \\cite{Chen18}. The patches are reconstructed into an image mask. Due to the overlap we avoid border fragmentation by applying a majority vote on regions covered by more than one patch.}\n\\label{fig:Net}\n\\end{figure*}\n\nWe use a traditional U-shaped encoder-decoder architecture for pixel-wise semantic segmentation. The encoder backbone is a MobileNetV2 \\cite{Sandler18}, which introduced the inverted residual concept. The network was designed with mobile applications in mind and provides an efficient and lightweight feature extraction that produces close to state-of-the-art results for tasks like classification, detection and segmentation.\nThe decoder is the DeepLabV3+ \\cite{Chen18}. It refines the segmentation results with a special focus on object boundaries.\nThe combination of encoder and decoder results in a fully convolutional semantic segmentation network. The framework is based on an open source implementation by Milioto et al. \\cite{Milioto18}. \nWe did not change the architecture of the model, but made adaptations to the data input, because we wanted to achieve the detection of single object instances with a semantic segmentation network. Our dataset definition is the focus of this work.\nThe network segments berries, edges and background accurately and performs fast on a moving platform. \nKeeping in mind that our application involves large amounts of data, we decided on a lightweight architecture which allows for fast processing and decision making.\n\n\\subsubsection{Loss Function}\nAs a loss function we use an Intersection over Union (IoU) loss as proposed by Yu et al. \\cite{Yu16}. The IoU depicts the similarity between the prediction and the reference labels and can be defined as follows:\n\n\\begin{equation}\n \\textrm{IoU} = \\frac{|\\textrm{A} \\cap \\textrm{B}|}{|\\textrm{A} \\cup \\textrm{B}|} = \\frac{\\textrm{TP}}{\\textrm{FP} + \\textrm{TP} + \\textrm{FN}}\n\\end{equation}\n\n$\\textrm{A} \\cap \\textrm{B}$ denotes the intersection and $\\textrm{A} \\cup \\textrm{B}$ the union of two sets, in our case the prediction and the reference masks. The second formulation expresses the IoU in terms of classical image analysis measures: TP denotes true positives, FP false positives and FN false negatives. \\\\\nWe formulate the IoU as a loss as follows:\n\n\\begin{equation}\n L_{IoU} = -\\ln \\frac{|\\textrm{A} \\cap \\textrm{B}|}{|\\textrm{A} \\cup \\textrm{B}|}\n\\end{equation}\n\\subsubsection{Image Annotation}\n\\label{sec:Annot}\nWe have two different sets of annotated data. On the one hand, we have a set of pixel-wise annotated images which are used to train and evaluate the CNN. On the other hand, we have a second set of dot-annotated images to evaluate the counting of berries in a more extended fashion.\\\\\n\n\\textbf{Pixel-wise Annotation}\\\\\n\nThe detection of single berries in images is formulated as a semantic segmentation task with three classes: 'berry', 'edge' and 'background'. The labeling procedure consists of colouring every berry in an image individually, with adjacent berries labeled in different colours. From each berry component we compute the outer edge and label it with a fixed width (in our case 2 or 3 pixels). The edge width is a crucial parameter for distinguishing between single elements. \nWe use a fixed width for simplicity: in a single-image scenario we do not have access to depth information and can therefore not use an edge thickness which depends on the distance to the camera. Furthermore, the variation of berry sizes between the training systems has a higher impact than the size variation due to the distance from the camera.\nThe remaining inner parts of each berry are uniformly labeled with the class 'berry'. Everything else is denoted with the class 'background'. An example of the labeling and of the resulting mask in the berry-edge format can be seen in Fig. \\ref{fig:OrigAnn} and \\ref{fig:BEAnn}. A minimal sketch of this label conversion is given below.\n
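\nThe sketch assumes Python with SciPy and an instance-labeled input mask; the function name and array conventions are illustrative assumptions, not the actual annotation tooling.\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import binary_erosion\n\ndef instances_to_berry_edge(inst, edge_width=2):\n    # inst: (H, W) integer mask, 0 = background, k > 0 = berry instance k\n    # returns a class mask: 0 = 'background', 1 = 'berry', 2 = 'edge'\n    out = np.zeros(inst.shape, dtype=np.uint8)\n    for k in np.unique(inst):\n        if k == 0:\n            continue\n        berry = (inst == k)\n        # Peel 'edge_width' pixels off the instance boundary.\n        core = binary_erosion(berry, iterations=edge_width)\n        out[berry] = 2  # outer ring becomes class 'edge'\n        out[core] = 1   # remaining interior becomes class 'berry'\n    return out\n\\end{verbatim}\n\nNote that a berry whose radius does not exceed the edge width ends up consisting of edge pixels only, which is exactly the failure mode for very small berries discussed in the results.\n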
\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/1DC23B69_1530684567554_patch.png}\n \\caption[]%\n {{\\small Original image}} \n \\label{fig:Orig}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.48\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{images\/1DC23B69_1530684567554_patchDot.png}\n \\caption[]%\n {{\\small Dot-wise annotation}} \n \\label{fig:DotAnn}\n \\end{subfigure}\n \\vskip\\baselineskip\n \\centering\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/A_1DC23B69_1530684567554_patch.png}\n \\caption[]%\n {{\\small Original annotation}} \n \\label{fig:OrigAnn}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.48\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{images\/A_1DC23B69_1530684567554_patchEdge.png}\n \\caption[]%\n {{\\small Berry-edge format}} \n \\label{fig:BEAnn}\n \\end{subfigure}\n \\caption[Annotation process]\n {\\small Different stages of the annotation process. The first picture shows the original image without annotations. Fig. \\ref{fig:DotAnn} shows an example of the dot-annotated berries, while Fig. \\ref{fig:OrigAnn} demonstrates how the berries are originally marked with different colours. Afterwards, an edge of adjustable pixel width is computed for each component. \n \\label{fig:pixelAnnot}}\n\\end{figure*}\n\nWe manually labeled 38 images showing different grapevine varieties and training systems to ensure a robust algorithm. \nIn the labeled data set, 61 \\% of the images show the SMPH and 30 \\% the VSP. This choice was made because the SMPH features more variety in the berry size and distribution. The occurrence of the different varieties is as follows: 55 \\% Riesling, 23 \\% Felicia and 23 \\% Regent. \nThe included images are from the first two recording times to cover a variety of different grape sizes. The images show the plants before veraison, which means that all berries are green. \\\\\n\n\\textbf{Dot-wise Annotation}\\\\\n\nFor evaluation purposes regarding the counting, we created a set of dot-wise annotated images in which each berry is manually marked with a dot. \nWe annotated the images of 20 Riesling vines, 10 in the VSP and 10 in the SMPH. Each plant is covered by 3 overlapping images, which leads to a total number of 60 dot-annotated images. \nFig. \\ref{fig:DotAnn} shows an example of the dot annotations. In contrast to the pixel-wise annotation for training, this is a time-efficient procedure and allows an extended evaluation of the counting.\\\\\n\n\\subsubsection{Post Processing}\nTo reduce the number of misclassifications we utilize prior knowledge about the geometry of berries. The main geometric property of berries is roundness, but we also investigate the quality of the predicted edges. We explore two criteria to remove components which do not satisfy the definition of roundness, and a third criterion based on how well a component is enclosed by an edge. \n\nThe initial step of our post-processing investigates the geometric properties via the minor and major axes of every component, which poses an intuitive definition of roundness. \nFirst, we discard components for which the ratio between the minor axis $a_{min}$ and the major axis $a_{maj}$ is less than $0.3$. This leads to a reduction of arbitrarily shaped objects. 
This post-processing stage is later called \"Axis\":\n\n\\begin{equation}\n \\frac{a_{min}}{a_{maj}} > 0.3\n\\end{equation}\n\nThe second step focuses on the area of each component. We determine the radius $\\bar{r}$ of each component by computing the mean of the minor and major axis. This radius is used to compute the theoretical area $A$ of a circle, which we compare with the actual area $A_{comp}$ of the component. If the actual area $A_{comp}$ is smaller than 30 \\% of the computed circle area, we discard the component. This leads to a reduction of leaf edges which are wrongly classified as berries because of their crescent shape. This stage is later referred to as \"Area\"; a component is discarded if\n\n\\begin{equation}\n A_{comp} < 0.3 \\cdot A = 0.3 \\cdot \\bar{r}^2 \\cdot \\pi = 0.3 \\cdot \\left(\\frac{a_{min}+a_{maj}}{2}\\right) ^2 \\cdot \\pi \n\\end{equation}\n\nA further visual investigation of the predictions shows that correctly identified berries are often well surrounded by an edge. Misclassifications, on the other hand, are often insufficiently enclosed. Therefore we remove components whose boundary is covered by less than 40 \\% edge pixels and call this stage \"Edge\".\n\nAll parameters in the post-processing step were chosen manually after performing several experiments with different values. We aimed for a suitable trade-off between removing misclassifications and not removing too many correct detections.\nNonetheless, all parameters allow an intuitive understanding of their effects. A sketch of the three filter stages is given below.\n
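\nThe sketch assumes Python with scikit-image and SciPy; whether the axis lengths enter as full lengths or semi-axes (the division by two below) is our reading of the radius definition above, and the helper name is illustrative.\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import binary_dilation\nfrom skimage.measure import label, regionprops\n\ndef filter_components(mask, axis_min=0.3, area_min=0.3, edge_min=0.4):\n    # mask: (H, W) class map with 0 = background, 1 = berry, 2 = edge\n    berries = label(mask == 1)               # connected berry components\n    keep = np.zeros(mask.shape, dtype=bool)\n    for region in regionprops(berries):\n        a_min = region.minor_axis_length\n        a_maj = region.major_axis_length\n        if a_maj == 0 or a_min / a_maj < axis_min:\n            continue                         # stage 'Axis'\n        r_mean = (a_min + a_maj) / 4.0       # mean axis length / 2\n        if region.area < area_min * np.pi * r_mean ** 2:\n            continue                         # stage 'Area'\n        comp = (berries == region.label)\n        ring = binary_dilation(comp) & ~comp # one-pixel outer boundary\n        if np.mean(mask[ring] == 2) < edge_min:\n            continue                         # stage 'Edge'\n        keep |= comp\n    return keep\n\\end{verbatim}\n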
\n\\section{Results}\nWe thoroughly investigate our framework with regard to different criteria and perform the following experiments:\n\\begin{itemize}\n \\item analysis of the intersection over union (IoU) as a classical error measure for a semantic segmentation\n \\item variation of the edge thickness and its influence on the detection of berries\n \\item analysis of the post-processing steps and their influence on the detection of berries\n \\item investigation of the berry counting with $R^2$-plots\n \\item comparison of the berry count with a classical instance segmentation approach, the Mask-RCNN\n \\item comparison with a U-Net which produces a density map\n \\item qualitative evaluation of inference under different conditions\n\\end{itemize}\n\n\\subsection{Experimental Setup}\nThe network is trained on overlapping image patches. The patches are extracted from the 38 pixel-wise annotated images (see the first part of section \\ref{sec:Annot}). Each patch has 432 $\\times$ 256 pixels and a 50 \\% overlap in the vertical and horizontal dimension. \nWe chose an overlap of 50 \\% to cover each image region at least twice (at the edges of the image) and otherwise up to 4 times. This reduces the edge effects in the inferred masks. A higher overlap would be possible, but results in a higher inference time due to the higher number of processed pixels.\nWorking on patches also results in a drastic reduction of training time in comparison to training on the full resolution of the images (2592 $\\times$ 2048 pixels).\n\nThe data set contains 38 images; 90\\% are used for the training set while 10\\% are used for testing. This means that 4 images are chosen for testing before we extract patches.\nFurthermore, we augment our whole data set to enhance the robustness of our network. We perform three different kinds of augmentations: flipping, blurring and gamma shifting. We flip the images only horizontally to preserve the characteristic that grape bunches get smaller in the lower part. The blurring is applied with random kernel sizes between 3 and 7, and the gamma shift is randomly chosen between 0.8 and 1.2. \nIncluding data augmentations, we end up with 5700 patches from 38 images.\n\nWe retrained a network which was pretrained on the ImageNet dataset \\cite{imagenet_cvpr09}. The learning rate of the network is 0.001 and the momentum is 0.9. The learning rate is decreased by a factor of 0.99 after 5 epochs.\n\n\\subsection{Intersection over Union (IoU)}\nWe investigate the intersection over union (IoU) for every class. The IoU, also referred to as the Jaccard coefficient, is a similarity measure between two sample sets $A$ and $B$; its definition is the same as for our loss.\nIn our case one set is the reference mask and the other one is the prediction mask produced by the network. The evaluation is done for each class separately.\nThe IoU is the most common measure to evaluate a semantic segmentation.\n\nTab. \\ref{tab:Acc} shows that we achieve an IoU of more than 75~\\% for the class 'berry' for both edge thicknesses. \nFor the class 'edge' the IoU is more than 10~\\% worse for 2-pixel edges compared to 3-pixel edges. The result is still reasonable, because a thin edge is very hard to reproduce. \n\n\\begin{table}[h]\n\\begin{center}\n \\begin{tabular}{|l|c|c|}\n \\hline\n & Edge [3 pix] & Edge [2 pix] \\\\\n \\hline\\hline\n \n Average IoU [\\%]& 76.0 & 73.0 \\\\\n IoU Background [\\%]& 99.0 & 99.1 \\\\\n IoU Berry [\\%]& 75.3 & \\textbf{76.8} \\\\\n IoU Edge [\\%]& \\textbf{53.7} & 42.0 \\\\\n \\hline\n \\end{tabular}\n\\end{center}\n\\caption{Investigation of the intersection over union (IoU). The IoU is better for 'berry' than for 'edge', which can be explained by the nature of the classes: an exact overlay of thin edges is unlikely.}\n\\label{tab:Acc}\n\\end{table}\n\nTo further validate our model we also performed a cross-validation on 30 images with an edge thickness of 2 pixels. For each fold we selected 3 images as validation images and the other 27 images as training images. We chose 30 images to split our data evenly while keeping the subsets representative of the whole data set.\nThe average IoU is slightly worse than the above mentioned results with 69.96 $\\pm$ 1.19~\\%. The IoU for the background is 98.72 $\\pm$ 0.45~\\%, for the class 'berry' 72.03 $\\pm$ 2.23~\\% and for the class 'edge' 39.17 $\\pm$ 2.41~\\%. These values match the results of the fully trained network and indicate that our model fits well.\n\n\\subsection{Influence of Edge Thickness and Training System}\nThe class 'edge' has a major impact on the correct detection of single berries. The whole evaluation in this subsection is done without the application of the post-processing steps.\nWe aim for an explicit differentiation between berries and examine two different edge thicknesses with this criterion in mind. We apply the two edge thicknesses, 2 and 3 pixels, to the two training systems VSP and SMPH and evaluate the results against the dot-annotated images.\nThe precision $P$ describes the ratio between correctly predicted berries and all predicted berries. Correctly predicted means that a predicted berry region contains at least one manually annotated berry. The set of all predicted berries also contains the incorrectly detected ones, where no manual annotation lies within the component:\n\\begin{equation}\n P = \\frac{\\textrm{TP}}{\\textrm{TP} + \\textrm{FP}}\n\\end{equation}\nThe recall $R$ describes the ratio between correctly predicted berries and all manually annotated berries. \nWe have to keep in mind that one berry region can contain more than one manually annotated berry.\n\\begin{equation}\n R = \\frac{\\textrm{TP}}{\\textrm{TP} + \\textrm{FN}}\n\\end{equation}\nThe $F_1$-score, $F_1 = 2PR\/(P+R)$, is a measure of the test accuracy and combines precision and recall. A sketch of this matching-based evaluation is given below.\n
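\nThe matching admits more than one operationalization; the sketch below assumes predicted berry components labeled $1 \\dots K$ (e.g. by a connected component analysis) and the dot annotations as pixel coordinates, and it counts a dot as found if it lies inside any predicted berry region.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef precision_recall_f1(pred_labels, dots):\n    # pred_labels: (H, W) int array, 0 = background, 1..K = berry regions\n    # dots: (N, 2) integer array of annotated (row, col) berry positions\n    under_dot = pred_labels[dots[:, 0], dots[:, 1]]\n    matched = np.unique(under_dot[under_dot > 0])  # regions with a dot\n    tp = matched.size                # regions containing >= 1 dot\n    fp = pred_labels.max() - tp      # regions without any dot\n    fn = int(np.sum(under_dot == 0)) # dots inside no predicted region\n    p = tp / (tp + fp) if tp + fp else 0.0\n    r = (len(dots) - fn) / len(dots) # annotated berries that were found\n    f1 = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f1\n\\end{verbatim}\n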
\n\\begin{table*}[h]\n \\begin{center}\n \\begin{tabular}{|l|c|c|c|c|}\n \\hline\n \\makecell{Training \\\\System} & Edge [pix] & Precision [\\%] & Recall [\\%]& $F_1$-Score\\\\\n \\hline\\hline\n VSP & 2 & \\textbf{85.41} & \\textbf{93.90} & \\textbf{89.46}\\\\\n VSP & 3 & 81.21 & 92.59 & 86.53\\\\\n SMPH & 2 & \\textbf{80.54} & \\textbf{89.00} & \\textbf{84.56}\\\\\n SMPH & 3 & 78.65 & 85.26 & 81.82\\\\\n \\hline\n \\end{tabular}\n\\end{center}\n\\caption{Comparison of the two edge thicknesses for the two training systems. We show precision, recall and $F_1$-score.}\n\\label{tab:Edge}\n\\end{table*}\nTab. \\ref{tab:Edge} shows that we recognize more berries in the VSP than in the SMPH. The improvement from the thinner edge can mainly be seen in the increased number of correct detections for the SMPH and in the decrease of wrong classifications for both training systems. This means that the influence of the edge thickness on the number of correct classifications is smaller for the VSP than for the SMPH.\n\nThe SMPH is characterized by an inhomogeneous berry size with a higher number of small berries compared to the VSP. Although we train with edge thicknesses of 2 or 3 pixels, the predictions feature thicker edges than the ground truth images. Due to the small berry size, some of the smaller berries consist only of edge pixels if the edge thickness is too high. Our proposed method is then not able to recognize these small berries with a radius below 2--3 pixels.\n\n\\subsection{Influence of Post Processing}\nThe three post-processing steps are applied to reduce the number of misclassifications, i.e. regions where a berry is detected but no manually annotated berry is present. Fig. \\ref{fig:Filter} shows an example of possible misclassifications and how the post-processing steps reduce them.\n\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/patchesFilter\/Patch_Orig_3.png}\n \\caption[]%\n {{\\small Original }} \n \\label{fig:OrigFilter}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.24\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{images\/patchesFilter\/Patch_Axis_3.png}\n \\caption[]%\n {{\\small Axis}} \n \\label{fig:Axis}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.24\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{images\/patchesFilter\/Patch_AxisArea_3.png}\n \\caption[]%\n {{\\small Area}} \n \\label{fig:Area}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.24\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{images\/patchesFilter\/Patch_AxisAreaEdge_3.png}\n \\caption[]%\n {{\\small Edge}} \n \\label{fig:Edge}\n \\end{subfigure}\n \\caption[Filter stages]\n {\\small Results of the different filter stages. First we check the ratio between major and minor axis, which removes objects that are not round. The second stage checks the ratio between the actual pixel area and the computed area of a circle whose radius is the mean of the main axes. 
This stage removes objects which are round according to their main axes but are not filled in. The last stage removes all components which are not sufficiently surrounded by an edge. We perform this computationally intensive step last because we want to filter out as many objects as possible beforehand.\n \\label{fig:Filter}}\n\\end{figure*}\n\nFig. \\ref{fig:OrigFilter} shows the original prediction of the classes 'berry' and 'edge' overlayed on the original image. Artifacts at leaf edges can be seen, as well as two berries which are fused into one component. \nThe initial post-processing step removes components which have a minor-to-major-axis ratio of less than 30 \\%. The component which is removed is a small fragment in the lower left corner. \nOn the right side a component remains which has a correct axis ratio but is not filled in. In Fig. \\ref{fig:Area} this component is removed because its actual area is smaller than the computed area of a circle whose radius is the mean of the main axes. \nThe last and computationally most intensive step is the removal of all components which are not sufficiently surrounded by an edge. Fig. \\ref{fig:Edge} shows that the lowest component is removed due to this criterion. \n\n\\begin{table*}[h]\n \\begin{center}\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n \\makecell{Training \\\\System} & Axis & Area & Edge & Precision [\\%] & Recall [\\%] & $F_1$-Score\\\\\n \\hline\\hline\n VSP & - & - & - & 85.41 & \\textbf{93.90} & 89.46\\\\\n VSP & 0.3 & - & - & 88.32 & 93.67 & 90.92\\\\\n VSP & 0.3 & 0.3 & - & 88.92 & 93.65 & 91.22\\\\\n VSP & 0.3 & 0.3 & 0.4 & \\textbf{91.63} & 92.46 & \\textbf{92.04}\\\\\n SMPH & - & - & - & 80.54 & \\textbf{89.00} & 84.56 \\\\\n SMPH & 0.3 & - & - & 82.80 & 88.88 & 85.73 \\\\\n SMPH & 0.3 & 0.3 & - & 83.49 & 88.81 & 86.07\\\\\n SMPH & 0.3 & 0.3 & 0.4 & \\textbf{87.79} & 86.90 & \\textbf{87.34}\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n \\caption{Comparison of different filter strategies. 'Axis' means that the ratio between the minor and major axis of each component is not allowed to be smaller than 0.3. For the circle area we compute the radius of each component as the mean of the minor and major axis and compare the resulting area with the actual area of the component; the actual area is not allowed to be smaller than 0.3 times the circle area ('Area'). 'Edge' means that every component needs to be surrounded by at least $40 \\%$ edge.}\n \\label{tab:Filter}\n\\end{table*}\n\nIn Tab. \\ref{tab:Filter} we can see that with every filter stage the precision increases while the recall decreases. Our filter removes many misclassified berries, but in some cases correctly classified berries are removed as well. \nSince the increase in precision is stronger than the decrease in recall, we remove more misclassified berries than correctly classified ones. \n\n\\subsection{Evaluation of Berry Counting}\nThe goal of the proposed processing chain is to count berries in the field. We therefore evaluate the complete chain by comparing its output, the predicted berry masks, with the ground truth, our manually dot-annotated berries. Especially in the agricultural science community this is often done with correlation plots.\nThe evaluation of the berry count is done on the dot-annotated data set. It contains images of 10 plants in the VSP and 10 plants in the SMPH. Each plant is covered by 3 images, which results in a total of 60 dot-annotated images. We investigate the counting of berries by computing the coefficient of determination ($R^2$); a sketch of this evaluation is given below.
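\nTwo common conventions exist for $R^2$ in such correlation plots: the coefficient of determination with respect to the identity line, and the squared Pearson correlation of the fitted regression line. The sketch below (Python with NumPy; names are illustrative) computes both.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef r_squared(manual, detected):\n    # Coefficient of determination, one count pair per image patch.\n    y = np.asarray(manual, dtype=float)\n    f = np.asarray(detected, dtype=float)\n    ss_res = np.sum((y - f) ** 2)\n    ss_tot = np.sum((y - y.mean()) ** 2)\n    return 1.0 - ss_res / ss_tot\n\ndef r_squared_fit(manual, detected):\n    # Squared Pearson correlation, i.e. the R^2 of the fitted line.\n    return np.corrcoef(manual, detected)[0, 1] ** 2\n\\end{verbatim}\n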
\nFig. \\ref{fig:R2Plot} shows the correlation plots for the VSP and the SMPH. The x-axis shows the number of manually counted berries, while the y-axis shows the number of detected berries. Each point in the plots depicts the numbers for one non-overlapping image patch cut from the images. The lines show the correlation between the number of reference and detected berries. The dashed line depicts the perfect correlation we would obtain if we always detected the correct number of berries for each patch. The continuous line is the actual correlation. \nWe tend to underestimate the number of berries. The VSP shows, with an $R^2$ of 98.79 \\%, a slightly better correlation than the SMPH with an $R^2$ of 97.15 \\%. \n\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[height = 6cm]{images\/R2_VSP.png}\n \\caption[]%\n {{\\small $R^2$-Plot for VSP ($R^2 = 98.79 \\%$)}} \n \\label{fig:RVSP}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.48\\textwidth} \n \\centering \n \\includegraphics[height = 6cm]{images\/R2_SMPH.png}\n \\caption[]%\n {{\\small $R^2$-Plot for SMPH ($R^2 = 97.15 \\%$)}} \n \\label{fig:RSMPH}\n \\end{subfigure}\n \\caption[$R^2$-plots]\n {\\small $R^2$-plots for the two training systems VSP and SMPH. The red circles depict the berry count for an image patch. The x-axis shows the manual reference count, the y-axis the result of the connected component analysis of the predicted mask. The dashed line represents the optimal mapping between ground truth and prediction ($R^2 = 100 \\%$). The continuous line is the estimated mapping, showing a good correlation in both cases.\n \\label{fig:R2Plot}}\n\\end{figure*}\n\\subsection{Comparison with other approaches}\n\nThe task of object counting can be tackled with many different approaches. \nWe want to provide a comprehensive evaluation that compares our approach to both an instance segmentation and a regression approach, where all approaches are used to solve the same problem.\nWe compare our method against two well-established methods.\nThe first one is an instance segmentation network, the Mask-RCNN. The second one is a density map estimation with a U-Net.\nEach network has a different structure, number of parameters and inference time (see Tab. \\ref{tab:Networks}). Our approach is flexible in the sense that other segmentation networks, for example the U-Net, could be used instead; nevertheless we want to show that our method can be used successfully with a lightweight network architecture like the one proposed in Bonnet \\cite{Milioto18}.\n\nThe two networks are trained and tested on the same data as our approach.\n\n\\begin{table*}[h]\n \\begin{center}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n Network & Parameters [Mio.] & Inference time& Spatial & Size \\\\\n \\hline\\hline\n Mask-RCNN & 64.159 & high& $\\surd$ & $\\surd$\\\\\n U-Net & 7.769 & low & $\\surd$ & $\\times$\\\\\n \\hline\n Ours & 3.188 & low & $\\surd$ & $\\surd$\\\\\n \\hline\n \\end{tabular}\n \\end{center}\n \\caption{Comparison of different counting approaches. Mask-RCNN is a classical deep learning instance segmentation approach. It is a complex network with a high number of parameters and is able to detect object locations and the spatial extent of single objects. The U-Net is used to estimate density maps which give a count after integration. 
The spatial position of objects can be extracted, but no information regarding their size. Our architecture with a MobileNetV2 encoder and a DeepLabV3+ decoder has the smallest number of parameters but is able to extract the spatial position and extent of single berries.}\n \\label{tab:Networks}\n\\end{table*}\n\n\\subsubsection{Comparison with Mask-RCNN}\nOne of the most well-known approaches for instance segmentation is the Mask-RCNN, which was presented by He et al. \\cite{He17}. Mask-RCNN is an extension of Faster-RCNN \\cite{Ren15}; it adds a third branch to the two already existing ones, which provide a class label and a bounding box offset. The new branch outputs an object mask. The Mask-RCNN is therefore able to detect single objects in images by providing a mask for each one.\n\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[height = 3.3cm]{images\/comparison\/Count_1DC24E72_1533025456076_1024_0000_Beschriftung.png}\n \\caption[]%\n {{\\small Manual Count}} \n \\label{fig:SMPH_Manu}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 3.3cm]{images\/comparison\/color_1DC24E72_1533025456076_1024_0000.png}\n \\caption[]%\n {{\\small Own}} \n \\label{fig:SMPH_Own}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 3.3cm]{images\/comparison\/1DC24E72_1533025456076_1024_0000.png}\n \\caption[]%\n {{\\small Mask-RCNN}} \n \\label{fig:SMPH_Mask}\n \\end{subfigure}\\\\\n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[height = 3.3cm]{images\/comparison\/Count_1DC24E73_1533024584584_1792_1600_Beschriftung.png}\n \\caption[]%\n {{\\small Manual Count}} \n \\label{fig:VSP_Manu}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 3.3cm]{images\/comparison\/color_1DC24E73_1533024584584_1792_1600.png}\n \\caption[]%\n {{\\small Own}} \n \\label{fig:VSP_Own}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 3.3cm]{images\/comparison\/1DC24E73_1533024584584_1792_1600.png}\n \\caption[]%\n {{\\small Mask-RCNN}} \n \\label{fig:VSP_Mask}\n \\end{subfigure}\n \\caption[Visual comparison]\n {\\small Visual comparison of our own algorithm and the Mask-RCNN for an exemplary image patch of the SMPH (upper row) and the VSP (lower row). The first picture shows the manual annotation used as counting reference. The middle picture shows the mask output by our proposed algorithm, while the last picture shows the result of the Mask-RCNN. The Mask-RCNN features more falsely detected berries in image regions where no berries should be detected.\n \\label{fig:CompVisu}}\n\\end{figure*}\n\nFor training purposes we adapt our already annotated data set in which every berry is individually coloured. In contrast to our 'edge' and 'berry' annotation, we now consider each berry as a whole.\nEach berry object is represented by its own mask layer; in this layer only the classes 'berry' and 'background' exist. This means that if we have 50 berries in an image, we have 50 masks, one for each berry. These masks are stacked together to build a label matrix whose depth corresponds to the number of objects. A sketch of this conversion is given below.\n
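\nA minimal sketch of this conversion, assuming NumPy and an instance-labeled mask as before; the helper name is illustrative.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef instances_to_mask_stack(inst):\n    # inst: (H, W) integer mask, 0 = background, k > 0 = berry instance k\n    # returns an (H, W, K) boolean stack with one binary mask per berry\n    ids = np.unique(inst)\n    ids = ids[ids > 0]\n    return np.stack([inst == k for k in ids], axis=-1)\n\\end{verbatim}\n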
\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[height = 4.3 cm]{images\/comparison\/Mask_R_VSP.png}\n \\caption[]%\n {{\\small Mask-RCNN}} \n \\label{fig:R_Mask_VSP}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 4.3 cm]{images\/comparison\/Reg_R_VSP.png}\n \\caption[]%\n {{\\small Regression}} \n \\label{fig:R_Reg_VSP}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 4.3 cm]{images\/comparison\/Awesomer_R_VSP.png}\n \\caption[]%\n {{\\small Own}} \n \\label{fig:R_Own_VSP}\n \\end{subfigure}\\\\\n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[height = 4.3cm]{images\/comparison\/Mask_R_SMPH.png}\n \\caption[]%\n {{\\small Mask-RCNN}} \n \\label{fig:R_Mask_SMPH}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 4.3 cm]{images\/comparison\/Reg_R_SMPH.png}\n \\caption[]%\n {{\\small Regression}} \n \\label{fig:R_Reg_SMPH}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth} \n \\centering \n \\includegraphics[height = 4.3 cm]{images\/comparison\/Awesomer_R_SMPH.png}\n \\caption[]%\n {{\\small Own}} \n \\label{fig:R_Own_SMPH}\n \\end{subfigure}\n \\caption[Counting comparison]\n {\\small Comparison of the berry count between Mask-RCNN, regression with U-Net and our approach. The Mask-RCNN drastically overestimates the number of berries for the SMPH, as can be seen in Fig. \\ref{fig:R_Mask_SMPH}; its $R^2$ of 74.33 \\% is significantly worse than that of our algorithm. The same applies to the VSP. The regression approach is only slightly worse than our approach.\n \\label{fig:CompR}}\n\\end{figure*}\n\nWe use the Mask-RCNN implementation provided by Abdulla et al. \\cite{Abdulla17}.\nWe train the network on image patches with dimensions of 320 $\\times$ 256 pixels. The maximum number of objects is 80. Due to the small object sizes we use anchors of the sizes 8, 16, 32, 64 and 128 pixels with anchor ratios of 0.5, 1 and 2. \n\nTo ensure a fair comparison between our algorithm and the Mask-RCNN, we do not apply our post-processing steps to the network outputs. Furthermore, we train our network with the same patch size and number of images as the Mask-RCNN.\nFig. \\ref{fig:CompVisu} shows a visual comparison between our algorithm and the results of the Mask-RCNN. For both training systems, the Mask-RCNN tends to overestimate the number of objects. \n\nWe further investigate and compare the counting of berries with the coefficient of determination $R^2$ for our algorithm and the Mask-RCNN on both training systems. Fig. \\ref{fig:R_Mask_SMPH} and Fig. \\ref{fig:R_Mask_VSP} show the $R^2$-plots for the Mask-RCNN. In both cases the data points are widely scattered. For our algorithm the data points are closer together and are approximated well by a straight line. The Mask-RCNN achieves $R^2$ values between $74$ and $81~\\%$, while our implementation shows a better correlation between the manually counted berries and the detected ones with $R^2 > 95~\\%$.\n\nThere are several possible reasons for the poor performance of the Mask-RCNN. We only have a limited data set: we train on 5700 patches with at most 80 objects per patch. 
Since the Mask-RCNN has more than 64 million parameters to train, this might not be sufficient.\n\nAnother important aspect is the inference time. While our network processes nearly 2000 images in roughly a minute, the Mask-RCNN takes nearly an hour for the same images, a factor of about 60. Although we are mainly interested in the absolute inference time, we have to keep in mind that the network architectures are highly disparate. The number of network parameters differs by a factor of 20: the Mask-RCNN has approx. 64 million parameters, while our network only has 3 million. The gain in inference time is thus larger than the reduction in the number of parameters.\n\n\\subsubsection{Comparison with U-Net density map estimation}\nBesides the detection of objects, the counting problem can be addressed by regression approaches as well. The idea is to estimate a density map from an input image; integrating over the map yields the number of objects present in the image, while the maxima of the density map indicate their positions. We use a U-Net \\cite{Ronneberger15}, an image segmentation network which is often applied to biomedical data.\n\nThe annotation process for regression networks is simpler than for detection networks: instead of pixel-wise masks or bounding boxes, a dot-wise annotation is sufficient. We modify our data set of pixel-wise annotated berries by providing the center of each component as the dot annotation for each object. Around each dot a Gaussian kernel with a standard deviation of one pixel in both directions is applied.\nWe train the network\\footnote{https:\/\/github.com\/NeuroSYS-pl\/objects\\_counting\\_dmap} on the same image patches as the Mask-RCNN and our own network. The patch dimensions are again $320 \\times 256$ pixels with a maximum number of 80 objects per image.\n\nThe regression with the U-Net performs slightly worse than our approach. The $R^2$ for the VSP is $97.19 \\%$, compared to $R^2 = 97.74\\%$ for our approach. For the SMPH the difference is slightly larger, with $R^2 = 92.20\\%$ for the U-Net and $R^2 = 95.84\\%$ for our approach.\nThe inference time is similar to our approach, and the U-Net has only about twice as many parameters as our network (7.7 Mio.).\n\nThe main advantage of our masks is the extraction of further phenotypic traits like the berry size. The regression approach yields density maps which can be used to extract the spatial positions of the single objects, but not their whole extent. A sketch of the density-map target generation is given below.\n
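\nThe sketch assumes Python with SciPy; the exact normalization used for the trained network may differ.\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import gaussian_filter\n\ndef density_target(dots, shape, sigma=1.0):\n    # dots: (N, 2) integer (row, col) berry centers; shape: (H, W)\n    target = np.zeros(shape, dtype=np.float32)\n    for r, c in dots:\n        target[r, c] += 1.0\n    # The Gaussian filter approximately conserves the total mass, so\n    # target.sum() stays close to the berry count N (up to boundary\n    # effects), and integrating a predicted map yields the count.\n    return gaussian_filter(target, sigma=sigma)\n\\end{verbatim}\n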
\n\\section{Qualitative investigation for difficult conditions}\n\\begin{figure*}[h]\n \\centering\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/outlook\/051-006-010_1DC24E74_1504000666872.jpg}\n \\caption[]%\n {{\\small Phenoliner Image}} \n \\label{fig:PhenoOrig}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.48\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{images\/outlook\/051-006-010_1DC24E74_1504000666872_masked_hell.png}\n \\caption[]%\n {{\\small Prediction}} \n \\label{fig:PhenoPredi}\n \\end{subfigure}\n \\vskip\\baselineskip\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/outlook\/51_033_001_OIV1.JPG}\n \\caption[]%\n {{\\small SLR Image}} \n \\label{fig:SLROrig}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.48\\textwidth} \n \\centering \n \\includegraphics[width=\\textwidth]{images\/outlook\/51_033_001_OIV1_masked_hell.png}\n \\caption[]%\n {{\\small Prediction}} \n \\label{fig:SLRPredi}\n \\end{subfigure}\n \\vskip\\baselineskip\n \\caption[Outlook]\n {\\small Inference of our network on unrelated data. \n Fig. \\ref{fig:PhenoOrig} and \\ref{fig:PhenoPredi} show images which were taken in 2017 with the Phenoliner. The illumination system and the background are different from the current setup. Furthermore, the images show Regent, a red variety. Although the network never saw red wine berries, it is able to detect them correctly.\n Fig. \\ref{fig:SLROrig} and Fig. \\ref{fig:SLRPredi} show a picture which was taken with a handheld camera under natural illumination. The network is nonetheless able to identify most of the berries. \n \\label{fig:Outlook}}\n\\end{figure*}\n\nAs an outlook, we show the ability of our network to adapt to different conditions with two qualitative examples.\nFirst we run inference on an image which was taken with the Phenoliner in 2017 (see Fig. \\ref{fig:PhenoOrig}). The illumination system and the background differ considerably from the setup in 2018. Furthermore, the image shows a red variety after veraison and features dark berries. The network is able to correctly detect most of the berries although it never saw dark berries during training (see Fig. \\ref{fig:PhenoPredi}). \nAs a second experiment we run inference on an image taken by a handheld SLR camera under natural sunlight (see Fig. \\ref{fig:SLROrig}). The distance between camera and canopy differs from the otherwise nearly constant 0.75 m recorded by the Phenoliner. The observed variety shows green berries similar to the berries from the training set. The network is again able to correctly detect most of the berries (see Fig. \\ref{fig:SLRPredi}).\\\\\nBoth examples show that our network has the potential to be used in different surroundings. \n\n\\section{Conclusion and Outlook}\n\\label{sec:conclusion}\nIn this paper, we presented a novel berry counting approach. We are able to detect and mask single berry objects with a semantic segmentation network by using a class 'edge' to separate single objects from each other. This avoids the time- and computation-intensive use of an instance segmentation network like the Mask-RCNN.\nAlthough we trained only one network, we are able to handle two training systems with different characteristics and challenges. 
We achieve $R^2 = 98.79~\\%$ for the VSP and $R^2 = 97.15~\\%$ for the challenging SMPH.\n\nWe compared our approach with a state-of-the-art instance segmentation approach, the Mask-RCNN, and achieve visually as well as quantitatively better results (the Mask-RCNN reaches only $R^2 = 81.52~\\%$ for the VSP and $R^2 = 74.33~\\%$ for the SMPH). Furthermore, we outperform the Mask-RCNN in terms of inference time: it takes nearly an hour, while our approach runs in minutes.\nThe comparison with a classical regression approach yields results which are only slightly worse than ours ($R^2 = 97.19~\\%$ for the VSP and $R^2 = 92.20~\\%$ for the SMPH). The advantage of our method, however, is the potential extraction of additional phenotypic traits like the berry size.\n\nDespite these encouraging results, there is further room for improvement. For example, we want to investigate in detail the application to other grapevine varieties. Furthermore, we want to tackle the problem of berries being counted multiple times in overlapping images. The goal is to offer the joint investigation of a whole row of plants.\n\nThere is still a gap between counting berries and estimating yield, which involves correcting for counts in overlapping images, estimating the number of 'invisible' berries and a proper evaluation of these steps. These steps are part of current research and beyond the scope of this paper.\n\n\n\\section*{Acknowledgment}\nThis work was funded by the German Federal Ministry of Education and Research (BMBF, Bonn, Germany) in the framework of the project novisys (FKZ 031A349) \nand partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2070 - 390732324.\n\n\n\n\n\\section*{References}\n\\bibliographystyle{model1-num-names}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIt has been suggested that the glass transition in cooled liquids is a\ndynamic transition from an ergodic to a non-ergodic state. For\nexample, the ideal mode-coupling theory predicts that the molecules of\nsimple liquids become increasingly ``caged'' by surrounding molecules,\nresulting in an ergodic to non-ergodic transition at a critical\ntemperature $T_c$ at which the fluid molecules become permanently\nlocalized (i.e., the self-diffusion coefficient vanishes)\n\\cite{mc}. Although a tendency toward particle localization for\nincreasingly long times has been observed in simulations and\nexperiments on supercooled liquids, particle localization and\nstructural arrest do not actually occur at the extrapolated\ntemperature $T_c$ because the particles are eventually able to\n``escape'' their cages. Recent simulations have also shown the\ntendency for particle motion to occur in an increasingly correlated\nway in supercooled liquids \\cite{ddkppg,dgpkp,kdppg,gd,gdp,dgp,bdbg},\na feature emphasized by the older, phenomenological Adam-Gibbs model\nof glass formation \\cite{ag65}. The observed greater particle\nmobility near $T_c$ is presumably a consequence of the increased\ncollective motion of cooled liquids (``hopping'' in the extended\nversion of the mode-coupling theory \\cite{mc}), which restores the\nergodicity of the liquid for some temperature range below $T_c$. This\nthermally activated collective motion apparently postpones the ergodic\nto non-ergodic transition to a lower temperature. 
In the Adam-Gibbs\nmodel \\cite{ag65}, this lower temperature corresponds to the\nconjectured ``ideal'' glass transition temperature $T_o$, where the\nequilibrium configurational entropy extrapolates to zero \\cite{gdm}.\n\nIf glass formation indeed represents an ergodic to non-ergodic dynamic\ntransition, then it is important to define a dynamical measure of\norder that quantifies both the ``closeness'' of the transition\n\\cite{tmk89} and the degree of correlated motion in an equilibrium\nglass-forming liquid. Ergodic theory provides us with a natural\nmeasure in the form of the ``dynamic entropy''\n\\cite{krylov,k59,GW,G2,Z81,L73}.\n\nThe concept of dynamic entropy was\nintroduced by Shannon in his theory describing the capacity of ideal\ncommunication devices to transmit information \\cite{shannon}. This\nidea was later developed by Kolmogorov and others \\cite{k59} into a general\nmeasure of the ``degree of randomness'' or ``degree of chaos'' of\ndynamical systems. According to Pesin's theorem \\cite{pessin}, the\nKolmogorov-Sinai dynamic entropy $h_{KS}$ for a Hamiltonian dynamical\nsystem equals the sum of the positive Lyapunov exponents \\cite{pessin,livi}. \nThese exponents are measures of the ``instability'' of the\nsystem evolution \\cite{krylov,Z81}. Dynamic entropy extends \nthe equilibrium definition of entropy from statistical\nmechanics to the {\\it time domain}.\nThe dynamical entropy provides an estimate of the rate of growth of\n``information'' (per unit time) required to describe the evolution of\na dynamical system \\cite{GW,G2,D98}, and it is also a measure of\nthe ``complexity'' of a dynamical system \\cite{complexity}. The\ndynamic entropy characteristically decreases as a system orders and\nits exploration of its phase space becomes more restricted\n\\cite{cleary,butera}. Thus, the dynamic entropy decreases as a fluid\ncrystallizes or a spin system orders \\cite{butera,posch,caiani,wales}.\n\nThe Kolmogorov-Sinai dynamic entropy has some shortcomings in the\ndescription of the complex configurational changes that occur in\nsupercooled liquids. In particular, $h_{KS}$ diverges for the ideal\nprocess of Brownian motion (due to the non-differentiability of the\ntrajectories) \\cite{GW,berger,wiener}. Consequently, we must anticipate\ndifficulties in applying dynamic entropy to quantify particle motions\nat large length and time scales in the case of supercooled\nliquids. Recently, there has been an important generalization of the\ndynamic entropy concept that provides a ``bridge'' between\nmicroscopic dynamical system descriptions and macroscopic stochastic\ndescriptions of liquid dynamics. This generalization recognizes that\nthe amount of information required to describe the paths of a\nstochastic process depends strongly on the length scale of observation\n$\\epsilon$. The $\\epsilon$-dependent dynamic entropy $h(\\epsilon)$ of\nGaspard and Wang and others \\cite{GW} (also called\n``$\\epsilon$-entropy'') reduces to the $h_{KS}$ entropy in the limit\nof small $\\epsilon$,\n\\begin{eqnarray} \\label{liks}\n\\lim_{\\epsilon \\to 0} h(\\epsilon) = h_{KS},\n\\end{eqnarray}\nand is well-defined for idealized stochastic processes at a fixed,\nnonvanishing $\\epsilon$. 
The dynamic entropy of a Brownian particle\n$h_B(\\epsilon)$ obeys the scaling relation\n\\begin{eqnarray} \\label{hb}\nh_B(\\epsilon) \\propto \\epsilon^{-2}, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\epsilon > 0,\n\\end{eqnarray}\nwhere the proportionality constant is fixed by the particle diffusion\ncoefficient \\cite{GW}. As mentioned above, $h(\\epsilon)$ for\nstochastic particle motion diverges as $\\epsilon \\to 0$, and the\nexponent reflects the fractal dimension of the particle trajectories\n\\cite{GW,mandelbrodt}. Specifically, the exponent $2$ in Eq.~\\ref{hb}\nis the Hausdorff dimension of a Brownian path in three dimensions\n\\cite{hausdorff}, and in the limit of perfectly coherent (ballistic)\nparticle motion this exponent is $1$. In idealized stochastic\nprocesses ({\\em e.g.} fractional Brownian motion, L\\'evy flights,\netc.) the exponent in Eq.~\\ref{hb} can be identified with the path\nHausdorff dimension \\cite{k59,GW,mandelbrodt,getoor}, and can take\nvalues intermediate between 1 and 2. This exponent reflects the\n``degree of persistence'' of the particle displacement relative to\nBrownian motion.\n\nThe scale dependent dynamic entropy $h(\\epsilon)$ for complex\ndynamical systems such as liquids depends strongly on the\nobservational scale $\\epsilon$. At very small $\\epsilon$ the\nmicroscopic chaotic motion of the molecules is observed, so that\n$h(\\epsilon)$ varies slowly with $\\epsilon$. The decorrelation of\nparticle velocities in a liquid occurs at a time and space scale\ncorresponding to the average interparticle ``collision time'', and\n$h(\\epsilon)$ starts varying with $\\epsilon$ as this decorrelation\noccurs. This helps us to identify a characteristic space and time\nscale over which the bare microscopic dynamics can be coarse-grained\nby a stochastic description. Correlations associated with particle\ndisplacement arise at longer times in cooled liquids, and $h(\\epsilon)$\nalso helps us in determining the spatial and time scales over which\nthese correlations occur. $h(\\epsilon)$ thus provides a measure of the\ndegree of chaotic motion appropriate to the description of real\nsystems at arbitrary observational scales and is an attractive\ntool for quantifying the increasingly restricted motion in cooled\nliquids. It is notable that its definition is not restricted to\ncircumstances where statistical mechanical equilibrium exists, so that\nthis measure of the degree of chaos extends to non-equilibrium\nsituations such as the glass state and turbulent fluids \\cite{GW}.\n\nThe calculation of $h(\\epsilon)$ \\cite{GW} (or $h_{KS}$) is generally\ndifficult, especially in cases where $h_{KS}$ is small and long\ncomputational times are required for its accurate determination\n\\cite{GW,posch}. In the present paper, we utilize a simple\napproximation for $h(\\epsilon)$ that has the advantage of being\naccessible in experiments on real materials and computer simulations\n\\cite{GW,gnature}. Provided that the spatial scale $\\epsilon$ is not\ntoo small \\cite{GW}, $h(\\epsilon)$ can be approximated by enclosing\nthe particle position at time $t=0$ by a sphere of radius $\\epsilon$\ncentered on the particle, and then determining the time $\\tau$ at\nwhich the trajectory first arrives at the threshold distance\n$\\epsilon$ (see Fig.~\\ref{fig0}). 
We average this ``first-passage time'' over\nall particles in the liquid to obtain the mean first-passage time\n(MFPT) $\\tau(\\epsilon)$ and we define the ``MFPT dynamic entropy''\n$S(\\epsilon)$ as,\n\\begin{equation} \\label{defentropy}\nS(\\epsilon) \\equiv 1\/\\tau(\\epsilon),\n\\end{equation}\nwhere\n\\begin{equation} \\label{deftau}\n\\tau(\\epsilon) \\equiv \\int_{0}^{\\infty}{dt}\\, P_{\\epsilon}(t) \\, t.\n\\end{equation}\n$P_{\\epsilon}(t)dt$ is the probability that the particle reaches the\ndistance $\\epsilon$ between $t$ and $t+dt$. The dynamic entropy\n$S(\\epsilon)$ is thus one measure of the average ``escape rate'' of a\nparticle from its local environment \\cite{jk}. We note that although\nthe definition of $S(\\epsilon)$ is motivated by dynamical systems\ntheory concepts, this property defines an independently interesting\nmeasure of correlated motion in liquids that does not rely on the\napproximation relating $S(\\epsilon)$ to $h(\\epsilon)$. A minimal\nsketch of how $\\tau(\\epsilon)$ can be estimated from stored particle\ntrajectories is given below.\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=0.65\\hsize\\hfil\\epsfbox{adg-fig1.eps}\\hfil}\n\\caption{Schematic of a particle trajectory in a cooled liquid. The\nsolid line represents the $\\epsilon$-sphere (colored gray).\n$\\tau(\\epsilon)$ is the first-passage time for the particle to reach\nthe sphere boundary. The filled circle denotes the initial particle\nposition, and the open circle denotes the particle at the\nfirst-passage time.}\n\\label{fig0}\n\\end{figure}\n
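\nThe sketch below (Python with NumPy; array layouts and names are our own) shows how $\\tau(\\epsilon)$ can be estimated from configurations stored at discrete times. Note that with discretely stored snapshots one only obtains the first stored time at which the threshold has been passed, and averaging over several time origins would improve the statistics.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef mean_first_passage_time(traj, times, eps):\n    # traj:  (T, N, 3) stored particle positions, traj[0] at t = 0\n    # times: (T,) physical times of the stored configurations\n    # eps:   radius of the sphere centered on each initial position\n    disp = traj - traj[0]\n    dist = np.sqrt((disp ** 2).sum(axis=-1))  # (T, N) displacements\n    crossed = dist >= eps\n    first = crossed.argmax(axis=0)   # first crossing index (0 if never)\n    reached = crossed.any(axis=0)    # drop particles that never reach eps\n    return times[first[reached]].mean()\n\n# S(eps) is then approximated by 1.0 / mean_first_passage_time(...)\n\\end{verbatim}\n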
\nIn this paper we utilize $S(\\epsilon)$ to identify characteristic\nspace and time scales in the particle dynamics and to quantify the\nincreasingly correlated motion observed in previous analyses of the\nsame simulations considered in the present paper\n\\cite{ddkppg,dgpkp}. These studies indicated the development of\nlarge scale dynamical heterogeneity, and the nature of this dynamical\nheterogeneity has been examined in a series of recent papers that are\ncomplementary to the present work\n\\cite{ddkppg,dgpkp,kdppg,gd,gdp,dgp}. There it was established that\ntransient clusters of highly ``mobile'' particles form in the cooled\nliquid and that the average size of these clusters grows rapidly as\n$T_c$ is approached \\cite{dgpkp}. A pair distribution function for\nparticle displacements was defined, and this quantity exhibits a\ngrowing length scale upon cooling that reflects the clustering of\nmobile particles \\cite{gd,gdp,dgp}. The growing length scale is\ntime-dependent and attains a peak value at a time in the\n$\\alpha$-relaxation regime \\cite{gdp,dgp}.\n\nIt has also been shown in the present liquid that the particles within\nthe mobile particle clusters move in cooperatively rearranging\n``strings'' \\cite{ddkppg,strings}. Notably, the string-like\ncollective motion also begins well above $T_c$, but the strings\nthemselves exhibit no tendency to grow rapidly near $T_c$. Instead,\nthe length distribution of the strings is found to be nearly\nexponential, and a similarity of this distribution to that commonly\nobserved in equilibrium polymerization has been noted\n\\cite{ddkppg}. Donati et al. \\cite{ddkppg} have suggested that the\ngrowing barrier height to particle motion is proportional to the\naverage string length, which would imply that these string-like\nmotions have a basic significance for understanding transport in\ncooled liquids. Thus, string-like correlated particle motion appears\nto be an important mode of motion in our cooled liquid \\cite{ddkppg},\nand part of the motivation of the present work is to better\ncharacterize the development of this type of collective motion. We are\nalso interested in the extent to which particle displacement becomes\nintermittent in time in cooled liquids, since a growing intermittency\nin particle motion has been suggested to underlie the glass transition\n\\cite{odagaki,goodguy}.\n\nThe paper is outlined as follows. In Section II we review some details\nof the MD simulation data utilized in this work. Section III\nexamines $S(\\epsilon)$ over a broad range of scales, and dynamical\nregimes are defined where the motion is ballistic, transiently\nlocalized, persistent and diffusive. These regimes are examined\nin separate subsections.\nWe summarize our findings in Section IV.\n\n\\section{Simulation Details} \n\nThe system studied is a three-dimensional binary mixture of 8000\nLennard-Jones (LJ) particles in which the sizes of the particles and\nthe interaction parameters are chosen to prevent crystallization and\ndemixing \\cite{units}. The A-particles are about 10\\% larger than the\nB-particles (while the masses are the same), and the particles have a\nrelative concentration 80:20 of A-particles to B-particles. We report\nour results in dimensionless LJ units \\cite{units}. The system was\nequilibrated at different temperatures $T$ in the range (0.451,0.550).\nThe density $\\rho$ varied from $1.09$ particles per unit volume at the\nhighest temperature to $1.19$ at the lowest $T$ simulated. For\nreference, the mode-coupling temperature $T_c$ for this system is\n$T_c=0.435$ at $\\rho \\simeq 1.20$ \\cite{ddkppg,dgp,kob}, so all the\nsimulation data analyzed here correspond to temperatures well above\nthe glass transition. Configurational histories for up to $4 \\cdot\n10^6$ molecular dynamics time-steps following equilibration were\nstored for each run. Following equilibration in the NPT and NVT\nensembles, the trajectories were calculated in an NVE ensemble, and\nsnapshots containing the particle coordinates and velocities were\ntaken at logarithmic time intervals during the run. In this stage, the\nequations of motion were integrated using the velocity Verlet\nalgorithm with a step size of 0.0015 at the highest temperature, and\n0.003 at all other temperatures. Adopting argon values for the LJ\nparameters of the large particles implies an observation time of\n$\\approx 26$ ns for the coldest $T$. All data presented here are\ncalculated for the majority (A) particles only, except where otherwise\nnoted \\cite{sametc}. Further details of the simulation can be found in\nRef.~\\cite{dgpkp}.\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig2.eps}\\hfil}\n\\caption{Mean square displacement of the majority species ($A$\nparticles) vs. time for different $T$.}\n\\label{figmsd}\n\\end{figure}\n\nOver the temperature-density regime studied, the system exhibits the\nusual features of a fragile \\cite{angell} glass-forming liquid. For\nexample, the mean square displacement $\\langle r^2(t) \\rangle \\equiv\n\\bigl \\langle \\frac{1}{N_A} \\sum_{i=1}^{N_A} |{\\bf r}_i(t) - {\\bf\nr}_i(0)|^2 \\bigr \\rangle$ for the A particles is shown in\nFig.~\\ref{figmsd} for different $T$. Here ${\\bf r}_i(t)$ is the position\nof particle $i$ at time $t$, $N_A$ is the number of A particles\n(6400), and $\\langle \\cdots \\rangle$ denotes an ensemble average. 
For\neach state point, a ``plateau'' exists in both the mean square\ndisplacement and the self-part of the intermediate scattering function\n$F_s({\\bf q},t)$ as a function of $t$ (see \\cite{dgpkp}). The plateau\nin Fig.~\\ref{figmsd} separates an early time ``ballistic'' regime from\na late time diffusive regime. The plateau is interpreted as implying\n``caging'' of the particles, and this phenomenon is typical for liquids\nat low temperature or high density. Over the range of $T$ studied,\nthe $\\alpha$-relaxation time $\\tau_{\\alpha}$, describing the decay of\n$F_s({\\bf q},t)$ (at the value of $q$ corresponding to the first peak\nin the static structure factor), increases by 2.4 orders of magnitude,\nand follows a power law $\\tau_{\\alpha} \\sim (T-T_c)^{-\\gamma}$, with\n$T_c \\simeq 0.435$ and $\\gamma \\simeq 2.8$. The simulated\nliquid states analyzed here therefore exhibit relaxation behavior\ncharacteristic of a supercooled liquid. No long range structural\ncorrelations due to density or composition fluctuations are apparent\nin the simulation data \\cite{dgpkp}.\n\n\\section{Characteristic decorrelation time and space scales}\n\nIn Fig.~\\ref{figtaueps} we show the MFPT $\\tau(\\epsilon)$ plot for six\ndifferent runs corresponding to varying the temperature of the system\nfrom $T=0.550$ to $T=0.451$. In the inset we show the dynamic entropy\n$S(\\epsilon) \\equiv 1\/\\tau(\\epsilon)$. Note that the variation of\n$\\tau(\\epsilon)$ with $\\epsilon$ exhibits similar qualitative trends\nto the variation of $t$ with $\\langle r^2(t)\\rangle$ shown in\nFig.~\\ref{figmsd}.\n\nFor small $\\epsilon$, corresponding to the inertial regime,\n$S(\\epsilon)$ is insensitive to temperature. At intermediate\n$\\epsilon$ values we see a decrease of $S(\\epsilon)$ with decreasing\n$T$ and an increase in the magnitude of the slope in the log-log plot.\nA strong temperature dependence of $S(\\epsilon)$ is apparent at a\nscale on the order of one interparticle distance ($\\epsilon=1$). On\nthese larger scales, we show in a later subsection that $S(\\epsilon)$\nexhibits a power-law scaling with $\\epsilon$ and reduced temperature.\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig3.eps}\\hfil}\n\\caption{Mean first-passage time $\\tau$ of the majority species (A\nparticles) versus $\\epsilon$, for different $T$. Inset: Mean\nfirst-passage time dynamic entropy $S(\\epsilon)$. Compare the\n$S(\\epsilon)$ variation observed here for a cooled liquid with the\ndynamic entropy $h(\\epsilon)$ calculated for a one-dimensional model\nmap exhibiting diffusion at long times (see Fig.~25b of Gaspard and\nWang [8]).}\n\\label{figtaueps}\n\\end{figure}\n\nIt is apparent from Fig.~\\ref{figmsd} that the particle displacement\nin this cooled liquid is not Brownian over most of the simulation time\nscales, and it is conventional to quantify this deviation by a\n``non-Gaussian parameter'' $\\alpha(t)$ involving the moments $\\langle\nr^2 (t) \\rangle$ and $\\langle r^4 (t) \\rangle$ of the self part of the\nvan Hove correlation function $G_s(r,t) \\equiv \\Bigl \\langle\n\\frac{1}{N_A}\\sum_{i=1}^{N_A} \\delta \\bigl( {\\bf r} - ({\\bf r}_i(t) -\n{\\bf r}_i(0)) \\bigr) \\Bigr \\rangle.$ The parameter $\\alpha(t)$ is\ndefined as,\n\\begin{equation}\n\\alpha(t)\n\\equiv \\frac{3 \\langle r^4(t) \\rangle}{5\\langle r^2(t) \\rangle^2} -1,\n\\end{equation}\nand vanishes for Brownian motion. A short sketch of the computation of\n$\\alpha(t)$ from stored trajectories is given below.
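\nThe sketch makes the same assumptions about the stored trajectory array as the first-passage sketch above; $\\alpha(0)$ is undefined because $\\langle r^2(0) \\rangle = 0$.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef non_gaussian_parameter(traj):\n    # traj: (T, N, 3) stored particle positions, traj[0] at t = 0\n    disp = traj - traj[0]\n    r2 = (disp ** 2).sum(axis=-1)  # (T, N) squared displacements\n    m2 = r2.mean(axis=1)           # <r^2(t)>\n    m4 = (r2 ** 2).mean(axis=1)    # <r^4(t)>\n    with np.errstate(divide='ignore', invalid='ignore'):\n        return 3.0 * m4 / (5.0 * m2 ** 2) - 1.0\n\\end{verbatim}\n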
$\\alpha(t)$ is shown in\nFig.~\\ref{figmfpt} together with $\\tau(\\epsilon)$ for the coldest run\n($T=0.451$); note that for $\\alpha(t)$, time $t$ is plotted on the\nordinate axis. This comparison allows us to identify four regimes: an\ninertial regime (Regime I) where the non-Gaussian parameter is small;\na ``localization'' regime characterized by a large value for the slope\nin the $\\tau(\\epsilon)$ log-log plot and by a growing $\\alpha(t)$\n(Regime II); a regime of particle motion that is persistent relative\nto Brownian motion, and where $\\alpha(t)$ decreases (Regime III); and\na fourth regime where the non-Gaussian parameter has decayed back to\nvery small values so that the particle motion is nearly Brownian\n(Regime IV). The four regimes are examined in detail in the\nfollowing subsections.\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig4.eps}\\hfil}\n\\caption{Classification of dynamic regimes. MFPT $\\tau(\\epsilon)$\n(solid curve) plotted versus $\\epsilon$, and the non-Gaussian\nparameter $\\alpha(t)$ (dashed curve) plotted versus $t$, for $T=0.451$.\nInset: Comparison between $\\tau(\\epsilon)$ (solid curve) and $\\langle\nr^2(t)\\rangle$ (dot-dashed curve) on a linear scale.}\n\\label{figmfpt}\n\\end{figure}\n\nWe emphasize that while some parallelism exists between the\nmean square displacement and the first-passage time, the inset of\nFig.~\\ref{figmfpt} comparing these quantities shows that they are not\nequivalent. However, if the distribution of particle displacements\n$G_s(r,t)$ were always exactly Gaussian (i.e., $\\alpha(t)=0$ for every\n$t$), then a simple inverse function relation should hold between\nthese quantities. $G_s(r,t)$ obeys a ``scaling relation'' if we can\nrescale $G_s(r,t)$ as,\n\\begin{equation} \\label{diffscaling}\nG_s(r,t)=\\frac{1}{t^{\\nu}}f \\left( \\frac{r}{t^{\\nu}} \\right),\n\\end{equation}\nwhere $f$ is some function. Simple dimensional analysis based on\n(\\ref{diffscaling}) implies,\n\\begin{equation}\n\\langle r^2 (t) \\rangle \\propto t^{2\\nu},\n\\end{equation}\nand the scaling of the first-passage time with $\\epsilon$,\n\\begin{equation}\n\\tau(\\epsilon) \\propto \\epsilon^{1\/\\nu}.\n\\end{equation}\n\nIn practice, these idealized scaling relations are restricted to\ncertain time and space scales. Scaling with $\\nu=1$ is observed in the\nshort time inertial regime. This result can be inferred from the\ngeneral relation between the second moment of $G_s(r,t)$ and the\nparticle velocity,\n\\begin{equation} \\label{phix2}\n\\langle {\\dot {\\bf r}} (0) \\cdot {\\dot {\\bf r}}(t) \\rangle\n= \\frac{1}{2}\\frac{d^2 \\langle r^2 (t) \\rangle}{dt^2},\n\\end{equation}\nand the constancy of the velocity autocorrelation function at short\ntimes \\cite{lee,equipartition},\n\\begin{equation}\n\\langle {\\dot {\\bf r}} (0) \\cdot {\\dot {\\bf r}}(t) \\rangle\n\\to \\langle {\\dot {\\bf r}} (0) \\cdot {\\dot {\\bf r}}(0) \\rangle\n\\equiv v_0^2 \\ \\ \\ \\ \\ t \\to 0,\n\\end{equation}\nwhere $v_0$ is the root-mean-square particle velocity. Integrating\nEq.~\\ref{phix2} over a short time interval gives $\\langle\nr^2(t) \\rangle \\sim t^2$ or ``ballistic-like'' motion. This\nscaling is simply a consequence of thermal equilibrium, and\nshould not be construed as necessarily implying the absence of\ninterparticle interactions at short timescales \\cite{equipartition}.
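\nThe first-passage times underlying $\\tau(\\epsilon)$ and $S(\\epsilon)$\ncan likewise be estimated directly from uniformly sampled configurations.\nA minimal sketch under the same hypothetical data layout is given here\nfor reference; it records, for each particle, the first sampled time at\nwhich the displacement from the initial position exceeds $\\epsilon$, and\ntherefore slightly overestimates the true exit time when the stored\nframes are widely spaced.\n\\begin{verbatim}\nimport numpy as np\n\ndef mean_first_passage_time(pos, eps, dt):\n    # pos: hypothetical unwrapped positions, shape (n_frames, N_A, 3),\n    # sampled every dt time units; eps: first-passage sphere radius.\n    disp = np.sqrt(((pos - pos[0]) ** 2).sum(axis=2))  # |r_i(t) - r_i(0)|\n    exited = disp > eps            # True once a particle has left its sphere\n    first = exited.argmax(axis=0)  # index of the first frame outside the sphere\n    valid = exited.any(axis=0)     # drop particles that never exit during the run\n    return dt * first[valid].mean()\n\\end{verbatim}\n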
A\nGaussian form for the van Hove correlation function in this short time\nregime is ensured by the Maxwell-Boltzmann distribution for the\nparticle velocities. In the opposite extreme of very long times, the\ncentral limit theorem governing the sum of independent particle\ndisplacements implies that the particle displacement distribution is\nGaussian, and that $\\nu=1\/2$. Transient scaling regimes can be\nobserved at intermediate time scales, however. We refer to particle\ndisplacements as ``persistent'' relative to Brownian motion if $\\nu >\n1\/2$, or ``localized'' relative to Brownian motion if $\\nu < 1\/2$.\n\n\\subsection{Inertial regime}\nIn the limit of very small $\\epsilon$ we probe the fast microscopic\ndynamics associated with the decorrelation of the particle momenta.\nIt is difficult to probe this decorrelation directly using the\nfirst-passage time approximation to the dynamic entropy.\nFig.~\\ref{figtaueps} indicates that our approximation for the dynamic\nentropy appears to diverge for $\\epsilon \\rightarrow 0$, so it\nmust break down in this limit. Gaspard and Wang have previously\npointed out this shortcoming of the first-passage time approximation\n\\cite{GW}, so the breakdown is to be expected. An estimate of the expected plateau in\n$S(\\epsilon)$ corresponding to the Kolmogorov-Sinai entropy can be\nobtained by determining a cut-off time of the first-passage time\ndistributions in the fast dynamics regime.\n\nThe dynamics in this regime is examined by setting the radius of\nthe first-passage sphere ($\\epsilon$-sphere) about the center of each\nparticle to be small enough that a collision does not usually occur\nbefore the particle leaves the $\\epsilon$-sphere (see Fig.~1). By\nfocusing on the particle first-passage time in this regime we identify\na time and space scale over which particle velocities begin\ndecorrelating. The first-passage time in this regime is insensitive to\nthe type (A or B) of particle since the two species have the same mass. This is\napparent in Fig.~\\ref{figAeB} where the first-passage time\ndistributions for both the $A$ and $B$ particles are shown for the\ncoldest run and $\\epsilon=0.1$.\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig5.eps}\\hfil}\n\\caption{Probability of first-passage time $P_{\\epsilon}(t)$ for the A\nand B particles at $T=0.451$ and $\\epsilon=0.1$. Inset: Same data\non a semi-log scale.}\n\\label{figAeB}\n\\end{figure}\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig6.eps}\\hfil}\n\\caption{$\\tau$ versus $\\epsilon$ in the small $\\epsilon$ regime,\n$0.001 < \\epsilon < 0.15$, for $T=0.468$. Inset: log-log plot of the\nsame data. Note that the asymptotic linear scaling\n$\\tau(\\epsilon)=1.14\\epsilon$ (solid lines) breaks down around a value\n$\\epsilon_v \\approx 0.05$, corresponding to $\\tau_v \\approx 0.6$.\n$\\epsilon_v$ and $\\tau_v$ are approximately independent of\ntemperature (see also Fig.~\\ref{figtaueps}).}\n\\label{smalleps}\n\\end{figure}\n\nThe idealization of ballistic particle motion implies that the time\n$\\tau(\\epsilon)$ it takes for the particle to exit the sphere of\nradius $\\epsilon$ equals $\\epsilon\/v_0$. Fig.~\\ref{smalleps} shows an\nexpanded plot of $\\tau$ versus $\\epsilon$ in the small $\\epsilon$\nregime where this linear dependence of $\\tau$ on $\\epsilon$ is\napparent.
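\nThe slope of this linear regime, and hence $v_0$, can be extracted by a\nleast-squares fit through the origin restricted to $\\epsilon < \\epsilon_v$;\na sketch follows (the arrays eps and tau holding the measured\n$\\tau(\\epsilon)$ data are hypothetical).\n\\begin{verbatim}\nimport numpy as np\n\ndef ballistic_speed(eps, tau, eps_max=0.05):\n    # Fit tau = eps \/ v0 through the origin for eps below the\n    # velocity decorrelation scale (eps_v ~ 0.05 in LJ units).\n    m = eps < eps_max\n    slope = (eps[m] * tau[m]).sum() \/ (eps[m] ** 2).sum()\n    return 1.0 \/ slope   # v0, the inverse of the fitted slope\n\\end{verbatim}\n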
A non-linear dependence of $\\tau$ on $\\epsilon$ develops as\nthe particle velocities decorrelate at a ``velocity decorrelation\nscale'' $\\epsilon_v \\approx 0.05$. This distance corresponds to the\ntime $\\tau_v \\approx 0.6$ (on the order of $10^{-13}$~s in argon\nunits). We find that $\\epsilon_v$ and $\\tau_v$ are approximately\nindependent of temperature in the present simulation (see also\nFig.~\\ref{figtaueps}).\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig7.eps}\\hfil}\n\\caption{First-passage time distributions $P_{\\epsilon}(t)$ in the\ninertial regime. Main figure contains six different temperatures at\n$\\epsilon=0.1$ plotted on a log-log scale. Solid line indicates Eq.~\\ref{wsmalle}, where\nthe constant of proportionality has been adjusted to best fit the\ndata. The long-dashed line indicates the cut-off distribution function\nEq.~\\ref{rsmalle} with no free parameters. Inset: $P_{\\epsilon}(t)$ at $T=\n0.468$ for $\\epsilon=0.03$. This value of $\\epsilon$ is less than the\nvelocity decorrelation scale $\\epsilon_v$ indicated in\nFig.~\\ref{smalleps}. Eqs.~\\ref{wsmalle} and \\ref{rsmalle} are also shown in\ncomparison with the simulation data. Note the absence of the long\ntail in the inset distribution.}\n\\label{cutoff}\n\\end{figure}\n\nWe obtain further insight into the decorrelation of particle\nvelocities in the inertial regime by examining the distribution of\nfirst-passage times, $P_{\\epsilon}(t)$, as a function of\n$\\epsilon$. Fig.~\\ref{cutoff} shows $P_{\\epsilon}(t)$ for $\\epsilon=0.1$ and\n$\\epsilon=0.03$ for $T=0.468$. We obtain a good approximation to\n$P_{\\epsilon}(t)$ in the inertial regime through a Gaussian\napproximation for $G_s(r,t)$ \\cite{mcquarrie} in conjunction with the\nfirst-passage distribution for Brownian paths \\cite{ciesielski}. The\nfirst-passage time distribution $P_{\\epsilon}(t)$ scales as\n\\begin{equation} \\label{wsmalle}\nP_{\\epsilon}(t) \\sim\n\\exp\\left( -\\frac{\\epsilon^2}{2\\langle R^2 \\rangle}\\right)\n\\end{equation}\nat short times and decays exponentially at long times,\n$P_{\\epsilon}(t) \\sim \\exp \\left[ -t\/\\tau(\\epsilon)\\right]$\n\\cite{ciesielski}. We then introduce the approximation\n\\begin{equation}\\label{rsmalle}\nP_{\\epsilon}(t) \\simeq \\frac{A}{t^2} \\exp \\left[ -\\frac{1}{2}\n\\left( \\frac{\\epsilon}{v_0 t} \\right)^2\n-\\frac{t}{\\tau(\\epsilon)} \\right],\n\\end{equation}\nwhere $A$ is a normalization constant. $\\langle R^2 \\rangle^{1\/2}$ in\nEq.~\\ref{wsmalle} has been replaced by $v_0t$ based on the assumption\nthat the particle displacements are nearly ``ballistic'' in the\ninertial regime. Note that the limiting expression for\n$P_{\\epsilon}(t)$ given by Eq.~\\ref{wsmalle} can also be deduced by\nassuming a Maxwell velocity distribution (one-dimensional due to the\nnear rectilinear motion) with the velocity replaced by $\\epsilon \/ t$.\nFig.~\\ref{cutoff} shows that Eq.~\\ref{wsmalle} (solid line) does not provide an\naccurate fit to the data using $v_0 = 1\/1.14$ from the linear slope in\nFig.~\\ref{smalleps}, while the approximate expression\nEq.~\\ref{rsmalle} fits well for $\\epsilon$ less than the ``velocity\ndecorrelation scale'' $\\epsilon_v \\approx 0.05$. For $\\epsilon >\n\\epsilon_v$, $P_{\\epsilon}(t)$ shows evidence of developing a\npower-law ``tail'' at long times, as seen in the main part of Fig.~\\ref{cutoff}.\nThis tendency becomes more developed at larger $\\epsilon$, as\ndiscussed in the next subsection.
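\nIn practice, Eq.~\\ref{rsmalle} can be evaluated and normalized\nnumerically for comparison with a measured histogram; a sketch follows,\nwith the normalization constant $A$ fixed by quadrature (the particular\nparameter values shown are illustrative, not fitted).\n\\begin{verbatim}\nimport numpy as np\n\ndef p_eps_model(t, eps, v0, tau_eps):\n    # Unnormalized form of the cut-off distribution, Eq. (rsmalle).\n    return np.exp(-0.5 * (eps \/ (v0 * t)) ** 2 - t \/ tau_eps) \/ t ** 2\n\nt = np.linspace(1e-3, 5.0, 5000)\n# Illustrative parameters; in practice tau_eps is the measured tau(eps).\np = p_eps_model(t, eps=0.03, v0=1.0 \/ 1.14, tau_eps=0.034)\np \/= np.trapz(p, t)  # fix the normalization constant A numerically\n\\end{verbatim}\n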
The inset of Fig.~\\ref{cutoff} shows\nfirst-passage time data for $\\epsilon < \\epsilon_v$ where the tail is\nnearly absent and the cut-off is evident. At this point, we note that\nin liquids $h_{KS}$ characteristically scales as the inverse of the\naverage interparticle collision time and thus has the interpretation\nof a microscopic ``collision rate'' \\cite{krylov,GW}. The time\n$\\tau_v$ can be considered as an average particle collision time, so\nthat an inverse relation between $h_{KS}$ and $\\tau_v$ is expected. We\ndo not consider the relation between $\\tau_v$ and $h_{KS}$ further here, since\nsimulations over a broader temperature range and the direct\ncalculation of the Kolmogorov-Sinai entropy through Lyapunov exponent\nspectra \\cite{dzugutov,sri} are required for such an investigation.\n\n\\subsection{Particle localization regime}\n\nThe tendency of particle motion to become increasingly localized, as\nemphasized by the mode-coupling theory, is a conspicuous feature of\nexperimental and simulation data on supercooled liquids. The\n``plateau'' in the mean square displacement log-log plots (see Fig.~\\ref{figmsd})\nindicates transient particle localization or ``caging'', and the\npersistence of this plateau increases with decreasing $T$\n\\cite{doliwa}. Next, we utilize the dynamic entropy concept to\nquantify this particle localization.\n\nAn increase in the slope of $\\log{\\tau(\\epsilon)}$ versus\n$\\log{\\epsilon}$ in Fig.~\\ref{figtaueps} provides evidence for\nlocalization. A numerical differentiation of the data in\nFig.~\\ref{figtaueps} is shown in Fig.~\\ref{logder}a, where\n$\\Delta(\\epsilon)$ denotes the logarithmic derivative\n$\\Delta(\\epsilon) \\equiv d \\log \\tau\/d \\log \\epsilon$. The scaling\nbehavior $\\tau(\\epsilon) \\propto \\epsilon^{1\/\\nu}$ implies that $\\Delta(\\epsilon)$ corresponds to the\nfractal dimension $1\/\\nu$ of the particle trajectories. Fig.~\\ref{logder}a shows\nthat for all $T$, $\\Delta(\\epsilon) \\to 1$ for small $\\epsilon$, and\n$\\Delta(\\epsilon) \\to 1\/\\nu$ for large $\\epsilon$. At intermediate\nvalues of $\\epsilon$, we observe that $\\Delta(\\epsilon)$ develops a\nmaximum at $\\epsilon_c$, corresponding to the inflection point in\nFig.~\\ref{figtaueps}. This length scale defines a distance that is\ndifficult for the particle to exceed, and thus we define $\\epsilon_c$\nas the ``cage'' size. The inset in Fig.~\\ref{logder}a shows that\n$\\epsilon_c$ decreases as $T$ is lowered, so that increased particle\nconfinement occurs with cooling. Independent evidence indicating the\nsignificance of this characteristic scale is discussed later in this\nsubsection.\n\nWe denote the value of $\\Delta(\\epsilon)$ at $\\epsilon_c$ by the\n``localization parameter'' $\\Lambda(T) \\equiv \\max \\Delta(\\epsilon) =\n\\Delta(\\epsilon_c)$, and its $T$-dependence is shown in\nFig.~\\ref{logder}b. The value of $\\Lambda(T)$ increases with cooling,\nconsistent with increasing particle localization ($\\nu < 1\/2$). The\nrelatively noisy data in Fig.~\\ref{logder}b can be fitted by a power\nlaw, $\\Lambda(T) = 2+0.15(T-T_c)^{-1.03}$ with $T_c=0.435$.\n\nOur identification of the cage size $\\epsilon_c$ from the maximum\nvalue of $\\Delta(\\epsilon)$ is further supported by the examination of\nthe first-passage time distribution $P_{\\epsilon}(t)$ for $\\epsilon$\nnear $\\epsilon_c$. We find that $P_{\\epsilon}(t)$ develops a long time\npower-law tail in this intermediate regime which is symptomatic of the\ndevelopment of intermittency in particle motion and particle\nlocalization \\cite{odagaki}.
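\nThe quantities entering this analysis are simple to obtain from the\nmeasured curves. The sketch below (hypothetical data arrays; the\nfinite-difference and fitting choices are our illustrative ones)\ncomputes $\\Delta(\\epsilon)$ by central differences in log-log\ncoordinates, locates the cage size $\\epsilon_c$ at its maximum, and\nestimates an apparent tail exponent of $P_{\\epsilon}(t)$ from a\nstraight-line fit on the log-log plot.\n\\begin{verbatim}\nimport numpy as np\n\ndef log_derivative(eps, tau):\n    # Delta(eps) = d log(tau) \/ d log(eps), by central differences.\n    return np.gradient(np.log(tau), np.log(eps))\n\ndef cage_size(eps, tau):\n    # eps_c: location of the maximum of Delta(eps).\n    return eps[np.argmax(log_derivative(eps, tau))]\n\ndef tail_exponent(t, p, t_min):\n    # Apparent power-law exponent of the P_eps(t) tail for t > t_min.\n    m = (t > t_min) & (p > 0)\n    slope, _ = np.polyfit(np.log(t[m]), np.log(p[m]), 1)\n    return -slope\n\\end{verbatim}\n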
In Fig.~\\ref{figdcsj} we show, for\n$T=0.451$, $P_{\\epsilon}(t)$ at several values of $\\epsilon$ near\n$\\epsilon_c =0.21 \\pm 0.02$. The apparent power-law exponent of the\n$P_{\\epsilon}(t)$ tail varies with $\\epsilon$ and takes a value near 2\nfor $\\epsilon \\approx \\epsilon_c$.\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig8.eps}\\hfil}\n\\caption{(a) $\\Delta(\\epsilon) \\equiv d \\log \\tau(\\epsilon)\/d \\log\n\\epsilon$ for (from top to bottom) $T=0.451, 0.457, 0.468, 0.480,\n0.505$, and $0.550$. Inset: ``Cage'' size $\\epsilon_c$ versus $T$. (b)\n$\\Lambda \\equiv \\max \\Delta(\\epsilon)$ plotted versus $T$. The\ndashed line is a power law fit to the data as indicated in the text.}\n\\label{logder}\n\\end{figure}\n\nThis power-law tail behavior in $P_{\\epsilon_c}(t)$ is shared by the\nfour coldest temperatures, as shown in Fig.~\\ref{figepsc}, where the asymptotic\nbehavior of the distributions is seen to be numerically very\nsimilar. The difference in their first moments\n[i.e., $\\tau(\\epsilon_c)$ in Fig.~\\ref{figtaueps}] reflects both the differences\nbetween the distributions in Fig.~\\ref{figepsc} at short times, and the\nasymptotic cut-off in the distributions that is difficult to resolve\nnumerically.\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig9.eps}\\hfil}\n\\caption{First-passage time distributions for $T=0.451$, at different\nvalues of $\\epsilon$ near $\\epsilon_c = 0.21 \\pm 0.02$. The dashed line\nindicates an inverse power law $t^{-2}$ for comparison.}\n\\label{figdcsj}\n\\end{figure}\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig10.eps}\\hfil}\n\\caption{First-passage time distributions at the cage scale,\n$\\epsilon=\\epsilon_c$, for four different $T$.}\n\\label{figepsc}\n\\end{figure}\n\nOdagaki \\cite{odagaki} has suggested that the second and first moments\nof the first-passage time distribution at the scale of one\ninterparticle distance diverge at $T_c$ and at the glass transition\ntemperature $T_g$, respectively, and a recent paper by Hiwatari and\nMuranaka \\cite{hiwatari} supports this glass transition scenario. A\ntransition to intermittent particle motion at the glass transition has\nalso been suggested by Douglas and Hubbard \\cite{goodguy}. Our\nobservations are consistent with the growth of intermittency of\nparticle motion as $T_c$ is approached, but we cannot confirm\ntheoretical predictions of a dynamical transition in the degree of\nintermittency until lower temperatures are examined. An accurate test\nof these predictions will require carefully equilibrated data below\n$T_c$, beyond the temperature range of the simulations analyzed here.\nWe point out that, in the present system, the scale at which this\nintermittency occurs is substantially {\\it smaller} than one\ninterparticle spacing, and instead pertains to motion at the scale of\nthe cage size, $\\epsilon_c$.\n\n\\subsection{Regime of persistent particle motion}\nPrevious work has shown that particle motion becomes increasingly\ncollective in this supercooled liquid and that an important mode of\nmotion at the scale of the interparticle distance involves the\nstring-like collective motion of particles.
A visualization of this\nprocess \\cite{movie} suggested to us that the motion\nbecomes increasingly ``coherent'' or ``jump-like'' at lower\ntemperatures, and this tendency towards ``coherent jumping''\nhas been noticed in a number of other physical systems (melting\nof hard disks \\cite{alder}, hexatic liquids \\cite{murray}, and ordering\nin plasmas \\cite{choquard}). In this subsection we utilize $S(\\epsilon)$\nto further quantify this effect.\n\nRegimes I and II were defined by characteristic spatial scales at\nwhich changes occur in the first-passage time distributions. However,\nthe long run times of the simulations analyzed here necessitated the\nstoring of configurations on a logarithmic, rather than linear, time\nscale, for all but the coldest simulation \\cite{dgpkp}. As a\nconsequence, first-passage time distributions cannot be obtained over\na continuous range of $\\epsilon$ for large $\\epsilon$. In the absence\nof this information, we roughly identify Regime III by the tendency\nfor the non-Gaussian parameter $\\alpha(t)$ to decrease (see\nFig.~\\ref{figmfpt}). In Fig.~\\ref{figppm} we show the apparent fractal dimension\n$\\Delta(\\epsilon)$ in Regime III for the highest and lowest\ntemperatures. (The results for all $T$ are shown over an extended\nscale in Fig.~\\ref{logder}.) Notice that persistent particle motion\n($\\Delta(\\epsilon)<2$) develops for $\\epsilon > 0.6$ in Fig.~\\ref{logder}\n(although $\\Delta$ remains near 2 at the highest temperatures), and\nconsequently we show $\\Delta(\\epsilon)$ for the range $0.7 < \\epsilon\n< 1.0$ in Fig.~\\ref{figppm}. The value of $\\epsilon$ at which\n$\\Delta(\\epsilon)=2$ provides a more precise estimate for the\nbeginning of Regime III. Although the data is noisy, we find that\n$\\Delta(\\epsilon) \\approx 2$ within numerical error, and is nearly\nindependent of $\\epsilon$, for high $T$. Particle displacement at high\ntemperature is then reasonably approximated by Brownian motion on\nthese spatial scales. However, a substantially smaller average value\nof $\\Delta(\\epsilon) \\approx 1.7$ (dashed line in Fig.~\\ref{figppm}) is found\nfor the lowest $T$. Thus, we find that the particle motion becomes\nincreasingly persistent on cooling.\n\nWhy is persistent motion not observed in the mean square displacement?\nThe tendency for persistent particle motion is not apparent in\n$\\langle r^2(t)\\rangle$ shown in Fig.~\\ref{figmsd} or in the inset of\nFig.~\\ref{figmfpt} because, when averaging over the squared\ndisplacements, the contribution of the few particles that are at any\ngiven time moving persistently is ``washed out''. However, these\nparticles give a large contribution to the mean first-passage time, so\nthat this quantity is a sensitive indicator of persistent\nparticle motion.\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig11.eps}\\hfil}\n\\caption{Apparent fractal dimension $\\Delta(\\epsilon)$ of\nparticle displacements in Regime III. From Fig.~\\ref{logder}, the highest and lowest\nsimulation temperatures, $T=0.550$ and $T=0.451$, are shown for $0.7 <\n\\epsilon < 1.0$.}\n\\label{figppm}\n\\end{figure}\n\nWe can obtain insight into the emergence of persistent particle motion\nin our cooled liquid from idealized models of Brownian motion subject\nto potential fluctuations. If the potential fluctuations are\nquenched, there is a tendency towards particle localization\n\\cite{grassberger}, but the occurrence of fluctuations in the\npotential in both space and time can lead to persistent particle\nmotion.
In particular, if the fluctuations are delta-correlated in\nboth space and time, then the exponent $\\Delta$ equals $3\/2$\n\\cite{kardar}. The effects observed in Ref.~\\cite{kardar} are\nqualitatively consistent with our understanding of the origin of\ncorrelated motion. At short times, the existence of relatively\nimmobile particles leads to a randomly fluctuating field felt by those\nparticles free to move at a given point in time. This spatially\nfluctuating field is responsible for the particle caging or\nlocalization on timescales short compared to the decorrelation time\n$\\tau_{\\alpha}$ of the ``structural fluctuations'' (associated with\nthe relatively immobile particles). At longer times, the formerly\nimmobile particles become mobile and the potential field fluctuates in\ntime, leading to an enhancement in the particle displacement. At\nstill longer times, thermal fluctuations restore equilibrium and\nparticle displacement ultimately becomes diffusive.\n\nFinally, we point out that particle motion can be persistent even in\nthe absence of a secondary (so-called ``hopping'') peak in $G_s(r,t)$.\nInstead, persistent particle motion contributes to a long tail in\n$G_s(r,t)$ in the temperature range of the present system\n\\cite{dgpkp,kdppg}. This long tail sharpens up and becomes a\nsecondary peak at lower $T$ \\cite{secondpeak}, indicating the\nincreased contribution of collective particle motion to transport\nbelow $T_c$.\n\n\\subsection{Large scale particle displacement}\n\nParticle displacement in a liquid at large scales is described by\nBrownian motion, so that $S(\\epsilon)$ should scale asymptotically as\n$\\epsilon^{-2}$ for large $\\epsilon$, regardless of temperature. It is\napparent in Fig.~\\ref{figmsd} that the data in the asymptotic\ndiffusive regime is limited, especially at lower\ntemperatures. Previous work has shown that there is a tendency for\n``mobile'' particles, which dominate transport in cooled liquids near\n$T_c$, to move an interparticle distance during the time in which they\nare mobile \\cite{ddkppg,dgpkp}. This happens because the particles\ntend to move between local minima in the potential surface describing\nthe interparticle interaction \\cite{minima,tbs}. This feature is\nespecially apparent in the string-like particle motion noted above,\nwhere it has been observed that the strings ``disintegrate'' once an\ninterparticle displacement has been achieved \\cite{ddkppg}. Thus, one\ninterparticle distance is taken to be the minimal scale of the large\nscale particle displacement regime (Regime IV).\n\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig12.ps}\\hfil}\n\\caption{Dynamic entropy at the scale of one interparticle separation,\n$S(\\epsilon=1)$, versus $T-T_c$, with $T_c=0.435$. Diamonds refer to\nthe simulation data for $S(\\epsilon = 1)$, and the dashed lines refer\nto the power law, $S(T) \\sim (T-T_c)^{2.5}$. We also observe this\nscaling for $S(\\epsilon=0.4)$ (circles). The inverse of the structural\nrelaxation time, $1\/\\tau_{\\alpha}$ (triangles), follows a power law\n$1\/\\tau_{\\alpha} \\sim (T-T_c)^{2.8}$. Inset: Schematic indication of\nexpected temperature variation of dynamic entropy $h_{KS}$ at small\nscales.
Note that the extrapolated ergodic-nonergodic transition in\nthe $\\epsilon \\to 0$ limit should occur at a temperature $T_o < T_c$.}\n\\label{figalpha1}\n\\end{figure}\n\nIn the previous discussion, we have established that $S(\\epsilon)$ is\ninsensitive to temperature for small $\\epsilon$ over the temperature\nrange investigated. This insensitivity accords with the expected\nvariation of dynamic entropy $h_{KS}$. At higher $T$ we expect the\ndynamic entropy to saturate, while a decrease should accompany the\nmore restricted particle motion at lower $T$ \\cite{posch}. A variation\nsimilar to that shown in the inset of Fig.~\\ref{figalpha1} has been established for\nthe ordering of the XY model \\cite{butera}. The investigation of the\nanticipated ergodic-nonergodic transition and its possible relation to\na vanishing of $h_{KS}$ requires the calculation of the full Lyapunov\nspectrum, which is currently prohibitive for a system of the present\nsize. However, we can investigate $S(\\epsilon)$ at a larger scale on\nthe order of an interparticle spacing. This should be interesting\nbecause of the strong $T$-dependence in $S(\\epsilon)$ at this length\nscale noted above (see Fig.~\\ref{figtaueps}).\n\nIn Fig.~\\ref{figalpha1} we examine the $T$-dependence of\n$S(T;\\epsilon)$ at the scale of one interparticle separation\n$\\epsilon=1$. We observe that $S(T;\\epsilon=1)$ obeys a power law to a\ngood approximation over the temperature range investigated, and that\n$S(T;\\epsilon=1)$ seems to extrapolate to zero at the mode-coupling\ntemperature $T_c=0.435$. As shown in Fig.~\\ref{figalpha1}, a reasonable fit to the\ndata is obtained with the relation\n\\begin{equation} \\label{spowerlaw}\nS(\\epsilon=1) \\sim (T-T_c)^{2.5}, \\,\\,\\, T_c=0.435.\n\\end{equation}\nThe scaling of $S(T;\\epsilon=1)$ is compared in\nFig.~\\ref{figalpha1} to the structural relaxation time $\\tau_{\\alpha}$\ndescribing the decay of the intermediate scattering function.\nAlthough the scaling of the two quantities is qualitatively similar,\nthe best-fit exponent for $\\tau_{\\alpha}$, $-2.8$, is somewhat larger in\nmagnitude.\n\nThe power law scaling of $S(\\epsilon=1)$ with temperature is not\nobvious since, if the particle displacement were exactly described by\nBrownian motion, then $\\tau(\\epsilon)$ would scale in inverse\nproportion to the diffusion coefficient. A determination of the\ndiffusion coefficient for the A particles is obtained by a simple\nleast-squares fit (not shown) of the long time data to the function\n$\\langle r^2 (t) \\rangle = A + 6Dt$. This fitting gives\n\\begin{equation} \\label{dpowerlaw}\nD \\sim (T-T_c)^{2.1}.\n\\end{equation}\nA more refined estimate by Kob and Andersen \\cite{kob} on a smaller\n($1000$-particle) system\ngave an exponent $2.0$ for the A particles.\nThe diffusion data thus scales with a fractional power of the structural\nrelaxation time,\n\\begin{equation} \\label{decoupling}\nD \\sim \\tau_{\\alpha}^{-(2.1\/2.8)} \\simeq \\tau_{\\alpha}^{-0.75},\n\\end{equation}\nover the temperature range investigated. Since evidence supports a\ncommon temperature scaling of $\\tau_{\\alpha}$ and the fluid viscosity\n$\\eta$ \\cite{onukiprl}, the observation implied by\nEq.~(\\ref{decoupling}) is consistent with the breakdown of the\nStokes-Einstein relation in real and simulated supercooled liquids\n\\cite{decoupling,footdec}. We therefore conclude that $S(\\epsilon=1)$\nscales neither exactly like the inverse structural relaxation time\n$1\/\\tau_{\\alpha}$ nor the diffusion coefficient $D$.
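\nThe diffusion fit quoted above is a simple least-squares fit, and an\nanalogous straight-line fit on log-log axes can be used for the power\nlaws. A minimal sketch of both follows (hypothetical data arrays; $T_c$\nis held fixed at the mode-coupling value $0.435$, and the log-log\nprocedure for the exponent is our illustrative choice, not necessarily\nthe one used to produce the figures).\n\\begin{verbatim}\nimport numpy as np\n\nTC = 0.435\n\ndef diffusion_coefficient(t, msd, t_min):\n    # Fit <r^2(t)> = A + 6 D t over the long-time diffusive regime.\n    m = t > t_min\n    slope, _ = np.polyfit(t[m], msd[m], 1)\n    return slope \/ 6.0\n\ndef power_law_exponent(T, S):\n    # Exponent of S ~ (T - TC)**gamma from a log-log linear fit.\n    gamma, _ = np.polyfit(np.log(T - TC), np.log(S), 1)\n    return gamma\n\\end{verbatim}\n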
Other\ncharacteristic times exist for this liquid. It has recently been\nreported for the same simulations investigated here that the time\nscale on which particle displacements are most correlated scales as a\npower law with $(T-0.435)$, with an exponent $\\gamma = 2.3 \\pm 0.2$\n\\cite{dgp}. This exponent notably agrees within numerical error with\nthe exponent in Eq.~\\ref{spowerlaw}. We further note that the time\n$t^*$ at which $\\alpha(t)$ is a maximum (see Fig.~\\ref{figmfpt}) appears to\ndiverge at $T_c$ with an exponent $1.7$ \\cite{dgpkp}.\n\nSince the temperature dependence of $S(\\epsilon)$ is largest in\nFig.~\\ref{figtaueps} for $\\epsilon=1$ and very small for $\\epsilon\n\\rightarrow 0$, it is natural to consider the temperature dependence\nof $S(\\epsilon)$ at a series of fixed $\\epsilon$ values to determine how this\ncrossover occurs. In Fig.~\\ref{figvareps} we see that the scaling $S(T)\n\\propto (T-T_c)^{2.5}$ holds for $\\epsilon$ greater than the cage size\n$\\epsilon_c$ and sufficiently small reduced temperature. We also find\nthat, for any temperature, the scaling fails for $\\epsilon <\n\\epsilon_c$. This provides another method for determining the\ncage size.\n\\begin{figure}\n\\hbox to\\hsize{\\epsfxsize=1.0\\hsize\\hfil\\epsfbox{adg-fig13.eps}\\hfil}\n\\caption{Mean first-passage time $\\tau(\\epsilon)$ versus $T-T_c$,\ncalculated at fixed values of $\\epsilon$. Dotted lines denote the power law\n$S(T) \\sim (T-T_c)^{2.5}$, i.e., $\\tau \\sim (T-T_c)^{-2.5}$.}\n\\label{figvareps}\n\\end{figure}\n\n\\section{Conclusion}\nWe have studied the length-scale dependence of the dynamic entropy\n$S(\\epsilon)$ in a molecular dynamics simulation of a model\nsupercooled binary Lennard-Jones liquid. The simulations were\nperformed above both the glass transition temperature and the\nmode-coupling critical temperature and correspond to equilibrated\nliquid states \\cite{dgpkp}.\n\n$S(\\epsilon)$ as estimated by the MFPT provides a tool for identifying\ncharacteristic length and time scales of the dynamics of liquids and\na means of quantifying the degree of correlated motion\noccurring in supercooled liquids. At very small $\\epsilon$ we observe\na decorrelation of particle velocity and an $S(\\epsilon)$ which is\ninsensitive to temperature and $\\epsilon$ variation. A decorrelation\ntime associated with an average interparticle collision time is\nidentified. At intermediate values of $\\epsilon$ we observe a sharp\ndrop in $S(\\epsilon)$ with increasing $\\epsilon$, indicating that the\nparticle paths are more ``stochastic'' at these length\nscales. $S(\\epsilon)$ is also found to have a strong temperature dependence\nin this regime. The logarithmic derivative $\\Delta \\equiv\nd\\log{\\tau}\/d\\log{\\epsilon}$ attains its maximum at a characteristic\n$\\epsilon$ value that is identified with the particle ``cage'' size\n$\\epsilon_c$, since particle localization is maximal at this\npoint. The scaling of $S(\\epsilon)$ with temperature at fixed\n$\\epsilon$ gives an independent confirmation of our estimate of\n$\\epsilon_c$ because it exhibits a qualitatively different dependence\non temperature for $\\epsilon > \\epsilon_c$ and $\\epsilon <\n\\epsilon_c$. The localization parameter $\\Lambda \\equiv\n\\Delta(\\epsilon_c)$ increases and the cage size $\\epsilon_c$ decreases\nas the liquid is cooled.
The distribution functions for the\nfirst-passage time at the scale of the cage size, $P_{\\epsilon_c}(t)$,\nin the coldest runs exhibit a long power-law tail, consistent with\nsuggestions that there is growing intermittency in the particle\ndisplacements in glass-forming liquids \\cite{odagaki,goodguy}. This\nfeature requires further study in cooler liquids to check the\npredictions of these models. At still larger scales (yet still less than\none interparticle separation) we observe persistent motion which\nfollows the transient particle ``caging''. $S(\\epsilon)$ obeys a power\nlaw $S(\\epsilon) \\sim \\epsilon^{-1\/\\nu}$ for $\\epsilon$ in the range\n$(0.7,1)$, where the apparent fractal dimension of the particle\ntrajectories $\\Delta(\\epsilon) = 1\/\\nu$ ranges from $\\approx 2$ to\n$\\approx 1.7$ as the temperature is lowered. Thus, we observe a\ntendency for the particle motion to acquire a persistent character\nrelative to Brownian motion as the liquid is cooled. This is consistent with\nthe previous observation of correlated string-like motion in this\nliquid \\cite{ddkppg}.\n\nGeneral arguments suggest that the dynamic entropy at microscopic\nscales decreases at low temperatures, but a slower variation should be\nobtained at higher temperatures, as in the present molecular dynamics\ncalculation (see Fig.~\\ref{figmsd}). The situation is not so clear for the\nfirst-passage time dynamic entropy $S(\\epsilon)$ at the scale of one\ninterparticle separation $\\epsilon=1$. In this case $S(\\epsilon=1)$ is\nthe inverse of the ``average interparticle exchange time'' \\cite{frenkel},\n$\\tau(\\epsilon=1)$. We find that $S(\\epsilon)$ vanishes as a power\nlaw, $S(\\epsilon=1) \\propto (T-T_c)^{2.5}$, when the mode-coupling\ntemperature $T_c$ is approached from above. Thus, the cooled liquid\nhas the appearance of approaching an ergodic-nonergodic transition\nas $T \\rightarrow T_c$, when viewed at the scale $\\epsilon=1$. This is\nconsistent with the predictions of mode-coupling theory.\n\nThe first-passage time can also be utilized to obtain information\nabout the spatial dependence of mobility fluctuations in cooled\nliquids. Perera and Harrowell \\cite{perera}, for example, have\nexamined the position dependence of $\\tau$ at the scale of one\ninterparticle spacing in a two-dimensional soft-sphere supercooled\nliquid and found a tendency for particles of relatively high and low\n``mobility'' (i.e., small and large $\\tau$, respectively) to cluster\nas $T$ is lowered. A detailed study of spatial correlations of\nfirst-passage times in the present system will be presented elsewhere\n\\cite{ag}.\n\nFuture work should examine our approximate expression for\n$S(\\epsilon)$ in the $\\epsilon \\rightarrow 0$ limit through\nindependent calculation of the Kolmogorov-Sinai dynamic entropy,\n$S(\\epsilon \\rightarrow 0) \\equiv h_{KS}$. The temperature dependence\nof $h_{KS}$ in cooler liquids ($T < T_c$) should be examined to\ndetermine if there is a tendency for the ``bare'' dynamic entropy to\nvanish at a lower glass transition temperature (see the inset of\nFig.~\\ref{figalpha1}). Recent work has established a phenomenological relation\nbetween dynamic entropy $h_{KS}$ and the equilibrium entropy in\nsimulations of model liquids at relatively elevated temperatures\n\\cite{dzugutov}. A decrease in $h_{KS}$ at lower $T$ should be\naccompanied by the development of collective motion at short\ntimes. Such motion has been reported in Ref.~\\cite{hiwatari}.
We\nexpect this change in the short time dynamics to be relevant for\ninterpreting the boson peak phenomenon in cooled liquids\n\\cite{bosonpeak}. Simulations have already shown that the fraction of\nunstable modes $f_u$ in a cooled liquid decreases in parallel with\n$h_{KS}$ \\cite{posch}, and the vanishing of $f_u$ has been identified\nwith the temperature where the diffusion coefficient $D$ vanishes\n\\cite{seeley,sciortino}.\n\nFinally, we note that the calculation of $S(\\epsilon)$ can be\nextended to other dynamical variables associated with other transport\nproperties (viscosity, thermal conductivity, etc.), and these\ncalculations should provide useful estimates of other characteristic\nspace and time scales in cooled liquids \\cite{DorfmanGaspard}. We\nemphasize that although the definition of $S(\\epsilon)$ is motivated\nby dynamical systems theory concepts, this quantity defines an\nindependently interesting measure of correlated motion in liquids that\ndoes not rely on the approximation relating $S(\\epsilon)$ to\n$h(\\epsilon)$.\n\n\\bigskip\n\\noindent {\\it Corresponding author:} {\\bf sharon.glotzer@nist.gov}.