\section{Introduction}
\label{intro}
The spatio-temporal traffic origin-destination (OD) demand is a critical component of dynamic system modeling for transport operation and management. For decades, dynamic traffic demand has been modeled deterministically, and it is translated into deterministic link/path flows and travel costs. Recent studies on transportation network uncertainty and reliability indicate that the variation of traffic demand has equally large regional economic and environmental impacts \citep{mahmassani2014incorporating}. However, the variation of traffic demand and flow from time to time ({\em e.g.}, morning versus afternoon, today versus yesterday) cannot be captured in those deterministic models. In addition, multi-day large-scale data cannot be characterized by deterministic traffic models. It is essential to consider the flow variation and understand its causes for system-wide decision making in the real world.
Therefore, modeling and estimating the stochasticity of traffic demand, namely its spatio-temporal correlation/variation, is a real need for public agencies and decision-makers.


In view of this, this paper addresses a fundamental problem: estimating the probabilistic dynamic origin-destination demand (PDOD) on general road networks. The reasons for estimating the PDOD instead of the deterministic dynamic OD demand (DDOD) are four-fold: 1) the PDOD enables the modeling of system variation \citep{han2018stochastic}, and hence the corresponding traffic model is more reliable; 2) later we will show that there is a theoretical bias when using the deterministic dynamic OD demand estimation (DDODE) framework with stochastic traffic flow; 3) the probabilistic dynamic OD estimation (PDODE) framework makes full use of multi-day traffic data, and the confidence level of the estimated PDOD can be quantified; in particular, the confidence in estimation accuracy increases as the amount of data increases in the PDODE framework; 4) the estimated PDOD helps public agencies operate and manage stochastic complex road networks more robustly \citep{jin2019behavior}.


Before focusing on the PDODE problem, we first review the large body of literature on DDODE problems, and then their extensions to PDODE problems are discussed.
The DDODE problem was originally proposed and solved through a generalized least squares (GLS) formulation by assuming that the networks are not congested and that travelers' behaviors ({\em e.g.} route choice, departure time choice) are exogenous \citep{cascetta1993dynamic}. On congested networks, travelers' behaviors need to be considered endogenously. A bi-level formulation was then proposed on top of the GLS formulation, in which the upper-level problem solves the GLS formulation with fixed travelers' behaviors and the lower-level problem updates the travelers' behaviors \citep{tavana2001internally}. Readers are referred to \citet{nguyen1977estimating, leblanc1982selection, fisk1989trip, yang1992estimation, florian1995coordinate, jha2004development, nie2008variational} for more details on the bi-level formulation. 
The DDODE problem can also be solved with real-time data feeds for ATIS/ATMS applications, and state-space models are usually adopted to estimate the OD demand on a rolling basis \citep{bierlaire2004efficient, zhou2007structural, ashok2000alternative}. Another interesting trend is that emerging data sources are becoming available to estimate OD demand directly, including automatic vehicle identification data \citep{cao2021day}, mobile phone data \citep{bachir2019inferring}, Bluetooth data \citep{cipriani2021traffic}, GPS trajectories \citep{ros2022practical}, and satellite images \citep{kaack2019truck}. Unlike for static networks \citep{wu2018hierarchical, waller2021rapidex}, a universal framework that can integrate multi-source data is still lacking for dynamic networks.

Solution algorithms for the DDODE problem can be categorized into two types: 1) meta-heuristic methods; 2) gradient-based methods.
Though meta-heuristic methods might be able to search for the global optimum, most studies only handle small networks with low-dimensional OD demand \citep{patil2022methods}. In contrast, gradient-based methods can be applied to large-scale networks without exhausting computational resources.
The performance of gradient-based methods depends on how accurately the gradient of the GLS formulation can be evaluated. \citet{balakrishna2008time, cipriani2011gradient} adopt the simultaneous perturbation stochastic approximation (SPSA) framework to approximate the gradients. \citet{lee2009new, vaze2009calibration, ben2012dynamic, lu2015enhanced, tympakianaki2015c, antoniou2015w, oh2019demand, qurashi2019pc} further enhance the SPSA-based methods. \citet{lu2013dynamic} discuss how to evaluate the gradients of dynamic OD demand on congested networks. \citet{flotterod2011bayesian, yu2021bayesian} derive the gradient of OD demand in a Bayesian inference framework. \citet{osorio2019dynamic, osorio2019high, patwary2021metamodel, dantsuji2022novel} develop a meta-model to approximate the gradients of dynamic OD demand through linear models. Recently, \citet{wu2018hierarchical, ma2019estimating} propose a novel approach to evaluate the gradient of OD demand efficiently through computational graphs. 

A few studies have explored the possibility of estimating the PDOD, and this problem turns out to be much more challenging than the DDODE problem. As far as we know, all the existing studies related to probabilistic OD demand focus on static networks. For example, a statistical inference framework with a Markov chain Monte Carlo (MCMC) algorithm was proposed to estimate the probabilistic OD demand \citep{hazelton2008statistical}. The GLS formulation has also been extended to consider the variance/covariance matrices in order to estimate the probabilistic OD demand \citep{shao2014estimation, shao2015estimation}. \citet{ma2018statistical} estimate the probabilistic OD demand under statistical traffic equilibrium using maximum likelihood estimation (MLE). Recently, \citet{yang2019estimating} adopt the generalized method of moments (GMM) to estimate the parameters of the probability distributions of OD demand. 
Estimating the probabilistic dynamic OD demand (PDOD) is challenging, and the reasons are three-fold: 1) the PDODE problem requires modeling the dynamic traffic networks in the probabilistic space, hence a number of existing models need to be adapted or re-formulated \citep{shao2006reliability, nakayama2014consistent, watling2015stochastic, ma2017variance}; 2) estimating the probabilistic OD demand is an under-determined problem, and the problem dimension of PDODE is much higher than that of DDODE \citep{shao2015estimation, ma2018statistical, yang2019estimating}; 3) solving the PDODE problem is more computationally intensive than solving the DDODE problem, and hence new approaches need to be developed to improve the efficiency of the solution algorithm \citep{flotterod2017search, ma2018estimating, shen2019spatial}. 

In both PDODE and DDODE formulations, travelers' behaviors are modeled through dynamic traffic assignment (DTA) models. Two major types of DTA models are Dynamic User Equilibrium (DUE) models and Dynamic System Optimal (DSO) models. DUE models search for user-optimal traffic conditions such that all travelers in the same OD pair have the same utilities \citep{mahmassani1984dynamic, nie2010solving}; DSO models solve for the system optimum, in which the total disutility is minimized \citep{shen2007path, qian2012system, ma2014continuous}. Most DTA models rely on dynamic network loading (DNL) models on general networks \citep{ma2008polymorphic}, and the DNL models simulate all the vehicle trajectories and spatio-temporal traffic conditions given the origin-destination (OD) demand and fixed travelers' behaviors. 

One noteworthy observation is that many studies have shown great potential in improving the solution efficiency by casting network modeling formulations into computational graphs \citep{wu2018hierarchical, ma2019estimating, sun2019analyzing, zhang2021network, kim2021computational}. The advantage of using computational graphs for the PDODE problem lies in the fact that computational graphs share similarities with deep neural networks from many perspectives. Hence a number of state-of-the-art techniques, which were previously developed to make training neural networks more efficient, can be directly used for solving the PDODE problem. These techniques include, but are not limited to, adaptive gradient-based methods, dropout \citep{srivastava2014dropout}, GPU acceleration, and multi-processing \citep{zinkevich2009slow}. Some of these techniques have been examined in our previous study, and the experimental results demonstrate great potential on large-scale networks \citep{ma2019estimating}. Additionally, multi-source data can be seamlessly integrated into the computational graph to estimate the OD demand.

The success of computational graphs advocates the development of end-to-end frameworks, and this paper inherits this idea to estimate the mean and standard deviation of the PDOD simultaneously. Ideally, the computational graph involves the variables that will be estimated (decision variables, {\em e.g.}, the mean and standard deviation of the PDOD), intermediate variables ({\em e.g.}, path/link flow distributions), and observed data ({\em e.g.}, observed traffic volumes), and it finally computes the objective function as a scalar. Between these variables, all standard neural network operations can be employed. The chain rule and back-propagation can be adopted to update all the variables on the computational graph, as sketched below. 
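As a minimal illustration of this mechanism, the following sketch (assuming PyTorch as the auto-differentiation backend; all variables and values are hypothetical) builds a toy computational graph from an OD demand vector to a scalar loss and back-propagates through it:
\begin{verbatim}
import torch

# Decision variable: a toy OD demand vector placed on the computational graph.
q = torch.tensor([100.0, 200.0, 150.0], requires_grad=True)

# Intermediate variable: a fixed toy assignment matrix maps demand to flows
# (the matrix and all values here are hypothetical).
p = torch.tensor([[0.6, 0.0, 0.0],
                  [0.4, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
flow = p @ q

# Observed data and the scalar objective at the end of the graph.
observed = torch.tensor([70.0, 230.0, 140.0])
loss = torch.sum((flow - observed) ** 2)

# Back-propagation applies the chain rule through every operation above.
loss.backward()
print(q.grad)  # d(loss)/d(q), ready for a gradient descent update
\end{verbatim}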
This process of building a differentiable pipeline and propagating gradients through it is also known as differentiable programming \citep{jax2018github}. As some of the variables in the computational graph carry physical meaning, we view the computational graph as a powerful tool to integrate data-driven approaches and domain-oriented knowledge. An overview of a computational graph is presented in Figure~\ref{fig:cg}. 

\begin{figure}[h]
	\centering
	\includegraphics[width=0.85\linewidth]{cg}
	\caption{\footnotesize{An illustrative figure for computational graphs.}}
	\label{fig:cg}
\end{figure}


In this paper, we develop a data-driven framework that solves the probabilistic dynamic OD demand estimation (PDODE) problem using multi-day traffic data on general networks. The proposed framework rigorously formulates the PDODE problem on computational graphs, and different statistical distances ({\em e.g.}, $\ell_p$-norm, Wasserstein distance, KL divergence, Bhattacharyya distance) are used and compared for the objective function. 
The closest studies to this paper are those of \citet{wu2018hierarchical, ma2019estimating}, which construct the computational graphs for static and dynamic OD demand estimation, respectively. This paper extends the usage of computational graphs to solve PDODE problems. The main contributions of this paper are summarized as follows:
\begin{enumerate}[label=\arabic*)]
	\item We illustrate the potential bias in the DDODE framework when dynamic OD demands are stochastic.
	\item We rigorously formulate the probabilistic dynamic OD estimation (PDODE) problem, and different statistical distances are compared for the objective function. It is found that the $\ell_1$ and $\ell_2$ norms have advantages in estimating the mean and standard deviation of the PDOD, respectively, and the 2-Wasserstein distance achieves a balanced accuracy in estimating both mean and standard deviation. 
	\item The PDODE formulation is vectorized and cast into a computational graph, and a reparameterization trick is developed for the first time to estimate the mean and standard deviation of the PDOD simultaneously using adaptive gradient-based methods.
	\item We examine the proposed PDODE framework on a large-scale network to demonstrate its effectiveness and computational efficiency.
\end{enumerate}


The remainder of this paper is organized as follows. Section~\ref{sec:example} illustrates the necessity of the PDODE, and Section~\ref{sec:model} presents the proposed model formulation and casts the formulation into computational graphs. Section~\ref{sec:solution} proposes a novel solution algorithm with a reparameterization trick. Numerical experiments on both small and large networks are conducted in Section~\ref{sec:experiment}. Finally, conclusions and future research are summarized in Section~\ref{sec:con}.


\section{An Illustrative Example}
\label{sec:example}
To illustrate the necessity of considering demand variation when traffic flow is stochastic, we show that the DDODE framework can under-estimate the DDOD (the mean of the PDOD). Consider a simple 2-link network with a bottleneck, as shown in Figure~\ref{fig:bottle}: the capacity of the bottleneck is 2,000 vehicles/hour, and the incoming flow follows a Gaussian distribution with a mean of 2,000 vehicles/hour; a minimal simulation sketch is given below. 
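The following Monte Carlo sketch reproduces this setting numerically; the inflow standard deviation of 200 vehicles/hour is an illustrative assumption rather than a value specified in the example:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
CAPACITY = 2000.0                                 # bottleneck capacity (veh/hour)
demand = rng.normal(2000.0, 200.0, size=100_000)  # assumed std: 200 veh/hour
demand = np.clip(demand, 0.0, None)               # demand is non-negative

outflow = np.minimum(demand, CAPACITY)            # flow rate on link 2
queue_rate = np.maximum(demand - CAPACITY, 0.0)   # queue accumulation rate

print(f"mean demand:             {demand.mean():7.1f} veh/hour")
print(f"mean downstream outflow: {outflow.mean():7.1f} veh/hour (< 2000)")
print(f"mean queue accumulation: {queue_rate.mean():7.1f} veh/hour")
\end{verbatim}
Matching only the mean of the observed outflow, as a deterministic framework would, therefore necessarily yields a demand estimate below 2,000 vehicles/hour.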
Due to the limited bottleneck capacity, we can compute the probability density functions (PDFs) of the queue accumulation rate and the flow rate on the downstream link, as shown in Figure~\ref{fig:bottle}.

\begin{figure}[h]
	\centering
	\includegraphics[width=0.85\linewidth]{example.png}
	\caption{\footnotesize{A simple network with a bottleneck.}}
	\label{fig:bottle}
\end{figure}

Suppose link 2 is installed with a loop detector, and we aim to estimate the OD demand from Origin to Destination. If the DDODE method is used, we ignore the variation in the traffic flow observation on link 2, and the mean traffic flow is used, which is below 2,000 vehicles/hour. Therefore, the estimated OD demand will be less than 2,000 vehicles/hour. One can see the demand is under-estimated, and the bias is due to ignoring the flow variation. In contrast, the flow variation is considered in our proposed model. By matching the PDF of the observed traffic flow, the distribution of the OD demand can be estimated in an unbiased manner if the model specifications of the OD demand are correct. Overall, considering flow variation could improve the estimation accuracy of traffic demand, which motivates the development of the PDODE framework.


\section{Model}
\label{sec:model}
In this section, we first present the model assumptions. Then the important components of the probabilistic traffic dynamics on general networks, which include the PDOD, route choice models, and network loading models, are discussed. The PDODE problem is formulated and cast into a vectorized representation using random vectors. Lastly, we propose different statistical distances as the objective function.

\subsection{Assumptions}
Let $Q_{rs}^h$ represent the dynamic traffic demand (number of travelers departing to travel) from OD pair $r$ to $s$ in time interval $h$, where $r \in R, s\in S$ and $h \in H$. $R$ is the set of origins, $S$ is the set of destinations, and $H$ is the set of all time intervals. 
\begin{assumption}
	\label{as:mvn}
	The probabilistic dynamic OD demand (PDOD) follows a multivariate Gaussian distribution with a diagonal covariance matrix. We also assume that $Q_{rs}^h$ is bounded below such that $Q_{rs}^h \geq 0$ for the sake of simplification. Readers are referred to \citet{nakayama2016effect} for more discussions about the assumptions of bounded Gaussian distributions of OD demand.
\end{assumption}

\begin{assumption}
	The dynamic traffic flows, including OD demand, path flow, and link flow, are infinitesimal. Therefore, the variation of travelers' choices is not considered \citep{ma2017variance}.
\end{assumption}

\subsection{Modeling the probabilistic network dynamics}
We present the different components and their relationships on a probabilistic and dynamic network.
\subsubsection{Probabilistic dynamic OD demand} 
The dynamic OD demand $Q_{rs}^h$ is a univariate random variable, and it can be decomposed into two parts, as shown in Equation~\ref{eq:od}.
\begin{eqnarray}
	\label{eq:od}
	Q_{rs}^h = q_{rs}^h + \varepsilon_{rs}^h
\end{eqnarray}
where $q_{rs}^h$ is the mean OD demand for OD pair $rs$ in time interval $h$ and it is a deterministic scalar, while $\varepsilon_{rs}^h$ represents the randomness of OD demand. 
Based on Assumption~\ref{as:mvn}, $\varepsilon_{rs}^h$ follows the zero-mean Gaussian distribution, as presented in Equation~\ref{eq:random}.

\begin{eqnarray}
	\label{eq:random}
	\varepsilon_{rs}^h \sim \mathcal{N}\left(0, \left(\sigma_{rs}^h\right)^2 \right)
\end{eqnarray}
where $\sigma_{rs}^h$ is the standard deviation of $Q_{rs}^h$, and $\mathcal{N}(\cdot, \cdot)$ represents the Gaussian distribution.

\subsubsection{Travelers' Route Choice}
To model the travelers' route choice behaviors, we define the time-dependent route choice portion $p_{rs}^{kh}$ such that it distributes the OD demand $Q_{rs}^{h}$ to the path flow $F_{rs}^{kh}$ by Equation~\ref{eq:ODpath}.
\begin{eqnarray}
	\label{eq:ODpath}
	F_{rs}^{kh} = p_{rs}^{kh} Q_{rs}^h 
\end{eqnarray}
where $F_{rs}^{kh}$ is the path flow (number of travelers departing to travel along a path) for the $k$th path in OD pair $rs$ in time interval $h$. The route choice portion $p_{rs}^{kh}$ can be determined through a generalized route choice model, as presented in Equation~\ref{eq:gen_choice}.
\begin{eqnarray}
	\label{eq:gen_choice}
	p_{rs}^{kh} = \Psi_{rs}^{kh}\left( \D\left(\{C_{rs}^{kh}\}_{rskh}\right), \D\left(\{T_{a}^h\}_{ah}\right)\right)
\end{eqnarray}
where $\Psi_{rs}^{kh}$ is the generalized route choice model and the operator $\D(\cdot)$ extracts all the parameters of a certain distribution. For example, if $Y\sim \N(\mu, \sigma^2)$, then $\D(Y) = (\mu, \sigma)$. $T_{a}^{h}$ represents the link travel time for link $a$ in time interval $h$, and $C_{rs}^{kh}$ represents the path travel time for the $k$th path in OD pair $rs$ departing in time interval $h$. Equation~\ref{eq:gen_choice} indicates that the route choice portions are based on the distributions of link travel time and path travel time. In this paper we use travel time as the disutility function, though any form of disutility can be used as long as it can be simulated by $\Lambda$. The generalized travel time can include road tolls, left-turn penalties, delays at intersections, travelers' preferences, and so on.



\subsubsection{Dynamic network loading} For a dynamic network, the network conditions ({\em i.e.} path travel time, link travel time, delays) are governed by the link/node flow dynamics, which can be modeled through dynamic network loading (DNL) models \citep{ma2008polymorphic}. Let $\Lambda(\cdot)$ represent the DNL model, as presented in Equation~\ref{eq:dnl}.
\begin{eqnarray}
	\label{eq:dnl}
	\left\{T_{a}^{h}, C_{rs}^{kh}, {\rho}_{rs}^{ka}(h, h') \right\} _{r,s,k,a,h, h'} = \Lambda(\{F_{rs}^{kh}\}_{r,s,k,h})
\end{eqnarray}
where $\rho_{rs}^{ka}(h, h')$ is the dynamic assignment ratio (DAR), which represents the portion of the $k$th path flow for OD pair $rs$ departing within time interval $h$ that arrives at link $a$ within time interval $h'$ \citep{ma2018estimating}. Link $a$ is chosen from the link set $A$, and path $k$ is chosen from the set of all paths for OD pair $rs$, as represented by $a \in A, k \in K_{rs}$. We remark that $T_{a}^{h}, C_{rs}^{kh}, \rho_{rs}^{ka}(h, h')$ are random variables as they are functions of the random variable $F_{rs}^{kh}$. 

The DNL model $\Lambda$ depicts the network dynamics through traffic flow theory \citep{zhang2013modelling,jin2012link}; a sketch of the interface of $\Lambda$ is given below. 
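In implementation terms, $\Lambda$ can be any simulator that exposes an interface along the following lines (a sketch only; the function name, arguments, and placeholder outputs are hypothetical and do not correspond to a specific package):
\begin{verbatim}
import numpy as np

def dnl(path_flows, n_links, n_paths, n_intervals):
    """Hypothetical interface for the DNL model Lambda: given the vector of
    time-dependent path flows F, return link travel times T, path travel
    times C, and the DAR matrix rho (sparse in practice, dense here)."""
    # A real implementation runs a traffic simulator here; this stub only
    # fixes the input/output contract assumed in the rest of the paper.
    T = np.ones(n_links * n_intervals)
    C = np.ones(n_paths * n_intervals)
    rho = np.zeros((n_links * n_intervals, n_paths * n_intervals))
    return T, C, rho

# With rho in hand, link flows follow as a single matrix product.
F = np.zeros(29 * 10)              # e.g., 29 paths over 10 time intervals
T, C, rho = dnl(F, n_links=27, n_paths=29, n_intervals=10)
X = rho @ F                        # link flow vector
\end{verbatim}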
Essentially, many existing traffic simulation packages, including but not limited to MATSIM \citep{balmer2009matsim}, Polaris \citep{stolte2002polaris}, BEAM \citep{sheppard2017modeling}, DynaMIT \citep{ben1998dynamit}, DYNASMART \citep{mahmassani1992dynamic}, DTALite \citep{zhou2014dtalite}, and MAC-POSTS \citep{qian2016dynamic, CARTRUCK}, can be used as the function $\Lambda$. In this paper, MAC-POSTS is used as $\Lambda$.

Furthermore, link flow can be modeled by Equation~\ref{eq:link}.
\begin{eqnarray}
	\label{eq:link}
	X_a^{h'} = \sum_{rs \in K_q} \sum_{k \in K_{rs}} \sum_{h \in H}{\rho}_{rs}^{ka}(h, h') F_{rs}^{kh}
\end{eqnarray}
where $X_a^{h'}$ represents the flow on link $a$ that arrives in time interval $h'$, and $K_q$ is the set of all OD pairs. 

\subsubsection{Statistical equilibrium}
The route choice proportion $p_{rs}^{kh}$ is a deterministic variable rather than a random variable; the reason is that we assume travelers' behaviors are based on the statistical equilibrium originally defined in \citet{ma2017variance}, as presented in Definition~\ref{def:equi}.
\begin{definition}
	\label{def:equi}
	A road network is under statistical equilibrium if all travelers practice the following behavior: on each day, each traveler from origin $r$ to destination $s$ departing in time interval $h$ independently chooses route $k$ with a deterministic probability $p_{rs}^{kh}$. For a sufficient number of days, this choice behavior yields a stabilized distribution of travel costs with parameters $\D\left(\{C_{rs}^{kh}\}_{rskh}\right), \D\left(\{T_{a}^h\}_{ah}\right)$. This stabilized distribution, in turn, results in the deterministic probabilities $p_{rs}^{kh} = \Psi_{rs}^{kh}\left( \D\left(\{C_{rs}^{kh}\}_{rskh}\right), \D\left(\{T_{a}^h\}_{ah}\right)\right)$, where $\Psi_{rs}^{kh}(\cdot)$ is a general route choice function. Mathematically, we say the network is under statistical equilibrium when the random variables $(\{Q_{rs}^h \}_{rsh},\{F_{rs}^{kh}\}_{rskh}, \{X_a^h\}_{ah}, \{C_{rs}^{kh}\}_{rskh}, \{T_a^h\}_{ah})$ are consistent with Equations~\ref{eq:ODpath}, \ref{eq:gen_choice}, and \ref{eq:dnl}.
\end{definition}

The statistical equilibrium indicates that the multi-day traffic conditions are independent and identically distributed, which differs from the assumptions in day-to-day traffic models. Readers are referred to \citet{ma2017variance} for more details. The assumption of statistical equilibrium allows us to estimate the distribution of link/path/OD flow from the observed data, and the empirical covariance matrix of link/path/OD flow can approximate the corresponding true covariance matrix when a large number of observations is available.

\subsubsection{Vectorization} 
To simplify notation, all the related variables are vectorized. We set $N = |H|$ and denote the total number of paths as $\Pi = \sum_{rs} |K_{rs}|$ and the total number of OD pairs as $K=|K_q|$. The vectorized variables are presented in Table~\ref{tab:mcvec}.
\begin{table*}[h]
	\begin{center}
		\caption{Variable vectorization table (R.V.: random variable).}
		\label{tab:mcvec}
		\begin{tabular}{p{3cm}cccccp{4.5cm}}
			\hline
			Variable & R.V. 
& Scalar & Vector & Dimension & Type & Description\\
			\hline\hline \rule{0pt}{3ex}
			Mean OD flow & No & $q_{rs}^h$ & $\vec{q}$ & $\mathbb{R}^{NK}$ & Dense & $q_{rs}^h$ is placed at entry $(h-1)K + rs$\\
			\hline \rule{0pt}{3ex}
			Standard deviation of OD flow & No & $\sigma_{rs}^h$ & $\boldsymbol \sigma$ & $\mathbb{R}^{NK}$ & Dense & $\sigma_{rs}^h$ is placed at entry $(h-1)K + rs$\\
			\hline \rule{0pt}{3ex}
			Randomness of OD flow & Yes & $\varepsilon_{rs}^h$ & $\boldsymbol \varepsilon$ & $\mathbb{R}^{NK}$ & Dense & $\varepsilon_{rs}^h$ is placed at entry $(h-1)K + rs$\\
			\hline \rule{0pt}{3ex}
			OD flow & Yes & $Q_{rs}^h$ & $\vec{Q}$ & $\mathbb{R}^{NK}$ & Dense & $Q_{rs}^h$ is placed at entry $(h-1)K + rs$\\
			\hline \rule{0pt}{3ex}
			Path flow & Yes & $F_{rs}^{kh}$ & $\mathbf{F}$ & $\mathbb{R}^{N\Pi}$ & Dense & $F_{rs}^{kh}$ is placed at entry $(h-1)\Pi + k$\\
			\hline \rule{0pt}{3ex}
			Link flow & Yes & $X_{a}^h$ & $\mathbf{X}$ & $\mathbb{R}^{N|A|}$ & Dense & $X_{a}^h$ is placed at entry $(h-1)|A| + a$\\
			\hline \rule{0pt}{3ex}
			Link travel time & Yes & $T_{a}^h$ & $\mathbf{T}$ & $\mathbb{R}^{N|A|}$ & Dense & $T_{a}^h$ is placed at entry $(h-1)|A| + a$\\
			\hline \rule{0pt}{3ex}
			Path travel time & Yes & $C_{rs}^{kh}$ & $\mathbf{C}$ & $\mathbb{R}^{N\Pi}$ & Dense & $C_{rs}^{kh}$ is placed at entry $(h-1)\Pi + k$\\
			\hline \rule{0pt}{3ex}
			DAR matrix & Yes & $\rho_{rs}^{ka}(h, h')$ & $\boldsymbol \rho$ & $\mathbb{R}^{N|A| \times N\Pi}$ & Sparse & $\rho_{rs}^{ka}(h, h')$ is placed at entry $[(h'-1)|A| + a, (h-1)\Pi + k]$\\
			\hline \rule{0pt}{3ex}
			Route choice matrix & No & $p_{rs}^{kh}$ & $\mathbf{p}$ & $\mathbb{R}^{N\Pi \times NK}$ & Sparse & $p_{rs}^{kh}$ is placed at entry $[(h-1)\Pi + k, (h-1)K + rs]$\\
			\hline
		\end{tabular}
	\end{center}
\end{table*}

Using the notations presented in Table~\ref{tab:mcvec}, we can rewrite Equations~\ref{eq:od}, \ref{eq:random}, \ref{eq:ODpath}, \ref{eq:gen_choice}, \ref{eq:dnl}, and \ref{eq:link} as Equation~\ref{eq:vec}.
\begin{equation}
	\label{eq:vec}
	\begin{array}{llllll}
		\vspace{5pt}
		\vec{Q} &=& \vec{q} + \boldsymbol \varepsilon\\
		{\boldsymbol \varepsilon} &\sim& \N\left(\vec{0}, {\boldsymbol \sigma}^2\right)\\
		\vec{F} &=& \vec{p}\vec{Q}\\
		\vec{p} &= & \Psi \left( \D \left(\vec{C} \right), \D \left(\vec{T}\right) \right)& \\
		\left \{ \vec{C}, \vec{T}, {\boldsymbol\rho} \right\} &= & \Lambda(\vec{F}) & \\
		\vec{X} &=&{\boldsymbol \rho} \vec{F}
	\end{array}
\end{equation}
where ${\boldsymbol \sigma}^2$ denotes the element-wise square of the vector ${\boldsymbol \sigma}$, interpreted as a diagonal covariance matrix. In the rest of this paper, we will use the vectorized notations for simplicity.

\subsection{Formulating the PDODE problem}
The PDODE problem is formulated in this section. In particular, different objective functions are discussed.
\subsubsection{Objective function} 
\label{sec:obj}
To formulate the PDODE problem, we first define the objective function of the optimization problem. The DDODE problem minimizes the gap between the estimated (reproduced) and the observed traffic conditions. The gap is usually measured through the $\ell_2$-norm, which is commonly used to measure the distance between two deterministic vectors. 
However, the PDODE problem is formulated in the probabilistic space, and we need to measure the distance between the distributions of the observed traffic conditions and the estimated (reproduced) traffic conditions. 
To this end, we define a generalized form to measure the distance between the observed and estimated distributions of traffic conditions, as presented in Equation~\ref{eq:ere}.

\begin{eqnarray}
	\label{eq:ere}
	\mathcal{L}_0 = \mathcal{M} \left(\tilde{\mathbf{X}}, \mathbf{X}(\mathbf{Q})\right) 
\end{eqnarray}
where $\mathcal{M}$ measures the statistical distance between two distributions, which is defined in Definition~\ref{def:stat}. 


\begin{definition}
\label{def:stat}
The statistical distance $\mathcal{M}(\mathbf{X}_1, \mathbf{X}_2)$ is defined as the distance between two random vectors ({\em i.e.}, two probabilistic distributions) $\mathbf{X}_1$ and $\mathbf{X}_2$, and it should satisfy two properties: 1) $\mathcal{M}(\mathbf{X}_1, \mathbf{X}_2) \geq 0, \forall~\mathbf{X}_1, \mathbf{X}_2 $; 2) $\mathcal{M}(\mathbf{X}_1, \mathbf{X}_2) = 0 \iff \mathbf{X}_1 = \mathbf{X}_2$. 
\end{definition}

The statistical distance may not be symmetric with respect to $\mathbf{X}_1$ and $\mathbf{X}_2$, and hence it may not be viewed as a metric. Various statistical distances can be used for $\mathcal{M}$, and we review the existing literature to list some commonly used distances that have explicit forms for Gaussian distributions. We further simplify the notation $\mathbf{X}(\mathbf{Q})$ to $\mathbf{X}$, and we assume $\tilde{\mathbf{X}} \sim \mathcal{N}\left( \tilde{\mathbf{x}}, \Sigma_{\tilde{\mathbf{X}}} \right)$ and $\mathbf{X} \sim \mathcal{N}\left( \mathbf{x}, \Sigma_{\mathbf{X}} \right)$; then the different statistical distances can be computed as follows.
\begin{itemize}
	\item $\ell_p$-norm on distribution parameters: this metric directly compares the $\ell_p$-norms of the differences in the mean vectors and covariance matrices, which can be written as:
	$$\|\tilde{\mathbf{x}} - \mathbf{x}\|_p + \| \Sigma_{\tilde{\mathbf{X}}} - \Sigma_{\mathbf{X}}\|_p $$
	\item Wasserstein distance: the 2-Wasserstein distance has a closed form for Gaussian distributions, and $\mathcal{M}\left(\tilde{\mathbf{X}}, \mathbf{X}\right)$ can be written as:
	$$\|\tilde{\mathbf{x}} - \mathbf{x}\|_2^2 + \text{Tr}\left( \Sigma_{\tilde{\mathbf{X}}} + \Sigma_{\mathbf{X}} -2 \left(\Sigma_{\tilde{\mathbf{X}}}^{1/2} \Sigma_{\mathbf{X}} \Sigma_{\tilde{\mathbf{X}}}^{1/2} \right)^{1/2} \right)$$
	\item Kullback--Leibler (KL) divergence: also known as relative entropy. The KL divergence is not symmetric, and we choose the forward KL divergence to avoid taking the inverse of $\Sigma_{\mathbf{X}}$, which can be written as (with $N|A|$ being the dimension of $\mathbf{X}$)
	$$\frac{1}{2} \left[ \log \frac{|\Sigma_{\tilde{\mathbf{X}}}|}{|\Sigma_{\mathbf{X}}|} - N|A| + (\tilde{\mathbf{x}} - \mathbf{x})^T\Sigma_{\tilde{\mathbf{X}}}^{-1} (\tilde{\mathbf{x}} - \mathbf{x}) + \text{Tr}\left(\Sigma_{\tilde{\mathbf{X}}}^{-1} \Sigma_{\mathbf{X}}\right) \right]. 
$$
	We note that the KL divergence can be further extended to the Jensen--Shannon (JS) divergence, but that requires taking the inverse of $\Sigma_{\mathbf{X}}$, so we do not consider it in this study.
	\item Bhattacharyya distance: we set $\Sigma = \frac{\Sigma_{\tilde{\mathbf{X}}} + \Sigma_{\mathbf{X}}}{2}$; then $\mathcal{M}\left(\tilde{\mathbf{X}}, \mathbf{X}\right)$ can be written as:
	$$\frac{1}{8}(\tilde{\mathbf{x}} - \mathbf{x})^T\Sigma^{-1} (\tilde{\mathbf{x}} - \mathbf{x}) + \frac{1}{2}\ln \frac{|\Sigma|}{\sqrt{|\Sigma_{\tilde{\mathbf{X}}}| |\Sigma_{\mathbf{X}}|}}$$
\end{itemize}

All the above statistical distances satisfy Definition~\ref{def:stat}, and they are continuous with respect to the distribution parameters. More importantly, all the statistical distances are differentiable, as each of the operations used is differentiable, and auto-differentiation techniques can be used to derive the overall gradients of the statistical distances with respect to the distribution parameters \citep{speelpenning1980compiling}. Theoretically, all the above distances can be used as the objective function in PDODE, but we will show in the numerical experiments that their performance can be drastically different.


\subsubsection{PDODE formulation}

To simulate the availability of multi-day traffic data, we assume that $I$ days' traffic count data are collected, where $\tilde{\mathbf{X}}^{(i)}$ is the $i$th observed link flow sample, $i= 1, 2, \cdots, I$, and the $\tilde{\mathbf{X}}^{(i)}$ independently and identically follow the distribution of $\tilde{\mathbf{X}}$.
Because the actual distributions of $\tilde{\mathbf{X}}$ and $\mathbf{X}$ are unknown, we use a Monte Carlo approximation of $\mathcal{L}_0$, as presented in Equation~\ref{eq:ereap}.
\begin{eqnarray}
	\label{eq:ereap}
	\mathcal{L} &=& \mathbb{E}_{\left({\boldsymbol \alpha}, {\boldsymbol \beta}\right) \sim 
		{\tilde{\mathbf{X}}}^{\bigotimes I} \bigotimes {\mathbf{X}}^{\bigotimes L}} \mathcal{M} \left(\boldsymbol \alpha, \boldsymbol \beta\right) \nonumber\\
	&=&\frac{1}{IL}\sum_{i=1}^I \sum_{l=1}^{L} \mathcal{M} \left(\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}\right) \label{eq:L}
\end{eqnarray}
where $I, L$ are the numbers of samples from the distributions of $\tilde{\mathbf{X}}$ and $\mathbf{X}$, respectively, and $\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}$ are the sample distributions of $\tilde{\mathbf{X}}$ and $\mathbf{X}$, respectively.
By the law of large numbers (LLN), $\mathcal{L}$ converges to $\mathcal{L}_0$ as $I,L \to \infty$.


Combining the constraints in Equation~\ref{eq:vec} and the objective function in Equation~\ref{eq:L}, we are now ready to formulate the PDODE problem as Formulation~\ref{eq:pdode1}.

\begin{equation}
	\label{eq:pdode1}
	\begin{array}{rrcllll}
		\vspace{5pt}
		\displaystyle \min_{\vec{q}, {\boldsymbol \sigma}} & \multicolumn{4}{l}{\displaystyle \frac{1}{IL}\sum_{i=1}^I \sum_{l=1}^{L} \mathcal{M} \left(\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}\right)} &\\
		\textrm{s.t.} & \left \{ \vec{C}^{(l)}, \vec{T}^{(l)}, {\boldsymbol \rho}^{(l)} \right\} &= & \Lambda(\vec{F}^{(l)}) & \forall l&\\
		~ & \vec{p}^{(l)} &= & \Psi \left( \D(\vec{C}^{(l)}), \D(\vec{T}^{(l)}) \right)& \forall l &\\
		~ & \mathbf{Q}^{(l)} &\sim& \mathcal{N}\left(\vec{q}, {\boldsymbol \sigma}^2\right)&\forall l&\\
		~ & 
\vec{F}^{(l)} & = & \vec{p}^{(l)}\vec{Q}^{(l)} & \forall l&\\
		~ & \mathbf{X}^{(l)} & = & {\boldsymbol \rho}^{(l)} \vec{F}^{(l)} &\forall l &
	\end{array}
\end{equation}
where $\vec{C}^{(l)}, \vec{T}^{(l)}, \mathbf{X}^{(l)}, \vec{F}^{(l)}, \mathbf{Q}^{(l)}, {\boldsymbol \rho}^{(l)}, \vec{p}^{(l)}$ are the sample distributions of $\vec{C}, \vec{T}, \mathbf{X}, \vec{F}, \mathbf{Q}, {\boldsymbol \rho}, \vec{p}$, respectively. Formulation~\ref{eq:pdode1} searches for the optimal mean and standard deviation of the dynamic OD demand that minimize the statistical distance between the observed and estimated link flow distributions such that the DNL and travelers' behavior models are satisfied. We note that Formulation~\ref{eq:pdode1} can be extended to include traffic speed, travel time, and historical OD demand data \citep{ma2019estimating}. It is straightforward to show that Formulation~\ref{eq:pdode1} is always feasible, as long as the sampled PDOD is feasible to the traffic simulator, as presented in Proposition~\ref{prop:fea}.


\begin{proposition}[Feasibility]
\label{prop:fea}
There exists a feasible solution $(\vec{q}, {\boldsymbol \sigma})$ to Formulation~\ref{eq:pdode1} if the non-negative support of the distribution $\mathcal{N}\left(\vec{q}, {\boldsymbol \sigma}^2\right)$ is feasible to the traffic simulator $\Lambda$.
\end{proposition}


To compute $\mathcal{M} \left(\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}\right)$, we first characterize the distribution of $\mathbf{X}^{(l)}$ through the DNL of the path flow $\vec{F}^{(l)} = \vec{p}^{(l)} \vec{Q}^{(l)}$, {\em i.e.}, $\mathbf{X}^{(l)} = {\boldsymbol \rho}^{(l)} \vec{p}^{(l)} \vec{Q}^{(l)}$.
Hence the computation of $\mathcal{M} \left(\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}\right)$ is based on the distribution of $\vec{Q}^{(l)}$, as the distribution of $\mathbf{X}^{(l)}$ is obtained from $\mathbf{Q}^{(l)}$. Additionally, the sample distribution $\mathbf{Q}^{(l)}$ is further generated from $\mathbf{Q}^{(l)} \sim \mathcal{N}\left(\vec{q}, {\boldsymbol \sigma}^2\right)$. 



Formulation~\ref{eq:pdode1} is challenging to solve because the derivatives of the loss function with respect to $\vec{q}$ and ${\boldsymbol \sigma}$ are difficult to obtain. The reason is that $\mathbf{Q}^{(l)}$ is sampled from the Gaussian distribution, and it is difficult to compute $\frac{\partial \mathbf{Q}^{(l)}}{\partial \vec{q}}$ and $\frac{\partial \mathbf{Q}^{(l)}}{\partial {\boldsymbol \sigma}}$. Without closed-form gradients, most existing studies adopt a two-step approach to estimate the PDOD: the first step estimates the OD demand mean and the second step estimates the standard deviation, and the two steps are conducted iteratively until convergence \citep{ma2018statistical, yang2019estimating}. 
In this paper, we propose a novel solution to estimate the mean and standard deviation simultaneously by casting the PDODE problem into computational graphs. Details will be discussed in the following section.



\section{Solution Algorithm}
\label{sec:solution}
In this section, a reparameterization trick is developed to enable the simultaneous estimation of the mean and standard deviation of the dynamic OD demand. The PDODE formulation in Formulation~\ref{eq:pdode1} is then cast into a computational graph. We then summarize the step-by-step solution framework for PDODE. 
Finally, the underdetermination issue of the PDODE problem is discussed.

\subsection{A key reparameterization trick}

To solve Formulation~\ref{eq:pdode1}, our objective is to directly evaluate the derivatives of the loss with respect to both the mean and standard deviation of OD demand, {\em i.e.}, $\frac{\partial \mathcal{L}}{\partial \mathbf{q}}$ and $\frac{\partial \mathcal{L}}{\partial {\boldsymbol \sigma}}$; then gradient descent methods can be used to search for the optimal solution.

We leave the computation of $\frac{\partial \mathcal{L}}{\partial \mathbf{q}}$ and $\frac{\partial \mathcal{L}}{\partial {\boldsymbol\sigma}}$ to the next section; this section addresses the key issue of evaluating $\frac{\partial \vec{Q}^{(l)}}{\partial \mathbf{q}}$ and $\frac{\partial \vec{Q}^{(l)}}{\partial {\boldsymbol\sigma}}$. The idea is simple and straightforward. Instead of directly sampling $\vec{Q}^{(l)}$ from $\mathcal{N}\left(\vec{q}, {\boldsymbol\sigma}^2\right)$, we conduct the following steps to generate $\vec{Q}^{(l)}$: 1) sample ${\boldsymbol\nu}^{(l)} \in \mathbb{R}^{NK}$ from $\mathcal{N}\left(\vec{0}, \mathbf{1}\right)$; 2) obtain $\vec{Q}^{(l)}$ by $\vec{Q}^{(l)} = \vec{q} + {\boldsymbol\sigma} \circ {\boldsymbol\nu}^{(l)}$, where $\circ $ represents the element-wise product. 

Through the above reparameterization trick, we can compute the derivatives $\frac{\partial \vec{Q}^{(l)}}{\partial \mathbf{q}}$ and $\frac{\partial \vec{Q}^{(l)}}{\partial {\boldsymbol\sigma}}$ by Equation~\ref{eq:odd}.
\begin{equation}
	\label{eq:odd}
	\begin{array}{llllll}
		\frac{\partial \vec{Q}^{(l)}}{\partial \mathbf{q}} &=& \vec{1}_{NK}\\
		\frac{\partial \vec{Q}^{(l)}}{\partial {\boldsymbol\sigma}} &=& {\boldsymbol\nu}^{(l)}
	\end{array}
\end{equation}
where $\vec{1}_{NK} \in \mathbb{R}^{NK}$ is a vector of ones. This reparameterization trick was originally used to train variational autoencoders (VAEs) \citep{kingma2013auto}, and we adapt it to solve the PDODE problem. 

\subsection{Reformulating PDODE through computational graphs}
With the reparameterization trick discussed in the previous section, we can reformulate the PDODE problem as Formulation~\ref{eq:pdode2}.
\begin{equation}
	\label{eq:pdode2}
	\begin{array}{rrcllll}
		\vspace{5pt}
		\displaystyle \min_{\vec{q}, {\boldsymbol \sigma}} & \multicolumn{4}{l}{\displaystyle \frac{1}{IL}\sum_{i=1}^I \sum_{l=1}^{L} \mathcal{M} \left(\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}\right)} &\\
		\textrm{s.t.} & \left \{ \vec{C}^{(l)}, \vec{T}^{(l)}, {\boldsymbol \rho}^{(l)} \right\} &= & \Lambda(\vec{F}^{(l)}) & \forall l&\\
		~ & \vec{p}^{(l)} &= & \Psi \left( \D(\vec{C}^{(l)}), \D(\vec{T}^{(l)}) \right)& \forall l &\\
		~ & {\boldsymbol\nu}^{(l)} &\sim& \mathcal{N}\left(\vec{0}, \mathbf{1}\right)& \forall l&\\
		~ & \mathbf{Q}^{(l)} & = & \vec{q} + {\boldsymbol\sigma}\circ{\boldsymbol\nu}^{(l)}& \forall l&\\
		~ & \vec{F}^{(l)} & = & \vec{p}^{(l)}\vec{Q}^{(l)} & \forall l&\\
		~ & \mathbf{X}^{(l)} & = & {\boldsymbol \rho}^{(l)} \vec{F}^{(l)} &\forall l &
	\end{array}
\end{equation}

We solve Formulation~\ref{eq:pdode2} by extending the forward-backward algorithm proposed by \citet{ma2019estimating}; before detailing the algorithm, the sketch below illustrates the reparameterized sampling step. 
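To illustrate why the reparameterization makes $\vec{q}$ and ${\boldsymbol\sigma}$ jointly estimable by gradient descent, consider the following minimal PyTorch sketch. The fixed linear map standing in for the route choice and DNL steps is a hypothetical placeholder, and the loss is the 2-Wasserstein distance between diagonal Gaussians fitted to the simulated and observed samples; it is a sketch of the idea, not the full framework:
\begin{verbatim}
import torch

torch.manual_seed(0)
n_od, n_links, L = 3, 4, 64

q = torch.tensor([100.0, 200.0, 150.0], requires_grad=True)   # mean of PDOD
sigma = torch.tensor([10.0, 20.0, 15.0], requires_grad=True)  # std of PDOD

# Hypothetical placeholder for rho @ p: a fixed linear OD-to-link-flow map.
rho_p = torch.rand(n_links, n_od)

# Reparameterization: Q = q + sigma * nu, so gradients reach q and sigma.
nu = torch.randn(L, n_od)
Q = q + sigma * nu                 # L sampled OD demand vectors
X = Q @ rho_p.T                    # simulated link flows, one row per sample

# Observed multi-day link flows (here: synthetic stand-in data).
X_obs = torch.rand(L, n_links) * 300.0

# 2-Wasserstein distance between diagonal Gaussians fitted to both samples:
# W2^2 = ||mean difference||^2 + ||std difference||^2.
loss = torch.sum((X.mean(0) - X_obs.mean(0)) ** 2) \
     + torch.sum((X.std(0) - X_obs.std(0)) ** 2)
loss.backward()
print(q.grad, sigma.grad)          # both gradients are now available
\end{verbatim}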
The forward-backward algorithm consists of two major components: 1) the forward iteration computes the objective function of Formulation~\ref{eq:pdode2}; 2) the backward iteration evaluates the gradients of the objective function with respect to the mean and standard deviation of the dynamic OD demand ($\frac{\partial \mathcal{L}}{\partial \mathbf{q}}, \frac{\partial \mathcal{L}}{\partial {\boldsymbol\sigma}}$). 

\begin{figure*}[h]
	\centering
	\includegraphics[width=0.95\linewidth]{diagram}
	\caption{The computational graph for PDODE.}
	\label{fig:fb}
\end{figure*}

{\bf Forward iteration.} In the forward iteration, we compute the objective function based on the sample distribution of the observation $\tilde{\mathbf{X}}^{(i)}$ in a decomposed manner, as presented in Equation~\ref{eq:forward}.
\begin{equation}
	\begin{array}{lllllll}
		\label{eq:forward}
		{\boldsymbol\nu}^{(l)} &\sim& \mathcal{N}\left(\vec{0}, \mathbf{1}\right)&\forall l\\
		\mathbf{Q}^{(l)} & = & \vec{q} + {\boldsymbol\sigma} \circ {\boldsymbol \nu}^{(l)}&\forall l\\
		\vec{F}^{(l)} &=& \vec{p}^{(l)} \vec{Q}^{(l)}&\forall l\\
		\mathbf{X}^{(l)} &=& {\boldsymbol \rho}^{(l)} \vec{F}^{(l)}&\forall l\\
		\mathcal{L} &=& \frac{1}{L}\sum_{l=1}^{L} \mathcal{M} \left(\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}\right)\\
	\end{array}
\end{equation}


{\bf Backward iteration.} The backward iteration evaluates the gradients of the mean and standard deviation of the PDOD through the back-propagation (BP) method, as presented in Equation~\ref{eq:backward}.
\begin{equation}
	\begin{array}{llllllll}
		\label{eq:backward}
		\frac{\partial \mathcal{L}}{\partial \mathbf{X}^{(l)}} &=& \frac{1}{L}\frac{\partial \mathcal{M} \left(\tilde{\mathbf{X}}^{(i)}, \mathbf{X}^{(l)}\right)}{\partial \mathbf{X}^{(l)}} & \forall l\\
		\vspace{5pt}
		\frac{\partial \mathcal{L}}{\partial \vec{F}^{(l)}} &=& {{\boldsymbol \rho}^{(l)}}^T \frac{\partial \mathcal{L}}{\partial \mathbf{X}^{(l)}}& \forall l \\
		\vspace{5pt}
		\frac{\partial \mathcal{L}}{\partial \vec{Q}^{(l)}} &=& {\vec{p}^{(l)}}^T \frac{\partial \mathcal{L}}{\partial \vec{F}^{(l)}}& \forall l\\
		\vspace{5pt}
		\frac{\partial \mathcal{L}}{\partial \vec{q}} &= & \displaystyle\sum_{l=1}^{L}\frac{\partial \mathcal{L}}{\partial \vec{Q}^{(l)}}& \\
		\vspace{5pt}
		\frac{\partial \mathcal{L}}{\partial {\boldsymbol\sigma}} &= & \displaystyle\sum_{l=1}^{L}{\boldsymbol\nu}^{(l)}\circ \frac{\partial \mathcal{L}}{\partial \vec{Q}^{(l)}} &
	\end{array}
\end{equation}


The forward-backward algorithm is presented in Figure~\ref{fig:fb}. The forward iteration is conducted through the solid lines, during which the temporary matrices ${\boldsymbol\nu}, \mathbf{p}, {\boldsymbol\rho}$ are also prepared (through the dot-dashed lines). 
The objective $\mathcal{L}$ is computed at the end of the forward iteration; the backward iteration is then conducted through the dashed lines, and both the OD demand mean and standard deviation are updated simultaneously.


In one iteration of the forward-backward algorithm, we first run the forward iteration to compute the objective function, and then the backward iteration is performed to evaluate the gradient of the objective function with respect to $\mathbf{q}, {\boldsymbol\sigma}$.
With the forward-backward algorithm computing the gradient of the objective function, we can solve the PDODE formulation in Formulation~\ref{eq:pdode2} through gradient-based methods. For example, the projected gradient descent method can be used to iteratively update the OD demand. This paper adopts Adagrad, a gradient-based method using adaptive step sizes \citep{duchi2011adaptive}. As for the stopping criteria, Proposition~\ref{prop:stop} indicates that the following two conditions are equivalent: 1) in the forward iteration, the distributions of path cost, link cost, path flow, and OD demand do not change; 2) in the backward iteration, $\frac{\partial \mathcal{L}}{\partial \vec{q}} = 0$ and $\frac{\partial \mathcal{L}}{\partial {\boldsymbol\sigma}} = 0$. 

\begin{proposition}[Stopping criterion]
	\label{prop:stop}
	The PDODE formulation is solved when the forward and backward iterations converge, namely the distributions of path cost, link cost, path flow, and OD demand do not change, and $\frac{\partial \mathcal{L}}{\partial \vec{q}} = 0$ and $\frac{\partial \mathcal{L}}{\partial {\boldsymbol\sigma}} = 0$.
\end{proposition}

Since $\mathcal{L} \to \mathcal{L}_0$ when $I,L\to \infty$, we claim that the PDODE problem is solved when $\frac{\partial \mathcal{L}}{\partial \vec{q}}$ and $\frac{\partial \mathcal{L}}{\partial {\boldsymbol\sigma}}$ are close to zero given a large $I$ and $L$.



\subsection{Solution Framework}
To summarize, the overall solution algorithm for PDODE is presented in Table~\ref{tab:sol}.
\begin{table}[h]
	\begin{tabular}{p{2.2cm}p{13.6cm}}
		\textbf{Algorithm}& \textbf{[\textit{PDODE-FRAMEWORK}]} \\[3ex]\hline
		\textit{Step 0} & \textit{Initialization.} Initialize the mean and standard deviation of the dynamic OD demand $\vec{q}, {\boldsymbol\sigma}$. \\[3ex]\hline
		\textit{Step 1} & \textit{Data preparation.} Randomly select a batch of observed data to form the sample distribution of $\tilde{\mathbf{X}}^{(i)}$.\\[3ex]\hline
		\textit{Step 2} & \textit{Forward iteration.} Iterate over $l=1,\cdots,L$: for each $l$, sample ${\boldsymbol\nu}^{(l)}$, solve the DNL model and travelers' behavior model, and compute the objective function $\mathcal{L}$ based on Equation~\ref{eq:forward} with $\vec{q}, {\boldsymbol\sigma}$. \\[3ex]\hline
		\textit{Step 3} & \textit{Backward iteration.} Compute the gradients of the mean and standard deviation of the dynamic OD demand using the backward iteration presented in Equation~\ref{eq:backward}. \\[3ex]\hline
		\textit{Step 4} & \textit{Update PDOD.} Update the mean and standard deviation of the dynamic OD demand ($\vec{q}, {\boldsymbol\sigma}$) with the projected gradient method.\\[3ex]\hline
		\textit{Step 5} & \textit{Batch convergence check.} Proceed to Step 6 when the changes of the OD demand mean and standard deviation are within tolerance. 
Otherwise, go to Step 2.\\[3ex]\hline
		\textit{Step 6} & \textit{Convergence check.} Iterate over $i=1,\cdots, I$. Stop when the changes of the OD demand mean and standard deviation are within tolerance across different $i$. Otherwise, go to Step 1.\\[3ex]\hline
	\end{tabular}
	\caption{The PDODE solution framework.}
	\label{tab:sol}
\end{table}

In practical applications, Steps 3 and 4 can be conducted using stochastic gradient projection methods to enhance the algorithm efficiency.
Additionally, Steps 3 and 4 can be implemented with auto-differentiation and deep learning packages, such as PyTorch, TensorFlow, and JAX, and both steps can be run efficiently on multi-core CPUs and GPUs. 



\subsection{Underdetermination and evaluation criteria}
In this section, we discuss the underdetermination issue for the PDODE problem. It is well known that both the static and dynamic OD estimation problems are under-determined \citep{yang1995heuristic,ma2018statistical}. We claim that PDODE is also under-determined because its problem dimension is much higher than that of its deterministic version: in the case of PDODE, not only the OD demand mean but also the standard deviation need to be estimated.
Therefore, estimating the exact PDOD accurately with limited observed data is challenging in most practical applications. Instead, since the objective of PDODE is to better reproduce the observed traffic conditions, we evaluate PDODE methods by measuring how well the traffic conditions can be reproduced on the observed links and on all links, respectively. Using this concept, we categorize the PDODE evaluation criteria into three levels as follows:
\begin{enumerate}[label=\roman*)]
	\item Observed Links (\texttt{OL}): The traffic conditions simulated from the estimated PDOD on the observed links are accurate. 
	\item All Links (\texttt{AL}): The traffic conditions simulated from the estimated PDOD on all the links are accurate.
	\item Dynamic OD demand (\texttt{OD}): The estimated PDOD is accurate.
\end{enumerate}

\begin{figure*}[h]
	\centering
	\includegraphics[width=0.75\linewidth]{ec}
	\caption{An overview of the evaluation criteria in PDODE.}
	\label{fig:ec}
\end{figure*}

The three evaluation criteria are summarized in Figure~\ref{fig:ec}. One can see that the objective of Formulation~\ref{eq:pdode2} is actually \texttt{OL}, and we include a series of constraints in order to achieve \texttt{AL}. Specifically, the flow conservation and route choice model help to achieve \texttt{AL}. As for \texttt{OD}, there is no guarantee on large-scale networks; many recent studies report the same observation \citep{osorio2019high, ma2019estimating, wollenstein2022joint}.

Overall, a PDODE framework that only satisfies \texttt{OL} tends to overfit the observed data. We claim that a PDODE framework that satisfies \texttt{AL} is sufficient for most needs in traffic operation and management, as the ultimate goal of PDODE is to understand the dynamic traffic conditions on the networks. To achieve \texttt{OD}, a high-quality prior PDOD matrix is necessary to reduce the search space \citep{ma2018estimating}.

From the perspective of the underdetermination issue of PDODE, \texttt{OL} is always determined as it only focuses on the observed links. 
On general networks, \texttt{OD} is an under-determined problem, as the number of links in a network is much smaller than the dimension of the dynamic OD demand. Whether \texttt{AL} is determined depends on the network topology and data availability; hence it is promising to make full use of the proposed computational graphs to achieve \texttt{AL}, as computational graphs have advantages in multi-source data integration and fast computation. This further motivates formulating the PDODE problem using computational graphs.


\section{Numerical experiments}
\label{sec:experiment}
In this section, we first examine the proposed PDODE framework on a small network. Different statistical distances are compared and the optimal one is selected. We further compare PDODE with the DDODE method, and the parameter sensitivity is discussed. In addition, the effectiveness and scalability of the PDODE framework are demonstrated on a real-world large-scale network: the SR-41 corridor. All the experiments in this section are conducted on a desktop with Intel Core i7-6700K CPU 4.00GHz $\times$ 8, 2133 MHz 2 $\times$ 16GB RAM, 500GB SSD.

\subsection{A small network}

\subsubsection{Settings}
\label{sec:setting}
We first work on a small network with 13 nodes, 27 links, and 3 OD pairs, as presented in Figure~\ref{fig:31net}. There are in total 29 paths for the 3 OD pairs ($1 \to 9$, $5 \to 9$, and $10 \to 9$). Links connecting nodes $1, 5, 9, 10$ are OD connectors, and the rest of the links are standard roads with two lanes. Triangular fundamental diagrams (FDs) are used for the standard links, in which the length of each road segment is $0.5$ miles, the flow capacity is 2,000 vehicles/hour, and the holding capacity is $200$ vehicles/mile. The free flow speed is uniformly sampled from $20$ to $45$ miles/hour.



\begin{figure}[h]
	\centering
	\includegraphics[width=0.75\linewidth]{network33}
	\caption{\footnotesize{An overview of the small network.}}
	\label{fig:31net}
\end{figure}


To evaluate the performance of the proposed method, we generate the mean and standard deviation of the PDOD using a triangular pattern, as shown in Figure~\ref{fig:dod}. The PDOD is high enough to generate congestion. The observed flow is obtained by solving the statistical traffic equilibrium, and then Gaussian noise is added to the observations. The performance of the PDODE formulation is assessed by comparing the estimated flow with the ``true'' flow (including observed link flow, all link flow, and OD demand) \citep{antoniou2015towards}. We set the study period to 10 time intervals, and each time interval lasts 100 seconds.
A Logit model with a dispersion factor of $0.1$ is applied to the mean route travel time for modeling the travelers' behaviors. 

Supposing 100 days' data are collected, on each day the dynamic OD demand is sampled from the ``true'' distribution of dynamic OD demand, and the demand is loaded onto the network with the route choice model and DNL model. We randomly select 12 links to be observed, and random Gaussian noise $\mathcal{N}(0, 5)$ is further added to the observed link flow. Our task is to estimate the mean and standard deviation of the PDOD using the observed 100 days' data.

We run the proposed PDODE framework presented in Table~\ref{tab:sol} with projected stochastic gradient descent, and the solution algorithm is Adagrad \citep{duchi2011adaptive}; a sketch of one update is given below. 
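For concreteness, one projected Adagrad update of $(\vec{q}, {\boldsymbol\sigma})$ can be sketched as follows (a minimal NumPy sketch; the learning rate and placeholder gradients are illustrative assumptions):
\begin{verbatim}
import numpy as np

def adagrad_project_step(x, grad, cache, lr=0.1, eps=1e-8):
    """One Adagrad step with projection onto the non-negative orthant,
    applied separately to the OD demand mean q and standard deviation sigma."""
    cache += grad ** 2                        # accumulate squared gradients
    x -= lr * grad / (np.sqrt(cache) + eps)   # adaptive per-coordinate step
    np.clip(x, 0.0, None, out=x)              # projection: keep q, sigma >= 0
    return x, cache

# Usage sketch; the gradients would come from the backward iteration.
q, sigma = np.full(30, 50.0), np.full(30, 5.0)
cache_q, cache_s = np.zeros_like(q), np.zeros_like(sigma)
grad_q, grad_s = np.random.randn(30), np.random.randn(30)  # placeholders
q, cache_q = adagrad_project_step(q, grad_q, cache_q)
sigma, cache_s = adagrad_project_step(sigma, grad_s, cache_s)
\end{verbatim}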
We use the loss function $\mathcal{L}$, as presented in Equation~\ref{eq:ereap}, to measure the performance of the proposed method. 
Note that we loop over all 100 days' data once in each epoch.

\subsubsection{Comparing different statistical distances}

\begin{table*}[h]
	\centering
	\begin{tabular}{|l|cc|cc|cc|}
		\hline
		\multirow{2}{*}{\backslashbox{$\mathcal{M}$}{Accuracy}} & \multicolumn{2}{c|}{\texttt{OL}} & \multicolumn{2}{c|}{\texttt{AL}} & \multicolumn{2}{c|}{\texttt{OD}} \\
		\cline {2-7}
		~ & Mean & Std & Mean & Std & Mean & Std \\
		\hline\hline
		$\ell_1$-norm & {\bf 0.968}& 0.792& {\bf 0.997}& 0.806& {\bf 0.996}& 0.804\\
		$\ell_2$-norm & 0.955 & {\bf 0.880}& 0.994& {\bf 0.897} & 0.985& {\bf 0.892} \\
		2-Wasserstein distance & {\em 0.961} & {\em 0.843} & {\em 0.996} & {\em 0.861} & {\em 0.991}& {\em 0.860} \\
		KL divergence & -0.575 & 0.027 & 0.508 & 0.062 & -0.592 & 0.027 \\
		Bhattacharyya distance& -0.726 & -0.004 & 0.460 & 0.029 & -0.748 & -0.005 \\
		\hline
	\end{tabular}
	\caption{Performance of different statistical distances in terms of R-squared score.}
	\label{tab:compare}
\end{table*}

We first compare the different statistical distances discussed in Section~\ref{sec:obj}. Under the same settings as in Section~\ref{sec:setting}, each statistical distance is used as the objective function in Formulation~\ref{eq:pdode2}, and the estimation results are presented in Table~\ref{tab:compare} in terms of the R-squared score. We use the \texttt{r2\_score} function in sklearn to compute the R-squared score, and the score can be negative (because the model can be arbitrarily worse)\footnote{\url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html}}.


One can see that neither the KL divergence nor the Bhattacharyya distance yields a proper estimation of the PDOD, which may be due to the complicated formulations of their objective functions: gradient explosion and vanishing with respect to the objective function significantly affect the estimation accuracy. The other three statistical distances achieve satisfactory accuracy. Using the $\ell_1$-norm and $\ell_2$-norm achieves the best estimation of the PDOD mean and standard deviation, respectively. Both objectives perform stably, which is probably attributable to the simple forms of their gradients. This finding is also consistent with the existing literature \citep{shao2014estimation,shao2015estimation}. The 2-Wasserstein distance achieves a balanced performance in terms of estimating both mean and standard deviation, which might be because the 2-Wasserstein distance compares the probability density functions instead of directly comparing the parameters of the two distributions. For the rest of the experiments, we choose the 2-Wasserstein distance as the objective function. 


\subsubsection{Basic estimation results}

We present the detailed estimation results using the 2-Wasserstein distance as the objective function. The estimated and ``true'' PDOD are compared in Figure~\ref{fig:dod}. One can see that the proposed PDODE framework accurately estimates the mean and standard deviation of the PDOD. 
In particular, both the surging and decreasing demand trends are captured well.


\begin{figure}[h]
	\centering
	\includegraphics[width=0.95\linewidth]{odflow}
	\caption{Comparison between the ``true'' and estimated OD demand (first row: mean; second row: standard deviation; first column: $1\to9$; second column: $5\to9$; third column: $10\to9$; unit: vehicles/100 seconds).}
	\label{fig:dod}
\end{figure}



\subsubsection{Comparing with the deterministic DDODE}


To demonstrate the advantages of the PDODE framework, we also run the standard DDODE framework proposed by \citet{ma2019estimating} using the same settings and data presented in Section~\ref{sec:setting}. Because the DDODE framework does not estimate the standard deviation, we only evaluate the estimation accuracy of the mean. The comparison is conducted by plotting the estimated OD demand mean, observed link flow, and all link flow against the ``true'' flow for both the PDODE and DDODE frameworks, as presented in Figure~\ref{fig:31comp}. An algorithm performs well when the scattered points are close to the $y=x$ line.

\begin{figure}[h]
	\centering
	\includegraphics[width=0.95\linewidth]{flow}
	\caption{Comparison between the ``true'' and estimated flow in terms of \texttt{OL}, \texttt{AL}, and \texttt{OD} (first row: the proposed PDODE framework; second row: the standard DDODE framework; unit: vehicles/100 seconds).}
	\label{fig:31comp}
\end{figure}

As can be seen from Figure~\ref{fig:31comp}, the PDODE framework better reproduces the ``true'' traffic flow. Firstly, DDODE fits the observed link flow better, as it directly optimizes the gap between the observed and estimated link flow. However, the DDODE framework tends to overfit the noisy data because it does not model the variance of the flow explicitly. PDODE provides a better estimation for the unobserved links and the OD demand through a comprehensive modeling of the flow distribution. To summarize, DDODE achieves a higher accuracy on observed links (\texttt{OL}), while PDODE outperforms DDODE in terms of \texttt{AL} and \texttt{OD}. 

To quantify the error, we compute the R-squared scores between the ``true'' and estimated flow for both PDODE and DDODE, as presented in Table~\ref{tab:sr}.

\begin{table}[h]
	\centering
	\begin{tabular}{|c|ccc|}
		\hline
		\backslashbox{Formulation}{Accuracy} & \texttt{OL} & \texttt{AL} & \texttt{OD} \\
		\hline\hline
		PDODE & 0.961 & {\bf 0.996} & {\bf 0.991} \\
		DDODE & {\bf 0.963} & 0.979 & 0.857 \\
		\hline
	\end{tabular}
	\caption{R-squared scores between the ``true'' and estimated flow for PDODE and DDODE.}
	\label{tab:sr}
\end{table}

The R-squared scores between the ``true'' and estimated flows for \texttt{OD} and \texttt{AL} are higher for PDODE than for DDODE, while the difference in the R-squared scores for the observed link flow is relatively small. This further illustrates the overfitting issue of DDODE on the observed links, and the above experiments verify the illustrative example presented in Section~\ref{sec:example}.



\subsubsection{Sensitivity analysis} 

We also conduct sensitivity analyses regarding the proposed PDODE framework.

{\bf Impact of travel time.}
If the travel time of each link on the network is also observed, the proposed PDODE framework can be extended to incorporate these data. 
To be specific, we use the travel time information to calibrate the DAR matrix using the approach presented in \\citet{ma2018estimating}.\nIt is expected that the estimation accuracy can be further improved. The comparison of estimation accuracy is presented in Table~\\ref{tab:compare2}. One can see that the inclusion of travel time data is beneficial to all the estimates (\\texttt{OL}, \\texttt{AL}, \\texttt{OD}). In particular, the estimation accuracy of the standard deviation improves significantly, by over 5\\% on all three criteria.\n\n\\begin{table*}[h]\n\t\\centering\n\t\\begin{tabular}{|c|cc|cc|cc|}\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\backslashbox{$\\mathcal{M}$}{Accuracy}} & \\multicolumn{2}{c|}{\\texttt{OL}} & \\multicolumn{2}{c|}{\\texttt{AL}} & \\multicolumn{2}{c|}{\\texttt{OD}} \\\\\n\t\t\\cline{2-7}\n\t\t~ & Mean & Std & Mean & Std & Mean & Std \\\\\n\t\t\\hline\\hline\n\t\tPDODE & 0.961 & 0.843 & 0.996 & 0.861 & 0.991& 0.860 \\\\\n\t\tPDODE + travel time & 0.997& 0.925& 0.997& 0.921& 0.998& 0.908 \\\\\n\t\t\\hline\n\t\tImprovement & +3.746\\% & +9.727\\% & +0.100\\% & +9.969\\% & +0.706\\% & +5.581\\%\\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Effects of considering travel time data.}\n\t\\label{tab:compare2}\n\\end{table*}\n\n{\\bf Adaptive gradient descent methods.}\nWe compare different adaptive gradient descent methods, and the convergence curves are shown in Figure~\\ref{fig:conv}. Note that neither conventional gradient descent (GD) nor stochastic gradient descent (SGD) converges, and their losses do not decrease, so their curves are not shown in the figure. One can see that Adagrad converges quickly within 20 epochs, and the whole 200 epochs take less than 10 minutes. Both Adam and AdaDelta also converge after 60 epochs, though their curves are not as stable as Adagrad's. This is the reason we choose Adagrad in the proposed framework.\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.85\\linewidth]{converge}\n\t\\caption{Convergence curves of different adaptive gradient descent methods.}\n\t\\label{fig:conv}\n\\end{figure}\n\nSensitivity analyses regarding the learning rates, the number of data samples, the number of CPU cores, and the noise level have also been conducted, and the results are similar to those in the previous study. For details, readers are referred to \\citet{ma2019estimating}.\n\n\\subsection{A large-scale network: SR-41 corridor}\nIn this section, we apply the proposed PDODE framework to a large-scale network. The SR-41 corridor is located in the City of Fresno, California. It consists of one major freeway and two parallel arterial roads. These roads are connected with local streets, as presented in Figure~\\ref{fig:srnet}. The network contains 1,441 nodes, 2,413 links and 7,110 OD pairs \\citep{liu2006streamlined, zhang2008developing}. We consider 6 time intervals, and each time interval lasts 15 minutes. The ``true'' dynamic OD mean is generated from $\\texttt{Unif}(0,5)$ and the standard deviation is generated from $\\texttt{Unif}(0,1)$. It is assumed that $500$ links are observed. The statistical traffic equilibrium is solved with a Logit model, and we generate $10$ days' data under the equilibrium. We run the proposed PDODE framework, and $\\mathcal{L}$ with the 2-Wasserstein distance is used to measure the efficiency of the algorithm. No historical OD demand is used, and the initial PDOD is randomly generated for the proposed solution algorithm. The convergence curve is presented in Figure~\\ref{fig:convsr}. A conceptual sketch of one optimization epoch is given below.
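\n\nThe following is a conceptual sketch of the training loop, combining the reparameterization trick with Adagrad. It is not the released implementation: the dimensions, the fixed DAR matrix \\texttt{dar}, and the observed link flows \\texttt{obs} are illustrative placeholders, and in the actual framework the DAR matrix is updated together with travelers' behaviors rather than held fixed.\n\n\\begin{verbatim}\nimport torch\n\n# Placeholder dimensions and data (illustrative only)\nn_od, n_link, n_days = 90, 30, 100\nmu = torch.rand(n_od, requires_grad=True)        # PDOD mean (learnable)\nlog_sig = torch.zeros(n_od, requires_grad=True)  # PDOD log-std (learnable)\ndar = torch.rand(n_link, n_od) * 0.1             # placeholder DAR matrix\nobs = torch.rand(n_days, n_link)                 # multi-day observed link flows\n\nopt = torch.optim.Adagrad([mu, log_sig], lr=0.1)\nfor epoch in range(200):\n    opt.zero_grad()\n    eps = torch.randn(n_days, n_od)              # reparameterization trick:\n    q = mu + torch.exp(log_sig) * eps            # q ~ N(mu, sigma^2)\n    x = q @ dar.t()                              # induced daily link flows\n    # 2-Wasserstein loss between Gaussian fits of the estimated\n    # and observed link flow distributions\n    loss = ((x.mean(0) - obs.mean(0)) ** 2\n            + (x.std(0) - obs.std(0)) ** 2).sum()\n    loss.backward()\n    opt.step()\n\\end{verbatim}\n\nBecause the sampled demand $q$ is a deterministic, differentiable function of the mean and standard deviation given $\\epsilon$, the gradients of the loss propagate to both parameters, which is what allows them to be estimated simultaneously.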
\n\n\\begin{figure}[h]\n\t\\centering\n\t\\begin{subfigure}[b]{0.475\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{network_sr41}\n\t\t\\caption{\\footnotesize{Overview of the SR41 corridor.}}\n\t\t\\label{fig:srnet}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.475\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{sr41conv}\n\t\t\\caption{\\footnotesize{Convergence curve of the proposed PDODE framework.}}\n\t\t\\label{fig:convsr}\n\t\\end{subfigure}\n\t\\caption{Network overview and the algorithm convergence curve for the SR41 corridor.}\n\t\\label{fig:srover}\n\\end{figure}\n\nOne can see that the proposed framework performs well and the objective function converges quickly. Each epoch takes 37 minutes, and hence the algorithm takes 3,700 minutes ($\\sim$ 62 hours) to finish 100 epochs.\n\nAs discussed in previous sections, we do not compare the OD demand because the estimation of OD demand is under-determined, and it is challenging to fully recover the exact dynamic OD demand on such a large-scale network without historical OD demand, analogous to DDODE or static ODE in the literature. Instead, we focus on \\texttt{OL} and \\texttt{AL} to assess the performance of the proposed PDODE framework.\n\n\nWe plot the ``true'' and estimated mean link flows on the observed links and on all links in Figure~\\ref{fig:srcomp}. \nOne can see that PDODE reproduces the flow on the observed links accurately, while the accuracy on all links is relatively lower. This observation differs from that on the small network, which implies extra difficulties and potential challenges in estimating extremely high-dimensional dynamic network flows on large-scale networks. Quantitatively, the R-squared score is 0.949 on the observed links and 0.851 on all links. Both R-squared scores are satisfactory, and the R-squared score for \\texttt{AL} is higher than that estimated by the DDODE framework ($0.823$). Hence we conclude that the proposed PDODE framework performs well on the large-scale network.\n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{sr41compare}\n\t\\caption{Comparison between the ``true'' and estimated mean of link flow in the PDODE framework (unit: vehicle\/15minutes).}\n\t\\label{fig:srcomp}\n\\end{figure}\n\n\n\n\\section{Conclusions}\n\\label{sec:con}\nThis paper presents a data-driven framework for the probabilistic dynamic origin-destination demand estimation (PDODE) problem. The PDODE problem is rigorously formulated on general networks. \nDifferent statistical distances ({\\em e.g.}, $\\ell_p$-norm, Wasserstein distance, KL divergence, Bhattacharyya distance) are tested as the objective function. All the variables involved in the PDODE formulation are vectorized, and the proposed framework is cast into a computational graph.\nBoth the mean and the standard deviation of the PDOD can be simultaneously estimated through a novel reparameterization trick. The underdetermination issues of the PDODE problem are also discussed, and three different evaluation criteria (\\texttt{OL}, \\texttt{AL}, \\texttt{OD}) are presented.\n\n\nThe proposed PDODE framework is examined on a small network as well as a real-world\nlarge-scale network. The loss function decreases quickly on both networks and the time consumption is satisfactory. The $\\ell_1$ and $\\ell_2$ norms have advantages in estimating the mean and standard deviation of dynamic OD demand, respectively, and the 2-Wasserstein distance achieves a balanced accuracy in estimating both the mean and the standard deviation. 
We also compare the DDODE framework with the proposed PDODE framework. The experimental results show that the DDODE framework tends to overfit on \\texttt{OL}, while PDODE achieves better estimation on \\texttt{AL} and \\texttt{OD}.\n\nIn the near future, we will extend the existing PDODE formulation to estimate the spatio-temporal covariance of the dynamic OD demand. The covariance (correlation) of dynamic OD demand can further help public agencies to better understand the intercorrelation of network dynamics and further improve the effectiveness of operation\/management strategies. Low-rank or sparsity regularization for the covariance matrix of the PDOD might be necessary. The choice of statistical distances could also be better justified through theoretical derivations. The computational graph has great potential in incorporating multi-source data \\citep{ma2019estimating}, and it is interesting to explore the possibility of estimating the PDOD using emerging data sources, such as vehicle trajectory data \\citep{ma2019measuring} and automatic vehicle identification (AVI) data \\citep{cao2021day}. The under-determination issue remains a critical challenge for the OD estimation problem (including both DDODE and PDODE), and this study demonstrates the possibility of mitigating the overfitting issue by considering the standard deviation. We believe this sheds light on overcoming the under-determination issues in general OD estimation problems. \n\n\\section*{Supplementary Materials}\nThe proposed PDODE framework is implemented with PyTorch and open-sourced on Github (\\url{https:\/\/github.com\/Lemma1\/Probabilistic-OD-Estimation}).\n\n\n\\ACKNOWLEDGMENT{%\nThe work described in this paper was supported by U.S. National Science Foundation CMMI-1751448. The first author was supported by the National Natural Science Foundation of China (No. 52102385) and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. PolyU\/25209221). The contents of this paper reflect the views of the authors, who are responsible for the facts and the accuracy of the information presented herein. \n}\n\n\n\n\n\n\\clearpage\n\n\\bibliographystyle{informs2014trsc}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section{Introduction}\nRandom object generation is a broad topic, since the word ``object'' has many connotations in mathematics and applied probability. For example, ``object'' could refer to a matrix or a polynomial. Indeed, observed data are random objects; for instance, a vector of observables in a regression context transparently satisfies the idea of a probabilistic \"object\" \\citep{Leemis:2006}. Of late, a class of random object models is growing in popularity, namely Latent Factor (or Feature) Models, abbreviated LFM. The theory and use of these models lie at the intersection of probability theory, Bayesian inference, and simulation methods, particularly Markov chain Monte Carlo (MCMC). Saving the formal description of LFMs for future sections, consider the following heuristics of certain key ideas central to the paper.\n\nLatent variables are unobserved, or are not directly measurable. Parenting skill, speech impediments, socio-economic status, and quality of life are some examples. A latent variable could also correspond to a ``true'' variable observed with error. Examples would include iron intake measured by a food frequency questionnaire, self-reported weight, and lung capacity measured by forced expiratory volume in one second. 
In Bayesian hierarchical modeling, latent variables are often used to represent unobserved properties or hidden causes of the data being modeled \\citep{Bishop:1998}. Often, these variables have a natural interpretation in terms of certain underlying but unobserved features of the data; as examples, thematic topics in a document or motifs in an image. The simplest of such models, which we will refer to as Latent Variable Models (LVMs), typically use a finite number of latent variables, with each datum related to a single latent variable \\citep{Bishop:1998,McLaughlan:2000}. This class of models includes finite mixture models, where a datum is associated with a single latent mixture component, and Hidden Markov Models (HMMs), where each point in a time series is associated with a single latent state \\citep{Baum:Petrie:1996}. All data associated with a given latent parameter are assumed to be independently and identically simulated according to a distribution parametrized by that latent parameter.\n\nGreater flexibility can be obtained by allowing multiple latent features for each datum. This allows different aspects of a datum to be shared with different subsets of the dataset. For example, two articles may share the theme ``science'', but the second article may also exhibit the theme ``finance''. Similarly, a picture of a dog in front of a tree has aspects in common with both pictures of trees and pictures of dogs. Models that allow multiple features are typically referred to as Latent Factor Models (LFMs). Examples of LFMs include Bayesian Principal Component Analysis, where data are represented using a weighted superposition of latent factors, and Latent Dirichlet Allocation, where data are represented using a mixture of latent factors; see \\citet{Roweis:Ghahramani:1999} for a review of both LVMs and LFMs.\n\nIn the majority of LVMs and LFMs, the number of latent variables is finite and pre-specified. The appropriate cardinality is often hard to determine \\emph{a priori} and, in many cases, we do not expect our training set to contain exemplars of all possible latent variables. These difficulties have led to the increasing popularity of LVMs and LFMs where the number of latent variables associated with each datum or object is potentially unbounded; see \\citep{Antoniak:1974,Teh:Jordan:Beal:Blei:2006, Griffiths:Ghahramani:2005,Titsias:2007,Broderick:Mackay:Paisley:Jordan:2015}. These latter probabilistic models with an infinite number of parameters are referred to as nonparametric latent variable models (npLVMs) and nonparametric latent factor models (npLFMs). These models generally tend to provide richer inferences than their finite-dimensional counterparts, since deeper relationships between the unobserved variables and the observed data could be obtained by relaxing finite distributional assumptions about the probability generating mechanism.\n\nIn many applications, data are assumed \\emph{exchangeable}, in that no information is conveyed by the order in which data are observed. Even though exchangeability is a weaker (hence preferable) assumption than independent and identically distributed data, observed data are often time-stamped emissions from some evolving process. That is, the ordering (or dependency) is crucial to understanding the entire random data-generating mechanism. There are two types of dependent data that typically arise in practice. It is convenient to use terminology from the biomedical literature to distinguish the two. 
\\emph{Longitudinal dependency} refers to situations where one records multiple entries from the same random process over a period of time. In AIDS research, a biomarker such as a CD4 lymphocyte cell count is observed intermittently for a patient and its relation to time of death is of interest. In a different context, the ordering of frames in a video sequence or the ordering of windowed audio spectra in a piece of music within a time interval is crucial to our understanding of the entire video or musical piece. \n\n\\emph{Epidemiological dependency} corresponds to situations where our data generating mechanism involves multiple random processes, but where we typically observe each single process at only one covariate value; that is, single records from multiple entities constitute the observed data. For instance, in an annual survey on diabetic indicators one might interview a different group of people each year; the observations correspond to different random processes (i.e. different individuals), but still capture global trends. Or consider articles published in a newspaper: while the articles published today are distinct from those published yesterday, there are likely to be general trends in the themes covered over time. \n\nMost research using npLFMs has focused on the exchangeable setting, with non-dependent nonparametric LFMs being deployed in a number of application areas \\citep{Wood:Griffiths:Ghahramani:2006,Ruiz:Valera:Blanco:PerezCruz:2014,Meeds:Ghahramani:Neal:Roweis:2007}. A number of papers have developed npLFMs for epidemiological dependence \\citep{Foti:Futoma:Rockmore:Williamson:2013,Ren:Wang:Carin:Dunson:2011,Zhou:Yang:Sapiro:Dunson:Carin:2011,Rao:Teh:2009}. In these settings we are often able to make use of conjugacy to develop reasonably efficient stochastic simulation schemes. In addition, several nonparametric priors for LFMs have been proposed for longitudinally dependent data~\\citep{Williamson:Orbanz:Ghahramani:2010,Gershman:Frazier:Blei:2015}, but unfortunately these papers, by virtue of their modeling approaches, require computationally complex inference protocols. Furthermore, these existing epidemiologically and longitudinally dependent methods are often invariant under time reversal. This is a poor choice for modeling temporal dynamics, where the direction of causality means that the dynamics are not invariant under time reversal. \n\n\nIn this paper, we introduce a new class of npLFMs that is suitable for time (or longitudinally) dependent data. From a modeling perspective, the focus is on npLFMs rather than npLVMs since the separability assumptions underlying LVMs are overly restrictive for most real data. Specifically, we follow the tradition of generative or simulation-based npLFMs. A Bayesian approach is natural in this framework since the form of npLFMs needed to better model temporal dependency involves the use of probability distributions on function spaces; the latter idea is commonly referred to as Bayesian nonparametric inference \\citep{Walker:Damien:Laud:Smith:1999}.\n\nThe primary aims of this research are the following. First, to develop a class of npLFMs with practically useful attributes to generate random objects in a variety of applications. These attributes include an unbounded number of latent factors; capturing temporal dynamics in the data; and the tracking of \\emph{persistent} factors over time. The significance of this class of models is best described with a simple, yet meaningful, example. 
Consider a flautist playing a musical piece. At very short time intervals, if the flautist is playing a B$\\flat$ at time $t$, it is likely that note would still be playing at time $t+1$. Arguably, this is a continuation of a single note instance that begins at time $t$ and persists to time $t+1$ (or beyond). Unlike current approaches, our proposed time-dynamic model captures this (persistent latent factor) dependency in the musical notes from time $t$ to $t+1$ (or beyond). The second goal of this research is to develop a general Markov chain Monte Carlo algorithm to enable full Bayesian implementation of the new npLFM family. Finally, applications of time-dependent npLFMs are shown via simulated and real data analysis. \n\nIn Section~\\ref{sec:bg}, finite and nonparametric LFMs are described. Section~\\ref{sec:ibp} discusses the Indian Buffet Process that forms the kernel for the new class of npLFMs introduced in Section~\\ref{sec:model_long}. Section~\\ref{sec:inf} details the inference methods used to implement the models in Section~\\ref{sec:model_long}, followed by synthetic and real data illustrations in Section~\\ref{sec:results}. A brief discussion in Section~\\ref{sec:conc} concludes the paper.\n\n\n\\section{Latent Factor Models}\\label{sec:bg}\nA Latent Variable Model (LVM) posits that the variation within a dataset of size $N$ could be described using some set of $K$ features, with each observation associated with a single parameter. As an example, consider a mixture of $K$ Gaussian distributions where each datum belongs to one of the $K$ mixture components parametrized by different means and variances. These parameters, along with the cluster allocations, comprise the latent variables. In alternative settings, the number of features may be infinite; however, since each data point is associated with a single feature, the number of features required to describe the dataset will always be upper bounded by $N$.\n\n\nWhile mixture models are widely used for representing the latent structure of a dataset, there are many practical applications where the observed data exhibit multiple underlying features. For example, in image modeling we may have two pictures, one of a dog beside a tree, and one of a dog beside a bicycle. If we assign both images to a single cluster, we ignore the difference between tree and bicycle. If we assign them to different clusters, we ignore the commonality of the dogs. In these situations, LVMs should allow each datum to be associated with multiple latent variables.\n\nIf each datum can be subdivided into a collection of discrete observations, one approach is to use an admixture model, such as latent Dirichlet allocation \\citep{Blei:Ng:Jordan:2003} or a hierarchical Dirichlet process \\citep{Teh:Jordan:Beal:Blei:2006}. Such approaches model the constituent observations of a data point using a mixture model, allowing a data point to express multiple features. For example, if a datum is a text document, the constituent observations might be words, each of which can be associated with a separate latent variable.\n\nIf it is not natural to split a data point into constituent parts---for example, if a data point is described in terms of a single vector---then we can construct models that directly associate each data point with multiple latent variables. This extension of LVMs is typically referred to as Latent Feature Models or Latent Factor Models (LFMs). 
For clarity, throughout this paper, LVM refers exclusively to models where each datum is associated with a single latent parameter, and LFM refers to models where each datum is associated with multiple latent parameters.\n\nA classic example of an LFM is Factor Analysis \\citep{Cattell:1952}, wherein one assumes $K$ $D$-dimensional latent features (or factors) $f_k$ which are typically represented as a $K\\times D$ matrix $F$. Each datum, $x_n$, is associated with a vector of weights, $\\lambda_n$, known as the factor loading, which determines the degree to which the datum exhibits each factor. Letting $X = (x_n)_{n=1}^N$ be the $N\\times D$ data matrix and $L = (\\lambda_n)_{n=1}^N$ be the $N\\times K$ factor loading matrix, we can write $ X = L F + \\mathbf{e}$, \nwhere $\\mathbf{e}$ is a matrix of random noise terms. Factor Analysis can be cast in a Bayesian framework by placing appropriate priors on the factors, loadings and noise terms \\citep{Press:Shigemasu:1989}. Such analyses are used in many contexts; examples include microarray data \\citep{Hochreiter:Clevert:Obermayer:2006}, dietary patterns \\citep{Venkaiah:Brahmam:Vijayaraghavan:2011}, and psychological test responses \\citep{Tait:1986}. Independent Component Analysis \\citep[ICA,][]{Hyvarinen:Karhunen:Oja:2001} is a related model with independent non-Gaussian factors; ICA is commonly used in blind source separation of audio data.\n\nA serious disadvantage of LFMs such as Factor Analysis and ICA is that they assume a fixed, finite number of latent factors. In many settings, such an assumption is hard to justify. Even with a fixed, finite dataset, picking an appropriate number of factors, \\emph{a priori}, requires expensive cross-validation. In an online setting, where the dataset is constantly growing, it may be unreasonable to consider any finite upper bound. As illustrations, the number of topics that may appear in a newspaper, or the number of image features that may appear in an online image database, could grow unboundedly over time. One way of obviating this difficulty is to allow an infinite number of latent features \\emph{a priori}, and to ensure that every datum exhibits only a finite number of features, wherein popular features tend to get reused. Such a construction would allow the number of exhibited features to grow in an unbounded manner as the sample size grows, while still borrowing (statistical) strength from repeated features.\n\nThe transition from finite to infinite-dimensional latent factors implies that the probability distributions on these factors in the generative process would now be elements in some function space; i.e., we enter the realm of Bayesian nonparametric inference. There is a vast literature on Bayesian nonparametric models; the classic references are \\citet{Ferguson:1973} and \\citet{Lo:1984}. Since the Indian buffet process is central to this paper, it is discussed in the following subsection. \n\n\n\n\\subsection{The Indian Buffet Process (IBP)}\\label{sec:ibp}\n\n\nA new class of nonparametric distributions of particular relevance to LFMs was developed by \\citet{Griffiths:Ghahramani:2005}, who labeled their stochastic process prior the Indian Buffet Process (IBP). This prior adopts a Bayesian nonparametric inference approach to the generative process of an LFM, where the goal of unsupervised learning is to discover the latent variables responsible for generating the observed properties of a set of objects.\n\nThe IBP provides a mechanism for selecting overlapping sets of features. 
This mechanism can be broken down into two components: a global random sequence of feature probabilities that assigns probabilities to infinitely many features, and a local random process that selects a finite subset of these features for each datum. \n\nThe global sequence of feature probabilities is distributed according to a stochastic process known as the beta process \\citep{Hjort:1990,Thibaux:Jordan:2007}. Loosely speaking, the beta process is a random measure, $B = \\sum_{k=1}^\\infty \\mu_k \\delta_{\\theta_k}$, that assigns finite mass to a countably infinite number of locations; these atomic masses $\\mu_k$ are independent, and are distributed according to the infinitesimal limit of a beta distribution. The locations, $\\theta_k$, of the atoms parametrize an infinite sequence of latent features.\n\nThe subset selection mechanism is a stochastic process known as the Bernoulli process \\citep{Thibaux:Jordan:2007}. This process samples a random measure $\\zeta_n = \\sum_{k=1}^\\infty z_{nk} \\delta_{\\theta_k}$, where each $z_{nk}\\in \\{0,1\\}$ indicates the presence or absence of the $k$th feature $\\theta_k$ in the latent representation of the $n$th datum; the $z_{nk}$ are sampled independently as $z_{nk}\\sim \\mbox{Bernoulli}(\\mu_k)$. We can use these random measures $\\zeta_n$ to construct a binary feature allocation matrix $Z$ by ordering the features according to their popularity and aligning the corresponding ordered vector of indicators. This matrix will have a finite but unbounded number of columns with at least one non-zero entry; the re-ordering allows us to store the non-zero portion of the matrix in memory. It is often convenient to work directly with this random, binary matrix, and doing so offers certain insights into the properties of the IBP. This representation depicts the IBP as a (stochastic process) prior probability distribution over equivalence classes of binary matrices with a specified number of rows and a random, unbounded number of non-zero columns that grows in expectation as the amount of data increases. \n\nConsider a mathematical representation of the above discussion. Let $Z$ denote a random, binary matrix with $N$ rows and infinitely many columns, $K$ of which contain at least one non-zero entry. Then, following \\citet{Griffiths:Ghahramani:2005}, the IBP prior distribution for $Z$ is given by\n\\begin{equation}\np(Z) = \\frac{\\alpha^K}{\\prod_{h=1}^{2^N-1}K_h!}\\exp\\{-\\alpha H_N\\}\\prod_{k=1}^K\\frac{(N-m_k)!(m_k-1)!}{N!}\n\\label{eqn:ibpjoint}\n\\end{equation}\nwhere $m_k = \\sum_{n=1}^N z_{nk}$ is the number of times we have seen feature $k$; $K_h$ is the number of columns whose binary pattern encodes the number $h$ when read as a binary number; $H_N$ is the $N$th harmonic number; and $\\alpha>0$ is the parameter of the process. Succinctly, Equation~\\ref{eqn:ibpjoint} is stated as $Z \\sim \\mbox{IBP}(\\alpha)$; that is, $Z$ has an IBP distribution with parameter $\\alpha$.\n\nWhat is the meaning of $\\alpha$? Perhaps the most intuitive way to understand the answer to this question is to recast $p(Z)$ in Equation~\\ref{eqn:ibpjoint} through the metaphor of an Indian buffet restaurant serving an infinite number of dishes. Customers (observations) sequentially enter this restaurant, and select a subset of the dishes (features). The first customer takes $\\mbox{Poisson}(\\alpha)$ dishes. 
The $n$th customer selects each previously sampled dish with probability $m_k\/n$, where $m_k$ is the number of customers who have previously selected that dish -- i.e. she chooses dishes in proportion to their popularity. In addition, she samples a $\\mbox{Poisson}(\\alpha\/n)$ number of previously untried dishes. This process continues until all $N$ customers visit the buffet. Now, represent the outcome of this buffet process in a binary matrix $Z$ where the rows of the matrix are customers and the columns are the dishes. The element $z_{n,k}$ is 1 if observation $n$ possesses feature $k$. Then, after some algebra, it follows that the probability distribution over the random, binary matrix $Z$ (up to a reordering of the columns) induced by this buffet process is invariant to the order in which customers arrived at the buffet, and is the expression given in Equation~\\ref{eqn:ibpjoint}.\n\nThe meaning of $\\alpha$ is now clear. The smaller the $\\alpha$, the lower the number of features with $\\sum_n z_{nk} > 0$, and the lower the average number of features per data point, with the number of features per data point distributed (marginally) as $\\mbox{Poisson}(\\alpha)$. Thus, when the IBP is used in the generative process of an LFM, the total number of features exhibited by $N$ data points will be finite, but random, and this number will grow in expectation with the number of data points. This subset selection procedure behaves in a ``rich-get-richer'' manner---if a dish has been selected by previous customers, it is likely to be selected by new arrivals to the buffet. More generally, a feature that appears frequently in previously observed data points is likely to continue to appear in subsequent observations as well; a short simulation of this buffet process is given next.
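\n\nAs an aid to intuition, the following is a minimal simulation of the buffet metaphor just described, generating a draw $Z \\sim \\mbox{IBP}(\\alpha)$. It is a sketch for exposition only, not one of the samplers used for inference in Section~\\ref{sec:inf}.\n\n\\begin{verbatim}\nimport numpy as np\n\n# Simulate Z ~ IBP(alpha) via the Indian buffet metaphor.\ndef sample_ibp(N, alpha, seed=0):\n    rng = np.random.default_rng(seed)\n    counts = []              # m_k: customers who have tried dish k\n    rows = []                # dishes chosen by each customer\n    for n in range(1, N + 1):\n        # previously sampled dishes are chosen w.p. m_k \/ n\n        chosen = [k for k, m in enumerate(counts)\n                  if rng.random() < m \/ n]\n        for k in chosen:\n            counts[k] += 1\n        # plus a Poisson(alpha \/ n) number of new dishes\n        n_new = rng.poisson(alpha \/ n)\n        chosen += range(len(counts), len(counts) + n_new)\n        counts += [1] * n_new\n        rows.append(chosen)\n    Z = np.zeros((N, len(counts)), dtype=int)\n    for n, dishes in enumerate(rows):\n        Z[n, dishes] = 1     # binary feature allocation matrix\n    return Z\n\\end{verbatim}\n\nColumns of the returned matrix are ordered by first appearance; the equivalence classes underlying Equation~\\ref{eqn:ibpjoint} are defined precisely up to such column reorderings.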
\n\nWe could use the IBP as the basis for an LFM by specifying a prior on the latent factors (henceforward denoted by a $ K \\times D $ matrix $A$), as well as a likelihood model for generating observations, as shown in the following examples. If the data are real-valued vectors, an appropriate choice for the likelihood model could be a weighted superposition of Gaussian features:\n\\begin{equation}\n\\begin{aligned}\nZ = (Z_n)_{n=1}^N \\sim& \\mbox{IBP}(\\alpha) & &\\\\\ny_{nk} \\sim& f &&\\\\\nA_k \\sim& \\mbox{Normal}(0, \\sigma_A^2I), &k&=1,2,\\dots \\\\\nX_n \\sim& \\mbox{Normal}\\left(((Z\\circ Y)A)_n, \\sigma_X^2I\\right), &n&=1,\\dots, N.\\label{eqn:lingauss}\n\\end{aligned}\n\\end{equation}\n\nHere, $Y$ is the $N\\times \\infty$ matrix with elements $y_{nk}$; $A$ is the $\\infty \\times D$ matrix with rows $A_k$; $\\circ$ is the Hadamard product; and $f$ is a distribution over the weights for a given feature instance. Note that, while we are working with infinite-dimensional matrices, the number of non-zero columns of $Z$ is finite almost surely, so we only need to represent finitely many columns of $Y$ and rows of $A$. If $f=\\delta_{1}$, we have a model where features are either included or not in a data point, and where a feature is the same each time it appears; this straightforward model was proposed by \\citet{Griffiths:Ghahramani:2005}, but is somewhat inflexible for real-life modeling scenarios.\n\nLetting $f=\\mbox{Normal}(\\mu_f, \\sigma_f^2)$ gives Gaussian weights, yielding a nonparametric variant of Factor Analysis \\citep{Knowles:Ghahramani:2007,Teh:Gorur:Ghahramani:2007}. This approach is useful in modeling psychometric test data, or analyzing marketing survey data. \n\nLetting $f=\\mbox{Laplace}(\\mu_f, b_f)$ results in a heavier-tailed distribution over feature weights, yielding a nonparametric version of Independent Component Analysis \\citep{Knowles:Ghahramani:2007}. This allows one to perform blind source separation where the number of sources is unknown, making it a potentially useful tool in signal processing applications.\n\nOften, one encounters binary-valued data: for example, an indicator vector corresponding to disease symptoms (where a 1 indicates the patient exhibits that symptom), or purchasing patterns (where a 1 indicates that a consumer has purchased that product). In these cases, a weighted superposition model is not directly applicable, but it may be reasonable to believe there are multiple latent causes influencing whether an element is turned on or not. One option in such cases is to use the IBP with a likelihood model \\citep{Wood:Griffiths:Ghahramani:2006} where observations are generated according to:\n\\begin{equation*}\n\\begin{split}\nZ = (Z_n)_{n=1}^N \\sim& \\mbox{IBP}(\\alpha)\\\\\ny_{dk} \\sim& \\mbox{Bernoulli}(p)\\\\\nP(x_{nd}=1|Z,Y) =& 1-(1-\\lambda)^{Z_nY_d^T}(1-\\epsilon),\n\\end{split}\n\\end{equation*}\nwhere $Y$ is the $D\\times \\infty$ matrix with elements $y_{dk}$; $Z_i$ and $Y_i$ are the $i$th rows of $Z$ and $Y$, respectively; $\\lambda$ is the probability that a single active latent cause turns an element on; and $\\epsilon$ is a baseline noise probability. \n\nThe above illustrations exemplify the value of IBP priors in LFMs. While these illustrations cover a vast range of applied problems, there are limitations. Notable among them is that the above LFMs do not encapsulate time dynamics. The aim of this paper is to develop a new family of IBP-based LFMs that obviates this crucial shortcoming. Additionally, unlike the afore-described models, the new class also allows one to capture repeated occurrences of a feature through time; i.e., \\emph{persistence} of latent factors. (Recall from the Introduction the example of a flautist's musical note persisting in successive time intervals.) \n\n\\subsection{Temporal Dynamics in npLFMs}\\label{sec:dynamic_bg}\n\nThe IBP, like its finite-dimensional analogues, assumes that the data are exchangeable. In practice, this could be a restrictive assumption. In many applications, the data exhibit either longitudinal (time) dependence or epidemiological dependence. Since the latter form of dependency is not the focus of this paper, we no longer consider it in the ensuing discussions. Some important references for this latter type of dependency include \\citet{Ren:Wang:Carin:Dunson:2011}, \\citet{Foti:Futoma:Rockmore:Williamson:2013}, and \\citet{Zhou:Yang:Sapiro:Dunson:Carin:2011}.\n\nLongitudinal dependence considers the case where each datum corresponds to an instantiation of a single evolving entity at different points in time. For example, data might correspond to timepoints in an audio recording, or measurements from a single patient over time. Mathematically, this means we would like to capture continuity of latent features. This setting has been considered less frequently in the literature. The Dependent Indian Buffet Process \\citep[DIBP,][]{Williamson:Orbanz:Ghahramani:2010} captures longitudinal dependence by modeling the occurrence of a given feature with a transformed Gaussian process. This allows for a great deal of flexibility in the form of dependence but comes at high computational cost: inference in each Gaussian process scales cubically with the number of time steps, and we must use a separate Gaussian process for each feature. 
\n\nAnother model for longitudinal dependence is the Distance-Dependent Indian Buffet Process (DDIBP) of \\citet{Gershman:Frazier:Blei:2015}. In this model, features are allocated using a variant of the IBP metaphor, wherein each pair of data points is associated with some distance measure. The probability of two data points sharing a feature depends on the distance between them. With an appropriate choice of distance measure, this model could prove useful for time-dependent data.\n\nAn alternative approach is provided by IBP-based hidden Markov models. For example, the Markov IBP \\citep{VanGael:Teh:Ghahramani:2009} extends the IBP such that rows are indexed by time and the presence or absence of a feature at time $t$ depends only on which features were present at time $t-1$. This model is extended further in the Infinite Factorial Unbounded State Hidden Markov Model \\citep[IFUHMM,][]{Valera:Ruiz:Perez:2016} and the Infinite Factorial Dynamical Model \\citep[IFDM,][]{Valera:Ruiz:Lennart:PerezCruz2015}. These related models combine two hidden Markov models: one controls which features are present, and is modeled using the Markov IBP; the other controls the expression of each feature. At different time points, a single feature can have multiple expressions. During a contiguous time period where feature $k$ is present, it moves between these expressions using Markovian dynamics. While this increases the models' flexibility, it comes at a cost of interpretability. Unlike the DIBP, the DDIBP, and the model proposed in this paper, the IFUHMM and IFDM do not impose any similarity requirements on the expressions of a given feature and can therefore use a single feature to capture two very different effects, provided they never occur simultaneously. \n\n\nAnother dynamic model built on npLFM machinery, though not itself a dynamic latent factor model, is the beta process autoregressive hidden Markov model \\citep[BP-AR-HMM,][]{Fox:Sudderth:Jordan:Willsky:2009,Fox:Hughes:Sudderth:Jordan:2014}. In this model, an IBP is used to couple multiple time series in a vector autoregressive model. The IBP is used to control the weights assigned to the lagged components; these weights are stationary over time.\n\nIn addition to the longitudinally dependent variants of npLFMs mentioned here, there also exist a large number of temporally dependent npLVMs. In particular, dependent Dirichlet processes \\citep[e.g.,][]{Maceachern:2000,Caron:Davy:Doucet:2017,Lin:Grimson:Fisher:2010,Griffin:2011} extend the Dirichlet process to allow temporal dynamics, allowing for time-dependent clustering models. Hidden Markov models based on the hierarchical Dirichlet process \\citep{Fox:Sudderth:Jordan:Willsky:2008,Fox:Sudderth:Jordan:Willsky:2011,Zhang:Guletkin:Paislet:2016} allow the latent variable associated with an observation to evolve in a Markovian manner. We do not discuss these methods in depth here, since they assume a single latent variable at each time point.
The model we propose in Section~\\ref{sec:model_long} falls into this class of longitudinally dependent LFMs. Unlike the DIBP and DDIBP, our model explicitly models feature persistence. Unlike all the models described above, our model allows multiple instances of a feature to appear at once. This is appropriate in many contexts; for instance, in music analysis, where each note has an explicit duration and two musicians could play the same note simultaneously. Importantly, the proposed nonparametric LFM leaves the underlying IBP mechanism intact, leading to more straightforward inference procedures when compared to the DIBP and DDIBP.\n\n\n\n\n\n\\section{A New Family of npLFMs for Time-Dependent Data}\\label{sec:model_long}\n\nExisting temporally dependent versions of the IBP~\\citep{Williamson:Orbanz:Ghahramani:2010,Gershman:Frazier:Blei:2015} rely on explicitly or implicitly varying the underlying latent feature probabilities---a difficult task---and inference tends to be computationally complex.\n\nOur proposed method obviates these limitations. In a nutshell, unlike existing dependent npLFMs, we build our model on top of a single IBP, as described in Section~\\ref{sec:ibp}. Temporal dependence is encapsulated via a \\textit{likelihood model}. The value of our approach is best understood via some simple examples. 
Consider audio data. A common approach to modeling audio data is to view them as superpositions of multiple sources; for example, individual speakers or different instruments. The IBP has previously been used in these types of applications \\citep{Knowles:Ghahramani:2007,DoshiVelez:2009}. However, these approaches ignore the \\emph{temporal dynamics} present in most audio data. Recall the flautist example: at very short time intervals, if a flautist is playing a B$\\flat$ at time $t$, it is likely that note could still be playing at time $t+k$, $k=1,2,\\dots$. Our proposed model captures this dependency in the musical notes. In Section~\\ref{sec:results}, using real data, we show the benefit of incorporating this dynamic, temporal \\emph{feature persistence} and contrast it with the static IBP, the DIBP, the DDIBP, and the IFDM.\n\nAs noted in the Abstract, another illustration is the modeling of sensor outputs over time. Sensors record responses to a variety of external events: for example, in a building we may have sensors recording temperature, humidity, power consumption and noise levels. These are all altered by events happening in the building---the presence of individuals; the turning on and off of electrical devices; and so on. Latent factors influencing the sensor output are typically present for a contiguous period of time before disappearing; besides, multiple factors could be present at a time. Thus, for instance, our model should capture the effect on power consumption due to an air conditioning unit being turned on from 9am to 5pm, which could be subject to latent disturbances during that time interval such as voltage fluctuations.\n\nConsider a third illustration involving medical signals such as EEG or ECG data. Here, we could identify latent factors causing unexpected patterns in the data, as well as infer the duration of their influence. As in previous examples, we expect such factors to contribute for a contiguous period of time: for instance, a release of stress hormones would affect all time periods until the levels decrease below a threshold. Note that the temporal variation in all three illustrations above cannot be accurately captured with epidemiologically dependent factor models, where the probability of a factor varies smoothly over time but the actual presence or absence of that feature is sampled independently given the appropriate probabilities. This approach would lead to noisy data where a feature randomly oscillates between on and off.\n\nUnder the linear Gaussian likelihood model described in Equation~\\ref{eqn:lingauss}, conditioned on the latent factors $A_k$, the $n$th datum is characterized entirely by the $n$th row of the IBP-distributed matrix $Z$, thereby ensuring that the data, like the rows of $Z$, are exchangeable. In the following, the key point of departure from the npLFMs described earlier is this: we now let the $n$th datum depend not only on the $n$th row of $Z$, but also on the $n-1$ preceding rows, thus breaking the exchangeability of the $X_n$ data sequence. This is the mathematical equivalent of dependency in the data that we now formalize.\n\nAssociate each non-zero element $z_{nk}$ of $Z$ with a geometrically distributed ``lifetime'', namely $\\ell_{nk} \\sim \\mbox{Geometric}(\\rho_k)$. An instance of the $k$th latent factor is then incorporated from the $n$th to the $(n+\\ell_{nk}-1)$th datum. The $n$th datum is therefore associated with the set $\\mathcal{Y}_n = \\{(i,j): z_{ij}=1, i+\\ell_{ij}>n\\}$ of active feature-instance indices. 
We use the term ``feature'' to refer to a factor, and the term ``feature instance'' to refer to a specific realization of that factor. For example, if each factor corresponds to a single note in an audio recording, the global representation of the note $C$ would be a feature, and the specific instance of note $C$ that starts at time $n$ and lasts for a geometrically distributed time would be a feature instance. If we assume a shared lifetime parameter, $\\rho_k=\\rho$ for all features, then the expected number of active feature instances at any time point is given by a geometric series, $E[|\\mathcal{Y}_n|] = \\sum_{i=0}^{n-1}\\alpha \\rho^i\\rightarrow \\frac{\\alpha}{1-\\rho}$ as $n\\rightarrow \\infty$, i.e. as we forget the start of the process. More generally, we allow $\\rho_k$ to differ between features, and place a $\\mbox{Beta}(a_\\rho, b_\\rho)$ prior on each $\\rho_k$. By a judicious choice of the hyper-parameters, this prior could be easily tailored to encapsulate vague or contextual prior knowledge. (As an added bonus, it leads to simpler stochastic simulation methods, which will be discussed later on.)\n\nThis geometric lifetime is the source of dependency in our new class of IBP-based npLFMs. It captures the idea of feature \\textit{persistence}: a feature instance ``turned on'' at time $t$ appears in a geometrically distributed number of future time steps. Since any feature instance that contributes to $x_n$ also contributes to $x_{n+1}$ with probability $\\rho_k$, we expect $x_n$ to share $\\frac{\\alpha\\rho}{1-\\rho}$ feature instances with $x_{n-1}$, and to introduce $\\alpha$ new feature instances. Of these new feature instances, we expect $\\alpha\/n$ to be versions of previously unseen features.\n\nNote that this construction allows a specific datum to exhibit multiple instances of a given latent factor. For example, if $\\mathcal{Y}_n=\\{(n,1), (n,3),(n-1,1)\\}$, then the $n$th datum will exhibit two copies of the first feature and one copy of the third feature. In many settings, this is a reasonable assumption: two trees appearing in a movie frame, or two instruments playing the same note at the same time.\n\nThe construction of dependency detailed above could now be combined with a variety of likelihood functions (or models) appropriate for different data sources or applications. We could also replace the geometric lifetime with other choices, for example using semi-Markovian models as in \\citet{johnson2013bayesian}. Armed with this kernel of geometric dependency and likelihood functions, we now illustrate the broad scope of the proposed family of time-dependent npLFMs via two generalizations. Later, we demonstrate these using real or synthetic data.\n\nAdapting the linear Gaussian IBP LFM used by \\citet{Griffiths:Ghahramani:2005} to our dynamic time-dependent model, where each datum is given by a linear superposition of Gaussian features, results in:\n\\begin{equation}\n\\begin{aligned}\nZ \\sim& \\mbox{IBP}(\\alpha) && \\\\\nA_k \\sim& \\mbox{Normal}(0,\\sigma_A^2) && \\\\\n\\ell_{nk}\\sim& \\mbox{Geometric}(\\rho_k), &k&=1,2,\\dots\\\\\n\\mu_n =& \\textstyle\\sum_{i=1}^n \\sum_{k=1}^\\infty z_{ik}I(i+\\ell_{ik}>n)A_k & &\\\\\nX_n \\sim& \\mbox{Normal}(\\mu_n, \\sigma_X^2), &n&=1,\\dots, N,\\label{eqn:dynamic_general}\n\\end{aligned}\n\\end{equation}\nwhere $I(\\cdot)$ is the indicator function. A short forward simulation of this generative process is sketched below.
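\n\nThe following minimal sketch (for exposition only) forward-simulates the generative process in Equation~\\ref{eqn:dynamic_general}, taking a realized binary matrix $Z$, a feature matrix $A$, and lifetime probabilities $\\rho_k$ as inputs; \\texttt{numpy}'s geometric sampler has support $\\{1,2,\\dots\\}$, matching the lifetimes here.\n\n\\begin{verbatim}\nimport numpy as np\n\n# Forward simulation of the dynamic npLFM: each active entry (i, k)\n# of Z spawns a feature instance persisting for a geometric lifetime.\ndef simulate_dynamic_lfm(Z, A, rho, sigma_x, seed=0):\n    rng = np.random.default_rng(seed)\n    N, K = Z.shape\n    mu = np.zeros((N, A.shape[1]))\n    for i in range(N):\n        for k in range(K):\n            if Z[i, k] == 1:\n                life = rng.geometric(rho[k])   # lifetime ell_ik >= 1\n                mu[i:i + life] += A[k]         # spans data i .. i+life-1\n    return mu + sigma_x * rng.normal(size=mu.shape)\n\\end{verbatim}\n\nBecause contributions are accumulated additively, a datum whose index falls inside several overlapping lifetimes of the same feature receives multiple copies of that feature, exactly as in the example $\\mathcal{Y}_n=\\{(n,1), (n,3),(n-1,1)\\}$ above.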
\n\nConsider a second generalization where one wishes to model variations in the appearance of a feature. Here, we can customize each feature instance using a feature weight $b_{nk}$ distributed according to some distribution $f$ so that: \n\\begin{equation}\n\\begin{aligned}\nZ \\sim& \\mbox{IBP}(\\alpha) &&\\\\\nA_k \\sim& \\mbox{Normal}(0,\\sigma_A^2) &&\\\\\n\\ell_{nk}\\sim& \\mbox{Geometric}(\\rho_k), &k&=1,2,\\dots\\\\\nb_{nk}\\sim& f &&\\\\\n\\mu_n =& \\textstyle \\sum_{i=1}^n \\sum_{k=1}^\\infty z_{ik}b_{ik}I(i+\\ell_{ik}>n)A_k&&\\\\\nX_n \\sim& \\mbox{Normal}(\\mu_n, \\sigma_X^2), &n&=1,\\dots, N.\n\\end{aligned}\n\\label{eqn:dynamic_amplitude}\n\\end{equation}\n\nFor example, in modeling audio data, a note or chord might be played at different volumes throughout a piece. In this case, it is appropriate to incorporate a per-factor-instance gamma-distributed weight, $b_{nk}\\sim \\mbox{Gamma}(\\alpha_B,\\beta_B)$. \n\nThe new family of time-dependent models above could be used in many applications, provided they are computationally feasible. In the following, we develop stochastic simulation methods to achieve this goal.\n\n\n\n\n\\section{Inference Methods for npLFMs}\\label{sec:inf}\nA number of inference methods have been proposed for the IBP, including Gibbs samplers \\citep{Griffiths:Ghahramani:2005,Teh:Gorur:Ghahramani:2007}, variational inference algorithms \\citep{Doshi:Miller:VanGael:Teh:2009}, and sequential Monte Carlo samplers \\citep{Wood:Griffiths:2006}. In this work, we focus on Markov chain Monte Carlo (MCMC) approaches (like the Gibbs sampler) since, under certain conditions, they are guaranteed to asymptotically converge to the true posterior distributions of the random parameters. Additionally, having tested various simulation methods for the dynamic models introduced in this paper, we found that the MCMC approach is easier to implement and has good mixing properties.\n\nWhen working with nonparametric models, we are faced with a choice. One, perform inference on the full nonparametric model by assuming infinitely many features \\emph{a priori} and inferring the appropriate number of features required to model the data. Two, work with a large, $K$-dimensional model that converges (in a weak-limit sense) to the true posterior distributions as $K$ tends to infinity. The former approach will asymptotically sample from the true posterior distributions, but the latter approximation approach is often preferred in practice due to lower computational costs. We describe algorithms for both approaches.\n\t\n\t\n\t\n\t\\subsection{An MCMC Algorithm for the Dynamic npLFM}\\label{sec:MCMCbasic}\n\tConsider the weighted model in Equation~\\ref{eqn:dynamic_amplitude}, where the feature instance weights $b_{nk}$ are distributed according to some arbitrary distribution $f(b)$ defined on the positive reals. Define $B$ as the matrix with elements $b_{nk}$. Inference for the uniform-weighted model in Equation~\\ref{eqn:dynamic_general} is easily recovered by setting $b_{nk}=1$ for all $n,k$.\n\t\n\tOur algorithms adapt existing fully nonparametric \\citep{Griffiths:Ghahramani:2005,DoshiVelez:Ghahramani:2009} and weak-limit MCMC algorithms \\citep{zhou2009non} for the IBP. One key difference is that we must sample not only whether feature $k$ is instantiated in observation $n$, but also the number of observations for which this particular feature instance remains active. 
We obtain inferences for the IBP-distributed matrix $Z$ and the lifetimes $\\ell_{nk}$ using a Metropolis-Hastings algorithm described below.\n\t\n\t\n\n\t\n\t\\paragraph{Sampling $Z$ and the $\\ell_{nk}$ in the Full Nonparametric Model:}\n\t\n\tWe jointly sample the feature instance matrix $Z$ and the corresponding lifetimes $\\ell$ using a slice sampler \\citep{Neal:2003}. Let $\\Lambda$ be the matrix whose elements are given by $\\lambda_{nk}:=z_{nk}\\ell_{nk}$. To sample a new value for $\\lambda_{nk}$ where $\\sum_{i\\neq n}\\lambda_{ik}>0$, we first sample an auxiliary slice variable $u\\sim \\mbox{Uniform}(0,Q^*(\\lambda_{nk}))$, where $Q^*(\\lambda_{nk}) = p(\\lambda_{nk}|\\rho_k, m_k^{-n})p(X|\\lambda_{nk}, A,B, \\sigma_X^2)$. Here, the likelihood term $p(X|\\lambda_{nk}, A, B, \\sigma_X^2)$ depends on the choice of likelihood, and\n\t\n\t\\begin{equation}\n\tp(\\lambda|\\rho_k, m_k^{-n}) = \\begin{cases} \\frac{N-m_k^{-n}}{N} & \\mbox{if }\\lambda=0\\\\\n\t\\frac{m_k^{-n}}{N} \\rho_k(1-\\rho_k)^{\\lambda-1} & \\mbox{otherwise.}\n\t\\end{cases}\\label{eqn:plambda_np}\n\t\\end{equation}\n\t\n\tWe then define a bracket centered on the current value of $\\lambda_{nk}$, and sample $\\lambda_{nk}^*$ uniformly from this bracket. We accept $\\lambda_{nk}^*$ if $Q^*(\\lambda_{nk}^*) = p(\\lambda_{nk}^*|\\rho_k, m_k^{-n})p(X|\\lambda_{nk}^*, A,B, \\sigma_X^2) > u$. If we do not accept $\\lambda^*_{nk}$, we shrink our bracket so that it excludes $\\lambda_{nk}^*$ but includes $\\lambda_{nk}$, and repeat this procedure until we either accept a new value, or our bracket contains only the previous value.\n\t\n\tFor the $n$th row of $Z$, we can sample the number of singleton features --- i.e. features where $z_{nk}=1$ but $\\sum_{i\\neq n}z_{ik}=0$ --- using a Metropolis-Hastings step. We sample the number $K^*$ of singletons in our proposal from a $\\mbox{Poisson}(\\alpha\/N)$ distribution, and sample corresponding values of $b^*_{nk}\\sim f(b)$. We also sample corresponding lifetime probabilities $\\rho_k^*\\sim \\mbox{Beta}(a_\\rho, b_\\rho)$ and lifetimes $\\ell_{nk}^* \\sim \\mbox{Geometric}(\\rho_k^*)$ for the proposed singleton features. Since the proposal is drawn from the prior, we accept the new $\\Lambda$ and $B$ with probability given by the likelihood ratio\n\t$$\\min\\left(1, \\frac{p(X|\\Lambda^*, A, B^*, \\sigma_X)}{p(X|\\Lambda, A, B, \\sigma_X)}\\right).$$\n\t\n\t\\paragraph{Sampling $Z$ and the $\\ell_{nk}$ using a Weak-Limit Approximation:}\n\tInference in the weak-limit setting is more straightforward since we do not have to worry about adding and deleting new features. We modify the slice sampler for the full nonparametric model, replacing the definition of $p(\\lambda_{nk}|\\rho_k, m_{k}^{-n})$ in Equation~\\ref{eqn:plambda_np} by\n\t\\begin{equation}\n\tp(\\lambda|\\rho_k, m_k^{-n}) = \\begin{cases} \\frac{N-m_k^{-n}}{N+\\frac{\\alpha}{K}} & \\mbox{if }\\lambda=0\\\\\n\t\\frac{m_k^{-n}+\\frac{\\alpha}{K}}{N+\\frac{\\alpha}{K}} \\rho_k(1-\\rho_k)^{\\lambda-1} & \\mbox{otherwise,}\n\t\\end{cases}\\label{eqn:plambda_wl}\n\t\\end{equation}\n\tand by slice sampling $\\lambda_{nk}$ even if $\\sum_{i\\neq n}\\lambda_{ik}=0$. In the weak-limit setting, we do not have a separate procedure for sampling singleton features. A generic sketch of the shrinkage slice-sampling update is given below.
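\n\t\n\tFor concreteness, the following is a generic sketch (ours) of the shrinkage slice-sampling update just described for a single $\\lambda_{nk}$; the caller-supplied function \\texttt{log\\_q} stands for $\\log Q^*(\\lambda)$, i.e. the log of the prior term in Equation~\\ref{eqn:plambda_np} (or Equation~\\ref{eqn:plambda_wl}) plus the log-likelihood, and is a placeholder for the model-specific computation.\n\n\\begin{verbatim}\nimport numpy as np\n\n# Shrinkage slice-sampling update for a single non-negative integer\n# lambda_nk = z_nk * ell_nk.\ndef slice_update(lam, log_q, width=5, seed=None):\n    rng = np.random.default_rng(seed)\n    # auxiliary slice variable u ~ Uniform(0, Q*(lam)), in log space\n    log_u = log_q(lam) + np.log(rng.random())\n    lo, hi = max(lam - width, 0), lam + width  # bracket around lam\n    while lo < hi:\n        lam_star = int(rng.integers(lo, hi + 1))\n        if log_q(lam_star) > log_u:\n            return lam_star                    # accept: Q*(lam*) > u\n        if lam_star < lam:                     # reject: shrink bracket,\n            lo = lam_star + 1                  # excluding lam_star but\n        elif lam_star > lam:                   # keeping the current lam\n            hi = lam_star - 1\n    return lam                                 # bracket is now just {lam}\n\\end{verbatim}\n\n\tThe same routine serves both the full nonparametric and weak-limit samplers; only the prior term inside \\texttt{log\\_q} changes.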
\n\t\n\t\\paragraph{Sampling $A$ and $B$:}\n\tConditioned on $Z$ and the $\\ell_{nk}$, inferring $A$ and $B$ will generally be identical to a model based on the static IBP, and does not depend on whether we used a weak-limit approximation for sampling $Z$. Recall that $\\mathcal{Y}_{n}$ is the set of feature-instance indices $\\{(i,j): z_{ij}=1, i+\\ell_{ij}>n\\}$. Let $Y$ be the matrix with elements $y_{nk} = \\sum_{i: (i,k)\\in \\mathcal{Y}_{n}} b_{ik}$ -- i.e. the total weight given to the $k$th feature in the $n$th observation. Then conditioned on $Y$ and $B$, the feature matrix $A$ is normally distributed with mean\n\t$$\\mu_A = \\left(Y^TY + \\frac{\\sigma_X^2}{\\sigma_A^2}\\mathbf{I}\\right)^{-1}Y^TX$$\n\tand block-diagonal covariance, with each column of $A$ having the same covariance \n\t$$\\Sigma_A = \\sigma_X^2\\left(Y^T Y+\\frac{\\sigma_X^2}{\\sigma_A^2}\\mathbf{I}\\right)^{-1}.$$\n\t\n\tWe can use a Metropolis-Hastings proposal to sample from the conditional distribution $P(b_{nk}|X,Z,\\{\\ell_{nk}\\}, A, \\sigma_X^2)$ --- for example, sampling $b_{nk}^*\\sim f(b)$ and accepting with probability\n\t$$\\min\\left(1, \\frac{P(X|Z, \\{\\ell_{nk}\\}, A, B^*, \\sigma_X)}{P(X|Z,\\{\\ell_{nk}\\}, A,B, \\sigma_X)}\\right).$$\n\t\n\t\\paragraph{Sampling Hyperparameters:}\n\tWith respect to the choice of model, we could either incorporate informative prior beliefs or use non-informative settings, depending on the user's knowledge and the data at hand. Without loss of generality, we place inverse gamma priors on $\\sigma_X^2$ and $\\sigma_A^2$ and beta priors on each of the $\\rho_k$; then, we can easily sample from their conditional distributions due to conjugacy. \n\tSimilarly, if we place a $\\mbox{Gamma}(a_\\alpha,b_\\alpha)$ prior on $\\alpha$, we can sample from its conditional distribution\n\t$$\\alpha|Z\\sim \\mbox{Gamma}\\left(K+a_\\alpha, \\frac{b_{\\alpha}}{1+b_\\alpha H_N}\\right)$$\n\twhere $H_N$ is the $N$th harmonic number. These inverse gamma and gamma prior distributions are general since, by a judicious choice of hyperparameter values, they could be tailored to model anything from very little to strong prior information. \n\t\n\t\\section{Experimental Evaluation}\\label{sec:results}\n\tHere the proposed models and stochastic simulation methods are exemplified via synthetic and real data illustrations. In the synthetic illustration, we used the full nonparametric simulation method; in the real data examples, we used the weak-limit approximation version of the MCMC algorithm to sample the nonparametric component. We do this to allow fair comparison with the DIBP and DDIBP, which both use a weak-limit approximation. We choose to compare with the IFDM over the related IFUHMM since it offers a more efficient inference algorithm, and because code was made available by the authors.\n\t\n\tThe ``gold standard'' in assessing npLFMs is to first set aside a hold-out sample. Then, using the estimated parameters, one predicts these held-out data; i.e., one compares actual versus predicted values. In this section, we do this by alternately imputing the missing values from their appropriate conditional distributions, and using the imputed values to sample the latent variables.\n\t\n\t\n\tSince the aim is to compare static npLFM models and existing dynamic (DIBP, DDIBP and IFDM) models with the temporally dynamic npLFM models developed in this paper, the mean square error (MSE) is used to contrast the performance of these approaches on the held-out samples. We choose squared error over absolute error due to its emphasis on extreme values. 
In the interest of space, we have not included plots or figures demonstrating the mixing of the MCMC sampler, though one may use typical MCMC convergence heuristics to assess convergence \\citep[for example,][]{Geweke:1991,Gelman:Rubin:1992,Brooks:Gelman:1998}.\n\t\n\n\t\n\t\\subsection{Synthetic Data}\n\t\n\tTo show the benefits of explicitly addressing temporal dependence, we carried out the following.\n\t\n\t\\begin{itemize}\n\t\t\\item We generated a synthetic dataset with the canonical ``Cambridge bars'' features shown in Figure~\\ref{fig:synthA}; these features were used to generate a longitudinally varying dataset. \n\t\t\\item We simulated a sequence of $N=500$ data points corresponding to $N$ time steps.\n\t\t\\item For each time step, we added a new instance of each feature with probability 0.2, then sampled an active lifetime for that feature instance according to a geometric distribution with parameter 0.5.\n\t\t\\item Each datum was generated by superimposing all the active feature instances (i.e. those whose lifetimes had not expired) and adding Gaussian noise to give a $6\\times 6$ real-valued image.\n\t\t\\item We designated 10\\% of the observations as our test set. For each test set observation, we held out 30 of the 36 pixels. The remaining 6 pixels allowed us to infer the features associated with the test set observations.\n\t\\end{itemize}\n\t\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.45\\textwidth]{figs\/synthetic_features.png}\\\\\n\t\t\\includegraphics[width=1.0\\textwidth]{figs\/synthetic_obs.png}\n\t\t\\caption{Top row: Four synthetic features used to generate data. Bottom row: Ten consecutive observations.}\n\t\t\\label{fig:synthA}\n\t\\end{figure}\n\t\n\tWe considered five models: the static IBP; the dynamic npLFM proposed in this paper; the DIBP; the DDIBP; and the IFDM. For the dynamic npLFM and the static IBP, we used our fully nonparametric sampler. For the DIBP, DDIBP and the IFDM we used code available from the authors. The DIBP and DDIBP codes use a weak limit sampler; we fixed $K=20$ for both the DDIBP and the DIBP, a relatively low truncation needed to accommodate the much slower computational protocol of the DIBP. \n\t\n\tTable~\\ref{table:synthetic1} shows the MSEs obtained on the held-out data, the number of features, and the average feature persistence. All values are the final MSE averaged over 5 trials from the appropriate posterior distributions following convergence of the MCMC chain. The average MSE is significantly lower for our dynamic model than for all the other models we compared against.\n\tNext, consider Figure~\\ref{fig:instances}, which shows the total number of times each feature contributes to a data point (i.e., the sum of that feature's lifetimes), based on a single iteration from both the dynamic and the static model.
It is clear that the dynamic model reuses common features a larger number of times than the static model.\n\t\\begin{table}\n\t\t\\centering\n\t\t\\begin{tabular}{|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& MSE & Number of features & Average persistence \\\\\n\t\t\t\\hline\n\t\t\tDynamic npLFM & $0.274\\pm 0.02$ & $15.80 \\pm 0.748$ & $2.147 \\pm 0.58$\\\\\n\t\t\t\\hline\n\t\t\tStatic npLFM & $0.496 \\pm 0.04$ & $19.80 \\pm 0.400$ & $-$ \\\\\n\t\t\t\\hline\n\t\t\tDIBP & $0.459 \\pm 0.01$ & $20^*$ & $-$ \\\\\n\t\t\t\\hline\n\t\t\tDDIBP & $0.561 \\pm 0.02$ & $20^{\\dagger}$ & $-$\\\\\n\t\t\t\\hline\n\t\t\tIFDM & $0.7513 \\pm 0.003$ & $2^{\\dagger}$ & $-$ \\\\\n\t\t\t\\hline \n\t\t\\end{tabular}\n\t\t\\caption{Average MSE, number of features, and feature persistence on synthetic data under static and dynamic npLFMs. $ ^*$Note that the DIBP was restricted to $K=20$ features for computational reasons. $^{\\dagger}$All 5 trials learned the same number of features.}\n\t\t\\label{table:synthetic1}\n\t\\end{table}\n\t\n\tThere are two critical reasons for this superior performance. First, consider a datum with two instances of a given feature: one that has just been introduced, and one that has persisted from a previous time-point. Our dynamic model is able to use the same latent feature to model both feature instances, while the static model, the DIBP, and the DDIBP must use two separate features (or model this double-instance as a separate feature from a single-instance). This is seen in the lower average number of features required by the dynamic model (Table~\\ref{table:synthetic1}), and in the greater number of times common features are reused (Figure~\\ref{fig:instances}).\n\t\n\tIn general, if (in the limit of infinitely many observations) there is truly a finite number of latent features, it is known that non-parametric models will tend to overestimate this number \\citep{Miller:Harrison:2013}. With that said, from a modeling perspective we generally wish to recover fewer redundant features, giving a parsimonious reconstruction of the data. We can see that we achieve this by comparing the number and popularity of the features recovered with our dynamic model, relative to the static model.\n\t\n\tSecond, the dynamic npLFM makes use of the ordering information and anticipates that feature instances will persist for multiple time periods; this means that the latent structure for a given test-set observation is informed not only by the six observed pixels, but also by the latent structures of the adjacent observations. We see that the average feature persistence is $2.147$ time steps, which confirms that the dynamic model makes use of the temporal dynamics inherent in the data. While the DIBP, DDIBP and IFDM all have mechanisms to model temporal variation, their models do not match the method used to generate the data, and cannot capture the persistence variation explicitly.\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=.65\\textwidth]{figs\/synthetic_feat_counts.png}\n\t\t\\caption{Number of times each feature contributes to a data point under static and dynamic npLFMs. Note that under the dynamic model, a feature can contribute multiple times to the same data point. The feature labels are arbitrary, so features are ordered according to their popularity.}\n\t\t\\label{fig:instances}\n\t\\end{figure}\n\t\n\t\\begin{table}\n\t\t\\centering\n\t\t\\begin{tabular}{|c|c|c|c|}\n\t\t\t\\hline\n\t\t\t& Household power consumption & Audio data & Bird call data \\\\\n\t\t\t\\hline\n\t\t\tDynamic npLFM & $0.287 \\pm 0.013$ & $0.722 \\pm 0.007$ & $ 0.561 \\pm 0.306$\\\\\n\t\t\t\\hline\n\t\t\tStatic npLFM & $1.835 \\pm 0.182$ & $1.013 \\pm 0.013$ & $ 1.026 \\pm 0.481 $ \\\\ \n\t\t\t\\hline\n\t\t\tDDIBP & $1.424 \\pm 0.069$ & $1.289 \\pm 0.224$ & $ 0.606 \\pm 0.036 $ \\\\ \n\t\t\t\\hline\n\t\t\tDIBP & $1.324 \\pm 0.106$ & $ 1.845 \\pm 0.264$ & $ 1.308 \\pm 0.744 $ \\\\\n\t\t\t\\hline\n\t\t\tIFDM & $ 0.294 \\pm 0.022$ & $ 1.906 \\pm 0.009 $ & $1.222 \\pm 0.130$ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\caption{Average MSE obtained on the empirical datasets by the dynamic model proposed in this paper; the static IBP latent feature model; the DDIBP; the DIBP; and the IFDM.}\n\t\t\\label{table:real1}\n\t\\end{table}\n\t\n\t\n\t\\subsection{Household Power Consumption Real Data Illustration}\n\tA number of different appliances contribute to a household's overall power consumption, and each appliance will have different energy consumption and operating patterns. We analyzed the ``Individual household electric power consumption'' data set\\footnote{We only analyzed a subset of the data for computational reasons.} available from the UCI Machine Learning Repository\\footnote{\\texttt{http:\/\/archive.ics.uci.edu\/ml}}. This dataset records overall minute-averaged active power, overall minute-averaged reactive power, minute-averaged voltage, overall minute-averaged current intensity, and watt-hours of active energy on three sub-circuits within one house.\n\t\n\tWe examined 500 consecutive recordings. For each recording, we independently scaled each observed feature to have zero mean and unit variance, and subtracted the minimum value for each observed feature. The preprocessed data can, therefore, be seen as excess observations above a baseline, with all features existing on the same scale, justifying a shared prior variance. Based on the assumption that a given appliance's energy demands are approximately constant, we applied our dynamic npLFM with constant weights, described in Equation~\\ref{eqn:dynamic_general}.\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{figs\/dibp_hpc_shared}\n\t\t\\caption{Latent structure obtained from the household power consumption data using the dynamic npLFM. Top left: Intensity of observed feature at each observation (after pre-processing). Bottom left: Latent features found by the model. Top right: Number of instances of each latent feature, at each observation.}\n\t\t\\label{fig:hpc_dynamic}\n\t\\end{figure}\n\t\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=\\textwidth]{figs\/ibp_hpc_shared}\n\t\t\\caption{Latent structure obtained from the household power consumption data using the static IBP. Top left: Intensity of observed feature at each observation (after pre-processing). Bottom left: Latent features found by the model.
Top right: Number of instances of each latent feature, at each observation.}\n\t\t\\label{fig:hpc_ibp}\n\t\\end{figure}\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\textwidth]{figs\/hpc_time_series}\n\t\t\\caption{Plot of observed features 2 and 6 from the household power consumption data, over time.}\n\t\t\\label{fig:time_series}\n\t\\end{figure}\n\tWe again compared against a static IBP, the DIBP, the DDIBP (with exponential similarity measure), and the IFDM. For all models, we used a weak limit sampler with a maximum of 20 features. For validation, 10\\% of the data were set aside, with six randomly selected dimensions out of the seven held out. We expect the dynamic models to perform better than the static model, given the underlying data generating process: electricity demand is dictated by which appliances and systems are currently drawing power. Most appliances are used for contiguous stretches of time. For example, we turn a light on when we enter a room, and turn it off when we leave some time later. Further, many appliances have characteristic periods of use: a microwave is typically on for a few minutes, while a washing machine is on for around an hour. A static model cannot capture these patterns.\n\t\n\tThe held-out set average MSEs with bounds are shown in Table~\\ref{table:real1}. The DDIBP performs comparably with the static model, suggesting its form of dependence is not appropriate for this task. The DIBP performs slightly better than the static model, indicating that it can capture the feature persistence described above. However, our model significantly outperforms the other models. This can be explained by two properties of our model that are not present in the comparison methods. First, our method of modeling feature persistence is a natural fit for the data set: latent features are turned on at a rate given by the IBP, and they have an explicit duration that is independent of this rate. \n\t\n\tBy contrast, in the DIBP, a single Gaussian process controls both the rate at which a feature is turned on, and the amount of time for which it contributes. Second, our construction allows multiple instances of the same feature to contribute to a given time point. This means that our approach allows a single feature to model multiple similar appliances -- e.g. light bulbs -- which can be used simultaneously. The IFDM also performs favorably on this task: the problem resembles a blind signal separation problem, in which we want to model the probability that, say, a dishwasher or laundry machine is on at a certain time point, and this is exactly the setting for which such a model is designed. The IBP, DIBP and DDIBP, by contrast, must use separate features for different numbers of similar appliances in use, such as light bulbs.\n\t\n\tFor a visual assessment of the importance of allowing multiple instances, consider the latent structures obtained from the static IBP and our dynamic npLFM. Figures~\\ref{fig:hpc_dynamic} and \\ref{fig:hpc_ibp}, respectively, show the latent structure obtained from a single sample from these models. The top left panel of each of these figures shows the levels of the observed features. We can see that observed features 2 and 6 have spikes between observations 250 and 300. These spikes can be seen more clearly in Figure~\\ref{fig:time_series}, which plots the use of observed features 2 and 6 over time.
Feature 2 corresponds to the minute-averaged voltage, and feature 6 corresponds to watt-hours of active energy in the third sub-circuit, which powers an electric water heater and an air-conditioner --- both power-hungry appliances. The spikes are likely due to either the simultaneous use of the air-conditioner and water heater, or different levels of use of these appliances.\n\t\n\t\n\tUnder the dynamic model, the bottom left panel of Figure~\\ref{fig:hpc_dynamic} shows that latent feature 0 places mass on observed features 2 and 6. The top right panel shows that there are multiple instances of this feature in observations corresponding to the previously discussed spikes in observed features 2 and 6. The corresponding static model graph in Figure~\\ref{fig:hpc_ibp} shows that the static IBP is unable to account for this latent behavior resulting from increased usage of the third sub-circuit; hence this model must use a combination of multiple features to capture the same behavior.\n\t\n\t\n\t\\subsection{Audio Real Data Illustration}\\label{sec:audio1}\n\tIt is natural to think of a musical piece in terms of latent features, for it is made up of one or more instruments playing one or more notes simultaneously. There is clearly persistence of features, making the longitudinal model described in Section~\\ref{sec:model_long} a perfect fit. We chose to evaluate the model on a section of Strauss's ``Also sprach Zarathustra''. A midi-synthesized multi-instrumental recording of the piece was converted to a mono wave recording with an 8kHz sampling rate. We then generated a sequence of $D=128$-dimensional observations by applying a short-time Fourier transform using a $128$-point discrete Fourier transform, a $128$-point Hanning window, and a 128-point advance between frames---so, each datum corresponds to a 16ms segment with a 16ms advance between segments. We scaled the data along each frequency component to have unit variance, and subtracted the minimum value for each observed feature.\n\t\n\tTo evaluate the model, a hold-out sample of 10\\% of the data, evenly spaced throughout the piece, was set aside. All but eight randomly selected dimensions were held out. Again, we used the same settings as described in the earlier experiments. We obtained average MSEs, along with bounds, by averaging over 5 independent trials from the final value of the Gibbs sampler; these are reported in Table~\\ref{table:real1}. We see that, by modeling the duration of features, we perform favorably in a musical example which exhibits durational properties, unlike the other models we compared against. Recall that the dynamic model has two advantages over the static model: it explicitly models feature duration, and allows multiple instances of a given feature to contribute to a given observation. The first aspect allows the model to capture the duration of notes or chords. The second allows the model to capture dynamic ranges of a single instrument, and the effect of multiple instruments playing the same note.\n\t\n\t\\subsection{Bird Call Source Separation}\n\tNext, we consider the problem of separating different sources of audio. Since it is difficult to know the number of different audio sources \\textit{a priori}, we can instead learn the number of sources non-parametrically. A dynamic Indian buffet process model is well suited to this type of problem, as we may imagine different but possibly repeating sources of audio represented as the dishes selected from an IBP.
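The spectral preprocessing described above, which we reuse for the bird call example below, can be sketched as follows. This is a minimal sketch assuming \\texttt{scipy} and a mono 8kHz signal; whether a one- or two-sided spectrum is retained is an assumption here, so the resulting dimension may differ from the $D=128$ used in the text.\n\t\\begin{verbatim}\nimport numpy as np\nfrom scipy.signal import stft\n\ndef preprocess_audio(x, fs=8000, nfft=128):\n    # magnitude STFT: 128-point DFT, 128-point Hanning window,\n    # 128-point advance (16 ms frames with no overlap)\n    _, _, Z = stft(x, fs=fs, window='hann', nperseg=nfft, noverlap=0)\n    X = np.abs(Z).T              # rows = observations, columns = frequencies\n    X = X / X.std(axis=0)        # unit variance per frequency component\n    X = X - X.min(axis=0)        # subtract the per-feature minimum\n    return X\n\\end{verbatim}\n\t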
To this end, we apply our dynamic model to an audio sample of bird calls. The audio source that we use for this problem is a two-minute-long recording of various bird calls in Kerala, India\\footnote{The recording is available at \\url{https:\/\/freesound.org\/s\/27334\/}}. We transformed the raw wave file by Cholesky whitening the data and then took a regularly spaced subsample of 2,000 observations, of which we randomly held out $10\\%$ as a test set. We then analysed the data as described in Section~\\ref{sec:audio1}.\n\t\n\tOne could easily imagine that a bird call would have a natural duration and would reappear throughout the recording. Hence, for a recording such as this one, being able to incorporate durational effects should be important to modeling the data. Equally, one could again view this task as a blind source separation problem, for which one could imagine a model like the IFDM performing favorably without needing to model the durational component of the features. As seen in Table~\\ref{table:real1}, we obtain superior performance, we posit, for the reasons described above.\n\t\n\t\n\n\t\n\t\n\t\\section{Conclusion}\\label{sec:conc}\n\tThis paper introduces a new family of longitudinally dependent latent factor (or feature) models for time-dependent data. Unobserved latent features are often subject to temporal dynamics in data arising in a multitude of applications in industry. Static models are often applied to such time-dependent data but, as shown in this work, they disregard key insights that can be gained when time dependence is modeled dynamically. Synthetic and real data illustrations exemplify the improved predictive accuracy obtained when using time-dependent, nonparametric latent feature models. The general algorithms developed here to sample from the new family could easily be adapted to model data arising in different applications where the likelihood function changes. \n\t\n\tThis paper focused on temporal dynamics for random, \\emph{fixed}, time-dependent data using nonparametric LFMs. But if data are \\emph{changing} in real time, as in moving images in a film, then the notion of temporal dependency needs a different treatment than the one developed here. We wish to investigate this type of dependence in future work. In addition to the mathematical challenges that this proposed extension presents, the computational challenges are daunting as well. The theoretical and empirical work exhibited here shows promise, and we hope to develop faster and more expressive non-parametric factor models.\n\n\t\\section*{Acknowledgments}\n\tSinead Williamson and Michael Zhang were supported by NSF grant 1447721.\n\t\n\n\t\\bibliographystyle{apalike}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Results}\n\n\n\n\\noindent{\\bf Ejection speed.} The surface tension catapult realizes maximum ejection speed when the spore and Buller's drop have nearly the same volume. The two drops that coalesce to power the surface tension catapult are made of condensed water vapor and form after secretion of hygroscopic substances by the spore.\nWhen Buller's drop coalesces with the adaxial drop, the resulting reduction of surface area provides the surface energy to accelerate the spore. Because the adaxial drop is pinned to the surface of the spore, Buller's drop accelerates towards the distal tip of the spore.
Once the coalesced drops reach the tip of the spore, capillarity and contact line pinning decelerate the water and its momentum is transferred to the spore. Momentum transfer causes the force that breaks the hilum and results in spore ejection away from the basidium. \nThe release of surface energy by coalescence is $\\sim\\pi\\gamma R_B^2$, where $\\gamma$ is the surface tension and $R_B$ is Buller's drop radius.\nBalancing the surface energy against the kinetic energy of the spore - drop complex, we obtain:\n\\begin{equation}\nv_0 = \nU \\sqrt{\\dfrac{y^2}{y^3+\\beta}}\n\\label{eq:v0}\n\\end{equation}\nwhere $v_0$ is the ejection velocity, $U= \\sqrt{3\\alpha\\gamma\/(2\\rho_B R_s)}$; $y=R_B\/R_s$, $R_B$ is Buller's drop radius and $R_s$ is the radius of a sphere with the same volume as the spore; $\\beta = \\rho_s\/\\rho_B$, where $\\rho_B$ and $\\rho_s$ are the densities of Buller's drop and of the spore respectively. The parameter $\\alpha$ signifies that a fraction of the available energy is dissipated in the process of breaking the spore apart from the hilum, the structure that holds it attached to the gill. We will consider $\\alpha=0.23$, which is the average among the values of efficiency previously measured \\citep{thesis_jessica}. Viscous dissipation during the dynamics of coalescence can be neglected because ballistospory operates in a regime of low Ohnesorge number \\citep{liu2017}. We realized that the simple energy balance discussed at length in the literature and recapitulated in equation~\\eqref{eq:v0} predicts that there is a radius of Buller's drop that maximizes $v_0$ (see Figure~\\ref{fig:velocity}). Setting the derivative of \\eqref{eq:v0} to zero (note that $dv_0\/dy \\propto y\\,(2\\beta-y^3)\/(y^3+\\beta)^{2}$), we obtain the size of Buller's drop that maximizes ejection speed:\n\\begin{equation}\ny_{\\text{max}}=(2 \\beta)^{1\/3}\n\\label{eq:ymax}\n\\end{equation}\n\\noindent and considering spores with density once or twice the density of water \\citep{hussein}, $\\beta=1$ to 2, implying that at $y_{\\text{max}}$ Buller's drop radius is comparable to the equivalent radius of the spore, $R_B\\sim 1.26 R_s$ to $1.59 R_s$ (Figure~\\ref{fig:velocity}, grey vertical mark labeled $y_{\\text{max}}$). \nNote that at $y_{\\text{max}}$ the ejection speed is controlled robustly, i.e.~it becomes insensitive to small deviations from the exact value of Buller's drop size. \nBuller's drop is generally assumed to scale with spore length \\citep{aerodynamics_ballisto} and this scaling appears to hold for at least 13 species of basidiomycetes as shown in \\citep{aerodynamics_ballisto,thesis_jessica,pringle2005,Stolze-Rybczynski2009}. Supplementary Figure 1 shows these published data, as a function of the spore equivalent radius $R_s$, pointing to $y_{\\text{data}}= R_B\/R_s \\sim 0.35 \\pm 0.11$, where we report average $\\pm$ standard deviation.\n$y_{\\text{data}}$ are represented on the horizontal axis in Figure~\\ref{fig:velocity}, suggesting these fungi do not operate at maximum ejection speed, but rather remain on the rising slope preceding the maximum. \n\\\\\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{figure1.png}\n\\caption{\\footnotesize Energy balance from eq~\\eqref{eq:v0} predicts discharge speed $v_0$ as a function of $y$, defined as the ratio of Buller's drop radius $R_B$ divided by the spore equivalent radius $R_s$. Velocity peaks at $y_{\\text{max}} = (2 \\beta)^{1\/3} = 1.26$ to $1.59$ for $\\beta$ ranging from 1 to $2$, where $\\beta$ is the ratio of spore to drop density. \nThe same ejection speed is attained for two values of $y$, one on either side of the maximum. Experimental data of $y$ all lie to the left of the peak, suggesting evolution has favored smaller drops. \n} \n\\label{fig:velocity}\n\\end{figure}\n\n \n\\noindent{\\bf Maximum spore packing.} \nOnce the spore-drop complex is ejected, it is soon decelerated by air drag and its relaxation time is well approximated by the Stokes time \\citep{stokes,aerodynamics}:\n\\begin{equation}\n\\tau\n= T (y^3 + 1)^{2\/3}\n\\label{eq:tau}\n\\end{equation}\n\\noindent where we have considered the complex as an equivalent sphere with volume equal to the sum of the spore and drop volumes. Here, $T=2R_s^2 \/ (9\\nu \\bar{\\beta})$, $\\nu$ is the air kinematic viscosity and $\\bar{\\beta}$ is the density of air divided by the density of the spore-drop complex. \nAfter discharge, spores travel horizontally a distance $x=v_0 \\tau$, with $v_0$ and $\\tau$ from equations~\\eqref{eq:v0} and \\eqref{eq:tau}, and then stop abruptly and start to sediment vertically out of the gills, following a trajectory commonly known as a ``sporabola'' (represented in Figure~\\ref{fig:maxpack}A). In order to successfully escape, spores should first travel horizontally far enough to avoid sticking to the spores and basidia underneath. If $x$ is indeed dictated by this safety criterion, then the distance between two opposite gills, $d$, should be at least twice $x$, hence $d>2x$. To pack as many spores as possible and avoid inefficient empty spaces, \nthe distance between gills must be close to this minimum value:\n\\[\nd \\sim 2 v_0 \\tau\n\\]\n\\noindent Plugging in the values of $v_0$ and $\\tau$ given by equations~\\eqref{eq:v0} and \\eqref{eq:tau} we obtain:\n\\begin{equation}\n\\frac{d}{2UT} =\n\\Bigl(\\frac{y_{\\text{pack}}^2}{y_{\\text{pack}}^3 + \\beta} \\Bigr)^{1\/2} (y_{\\text{pack}}^3+1)^{2\/3} \n\\label{eq:theory}\n\\end{equation}\nFor any combination of spore density and radius as well as intergill distance, equation~\\eqref{eq:theory} predicts the optimal radius of Buller's drop, normalized by the spore radius, $y_{\\text{pack}}$, that achieves maximum packing. We solve Equation~\\eqref{eq:theory} numerically and show the result for $y_{\\text{pack}}$ in Figure~\\ref{fig:maxpack} for different combinations of intergill distance and spore radius, assuming $\\beta=1.2$, $\\alpha=0.23$ and $\\bar{\\beta}=10^{-3}$; $y_{\\text{pack}}$ is color-coded from 0 (cyan) to 10 (dark blue). The value of $y_{\\text{max}}$ from Equation~\\eqref{eq:ymax} that maximizes ejection speed is marked in white for $\\beta=1.2$, for reference. \n\\begin{figure}[h!]\n\\includegraphics[width=0.5\\textwidth]{figure3.pdf}\n\\caption{\\footnotesize Optimal morphology of mushroom caps. (A) Sketch of a cross section of a mushroom, close up of gills and magnified view of adjacent gills with basidia and basidiospores. Several trajectories of individual spores (sporabolas) are represented with black arrows; trajectories traced by Buller in 1909 \\citep{buller}. Maximum packing implies that spores initially travel a distance $x = v_0 \\tau$ to reach the midpoint between two opposite gills, $d =2 v_0 \\tau$, with $v_0$ and $\\tau$ given by Equations~\\eqref{eq:v0} and \\eqref{eq:tau}. (B) Prediction for the normalized Buller drop radius at maximum packing, $y_{\\text{pack}}$, obtained by numerically solving Equation~\\eqref{eq:theory} with $\\beta=1.2$, $\\bar{\\beta}=0.001$ and $\\alpha=23$\\%. $y_{\\text{pack}}$ is color coded from 0 (cyan) to $10$ (dark blue), and white marks the normalized Buller drop radius at maximum velocity from Equation~\\eqref{eq:ymax}.
Red symbols correspond to data of intergill distance and spore equivalent radius from the 8 species collected and analyzed in the present study (see Figure 4).\nThe predicted radius of Buller's drop that maximizes packing for the 8 collected species is $y_{\\text{pack}} \\sim 0.56 \\pm 0.20$, which compares well to measured values of Buller's drop size pointing to $y \\sim 0.35 \\pm 0.11$, where we report average $\\pm$ standard deviation.} \n\\label{fig:maxpack}\n\\end{figure}\n\\\\\n\n\n\\begin{figure}[h!]\n\\includegraphics[width=0.5\\textwidth]{figure4.pdf}\n\\caption{\\footnotesize Data collection. (A) Picture of a wild isolate of a mushroom cap. (B) Spore print obtained by deposition of the cap on aluminum foil overnight. (C) Confocal microscope image of a sample of spores from the spore print. (D) Segmentation of the spore image to recover spore contours. (E) Concentric circle around the center of the cap where gill distance is measured, and definition of the azimuthal angle $\\theta$. (F) Grey scale value from the image in panel E, as a function of the azimuthal angle $\\theta$. (G) Close up image showing the locations of two peaks in the grey image, marked automatically by arrows 1 and 2 (above). Gill distance is defined as the distance between peaks minus their width (see Materials and Methods).} \n\\label{fig:collection}\n\\end{figure}\n\\noindent{\\bf Data collection and data analysis.} \\\\\nTo place real species on the phase space generated by the theory, we collected data of spore and gill morphology for eight wild mushroom isolates. \nWe isolate mushroom caps (Figure~\\ref{fig:collection}A), let them sit overnight on aluminum foil, resulting in what is called a spore print (Figure~\\ref{fig:collection}B), and then collect samples of spores from different regions of the mushroom. Spores are imaged under confocal microscopy (Figure~\\ref{fig:collection}C), and images are analyzed with standard segmentation postprocessing in ImageJ to isolate the contours of spores (Figure~\\ref{fig:collection}D). The spore area $S$ is computed from these images and the radius is obtained from the area, $R_s=\\sqrt{S\/\\pi}$. To measure gill distance, we first identify the center of the cap by eye. We then draw several circles, between 6 and 10 depending on the size of the cap, around the center of the cap (Figure~\\ref{fig:collection}E). Grey values along the circles are obtained (one example in Figure~\\ref{fig:collection}F) and the profile of the grey value is analyzed to define the distance $d$ between the gills as the peak-to-peak distance minus the width of the peaks (see the close up of two peaks in Figure~\\ref{fig:collection}G, and Materials and Methods). \\\\ \nThe collected data show that spore size varies from species to species, but does not vary across a single mushroom cap, suggesting that mushrooms tend to produce spores of the same size in a single fruit body (Figure~\\ref{fig:avspores}). The average intergill distance varies little with distance from the center of the cap, with the exception of \\emph{Russula cremicolor}, which is the only species with no secondary gills in our collection, consistent with previous models and experiments \\citep{phillips91,gills}. The intergill distance varies from about 0.25~mm to 1.5~mm (Figure~\\ref{fig:avintergill}), with no obvious correlation with the size of the mushroom cap.\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{spore_radius.pdf}\n\\caption{\\footnotesize Results of data analysis. Spore size does not vary across a single mushroom cap.} \n\\label{fig:avspores}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{intergill.pdf}\n\\caption{\\footnotesize Results of data analysis. Average gill spacing varies little with distance from the center of the cap. The only exception is \\emph{Russula cremicolor}, which has no secondary gills.}\n\\label{fig:avintergill} \n\\end{figure}\nWe use these data to compute the average and standard deviation of spore radius and intergill distance across a single individual. The experimental data are superimposed on the theory for maximum spore packing. \nThe 8 species tested in this study fall in a region where, if gill morphology is optimized for maximum spore packing, then Buller's drop radius is $R_B\\sim 0.55 R_s$, consistent with previously published data pointing to $R_B\\sim 0.33 R_s$ (Figure~\\ref{fig:maxpack}B and Figure~\\ref{fig:velocity}). \n\\\\\n\n\\noindent{\\bf Conclusions.} \nGilled mushrooms have long been hypothesized to have intricate morphologies in order to maximize the surface to volume ratio and pack the maximum number of spores with the minimum amount of biomass. In order to comply with this hypothesis, the horizontal range that spores travel upon ejection must be finely tuned to land spores midway between two opposite gills. Spore range is dictated by the dimension of Buller's drop and its density relative to the dimension and density of the spore. We find that real species populate a region of the phase space where the radius of Buller's drop that maximizes spore packing, $R_B \\sim 0.55 R_s$, achieves velocities that are smaller than the maximum ejection speed, which would instead require $R_B \\sim 1.3$ to $1.6 R_s$.\nThis conclusion is backed by data previously published in the literature, suggesting that Buller's drop radius does indeed scale with spore dimensions ($R_B \\sim 0.32 R_s$) and is smaller than the value that maximizes ejection speed. \nFurther data monitoring spore, gill and Buller's drop morphologies and densities at the same time are needed to establish how close species are to maximum packing.\nAll data to date are consistent with the hypothesis of maximum spore packing, suggesting that Buller's drop radius is finely tuned to control range and speed. How this fine tuning might function, in a process that is purely extracellular, in the face of fluctuations in the environmental conditions remains a fascinating question for future research.\n \n\n\n\\begin{table*}\n\\caption{\\footnotesize List of collected species, the locations where these specimens were collected, the number of spores imaged and analyzed, and the corresponding symbols used in Figures~\\ref{fig:maxpack} and \\ref{fig:avspores}-\\ref{fig:avintergill}.}\n\\label{tab:species}\n\\begin{tabular}{p{0.3\\textwidth}p{0.38\\textwidth}p{0.1\\textwidth}p{0.1\\textwidth}}\n\\hline\n{\\footnotesize \\bf Collected species} & {\\footnotesize \\bf Location} & {\\footnotesize \\bf \\# spores}& {\\footnotesize \\bf symbol}\\\\\n\\hline\n{\\footnotesize \\emph{Camarophyllus borealis}}& {\\footnotesize Huron Mountain Club }& 231 & $\\pmb \\RHD$\\\\\n{\\footnotesize \\emph{Cortinarius caperatus}}& {\\footnotesize Huron Mountain Club }& 1180 & $\\pmb\\bigvarstar$\\\\\n{\\footnotesize \\emph{Amanita lavendula}}&{\\footnotesize Huron Mountain Club }& 155 & $\\pmb\\times$\\\\\n{\\footnotesize \\emph{Armillaria mellea sp. complex}}& {\\footnotesize Huron Mountain Club }& 301 &$\\pmb\\bigstar$\\\\\n{\\footnotesize \\emph{Armillaria mellea sp. complex}}& {\\footnotesize Huron Mountain Club }& 257 & { $\\pmb\\LHD$}\\\\\n{\\footnotesize \\emph{Mycena sp.}} & {\\footnotesize UW-Madison Lakeshore Natural Preserve }& 530 & {\\Large$\\pmb\\bullet$}\\\\\n{\\footnotesize \\emph{Russula sp.}} & {\\footnotesize UW-Madison Lakeshore Natural Preserve }& 1053 & {\\Large $\\pmb\\blackdiamond$}\\\\\n{\\footnotesize \\emph{Galerina marginata\/autumnalis}} & {\\footnotesize UW-Madison Lakeshore Natural Preserve }& 1159 & {\\Large$\\pmb\\sqbullet$}\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\n{\\small\n\\section*{\\bf \\large Materials and methods.}\n\\noindent{\\bf Data collection and published data}\\quad Between the 15th and 17th of September, 2017 we collected mushrooms from lands owned by the Huron Mountain Club, in the Upper Peninsula of Michigan. On the 15th of October, 2017 we collected mushrooms from the University of Wisconsin-Madison Lakeshore Natural Preserve. We collected opportunistically, taking any mushroom that appeared in good shape, but focusing on gilled (not pored) fungi. Unfortunately we were collecting during a particularly dry period; nonetheless, we collected specimens of eight morphologically identified species, listed in Tab.~\\ref{tab:species}. \nWe integrated our data with data from the literature where spore dimensions and the radius of Buller's drop were reported~\\citep{aerodynamics_ballisto,thesis_jessica,pringle2005,Stolze-Rybczynski2009}. \\\\\n\n\\noindent{\\bf Preparing specimens for morphometrics}\\quad On the same day mushrooms were collected, caps were separated from stipes using a scalpel and left face down for 8 to 12 hours on a piece of paper covered with aluminum foil in order to create spore prints. Spore prints are generated when spores fall from the gills and settle directly underneath. They reflect the morphology of each collected specimen, and the location of stipes and patterns of gill spacing are easily visualized. Spore prints were carefully wrapped in wax paper and taken back to the Pringle laboratory at the University of Wisconsin-Madison. To image spores, three small pieces of aluminum foil, each measuring approximately 1mm x 1mm, were cut (i) from close to each stem, (ii) equidistant between the stem and the cap edge and (iii) from near the edge of each cap. Spores were washed off the foil and suspended in a Tween 80 $0.01 \\% $ vol solution; 15 $\\mu$l of each spore suspension was then spread onto a glass slide and the spores imaged. Microscope slides were sealed with nail polish in order to avoid evaporation of the Tween solution and consequent movement of spores during the imaging. To measure the distance between gills, a photograph of each cap's underside, with a ruler included in the photograph, was taken immediately after spore printing using a Canon EOS400D. After spore printing and photography, collected mushrooms were dried in a mushroom dryer and stored in the Pringle laboratory.\\\\\n\n\\noindent{\\bf Identification of species using DNA barcoding}\\quad To identify the taxa of the sporocarps, we extracted DNA with a NaOH extraction method modified from Wang et al. (1993) and amplified the internal transcribed spacer. Specifically, the tissues of the sporocarps were ground finely with a pestle in $40 \\mu l$ of 0.5 M NaOH and\ncentrifuged at 13,000 rpm for 10 min.
Five microliters of supernatant was transferred to\n$495 \\mu l$ of 100 mM Tris-HCl (pH 8) and centrifuged at 13,000 rpm for another minute.\nTo amplify the internal transcribed spacer, $1 \\mu l$ of extracted DNA was mixed with $1 \\mu l$ of $10 \\mu M$ ITS1F (5'-CTT GGT CAT TTA GAG GAA GTA A-3'), $1 \\mu l$ of $10 \\mu M$ ITS4 (5'-TCC\nTCC GCT TAT TGA TAT GC-3'), $12.5 \\mu l$ of Econotaq plus green 2x master mix (Lucigen,\nWisconsin), and $9.5 \\mu l$ of nuclease-free water. The reaction mixtures were incubated at\n$95^{\\circ}$C for 5 min, followed by 30 rounds of amplification, each including (1)\ndenaturation at $95^{\\circ}$C for 30 s, (2) primer annealing at $50^{\\circ}$C for 30 s and (3) elongation at $72^{\\circ}$C for 60 s. The reaction ends with 7 min of additional elongation\nat $72^{\\circ}$C and pauses at $4^{\\circ}$C. Amplified internal transcribed spacers were cleaned,\nSanger-sequenced by Functional Biosciences (Wisconsin) and deposited in the GenBank\ndatabase (https:\/\/www.ncbi.nlm.nih.gov\/).\\\\\n\n\\noindent{\\bf Microscopy and image analysis for spore geometry.}\\quad Microscope images of spores were taken and recorded at the Newcomb Imaging Center at the University of Wisconsin-Madison. Spores were imaged either individually or in groups, depending on whether a particular microscope field of view housed one or more than one spore, using Zeiss Elyra LSM 780 and Zeiss LSM 710 confocal microscopes. Spores were not stained as all species collected proved to be autofluorescent. The laser wavelength used to excite autofluorescence was 405 nm. The average area and average radius of the spores of each species were then calculated using an image analysis tool implemented in ImageJ v.~1.51. The pixel dimension in $\\mu$m was obtained from the microscope and the image converted to greyscale (8-bit or 16-bit). Thresholding was done in ImageJ to convert the greyscale image to a binary image, highlight all the spores to be counted and measure the area of each spore as shown in Figure~\\ref{fig:collection}C-D. Spores touching other spores were not measured, nor were particles smaller than $2 \\mu m^2$. Particles bigger than $ 2 \\mu m^2$ were identified either as spores or non-spores by eye. \\\\\n\n\\noindent{\\bf Image analysis for gill distance.}\\quad The distance between gills was measured based on the cross section of gills at various distances from the center of the cap. Images were processed with ImageJ v1.51 and then analyzed with a custom-made Matlab R2017b script. \nWe first used ImageJ v1.51 to open each picture, set the pixel length in mm using the image of the ruler and convert images to greyscale (8-bit or 16-bit). The Oval Profile plugin was implemented to obtain the grey scale profile along oval traces, drawn manually around the mushroom cap center. The area of the ovals was measured to calculate the average distance from the cap center, which was used to convert the distance between gills from radians to mm. The greyscale is sampled at 3600 equally spaced points along the oval. \nThe grey scale profile obtained from ImageJ was imported into Matlab and analyzed with the function Findpeaks to first identify the centers of the gills as the peaks in the greyscale image. Peaks that were closer than 0.3$^\\circ$ were discarded as noise. Visual inspection was applied to check that minor peaks did correspond to gills. Additionally, we quantified gill thickness as the width of the peak, defined as the distance where the grey value drops half way below the peak prominence, which is a measure of peak height. Distance between two gills is defined as the distance between their centers minus the half widths of the two gills.
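\nFor reference, a minimal Python analogue of this peak analysis, using \\texttt{scipy.signal}'s \\texttt{find\\_peaks} and \\texttt{peak\\_widths} in place of Matlab's \\texttt{findpeaks}, is sketched below; the function name and the omission of the visual-inspection step are simplifications.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.signal import find_peaks, peak_widths\n\ndef gill_distances(gray, radius_mm, min_sep_deg=0.3):\n    # gray: grey values sampled at 3600 equally spaced points along an\n    # oval of mean radius radius_mm; returns intergill distances in mm\n    pts_per_deg = len(gray) / 360.0\n    peaks, _ = find_peaks(gray, distance=min_sep_deg * pts_per_deg)\n    # peak width at half the prominence, as described in the text\n    widths = peak_widths(gray, peaks, rel_height=0.5)[0]\n    centers = np.diff(peaks)                           # center-to-center gaps\n    gaps = centers - 0.5 * (widths[:-1] + widths[1:])  # subtract half widths\n    return np.deg2rad(gaps / pts_per_deg) * radius_mm  # arc length in mm\n\\end{verbatim}\n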
\n\n}\n\n\n\\section*{Acknowledgements} \nThis work was supported by the Agence Nationale de la Recherche Investissements d'Avenir UCA$^{\\textrm{\\sc \\sf \\tiny JEDI}}$ \\#ANR-15-IDEX-01, by CNRS PICS ``2FORECAST'', by the Thomas Jefferson Fund, a program of FACE, and by the Global Health Institute - University of Wisconsin-Madison. We would also like to thank the Huron Mountain Club for its kind hospitality and Sarah Swanson for all her help and discussions about confocal microscopy. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nGamma-ray bursts (GRBs) are sudden releases of gamma-rays, lasting from milliseconds to thousands of seconds. The typical energy range\nof GRBs is from tens of keV to several MeV. The soft gamma-ray\/hard X-ray emission is usually called the ``prompt emission\"\nof GRBs. According to the classification criterion based on the duration distribution \\citep{1993ApJ...413...L101}, short gamma-ray\nbursts (sGRBs), for which T$_{90}<2$ s, likely originate from the mergers of compact binaries involving Double Neutron Star\n(DNS) and Neutron Star Black-Hole (NSBH) systems \\citep{1986ApJ...308...L43, 1989nature...340...126, 1992ApJ...395L..83N}. The prompt\nemission properties of sGRBs are distinguished from those of long GRBs (lGRBs); for example, sGRBs are found to have negligible spectral lag\n\\citep{2000APJ...534...248, 2006MNRAS...373...729} and harder spectra than lGRBs \\citep{1993ApJ...413...L101}.\nSwift\/BAT (the Burst Alert Telescope) has detected about 120 sGRBs in the past $\\sim$14 years since it was successfully launched in November 2004, and has achieved several important breakthroughs in the study of the prompt emission and the early afterglow \\citep{2004ApJ...611...1005, 2016ApJ...829...7}.\n\nIn the internal shock model, as the jet propagates, a faster shell catches up with a slower one and they interact in the form of an\ninternal shock, which is thought to cause the prompt emission of GRBs \\citep{1994APJ...430...L93, 1997APJ...490...92, 1999PR...308...L43}. Although the light curves of the prompt emission of GRBs have very irregular and complex structures, some of them can be decomposed into individual or overlapping pulses which contain the key information about internal\nenergy dissipation and radiative mechanisms \\citep{1996APJ...459...393, 2005APJ...627...324}. In general, the prompt gamma-ray emission\nof GRBs may consist of various emission episodes, including precursors, main peaks and\nextended emissions (EEs), or parts of them. In some GRBs, all three components are bright enough to be detected easily. For instance, the extraordinarily bright GRB 160625B was found to have three\nemission episodes separated by long quiescent intervals \\citep{2018Nature...2...69}.\n\n\\cite{1974ApJ...194...L19} reported a probable precursor prior to the three main impulses of a burst, with a 3.1 $\\sigma$ deviation from the\nbackground. They pointed out that the burst was not initiated by its most\nexplosive phase.
The precursor might come from the photospheric emission and have a black\nbody-like spectrum \\citep[e.g.,][]{1991Nature...350...592,2000ApJ...543...L129, 2002MNRAS...336...1271,2007MNRAS...380...621}.\nFor lGRBs, the jet launched by the central engine makes its way out of the stellar envelope of the progenitor star and releases\nthermal emission as a shock breakout precursor \\citep{2002APJ..331...197}. A similar shock breakout precursor could occur in sGRBs when the progenitor system is a NS-NS system or a magnetar, if a dense wind is released\nfrom the central engine \\citep{2014ApJ...788...L8}. \\cite{2007APJ...670...1247} discussed that\nthe central engine might undergo a second collapse and the precursor may be due to an initial weak jet. They reported that some of the initial\njet material falls back onto the central engine after the jet manages to penetrate the stellar mantle, and is then accreted by\nthe central engine. The core collapse or\nbinary merger could produce a temporarily stable intermediate object, a ``spinar\", which supports a large range of precursor energies \\citep{2008MNRAS...383...1397, 2009MNRAS...397...1695}. Different precursor models can be used to explain the diversity of the quiescent period timescales, the spectral and temporal properties, and so on. Observationally, no significant differences have been found either\nbetween the precursor and the main peak or between the GRBs with and without precursors\n\\citep[e.g.,][]{2010ApJ...723...1711, 2014ApJ...789...145}. Moreover, only mild or no obvious correlations have been reported between\nthe precursor and the main peak \\citep[e.g.,][]{1995ApJ...452...145, 2005MNRAS...357...722, 2008APJ...685...L19, 2009AA...505...569,\n2015MNRAS...448...2624}. By reanalysing the observed pulse light curves, one can check the previous findings or exclude some untenable theoretical models for the origin of the precursors.\n\nThe EE component, a soft $\\gamma$-ray emission or\nhard X-ray afterglow occurring after the main prompt emission, is another important messenger. A post-burst emission component was found to be a\nfeature of some BATSE bursts and was interpreted as a hard X-ray afterglow \\citep[e.g.,][]{2001AA...379...L39,2002APJ...567...1028,2005Science...309...1833}.\n\\cite{2002Mazets} discussed a special kind of ``short\" burst, the initial emission of which was spike-like and accompanied\nby low-intensity EE for tens of seconds. \\cite{2006Nature...444...1044} found that the temporal lag and peak luminosity of the long\nGRB 060614 were similar to those of the sGRBs.\nBy studying a large BATSE sample, \\cite{2006APJ...643...266} found that a handful of GRBs were somewhat similar to GRB 060614.\nThey showed that the extended components were always softer than\nthe initial spikes. Note that GRB 060614 is a sGRB-like long burst with the EE component \\citep{2006Nature...444...1053}, which is very similar to GRB 050709, a short burst with EE lasting about 130 s. Interestingly, both of them are found to be associated with a macro-nova and to share the same origin \\citep{2015ApJ...811...L22,2016NC...7...12898}.\nAlthough the EE tails within some GRBs have been confirmed,\ntheir physical natures are still under debate. For example, the sGRBs with EE tail may be produced by the formation and early evolution of\na highly magnetized, rapidly rotating neutron star \\citep[e.g.,][]{2008MNRAS...385..1455,2012MNRAS...419...1537B}.
\\cite{2011MNRAS...417...2161} proposed\na short-duration jet powered by heating due to $\\nu$$\\tilde{\\nu}$ annihilation and a long-lived Blandford-Znajek jet to describe\nthe initial pulses (IP) and the EE segment. The lifetime of the accretion process, which is divided into multiple emission episodes\nby the magnetic barrier, may be prolonged by radial angular momentum transfer, and the accretion disk mass is critical for producing\nthe observed soft EE \\citep{2012Apj...760...63}. \\cite{2017MNRAS...470...4925} explained the EE tail with a process of fallback accretion onto a newborn magnetar. The temporal and spectral characteristics of the IP and the EE of the sGRBs, or of the sGRBs with\/without EE, have been extracted and compared \\citep[e.g.,][]{2010APJ...717...411, 2011APJ...735...23, 2013MNRAS...428...1623,\n2015MNRAS...452...824, 2015ApJ...811...4, 2016ApJ...829...7, 2018MNRAS...481...4332}. \\cite{2017ApJ...846...142} argued that the EE components in general reflect the central engine activities rather than the external environments associated with afterglows. It is therefore intriguing to\nexamine whether the observed temporal properties of the EE tails are similar or dissimilar to those of the precursors\nor the main peaks.\n\nFor the first time, we systematically investigate the temporal properties of the fitted sGRB pulses in the third Swift\/BAT catalog and present a joint temporal analysis of the three prompt emission components across four energy channels in one-component and two-component sGRBs.\nThe data preparation and sample selection are given in Section 2. The results are presented in Section 3, in which we pay special attention to a direct comparison with\nour recent results for BATSE sGRBs (Li et al. 2020, hereafter paper I).\nFinally, we discuss and summarize the results in Sections 4 and 5, respectively.\n\\section{SAMPLE PREPARATION} \\label{sec:DATE PREPARATION }\n\\subsection{Data and Method}\nWe construct our initial sGRB sample using the parameter T$_{90}$ from the third Swift GRB Catalog from December 2004 to July 2019, which comprises nearly all the sGRBs detected by Swift\/BAT. The sample contains 124 sGRBs, additionally including sGRB 090510 \\citep[e.g.,][]{2010ApJ...720...1008, 2010ApJ...723...1711,\n2010ApJ...716...1178, 2013ApJ...772...62}, sGRB 050724 \\citep[e.g.,][]{2007APJ...655...989,2007PTRSLS...365...1281}\nand GRB 060614 \\citep[e.g.,][]{2006Nature...444...1053,2006Nature...444...1044, 2007APJ...655...L25, 2013MNRAS...428...1623,\n2014ApJ...789...145, 2015ApJ...811...4, 2016ApJ...829...7}.\n\nThe mask-weighted light curve data of the sGRBs are taken from\nthe Swift website \\citep{2016ApJ...829...7}\\footnote{\\url{https:\/\/swift.gsfc.nasa.gov\/results\/batgrbcat\/}} for four energy channels, labeled Ch1 (15-25 keV), Ch2 (25-50 keV), Ch3 (50-100 keV), and Ch4 (100-350 keV). We calculate the background noise (1$\\sigma$) and define an effective sGRB signal at a level of S\/N $>$3. Although an increase in bin size can reduce the\nlevel of background noise fluctuations, it might change the potential pulse structure.\nBecause the total BAT energy band with good localization is relatively narrow\nand the signals of sGRBs are relatively weaker than those of the lGRBs, we fit the sGRB pulses of the different energy bands with a small bin size of 8 ms, until the potential GRB signals can be identified significantly, except for some sGRBs with precursor or EE, i.e., GRB 071112B and GRBs 060614, 150101B and 170817A.
For these four GRBs, we fit the pulse light curves with other bin sizes\nof 2 ms, 16 ms, 64 ms or 1 s instead. Several points need caution for these GRBs. First, the weak precursor or EE pulse structures must all be identifiable in the corresponding energy bands. Second, the detection points of the effective GRB signal should be numerous enough to ensure a successful fit. In addition,\nfor possible weak signals, we combine adjacent energy channels into one channel in order to increase the statistical reliability.\n\nConsidering the typical durations\nand noise levels of sGRBs, the mask-weighted light curve data are extracted from\n1$-$2T$_{90}$ prior to the BAT trigger time to 2$-$3T$_{90}$ posterior to the trigger time to\nfurther enhance the fitting accuracy. The detailed methods to identify the pulse numbers of a burst have been described in paper I. The pulse shapes depend on the final choice of fit. In this study, we have\nused the least chi-square criterion together with a residual analysis to evaluate the goodness of our fits, as done in paper I and other previous works \\citep[e.g.,][]{1996APJ...459...393,2005APJ...627...324,2003APJ...596...389,Peng2006,2011ApJ...740...104,2019ApJ...876...89}. In principle, this high-dimensional nonlinear regression fit could instead be performed using unbinned maximum likelihood estimation based on the photon arrival times \\citep{2002Fraley,2000McLachlan,2008McLachlan,2014Tartakovsky}. We will apply the powerful EM method to study how many pulses lie within a burst in a subsequent paper.\n\nSeveral authors have proposed relatively simple functions to describe the pulse light curves of GRBs \\citep{2002APJ...566...210, 2003APJ...596...389,\n1996APJ...459...393, 2005APJ...627...324, 2007APJ...662...1093, 2007ApJ...670...565, 2016ApJS..224...20Y}. Among these functions, the ``KRL'' function provides the most flexible profiles of individual GRB pulses \\citep{2003APJ...596...389} and can be written as \\begin{equation}\\label{equation:1}\nf(t)=f_m(\\frac{t+t_0}{t_m+t_0})^r[\\frac{d}{d+r}+\\frac{r}{d+r}(\\frac{t+t_0}{t_m+t_0})^{(r+1)}]^{-(\\frac{r+d}{r+1})},\n\\end{equation}\nwhere \\emph{r} and \\emph{d} determine the rise and the decay shapes of an individual pulse, $f_m$ represents the peak flux, $t_m$ is the peak time, and $t_0$ is the offset from the pulse start time to the trigger time. The five parameters in the ``KRL'' model have also been given a theoretical interpretation by \\cite{2005MNRAS...363...1290}. As applied in our paper I, this empirical function will be utilized again in this study.\n\nTo investigate the prompt emission mechanisms or classify GRBs, many temporal properties of GRB pulse light curves have been studied \\citep[e.g.,][]{1996APJ...459...393, 2005APJ...627...324, 2001AA...380...L31, 2002AA...385...377,2003APJ...596...389,2007APSS...310...19,2011ApJ...740...104, 2014ApJ...783...88,2015ApJ...815...134,2018ApJ...855...101,2019ApJ...883...70,2019-190510440}, but bimodal distributions are still preferred \\citep[e.g.,][]{1993ApJ...413...L101,2008AA...484...293,2016APSS...361...257,2017APSS...362...70,\n2018ApSS...363...223,2016MNRAS...462...3243,2018PASP...130...054202,2015AA...581...29,2019ApJ...870...105,2019ApJ...887...97}.\nIn this study, the pulse properties including peak amplitude (f$_m$), peak time (t$_m$), full width\nat half maximum (FWHM), rise time (t$_r$) and decay time (t$_d$) as well as asymmetry (t$_r$\/t$_d$) will be investigated in detail for different kinds of Swift sGRBs. The systematic errors of the pulse measurements are estimated with error propagation using the same methods described in paper I, following \\cite{2006ChJAA...6...312}. In particular, if the diverse parameters of the pulse shapes had strong covariance, the relationships among these pulse properties could not reflect the underlying behaviors. Fortunately, we calculate the covariance matrix and the correlation matrix and find that there are no significant correlations between the different pulse parameters derived from our fits.\n\n\\subsection{Selection Criteria of Precursor and EE Candidates }\n\\cite{1995ApJ...452...145} concluded that only 3\\% of their 1000 BATSE\nGRBs show a precursor. \\cite{2010ApJ...723...1711} found that 8\\%-10\\% of Swift\/BAT sGRBs display a precursor. Further studies showed that roughly 5\\% of Fermi\/GBM sGRBs and 18\\% of Fermi\/GBM lGRBs have precursors \\citep{2015PHD...???...???}. \\cite{2017ApJ...43...1} find precursors in less than 0.4\\% of\nthe SPI-ACS\/INTEGRAL sGRBs. There is no obvious objective criterion to define a ``precursor''. In general, the peak flux of a precursor is smaller than that of the main event, while its flux falls below the background level before the start\nof the main event \\citep[e.g.,][]{2008APJ...685...L19, 2010ApJ...723...1711}. \\cite{2014ApJ...789...145} pointed out that precursors can be triggered or non-triggered events. Considering all the above aspects, we identify a significant precursor\nwhen it fulfills the following three conditions:\n\n(1) The precursor is effective, i.e., the detection points are at least 3$\\sigma$ above the background in the whole energy range of 15-350 keV.\n\n(2) The precursor includes at least three detection points and its peak flux is smaller than that of the main peak (see also \\citealt{2008APJ...685...L19,2010ApJ...723...1711}).\n\n(3) The precursor can be detected prior or posterior to the BAT trigger time. Simultaneously, the quiescent period between the precursor and the main peak is well defined (or the precursor flux has fallen to the background level when the main peak starts) (see also \\citealt{2008APJ...685...L19,2010ApJ...723...1711}).\n\nFigures \\ref{fig:precursor1} and \\ref{fig:precursor2} show six single-peaked sGRBs (SPs) and three double-peaked sGRBs (DPs) with precursors. In total, 25 precursor pulses across different energy bands have been successfully identified.\nIt is worth pointing out that two precursors were reported in GRB 090510 \\citep{2010ApJ...723...1711}. In this work, however, only the precursor occurring at t $\\sim$ 0.5 s prior to the main peaks, confidently discriminated as a real precursor and also detected by Fermi\/GBM in the higher energy bands \\citep{2009Nature...462...331,2010ApJ...723...1711}, will be carefully reanalyzed.
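\nTo make the fitting procedure concrete, the following is a minimal sketch of the ``KRL'' pulse model of Equation~\\ref{equation:1} together with a weighted least-squares fit. The use of \\texttt{scipy.optimize.curve\\_fit} and the initial guesses are illustrative assumptions; our actual fits use the least chi-square criterion with the residual analysis described above.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef krl(t, fm, tm, t0, r, d):\n    # KRL pulse shape of Eq. (1); valid for t > -t0\n    x = (t + t0) / (tm + t0)\n    return fm * x**r * (d/(d + r)\n                        + (r/(d + r)) * x**(r + 1))**(-(r + d)/(r + 1))\n\ndef fit_pulse(t, flux, err):\n    # t, flux, err: bin centers, mask-weighted rates and 1-sigma errors\n    p0 = [flux.max(), t[np.argmax(flux)], 0.1, 2.0, 2.0]  # illustrative guesses\n    popt, pcov = curve_fit(krl, t, flux, p0=p0, sigma=err,\n                           absolute_sigma=True, maxfev=10000)\n    return popt, np.sqrt(np.diag(pcov))  # best-fit parameters and errors\n\\end{verbatim}\n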
Figures \\ref{fig:precursor1} and \\ref{fig:precursor2} show six single-peaked sGRBs (SPs) and three double-peaked sGRBs (DPs) with precursors. In total, 25 precursor pulses across different energy bands have been successfully identified.\n It is worth pointing out that two precursors were found in GRB 090510 \\citep{2010ApJ...723...1711}. In this work, however, only the one occurring at t $\\sim$ 0.5 s prior to the main peaks, which was confidently identified as a real precursor by the Fermi\/GBM in the higher energy bands \\citep{2009Nature...462...331,2010ApJ...723...1711}, is carefully reanalyzed. We stress that this is a very challenging detection, since most precursors are very faint and are generally detected before the main outbursts only in the lower energy channels.\n \\begin{figure*}\n\\centering\n\\gridline{\n\\fig{pre060502B.pdf}{0.5\\textwidth}{(a)}\n\\fig{pre071112B.pdf}{0.5\\textwidth}{(b)}\n }\n\\gridline{\n\\fig{pre100702A.pdf}{0.495\\textwidth}{(c)}\n\\fig{pre160408A.pdf}{0.5\\textwidth}{(d)}\n }\n\\gridline{\n\\fig{pre160726A.pdf}{0.5\\textwidth}{(e)}\n\\fig{pre180402A.pdf}{0.5\\textwidth}{(f)}\n }\n\\caption{Examples of pulses in the Pre+SPs. For comparison, the pulses of the two individual channels and the combined energy channel for each sGRB are analyzed. The horizontal dotted black lines mark a 3$\\sigma$ confidence level. \\label{fig:precursor1}}\n\\end{figure*}\n \\begin{figure*}\n\\centering\n\\gridline{\n \\fig{pre081024A.pdf}{0.5\\textwidth}{(a)}\n \\fig{pre090510.pdf}{0.5\\textwidth}{(b)}\n }\n \\gridline{\n \\fig{pre100625A.pdf}{0.5\\textwidth}{(c)}\n}\n\\caption{Examples of pulses in the Pre+DPs. For comparison, the pulses of the two individual channels and the combined energy channel for each sGRB are analyzed. The horizontal dotted black lines mark a 3$\\sigma$ confidence level.\\label{fig:precursor2}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{ee050724.pdf}{0.5\\linewidth}{(a)}\n\\fig{ee051221A.pdf}{0.5\\textwidth}{(b)}\n }\n \\gridline{\n \\fig{ee060614.pdf}{0.5\\textwidth}{(c)}\n \\fig{ee150101B.pdf}{0.5\\textwidth}{(d)}\n }\n \\gridline{\n \\fig{ee130603B.pdf}{0.5\\textwidth}{(e)}\n \\fig{ee170817A.pdf}{0.5\\textwidth}{(f)}\n }\n\\caption{Examples of pulses in the SPs+EE and DPs+EE (GRB 130603B). For comparison, the pulses of the two individual channels and the combined energy channel for each sGRB are analyzed. The horizontal dotted black lines mark a 3$\\sigma$ confidence level. \\label{fig:EE}}\n\\end{figure*}\n\n\n\\cite{2013MNRAS...428...1623} identified 11 of 256 BATSE GRBs with EE, unveiling a BATSE population of a new hybrid class of GRBs similar to\nGRB 060614. Using Bayesian Block (BB) methods, \\cite{2010APJ...717...411,2011APJ...735...23} found that $\\sim$25\\% of 51 Swift\/BAT sGRBs have an EE component. \\cite{2016ApJ...829...7} reported the fraction\nof sGRBs with EE to be 1.19\\% in the third Swift\/BAT catalog. Here, we extend our search to include the GRBs with EE tails\nreported in the literature \\citep[e.g.,][]{2006APJ...643...266,2017ApJ...846...142,Yu2020,Zhangxiaolu2020}. Following \\cite{2019ApJ...876...89}, we adopt the following three criteria:\n\n(1) The peak flux of the EE is well below that of the main peak.\n\n(2) The signal-to-noise ratios of the main peak and the EE tails should satisfy S\/N$>3$ above the background.\n\n(3) The EE parts must be brightest in the energy channels below 50 keV and too weak to be distinguished effectively in the higher energy channels, say 50-350 keV.\n\nWe stress that we would like to include as many sGRBs with EE as possible in our EE sample. Although the EE tails reflect long-lasting activities with a typical timescale of\n$\\lesssim$ 10$^3$ s \\citep[e.g.,][]{2001AA...379...L39, 2002APJ...567...1028, 2006APJ...643...266, 2008MNRAS...385..1455,2014ApJ...789...145, 2017ApJ...846...142},\nmost of them cannot be well fitted by a pulse function because of their low signal-to-noise ratios, as illustrated in \\cite{2018ApJ...855...101} and our paper I.
We therefore choose GRB 050724 \\citep{2005Nature...438...988,2010ApJ...723...1711,2016ApJ...829...7,2017ApJ...846...142,2018MNRAS...481...4332}, GRB 051221A\n\\citep{2006MNRAS...372...L19, 2011ApJ...734...35, 2017ApJ...846...142}, GRB 060614\n\\citep{2006Nature...444...1044,2006APJ...643...266}, GRB 130603B \\citep{2015ApJ...802..119K}, and GRB 150101B \\citep{2018NC...9...4089,2018APJL...863...L34, 2019ApJ...876...89, 2019MNRAS} as the subsample with EE. For comparison, the EE sample also includes the first gravitational-wave-associated burst, GRB 170817A, detected by the Fermi Gamma-ray Burst Monitor (GBM), whose main\npeak dominates over the energy\nrange of 50-300 keV and whose soft tail is stronger below 50 keV \\citep{2017ApJL...848...L14,ZhangBB2018,2019ApJ...876...89,2020MNRAS...492...3622}. Finally, twelve typical EE tail pulses across different energy bands are fitted (see Figure \\ref{fig:EE}). In total, Table \\ref{tableEE} lists 67 typical sGRBs with three components, including 10 precursor events ($\\sim$15\\%) and 17 EE events ($\\sim$25\\%). We note that there are no sGRBs with both the precursor and the EE components in our sample.\n\\section{RESULTS}\nIn this section, the pulse features of the main peaks are compared not only between one-component and two-component sGRBs but also between the SPs and the DPs. In addition, the correlations of the main peaks with either the precursors or the EEs in the two-component sGRBs are inspected. As done in paper I, the M-loose double-peaked sGRBs (Ml-DPs) and the M-tight double-peaked sGRBs (Mt-DPs) will be investigated separately. For convenience, we hereafter refer to the sGRBs with a precursor or EE as Pre+sGRBs or sGRBs+EE.\n\\subsection{Main Peaks}\n\\subsubsection{One-component versus Two-component sGRBs}\nIn order to investigate whether the temporal properties of the main peaks in one-component sGRBs differ from\nthose of two-component sGRBs, comparisons are displayed\nin Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM}.\nIn Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM} (a), we find that the main peaks of the one-component SPs tend to be similar to those of the two-component SPs. In Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM} (b), the main peaks of both the first pulses (1st) and the second pulses (2nd) of the two-component Mt-DPs tend to be similar to those of the one-component Mt-DPs. There are no two-component Ml-DPs in our sample. Note that Figures \\ref{fig:trtd}$-$\\ref{fig:fmFWHM} show no significant evolution with energy for either the SPs or the Mt-DPs. Thus we combine two adjacent channels into one channel for the two-component sGRBs and compare them with the one-component sGRBs in the two individual channels.
These comparisons suggest that the main peaks in either one-component or two-component sGRBs tend to show no significant differences and are\nlikely to share a similar physical mechanism.\n\n\\subsubsection{SPs versus DPs}\nIn order to reveal the individual or\ncollective temporal characteristics of the main peaks\nbetween the SPs and the DPs, we compare the analysis results in Figures \\ref{fig:trtd} (a) $-$ \\ref{fig:fmFWHM} (a) with the results of Figures \\ref{fig:trtd} (b) $-$ \\ref{fig:fmFWHM} (b) and \\ref{fig:trtd} (c) $-$ \\ref{fig:fmFWHM} (c).\n\nIt is found in Figure \\ref{fig:trtd} that the t$_r$ and the t$_d$ of the main peaks\nof all kinds of sGRBs are self-similar, with a\npower-law form of t$_r \\sim t_d$$^\\beta$.\nThe detailed fitting results\nare summarized in Table \\ref{tab:tabletrtd}. Interestingly, these results\nfor all kinds of sGRBs are consistent with\nthe BATSE sGRBs, especially for the SPs. For the\ntwo kinds of DPs, the power-law correlations are\ntighter or looser than those of the BATSE DPs.\n\nThe relations of the FWHM, the t$_m$ and the f$_m$ with\nthe asymmetry appear to be not evident in Figures\n\\ref{fig:FWHM-asy}$-$\\ref{fig:fm-asy}. It should be noted that the\nweak dependence of the t$_r$\/t$_d$ on the FWHM or the\nt$_m$ was found in the BATSE Ml-DPs, but not in the Swift\/BAT sGRBs. This may imply that these temporal parameters could evolve with the energy bands of the detectors.\n\nFigure \\ref{fig:tmFWHM} illustrates that the t$_m$ increases with the FWHM, following a power-law behavior t$_m \\sim FWHM^\\mu$, which is similar to the results found by \\cite{2005APJ...627...324} for the long-lag, wide-pulse BATSE\nGRBs and to our previous results for the BATSE sGRBs. The\ndetailed fitting results are summarized in Table \\ref{tab:tabletmFWHM}. The\nf$_m$ and the FWHM follow an anti-correlated power-law relation for the\nBATSE lGRBs \\citep{2002AA...385...377}, similar to our previous conclusion\nfor the BATSE sGRBs. Figure \\ref{fig:fmFWHM} shows that the\nf$_m$ is generally anti-correlated with the FWHM, with a\npower-law form of f$_m \\sim FWHM^\\nu$. The detailed fitting\nresults are summarized in Table \\ref{tab:tablefmFWHM}. It is worth pointing\nout that the power-law index of the SPs is consistent with that of the BATSE SPs.\nFor the two kinds of DPs, the power-law relations of the first pulses are tighter than those of the second pulses.\n\nIn Table \\ref{tab:tableasy}, we find that the asymmetries of the main peaks for all kinds\nof sGRBs range from 0.03 to 1.56, and the\nmean asymmetry is 0.79, which is almost equal\nto the value of the BATSE sGRBs and larger than the value\nof 0.65 for a sample of 100 bright BATSE sGRBs found\nby \\cite{2001AA...380...L31}. The result is in good agreement\nwith the value of 0.81 obtained by \\cite{2007APSS...310...19}.\nA really interesting result is that the mean asymmetry of the SPs is still very similar to the values\nof the 1st pulses of the two subclasses of the\nDPs, which we also found in the BATSE sGRBs. A K-S test\nof the cumulative distributions of the t$_r$\/t$_d$ between\nthe SPs and the 1st pulses of the Mt-DPs\ngives a p-value of 0.50, showing that they are not significantly different. Similarly, a K-S test of the distributions\nof the t$_r$\/t$_d$ between the SPs and the 1st\npulses of the Ml-DPs gives a p-value of 0.50, showing that they\nare also not significantly different.
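As a concrete illustration, such a two-sample Kolmogorov$-$Smirnov comparison can be run as sketched below; the asymmetry arrays are placeholders standing in for the measured t$_r$\/t$_d$ values, not our actual measurements.\n\\begin{verbatim}\nfrom scipy.stats import ks_2samp\n\n# placeholder t_r/t_d samples for the SPs and the 1st pulses of the Mt-DPs\nasym_sps = [0.55, 0.37, 0.61, 0.73, 0.65, 0.40, 1.21, 0.29]\nasym_mt_1st = [0.38, 0.61, 0.65, 0.79, 1.36, 0.17]\n\nstat, p_value = ks_2samp(asym_sps, asym_mt_1st)\n# a large p-value means the two cumulative distributions\n# are not significantly different\nprint(stat, p_value)\n\\end{verbatim}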
\\subsubsection{Dependence of Pulse Width on Energy}\n\nFigures \\ref{fig:singleFandE}$-$\\ref{fig:mlFandE} show the dependence of the pulse width\n(FWHM) on the average photon energy,\nwith the form FWHM $\\sim E^\\alpha$, for the SPs and the\ntwo kinds of DPs. In Figure \\ref{fig:singleFandE}, except for\nGRB 070810B, the power-law indexes of the 16 SPs are negative and the mean value is $\\alpha\\simeq$ $-$0.32 $\\pm$ 0.02,\nwhich is in close proximity to the value of $-$0.4 found by \\cite{1995APJ...448...L101} and \\cite{1996APJ...459...393} for long\nGRBs. Remarkably, the mean value is almost equal\nto the value of $-$0.32 for the BATSE SPs (see paper I). Figure \\ref{fig:single-index} shows the distribution of these power-law indexes of the 16 SPs. The detailed\nfitting results are summarized in Table \\ref{tab:index}.\n\nFor GRB 070810B, the value of the power-law index\nis $\\alpha\\simeq$ 0.05 $\\pm$ 0.16 (see Table \\ref{tab:index}), and the pulse shape evolution from the\nlow to the high energy channel is shown in Figure \\ref{fig:pulse evo} (1).\nThe values of t$_r\/t_d$ from the low to the high energy channel are 0.49, 0.60, 0.82\nand 0.60, respectively.\n\nIn Figures \\ref{fig:mtFandE} and \\ref{fig:mlFandE}, we find that most of the DPs have negative\npower-law indexes, except for GRB 130912A, whose\nlight curve shows two overlapping peaks \\citep{2013GCN...15216...1}. The mean values of these\nnegative power-law indexes are $\\alpha\\simeq$ $-$0.38 $\\pm$ 0.85 (1st)\nand $\\alpha\\simeq$ $-$0.45 $\\pm$ 0.33 (2nd) for the Mt-DPs and\n$\\alpha\\simeq$ $-$0.22 $\\pm$ 0.21 (1st) and $\\alpha\\simeq$ $-$0.42 $\\pm$ 0.13 (2nd) for the Ml-DPs, respectively. For the Mt-DP GRB 130912A, the\nvalues of the power-law indexes are $\\alpha\\simeq$ 0.13 $\\pm$ 0.12\nand $\\alpha\\simeq$ $-$0.12 $\\pm$ 0.11 for the 1st pulse and the 2nd\npulse, respectively. The pulse shape evolution from the low\nto the high energy channel is shown in Figure \\ref{fig:pulse evo} (2). The\nvalues of t$_r$\/t$_d$ are 0.67, 0.65, 0.82 and 0.66 for the 1st\npulses and 1.39, 1.38, 1.44 and 1.37 for the 2nd pulses.\nWe can see that there is no obvious shape\nevolution for either the 1st or the 2nd pulses.
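For reference, a power-law index of this kind can be estimated from the per-channel widths by a straight-line fit in log-log space, as in the minimal sketch below; the energies and widths are placeholders, not our measurements.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import linregress\n\n# placeholder per-channel average photon energies (keV) and FWHMs (s)\nenergy = np.array([20.0, 40.0, 75.0, 150.0])\nfwhm = np.array([0.20, 0.16, 0.13, 0.10])\n\n# FWHM ~ E^alpha becomes a straight line in log-log space\nfit = linregress(np.log10(energy), np.log10(fwhm))\nalpha, alpha_err, r = fit.slope, fit.stderr, fit.rvalue\nprint(alpha, alpha_err, r)  # index, its 1-sigma error, Pearson's r\n\\end{verbatim}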
\\subsection{Main Peaks versus Precursors or EEs}\n\nIn Figure \\ref{fig:preandee} (a1), we find weak positive correlations\nin the f$_m$ between the precursors and the main\npeaks. For the three Pre+DPs, we only check the\nfirst (1st) main peaks (main1) because the number\nof the second (2nd) main peaks is relatively\nlimited. Power-law fits across different energy bands\ngive $logf_{m,main1}$=$(0.85\\pm0.44)\\times logf_{m,pre}+(0.25\\pm0.18)$ with a Pearson's correlation coefficient of r=0.59\n(15-350 keV), $logf_{m,main1}$=$(0.87\\pm0.59)\\times logf_{m,pre}+(0.07\\pm0.41)$ with\nr=0.55 (15-50 keV), and $logf_{m,main1}$=$(0.41\\pm0.34)\\times logf_{m,pre}+(0.08\\pm0.20)$ with r=0.56 (50-350 keV). The results are marginally in agreement with the recent conclusion\ndrawn by \\cite{2019Zhong} for 18 sGRB candidates with precursors observed by Fermi\/GBM and Swift\/BAT.\n\nIn Figure \\ref{fig:preandee} (b1)(c1)(d1)(e1), mild correlations are\nfound in the t$_r$\/t$_d$, the FWHM, the t$_r$ and the t$_d$ for\nthe Pre+sGRBs, similar to the results of the ``Type I\"\nprecursors reported by \\cite{2015PHD...???...???}. Additionally, there\nare generally no events in the region to the lower right of the\nsolid line in Figure \\ref{fig:preandee} (c1)(d1)(e1). The FWHMs of the\nprecursors are found to be on average an order of magnitude\nsmaller than those of the main peaks. These\nresults all indicate that the widths of the main peak pulses\ntend to be wider than those of the precursor pulses for\nthe Pre+sGRBs, suggesting that the main peaks tend\nto last longer than the precursors, in agreement\nwith the result of \\cite{2019Zhong}.\n\nSimilarly, a positive correlation is found in the\nf$_m$ between the EE and the main peaks. For GRB\n051221A, which has two EE pulses, we take into account only the 1st\nEE pulse. For the Mt-DP+EE GRB 130603B, we also only consider\nthe first main pulse. The power-law fit gives\n$logf_{m,main1}$=$(1.16\\pm0.08)\\times logf_{m,EE1}+(0.78\\pm0.10)$\nwith a Pearson's correlation coefficient of r=0.97 (see\nFigure \\ref{fig:preandee} (a2)). The positive power-law index is larger than the result suggested by \\cite{2019ApJ...876...89} for\nthe Fermi sGRBs with soft tails similar to GRB 170817A.\nNote that the correlations in the f$_m$ between the\nEE and the main peaks among different energy channels\nshow no dependence on energy in Figure \\ref{fig:preandee} (a2);\nthus, all the first EE pulses across 15-350 keV are chosen\nto increase the statistical reliability.\n\nNo distinct correlations are found in the t$_r$\/t$_d$, the\nFWHM, the t$_r$ and the t$_d$, either for the Pre+sGRBs or for\nthe sGRBs+EE (see Figure \\ref{fig:preandee} (b2), (c2), (d2), (e2)).\nHowever, there are fewer events in the region to the upper left\nof the solid line in Figure \\ref{fig:preandee} (c2)(d2)(e2). These\nresults indicate that the widths of the main peak pulses\ntend to be narrower than those of the EE pulses for the\nsGRBs+EE, which is caused by both the rise and the decay times. In particular, we compare the temporal properties with those of GRB 170817A and find no obvious differences.\n\nAdditionally,\nFigure \\ref{fig:preandee} (a1)(a2) illustrates that the f$_m$ values of the main\npeaks are generally larger than those of the other two\ncomponents. On the other hand, the photon\nfluxes of the main\npeaks and the other two components seem to be positively correlated, indicating that the luminosities of the main\npeaks are linked to those of the precursors and the EE components, in agreement\nwith the results of \\cite{Zhangxiaolu2020} and \\cite{2019ApJ...876...89}. Moreover, this in turn hints that the three parts of the prompt gamma-ray emission could be produced by the same progenitor. For example, \\cite{2018Nature...2...69} recently studied the properties\nof an extraordinarily bright three-episode long GRB 160625B detected by Fermi. Although we have not\nfound three-episode sGRBs in our sample, the similarities may indicate that the two weak emission episodes\nare likely to exist, or, in other words, to be intrinsic.
The absence of the EE or precursor components might be related to the sensitivity or energy coverage of the current GRB detectors.\n\nIn particular, we check the dependence of the width\non the average photon energy and the pulse evolution\nmodes among different energy channels, not only\nfor the precursor pulse but also for the main pulse of the Pre+SPs, in Figure \\ref{fig:pulse evo2}. The values of t$_r$\/t$_d$ of the precursor\nin GRB 160726A are 0.67, 0.64, 0.91, 0.69, while those\nof the main pulse are 0.48, 0.84, 0.96, 1.21, corresponding\nto the channels from low to high, respectively. There is\nalmost no shape evolution across different energy bands\nfor the precursor pulse. However, the shape evolution of the\nmain pulse is very different.\n\n\\section{DISCUSSION}\nApart from the two regular evolution modes, ``MODE I\" and\n``MODE II\", which correspond to the sGRBs with\npositive and negative power-law indexes of the pulse width\nwith photon energy (see paper I), we found that the\nindexes of some sGRBs are marginally zero, including\nGRB 100206A, GRB 070810B, GRB 111117A (1st), GRB 101219A, GRB 130912A and GRB\n160726A (main). Interestingly, there are two possible cases for\nthese sGRBs. In one case, there is almost no\nshape evolution across different energy bands, for example,\nin GRB 100206A, GRB 070810B and GRB 130912A.\nIn the other case, the pulse shape of the main peak of GRB 160726A evolves from the low to the high channel\nin the inverse ``MODE II\" way. This evolution\nmode may be new for sGRBs. Therefore, it is worth\nsearching for the same effect in events from the Fermi\/GBM and HXMT\/HE catalogues to reach more robust\nconclusions in the future.\n\nOn 2017 August 17, GRB 170817A was observed independently\nby the Fermi GBM and the Anti-Coincidence Shield of the Spectrometer on the International\nGamma-Ray Astrophysics Laboratory \\citep{2017ApJL...848...L14,2017ApJL...848...L15,2017ApJL...848...L13}\n$\\sim$ 1.7 s posterior to the first binary neutron star (BNS)\nmerger event GW170817 observed by the Advanced\nLIGO and Virgo detectors \\citep{2017GCN21509}. The\njoint detection of GW170817\/GRB 170817A confirms\nthat at least some sGRBs indeed originate from the\nmergers of compact binaries. Hence, which sGRBs show\ncharacteristics similar to GRB 170817A has become a hot topic \\citep[e.g.][]{2018APJL...863...L34,2018ApJL...853...L10,2018NC...9...4089,2019ApJ...876...89,2019APJL...880...L63}. In our subsample with EE, the temporal\nproperties of GRB 150101B and GRB 050724, including\nthe apparent two-component signature and the EE\ntails, which are strongest below 50 keV and start\napproximately at the end of the main peak,\nare phenomenologically very similar in shape to those of\nGRB 170817A, strengthening the potential relation with\nGRB 170817A (see \\citealt{2018APJL...863...L34,2019ApJ...876...89} for similar conclusions).\n\n\\cite{2016MNRAS...461...3607} reported that a highly variable\nlight curve viewed on-axis becomes smooth\nand apparently single-pulsed when viewed off-axis.\nThey suggested that low-luminosity GRBs are consistent\nwith being ordinary bursts seen off-axis. \\cite{2019APJL...880...L63} investigated the outflow structure\nof GRB 170817A and found that 14 sGRBs share\nsimilar relativistic structured jets with GRB 170817A.\nThey modelled their afterglow light curves and generated\nthe on-axis light curve for GRB 170817A, which is\nconsistent with those of common sGRBs, as suggested by \\cite{2019AA...628...A18}.
Therefore,\nit would be very interesting to compare the on-axis prompt\nemission properties directly through further observations.\n\n\n\\section{CONCLUSIONS}\nWe studied nine sGRBs with precursors in our sample. In addition, five typical Swift sGRBs with EE and the Fermi GRB 170817A\nhave been analyzed. For the first time, we presented a joint temporal-property analysis of the fitted pulses\nof the main peak and the other two components in both one-component and two-component sGRBs. Our major results are summarized as follows:\n\n1. We confirm that the main peaks in either one-component or two-component sGRBs tend to show no significant differences and might be generated by a similar physical mechanism.\n\n2. We inspected the correlations among the temporal properties of the SPs and DPs and found that the results are essentially consistent with those of the\nCGRO\/BATSE sGRBs recently found in our paper I. For instance, the t$_r$ and the t$_d$ follow a power-law relation for the SPs and the DPs, except for the 2nd pulses of the Mt-DPs. There are no evident correlations between the asymmetry and the FWHM, the t$_m$, or the f$_m$, marginally similar to the results of the BATSE sGRBs. In particular, the temporal properties of the SPs have been found to be quite close to those\nof the 1st pulses of the sub-types of DPs, especially the asymmetry.\n\n3. The relations, FWHM $\\sim E^\\alpha$, between the width of the main peak and the average photon energy have been compared with the results discovered in paper I. The negative mean index of $\\alpha\\sim-$0.32 for the SPs is consistent with the\nresults for the BATSE SPs. Relative to the negative and positive energy correlations, $\\alpha$ is found to be marginally zero not only in the SPs but also in the DPs. We studied the pulse shape evolution from the low to the high energy channel for the sGRBs with $\\alpha$ in close proximity to zero. It is found that there is no obvious shape evolution in one case. In the other case, the pulse shape evolves from the low to the high channel in a new way, the inverse ``MODE II'', suggesting that there may be more pulse evolution\nmodes across adjacent energy channels in view of the Swift\/BAT observations.\n\n4. Furthermore, we studied the correlations of the main peaks with either the precursors or the EEs. No distinct correlations have been found in the asymmetry, the FWHM, the t$_r$ and the t$_d$, either for the Pre+sGRBs or for the sGRBs+EE. We found that the widths of the main peaks tend to be narrower than those of the EE pulses and wider than those of the precursors. In particular, we verified the power-law correlations in the f$_m$ of the three components, strongly suggesting that they originate from similar central engine activities. We also compared the\ntemporal properties of GRB 170817A with those of the other sGRBs+EE and found no obvious differences.\n\nOn the basis of these studies, we hope that our results can shed new light on the search for the possible connection among these three components. More observational data on the precursors and the EEs are needed in the future to constrain the current physical models in order to interpret the complex GRB light curves in the new era of satellites.\n\n\nAcknowledgements\n\nWe thank the referee for very helpful suggestions and comments.\nThis work makes use of data supplied by NASA's High Energy Astrophysics Science Archive Research Center (HEASARC).
It was supported by the Youth Innovations and Talents Project of Shandong Provincial Colleges and Universities (Grant No. 201909118) and the Natural Science Foundations (ZR2018MA030, XKJJC201901, OP201511, 20165660 and 11104161).\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{trtd.pdf}{1\\textwidth}{}\n }\n\\caption{The t$_r$ vs. $t_d$ of the sGRB pulses in the main peaks. The panels show (a) the SPs, (b) the Mt-DPs, (c) the Ml-DPs and\n(d) comparisons of the SPs and the 1st pulses in the two kinds of DPs.\nThe lines are the best fits, solid lines for the 1st main peaks and dotted lines for the 2nd ones, respectively.\n\\label{fig:trtd}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{FWHM-asy.pdf}{1\\textwidth}{}\n }\n\\caption{The FWHM vs. $t_r\/t_d$ of the sGRB pulses in the main peaks. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:FWHM-asy}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{tm-asy.pdf}{1\\textwidth}{}\n}\n\\caption{The $t_m$ vs. $t_r\/t_d$ of the sGRB pulses in the main peaks. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:tm-asy}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\n\\fig{fm-asy.pdf}{1\\textwidth}{}\n }\n\\caption{The $f_m$ vs. $t_r\/t_d$ of the sGRB pulses in the main peaks. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:fm-asy}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\n\\gridline{\n\\fig{tmFWHM.pdf}{1\\textwidth}{}\n }\n\\caption{The log$t_m$ vs. logFWHM of the sGRB pulses in the main peaks. The values of $t_m$ in this figure are positive. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:tmFWHM}}\n\\end{figure*}\n\\begin{figure*}\n\\centering\n\n\\gridline{\n\\fig{fmFWHM.pdf}{1\\textwidth}{}\n }\n\\caption{The $f_m$ vs. FWHM of the sGRB pulses in the main peaks. The symbols are the same as in Figure \\ref{fig:trtd}.\n\\label{fig:fmFWHM}}\n\\end{figure*}\n\n\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{single1.pdf}{0.45\\linewidth}{(a)}\n\\fig{single2.pdf}{0.45\\linewidth}{(b)}}\n\\gridline{\n\\fig{single3.pdf}{0.5\\linewidth}{(c)}}\n\\caption{The FWHM vs. average photon energy of\nthe 17 SPs. The solid line stands for the best power-law fit to the observations. \\label{fig:singleFandE}}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{single-index.pdf}{0.5\\linewidth}{}}\n\\caption{Distribution of the negative power-law indexes in $FWHM \\sim E^\\alpha$ for the 16 SPs. The vertical red dashed line shows the mean value of $\\alpha \\sim$ $-$0.32. \\label{fig:single-index}}\n\\end{figure*}\n\\begin{figure*}[ht]\n\\gridline{\n\t\\centering\n\\fig{double_101219A1.pdf}{0.3333\\linewidth}{(a)}\n\\fig{double_120804A.pdf}{0.3333\\textwidth}{(b)}\n\\fig{double_130912A1.pdf}{0.3333\\textwidth}{(c)}\n }\n\\caption{The FWHM vs. average photon energy of the Mt-DPs. Examples are presented for\n(a) GRB 101219A, (b) GRB 120804A and (c) GRB 130912A. The solid line stands for the best power-law fit to the observations. \\label{fig:mtFandE}}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{double_l_111117A.pdf}{0.3333\\linewidth}{(a)}\n\\fig{double_l_180204A1.pdf}{0.3333\\textwidth}{(b)}\n }\n\\caption{The FWHM vs. average photon energy of the Ml-DPs. Examples are shown for\n(a) GRB 111117A and (b) GRB 180204A. The solid line stands for the best power-law fit to the observations.
\\label{fig:mlFandE}}\n\\end{figure*}\n\n\n\\begin{figure*}[ht]\n\\centering\n\\gridline{\n\\fig{single070810Bpulse.pdf}{0.495\\linewidth}{(1) The pulse shape evolution of GRB 070810B}\n\\fig{130912Apulse.pdf}{0.5\\textwidth}{(2) The pulse shape evolution of GRB 130912A}}\n\\caption{The pulse shape evolution from the lower to the higher energy channels. The vertical\nblack dashed lines mark the peak time (t$_m$) of the main peaks in Ch1 (GRB 070810B) and Ch1+2 (GRB 130912A). The horizontal dotted black lines mark a 3$\\sigma$ confidence level. \\label{fig:pulse evo}}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\gridline{\\fig{fm-pre-m.pdf}{0.33\\textwidth}{(a1)}\n          \\fig{asy-pre-m.pdf}{0.33\\textwidth}{(b1)}\n          \\fig{FWHM-pre-m.pdf}{0.33\\textwidth}{(c1)}\n          }\n\\gridline{\\fig{fm-EE-m.pdf}{0.33\\textwidth}{(a2)}\n          \\fig{asy-EE-m.pdf}{0.33\\textwidth}{(b2)}\n          \\fig{FWHM-EE-m.pdf}{0.33\\textwidth}{(c2)}\n          }\n\\gridline{\\fig{tr-pre-m.pdf}{0.33\\textwidth}{(d1)}\n          \\fig{td-pre-m.pdf}{0.33\\textwidth}{(e1)}\n }\n\\gridline{\\fig{tr-EE-m.pdf}{0.33\\textwidth}{(d2)}\n          \\fig{td-EE-m.pdf}{0.33\\textwidth}{(e2)}\n }\n\\caption{Comparisons of the pulse properties of the main peaks to those of the precursors (panels a(1), b(1), c(1), d(1), e(1)) and the EE tails (panels a(2), b(2), c(2), d(2), e(2)). The solid black lines plotted\nin all panels denote where the pulse properties are equal. In panel a(1), the lines stand for the best power-law fits to the data of the first main peak and its precursor, the dotted line for Ch1+2+3+4, the dashed line for Ch1+2, and the dash-dotted line for Ch3+4. In panel a(2), the dotted line stands for the best power-law fit to the data of the main peak and its first EE pulse.\n\\label{fig:preandee}}\n\\end{figure*}\n\n\\begin{figure*}[!h]\n\\centering\n\\gridline{\n\\fig{pre160726AFandE.pdf}{0.5\\textwidth}{(1)}\n\\fig{160726Apulse.pdf}{0.5\\textwidth}{(2)}\n }\n\\caption{An example of the Pre+SPs. (1) The FWHM vs. average photon energy for GRB 160726A. (2) The pulse shape evolution of GRB 160726A from the lower to the higher energy channels. The vertical\nblack dashed lines mark the peak time (t$_m$) of the main peaks in Ch1. The horizontal dotted black lines mark a 3$\\sigma$ confidence level. \\label{fig:pulse evo2}}\n\\end{figure*}\n\n\\clearpage\n\\startlongtable\n\\begin{deluxetable*}{l| c|c|c|c|c|c|c}\n\\tablecaption{The sample of sGRBs with three emission components.
\\label{tableEE}}\n\\tablehead{\n\\emph{GRB}& T$_{90} (s)$ & $Redshift$ & $Energy$ $band$ & $Satellite$ & $Precursor$ & $Main peak$ & $EE$ }\n\\tabletypesize{\\small}\n\\startdata\n190326A&0.076&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n181123B&0.260&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n180727A&1.056&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n180718A&0.084&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n180402A&0.180&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n180204A&1.164&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n170817A&2.048&0.009783&8-350 keV&Fermi&$\\times$&$\\surd$&$\\surd$\\\\\n170428A&0.200&0.454&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n170325A&0.332&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n160726A&0.728&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n160612A&0.248&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n160601A&0.120&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n160408A&0.320&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n151229A&1.440&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n151228A&0.276&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150831A&0.920&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150710A&0.152&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150423A$^\\ddag$&0.216&1.394&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n150301A&0.484&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n150120A$^\\ddag$&1.196&0.460&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n150101B&0.012&0.1343&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n141212A&0.288&0.596&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n140930B&0.844&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n140903A&0.296&0.351&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n131004A$^\\ddag$&1.536&0.717&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n130912A&0.284&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n130626A&0.160&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n130603B&0.176&0.3565&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n130515A&0.296&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n120804A$^\\ddag$&0.808&1.3&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n120630A&0.596&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n120521A&0.512&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n120305A&0.100&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n111117A$^\\ddag$&0.464&2.211&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n111020A&0.384&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n110420B&0.084&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n101224A&0.244&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n101219A$^\\ddag$&0.828&0.718&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n101129A&0.384&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n100724A$^\\ddag$&1.388&1.288&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n100702A&0.512&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n100625A$^\\ddag$&0.332&0.452&15-350 keV&Swift&$\\surd$&$\\surd$&$\\surd$\\\\\n100206A&0.116&0.4068&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n100117A$^\\ddag$&0.292&0.915&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n091109B&0.272&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n090621B&0.140&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n090510$^\\ddag$&5.664&0.903&15-350 keV&Swift&$\\surd$&$\\surd$&$\\surd$\\\\\n090426$^\\ddag$&1.236&2.609&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n090417A&0.068&&15-350 
keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n081226A&0.436&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n081101&0.180&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n081024A&1.824&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n080426&1.732&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n071112B&0.304&&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n070923&0.040&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n070810B&0.072&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n070809&1.280&0.2187&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n061217&0.224&0.827&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n060614&109.104&0.1254 &15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n060502B&0.144&0.287&15-350 keV&Swift&$\\surd$&$\\surd$&$\\times$\\\\\n051221A&1.392&0.5464&15-350 keV&Swift&$\\times$&$\\surd$&$\\surd$\\\\\n051105A&0.056&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050925&0.092&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050813&0.384&0.722&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050724$^\\dagger$&98.684&0.257&15-350 keV&Swift&$\\surd$&$\\surd$&$\\surd$\\\\\n050509B&0.024&0.2249&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\\\\\n050202&0.112&&15-350 keV&Swift&$\\times$&$\\surd$&$\\times$\n\\enddata\n\\tablecomments{ The precursor (or EE) components of these sGRBs are too weak to be fitted successfully and are marked with $\\dag$ (or $\\ddag$).}\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{clcccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the power-law correlation between the t$_r$ and the t$_d$.\\label{tab:tabletrtd}}\n\\tablehead{\n\\emph{GRB}& $\\emph{N}$&$\\emph{Pearson's r}$ & \\emph{$R^2$}& $\\emph{$\\beta$}$}\n\\startdata\n\\hline\nSPs&157&0.82&0.67&0.86 $\\pm$ 0.05 \\\\\n\\hline\nMt-DPs 1st&18&0.79&0.59&0.94 $\\pm$ 0.18 \\\\\nMt-DPs 2nd&18&0.24&0&0.32 $\\pm$ 0.31 \\\\\n\\hline\nMl-DPs 1st&\t7&0.52&0.12&1.31 $\\pm$ 0.96 \\\\\nMl-DPs 2nd&\t7&0.93&0.83&1.07 $\\pm$ 0.20 \\\\\n\\hline\n\\enddata\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{clcccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the correlation of the logt$_m$ with the logFWHM. \\label{tab:tabletmFWHM}}\n\\tablehead{\n\\emph{GRB}& $\\emph{N}$&$\\emph{Pearson's r}$ & \\emph{$R^2$}& $\\emph{$\\mu$}$}\n\\startdata\n\\hline\nSPs&154*&0.78&0.60&0.93$\\pm$ 0.06\\\\\n\\hline\nMt-DPs 1st\t&15*&0.46&0.15&0.38 $\\pm$ 0.21\\\\\nMt-DPs 2nd\t&18*&0.43&0.14&0.67$\\pm$ 0.35\\\\\n\\hline\nMl-DPs 1st\t&6*&0.53&0.10&2.20 $\\pm$ 1.76\\\\\nMl-DPs 2nd\t&7*&0.95&0.89&0.33 $\\pm$ 0.05\\\\\n\\hline\n\\enddata\n\\tablecomments{* The number of sGRB pulses whose t$_m$ is positive.}\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{cccccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the power-law correlation of the f$_m$ with the FWHM. 
\\label{tab:tablefmFWHM}}\n\\tablehead{\n\\emph{GRB}&\\emph{N}&\\emph{Pearson's r} & \\emph{$R^2$}& \\emph{$\\nu$}}\n\\startdata\nSPs&157 &-0.51 &0.25 &-0.45 $\\pm$ 0.06\\\\\n\\hline\nMt-DPs 1st&18 &-0.79 &0.61 &-1.08 $\\pm$ 0.21\\\\\nMt-DPs 2nd&18 &-0.31&0.04&-0.65 $\\pm$ 0.50\\\\\n\\hline\nMl-DPs 1st&7 &-0.67 &0.34 &-1.31 $\\pm$ 0.66\\\\\nMl-DPs 2nd&7 &-0.30 &-0.09 &-0.13 $\\pm$ 0.18 \\\\\n\\hline\n\\enddata\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{cccccc}\n\\tabletypesize{\\small}\n\\tablecaption{Asymmetric properties of the main peaks for different kinds of sGRBs.\\label{tab:tableasy}}\n\\tablehead{\n\\emph{GRB}&\\emph{N}&\\emph{mean}&\\emph{Median} & \\emph{Minimum}& \\emph{Maximum}}\n\\startdata\nSPs\t\t&157&0.73&0.65&0.03&1.56\\\\\n\\hline\nMt-DPs\t1st\t&18&0.79&0.65&0.17&1.56\\\\\nMt-DPs\t2nd\t&18&1.12&1.28&0.07&1.48\\\\\n\\hline\nMl-DPs\t1st\t&7&0.87&0.80&0.33&1.30\\\\\nMl-DPs\t2nd\t&7&1.07&1.37&0.50&1.51\\\\\n\\hline\nAll&207&0.79&0.70&0.03&1.56\\\\\n\\enddata\n\\end{deluxetable*}\n\n\\startlongtable\n\\begin{deluxetable*}{clccccc}\n\\tabletypesize{\\small}\n\\tablecaption{The best-fit parameters of the correlation between the FWHM and the average energy\nof photons in each channel.\\label{tab:index}}\n\\tablehead{\n\\emph{Type}&\\emph{GRB} &\\emph{Pearson's r}& \\emph{$R^2$}& \\emph{$\\chi^2_\\nu$}& \\emph{$\\alpha$}}\n\\startdata\nSPs&190326A &-0.99 &0.97 &1.2 &-0.40 $\\pm$ 0.04\\\\\nSPs&180718A &-0.94 &0.82 &0.60&-0.47 $\\pm$ 0.12\\\\\nSPs&170428A &-0.92 &0.77&5.15 &-0.34 $\\pm$ 0.10\\\\\nSPs&160601A &-0.94 &0.82 &7.27&-0.66 $\\pm$ 0.17\\\\\nSPs&150710A &-0.97 &0.92&0.29 &-0.19 $\\pm$ 0.03\\\\\nSPs& 150301A &-0.93 &0.78&6.32 &-0.21 $\\pm$ 0.06\\\\\nSPs&141212A &-0.90 &0.71&3.16 &-0.33 $\\pm$ 0.11\\\\\nSPs&140903A &-0.92 &0.76 &3.12&-0.26 $\\pm$ 0.08\\\\\nSPs& 131004A &-0.90 &0.70 &1.52&-0.21 $\\pm$ 0.06\\\\\nSPs& 120305A&-0.99 &0.98 &0.63&-0.26 $\\pm$ 0.02\\\\\nSPs& 110420B &-0.93 &0.79 &12.82&-0.80 $\\pm$ 0.23\\\\\nSPs& 100206A* &-0.90 &0.73 &0.08&-0.06 $\\pm$ 0.02\\\\\nSPs& 091109B &-0.95 &0.85 &0.91&-0.27 $\\pm$ 0.06\\\\\nSPs& 090621B &-0.98 &0.93 &0.63&-0.26 $\\pm$ 0.04\\\\\nSPs& 070923 &-0.99 &0.97&0.10 &-0.18 $\\pm$ 0.02\\\\\nSPs& 050925&-0.91 &0.75 &3.60&-0.29 $\\pm$ 0.09\\\\\n\\hline\nPre+SPs& 160726A precursor&-0.82&0.50&0.89&-0.28$\\pm$ 0.14\\\\\nPre+SPs& 160726A* main&-0.66&0.16&3.57&-0.11 $\\pm$ 0.09\\\\\n\\hline\nSPs&GRB 070810B* &0.23 &-0.42 &0.69&0.05 $\\pm$ 0.16\\\\\n\\hline\n\\hline\nMt-DPs& 120804A 1st &-0.28 &-0.38 &33.88&-0.35 $\\pm$ 0.83\\\\\nMt-DPs& 120804A 2nd &-0.86 &0.62 &64.79&-0.79 $\\pm$ 0.32\\\\\nMt-DPs& 101219A 1st &-0.85 &0.59 &31.33&-0.42 $\\pm$ 0.18\\\\\nMt-DPs&101219A* 2nd &-0.69 &0.21&0.66 &-0.11 $\\pm$ 0.08\\\\\n\\hline\nMt-DPs& 130912A* 1st &0.61 &0.05 &4.23&0.13 $\\pm$ 0.12\\\\\nMt-DPs& 130912A* 2nd &-0.60 &0.05 &1.76&-0.12 $\\pm$ 0.11\\\\\n\\hline\nMl-DPs& 180204A 1st&-0.79&0.43&10.27&-0.38 $\\pm$ 0.21\\\\\nMl-DPs& 180204A 2nd&-0.93&0.79&4.32&-0.28 $\\pm$ 0.08\\\\\nMl-DPs& 111117A* 1st&-0.83&0.52&0.07&-0.06 $\\pm$ 0.03\\\\\nMl-DPs& 111117A 2nd&-0.97&0.91&0.59&-0.57 $\\pm$ 0.10\\\\\n\\enddata\n\\tablecomments{* The power-law indexes in $FWHM \\sim E^\\alpha$ of sGRBs are marginally zero.}\n\\end{deluxetable*}\n\n\n\\clearpage\n\\begin{longrotatetable}\n\\begin{deluxetable*}{lclcrccCccccccccccc}\n\\tabletypesize{\\tiny}\n\\tablecaption{Analysis Results of Precursors\\label{prefitting}}\n\\tablewidth{750pt}\n\\setlength{\\tabcolsep}{0.15mm}\n\\tablehead{\n\\colhead{sGRBs}&\\colhead{f$_{mp}$}&\n\\colhead{t$_{mp}$} 
&\n\\colhead{t$_{rp}$} & \\colhead{t$_{dp}$} & \\colhead{(t$_r$\/t$_d$)$_p$} &\n\\colhead{FWHM$_{p}$}&\n\\colhead{f$_{mm1}$} &\n\\colhead{t$_{mm1}$} &\n\\colhead{t$_{rm1}$} & \\colhead{t$_{dm1}$} & \\colhead{(t$_r$\/t$_d$)$_{m1}$} &\n\\colhead{FWHM$_{m1}$}&\\colhead{f$_{mm2}$} &\n\\colhead{t$_{mm2}$} &\n\\colhead{t$_{rm2}$} & \\colhead{t$_{dm2}$} & \\colhead{(t$_r$\/t$_d$)$_{m2}$} &\n\\colhead{FWHM$_{m2}$}\\\\\n&\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}\n}\n\\startdata\nsingle-main-peaked Pre+sGRBs&&&&&&&&&&&&&&&&&&\\\\\n\\hline 060502B(Ch1+2+3+4)(8ms)&0.29254&-0.39565&0.01804&0.03231&0.55834&0.05035&1.10619&0.01680&0.01249&0.03320&0.37620&0.04569&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 060502B(Ch1+2)&0.18210&-0.39826&0.00933&0.03153&0.29591&0.04086&0.47337&0.01878&0.01421&0.03878&0.36643&0.05299&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n 060502B(Ch3+4)&0.12216&-0.40005&0.01804&0.03385&0.53294&0.05189&0.65985&0.01582&0.01095&0.02689&0.40721&0.03784&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 071112B(Ch1+2+3+4)(16ms)\\tablenotemark{a}&0.36533&-0.57263&0.02383&0.02295&1.03834&0.04678&0.34022&0.05956&0.06489&0.10150&0.63931&0.16639&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 071112B(Ch1+2)&0.15123&-0.58015&0.02623&0.03538&0.74138&0.06161&0.16127&0.06561&0.05627&0.04372&1.28705&0.09999&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n071112B(Ch3+4)&0.21181&-0.57754&0.01532&0.02371&0.64614&0.03903&0.26077&0.05354&0.03620&0.09331&0.38795&0.12951&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n100702A(Ch1+2+3+4)(8ms)&0.46258&-0.25755&0.01681&0.02625&0.64038&0.04306&1.57141&0.08286&0.04207&0.06910&0.60883&0.11117&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n100702A(Ch2)&0.18082&-0.25757&0.03834&0.02750&1.39418&0.06584&0.56600&0.07597&0.04315&0.07271&0.59345 &0.11586&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n100702A(Ch3)&0.09887&-0.26002&0.02406&0.01716&1.40210&0.04122&0.55149&0.07321&0.02712&0.06700&0.40478&0.09412&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160408A(Ch1+2+3+4)(8ms)&0.34285&-0.92805&0.00620&0.09794&0.06330&0.10414&0.80350&0.12414&0.09374&0.19035&0.49246&0.28409&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160408A(Ch1+2)&0.24496&-0.89193 
&0.04242&0.03554&1.19358&0.07796&0.26421&0.23971&0.20388&0.15022&1.35721&0.35410&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160408A(Ch3+4)&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&0.57607&0.07839&0.05323&0.17024&0.31268&0.22347&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t160726A(Ch1+2+3+4)(8ms)&1.37414&0.01983&0.01621&0.02554&0.63469&0.04175&1.56596&0.63190&0.08339&0.09692&0.86040&0.18031&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160726A(Ch1+2)&0.45222&0.01986&0.02871&0.04433&0.64764&0.07304&0.81925&0.61585&0.06846&0.09671&0.70789&0.16517&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n160726A(Ch3+4)&0.88454&0.02220&0.01645&0.01661&0.99037&0.03306&0.76960&0.66611&0.10826&0.08215&1.31783&0.19041&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t180402A(Ch1+2+3+4)(8ms)&0.42598&-0.19677&0.01679&0.02710&0.61956&0.04389&0.99640&0.19356&0.10409&0.07669&1.35728&0.18078&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n180402A(Ch1+2)&0.31619&-0.19699&0.01170&0.00754&1.55172&0.01924&0.33803&0.17332&0.13891&0.11366&1.22215&0.25257&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n180402A(Ch3+4)&0.34479&-0.20039&0.00735&0.01979&0.37140&0.02714&0.74040&0.20217&0.07322&0.05872&1.24693&0.13194&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n\\hline\ndouble-main-peaked Pre+sGRBs &&&&&&&&&&&&&&&&&&\\\\\n\\hline\n081024A(Ch1+2+3+4)(8ms)&0.32881&-1.62738 &0.02039&0.03372&0.60469&0.05411&0.20932&-0.26660&0.17324&0.1313&1.31932&0.30455&0.45532&0.07160 &0.06408&0.07796&0.82196&0.14204\\\\\n081024A(Ch1+2)&0.16799&-1.61997&0.03414&0.02491&1.37053&0.05905&0.10299&-0.27799&0.19897&0.12758&1.55957&0.32655&0.21059&0.09102 &0.10461&0.07092&1.47504&0.17553\\\\\n081024A(Ch3)&0.12958&-1.63504&0.01357&0.04425&0.30667&0.05782&0.06638&-0.24707&0.16134&0.11555&1.39628&0.27689&0.18884&0.09078&0.05191 &0.04091&1.26888&0.09282\\\\\n090510(Ch1+2+3+4)(8ms)&0.96274&-0.52401&0.02343&0.01152&2.03385&0.03495&3.13425&0.03598&0.03423&0.04810&0.71164&0.08233&1.48482&0.29996&0.07362&0.05295&1.39037&0.12657\\\\\n090510(Ch1+2)&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&0.98523&0.03116 &0.03742&0.05867&0.63780&0.09609&0.67289&0.28228&0.06988&0.08297&0.84223&0.15285\\\\\n090510(Ch3+4)&0.79215&-0.52361&0.01878&0.01302&1.44240&0.03180&2.21937&0.03597&0.02867&0.04453&0.64384&0.07320&0.73833&0.29850&0.08080&0.05654&1.42908&0.13734\\\\\n100625A(Ch1+2+3+4)(8ms)&0.25267&-0.37602&0.01657&0.01334&1.24213&0.02991&0.91095&0.04791&0.08971&0.06296&1.42487&0.15267&1.19950 &0.21595 &0.08502&0.05728&1.48429&0.14230\\\\\n100625A(Ch1+2)&0.13500&-0.37508&0.01876&0.01315&1.42662&0.03191&0.43599&-0.02516&0.04723&0.07267&0.64992&0.11990&0.50705&0.21970 &0.11453&0.08541&1.34094&0.19994\\\\\n100625A(Ch3+4)&0.09812&-0.37598&0.02120&0.01405&1.50890&0.03525&0.61602&0.07495&0.07481&0.05539&1.35052&0.13020&0.87084&0.21245&0.03871 &0.02780&1.39245&0.06651\\\\\n\\enddata\n\\tablecomments{ We give the peak amplitude f$_{m}$, the peak time t$_{m}$, the rise time t$_{r}$, the decay time t$_{d}$, the asymmetry t$_r$\/t$_d$ and the FWHM of each component. The subscript m and p identify the main peaks and precursor components.\n}\n\\tablenotetext{a}{For this GRB, we adopt the average amplitude to estimate the peak amplitudes of the 8 ms light curve data. 
}\n\\end{deluxetable*}\n\\end{longrotatetable}\n\n\\clearpage\n\\begin{longrotatetable}\n\\begin{deluxetable*}{lccccccCccccccllrcc}\n\\tabletypesize{\\tiny}\n\\tablecaption{Analysis Results of Extended Emissions\\label{EEfitting1}}\n\\tablewidth{500pt}\n\\setlength{\\tabcolsep}{0.25mm}\n\\tablehead{\n\\colhead{sGRBs} &\n\\colhead{f$_{mm}$} &\n\\colhead{t$_{mm}$} &\n\\colhead{t$_{rm}$} & \\colhead{t$_{dm}$} & \\colhead{(t$_r$\/t$_d$)$_m$} &\n\\colhead{FWHM$_{m}$}&\n\\colhead{f$_{mE1}$} &\n\\colhead{t$_{mE1}$} &\n\\colhead{t$_{rE1}$} & \\colhead{t$_{dE1}$} & \\colhead{(t$_r$\/t$_d$)$_{E1}$} &\n\\colhead{FWHM$_{E1}$}&\\colhead{Cp$_{E2}$} &\n\\colhead{Tp$_{E2}$} &\n\\colhead{t$_{rE2}$} & \\colhead{t$_{dE2}$} & \\colhead{(t$_r$\/t$_d$)$_{E2}$} &\n\\colhead{FWHM$_{E2}$}\\\\\n&\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}\n}\n\\startdata\nsGRBs with single EE&&&&&&&&&&&&&&&&&&\\\\\n\\hline\n050724(Ch1+2+3+4)(8ms)&1.48424&0.03066&0.04292&0.14942&0.28724&0.19234&0.25967&1.06785&0.10299&0.13674&0.75318&0.23973&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t050724(Ch1+2)&0.90669&0.01477&0.03347&0.17023&0.19662&0.20370&0.21770&1.08060&0.09051&0.08264&1.09523&0.17315&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\ 050724(Ch3+4)&0.63496&0.06784&0.05864&0.10271&0.57093&0.16135&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n060614(Ch1+2+3+4)(1s)\\tablenotemark{a}&0.93613&0.73466&2.24341&2.04035&1.09952&4.28376&0.54390&37.72733&22.07336&31.22141&0.70699&53.29477&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t060614(Ch1+2)&0.58041&0.73500&2.20074&2.04868&1.07422&4.24942&0.42033&38.73400&21.86396&31.21840&0.70035&53.08236&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t060614(Ch3+4)&0.33087&0.74033&2.32277&2.37680&0.97727&4.69957&0.12627&33.75601&22.95666&30.75025&0.74655&53.70691&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t150101B(Ch1+2+3+4)(2ms)\\tablenotemark{a}&4.10741&0.00899&0.00632&0.00292&2.16438&0.00924&0.53550&0.03902&0.03900&0.02880&1.35411&0.06780&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t150101B(Ch1+2)&2.59968&0.00701&0.00303&0.00601&0.50416&0.00904&0.39149&0.05099&0.02439&0.01809&1.34826&0.04248&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t150101B(Ch3+4)&2.14224&0.00678&0.00634&0.00439&1.44465&0.01073&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\n170817A(8-350 KeV)(64ms)\\tablenotemark{a}{$^,$}\\tablenotemark{b}&57.07469&-0.06509&0.20199&0.30698&0.65799&0.50897&23.73346&1.66293&0.29296&0.23442&1.24970&0.52738&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t170817A(8-50 KeV)&24.19345&0.12665&0.32209&0.20113&1.60140&0.52322&14.38594&1.53474&0.90567&0.74143&1.22152&1.64710&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t170817A(50-350 KeV)&52.45493&-0.16450&0.05621&0.15697&0.35809&0.21318&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata\\\\\t\t\t\t\t\t\t\t\t\t\t\\hline\nsGRBs with double 
EE&&&&&&&&&&&&&&&&&&\\\\\n\\hline\n051221A(Ch1+2+3+4)(8ms)&5.96541&0.11585&0.06280&0.10660&0.58912&0.16940&0.55322&0.62791&0.04217&0.06762&0.62363&0.10979&0.99519&0.89204&0.03746&0.22567&0.16599&0.26313\\\\\n051221A(Ch1+2)&2.06378&0.16418&0.13124&0.11657&1.12585&0.24781&0.44105&0.58814&0.21491&0.15913&1.35053&0.37404&0.81210&0.90792&0.05647&0.21265&0.26555&0.26912\\\\\n051221A(Ch3+4)&3.54495&0.11599&0.05270&0.08532&0.61767&0.13802&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&\\nodata&0.2260&0.96211&0.08544&0.06866&1.24439&0.15410\\\\\n\\enddata\n\\tablecomments{We give the peak amplitude f$_{m}$, the peak time t$_{m}$, the rise time t$_{r}$, the decay time t$_{d}$, the asymmetry t$_r$\/t$_d$ and the FWHM of each component. The subscript m and E identify the main peaks and EE components.\n}\n\\tablenotetext{a}{For these GRBs, we adopt the average amplitude to estimate the peak amplitudes of the 8 ms light curve data. }\n\\tablenotetext{b} {For GRB\n170817A, the summed GBM lightcurves for sodium iodide\n(NaI) detectors 1, 2, and 5 with 64 ms resolution\nbetween 8 and 350 keV have been used. We estimate the backgrounds\nusing the model \\emph{$f_0$(t) = at+b} from 20 s prior to the trigger time.}\n\\end{deluxetable*}\n\\end{longrotatetable}\n\n\\begin{longrotatetable}\n\\setlength{\\tabcolsep}{6pt}\n\\begin{deluxetable*}{lccccccCccccccllrcc}\n\\tabletypesize{\\tiny}\n\\tablecaption{Analysis Results of Extended Emissions (Continued) \\label{EEfitting2}}\n\\tablewidth{500pt}\n\\setlength{\\tabcolsep}{0.25mm}\n\\tablehead{\n\\colhead{sGRBs} &\n\\colhead{f$_{mm1}$} &\n\\colhead{t$_{mm1}$} &\n\\colhead{t$_{rm1}$} & \\colhead{t$_{dm1}$} & \\colhead{(t$_r$\/t$_d$)$_{m1}$} &\n\\colhead{FWHM$_{m1}$}&\n\\colhead{f$_{mm2}$} &\n\\colhead{t$_{mm2}$} &\n\\colhead{t$_{rm2}$} & \\colhead{t$_{dm2}$} & \\colhead{(t$_r$\/t$_d$)$_{m2}$} &\n\\colhead{FWHM$_{m2}$}&\\colhead{f$_{mE}$} &\n\\colhead{t$_{mE}$} &\n\\colhead{t$_{rE}$} & \\colhead{t$_{dE}$} & \\colhead{(t$_r$\/t$_d$)$_{E}$} &\n\\colhead{FWHM$_{E}$}\\\\\n&\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}&\n\\colhead{(Counts\/bin)} &\n\\colhead{(s)} &\n\\colhead{(s)} & \\colhead{(s)} & \\colhead{-} &\n\\colhead{(s)}\n}\n\\startdata\nsGRBs with single EE&&&&&&&&&&&&&&&&&&\\\\\n130603B(Ch1+2+3+4)(8ms)&11.14306&0.02243&0.01184&0.02019&0.58643&0.03203&5.42614&0.07061&0.02032&0.02692&0.75483&0.04724&0.70613&0.19549&0.09033&0.06332&1.42656&0.15365\\\\\n130603B(Ch1+2)&4.86977&0.02241&0.01246&0.01949&0.63930&0.03195&2.13590&0.07964&0.02673&0.03681&0.72616&0.06354&0.49596&0.21647&0.07554&0.05542&1.36305&0.13096\\\\\n130603B(Ch3+4)&5.29256&0.02239&0.01183&0.01758&0.67292&0.02941&3.77207&0.06909&0.02843&0.02086&1.36290&0.04929&0.40396&0.11880&0.06665&0.05060&1.31719&0.11725\\\\\n\\enddata\n\\tablecomments{We give the peak amplitude f$_{m}$, the peak time t$_{m}$, the rise time t$_{r}$, the decay time t$_{d}$, the asymmetry t$_r$\/t$_d$ and the FWHM of each component. 
The subscript m and E identify the main peak and EE components.}\n\\end{deluxetable*}\n\\end{longrotatetable}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\nIn developing human-computer interaction systems, Speech Emotion Recognition (SER) technology is considered an essential element for providing a proper response depending on a user's emotional state \\cite{kolakowska2014emotion}.\nMany machine learning models have been built for SER, in which the models are trained to predict an emotion among candidates such as happy, sad, angry, or neutral for a given speech input~\\cite{nwe2003speech, chavhan2010speech, mao2014learning, mirsamadi2017automatic}.\nRecently, researchers have adopted multimodal approaches in SER, considering that emotions can be expressed in various ways such as facial expressions, gestures, texts, or speech~\\cite{castellano2008emotion, yoon2020attentive}.\nIn particular, the text modality has been frequently used in addition to the speech in many SER studies, because human speech inherently consists of acoustic features and linguistic contents that can be expressed using text~\\cite{yoon2019speech, xu2019learning}.\n\nThe major issue in SER using both the audio and text modalities is how to extract and combine the information that the audio and text each carry.\nFor example, if someone says, \\emph{``Thank you for being with me.\"} in a very calm voice, the emotional information is contained mostly in the linguistic contents, while it sounds neutral based on the acoustic features.\nPrevious studies have approached this issue by designing their models to encode the audio and text independently and fuse the results using attention mechanisms, which help their models effectively capture the locally salient regions from the given signals.\nIn these attention mechanisms, the separately encoded audio and text information operated as each other's query and key-value pair.\nYoon et al.~\\cite{yoon2019speech} used the last hidden state of a recurrent modality encoder as a query and used the other encoded modality as a key-value pair in the attention mechanism.\nIn another study, Xu et al.~\\cite{xu2019learning} designed their model to learn the alignment between the audio and text by itself from the attention mechanism.\n\nHowever, letting the model learn the complex interaction between the different modalities without any constraints can make the training more difficult.\nUsing the last hidden state of a recurrent encoder as a query as in \\cite{yoon2019speech} can lead to temporal information loss in the attention, as pointed out in \\cite{mirsamadi2017automatic}.\nBesides, learning the alignment between the audio and text signals relying on the attention mechanism as in \\cite{xu2019learning} is a challenging task unless additional prior knowledge is provided as in \\cite{raffel2017online, battenberg2019location}.\n\n\\input{ForcedAlignment.tex}\nTo overcome these limitations, we propose a novel SER model called Cross Attention Network (CAN) that effectively combines the information obtained from aligned audio and text signals.\nInspired by how humans recognize speech, we design our model to regard the audio and text as temporally aligned signals.\nIn the CAN, each audio and text input is separately encoded through its own recurrent encoder.\nThen, the hidden states obtained from each encoder are independently aggregated by applying the global attention mechanism onto each modality.\nFurthermore, the attention weights extracted from each modality are directly applied to each other's hidden states in a crossed way, so that the information at the same time steps is aggregated with the same weights.
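As a rough illustration of this crossed weighting, consider the minimal sketch below, in which the attention weights computed on one modality are reused to pool the other modality's hidden states over the shared time steps. The scoring functions and the concatenation-based fusion are our own simplifying assumptions, not the authors' released implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef softmax(x):\n    e = np.exp(x - x.max())\n    return e / e.sum()\n\ndef cross_attention_pool(h_text, h_audio, w_text, w_audio):\n    # h_text, h_audio: (L, D_h) hidden states with aligned time steps\n    # w_text, w_audio: (D_h,) trainable scoring vectors (global attention)\n    a_text = softmax(h_text @ w_text)      # weights from the text side\n    a_audio = softmax(h_audio @ w_audio)   # weights from the audio side\n    c_text, c_audio = a_text @ h_text, a_audio @ h_audio  # self-attended\n    x_text, x_audio = a_audio @ h_text, a_text @ h_audio  # crossed\n    return np.concatenate([c_text, c_audio, x_text, x_audio])\n\nL, D_h = 12, 8   # e.g., 12 aligned word/segment steps, 8 hidden units\nrng = np.random.default_rng(0)\nfeat = cross_attention_pool(rng.normal(size=(L, D_h)),\n                            rng.normal(size=(L, D_h)),\n                            rng.normal(size=D_h), rng.normal(size=D_h))\nprint(feat.shape)   # (4 * D_h,) fused feature for emotion classification\n\\end{verbatim}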
In order to make the cross attention work properly, we propose an aligned segmentation technique that divides each audio and text signal into the same number of parts in an aligned way.\nIn the aligned segmentation technique, the text signal is segmented into words.\nFollowing the text, the audio signal is segmented using alignment information as shown in Table~\\ref{tab:alignment}, where the start- and end-time for each word are used to determine the partitioning points in the audio signal.\nThe aligned segmentation technique enables our model to successfully combine the information from the aligned audio and text signals without having to learn the complex attention between different modalities as in the previous works.\n\nTo evaluate the performance of the proposed method, we conduct experiments on the IEMOCAP dataset.\nFirst, we compare the CAN with other state-of-the-art SER models that use the additional text modality.\nThe results show that our model outperforms the other models in both weighted and unweighted accuracy, with relative improvements of 2.66\\% and 3.18\\%.\nFurthermore, ablation studies are conducted to assess the actual effectiveness of components such as the aligned segmentation, the stop-gradient operator, and the additional loss.\nIn the ablation studies, we observe the independent contribution of each component to improving the model performance.\n\n\\section{Related work}\nFollowing the classical machine learning models such as the hidden Markov model and the support vector machine \\cite{nwe2003speech, chavhan2010speech}, models using neural networks have been actively studied in Speech Emotion Recognition (SER).\nTo improve the model performance, researchers have proposed various methods to effectively capture the locally salient regions over the time axis of a given speech signal.\nBertero et al. \\cite{bertero2017first} proposed a model based on the convolutional neural network (CNN) that captures local information from given acoustic feature frames.\nMirsamadi et al. \\cite{mirsamadi2017automatic} used the global attention mechanism to make their model learn where to attend to capture the locally salient features.\nSahoo et al.

Recently, multimodal models that use the audio and text together for SER have attracted much attention \cite{yoon2019speech, xu2019learning, sebastian2019fusion, liang2019cross}.
Since the audio and text signals contain different information, a major issue has been how to design the models to effectively extract information from each modality and combine them.
In previous studies, attention mechanisms were frequently used to combine the information \cite{yoon2019speech, xu2019learning}, where the hidden states obtained separately from the audio and text signals were used as each other's query or key-value pair.
The attention mechanisms were expected to help the models learn to combine the information of each modality by themselves.
However, none of these studies used proper constraints or prior knowledge to ease the difficulty of learning the complex interaction between the audio and text signals.
\section{Methodology}
\label{section:algorithm}
In this section, we propose a novel Speech Emotion Recognition (SER) model called Cross Attention Network (CAN).
First, we explain the preprocessing of the text and audio data, which is necessary for the CAN to work properly.
The purpose of the preprocessing is to make the text and audio have the same number of time steps, where the same time steps of the sequential signals cover the same time span.
Then we explain the CAN, a model utilizing the cross attention mechanism, which enables it to focus on the salient features of the aligned text and audio signals from the perspective of each modality.

\subsection{Data preprocessing}
\subsubsection{Text data}
In this study, we consider a text input as a word sequence, so the text input is represented as $X=\{x_1, x_2, ..., x_L\},~X\in \mathbb{R}^{L\times V}$, where $L$ is the number of words, $V$ is the size of the vocabulary, and each $x_i$ is a one-hot vector representing the corresponding word.
Then, $\textbf{E}^{(T)}\in \mathbb{R}^{L\times D_e}$, the embedded text input, is obtained after $X$ passes through a trainable GloVe embedding layer \cite{pennington2014glove}, where $D_e$ is the dimension of the embedding layer.

\subsubsection{Audio data}
\label{subsection:audioprocessing}
Let $Y=\{y_1, y_2, ..., y_T\},\;Y\in \mathbb{R}^{T}$ be the 1-dimensional audio data and $D=\{d_1, d_2, ..., d_L\}$ be its alignment information, where $T$ is the audio length and each $d_i=(s_i, e_i)$ represents the start and the end of each word.
To prevent the loss of correlation information between adjacent words, we let neighboring $d_i$s overlap by 10\%.

Using $Y$ and $D$, we obtain the segmented audio data $\textbf{E}^{(A)}\in \mathbb{R}^{L\times (T'\times D_f)}$; the audio $Y$ is first segmented into audio segments $Y'=\{y_{s_1:e_1},y_{s_2:e_2},...,\;y_{s_L:e_L}\}$, and then each segment is converted into an MFCC feature and stacked into $\textbf{E}^{(A)}$ with zero-padding.
Here, $D_f$ is the number of MFCC coefficients and $T'$ is the length (number of frames) of the longest MFCC segment.
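
To make the aligned segmentation concrete, a minimal sketch is given below. It is for illustration only: the symmetric extension of each word interval is one possible reading of the 10\% overlap described above, and the \texttt{librosa}-based MFCC extraction (with the window settings of Section \ref{subsection:setup}) stands in for the actual feature pipeline.
\begin{verbatim}
import numpy as np
import librosa

def aligned_segments(y, sr, alignment, n_mfcc=40, overlap=0.10):
    """y: waveform; alignment: list of (word, start_sec, end_sec).
    Returns E_A with shape (L, T', n_mfcc)."""
    n_fft, hop = int(0.025 * sr), int(0.010 * sr)  # 25 ms / 10 ms
    mfccs = []
    for word, s, e in alignment:
        pad = overlap * (e - s) / 2        # extend both ends so that
        lo = max(0.0, s - pad)             # neighbours overlap by ~10%
        seg = y[int(lo * sr):int((e + pad) * sr)]
        m = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=n_mfcc,
                                 n_fft=n_fft, hop_length=hop,
                                 window='hamming')
        mfccs.append(m.T)                  # (T'_i, n_mfcc)
    T_max = max(m.shape[0] for m in mfccs) # zero-pad to longest segment
    return np.stack([np.pad(m, ((0, T_max - m.shape[0]), (0, 0)))
                     for m in mfccs])
\end{verbatim}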

\subsection{Model architecture}
\begin{figure}[t]
    \centering
    \includegraphics[scale=0.20]{encoders_grey.png}
    \caption{Text encoder and Audio encoder. The BLSTM modules inside the dotted box share their weights. For clarity, we depict the audio input as a raw waveform, but the waveforms are converted into MFCC segments in advance and used as the audio inputs.}
    \label{fig:encoders}
\end{figure}
\subsubsection{Text encoder}
The embedded text data $\textbf{E}^{(T)}$ is fed into the text encoder consisting of a bidirectional long short-term memory (BLSTM) \cite{hochreiter1997long}, as shown on the left side of Figure \ref{fig:encoders}, which yields the hidden states $\textbf{H}^{(T)}\in\;\mathbb{R}^{L\times D_h}$ obtained from the equations below:
\begin{align}
  & \overrightarrow{h_i}=f_{\theta}(\overrightarrow{h_{i-1}},~ \textbf{E}^{(T)}_i), \\[10pt]
  & \overleftarrow{h_i}=f'_{\theta}(\overleftarrow{h_{i+1}},~ \textbf{E}^{(T)}_i), \\[10pt]
  & \textbf{H}^{(T)}=\{[\overrightarrow{h_1};\overleftarrow{h_1}],~ [\overrightarrow{h_2};\overleftarrow{h_2}],~...,~ [\overrightarrow{h_L};\overleftarrow{h_L}]\},
\end{align}
where $f_\theta$, $f'_\theta$ are the forward and backward LSTMs having $D_h$ hidden units with parameters $\theta$.
Here, $h_i$ represents the hidden state at the $i$-th time step and $\textbf{E}^{(T)}_i$ represents the $i$-th embedded word vector of the text data.

\subsubsection{Audio encoder}
The audio encoder consists of two bidirectional LSTM layers, as shown on the right side of Figure \ref{fig:encoders}.
The bottom LSTM layer encodes each MFCC segment $\textbf{E}^{(A)}_i\in \mathbb{R}^{T'\times D_f}$ independently and outputs a vector for each segment using average pooling.
The BLSTM modules inside the dotted box in Figure \ref{fig:encoders} share their weights.
The upper LSTM layer encodes the audio features obtained from the bottom layer and outputs the hidden states $\textbf{H}^{(A)}\in \mathbb{R}^{L\times D_h}$, which have the same number of time steps $L$ as $\textbf{H}^{(T)}$.
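
The two encoders can be sketched in PyTorch as follows. This is an illustrative reconstruction rather than the original code: the hidden size follows Section \ref{subsection:setup} (so the output dimension is $2D_h$ due to bidirectionality), and details such as taking the unmasked mean over zero-padded frames are simplifying assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    def __init__(self, d_e=300, d_h=128):
        super().__init__()
        self.lstm = nn.LSTM(d_e, d_h, bidirectional=True,
                            batch_first=True)

    def forward(self, E_T):          # (B, L, D_e) embedded words
        H_T, _ = self.lstm(E_T)
        return H_T                   # (B, L, 2*D_h)

class AudioEncoder(nn.Module):
    """Bottom BLSTM is shared across the L word-aligned MFCC
    segments; its average-pooled outputs feed an upper BLSTM."""
    def __init__(self, d_f=40, d_h=128):
        super().__init__()
        self.seg_lstm = nn.LSTM(d_f, d_h, bidirectional=True,
                                batch_first=True)
        self.utt_lstm = nn.LSTM(2 * d_h, d_h, bidirectional=True,
                                batch_first=True)

    def forward(self, E_A):          # (B, L, T', D_f) MFCC segments
        B, L, T, D = E_A.shape
        seg_out, _ = self.seg_lstm(E_A.reshape(B * L, T, D))
        seg_vec = seg_out.mean(dim=1)    # average pooling over T'
        H_A, _ = self.utt_lstm(seg_vec.reshape(B, L, -1))
        return H_A                   # (B, L, 2*D_h), same L as H_T
\end{verbatim}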

\subsubsection{Cross attention}
\begin{figure}[t]
    \centering
    \includegraphics[scale=0.20]{CAN_grey.png}
    \caption{Cross Attention Network. The scissors represent the stop-gradient operator that cuts the gradient flow during backpropagation.}
    \label{fig:can}
\end{figure}
In the cross attention, attention weights obtained from one modality are used to aggregate the other modality, as shown in Figure \ref{fig:can}, while conforming to the constraint that the audio and text are temporally aligned.
Since the salient regions can differ depending on which modality the prediction is based on, the aggregation happens twice in the cross attention, once based on each modality, as follows:
\begin{gather}
	\alpha^{(T)}_i=\dfrac{\text{exp}({~(\textbf{q}^{(T)})}^\intercal~\textbf{H}^{(T)}_{i}~)}{\sum_{j} \text{exp}({~(\textbf{q}^{(T)})}^\intercal~\textbf{H}^{(T)}_{j}~)},~~~(i=1,...,L),
  \\[10pt]
  \alpha^{(A)}_i=\dfrac{\text{exp}({~(\textbf{q}^{(A)})}^\intercal~\textbf{H}^{(A)}_{i}~)}{\sum_{j} \text{exp}({~(\textbf{q}^{(A)})}^\intercal~\textbf{H}^{(A)}_{j}~)},~~~(i=1,...,L),
\end{gather}
\begin{align}
  \textbf{c}^{(TT)}&={\sum_{i}} \alpha^{(T)}_i~{\textbf{H}^{(T)}_i},
  \\
  \textbf{c}^{(TA)}&={\sum_{i}} \mathbf{sg}(\alpha^{(T)}_i)~{\textbf{H}^{(A)}_i},
  \\
  \textbf{c}^{(AA)}&={\sum_{i}} \alpha^{(A)}_i~{\textbf{H}^{(A)}_i},
  \\
  \textbf{c}^{(AT)}&={\sum_{i}} \mathbf{sg}(\alpha^{(A)}_i)~{\textbf{H}^{(T)}_i},
\end{align}
where $\textbf{q}^{(T)}$ and $\textbf{q}^{(A)}$ are the global queries used to decide which parts of the aligned signals to focus on from each modality's perspective.
The $\textbf{c}^{(xy)}$s are context vectors, where $x$ denotes the modality used as the query and $y$ denotes the modality used as the key-value pair.
To prevent the CAN from learning its attention based on the other modality, we introduce the stop-gradient operator $\mathbf{sg}$, as shown in Equations (7) and (9).
It cuts the gradient flow through its argument during backpropagation.

\subsection{Training objective}
During training, the CAN makes three different predictions using the context vectors:
\begin{gather}
  \hat{y}= \text{softmax}(([{c}^{(TT)};{c}^{(TA)};{c}^{(AA)};{c}^{(AT)}])^\intercal~\textbf{W}+\textbf{b}), \\[10pt]
  \hat{y}^{(T)}= \text{softmax}(({c}^{(TT)})^\intercal~\textbf{W}^{(T)}+\textbf{b}^{(T)}~),\\[10pt]
  \hat{y}^{(A)}= \text{softmax}(({c}^{(AA)})^\intercal~\textbf{W}^{(A)}+\textbf{b}^{(A)}~),
\end{gather}
where the $\textbf{W}$s and $\textbf{b}$s are trainable weights.
$\hat{y}$ is computed from all the context vectors, while $\hat{y}^{(T)}$ and $\hat{y}^{(A)}$ are each computed from a context vector that uses only the text or only the audio modality.
Using the predictions, we calculate the loss terms as follows:
\begin{gather}
  \mathcal{L}_{align}= CE(\hat{y}^{(T)},~y) + CE(\hat{y}^{(A)},~y),\\[10pt]
  \mathcal{L}_{total}= CE(\hat{y},~y) + \alpha \cdot \mathcal{L}_{align},
\end{gather}
where $CE$ represents the cross-entropy loss, $y$ is the true emotion label, and $\alpha$ is a weight for the additional loss term $\mathcal{L}_{align}$, whose optimal value is found using the validation dataset.
The additional loss terms in $\mathcal{L}_{align}$ are added to help each global attention better attend to the salient features of its own modality.
\begin{gather}
  \hat{y}^\text{final}=(\hat{y})\cdot(\hat{y}^{(T)})^\alpha \cdot(\hat{y}^{(A)})^\alpha
\end{gather}
After training, the final prediction $\hat{y}^\text{final}$ is calculated following Equation (15).
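
Putting Equations (4)--(15) together, the cross attention, the stop-gradient operator (realized here with \texttt{detach()}), and the training objective can be sketched as follows. This single-head sketch is illustrative only: the experiments in Section \ref{subsection:setup} use four heads, and the query initialization is an assumption.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttention(nn.Module):
    def __init__(self, d=256, n_cls=7, alpha=0.1):
        super().__init__()
        self.q_t = nn.Parameter(torch.randn(d))  # global text query
        self.q_a = nn.Parameter(torch.randn(d))  # global audio query
        self.fc   = nn.Linear(4 * d, n_cls)
        self.fc_t = nn.Linear(d, n_cls)
        self.fc_a = nn.Linear(d, n_cls)
        self.alpha = alpha

    def forward(self, H_T, H_A, y=None):       # (B, L, d) each
        a_t = F.softmax(H_T @ self.q_t, dim=1)  # Eq. (4), (B, L)
        a_a = F.softmax(H_A @ self.q_a, dim=1)  # Eq. (5)
        agg = lambda a, H: (a.unsqueeze(-1) * H).sum(dim=1)
        c_tt, c_aa = agg(a_t, H_T), agg(a_a, H_A)
        # sg(.) = detach(): crossed weights carry no gradient,
        # so each query is learned from its own modality only.
        c_ta, c_at = agg(a_t.detach(), H_A), agg(a_a.detach(), H_T)
        y_hat = F.softmax(
            self.fc(torch.cat([c_tt, c_ta, c_aa, c_at], -1)), -1)
        y_t = F.softmax(self.fc_t(c_tt), -1)
        y_a = F.softmax(self.fc_a(c_aa), -1)
        if y is not None:          # y: class indices, Eqs. (13)-(14)
            ce = lambda p: F.nll_loss(torch.log(p + 1e-9), y)
            return ce(y_hat) + self.alpha * (ce(y_t) + ce(y_a))
        return y_hat * y_t ** self.alpha * y_a ** self.alpha  # Eq. (15)
\end{verbatim}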

\section{Experiments}
In this section, we describe the experimental setup and the results of the experiments conducted on the IEMOCAP dataset.
First, we compare the CAN to other SER models in terms of the weighted accuracy (\textbf{WA}) and the unweighted accuracy (\textbf{UA}), where the CAN shows the best performance.
In addition, we conduct several analyses on our model to see how each component described in Section \ref{section:algorithm} affects the performance of the CAN.

\subsection{Dataset}
In the experiments, we use the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset \cite{busso2008iemocap}, which provides speech and text data including the alignment information represented in Table \ref{tab:alignment}.
Each utterance in the dataset is labeled with one of 10 emotion classes; we do not use the classes with too few data instances (fear, disgust, other), so the final dataset contains 7,486 utterances in total (1,103 angry, 1,040 excited, 595 happy, 1,084 sad, 1,849 frustrated, 107 surprised, and 1,708 neutral).
In the experiments, we perform 10-fold cross-validation, and in each fold, the total dataset is split into training, validation, and test sets with an 8:1:1 ratio.

\subsection{Experimental setup}
\label{subsection:setup}
For the text input, we use the word sequence shown in Table \ref{tab:alignment}, and the 300-dimensional GloVe word vectors \cite{pennington2014glove} are used as the embedding vectors.
In this step, we remove the special tokens such as `$\left< s \right>$', `$\left< sil \right>$', and `$\left< /s \right>$', and their durations are divided equally between the neighboring words.
For the audio input, we use the zero-padded MFCC segments, which are obtained as described in Section \ref{subsection:audioprocessing}.
In the MFCC conversion, we use 40 MFCC coefficients, and the frames are extracted by sliding a Hamming window with a 25ms frame size and a 10ms hop size.
We use bidirectional LSTMs with 128 hidden units, each followed by a dropout layer with a dropout probability of 0.3.
For the cross attention, multi-head global attention with four heads is used to view the inputs from various perspectives and thus enrich the aggregated information \cite{vaswani2017attention}.
During training, we use the validation dataset as the criterion for early stopping with a patience of 10.
We use a batch size of 64 and the Adam optimizer \cite{kingma2014adam} with a learning rate of 1e-3, and the gradients are clipped to a norm value of 1.0.
The weight of the additional loss term $\alpha$ is set to 0.1, which is obtained from the cross-validation.
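
For clarity, the two reported metrics can be computed as in the sketch below: \textbf{WA} is the overall fraction of correctly classified utterances, and \textbf{UA} is the unweighted mean of the per-class recalls (the function name is ours).
\begin{verbatim}
import numpy as np

def wa_ua(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    wa = float((y_true == y_pred).mean())   # weighted accuracy
    ua = float(np.mean([(y_pred[y_true == c] == c).mean()
                        for c in np.unique(y_true)]))
    return wa, ua                           # ua: mean per-class recall
\end{verbatim}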

\subsection{Results}
\subsubsection{Performance comparison}
\input{Accuracy}
Table~\ref{tab:accuracy} shows the performance of the CAN and the other SER models. The `TextModel' and `AudioModel' each use a single modality, encoding it with a simple bidirectional LSTM with global attention following \cite{mirsamadi2017automatic}. The other two multimodal models are those proposed in the previous studies \cite{yoon2019speech} and \cite{xu2019learning}, where the attention weights are obtained based on the interaction between the audio and text modalities.
In the experiments, we re-implement all the models and obtain the accuracy values as described in Section \ref{subsection:setup}.
As Table~\ref{tab:accuracy} shows, our CAN outperforms the other models in both \textbf{WA} and \textbf{UA}, including the previous state-of-the-art model \cite{yoon2019speech}.
To analyze the causes of the performance gain, we conduct further experiments on the effectiveness of the components in our methodology, which are described in the next sections.

\input{Segmentation}
\subsubsection{Segmentation policy}
In order to demonstrate the superiority of the aligned segmentation, we compare it to the segmentation in which a 1-dimensional audio signal is divided into segments of equal length, which has been widely used in previous studies \cite{sahoo2019segment, mao2019deep}.
In the experiment, the aligned segmentation outperforms the equal segmentation in both \textbf{WA} and \textbf{UA}.
The results in Table \ref{tab:segmentation} imply that our aligned segmentation is indeed effective for combining the information in the cross attention.

\subsubsection{Ablation study}
\input{ablation}
To provide supporting evidence for the components of our model, we conduct ablation studies on four variant models, removing each of the components described in Section \ref{section:algorithm}.
When we remove the stop-gradient operator and the additional loss term $\mathcal{L}_{align}$, the accuracy of the CAN decreases, and the worst \textbf{WA} is observed when both components are removed.
Furthermore, we also remove the whole cross attention from the CAN, in which case the prediction is based only on a concatenation of $c^{\text{(TT)}}$ and $c^{\text{(AA)}}$, and neither the stop-gradient operator nor $\mathcal{L}_{align}$ is used.
In that case, the performance decreases even more compared to the other variants.

\section{Conclusion}

In this paper, we propose the Cross Attention Network (CAN) for the Speech Emotion Recognition (SER) task.
It uses the cross attention to combine information from the aligned audio and text signals.
Inspired by the way humans recognize speech, we align the text and audio signals so that the CAN regards both modalities as having the same time resolution.
In the experiments conducted on the IEMOCAP dataset, the proposed system outperforms the state-of-the-art systems with relative improvements of 2.66\% and 3.18\% in weighted and unweighted accuracy, respectively.
To the best of our knowledge, this is the first study that shows such an improvement using aligned audio and text signals in SER.
One limitation is that the CAN requires the text and alignment information to work properly, whereas only the speech signal is available in real-world scenarios.
In future work, we plan to extend our research by integrating the CAN with an automatic speech recognition system that outputs the text and alignment information given a speech signal.
\section{Acknowledgments}
K. Jung is with ASRI, Seoul National University, Korea.
This work was supported by the Ministry of Trade, Industry \& Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10073144).
This research was also a result of a study on the ``HPC Support'' Project, supported by the `Ministry of Science and ICT' and NIPA.