{"text":"\\section{Introduction}\n\\noindent The identification of time-varying systems plays a key role in several applications, such as adaptive and model predictive control, where good real-time tracking of the system to be controlled is necessary. In addition, the detection of changes or drifts in plant parameters is crucial for process monitoring and fault detection. Online System Identification (SysId) and the estimation of time-varying systems are typically strictly connected problems: one would like to exploit the new data that become available in order to track possible changes in the system dynamics.\\\\\nThe Recursive Prediction Error Method (RPEM), a variant of the classical PEM \\cite{Ljung:99,RecursiveBook}, is nowadays a well-established technique, through which the current estimate can be efficiently updated as soon as new data are provided. RPEMs are parametric approaches, relying on Recursive Least-Squares (RLS) routines, which compute the parameter estimate by minimizing a functional of the prediction errors \\cite{Ljung:99}. An extension of these approaches to the identification of time-varying systems involves the adoption of a forgetting factor, through which old data become less relevant in the estimation criterion. Convergence and stability properties of Forgetting Factor RPEM have been well studied within the SysId community \\cite{bittanti1990convergence,guo1993performance}.\\\\\nAlternative approaches model the coefficient trajectories as stochastic processes \\cite{Chow84}, thus exploiting Kalman filtering \\cite{guo1990estimating} or Bayesian inference \\cite{Sarris73} for parameter estimation. Combinations of basis sequences (e.g. wavelet bases \\cite{tsatsanis1993time}) have also been considered to model the time evolution of the parameters.\n\\\\The above-mentioned parametric procedures share a critical issue, namely the complexity of the model selection step: this step is especially delicate when the model complexity has to be modified in response to changes in the true system dynamics.\nIn addition, classical complexity selection rules (e.g. cross-validation or information criteria) may not be applicable in online settings, due to the excessive computational effort they require.\nModel complexity issues have been partially addressed in the SysId community through the recent introduction of non-parametric methods, relying on Gaussian processes and Bayesian inference \\cite{GP-AC-GdN:11,SurveyKBsysid}. In this framework model complexity is tuned in a continuous manner by estimating the hyper-parameters which describe the prior distribution chosen by the user \\cite{PCAuto2015}. This property makes these new techniques appealing for the online identification of time-varying systems: indeed, model complexity can be continuously adapted whenever new data become available.\\\\\nIn a previous work \\cite{RPPCECC2016} we started exploring this research direction by adapting the newly introduced Bayesian procedures to an online identification setting. The methodologies proposed in \\cite{RPPCECC2016} are extended in this paper by dealing with time-varying systems. 
Two approaches, relying on the use of a forgetting factor, are proposed; in particular, following the approach in \\cite{PA2014}, we investigate the online estimation of the forgetting factor by treating it as a hyper-parameter of the Bayesian inference procedure. These techniques are experimentally compared with the classical parametric counterparts: the results appear favourable and promising for the methods we propose.\n\\\\\nThe paper is organized as follows. Sec.~\\ref{sec:problem_formulation} presents the online identification framework and the challenges we will try to address. Sec.~\\ref{sec:pem} provides a brief review of parametric real-time identification techniques, while Sec.~\\ref{sec:bayes} illustrates the Bayesian approach to linear SysId, both in the batch and online scenarios. In particular, Sec.~\\ref{sec:time_var} focuses on the estimation of time-varying systems. Experimental results are reported in Sec.~\\ref{sec:experiment}, while conclusions are drawn in Sec.~\\ref{sec:conclusion}.\n\n\n\\section{Problem Formulation}\\label{sec:problem_formulation}\n\\noindent Consider a dynamical system described through an output-error model, i.e.:\n\\begin{equation} \\label{equ:sys}\ny(t) = \\left[h \\ast u\\right](t) + e(t), \\quad y(t), \\ u(t) \\in\\mathbb{R}\n\\end{equation}\nwhere $h(t)$ denotes the model impulse response and $e(t)$ is assumed to be a zero-mean Gaussian noise with variance $\\sigma^2$.\\\\\nSysId techniques aim at estimating the impulse response $h$ of the system, once a set $\\mathcal{D}:=\\left\\{y(t),u(t)\\right\\}_{t=1}^N$ of measurements of its input and output signals is provided.\\\\\nIn this work we consider an online setting, in which a new set of input-output measurements becomes available every $T$ time steps.\nSpecifically, let us define the variable $i:=k\/T$ by assuming w.l.o.g. that $k$ is a multiple of $T$, and the $i^{th}-$dataset as $\\mathcal{D}_i =\\left\\{u(t),y(t)\\right\\}_{t=(i-1)T +1}^{iT}$.\n\\\\We suppose that at time $k$ an impulse response estimate $\\hat{h}^{(i)}$ has been computed using the data coming from a collection of previous datasets $\\bigcup_{l=1}^{i} \\mathcal{D}_l = \\left\\{u(t),y(t)\\right\\}_{t=1}^{i T}$; at time $k+T$ new data $\\mathcal{D}_{i+1}$ become available and we would like to update the previous estimate $\\hat{h}^{(i)}$ by exploiting them. In addition we assume that the underlying system undergoes certain variations that we would like to track: this situation could often arise in practice, due to e.g. variations of the internal temperature, of the masses (e.g. after grasping an object).\\\\\nFurthermore, online applications typically require that the new estimate is available before the new dataset $\\mathcal{D}_{i+2}$ is provided, thus limiting the computational complexity and the memory storage of the adopted estimation methods.\\\\\nIn this paper, the recently proposed Bayesian approach to SysId \\cite{SurveyKBsysid} is adapted in order to cope with the outlined online setting. Its performances are compared with the ones achieved using classical parametric approaches.\n\n\\begin{rem}\nWe stress that in the remainder of the paper we will use the indexes $k$ and $iT$ interchangeably.\n\\end{rem}\n\n\\section{Parametric Approach}\\label{sec:pem}\n\\noindent Standard parametric approaches to SysId rely on the a-priori choice of a model class $\\mathcal{M}$ (e.g. 
ARX, ARMAX, OE, etc.), which is completely characterized by a parameter $\\theta \\in \\mathbb{R}^m$.\n\n\\subsection{Batch Approach}\n\\noindent In the batch setting, when a dataset $\\mathcal{D}=\\left\\{y(t),u(t)\\right\\}_{t=1}^N$ is provided, the identification procedure reduces to estimating $\\theta$ by minimizing the sum of squared prediction errors:\n\\begin{equation}\n\\hat{\\theta} = \\arg\\min_{\\theta\\in \\mathbb{R}^m} V_N (\\theta,\\mathcal{D}) = \\arg\\min_{\\theta\\in \\mathbb{R}^m} \\frac{1}{2}\\sum_{t=1}^N \\left(y(t)-\\hat{y}(t\\vert \\theta)\\right)^2\n\\end{equation}\nwhere $\\hat{y}(t\\vert \\theta)$ denotes the one-step ahead predictor \\cite{Ljung:99}.\n\n\\subsection{Online Approach}\n\\noindent The extension of these procedures to an online setting relies on RLS (or pseudo-LS) methods.\\\\\nFor ease of notation, let us assume $T=1$ in this section. Suppose that at time $k+1$ a new input-output data pair $\\mathcal{D}_{i+1}$ is provided; then $\\hat{\\theta}^{(i)}$ is updated as:\n\\begin{equation}\\label{equ:param_update}\n\\hat{\\theta}^{(i+1)} = \\hat{\\theta}^{(i)} + \\mu^{(i+1)} Q^{(i+1)^{-1}} \\nabla_{\\theta} V_{k+1}(\\hat{\\theta}^{(i)}, \\textstyle{\\bigcup_{l=1}^{i+1}}\\mathcal{D}_{l})\n\\end{equation}\nwhere $\\nabla_\\theta V_{k+1}(\\hat{\\theta}^{(i)}, \\bigcup_{l=1}^{i+1}\\ \\mathcal{D}_{l})$ denotes the gradient of the loss function evaluated at the previous estimate and at the new data; $\\mu^{(i+1)}\\in \\mathbb{R}$ and $Q^{(i+1)}\\in\\mathbb{R}^{m\\times m}$ are appropriate scalings which take different forms depending on the specific algorithm adopted (see \\cite{RecursiveBook} and \\cite{Ljung:99}, Ch. 11, for further details). Notice that \\eqref{equ:param_update} is simply a scaled gradient step w.r.t. the loss function $V_{k+1}(\\theta,\\bigcup_{l=1}^{i+1}\\mathcal{D}_{l})$.\n\n\\subsection{Dealing with time-varying systems}\n\\noindent In order to cope with time-varying systems, a possible strategy involves the inclusion of a \\textit{forgetting factor} $\\bar{\\gamma}$ in the loss function $V_{k}(\\theta,\\mathcal{D})$:\n\\begin{equation} \\label{equ:loss_ff}\nV_{k}^{\\bar{\\gamma}} (\\theta,\\mathcal{D}) = \\frac{1}{2} \\sum_{t=1}^k \\bar{\\gamma}^{k-t} \\left(y(t)-\\hat{y}(t\\vert \\theta)\\right)^2, \\qquad \\bar{\\gamma} \\in (0,1]\n\\end{equation}\nIn this way, older measurements become less relevant for the computation of the estimate. A recursive update of the estimate $\\hat{\\theta}^{(i)}$ (analogous to the one in \\eqref{equ:param_update}) can be derived (\\cite{Ljung:99}, Ch. 11).\n\\\\\nAs an alternative, a sliding window approach can be adopted: at each time step only the last $N_w$ data are used for computing the current estimate (with $N_w$ being the window length). However, since this approach does not admit an update rule like the one in \\eqref{equ:param_update}, the computational complexity of the new estimate will depend on the window length.\n\\\\\nA crucial role in the application of parametric SysId techniques is played by the model order selection step: once a model class $\\mathcal{M}$ is fixed, its complexity has to be chosen using the available data. This is typically accomplished by estimating models of different complexities and by applying tools such as cross-validation or information criteria to select the most appropriate one. However, the estimation of multiple models may be computationally expensive, making this procedure ill-suited to the online identification of time-varying systems. Indeed, in this framework, it should ideally be repeated every time new data become available.
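\n\\noindent To make the recursion concrete, a minimal sketch of an RLS update with forgetting factor for a linear-in-the-parameters predictor is reported below (an illustrative Python\/NumPy fragment; function and variable names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef rls_ff_update(theta, P, phi, y, gamma=0.998):\n    # One RLS step with forgetting factor gamma in (0, 1]:\n    # K = P phi / (gamma + phi' P phi), eps = y - phi' theta,\n    # theta <- theta + K*eps, P <- (P - K phi' P) / gamma\n    K = P @ phi / (gamma + phi @ P @ phi)\n    eps = y - phi @ theta\n    theta = theta + K * eps\n    P = (P - np.outer(K, phi @ P)) / gamma\n    return theta, P\n\\end{verbatim}\n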
\nThe recently proposed approach to SysId, relying on regularization\/Bayesian techniques, overcomes the above-described issue by jointly performing estimation and order selection.\nThe next section will illustrate how the batch regularization\/Bayesian method can be tailored to the online identification of time-varying systems.\n\n\\section{Regularization\/Bayesian Approach} \\label{sec:bayes}\n\n\\subsection{Batch Approach}\n\\label{subsec:batch_approach}\n\\noindent We discuss how the regularization\/Bayesian technique works in the standard batch setting, i.e. when data $\\mathcal{D}=\\left\\{y(t),u(t)\\right\\}_{t=1}^N$ are given. For future use, let us define the vector $Y_N =\\left[y(1)\\ ...\\ y(N)\\right]^\\top\\in\\mathbb{R}^N$.\n\\\\\nIn the Bayesian framework, the impulse response $h$ is treated as a realization of a stochastic process with a prior distribution $p_\\eta(h)$, depending on some parameters $\\eta\\in \\Omega$. The prior $p_\\eta(h)$ is designed in order to account for some desired properties of the estimated impulse response, such as smoothness and stability \\cite{GP-AC-GdN:11,SurveyKBsysid}. In the Bayesian terminology, the parameters $\\eta$ are known as hyper-parameters and they need to be estimated from the data, e.g. by optimizing the so-called marginal likelihood (i.e. the likelihood once the latent variable $h$ has been integrated out) \\cite{PCAuto2015}:\n\\begin{equation}\n\\hat{\\eta} =\\arg\\max_{\\eta\\in\\Omega} p_\\eta(Y_N) = \\arg\\max_{\\eta\\in\\Omega} \\int p(Y_N\\vert h)p_\\eta(h)dh\n\\end{equation}\nOnce the hyper-parameters $\\eta$ have been estimated, the minimum variance estimate of $h$ can be computed; it coincides with the posterior mean given the observed data:\n\\begin{equation}\\label{equ:post_est}\n\\hat{h} := \\mathbb{E}_{\\hat{\\eta}} \\left[h \\vert Y_N \\right] =\\int h \\frac{p(Y_N\\vert h)p_{\\hat{\\eta}}(h)}{p_{\\hat{\\eta}}(Y_N)} dh\n\\end{equation}\nIn the SysId context, $h$ is typically modelled as a zero-mean Gaussian process (independent of the noise $e(t)$) with covariance $\\mathbb{E}\\left[h(t)h(s)\\right]=\\bar{K}_\\eta(t,s)$ (also known as the kernel in the Machine Learning literature) \\cite{GP-AC-GdN:11,ChenOL12}. Thanks to this assumption, the marginal likelihood $p_\\eta(Y_N)$ is Gaussian and the estimate \\eqref{equ:post_est} is available in closed form.\n\\\\\nFurthermore, for simplicity the IIR model in \\eqref{equ:sys} can be accurately approximated by a FIR model of order $n$, whenever $n$ is chosen large enough to capture the relevant components of the system dynamics. 
By collecting in $\\mathbf{h}:=\\left[h(1)\\ \\cdots\\ h(n)\\right]^\\top\\in\\mathbb{R}^n$ the first $n$ impulse response coefficients, the following Gaussian prior can be defined:\n\\begin{align}\np_\\eta(\\mathbf{h})&\\sim \\mathcal{N}(0,K_\\eta), \\qquad \\eta \\in \\Omega \\subset \\mathbb{R}^d,\\ \\ K_\\eta \\in \\mathbb{R}^{n\\times n}\n\\end{align}\nThe hyper-parameters $\\eta$ can then be estimated by solving\n\\begin{align}\n\\hat{\\eta} &= \\arg\\min_{\\eta\\in\\Omega}\\ - \\ln p_\\eta(Y_N)= \\arg\\min_{\\eta\\in\\Omega}\\ f_N(\\eta)\\label{equ:ml_max}\\\\\nf_N(\\eta) &= Y_N^\\top \\Sigma(\\eta)^{-1} Y_N + \\ln \\det \\Sigma(\\eta)\\label{equ:ml}\\\\\n\\Sigma(\\eta)&= \\Phi_N K_\\eta \\Phi_N^\\top + \\sigma^2 I_N\n\\end{align}\nwhere $\\Phi_N\\in\\mathbb{R}^{N\\times n}$:\n\\begin{equation}\\label{equ:phi}\n\\Phi_N := \\begin{bmatrix}\nu(0) & u(-1) & \\cdots & u(-n+1)\\\\\n\\vdots & \\ddots & \\ddots &\\vdots\\\\\nu(N-1) & u(N-2) & \\cdots & u(N-n)\n\\end{bmatrix} \n\\end{equation}\nIn the batch setting we are considering, the quantities $u(-n+1), ...,u(0)$ can be either estimated or set to zero. Here, we follow the latter option. \nOnce $\\hat{\\eta}$ has been computed, the corresponding minimum variance estimate is given by\n\\resizebox{1.02\\linewidth}{!}{\n \\begin{minipage}{\\linewidth}\n\\begin{align}\n\\widehat{\\mathbf{h}} :&=\\mathbb{E}_{\\hat{\\eta}}\\left[\\mathbf{h} \\vert Y_N\\right] = \\arg\\min_{\\mathbf{h}\\in\\mathbb{R}^n} \\left(Y_N-\\Phi_N\\mathbf{h}\\right)^\\top \\left(Y_N-\\Phi_N\\mathbf{h}\\right) + \\sigma^2 \\mathbf{h}^\\top K_{\\hat{\\eta}}^{-1}\\mathbf{h}\\nonumber \\\\\n&= (\\Phi_N^\\top \\Phi_N + \\sigma^2 K_{\\hat{\\eta}}^{-1})^{-1}\\Phi_N^\\top Y_N \\label{equ:h_hat}\n\\end{align}\n\\end{minipage}\n}\n\n\\vspace{1mm}\n\n\\begin{rem}\nThe estimate $\\widehat{\\mathbf{h}}$ in \\eqref{equ:h_hat} can be computed once a noise variance estimate $\\hat{\\sigma}^2$ is available. For this purpose, $\\sigma^2$ can be treated as a hyper-parameter and estimated by solving \\eqref{equ:ml_max}, or it can be computed from a LS estimate of $\\mathbf{h}$. In this work the latter option is adopted.\n\\end{rem}
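\n\\noindent For illustration, \\eqref{equ:h_hat} can be computed without forming $K_{\\hat{\\eta}}^{-1}$ explicitly, through the algebraically equivalent expression $K_{\\hat{\\eta}}\\Phi_N^\\top(\\Phi_N K_{\\hat{\\eta}}\\Phi_N^\\top+\\sigma^2 I_N)^{-1}Y_N$. A minimal Python\/NumPy sketch (illustrative only, not the code used in the experiments):\n\\begin{verbatim}\nimport numpy as np\n\ndef regularized_fir_estimate(Phi, Y, K, sigma2):\n    # h_hat = (Phi'Phi + sigma2*K^{-1})^{-1} Phi'Y, computed as\n    # h_hat = K Phi' (Phi K Phi' + sigma2*I)^{-1} Y\n    N = Y.shape[0]\n    S = Phi @ K @ Phi.T + sigma2 * np.eye(N)\n    return K @ Phi.T @ np.linalg.solve(S, Y)\n\\end{verbatim}\n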
\n\n\\subsection{Online Approach} \\label{subsec:bayes_onine}\n\\noindent We now adapt the batch technique described in Sec.~\\ref{subsec:batch_approach} to the online setting outlined in Sec.~\\ref{sec:problem_formulation}. At time $k+T$, when data $\\mathcal{D}_{i+1}=\\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$ are provided, the current impulse response estimate $\\widehat{\\mathbf{h}}^{(i)}$ is updated through formula \\eqref{equ:h_hat}, once the data matrices are enlarged with the new data and a new hyper-parameter estimate $\\hat{\\eta}^{(i+1)}$ is computed. The data matrices are updated through the following recursions\n\\begin{align}\nR^{(i+1)} &:= \\Phi_{(i+1)T}^\\top \\Phi_{(i+1)T} = R^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\Phi_{iT+1}^{(i+1)T}\\label{equ:r}\\\\\n\\widetilde{Y}^{(i+1)} &:= \\Phi_{(i+1)T}^\\top Y_{(i+1)T} =\\widetilde{Y}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top Y_{iT+1}^{(i+1)T}\\label{equ:y_tilde}\\\\\n\\xbar{Y}^{(i+1)} &:= Y_{(i+1)T}^\\top Y_{(i+1)T}= \\xbar{Y}^{(i)} + \\left(Y_{iT+1}^{(i+1)T}\\right)^\\top Y_{iT+1}^{(i+1)T}\\label{equ:y_bar}\n\\end{align}\nwhere $Y_{(i+1)T}=\\left[y(1)\\cdots y((i+1)T)\\right]^\\top\\in\\mathbb{R}^{(i+1)T}$, $Y_{iT+1}^{(i+1)T} = \\left[y(iT+1) \\cdots y(iT+T)\\right]$; $\\Phi_{(i+1)T}$ is defined as in \\eqref{equ:phi} with $N$ replaced by $(i+1)T$, while $\\Phi_{iT+1}^{(i+1)T}$ has the same structure as matrix \\eqref{equ:phi} but contains the data from $iT-n+1$ to $(i+1)T$.\nThe computational cost of \\eqref{equ:r}-\\eqref{equ:y_bar} is $O(n^2T)$, $O(nT)$ and $O(T^2)$, respectively.\\\\\nThe minimization of $f_{(i+1)T}(\\eta)$ in \\eqref{equ:ml}, needed to determine $\\hat{\\eta}^{(i+1)}$, is typically performed through iterative routines, such as first- or second-order optimization algorithms \\cite{BonettiniCPSIAM2014} or the Expectation-Maximization (EM) algorithm \\cite{bottegal2016robust,BOTTEGAL2015466}. Since these methods may require a large number of iterations before reaching convergence, they may be unsuited for online applications. We recall that, when adopted for marginal likelihood optimization, each iteration of these algorithms has a computational complexity of $O(n^3)$, due to the objective function evaluation. Specifically, $f_{(i+1)T}(\\eta)$ can be robustly evaluated as \\cite{chen2013implementation}\n\\begin{align}\nf_{(i+1)T}(\\eta) = &((i+1)T-n)\\ln \\sigma^2 + 2\\ln\\vert S\\vert \\nonumber\\\\\n&+\\sigma^{-2}(\\ \\xbar{Y}^{(i+1)}- \\| S^{-1}L^\\top \\widetilde{Y}^{(i+1)}\\|_2^2\\ ) \\label{equ:eff_ml}\n\\end{align}\nwhere $L$ and $S$ are Cholesky factors: $K_\\eta =: LL^\\top$ and $\\sigma^2 I_n + L^\\top R^{(i+1)} L =: SS^\\top$ (whose computation is $O(n^3)$).
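\n\\noindent For illustration, the evaluation \\eqref{equ:eff_ml} can be sketched as follows (Python\/SciPy; $N=(i+1)T$, and the names are ours):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import cholesky, solve_triangular\n\ndef f_eta(K, R, Y_tilde, Y_bar, sigma2, N):\n    # f = (N - n) ln(sigma2) + 2 ln|S|\n    #     + (Y_bar - ||S^{-1} L' Y_tilde||^2) / sigma2\n    n = K.shape[0]\n    L = cholesky(K, lower=True)  # K = L L'\n    S = cholesky(sigma2 * np.eye(n) + L.T @ R @ L, lower=True)\n    v = solve_triangular(S, L.T @ Y_tilde, lower=True)\n    return ((N - n) * np.log(sigma2)\n            + 2.0 * np.sum(np.log(np.diag(S)))\n            + (Y_bar - v @ v) / sigma2)\n\\end{verbatim}\n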
\n\\noindent To tackle the real-time constraints, the approach proposed in \\cite{RPPCECC2016} is adopted: $\\hat{\\eta}^{(i+1)}$ is computed by running just one iteration of a Scaled Gradient Projection (SGP) algorithm (a first-order optimization method) applied to problem \\eqref{equ:ml_max} \\cite{BonettiniCPSIAM2014}. Algorithm \\ref{alg:1step_grad} summarizes its implementation. Notice that it is initialized with the previous estimate $\\hat{\\eta}^{(i)}$ (obtained using the data $\\bigcup_{l=1}^{i} \\mathcal{D}_l$), which is likely to be close to a local optimum of the objective function $f_{iT}(\\eta)\\equiv f_{k}(\\eta)$. If the number of new data is small, i.e. $T \\ll k$, it is reasonable to suppose that $\\arg\\min_{\\eta\\in\\Omega} f_{iT} (\\eta) \\approx \\arg\\min_{\\eta\\in \\Omega} f_{(i+1)T} (\\eta)$. Therefore, by performing just one SGP iteration, $\\hat{\\eta}^{(i+1)}$ will be sufficiently close to a local optimum of $f_{(i+1)T}(\\eta)$.\n\n\\begin{algorithm}\n\\caption{1-step Scaled Gradient Projection (SGP)}\\label{alg:1step_grad}\n\\begin{algorithmic}[1]\n\\Statex{\\textbf{Inputs:}} previous estimates $\\{ \\hat{\\eta}^{(i)}, \\hat{\\eta}^{(i-1)}\\}$, $\\nabla f_{iT}(\\hat{\\eta}^{(i-1)})$, $R^{(i+1)}$, $\\widetilde{Y}^{(i+1)}$, $\\xbar{Y}^{(i+1)}$, $\\hat{\\sigma}^{(i+1)^2}$\n\\Statex Initialize: $c=10^{-4},\\ \\delta=0.4$\n\\State Compute $\\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)})$\n\\State $r^{(i-1)} \\gets \\hat{\\eta}^{(i)} - \\hat{\\eta}^{(i-1)}$ \\label{alg_step:rf}\n\\State $w^{(i-1)} \\gets \\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)}) - \\nabla f_{iT}(\\hat{\\eta}^{(i-1)})$ \\label{alg_step:w}\n\\State Approximate the inverse Hessian of $f_{(i+1)T}(\\hat{\\eta}^{(i)})$ as $B^{(i)}=\\alpha^{(i)}D^{(i)}$ (using the procedure outlined in \\cite{BonettiniCPSIAM2014})\\label{alg_step:inverse_H}\n\\State Project onto the feasible set:\n\\Statex $z\\gets \\Pi_{\\Omega,D^{(i)}} (\\ \\hat{\\eta}^{(i)} -B^{(i)}\\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)})\\ )$\\label{alg_step:proj}\n\\State $\\Delta\\hat{\\eta}^{(i)} \\gets z - \\hat{\\eta}^{(i)}$\n\\State $\\nu \\gets 1$\n\\If{$f_{(i+1)T}(\\hat{\\eta}^{(i)}+\\nu \\Delta \\hat{\\eta}^{(i)}) \\leq f_{(i+1)T}(\\hat{\\eta}^{(i)})+ c \\nu \\nabla f_{(i+1)T}(\\hat{\\eta}^{(i)})^\\top \\Delta\\hat{\\eta}^{(i)}$}\n\\State Go to step 12\n\\Else\n\\State $\\nu \\gets \\delta \\nu$\n\\EndIf\n\\State $\\hat{\\eta}^{(i+1)} \\gets \\hat{\\eta}^{(i)} + \\nu \\Delta \\hat{\\eta}^{(i)}$\n\\Statex \\textbf{Output:} $\\hat{\\eta}^{(i+1)}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\noindent The key step in Algorithm \\ref{alg:1step_grad} is \\ref{alg_step:inverse_H}, where the inverse Hessian is approximated as the product between the positive scalar $\\alpha^{(i)}\\in\\mathbb{R}_+$ and the diagonal matrix $D^{(i)}\\in\\mathbb{R}^{d\\times d}$.\n$\\alpha^{(i)}$ is chosen by alternating the so-called Barzilai-Borwein (BB) rules \\cite{barzilai1988two}:\n\\begin{equation}\n\\alpha_1^{(i)} := \\frac{r^{(i-1)^\\top}r^{(i-1)}}{r^{(i-1)^\\top}w^{(i-1)}}, \\qquad \n\\alpha_2^{(i)} := \\frac{r^{(i-1)^\\top}w^{(i-1)}}{w^{(i-1)^\\top}w^{(i-1)}}\\label{equ:bb}\n\\end{equation}\nwith $r^{(i-1)}$ and $w^{(i-1)}$ specified at steps \\ref{alg_step:rf} and~\\ref{alg_step:w} of Algorithm \\ref{alg:1step_grad}. The definition of $D^{(i)}$ depends on the constraint set and on the objective function. The authors in \\cite{BonettiniCPSIAM2014} exploit the following decomposition of $\\nabla_\\eta f_{(i+1)T}(\\eta)$ (defined in \\eqref{equ:ml}):\n\\begin{equation}\\label{equ:grad_decomp}\n\\nabla_\\eta f_{(i+1)T}(\\eta) = V(\\eta) - U(\\eta), \\quad V(\\eta)>0, \\ U(\\eta)\\geq 0\n\\end{equation}\nto specify $D^{(i)}$. We refer the interested reader to \\cite{BonettiniCPSIAM2014} for further details.\n\n\\noindent The projection operator adopted at step \\ref{alg_step:proj} of Algorithm \\ref{alg:1step_grad} is\n\\begin{equation}\n\\Pi_{\\Omega,D^{(i)}} (z) = \\textstyle{\\arg\\min_{x\\in\\Omega}} (x-z)^\\top D^{(i)^{-1}}(x-z)\n\\end{equation}\n\n\\begin{rem}\nBesides SGP, in \\cite{RPPCECC2016} other inverse Hessian approximations are investigated (e.g. the BFGS formula). In this work we only consider the SGP approximation, since it appears preferable to the others, according to the experiments we performed (in both the time-invariant and the time-varying setting). \\cite{RPPCECC2016} also considers the EM algorithm as an alternative to first-order optimization methods for solving problem \\eqref{equ:ml_max}. Even though the results reported for EM in \\cite{RPPCECC2016} are comparable to the ones achieved through SGP, the latter approach appears superior to EM in the time-varying setting we are considering.\n\\end{rem}
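\n\\noindent For illustration, one SGP iteration can be sketched as follows (Python; the Armijo backtracking is written as a loop for clarity, whereas Algorithm \\ref{alg:1step_grad} performs a single reduction of $\\nu$):\n\\begin{verbatim}\nimport numpy as np\n\ndef bb_stepsizes(r, w):\n    # Barzilai-Borwein rules alpha_1 and alpha_2 defined above\n    return (r @ r) / (r @ w), (r @ w) / (w @ w)\n\ndef sgp_step(eta, grad, alpha, D, project, f, c=1e-4, delta=0.4):\n    z = project(eta - alpha * (D @ grad), D)  # scaled projected step\n    d = z - eta\n    nu, f0 = 1.0, f(eta)\n    for _ in range(30):  # Armijo backtracking\n        if f(eta + nu * d) <= f0 + c * nu * (grad @ d):\n            break\n        nu *= delta\n    return eta + nu * d\n\\end{verbatim}\n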
\n\n\\subsection{Dealing with time-varying systems}\\label{sec:time_var}\n\\noindent In this section we deal with the identification of time-varying systems: specifically, estimators have to be equipped with tools through which past data become less relevant for the current estimation. In the following we propose two routines which combine the ``online Bayesian estimation'' sketched above with the ability to ``forget'' past data.\n\n\\subsubsection{Fixed Forgetting Factor}\nFollowing a classical practice in parametric SysId (see Sec.~\\ref{sec:pem}), we introduce a forgetting factor $\\bar{\\gamma} \\in (0,1]$ into the estimation criterion, in order to base the estimation mainly on the most recent data. Specifically, we assume that the first $k$ data are generated according to the following linear model:\n\\begin{equation}\\label{equ:ff_model}\n\\bar{G}_k Y_k = \\bar{G}_k \\Phi_k \\mathbf{h} + E, \\ E= \\left[e(1)...e(k)\\right]^\\top \\sim \\mathcal{N}(0,\\sigma^2 I_k)\n\\end{equation}\nwhere $\\bar{G}_k \\bar{G}_k =: \\bar{\\Gamma}_k$ and $\\bar{\\Gamma}_k := diag\\left(\\bar{\\gamma}^{k-1}, \\bar{\\gamma}^{k-2}, ..., \\bar{\\gamma}^0\\right)$. Therefore, when adopting the regularized regression criterion \\eqref{equ:h_hat}, the estimate at time $k$ is computed as:\n\\begin{align}\n\\widehat{\\mathbf{h}}_{\\bar{\\gamma}} &:= \\arg\\min_{\\mathbf{h}\\in\\mathbb{R}^n}\\sum_{t=1}^k \\bar{\\gamma}^{k-t} \\left(y(t) - \\Phi_t^t \\mathbf{h}\\right)^2 + \\sigma^2 \\mathbf{h}^\\top K_{\\hat{\\eta}}^{-1} \\mathbf{h} \\label{equ:regul_probl_ff}\\\\\n&= \\arg\\min_{\\mathbf{h}\\in\\mathbb{R}^n}\\left(Y_k - \\Phi_k \\mathbf{h}\\right)^\\top \\bar{\\Gamma}_k \\left(Y_k - \\Phi_k \\mathbf{h}\\right) + \\sigma^2 \\mathbf{h}^\\top K_{\\hat{\\eta}}^{-1} \\mathbf{h}\\nonumber\\\\\n&= (\\Phi_k^\\top \\bar{\\Gamma}_k \\Phi_k + \\sigma^2 K_{\\hat{\\eta}}^{-1})^{-1} \\Phi_k^\\top \\bar{\\Gamma}_k Y_k \\label{equ:h_hat_ff}\n\\end{align}\nCorrespondingly, the hyper-parameters are estimated by solving:\n\\begin{align}\n\\hat{\\eta} &= \\arg\\min_{\\eta\\in\\Omega}\\ Y_k^\\top \\bar{G}_k \\Sigma_{\\bar{\\gamma}}(\\eta)^{-1} \\bar{G}_k Y_k + \\ln \\det \\Sigma_{\\bar{\\gamma}}(\\eta)\\label{equ:eta_hat_ff}\\\\\n\\Sigma_{\\bar{\\gamma}}(\\eta)&= \\bar{G}_k \\Phi_k K_\\eta \\Phi_k^\\top \\bar{G}_k + \\sigma^2 I_k\n\\end{align}\nAlgorithm \\ref{alg:on_line_ff} illustrates the online implementation of the identification procedure based on equations \\eqref{equ:h_hat_ff} and \\eqref{equ:eta_hat_ff}. In particular, it assumes that at time $k$ the estimates $\\widehat{\\mathbf{h}}^{(i)}$ and $\\hat{\\eta}^{(i)}$ are available and have been computed by solving, respectively, \\eqref{equ:regul_probl_ff} and \\eqref{equ:eta_hat_ff}; these estimates are then updated online after the new data $\\mathcal{D}_{i+1}$ are provided. Once $\\bar{\\gamma}$ is chosen by the user, it enters the data matrices $R_{\\bar{\\gamma}}^{(i+1)}:=\\Phi_{(i+1)T}^\\top \\bar{\\Gamma}_{(i+1)T}\\Phi_{(i+1)T}$, $\\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)}:=\\Phi_{(i+1)T}^\\top \\bar{\\Gamma}_{(i+1)T}Y_{(i+1)T}$ and $\\xbar{Y}_{\\bar{\\gamma}}^{(i+1)}:=Y_{(i+1)T}^\\top \\bar{\\Gamma}_{(i+1)T}Y_{(i+1)T}$, updated at steps \\ref{alg2_step:r}-\\ref{alg2_step:yb} of the algorithm.
\n\n\\begin{algorithm}\n\\caption{Online Bayesian SysId: Fixed Forgetting Factor}\\label{alg:on_line_ff}\n\\begin{algorithmic}[1]\n\\Statex{\\textbf{Inputs:}} forgetting factor $\\bar{\\gamma}$, previous estimates $\\{ \\hat{\\eta}^{(i)}, \\hat{\\eta}^{(i-1)}\\}$, previous data matrices $\\{R_{\\bar{\\gamma}}^{(i)},\\widetilde{Y}_{\\bar{\\gamma}}^{(i)},\\xbar{Y}_{\\bar{\\gamma}}^{(i)}\\}$, new data $\\mathcal{D}_{i+1}=\\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$\n\\State $R_{\\bar{\\gamma}}^{(i+1)} \\gets \\bar{\\gamma}^T R_{\\bar{\\gamma}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\bar{\\Gamma}_T\\ \\Phi_{iT+1}^{(i+1)T} $ \\label{alg2_step:r}\n\\State $\\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)} \\gets \\bar{\\gamma}^T\\widetilde{Y}_{\\bar{\\gamma}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\bar{\\Gamma}_T\\ Y_{iT+1}^{(i+1)T}$ \\label{alg2_step:yt}\n\\State $\\xbar{Y}_{\\bar{\\gamma}}^{(i+1)} \\gets \\bar{\\gamma}^T \\xbar{Y}_{\\bar{\\gamma}}^{(i)} + \\left(Y_{iT+1}^{(i+1)T}\\right)^\\top \\bar{\\Gamma}_T\\ Y_{iT+1}^{(i+1)T}$ \\label{alg2_step:yb}\n\\State $\\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\gets R_{\\bar{\\gamma}}^{(i+1)^{-1}} \\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)}$ \\label{alg2_step:ls}\n\\State {\\footnotesize$\\hat{\\sigma}^{(i+1)^2} \\gets \\frac{1}{ (i+1)T - n} \\left(\\xbar{Y}_{\\bar{\\gamma}}^{(i+1)}-2\\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)^\\top}\\widehat{\\mathbf{h}}_{LS}^{(i+1)} + \\widehat{\\mathbf{h}}_{LS}^{(i+1)^\\top}R_{\\bar{\\gamma}}^{(i+1)}\\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\right)$}\n\\State $\\hat{\\eta}^{(i+1)}\\gets \\arg\\min_{\\eta\\in\\Omega}\\ f_{(i+1)T}(\\eta)$ (use Algorithm \\ref{alg:1step_grad})\n\\label{alg2_step:ml}\n\\State $\\widehat{\\mathbf{h}}^{(i+1)} \\gets \\left(R_{\\bar{\\gamma}}^{(i+1)} +\\hat{\\sigma}^{(i+1)^2} K_{\\hat{\\eta}^{(i+1)}}^{-1}\\right)^{-1}\\widetilde{Y}_{\\bar{\\gamma}}^{(i+1)}$\\label{alg_step:h}\n\\Statex{\\textbf{Output:}} $\\widehat{\\mathbf{h}}^{(i+1)}$, $\\hat{\\eta}^{(i+1)}$\n\\end{algorithmic}\n\\end{algorithm}
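\n\\noindent A minimal sketch of steps \\ref{alg2_step:r}-\\ref{alg2_step:yb} (illustrative Python\/NumPy code; names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef update_ff_matrices(R, Yt, Yb, Phi_new, Y_new, gamma):\n    # Phi_new: (T, n) regressor block; Y_new: (T,) new outputs\n    T = Y_new.shape[0]\n    g = gamma ** np.arange(T - 1, -1, -1)  # diagonal of Gamma_T\n    PhiTG = Phi_new.T * g                  # Phi' Gamma_T\n    R = gamma**T * R + PhiTG @ Phi_new\n    Yt = gamma**T * Yt + PhiTG @ Y_new\n    Yb = gamma**T * Yb + (g * Y_new) @ Y_new\n    return R, Yt, Yb\n\\end{verbatim}\n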
\n\n\\subsubsection{Treating the Forgetting Factor as a Hyper-parameter}\nThe Bayesian framework also offers the possibility to treat the forgetting factor as a hyper-parameter and to estimate it by solving:\n\\begin{align}\n\\hat{\\eta}, \\hat{\\gamma} &= {\\textstyle\\arg\\min_{\\eta\\in\\Omega, \\gamma\\in(0,1]}}\\ f_k(\\eta,\\gamma)\\label{equ:eta_hat_ff_hyper} \\\\\nf_k(\\eta, \\gamma)&= Y_k^\\top G_k \\Sigma(\\eta,\\gamma)^{-1} G_k Y_k + \\ln \\det \\Sigma(\\eta,\\gamma)\\label{equ:ml_ff_hyper} \\\\\n\\Sigma(\\eta,\\gamma)&= G_k \\Phi_k K_\\eta \\Phi_k^\\top G_k + \\sigma^2 I_k\n\\end{align}\nwhere $G_k G_k =: \\Gamma_k$ and $\\Gamma_k := diag\\left(\\gamma^{k-1}, \\gamma^{k-2}, ..., \\gamma^0\\right)$.\n\\begin{rem} Notice that the model \\eqref{equ:ff_model} is equivalent to\n$$\nY_k = \\Phi_k \\mathbf{h} + E_{\\bar{\\gamma}}, \\quad E_{\\bar{\\gamma}} = \\left[e_{\\bar{\\gamma}}(1),...,e_{\\bar{\\gamma}}(k)\\right]^\\top \\sim \\mathcal{N}(0,\\sigma^2 \\bar{\\Gamma}_k^{-1})\n$$\nTherefore, treating the forgetting factor as a hyper-parameter is equivalent to modeling the noise with a non-constant variance, giving the diagonal entries of the covariance matrix an exponentially decaying structure.\n\\end{rem}\n\nThe online implementation of this approach is detailed in Algorithm \\ref{alg:on_line_ff_hyper}, where\n\\begin{equation}\nR_{\\hat{\\pmb{\\gamma}}}^{(i)} := \\hat{\\gamma}^{(i)} R_{\\hat{\\pmb{\\gamma}}}^{(i-1)} + \\left(\\Phi_{(i-1)T+1}^{iT}\\right)^\\top \\widehat{\\Gamma}_T^{(i)} \\Phi_{(i-1)T+1}^{iT} \n\\end{equation}\nwith $\\widehat{\\Gamma}_T^{(i)} = diag((\\hat{\\gamma}^{(i)})^{T-1},.. ,(\\hat{\\gamma}^{(i)})^{0})$; $\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}$ and $\\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}$ are analogously defined.\\\\\nWe stress that the objective function in \\eqref{equ:ml_ff_hyper} does not admit the decomposition \\eqref{equ:grad_decomp}; indeed, we have\n\\begin{equation*}\n\\frac{\\partial f_k(\\eta,\\gamma)}{\\partial \\gamma} = V(\\eta,\\gamma) + U(\\eta,\\gamma), \\quad V(\\eta,\\gamma) >0, \\ U(\\eta,\\gamma) \\geq 0 \n\\end{equation*}\nThus, when $\\gamma$ is treated as a hyper-parameter, Algorithm \\ref{alg:1step_grad} is run setting $D^{(i)}=I_d$ at step \\ref{alg_step:inverse_H}, while $\\alpha^{(i)}$ is still determined by alternating the BB rules \\eqref{equ:bb}.\n\n\\begin{algorithm}\n\\caption{Online Bayesian SysId: Forgetting Factor as a hyper-parameter}\\label{alg:on_line_ff_hyper}\n\\begin{algorithmic}[1]\n\\Statex{\\textbf{Inputs:}} previous estimates $\\{ \\hat{\\eta}^{(i)}, \\hat{\\eta}^{(i-1)}, \\hat{\\gamma}^{(i)}, \\hat{\\gamma}^{(i-1)}\\}$, previous data matrices $\\{R_{\\hat{\\pmb{\\gamma}}}^{(i)},\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)},\\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}\\}$, new data $\\mathcal{D}_{i+1}=\\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$\n\\State $R_{\\gamma}^{(i+1)} \\gets \\gamma^T R_{\\hat{\\pmb{\\gamma}}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\Gamma_T \\ \\Phi_{iT+1}^{(i+1)T} $ \\label{alg3_step:r}\n\\State $\\widetilde{Y}^{(i+1)}_{\\gamma} \\gets \\gamma^T \\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)} + \\left(\\Phi_{iT+1}^{(i+1)T}\\right)^\\top \\Gamma_T \\ Y_{iT+1}^{(i+1)T}$ \\label{alg3_step:yt}\n\\State $\\xbar{Y}^{(i+1)}_\\gamma \\gets \\gamma^T \\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)} + \\left(Y_{iT+1}^{(i+1)T}\\right)^\\top \\Gamma_T \\ Y_{iT+1}^{(i+1)T}$ \\label{alg3_step:yb}\n\\State $\\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\gets (R_{\\hat{\\pmb{\\gamma}}}^{(i)})^{-1} \\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}$ \\label{alg3_step:ls}\n\\State {\\footnotesize$\\hat{\\sigma}^{2^{(i+1)}} \\gets \\frac{1}{ (i+1)T - n} \\left(\\xbar{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)}-\n2(\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i)})^\\top\\ \\widehat{\\mathbf{h}}_{LS}^{(i+1)} + (\\widehat{\\mathbf{h}}_{LS}^{(i+1)})^\\top R_{\\hat{\\pmb{\\gamma}}}^{(i)} \\widehat{\\mathbf{h}}_{LS}^{(i+1)} \\right)$}\n\\State $\\hat{\\eta}^{(i+1)}, \\hat{\\gamma}^{(i+1)} \\gets \\arg\\min_{\\eta\\in\\Omega, \\gamma\\in(0,1]}\\ f_{(i+1)T}(\\eta,\\gamma)$ \\Statex (use Algorithm \\ref{alg:1step_grad})\n\\label{alg3_step:ml}\n\\State $\\widehat{\\mathbf{h}}^{(i+1)} \\gets \\left(R_{\\hat{\\pmb{\\gamma}}}^{(i+1)} +\\hat{\\sigma}^{2^{(i+1)}}(\\hat{\\gamma}^{(i+1)})\\ K_{\\hat{\\eta}^{(i+1)}}^{-1}\\right)^{-1}\n\\widetilde{Y}_{\\hat{\\pmb{\\gamma}}}^{(i+1)}$\\label{alg3_step:h}\n\\Statex{\\textbf{Output:}} $\\widehat{\\mathbf{h}}^{(i+1)}$, $\\hat{\\eta}^{(i+1)}$\n\\end{algorithmic}\n\\end{algorithm}
\n\n\\section{Experimental Results}\\label{sec:experiment}\n\\noindent In this section we test the online algorithms for parametric and Bayesian SysId described in Sec.~\\ref{sec:pem} and \\ref{sec:bayes}. Their performances are compared through a Monte-Carlo study over 200 time-varying systems.\n\n\\subsection{Data}\n\\noindent 200 datasets consisting of 3000 input-output measurement pairs are generated. Each of them is created as follows: the first 1000 data are produced by a system contained in the data-bank D4 (used in \\cite{TC-MA-LL-AC-GP:14}), while the remaining 2000 data are generated by perturbing the D4-system with two additional poles and zeros. These are chosen such that the order of the D4-system changes, thus creating a switch in the data generating system at time $k=1001$. \n\\\\\nThe data-bank D4 consists of 30th order random SISO discrete-time systems having all their poles inside a circle of radius 0.95. These systems are simulated with a unit variance band-limited Gaussian signal with normalized band $[0,0.8]$. A zero-mean white Gaussian noise, with variance adjusted so that the Signal to Noise Ratio (SNR) is always equal to 1, is then added to the output data.\n\n\\subsection{Estimators}\n\\noindent The parametric estimators are computed with the \\verb!roe! Matlab routine, using the BIC criterion for the model complexity selection. In the following this estimator will be denoted as \\textit{PEM BIC}. Furthermore, as a benchmark we introduce the parametric oracle estimator, called \\textit{PEM OR}, which selects the model complexity by choosing the model that gives the best fit to the impulse response of the true system. The order selection is performed every time a new dataset becomes available: multiple models with orders ranging from 1 to 20 are estimated and the order selection is performed according to the two criteria described above.\n\\\\\nAs regards the methods relying on Bayesian inference, we adopt a zero-mean Gaussian prior with a covariance matrix (kernel) given by the so-called TC-kernel:\n\\begin{equation}\nK_\\eta^{TC}(k,j) = \\lambda \\min(\\beta^{k},\\beta^{j}), \\qquad \\eta = [ \\lambda,\\ \\beta]\n\\end{equation}\nwith $\\Omega = \\{(\\lambda,\\beta): \\lambda \\geq 0, 0 \\leq \\beta \\leq 1\\}$ \\cite{ChenOL12}. The length $n$ of the estimated impulse responses is set to 100. In the following, we will use the acronym \\textit{TC} to denote these methods. Furthermore, the notation \\textit{OPT} will refer to the standard Bayesian procedure, in which the SGP algorithm adopted to optimize the marginal likelihood $f_k(\\eta)$ is run until the relative change in $f_k(\\eta)$ is less than $10^{-9}$; the online counterpart (illustrated in Sec.~\\ref{sec:bayes}) will be referred to as \\textit{1-step ML}.\nWe will also use the acronyms \\textit{TC FF} when a fixed forgetting factor is adopted, and \\textit{TC est FF} when the forgetting factor is estimated as a hyper-parameter.\n\\\\\nFor each Monte Carlo run, the identification algorithms are initialized using the first batch of data $\\mathcal{D}_{init}=\\left\\{u(t),y(t)\\right\\}_{t=1}^{300}$.\nAfter this initial step, the estimators are updated every $T=10$ time steps, when new data $\\mathcal{D}_{i+1} = \\left\\{u(t),y(t)\\right\\}_{t=iT+1}^{(i+1)T}$ are provided. The forgetting factor in the \\textit{TC FF} and \\textit{PEM} methods is set to 0.998, while its estimate in the \\textit{TC est FF} method is initialized to 0.995.
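\n\\noindent For reference, the TC-kernel matrix can be built as follows (illustrative Python\/NumPy code; since $0\\leq\\beta\\leq 1$, $\\min(\\beta^{k},\\beta^{j})=\\beta^{\\max(k,j)}$):\n\\begin{verbatim}\nimport numpy as np\n\ndef tc_kernel(n, lam, beta):\n    # K(k, j) = lam * min(beta**k, beta**j), k, j = 1..n\n    idx = np.arange(1, n + 1)\n    return lam * beta ** np.maximum.outer(idx, idx)\n\\end{verbatim}\n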
\n\n\\subsection{Performance}\n\\noindent The purpose of the experiments is twofold. First, we will compare the two routines proposed in Sec.~\\ref{sec:time_var} to explicitly deal with time-varying systems. Second, we will compare the parametric and the Bayesian identification approaches when dealing with time-varying systems.\\\\\nAs a first comparison, we evaluate the adherence of the estimated impulse response $\\widehat{\\mathbf{h}}$ to the true one $\\mathbf{h}$, measured~as\n\\begin{equation}\n\\label{eq:fit_imp_resp}\n\\mathcal{F}(\\widehat{\\mathbf{h}})= 100 \\cdot \\Big(1- \\frac{\\Vert \\mathbf{h} - \\widehat{\\mathbf{h}} \\Vert_2}{\\Vert \\mathbf{h} \\Vert_2}\\Big)\n\\end{equation}\nFigure \\ref{fig:fit} reports the values of $\\mathcal{F}(\\widehat{\\mathbf{h}})$ at four time instants.\n\n\\begin{figure}[hbtp]\n\\centering\n\\includegraphics[width=.9\\columnwidth]{fitBoxplot_timeVard4_sgp_1i_TC_Nsys200_NestSysOrig1000_NestPerturb3000_SNR1_Nred300_Nstep10_FF998_FFpem998_FINAL-crop.pdf}\n\\caption{Fit $\\mathcal{F}(\\widehat{\\mathbf{h}})$ achieved at four time instants $k$ (corresponding to the number of data available for the estimation).}\n\\label{fig:fit}\n\\end{figure}\n\n\\noindent It is interesting to note that immediately before the change in the data generating system ($k = 1000$) the \\textit{TC} methods slightly outperform the ideal parametric estimator \\textit{PEM OR}.\nAfter the switch (occurring at $k=1001$), among the regularization\/Bayesian routines \\textit{TC est FF} recovers the fit performance slightly faster than \\textit{TC FF}; it also outperforms the latter in steady state, because it can choose forgetting factor values that retain a larger amount of data.\n\\\\\nWe also observe that the \\textit{1-step ML} procedures and the corresponding \\textit{OPT} routines provide analogous performance at each time step $k$, validating the method we propose for online estimation and confirming the results in \\cite{RPPCECC2016}.\n\\\\\nThe unrealistic \\textit{PEM OR} represents an upper reference for the achievable performance of the PEM estimators; it outperforms the \\textit{TC} methods in the transient after the switch, while it has comparable performance in steady state. Instead, the recursive \\textit{PEM BIC} estimator performs very poorly.\n\n\\begin{table}\n\\centering\n\\begin{tabularx}{\\columnwidth}{|X|X|X|X|X|X|}\n\\hline \n & \\multicolumn{3}{c|}{TC} & \\multicolumn{2}{c|}{PEM} \n\\\\ \n & OPT FF & FF & est FF & OR & BIC \\\\ \n\\hline \nmean & 6.70 & 0.44 & 5.45 & 18.44 & 18.44 \\\\\\hline\nstd & 1.28 & 0.03 & 0.67 & 0.69 & 0.69 \\\\\n\\hline \n\\end{tabularx} \n\\caption{Cumulative computation time after all the data $\\mathcal{D} =\\left\\{u(t),y(t)\\right\\}_{t=1}^{3000}$ have been used: mean and std over the 200 datasets.}\\label{tab:cumulative_time_TC_PEM}\n\\vspace{-4mm}\n\\end{table}\n\n\\noindent As a second comparison, Table \\ref{tab:cumulative_time_TC_PEM} reports the cumulative computation time of the proposed algorithms, in terms of mean and standard deviation, after the estimators have been fed with all the data $\\mathcal{D} =\\left\\{u(t),y(t)\\right\\}_{t=1}^{3000}$. \nThe \\textit{1-step ML} methods are one order of magnitude faster than the corresponding \\textit{OPT} ones. The \\textit{TC est FF} estimator is slower than \\textit{TC FF}: this is likely a consequence of having set $D^{(i)}=I_d$ in Algorithm \\ref{alg:1step_grad}. On the other hand, the RPEM estimators are about three times slower than the \\textit{OPT} ones, thus appearing not particularly appealing for online applications.
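\n\\noindent The fit index \\eqref{eq:fit_imp_resp} can be computed as (illustrative Python\/NumPy code):\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_index(h_true, h_est):\n    # F = 100 * (1 - ||h - h_hat||_2 / ||h||_2)\n    return 100.0 * (1.0 - np.linalg.norm(h_true - h_est)\n                    / np.linalg.norm(h_true))\n\\end{verbatim}\n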
\n\\section{Conclusion and Future Work}\\label{sec:conclusion}\n\\noindent We have applied recently developed SysId techniques, relying on Gaussian processes and Bayesian inference, to the identification of time-varying systems. Specifically, we have focused on an online setting, assuming that new data become available at predefined time instants. To tackle the real-time constraints we have modified the standard Bayesian procedure: hyper-parameters are estimated by performing only one gradient step in the corresponding marginal likelihood optimization problem.\nIn order to cope with the time-varying nature of the systems to be identified, we have proposed two approaches, based on the use of a forgetting factor. One of them treats the forgetting factor as a constant chosen by the user, while the other estimates it as a hyper-parameter of the Bayesian inference procedure.\\\\\nWe believe that the preliminary investigation performed in this work may pave the way for further research on this topic. A future research direction could consider the recursive update of the Bayesian estimate, resembling the one available for parametric techniques.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nMultiple stellar populations are routinely found in old Galactic and intermediate-age Magellanic Cloud star clusters (Piotto 2008 and references therein). Whether they are a signature of the cluster formation process or a result of the star formation history and related stellar evolution effects is still a matter of lively discussion (Renzini 2008, Bekki et al. 2008, Decressin et al. 2007).\nThe prototype of a globular cluster hosting multiple populations has long been $\\omega$~Cen (Villanova et al. 2007), although the current understanding is that it is possibly the remnant of a dwarf galaxy (Carraro \\& Lia 2000, Tsuchiya et al. 2004, Romano et al. 2007).\\\\\nMost chemical studies of the stellar population in $\\omega$~Cen are restricted to within 20 arcmin of the cluster center (see Norris \\& Da Costa 1995, Villanova et al. 2007), where, probably, the diverse stellar components are better mixed and less subject to external perturbations, like the Galactic tidal stress, than in the outer regions. Assessing whether there are population inhomogeneities or gradients in metal abundance in $\\omega$ Cen is a crucial step for progress in our understanding of this fascinating stellar system.\\\\\nIn Scarpa et al. (2003, 2007) we presented the results of a spectroscopic campaign to study the stellar radial velocity dispersion profile at $\\sim$ 25 arcmin, half way to the tidal radius ($\\sim$ 57 arcmin, Harris 1996), in an attempt to find a new way to verify the MOND (Modified Newtonian Dynamics, Milgrom 1983) theory of gravitation.\\\\\nIn this paper we make use of a subsample of those spectra (the ones taken for RGB stars) and extract estimates of the abundances of some of the most interesting elements. The aim is to study the chemical trends of the stellar populations in the cluster periphery, to learn whether the cluster outskirts contain, both qualitatively and quantitatively, the same population mix, and to infer from this additional information on the cluster formation and evolution.\\\\\nThe layout of the paper is as follows. In Sect.~2 we describe observations and data reduction, while Sect.~3 is dedicated to the derivation of metal abundances. The latter are then discussed in detail in Sect.~4. Sect.~5 is devoted to the comparison of the metal abundance trends in the inner and outer regions of $\\omega$~Cen and, finally, Sect.~6 summarizes the findings of this study.
\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure1.eps}\n\\caption{The CMD of $\\omega$~Cen in the optical (left panel) and in the infrared (right panel). Solid symbols of different colors indicate stars belonging to the MPP (red), IMP (blue) and MRP (green). See text for more details.}\n\\label{f1}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figure2.eps}\n\\caption{Distribution of iron abundance for the program stars. A bimodal Gaussian fit is used to derive the mean iron abundances of the MPP and IMP. Mean iron abundances of the three peaks are indicated. See text for more details.}\n\\label{f2}\n\\end{figure}\n\n\\begin{table} [!hbp]\n\\caption{Measured Solar abundances ($\\rm {log\\epsilon(X)=log(N_{X}\/N_{H})+12}$).} \n\\label{t1}\n\\centering\n\\begin{tabular}{lc}\n\\hline\n\\hline\nElement & log$\\epsilon$(X)\\\\\n\\hline\nNaI & 6.37 \\\\\nMgI & 7.54 \\\\\nSiI & 7.61 \\\\\nCaI & 6.39 \\\\\nTiI & 4.94 \\\\\nTiII & 4.96 \\\\\nCrI & 5.63 \\\\\nFeI & 7.50 \\\\\nNiI & 6.28 \\\\\nZnI & 4.61 \\\\\nYII & 2.25 \\\\\nBaII & 2.31 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Observations and Data reduction}\nOur data-set consists of UVES spectra collected in August 2001 for a project devoted to measuring radial velocities and establishing membership in the outskirts of the cluster. Data were obtained with UVES\/VLT@UT2 (Pasquini et al.\\ 2002) with a typical seeing of 0.8-1.2 arcsec. We observed isolated stars from the lower red giant branch (RGB) up to the tip of the RGB of $\\omega$ Cen, in the magnitude range $11.5<{\\rm V}<16.0$.\n\nWe used the UVES spectrograph in the RED 580 setting. The spectra have a spectral coverage of $\\sim$2000 \\AA \\ with the central wavelength at 5800 \\AA. The typical signal-to-noise ratio is ${\\rm S\/N\\sim 20-30}$. For additional details, the reader is referred to Scarpa et al. (2003).\n\nData were reduced using the UVES pipelines (Ballester et al.\\ 2000), including bias subtraction, flat-field correction, wavelength calibration, sky subtraction and spectral rectification. Stars were selected from photographic BV observations (van Leeuwen et al. 2000) coupled with infrared JHK 2MASS photometry (Skrutskie et al.\\ 2006). Targets are located at radial distances between 20 and 30 arcmin. The whole sample of stars contains both RGB and horizontal branch (HB) stars. In this paper we focus our attention only on the RGB objects, for the sake of comparison with previous studies.\n\n\\subsection{Radial velocities and membership}\nIn the present work, radial velocities were used as the membership criterion, since the cluster stars all have similar motions with respect to the observer. The radial velocities of the stars were measured using the IRAF FXCOR task, which cross-correlates the object spectrum with a template. As a template, we used a synthetic spectrum obtained through the spectral synthesis code SPECTRUM (see {\\sf http:\/\/www.phys.appstate.edu\/spectrum\/spectrum.html} for more details), using a Kurucz model atmosphere with roughly the mean atmospheric parameters of our stars: $\\rm {T_{eff}=4900}$ K, $\\rm {log(g)=2.0}$, $\\rm {v_{t}=1.3}$ km\/s, $\\rm {[Fe\/H]=-1.40}$. Each radial velocity was corrected to the heliocentric system. We then calculated a first-approximation mean velocity and the r.m.s. ($\\sigma$) of the velocity distribution. Stars whose $\\rm {v_{r}}$ deviated by more than $3\\sigma$ from the mean value were considered probable field objects and rejected, leaving us with 48 UVES spectra of probable members, whose positions in the CMD are shown in Fig.~\\ref{f1}. Radial velocities for the member stars are reported in Tab.~\\ref{t2}.
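\n\\noindent For illustration, the $3\\sigma$ membership selection described above amounts to the following (a Python\/NumPy sketch; names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef select_members(rv, nsigma=3.0):\n    # rv: heliocentric radial velocities (km/s);\n    # returns a boolean mask of probable members\n    rv = np.asarray(rv)\n    return np.abs(rv - rv.mean()) <= nsigma * rv.std()\n\\end{verbatim}\n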
\n\n\\section{Abundance analysis}\n\\subsection{Continuum determination}\n\nThe chemical abundances for all elements were obtained from the equivalent widths (EWs) of the spectral lines (see the next Section for a description of the line-list we used). An accurate measurement of EWs first requires a good determination of the continuum level. Our relatively metal-poor stars allowed us to proceed in the following way. First, for each line we selected a region of 20 \\AA \\ centered on the line itself (this value is a good compromise between having enough points, i.e. good statistics, and avoiding too large a region, where the spectrum might not be flat). Then we built the histogram of the distribution of the flux, whose peak is a rough estimate of the continuum. We refined this determination by fitting a parabolic curve to the peak and using the vertex as our continuum estimate. Finally, we inspected the continuum determination by eye and corrected it by hand if an obvious discrepancy with the spectrum was found. Then, using the continuum value obtained in this way, we fitted a Gaussian curve to each spectral line and obtained the EW by integration. We rejected lines affected by a bad continuum determination or by a non-Gaussian shape, lines whose central wavelength did not agree with that expected from our line-list, and lines that were too broad or too narrow with respect to the mean FWHM. We verified that the Gaussian shape is a good approximation for our spectral lines, so no Lorentzian correction was applied.\n\n\\begin{table*}\n\\caption{Stellar parameters. Coordinates are for the J2000.0 equinox.}\n\\label{t2}\n\\centering\n\\begin{tabular}{cccccccccccc}\n\\hline\n\\hline\nID & $\\alpha$ & $\\delta$ & B & V & J$_{\\rm 2MASS}$ & H$_{\\rm 2MASS}$ & K$_{\\rm 2MASS}$ & T$_{\\rm eff}$& log(g) & v$_{\\rm t}$ & RV$_{\\rm H}$\\\\\n\\hline\n& deg & deg. 
& & & & & & $^{0}K$ & & km\/sec & km\/sec \\\\\n\\hline \n00006 & 201.27504 & -47.15599 & 16.327 & 15.531 & 13.865 & 13.386 & 13.364 & 5277 & 2.75 & 1.23 & 222.99\\\\\n08004 & 201.07113 & -47.22082 & 15.393 & 14.508 & 12.687 & 12.110 & 12.007 & 4900 & 2.17 & 1.38 & 241.70\\\\\n10006 & 201.16314 & -47.23357 & 14.510 & 13.710 & 11.887 & 11.413 & 11.300 & 5080 & 1.93 & 1.44 & 237.18\\\\\n10009 & 201.24457 & -47.23406 & 13.807 & 12.573 & 10.331 & 9.664 & 9.520 & 4432 & 1.14 & 1.64 & 227.57\\\\\n10010 & 201.33458 & -47.23334 & 14.982 & 13.941 & 11.963 & 11.394 & 11.249 & 4758 & 1.88 & 1.45 & 220.18\\\\\n13006 & 201.13373 & -47.25880 & 16.442 & 15.665 & 14.112 & 13.615 & 13.504 & 5251 & 2.79 & 1.22 & 231.78\\\\\n14002 & 201.16243 & -47.26471 & 15.696 & 14.853 & 13.110 & 12.634 & 12.552 & 5151 & 2.42 & 1.31 & 224.00\\\\\n22007 & 201.08521 & -47.32639 & 14.799 & 13.635 & 11.843 & 11.221 & 11.077 & 4750 & 1.75 & 1.49 & 227.25\\\\\n25004 & 201.18696 & -47.34607 & 15.048 & 14.064 & 12.393 & 11.852 & 11.762 & 5034 & 2.06 & 1.41 & 230.68\\\\\n27008 & 201.16507 & -47.36326 & 15.242 & 14.220 & 12.519 & 12.046 & 11.911 & 5095 & 2.15 & 1.38 & 237.50\\\\\n28009 & 201.13729 & -47.36499 & 15.687 & 14.779 & 13.133 & 12.664 & 12.549 & 5186 & 2.41 & 1.32 & 236.00\\\\\n33006 & 201.12822 & -47.40730 & 13.062 & 11.403 & 8.924 & 8.064 & 7.929 & 4051 & 0.39 & 1.83 & 226.34\\\\ \n34008 & 201.19496 & -47.41343 & 13.803 & 12.629 & 10.510 & 9.897 & 9.749 & 4570 & 1.25 & 1.61 & 232.16\\\\\n38006 & 201.11643 & -47.44354 & 16.289 & 15.436 & 13.822 & 13.304 & 13.263 & 5202 & 2.68 & 1.25 & 222.58\\\\\n39013 & 201.16078 & -47.45089 & 13.950 & 12.755 & 10.690 & 10.097 & 9.935 & 4610 & 1.32 & 1.59 & 231.47\\\\\n42012 & 201.17440 & -47.47487 & 14.468 & 13.379 & 11.299 & 10.705 & 10.613 & 4673 & 1.61 & 1.52 & 232.58\\\\\n43002 & 201.14213 & -47.47916 & 15.313 & 14.365 & 12.597 & 12.113 & 11.956 & 5021 & 2.17 & 1.38 & 229.17\\\\\n45011 & 201.10941 & -47.49389 & 16.208 & 15.346 & 13.630 & 13.146 & 13.146 & 5229 & 2.66 & 1.25 & 224.37\\\\\n45014 & 201.15625 & -47.50013 & 15.894 & 15.066 & 13.316 & 12.803 & 12.720 & 5073 & 2.47 & 1.30 & 249.24\\\\\n46003 & 201.12943 & -47.50252 & 15.640 & 14.788 & 13.073 & 12.578 & 12.455 & 5091 & 2.37 & 1.33 & 242.00\\\\\n48009 & 201.12036 & -47.51844 & 16.504 & 15.616 & 14.125 & 13.602 & 13.537 & 5279 & 2.79 & 1.22 & 222.94\\\\\n49008 & 201.16235 & -47.52717 & 15.657 & 14.687 & 12.799 & 12.256 & 12.210 & 4925 & 2.26 & 1.36 & 238.57\\\\\n51005 & 201.09190 & -47.53945 & 16.140 & 15.292 & 13.551 & 13.005 & 12.913 & 5028 & 2.55 & 1.28 & 221.27\\\\\n57006 & 201.18559 & -47.58523 & 15.906 & 15.046 & 13.320 & 12.797 & 12.757 & 5096 & 2.48 & 1.30 & 234.33\\\\\n61009 & 201.16032 & -47.61620 & 14.488 & 13.496 & 11.533 & 10.947 & 10.890 & 4784 & 1.71 & 1.50 & 239.65\\\\\n76015 & 201.33839 & -47.73435 & 15.602 & 14.604 & 12.839 & 12.355 & 12.231 & 5026 & 2.27 & 1.35 & 241.73\\\\\n77010 & 201.23548 & -47.74124 & 14.992 & 13.641 & 11.886 & 11.269 & 11.133 & 4746 & 1.75 & 1.49 & 238.49\\\\\n78008 & 201.21908 & -47.74676 & 16.088 & 15.001 & 13.484 & 12.990 & 12.909 & 5221 & 2.52 & 1.29 & 223.14\\\\\n80017 & 201.40179 & -47.75878 & 15.250 & 14.294 & 12.481 & 11.989 & 11.896 & 5026 & 2.15 & 1.38 & 231.59\\\\\n82012 & 201.44193 & -47.77921 & 16.094 & 15.298 & 13.558 & 13.059 & 12.947 & 5099 & 2.58 & 1.27 & 232.28\\\\\n85007 & 201.19307 & -47.80062 & - & - & 14.020 & 13.489 & 13.419 & 4983 & 2.20 & 1.37 & 250.48\\\\\n85014 & 201.37723 & -47.80134 & 15.400 & 14.347 & 12.560 & 11.982 & 11.923 & 4899 & 2.11 & 1.39 & 
236.78\\\\\n85019 & 201.53965 & -47.80194 & 15.727 & 14.803 & 12.939 & 12.428 & 12.308 & 4938 & 2.31 & 1.34 & 243.81\\\\\n86007 & 201.22490 & -47.80442 & - & - & 13.024 & 12.487 & 12.387 & 4914 & 1.88 & 1.45 & 238.70\\\\\n86010 & 201.31217 & -47.80789 & 15.594 & 14.557 & 12.926 & 12.437 & 12.329 & 5115 & 2.29 & 1.35 & 238.05\\\\\n86017 & 201.56208 & -47.80760 & 16.289 & 15.452 & 13.737 & 13.319 & 13.232 & 5290 & 2.73 & 1.24 & 231.74\\\\\n87009 & 201.61710 & -47.81630 & 16.081 & 15.199 & 13.392 & 12.885 & 12.850 & 5082 & 2.54 & 1.29 & 247.67\\\\\n88023 & 201.58521 & -47.82029 & 16.415 & 15.542 & 13.774 & 13.268 & 13.154 & 5050 & 2.66 & 1.25 & 232.87\\\\\n89009 & 201.57067 & -47.83291 & 13.776 & 12.650 & 10.497 & 9.890 & 9.753 & 4568 & 1.25 & 1.61 & 242.07\\\\\n89014 & 201.66544 & -47.83110 & 14.611 & 13.607 & 11.639 & 11.055 & 10.967 & 4774 & 1.75 & 1.49 & 231.57\\\\\n90008 & 201.22516 & -47.83980 & - & - & 13.209 & 12.703 & 12.591 & 5010 & 1.95 & 1.43 & 240.42\\\\\n90019 & 201.62529 & -47.83825 & 14.462 & 13.509 & 11.537 & 11.018 & 10.911 & 4860 & 1.75 & 1.48 & 232.73\\\\\n90020 & 201.64363 & -47.83814 & 16.305 & 15.563 & 13.858 & 13.395 & 13.292 & 5219 & 2.74 & 1.23 & 240.39\\\\\n93016 & 201.65058 & -47.86211 & 15.342 & 14.479 & 12.620 & 12.107 & 12.031 & 5015 & 2.22 & 1.37 & 230.82\\\\\n94011 & 201.30980 & -47.86480 & 15.217 & 14.151 & 12.462 & 11.911 & 11.842 & 4989 & 2.07 & 1.40 & 241.74\\\\\n95015 & 201.54907 & -47.87303 & 16.122 & 15.264 & 13.475 & 12.977 & 12.884 & 5076 & 2.56 & 1.28 & 239.10\\\\\n96011 & 201.52316 & -47.88203 & 13.954 & 12.975 & 11.027 & 10.514 & 10.416 & 4894 & 1.56 & 1.53 & 229.55\\\\\n98012 & 201.35549 & -47.89600 & 14.561 & 13.623 & 12.034 & 11.552 & 11.471 & 5210 & 1.96 & 1.43 & 229.93\\\\\n\\hline \n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n\\caption{Stellar abundances}\n\\label{t3}\n\\centering\n\\begin{tabular}{lccccccccccccc}\n\\hline\n\\hline\nID & FeI & ${\\rm [FeI\/H]}$ & NaI & MgI & SiI & CaI & TiI & TiII & CrI & NiI & ZnI & YII & BaII\\\\\n\\hline \n00006 & 6.15 & -1.35 & 4.91 & 6.27 & 6.63 & 5.25 & 3.94 & 3.89 & 4.11 & 4.61 & 3.35 & 1.06 & 1.27\\\\\n08004 & 6.23 & -1.27 & 5.74 & 6.31 & 6.97 & 5.53 & 4.12 & 4.30 & 4.38 & 5.01 & 3.61 & 1.75 & 1.93\\\\\n10006 & 5.80 & -1.70 & - & - & - & 5.03 & 3.77 & 3.64 & 3.76 & - & 3.09 & - & 1.57\\\\\n10009 & 6.18 & -1.32 & 5.61 & 6.38 & - & 5.27 & 3.93 & 4.03 & 4.24 & 5.04 & 3.04 & 1.26 & 1.69\\\\\n10010 & 6.45 & -1.05 & 5.38 & 6.71 & - & 5.65 & 3.91 & 4.02 & 4.65 & 5.07 & 3.42 & 1.86 & 2.08\\\\\n13006 & 5.93 & -1.57 & - & 6.31 & - & 5.06 & 3.88 & 3.61 & 4.04 & - & 2.95 & - & 0.37\\\\\n14002 & 6.02 & -1.48 & - & - & - & 5.25 & 3.90 & 3.72 & 4.00 & 5.19 & 3.28 & 0.61 & 1.15\\\\\n22007 & 6.43 & -1.07 & 5.80 & 6.88 & 6.85 & 5.61 & 4.07 & 4.17 & 4.51 & 5.09 & 3.54 & 1.80 & 2.00\\\\\n25004 & 6.14 & -1.36 & - & - & - & 5.44 & 4.09 & 3.81 & 3.95 & - & - & - & 1.08\\\\\n27008 & 6.52 & -0.98 & - & - & 7.29 & 5.60 & 4.30 & 4.24 & 4.91 & - & - & 1.86 & 2.71\\\\\n28009 & 6.29 & -1.21 & - & - & - & 5.53 & 4.20 & 4.29 & - & 4.97 & - & - & 0.65\\\\\n33006 & 6.07 & -1.43 & - & 6.38 & - & 5.30 & 4.10 & 4.27 & 4.32 & 4.73 & - & - & 1.43\\\\ \n34008 & 6.11 & -1.39 & 5.52 & 6.24 & - & 5.29 & 3.87 & 3.99 & 4.00 & 4.83 & - & 1.27 & 1.54\\\\\n38006 & 5.97 & -1.53 & - & 6.19 & - & 5.15 & 3.79 & 3.88 & 4.16 & 4.99 & 3.19 & 0.65 & 0.61\\\\\n39013 & 6.01 & -1.49 & - & 6.51 & 6.85 & 5.30 & 3.77 & 3.71 & 4.13 & 4.80 & - & 1.34 & 1.17\\\\\n42012 & 6.10 & -1.40 & 5.05 & 6.62 & 6.66 & 5.42 & 3.93 & 4.08 & 4.22 & 4.85 & 3.37 & 1.62 & 
1.61\\\\\n43002 & 5.94 & -1.56 & 5.10 & 6.09 & - & 5.24 & 3.96 & 3.81 & 4.29 & - & - & 1.52 & 1.15\\\\\n45011 & 6.16 & -1.34 & 5.30 & 6.63 & 6.66 & 5.46 & 4.24 & 4.14 & 4.28 & 4.94 & - & 0.98 & 1.64\\\\\n45014 & 5.76 & -1.74 & - & 6.25 & - & 5.00 & 3.64 & 3.65 & 3.83 & - & - & - & 0.10\\\\\n46003 & 5.81 & -1.69 & - & 6.04 & - & 5.10 & 3.63 & 3.63 & 4.16 & 4.75 & 3.03 & 0.41 & 0.36\\\\\n48009 & 6.24 & -1.26 & 5.30 & 6.83 & - & 5.61 & - & - & 4.58 & 5.18 & 3.68 & 1.10 & 2.06\\\\\n49008 & 6.09 & -1.41 & - & 6.43 & - & 5.57 & 4.19 & 4.19 & 4.36 & 5.04 & 4.18 & 2.34 & 1.79\\\\\n51005 & 6.08 & -1.42 & - & 6.31 & - & 5.56 & 4.01 & 4.34 & 4.55 & 4.97 & 3.59 & 1.28 & 1.20\\\\\n57006 & 5.80 & -1.70 & 5.61 & - & - & 5.10 & 3.97 & 4.11 & 3.84 & - & 2.96 & 0.68 & 0.47\\\\\n61009 & 5.76 & -1.74 & - & 6.12 & 6.48 & 5.13 & 3.74 & 3.80 & 4.13 & 5.82 & - & 0.38 & 0.50\\\\\n76015 & 5.90 & -1.60 & - & - & - & 5.21 & 3.85 & 3.88 & 4.06 & - & 3.40 & 1.12 & 1.70\\\\\n77010 & 6.56 & -0.94 & 5.61 & 7.16 & 7.21 & 5.82 & 4.10 & 4.17 & 4.73 & 5.29 & 3.46 & 1.84 & 1.81\\\\\n78008 & 6.14 & -1.36 & - & - & - & 5.15 & 4.10 & 3.92 & 4.32 & 5.06 & - & 0.72 & 0.96\\\\\n80017 & 6.04 & -1.46 & - & - & - & 5.23 & 3.64 & 3.79 & 3.87 & - & - & - & 0.55\\\\\n82012 & 5.86 & -1.64 & 5.63 & - & - & 5.08 & 3.82 & 3.76 & 4.16 & - & 3.72 & 1.06 & 1.29\\\\\n85007 & 5.52 & -1.98 & - & 6.29 & - & 5.14 & 3.67 & 3.30 & 3.83 & 4.74 & - & 0.61 & 0.89\\\\\n85014 & 6.03 & -1.47 & - & - & - & 5.50 & 4.24 & 4.05 & 4.42 & - & - & 1.86 & 1.62\\\\\n85019 & 5.88 & -1.62 & - & 6.59 & - & 5.29 & 3.96 & 3.99 & 4.19 & - & 3.56 & 1.47 & 1.68\\\\\n86007 & 5.84 & -1.66 & 5.72 & 6.24 & 6.87 & 5.50 & 4.05 & 3.86 & 4.43 & 4.80 & 3.67 & 1.24 & 1.65\\\\\n86010 & 5.87 & -1.63 & - & - & - & 5.17 & - & - & - & - & - & - & 0.71\\\\\n86017 & 6.15 & -1.35 & 5.55 & - & 6.84 & 5.54 & 4.19 & 4.21 & 4.17 & 5.07 & 3.33 & 1.22 & 2.30\\\\\n87009 & 6.12 & -1.38 & - & 6.59 & - & 5.62 & 4.41 & 4.11 & 4.83 & 4.90 & 3.41 & 1.98 & 1.84\\\\\n88023 & 6.10 & -1.40 & 5.08 & - & - & 5.52 & 4.26 & 4.16 & 4.52 & 4.65 & - & 1.80 & 2.20\\\\\n89009 & 5.74 & -1.76 & - & 6.28 & - & 5.05 & 3.55 & 3.76 & 4.09 & 4.55 & - & 0.19 & 0.41\\\\\n89014 & 6.14 & -1.36 & 5.84 & - & 6.66 & 5.53 & 4.08 & 4.16 & 4.45 & 4.92 & - & 1.18 & 1.64\\\\\n90008 & 5.65 & -1.85 & 5.32 & 6.27 & - & 5.35 & 3.82 & 3.29 & 4.20 & - & 3.53 & 0.73 & 1.00\\\\\n90019 & 5.83 & -1.67 & - & - & - & 5.16 & 4.10 & 3.70 & 4.14 & - & - & 0.54 & 0.47\\\\\n90020 & 5.89 & -1.61 & - & - & - & 5.09 & 3.70 & 3.72 & - & - & - & 0.57 & 0.60\\\\\n93016 & 6.20 & -1.30 & - & 6.69 & - & 5.37 & 4.07 & 3.94 & 4.59 & - & - & 1.85 & 1.49\\\\\n94011 & 5.93 & -1.57 & - & - & - & 5.17 & 4.02 & 4.00 & 4.12 & - & - & 0.43 & 0.72\\\\\n95015 & 6.24 & -1.26 & - & 6.72 & - & 5.32 & 4.31 & 4.28 & 4.71 & 5.12 & 3.74 & 1.33 & 1.48\\\\\n96011 & 5.96 & -1.54 & 5.68 & - & - & 5.25 & 4.20 & 4.05 & 4.33 & 4.59 & - & 1.15 & 1.70\\\\\n98012 & 5.85 & -1.65 & - & - & - & 4.98 & - & - & - & - & - & - & 0.68\\\\\n\\hline \nObs. lines & 30 & & 2 & 1 & 2 & 10 & 10 & 5 & 5 & 5 & 1 & 4 & 2\\\\ \n\\hline \n\\end{tabular}\n\\end{table*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=10cm]{figure3.eps}\n\\caption{Trend of Na and $\\alpha-$element abundance ratios as a function of\n [Fe\/H]. Mean values (continuous lines) are provided for those elements which do not show a \n sizable scattering. 
See also Table~3.}\n\label{f3}\n\end{figure*}\n\n\begin{figure*}\n\centering\n\includegraphics[width=10cm]{figure4.eps}\n\caption{Trend of abundance ratios for Iron peak elements (Ni and Cr),\nZn, Y and Ba for the outer region stars.\nMean values (continuous lines) are provided when there is no sizable scatter.}\n\label{f4}\n\end{figure*}\n\n\subsection{The linelist}\n\nThe line-lists for the chemical analysis were obtained from the VALD\ndatabase (Kupka et al.\ 1999) and calibrated using the Solar-inverse technique.\nFor this purpose we used the high resolution, high S\/N Solar spectrum\nobtained at NOAO ($National~Optical~Astronomy~Observatory$, Kurucz et\nal.\ 1984). The EWs for the reference Solar spectrum were obtained in the same\nway as for the observed spectra, with the exception of the strongest lines, where\na Voigt profile integration was used. Lines affected by blends were rejected\nfrom the final line-list.\nMetal abundances were determined by the Local Thermodynamic Equilibrium (LTE)\nprogram MOOG (freely distributed by C. Sneden, University of Texas at Austin),\ncoupled with a solar model atmosphere interpolated from the Kurucz (1992) grids\nusing the canonical atmospheric parameters for the Sun: $\rm {T_{eff}=5777}$ K,\n$\rm {log(g)}=4.44$, $\rm {v_{t}=0.80}$ km\/s and $\rm {[Fe\/H]=0.00}$.\nIn the calibration procedure, we adjusted the line strength\nlog(gf) of each spectral line in order to bring the abundances obtained from\nall the lines of the same element into agreement with the mean value.\nThe chemical abundances obtained for the Sun and used in this paper as\nreference are reported in Tab.~\ref{t1}.\n\n\subsection{Atmospheric parameters}\n\nEstimates of the atmospheric parameters were derived from \nBVJHK photometry. Effective temperatures (T$_{\rm eff}$) for each \nstar were derived from the T$_{\rm eff}$-color relations\n(Alonso et al.\ 1999, Di Benedetto 1998, and Ramirez \& M\'elendez 2005). \nColors were de-reddened using the reddening from\nSchlegel et al. (1998); a value of E(B-V) = 0.134 mag was adopted.\n\nSurface gravities log(g) were obtained from the canonical equation:\n\n\begin{center}\n${\rm log(g\/g_{\odot}) = log(M\/M_{\odot}) + 4\cdot\n log(T_{eff}\/T_{\odot}) - log(L\/L_{\odot}) }$\n\end{center}\n\nFor M\/M$_{\odot}$ we adopted 0.8 M$_{\odot}$, which is the\ntypical mass of RGB stars in globular clusters.\nThe luminosity ${\rm L\/L_{\odot}}$ was obtained from the absolute\nmagnitude M$_{\rm V}$, assuming an absolute distance modulus \nof (m-M)$_{\rm 0}$=13.75 (Harris 1996), which gives an apparent distance\nmodulus of (m-M)$_{\rm V}$=14.17 using the adopted reddening. \nThe bolometric correction ($\rm {BC}$) was derived by adopting the\nBC-T$_{\rm eff}$ relation from Alonso et al.\ (1999). \\\nFinally, the microturbulence velocity ($\rm {v_{t}}$) was obtained from the\nrelation (Marino et al. 2008):\n\n\begin{center}\n${\rm v_{t}\ (km\/s) = -0.254\cdot log(g)+1.930}$\n\end{center}\n\nwhich was obtained specifically for old RGB stars, such as those in our present sample\n(e.g., for star 85019, with log(g)=2.31, this relation gives $\rm {v_{t}}=1.34$ km\/s, as listed in Tab.~\ref{t2}).\nThe adopted atmospheric parameters for each star are reported in Tab.~\ref{t2},\nin columns 9, 10, and 11.\nIn this Table, column 1 gives the ID of the star, columns 2 and 3 the\ncoordinates, columns 4 to 8 the B,V,J,H,K magnitudes, and column 12 the\nheliocentric radial velocity.\n\n\n\subsection{Chemical abundances}\n\nThe Local Thermodynamic Equilibrium (LTE) program MOOG (freely\ndistributed by C.
Sneden, University of Texas at Austin) has been used to\ndetermine the abundances from the EWs, coupled with model atmospheres\ninterpolated from the Kurucz (1992) grids for the parameters obtained as described\nin the previous Section.\nThe wide spectral range of the UVES data allowed us to derive the\nchemical abundances of several elements. The chemical abundances\nwe obtained for individual stars are listed in Tab.~\ref{t3}.\nThe last line of this table gives the mean number of lines we were able to \nmeasure for each element.\nTi is the only element for which we could measure lines of both the neutral and first\nionized species. The difference of the mean abundances obtained from the two ionization stages is:\n\begin{center}\n${\rm \Delta(TiI-TiII)=0.03\pm0.01}$\n\end{center}\nThis difference is small and compatible with zero within 3 $\sigma$. This confirms\nthat gravities obtained by the canonical equation are not affected by appreciable\nsystematic errors.\n\n\subsection{Internal errors associated with the chemical abundances}\n\nThe measured abundances of every element vary from\nstar to star as a consequence of both measurement errors and\nintrinsic star-to-star abundance variations.\nIn this section our goal is to search for evidence of intrinsic\nabundance dispersion in each element by comparing the observed\ndispersion $\sigma_{\rm {obs}}$ and that produced by internal errors\n($\Delta_{\rm {tot}}$). Clearly, this requires an accurate analysis of\nall the internal sources of measurement errors.\nWe remark here that we are interested in the star-to-star intrinsic\nabundance variation, i.e. we want to measure the internal intrinsic\nabundance spread of our sample of stars. For this reason, we\nare not interested in external sources of error, which are systematic\nand do not affect relative abundances.\\\nIt must be noted that two main sources of error contribute\nto $\Delta_{\rm {tot}}$. They are:\n\begin{itemize}\n\item the errors $\sigma_{\rm {EW}}$ due to the uncertainties in the\n EW measurements;\n\item the uncertainty $\sigma_{\rm {atm}}$ introduced by errors in the \n atmospheric parameters adopted to compute the chemical abundances.\n\end{itemize}\n\n$\sigma_{\rm {EW}}$ is given by MOOG for each element and each star.\nIn Tab.~\ref{t4} we report in the second column the average $\sigma_{\rm {EW}}$\nfor each element. For Mg and Zn we were able to measure only one line.\nFor this reason their $\sigma_{\rm {EW}}$ has been obtained as the mean of the\n$\sigma_{\rm {EW}}$ of Na and Si multiplied by $\sqrt 2$. \nNa and Si lines were selected because their strength was similar to that of the Mg\nand Zn features. This guarantees that $\sigma_{\rm {EW}}$, due to the photon\nnoise, is the same for each single line.\n\nErrors in temperature are easy to determine because, for each star, they are given by\nthe r.m.s. of the temperatures obtained from the individual colours. The mean\nerror $\Delta$T$_{\rm eff}$ turned out to be 50 K.\nThe uncertainty on gravity has been obtained from the canonical formula using the\npropagation of errors. The variables used in this formula that are affected\nby random errors are T$_{\rm eff}$ and the V magnitude. For temperature we\nused the error previously obtained, while for V we assumed an error of 0.1 mag,\nwhich is the typical random error for photographic magnitudes. Other error\nsources (distance modulus, reddening, bolometric correction) affect\ngravity in a systematic way, and so are not important to our analysis.\nThe mean error in gravity turned out to be 0.06 dex.
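Since $\rm {v_{t}}$ is obtained from the linear relation of Marino et al. (2008) quoted in Section 3.2, the corresponding uncertainty follows from simple error propagation (a worked step we add here for clarity): $\rm {\Delta v_{t} = 0.254\cdot \Delta log(g)}$.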
This implies, in turn, a mean error\nin microturbulence of 0.02 km\/s.\n \nOnce the internal errors associated with the atmospheric parameters were\ncalculated, we re-derived the abundances of two reference stars (\#00006 and\n\#42012), which roughly cover the whole temperature range of our sample, \nby using the following combinations of atmospheric parameters:\n\begin{itemize}\n\item ($\rm {T_{eff}} \pm \Delta (\rm {T_{eff}})$, $\rm {log(g)}$, $\rm {v_{t}}$)\n\item ($\rm {T_{eff}} $, $\rm {log(g)} \pm \Delta (\rm {log(g)})$, $\rm {v_{t}}$)\n\item ($\rm {T_{eff}} $, $\rm {log(g)}$, $\rm {v_{t}} \pm \Delta (\rm {v_{t}})$)\n\end{itemize}\nwhere $\rm {T_{eff}}$, $\rm {log(g)}$, $\rm {v_{t}}$ are the values\ndetermined in Section 3.2.\n\nThe differences between the abundances obtained with the original\nparameters and those obtained with the modified ones give\nthe errors in the chemical abundances due to the uncertainties in\neach atmospheric parameter. They are listed in Tab.~\ref{t4} (columns 3, 4\nand 5) and are the average values obtained from the two stars. \nBecause the parameters were not obtained independently, we cannot\nestimate the total error associated with the abundance\nmeasures by simply taking the quadratic sum of all the single errors.\nInstead we calculated an upper limit for the total error as:\n\begin{center}\n$\rm {\Delta_{tot}=\sigma_{EW}+\Sigma(\sigma_{atm})}$\n\end{center}\nlisted in column 6 of Tab.~\ref{t4} (e.g., for [NaI\/FeI], $\rm {\Delta_{tot}}=0.12+0.02+0.01+0.02=0.17$).\nColumn 7 of Tab.~\ref{t4} is the observed dispersion.\nComparing $\sigma_{\rm obs}$ with $\Delta_{\rm tot}$ (which is an upper limit) we can see\nthat for many elements (Mg, Si, Ca, Ti, Cr, Ni) we do not find any evidence of inhomogeneity\nover the whole [Fe\/H] range. Some others (Na, Zn, Y, Ba) instead show\nan intrinsic dispersion. This is confirmed also by Figs. \ref{f3} and \ref{f4}\n(see next Section).\nFinally we just mention here the problem of differential reddening. \nSome authors (Calamida et al. 2005) claim that it is of the order of 0.03 mag,\nwhile others (McDonald et al. 2009) suggest a value lower than 0.02 mag.\nHowever all those results concern the inner part of the cluster, while no information is\navailable for the region explored in this paper. \nWe can only say that an error of 0.03 mag on the reddening would alter\nthe temperature by 90 K.\n\n\begin{table*}\n\caption{Internal errors associated with the chemical abundances\ndue to errors in the EW measurement (column 2) and in the atmospheric\nparameters (columns 3, 4, and 5) for the studied elements.\nThe 6$^{th}$ column gives the total internal error, while the last\ncolumn gives the observed dispersion of the abundances. See text for\nmore details.\n}\n\n\label{t4}\n\centering\n\begin{tabular}{lcccccc}\n\hline\n\hline\nEl.
& $\\sigma_{\\rm EW}$ & $\\Delta$T$_{\\rm eff}$ & $\\Delta$log(g) & $\\Delta$v$_{\\rm t}$ & $\\Delta_{\\rm tot}$ & $\\sigma_{\\rm obs}$\\\\\n\\hline \n${\\rm [FeI\/H]}$ & 0.05 & 0.05 & 0.01 & 0.02 & 0.13 & - \\\\\n${\\rm [NaI\/FeI]}$ & 0.12 & 0.02 & 0.01 & 0.02 & 0.17 & 0.34\\\\\n${\\rm [MgI\/FeI]}$ & 0.18 & 0.02 & 0.01 & 0.02 & 0.23 & 0.18\\\\\n${\\rm [SiI\/FeI]}$ & 0.15 & 0.03 & 0.01 & 0.02 & 0.21 & 0.12\\\\\n${\\rm [CaI\/FeI]}$ & 0.09 & 0.01 & 0.00 & 0.01 & 0.11 & 0.11\\\\\n${\\rm [TiI\/FeI]}$ & 0.14 & 0.04 & 0.01 & 0.01 & 0.20 & 0.19\\\\\n${\\rm [TiII\/FeI]}$ & 0.13 & 0.04 & 0.03 & 0.01 & 0.21 & 0.17\\\\\n${\\rm [CrI\/FeI]}$ & 0.12 & 0.03 & 0.01 & 0.01 & 0.17 & 0.17\\\\\n${\\rm [NiI\/FeI]}$ & 0.13 & 0.01 & 0.01 & 0.01 & 0.16 & 0.14\\\\\n${\\rm [ZnI\/FeI]}$ & 0.19 & 0.04 & 0.03 & 0.02 & 0.28 & 0.32\\\\\n${\\rm [YII\/FeI]}$ & 0.13 & 0.03 & 0.03 & 0.01 & 0.20 & 0.42\\\\\n${\\rm [BaII\/FeI]}$ & 0.14 & 0.02 & 0.03 & 0.00 & 0.19 & 0.50\\\\\n\\hline \n\\end{tabular}\n\\end{table*}\n\n\n\n\\section{Results of abundance analysis}\nThe results of the abundance analysis are shown in Fig.~\\ref{f2} for [Fe\/H],\nand in Figs.~\\ref{f3} and \\ref{f4} for all the abundance ratios we could derive.\nA Gaussian fit was used to derive the mean metallicity of the three peaks in\nFig.~\\ref{f2}. We found the following values: -1.64 (metal poor\npopulation, {\\it MPP}), -1.37 (intermediate metallicity population, {\\it IMP}), and -1.02\n(metal rich population, {\\it MRP}). Stars belonging to each of the three populations are\nidentified with different colors in Fig.~\\ref{f1}.The population mix is in the proportion \n({\\it MPP:IMP:MRP}) = (21:23:4).\\\\\n\n\\noindent\nThe abundance ratio trends versus [Fe\/H] \nare shown in the various panels in Figs.~\\ref{f3} and \\ref{f4} for all the elements we could measure.\nWhen the abundance ratio scatter is low (lower than 0.2 dex which, according to\nthe previous Section, implies a homogeneous abundance) we also show the mean value\nof the data as a continuous line, to make the comparison with literature easier.\nWhat we find in the outer region of $\\omega$~ Cen is in basic agreement\nwith previous investigations. Comparing our trends with -e.g.- Norris \\& Da Costa (1995)\nvalues( see next Section for a more general comparison with the literature), \nwe find that all the abundance ratios we could measure are in very good\nagreement with that study, except for [Ti\/Fe], \nwhich is slightly larger in our stars, and [Ca\/Fe], \nwhich is slightly smaller in our study. However, within the measurement errors we do not\nfind any significant deviation.\\\\ \nThe $\\alpha-$elements (Mg, Ti, Si and Ca, see Fig.~\\ref{f3}) \nare systematically overabundant with respect to the Sun, \nwhile iron peak elements (Ni and Cr, see Fig.~\\ref{f4}) are basically solar.\nSimilarly, overabundant in average with respect to the Sun are Y, Ba and Zn (see Fig.~\\ref{f4}). \nY abundance ratio show some trend with [Fe\/H], but of the same sign and \ncomparable magnitude to Norris \\& Da Costa (1995).\n\nFinally, we looked for possible correlations between abundance ratios, and compare\nthe outcome from the different populations of our sample. This was possible only for [Y\/Fe] and [Zn\/Fe]\nversus [Ba\/Fe], and it is plotted in Fig.~\\ref{f5}. For {\\it MPP} (filled circles) a trend\nappears both for Zn and Y as a function of Ba (see also value of the slope ($a$) in Fig.~\\ref{f5}), with\nBa-poor stars being also Zn and Y poor. 
The Y-Ba correlation can be easily\nexplained because both are neutron-capture elements.\\\nAs for the {\it IMP}, a marginal trend in the Y vs. Ba relation is present,\nwhile no trend appears in Zn vs. Ba. No trends at all were detected for\nthe {\it MRP}, mostly because our sample of {\it MRP} stars is too small for any significant\nconclusion. We underline the fact that this different behaviour of the {\it MPP} and {\it IMP}\nwith respect to their Zn-Y-Ba correlations points to a different chemical\nenrichment history of the two populations.\n\n\section{Outer versus inner regions}\n\nA promising application of our data is the comparison of the population mix in the cluster outskirts\nwith the one routinely found in the more central regions of the cluster (Norris \& Da Costa 1995; Smith et al. 2000;\nVillanova et al. 2007; Johnson et al. 2009; Wylie-de Boer et al. 2009).\\\n\nTo this aim, we first compute the fraction of stars\nin the various metallicity ([Fe\/H]) populations, and compare it with the inner-region\nvalues from Villanova et al. (2007), chosen for the sake of homogeneity,\nto statistically test the significance of their similarity or difference. \nWe are aware that this is not much more than a mere exercise.\nFirst, while our program stars are mostly in the RGB phase, the Villanova et al. (2007)\nsample contains only SGB stars. This implies that we are comparing stars in\nslightly different evolutionary phases.\nSecond, and more important, the statistics are probably too poor. \nIn fact, we report in Table~5 (columns 2 and 3) the percentages of stars\nin the different metallicity bins derived from a Gaussian fit to our and the Villanova et al. (2007)\ndata. They have large errors. We see that within these errors the population mix is basically the\nsame in the inner and outer regions. Therefore, with so few stars we cannot\neasily detect differences between the inner and outer regions.\nTo check for this, we make use of the Kolmogorov-Smirnov test,\nand compare the metallicity distributions of the inner and outer samples, to see\nwhether they come from the same parental distribution. We found that the probability\nthat the two distributions derive from the same underlying distribution is 77$\%$.\nThis is not a conclusive number, and simply means that with these samples we can\nneither disprove nor confirm the null hypothesis (namely that the two populations have\nthe same parental distribution).\nBesides, our sample and that of Villanova et al. (2007) do not contain stars\nbelonging to the most metal-rich population of Omega Centauri (at\n[Fe\/H]$\sim$-0.6), which therefore we cannot comment on.\\\n\n\noindent\nThat clarified, we then compare in Fig.~6 and Fig.~7 the trends of the various elements we could\nmeasure (see Table~4) in the cluster outskirts with the trends found in the central regions by other\nstudies. In detail, in all Fig.~6 panels we indicate with filled circles the data\npresented in this study and with open circles data from Villanova et al. (2007). Crosses indicate\nWylie-de Boer et al. (2009), stars Norris \& Da Costa (1995), empty squares \nSmith et al. (2000) and, finally, empty pentagons Johnson et al. (2009).\nWe separate in Fig.~6 the elements which do not show significant scatter (see Table~4) from\nthe elements which do show a sizeable scatter (see Fig.~7).\nBa abundances from Villanova et al.
(2007) were corrected by $\sim$-0.3 dex,\nto take into account the hyperfine structure that seriously affects the Ba\nline at 4554 \AA.\n\n\n\n\begin{table}\n\centering\n\label{t5}\n\begin{tabular}{ccc}\n\hline\nPopulation & Inner & Outer \\\n & $\%$ & $\%$ \\\n\hline\nMPP & 46$\pm$10 & 45$\pm$10 \\\nIMP & 36$\pm$10 & 47$\pm$10 \\\nMRP & 18$\pm$10 & 8$\pm$10 \\\n\hline\n\end{tabular}\n \caption{Percentages of the different metallicity populations in the inner and outer regions\nof $\omega$~Cen.}\n \end{table}\n\n\n\begin{figure}\n\centering\n\includegraphics[width=\columnwidth]{figure5.eps}\n\caption{\nAbundance ratios of [Y\/Fe] and [Zn\/Fe] vs. [Ba\/Fe] for our sample.\nFilled circles, open circles, and crosses are MPP, IMP, and MRP stars,\nrespectively. A straight line has been fitted to the MPP stars. The value of the slope\n({\it a}) is given. In both cases {\it a} deviates by more than 3$\sigma$ from\nthe null-trend value, implying that the trends for the MPP are real.\n}\n\label{f5}\n\end{figure}\n\n\n\n\noindent\nLooking at Fig.~6, we immediately recognize two important facts.\\\nFirst, all the studies we culled from the literature for the Omega Cen central regions\nshow the same trends.\\\nSecond, and more important for the purpose of this paper,\nwe do not see any significant difference between the outer (filled circles)\nand inner (all the other symbols) regions of the cluster. Only Ti seems to be\nslightly over-abundant in the outer regions with respect to the more central ones.\n\n\n\begin{figure}\n\centering\n\includegraphics[width=\columnwidth]{figure6.eps}\n\caption{Comparison of abundance ratios in the inner and\nouter stars (filled circles). Symbols are as follows: empty circles (Villanova et al. 2007), crosses \n(Wylie-de Boer et al. 2009), stars (Norris \& Da Costa 1995), empty squares \n(Smith et al. 2000), and empty pentagons (Johnson et al. 2009).}\n\label{f6}\n\end{figure}\n\n\n\begin{figure}\n\centering\n\includegraphics[width=\columnwidth]{figure7.eps}\n\caption{Comparison of abundance ratios in the inner\nand outer stars (filled circles). Symbols are as in Fig.~6.}\n\label{f7}\n\end{figure}\n\n\begin{figure}\n\centering\n\includegraphics[width=\columnwidth]{figure8.eps}\n\caption{Comparison of our Y and Ba abundance ratios (our sample, filled\n circles) with the literature (inner sample). Symbols are as in Fig.~6.}\n\label{f8}\n\end{figure}\n\n\noindent\nAs for the more scattered elements (see Fig.~7), we notice that Na shows the opposite trend\nin the outer regions with respect to the inner ones, but this is possibly related\nto a bias induced by the signal-to-noise ratio of our data, which does not allow us to detect\nNa-poor stars in the metal poor population.\nOn the other hand, Y and Ba do not show any spatial difference. \\\n\n\noindent\nInterestingly enough, at low metallicity Ba shows a significantly scattered\ndistribution, especially for stars more metal-poor than -1.2 dex, covering a\nrange of 1.5 dex. This behaviour is shared with Y and Na, although at a lower level.\nFurthermore, looking carefully at Fig. 4, it is possible to see a hint of\nbimodality in the Ba content of the stars having [Fe\/H]$<$-1.5 dex\n(i.e. belonging to the MPP), with the presence of a group of objects having\n[Ba\/Fe]$\sim$1.0 dex, and another having [Ba\/Fe]$\sim$-0.2 dex.\nThe same trend is visible in all the literature data.\\\nWe remind the reader that such a bimodal distribution is similar to that found by \nJohnson et al.
(2009, their Fig.~8) for Al.\\\n\n\noindent\nFinally, we compare our Y vs. Ba trend with the literature in Fig. 8. Also in this\ncase the agreement is very good and no radial trend is found.\\\n\n\noindent\nThe stars studied by Wylie-de Boer et\nal. (2009) deserve special attention. They belong to the Kapteyn Group, \nbut their kinematics and chemistry suggest a likely association with $\omega$ Cen.\nThese stars, in spite of being part of a moving group, exhibit \nquite a large iron abundance spread (see Figs. 6 and 7), totally compatible with that of\n$\omega$ Cen. Also their Na and Ba abundances (see Fig. 7) are comparable with\nthose of the cluster. We suggest that the comparison with our data reinforces\nthe idea that the Kapteyn Group is likely formed by stars stripped away from $\omega$ Cen.\n\n\section{Conclusions}\nIn this study, we analyzed a sample of 48 RGB stars located half-way to the tidal\nradius of $\omega$ Cen, well beyond any previous study devoted to the detailed chemical composition of the\ndifferent cluster sub-populations.\\\nWe compared the abundance trends in the cluster outer regions with literature studies which focus\non the inner regions of $\omega$ Cen.\\\n\n\noindent\nThe results of this study can be summarized as follows:\n\n\begin{description}\n\item[$\bullet$] we could not highlight any difference between the outer and inner regions\nas far as [Fe\/H] is concerned: the same mix of different iron abundance populations is present\nin both locations;\n\item[$\bullet$] most elements appear in the same proportion both in the inner and in the outer\nzone, irrespective of the particular investigation one takes into account for the comparison;\n\item[$\bullet$] [Ba\/Fe] shows an indication of a bimodal distribution at low metallicity at any location\nin the cluster, which deserves further investigation;\n\item[$\bullet$] no indications emerge of gradients in the radial abundance trends of the elements we could\nmeasure.\n\end{description}\n\n\noindent\nOur results clearly depend on a small dataset, and more extended studies are encouraged to confirm\nor refute our findings.\n\n\n\section*{Acknowledgements}\nSandro Villanova acknowledges ESO for financial support during several visits to the Vitacura\nScience office. The authors express their gratitude to Kenneth Janes for carefully reading\nthe manuscript.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\section{Introduction}\n\nTransition Metal Dichalcogenides (TMD) have the generic formula MX$_2$, consisting of a transition metal M (Nb, Ta, Ti, Mo, W...) and a chalcogen X (S, Se, Te). They are layered materials with strong in-plane bonds and weak van der Waals inter-layer interactions, providing a pronounced two dimensional (2D) character. The individual layers consist of a triangular lattice of transition metal atoms surrounded by chalcogens, and come in two forms named 1T and 1H. In 1T layers the transition metal atoms are surrounded by six chalcogens in octahedral (O$_h$) coordination, whereas in 1H layers the six chalcogens are in trigonal prismatic (D$_{3h}$) coordination. \nThese two base layers have a wide range of possible stacking arrangements, called polytypes \cite{Wilson1975AdvPhysCDWTMDReview} (e.g. see Fig.~\ref{fig:2H3R}), which differ by the translation, rotation and ordering of the two base layers 1H and 1T. 
\nTMD polytypes are usually classified using Ramsdell's notation\cite{ramsdell1947studies}, which specifies the number of layers in the unit cell followed by a letter to indicate the lattice type and, when necessary, an additional alphanumeric character to distinguish between stacking sequences. Thus, a 1T polytype has 1 layer in a trigonal unit cell while a 2H polytype has 2 layers in a primitive hexagonal unit cell.\nThis distinction is especially important for TMD as polytypes of the same TMD compound can have dramatically different electronic properties, spanning from semiconducting to metallic or superconducting\cite{Voiry2015TMD_polytypesynthesis_review}. \n\nTMD recently attracted renewed interest because their quasi-2D nature is similar to graphene and the tunability of their electronic properties is promising for novel electronic devices\cite{Wang2012TMDreviewnature,Vogel2015MRSBull}. In the case of metallic TMD, the 2D character and strong electron-phonon coupling make them prone to electronic orderings such as Mott insulator or charge density waves (CDW) and superconductivity\cite{Wilson1975AdvPhysCDWTMDReview}. This multiplicity of possible ground states holds great technological potential. For instance, a new \emph{orbitronics} concept has been proposed in TMD such as 1T-TaS$_2$, whereby switching between the orbital configurations and melting the CDW phase using ultrashort laser pulses would yield a complete and reversible semiconductor-to-metal transition\cite{Ritschel2015Orbitronics}. \n\n\begin{figure}\n\includegraphics[width=\columnwidth]{Figure1_1T_2H_3R90dpi.png}%\n\caption{\label{fig:2H3R} \textbf{Crystallographic structures of the three known polytypes of NbS$_2$}: 1T reported only in thin film\cite{Carmalt2004_1st_1TNbS2synthesis} or monolayer form\cite{Chen20151TNbS2monolayerforH2}, and 2H and 3R found in bulk crystal form\cite{Fisher1980NbS2difficultsynthesis}. In the 1T polytype, the transition metal atom is in octahedral coordination and layers are stacked without rotation or in-plane translation. The 2H and 3R polytypes are composed of 1H single layers with the metal atom in trigonal prismatic coordination, but they differ by their stacking: rotation and no in-plane translation for 2H, in-plane translation and no rotation for 3R. The unit cell is indicated by solid red lines.}\n\end{figure}\n\nCDW are periodic modulations of the electronic density accompanied by a periodic distortion of the crystal lattice. CDW are usually caused either by a nesting vector of the Fermi surface inducing a peak in the electronic susceptibility\cite{gruner}, or by a strong k-dependent electron-phonon coupling\cite{weber}. TMD are the first 2D compounds where CDW were observed\cite{Wilson1974PRL}, and TMD appear even more prone to CDW in single layer than in bulk form. For instance, in 2H-NbSe$_2$ the CDW transition temperature increases from 33\,K in bulk to 145\,K in monolayer form\cite{Xi2015highTcdwNbSe2,CalandraPRB2009monolayerNbSe2}.\n\nHowever, among the metallic TMD, NbS$_2$ stands out as none of its polytypes have been reported to have a CDW. \n{In bulk form, only the 2H and 3R polytypes (trigonal prismatic coordination of Nb atoms) have been grown and CDWs have not been reported in either polytype\cite{FriendYoffe1987}}.\nThe trigonal prismatic coordination was found to be thermodynamically stable in bulk by DFT calculations\cite{LIU2014472}. 
The 1T polytype (octahedral coordination) has also been reported, but only in single layer\cite{Chen20151TNbS2monolayerforH2} or thin film form\cite{Carmalt2004_1st_1TNbS2synthesis}, both also without a CDW.\n\nIn the 2H polytype of NbS$_2$ that we study here, we previously showed that anharmonic effects prevent the formation of a CDW despite a strong softening of the phonon modes\cite{Leroux2012anharmonicNbS2}. Thus, 2H-NbS$_2$ is just on the verge of a CDW, and DFT calculations also hint at the proximity of density-wave instabilities\cite{Leroux2012anharmonicNbS2, Guller2015DFTsdwNbS2, HeilPRL2017}.\nThe soft phonon modes do contribute to another electronic ordering: the metal-to-superconductor transition below $T_\mathrm{c}= 6$\,K\cite{Wilson1975AdvPhysCDWTMDReview}, in which they are the dominant contributor to anisotropic two-gap superconductivity\cite{Guillamon2008STMVortexcore,KACMARCIK2010S719,pribulova2010two,Diener2011TDONbS2,Leroux2012Hc1NbS2,HeilPRL2017}.\nYet, no other electronic phase has ever been found experimentally in 2H-NbS$_2$ using either: very pure crystals ($RRR=105$)\cite{Naito1982ResistivityNbS2}, low temperature (100\,mK)\cite{Guillamon2008STMVortexcore}, or high pressure\cite{JonesMorosinPRB1972pressureNbS2}. This is in contrast with the isoelectronic TMD 2H-NbSe$_2$ and 2H-TaS\/Se$_2$, which all have a CDW\cite{Wilson1975AdvPhysCDWTMDReview,Naito1982ResistivityNbS2}.\n \nHere we find that there are faint traces of a CDW in 2H-NbS$_2$ using {diffuse x-ray scattering}. The CDW wavevectors are the same as those of the commensurate CDW in 1T-TaS$_2$ and 1T-TaSe$_2$. Such a 1T-like CDW has not been reported before for the NbS$_2$ compound. We suggest two mechanisms to explain both the symmetry and the very small amplitude of the CDW we observe. Rotational stacking faults between 2H domains could locally resemble a 1T layer, or a very dilute amount of Nb in the van der Waals interlayer space could also present an octahedral coordination.\n\n\n\n\section{Materials and methods}\n\nSingle crystals of 2H-NbS$_2$ were synthesized from an appropriate mixture of the elements that was sealed in an evacuated quartz tube. A large excess of sulfur was added to the mixture (20 \%) to act as a transporting agent and favor the formation of the 2H polytype. The tube was heated to 950$^\circ$C for 240\,h, slowly cooled down to 750$^\circ$C and subsequently quenched to room temperature. This synthesis yielded a powder containing single crystals with lateral sizes exceeding 200\,$\mu$m as shown in Ref.\cite{Diener2011TDONbS2}. Powder x-ray diffraction on several batches showed a predominance by volume of 99\% of the 2H polytype (P6$_3\/mmc$) versus 1\% of 3R (R$3m$), and the polytype of each single crystal used was checked individually using x-ray diffraction. We find lattice parameters $a = b = 3.33\,$\AA~and $c = 11.95\,$\AA. Superconducting properties and the phonon spectrum of samples from this batch were published elsewhere\cite{Guillamon2008STMVortexcore,Kacmarcik2010CpNbS2,Diener2011TDONbS2,Leroux2012Hc1NbS2,Leroux2012anharmonicNbS2} and are in agreement with the literature\cite{Onabe1978ResistivityNbS2,Naito1982ResistivityNbS2}. The typical superconducting transition temperature is $T_\mathrm{c} = 6.05 \pm 0.4$\,K, as determined by AC specific heat\cite{Kacmarcik2010CpNbS2}. \n\n{Diffuse x-ray scattering} imaging was performed at beamline ID29 at the ESRF at a wavelength of 0.6966 \AA\ (17.798 keV) and using a PILATUS 6M detector 200\,mm away from the sample.
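As a quick consistency check of the quoted beam parameters (a step we add here, using only the standard conversion $E$\,[keV]\,$= 12.398\/\lambda$\,[\AA]): $12.398\/0.6966 \approx 17.8$\,keV, in agreement with the stated photon energy.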
3600 pictures were acquired in three dimensions with 0.1$^\circ$ oscillations and 0.25\,s of exposure. Reconstruction of the $(h,k,0)$ plane was performed using the CrysAlis software. Final reconstructions were made with locally developed software and Laue symmetry applied to remove the gaps between the detector elements. Inelastic x-ray scattering was performed at beamline ID28 at the ESRF using the Si (9,9,9) monochromator reflection, giving an energy resolution of 2.6\,meV and a photon energy of 17.794 keV. Measurements were performed at 300 and 77\,K using a nitrogen cryostream cooler.\n\n\section{Results}\n\n\begin{figure}\n\includegraphics[width=\columnwidth]{Figure2_RT_77K.jpg}%\n\caption{\label{fig:RT_77K}\textbf{{Diffuse x-ray scattering} of 2H-NbS$_2$.} $(h,k,0)$ plane at 300\,K \textbf{(left)} and 77\,K \textbf{(right)} showing the hexagonal Brillouin zone, {the reciprocal space base vectors $\vec{a}^*$ and $\vec{b}^*$, the high symmetry points $\Gamma$, M and K, and the (1,0,0) and (2,0,0) Bragg peaks.} Elongated diffuse scattering is visible between Bragg peaks at 300\,K, and increases in intensity at 77\,K. It is caused by soft phonon modes and is not visible between each pair of Bragg peaks because of the longitudinal polarization of the soft phonon modes\cite{Leroux2012anharmonicNbS2}.\nAt 77\,K, three types of satellite peaks appear: a ring of twelve sharp peaks around each Bragg peak, a peak at the M point and 4 peaks around the M point. One example from each of these three sets of satellite peaks is indicated by white circles in the right panel.}\n\end{figure}\n\n\begin{figure}\n\includegraphics[width=\columnwidth]{Graph1.pdf}%\n\caption{\label{fig:IXS}\textbf{Typical IXS spectra to determine the elastic or inelastic nature of the satellite peaks.} IXS energy spectra at 77\,K around the M point ($\vec{q} = (0.5,0,0)$) show that the elastic peak at zero energy transfer (static order) is the dominant contribution to the peak observed at the M point in diffuse scattering (energy integrated intensity). But the amplitude of this elastic peak is still comparable to that of a soft phonon. An elastic peak corresponds to a static diffracting object in real space.}\n\end{figure}\n\nFig.~\ref{fig:RT_77K} shows the diffuse scattering in the $(h,k,0)$ plane reconstructed from the diffuse scattering data at 300 and 77\,K. The Bragg peak amplitudes are saturated in these images. The rocking curve of the (1,1,0) spot of a crystal from the same batch has a Full-Width at Half Maximum (FWHM) of 0.12$^\circ$ at room temperature, implying a Bragg peak FWHM of at most 0.0068\AA$^{-1}$ or 0.0036 $a^*$, i.e. an in-plane coherence length of at least 276 unit cells. \n\nAt 300\,K, some diffuse scattering can be seen spanning the different $\Gamma$M directions around each Bragg peak. This elongated diffuse scattering becomes salient at 77\,K.\nIt is caused by the broad softening of phonon modes around 1\/3 of $a^*$ (2\/3 of $\Gamma$M)\cite{Leroux2012anharmonicNbS2}. \n\nAt 77\,K, Fig.~\ref{fig:RT_77K} also shows three types of satellite peaks: a peak at the M point, four peaks around the M point, and a ring of twelve peaks around each Bragg peak.\nInelastic x-ray scattering results, presented in Fig.~\ref{fig:IXS}, show that these peaks are all of an elastic nature, i.e. reflections of a static order, but with an amplitude similar to that of the soft phonon modes.
{Comparison to the (1,1,0) Bragg peak shows that these peaks are 5 orders of magnitude less intense}. Such a low intensity indicates that these peaks correspond either to very small atomic displacements or to displacements taking place in a very small fraction of the crystal.\n\n\subsection{Satellite peak at the M point}\n\n\textit{Ab initio} calculations\cite{Leroux2012anharmonicNbS2} find that the satellite peak at the M point corresponds to a maximum in the electronic susceptibility. Considering its phonon-like amplitude, the peak at the M point could correspond to Friedel's oscillations around impurities. However the peak FWHM is 0.036\,$a^*$, as shown in the lower panel of Fig.~\ref{fig:satpeakwidth}, which corresponds to a coherence length of about 30 unit cells. It therefore seems equally likely that the peak at the M point could correspond to an extremely faint CDW with a periodicity of $2\,a$ induced by the maximum in electronic susceptibility. Such faint $2\,a$ super-lattice spots have also been reported in 2H-NbSe$_2$\cite{CHEN1984}.\n\n\begin{figure}\n\includegraphics[width=\columnwidth]{Figure3_1st2nd3rdorder_reduced.pdf}%\n\caption{\label{fig:qvector}\textbf{Two interwoven commensurate superlattices.} Diffuse scattering of 2H-NbS$_2$ in the $(h,k,0)$ plane at 77\,K.\nThe ring of twelve satellite peaks around each Bragg peak, and the four peaks around the M point, can be indexed with two wavevectors. \n\textbf{(Upper left panel)} Subset of peaks indexed by $\vec{q_1} = \frac{3}{13}\,\vec{a}^* + \frac{1}{13}\,\vec{b}^*$, with $1^{\mathrm{st}}$, $2^{\mathrm{nd}}$ and $3^{\mathrm{rd}}$ order reflections.\n\textbf{(Upper right panel)} Both subsets of peaks indexed by $\vec{q_1}$ in shades of red, and its mirror image $\vec{q_2} = \frac{4}{13}\,\vec{a}^* - \frac{1}{13}\,\vec{b}^*$ in shades of blue.\n\textbf{(Lower panel)} $\vec{q_1}$ and $\vec{q_2}$ are commensurate with the crystal lattice via $13\,\vec{q_1} = 3\,\vec{a}^* + \vec{b}^*$. The wavevectors' length is $||\vec{q_{1,2}}|| =\frac{1}{\sqrt{13}}||\vec{a}^*||$ so that each defines a $\sqrt{13}\,a\times\sqrt{13}\,a$ superlattice in real space. This is also geometrically equivalent to $3\vec{q_1}-\vec{q_1}'=\vec{a}^*$, where $\vec{q_1}'$ is $\vec{q_1}$ rotated by $+120^\circ$, which clearly appears in the upper panels.\n{Note that the upper panels show the same region as in Fig.~\ref{fig:RT_77K}, and that the lower panel is an extended view centered on this same region.}\n}\n\end{figure}\n\n\subsection{Other satellite peaks}\n\nThe ring of twelve satellite peaks around each Bragg peak, and the four peaks around the M point, can all be indexed with only two wavevectors. The upper left panel of Fig.~\ref{fig:qvector} shows the wavevector $\vec{q_1} = \frac{3}{13}\,\vec{a}^* + \frac{1}{13}\,\vec{b}^* \approx 0.231\,\vec{a}^* + 0.077\,\vec{b}^* $ and its $1^{\mathrm{st}}$, $2^{\mathrm{nd}}$ and $3^{\mathrm{rd}}$ order reflections. This wavevector corresponds to a deviation angle of $\arctan\left(\frac{\sqrt{3}}{7}\right)\approx13.9^{\circ}$ from $\vec{a}^*$. The upper right panel of Fig.~\ref{fig:qvector} shows both $\vec{q_1}$ in shades of red, and $\vec{q_2} = \frac{4}{13}\,\vec{a}^* - \frac{1}{13}\,\vec{b}^* \approx 0.308\,\vec{a}^* - 0.077\,\vec{b}^* $ in shades of blue.\n\nThese two wavevectors are mirror images of each other, with length $||\vec{q_{1,2}}|| =\frac{1}{\sqrt{13}}||\vec{a}^*||$. 
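These values can be checked directly against the hexagonal reciprocal-lattice metric (a worked verification we add here; it uses only $||\vec{a}^*||=||\vec{b}^*||$ and $\vec{a}^*\cdot\vec{b}^* = ||\vec{a}^*||^2\cos 60^{\circ}$): $||13\,\vec{q_1}||^2 = ||3\,\vec{a}^*+\vec{b}^*||^2 = (9+1+3)\,||\vec{a}^*||^2 = 13\,||\vec{a}^*||^2$, so that $||\vec{q_1}|| = ||\vec{a}^*||\/\sqrt{13}$, while the deviation angle from $\vec{a}^*$ follows from $\tan\theta = \frac{\sqrt{3}\/2}{7\/2} = \frac{\sqrt{3}}{7}$, i.e. $\theta\approx 13.9^{\circ}$. 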
They thus correspond to two commensurate $\sqrt{13}\,a\times\sqrt{13}\,a$ superlattices in real space.\nNote that the commensurate relation $13\,\vec{q_1} = 3\,\vec{a}^* + \vec{b}^*$ is geometrically equivalent to $3\vec{q_1}-\vec{q_1}'=\vec{a}^*$, where $\vec{q_1}'$ is $\vec{q_1}$ rotated by $+120^\circ$. This clearly appears in the upper panels of Fig.~\ref{fig:qvector}, where the $3^{\mathrm{rd}}$ order reflections from one Bragg peak coincide with the $1^{\mathrm{st}}$ order reflections from another. \n\nThe presence of high order reflections evidences the long range coherence associated with these peaks, or the non-sinusoidal character of the atomic displacements. \nThe long range coherence is also evidenced by the small width of the peaks, as shown in the upper panel of Fig.~\ref{fig:satpeakwidth}. The FWHM along $a^*$ of the satellite peak at (1.231, 0.077, 0) is 0.012 $a^*$, corresponding to a coherence length of $\approx 83$ unit cells. {These sharp peaks along $a^*$ correspond to rods of scattering along $c^*$ with FWHM of $\approx 0.5$\,$c^*$, i.e. about 2 unit cells along the c-axis.}\n\n\begin{figure}\n\includegraphics[width=\columnwidth]{Satpeakwidth3.pdf}%\n\caption{\label{fig:satpeakwidth}\textbf{Width of the satellite peaks.} \textbf{(Upper panel)} Cross-sections of the satellite peak at $(1,0,0)+\vec{q_1}$, which is part of the ring of 12 peaks around each Bragg peak. \textbf{(Lower panel)} Cross-sections of a satellite peak at the M point. Note that the differences between the cross-sections along and perpendicular to $a^*$ are due to the background of diffuse scattering (caused by soft phonons), which disappears rapidly perpendicular to $a^*$.}\n\end{figure}\n\nThe ring of twelve satellite peaks also has an intensity that follows the same extinction pattern as the elongated diffuse scattering, suggesting that it corresponds to a static longitudinal modulation. Indeed, the very specific angles at which the diffuse scattering is extinguished show that the underlying soft phonons (which cause the diffuse scattering) are polarized longitudinally, with in-plane niobium displacements\cite{Leroux2012anharmonicNbS2}.\nIn more detail, the scattered intensity depends on the phonon polarization via the dynamical structure factor $G(Q,m)$~\cite{Burkel_IXS}\n\begin{equation}\nG(Q,m)=\left|\sum_{j}^{\text{unit cell}} f_j(\vec{Q}).\mathrm{e}^{-W_j}\left[\vec{Q}.\vec{\epsilon_j}(\vec{Q},m) \right] \sqrt{M_j} e^{i \vec{Q}.\vec{r_j}}\right|^2\n\label{eqn:polarisation_phonon}\n\end{equation}\nwhere $f_j(\vec{Q})$ is the atomic form factor of atom $j$ at $\vec{r_j}$ with mass $M_j$; $\vec{\epsilon_j}(\vec{Q},m)$ is the unit displacement vector of atom $j$ in the $m$ phonon branch for a phonon wavevector $\vec{Q}$; and $\mathrm{e}^{-W_j}$ is the Debye-Waller factor of atom $j$.\nBecause $\vec{Q}.\vec{\epsilon_j}(\vec{Q},m)$ is zero for a phonon polarization perpendicular to $\vec{Q}$ (see Fig.11 in Ref.~\citenum{Burkel_IXS}), these extinctions indicate that the soft phonons are longitudinally polarized.\n{We cannot distinguish the respective contributions of sulfur and niobium atoms to the longitudinal soft phonon modes in our data. However, the total scattered intensity is dominated by the contribution from niobium atoms, as the mass and atomic form factor of niobium are larger than those of sulfur. This suggests that the displacements of the niobium atoms involved in the soft phonons would also be mostly longitudinal, i.e. 
in the ab plane.}\n\nThe commensurate wavevectors $\vec{q_1}$ and $\vec{q_2}$ are the same as those of the low temperature commensurate CDW in 1T-TaS$_2$\cite{ScrubyPhilMag1975} (semiconducting 1T$_3$ phase) and 1T-TaSe$_2$\cite{McMillanPRB1975,Wilson1974PRL}.\nIt is worth noting that this CDW is dominated by in-plane longitudinal displacements of Ta atoms in 1T-TaS$_2$\cite{ScrubyPhilMag1975}. Also, only one set of 6 peaks around each Bragg peak is observed in the Ta-based TMD. {But here we observe that both sets of 6 peaks are equivalently present, evidencing two sets of triple-q CDW, most likely from twinning in the crystal.}\n\nWe therefore conclude that the ring of peaks we observe in 2H-NbS$_2$ is the trace of a faint longitudinal periodic lattice distortion, appearing between 77 and 300\,K, and corresponding to two commensurate $\sqrt{13}\,a\times\sqrt{13}\,a$ CDWs identical to those found in the 1T Ta-based TMD. \nIn addition, as the commensurate CDW becomes incommensurate above 473\,K in 1T-TaSe$_2$ and 190\,K in 1T-TaS$_2$, this suggests the possibility of an incommensurate CDW in our crystal as well. As we observed no incommensurate peaks at 300\,K, this incommensurate CDW would have to occur in a temperature range between 77 and 300\,K.\n\nInterestingly, in 1T-TaSe$_2$, the triply degenerate wavevector of the high temperature incommensurate CDW becomes commensurate with the lattice by a rotation of 13.9$^{\circ}$ because it is not close enough to $1\/3\,a^*$. {Indeed, according to the Landau theory of CDW in TMD\cite{McMillanPRB1975}, this commensuration by rotation is a feature of the 1T polytype, whereas in the 2H polytype the CDW locks in with 1\/3 of $a^*$\cite{McMillanPRB1975} (2\/3 of $\Gamma$M). While we cannot preclude that the CDW we observe is a bulk phenomenon native to the 2H polytype, this would be, to our knowledge, the first $\sqrt{13}\,a\times\sqrt{13}\,a$ CDW in a 2H TMD. In addition, the very short coherence length of the CDW peaks along $c^*$ supports the picture of a CDW occurring almost independently in each layer of the crystal. Therefore, we also consider the possibility that this CDW originates from a local 1T-like environment in a 2H crystal, and we now discuss the possible origins of such an environment.}\n\n\section{Discussion}\nIn TMD, the dominance of trigonal prismatic (1H) or octahedral (1T) coordination can be classified by the transition metal atom. There are three typical cases. In the first case, as with titanium (Ti), the coordination is generally octahedral so that the dominant polytype is 1T. In the second case, such as niobium (Nb), the coordination is usually trigonal prismatic so that 2H or 3R polytypes are favored. In the third case, as with tantalum (Ta), both coordinations have similar energies, in which case various polytypes can be synthesized: 1T, 2H and mixed stackings of 1T and 1H layers such as 4Hb.\cite{FriendYoffe1987}\n\nIn NbS$_2$, the 3R polytype is the thermodynamically stable phase at room temperature\cite{Fisher1980NbS2difficultsynthesis}. The 2H polytype can also be obtained at room temperature by quenching from $\approx1000$\,K. 
As for the 1T polytype of NbS$_2$, it has never been synthesized in bulk crystal form, but it can be stabilized by strain in thin film\cite{Carmalt2004_1st_1TNbS2synthesis} or monolayer\cite{Chen20151TNbS2monolayerforH2} forms.\n\nLooking at the 2H structure in Fig.~\ref{fig:2H3R}, we emphasize that it has two possible rotational positions for each 1H layer, separated by a 60$^\circ$ rotation around the c-axis. This rotational position alternates between adjacent 1H layers, so that, once an origin is given, the rotational positions of all 1H layers are fixed in an ideal crystal. In real crystals of the 2H structure, especially if synthesized by quenching, this opens up the possibility of rotational domains, where each domain has a different origin of the rotational positions. \n\nMost interestingly, at the junction of two rotational domains, there should be two 1H layers in the same rotational position stacked one onto the other (i.e. a locally 1H polytype), where the sulfur atoms are facing each other. This locally 1H polytype seems a priori unstable because of the geometrical repulsion between sulfur atoms in adjacent layers. In fact, such stacking of sulfur atoms does not occur in any of the three known polytypes 1T, 2H, or 3R, and there is no known purely 1H polytype of NbS$_2$. Energetically, it seems much more likely that one of the sulfur layers will move such that the sulfur atoms of one layer face the center of a triangle of sulfur atoms in the other layer. This reduces the geometrical repulsion, which brings the layers closer together and increases the orbital overlaps and van der Waals interactions. There are, however, several ways to displace the sulfur layer.\n\nOne way, which does not involve changing the coordination of the Nb atoms, is for one of the 1H layers to slide by $(\frac{1}{3}, \frac{2}{3}, 0)$ or $(\frac{2}{3}, \frac{1}{3}, 0)$, yielding a locally 3R structure (which is non-centrosymmetric, hence the two possible sliding vectors). Such 3R-like stacking faults have actually been studied before in 2H-NbS$_2$. That study\cite{Katzke2002} concluded that 15\% of 3R-like stacking faults are present in powder samples of 2H-NbS$_2$ {(i.e. any two adjacent layers have a 15\% chance of having a faulty stacking)}.\nWe performed a similar analysis in our sample and found the presence of 18\% of 3R-like stacking faults.\cite{lerouxHALThesis}\n\nA second way the sulfur layer can move to reduce the geometrical repulsion at the junction between domains is a rotation by 60$^\circ$ around the c-axis. This changes the coordination of the Nb atom from trigonal prismatic to octahedral, and yields a single purely 1T layer. To some extent, this is similar to thin films and monolayers of 1T-NbS$_2$, where 1T layers are stabilized by strain at interfaces. This single 1T layer is only three-fold symmetric and can occur in two types which are mirror images of each other (or, equivalently, rotated by 60$^\circ$). The junctions between domains would yield both types equiprobably. This would naturally explain the presence of both wavevectors $\vec{q_1}$ and $\vec{q_2}$ yielding two $\sqrt{13}\,a\times\sqrt{13}\,a$ superlattices, instead of only one in pure 1T-TaS$_2$ and 1T-TaSe$_2$.\n\nTo our knowledge, this type of 1T-like stacking fault has not been studied before. In fact, considering that it involves a change of coordination of the Nb atom, a 1T-like stacking fault seems more energetic than the 3R-like stacking fault considered above. 
We can therefore expect that the 1T-like stacking fault occurs less frequently than the 3R-like ones. Yet, if the 1T-like CDW we observed with x-rays occurs only at such rare 1T-like stacking faults, this would explain why the CDW x-ray peaks are so faint.\n\nFinally, another explanation for the presence of a local 1T-like environment could be based on the presence of small clusters of extra Nb atoms intercalated in the van der Waals gap between layers. Indeed, Meerschaut and Deudon\cite{Meershault01} have reported that the 3R-NbS$_2$ phase is favored by an overdoping of Nb. This extra Nb is placed in the van der Waals gap between two layers of Nb in trigonal prismatic coordination. Locally, the Nb atom is surrounded by 6 chalcogen atoms in an octahedral coordination. Because of the Nb-Nb repulsion, this extra Nb atom is slightly shifted from the center of the octahedron \cite{Meershault01}. Thus, in our NbS$_2$ crystal, a local 1T-like environment could be associated with a small amount of extra Nb with a local octahedral coordination lying in the 3R-like stacking faults.\n\n\section{Conclusions}\nUsing {diffuse x-ray scattering} in 2H-NbS$_2$, we observed very weak superlattice peaks corresponding to two longitudinal commensurate $\sqrt{13}\,a\times\sqrt{13}\,a$ periodic lattice distortions, identical to those associated with the CDW of 1T-TaSe$_2$ and 1T-TaS$_2$. Around each Bragg peak in the $(h,k,0)$ plane, we found a series of 12 satellite peaks at $\pm$ 13.9$^{\circ}$ from $\vec{a}^*$ and $\vec{b}^*$, commensurate with the lattice through $3\vec{q_1}-\vec{q_1}'=\vec{a}^*$ or, equivalently, $13\,\vec{q_1} = 3\,\vec{a}^* + \vec{b}^*$. Inelastic x-ray scattering (IXS) measurements confirmed the predominantly elastic nature of these satellite peaks, but their amplitude is almost as faint as that of the soft phonons.\nTo our knowledge, no CDW has been reported in any polytype of NbS$_2$.\nWe suggest that rotational disorder in the stacking of 1H layers induces 3R-like stacking faults and, less frequently, single 1T layers at the interface between 2H rotational domains. Such rare and dilute 1T layers might be the support of this faint 1T-like CDW. A very dilute amount of Nb in the van der Waals interlayer space of 3R-like stacking faults could also present a 1T-like octahedral coordination.\n\section{Acknowledgments}\nM.L. acknowledges S. Eley for fruitful discussions.\nThis work was supported by the Neel Institute CNRS - University of Grenoble. We acknowledge the European Synchrotron Radiation Facility for provision of synchrotron radiation facilities. The experiments were performed on beamlines ID28 and ID29.\n\n\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\section{Introduction}\nIn robotics, exploration is the process through which a robot seeks to increase the information about its environment in order to build a model of its surroundings.\nExploration algorithms are applied when one needs to know the state of a location that people cannot access, or whose entry involves a latent or manifest risk. 
\nFor example, in collapsed structures, or structures at risk of collapse, during the search for survivors.\nIn these circumstances the most widely used robots have small dimensions, given their ability to pass through small openings.\nDue to these geometric constraints, the computing capacity of these robots is also limited.\n\nThe models generated by these robots are useful for navigating these environments, where navigation is understood as the task of taking the robot from one point to another safely while avoiding obstacles.\nThe models can be metric, where the aim is to represent or reproduce the geometry of the environment; or topological, where the spatial relation between different environments is described.\n\n\nRegarding metric models, for years one of the most popular ways of representing the environment was the occupancy grid \cite{moravec_1985}.\nThis method consists in representing the two-dimensional environment with a plane divided into cells forming a grid.\nThe grid cells are of regular size and each one contains a value representing its probability of being occupied, based on probabilistic models of the sensors involved.\nDerivations of this work using three dimensions, with cubes called \textit{voxels}, were presented in works such as \cite{moravec1996} and \cite{dryanovski2010}, and proved to be inefficient in terms of the memory used to store the map, due to its large size.\n\n\begin{figure}[tp!]\n\centering\n \includegraphics[width=0.45\textwidth]{images\/jetson_nano.png}\n \caption{Setup used for the tests in a real scenario}\n \label{fig:pruebas_real}\n\end{figure}\n\n\IEEEpubidadjcol\n\nMore efficiently, the Octomap \textit{framework} \cite{hornung2013} uses a tree structure with eight nodes, where each node starts from a voxel that is successively divided into eight others until the desired resolution is reached.\nThis structure is called an \textit{octree} and was first proposed in \cite{doctor1981}.\nThe value contained in the voxel can be binary, a probability value that can be based on different criteria, or a variant with a bound applied to a probability density.\nAlthough Octomap makes better use of memory, accessing each element has a higher computational cost than in the occupancy grid.\n\nTo improve the access to each element, \cite{museth2013vdb} proposes a variant of B+ trees, generally used in file systems, called VDB after \textit{Volume Dynamic B+tree}.\nWith this topology, a \textit{virtually infinite} three-dimensional index space can be modeled, allowing fast access to sparse information.\nAdditionally, the VDB implementation imposes no topology restrictions on the input data and supports fast random access, insertion and deletion patterns, on average $\mathcal{O}(1)$.\nThe implementation of this data structure is known as OpenVDB (OVDB) and is presented in \cite{museth2019}.\n\nExploiting the characteristics of OVDB, \cite{stvl2020steve} presents a library called STVL, for \textit{Spatio-Temporal Voxel Layer}, which implements a series of \textit{buffers} that store point clouds coming from depth cameras or other
sources capable of generating this type of data.\nThese point clouds are encoded using OpenVDB, making it possible to build three-dimensional maps efficiently.\nDue to the resolution and number of sensors, the clouds are usually composed of millions of points, so it is necessary to perform a \textit{compression} by decimating them according to some criterion.\nIn \cite{stvl2020steve} this is done by means of a \textit{voxel filter}, available in the \textit{Point Cloud Library} \cite{Rusu_ICRA2011_PCL}, which runs on the CPU.\n\nThis work presents an implementation of STVL that can be executed on the GPU of the NVIDIA Jetson Nano development platform, using the setup of Fig.~\ref{fig:pruebas_real}.\nThis platform was chosen for its small size and high energy efficiency, which makes it possible to use it in small robots, whether aerial or terrestrial.\nAdditionally, this platform has the CPU and the GPU on the same chip, so they physically share the memory.\nThis avoids redundant copies of memory blocks, for example an image, between the CPU and the GPU.\nThis approach is known by NVIDIA as \textit{Zero-Copy}.\nSpecifically, a variant of \cite{bkedkowski2013general} implemented in CUDA is presented to perform the compression of the points.\nThis compression, or filtering, is performed by taking advantage of the \textit{Zero-Copy} access available on the Jetson Nano platform.\nTo the best of the authors' knowledge, there is no other implementation that can be executed on this platform.\n\nThe paper is organized as follows. Section~II describes the \textit{software} tools and the \textit{hardware} platforms used in this work, as well as the proposed modification of the algorithm so that it can be used on the Jetson Nano platform. The scenarios where the implementation was tested are also described. Section~III shows the results obtained on the different platforms and scenarios, with three voxel sizes. 
The conclusions of this work and future lines of research are listed in Section~IV.\n\n\section{Materials and methods}\n\nTo evaluate the proposed algorithm, tests were carried out in a simulated environment and in a real scenario.\nBoth scenarios have similar characteristics: static and structured.\nIn addition, the computation time of the original library and of the proposed one was compared on different platforms.\n\n\subsection{Software tools}\nThe implementation was built on the ROS framework (Robot Operating System) \cite{quigley2009ROS}, using the version code-named Melodic Morenia.\nThis allows a fast integration of the method under evaluation with existing and widely used robotics algorithms.\nThe proposed improvement is a variant of the PCL point-filtering method~\cite{pcl} used by the STVL library available at \cite{stvl}.\nThe simulation stage was carried out with the Gazebo environment \cite{koenig2004design}, version 9, which can be downloaded from \cite{gazebo} and integrated with ROS.\n\n\subsection{Hardware platforms}\nThe platform chosen for the tests was the NVIDIA Jetson Nano.\nIt is a board of small dimensions, only 70 mm long and 45 mm wide, which makes it possible to use it in small robots.\nIt features a Maxwell-architecture GPU with 128 CUDA cores, able to run up to 2048 threads, and an ARM quad-core Cortex-A57 as its main processor.\nThe board has 4 GB of 64-bit LPDDR4 memory with a maximum theoretical bandwidth of 25.6 GB\/s.\nWith all this \textit{hardware} it can reach 472 GFLOPS of FP16 compute performance, with a power consumption of 5-10 W.\nAs mentioned before, the CPU and the GPU are in the same package and physically share the system memory, supporting a limited form of \textit{unified memory}.\nThis architecture allows CPU-GPU communication patterns that are not possible with discrete GPUs.\nIn this way, copying each point is avoided and it is only necessary to pass its memory address to the CUDA functions, which translates into a reduction of the time and memory used.\nIt is worth noting that this implementation is in itself an improvement over the original algorithm presented in \cite{bkedkowski2013general}, since that algorithm was originally designed for the NVIDIA Fermi and Kepler GPU architectures, which belong to previous generations and do not provide unified memory.\n\nThe algorithm was also evaluated on a desktop PC with a Ryzen 7 1700 processor and an NVIDIA GTX 1660 Super GPU.\nIt has 6 GB of GDDR6 video memory, with a maximum theoretical bandwidth of up to 336 GB\/s according to the manufacturer.\nThis card belongs to the Turing microarchitecture family, which provides unified memory technology.\nAs a counterpart, the memory is separate from the CPU, so the information has to be copied through the PCI Express bus.\n\nTo generate the point cloud in the real scenario, the Intel RealSense Depth Camera D435 RGB-D camera was used.\nThis camera consists of a stereo pair capable of determining the distance to the sensor of the points 
It also has an infrared projector to improve the 3D image on walls that lack salient features.

\subsection{Proposed algorithm}
In the algorithm presented in \cite{stvl2020steve}, the point clouds coming from the RGB-D cameras are fed into observation buffers, which store the position of each measurement relative to a global coordinate frame.
These measurements can then be used for two purposes: marking operations, in which a cell is designated as occupied, or clearing operations, in which a cell is marked as free.

Due to the parallel implementation of the program, there are extra buffers dedicated to each of these operations.
Since the received point clouds are expressed in the camera reference frame, they have to be transformed to the global coordinate frame.
This transformation from one reference frame to another is performed with the \texttt{tf2} library \cite{foote2013}, which handles the transformations between the multiple reference frames present on the robot over time.

By analyzing the time spent in each of the aforementioned operations of the program, the parts of the algorithm with the largest delays were identified.
As can be seen in Fig.~\ref{fig:tiempo_algoritmos}, the most computationally expensive operations are the filtering of the point cloud, called \texttt{filter}, and the marking/clearing operation, called \texttt{ClearFrustums}.
The latter comprises the access to the cells of the global map, using the methods provided by the OpenVDB library.

These point clouds can be very dense, on the order of millions of points, and since they are used to mark cells of the global map, they need to be compressed.
In \cite{stvl2020steve} this is done with a \textit{voxel} filter, available in the PCL library.
This filter receives a point cloud as input, which has to be in the global reference frame so that the height of the point cloud can be limited.
This limit along the $\mathbf{z}$ axis allows the ROS \texttt{navigation\_stack} \cite{tbd} to project the point cloud onto the horizontal plane and generate an occupancy map, which is used for robot navigation.
The filter available in the PCL library is computed on the CPU and, to the best of the authors' knowledge, there is no other implementation that can run on this platform.

On the other hand, CUDA algorithms for point cloud filtering are normally implemented on desktop GPUs with large memory capacity.
This makes them unattractive for embedded platforms, whose memory capacity is limited and generally shared with the CPU.

\begin{figure}[bt]
\centerline{\includegraphics[width=0.5\textwidth]{images/tiempos_funciones_pcl.png}}
\caption{Processing times of the most computationally demanding parts of the algorithm.}
\label{fig:tiempo_algoritmos}
\end{figure}
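To make the decimation performed by the PCL-style voxel filter mentioned above concrete, the following minimal NumPy sketch (ours, not the PCL implementation; the leaf size and the synthetic cloud are illustrative) keeps one centroid per occupied voxel:

\begin{verbatim}
import numpy as np

def voxel_downsample(points, leaf=0.05):
    """Keep one centroid per cubic voxel of side `leaf` (meters)."""
    keys = np.floor(points / leaf).astype(np.int64)
    # Group points by voxel; `inverse` maps each point to its voxel.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True,
                                   return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)   # sum points per voxel
    return centroids / counts[:, None]      # average -> centroid

cloud = np.random.rand(1_000_000, 3) * 10.0  # synthetic 1M-point cloud
print(voxel_downsample(cloud, leaf=0.05).shape)
\end{verbatim}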
A modified version of \textit{Cubic Subspaces - Neighboring Buckets} \cite{bkedkowski2013general} was implemented in CUDA, using the \textit{Zero-Copy} mechanism available on platforms such as the Jetson Nano.
The main idea is to use the GPU to decompose the 3D space into a regular grid of $2^n \times 2^n \times 2^n$ cubes $(n = 4,5,6,7,8,9)$. Therefore, for each point, only 27 cubes $(3^3)$ are considered when searching for the nearest neighbors.
To compute the distance between two points $p_1=(x_1,y_1,z_1)$ and $p_2=(x_2,y_2,z_2)$, the Euclidean distance is used, defined as: %
{
\setlength{\abovedisplayskip}{1pt}
\setlength{\belowdisplayskip}{15pt}
\begin{equation}
d(p_1,p_2) = \Big[ (x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2\Big]^{1/2}
\end{equation}%
}
with $p_1, p_2 \in \mathbb{R}^3$ and $d(p_1,p_2) \in \mathbb{R}_{\geq 0}$.
Each point of the three-dimensional $XYZ$ space is normalized so that $\left\{x,y,z\in \mathbb{R}: -1 \leq x,y,z \leq 1\right\}$.
The points are then classified with a decision tree to determine which subdivision of the $2^n \times 2^n \times 2^n$ space they belong to.

The number of points belonging to each subdivision is determined through a sorted table (\texttt{tabla\_subdiv}) of ``key--value'' pairs, using the well-known ``Radix Sort'' algorithm.
It is important to note that this table is stored in the global memory of the GPU, so all CUDA threads can access the data.
The key--value pairs, together with the information about the number of points in each subdivision, are accessible through the GPU memory and are used by the algorithm to search for the nearest neighbors.
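The bucketing just described can be mirrored serially as follows. This NumPy sketch (ours; the function names are illustrative) normalizes the points, packs the $2^n \times 2^n \times 2^n$ cell coordinates into key--value pairs, sorts them (a CPU stand-in for the GPU radix sort), and exposes the population of the 27 cells $(3^3)$ around any given cell:

\begin{verbatim}
import numpy as np

def build_subdiv_table(points, n=6):
    """Assign each point to a cell of a 2^n x 2^n x 2^n grid and
    return (cell population table, per-point cells, grid side)."""
    side = 2 ** n
    p = points / np.abs(points).max()        # normalize to [-1, 1]
    cells = np.clip(((p + 1.0) * 0.5 * side).astype(np.int64),
                    0, side - 1)
    # Pack (cx, cy, cz) into one integer, the "key" of the pair.
    keys = (cells[:, 0] * side + cells[:, 1]) * side + cells[:, 2]
    keys = np.sort(keys, kind="stable")      # stand-in for radix sort
    uniq, counts = np.unique(keys, return_counts=True)
    return dict(zip(uniq.tolist(), counts.tolist())), cells, side

def neighborhood_count(table, cell, side):
    """Count the points in the 27 cells around `cell`."""
    total = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                c = cell + np.array([dx, dy, dz])
                if ((c >= 0) & (c < side)).all():
                    total += table.get(int((c[0] * side + c[1]) * side
                                           + c[2]), 0)
    return total
\end{verbatim}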
\begin{figure*}[ht]
 \includegraphics[width=\textwidth,height=4cm]{images/superposition.png}
 \caption{Sequence of images of the environment used for the tests. Left: point cloud generated by the RGB-D camera. Center: 3D occupancy grid generated by the STVL algorithm. Right: point cloud and map superimposed.}
 \label{fig:grilla_3d_generada}
\end{figure*}

\begin{algorithm}[ht]
\caption{3D point filtering}
\label{alg:filtrado}
\begin{algorithmic}[1]
\REQUIRE Pointer to the camera point cloud
\ENSURE Pointer to the filtered point cloud

\STATE copy the pointer from host to device
\STATE call the CUDA function
\FOR {all points $m^i_{xyz}$ in parallel}
\STATE find $subdiv_{m^i}$
\STATE update \texttt{tabla\_subdiv}
\ENDFOR
\STATE sort \texttt{tabla\_subdiv} in parallel $\{$radix sort$\}$

\WHILE {the number of marked points $>$ 1000 $\{$one CUDA kernel per point $m_{xyz}$$\}$}
\FOR {all points $m^i_{xyz}$ in parallel}
\STATE find $subdiv$
\FOR {all neighboring $subdiv$}
\STATE count the number of neighbors $\{$taking into account the points marked for deletion$\}$
\ENDFOR $\{$one CUDA kernel per point$\}$
\STATE mark $m^i_{xyz}$ for deletion if \texttt{cont} $>$ \texttt{umbral} (threshold)
\ENDFOR $\{$one CUDA kernel for all points$\}$
\IF {the number of marked points $>$ 1000}
\STATE randomly choose 1000 marked points and delete them permanently
\ENDIF
\ENDWHILE

\STATE synchronize the kernel calls
\STATE delete the marked points
\STATE copy the pointer from device to host
\end{algorithmic}
\end{algorithm}

The goal of the filtering is to remove points, in order to reduce the density of the point cloud and, at the same time, remove the measurement noise coming from the depth camera.
Once the nearest neighbors have been computed by the method described above, the subdivisions of the space are iteratively reduced until the desired point density is obtained.
This process is described by Algorithm~\ref{alg:filtrado}.

The point cloud produced by the filter is a compressed version of the input cloud, and its density depends on the block size chosen for the filter.
This dimension can be configured at the start of the algorithm, and it is also used by the OpenVDB structure to store the points in the global map.

Due to the constraints imposed by obstacle avoidance, it is a priority to guarantee complete processing of the point cloud in the shortest possible time.

To ensure full utilization of the GPU, the program determines the maximum number of available CUDA processors in order to distribute each \texttt{subdiv} of the space.
This can be seen at step 11 of Algorithm~\ref{alg:filtrado}.
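For reference, the mark-and-delete loop of Algorithm~\ref{alg:filtrado} can be prototyped as follows. This sketch (ours) reuses the two helper functions of the previous listing and is a serial stand-in for the CUDA kernels; the density threshold is illustrative, while the batch size of 1000 follows the pseudocode:

\begin{verbatim}
import numpy as np

def iterative_filter(points, n=6, threshold=20, batch=1000):
    """Mark points whose 27-cell neighborhood holds more than
    `threshold` points and delete them in batches of at most
    `batch`, as in the while-loop of Algorithm 1."""
    pts = points.copy()
    while True:
        table, cells, side = build_subdiv_table(pts, n)
        dens = np.array([neighborhood_count(table, c, side)
                         for c in cells])
        marked = np.flatnonzero(dens > threshold)
        if marked.size == 0:
            break
        if marked.size > batch:   # at most `batch` deletions per pass
            marked = np.random.choice(marked, batch, replace=False)
            pts = np.delete(pts, marked, axis=0)
        else:
            pts = np.delete(pts, marked, axis=0)
            break
    return pts
\end{verbatim}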
Experiments were carried out in simulated and real environments to validate the filtering method. A brief description of the system used is presented next.

\begin{figure}[b]
\centering
 \includegraphics[width=0.45\textwidth]{images/gazebo_stvl.png}
 \caption{Robot in the simulation environment. Left: 3D map obtained by the algorithm. Right: office corridor with obstacles on its sides.}
 \label{fig:prueba_simulada}
\end{figure}

\subsection{Simulated scenario}
To evaluate the proposed algorithm, the Gazebo simulator \cite{koenig2004design}, version 9, was used in a first stage.
This simulator was chosen because it is compatible with the message protocol of the ROS Melodic release used in this work, and it supports the sensors required by the algorithm.

The robot is of the differential-drive type, with an RGB-D camera mounted on its top.
It provides odometry, so the generated map can be extended beyond the range of the depth camera.
Fig.~\ref{fig:prueba_simulada} shows the map generated by the algorithm.
The scene recreates an office environment with furniture arranged on both sides.
It is worth noting that, being a simulated sensor, it does not exhibit the artifacts normally found in RGB-D cameras.

\subsection{Real scenario}
For a qualitative analysis, tests were carried out in a real scenario in which images of a corridor with different obstacles and doors were captured.
The configuration shown in Fig.~\ref{fig:pruebas_real} was used.
It consists of the RGB-D camera with its optical axis aligned with the direction of travel along the corridor.
The camera was mounted on an aluminum profile, and the Jetson Nano development platform was placed below it.

\section{Results}

As explained earlier, most of the computation of the algorithm is concentrated in two main parts: the filtering and the access to the global point cloud through the functions provided by OVDB.
Fig.~\ref{fig:pcl_gpu} shows, for the implemented algorithm, a per-function analysis comparing the original version with the GPU version running on the Jetson Nano.
\begin{figure}[b]
\centerline{\includegraphics[width=0.5\textwidth]{images/tiempos_funciones_pcl_gpu.png}}
\caption{Comparison of the most computationally demanding functions of the algorithm, with the filtering done in CUDA and in PCL. Note how the functions \texttt{clear\_frustum} and \texttt{mark}, which are implemented on the CPU, now run in less time, since the CPU has more resources available.}
\label{fig:pcl_gpu}
\end{figure}

Fig.~\ref{fig:tiempo_filtrado} shows the times taken by the point cloud filtering.
As can be seen, the CUDA version is faster than the original version that uses the PCL filtering.
This was to be expected, since the filtering can be decomposed in a way that exposes the parallelism of the algorithm.
Such massively parallel tasks are where GPUs obtain large advantages over the serialized version.

\begin{figure}[ht!]
\centerline{\includegraphics[width=0.5\textwidth]{images/tiempos_filtrado.png}}
\caption{Comparison between the original PCL filtering algorithm and the proposed CUDA version, for different resolutions of the global map.}
\label{fig:tiempo_filtrado}
\end{figure}

On the other hand, moving the point cloud processing to the GPU means that the CPU now has more resources available, leaving room to run other algorithms simultaneously.
This can be seen in Fig.~\ref{fig:tiempo_operador_ovdb} as a reduction in the point cloud processing time of the OVDB algorithm, which runs on the CPU.
For the same chosen grid size, the algorithm runs in less time in the version with the filter parallelized in CUDA.

This can also be seen in Fig.~\ref{fig:pcl_gpu}, in which the functions \texttt{clear\_frustum} and \texttt{mark}, which are implemented on the CPU, now run in less time, even though no improvements were made to those functions.
It is important to note that the improvement concerns the filtering of the point cloud; the insertion and deletion of points in the global map through the OVDB functions was not modified.

\begin{figure}[b]
\centerline{\includegraphics[width=0.5\textwidth]{images/tiempos_operador_ovdb.png}}
\caption{Comparison of the time taken by the CPU operations of the OpenVDB library on the global map, for different grid sizes.}
\label{fig:tiempo_operador_ovdb}
\end{figure}

Fig.~\ref{fig:grilla_3d_generada} shows the 3D occupancy map generated by the algorithm.
Note also, at the bottom, the absence of voxels (points in the 3D map), since this is a condition required by the \texttt{navigation\_stack} to project onto the plane and generate the occupancy map.
The generated map can be used by planning algorithms to move through the environment while avoiding obstacles and, in parallel, to generate a map with the odometry information.
This scenario corresponds to a robot navigating a corridor in which obstacles block its path.
In this case the chosen resolution was 5 cm, which not only represented the obstacles effectively (the cardboard boxes in the image) but also captured details of the environment.
For example, in the upper-right corner, we can see how the fire extinguisher is captured by the 3D map.
In the case of the door on the left of the scene, the discontinuity can be distinguished by a change in the color of the 3D grid.
It is worth noting that, due to noise in the depth camera, some voxels of the 3D map are occluded by the point cloud in the third image of Fig.~\ref{fig:grilla_3d_generada}.

The filtering algorithm implemented in CUDA does not have a heavy arithmetic load, since most of its operations are memory movements.
As mentioned earlier, the platform has an LPDDR4 memory with four 16-bit channels, capable of reaching a theoretical peak bandwidth of 25.6 GB/s; however, in our tests the speed did not exceed 16 GB/s for copies within the GPU itself and 10 GB/s for CPU--GPU copies.
The tests were carried out with the examples suggested by the manufacturer \cite{cudaSpeed} to evaluate the bandwidth.
It is worth noting that the lower CPU--GPU transfer rates are due to not using pinned memory (\texttt{pinned\_memory}), which prevents the operating system from moving it or swapping it to disk.
Pinned memory provides a higher transfer speed for the GPU and enables asynchronous copies.

As can be observed in Fig.~\ref{fig:mejora_velocidad}, a notable improvement in the point cloud filtering time is obtained.
The filtering time is reduced by an order of magnitude when the algorithm runs on the desktop computer, as indicated in Table~\ref{tab:mejora_velocidad}.
This is also due to the change of microarchitecture: in Turing the processors were redesigned to unify shared memory, texture caching, and memory load caching into a single unit.
This translates into twice the bandwidth and more than twice the capacity available for the L1 cache \cite{cudaTuring}, compared with the pre-Turing architecture.
Moreover, the video card has 22 SMs (Streaming Multiprocessors) with 64 CUDA cores each, which improves the performance of the algorithm, since each subdivision of the space (\texttt{subdiv}) is processed independently of the rest.
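As a rough back-of-the-envelope check (ours, using only figures reported here): the desktop GPU offers $22 \times 64 = 1408$ CUDA cores against the 128 of the Jetson Nano, i.e., about $11\times$ more, and a theoretical bandwidth ratio of $336/25.6 \approx 13$, whereas the mean filtering times in Table~\ref{tab:mejora_velocidad} differ only by a factor of $23.2/3.9 \approx 6$; the smaller observed gain is consistent with part of the advantage being offset by the PCI Express copies that the discrete GPU requires, as noted next.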
It should be clarified that in this last case the algorithm uses unified memory, but the \textit{Zero-Copy} concept available on the Jetson platform cannot be used.

\begin{table}[htbp]
\caption{Statistics obtained for the filtering function on the Jetson Nano platform and on a discrete GPU, for a voxel size of 0.02 m}
\label{tab:mejora_velocidad}
\centering
\begin{tabular}{l|c|c}
Platform & Mean [ms] & Standard deviation\\
\hline
pcl-jetson & $46.88$ & $1.60\cdot 10^{-3}$\\
cuda-jetson & $23.2$ & $1.38\cdot 10^{-3}$\\
cuda-desktop & $3.9$ & $1.76\cdot 10^{-3}$\\
\end{tabular}
\end{table}

\begin{figure}[htbp]
\centerline{\includegraphics[width=0.5\textwidth]{images/compara_jetson_desktop.png}}
\caption{Comparison of the filtering time for three configurations with a resolution of 0.02 m: left, the original filtering algorithm running on the Jetson CPU; center, the proposed algorithm running on the Jetson GPU; right, the proposed algorithm running on the desktop GPU.}
\label{fig:mejora_velocidad}
\end{figure}

\balance

\section{Conclusions}
This work presented an improvement to the point cloud filtering of the STVL program.
Tests were carried out both in simulation and in real life, and the advantages of processing the point cloud on the GPU embedded in platforms such as the NVIDIA Jetson were analyzed.
A speed comparison with a discrete video card in a desktop PC was also performed, which confirmed that the higher memory speed reduces the processing time of the algorithm.
During the development of the algorithm, it became clear that it is very important to develop against the same GPU architecture as the target platform.
A mismatch can lead to conceptual and algorithmic errors, since some functions may not be available.
In the case of the Jetson Nano, it provides a limited version of the unified memory functions.
As future work, we plan to improve the presented algorithm on the Jetson TX2 platform, which has a Pascal microarchitecture and better unified memory support, and with which we expect to improve performance even further.

We are also working on replacing the OpenVDB functions (executed on the CPU) with those of the GVDB library provided by NVIDIA \cite{gvdb}.
Because it uses functions implemented from the Pascal microarchitecture onward, the Jetson TX2 platform is planned for this work.

\bibliographystyle{IEEEtran}

\section{Introduction}
\label{intro}

In recent years, reinforcement learning~\cite{BertsekasTsitsiklis1996,KaelblingLittmanMoore1996,sutton1998reinforcement,Szepesvari2009} has emerged as a leading framework to learn how to act optimally in unknown environments. Policy gradient methods~\cite{Sutton2000,Kakade2002,KondaTsitsiklis2003,Schulman2015,Schulman2017,SilverSchrittwieserSimonyanEtAl2017} have played a prominent role in the success of reinforcement learning. Such methods have two critical components: policy evaluation and policy improvement.
In the policy evaluation step, the performance of a parameterized policy is evaluated, while in the policy improvement step, the policy parameters are updated using stochastic gradient ascent.

Policy gradient methods may be broadly classified as Monte Carlo methods and temporal difference methods. In Monte Carlo methods, the performance of a policy is estimated using the discounted return of a single sample path; in temporal difference methods, the value(-action) function is guessed and this guess is iteratively improved using temporal differences. Monte Carlo methods are attractive because they have zero bias, are simple and easy to implement, and work for both discounted and average reward setups as well as for models with continuous state and action spaces. However, they suffer from various drawbacks. First, they have a high variance because a single sample path is used to estimate performance. Second, they are not asymptotically optimal for infinite horizon models because it is effectively assumed that the model is episodic; in infinite horizon models, the trajectory is arbitrarily truncated to treat the model as an episodic model. Third, the policy improvement step cannot be carried out in tandem with policy evaluation. One must wait until the end of the episode to estimate the performance, and only then can the policy parameters be updated. It is for these reasons that Monte Carlo methods are largely ignored in the literature on policy gradient methods, which almost exclusively focuses on temporal difference methods such as actor-critic with eligibility traces~\cite{sutton1998reinforcement}.

In this paper, we propose a Monte Carlo method---which we call \emph{Renewal Monte Carlo} (RMC)---for infinite horizon Markov decision processes with a designated start state. Like Monte Carlo, RMC has low bias, is simple and easy to implement, and works for models with continuous state and action spaces. At the same time, it does not suffer from the drawbacks of typical Monte Carlo methods. RMC is a low-variance online algorithm that works for infinite horizon discounted and average reward setups. One does not have to wait until the end of the episode to carry out the policy improvement step; it can be carried out whenever the system visits the start state (or a neighborhood of it).

Although renewal theory is commonly used to estimate the performance of stochastic systems in the simulation optimization community~\cite{Glynn1986,Glynn1990}, those methods assume that the probability law of the primitive random variables and its weak derivative are known, which is not the case in reinforcement learning. Renewal theory is also commonly used in the engineering literature on queuing theory and systems and control for Markov decision processes (MDPs) with average reward criteria and a known system model. There is some prior work on using renewal theory for reinforcement learning~\cite{MarbachTsitsiklis2001,MarbachTsitsiklis2003}, where renewal theory based estimators for the average return and differential value function of average reward MDPs are developed. In RMC, renewal theory is used in a different manner for discounted reward MDPs (and the results generalize to average cost MDPs).

\section{RMC Algorithm} \label{sec:rl}

Consider a Markov decision process (MDP) with state $\State_t \in \mathcal{\State}$ and action $\Action_t \in \ACTION$.
The system starts in an initial state $\state_0 \in \mathcal{\State}$ and at time $t$:
\begin{enumerate}
 \item there is a controlled transition from $S_t$ to $S_{t+1}$ according to a transition kernel $P(\Action_t)$;
 \item a per-step reward $R_t = r(\State_t, \Action_t, \State_{t+1})$ is received.
\end{enumerate}
The future is discounted at a rate $\discount \in (0,1)$.

A (time-homogeneous and Markov) policy $\policy$ maps the current state to a distribution on actions, i.e., $\Action_t \sim \policy(\State_t)$. We use $\policy(\action | \state)$ to denote $\PR(\Action_t = \action | \State_t = \state)$. The performance of a policy $\policy$ is given by
\begin{equation} \label{eq:Vp-defn}
 J_\pi =
 \EXPA\biggl[\sum_{t=0}^{\infty}\discount^{t}\Reward_t\biggm|\State_0 =
 \state_0\biggr].
\end{equation}

We are interested in identifying an optimal policy, i.e., a policy that maximizes the performance. When $\mathcal{\State}$ and $\ACTION$ are Borel spaces, we assume that the model satisfies the standard conditions under which time-homogeneous Markov policies are optimal~\cite{Hernandez-Lerma1996}. In the sequel, we present a sample-path based online learning algorithm, called Renewal Monte Carlo (RMC), which identifies a locally optimal policy within the class of parameterized policies.

Suppose policies are parameterized by a closed and convex subset $\polParSpace$ of the Euclidean space. For example, $\polParSpace$ could be the weight vector in a Gibbs soft-max policy, or the weights of a deep neural network, or the thresholds in a control limit policy, and so on. Given $\polPars \in \polParSpace$, we use $\policy_\polPars$ to denote the policy parameterized by $\polPars$ and $J_\polPars$ to denote $J_{\policy_{\polPars}}$. We assume that for all policies $\policy_\polPars$, $\polPars \in \polParSpace$, the designated start state $s_0$ is positive recurrent.

The typical approach for policy gradient based reinforcement learning is to start with an initial guess $\polPars_0 \in \polParSpace$ and iteratively update it using stochastic gradient ascent. In particular, let $\widehat \GRAD J_{\polPars_m}$ be an unbiased estimator of $\GRAD_\polPars J_\polPars \big|_{\polPars = \polPars_m}$; then update
\begin{equation} \label{eq:J-update}
 \polPars_{m+1}
 = \big[ \polPars_m + \alpha_m \widehat \GRAD J_{\polPars_m} \big]_{\polParSpace}
\end{equation}
where $[\polPars]_{\polParSpace}$ denotes the projection of $\polPars$ onto $\polParSpace$ and $\{\alpha_m\}_{m \ge 1}$ is a sequence of learning rates that satisfies the standard assumptions
\begin{equation}\label{eq:lr}
 \sum_{m=1}^\infty \alpha_m = \infty
 \quad\text{and}\quad
 \sum_{m=1}^\infty \alpha_m^2 < \infty.
\end{equation}
Under mild technical conditions~\cite{Borkar:book}, the above iteration converges to a $\polPars^*$ that is locally optimal, i.e., $\GRAD_\polPars J_\polPars \big|_{\polPars = \polPars^*} = 0$. In RMC, we approximate $\GRAD_\polPars J_\polPars$ by a renewal theory based estimator, as explained below.

Let $\tau^{(n)}$ denote the stopping time when the system returns to the start state $\state_0$ for the $n$-th time. In particular, let $\tau^{(0)}=0$ and for $n \ge 1$ define
\[
 \tau^{(n)} = \inf\{t > \tau^{(n-1)}:\state_t = \state_0\}.
\]
We call the sequence of $(\State_t, \Action_t, \Reward_t)$ from $\tau^{(n-1)}$ to ${\tau^{(n)} - 1}$ the \emph{$n$-th regenerative cycle}. Let $\mathsf{R}^{(n)}$ and $\mathsf{T}^{(n)}$ denote the total discounted reward and total discounted time of the $n$-th regenerative cycle, i.e.,
\begin{align}\label{eq:Rn_and_Tn}
 \mathsf{R}^{(n)} = \Gamma^{(n)}
 \smashoperator[r]{\sum_{t = \tau^{(n-1)}}^{\tau^{(n)} - 1}}
 \discount^{t} R_t
 \quad\text{and}\quad
 \mathsf{T}^{(n)} = \Gamma^{(n)}
 \smashoperator[r]{\sum_{t = \tau^{(n-1)}}^{\tau^{(n)}-1}}
 \discount^{t},
\end{align}
where $\Gamma^{(n)}=\discount^{-\tau^{(n-1)}}$.
By the strong Markov property, $\{\mathsf{R}^{(n)}\}_{n \ge 1}$ and $\{\mathsf{T}^{(n)}\}_{n \ge 1}$ are i.i.d.\@ sequences. Let $\mathsf{R}_\polPars$ and $\mathsf{T}_\polPars$ denote $\EXP[\mathsf{R}^{(n)}]$ and $\EXP[\mathsf{T}^{(n)}]$, respectively. Define
\begin{equation}
 \widehat \mathsf{R} = \frac 1N \sum_{n=1}^N \mathsf{R}^{(n)}
 \quad \hbox{and}\quad
 \widehat \mathsf{T} = \frac 1N \sum_{n=1}^N \mathsf{T}^{(n)},
 \label{eq:est}
\end{equation}
where $N$ is a large number. Then, $\widehat \mathsf{R}$ and $\widehat \mathsf{T}$ are unbiased and asymptotically consistent estimators of $\mathsf{R}_\polPars$ and $\mathsf{T}_\polPars$.

From ideas similar to standard renewal theory \cite{Feller1966}, we have the following.
\begin{proposition}[Renewal Relationship]\label{prop:renewal-basic1} The performance of policy $\policy_\polPars$ is given by:
\begin{equation}\label{eq:renewal-basic1}
 J_\polPars = \frac { \mathsf{R}_\polPars } { (1 - \discount) \mathsf{T}_\polPars }.
\end{equation}
\end{proposition}
\begin{proof}
 For ease of notation, define
 \[
 \overline \mathsf{T}_\polPars = \EXPB\big[ \discount^{\tau^{(n)} - \tau^{(n-1)}} \big].
 \]
 Using the formula for geometric series, we get that $\mathsf{T}_\polPars = ( 1 - \overline \mathsf{T}_\polPars)/(1 - \discount)$. Hence,
 \begin{equation} \label{eq:Tbar1}
 \overline \mathsf{T}_\polPars = 1 - (1 - \discount) \mathsf{T}_\polPars.
 \end{equation}

 Now, consider the performance:
 \begin{align}
 J_\polPars &= \EXPB\bigg[
 \sum_{t=0}^{\tau^{(1)}-1} \discount^{t} R_t
 \notag \\
 & \hspace{7em}
 + \discount^{\tau^{(1)}}
 \smashoperator{\sum_{t = \tau^{(1)}}^\infty} \discount^{t-\tau^{(1)}} R_t
 \biggm| \State_{0} = \state_0 \bigg]
 \displaybreak[0]
 \notag \\
 &\stackrel{(a)}= \mathsf{R}_\polPars +
 \EXPB[ \discount^{\tau^{(1)}} ]\, J_\polPars
 \displaybreak[0]
 \notag \\
 &= \mathsf{R}_\polPars + \overline \mathsf{T}_\polPars J_\polPars,
 \label{eq:R-11}
 \end{align}
 where the second term in $(a)$ uses the independence of the random variables up to $\tau^{(1)}-1$ from those from $\tau^{(1)}$ onwards, due to the strong Markov property. Substituting~\eqref{eq:Tbar1} in~\eqref{eq:R-11} and rearranging terms, we get the result of the proposition.
\end{proof}
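To illustrate Proposition~\ref{prop:renewal-basic1} and the estimators in~\eqref{eq:est}, the following Python sketch (ours; \texttt{env.step} and \texttt{policy} are placeholder stand-ins for a simulator and a stochastic policy) estimates $J_\polPars$ from $N$ regenerative cycles:

\begin{verbatim}
import numpy as np

def estimate_performance(env, policy, s0, gamma=0.9, N=100):
    """Estimate J = R_hat / ((1 - gamma) * T_hat), where each
    regenerative cycle ends when the state returns to s0."""
    R, T = [], []
    for _ in range(N):
        s, t, Rn, Tn = s0, 0, 0.0, 0.0
        while True:
            a = policy(s)
            s, r = env.step(s, a)      # placeholder environment API
            Rn += gamma ** t * r       # discounted reward of the cycle
            Tn += gamma ** t           # discounted time of the cycle
            t += 1
            if s == s0:                # renewal: back at the start state
                break
        R.append(Rn)
        T.append(Tn)
    R_hat, T_hat = np.mean(R), np.mean(T)
    return R_hat / ((1.0 - gamma) * T_hat)
\end{verbatim}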
Differentiating both sides of Equation~\eqref{eq:renewal-basic1} with respect to $\polPars$, we get that
\begin{equation} \label{eq:H}
 \GRAD_\polPars J_\polPars = \frac{H_\polPars}{\mathsf{T}_\polPars^2(1 - \discount)},
 \enskip\text{where }
 H_\polPars = \mathsf{T}_\polPars \GRAD_\polPars \mathsf{R}_\polPars
 - \mathsf{R}_\polPars \GRAD_\polPars \mathsf{T}_\polPars.
\end{equation}

Therefore, instead of using stochastic gradient ascent to find the maximum of $J_\polPars$, we can use stochastic approximation to find the root of $H_\polPars$. In particular, let $\widehat H_m$ be an unbiased estimator of $H_{\polPars_m}$. We then use the update
\begin{equation} \label{eq:H-update}
 \polPars_{m+1} = \big[ \polPars_m + \alpha_m \widehat H_m \big]_{\polParSpace}
\end{equation}
where $\{\alpha_m\}_{m \ge 1}$ satisfies the standard conditions on learning rates~\eqref{eq:lr}. The above iteration converges to a locally optimal policy. Specifically, we have the following.

\begin{theorem}\label{thm:convergence}
 Let $\widehat \mathsf{R}_m$, $\widehat \mathsf{T}_m$, $\widehat \GRAD \mathsf{R}_m$ and $\widehat \GRAD \mathsf{T}_m$ be unbiased estimators of $\mathsf{R}_{\polPars_m}$, $\mathsf{T}_{\polPars_m}$, $\GRAD_\polPars \mathsf{R}_{\polPars_m}$, and $\GRAD_\polPars \mathsf{T}_{\polPars_m}$, respectively, such that $\widehat \mathsf{T}_m \perp \widehat \GRAD \mathsf{R}_m$ and $\widehat \mathsf{R}_m \perp \widehat \GRAD \mathsf{T}_m$.\footnote{The notation $X \perp Y$ means that the random variables $X$ and $Y$ are independent.} Then,
 \begin{equation}
 \widehat H_m = \widehat \mathsf{T}_m \widehat \GRAD \mathsf{R}_m - \widehat \mathsf{R}_m \widehat \GRAD \mathsf{T}_m
 \label{eq:H-est}
 \end{equation}
 is an unbiased estimator of $H_{\polPars_m}$, and the sequence $\{\polPars_m\}_{m \ge 1}$ generated by~\eqref{eq:H-update} converges almost surely and
 \[
 \lim_{m \to \infty} \GRAD_\polPars J_\polPars \big|_{\polPars_m} = 0.
 \]
\end{theorem}
\begin{proof}
 The unbiasedness of $\widehat H_m$ follows immediately from the independence assumption. The convergence of $\{\polPars_m\}_{m \ge 1}$ follows from~\cite[Theorem 2.2]{Borkar:book} and the fact that the model satisfies conditions (A1)--(A4) of~\cite[pg~10--11]{Borkar:book}.
\end{proof}

In the remainder of this section, we present two methods for estimating the gradients of $\mathsf{R}_\polPars$ and $\mathsf{T}_\polPars$. The first is a likelihood ratio based gradient estimator, which works when the policy is differentiable with respect to the policy parameters. The second is a simultaneous perturbation based gradient estimator that uses finite differences, which is useful when the policy is not differentiable with respect to the policy parameters.

\subsection{Likelihood ratio based gradient estimator}\label{sec:likelihood}

One approach to estimate the performance gradient is to use likelihood ratio based estimates~\cite{Rubinstein1989,Glynn1990,Williams1992}. Suppose the policy $\policy_\polPars(\action | \state)$ is differentiable with respect to $\polPars$.
For any time~$t$, define the likelihood function\n\\begin{equation}\\label{eq:score}\n \\Score_t = \n \\GRAD_\\polPars \\log[ \\policy_\\polPars(\\Action_t \\mid \\State_t) ],\n\\end{equation}\nand for $\\sigma \\in \\{ \\tau^{(n-1)}, \\dots, \\tau^{(n)} - 1 \\}$, define\n\\begin{equation}\n \\mathsf{R}^{(n)}_\\sigma = \\Gamma^{(n)}\\sum_{t=\\sigma}^{\\tau^{(n)}-1}\\discount^tR_t,\\enskip\n \\mathsf{T}^{(n)}_\\sigma = \\Gamma^{(n)}\\sum_{t=\\sigma}^{\\tau^{(n)}-1}\\discount^t.\\label{eq:R_T_sigma}\n\\end{equation}\nIn this notation $\\mathsf{R}^{(n)} = \\mathsf{R}^{(n)}_{\\tau^{(n-1)}}$ and $\\mathsf{T}^{(n)} = \\mathsf{T}^{(n)}_{\\tau^{(n-1)}}$.\nThen, define the following estimators for $\\GRAD_\\polPars \\mathsf{R}_\\polPars$ and\n$\\GRAD_\\polPars \\mathsf{T}_\\polPars$:\n\\begin{align}\n\\widehat \\GRAD \\mathsf{R} &= \\frac 1N \\sum_{n=1}^N \\sum_{\\sigma=\\tau^{(n-1)}}^{\\tau^{(n)}-1}\\mathsf{R}^{(n)}_\\sigma \\Score_{\\sigma},\\label{eq:grad_R_new}\\\\\n\\widehat \\GRAD \\mathsf{T} &= \\frac 1N \\sum_{n=1}^N \\sum_{\\sigma=\\tau^{(n-1)}}^{\\tau^{(n)}-1}\\mathsf{T}^{(n)}_\\sigma \\Score_{\\sigma},\\label{eq:grad_T_new}\n\\end{align}\nwhere $N$ is a large number. \n\n\\begin{proposition} \\label{prop:estimator}\n $\\widehat \\GRAD \\mathsf{R}$ and $\\widehat \\GRAD \\mathsf{T}$ defined above are unbiased\n and asymptotically consistent estimators\n of $\\GRAD_\\polPars \\mathsf{R}_\\polPars$ and $\\GRAD_\\polPars \\mathsf{T}_\\polPars$. \n\\end{proposition}\n\\begin{proof}\nLet $P_\\polPars$ denote the probability induced on the sample paths when the\nsystem is following policy $\\policy_\\polPars$. For $t \\in \\{ \\tau^{(n-1)},\n\\dots, \\tau^{(n)} - 1\\}$, let $D^{(n)}_{t}$ denote the sample path $(\\State_s,\n\\Action_s, \\State_{s+1})_{s=\\tau^{(n-1)}}^{t}$ for\nthe $n$-th regenerative cycle until time $t$. 
Then,\n\\[\n \\let\\smashoperator\\relax\n P_\\polPars(D^{(n)}_t) = \\smashoperator{\\prod_{s= \\tau^{(n-1)}}^{t} }\n \\policy_\\polPars(A_s | S_s)\n \\PR(\\State_{s+1} | S_{s}, A_{s})\n\\]\nTherefore,\n\\begin{equation} \\label{rel:11}\n \\GRAD_\\polPars \\log P_\\polPars(D^{(n)}_t) =\n \\smashoperator{\\sum_{s=\\tau^{(n-1)}}^{t}} \\GRAD_\\polPars \\log\n \\policy_\\polPars(\\Action_s | \\State_s) = \\smashoperator{\\sum_{s=\\tau^{(n-1)}}^{t}} \\Score_s.\n\\end{equation}\n\nNote that $\\mathsf{R}_\\polPars$ can be written as:\n\\[\n \\mathsf{R}_\\polPars = \\Gamma^{(n)}\\smashoperator[r]{\\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}}\\discount^{t}\\EXPB[ R_t].\n\\]\nUsing the log derivative trick,\\footnote{Log-derivative trick: For any distribution $p(x|\\theta)$ and any function $f$,\n\\[\n \\GRAD_\\theta \\EXP_{X \\sim p(X|\\theta)} [ f(X) ]\n = \n \\EXP_{X \\sim p(X|\\theta)}[ f(X) \\GRAD_\\theta \\log p(X | \\theta)].\n\\]\n} we get\n\\begin{align} \n \\GRAD_\\polPars \\mathsf{R}_\\polPars &= \n \\Gamma^{(n)} \n \\smashoperator[r]{\\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}}\n \\discount^{t}\\,\n \\EXPB[ R_t \\GRAD_\\polPars \\log P_\\polPars(D^{(n)}_t) ] \n \\notag \\\\\n &\\stackrel{(a)}= \n \\Gamma^{(n)} \n \\EXPB\\bigg[\n \\sum_{t = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}\n \\bigg[\n\\discount^{t} R_t{\\sum_{\\sigma=\\tau^{(n-1)}}^t}\\Score_\\sigma\\bigg] \\bigg]\n \\notag \\\\\n &\\stackrel{(b)}= \\EXPB\\bigg[ \n \\sum_{\\sigma = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}\\Score_\\sigma\\bigg[\n \\Gamma^{(n)}\\sum_{t=\\sigma}^{\\tau^{(n)} - 1}\\discount^{t} R_t \\bigg] \n \\bigg] \\notag \\\\\n &\\stackrel{(c)}= \\EXPB\\bigg[ \n \\sum_{\\sigma = \\tau^{(n-1)}}^{\\tau^{(n)} - 1}\n \\mathsf{R}^{(n)}_\\sigma \\Score_\\sigma \\bigg]\n \\label{rel:12}\n\\end{align}\nwhere $(a)$ follows from~\\eqref{rel:11}, $(b)$ follows from changing the order\nof summations, and $(c)$ follows from the definition of\n$\\mathsf{R}^{(n)}_\\sigma$ in~\\eqref{eq:R_T_sigma}. $\\widehat \\GRAD\n\\mathsf{R}$ is an unbiased and asymptotically consistent estimator of the right hand\nside of the first equation in~\\eqref{rel:12}. 
The result for $\widehat \GRAD \mathsf{T}$ follows from a similar argument.
\end{proof}

\begin{algorithm2e}[!tb]
\def\1#1{\quitvmode\hbox to 1em{\hfill$\mathsurround0pt #1$}}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKwInOut{Init}{initialize}
\SetKwProg{Fn}{function}{}{}
\SetKwFor{ForAll}{forall}{do}{}
\SetKwRepeat{Do}{do}{while}
\DontPrintSemicolon
\Input{Initial policy $\polPars_0$, discount factor $\discount$, initial state~$\state_0$, number of regenerative cycles $N$}

\For{iteration $m = 0, 1, \dots$}{
 \For{regenerative cycle $n_1=1$ to $N$}{
 Generate $n_1$-th regenerative cycle using \rlap{policy~$\policy_{\polPars_m}$.}

 Compute $\mathsf{R}^{(n_1)}$ and $\mathsf{T}^{(n_1)}$ using~\eqref{eq:Rn_and_Tn}.
 }
 Set $\widehat \mathsf{R}_{m} = \texttt{average}(\mathsf{R}^{(n_1)}: n_1 \in \{1,\dots,N\})$.

 Set $\widehat \mathsf{T}_{m} = \texttt{average}(\mathsf{T}^{(n_1)}: n_1 \in \{1,\dots,N\})$.

 \For{regenerative cycle $n_2=1$ to $N$}{
 Generate $n_2$-th regenerative cycle using \rlap{policy~$\policy_{\polPars_m}$.}

 Compute $\mathsf{R}_\sigma^{(n_2)}$, $\mathsf{T}_\sigma^{(n_2)}$ and $\Score_\sigma$ for all $\sigma$.
 }
 Compute $\widehat \GRAD \mathsf{R}_{m}$ and $\widehat \GRAD \mathsf{T}_m$ using \eqref{eq:grad_R_new} and~\eqref{eq:grad_T_new}.

 \vskip 2pt
 Set $\widehat H_m = \widehat \mathsf{T}_m \widehat \GRAD \mathsf{R}_m - \widehat \mathsf{R}_m \widehat \GRAD \mathsf{T}_m$.

 \vskip 4pt
 Update $\polPars_{m+1} = \big[ \polPars_m + \alpha_m \widehat H_m \big]_{\polParSpace}$.
}
\caption{RMC Algorithm with likelihood ratio based gradient estimates.}
\label{alg:likelihood}
\end{algorithm2e}

To satisfy the independence condition of Theorem~\ref{thm:convergence}, we use two independent sample paths: one to estimate $\widehat \mathsf{R}$ and $\widehat \mathsf{T}$ and the other to estimate $\widehat \GRAD \mathsf{R}$ and $\widehat \GRAD \mathsf{T}$. The complete algorithm is shown in Algorithm~\ref{alg:likelihood}. An immediate consequence of Theorem~\ref{thm:convergence} is the following.
\begin{corollary}\label{cor:pol_grad_conv}
 The sequence $\{\polPars_m\}_{m \ge 1}$ generated by Algorithm~\ref{alg:likelihood} converges to a local optimum.
\end{corollary}

\begin{remark}\label{rem:1}
Algorithm~\ref{alg:likelihood} is presented in its simplest form. It is possible to use standard variance reduction techniques such as subtracting a baseline~\cite{Williams1992,Greensmith2004,Peters2006} to reduce variance.
\end{remark}

\begin{remark}\label{rem:2} \label{rem:single_run}
In Algorithm~\ref{alg:likelihood}, we use two separate runs to compute $(\widehat \mathsf{R}_m, \widehat \mathsf{T}_m)$ and $(\GRAD \widehat \mathsf{R}_m, \GRAD \widehat \mathsf{T}_m)$ to ensure that the independence conditions of Proposition~\ref{prop:estimator} are satisfied. In practice, we found that using a single run to compute both $(\widehat \mathsf{R}_m, \widehat \mathsf{T}_m)$ and $(\GRAD \widehat \mathsf{R}_m, \GRAD \widehat \mathsf{T}_m)$ has a negligible effect on the accuracy of convergence (but speeds up convergence by a factor of two).
\end{remark}

\begin{remark}\label{rem:3}
It has been reported in the literature~\cite{Thomas2014} that using a biased estimate of the gradient given by
\begin{equation}
 \mathsf{R}^{(n)}_\sigma = \Gamma^{(n)}
 \sum_{t=\sigma}^{\tau^{(n)}-1}\discount^{t-\sigma} R_t
 \label{eq:R_sigma_biased}
\end{equation}
(and a similar expression for $\mathsf{T}^{(n)}_\sigma$) leads to faster convergence. We call this variant \textit{RMC with biased gradients} and, in our experiments, found that it does converge faster than RMC.
\end{remark}
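To make the estimator concrete, the following Python sketch (ours; \texttt{env.step}, \texttt{policy}, and the score function \texttt{score}, which returns $\GRAD_\polPars \log \policy_\polPars(a \mid s)$, are placeholder stand-ins) computes, on a single regenerative cycle started at $s_0$ (so that $\Gamma^{(n)} = 1$), the per-cycle quantities averaged in \eqref{eq:est}, \eqref{eq:grad_R_new}, and \eqref{eq:grad_T_new}:

\begin{verbatim}
import numpy as np

def cycle_gradients(env, policy, score, s0, gamma=0.9):
    """Return (R, T, grad_R, grad_T) for one regenerative cycle,
    with grad_R = sum_sigma R_sigma * score_sigma (and likewise
    for grad_T)."""
    states, actions, rewards = [], [], []
    s = s0
    while True:
        a = policy(s)
        s_next, r = env.step(s, a)     # placeholder environment API
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next
        if s == s0:
            break
    L = len(rewards)
    disc = gamma ** np.arange(L)
    # Tail sums R_sigma = sum_{t >= sigma} gamma^t R_t, and T_sigma.
    R_sig = np.cumsum((disc * np.asarray(rewards))[::-1])[::-1]
    T_sig = np.cumsum(disc[::-1])[::-1]
    grad_R = sum(R_sig[i] * score(states[i], actions[i])
                 for i in range(L))
    grad_T = sum(T_sig[i] * score(states[i], actions[i])
                 for i in range(L))
    return R_sig[0], T_sig[0], grad_R, grad_T

# Averaging these over two independent batches of N cycles gives
# (R_hat, T_hat) and (grad_R_hat, grad_T_hat); the update is then
# H = T_hat * grad_R_hat - R_hat * grad_T_hat.
\end{verbatim}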
\subsection{Simultaneous perturbation based gradient estimator}
Another approach to estimate the performance gradient is to use simultaneous perturbation based estimates~\cite{Spall1992,Maryak2008,Katkovnik1972,Bhatnagar:2013}. The general one-sided form of such estimates is
\[
 \widehat \GRAD \mathsf{R}_\polPars = \delta ( \widehat \mathsf{R}_{\polPars + c \delta} - \widehat \mathsf{R}_{\polPars} )/c
\]
where $\delta$ is a random variable with the same dimension as $\polPars$ and $c$ is a small constant. The expression for $\widehat \GRAD \mathsf{T}_\polPars$ is similar. When $\delta_i \sim \text{Rademacher}(\pm 1)$, the above method corresponds to simultaneous perturbation stochastic approximation (SPSA)~\cite{Spall1992,Maryak2008}; when $\delta \sim \text{Normal}(0, I)$, the above method corresponds to smoothed function stochastic approximation (SFSA)~\cite{Katkovnik1972,Bhatnagar:2013}.

\begin{algorithm2e}[!tb]
\def\1#1{\quitvmode\hbox to 1em{\hfill$\mathsurround0pt #1$}}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKwInOut{Init}{initialize}
\SetKwProg{Fn}{function}{}{}
\SetKwFor{ForAll}{forall}{do}{}
\SetKwRepeat{Do}{do}{while}
\DontPrintSemicolon
\Input{Initial policy $\polPars_0$, discount factor $\discount$, initial state~$\state_0$, number of regenerative cycles $N$, constant $c$, perturbation distribution $\Delta$}

\For{iteration $m = 0, 1, \dots$}{
 \For{regenerative cycle $n_1=1$ to $N$}{
 Generate $n_1$-th regenerative cycle using \rlap{policy~$\policy_{\polPars_m}$.}

 Compute $\mathsf{R}^{(n_1)}$ and $\mathsf{T}^{(n_1)}$ using~\eqref{eq:Rn_and_Tn}.
 }
 Set $\widehat \mathsf{R}_{m} = \texttt{average}(\mathsf{R}^{(n_1)}: n_1 \in \{1,\dots,N\})$.

 Set $\widehat \mathsf{T}_{m} = \texttt{average}(\mathsf{T}^{(n_1)}: n_1 \in \{1,\dots,N\})$.

 Sample $\delta \sim \Delta$.

 Set $\polPars_m' = \polPars_m + c \delta$.

 \For{regenerative cycle $n_2=1$ to $N$}{
 Generate $n_2$-th regenerative cycle using \rlap{policy~$\policy_{\polPars_m'}$.}

 Compute $\mathsf{R}^{(n_2)}$ and $\mathsf{T}^{(n_2)}$ using~\eqref{eq:Rn_and_Tn}.
 }
 Set $\widehat \mathsf{R}'_{m} = \texttt{average}(\mathsf{R}^{(n_2)}: n_2 \in \{1,\dots,N\})$.

 Set $\widehat \mathsf{T}'_{m} = \texttt{average}(\mathsf{T}^{(n_2)}: n_2 \in \{1,\dots,N\})$.

 \vskip 2pt
 Set $\widehat H_m = \delta(\widehat \mathsf{T}_m \widehat \mathsf{R}'_m - \widehat \mathsf{R}_m \widehat \mathsf{T}'_m)/c$.

 \vskip 4pt
 Update $\polPars_{m+1} = \big[ \polPars_m + \alpha_m \widehat H_m \big]_{\polParSpace}$.
}
\caption{RMC Algorithm with simultaneous perturbation based gradient estimates.}
\label{alg:SPSA}
\end{algorithm2e}

Substituting the above estimates in~\eqref{eq:H-est} and simplifying, we get
\[
 \widehat H_\polPars = \delta ( \widehat \mathsf{T}_\polPars \widehat \mathsf{R}_{\polPars + c\delta} - \widehat \mathsf{R}_\polPars \widehat \mathsf{T}_{\polPars + c \delta} )/c.
\]
The complete algorithm is shown in Algorithm~\ref{alg:SPSA}. Since $(\widehat \mathsf{R}_\polPars, \widehat \mathsf{T}_\polPars)$ and $(\widehat \mathsf{R}_{\polPars + c \delta}, \widehat \mathsf{T}_{\polPars + c \delta})$ are estimated from separate sample paths, $\widehat H_\polPars$ defined above is an unbiased estimator of $H_\polPars$. Then, an immediate consequence of Theorem~\ref{thm:convergence} is the following.
\begin{corollary}
 The sequence $\{\polPars_m\}_{m \ge 1}$ generated by Algorithm~\ref{alg:SPSA} converges to a local optimum.
\end{corollary}
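A single iteration of Algorithm~\ref{alg:SPSA} can be sketched in Python as follows (ours; \texttt{est\_RT} is a placeholder that runs $N$ fresh regenerative cycles under the given policy parameters and returns $(\widehat \mathsf{R}, \widehat \mathsf{T})$, so the two calls use separate sample paths as required):

\begin{verbatim}
import numpy as np

def spsa_step(est_RT, theta, alpha=0.01, c=0.3):
    """One update of the simultaneous perturbation variant of RMC."""
    delta = np.random.normal(size=theta.shape)  # SFSA; use +/-1 for SPSA
    R0, T0 = est_RT(theta)                      # estimates at theta
    R1, T1 = est_RT(theta + c * delta)          # estimates at theta+c*delta
    H = delta * (T0 * R1 - R0 * T1) / c
    return theta + alpha * H
\end{verbatim}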
\section{RMC for Post-Decision State Model} \label{sec:post_model}
In many models, the state dynamics can be split into two parts: a controlled evolution followed by an uncontrolled evolution. For example, many continuous state models have dynamics of the form
\[
 S_{t+1} = f(S_t, A_t) + N_t,
\]
where $\{N_t\}_{t \ge 0}$ is an independent noise process. For other examples, see the inventory control and event-triggered communication models in Sec.~\ref{sec:num_exp}. Such models can be written in terms of a post-decision state model, described below.

Consider a post-decision state MDP with pre-decision state $\Prestate_t \in \PRESTATE$, post-decision state $\Poststate_t \in \POSTSTATE$, and action $\Action_t \in \ACTION$. The system starts at an initial state $\poststate_0 \in \POSTSTATE$ and at time~$t$:
\begin{enumerate}
 \item there is a controlled transition from $\Prestate_t$ to $\Poststate_t$ according to a transition kernel $\PRE P(\Action_t)$;
 \item there is an uncontrolled transition from $\Poststate_t$ to $\Prestate_{t+1}$ according to a transition kernel $\POST P$;
 \item a per-step reward $R_t = r(\Prestate_t, \Action_t, \Poststate_t)$ is received.
\end{enumerate}
The future is discounted at a rate $\discount \in (0,1)$.
\begin{remark}
 When $\POSTSTATE = \PRESTATE$ and $\PRE P$ is the identity, the above model reduces to the standard MDP model considered in Sec.~\ref{sec:rl}. When $\POST P$ is a deterministic transition, the model reduces to a standard MDP model with post-decision states~\cite{VanRoyBertsekasLeeEtAl1997,powell2011approximate}.
\end{remark}

As in Sec.~\ref{sec:rl}, we choose a (time-homogeneous and Markov) policy $\policy$ that maps the current pre-decision state to a distribution on actions, i.e., $\Action_t \sim \policy(\Prestate_t)$. We use $\policy(\action | \prestate)$ to denote $\PR(\Action_t = \action | \Prestate_t = \prestate)$.

The performance when the system starts in post-decision state $\poststate_0 \in \POSTSTATE$ and follows policy $\policy$ is given by
\begin{equation}
 J_\policy =
 \EXPA\biggl[\sum_{t=0}^{\infty}\discount^{t}\Reward_t\biggm|\Poststate_0 =
 \poststate_0\biggr].
\end{equation}
As before, we are interested in identifying an optimal policy, i.e., a policy that maximizes the performance. When $\mathcal{\State}$ and $\ACTION$ are Borel spaces, we assume that the model satisfies the standard conditions under which time-homogeneous Markov policies are optimal~\cite{Hernandez-Lerma1996}.
Let $\tau^{(n)}$ denote the stopping times such that $\tau^{(0)} = 0$ and for $n \ge 1$,
\[
 \tau^{(n)} = \inf \{ t > \tau^{(n-1)} : \poststate_{t-1} = \poststate_0 \}.
\]
The slightly unusual definition (using $\poststate_{t-1} = \poststate_0$ rather than the more natural $\poststate_t = \poststate_0$) is to ensure that the formulas for $\mathsf{R}^{(n)}$ and $\mathsf{T}^{(n)}$ used in Sec.~\ref{sec:rl} remain valid for the post-decision state model as well. Thus, using arguments similar to Sec.~\ref{sec:rl}, we can show that both variants of RMC presented in Sec.~\ref{sec:rl} converge to a locally optimal parameter $\polPars$ for the post-decision state model as well.

\section{Approximate RMC}\label{sec:approx_rl}

In this section, we present an approximate version of RMC (for the basic model of Sec.~\ref{sec:rl}). Suppose that the state and action spaces $\mathcal{\State}$ and $\ACTION$ are separable metric spaces (with metrics $d_S$ and $d_A$).

Given an approximation constant $\rho \in \reals_{> 0}$, let $B^\rho = \{s \in \mathcal{\State}: d_S(s,s_0) \le \rho\}$ denote the ball of radius $\rho$ centered around $s_0$. Given a policy $\policy$, let $\tau^{(n)}$ denote the stopping times for successive visits to $B^\rho$, i.e., $\tau^{(0)} = 0$ and for $n \ge 1$,
\[
 \tau^{(n)} = \inf \{ t > \tau^{(n-1)} : \state_t \in B^\rho \}.
\]
Define $\mathsf{R}^{(n)}$ and $\mathsf{T}^{(n)}$ as in~\eqref{eq:Rn_and_Tn} and let $\mathsf{R}^\rho_\polPars$ and $\mathsf{T}^\rho_\polPars$ denote the expected values of $\mathsf{R}^{(n)}$ and $\mathsf{T}^{(n)}$, respectively. Define
\[
 J^\rho_\polPars = \frac{\mathsf{R}^\rho_\polPars}{ (1-\discount) \mathsf{T}^\rho_\polPars}.
\]

\begin{theorem}\label{thm:approx_RMC}
 Given a policy $\policy_\polPars$, let $V_\polPars$ denote the value function and $\overline \mathsf{T}^\rho_\polPars = \EXPB[ \discount^{\tau^{(1)}} | \State_0 = \state_0 ]$ (which is always less than $\discount$).
Suppose the\n following condition is satisfied:\n \\begin{enumerate}\n \\item[\\textup{(C)}] The value function $V_\\polPars$ is locally Lipschitz\n in $B^\\rho$, i.e., there exists a $L_\\polPars$ such that for any $s, s'\n \\in B^\\rho$, \n \\[\n | V_\\polPars(s) - V_\\polPars(s') | \\le L_\\polPars d_S(s,s').\n \\]\n \\end{enumerate}\n Then\n \\begin{equation}\\label{eq:approxJ_bound}\n \\big| J_\\polPars - J^\\rho_\\polPars \\big| \\le \n \\frac{ L_\\polPars \\overline \\mathsf{T}^\\rho_\\polPars } { (1-\\discount)\n \\mathsf{T}^\\rho_\\polPars} \\rho \\le \\frac{\\discount}{(1-\\discount)} L_\\polPars \\rho.\n \\end{equation}\n\\end{theorem}\n\\begin{proof}\n We follow an argument similar to Proposition~\\ref{prop:renewal-basic1}.\n \\begin{align}\n J_\\polPars &= V_\\polPars(s_0) = \n \\EXPB\\bigg[\n \\sum_{t=0}^{\\tau^{(1)}-1} \\discount^{t} R_t \n \\notag \\\\\n & \\hskip 6em \n + \\discount^{\\tau^{(1)}}\n \\sum_{t = \\tau^{(1)}}^\\infty \\discount^{t-\\tau^{(1)}} R_t \n \\biggm| \\State_{0} = \\state_{\\tau^{(1)}} \\bigg]\n \\notag \\\\\n &\\stackrel{(a)}= \\mathsf{R}^\\rho_\\polPars + \n\\EXPB[ \\discount^{\\tau^{(1)}} | \\State_0 = \\state_0]\\, V_\\polPars(\\state_{\\tau^{(1)}})\n\\label{eq:approx1}\n \\end{align}\n where $(a)$ uses the strong Markov property.\n Since $V_\\polPars$ is locally Lipschitz with constant $L_\\polPars$ and\n $s_{\\tau^{(1)}} \\in B^\\rho$, we have that\n \\[\n |J_\\polPars - V_\\polPars(s_{\\tau^{(1)}}) | = |V_\\polPars(s_0) -\n V_\\polPars(s_{\\tau^{(1)}}) | \\le L_\\polPars \\rho. \n \\]\n Substituting the above in~\\eqref{eq:approx1} gives\n \\[\n J_\\polPars \\le \\mathsf{R}^\\rho_\\polPars + \\overline \\mathsf{T}^\\rho_\\polPars (J_\\polPars +\n L_\\polPars \\rho).\n \\]\n Substituting $\\mathsf{T}^\\rho_\\polPars = (1 - \\overline \\mathsf{T}^\\rho_\\polPars)\/(1 -\n \\discount)$ and rearranging the terms, we get \n \\[\n J_\\polPars \\le J^\\rho_\\polPars + \\frac{L_\\polPars \\overline\n \\mathsf{T}^\\rho_\\polPars}{(1-\\discount) \\mathsf{T}^\\rho_\\polPars } \\rho.\n \\]\n The other direction can also be proved using a similar argument. The second\n inequality in~\\eqref{eq:approxJ_bound} follows from $\\overline \\mathsf{T}^\\rho_\\polPars \\le \\gamma$ and ${\\mathsf{T}^\\rho_\\polPars \\ge 1}$.\n\\end{proof}\n\nTheorem~\\ref{thm:approx_RMC} implies that we can find an approximately optimal policy by identifying policy parameters $\\polPars$ that minimize $J^\\rho_\\polPars$. To do so, we can appropriately modify both variants of\nRMC defined in Sec.~\\ref{sec:rl} to declare a renewal whenever the state lies\nin $B^\\rho$. \n\nFor specific models, it may be possible to verify that the value function is\nlocally Lipschitz (see Sec.~\\ref{sec:inv_ctrl} for an example). However, we\nare not aware of general conditions that guarantee local Lipschitz\ncontinuity of value functions. It is possible to identify sufficient conditions that guarantee global Lipschitz continuity of value functions (see~\\cite[Theorem 4.1]{Hinderer2005},\n\\cite[Lemma 1, Theorem 1]{Rachelson2010}, \\cite[Lemma 1]{Pirotta2015}). We state these conditions below.\n\\begin{proposition}\\label{prop:Lispschitz}\n Let $V_\\polPars$ denote the value function for any policy\n $\\policy_{\\polPars}$. 
Suppose the model satisfies the following conditions:
 \begin{enumerate}
 \item The transition kernel $P$ is Lipschitz, i.e., there exists a constant $L_P$ such that for all $s,s' \in \mathcal{\State}$ and $a,a' \in \ACTION$,
 \[
 \mathcal K(P(\cdot | s,a), P(\cdot | s',a')) \le
 L_P\big[ d_S(s,s') + d_A(a,a') \big],
 \]
 where $\mathcal K$ is the Kantorovich metric (also called the Kantorovich-Monge-Rubinstein metric or Wasserstein distance) between probability measures.

 \item The per-step reward $r$ is Lipschitz, i.e., there exists a constant $L_r$ such that for all $s,s',s_+ \in \mathcal{\State}$ and $a,a' \in \ACTION$,
 \[
 | r(s,a,s_+) - r(s',a',s_+) | \le
 L_r\big[ d_S(s,s') + d_A(a,a') \big].
 \]
 \end{enumerate}
 In addition, suppose the policy satisfies the following:
 \begin{enumerate}
 \setcounter{enumi}{2}
 \item The policy $\policy_\polPars$ is Lipschitz, i.e., there exists a constant $L_{\policy_\polPars}$ such that for any $s,s' \in \mathcal{\State}$,
 \[
 \mathcal K( \policy_\polPars(\cdot | s), \policy_\polPars(\cdot | s'))
 \le
 L_{\policy_\polPars}\, d_S(s,s').
 \]
 \item $\discount L_P(1 + L_{\policy_\polPars}) < 1$.
 \item The value function $V_\polPars$ exists and is finite.
 \end{enumerate}
 Then, $V_\polPars$ is Lipschitz. In particular, for any $s, s' \in \mathcal{\State}$,
 \[
 | V_\polPars(s) - V_\polPars(s') | \le L_\polPars d_S(s,s'),
 \]
 where
 \[
 L_\polPars = \frac{L_r (1 + L_{\policy_\polPars})}
 {1 - \discount L_P(1 + L_{\policy_\polPars}) }.
 \]
\end{proposition}

\section{Numerical Experiments}\label{sec:num_exp}

We conduct three experiments to evaluate the performance of RMC: a randomly generated MDP, event-triggered communication, and inventory management.

\subsection{Randomized MDP (GARNET)} \label{sec:GARNET}

In this experiment, we study a randomly generated $\text{GARNET}(100,10,50)$ model~\cite{Bhatnagar2009}, which is an MDP with $100$ states, $10$ actions, and a branching factor of $50$ (which means that each row of all transition matrices has $50$ non-zero elements, chosen $\text{Unif}[0,1]$ and normalized to add to~$1$). For each state-action pair, with probability $p=0.05$, the reward is chosen $\text{Unif}[10,100]$, and with probability $1-p$, the reward is~$0$. The future is discounted by a factor of $\discount=0.9$. The first state is chosen as the start state. The policy is a Gibbs soft-max distribution parameterized by $100 \times 10$ (states $\times$ actions) parameters, where each parameter belongs to the interval $[-30, 30]$. The temperature of the Gibbs distribution is kept constant and equal to~$1$.

We compare the performance of RMC, RMC with biased gradients (denoted by RMC-B, see Remark~\ref{rem:3}), and actor-critic with eligibility traces for the critic~\cite{sutton1998reinforcement} (which we refer to as SARSA-$\lambda$ and abbreviate as S-$\lambda$ in the plots), with $\lambda \in \{0, 0.25, 0.5, 0.75, 1\}$. For both the RMC algorithms, we use the same runs to estimate the gradients (see Remark~\ref{rem:single_run} in Sec.~\ref{sec:rl}).
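For reproducibility, a GARNET instance matching this description can be generated as follows (a sketch of ours; the function name and the seed are illustrative):

\begin{verbatim}
import numpy as np

def make_garnet(S=100, A=10, b=50, p=0.05, seed=0):
    """GARNET(S, A, b): each row of each transition matrix has b
    non-zero entries ~ Unif[0,1] normalized to sum to 1; each
    (s, a) pair earns a reward ~ Unif[10,100] with probability p."""
    rng = np.random.default_rng(seed)
    P = np.zeros((A, S, S))
    for a in range(A):
        for s in range(S):
            support = rng.choice(S, size=b, replace=False)
            w = rng.uniform(size=b)
            P[a, s, support] = w / w.sum()
    r = np.where(rng.uniform(size=(S, A)) < p,
                 rng.uniform(10.0, 100.0, size=(S, A)), 0.0)
    return P, r
\end{verbatim}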
We compare the performance of RMC, RMC with biased gradient (denoted by
RMC-B, see Remark~\ref{rem:2}), and actor-critic with eligibility traces for
the critic~\cite{sutton1998reinforcement} (which we refer to as
SARSA-$\lambda$ and abbreviate as S-$\lambda$ in the plots), with
$\lambda \in \{0, 0.25, 0.5, 0.75, 1\}$. For both RMC algorithms, we use the
same runs to estimate the gradients (see Remark~\ref{rem:single_run} in
Sec.~\ref{sec:rl}).
\def\PARAMS{For all algorithms, the learning rate is chosen using
ADAM~\cite{ADAM} with default hyper-parameters; the $\alpha$ parameter of
ADAM is set to $0.05$ for RMC, RMC-B, and the actor in SARSA-$\lambda$,
while the learning rate for the critic in SARSA-$\lambda$ is set to $0.1$.
For RMC and RMC-B, the policy parameters are updated after $N=5$ renewals.}
Each algorithm\footnote{\PARAMS} is run $100$ times and the mean and
standard deviation of the performance (as estimated by the algorithms
themselves) are shown in Fig.~\ref{fig:GARNET-RL-train}. The performance of
the corresponding policy, evaluated by Monte Carlo evaluation over a horizon
of $250$~steps and averaged over $100$~runs, is shown in
Fig.~\ref{fig:GARNET-RL-eval}. The optimal performance computed using value
iteration is also shown.

The results show that SARSA-$\lambda$ learns faster (this is expected
because the critic keeps track of the entire value function) but has higher
variance and gets stuck in a local minimum. On the other hand, RMC and RMC-B
learn more slowly but have low bias and do not get stuck in a local minimum.
The same qualitative behavior was observed for other randomly generated
models. Policy gradient algorithms only guarantee convergence to a local
optimum, and we do not know why RMC and SARSA-$\lambda$ converge to
different local optima. We also observed that RMC-B (which is RMC with a
biased evaluation of the gradient) learns faster than RMC.

\begin{figure}[!t!b]
  \centering
  \begin{subfigure}{1.0\linewidth}
    \centering
    \includegraphics[width=\linewidth]{light_garnet_rl_train.pdf}
    \caption{}
    \label{fig:GARNET-RL-train}
  \end{subfigure}
  \begin{subfigure}{1.0\linewidth}
    \centering
    \includegraphics[width=\linewidth]{garnet_rl_eval.pdf}
    \caption{}
    \label{fig:GARNET-RL-eval}
  \end{subfigure}
  \caption{Performance of different learning algorithms on
    $\text{GARNET}(100,10,50)$ with $p=0.05$ and $\discount=0.9$.
    (a)~The performance estimated by the algorithms online. (b)~The
    performance estimated by averaging over $100$ Monte Carlo evaluations
    with a rollout horizon of $250$. The solid lines show the mean value and
    the shaded region shows the $\pm$ one standard deviation region.}
\end{figure}

\subsection{Event-Triggered Communication} \label{sec:rem_est}

\begin{figure}[!t!b]
  \centering
  \renewcommand\unitlength{cm}
  \includegraphics[width=1.0\linewidth]{rl_re.pdf}
  \caption{Policy parameters versus number of samples (sample values
    averaged over 100 runs) for event-triggered communication using RMC for
    different values of $p_d$. The solid lines show the mean value and the
    shaded area shows the $\pm$ one standard deviation region.}
  \label{fig:RE}
\end{figure}

In this experiment, we study an event-triggered communication problem that
arises in networked control
systems~\cite{LipsaMartins:2011,CSM:thresholds}. A transmitter observes a
first-order autoregressive process $\{X_t\}_{t \ge 1}$, i.e.,
$X_{t+1} = \alpha X_t + W_t$, where $\alpha, X_t, W_t \in \reals$ and
$\{W_t\}_{t \ge 1}$ is an i.i.d.\ process. At each time, the transmitter
uses an event-triggered policy (explained below) to determine whether or not
to transmit (denoted by $A_t = 1$ and $A_t = 0$, respectively). Transmission
takes place over an i.i.d.\ erasure channel with erasure probability $p_d$.
Let $\Prestate_t$ and $\Poststate_t$ denote the pre- and post-decision
``error'' between the source realization and its reconstruction at the
receiver. It can be shown that $\Prestate_t$ and $\Poststate_t$ evolve as
follows~\cite{LipsaMartins:2011,CSM:thresholds}: when $A_t = 0$,
$\Poststate_t = \Prestate_t$; when $A_t = 1$, $\Poststate_t = 0$ if the
transmission is successful (w.p.\ $1-p_d$) and
$\Poststate_t = \Prestate_t$ if the transmission is not successful (w.p.\
$p_d$); and $\Prestate_{t+1} = \alpha \Poststate_t + W_t$. Note that this is
a post-decision state model, where the post-decision state resets to zero
after every successful transmission.\footnote{Had we used the standard MDP
model instead of the post-decision state model, this restart would not have
always resulted in a renewal.}
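These error dynamics are straightforward to simulate; the following sketch
(illustrative only, with all names our own) implements one step, using the
threshold form of the event-triggered policy described below and the
Gaussian noise used in our experiment.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def error_step(E, theta, alpha=1.0, p_d=0.1):
    # One step of the pre-/post-decision error dynamics.
    A = 1 if abs(E) >= theta else 0     # transmit iff |E_t| >= theta
    if A == 1 and rng.random() > p_d:   # success w.p. 1 - p_d
        E_post = 0.0                    # post-decision state resets
    else:
        E_post = E                      # no transmission, or packet drop
    E_next = alpha * E_post + rng.normal()  # W_t ~ N(0, 1)
    return E_next, E_post, A
\end{verbatim}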
The per-step cost has two components: a communication cost $\lambda A_t$,
where $\lambda \in \reals_{> 0}$, and an estimation error
$(\Poststate_t)^2$. The objective is to minimize the expected discounted
cost.

An event-triggered policy is a threshold policy that chooses $A_t = 1$
whenever $|\Prestate_t| \ge \polPars$, where $\polPars$ is a design choice.
Under certain conditions, such an event-triggered policy is known to be
optimal~\cite{LipsaMartins:2011,CSM:thresholds}. When the system model is
known, algorithms to compute the optimal $\polPars$ are presented
in~\cite{XuHes2004,CM:remote-estimation}. In this section, we use RMC to
identify the optimal policy when the model parameters are not known.

In our experiment, we consider an event-triggered model with $\alpha = 1$,
$\lambda = 500$, $p_d \in \{0, 0.1, 0.2\}$, $W_t \sim {\cal N}(0, 1)$, and
$\discount = 0.9$, and use the simultaneous perturbation variant of
RMC\footnote{An event-triggered policy is a parametric policy, but
$\policy_\polPars(\action | \prestate)$ is not differentiable in
$\polPars$. Therefore, the likelihood ratio method cannot be used to
estimate the performance gradient.} to identify $\polPars$. We run the
algorithm 100 times and the results for different choices of $p_d$ are shown
in Fig.~\ref{fig:RE}.\footnote{We choose the learning rate using ADAM with
default hyper-parameters and the $\alpha$ parameter of ADAM equal to 0.01.
We choose $c = 0.3$, $N=100$, and $\Delta = \mathcal{N}(0,1)$ in
Algorithm~\ref{alg:SPSA}.} For $p_d = 0$, the optimal threshold computed
using~\cite{CM:remote-estimation} is also shown. The results show that RMC
converges relatively quickly and has low bias across multiple runs.
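For readers unfamiliar with simultaneous perturbation, the sketch below
shows the generic two-sided gradient estimate underlying this variant, with
the constants $c$, $N$, and $\Delta$ stated above. It is a simplification
under our own naming: Algorithm~\ref{alg:SPSA} may differ in details, and
\texttt{J\_hat} is an assumed helper, not part of our implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def spsa_gradient(J_hat, theta, c=0.3, N=100):
    # Two-sided simultaneous-perturbation estimate of dJ/dtheta for a
    # scalar threshold. J_hat(theta, N) is assumed to return a Monte
    # Carlo estimate of the discounted cost from N renewal cycles.
    delta = rng.normal()                 # Delta ~ N(0, 1)
    J_plus = J_hat(theta + c * delta, N)
    J_minus = J_hat(theta - c * delta, N)
    return (J_plus - J_minus) / (2.0 * c * delta)

# The threshold is then updated in the descent direction (ADAM is used
# in the experiments; plain SGD shown for brevity):
# theta -= lr * spsa_gradient(J_hat, theta)
\end{verbatim}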
\subsection{Inventory Control} \label{sec:inv_ctrl}

In this experiment, we study an inventory management problem that arises in
operations research~\cite{Arrow1951,Bellman1955}. Let $S_t \in \reals$
denote the volume of goods stored in a warehouse, $A_t \in \reals_{\ge 0}$
denote the amount of goods ordered, and $D_t$ denote the demand. The state
evolves according to $S_{t+1} = S_t + A_t - D_{t+1}$.

We work with the normalized cost function
\[
  C(s) = a_p s (1-\discount)/\discount
  + a_h s \mathds{1}_{\{ s \ge 0\}} - a_b s \mathds{1}_{\{s < 0\}},
\]
where $a_p$ is the procurement cost, $a_h$ is the holding cost, and $a_b$ is
the backlog cost (see~\cite[Chapter 13]{Whittle1982} for details).

It is known that there exists a threshold $\theta$ such that the optimal
policy is a base-stock policy with threshold $\theta$ (i.e., whenever the
current stock level falls below $\theta$, one orders up to $\theta$).
Furthermore, for $s \le \theta$, we have that~\cite[Sec~13.2]{Whittle1982}
\begin{equation}\label{eq:opt-IC}
  V_\polPars(s) = C(s) + \frac{\discount}{(1-\discount)}
  \EXP[C(\polPars - D) ].
\end{equation}
So, for $B^\rho \subset (0, \theta)$, the value function is locally
Lipschitz with
\[
  L_\polPars = a_h + \frac{1 - \discount}{\discount}\, a_p.
\]
Hence, we can use approximate RMC to learn the optimal policy.

In our experiments, we consider an inventory management model with
$a_h = 1$, $a_b = 1$, $a_p = 1.5$, $D_t \sim \text{Exp}(\lambda)$ with
$\lambda = 0.025$, start state $s_0 = 1$, discount factor $\discount = 0.9$,
and use the simultaneous perturbation variant of approximate RMC to identify
$\theta$. We run the algorithm $100$ times and the results are shown in
Fig.~\ref{fig:inv_ctl-RL}.\footnote{We choose the learning rate using ADAM
with default hyper-parameters and the $\alpha$ parameter of ADAM equal to
0.25. We choose $c = 3.0$, $N=100$, and $\Delta = \mathcal{N}(0,1)$ in
Algorithm~\ref{alg:SPSA}, and choose $\rho = 0.5$ for approximate RMC\@. We
bound the states within $[-100.0, 100.0]$.} The optimal threshold and
performance computed using~\cite[Sec 13.2]{Whittle1982}%
\footnote{For $\text{Exp}(\lambda)$ demand, the optimal threshold is
  (see~\cite[Sec 13.2]{Whittle1982})
  \[
    \polPars^* = \frac{1}{\lambda}
    \log\left( \frac{a_h + a_b}{a_h + a_p(1-\discount)/\discount} \right).
  \]}
are also shown. The results show that RMC converges to an approximately
optimal parameter value, with total cost within the bound predicted in
Theorem~\ref{thm:approx_RMC}.
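As a quick numerical check (our own computation, using the parameters above
and the expressions just stated), the optimal threshold and the
approximation bound of Theorem~\ref{thm:approx_RMC} evaluate as follows:
\begin{verbatim}
import math

a_h, a_b, a_p = 1.0, 1.0, 1.5       # holding, backlog, procurement costs
lam, gamma, rho = 0.025, 0.9, 0.5   # demand rate, discount, radius

# Optimal base-stock threshold for Exp(lambda) demand.
theta_star = math.log((a_h + a_b)
                      / (a_h + a_p * (1 - gamma) / gamma)) / lam

# Local Lipschitz constant of the value function on (0, theta).
L = a_h + (1 - gamma) / gamma * a_p

# Bound |J - J^rho| <= gamma / (1 - gamma) * L * rho.
bound = gamma / (1 - gamma) * L * rho

print(f"theta* = {theta_star:.2f}")        # theta* = 21.56
print(f"L = {L:.3f}, bound = {bound:.2f}") # L = 1.167, bound = 5.25
\end{verbatim}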
\begin{figure}[!t!b]
  \centering
  \begin{subfigure}{1.0\linewidth}
    \centering
    \includegraphics[width=\linewidth]{AN_1_inv_ctl_threshold_paper.pdf}
    \caption{}
    \label{fig:inv_ctl-RL_threshold}
  \end{subfigure}
  \begin{subfigure}{1.0\linewidth}
    \centering
    \includegraphics[width=\linewidth]{AN_1_inv_ctl_perf_paper.pdf}
    \caption{}
    \label{fig:inv_ctl-RL_perf}
  \end{subfigure}
  \caption{(a)~Policy parameters and (b)~performance (total cost) versus
    number of samples (sample values averaged over 100 runs) for inventory
    control using RMC\@. The solid lines show the mean value and the shaded
    area shows the $\pm$ one standard deviation region. In (b), the
    performance is computed using~\eqref{eq:opt-IC} for the policy
    parameters given in (a). The red rectangular region shows the total cost
    bound given by Theorem~\ref{thm:approx_RMC}.}
  \label{fig:inv_ctl-RL}
\end{figure}

\section{Conclusions}

We present a renewal-theory-based reinforcement learning algorithm called
Renewal Monte Carlo. RMC retains the key advantages of Monte Carlo methods:
it has low bias, is simple and easy to implement, and works for models with
continuous state and action spaces. In addition, due to the averaging over
multiple renewals, RMC has low variance. We generalized the RMC algorithm to
post-decision state models and also presented a variant that converges
faster to an approximately optimal policy, in which the renewal state is
replaced by a renewal set. The error introduced by this approximation is
bounded by the size of the renewal set.

In certain models, one is interested in the performance at a reference state
that is not the start state. In such models, we can start with an arbitrary
policy, ignore the trajectory until the reference state is visited for the
first time, and use RMC from that time onwards (treating the reference state
as the new start state).

The results presented in this paper also apply to average reward models,
where the objective is to maximize
\begin{equation}
  J_\pi = \lim_{t_h \to \infty} \frac{1}{t_h}
  \EXPA\biggl[\sum_{t=0}^{t_h-1}\Reward_t\biggm|\State_0 =
  \state_0\biggr]. \label{eq:avg_Vp-defn}
\end{equation}
Let the stopping times $\tau^{(n)}$ be defined as before, and define the
total reward $\mathsf{R}^{(n)}$ and duration $\mathsf{T}^{(n)}$ of the
$n$-th regenerative cycle as
\[
  \mathsf{R}^{(n)} =
  \smashoperator[r]{\sum_{t = \tau^{(n-1)}}^{\tau^{(n)} - 1}} R_t
  \quad\text{and}\quad
  \mathsf{T}^{(n)} = \tau^{(n)} - \tau^{(n-1)}.
\]
Let $\mathsf{R}_\polPars$ and $\mathsf{T}_\polPars$ denote the expected
values of $\mathsf{R}^{(n)}$ and $\mathsf{T}^{(n)}$ under policy
$\policy_{\polPars}$. Then, from standard renewal theory, the performance
$J_\polPars$ equals $\mathsf{R}_\polPars/\mathsf{T}_\polPars$ and,
therefore, by the quotient rule, $\GRAD_\polPars J_\polPars =
H_\polPars/\mathsf{T}^2_\polPars$, where $H_\polPars$ is defined as
in~\eqref{eq:H}. We can use both variants of RMC presented in
Sec.~\ref{sec:rl} to obtain estimates of $H_\polPars$ and use them to update
the policy parameters via~\eqref{eq:H-update}.

\section*{Acknowledgment}

The authors are grateful to Joelle Pineau for useful feedback and for
suggesting the idea of approximate RMC.

\bibliographystyle{IEEEtran}