diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzoech" "b/data_all_eng_slimpj/shuffled/split2/finalzzoech" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzoech" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nFor the last decade, motion deblurring has been an active research topic in computer vision.\nMotion blur is produced by relative motion between the camera and the scene during the exposure, where the blur kernel, {\\it i.e.} the point spread function (PSF), is spatially non-uniform.\nIn the blind non-uniform deblurring problem, pixel-wise blur kernels and the corresponding sharp image are estimated simultaneously.\n\nEarly works on motion deblurring~\\cite{chan1998total,cho2009fast,fergus2006removing,shan2008high,xu2010two} focus on removing spatially uniform blur in the image.\nHowever, the assumption of uniform motion blur is often broken in the real world due to nonhomogeneous scene depth and rolling motion of the camera.\nRecently, a number of methods~\\cite{cho2007removing,gupta2010single,hu2014joint,hu2012fast,ji2012two,kim2014segmentation,sun2015learning,whyte2012non,zheng2013forward} have been proposed for non-uniform deblurring.\nHowever, they still cannot completely handle non-uniform blur caused by scene depth variation.\nThe main challenge lies in the difficulty of estimating the scene depth from only a single observation, which is highly ill-posed.\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\t\\vspace*{0.15in}\n\t\t\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figure\/figure1_Blurred.pdf}}\\hspace*{-0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figure\/figure1_Latent.pdf}}\\hspace*{-0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figure\/figure1_Depth.pdf}}\\hspace*{-0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figure\/figure1_Motion.pdf}}\t\n\t\\end{center}\n\t\\vspace*{-0.05in}\n\t\\caption{The proposed algorithm jointly estimates latent image, depth map, and 
camera motion from a single light field. (a) Center-view of the blurred light field sub-aperture images. (b) Deblurred image of (a). (c) Estimated depth map. (d) Camera motion path and orientation (6-DOF).}\n\t\\label{fig:comp0}\n\n\\end{figure}\nA light field camera ameliorates the ill-posedness of the single-shot deblurring problem of a conventional camera.\nA 4D light field is equivalent to multi-view images with narrow baseline, {\\it i.e.} sub-aperture images, taken with an identical exposure~\\cite{ng2005light}.\nConsequently, motion deblurring using a light field can leverage the multi-dimensional nature of the captured information.\nFirst, a strong depth cue is obtained by employing multi-view stereo matching between sub-aperture images.\nIn addition, the different blurs in the sub-aperture images help the optimization converge faster and more precisely.\n\nIn this paper, we propose an efficient algorithm to jointly estimate the latent image, sharp depth map, and 6-DOF camera motion from a single blurred 4D light field as shown in Figure~\\ref{fig:comp0}.\nIn the proposed light field blur model, latent sub-aperture images are formulated by 3D warping of the center-view sharp image using the depth map and the 6-DOF camera motion.\nThen, motion blur is modeled as the integral of the latent sub-aperture images while the shutter is open.\nNote that the proposed center-view parameterization reduces the light field deblurring problem to a dimension comparable to that of single image deblurring.\nThe joint optimization is performed in an alternating manner, in which the deblurred image, depth map, and camera motion are refined over iterations.\nThe overview of the proposed algorithm is shown in Figure~\\ref{fig:comp1}.\nOverall, the contributions of this paper are summarized as follows.\n\\begin{itemize}\n \\item[$\\bullet$] We propose a joint method which simultaneously solves the deblurring, depth estimation, and camera motion estimation problems from a single light field.\n \\item[$\\bullet$] 
Unlike the previous state-of-the-art algorithm, the proposed method handles blind light field motion deblurring under 6-DOF camera motion.\n \\item[$\\bullet$] We present a practical blur formulation that can be extended to any multi-view camera system.\n\\end{itemize}\n\\section{Related Work}\n\n\\textbf{Conventional Single Image Deblurring.}~One way to effectively remove spatially-variant motion blur in a conventional single image is to first find the motion density function (MDF) and then generate the pixel-wise kernel from this function~\\cite{gupta2010single,hu2014joint,hu2012fast}.\nGupta~et al.~\\cite{gupta2010single} modeled the camera motion in a discrete 3D motion space comprising $x$, $y$ translation and in-plane rotation.\nThey performed deblurring by iteratively optimizing the MDF and the latent image that best describe the blurred image.\nA similar model was used by Hu and Yang~\\cite{hu2012fast}, in which the MDF was modeled with 3D rotations.\nThese MDF-based methods parameterize the spatially-variant blur kernel well in a low-dimensional space.\nHowever, modeling the motion blur with an MDF alone is difficult for depth-varying scenes, because the motion blur is determined by both camera motion and scene depth.\nIn~\\cite{hu2014joint}, the image was segmented by a matting algorithm, and the MDF and representative depth values of each region were found through the expectation-maximization algorithm.\n\\begin{figure*}[t]\n\t\\vspace*{0.15in}\n\t\\centering\n\t\\scalebox{1.1}{\n\t\t\\includegraphics[width=0.90\\textwidth]{figure\/figure2_compress.pdf}\n\t}\n\t\\vspace*{-0.05in}\n\t\\caption{Overview of the proposed algorithm. 
The proposed algorithm jointly estimates the latent image, depth map, and camera motion from a single light field.\n\t\\label{fig:comp1}\n\\end{figure*}\n\nA few methods~\\cite{kim2014segmentation,sun2015learning} estimated linear blur kernels locally and showed acceptable results for arbitrary scene depth.\nKim and Lee~\\cite{kim2014segmentation} jointly estimated the spatially varying motion flow and the latent image.\nSun~et al.~\\cite{sun2015learning} adopted a learning method based on a convolutional neural network~(CNN) and assumed that the motion is locally linear.\nHowever, the locally linear blur assumption does not hold for large motions.\n\\vspace*{0.3cm}\n\n\\noindent\\textbf{Video and Multi-View Deblurring.}~Xu and Jia~\\cite{xu2012depth} decomposed the image region according to the depth map obtained from a stereo camera and recombined the regions after independent deblurring.\nRecently, several methods~\\cite{cho2012video,hyun2015generalized,park2017joint,sellent2016stereo,wulff2014modeling} have addressed the motion blur problem in video sequences.\nVideo deblurring shows good performance because it exploits optical flow as a strong guide for motion estimation.\n\\vspace*{0.3cm}\n\n\\noindent\\textbf{Light Field Deblurring.}~A light field with two-plane parameterization is equivalent to multi-view images with narrow baseline.\nIt contains rich geometric information about the rays in a single-shot image.\nThese multi-view images are called sub-aperture images, and individual sub-aperture images show slightly different blur patterns due to the viewpoint variation.\nIn the last few years, several approaches~\\cite{chandramouli2014motion,dansereau2016motion,jin2015bilayer,snoswell2014lf,srinivasan2017light} have been proposed to perform motion deblurring on the light field.\nChandramouli~et~al.~\\cite{chandramouli2014motion} addressed the motion blur problem in the light field for the first time.\nThey assumed constant depth and uniform motion to alleviate the 
complexity of the imaging model.\nThe constant depth assumption implies that the light field carries little information about the 3D scene structure, which forfeits the advantages of the light field.\nJin~et al.~\\cite{jin2015bilayer} quantized the depth map into two layers and removed the motion blur in each layer.\nTheir method assumed that the camera motion is in-plane translation and utilized the depth value as a scale factor of the translational motion.\nAlthough their model handles a non-uniform blur kernel related to the depth map, more general depth variation and camera motion should be considered for application to real-world scenes.\nDansereau~et al.~\\cite{dansereau2016motion} applied the Richardson-Lucy deblurring algorithm to the light field with non-blind 6-DOF motion blur.\nAlthough their method dealt with 6-DOF motion blur, it assumed that the ground truth camera motion is known.\nUnlike~\\cite{dansereau2016motion}, in this paper, we address the problem of blind deblurring, which is a far more ill-posed problem.\nSrinivasan~et al.~\\cite{srinivasan2017light} solved light field deblurring under a 3D camera motion path and showed visually pleasing results.\nHowever, their method does not consider the 3D orientation change of the camera.\n\nIn contrast to the previous works on light field deblurring, the proposed method completely handles 6-DOF motion blur and unconstrained scene depth variation.\n\n\n\n\\section{Motion Blur Formulation in Light Field}\n\\label{sec3}\n\nA pixel in a 4D light field has four coordinates, {\\em i.e.} $(x,y)$ for the spatial and $(u,v)$ for the angular coordinates.\nA light field can be interpreted as a set of $u\\times v$ multi-view images with narrow baseline, which are often called sub-aperture images~\\cite{ng2006digital}.\nThroughout this paper, a sub-aperture image is represented as $I(\\mathbf{x},{\\mathbf{u}})$ where $\\mathbf{x}=(x,y)$ and $\\mathbf{u}=(u,v)$.\nFor each sub-aperture image, the blurred image $B(\\mathbf{x},{\\mathbf{u}})$ is the average of 
the sharp images ${I}_{t}(\\mathbf{x},{\\mathbf{u}})$ while the shutter is open over $[t_0 , t_1]$ as follows:\n\\begin{align}\nB(\\mathbf{x},{\\mathbf{u}}) = \\int_{t_0}^{t_1} {I}_{t}(\\mathbf{x},{\\mathbf{u}}) dt.\n\\label{eq:blurEq}\n\\end{align}\n\nFollowing the blur model of~\\cite{park2017joint,sellent2016stereo}, we approximate all the blurred sub-aperture images by projecting a single latent image with 3D rigid motion.\nWe choose the center-view ($\\mathbf{c}$) of the sub-aperture images and the middle of the shutter time ($t_r$) as the reference angular position and time stamp of the latent image.\nWith the above notation, the pixel correspondence from each sub-aperture image to the latent image $I_{t_r}(\\mathbf{x},\\mathbf{c})$ is expressed as follows:\n\\begin{align}\n{I}_{t}(\\mathbf{x},{\\mathbf{u}})={I}_{t_r}(w_{t}(\\mathbf{x},\\mathbf{u}),{\\mathbf{c}}),\n\\end{align}\nwhere\n\\begin{align}\nw_{t}(\\mathbf{x},\\mathbf{u})=\\Pi_{\\mathbf{c}}(\\mathrm{P}^{\\mathbf{c}}_{t_r}(\\mathrm{P}^{\\mathbf{u}}_{t})^{\\scriptscriptstyle -1}\\Pi^{\\scriptscriptstyle -1}_{\\mathbf{u}}(\\mathbf{x},D_{t}(\\mathbf{x},\\mathbf{u}))).\n\\label{eq:warping}\n\\end{align}\n$w_{t}(\\mathbf{x},\\mathbf{u})$ computes the warped pixel position from $\\mathbf{u}$ to $\\mathbf{c}$, and from $t$ to $t_r$.\n$\\Pi_{\\mathbf{c}}$ and $\\Pi^{\\scriptscriptstyle -1}_{\\mathbf{u}}$ are the projection and back-projection functions between the image coordinates and the 3D homogeneous coordinates using the camera intrinsic parameters.\nMatrices $\\mathrm{P}^{\\mathbf{c}}_{t_r}$ and $\\mathrm{P}^{\\mathbf{u}}_{t}\\in SE(3)$ denote the 6-DOF camera poses at the corresponding angular positions and time stamps. 
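As an illustration, the warping function $w_{t}$ above amounts to back-projecting a pixel with its depth, applying the relative rigid motion, and re-projecting into the center view. The following is a minimal sketch, assuming a single shared pinhole intrinsic matrix \texttt{K} for all sub-aperture views and $4\times4$ world-to-camera pose matrices; all function names are illustrative, not part of the proposed implementation.

```python
import numpy as np

def back_project(x, depth, K):
    """Lift pixel x = (u, v) with its depth to a 3D point in the camera frame."""
    px = np.array([x[0], x[1], 1.0])
    return depth * (np.linalg.inv(K) @ px)

def project(X, K):
    """Project a 3D point in the camera frame to pixel coordinates."""
    p = K @ X
    return p[:2] / p[2]

def warp(x, depth, K, P_u_t, P_c_tr):
    """Sketch of w_t(x, u): warp pixel x from sub-aperture view u at time t
    to the center view c at the reference time t_r.
    P_u_t and P_c_tr are assumed 4x4 rigid transforms (world -> camera)."""
    X = np.append(back_project(x, depth, K), 1.0)   # homogeneous 3D point
    X_c = P_c_tr @ np.linalg.inv(P_u_t) @ X         # re-express in the center camera
    return project(X_c[:3], K)
```

With identity poses the warp is the identity mapping, and a pure camera translation shifts the projection by an amount inversely proportional to the depth, which is exactly why the pixel-wise depth scales the blur kernel.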
$D_{t}(\\mathbf{x},\\mathbf{u})$ is the depth map at the time stamp $t$.\n\nIn the proposed model, the blur operator $\\Psi(\\cdot)$ is defined by approximating the integral in (\\ref{eq:blurEq}) as a finite sum as follows:\n\\begin{align}\nB(\\mathbf{x},\\mathbf{u})\\approx (\\Psi\\circ I)(\\mathbf{x},\\mathbf{u}),\n\\end{align}\nwhere\n\\begin{align}\n(\\Psi\\circ I)(\\mathbf{x},\\mathbf{u}) = \\frac{1}{M}\\sum^{M-1}_{m=0}I_{t_r}(w_{t_m}(\\mathbf{x},\\mathbf{u}), \\mathbf{c}).\n\\label{eq:blurOp}\n\\end{align}\nIn (\\ref{eq:blurOp}), $t_m$ is the $m$-th uniformly sampled time stamp in the interval $[t_0,t_1]$.\n\nOur goal is to formulate $(\\Psi\\circ I)(\\mathbf{x},\\mathbf{u})$ with only center-view variables, {\\em i.e.} $I_{t_r}(\\mathbf{x},\\mathbf{c})$, $D_{t_r}(\\mathbf{x},\\mathbf{c})$, and $\\mathrm{P}^{\\mathbf{c}}_{t_0}$.\n$\\mathrm{P}^{\\mathbf{u}}_{t_m}$ and $D_{t_m}(\\mathbf{x},\\mathbf{u})$ are the variables related to $\\mathbf{u}$ in the warping function in (\\ref{eq:blurOp}).\nTherefore, we parameterize $\\mathrm{P}^{\\mathbf{u}}_{t_m}$ and $D_{t_m}(\\mathbf{x},\\mathbf{u})$ by employing center-view variables.\nBecause the relative camera pose $\\mathrm{P}^{\\scriptscriptstyle \\mathbf{c\\rightarrow u}}$ is fixed over time, $\\mathrm{P}^{\\mathbf{u}}_{t_m}$ is expressed by $\\mathrm{P}^{\\mathbf{c}}_{t_0}$ and $\\mathrm{P}^{\\mathbf{c}}_{t_1}$ as follows:\n\\begin{align}\n\\mathrm{P}^{\\mathbf{u}}_{t_m}=\\mathrm{P}^{\\scriptscriptstyle \\mathbf{c\\rightarrow u}}\\mathrm{P}^{\\mathbf{c}}_{t_m},\n\\end{align}\n\\begin{align}\n\\mathrm{P}^{\\mathbf{c}}_{t_m}=\\exp(\\frac{m}{M}\\log(\\mathrm{P}^{\\mathbf{c}}_{t_1}{(\\mathrm{P}^{\\mathbf{c}}_{t_0})}^{\\scriptscriptstyle -1}))\\mathrm{P}^{\\mathbf{c}}_{t_0},\n\\label{eq:motionPath}\n\\end{align}\nwhere $\\exp$ and $\\log$ denote the exponential and logarithmic maps between the Lie group $SE(3)$ and the Lie algebra $\\mathfrak{se}(3)$~\\cite{blanco2010tutorial}.\nTo minimize the viewpoint shift of the latent 
image, we assume $\\mathrm{P}^{\\mathbf{c}}_{t_1}=(\\mathrm{P}^{\\mathbf{c}}_{t_0})^{\\scriptscriptstyle -1}$, which makes $\\mathrm{P}^{\\mathbf{c}}_{t_m}$ an identity matrix when $t_m=t_r$.\nNote that we use the camera path model of~\\cite{park2017joint,sellent2016stereo}.\nHowever, the B\\'{e}zier camera path model used in~\\cite{srinivasan2017light} can be directly applied to (\\ref{eq:motionPath}) as well.\n$D_{t_m}(\\mathbf{x},\\mathbf{u})$ is also represented by $D_{t_r}(\\mathbf{x},\\mathbf{c})$ via forward warping and interpolation.\n\nIn order to estimate all the blur variables in the proposed light field blur model, we need to recover the latent variables, {\\it i.e.} $I_{t_r}(\\mathbf{x},\\mathbf{c})$, $D_{t_r}(\\mathbf{x},\\mathbf{c})$, and $\\mathrm{P}^{\\mathbf{c}}_{t_0}$.\nWe model an energy function as follows:\n\\begin{align}\n\\begin{split}\nE & = \\sum_{\\mathbf{u}}\\sum_{\\mathbf{x}}\\lambda_u\\|(\\Psi\\circ I)(\\mathbf{x},\\mathbf{u})-B(\\mathbf{x},{\\mathbf{u}})\\|_1\\\\\n& + \\lambda_L\\sum_{\\mathbf{x}}\\|\\nabla I_{t_r}(\\mathbf{x},\\mathbf{c})\\|_2+\\lambda_D\\sum_{\\mathbf{x}}\\|\\nabla D_{t_r}(\\mathbf{x},\\mathbf{c})\\|_2.\n\\label{eq:energy}\n\\end{split}\n\\end{align}\nThe data term imposes brightness consistency between the input blurred light field and the restored light field.\nNotice that the L1-norm is employed in our approach as in~\\cite{kim2014segmentation}, where it effectively removes ringing artifacts around object boundaries and provides more robust deblurring results under large depth changes.\nThe last two terms are the total variation~(TV) regularizers~\\cite{beck2009fast} for the latent image and the depth map, respectively.\n\nIn our energy model, $D_{t_r}(\\mathbf{x},\\mathbf{c})$ and $\\mathrm{P}^{\\mathbf{c}}_{t_0}$ are implicitly included in the warping function in (\\ref{eq:blurOp}).\nThe pixel-wise depth $D_{t_r}(\\mathbf{x},\\mathbf{c})$ determines the scale of the motion at each pixel.\nAt the boundary of an 
object where the depth changes abruptly, there is a large difference in blur kernel size between near and far objects.\nIf the optimization is performed without considering this, the blur will not be removed well at object boundaries.\n\nSimultaneously optimizing the three variables is complicated because the warping function in (\\ref{eq:blurOp}) is severely nonlinear.\nTherefore, our strategy is to optimize the three latent variables in an alternating manner.\nWe minimize over one variable while the others are fixed. The optimization of (\\ref{eq:energy}) is carried out in turn for the three variables.\nThe L1 optimization is approximated using iterative reweighted least squares (IRLS)~\\cite{scales1988robust}.\nThe optimization procedure converges in a small number of iterations~$(<10)$.\n\nAn example of the iterative optimization is illustrated in Figure~\\ref{fig3}, which shows the benefit of the iterative joint estimation of the sharp depth map and the latent image.\nThe initial depth map from the blurred light field is blurry as shown in Figure~\\ref{fig3}(c).\nHowever, both the depth maps and the latent images get sharper as the iteration continues as shown in Figure~\\ref{fig3}(d).\n\\begin{figure*}[t]\n\t\\vspace*{0.15in}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=0.265\\textwidth]{figure\/figure3_1.pdf}}\\hspace*{-0.05in}\n\t\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figure\/figure3_2.pdf}}\\hspace*{-0.05in}\n\t\\subfloat[]{\\includegraphics[width=0.265\\textwidth]{figure\/figure3_3.pdf}}\\hspace*{-0.05in}\n\t\\subfloat[]{\\includegraphics[width=0.25\\textwidth]{figure\/figure3_4.pdf}}\n\n\t\\vspace*{-0.05in}\n\\caption{Example of the iterative joint estimation. 
The proposed method converges in a small number of iterations.\n(a)$\\sim$(b) Input blurred image and deblurring results over iterations.\n(c)$\\sim$(d) Initial blurred depth map and depth estimation results over iterations.\n}\n\t\\label{fig:comp2}\n\t\\label{fig3}\t\n\\end{figure*}\n\n\\section{Joint Estimation of Latent Image, Camera Motion, and Depth Map}\n\\label{sec4}\n\n\\subsection{Update of the Latent Image}\n\\label{sec:4.1}\n\nThe proposed algorithm first updates the latent image $I_{t_r}(\\mathbf{x},\\mathbf{c})$.\nIn our data term, the blur operator (\\ref{eq:blurOp}) simplifies to a linear matrix multiplication if $D_{t_r}(\\mathbf{x},\\mathbf{c})$ and $\\mathrm{P}^{\\mathbf{c}}_{t_0}$ remain fixed.\nUpdating the latent image is then equivalent to minimizing (\\ref{eq:energy}) as follows:\n\\begin{align}\n\\min_{I^{\\mathbf{c}}_{t_r}}\\sum_{\\mathbf{u}}\\|K^{\\mathbf{u}}I^{\\mathbf{c}}_{t_r}-B^{\\mathbf{u}} \\|_1 + \\lambda_L\\|\\nabla I^{\\mathbf{c}}_{t_r}\\|_2.\n\\label{eq:imageUpdate}\n\\end{align}\n$I^{\\mathbf{c}}_{t_r}$, $B^{\\mathbf{u}}\\in\\mathbb{R}^n$ are vectorized images and ${K}^{\\mathbf{u}}\\in\\mathbb{R}^{n\\times n}$ is the blur operator in square matrix form, where $n$ is the number of pixels in the center-view sub-aperture image.\nThe TV regularization serves as a prior that favors a latent image with clear boundaries while suppressing ringing artifacts.\n\n\\subsection{Update of the Camera Pose and Depth Map}\n\\label{sec:4.2}\n\nSince (\\ref{eq:blurOp}) is a non-linear function of $D_{t_r}(\\mathbf{x},\\mathbf{c})$ and $\\mathrm{P}^{\\mathbf{c}}_{t_0}$, it is necessary to approximate it in a linear form for efficient computation.\nIn our approach, the blur operation (\\ref{eq:blurOp}) is approximated by a first-order expansion.\nLet $D_{0}(\\mathbf{x},\\mathbf{c})$ and $\\mathrm{P}^{\\mathbf{c}}_{0}$ denote the initial variables; then (\\ref{eq:blurOp}) is approximated as follows:\n\\begin{align}\n\\begin{split}\n&(\\Psi\\circ 
I)(\\mathbf{x},\\mathbf{u})\\\\\n&= B_0(\\mathbf{x},\\mathbf{u})+ \\textstyle \\frac{\\partial B_0}{\\partial \\mathbf{f}}(\\frac{\\partial\\mathbf{f}}{\\partial D_{t_r}(\\mathbf{x},\\mathbf{c})}\\Delta D_{t_r}(\\mathbf{x},\\mathbf{c})+\\frac{\\partial\\mathbf{f}}{\\partial \\varepsilon_{t_0}}\\varepsilon_{t_0}),\n\\end{split}\n\\end{align}\nwhere\n\\begin{align}\nB_0(\\mathbf{x},\\mathbf{u})=(\\Psi\\circ I)(\\mathbf{x},\\mathbf{u})\\vert_{D_{t_r}(\\mathbf{x},\\mathbf{c})=D_{0}(\\mathbf{x},\\mathbf{c}),\\mathrm{P}^{\\mathbf{c}}_{t_0}=\\mathrm{P}^{\\mathbf{c}}_{0}}.\n\\end{align}\nNote that $\\mathbf{f}$ is the motion flow generated by the warping function, and $\\varepsilon_{t_0}$ denotes a six-dimensional vector in $\\mathfrak{se}(3)$.\nThe partial derivatives with respect to $D_{t_r}(\\mathbf{x},\\mathbf{c})$ and $\\varepsilon_{t_0}$ are given in~\\cite{blanco2010tutorial}.\n\nOnce (\\ref{eq:blurOp}) is linearized in $\\Delta D_{t_r}(\\mathbf{x},\\mathbf{c})$ and $\\varepsilon_{t_0}$, (\\ref{eq:energy}) can be optimized using IRLS.\nThe resulting $\\Delta D_{t_r}(\\mathbf{x},\\mathbf{c})$ and $\\varepsilon_{t_0}$ are incremental values for the current $D_{t_r}(\\mathbf{x},\\mathbf{c})$ and $\\mathrm{P}^{\\mathbf{c}}_{t_0}$, respectively.\nThey are updated as follows:\n\\begin{align}\n\\begin{split}\n&D_{t_r}(\\mathbf{x},\\mathbf{c})=D_{t_r}(\\mathbf{x},\\mathbf{c})+\\Delta D_{t_r}(\\mathbf{x},\\mathbf{c}),\\\\\n&\\mathrm{P}^{\\mathbf{c}}_{t_0} = \\exp (\\varepsilon_{t_0})\\mathrm{P}^{\\mathbf{c}}_{t_0},\n\\end{split}\n\\end{align}\nwhere $\\mathrm{P}^{\\mathbf{c}}_{t_0}$ is updated through the exponential map of the motion vector $\\varepsilon_{t_0}$.\n\nFigure~\\ref{fig3} shows the initial latent variables and the final outputs.\nAfter the joint estimation, both the latent image and the depth map become clean and sharp.\n\nThe proposed blur formulation and joint estimation approach are not limited to the light field but can also be applied to images obtained from a stereo camera or a general 
multi-view camera system. The only property of the light field we use is that the sub-aperture images are equivalent to the images obtained from a multi-view camera array.\nNote that the proposed method is not limited to a simple motion path model (moving smoothly in $\\mathfrak{se}(3)$ space).\nMore complex parametric curves, such as the B\\'{e}zier curve used in the prior work~\\cite{srinivasan2017light}, can be directly applied as long as they are differentiable.\\\\\n\n\\begin{figure}[t]\n\t\\vspace*{0.15in}\n\t\\centering\n\t\\subfloat[]{\\includegraphics[width=0.248\\columnwidth]{figure\/figure4_1.pdf}}\\hspace*{-0.01in}\n\t\\subfloat[]{\\includegraphics[width=0.248\\columnwidth]{figure\/figure4_2.pdf}}\\hspace*{-0.01in}\n\t\\subfloat[]{\\includegraphics[width=0.248\\columnwidth]{figure\/figure4_3.pdf}}\\hspace*{-0.01in}\n\t\\subfloat[]{\\includegraphics[width=0.248\\columnwidth]{figure\/figure4_4.pdf}}\\hspace*{-0.01in}\n\t\\vspace*{-0.05in}\n\\caption{\nExample of camera motion initialization on a synthetic light field.\n(a) Blurred input light field.\n(b) Ground truth motion flow.\n(c) Sun~et al.~\\cite{sun2015learning} (EPE = 3.05).\n(d) Proposed initial motion (EPE = 0.95).\nIn (b) and (d), the linear blur kernels are approximated using only the end points of the camera motion for visualization.}\n\t\\label{fig4}\t\n\n\\end{figure}\n\\subsection{Initialization}\n\\label{sec:initialization}\n\nSince deblurring is a highly ill-posed problem and the optimization is performed in a greedy and iterative fashion, it is important to start with good initial values.\nFirst, we initialize the depth map using the input sub-aperture images of the light field.\nWe assume that the camera is not moving and minimize (\\ref{eq:energy}) to obtain the initial $D_{t_r}(\\mathbf{x},\\mathbf{c})$.\nUnder this assumption, minimizing (\\ref{eq:energy}) becomes a simple multi-view stereo matching problem.\nFigure~\\ref{fig:comp2}(c) shows the initial depth map, which exhibits fattened object 
boundary.\n\nThe camera motion $\\mathrm{P}^{\\mathbf{c}}_{t_0}$ is initialized from the local linear blur kernels and the initial scene depth.\nWe first estimate the local linear blur kernels of $B(\\mathbf{x},{\\mathbf{c}})$ using~\\cite{sun2015learning}.\nThen, we fit the pixel coordinates moved by the linear kernels to the coordinates re-projected by the warping function as follows:\n\\begin{align}\n\\min_{\\mathrm{P}^{\\mathbf{c}}_{t_0}}\\sum^N_{i=1}\\|w_{t_0}(\\mathbf{x}_i,\\mathbf{c})-l(\\mathbf{x}_i)\\|^2_2,\n\\label{eq:fitting}\n\\end{align}\nwhere $\\mathbf{x}_i$ is a sampled pixel position and $l(\\mathbf{x}_i)$ is the point to which $\\mathbf{x}_i$ is moved by the end point of the linear kernel.\n$\\mathrm{P}^{\\mathbf{c}}_{t_0}$ is obtained by fitting $\\mathbf{x}_i$ moved by $w_{t_0}(\\cdot,\\mathbf{c})$ to $l(\\cdot)$.\n$\\mathrm{P}^{\\mathbf{c}}_{t_0}$ is the only free variable of $w_{t_0}(\\cdot,\\mathbf{c})$ since the scene depth is fixed to the initial depth map.\nIn our implementation, RANSAC is used to find the camera motion that best describes the pixel-wise linear kernels.\n$N$ is the number of random samples, which is fixed to 4.\n\nFigure~\\ref{fig4} shows an example of camera motion initialization.\nIt is shown that \\cite{sun2015learning} underestimates the size of the motion (upper blue rectangle) and produces noisy motion where the texture is insufficient (lower blue rectangle).\n\n\n\t\n\\section{Experimental Results}\n\nThe proposed algorithm is implemented in Matlab on an Intel i7 7770K @ 4.2GHz with 16GB RAM and is evaluated on both synthetic and real light fields.\nThe synthetic light fields are generated using \\textit{Blender}~\\cite{blender} for qualitative as well as quantitative evaluation.\nThe dataset includes 6~types of camera motion for 3~different scenes, in which each light field has a $7\\times7$ angular structure of 480$\\times$360 sub-aperture images.\nSynthetic blur is simulated by moving the camera array over a sequence of frames $(\\geq40)$ and 
then by averaging the individual frames.\nOn the other hand, real light field data is captured using a Lytro Illum camera, which yields a $7\\times7$ angular structure of 552$\\times$383 sub-aperture images.\nWe generate the sub-aperture images from the light field using the toolbox of~\\cite{bok2014geometric}, which provides the relative camera poses between sub-aperture images.\nThe light fields are blurred by moving the camera quickly under arbitrary motion, while the scene remains static.\nIn our implementation, we fixed most of the parameters except $\\lambda_D$ such that $ \\lambda_u=15, \\lambda_c = 1, \\lambda_L=5$.\n$\\lambda_D$ is set to a larger value for real light fields ($\\lambda_D=400$) than for synthetic data ($\\lambda_D=20$).\n\nFor quantitative evaluation of deblurring, we use both the peak signal to noise ratio~(PSNR) and the structural similarity~(SSIM).\nNote that PSNR and SSIM are measured as the maximum (best) values among the individual PSNR and SSIM scores computed between the deblurred image and the ground truth images (along the motion path), as adopted in~\\cite{kohler2012recording}.\nFor comparison with light field depth estimation methods, we use the relative mean absolute error~(L1-rel) defined as\n\\begin{equation}\n\\text{L1-rel}(D,\\hat{D})=\\frac{1}{n}\\sum_{i}\\frac{|D_{i}-\\hat{D}_{i}|}{\\hat{D}_{i}},\n\\end{equation}\nwhich computes the relative error of the estimated depth $\\hat{D}$ with respect to the ground truth depth $D$.\nThe accuracy of camera motion estimation is measured by the average end point error (EPE) to the end points of the ground truth blur kernels.\nIn our evaluation, we compute the EPE by generating an end point of the blur kernel using the estimated camera motion and the ground truth depth.\nWe compare the performance of the proposed algorithm to that of linear blur kernel methods, for which the EPE is directly computed between the ground truth and their pixel-wise blur 
kernels.\n\n\\begin{figure*}[ht]\n\t\\begin{center}\t\t\n\t\t\\vspace*{0.15in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_ryan_1_Blurred.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_ryan_1_THKim.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_ryan_1_JianSun.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_ryan_1_Ours_nocurve.pdf}\n\t\t\\vspace*{0.01in}\n\t\t\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_cocacola_5_Blurred.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_cocacola_5_THKim.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_cocacola_5_JianSun.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_cocacola_5_Ours_nocurve.pdf}\n\t\t\\vspace*{-0.12in}\n\t\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_mvg_5_Blurred.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_mvg_5_THKim.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_mvg_5_JianSun.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_mvg_5_Ours_nocurve.pdf}}\n\t\t\n\t\\end{center}\t\n\t\\vspace*{-0.23in}\n\t\\caption{Deblurring results for the real light field dataset with comparison to local linear blur kernel deblurring methods. (a) Blurred input image. (b) Result of Kim and Lee~\\cite{kim2014segmentation}. (c) Sun~et al.~\\cite{sun2015learning}. 
(d) Proposed algorithm.}\n\n\t\\label{qual:deblur_real1}\n\\end{figure*}\n\\begin{figure*}[!t]\n\t\\begin{center}\t\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_doll_rot_large_Blurred.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_doll_rot_large_ZheHu.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_doll_rot_large_Srinivasan_nocurve.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_doll_rot_large_Ours_nocurve.pdf}\n\t\t\\vspace*{0.01in}\n\t\t\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_friends_1_Blurred.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_friends_1_ZheHu.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_friends_1_Srinivasan_nocurve.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.24\\textwidth]{figure\/figure567_friends_1_Ours_nocurve.pdf}\n\t\t\\vspace*{-0.12in}\n\t\t\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_playground_1_Blurred.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_playground_1_ZheHu.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_playground_1_Srinivasan_nocurve.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.24\\textwidth]{figure\/figure567_playground_1_Ours_nocurve.pdf}}\n\t\\end{center}\n\t\\vspace*{-0.23in}\n\t\\caption{Deblurring results for the real light field dataset with comparison to global camera motion estimation methods. (a) Blurred input image. (b) Result of Hu~et al.~\\cite{hu2014joint}. (c) Srinivasan et al.~\\cite{srinivasan2017light}. 
(d) Proposed algorithm.}\n\t\\label{qual:deblur_real2}\n\n\\end{figure*}\n\n\\subsection{Light Field Deblurring}\n\n\\textbf{Real Data.}~Figure~\\ref{qual:deblur_real1} and Figure~\\ref{qual:deblur_real2} show the light field deblurring results for blurred real light fields with spatially varying blur kernels.\nIn Figure~\\ref{qual:deblur_real1}, the results are compared with those of the existing motion deblurring methods~\\cite{kim2014segmentation,sun2015learning}, which utilize motion flow estimation.\nIt is shown that the proposed algorithm reconstructs a sharper latent image than the others.\nNote that \\cite{kim2014segmentation,sun2015learning} show satisfactory performance only for small blur kernels.\n\nFigure~\\ref{qual:deblur_real2} shows the comparison with the deblurring methods based on global camera motion models~\\cite{hu2014joint,srinivasan2017light}.\nIn the comparison with~\\cite{srinivasan2017light}, we deblur only the cropped regions shown in the yellow boxes of Figure~\\ref{qual:deblur_real2}(c) due to GPU memory overflow ($>$12GB) at larger spatial resolutions.\n\n\\cite{hu2014joint} assumes that the scene depth is piecewise planar. 
Therefore, it cannot be generalized to arbitrary scenes, yielding unsatisfactory deblurring results.\n\\cite{srinivasan2017light} estimates reasonably accurate camera motion from the blurred light field, while its output is less deblurred.\nNote that \\cite{srinivasan2017light} cannot handle rotational camera motion, which produces blur kernels completely different from those of translational motion.\nOn the other hand, the proposed algorithm fully utilizes the 6-DOF camera motion and the scene depth, yielding superior results for arbitrary scenes.\n\nThe light field deblurring experiments with real data show that the proposed algorithm works robustly even for hand-shake motion that does not match the proposed motion path model.\nThe proposed algorithm shows superior deblurring performance for both natural indoor and outdoor scenes, which confirms its robustness to noise and depth variation.\n\\begin{figure*}[t]\n\t\\begin{center}\t\n\t\t\\vspace*{0.15in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_StaticScene_Blurred.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_StaticScene_ZheHu.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_StaticScene_THKim.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_StaticScene_JianSun.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_StaticScene_Ours.pdf}\\hspace*{0.01in}\n\t\t\\vspace*{0.03in}\t\t\n\\\\\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Fruits_small_rotation_Blurred.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Fruits_small_rotation_ZheHu.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Fruits_small_rotation_THKim.pdf}\\hspace*{0.01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Fruits_small_rotation_JianSun.pdf}\\hspace*{0.
01in}\n\t\t\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Fruits_small_rotation_Ours.pdf}\\hspace*{0.01in}\n\t\t\\vspace*{-0.11in}\t\t\n\\\\\t\t\\subfloat[]{\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Baseball_large_forward_Blurred.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Baseball_large_forward_ZheHu.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Baseball_large_forward_THKim.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Baseball_large_forward_JianSun.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.2\\textwidth]{figure\/figure567_Baseball_large_forward_Ours.pdf}}\\hspace*{0.01in}\n\t\t\\vspace*{-0.13in}\n\\\\\n\t\\end{center}\n\t\\vspace*{-0.05in}\n\t\\caption{Deblurring result for synthetic light field. (a) Blurred input light field. (b) Result of Hu~et al.~\\cite{hu2014joint}. (c) Kim and Lee~\\cite{kim2014segmentation}. (d) Sun~et al.~\\cite{sun2015learning}. 
(e) Proposed algorithm.}\n\t\\label{qual:deblur_synthetic}\n\t\\vspace*{-0.05in}\n\\end{figure*}\n\n\\begin{table*}[t]\n\t\\caption{Quantitative evaluation of deblurring on synthetic light field dataset (in PSNR and SSIM).}\n\t\\centering\n\\renewcommand{\\arraystretch}{1.1}{\n\t\\scalebox{0.8}{\n\t\t\\begin{tabular}{ccccccccccccc}\n\t\t\t\\toprule\n\t\t\t& \\multicolumn{2}{c}{Forward} & \\multicolumn{2}{c}{Rotation} & \\multicolumn{2}{c}{Translation} & \\multicolumn{2}{c}{Forward+Rot.} & \\multicolumn{2}{c}{Forward+Tran.} & \\multicolumn{2}{c}{Rot.+Tran.}\\\\\n\t\t\t\\multicolumn{1}{c}{Methods}& PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\\\\n\t\t\t\\cmidrule(r){1-1}\\cmidrule(lr){2-3}\\cmidrule(lr){4-5}\\cmidrule(lr){6-7}\\cmidrule(lr){8-9}\\cmidrule(lr){10-11}\\cmidrule(l){12-13}\n\t\t\t\\multicolumn{1}{c}{Input}& 20.72 & 0.740 & 20.37 & 0.731 & 21.82 & 0.758 & 19.84 & 0.723 & 20.30 & 0.731 & 19.79 & 0.728 \\\\\n\t\t\t\\multicolumn{1}{c}{Hu~et al.~\\cite{hu2014joint}}& 20.00 & 0.716 & 19.42 & 0.704 & 21.42 & 0.745 & 19.18 & 0.701 & 19.43 & 0.699 & 19.24 & 0.711 \\\\\n\t\t\t\\multicolumn{1}{c}{Kim and Lee~\\cite{kim2014segmentation}}& 20.06 & 0.721 & 19.78 & 0.714 & 21.42 & 0.749 & 19.32 & 0.706 & 19.65 & 0.708 & 19.34 & 0.714 \\\\\n\t\t\t\\multicolumn{1}{c}{Sun~et al.~\\cite{sun2015learning}}& 27.69 & 0.896 & 27.68 & 0.881 & 25.41 & 0.856 & 27.40 & 0.874 & 27.23 & 0.899 & 26.42 & 0.868 \\\\\n\t\t\t\\multicolumn{1}{c}{Proposed Method}& \\bf{29.24} & \\bf{0.915} & \\bf{29.14} & \\bf{0.913} & \\bf{26.99} & \\bf{0.876} & \\bf{28.92} & \\bf{0.905} & \\bf{28.91} & \\bf{0.922} & \\bf{27.85} & \\bf{0.893} \\\\\n\t\t\t\\midrule\n\t\t\t\\multicolumn{1}{c}{Input~(cropped)}& 21.01 & 0.758 & 21.19 & 0.746 & 19.39 & 0.698 & 21.73 & 0.758 & 21.67 & 0.782 & 20.50 & 0.745 \\\\\n\t\t\t\\multicolumn{1}{c}{Srinivasan~et al.~\\cite{srinivasan2017light}}& 17.15 & 0.730 & 19.02 & 0.652 & 16.28 & 0.620 & 19.17 & 0.660 & 16.20 & 0.726 & 16.38 & 
0.626 \\\n\t\t\t\multicolumn{1}{c}{Proposed Method}& \bf{27.15} & \bf{0.871} & \bf{27.32} & \bf{0.870} & \bf{25.30} & \bf{0.836} & \bf{28.83} & \bf{0.904} & \bf{28.01} & \bf{0.901} & \bf{25.88} & \bf{0.867} \\\n\t\t\t\bottomrule\n\t\t\end{tabular}\n\t}\n}\n\t\label{quan:deblur}\n\t\end{table*}\n\noindent\n\n\textbf{Synthetic Data.}~The performance of the proposed algorithm is evaluated on the synthetic light field dataset, as shown in Figure~\ref{qual:deblur_synthetic} and Table~\ref{quan:deblur}.\nThe synthetic data consists of forward, rotation, and in-plane translation motions and their combinations.\nIn Figure~\ref{qual:deblur_synthetic}, we visualize and compare the deblurring performance with existing motion flow methods~\cite{kim2014segmentation,sun2015learning} and a camera motion method~\cite{hu2014joint}.\nIn all examples, the proposed algorithm produces sharper deblurred images than the others, as shown clearly in the cropped boxes.\n\nTable~\ref{quan:deblur} shows the quantitative comparison of deblurring performance by measuring PSNR and SSIM against the ground truth.\nIt shows that the proposed joint estimation algorithm significantly outperforms the others.\nSun~et al.~\cite{sun2015learning}, whose CNN is trained with an MSE loss, achieves performance comparable to the proposed algorithm.\nThe other algorithms achieve only minor improvement over the input image because their assumed blur models are simple and inconsistent with the ground truth blur.\n\nFor the comparison with \cite{srinivasan2017light}, we crop each light field to $200\times200$ to avoid GPU memory overflow. 
Note that we use the original setting of \cite{srinivasan2017light}.\nThe method of \cite{srinivasan2017light} shows lower performance than the input blurred light field due to the spatial viewpoint shift visible in its output.\nSince the origin lies at the end point of the camera motion path in \cite{srinivasan2017light}, the viewpoint shift occurs when the estimated 3D motion is large.\nWe observe that this is an additional cause of decreased PSNR and SSIM when the estimated 3D motion differs from the ground truth.\nThe proposed algorithm estimates the latent image with negligible viewpoint shift because the origin is located in the middle of the camera motion path.\n\n\subsection{Light Field Depth Estimation}\n\n\n\nTo show the performance of light field depth estimation, we compare the proposed method with several state-of-the-art methods~\cite{chen2014light,jeon2015accurate,tao2015depth,wang2015occlusion,williem2016robust}.\nFor comparison, all blurred sub-aperture images are independently deblurred using~\cite{sun2015learning} before running each depth estimation algorithm.\n\nFigure~\ref{qual:depth} shows the visual comparison of the depth maps estimated by different methods, which confirms that the proposed algorithm produces significantly better depth maps in terms of accuracy and completeness.\nSince independent deblurring of all sub-aperture images does not consider the correlation between them, conventional correspondence and defocus cues do not produce reliable matching, yielding noisy depth maps.\nOnly the proposed joint estimation algorithm yields sharp, non-fattened object boundaries and produces the closest result to the ground truth.\n\nQuantitative performance comparison of depth map estimation is shown in Table~\ref{quan:depth}.\nFor three synthetic scenes with three different motions for each scene, the average L1-rel error of the estimated depth map is computed and compared.\nThe comparison clearly 
shows that the proposed method produces the lowest error in all types of camera motion.\nNote that the second best result is achieved by Chen et al.~\\cite{chen2014light}, which is relatively robust in the presence of motion blur because bilateral edge preserving filtering is employed for cost computation.\n\n\\begin{figure*}[t]\n\t\\begin{center}\n\t\t\\vspace*{0.15in}\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_1.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_2.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_3.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_4.pdf}}\\hspace*{0.01in}\\\\\n\t\t\\vspace*{-0.125in}\t\t\t\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_5.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_6.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_7.pdf}}\\hspace*{0.01in}\n\t\t\\subfloat[]{\\includegraphics[width=0.245\\textwidth]{figure\/figure8_8.pdf}}\\hspace*{0.01in}\n\t\\end{center}\t\n\t\\vspace*{-0.2in}\n\t\\caption{Depth estimation results on blurred light field. (a) Blurred center sub-aperture image. (b) Ground truth depth. (c) Result of Jeon et al.~\\cite{jeon2015accurate}. (d) Williem and Park~\\cite{williem2016robust}. (e) Tao et al.~\\cite{tao2015depth}. (f) Wang et al.~\\cite{wang2015occlusion}. (g) Chen et al.~\\cite{chen2014light}. (h) Proposed algorithm.}\n\t\\label{qual:depth}\n\\end{figure*}\n\n\\begin{table}[!t]\n\t\\caption{Comparison of depth estimation (in average L1-rel error).}\n\t\\centering\n\\renewcommand{\\arraystretch}{1.1}{\n\t\\scalebox{1.1}{\n\t\t\\begin{tabular}{ccccc}\n\t\t\t\\toprule\n\t\t\tMethods & Forward & Rotation & Trans. 
& Overall\\\n\t\t\t\cmidrule(r){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(l){5-5}\n\t\t\tChen et al.~\cite{chen2014light} & 0.0251 & 0.0326 & 0.0331 & 0.0303 \\\n\t\t\tTao et al.~\cite{tao2015depth} & 0.0251 & 0.0359 & 0.0371 & 0.0327 \\\n\t\t\tWang et al.~\cite{wang2015occlusion} & 0.0312 & 0.0377 & 0.0400 & 0.0363 \\\n\t\t\tJeon et al.~\cite{jeon2015accurate} & 0.0835 & 0.0916 & 0.0921 & 0.0891 \\\n\t\t\tWilliem and Park~\cite{williem2016robust} & 0.0615 & 0.0895 & 0.0966 & 0.0825 \\\n\t\t\tProposed Method & \bf{0.0198} & \bf{0.0150} & \bf{0.0243} & \bf{0.0197} \\\n\t\t\t\bottomrule\n\t\t\end{tabular}\n\t}\n}\n\t\vspace*{-0.2in}\n\t\label{quan:depth}\n\end{table}\n\nThe depth estimation experiment demonstrates that consistent deblurring between sub-aperture images is essential.\nThe cues used in conventional depth estimation are seriously hampered in independently deblurred images.\nHowever, the proposed joint estimation algorithm achieves robust and accurate depth estimation, providing the justification for solving deblurring and depth estimation in a joint manner.\n\n\subsection{Camera Motion Estimation}\nTable~\ref{quan:motion} shows the end-point error (EPE) of the estimated motion on the synthetic light field dataset.\nCompared with the other methods~\cite{kim2014segmentation,sun2015learning}, the proposed method improves the accuracy of the estimated motion significantly.\nIn particular, a large gain is obtained for rotational motion, which indicates that rotational motion cannot be modeled accurately by the linear blur kernels used in~\cite{kim2014segmentation,sun2015learning}.\n\nFigure~\ref{fig:motion} shows the motion estimation results compared to the ground truth motion.\nSince the camera orientation changes while the camera is moving, the 6-DOF camera motion cannot be recovered properly by~\cite{srinivasan2017light}.\nAs shown in Figure~\ref{fig:motion}(b) and Figure~\ref{fig:motion}(c), the 
deblurring results are similar to the input, because the motion cannot converge to the ground truth.\nIn contrast, the proposed algorithm converges to the ground truth 6-DOF motion and also produces a sharp deblurring result.\n\begin{figure}[t]\n\begin{center}\n\t\subfloat[]{\includegraphics[width=0.24\columnwidth]{figure\/figure9_input.pdf}}\hspace*{0.05in}\n\t\subfloat[]{\includegraphics[width=0.24\columnwidth]{figure\/figure9_quad.pdf}}\hspace*{0.03in}\t\t\t\n\t\subfloat[]{\includegraphics[width=0.24\columnwidth]{figure\/figure9_cubic.pdf}}\hspace*{0.03in}\n\t\subfloat[]{\includegraphics[width=0.24\columnwidth]{figure\/figure9_ours.pdf}}\hspace*{0.03in}\n\t\n\t\vspace*{-0.05in}\n\end{center}\n\caption{Deblurring and camera motion estimation result for synthetic light field with comparison to \cite{srinivasan2017light}.\n\t(a) Input light field and ground truth camera motion.\n\t(b) Result of Srinivasan et al.~\cite{srinivasan2017light} (quadratic).\n\t(c) Srinivasan et al.~\cite{srinivasan2017light} (cubic).\n\t(d) Proposed algorithm.\n}\n\vspace*{-0.1in}\n\label{fig:motion}\n\end{figure}\n\begin{table}[t]\n\caption{Comparison of motion estimation (in EPE).}\n\centering\n\renewcommand{\arraystretch}{1.1}{\n\n\t\scalebox{1.1}{\n\t\t\begin{tabular}{cccc}\n\t\t\t\toprule\n\t\t\tMethods & Forward & Rotation & Translation \\\n\t\t\t\cmidrule(r){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(l){4-4}\n\t\t\tKim and Lee~\cite{kim2014segmentation} & 2.153 & 3.317 & 1.989 \\\n\t\t\tSun et al.~\cite{sun2015learning} & 1.492 & 2.557 & 1.810 \\\n\t\t\tProposed Method & \bf{0.325} & \bf{0.171} & \bf{0.590}\\\n\t\t\t\bottomrule\n\t\end{tabular}}}\n\t\label{quan:motion}\n\t\vspace*{-0.05in}\n\end{table}\n\n\section{Conclusion}\nIn this paper, we presented a novel light field deblurring algorithm that jointly estimates the latent image, a sharp depth map, and the camera motion.\nFirst, we modeled all the blurred 
sub-aperture images from the center-view latent image using a 3D warping function.\nThen, we developed an algorithm to initialize the 6-DOF camera motion from the local linear blur kernels and the scene depth.\nIn the iterative joint optimization, the nonlinear energy minimization was solved efficiently using iteratively reweighted least squares (IRLS).\nThe evaluation on both synthetic and real light field data showed that the proposed model and algorithm work well under general camera motion and scene depth variation.\n\n\bibliographystyle{splncs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\section{Introduction}\n\label{sec:intro}\nThe Standard Model of particle physics has proven successful in describing the elementary particles and their interactions \cite{pdg}. However, it still falls short in explaining many of the basic features of the nucleon, features that to this day remain the objects of intensive research: spin \cite{pspin,pspin2}, size \cite{pradius1,pradius2,pradius3}, intrinsic structure \cite{pstructure,pstructure2} and abundance, \textit{i.e.} the excess of nucleons compared to antinucleons in the universe \cite{pasym}.\n\nOne of the nucleons, the proton, is the most stable composite system we know. In order to study its properties, we therefore need to distort or break it by scattering something on it, for example an electron, or by adding some energy and thereby exciting it. A third option is to replace one or several of the building blocks \cite{granados}. The latter is the main concept of \textit{hyperon physics}: one or several of the light $u$ or $d$ quarks in the nucleon are replaced by strange ones.\footnote{In principle, one can also replace them with charm or bottom quarks, but the scope of this paper is strange hyperons.} The mass of the strange quark is $\approx$ 95 MeV, which is $\geq$ 20 times larger than the light $d$ and $u$ quark masses. 
The strange quark in a hyperon is therefore expected to behave somewhat differently from the light quarks; for example, it will be less relativistic. Furthermore, a larger fraction of the mass of a hyperon comes from the quarks than is the case for the nucleon. However, the mass of the strange quark is much smaller than the mass of the hyperon itself, in contrast to the more than ten times heavier charm quark. Hence, strange hyperons are sufficiently similar to nucleons for comparisons to be valid, for example assuming approximate SU(3) flavour symmetry.\n\nBy being unstable, hyperons reveal more of their features than protons. In particular, the weak, parity violating and thereby \textit{self-analysing} decays of many ground-state hyperons make their spin properties experimentally accessible. This makes hyperons a powerful diagnostic tool that can shed light on various physics problems, \textit{e.g.} non-perturbative production dynamics, internal structure and fundamental symmetries.\n\nIn this paper, we outline the assets of hyperon physics to be exploited by the future PANDA (antiProton ANnihilation at DArmstadt) experiment with an antiproton beam at FAIR (Facility for Antiproton and Ion Research) in Darmstadt, Germany. We describe in detail a comprehensive simulation study that demonstrates the feasibility of the planned hyperon physics programme, and discuss the impact and long-term perspectives.\n\n\section{The PANDA experiment}\n\label{sec:panda}\n\n\begin{figure*}[t]\n\centering\n\begin{tikzpicture}\n\node[anchor=south west, inner sep=0] (image) {\includegraphics[width=\textwidth]{PANDA_full-set-up.png}};\n\t\begin{scope}[x={(image.south east)},y={(image.north west)}]\n \n \begin{scope}\n \n \node[align=center, red] at (0.25,0.95) {TS};\n \node[align=center, red] at (0.65,0.7) {FS};\n \n \n \n \n \n \end{scope}\n\t\end{scope}\n\end{tikzpicture}\n\caption{(Colour online) Overview of the full PANDA setup. 
The antiproton beam will go from left to right, whereas the target jets\/pellets will travel from top to bottom. The left part of the detector surrounds the interaction point and is the Target Spectrometer (TS), whereas the right part is the Forward Spectrometer (FS).}\n\label{fig:panda}\n\end{figure*} \n\nThe PANDA experiment, which is currently under construction at FAIR \cite{fair} in Darmstadt, Germany, offers a broad programme for studies of the strong interaction and fundamental symmetries \cite{panda}. The unique combination of an antiproton beam in the intermediate energy range and a nearly 4$\pi$ detector with vertex- and tracking devices, multiple particle identification (PID) detectors and calorimeters, gives excellent conditions for a new generation of hyperon physics experiments.\n\nThe High Energy Storage Ring (HESR) will deliver an antiproton beam with momenta ranging from 1.5 GeV\/$c$ up to 15 GeV\/$c$ \cite{hesr}. In the start-up phases, referred to as \textit{Phase One} and \textit{Phase Two}, the HESR will be able to accumulate up to $10^{10}$ antiprotons in 1000~s. In the final \textit{Phase Three}, the luminosity will be ramped up by the Recuperated Experimental Storage Ring (RESR), allowing up to $10^{11}$ antiprotons to be injected and stored in the HESR. The HESR will offer stochastic cooling resulting in a relative beam momentum spread of better than $5\cdot 10^{-5}$. The antiproton beam will impinge on a hydrogen cluster jet or pellet target, which during Phase One will result in an average luminosity of $\approx 10^{31}$ cm$^{-2}$s$^{-1}$~\cite{target}. At low energies, the luminosity will be about a factor of two lower. During Phase Three, the design luminosity of $\approx 2\cdot10^{32}$ cm$^{-2}$s$^{-1}$ will be achieved. \n\nThe PANDA detector, shown in Fig. \ref{fig:panda} and described in detail in Ref.~\cite{pandadet}, is divided into two parts: the target spectrometer (TS) and the forward spectrometer (FS). 
The TS covers polar angles of $> 10^{\mathrm{o}}$ in the horizontal direction and $> 5^{\mathrm{o}}$ in the vertical direction, whereas the FS covers polar angles $<10^{\mathrm{o}}$. The TS provides timing and vertexing by the silicon micro vertex detector (MVD). The MVD is also used for tracking together with the gas-filled straw tube trackers (STT). The polar angle range of the latter is $22^{\mathrm{o}} < \theta < 140^{\mathrm{o}}$. In order to bridge the acceptance between the STT and the FS, the gas electron multiplier detectors (GEM) are designed to track particles emitted below 22$^\circ$. Time-of-flight detectors (TOF), made of scintillating tiles, offer excellent time resolution. By providing the reaction time $t_0$, they improve the resolution of the track parameters and increase the particle identification capabilities. Detection of internally reflected Cherenkov light (DIRC) offers independent PID, and an electromagnetic calorimeter (EMC) with lead-tungstate (PbWO$_4$) crystals will measure energies between 10 MeV and 7 GeV. The laminated yoke of the solenoid magnet, outside the barrel EMC, is interleaved with sensitive layers to act as a range system for the detection and identification of muons. Measurements of charge and momentum are possible thanks to the bending of particle trajectories by a solenoid magnet providing a field of up to 2.0 Tesla. \n\nThe FS will consist of six straw tube stations for tracking, a dipole magnet, a ring imaging Cherenkov counter (RICH) detector for PID, as well as a TOF for timing and PID. The energies of forward-going, electromagnetically interacting particles will be measured by a Shashlyk electromagnetic calorimeter. A muon range system, using sensors interleaved with absorber layers, is placed at the end of the FS.\n\nThe luminosity will be determined by using elastic antiproton-proton scattering as the reference channel. 
The differential cross section of this process can be calculated with extremely high precision at small angles, where the Coulomb component dominates~\cite{luminosity}. At polar angles within 3-8~mrad, the scattered antiproton will be measured by a luminosity detector consisting of four layers of thin monolithic active pixel sensors made of silicon~\cite{luminosity}. \n\nPANDA will feature, as one of the first experiments, a time-based data acquisition system (DAQ) without hardware triggers. Data will instead be read out as a continuous stream using an entirely software-based selection scheme. This change of paradigm is driven by the large foreseen reaction rates, resulting in huge amounts of data to be stored.\n\n\nThe feasibility studies presented in this work are performed within the common simulation and analysis framework PandaROOT \cite{PANDAROOT}. It comprises the complete simulation chain, including Monte Carlo event generation, particle propagation and detector response, hardware digitization, reconstruction and calibration, and data analysis. PandaROOT is derived from the FairROOT framework \cite{FAIRROOT}, which in turn is based on ROOT \cite{ROOT}. \n\n\section{Hyperon production with antiproton probes}\n\label{sec:hyperonphys}\nThe focus of this paper is $\Lambda$ and $\Xi^-$ hyperon production in the $\bar{p}p \to \bar{Y}Y$ reaction, where $Y$ refers to the octet hyperons $\Lambda$ and $\Xi^-$. Understanding the production and decay of these hyperons is crucial in order to correctly interpret experimental analyses of heavier hyperons. The study of excited multi-strange hyperons constitutes an important part of the PANDA physics programme and is described in more detail in Ref. \cite{jennypthesis}. However, octet hyperons are interesting in their own right. 
The $\Lambda$ and $\Xi^-$ hyperons considered in this work predominantly decay into charged final state particles, which makes them straightforward to measure experimentally.\nIn the following, we will discuss how the self-analysing decays can shed light on various aspects of fundamental physics and the advantages of antiproton probes in hyperon studies.\n\n\subsection{Weak two-body decays}\n\label{sec:features}\nAll ground-state hyperons except the $\Sigma^0$ decay weakly through a process that has a parity violating component. This means that the direction of the decay products depends on the spin direction of the mother hyperon. In Fig. \ref{fig:hypdecay}, the two-body decay of a spin 1\/2 hyperon $Y$ into a spin 1\/2 baryon $B$ and a pseudoscalar meson $M$ is illustrated. The angular distribution of $B$ in the rest system of $Y$ is given by \cite{pdg,bigibook}\n\begin{equation}\nW(\cos\theta^B)=\frac{1}{4\pi}(1+\alpha P_y^{Y}(\cos\theta_{Y}) \cos\theta^B),\n\label{eq:decay}\n\end{equation}\n\n\noindent where $P^Y_y(\cos\theta_{Y})$ is the polarisation with respect to some reference axis $\hat y$. $P^Y_y$ carries information about the production process and therefore depends on the collision energy and the scattering angle. The decay asymmetry parameter $\alpha$ is the real part of the product between the parity violating and the parity conserving decay amplitudes, $T_s$ and $T_p$ \cite{leeyang}. \nEq. (\ref{eq:decay}) demonstrates how the experimentally measurable decay angular distribution is related to quantities with physical meaning, \textit{i.e.} $P^Y_y$ and $\alpha$. This feature makes hyperons a powerful diagnostic tool. 
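To make Eq. (\ref{eq:decay}) concrete: after integrating over the azimuthal angle, the density of $\cos\theta^B$ is $(1+\alpha P\cos\theta^B)/2$, so its first moment is $\langle\cos\theta^B\rangle = \alpha P/3$ and the product $\alpha P$ can be estimated as $3\langle\cos\theta^B\rangle$. The following toy sketch (plain Python; the function names and parameter values are ours, purely for illustration, and not part of any PANDA analysis code) generates decays and recovers $\alpha P$ with this moment estimator.

```python
import random

def sample_cos_theta(alpha_p, n, rng):
    """Draw cos(theta^B) from W(x) = (1 + alpha_p * x)/2 on [-1, 1]
    by rejection sampling; alpha_p = alpha * P_y^Y, with |alpha_p| <= 1."""
    env = (1.0 + abs(alpha_p)) / 2.0  # envelope: maximum of the density
    out = []
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, env) <= (1.0 + alpha_p * x) / 2.0:
            out.append(x)
    return out

def estimate_alpha_p(cos_thetas):
    """Moment estimator: <cos(theta^B)> = alpha * P / 3."""
    return 3.0 * sum(cos_thetas) / len(cos_thetas)

rng = random.Random(2024)
true_alpha_p = 0.75 * 0.5  # e.g. alpha = 0.75 and P_y^Y = 0.5 (toy values)
cosines = sample_cos_theta(true_alpha_p, 100_000, rng)
print(round(estimate_alpha_p(cosines), 2))  # statistically close to 0.375
```

In a real analysis the polarisation $P^Y_y$ depends on the scattering angle, so such an estimate would be performed in bins of $\cos\theta_Y$; the moment method above is only the simplest possible estimator.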
\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=0.35\textwidth]{AngDist3BW.png}\n\end{center}\n\caption{The $Y \to BM$ decay, with the spin direction of $Y$ along the $y$-axis.}\n\label{fig:hypdecay}\n\end{figure} \n\subsection{Scientific case}\n\label{sec:scientific}\n\nAntihyperon-hyperon pair production in antiproton-proton annihilation, $\bar{p}p \to \bar{Y}Y$, provides excellent conditions for hyperon studies, since\n\begin{itemize}\n \item[$\bullet$] Antihyperons and hyperons, also with double- and triple-strangeness, can be produced in two-body processes at low energies, where the number of partial waves is small. This makes the production process parameterizable in a close to model-independent way.\n \item[$\bullet$] Antihyperons and hyperons can be studied simultaneously, under symmetric conditions.\n \item[$\bullet$] The production cross sections for single- and double-strange hyperons are known to be large \cite{tord}, which results in large count rates also for modest luminosities.\n \end{itemize}\n\nThe scale of strangeness production is governed by the mass of the strange quark, $m_s \approx 95$ MeV\/$c^2$. This is far below the scale where perturbative QCD breaks down ($\approx$ 1 GeV) but close to the QCD cut-off scale ($\Lambda_{QCD}$). Therefore, the relevant degrees of freedom in processes involving strange quarks are unclear: quarks and gluons, or hadrons? \n\nSingle-strange hyperon production in $\bar{p}p \to \bar{Y}Y$ has been modeled using quark-gluon degrees of freedom \cite{quarkgluon}, meson exchange \cite{kaonexchange} and a combination of the two \cite{quarkgluonhadron}. The production of double-strange hyperons requires interactions at shorter distances, since it either implies annihilation of two quark-antiquark pairs \cite{XiQG,kaidalov}, or exchange of two kaons \cite{XiMEX}. 
Spin observables, accessible for hyperons through their self-analysing decays, are particularly powerful in differentiating between models, since they are sensitive to the production mechanism. Spin observables can also give information about possible polarized strangeness content in nucleons \cite{alberg} and final state interactions \cite{haidenbauerFSI}. It is important to have a solid understanding of the latter when interpreting results from antihyperon-hyperon pair production with other probes. One example is the $e^+e^- \to Y\bar{Y}$ process, from which the time-like electromagnetic form factors are determined. In Ref. \cite{haidenbauerFSI}, the complex $\Lambda$ form factors are predicted based on potential models \cite{haidenbauerPot} fitted to PS185 data \cite{PS185} on spin observables in the $\bar{p}p \to \bar{\Lambda}\Lambda$ reaction. It was found that the form factors are sensitive to the $\bar{\Lambda}\Lambda$ final state interaction and that spin observables are necessary in order to discriminate between them \cite{haidenbauerFSI}. This has been done in a recent measurement of $\Lambda$ form factors by the BESIII collaboration \cite{bes3prl}.\n\nHyperon decays can provide one piece of the puzzle of nucleon abundance, more commonly referred to as the \textit{matter-antimatter asymmetry} puzzle. According to the present paradigm, equal amounts of matter and antimatter should have been produced in the Big Bang. Unless the initial matter-antimatter imbalance was fine-tuned, a dynamical enrichment of matter with respect to antimatter must have occurred, \textit{i.e.} Baryogenesis. However, this is only possible if i) processes exist that violate baryon number conservation, ii) processes exist that violate C and CP symmetry, and iii) the aforementioned processes occurred outside thermal equilibrium \cite{pasym}. With hyperons, criterion ii) can be tested. 
CP symmetry means that hyperons and antihyperons have the same decay patterns, but with reversed spatial coordinates. For two-body hyperon decays, it means that the decay asymmetry parameters, \textit{e.g.} $\alpha$ in Eq. (\ref{eq:decay}), have exactly the same value but with opposite sign compared to the corresponding antihyperon parameter, \textit{i.e.} $\alpha = -\bar{\alpha}$. The large production rates and the symmetric hyperon and antihyperon conditions make $\bar{p}p \to \bar{Y}Y$ a suitable reaction for searching for CP violation. Hyperon-antihyperon studies have been carried out recently with BESIII in $e^+e^- \to Y\bar{Y}$, a reaction that is similar to $\bar{p}p \to \bar{Y}Y$ in the sense that it is a two-body reaction that is symmetric in particle-antiparticle observables. These studies show that the precision can be improved greatly, by several orders of magnitude, if the production process can be pinned down \cite{goran,goranand,bes3nature}. Hence, a proper understanding of the $\bar{p}p \to \bar{Y}Y$ reaction mechanism constitutes a crucial milestone in future large-scale CP studies with PANDA at FAIR.\n\n\subsection{State of the art}\n\label{sec:stateoftheart}\n\n\subsubsection{Hyperon production in $\bar{p}p$ annihilations}\n\nThe large amount of high-quality data on single-strange hyperons \cite{tord,PS185,PS185164} produced in antiproton-proton annihilation, partly with a polarised target, led to important insights. For instance, it was found that the $\bar{\Lambda}\Lambda$ pair is produced almost exclusively in a spin triplet state. From this, conclusions about the $\Lambda$ quark structure can be drawn: the spin of the $\Lambda$ is carried by the strange quark, while the light $u$ and $d$ quarks form a spin-0 \textit{di-quark}. 
Theoretical investigations based on the aforementioned quark-gluon approach \cite{quarkgluon}, kaon exchange \cite{kaonexchange} and a combined approach \cite{quarkgluonhadron} reproduced this finding. However, no model so far describes the complete spin structure of the reaction. The models' extensions into the double-strange sector \cite{XiQG,XiMEX} have not been tested due to the lack of data -- only a few bubble-chamber events exist for $\Xi^-$ and $\Xi^0$ from $\bar{p}p$ annihilations \cite{Musgrave1965}. In Ref. \cite{XiMEX}, $\bar{\Xi}^+$ emitted in the forward direction in the center of mass frame are predicted to be in a triplet state, while backward-going $\bar{\Xi}^+$ are in a singlet state, in contrast to the $\bar{\Lambda}\Lambda$ case, which is in a spin-triplet state irrespective of the angle \cite{PS185}. With future data from PANDA, this prediction can be tested. The hope is also that new spin structure data on $\bar{p}p \rightarrow \bar{Y}Y$ reactions will trigger the activity of the theory community and lead to a deeper understanding of strange reaction dynamics. \n\n\subsubsection{CP symmetry in hyperon decays}\n\nThe existence of CP violation for spinless mesons is experimentally well-established in the strange and bottom sector \cite{pdg} and recently also in the charm sector \cite{LHCbcharm}. It is also incorporated in the Standard Model through the Cabibbo-Kobayashi-Maskawa mechanism \cite{cabibbo,kobayashi}. However, the Standard Model deviation from CP symmetry would result in a matter-antimatter asymmetry eight orders of magnitude smaller than the observed one \cite{werner}. Hence, this problem is intimately connected to the search for physics beyond the Standard Model. The spin-carrying baryons could give new insights into CP violation, since spin behaves differently from momentum under a parity flip. 
However, the only indication of CP violation in a baryon decay, observed very recently by the LHCb collaboration \cite{LHCb}, was not confirmed in a later study with larger precision by the same experiment \cite{LHCb2}. Two-body decays of strange hyperons provide a cleaner search ground, but require large data samples. The most precise CP test in the strange sector so far is provided by the HyperCP collaboration. The proton angular distributions from the $\Xi^- \to \Lambda \pi^-, \Lambda \to p\pi^-$ chain were studied, along with the corresponding antiproton distributions from the $\bar{\Xi}^+$ decay chain. The result was found to be consistent with CP symmetry with a precision of $10^{-4}$ \cite{hyperCP}. The most precise test for the $\Lambda$ hyperon was obtained recently by the BESIII collaboration \cite{bes3nature}. They analysed $\bar{\Lambda}\Lambda$ pair production from $J\/\Psi$ decays using a multi-dimensional method. The good precision for a relatively modest sample size ($\approx$ 420 000 $\bar{\Lambda}\Lambda$ events) demonstrates the merits of exclusive measurements of polarised and entangled hyperon-antihyperon pairs produced in two-body reactions. The most remarkable finding, however, was that the decay asymmetry parameter $\alpha_{\Lambda}$ was found to be $0.750 \pm 0.009 \pm 0.004$, \textit{i.e.} 17\% larger than the PDG world average of $0.642$ at the time \cite{pdg}. This average was calculated from measurements made in the 1960s and 1970s, based on the proton polarimeter technique \cite{cronin}. In the 2019 update of the PDG, the old measurements were discarded and instead the BESIII value was established as the recommended one. In a re-analysis of CLAS data, $\alpha_{\Lambda}$ was calculated to be $0.721 \pm 0.006 \pm 0.005$. This is between the old average and the new BESIII value, though much closer to the latter \cite{claslambda}. 
More high-precision measurements from independent experiments will be valuable not only to establish the correct decay asymmetry, but also to understand the difference between old and new measurements.\n\n \n\\section{Formalism}\n\\label{sec:formalism}\n\n\\begin{figure}[htbp]\n\\begin{center}\n\n\\includegraphics[width=0.95\\linewidth]{Hyperons_coordpbarpBW.png}\n\n\\caption{The reference system of the $\\bar{p}p \\to \\bar{Y}Y$ reaction.}\n\\label{fig:refsys}\n\\end{center}\n\\vspace{-11pt}\n\\end{figure}\n\\noindent Consider an antiproton beam impinging on a hydrogen target, producing a $\\bar{Y}Y$ pair. Then the rest systems of the outgoing hyperons can be defined as in Fig. \\ref{fig:refsys}: the $\\hat{y}_Y$ and $\\hat{y}_{\\bar{Y}}$ axes as the normal of the production plane, spanned by the incoming antiproton beam and the outgoing antihyperon in the centre of mass system of the reaction. The $\\hat{z}_Y$ ($\\hat{z}_{\\bar{Y}}$) axis is defined along the direction of the outgoing hyperon (antihyperon) and the $\\hat{x}_Y$ ($\\hat{x}_{\\bar{Y}}$) axis is obtained by the cross product of the $y$ and $z$ directions: \n\\begin{equation}\n\t\\hat{z}_Y = \\frac{\\vec{p}_{Y}}{|\\vec{p}_{Y}|}, \\hat{y}_Y = \\frac{\\vec{p}_{beam} \\times \\vec{p}_{Y}}{|\\vec{p}_{beam} \\times \\vec{p}_{Y}|}, \\hat{x}_Y = \\hat{y}_Y \\times \\hat{z}_Y,\n\\end{equation}\nwhere $\\vec{p}_Y$ is the momentum vector of the outgoing hyperon and $\\vec{p}_{beam}$ is the momentum of the initial beam.\n\nInterference between complex production amplitudes has a polarising effect on the outgoing hyperon and antihyperon, even if the initial state is unpolarised. In PANDA, the beam and target will be unpolarised. Since the $\\bar{p}p \\to \\bar{Y}Y$ reaction is a strong, parity-conserving process, the polarisation of the outgoing hyperon and antihyperon can only be non-zero in the direction along the normal of the production plane, \\textit{i.e.} $\\hat{y}_Y$ ($\\hat{y}_{\\bar{Y}}$) in Fig. \\ref{fig:refsys}. 
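The axis construction in the equation above can be sketched numerically. In the following snippet the momenta are made-up illustrative values, not measured data:

```python
import numpy as np

def hyperon_axes(p_beam, p_Y):
    """Right-handed (x, y, z) axes of the hyperon rest system, following
    the definitions in the text: z along the hyperon momentum, y along
    the normal of the production plane, x = y x z."""
    z = p_Y / np.linalg.norm(p_Y)
    n = np.cross(p_beam, p_Y)          # normal of the production plane
    y = n / np.linalg.norm(n)
    x = np.cross(y, z)                 # completes the right-handed system
    return x, y, z

# Illustrative (made-up) CMS momenta in GeV/c:
x, y, z = hyperon_axes(np.array([0.0, 0.0, 1.64]),
                       np.array([0.3, 0.1, 0.9]))
```

Note that the construction is undefined when $\vec{p}_Y$ is (anti)parallel to the beam, since the cross product vanishes; this is the same degeneracy that forces the polarisation to vanish at the extreme scattering angles discussed later in the text.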
In the case of a spin 1\/2 hyperon $Y$ (antihyperon $\\bar{Y}$) decaying into a spin 1\/2 baryon $B$ (antibaryon $\\bar{B}$) and a meson $M$ (antimeson $\\bar{M}$), the angular distribution of the decay baryon and antibaryon can be parameterised as:\n\\begin{align}\n\\label{eq:ppbar}\n &I(\\theta_Y,\\theta^B,\\theta^{\\bar{B}}) = N[1 + \\alpha\\sum_i P^Y_{i}(\\theta_Y)\\cos\\theta_{i}^{B} \\nonumber\\\\ &+ \\bar{\\alpha}\\sum_j P^{\\bar{Y}}_{j}(\\theta_Y)\\cos\\theta_{j}^{\\bar{B}} + \\alpha\\bar{\\alpha} \\sum_{ij} C^{Y\\bar{Y}}_{ij}(\\theta_Y)\\cos\\theta_{i}^{B}\\cos\\theta_{j}^{\\bar{B}}]\n \n \\end{align}\nwhere $i,j = x,y,z$ and the opening angle $\\cos\\theta_{i}^B$ $(\\cos\\theta_{j}^{\\bar{B}})$ is taken between the direction of the final state baryon $B$ (antibaryon $\\bar{B}$) and the axis $i$ $(j)$ in the rest system of the hyperon (antihyperon). The $P^Y_i(\\theta_Y)$ denote the vector polarisation and the $C^{\\bar{Y}Y}_{ij}(\\theta_Y)$ the spin correlation of the antihyperon and hyperon with respect to the axes $i,j = x,y,z$. The $\\theta_Y$ angle is defined in the reaction CMS system. With the unpolarised beam and target foreseen with PANDA and the reference system defined in Fig. \\ref{fig:refsys}, most spin variables must be zero due to parity conservation. The only non-zero spin variables are $P^Y_y$, $P^{\\bar{Y}}_y$, $C^{Y\\bar{Y}}_{xz}$, $C^{Y\\bar{Y}}_{zx}$, $C^{Y\\bar{Y}}_{xx}$, $C^{Y\\bar{Y}}_{yy}$ and $C^{Y\\bar{Y}}_{zz}$ \\cite{erikthesis,paschke}. Of these, only five are independent since $P^Y_y = P^{\\bar{Y}}_y$ and $C^{Y\\bar{Y}}_{xz} = C^{Y\\bar{Y}}_{zx}$.\n\nThe angular distribution can also be expressed with a matrix formulation. 
Then, one first defines the 4D vectors\n\\begin{align}\n\t&k_{\\bar{B}}=(1,\\cos\\theta_{x}^{\\bar{B}},\\cos\\theta_{y}^{\\bar{B}},\\cos\\theta_{z}^{\\bar{B}}) \\\\\n\t&k_B=(1,\\cos\\theta_{x}^{B},\\cos\\theta_{y}^{B},\\cos\\theta_{z}^{B}).\n\\end{align}\n\n\\noindent In addition, a matrix with spin observables and decay parameters can be defined in the following way\n\n\\begin{equation}\n\tD_{\\mu\\nu}=\\begin{pmatrix}\n\t\t1 & \\alpha P^Y_{x} & \\alpha P^Y_{y} & \\alpha P^Y_{z} \\\\\n\t\t\\bar{\\alpha} P^{\\bar{Y}}_{x} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{xx} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{xy} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{xz} \\\\\n\t\t\\bar{\\alpha} P^{\\bar{Y}}_{y} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{yx} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{yy} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{yz} \\\\\n\t\t\\bar{\\alpha} P^{\\bar{Y}}_{z} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{zx} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{zy} & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{zz}\n\t\\end{pmatrix},\n\\end{equation}\n\n\\noindent where $\\mu= 0,1,2,3$ or $0, x, y, z$ for the antihyperon and $\\nu= 0,1,2,3$ or $0, x, y, z$ for the hyperon. 
Since parity is conserved in strong interactions, the spin observables matrix in the $\\bar{p}p \\to \\bar{Y}Y$ reaction reduces to \n\n\\begin{align}\n\t&D_{\\mu\\nu}=\\begin{pmatrix}\n\t\t1 & 0 & D_{02} & 0 \\\\\n\t\t0 & D_{11} & 0 & D_{13} \\\\\n\t\tD_{20} & 0 & D_{22} & 0 \\\\\n\t\t0 & D_{31} & 0 & D_{33}\n\t\\end{pmatrix}\\nonumber\\\\\n\t&=\\begin{pmatrix}\n\t\t1 & 0 & \\alpha P^Y_{y} & 0 \\\\\n\t\t0 & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{xx} & 0 & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{xz} \\\\\n\t\t\\bar{\\alpha} P^{\\bar{Y}}_{y} & 0 & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{yy} & 0 \\\\\n\t\t0 & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{zx} & 0 & \\bar{\\alpha}\\alpha C^{\\bar{Y}Y}_{zz}\n\t\\end{pmatrix}.\n\\end{align}\n\n\\noindent Then the angular distribution, expressed in matrix form, becomes\n\n\\begin{equation}\n\tI(\\theta_{\\bar{B}},\\phi_{\\bar{B}},\\theta_B,\\phi_B)=\\frac{1}{16\\pi^2}k_{\\bar{B}}D_{\\mu\\nu}k_B^T.\\label{eq:pdfspincorrmatrix}\n\\end{equation}\n\nFrom the spin correlations, one can calculate the \\textit{singlet fraction}:\n\n\\begin{equation}\n\tF_S = \\frac{1}{4}(1+C^{\\bar{Y}Y}_{xx}-C^{\\bar{Y}Y}_{yy}+C^{\\bar{Y}Y}_{zz}).\n\t\\label{eq:spinfrac}\n\\end{equation}\n\n\\noindent In its original form, derived in Ref. \\cite{durand}, it equals the expectation value of the product of the Pauli matrices $\\vec{\\sigma}_{\\bar{Y}} \\cdot \\vec{\\sigma}_{Y}$, which is a number between $-3$ and $1$. In Eq. \\ref{eq:spinfrac}, it has been rewritten to stay between $0$ and $1$. If $F_S = 0$, all $\\bar{Y}Y$ states are produced in a spin triplet state whereas $F_S = 1$ means they are all in a singlet state. If the spins are completely uncorrelated, the singlet fraction equals 0.25. 
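As a numerical sketch of the matrix formulation and the singlet fraction above, the snippet below builds the parity-reduced $D_{\mu\nu}$, evaluates the matrix form of the angular distribution, and computes $F_S$. All parameter values fed to these functions are toy inputs, not measured ones:

```python
import numpy as np

def spin_matrix(alpha, alphabar, Py, Cxx, Cxz, Cyy, Czz):
    """Parity-reduced spin-observable matrix D for pbar p -> Ybar Y,
    using P^Y_y = P^Ybar_y = Py and C_xz = C_zx as stated in the text."""
    a, ab = alpha, alphabar
    D = np.zeros((4, 4))
    D[0, 0] = 1.0
    D[0, 2] = a * Py           # alpha * P^Y_y
    D[2, 0] = ab * Py          # alphabar * P^Ybar_y
    D[1, 1] = ab * a * Cxx
    D[1, 3] = ab * a * Cxz
    D[3, 1] = ab * a * Cxz     # C_zx = C_xz
    D[2, 2] = ab * a * Cyy
    D[3, 3] = ab * a * Czz
    return D

def intensity(k_Bbar, D, k_B):
    """Angular distribution I = k_Bbar . D . k_B^T / (16 pi^2)."""
    return (k_Bbar @ D @ k_B) / (16.0 * np.pi**2)

def singlet_fraction(Cxx, Cyy, Czz):
    """F_S = (1 + C_xx - C_yy + C_zz) / 4, bounded between 0 and 1."""
    return 0.25 * (1.0 + Cxx - Cyy + Czz)
```

Completely uncorrelated spins (all spin correlations zero) indeed give $F_S = 0.25$, and with all spin variables zero the intensity reduces to the isotropic value $1/(16\pi^2)$.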
\n\n\\section{Simulations of hyperon production in PANDA}\n\\label{sec:simulations}\n\nIn order to estimate the expected hyperon reconstruction efficiency with PANDA, and to quantify its sensitivity to spin observables, a comprehensive simulation study of two key channels has been performed. We have simulated the reactions\n\n\\begin{itemize}\n \\item $\\bar{p}p \\rightarrow \\bar{\\Lambda}\\Lambda, \\bar{\\Lambda} \\to \\bar{p}\\pi^+, \\Lambda \\to p \\pi^-$ at $p_{beam} = 1.64$ GeV\/$c$;\n \\item $\\bar{p}p \\rightarrow \\bar{\\Xi}^+\\Xi^-, \\bar{\\Xi}^+ \\to \\bar{\\Lambda}\\pi^+, \\bar{\\Lambda} \\to \\bar{p}\\pi^+, \\Xi^- \\to \\Lambda \\pi^-, \\\\\\Lambda \\to p \\pi^-$ at $p_{beam} = 4.6$ GeV\/$c$ and $p_{beam} = 7.0$ GeV\/$c$.\n\\end{itemize}\n\n\\noindent The channels have been chosen since the most prominent decay channel in each case leaves only charged particles in the final state. Even though PANDA will be capable of measuring both neutral and charged final states, charged final states are more straightforward and can be reconstructed with better resolution. Hence, channels with charged final state particles serve as a first benchmark in the overall PANDA hyperon performance checklist. The beam momentum $p_{beam} = 1.64$ GeV\/$c$ for the $\\bar{p}p \\rightarrow \\bar{\\Lambda}\\Lambda$ reaction was chosen since it coincides with a large data set collected by the PS185 experiment \\cite{PS185164}. The PS185 measurement of the cross section, the angular distribution and the spin observables can be compared with new data from one of the first foreseen data taking periods with PANDA. 
This allows for a systematic comparison between PANDA and a completely independent previous experiment, hence providing important guidance for all future hyperon studies with PANDA.\n\nNeither differential cross sections nor spin observables of the $\\bar{p}p \\rightarrow \\bar{\\Xi}^+\\Xi^-$ reaction have been studied before, and the goal of PANDA is therefore to contribute with completely new insights. For the double-strange $\\Xi^-$, the chosen beam momenta coincide with the hyperon spectroscopy campaign (4.6 GeV\/$c$, see Ref. \\cite{jennynstar}) and the $X(3872)$ line-shape campaign (7 GeV\/$c$, see Ref. \\cite{xscan}). \n\nSince hyperons have relatively long lifetimes (of the order of $10^{-10}$ s), they travel a measurable distance before decaying. This makes the track reconstruction a challenging task \\cite{walter,michael,jenny} since most standard algorithms assume that all tracks originate in the beam-target interaction point. The simulation study presented here is focused on Phase One of PANDA. A realistic PandaROOT implementation of the Phase One conditions was used \\cite{phaseone}. Some simplifications were made due to limitations in the current version of the simulation software:\n\\begin{itemize}\n \\item The general track reconstruction algorithms that can handle tracks originating far from the interaction point are still under development and have not yet been deployed as a part of the standard PandaROOT package. Therefore, an ideal pattern recognition algorithm has been used, combined with some additional criteria on the number of hits per track in order to mimic realistic conditions.\n \n \\item The particle identification method is not yet stabilised and therefore ideal PID matching was used. It was shown in Ref. \\cite{erikthesis} that the event selection of non-strange final state particles (\\textit{i.e.} decay products of $\\Lambda$ and $\\Xi^-$) can be performed without PID thanks to the distinct topology of hyperon events. 
Ideal PID does, however, considerably reduce the run-time spent on combinatorics, and was therefore used in the reconstruction. \n \n\\end{itemize} \n\\noindent In order to mimic the conditions of real pattern recognition, each track in the target spectrometer was required to contain either 4 hits in the MVD, or in total 6 hits in the MVD + STT + GEM. Tracks in the forward spectrometer were required to contain at least 6 hits in the FTS.\\footnote{The minimum number of hits to fit a circle is three, but additional hits are needed in order to verify that the hits come from a real track and to resolve ambiguities.}\n\\subsection{Signal sample}\nIn total, $10^6$ events were generated for $\\bar{\\Lambda}\\Lambda$ and $\\bar{\\Xi}^+\\Xi^-$ \\cite{walter} using the EvtGen generator \\cite{evtgen}.\nThe $\\bar{\\Lambda}\\Lambda$ sample was weighted using a parameterisation of data from PS185, which revealed a strongly forward-peaking $\\bar{\\Lambda}$ distribution in the CMS system of the reaction \\cite{PS185,PS185164}. The $\\bar{\\Xi}^+\\Xi^-$ final state has never been studied and was therefore generated both with an isotropic angular distribution and with a forward-peaking distribution, using a parameterisation from $\\bar{p}p \\to \\bar{\\Sigma}^0\\Lambda$ production in Ref. \\cite{sigma6}. In this way, we can estimate the sensitivity of the reconstruction efficiency to the underlying angular distribution. This is particularly important in a fixed-target, two-spectrometer experiment like PANDA.\n\nIn hyperon-antihyperon pair production in $\\bar{p}p$ annihilations, the $\\theta_Y$ dependence of the spin observables is not straightforward to parameterise, in contrast to the $e^+e^-$ case \\cite{goran}, since more than two production amplitudes can contribute. 
However, the spin observables must satisfy some constraints: i) they need to stay within the interval $[-1,1]$ and ii) they need to go to zero at the extreme angles $\\theta_Y = 0^{\\circ}$ and $\\theta_Y = 180^{\\circ}$. The latter is because at these angles, the incoming beam is either parallel or anti-parallel to the outgoing antihyperon. Their cross product, giving the direction of the normal of the production plane, is thus not defined.\n\nThe data in this study were weighted according to \n\n\\begin{equation}\nP^Y_y(\\theta_Y) = \\sin2\\theta_Y \n\\end{equation}\n\nand \n\n\\begin{equation}\nC^{\\bar{Y}Y}_{ij}(\\theta_Y) = \\sin\\theta_Y, \n\\end{equation}\n\n\\noindent since these functions satisfy the constraints and give a polarisation with a shape that resembles real data \\cite{PS185,PS185164}. \n\n\\subsection{Background samples}\n\n\\subsubsection{Background to $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$}\n\nGeneric hadronic background is denoted $\\bar{p}p\\to X$, where $X$ refers to \\textit{any} allowed final state. Such processes are simulated using the Dual Parton Model (DPM) generator \\cite{pbarx}, based on a phenomenological model that incorporates Regge theory, topological expansions of QCD, and concepts from the parton model. From this, the energy dependence of hadron-hadron cross sections is obtained for final states with a large number of particles carrying small transverse momenta with respect to the collision axis. Since the strong coupling is large for such processes, perturbation theory is not applicable. Instead, a topological expansion is employed, where the number of colours $N_c$ or flavours $N_f$ is the expansion parameter.\n\nThe total cross section of all $\\bar{p}p\\to X$ processes is around three orders of magnitude larger than that of $\\bar{p}p\\to\\bar{\\Lambda}\\Lambda$. 
The expected ratio of produced generic background and signal events can be estimated from simulations using:\n\\begin{equation}\n\t\\frac{N_{X}}{N_{\\mathrm{signal}}} = \\frac{\\sigma(\\bar{p}p \\to X)}{\\sigma (\\bar{p}p \\to \\bar{\\Lambda}\\Lambda)\\mathrm{BR}(\\Lambda \\to p \\pi)^2},\n\\end{equation}\nwhere $\\sigma (\\bar{p}p \\to \\bar{\\Lambda}\\Lambda) = 64.1 \\pm 0.4 \\pm 1.6$ $\\mu$b is the production cross section \\cite{PS185164_1}, $\\mathrm{BR}(\\Lambda \\to p \\pi) = 63.9 \\pm 0.5 \\%$ is the branching ratio \\cite{pdg} and $\\sigma(\\bar{p}p \\to X) = 96 \\pm 3$ mb \\cite{CERNHERA}.\n \n \n In order to estimate the expected background contamination, one should ideally produce a realistic amount of background events with respect to the signal. This would, however, require $3.6\\times 10^3$ DPM events per signal event, which in turn implies more than $10^9$ DPM events. Since this would take an unreasonably long time to simulate, a smaller background sample has been generated and then weighted to give the expected signal-to-background ratio.\n\nAmong the numerous channels included in the generic background, the non-resonant $\\bar{p}p \\to \\bar{p}p\\pi^+\\pi^-$ process is particularly important. This is because it has the same final state particles as the process of interest, \\textit{i.e.} $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda\\to \\bar{p}p\\pi^+\\pi^-$, and a cross section that is of the same order of magnitude as the signal process \\cite{CERNHERA,EASTMAN197329,LYS1973610}. Though included in the DPM generator, its cross section has not been tuned to real data. Therefore, this reaction has been simulated separately. 
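The quoted number of DPM events per signal event follows directly from the cited cross sections and branching ratio; a quick arithmetic cross-check, with the values copied from the text:

```python
# Cross-check of the background-to-signal production ratio using the
# cross sections and branching ratio quoted in the text.
sigma_X  = 96e3    # sigma(pbar p -> X) in microbarn (96 mb)
sigma_LL = 64.1    # sigma(pbar p -> Lambdabar Lambda) in microbarn
br       = 0.639   # BR(Lambda -> p pi)

ratio = sigma_X / (sigma_LL * br**2)
print(f"DPM events per signal event: {ratio:.2e}")
```

This evaluates to roughly $3.7\times 10^3$, in line with the $3.6\times 10^3$ quoted above once the rounding of the inputs is taken into account.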
The number of simulated events, the cross sections and the weights when calculating signal-to-background ratios are given in Table \\ref{tab:samplesize1}.\n\n\\begin{table}[ht]\n\t\t\\centering\n\t\t\\begin{tabular}{c|c|c|c}\n\t\tChannel & $\\bar{\\Lambda}\\Lambda$ & $\\bar{p}p\\pi^+\\pi^-$ & DPM \\\\ \\hline\n\t\tSample & $9.75\\cdot10^5$ & $9.74\\cdot10^5$ & $9.07\\cdot10^6$\\\\\n\t\tCross section [$\\mu$b] & 64.1 \\cite{PS185164_1} & 15.4 & 96 000\\\\\n\t\tWeight & 1.00 & 0.590 & 395 \\\\\t\n\t\t\\hline\n\t\t\\end{tabular}\n\t\t\\caption{Sample sizes, cross sections and weights for the simulation study at 1.64 GeV\/$c$. The non-resonant cross section has been calculated from the average of Refs. \\cite{CERNHERA,EASTMAN197329,LYS1973610}.}\n\t\t\\label{tab:samplesize1}\n\t\\end{table}\n\n\\subsubsection{Background to $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$}\n\nAlso in this case, generic $\\bar{p}p\\to X$ processes are studied with the DPM generator to understand the background. \n\nThe expected production ratio of generic background and signal is given by\n\n\\begin{equation}\n\t\\frac{N_{X}}{N_{\\mathrm{signal}}} = \\frac{\\sigma(\\bar{p}p \\to X)}{\\sigma (\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-)\\mathrm{BR}(\\Xi\\to\\Lambda\\pi)^2 \\mathrm{BR}(\\Lambda\\to p\\pi)^2}. \\label{xsecratio}\n\\end{equation}\n\nThe cross sections $\\sigma(\\bar{p}p \\to X)$ at the beam momenta 7.3 GeV\/$c$ (the tabulated value closest to 7.0 GeV\/$c$) and 4.6 GeV\/$c$ are $58.3 \\pm 1.3$ mb \\cite{dpm7} and $68.8 \\pm 0.8$ mb \\cite{dpm46}, respectively. From Eq. (\\ref{xsecratio}), we see that for each simulated signal event, at least $4.76\\cdot10^5$ DPM events must be simulated to obtain the correct signal-to-background ratio. This would be even more computationally demanding than in the case of $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$. 
The weighting method presented in the previous section can be applied, but the weights need to be about two orders of magnitude larger. This means that if very few DPM events pass the selection criteria, then the signal-to-background ratio becomes very sensitive to fluctuations. Therefore, the most important background channels are considered separately. These are identified based on their final state particles, vertex topology and invariant masses of particle combinations, and found to be $\\bar{p}p\\to\\bar{\\Sigma}^*(1385)^+ \\Sigma^*(1385)^-$, $\\bar{p}p \\to\\bar{\\Lambda}\\Lambda\\pi^+\\pi^-$ and $\\bar{p}p \\to \\bar{p}p 2 \\pi^+ 2 \\pi^-$. Events from these channels are removed from the DPM sample at the analysis stage, to avoid double-counting of background. Out of the $9.80\\cdot10^7$ simulated DPM events at $p_{\\mathrm{beam}} = 7.0$ GeV\/c, $\\sim 7\\cdot 10^4$ events were removed. For the DPM sample at $p_{\\mathrm{beam}} = 4.6$ GeV\/c, $\\sim 10^4$ events were removed from the $9.8\\cdot10^7$ generated events. 
The simulated samples, cross sections and weights are summarised in Table \\ref{tab:samplesize2}.\n\n\\begin{table*}[ht]\n\t\t\\centering\n\t\t\\begin{tabular}{c|c|c|c|c|c}\n\t\tChannel at 7 GeV\/$c$ & $\\bar{\\Xi}^+\\Xi^-$ & $\\bar{\\Sigma}^*(1385)^+ \\Sigma^*(1385)^-$ & $\\bar{\\Lambda}\\Lambda \\pi^+\\pi^-$ & $\\bar{p}p 2 \\pi^+ 2 \\pi^-$ & DPM \\\\ \\hline\n\t\tSample & $8.54\\cdot10^5$ & $9.87\\cdot10^6$ & $9.85\\cdot10^6$ & $9.78\\cdot10^6$ & $9.73\\cdot10^7$\\\\\n\t\t$\\sigma_{\\mathrm{eff}}$ [$\\mu$b] & 0.123 & 1.39 & 24.1 & 390 & $5.83\\cdot10^4$\\\\\t\n\t\tWeight factor & 1.00 & 0.98 & 17.1 & 278 & $4.18\\cdot10^3$ \\\\\t\n\t\t\\hline\n\t\tChannel at 4.6 GeV\/$c$ & $\\bar{\\Xi}^+\\Xi^-$ & $\\bar{\\Sigma}^*(1385)^+ \\Sigma^*(1385)^-$ & $\\bar{\\Lambda}\\Lambda \\pi^+\\pi^-$ & $\\bar{p}p 2 \\pi^+ 2 \\pi^-$ & DPM \\\\ \\hline\n\t\tSample & $8.80\\cdot10^5$ & $9.86\\cdot10^6$ & $9.88\\cdot10^6$ & $9.80\\cdot10^6$ & $9.82\\cdot10^7$\\\\\n\t\t$\\sigma_{\\mathrm{eff}}$ [$\\mu$b] & 0.41 & 1.39 & 14.7 & 143 & $6.88\\cdot10^4$\\\\\t\n\t\tWeight factor & 1.00 & 0.304 & 3.21 & 31.4 & $1.51\\cdot10^3$ \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\caption{Sample sizes, cross sections and weights for the simulation study at $p_{\\mathrm{beam}} = 7$ GeV\/c and $p_{\\mathrm{beam}} = 4.6$ GeV\/c. The $\\bar{\\Sigma}^*(1385)\\Sigma^*(1385)$ cross section is obtained from Ref. \\cite{llbarpippim46}, and the $\\bar{\\Lambda}\\Lambda\\pi^+\\pi^-$ cross sections from Ref. \\cite{llbarpippim7} and \\cite{llbarpippim46} at 7 GeV\/$c$ and 4.6 GeV\/$c$, respectively. The non-resonant $\\bar{p}p 2\\pi^+2\\pi^-$ cross section is obtained from Ref. \\cite{nonres7} at 7 GeV\/$c$ and the average of Refs. \\cite{nonres46} and \\cite{nonres462} at 4.6 GeV\/$c$.}\n\t\t\\label{tab:samplesize2}\n\t\\end{table*}\n\n\\subsection{Event selection} \n\nReactions involving hyperons have a very distinct topology, since the long-lived hyperons decay a measurable distance from the point of production. 
The topology of each reaction and subsequent decay chain studied in this work is shown in Fig. \\ref{fig:topology}. This can be exploited in the event selection procedure, as outlined in this section.\n\nThe event selection is performed in two stages: a pre-selection and a fine selection. The pre-selection comprises a set of basic topological criteria that reduce the total simulated sample and hence the analysis run-time. The fine selection involves kinematic fits and fine-tuned mass windows.\n\n\\begin{figure*}[ht]\n\\begin{minipage}{0.4\\textwidth}\n\t\\centering\n\t\\includegraphics[width = \\textwidth]{llbartopology.png}\n\t\\end{minipage}\n\t\\begin{minipage}{0.6\\textwidth}\n\t\\includegraphics[width =\\textwidth]{xixibartopology.png}\n\t\t\\end{minipage}\n\t\\caption{Signal event topology of $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda, \\bar{\\Lambda} \\to \\bar{p}\\pi^+, \\Lambda \\to p\\pi^-$ (left) and $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-, \\bar{\\Xi}^+ \\to \\bar{\\Lambda}\\pi^+, \\bar{\\Lambda} \\to \\bar{p}\\pi^+, \\Xi^- \\to \\Lambda\\pi^-, \\Lambda \\to p\\pi^-$ (right).}\n\t\\label{fig:topology}\n\t\\end{figure*}\n\n\\subsubsection{The $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ reaction}\n\\label{sec:lambda}\n\n\\noindent The pre-selection criteria for this reaction are:\n\\begin{itemize}\n \\item Each event must contain at least one each of the following: $p$, $\\bar{p}$, $\\pi^+$ and $\\pi^-$.\n \\item Each event contains at least one $p\\pi^-$ and one $\\bar{p}\\pi^+$ combination that can be successfully fitted to one common vertex, with a probability of $> 0.01$. 
If more than one such $\\Lambda$ or $\\bar{\\Lambda}$ candidate exists in one event (occurs in 6\\% of the cases for $\\Lambda$ and 2\\% of the cases for $\\bar{\\Lambda}$), then the one with the smallest $\\chi^2$ is kept for further analysis.\n \\item Each event must contain at least one $p\\pi^-$ and one $\\bar{p}\\pi^+$ combination with an invariant mass that satisfies $|m_{\\Lambda}-m(p\\pi)| < 0.3$ GeV\/$c^2$. This mass window is very wide and is further tightened in the final selection.\n \\item The four-vectors of the $\\Lambda$ and the $\\bar{\\Lambda}$ candidate can be fitted successfully to the initial beam momentum, with a \\textit{four-constraint} ($4C$) fit.\n\\end{itemize}\n\n\\noindent The event filtering is further improved by the fine selection. The criteria of the fine selection were tuned and optimised using as a figure of merit the significance, \\textit{i.e.} $S\/\\sqrt{S+B}$, where $S$ refers to the number of signal events and $B$ the number of generic hadronic events generated by DPM. The criteria are the following:\n\n\\begin{itemize}\n \\item The $\\chi^2$ of the $4C$ fit is required to be $< 100$. \n \\item The total distance $z_{tot}$ from the interaction point in the beam direction of the $\\Lambda$ and $\\bar{\\Lambda}$ candidate must fulfill $z_{tot} = |z_{\\Lambda} + z_{\\bar{\\Lambda}}| > 2$ cm.\n \\item The invariant mass of the $p\\pi^-$ and $\\bar{p}\\pi^+$ system must not differ from the PDG $\\Lambda$ mass by more than 5$\\sigma$, where $\\sigma$ is the width of a Gaussian fitted to the invariant mass peak. \n\\end{itemize}\n\\noindent The mass resolution differs between $\\Lambda$ ($\\sigma = 2.864\\cdot10^{-3}$ GeV\/$c^2$) and $\\bar{\\Lambda}$ ($\\sigma = 2.980\\cdot10^{-3}$ GeV\/$c^2$). This is because the decay products from $\\Lambda$ are primarily emitted in the acceptance of the MVD and STT, while the decay products of $\\bar{\\Lambda}$ to a larger extent hit the FTS. 
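The tuning of such cut values against the $S/\sqrt{S+B}$ figure of merit can be illustrated with a schematic one-dimensional scan. The efficiency curves and yields below are invented purely for illustration; in the real analysis they come from the simulated samples:

```python
import numpy as np

# Schematic scan of one cut variable (here z_tot) maximising the figure
# of merit S / sqrt(S + B).  All numbers below are toy values.
cuts = np.linspace(0.0, 6.0, 61)        # candidate thresholds [cm]
eff_sig = np.exp(-cuts / 10.0)          # toy signal survival probability
eff_bkg = np.exp(-cuts / 0.8)           # toy background survival probability

S0, B0 = 1.6e5, 3.0e5                   # toy yields before this cut
S = S0 * eff_sig
B = B0 * eff_bkg
significance = S / np.sqrt(S + B)

best = cuts[np.argmax(significance)]
print(f"toy optimum: z_tot > {best:.1f} cm")
```

Since the toy background falls much faster with the cut than the signal, the significance first rises and then slowly decreases, yielding a well-defined optimum threshold.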
The $\\bar{p}\\pi^+$ invariant mass distributions for signal and background are shown in the left panel of Figure \\ref{fig:invmass}.\n\nThe reconstruction efficiency of the signal reaction and the most important background sources for the different selection criteria are given in Table \\ref{tab:effllbar}. In addition, the number of expected background events for a given number of signal events has been calculated taking the cross sections into account. It is clear that background can be very successfully suppressed. A signal-to-background ratio of $S\/B \\approx 106$ is obtained. We conclude that the PANDA detector will be capable of collecting very clean $\\bar{\\Lambda}\\Lambda$ samples, which is essential when extracting spin observables. \n\\begin{table}[ht]\n\t\\centering\n\t\n\t\\begin{tabular}{c|c|c|c}\n\t Channel & $\\bar{\\Lambda}\\Lambda$ & $\\bar{p}p\\pi^+\\pi^-$ & DPM \\\\ \\hline\n\t Generated & $9.75\\cdot10^5$ & $9.74\\cdot10^5$ & $9.07\\cdot10^6$ \\\\\n\tPreselection & $2.129\\cdot10^5$ & 292700 & 651 \\\\\n\t$\\chi^2 < 100$ & $1.879\\cdot10^5$ & $249190$ & $136$ \\\\\n\t$\\Delta m < 5\\sigma$ & $1.685\\cdot10^5$ & $29180$ & $3$ \\\\\n\t$z_{\\bar{\\Lambda}} + z_{\\Lambda} > 2$ cm & $1.572\\cdot10^5$ & 470 & 2 \\\\ \n\tEff. (\\%) & $16.0\\pm0.4$ & $0.05$ & $2.2\\cdot10^{-7}$\\\\\n\t\\hline\n\t\t$N_{exp}$ & $1.572\\cdot10^5$ & 277 & 790 \\\\\n\t\t\\hline\n\t\t\\end{tabular}\n\t\\caption{Reconstruction efficiency after the final selection for signal events as well as non-resonant and generic hadronic background. In the bottom row, the proportion of expected events is shown. These numbers were calculated by applying the weights in Table \\ref{tab:samplesize1}. 
\\label{tab:effllbar}}\n\\end{table}\n\n\\subsubsection{The $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$ reaction}\n\\label{sec:xi}\n\nThe $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$ reaction is more complicated than the $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ reaction since i) there are more particles in the final state, ii) there are several identical particles in the final state and iii) each event contains four displaced decay vertices instead of two. In addition, the cross section is smaller and at the larger beam momenta necessary for $\\Xi$ studies, the cross sections of the background channels are larger. Hence, the selection procedure is by necessity a bit more involved. In the following, we summarise the pre-selection criteria. For simplicity, the charge conjugated mode is implied unless otherwise stated.\n\n\\noindent\\textbf{Final State Reconstruction and Combinatorics}\n\n\\noindent The first step is to combine the final state particles into $\\Lambda$ and $\\Xi$ candidates:\n\\begin{itemize}\n\t\\item All possible $p \\pi^-$ combinations are combined to form $\\Lambda$ candidates.\n\t\\item All combinations fulfilling $|m_{\\Lambda} - M(p\\pi^-) | < 0.05$ GeV\/c$^2$ are accepted and stored for further analysis.\n\t\\item All possible $\\Lambda \\pi^-$ combinations are combined to form $\\Xi^-$ candidates.\n\t\\item All combinations fulfilling $|m_{\\Xi} - M(p\\pi^-\\pi^-) | < 0.05$ GeV\/c$^2$ are accepted and stored for further analysis. This mass window is very wide and is tightened in the final selection.\n\\end{itemize}\n\\textbf{Fit of the $\\mathbf{\\Xi^-\\to \\Lambda \\pi^-, \\Lambda \\to p \\pi^-}$ decay chain}\n\n\\noindent The second step is to exploit the distinct topology of the $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-, \\bar{\\Xi}^+ \\to \\bar{\\Lambda}\\pi^+, \\bar{\\Lambda} \\to \\bar{p}\\pi^+, \\Xi^-\\to \\Lambda \\pi^-, \\Lambda \\to p \\pi^-$ process, which imposes many constraints. 
Therefore, all $\\Xi^-$ candidates from the previous step are fitted under the $\\Xi^-\\to \\Lambda \\pi^-, \\Lambda \\to p \\pi^-$ hypothesis where the $\\Lambda$ mass is constrained to its PDG value. This is achieved using the Decay Chain Fitting package \\cite{HULSBERGEN2005566}, designed to perform kinematic fitting on a sequence of decays with at least one over-constraint. Taking all constraints and unknown parameters in the fit into account results in three effective degrees of freedom. The advantage of this approach compared to multiple sequential fits is that all constraints in a reaction are taken into account simultaneously, on an equal basis. This feature is not available in conventional fitters. In our case, the procedure is the following:\n\n\\begin{itemize}\n\t\\item The decay chain $\\Xi^- \\to \\Lambda \\pi^-, \\Lambda \\to p \\pi^-$ is fitted. The constraints are provided by momentum conservation, the two vertex positions and the $\\Lambda$ mass. All momentum components of all particles are modified in the fit.\n\t\\item Candidates with a fit probability $< 0.01$ are rejected.\n\n\\end{itemize}\n\\textbf{Reconstructing the $\\bar{p}p$ system} \\\\\n\n\t\\begin{figure*}[ht]\n\t\t \\includegraphics[width=0.33\\linewidth]{figllbarBW\/h0m.pdf}\n\t\t \\includegraphics[width=0.33\\linewidth]{figxixibar46BW\/h0m.pdf}\n\t\t \\includegraphics[width=0.33\\linewidth]{figxixibar7BW\/h0m.pdf}\n\t\t\t\\caption{Invariant mass distributions of signal and background samples in the final selection state. Left: The $\\bar{p}\\pi^+$ invariant mass at $p_{\\mathrm{beam}}$ = 1.64 GeV\/c for the $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ reaction (black), non-resonant $\\bar{p}p \\to \\bar{p}p\\pi^+\\pi^-$ (dotted) and DPM (grey). 
Middle: The $\\bar{p}\\pi^+\\pi^+$ invariant mass at $p_{\\mathrm{beam}}$ = 4.6 GeV\/c for the $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$ reaction (black dashed), the $\\bar{p}p \\to \\bar{\\Sigma}^*(1385)^+\\Sigma^*(1385)^-$ (black dotted), $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda\\pi^+\\pi^-$ (grey dotted), $\\bar{p}p \\to \\bar{p}p2\\pi^+2\\pi^-$ (grey solid) and combinatorial (black solid). Right: Same as in the middle panel but at $p_{\\mathrm{beam}}$ = 7.0 GeV\/c. The vertical lines mark the final selection mass window. All distributions are normalised to previously measured cross sections.}\n\t\t\t\\label{fig:invmass}\n\t\t\\end{figure*}\n\n\\noindent The decay chain fitter results in a list of $\\Xi^-$ and $\\bar{\\Xi}^+$ candidates in each event. The next step is to combine these candidates and test the hypothesis that they come from a common production vertex, and fulfill the kinematics of the initial $\\bar{p}p$ system.\n\\begin{itemize}\n\t\\item All possible $\\bar{\\Xi}^+\\Xi^-$ combinations form a hypothetical $\\bar{p}p$ system.\n\t\\item A vertex fit of $\\bar{\\Xi}^+\\Xi^-$ pairs is performed to reconstruct the interaction point.\n\t\\item Candidates with a fit probability $ < 0.01$ are rejected.\n\t\\item Candidates where the opening angle of the $\\bar{\\Xi}^+\\Xi^-$ pair is $> 3$ rad in the CMS system are selected for further analysis. This is because in the two-body reaction of interest, the $\\bar{\\Xi}^+$ and the $\\Xi^-$ are emitted back to back.\n\t\\item Events where $\\Lambda$ and $\\Xi^-$ candidates satisfy $\\Delta z = z(\\Lambda) - z(\\Xi) > 0$ cm are selected, where $z(Y)$ is the $z$-position of the hyperon decay vertex.\n\t\\item A kinematic fit of $\\bar{\\Xi}^+\\Xi^-$ pairs is performed, where energy and momentum are constrained to the initial system.\n\t\\item In case there is more than one $\\bar{\\Xi}^+\\Xi^-$ combination in an event, fulfilling all previous criteria, the candidate with the smallest $\\chi^2$ value from the kinematic fit 
is chosen for further analysis.\n\\end{itemize}\n\n\\noindent In the fine selection, additional criteria are applied after careful studies of the significance $S\/\\sqrt{S+B}$:\n\\begin{itemize}\n\t\\item Combinations of $\\bar{p}\\pi^+\\pi^+$ must fulfill \\\\ $|m_{fit}(\\bar{p}\\pi^+\\pi^+) - m_{PDG}(\\Xi^-)| < 5\\cdot0.003 $ GeV\/c$^2$, where 0.003 GeV\/c$^2$ is the $\\sigma$ width of the broader Gaussian component of the curve fitted to the data in the peak region.\n\t\\item Combinations of $p\\pi^-\\pi^-$ must fulfill \\\\$| m_{fit}(p\\pi^-\\pi^-) - m_{PDG}(\\Xi^-)| < 5\\cdot0.003 $ GeV\/c$^2$, where 0.003 GeV\/c$^2$ is the $\\sigma$ width of the broader Gaussian component of the curve fitted to the data in the peak region.\n\t\\item The total distance in the beam direction from the reconstructed interaction point (IP) in an event must satisfy $(z_{fit}(\\bar{\\Xi}^+) - z_{fit}(IP)) + (z_{fit}(\\Xi^-) - z_{fit}(IP)) > 3$ cm.\n\\end{itemize}\n\n\\noindent Invariant mass plots of the $\\bar{p}\\pi^+\\pi^+$ system for signal and various background channels are shown in the middle (at $p_{\\mathrm{beam}}$ = 4.6 GeV\/c) and right ($p_{\\mathrm{beam}}$ = 7.0 GeV\/c) panels of Figure \\ref{fig:invmass}. The resulting reconstruction efficiency for each criterion, or set of criteria, is shown in Table \\ref{tab:effxixibar}. The proportion of expected events, calculated from the cross sections, is also given. No non-resonant or generic background events satisfy the selection criteria. Therefore, the Poisson upper limit of 2.3 events has been used to estimate the number of background events at a confidence level of 90\\%. 
\n\n\n\\begin{table*}[ht]\n\t\t\\centering\n\t\t\\resizebox{1.\\textwidth}{!}{%\n\t\t\\begin{tabular}{c|c|c|c|c|c}\n\t\t\t$p_{\\mathrm{beam}}$ = 7.0 GeV\/$c$ & $\\bar{\\Xi}^+\\Xi^-$ & $\\bar{\\Sigma}(1385)^+ \\Sigma(1385)^-$ & $\\bar{\\Lambda}\\Lambda\\pi^+\\pi^-$ & $\\bar{p}p 2 \\pi^+ 2 \\pi^-$ & DPM \\\\ \\hline\nGenerated & $8.54\\cdot10^5$ & $9.87\\cdot10^6$ & $9.85\\cdot10^6$ & $9.78\\cdot10^6$ & $9.73\\cdot10^7$\\\\\n\t\t\tPre-selection & $7.83\\cdot10^4$ & $3.45\\cdot10^4$ & $3.51\\cdot10^3$ & $1$ & $100$ \\\\\n\t\t\tMass cut & $7.27\\cdot10^4$ & $23$ & $379$ & $<2.3$ & $7.0$ \\\\\n\t\t\t$\\Delta d > 3$ & $6.76\\cdot10^4$ & $3.0$ & $14$ & $<2.3$ & $<2.3$ \\\\\n\t\t\tEfficiency (\\%) & $7.95\\pm0.03$ & $(3.0\\pm0.2)\\cdot10^{-5}$ & $(1.4\\pm0.4)\\cdot10^{-4}$ & $<2.3\\cdot 10^{-5}$ & $<2.3\\cdot 10^{-6}$\\\\ \\hline\n\t\t\t$N_{exp}$ weighted & $6.76\\cdot10^4$ & $2.9$ & $239$ & $<640$ & $<9.61\\cdot10^3$ \\\\\n\t\t\t\\hline \n\t\t\t$p_{\\mathrm{beam}}$ = 4.6 GeV\/$c$ & & & & & \\\\ \\hline\n\t\t\tGenerated & $8.80\\cdot10^5$ & $9.86\\cdot10^6$ & $9.88\\cdot10^6$ & $9.80\\cdot10^6$ & $9.82\\cdot10^7$\\\\\n\t\t\tPre-selection & $8.65\\cdot10^4$ & $3.29\\cdot10^4$ & $2.61\\cdot10^4$ & $105$ & $44$ \\\\ \n\t\t\tMass cut & $8.06\\cdot10^4$ & $21$ & $2.49\\cdot10^3$ & $13$ & $6.0$ \\\\ \n\t\t\t$\\Delta d > 3$ & $7.23\\cdot10^4$ & $1.0$ & $39$ & $<2.3$ & $<2.3$ \\\\\n\t\t\tEfficiency (\\%) & $8.22\\pm0.03$ & $(1.0\\pm1.0)\\cdot10^{-5}$ & $(4.0\\pm0.6)\\cdot10^{-4}$ & $<2.3\\cdot10^{-5}$ & $<2.3\\cdot10^{-6}$\\\\ \\hline\n\t\t\t$N_{exp}$ weighted & $7.23\\cdot10^4$ & $0.30$ & $125$ & $<72$ & $<3.47\\cdot10^3$\t\\\\\n\t\t\t\\hline\n\t\t\n\t\t\n\t\t\\end{tabular}}%\n\t\t\\caption{Reconstruction efficiency after the final selection for signal events as well as non-resonant and generic hadronic background. $N_{exp}$ is the expected number of events, applying weights in Table \\ref{tab:samplesize2}. The Poisson upper limits are given at a 90\\% confidence level. 
\\label{tab:effxixibar}}\n\t\t\n\t\\end{table*}\n\n\\section{Parameter estimation}\n\\label{sec:paramest}\n\nTo estimate the physics parameters $\\alpha$, $\\bar{\\alpha}$, $P^Y_y$, $P^{\\bar{Y}}_y$, $C^{Y\\bar{Y}}_{xz}$, $C^{Y\\bar{Y}}_{xx}$, $C^{Y\\bar{Y}}_{yy}$ and $C^{Y\\bar{Y}}_{zz}$ from the measured quantities, \\textit{i.e.} the hyperon scattering angle and the baryon and antibaryon decay angles, methods like Maximum Likelihood or the Method of Moments can be used. In the very first phase of data taking with PANDA, the samples will be relatively modest and the measurements will be focused on the production-related parameters, \\textit{i.e.} the polarisation and the spin correlations. These can be obtained for any given beam momentum and scattering angle by fixing $\\alpha$ and $\\bar{\\alpha}$ to the already measured value of $\\alpha$ \\cite{pdg}, assuming CP symmetry, \\textit{i.e.} $\\alpha = -\\bar{\\alpha}$. \n\nIn this study, the Method of Moments has been chosen as the parameter estimation method, due to its computational simplicity.\nAt a given antihyperon scattering angle $\\theta_{\\bar{Y}}$, it can be shown \\cite{erikthesis} that the first moment of $\\cos\\theta_y^B$ is proportional to the polarisation at this angle:\n\\begin{align}\n <\\cos\\theta_y^B>_{\\theta_{\\bar{Y}}}&=\\frac{\\int I(\\theta_y^B,\\theta_y^{\\bar{B}})_{\\theta_{\\bar{Y}}}\\cos\\theta_y^B d\\Omega_B d\\Omega_{\\bar{B}}}{\\int I(\\theta_y^B,\\theta_y^{\\bar{B}})_{\\theta_{\\bar{Y}}} d\\Omega_B d\\Omega_{\\bar{B}}}\\\\\n &= \\frac{\\alpha P^Y_{y,\\theta_{\\bar{Y}}}}{3}\n\\end{align}\n\\noindent Hence, the polarisation can be calculated from the moment\n\\begin{equation}\n P_{y,\\theta_{\\bar{Y}}}^{Y\/\\bar{Y}}=\\frac{3<\\cos\\theta_y^{B\/\\bar{B}}>_{\\theta_{\\bar{Y}}}}{\\alpha}\n \\label{eq:pol}\n\\end{equation}\n\\noindent where the estimator of the moment is the arithmetic mean of $\\cos\\theta_y^{B\/\\bar{B}}$ obtained from a sample of $N$ 
events:\n\n\\begin{equation}\n\t<\\reallywidehat{\\cos\\theta_y^{B\/\\bar{B}}}>_{\\theta_{\\bar{Y}}} = \\frac{1}{N} \\sum_{i=1}^N \\cos\\theta_{y,i}^{B\/\\bar{B}} \\Bigg \\rfloor _{\\theta_{\\bar{Y}}}.\n\t\\label{eq:polest}\n\\end{equation}\n\n\\noindent In the following, we always refer to moments and spin observables at a given $\\theta_Y$, unless explicitly stated otherwise. That means $P^{\\bar{Y}}_{y,\\theta_Y} = P^{\\bar{Y}}_y$ and so on.\n\nThe variance of the first moment is given by the difference between the second moment and the square of the first moment. In our case, we have\n\n\\begin{equation}\n V(<\\cos\\theta_y^B>) = \\frac{1}{N-1}[<\\cos^2\\theta_y^B>-<\\cos\\theta_y^B>^2]\n\\end{equation}\n\n\\noindent which, after error propagation and some algebra, becomes \n\n\\begin{equation}\n V(P_y) = \\frac{3-(\\alpha P_{y})^2}{\\alpha^2(N-1)}.\n \\label{eq:polvar}\n\\end{equation}\n\nIn a similar way, the spin correlations at a given hyperon scattering angle $\\theta_Y$ can be obtained from the moments of the product of the cosines with respect to the different reference axes $i,j = x,y,z$ \\cite{erikthesis}:\n\n\\begin{equation}\n C^{\\bar{Y}Y}_{i,j} = \\frac{9<\\cos\\theta_{i}^B\\cos\\theta_{j}^{\\bar{B}}>}{\\alpha\\bar{\\alpha}}\n \\label{eq:spincorr}\n\\end{equation}\n\n\\noindent where the estimator of the moment is given by the arithmetic mean of the cosine product from the data sample at a given scattering angle:\n\n\\begin{equation}\n\t<\\reallywidehat{\\cos\\theta_{i}^B\\cos\\theta_{j}^{\\bar{B}}}> = \\frac{1}{N} \\sum_{k=1}^N \\cos\\theta_{i,k}^B \\cos\\theta_{j,k}^{\\bar{B}}.\n\t\\label{eq:spinest}\n\\end{equation}\n\n\\noindent The variance of the spin correlations can be calculated in the same way as that of the polarisation and is found to be\n\n\\begin{equation}\n V(C^{\\bar{Y}Y}_{i,j}) = \\frac{9-(\\alpha\\bar{\\alpha}C^{\\bar{Y}Y}_{i,j})^2}{(\\alpha\\bar{\\alpha})^2(N-1)},\n \\label{eq:spinvar}\n\\end{equation}\n\n\\noindent for 
$i,j=x,y,z$.\n\n\\subsection{Efficiency corrections}\nIn reality, detectors and reconstruction algorithms have finite efficiencies. This needs to be taken into account in the parameter estimation. However, the efficiency is a complicated function of all measured variables. In the case of exclusive $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda, \\bar{\\Lambda} \\to \\bar{p}\\pi^+, \\Lambda \\to p\\pi^-$ measurements, there are five independent measured variables: the $\\bar{\\Lambda}$ scattering angle, the proton decay angles $\\theta_p$ and $\\phi_p$, and the antiproton decay angles $\\theta_{\\bar{p}}$ and $\\phi_{\\bar{p}}$. In principle, this means that parameter estimation methods, which rely on integration, such as the Method of Moments, should employ efficiency corrections in all five independent variables. In the case of $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-, \\bar{\\Xi}^+ \\to \\bar{\\Lambda}\\pi^+, \\bar{\\Lambda} \\to \\bar{p}\\pi^+, \\Xi^- \\to \\Lambda\\pi^-, \\Lambda \\to p\\pi^-$, the efficiency depends on nine independent variables. This is, however, difficult to achieve in practice, since the number of Monte Carlo simulated events required for a five- or nine-dimensional correction matrix is prohibitively large.\n\nInstead, different approximations have to be made, based on reasonable and testable assumptions. In this work, we have treated the efficiency with two independent methods, the \\textit{efficiency dependent} and the \\textit{efficiency independent} method, as outlined in the following.\n\n\\subsubsection{Efficiency dependent method}\n\nWith this method, the efficiency is corrected for on an event-by-event basis. 
The efficiency corrected estimator of the moment $<\\cos\\theta_y^B>$ is given by\n\n\\begin{equation}\n\t<\\reallywidehat{\\cos\\theta_y^B}> = \\frac{1}{N} \\sum_{i=1}^N \\cos\\theta_{y,i}^B w_i(\\theta_Y,\\Omega_B,\\Omega_{\\bar{B}})\n\t\\label{eq:polesteff}\n\\end{equation}\n\nwhere\n\\begin{equation}\n w_i(\\theta_Y,\\Omega_B,\\Omega_{\\bar{B}}) = \\frac{1}{\\epsilon_i(\\theta_Y,\\Omega_B,\\Omega_{\\bar{B}})}. \n \\label{eq:weight}\n\\end{equation}\n\nIn the polarisation extraction, we assume for computational simplicity that the efficiency of the $\\Lambda$ as a function of the $\\Lambda$ angles is independent of the $\\bar{\\Lambda}$ angles, and vice versa. Then we can reduce $\\epsilon(\\theta_Y,\\Omega_B,\\Omega_{\\bar{B}})$ to $\\epsilon(\\theta_Y,\\Omega_B)$. Furthermore, our simulations show that the efficiency is symmetric with respect to the azimuthal angle $\\phi_y$, which means that we can integrate over $\\phi_y$ without introducing a bias. This means that our efficiency is simplified to $\\epsilon(\\theta_Y,\\cos\\theta^B_{y})$. Hence, we can represent the efficiency by two-dimensional matrices: the $\\bar{Y}$ scattering angle $\\theta_{\\bar{Y}}$ in the CMS system of the reaction \\textit{versus} the decay proton angle $\\theta^B_{y}$ with respect to the $y$ axis in Fig. \\ref{fig:refsys}, in the rest frame of the decaying hyperon. \n\nFor the spin correlation $C^{\\bar{Y}Y}_{i,j}$, we need to take into account the decay angles from the hyperon and the antihyperon. We then assume a 3D efficiency $\\epsilon(\\theta_Y,\\cos\\theta^B_{i}, \\cos\\theta^{\\bar{B}}_{j})$ and hence we use 3D matrices. Here, $i,j = x,y,z$ in Fig. \\ref{fig:refsys}. 
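The event-by-event weighting described above can be illustrated with a small toy study: decay angles are drawn from $I(\cos\theta) \propto 1 + \alpha P\cos\theta$, a binned efficiency matrix supplies the weights $w_i = 1/\epsilon_i$, and the weighted, normalised moment returns the polarisation. All numbers here (decay parameter, input polarisation, binning, the flat 50% efficiency) are hypothetical choices for the sketch, not PANDA values:

```python
import random

random.seed(7)
ALPHA, P_TRUE, N = 0.75, 0.40, 200_000    # hypothetical decay parameter and polarisation

# Flat 2D efficiency matrix (10 scattering bins x 20 decay-angle bins); in a real
# analysis these entries would be filled from Monte Carlo, here 50% everywhere.
eps = [[0.5] * 20 for _ in range(10)]

num = den = 0.0
n_kept = 0
while n_kept < N:
    # rejection-sample cos(theta) from I(c) = (1 + ALPHA*P_TRUE*c)/2
    c = random.uniform(-1.0, 1.0)
    if random.random() >= (1.0 + ALPHA * P_TRUE * c) / (1.0 + ALPHA * P_TRUE):
        continue
    n_kept += 1
    scat_bin = random.randrange(10)               # hypothetical scattering-angle bin
    dec_bin = min(int((c + 1.0) / 2.0 * 20), 19)  # decay-angle bin of this event
    w = 1.0 / eps[scat_bin][dec_bin]              # event weight w_i = 1/eps_i
    num += w * c
    den += w

p_hat = 3.0 / ALPHA * num / den                   # weighted moment estimator of P_y
```

With a flat efficiency all weights are equal, the estimator reduces to the plain arithmetic mean, and `p_hat` recovers the input value 0.40 to within the statistical uncertainty of the sample.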
These three-dimensional correction matrices were also used in a cross-check analysis of the polarisation estimation, with consistent results.\n\nIn the $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$ case, we have assumed that the efficiency is symmetric with respect to the $\\Lambda \\to p\\pi^-$ and $\\bar{\\Lambda} \\to \\bar{p}\\pi^+$ decay angles, which were integrated out. \n\nThe estimator for the polarisation is given by\n\n\\begin{equation}\n\t\\reallywidehat{P_y}^{Y\/\\bar{Y}} = \\frac{3}{\\alpha}\\frac{\\sum^N_{i=1}\\cos\\theta_{y,i}^{B\/\\bar{B}}\\cdot{w_i(\\cos\\theta_{y,i}^{B\/\\bar{B}},\\cos\\theta_{\\bar{Y}})}}{\\sum^N_{i=1}w_i(\\cos\\theta_{y,i}^{B\/\\bar{B}},\\cos\\theta_{\\bar{Y}})}\n\t\\label{eq:recpol}\n\\end{equation}\n\n\\noindent where $w(\\cos\\theta_{y,i},\\cos\\theta_{\\bar{Y}})$ is the weight (Eq. (\\ref{eq:weight})) at the given $\\cos\\theta_y$ and $\\cos\\theta_{\\bar{Y}}$. $N$ is the number of events in the sample. For the spin correlations, the estimators are given by \n\n\\begin{equation}\n\t\\reallywidehat{C^{\\bar{Y}Y}_{\\mu\\nu}} = \\frac{9}{\\alpha\\bar{\\alpha}}\\frac{\\sum^N_{i=1} \\cos\\theta_{\\mu,i}^{\\bar{B}}\\cos\\theta_{\\nu,i}^B\\cdot w_i(\\cos\\theta_{\\mu,i}^{\\bar{B}},\\cos\\theta_{\\nu,i}^B,\\cos\\theta_{Y})}{\\sum^N_{i=1} w_i(\\cos\\theta_{\\mu,i}^{\\bar{B}},\\cos\\theta_{\\nu,i}^B,\\cos\\theta_{Y})}.\n\t\\label{eq:recspincorr}\n\\end{equation}\n\n\\subsubsection{Efficiency independent method}\n\\label{sec:effind}\n\nFor special cases, alternative estimators can be defined that do not require efficiency corrections. These have been treated thoroughly in Refs. \\cite{erikthesis,tayloethesis} and will be briefly summarised here. \n\nHere, it is most convenient to use the matrix formulation; see Sect. \\ref{sec:formalism}. 
The first order moments of the angles and their products can be gathered in a $4\\times4$ matrix as follows:\n\n\\begin{equation*}\n\tE=\\begin{pmatrix}\n\t\\braket{1} & \\braket{\\cos\\theta_{x}^{B}} & \\braket{\\cos\\theta_{y}^{B}} & \\braket{\\cos\\theta_{z}^{B}} \\\\\n\t\\braket{\\cos\\theta_{x}^{\\bar{B}}} & \\braket{\\cos\\theta_{x}^{\\bar{B}}\\cos\\theta_{x}^{B}} & \\braket{\\cos\\theta_{x}^{\\bar{B}}\\cos\\theta_{y}^{B}} & \\braket{\\cos\\theta_{x}^{\\bar{B}}\\cos\\theta_{z}^{B}} \\\\\n\t\\braket{\\cos\\theta_{y}^{\\bar{B}}} & \\braket{\\cos\\theta_{y}^{\\bar{B}}\\cos\\theta_{x}^{B}} & \\braket{\\cos\\theta_{y}^{\\bar{B}}\\cos\\theta_{y}^{B}} & \\braket{\\cos\\theta_{y}^{\\bar{B}}\\cos\\theta_{z}^{B}} \\\\\n\t\\braket{\\cos\\theta_{z}^{\\bar{B}}} & \\braket{\\cos\\theta_{z}^{\\bar{B}}\\cos\\theta_{x}^{B}} & \\braket{\\cos\\theta_{z}^{\\bar{B}}\\cos\\theta_{y}^{B}} & \\braket{\\cos\\theta_{z}^{\\bar{B}}\\cos\\theta_{z}^{B}} \n\t\\end{pmatrix}\n\\end{equation*}\n\nand some additional moments in vector form\n\n\\begin{align}\n\t& F = \\begin{pmatrix}\n\t\t\\braket{1} & \\braket{\\cos^2\\theta_{x}^{B}} & \\braket{\\cos^2\\theta_{y}^{B}} & \\braket{\\cos^2\\theta_{z}^{B}}\n\t\\end{pmatrix}\\\\\n\t& \\bar{F} = \\begin{pmatrix}\n\t\t\\braket{1} & \\braket{\\cos^2\\theta_{x}^{\\bar{B}}} & \\braket{\\cos^2\\theta_{y}^{\\bar{B}}} & \\braket{\\cos^2\\theta_{z}^{\\bar{B}}}\n\t\\end{pmatrix}.\n\\end{align}\n\n\\noindent We assume that the efficiency of the antihyperon and its decay is independent of that of the hyperon, \\textit{i.e.} \n\n\\begin{equation}\n \\epsilon(\\Omega_{\\bar{B}},\\Omega_B) = \\epsilon(\\Omega_{\\bar{B}})\\cdot\\epsilon(\\Omega_B).\n\\end{equation}\nWe then define the following matrices of efficiency weighted moments\n\n\\begin{align}\n\t& \\bar{\\mathcal{A}}_{\\mu,\\nu} \\equiv \\int \\cos\\theta_{\\mu}^{\\bar{B}} \\cos\\theta_{\\nu}^{\\bar{B}} \\epsilon(\\Omega_{\\bar{B}}) d\\Omega_{\\bar{B}} \\\\\n\t& \\mathcal{A}_{\\mu,\\nu} \\equiv \\int 
\\cos\\theta_{\\mu}^{B} \\cos\\theta_{\\nu}^{B} \\epsilon(\\Omega_{B}) d\\Omega_{B} \\\\\n\t& \\bar{\\mathcal{B}}_{\\mu,\\nu} \\equiv \\int \\cos^2\\theta_{\\mu}^{\\bar{B}} \\cos\\theta_{\\nu}^{\\bar{B}} \\epsilon(\\Omega_{\\bar{B}}) d\\Omega_{\\bar{B}} \\\\\n\t& \\mathcal{B}_{\\mu,\\nu} \\equiv \\int \\cos\\theta_{\\mu}^{B} \\cos^2\\theta_{\\nu}^{B} \\epsilon(\\Omega_{B}) d\\Omega_{B} \\\\\n\t& \\bar{\\mathcal{C}}_{\\mu} \\equiv \\bar{\\mathcal{A}}_{\\mu,0} \\\\\n\t& \\mathcal{C}_{\\nu} \\equiv \\mathcal{A}_{0,\\nu}.\n\\end{align}\n\n\\noindent By definition, the $\\mathcal{A}$ matrices are symmetric in $\\mu$ and $\\nu$. Furthermore, some elements of $\\mathcal{B}$ are identical to those of $\\mathcal{A}$, \\textit{e.g.} $\\mathcal{B}_{10}=\\mathcal{A}_{11}$, $\\mathcal{\\bar{B}}_{01}=\\mathcal{\\bar{A}}_{11}$. With these definitions, the moments can be related to the spin observables in the following way:\n\\begin{equation}\n\tE = \\frac{1}{16\\pi^2} \\bar{\\mathcal{A}} D \\mathcal{A}\n\t\\label{eq:Esolve}\n\\end{equation}\n\\begin{equation}\n\t\\bar{F} = \\frac{1}{16\\pi^2} \\bar{\\mathcal{B}} D \\mathcal{C}\n\t\\label{eq:Fbarsolve}\n\\end{equation}\n\\begin{equation}\n\tF = \\frac{1}{16\\pi^2} \\bar{\\mathcal{C}} D \\mathcal{B}\n\t\\label{eq:Fsolve}\n\\end{equation}\n\nIf the efficiency is symmetric with respect to $\\cos\\theta_y$ for both the antibaryon and the baryon, \\textit{i.e.}\n\\begin{align}\n\t&\\epsilon(\\cos\\theta_{x}^{\\bar{B}},\\cos\\theta_{y}^{\\bar{B}},\\cos\\theta_{z}^{\\bar{B}})=\\epsilon(\\cos\\theta_{x}^{\\bar{B}},-\\cos\\theta_{y}^{\\bar{B}},\\cos\\theta_{z}^{\\bar{B}})\\\\\n\t&\\epsilon(\\cos\\theta_{x}^{B},\\cos\\theta_{y}^{B},\\cos\\theta_{z}^{B})=\\epsilon(\\cos\\theta_{x}^{B},-\\cos\\theta_{y}^{B},\\cos\\theta_{z}^{B}).\n\\end{align}\n\\noindent then all matrix elements in $\\mathcal{A}$ and $\\mathcal{B}$ with odd powers of $\\cos\\theta_y$ are zero. 
The matrices then reduce to\n\\begin{equation}\n\t\\mathcal{A} = \\begin{pmatrix}\n\t\t\\mathcal{A}_{00} & \\mathcal{A}_{01} & 0 & \\mathcal{A}_{03} \\\\\n\t\t\\mathcal{A}_{01} & \\mathcal{A}_{11} & 0 & \\mathcal{A}_{13} \\\\\n\t\t0 & 0 & \\mathcal{A}_{22} & 0 \\\\\n\t\t\\mathcal{A}_{03} & \\mathcal{A}_{13} & 0 & \\mathcal{A}_{33} \\\\\n\t\\end{pmatrix}\n\\end{equation}\n\\begin{equation}\n\t\\mathcal{\\bar{B}} = \\begin{pmatrix}\n\t\t\\mathcal{\\bar{B}}_{00} & \\mathcal{\\bar{B}}_{01} & 0 & \\mathcal{\\bar{B}}_{03} \\\\\n\t\t\\mathcal{\\bar{B}}_{10} & \\mathcal{\\bar{B}}_{11} & 0 & \\mathcal{\\bar{B}}_{13} \\\\\n\t\t\\mathcal{\\bar{B}}_{20} & \\mathcal{\\bar{B}}_{21} & 0 & \\mathcal{\\bar{B}}_{23} \\\\\n\t\t\\mathcal{\\bar{B}}_{30} & \\mathcal{\\bar{B}}_{31} & 0 & \\mathcal{\\bar{B}}_{33} \\\\\n\t\\end{pmatrix},\\;\\;\\; \\mathcal{B} = \\begin{pmatrix}\n\t\t\\mathcal{B}_{00} & \\mathcal{B}_{01} & \\mathcal{B}_{02} & \\mathcal{B}_{03} \\\\\n\t\t\\mathcal{B}_{10} & \\mathcal{B}_{11} & \\mathcal{B}_{12} & \\mathcal{B}_{13} \\\\\n\t\t0 & 0 & 0 & 0 \\\\\n\t\t\\mathcal{B}_{30} & \\mathcal{B}_{31} & \\mathcal{B}_{32} & \\mathcal{B}_{33} \\\\\n\t\\end{pmatrix}\n\\end{equation}\n\nWith these simplifications, the right-hand sides of Eqs.~\\eqref{eq:Esolve}, \\eqref{eq:Fbarsolve} and \\eqref{eq:Fsolve} can be solved, resulting in terms that consist of products of $\\mathcal{A}_{\\mu,\\nu}$, $\\mathcal{B}_{\\mu,\\nu}$ and $D_{\\mu,\\nu}$. We find that some of these terms are small in magnitude. If these terms can be neglected, then the non-zero spin observables are shown in Ref. 
\\cite{tayloethesis} to be\n\n\\begin{align}\n\t&D_{20} = \\frac{E_{20}}{\\bar{F}_2},\\;\\;\\; D_{02} = \\frac{E_{02}}{F_2} \\\\\n\t&D_{22} = \\frac{E_{22}}{\\bar{F}_2 F_2}\\\\\n\t&D_{11} = \\frac{E_{11}-E_{10}E_{01}}{\\bar{F}_1 F_1},\\;\\;\\;D_{13} = \\frac{E_{13}-E_{10}E_{03}}{\\bar{F}_1 F_3}\\\\\n\t&D_{31} = \\frac{E_{31}-E_{30}E_{01}}{\\bar{F}_3 F_1},\\;\\;\\;D_{33} = \\frac{E_{33}-E_{30}E_{03}}{\\bar{F}_3 F_3},\n\\end{align}\nwhich translates to\n\\begin{align}\n\t& P^{\\bar{Y}}_{y} = \\frac{1}{\\bar{\\alpha}}\\frac{\\braket{\\cos\\theta_{y,\\bar{B}}}}{\\braket{\\cos^2\\theta_{y,\\bar{B}}}}\\label{eq:pybarfinal} \\\\\n\t& P^Y_{y} = \\frac{1}{\\alpha}\\frac{\\braket{\\cos\\theta_{y,B}}}{\\braket{\\cos^2\\theta_{y,B}}}\\label{eq:pyfinal} \\\\\n\t& C^{\\bar{Y}Y}_{yy} = \\frac{1}{\\bar{\\alpha}\\alpha} \\frac{\\braket{\\cos\\theta_{y,\\bar{B}}\\cos\\theta_{y,B}}}{\\braket{\\cos^2\\theta_{y,\\bar{B}}}\\braket{\\cos^2\\theta_{y,B}}}\\label{eq:cyyfinal} \\\\\n\t& C^{\\bar{Y}Y}_{xx} = \\frac{1}{\\bar{\\alpha}\\alpha} \\frac{\\braket{\\cos\\theta_{x,\\bar{B}}\\cos\\theta_{x,B}} - \\braket{\\cos\\theta_{x,\\bar{B}}} \\braket{\\cos\\theta_{x,B}} }{\\braket{\\cos^2\\theta_{x,\\bar{B}}}\\braket{\\cos^2\\theta_{x,B}}}\\label{eq:cxxfinal} \\\\\n\t& C^{\\bar{Y}Y}_{xz} = \\frac{1}{\\bar{\\alpha}\\alpha} \\frac{\\braket{\\cos\\theta_{x,\\bar{B}}\\cos\\theta_{z,B}} - \\braket{\\cos\\theta_{x,\\bar{B}}} \\braket{\\cos\\theta_{z,B}} }{\\braket{\\cos^2\\theta_{x,\\bar{B}}}\\braket{\\cos^2\\theta_{z,B}}}\\label{eq:cxzfinal} \\\\\n\t& C^{\\bar{Y}Y}_{zx} = \\frac{1}{\\bar{\\alpha}\\alpha} \\frac{\\braket{\\cos\\theta_{z,\\bar{B}}\\cos\\theta_{x,B}} - \\braket{\\cos\\theta_{z,\\bar{B}}} \\braket{\\cos\\theta_{x,B}} }{\\braket{\\cos^2\\theta_{z,\\bar{B}}}\\braket{\\cos^2\\theta_{x,B}}}\\label{eq:czxfinal} \\\\\n\t& C^{\\bar{Y}Y}_{zz} = \\frac{1}{\\bar{\\alpha}\\alpha} \\frac{\\braket{\\cos\\theta_{z,\\bar{B}}\\cos\\theta_{z,B}} - \\braket{\\cos\\theta_{z,\\bar{B}}} 
\\braket{\\cos\\theta_{z,B}} }{\\braket{\\cos^2\\theta_{z,\\bar{B}}}\\braket{\\cos^2\\theta_{z,B}}}.\\label{eq:czzfinal}\n\\end{align}\n\nTo summarise, the efficiency independent method is viable if the following three conditions are met:\n\\begin{enumerate}\n\t\\item The detection efficiency of the antibaryon is independent of that of the baryon.\n\t\\item The efficiency is symmetric in $\\cos\\theta_{y}^{B}$ and $\\cos\\theta_{y}^{\\bar{B}}$. \n\t\\item Higher order terms emerging from Eqs.~\\eqref{eq:Esolve}, \\eqref{eq:Fbarsolve} and \\eqref{eq:Fsolve} can be neglected.\n\\end{enumerate}\n\nSimulations show that the first criterion is fulfilled for both channels at both momenta, whereas the second and third criteria are channel- and momentum-dependent. For $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ at 1.642 GeV\/$c$, the second criterion is fulfilled. Furthermore, the higher order terms appearing in the expressions for $P^Y_y$, $P^{\\bar{Y}}_y$ and $C^{\\bar{Y}Y}_{yy}$ can be neglected, whereas they are large for $C^{\\bar{Y}Y}_{xx}$, $C^{\\bar{Y}Y}_{zz}$, $C^{\\bar{Y}Y}_{xz}$ and $C^{\\bar{Y}Y}_{zx}$. This means we expect the efficiency independent method to work for $P^Y_y$, $P^{\\bar{Y}}_y$ and $C^{\\bar{Y}Y}_{yy}$ but not for the other observables.\n\nFor the $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$ channel at 4.6 GeV\/$c$, all three criteria are fulfilled for all spin observables. Thus the efficiency independent method can be used without restrictions in this case. At 7.0 GeV\/$c$, the second criterion is not fulfilled, which means that the efficiency independent method cannot be used to extract the polarisation of either the hyperon or the antihyperon. 
However, it can be applied to estimate all spin correlations.\n\n\n\n\\section{Results}\n\\label{sec:results}\n \n\n\\subsection{Reconstruction rates}\n\\label{sec:rates}\nWith the reconstruction efficiencies obtained from the simulations, the measured $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ cross section from Ref. \\cite{PS185164} and the predicted cross sections from Ref. \\cite{kaidalov}, we can calculate the expected rate at which hyperons can be reconstructed exclusively in PANDA. We have performed the calculations for two different scenarios: with the Phase One luminosity, which will be around $10^{31}$cm$^{-2}$s$^{-1}$, and with the 20 times larger design luminosity. The results are presented in Table \\ref{hypprod}.\nHowever, during the very first period of data taking, the luminosity at low beam momenta will be smaller by about a factor of two. This means that in the first $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ benchmark study, the actual luminosity will be about $5\\cdot10^{30}$cm$^{-2}$s$^{-1}$, giving half the reconstruction rate expected at the nominal Phase One luminosity. 
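The rate estimate is a simple product of cross section, luminosity, reconstruction efficiency and decay branching fractions. The sketch below assumes that the quoted exclusive efficiencies do not include the branching fractions, which are therefore applied separately; the branching value $\approx 0.64$ for $\Lambda \to p\pi^-$ and $\approx 1$ for $\Xi \to \Lambda\pi$ are taken as round numbers for illustration:

```python
MICROBARN_TO_CM2 = 1.0e-30

def reconstruction_rate(sigma_microbarn, luminosity_cm2s, efficiency, branching=1.0):
    """Expected exclusive reconstruction rate in events per second."""
    return sigma_microbarn * MICROBARN_TO_CM2 * luminosity_cm2s * efficiency * branching

# p_beam = 4.6 GeV/c: sigma ~ 1 ub (predicted), Phase One luminosity 1e31 cm^-2 s^-1,
# exclusive efficiency 8.22%, Lambda -> p pi- on both sides (BR ~ 0.64 each)
rate_46 = reconstruction_rate(1.0, 1.0e31, 0.0822, branching=0.64**2)
```

Under these assumptions the result is about 0.3 s$^{-1}$, consistent with Table \ref{hypprod}; at the 20 times larger design luminosity the rate scales up by the same factor.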
The $S\/B$ ratios are calculated using all remaining signal events $S$ and background events from all sources, weighted using their corresponding weight factors given in Tables \\ref{tab:samplesize1} and \\ref{tab:samplesize2}.\n\\begin{table*}\n\\centering\n\\begin{tabular}{llllllll}\n\\hline\n$p_{\\bar{p}}$ (GeV\/$c$) & Reaction & $\\sigma$ ($\\mu$b) & Eff (\\%) & Decay & S\/B & Rate ($s^{-1}$) & Rate ($s^{-1}$)\\\\&&&&&& at $10^{31}$cm$^{-2}$s$^{-1}$ & at 2$\\cdot10^{32}$cm$^{-2}$s$^{-1}$\\\\\\hline\n1.64 & $\\bar{p}p \\rightarrow \\bar{\\Lambda}\\Lambda$ & 64.1 $\\pm$ 1.6~\\cite{PS185164_1} & 16.04 $\\pm$ 0.04 & $\\Lambda \\rightarrow p \\pi^-$ & 114 & 44 & 880 \\\\\\hline\n\n4.6 & $\\bar{p}p \\rightarrow \\bar{\\Xi}^+\\Xi^-$ & $\\approx$1~~\\cite{kaidalov} & 8.22 $\\pm$ 0.03 & $\\Xi^- \\rightarrow \\Lambda \\pi^-$ & 270 & 0.3 & 6 \\\\\\hline\n\n7.0 & $\\bar{p}p \\rightarrow \\bar{\\Xi}^+\\Xi^-$ & $\\approx$0.3~~\\cite{kaidalov} & 7.95 $\\pm$ 0.03 & $\\Xi^- \\rightarrow \\Lambda \\pi^-$ & 170 & 0.1 & 2 \\\\\\hline\n\\end{tabular}\n\\caption{Results from simulation studies of the various production reactions of ground state hyperons. The efficiencies are for exclusive reconstruction, and are presented with statistical uncertainties. The $S\/B$ denotes the signal-to-background ratio.}\n\\label{hypprod} \n\\end{table*}\n\n\\subsubsection{Effects from the $\\bar{\\Xi}^+$ angular distribution}\n\nThe distribution of the $\\bar{\\Xi}^+$ scattering angle is not known, since so far, only a few bubble-chamber events exist from the $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$ reaction \\cite{Musgrave1965}. The nominal simulations in this work were therefore performed for isotropically distributed $\\bar{\\Xi}^+$ antihyperons. However, in reality, the angular distribution in the CMS system of the reaction may be forward peaking in a similar way as for $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ \\cite{PS185} and $\\bar{p}p \\to \\bar{\\Sigma}^0\\Lambda + c.c.$ \\cite{sigma6}. 
Since the $\\bar{\\Xi}^+$ shares one quark less with the initial $\\bar{p}$ than $\\bar{\\Lambda}$ and $\\bar{\\Sigma}^0$ do, the forward peak is expected to be less pronounced for $\\bar{\\Xi}^+$. Investigations with meson exchange models have resulted in a fairly strong anisotropy for $\\bar{\\Xi}^0$, while the distribution for $\\bar{\\Xi}^+$ is almost flat \\cite{XiMEX}. This can have an impact on the total reconstruction efficiency, partly because decay products of the $\\bar{\\Xi}^+$ may escape detection by being emitted along the beam pipe, and partly because a backward-going $\\Xi^-$ in the CMS system is almost at rest in the lab system. Its decay products may then have too low energy to reach the detectors. \n\nIn order to investigate the sensitivity of the total reconstruction efficiency to the $\\bar{\\Xi}^+$ angular distribution, additional simulations were carried out for two other scenarios with more forward-going antihyperons. The \\textit{extreme case} employs angular distribution parameters from the most forward-peaking distributions that have been observed so far, namely in $\\bar{p}p \\to \\bar{\\Lambda}\\Sigma^0 + c.c.$ \\cite{sigma6}. The \\textit{lenient case} represents an intermediate scenario with parameters between those of a flat distribution and those of an extreme one. The distributions are shown in Fig. \\ref{fig:costhtfwp} and the results from the simulations are presented in Table \\ref{tab:fwpeff}. Indeed, the reconstruction efficiency decreases for a strongly forward-peaking $\\bar{\\Xi}^+$ distribution. 
However, the most extreme case results in a reduction of 25--35\\% and the total efficiency -- 5--6\\% -- is still feasible for $\\bar{p}p \\to \\bar{\\Xi}^+\\Xi^-$ studies.\n\n\t\\begin{figure*}[ht]\n\t\t \\includegraphics[width=0.5\\linewidth]{fwpdist46BW.pdf}\n\t\t \\includegraphics[width=0.5\\linewidth]{fwpdist7BW.pdf}\n\t\t\t\\caption{Simulated angular distributions of the $\\bar{p}p\\to\\bar{\\Xi}^+\\Xi^-$ reaction using a flat distribution (black), lenient case (dashed), and the extreme case (dotted) at $p_{\\mathrm{beam}}=4.6$ GeV\/$c$ (left) and $p_{\\mathrm{beam}}=7.0$ GeV\/$c$ (right). Note the different scales on the $y$-axes.}\n\t\t\t\\label{fig:costhtfwp}\n\t\t\\end{figure*}\n\t\t\n\t\t\t\\begin{table}[ht]\n\t\t\t\\centering\n\t\t\t\\begin{tabular}{c|c|c|c}\n\t\t\t\t$p_{\\bar{p}}$ (GeV\/$c$) & $\\epsilon_{Isotropic}$ (\\%) & $\\epsilon_{Lenient}$ (\\%) & $\\epsilon_{Extreme}$ (\\%) \\\\ \\hline\n\t\t\t\t4.6 & 8.22 $\\pm$ 0.03 & 7.7 $\\pm$ 0.03 & 6.1 $\\pm$ 0.03 \\\\\n\t\t\t\t7.0 & 7.95 $\\pm$ 0.03 & 7.5 $\\pm$ 0.03 & 5.0 $\\pm$ 0.03 \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\t\\caption{Reconstruction efficiency of the $\\bar{p}p\\to\\bar{\\Xi}^+\\Xi^-$ reaction with an isotropic angular distribution, a lenient one and an extremely forward-peaking distribution. \\label{tab:fwpeff}}\n\t\t\\end{table}\n\n\n\\subsection{Spin observables}\n\\label{sec:spin}\n\nThe spin observables defined in Sect. \\ref{sec:formalism} have been reconstructed with two independent methods to handle the efficiency, described in Sect. \\ref{sec:paramest}. In both cases, we have used data samples that are realistic during the first year of data taking with PANDA, given the reconstruction rates estimated in Sect. \\ref{sec:rates}. 
Since the background can be suppressed to a very low level, background effects are neglected in these spin studies.\n\n\\subsubsection{The $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ reaction}\n\nIn this study, $1.5\\cdot10^6$ reconstructed $\\bar{p}p \\to \\bar{\\Lambda}\\Lambda$ events were used. This amount can be collected in 24 hours during the first phase of data taking with PANDA, where the luminosity at the lowest beam momenta will be about half of that of intermediate and high momenta, \\textit{i.e.} $5\\cdot10^{30}$cm$^{-2}$s$^{-1}$. \n\nThe polarisations of $\\bar{\\Lambda}$ and $\\Lambda$ as functions of the $\\bar{\\Lambda}$ scattering angle in the CMS system are shown in Fig. \\ref{fig:llbarpol}. The $\\bar{\\Lambda}$ and $\\Lambda$ polarisations are shown on the left in the same plot. Since charge conjugation invariance requires $P^{Y} = P^{\\bar{Y}}$, deviations from this equality could indicate artificial bias from the detector or the reconstruction procedure. However, the agreement is excellent. In the right panels, the average of the $\\bar{\\Lambda}$ and $\\Lambda$ polarisations is shown. The top panels show the polarisations extracted with efficiency corrections, estimated by Eq. (\\ref{eq:pol}). In the bottom panels, the polarisations are extracted using the efficiency independent method, applying Eqs. (\\ref{eq:pybarfinal}) and (\\ref{eq:pyfinal}). The polarisations reconstructed with the two techniques agree very well with the input distributions, shown as solid curves. 
The statistical uncertainties are found to be very small.\n\n\\begin{figure*}[ht]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/pol.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/pol_nocorr.pdf}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/polavg.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/polavg_nocorr.pdf}\n \\end{minipage}\n\t\\caption{Top-left: Polarisation of the $\\bar{\\Lambda}$ (black) and the $\\Lambda$ (open) at $p_{\\mathrm{beam}}=1.642$ GeV\/c, reconstructed using the efficiency dependent method with 2D efficiency matrices. Top-right: Average values of the two reconstructed polarisations. Bottom-left: Polarisations reconstructed using the efficiency independent method. Bottom-right: Average of the polarisations reconstructed with the efficiency independent method. The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the solid curves the input model.}\n\t\\label{fig:llbarpol}\n\\end{figure*}\n\nThe diagonal spin correlations, \\textit{i.e.} $C^{\\bar{Y}Y}_{xx}$, $C^{\\bar{Y}Y}_{yy}$ and $C^{\\bar{Y}Y}_{zz}$, are shown in Figure \\ref{fig:llbarspincorrii}. In the top left and right panels, as well as the bottom left, the correlations are extracted using efficiency corrections. The bottom right panel displays the average $(C^{\\bar{Y}Y}_{xz}+C^{\\bar{Y}Y}_{zx})\/2$, also extracted using efficiency corrections. In most cases, the reconstructed distributions agree fairly well with the input distributions. However, significant deviations are observed when applying the efficiency independent parameter estimation method, as seen in Fig. \\ref{fig:obsll}. This is expected since we concluded in Sect. \\ref{sec:effind} that higher order terms could not be neglected in this case. 
With the efficiency dependent method, all deviations are small and do not follow any obvious trend. Furthermore, it is clear that the statistical precision will be greatly improved compared to the PS185 measurements \\cite{PS185164}.\n\n\\begin{figure*}[ht]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/cxx.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/czz.pdf}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/cyy.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/cxzavg.pdf}\n \\end{minipage}\n\t\\caption{Spin correlations of the $\\bar{\\Lambda}\\Lambda$ pair produced at $p_{\\mathrm{beam}}=1.642$ GeV\/c. These observables were estimated with the efficiency dependent method, using 3D efficiency matrices. Top-left: $C^{\\bar{Y}Y}_{xx}$, top-right: $C^{\\bar{Y}Y}_{yy}$ and bottom-left: $C^{\\bar{Y}Y}_{zz}$ of the $\\bar{\\Lambda}\\Lambda$ pair. Bottom-right: The average $(C^{\\bar{Y}Y}_{xz} + C^{\\bar{Y}Y}_{zx})\/2$. The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the solid curves the input distributions.}\n\t\\label{fig:llbarspincorrii}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/cxx_nocorr.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/czz_nocorr.pdf}\\\\\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/cyy_nocorr.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figllbarBW\/cxzavg_nocorr.pdf}\\\\\n\t\n \\end{minipage}\n\t\\caption{Spin correlations of the $\\bar{\\Lambda}\\Lambda$ pair produced at $p_{\\mathrm{beam}}=1.642$ GeV\/c. These observables were estimated using the efficiency independent method. 
Top-left: $C^{\bar{Y}Y}_{xx}$, top-right: $C^{\bar{Y}Y}_{yy}$ and bottom-left: $C^{\bar{Y}Y}_{zz}$ of the $\bar{\Lambda}\Lambda$ pair. Bottom-right: The average $(C^{\bar{Y}Y}_{xz} + C^{\bar{Y}Y}_{zx})\/2$. The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the solid curve the input distributions. For the $C^{\bar{Y}Y}_{xx}$, $C^{\bar{Y}Y}_{zz}$ and $(C^{\bar{Y}Y}_{xz} + C^{\bar{Y}Y}_{zx})\/2$ correlations, we do not expect agreement with the input model (solid curve) due to large higher order terms. In the case of the $C^{\bar{Y}Y}_{yy}$ correlation, higher order terms were found to be negligible.}\n\n\t\label{fig:obsll}\n\end{figure*}\n\n\n\n\subsubsection{The $\bar{p}p \to \bar{\Xi}^+\Xi^-$ reaction}\nTwo studies have been performed at beam momenta of $p_{\mathrm{beam}}=4.6$ GeV\/c and $p_{\mathrm{beam}}=7.0$ GeV\/c, using $5.86\cdot10^5$ and $4.52\cdot10^5$ reconstructed $\bar{p}p \to \bar{\Xi}^+\Xi^-$ events, respectively. The sample at $p_{\mathrm{beam}}=4.6$ GeV\/c can be collected in 21 days, while the sample at $p_{\mathrm{beam}}=7.0$ GeV\/c requires 55 days of data taking, in line with the planned 80-day campaign at an energy around the $X(3872)$ mass. Here, we assume a luminosity of $10^{31}$cm$^{-2}$s$^{-1}$, which will be achievable at these energies during the first phase of data taking with PANDA. 
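As a sanity check on these sample sizes, the effective cross section times reconstruction efficiency implied by the quoted event counts can be back-computed from the assumed luminosity. The sketch below uses only the numbers stated above; since the split between production cross section and reconstruction efficiency is not given here, only their product is recovered.

```python
# Back-of-the-envelope check of the quoted Xibar-Xi sample sizes:
# N = L * sigma_eff * t  =>  sigma_eff = N / (L * t),
# where sigma_eff lumps together the production cross section and
# the reconstruction efficiency.

L = 1e31                # luminosity in cm^-2 s^-1 (first phase of PANDA)
SECONDS_PER_DAY = 86400.0
CM2_PER_NB = 1e-33      # 1 nb = 1e-33 cm^2

def implied_sigma_eff_nb(n_events, days):
    """Effective cross section x efficiency (in nb) implied by a sample."""
    integrated_lumi = L * days * SECONDS_PER_DAY   # in cm^-2
    return n_events / integrated_lumi / CM2_PER_NB

# Samples quoted in the text:
s46 = implied_sigma_eff_nb(5.86e5, 21)   # p_beam = 4.6 GeV/c, 21 days
s70 = implied_sigma_eff_nb(4.52e5, 55)   # p_beam = 7.0 GeV/c, 55 days
print(f"4.6 GeV/c: {s46:.1f} nb, 7.0 GeV/c: {s70:.1f} nb")
```

This gives roughly 32 nb and 10 nb, respectively, for the product of cross section and overall efficiency.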
\n\n\begin{figure*}[ht]\n  \centering\n  \begin{minipage}{.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/pol.pdf}\\\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/pol_nocorr.pdf}\n\n  \end{minipage}%\n  \begin{minipage}{0.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/polavg.pdf}\\\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/polavg_nocorr.pdf}\n  \end{minipage}\n\t\caption{Top-left: Reconstructed polarisation of the $\bar{\Xi}^+$ (black) and the $\Xi^-$ (open) at $p_{\mathrm{beam}}=4.6$ GeV\/c using the efficiency dependent method with 2D efficiency matrices. Top-right: Average values of the two reconstructed polarisations. Bottom-left: Polarisations reconstructed with the efficiency independent method. Bottom-right: The average of the polarisations reconstructed with the efficiency independent method. The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the solid curves the input model.}\n\t\label{fig:xixibarpol46}\n\end{figure*}\n\nIn Fig. \ref{fig:xixibarpol46}, the polarisations of the $\Xi^-$ and the $\bar{\Xi}^+$ at 4.6 GeV\/$c$ are shown individually (top-left panel) and averaged (top-right panel) for efficiency corrected data. The agreement between $\Xi^-$ and $\bar{\Xi}^+$, as well as between the input distributions and the reconstructed ones, is excellent, and the statistical uncertainties are small. Also when using the efficiency independent method, there is good agreement between reconstructed data and the input model (bottom-left and bottom-right panels). This is expected since the simulations showed that all criteria are fulfilled for this reaction at this beam momentum.\n\nThe spin correlations $C^{\bar{Y}Y}_{xx}$, $C^{\bar{Y}Y}_{yy}$, $C^{\bar{Y}Y}_{zz}$ and the average $(C^{\bar{Y}Y}_{xz}+C^{\bar{Y}Y}_{zx})\/2$ are shown in Fig. 
\ref{fig:xixibar46spincorr} for the same beam momentum. The agreement between the input distributions and the reconstructed distributions is good. Fig. \ref{fig:obs46} displays two examples of spin correlations reconstructed with the efficiency independent method. The $C^{\bar{Y}Y}_{yy}$ correlation agrees well with the input model whereas some deviations are seen in the case of $C^{\bar{Y}Y}_{xx}$, despite the fact that the criteria outlined in Sect. \ref{sec:effind} are fulfilled. This shows that this observable is more sensitive to the efficiency than the $C^{\bar{Y}Y}_{yy}$ and that the efficiency independent method has to be used with caution. \n\n\begin{figure*}[ht]\n  \centering\n  \begin{minipage}{.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/cxx.pdf}\\\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/czz.pdf}\n  \end{minipage}%\n  \begin{minipage}{0.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/cyy.pdf}\\\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/cxzavg.pdf}\n  \end{minipage}\n\t\caption{Reconstructed spin correlations of the $\bar{\Xi}^+\Xi^-$ pair at $p_{\mathrm{beam}}=4.6$ GeV\/c. Top-left: the $C^{\bar{Y}Y}_{xx}$ correlation. Top-right: $C^{\bar{Y}Y}_{yy}$. Bottom-left: $C^{\bar{Y}Y}_{zz}$. Bottom-right: the reconstructed average $(C^{\bar{Y}Y}_{xz} + C^{\bar{Y}Y}_{zx})\/2$. The spin correlations are reconstructed using the efficiency dependent method. 
The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the solid curve the input model.}\n\t\label{fig:xixibar46spincorr}\n\end{figure*}\n\n\n\begin{figure*}[ht]\n  \centering\n  \begin{minipage}{.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/cxx_nocorr.pdf}\n  \end{minipage}%\n  \begin{minipage}{0.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/cyy_nocorr.pdf}\n  \end{minipage}\n\t\caption{Spin correlations of the $\bar{\Xi}^+\Xi^-$ pair at $p_{\mathrm{beam}}=4.6$ GeV\/c, reconstructed with the efficiency independent method. Left: the $C^{\bar{Y}Y}_{xx}$ correlation. Right: the $C^{\bar{Y}Y}_{yy}$ correlation. The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the red curves the input model.}\n\t\label{fig:obs46}\n\end{figure*}\n\n\nIn Fig. \ref{fig:xixibarpol7}, the polarisations of the $\bar{\Xi}^+$ and $\Xi^-$ at 7.0 GeV\/$c$ are shown. In the left panel, where the efficiency dependent method has been used, we see that the reconstructed polarisations agree well with the input model. In the right panel, the efficiency independent method is used. Here, some disagreement is observed with respect to the input model, as expected since one of the criteria in Sect. \ref{sec:effind} is not fulfilled. Furthermore, we observe that the $\bar{\Xi}^+$ polarisation disagrees with the $\Xi^-$ polarisation. This shows that a comparison between hyperon and antihyperon observables serves as a consistency check.\n\nIn Fig. \ref{fig:xixibar7spincorr}, the spin correlations of the $\bar{\Xi}^+\Xi^-$ pair are shown at 7.0 GeV\/$c$, reconstructed with the efficiency dependent method. The reconstructed distributions agree with the input ones, indicating that the reconstruction and analysis procedure does not impose any bias. In Fig. 
\\ref{fig:obs7}, the $C^{\\bar{Y}Y}_{xx}$ and $C^{\\bar{Y}Y}_{yy}$ spin correlations are shown, reconstructed with the efficiency independent method. Even in this case, the reconstructed distributions agree well with the input models.\n\n\n\\begin{figure*}[ht]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figxixibar7BW\/pol.pdf}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figxixibar7BW\/pol_nocorr.pdf}\n \\end{minipage}\n\t\\caption{The polarisation of the $\\bar{\\Xi}^+$ (black) and the $\\Xi^-$ (open) hyperons at $p_{\\mathrm{beam}}=7.0$ GeV\/c. Left: data reconstructed using the efficiency dependent method. Right: data reconstructed using the efficiency independent method. The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the solid curve the input model.}\n\t\\label{fig:xixibarpol7}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n \\centering\n \\begin{minipage}{.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figxixibar7BW\/cxx.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figxixibar7BW\/czz.pdf}\n \\end{minipage}%\n \\begin{minipage}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=1.\\linewidth]{figxixibar7BW\/cyy.pdf}\\\\\n \\includegraphics[width=1.\\linewidth]{figxixibar7BW\/cxzavg.pdf}\n \\end{minipage}\n\t\\caption{Reconstructed spin correlations (top-left) $C^{\\bar{Y}Y}_{xx}$, (top-right) $C^{\\bar{Y}Y}_{yy}$ and (bottom-left) $C^{\\bar{Y}Y}_{zz}$ of the $\\bar{\\Xi}^+\\Xi^-$ pair at $p_{\\mathrm{beam}}=7.0$ GeV\/c. (bottom-right) Reconstructed average $(C^{\\bar{Y}Y}_{xz} + C^{\\bar{Y}Y}_{zx})\/2$. The spin correlations are reconstructed at $p_{\\mathrm{beam}}=7.0$ GeV\/c using acceptance corrections. 
The vertical error bars represent statistical uncertainties, the horizontal bars the bin widths and the solid curves the input model.}\n\t\label{fig:xixibar7spincorr}\n\end{figure*}\n\n\begin{figure*}[ht]\n  \centering\n  \begin{minipage}{.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar7BW\/cxx_nocorr.pdf}\n  \end{minipage}%\n  \begin{minipage}{0.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar7BW\/cyy_nocorr.pdf}\n  \end{minipage}\n\t\caption{Reconstructed spin correlations of the $\bar{\Xi}^+\Xi^-$ pair at $p_{\mathrm{beam}}=7.0$ GeV\/$c$, reconstructed with the efficiency independent method. Left: The $C^{\bar{Y}Y}_{xx}$ correlation. Right: The $C^{\bar{Y}Y}_{yy}$ correlation. The solid curves represent the input model.}\n\t\label{fig:obs7}\n\end{figure*}\n\nThe singlet fractions of the $\bar{\Xi}^+\Xi^-$ pair, calculated from the spin correlations according to Eq. (\ref{eq:spinfrac}), are shown in Fig. \ref{fig:xixibarsf} as a function of the $\bar{\Xi}^+$ scattering angle. The results show that the prospects of measuring the singlet fraction, and thereby establishing in which spin state the $\bar{\Xi}^+\Xi^-$ pair is produced, are very good. It will also be possible to test the predictions from Ref. \cite{XiMEX}.\n\n\begin{figure*}[ht]\n  \centering\n  \begin{minipage}{.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar46BW\/Fs.pdf}\n\n  \end{minipage}%\n  \begin{minipage}{0.5\textwidth}\n  \centering\n  \includegraphics[width=1.\linewidth]{figxixibar7BW\/Fs.pdf}\n  \end{minipage}\n\t\caption{Reconstructed singlet fractions at $p_{\mathrm{beam}}=4.6$ GeV\/c (left) and $p_{\mathrm{beam}}=7.0$ GeV\/c (right). The vertical error bars are statistical uncertainties only. 
The horizontal bars are the bin widths.}\n\t\label{fig:xixibarsf}\n\end{figure*}\n\n\subsection{Systematic uncertainties}\n\label{sec:systematic}\n\nIt is hard to evaluate systematic uncertainties before the experiment is taken into operation, since effects such as trigger efficiencies or imperfections in tracking or in the Monte Carlo implementation of the detector are difficult to estimate without real data. \n\nIn the feasibility study of electromagnetic form factors in PANDA \cite{EMFFPanda} as well as in the simulation of the foreseen energy scan around the $X(3872)$ \cite{xscan}, uncertainties in the estimated luminosity and background constitute the most important sources of systematics. While being very important in cross section measurements, effects from the uncertainty in the luminosity are expected to be negligible in measurements of differential distributions. This is because such uncertainties should be uniformly distributed over the angles of the final state particles. Regarding the background, the displaced decay vertices of hyperons result in a very distinct event topology that allows for a very strong suppression of background. Furthermore, the cross sections of the hyperon channels studied in this work are several orders of magnitude larger than in Refs. \cite{EMFFPanda} and \cite{xscan}.\n\nNon-negligible systematic effects can arise from model-dependencies in the efficiency correction. The method of moments introduces an uncertainty for each measured variable that is integrated out when calculating each moment. In multi-dimensional problems like the ones presented here, this needs a thorough investigation. Therefore, we have carried out three comparative studies: i) between generated distributions on one hand and reconstructed and efficiency corrected distributions on the other; ii) between extracted hyperon and antihyperon parameters; iii) between two different parameter estimation techniques. 
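For the simplest observable, the polarisation, the moment-based extraction underlying these comparisons can be illustrated with a toy simulation. The sketch below assumes the standard one-dimensional decay angular distribution $f(\cos\theta_y) = (1 + \alpha P \cos\theta_y)/2$, from which $\langle\cos\theta_y\rangle = \alpha P/3$ and hence $\hat{P} = 3\langle\cos\theta_y\rangle/\alpha$; the decay asymmetry $\alpha = 0.75$ and the polarisation value are illustrative, and no detector efficiency is applied.

```python
import random

def sample_cos_theta(alpha, pol, rng):
    """Draw cos(theta_y) from f(x) = (1 + alpha*pol*x)/2 on [-1, 1] by rejection."""
    fmax = (1.0 + abs(alpha * pol)) / 2.0
    while True:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) <= (1.0 + alpha * pol * x) / 2.0:
            return x

def polarisation_moment_estimate(cos_thetas, alpha):
    """Method of moments: <cos theta> = alpha*P/3  =>  P = 3*<cos theta>/alpha."""
    mean = sum(cos_thetas) / len(cos_thetas)
    return 3.0 * mean / alpha

rng = random.Random(1)
alpha, P_true = 0.75, 0.4   # illustrative decay asymmetry and polarisation
data = [sample_cos_theta(alpha, P_true, rng) for _ in range(200_000)]
P_hat = polarisation_moment_estimate(data, alpha)
print(f"true P = {P_true}, estimated P = {P_hat:.3f}")
```

In the full analysis the same moment idea is applied in several dimensions and folded with the efficiency matrices, which is where the model-dependence discussed above enters.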
Significant differences only appear for the efficiency independent method and are well understood, since in these cases the necessary criteria for using the efficiency independent method are not fulfilled. However, for the high-precision studies enabled by the design luminosity, it will likely be necessary to use a model-independent method for extracting the spin observables, \textit{e.g.} a Maximum Likelihood-based method similar to the one in Refs. \cite{bes3prl,bes3nature}. For $\bar{p}p$ reactions, a dedicated formalism and analysis framework will be needed for this purpose.\n\n\n\section{Summary and discussion}\n\label{sec:summary}\n\nThe feasibility of exclusive reconstruction of two antihyperon-hyperon reactions in the foreseen antiproton experiment PANDA at FAIR has been investigated: $\bar{p}p \to \bar{\Lambda}\Lambda$ and $\bar{p}p \to \bar{\Xi}^+\Xi^-$. The former has been studied with the PS185 experiment and will be used for quality assurance and fine-tuning of detectors, data acquisition, reconstruction and analysis. However, even at the modest luminosity during the start-up phase of PANDA, a world-record sample can be collected in a few days. Furthermore, the background can be suppressed to a very low level. This will allow PANDA to push forward the state of the art in the measurement of spin observables. The double-strange $\Xi^-$ has barely been studied with antiproton probes before and the studies proposed here will therefore be pioneering. The foreseen high data rates and the low background level will enable a complete spin decomposition of the reaction already during the first year of data taking. This demonstrates PANDA's potential as a strangeness factory.\n\nThe method of moments applied in this work is suitable for sample sizes of the first phase of PANDA. Two different approaches were applied: a standard efficiency dependent one, and a more unusual efficiency independent method. 
The applicability of the latter, however, relies on approximations whose validity needs to be evaluated on a case-by-case basis. After only a few years at the initial luminosity, and even more so when the design luminosity is available, the hyperon spin studies will reach high statistical precision. For this, a multi-dimensional and model-independent analysis framework needs to be developed in order to match accuracy and precision. This could open up large-scale searches for CP violation in hyperon decays, and its feasibility will be investigated in the future. \n\n\section*{Acknowledgement}\nWe acknowledge financial support from \nthe Bhabha Atomic Research Centre (BARC) and the Indian Institute of Technology Bombay, India; \nthe Bundesministerium f\\"ur Bildung und Forschung (BMBF), Germany; \nthe Carl-Zeiss-Stiftung 21-0563-2.8\/122\/1 and 21-0563-2.8\/131\/1, Mainz, Germany; \nthe Center for Advanced Radiation Technology (KVI-CART), Groningen, Netherlands; \nthe CNRS\/IN2P3 and the Universit\'{e} Paris-Sud, France; \nthe Czech Ministry (MEYS) grants LM2015049, CZ.02.1.01\/0.0\/0.0\/16 and 013\/0001677; \nthe Deutsche Forschungsgemeinschaft (DFG), Germany; \nthe Deutscher Akademischer Austauschdienst (DAAD), Germany; \nthe European Union's Horizon 2020 research and innovation programme under grant agreement No 824093; \nthe Forschungszentrum J\\"ulich, Germany; \nthe Gesellschaft f\\"ur Schwerionenforschung GmbH (GSI), Darmstadt, Germany; \nthe Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF), Germany; \nthe INTAS, European Commission funding; \nthe Institute of High Energy Physics (IHEP) and the Chinese Academy of Sciences, Beijing, China; \nthe Istituto Nazionale di Fisica Nucleare (INFN), Italy; \nthe Ministerio de Educacion y Ciencia (MEC) under grant FPA2006-12120-C03-02; \nthe Polish Ministry of Science and Higher Education (MNiSW) grant No. 
2593\/7, PR UE\/2012\/2, and the National Science Centre (NCN) DEC-2013\/09\/N\/ST2\/02180, Poland; \nthe State Atomic Energy Corporation Rosatom, National Research Center Kurchatov Institute, Russia; \nthe Schweizerischer Nationalfonds zur F\\"orderung der Wissenschaftlichen Forschung (SNF), Switzerland; \nthe Science and Technology Facilities Council (STFC), British funding agency, Great Britain; \nthe Scientific and Technological Research Council of Turkey (TUBITAK) under the Grant No. 119F094; \nthe Stefan Meyer Institut f\\"ur Subatomare Physik and the \\"Osterreichische Akademie der Wissenschaften, Wien, Austria; \nthe Swedish Research Council and the Knut and Alice Wallenberg Foundation, Sweden.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nBroadband beamforming with microphone arrays is a key signal processing module in many consumer electronics products, e.g., smart phones and smart speakers \\cite{chhetri2018multichannel, MA_book, benesty2008microphone}. The proliferation of microphone arrays, driven by decreasing hardware cost and superior speech enhancement performance, has made broadband beamforming a ubiquitous embedded technology, and its performance has a critical impact on the overall system. \n \nA key requirement for broadband beamforming is to deliver consistent performance across several octaves of frequencies, e.g., $80$ Hz - $8$ kHz in the voiceband case. Speech enhancement is typically the system objective in most microphone array systems, rather than mere signal detection, as in the narrowband case. This poses hardware and algorithmic challenges in the design of microphone arrays and the underlying beamforming procedure. Filter-and-Sum (F\\&S) \\cite{frost_FilterandSum} has been a standard approach for designing a broadband beamformer as an extension to a narrowband beamformer by \\emph{stitching} frequency-domain coefficients that are computed using narrowband beamforming techniques. 
Several narrowband beamforming techniques, with different objectives and assumptions, have become standard design techniques, e.g., Delay-and-Sum (D\&S) \cite{DS, MA_book}, Minimum-Variance-Distortionless-Response (MVDR) \cite{MVDR1, MVDR2}, Subspace methods \cite{benesty2008microphone}. In this work, we do not address a particular beamformer design algorithm. Rather, the emphasis is on the acoustic modeling, which is common among these techniques. Without loss of generality, an MVDR-based F\&S beamformer with a robustness constraint is used as a case study for our analysis. \nAt a given frequency and \emph{look-direction}, the two key design parameters in almost all beamforming algorithms are the steering vector and the spatial coherence matrix. \nProper design of these two parameters is the subject of this work. \n\nIn Far-Field models, the acoustic wave is usually approximated by plane-waves \cite{RoomAcousticsBook}, and the steering vector at the direction\/frequency of a plane wave is defined as the observed acoustic pressure at the different microphones when the microphone array is impinged by the plane-wave. Near-Field steering vectors can similarly be approximated by acoustic spherical waves. The observed wave-field in the general case is the superposition of the incident wave-field and the scattered wave-field. A typical approximation of the steering vector is the free-field approximation, which assumes sound propagation in free-field (at the speed of sound in air), and only the incident wave-field is considered. This approximation is used almost universally in the microphone array literature because it yields closed-form formulae that simplify beamformer analysis. The main issue with the free-field approximation is that it ignores the impact of the device surface on the observed acoustic pressure, i.e., the scattered wave-field. This impact, as will be shown, can significantly change the microphone array behavior at certain frequencies and angles. 
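For reference, the free-field approximation discussed above admits a simple closed form: each steering-vector entry is a pure phase term $e^{-j\mathbf{k}^T\mathbf{r}_m}$, with no magnitude information. The sketch below is a minimal illustration; the array geometry, frequency, and the sign convention for the arrival direction are illustrative assumptions, not values from this work.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def free_field_steering_vector(mic_positions, theta, phi, freq):
    """Free-field steering vector v_m = exp(-j k . r_m) for a plane wave
    from direction (theta, phi); the scattered field p_s is ignored."""
    k = 2.0 * np.pi * freq / C
    # Unit propagation vector of the incident plane wave.
    u = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    return np.exp(-1j * k * mic_positions @ u)

# Example: 4-mic linear array with 3 cm spacing along x, 1 kHz, broadside wave.
mics = np.array([[0.03 * m, 0.0, 0.0] for m in range(4)])
v = free_field_steering_vector(mics, theta=np.pi / 2, phi=0.0, freq=1000.0)
```

Every entry has unit magnitude; it is exactly this missing magnitude information that the scattered field $p_s$ restores when the array sits on a rigid surface.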
\n\nA possible remedy to this problem is to rely on anechoic lab measurements to quantify the device response to incident waves. However, this is a time-consuming and high-cost solution, and imperfect experimental settings could lead to noticeable modeling errors, especially in near-field cases. In this work, we describe a simulation-based approach for acoustic modeling \nof a microphone array on a rigid surface by solving the Helmholtz wave equation using the Finite-Element-Method (FEM) with a background wave-field that matches the incident wave-field \cite{larsson2008partial}. Prior works that studied the impact of the scattered field on microphone arrays used spherical harmonic decomposition for specific form-factors (e.g. sphere, cylinder) \cite{zotkin2017incident, teutsch2007modal, rafaely2015fundamentals} (and references therein). However, these methods are restrictive in the choice of device form-factors (e.g. do not include modern smart-speaker form-factors) and beamforming techniques. In comparison, the FEM method proposed in this paper provides three notable contributions: (i) a methodology to compute the steering vector for microphone arrays mounted on solid hard surfaces without the need for expensive anechoic chamber measurements, (ii) ability to design any type of beamformer that relies on steering vectors; these include MVDR beamformer \cite{MA_book}, linearly constrained minimum variance (LCMV) beamformer \cite{VanTrees2002,mabande2009design}, and polynomial beamformer \cite{mabande2010design}, and (iii) extension of the proposed method to generic device form-factors that are used for smart speakers. \n\nThe following notations are used throughout the paper. A bold lower-case letter denotes a column vector, while a bold upper-case letter denotes a matrix. ${\bf{A}}^T$ and ${\bf{A}}^H$ denote the transpose and conjugate transpose, respectively, of $\bf{A}$, and ${\bf{A}}_{m,n}$ is the matrix entry at position $(m,n)$. 
$\Theta \triangleq \left(\theta, \phi \right)^T$ denotes the polar and azimuth angles, respectively, in a spherical coordinate system. $M$ always refers to the number of microphones. $\mbox{$\boldsymbol{\Psi}$} (\omega)$ denotes the noise coherence matrix, of size $M\times M$, at frequency $\omega$ (the dependency on $\omega$ is dropped whenever it is clear from the context). Additional notations are introduced when needed.\n\n\section{Background}\n\label{sec:background}\n\subsection{Wave Equation}\nThe acoustic wave equation \cite{acousticsbook} is the governing equation for the propagation of sound waves at equilibrium in elastic fluids, e.g., air. The homogeneous wave equation has the form\n\begin{equation}\n\nabla^2 \bar{p} - \frac{1}{c^2} \frac{\partial^2 \bar{p} }{\partial t^2} = 0\n\end{equation}\nwhere $\bar{p}(t)$ is the acoustic pressure, and $c$ is the speed of sound in the medium. In this work, we consider only the practical case of a homogeneous fluid with no viscosity. \n\nIn practice, the wave equation is usually solved in the frequency domain using the Helmholtz equation to find $p(\omega)$:\n\begin{equation}\n\nabla^2 p + k^2 p = 0 \label{eq:Helmholtz}\n\end{equation}\nwhere $k\triangleq\omega\/c$ is the wave number. At steady state, the time-domain and frequency-domain solutions are Fourier pairs \cite{FourierAcoustics}. In our modeling, we work only with the homogeneous Helmholtz equation under various boundary conditions. The boundary conditions are determined by the geometry and the acoustic impedance of the different boundaries. We assume the device has a rigid surface, therefore, it is modeled as a \emph{sound hard} boundary.\n\vspace{-0.1cm}\n\subsection{Beamforming Strategies} \label{strategies}\nBeamforming is a microphone-array signal processing technique that allows emphasizing the user's speech from a desired look-direction (LD) while suppressing interference from other directions. 
Here, we process the microphone signals such that the signals arriving from the look-direction are combined in-phase, while signals arriving from other directions are combined out-of-phase. \nDenote the position of the $m$-th microphone by $\mbox{$\mathbf{r}$}_m$, and the signal acquired at the $m$-th microphone for frequency $\omega$ by $x(\omega,\mbox{$\mathbf{r}$}_m)$. The signal acquired by the microphone array can then be expressed as:\n\beq \label{eq:xdef}\n\mbox{$\mathbf{x}$}(\omega,\mbox{$\mathbf{r}$}) = \lts x(\omega,\mbox{$\mathbf{r}$}_1) \hspace{2mm} x(\omega,\mbox{$\mathbf{r}$}_2) \hspace{2mm} \hdots \hspace{2mm} x(\omega,\mbox{$\mathbf{r}$}_M) \right]^T .\n\end{equation}\nDenoting the spectrum of the desired source signal by $s(\omega)$ and the ambient noise captured by the microphone array as $\mbox{$\mathbf{n}$}(\omega)$, we can express $\mbox{$\mathbf{x}$}(\omega,\mbox{$\mathbf{r}$})$ as:\n\beq \label{eq:XExpand}\n\mbox{$\mathbf{x}$}(\omega,\mbox{$\mathbf{r}$}) = \mbox{$\mathbf{v}$}(\omega,\mbox{$\boldsymbol{\Theta}$}) s(\omega) + \mbox{$\mathbf{n}$}(\omega),\n\end{equation}\nwhere $\mbox{$\mathbf{v}$}(\omega,\Theta) \triangleq \lts v_1(\omega,\Theta) \hspace{2mm} v_2(\omega,\Theta) \hspace{2mm} \hdots \hspace{2mm} v_M(\omega,\Theta) \right]^T$ is the frequency and angle-dependent steering vector. 
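Given a steering vector and a noise coherence matrix, a common realization of the MVDR design with a robustness constraint is diagonal loading. The sketch below is illustrative; the loading level and the toy inputs are assumptions, not values from this work.

```python
import numpy as np

def robust_mvdr_weights(Psi, v, loading=1e-2):
    """MVDR weights with diagonal loading (a common robustness constraint):
    w = (Psi + eps*I)^{-1} v / (v^H (Psi + eps*I)^{-1} v).
    Satisfies the distortionless constraint w^H v = 1 toward the look-direction."""
    A = Psi + loading * np.eye(Psi.shape[0])
    Av = np.linalg.solve(A, v)          # A^{-1} v without forming the inverse
    return Av / (v.conj() @ Av)         # normalize by v^H A^{-1} v

# Toy example: 3 microphones, spatially white noise, arbitrary unit-modulus
# steering vector (a real design would use the FEM-computed v and Psi).
rng = np.random.default_rng(0)
v = np.exp(1j * rng.uniform(0, 2 * np.pi, 3))
Psi = np.eye(3, dtype=complex)
w = robust_mvdr_weights(Psi, v)
```

For white noise ($\Psi = I$), the weights reduce to the Delay-and-Sum solution $\mathbf{w} = \mathbf{v}/M$, as expected.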
\n\nThe beamformer design involves computation of complex-valued weights for each frequency and microphone denoted by $\mbox{$\mathbf{w}$} (\omega) \triangleq \lts w_1(\omega) \hspace{2mm} w_2(\omega) \hspace{2mm} \hdots \hspace{2mm} w_M(\omega) \right]^T$, which are then applied to $\mbox{$\mathbf{x}$}(\omega,\mbox{$\mathbf{r}$})$ to obtain the beamformer output $y(\omega)$:\n\beq \label{eq:bfOut}\ny(\omega) = \mbox{$\mathbf{w}$}^H (\omega) \mbox{$\mathbf{x}$}(\omega,\mbox{$\mathbf{r}$}).\n\end{equation}\n\nWe are interested in using FEM modeling for the design of F\&S beamformers that can be formulated as a constrained optimization problem, the solution of which provides the optimal beamformer filters. This covers various F\&S beamformers like MVDR, maximum SNR, and LCMV beamformers \cite{VanTrees2002}. In this work, we use, without loss of generality, the MVDR beamformer with a robustness constraint to present our analysis.\n\n\subsection{Beamforming Metrics} \label{metrics}\nWe use three metrics to assess the performance: array gain (AG), white noise gain (WNG) \cite{MA_book}, and microphone array channel capacity (MACC) \cite{mansour2018information}. 
\nThe AG metric is defined as the improvement in signal-to-noise-ratio (SNR) offered by the beamformer:\n$\\text{AG}(\\omega) \\triangleq \\frac{SNR_{out}(\\omega)}{SNR_{in}(\\omega)}$.\nAfter some algebraic manipulations, one can show that \\cite{MA_book}:\n\\beq \\label{eq:AG}\n\\text{AG}(\\omega, \\Theta_{LD}) = \\frac{|\\mbox{$\\mathbf{w}$}^H \\mbox{$\\mathbf{v}$}(\\omega,\\Theta_{LD})|^2}{\\mbox{$\\mathbf{w}$}^H \\mbox{$\\boldsymbol{\\Psi}$}(\\omega) \\mbox{$\\mathbf{w}$}},\n\\end{equation}\nwhere $\\Theta_{LD}$ denotes the look-direction, and $\\mbox{$\\boldsymbol{\\Psi}$} \\triangleq \\mbox{$\\mathbf{\\Lambda}$}\/\\beta$ is the normalized noise correlation matrix with\n\\beq \\label{eq:LambdaNN}\n\\mbox{$\\mathbf{\\Lambda}$}_{m,q} = \\int_{0}^{\\pi} \\int_{0}^{2\\pi} \\mbox{$\\mathbf{v}$}_m(\\omega,\\Theta) \\mbox{$\\mathbf{v}$}_q^{*}(\\omega,\\Theta) \\sigma_{N}^{2} (\\omega,\\Theta) \\sin(\\theta) \\ d\\theta \\ d\\phi ,\n\\end{equation}\nwhere $\\sigma_{N}^{2} (\\omega,\\Theta)$ denotes the distribution of noise power as a function of $\\omega$ and $\\Theta$, and\n\\beq \\label{eq:phiN}\n\\beta = \\int_{0}^{\\pi} \\int_{0}^{2\\pi} \\sigma_{N}^{2} (\\omega,\\Theta) \\sin(\\theta) \\ d\\theta \\ d\\phi .\n\\end{equation}\nThe WNG metric is the SNR improvement provided by the beamformer when the noise components at the microphones are statistically independent \\cite{MA_book}:\n\\beq \\label{eq:WNG}\n\\text{WNG}(\\omega, \\Theta_{LD}) = \\frac{|\\mbox{$\\mathbf{w}$}^H \\mbox{$\\mathbf{v}$}(\\omega,\\Theta_{LD})|^2}{\\mbox{$\\mathbf{w}$}^H \\mbox{$\\mathbf{w}$}}.\n\\end{equation}\nThe MACC metric \\cite{mansour2018information} aims at providing a characterization of the microphone array that is independent of the beamformer realization. It is analogous to MIMO channel capacity in wireless communication. 
If the source location is known, then the MACC is defined as \n\begin{equation}\n\text{MACC}(\omega, \Theta_{LD}) \triangleq \log \left(1+ P\|\mathbf{S}^{-\frac{1}{2}} \mathbf{U}^H \mathbf{v}(\omega, \Theta_{LD})\|^2\right)\n\end{equation} \nwhere $\mathbf{USU}^H$ is the singular value decomposition of $\mbox{$\boldsymbol{\Psi}$}$, and $P$ is the input power.\n\n\section{Acoustic Modeling}\n\label{sec:typestyle}\n\subsection{Acoustic Plane-Waves}\nAcoustic plane waves constitute a powerful tool for analyzing the wave equation, and they provide a good approximation of the wavefield emanating from a far-field point source \cite{teutsch2007modal}. The acoustic pressure of a plane-wave with vector wave number $\bf{k}$ is defined at a point $\bf{r}$ in the 3D space as:\n\begin{equation}\np({\bf{k}}) \triangleq p_0 e^{-j {\bf{k}}^T{\bf{r}}} \label{eq:pw_def}\n\end{equation} \nThis is a solution of the inhomogeneous Helmholtz equation with a far point source, where $\|{\bf{k}}\| = k$ (note that, for a given $k$ in the Helmholtz equation, there are two degrees of freedom in choosing $\bf{k}$). Further, a general solution to the homogeneous Helmholtz equation can be approximated by a linear superposition of plane waves of different angles \cite{FourierAcoustics, pwd1, pwd2, pwd3}. These properties render acoustic plane-waves a key tool in designing far-field beamforming for microphone arrays, where the microphone array response to each plane wave provides a sufficient set for the beamformer design.\n\nThe \emph{total} wavefield at each microphone of the microphone array when an incident plane-wave $p_i({\bf{k}})$ impinges on the device has the general form:\n\vspace{-0.1cm}\n\begin{equation}\np_t = p_i + p_s \label{eq:pt_df}\n\vspace{-0.1cm} \n\end{equation}\nwhere $p_t$ and $p_s$ refer to the total and scattered wavefield, respectively. 
The total wavefield, $p_t$, at each microphone is computed by inserting \eqref{eq:pt_df} in the Helmholtz equation \eqref{eq:Helmholtz} and solving for $p_s$ with appropriate boundary conditions. The details of this modeling are described in section \ref{sec:FEM_modeling}. It is evident from \eqref{eq:pw_def} that an incident plane-wave does not have magnitude information, and it is fully parameterized by its phase. This is not true for the scattered wavefield, $p_s$, which represents the reflections\/diffractions due to the rigid device surface. This magnitude information in $p_s$ is critical in resolving phase ambiguity due to microphone array geometry. \n\nIf the microphone array is composed of discrete microphones in space, and the area of each microphone is much smaller than the wavelength, then a reasonable approximation is to set $p_s = 0$ in \eqref{eq:pt_df}. This is referred to as the \emph{free-field} approximation. In this case, the total wavefield, $p_t$, is fully determined by the wavenumber $\bf{k}$ in \eqref{eq:pw_def}, and the $(x,y,z)$ coordinates of each microphone. It is obvious that the free-field approximation is not accurate if the microphone array is on a rigid surface. Nevertheless, this approximation has been utilized almost universally in the literature for acoustic modeling in beamformer design. In the following section, we show that the free-field approximation does not provide a good approximation of the total field in important practical cases.\n\n\vspace{-0.2cm}\n\subsection{FEM Modeling}\n\label{sec:FEM_modeling}\n\nThe modeling objective is to compute the total sound field in \eqref{eq:pt_df} at each microphone when the device is impinged by a plane wave. This resembles a physical measurement in an anechoic room with a distant point source. FEM is one of the standard approaches for solving the Helmholtz equation numerically. 
In our case, we need to solve the Helmholtz equation for the total wavefield at all frequencies of interest with a \\emph{background} plane wave. The device surface is modeled as a sound-hard boundary. The microphone is modeled as a point receiver on the surface if the microphone surface area is much smaller than the wavelength; otherwise, its response is computed as the integral of the acoustic pressure over its area. To have a true background plane-wave, the external boundary should be open and non-reflecting. In our model, the device is enclosed by a closed boundary, e.g., a cylinder or a spherical surface. To mimic an open-ended boundary, there are two choices: (i) a matched boundary, whose impedance is matched to the air impedance at the frequency of interest; (ii) a perfectly matched layer, which defines a special absorbing domain that eliminates reflections and refractions in the internal domain that encloses the device \\cite{berenger1994perfectly}. The merits of each approach are beyond the scope of this paper. The FEM solves for $p_t$ in \\eqref{eq:pt_df}, which is equivalent to solving for only the scattered field, $p_s$, after inserting the background plane wave model \\eqref{eq:pw_def} in the Helmholtz equation. The acoustics module of the COMSOL Multiphysics package \\cite{COMSOL} is used for this FEM numerical solution, and the simulation is rigorously validated with exact and measured results on different form-factors. For example, in Fig. \\ref{fig:comsol}, we show the total pressure field of two microphones on a spherical surface with analytical and simulated solutions. Both amplitude and phase responses are in excellent agreement with the analytical solution \\cite{Bowman_book}. Further, in Fig. \\ref{fig:comsol2}, we show an example of simulated and measured acoustic pressure of a rectangular microphone array mounted on a slanted cube. In the plot, we show the inter-channel response, i.e., $\\{H_i(\\omega)\/H_r(\\omega)\\}_{i\\neq r}$, where $r$ is a reference microphone. 
The phase difference between simulated and measured responses is linear, which is expected when the positions of the device in both cases are not perfectly aligned. For more comparisons between simulated\/theoretical and measured acoustic pressure responses, one may refer to \\cite{wiener1949diffraction,spence1948diffraction}.\n\\begin{figure}[t] \n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth, trim=5mm 5mm 5mm 5mm,clip]{Sphere_5cm_comp.eps}\n\t\\caption{\\footnotesize{Comparison of FEM and analytical solutions for spherical surface of radius 5 cm. Top: magnitude response, bottom: phase response. Mic 2 is in the middle of sphere facing the incident wave; Mic 1 is facing away from it.} } \n\t\\label{fig:comsol}\n\\efi\n\\begin{figure}[htp] \n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth, trim=5mm 1mm 5mm 7.8mm,clip]{knight_sim.eps}\n\t\\caption{\\footnotesize{Normalized acoustic pressure of a rectangular microphone array on a slanted cube for simulated and measured cases for a background plane wave at direction $(\\theta,\\phi) =(90^\\circ, 210^\\circ)$ (simulated response is in dotted lines, measured response in solid lines, with different colors for different microphones). Top: magnitude, bottom: phase difference} } \n\t\\label{fig:comsol2}\n\\vspace{-0.25cm}\n\\efi\n\nNote that, the above procedure is not limited to plane-wave as we only need to specify the background pressure field, which could be, for example, spherical wave for near-field modeling. 
The procedure is repeated for a grid of frequency and incident angles to build a dictionary of total pressure $\\{p_t(\\omega, \\theta,\\phi)\\}_{\\omega, \\theta, \\phi}$ that is used in subsequent analysis.\n\n\\vspace{-0.25cm}\n\\section{Analysis of Free-Space Beamforming}\n\\label{sec:analysis}\nTo illustrate the benefits of FEM modeling, we use the MVDR beamformer with a robustness constraint, formulated as a constrained convex optimization problem \\cite{mabande2009design}:\n\\vspace{-0.1cm}\n\\beqa \\label{eq:MVDR}\n\\widehat\\mbox{$\\mathbf{w}$} = \\arg \\min_{\\mbox{$\\mathbf{w}$}} & & \\mbox{$\\mathbf{w}$}^H \\mbox{$\\boldsymbol{\\Psi}$} \\ \\mbox{$\\mathbf{w}$} \\nonumber \\\\\n \\text{such that} & & \\mbox{$\\mathbf{w}$}^H \\mbox{$\\mathbf{v}$}(\\omega, \\theta_{LD}) =1, \\nonumber \\\\\n & & \\frac{|\\mbox{$\\mathbf{w}$}^H \\mbox{$\\mathbf{v}$}(\\omega,\\Theta_{LD})|^2}{\\mbox{$\\mathbf{w}$}^H \\mbox{$\\mathbf{w}$}} \\geq \\gamma ,\n \\end{eqnarray}\n where the first constraint is called the distortionless constraint \\cite{ MA_book}, and the second constraint is the WNG constraint, which imposes robustness in the beamformer design that can be controlled through $\\gamma$ \\cite{mabande2009design}. Further, the WNG constraint enables a more fair comparison between the total and free-field beamformer designs because the WNG is bounded in both cases. Without loss of generality, we assume a spherically diffuse noise field. The optimization problem in \\eqref{eq:MVDR} is solved using a convex optimization solver to obtain the beamformer weights $\\widehat\\mbox{$\\mathbf{w}$}$. 
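Note that under the distortionless constraint $\\mathbf{w}^H \\mathbf{v}=1$, the WNG constraint in \\eqref{eq:MVDR} reduces to the convex norm bound $\\|\\mathbf{w}\\|^2 \\leq 1\/\\gamma$, which is why \\eqref{eq:MVDR} is amenable to standard convex solvers. Dropping the WNG constraint altogether yields the familiar closed form $\\widehat{\\mathbf{w}}=\\boldsymbol{\\Psi}^{-1}\\mathbf{v}\/(\\mathbf{v}^H\\boldsymbol{\\Psi}^{-1}\\mathbf{v})$, a useful sanity check for the solver output. A minimal NumPy sketch of this check (the matrices here are random placeholders, not FEM-derived quantities):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5  # number of microphones

# Placeholder inputs: in practice v(omega, Theta) comes from the FEM
# steering-vector dictionary and Psi from the diffuse-noise integral.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Psi = A @ A.conj().T + M * np.eye(M)          # Hermitian positive definite
v = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Unconstrained MVDR: w = Psi^{-1} v / (v^H Psi^{-1} v).
Psi_inv_v = np.linalg.solve(Psi, v)
w = Psi_inv_v / (v.conj() @ Psi_inv_v)

distortionless_error = abs(w.conj() @ v - 1.0)        # should be ~0
wng = abs(w.conj() @ v) ** 2 / np.real(w.conj() @ w)  # white-noise gain
```

If $10\\log_{10}$ of the resulting WNG falls below the floor $\\gamma$, the constrained problem must be solved instead (e.g., via diagonal loading of $\\boldsymbol{\\Psi}$ or a general conic solver).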
Note that the proposed FEM-model based method can be similarly extended to other beamformer designs like the MVDR \\cite{MA_book}, LCMV \\cite{VanTrees2002}, and polynomial beamformer \\cite{ mabande2010design}.\n\n\\subsection{Analysis Methodology}\n\\label{sec:anaMethod}\n The MVDR solution can be obtained from \\eqref{eq:MVDR} by using $\\mbox{$\\mathbf{v}$}_i$ and $\\mbox{$\\mathbf{v}$}_t$ as steering vectors for free-field (FF) and total-field (TF), respectively. To compute $\\mbox{$\\boldsymbol{\\Psi}$}$, we use analytical method for canonical device shapes, such as finite cylinder and sphere \\cite{Bowman_book}. For a general device shape, the FEM tool is used to simulate the steering vectors for a uniform grid of azimuth and polar angles. Then, $\\mbox{$\\boldsymbol{\\Psi}$}$ is numerically computed from \\eqref{eq:LambdaNN} and \\eqref{eq:phiN}, with $\\sigma_N^2(\\omega,\\Theta)=1$ for the spherically diffuse noise field.\n \n \\begin{figure}[t] \n\t\\centering\n\t\\includegraphics[width=9cm, height=3.25cm, trim=15mm 5mm 5mm 30mm,clip]{device.eps}\n\t\\vspace*{-0.5cm}\n\t\\caption{Simulation setup for FEM-based beamforming. The form-factor is a combination of a cylindrical bottom and a top surface with a spherically-curved shape. }\n\t\\label{fig:arraySetup}\n\\efi\n\nWe now compare the performance of MVDR beamformer designed using FF and TF assumptions. For our study, we use the setup in Fig. \\ref{fig:arraySetup}, which has $5$ microphones on the top of a cylinder of height 130 mm and diameter of 70 mm; the top surface of the cylinder has a spherically-curved shape. This surface does not have a closed-form solution for the Helmholtz equation, which necessitates the use of the proposed FEM method. The origin of the coordinate system coincides with the center microphone with $z$ axis pointing upwards, and the $x$-$y$ plane parallel to the bottom face of the cylinder. 
The coordinates of the microphones are: $(x,y,z)= \\ltc (r_o,0,z_o), (0,r_o,z_o), (-r_o,0,z_o),(0,-r_o,z_o), (0,0,0) \\right\\}$, where $r_o=30$ mm and $z_o=-3$ mm. Lastly, we set $\\gamma=-25$ dB. \n\\vspace{-0.25cm}\n\n\n\\subsection {Results} \\label{sec:results}\nWe evaluate the microphone array metrics under Free Field (FF) and Total Field (TF) setups for the array in Fig. \\ref{fig:arraySetup} at two arrival angles: $(\\theta,\\phi) = (90^\\circ, 0^\\circ)$ and $(30^\\circ, 0^\\circ)$. The results are summarized in Figs. \\ref{fig:AG}-\\ref{fig:MACC} for the three performance metrics. \n\n\\begin{figure}[h]%\n \\centering\n \\subfloat[$(\\theta,\\phi) = (90^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm, trim=11mm 0mm 11mm 7mm,clip]{AG-Polar90Deg_3.eps} }}\n \\subfloat[$(\\theta,\\phi) = (30^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm,trim=11mm 0mm 11mm 7mm,clip]{AG-Polar30Deg_3.eps} }}\n\t\\vspace*{-0.15cm}\n \\caption{AG performance showing the difference between FF and TF. Note how the array gain is higher, especially at high frequencies. }\n \\label{fig:AG}%\n\t\\vspace*{-0.3cm}\n\\end{figure}\n\\begin{figure}[h]%\n \\centering\n \\subfloat[$(\\theta,\\phi) = (90^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm, trim=11mm 0mm 11mm 7mm,clip]{WNG-Polar90Deg_3.eps} }}\n \\subfloat[$(\\theta,\\phi) = (30^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm,trim=11mm 0mm 11mm 7mm,clip]{WNG-Polar30Deg_3.eps} }}\n\t\\vspace*{-0.15cm}\n \\caption{WNG performance. Note that even with the higher array gain from Fig. 
\\ref{fig:AG}, the WNG is better for the TF configuration.}\n \\label{fig:WNG}%\n\t\\vspace*{-0.3cm}\n\\end{figure}\n \\begin{figure}[h]%\n \\centering\n \\subfloat[$(\\theta,\\phi) = (90^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm, trim=11mm 0mm 11mm 7mm,clip]{MACC_Polar90_3.eps} }}\n \\subfloat[$(\\theta,\\phi) = (30^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm,trim=11mm 0mm 11mm 7mm,clip]{MACC_Polar30_3.eps} }}\n \t\\vspace*{-0.15cm}\n\\caption{MACC performance, which corresponds well with the AG performance in Fig. \\ref{fig:AG}.}\n \\label{fig:MACC}%\n\t\\vspace*{-0.3cm}\n\\end{figure}\n \n At $\\theta = 90^\\circ$, i.e., $x$-$y$ plane, the TF case is slightly better for the AG and MACC, but the WNG performance for the TF case is better than the FF case. This is explained by noting that the steering vectors in the TF case have variations in both phase and amplitude (over microphones) in comparison to the FF case, which only has phase variations. The amplitude variations increase the spatial diversity for the TF case, which can be used to improve the spatial directivity of the beamformer.\nAt $\\theta = 30^\\circ$, the TF case has WNG performance better than the FF over the full frequency range; the AG for TF case is noticeably better than FF case for all frequencies, and significantly better for frequencies beyond 2 kHz. Note that the WNG curves are lower-bounded by $-25$ dB, because of the WNG constraint specified in (\\ref{eq:MVDR}). Note also that in all cases, the MACC for the TF case is noticeably better than the FF case, because the FF case ignores the magnitude information, which provides invaluable characterization of the look direction. 
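The three metrics can be evaluated directly from the weights and steering vectors. A sketch with toy inputs — the standard definitions $\\text{AG}=|\\mathbf{w}^H\\mathbf{v}|^2\/(\\mathbf{w}^H\\boldsymbol{\\Psi}\\mathbf{w})$ and $\\text{WNG}=|\\mathbf{w}^H\\mathbf{v}|^2\/(\\mathbf{w}^H\\mathbf{w})$ are assumed here, and MACC follows the definition given earlier:

```python
import numpy as np

def array_metrics(w, v, Psi, P=1.0):
    """AG and WNG (linear scale) and MACC for weights w, steering vector v,
    and noise coherence matrix Psi (Hermitian positive semi-definite)."""
    num = abs(w.conj() @ v) ** 2
    ag = num / np.real(w.conj() @ Psi @ w)
    wng = num / np.real(w.conj() @ w)
    # MACC = log(1 + P * ||S^{-1/2} U^H v||^2) with Psi = U S U^H.
    U, S, _ = np.linalg.svd(Psi)
    macc = np.log(1.0 + P * np.linalg.norm((U.conj().T @ v) / np.sqrt(S)) ** 2)
    return ag, wng, macc

# Sanity check: for spatially white noise (Psi = I) and w = v / ||v||^2,
# AG and WNG both equal ||v||^2 = M, and MACC = log(1 + P * M).
M = 5
v = np.exp(1j * np.linspace(0.0, 1.0, M))   # toy unit-modulus steering vector
w = v / np.linalg.norm(v) ** 2
ag, wng, macc = array_metrics(w, v, np.eye(M))
```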
\n\n\\begin{figure}[h]%\n \\centering\n \\subfloat[$(\\theta,\\phi) = (90^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm, trim=10mm 0mm 11mm 7mm,clip]{ScatteredFieldPressure_Polar90_3.eps} }}%\n \\subfloat[$(\\theta,\\phi) = (30^\\circ, 0^\\circ)$]{{\\includegraphics[width=4.295cm, height=3.7cm,trim=10mm 0mm 11mm 7mm,clip]{ScatteredFieldPressure_Polar30_3.eps} }}%\n\t\\vspace*{-0.1cm}\n \\caption{Magnitude of scattered wavefield, $p_s$, (in dB) at two plane wave angles.}\n \\label{fig:scatterfield}%\n\\end{figure}\n\nThe big deviation of the FF performance at $\\theta = 30^ \\circ$ is attributed to the magnitude of the scattered wavefield (which is ignored in the FF case). This is illustrated in Fig. \\ref{fig:scatterfield}, where we show the magnitude of the scattered wavefield at the five microphones at both angles (where the background plane wave has the same magnitude in both cases). Note that, the scattered wavefield at $\\theta = 30^\\circ$ is approximately $10$ dB stronger than $\\theta = 90^\\circ$ especially at high frequencies, which is manifested clearly in the corresponding AG\/WNG\/MACC behavior. This significant deviation of the free-field case demonstrates the limitation of this modeling and the necessity of incorporating the scattered field component through FEM modeling for beamformer design. \n\n\\vspace{-0.3cm}\n\n\\section{Conclusion and Future Work}\nThe free-field model does not provide accurate modeling for broadband beamformer design, especially when the scattered wavefield is significant. Therefore, designing beamformer metrics based on free-field modeling results in suboptimal performance. To mitigate this issue, we described a simulation-based framework for modeling the total wavefield, which is shown to noticeably improve the beamformer design. 
The model is universal for any device surface, and it could be used for both near-field and far-field modeling by computing the steering vectors of spherical and plane waves, respectively. Future work will utilize the results of this work to develop novel design techniques for broadband beamformer and generic form-factors that are based on this realistic microphone array modeling \\cite{guangdongComsol2019}. Additionally, we expand the array processing metrics, and show a close matching of simulated and measured beampatterns for our proposed method \\cite{guangdongComsol2019}.\n\n\\bibliographystyle{IEEEbib}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe purpose of this paper is to consider the problem of finding, for $x, t \\ge 0$, the quantity \n\\begin{equation}\\label{eq:suptail}\n\t\\sup_{\\mathbf{X}}{\\mathbb{P}(X_1+\\ldots+X_k\\geq t)},\n\\end{equation}\nwhere the supremum is taken over all random vectors $\\mathbf{X}=(X_1,\\ldots,X_k)$ \nof nonnegative, independent and identically distributed (further \\emph{i.i.d.}) random variables \n$X_i$ such that $\\mathbb{E}(X_i) \\le x$ for $i=1,\\ldots,k$.\n\nFrom now on we assume that $t=1$, since, writing \n\\begin{equation*}\n\tm_k(x) = \\sup_{\\mathbf{X}}{\\mathbb{P}(X_1+\\ldots+X_k\\geq 1)},\n\\end{equation*}\nby rescaling we get that \\eqref{eq:suptail} is equal to $m_k(x\/t)$.\n\nFor $x \\ge 1\/k$ the trivial solution $m_k(x) = 1$ is given by $X_i$'s which are identically equal to $x$. \nFor $k=1$ and $x < 1$ the solution $m_1(x) = x$ is given by Markov's inequality and a zero-one random variable.\nIn the case of two variables the problem was solved by Hoeffding and Shrikhande~\\cite{HS} who showed that\n\n$$m_2(x)=\n\\begin{cases} \n2x-x^2 & \\text{for}\\quad x < 2\/5; \\\\\n4x^2 & \\text{for}\\quad 2\/5 \\leq x < 1\/2;\\\\\n\t1, & \\text{for}\\quad x \\ge 1\/2.\n\\end{cases}$$\n\n\nWe conjecture that the following generalization of the above results holds. 
\n\n\\begin{conjecture}\\label{conjLMS}\nFor every positive integer $k$ and $x\\geq 0$ we have\n\\begin{equation}\\label{eq:LMS}\n\tm_k(x)=\n\t\\begin{cases} \n\t\t1-(1-x)^k & \\text{for}\\quad x < x_0 ,\\\\\n\t\t(kx)^k & \\text{for}\\quad x_0 \\le x < 1\/k,\\\\\n\t\t1, & \\text{for}\\quad x\\geq 1\/k,\n\t\\end{cases}\n\\end{equation}\nwhere \n$x_0(k)$ is the solution of $1-(1-x)^k = (kx)^k$.\n\\end{conjecture}\n\nNote that the lower bound on $m_k(x)$ in the first case is given by $X_i$'s with a two-point distribution \n$\t\\mathbb{P}(X_i=1)=1-\\mathbb{P}(X_i=0)=x$\nand in the second case by $X_i$'s with distribution\n$\t\\mathbb{P}(X_i=1\/k)=1-\\mathbb{P}(X_i=0)=kx .$\n\nThe case when $X_i$'s are not necessarily identically distributed has also been studied. Let\n\\begin{equation*}\n\ts_k(x) = \\sup_{\\mathbf X} \\mathbb{P}\\left( X_1 + \\dots + X_k \\ge 1\\right),\n\\end{equation*}\nwhere the supremum is taken over all vectors of independent, nonnegative random variables with common mean $x$. Clearly, $m_k(x) \\le s_k(x)$. In 1966 Samuels \\cite{S66} formulated a conjecture on the least upper bound for the tail probability in terms of $\\cE X_i, i = 1, \\dots, k$, which are not necessarily equal. For simplicity, we state this conjecture in the case when means are equal.\n\\begin{conjecture}[Samuels \\cite{S66}]\\label{conj:Samuels}\nFor every positive integer $k$ and $x\\geq 0$ we have\n\t\\begin{equation}\\label{eq:Samuels}\n\t\ts_k(x)=\n\t\t\\begin{cases} \n\t\t\t1 - \\min_{t = 0}^{k-1} \\left(1 - \\frac{x}{1 - tx} \\right)^{k-t} & \\text{for}\\quad x < 1\/k,\\\\\n\t\t\t1, & \\text{for}\\quad x\\geq 1\/k,\n\t\t\\end{cases}\n\t\\end{equation}\n\\end{conjecture}\n\nThe lower bound on $s_k(x)$ is given by one of the random vectors $\\mathbf X_t, t=0, \\cdots, k-1$, where $\\mathbf X_t$ consists of~$t$ random variables identically equal to $x$ and $k-t$ i.i.d. random variables taking values $0$ and $1 - tx$. 
Therefore, if true, \\eqref{eq:Samuels} implies Conjecture \\ref{conjLMS} only when the minimum is attained by $t=0$. Computer-generated graphs suggest that the minimum is attained by $t=0$ when $x < x_1(k)$ and by $t= k - 1$ when $x \\ge x_1(k)$, where $x_1(k) \\in (0, 1\/k)$ is the solution of $1-(1 - x)^k = x\/ (1 - (k-1)x)$. In \\cite{AFHRRS} it was shown rigorously that the minimum is attained by $t=0$ for $x \\le 1\/(k+1)$.\n\nSamuels \\cite{S66, S68} confirmed \\eqref{eq:Samuels} for $k = 3, 4$. Computer-generated graphs of the functions $s_3(x)$ and $s_4(x)$ suggest that for $k = 3,4$ we have\n\t\\begin{equation*}\n\t\tm_k(x)= s_k(x) = 1 - (1 -x)^k, \\qquad x \\le x_1(k),\n\t\\end{equation*}\n\twhere $x_1(3) = 0.27729\\dots$, $x_1(4) = 0.21737\\dots$.\n\n\tMoreover, Samuels \\cite{S69} proved that for $k \\ge 5$\n\t\\begin{equation*}\n\t\tm_k(x) = s_k(x) = 1 - (1-x)^k, \\qquad x \\le 1\/(k^2-k).\n\t\\end{equation*}\n\nOur main result can be stated as follows.\n\n\\begin{theorem}\\label{thmLMS}\n\tConjecture~\\ref{conjLMS} holds for $k=3$ and every $x$. Moreover, it holds for $k \\ge 5$ when $x < \\frac{1}{2k-1}$.\n\\end{theorem}\n\nThe proof of Theorem~\\ref{thmLMS} is based on the observation that \nConjecture~\\ref{conjLMS} is asymptotically equivalent \nto the fractional version of Erd\\H{o}s' Conjecture \non matchings in hypergraphs, which we introduce in the next section.\n\n\n\n\\section{The hypergraph matching problem}\\label{sec2}\n\nA {\\em $k$-uniform hypergraph} $H=(V,E)$ \nis a~set of vertices $V$ \ntogether with a~family $E$ of \n$k$-element subsets of $V$, called \\emph{edges}.\nA {\\em matching} is a~family of disjoint edges of $H$, and the size \nof the largest matching in $H$ is called the~{\\em matching number} and is denoted by $\\nu(H)$. \n\nIn~\\cite{E} Erd\\H{o}s\\ stated the following.\n\n\\begin{conjecture}[Erd\\H{o}s~\\cite{E}]\\label{conjerd}\nLet $H=(V,E)$ be a~$k$-uniform hypergraph, $|V|=n$, $\\nu(H) = s$. 
\nIf $n\\geq ks+k-1$, then \n\\begin{equation}\\label{eq:erdos_conj}\n|E|\\leq \\max\\bigg\\{\\binom nk-\\binom{n-s}k\\,,\\binom {ks+k-1}k\\bigg\\}.\n\\end{equation}\n\\end{conjecture}\n\nNote that the equality in Conjecture~\\ref{conjerd} holds either when $H$ is \na~hypergraph consisting of\nall $k$-element sets intersecting a~given subset $S\\subset V$, $|S|=s$,\nor when $H$ consists of all $k$-element subsets of a given subset $T\\subset V$, $|T|=ks+k-1$.\nWe denote these two families of hypergraphs by $Cov_{n,k}(s)$ and $Cl_{n,k}(ks+k-1)$, respectively.\n\nA similar problem can be formulated in terms of fractional matchings.\nA~\\emph{fractional matching} in a~hypergraph $H$ \nis a~function \n$$\n\\begin{aligned}\n&w:E\\rightarrow [0,1] \\text{ such that }\\\\ \n\\sum_{e\\ni v}{w(e)}&\\leq 1 \\text{ for every vertex } v\\in V.\n\\end{aligned}$$\nThen, $\\sum_{e\\in E}{w(e)}$ is a~{\\em size} of the matching $w$ and \na~size of the largest fractional matching in $H$, denoted by $\\nu^*(H)$,\nis a~{\\em fractional matching number}. \nAlon, Frankl, Huang, R\\\"odl, Ruci\\'nski and Sudakov~\\cite{AFHRRS} stated the following conjecture.\n\n\\begin{conjecture}[\\cite{AFHRRS}]\\label{conjfrac}\n\tLet $x \\in [0,1\/k]$ be fixed and let $H_n=(V_n,E_n)$ be a sequence of $k$-uniform hypergraphs such that $ \\nu^*(H_n) \\le x|V_n|$.\nThen\n\\begin{equation}\\label{eq:limsup}\n\t\\limsup_{n\\to \\infty} \\frac{ |E_n|}{{|V_n| \\choose k}} \\le \\max\\left\\{1-(1-x)^k, {(kx)^k}\\right\\}.\n\\end{equation}\n\\end{conjecture}\n\nFinding a~fractional matching number \nis a~linear programming problem. 
\nIts dual problem is to minimize the size of a \\emph{fractional vertex cover} of $H$, which is defined as a~function \n\\[\n\t\\begin{aligned}\n\t\t&w: V \\rightarrow[0, 1] \\text{ such that }\\\\ \n\t\t\\text{ for each } &e\\in E \\text{ we have } \\sum_{v\\in e}{w(v)}\\geq 1.\n\t\\end{aligned}\n\\]\nThen, $\\sum_{v\\in V}{w(v)}$ is the~\\emph{size} of $w$ and \nthe size of the smallest fractional vertex cover in $H$ is denoted by $\\tau^*(H)$. \nBy the Duality Theorem, \n\\begin{equation}\\label{eq:nutau}\n\t\\nu^*(H)=\\tau^*(H).\n\\end{equation}\n\nThe bound in Conjecture~\\ref{conjfrac}, if true, is \nattained by either a sequence $H_n \\in Cov_{n,k}(\\floor{xn})$ (which has a fractional vertex cover $w(v)= \\mathbf 1_{v \\in S}$ of size $\\floor{xn}$ and therefore satisfies $\\nu^*(H_n) \\le xn$) or $H_n \\in Cl_{n,k}(\\floor{kxn})$ (which has fractional vertex cover $w(v) = \\frac 1 k \\mathbf 1_{v \\in T}$ of size $\\floor{kxn}\/k$).\n\n\n\nObserve that if a fractional matching $w$ is such that $w(e)\\in \\{0,1\\}$ for every edge~$e$, \nthen $w$ is just a~matching or, more precisely, the indicator function of a~matching.\nThus, every integral matching is also a~fractional matching \nand hence\n\\begin{equation}\\label{eq:nunu}\n\t\\nu(H)\\leq \\nu^*(H),\n\\end{equation}\nso consequently, Conjecture~\\ref{conjfrac} follows from Conjecture~\\ref{conjerd}. Furthermore,\nConjecture~\\ref{conjerd} was confirmed for $k = 3$ by the first two authors \\cite{LM} (for $n$ bigger than some absolute constant) and by Frankl \\cite{Fnew} (for every $n$). \nMoreover, Frankl \\cite{Fgen} confirmed Conjecture \\ref{conjerd} for $k \\ge 4$ and $s \\le (n-k)\/(2k-1)$. 
Therefore, in view of \\eqref{eq:nunu} we have the following.\n\\begin{remark}[\\cite{Fnew,Fgen,LM}]\\label{thm:frac}\n\tConjecture~\\ref{conjfrac} holds for $k = 3$ and every $x$ as well as for $k \\ge 4$ and $x < 1\/(2k-1)$.\n\\end{remark}\n\n\n\\section{Proof of Theorem~\\ref{thmLMS}}\n\nWe prove Theorem~\\ref{thmLMS} in two steps. \nFirst we observe that it is enough to confirm\nConjecture~\\ref{conjLMS} with some additional restrictions.\nThen we show the equivalence of Conjectures~\\ref{conjLMS} and~\\ref{conjfrac}.\n\nHere and below, given a discrete random variable, $\\supp(X)$ denotes the set of values which $X$ attains with positive probability.\n\n\\begin{lemma}\\label{lm1}\n\tIt suffices to prove Conjecture~\\ref{conjLMS} for $X_i$'s with discrete distribution satisfying the following properties: (i) $\\supp(X_i)$ is a finite subset of $[0,1]$; (ii) $\\mathbb{P}\\left( X_i = a \\right) \\in \\mathbb Q$ for every $a \\in \\supp(X_i)$.\n\\end{lemma}\n\n\\begin{proof}\n\nDefine, for $x \\ge 0$, \n$$M(x)=\\min\\left\\{\\max\\left\\{1-(1-x)^k, (kx)^k\\right\\}, 1\\right\\},$$\nwhich is equal to the right-hand side of \\eqref{eq:LMS}.\n\nLet us first assume that Conjecture~\\ref{conjLMS} holds for random variables\nsatisfying (i) and (ii) and show it holds for $X_i$'s satisfying (i) only, that is, with $\\supp(X_i)=\\{a_1,\\ldots,a_m\\} \\subset [0,1]$.\nLet $p_j=\\mathbb{P}\\left(X_i=a_j\\right)$, $j=1,\\ldots,m$.\nFor every sufficiently large integer $n$ define a random variable $Y_i^{(n)}$ such that\n$$\\mathbb{P}\\left(Y_i^{(n)}=a_j\\right)=p_j^{(n)} \\in \\mathbb Q, \\quad j = 1, \\dots, m$$\nwhere $p_j^{(n)}=\\lceil n p_j\\rceil \/n$ for $j=2,\\ldots,m$,\nand $p_1^{(n)}=1-\\sum_{j=2}^m p_j^{(n)}$ (note that $p_1^{(n)}$ is positive for $n$ large enough).\nThen, for every $j$ we have\n$p_j^{(n)}\\leq p_j+1\/n,$ and therefore\n$$\\cE(Y_i^{(n)})\\leq \\cE(X_i)+ m\/n.$$\nApplying the conclusion of Conjecture~\\ref{conjLMS} to $Y^{(n)}_i$'s and using the continuity of the function $M$, we 
get\n\\begin{align*}\n\t\\mathbb{P}\\left(\\textstyle \\sum_{i=1}^kX_i\\geq 1\\right) &= \\lim_{n \\to \\infty} \\mathbb{P}\\left(\\textstyle\\sum_{i=1}^kY_i^{(n)}\\geq 1\\right)\n&\t\\le \\lim_{n \\to \\infty} M(x + m\/n) = M(x).\n\\end{align*}\nLet us now assume that the Conjecture~\\ref{conjLMS} holds for random variables satisfying (i) and show it then holds for arbitrary $X_i, i = 1, \\dots, k$. For every positive integer $m$ define random variables\n$$Y_i^{(m)}=\\min \\{\\left\\lceil m X_i\\right\\rceil\/{m}, 1\\}.$$\nNote that $\\supp(Y_i^{(m)})$ is finite and contained in $[0,1]$.\nWe have $Y_i^{(m)}\\leq X_i + 1\/m$ and therefore $\\cE(Y_i^{(m)})\\leq \\cE(X_i) + 1\/m \\le x + 1\/m$. For sufficiently large $m$ we get \n\\begin{align*}\n\t\\mathbb{P}\\left(\\textstyle \\sum_{i=1}^kX_i\\geq 1\\right) &\\leq \\mathbb{P} \\left( \\textstyle \\sum_{i=1}^k \\lceil m X_i \\rceil\/m \\ge 1 \\right) = \\mathbb{P}\\left(\\textstyle \\sum_{i=1}^kY_i^{(m)}\\geq 1\\right) \\leq M(x+1\/m).\n\\end{align*}\nTaking a limit over $m \\to \\infty$ and using the fact that $M$ is continuous (from the right), we obtain \n$$\\mathbb{P}\\left(\\textstyle \\sum_{i=1}^kX_i\\geq 1\\right)\\leq M(x).\\quad \\qed$$\n\\renewcommand{\\qed}{}\n\\end{proof}\n\n\\begin{lemma}\\label{lm2}\n\tFor every $k$ and $x \\in [0,1\/k]$ Conjectures~\\ref{conjLMS} and~\\ref{conjfrac} are equivalent.\n\\end{lemma}\n\n\\begin{proof}\n\tThe proof that Conjecture~\\ref{conjLMS} implies Conjecture~\\ref{conjfrac} goes along the same lines as the proof of Theorem 2.1 in~\\cite{AFHRRS}.\nWe recall it below for the sake of completeness.\n\nLet us fix $k$ and $x \\in [0,1\/k]$ and suppose that Conjecture~\\ref{conjLMS} holds.\nMoreover, let $H_n=(V_n,E_n)$ be a~sequence of $k$-uniform hypergraphs such that $\\nu^*(H_n)\\leq x|V_n|=xn$. 
\nBy~(\\ref{eq:nutau}) we have $\\tau^*(H_n)=\\nu^*(H_n)\\leq xn$, \nhence there exists a weight function $w_n:V_n\\rightarrow [0,1]$ such that \n$$\\sum_{v\\in V_n}{w_n(v)}=xn,$$\nand $\\sum_{v\\in e}{w_n(v)}\\geq 1$ for every $e\\in E_n$.\n\nLet $(v^{n}_1,\\ldots,v^{n}_k) \\in V_n^k$ be a~vector of random vertices,\neach chosen independently and uniformly over $V_n$.\nNote that $w_n(v^{n}_1),\\ldots,w_n(v^{n}_k)$ are nonnegative, independent and identically distributed random variables with mean\n\\begin{equation*}\n\t\\mathbb{E}(w_n(v^{n}_i))=\\frac{1}{|V_n|}\\sum_{v\\in V_n}{w_n(v)}=\\frac{1}{n}xn=x.\n\\end{equation*}\nObserve also that \n\\begin{equation}\\label{eq1}\n\\mathbb{P}(\\{v^{n}_1,\\ldots,v^{n}_k\\}\\in E_n)=\\frac{k!|E_n|}{n^k}.\\end{equation}\nOn the other hand, since $w_n$ is a vertex cover of $H_n$, for $\\{v^{n}_1,\\ldots,v^{n}_k\\} \\in E_n$ we have $\\sum_{i=1}^k{w_n(v^{n}_i)}\\geq 1$ and thus\n\\begin{equation}\\label{eq2}\n\\mathbb{P}(\\{v^{n}_1,\\ldots,v^{n}_k\\}\\in E_n)\\leq\\mathbb{P}\\left(\\textstyle \\sum_{i=1}^k{w_n(v^{n}_i)}\\geq 1\\right).\\end{equation}\nFrom (\\ref{eq1}), (\\ref{eq2}) and \nthe assumption that Conjecture~\\ref{conjLMS} is true, we conclude that\n\\begin{equation*}\n\t\\limsup_{n\\to \\infty} \\frac{ |E_n|}{{|V_n| \\choose k}} \\le \\limsup_{n\\to \\infty} \\mathbb{P}\\left(\\textstyle \\sum_{i=1}^k{w_n(v^{n}_i)}\\geq 1\\right)\\leq \t\\max\\left\\{1-(1-x)^k, {(kx)^k}\\right\\}.\n\\end{equation*}\n\nIt remains to prove the reverse implication.\nLet us assume that Conjecture~\\ref{conjfrac} is valid for some $k$ and $x \\in [0,1\/k]$.\nDue to Lemma~\\ref{lm1} it is enough to show that Conjecture~\\ref{conjLMS} holds for $X_i$'s attaining a finite set of values $a_1, \\dots, a_m \\in [0,1]$ such that\n$$\\mathbb{P}(X_i=a_j)={p_j}\/{q_j}, \\qquad j = 1, \\dots, m$$\nfor some positive integers $p_j$ and $q_j$. 
\nMoreover, let $r$ be the smallest common multiple of the numbers $\\{q_1,\\ldots,q_m\\}$, and define integers \n\\begin{equation}\\label{eq:pprime}\n\tp'_j = rp_j\/q_j, \\qquad j = 1, \\dots, m. \n\\end{equation}\n\nIn order to apply Conjecture \\ref{conjfrac}, we define hypergraphs \nwith bounded fractional matching number.\nFor $n=1,2,\\ldots$, let $V_n=[nr]$. Observing that $np_1' + \\dots + np_m' = nr$, define a function $w_n:V_n\\rightarrow[0,1]$ in such a way that for each $j = 1, \\dots, m$ function $w_n(v)$ takes value $a_j$ precisely $np_j'$ times.\nLet $H_n=(V_n,E_n)$ be a hypergraph with the edge set \n$$E_n=\\left\\{e\\in\\binom{V_n}{k}: \\sum_{v\\in e}{w_n(v)}\\geq 1\\right\\}.$$\nIn view of \\eqref{eq:pprime}, we have that $w_n$ is a~fractional\nvertex cover of $H_n$ of size \n\\begin{equation*}\n\t\\sum_{v=1}^{nr}{w_n(v)}=\\sum_{j=1}^m{a_j}n{p'_j} = n\\sum_{j=1}^m ra_j\\frac{p_j}{q_j} = nr \\mathbb E (X_i) \\le xnr.\n\\end{equation*}\nHence by \\eqref{eq:nutau} we have $\\nu^*(H_n)=\\tau^*(H_n)\\leq xnr$ and therefore \\eqref{eq:limsup} gives\n\\begin{equation}\\label{eq:Ebound}\n\t\\limsup_{n \\to \\infty}\\frac{ |E_n|} {\\binom {nr}k}\\le \\max \\left\\{ 1-(1-x)^k, (kx)^k \\right\\}.\n\\end{equation}\n\nLet $(v^{n}_1,\\ldots,v^{n}_k) \\in V_n^k$ be a~vector of random vertices,\neach chosen independently and uniformly over $V_n$.\nNote that for every $n$ the random variable $w_n(v_i^{n})$ has the same distribution as $X_i$, since, by \\eqref{eq:pprime}, \n$$\\mathbb{P}(w_n(v_i^{n}) = a_j)=\\frac{np_j'}{|V_n|}=\\frac{n r p_j\/q_j}{nr}=\\frac{p_j}{q_j}, \\qquad j = 1, \\dots, m.$$\nLet $N_n$ denote the number of $k$-element vectors $(v_1,\\ldots,v_k) \\in V_n^k$\nof vertices with at least two equal coordinates. 
We have \n\\begin{align*}\n\\mathbb{P}(X_1+\\ldots +X_k\\geq 1)=& \\mathbb{P}(w_n(v^{n}_1) + \\dots + w_n(v^{n}_k) \\geq 1)\\\\\n=&\\frac{\\left|\\left\\{(v_1,\\ldots,v_k)\\in V_n^{k}: \\sum_i{w(v_i)}\\geq 1\\right\\}\\right|}{(nr)^k}\\\\\n=& \\frac{k!|E_n|+N_n}{(nr)^k} \\le \\frac{|E_n|}{\\binom {nr}k} + \\frac{\\binom {k} 2 (nr)^{k-1}}{(nr)^k}.\n\\end{align*}\n\nTaking the limit over $n \\to \\infty$ and using \\eqref{eq:Ebound} we get that\n$$\\mathbb{P}(X_1+\\ldots +X_k\\geq 1) \\leq \\max\\{1-(1-x)^k,(kx)^k\\}.\\quad\\qed$$\n\\renewcommand{\\qed}{}\n\\end{proof}\n\nNow Theorem~\\ref{thmLMS} follows from Lemma~\\ref{lm2}\nand Remark \\ref{thm:frac}.\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe tendency of epidemics to return in repeated waves has been known since the 1918 Spanish flu \\cite{Spanishflu}, and the recent COVID-19 pandemic is no exception. In November 2020, the history of reported daily cases, or incidence rate, of COVID-19 varies considerably across the world's regions. The broad picture is as follows \\cite{ecdc}: The outbreak in China was practically over five weeks after a lockdown was imposed on January 23. Europe took over as the epicenter of the pandemic in late February. After lockdowns in most European countries in early March, followed by a gradual relaxation of these interventions, the first wave was over in the late spring. Incidence rates remained very low during the summer until they started to increase slowly in August. In late October, incidence rates higher than in March are common in Europe and are growing exponentially with a week's doubling time. The United States developed its first wave delayed by a week or two compared to Europe and a second and stronger wave throughout the summer. The country is now dealing with a third and even stronger wave. 
Many countries in South America, Africa, and South-East Asia are in the middle of (or have just finished) the first wave. On the other hand, a few countries, like New Zealand, Australia, Japan, and South Korea, have finished the second wave and managed to prevent it from becoming much stronger than the first one. Although there are a plethora of different wave patterns among the world's countries, one could hope that these patterns fall into a limited number of identifiable groups. \n\nIn this paper, we do not claim that there is any epidemiological rule that states a pandemic evolving without social intervention must come in increasingly severe waves. On the contrary, the simple compartmental models devise the evolution of one single wave that finally declines due to herd immunity. We shall adopt the simplest of all such models here, the Susceptible-Infectious-Recovered (SIR) model, to describe the evolution of the epidemic state variables. However, in the SIR model, the effective reproduction number ${\\cal R}$, the average number of new infections caused by one infected individual, is proportional to the fraction of susceptible individuals $S$ in the population. If initially ${\\cal R}={\\cal R}_0>1$, the daily number of new infections (incidence rate) will increase with time $t$ until $S$ has been reduced to the point where ${\\cal R}$ goes below 1, and then decay to zero as $t\\rightarrow \\infty$.\n\nAt present, herd immunity is not an essential mechanism in the COVID-19 pandemic because the fraction of susceptible individuals is still close to 1 in most populations. Consequently, the time variation of the reproduction number is predominantly caused by changes in social behavior. Changes in virus contagiousness could also play a r\\^ole, but we have not taken virus mutations into account in this paper. 
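The single-wave dynamics described above are easy to reproduce numerically. A minimal Euler integration of the SIR equations (parameter values are illustrative only, not fitted to any data):

```python
# Minimal SIR integration: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
# with R(t) = R0 * S(t) and R0 = beta / gamma = 2.5 (illustrative values).
beta, gamma = 0.25, 0.1
dt, steps = 0.1, 10000
S, I = 0.999, 0.001

history = []
for _ in range(steps):
    S, I = S - beta * S * I * dt, I + (beta * S * I - gamma * I) * dt
    history.append(I)

peak = max(range(steps), key=history.__getitem__)
R_final = (beta / gamma) * S
# The wave peaks once R(t) = R0 * S(t) drops through 1, then decays.
assert 0 < peak < steps - 1
assert R_final < 1.0 and history[-1] < history[peak]
```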
Thus, by adopting the approximation $S=1$ in the definition of the reproduction number, the SIR model reduces to a set of two first-order ordinary differential equations for the cumulative number of infected cases $J(t)$ and the instantaneous number of infectious individuals $I(t)$. These are the ``state variables'' of the epidemic, which are driven by the reproduction number ${\\cal R}(t)$. The evolution of the epidemic state $[J(t),I(t)]$ can be computed as a solution to these equations if ${\\cal R}(t)$ is known.\n\nThis paper's philosophy is to make the simplifying assumption that ${\\cal R}(t)$ responds to the epidemic state. More precisely, we assume that the rate of change of ${\\cal R}(t)$ is a function of the rate of change of $J(t)$ depending on a set of parameters with distinct and straightforward interpretations that characterize the response. This mathematical relationship turns the SIR model into a closed model for the epidemic evolution, \nwhich depends on three parameters: i) the relaxation rate when the incidence rate is low, ii) the intervention rate when the incidence is high, and iii) a fatigue rate that gradually weakens the effect of interventions over time.\nThese parameters can be fitted to the incidence rate time series $X(t)=d_tJ(t)$ reported by different countries. \nThe analysis of the fitted values of these parameters allows us to identify groups of countries with a similar evolution of the epidemic and helps to understand the most effective mechanisms controlling the epidemic's spread.
\nOne of our findings is stated in the paper's title: intervention fatigue is the primary mechanism that gives rise to the strong secondary waves emerging in many countries.\\\\\n\n\\noindent \\textit{Paper outline.}\\\\\nSection~\\ref{sec:reserse_SIR} presents a method for reconstructing the ${\\cal R}(t)$-profile from the observed time series for the daily incidence rate using a simple inversion of the SIR model.\nThe method's effectiveness is illustrated in Section~\\ref{sec:reconstruction_examples} by application to selected representative countries. \nIn Section~\\ref{sec:clustering} we compute a dissimilarity measure between the reconstructed ${\\cal R}(t)$-profiles for each country in the world.\nBased on this dissimilarity, we generate a dendrogram that hierarchically partitions countries according to the evolutionary paths of their epidemics.\nFinally, in Section~\\ref{sec:closed_model} we construct a self-consistent, closed model for the simultaneous evolution of $J(t)$ and ${\\cal R}(t)$ and describe how this model can be fitted to the observed data for $d_tJ$ for individual countries.\n\nIn Section \\ref{sec:clustering_results} we synthesize the reconstructed ${\\cal R}$-curves for the majority of the world's countries and use the dendrogram to group them into seven clusters, which are also shown on a World map. \nThe features characterizing each cluster are analyzed and discussed. \nSection~\\ref{sec:params_exploration} illustrates the scenarios of the epidemic evolution that can be derived by solving the equations of the proposed closed model for different sets of parameters.
\nThe proposed model's effectiveness is empirically validated in Section~\\ref{sec:fit}, where the model parameters are numerically fit to the incidence data for some selected countries exhibiting different characteristic patterns of epidemic evolution.\n\nThe possible implications of these results for COVID-19 strategic preparedness and response plans are discussed in Section \\ref{sec:discuss}.\n\n\n\\section{Methods}\n\\label{sec:Methods}\n\n\\subsection{Estimating the reproduction number from incidence rate data}\n\\label{sec:reserse_SIR}\nLet $S$ be the fraction of susceptible individuals in a population, $I$ the fraction of infectious, and $R$ the fraction of individuals ``removed'' from the susceptible population (\\textit{e.g.}, recovered, isolated, or deceased individuals). A simple model describing the evolution of these variables is the classical SIR-model \\cite{SIR}, \n\\begin{linenomath*}\n\\begin{eqnarray} \n\\label{1}\n\\frac{dS}{dt}&=&-\\beta IS, \\\\ \n\\label{2}\n\\frac{dI}{dt}&=&\\beta IS-\\alpha I, \\\\ \n\\label{3}\n\\frac{dR}{dt}&=&\\alpha I, \n\\end{eqnarray}\n\\end{linenomath*}\nwhere $\\alpha$ is the rate at which the infected are isolated from the susceptible population. Another interpretation of $\\alpha$ is that $\\alpha^{-1}$ is the average duration of the period an individual is infectious, which essentially depends only on the properties of the pathogen. As long as these do not change significantly, $\\alpha$ will remain constant in time. In this paper, we use $\\alpha=1\/(8 \\text{ days})$, but our results are not sensitive to this choice. The coefficient $\\beta$, on the other hand, is the rate at which the infection is being transmitted. It evolves in time as societal interventions change. It is also influenced by behavioral changes in the susceptible population, such as eliminating superspreaders.
\nThe effective reproduction number is defined as \n\\begin{linenomath*}\n\\begin{equation}\n{\\cal R}(t) \\equiv \\frac{\\beta(t)}{\\alpha}\\label{4}\n\\end{equation}\n\\end{linenomath*}\nand can be interpreted as the average number of new infections caused by an infected individual over the infectious period $\\alpha^{-1}$.\n\nThe coupled system given by Eqs.\\;(\\ref{1}) and (\\ref{2}), with initial conditions $S_0$ and $I_0$, constitutes a closed nonlinear initial value problem. Eq.\\;(\\ref{3}) is not a part of this system since it is trivially integrated to yield the removed population $R(t)$ once $I(t)$ is known.\n\nThe method developed in this paper is valid for an infectious disease with a new pathogen which is transmitted by contact between infectious and susceptible individuals. This implies that there is practically no immunity in the population from the start of the epidemic and we shall assume that this herd immunity is low throughout the period for which we estimate the reproduction number. In other words, we shall assume that $S\\approx 1$, and hence that the cumulative fraction $J=1-S$ of infected individuals is always much less than unity ($J\\ll 1$). By introducing $S=1-J$ in Eqs.\\;(\\ref{1}) and (\\ref{2}), and by neglecting the term $\\beta IJ$ compared to the term $\\beta I$ in Eq.\\;(\\ref{1}), these two equations reduce to a linear model for $J$ and $I$,\n\\begin{linenomath*}\n\\begin{eqnarray} \\label{5} \n\\frac{dJ}{dt}&=&\\alpha {\\cal R}(t) I \\\\ \n\\label{6} \n\\frac{dI}{dt}&=&\\alpha[{\\cal R}(t) -1] I\\,. \n\\end{eqnarray}\n\\end{linenomath*}\nNote that $\\gamma_I(t)=I^{-1}dI\/dt=\\alpha [{\\cal R}(t)-1]$ is the relative growth rate for the instantaneous number of infectious individuals $I(t)$, which is positive when ${\\cal R}(t)>1$ and negative when ${\\cal R}(t)<1$. 
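Given a reproduction-number history ${\\cal R}(t)$, Eqs.\;(\\ref{5})--(\\ref{6}) can be integrated numerically. The following is a minimal forward-Euler sketch (the function name, step size, and default values of $\\alpha$ and $I_0$ are illustrative assumptions, not part of the paper):

```python
import numpy as np

def simulate_epidemic(R, alpha=1/8, I0=1e-6, dt=1.0):
    """Euler integration of the linearized SIR model, Eqs. (5)-(6):
    dJ/dt = alpha*R(t)*I,  dI/dt = alpha*(R(t)-1)*I.
    R: array of daily reproduction numbers; returns (J, I) arrays."""
    n = len(R)
    J = np.zeros(n)
    I = np.zeros(n)
    I[0] = I0
    for k in range(1, n):
        # advance both state variables with the previous day's values
        J[k] = J[k - 1] + dt * alpha * R[k - 1] * I[k - 1]
        I[k] = I[k - 1] + dt * alpha * (R[k - 1] - 1.0) * I[k - 1]
    return J, I
```

With ${\\cal R}$ held constant at 1, the sketch reproduces the expected marginal behavior: $I$ stays constant while $J$ grows linearly.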
\nBy integrating Eq.\\;(\\ref{6}) and by inserting the result on the right-hand side of Eq.\\;(\\ref{5}), we obtain that the daily number of new infections $dJ\/dt$ is determined by the initial $I_0$ and the history of ${\\cal R}(t)$ on the interval $(0,t)$: \n\\begin{linenomath*}\n\\begin{equation}\n \\frac{dJ}{dt}=\\alpha I_0{\\cal R}(t)\\exp{\\Bigg{(}\\int_0^t\\alpha[{\\cal R}(t')-1] dt'\\Bigg{)}}. \\label{7} \n\\end{equation}\n\\end{linenomath*}\nWhat we are interested in here, however, is the inverse relationship: suppose the evolution of $dJ\/dt$ is known, how do we find the evolution of the reproduction number ${\\cal R}(t)$? \n\nBy using Eq.\\;(\\ref{5}) to replace $\\alpha {\\cal R}I$ by $d_tJ$ in Eq.\\;(\\ref{6}), the latter can be integrated to yield,\n\\begin{linenomath*}\n\\begin{equation}\n I(t)=I_0 e^{-\\alpha t}+\\int_0^t e^{-\\alpha (t-t')}d_{t'}J\\; dt', \\label{8}\n\\end{equation}\n\\end{linenomath*}\nwhich allows us to compute ${\\cal R}(t)$ from Eq.\\,(\\ref{5});\n\\begin{linenomath*}\n\\begin{equation}\n {\\cal R}(t) = \\frac{d_tJ}{\\alpha I} = \\frac{1}{\\alpha}\\frac{d_tJ}{ I_0 e^{-\\alpha t}+\\int_0^t e^{\\alpha (t'- t)}d_{t'}J\\; dt'}.\\label{9}\n\\end{equation}\n\\end{linenomath*}\n\nProvided a time series for $J(t)$ is available, we can approximate $d_tJ$ as a finite difference and the integral in Eq.\\;(\\ref{9}) as a discrete sum. \nThis sum gives us a fast and direct algorithm to estimate ${\\cal R}(t)$.\n\n\n\\subsection{$\\mathcal{R}(t)$-reconstructions for individual countries}\n\\label{sec:reconstruction_examples}\nSince we do not have actual measurements of the cumulative number of infected, to estimate the $\\mathcal{R}(t)$-curves for each country using Eq.\\;(\\ref{9}) we rely on the number of confirmed cases as a proxy for $d_tJ(t)$. \nSpecifically, we assume that the incidence rate $d_tJ(t)$ is proportional to the daily number of confirmed cases $X(t)$.
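The discrete estimator of Eq.\;(\\ref{9}) can be sketched as follows; an exponentially weighted running sum replaces the integral (the function name and default values are illustrative, not the authors' code):

```python
import numpy as np

def reconstruct_R(X, alpha=1/8, I0=1.0):
    """Estimate R(t) from daily new cases X via the discretized Eq. (9).
    X: array of daily incidence (proxy for dJ/dt); alpha: rate in 1/days."""
    I = np.empty(len(X))
    acc = 0.0
    for k in range(len(X)):
        # running sum: acc_k = sum_{j<=k} exp(-alpha*(k-j)) * X_j,
        # the discrete version of the integral in Eq. (8)
        acc = acc * np.exp(-alpha) + X[k]
        I[k] = I0 * np.exp(-alpha * k) + acc
    return X / (alpha * I)
```

As a sanity check, a constant incidence rate corresponds to a steady state, so the estimate approaches ${\\cal R}\\approx 1$ (up to a small discretization bias) once the $I_0$-transient has died out.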
\n\nThe time series $X(t)$ of new daily cases reported for each country are taken from the Our World in Data database\\footnote{\\url{https:\/\/ourworldindata.org\/coronavirus-source-data}}.\nFigure~\\ref{fig:examples} shows three examples of $\\mathcal{R}(t)$ estimated using Eq.\\;(\\ref{9}) from the new daily cases $X(t)$ reported by Sweden, Italy, and Argentina. \n\\begin{figure}[t!]\n\t\\centering\n\\includegraphics[width=16cm]{Fig1.pdf}\n \\caption{\\footnotesize $\\mathcal{R}(t)$ estimated with the proposed method from the time series of new daily cases reported in Sweden, Italy, and Argentina. Red curves are reported incidence rates, and the blue curves the reconstructed reproduction numbers.}\n\t\\label{fig:examples}\n\\end{figure}\nThe initial values assumed by $\\mathcal{R}(t)$ are affected by different choices of $I_0$. The transient effect given by the initial conditions quickly vanishes as $t$ increases, and $\\mathcal{R}(t)$ converges to a stable solution since the first term in the denominator of Eq.\\;(\\ref{9}) goes exponentially to zero. To compute the results, we generated several initial conditions for $I_0$ in a reasonable range, and we discarded the transient phase, depicted as a gray area in Fig.~\\ref{fig:examples}. In these plots, we have not taken into consideration the delay between the date of infection and the reported positive tests, which may amount to approximately one week.
Thus, the actual ${\\cal R}(t)$ curves should be shifted towards the left by approximately this amount.\n\nThe three countries in Fig.~\\ref{fig:examples} are characterized by different evolutions of the epidemic.\nSweden had a long first wave that peaked in the middle of June, and the second wave started before the first one was completely over.\nThis is reflected in the estimated $\\mathcal{R}(t)$, which stays above 1 for a long time interval.\nItaly had a first wave stronger than in most other countries, which was brought down completely by the lockdown.\nThe estimated $\\mathcal{R}(t)$ starts from very high values and quickly goes below 1 by the beginning of April.\nFinally, the number of new cases in Argentina kept growing very slowly, but consistently, until the middle of October, so there are not two distinct waves as in many other countries. \nConsequently, $\\mathcal{R}(t)$ is characterized by values that are slightly above 1 until the beginning of November.\n\n\n\\subsection{Cluster analysis of $\\mathcal{R}(t)$-curves}\n\\label{sec:clustering}\n\nRather than presenting reconstructions of ${\\cal R}(t)$ case by case for all of the world's countries, it is more insightful to combine them into groups of ${\\cal R}(t)$-curves according to some common features and then analyze the characteristics of each group.\nTherefore, we follow an indirect approach where we first cluster the $\\mathcal{R}(t)$ curves of different countries and then analyze the clustering partition and the representatives of each cluster.\nThe cornerstone of any clustering algorithm is the computation of a dissimilarity measure between the data samples.\nSince we are dealing with sequential data, we rely on a dissimilarity measure $\\delta_{i,j} = d(x_i, x_j)$ that yields a real number $\\delta_{i,j}$ proportional to the discrepancy between the time series $x_i$ and $x_j$.\n\nA large variety of time series dissimilarity measures have been proposed in the literature,
including those based on statistical methods~\\cite{de2011tail}, signal processing~\\cite{chan1999efficient}, kernel methods~\\cite{mikalsen2018time}, and reservoir computing~\\cite{bianchi2020reservoir}.\nIn this paper, we adopt the Dynamic Time Warping (DTW) distance~\\cite{keogh2005exact}, which is an efficient and well-known algorithm that computes the dissimilarity between two sequences as the cost required to obtain an optimal match between them. The cost is computed as the sum of absolute differences between the values of the two time series at matched indices.\nDTW allows similar shapes to match, even if they are out of phase or, in general, not perfectly synchronized along the time axis.\n\nFrom the dissimilarity $\\delta_{i,j}$ between countries $i$ and $j$, it is possible to compute a clustering partition, where similar $\\mathcal{R}$-time-series are assigned to the same cluster.\nSeveral approaches can be used to generate the clusters~\\cite{aghabozorgi2015time}. \nWe opted for a hierarchical clustering method~\\cite{cohen2018hierarchical}, which gradually joins data samples together by increasing the maximum cluster radius $\\delta_\\text{max}$.\nOne of the main advantages of hierarchical clustering is the possibility of generating a dendrogram, which allows one to visually explore the structure of the clustering partition at different resolution levels.\n\n\n\n\\subsection{A closed model for the epidemic evolution}\n\\label{sec:closed_model}\nThe SIR-model does not constitute a closed model for the evolution of $J(t)$, $I(t)$, and ${\\cal R}(t)$. Eqs.\\;(\\ref{5})--(\\ref{6}) describe the dynamics of the epidemic state variables $J(t)$ and $I(t)$ when the evolution of the social state represented by ${\\cal R}(t)$ is given. Eq.\\;(\\ref{9}) is nothing but an inverse of this relationship and should not be interpreted as a social response of ${\\cal R}(t)$ to changes in the epidemic state variables.
A closed model can only be obtained by adding an equation describing such a response. \nWhile a simple dynamical model cannot reflect the whole complexity of the social response, it may still provide some useful insight. \n\nWe shall represent this response by assuming that the rate of change $d_t{\\cal R}(t)$ is a function of the incidence rate $X(t)=d_tJ(t)$, \nand that this function is positive when $X(t)$ is below a threshold $X^*$ and negative when it is above that threshold. When the incidence rate is low, society responds by relaxing restrictions, and the reproduction number increases. When the incidence rate exceeds the threshold $X^*$, restrictions are introduced that make $d_t{\\cal R}(t)$ change sign from positive to negative.\n\nIn the following, it is convenient to introduce a dimensionless time variable $t'= \\alpha t$, which allows us to measure time in units of the mean infectious time $\\alpha^{-1}$ rather than in days. \nAccordingly, Eq.\\;(\\ref{5}) can be written as\n\\begin{linenomath*}\n\\begin{equation}\n X(t')\\equiv d_{t'} J= {\\cal R} I, \\label{16}\n\\end{equation} \n\\end{linenomath*}\nand we have a closed model for $I(t')$ and ${\\cal R}(t')$ in the form of the dynamical system,\n\\begin{linenomath*}\n\\begin{eqnarray}\n\\label{17} \n\\frac{d{\\cal R}}{dt'} &=& f(X)=f({\\cal R} I),\\\\ \n\\label{18}\n\\frac{dI}{dt'} &=& ({\\cal R}-1)I, \n\\end{eqnarray}\n\\end{linenomath*}\nwhere $f(X)$ is assumed to be a differentiable function which is decreasing in a neighborhood of $X^*$, with $f(X^*)=0$. The system has a fixed point at ${\\cal R}=1$ and $I = X^*$.
In this state, the number of infected stays constant at the threshold value.\n\nBy linearization of $f(X)$ around the fixed point ${\\cal R}=1$ and $I = X^*$, and by introducing the rate constant $\\nu = -(1\/2)X^*f'(X^*)>0$, the system reduces to\n\\begin{linenomath*}\n\\begin{eqnarray}\n\\label{19}\n\\frac{d\\Delta {\\cal R}}{dt'} &=& -2\\nu (\\Delta {\\cal R} + \\Delta \\tilde{I}+ \\Delta {\\cal R}\\Delta \\tilde{I}) \\\\ \n\\label{20}\n\\frac{d\\Delta \\tilde{I}}{dt'} &=& \\Delta {\\cal R}(1+\\Delta \\tilde{I}), \n\\end{eqnarray}\n\\end{linenomath*}\nwhere we have introduced $\\Delta {\\cal R}={\\cal R}-1$ and the normalized number of infected $\\tilde{I}=I\/X^*=1+\\Delta \\tilde{I}$. This nonlinear dynamical system has a stable fixed point at $(\\Delta {\\cal R},\\Delta \\tilde{I})=(0,0)$.\n\n\n\\subsubsection{The damped harmonic oscillator model}\n\n In this section we demonstrate that if $X$ is close to the threshold value $X^*$ and ${\\cal R}$ is close to 1, the linearization of Eqs.\\;(\\ref{17}) and (\\ref{18}) leads to the equation for a damped harmonic oscillator. The purpose is to show analytically under what circumstances a damped oscillation is a natural time-asymptotic state of the epidemic. In Section \\ref{nonlinear model} we argue that the model needs to be generalized to yield realistic descriptions of epidemic curves in most countries and, hence, the present section may be skipped without losing anything essential.
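The stability of this fixed point can be verified numerically. Below is a minimal Euler sketch of Eqs.\;(\\ref{19})--(\\ref{20}); the function name, step size, horizon, and initial perturbation are arbitrary illustrative choices:

```python
import numpy as np

def integrate_fixed_point(nu=0.5, dR0=0.5, dI0=0.5, dt=0.01, steps=20000):
    """Euler integration of Eqs. (19)-(20); returns the final (dR, dI)."""
    dR, dI = dR0, dI0
    for _ in range(steps):
        # Eq. (19): social response to the deviation of X from X*
        dR_new = dR + dt * (-2.0 * nu * (dR + dI + dR * dI))
        # Eq. (20): growth/decay of the normalized number of infected
        dI_new = dI + dt * (dR * (1.0 + dI))
        dR, dI = dR_new, dI_new
    return dR, dI
```

Starting from a finite perturbation, the trajectory spirals into $(\\Delta {\\cal R},\\Delta \\tilde{I})=(0,0)$, consistent with the damped oscillation derived analytically below.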
\n \n In the vicinity of the stable state $(\\Delta {\\cal R},\\Delta \\tilde{I})=(0,0)$, linearization yields the damped harmonic oscillator equation, \n\\begin{linenomath*}\n\\begin{equation}\n\\frac{d^2 \\Delta \\tilde{I}}{dt'^2} + 2\\nu \\frac{ d\\Delta \\tilde{I}}{dt'} + (\\omega^2+\\nu^2) \\Delta \\tilde{I} = 0,\n\\label{21}\n\\end{equation}\n\\end{linenomath*}\nwhere $\\omega^2 = 2\\nu -\\nu^2$.\nFor $\\nu<2$, the general solution is the damped oscillator\n\\begin{linenomath*}\n\\begin{equation}\n\\Delta \\tilde{I}(t') = A \\, e^{-\\mu t'} \\cos(\\omega t' + \\varphi), \\label{22}\n\\end{equation}\n\\end{linenomath*}\nwhere $A$ and $\\varphi$ are integration constants and $\\mu\\equiv\\nu$,\nand for $\\nu\\geq 2$ the non-oscillatory strongly damped solution which for large $t'$ goes as\n\\begin{linenomath*}\n\\begin{equation}\n \\Delta \\tilde{I}(t')=Be^{-\\mu^{(-)} t'}+Ce^{-\\mu^{(+)}t'}, \\label{23}\n\\end{equation}\n\\end{linenomath*}\nwhere $B$ and $C$ are constants of integration and $\\mu^{(\\pm)}\\equiv \\nu(1\\pm \\sqrt{1-2\/\\nu})$. From Eq.\\;(\\ref{20}), we have, to linear order, \n\\begin{linenomath*}\n\\begin{equation}\n \\Delta {\\cal R}=\\frac{d\\Delta \\tilde{I}}{dt'}, \\label{24}\n\\end{equation}\n\\end{linenomath*}\nand from Eq.\\;(\\ref{16}),\n\\begin{linenomath*}\n\\begin{equation}\n\\Delta \\tilde{X}(t')=\\Delta(\\tilde{I}(t'){\\cal R}(t')) \\approx \\Delta {\\cal R}(t') + \\Delta \\tilde{I}(t')=\\frac{d\\Delta \\tilde{I}}{dt'}+\\Delta \\tilde{I}, \\label{25}\n\\end{equation}\n\\end{linenomath*}\nwhich means that $\\Delta \\tilde{I}$, $\\Delta {\\cal R}$, and $\\Delta \\tilde{X}$ experience the same damped oscillations with some phase shifts, or the same strongly damped solutions.\n\nThe frequency $\\omega$ (for $0<\\nu<2$) and the damping rate $\\mu$ (for $0<\\nu<\\infty$) are plotted against the parameter $\\nu$ in Figure~\\ref{fig:oscillator}. The oscillation frequency and the damping rate are of comparable magnitude for $\\nu<1$, but the damping dominates in the interval $1<\\nu<2$.
For $\\nu>2$, the damping rate decreases towards 1 as $\\nu$ increases.\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=8 cm]{Fig2.pdf}\n\\caption{The yellow curve shows the frequency $\\omega=\\sqrt{2\\nu-\\nu^2}$, and the blue curve the damping rate $\\mu=\\nu$ for $\\nu<2$ and $\\mu=\\nu(1-\\sqrt{1-2\/\\nu})$ for $\\nu\\geq 2$.}\n\\label{fig:oscillator}\n\\end{figure} \n\nThe most rapid control of the epidemic is obtained when $\\nu\\approx 2$, but we also have reasonably rapid control for larger $\\nu$. Slower damping takes place when $\\nu<1$, and the smaller $\\nu$, the slower the damping. Hence, what really should be avoided is $\\nu$ much less than 1.\nThe linearization of $f(X)$ around $X^*$ in Eq.\\;(\\ref{17}) yields\n\\begin{linenomath*}\n\\begin{equation}\n \\frac{d{\\cal R}}{dt'}=-2\\nu\\Delta \\tilde{X}, \\label{26}\n\\end{equation}\n\\end{linenomath*}\nwhere $\\tilde{X}=X\/X^*$ is the incidence rate normalized to its threshold value, which shows that $\\nu$ is a measure of how fast the rate of change in ${\\cal R}$ responds to the deviation of $\\tilde{X}$ from its threshold value $\\tilde{X}^*=1$. If $\\nu\\ll 1$, then $d_t{\\cal R}$ responds slowly to $\\Delta \\tilde{X}$, \\textit{i.e.}, there is a slow social response to the rise or decay of the incidence. Eq.\\;(\\ref{21}) and Figure~\\ref{fig:oscillator} then yield a damped oscillation with an envelope that decays exponentially at a rate $\\mu=\\nu$. A characteristic duration of the epidemic is $\\tau\\equiv \\nu^{-1}$ and the characteristic time scale of the oscillation is $T =\\omega ^{-1}=1\/\\sqrt{2\\nu-\\nu^2}$. Since the ratio between the two is $T\/\\tau =\\sqrt{\\nu\/(2-\\nu)}$, we observe that the oscillation scale $T$ is longer than the decay time $\\tau$ for all $\\nu>1$.
Hence, this model suggests that oscillatory behavior and a long duration of the epidemic are features we expect to observe when the social response is slow ($\\nu<1$).\n\n\\subsubsection{A nonlinear, three-parameter oscillator model}\n\\label{nonlinear model}\nAlthough the linear oscillator model gives some insight into the mechanism that makes the epidemic return in repeated waves, there are obvious shortcomings in realism. One is the neglect of the terms containing the product $\\Delta {\\cal R}\\Delta \\tilde{I}$ in Eqs.\\;(\\ref{19})--(\\ref{20}), since neither $\\Delta {\\cal R}$ nor $\\Delta \\tilde{I}$ is, in general, small. \nThere is also little reason to expect that the rate of change $\\nu$ is the same below and above the social response threshold. \nBelow the threshold, ${\\cal R}$ increases because of the termination of interventions and because the population relaxes. The more relaxed, the larger $\\nu$, so let us call this parameter $\\nu_1$ the ``relaxation rate''. Above the threshold, ${\\cal R}$ decreases because of the interventions aiming to strike the epidemic down. Stronger intervention translates into a larger ``intervention rate'' $\\nu_2$. Even with these generalizations, the model will still give a damped, nonlinear oscillation and, hence, is unable to describe a situation where the second wave is stronger than the first. \nA generalization which may cover such a situation is to let the intervention rate decay with time, for instance, exponentially, such that we have a model for $\\nu(t')$ of the form\n\\begin{linenomath*}\n\\begin{equation}\n \\nu(t')=\\nu_1\\theta(-\\Delta\\tilde{X})+\\nu_2e^{-\\nu_3 t'}\\theta(\\Delta \\tilde{X}), \\label{27}\n\\end{equation}\n\\end{linenomath*}\nwhere $\\theta(x)$ is the unit step function.
The parameter $\\nu_3$ can be thought of as a ``fatigue rate'', \\textit{i.e.}, the rate at which the strike-down rate is reduced because the population is becoming increasingly tired of interventions and restrictions.\n\nNote also that the time dependence of the reproduction number in this model is independent of the response threshold $X^*$. This is because $X^*$ has been eliminated in Eqs.\\;(\\ref{19})--(\\ref{20}) through normalization of the variables. The un-normalized variables $I=X^*\\tilde{I}$ and $X(t)$ are, of course, proportional to $X^*$, which emphasizes the importance of a low tolerance threshold for social intervention.\n\n\\subsubsection{Fitting model parameters to the observed incidence data}\nTo validate the effectiveness of the proposed model in describing real data, we fit the three parameters $\\nu_1, \\nu_2, \\nu_3$ with a numerical optimization routine that minimizes the discrepancy between the time series of reported new daily cases $X(t)$ and those generated by the model.\nWe constrained $\\nu_1 \\in [0, \\infty)$, while the other two parameters are unbounded.\nBesides the three model parameters, we also optimize the following hyperparameters with a grid search:\nthe initial reproduction number $\\mathcal{R}_0$ searched in the interval $[1.0, 3.0]$ and the value $X^{*} = X(t^{*})$ for each country with $t^{*}$ searched in the interval [15 January, 31 March].\nAs initial conditions for $\\nu_1, \\nu_2, \\nu_3$ in the optimization routine, we used the values $[0.1, 0.1, 0.1]$.\n\n\n\\section{Results and discussion}\n\\label{sec:results}\n\n\\subsection{Results of the cluster analysis of $\\mathcal{R}(t)$-curves}\n\\label{sec:clustering_results}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=14 cm]{Fig3.pdf}\n \\caption{\\footnotesize The left figure depicts the dendrogram obtained from the DTW dissimilarity between the $\\mathcal{R}(t)$ time series.
It is possible to obtain a certain number of clusters by putting a threshold at a specific dissimilarity value. In the example, we choose the threshold equal to 150. The right figures depict the average $\\mathcal{R}(t)$ of countries in the same cluster.}\n \\label{fig:dendrogram}\n\\end{figure}\n\nThe dendrogram to the left of Figure~\\ref{fig:dendrogram} depicts the result of the clustering procedure, based on the DTW dissimilarities between the $\\mathcal{R}(t)$-curves estimated according to Eq.~(\\ref{9}).\nIn particular, the dendrogram illustrates how two leaves $i$, $j$ (\\textit{i.e.}, the $\\mathcal{R}(t)$-curves of countries $i$ and $j$) are merged as soon as the threshold $\\delta_\\text{max}$ becomes larger than their DTW dissimilarity value $\\delta_{i,j}$.\nThere is no unique way of selecting an optimal $\\delta_\\text{max}$; the choice rather depends on the level of resolution of the clustering partition that is amenable to a meaningful exploration of the structure underlying our data.\nIn our case, we selected a $\\delta_\\text{max}=150$ that gave rise to seven clusters, depicted in different colors in Figure~\\ref{fig:dendrogram}.\nOn the right-hand side of Figure~\\ref{fig:dendrogram}, we report $\\mathcal{R}(t)$ averaged over all the countries in the same cluster.\nTo facilitate the interpretation of the results, in Figure~\\ref{fig:world} we depict the same clustering partition obtained for $\\delta_\\text{max}=150$ on the political world map. \n\n\\begin{figure}[t!]\n\t\\centering\n \\includegraphics[width=14 cm]{Fig4.pdf}\n \\caption{\\footnotesize Visualization on the World map of the clusters obtained from the dissimilarity of the $\\mathcal{R}(t)$ curves.}\n\t\\label{fig:world}\n\\end{figure}\n\n\n\nThe 1\\textsuperscript{st} cluster (light blue) contains countries mostly from Africa, South America, and the Middle East.
\nThe average $\\mathcal{R}$ curve of the countries in the light blue cluster (top-right of Figure~\\ref{fig:dendrogram}) shows that the reproduction number is always very low, but consistently above one.\nA possible explanation is that in those countries communities are more isolated and there is less travel and exchange between them, making the infection spread more slowly.\n\nIn the 2\\textsuperscript{nd} cluster (green), the average $\\mathcal{R}$ curve also always stays above one, but it starts from a higher value $\\mathcal{R}_0$. It is important to notice that this cluster includes large countries, like India, Brazil, the United States, and Russia.\nIn these countries, the time series of new cases $X(t)$ have a particular profile since they are a combination of contributions from widely separated areas where the outbreak followed different courses. For instance, in the U.S., the waves in New York and California are almost in opposite phase.\n\nThe 3\\textsuperscript{rd} cluster (pink) contains countries where the first wave was very long and it took a considerable amount of time to bring the $\\mathcal{R}$ curve below 1. A second wave is slowly emerging in the Northern autumn. An atypical member of this cluster is Sweden, which experienced a second wave in the summer that appeared almost as a continuation of the first wave, and then a strong third wave in the fall that is synchronous with the second wave for the rest of West-Europe (cluster 4).\n\nThe 4\\textsuperscript{th} cluster (brown) mostly contains West-European countries, characterized by a strong first wave that was brought down quickly and a second wave that began in the fall.\nThe average $\\mathcal{R}$ curve is characterized by strong variability: it starts from a very high value and goes quickly below 1, to rise again quickly in the summer.\n\nSimilarly to the 1\\textsuperscript{st} cluster, the 5\\textsuperscript{th} cluster (red) contains South American and African countries, as well as New Zealand.
However, a key difference from the 1\\textsuperscript{st} cluster is that in this case the $\\mathcal{R}$ curve goes below 1 and remains there during the Northern autumn.\n\nFinally, clusters 6 and 7 differ from the others by exhibiting initial $\\mathcal{R}$ close to, or even lower than, one. For some countries in Cluster 6, this is an artifact of the averaging over all the countries in the cluster, which includes countries like China, Australia, and South Korea that started out with quite high ${\\cal R}$ but brought it down very rapidly through strong interventions \\cite{Rahman2020}. \nThe common characteristic feature for this cluster is an ${\\cal R}(t)$ above 1 during the Northern summer, but a reduction in the fall, which is the opposite of what has been observed in Western Europe and Canada. \nCluster 7, on the other hand, contains most East-European countries, where the reproduction number was very low during the spring, but increased rapidly after the summer.\n\n\n\\subsection{Exploring the parameter space of the oscillator model}\n\\label{sec:params_exploration}\nThe data for the incidence rate $X(t)$ in the world's countries show a wavy pattern consisting of one to three maxima during the first year of the pandemic evolution. However, the duration, relative strength, and separation between the waves vary substantially among countries and regions of the world. The total cumulative number of confirmed cases and deaths per million inhabitants can also vary by an order of magnitude or more among countries of comparable economic development, culture, and healthcare systems. Rypdal and Rypdal \\cite{RR2020} demonstrated this for the first wave of the pandemic in a sample of 73 countries and discussed the significant differences in death toll between the two neighboring countries, Sweden and Norway. At the time of writing this paper, we are four months further into the pandemic.
The picture has changed dramatically, with secondary and tertiary waves developing in many countries.\n\n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=14 cm]{Fig5.pdf}\n\\caption{Blue curves show the evolution of ${\\cal R}(t)$ and red curves $\\tilde{X}(t)$ as solutions of Eqs.\\;(\\ref{19}) and (\\ref{20}) with $\\tilde{X}={\\cal R}\\tilde{I}$ and $\\nu$ given by Eq.\\;(\\ref{27}), initial conditions $\\tilde{X}(0)=1$, ${\\cal R}(0)={\\cal R}_0$, and parameters ${\\cal R}_0,\\, \\nu_1,\\, \\nu_2,\\, \\nu_3$ as indicated in the figures.}\\label{fig:RXplotarray}\n\\end{figure} \n\nIn Figure~\\ref{fig:RXplotarray}, we have summarized some of the conclusions drawn from numerical solutions of the model proposed in Section~\\ref{sec:closed_model}, obtained by varying the model parameters. In all simulations, we have chosen the time origin $t=0$ to be the first time the incidence rate $X(t)$ crosses the threshold value $X^*$. Hence, $\\tilde{X}(0)=1$ for all simulations. The incidence rate on the right-hand axis in the figures is measured in units of the threshold $X^*$.\n\n\\subsubsection{The effect of the initial reproduction number}\\label{sec:effect of R0}\nIn the first row of panels, Figure~\\ref{fig:RXplotarray}(a)--(c), we consider the effect of changing the initial reproduction number ${\\cal R}_0$. From the reconstructed ${\\cal R}(t)$-curves, we observe that ${\\cal R}_0$ varies considerably among countries and regions. Low values just above ${\\cal R}_0=1$ are common in developing countries in South America, sub-Saharan Africa, and India. Several factors may contribute to this: lower mobility of people, a younger population, and a warmer climate. For these countries, we typically observe a slower rise and decay of the first wave, and the wave is generally weaker than in industrialized countries where ${\\cal R}_0$ varies in the range 2.0--2.5.
In Figure~\\ref{fig:RXplotarray}(a)--(c), we have changed ${\\cal R}_0$, keeping $\\nu_1$, $\\nu_2$, $\\nu_3$ constant. By choosing $\\nu_1=0$ and $\\nu_3=0$, we consider countries that respond slowly to an incidence rate below the threshold and show little fatigue, which may be characteristic of the developing countries for which Figure~\\ref{fig:RXplotarray}(a) is relevant. \nIn these panels, the choice $\\nu_2=0.01$ is somewhat arbitrary but yields a rather stretched-out and low-amplitude first wave typical of those countries. For higher ${\\cal R}_0$, the first wave is higher in amplitude and shorter, as we have seen in China. Here $\\nu_1=\\nu_3=0$ signifies that the relaxation rate and fatigue have been sufficiently low to prevent ${\\cal R}$ from increasing after it has stabilized below 1. The maximum incidence rate $\\tilde{X}=X\/X^*$ in panels (b) and (c) is high, in the range 15--40. Panel (i), where the parameters are the same as in (b) except that $\\nu_2=0.1$ is ten times higher, shows $\\tilde{X}\\approx 3$, which, as we will see later, is representative of China.\n\n\\subsubsection{The effect of the relaxation rate}\nIn the second row, we vary the relaxation rate $\\nu_1$ while keeping the strike-down rate fixed at $\\nu_2=0.01$. The result is that as $\\tilde{X}$ drops below the threshold after about 45 days, $\\mathcal{R}(t)$ starts to rise and grows well beyond 1. How fast this happens depends on $\\nu_1$. In the phase when ${\\cal R}>1$, $\\tilde{X}(t)$ will also start growing, and when it crosses the threshold $\\tilde{X}=1$, the strike-down sets in again, and we enter a new cycle. With $\\nu_1=0.01$ the first cycle takes almost 500 days, while it takes considerably less time in most countries, suggesting a higher $\\nu_1$. In panel (e) we increase $\\nu_1$ by a factor of 5 and then observe two cycles within the first year, and in panel (f) a further increase by a factor of 10 almost eliminates the subsequent waves. 
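The parameter regimes described above can be explored with a short simulation. Since Eqs.\\;(\\ref{19})--(\\ref{20}) and the rate function of Eq.\\;(\\ref{27}) are defined earlier in the paper and not restated in this section, the sketch below is a minimal stand-in rather than the paper's model: it evolves $\\tilde{X}$ directly instead of $({\\cal R},\\tilde{I})$ with $\\tilde{X}={\\cal R}\\tilde{I}$, assumes a recovery rate of $0.1$ per day, and uses an assumed piecewise response rate that relaxes at rate $\\nu_1$ below the threshold and strikes down at the fatigue-damped rate $\\nu_2 e^{-\\nu_3 t}$ above it.

```python
# Minimal numerical sketch of the social-response oscillator. ASSUMED form:
# Eqs. (19)-(20) and (27) are defined earlier in the paper and differ in
# detail; this reduced model is chosen only to reproduce the qualitative
# regimes discussed in the text:
#
#   dX~/dt = GAMMA * (R - 1) * X~           linearised incidence growth
#   dR/dt  = -nu(t) * (X~ - 1) * R          social-response feedback
#   nu(t)  = nu1                 if X~ < 1  (relaxation of interventions)
#          = nu2 * exp(-nu3*t)   if X~ >= 1 (strike-down, damped by fatigue)
import math

GAMMA = 0.1  # assumed recovery rate in 1/days (not specified in this section)


def simulate(r0, nu1, nu2, nu3, days=500.0, dt=0.05):
    """Forward-Euler integration; returns a list of (t, X~, R) samples."""
    x, r = 1.0, r0  # time origin chosen where X~ first crosses the threshold
    traj = [(0.0, x, r)]
    for step in range(1, int(days / dt) + 1):
        t = step * dt
        nu = nu1 if x < 1.0 else nu2 * math.exp(-nu3 * t)
        # simultaneous update: both right-hand sides use the old (x, r)
        x, r = x + GAMMA * (r - 1.0) * x * dt, r - nu * (x - 1.0) * r * dt
        traj.append((t, x, r))
    return traj


def upcrossings(traj):
    """Count how often X~ re-crosses the threshold X~ = 1 from below."""
    return sum(1 for (_, x0, _), (_, x1, _) in zip(traj, traj[1:])
               if x0 < 1.0 <= x1)
```

With $\\nu_1=\\nu_3=0$ this sketch reproduces the strike-down scenario: a single wave, with ${\\cal R}$ frozen below 1 once $\\tilde{X}$ drops under the threshold, and a peak amplitude that shrinks as $\\nu_2$ grows; a non-zero $\\nu_1$ brings ${\\cal R}$ back above 1 and generates the second cycle discussed above.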
\nThis faster relaxation to the equilibrium ${\\cal R}=\\tilde{X}=1$ when the relaxation rate is high may appear counter-intuitive. After all, it leads to a rapid increase of ${\\cal R}$ once $\\tilde{X}$ has dropped below the threshold. However, the faster rise of ${\\cal R}$ also leads to a faster rise of $\\tilde{X}$ beyond the threshold and to a faster strike-down of ${\\cal R}$ back towards 1, \\textit{i.e.}, to faster damping of the oscillation. This observation suggests that the strong second wave of the epidemic evolving in Europe in the fall of 2020 is not caused by the relaxation of social interventions during the summer but by something else.\n\n\\subsubsection{The effect of the intervention rate}\nA candidate could be the intervention rate $\\nu_2$, which is varied in the third row, panels (g)--(i). However, we observe that the main effect of increasing $\\nu_2$ is to decrease the amplitude of the oscillation in $\\tilde{X}$ in inverse proportion to $\\nu_2$. In this row, we have kept $\\nu_1=0$, resulting in relaxation to a time-asymptotic ($t\\rightarrow \\infty$) equilibrium ${\\cal R}_{\\infty}<1$, $\\tilde{X}_{\\infty}=0$. This is in contrast to the second row ($\\nu_1>0$), where this equilibrium is ${\\cal R}_{\\infty}=1$, $\\tilde{X}_{\\infty}=1$. These two equilibria correspond to fundamentally different strategies to combat the epidemic. The one without the relaxation mechanism ($\\nu_1=0$) corresponds to the strike-down strategy, where the goal is to eliminate the pathogen without obtaining herd immunity in the population. 
The one with $\\nu_1>0$, allowing relaxation of interventions when the incidence rate dips below the threshold, will end up with a constant incidence rate at the threshold value and thus a linearly increasing cumulative number of infected until this growth is non-linearly saturated by herd immunity.\n\n\\subsubsection{The effect of the fatigue rate}\nThe effect of a non-zero fatigue rate is to bring the effective strike-down rate to zero as $t\\rightarrow \\infty$. The solution of the system Eqs.\\;(\\ref{19})--(\\ref{20}) as $t\\rightarrow \\infty$ is that ${\\cal R}\\rightarrow 1+\\nu_3$ and $\\tilde{X}\\approx \\exp{(\\nu_3 t)}$. Of course, this blow-up is prevented by herd immunity, which will reduce the effective ${\\cal R}$ to zero when most of the population has been infected. The effect of increasing immunity in the population is not included in Eq.\\;(\\ref{17}), and hence the model makes sense only as long as the majority of the population is still susceptible to the disease.\nNevertheless, the last row in Figure \\ref{fig:RXplotarray} shows that increasing intervention fatigue, represented by non-zero $\\nu_3$, may increase the amplitude and duration of the second and later waves. \nFor sufficiently large $\\nu_3$ the second wave's amplitude and duration can become greater than those of the first. The situations shown in panels (k) and (l) are observed in European countries and are caused by $\\nu_3\\approx 1$. One partial explanation of the second wave's higher amplitude, observed in many countries, is a considerably higher testing rate. The testing rate, however, cannot explain the considerably longer duration of the second wave. 
This prolonged duration shows up both in the observed data and in this model when the fatigue rate is increased.\n\n\n\n\\subsection{Results from fitting the oscillator model to incidence data in different countries}\n\\label{sec:fit}\n\n\n\n\n\\begin{figure}[t!]\n\t\\centering\n \\includegraphics[width=14 cm]{Fig6.pdf}\n \\caption{\\footnotesize Comparison between observed daily cases $X(t)$ (dashed red lines), $X(t)$ simulated from the model (solid red lines), $\\mathcal{R}(t)$ estimated from the data using the inverted SIR model (dashed blue lines), and $\\mathcal{R}(t)$ simulated from the model (solid blue lines).}\n\t\\label{fig:Model_estimate_comp}\n\\end{figure}\n\nFigure~\\ref{fig:Model_estimate_comp} depicts for selected countries the reported daily new cases (dashed red line), the daily new cases simulated by the oscillator model (solid red line), the $\\mathcal{R}(t)$ curve estimated using Eq.\\;(\\ref{9}) (dashed blue line), and the $\\mathcal{R}(t)$ curve simulated by the proposed closed model (solid blue line).\nAt the top of each graph, we report for each country the fitted values of $[\\nu_1, \\nu_2, \\nu_3]$, the initial $\\mathcal{R}_0$, and the date $t^{*}$ that identifies $X^{*} = X(t^{*})$.\nOn the horizontal axis, 0 corresponds to $t^{*}$; the left vertical axis indicates the value of the reproduction number, and the right vertical axis the number of new daily cases. \n\nThe first row in the array of panels shows results for China and Turkey. The parameters estimated for China are comparable to those in Figure~\\ref{fig:RXplotarray}(i). The peak incidence rate $X_{\\text{max}}\\approx 3,500$ for China is about 3 times the threshold incidence $X^*=1,185$, similar to what is observed in Figure~\\ref{fig:RXplotarray}(i). For Turkey, the evolution of ${\\cal R}(t)$ is initially rather similar to that of China, as is the shape of the $X(t)$-curve. 
But while, in China, the model-fitted ${\\cal R}(t)$ converges to a fixed value ${\\cal R}_{\\infty}<1$ and $X(t)$ to 0 after a few months, ${\\cal R}(t)$ in Turkey slowly grows beyond 1, and a second wave in $X(t)$ develops. This wave has approximately the same amplitude as the first, but lasts longer (not shown in the figure), similar to what is shown in Figure~\\ref{fig:RXplotarray}(k). This rise is the result of fundamental differences in the estimated model parameters: $\\nu_1$ is zero for China but non-zero for Turkey.\nWe also notice that China and Turkey belong to different clusters in the dendrogram in Figure~\\ref{fig:dendrogram}. \nThe second wave for Turkey is not created by a finite fatigue rate, since $\\nu_3=0$ for both Turkey and China; it is created by a finite relaxation rate $\\nu_1$. Importantly, this relaxation rate cannot create a second wave that is stronger than the first; it only gives rise to a damped oscillation that ends up in the equilibrium ${\\cal R}=1$, $X=X^*$.\n\nThe second row shows two countries, Brazil and India, belonging to the second cluster, represented in green in Figure~\\ref{fig:dendrogram}. The initial reproduction number is low, ${\\cal R}_0=1.5$ for both countries, and the strike-down parameter is also low, $\\nu_2\\approx 0.01$, leading to a strong and long first wave, which is not yet completely over in November 2020. The fatigue rate of $\\nu_3\\approx 0.2$ also contributes to the increased amplitude and the long-lasting downward slope of the first wave. \n\nIn the third row, we show the typical pattern for Europe, the fourth cluster (brown) in the dendrogram, with Austria and Spain as examples. There is a rather short first wave accompanied by a rapid drop in ${\\cal R}(t)$ due to the almost universal lockdown in March 2020. There is a rather slow relaxation of the interventions throughout the summer, finally leading to ${\\cal R}$ stabilizing in the range 1.2--1.5. 
The inevitable result is the rise of a second wave, growing stronger and longer than the first, as shown in Figure~\\ref{fig:RXplotarray}(k) and (l). At the time of writing, interventions have again started to inhibit the growth, but they are weaker than in the spring, as reflected by fatigue rates in the range $\\nu_3\\sim 0.2$--$0.3$. \nIndeed, the prediction of the oscillator model with the estimated rates is that the second wave will blow up in the spring of 2021 to levels where herd immunity will limit the growth. This is before vaccines are likely to play an important r\\^{o}le, so a more probable scenario is that governments will reverse the fatigue trend and invalidate the model as a prediction for the future. Tendencies in this direction are observed in Europe at the time of writing.\n\n\\section{Discussion and conclusions}\\label{sec:discuss}\nThe geographic distribution of countries belonging to different clusters shown in the map in Figure~\\ref{fig:world}, and the associated averaged ${\\cal R}(t)$-curves in Figure~\\ref{fig:dendrogram}, may serve as a crude road map to the global evolution of the pandemic throughout the spring and fall of 2020. One striking feature is some geographic clustering, which is most pronounced in Western Europe (brown) and Eastern Europe (blue). A similar clustering is seen in the U.S. and Equatorial Latin America (green). In this paper, we focus on the strength, timing, and duration of the second epidemic wave, and for this purpose the dendrogram helps us to identify those regions where there has been a pronounced second wave so far in the pandemic. These are the countries belonging to clusters that exhibit a period of ${\\cal R}(t)<1$ in between periods of ${\\cal R}>1$. From the ${\\cal R}(t)$-profiles in Figure~\\ref{fig:dendrogram}, the countries with the most pronounced second wave are those in Clusters 4 (brown) and 7 (dark blue), Western and Eastern Europe, respectively. 
The rise of the second wave here is due to the persistently high values of ${\\cal R}$ during the period July--November. What distinguishes the two clusters is the course of the first wave. In Western Europe there was a strong first wave associated with high ${\\cal R}$, and it strongly affected older age groups, which resulted in a high case fatality ratio (CFR). The second wave has affected all ages, and so far the death numbers have been much lower than in the first. In Eastern Europe the first wave was very weak, but the second has been strong, with a considerably higher CFR than in the countries further West. \n\nThe main result of this paper is to demonstrate that the varying courses of the epidemic, depicted via the seven characteristic ${\\cal R}(t)$ curves shown in Figure~\\ref{fig:dendrogram}, can to some extent be understood in terms of the interplay between three social responses to the epidemic activity: the relaxation of interventions when the activity is low, the intensification of interventions when the activity becomes high, and the intervention fatigue that develops with time. Figures~\\ref{fig:RXplotarray} and \\ref{fig:Model_estimate_comp} suggest that most country-specific epidemic curves can be qualitatively reproduced by a simple mathematical model involving these three responses. The value of this insight is that, in spite of the immense complexity and diversity of the dynamical response triggered by this new pathogen, there are some universal governing principles that will determine the final outcome in the years to come. \n\nThe model devised here could of course be run to make projections further ahead than one year from the onset of the epidemic, as done in Figure~\\ref{fig:RXplotarray}. It would show a blow-up of all solutions for which the fatigue parameter is non-zero, and would be unrealistic for several reasons. 
One is that the linearity approximation would break down as herd immunity starts to bring the effective reproduction number down. Another is that the intervention-fatigue model most likely will fail when the epidemic activity becomes sufficiently high. We have already seen signs in this direction in many European countries, where partial lockdowns and mass testing have again succeeded in ``bending the curve'' to an extent that is not described by the model. Finally, mass vaccination will hopefully become a real game-changer in the year to come. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}