\\section{Introduction}\n\nSwarms of robots are systems composed of a large number of robots that can only interact with direct neighbours and follow simple algorithms. Interestingly, complex behaviours may emerge from such straightforward rules \\citep{navarro2013introduction, Garnier2007}. An advantage of such systems is the use of many low-priced robots instead of a few expensive ones to solve problems. Robotic swarms properly designed for simple robots may solve complex tasks with greater efficiency and fault tolerance, while being cheaper than a small group of complex robots tailored to a specific problem domain. They can also be seen as a multi-agent system of spatial computers, that is, a group of devices distributed in space whose objective is defined in terms of spatial structure and whose interactions depend on the distance between them \\citep{giavitto:hal-00821901}. Swarms have recently been receiving attention in the multi-agent systems literature in problems such as logistics \\citep{10.5555\/3463952.3464142}, flocking formation \\citep{10.5555\/3463952.3463999}, pattern formation \\citep{10.5555\/3463952.3463998} and coordination of unmanned aerial vehicle swarms \\citep{10.5555\/3463952.3464114}. In such spatially distributed problems, the trajectories of the robots may conflict, slowing down the system, especially when a group is intended to go to a common region of space. 
Some examples where this happens are waypoint navigation \\citep{marcolinoNoRobotLeft2008} and foraging \\citep{ducatelleCommunicationAssistedNavigation2011}.\n\nRelated works on multi-agent systems \\citep{carlinoAuctionbasedAutonomousIntersection2013, Sharon2017, CuiStoneScalable} deal with a similar problem, but they consider autonomous cars navigating over lanes and roads, where coordination is needed at the junctions. \\cite{ChoudhurySKP21} and \\cite{jair112397} also deal with multi-agent pathfinding, but not with situations in which every agent targets the same area. \\cite{jair112397} present a theoretical analysis of their proposal, alongside simulation experiments corroborating the results, as we do. Furthermore, we consider agents with only local information and distributed solutions, while \\citep{ChoudhurySKP21} and \\citep{jair112397} propose centralised solutions. \\cite{jmse9121324} investigate the topology of the neighbourhood relations between multiple unmanned surface vehicles in a swarm. They deal with maintaining formation in swarms, but they must keep virtual leaders, and their goal is not minimising congestion. In our work, by contrast, we analyse the impact on the throughput of the target area when using formations packed in squares and hexagons.\n\nMoreover, there has not been much research on the problem of reducing congestion when a swarm of robots is aimed at the same target. Surveys about robotic swarms \\citep{sahin04swarm,SahinGBT08,Barca2013swarm,Brambilla2013Swarm,Bayindir2016,8424838} do not provide information regarding these situations. Even a recent survey on collision avoidance \\citep{hoyAlgorithmsCollisionfreeNavigation2015} does not address this issue, though it provides insights into multi-vehicle navigation. Congestion in robotic swarms is mostly managed by collision avoidance in a decentralised fashion, which allows for improved scalability of the algorithms. 
\n\nHowever, solely avoiding collisions does not necessarily lead to good performance in this problem with a common target. For example, we showed \\citep{Marcolino2016} that the ORCA algorithm \\citep{Berg2011} reaches an equilibrium where robots cannot arrive at the target despite avoiding collisions. In that work, we also presented three algorithms using artificial potential fields for the common target congestion problem, but no formal analysis of the cluttered environment was done. Hence, congestion is still not well understood, and more theoretical work is needed to measure the optimality of the algorithms. A better understanding of this topic should lead to a variety of new algorithms adapted to specific environments.\n\nFurthermore, any thorough analysis of this subject must investigate the effect of increasing the number of individuals on swarm congestion, as we desire the system to perform well as it grows in size. If we have a finite measure that abstracts the optimality of any algorithm as the number of robots goes to infinity, we can use it as a metric to compare different approaches to the same problem. Thus, in this work, we present the throughput of the common target area as a metric. That is, we propose measuring the rate of arrival at this area as time tends to infinity, as an alternative approach to analysing congestion in swarms with a common target area. In network and parallel computing studies \\citep{asymptotic1,asymptotic2}, asymptotic throughput is used to measure the throughput when the message size is assumed to have infinite length. We use the same idea here, but instead of message size, we work with infinite time, as if the algorithms ran forever. As we will present in the next section, this implies dealing with an infinite number of robots. 
Thus, here we use time instead of message size or bytes as in computer network studies.\n\nTherefore, the contributions of this paper are the following.\n\\begin{enumerate}[(i)]\n \\item We propose a method for evaluating algorithms for the common target problem in a robotic swarm by using the throughput in theoretical or experimental scenarios.\n \\item We present an extensive theoretical study of the common target problem, allowing one to better understand how to measure the access to a common target using a metric not yet used in other works on the same problem. \n \\item Assuming a circular target area and robots with a constant linear velocity and a fixed distance from each other, we develop theoretical strategies for entering the area and calculate their theoretical throughput for a fixed time and their asymptotic throughput as time goes to infinity. Additionally, we verify the correctness of these calculations by simulations.\n\\end{enumerate}\n\nThe presented theoretical strategies are based on forming a corridor towards the target area or making multiple curved trajectories towards the boundary of the target area. For the corridor strategy, we also discuss the throughput when the robots are going to the target in square and hexagonal packing formations. We evaluate our theoretical strategies by realistic Stage \\citep{PlayerStage} simulations with holonomic and non-holonomic robots. Our experiments corroborate that whenever an algorithm makes a swarm take less time to reach the target region than another algorithm, the throughput of the former is higher than that of the latter. These strategies are the inspiration for new distributed algorithms for robotic swarms in our concurrent work \\citep{arxivAlgorithms}.\n\n\nThis paper is organised as follows. In the next section, we briefly explain the mathematical notation we are using. 
In Section \\ref{sec:theoreticalresults}, we formally define the common target area throughput and prove statements about this measure for theoretical strategies that allow robots to enter the common target area. Section \\ref{sec:experimentresults} describes the experiments and presents their results, verifying the correctness of the theoretical results for each strategy. Finally, we summarise our results and make final remarks in Section \\ref{sec:conclusion}.\n\n\\section{Notation}\n\n\nGeometric notation is used as follows. $\\overleftrightarrow{AB}, \\overrightarrow{AB}$ and $\\overline{AB}$ represent a line passing through points A and B, a ray starting at A and passing through B, and a segment from A to B, respectively. $\\vert \\overline{AB}\\vert $ is the length of $\\overline{AB}$. $\\overleftrightarrow{AB} \\parallel \\overleftrightarrow{CD}$ means $\\overleftrightarrow{AB}$ is parallel to $\\overleftrightarrow{CD}$. If a two-dimensional point is represented by a vector $P_{1}$, its x- and y-coordinates are denoted by $P_{1,x}$ and $P_{1,y}$, respectively. \n\n$\\bigtriangleup ABC$ denotes the triangle formed by the points A, B and C. $\\bigtriangleup ABC \\cong \\bigtriangleup DEF$ and $\\bigtriangleup ABC \\sim \\bigtriangleup DEF$ mean the triangles ABC and DEF are congruent (same angles and same size) and similar (same angles), respectively. When clear from context, this notation is omitted for brevity. \n\n$\\widehat{AOB}$ denotes an angle with vertex O, one ray passing through point A and another through B. When dealing with a single triangle $\\bigtriangleup EFG$, we name its angles simply $\\widehat{E}$, $\\widehat{F}$ and $\\widehat{G}$. All angles in this paper are measured in radians. \n\n\\section{Theoretical Analysis}\n\\label{sec:theoreticalresults}\n\n\nWe consider in this paper the scenario where a large number of robots must reach a common target. 
After reaching the target, each robot moves towards another destination, which may or may not be common among the robots. We assume the target is defined by a circular area of radius $s$. A robot reaches the target if its centre of mass is at a distance less than or equal to the radius $s$ from the centre of the target. We assume that there is no minimum amount of time to stay at the target. Additionally, the angle and the speed of arrival have no impact on whether the robot reached the target or not. In this section, theoretical strategies are constructed to solve that task and to establish limits on the efficiency of the real-life implementations that we developed in concurrent work \\citep{arxivAlgorithms}. To measure performance, we start with the following definition.\n\n\n\\begin{definition}\nThe \\emph{throughput} is the inverse of the average time between arrivals at the target. \n\\label{def:throughput}\n\\end{definition} \n\nInformally speaking, the throughput is measured by someone located on the common target (i.e., from its perspective). We consider that an optimal algorithm minimises the average time between two arrivals or, equivalently, maximises throughput. The unit of throughput is $s^{-1}$. \nIt is denoted by $f$ (as in frequency).\nIn the rest of the paper, we focus on maximising throughput.\n\nAssume we have run an experiment with $N \\ge 2$ robots for $T$ units of time, such that the time between the arrival of the $i$-th robot and the $(i+1)$-th robot is $t_{i}$, for $i$ from $1$ to $N-1$. 
Then, by Definition \\ref{def:throughput}, we have\n\\if0 1\n $ f = \\frac{1}{\\frac{1}{N-1}\\sum_{i=1}^{N-1}t_{i}} \n = \\frac{N-1}{\\sum_{i=1}^{N-1}t_{i}}\n = \\frac{N-1}{T}, $\n\\else\n $$ f = \\frac{1}{\\frac{1}{N-1}\\sum_{i=1}^{N-1}t_{i}} \n = \\frac{N-1}{\\sum_{i=1}^{N-1}t_{i}}\n = \\frac{N-1}{T}, $$\n\\fi\nbecause $\\sum_{i=1}^{N-1}t_{i} = T.$ Thus, we have an equivalent definition of throughput:\n\n\\begin{definition}\nThe \\emph{throughput} is the ratio of the number of robots that arrive at a target region, not counting the first robot to reach it, to the arrival time of the last robot.\n\\label{def:throughput2}\n\\end{definition} \n\nThe target area is a limited resource that must be shared between the robots. Since the velocities of the robots have an upper bound, a robot needs a minimum amount of time to reach and leave the target before letting another robot in. Let the \\emph{asymptotic throughput} of the target area be its throughput as time tends to infinity. Because any physical phenomenon is limited by the speed of light, this measure is bounded. Hence, the asymptotic throughput is well suited to measuring the access to a common target area as the number of robots grows.\n\nOne should expect that the asymptotic throughput depends mainly on the target size and shape, the maximum speed of the robots, $v$, and the minimum distance between robots, $d$. As any bounded target region can be included in a circle of radius $s$, we will deal hereafter only with circular target regions. \n\nTo efficiently access the target area, we identify two main cases: $s \\ge d\/2$ and $s < d\/2$. There are targets that several robots can simultaneously reach without collisions. That is the case if the radius $s \\ge d\/2$. Thus, one approach is making lanes to arrive at the target region so that as many robots as possible can arrive simultaneously. After the robots arrive at the target, they must leave the target region by making curves. 
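As an aside, the equivalence between Definitions \\ref{def:throughput} and \\ref{def:throughput2} is easy to check numerically. The Python sketch below is our own illustration (the function names are ours, not part of the formal development); it computes both quantities from a list of arrival times, with the clock starting at the first arrival:

```python
def throughput_def1(arrivals):
    # Definition 1: inverse of the average time between consecutive arrivals.
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return 1.0 / (sum(gaps) / len(gaps))

def throughput_def2(arrivals):
    # Definition 2: the N - 1 robots after the first one, divided by the
    # arrival time T of the last robot (measured from the first arrival).
    return (len(arrivals) - 1) / (arrivals[-1] - arrivals[0])

# Example: four robots arriving at times 0, 0.5, 1.2 and 2.0.
arrivals = [0.0, 0.5, 1.2, 2.0]
assert abs(throughput_def1(arrivals) - throughput_def2(arrivals)) < 1e-12
```

Both functions agree because the inter-arrival times telescope: $\\sum_{i=1}^{N-1}t_{i} = T$.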
However, we discovered in \\citep{arxivAlgorithms} that this approach does not obtain good results in our realistic simulations due to the influence of other robots, although it would theoretically be the best approach if the robots could run at a constant speed and maintain a fixed distance from each other.\n\nWe are also interested in the case where $s < d \/ 2$, when only one robot can occupy the target area at a time. Making two queues while keeping the inter-robot distance from dropping below $d$ is a good guideline for operating efficiently. In particular, the case $s = 0$ offers interesting insights, so we begin by discussing it. \n\n\\subsection{Common target point: $s = 0$}\n\nWe consider the case where robots are moving in straight lines at constant linear speed $v$, \nmaintaining a distance of at least $d$ between each other.\nA robot has reached the target when its centre of mass is over the target. \nWhen $s = 0$, the target is a point.\nOur first result is the optimal throughput when robots are moving in a straight line to a target point.\nIt is illustrated in Figure \\ref{fig:straight_line}.\nIn this section, we construct a solution to attain the optimal throughput.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.38\\columnwidth]{figs\/straight_line_distance.pdf}\n \\caption{We consider two robots $R_1$ and $R_2$ moving in straight lines toward a target at A. The angle between their trajectories is $\\theta$. The distance between the two robots over time is denoted by $l_\\theta(t)$.} \n \\label{fig:straight_line}\n\\end{figure}\n\nFirst, let us consider two robots, Robot 1 and Robot 2. Their trajectories are straight lines towards the target. Assume the straight-line trajectory of Robot 1 has an angle $\\theta_{1}$ with the $x$-axis and that of Robot 2 has an angle $\\theta_{2}$.\nWe call $\\theta_{2} - \\theta_{1} = \\theta$ the angle between the two lines. 
\nThe positions of the robots are described by the kinematic equation (\\ref{eq:kimatic_equation_punctual_target}) below, where\n$(x_1(t), y_1(t))$ and $(x_2(t), y_2(t))$ are the positions of Robot 1 and Robot 2, respectively, and\n$t \\in \\mathbb{R}$ is an instant of time.\nWithout loss of generality, we set the origin of time when Robot 1 reaches the target, and the target is located at $(0, 0)$. Thus, $(x_1(0), y_1(0)) = (0, 0)$.\n$\\tau$ is the delay between the two arrivals at the target. Then, $(x_2(\\tau), y_2(\\tau)) = (0, 0)$,\n\n\\begin{equation}\n\\label{eq:kimatic_equation_punctual_target}\n\\left[\n \\begin{matrix}\n x_1(t)\\\\ \n y_1(t) \n \\end{matrix}\n\\right]=\n\\left[\n \\begin{matrix}\n v t \\cos(\\theta_{1}) \\\\\n v t \\sin(\\theta_{1})\n \\end{matrix}\n\\right]\n\\text{ and }\n\\left[\n \\begin{matrix}\n x_2(t)\\\\ \n y_2(t) \n \\end{matrix}\n\\right]=\n\\left[\n \\begin{matrix}\n v (t - \\tau) \\cos(\\theta_{2}) \\\\\n v (t - \\tau) \\sin(\\theta_{2})\n \\end{matrix}\n\\right]\n\\end{equation}\n\nIn order to find the optimal throughput, we will start with the following lemma:\n\n\\begin{lemma}\nTo respect a distance of at least $d$ between the two robots, the minimum delay between their arrival is $\\frac{d}{v} \\sqrt{\\frac{2}{1 + \\cos(\\theta)}}$.\n\\label{prop:security_distance_punctual_target}\n\\end{lemma}\n\\begin{proof}\n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n Let $l_\\theta(t) = \\sqrt{(x_1(t) - x_2(t))^2 + (y_1(t) - y_2(t))^2}$ be the distance between the two robots.\n The robots must maintain their minimum distance $d$ at all time:\n \\begin{equation}\n \\label{eq:distancerelation1}\n \\forall t \\in \\mathbb{R}, l_\\theta(t) \\ge d.\n \\end{equation}\n \n To avoid a collision, we have $\\theta \\neq \\pi$, which corresponds to the case where robots face each other exactly. 
As a result, $\\cos(\\theta) \\neq - 1$.\n For ease of calculation, we define $X = \\tau v$, that is, the distance between Robot 1 and Robot 2 when Robot 1 reaches the target. We also define $P_\\theta(t) = l_\\theta(t)^2 - d^2$, so the constraint in (\\ref{eq:distancerelation1}) for minimum distance between them is expressed by\n \\if0 1\n $\n \\forall t \\in \\mathbb{R}, l_\\theta(t) \\ge d\n \\Leftrightarrow\n \\forall t \\in \\mathbb{R}, P_\\theta(t) \\ge 0.\n $\n In addition, we have\n $\n P_\\theta(t) \n = 2 (1 - \\cos(\\theta)) v^{2} t^{2} - 2 X (1 - \\cos(\\theta))v t + X^2 - d^2,\n $\n \\else\n $$\n \\forall t \\in \\mathbb{R}, l_\\theta(t) \\ge d\n \\Leftrightarrow\n \\forall t \\in \\mathbb{R}, P_\\theta(t) \\ge 0.\n $$\n \n In addition, we have\n $$\n \\begin{aligned}\n P_\\theta(t) \n &= (v t \\cos(\\theta_{1}) - v (t - \\tau) \\cos(\\theta_{2}))^2 + (v t \\sin(\\theta_{1}) - v (t - \\tau) \\sin(\\theta_{2}))^2 \\\\\n &\\phantom{=\\ }- d^2\\\\\n \\ifexpandexplanation\n \\end{aligned}\n $$\n $$\n \\begin{aligned} \n \\phantom{P_\\theta(t) }\n &= (v t \\cos(\\theta_{1}) - (v t - X) \\cos(\\theta_{2}))^2 + (v t \\sin(\\theta_{1}) - (v t - X) \\sin(\\theta_{2}))^2 - d^2\\\\\n &= (v t)^{2} \\cos(\\theta_{1})^{2} - 2 v t \\cos(\\theta_{1})(v t - X) \\cos(\\theta_{2}) + (v t - X)^2 \\cos(\\theta_{2})^2 + \\\\\n &\\phantom{=\\ \\ } (v t)^{2} \\sin(\\theta_{1})^{2} - 2 v t \\sin(\\theta_{1})(v t - X) \\sin(\\theta_{2}) + (v t - X)^2 \\sin(\\theta_{2})^2 - d^2\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned} \n \\phantom{P_\\theta(t) }\n \\fi\n &= (v t)^{2} - 2 v t(v t - X) (\\cos(\\theta_{1})\\cos(\\theta_{2}) + \\sin(\\theta_{1}) \\sin(\\theta_{2})) \\\\\n &\\phantom{=\\ \\ } + (v t - X)^2 - d^2 \\\\\n &= (v t)^{2} - 2 v t(v t - X) \\cos(\\theta_{2} - \\theta_{1}) + (v t - X)^2 - d^2\\\\\n &= (v t)^{2} - 2 v t(v t - X) \\cos(\\theta) + (v t - X)^2 - d^2\\\\\n\\ifexpandexplanation \n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n \\phantom{P_\\theta(t) } \n &= 
(v t)^{2} - 2 v t(v t - X) + 2 v t(v t - X) - 2 v t(v t - X) \\cos(\\theta) + (v t - X)^2 - d^2\\\\\n &= (v t)^{2} - 2 v t(v t - X) + (v t - X)^2 + 2 v t(v t - X) - 2 v t(v t - X) \\cos(\\theta) - d^2\\\\\n &= (v t - (v t - X))^2 + 2 v t(v t - X) - 2 v t(v t - X) \\cos(\\theta) - d^2\\\\\n &= X^2 + 2 v t(v t - X) - 2 v t(v t - X) \\cos(\\theta) - d^2\\\\\n &= 2 v t(v t - X) - 2 v t(v t - X) \\cos(\\theta) +X^2 - d^2\\\\\n &= 2 (v t)^{2} -2 X v t - 2 (v t)^{2}\\cos(\\theta) + 2 X v t \\cos(\\theta) +X^2 - d^2\\\\\n &= 2 (v t)^{2} - 2 (v t)^{2}\\cos(\\theta) - 2X v t +2 X v t \\cos(\\theta) +X^2 - d^2\\\\\n &= 2 (1 - \\cos(\\theta)) (v t)^2 - 2 X (1 - \\cos(\\theta))v t + X^2 - d^2\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned} \n \\phantom{P_\\theta(t) }\n\\fi\n &= 2 (1 - \\cos(\\theta)) v^{2} t^{2} - 2 X (1 - \\cos(\\theta))v t + X^2 - d^2,\\\\\n \\end{aligned}\n $$\n \\fi\n where we used $\\cos(\\theta)=\\cos(\\theta_{2}-\\theta_{1})= \\cos(\\theta_{2}) \\cos(\\theta_{1}) + \\sin(\\theta_{2}) \\sin(\\theta_{1}) $.\n \n \n We identify two cases:\n \\begin{enumerate}\n \\item Case 1: $\\cos(\\theta) \\neq 1$.\n Then $P_\\theta(t)$ is a second-degree polynomial in $t$. It is of the form $at^2 + b t + c$ \n with $a = 2 (1 - \\cos(\\theta)) v^2$,\n $b = -2 X (1 - \\cos(\\theta))v$ and\n $c = X^2 - d^2$.\n We know that $P_\\theta(t)$ has $a$ with positive sign for all $t$, because $(1 - \\cos(\\theta)) > 0$ when $\\cos(\\theta) \\neq 1$. 
Thus, as $a>0$, by second-degree polynomial inequalities properties, $P_\\theta(t) \\ge 0$ for all $t$ if and only if its discriminant $\\Delta = b^2 - 4 a c$ is negative, that is, \n \\if0 1\n $\n \\forall t \\in \\mathbb{R}, P_\\theta(t) \\ge 0\n \\Leftrightarrow\n b^2 - 4 a c \\le 0.\n $\n Thus,\n \\begin{equation}\n X \\ge d\\sqrt{\\frac{2}{1 + \\cos(\\theta)}} \n \\label{eq:methogology:pformula}\n \\end{equation}\n \\else\n \n $$\n \\forall t \\in \\mathbb{R}, P_\\theta(t) \\ge 0\n \\Leftrightarrow\n b^2 - 4 a c \\le 0.\n $$\n \n Thus:\n $$\n \\begin{aligned}\n & 2^2 X^2 (1 - \\cos(\\theta))^2 v^2 - 4 \\cdot 2 (1 - \\cos(\\theta))v^2(X^2 - d^2) \\le 0 \\Leftrightarrow\\\\\n & (4(1 - \\cos(\\theta))v^{2}) ( X^2 (1 - \\cos(\\theta)) - 2 (X^2 - d^2)) \\le 0 \\Rightarrow\\\\\n & X^2 (1 - \\cos(\\theta)) - 2 (X^2 - d^2) \\le 0 \\Leftrightarrow\n 2 d^2 - X^2 (1 + \\cos(\\theta)) \\le 0 \\Leftrightarrow\\\\\n & \\frac{2d^{2}}{1 + \\cos(\\theta)} \\le X^2 \\Rightarrow \n \\end{aligned}\n $$\n \n \\begin{equation}\n X \\ge d\\sqrt{\\frac{2}{1 + \\cos(\\theta)}} \n \\label{eq:methogology:pformula}\n \\end{equation}\n \\fi\n \\item Case 2: $\\cos(\\theta) = 1$. Then $P_{\\theta}(t) = X^{2} - d^{2}$. In this case, $P_{\\theta}(t) \\ge 0$ for all $t$ when $X^{2} - d^{2} \\ge 0 \\Rightarrow X \\ge d$. This is the same as using $\\cos(\\theta) = 1$ in (\\ref{eq:methogology:pformula}).\n \\end{enumerate}\n \n Hence, (\\ref{eq:methogology:pformula}) gives, for the robots to respect the minimum distance $d$ for every time $t$, a relation between the minimum distance, the angle between the lanes and the distance between Robot 1 and Robot 2 when Robot 1 reaches the target. 
The final result is obtained noticing that (\\ref{eq:methogology:pformula}) is equivalent to\n \\if0 1\n $\n \\tau \\ge \\frac{d}{v} \\sqrt{\\frac{2}{1 + \\cos(\\theta)}}.\n $\n \\else\n $$\n \\tau \\ge \\frac{d}{v} \\sqrt{\\frac{2}{1 + \\cos(\\theta)}}.\n $$\n \\fi\n \\fi %\n\\end{proof}\n\n\nThis result enables us to show Proposition \\ref{prop:optimal_throughput_straight_line_punctual_target}.\n\n\\begin{proposition}\nThe optimal throughput $f$ for a point-like target ($s = 0$) is $f = \\frac{v}{d}$. \nIt is achieved when robots form a single line, i.e., the angle between robots trajectories must be $0$.\n\\label{prop:optimal_throughput_straight_line_punctual_target}\n\\end{proposition}\n\n\\begin{proof}\nWe show by induction on $N$, which is the number of robots moving towards the target. We define $\\theta_{N}$ as the angle between the trajectories of Robot $N - 1$ and Robot $N$; \n$\\tau_{N}$, the minimum delay between the arrival of Robot $N - 1$ and Robot $N$; and\n$\\Delta_{N}$, the minimum delay between the arrival of Robot 1 and Robot $N$.\nWe want to show the following predicate: for all $N \\ge 2$, $\\Delta_{N} = (N-1) d \/ v$ for $\\theta_2 = \\theta_3 = \\ldots = \\theta_{N} = 0$.\n\nBase case ($N=2$):\nLet $\\tau_2$ be the delay between the arrival of Robot 1 and Robot 2.\nFrom Lemma \\ref{prop:security_distance_punctual_target} we have that the minimum delay between Robot 1 and Robot 2 is equal to\n$\\frac{d}{v} \\sqrt{\\frac{2}{1 + \\cos(\\theta_2)}}$,\nwhich is minimised by $\\theta_2 = 0$.\nThen, the minimum delay between the two robots is $\\tau_2 = d\/v = \\Delta_{2}$.\n\nInductive step: We suppose the predicate is true for a given $N-1 \\ge 2$. 
\nWe will show that it implies the predicate is true for $N$ robots.\nAs in the previous case, we conclude from Lemma \\ref{prop:security_distance_punctual_target} that the minimum delay between Robot $N-1$ and Robot $N$ is equal to\n$\\frac{d}{v} \\sqrt{\\frac{2}{1 + \\cos(\\theta_{N})}}$, which is minimised for $\\theta_{N} = 0$.\nThen, the minimum delay between the two robots is $\\tau_{N} = d\/v$.\nWe have \n \\if0 1\n $\n \\begin{aligned}\n \\Delta_{N} & = \\Delta_{N-1} + \\tau_{N}\n = (N-2)\\frac{d}{v} + \\frac{d}{v} \n = (N-1)\\frac{d}{v}.\n \\end{aligned}\n $\n \\else\n $$\n \\begin{aligned}\n \\Delta_{N} & = \\Delta_{N-1} + \\tau_{N}\n = (N-2)\\frac{d}{v} + \\frac{d}{v} \n = (N-1)\\frac{d}{v}.\n \\end{aligned}\n $$\n\n \\fi\nConsequently, the minimum delay between Robot 1 and Robot $N$ is $\\Delta_{N} = \\sum_{i = 2}^{N} \\tau_i = (N-1)\\frac{d}{v}$ and the time of arrival of Robot $N$, for all $N$, \nis minimised for $\\theta_2 = \\theta_3 = \\ldots = \\theta_{N} = 0$. Finally, by Definition \\ref{def:throughput2}, the throughput is $f = \\frac{N-1}{\\Delta_{N}} = \\frac{v}{d}$.\n\\end{proof}\n\n\n\nThe insight derived from Proposition \\ref{prop:optimal_throughput_straight_line_punctual_target} implies that we should increase the maximum speed of the robots or decrease the minimum distance between them to increase the throughput. It is also noted that the optimal trajectory for all the robots is to form a queue behind the target and Robot 1.\nAs a result, the optimal path is to create one lane to reach the target.\nWhen we increase the angle $\\theta$ between the path of a robot and the next one, \nwe introduce a delay from the optimal throughput. \nFor instance, Figure \\ref{fig:theoretical:normalised_delay} shows the normalised delay for different angles $\\theta$ (normalised by dividing $\\tau$ by $\\tau_{min} = d \/ v$) between two robots, according to Lemma \\ref{prop:security_distance_punctual_target}. 
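Lemma \\ref{prop:security_distance_punctual_target} can also be checked numerically: fixing the delay at its minimum value and sampling the inter-robot distance $l_\\theta(t)$ over a fine time grid, the smallest sampled distance should match $d$. The Python sketch below is our own verification aid (it assumes $v = d = 1$ and $\\theta_{1} = 0$ without loss of generality):

```python
import math

def min_delay(theta, v=1.0, d=1.0):
    # Minimum delay from Lemma 1: (d / v) * sqrt(2 / (1 + cos(theta))).
    return (d / v) * math.sqrt(2.0 / (1.0 + math.cos(theta)))

def min_distance(theta, tau, v=1.0, steps=20001):
    # Smallest sampled distance between the two robots of Lemma 1, with
    # Robot 1 on the x-axis (theta_1 = 0) and Robot 2 delayed by tau.
    best = float("inf")
    for i in range(steps):
        t = -5.0 + 10.0 * i / (steps - 1)       # time grid on [-5, 5]
        x1, y1 = v * t, 0.0                     # Robot 1
        x2 = v * (t - tau) * math.cos(theta)    # Robot 2
        y2 = v * (t - tau) * math.sin(theta)
        best = min(best, math.hypot(x1 - x2, y1 - y2))
    return best

# With the minimum delay, the robots never get closer than d = 1.
assert abs(min_distance(math.pi / 3, min_delay(math.pi / 3)) - 1.0) < 1e-4
```

Here `min_delay(0.0)` returns exactly $d\/v = 1$, the single-lane optimum of Proposition \\ref{prop:optimal_throughput_straight_line_punctual_target}.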
This figure shows that for an angle of $\\pi\/3$, the minimum delay is $15\\%$ higher than for an angle of 0, and the minimum delay is $41\\%$ higher for an angle of $\\pi\/2$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{figs\/theoretical_normalised_delay.pdf}\n \\caption{Normalised delay versus the angle between the trajectories of the robots.}\n \\label{fig:theoretical:normalised_delay}\n\\end{figure}\n\n\n\n\\subsection{Small target area: $0 < s < d\/2$}\n\\label{sec:smalltargetarrea2}\n\nIn this section, we suppose a small target area where $0 < s < d\/2$; hence, we cannot fit two lanes separated by a distance $d$ heading towards the target. The next results are based on a strategy using two \\emph{parallel lanes} as close as possible while guaranteeing the minimum distance $d$ between robots. Figure \\ref{fig:smalltargetarrea} describes these two parallel lanes. We hereafter call this strategy \\emph{compact lanes}. Proposition \\ref{prop:parallel1} considers a target area with radius $0 < s \\le \\frac{\\sqrt{3}}{4}d$, and Proposition \\ref{prop:parallel2} assumes $\\frac{\\sqrt{3}}{4}d < s < \\frac{d}{2}$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.5\\columnwidth]{figs\/TwoRobotsSmallTarget.pdf}\n \\caption{Two parallel robot lanes for a small target, illustrating the compact lanes strategy.}\n \\label{fig:smalltargetarrea}\n\\end{figure}\n\n\\begin{proposition}\n Assume two parallel lanes with robots at maximum speed $v$ and maintaining a minimum distance $d$ between them. 
The throughput of a common target area with radius $0{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{ \\sin(\\frac{\\pi}{3}-\\theta) x_h -\\frac{s}{d}}{\\cos\\left(\\theta -\\frac{\\pi}{6}\\right)},\n \\frac{-\\cos(\\frac{\\pi}{3}-\\theta)x_{h}}{\\sin\\left(\\frac{\\pi}{6}-\\theta\\right)}\\right),\n & \\text{ if } \\theta < \\pi\/6, \\\\\n \\max\\left(\\frac{ \\sin(\\frac{\\pi}{3}-\\theta) x_h -\\frac{s}{d}}{\\cos\\left(\\theta -\\frac{\\pi}{6}\\right)},\n \\frac{\\frac{vT-s}{d} - \\cos(\\frac{\\pi}{3}-\\theta)x_{h}}{\\sin\\left(\\frac{\\pi}{6}-\\theta\\right)} \\right),\n & \\text{ if } \\theta > \\pi\/6,\\\\\n \\frac{x_{h}}{2}-\\frac{s}{d},\n & \\text{ if } \\theta = \\pi\/6,\n \\end{array}\n \\right.\n $$\n $$\n Y_{2}^{R}(x_{h}) = \n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min\\left(\\frac{ \\sin(\\frac{\\pi}{3}-\\theta) x_h +\\frac{s}{d}}{\\cos\\left(\\theta -\\frac{\\pi}{6}\\right)},\n \\frac{\\frac{vT-s}{d} - \\cos(\\frac{\\pi}{3}-\\theta)x_{h}}{\\sin\\left(\\frac{\\pi}{6}-\\theta\\right)} \\right),\n & \\text{ if } \\theta < \\pi\/6, \\\\\n \\min\\left(\\frac{ \\sin(\\frac{\\pi}{3}-\\theta) x_h +\\frac{s}{d}}{\\cos\\left(\\theta -\\frac{\\pi}{6}\\right)},\n \\frac{-\\cos(\\frac{\\pi}{3}-\\theta)x_{h}}{\\sin\\left(\\frac{\\pi}{6}-\\theta\\right)}\\right),\n & \\text{ if } \\theta > \\pi\/6,\\\\\n \\frac{x_{h}}{2}+\\frac{s}{d},\n & \\text{ if } \\theta = \\pi\/6,\n \\end{array}\n \\right.\n $$\n $$\n B = \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\left\\lceil\\frac{2( \\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y} -s) )}{\\sqrt{3}d}\\right\\rceil,\n & \\text{ if } T > \\frac{s}{v},\n \\\\\n \\left\\lceil-\\frac{2\\sqrt{2svT - (vT)^{2}}}{\\sqrt{3}d}\\sin\\left(\\theta + \\frac{\\pi}{6}\\right)\\right\\rceil,\n & \\text{ otherwise, }\n \\end{array}\n \\right.\n $$\n for $c_{x} = x_{0} + vT - s$ and \n \\if0 1\n $\n (l_{x},l_{y}) = \n \\argmin_{(x,y) \\in Z}{\\vert vT - 
s + x_{0} - x\\vert + \\vert y_{0} - y\\vert },\n $ \n if $T > \\frac{s}{v}$, \n otherwise,\n $(l_{x},l_{y}) = (x_{0},y_{0})$,\n \\else\n $$\n (l_{x},l_{y}) = \\left\\{\n \\begin{array}{>{\\displaystyle}cl}\n \\argmin_{(x,y) \\in Z}{\\vert vT - s + x_{0} - x\\vert + \\vert y_{0} - y\\vert },\n & \\text{ if } T > \\frac{s}{v}, \\\\\n (x_{0},y_{0}),\n & \\text{ otherwise, }\n \\end{array}\n \\right. \n $$\n \\fi\n where $Z$ is the set of robot positions inside the rectangle measuring $vT - s \\times 2s$ for $vT - s > 0$. If $T > \\frac{s}{v}$ or $\\arctan\\left( \\frac{\\frac{s}{2} - \\sin(\\theta) (vT - s) }{\\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s)} \\right) < \\frac{\\pi}{2} - \\theta$,\n \\if0 1\n $\n U = \\left \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y}) + s)}{\\sqrt{3}d} \\right \\rfloor,\n $\n \\else\n $$\n U = \\left \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y}) + s)}{\\sqrt{3}d} \\right \\rfloor,\n $$\n \\fi\n otherwise,\n \\if0 1\n $\n U = \\left \\lfloor \\frac{2\\sqrt{2svT - (vT)^{2}}}{\\sqrt{3}d}\\cos\\left(\\theta-\\frac{\\pi}{3}\\right) \\right\\rfloor.\n $\n \\else\n $$\n U = \\left \\lfloor \\frac{2\\sqrt{2svT - (vT)^{2}}}{\\sqrt{3}d}\\cos\\left(\\theta-\\frac{\\pi}{3}\\right) \\right\\rfloor.\n $$\n \\fi\n Also,\n \\if0 1\n $\n Y_{1}^{S}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} - \\sqrt{\\Delta(x_{h})}}{2 d} \n $ and\n \\else\n $$\n Y_{1}^{S}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} - \\sqrt{\\Delta(x_{h})}}{2 d} \\text{ and } \n $$\n \\fi\n \\begin{equation}\n Y_{2}^{S}(x_{h}) = \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min(L(x_{h}),C_{2}(x_{h})) - 1, \n & \\text{ if } \\min(L(x_{h}),C_{2}(x_{h})) \\\\ \n & \\phantom{if} = \\lfloor L(x_{h}) \\rfloor \\text{ and } T > \\frac{s}{v},\\\\\n \\min(L(x_{h}),C_{2}(x_{h})), \n & \\text{ otherwise, } \n 
\\end{array}\n \\right.\n \\label{eq:whereIusedepsilon}\n \\end{equation}\n \\if0 1\n $\n C_{-\\theta} = \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - l_{x}\\\\\n y_{0} - l_{y}\\\\\n \\end{array}\n \\right],\n $\n $\n \\Delta(x_{h}) = 4 s^{2} - \\left(\\sqrt{3} {\\left(d {x_{h}} -{C_{-\\theta,x}} \\right)} - C_{-\\theta,y}\\right)^{2},\n $\n $\n C_{2}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} + \\sqrt{\\Delta(x_{h})}}{2 d}, \n $\n \\else\n $$\n C_{-\\theta} = \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - l_{x}\\\\\n y_{0} - l_{y}\\\\\n \\end{array}\n \\right],\n $$\n $$\n \\Delta(x_{h}) = 4 s^{2} - \\left(\\sqrt{3} {\\left(d {x_{h}} -{C_{-\\theta,x}} \\right)} - C_{-\\theta,y}\\right)^{2},\n $$\n $$\n C_{2}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} + \\sqrt{\\Delta(x_{h})}}{2 d}, \n $$\n \\fi\n \\if0 1\n $\n L(x_{h}) = \n \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y}}{d \\sin\\left(\\frac{5\\pi}{6}-\\theta\\right)},\n $ if $T > \\frac{s}{v}$, otherwise\n $ L(x_{h}) =\n \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sin\\left( \\frac{5\\pi}{6}-\\theta\\right)}$,\n \\else\n $$\n L(x_{h}) = \\left\\{ \n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y}}{d \\sin\\left(\\frac{5\\pi}{6}-\\theta\\right)},\n & \\text{ if } T > \\frac{s}{v}, \\\\\n \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sin\\left( \\frac{5\\pi}{6}-\\theta\\right)},\n & \\text{ otherwise,}\\\\\n \\end{array}\n \\right.\n 
$$\n \\fi\n and\n \\begin{equation}\n \\begin{aligned}\n \\lim_{T \\to \\infty} f_{h}(T,\\theta) \\in&\n \\left(\\frac{4vs}{\\sqrt{3}d^{2}} - \\frac{2 v \\cos(\\theta -\\pi\/6)}{\\sqrt{3}d}, \\frac{4vs}{\\sqrt{3}d^{2}} + \\frac{2 v \\cos(\\theta-\\pi\/6)}{\\sqrt{3}d}\\right].\n \\end{aligned}\n \\label{eq:hexthroughputbounds}\n \\end{equation}\n \\label{prop:hexthroughputbounds}\n\\end{proposition}\n\\begin{proof}\n We are concerned with the throughput of the target region for a given time and hexagonal packing angle $\\theta$, $f_{h}(T,\\theta) = \\frac{N(T,\\theta)-1}{T}$, where $N(T,\\theta)$ denotes the number of robots that have reached the target region by time $T$. Figure \\ref{fig:hexnumrobots} illustrates the arrival of the robots at the target region.\n As this region has a circular shape, not all robots within a distance $vT$ arrive at the target region by time $T$. Hence, the robots in hexagonal packing are divided into those located inside a rectangle, counted by $N_{R}$, and those inside a semicircle, counted by $N_{S}$ (Figure \\ref{fig:hexnumrobots} (III)). That is, $N(T,\\theta) = N_{S}(T,\\theta) + N_{R}(T,\\theta)$ and $N_{R} = 0$ whenever $vT \\le s$.\n \n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{figs\/hexagonalnumrobotsTime.pdf}\n \\caption{(I) When the robots -- here represented by black dots -- in hexagonal packing begin to arrive at the target region, only the robots inside a part of the semicircle are counted. (II) We consider the first robot to reach the target region to be at $(x_{0},y_{0})$ at time $0$. As $T$ grows, this continues until $vT = s$. (III) When $vT > s$, the robots are counted in two regions: a rectangular one, giving $N_{R}$, and a semicircular one, giving $N_{S}$. 
When $vT > s$, the semicircular region counting starts after the last robot on the rectangular region located at $(l_{x},l_{y})$.}\n \\label{fig:hexnumrobots}\n \\end{figure}\n \n \n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{figs\/referenceplanes.pdf}\n \\caption{The reference frames used in this proof: the usual Euclidean space $(x,y)$ in relation to the target region and the rectangle region formed by robots in hexagonal packing going to it; the coordinate space $(x_{g},y_{g})$, formed by the usual space after a translation to the first robot to reach the target region at $(x_{0},y_{0})$, followed by a rotation by $-\\psi$; the coordinate space $(x_{h},y_{h})$, a hexagonal grid coordinate space made after this transformation and a linear transformation $H$. Robots are represented by the black dots and they are on hexagonal formation. Each neighbour of a robot is distant by $d$, so $\\bigtriangleup ABC$ is equilateral. Thus, $\\theta + \\psi = \\pi\/3$.}\n \\label{fig:referencepsi}\n \\end{figure}\n \n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\columnwidth]{figs\/hexagonal_rectangle_robots.pdf}\n \\caption{Robots in hexagonal packing formation, and the corresponding rectangular corridor which will reach the target region. Robots are located in the $l$, $m$, $n$, $p$, $q$ and $r$ lines, which are parallel to the $y_h$-axis. In this example, $\\psi = 0.227$ and the distance between all robots is $d = 0.5$. The distance between those parallel-to-$y_{h}$ lines is $\\sqrt{3}d \/ 2$. The robots inside the rectangle EFGH are counted and are indicated by red points, while blue points are robots outside the rectangle. 
Although the $x_{h}$-axis coincides with the $x_{g}$-axis, $x_{h}$ is scaled by $d$.}\n \\label{fig:rectangle_robots}\n \\end{figure}\n \n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{figs\/hexagonal_rectangle_problem.pdf}\n \\caption{The problem involves the rectangle EFGH in a hexagonal grid (grey lines inside the rectangle) of robots (the red dots). The $x_h$-axis is horizontal and coincides with the $x$-axis. The $y_h$-axis forms a $2 \\pi \/ 3$ angle with it. $\\overline{EH}$ and $\\overline{AB}$ have length $2 s$ and $vT-s$, respectively. In this example, $\\psi = 19\\pi\/180$ and $s=1$. The angles marked with a line are equal to $\\psi$, because the angle formed by $\\protect \\overrightarrow{y_{2}A}$ and the $x$-axis is right, as well as $\\widehat{EAB}$. Accordingly, $\\widehat{y_{2}AB} = \\pi\/2 - \\psi$ implies that $\\protect \\widehat{EAy_{2}} = \\psi$.}\n \\label{fig:rectangle_problem}\n \\end{figure}\n \n This proof is divided into lemmas that help construct the equations for computing $N_{R}(T,\\theta)$ and $N_{S}(T,\\theta)$, as well as for calculating $\\lim_{T\\to \\infty} f_{h}(T,\\theta)$. Before presenting them, we discuss a coordinate space transformation that will be used to count the robots for $N_{R}$ and $N_{S}$. This transformation was inspired by \\citep{redblobgames}.\n\n Figure \\ref{fig:referencepsi} shows the coordinate spaces used in this proof. Let $\\psi = \\pi\/3 - \\theta$ (because the angle of the equilateral triangle formed by neighbours is $\\pi\/3$, as explained in Figure \\ref{fig:referencepsi}). Accordingly, $\\psi \\in \\lbrack 0,\\pi\/3 \\rparen$ too.\n The usual Euclidean coordinate space that represents the location of all robots is denoted here by $(x,y)$ coordinates. 
The next coordinate space is denoted by $(x_{g},y_{g})$, and it is the result of a translation of the usual Euclidean coordinate space by the position of the first robot to reach the target region at $(x_{0},y_{0})$, then a rotation of $-\\psi$, that is,\n \\if0 1\n $\n \\left[\n \\begin{array}{c}\n x_{g} \\\\\n y_{g}\n \\end{array}\n \\right]\n = \\left[\n \\begin{array}{cc}\n \\cos(-\\psi) & -\\sin(-\\psi)\\\\\n \\sin(-\\psi) & \\cos(-\\psi)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n x-x_{0}\\\\\n y-y_{0}\\\\\n \\end{array}\n \\right].\n $\n \\else\n $$\n \\left[\n \\begin{array}{c}\n x_{g} \\\\\n y_{g}\n \\end{array}\n \\right]\n = \\left[\n \\begin{array}{cc}\n \\cos(-\\psi) & -\\sin(-\\psi)\\\\\n \\sin(-\\psi) & \\cos(-\\psi)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n x-x_{0}\\\\\n y-y_{0}\\\\\n \\end{array}\n \\right].\n $$\n \\fi\n The last coordinate space is denoted by $(x_{h},y_{h})$ and it is intended to represent a hexagonal grid such that the position of each robot is an integer pair.\n \n Figure \\ref{fig:rectangle_robots} shows the location of robots with respect to that hexagonal grid.\n Let $(x_{h}, y_{h}) \\in \\mathbb{Z}^2$ be the hexagonal coordinates of a robot in this hexagonal grid space. In this figure, there is an integer grid in grey -- the horizontal lines correspond to fixed integer $y_{h}$ values and the inclined ones, $x_{h}$ values. For example, in Figure \\ref{fig:rectangle_robots} robots $R_{10}$, $R_{11}$ and $R_{20}$ respectively are at $(0,1)$, $(1,1)$ and $(1,0)$ at $(x_{h}, y_{h})$ coordinate system, which is equivalent to $\\left(-1\/4,\\sqrt{3}\/4\\right)$, $\\left(1\/4,\\sqrt{3}\/4\\right)$ and $\\left(1\/2,0\\right)$ on the usual two dimensional coordinate system with origin at $(x_{0},y_{0})$. \n \n We get the linear transformation $H$ from a point $(x_{h},y_{h})$ to $(x_{g},y_{g})$ basis by knowing the result of this transformation for the standard vectors $(1,0)$ and $(0,1)$. 
Observing Figure \\ref{fig:rectangle_robots} and having that the angle between the $x$-axis and $y_{h}$-axis is by definition $2\\pi\/3$, we get the following mappings $(x_{h},y_{h}) \\mapsto (x_{g},y_{g})$: $(1,0) \\mapsto (d,0)$ and $(0,1) \\mapsto (d\\cos\\left(2\\pi\/3\\right),d\\sin\\left(2\\pi\/3\\right)) = (-\\frac{d}{2}, \\frac{\\sqrt{3}d}{2})$ (in Figure \\ref{fig:rectangle_robots} these two mappings are represented by robots $R_{20}$ and $R_{10}$, respectively, with $d=0.5$). Then,\n \n \\begin{equation}\n \\left[\n \\begin{array}{c}\n x_{g} \\\\\n y_{g}\n \\end{array}\n \\right]\n = {\\left[\n \\begin{array}{cc}\n H\\left(\\left[\n \\begin{aligned}\n 1 \\\\\n 0\n \\end{aligned}\n \\right]\\right) & \n H\\left(\\left[\n \\begin{aligned}\n 0 \\\\\n 1\n \\end{aligned}\n \\right]\\right) \n \\end{array}\n \\right]}\n \\left[\n \\begin{array}{c}\n x_{h}\\\\\n y_{h}\n \\end{array}\n \\right] \n = \\left[\n \\begin{array}{cc}\n d & -\\frac{d}{2}\\\\\n 0 & \\frac{\\sqrt{3}d}{2}\\\\ \n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n x_{h}\\\\\n y_{h}\n \\end{array}\n \\right].\n \\label{eq:xhyh2xgyg}\n \\end{equation}\n \n \n \n \n \n \n Counting the robots inside the rectangle is the same as counting the number of integer hexagonal coordinate points lying inside it.\n Figure \\ref{fig:rectangle_problem} shows the rectangular part with some robots in hexagonal packing, where the robots are the red dots and the hexagonal packing is guided by the grey lines inside the rectangle, based on the value of the angle $\\psi$. The rectangle is of width $vT - s$ and of height $2s$. The reference frame of the hexagonal grid is rotated in relation to the target region (Figure \\ref{fig:referencepsi}). 
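As a quick numerical sanity check of the transformation $H$ in (\ref{eq:xhyh2xgyg}), the following Python sketch -- an illustration we add here, with an arbitrary value of $d$ and a hypothetical helper name `hex_to_g` -- applies $H$ to integer hexagonal coordinates, confirming the two basis mappings and that all six lattice neighbours of a robot lie at distance exactly $d$.

```python
import math

d = 0.5  # inter-robot distance (value chosen arbitrarily for this check)

def hex_to_g(xh, yh):
    """Apply H: map hexagonal-grid coordinates to the (x_g, y_g) frame."""
    xg = d * xh - (d / 2.0) * yh
    yg = (math.sqrt(3) * d / 2.0) * yh
    return xg, yg

# Basis mappings: (1,0) -> (d, 0) and (0,1) -> (-d/2, sqrt(3)d/2)
assert hex_to_g(1, 0) == (d, 0.0)
xg, yg = hex_to_g(0, 1)
assert abs(xg - (-d / 2.0)) < 1e-12 and abs(yg - math.sqrt(3) * d / 2.0) < 1e-12

# The six lattice neighbours of any robot lie at distance d in the (x_g, y_g) frame
for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]:
    nx, ny = hex_to_g(dx, dy)
    assert abs(math.hypot(nx, ny) - d) < 1e-12
```

Note that, for instance, $(1,-1)$ is correctly not a neighbour: it maps to $(3d/2, -\sqrt{3}d/2)$, at distance $\sqrt{3}d$.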
From Figure \\ref{fig:rectangle_problem}, we have \n \\begin{equation}\n 2 s = (y_{2} - y_{1})\\cos(\\psi),\\ y_{2} = \\frac{s}{\\cos(\\psi)} \\text{ and } y_{1} = -\\frac{s}{\\cos(\\psi)}.\n \\label{eq:2sy2y1}\n \\end{equation}\n \n \n We consider a robot with coordinates $(x_g, y_g)$. The four sides of the rectangle EFGH, $\\overline{HG}$, $\\overline{EF}$, $\\overline{EH}$ and $\\overline{FG}$, have the following line equations: $y_{g} = y_{1} + \\tan(\\psi)x_{g}$, $y_{g} = y_2 + x_{g} \\tan(\\psi)$, $y_{g} = \\tan\\left(\\psi+\\frac{\\pi}{2}\\right)x_{g}$ and $y_{g} = \\tan(\\psi + \\frac{\\pi}{2}) \\left(x_{g} - \\frac{v T-s}{\\cos(\\psi)} \\right) $, respectively. The term $\\frac{vT-s}{\\cos(\\psi)}$ in the last equation arises because of the length of $\\overline{AC}$, which is the hypotenuse of $\\bigtriangleup ABC$ whose side $\\overline{AB}$ measures $vT-s$. Knowing that $\\tan\\left(\\psi+\\frac{\\pi}{2}\\right) = -\\cot(\\psi)$, a robot at $(x_{g},y_{g})$ is inside or on the boundary of the previously defined rectangle if and only if all of the inequalities below hold,\n \\begin{equation}\n \\begin{aligned}\n &y_g \\ge y_1 + x_g \\tan( \\psi), \\,\n y_g \\le y_2 + x_g \\tan (\\psi), \\, \n - x_g \\le \\tan(\\psi) y_g,\\text{ and } \\\\\n &- \\left(x_g - \\frac{vT-s}{\\cos\\psi} \\right) \\ge \\tan(\\psi) y_g.\n \\end{aligned}\n \\label{eq:insiderectangleEFGH}\n \\end{equation}\n \n Now we take the minimum and maximum $y_{h}$ value for each parallel-to-$y_{h}$ line depending on the $x_{h}$ value. 
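The four inequalities in (\ref{eq:insiderectangleEFGH}) can be checked numerically. Below is a minimal Python sketch -- our own illustration, with arbitrarily chosen parameter values and a hypothetical helper `inside_rectangle` -- evaluating them at points known to be inside, on the boundary of, and outside the rectangle EFGH.

```python
import math

# Arbitrary parameters for this check: target radius s, speed v, time T, angle psi
s, v, T, psi = 1.0, 1.0, 5.0, 0.3
y1, y2 = -s / math.cos(psi), s / math.cos(psi)
L = v * T - s  # rectangle width

def inside_rectangle(xg, yg, tol=1e-9):
    """All four inequalities of the rectangle EFGH in the (x_g, y_g) frame."""
    t = math.tan(psi)
    return (yg >= y1 + xg * t - tol and
            yg <= y2 + xg * t + tol and
            -xg <= t * yg + tol and
            -(xg - L / math.cos(psi)) >= t * yg - tol)

# The centroid lies on the rectangle axis, halfway along its length: inside
cx, cy = (L / 2.0) * math.cos(psi), (L / 2.0) * math.sin(psi)
assert inside_rectangle(cx, cy)

# Corner E = (-s sin(psi), s cos(psi)) sits exactly on the boundary
assert inside_rectangle(-s * math.sin(psi), s * math.cos(psi))

# A point far beyond side FG violates the fourth inequality: outside
assert not inside_rectangle(2.0 * L, 0.0)
```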
Using (\\ref{eq:xhyh2xgyg}) for converting (\\ref{eq:insiderectangleEFGH}) to $x_{h}$ and $y_{h}$ coordinate system, i.e., hexagonal coordinates, we have for $\\overline{HG}$ and $\\overline{EF}$\n \\if0 1\n $ \\left(\\frac{\\sqrt{3}}{2} + \\frac{1}{2}\\tan(\\psi) \\right) y_h - \\tan(\\psi) x_h \\ge \\frac{y_1}{d}$ and\n $ \\left(\\frac{\\sqrt{3}}{2} + \\frac{1}{2}\\tan(\\psi) \\right) y_h - \\tan(\\psi) x_h \\le \\frac{y_2}{d}.$\n \\else\n $$ \\left(\\frac{\\sqrt{3}}{2} + \\frac{1}{2}\\tan(\\psi) \\right) y_h - \\tan(\\psi) x_h \\ge \\frac{y_1}{d} \\text{ and}$$\n $$ \\left(\\frac{\\sqrt{3}}{2} + \\frac{1}{2}\\tan(\\psi) \\right) y_h - \\tan(\\psi) x_h \\le \\frac{y_2}{d}.$$\n \\fi\n Hence,\n \\if0 1\n $\\frac{y_1}{d} \\le \\left(\\frac{\\sqrt{3}}{2} + \\frac{1}{2}\\tan\\psi \\right) y_h - \\tan(\\psi) x_h \\le \\frac{y_2}{d} \\Leftrightarrow$\n \\else\n $$\\frac{y_1}{d} \\le \\left(\\frac{\\sqrt{3}}{2} + \\frac{1}{2}\\tan\\psi \\right) y_h - \\tan(\\psi) x_h \\le \\frac{y_2}{d} \\Leftrightarrow$$\n \\fi\n \\begin{equation}\n \\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)} \\le y_h \\le\n \\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)}.\n \\label{eq:boundsyh1}\n \\end{equation}\n \n Analogously, but considering $\\overline{EH}$ and $\\overline{FG}$,\n \\begin{equation}\n - x_{h} \\le \\left(\\tan(\\psi) \\frac{\\sqrt{3}}{2} - \\frac{1}{2}\\right)y_{h} \\text{ and } \n \\left(\\tan(\\psi) \\frac{\\sqrt{3}}{2} - \\frac{1}{2}\\right)y_{h} \\le \\frac{v T-s}{d\\cos(\\psi)} - x_{h}.\n \\label{eq:xhvTcos1}\n \\end{equation} \n \n \n Based on the sign of $\\left(\\tan(\\psi) \\frac{\\sqrt{3}}{2} - \\frac{1}{2}\\right)$ and excluding the null case (when $\\psi=\\pi\/6$), we have two different inequalities over $y_{h}$. 
Assuming $\\psi \\in \\lbrack 0,\\pi\/3 \\rparen$, we have $\\left(\\tan(\\psi) \\frac{\\sqrt{3}}{2} - \\frac{1}{2}\\right) > 0 \\Leftrightarrow \\tan(\\psi) \\frac{\\sqrt{3}}{2} > \\frac{1}{2} \\Leftrightarrow \\tan(\\psi) > \\frac{1}{\\sqrt{3}} \\Leftrightarrow \\psi > \\pi\/6$. Thus, from (\\ref{eq:xhvTcos1}), \n\\ifexpandexplanation\n \\begin{equation}\n \\begin{aligned}\n \\frac{-x_{h}}{\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}} \\le y_{h} \\le \\frac{\\frac{v T-s}{d\\cos(\\psi)} - x_{h}}{ \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}, & \\text{ if } \\psi > \\pi\/6, \\\\\n \\frac{\\frac{v T-s}{d\\cos(\\psi)} - x_{h}}{\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}} \\le y_{h} \\le \\frac{-x_{h}}{\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}, & \\text{ if } \\psi < \\pi\/6.\n \\end{aligned}\n \\end{equation}\n\\fi\n \\begin{equation}\n \\begin{aligned}\n \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \\le y_{h} \\le \\frac{\\frac{2(v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}, & \\text{ if } \\psi > \\pi\/6, \\\\\n \\frac{\\frac{2(v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \\le y_{h} \\le \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}, & \\text{ if } \\psi < \\pi\/6.\n \\end{aligned}\n \\label{eq:boundsyh2}\n \\end{equation}\n \n We have that (\\ref{eq:boundsyh1}) and (\\ref{eq:boundsyh2}) restrict the value of $y_{h}$ depending on the value of $x_{h}$ by the relation\n \\begin{equation}\n \\begin{aligned}\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right) \n \\le y_{h} \n \\\\\n \\le\n \\min\\left(\\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{\\frac{2(v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right), \n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{\\frac{2(v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \\right)\n \\le 
y_{h} \n \\\\\n \\le \n \\min\\left(\\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right), \n & \\text{ if } \\psi < \\pi\/6.\n \\end{aligned}\n \\label{eq:boundsyhminmaxreal}\n \\end{equation}\n \n Using hexagonal coordinates the position of each robot is represented by a pair of integers. Then, assuming $x_{h}$ and $y_{h}$ integers, (\\ref{eq:boundsyhminmaxreal}) becomes $\\lceil Y_{1}^{R}(x_{h}) \\rceil \\le y_{h} \\le \\lfloor Y_{2}^{R}(x_{h}) \\rfloor,$ for \n \\if0 1\n \\begin{equation}\n Y_{1}^{R}(x_{h}) \n =\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{\\sin(\\psi) x_h - \\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{ \\sin(\\psi) x_h -\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{x_{h}}{2}-\\frac{s}{d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\label{eq:y1xh}\n \\end{equation}\n \\begin{equation}\n Y_{2}^{R}(x_{h}) \n =\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min\\left(\\frac{\\sin(\\psi) x_h + \\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)} \\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\min\\left(\\frac{ \\sin(\\psi) x_h +\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{x_{h}}{2}+\\frac{s}{d},\n & \\text{ if } \\psi = \\pi\/6.\n \\end{array}\n \\right.\n \\label{eq:y2xh}\n \\end{equation} \n \\else \n\\ifexpandexplanation \n $$\n \\begin{aligned}\n 
Y_{1}^{R}(x_{h}) &= \n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{\\frac{2 (v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\\\ \n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &\\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{{\\sqrt{3} + \\tan(\\psi)}},\n \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{\\frac{2(v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{\\frac{-\\sqrt{3}s}{\\cos(\\pi\/6)} + d x_{h}}{2d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\\\\n &=\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{\\frac{2y_1\\cos(\\psi)}{d} + 2\\sin(\\psi) x_h}{{\\sqrt{3}\\cos(\\psi) + \\sin(\\psi)}},\n \\frac{-2x_{h}\\cos(\\psi)}{\\sqrt{3} \\sin(\\psi) - \\cos(\\psi)}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{\\frac{2y_1\\cos(\\psi)}{d} + 2\\sin(\\psi) x_h}{\\sqrt{3}\\cos(\\psi) + \\sin(\\psi)},\n \\frac{\\frac{2(v T-s)}{d} - 2x_{h}\\cos(\\psi)}{\\sqrt{3} \\sin(\\psi) - \\cos(\\psi)} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{\\frac{-2\\sqrt{3}s}{\\sqrt{3}} + d x_{h}}{2d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\\\\n &=\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n 
\\max\\left(\\frac{\\frac{-2s}{d} + 2\\sin(\\psi) x_h}{{\\sqrt{3}\\cos(\\psi) + \\sin(\\psi)}},\n \\frac{-2x_{h}\\cos(\\psi)}{\\sqrt{3} \\sin(\\psi) - \\cos(\\psi)}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{\\frac{-2s}{d} + 2\\sin(\\psi) x_h}{\\sqrt{3}\\cos(\\psi) + \\sin(\\psi)},\n \\frac{\\frac{2(v T-s)}{d} - 2x_{h}\\cos(\\psi)}{\\sqrt{3} \\sin(\\psi) - \\cos(\\psi)} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{-2s + d x_{h}}{2d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &=\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{\\sin(\\psi) x_h - \\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{ \\sin(\\psi) x_h -\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{x_{h}}{2}-\\frac{s}{d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\\\\n \\end{aligned}\n $$\n \\begin{equation}\n \\begin{aligned}\n &=\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{d\\sin(\\psi) x_h - s}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{ d\\sin(\\psi) x_h -s}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{v T-s - d\\cos(\\psi)x_{h}}{d\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{x_{h}}{2}-\\frac{s}{d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\end{aligned}\n \\label{eq:y1xh}\n \\end{equation}\n \\begin{equation}\n \\begin{aligned}\n Y_{2}^{R}(x_{h}) \n &= \n \\left\\{ 
\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min\\left(\\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)} ,\n \\frac{\\frac{2(v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\min\\left(\\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right), \n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right. \n \\\\\n &=\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min\\left(\\frac{\\sin(\\psi) x_h + \\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)} \\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\min\\left(\\frac{ \\sin(\\psi) x_h +\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{x_{h}}{2}+\\frac{s}{d},\n & \\text{ if } \\psi = \\pi\/6.\n \\end{array}\n \\right.\n \\\\\n \\end{aligned}\n \\label{eq:y2xh}\n \\end{equation}\n\\else \n \\begin{equation}\n \\begin{aligned}\n Y_{1}^{R}(x_{h}) &= \n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{\\frac{2 (v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\\\ \n &=\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\max\\left(\\frac{\\sin(\\psi) x_h - 
\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\max\\left(\\frac{ \\sin(\\psi) x_h -\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)} \\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{x_{h}}{2}-\\frac{s}{d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right.\n \\end{aligned}\n \\label{eq:y1xh}\n \\end{equation}\n \\begin{equation}\n \\begin{aligned}\n Y_{2}^{R}(x_{h}) \n &= \n \\left\\{ \n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min\\left(\\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)} ,\n \\frac{\\frac{2(v T-s)}{d\\cos(\\psi)} - 2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\min\\left(\\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\frac{-2x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\right), \n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d},\n & \\text{ if } \\psi = \\pi\/6,\n \\end{array}\n \\right. 
\n \\\\\n &=\n \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min\\left(\\frac{\\sin(\\psi) x_h + \\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)} \\right),\n & \\text{ if } \\psi > \\pi\/6, \\\\\n \\min\\left(\\frac{ \\sin(\\psi) x_h +\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)},\n \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right),\n & \\text{ if } \\psi < \\pi\/6,\\\\\n \\frac{x_{h}}{2}+\\frac{s}{d},\n & \\text{ if } \\psi = \\pi\/6.\n \\end{array}\n \\right.\n \\\\\n \\end{aligned}\n \\label{eq:y2xh}\n \\end{equation}\n\\fi\n \\fi\n We simplified above using (\\ref{eq:2sy2y1}), $\\cos\\big(\\frac{\\pi}{6} - \\psi\\big) = \\frac{\\sqrt{3}}{2}\\cos(\\psi)+\\frac{1}{2}\\sin(\\psi)$ and $\\sin\\big(\\psi-\\frac{\\pi}{6}\\big) = \\frac{\\sqrt{3}}{2}\\sin(\\psi)-\\frac{1}{2}\\cos(\\psi)$.\n \n \n Now we obtain the possible integer $x_{h}$ values inside the rectangle EFGH, that is, we count the lines parallel to the $y_h$-axis that intersect the rectangle at integer $x_{h}$ values. Let $n_{l}$ be the number of such parallel lines. We consider $n_{l} = n_{l}^{-} + n_{l}^{+}$, such that $n_{l}^{-}$ is the number of lines parallel to the $y_{h}$-axis whose intersection with the $x_{h}$-axis is a point $(i,0)$ for $i<0$ and $i\\in \\mathds{Z}$, and $n_{l}^{+}$ is similar but for non-negative integer $i$. For example, in Figure \\ref{fig:rectangle_robots} we have $n_{l}^{-} = 0$ and $n_{l}^{+} = 6$ (in that figure, the values marked below the $x$-axis are the equivalent $x_{h}$-axis values of those points, to aid in enumerating them). Note that the point $(i, 0)$ may be outside of the rectangle, but it will still be counted if there are integer $(i, y_h)$ coordinates inside the rectangle. 
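For intuition, the counting problem can also be stated computationally: enumerate integer hexagonal coordinates, map them through (\ref{eq:xhyh2xgyg}), and test (\ref{eq:insiderectangleEFGH}) directly. The Python sketch below -- an illustrative brute-force cross-check we add, with arbitrary parameters and a hypothetical helper `count_robots_in_rectangle` -- does exactly that; the remainder of the proof develops closed-form expressions for this count.

```python
import math

# Arbitrary parameters for this illustration
s, v, psi, d = 1.0, 1.0, 0.2, 0.25

def hex_to_g(xh, yh):
    """Map integer hexagonal coordinates to the (x_g, y_g) frame via H."""
    return d * xh - (d / 2.0) * yh, (math.sqrt(3) * d / 2.0) * yh

def count_robots_in_rectangle(T, tol=1e-9):
    """Brute-force count of lattice robots inside the rectangle EFGH."""
    W = v * T - s  # rectangle width
    y1, y2 = -s / math.cos(psi), s / math.cos(psi)
    t = math.tan(psi)
    bound = int(2 * (W + 2 * s) / d) + 2  # window large enough to cover EFGH
    total = 0
    for xh in range(-bound, bound + 1):
        for yh in range(-bound, bound + 1):
            xg, yg = hex_to_g(xh, yh)
            if (yg >= y1 + xg * t - tol and yg <= y2 + xg * t + tol and
                    -xg <= t * yg + tol and
                    -(xg - W / math.cos(psi)) >= t * yg - tol):
                total += 1
    return total

# The rectangle contains lattice points (e.g. the first robot at the origin),
# and it only grows with T, so the count is non-decreasing in T
assert count_robots_in_rectangle(4.0) > 0
assert count_robots_in_rectangle(8.0) >= count_robots_in_rectangle(4.0)
```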
The next lemma shows how to compute $n_{l}^{+}$ and $n_{l}^{-}$ to aid the development of this proof.\n \n\n\n \\begin{lemma}\n On the $(x_{h},y_{h})$ coordinate system, the integer values for $x_{h}$ robot coordinates inside the rectangle EFGH are in the set $\\{-n_{l}^{-}, \\dots, n_{l}^{+}-1\\}$ with\n \\begin{equation}\n n_{l}^{+} = \n \\left\\lfloor\\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) + 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} + 1\\right\\rfloor, \n \\label{eq:nlp}\n \\end{equation}\n and\n \\begin{equation}\n n_{l}^{-} = \n \\left\\lfloor\\frac{2s\\sin\\left(\\left\\vert \\psi - \\pi\/6\\right\\vert \\right)}{\\sqrt{3}d}\\right\\rfloor.\n \\label{eq:nlm}\n \\end{equation}\n \\label{lemma:nlmnlp}\n \\end{lemma}\n \\begin{proof}\n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{figs\/hexagonal_triangles.pdf}\n \\caption{The goal is to count how many points named $B_{i}$ lie on the diagonal $\\overline{HF}$. $B_{i}$ is the intersection of a parallel-to-$y_{h}$ line on an integer $x_{h}$ coordinate and the diagonal $\\overline{HF}$. The triangles $AD_{i}C_{i}$ for any $i \\in \\{1,2\\}$ and $ADC$ are similar. $\\overline{AD_{i}}$ and $\\overline{AC_{i}}$ have length $i \\cdot d$ and $i \\cdot e$, respectively. In this example, $d = 2$ and there are three points lying on $\\overline{HF}$.}\n \\label{fig:rectangle_triangles}\n \\end{figure}\n\n To obtain $n_{l}^{+}$, we count how many parallel-to-$y_{h}$ lines are inside the rectangle; when projected onto the $x$-axis, these lines are a distance $d$ apart from each other on this axis. These lines must intersect the diagonal $\\overline{HF}$ of the rectangle, starting from the intersection of the $y_{h}$-axis with the diagonal (i.e., from $B_{0}$ in Figure \\ref{fig:rectangle_triangles}). Let $\\phi = \\arctan\\left(\\frac{2s}{vT-s}\\right)$ be the angle of the diagonal in relation to the rectangle base. 
We have two cases depending on the value of $\\psi$.\n \n \\begin{itemize} \n \\item Case $\\psi \\le \\frac{\\pi}{6}$: from Figure \\ref{fig:rectangle_triangles}, every line parallel to $y_{h}$ is distant by $d$ on the projection onto the $x$-axis. The triangles $AD_{i}C_{i}$ for any $i \\in \\{1,\\dots,n_{l}^{+}-1\\}$ and $ADC$ are similar, $\\vert \\overline{AD_{1}}\\vert = d$ and $\\vert \\overline{AC_{1}}\\vert = e$, whose value is unknown for the moment. $\\bigtriangleup ADC$ has angles $\\widehat{CAD} = \\psi + \\phi$, $\\widehat{ADC} = \\pi\/3$ and $\\widehat{ACD} = \\pi - \\widehat{CAD} - \\widehat{ADC} = 2\\pi\/3 - \\psi - \\phi$. As for every $i$, $\\bigtriangleup AD_{i}C_{i} \\sim \\bigtriangleup ADC$, we have that \n \\begin{equation}\n \\frac{\\vert \\overline{AC}\\vert }{\\vert \\overline{AC_{1}}\\vert } = \\frac{\\vert \\overline{AD}\\vert }{\\vert \\overline{AD_{1}}\\vert } \\Leftrightarrow \\frac{\\vert \\overline{AC}\\vert }{e} = \\frac{\\vert \\overline{AD}\\vert }{d}.\n \\label{eq:acac1}\n \\end{equation} \n \n As AHFI is a parallelogram, $\\vert \\overline{HF}\\vert = \\vert \\overline{AI}\\vert $ and $\\vert \\overline{FI}\\vert = \\vert \\overline{AH}\\vert = s$, then $\\vert \\overline{BI}\\vert = 2s$. Thus, $\\vert \\overline{AI}\\vert = \\sqrt{(2s)^{2} + (vT-s)^{2}}$, because $\\bigtriangleup ABI$ is right-angled. 
\n Also, by the law of sines, we get $ \\frac{\\vert \\overline{AD}\\vert }{\\sin(\\widehat{ACD})} = \\frac{\\vert \\overline{AC}\\vert }{\\sin(\\widehat{ADC})} \\Leftrightarrow $\n \\begin{equation}\n \\begin{aligned}\n \\vert \\overline{AD}\\vert &= \\vert \\overline{AC}\\vert \\frac{\\sin(\\widehat{ACD})}{\\sin(\\widehat{ADC})} \n = \\left(\\vert \\overline{AI}\\vert - \\vert \\overline{CI}\\vert \\right)\\frac{\\sin(\\widehat{ACD})}{\\sin(\\widehat{ADC})} \\\\\n &= \\left(\\vert \\overline{AI}\\vert - \\vert \\overline{CI}\\vert \\right)\\frac{\\sin(2\\pi\/3 - \\psi - \\phi)}{\\sin(\\pi\/3)}.\n \\end{aligned}\n \\label{eq:adacsin1}\n \\end{equation}\n $AB_{0}FC$ is a parallelogram as well, so $\\vert \\overline{AC}\\vert = \\vert \\overline{B_{0}F}\\vert = \\vert \\overline{HF}\\vert - \\vert \\overline{HB_{0}}\\vert $ and $\\vert \\overline{CI}\\vert = \\vert \\overline{HB_{0}}\\vert $.\n \n \n The $\\bigtriangleup AB_{0}H$ has angles $\\widehat{HAB_{0}} = \\widehat{HAB} - \\widehat{B_{0}AB} = \\widehat{HAB} - (\\widehat{B_{0}AD} + \\widehat{DAB}) = \\pi\/2 - (\\pi\/3 + \\psi) = \\pi\/6 - \\psi$, $\\widehat{AHB_{0}} = \\widehat{AHG} - \\widehat{FHG} = \\pi\/2 - \\phi$ and $\\widehat{HB_{0}A} = \\pi - \\widehat{HAB_{0}} - \\widehat{AHB_{0}} = \\pi\/3 + \\psi + \\phi$. By the law of sines, we have $\\vert \\overline{HB_{0}}\\vert =\\frac{\\sin(\\widehat{HAB_{0}})\\vert \\overline{AH}\\vert }{\\sin(\\widehat{HB_{0}A})} = \\frac{\\sin(\\pi\/6 - \\psi)s}{\\sin(\\pi\/3 + \\psi + \\phi)}$. 
Hence,\n \\if0 1\n from (\\ref{eq:adacsin1}), \n $\\vert \\overline{AD}\\vert = \n \\left(\\vert \\overline{AI}\\vert - \\vert \\overline{CI}\\vert \\right)\\frac{\\sin(2\\pi\/3 - \\psi - \\phi)}{\\sin(\\pi\/3)},$ thus\n \\begin{equation}\n \\vert \\overline{AD}\\vert = \\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) + {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}} \n \\label{eq:adacsin}\n \\end{equation}\n \\else\n $$\n \\begin{aligned}\n &\\vert \\overline{AD}\\vert \n = \\left(\\vert \\overline{AI}\\vert - \\vert \\overline{CI}\\vert \\right)\\frac{\\sin(2\\pi\/3 - \\psi - \\phi)}{\\sin(\\pi\/3)} \n \\hspace*{28mm} [\\text{from (\\ref{eq:adacsin1})}]\n \\\\\n &= \\left(\\sqrt{(2s)^{2} + (vT-s)^{2}} - \\frac{s\\sin(\\pi\/6 - \\psi)}{\\sin(\\pi\/3 + \\psi + \\phi)}\\right)\\frac{\\sin(2\\pi\/3 - \\psi - \\phi)}{\\sin(\\pi\/3)}\\\\\n &= \\sqrt{(2s)^{2} + (vT-s)^{2}}\\frac{\\sin(2\\pi\/3 - \\psi - \\phi)}{\\sin(\\pi\/3)} - \\frac{s\\sin(\\pi\/6 - \\psi)}{\\sin(\\pi\/3)}\\\\ \n\\ifexpandexplanation\n &= 2\\sqrt{(2s)^{2} + (vT-s)^{2}}\\frac{\\sin(2\\pi\/3 - \\psi - \\phi)}{\\sqrt{3}} - \\frac{2s\\sin(\\pi\/6 - \\psi)}{\\sqrt{3}}\\\\ \n\\fi\n &= \\frac{2\\sqrt{(2s)^{2} + (vT-s)^{2}}{\\sin\\left(2\\pi\/3 - \\psi - \\phi\\right)} - {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}}\\\\\n &= \\frac{2\\sqrt{(2s)^{2} + (vT-s)^{2}}\\left(\\sin(\\frac{2\\pi}{3} - \\psi)\\cos(\\phi) - \\cos\\left(\\frac{2\\pi}{3} - \\psi\\right)\\sin(\\phi)\\right)}{\\sqrt{3}}\\\\ \n &\\phantom{=} \\ - \\frac{2s\\sin(\\frac{\\pi}{6} - \\psi)}{\\sqrt{3}}\\\\\n &= \\frac{2\\sqrt{(2s)^{2} + (vT-s)^{2}}\\left( \\frac{\\sin(2\\pi\/3 - \\psi)(vT-s)}{\\sqrt{(2s)^{2} + (vT-s)^{2}}} - \\frac{2s\\cos(2\\pi\/3 - \\psi)}{\\sqrt{(2s)^{2} + (vT-s)^{2}}} \\right)}{\\sqrt{3}} \\\\\n &\\phantom{=} \\ - \\frac{2s\\sin(\\pi\/6 - \\psi)}{\\sqrt{3}}\\\\\n \\end{aligned}\n $$ \n \\begin{align}\n &= \\frac{2\\left(\\sin(2\\pi\/3 - \\psi) (vT-s) - 2s\\cos(2\\pi\/3 - \\psi) \\right) - {2s\\sin(\\pi\/6 - 
\\psi)}}{\\sqrt{3}}\\nonumber\\\\\n\\ifexpandexplanation\n &= \\frac{2\\left(\\sin(2\\pi\/3 - \\psi) (vT-s) + 2s\\sin(\\pi\/6 - \\psi) \\right) - {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}}\\nonumber\\\\\n &= \\frac{2\\sin(2\\pi\/3 - \\psi) (vT-s) + 4s \\sin(\\pi\/6 - \\psi) - {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}} \\nonumber\\\\\n &= \\frac{2\\sin(2\\pi\/3 - \\psi) (vT-s) + {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}} \\nonumber\\\\\n\\fi\n &= \\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) + {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}} \\label{eq:adacsin}\n \\end{align}\n \\fi\n Above we used $\\sin(2\\pi\/3 - \\psi) = \\cos(\\pi\/6 - \\psi)$, $\\cos(2\\pi\/3 - \\psi) = - \\sin(\\pi\/6 - \\psi)$, $\\sin(2\\pi\/3 - \\psi -\\phi) = \\sin(\\pi\/3 + \\psi +\\phi)$, $\\sin(2\\pi\/3 - \\psi - \\phi) = \\sin(2\\pi\/3 - \\psi)\\cos(\\phi) - \\cos(2\\pi\/3 - \\psi)\\sin(\\phi)$, $\\sin(\\arctan(y\/x)) = \\frac{y}{\\sqrt{x^{2}+y^{2}}}$, and $\\cos($ $\\arctan(y\/x) ) = \\frac{x}{\\sqrt{x^{2}+y^{2}}}$.\n \n Therefore, the number of lines parallel to the $y_{h}$-axis intersecting $\\overline{B_{0}F}$ for integer $x_{h}$ values is\n \\if0 1\n $\n n_{l}^{+} = \\left\\lfloor\\frac{\\vert \\overline{B_{0}F}\\vert }{e} + 1\\right\\rfloor \n = \\big\\lfloor\\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) + {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}d} + 1\\big\\rfloor \n $\n by using (\\ref{eq:acac1}) and (\\ref{eq:adacsin}).\n \\else\n $$\n \\begin{aligned}\n n_{l}^{+}\n &= \\left\\lfloor\\frac{\\vert \\overline{B_{0}F}\\vert }{e} + 1\\right\\rfloor \n\\ifexpandexplanation\n = \\left\\lfloor\\frac{\\vert \\overline{HF}\\vert - \\vert \\overline{HB_{0}}\\vert }{e} + 1\\right\\rfloor \n\\fi\n = \\left\\lfloor\\frac{\\vert \\overline{AC}\\vert }{e} + 1\\right\\rfloor \n = \\left\\lfloor\\frac{\\vert \\overline{AD}\\vert }{d} + 1\\right\\rfloor\n &[\\text{from (\\ref{eq:acac1})}]\n \\\\\n &= \\left\\lfloor\\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) + {2s\\sin(\\pi\/6 - \\psi)}}{\\sqrt{3}d} + 1\\right\\rfloor \n &[\\text{from 
(\\ref{eq:adacsin})}]\n \\\\\n \\end{aligned}\n $$\n \\fi\n \\item Case $\\psi > \\frac{\\pi}{6}$: Figure \\ref{fig:rectangle_triangles2} shows this case. Observe that when $\\psi > \\frac{\\pi}{6}$, $\\overline{EA}$ is on the left side of the $y_{h}$-axis. Also, note that we are considering now the diagonal $\\overline{EG}$, because the $y_{h}$-axis does not intersect the diagonal $\\overline{HF}$ for these values of $\\psi$. Then, we have to consider $\\overline{B_{0}G}$ to count $n_{l}^{+}$. Additionally, $\\vert B_{0}G\\vert = \\vert AC\\vert $, due to the $AB_{0}GC$ parallelogram properties. As in the previous case, for $i \\in \\{1, \\dots, n_{l}^{+}-1\\},\\bigtriangleup AD_{i}C_{i} \\sim \\bigtriangleup ADC$, $\\widehat{CAD} = \\widehat{BAD} - \\widehat{BAC} = \\psi - \\phi$, $\\widehat{ADC} = \\pi\/3$, $\\widehat{ACD} = \\pi - \\widehat{CAD} - \\widehat{ADC} = 2\\pi\/3 - \\psi + \\phi$, and $\\frac{\\vert \\overline{B_{0}G}\\vert }{e} = \\frac{\\vert \\overline{AC}\\vert }{e} = \\frac{\\vert \\overline{AD}\\vert }{d}$, by the similarity of these triangles as we showed in the previous case. Also, we have $\\widehat{EAB_{0}} = \\widehat{DAE} - \\widehat{DAB_{0}} = \\psi + \\pi\/2 - 2\\pi\/3 = \\psi - \\pi\/6$, $\\widehat{B_{0}EA} = \\widehat{FEA} - \\widehat{FEB_{0}} = \\pi\/2-\\phi$, $\\widehat{EB_{0}A} = \\pi - \\widehat{B_{0}EA} - \\widehat{EAB_{0}} = \\pi - (\\pi\/2-\\phi) - (\\psi - \\pi\/6) = 2\\pi\/3 + \\phi - \\psi $. Thus, by the law of sines, $\\frac{\\vert \\overline{B_{0}E}\\vert }{\\sin(\\widehat{EAB_{0}})} = \\frac{\\vert \\overline{EA}\\vert }{\\sin(\\widehat{EB_{0}A})} \\Leftrightarrow \\vert \\overline{B_{0}E}\\vert = \\frac{s\\sin(\\widehat{EAB_{0}})}{\\sin(\\widehat{EB_{0}A})} = \\frac{s\\sin( \\psi - \\pi\/6)}{\\sin(2\\pi\/3 + \\phi - \\psi)}$. $EAIG$ and $B_{0}ACG$ are parallelograms sharing the points G and A, so $\\vert \\overline{B_{0}E}\\vert = \\vert \\overline{CI}\\vert $. 
By following similar steps as before, we get\n \\if0 1\n $ n_{l}^{+} = \\left\\lfloor\\frac{2\\cos(\\pi\/6 - \\psi)(vT-s) + 2s\\sin(\\psi - \\pi\/6)}{\\sqrt{3}d} + 1\\right\\rfloor.$\n \\else\n\\ifexpandexplanation \n $$\n \\begin{aligned}\n &n_{l}^{+}\n = \\left\\lfloor\\frac{\\vert \\overline{B_{0}G}\\vert }{e} + 1\\right\\rfloor \n = \\left\\lfloor\\frac{\\vert \\overline{AC}\\vert }{e} + 1\\right\\rfloor \n = \\left\\lfloor\\frac{\\vert \\overline{AD}\\vert }{d} + 1\\right\\rfloor\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n & \n = \\left\\lfloor\\frac{\\left(\\vert \\overline{AI}\\vert - \\vert \\overline{CI}\\vert \\right)\\frac{\\sin(2\\pi\/3 - \\psi + \\phi)}{\\sin(\\pi\/3)}}{d} + 1\\right\\rfloor\\\\\n &= \\left\\lfloor\\frac{\\left(\\sqrt{(2s)^{2} + (vT-s)^{2}} - \\frac{s\\sin(\\psi - \\pi\/6)}{\\sin(2\\pi\/3 - \\psi + \\phi)}\\right)\\frac{\\sin(2\\pi\/3 - \\psi + \\phi)}{\\sin(\\pi\/3)}}{d} + 1\\right\\rfloor\\\\\n &= \\left\\lfloor\\frac{\\sqrt{(2s)^{2} + (vT-s)^{2}}\\frac{\\sin(2\\pi\/3 - \\psi + \\phi)}{\\sin(\\pi\/3)} - \\frac{s\\sin(\\psi - \\pi\/6)}{\\sin(\\pi\/3)}}{d} + 1\\right\\rfloor\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\left\\lfloor\\frac{2\\sqrt{(2s)^{2} + (vT-s)^{2}}{\\sin(2\\pi\/3 - \\psi + \\phi)} - {2s\\sin(\\psi - \\pi\/6)}}{\\sqrt{3}d} + 1\\right\\rfloor\\\\\n &= \\Bigg\\lfloor\\frac{2\\sqrt{(2s)^{2} + (vT-s)^{2}}( \\sin(2\\pi\/3 - \\psi)\\cos(\\phi) + \\cos(2\\pi\/3 - \\psi)\\sin(\\phi) )}{\\sqrt{3}d} \\\\\n &\\phantom{=}\\ -\\frac{{2s\\sin(\\psi - \\pi\/6)}}{\\sqrt{3}d} + 1\\Bigg\\rfloor\\\\\n &= \\left\\lfloor\\frac{2\\sin(2\\pi\/3 - \\psi)(vT-s) + 4s\\cos(2\\pi\/3 - \\psi) - {2s\\sin(\\psi - \\pi\/6)}}{\\sqrt{3}d} + 1\\right\\rfloor\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\left\\lfloor\\frac{2\\sin(2\\pi\/3 - \\psi)(vT-s) + 4s\\sin(\\psi - \\pi\/6) - {2s\\sin(\\psi - \\pi\/6)}}{\\sqrt{3}d} + 1\\right\\rfloor\\\\\n &= \\left\\lfloor\\frac{2\\sin(2\\pi\/3 - \\psi)(vT-s) + 2s\\sin(\\psi - 
\\pi\/6)}{\\sqrt{3}d} + 1\\right\\rfloor \\\\ \n &= \\left\\lfloor\\frac{2\\cos(\\pi\/6 - \\psi)(vT-s) + 2s\\sin(\\psi - \\pi\/6)}{\\sqrt{3}d} + 1\\right\\rfloor.\\\\\n \\end{aligned}\n $$\n\\else\n $$\n \\begin{aligned}\n &n_{l}^{+}\n = \\left\\lfloor\\frac{2\\cos(\\pi\/6 - \\psi)(vT-s) + 2s\\sin(\\psi - \\pi\/6)}{\\sqrt{3}d} + 1\\right\\rfloor.\\\\\n \\end{aligned}\n $$\n\\fi\n \\fi\n This time we used $\\sin(2\\pi\/3 - \\psi) = \\cos(\\psi - \\pi\/6)$ and $\\cos(2\\pi\/3 - \\psi) = \\sin(\\psi - \\pi\/6)$.\n\n \\end{itemize}\n\n For the final result in (\\ref{eq:nlp}), we simplified using the fact that when $\\psi \\le \\pi\/6$, $\\sin(\\vert \\psi - \\pi\/6\\vert ) = \\sin(\\pi\/6 - \\psi)$; otherwise, $\\sin(\\vert \\psi - \\pi\/6\\vert ) = \\sin(\\psi - \\pi\/6).$\n \n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.85\\columnwidth]{figs\/hexagonal_triangles2.pdf}\n \\caption{The goal is to count how many points named $B_{i}$ lie in the diagonal $\\overline{EG}$. The triangles $AD_{i}C_{i}$ for any $i \\in \\{1,2\\}$ and $ADC$ are similar. $\\overline{AD_{i}}$ and $\\overline{AC_{i}}$ have lengths $i\\cdot d$ and $i\\cdot e$, respectively. In this example, $d = 2$ and there are three points lying over $\\overline{EG}$. }\n \\label{fig:rectangle_triangles2}\n \\end{figure}\n \n \n \n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.85\\columnwidth]{figs\/robot_not_full_lines_1.pdf}\n \\caption{In this example, $\\psi \\le \\pi\/6$. The pink line on the left side is an example of one satisfying Lemma \\ref{lemma:Interval1}, while the one on the right side satisfies Lemma \\ref{lemma:endcase}. The triangles ACE, HIA, BMG and BNF are congruent, because their respective angles are equal -- due to parallelism -- and $\\vert \\overline{EA}\\vert =\\vert \\overline{AH}\\vert =\\vert \\overline{GB}\\vert =\\vert \\overline{FB}\\vert =s$. 
In this example, except for $\\protect\\overleftrightarrow{JH}, \\protect\\overleftrightarrow{EC}, \\protect\\overleftrightarrow{MG}, \\protect\\overleftrightarrow{BL}$ and $\\protect \\overleftrightarrow{FD}$, the lines parallel-to-$y_{h}$ are distant by $d$ on the projection over the $x$-axis and can have robots on them. }\n \\label{fig:not_full_lines}\n \\end{figure}\n \n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{figs\/robot_not_full_lines_2.pdf}\n \\caption{In this example, $\\psi > \\pi\/6$, so the side EH makes an angle greater than zero with the $y_{h}$-axis. The pink line on the left side is an example of one satisfying Lemma \\ref{lemma:Interval1}, while the one on the right side satisfies Lemma \\ref{lemma:endcase}. The triangles AIE, HCA, FNB and BMG are congruent, because their respective angles are equal -- due to parallelism -- and $\\vert \\overline{EA}\\vert =\\vert \\overline{AH}\\vert =\\vert \\overline{GB}\\vert =\\vert \\overline{FB}\\vert =s$. Except for $\\protect\\overleftrightarrow{EJ}, \\protect\\overleftrightarrow{CH}, \\protect\\overleftrightarrow{FK}, \\protect\\overleftrightarrow{BL}$ and $\\protect \\overleftrightarrow{GD}$, the lines parallel-to-$y_{h}$ are distant by $d$ on the projection over the $x$-axis and can have robots on them. }\n \\label{fig:not_full_lines2}\n \\end{figure}\n \n For $n_{l}^{-}$, we also calculate how many lines parallel to the $y_{h}$-axis projected over the $x$-axis are distant from each other by $d$ on this axis and are inside the rectangle. However, we consider only those on the left side of the point $A$, i.e., starting from the one whose intersection with the $x$-axis is at $(-d,0)$, equivalently, $(-1,0)$ on the $(x_{h},y_{h})$ coordinate system. We also have two cases here.\n \n \\begin{itemize}\n \\item Case $\\psi \\le \\pi\/6$: \n Figure \\ref{fig:not_full_lines} shows the $\\bigtriangleup HIA$ on the left side of the rectangle EFGH. 
As the robots are over the parallel-to-$y_{h}$ lines distant by $d$ on the projection over the $x$-axis, we want to know how many parallel lines intersect $\\overline{HI}$ (equivalently, how many such lines intersect $\\overline{JA}$ due to parallelism), excluding $\\overleftrightarrow{AI}$ (because it was already counted in $n_{l}^{+}$). Thus,\n \\if0 1\n $n_{l}^{-} = \\left\\lfloor\\frac{\\vert \\overline{HI}\\vert }{d}\\right\\rfloor.$\n \\else\n $$n_{l}^{-} = \\left\\lfloor\\frac{\\vert \\overline{HI}\\vert }{d}\\right\\rfloor.$$\n \n \\fi\n We have $\\vert \\overline{AH}\\vert = s, \\widehat{H} = \\pi\/2 + \\psi, \\widehat{I} = \\pi\/3$ and $\\widehat{A} = \\pi - \\widehat{I} - \\widehat{H} = \\pi - \\pi\/3 - (\\pi\/2 + \\psi) = \\pi\/6 - \\psi$. By the law of sines on the angles opposite to the sides $\\overline{AH}$ and $\\overline{HI}$, we obtain \n \\begin{equation}\n \\begin{aligned}\n \\vert \\overline{HI}\\vert \n = \\frac{\\vert \\overline{AH}\\vert \\sin(\\widehat{A})}{\\sin(\\widehat{I})} \n = \\frac{s\\sin\\left(\\frac{\\pi}{6} - \\psi\\right)}{\\sin\\left(\\frac{\\pi}{3}\\right)} \n = \\frac{2s\\sin\\left(\\frac{\\pi}{6} - \\psi\\right)}{\\sqrt{3}}.\n \\end{aligned}\n \\label{eq:hisize}\n \\end{equation}\n Thus, \n \\if0 1\n $\n n_{l}^{-} = \\left \\lfloor\\frac{2s\\sin\\left(\\frac{\\pi}{6} - \\psi\\right)}{\\sqrt{3}d} \\right \\rfloor.\n $\n \\else\n $$\n n_{l}^{-} = \\left \\lfloor\\frac{2s\\sin\\left(\\frac{\\pi}{6} - \\psi\\right)}{\\sqrt{3}d} \\right \\rfloor.\n $$\n \\fi\n \\item Case $\\psi > \\pi\/6$: Figure \\ref{fig:not_full_lines2} illustrates this case. The reasoning is similar to the previous case, but now we use $\\bigtriangleup EIA$. Then, $\\vert \\overline{EA}\\vert = s, \\widehat{E} = \\pi\/2 - \\psi$, $\\widehat{I} = 2\\pi\/3$ and $\\widehat{A} = \n \\pi - \\widehat{I} - \\widehat{E} \n = \\pi - 2\\pi\/3 - (\\pi\/2 - \\psi) \n = \\psi - \\pi\/6$. 
Consequently, \n \\if0 1 \n $\n n_{l}^{-} \n = \\left\\lfloor\\frac{\\vert \\overline{EI}\\vert }{d} \\right\\rfloor \n = \\left\\lfloor\\frac{2s\\sin\\left(\\psi - \\frac{\\pi}{6}\\right)}{\\sqrt{3}d}\\right\\rfloor. \n $\n \\else\n $$\n \\begin{aligned}\n n_{l}^{-} \n = \\left\\lfloor\\frac{\\vert \\overline{EI}\\vert }{d} \\right\\rfloor \n = \\left\\lfloor\\frac{2s\\sin\\left(\\psi - \\frac{\\pi}{6}\\right)}{\\sqrt{3}d}\\right\\rfloor. \\\\\n \\end{aligned}\n $$\n \\fi\n \\end{itemize}\n \n For the final result in (\\ref{eq:nlm}), we use the absolute value inside the sine function to combine both cases.\n \\fi %\n \\end{proof}\n \n By the previous lemma, we have calculated the interval of integer $x_{h}$ values needed for counting the robots inside the rectangle. In the next lemma, we get the equation for the number of robots at the rectangular part ($N_{R}$) ranging from these integer $x_{h}$ values. Although the proposition we are now proving gives the throughput in terms of $\\theta$, we are first going to calculate this number in terms of $\\psi$.\n \n \\begin{lemma}\n For $\\psi \\in \\lbrack 0,\\pi\/3 \\rparen$,\n \\if0 1\n $\n N_{R}(T,\\psi) = \n \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{+}-1}\\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1\\right).\n $\n \\else\n $$\n N_{R}(T,\\psi) = \n \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{+}-1}\\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1\\right).\n $$\n \\fi\n If for some $x_{h}$ $\\left\\lfloor Y_{2}^{R}(x_{h}) \\right\\rfloor < \\left \\lceil Y_{1}^{R}(x_{h}) \\right \\rceil $, we assume the respective summand for this $x_{h}$ being zero.\n \\label{lemma:NR}\n \\end{lemma}\n \\begin{proof}\n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n By the previous lemma and knowing that the positions of the robots are integer coordinates over the hexagonal grid coordinate space, \n \\if0 1\n $\n N_{R}(T,\\psi) \n = \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{+}-1} 
\\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right).\n $\n \\else\n $$\n \\begin{aligned}\n N_{R}(T,\\psi) \n &= \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{+}-1}\\sum_{y_{h}=\\lceil Y_{1}^{R}(x_{h}) \\rceil}^{\\lfloor Y_{2}^{R}(x_{h}) \\rfloor}1\n = \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{+}-1} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right).\n \\end{aligned}\n $$\n \\fi\n This follows because (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}) give the minimum ($Y_{1}^{R}$) and maximum ($Y_{2}^{R}$) $y_{h}$ coordinates for a given $x_{h}$ value such that the robot is inside the rectangle. Note that the last summation can only be used when $\\left\\lfloor Y_{2}^{R}(x_{h}) \\right\\rfloor \\ge \\left \\lceil Y_{1}^{R}(x_{h}) \\right \\rceil $, otherwise a negative number of robots would be counted. \n \\fi %\n \\end{proof}\n \n In particular, for $\\psi = \\pi\/6$, by (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}),\n \\begin{equation}\n N(T,\\pi\/6) = \\sum_{x_{h}=0}^{\n \\left\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\right\\rfloor\n } \\left(\\left \\lfloor \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} \\right \\rfloor - \\left \\lceil \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} \\right \\rceil + 1 \\right). \n \\label{eq:30degreescasepsi}\n \\end{equation}\n\n If $\\psi \\neq \\pi\/6$, each parallel-to-$y_{h}$-axis line intersects two segments of the rectangle EFGH. The $y_{h}$-components of the two intersections of a rectangle side and such lines are the values of $Y_{1}^{R}(x_{h})$ and $Y_{2}^{R}(x_{h})$ for a given $x_{h}$. Hence, the set of $x_{h}$ integer values $\\{-n_{l}^{-}, \\dots, n_{l}^{+}-1\\}$ will be partitioned into disjoint subsets based on the $\\max$ and $\\min$ outcomes of (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}). That is, $Y_{1}^{R}(x_{h})$ and $Y_{2}^{R}(x_{h})$, respectively; equivalently, which two sides of the rectangle the parallel-to-$y_{h}$-axis line corresponding to $(x_{h},0)$ intersects. 
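The summand in Lemma \ref{lemma:NR} is simply the number of integers in the real interval $[Y_{1}^{R}(x_{h}), Y_{2}^{R}(x_{h})]$. This counting rule, including the empty-column convention when $\lfloor Y_{2}^{R}\rfloor < \lceil Y_{1}^{R}\rceil$, can be sanity-checked against direct enumeration; a throwaway Python sketch (the function names and the sample intervals are ours, purely illustrative):

```python
import math

def count_integers(a, b):
    """Number of integers y with a <= y <= b; 0 when floor(b) < ceil(a) (empty column)."""
    return max(math.floor(b) - math.ceil(a) + 1, 0)

def brute_force(a, b):
    """Direct enumeration over a safe superset of candidate integers."""
    return sum(1 for y in range(math.floor(a) - 2, math.ceil(b) + 3) if a <= y <= b)

# Non-empty, negative, empty, and degenerate intervals.
for a, b in [(0.3, 4.7), (-2.5, -0.1), (1.2, 1.1), (2.0, 2.0)]:
    assert count_integers(a, b) == brute_force(a, b)
```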
The following lemmas describe each subset: $\\{- n_{l}^{-}, \\dots, n_{l}^{-}\\}$ in Lemma \\ref{lemma:Interval1}; $\\{n_{l}^{-} + 1, \\dots, K'-1 \\}$ in Lemma \\ref{lemma:MiddleInterval}; $\\{K', \\dots, n_{l}^{+}-1\\}$ in Lemma \\ref{lemma:endcase}, for an integer $K'$ defined later.\n\n \\begin{lemma}\n Consider parallel-to-$y_{h}$-axis lines inside the rectangle EFGH intersecting the $x_{h}$-axis at $(x_{h},0)$, for $x_{h} \\in \\mathds{Z}$. The two following statements are equivalent:\n \\begin{itemize}\n \\item[(I)] \n If $\\psi < \\pi\/6$,\n \\begin{equation}\n Y_{1}^{R}(x_{h}) = \n \\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)} \\text{ and }\n Y_{2}^{R}(x_{h}) = \n \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1},\n \\label{eq:eq48}\n \\end{equation}\n and, if $\\psi > \\pi\/6$,\n \\begin{equation}\n Y_{1}^{R}(x_{h}) = \n \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \\text{ and } \n Y_{2}^{R}(x_{h}) = \n \\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)}.\n \\end{equation}\n \\item[(II)] \n $x_{h} \\in \\{- n_{l}^{-}, \\dots, n_{l}^{-}\\}$.\n \\end{itemize}\n \\label{lemma:Interval1}\n \\end{lemma}\n \\begin{proof} \n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n $(I) \\Rightarrow (II):$ Let $\\psi < \\pi\/6$. 
By (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}), (\\ref{eq:eq48}) is equivalent to \n \\if0 1\n $\n \\frac{ \\sin(\\psi) x_h -\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} \\ge \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n $ and $\n \\frac{ \\sin(\\psi) x_h +\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} \\ge \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}.\n $\n \\else\n $$\n \\frac{ \\sin(\\psi) x_h -\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} \\ge \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\text{ and }\n \\frac{ \\sin(\\psi) x_h +\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} \\ge \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}.\n $$\n \\fi\n From the second inequality, we have \n \\if0 1\n $\n \\frac{ \\sin(\\psi) x_h +\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)}\n \\ge\n \\frac{- \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow \n x_{h}\n \\le\n \\frac{- 2s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{\\sqrt{3}d}\n = \\frac{2s\\sin\\left(\\frac{\\pi}{6}-\\psi\\right)}{\\sqrt{3}d}.\n $\n \\else\n $$\n \\begin{aligned}\n &\\phantom{\\Leftrightarrow} \n \\frac{ \\sin(\\psi) x_h +\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)}\n \\ge\n \\frac{- \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\\\ \n &\\Leftrightarrow\n \\left(\\frac{ \\sin(\\psi) }{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} + \\frac{ \\cos(\\psi)}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\ge\n -\\frac{s}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)}\n \\\\\n &\\Leftrightarrow\n \\left( \\frac{\\sin(\\psi) \\sin\\left(\\psi-\\frac{\\pi}{6}\\right) + \\cos(\\psi)\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\ge\n - 
\\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\\\ \n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n\\ifexpandexplanation \n &\\Leftrightarrow\n \\left( \\frac{\\sin(\\psi) \\sin\\left(\\psi-\\frac{\\pi}{6}\\right) + \\cos(\\psi)\\cos\\left(\\psi-\\frac{\\pi}{6}\\right)}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\ge\n - \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\\\ \n &\\Leftrightarrow\n \\left( \\frac{\\cos(\\psi - (\\psi-\\frac{\\pi}{6}))}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\ge\n - \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\\\\n &\\Leftrightarrow\n \\left( \\frac{\\cos(\\psi - \\psi + \\frac{\\pi}{6})}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\ge\n - \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\\\\n\\fi\n &\\Leftrightarrow\n \\left( \\frac{\\cos(\\frac{\\pi}{6})}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\ge\n - \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\\\\n &\\Leftrightarrow\n \\left( \\frac{\\frac{\\sqrt{3}}{2}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\ge\n - \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\\\\n\\ifexpandexplanation\n &\\Leftrightarrow\n x_{h}\n \\le\n \\frac{- 
s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{\\frac{\\sqrt{3}}{2}d}\n \\\\\n\\fi\n &\\Leftrightarrow \n x_{h}\n \\le\n \\frac{- 2s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{\\sqrt{3}d}\n = \\frac{2s\\sin\\left(\\frac{\\pi}{6}-\\psi\\right)}{\\sqrt{3}d}.\n \\end{aligned}\n $$\n \\fi\n The change of inequality sign above is due to $\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right) < 0$ for $\\psi < \\pi\/6$. As $x_{h} \\in \\mathds{Z}$, \n \\if0 1\n $\n x_{h} \\le \\left\\lfloor\\frac{2s\\sin\\left(\\frac{\\pi}{6} - \\psi \\right)}{\\sqrt{3}d}\\right\\rfloor = n_{l}^{-}.\n $\n \\else\n $$\n x_{h} \\le \\left\\lfloor\\frac{2s\\sin\\left(\\frac{\\pi}{6} - \\psi \\right)}{\\sqrt{3}d}\\right\\rfloor = n_{l}^{-}.\n $$\n \\fi\n The lower value on $x_{h}$ is obtained by Lemma \\ref{lemma:nlmnlp}, as to be inside the rectangle EFGH $x_{h} \\ge -n_{l}^{-}$. For $\\psi > \\pi\/6$, we obtain the same result by a similar reasoning, but without changing the inequality sign since in this case $\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right) > 0$.\n \n\\begin{comment} \n Let $\\psi > \\pi\/6$. 
By (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}), (\\ref{eq:eq49}) is equivalent to \n \n $$\n \\frac{\\sin(\\psi) x_h - \\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} \\le \\frac{-\\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\text{ and }\n \\frac{\\sin(\\psi) x_h + \\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} \\le \\frac{\\frac{v T-s}{d} - \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}.\n $$\n \n From the first inequality, we have \n $$\n \\begin{aligned}\n &\n \\frac{ \\sin(\\psi) x_h -\\frac{s}{d}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)}\n \\le\n \\frac{- \\cos(\\psi)x_{h}}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow\\\\ \n &\n \\left(\\frac{ \\sin(\\psi) }{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)} + \\frac{ \\cos(\\psi)}{\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\le\n \\frac{s}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)}\n \\Leftrightarrow\\\\\n &\n \\left( \\frac{\\sin(\\psi) \\sin\\left(\\psi-\\frac{\\pi}{6}\\right) + \\cos(\\psi)\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\le\n \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow\\\\\n &\n \\left( \\frac{\\sin(\\psi) \\sin\\left(\\psi-\\frac{\\pi}{6}\\right) + \\cos(\\psi)\\cos\\left(\\psi-\\frac{\\pi}{6}\\right)}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\le\n \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow\\\\ \n &\n \\left( \\frac{\\cos(\\psi - (\\psi-\\frac{\\pi}{6}))}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\le\n 
\\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow\\\\\n &\n \\left( \\frac{\\cos(\\psi - \\psi + \\frac{\\pi}{6})}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\le\n \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow\\\\\n &\n \\left( \\frac{\\cos(\\frac{\\pi}{6})}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\le\n \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow\\\\\n &\n \\left( \\frac{\\frac{\\sqrt{3}}{2}}{\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\\right) x_{h}\n \\le\n \\frac{s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{d\\cos\\left(\\frac{\\pi}{6}-\\psi\\right)\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}\n \\Leftrightarrow\\\\\n &\n x_{h}\n \\le\n \\frac{ s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{\\frac{\\sqrt{3}}{2}d}\n = \\frac{2 s\\sin\\left(\\psi-\\frac{\\pi}{6}\\right)}{\\sqrt{3}d}\n \\end{aligned}\n $$\n \n As $x_{h} \\in \\mathds{Z}$, \n $$\n x_{h} \\le \\left\\lfloor\\frac{2s\\sin\\left(\\psi - \\frac{\\pi}{6} \\right)}{\\sqrt{3}d}\\right\\rfloor = n_{l}^{-}.\n $$\n The lower value on $x_{h}$ is obtained by Lemma \\ref{lemma:nlmnlp}, as to be inside the rectangle EFGH $x_{h} \\ge -n_{l}^{-}$.\n\\end{comment}\n \n \n $(II) \\Rightarrow (I):$ \n From (\\ref{eq:boundsyh1}), (\\ref{eq:xhvTcos1}) (i.e., the line equations for $\\overleftrightarrow{HG}$, $\\overleftrightarrow{EH}$ and $\\overleftrightarrow{EF}$), (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}) (i.e., the definitions of $Y_{1}^{R}$ and $Y_{2}^{R}$), we have, if $\\psi < \\pi\/6$, \n \\if0 1 \n $(x_{h},Y_{1}^{R}(x_{h})) \\in 
\\overleftrightarrow{HG} \\Leftrightarrow Y_{1}^{R}(x_{h}) = \\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},$\n $(x_{h},Y_{2}^{R}(x_{h})) \\in \\overleftrightarrow{EH} \\Leftrightarrow Y_{2}^{R}(x_{h}) = \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1},$\n and, if $\\psi > \\pi\/6$,\n $(x_{h},Y_{1}^{R}(x_{h})) \\in \\overleftrightarrow{EH} \\Leftrightarrow Y_{1}^{R}(x_{h}) = \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} ,$\n $(x_{h},Y_{2}^{R}(x_{h})) \\in \\overleftrightarrow{EF} \\Leftrightarrow Y_{2}^{R}(x_{h}) = \\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)}.$\n \\else\n $$(x_{h},Y_{1}^{R}(x_{h})) \\in \\overleftrightarrow{HG} \\Leftrightarrow Y_{1}^{R}(x_{h}) = \\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},$$\n $$(x_{h},Y_{2}^{R}(x_{h})) \\in \\overleftrightarrow{EH} \\Leftrightarrow Y_{2}^{R}(x_{h}) = \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1},$$\n and, if $\\psi > \\pi\/6$,\n $$(x_{h},Y_{1}^{R}(x_{h})) \\in \\overleftrightarrow{EH} \\Leftrightarrow Y_{1}^{R}(x_{h}) = \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} ,$$\n $$(x_{h},Y_{2}^{R}(x_{h})) \\in \\overleftrightarrow{EF} \\Leftrightarrow Y_{2}^{R}(x_{h}) = \\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)}.$$\n \n \\fi\n Then, we prove this part by showing that for all $x_{h} \\in \\{- n_{l}^{-},\\dots,n_{l}^{-}\\}$, the line parallel to the $y_{h}$-axis intercepting the point $(x_{h},0)$ intercepts both sides $\\overline{EH}$ and $\\overline{HG}$ (and no other), if $\\psi < \\pi\/6$ (Figure \\ref{fig:not_full_lines}), and, if $\\psi > \\pi\/6$, both sides $\\overline{EH}$ and $\\overline{EF}$ (and no other) (Figure \\ref{fig:not_full_lines2}).\n \n \\begin{itemize}\n \\item Case $\\psi < \\pi\/6$: \n Figure \\ref{fig:not_full_lines} shows the triangles HIA, ACE and BMG inside the rectangle EFGH. 
As the robots are over the parallel lines to the $y_{h}$-axis, which are distant by $d$ when projected over the $x$-axis, we want to know how many such parallel lines intersect $\\overline{HI}$ (equivalently, how many such lines intersect $\\overline{JA}$ due to parallelism) or $\\overline{AC}$.\n For such parallel lines that intersect $\\overline{HI}$, Lemma \\ref{lemma:nlmnlp} showed that for every $x_{h} \\in \\{-n_{l}^{-},\\dots,-1\\}$ the line parallel to $y_{h}$-axis intersecting $(x_{h},0)$ is inside the rectangle. Also, these lines intersect the sides $\\overline{EH}$ and $\\overline{HG}$, as any line parallel to $\\overleftrightarrow{AI}$ which is on its left side intersects the sides $\\overline{EH}$ and $\\overline{HG}$ if it is inside the rectangle.\n For the case where such parallel lines intersect $\\overline{AC}$, we need to know the maximum integer value, $M$, such that these parallel lines still intersect the sides $\\overline{EH}$ and $\\overline{HG}$ for any $x_{h} \\in \\{0, \\dots, M\\}$. Starting from point $A$ (that is, when $x_{h} = 0$), we have\n \\if0 1\n $M= \\left\\lfloor\\frac{\\vert \\overline{AC}\\vert }{d}\\right\\rfloor.$\n \\else\n $$M= \\left\\lfloor\\frac{\\vert \\overline{AC}\\vert }{d}\\right\\rfloor.$$\n \\fi\n We have $\\vert \\overline{AH}\\vert = \\vert \\overline{EA}\\vert = s$, $\\overleftrightarrow{AI} \\parallel \\overleftrightarrow{EC}$, $\\overleftrightarrow{HI} \\parallel \\overleftrightarrow{AC}$, and $\\overleftrightarrow{AH} \\parallel \\overleftrightarrow{AE}$ (as $E$, $A$ and $H$ are collinear), then $\\widehat{IHA} = \\widehat{CAE}, \\widehat{AIH} = \\widehat{ECA}$, and $\\widehat{HAI} = \\widehat{AEC}$. 
Thus $\\bigtriangleup HIA \\cong \\bigtriangleup ACE$, so $\\vert \\overline{AC}\\vert = \\vert \\overline{HI}\\vert $, whose value has been previously calculated in Lemma \\ref{lemma:nlmnlp}, leading to\n \\if0 1\n $\n M = \\left \\lfloor\\frac{2s\\sin\\left(\\frac{\\pi}{6} - \\psi\\right)}{\\sqrt{3}d} \\right \\rfloor = n_{l}^{-}.\n $\n \\else\n $$\n M = \\left \\lfloor\\frac{2s\\sin\\left(\\frac{\\pi}{6} - \\psi\\right)}{\\sqrt{3}d} \\right \\rfloor = n_{l}^{-}.\n $$\n \\fi\n Hence, for any $x_{h} \\in \\{0, \\dots, n_{l}^{-}\\}$, those parallel lines intersect the sides $\\overline{EH}$ and $\\overline{HG}$. \n \\item Case $\\psi > \\pi\/6$: Figure \\ref{fig:not_full_lines2} illustrates this case. The reasoning is similar to the previous case, but now using $\\bigtriangleup AIE \\cong \\bigtriangleup HCA $. As the value of $\\vert \\overline{EI}\\vert \/d$ has also been calculated in Lemma \\ref{lemma:nlmnlp} for this figure, we have \n \\if0 1\n $\n M \n = \\left\\lfloor\\frac{2s\\sin\\left(\\psi - \\frac{\\pi}{6}\\right)}{\\sqrt{3}d}\\right\\rfloor = n_{l}^{-}.\n $\n \\else\n $$\n M \n = \\left\\lfloor\\frac{2s\\sin\\left(\\psi - \\frac{\\pi}{6}\\right)}{\\sqrt{3}d}\\right\\rfloor = n_{l}^{-}.\n $$\n \\fi \n Consequently, for any $x_{h} \\in \\{-n_{l}^{-}, \\dots, n_{l}^{-}\\}$, the parallel-to-$y_{h}$-axis line at $(x_{h},0)$ intersects the sides $\\overline{EH}$ and $\\overline{EF}$ in this case. \\qed \n \\end{itemize} \\renewcommand{\\qedsymbol}{}\n \\fi %\n \\end{proof}\n \nThe next lemma will define the integer $K'$ mentioned before. This number will be compared with the integer $x_{h}$ coordinate of the point $(n_{l}^{+}-1,0)$ intersected by the rightmost parallel-to-$y_{h}$-axis line inside the rectangle EFGH. Assuming $\\psi \\neq \\pi\/6$, if this rightmost line intersects a point on the $x_{h}$-axis with an integer coordinate less than $K'$, then no parallel-to-$y_{h}$-axis line intersects the rectangle right side $\\overline{FG}$. 
However, if the intersection point coordinate is greater than or equal to $K'$, then at least one parallel line crosses $\\overline{FG}$.\n\n \n \\begin{lemma}\n Consider parallel-to-$y_{h}$-axis lines inside the rectangle EFGH intersecting the $x_{h}$-axis at $(x_{h},0)$, for $x_{h} \\in \\mathds{Z}$, and $K' = \\left\\lceil\\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rceil$. Then, the two statements below are equivalent: \n \\begin{itemize}\n \\item[(I)] If $\\psi < \\pi\/6$,\n \\begin{equation}\n Y_{1}^{R}(x_{h}) = \n \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}\\text{ and } \n Y_{2}^{R}(x_{h}) = \n \\frac{\\frac{2y_2}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)},\n \\label{eq:endcase1}\n \\end{equation}\n and, if $\\psi > \\pi\/6$,\n \\begin{equation}\n Y_{1}^{R}(x_{h}) = \n \\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)} \\text{ and }\n Y_{2}^{R}(x_{h}) = \n \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}.\n \\label{eq:endcase2} \n \\end{equation}\n \\item[(II)] $x_{h} \\in \\{ K', \\dots, n_{l}^{+}-1\\}$.\n \\end{itemize}\n \\label{lemma:endcase}\n \\end{lemma}\n \\begin{proof} \n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n $(I) \\Rightarrow (II)$: By contrapositive, assume $x_{h} \\notin \\{ K', \\dots, n_{l}^{+} - 1\\}$. By Lemma \\ref{lemma:nlmnlp}, there is no $x_{h} > n_{l}^{+} - 1$, so $x_{h} < K'$. \n For the case of $\\psi < \\pi\/6$, observe in Figure \\ref{fig:not_full_lines} the point $K$ on the $x_{h}$-axis. This point corresponds to the intersection of $\\overleftrightarrow{MG}$ with the $x_{h}$-axis, which is the first parallel-to-$y_{h}$-axis line crossing the rectangle right side $\\overline{FG}$. The point D on the $x_{h}$-axis is the projection of the point F on this axis. 
By (\\ref{eq:adacsin}), $\\vert \\overline{AD}\\vert = \\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) + {2s\\sin(\\vert \\psi - \\pi\/6\\vert )}}{\\sqrt{3}}$. Because of the parallelism, we have $\\vert \\overline{MN}\\vert = \\vert \\overline{KD}\\vert $. Due to the congruence of triangles ACE, HIA, BMG and BNF and (\\ref{eq:hisize}), $\\vert \\overline{BM}\\vert = \\vert \\overline{BN}\\vert = \\vert \\overline{HI}\\vert = \\frac{2s\\sin\\left(\\vert \\psi-\\pi\/6\\vert \\right)}{\\sqrt{3}}$. Thus, $\\vert \\overline{KD}\\vert = \\vert \\overline{MN}\\vert = \\vert \\overline{BM}\\vert + \\vert \\overline{BN}\\vert = \\frac{4s\\sin\\left(\\vert \\psi - \\pi\/6 \\vert \\right)}{\\sqrt{3}}$. Since $\\vert \\overline{AK}\\vert = \\vert \\overline{AD}\\vert - \\vert \\overline{KD}\\vert = \\frac{2\\cos(\\pi\/6 - \\psi) (vT -s) - {2s\\sin(\\vert \\psi - \\pi\/6\\vert )}}{\\sqrt{3}}$, the point K is located on the $(x_{h},y_{h})$ coordinate space at $\\bigg( \\frac{2\\cos(\\pi\/6 - \\psi) (vT-s)}{\\sqrt{3}d} - \\frac{2s\\sin(\\vert \\psi - \\pi\/6\\vert )}{\\sqrt{3}d} , 0 \\bigg)$, as $K$ is on the $x$-axis and to convert it to $(x_{h},y_{h})$ coordinate space we only need to divide the $x$-coordinate by $d$. On the $x_{h}$-axis, the nearest point on the right of $K$ with integer $x_{h}$ is $(\\left\\lceil K \\right\\rceil,0)=(K',0)$. As we assumed $x_{h} < K'$, no parallel-to-$y_{h}$-axis line crossing an integer point $(x_{h},0)$ inside the rectangle intersects $\\overline{FG}$. Thus, no such parallel line has $Y_{1}^{R}(x_{h}) = \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}$, which is the $y_{h}$-coordinate of the intersection of this line with $\\overleftrightarrow{FG}$. 
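The location of the point K and the resulting value of $K'$ can be sanity-checked numerically. The sketch below recomputes $\vert \overline{AK}\vert $ from the individual segment lengths used above and confirms it matches the closed form before taking the ceiling; the sample values of $s$, $d$, $v$, $T$ and $\psi$ are illustrative assumptions only.

```python
import math

# Numeric sanity check of the location of point K (case psi < pi/6).
# The sample parameters below are assumed for illustration only.
s, d, v, T = 1.0, 0.3, 1.0, 5.0
psi = math.pi / 12  # an angle smaller than pi/6

# |AD| (projection of F on the x-axis) and |KD| = |MN|, from the congruent triangles
AD = (2 * math.cos(math.pi/6 - psi) * (v*T - s)
      + 2 * s * math.sin(abs(psi - math.pi/6))) / math.sqrt(3)
KD = 4 * s * math.sin(abs(psi - math.pi/6)) / math.sqrt(3)
AK = AD - KD

# Closed form for |AK| stated in the proof
AK_closed = (2 * math.cos(math.pi/6 - psi) * (v*T - s)
             - 2 * s * math.sin(abs(psi - math.pi/6))) / math.sqrt(3)
assert abs(AK - AK_closed) < 1e-12

# Dividing by d converts to the (x_h, y_h) coordinate space;
# K' is the nearest integer x_h-coordinate at or to the right of K.
K_prime = math.ceil(AK / d)
print(K_prime)
```

For these sample parameters the two expressions for $\vert \overline{AK}\vert $ agree to machine precision, as expected.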
\n \n In the case of $\\psi > \\pi\/6$, applying a similar argument to Figure \\ref{fig:not_full_lines2} yields the desired result, but here we use $\\vert \\overline{NB}\\vert + \\vert \\overline{MG}\\vert = \\vert \\overline{KD}\\vert $ and the congruence of triangles AIE, HCA, FNB and BMG. As we assumed $x_{h} < K'$, no parallel-to-$y_{h}$-axis line intersecting an integer point $(x_{h},0)$ inside the rectangle crosses $\\overline{FG}$, so for any such line $Y_{2}^{R}(x_{h}) \\neq \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}$. \n \n\\begin{comment}\n For the case $\\psi < \\pi\/6$, observe in Figure \\ref{fig:not_full_lines2} the point $K$ on the $x_{h}$-axis. This point corresponds to the intersection of $\\overleftrightarrow{FN}$ on the $x_{h}$-axis, which is the first parallel-to-$y_{h}$-axis crossing the rectangle right side $\\overline{FG}$. The point D on the $x_{h}$-axis is the projection of the point G on this axis. By the (\\ref{eq:adacsin}), $\\vert \\overline{AD}\\vert = \\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) + {2s\\sin(\\vert \\psi - \\pi\/6\\vert )}}{\\sqrt{3}}$. Because of the parallelism, we have $\\vert \\overline{NB}\\vert + \\vert \\overline{MG}\\vert = \\vert \\overline{KD}\\vert $. Due to the congruence of triangles AIE, HCA, FNB and BMG and (\\ref{eq:hisize}), $\\vert \\overline{NB}\\vert = \\vert \\overline{MG}\\vert = \\vert \\overline{EI}\\vert = \\frac{2s\\sin\\left(\\vert \\psi-\\pi\/6\\vert \\right)}{\\sqrt{3}}$. Thus, $\\vert \\overline{KD}\\vert = \\vert \\overline{NB}\\vert + \\vert \\overline{MG}\\vert = \\frac{4s\\sin\\left(\\vert \\psi - \\pi\/6 \\vert \\right)}{\\sqrt{3}}$. 
Since $\\vert \\overline{AK}\\vert = \\vert \\overline{AD}\\vert - \\vert \\overline{KD}\\vert = \\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) - {2s\\sin(\\vert \\psi - \\pi\/6\\vert )}}{\\sqrt{3}}$, the point K is located on the $(x_{h},y_{h})$ coordinate space at $\\left(\\frac{2\\cos(\\pi\/6 - \\psi) (vT-s) - {2s\\sin(\\vert \\psi - \\pi\/6\\vert )}}{\\sqrt{3}d},0\\right)$. On the $x_{h}$-axis, the nearest point on the right of $K$ with integer $x_{h}$ is $(K',0)$. As we assumed $x_{h} < K'$, no parallel-to-$y_{h}$-axis with integer $x_{h}$ coordinate inside the rectangle intersect the $\\overline{FG}$. Thus, no such parallel line has $Y_{2}^{R} = \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}$. \n\\end{comment}\n\n $(II) \\Rightarrow (I):$ If $x_{h} \\in \\{K', \\dots, n_{l}^{+}-1\\}$, then the parallel-to-$y_{h}$-axis lines inside the rectangle intersecting the $x_{h}$-axis at $(x_{h}, 0)$ are on the right of point K or intersect it. Hence, these lines intersect $\\overline{EF}$ and $\\overline{FG}$, if $\\psi < \\pi\/6$. By applying (\\ref{eq:boundsyh1}), (\\ref{eq:xhvTcos1}) (for the line equations for $\\overleftrightarrow{EF}$ and $\\overleftrightarrow{FG}$), (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}) (for the definitions of $Y_{1}^{R}$ and $Y_{2}^{R}$), we have (\\ref{eq:endcase1}). A similar argument is used in the case of $\\psi > \\pi\/6$, but for $\\overline{FG}$ and $\\overline{HG}$ intersections, yielding (\\ref{eq:endcase2}). \n \\fi %\n \\end{proof}\n\n The lemma below characterises when a parallel-to-$y_{h}$-axis line touches only the sides EH and FG of the rectangle. Intuitively, \n this only happens when the rectangle has a small width. Thus, on rectangles with a large width, no such line crosses the sides EH and FG, for $\\psi \\neq \\pi\/6$. 
We will use this lemma in Lemma \\ref{lemma:MiddleInterval} to complete the disjoint subsets based on the possible $\\max$ and $\\min$ outcomes of $Y_{1}^{R}$ and $Y_{2}^{R}$.\n \n \\begin{lemma}\n If $vT -s > 2s\\tan(\\vert \\psi - \\frac{\\pi}{6}\\vert )$, then\n there is no $x_{h} \\in \\{-n_{l}^{-},\\dots , n_{l}^{+}-1\\}$ such that \n \\if0 1\n $\n Y_{1}^{R}(x_{h}) = \n \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \n $\n and\n $\n Y_{2}^{R}(x_{h}) = \n \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1},\n \\text{ if } \\psi<\\pi\/6;\n $\n $\n Y_{1}^{R}(x_{h}) = \n \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \n $ \n and \n $\n Y_{2}^{R}(x_{h}) = \n \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1},\n \\text{ if } \\psi > \\pi\/6.\n $\n \\else\n $$\n Y_{1}^{R}(x_{h}) = \n \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \n \\text{ and }\n Y_{2}^{R}(x_{h}) = \n \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1},\n \\text{ if } \\psi<\\pi\/6;\n $$\n $$\n Y_{1}^{R}(x_{h}) = \n \\frac{-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1} \n \\text{ and }\n Y_{2}^{R}(x_{h}) = \n \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1},\n \\text{ if } \\psi > \\pi\/6.\n $$\n \\fi\n \\label{lemma:excludeCase}\n \\end{lemma}\n \\begin{proof} \n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n This proof is by contrapositive. Assume $\\psi < \\pi\/6$. 
By (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}), we have an $x_{h}$ such that \n \\if0 1\n $\n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \\le\n \\frac{\\frac{v T-s}{d\\cos(\\psi)} - x_{h}}{ \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}\n $ and \n $\n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}} \\ge\n \\frac{-x_{h}}{\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}.\n $\n \\else\n $$\n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \\le\n \\frac{\\frac{v T-s}{d\\cos(\\psi)} - x_{h}}{ \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}\n \\text{ and }\n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}} \\ge\n \\frac{-x_{h}}{\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}.\n $$\n \n \\fi\n Since $\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2} < 0$, the inequality signs flip, and we have the following implication\n \\if0 1\n $\n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)} \\ge - x_{h} - \\frac{\\tan(\\psi) x_h \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n $ and \n $\n \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\\le -x_{h} - \\frac{\\tan(\\psi) x_h \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n \\Rightarrow\n \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\\le \n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)},\n $\n \\else\n $$\n \\begin{aligned}\n &\\phantom{\\Leftrightarrow} \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)} \\ge - x_{h} - \\frac{\\tan(\\psi) x_h \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n \\text{ and }\n \\\\\n 
&\\phantom{\\Leftrightarrow} \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\\le -x_{h} - \\frac{\\tan(\\psi) x_h \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n \\\\\n &\\Rightarrow\n \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\\le \n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)},\n \\end{aligned}\n $$\n \\fi\n by the transitivity of $\\le$ under the real numbers. Also, we have the following equivalences\n \\if0 1\n $ \n \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\\le \n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)}\n \\Leftrightarrow\n v T-s\n \\le \n 2s\\tan(\\pi\/6 - \\psi)\n ,\n $\n due to (\\ref{eq:2sy2y1}), the equalities $\\tan(a+b)=\\frac{\\tan(a)+\\tan(b)}{1-\\tan(a)\\tan(b)}$, $\\cot(a) = -\\tan(a + \\pi\/2)$ and $-\\tan(\\pi-a) = \\tan(a)$ for any real $a$ and $b$.\n \\else\n $$\n \\begin{aligned}\n &\\phantom{\\Leftrightarrow} \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\\le \n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)}\n \\\\\n\\ifexpandexplanation\n &\\Leftrightarrow\n \\frac{v T-s}{d\\cos(\\psi)}\n \\le \n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n \\\\\n\\fi\n &\\Leftrightarrow\n \\frac{v T-s}{d\\cos(\\psi)}\n \\le \n \\frac{y_1-y_2}{d} \\frac{{\\sqrt{3} \\tan(\\psi) - 1}}{{\\sqrt{3} + \\tan(\\psi)}} \\\\\n &\\Leftrightarrow\n \\frac{v T-s}{d\\cos(\\psi)}\n \\le \n -\\frac{2s}{d\\cos(\\psi)} \\frac{{\\sqrt{3} 
\\tan(\\psi) - 1}}{{\\sqrt{3} + \\tan(\\psi)}} \\hspace{2.5cm} [\\text{By (\\ref{eq:2sy2y1})}]\n \\\\\n & \\Leftrightarrow\n \\frac{v T-s}{2s}\n \\le \n \\frac{{1- \\sqrt{3} \\tan(\\psi)}}{{\\sqrt{3} + \\tan(\\psi)}} \\Leftrightarrow\n \\frac{v T-s}{2s}\n \\le \n \\frac{1}{\\tan(\\pi\/3+\\psi)} \\\\& \\Leftrightarrow\n \\frac{v T-s}{2s}\n \\le \n \\cot(\\pi\/3+\\psi) \n \\Leftrightarrow \n \\frac{v T-s}{2s}\n \\le \n -\\tan(\\psi+5\\pi\/6) \n \\\\\n & \\Leftrightarrow\n v T-s\n \\le \n 2s\\tan(\\pi\/6 - \\psi)\n .\n \\end{aligned}\n $$\n Above we used the equalities $\\tan(a+b)=\\frac{\\tan(a)+\\tan(b)}{1-\\tan(a)\\tan(b)}$, $\\cot(a) = -\\tan(a + \\pi\/2)$ and $-\\tan(\\pi-a) = \\tan(a)$ for any real $a$ and $b$.\n \\fi\n \n For the case $\\psi > \\pi\/6$, using similar arguments we get the same result, but we do not change the signs of inequalities due to $\\frac{\\sqrt{3}\\tan(\\psi) - 1}{2} > 0$ in this case. The conclusion is reached after we combine the two cases using absolute values inside the tangent.\n\\begin{comment}\n If $\\psi > \\pi\/6$, by the hypothesis, (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}), we have a $x_{h}$ such that\n $$\n \\begin{aligned}\n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \\le \\frac{-x_{h}}{\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}\n \\text{ and }\n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}} \\ge \\frac{\\frac{v T-s}{d\\cos(\\psi)} - x_{h}}{ \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}} \\Leftrightarrow\n \\\\\n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\\left(\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}\\right) \\le -x_{h}\n \\text{ and }\n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}}\\left(\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}\\right) \\ge \\frac{v T-s}{d\\cos(\\psi)} - x_{h} \\Leftrightarrow\n \\\\\n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n 
\\le\n -x_{h} - \\frac{\\tan(\\psi) x_h \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}\n \\text{ and }\n \\\\\n \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)} \\ge - x_{h} - \\frac{\\tan(\\psi) x_h \\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \\Rightarrow\n \\\\\n \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n \\le \n \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{v T-s}{d\\cos(\\psi)} \\Leftrightarrow\n \\\\\n \\frac{v T-s}{d\\cos(\\psi)}\n \\le \n \\frac{\\frac{y_2}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} - \\frac{\\frac{y_1}{d}\\frac{\\sqrt{3} \\tan(\\psi) - 1}{2}}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \\Leftrightarrow\n \\\\\n \\frac{v T-s}{d\\cos(\\psi)}\n \\le \n \\frac{y_2-y_1}{d} \\frac{{\\sqrt{3} \\tan(\\psi) - 1}}{{\\sqrt{3} + \\tan(\\psi)}} \\Leftrightarrow\n \\frac{v T-s}{d\\cos(\\psi)}\n \\le \n \\frac{2s}{d\\cos(\\psi)} \\frac{{\\sqrt{3} \\tan(\\psi) - 1}}{{\\sqrt{3} + \\tan(\\psi)}} \\Leftrightarrow\n \\\\\n \\frac{v T-s}{2s}\n \\le \n \\frac{{\\sqrt{3} \\tan(\\psi) - 1}}{{\\sqrt{3} + \\tan(\\psi)}} \\Leftrightarrow\n \\\\\n \\frac{{\\sqrt{3} + \\tan(\\psi)}}{{\\sqrt{3} \\tan(\\psi) - 1}} \n \\le \\frac{2s}{vT-s} \\Leftrightarrow\n \\\\\n -\\frac{{\\tan(\\psi) + \\sqrt{3}}}{{1 - \\sqrt{3} \\tan(\\psi)}} \n \\le \\frac{2s}{vT-s} \\Leftrightarrow\n \\\\\n -\\frac{{\\sqrt{3} + \\tan(\\psi) }}{{1 - \\sqrt{3} \\tan(\\psi)}} \n \\le \\frac{2s}{vT-s} \\Leftrightarrow\n -\\tan(\\pi\/3 + \\psi) \\le \\frac{2s}{vT-s} \\Leftrightarrow\n \\cot(\\psi - \\pi\/6) \\le \\frac{2s}{vT-s} \\Leftrightarrow\n \\\\\n \\frac{vT-s}{2s} \\le \\tan(\\psi - \\pi\/6),\n \\end{aligned}\n $$\n\\end{comment} \n \\fi %\n \\end{proof}\n \n The next lemma completes the properties of $N_{R}(T,\\psi)$ 
that are useful for calculating its limit when $T$ tends to infinity.\n \n \\begin{lemma}\n Let $K'=\\left\\lceil\\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rceil$. If $vT-s > 2s \\tan(\\vert \\psi - \\pi\/6\\vert )$, then $x_{h} \\in \\{n_{l}^{-} + 1, \\dots, K' -1\\}$ if and only if\n \\if0 1\n $\n Y_{1}^{R}(x_{h}) = \n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n $ and $\n Y_{2}^{R}(x_{h}) = \n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}}.\n $\n \\else\n $$\n \\begin{aligned}\n Y_{1}^{R}(x_{h}) = \n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n \\text{ and }\n Y_{2}^{R}(x_{h}) = \n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}}.\n \\end{aligned}\n $$ \n \\fi\n \\label{lemma:MiddleInterval}\n \\end{lemma}\n \\begin{proof} \n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n Excluding the case when $\\psi = \\pi\/6$, (\\ref{eq:y1xh}) and (\\ref{eq:y2xh}) give four combinations of possible outcomes for the values of $Y_{1}^{R}(x_{h})$ and $Y_{2}^{R}(x_{h})$ based on the results of $\\min$ and $\\max$. When $vT-s > 2s \\tan(\\vert \\psi - \\pi\/6\\vert )$, by Lemma \\ref{lemma:excludeCase}, we do not have the case when they are on the sides $EH$ and $FG$. For the given values of $x_{h}$ in the hypothesis, neither Lemma \\ref{lemma:Interval1} nor Lemma \\ref{lemma:endcase} applies, excluding the other two combinations of results for $Y_{1}^{R}(x_{h})$ and $Y_{2}^{R}(x_{h})$. Finally, Lemma \\ref{lemma:nlmnlp} shows that every parallel-to-$y_{h}$-axis line crosses the $x_{h}$-axis at $(x_{h},0)$ for $x_{h} \\in \\{-n_{l}^{-}, \\dots, n_{l}^{+}-1\\}$, so the remaining combination yields the desired equivalence.\n \\fi %\n \\end{proof}\n \n Now we present the calculation of $N_{S}(T,\\theta)$. 
Here we are using $\\theta$ instead of $\\psi = \\pi\/3 - \\theta$ for ease of presentation. We denote by $(l_{x},l_{y})$ the position of the last robot inside a rectangle of width $vT - s$ and height $2s$ whose left side is at $(x_{0},y_{0}).$ Here \\emph{last} means the robot with the highest $x$ coordinate value. However, if two robots have the same $x$ coordinate value, we take the robot whose $y$ coordinate is nearer to $y_{0}$. Let $Z$ be the set of robot positions inside the rectangle above for $vT - s > 0$.\n \n \\begin{lemma}\n Let $c_{x} = x_{0} + vT - s$, and \n \\if0 1\n $\n (l_{x},l_{y}) = \n \\argmin_{(x,y) \\in Z}{\\vert vT - s + x_{0} - x\\vert + \\vert y_{0} - y\\vert } \n $ if $T > \\frac{s}{v}$, otherwise,\n $(l_{x},l_{y}) = (x_{0},y_{0}).$\n Then,\n $\n N_{S}(T,\\theta) = \\sum_{x_{h} = B}^{U}\\left(\\lfloor Y_{2}^{S}(x_{h}) \\rfloor - \\lceil Y_{1}^{S}(x_{h}) \\rceil + 1\\right),\n $\n \\else\n $$\n (l_{x},l_{y}) = \n \\begin{cases}\n \\argmin_{(x,y) \\in Z}{\\vert vT - s + x_{0} - x\\vert + \\vert y_{0} - y\\vert } \n & \\text{ if } T > \\frac{s}{v}, \\\\\n (x_{0},y_{0}) \n & \\text{ otherwise. 
}\n \\end{cases}\n $$\n Then,\n $$\n N_{S}(T,\\theta) = \\sum_{x_{h} = B}^{U}\\left(\\lfloor Y_{2}^{S}(x_{h}) \\rfloor - \\lceil Y_{1}^{S}(x_{h}) \\rceil + 1\\right),\n $$\n \\fi\n for $\\left\\lfloor Y_{2}^{S}(x_{h}) \\right\\rfloor \\ge \\left \\lceil Y_{1}^{S}(x_{h}) \\right \\rceil $ (if for some $x_{h}$ $\\left\\lfloor Y_{2}^{S}(x_{h}) \\right\\rfloor < \\left \\lceil Y_{1}^{S}(x_{h}) \\right \\rceil $, we assume the respective summand for this $x_{h}$ being zero),\n \\begin{equation}\n B = \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\left\\lceil\\frac{2( \\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y} -s) )}{\\sqrt{3}d}\\right\\rceil,\n & \\text{ if } T > \\frac{s}{v},\n \\\\\n \\left\\lceil-\\frac{2\\sqrt{2svT - (vT)^{2}}}{\\sqrt{3}d}\\sin\\left(\\theta + \\frac{\\pi}{6}\\right)\\right\\rceil,\n & \\text{ otherwise, }\n \\end{array}\n \\right. \n \\label{eq:BcasesSC}\n \\end{equation}\n if $T > \\frac{s}{v}$ or $\\arctan\\left( \\frac{\\frac{s}{2} - \\sin(\\theta) (vT - s) }{\\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s)} \\right) \\le \\frac{\\pi}{2} - \\theta$,\n \\begin{equation}\n U = \\left \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y}) + s)}{\\sqrt{3}d} \\right \\rfloor, \n \\label{eq:eq58}\n \\end{equation}\n otherwise, \n \\begin{equation}\n U = \\left \\lfloor \\frac{2\\sqrt{2svT - (vT)^{2}}}{\\sqrt{3}d}\\cos\\left(\\theta-\\frac{\\pi}{3}\\right) \\right\\rfloor.\n \\label{eq:eq59}\n \\end{equation}\n Also,\n \\if0 1\n $\n Y_{1}^{S}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} - \\sqrt{\\Delta(x_{h})}}{2 d}, \n $\n $\n Y_{2}^{S}(x_{h}) = \n \\min(L(x_{h}),C_{2}(x_{h})) - 1, \n $ if $\\min(L(x_{h}),C_{2}(x_{h})) = \\lfloor L(x_{h})\\rfloor$ \n and $T > \\frac{s}{v}$, otherwise,\n $ Y_{2}^{S}(x_{h}) = \\min(L(x_{h}),C_{2}(x_{h}))$,\n for\n $\n C_{-\\theta} = \\big[\n \\begin{array}{cc}\n \\cos(-\\theta) & 
-\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\big]\n \\big[\n \\begin{array}{c}\n c_{x} - l_{x}\\\\\n y_{0} - l_{y}\\\\\n \\end{array}\n \\big],\n $\n $\n \\Delta(x_{h}) = 4 s^{2} - \\big(\\sqrt{3} {\\big(d {x_{h}} -{C_{-\\theta,x}} \\big)} - C_{-\\theta,y}\\big)^{2},\n $\n $\n C_{2}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} + \\sqrt{\\Delta(x_{h})}}{2 d}, \n $\n and\n $\n L(x_{h}) = \n \\frac{\\sin\\big(\\frac{\\pi}{2} - \\theta\\big)(d x_{h} - C_{-\\theta,x}) + \\cos\\big(\\frac{\\pi}{2} - \\theta\\big)C_{-\\theta,y}}{d \\sin\\big(\\frac{5\\pi}{6}-\\theta\\big)}$,\n if $T > \\frac{s}{v}$, otherwise,\n $L(x_{h}) = \\frac{\\sin\\big(\\frac{\\pi}{2}-\\theta\\big) x_{h}}{\\sin\\big( \\frac{5\\pi}{6}-\\theta\\big)}$.\n \\else\n $$\n Y_{1}^{S}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} - \\sqrt{\\Delta(x_{h})}}{2 d}, \n $$\n $$\n Y_{2}^{S}(x_{h}) = \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min(L(x_{h}),C_{2}(x_{h})) - 1, \n & \\text{ if } \\min(L(x_{h}),C_{2}(x_{h})) = \\lfloor L(x_{h})\\rfloor \\\\ \n & \\phantom{if} \\text{ and } T > \\frac{s}{v},\\\\\n \\min(L(x_{h}),C_{2}(x_{h})), \n & \\text{ otherwise, } \n \\end{array}\n \\right.\n $$\n for\n $$\n C_{-\\theta} = \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - l_{x}\\\\\n y_{0} - l_{y}\\\\\n \\end{array}\n \\right],\n $$\n $$\n \\Delta(x_{h}) = 4 s^{2} - \\left(\\sqrt{3} {\\left(d {x_{h}} -{C_{-\\theta,x}} \\right)} - C_{-\\theta,y}\\right)^{2},\n $$\n $$\n C_{2}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} + \\sqrt{\\Delta(x_{h})}}{2 d}, \n $$\n and\n $$\n L(x_{h}) = \\left\\{ \n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + 
\\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y}}{d \\sin\\left(\\frac{5\\pi}{6}-\\theta\\right)},\n & \\text{ if } T > \\frac{s}{v}, \\\\\n \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sin\\left( \\frac{5\\pi}{6}-\\theta\\right)},\n & \\text{ otherwise.}\\\\\n \\end{array}\n \\right.\n $$\n \\fi\n \\label{lemma:NS} \n \\end{lemma}\n \\begin{proof}\n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{figs\/hex_semicircle1.pdf}\n \\caption{In the space (I) the robots are in the standard coordinate system and the semicircle with centre at $C = (c_{x},c_{y})$ has the lowest point at $B^{'}$. $\\protect\\overleftrightarrow{CB^{'}}$ has angle $\\frac{\\pi}{2}$ with the usual $x$-axis; the $x_{h}$-axis here, however, has angle $\\theta$ with it. In (II) we rotate by $-\\theta$ with $(l_{x},l_{y})$ as centre of rotation. After this rotation, $B^{'}$, $U^{'}$ and $C$ become $B^{'}_{-\\theta}$, $U^{'}_{-\\theta}$ and $C_{-\\theta}$, respectively, and $\\protect\\overleftrightarrow{C_{-\\theta}B^{'}_{-\\theta}}$ has angle $\\frac{\\pi}{2}-\\theta$ in relation to the $x_{g}$-axis and $x_{h}$-axis, which are now coincident lines despite their scale being different. $B$ and $U$ are the minimum and maximum values of the $x_{h}$-axis coordinate for a line parallel to the $y_{h}$-axis on the hexagonal grid coordinate system.}\n \\label{fig:hexsemicircle1}\n \\end{figure}\n \n Assume $T > \\frac{s}{v}$, as shown in Figure \\ref{fig:hexnumrobots} (III). The robots are located in the usual Euclidean space. As we did for the rectangular part, in this proof we use a similar coordinate system transformation to position the robots on a hexagonal grid with integer coordinates. As in the previous lemmas, we denote the coordinates in this system by $(x_{h},y_{h})$. 
However, here we are using an $(x_{h},y_{h})$ coordinate system with a different origin and inclination. \n \n In order to do so, we first redefine the $(x_{g},y_{g})$ coordinate space, that is, we perform a rotation by $-\\theta$ on the usual Euclidean space about $(l_{x},l_{y})$. The origin of the $(x_{g},y_{g})$ coordinate system is at $(l_{x},l_{y})$. The transformation for the $(x_{g},y_{g})$ coordinate system used here is similar to that depicted in Figure \\ref{fig:referencepsi}, but here we are using $-\\theta$ and $(l_{x},l_{y})$ instead of $-\\psi$ and $(x_{0},y_{0})$, i.e., \n \\if0 1\n $\n \\left[\n \\begin{array}{c}\n x_{g} \\\\\n y_{g}\n \\end{array}\n \\right]\n = \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n x-l_{x}\\\\\n y-l_{y}\\\\\n \\end{array}\n \\right].\n $\n \\else\n $$\n \\left[\n \\begin{array}{c}\n x_{g} \\\\\n y_{g}\n \\end{array}\n \\right]\n = \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n x-l_{x}\\\\\n y-l_{y}\\\\\n \\end{array}\n \\right].\n $$\n \\fi\n As the coordinate space $(x_{g},y_{g})$ is already translated to the point $(l_{x},l_{y})$, the transformation from the new $(x_{h}, y_{h})$ to the new $(x_{g},y_{g})$ is the same as in (\\ref{eq:xhyh2xgyg}). 
We repeat it below for convenience: \n \\begin{equation}\n \\left[\n \\begin{array}{c}\n x_{g} \\\\\n y_{g}\n \\end{array}\n \\right]\n = \\left[\n \\begin{array}{cc}\n d & -\\frac{d}{2}\\\\\n 0 & \\frac{\\sqrt{3}d}{2}\\\\ \n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n x_{h}\\\\\n y_{h}\n \\end{array}\n \\right].\n \\label{eq:xhyh2xgyglxly}\n \\end{equation}\n Despite these differences, we will keep using the notation $(x_{g},y_{g})$ and $(x_{h},y_{h})$ as we did before for a clean presentation.\n \n \n \n Figure \\ref{fig:hexsemicircle1} shows where the semicircle with centre at $C = (c_{x},c_{y}) = (x_{0} + vT - s, y_{0})$ ends up after the rotation by $-\\theta$ about $(l_{x},l_{y})$, that is,\n \\begin{equation}\n C_{-\\theta} = \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - l_{x}\\\\\n c_{y} - l_{y}\\\\\n \\end{array}\n \\right].\n \\label{eq:centertheta}\n \\end{equation}\n Hereafter we will use the subscript $-\\theta$ on every point presented on the usual Euclidean space to denote the corresponding point on the $(x_{g},y_{g})$ coordinate space. \n \n \n We first compute the upper and lower values, $U$ and $B$, of $x_{h}$ lying on the semicircle. To obtain the value of $U$ on the $x_{h}$-axis, we draw a line parallel to the $y_{h}$-axis at the point $U^{'}$ on the rightmost semicircle boundary down to the $x_{h}$-axis (Figure \\ref{fig:hexsemicircle1} (I)). The corresponding point on the $(x_{g},y_{g})$ space is denoted by $U^{'}_{-\\theta}$ (Figure \\ref{fig:hexsemicircle1} (II)).\n We compute $U^{'}_{-\\theta}$ and then take its $x_{h}$-value on the hexagonal grid coordinate system. 
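As a quick sanity check (a sketch, with an assumed sample spacing $d$), one can verify numerically that the matrix in (\ref{eq:xhyh2xgyglxly}) and the inverse matrix used later in the proof are indeed inverses, and that a hexagonal-grid point round-trips through both transformations:

```python
import math

# Hexagonal-grid transformation (x_h, y_h) -> (x_g, y_g) and its inverse,
# as used in the proof. The spacing d below is an assumed sample value.
d = 0.5
M = [[d, -d/2],
     [0, math.sqrt(3)*d/2]]           # (x_h, y_h) -> (x_g, y_g)
Minv = [[1/d, 1/(math.sqrt(3)*d)],
        [0,   2/(math.sqrt(3)*d)]]    # (x_g, y_g) -> (x_h, y_h)

def matmul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The product of the two matrices should be the identity
I = matmul(Minv, M)
assert all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

# Round trip on an arbitrary hexagonal-grid point
xh, yh = 3, -2
xg = M[0][0]*xh + M[0][1]*yh
yg = M[1][0]*xh + M[1][1]*yh
assert abs(Minv[0][0]*xg + Minv[0][1]*yg - xh) < 1e-12
assert abs(Minv[1][0]*xg + Minv[1][1]*yg - yh) < 1e-12
```

The same round trip holds for any choice of $d > 0$, since the inverse does not depend on the particular grid point.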
$\\bigtriangleup U^{'}_{-\\theta}C_{-\\theta}U_{2}$ in Figure \\ref{fig:hexsemicircle1} (II) has $\\vert U^{'}_{-\\theta}C_{-\\theta}\\vert =s$ and $\\widehat{U^{'}_{-\\theta}C_{-\\theta}U_{2}} = \\pi - \\widehat{C_{-\\theta}U^{'}_{-\\theta}U_{2}} - \\widehat{U^{'}_{-\\theta}U_{2}C_{-\\theta}} = \\pi - \\pi\/2 - \\pi\/3 = \\pi\/6.$ Hence, \n \\if0 1 \n $\n U^{'}_{-\\theta} \n = C_{-\\theta} + s(\\cos(\\pi\/6),\\sin(\\pi\/6))\n =\\big(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) + \\frac{\\sqrt{3}s}{2}, \n \\cos(\\theta)(c_{y} - l_{y}) -\\sin(\\theta)(c_{x} - l_{x}) + \\frac{s}{2}\\big).\n $\n \\else\n $$\n \\begin{aligned}\n U^{'}_{-\\theta} \n &= C_{-\\theta} + s(\\cos(\\pi\/6),\\sin(\\pi\/6))\\\\\n\\ifexpandexplanation\n &= C_{-\\theta} + \\left(\\frac{\\sqrt{3}s}{2}, \\frac{s}{2}\\right) \\\\\n &= \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - l_{x}\\\\\n c_{y} - l_{y}\\\\\n \\end{array}\n \\right]\n + \\left[\n \\begin{array}{c}\n \\frac{\\sqrt{3}s}{2} \\\\ \\frac{s}{2}\n \\end{array}\n \\right]\\\\\n &=\\bigg(\\cos(-\\theta)(c_{x} - l_{x}) - \\sin(-\\theta)(c_{y} - l_{y}) + \\frac{\\sqrt{3}s}{2}, \\\\\n &\\phantom{= \\bigg(} \\sin(-\\theta)(c_{x} - l_{x}) + \\cos(-\\theta)(c_{y} - l_{y}) + \\frac{s}{2}\\bigg)\\\\\n\\fi\n &=\\bigg(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) + \\frac{\\sqrt{3}s}{2}, \\\\\n &\\phantom{= \\bigg(} \\cos(\\theta)(c_{y} - l_{y}) -\\sin(\\theta)(c_{x} - l_{x}) + \\frac{s}{2}\\bigg).\n \\end{aligned}\n $$\n \n \\fi\n The inverse transformation from (\\ref{eq:xhyh2xgyglxly}) is\n \\begin{equation}\n \\left[\n \\begin{array}{c}\n x_{h}\\\\\n y_{h}\\\\\n \\end{array}\n \\right] = \n \\left[\n \\begin{array}{cc}\n \\frac{1}{d} & \\frac{1}{\\sqrt{3}d}\\\\\n 0 & \\frac{2}{\\sqrt{3}d}\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n x_{g}\\\\\n y_{g}\\\\\n \\end{array}\n 
\\right].\n \\label{eq:xgyg2xhyh}\n \\end{equation}\n \n Applying the transformation of (\\ref{eq:xgyg2xhyh}) to the point $U^{'}_{-\\theta}$ we get its $x_{h}$-axis coordinate\n\\ifexpandexplanation \n $$\n \\begin{aligned}\n U \n &= \\frac{1}{d}\\left(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) + \\frac{\\sqrt{3}s}{2} \\right) + \\\\\n &\\phantom{=}\\ \\frac{1}{\\sqrt{3}d} \\left(\\cos(\\theta)(c_{y} - l_{y}) -\\sin(\\theta)(c_{x} - l_{x}) + \\frac{s}{2} \\right)\\\\ \n \\end{aligned}\n $$\n\\else\n \\begin{equation}\n \\begin{aligned}\n U \n &= \\frac{1}{d}\\left(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) + \\frac{\\sqrt{3}s}{2} \\right) + \\\\\n &\\phantom{=}\\ \\frac{1}{\\sqrt{3}d} \\left(\\cos(\\theta)(c_{y} - l_{y}) -\\sin(\\theta)(c_{x} - l_{x}) + \\frac{s}{2} \\right)\\\\\n\\fi\n\\ifexpandexplanation \n $$\n \\begin{aligned}\n &= \\frac{1}{d}\\left(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) \\right) + \\frac{\\sqrt{3}s}{2d} +\\\\\n &\\phantom{=}\\ \\frac{1}{\\sqrt{3}d} \\left(\\cos(\\theta)(c_{y} - l_{y}) -\\sin(\\theta)(c_{x} - l_{x}) \\right) + \\frac{s}{2\\sqrt{3}d}\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\frac{1}{d}\\left(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) \\right) + \\frac{3s+s}{2\\sqrt{3}d}+\\\\\n &\\phantom{=}\\ \\frac{1}{\\sqrt{3}d} \\left(\\cos(\\theta)(c_{y} - l_{y}) -\\sin(\\theta)(c_{x} - l_{x}) \\right)\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\frac{\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) }{d} + \\frac{\\cos(\\theta)(c_{y} - l_{y}) -\\sin(\\theta)(c_{x} - l_{x}) }{\\sqrt{3}d} + \\frac{2s}{\\sqrt{3}d}\\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\frac{1}{\\sqrt{3}d}(\\sqrt{3}\\cos(\\theta)(c_{x} - l_{x}) + \\sqrt{3}\\sin(\\theta)(c_{y} - l_{y}) + \\cos(\\theta)(c_{y} - l_{y}) \\\\ \n &\\phantom{=}\\ -\\sin(\\theta)(c_{x} - l_{x}) ) + \\frac{2s}{\\sqrt{3}d}\\\\\n &= 
\\frac{2}{\\sqrt{3}d}\\Bigg(\\frac{\\sqrt{3}}{2}\\cos(\\theta)(c_{x} - l_{x}) + \\frac{\\sqrt{3}}{2}\\sin(\\theta)(c_{y} - l_{y}) \\\\\n &\\phantom{=}\\ + \\frac{1}{2}\\cos(\\theta)(c_{y} - l_{y}) -\\frac{1}{2}\\sin(\\theta)(c_{x} - l_{x}) \\Bigg) + \\frac{2s}{\\sqrt{3}d}\\\\\n &= \\frac{2}{\\sqrt{3}d}\\Bigg(\\Bigg(\\frac{\\sqrt{3}}{2}\\cos(\\theta)-\\frac{1}{2}\\sin(\\theta)\\Bigg)(c_{x} - l_{x}) + \\Bigg(\\frac{\\sqrt{3}}{2}\\sin(\\theta) \\\\ \n &\\phantom{=}\\ + \\frac{1}{2}\\cos(\\theta)\\Bigg)(c_{y} - l_{y}) \\Bigg) + \\frac{2s}{\\sqrt{3}d}\\\\\n \\end{aligned}\n $$\n \\begin{equation}\n \\begin{aligned}\n &= \\frac{2}{\\sqrt{3}d}((\\sin(\\pi\/3)\\cos(\\theta)-\\cos(\\pi\/3)\\sin(\\theta))(c_{x} - l_{x}) + (\\sin(\\pi\/3)\\sin(\\theta) \\\\ \n &\\phantom{=}\\ + \\cos(\\pi\/3)\\cos(\\theta))(c_{y} - l_{y}) ) + \\frac{2s}{\\sqrt{3}d}\\\\\n &= \\frac{2}{\\sqrt{3}d}(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(c_{y} - l_{y}) ) + \\frac{2s}{\\sqrt{3}d}\\\\\n\\fi\n &= \\frac{2(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(c_{y} - l_{y}) + s)}{\\sqrt{3}d}\\\\\n \\end{aligned}\n \\label{eq:hexsemicircleU1}\n \\end{equation}\n As we need the integer coordinate less than or equal to this value, we apply the floor function to yield the desired result in (\\ref{eq:eq58}).\n \n To obtain the value of $B$ on the $x_{h}$-axis, we draw a line parallel to the $y_{h}$-axis at the point $B^{'}$ on the lower semicircle corner down to the $x_{h}$-axis (Figure \\ref{fig:hexsemicircle1} (I)). We perform a calculation similar to that of the previous paragraph but using $B^{'}_{-\\theta}$ (Figure \\ref{fig:hexsemicircle1} (II)). We have $\\widehat{C_{-\\theta}OU}=\\pi\/2 - \\theta$ (as this is the same angle that $\\overleftrightarrow{C B'}$ makes with the $x_{h}$-axis in Figure \\ref{fig:hexsemicircle1} (I), which coincides with the $x_{g}$-axis in Figure \\ref{fig:hexsemicircle1} (II)). 
Then, as the vector $C_{-\\theta}B^{'}_{-\\theta}$ is pointed downwards, it has negative angle with the $x_{g}$-axis, that is, $-\\widehat{B_{-\\theta}OU} = -(\\pi - \\widehat{C_{-\\theta}OU}) = -(\\pi - (\\pi\/2 - \\theta)) = -\\pi\/2 - \\theta$ with $x_{g}$-axis. Also, $\\left\\vert \\overline{C_{-\\theta}B^{'}_{-\\theta}}\\right\\vert = s$ . Consequently, \n \\if0 1\n $\n \\overrightarrow{C_{-\\theta}B^{'}_{-\\theta}} = B^{'}_{-\\theta} - C_{-\\theta} = s(\\cos(-\\pi\/2 - \\theta),\\sin(-\\pi\/2 - \\theta)) \\Leftrightarrow \n $\n $\n B^{'}_{-\\theta} \n = C_{-\\theta} + s(\\cos(-\\pi\/2 - \\theta),\\sin(-\\pi\/2 - \\theta)) =(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y} -s), \\cos(\\theta)(c_{y} - l_{y} -s) - \\sin(\\theta)(c_{x} - l_{x}) ).\n $\n \\else\n $$\n \\overrightarrow{C_{-\\theta}B^{'}_{-\\theta}} = B^{'}_{-\\theta} - C_{-\\theta} = s(\\cos(-\\pi\/2 - \\theta),\\sin(-\\pi\/2 - \\theta)) \\Leftrightarrow \\\\\n $$\n $$\n \\begin{aligned}\n B^{'}_{-\\theta} \n &= C_{-\\theta} + s(\\cos(-\\pi\/2 - \\theta),\\sin(-\\pi\/2 - \\theta)) \\\\\n\\ifexpandexplanation\n &= C_{-\\theta} + s(\\sin(-\\theta),-\\cos(-\\theta)) \\\\\n &= C_{-\\theta} - s(\\sin(\\theta),\\cos(\\theta)) \\\\\n &= \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - l_{x}\\\\\n c_{y} - l_{y}\\\\\n \\end{array}\n \\right]\n - \\left[\n \\begin{array}{c}\n s\\sin(\\theta) \\\\ s\\cos(\\theta)\n \\end{array}\n \\right]\\\\\n &=(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y}) -s\\sin(\\theta), \\\\\n &\\phantom{= (} \\cos(\\theta)(c_{y} - l_{y}) - \\sin(\\theta)(c_{x} - l_{x}) - s\\cos(\\theta))\\\\\n\\fi\n &=(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y} -s), \\\\\n &\\phantom{= (} \\cos(\\theta)(c_{y} - l_{y} -s) - \\sin(\\theta)(c_{x} - l_{x}) ).\\\\\n \\end{aligned}\n $$ \n \\fi\n Using (\\ref{eq:xgyg2xhyh}) on 
$B^{'}_{-\\theta}$ we get,\n \\if0 1\n $\n B \n = \\frac{1}{d}\\left(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y} -s)\\right) + \\frac{1}{\\sqrt{3}d}\\left(\\cos(\\theta)(c_{y} - l_{y} - s) - \\sin(\\theta)(c_{x} - l_{x})\\right) = \\frac{2( \\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(c_{y} - l_{y} -s) )}{\\sqrt{3}d}. \n $\n \\else\n $$\n \\begin{aligned}\n B \n &= \\frac{1}{d}\\left(\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y} -s)\\right) + \\\\\n &\\phantom{=} \\frac{1}{\\sqrt{3}d}\\left(\\cos(\\theta)(c_{y} - l_{y} - s) - \\sin(\\theta)(c_{x} - l_{x})\\right)\\\\\n\\ifexpandexplanation \n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\frac{\\cos(\\theta)(c_{x} - l_{x}) + \\sin(\\theta)(c_{y} - l_{y} -s)}{d} + \\frac{ \\cos(\\theta)(c_{y} - l_{y} - s) - \\sin(\\theta)(c_{x} - l_{x})}{\\sqrt{3}d} \\\\\n &= \\frac{1}{\\sqrt{3}d}\\Big(\\sqrt{3}\\cos(\\theta)(c_{x} - l_{x}) + \\sqrt{3}\\sin(\\theta)(c_{y} - l_{y} -s) + \\cos(\\theta)(c_{y} - l_{y} - s) - \\\\\n &\\phantom{=}\\ \\sin(\\theta)(c_{x} - l_{x})\\Big) \\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\frac{2}{\\sqrt{3}d}\\Bigg(\\frac{\\sqrt{3}}{2}\\cos(\\theta)(c_{x} - l_{x}) + \\frac{\\sqrt{3}}{2}\\sin(\\theta)(c_{y} - l_{y} -s) + \\frac{1}{2}\\cos(\\theta)(c_{y} - l_{y} - s) - \\\\\n &\\phantom{=}\\ \\frac{1}{2}\\sin(\\theta)(c_{x} - l_{x})\\Bigg) \\\\\n &= \\frac{2}{\\sqrt{3}d}\\Bigg(\\frac{\\sqrt{3}}{2}\\cos(\\theta)(c_{x} - l_{x}) - \\frac{1}{2}\\sin(\\theta)(c_{x} - l_{x}) + \\frac{\\sqrt{3}}{2}\\sin(\\theta)(c_{y} - l_{y} -s) + \\\\\n &\\phantom{=}\\ \\frac{1}{2}\\cos(\\theta)(c_{y} - l_{y} - s) \\Bigg) \\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &= \\frac{2}{\\sqrt{3}d}( (\\sin(\\pi\/3)\\cos(\\theta) - \\cos(\\pi\/3)\\sin(\\theta))(c_{x} - l_{x}) + (\\sin(\\pi\/3)\\sin(\\theta) + \\\\\n &\\phantom{=}\\ \\cos(\\pi\/3)\\cos(\\theta))(c_{y} - l_{y} -s) ) \\\\\n\\fi\n &= \\frac{2( \\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + 
\\cos(\\pi\/3-\\theta)(c_{y} - l_{y} -s) )}{\\sqrt{3}d}. \\\\\n \\end{aligned}\n $$\n \\fi\n Then, we apply the ceiling function on this value to get an integer coordinate greater or equal to it in order to obtain (\\ref{eq:BcasesSC}) for $T > \\frac{s}{v}$.\n \n On the hexagonal grid coordinate system, for each $x_{h}$ from $B$ to $U$, we need to find the minimum and maximum $y_{h}$ -- namely $Y_{1}^{S}(x_{h})$ and $Y_{2}^{S}(x_{h})$, respectively -- of a line parallel to $y_{h}$-axis intercepting the $x_{h}$-axis and lying on the semicircle. Depending on the angle of $\\overleftrightarrow{C_{-\\theta}B^{'}_{-\\theta}}$ with the $x_{h}$-axis, the minimum and maximum $y_{h}$ can be either on the semicircle arc or $\\overleftrightarrow{C_{-\\theta}B^{'}_{-\\theta}}$. Due to $\\theta \\in \\lbrack 0,\\pi\/3 \\rparen$, the angle of $\\overleftrightarrow{C_{-\\theta}B^{'}_{-\\theta}}$ is in $(\\frac{\\pi}{6}, \\frac{\\pi}{2}]$. Thus, the minimum $y_{h}$ value is at the semicircle arc, otherwise the minimum angle of $\\overleftrightarrow{C_{-\\theta}B^{'}_{-\\theta}}$ would be $2\\pi\/3$, which is the $y_{h}$-axis angle with the $x_{h}$-axis. However, the maximum $y_{h}$ value could be either on $\\overleftrightarrow{C_{-\\theta}B^{'}_{-\\theta}}$ or on the circle, thus we take the lowest, since we want the $y_{h}$ value on the boundary of the semicircle. \n \n Let $C_{1}(x_{h})$ and $C_{2}(x_{h})$ be functions that respectively return the lowest and the highest $y_{h}$ value at the circle centred at $C_{-\\theta}$ and radius $s$ for a $x_{h}$ coordinate value of a parallel-to-$y_{h}$-axis line assuming it intersects the circle. 
Then, a point $(x_{g},y_{g})$ in the Euclidean space is on that circle if \n \\if0 1\n $(x_{g} - C_{-\\theta,x})^{2} + (y_{g} - C_{-\\theta,y})^{2} = s^{2} \\Leftrightarrow \n \\left(d x_{h} - \\frac{d y_{h}}{2} - C_{-\\theta,x}\\right)^{2} + \\left(\\frac{\\sqrt{3}d y_{h}}{2} - C_{-\\theta,y}\\right)^{2} = s^{2},$\n \\else\n $$(x_{g} - C_{-\\theta,x})^{2} + (y_{g} - C_{-\\theta,y})^{2} = s^{2} \\Leftrightarrow$$ \n $$\\left(d x_{h} - \\frac{d y_{h}}{2} - C_{-\\theta,x}\\right)^{2} + \\left(\\frac{\\sqrt{3}d y_{h}}{2} - C_{-\\theta,y}\\right)^{2} = s^{2},$$ \n \\fi\n by (\\ref{eq:xhyh2xgyglxly}). \n\n Isolating $y_{h}$ and solving the resulting second-degree polynomial, we get\n \\begin{equation}\n y_{h_{1}} = C_{1}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} {C_{-\\theta,y}} - \\sqrt{\\Delta(x_{h})}}{2 d} \\text{ and }\n \\label{eq:C1hex}\n \\end{equation}\n \\begin{equation}\n y_{h_{2}} = C_{2}(x_{h}) = \n \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} {C_{-\\theta,y}} + \\sqrt{\\Delta(x_{h})}}{2 d},\n \\label{eq:C2hex}\n \\end{equation}\n for\n $0 \\le \\Delta(x_{h}) = 4 s^{2} - \\left(\\sqrt{3} {\\left(d {x_{h}} -{C_{-\\theta,x}} \\right)} - C_{-\\theta,y}\\right)^{2}$. $\\Delta(x_{h})$ cannot be negative, otherwise the lines would not intersect this circle, contradicting our assumption.\n \n We denote by $L(x_{h})$ the function that returns the $y_{h}$ component of the line $\\overleftrightarrow{C_{-\\theta}B_{-\\theta}}$ for a given $x_{h}$. 
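The roots $C_{1}(x_{h})$ and $C_{2}(x_{h})$ in (\ref{eq:C1hex}) and (\ref{eq:C2hex}) can be sanity-checked numerically: mapping a root back through the circle equation above must give a zero residual. A minimal Python sketch, purely illustrative and not part of the proof (function names and the numeric values for $d$, $s$ and the rotated centre are ours, chosen arbitrarily):

```python
import math

def circle_residual(xh, yh, d, s, Cx, Cy):
    # convert hexagonal-grid coordinates back to Euclidean ones and
    # evaluate the circle equation; zero means the point is on the circle
    xg = d * xh - d * yh / 2
    yg = math.sqrt(3) * d * yh / 2
    return (xg - Cx) ** 2 + (yg - Cy) ** 2 - s ** 2

def C1_C2(xh, d, s, Cx, Cy):
    # lowest and highest y_h on the circle for a given x_h, assuming
    # the parallel-to-y_h-axis line intersects the circle
    delta = 4 * s ** 2 - (math.sqrt(3) * (d * xh - Cx) - Cy) ** 2
    assert delta >= 0, "line does not intersect the circle"
    base = d * xh - Cx + math.sqrt(3) * Cy
    return (base - math.sqrt(delta)) / (2 * d), (base + math.sqrt(delta)) / (2 * d)

# arbitrary test values for d, s and the rotated centre (Cx, Cy)
d, s, Cx, Cy = 1.0, 3.0, 0.4, -0.2
for xh in (-1, 0, 1, 2):
    y1, y2 = C1_C2(xh, d, s, Cx, Cy)
    assert abs(circle_residual(xh, y1, d, s, Cx, Cy)) < 1e-9
    assert abs(circle_residual(xh, y2, d, s, Cx, Cy)) < 1e-9
    assert y1 <= y2
```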
The $\\overleftrightarrow{C_{-\\theta}B_{-\\theta}}$ equation for a point in the space $(x_{g},y_{g})$ \n is \n \\if0 1\n $\\tan\\left(\\frac{\\pi}{2} - \\theta\\right) = \\frac{y_{g} - C_{-\\theta,y}}{x_{g} - C_{-\\theta,x}} \\Rightarrow L(x_{h}) = y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y} }{d \\sin\\left(\\frac{5\\pi}{6}-\\theta\\right)}.$\n \\else\n $$\\tan\\left(\\frac{\\pi}{2} - \\theta\\right) = \\frac{y_{g} - C_{-\\theta,y}}{x_{g} - C_{-\\theta,x}} \\Leftrightarrow \n \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)}{\\cos\\left(\\frac{\\pi}{2} - \\theta\\right)} = \\frac{y_{g} - C_{-\\theta,y}}{x_{g} - C_{-\\theta,x}} \\Leftrightarrow$$\n $$\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\left(d x_{h} - \\frac{d y_{h}}{2} - C_{-\\theta,x}\\right) = \\cos\\left(\\frac{\\pi}{2} - \\theta\\right) \\left(\\frac{\\sqrt{3}d y_{h}}{2} - C_{-\\theta,y}\\right) \\Leftrightarrow$$ \n\\ifexpandexplanation\n $$\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\left(d x_{h} - C_{-\\theta,x}\\right) - \\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\frac{d y_{h}}{2} = \\cos\\left(\\frac{\\pi}{2} - \\theta\\right) \\frac{\\sqrt{3}d y_{h}}{2} - \\cos\\left(\\frac{\\pi}{2} - \\theta\\right) C_{-\\theta,y}\\Leftrightarrow$$\n $$\\frac{\\sqrt{3}d y_{h}}{2}\\cos\\left(\\frac{\\pi}{2} - \\theta\\right) + \\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\frac{d y_{h}}{2} = \\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y} \\Leftrightarrow$$ \n $$d \\left(\\frac{\\sqrt{3}}{2}\\cos\\left(\\frac{\\pi}{2} - \\theta\\right) + \\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\frac{1}{2}\\right) y_{h} = \\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y} \\Leftrightarrow$$ \n\\fi\n $$y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2} - 
\\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y} }{d \\left(\\frac{\\sqrt{3}}{2}\\cos\\left(\\frac{\\pi}{2} - \\theta\\right) + \\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\frac{1}{2}\\right)}\\Leftrightarrow$$ \n\\ifexpandexplanation\n $$y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y} }{d \\left(\\sin\\left(\\frac{\\pi}{3}\\right)\\cos\\left(\\frac{\\pi}{2} - \\theta\\right) + \\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\cos\\left(\\frac{\\pi}{3}\\right)\\right)}\\Leftrightarrow$$\n $$y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y} }{d \\sin\\left(\\frac{\\pi}{3}+\\frac{\\pi}{2} - \\theta\\right)}\\Leftrightarrow$$ \n\\fi\n $$L(x_{h}) = y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)(d x_{h} - C_{-\\theta,x}) + \\cos\\left(\\frac{\\pi}{2} - \\theta\\right)C_{-\\theta,y} }{d \\sin\\left(\\frac{5\\pi}{6}-\\theta\\right)}.$$\n \n\\begin{comment}\n If $\\theta = 0$, $\\overleftrightarrow{C_{-\\theta}B_{-\\theta}}$ is parallel to $y_{g}$-axis, then it is represented by the points $(C_{-\\theta,x}, y_{g})$ for any $y_{g}$. 
From these points, by using (\\ref{eq:xgyg2xhyh}), we have the system of linear equations\n $$ x_{h} = \\frac{1}{d} C_{-\\theta,x} + \\frac{1}{\\sqrt{3}d} y_{g} \\text{ and } y_{h} = \\frac{2}{\\sqrt{3}d}y_{g} \\Leftrightarrow \n $$\n $$ y_{g} = \\frac{x_{h} - \\frac{1}{d} C_{-\\theta,x}}{\\frac{1}{\\sqrt{3}d}} \\text{ and } y_{h} = \\frac{2}{\\sqrt{3}d}y_{g} \\Leftrightarrow $$\n $$ y_{g} = \\sqrt{3}d x_{h} - \\sqrt{3} C_{-\\theta,x} \\text{ and } y_{h} = \\frac{2}{\\sqrt{3}d}y_{g} \\Leftrightarrow $$\n $$ y_{h} = \\frac{2}{\\sqrt{3}d}\\left(\\sqrt{3}d x_{h} - \\sqrt{3} C_{-\\theta,x}\\right) \\Leftrightarrow $$\n $$ \n L(x_{h}) = y_{h} = 2x_{h} - \\frac{2C_{-\\theta,x}}{d}.$$\n\\end{comment}\n \\fi\n\n We have that $Y_{1}^{S}(x_{h}) = C_{1}(x_{h})$ and $Y_{2}^{S}(x_{h})$ can be either $\\min(L(x_{h}),C_{2}(x_{h}))$ or $\\min (L(x_{h}),C_{2}(x_{h})) - 1$. As $T > \\frac{s}{v}$, we can have a number of robots inside the rectangle $N_{R}(T,\\theta) \\ge 1$. If, for some $x_{h}$, $Y'(x_{h}) = \\min(L(x_{h}),C_{2}(x_{h})) = \\lfloor L(x_{h})\\rfloor$, then the robot on $\\left(x_{h}, Y'(x_{h})\\right)$ is on the line $\\overleftrightarrow{C_{-\\theta}B_{-\\theta}^{'}}$. As this line belongs to the rectangle, the robot was already counted by $N_{R}(T,\\theta)$. Hence, \n \\if0 1\n $\n Y_{2}^{S}(x_{h}) = \n \\min(L(x_{h}),C_{2}(x_{h})) - 1,$ \n if $\\min(L(x_{h}),C_{2}(x_{h})) = \\lfloor L(x_{h})\\rfloor$ \n and $T > \\frac{s}{v},$ otherwise,\n $Y_{2}^{S}(x_{h}) =\\min(L(x_{h}),C_{2}(x_{h})).$\n \\else\n $$\n Y_{2}^{S}(x_{h}) = \\left\\{\n \\begin{array}{>{\\displaystyle}c>{\\displaystyle}l}\n \\min(L(x_{h}),C_{2}(x_{h})) - 1, \n & \\text{ if } \\min(L(x_{h}),C_{2}(x_{h})) = \\lfloor L(x_{h})\\rfloor \\\\ \n & \\phantom{if} \\text{ and } T > \\frac{s}{v},\\\\\n \\min(L(x_{h}),C_{2}(x_{h})), \n & \\text{ otherwise. 
} \n \\end{array}\n \\right.\n $$\n \\fi\n \n The number of robots inside the semicircle is the number of integer coordinates $(x_{h},y_{h})$ for $x_{h}$ ranging from $B$ to $U$ and $y_{h} \\in \\left[\\left\\lceil Y_{1}^{S}(x_{h}) \\right\\rceil, \\left\\lfloor Y_{2}^{S}(x_{h}) \\right\\rfloor\\right]$ for each $x_{h}$. Thus,\n \\if0 1\n $N_{S}(T,\\theta) = \\sum_{x_{h} = B}^{U} \\left( \\lfloor Y_{2}^{S}(x_{h}) \\rfloor - \\lceil Y_{1}^{S}(x_{h}) \\rceil + 1 \\right).$\n \\else\n $$N_{S}(T,\\theta) = \\sum_{x_{h} = B}^{U} \\sum_{y_{h} = \\lceil Y_{1}^{S}(x_{h}) \\rceil}^{\\lfloor Y_{2}^{S}(x_{h}) \\rfloor} 1 = \\sum_{x_{h} = B}^{U} \\left( \\lfloor Y_{2}^{S}(x_{h}) \\rfloor - \\lceil Y_{1}^{S}(x_{h}) \\rceil + 1 \\right).$$\n \\fi\n Heed that the last summation can only be used when $\\left\\lfloor Y_{2}^{S}(x_{h}) \\right\\rfloor \\ge \\left \\lceil Y_{1}^{S}(x_{h}) \\right \\rceil $, otherwise a negative number of robots would be summed. \n \n \n \n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.84\\columnwidth]{figs\/hex_semicircle2.pdf} \n \\caption{Similar to the coordinate spaces of Figure \\ref{fig:hexsemicircle1}, but for $T \\le \\frac{s}{v}$. The rotation and hexagonal grid system centres are now $(x_{0},y_{0})$. Notice also in (I) that $\\bigtriangleup CAO$ is right with hypotenuse $\\overline{CA}$ measuring $s$, and the horizontal cathetus $\\overline{CO}$ measures $s - vT$.}\n \\label{fig:hexsemicircle2}\n \\end{figure}\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\columnwidth]{figs\/hex_semicircleUA.pdf} \n \\caption{An example of when the angle $\\widehat{U^{'}_{-\\theta}OU}$ is greater than $\\widehat{A_{-\\theta}OU}$. We only consider robots inside the semicircle below the line $\\protect \\overleftrightarrow{OA_{-\\theta}}$, otherwise the robot on $O$ would not be the first robot by assumption. 
In this case, any line parallel to $y_{h}$-axis crossing the semicircle below $\\protect\\overline{OA_{-\\theta}}$ must have its $x_{h}$-axis coordinate less than or equal to $U$, for example $Q$ projected from $P$.}\n \\label{fig:hexsemicircleUA}\n \\end{figure}\n\n Now, assume $T \\le \\frac{s}{v}$. Then, the semicircle has centre at $C = (c_{x},c_{y}) = (x_{0} - (s - vT) ,y_{0})$ (Figure \\ref{fig:hexsemicircle2}). Now, as we do not have the rectangle part, we consider the \\emph{last} robot of the rectangular part being the first robot to arrive at the target region, so $(l_{x},l_{y}) = (x_{0},y_{0})$, and, by (\\ref{eq:centertheta}),\n \\begin{equation}\n C_{-\\theta} = \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - x_{0}\\\\\n c_{y} - y_{0}\\\\\n \\end{array}\n \\right].\n \\label{eq:centertheta2}\n \\end{equation}\n \n In the usual Euclidean coordinate space before the rotation about $(x_{0},y_{0})$, we consider the line $\\overleftrightarrow{OA}$ perpendicular to the $x$-axis at $O = (x_{0},y_{0})$. This line represents the perpendicular axis such that we wish to count all the robots from it to the arc of the semicircle on its right. From Figure \\ref{fig:hexsemicircle2} (I), \n \\if0 1\n $r = \\vert \\overline{AO}\\vert = \\sqrt{\\vert \\overline{CA}\\vert ^{2} - \\vert \\overline{CO}\\vert ^{2}} = \\sqrt{s^{2} - (s - vT)^{2}} = \\sqrt{2svT - (vT)^{2}}.$\n \\else\n $$r = \\vert \\overline{AO}\\vert = \\sqrt{\\vert \\overline{CA}\\vert ^{2} - \\vert \\overline{CO}\\vert ^{2}} = \\sqrt{s^{2} - (s - vT)^{2}} = \\sqrt{2svT - (vT)^{2}}.$$\n \\fi\n \n After the rotation by $-\\theta$ about the point $O$, the maximum value for $x_{h}$ is defined by the point $U$. The point $U$ is chosen depending on the angles $\\widehat{U^{'}_{-\\theta}OU}$ and $\\widehat{A_{-\\theta}OU}$. 
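The chord half-length $r$ derived above from the right triangle $\bigtriangleup CAO$ can be checked numerically against the Pythagorean relation it comes from. A small Python sketch, illustrative only (the function name and the values for $s$, $v$ and $T$ are ours):

```python
import math

def chord_half_length(s, v, T):
    # r = |AO| from the right triangle CAO: hypotenuse |CA| = s and
    # horizontal cathetus |CO| = s - v*T (valid while 0 <= v*T <= s)
    assert 0 <= v * T <= s
    return math.sqrt(2 * s * v * T - (v * T) ** 2)

# arbitrary radius s and speed v; T sweeps the allowed range [0, s/v]
s, v = 2.0, 0.5
for T in (0.0, 1.0, 2.5, 4.0):
    r = chord_half_length(s, v, T)
    # Pythagorean relation r^2 + (s - vT)^2 = s^2 must hold
    assert abs(r ** 2 + (s - v * T) ** 2 - s ** 2) < 1e-12
```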
When the angle $\\widehat{U^{'}_{-\\theta}OU}$ is greater than $\\widehat{A_{-\\theta}OU}$, the value of $U$ is calculated in relation to $A_{-\\theta}$, because the line parallel to $y_{h}$-axis intercepting $U^{'}_{-\\theta}$ is not inside the semicircle below $\\overleftrightarrow{OA_{-\\theta}}$ (Figure \\ref{fig:hexsemicircleUA}). For comparison, Figure \\ref{fig:hexsemicircle2} (II) shows an example where we choose $U$ as the $x_{h}$-axis intersection with the line parallel to $y_{h}$-axis at $U^{'}_{-\\theta}$. As we saw before for the case $T > \\frac{s}{v}$ (Figure \\ref{fig:hexsemicircle1} (II)), the angle of $\\overline{C_{-\\theta}U^{'}_{-\\theta}}$ in relation to the $x_{h}$-axis is $\\pi\/6$, consequently, \n \\if0 1\n $\n U^{'}_{-\\theta} \n = C_{-\\theta} + s\\left(\\cos\\left(\\frac{\\pi}{6}\\right),\\sin\\left(\\frac{\\pi}{6}\\right)\\right)\n = \\left( \\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s) , \\frac{s}{2} - \\sin(\\theta) (vT - s) \\right),\n $\n \\else\n $$ \n \\begin{aligned}\n U^{'}_{-\\theta} \n &= C_{-\\theta} + s\\left(\\cos\\left(\\frac{\\pi}{6}\\right),\\sin\\left(\\frac{\\pi}{6}\\right)\\right)\\\\\n\\ifexpandexplanation\n &=\n \\left[\n \\begin{array}{cc}\n \\cos(-\\theta) & -\\sin(-\\theta)\\\\\n \\sin(-\\theta) & \\cos(-\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n c_{x} - x_{0}\\\\\n c_{y} - y_{0}\\\\\n \\end{array}\n \\right]\n + \\left[\n \\begin{array}{c}\n \\frac{\\sqrt{3}s}{2} \\\\ \\frac{s}{2}\n \\end{array}\n \\right] \\\\\n &=\n \\left[\n \\begin{array}{cc}\n \\cos(\\theta) & \\sin(\\theta)\\\\\n -\\sin(\\theta) & \\cos(\\theta)\\\\\n \\end{array}\n \\right]\n \\left[\n \\begin{array}{c}\n vT-s\\\\\n 0\\\\\n \\end{array}\n \\right]\n + \\left[\n \\begin{array}{c}\n \\frac{\\sqrt{3}s}{2} \\\\ \\frac{s}{2}\n \\end{array}\n \\right]\\\\ \n\\fi\n &= \\left( \\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s) , \\frac{s}{2} - \\sin(\\theta) (vT - s) \\right),\n \\end{aligned}\n $$\n \\fi\n from 
(\\ref{eq:centertheta2}), and $\\widehat{U^{'}_{-\\theta}OU}$ measures\n \\if0 1\n $\n \\arctan\\left( \\frac{U^{'}_{-\\theta,y}}{U^{'}_{-\\theta,x}} \\right)\n = \\arctan\\left( \\frac{\\frac{s}{2} - \\sin(\\theta) (vT - s) }{\\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s)}\\right).\n $\n \\else\n $$\n \\begin{aligned}\n \\arctan\\left( \\frac{U^{'}_{-\\theta,y}}{U^{'}_{-\\theta,x}} \\right)\n &= \\arctan\\left( \\frac{\\frac{s}{2} - \\sin(\\theta) (vT - s) }{\\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s)}\\right).\n \\end{aligned}\n $$\n \\fi\n \n $\\widehat{A_{-\\theta}OU}$ measures $\\frac{\\pi}{2} - \\theta$, as shown in Figure \\ref{fig:hexsemicircle2} (II). Thence, \n \\if0 1\n $A_{-\\theta} = \\big(r\\cos\\big(\\frac{\\pi}{2} - \\theta\\big), r\\sin\\big(\\frac{\\pi}{2} - \\theta\\big)\\big).$\n \\else\n $$A_{-\\theta} = \\left(r\\cos\\left(\\frac{\\pi}{2} - \\theta\\right), r\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\right).$$ \n \\fi\n If $\\arctan\\left( \\frac{U^{'}_{-\\theta,y}}{U^{'}_{-\\theta,x}} \\right) \\le \\widehat{A_{-\\theta}OU} = \\frac{\\pi}{2} - \\theta$, \n we apply (\\ref{eq:xgyg2xhyh}) on $U^{'}_{-\\theta}$ to get its $x_{h}$-axis coordinate\n \\if0 1\n $\n U \n = \\frac{1}{d}\\big(\\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s) \\big) + \\frac{1}{\\sqrt{3}d}\\big(\\frac{s}{2} - \\sin(\\theta) (vT - s) \\big) = \\frac{2\\sin(\\pi\/3 - \\theta)(vT - s)}{\\sqrt{3}d} + \\frac{2s}{\\sqrt{3}d},\n $\n \\else\n $$\n \\begin{aligned}\n U \n &= \\frac{1}{d}\\left(\\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s) \\right) + \\frac{1}{\\sqrt{3}d}\\left(\\frac{s}{2} - \\sin(\\theta) (vT - s) \\right)\\\\\n\\ifexpandexplanation\n &= \\cos(\\theta) \\frac{vT - s}{d} + \\frac{\\sqrt{3}s}{2d} - \\sin(\\theta) \\frac{vT - s}{\\sqrt{3}d} + \\frac{s}{2\\sqrt{3}d}\\\\\n &= \\cos(\\theta) \\frac{vT - s}{d} - \\sin(\\theta) \\frac{vT - s}{\\sqrt{3}d} + \\frac{\\sqrt{3}\\sqrt{3}s}{2\\sqrt{3}d} + \\frac{s}{2\\sqrt{3}d}\\\\\n &= \\cos(\\theta) \\frac{vT - 
s}{d} - \\sin(\\theta) \\frac{vT - s}{\\sqrt{3}d} + \\frac{3s}{2\\sqrt{3}d} + \\frac{s}{2\\sqrt{3}d}\\\\\n &= \\cos(\\theta) \\frac{vT - s}{d} - \\sin(\\theta) \\frac{vT - s}{\\sqrt{3}d} + \\frac{4s}{2\\sqrt{3}d}\\\\\n\\fi\n &= \\cos(\\theta) \\frac{vT - s}{d} - \\sin(\\theta) \\frac{vT - s}{\\sqrt{3}d} + \\frac{2s}{\\sqrt{3}d}\\\\\n &= \\sqrt{3}\\cos(\\theta) \\frac{vT - s}{\\sqrt{3}d} - \\sin(\\theta) \\frac{vT - s}{\\sqrt{3}d} + \\frac{2s}{\\sqrt{3}d}\\\\\n\\ifexpandexplanation\n &= 2\\left(\\frac{\\sqrt{3}}{2}\\cos(\\theta) - \\frac{1}{2}\\sin(\\theta) \\right)\\frac{vT - s}{\\sqrt{3}d} + \\frac{2s}{\\sqrt{3}d}\\\\\n\\fi\n &= \\frac{2\\sin(\\pi\/3 - \\theta)(vT - s)}{\\sqrt{3}d} + \\frac{2s}{\\sqrt{3}d},\\\\\n \\end{aligned}\n $$\n \\fi\n followed by applying floor function to it, as we need the integer coordinate less or equal to this value. This is the same as (\\ref{eq:hexsemicircleU1}) by using $(l_{x},l_{y}) = (x_{0},y_{0})$, then we also have (\\ref{eq:eq58}) when $\\arctan\\left( \\frac{\\frac{s}{2} - \\sin(\\theta) (vT - s) }{\\frac{\\sqrt{3}s}{2} + \\cos(\\theta) (vT - s)} \\right) \\le \\frac{\\pi}{2} - \\theta$.\n \n If $\\arctan\\left( \\frac{U^{'}_{-\\theta,y}}{U^{'}_{-\\theta,x} } \\right) > \\frac{\\pi}{2} - \\theta$, then there are no robots to consider on the parallel lines to $y_{h}$-axis between $U^{'}_{-\\theta}$ and $A_{-\\theta}$, otherwise the robot at $(x_{0},y_{0})$ would not be the first to arrive at the target region. 
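The trigonometric simplification of $U$ above, from the term-by-term form to the closed form with $\sin(\pi/3 - \theta)$, can be verified numerically before the floor is taken. A small Python sketch, illustrative only (function names and the values for $s$, $d$, $v$ and $T$ are ours):

```python
import math

def U_term_by_term(s, d, v, T, theta):
    # x_h coordinate of U'_{-theta} before flooring, as first written
    return (math.sqrt(3) * s / 2 + math.cos(theta) * (v * T - s)) / d \
        + (s / 2 - math.sin(theta) * (v * T - s)) / (math.sqrt(3) * d)

def U_simplified(s, d, v, T, theta):
    # the closed form reached after the trigonometric simplification
    return (2 * math.sin(math.pi / 3 - theta) * (v * T - s)
            + 2 * s) / (math.sqrt(3) * d)

# arbitrary values; theta stays in [0, pi/3) as in the proof
s, d, v, T = 2.0, 1.0, 0.5, 3.0
for k in range(6):
    theta = k * math.pi / 18  # 0, 10, ..., 50 degrees
    assert abs(U_term_by_term(s, d, v, T, theta)
               - U_simplified(s, d, v, T, theta)) < 1e-12
```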
Thus, if $\\arctan\\left( \\frac{U^{'}_{-\\theta,y} }{U^{'}_{-\\theta,x}} \\right) > \\frac{\\pi}{2} - \\theta$, we use the $x_{h}$-coordinate for the point $A_{-\\theta}$ on the hexagonal grid space, that is,\n \\if0 1\n $\n U \n = \\frac{1}{d}\\big( r\\cos\\left(\\frac{\\pi}{2} - \\theta\\right)\\big) + \\frac{1}{\\sqrt{3}d}\\big(r\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\big)\n = \\frac{2r}{\\sqrt{3}d}\\cos\\left(\\theta-\\frac{\\pi}{3}\\right).\n $\n \\else\n $$\n \\begin{aligned}\n U \n &= \\frac{1}{d}\\left( r\\cos\\left(\\frac{\\pi}{2} - \\theta\\right)\\right) + \\frac{1}{\\sqrt{3}d}\\left(r\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\right)\\\\\n &= \\frac{2r}{\\sqrt{3}d}\\left(\\frac{\\sqrt{3}}{2} \\cos\\left(\\frac{\\pi}{2} - \\theta\\right) + \\frac{1}{2}\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)\\right)\\\\\n\\ifexpandexplanation\n &= \\frac{2r}{\\sqrt{3}d}\\left(\\frac{\\sqrt{3}}{2} \\sin\\left(\\theta\\right) + \\frac{1}{2}\\cos\\left(\\theta\\right)\\right)\\\\\n\\fi\n &= \\frac{2r}{\\sqrt{3}d}\\cos\\left(\\theta-\\frac{\\pi}{3}\\right).\\\\\n \\end{aligned}\n $$\n \\fi\n Then, we apply the floor function to yield the desired result in (\\ref{eq:eq59}).\n \n \n Now we will find the minimum value for an integer $x_{h}$ such that a parallel-to-$y_{h}$-axis line is inside the semicircle and starts from the right of $\\overleftrightarrow{OA}$ or on it.\n For the calculation of $B$, from Figure \\ref{fig:hexsemicircle2} (II), similarly to how we previously did,\n \\if0 1\n $\n B^{'}_{-\\theta} \n = O + r(\\cos(-(\\pi\/2 + \\theta)),\\sin(-(\\pi\/2 + \\theta))) \n = ( - r\\sin(\\theta), - r\\cos(\\theta)),\n $\n \\else\n $$\n \\begin{aligned}\n B^{'}_{-\\theta} \n &\n = O + r(\\cos(-(\\pi\/2 + \\theta)),\\sin(-(\\pi\/2 + \\theta))) \n \\\\ &= (r\\cos(\\pi\/2+\\theta), - r\\sin(\\pi\/2 + \\theta)) \\\\ &\n = ( - r\\sin(\\theta), - r\\cos(\\theta)),\n \\end{aligned}\n $$ \n \\fi\n and, by (\\ref{eq:xgyg2xhyh}), as $B$ is the $x_{h}$-coordinate of 
the $B_{-\\theta}$,\n \\if0 1\n $\n B \n = \\frac{1}{d}\\left( - r\\sin(\\theta) \\right) + \\frac{1}{\\sqrt{3}d}\\left(-r\\cos(\\theta)\\right)\n = -\\frac{2r}{\\sqrt{3}d}\\sin\\left(\\theta + \\frac{\\pi}{6}\\right).\n $\n \\else\n $$\n \\begin{aligned}\n B \n &= \\frac{1}{d}\\left( - r\\sin(\\theta) \\right) + \\frac{1}{\\sqrt{3}d}\\left(-r\\cos(\\theta)\\right)\n = -\\frac{2r}{\\sqrt{3}d}\\left( \\frac{1}{2}\\cos(\\theta) + \\frac{\\sqrt{3}}{2}\\sin(\\theta) \\right)\\\\\n\\ifexpandexplanation\n &= -\\frac{2r}{\\sqrt{3}d}\\left( \\sin(\\pi\/6)\\cos(\\theta) + \\cos(\\pi\/6)\\sin(\\theta) \\right)\\\\\n\\fi\n &= -\\frac{2r}{\\sqrt{3}d}\\sin\\left(\\theta + \\frac{\\pi}{6}\\right).\\\\\n \\end{aligned}\n $$\n \\fi\n Also, we apply the ceiling function to yield the desired result in (\\ref{eq:BcasesSC}).\n \n In this case, $C_{1}(x_{h})$ and $C_{2}(x_{h})$ are equal to (\\ref{eq:C1hex}) and (\\ref{eq:C2hex}), but $L(x_{h})$ is different from the previous case. The line $\\overleftrightarrow{OA_{-\\theta}}$ for a point $(x_{g},y_{g})$ in the Euclidean space is\n \\if0 1\n $ y_{g} = \\tan\\left(\\frac{\\pi}{2} - \\theta\\right)x_{g} \\Rightarrow $ \n $ L(x_{h}) = y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sin\\left( \\frac{5\\pi}{6}-\\theta\\right)}.$ \n \\else\n $$ y_{g} = \\tan\\left(\\frac{\\pi}{2} - \\theta\\right)x_{g} \\Leftrightarrow \n y_{g} = \\frac{\\sin\\left(\\frac{\\pi}{2} - \\theta\\right)} {\\cos\\left(\\frac{\\pi}{2} - \\theta\\right)} x_{g} \\Leftrightarrow$$\n $$\\frac{\\sqrt{3} d y_{h}}{2} = \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right)}{\\cos\\left(\\frac{\\pi}{2}-\\theta\\right)} \\left(d x_{h} - \\frac{d y_{h}}{2} \\right) \\Leftrightarrow $$\n\\ifexpandexplanation\n $$\\frac{\\sqrt{3} d y_{h}}{2}\\cos\\left(\\frac{\\pi}{2}-\\theta\\right) + \\sin\\left(\\frac{\\pi}{2}-\\theta\\right)\\frac{d y_{h}}{2} = \\sin\\left(\\frac{\\pi}{2}-\\theta\\right) d x_{h} \\Leftrightarrow$$\n\\fi\n $$ y_{h} = 
\\frac{2\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sqrt{3}\\cos\\left(\\frac{\\pi}{2}-\\theta\\right) + \\sin\\left(\\frac{\\pi}{2}-\\theta\\right) } \\Leftrightarrow$$\n\\ifexpandexplanation\n $$y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sin(\\pi\/3)\\cos\\left(\\frac{\\pi}{2}-\\theta\\right) + \\cos(\\pi\/3)\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) } \\Leftrightarrow$$\n $$y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sin(\\pi\/3 + \\pi\/2-\\theta)} \\Leftrightarrow$$\n\\fi\n $$ L(x_{h}) = y_{h} = \\frac{\\sin\\left(\\frac{\\pi}{2}-\\theta\\right) x_{h}}{\\sin\\left( \\frac{5\\pi}{6}-\\theta\\right)}.$$ \n \\fi \n \\fi %\n \\end{proof}\n \n \n We have $\\displaystyle \\lim_{T \\to \\infty} f_{h}(T,\\theta) = \\lim_{T \\to \\infty} \\frac{N_{R}(T,\\theta)}{T} + \\lim_{T \\to \\infty} \\frac{N_{S}(T,\\theta)-1}{T}$, by Definition \\ref{def:throughput2}. As shown below, this limit needs only the rectangle part, because $N_{S}$ is limited by a semicircle with finite radius.\n \n \\begin{lemma}\n \\if0 1\n $\\lim_{T\\to \\infty} \\frac{N_{S}(T,\\theta)-1}{T} = 0.$\n \\else\n $$\\lim_{T\\to \\infty} \\frac{N_{S}(T,\\theta)-1}{T} = 0.$$\n \\fi\n \\label{lemma:limitinftyNS}\n \\end{lemma}\n \\begin{proof} \n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n As $T \\to \\infty$, we have $T > \\frac{s}{v}$. By Lemma \\ref{lemma:NS}, $c_{x} = x_{0} + vT - s$, which is the $x$-axis coordinate of the right side of the rectangle. We have that the robots are distant by $d$, so the last robot must be at most distant by $d$ from the point $(c_{x},y_{0})$. Hence, $x_{0} + vT - s - d \\le l_{x} \\le x_{0} + vT - s$, and $y_{0} - d \\le l_{y} \\le y_{0} + d$, so $0 = c_{x} - (x_{0} + vT - s) \\le c_{x} - l_{x} \\le c_{x} - (x_{0} + vT - s - d) = d$ and $-d \\le y_{0} - l_{y} \\le d$. Then, $-d \\le C_{-\\theta,x}, C_{-\\theta,y} \\le d$. 
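The componentwise bounds on $C_{-\theta}$ hold because the rotation by $-\theta$ preserves the Euclidean norm, and the last robot is within distance $d$ of $(c_{x}, y_{0})$ by the argument above. A quick randomized Python check of this step, illustrative only (function name and sampling scheme are ours):

```python
import math, random

def rotate_minus_theta(theta, x, y):
    # multiplication by the rotation matrix R(-theta), as used for C_{-theta}
    return (math.cos(theta) * x + math.sin(theta) * y,
            -math.sin(theta) * x + math.cos(theta) * y)

random.seed(0)
d = 1.0
for _ in range(1000):
    # sample (c_x - l_x, c_y - l_y) within Euclidean distance d of the origin
    ang, rad = random.uniform(0, 2 * math.pi), random.uniform(0, d)
    theta = random.uniform(0, math.pi / 3)
    Cx, Cy = rotate_minus_theta(theta, rad * math.cos(ang), rad * math.sin(ang))
    # the rotation preserves the norm, so each component lies in [-d, d]
    assert math.hypot(Cx, Cy) <= d + 1e-12
    assert -d <= Cx <= d and -d <= Cy <= d
```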
\n\\ifexpandexplanation\n Also, we have that $\\theta \\in \\lbrack 0,\\pi\/3 \\rparen$, so $-1\/2 \\le \\cos(\\pi\/3 - \\theta) \\le 1$.\n\\fi\n Thus,\n \\if0 1\n $\n B \n = \\big\\lceil\\frac{2( \\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y} -s) )}{\\sqrt{3}d}\\big\\rceil\n \\ge \\big\\lceil\\frac{2(\\cos(\\pi\/3-\\theta)(-d -s) )}{\\sqrt{3}d}\\big\\rceil \n \\ge \\big\\lceil\\frac{-2(1 +\\frac{s}{d}) }{\\sqrt{3}}\\big\\rceil \n = \\big\\lceil-\\frac{2}{\\sqrt{3}} -\\frac{s}{\\sqrt{3}d}\\big\\rceil \n \\ge -\\frac{2}{\\sqrt{3}} -\\frac{s}{\\sqrt{3}d}, \n $ and\n $\n U \n = \\big \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y}) + s)}{\\sqrt{3}d} \\big \\rfloor\n \\le \\big \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)d + \\cos(\\pi\/3-\\theta)d + s)}{\\sqrt{3}d} \\big \\rfloor\n \\le \\big \\lfloor \\frac{2(2d + s)}{\\sqrt{3}d} \\big \\rfloor\n \\le \\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d}, \n $\n \\else\n $$\n \\begin{aligned}\n B \n &= \\left\\lceil\\frac{2( \\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y} -s) )}{\\sqrt{3}d}\\right\\rceil\\\\\n &\\ge \\left\\lceil\\frac{2(\\cos(\\pi\/3-\\theta)(-d -s) )}{\\sqrt{3}d}\\right\\rceil \n = \\left\\lceil\\frac{-2\\cos(\\pi\/3-\\theta)(1 +\\frac{s}{d}) }{\\sqrt{3}}\\right\\rceil \n \\ge \\left\\lceil\\frac{-2(1 +\\frac{s}{d}) }{\\sqrt{3}}\\right\\rceil \\\\\n &= \\left\\lceil-\\frac{2}{\\sqrt{3}} -\\frac{s}{\\sqrt{3}d}\\right\\rceil \n \\ge -\\frac{2}{\\sqrt{3}} -\\frac{s}{\\sqrt{3}d}, \n \\end{aligned}\n $$\n\\ifexpandexplanation \n $$\n \\begin{aligned}\n U \n &= \\left \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y}) + s)}{\\sqrt{3}d} \\right \\rfloor\\\\\n &\\le \\left \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)d + \\cos(\\pi\/3-\\theta)d + s)}{\\sqrt{3}d} \\right \\rfloor\n \\le \\left \\lfloor \\frac{2(2d + s)}{\\sqrt{3}d} \\right \\rfloor\n = \\left \\lfloor 
\\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d} \\right \\rfloor\\\\ & \n \\le \\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d}, \n \\end{aligned}\n $$\n\\else\n $$\n \\begin{aligned}\n U \n &= \\left \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)(c_{x} - l_{x}) + \\cos(\\pi\/3-\\theta)(y_{0} - l_{y}) + s)}{\\sqrt{3}d} \\right \\rfloor\\\\\n &\\le \\left \\lfloor \\frac{2(\\sin(\\pi\/3-\\theta)d + \\cos(\\pi\/3-\\theta)d + s)}{\\sqrt{3}d} \\right \\rfloor\n \\le \\left \\lfloor \\frac{2(2d + s)}{\\sqrt{3}d} \\right \\rfloor\n \\le \\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d}, \n \\end{aligned}\n $$\n\\fi\n \\fi\n and for any integer $x_{h} \\in [B,U]$, as $\\Delta(x_{h})$ cannot be negative,\n \\if0 1\n $\n 0 \\le \\Delta(x_{h}) \n = 4 s^{2} - \\big(\\sqrt{3} {\\big(d {x_{h}} -{C_{-\\theta,x}} \\big)} - C_{-\\theta,y}\\big)^{2} \n \\le 4 s^{2},\n $\n $\n \\lceil Y_{1}^{S}(x_{h}) \\rceil \n \\ge Y_{1}^{S}(x_{h}) \n = \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} - \\sqrt{\\Delta(x_{h})}}{2 d} \n \\ge \\frac{d {x_{h}}- d - \\sqrt{3} d - 2s}{2 d} \n = \\frac{x_{h} - 1 - \\sqrt{3}}{2} - \\frac{s}{d} ,\n $\n and\n $\n \\lfloor Y_{2}^{S}(x_{h}) \\rfloor\n \\le Y_{2}^{S}(x_{h}) \n \\le \\min(L(x_{h}), C_{2}(x_{h})) \\le C_{2}(x_{h}) \n = \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} + \\sqrt{\\Delta(x_{h})}}{2 d}\n \\le \\frac{d {x_{h}} + d + \\sqrt{3} d + 2s}{2 d} \n = \\frac{x_{h}+1+\\sqrt{3}}{2} + \\frac{s}{d}.\n $\n \\else\n $$\n \\begin{aligned}\n 0 \\le \\Delta(x_{h}) \n = 4 s^{2} - \\left(\\sqrt{3} {\\left(d {x_{h}} -{C_{-\\theta,x}} \\right)} - C_{-\\theta,y}\\right)^{2} \n \\le 4 s^{2},\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n \\lceil Y_{1}^{S}(x_{h}) \\rceil \n &\\ge Y_{1}^{S}(x_{h}) \n = \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} - \\sqrt{\\Delta(x_{h})}}{2 d} \\\\\n &\\ge \\frac{d {x_{h}}- d - \\sqrt{3} d - 2s}{2 d} \n = \\frac{x_{h} - 1 - \\sqrt{3}}{2} - \\frac{s}{d} ,\n \\end{aligned}\n $$\n and\n $$\n 
\\begin{aligned}\n \\lfloor Y_{2}^{S}(x_{h}) \\rfloor\n &\\le Y_{2}^{S}(x_{h}) \n \\le \\min(L(x_{h}), C_{2}(x_{h})) \\le C_{2}(x_{h}) \n \\\\&= \\frac{d {x_{h}}- {C_{-\\theta,x}} + \\sqrt{3} C_{-\\theta,y} + \\sqrt{\\Delta(x_{h})}}{2 d}\n \\le \\frac{d {x_{h}} + d + \\sqrt{3} d + 2s}{2 d} \n \\\\ & = \\frac{x_{h}+1+\\sqrt{3}}{2} + \\frac{s}{d}.\n \\end{aligned}\n $$\n \\fi\n Thus,\n \\if0 1\n $\n 0 \\le N_{S}(T,\\theta) = \n \\sum_{x_{h}=B}^{U} \\big(\\lfloor Y_{2}^{S}(x_{h}) \\rfloor - \\lceil Y_{1}^{S}(x_{h}) \\rceil + 1 \\big)\n \\le \\sum_{x_{h}=B}^{U}\\big( \\frac{x_{h}+1+\\sqrt{3}}{2} + \\frac{s}{d} - \\big(\\frac{x_{h} - 1 - \\sqrt{3}}{2} - \\frac{s}{d}\\big) + 1 \\big) \n = \\sum_{x_{h}=B}^{U} \\big(\\frac{2s}{d} + \\sqrt{3} + 2\\big) \n \\le \n \\big( \\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d} -\\big( -\\frac{2}{\\sqrt{3}} -\\frac{s}{\\sqrt{3}d} \\big) + 1\\big) \\big(\\frac{2s}{d} + \\sqrt{3} + 2\\big) \n = \\big( 2\\sqrt{3} + \\frac{\\sqrt{3}s}{d} + 1\\big) \\big(\\frac{2s}{d} + \\sqrt{3} + 2\\big) \n $\n $\n \\Rightarrow 0 = \\lim_{T\\to\\infty} \\frac{-1}{T} \n \\le \\lim_{T\\to \\infty} \\frac{N_{S}(T,\\theta)-1}{T} \n \\le \\lim_{T \\to \\infty} \\frac{1}{T}\\big(\\big( 2\\sqrt{3} + \\frac{\\sqrt{3}s}{d} + 1\\big) \\big(\\frac{2s}{d} + \\sqrt{3} + 2\\big) -1\\big) = 0.\n $\n \\else\n\\ifexpandexplanation \n $$\n \\begin{aligned}\n \\phantom{\\Rightarrow } 0 &\\le N_{S}(T,\\theta) = \n \\sum_{x_{h}=B}^{U} \\left(\\lfloor Y_{2}^{S}(x_{h}) \\rfloor - \\lceil Y_{1}^{S}(x_{h}) \\rceil + 1 \\right)\n \\\\&\\le \\sum_{x_{h}=B}^{U}\\left( \\frac{x_{h}+1+\\sqrt{3}}{2} + \\frac{s}{d} - \\left(\\frac{x_{h} - 1 - \\sqrt{3}}{2} - \\frac{s}{d}\\right) + 1 \\right) \n \\\\&= \\sum_{x_{h}=B}^{U} \\left(\\frac{x_{h}+1+\\sqrt{3}}{2} + \\frac{s}{d} - \\frac{x_{h} - 1 - \\sqrt{3}}{2} + \\frac{s}{d} + 1 \\right)\n \\\\&= \\sum_{x_{h}=B}^{U} \\left(\\frac{\\sqrt{3}}{2} + \\frac{1}{2} + \\frac{2s}{d} + \\frac{1}{2} + \\frac{\\sqrt{3}}{2} + 1 \\right) \n \\\\&= 
\\sum_{x_{h}=B}^{U} \\left(\\frac{2s}{d} + \\frac{\\sqrt{3} + 1 + 1 + \\sqrt{3}}{2} + 1 \\right) \\\\\n &= \\sum_{x_{h}=B}^{U} \\left(\\frac{2s}{d} + \\frac{2\\sqrt{3} + 4}{2} \\right) \n \\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned} \n &\n = \\sum_{x_{h}=B}^{U} \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right) \n = (U - B + 1) \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right)\n \\\\&\\le \n \\left( \\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d} -\\left( -\\frac{2}{\\sqrt{3}} -\\frac{s}{\\sqrt{3}d} \\right) + 1\\right) \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right)\\\\ \n &= \\left( \\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d} +\\frac{2}{\\sqrt{3}} +\\frac{s}{\\sqrt{3}d} + 1\\right) \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right)\\\\\n &= \\left( 2\\sqrt{3} + \\frac{\\sqrt{3}s}{d} + 1\\right) \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right) \n \\end{aligned}\n $$\n\\else \n $$\n \\begin{aligned}\n \\phantom{\\Rightarrow } 0 &\\le N_{S}(T,\\theta) = \n \\sum_{x_{h}=B}^{U} \\left(\\lfloor Y_{2}^{S}(x_{h}) \\rfloor - \\lceil Y_{1}^{S}(x_{h}) \\rceil + 1 \\right)\n \\\\&\\le \\sum_{x_{h}=B}^{U}\\left( \\frac{x_{h}+1+\\sqrt{3}}{2} + \\frac{s}{d} - \\left(\\frac{x_{h} - 1 - \\sqrt{3}}{2} - \\frac{s}{d}\\right) + 1 \\right) \n \\\\&\n = \\sum_{x_{h}=B}^{U} \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right) \n \\\\&\\le \n \\left( \\frac{4}{\\sqrt{3}} + \\frac{2s}{\\sqrt{3}d} -\\left( -\\frac{2}{\\sqrt{3}} -\\frac{s}{\\sqrt{3}d} \\right) + 1\\right) \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right)\\\\ \n &= \\left( 2\\sqrt{3} + \\frac{\\sqrt{3}s}{d} + 1\\right) \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right) \n \\end{aligned}\n $$\n\\fi\n $$\n \\begin{aligned}\n \\Rightarrow 0 = \\lim_{T\\to\\infty} \\frac{-1}{T} \n &\\le \\lim_{T\\to \\infty} \\frac{N_{S}(T,\\theta)-1}{T} \n \\\\\n &\\le \\lim_{T \\to \\infty} \\frac{1}{T}\\left(\\left( 2\\sqrt{3} + \\frac{\\sqrt{3}s}{d} + 1\\right) \\left(\\frac{2s}{d} + \\sqrt{3} + 2\\right) -1\\right) = 0.\n \\end{aligned}\n $$\n \\fi\n Hence, the result follows 
from the sandwich theorem.\n \\fi %\n \\end{proof}\n \n As we obtained that $\\displaystyle\\lim_{T\\to \\infty} \\frac{N_{S}(T,\\theta)-1}{T} = 0$, hereafter we only calculate the limit for the number of robots inside the rectangle. By Lemmas \\ref{lemma:NR} to \\ref{lemma:MiddleInterval}, if $n_{l}^{+}-1 < K'$ we have\n \\if0 1\n $\n \\lim_{T\\to \\infty} f_{h}(T,\\psi) \n = \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\big(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\big)\n + \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{n_{l}^{+}-1} \\big( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\big),\n $\n \\else\n $$\n \\begin{aligned}\n \\lim_{T\\to \\infty} f_{h}(T,\\psi) \n &= \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\left(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right)\\\\\n &+ \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{n_{l}^{+}-1} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right),\n \\end{aligned}\n $$\n \\fi\n otherwise,\n \\if0 1\n $\n \\lim_{T\\to \\infty} f_{h}(T,\\psi) \n = \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\big(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\big)\n + \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{K' -1} \\big( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\big)\n + \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\big( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\big).\n $\n \\else\n $$\n \\begin{aligned}\n \\lim_{T\\to \\infty} f_{h}(T,\\psi) \n &= \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\left(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right)\\\\\n &+ \\lim_{T\\to \\infty} \\frac{1}{T} 
\\sum_{x_{h}=n_{l}^{-} + 1}^{K' -1} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right)\\\\\n &+ \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right).\\\\\n \\end{aligned}\n $$ \n \\fi\n To clarify, the third summation is zero in the case of $n_{l}^{+}-1 < K'$, while the second summation goes until $\\min(n_{l}^{+}-1,K'-1)$ in both cases. Each one will be individually solved assuming $\\psi \\neq \\pi\/6$. Later, we will see that the final result holds for $\\psi = \\pi\/6$ as well. The following lemmas will be useful soon.\n \n \\begin{lemma} \n Assume $\\psi \\neq \\pi\/6$.\n \\if0 1\n $\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right) = 0.\n $\n \\else\n $$\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right) = 0.\n $$ \n \\fi\n \\label{lemma:lim1st}\n \\end{lemma} \n \\begin{proof} \n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n As for any $x$, $x - 1 < \\lfloor x \\rfloor \\le x \\le \\lceil x \\rceil < x + 1$,\n \\if0 1\n $\n \\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\big( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) -1 \\big) \n < \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\big(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\big) \n \\le \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\big( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) +1 \\big).\n $\n \\else\n $$\n \\begin{aligned}\n &\\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) -1 \\right) \\\\\n &< \\lim_{T\\to \\infty} \\frac{1}{T} 
\\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\left(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right) \\\\\n &\\le \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=-n_{l}^{-}}^{n_{l}^{-}} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) +1 \\right).\n \\end{aligned}\n $$\n \\fi\n By Lemma \\ref{lemma:Interval1}, the first and last summations do not depend on $T$, so both sides have limit equal to 0. By the sandwich theorem, we have the result. \n \\fi %\n \\end{proof}\n \n \\begin{lemma}\n Assume $\\psi \\neq \\pi\/6$. For $K' = \\left\\lceil\\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rceil$,\n \\if0 1\n $\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right)= 0.\n $\n \\else\n $$\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right)= 0.\n $$\n \\fi\n \\label{lemma:lim2nd}\n \\end{lemma} \n \\begin{proof}\n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n If $K' > n_{l}^{+}-1$, this limit is already zero, so we focus this proof on the other case. 
We have, analogously to the previous lemma, \n \\begin{equation}\n \\begin{aligned}\n &\\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) -1 \\right) \\\\\n &<\\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right) \\\\\n &\\le\\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1}\\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) +1 \\right).\n \\end{aligned}\n \\label{eq:limit2ineq}\n \\end{equation}\n \n For any constant $c$, we have \n \\begin{equation}\n \\lim_{T \\to \\infty}\\frac{1}{T}\\sum_{x_{h} = K'}^{n_{l}^{+}-1}c = 0,\n \\label{eq:limzerolemma101}\n \\end{equation}\n because the number of $x_{h}$ indices in the summation, although it depends on $T$, can take only finitely many integer values. In other words, the number of indices in the above summation is $n_{l}^{+} - K'$ such that $\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} - 1 < n_{l}^{+} - K' \\le \\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} + 1$. The last inequality is obtained by counting how many $x_{h}$ are used in the summation and knowing that $2y -1 <\\lfloor x + y \\rfloor - \\lceil x - y \\rceil + 1 \\le 2y + 1$ for any $x,y \\in \\mathds{R}$. Thus, for any $T$, $n_{l}^{+} - K'$ can only range from $\\left\\lceil\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rceil -1$ to $\\left\\lfloor\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rfloor + 1$. This yields at most three possible integer values: a finite set of outcomes, none of which involves $T$. Hence, for all outcomes, the limit on the left side of (\\ref{eq:limzerolemma101}) is zero. \n \n \n Assume $\\psi > \\pi\/6$ (for $\\psi < \\pi\/6$ the result is the same). 
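To make the counting argument above concrete (the values here are chosen purely for illustration), take $s = d$ and $\\psi = \\pi\/2$, so that $\\vert \\psi - \\pi\/6 \\vert = \\pi\/3$ and\n $$\n \\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} = \\frac{4\\sin(\\pi\/3)}{\\sqrt{3}} = \\frac{4}{\\sqrt{3}} \\cdot \\frac{\\sqrt{3}}{2} = 2 \\in \\mathds{Z};\n $$\n hence, for every $T$, $n_{l}^{+} - K'$ lies in the finite set $\\{1, 2, 3\\}$.\n 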
From Lemma \\ref{lemma:endcase}, \n \\begin{equation}\n \\begin{aligned}\n &Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) \n = \n \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}-2 x_{h}}{\\sqrt{3} \\tan(\\psi) - 1}-\\frac{\\frac{2y_1}{d} + 2\\tan(\\psi) x_h}{\\sqrt{3} + \\tan(\\psi)} \\\\\n &= \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}}{\\sqrt{3} \\tan(\\psi) - 1}-\\frac{\\frac{2y_1}{d}}{\\sqrt{3} + \\tan(\\psi)} - \\Bigg(\\frac{2}{\\sqrt{3} \\tan(\\psi) - 1} \\\\\n &+ \\frac{2\\tan(\\psi) }{\\sqrt{3} + \\tan(\\psi)}\\Bigg) x_{h}.\n \\end{aligned}\n \\label{eq:t1y2y1xh}\n \\end{equation}\n \n For the second term above, by (\\ref{eq:limzerolemma101}), \n $ \\displaystyle\n \\lim_{T\\to\\infty}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{\\frac{2y_1}{d}}{\\sqrt{3} + \\tan(\\psi)} = 0.\n $\n \n For the first term,\n \\begin{equation}\n \\begin{aligned}\n &\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{1}{T}\\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}}{\\sqrt{3} \\tan(\\psi) - 1} \n = \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{2 \\left(v -\\frac{s}{T}\\right)}{d \\cos(\\psi)(\\sqrt{3} \\tan(\\psi) - 1)} \\\\\n &= \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{2 \\left(v -\\frac{s}{T}\\right)}{d (\\sqrt{3} \\sin(\\psi) - \\cos(\\psi))} \n =\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{v -\\frac{s}{T}}{d\\sin(\\psi-\\pi\/6)}\\\\\n &=\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{v}{d\\sin(\\psi-\\pi\/6)} -\\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)},\n \\end{aligned}\n \\label{eq:t102l7}\n \\end{equation}\n due to $\\frac{\\sqrt{3}}{2} \\sin(\\psi) - \\frac{1}{2}\\cos(\\psi) = \\sin(\\psi - \\pi\/6)$. Let $L$ be the number of terms on the summation of (\\ref{eq:t102l7}). 
As discussed above, $L$ is an integer in $\\Big\\{\\left\\lceil\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rceil -1, \\dots, \\left\\lfloor\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rfloor + 1 \\Big\\}$, so\n \\begin{equation}\n \\begin{aligned}\n &\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{1}{T}\\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}}{\\sqrt{3} \\tan(\\psi) - 1} \n =\\frac{Lv}{d\\sin(\\psi-\\pi\/6)} - \\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)}.\n \\end{aligned}\n \\label{eq:t102l72}\n \\end{equation}\n \n \n Also, we have\n \\if0 1 \n $\n \\frac{2}{\\sqrt{3} \\tan(\\psi) - 1} + \\frac{2\\tan(\\psi) }{\\sqrt{3} + \\tan(\\psi)} \n = \\frac{\\sqrt{3}}{2\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} \n $\n \\else\n\\ifexpandexplanation\n $$\n \\begin{aligned}\n &\\frac{2}{\\sqrt{3} \\tan(\\psi) - 1} + \\frac{2\\tan(\\psi) }{\\sqrt{3} + \\tan(\\psi)} \n = \\frac{2(\\sqrt{3} + \\tan(\\psi)) + 2\\tan(\\psi)(\\sqrt{3} \\tan(\\psi) - 1) }{(\\sqrt{3} \\tan(\\psi) - 1)(\\sqrt{3} + \\tan(\\psi))} \\\\\n &= \\frac{2\\sqrt{3} + 2\\tan(\\psi) + 2\\sqrt{3}\\tan^{2}(\\psi) - 2 \\tan(\\psi) }{(\\sqrt{3} \\tan(\\psi) - 1)(\\sqrt{3} + \\tan(\\psi))} \\\\\n &= \\frac{2\\sqrt{3} + 2\\sqrt{3}\\tan^{2}(\\psi) }{(\\sqrt{3} \\tan(\\psi) - 1)(\\sqrt{3} + \\tan(\\psi))} \\\\\n &= \\frac{2\\sqrt{3}( 1 + \\tan^{2}(\\psi)) }{(\\sqrt{3} \\tan(\\psi) - 1)(\\sqrt{3} + \\tan(\\psi))} \n \\\\&= \\frac{2\\sqrt{3}\\sec^{2}(\\psi) }{(\\sqrt{3} \\tan(\\psi) - 1)(\\sqrt{3} + \\tan(\\psi))} \\\\&\n = \\frac{2\\sqrt{3} }{(\\sqrt{3} \\sin(\\psi) - \\cos(\\psi))(\\sqrt{3}\\cos(\\psi) + \\sin(\\psi))} \\\\\n &= \\frac{\\sqrt{3}}{2\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} \\\\\n \\end{aligned}\n $$ \n\\else \n $$\n \\begin{aligned}\n &\\frac{2}{\\sqrt{3} \\tan(\\psi) - 1} + \\frac{2\\tan(\\psi) }{\\sqrt{3} + \\tan(\\psi)} \n = \\frac{2(\\sqrt{3} + \\tan(\\psi)) + 2\\tan(\\psi)(\\sqrt{3} \\tan(\\psi) - 1) }{(\\sqrt{3} 
\\tan(\\psi) - 1)(\\sqrt{3} + \\tan(\\psi))} \\\\\n &= \\frac{2\\sqrt{3}( 1 + \\tan^{2}(\\psi)) }{(\\sqrt{3} \\tan(\\psi) - 1)(\\sqrt{3} + \\tan(\\psi))} \n = \\frac{2\\sqrt{3} }{(\\sqrt{3} \\sin(\\psi) - \\cos(\\psi))(\\sqrt{3}\\cos(\\psi) + \\sin(\\psi))} \\\\\n &= \\frac{\\sqrt{3}}{2\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} \\\\\n \\end{aligned}\n $$ \n\\fi\n \\fi\n as $1 + \\tan^{2}(\\psi) = \\sec^{2}(\\psi)$ and $\\frac{\\sqrt{3}}{2}\\cos(\\psi) + \\frac{1}{2}\\sin(\\psi) = \\cos(\\psi - \\pi\/6)$. Hence, for the last term in (\\ref{eq:t1y2y1xh}),\n \\begin{equation}\n \\begin{aligned}\n &\\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{\\sqrt{3}}{2\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)}x_{h} \n \\\\\n &=\\frac{1}{T}\\frac{\\sqrt{3}}{2\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} \\frac{(n_{l}^{+}-1 + K') (n_{l}^{+}-K')}{2} \\\\\n &=\\frac{\\sqrt{3}LG}{4T\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)}, \\\\\n \\end{aligned} \n \\label{eq:t101l7}\n \\end{equation}\n for an integer $G = n_{l}^{+} - 1 + K'$. As $2x - 1 <\\lfloor x + y \\rfloor + \\lceil x - y \\rceil < 2x+1$ for any $x,y \\in \\mathds{R}$, $G \\in \\left(\\frac{4 (vT-s) \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}-1,\\frac{4 (vT-s) \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}+1\\right)$. \n\n \n For the lower bound on $G$, using (\\ref{eq:t102l72}) and (\\ref{eq:t101l7}),\n \\if0 1\n $\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\big( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) \\big)\n =\\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \n \\bigg( \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}}{\\sqrt{3} \\tan(\\psi) - 1} - \\big(\\frac{2}{\\sqrt{3} \\tan(\\psi) - 1} + \\frac{2\\tan(\\psi) }{\\sqrt{3} + \\tan(\\psi)}\\big) x_{h} \\bigg)\n =\\lim_{T\\to \\infty} \\bigg(\\frac{Lv}{d\\sin(\\psi-\\pi\/6)} - \\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)} \\big. \n \\phantom{=} \\big. 
-\\frac{\\sqrt{3}L\\big(\\frac{4 (vT-s) \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}-1\\big)}{4T\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} \\bigg) \n =\\lim_{T\\to \\infty} \\bigg( \\frac{\\sqrt{3}L}{4T\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} - \\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)} \\bigg)\n = 0,\n $\n \\else\n $$\n \\begin{aligned}\n &\\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) \\right)\\\\\n &=\\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \n \\left( \\frac{\\frac{2 (v T-s)}{d \\cos(\\psi)}}{\\sqrt{3} \\tan(\\psi) - 1} - \\left(\\frac{2}{\\sqrt{3} \\tan(\\psi) - 1} + \\frac{2\\tan(\\psi) }{\\sqrt{3} + \\tan(\\psi)}\\right) x_{h} \\right)\\\\\n &=\\lim_{T\\to \\infty} \\left(\\frac{Lv}{d\\sin(\\psi-\\pi\/6)} - \\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)} \\right. \\\\\n &\\phantom{=} \\left. -\\frac{\\sqrt{3}L\\left(\\frac{4 (vT-s) \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}-1\\right)}{4T\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} \\right) \\\\\n \\end{aligned}\n $$\n\\ifexpandexplanation\n $$\n \\begin{aligned}\n &=\\lim_{T\\to \\infty} \\left( \\frac{Lv}{d\\sin(\\psi-\\pi\/6)} -\\frac{L\\frac{4 (vT-s) \\cos(\\psi - \\pi\/6)}{d}-\\sqrt{3}L}{4T\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} - \\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)} \\right)\\\\ \n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &=\\lim_{T\\to \\infty} \\left( \\frac{Lv}{d\\sin(\\psi-\\pi\/6)} -\\frac{Lv }{d\\sin(\\psi - \\pi\/6)} + \\frac{\\sqrt{3}L}{4T\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} - \\right.\\\\\n &\\phantom{=} \\left. 
\\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)} \\right)\\\\\n \\end{aligned}\n $$\n\\fi\n $$\n \\begin{aligned}\n &=\\lim_{T\\to \\infty} \\left( \\frac{\\sqrt{3}L}{4T\\sin(\\psi - \\pi\/6) \\cos(\\psi - \\pi\/6)} - \\frac{1}{T}\\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\frac{s}{d\\sin(\\psi-\\pi\/6)} \\right)\n = 0,\\\\\n \\end{aligned}\n $$\n \\fi\n due to (\\ref{eq:limzerolemma101}) on the second term and, as \n \\if0 1 \n $L\\in \\Big\\{\\Big\\lceil\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\Big\\rceil -1, \\dots, \\Big\\lfloor\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\Big\\rfloor + 1 \\Big\\},$\n \\else\n $$L\\in \\Big\\{\\Big\\lceil\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\Big\\rceil -1, \\dots, \\Big\\lfloor\\frac{4s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\Big\\rfloor + 1 \\Big\\},$$ \n \\fi\n no element of this finite set involves $T$.\n \n For the upper bound on $G$, we have the same limit. Hence, by the sandwich theorem applied to the results for both bounds of $G$, we get \n \\begin{equation}\n \\begin{aligned}\n &\\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=K'}^{n_{l}^{+}-1} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) \\right)\n =0.\n \\end{aligned}\n \\label{eq:limity2y1zero1}\n \\end{equation}\n Using (\\ref{eq:limity2y1zero1}) and (\\ref{eq:limzerolemma101}) on the bounds of (\\ref{eq:limit2ineq}) and applying the sandwich theorem again yields the desired value.\n \\fi %\n \\end{proof}\n \n \n \\begin{lemma} \n Assume $\\psi \\neq \\pi\/6$.\n \\if0 1\n $\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\big(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1\\big)\n $\n \\else\n $$\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\left(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1\\right)\n $$\n \\fi\n exists and is bounded by\n 
\\if0 1\n $\n \\big(\\frac{4vs}{\\sqrt{3}d^{2}} - \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}, \\frac{4vs}{\\sqrt{3}d^{2}} + \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}\\big].\n $\n \\else\n $$\n \\begin{aligned} \n \\left(\\frac{4vs}{\\sqrt{3}d^{2}} - \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}, \\frac{4vs}{\\sqrt{3}d^{2}} + \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}\\right].\n \\end{aligned}\n $$ \n \\fi\n \\label{lemma:lim3rd}\n \\end{lemma}\n \\begin{proof}\n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else\n The next lemmas will be useful for proving this lemma.\n\n\\begin{lemma}\n For any $a,b >0, a\\lfloor x \\rfloor - b\\lfloor y \\rfloor < ax - by + a + b$.\n \\label{lemma:flooraxby}\n\\end{lemma}\n\\begin{proof} \nAs mentioned before, by the definition of floor function $\\lfloor x \\rfloor = x - frac(x)$, where $frac$ is the function that returns the fractional part of the number $x$, such that $0 \\le frac(x) < 1$ \\citep{graham1994concrete},\n\\if0 1\n $\n a\\lfloor x \\rfloor - b\\lfloor y \\rfloor \n = a x - a frac(x) - by + b frac(y) \n < ax - by + b -a frac(x) $ \n $\n < ax - by + b + a $ \n because $frac(y)<1$ and $-a frac(x) \\le 0 < a$.\n\\else\n $$\n \\begin{aligned}\n a\\lfloor x \\rfloor - b\\lfloor y \\rfloor \n &= a x - a frac(x) - by + b frac(y) \n &\n \\\\\n &< ax - by + b -a frac(x) \n &[\\text{because } frac(y)<1]\n \\\\\n & < ax - by + b + a & [\\text{as } -a frac(x) \\le 0 < a].\n \\end{aligned}\n $$ \n\\fi\n\\end{proof}\n\n\\begin{lemma}\n Let $c,d,A_{1},B_{1},A_{2},B_{2} \\in \\mathds{R}$, $c > 0$ and $I_{1}\\in \\mathds{Z}$. 
Then, \n \\if0 1\n $\n \\lim_{n\\to\\infty}{\\sum_{i=I_{1}+1}^{\\lfloor cn + d \\rfloor} \\frac{frac(-(A_{1}i+B_{1})) + frac(A_{2}i+B_{2})}{n}}\n $ exists.\n \\else\n the limit below exists:\n $$\n \\lim_{n\\to\\infty}{\\sum_{i=I_{1}+1}^{\\lfloor cn + d \\rfloor} \\frac{frac(-(A_{1}i+B_{1})) + frac(A_{2}i+B_{2})}{n}}.\n $$\n \\fi\n \\label{lemma:limsum1Ri}\n\\end{lemma}\n\\begin{proof}\n For convergence, we show that for $R(i) = frac(-(A_{1}i+B_{1})) + frac(A_{2}i+B_{2})$, $(a_{n})_{n \\in \\mathds{N}^{*}} = \\left(\n \\sum_{i=I_{1}+1}^{\\lfloor cn + d \\rfloor } \\frac{R(i)}{n}\n \\right)_{n\\in \\mathds{N}^{*}}$ is a Cauchy sequence. Take $\\epsilon > 0$ and choose\n $N > \\frac{4\\vert I_{1}-d+1\\vert }{\\epsilon}$.\n Let $n,m \\in \\mathds{N}^{*}$ and $n > m > N.$ \n We have\n \\if0 1\n $\n \\vert a_{n} - a_{m}\\vert \n = \\big\\vert \\sum_{i=I_{1}+1}^{\\lfloor cn + d \\rfloor} \\frac{R(i)}{n} - \\sum_{i=I_{1}+1}^{\\lfloor cm + d \\rfloor} \\frac{R(i)}{m}\\big\\vert \n = \\big\\vert \\frac{1}{nm} \\big( m\\sum_{i=I_{1}+1}^{\\lfloor cn + d \\rfloor} R(i) - n\\sum_{i=I_{1}+1}^{\\lfloor cm + d \\rfloor} R(i) \\big)\\big\\vert \n = \\big\\vert \\frac{1}{nm} \\big( m\\sum_{i=\\lfloor cm + d \\rfloor + 1}^{\\lfloor cn + d \\rfloor} R(i) + (m- n)\\sum_{i=I_{1}+1}^{\\lfloor cm + d \\rfloor } R(i)\\big) \\big\\vert \n < \\frac{2}{\\vert nm\\vert }\\vert m(\\lfloor cn + d \\rfloor - (\\lfloor cm + d \\rfloor + 1) + 1) + (m- n)(\\lfloor cm + d \\rfloor -(I_{1}+1)+1 ) \\vert \n = \\frac{2}{\\vert nm\\vert }\\vert m\\lfloor cn + d \\rfloor - n\\lfloor cm + d \\rfloor - (m-n) I_{1} \\vert \n < \\frac{2}{\\vert nm\\vert }\\vert m( cn + d ) - n( cm + d ) + m + n - (m-n) I_{1} \\vert \n = \\frac{2}{\\vert nm\\vert }\\vert (n - m) (I_{1}-d) + m + n \\vert \n < 2\\big\\vert \\frac{ (n + m) (I_{1}-d) + m + n }{nm} \\big\\vert \n = 2\\big\\vert \\frac{ (m + n) (I_{1}-d + 1) }{nm} \\big\\vert \n = 2\\vert I_{1}-d + 1\\vert \\frac{ m + n }{nm} \n = 2\\vert I_{1}-d + 1\\vert 
\\big(\\frac{1}{n}+ \\frac{1}{m}\\big)\n < 2\\vert I_{1}-d + 1\\vert \\frac{2}{N}\n = \\frac{4\\vert I_{1}-d+1\\vert }{N}\n < \\epsilon,\n $ \n using $\\left\\lfloor cn + d \\right \\rfloor > \\left \\lfloor cm + d \\right \\rfloor$, $R(i) < 2$ for any $i$, and Lemma \\ref{lemma:flooraxby}.\n \\else\n $$\n \\begin{aligned}\n &\\vert a_{n} - a_{m}\\vert \n = \\left\\vert \\sum_{i=I_{1}+1}^{\\lfloor cn + d \\rfloor} \\frac{R(i)}{n} - \\sum_{i=I_{1}+1}^{\\lfloor cm + d \\rfloor} \\frac{R(i)}{m}\\right\\vert {}\n = \\left\\vert \\frac{1}{nm}\\left( m\\sum_{i=I_{1}+1}^{\\lfloor cn + d \\rfloor} R(i) - n\\sum_{i=I_{1}+1}^{\\lfloor cm + d \\rfloor} R(i)\\right) \\right\\vert \n \\\\\n &= \\left\\vert \\frac{1}{nm}\\left( m\\sum_{i=\\lfloor cm + d \\rfloor + 1}^{\\lfloor cn + d \\rfloor} R(i) + (m- n)\\sum_{i=I_{1}+1}^{\\lfloor cm + d \\rfloor } R(i) \\right) \\right\\vert \n \\hspace*{.6cm} [\\text{as } \\left\\lfloor cn + d \\right \\rfloor > \\left \\lfloor cm + d \\right \\rfloor]\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &< 2\\left\\vert \\frac{m(\\lfloor cn + d \\rfloor - (\\lfloor cm + d \\rfloor + 1) + 1) + (m- n)(\\lfloor cm + d \\rfloor -(I_{1}+1)+1 )}{nm} \\right\\vert \n \\hspace*{0.5cm} [\\text{as } R(i) < 2 \\text{ for any } i]\n \\\\\n\\ifexpandexplanation\n &= 2\\left\\vert \\frac{m\\lfloor cn + d \\rfloor - m(\\lfloor cm + d \\rfloor + 1) + m + m \\lfloor cm + d \\rfloor - m(I_{1}+1)+ m - n\\lfloor cm + d \\rfloor +n(I_{1}+1) -n }{nm} \\right\\vert \n \\\\\n &= 2\\left\\vert \\frac{m\\lfloor cn + d \\rfloor - m\\lfloor cm + d \\rfloor -m + m + m \\lfloor cm + d \\rfloor - mI_{1} -m + m - n\\lfloor cm + d \\rfloor +nI_{1} +n -n }{nm} \\right\\vert \n \\\\\n &= 2\\left\\vert \\frac{m\\lfloor cn + d \\rfloor - mI_{1} - n\\lfloor cm + d \\rfloor +nI_{1} }{nm} \\right\\vert \n = 2\\left\\vert \\frac{m\\lfloor cn + d \\rfloor - n\\lfloor cm + d \\rfloor - (m-n) I_{1} }{nm} \\right\\vert \n \\\\\n\\else\n &= 2\\left\\vert \\frac{m\\lfloor cn + d \\rfloor - 
n\\lfloor cm + d \\rfloor - (m-n) I_{1} }{nm} \\right\\vert \n \\\\\n\\fi\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n &< 2\\left\\vert \\frac{m( cn + d ) - n( cm + d ) + m + n - (m-n) I_{1} }{nm} \\right\\vert \\hspace*{1.8cm} [\\text{Lemma }\\ref{lemma:flooraxby}]\\\\\n\\ifexpandexplanation\n &= 2\\left\\vert \\frac{(m -n) (d-I_{1}) + m + n }{nm} \\right\\vert \\\\\n\\fi\n &= 2\\left\\vert \\frac{ (n - m) (I_{1}-d) + m + n }{nm} \\right\\vert \n < 2\\left\\vert \\frac{ (n + m) (I_{1}-d) + m + n }{nm} \\right\\vert \n \\\\\n &= 2\\left\\vert \\frac{ (m + n) (I_{1}-d + 1) }{nm} \\right\\vert \n = 2\\vert I_{1}-d + 1\\vert \\frac{ m + n }{nm} \n = 2\\vert I_{1}-d + 1\\vert \\left(\\frac{1}{n}+ \\frac{1}{m}\\right)\n \\\\\n &< 2\\vert I_{1}-d + 1\\vert \\frac{2}{N}\n = \\frac{4\\vert I_{1}-d+1\\vert }{N}\n < \\epsilon.\n \\end{aligned}\n $$ \n \\fi\n\\end{proof}\n\n To prove the existence, we have $\\lceil x \\rceil = x + frac(-x)$, for any real number $x$\\footnote{Heed that using this definition of $frac$, $frac(1.7) = 0.7$ and $frac(-1.7) = 0.3$.}, because \n \\if0 1\n $\n frac(-x) \n = -x - \\lfloor -x \\rfloor $ \n $= -x - (-\\lceil x \\rceil) $ $ \n = -x + \\lceil x \\rceil \n \\Leftrightarrow \\lceil x \\rceil = x + frac(-x) $\n by the definition of $\\lfloor -x \\rfloor$ and $\\lfloor -x \\rfloor = -\\lceil x \\rceil$.\n \\else\n $$\n \\begin{aligned}\n frac(-x) \n &= -x - \\lfloor -x \\rfloor & [\\text{def. 
of }\\lfloor -x \\rfloor ] \\\\\n &= -x - (-\\lceil x \\rceil) & [\\lfloor -x \\rfloor = -\\lceil x \\rceil] \\\\\n &= -x + \\lceil x \\rceil &\\\\\n \\Leftrightarrow \\lceil x \\rceil &= x + frac(-x).&\\\\\n \\end{aligned}\n $$\n \\fi\n Thus,\n \\if0 1\n $\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\big(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\big) \n = \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\big( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) + 1 \\big) -\n \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\big( frac\\big(-Y_{1}^{R}(x_{h})\\big) +frac\\big(Y_{2}^{R}(x_{h})\\big) \\big).\n $\n \\else\n $$\n \\begin{aligned}\n &\\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\left(\\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right) \n \\\\\n =& \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) - frac\\left(-Y_{1}^{R}(x_{h})\\right) \\right.\\\\\n &\n \\left. -frac\\left(Y_{2}^{R}(x_{h})\\right) + 1 \\right)\n \\\\\n \\end{aligned}\n $$\n $$\n \\begin{aligned}\n =& \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) + 1 \\right) -\n \\\\\n & \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\left( frac\\left(-Y_{1}^{R}(x_{h})\\right) +frac\\left(Y_{2}^{R}(x_{h})\\right) \\right).\n \\\\\n \\end{aligned}\n $$\n \\fi\n The limit of the first term above exists and its value is presented below on (\\ref{eq:limplus1above}). 
\n The existence of the limit for the second term was shown by Lemma \\ref{lemma:limsum1Ri} for any outcome of $\\min(n_{l}^{+}-1,K'-1)$, because, if $ \\left\\lfloor\\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) + 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d}\\right\\rfloor = n_{l}^{+}-1 \\le K'-1$, $c = \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}$ and $d = \\frac{2s(\\sin(\\vert \\psi - \\pi\/6\\vert )-\\cos(\\psi - \\pi\/6)) }{\\sqrt{3}d}$ in Lemma \\ref{lemma:limsum1Ri}. If $n_{l}^{+}-1 > K'-1 = \\left\\lceil\\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} - 1\\right\\rceil$, as for any $x$, $\\lceil x \\rceil = \\lfloor x \\rfloor$ or $\\lceil x \\rceil = \\lfloor x \\rfloor + 1$ depending on whether $x$ is an integer or not, then $K' - 1 = \\Big\\lfloor \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d} - \\frac{2s\\sin(\\vert \\psi - \\pi\/6\\vert )}{\\sqrt{3}d} -1\\Big\\rfloor$ or $K' - 1 = \\Big\\lfloor \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d} - \\frac{2s\\sin(\\vert \\psi - \\pi\/6\\vert )}{\\sqrt{3}d} \\Big\\rfloor$. 
For both cases, in Lemma \\ref{lemma:limsum1Ri} $c = \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}$ as well, but for the former case, $d = -\\frac{2s(\\sin(\\vert \\psi - \\pi\/6\\vert )+\\cos(\\psi - \\pi\/6))}{\\sqrt{3}d} - 1$, and for the latter, $d = - \\frac{2s(\\sin(\\vert \\psi - \\pi\/6\\vert )+\\cos(\\psi - \\pi\/6))}{\\sqrt{3}d}$.\n \n \n \n \n To get the bounds, we have\n \\begin{equation}\n \\begin{aligned}\n &\\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) -1 \\right) \\\\\n &< \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)}\\left( \\lfloor Y_{2}^{R}(x_{h}) \\rfloor - \\lceil Y_{1}^{R}(x_{h}) \\rceil + 1 \\right) \\\\\n &\\le \\lim_{T\\to \\infty} \\frac{1}{T} \\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1)} \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) +1 \\right), \n \\end{aligned}\n \\label{eq:limit3ineq}\n \\end{equation}\n and by Lemma \\ref{lemma:MiddleInterval}, as $T \\to \\infty$,\n \\if0 1\n $\n Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) \n = \n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}} - \n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n = \\frac{2s}{d\\cos(\\psi - \\pi\/6)},\n $\n \\else\n\\ifexpandexplanation \n $$\n \\begin{aligned}\n Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) \n &= \n \\frac{\\frac{y_2}{d} + \\tan(\\psi) x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}} - \n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n \\\\ &\n = \\frac{\\frac{y_{2}-y_{1}}{d}}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}}\n = \\frac{\\frac{2s}{d\\cos(\\psi)}}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}}\n = \\frac{4s}{d(\\sqrt{3}\\cos(\\psi) + \\sin(\\psi))}\n \\\\ &\n = \\frac{2s}{d\\cos(\\psi - \\pi\/6)},\n \\end{aligned}\n $$ \n\\else\n $$\n \\begin{aligned}\n Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) \n &= \n \\frac{\\frac{y_2}{d} + \\tan(\\psi) 
x_h}{{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}}} - \n \\frac{\\frac{y_1}{d} + \\tan(\\psi) x_h}{\\frac{\\sqrt{3} + \\tan(\\psi)}{2}} \n = \\frac{2s}{d\\cos(\\psi - \\pi\/6)},\n \\end{aligned}\n $$ \n\\fi\n \\fi\n by (\\ref{eq:2sy2y1}).\n \n For the first limit at (\\ref{eq:limit3ineq}) in the case of $\\min(n_{l}^{+}-1,K'-1) = n_{l}^{+}-1$,\n \\if0 1\n \\begin{equation}\n \\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=n_{l}^{-} + 1}^{n_{l}^{+} - 1 } \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) -1 \\right) \n = \\frac{4vs}{\\sqrt{3}d^{2}} - \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}. \n \\label{eq:limitbelowmid}\n \\end{equation}\n \\else\n $$\n \\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=n_{l}^{-} + 1}^{n_{l}^{+} - 1 } \\left( Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) -1 \\right) \n $$\n $$\n = \\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=n_{l}^{-} + 1}^{n_{l}^{+} - 1 } \\left( \\frac{2s}{d\\cos(\\psi - \\pi\/6)} - 1 \\right)\n $$\n $$\n = \\lim_{T\\to \\infty} \\frac{1}{T} \\left(n_{l}^{+} - n_{l}^{-} - 1\\right) \\left(\\frac{2s}{d\\cos(\\psi - \\pi\/6)} - 1\\right)\n $$\n \\begin{align}\n &= \\left(\\frac{2s}{d\\cos(\\psi - \\pi\/6)} - 1\\right) \\left( \\lim_{T\\to \\infty} \\frac{1}{T} n_{l}^{+} - \\lim_{T\\to \\infty} \\frac{1}{T} (n_{l}^{-} + 1 )\\right) \\nonumber \\\\\n &= \\left(\\frac{2s}{d\\cos(\\psi - \\pi\/6)} - 1\\right) \\lim_{T\\to \\infty} \\frac{1}{T} n_{l}^{+} \\nonumber \\\\\n &= \\left(\\frac{2s}{d\\cos(\\psi - \\pi\/6)} - 1\\right) \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d} 
\\nonumber \\\\\n &= \\frac{4vs}{\\sqrt{3}d^{2}} - \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}.\n \\label{eq:limitbelowmid}\n \\end{align}\n \\fi\n Above we get $\\displaystyle \\lim_{T \\to\\infty}\\frac{1}{T} n_{l}^{+} = \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}$ by using the sandwich theorem and the inequality $x - 1 < \\lfloor x \\rfloor \\le x$ to get the bounds on $n_{l}^{+}$.\n \n \n Similarly, for the last limit at (\\ref{eq:limit3ineq}) in the case of $\\min(n_{l}^{+}-1,K'-1) = n_{l}^{+}-1$, \n \\if0 1\n $\n \\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=n_{l}^{-} + 1}^{n_{l}^{+}-1} \\left(Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) +1 \\right)\n =\\frac{4vs}{\\sqrt{3}d^{2}} + \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}.\n $\n \\else\n $$\n \\begin{aligned}\n &\\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=n_{l}^{-} + 1}^{n_{l}^{+}-1} \\left(Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) +1 \\right)\n =\\frac{4vs}{\\sqrt{3}d^{2}} + \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}.\n \\end{aligned}\n $$\n \\fi\n The limits above in the case of $\\min(n_{l}^{+}-1,K'-1) = K'-1$ yield the same result because of the sandwich theorem, the inequality $x \\le \\lceil x \\rceil < x + 1$, and\n \\if0 1\n $\n \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}\n = \\lim_{T \\to \\infty} \\frac{1}{T} \\big( \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} \\big)\n \\le \\lim_{T \\to \\infty} \\frac{1}{T}(K' -1) = \\lim_{T \\to \\infty} \\frac{1}{T}K' \n = \\lim_{T \\to \\infty} \\frac{1}{T} \\big\\lceil \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} \\big\\rceil \\\\\n < \\lim_{T \\to \\infty} \\frac{1}{T} \\big( \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} + 1 \\big) \n = \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d},\n $\n \\else\n $$\n \\begin{aligned} \n &\\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}\n = \\lim_{T \\to \\infty} 
\\frac{1}{T} \\left( \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} \\right)\n \\\\ \n & \\le \\lim_{T \\to \\infty} \\frac{1}{T}(K' -1) = \\lim_{T \\to \\infty} \\frac{1}{T}K' \\\\\n &= \\lim_{T \\to \\infty} \\frac{1}{T} \\left\\lceil \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} \\right\\rceil \n \\\\ \n &<\\lim_{T \\to \\infty} \\frac{1}{T} \\left( \\frac{2 (vT-s) \\cos(\\psi - \\pi\/6) - 2s\\sin(\\vert \\psi - \\pi\/6\\vert ) }{\\sqrt{3}d} + 1 \\right) \\\\\n &= \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d},\n \\end{aligned}\n $$\n \\fi\n so, $\\displaystyle \\lim_{T \\to \\infty} \\frac{1}{T}K' = \\lim_{T \\to \\infty} \\frac{1}{T}n_{l}^{+}.$ Consequently, the limit below exists and \n \\begin{equation}\n \\lim_{T\\to \\infty} \\frac{1}{T}\\sum_{x_{h}=n_{l}^{-} + 1}^{\\min(n_{l}^{+}-1,K'-1) } \\left(Y_{2}^{R}(x_{h}) - Y_{1}^{R}(x_{h}) +1 \\right) =\\frac{4vs}{\\sqrt{3}d^{2}} + \\frac{2 v \\cos(\\psi - \\frac{\\pi}{6})}{\\sqrt{3}d}.\n \\label{eq:limplus1above}\n \\end{equation}\n \n Finally, using the bounds provided by (\\ref{eq:limitbelowmid}) and (\\ref{eq:limplus1above}) we have the expected result.\n \\fi %\n \\end{proof}\n \n By Lemmas \\ref{lemma:lim1st}, \\ref{lemma:lim2nd} and \\ref{lemma:lim3rd} we have for $\\psi \\neq \\pi\/6$\n \\begin{equation} \n \\begin{aligned}\n \\lim_{T \\to \\infty} f_{h}(T,\\psi) \\in&\n \\left(\\frac{4vs}{\\sqrt{3}d^{2}} - \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}, \\frac{4vs}{\\sqrt{3}d^{2}} + \\frac{2 v \\cos(\\psi - \\pi\/6)}{\\sqrt{3}d}\\right].\n \\end{aligned}\n \\label{eq:whollynotpi6}\n \\end{equation}\n For $\\psi = \\pi\/6$, by (\\ref{eq:30degreescasepsi}),\n \\if0 1\n $\n \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{\n \\big\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\big\\rfloor\n } \\Big(\\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} - 1 \\Big)\n <\n \\lim_{T \\to \\infty} 
f_{h}(T,\\pi\/6) \n \\le \\lim_{T \\to \\infty} \\frac{1}{T} \\sum_{x_{h}=0}^{\n \\big\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\big\\rfloor\n } \\Big(\\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} + 1 \\Big),\n $\n \\else\n $$\n \\begin{array}{>{\\displaystyle}c}\n \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{\n \\left\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\right\\rfloor\n } \\left(\\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} - 1 \\right)\n <\n \\lim_{T \\to \\infty} f_{h}(T,\\pi\/6) \n \\\\\n \\le \\lim_{T \\to \\infty} \\frac{1}{T} \\sum_{x_{h}=0}^{\n \\left\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\right\\rfloor\n } \\left(\\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} + 1 \\right),\n \\end{array}\n $$\n \\fi\n with\n \\if0 1\n $\n \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\big\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\big\\rfloor} \\big( \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} + 1 \\big) \n = \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\big\\lfloor\\frac{2 (vT -s)}{\\sqrt{3}d} \\big\\rfloor} \\big( \\frac{\\sqrt{3}s}{d\\cos(\\pi\/6)} + 1 \\big)\n = \\lim_{T \\to \\infty} \\frac{1}{T} \\big\\lfloor \\frac{2 (vT-s) }{\\sqrt{3}d} + 1\\big\\rfloor \\big(\\frac{2s}{d} + 1\\big)\n = \\frac{2 v }{\\sqrt{3}d} \\big(\\frac{2s}{d} + 1\\big),\n $\n \\else\n\\ifexpandexplanation \n $$\n \\begin{aligned}\n &\\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\left\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\right\\rfloor} \\left( \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} + 1 \\right) \\\\\n &= \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\left\\lfloor\\frac{2 (vT -s)}{\\sqrt{3}d} \\right\\rfloor} \\left( \\frac{\\sqrt{3}s}{d\\cos(\\pi\/6)} + 1 \\right)\n \\\\\n &= \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\left\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\right\\rfloor} 
\\left(\\frac{2s}{d} + 1 \\right)\\\\\n & \n = \\lim_{T \\to \\infty} \\frac{1}{T} \\left\\lfloor \\frac{2 (vT-s) }{\\sqrt{3}d} + 1\\right\\rfloor \\left(\\frac{2s}{d} + 1\\right)\\\\&\n = \\frac{2 v }{\\sqrt{3}d} \\left(\\frac{2s}{d} + 1\\right),\\\\\n \\end{aligned}\n $$ \n\\else \n $$\n \\begin{aligned}\n &\\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\left\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\right\\rfloor} \\left( \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} + 1 \\right) \\\\\n &= \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\left\\lfloor\\frac{2 (vT -s)}{\\sqrt{3}d} \\right\\rfloor} \\left( \\frac{\\sqrt{3}s}{d\\cos(\\pi\/6)} + 1 \\right)\n = \\lim_{T \\to \\infty} \\frac{1}{T} \\left\\lfloor \\frac{2 (vT-s) }{\\sqrt{3}d} + 1\\right\\rfloor \\left(\\frac{2s}{d} + 1\\right)\\\\&\n = \\frac{2 v }{\\sqrt{3}d} \\left(\\frac{2s}{d} + 1\\right),\\\\\n \\end{aligned}\n $$ \n\\fi\n \\fi\n from (\\ref{eq:2sy2y1}) and, as similarly done before, $\\displaystyle \\lim_{T \\to \\infty} \\frac{1}{T} \\left\\lfloor \\frac{2 (vT-s) }{\\sqrt{3}d} + 1\\right\\rfloor = \\frac{2 v }{\\sqrt{3}d}$ by using the sandwich theorem and the inequality $x - 1 < \\lfloor x \\rfloor \\le x$ to get the bounds on the floor function; and\n \\if0 1\n $\n \\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\big\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\big\\rfloor} \\big( \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} - 1 \\big)\n = \\frac{2 v }{\\sqrt{3}d} \\big(\\frac{2s}{d} - 1\\big).\n $\n \\else\n $$\n \\begin{aligned}\n &\\lim_{T \\to \\infty} \\frac{1}{T}\\sum_{x_{h}=0}^{ \\left\\lfloor\\frac{2 (vT-s) }{\\sqrt{3}d} \\right\\rfloor} \\left( \\frac{\\sqrt{3}y_{2} + d x_{h}}{2d} - \\frac{\\sqrt{3}y_{1} + d x_{h}}{2d} - 1 \\right)\n = \\frac{2 v }{\\sqrt{3}d} \\left(\\frac{2s}{d} - 1\\right).\\\\\n \\end{aligned}\n $$\n \\fi\n Accordingly, \n \\if0 1\n $\\lim_{T \\to \\infty} f_{h}(T,\\pi\/6) \\in \\big(\\frac{2 v 
}{\\sqrt{3}d} \\big(\\frac{2s}{d} - 1\\big),\\frac{2 v }{\\sqrt{3}d} \\big(\\frac{2s}{d} + 1\\big)\\big],$\n \\else\n $$\\lim_{T \\to \\infty} f_{h}(T,\\pi\/6) \\in \\left(\\frac{2 v }{\\sqrt{3}d} \\left(\\frac{2s}{d} - 1\\right),\\frac{2 v }{\\sqrt{3}d} \\left(\\frac{2s}{d} + 1\\right)\\right],$$\n \\fi\n which matches the interval in (\\ref{eq:whollynotpi6}) when $\\psi = \\pi\/6$ is used. \n \n Lemmas \\ref{lemma:nlmnlp}--\\ref{lemma:MiddleInterval}, \\ref{lemma:lim1st}, \\ref{lemma:lim2nd} and \\ref{lemma:lim3rd} were stated in terms of $\\psi$, so, after replacing $\\psi$ by $\\pi\/3 - \\theta$, we conclude Proposition \\ref{prop:hexthroughputbounds}. \n\\end{proof}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\columnwidth]{figs\/limitsHexPack.pdf} \n \\caption{Limit given by (\\ref{eq:limitnoangle}) using the circle packing results and the lower and upper bounds of the hexagonal packing limit given by (\\ref{eq:hexthroughputbounds}) for $\\theta \\in \\lbrack 0,\\pi\/3 \\rparen$, $d = 1$ m, $v = 1$ m\/s and $s \\in \\{3,6\\}$ m.}\n \\label{fig:limitshexpack}\n\\end{figure}\n\nThe upper and lower bounds presented in (\\ref{eq:hexthroughputbounds}) are less than or equal to the maximum asymptotic throughput presented in Proposition \\ref{prop:triangularthroughput}, equation (\\ref{eq:limitnoangle}). The result of Proposition \\ref{prop:triangularthroughput} only concerns the maximum asymptotic throughput and does not consider the hexagonal packing angle $\\theta$, while Proposition \\ref{prop:hexthroughputbounds} gives a lower bound and tightens the bounds for a given $\\theta$. Figure \\ref{fig:limitshexpack} presents an example comparison of these equations for two different values of $s$. As expected, the maximum asymptotic throughput under the optimal density assumption (in (\\ref{eq:limitnoangle})) is a possible value of the throughput using hexagonal packing and is greater than or equal to every value in the interval in (\\ref{eq:hexthroughputbounds}) for any given $\\theta$. 
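As a quick numerical sanity check (a sketch added here, not part of the original analysis; the helper name `hex_packing_bounds` is ours), the bounds of the hexagonal packing interval can be evaluated directly from the formulas above with $\psi = \pi/3 - \theta$, so that $\cos(\psi - \pi/6) = \cos(\pi/6 - \theta)$:

```python
import math

def hex_packing_bounds(theta, s, d=1.0, v=1.0):
    # Lower and upper bounds of the asymptotic throughput for hexagonal
    # packing, following the interval derived above with
    # psi = pi/3 - theta, i.e. cos(psi - pi/6) = cos(pi/6 - theta).
    base = 4 * v * s / (math.sqrt(3) * d ** 2)
    half_width = 2 * v * math.cos(math.pi / 6 - theta) / (math.sqrt(3) * d)
    return base - half_width, base + half_width

# At theta = pi/6 the bounds reduce to (2v / (sqrt(3) d)) (2s/d -+ 1),
# matching the psi = pi/6 case derived above (d = 1 m, v = 1 m/s, s = 3 m).
lo, hi = hex_packing_bounds(math.pi / 6, s=3.0)
```

Note that the interval is widest at $\theta = \pi/6$, where $\cos(\pi/6 - \theta) = 1$.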
However, for practical robotic swarm applications, a certain hexagonal packing angle must be fixed depending on the expected height of the corridor, target size and the minimum distance between the robots, resulting in a throughput below or equal to the upper value presented in Proposition \\ref{prop:triangularthroughput}. \n\n\\begin{figure}[t!]\n \\centering \n \\subfloat[For 99 samples, $T = 43$ s, $s=3$ m, $d = 1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th3s1d43T1v99samp.pdf}}\n \\,\n \\subfloat[For 100 samples, $T = 43$ s, $s=3$ m, $d = 1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th3s1d43T1v100samp.pdf}}\n \\\\\n \\subfloat[For 99 samples, $T=30$ s, $s=2.5$ m and $d=0.66$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_5s0_66d30T1v99samp.pdf}} \n \\,\n \\subfloat[For 100 samples, $T=30$ s, $s=2.5$ m and $d=0.66$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_5s0_66d30T1v100samp.pdf}}\n \\caption{Examples of (\\ref{eq:hexthroughput}) varying $\\theta$ from 0 to $\\frac{\\pi}{3}$ for different, randomly generated values of $T$, $s$, and $d$. In the graphs, $\\theta$ is on the $x$-axis and the number of robots inside the given rectangle is on the $y$-axis. We used 99 samples on the images on the left-hand side and 100 samples on the right-hand side for each plot, and $v = 1$ m\/s. The maximum value in each image is represented by an orange circle, and a square represents the larger maximum between the left and the right images. No square means the maximum values on both sides are equal. 
It continues in Figure \\ref{fig:plottheta2}.}\n \\label{fig:plottheta1}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering \n \\subfloat[For 99 samples, $T = 4$ s, $s = 2$ m and $d = 0.13$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2s0_13d4T1v99samp.pdf}}\n \\,\n \\subfloat[For 100 samples, $T = 4$ s, $s = 2$ m and $d = 0.13$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2s0_13d4T1v100samp.pdf}}\n \\\\\n \\subfloat[For 99 samples, $T = 100$ s, $s=2.40513$ m and $d=1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_4051388090635197s1d100T1v99samp.pdf}} \n \\,\n \\subfloat[For 100 samples, $T = 100$ s, $s=2.40513$ m and $d=1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_4051388090635197s1d100T1v100samp.pdf}} \n \\caption{Continuation of Figure \\ref{fig:plottheta1}.}\n \\label{fig:plottheta2}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering \n \\subfloat[For $10^{7}$ samples, $T = 43$ s, $s=3$ m, $d = 1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th3s1d43T1v10000000samp.pdf}}\n \\,\n \\subfloat[For $10^{7}+1$ samples, $T = 43$ s, $s=3$ m, $d = 1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th3s1d43T1v10000001samp.pdf}}\n \\\\\n \\subfloat[For $10^{7}$ samples, $T=30$ s, $s=2.5$ m and $d=0.66$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_5s0_66d30T1v10000000samp.pdf}} \n \\,\n \\subfloat[For $10^{7}+1$ samples, $T=30$ s, $s=2.5$ m and $d=0.66$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_5s0_66d30T1v10000001samp.pdf}}\n \\caption{Similar to Figures \\ref{fig:plottheta1} and \\ref{fig:plottheta2} but using $10^{7}$ and $10^{7}+1$ equally spaced points for $\\theta \\in \\lbrack 0,\\pi\/3 \\rparen$. 
It continues in Figure \\ref{fig:plottheta4}.} \n \\label{fig:plottheta3}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering \n \\subfloat[For $10^{7}$ samples, $T = 4$ s, $s = 2$ m and $d = 0.13$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2s0_13d4T1v10000000samp.pdf}}\n \\,\n \\subfloat[For $10^{7}+1$ samples, $T = 4$ s, $s = 2$ m and $d = 0.13$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2s0_13d4T1v10000001samp.pdf}}\n \\\\\n \\subfloat[For $10^{7}$ samples, $T = 100$ s, $s=2.40513$ m and $d=1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_4051388090635197s1d100T1v10000000samp.pdf}} \n \\,\n \\subfloat[For $10^{7}+1$ samples, $T = 100$ s, $s=2.40513$ m and $d=1$ m.]{\\includegraphics[width=0.483\\columnwidth]{figs\/Th2_4051388090635197s1d100T1v10000001samp.pdf}} \n \\caption{Continuation of Figure \\ref{fig:plottheta3}.}\n \\label{fig:plottheta4}\n\\end{figure}\n\nOn the other hand, due to the discontinuities of (\\ref{eq:hexthroughput}), it is difficult to get an exact value of $\\theta$ which maximises the throughput given the other parameters. Also, there is no specific value of $\\theta$ which achieves the maximum throughput for all possible values of the other parameters. For instance, Figures \\ref{fig:plottheta1}-\\ref{fig:plottheta4} present the result of this equation for some randomly generated parameters and different numbers of equally spaced samples of $\\theta$ taken from the domain interval, that is, from $0$ to $\\pi\/3$ inclusive. \n\n\nEach of Figures \\ref{fig:plottheta1}-\\ref{fig:plottheta4} presents two different sets of parameters.\nIn Figures \\ref{fig:plottheta1} and \\ref{fig:plottheta2}, we use 99 equally spaced values for $\\theta \\in \\lbrack 0,\\pi\/3 \\rparen$ on the left-hand side images and 100 on the right-hand side, then we compare the maxima of the two sides and choose the larger one. 
We do the same in Figures \\ref{fig:plottheta3} and \\ref{fig:plottheta4}, but using $10^{7}$ and $10^{7}+1$. \nFigures \n\\ref{fig:plottheta1} (a), \n\\ref{fig:plottheta2} (a),\n\\ref{fig:plottheta3} (b)\nand \\ref{fig:plottheta4} (b)\nshow examples where $\\theta \\approx \\pi\/6$ reaches the maximum throughput, and in Figures \n\\ref{fig:plottheta1} (c) and (d), and \\ref{fig:plottheta3} (c) and (d) the maximum is at $\\theta = 0$.\nMoreover, Figures \n\\ref{fig:plottheta2} (c) and (d) have their maximum at a $\\theta$ different from the other examples. Figures\n\\ref{fig:plottheta1} (c) and \\ref{fig:plottheta1} (d)\nhave the same maximum, despite the plots being different. This also occurs in Figures\n\\ref{fig:plottheta3} (c) and (d), and Figures\n\\ref{fig:plottheta4} (c) and \\ref{fig:plottheta4} (d).\nIf we know the parameters, we can find an approximate best candidate for $\\theta$ by searching several values, as we presented. However, as far as we know, obtaining the value which maximises that equation in closed form is an open problem.\n\n\nAdditionally, notice that whenever the number of samples is odd, the value $\\theta = \\pi\/6$ is sampled. We observe in these figures that when the maximum is at $\\theta = \\pi\/6$, it tends to be higher than the maximum found without considering it. For instance, compare the maxima found in the pairs (a) and (b) in Figures \\ref{fig:plottheta1}-\\ref{fig:plottheta4}. On the other hand, $\\theta = \\pi\/6$ is not always the optimal value. \nThus, we suggest computing the value for $\\theta=\\pi\/6$ first and comparing it with the maximum found by a search over any chosen number of samples in the interval $\\lbrack 0,\\pi\/3 \\rparen$. 
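The parity detail above can be checked directly: with samples equally spaced from $0$ to $\pi/3$ inclusive, an odd sample count always contains $\theta = \pi/6$, while an even count misses it. The sketch below illustrates this (the helper names are ours, and the throughput equation itself is not reproduced here):

```python
import math

def theta_samples(n):
    # n >= 2 equally spaced samples of theta from 0 to pi/3 inclusive,
    # as used for the plots.
    return [i * (math.pi / 3) / (n - 1) for i in range(n)]

def includes_pi_over_6(n):
    # True when theta = pi/6 (the midpoint of the interval) is one of
    # the samples, which happens exactly when n is odd.
    return any(abs(t - math.pi / 6) < 1e-12 for t in theta_samples(n))
```

For instance, `includes_pi_over_6(99)` holds while `includes_pi_over_6(100)` does not, which motivates evaluating $\theta = \pi/6$ explicitly before the grid search.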
\n\n\n\\subsubsection{Touch and run strategy}\n\\label{sec:touchandrun}\n\n\\begin{figure}[t]\n \\centering\n \\subfloat[]{\n \\includegraphics[width=0.47\\columnwidth]{figs\/theoretical_central_angle}\n }\n \\subfloat[]{\n \\includegraphics[width=0.47\\columnwidth]{figs\/theoretical_link_alpha_r2}\n }\n \\caption{Illustration of the touch and run strategy. (a) Central angle region and its exiting and entering rays defined by the angle $\\alpha$. (b) Trajectory of a robot next to the target, in red. Here we have the relationship between the target area radius ($s$), the minimum safety distance between the robots ($d$), the turning radius ($r$), the central region angle ($\\alpha$) and the distance from the target centre for a robot to begin turning ($d_{r}$) -- used as justification for (\\ref{eq:distTargetToTurn}) and (\\ref{eq:relationangles}). The green dashed circle represents the whole turning circle.}\n \\label{fig:theoretical:central_angle_linka}\n\\end{figure}\n\n\n\n\nWe now discuss the \\emph{touch and run} strategy. Since a robot should spend as little time as possible near the target, we consider a simple scenario where robots travel along predefined curved lanes tangent to the target area, spending minimum time on the target. \nTo avoid collisions with other robots, the trajectory of a robot near the target is circular,\nand the distance between any two robots must be at least $d$ at every part of the trajectory. \nHence, no lane crosses another, and each lane occupies a region defined by an angle in the target area that we denote by $\\alpha$, shown in Figure \\ref{fig:theoretical:central_angle_linka} (a).\n\n\n\nFigure \\ref{fig:theoretical:central_angle_linka} (b) shows the trajectory of a robot towards the target region following that strategy. 
The robot first follows the boundary of the central angle region -- that is, the entering ray -- at a distance $d\/2$.\nThen, it arrives at a distance $s$ from the target centre using a circular trajectory with a turning radius $r$. \nAs the trajectory is tangent to the target shape, it is close enough to consider that the robot has reached the target region.\nFinally, the robot leaves the target by following the second boundary of the central angle region -- that is, the exiting ray -- at a distance $d\/2$. Depending on the value of $\\alpha$, it is possible to fit several of these lanes around the target.\nFor example, when $\\alpha = \\pi \/ 2$, it is possible to fit 4 lanes (Figure \\ref{fig:theoretical:trajectory}). The robots in each lane must maintain a distance $d_{o}$ between each other -- which is calculated depending on the values of $d$, $s$, $r$ and the number of lanes $K$ as shown later.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{figs\/theoretical_input_path.pdf} \n \\caption{Theoretical trajectory in red, for $\\alpha = \\pi\/2$ and $K=4$. Robots are black dots and $d_{o}$ is the desired distance between the robots in the same lane. When robots of all lanes simultaneously occupy the target region, their positions are the vertices of a regular polygon -- here, it is represented by a grey square inside the target region. }\n \\label{fig:theoretical:trajectory}\n\\end{figure}\n\nThe lemma below concerns the distance to the target centre where the robots will start turning on the curved path. It will also be useful in the discussion about experiments using this strategy in Section \\ref{sec:hitandrunexperiments}. \n\n\\begin{lemma}\n The distance $d_{r}$ to the target centre for the robot to start turning is \n \\begin{equation}\n d_{r} = \\sqrt{s(2r+s)-r d}. 
\n \\label{eq:distTargetToTurn}\n \\end{equation}\n \\label{lemma:drturn}\n\\end{lemma}\n\\begin{proof}\n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n In Figure \\ref{fig:theoretical:central_angle_linka} (b) we show the distance $d_{r}$ from the target centre where the robots begin turning. By symmetry, this is the same distance from the target centre where the robots stop turning. From the right triangle $ABC$ in that figure, we have $\\vert \\overline{AC}\\vert = \\sqrt{(r+s)^{2} - (r+d\/2)^{2}}$ and from $\\bigtriangleup ACD$, $d_{r} = \\sqrt{(d\/2)^{2} + \\vert \\overline{AC}\\vert ^{2}}$. Thus,\n \\if0 1\n $\n d_{r} = \\sqrt{(d\/2)^{2} + (r+s)^{2} - (r+d\/2)^{2}} = \\sqrt{s(2r+s)-r d}. \n $\n \\else\n $$\n d_{r} = \\sqrt{(d\/2)^{2} + (r+s)^{2} - (r+d\/2)^{2}} = \\sqrt{s(2r+s)-r d}. \n $$\n \\fi\n \\fi %\n\\end{proof}\n\n\nWe now present a lemma about the turning radius, then we define the domain of $K$ and $\\alpha$ in order to calculate the throughput for the touch and run strategy.\n\n\\begin{lemma}\n The central region angle $\\alpha$, the minimum distance between the robots $d$ and the turning radius $r$ are related by\n \\begin{equation}\n r = \\frac{s \\sin(\\alpha \/ 2) - d\/2}{1 - \\sin(\\alpha \/ 2)}.\n \\label{eq:relationangles}\n \\end{equation}\n \\label{prop:relationangles}\n\\end{lemma}\n\\begin{proof} \n \\ifithasappendixforlemmas %\n See Online Appendix.\n \\else %\n From Figure \\ref{fig:theoretical:central_angle_linka} (b), we can see that the right triangle $ABE$ has angle $\\widehat{EAB} = \\alpha\/2$, hypotenuse $r + s$ and cathetus $r + d\/2$. 
Hence, it directly follows that\n \\if0 1\n $\n \\sin(\\alpha \/ 2) = \\frac{r + d\/2}{r + s}\n \\Leftrightarrow\n r = \\frac{s \\sin(\\alpha \/ 2) - d\/2}{1 - \\sin(\\alpha \/ 2)}.\n $\n \\else\n $$\n \\sin(\\alpha \/ 2) = \\frac{r + d\/2}{r + s}\n \\Leftrightarrow\n r = \\frac{s \\sin(\\alpha \/ 2) - d\/2}{1 - \\sin(\\alpha \/ 2)}.\n $$\n \\fi\n \\fi %\n\\end{proof}\n\n\\begin{proposition}\n Let $K$ be the number of curved trajectories around the target area, $\\alpha$ be the angle of each central area region, and $r$ the turning radius of the robot for the curved trajectory of this central area region. For a given $d > 0$ and $s \\ge d\/2$, the domain of $K$ is\n \\begin{equation} \n 3 \\le K \\le \\frac{\\pi}{\\arcsin \\left( \\frac{d}{2s} \\right)}, \\text{ and }\n \\label{eq:Kbounds}\n \\end{equation}\n \\begin{equation}\n \\alpha = \\frac{2 \\pi}{K}.\n \\label{eq:ak}\n \\end{equation}\n \\label{prop:Kboundsrk}\n\\end{proposition}\n\\begin{proof}\nThe number of trajectories $K$ must be greater than or equal to 3.\n\tThe reason is that for the minimum possible value for $s$, $s = d\/2$, $K = 2$ is enough to have parallel lanes. However, starting with $K = 3$, \n\tcurved trajectories are needed to guarantee that robots of one lane do not interfere with robots \n\tfrom another lane.\n\t\n\nAlso, we have $K$ identical trajectories around the target, each taking a central angle of $\\alpha$.\nAs a result, the value of $\\alpha$ given $K$ is\n$\n\\alpha = \\frac{2 \\pi}{K},\n$\nimplying that\n$\n0 < \\alpha \\le \\frac{2 \\pi}{3}. \n$\n\n\nAdditionally, in the worst case, one robot of each lane arrives in the target region at the same time. When robots of all lanes simultaneously occupy the target region, their positions can be seen as the vertices of a regular polygon which must be inscribed in the circular target region of radius $s$ (e.g., in Figure \\ref{fig:theoretical:trajectory} we have a square whose sides are greater than $d$). 
The number of robots on the target region at the same time is thus limited by the maximum number of sides of an inscribed regular polygon whose side is greater than or equal to $d$. The side of a regular polygon with $K$ sides inscribed in a circle of radius $s$ measures $2 s \\sin\\left(\\frac{\\pi}{K}\\right)$. Hence, $2 s \\sin\\left(\\frac{\\pi}{K}\\right) \\ge d \\Rightarrow \\frac{\\pi}{\\arcsin\\left(\\frac{d}{2 s} \\right)} \\ge K.$ \n\\end{proof}\n\nNow that we have determined the correct parametrisation for the touch and run strategy, we determine its throughput in the next proposition.\n\n\n\\begin{proposition}\n Assuming the touch and run strategy and that the first robot of every lane begins at the same distance from the target, given a target radius $s$,\n the constant linear robot speed $v$,\n a minimum distance between robots $d$\n and the number of lanes $K$, the throughput for a given instant $T$ is given by\n \n \\begin{equation}\n f_{t}(K,T) = \n \\frac{1}{T}\\left(K\\left\\lfloor \\frac{vT}{d_{o}} +1 \\right\\rfloor - 1\\right), \\text{ for }\n \\label{eq:throughputhitandruntime}\n \\end{equation}\n \\begin{equation}\n d_{o} = \\max(d,d'), \\text{ and }\n \\label{eq:do}\n \\end{equation}\n \\begin{equation}\n d' = \n \\begin{cases}\n r (\\pi - \\alpha) + \\frac{d - 2 r \\cos(\\alpha \/ 2)} {\\sin(\\alpha \/ 2)}, & \\text{ if } 2 r \\cos(\\alpha \/ 2) < d,\\\\\n 2 r \\arcsin\\left( \\frac{d}{2 r}\\right), & \\text{ otherwise, }\n \\end{cases}\n \\label{eq:dprime}\n \\end{equation}\n with $r$ obtained from (\\ref{eq:relationangles}). Also,\n \\begin{equation}\n f_{t}(K) = \\lim_{T\\to \\infty}{f_{t}(K,T)}= \\frac{Kv }{d_{o}}. \n \\label{eq:throughputhitandrunlimit}\n \\end{equation}\n \\label{prop:throughputsdk}\n\\end{proposition} \n\\begin{proof}\n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.55\\columnwidth]{figs\/coolpaths_length_ED}\n \\caption{ The red line represents the trajectory of robots in one lane. 
$\\alpha$ is the central angle for the lane. The dashed blue circle of centre A is the target. C is the centre of the circle of radius $r$ from the circular trajectory. The grey circle of centre C has a radius of $r + d\/2$. Points D and E represent the connection between the curved path and the straight path. We have $\\beta = \\pi - \\alpha$ due to the symmetry and the fact that the sum of the angles of $\\bigtriangleup \\mathit{ECD}$ is equal to $\\pi$.}\n \\label{fig:coolpaths:1}\n \\end{figure}\n \n Using the touch and run strategy, the lanes are at least $d$ distant from each other. However, the minimum distance $d_{o}$ between robots on the same lane must be checked at the beginning of the curved path, as their distance decreases there when assuming constant linear speed. We distinguish two cases based on Figure \\ref{fig:coolpaths:1}:\n \\begin{enumerate}\n \\item $\\vert \\overline{ED}\\vert < d$:\n Two robots cannot be simultaneously on the curved path of the lane; \n \\item $\\vert \\overline{ED}\\vert \\ge d$:\n More than one robot can occupy the curved path of the lane. \n \\end{enumerate}\n \n These cases affect the minimum distance between robots $d_{o}$ such that they can follow the trajectory without decreasing their linear speed. In both cases, they need to satisfy the minimum distance $d$ while they are turning on the curved path. From Figure \\ref{fig:coolpaths:1}, \n \\begin{equation}\n \\vert \\overline{ED}\\vert =\n 2r \\sin \\left( \\frac{\\beta}{2} \\right) = \n 2r \\sin \\left( \\frac{\\pi}{2} - \\frac{\\alpha}{2} \\right)\n = 2r \\cos\n \\left(\n \\frac{\\alpha}{2}\n \\right).\n \\label{eq:EDv}\n \\end{equation}\n\n \n \\begin{figure}\n \\centering \n \\subfloat[$\\vert \\overline{ED}\\vert < d$]{\n \\includegraphics[width=0.45\\linewidth]{figs\/cool_path_case1_dist}\n }\\qquad\n \\subfloat[$\\vert \\overline{ED}\\vert \\ge d$]{ \n \\includegraphics[width=0.25\\linewidth]{figs\/cool_path_case2_angle2}\n }\n \\caption{Enlargements of Figure \\ref{fig:coolpaths:1}. 
(a) The robots $R_{1}$ and $R_{2}$ are the black dots on the red line representing the trajectory. If the delay between $R_{1}$ and $R_{2}$ is less than the time for a robot to run from T to U following the red trajectory, there will be some instant at which $R_{1}$ and $R_{2}$ will be vertically aligned. Their positions at that instant are represented by grey dots in front of them. Hence, their distance would be less than $d$. The right triangle $TVE$ has side $\\overline{TV}$, which can be measured using $\\overline{ED}$. (b) $d_{o}$ denotes the minimum arc length for two robots located at any two points $R$ and $H$ on $\\wideparen{ED}$ such that they are distant by at least $d$. $\\gamma$ is the angle defining the arc $d_{o}$ for the circle of centre $C$.}\n \\label{fig:coolpaths23}\n \\end{figure}\n \n \n \n \n In case 1, in Figure \\ref{fig:coolpaths23} (a), we define two points T and U on the lane such that the distance between them is $\\vert \\overline{TU}\\vert = d$ and their distances to the target are equal.\n The delay between one robot at T and another at U is equal to \n \\if0 1\n $\\Delta t_1 = \\frac{\\vert \\overline{TE}\\vert + \\vert \\wideparen{ED}\\vert + \\vert \\overline{DU}\\vert }{v},$\n \\else\n $$\\Delta t_1 = \\frac{\\vert \\overline{TE}\\vert + \\vert \\wideparen{ED}\\vert + \\vert \\overline{DU}\\vert }{v},$$\n \\fi\n that is, the time for running through the straight line TE, the curved path ED and the straight line DU.\n \n For any delay less than $\\Delta t_1 $ between two robots, say $R_{1}$ and $R_{2}$, there is an instant of time when $R_{1}$ is on the path between B and T and $R_{2}$ is on the path between B and U and they are vertically aligned (Figure \\ref{fig:coolpaths23} (a)). 
In this case, the distance between $R_{1}$ and $R_{2}$ is below $\\vert \\overline{TU}\\vert $, so they do not respect the minimum distance $d$ between them.\n Hence, the minimum delay between two robots in case 1 is $\\Delta t_1$.\n \n From Figure \\ref{fig:coolpaths:1}, we have\n $\\vert \\wideparen{ED}\\vert = r \\beta = r (\\pi - \\alpha)$.\n For calculating the value of $\\vert \\overline{TE}\\vert $ and $\\vert \\overline{DU}\\vert $ from Figure \\ref{fig:coolpaths23} (a), we observe that $\\vert \\overline{TE}\\vert = \\vert \\overline{DU}\\vert $ by symmetry. Thus,\n \\if0 1\n $\n \\vert \\overline{VT}\\vert \n = \\frac{d}{2} - \\frac{\\vert \\overline{ED}\\vert }{2} \n = \\frac{d}{2} - r \\cos \\left( \\frac{\\alpha}{2} \\right) \n $\n from Figure \\ref{fig:coolpaths23} (a) and (\\ref{eq:EDv}).\n \\else\n $$\n \\begin{aligned}\n \\vert \\overline{VT}\\vert \n & = \\frac{d}{2} - \\frac{\\vert \\overline{ED}\\vert }{2} \n & [\\text{From Figure \\ref{fig:coolpaths23} (a)}]\\\\\n & = \\frac{d}{2} - r \\cos \\left( \\frac{\\alpha}{2} \\right) \n & [\\text{From (\\ref{eq:EDv})}].\\\\\n \\end{aligned}\n $$\n \\fi\n As $\\bigtriangleup \\mathit{TVE}$ is a right triangle, $\\vert \\overline{TE}\\vert = \\frac{\\vert \\overline{VT}\\vert }{\\sin(\\alpha \/ 2)}$. 
Hence,\n \\if0 1\n $\n \\Delta t_1 = \n \\frac{r (\\pi - \\alpha) + \n 2 \n \\frac{d\/2 - r \\cos(\\alpha \/ 2)}{\\sin(\\alpha \/ 2)}\n }{v} = \n \\frac{r (\\pi - \\alpha)}{v}\n +\n \\frac{d - 2 r \\cos(\\alpha \/ 2)}{v \\sin(\\alpha \/ 2)}\n $\n \\else\n $$\n \\Delta t_1 = \n \\frac{r (\\pi - \\alpha) + \n 2 \n \\frac{d\/2 - r \\cos(\\alpha \/ 2)}{\\sin(\\alpha \/ 2)}\n }{v} = \n \\frac{r (\\pi - \\alpha)}{v}\n +\n \\frac{d - 2 r \\cos(\\alpha \/ 2)}{v \\sin(\\alpha \/ 2)}\n $$\n \\fi\n and \n \\if0 1\n $d_{o} = \\max\\left(d, v \\Delta t_1\\right) \n = \\max\\left(d,r (\\pi - \\alpha) + \\frac{d - 2 r \\cos\\left(\\alpha\/2\\right)}{\\sin(\\alpha\/2)}\\right).$\n Here\n \\else\n $$d_{o} = \\max\\left(d, v \\Delta t_1\\right) \n = \\max\\left(d,r (\\pi - \\alpha) + \\frac{d - 2 r \\cos\\left(\\alpha\/2\\right)}{\\sin(\\alpha\/2)}\\right).$$\n Above\n \\fi\n we used the $\\max$ function because the result of $v \\Delta t_{1}$ can still be less than $d$, depending on $\\alpha$, $r$ and $d$.\n \n In case 2, we need to check the minimum distance $d$ when two robots are on the circular part $\\wideparen{ED}$ in Figure \\ref{fig:coolpaths23} (b). From this figure, $\\bigtriangleup CRH$ is isosceles, so $\\gamma = 2 \\arcsin\\left( \\frac{d}{2r}\\right)$. Thus, to keep constant velocity, the delay between two robots in this case is\n \\if0 1 \n $\n \\Delta t_2 = \\frac{d_{o}}{v} = \\frac{r \\gamma}{v} = \\frac{2r}{v} \\arcsin \\left( \\frac{d}{2r} \\right).\n $\n \\else\n $$\n \\Delta t_2 = \\frac{d_{o}}{v} = \\frac{r \\gamma}{v} = \\frac{2r}{v} \\arcsin \\left( \\frac{d}{2r} \\right).\n $$\n \\fi\n Then, \n \\if0 1\n $d_{o} = \\max\\left(d, v \\Delta t_2\\right) = \\max\\left(d, 2r\\arcsin\\left(\\frac{d}{2r}\\right)\\right).$ \n \\else\n $$d_{o} = \\max\\left(d, v \\Delta t_2\\right) = \\max\\left(d, 2r\\arcsin\\left(\\frac{d}{2r}\\right)\\right).$$ \n \\fi\n We used the $\\max$ function for a similar reason as before. 
After rearranging, we have (\\ref{eq:do}) and (\\ref{eq:dprime}). \n\n \n For calculating the throughput $f_{t}(K,T)$ for $K$ lanes and a given time $T$ after the arrival of the first robot, we get the number of robots reaching the target region by time $T$, then we use Definition \\ref{def:throughput2}. As we assume that the first robot of every lane begins at the same distance from the target, at time $T=0$ we have $K$ robots simultaneously arriving. Then, after $d_{o}\/v$ units of time, we have $K$ more robots arriving and this keeps happening every $d_{o}\/v$ units of time. Denote by $N(K,T)$ the total number of robots that have arrived at the target region from $K$ lanes by time $T$. Thus, we have:\n \\if0 1\n $N(K,T) =K\\left\\lfloor \\frac{T}{\\frac{d_{o}}{v}} + 1\\right\\rfloor = K\\left\\lfloor \\frac{vT}{d_{o}} + 1\\right\\rfloor,$\n \\else\n $$N(K,T) =K\\left\\lfloor \\frac{T}{\\frac{d_{o}}{v}} + 1\\right\\rfloor = K\\left\\lfloor \\frac{vT}{d_{o}} + 1\\right\\rfloor,$$\n \\fi\n so, by Definition \\ref{def:throughput2},\n \\if0 1\n $f_{t}(K,T) = \n \\frac{1}{T}\\left(K\\left\\lfloor \\frac{vT}{d_{o}} +1 \\right\\rfloor - 1\\right).\n $\n \\else\n $$f_{t}(K,T) = \n \\frac{1}{T}\\left(K\\left\\lfloor \\frac{vT}{d_{o}} +1 \\right\\rfloor - 1\\right).\n $$\n \\fi\n \n Since, for every number $x$, $\\lfloor x \\rfloor = x - frac(x)$ with $0 \\le frac(x) < 1$, distributing $\\frac{1}{T}$ over each term we get\n \\if0 1\n $\n f_{t}(K) \n = \\lim_{T \\to \\infty} f_{t}(K,T)\n = \\frac{Kv }{d_{o}}.\n $ \n \\else\n $$\n \\begin{aligned}\n f_{t}(K) &= \\lim_{T \\to \\infty} f_{t}(K,T)\n = \\lim_{T \\to \\infty} \\frac{1}{T}\\left(K\\left( \\frac{vT}{d_{o}} + 1 \\right) - K\\, frac\\left( \\frac{vT}{d_{o}} + 1 \\right) - 1\\right)\\\\\n &= \\lim_{T \\to \\infty} \\frac{K}{T}\\left( \\frac{vT}{d_{o}} + 1 \\right)\n = \\frac{Kv }{d_{o}}.\n \\end{aligned}\n $$ \n \\fi\n\\end{proof}\n\n\\begin{figure}\n \\centering\n 
\\includegraphics[width=0.5\\linewidth]{figs\/KsThroughput.pdf}\n \\caption{Plot of the asymptotic throughput of the touch and run strategy (given by (\\ref{eq:throughputhitandrunlimit})) for some values of $s$ and $d$, in metres, and $v = 1$ m\/s, for the interval of values for $K$ obtained by (\\ref{eq:Kbounds}). }\n \\label{fig:grafKthroughput}\n\\end{figure}\n\nFigure \\ref{fig:grafKthroughput} presents examples of (\\ref{eq:throughputhitandrunlimit}) for some parameters. Observe that the maximum throughput for different values of $s, d$ and $v$ can be found by linear search in the interval obtained by (\\ref{eq:Kbounds}). \n\n\n\\subsubsection{Comparison of the strategies}\n\\label{sec:subseccomparison}\n\nOf all the asymptotic limits, that of the parallel lanes strategy grows slowest with respect to $u = \\frac{s}{d}$, the ratio between the radius of the target region and the minimum distance between the robots. However, its asymptotic value still exceeds the minimum possible asymptotic throughput for hexagonal packing for some values of $u$. In this section, we will make explicit the dependence on the argument $u$ in every throughput function we defined previously to compare them with respect to this ratio. Let $f_{p}(u) = \\displaystyle \\lim_{T\\to \\infty} f_{p}(T,u)$ and $f_{h}^{min}(u)$ be the asymptotic throughput for the parallel lanes strategy and the lower bound on the asymptotic throughput for the hexagonal packing strategy for a ratio $u$, respectively. 
Hence, by Proposition \\ref{methodology:bigtarget:independent_straight:proof},\n\\if0 1\n $ f_{p}(u) = \\lfloor 2u + 1 \\rfloor \\frac{v}{d}, $\n\\else\n $$ f_{p}(u) = \\lfloor 2u + 1 \\rfloor \\frac{v}{d}, $$\n\\fi\nand, by (\\ref{eq:hexthroughputbounds}) using $\\theta = \\pi\/6$ as it minimises the lower bound of $\\displaystyle \\lim_{T\\to \\infty} f(T,\\theta)$ in Proposition \\ref{prop:hexthroughputbounds},\n\\if0 1\n $f_{h}^{min}(u)=\\frac{2 }{\\sqrt{3}} \\left(2u - 1\\right)\\frac{v}{d}.$\n\\else\n $$f_{h}^{min}(u)=\\frac{2 }{\\sqrt{3}} \\left(2u - 1\\right)\\frac{v}{d}.$$\n\\fi\n\n\\begin{proposition}\n There are some $u < \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} }$ such that $f_{p}(u) > f_{h}^{min}(u)$, and for every $u \\ge \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} }, f_{p}(u) \\le f_{h}^{min}(u)$.\n\\end{proposition}\n\\begin{proof}\n For any $u < \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} }$, $(2u + 1)\\frac{v}{d} > f_{h}^{min}(u)$, due to\n\\if0 1\n \\begin{equation}\n (2 u + 1) \\frac{v}{d} > \\frac{2 }{\\sqrt{3}} \\left(2u - 1\\right)\\frac{v}{d} \\Leftrightarrow \n u < \\frac{-1 - \\frac{2 }{\\sqrt{3}}}{2 - \\frac{4 }{\\sqrt{3}}} \n = \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} }. \n \\label{eq:equivuhex1}\n \\end{equation}\n\\else\n \\begin{equation}\n \\begin{aligned}\n &\\phantom{\\Leftrightarrow\\ } (2 u + 1) \\frac{v}{d} > \\frac{2 }{\\sqrt{3}} \\left(2u - 1\\right)\\frac{v}{d} \\Leftrightarrow \n 2 u + 1 > \\frac{2 }{\\sqrt{3}} \\left(2u - 1\\right) \\\\ \n &\\Leftrightarrow 2 u - \\frac{4 }{\\sqrt{3}} u > -1 - \\frac{2 }{\\sqrt{3}} \\Leftrightarrow \n u < \\frac{-1 - \\frac{2 }{\\sqrt{3}}}{2 - \\frac{4 }{\\sqrt{3}}} \n\\ifexpandexplanation\n = \\frac{-\\sqrt{3} - 2 }{2\\sqrt{3} - 4 } \n\\fi\n = \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} }. \n \\end{aligned}\n \\label{eq:equivuhex1}\n \\end{equation}\n\\fi\nWe have that $f_{p}(u) = (2u + 1)\\frac{v}{d}$ when $2u + 1 \\in \\mathds{Z}$. 
Also, as $u < \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} } < 7$, $u$ can be a number satisfying $(2u + 1) = \\lfloor2u + 1\\rfloor$. Thus, there are some values of $u$ such that $f_{p}(u) = \\lfloor2u + 1\\rfloor\\frac{v}{d} > f_{h}^{min}(u)$. \n\nFrom the equivalence in (\\ref{eq:equivuhex1}) and because for any $x$, $\\lfloor x \\rfloor \\le x$, it follows that for any $u \\ge \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} }, f_{p}(u) \\le (2 u + 1) \\frac{v}{d} \\le f_{h}^{min}(u).$ \n\\end{proof}\n\nFigure \\ref{fig:fpfhminfhmax} shows an example of $f_{h}^{min}(u)$, $f_{p}(u)$ and the maximum possible asymptotic throughput of the hexagonal packing $f_{h}^{max}(u) = \\frac{2 }{\\sqrt{3}} \\left(2u + 1\\right)\\frac{v}{d}$ for $u \\in [0,10]$. Observe that, to the left of $u=7$, $f_{p}(u)$ has some values above $f_{h}^{min}(u)$ even though they are below $f_{h}^{max}(u)$ for every $u$. \n\n\n\\begin{figure}\n \\centering \n \\begin{minipage}[t]{0.49\\linewidth}\n \\includegraphics[width=\\linewidth]{figs\/fpfhminfhmax.pdf}\n \\caption{Example of $u$ values such that $f_{h}^{max}(u) > f_{p}(u)$ for $v = 1$ m\/s and $d = 1$ m.}\n \\label{fig:fpfhminfhmax}\n \\end{minipage}\\,\\,\n \\begin{minipage}[t]{0.49\\linewidth} \n \\centering \n \\includegraphics[width=\\linewidth]{figs\/fpfh10000.pdf}\n \\caption{Comparison of $f_{p}(T,u)$ and $f_{h}(T,u)$ for $u \\in [0,7]$, $T=10000$ s, $v = 1$ m\/s and $d = 1$ m.}\n \\label{fig:fhfpLarge}\n \\end{minipage}\n\\end{figure}\n\n\nBecause of this proposition, we are certain that for values of $u \\ge \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3} } \\approx 6.96$ the hexagonal packing strategy at the limit will have throughput at least as high as parallel lanes. However, for values $u < \\frac{\\sqrt{3} + 2 }{4-2 \\sqrt{3}}$, there is the possibility of the parallel lanes strategy being better than hexagonal packing. 
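This crossover can be checked numerically. The sketch below (function names are ours, assuming $v = 1$ m\/s and $d = 1$ m) implements $f_{p}(u)$ and $f_{h}^{min}(u)$ as defined above:

```python
import math

def f_p(u, v=1.0, d=1.0):
    # Parallel lanes asymptotic throughput: floor(2u + 1) * v / d
    return math.floor(2 * u + 1) * v / d

def f_h_min(u, v=1.0, d=1.0):
    # Lower bound on hexagonal packing asymptotic throughput (theta = pi/6)
    return (2 / math.sqrt(3)) * (2 * u - 1) * v / d

# Crossover ratio (sqrt(3) + 2) / (4 - 2 sqrt(3)) ~ 6.96
u_star = (math.sqrt(3) + 2) / (4 - 2 * math.sqrt(3))
```

For example, at $u = 1$ we get $f_{p} = 3 > f_{h}^{min} \approx 1.155$, while at $u = 7 \ge u^{*}$ we get $f_{p} = 15 \le f_{h}^{min} \approx 15.01$.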
As we do not have an exact asymptotic throughput for the hexagonal packing strategy for a given angle $\\theta$, we can numerically find the best $\\theta$ using large values of $T$ on (\\ref{eq:hexthroughput}); then, after choosing $\\theta$, we calculate the numerical approximation of the asymptotic throughput using this fixed $\\theta$ and those $T$ values. This result can be compared with the throughput for the same large values of $T$ for the parallel lanes strategy using (\\ref{eq:parallelT}). Furthermore, in a scenario with the target region only being accessed by a corridor with a finite height, the maximum time $T$ can be inferred from its size; then the exact throughput for this specific value can be calculated by (\\ref{eq:hexthroughput}) and (\\ref{eq:parallelT}) as stated before, but using only this specific value $T$, instead of a set of large values, to decide which strategy is more suitable.\n\nLet $f_{h}(T,\\theta,u)$ and $f_{p}(T,u)$ be (\\ref{eq:hexthroughput}) and (\\ref{eq:parallelT}) making explicit the parameter $u$. Let $\\theta^{*}$ be the outcome from the search of the $\\theta$ which maximises $f_{h}(T,\\theta,u)$ by numeric approximation. Thus, we define $f_{h}(T,u) = f_{h}(T,\\theta^{*},u)$. Figure \\ref{fig:fhfpLarge} illustrates the result of the procedure mentioned above for $T = 10000$ for 100 equally spaced values of $u\\in[0,7]$ and seeking the maximum throughput using 1000 evenly spaced points between $\\lbrack 0,\\pi\/3 \\rparen$ to find the best $\\theta$ for the hexagonal packing strategy. Then, we compare it with the result for $\\theta=\\pi\/6$ as we explained previously when we discussed Figures \\ref{fig:plottheta1}-\\ref{fig:plottheta4}. Observe that for $u \\in [0.5,0.9]$ there are some values for which $f_{h}(10000,u) < f_{p}(10000,u)$. Figure \\ref{fig:fhfpZoom} shows this with 100 equally spaced values of $u \\in [0.4,1]$ for different values of $v$. 
This occurs because, for such values of $u$, using square packing fits more robots inside the circle over time than hexagonal packing, as we will show in Section \\ref{sec:hexparexperiments}.\n\n\n\n\n\\begin{figure}[t!]\n \\centering\n \\subfloat[$v=0.1$ m\/s]{\\includegraphics[width=0.49\\linewidth]{figs\/fpfhZoomv01.pdf}}\n \\subfloat[$v=1$ m\/s]{\\includegraphics[width=0.49\\linewidth]{figs\/fpfhZoomv10.pdf}}\n \\caption{Comparison of $f_{p}$ and $f_{h}$ for $u \\in [0.4,1]$, $T=10000$ s, $v \\in \\{0.1,1\\}$ m\/s and $d = 1$ m. The difference in the lines of $f_{h}$ is due to $\\theta^{*}$ being different for each value of $v$.}\n \\label{fig:fhfpZoom}\n\\end{figure}\n\nAdditionally, the asymptotic throughput of the touch and run strategy, $f_{t}(u) = \\displaystyle \\lim_{T \\to \\infty} f_{t}(T,u)$, for higher values of $u$ is greater than the maximum possible asymptotic value of the hexagonal packing $f_{h}^{max}(u) =\\frac{2 }{\\sqrt{3}} \\left(2u + 1\\right)\\frac{v}{d}$, as shown later by numeric experimentation. Before presenting this result, we need to verify which values of $u$ are allowed by $f_{t}(u)$ and to express the asymptotic throughput of the touch and run strategy from Proposition \\ref{prop:throughputsdk} in terms of the ratio $u$. \n\n\nFrom Proposition \\ref{prop:Kboundsrk} we have that the possible number of lanes is $K \\in \\{3, \\dots, K(u)\\}$ with \n$\n K (u) = \\Bigl\\lfloor \\frac{\\pi}{\\arcsin \\left( \\frac{1}{2u} \\right)}\\Bigr\\rfloor \n$. 
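This bound on the number of lanes can be evaluated directly; the sketch below uses `K_max` as our own name for $K(u)$:

```python
import math

def K_max(u):
    # K(u) = floor(pi / arcsin(1 / (2u))), defined for u >= 1/2
    return math.floor(math.pi / math.asin(1 / (2 * u)))
```

For instance, `K_max(3)` gives 18 lanes and `K_max(6)` gives 37 lanes.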
Consequently,\n $f_{t}(u)$ is only defined for $u \\ge \\frac{1}{\\sqrt{3}}.$ In fact,\n by Proposition \\ref{prop:Kboundsrk}, $K \\ge 3$, then $\\frac{\\pi}{\\arcsin\\left(\\frac{1}{2u}\\right)} \\ge \\Bigl\\lfloor\\frac{\\pi}{\\arcsin\\left(\\frac{1}{2u}\\right)}\\Bigr\\rfloor \\ge 3 \\Rightarrow \n \\frac{\\pi}{3} \\ge \\arcsin\\left(\\frac{1}{2u}\\right) \\Leftrightarrow\n \\sin\\left(\\frac{\\pi}{3}\\right) \\ge \\frac{1}{2u} \\Leftrightarrow\n \\frac{\\sqrt{3}}{2} \\ge \\frac{1}{2u} \\Leftrightarrow\n u \\ge \\frac{1}{\\sqrt{3}}.$ \n \n\nWe show below the algebraic manipulations for expressing the asymptotic throughput of the touch and run strategy from Proposition \\ref{prop:throughputsdk} in terms of the ratio $u$. The asymptotic throughput expressed in (\\ref{eq:throughputhitandrunlimit}) is\n\\ifexpandexplanation\n \\begin{equation}\n \\begin{aligned}\n \\frac{Kv }{d_{o}} \n &= \\frac{K}{\\frac{d_{o}}{d}}\\frac{v}{d} \n = \\frac{K}{\\frac{\\max(d,d')}{d}}\\frac{v}{d} \n & [\\text{(\\ref{eq:do})}] \\\\\n &= \\frac{K}{\\max(1,\\frac{d'}{d})}\\frac{v}{d},\n \\end{aligned}\n \\label{eq:tempthrcurve}\n \\end{equation}\n\\else\n \\begin{equation}\n \\frac{Kv }{d_{o}} \n = \\frac{K}{\\frac{d_{o}}{d}}\\frac{v}{d} \n = \\frac{K}{\\frac{\\max(d,d')}{d}}\\frac{v}{d} \n = \\frac{K}{\\max(1,\\frac{d'}{d})}\\frac{v}{d},\n \\label{eq:tempthrcurve}\n \\end{equation}\n\\fi\nfor an integer $K \\in \\{3, \\dots, K(u)\\}$. 
From (\\ref{eq:ak}), $\\alpha = \\frac{2 \\pi}{K}$, and, from (\\ref{eq:relationangles}),\n\\if0 1\n $\n \\frac{r}{d} = \\frac{\\frac{s}{d} \\sin(\\alpha \/ 2) - \\frac{d}{2d}}{1 - \\sin(\\alpha \/ 2)} = \\frac{u \\sin(\\frac{\\pi}{K}) - \\frac{1}{2}}{1 - \\sin(\\frac{\\pi}{K})} \\stackrel{\\text{def}}{=} r(u,K),\n $\n\\else\n $$\n \\begin{aligned}\n \\frac{r}{d} = \\frac{\\frac{s}{d} \\sin(\\alpha \/ 2) - \\frac{d}{2d}}{1 - \\sin(\\alpha \/ 2)} = \\frac{u \\sin(\\frac{\\pi}{K}) - \\frac{1}{2}}{1 - \\sin(\\frac{\\pi}{K})} \\stackrel{\\text{def}}{=} r(u,K),\n \\end{aligned}\n $$\n\\fi\nresulting in\n\\begin{equation}\n \\begin{aligned}\n \\frac{d'}{d} \n &= \n \\begin{cases}\n \\frac{r}{d} (\\pi - \\alpha) + \\frac{d - 2 r \\cos(\\alpha \/ 2)} {d\\sin(\\alpha \/ 2)}, & \\text{ if } 2 r \\cos(\\alpha \/ 2) < d,\\\\\n 2 \\frac{r}{d} \\arcsin\\left( \\frac{d}{2 r}\\right), & \\text{ otherwise, }\n \\end{cases}\n & [\\text{by (\\ref{eq:dprime})}]\n \\\\\n &= \n \\begin{cases}\n \\frac{r}{d} \\left(\\pi - \\frac{2 \\pi}{K}\\right) + \\frac{1 - 2 \\frac{r}{d} \\cos(\\frac{\\pi}{K})} {\\sin(\\frac{\\pi}{K})}, & \\text{ if } 2 \\frac{r}{d} \\cos(\\frac{ \\pi}{K} ) < 1,\\\\\n 2 \\frac{r}{d} \\arcsin\\left( \\left(2 \\frac{r}{d}\\right)^{-1}\\right), & \\text{ otherwise, }\n \\end{cases} \n \\\\ &= \n \\begin{cases}\n r(u,K) \\left(\\pi - \\frac{2 \\pi}{K}\\right) + \\frac{1 - 2 r(u,K) \\cos(\\frac{\\pi}{K})} {\\sin(\\frac{\\pi}{K})}, \\\\ \n \\hspace{3.3cm} \\text{ if } 2 r(u,K) \\cos(\\frac{ \\pi}{K} ) < 1,\\\\\n 2 r(u,K) \\arcsin\\left( \\frac{1}{2 r(u,K)}\\right), \\text{ otherwise, }\n \\end{cases}\n \\\\ &\\stackrel{\\text{def}}{=} d'(u,K). 
\n \\end{aligned}\n \\label{eq:dprimed}\n\\end{equation}\nThus, from (\\ref{eq:tempthrcurve}) and (\\ref{eq:dprimed}), \n$ f_{t}(u,K) = \\frac{K}{\\max(1,d'(u,K))}\\frac{v}{d} $ and the upper throughput for the touch and run strategy in terms of $u$ is given by \n\\if0 1\n $\n f_{t}(u) = \\max_{K \\in \\{3,\\dots,K(u)\\}} f_{t}(u,K) = \\max_{K \\in \\{3,\\dots,K(u)\\}} \\frac{K}{\\max(1,d'(u,K))}\\frac{v}{d} \n = \\frac{K^{*}(u)}{\\max(1,d'(u,K^{*}(u)))}\\frac{v}{d},\n $ \n\\else\n $$ \n \\begin{aligned} \n f_{t}(u) &= \\max_{K \\in \\{3,\\dots,K(u)\\}} f_{t}(u,K) = \\max_{K \\in \\{3,\\dots,K(u)\\}} \\frac{K}{\\max(1,d'(u,K))}\\frac{v}{d} \\\\\n &= \\frac{K^{*}(u)}{\\max(1,d'(u,K^{*}(u)))}\\frac{v}{d},\n \\end{aligned}\n $$ \n\\fi\nfor some function $K^{*}(u)$ that finds this maximum in $\\{3,\\dots,K(u)\\}$. Similarly, for a fixed maximum time $T$, we have by (\\ref{eq:throughputhitandruntime}) $f_{t}(T,u) = \\displaystyle \\max_{K \\in \\{3,\\dots,K(u)\\}} f_{t}(K,T,u)$. \n\nFigure \\ref{fig:numhigherftfh} presents a comparison of the asymptotic throughput $f_{t}(u)$ and the lower and upper values of the asymptotic throughput of the hexagonal packing $f_{h}^{min}(u)$ and $f_{h}^{max}(u)$ for values of $u$ ranging from $1\/\\sqrt{3}$ to 1000. Observe that the asymptotic throughput of the touch and run strategy is greater than the maximum possible asymptotic throughput of the hexagonal packing strategy for almost all values of $u$, except for some in $(1.12,1.25)$ (Figure \\ref{fig:numhigherftfh} (b)).\n\n\\begin{figure}[t!]\n \\centering \n \\subfloat[\\text{$u \\in [1\/\\sqrt{3}, 1000]$}]{\\includegraphics[width=0.49\\linewidth]{figs\/ftabovefh.pdf}}\n \\subfloat[\\text{$u \\in [1\/\\sqrt{3}, 1.25]$}]{\\includegraphics[width=0.455\\linewidth]{figs\/ftabovefh2.pdf}}\n \\caption{Graph varying $u$ for $f_{h}^{min}(u)$, $f_{h}^{max}(u)$ and $f_{t}(u)$ with $v = 1$ m\/s and $d = 1$ m for different intervals of $u$. 
In (a), $f_{h}^{min}(u)$ and $f_{h}^{max}(u)$ almost overlap. In (b), $f_{t}(u) > f_{h}^{max}(u)$ for all $u$, except in an interval within (1.12,1.25).} \n \\label{fig:numhigherftfh}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.49\\linewidth]{figs\/ftfh10000above.pdf}\n \\caption{Example for $T=10000$ s, $v = 1$ m\/s, $d = 1$ m and 100 equally spaced points of $u \\in [1\/\\sqrt{3},7]$. We have $f_{h}(T,u) < f_{t}(T,u)$, albeit $f_{h}^{max}(u) \\ge f_{t}(T,u)$ for a few values of $u < 1.5$.}\n \\label{fig:ftbelowfh}\n\\end{figure}\n\nAdditionally, we performed numerical experiments for $f_{t}(T,u)$ and $f_{h}(T,u)$ using fixed time $T = 10000$ in (\\ref{eq:throughputhitandruntime}), (\\ref{eq:hexthroughput}) and $u \\in [1\/\\sqrt{3}, 7]$. For finding $\\theta^{*}$, we use the same procedure described before to compare $f_{h}(T,u)$ and $f_{p}(T,u)$. \nFigure \\ref{fig:ftbelowfh} shows the result. It suggests the touch and run strategy has higher throughput than hexagonal packing for large values of $T$. Although hexagonal packing has lower asymptotic throughput than the touch and run for almost all $u$ values, it is suitable for $u < \\frac{1}{\\sqrt{3}}$ whenever it surpasses the parallel lanes strategy.\n\nFor real-world applications, assuming the robots maintain constant velocity and constant distance from each other, the hexagonal packing strategy is adequate when the target is placed in a constrained region, for example, with walls in north and south positions. In this example, the number of lanes used in the touch and run strategy would be reduced because of the surrounding walls. 
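The comparison above can be reproduced with a short numeric sketch (function names are ours, assuming $v = 1$ m\/s and $d = 1$ m), following the expressions for $r(u,K)$, $d'(u,K)$ and $f_{t}(u,K)$ derived earlier:

```python
import math

def r_ratio(u, K):
    # r(u, K) = (u sin(pi/K) - 1/2) / (1 - sin(pi/K))
    s = math.sin(math.pi / K)
    return (u * s - 0.5) / (1 - s)

def d_prime_ratio(u, K):
    # d'(u, K), choosing between the two cases of the derivation above
    r = r_ratio(u, K)
    h = math.pi / K  # alpha / 2
    if 2 * r * math.cos(h) < 1:
        return r * (math.pi - 2 * h) + (1 - 2 * r * math.cos(h)) / math.sin(h)
    return 2 * r * math.asin(1 / (2 * r))

def f_t(u, v=1.0, d=1.0):
    # Maximise K / max(1, d'(u, K)) over the admissible K in {3, ..., K(u)}
    K_u = math.floor(math.pi / math.asin(1 / (2 * u)))
    return max(K / max(1.0, d_prime_ratio(u, K))
               for K in range(3, K_u + 1)) * v / d
```

For example, at $u = 3$ this gives $f_{t}(3) \approx 8.58$ (attained at $K = 10$), above $f_{h}^{max}(3) = \frac{2}{\sqrt{3}} \cdot 7 \approx 8.08$, consistent with the comparison discussed above.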
In an unconstrained scenario, if the ratio $u$ and the maximum time $T$ are known, the throughput value of the hexagonal packing strategy from (\\ref{eq:hexthroughput}) (for the $\\theta$ which maximises it) can be compared with the throughput of the touch and run strategy from (\\ref{eq:throughputhitandruntime}) (for $K^{*}(u)$) to choose which strategy should be applied. However, assuming constant velocity and constant distance between robots in a swarm is not practical, because the robots influence each other's movement in the environment. Hence, we used these strategies as inspiration to propose novel algorithms based on potential fields for robotic swarms in \\citep{arxivAlgorithms}.\n\n\\section{Experiments and Results}\n\\label{sec:experimentresults}\n\nIn order to evaluate our approach, we executed several simulations using the Stage robot simulator \\citep{PlayerStage} to test the equations presented in the theoretical section (Section \\ref{sec:theoreticalresults}). \nHyperlinks to videos of the executions are available in the captions of the corresponding figures. They are in real time, so the reader can compare the times and screenshots presented in the figures in this section with those in the supplied videos.\\footnote{The source codes of each experimented strategy are in \\url{https:\/\/github.com\/yuri-tavares\/swarm-strategies}.}\n\nWe ran experiments for all strategies considering $s > 0$. We could not run experiments for point-like targets because a fixed point is nearly impossible for a moving robot to reach in Stage simulations, as that would require exact synchronisation between the simulator's position-sampling frequency and the robot velocity. Hence, we must use a circular area with a radius $s>0$ around the target to identify that a robot reached it. 
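The sampling issue above can be illustrated with a minimal sketch (a hypothetical 1-D robot whose position is sampled at discrete steps): an exact point target ($s = 0$) falls between samples and is missed, while an area with $s > 0$ is detected.

```python
def reaches(target, s, v=1.0, dt=0.1, t_max=100.0):
    # 1-D robot starting at 0 moving at speed v; position sampled every dt seconds.
    # Returns True if any sampled position lies within s of the target.
    x, t = 0.0, 0.0
    while t <= t_max:
        if abs(x - target) <= s:
            return True
        x += v * dt
        t += dt
    return False
```

With `target = 9.65` the sampled positions step in increments of $0.1$, so `reaches(9.65, 0.0)` is `False`, while `reaches(9.65, 0.3)` is `True`.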
After presenting the experiments and results for all strategies for a circular target region with radius $s > 0$, we compare them experimentally considering the analysis previously discussed in Section \\ref{sec:subseccomparison}.\n\nWe saved for each robot its arrival time in milliseconds since the start of the experiment. We subtracted from the arrival time of every robot the arrival time of the first robot. By doing so, the experiment is assumed to begin at time $T=0$ without worrying about the initial inertia. After this, we registered the number of robots ($N$) for each time value ($T$).\n\nTo alleviate some of the numerical errors caused by the floating-point representation, we rounded to the 13th decimal place before applying the floor and ceiling functions in the equations presented. For example, on current computers, using double variables in C or float in Python, dividing 9.6 by 1.6 yields 5.999999999999999 when formatted with 15 decimal places, although the exact result is 6. Applying the floor function to the computed result would then give 5 instead of the expected 6.\n\nFor all experiments in this section, the robots are distant from each other by $d = 1$ m. In the figures of this section, black robots have reached the target, and red ones have not. Also, we did not repeat the runs for the points on the graphs of this section because the velocity and initial positions are constant, so there is no random aspect, and we would obtain the same results over different runs for any particular point.\n \n\n\\subsection{Compact lanes}\nFor compact lanes simulations, we used $v=1$ m\/s, and the first robot to reach the target is at the bottom lane and starts at the target. For a target area radius $s$, such that $0 < s < \\sqrt{3}d\/4$, we used $s = 0.3$ m, and for $\\sqrt{3}d\/4 \\le s < d\/2$, we chose $s = 0.45$ m. 
Figure \\ref{fig:exptriangulartiling2} shows screenshots of the simulation using $s = 0.3$ m during $T = 7.1$ s and Figure \\ref{fig:exptriangulartiling1} for $s = 0.45$ m and $T = 10.1$ s.\n\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}[t]{0.48\\linewidth} \n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=\\linewidth]{figs\/s3d1tritil-0.png}}\\\\\n \\subfloat[After 2.7 s.]{\\includegraphics[width=\\linewidth]{figs\/s3d1tritil-1.png}}\\\\\n \\subfloat[After 6.7 s.]{\\includegraphics[width=\\linewidth]{figs\/s3d1tritil-2.png}}\\\\\n \\subfloat[7.1 s: ending of the simulation.]{\\includegraphics[width=\\linewidth]{figs\/s3d1tritil-3.png}}\n \\caption{Simulation on Stage for compact lanes strategy using $s = 0.3$ m, $d = 1$ m during $T = 7.1$ s. Available on \\url{https:\/\/youtu.be\/e1cWJzWhQmQ}.}\n \\label{fig:exptriangulartiling2}\n \\end{minipage}\\quad\n \\begin{minipage}[t]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=\\linewidth]{figs\/s45d1tritil-0.png}}\\\\\n \\subfloat[After 3.5 s.]{\\includegraphics[width=\\linewidth]{figs\/s45d1tritil-1.png}}\\\\\n \\subfloat[After 7 s.]{\\includegraphics[width=\\linewidth]{figs\/s45d1tritil-2.png}}\\\\\n \\subfloat[10.1 s: ending of the simulation.]{\\includegraphics[width=\\linewidth]{figs\/s45d1tritil-3.png}}\n \\caption{Simulation on Stage for compact lanes strategy using $s = 0.45$ m, $d = 1$ m during $T = 10.1$ s. Available on \\url{https:\/\/youtu.be\/9OXGC1w83j0}.}\n \\label{fig:exptriangulartiling1}\n \\end{minipage}\n\\end{figure}\n\nWe ran experiments in order to verify the throughput for a given time and the asymptotic throughput calculated by (\\ref{eq:giventime1}) to (\\ref{eq:limit2}). Figure \\ref{fig:results1qw} shows the throughput for different values of time obtained by the experiments in Stage, i.e. 
$(N-1)\/T$, in comparison with the calculated value by (\\ref{eq:giventime1}) and (\\ref{eq:limit1}) for $s=0.3$ m and by (\\ref{eq:giventime2}) and (\\ref{eq:limit2}) for $s=0.45$ m. These figures confirm that the equations presented in the theoretical section agree with the throughput obtained by simulations. \n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.5\\linewidth]{figs\/ThroughputTriangularExperiments.pdf}\n \\caption{Throughput versus time plot for the compact lanes strategy for different values of $s$. ``Simulation'' stands for the data obtained from Stage, ``Instantaneous'' for the equations of the throughput for a given time calculated in (\\ref{eq:giventime1}) and (\\ref{eq:giventime2}), and ``Asymptotic'' for the asymptotic throughput obtained from (\\ref{eq:limit1}) and (\\ref{eq:limit2}). The results of these equations match the data obtained from simulations.} \\label{fig:results1qw} \n\\end{figure}\n\n\\subsection{Parallel lanes}\n\nWe experimented with the parallel lanes strategy for $v=1$ m\/s and $s \\in \\{3,6\\}$ m. Figures \\ref{fig:exppar1} and \\ref{fig:exppar2} present screenshots from executions using these parameters.\n\n\\begin{figure}[t!]\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.49\\linewidth]{figs\/pars3Stage000.png}}\\,\n \\subfloat[After 6.5 s.]{\\includegraphics[width=0.49\\linewidth]{figs\/pars3Stage065.png}}\\\\\n \\subfloat[13 s: ending of the simulation.]{\\includegraphics[width=0.49\\linewidth]{figs\/pars3Stage130.png}}\n \\caption{Simulation on Stage for parallel lanes strategy using $s = 3$ m, $d = 1$ m during $T = 13$ s. 
Available on \\url{https:\/\/youtu.be\/2Y1RHc9YVaw}.}\n \\label{fig:exppar1}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.49\\linewidth]{figs\/pars6Stage000.png}}\\,\n \\subfloat[After 8 s.]{\\includegraphics[width=0.49\\linewidth]{figs\/pars6Stage080.png}}\\\\\n \\subfloat[16 s: ending of the simulation.]{\\includegraphics[width=0.49\\linewidth]{figs\/pars6Stage160.png}}\n \\caption{Simulation on Stage for parallel lanes strategy using $s = 6$ m, $d = 1$ m during $T = 16$ s. Available on \\url{https:\/\/youtu.be\/TVdka65fi1g}.}\n \\label{fig:exppar2}\n\\end{figure}\n\n\nIn order to verify the throughput for a given time calculated by (\\ref{eq:parallelT}) and its asymptotic value as in (\\ref{eq:parallelLimit}), we compare them with the throughput obtained from Stage simulations. Figure \\ref{fig:tppar} (a) presents these comparisons. Observe that the values from (\\ref{eq:parallelT}) are almost aligned with the values from simulation, except for some points. The difference in those points is due to the floating-point error discussed at the beginning of Section \\ref{sec:experimentresults} that happens in the division before the use of the floor or ceiling functions in (\\ref{eq:parallelT}). As expected, the values of (\\ref{eq:parallelT}) approximate (\\ref{eq:parallelLimit}) as time passes. As the running time is proportional to the number of robots in our experiments, observe that higher throughput per time is reflected as a lower arrival time of the last robot per number of robots (Figure \\ref{fig:tppar} (b)). 
Also, note that in the graph of arrival time per number of robots, the arrival time tends to infinity as the number of robots grows.\n\n\\begin{figure}[t!]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.49\\linewidth]{figs\/ThParal.pdf}}\n \\subfloat[]{\\includegraphics[width=0.49\\linewidth]{figs\/TimeParal.pdf}}\n \\caption{(a) Throughput versus time plot for parallel lanes strategy for $s \\in \\{3,6\\}$ m. ``Simulation'' stands for the data obtained from Stage, ``Instantaneous'' for the equations of the throughput for a given time calculated in (\\ref{eq:parallelT}), and ``Asymptotic'' for the asymptotic throughput obtained from (\\ref{eq:parallelLimit}). (b) Number of robots versus time of arrival of the last robot for the same data.}\n \\label{fig:tppar} \n\\end{figure}\n\n\\subsection{Hexagonal packing}\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t0-000.png}}\\\\\n \\subfloat[After 4.9 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t0-049.png}}\\\\\n \\subfloat[9.8 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t0-098.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 3$ m, $\\theta = 0$ during $T = 9.8$ s. Available on \\url{https:\/\/youtu.be\/6_LgZWFOWd0}.}\n \\label{fig:exphex1}\n \\end{minipage}\\quad\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t15-000.png}}\\\\\n \\subfloat[After 5 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t15-050.png}}\\\\\n \\subfloat[10 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t15-100.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 3$ m, $\\theta = \\pi\/12$ during $T = 10$ s. 
Available on \\url{https:\/\/youtu.be\/Wji8XlSQJBQ}.}\n \\end{minipage}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t30-000.png}}\\\\\n \\subfloat[After 4.9 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t30-049.png}}\\\\\n \\subfloat[10 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t30-100.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 3$ m, $\\theta = \\pi\/6$ during $T = 10$ s. Available on \\url{https:\/\/youtu.be\/szOBU8no_sU}.}\n \\end{minipage}\\quad\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t50-000.png}}\\\\\n \\subfloat[After 4.9 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t50-049.png}}\\\\\n \\subfloat[10 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs3t50-100.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 3$ m, $\\theta = 5\\pi\/18$ during $T = 10$ s. Available on \\url{https:\/\/youtu.be\/jRLgaF7Te1Q}.}\n \\end{minipage}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t0-000.png}}\\\\\n \\subfloat[After 4.9 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t0-049.png}}\\\\\n \\subfloat[9.8 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t0-098.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 6$ m, $\\theta = 0$ during $T = 9.8$ s. 
Available on \\url{https:\/\/youtu.be\/v0FK8YpGrL8}.}\n \\end{minipage}\\quad\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t15-000.png}}\\\\\n \\subfloat[After 5 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t15-050.png}}\\\\\n \\subfloat[10.1 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t15-101.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 6$ m, $\\theta = \\pi\/12$ during $T = 10.1$ s. Available on \\url{https:\/\/youtu.be\/OBS_HADH5OE}.}\n \\end{minipage}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t30-000.png}}\\\\\n \\subfloat[After 4.9 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t30-049.png}}\\\\\n \\subfloat[10 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t30-100.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 6$ m, $\\theta = \\pi\/6$ during $T = 10$ s. Available on \\url{https:\/\/youtu.be\/-KX7ziOp8b0}.}\n \\end{minipage}\\quad\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t50-000.png}}\\\\\n \\subfloat[After 4.9 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t50-049.png}}\\\\\n \\subfloat[10 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hexs6t50-100.png}}\n \\caption{Simulation on Stage for hexagonal packing strategy using $s = 6$ m, $\\theta = 5\\pi\/18$ during $T = 10$ s. 
Available on \\url{https:\/\/youtu.be\/GRYRnH5CrhU}.}\n \\label{fig:exphex8}\n \\end{minipage}\n\\end{figure}\n\nWe experimented with the hexagonal packing strategy for $v=1$ m\/s, and the combination of the following variables and values: $s \\in \\{3,6\\}$ m and $\\theta \\in \\{0, \\pi\/12, \\pi\/6, 5\\pi\/18\\}$. Figures \\ref{fig:exphex1}-\\ref{fig:exphex8} present screenshots from executions using these parameters.\n\n\\begin{figure}[t!]\n \\subfloat[$\\theta = 0$]{\\includegraphics[width=0.495\\linewidth]{figs\/ThoughputHexExp0deg.pdf}}\n \\subfloat[$\\theta = \\pi\/12$]{\\includegraphics[width=0.495\\linewidth]{figs\/ThoughputHexExp15deg.pdf}}\\\\\n \\subfloat[$\\theta = \\pi\/6$]{\\includegraphics[width=0.495\\linewidth]{figs\/ThoughputHexExp30deg.pdf}}\n \\subfloat[$\\theta = 5\\pi\/18$]{\\includegraphics[width=0.495\\linewidth]{figs\/ThoughputHexExp50deg.pdf}}\n \\caption{Throughput versus time comparison of the simulation on Stage, upper and lower bounds on the asymptotic value and the theoretical instantaneous equation for the throughput for a given hexagonal packing angle for different values of $s$ and $\\theta$.}\n \\label{fig:tphex}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\subfloat[$\\theta = 0$]{\\includegraphics[width=0.495\\linewidth]{figs\/TimeHexExp0deg.pdf}}\n \\subfloat[$\\theta = \\pi\/12$]{\\includegraphics[width=0.495\\linewidth]{figs\/TimeHexExp15deg.pdf}}\\\\\n \\subfloat[$\\theta = \\pi\/6$]{\\includegraphics[width=0.495\\linewidth]{figs\/TimeHexExp30deg.pdf}}\n \\subfloat[$\\theta = 5\\pi\/18$]{\\includegraphics[width=0.495\\linewidth]{figs\/TimeHexExp50deg.pdf}}\n \\caption{Time of arrival at the target of the last robot versus number of robots for the same simulations in Figure \\ref{fig:tphex}.}\n \\label{fig:timehex}\n\\end{figure}\n\nIn order to evaluate the throughput for a given time and angle calculated in (\\ref{eq:hexthroughput}) and the bounds on the asymptotic throughput as in (\\ref{eq:hexthroughputbounds}), we compare them with the 
throughput obtained from Stage simulations. Figure \\ref{fig:tphex} presents these comparisons. Observe that the values from (\\ref{eq:hexthroughput}) are almost aligned with the values from simulation, except for some points. The difference in those points is also due to the floating-point error -- discussed in the introduction of Section \\ref{sec:experimentresults} -- over the divisions and trigonometric functions done before the use of the floor or ceiling functions in (\\ref{eq:hexthroughput}). Also, due to floating-point error, in our computation of (\\ref{eq:whereIusedepsilon}), instead of using $\\min(L(x_{h}),C_{2}(x_{h})) = \\lfloor L(x_{h})\\rfloor$, we check $\\vert \\min(L(x_{h}),C_{2}(x_{h})) - \\lfloor L(x_{h})\\rfloor\\vert < 0.001$.\n\nAdditionally, note in Figure \\ref{fig:tphex} that for any value of $s$ or $\\theta$, as time passes, the values of (\\ref{eq:hexthroughput}) asymptotically approach some value inside the bounds given by (\\ref{eq:hexthroughputbounds}). Although the exact asymptotic value could not be given for the presented parameters, the experiments show that the bounds are correct. In the same manner as occurred for parallel lanes, higher throughput per time is reflected as a lower arrival time of the last robot per number of robots, and this time tends to infinity as the number of robots grows (Figure \\ref{fig:timehex}). \n\n\\subsection{Touch and run}\n\\label{sec:hitandrunexperiments}\n\nFor the touch and run strategy, the robots maintain their linear velocity throughout the experiment and turn at a fixed constant rotational speed $\\omega = v\/r$, for $r$ obtained from (\\ref{eq:relationangles}), when they are at distance $d_{r}$ from the target centre, with $d_{r}$ obtained from (\\ref{eq:distTargetToTurn}). After passing through the target region, when they are again at distance $d_{r}$ from the target centre, they leave the curved path, stop turning and follow the linear exiting lane. 
On that lane, to stabilise their path following, the robots follow the queue using a turning speed equal to $\\gamma - \\beta$, where $\\beta$ is the angle of the exiting lane and $\\gamma$ is the robot orientation angle, both in relation to the $x$-axis.\n\nWe used $v=0.1$ m\/s because the robots we used on Stage have a maximum turning speed of $\\pi\/2$ rad\/s. Choosing a low velocity allows a larger number of lanes $K$, since the turning speed is $\\omega = v\/r$, and $r$ varies with $K$ and $s$. In addition, a low linear speed reduces the time measurement error, since the positions of the robots are sampled every $0.1$ s by the Stage simulator. Their positions are not guaranteed to be obtained at the exact moment they are at distance $d_{r}$ from the target centre, so this also yields error in the measured arrival times at the target area.\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits3K10-0000.png}}\\\\\n \\subfloat[After 114 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits3K10-1140.png}}\\\\\n \\subfloat[228 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits3K10-2280.png}}\n \\caption{Simulation on Stage for the touch and run strategy using $s = 3$ m, $K=10$ during $T = 228$ s at $v = 0.1$ m\/s. 
Available on \\url{https:\/\/youtu.be\/Z-ruOMYFyBU}.}\n \\label{fig:expit1}\n \\end{minipage}\\quad\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits3K16-0000.png}}\\\\\n \\subfloat[After 261.6 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits3K16-2616.png}}\\\\\n \\subfloat[523.1 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits3K16-5231.png}}\n \\caption{Simulation on Stage for the touch and run strategy using $s = 3$ m, $K = 16$ during $T = 523.1$ s at $v = 0.1$ m\/s. Available on \\url{https:\/\/youtu.be\/FvAqv0zD4_Y}.}\n \\end{minipage}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits6K19-0000.png}}\\\\\n \\subfloat[After 63.6 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits6K19-0636.png}}\\\\\n \\subfloat[127.4 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits6K19-1274.png}}\n \\caption{Simulation on Stage for the touch and run strategy using $s = 6$ m, $K=19$ during $T = 127.4$ s at $v = 0.1$ m\/s. Available on \\url{https:\/\/youtu.be\/xJVoVCIjX5k}.}\n \\end{minipage}\\quad\n \\begin{minipage}[b]{0.48\\linewidth}\n \\centering\n \\subfloat[0 s: beginning of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits6K33-0000.png}}\\\\\n \\subfloat[After 274 s.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits6K33-2740.png}}\\\\\n \\subfloat[548 s: ending of the simulation.]{\\includegraphics[width=0.7\\linewidth]{figs\/hits6K33-5480.png}}\n \\caption{Simulation on Stage for the touch and run strategy using $s = 6$ m, $K=33$ during $T = 548$ s at $v = 0.1$ m\/s. 
Available on \\url{https:\/\/youtu.be\/-xZz84npKV4}.}\n \\label{fig:expit4}\n \\end{minipage}\n\\end{figure}\n\n\nWe used $s \\in \\{3, 6\\}$ m and all allowed $K$ values to experiment with the touch and run strategy with 200 robots. By (\\ref{eq:Kbounds}), for the former $s$ value we have a maximum $K = 18$ and for the latter, $K = 37$. However, as the maximum angular velocity is limited, the range of allowed $K$ values for $s=3$ m is reduced to $\\{3,\\dots,16\\}$ and for $s = 6$ m, to $\\{3,\\dots,33\\}$. Figures \\ref{fig:expit1}-\\ref{fig:expit4} present screenshots from executions using some of these parameters. The circle in the middle of these figures is the target region, and the lines on which the robots travel represent the curved trajectories they follow under the touch and run strategy. \n\n\\begin{figure}[t!]\n \\subfloat[$s = 3$ m and $K \\in \\{10,16\\}$.]{\\includegraphics[width=0.5\\linewidth]{figs\/thCoolPaths3.pdf}}\n \\subfloat[$s = 6$ m and $K \\in \\{19,33\\}$.]{\\includegraphics[width=0.5\\linewidth]{figs\/thCoolPaths6.pdf}}\\\\\n \\caption{Throughput versus time comparison of the touch and run simulation on Stage with asymptotic values and the theoretical instantaneous equation for the throughput for different values of $s$ and $K$.}\n \\label{fig:tpit}\n\\end{figure}\n\n\n\\begin{figure}[t!]\n \\subfloat[$s = 3$ m and $K \\in \\{10,16\\}$.]{\\includegraphics[width=0.5\\linewidth]{figs\/timeCoolPaths3.pdf}}\n \\subfloat[$s = 6$ m and $K \\in \\{19,33\\}$.]{\\includegraphics[width=0.5\\linewidth]{figs\/timeCoolPaths6.pdf}}\\\\\n \\caption{Time of arrival at the target of the last robot versus number of robots for the same simulations in Figure \\ref{fig:tpit}.}\n \\label{fig:tsit21}\n\\end{figure}\n\nFigure \\ref{fig:tpit} presents the comparison of (\\ref{eq:throughputhitandruntime}) and (\\ref{eq:throughputhitandrunlimit}) for the throughput for a given time, the bound on its asymptotic value and the one obtained from Stage simulations. 
Although we fixed the total number of robots and the linear velocity, the arrival times and the number of robots that reach the target change for each parameter used in this figure, since the distance between the robots per lane varies and the number of robots arriving simultaneously is, in most cases, the number of lanes. Also, we did not plot the first two arrival times because the first one is zero, yielding an indeterminate value in the throughput definition, and the second one is still too small relative to the others, making the resulting throughput disproportionately high and thus producing an unreadable graph.\n\nObserve that the values from (\\ref{eq:throughputhitandruntime}) are almost equal to the values from simulation, except for some points. The difference in those points is due to the floating-point error in the divisions and trigonometric functions before applying the floor function in (\\ref{eq:throughputhitandruntime}) -- already mentioned in the introduction of Section \\ref{sec:experimentresults} -- as well as the time measurement errors for the arrival of the robots on the target area, as explained at the beginning of this section. As expected, the values of (\\ref{eq:throughputhitandruntime}) tend towards the asymptotic value given by (\\ref{eq:throughputhitandrunlimit}). Unlike the previous strategies, notice that, for small values of $T$, the throughput is higher than for larger ones because, for a fixed $K$, (\\ref{eq:throughputhitandruntime}) is decreasing in $T$. As occurred for the previous strategies, higher throughput per time is reflected as a lower arrival time of the last robot per number of robots, which tends to infinity as the number of robots grows (Figure \\ref{fig:tsit21}). 
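The empirical throughput curves discussed in this section are obtained from the robots' arrival times: after the $k$-th arrival, the throughput is $(k-1)$ divided by the time elapsed since the first arrival. The following Python sketch illustrates this post-processing; the helper name and the arrival times are ours, for illustration only, and are not part of the Stage toolchain.

```python
def throughput_series(arrival_times):
    """Empirical throughput after each arrival: (k - 1) / (t_k - t_1).

    The first arrival is skipped, since the elapsed time there is zero
    and the ratio would be indeterminate (as noted in the text).
    """
    times = sorted(arrival_times)
    t0 = times[0]
    return [(k - 1) / (t - t0) for k, t in enumerate(times[1:], start=2)]

# Hypothetical arrival times in seconds, for illustration only.
arrivals = [0.0, 2.0, 4.0, 6.0, 8.0]
print(throughput_series(arrivals))  # -> [0.5, 0.5, 0.5, 0.5]
```

With evenly spaced arrivals, the series is constant, mirroring how the simulated curves flatten towards the asymptotic value as time passes.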
\n\n\n\n\\begin{figure}[t!]\n \\centering \n \\includegraphics[width=0.55\\linewidth]{figs\/KsCoolPath.pdf} \n \\caption{Throughput versus the number of lanes comparison of the simulation on Stage and asymptotic throughput for $s \\in \\{3,6\\}$ m.}\n \\label{fig:ksit}\n\\end{figure}\n\n\nFigure \\ref{fig:ksit} shows a comparison of the throughput at the end of the experiment -- that is, for 200 robots and considering the difference between the time to reach the target region spent by the last robot and the first -- and the asymptotic throughput obtained by (\\ref{eq:throughputhitandrunlimit}) for all possible numbers of lanes ($K$) under the parameters used and $s \\in \\{3,6\\}$ m. Confirming our results, the simulation values approach the asymptotic value.\n\n\n\\subsection{Comparison between hexagonal packing and parallel lanes}\n\n\\label{sec:hexparexperiments}\n\nAs discussed in Section \\ref{sec:subseccomparison}, we observed that the parallel lanes strategy has a higher throughput than hexagonal packing for values of $u = s\/d$ from 0.5 to about 0.85 and for high values of $T$, despite the parallel lanes having lower asymptotic throughput for other values of $u$. In order to validate this observation, we performed experiments on Stage for these strategies using $T=10000$ s, $v=0.1$ m\/s, $d=1$ m and $s$ ranging from 0.4 to 0.95 m in increments of 0.05 m. 
For hexagonal packing, we computed the best packing angle $\\theta$ using the same method mentioned at the end of the theoretical section; that is, we searched for the maximum throughput over 1000 evenly spaced points in $[0,\\pi\/3)$ to find the best $\\theta$, then compared with the result for $\\pi\/6$.\n\n\\begin{figure}[t!]\n \\centering\n \\subfloat[$v = 0.1$ m\/s.]{\\includegraphics[width=0.49\\linewidth]{figs\/SimHexPar01.pdf} }\n \\subfloat[$v = 1$ m\/s.]{\\includegraphics[width=0.49\\linewidth]{figs\/SimHexPar10.pdf} }\n \\caption{Throughput versus ratio $u=s\/d$ comparing hexagonal packing and parallel lanes strategies for $v\\in \\{0.1,1\\}$ m\/s, including results from Stage simulations. The functions $f_{h}$ and $f_{p}$ are the same as those presented in Figure \\ref{fig:fhfpZoom}. The labels ``Simulation hex.'' and ``Simulation par.'' stand for the throughput resulting from the experiments with hexagonal packing and parallel lanes strategies, respectively.}\n \\label{fig:compHexParExperiments}\n\\end{figure}\n\n\n\n\n\\begin{figure}[t!]\n \\subfloat[Hexagonal packing with best $\\theta$ for $s = 0.5$ m. Available on \\url{https:\/\/youtu.be\/IZBnFHLKXUA}.]{\\includegraphics[width=0.495\\linewidth]{figs\/stage-hex-s0_5.png}}\\ \n \\subfloat[Parallel lanes for $s = 0.5$ m. Available on \\url{https:\/\/youtu.be\/YYv1dJFkdPA}.]{\\includegraphics[width=0.495\\linewidth]{figs\/stage-par-s0_5.png} }\\\\\n \\subfloat[Hexagonal packing with best $\\theta$ for $s = 0.85$ m. Available on \\url{https:\/\/youtu.be\/r9X0fsnngm0}.]{\\includegraphics[width=0.495\\linewidth]{figs\/stage-hex-s0_85.png}}\\ \n \\subfloat[Parallel lanes for $s = 0.85$ m. 
Available on \\url{https:\/\/youtu.be\/0cx-bHPIong}.}]{\\includegraphics[width=0.495\\linewidth]{figs\/stage-par-s0_85.png} }\\\\\n \\caption{Screenshots of the Stage simulation for hexagonal packing and parallel lanes strategy for $d=1$ m and $s\\in \\{0.5,0.85\\}$ m showing that the square packing fits more robots than hexagonal packing over time in these cases. The robots run from right to left at constant linear speed $v=0.1$ m\/s. We highlighted the grey squares -- which measure $1\\times1$ m$^{2}$ -- to help estimate the time needed for about eight robots to arrive at the target region. }\n \\label{fig:stageCompHP}\n\\end{figure}\n\nFigure \\ref{fig:compHexParExperiments} presents the results from the experiments with Stage and the theoretical results shown earlier. \nThe throughput improvement for the values of $u=s\/d$ where the parallel lanes strategy outperforms hexagonal packing is mainly caused by the square packing being more effective than hexagonal packing at fitting the robots inside the circle over time for those values. To illustrate this, Figure \\ref{fig:stageCompHP} shows the execution for $v=0.1$ m\/s, $d=1$ m and $s\\in\\{0.5,0.85\\}$ m. Observe in those figures that when the robots are arranged in squares, more robots arrive per unit of time than using hexagonal packing.\nTo help visualise this, note that in Figure \\ref{fig:stageCompHP} (a) there are $N=9$ robots in black, occupying a rectangle including the circular target area with a width of $W \\approx 4.5$ m (this distance can be roughly measured by the grey squares, counting from the two last black robots on the right side to the first one on the left side). As we assumed $v=0.1$ m\/s, the throughput in this case is approximately $\\frac{N-1}{\\frac{W}{v}} \\approx \\frac{(9-1)0.1}{4.5} \\approx 0.178$ s$^{-1}$. 
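This back-of-the-envelope figure is easy to check mechanically. The sketch below (the helper name is ours, for illustration only) uses only the quantities quoted in the text, $N$ robots spanning a width $W$ at speed $v$:

```python
def approx_throughput(n_robots, width_m, speed_mps):
    # (N - 1) robots arrive over the W / v seconds the block takes to pass,
    # so the throughput is (N - 1) / (W / v) = (N - 1) * v / W.
    return (n_robots - 1) * speed_mps / width_m

# Values read off the screenshot: N = 9 robots over W ~ 4.5 m at v = 0.1 m/s.
print(round(approx_throughput(9, 4.5, 0.1), 3))  # -> 0.178
```

The same helper reproduces the remaining screenshot estimates by substituting the corresponding $N$ and $W$ values.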
Making similar calculations, we have for Figures \\ref{fig:stageCompHP} (b), \\ref{fig:stageCompHP} (c) and \\ref{fig:stageCompHP} (d) the approximate throughputs $\\frac{(8-1)0.1}{3} \\approx 0.233$ s$^{-1}$, $\\frac{(8-1)0.1}{4}=0.175$ s$^{-1}$ and $\\frac{(8-1)0.1}{3} \\approx 0.233$ s$^{-1}$, respectively. The results from the parallel lanes in this illustration -- about 0.233 for both values of $s$ -- surpass the values for the hexagonal packing. \n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nA novel metric was proposed for measuring the effectiveness of algorithms to minimise congestion in a swarm of robots trying to reach the same goal: the common target area throughput. Also, we defined the asymptotic throughput for the common target area as the throughput as time tends to infinity. Assuming the robots have constant velocity and distance between each other, we showed how to calculate the asymptotic throughput for different theoretical strategies to arrive at the common circular target region: (i) forming parallel queues to enter the target region, (ii) using a corridor with robots in hexagonal packing to enter the region and (iii) following curved trajectories to touch the region boundary. These strategies were the inspiration for new algorithms using artificial potential fields in \\citep{arxivAlgorithms}. As a growing number of robots must be considered in robotic swarms, the asymptotic throughput captures how many robots can reach the target as time passes and, consequently, as the number of robots increases. Should a closed form for the asymptotic throughput be given, we can use it to compare algorithms, as it is always finite. \n\nWhen we assumed constant velocity and distance between robots, we were able to provide theoretical calculations of the throughput for a given time and the asymptotic throughput for the different theoretical strategies. 
Based solely on these calculations, we could compare which strategy is better. If we can theoretically calculate the asymptotic throughput of any algorithm, we may use it to compare algorithms and decide which one is better. When a robotic swarm has all robots going to the same target, the function relating the number of robots to the time of arrival at the target region tends to infinity as the number of robots grows, while the function relating the number of robots to throughput tends to a finite number, because the asymptotic throughput is finite. When an algorithm exhibits a lower target region arrival time for a given number of robots, the throughput is higher, so the comparison by this latter metric can replace the former, as it reflects the same ordering, but reversed. \n\nAlthough we do not have closed-form asymptotic equations for dynamic speed and inter-robot distance, we believe that, if we had them, we could compare only by the analytical asymptotic throughput, as we did when the linear velocity and distance between the robots are constant. 
Thus, for common target area congestion in robotic swarms, the throughput is well suited for comparing algorithms due to its abstraction of the rate of target area access as the number of robots grows, whether or not we have a closed-form throughput equation.\n\n\\section*{Acknowledgements}\n\nYuri Tavares dos Passos gratefully acknowledges the Faculty of Science and Technology at Lancaster University for the scholarship and the Universidade Federal do Rec\u00f4ncavo da Bahia for granting the leave of absence for finishing his PhD.\n\n\n\\bibliographystyle{elsarticle-num-names}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThe proximity catch digraphs (PCDs) were motivated by their applications\nin pattern classification and spatial pattern analysis,\nhence they have become the focus of considerable attention recently.\nThe PCDs are a special type of proximity graphs, which were introduced by \\cite{toussaint:1980}.\nA \\emph{digraph} is a directed graph with\nvertex set $V$ and arcs (directed edges) each of which is from one\nvertex to another based on a binary relation.\nThen the pair $(p,q) \\in V \\times V$ is an ordered pair\nwhich stands for an \\emph{arc} from vertex $p$ to vertex $q$ in $V$.\nFor example, the \\emph{nearest neighbor (di)graph}, which is defined by placing\nan arc between each vertex and its nearest neighbor, is a proximity digraph\nwhere vertices represent points in some metric space (\\cite{paterson:1992}).\nPCDs are \\emph{data-random digraphs} in which\neach vertex corresponds to a data point\nand arcs are defined in terms of some bivariate relation on the data.\n\nThe PCDs are closely related to the class cover problem of \\cite{cannon:2000}.\nLet $(\\Omega,\\mathcal M)$ be a measurable space and $\\mathcal{X}_n=\\{X_1,X_2,\\ldots,X_n\\}$ and\n$\\mathcal{Y}_m=\\{Y_1,Y_2,\\ldots,Y_m\\}$ be two sets of $\\Omega$-valued random variables\nfrom classes $\\mathcal{X}$ and $\\mathcal{Y}$, 
respectively, with joint probability distribution $F_{X,Y}$.\nLet $d(\\cdot,\\cdot):\\Omega\\times \\Omega \\rightarrow [0,\\infty)$ be any distance function.\nThe \\emph{class cover problem} for a target class, say $\\mathcal{X}$, refers to finding a collection of neighborhoods,\n$N_i$ around $X_i$ such that\n(i) $\\mathcal{X}_n \\subseteq \\bigl(\\cup_i N_i \\bigr)$ and (ii) $\\mathcal{Y}_m \\cap \\bigl(\\cup_i N_i \\bigr)=\\emptyset$.\nA collection of neighborhoods satisfying both conditions is called a {\\em class cover}.\nA cover satisfying (i) is a {\\em proper cover} of $\\mathcal{X}_n$\nwhile a cover satisfying (ii) is a {\\em pure cover} relative to $\\mathcal{Y}_m$.\nThis article is on the {\\em cardinality of smallest class covers}; that is,\nclass covers satisfying both (i) and (ii) with the smallest number of neighborhoods.\nSee \\cite{cannon:2000} and \\cite{priebe:2001} for more on the class cover problem.\n\nThe first type of PCD was the class cover catch digraph (CCCD), introduced by \\cite{priebe:2001}\nwho gave the exact distribution of its domination number for\nuniform data from two classes in $\\mathbb{R}$.\n\\cite{devinney:2002a}, \\cite{marchette:2003}, \\cite{priebe:2003b}, \\cite{priebe:2003a},\n\\cite{devinney:2006} extended the CCCDs to higher dimensions and demonstrated that CCCDs\nare a competitive alternative to the existing methods in classification.\nFurthermore,\n\\cite{devinney:2002b} proved an SLLN result for the one-dimensional class cover problem;\n\\cite{wiermanSLLN:2008} provided a generalized SLLN result\nand \\cite{xiangCLT:2009} provided a CLT result for\nCCCD based on one-dimensional data.\nHowever, CCCDs have some disadvantages in higher dimensions;\nnamely, finding the minimum dominating set for CCCDs is an NP-hard problem in general,\nalthough a simple linear-time algorithm is available\nfor one-dimensional data (\\cite{priebe:2001});\nand the exact and the asymptotic distributions of the domination number\nof the CCCDs are 
not analytically tractable in multiple dimensions.\n\\cite{ceyhan:CS-JSM-2003,ceyhan:dom-num-NPE-SPL} introduced the central similarity proximity maps\nand proportional-edge proximity maps for data in $\\mathbb{R}^d$ with $d>1$\nand the associated random PCDs\nwith the purpose of avoiding the above-mentioned problems.\nThe asymptotic distribution of the domination number of the proportional-edge PCD\nis calculated for data in $\\mathbb{R}^2$ and\nthen the domination number is used as a statistic\nfor testing bivariate spatial patterns\n(\\cite{ceyhan:dom-num-NPE-SPL}, \\cite{ceyhan:dom-num-NPE-Spat2010}).\nThe relative density of these two PCD families is also calculated\nand used for the same purpose\n(\\cite{ceyhan:arc-density-PE} and \\cite{ceyhan:arc-density-CS}).\nMoreover, the distribution of the domination number\nof CCCDs is derived for non-uniform data (\\cite{ceyhan:dom-num-CCCD-NonUnif}).\n\nIn this article,\nwe provide the exact (and asymptotic) distribution of the domination number of\nproportional-edge PCDs for uniform (and non-uniform) one-dimensional data.\nFirst, some special cases and bounds for the domination number of proportional-edge PCDs\nare presented, then the domination number is investigated\nfor uniform data in one interval (in $\\mathbb{R}$) and the analysis is generalized to\nuniform data in multiple intervals and to non-uniform data in one and multiple intervals.\nThese results can be seen as generalizations of the results of \\cite{ceyhan:dom-num-CCCD-NonUnif}.\nSome trivial proofs are omitted;\nshorter proofs are given in the main body of the article,\nwhile longer proofs are deferred to the Appendix.\n\nWe define the proportional-edge PCDs\nand their domination number in Section \\ref{sec:prop-edge-PCD},\nprovide the exact and asymptotic distributions of the domination number of proportional-edge PCDs\nfor uniform data in one interval in Section \\ref{sec:gamma-dist-uniform},\ndiscuss the distribution of the domination number 
for\ndata from a general distribution in Section \\ref{sec:non-uniform}.\nWe extend these results to multiple intervals in Section \\ref{sec:dist-multiple-intervals},\nand provide discussion and conclusions in Section \\ref{sec:disc-conclusions}.\nFor convenience in notation and presentation,\nwe resort to non-standard extended (perhaps abused) forms of Bernoulli and Binomial distributions,\ndenoted $\\BER(p)$ and $\\BIN(n,p)$, respectively,\nwhere $p$ is the probability of success and $n$ is the number of trials.\nThroughout the article,\nwe take $p \\in [0,1]$ (unlike $p \\in (0,1)$)\nand if $X \\sim \\BER(p)$, then $P(X=1)=p$ and $P(X=0)=1-p$.\nIf $Y \\sim \\BIN(n,p)$,\nthen\n$P(Y=k)=\\binom{n}{k}p^k(1-p)^{n-k}$ for $p \\in (0,1)$ and $k \\in \\{0,1,2,\\ldots,n\\}$\nand\n$P(Y=n)=1$ for $p=1$ and $P(Y=0)=1$ for $p=0$.\n\n\\section{Proportional-Edge Proximity Catch Digraphs}\n\\label{sec:prop-edge-PCD}\nConsider the map $N:\\Omega \\rightarrow \\wp(\\Omega)$\nwhere $\\wp(\\Omega)$ represents the power set of $\\Omega$.\nThen given $\\mathcal{Y}_m \\subseteq \\Omega$,\nthe {\\em proximity map}\n$N(\\cdot): \\Omega \\rightarrow \\wp(\\Omega)$\nassociates with each point $x \\in \\Omega$\na {\\em proximity region} $N(x) \\subseteq \\Omega$.\nFor $B \\subseteq \\Omega$, the $\\Gamma_1$-region is the image of the map\n$\\Gamma_1(\\cdot,N):\\wp(\\Omega) \\rightarrow \\wp(\\Omega)$\nthat associates the region $\\Gamma_1(B,N):=\\{z \\in \\Omega: B \\subseteq N(z)\\}$\nwith the set $B$.\nFor a point $x \\in \\Omega$, we denote $\\Gamma_1(\\{x\\},N)$ as $\\Gamma_1(x,N)$.\nNotice that while the proximity region is defined for one point,\na $\\Gamma_1$-region is defined for a set of points.\nThe {\\em data-random PCD} has the vertex set $\\mathcal{V}=\\mathcal{X}_n$\nand arc set $\\mathcal{A}$ defined by $(X_i,X_j) \\in \\mathcal{A}$ iff $X_j \\in N(X_i)$.\n\nLet $\\Omega=\\mathbb{R}$ and $Y_{(i)}$ be the $i^{th}$ order statistic\n(i.e., $i^{th}$ smallest value) of 
$\\mathcal{Y}_m$ for $i=1,2,\\ldots,m$\nwith the additional notation for $i \\in \\{0,m+1\\}$ as\n$$-\\infty =: Y_{(0)} < Y_{(1)} < Y_{(2)} < \\ldots < Y_{(m)} < Y_{(m+1)} := \\infty.$$\nFor $i=0,1,\\ldots,m$,\nlet $\\mathcal{I}_i:=\\left( Y_{(i)},Y_{(i+1)} \\right)$\nand,\nfor $c \\in [0,1]$,\nlet $M_{c,i}:=Y_{(i)}+c\\,\\left( Y_{(i+1)}-Y_{(i)} \\right)$.\nThen for $r \\ge 1$,\nthe {\\em proportional-edge proximity region} is defined as\n\\begin{equation}\n\\label{eqn:NPEr-general-defn1}\nN(x,r,c):=\n\\begin{cases}\n\\left( Y_{(i)}, Y_{(i)}+r\\,\\left( x-Y_{(i)} \\right) \\right)\\cap \\mathcal{I}_i & \\text{for $x \\in \\left( Y_{(i)},M_{c,i} \\right)$ with $i \\in \\{1,\\ldots,m-1\\}$,}\\\\\n\\left( Y_{(i+1)}-r\\left( Y_{(i+1)}-x \\right), Y_{(i+1)} \\right)\\cap \\mathcal{I}_i & \\text{for $x \\in \\left( M_{c,i},Y_{(i+1)} \\right)$ with $i \\in \\{1,\\ldots,m-1\\}$,}\\\\\n\\left( Y_{(1)}-r\\left( Y_{(1)}-x \\right), Y_{(1)} \\right) & \\text{for $x < Y_{(1)}$,}\\\\\n\\left( Y_{(m)}, Y_{(m)}+r\\,\\left( x-Y_{(m)} \\right) \\right) & \\text{for $x > Y_{(m)}$.}\n\\end{cases}\n\\end{equation}\nNotice that for $i \\in \\{0,m\\}$,\nthe proportional-edge proximity region does not depend on the centrality parameter $c$.\nFor $x \\in \\mathcal{Y}_m$,\nwe define $N(x,r,c)=\\{x\\}$ for all $r \\ge 1$\nand if $x = M_{c,i}$, then in Equation \\eqref{eqn:NPEr-general-defn1},\nwe arbitrarily assign $N(x,r,c)$ to be one of\n$\\left( Y_{(i)}, Y_{(i)}+r\\,\\left( x-Y_{(i)} \\right) \\right)\\cap \\mathcal{I}_i$ or\n$\\left( Y_{(i+1)}-r\\left( Y_{(i+1)}-x \\right), Y_{(i+1)} \\right)\\cap \\mathcal{I}_i$.\nFor $c = 0$,\nwe have $\\left( M_{c,i},Y_{(i+1)} \\right)=\\mathcal{I}_i$\nand\nfor $c = 1$,\nwe have $(Y_{(i)},M_{c,i})=\\mathcal{I}_i$.\nSo,\nwe set\n$N(x,r,0):= \\left( Y_{(i+1)}-r\\left(Y_{(i+1)}-x\\right), Y_{(i+1)} \\right) \\cap \\mathcal{I}_i$\nand\n$N(x,r,1):= \\left( Y_{(i)}, Y_{(i)}+r\\,\\left( x-Y_{(i)} \\right) \\right) \\cap \\mathcal{I}_i$.\nFor $r > 1$, we have $x \\in N(x,r,c)$ for all $x \\in \\mathcal{I}_i$.\nFurthermore,\n$\\lim_{r \\rightarrow \\infty} N(x,r,c) = \\mathcal{I}_i$\nfor all $x \\in \\mathcal{I}_i$,\nso we define $N(x,\\infty,c) = \\mathcal{I}_i$ for all such $x$.\n\nFor $X_i \\stackrel{iid}{\\sim} F$,\nthe additional assumption\nthat the non-degenerate one-dimensional\nprobability density function (pdf) $f$ exists with support $\\mathcal{S}(F) \\subseteq \\mathcal{I}_i$\nand $f$ is continuous around $M_{c,i}$ and around the end points of $\\mathcal{I}_i$\nimplies that the special cases in the construction\nof $N(\\cdot,r,c)$ ---\n$X$ falls at $M_{c,i}$ or at the end points of $\\mathcal{I}_i$ ---\noccur with probability zero.\nFor such an $F$,\nthe region $N(X_i,r,c)$ is an interval a.s.\n\nThe data-random proportional-edge PCD has the vertex set $\\mathcal{X}_n$ and arc set $\\mathcal{A}$ defined by\n$(X_i,X_j) \\in \\mathcal{A}$ iff $X_j \\in N(X_i,r,c)$.\nWe call 
such digraphs $\\mathscr D_{n,m}(r,c)$-digraphs.\nA $\\mathscr D_{n,m}(r,c)$-digraph is a {\\em pseudo digraph} according to some authors,\nif loops are allowed (see, e.g., \\cite{chartrand:1996}).\nThe $\\mathscr D_{n,m}(r,c)$-digraphs are closely related to the {\\em proximity graphs} of\n\\cite{jaromczyk:1992} and might be considered as a special case of\n{\\em covering sets} of \\cite{tuza:1994} and {\\em intersection digraphs} of \\cite{sen:1989}.\nOur data-random proximity digraph is a {\\em vertex-random digraph} and\nis not a standard random graph (see, e.g., \\cite{janson:2000}).\nThe randomness of a $\\mathscr D_{n,m}(r,c)$-digraph lies in the fact that\nthe vertices are random with the joint distribution $F_{X,Y}$,\nbut arcs $(X_i,X_j)$ are\ndeterministic functions of the random variable $X_j$ and the random set $N(X_i,r,c)$.\nIn $\\mathbb{R}$, the data-random PCD is a special case of\n{\\em interval catch digraphs} (see, e.g., \\cite{sen:1989} and \\cite{prisner:1994}).\nFurthermore, when $r=2$ and $c=1\/2$ (i.e., $M_{c,i}=\\left( Y_{(i)}+Y_{(i+1)} \\right)\/2$)\nwe have $N(x,r,c)=B(x,r(x))$\nwhere $B(x,r(x))$ is the ball centered at $x$ with radius $r(x)=d(x,\\mathcal{Y}_m)=\\min_{y \\in \\mathcal{Y}_m}d(x,y)$.\nThe region $N(x,2,1\/2)$ corresponds to the proximity region\nwhich gives rise to the CCCD of \\cite{priebe:2001}.\n\n\n\\subsection{Domination Number of Random $\\mathscr D_{n,m}(r,c)$-digraphs}\nIn a digraph $D=(\\mathcal{V},\\mathcal{A})$ of order $|\\mathcal{V}|=n$, a vertex $v$ {\\em dominates}\nitself and all vertices of the form $\\{u:\\,(v,u) \\in \\mathcal{A}\\}$.\nA {\\em dominating set}, $S_D$, for the digraph $D$ is a subset of\n$\\mathcal{V}$ such that each vertex $v \\in \\mathcal{V}$ is dominated by a vertex in $S_D$.\nA {\\em minimum dominating set}, $S^*_D$, is a dominating set of minimum cardinality;\nand the {\\em domination number}, denoted $\\gamma(D)$, is defined as $\\gamma(D):=|S^*_D|$,\nwhere $|\\cdot|$ is the set 
cardinality functional (\\cite{west:2001}).\nIf a minimum dominating set consists of only one vertex,\nwe call that vertex a {\\em dominating vertex}.\nThe vertex set $\\mathcal{V}$ itself is always a dominating set,\nso $\\gamma(D) \\le n$.\n\nLet $\\mathcal F\\left( \\mathbb{R}^d \\right):=\\{F_{X,Y} \\text{ on } \\mathbb{R}^d \\text { with } P(X=Y)=0\\}$.\nAs in \\cite{priebe:2001} and \\cite{ceyhan:dom-num-CCCD-NonUnif},\nwe consider $\\mathscr D_{n,m}(r,c)$-digraphs for which\n$\\mathcal{X}_n$ and $\\mathcal{Y}_m$ are random samples from $F_X$ and $F_Y$, respectively,\nand the joint distribution of $X,Y$ is $F_{X,Y} \\in \\mathcal F\\left( \\mathbb{R}^d \\right)$.\nWe call such digraphs \\emph{$\\mathcal F\\left( \\mathbb{R}^d \\right)$-random $\\mathscr D_{n,m}(r,c)$-digraphs}\nand focus on the random variable $\\gamma(D)$.\nTo make the dependence on sample sizes $n$ and $m$,\nthe distribution $F$,\nand the parameters $r$ and $c$ explicit,\nwe use $\\gamma_{{}_{n,m}}(F,r,c)$ instead of $\\gamma(D)$.\nFor $n \\ge 1$ and $m \\ge 1$,\nit is trivial to see that $1 \\le \\gamma_{{}_{n,m}}(F,r,c) \\le n$,\nand $1 \\le \\gamma_{{}_{n,m}}(F,r,c) < n$ for nontrivial digraphs.\n\n\n\\subsection{Special Cases for the Distribution of the\nDomination Number of $\\mathcal F(\\mathbb{R})$-random $\\mathscr D_{n,m}(r,c)$-digraphs}\n\\label{sec:domination-number-Dnm}\nLet $\\mathcal{X}_n$ and $\\mathcal{Y}_m$ be two samples from $\\mathcal F(\\mathbb{R})$, $\\mathcal{X}_{[i]}:=\\mathcal{X}_n \\cap \\mathcal{I}_i$,\nand $\\mathcal{Y}_{[i]}:=\\{Y_{(i)},Y_{(i+1)}\\}$ for $i=0,1,2,\\ldots,m$.\nThis yields a disconnected digraph with subdigraphs\neach of which might be null or itself disconnected.\nLet $D_{[i]}$ be the component of the random $\\mathscr D_{n,m}(r,c)$-digraph\ninduced by the pair $\\mathcal{X}_{[i]}$ and $\\mathcal{Y}_{[i]}$ for $i=0,1,2,\\ldots,m$,\n$n_i:=\\left|\\mathcal{X}_{[i]}\\right|$, and $F_i$ be the density $F_X$ restricted to 
$\\mathcal{I}_i$,\nand\n$\\gamma_{{}_{n_i,2}}(F_i,r,c)$ be the domination number of $D_{[i]}$.\nAlso let $M_{c,i} \\in \\mathcal{I}_i$ be the point that divides\nthe interval $\\mathcal{I}_i$ in ratios $c$ and $1-c$\n(i.e., the length of the subinterval to the left of $M_{c,i}$ is\n$c \\times 100$ \\% of the length of $\\mathcal{I}_i$).\nThen $\\gamma_{{}_{n,m}}(F,r,c)=\\sum_{i=0}^m ( \\gamma_{{}_{n_i,2}}(F_i,r,c) \\mathbf{I}(n_i>0) )$\nwhere $\\mathbf{I}(\\cdot)$ is the indicator function.\nWe study the simpler random variable $\\gamma_{{}_{n_i,2}}(F_i,r,c)$ first.\nThe following lemma follows trivially.\n\n\\begin{lemma}\n\\label{lem:end-intervals}\nFor $i \\in \\{ 0,m \\}$,\nwe have $\\gamma_{{}_{n_i,2}}(F_i,r,c)=\\mathbf{I}(n_i >0)$ for all $r \\ge 1$.\n\\end{lemma}\n\nLet $\\Gamma_1\\left( B,r,c \\right)$ be the $\\Gamma_1$-region\nfor set $B$ associated with the proximity map $N(\\cdot,r,c)$.\n\n\\begin{lemma}\n\\label{lem:G1-region-in-Ii}\nThe $\\Gamma_1$-region for $\\mathcal{X}_{[i]}$ in $\\mathcal{I}_i$ with $r \\ge 1$ and $c \\in [0,1]$\nis\n$$\\Gamma_1\\left( \\mathcal{X}_{[i]},r,c \\right) =\n\\Biggl(Y_{(i)}+\\frac{\\max\\,\\left( \\mathcal{X}_{[i]} \\right)-Y_{(i)}}{r},M_{c,i} \\Biggr] \\bigcup\n\\Biggl[M_{c,i}, Y_{(i+1)}-\\frac{Y_{(i+1)}-\\min\\left( \\mathcal{X}_{[i]} \\right)}{r}\\Biggr)$$\nwith the understanding that the intervals $(a,b)$, $(a,b]$, and $[a,b)$ are empty if $a \\ge b$.\n\\end{lemma}\n\\noindent {\\bf Proof:}\nBy definition,\n$\\Gamma_1\\left( \\mathcal{X}_{[i]},r,c \\right) = \\{x \\in \\mathcal{I}_i: \\mathcal{X}_{[i]} \\subset N(x,r,c)\\}$.\nSuppose $r \\ge 1$ and $c \\in [0,1]$.\nThen\nfor $x \\in ( Y_{(i)},M_{c,i} ]$,\nwe have\n$\\mathcal{X}_{[i]} \\subset N(x,r,c)$\niff $Y_{(i)}+r\\,(x-Y_{(i)}) > \\max\\,\\left( \\mathcal{X}_{[i]} \\right)$\niff $x > Y_{(i)}+\\frac{\\max\\,\\left( \\mathcal{X}_{[i]} \\right)-Y_{(i)}}{r}$.\nLikewise\nfor $x \\in [ M_{c,i},Y_{(i+1)} )$,\nwe have\n$\\mathcal{X}_{[i]} \\subset N(x,r,c)$\niff 
$Y_{(i+1)}-r\\,(Y_{(i+1)}-x) < \\min\\,\\left( \\mathcal{X}_{[i]} \\right)$\niff $x < Y_{(i+1)}-\\frac{Y_{(i+1)}-\\min\\left( \\mathcal{X}_{[i]} \\right)}{r}$.\nTherefore\n$\\Gamma_1\\left( \\mathcal{X}_{[i]},r,c \\right) =\n\\Bigl(Y_{(i)}+\\frac{\\max\\,\\left( \\mathcal{X}_{[i]} \\right)-Y_{(i)}}{r},M_{c,i} \\Bigr] \\bigcup\n\\Bigl[M_{c,i}, Y_{(i+1)}-\\frac{Y_{(i+1)}-\\min\\left( \\mathcal{X}_{[i]} \\right)}{r}\\Bigr).$\n$\\blacksquare$\n\nNotice that if $\\mathcal{X}_{[i]} \\cap \\Gamma_1\\left( \\mathcal{X}_{[i]},r,c \\right) \\not=\\emptyset$,\nwe have $\\gamma_{{}_{n_i,2}}(F_i,r,c)=1$,\nhence the name $\\Gamma_1$-region and the notation $\\Gamma_1(\\cdot)$.\nFor $i=1,2,3,\\ldots,(m-1)$ and $n_i > 0$,\nwe prove that $\\gamma_{{}_{n_i,2}}(F_i,r,c) = 1$ or $2$\nwith distribution-dependent probabilities.\nHence,\nto find the distribution of $\\gamma_{{}_{n_i,2}}(F_i,r,c)$,\nit suffices to find $P(\\gamma_{{}_{n_i,2}}(F_i,r,c)=1)$ or $p_{{}_{n_i}}(F_i,r,c):=P\\bigl( \\gamma_{{}_{n_i,2}}(F_i,r,c)=2 \\bigr)$.\nFor computational convenience, we employ the latter in our calculations henceforth.\n\n\n\\begin{theorem}\n\\label{thm:gamma 1 or 2}\nFor $i=1,2,3,\\ldots,(m-1)$,\nlet the support of $F_i$ have a positive Lebesgue measure.\nThen for $n_i > 1$, $r \\in (1,\\infty)$, and $c \\in (0,1)$,\nwe have $\\gamma_{{}_{n_i,2}}(F_i,r,c) \\sim 1+\\BER\\left( p_{{}_{n_i}}(F_i,r,c) \\right)$.\nFurthermore,\n$\\gamma_{{}_{1,2}}(F_i,r,c)=1$ for all $r \\ge 1$ and $c \\in [0,1]$;\n$\\gamma_{{}_{n_i,2}}(F_i,r,0)=\\gamma_{{}_{n_i,2}}(F_i,r,1)=1$ for all $n_i \\ge 1$ and $ r \\ge 1$;\nand\n$\\gamma_{{}_{n_i,2}}(F_i,\\infty,c)=1$ for all $n_i \\ge 1$ and $c \\in [0,1]$.\n\\end{theorem}\n\n\\noindent {\\bf Proof:}\nLet $X^-_i:=\\argmin_{x \\in \\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right)}d(x,M_{c,i})$\nprovided that $\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right) \\not= \\emptyset$,\nand $X^+_i:=\\argmin_{x \\in \\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} 
\\right)}d(x,M_{c,i})$\nprovided that $\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right) \\not= \\emptyset$.\nThat is, $X^-_i$ and $X^+_i$ are the closest class $\\mathcal{X}$ points (if they exist)\nto $M_{c,i}$ from the left and right, respectively.\nNotice that since $n_i > 0$, at least one of $X^-_i$ and $X^+_i$ exists a.s.\nIf $\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right) = \\emptyset$,\nthen $\\mathcal{X}_{[i]} \\subset N\\left( X^+_i,r,c \\right)$;\nso $\\gamma_{{}_{n_i,2}}(F_i,r,c)=1$.\nSimilarly,\nif $\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right)= \\emptyset$,\nthen $\\mathcal{X}_{[i]} \\subset N\\left( X^-_i,r,c \\right)$;\nso $\\gamma_{{}_{n_i,2}}(F_i,r,c)=1$.\nIf both $\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right)$ and $\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right)$ are nonempty,\nthen $\\mathcal{X}_{[i]} \\subset N\\left( X^-_i,r,c \\right) \\cup N\\left( X^+_i,r,c \\right)$,\nso $\\gamma_{{}_{n_i,2}}(F_i,r,c) \\le 2$.\nSince $n_i > 0$, we have $1 \\le \\gamma_{{}_{n_i,2}}(F_i,r,c) \\le 2$.\nThe desired result follows,\nsince the probabilities $1-p_{{}_{n_i}}(F_i,r,c)=P(\\gamma_{{}_{n_i,2}}(F_i,r,c)=1)$ and\n$p_{{}_{n_i}}(F_i,r,c)=P(\\gamma_{{}_{n_i,2}}(F_i,r,c)=2)$\nare both positive.\nThe special cases in the theorem follow by construction.\n$\\blacksquare$\n\nThe probability $p_{{}_{n_i}}(F_i,r,c)=P\\left( \\mathcal{X}_{[i]} \\cap \\Gamma_1\\left( \\mathcal{X}_{[i]},r,c \\right) =\\emptyset \\right)$\ndepends on the conditional distribution $F_{X|Y}$ and the interval $\\Gamma_1\\left( \\mathcal{X}_{[i]},r,c \\right)$,\nwhich, if known, make the calculation of $p_{{}_{n_i}}(F_i,r,c)$ possible.\nAs an immediate result of Lemma \\ref{lem:end-intervals} and Theorem \\ref{thm:gamma 1 or 2},\nwe have the following upper bound for $\\gamma_{{}_{n,m}}(F,r,c)$.\n\n\\begin{theorem}\n\\label{thm:gamma-Dnm-r-M}\nLet $D_{n,m}(r,c)$ be an $\\mathcal F(\\mathbb{R})$-random $\\mathscr D_{n,m}(r,c)$-digraph\nand $k_1$, $k_2$, and $k_3$ be 
three natural numbers defined as\n$k_1:=\\sum_{i=1}^{m-1} \\mathbf{I}(n_i>1)$,\n$k_2:=\\sum_{i=1}^{m-1} \\mathbf{I}(n_i=1)$,\nand\n$k_3:=\\sum_{i \\in \\{0,m\\}} \\mathbf{I}(n_i > 0)$.\nThen for $n \\ge 1,\\,m \\ge 1$, $r \\ge 1$, and $c \\in [0,1]$,\nwe have\n$1 \\le \\gamma_{{}_{n,m}}(F,r,c) \\le 2\\,k_1+k_2+k_3 \\le \\min(n,2\\,m)$.\nFurthermore,\n$\\gamma_{{}_{1,m}}(F,r,c)=1$ for all $m \\ge 1$, $r \\ge 1$, and $c \\in [0,1]$;\n$\\gamma_{{}_{n,1}}(F,r,c)=\\sum_{i \\in \\{0,1\\}} \\mathbf{I}(n_i > 0)$ for all $n \\ge 1$ and $r \\ge 1$;\n$\\gamma_{{}_{1,1}}(F,r,c)=1$ for all $r \\ge 1$;\n$\\gamma_{{}_{n,m}}(F,r,0)=\\gamma_{{}_{n,m}}(F,r,1)=k_1+k_2+k_3$ for all $m > 1$, $n \\ge 1$, and $r \\ge 1$;\nand\n$\\gamma_{{}_{n,m}}(F,\\infty,c)=k_1+k_2+k_3$ for all $m > 1$, $n \\ge 1$, and $c \\in [0,1]$.\n\\end{theorem}\n\n\\noindent {\\bf Proof:}\nSuppose $n \\ge 1,\\,m \\ge 1$, $r \\ge 1$, and $c \\in [0,1]$.\nThen for $i = 1,2,\\ldots,(m-1)$,\nby Theorem \\ref{thm:gamma 1 or 2},\nwe have $\\gamma_{{}_{n_i,2}}(F_i,r,c) \\in \\{1,2\\}$ provided that $n_i>1$,\nand $\\gamma_{{}_{1,2}}(F_i,r,c) = 1$.\nFor $i\\in \\{0,m\\}$,\nby Lemma \\ref{lem:end-intervals},\nwe have $\\gamma_{{}_{n_i,2}}(F_i,r,c)= \\mathbf{I}(n_i > 0)$.\nSince $\\gamma_{{}_{n,m}}(F,r,c)=\\sum_{i=0}^m (\\gamma_{{}_{n_i,2}}(F_i,r,c)\\mathbf{I}(n_i>0))$,\nthe desired result follows.\nThe special cases in the theorem follow by construction.\n$\\blacksquare$\n\nFor $r=1$, the distribution of $\\gamma_{{}_{n_i,2}}(F_i,r,c)$ is simpler and\nthe distribution of $\\gamma_{{}_{n,m}}(F_i,r,c)$ has simpler upper bounds.\n\n\n\n\n\\begin{theorem}\n\\label{thm:gamma-Dnm-r=1-M}\nLet $D_{n,m}(1,c)$ be an $\\mathcal F(\\mathbb{R})$-random $\\mathscr D_{n,m}(1,c)$-digraph,\n$k_3$ be defined as in Theorem \\ref{thm:gamma-Dnm-r-M},\nand $k_4$ be a natural number defined as\n$k_4:=\\sum_{i=1}^{m-1} \\left[\\mathbf{I}\\left( \\left|\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right)\\right|>0 
\\right)+\n\\mathbf{I}\\left( \\left|\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right)\\right|>0 \\right)\\right]$.\nThen for $n \\ge 1,\\,m > 1$, and $c \\in [0,1]$,\nwe have\n$1\\le \\gamma_{{}_{n,m}}(F,1,c) = k_3+k_4 \\le \\min(n,2\\,m)$.\n\\end{theorem}\n\n\\noindent {\\bf Proof:}\nSuppose $n \\ge 1,\\,m > 1$, and $c \\in [0,1]$,\nand let $X^-_i$ and $X^+_i$ be defined as in the proof of Theorem \\ref{thm:gamma 1 or 2}.\nThen by construction, $\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right) \\subset N\\left( X^-_i,1,c \\right)$,\nbut $N\\left( X^-_i,1,c \\right) \\subseteq \\left( Y_{(i)},M_{c,i} \\right)$.\nSo $\\left[ \\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right) \\right] \\cap N\\left( X^-_i,1,c \\right) = \\emptyset$.\nSimilarly\n$\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right) \\subset N\\left( X^+_i,1,c \\right)$\nand $\\left[ \\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right) \\right] \\cap N\\left( X^+_i,1,c \\right) = \\emptyset$.\nThen $\\gamma_{{}_{n_i,2}}(F_i,1,c)=1$,\nif $\\mathcal{X}_{[i]} \\subset \\left( Y_{(i)},M_{c,i} \\right)$\nor\n$\\mathcal{X}_{[i]} \\subset \\left( M_{c,i},Y_{(i+1)} \\right)$,\nand $\\gamma_{{}_{n_i,2}}(F_i,1,c)=2$,\nif $\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right) \\not= \\emptyset$\nand $\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right) \\not= \\emptyset$.\nHence for $i=1,2,3,\\ldots,(m-1)$,\nwe have\n$\\gamma_{{}_{n_i,2}}(F_i,1,c) =\n\\mathbf{I}\\left( \\left|\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right)\\right|>0 \\right)+\n\\mathbf{I}\\left( \\left|\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right)\\right|>0 \\right)$,\nand for $i\\in \\{0,m\\}$,\nwe have $\\gamma_{{}_{n_i,2}}(F_i,1,c)= \\mathbf{I}(n_i > 0)$.\nSince $\\gamma_{{}_{n,m}}(F,1,c)=\\sum_{i=0}^m (\\gamma_{{}_{n_i,2}}(F_i,1,c)\\mathbf{I}(n_i>0))$,\nthe desired result follows.\n$\\blacksquare$\n\nBased on Theorem \\ref{thm:gamma-Dnm-r=1-M},\nwe 
have\n$P(\\gamma_{{}_{n_i,2}}(F_i,1,c)=1)=\nP(\\mathcal{X}_{[i]} \\subset \\left( Y_{(i)},M_{c,i} \\right))+\nP(\\mathcal{X}_{[i]} \\subset \\left( M_{c,i},Y_{(i+1)} \\right))$\nand\n$P(\\gamma_{{}_{n_i,2}}(F_i,1,c)=2)=\nP(\\mathcal{X}_{[i]} \\cap \\left( Y_{(i)},M_{c,i} \\right) \\not= \\emptyset,\n\\mathcal{X}_{[i]} \\cap \\left( M_{c,i},Y_{(i+1)} \\right) \\not= \\emptyset)$.\n\n\\section{The Distribution of the Domination Number of Proportional-Edge PCDs\nfor Uniform Data in One Interval}\n\\label{sec:gamma-dist-uniform}\n\nIn the special case of fixed $\\mathcal{Y}_2=\\{\\mathsf{y}_1,\\mathsf{y}_2\\}$ with $-\\infty<\\mathsf{y}_1<\\mathsf{y}_2<\\infty$\nand $\\mathcal{X}_n =\\{X_1,X_2,\\ldots,X_n\\}$ a random sample from $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$, the uniform distribution on $(\\mathsf{y}_1,\\mathsf{y}_2)$,\nwe have a $\\mathscr D_{n,2}(r,c)$-digraph\nfor which $F_X=\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$.\nWe call such digraphs \\emph{$\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(r,c)$-digraphs}\nand provide the exact distributions of their domination\nnumber for the whole range of $r$ and $c$.\nLet $\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)$\nbe the domination number of the PCD based on $N(\\cdot,r,c)$ and $\\mathcal{X}_n$,\n$p_n(\\mathcal{U},r,c):=P(\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)=2)$,\nand $p(\\mathcal{U},r,c):=\\lim_{n\\rightarrow \\infty}p_n(\\mathcal{U},r,c)$.\nWe present a ``scale invariance'' result for $N(\\cdot,r,c)$.\nThis invariance property will simplify the notation and calculations in\nour subsequent analysis by allowing us to consider the special case\nof the unit interval, $(0,1)$.\n\n\\begin{proposition}\n\\label{prop:scale-inv-NYr}\n(Scale Invariance Property)\nSuppose $\\mathcal{X}_n$ is a random sample (i.e., a set of iid random variables) from $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$.\nThen for any $r \\in [1,\\infty]$ the distribution of $\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)$ 
is\nindependent of $\\mathcal{Y}_2$ and hence the support interval $(\\mathsf{y}_1,\\mathsf{y}_2)$.\n\\end{proposition}\n\n\\noindent {\\bf Proof:}\nLet $\\mathcal{X}_n$ be a random sample from the $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$ distribution.\nAny $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$ random variable can be transformed into a $\\mathcal{U}(0,1)$\nrandom variable by the transformation $\\phi(x)=(x-\\mathsf{y}_1)\/(\\mathsf{y}_2-\\mathsf{y}_1)$,\nwhich maps intervals $(t_1,t_2) \\subseteq (\\mathsf{y}_1,\\mathsf{y}_2)$ to\nintervals $\\bigl( \\phi(t_1),\\phi(t_2) \\bigr) \\subseteq (0,1)$.\nThat is,\nif $X \\sim \\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$,\nthen we have $\\phi(X) \\sim \\mathcal{U}(0,1)$\nand\n$P(X \\in (t_1,t_2))=P\\bigl(\\phi(X) \\in \\bigl( \\phi(t_1),\\phi(t_2) \\bigr)\\bigr)$\nfor all $(t_1,t_2) \\subseteq (\\mathsf{y}_1,\\mathsf{y}_2)$.\nSo, without loss of generality, we can assume $\\mathcal{X}_n$\nis a random sample from the $\\mathcal{U}(0,1)$ distribution.\nTherefore, the distribution of $\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)$\ndoes not depend on the support interval $(\\mathsf{y}_1,\\mathsf{y}_2)$.\n$\\blacksquare$\n\n\nNote that scale invariance of $\\gamma_{{}_{n,2}}(F,\\infty,c)$ follows trivially\nfor all $\\mathcal{X}_n$ from any $F$ with support in $(\\mathsf{y}_1,\\mathsf{y}_2)$,\nsince for $r=\\infty$, we have $\\gamma_{{}_{n,2}}(F,\\infty,c)=1$ a.s.\nfor all $n>1$ and $c \\in (0,1)$.\nThe scale invariance of $\\gamma_{{}_{1,2}}(F,r,c)$ holds\nfor $n=1$ for all $r \\ge 1$ and $c \\in [0,1]$,\nand scale invariance of $\\gamma_{{}_{n,2}}(F,r,c)$ with $c \\in \\{0,1\\}$ holds\nfor $n \\ge 1$ and $r \\ge 1$, as well.\nBased on Proposition \\ref{prop:scale-inv-NYr},\nfor uniform data,\nwe may assume that $(\\mathsf{y}_1,\\mathsf{y}_2)$ is the\nunit interval $(0,1)$ for $N(\\cdot,r,c)$ with general $c$.\nThen the proportional-edge proximity region for $x \\in (0,1)$\nwith parameters $r \\ge 1$ and $c \\in [0,1]$ 
becomes\n\\begin{equation}\n\\label{eqn:NPEr-(0,1)-defn1}\nN(x,r,c)=\n\\begin{cases}\n(0, r\\,x) \\cap (0,1) & \\text{if $x \\in (0,c)$,}\\\\\n(1-r(1-x), 1) \\cap (0,1) & \\text{if $x \\in (c,1)$.}\n\\end{cases}\n\\end{equation}\n\nThe region $N(c,r,c)$ is arbitrarily taken to be one of\n$(0, r\\,c) \\cap (0,1)$ or $(1-r(1-c), 1)\\cap (0,1)$.\nMoreover,\n$N(0,r,c):=\\{0\\}$ and $N(1,r,c):=\\{1\\}$ for all $r \\ge 1$ and $c \\in [0,1]$.\nFor $X_i \\stackrel{iid}{\\sim} \\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$,\nthe special cases in the construction of $N(\\cdot,r,c)$ ---\n$X_i$ falling at $c$ or at the end points of $(\\mathsf{y}_1,\\mathsf{y}_2)$ ---\noccur with probability zero.\nFurthermore, the region $N(x,r,c)$ is an interval a.s.\n\nThe $\\Gamma_1$-region, $\\Gamma_1(\\mathcal{X}_n,r,c)$, depends on $X_{(1)}$, $X_{(n)}$, $r$, and $c$.\nIf $\\Gamma_1(\\mathcal{X}_n,r,c) \\not= \\emptyset$,\nthen we have $\\Gamma_1(\\mathcal{X}_n,r,c)=(\\delta_1,\\delta_2)$\nwhere at least one of the end points $\\delta_1,\\delta_2$ is a function of $X_{(1)}$ and $X_{(n)}$.\nFor $\\mathcal{U}(0,1)$ data,\ngiven $X_{(1)}=x_1$ and $X_{(n)}=x_n$,\nthe probability of $\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)=2$ (i.e., $p_n(\\mathcal{U},r,c)$) is\n$\\displaystyle \\left( 1-(\\delta_2-\\delta_1)\/(x_n-x_1)\\right)^{(n-2)}$\nprovided that $\\Gamma_1(\\mathcal{X}_n,r,c) \\not= \\emptyset$;\nand if $\\Gamma_1(\\mathcal{X}_n,r,c) = \\emptyset$,\nthen $\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)=2$ holds.\nThen\n\\begin{equation}\n\\label{eqn:Pg2-int-first}\nP(\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)=2,\\;\\Gamma_1(\\mathcal{X}_n,r,c) \\not= \\emptyset)=\n\\int\\int_{\\mathcal{S}_1}f_{1n}(x_1,x_n)\\left(1-\\frac{\\delta_2-\\delta_1}{x_n-x_1}\\right)^{(n-2)}\\,dx_ndx_1\n\\end{equation}\nwhere $\\mathcal{S}_1=\\{0<x_1<x_n<1\\}$ and $f_{1n}(x_1,x_n)$ is the joint probability density function of the order statistics $(X_{(1)},X_{(n)})$.\nFor $r=2$ and $c=1\/2$,\n\\cite{priebe:2001} computed the exact distribution of $\\gamma_{{}_{n,2}}(\\mathcal{U},2,1\/2)$ as\n\\begin{equation}\n\\label{eqn:finite-sample-unif}\n\\gamma_{{}_{n,2}}(\\mathcal{U},2,1\/2) \\sim 1+\\BER\\left( 4\/9-(16\/9) \\, 4^{-n} \\right).\n\\end{equation}\nFor $m>2$, \\cite{priebe:2001} computed the exact distribution of $\\gamma_{{}_{n,m}}(\\mathcal{U},2,1\/2)$ also.\nHowever,\nthe scale invariance property does not hold for general $F$;\nthat is, for $X_i \\stackrel{iid}{\\sim}F$ with support $\\mathcal{S}(F) 
\\subseteq(\\mathsf{y}_1,\\mathsf{y}_2)$,\nthe exact and asymptotic distributions of $\\gamma_{{}_{n,2}}(F,2,1\/2)$\ndepend on $F$ and $\\mathcal{Y}_2$ (\\cite{ceyhan:dom-num-CCCD-NonUnif}).\n\n\n\\subsection{The Exact Distribution of the Domination Number of $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(2,c)$-digraphs}\n\\label{sec:r=2-and-M_c=c}\nFor $r=2$, $c \\in (0,1)$, and $(\\mathsf{y}_1,\\mathsf{y}_2)=(0,1)$,\nthe $\\Gamma_1$-region is $\\Gamma_1(\\mathcal{X}_n,2,c)=( X_{(n)}\/2,c ] \\cup [ c,(1+X_{(1)})\/2)$.\nNotice that $( X_{(n)}\/2,c ]$ or $[ c,(1+X_{(1)})\/2 )$\ncould be empty, but not simultaneously.\n\\begin{theorem}\n\\label{thm:r=2 and M_c=c}\nFor $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$ data\nand $n \\ge 1$,\nwe have\n$\\gamma_{{}_{n,2}}(\\mathcal{U},2,c) \\sim 1+\\BER(p_{{}_n}(\\mathcal{U},2,c))$\nwhere\n$p_{{}_n}(\\mathcal{U},2,c) =\n\\nu_{1,n}(c)\\mathbf{I}(c \\in (0,1\/3])+\n\\nu_{2,n}(c)\\mathbf{I}(c \\in (1\/3,1\/2])+\n\\nu_{3,n}(c)\\mathbf{I}(c \\in (1\/2,2\/3])+\n\\nu_{4,n}(c)\\mathbf{I}(c \\in (2\/3,1))\n$\nwith\n$$\\nu_{1,n}(c)= \\frac{2}{3} \\left(c+\\frac{1}{2}\\right)^n-\\frac{8}{9} 4^{-n}-\\frac{2}{3}\\left(\\frac{1-c}{2}\\right)^n\n+\\frac{1}{9}(1-3c)^n-\\frac{2}{9}\\left(3c-\\frac{1}{2}\\right)^n,$$\n$$\\nu_{2,n}(c)= \\frac{2}{3} \\left(c+\\frac{1}{2}\\right)^n-\\frac{8}{9} 4^{-n}-\\frac{2}{3}\\left(\\frac{1-c}{2}\\right)^n-\n\\frac{2}{9}\\left(\\frac{3c-1}{2}\\right)^n-\\frac{2}{9}\\left(3c-\\frac{1}{2}\\right)^n,$$\n$\\nu_{3,n}(c)=\\nu_{2,n}(1-c)$, and $\\nu_{4,n}(c)=\\nu_{1,n}(1-c)$.\nFurthermore,\n$\\gamma_{{}_{n,2}}(\\mathcal{U},2,0)=\\gamma_{{}_{n,2}}(\\mathcal{U},2,1)=1$ for all $n \\ge 1$.\n\\end{theorem}\n\nObserve that\nthe parameter $p_{{}_n}(\\mathcal{U},2,c)$ is continuous in $c \\in (0,1)$ for fixed $n < \\infty$,\nbut there are jumps (hence discontinuities) in $p_{{}_n}(\\mathcal{U},2,c)$ at $c \\in \\{0,1\\}$.\nIn particular,\n$\\lim_{c \\rightarrow 0}p_{{}_n}(\\mathcal{U},2,c)=\n\\lim_{c 
\\rightarrow 1}p_{{}_n}(\\mathcal{U},2,c)=\n\\lim_{c \\rightarrow 0}\\nu_{1,n}(c)=\n\\lim_{c \\rightarrow 1}\\nu_{4,n}(c)=\n\\frac{1}{9}-\\frac{2}{9}(-2)^{-n}-\\frac{8}{9}4^{-n}$,\nbut\n$p_{{}_n}(\\mathcal{U},2,0)=p_{{}_n}(\\mathcal{U},2,1)=0$ for all $n \\ge 1$.\nFor $c=1\/2$,\nwe have $p_{{}_n}(\\mathcal{U},2,c)=4\/9-(16\/9) \\, 4^{-n}$,\nhence the distribution of $\\gamma_{{}_{n,2}}(\\mathcal{U},2,c=1\/2)$ is the same as in Equation \\eqref{eqn:finite-sample-unif}.\n\nIn the limit\nas $n \\rightarrow \\infty$,\nfor $c \\in [0,1]$,\nwe have\n\\begin{equation*}\n\\gamma_{{}_{n,2}}(\\mathcal{U},2,c) \\sim\n\\left\\lbrace \\begin{array}{ll}\n 1+\\BER(4\/9), & \\text{for $c = 1\/2$,}\\\\\n 1, & \\text{for $c \\not= 1\/2$.}\\\\\n\\end{array} \\right.\n\\end{equation*}\nObserve also the interesting behavior of the asymptotic\ndistribution of $\\gamma_{{}_{n,2}}(\\mathcal{U},2,c)$ around $c=1\/2$.\nThe parameter $p(\\mathcal{U},2,c)$ is continuous in $c \\in [0,1] \\setminus \\{1\/2\\}$\n(in fact it is zero there),\nbut there is a jump (hence discontinuity) in $p(\\mathcal{U},2,c)$ at $c=1\/2$,\nsince $p(\\mathcal{U},2,1\/2)=4\/9$\nand $p(\\mathcal{U},2,c)=0$ for $c \\not= 1\/2$.\nHence\nfor $c = 1\/2$, the asymptotic distribution is non-degenerate,\nand\nfor $c \\not= 1\/2$, the asymptotic distribution is degenerate.\nThat is, for $c=1\/2 \\pm \\varepsilon$ with $\\varepsilon>0$ arbitrarily small,\nalthough the exact distribution is non-degenerate,\nthe asymptotic distribution is degenerate.\n\n\n\\subsection{The Exact Distribution of the Domination Number of $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(r,1\/2)$-digraphs}\n\\label{sec:r-and-M_c=1\/2}\nFor $r \\ge 1$, $c=1\/2$, and $(\\mathsf{y}_1,\\mathsf{y}_2)=(0,1)$,\nthe $\\Gamma_1$-region is $\\Gamma_1(\\mathcal{X}_n,r,1\/2)=(X_{(n)}\/r,1\/2] \\cup [1\/2,(r-1+X_{(1)})\/r)$\nwhere\n$(X_{(n)}\/r,1\/2]$ or $[1\/2,(r-1+X_{(1)})\/r)$ could be empty, but not simultaneously.\n\\begin{theorem}\n\\label{thm:r 
and M_c=1\/2}\nFor $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$ data\nwith $n \\ge 1$,\nwe have\n$\\gamma_{{}_{n,2}}(\\mathcal{U},r,1\/2) \\sim 1+\\BER(p_{{}_n}(\\mathcal{U},r,1\/2))$\nwhere\n\\begin{eqnarray*}\np_{{}_n}(\\mathcal{U},r,1\/2)=\n\\begin{cases}\n\\frac{2\\,r}{(r+1)^2} \\left( \\left(\\frac{2}{r}\\right)^{n-1}-\\left(\\frac{r-1}{r^2} \\right)^{n-1} \\right) &\\text{for} \\quad r \\ge 2, \\\\\n1-\\frac{1+r^{2n-1}}{(2\\,r)^{n-1}(r+1)}+\\frac{(r-1)^n}{(r+1)^2} \\left( 1- \\left(\\frac{r-1}{2\\,r}\\right)^{n-1}\\right)\n&\\text{for} \\quad 1 \\le r < 2.\n\\end{cases}\n\\end{eqnarray*}\n\\end{theorem}\n\n\nNotice that for fixed $n < \\infty$,\nthe parameter\n$p_{{}_n}(\\mathcal{U},r,1\/2)$ is continuous in $r \\ge 1$.\nIn particular,\nfor $r=2$,\nwe have $p_{{}_n}(\\mathcal{U},2,1\/2)=4\/9-(16\/9) \\, 4^{-n}$,\nhence the distribution of $\\gamma_{{}_{n,2}}(\\mathcal{U},r=2,1\/2)$ is the same as in Equation \\eqref{eqn:finite-sample-unif}.\nFurthermore,\n$\\lim_{r \\rightarrow 1}p_{{}_n}(\\mathcal{U},r,1\/2)=\np_{{}_n}(\\mathcal{U},1,1\/2)=\n1-2^{1-n}$\nand\n$\\lim_{r \\rightarrow \\infty}p_{{}_n}(\\mathcal{U},r,1\/2)=\np_{{}_n}(\\mathcal{U},\\infty,1\/2)=\n0$.\n\nIn the limit as $n \\rightarrow \\infty$, we have\n\\begin{equation*}\n\\gamma_{{}_{n,2}}(\\mathcal{U},r,1\/2) \\sim\n\\left\\lbrace \\begin{array}{ll}\n 1 & \\text{for $r > 2$,}\\\\\n 1+\\BER(4\/9) & \\text{for $r = 2$,}\\\\\n 2 & \\text{for $1 \\le r < 2$.}\\\\\n\\end{array} \\right.\n\\end{equation*}\nObserve the interesting behavior of the asymptotic distribution\nof $\\gamma_{{}_{n,2}}(\\mathcal{U},r,1\/2)$ around $r=2$.\nThe parameter\n$p(\\mathcal{U},r,1\/2)$ is continuous (in fact piecewise constant) for $r \\in [1,\\infty) \\setminus \\{2\\}$.\nHence for $r \\not= 2$, the asymptotic distribution is degenerate,\nas $p(\\mathcal{U},r,1\/2) = 0$ for $r>2$\nand $\\gamma_{{}_{n,2}}(\\mathcal{U},r,1\/2) \\rightarrow 2$ w.p. 
1 for $r<2$.\nThat is, for $r=2 \\pm \\varepsilon$ with $\\varepsilon>0$ arbitrarily small,\nalthough the exact distribution is non-degenerate,\nthe asymptotic distribution is degenerate.\n\n\\subsection{The Distribution of the Domination Number of $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(r,c)$-digraphs}\n\\label{sec:r-and-M}\nFor $r \\ge 1$ and $c \\in (0,1)$,\nthe $\\Gamma_1$-region is $\\Gamma_1(\\mathcal{X}_n,r,c)=(X_{(n)}\/r,c] \\cup [c,(r-1+X_{(1)})\/r)$\nwhere $(X_{(n)}\/r,c]$ or $[c,(r-1+X_{(1)})\/r)$ could be empty, but not simultaneously.\n\\begin{theorem}\n\\label{thm:r and M}\n\\textbf{Main Result 1:}\nFor $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$ data with $n \\ge 1$, $r \\ge 1$, and $c \\in ((3-\\sqrt{5})\/2,1\/2)$,\nwe have\n$\\gamma_{{}_{n,2}}(\\mathcal{U},r,c) \\sim 1+\\BER(p_{{}_n}(\\mathcal{U},r,c))$\nwhere\n$p_{{}_n}(\\mathcal{U},r,c) =\\pi_{1,n}(r,c) \\,\\mathbf{I}(r \\ge 1\/c) + \\pi_{2,n}(r,c) \\,\\mathbf{I}(1\/(1-c) \\le r < 1\/c)+\n\\pi_{3,n}(r,c) \\,\\mathbf{I}((1-c)\/c \\le r < 1\/(1-c)) + \\pi_{4,n}(r,c) \\,\\mathbf{I}( 1 \\le r < (1-c)\/c)\n$\nwith\n$$\\pi_{1,n}(r,c)=\\frac{2\\,r}{(r+1)^2} \\left( \\left(\\frac{2}{r}\\right)^{n-1}-\\left(\\frac{r-1}{r^2} \\right)^{n-1} \\right),$$\n\\begin{multline*}\n\\pi_{2,n}(r,c)=\n\\frac{1}{(r+1)r^{n-1}}\\left[ (1+c\\,r)^n-(1-c)^n -\\frac{1}{r+1}(c\\,r^2-r+c\\,r+1)^n\n-\\frac{(r-1)^{n-1}}{r+1}\\left(\\frac{1}{r^{n-2}}+(c\\,r-1+c)^n\\right)\\right],\n\\end{multline*}\n\\begin{multline*}\n\\pi_{3,n}(r,c)=\n1+\\frac{(r-1)^{n-1}}{(r+1)^2}\n\\left[(r-1)-\\frac{1}{r^{n-1}}((c\\,r-1+c)^n+(r-c\\,r-c)^n) \\right]\n-\\frac{1}{r+1}[c^n+(1-c)^n]\\left(r^n+\\frac{1}{r^{n-1}}\\right),\n\\end{multline*}\nand\n\\begin{multline*}\n\\pi_{4,n}(r,c)=\n1+\\frac{(r-1)^{n-1}}{(r+1)^2}(1-c\\,r-c)^n\\left( r^2-\\left(\\frac{-1}{r}\\right)^{n-1} 
(r^2-1)\\right)+\n\\frac{(r-1)^n}{(r+1)^2}\\left(1-r\\left(\\frac{r-c\\,r-c}{r}\\right)^n\\right)\\\\\n-\\frac{1}{r+1}[c^n+(1-c)^n]\\left(r^n-\\frac{1}{r^{n-1}}\\right).\n\\end{multline*}\nAnd for $c \\in (0,(3-\\sqrt{5})\/2]$,\nwe have\n$p_{{}_n}(\\mathcal{U},r,c) =\\vartheta_{1,n}(r,c) \\,\\mathbf{I}(r \\ge 1\/c) + \\vartheta_{2,n}(r,c) \\,\\mathbf{I}((1-c)\/c \\le r < 1\/c)+\n\\vartheta_{3,n}(r,c) \\,\\mathbf{I}(1\/(1-c) \\le r < (1-c)\/c) + \\vartheta_{4,n}(r,c) \\,\\mathbf{I}( 1 \\le r < 1\/(1-c))\n$\nwhere\n$\\vartheta_{1,n}(r,c)=\\pi_{1,n}(r,c)$,\n$\\vartheta_{2,n}(r,c)=\\pi_{2,n}(r,c)$,\n$\\vartheta_{4,n}(r,c)=\\pi_{4,n}(r,c)$,\nand\n\\begin{multline*}\n\\vartheta_{3,n}(r,c)=\n\\frac{r}{(r+1)^2}\\Biggl[\n(r-1)^{n-1}(1-c\\,r-c)^n\\left(r+(r^2-1)\\left(\\frac{-1}{r}\\right)^n\\right)-\n\\left(\\frac{r-1}{r^2}\\right)^{n-1}-\n\\left(\\frac{c\\,r^2-c+c\\,r+1}{r}\\right)^n+\\\\\n\\frac{r+1}{r^n}[(1+c\\,r)^n-(1-c)^n]\\Biggr].\n\\end{multline*}\nFurthermore, we have\n$\\gamma_{{}_{n,2}}(\\mathcal{U},r,0)=\\gamma_{{}_{n,2}}(\\mathcal{U},r,1)=1$ for all $n \\ge 1$.\n\\end{theorem}\n\n\nSome remarks are in order for the Main Result 1.\nThe partitioning of $c \\in (0,1\/2)$\nas $c \\in (0,(3-\\sqrt{5})\/2]$ and $c \\in ((3-\\sqrt{5})\/2,1\/2)$\nis due to the relative positions of $1\/(1-c)$ and $(1-c)\/c$.\nFor $c \\in ((3-\\sqrt{5})\/2,1\/2)$, we have $1\/(1-c) > (1-c)\/c$\nand\nfor $c \\in (0,(3-\\sqrt{5})\/2)$,\nwe have $1\/(1-c) < (1-c)\/c$.\nAt $c=(3-\\sqrt{5})\/2$,\n$1\/(1-c) = (1-c)\/c = (\\sqrt{5}+1)\/2$\nand only\n$\\pi_{1,n}(r,(3-\\sqrt{5})\/2)=\\vartheta_{1,n}(r,(3-\\sqrt{5})\/2)$,\n$\\pi_{2,n}(r,(3-\\sqrt{5})\/2)=\\vartheta_{2,n}(r,(3-\\sqrt{5})\/2)$,\nand\n$\\pi_{4,n}(r,(3-\\sqrt{5})\/2)=\\vartheta_{4,n}(r,(3-\\sqrt{5})\/2)$ terms survive.\nAlso, notice the $(-1)^{n}$ terms in $\\pi_{4,n}(r,c)$ and $\\vartheta_{3,n}(r,c)$\nwhich might suggest fluctuations of these probabilities as $n$ changes (increases).\nHowever,\nas $n$ increases,\n$\\pi_{4,n}(r,c)$ 
strictly increases towards 1\n(see Figure \\ref{fig:pi4-rc}),\nand\n$\\vartheta_{3,n}(r,c)$\ndecreases (strictly decreases for $n \\ge 3$) towards 0\n(see Figure \\ref{fig:teta3-rc}).\n\n\\begin{figure}[ht]\n\\centering\n\\rotatebox{-90}{ \\resizebox{2.5 in}{!}{ \\includegraphics{ProbPi4rc.ps}}}\n\\caption{\n\\label{fig:pi4-rc}\nThe probability $\\pi_{4,n}(r,c)$ in Main Result 1\nwith $r=1.2$ and $c=0.4$ for $n=2,3,\\ldots,25$.\n}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\psfrag{teta}{\\Huge{$\\vartheta_{3,n}(r,c)$}}\n\\rotatebox{-90}{ \\resizebox{2.5 in}{!}{ \\includegraphics{ProbTeta3rc.ps}}}\n\\caption{\n\\label{fig:teta3-rc}\nThe probability $\\vartheta_{3,n}(r,c)$ in Main Result 1\nwith $r=2$ and $c=0.3$ for $n=2,3,\\ldots,25$.\n}\n\\end{figure}\n\n\n\\begin{remark}\nBy symmetry,\nin Theorem \\ref{thm:r and M},\nfor $c \\in (1\/2,(\\sqrt{5}-1)\/2)$,\nwe have\n$p_{{}_n}(\\mathcal{U},r,c) =\\pi_{1,n}(r,1-c) \\,\\mathbf{I}(r \\ge 1\/(1-c)) + \\pi_{2,n}(r,1-c) \\,\\mathbf{I}(1\/c \\le r < 1\/(1-c))+\n\\pi_{3,n}(r,1-c) \\,\\mathbf{I}(c\/(1-c) \\le r < 1\/c) + \\pi_{4,n}(r,1-c) \\,\\mathbf{I}( 1 \\le r < c\/(1-c))\n$\nand\nfor $c \\in [(\\sqrt{5}-1)\/2,1)$,\n$p_{{}_n}(\\mathcal{U},r,c) =\\vartheta_{1,n}(r,1-c) \\,\\mathbf{I}(r \\ge 1\/(1-c)) + \\vartheta_{2,n}(r,1-c) \\,\\mathbf{I}(c\/(1-c) \\le r < 1\/(1-c))+\n\\vartheta_{3,n}(r,1-c) \\,\\mathbf{I}(1\/c \\le r < c\/(1-c)) + \\vartheta_{4,n}(r,1-c) \\,\\mathbf{I}( 1 \\le r < 1\/c)\n$.\n$\\square$\n\\end{remark}\n\nObserve that $\\lim_{r \\rightarrow 1} p_{{}_n}(\\mathcal{U},r,c)=\\lim_{r \\rightarrow 1} \\pi_{4,n}(r,c)=1$\nas expected.\nFor fixed $1 < n < \\infty$,\nthe probability $p_{{}_n}(\\mathcal{U},r,c)$\nis continuous in $(r,c) \\in \\{(r,c) \\in \\mathbb{R}^2: r \\ge 1, 0 < c < 1\\}$.\nIn particular,\nfor $c \\in ((3-\\sqrt{5})\/2,1\/2)$,\nas $(r,c) \\rightarrow (2,1\/2)$ in $\\{(r,c) \\in \\mathbb{R}^2: r \\ge 1\/c\\}$,\n$p_{{}_n}(\\mathcal{U},r,c)=\\pi_{1,n}(r,c) \\rightarrow 4\/9-(16\/9) \\, 
4^{-n}$;\nas $(r,c) \\rightarrow (2,1\/2)$ in $\\{(r,c) \\in \\mathbb{R}^2: 1\/(1-c) \\le r < 1\/c\\}$,\n$p_{{}_n}(\\mathcal{U},r,c)=\\pi_{2,n}(r,c) \\rightarrow 4\/9-(16\/9) \\, 4^{-n}$;\nand\nas $(r,c) \\rightarrow (2,1\/2)$ in $\\{(r,c) \\in \\mathbb{R}^2: (1-c)\/c \\le r < 1\/(1-c)\\}$,\n$p_{{}_n}(\\mathcal{U},r,c)=\\pi_{3,n}(r,c) \\rightarrow 4\/9-(16\/9) \\, 4^{-n}$.\nThe limit $(r,c) \\rightarrow (2,1\/2)$ is not possible for $\\{(r,c) \\in \\mathbb{R}^2: 1 \\le r < (1-c)\/c\\}$.\nFor $c \\in (0,(3-\\sqrt{5})\/2]$,\n$(r,c) \\rightarrow (2,1\/2)$ cannot occur either.\nAnd for $(r,c)=(2,1\/2)$,\nthe distribution of $\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)$ is $1+\\BER(p_{{}_n}(\\mathcal{U},2,1\/2))$,\nwhere $p_{{}_n}(\\mathcal{U},2,1\/2)=4\/9-(16\/9) \\, 4^{-n}$ as in Equation \\eqref{eqn:finite-sample-unif}.\nTherefore for fixed $1 < n < \\infty$,\nthe probability $p_{{}_n}(\\mathcal{U},r,c)$ is continuous at $(r,c)=(2,1\/2)$ as well.\nIn the limit as $n \\rightarrow \\infty$, however, we obtain the following asymptotic distribution,\nwhere $\\tau:=\\max(c,1-c)$.\n\n\\begin{theorem}\n\\textbf{Main Result 2:}\nFor $\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$ data with $c \\in (0,1)$ and $\\tau=\\max(c,1-c)$,\nas $n \\rightarrow \\infty$,\nwe have\n\\begin{equation}\n\\gamma_{{}_{n,2}}(\\mathcal{U},r,c) \\sim\n\\left\\lbrace \\begin{array}{ll}\n 1+\\BER(1\/(1+\\tau)), & \\text{for $r = 1\/\\tau$,}\\\\\n 1, & \\text{for $r > 1\/\\tau$,}\\\\\n 2, & \\text{for $1 \\le r < 1\/\\tau$.}\\\\\n\\end{array} \\right.\n\\end{equation}\n\\end{theorem}\n\nNotice the interesting behavior of the asymptotic distribution\nof $\\gamma_{{}_{n,2}}(\\mathcal{U},r,c)$ around $r=1\/\\tau$ for any given $c \\in (0,1)$.\nThe asymptotic distribution is non-degenerate only for $r = 1\/\\tau$.\nFor $r>1\/\\tau$,\n$\\lim_{n \\rightarrow \\infty}\\gamma_{{}_{n,2}}(\\mathcal{U},r,c) = 1$ w.p. 1,\nand\nfor $1 \\le r < 1\/\\tau$, $\\lim_{n \\rightarrow \\infty}\\gamma_{{}_{n,2}}(\\mathcal{U},r,c) = 2$ w.p. 
1.\nThe critical value $r=1\/\\tau$ corresponds to\n$c=(r-1)\/r$, if $c \\in(0,1\/2)$ (i.e., $\\tau=1-c$)\nand\n$c=1\/r$, if $c \\in(1\/2,1)$ (i.e., $\\tau=c$),\nand these are only possible for $r \\in (1,2)$.\nThat is, for $r=(1\/\\tau) \\pm \\varepsilon$ with $\\varepsilon>0$ arbitrarily small,\nalthough the exact distribution is non-degenerate,\nthe asymptotic distribution is degenerate.\nThe parameter $p(\\mathcal{U},r,c)$ is continuous in $r$ and $c$ for $(r,c) \\in S \\setminus \\{(1\/\\tau,c)\\}$\nwhere $S:=[1,\\infty) \\times (0,1)$,\nand there is a jump (hence discontinuity) in the probability $p(\\mathcal{U},r,c)$\nat $r=1\/\\tau$,\nsince $p(\\mathcal{U},1\/\\tau,c)=1\/(1+\\tau)=r\/(r+1)$.\nTherefore, given a centrality parameter $c \\in (0,1)$,\nwe can choose the expansion parameter $r$\nfor which the asymptotic distribution is non-degenerate,\nand vice versa.\n\nThere is yet another interesting behavior of the asymptotic distribution\naround $(r,c)=(2,1\/2)$.\nThe parameter $p(\\mathcal{U},r,c)$\nhas jumps at $c=1\/r$ and $(r-1)\/r$ for $r \\in [1,2]$\nwith $p(\\mathcal{U},r,1\/r)=p(\\mathcal{U},r,(r-1)\/r)=r\/(r+1)$.\nThat is,\nfor fixed $(r,c) \\in S$,\n$\\lim_{n \\rightarrow \\infty}p_{{}_n}(\\mathcal{U},r,(r-1)\/r)=\n\\lim_{n \\rightarrow \\infty}p_{{}_n}(\\mathcal{U},r,1\/r)= r\/(r+1)$.\nLetting $(r,c) \\rightarrow (2,1\/2)$\n(i.e., $r \\rightarrow 2$)\nwe get\n$p(\\mathcal{U},r,(r-1)\/r) \\rightarrow 2\/3$\nand\n$p(\\mathcal{U},r,1\/r) \\rightarrow 2\/3$,\nbut\n$p(\\mathcal{U},2,1\/2)=4\/9$.\nHence for $r \\in [1,2)$\nthe distributions\nof $\\gamma_{{}_{n,2}}(\\mathcal{U},r,(r-1)\/r)$ and $\\gamma_{{}_{n,2}}(\\mathcal{U},r,1\/r)$\nare identical and both\nconverge to $1+\\BER(r\/(r+1))$,\nbut the distribution of\n$\\gamma_{{}_{n,2}}(\\mathcal{U},2,1\/2)$\nconverges to $1+\\BER(4\/9)$ as $n \\rightarrow \\infty$.\nIn other words,\n$p(\\mathcal{U},r,(r-1)\/r)=p(\\mathcal{U},r,1\/r)$\nhas another jump at $r=2$ (which corresponds to $(r,c)=(2,1\/2)$).\nThis interesting behavior might be due to 
the symmetry around $c=1\/2$.\nBecause for $c \\in (0,1\/2)$,\nwith $r=1\/(1-c)$,\nfor sufficiently large $n$,\na point $X_i$ in $(c,1)$ can dominate all the points in $\\mathcal{X}_n$\n(implying $\\gamma_{{}_{n,2}}(\\mathcal{U},r,(r-1)\/r)=1$),\nbut no point in $(0,c)$ can dominate all points a.s.\nLikewise,\nfor $c \\in (1\/2,1)$ with $r=1\/c$,\nfor sufficiently large $n$,\na point $X_i$ in $(0,c)$ can dominate all the points in $\\mathcal{X}_n$\n(implying $\\gamma_{{}_{n,2}}(\\mathcal{U},r,1\/r)=1$),\nbut no point in $(c,1)$ can dominate all points a.s.\nHowever,\nfor $c=1\/2$ and $r=2$,\nfor sufficiently large $n$,\npoints to the left or right of $c$\ncan dominate all other points in $\\mathcal{X}_n$.\n\n\n\\section{The Distribution of the Domination Number for $\\mathcal F(\\mathbb{R})$-random $\\mathscr D_{n,2}(r,c)$-digraphs}\n\\label{sec:non-uniform}\nLet $\\mathcal F(\\mathsf{y}_1,\\mathsf{y}_2)$ be a family of continuous distributions\nwith support in $\\mathcal{S}_F \\subseteq (\\mathsf{y}_1,\\mathsf{y}_2)$.\nConsider a distribution function $F \\in \\mathcal F(\\mathsf{y}_1,\\mathsf{y}_2)$.\nFor simplicity, assume $\\mathsf{y}_1=0$ and $\\mathsf{y}_2=1$.\nLet $\\mathcal{X}_n$ be a random sample from $F$,\n$\\Gamma_1(\\mathcal{X}_n,r,c)=(\\delta_1,\\delta_2)$,\n$p_{{}_n}(F,r,c):=P(\\gamma_{{}_{n,2}}(F,r,c)=2)$,\nand $p(F,r,c):=\\lim_{n \\rightarrow \\infty}P(\\gamma_{{}_{n,2}}(F,r,c)=2)$.\nThe exact (i.e., finite sample) and asymptotic distributions of $\\gamma_{{}_{n,2}}(F,r,c)$\nare $1+\\BER\\left(p_{{}_n}(F,r,c)\\right)$ and $1+\\BER\\left(p(F,r,c)\\right)$, respectively.\nThat is, for finite $n > 1$, $r \\in [1,\\infty)$, and $c \\in (0,1)$,\nwe have\n\\begin{equation}\n\\gamma_{{}_{n,2}}(F,r,c)= \\left\\lbrace \\begin{array}{ll}\n 1 & \\text{w.p. $1-p_{{}_n}(F,r,c)$},\\\\\n 2 & \\text{w.p. 
$p_{{}_n}(F,r,c)$}.\n\\end{array} \\right.\n\\end{equation}\nMoreover,\n$\\gamma_{{}_{1,2}}(F,r,c)=1$ for all $r \\ge 1$ and $c \\in [0,1]$,\n$\\gamma_{{}_{n,2}}(F,r,0)=\\gamma_{{}_{n,2}}(F,r,1)=1$ for all $n \\ge 1$ and $r \\ge 1$,\n$\\gamma_{{}_{n,2}}(F,\\infty,c)=1$ for all $n \\ge 1$ and $c \\in [0,1]$,\nand\n$\\gamma_{{}_{n,2}}(F,1,c)=k_4$ for all $n \\ge 1$ and $c \\in (0,1)$\nwhere $k_4$ is as in Theorem \\ref{thm:gamma-Dnm-r=1-M} with $m=2$.\nThe asymptotic distribution is similar with\n$p_{{}_n}(F,r,c)$ being replaced by $p(F,r,c)$.\nThe special cases are similar in the asymptotics\nwith the exception that\n$p(F,1,c)=1$ for all $c \\in (0,1)$.\nThe finite sample mean and variance of $\\gamma_{{}_{n,2}}(F,r,c)$ are given by\n$1+p_{{}_n}(F,r,c)$ and $p_{{}_n}(F,r,c)\\,(1-p_{{}_n}(F,r,c))$, respectively;\nand the asymptotic mean and variance of $\\gamma_{{}_{n,2}}(F,r,c)$ are given by\n$1+p(F,r,c)$ and $p(F,r,c)\\,(1-p(F,r,c))$, respectively.\n\nGiven $X_{(1)}=x_1$ and $X_{(n)}=x_n$,\nthe probability of $\\gamma_{{}_{n,2}}(F,r,c)=2$ (i.e., $p_{{}_n}(F,r,c)$) is\n$\\displaystyle ( 1-[F(\\delta_2)-F(\\delta_1)]\/[F(x_n)-F(x_1)])^{(n-2)}$\nprovided that $\\Gamma_1(\\mathcal{X}_n,r,c)=(\\delta_1,\\delta_2) \\not= \\emptyset$;\nif $\\Gamma_1(\\mathcal{X}_n,r,c) = \\emptyset$,\nthen $\\gamma_{{}_{n,2}}(F,r,c)=2$.\nThen\n\\begin{equation}\n\\label{eqn:Pg2-int-F}\nP(\\gamma_{{}_{n,2}}(F,r,c)=2,\\;\\Gamma_1(\\mathcal{X}_n,r,c) \\not= \\emptyset)=\n\\int\\int_{\\mathcal{S}_1}f_{1n}(x_1,x_n)\\left(1-\\frac{F(\\delta_2)-F(\\delta_1)}{F(x_n)-F(x_1)}\\right)^{(n-2)}\\,dx_ndx_1\n\\end{equation}\nwhere $\\mathcal{S}_1=\\{0<x_1<x_n<1\\}$ and $f_{1n}(x_1,x_n)$ is the joint probability density function of $(X_{(1)},X_{(n)})$.\n\nThe following proposition compares the domination number for general $F$ with that for uniform data.\n\\begin{proposition}\n\\label{prop:stoch-order}\nSuppose $F$ is a continuous distribution with support $\\mathcal{S}(F) \\subseteq (0,1)$ such that for all $x \\in (0,1)$,\n\\begin{equation}\n\\label{eqn:stoch-order}\nF(x\/r) < F(x)\/r\n\\quad \\text{and} \\quad\nF\\bigl( (x+r-1)\/r \\bigr) > (F(x)+r-1)\/r.\n\\end{equation}\nThen $\\gamma_{{}_{n,2}}(F,r,c)<^{ST}\\gamma_{{}_{n,2}}(\\mathcal{U},r,F(c))$\nwhere $<^{ST}$ stands for ``stochastically smaller than''.\nIf $<$'s in expression \\eqref{eqn:stoch-order} are replaced with $>$'s,\nthen $\\gamma_{{}_{n,2}}(\\mathcal{U},r,F(c))<^{ST}\\gamma_{{}_{n,2}}(F,r,c)$.\nIf $<$'s in expression \\eqref{eqn:stoch-order} are replaced with $=$'s,\nthen $\\gamma_{{}_{n,2}}(F,r,c)\\stackrel{d}{=}\\gamma_{{}_{n,2}}(\\mathcal{U},r,F(c))$ where $\\stackrel{d}{=}$ stands\nfor equality in distribution.\n\\end{proposition}\n\n\\noindent 
{\\bfseries Proof:}\nLet $U_i$ and\n$U_{(i)}$ be as in the proof of Proposition \\ref{prop:NF vs NPE}.\nThen the parameter $c$ for $N(\\cdot,r,c)$ with $\\mathcal{X}_n$ in $(0,1)$\ncorresponds to $F(c)$ for $\\mathcal{U}_n$.\nThen the $\\Gamma_1$-region for $\\mathcal{U}_n$ based on $N(\\cdot,r,F(c))$ is\n$\\Gamma_1(\\mathcal{U}_n,r,F(c))=(U_{(n)}\/r,F(c) ] \\cup [F(c),\\left(U_{(1)}+r-1\\right)\/r )$;\nlikewise, $\\Gamma_1(\\mathcal{X}_n,r,c)=(X_{(n)}\/r,M_c ] \\cup [M_c,\\left(X_{(1)}+r-1\\right)\/r )$.\nBut the conditions in \\eqref{eqn:stoch-order} imply that\n$\\Gamma_1(\\mathcal{U}_n,r,F(c)) \\subsetneq F(\\Gamma_1(\\mathcal{X}_n,r,c))$,\nsince such an $F$ preserves order.\nSo $\\mathcal{U}_n \\cap F(\\Gamma_1(\\mathcal{X}_n,r,c)) = \\emptyset$ implies that\n$\\mathcal{U}_n \\cap \\Gamma_1(\\mathcal{U}_n,r,F(c)) = \\emptyset$ and\n$\\mathcal{U}_n \\cap F(\\Gamma_1(\\mathcal{X}_n,r,c)) = \\emptyset$ iff $\\mathcal{X}_n \\cap \\Gamma_1(\\mathcal{X}_n,r,c) = \\emptyset$.\nHence\n$$p_{{}_n}(F,r,c)=P(\\mathcal{X}_n \\cap \\Gamma_1(\\mathcal{X}_n,r,c) = \\emptyset)<\nP(\\mathcal{U}_n \\cap \\Gamma_1(\\mathcal{U}_n,r,F(c)) = \\emptyset)=p_{{}_n}(\\mathcal{U},r,F(c)).$$\nThen $\\gamma_{{}_{n,2}}(F,r,c)<^{ST}\\gamma_{{}_{n,2}}(\\mathcal{U},r,F(c))$ follows.\nThe other cases follow similarly.\n$\\blacksquare$\n\n\n\\begin{remark}\nWe can also find the exact distribution of $\\gamma_{{}_{n,2}}(F,r,c)$\nfor $F$ whose pdf is piecewise constant\nwith support in $(0,1)$ as in \\cite{ceyhan:dom-num-CCCD-NonUnif}.\nNote that the simplest of such distributions is the uniform distribution $\\mathcal{U}(0,1)$.\nThe exact distribution of $\\gamma_{{}_{n,2}}(F,r,c)$ for (piecewise) polynomial\n$f(x)$ with at least one piece of degree 1 or higher and with support in $(0,1)$\ncan be obtained using the multinomial expansion of the term $(\\cdot)^{n-2}$\nin Equation \\eqref{eqn:integrand} with careful bookkeeping.\nHowever, the resulting expression for $p_{{}_n}(F,r,c)$ is extremely 
lengthy\nand not that informative (see \\cite{ceyhan:dom-num-CCCD-NonUnif}).\n\nFor fixed $n$, one can obtain $p_{{}_n}(F,r,c)$ for $F$\n(omitted for the sake of brevity)\nby numerical integration of the below expression.\n\\begin{eqnarray*}\np_{{}_n}(F,r,c)=P\\bigl( \\gamma_{{}_{n,2}}(F,r,c)=2 \\bigr)&=&\n\\int\\int_{\\mathcal{S}(F)\\setminus(\\delta_1,\\delta_2)}H(x_1,x_n)\\,dx_ndx_1,\n\\end{eqnarray*}\nwhere $H(x_1,x_n)$ is given in Equation $\\eqref{eqn:integrand}$. $\\square$\n\\end{remark}\n\nRecall the $\\mathcal F(\\mathbb{R}^d)$-random $\\mathscr D_{n,m}(r,c)$-digraphs.\nWe call the digraph which obtains\nin the special case of $\\mathcal{Y}_2=\\{\\mathsf{y}_1,\\mathsf{y}_2\\}$ and support of $F_X$ in $(\\mathsf{y}_1,\\mathsf{y}_2)$,\n\\emph{$\\mathcal F(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(r,c)$-digraph}.\nBelow, we provide asymptotic results\npertaining to the distribution of such digraphs.\n\n\\subsection{The Asymptotic Distribution of the Domination Number of\n$\\mathcal F(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(r,c)$-digraphs}\n\\label{sec:asy-dist-generalF}\nAlthough the exact distribution of $\\gamma_{{}_{n,2}}(F,r,c)$ is not analytically available\nin a simple closed form for $F$\nwhose density is not piecewise constant,\nthe asymptotic distribution of $\\gamma_{{}_{n,2}}(F,r,c)$ is available for larger families of distributions.\nFirst, we present the asymptotic distribution of $\\gamma_{{}_{n,2}}(F,r,c)$ for\n$\\mathscr D_{n,2}(r,c)$-digraphs with $\\mathcal{Y}_2=\\{\\mathsf{y}_1,\\mathsf{y}_2\\} \\subset \\mathbb{R}$ with $\\mathsf{y}_1<\\mathsf{y}_2$\nfor general $F$ with support $\\mathcal{S}(F) \\subseteq (\\mathsf{y}_1,\\mathsf{y}_2)$.\nThen we will extend this to the case with $\\mathcal{Y}_m \\subset \\mathbb{R}$ with $m>2$.\n\nLet $c \\in (0,1\/2)$ and $r \\in (1,2)$.\nThen for $c=(r-1)\/r$, i.e., $M_c=\\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r$,\nwe define the family of 
distributions\n\\begin{equation*}\n\\mathcal F_1\\bigl(\\mathsf{y}_1,\\mathsf{y}_2\\bigr) :=\n\\Bigl \\{\\text{$F$ :\n$(\\mathsf{y}_1,\\mathsf{y}_1+\\varepsilon) \\cup \\bigl( M_c,M_c+\\varepsilon \\bigr) \\subseteq \\mathcal{S}(F)\\subseteq(\\mathsf{y}_1,\\mathsf{y}_2)$\nfor some $\\varepsilon \\in (0,c)$ with $c=(r-1)\/r$} \\Bigr\\}.\n\\end{equation*}\nSimilarly,\nlet $c \\in (1\/2,1)$ and $r \\in (1,2)$.\nThen for $c=1\/r$, i.e., $M_c=\\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r$ with $r \\in (1,2)$,\nwe define\n\\begin{equation*}\n\\mathcal F_2\\bigl(\\mathsf{y}_1,\\mathsf{y}_2\\bigr) :=\n\\Bigl \\{\\text{$F$ :\n$(\\mathsf{y}_2-\\varepsilon,\\mathsf{y}_2)\\cup \\bigl(M_c-\\varepsilon,M_c\\bigr) \\subseteq \\mathcal{S}(F)\\subseteq(\\mathsf{y}_1,\\mathsf{y}_2)$\nfor some $\\varepsilon \\in (0,1-c)$ with $c=1\/r$} \\Bigr\\}.\n\\end{equation*}\n\nLet the $k^{th}$ order right (directed) derivative at $x$ be defined as\n$f^{(k)}(x^+):=\\lim_{h \\rightarrow 0^+}\\frac{f^{(k-1)}(x+h)-f^{(k-1)}(x)}{h}$\nfor all $k \\ge 1$ and the right limit at $u$ be defined as $f(u^+):=\\lim_{h \\rightarrow 0^+}f(u+h)$.\nThe left derivatives and limits are defined similarly with $+$'s being replaced by $-$'s.\n\n\\begin{theorem}\n\\label{thm:kth-order-gen-(r-1)\/r}\n\\textbf{Main Result 3:}\nLet $\\mathcal{Y}_2=\\{\\mathsf{y}_1,\\mathsf{y}_2\\} \\subset \\mathbb{R}$ with $-\\infty < \\mathsf{y}_1 < \\mathsf{y}_2<\\infty$,\n$\\mathcal{X}_n=\\{X_1,X_2,\\ldots,X_n\\}$ with $X_i \\stackrel {iid}{\\sim} F \\in \\mathcal F_1(\\mathsf{y}_1,\\mathsf{y}_2)$,\nand $c \\in (0,1\/2)$.\nLet $D_{n,2}$ be the $\\mathcal F_1(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(r,c)$-digraph\nbased on $\\mathcal{X}_n$ and $\\mathcal{Y}_2$\n\\begin{itemize}\n\\item[(i)]\nThen for $n>1$, $r \\in (1,\\infty)$, and $c=(r-1)\/r$\nwe have $\\gamma_{{}_{n,2}}(F,r,(r-1)\/r) \\sim 1+ \\BER\\bigl(p_{{}_n}(F,r,(r-1)\/r)\\bigr)$.\nNote also that $\\gamma_{{}_{1,2}}(F,r,(r-1)\/r)=1$ for all $r \\ge 1$;\nfor 
$r=1$,\nwe have $\\gamma_{{}_{n,2}}(F,1,0)=1$ for all $n \\ge 1$ and\nfor $r = \\infty$,\nwe have $\\gamma_{{}_{n,2}}(F,\\infty,1)=1$ for all $n \\ge 1$.\n\n\\item[(ii)]\nFurthermore,\nsuppose $k \\ge 0$ is the smallest integer for which\n$F(\\cdot)$ has continuous right derivatives up to order $(k+1)$ at $\\mathsf{y}_1$,\n$\\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r$, and\n$f^{(k)}(\\mathsf{y}_1^+)+r^{-(k+1)}\\,f^{(k)}\\left( \\left( \\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^+ \\right) \\not= 0$\nand $f^{(i)}(\\mathsf{y}_1^+)=f^{(i)}\\left( \\left( \\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^+ \\right)=0$ for all $i=0,1,2,\\ldots,(k-1)$\nand suppose also that $F(\\cdot)$ has a continuous left derivative at $\\mathsf{y}_2$.\nThen for bounded $f^{(k)}(\\cdot)$, $c=(r-1)\/r$, and $r \\in (1,2)$,\nwe have the following limit\n$$p(F,r,(r-1)\/r) =\n\\lim_{n \\rightarrow \\infty}p_{{}_n}(F,r,(r-1)\/r) =\n\\frac{f^{(k)}(\\mathsf{y}_1^+)}\n{f^{(k)}(\\mathsf{y}_1^+)+r^{-(k+1)}\\,\nf^{(k)}\\left( \\left( \\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^+ \\right)}.$$\n\\end{itemize}\n\\end{theorem}\n\n\nNote that in Theorem \\ref{thm:kth-order-gen-(r-1)\/r}\n\\begin{itemize}\n\\item\nwith $(\\mathsf{y}_1,\\mathsf{y}_2)=(0,1)$,\nwe have\n$p(F,r,(r-1)\/r) =\n\\frac{f^{(k)}(0^+)}\n{f^{(k)}(0^+)+r^{-(k+1)}\\,f^{(k)}\\left( \\left( (r-1)\/r \\right)^+ \\right)}$,\n\\item\nif $f^{(k)}(\\mathsf{y}_1^+)=0$ and\n$ f^{(k)} \\left( \\left( \\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^+ \\right)\\not=0$,\nthen $p_{{}_n}(F,r,(r-1)\/r)\\rightarrow 0$ as $n \\rightarrow \\infty$,\nat rate $O\\bigl( \\kappa_1(f)\\cdot n^{-(k+2)\/(k+1)} \\bigr)$\nwhere $\\kappa_1(f)$ is a constant depending on $f$\nand\n\\item\nif $f^{(k)}(\\mathsf{y}_1^+)\\not=0$ and\n$f^{(k)} \\left( \\left( \\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^+ \\right)=0$,\nthen $p_{{}_n}(F,r,(r-1)\/r) \\rightarrow 1$\nas $n \\rightarrow \\infty$, at rate 
$O\\bigl(\\kappa_1(f)\\cdot n^{-(k+2)\/(k+1)} \\bigr)$.\n\\end{itemize}\n\n\\begin{theorem}\n\\label{thm:kth-order-gen-1\/r}\n\\textbf{Main Result 4:}\nLet $\\mathcal{Y}_2=\\{\\mathsf{y}_1,\\mathsf{y}_2\\} \\subset \\mathbb{R}$ with $-\\infty < \\mathsf{y}_1 < \\mathsf{y}_2<\\infty$,\n$\\mathcal{X}_n=\\{X_1,X_2,\\ldots,X_n\\}$ with $X_i \\stackrel {iid}{\\sim} F \\in \\mathcal F_2(\\mathsf{y}_1,\\mathsf{y}_2)$,\nand $c \\in (1\/2,1)$.\nLet $D_{n,2}$ be the $\\mathcal F_2(\\mathsf{y}_1,\\mathsf{y}_2)$-random $\\mathscr D_{n,2}(r,c)$-digraph\nbased on $\\mathcal{X}_n$ and $\\mathcal{Y}_2$.\n\n\\begin{itemize}\n\\item[(i)]\nThen for $n>1$, $r \\in (1,\\infty)$, and $c=1\/r$\nwe have $\\gamma_{{}_{n,2}}(F,r,1\/r) \\sim 1+ \\BER\\bigl(p_{{}_n}(F,r,1\/r)\\bigr)$.\nNote also that $\\gamma_{{}_{1,2}}(F,r,1\/r)=1$ for all $r \\ge 1$;\nfor $r=1$,\nwe have $\\gamma_{{}_{n,2}}(F,1,1)=1$ for all $n \\ge 1$ and\nfor $r = \\infty$,\nwe have $\\gamma_{{}_{n,2}}(F,\\infty,0)=1$ for all $n \\ge 1$.\n\n\\item[(ii)]\nFurthermore,\nsuppose $\\ell \\ge 0$ is the smallest integer for which\n$F(\\cdot)$ has continuous left derivatives up to order $(\\ell+1)$ at $\\mathsf{y}_2$, and $\\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r$,\nand\n$f^{(\\ell)}(\\mathsf{y}_2^-)+r^{-(\\ell+1)}\\,f^{(\\ell)}\\left( \\left( \\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^- \\right) \\not= 0$\nand $f^{(i)}(\\mathsf{y}_2^-)=f^{(i)}\\left( \\left( \\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^- \\right)=0$ for all $i=0,1,2,\\ldots,(\\ell-1)$\nand suppose also that $F(\\cdot)$ has a continuous right derivative at $\\mathsf{y}_1$.\nAdditionally,\nfor bounded $f^{(\\ell)}(\\cdot)$,\n$c=1\/r$, and $r \\in (1,2)$\nwe have the following limit\n$$\np(F,r,1\/r) =\n\\lim_{n \\rightarrow \\infty}p_{{}_n}(F,r,1\/r) =\n\\frac{f^{(\\ell)}(\\mathsf{y}_2^-)}\n{f^{(\\ell)}(\\mathsf{y}_2^-)+r^{-(\\ell+1)}\\,f^{(\\ell)}\\left( \\left( \\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^- 
\\right)}.$$\n\\end{itemize}\n\\end{theorem}\n\n\nNote that in Theorem \\ref{thm:kth-order-gen-1\/r}\n\\begin{itemize}\n\\item\nwith $(\\mathsf{y}_1,\\mathsf{y}_2)=(0,1)$,\nwe have\n$p(F,r,1\/r) =\n\\frac{f^{(\\ell)}(1^-)}\n{f^{(\\ell)}(1^-)+r^{-(\\ell+1)}\\,f^{(\\ell)}\\left( \\left( 1\/r \\right)^- \\right)}$,\n\\item\nif $f^{(\\ell)}(\\mathsf{y}_2^-)=0$ and\n$f^{(\\ell)} \\left( \\left( \\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^- \\right) \\not=0$,\nthen $p_{{}_n}(F,r,1\/r)\\rightarrow 0$ as $n \\rightarrow \\infty$,\nat rate $O\\bigl( \\kappa_2(f)\\cdot n^{-(\\ell+2)\/(\\ell+1)} \\bigr)$\nwhere $\\kappa_2(f)$ is a constant depending on $f$\nand\n\\item\nif $f^{(\\ell)}(\\mathsf{y}_2^-)\\not=0$ and\n$f^{(\\ell)} \\left( \\left( \\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r \\right)^- \\right)=0$,\nthen $p_{{}_n}(F,r,1\/r) \\rightarrow 1$\nas $n \\rightarrow \\infty$, at rate $O\\bigl(\\kappa_2(f)\\cdot n^{-(\\ell+2)\/(\\ell+1)} \\bigr)$.\n\\end{itemize}\n\n\\begin{remark}\n\\label{rem:unbounded}\nIn Theorems \\ref{thm:kth-order-gen-(r-1)\/r} and \\ref{thm:kth-order-gen-1\/r},\nwe assume that $f^{(k)}(\\cdot)$ and $f^{(\\ell)}(\\cdot)$\nare bounded on $(\\mathsf{y}_1,\\mathsf{y}_2)$, respectively.\nIf $f^{(k)}(\\cdot)$ is not bounded on $(\\mathsf{y}_1,\\mathsf{y}_2)$ for $k \\ge 0$,\nin particular at $\\mathsf{y}_1$, and $\\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r$, for example,\n$\\lim_{x \\rightarrow \\mathsf{y}_1^+}f^{(k)}(x)=\\infty$,\nthen we have\n$$p(F,r,(r-1)\/r) =\\lim_{\\delta \\rightarrow 0^+}\\frac{f^{(k)}(\\mathsf{y}_1+\\delta)}\n{\\left[f^{(k)}(\\mathsf{y}_1+\\delta)+r^{-(k+1)}\\,f^{(k)} \\left( (\\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r)+\\delta \\right)\\right]}.$$\nIf $f^{(\\ell)}(\\cdot)$ is not bounded on $(\\mathsf{y}_1,\\mathsf{y}_2)$ for $\\ell \\ge 0$,\nin particular at $\\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r$, and $\\mathsf{y}_2$, for example,\n$\\lim_{x \\rightarrow 
\\mathsf{y}_2^-}f^{(\\ell)}(x)=\\infty$,\nthen we have\n$$p(F,r,1\/r) =\\lim_{\\delta \\rightarrow 0^+}\\frac{f^{(\\ell)}(\\mathsf{y}_2-\\delta)}\n{\\left[f^{(\\ell)}(\\mathsf{y}_2-\\delta)+r^{-(\\ell+1)}\\,f^{(\\ell)} \\left( (\\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r)-\\delta \\right)\\right]}.\\;\\;\\square$$\n\\end{remark}\n\n\\begin{remark}\nThe rates of convergence in Theorems \\ref{thm:kth-order-gen-(r-1)\/r} and\n\\ref{thm:kth-order-gen-1\/r} depend on $f$.\nFrom the proofs of Theorems \\ref{thm:kth-order-gen-(r-1)\/r} and \\ref{thm:kth-order-gen-1\/r},\nit follows that for sufficiently large $n$,\n$$p_n(F,r,(r-1)\/r) \\approx p(F,r,(r-1)\/r) +\\frac{\\kappa_1(f)}{n^{(k+2)\/(k+1)}}\n\\text{ and }\np_n(F,r,1\/r) \\approx p(F,r,1\/r) +\\frac{\\kappa_2(f)}{n^{(\\ell+2)\/(\\ell+1)}},$$\nwhere\n$\\kappa_1(f)=\\frac{s_1\\,s_3^{\\frac{1}{k+1}}+s_2\\,\\Gamma \\left(\\frac{k+2}{k+1} \\right)}\n{(k+1)\\,s_3^{\\frac{k+2}{k+1}} }$\nwith\n$\\Gamma(x)=\\int_{0}^{\\infty} e^{-t}t^{(x-1)}\\, dt$,\n$s_1=\\frac{1}{n^{k+1}k!}\\,f^{(k)}(\\mathsf{y}_1^+)$,\n$s_2=\\frac{1}{n(k+1)!}\\, f^{(k+1)}(\\mathsf{y}_1^+)$,\nand\n$s_3=\\frac{1}{(k+1)!}p(F,r,(r-1)\/r)$,\n$\\kappa_2(f)=\\frac{q_1\\,\\Gamma\\left( \\frac{\\ell+2}{\\ell+1} \\right)+ q_2\\,q_3^{\\frac{1}{\\ell+1}}}\n{(\\ell+1)\\,q_3^{\\frac{\\ell+2}{\\ell+1}}}$,\n$q_1=\\frac{(-1)^{\\ell+1}}{n(\\ell+1)!}\\,f^{(\\ell+1)}(\\mathsf{y}_2^-)$,\n$q_2=\\frac{(-1)^{\\ell}}{n^{\\ell+1}\\ell!}\\,f^{(\\ell)}(\\mathsf{y}_2^-)$,\nand\n$q_3=\\frac{(-1)^{\\ell+1}}{(\\ell+1)!}p(F,r,1\/r)$\nprovided the derivatives exist. 
$\\square$\n\\end{remark}\n\nThe conditions of the Theorems \\ref{thm:kth-order-gen-(r-1)\/r} and \\ref{thm:kth-order-gen-1\/r}\nmight seem a bit esoteric.\nHowever, most of the well known functions that are scaled\nand properly transformed to be pdf of some random variable\nwith support in $(\\mathsf{y}_1,\\mathsf{y}_2)$ satisfy the conditions\nfor some $k$ or $\\ell$,\nhence one can compute the corresponding limiting probabilities\n$p(F,r,(r-1)\/r)$ and $p(F,r,1\/r)$.\n\n\\begin{example}\n(a)\nFor example, with $F=\\mathcal{U}(\\mathsf{y}_1,\\mathsf{y}_2)$, in Theorem \\ref{thm:kth-order-gen-(r-1)\/r},\nwe have\n$k=0$ and $f(\\mathsf{y}_1^+)=f\\left( (\\mathsf{y}_1+(r-1)(\\mathsf{y}_2-\\mathsf{y}_1)\/r)^+ \\right)=1\/(\\mathsf{y}_2-\\mathsf{y}_1)$,\nand\nin Theorem \\ref{thm:kth-order-gen-1\/r}, we have\n$\\ell=0$ and $f(\\mathsf{y}_2^-)=f\\left( (\\mathsf{y}_1+(\\mathsf{y}_2-\\mathsf{y}_1)\/r)^- \\right)=1\/(\\mathsf{y}_2-\\mathsf{y}_1)$.\nThen $\\lim _{n \\rightarrow \\infty}p_n(F,r,(r-1)\/r)=\\lim _{n \\rightarrow \\infty}p_n(F,r,1\/r)=r\/(r+1)$,\nwhich agrees with the result given in Equation \\eqref{eqn:asymptotic-uniform}.\n\n(b)\nFor $F$ with pdf $f(x)=\\bigl( x+1\/2 \\bigr)\\,\\mathbf{I}\\bigl( 0 1$, we have\n$\\gamma_{{}_{n,2}}(F,2,1\/2) \\sim 1+ \\BER\\bigl(p_{{}_n}(F,2,1\/2)\\bigr)$.\nNote also that $\\gamma_{{}_{1,2}}(F,2,1\/2)=1$.\n\n\\item[(ii)]\nFurthermore,\nsuppose $k \\ge 0$ is the smallest integer for which\n$F(\\cdot)$ has continuous right derivatives up to order $(k+1)$ at $\\mathsf{y}_1,\\,(\\mathsf{y}_1+\\mathsf{y}_2)\/2$,\n$f^{(k)}(\\mathsf{y}_1^+)+2^{-(k+1)}\\,f^{(k)}\\left( \\left( \\frac{\\mathsf{y}_1+\\mathsf{y}_2}{2} \\right)^+ \\right) \\not= 0$\nand $f^{(i)}(\\mathsf{y}_1^+)=f^{(i)} \\left( \\left( \\frac{\\mathsf{y}_1+\\mathsf{y}_2}{2} \\right)^+ \\right)=0$ for all $i=0,1,\\ldots,k-1$;\nand $\\ell \\ge 0$ is the smallest integer for which\n$F(\\cdot)$ has continuous left derivatives up to order $(\\ell+1)$ at 
$\\mathsf{y}_2,\\,(\\mathsf{y}_1+\\mathsf{y}_2)\/2$,\n$f^{(\\ell)}(\\mathsf{y}_2^-)+2^{-(\\ell+1)}\\,f^{(\\ell)}\\left( \\left( \\frac{\\mathsf{y}_1+\\mathsf{y}_2}{2} \\right)^- \\right) \\not= 0$\nand $f^{(i)}(\\mathsf{y}_2^-)=f^{(i)}\\left( \\left( \\frac{\\mathsf{y}_1+\\mathsf{y}_2}{2} \\right)^- \\right)=0$ for all $i=0,1,\\ldots,\\ell-1$.\nAdditionally,\nfor bounded $f^{(k)}(\\cdot)$ and $f^{(\\ell)}(\\cdot)$,\nwe have the following limit\n$$\np(F,2,1\/2) =\n\\lim_{n \\rightarrow \\infty}p_{{}_n}(F,2,1\/2) =\n\\frac{f^{(k)}(\\mathsf{y}_1^+)\\,f^{(\\ell)}(\\mathsf{y}_2^-)}\n{\\left[f^{(k)}(\\mathsf{y}_1^+)+2^{-(k+1)}\\,f^{(k)} \\left( \\left( \\frac{\\mathsf{y}_1+\\mathsf{y}_2}{2} \\right)^+ \\right)\\right]\\,\\left[f^{(\\ell)}(\\mathsf{y}_2^-)+\n2^{-(\\ell+1)}\\,f^{(\\ell)}\\left( \\left( \\frac{\\mathsf{y}_1+\\mathsf{y}_2}{2} \\right)^- \\right)\\right]}.$$\n\\end{itemize}\n\\end{theorem}\n\nNotice the interesting behavior of $p(F,r,c)$ around $(r,c)=(2,1\/2)$.\nThere is a jump (hence discontinuity) in $p(F,r,(r-1)\/r)$ and in $p(F,r,1\/r)$ at $r=2$.\n\n\\section{The Distribution of the Domination Number of $\\mathscr D_{n,m}(r,c)$-digraphs}\n\\label{sec:dist-multiple-intervals}\nWe now consider the more challenging case of $m>2$.\nFor $\\omega_1<\\omega_2$ in $\\mathbb{R}$, define the family of distributions\n$$\n\\mathscr H(\\mathbb{R}):=\\bigl \\{ F_{X,Y}:\\;(X_i,Y_i) \\sim F_{X,Y} \\text{ with support }\n\\mathcal{S}(F_{X,Y})=(\\omega_1,\\omega_2)^2 \\subsetneq \\mathbb{R}^2,\\;\\;X_i \\stackrel{iid}{\\sim} F_X \\text{ and } Y_i \\stackrel{iid}{\\sim}F_Y \\bigr\\}.\n$$\nWe provide the exact distribution of $\\gamma_{{}_{n,m}}(F,r,c)$ for\n$\\mathscr H(\\mathbb{R})$-random digraphs in the following theorem.\nLet $[m]:=\\bigl\\{ 0,1,2,\\ldots,m-1 \\bigr\\}$ and\n$\\Theta^S_{a,b}:=\\bigl\\{ (u_1,u_2,\\ldots,u_b):\\;\\sum_{i=1}^{b}u_i = a,\\; u_i \\in S, \\;\\;\\forall i \\bigr\\}$.\nIf $Y_i$ have a continuous distribution,\nthen the order statistics of $\\mathcal{Y}_m$ are 
distinct a.s.\nGiven $Y_{(i)}=\\mathsf{y}_{(i)}$ for $i=1,2,\\ldots,m$,\nlet\n$\\vec{n}$ be the vector of the numbers $n_i$,\n$f_{\\vec{Y}}(\\vec{\\mathsf{y}})$ be the joint density of the order statistics of $\\mathcal{Y}_m$,\ni.e., $f_{\\vec{Y}}(\\vec{\\mathsf{y}})=m!\\,\\prod_{i=1}^m f(\\mathsf{y}_i)\\,\\mathbf{I}(\\omega_1<\\mathsf{y}_1<\\mathsf{y}_2<\\ldots<\\mathsf{y}_m<\\omega_2)$,\nand $f_{i,j}(\\mathsf{y}_i,\\mathsf{y}_j)$ be the joint density of $Y_{(i)},Y_{(j)}$.\nThen we have the following theorem.\n\n\\begin{theorem}\n\\label{thm:general-Dnm}\nLet $D$ be an $\\mathscr H(\\mathbb{R})$-random $\\mathscr D_{n,m}(r,c)$-digraph\nwith $n>1$, $m>1$, $r \\in [1,\\infty)$ and $c \\in (0,1)$.\nThen the probability mass function of the domination number of $D$ is given by\n{\\small\n$$P(\\gamma_{{}_{n,m}}(F,r,(r-1)\/r)=q)=\\int_{\\mathscr S} \\sum_{\\vec{n} \\in \\Theta^{[n+1]}_{n,(m+1)}}\n\\sum_{\\vec{q}\\in \\Theta^{[3]}_{q,(m+1)}} P(\\vec{N}=\\vec{n})\\,\\zeta(q_1,n_1)\\,\\zeta(q_{m+1},\\,n_{m+1})\n\\prod_{i=2}^{m}\\eta(q_i,n_i)\\,f_{\\vec{Y}}(\\vec{\\mathsf{y}})\\,d\\mathsf{y}_1 \\ldots d\\mathsf{y}_m$$\n}\nwhere\n$P(\\vec{N}=\\vec{n})$ is the joint probability of $n_i$ points\nfalling into the intervals $\\mathcal{I}_i$ for $i=1,2,\\ldots,m+1$,\n$q_i \\in \\{0,1,2\\}$, $q=\\sum_{i=1}^{m+1} q_i$ and\n\\begin{align*}\n\\zeta(q_i,n_i)&=\\max\\bigl( \\mathbf{I}(n_i=q_i=0),\\mathbf{I}(n_i \\ge q_i=1) \\bigr) \\text{ for } i=1,(m+1),\n\\text{ and }\\\\\n\\eta(q_i,n_i)&=\\max \\bigl( \\mathbf{I}(n_i=q_i=0),\\mathbf{I}(n_i \\ge q_i \\ge 1) \\bigr)\\cdot\np_{{}_{n_i}}(F_i,r,(r-1)\/r)^{\\mathbf{I}(q_i=2)}\\,\\bigl( 1-p_{{}_{n_i}}(F_i,r,(r-1)\/r) \\bigr)^{\\mathbf{I}(q_i=1)}\\\\\n& \\text{ for $i=2,3,\\ldots,m,$ and the region of integration is given by}\\\\\n\\mathscr S:=\\bigl\\{&(\\mathsf{y}_1,\\mathsf{y}_2,\\ldots,\\mathsf{y}_m)\\in\n(\\omega_1,\\omega_2)^m:\\,\\omega_1<\\mathsf{y}_1<\\mathsf{y}_2<\\ldots<\\mathsf{y}_m<\\omega_2 
\\bigr\\}.\n\\end{align*}\nThe special cases of $n=1$, $m=1$, $r \\in \\{1,\\infty\\}$ and $c \\in \\{0,1\\}$ are\nas in Theorem \\ref{thm:gamma-Dnm-r-M}.\n\\end{theorem}\nThe proof is as in Theorem 6.1 of \\cite{ceyhan:dom-num-CCCD-NonUnif}.\nA similar construction is available for $c=1\/r$.\n\nThis exact distribution for finite $n$ and $m$ has a simpler form\nwhen $\\mathcal{X}$ and $\\mathcal{Y}$ points are both uniform in a bounded interval in $\\mathbb{R}$.\nDefine $\\mathscr U(\\mathbb{R})$ as follows\n$$\n\\mathscr U(\\mathbb{R}):=\\bigl \\{ F_{X,Y}: \\text{ $X$ and $Y$ are independent},\n\\;X_i \\stackrel{iid}{\\sim} \\mathcal{U}(\\omega_1,\\omega_2) \\text{ and } Y_i \\stackrel{iid}{\\sim}\\mathcal{U}(\\omega_1,\\omega_2),\n\\text{ with } -\\infty <\\omega_1<\\omega_2<\\infty \\bigr\\}.\n$$\nClearly, $\\mathscr U(\\mathbb{R}) \\subsetneq \\mathscr H(\\mathbb{R})$.\nThen we have the following corollary to Theorem \\ref{thm:general-Dnm}.\n\\begin{corollary}\n\\label{cor:uniform-Dnm}\nLet $D$ be a $\\mathscr U(\\mathbb{R})$-random $\\mathscr D_{n,m}(r,c)$-digraph\nwith $n>1$, $m>1$, $r \\in [1,\\infty)$ and $c \\in (0,1)$.\nThen the probability mass function of the domination number of $D$ is given by\n$$P(\\gamma_{{}_{n,m}}(r,(r-1)\/r)=q)=\\frac{n!m!}{(n+m)!}\n\\sum_{\\vec{n} \\in \\Theta^{[n+1]}_{n,(m+1)}}\n\\sum_{\\vec{q}\\in \\Theta^{[3]}_{q,(m+1)}}\n\\zeta(q_1,n_1)\\,\\zeta(q_{m+1},\\,n_{m+1})\n\\prod_{i=2}^{m}\\eta(q_i,n_i).$$\nThe special cases of $n=1$, $m=1$, $r \\in \\{1,\\infty\\}$ and $c \\in \\{0,1\\}$ are\nas in Theorem \\ref{thm:gamma-Dnm-r-M}.\n\\end{corollary}\nThe proof is as in Theorem 2 of \\cite{priebe:2001}.\nA similar construction is available for $c=1\/r$.\nFor $n,m < \\infty$, the expected value of the domination number is\n\\begin{equation}\n\\label{eqn:expected-gamma-Dnm}\n\\mathbf{E}[\\gamma_{{}_{n,m}}(F,r,c)]=P\\left(X_{(1)}<Y_{(1)}\\right)+P\\left(X_{(n)}>Y_{(m)}\\right)+\n\\sum_{i=1}^{m-1}\\sum_{k=1}^n\\,P(N_i=k)\\,\\mathbf{E}[\\gamma_{{}_{n_i,2}}(F_i,r,c)]\n\\end{equation}\nwhere\n\\begin{multline*}\nP(N_i=k)=\\\\\n\\int_{\\omega_1}^{\\omega_2}\\int_{\\mathsf{y}_{(i)}}^{\\omega_2}\nf_{i,i+1}\\left(\\mathsf{y}_{(i)},\\mathsf{y}_{(i+1)}\\right) \\Bigl[F_X\\left(\\mathsf{y}_{(i+1)}\\right)-\nF_X\\left(\\mathsf{y}_{(i)}\\right)\\Bigr]^k\\Bigl[1-\\left(F_X\\left(\\mathsf{y}_{(i+1)}\\right)-\nF_X\\left(\\mathsf{y}_{(i)}\\right)\\right)\\Bigr]^{n-k}\\,d\\mathsf{y}_{(i+1)}d\\mathsf{y}_{(i)}\n\\end{multline*}\nand $\\mathbf{E}[\\gamma_{{}_{n_i,2}}(F_i,r,c)]=1+p_{{}_{n_i}}(F_i,r,c)$.\nThen as in Corollary 6.2 of \\cite{ceyhan:dom-num-CCCD-NonUnif},\nwe have\n\\begin{corollary}\n\\label{cor:Egnn goes infty}\nFor $F_{X,Y} \\in \\mathscr H(\\mathbb{R})$ with support $\\mathcal{S}(F_X) \\cap \\mathcal{S}(F_Y)$ of positive measure\nwith $r \\in [1,\\infty)$ and $c \\in (0,1)$,\nwe have $\\lim_{n \\rightarrow \\infty}\\mathbf{E}[\\gamma_{{}_{n,n}}(F,r,c)] = \\infty$.\n\\end{corollary}\n\n\\begin{theorem}\n\\label{thm:asy-general-Dnm}\n\\textbf{Main Result 5:}\nLet $D_{n,m}(r,c)$ be an $\\mathscr H(\\mathbb{R})$-random $\\mathscr D_{n,m}(r,c)$-digraph.\nThen\n\\begin{itemize}\n\\item[(i)]\nfor fixed $n<\\infty$, $\\lim_{m \\rightarrow \\infty}\\gamma_{{}_{n,m}}(F,r,c)=n$ a.s. 
for all $r \\ge 1$ and $c \\in [0,1]$.\n\\item[] For fixed $m<\\infty$, and\n\\item[(ii)]\nfor $r=1$ and $c \\in (0,1)$,\n$\\lim_{n \\rightarrow \\infty}P(\\gamma_{{}_{n,m}}(F,1,c)=2m)=1$\nand\n$\\lim_{n \\rightarrow \\infty}P(\\gamma_{{}_{n,m}}(F,1,0)=m+1)=\\\\\n\\lim_{n \\rightarrow \\infty}P(\\gamma_{{}_{n,m}}(F,1,1)=m+1)=1$,\n\\item[(iii)]\nfor $r>2$ and $c \\in (0,1)$, $\\lim_{n \\rightarrow \\infty}P(\\gamma_{{}_{n,m}}(F,r,c)=m+1)=1$,\n\\item[(iv)]\nfor $r=2$,\nif $c \\not= 1\/2$,\nthen $\\lim_{n \\rightarrow \\infty}P(\\gamma_{{}_{n,m}}(F,2,c)=m+1)=1$;\\\\\nif $c = 1\/2$,\nthen $\\lim_{n \\rightarrow \\infty}\\gamma_{{}_{n,m}}(F,2,1\/2)\\stackrel{d}{=}m+1+\\BIN(m,p(F_i,2,1\/2))$,\n\\item[(v)]\nfor $r \\in (1,2)$,\nif $r \\not= 1\/\\tau$ where $\\tau=\\max(c,1-c)$,\nthen $\\lim_{n \\rightarrow \\infty} \\gamma_{{}_{n,m}}(F,r,c)$ is degenerate;\notherwise,\nit is non-degenerate.\nThat is, for $r \\in [1,2)$,\nas $n \\rightarrow \\infty$,\n\\begin{equation}\n\\label{eqn:asy-unif-rM-mult-int}\n\\gamma_{{}_{n,m}}(F,r,c) \\sim\n\\left\\lbrace \\begin{array}{ll}\n m+1+\\BIN(m,p(F_i,r,c)), & \\text{for $r = 1\/\\tau$,}\\\\\n m+1, & \\text{for $r > 1\/\\tau$,}\\\\\n 2m+1, & \\text{for $r < 1\/\\tau$.}\\\\\n\\end{array} \\right.\n\\end{equation}\n\\end{itemize}\n\\end{theorem}\n\n\\begin{itemize}\n\\item[] {\\bfseries Proof:}\n\\item[]\nPart (i) is trivial.\nPart (ii) follows from Theorems \\ref{thm:gamma-Dnm-r-M} and \\ref{thm:gamma-Dnm-r=1-M},\nsince as $n_i \\rightarrow \\infty$,\nwe have\n$\\mathcal{X}_{[i]} \\not= \\emptyset$ a.s. 
for all $i$.\n\\item[]\nPart (iii) follows from Theorem \\ref{thm:r and M-asy},\nsince for $c \\in (0,1)$,\nit follows that $r>2$ implies $r > 1\/\\tau$\nand as $n_i \\rightarrow \\infty$,\nwe have\n$\\gamma_{{}_{n_i,2}}(F_i,r,c) \\rightarrow 1$ in probability for all $i$.\n\\item[]\nIn part (iv), for $r=2$ and $c \\not= 1\/2$,\nbased on Theorem \\ref{thm:r=2 and M_c=c},\nas $n_i \\rightarrow \\infty$,\nwe have\n$\\gamma_{{}_{n_i,2}}(F_i,r,c) \\rightarrow 1$ in probability for all $i$.\nThe result for $r=2$ and $c = 1\/2$ is proved in \\cite{ceyhan:dom-num-CCCD-NonUnif}.\n\\item[]\nPart (v) follows from Theorem \\ref{thm:r and M-asy}. $\\blacksquare$\n\\end{itemize}\n\n\n\n\n\\begin{remark}\n\\label{rem:ext-multi-dim}\n\\textbf{Extension to Higher Dimensions:}\n\nLet $\\mathcal{Y}_m=\\left \\{\\mathsf{y}_1,\\mathsf{y}_2,\\ldots,\\mathsf{y}_m \\right\\}$ be $m$ points in\ngeneral position in $\\mathbb{R}^d$ and $T_i$ be the $i^{th}$ Delaunay cell\nin the Delaunay tessellation (assumed to exist) based on $\\mathcal{Y}_m$\nfor $i=1,2,\\ldots,J_m$.\nLet $\\mathcal{X}_n$ be a random sample from a distribution $F$ in\n$\\mathbb{R}^d$ with support $\\mathcal{S}(F) \\subseteq \\mathcal{C}_H(\\mathcal{Y}_m)$\nwhere $\\mathcal{C}_H(\\mathcal{Y}_m)$ stands for the convex hull of $\\mathcal{Y}_m$.\nIn $\\mathbb{R}$\na Delaunay tessellation is an intervalization (i.e.,\npartitioning of $\\mathbb{R}$ by intervals),\nprovided that no two points in $\\mathcal{Y}_m$ are coincident.\n\nWe define the proportional-edge proximity region in $\\mathbb{R}^2$.\nThe extension to $\\mathbb{R}^d$ with $d>2$ is straightforward\n(see \\cite{ceyhan:dom-num-NPE-MASA} for an explicit extension).\nLet $T(\\Y_3)$ be the triangle (including the interior) with vertices $\\mathcal{Y}_3=\\{\\mathsf{y}_1,\\mathsf{y}_2,\\mathsf{y}_3\\}$,\n$e_i$ be the edge opposite vertex $\\mathsf{y}_i$,\nand $M_i$ be the midpoint of edge $e_i$ for $i=1,2,3$.\nWe first construct the vertex regions based on a point $M 
\\in \\mathbb{R}^2 \\setminus \\mathcal{Y}_3$\ncalled \\emph{$M$-vertex regions},\nby the lines joining $M$ to a point on each of the edges of $T(\\Y_3)$.\nSee \\cite{ceyhan:Phd-thesis} for a more general definition of vertex regions.\nPreferably, $M$ is selected to be in the interior of the triangle $T(\\Y_3)$.\nFor such an $M$, the corresponding vertex regions can be defined using\na line segment joining $M$ to $e_i$.\nWith the center of mass $M_{CM}$,\nthe lines joining $M_{CM}$ and $\\mathcal{Y}_3$ are the \\emph{median lines},\nwhich cross edges at midpoints $M_i$ for $i=1,2,3$.\nThe vertex regions in Figure \\ref{fig:prox-map-def} are\ncenter of mass vertex regions.\nFor $r \\in [1,\\infty]$, define $N(\\cdot,r,M)$\nto be the \\emph{(parametrized) proportional-edge proximity map} with\n$M$-vertex regions as follows\n(see also Figure \\ref{fig:prox-map-def} with $M=M_{CM}$ and $r=2$).\nLet $R_M(v)$ be the vertex region associated with vertex $v$ and $M$.\nFor $x \\in T(\\Y_3) \\setminus \\mathcal{Y}_3$,\nlet $v(x) \\in \\mathcal{Y}_3$ be the vertex whose region contains $x$; i.e., $x \\in R_M(v(x))$.\nIf $x$ falls on the boundary of two $M$-vertex regions,\nwe assign $v(x)$ arbitrarily.\nLet $e(x)$ be the edge of $T(\\Y_3)$ opposite of $v(x)$,\n$\\ell(v(x),x)$ be the line parallel to $e(x)$ passing through $x$,\nand $d(v(x),\\ell(v(x),x))$ be the Euclidean\ndistance from $v(x)$ to $\\ell(v(x),x)$.\nFor $r \\in [1,\\infty)$,\nlet $\\ell_r(v(x),x)$ be the line parallel to $e(x)$ such that\n$$\nd(v(x),\\ell_r(v(x),x)) = r\\,d(v(x),\\ell(v(x),x))\\\\\n\\text{ and }\\\\\nd(\\ell(v(x),x),\\ell_r(v(x),x)) < d(v(x),\\ell_r(v(x),x)).\n$$\nLet $T_r(x)$ be the triangle similar to and with the same\norientation as $T(\\Y_3)$ having $v(x)$ as a vertex and $\\ell_r(v(x),x)$\nas the opposite edge.\nThen the \\emph{proportional-edge proximity region} $\\mathcal{N}(x,r,M)$ is defined to be $T_r(x) \\cap T(\\Y_3)$.\nNotice that $\\ell(v(x),x)$ divides the two edges of 
$T_r(x)$\n(other than the one lying on $\\ell_r(v(x),x)$) proportionally with the factor $r$.\nHence the name \\emph{proportional-edge proximity region}.\n\n\\begin{figure} [ht]\n\\centering\n\\scalebox{.35}{\\input{Nofnu2.pstex_t}}\n\\caption{\n\\label{fig:prox-map-def}\nConstruction of the proportional-edge proximity region, $\\mathcal{N}(x,2,M_{CM})$ (shaded region)\nfor an $x$ in the CM-vertex region for $\\mathsf{y}_1$, $R_{CM}(\\mathsf{y}_1)$,\nwhere $d_1=d(v(x),\\ell(v(x),x))$ and\n$d_2=d(v(x),\\ell_2(v(x),x))=2\\,d(v(x),\\ell(v(x),x))$.\n}\n\\end{figure}\n\nNotice that in $\\mathbb{R}$, $M$ is the center parametrized by $c$,\ne.g., the center of mass $M_{CM}$ corresponds to $c=1\/2$,\nbut for other $M \\in T(\\Y_3)$, there is no direct counterpart in $\\mathbb{R}$.\nThe vertex regions in $\\mathbb{R}$ with $\\mathcal{Y}_2=\\{\\mathsf{y}_1,\\mathsf{y}_2\\}$ are $(\\mathsf{y}_1,M_c)$ and $(M_c,\\mathsf{y}_2)$.\nObserve that $N(x,r,c)$ in $\\mathbb{R}$ is an open interval,\nwhile in $\\mathbb{R}^d$, the region $\\mathcal{N}(x,r,M)$ is a closed region.\nHowever, the interiors of $\\mathcal{N}(x,r,M)$ satisfy the class cover problem of \\cite{cannon:2000}.\n$\\square$\n\\end{remark}\n\n\\section{Discussion}\n\\label{sec:disc-conclusions}\nIn this article,\nwe present the distribution of the domination number\nof a random digraph family called proportional-edge proximity catch digraph (PCD)\nwhich is based on two classes of points.\nPoints from one of the classes\nconstitute the vertices of the PCDs,\nwhile the points from the other class\nare used in the binary relation\nthat assigns the arcs of the PCDs.\n\nWe introduce the proximity map which is the one-dimensional version\nof $N(\\cdot,r,c)$ of \\cite{ceyhan:dom-num-NPE-SPL} and \\cite{ceyhan:dom-num-NPE-MASA}.\nThis proximity map can also be viewed as an extension of the proximity map\nof \\cite{priebe:2001} and \\cite{ceyhan:dom-num-CCCD-NonUnif}.\nThe PCD we consider is based on a parametrized proximity map\nin 
which there is an expansion parameter $r$ and a centrality parameter $c$.\nWe provide the exact and asymptotic distributions of\nthe domination number for proportional-edge PCDs\nfor uniform data\nand compute the asymptotic distribution for non-uniform data\nfor the entire range of $(r,c)$.\nThe results in this article can also be\nviewed as generalizations of the main results of\n\\cite{priebe:2001} and \\cite{ceyhan:dom-num-CCCD-NonUnif} in several directions.\n\\cite{priebe:2001} provided the exact distribution of the\ndomination number of class cover catch digraphs (CCCDs) based on $\\mathcal{X}_n$ and $\\mathcal{Y}_m$,\nboth of which were sets of iid random variables from the\nuniform distribution on $(\\omega_1,\\omega_2) \\subset \\mathbb{R}$ with $-\\infty<\\omega_1<\\omega_2<\\infty$,\nand the proximity map $N(x):=B(x,r(x))$ where $r(x):=\\min_{\\mathsf{y} \\in \\mathcal{Y}_m}d(x,\\mathsf{y})$.\n\\cite{ceyhan:dom-num-CCCD-NonUnif} investigates the\ndistribution of the domination number of CCCDs for non-uniform data\nand provides the asymptotic distribution for a large family\nof (non-uniform) distributions.\nFurthermore, this article will form the foundation of the generalizations and calculations\nfor uniform and non-uniform cases in multiple dimensions.\nAs in \\cite{ceyhan:dom-num-NPE-SPL},\nwe can use the domination number in testing\none-dimensional spatial point patterns\nand our results will help make the power comparisons\npossible for a large family of distributions.\n\nWe demonstrate an interesting behavior of the domination number of\nthe proportional-edge PCD for one-dimensional data.\nFor uniform data or data from a distribution which satisfies some regularity conditions\n(see Section \\ref{sec:asy-dist-generalF})\nand fixed $n>1$,\nthe parameter, $p_{{}_n}(\\mathcal{U},r,c)$, of the distribution of the domination number\nof the proportional-edge PCD based on uniform data is continuous in $r$ and $c$\nfor all $r \\ge 1$ and $c \\in (0,1)$\nand has jumps 
(hence discontinuities) at $c=0,1$.\nFor fixed $(r,c) \\in [1,\\infty) \\times (0,1)$,\nthe parameter, $p(\\mathcal{U},r,c)$, of the asymptotic distribution\nexhibits some discontinuities.\nFor $c \\in (0,1)$\nthe asymptotic distribution is nondegenerate at $r=1\/\\max(c,1-c)$.\nThe asymptotic distribution of the domination number is degenerate for the expansion parameter $r > 2$.\nFor $r \\in (1,2]$, there exist threshold values for the centrality parameter $c$ for which\nthe asymptotic distribution is non-degenerate.\nIn particular, for $c \\in \\{ (r-1)\/r,1\/r \\}$ with $r \\in (1,2]$\nthe asymptotic distribution\nof the domination number is a translated form of $\\BIN(m,p(\\mathcal{U},r,c))$\nwhere\n$p(\\mathcal{U},r,c)$ is continuous in $r$.\nAdditionally,\nby symmetry,\nwe have $p(\\mathcal{U},r,(r-1)\/r)=p(\\mathcal{U},r,1\/r)$ for $r \\in (1,2]$.\nFor $r > 1\/\\max(c,1-c)$\nthe domination number converges in probability to 1,\nand for $r < 1\/\\max(c,1-c)$\nthe domination number converges in probability to 2.\nOn the other hand,\nat $(r,c)= (2,1\/2)$,\nthe asymptotic distribution is again a translated form of $\\BIN(m,p(\\mathcal{U},2,1\/2))$\nbut there is yet another jump at $r=2$,\nas $p(\\mathcal{U},2,1\/2)=4\/9$ while $\\lim_{r \\rightarrow 2} p(\\mathcal{U},r,(r-1)\/r)=\\lim_{r \\rightarrow 2} p(\\mathcal{U},r,1\/r)=2\/3$.\nThis second jump might be due to the symmetry\nfor the domination number at $c=1\/2$.\n\n\n\\section*{Acknowledgments}\nSupported by TUBITAK Kariyer Project Grant 107T647.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\nIn this section we will prove the theorems given in this paper.\n\\subsection{Proof of Theorem~\\ref{thr:calc}}\n\\label{prf:calc}\nTo simplify the notation, denote $S_0(x) = 1$, $\\theta^*_1 = \\envec{1, \\theta_{11}}{\\theta_{1N}}$ and $\\theta^*_2 = \\envec{1, \\theta_{21}}{\\theta_{2N}}$.\nThe norm function restricted to the affine space has one minimum and it can be found using Lagrange 
multipliers. Thus we can express the vectors $u_i$ in Eq.~\\ref{eq:def}\n\\[\nu_{ij} = \\lambda_i^TS(j),\n\\]\nwhere $j \\in \\Omega$ and $\\lambda_i$ is the column vector of length $N+1$ consisting of the corresponding Lagrange multipliers.\nThe distance is equal to\n\\[\n\\begin{split}\n\\dist{D_1}{D_2}{S}^2 & = \\abs{\\Omega}\\norm{u_1-u_2}^2 \\\\\n& = \\abs{\\Omega}\\sum_{j \\in \\Omega}\\pr{u_{1j}-u_{2j}}\\pr{u_{1j}-u_{2j}} \\\\\n& = \\abs{\\Omega}\\sum_{j \\in \\Omega}\\pr{u_{1j}-u_{2j}}\\pr{\\lambda_1^TS(j)-\\lambda_2^TS(j)} \\\\\n& = \\abs{\\Omega}\\pr{\\lambda_1-\\lambda_2}^T\\sum_{j \\in \\Omega}\\pr{u_{1j}-u_{2j}}S(j) \\\\\n & = \\abs{\\Omega} \\pr{\\lambda_1-\\lambda_2}^T\\pr{\\theta_1^*-\\theta_2^*}.\n\\end{split}\n\\]\nSince $u_i \\in \\const{S}{\\theta_i}$, the multipliers $\\lambda_i$ can be solved from the equation\n\\[\n\\theta_i^* = \\sum_{j \\in \\Omega} S(j) u_{ij} = \\sum_{j \\in \\Omega} S(j) \\lambda_i^TS(j) = \\pr{\\sum_{j \\in \\Omega} S(j)S(j)^T} \\lambda_i,\n\\]\ni.e., $\\theta^*_i = A\\lambda_i$, where $A$ is an $(N+1)\\times (N+1)$ matrix $A_{xy} = \\sum_j S_x(j)S_y(j)$. It is straightforward to prove that the existence of $\\cov{}{-1}{S}$ implies that $A$ is also invertible. Let $B$ be an $N\\times N$ matrix formed from $A^{-1}$ by removing the first row and the first column. We have\n\\[\n\\begin{split}\n\\abs{\\Omega}\\norm{u_1-u_2}^2 & = \\abs{\\Omega}\\pr{\\theta_1^*-\\theta_2^*}^TA^{-1}\\pr{\\theta_1^*-\\theta_2^*} \\\\\n& = \\abs{\\Omega}\\pr{\\theta_1-\\theta_2}^TB\\pr{\\theta_1-\\theta_2}.\n\\end{split}\n\\]\nThe last equality is true since $\\theta^*_{10} = \\theta^*_{20}$.\n\nWe need to prove that $\\abs{\\Omega}B = \\cov{}{-1}{S}$. Let $\\spr{c; B}$ be the matrix obtained from $A^{-1}$ by removing the first row. Let $\\gamma = \\mean{}{S}$ taken with respect to the uniform distribution. Since the first column of $A$ is equal to $\\abs{\\Omega}\\spr{1, \\gamma}$, it follows that $c = -B\\gamma$. 
From the identity\n\\[\nc_xA_{(0,y)} + \\sum_{z=1}^NB_{(x,z)}A_{(z,y)} = \\delta_{xy}\n\\]\nwe have\n\\[\n\\sum_{z=1}^NB_{(x,z)}\\pr{A_{(z,y)}-A_{(0,y)}\\gamma_z} = \\sum_{z=1}^N\\abs{\\Omega}B_{(x,z)}\\pr{\\abs{\\Omega}^{-1}A_{(z,y)}-\\gamma_y\\gamma_z} = \\delta_{xy}.\n\\]\nSince $\\abs{\\Omega}^{-1}A_{(z,y)}-\\gamma_z\\gamma_y$ is equal to the $(z,y)$ entry of $\\cov{}{}{S}$, the theorem follows.\n\\subsection{Proofs of Theorems given in Section~\\ref{sec:properties}}\n\\begin{proof}[Theorem~\\ref{thr:metric}]\nThe covariance matrix $\\cov{}{}{S}$ in Theorem~\\ref{thr:calc} depends only on $S$ and is positive definite. Therefore, the CM distance is a Mahalanobis distance.\n\\end{proof}\n\n\\begin{proof}[Theorem~\\ref{thr:augdata}]\nLet $\\theta_i = S(D_i)$ for $i=1,2,3$. The frequencies for $D_1 \\cup D_3$ and $D_2 \\cup D_3$ are $(1-\\epsilon)\\theta_1+\\epsilon\\theta_3$ and $(1-\\epsilon)\\theta_2+\\epsilon\\theta_3$, respectively. The theorem follows from Theorem~\\ref{thr:calc}.\n\\end{proof}\nThe following lemma proves Theorem~\\ref{thr:linear}.\n\\begin{lemma} Let $\\funcdef{A}{\\real^N}{\\real^M}$ and define a function $T(\\omega) = A(S(\\omega))$. Let $\\phi = \\freq{T}{D}$ and $\\theta = \\freq{S}{D}$ be the frequencies for some data set $D$. Assume further that there are no two data sets $D_1$ and $D_2$ such that $\\freq{S}{D_1} = \\freq{S}{D_2}$ and $\\freq{T}{D_1} \\neq \\freq{T}{D_2}$. Then $\\dist{D_1}{D_2}{T} \\leq \\dist{D_1}{D_2}{S}$. The equality holds if for a fixed $\\phi$ the frequency $\\theta$ is unique.\n\\label{thr:transform}\n\\end{lemma}\nBefore proving this lemma, let us explain why the uniqueness requirement is needed: Assume that the sample space $\\Omega$ consists of two-dimensional binary vectors, that is, \n\\[\n\\Omega = \\set{\\pr{0,0},\\pr{1,0},\\pr{0,1},\\pr{1,1}}.\n\\]\nWe set the features to be $S(\\omega) = \\vect{\\omega_1, \\omega_2}$. 
Define a function $T(\\omega) = \\vect{\\omega_1, \\omega_2, \\omega_1\\omega_2} = \\vect{S_1(\\omega), S_2(\\omega), S_1(\\omega)S_2(\\omega)}$. Note that the uniqueness assumption is now violated. Without this assumption the lemma would imply that $\\dist{D_1}{D_2}{T} \\leq \\dist{D_1}{D_2}{S}$, which is in general false.\n\\begin{proof}\nLet $\\theta_1 = \\freq{S}{D_1}$ and $\\phi_1 = \\freq{T}{D_1}$. Pick $u \\in \\const{S}{\\theta_1}$. The frequency of $S$ taken with respect to $u$ is $\\theta_1$ and because of the assumption the corresponding frequency of $T$ is $\\phi_1$. It follows that $\\const{S}{\\theta_i} \\subseteq \\const{T}{\\phi_i}$. The lemma follows from the fact that the CM distance is the shortest distance between the constrained affine spaces: minimising over the larger spaces $\\const{T}{\\phi_1}$ and $\\const{T}{\\phi_2}$ can only shorten the distance.\n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{thr:char}}\n\\label{prf:char}\nIt suffices to prove that the matrix $C(S)$ is proportional to the covariance matrix $\\cov{}{}{S}$. The notation $\\delta\\pr{\\omega_1 \\mid \\omega_2}$ used in the proof represents a feature function $\\funcdef{\\delta}{\\Omega}{\\set{0,1}}$ which returns $1$ if $\\omega_1 = \\omega_2$ and $0$ otherwise.\n\nBefore proving the theorem we should point out one technical detail. In general, $C(S)$ may be singular, especially in Assumption~\\ref{as:1}. In our proof we will show that $C(S) \\propto \\cov{}{}{S}$ and this does not require $C(S)$ to be invertible. However, if one wants to evaluate the distance $d$, then one must assume that $C(S)$ is invertible.\n\nFix indices $i$ and $j$ such that $i \\neq j$. Let $T(\\omega) = \\vect{S_i(\\omega), S_j(\\omega)}$. It follows from Assumption~\\ref{as:1} that\n\\[\nC(T) = \\spr{\n\\begin{array}{cc}\nC_{ii}(S) & C_{ij}(S) \\\\\nC_{ji}(S) & C_{jj}(S) \\\\\n\\end{array}\n}.\n\\]\nThis implies that $C_{ij}(S)$ depends only on $S_i$ and $S_j$. In other words, we can say $C_{ij}(S) = C_{ij}(S_i, S_j)$. 
Let $\\funcdef{\\rho}{\\enset{1}{N}}{\\enset{1}{N}}$ be some permutation function and define $U(x) = \\envec{S_{\\rho(1)}(x)}{S_{\\rho(N)}(x)}$. Assumption~\\ref{as:1} implies that\n\\[\nC_{\\rho(i)\\rho(j)}(S) = C_{ij}(U) = C_{ij}(U_i, U_j) = C_{ij}(S_{\\rho(i)}, S_{\\rho(j)}).\n\\]\nThis is possible only if all non-diagonal entries of $C$ have the same form or, in other words, $C_{ij}(S) = C_{ij}(S_i, S_j) = C(S_i, S_j)$. Similarly, the diagonal entry $C_{ii}(S)$ depends only on $S_i$ and all the diagonal entries have the same form $C_{ii}(S) = C(S_i)$. To see the connection between $C(S_i)$ and $C(S_i, S_j)$ let $V(\\omega) = \\vect{S_i(\\omega), S_i(\\omega)}$ and let $W(\\omega) = \\vect{2S_i(\\omega)}$. We can represent $W(\\omega) = V_1(\\omega) + V_2(\\omega)$. Now Assumption~\\ref{as:1} implies\n\\[\n\\begin{split}\n4C(S_i) & = C(W) = C(V_1)+2C(V_1,V_2)+C(V_2) \\\\\n& = 2C(S_i) + 2C(S_i,S_i)\n\\end{split}\n\\]\nwhich shows that $C(S_i) = C(S_i, S_i)$. Fix $S_j$ and note that Assumption~\\ref{as:1} implies that $C(S_i, S_j)$ is a linear function of $S_i$. Thus $C$ has the form \n\\[\nC(S_i, S_j) = \\sum_{\\omega \\in \\Omega}S_i(\\omega)h(S_j, \\omega)\n\\]\nfor some specific map $h$. Let $\\alpha \\in \\Omega$. Then $C(\\delta\\pr{\\omega \\mid \\alpha}, S_j) = h(S_j, \\alpha)$ is a linear function of $S_j$. Thus $C$ has the form\n\\[C(S_i, S_j) = \\sum_{\\omega_1,\\omega_2 \\in \\Omega}S_i(\\omega_1)S_j(\\omega_2)g(\\omega_1, \\omega_2)\\] for some specific $g$.\n\nLet $\\alpha$, $\\beta$, and $\\gamma$ be distinct points in $\\Omega$. An application of Assumption~\\ref{as:2} shows that\n$g(\\alpha, \\beta) = C(\\delta\\pr{\\omega\\mid\\alpha}, \\delta\\pr{\\omega\\mid\\beta}) = C(\\delta\\pr{\\omega\\mid\\alpha},\\delta\\pr{\\omega\\mid\\gamma}) = g(\\alpha, \\gamma)$. 
Thus $g$ has the form $g(\\omega_1, \\omega_2) = a\\delta\\pr{\\omega_1\\mid \\omega_2}+b$ for some constants $a$ and $b$.\n\nTo complete the proof note that Assumption~\\ref{as:1} implies that $C(S+b) = C(S)$, which in turn implies that $\\sum_{\\omega_1 \\in \\Omega} g(\\omega_1, \\omega_2) = 0$ for all $\\omega_2$. Thus $b = - a\\abs{\\Omega}^{-1}$. This leads us to\n\\[\n\\begin{split}\nC(S_i, S_j) & = \\sum_{\\omega_1,\\omega_2 \\in \\Omega}S_i(\\omega_1)S_j(\\omega_2)\\pr{a\\delta\\pr{\\omega_1\\mid\\omega_2}-a\\abs{\\Omega}^{-1}} \\\\\n& = a\\sum_{\\omega \\in \\Omega} S_i(\\omega)S_j(\\omega) - a\\pr{\\sum_{\\omega \\in \\Omega} S_i(\\omega)}\\pr{\\sum_{\\omega \\in \\Omega} \\abs{\\Omega}^{-1}S_j(\\omega)} \\\\\n& \\propto \\mean{}{S_iS_j} - \\mean{}{S_i}\\mean{}{S_j},\n\\end{split}\n\\]\nwhere the means are taken with respect to the uniform distribution. This identity proves the theorem.\n\\subsection{Proof for Lemma~\\ref{lem:paritycov}}\nLet us prove that $\\cov{}{}{T_\\iset{F}} = 0.5I$. Let $A$ be an itemset. There is an odd number of ones in $A$ in exactly half of the transactions. Hence, $\\mean{}{T_A^2} = \\mean{}{T_A} = 0.5$. Let $B \\neq A$ be an itemset. We wish to have $T_B(\\omega) = T_A(\\omega) = 1$. This means that $\\omega$ must have an odd number of ones in $A$ and in $B$. Assume that the number of ones in $A \\cap B$ is even. This means that $A-B$ and $B-A$ have an odd number of ones. Only a quarter of all the transactions fulfil this condition. If the number of ones in $A \\cap B$ is odd, then we must have an even number of ones in $A-B$ and $B-A$. Again, only a quarter of all the transactions fulfil this condition. This implies that $\\mean{}{T_AT_B} = 0.25 = \\mean{}{T_A}\\mean{}{T_B}$. This proves that $\\cov{}{}{T_\\iset{F}} = 0.5I$.\n\n\\subsection{Proof of Theorem~\\ref{thr:generic}}\n\\label{prf:generic}\nBefore proving this theorem let us rephrase it. 
First, note that even though $\\dista{\\cdot}{\\cdot}{\\cdot}$ is defined only on the conjunction functions $S_\\iset{F}$, we can operate with the parity function $T_\\iset{F}$. As we stated before, there is an invertible matrix $A$ such that $T_\\iset{F} = AS_\\iset{F}$. We can write the distance as\n\\[\n\\dista{D_1}{D_2}{S_\\iset{F}}^2 = \\pr{A\\theta_1-A\\theta_2}^T\\pr{A^{-1}}^TC(S_\\iset{F})^{-1}A^{-1}\\pr{A\\theta_1-A\\theta_2}.\n\\]\nThus we define $C(T_\\iset{F}) = AC(S_\\iset{F})A^T$. Note that the following lemma implies that the condition stated in Theorem~\\ref{thr:generic} is equivalent to $C(T_\\iset{A}) = cI$, for some constant $c$. Hence Theorem~\\ref{thr:generic} is equivalent to stating that $C(T_\\iset{F}) = cI$.\n\nThe following lemma deals with some difficulties due to the fact that the frequencies should arise from some valid distributions.\n\\begin{lemma}\n\\label{lem:valid}\nLet $\\iset{A}$ be the family of all itemsets. There exists $\\epsilon > 0$ such that for each real vector $\\gamma$ of length $2^K-1$ that satisfies $\\norm{\\gamma} < \\epsilon$ there exist distributions $p$ and $q$ such that $\\gamma = \\mean{p}{T_\\iset{A}}-\\mean{q}{T_\\iset{A}}$.\n\\end{lemma}\n\\begin{proof}\nTo ease the notation, add $T_0(x) = 1$ to $T_\\iset{A}$ and denote the end result by $T^*$. We can consider $T^*$ as a $2^K \\times 2^K$ matrix, say $A$. Let $p$ be a distribution and let $u$ be the vector of length $2^K$ representing the distribution. Note that we have $Au = \\mean{p}{T^*}$. We can show that $A$ is invertible. Let $U$ be some $2^K - 1$ dimensional open ball of distributions. Since $A$ is invertible, the set $V^* = \\set{Ax \\mid x \\in U}$ is a $2^K - 1$ dimensional open ellipsoid. Define also $V$ by removing the first coordinate from the vectors of $V^*$. Note that the first coordinate of elements of $V^*$ is equal to $1$. This implies that $V$ is also a $2^K - 1$ dimensional open ellipsoid. Hence we can pick an open ball $N(\\theta, \\epsilon) \\subset V$. 
The lemma follows.\n\\end{proof}\nWe are now ready to prove Theorem~\\ref{thr:generic}:\n\nAbbreviate the matrix $C(T_\\iset{F})$ by $C$. We will first prove that the diagonal entries of $C$ are equal to $c$. Let $\\iset{A}$ be the family of all itemsets. Select $G \\in \\iset{F}$ and define $\\iset{R} = \\set{H \\in \\iset{F} \\mid H \\subseteq G}$. As we stated above, $C(T_\\iset{A}) = cI$ and Assumption~\\ref{as2:3} imply that $C(T_\\iset{R}) = cI$. Assumption~\\ref{as2:2} implies that\n\\begin{equation}\n\\label{eq:chain}\n\\dista{\\cdot}{\\cdot}{S_\\iset{R}}^2 \\leq \\dista{\\cdot}{\\cdot}{S_\\iset{F}}^2 \\leq \\dista{\\cdot}{\\cdot}{S_\\iset{A}}^2.\n\\end{equation}\nSelect $\\epsilon$ corresponding to Lemma~\\ref{lem:valid} and let $\\gamma_\\iset{A} = \\vect{0,\\ldots,\\epsilon\/2,\\ldots,0}$, i.e., $\\gamma_\\iset{A}$ is a vector whose entries are all $0$ except the entry corresponding to $G$. Lemma~\\ref{lem:valid} guarantees that there exist distributions $p$ and $q$ such that $\\dista{p}{q}{S_\\iset{A}}^2 = c\\norm{\\gamma_\\iset{A}}^2$. Let $\\gamma_\\iset{F} = \\mean{p}{T_\\iset{F}}-\\mean{q}{T_\\iset{F}}$ and $\\gamma_\\iset{R} = \\mean{p}{T_\\iset{R}}-\\mean{q}{T_\\iset{R}}$. Note that $\\gamma_\\iset{R}$ and $\\gamma_\\iset{F}$ have the same form as $\\gamma_\\iset{A}$. It follows from Eq.~\\ref{eq:chain} that\n\\[\nc\\epsilon^2\/4 \\leq C_{G,G}\\epsilon^2\/4 \\leq c\\epsilon^2\/4,\n\\]\nwhere $C_{G,G}$ is the diagonal entry of $C$ corresponding to $G$. It follows that $C_{G,G} = c$.\n\nTo complete the proof we need to show that $C_{G,H} = 0$ for $G,H\\in \\iset{F}, G \\neq H$. Assume that $C_{G,H} \\neq 0$ and let $s$ be the sign of $C_{G,H}$. Apply Lemma~\\ref{lem:valid} again and select $\\gamma_\\iset{A} = \\spr{0,\\ldots,\\epsilon\/4,0,\\ldots,0,s\\epsilon\/4,\\ldots,0}^T$, i.e., $\\gamma_\\iset{A}$ has $\\epsilon\/4$ and $s\\epsilon\/4$ in the entries corresponding to $G$ and $H$, respectively, and $0$ elsewhere. 
The right side of Eq.~\\ref{eq:chain} implies that\n\\[\n2c\\epsilon^2\/16+2\\abs{C_{G,H}}\\epsilon^2\/16 \\leq 2c\\epsilon^2\/16,\n\\]\nwhich is a contradiction, and it follows that $C_{G,H} = 0$. This completes the proof.\n\n\\section{The CM distance and Binary Data Sets}\n\\label{sec:cmbin}\nIn this section we will concentrate on the distances between binary data sets. We will consider the CM distance based on itemset frequencies, a very popular statistic in the literature concerning binary data mining. In the first subsection we will show that a more natural way of representing the CM distance is to use parity frequencies. We also show that we can evaluate the distance in linear time. In the second subsection we will provide more theoretical evidence why the CM distance is a good distance between binary data sets.\n\\subsection{The CM Distance and Itemsets}\n\\label{sec:cmitemsets}\nWe begin this section by giving some definitions. We set the sample space $\\Omega$ to be\n\\[\n\\Omega = \\set{\\omega \\mid \\omega = \\enpr{\\omega_1}{\\omega_K}, \\omega_i = 0,1},\n\\]\nthat is, $\\Omega$ is the set of all binary vectors of length $K$. Note that $\\abs{\\Omega} = 2^K$. It is customary that each dimension in $\\Omega$ is identified with some symbol. We do this by assigning the symbol $a_i$ to the $i^\\text{th}$ dimension. These symbols are called \\emph{attributes} or \\emph{items}. Thus when we speak of the attribute $a_i$ we refer to the $i^\\text{th}$ dimension. We denote the set of all items by $\\mathbb{A} = \\enset{a_1}{a_K}$. A non-empty subset of $\\mathbb{A}$ is called an \\emph{itemset}. \n\nA \\emph{boolean formula} $\\funcdef{S}{\\Omega}{\\set{0,1}}$ is a feature function mapping a binary vector to a binary value. We are interested in two particular boolean formulae: Assume that we are given an itemset $B = \\enset{a_{i_1}}{a_{i_L}}$. 
We define a \\emph{conjunction function} $S_B$ to be\n\\[\nS_B(\\omega) = \\omega_{i_1} \\land \\omega_{i_2} \\land \\cdots \\land \\omega_{i_L},\n\\]\nthat is, $S_B$ returns $1$ if and only if all the variables corresponding to the itemset $B$ are on. Given a data set $D$ the frequency $S_B(D)$ is called the frequency of the itemset $B$. Conjunction functions are popular and there are a lot of studies in the literature concerning finding itemsets that have large frequency~\\citep[see e.g.,][]{agrawal93mining,hand02principles}. We also define a \\emph{parity function} $T_B$ to be\n\\[\nT_B(\\omega) = \\omega_{i_1} \\oplus \\omega_{i_2} \\oplus \\cdots \\oplus \\omega_{i_L},\n\\]\nwhere $\\oplus$ is the binary operator XOR. The function $T_B$ returns $1$ if and only if the number of active variables included in $B$ is odd.\n\nA collection $\\iset{F}$ of itemsets is said to be \\emph{antimonotonic} or \\emph{downwardly closed} if each non-empty subset of an itemset included in $\\iset{F}$ is also included in $\\iset{F}$. Given a collection of itemsets $\\iset{F} = \\enset{B_1}{B_N}$ we extend our definition of the conjunction function by setting $S_\\iset{F} = \\envec{S_{B_1}}{S_{B_N}}$. We also define $T_\\iset{F} = \\envec{T_{B_1}}{T_{B_N}}$.\n\nAssume that we are given an antimonotonic family $\\iset{F}$ of itemsets. We can show that there is an invertible matrix $A$ such that $T_\\iset{F} = AS_\\iset{F}$. In other words, we can get the parity function $T_\\iset{F}$ from the conjunction function $S_\\iset{F}$ by an invertible linear transformation. Corollary~\\ref{thr:ind} now implies that\n\\begin{equation}\n\\label{eq:andvsparity}\n\\dist{D_1}{D_2}{S_\\iset{F}} = \\dist{D_1}{D_2}{T_\\iset{F}},\n\\end{equation}\nfor any $D_1$ and $D_2$. 
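For concreteness, the linear relation between conjunction and parity functions can be checked pointwise on a toy sample space. The following Python sketch (the helper names `conj` and `parity` are ours, not from the paper) verifies the two-item instance $T_{\{a_j,a_k\}} = S_{a_j} + S_{a_k} - 2S_{\{a_j,a_k\}}$ of the invertible linear map $A$ on every binary vector of length $3$:

```python
from itertools import product

def conj(omega, B):
    # S_B(omega): 1 iff every item of the itemset B is on in omega
    return int(all(omega[i] for i in B))

def parity(omega, B):
    # T_B(omega): 1 iff an odd number of items of B are on in omega (XOR)
    return sum(omega[i] for i in B) % 2

# Check T_{jk} = S_j + S_k - 2*S_{jk} on the whole sample space {0,1}^3,
# one row of the invertible linear transformation with T_F = A S_F.
for omega in product((0, 1), repeat=3):
    for j, k in ((0, 1), (0, 2), (1, 2)):
        assert parity(omega, (j, k)) == (
            conj(omega, (j,)) + conj(omega, (k,)) - 2 * conj(omega, (j, k)))
```

Since the identity is exact for binary values, the loop exhausts the sample space rather than sampling it.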
The following lemma shows that the covariance matrix $\\cov{}{}{T_\\iset{F}}$ of the parity function is very simple.\n\\begin{lemma}\n\\label{lem:paritycov}\nLet $T_\\iset{F}$ be a parity function for a family of itemsets $\\iset{F}$. Then $\\cov{}{}{T_\\iset{F}} = 0.5I$, that is, the covariance matrix is a diagonal matrix having $0.5$ at the diagonal.\n\\end{lemma}\nTheorem~\\ref{thr:calc}, Lemma~\\ref{lem:paritycov}, and Eq.~\\ref{eq:andvsparity} imply that\n\\begin{equation}\n\\dist{D_1}{D_2}{S_\\iset{F}} = \\sqrt{2}\\norm{\\theta_1 - \\theta_2},\n\\label{eq:fastev}\n\\end{equation}\nwhere $\\theta_1 = \\freq{T_\\iset{F}}{D_1}$ and $\\theta_2 = \\freq{T_\\iset{F}}{D_2}$. This identity says that the CM distance can be calculated in $O(N)$ time (assuming that we know the frequencies $\\theta_1$ and $\\theta_2$). This is better than the $O(N^3)$ time implied by Theorem~\\ref{thr:calc}.\n\\begin{example}\n\\label{ex:ind}\nLet $\\iset{I} = \\set{\\set{a_j}\\mid j = 1 \\ldots K}$ be a family of itemsets, each having only one item. Note that $T_{\\set{a_j}} = S_{\\set{a_j}}$. Eq.~\\ref{eq:fastev} implies that\n\\[\n\\dist{D_1}{D_2}{S_\\iset{I}} = \\sqrt{2}\\norm{\\theta_{1}-\\theta_{2}},\n\\]\nwhere $\\theta_1$ and $\\theta_2$ consist of the marginal frequencies of each $a_j$ calculated from $D_1$ and $D_2$, respectively. In this case the CM distance is simply the $L_2$ distance (up to the factor $\\sqrt{2}$) between the marginal frequencies of the individual attributes. The frequencies $\\theta_1$ and $\\theta_2$ resemble term frequencies (TF) used in text mining~\\citep[see e.g.,][]{baldi03internet}.\n\\end{example}\n\n\\begin{example}\n\\label{ex:cov}\nWe now consider a case with a larger set of features. Our motivation for this is that using only the feature functions $S_\\iset{I}$ is sometimes inadequate. For example, consider data sets with two items having the same individual frequencies but different correlations. 
In this case the data sets may look very different, but according to our distance they are equal.\n\nLet $\\iset{C} = \\iset{I} \\cup \\set{a_ja_k \\mid j, k = 1\\ldots K, j < k}$ be a family of itemsets such that each set contains at most two items. The corresponding frequencies contain the individual means and the pairwise correlation for all items. Let $S_{a_ja_k}$ be the conjunction function for the itemset $a_ja_k$. Let $\\gamma_{jk} = \\freq{S_{a_ja_k}}{D_1}-\\freq{S_{a_ja_k}}{D_2}$ be the difference between the correlation frequencies. Also, let $\\gamma_j = \\freq{S_{a_j}}{D_1}-\\freq{S_{a_j}}{D_2}$. Since\n\\[\nT_{a_ja_k} = S_{a_j}+S_{a_k}-2S_{a_ja_k}\n\\]\nit follows from Eq.~\\ref{eq:fastev} that\n\\begin{equation}\n\\dist{D_1}{D_2}{S_\\iset{C}}^2 = 2\\sum_{j < k}\\pr{\\gamma_j+\\gamma_k-2\\gamma_{jk}}^2+2\\sum_{j=1}^{K}\\gamma_j^2.\n\\label{eq:distcovform}\n\\end{equation}\n\\end{example}\n\n\\subsection{Characterisation of the CM Distance for Itemsets}\nThe identity given in Eq.~\\ref{eq:fastev} is somewhat surprising and may seem unintuitive. A question arises: why is this distance more natural than some other, say, a simple $L_2$ distance between the itemset frequencies? Certainly, parity functions are less intuitive than conjunction functions. One answer is that the parity frequencies are a decorrelated version of the traditional itemset frequencies.\n\nHowever, we can clarify this situation from another point of view: Let $\\iset{A}$ be the set of all itemsets. Assume that we are given two data sets $D_1$ and $D_2$ and define \\emph{empirical distributions} $p_1$ and $p_2$ by setting\n\\[\np_i(\\omega) = \\frac{\\text{number of samples in $D_i$ equal to $\\omega$}}{\\abs{D_i}}.\n\\]\nThe constrained spaces of $S_\\iset{A}$ are singular points containing only $p_i$, that is, $\\const{S_\\iset{A}}{D_i} = \\set{p_i}$. 
This implies that\n\\begin{equation}\n\\dist{D_1}{D_2}{S_\\iset{A}} = \\sqrt{2^K}\\norm{p_1-p_2}.\n\\label{eq:fullinfo}\n\\end{equation}\nIn other words, the CM distance is proportional to the $L_2$ distance between the empirical distributions. This identity seems very reasonable. At least, it is more natural than, say, taking the $L_2$ distance between the traditional itemset frequencies.\n\nThe identity in Eq.~\\ref{eq:fullinfo} holds only when we use the features $S_\\iset{A}$. However, we can prove that a distance of the Mahalanobis type satisfying the identity in Eq.~\\ref{eq:fullinfo} and some additional conditions is equal to the CM distance. Let us explain this in more detail. We assume that we have a distance $d$ having the form\n\\[\n\\dista{D_1}{D_2}{S_\\iset{F}}^2 = \\pr{\\theta_1-\\theta_2}^TC(S_\\iset{F})^{-1}\\pr{\\theta_1-\\theta_2},\n\\]\nwhere $\\theta_1 = \\freq{S_\\iset{F}}{D_1}$ and $\\theta_2 = \\freq{S_\\iset{F}}{D_2}$ and $C(S_\\iset{F})$ maps a conjunction function $S_\\iset{F}$ to a symmetric $N\\times N$ matrix. The distance $d$ should satisfy the following mild assumptions.\n\\begin{enumerate}\n\\item Given two antimonotonic families of itemsets $\\iset{F}$ and $\\iset{H}$ such that $\\iset{F} \\subset \\iset{H}$, it follows that $\\dista{\\cdot}{\\cdot}{S_\\iset{F}} \\leq \\dista{\\cdot}{\\cdot}{S_\\iset{H}}$.\n\\label{as2:2}\n\\item Adding extra dimensions (but not changing the features) does not change the distance.\n\\label{as2:3}\n\\end{enumerate}\nThe following theorem says that the assumptions and the identity in Eq.~\\ref{eq:fullinfo} are sufficient to prove that $d$ is actually the CM distance.\n\\begin{theorem}\n\\label{thr:generic} Assume that a Mahalanobis distance $d$ satisfies Assumptions~\\ref{as2:2}~and~\\ref{as2:3}. 
Assume also that there is a constant $c_1$ such that\n\\[\n\\dista{D_1}{D_2}{S_\\iset{A}} = c_1\\norm{p_1-p_2}.\n\\]\nThen it follows that for any antimonotonic family $\\iset{F}$ we have\n\\[\n\\dista{D_1}{D_2}{S_\\iset{F}} = c_2\\dist{D_1}{D_2}{S_\\iset{F}},\n\\]\nfor some constant $c_2$.\n\\end{theorem}\n\n\n\\section{Conclusions and Discussion}\n\\label{sec:conclusions}\nOur task was to find a versatile distance that has nice statistical properties and that can be evaluated efficiently. The CM distance fulfils our goals. In the theoretical sections we proved that this distance properly takes into account the correlation between features, and that it is the only (Mahalanobis) distance that does so. Even though our theoretical justifications are complex, the CM distance itself is rather simple. In its simplest form, it is the $L_2$ distance between the means of the individual attributes. On the other hand, the CM distance has a surprising form when the features are itemsets.\n\nIn general, the computation time of the CM distance depends on the size of the sample space, which can be exponentially large. Still, there are many types of feature functions for which the distance can be solved. For instance, if the features are itemsets, then the distance can be solved in polynomial time. In addition, if the itemsets form an antimonotonic family, then the distance can be solved in linear time.\n\nIn empirical tests the CM distance implied that the data sets used have structure, as expected. The performance of the CM distance compared to the base distance depended heavily on the data set. We also showed that the feature sets \\ftname{ind} and \\ftname{cov} produced almost equivalent distances, whereas using frequent itemsets produced very different distances.\n\nSophisticated feature selection methods were not compared in this paper. Instead, we either chose the set of features explicitly or deduced them using \\alname{Apriori}. 
We argued that we cannot use the traditional approaches for selecting features of data sets, unless we are provided some additional information.\n\n\\section{Feature Selection}\n\\label{sec:feature}\nWe will now briefly discuss feature selection --- a subject that we have taken for granted so far. The CM distance depends on a feature function $S$. How can we choose a good set of features?\n\nAssume for simplicity that we are dealing with binary data sets. Eq.~\\ref{eq:fullinfo} tells us that if we use all itemsets, then the CM distance is the $L_2$ distance between the empirical distributions. However, to get a reliable empirical distribution we need an exponential number of data points. Hence we can use only some subset of itemsets as features. The first approach is to make an expert choice without seeing the data. For example, we could decide that the feature function is $S_\\iset{I}$, the means of the individual attributes, or $S_\\iset{C}$, the means of individual attributes and the pairwise correlation.\n\nThe other approach is to infer a feature function from the data sets. At first glance this seems to be an application of feature selection. However, traditional feature selection fails: Let $S_\\iset{I}$ be the feature function representing the means of the individual attributes and let $S_\\iset{A}$ be the feature function containing all itemsets. Let $\\omega$ be a binary vector. Note that if we know $S_\\iset{I}(\\omega)$, then we can deduce $S_\\iset{A}(\\omega)$. This means that $S_\\iset{I}$ is a \\emph{Markov blanket}~\\citep{pearl88reasoning} for $S_\\iset{A}$. Hence we cannot use the Markov blanket approach to select features. The essential part is that the traditional feature selection algorithms deal with the \\emph{individual} points. We try to select features for whole data sets.\n\nNote that feature selection algorithms for individual points are based on training data, that is, we have data points divided into clusters. 
In other words, when we are performing traditional feature selection we \\emph{know} which points are close and which are not. In order to make the same ideas work for data sets we need to have similar information, that is, we need to know which data sets are close to each other, and which are not. Such information is rarely provided, and hence we are forced to seek some other approach.\n\nWe suggest a simple approach for selecting itemsets by assuming that frequently occurring itemsets are interesting. Assume that we are given a collection of data sets $D_i$ and a threshold $\\sigma$. Let $\\iset{I}$ be the itemsets of order one. We define $\\iset{F}$ such that $B \\in \\iset{F}$ if and only if $B \\in \\iset{I}$ or $B$ is a $\\sigma$-frequent itemset for some $D_i$.\n\n\\section{Introduction}\nIn this paper we will consider the following problem: Given two data sets $D_1$ and $D_2$ of dimension $K$, define a distance between $D_1$ and $D_2$. To be more precise, we consider the problem of defining the distance between two multisets of transactions, each set sampled from its own unknown distribution. We will define a dissimilarity measure between $D_1$ and $D_2$ and we will refer to this measure as the \\emph{CM distance}.\n\nGenerally speaking, the notion of dissimilarity between two objects is one of the most fundamental concepts in data mining. If one is able to retrieve a distance matrix from a set of objects, then one is able to analyse the data by using, e.g., clustering or visualisation techniques. Many real-world data collections may be naturally divided into several data sets. For example, if a data collection consists of movies from different eras, then we may divide the movies into subcollections based on their release years. A distance between these data (sub)sets would provide means to analyse them as single objects. 
Such an approach may ease the task of understanding complex data collections.\n\nLet us continue by considering the properties the CM distance should have. First of all, it should be a metric. The motivation behind this requirement is that metric theory is a well-known area and metrics have many theoretical and practical virtues. Secondly, in our scenario the data sets have a statistical nature and the CM distance should take this into account. For example, if both data sets are generated from the same distribution, then the CM distance should give small values and approach $0$ as the number of data points in the data sets increases. The third requirement is that we should be able to evaluate the CM distance quickly. This requirement is crucial since we may have high-dimensional data sets with a vast amount of data points.\n\nThe CM distance will be based on summary statistics, or features. Let us give a simple example: Assume that we have data sets $D_1 = \\set{A, B, A, A}$ and $D_2 = \\set{A, B, C, B}$ and assume that the only feature we are interested in is the proportion of $A$ in the data sets. Then we can suggest the distance between $D_1$ and $D_2$ to be $\\abs{3\/4-1\/4} = 1\/2$. The CM distance is based on this idea; however, there is a subtle difficulty: If we calculate several features, then should we take into account the correlation of these features? We will do exactly that in defining the CM distance.\n\nThe rest of this paper is organised as follows. In Section~\\ref{sec:cm} we give the definition of the CM distance by using some geometrical interpretations. We also study the properties of the distance and provide an alternative characterisation. In Section~\\ref{sec:cmbin} we study the CM distance and binary data sets. In Section~\\ref{sec:sequences} we discuss how the CM distance can be used with event sequences and in Section~\\ref{sec:feature} we comment on feature selection. 
Section~\\ref{sec:related} is devoted to related work. The empirical tests are presented in Section~\\ref{sec:tests} and we conclude our work with the discussion in Section~\\ref{sec:conclusions}.\n\n\\section{Related Work}\n\\label{sec:related}\nIn this section we discuss some existing methods for comparing data sets and compare the evaluation algorithms. The execution times are summarised in Table~\\ref{tab:times}.\n\\begin{table}[ht!]\n\\centering\n\\begin{tabular}{rr}\n\\toprule\nDistance & Time \\\\\n\\midrule\nCM distance (general case) & $O(NM+N^2\\abs{\\Omega}+N^3)$\\\\\nCM distance (known cov. matrix) & $O(NM+N^3)$\\\\\nCM distance (binary case) & $O(NM+N)$ \\\\\nSet distances & $O(M^3)$\\\\\nKullback-Leibler & $O(NM+N\\abs{\\Omega})$\\\\\nFisher's information & $O(NM+N^2\\abs{D_2}+N^3)$\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Comparison of the execution times of various distances. The number $M = \\abs{D_1}+\\abs{D_2}$ is the number of data points in total. The $O(NM)$ term refers to the time needed to evaluate the frequencies $\\freq{S}{D_1}$ and $\\freq{S}{D_2}$. The Kullback-Leibler distance is solved using the Iterative Scaling algorithm, in which one round has $N$ steps and one step is executed in $O(\\abs{\\Omega})$ time.}\n\\label{tab:times}\n\\end{table}\n\\subsection{Set Distances}\nOne approach to define a data set distance is to use some natural distance between single data points and apply some known set distance. \\citet{eiter97distance} show that some data set distances defined in this way can be evaluated in cubic time. However, this is too slow for us since we may have a vast amount of data points. The other downside is that these distances may not take into account the statistical nature of the data, which may lead to problems.\n\\subsection{Edit Distances}\nIn Section~\\ref{sec:sequences} we discuss using the CM distance for event sequences. Traditionally, edit distances are used for comparing event sequences. 
The most famous edit distance is the Levenshtein distance~\\citep{levenshtein66distance}. However, edit distances do not take into account the statistical nature of the data. For example, assume that we have two sequences generated such that the events are sampled from the uniform distribution independently of the previous event (a zero-order Markov chain). In this case the CM distance is close to $0$ whereas the edit distance may be large. Roughly put, the CM distance measures the dissimilarity between the statistical characteristics whereas the edit distances operate at the symbol level.\n\n\\subsection{Minimum Discrimination Approach}\nThere are many distances for distributions~\\citep[see][for a nice review]{baseville89distance}. Of these distances, the CM distance resembles the statistical tests involved with the Minimum Discrimination Theorem~\\citep[see][]{kullback68information, csiszar75divergence}. In this framework we are given a feature function $S$ and two data sets $D_1$ and $D_2$. From the set of distributions $\\constp{S}{D_i}$ we select a distribution maximising the entropy and denote it by $p^{ME}_i$. The distance itself is the Kullback-Leibler divergence between $p^{ME}_1$ and $p^{ME}_2$. It has been empirically shown that $p^{ME}_i$ represents well the distribution from which $D_i$ is generated~\\citep[see][]{mannila99prediction}. The downsides are that this distance is not a metric (it is not even symmetric), and that evaluating the distance is infeasible: solving $p^{ME}_i$ is \\textbf{NP}-hard~\\citep{cooper90complexity}. We can approximate the Kullback-Leibler distance by Fisher's information, that is,\n\\[\n\\kl{p^{ME}_1}{p^{ME}_2} \\approx \\frac{1}{2}\\pr{\\theta_1 - \\theta_2}^T\\cova{p^{ME}_2}{-1}{S}\\pr{\\theta_1 - \\theta_2},\n\\]\nwhere $\\theta_i = \\freq{S}{D_i}$ and $\\cova{p^{ME}_2}{}{S}$ is the covariance matrix of $S$ taken with respect to $p^{ME}_2$~\\citep[see][]{kullback68information}. 
This closely resembles the equation in Theorem~\\ref{thr:calc}. However, in this case the covariance matrix depends on the data sets, and thus in general this approximation is not a metric. In addition, we do not know $p^{ME}_2$ and hence we cannot evaluate the covariance matrix. We can, however, estimate the covariance matrix from $D_2$, that is,\n\\[\n\\cova{p^{ME}_2}{}{S} \\approx \\frac{1}{\\abs{D_2}}\\sum_{\\omega \\in D_2}S(\\omega)S(\\omega)^T -\n\\frac{1}{\\abs{D_2}^2}\\spr{\\sum_{\\omega \\in D_2}S(\\omega)}\\spr{\\sum_{\\omega \\in D_2}S(\\omega)^T}.\n\\]\nThe execution time of this operation is $O(N^2\\abs{D_2})$.\n\n\\section{The CM distance and Event Sequences}\n\\label{sec:sequences}\nIn the previous section we discussed the CM distance between binary data sets. We will use a similar approach to define the CM distance between sequences.\n\nAn \\emph{event sequence} $s$ is a finite sequence whose symbols belong to a finite alphabet $\\Sigma$. We denote the length of the event sequence $s$ by $\\abs{s}$, and by $s(i, j)$ we mean the subsequence starting from $i$ and ending at $j$. The subsequence $s(i, j)$ is also known as a \\emph{window}. A popular choice of statistics for event sequences is \\emph{episodes}~\\citep{hand02principles}. A \\emph{parallel episode} is represented by a subset of the alphabet $\\Sigma$. A window of $s$ satisfies a parallel episode if all the symbols given in the episode occur in the window. Assume that we are given an integer $k$. Let $W$ be the collection of windows of $s$ having length $k$. The \\emph{frequency} of a parallel episode is the proportion of windows in $W$ satisfying the episode. We should point out that this mapping destroys the exact ordering of the sequence. 
On the other hand, if some symbols occur often close to each other, then the episode consisting of these symbols will have a high frequency.\n\nIn order to apply the CM distance we will now describe how we can transform a sequence $s$ into a binary data set. Assume that we are given a window length $k$. We transform a window of length $k$ into a binary vector of length $\\abs{\\Sigma}$ by setting the corresponding entry to $1$ if the symbol occurs in the window, and to $0$ otherwise. Let $D$ be the collection of these binary vectors. We have now transformed the sequence $s$ into the binary data set $D$. Note that parallel episodes of $s$ are represented by itemsets of $D$.\n\nThis transformation enables us to use the CM distance. Assume that we are given two sequences $s_1$ and $s_2$, a collection of parallel episodes $\\iset{F}$, and a window length $k$. First, we transform the sequences into data sets $D_1$ and $D_2$. We set the CM distance between the sequences $s_1$ and $s_2$ to be $\\dist{D_1}{D_2}{S_\\iset{F}}$.\n\n\\section{Empirical Tests}\n\\label{sec:tests}\nIn this section we describe our experiments with the CM distance. We begin by examining the effect of different feature functions. We continue studying the distance by applying clustering algorithms, and finally we present some interpretations of the results.\n\nIn many experiments we use a base distance $d_{U}$ defined as the $L_2$ distance between the itemset frequencies, that is,\n\\begin{equation}\n\\distu{D_1}{D_2}{S} = \\sqrt{2}\\norm{\\theta_1 - \\theta_2},\n\\label{eq:base}\n\\end{equation}\nwhere $\\theta_i$ are the itemset frequencies $\\theta_i = \\freq{S}{D_i}$. This type of distance was used by~\\cite{hollmen03mixture}. Note that $\\distu{D_1}{D_2}{ind} = \\dist{D_1}{D_2}{ind}$, where $ind$ is the feature set containing only individual means.\n\n\\subsection{Real World Data Sets}\nWe examined the CM distance with several real-world data sets and several feature sets. 
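For concreteness, the base distance of Eq.~\ref{eq:base} is easy to compute when the features are the individual means. The following Python sketch is our own illustration (not code from the original experiments); it assumes binary data sets given as lists of 0/1 rows:

```python
import math

def frequencies(data):
    """Frequencies of the size-1 itemsets: the column-wise means of a
    binary data set given as a list of 0/1 rows."""
    n = len(data)
    return [sum(col) / n for col in zip(*data)]

def base_distance(d1, d2):
    """Base distance d_U(D1, D2) = sqrt(2) * ||theta_1 - theta_2||,
    where theta_i are the itemset frequencies of the two data sets."""
    t1, t2 = frequencies(d1), frequencies(d2)
    return math.sqrt(2) * math.sqrt(sum((a - b) ** 2 for a, b in zip(t1, t2)))
```

For this feature set the CM distance gives the identical value, since $\distu{D_1}{D_2}{ind} = \dist{D_1}{D_2}{ind}$.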
We had $7$ data sets: \\dtname{Bible}, a collection of $73$ books from the Bible\\footnote{The books were taken from \\url{http:\/\/www.gutenberg.org\/etext\/8300} on July 20, 2005}, \\dtname{Addresses}, a collection of $55$ inaugural addresses given by the presidents of the U.S.\\footnote{The addresses were taken from \\url{http:\/\/www.bartleby.com\/124\/} on August 17, 2005}, \\dtname{Beatles}, a set of lyrics from $13$ studio albums made by the Beatles, \\dtname{20Newsgroups}, a collection of 20 newsgroups\\footnote{The data set was taken from \\url{http:\/\/www.ai.mit.edu\/~jrennie\/20Newsgroups\/}, a site hosted by Jason Rennie, on June 8, 2001.}, \\dtname{TopGenres}, plot summaries for top-rated movies of 8 different genres, and \\dtname{TopDecades}, plot summaries for top-rated movies from 8 different decades\\footnote{The movie data sets were taken from \\url{http:\/\/www.imdb.com\/Top\/} on January 1, 2006}. \\dtname{20Newsgroups} contained (in that order) 3 religion groups, 3 politics groups, 5 computer groups, 4 science groups, 4 recreational groups, and \\dtname{misc.forsale}. \\dtname{TopGenres} consisted (in that order) of \\dtname{Action}, \\dtname{Adventure}, \\dtname{SciFi}, \\dtname{Drama}, \\dtname{Crime}, \\dtname{Horror}, \\dtname{Comedy}, and \\dtname{Romance}. The decades for \\dtname{TopDecades} were 1930--2000. Our final data set, \\dtname{Abstract}, was composed of abstracts describing NSF awards from 1990--1999\\footnote{The data set was taken from \\url{http:\/\/kdd.ics.uci.edu\/databases\/nsfabs\/nsfawards.data.html} on January 13, 2006}.\n\n\\dtname{Bible} and \\dtname{Addresses} were converted into binary data sets by taking subwindows of length $6$ (see the discussion in Section~\\ref{sec:sequences}). We reduced the number of attributes to $1000$ by using the mutual information gain. \\dtname{Beatles} was preprocessed differently: We transformed each song to its binary bag-of-words representation and selected the $100$ most informative words. 
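The subwindow conversion just described (cf. Section~\ref{sec:sequences}) can be sketched in a few lines of Python. This is a minimal illustration written for this presentation; the function names are our own and the actual preprocessing may differ:

```python
def windows_to_binary(seq, alphabet, k):
    """Slide a window of length k over seq and turn each window into a
    0/1 vector indicating which alphabet symbols occur in it."""
    data = []
    for i in range(len(seq) - k + 1):
        window = set(seq[i:i + k])
        data.append([1 if a in window else 0 for a in alphabet])
    return data

def episode_frequency(data, alphabet, episode):
    """Frequency of a parallel episode: the proportion of windows that
    contain every symbol of the episode."""
    idx = [alphabet.index(a) for a in episode]
    return sum(1 for row in data if all(row[j] for j in idx)) / len(data)
```

Note that the episode frequencies of the original sequence are exactly the itemset frequencies of the resulting binary data set.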
In \\dtname{20Newsgroups} a transaction was a binary bag-of-words representation of a single article. Similarly, in \\dtname{TopGenres} and in \\dtname{TopDecades} a transaction corresponded to a single plot summary. We reduced the number of attributes in these three data sets to $200$ by using the mutual information gain. In \\dtname{Abstract} a data set represented one year and a transaction was a bag-of-words representation of a single abstract. We reduced the dimension of \\dtname{Abstract} to $1000$.\n\n\\subsection{The Effect of Different Feature Functions}\nWe begin our experiments by studying how the CM distance (and the base distance) changes as we change features.\n\nWe used $3$ different sets of features: \\ftname{ind}, the independent means, \\ftname{cov}, the independent means along with the pairwise correlation, and \\ftname{freq}, a family of frequent itemsets obtained by using \\alname{APriori}~\\citep{agrawal96apriori}. We adjusted the threshold so that \\ftname{freq} contained $10K$ itemsets, where $K$ is the number of attributes.\n\nWe plotted the CM distances and the base distances as functions of $\\dcm{ind}$. The results are shown in Figure~\\ref{fig:scatterplots}. Since the number of constraints varies, we normalised the distances by dividing them by $\\sqrt{N}$, where $N$ is the number of constraints. In addition, we computed the correlation of each pair of distances. These correlations are shown in Table~\\ref{tab:corrs}.\n\n\\begin{figure}[htb!]\n\\center\n\\includegraphics[width=6cm]{pics\/cm_ind_vs_cov.eps}\n\\includegraphics[width=6cm]{pics\/cm_ind_vs_s.eps}\n\\includegraphics[width=6cm]{pics\/and_ind_vs_cov.eps}\n\\includegraphics[width=6cm]{pics\/and_ind_vs_s.eps}\n\\caption{CM and base distances as functions of $\\dcm{ind}$. A point represents a distance between two data sets. The upper two figures contain the CM distances while the lower two contain the base distance. 
The distances were normalised by dividing by $\\sqrt{N}$, where $N$ is the number of constraints. The corresponding correlations are given in Table~\\ref{tab:corrs}. Note that the $x$-axes of the two left (right) figures are identical.}\n\\label{fig:scatterplots}\n\\end{figure}\n\n\\begin{table}[htb!]\n\\centering\n\\begin{tabular}{rrrrrrrrr}\n\\toprule\n& \\multicolumn{3}{c}{$d_{CM}$ vs. $d_{CM}$} & \\multicolumn{3}{c}{$d_U$ vs. $d_U$} & \\multicolumn{2}{c}{$d_{CM}$ vs. $d_U$} \\\\\n\\cmidrule{2-9}\n & \\ftname{cov} & \\ftname{freq} & \\ftname{freq} & \\ftname{cov} & \\ftname{freq} & \\ftname{freq} & \\ftname{cov} & \\ftname{freq} \\\\\nData set & \\ftname{ind} & \\ftname{ind} & \\ftname{cov} & \\ftname{ind} & \\ftname{ind} & \\ftname{cov} & \\ftname{cov} & \\ftname{freq} \\\\\n\\midrule\n\\dtname{20Newsgroups} & $0.996$ & $0.725$ & $0.733$ & $0.902$ & $0.760$ & $0.941$ & $0.874$ & $0.571$ \\\\\n\\dtname{Addresses} & $1.000$ & $0.897$ & $0.897$ & $0.974$ & $0.927$ & $0.982$ & $0.974$ & $0.743$ \\\\\n\\dtname{Bible} & $1.000$ & $0.895$ & $0.895$ & $0.978$ & $0.946$ & $0.989$ & $0.978$ & $0.802$ \\\\\n\\dtname{Beatles} & $0.982$ & $0.764$ & $0.780$ & $0.951$ & $0.858$ & $0.855$ & $0.920$ & $0.827$ \\\\\n\\dtname{TopGenres} & $0.996$ & $0.817$ & $0.833$ & $0.916$ & $0.776$ & $0.934$ & $0.927$ & $0.931$ \\\\\n\\dtname{TopDecades} & $0.998$ & $0.735$ & $0.744$ & $0.897$ & $0.551$ & $0.682$ & $0.895$ & $0.346$ \\\\\n\\dtname{Abstract} & $1.000$ & $0.985$ & $0.985$ & $0.996$ & $0.993$ & $0.995$ & $0.996$ & $0.994$ \\\\\nTotal & $0.998$ & $0.702$ & $0.709$ & $0.934$ & $0.894$ & $0.938$ & $0.910$ & $0.607$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Correlations for various pairs of distances. A column represents a pair of distances and a row represents a single data set. For example, the correlation between $\\dcm{ind}$ and $\\dcm{cov}$ in \\dtname{20Newsgroups} is $0.996$. The last row is the correlation obtained by using the distances from all data sets simultaneously. 
Scatterplots for the columns 1--2 and 4--5 are given in Figure~\\ref{fig:scatterplots}.}\n\\label{tab:corrs}\n\\end{table}\n\nOur first observation from the results is that $\\dcm{cov}$ resembles $\\dcm{ind}$ whereas $\\dcm{freq}$ produces somewhat different results.\n\nThe correlations between $\\dcm{cov}$ and $\\dcm{ind}$ are stronger than the correlations between $\\du{cov}$ and $\\du{ind}$. This can be explained by examining Eq.~\\ref{eq:distcovform} in Example~\\ref{ex:cov}. If the dimension is $K$, then according to Eq.~\\ref{eq:distcovform} the itemsets of size $1$ appear $\\frac{1}{2}K(K - 1) + K$ times in the computation of $\\dcm{cov}$, whereas they appear only $K$ times in the computation of $\\du{cov}$. Hence, the itemsets of size $2$ have a smaller impact on $\\dcm{cov}$ than on $\\du{cov}$.\n\nOn the other hand, the correlations between $\\dcm{freq}$ and $\\dcm{ind}$ are weaker than the correlations between $\\du{freq}$ and $\\du{ind}$, implying that the itemsets of higher order have a stronger impact on the CM distance.\n\n\\subsection{Clustering Experiments}\nIn this section we continue our experiments by applying clustering algorithms to the distances. Our goal is to compare the clusterings obtained from the CM distance to those obtained from the base distance (given in Eq.~\\ref{eq:base}).\n\nWe used $3$ different clustering algorithms: a hierarchical clustering with complete linkage, a standard K-median, and a spectral algorithm by~\\cite{ng02clustering}. Since each algorithm takes the number of clusters as an input parameter, we varied the number of clusters between $3$ and $5$. 
We applied clustering algorithms to the distances $\\dcm{cov}$, $\\dcm{freq}$, $\\du{cov}$, and $\\du{freq}$, and compared the clusterings obtained from $\\dcm{cov}$ against the clusterings obtained from $\\du{cov}$, and similarly compared the clusterings obtained from $\\dcm{freq}$ against the clusterings obtained from $\\du{freq}$.\n\nWe measured the performance using $3$ different clustering indices: the ratio $r$ of the mean of the intra-cluster distances to the mean of the inter-cluster distances, the Davies-Bouldin (DB) index~\\citep{davies79index}, and the Calinski-Harabasz (CH) index~\\citep{calinski74index}.\n\nThe obtained results were studied in the following way: Given a data set and a performance index, we calculated the number of algorithms in which $\\dcm{cov}$ outperformed $\\du{cov}$. The distances $\\dcm{freq}$ and $\\du{freq}$ were handled similarly. The results are given in Table~\\ref{tab:clustperf2}. We also calculated the number of data sets in which $\\dcm{cov}$ outperformed $\\du{cov}$, given an algorithm and an index. These results are given in Table~\\ref{tab:clustperf1}.\n\n\\begin{table}[htb!]\n\\centering\n\\begin{tabular}{rrrrrrrrrr}\n\\toprule\n& & \\multicolumn{3}{c}{$\\dcm{cov}$ vs. $\\du{cov}$} & \\multicolumn{3}{c}{$\\dcm{freq}$ vs. $\\du{freq}$} \\\\\n\\cmidrule{3-8}\n& Data set & $r$ & $DB$ & $CH$ & $r$ & $DB$ & $CH$ & Total & $P$ \\\\\n\\midrule\n1. & \\dtname{20Newsgroups} & $0\/9$ & $2\/9$ & $7\/9$ & $8\/9$ & $5\/9$ & $9\/9$ & $31\/54$ & $0.22$ \\\\\n2. & \\dtname{Speeches} & $9\/9$ & $6\/9$ & $3\/9$ & $9\/9$ & $9\/9$ & $9\/9$ & $\\mathbf{45\/54}$ & $0.00$ \\\\\n3. & \\dtname{Bible} & $9\/9$ & $7\/9$ & $2\/9$ & $9\/9$ & $7\/9$ & $9\/9$ & $\\mathbf{43\/54}$ & $0.00$ \\\\\n4. & \\dtname{Beatles} & $0\/9$ & $3\/9$ & $6\/9$ & $0\/9$ & $1\/9$ & $0\/9$ & $\\mathbf{10\/54}$ & $0.00$ \\\\\n5. & \\dtname{TopGenres} & $0\/9$ & $4\/9$ & $5\/9$ & $0\/9$ & $1\/9$ & $0\/9$ & $\\mathbf{10\/54}$ & $0.00$ \\\\\n6. 
& \\dtname{TopDecades} & $3\/9$ & $7\/9$ & $2\/9$ & $7\/9$ & $7\/9$ & $9\/9$ & $\\mathbf{35\/54}$ & $0.02$ \\\\\n7. & \\dtname{Abstract} & $9\/9$ & $8\/9$ & $1\/9$ & $0\/9$ & $2\/9$ & $1\/9$ & $21\/54$ & $0.08$ \\\\\n\\midrule\n& Total & $30\/63$ & $37\/63$ & $26\/63$ & $33\/63$ & $32\/63$ & $37\/63$ & $195\/378$ & $0.50$ \\\\\n& $P$ & $0.61$ & $0.13$ & $0.13$ & $0.61$ & $0.80$ & $0.13$\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Summary of the performance results of the CM distance versus the base distance. A single entry contains the number of clustering algorithm configurations (see Column $1$ in Table~\\ref{tab:clustperf1}) in which the CM distance was better than the base distance. The $P$-value is obtained from the standard Fisher sign test.}\n\\label{tab:clustperf2}\n\\end{table}\n\n\\begin{table}[htb!]\n\\centering\n\\begin{tabular}{rrrrrrrrrr}\n\\toprule\n& & \\multicolumn{3}{c}{$\\dcm{cov}$ vs. $\\du{cov}$} & \\multicolumn{3}{c}{$\\dcm{freq}$ vs. $\\du{freq}$} \\\\\n\\cmidrule{3-8}\n& Algorithm & $r$ & $DB$ & $CH$ & $r$ & $DB$ & $CH$ & Total & $P$ \\\\\n\\midrule\n1. & \\alname{K-med(3)} & $4\/7$ & $2\/7$ & $5\/7$ & $4\/7$ & $4\/7$ & $4\/7$ & $23\/42$ & $0.44$ \\\\\n2. & \\alname{K-med(4)} & $4\/7$ & $4\/7$ & $3\/7$ & $4\/7$ & $4\/7$ & $4\/7$ & $23\/42$ & $0.44$ \\\\\n3. & \\alname{K-med(5)} & $4\/7$ & $4\/7$ & $3\/7$ & $4\/7$ & $4\/7$ & $4\/7$ & $23\/42$ & $0.44$ \\\\\n4. & \\alname{link(3)} & $3\/7$ & $4\/7$ & $3\/7$ & $2\/7$ & $3\/7$ & $4\/7$ & $19\/42$ & $0.44$ \\\\\n5. & \\alname{link(4)} & $3\/7$ & $4\/7$ & $3\/7$ & $4\/7$ & $3\/7$ & $4\/7$ & $21\/42$ & $0.88$ \\\\\n6. & \\alname{link(5)} & $3\/7$ & $3\/7$ & $4\/7$ & $4\/7$ & $2\/7$ & $4\/7$ & $20\/42$ & $0.64$ \\\\\n7. & \\alname{spect(3)} & $3\/7$ & $6\/7$ & $1\/7$ & $3\/7$ & $4\/7$ & $4\/7$ & $21\/42$ & $0.88$ \\\\\n8. & \\alname{spect(4)} & $3\/7$ & $4\/7$ & $3\/7$ & $4\/7$ & $4\/7$ & $4\/7$ & $22\/42$ & $0.64$ \\\\\n9. 
& \\alname{spect(5)} & $3\/7$ & $6\/7$ & $1\/7$ & $4\/7$ & $4\/7$ & $5\/7$ & $23\/42$ & $0.44$ \\\\\n\\midrule\n& Total & $30\/63$ & $37\/63$ & $26\/63$ & $33\/63$ & $32\/63$ & $37\/63$ & $195\/378$ & $0.50$ \\\\\n& $P$ & $0.61$ & $0.13$ & $0.13$ & $0.61$ & $0.80$ & $0.13$\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Summary of the performance results of the CM distance versus the base distance. A single entry contains the number of data sets (see Column $1$ in Table~\\ref{tab:clustperf2}) in which the CM distance was better than the base distance. The $P$-value is obtained from the standard Fisher sign test.}\n\\label{tab:clustperf1}\n\\end{table}\n\nWe see from Table~\\ref{tab:clustperf2} that the performance of the CM distance against the base distance depends on the data set. For example, the CM distance yields tighter clusterings in \\dtname{Speeches}, \\dtname{Bible}, and \\dtname{TopDecades}, whereas the base distance outperforms the CM distance in \\dtname{Beatles} and \\dtname{TopGenres}.\n\nTable~\\ref{tab:clustperf1} suggests that the overall performance of the CM distance is as good as that of the base distance. The CM distance obtains a better index $195$ times out of $378$. The statistical test suggests that this is a tie. The same observation holds if we compare the distances algorithm-wise or index-wise.\n\n\\subsection{Distance Matrices}\nIn this section we investigate the CM distance matrices for real-world data sets. Our goal is to demonstrate that the CM distance produces interesting and interpretable results.\n\nWe calculated the distance matrices using the feature sets \\ftname{ind}, \\ftname{cov}, and \\ftname{freq}. The matrices are given in Figures~\\ref{fig:distances}~and~\\ref{fig:distances2}. In addition, we computed performance indices, the ratio of the mean of the intra-cluster distances to the mean of the inter-cluster distances, for various clusterings and compared these indices to the ones obtained from the base distances. 
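The ratio index is straightforward to compute from a pairwise distance matrix. The following Python sketch is our own illustration of the index (the function name is ours), assuming a precomputed symmetric distance matrix and one cluster label per data set:

```python
def ratio_index(dist, labels):
    """Mean intra-cluster distance divided by mean inter-cluster distance,
    computed over all unordered pairs; smaller values mean tighter clusters."""
    intra, inter = [], []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            (intra if labels[i] == labels[j] else inter).append(dist[i][j])
    return (sum(intra) / len(intra)) / (sum(inter) / len(inter))
```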
The results are given in Table~\\ref{tab:clusterings}.\n\n\n\\begin{table}[htb!]\n\\centering\n\\begin{tabular}{rlrrrrr}\n\\toprule\n & & & \\multicolumn{2}{c}{\\ftname{cov}} & \\multicolumn{2}{c}{\\ftname{freq}} \\\\\n\\cmidrule{4-7}\nData & Clustering & ind & $d_{CM}$ & $d_U$ & $d_{CM}$ & $d_U$\\\\\n\\midrule\n\\dtname{Bible} & Old Test. $\\mid$ New Test. & $0.79$ & $0.79$ & $0.82$ & $0.73$ & $0.81$ \\\\\n & Old Test. $\\mid$ Gospels $\\mid$ Epistles & $0.79$ & $0.79$ & $0.81$ & $0.73$ & $0.81$ \\\\\n\\dtname{Addresses} & 1--32 $\\mid$ 33--55 & $0.79$ & $0.80$ & $0.85$ & $0.70$ & $0.84$ \\\\\n & 1--11 $\\mid$ 12--22 $\\mid$ 23--33 $\\mid$ 34--44 $\\mid$ 45--55 & $0.83$ & $0.83$ & $0.87$ & $0.75$ & $0.87$ \\\\\n\\dtname{Beatles} & 1,2,4--6 $\\mid$ 7--10,12--13 $\\mid$ 3 $\\mid$ 11 & $0.83$ & $0.86$ & $0.83$ & $0.88$ & $0.61$ \\\\\n & 1,2,4,12,13 $\\mid$ 5--10 $\\mid$ 3 $\\mid$ 11 & $0.84$ & $0.85$ & $0.84$ & $0.89$ & $0.63$ \\\\\n\\dtname{20Newsgroups} & Rel.,Pol. $\\mid$ Rest & $0.76$ & $0.77$ & $0.67$ & $0.56$ & $0.62$ \\\\\n & Rel.,Pol. $\\mid$ Comp., misc $\\mid$ Rest & $0.78$ & $0.78$ & $0.79$ & $0.53$ & $0.79$ \\\\\n\\dtname{TopGenres} & Act.,Adv., SciFi $\\mid$ Rest & $0.74$ & $0.73$ & $0.64$ & $0.50$ & $0.32$ \\\\\n\\dtname{TopDecades} & 1930--1960 $\\mid$ 1970--2000 & $0.84$ & $0.83$ & $0.88$ & $0.75$ & $0.88$ \\\\\n & 1930--1950 $\\mid$ 1960--2000 & $0.88$ & $0.88$ & $0.98$ & $0.57$ & $1.06$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Statistics of various interpretable clusterings. The proportions are the averages of the intra-cluster distances divided by the averages of the inter-cluster distances. 
Hence small fractions imply tight clusterings.}\n\\label{tab:clusterings}\n\\end{table}\n\n\\begin{figure}[htb!]\n\\center\n\\small\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/news_ind.eps}\n\\dtname{20Newsgroups}, $\\dcm{ind}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/news_cov.eps}\n\\dtname{20Newsgroups}, $\\dcm{cov}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/news_S.eps}\n\\dtname{20Newsgroups}, $\\dcm{freq}$\n\\end{minipage}\n\\bigskip\n\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/topgenre_ind.eps}\n\\dtname{TopGenres}, $\\dcm{ind}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/topgenre_cov.eps}\n\\dtname{TopGenres}, $\\dcm{cov}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/topgenre_S.eps}\n\\dtname{TopGenres}, $\\dcm{freq}$\n\\end{minipage}\n\\bigskip\n\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/topdecade_ind.eps}\n\\dtname{TopDecades}, $\\dcm{ind}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/topdecade_cov.eps}\n\\dtname{TopDecades}, $\\dcm{cov}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/topdecade_S.eps}\n\\dtname{TopDecades}, $\\dcm{freq}$\n\\end{minipage}\n\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/abstract_ind.eps}\n\\dtname{Abstract}, $\\dcm{ind}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/abstract_cov.eps}\n\\dtname{Abstract}, $\\dcm{cov}$\n\\end{minipage}\n\\begin{minipage}{4.5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/abstract_S.eps}\n\\dtname{Abstract}, $\\dcm{freq}$\n\\end{minipage}\n\\caption{Distance matrices for \\ftname{20Newsgroups}, \\ftname{TopGenres}, \\ftname{TopDecades}, and \\ftname{Abstract}. 
In the first column the feature set \\ftname{ind} contains the independent means, in the second feature set \\ftname{cov} the pairwise correlation is added, and in the third column the feature set \\ftname{freq} consists of $10K$ most frequent itemsets, where $K$ is the number of attributes. Darker colours indicate smaller distances.}\n\\label{fig:distances2}\n\\end{figure}\n\n\\begin{figure}[htb!]\n\\center\n\\small\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=5cm]{pics\/bible_ind.eps}\n\\dtname{Bible}, $\\dcm{ind}$\n\\end{minipage}\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=5cm]{pics\/bible_cov.eps}\n\\dtname{Bible}, $\\dcm{cov}$\n\\end{minipage}\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=5cm]{pics\/bible_S.eps}\n\\dtname{Bible}, $\\dcm{freq}$\n\\end{minipage}\n\\bigskip\n\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=5cm]{pics\/speech_ind.eps}\n\\dtname{Addresses}, $\\dcm{ind}$\n\\end{minipage}\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=5cm]{pics\/speech_cov.eps}\n\\dtname{Addresses}, $\\dcm{cov}$\n\\end{minipage}\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=5cm]{pics\/speech_S.eps}\n\\dtname{Addresses}, $\\dcm{freq}$\n\\end{minipage}\n\\bigskip\n\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/beatles_ind.eps}\n\\dtname{Beatles}, $\\dcm{ind}$\n\\end{minipage}\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/beatles_cov.eps}\n\\dtname{Beatles}, $\\dcm{cov}$\n\\end{minipage}\n\\begin{minipage}{5cm}\n\\center\n\\includegraphics[width=4cm]{pics\/beatles_S.eps}\n\\dtname{Beatles}, $\\dcm{freq}$\n\\end{minipage}\n\\caption{Distance matrices for \\dtname{Bible}, \\dtname{Addresses}, and \\dtname{Beatles}. 
In the first column the feature set \\ftname{ind} contains the independent means, in the second feature set \\ftname{cov} the pairwise correlation is added, and in the third column the feature set \\ftname{freq} consists of $10K$ most frequent itemsets, where $K$ is the number of attributes. Darker colours indicate smaller distances.}\n\\label{fig:distances}\n\\end{figure}\n\nWe should stress that standard edit distances would not work for these data sets. For example, the sequences have different lengths and hence the Levenshtein distance cannot be applied.\n\nThe key observation is that, according to the CM distance, the data sets have structure. We can also provide some interpretations of the results: In \\dtname{Bible} we see a cluster starting from the $46$th book. The New Testament starts from the $47$th book. An alternative clustering is obtained by separating the Epistles, starting from the $52$nd book, from the Gospels. In \\dtname{Addresses} we see some temporal dependence. Early speeches are different from the modern speeches. In \\dtname{Beatles} we see that the early albums are linked together and the last two albums are also linked together. The third album, \\dtname{Help!}, is peculiar. It is not linked to the early albums but rather to the later work. One explanation may be that, unlike the other early albums, this album does not contain any cover songs. In \\dtname{20Newsgroups} the politics and religion groups are close to each other, and so are the computer-related groups. The group \\dtname{misc.forsale} is close to the computer-related groups. In \\dtname{TopGenres} \\dtname{Action} and \\dtname{Adventure} are close to each other. Also \\dtname{Comedy} and \\dtname{Romance} are linked. In \\dtname{TopDecades} and in \\dtname{Abstract} we see temporal behaviour. 
In Table~\\ref{tab:clusterings} the CM distance outperforms the base distance, except for \\dtname{Beatles} and \\dtname{TopGenres}.\n\n\\section{The Constrained Minimum Distance}\n\\label{sec:cm}\nIn the following subsection we define our distance using geometrical intuition and show that the distance can be evaluated efficiently. In the second subsection we discuss various properties of the distance, and in the last subsection we provide an alternative justification for the distance. The aim of this justification is to provide more theoretical evidence for our distance.\n\\subsection{The Definition}\nWe begin by giving some basic definitions. By a \\emph{data set} $D$ we mean a finite collection of samples lying in some finite space $\\Omega$. The set $\\Omega$ is called the \\emph{sample space}, and from now on we will denote this space by the letter $\\Omega$. The number of elements in $\\Omega$ is denoted by $\\abs{\\Omega}$. The number of samples in the data set $D$ is denoted by $\\abs{D}$.\n\nAs we said in the introduction, our goal is not to define a distance directly on data sets but rather through some statistics evaluated from the data sets. In order to do so, we define a \\emph{feature function} $\\funcdef{S}{\\Omega}{\\real^N}$ that maps a point in the sample space to a real vector. Throughout this section $S$ will indicate some given feature function and $N$ will indicate the dimension of the range space of $S$. We will also denote the $i^{\\text{th}}$ component of $S$ by $S_i$. Note that if we have several feature functions, then we can join them into one big feature function. A \\emph{frequency} $\\theta \\in \\real^N$ of $S$ taken with respect to a data set $D$ is the average of the values of $S$ over the data set, that is, $\\theta = \\frac{1}{\\abs{D}}\\sum_{\\omega \\in D} S(\\omega)$. 
We denote this frequency by $\\freq{S}{D}$.\n\nAlthough we do not make any assumptions concerning the size of $\\Omega$, some of our choices are motivated by thinking that $\\abs{\\Omega}$ can be very large --- so large that even the simplest operation, say, enumerating all the elements in $\\Omega$, is not tractable. On the other hand, we assume that $N$ is such that an algorithm executable in, say, $O(N^3)$ time is feasible. In other words, we seek a distance whose evaluation time does not depend on the size of $\\Omega$ but only on $N$.\n\nLet $\\mathbb{P}$ be the set of all distributions defined on $\\Omega$. Given a feature function $S$ and a frequency $\\theta$ (calculated from some data set) we say that a distribution $p \\in \\mathbb{P}$ satisfies the frequency $\\theta$ if $\\mean{p}{S} = \\theta$. We also define a \\emph{constrained set of distributions}\n\\[\n\\constp{S}{\\theta} = \\set{p \\in \\mathbb{P} \\mid \\mean{p}{S} = \\theta}\n\\]\nto be the set of distributions satisfying $\\theta$. The idea behind this is as follows: From a given data set we calculate some statistics, and then we examine the distributions that can produce such frequencies.\n\nWe interpret the sets $\\mathbb{P}$ and $\\constp{S}{\\theta}$ as \\emph{geometrical objects}. This is done by enumerating the points in $\\Omega$, that is, we think that $\\Omega = \\enset{1,2,}{\\abs{\\Omega}}$. We can now represent each distribution $p \\in \\mathbb{P}$ by a vector $u \\in \\real^{\\abs{\\Omega}}$ by setting $u_i = p(i)$. Clearly, $\\mathbb{P}$ can be represented by the vectors in $\\real^{\\abs{\\Omega}}$ having only non-negative elements and summing to one. In fact, $\\mathbb{P}$ is a simplex in $\\real^{\\abs{\\Omega}}$. 
Similarly, we can give an alternative definition for $\\constp{S}{\\theta}$ by saying\n\\begin{equation}\n\\constp{S}{\\theta} = \\set{u \\in \\real^{\\abs{\\Omega}} \\mid \\sum_{i \\in \\Omega }S(i)u_i = \\theta, \\sum_{i \\in \\Omega }u_i = 1, u \\geq 0}.\n\\label{eq:constpdef}\n\\end{equation}\nLet us now study the set $\\constp{S}{\\theta}$. In order to do so, we define a \\emph{constrained space}\n\\[\n\\const{S}{\\theta} = \\set{u \\in \\real^{\\abs{\\Omega}} \\mid \\sum_{i \\in \\Omega }S(i)u_i = \\theta, \\sum_{i \\in \\Omega }u_i = 1},\n\\]\nthat is, we drop the last condition from Eq.~\\ref{eq:constpdef}. The set $\\constp{S}{\\theta}$ is included in $\\const{S}{\\theta}$; the set $\\constp{S}{\\theta}$ consists of the non-negative vectors from $\\const{S}{\\theta}$. Note that the constraints defining $\\const{S}{\\theta}$ are vector products. This implies that $\\const{S}{\\theta}$ is an affine space, and that, given two different frequencies $\\theta_1$ and $\\theta_2$, the spaces $\\const{S}{\\theta_1}$ and $\\const{S}{\\theta_2}$ are parallel. \n\\begin{example}\nLet us illustrate the discussion above with a simple example. Assume that $\\Omega = \\set{A,B,C}$. We can then imagine the distributions as vectors in $\\real^3$. The set $\\mathbb{P}$ is the triangle having $\\pr{1, 0, 0}$, $\\pr{0, 1, 0}$, and $\\pr{0, 0, 1}$ as corner points (see Figure~\\ref{fig:ex1plot}). Define a feature function $S$ to be\n\\[\nS(\\omega) = \\left\\{\n\\begin{array}{ll}\n1 & \\omega = C \\\\\n0 & \\omega \\neq C.\n\\end{array}\\right.\n\\]\nThe frequency $\\freq{S}{D}$ is the proportion of $C$ in a data set $D$. Let $D_1 = \\pr{C, C, C, A}$ and $D_2 = \\pr{C, A, B, A}$. Then $\\freq{S}{D_1} = 0.75$ and $\\freq{S}{D_2} = 0.25$. The spaces $\\const{S}{0.25}$ and $\\const{S}{0.75}$ are parallel lines (see Figure~\\ref{fig:ex1plot}). 
The distribution sets $\\constp{S}{0.25}$ and $\\constp{S}{0.75}$ are the segments of the lines $\\const{S}{0.25}$ and $\\const{S}{0.75}$, respectively.\n\\begin{figure}[ht!]\n\\center\n\\begin{minipage}{7cm}\n\\includegraphics[width=7cm]{pics\/ex1plot3.eps}\n\\end{minipage}\n\\begin{minipage}{7cm}\n\\includegraphics[width=7cm]{pics\/ex1plot32.eps}\n\\end{minipage}\n\\caption{A geometrical interpretation of the distribution sets for $\\abs{\\Omega} = 3$. In the left figure, the set $\\mathbb{P}$, that is, the set of all distributions, is a triangle. The constrained spaces $\\const{S}{0.25}$ and $\\const{S}{0.75}$ are parallel lines and the distribution sets $\\constp{S}{0.25}$ and $\\constp{S}{0.75}$ are segments of the constrained spaces. In the right figure we added a segment perpendicular to the constraint spaces. This segment has the shortest length among the segments connecting the constrained spaces.}\n\\label{fig:ex1plot}\n\\end{figure}\n\\label{ex:illustration}\n\\end{example}\nThe idea of interpreting distributions as geometrical objects is not new. For example, a well-known Boolean query problem is solved by applying linear programming to the constrained sets of distributions~\\citep{hailperin65inequalities, calders03thesis}.\n\nLet us recall some elementary Euclidean geometry: Assume that we are given two parallel affine spaces $\\mathcal{A}_1$ and $\\mathcal{A}_2$. There is a natural way of measuring the distance between these two spaces. This is done by taking the length of the shortest segment going from a point in $\\mathcal{A}_1$ to a point in $\\mathcal{A}_2$ (see, for example, the illustration in Figure~\\ref{fig:ex1plot}). We know that the segment has the shortest length if and only if it is orthogonal to the affine spaces. 
We also know that if we select a point $a_1 \\in \\mathcal{A}_1$ having the shortest norm, and if we similarly select $a_2 \\in \\mathcal{A}_2$, then the segment going from $a_1$ to $a_2$ has the shortest length.\n\nThe preceding discussion and the fact that the constrained spaces are affine motivate us to give the following definition: Assume that we are given two data sets, $D_1$ and $D_2$, and a feature function $S$. Let us shorten the notation $\\const{S}{\\freq{S}{D_i}}$ by $\\const{S}{D_i}$. We pick a vector from each constrained space having the shortest norm\n\\[\nu_i = \\underset{u \\in \\const{S}{D_i}}{\\argmin} \\norm{u}, \\quad i = 1,2.\n\\]\nWe define the distance between $D_1$ and $D_2$ to be\n\\begin{equation}\n\\dist{D_1}{D_2}{S} = \\sqrt{\\abs{\\Omega}}\\norm{u_1-u_2}.\n\\label{eq:def}\n\\end{equation}\nThe reasons for having the factor $\\sqrt{\\abs{\\Omega}}$ will be given later. We will refer to this distance as the \\emph{Constrained Minimum (CM) distance}. We should emphasise that $u_1$ or $u_2$ may have negative elements. Thus the CM distance is \\emph{not} a distance between two distributions; it is rather a distance based on the frequencies of a given feature function and is motivated by the geometrical interpretation of the distribution sets.\n\nThe main reason why we define the CM distance using the constrained spaces $\\const{S}{D_i}$ and not the distribution sets $\\constp{S}{D_i}$ is that we can evaluate the CM distance efficiently. We discussed earlier that $\\Omega$ may be very large, so it is crucial that the evaluation time of a distance does not depend on $\\abs{\\Omega}$. 
The following theorem says that the CM distance can be represented using the frequencies and a covariance matrix\n\\[\n\\cov{}{}{S} = \\frac{1}{\\abs{\\Omega}}\\sum_{\\omega \\in \\Omega}S(\\omega)S(\\omega)^T-\\pr{\\frac{1}{\\abs{\\Omega}}\\sum_{\\omega \\in \\Omega}S(\\omega)}\\pr{\\frac{1}{\\abs{\\Omega}}\\sum_{\\omega \\in \\Omega}S(\\omega)}^T.\n\\]\n\\begin{theorem}\n\\label{thr:calc}\nAssume that $\\cov{}{}{S}$ is invertible. For the CM distance between two data sets $D_1$ and $D_2$ we have\n\\[\n\\dist{D_1}{D_2}{S}^2 = \\pr{\\theta_1-\\theta_2}^T\\cov{}{-1}{S}\\pr{\\theta_1-\\theta_2},\n\\]\nwhere $\\theta_i = \\freq{S}{D_i}$.\n\\end{theorem}\nThe proofs for the theorems are given in the Appendix.\n\nThe preceding theorem shows that we can evaluate the distance using the covariance matrix and the frequencies. If we assume that evaluating a single component of the feature function $S$ is a unit operation, then the frequencies can be calculated in $O(N\\abs{D_1}+N\\abs{D_2})$ time. The evaluation time of the covariance matrix is $O(\\abs{\\Omega}N^2)$, but we assume that $S$ is such that we know a closed form for the covariance matrix (such cases will be discussed in Section~\\ref{sec:cmbin}), that is, we assume that we can evaluate the covariance matrix in $O(N^2)$ time. Inverting the matrix takes $O(N^3)$ time and evaluating the distance itself is an $O(N^2)$ operation. Note that calculating the frequencies and inverting the covariance matrix need to be done only once: for example, if we have $k$ data sets, then calculating the distances between every data set pair can be done in $O\\pr{N\\sum_i^k\\abs{D_i}+N^3+k^2N^2}$ time.\n \n\\begin{example}\nLet us evaluate the distance between the data sets given in Example~\\ref{ex:illustration} using both the definition of the CM distance and Theorem~\\ref{thr:calc}. We see that the shortest vector in $\\const{S}{0.25}$ is $u_1 = \\pr{\\frac{3}{8}, \\frac{3}{8}, \\frac{1}{4}}$. 
Similarly, the shortest vector in $\\const{S}{0.75}$ is $u_2 = \\pr{\\frac{1}{8}, \\frac{1}{8}, \\frac{3}{4}}$. Thus the CM distance is equal to\n\\[\n\\dist{D_1}{D_2}{S} = \\sqrt{3}\\norm{u_1-u_2} = \\sqrt{3}\\sqrta{\\frac{2^2}{8^2}+\\frac{2^2}{8^2}+\\frac{2^2}{4^2}} = \\frac{3}{\\sqrt{8}}.\n\\]\nThe covariance of $S$ is equal to $\\cov{}{}{S} = \\frac{1}{3}-\\frac{1}{3}\\frac{1}{3} = \\frac{2}{9}$. Thus Theorem~\\ref{thr:calc} gives us\n\\[\n\\dist{D_1}{D_2}{S} = \\sqrta{\\cov{}{-1}{S}\\pr{\\frac{3}{4}-\\frac{1}{4}}^2} = \\sqrta{\\frac{9}{2}\\pr{\\frac{2}{4}}^2} = \\frac{3}{\\sqrt{8}}.\n\\]\n\\end{example}\nFrom Theorem~\\ref{thr:calc} we see a reason to have the factor $\\sqrt{\\abs{\\Omega}}$ in Eq.~\\ref{eq:def}: Assume that we have two data sets $D_1$ and $D_2$ and a feature function $S$. We define a new sample space $\\Omega' = \\set{\\pr{\\omega, b} \\mid \\omega \\in \\Omega, b = 0, 1}$ and transform the original data sets into new ones by setting $D_i' = \\set{\\pr{\\omega, 0} \\mid \\omega \\in D_i}$. We also expand $S$ into $\\Omega'$ by setting $S'(\\omega, 1) = S'(\\omega, 0) = S(\\omega)$. Note that $S(D_i) = S'(D_i')$ and that $\\cov{}{}{S} = \\cov{}{}{S'}$ so Theorem~\\ref{thr:calc} says that the CM distance has not changed during this transformation. This is very reasonable since we did not actually change anything essential: We simply added a bogus variable into the sample space, and we ignored this variable during the feature extraction. The size of the new sample space is $\\abs{\\Omega'} = 2\\abs{\\Omega}$. This means that the difference $\\norm{u_1-u_2}$ in Eq.~\\ref{eq:def} is smaller by the factor $\\sqrt{2}$. 
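As a quick numerical aside, the identity in Theorem 3 (the Mahalanobis-type form of the CM distance) is easy to check in a few lines. The sketch below is our own Python illustration, not part of the paper; the helper name `cm_distance` is an assumption. It reproduces the value 3/sqrt(8) computed in the example above.

```python
import numpy as np

def cm_distance(theta1, theta2, cov):
    # Theorem: d(D1, D2; S)^2 = (theta1 - theta2)^T Cov[S]^{-1} (theta1 - theta2)
    diff = np.atleast_1d(np.asarray(theta1, dtype=float) - np.asarray(theta2, dtype=float))
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Example: S is the indicator of C on Omega = {A, B, C},
# so Cov[S] = 1/3 - (1/3)^2 = 2/9, and the frequencies are 0.75 and 0.25.
d = cm_distance([0.75], [0.25], [[2.0 / 9.0]])
print(d)  # 3/sqrt(8), approximately 1.0607
```

Using `np.linalg.solve` instead of explicitly inverting the covariance matrix is the standard numerically stable choice; for many data set pairs one would factor the matrix once, as the complexity discussion above suggests.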
The factor $\\sqrt{\\abs{\\Omega}}$ is needed to negate this effect.\n\\subsection{Properties}\n\\label{sec:properties}\nWe will now list some important properties of $\\dist{D_1}{D_2}{S}$.\n\\begin{theorem} $\\dist{D_1}{D_2}{S}$ is a pseudo metric.\n\\label{thr:metric}\n\\end{theorem}\n\nThe following theorem says that adding an external data set to both of the original data sets makes the distance smaller, which is a very reasonable property.\n\n\\begin{theorem}\nAssume three data sets $D_1$, $D_2$, and $D_3$ over the same set of items. Assume further that $D_1$ and $D_2$ have the same number of data points and let $\\epsilon = \\frac{\\abs{D_3}}{\\abs{D_1}+\\abs{D_3}}$. Then \\[\\dist{D_1 \\cup D_3}{D_2 \\cup D_3}{S} = (1-\\epsilon)\\dist{D_1}{D_2}{S}.\\]\n\\label{thr:augdata}\n\\end{theorem}\n\n\\begin{theorem}\nLet $A$ be an $M \\times N$ matrix and $b$ a vector of length $M$. Define $T(\\omega) = AS(\\omega)+b$. It follows that $\\dist{D_1}{D_2}{T} \\leq \\dist{D_1}{D_2}{S}$ for any $D_1$ and $D_2$.\n\\label{thr:linear}\n\\end{theorem}\n\\begin{corollary}\nAdding extra feature functions cannot decrease the distance.\n\\end{corollary}\n\\begin{corollary}\nLet $A$ be an invertible $N \\times N$ matrix and $b$ a vector of length $N$. Define $T(\\omega) = AS(\\omega)+b$. It follows that $\\dist{D_1}{D_2}{T} = \\dist{D_1}{D_2}{S}$ for any $D_1$ and $D_2$.\n\\label{thr:ind}\n\\end{corollary}\n\nCorollary~\\ref{thr:ind} has an interesting interpretation. Note that $\\freq{T}{D} = A\\freq{S}{D}+b$ and that $\\freq{S}{D} = A^{-1}\\pr{\\freq{T}{D}-b}$. This means that if we know the frequencies $\\freq{S}{D}$, then we can infer the frequencies $\\freq{T}{D}$ without a new data scan. Similarly, we can infer $\\freq{S}{D}$ from $\\freq{T}{D}$. We can interpret this relation by thinking that $\\freq{S}{D}$ and $\\freq{T}{D}$ are merely different representations of the same feature information. 
Corollary~\\ref{thr:ind} says that the CM distance is equal for any such representation.\n\n\\subsection{Alternative Characterisation of the CM Distance}\nWe derived our distance using geometrical interpretation of the distribution sets. In this section we will provide an alternative way for deriving the CM distance. Namely, we will show that if some distance is of Mahalanobis type and satisfies some mild assumptions, then this distance is proportional to the CM distance. The purpose of this theorem is to provide more theoretical evidence to our distance.\n\nWe say that a distance $d$ is of Mahalanobis type if\n\\[\n\\dista{D_1}{D_2}{S}^2 = \\pr{\\theta_1-\\theta_2}^TC(S)^{-1}\\pr{\\theta_1-\\theta_2},\n\\]\nwhere $\\theta_1 = \\freq{S}{D_1}$ and $\\theta_2 = \\freq{S}{D_2}$ and $C(S)$ maps a feature function $S$ to a symmetric $N\\times N$ matrix. Note that if $C(S) = \\cov{}{}{S}$, then the distance $d$ is the CM distance. We set $\\mathbb{M}$ to be the collection of all distances of Mahalanobis type. Can we justify the decision that we examine only the distances included in $\\mathbb{M}$? One reason is that a distance belonging to $\\mathbb{M}$ is guaranteed to be a metric. The most important reason, however, is the fact that we can evaluate the distance belonging to $\\mathbb{M}$ efficiently (assuming, of course, that we can evaluate $C(S)$).\n\nLet $d \\in \\mathbb{M}$ and assume that it satisfies two additional assumptions:\n\\begin{enumerate}\n\\item If $A$ is an $M \\times N$ matrix and $b$ is a vector of length $M$ and if we set $T(\\omega) = AS(\\omega)+b$, then $C(T) = AC(S)A^T$.\n\\label{as:1}\n\\item Fix two points $\\omega_1$ and $\\omega_2$. Let $\\funcdef{\\sigma}{\\Omega}{\\Omega}$ be a function swapping $\\omega_1$ and $\\omega_2$ and mapping everything else to itself. Define $U(\\omega) = S(\\sigma(\\omega))$. 
Then $\\dista{\\sigma(D_1)}{\\sigma(D_2)}{U} = \\dista{D_1}{D_2}{S}$.\n\\label{as:2}\n\\end{enumerate}\nThe first assumption can be partially justified if we consider that $A$ is an invertible square matrix. In this case the assumption is identical to $\\dista{\\cdot}{\\cdot}{AS+b} = \\dista{\\cdot}{\\cdot}{S}$. This is to say that the distance is independent of the representation of the frequency information. This is similar to Corollary~\\ref{thr:ind} given in Section~\\ref{sec:properties}. We can construct a distance that would satisfy Assumption~\\ref{as:1} in the invertible case but fail in a general case. We consider such distances pathological and exclude them by making a broader assumption. To justify Assumption~\\ref{as:2}, note that the frequencies have not changed, that is, $\\freq{U}{\\sigma(D)} = \\freq{S}{D}$. Only the representation of individual data points has changed. Our argument is that the distance should be based on the frequencies and not on the values of the data points.\n\\begin{theorem}\nLet $d \\in \\mathbb{M}$ satisfy Assumptions~\\ref{as:1}~and~\\ref{as:2}. If $C(S)$ is invertible, then there is a constant $c>0$, not depending on $S$, such that $\\dista{\\cdot}{\\cdot}{S} = c\\dist{\\cdot}{\\cdot}{S}$.\n\\label{thr:char}\n\\end{theorem}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nDeveloping the methods for estimating Hamiltonians has two important motivations in quantum information processing. First, Hamiltonians fully govern the dynamics of quantum systems. Hence, whether the Hamiltonians can be precisely estimated determines whether the control operations on these quantum devices are highly accurate. For instance, quantum circuits are generally realized through control pulse techniques \\cite{schafer2018fast}, which are designed and optimized beforehand according to the parameters of the system Hamiltonians. 
Second, as a branch of quantum process tomography \\cite{PhysRevA.77.032322}, estimating Hamiltonians provides an alternative approach to estimating the fidelities of the performed quantum simulations. Therefore, estimating Hamiltonians is a central problem in the related quantum fields, such as quantum platforms \\cite{mermin2007quantum}, quantum control \\cite{dong2010quantum, helsen2020general}, and quantum simulations \\cite{RevModPhys.86.153, xin2020quantum}.\n\nSo far, various methodologies have been studied for this purpose. In principle, Hamiltonians can be estimated by quantum state and process tomography, considering that Hamiltonians are the generators of the dynamical processes \\cite{PhysRevA.77.032322,xin2020improved,xin2017quantum}. However, this approach requires exponential physical resources, although many-body Hamiltonians have only a polynomial number of unknown parameters because of the physical constraints. Previously, some methods using Fourier transforms or fitting on the temporal measurement records of some observables were also proposed to estimate Hamiltonians with few qubits \\cite{PhysRevLett.102.187203,PhysRevA.71.062312,PhysRevA.73.052317}. Zhang and Sarovar \\cite{PhysRevLett.113.080401,PhysRevA.91.052121} proposed an approach for estimating Hamiltonians from limited measurements via the eigensystem realization algorithm (ERA). This method was experimentally demonstrated on a nuclear magnetic resonance quantum processor \\cite{hou2017experimental}. Akira Sone {\\it{et al.}} further studied the identifiability problem of Hamiltonians and the necessary experimental resources in the ERA method, showing that more observables are necessary and that the required experimental measurements scale exponentially with the system size for complicated Hamiltonians \\cite{PhysRevA.95.022335,sone2017exact}. 
\nMany-body local Hamiltonians can be uniquely estimated from a single eigenstate, which has also inspired subsequent research \\cite{qi2019determining,PhysRevB.100.134201,PhysRevX.8.021026, PhysRevLett.122.020504, xin2019local}. \nRecently, a quantum quench method was also proposed to reconstruct a generic many-body local Hamiltonian \\cite{PhysRevLett.124.160502}, which uses pairs of generic initial and final states connected by the time evolution of Hamiltonians. \n\n\nMachine learning has achieved great success in solving problems in quantum physics \\cite{rem2019identifying,van2017learning,huembeli2018identifying,rodriguez2019identifying,lian2019machine,lu2018separability,harney2020entanglement,torlai2018neural,ahmed2020quantum,magesan2015machine,khanahmadi2020time,cimini2020neural}, such as the identification of quantum phase transitions \\cite{rem2019identifying,van2017learning,huembeli2018identifying}, the classification of quantum topological phases and quantum entanglement \\cite{rodriguez2019identifying,lian2019machine,lu2018separability,harney2020entanglement}, and quantum state measurement and tomography \\cite{magesan2015machine,torlai2018neural,ahmed2020quantum}. Recently, machine learning has also brought advances in estimating Hamiltonians. Ref. \\cite{xin2019local} presents a deep neural network to recover 2-local Hamiltonians from merely 2-local measurements of ground states. Ref. \\cite{cnnran} proposes that convolutional neural networks can also be used to predict the physical parameters of Hamiltonians from the ground states. However, these methods usually require access to the ground states. \n\nIn this work, we propose a machine learning method based on a Recurrent Neural Network (RNN) to estimate the parameters of Hamiltonians from single-qubit Pauli measurements on each qubit. 
In our method, the initial state is not required to be a ground state of the target Hamiltonian, and only single-qubit Pauli observables are measured at discrete times, forming the temporal records of single-qubit measurements which are fed into the RNN. The intuition behind this method is that if the Hamiltonians are identifiable from the temporal records of single-qubit measurements, then there exists an underlying rule mapping the single-qubit measurements to the target Hamiltonians, and this rule can be learned directly from the measurements via data-driven machine learning, although it may have a complicated or even unknown functional form. Our paper is organized as follows. In Sec. \\ref{sec2}, we first describe our framework for estimating Hamiltonians via RNN and then test our methods on different types of time-independent and time-dependent Hamiltonians with up to 7 qubits. The robustness against measurement noise and decoherence is studied next. In Sec. \\ref{sec3}, we discuss in detail the measurement resources required in practical applications, followed by our conclusions and outlooks. The detailed techniques of our method are described in Sec. \\ref{sec4}.\n\n\n\\section{Results}\\label{sec2}\n\\subsection{Learning Hamiltonians via the RNN}\nWe first describe the dynamics of single-qubit observables under the target Hamiltonians. Here, we consider that a quantum system with $N$ qubits starts from an initial state $\\ket{\\Psi_0}$ and undergoes a dynamical process governed by the unknown Hamiltonian $\\mathcal{H}$. $\\mathcal{H}$ is parameterized as\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{H}=\\sum_{m=1}^M a_m B_m,\n\\end{aligned}\n\\end{equation}\nwhere $B_m$ is a tensor product of the Pauli matrices $I, \\sigma_x, \\sigma_y,$ and $\\sigma_z$, and $a_m$ is the corresponding Hamiltonian parameter. 
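As a concrete illustration of this parameterization, the sketch below builds a Hamiltonian H = sum_m a_m B_m from Pauli-string labels. This is our own NumPy helper, not the paper's code; the label-based interface (one letter per qubit, e.g. "XXI") is an assumption for readability.

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices I, sigma_x, sigma_y, sigma_z
PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_string(label):
    # B_m: tensor product of Pauli matrices, one letter per qubit
    return reduce(np.kron, [PAULI[c] for c in label])

def hamiltonian(terms):
    # H = sum_m a_m B_m, with terms given as {label: coefficient a_m}
    n = len(next(iter(terms)))
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for label, a in terms.items():
        H = H + a * pauli_string(label)
    return H

# A 3-qubit example: a z-field on qubit 1 plus an XY coupling on qubits 1-2
H = hamiltonian({"ZII": 0.5, "XXI": 0.3, "YYI": 0.3})
```

Since every B_m is Hermitian and real coefficients a_m are used, the resulting matrix is Hermitian by construction.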
For a single-qubit Pauli operator $P\\in S_P=\\{\\sigma_k^{(i)}|k=x, y, z, 1\\leqslant i\\leqslant N\\}$, its expectation value is $\\overline{P}(t)=\\bra{\\Psi_0}P(t)\\ket{\\Psi_0}$ with $P(t)=e^{i\\mathcal{H}t}Pe^{-i\\mathcal{H}t}$. Here, $P(t)=\\sum_{n=0}^{\\infty}\\frac{i^nt^n}{n!}P_n$, where $P_0=P$ and $P_n=\\sum_{m=1}^M a_m [B_m, P_{n-1}]$. Hence, if the parameter $a_m$ participates in the dynamics of the single-qubit observables, it is possible to learn the Hamiltonian parameters from the temporal records of their expectation values. In this work, we consider Hamiltonians that are identifiable under single-qubit measurements and the initial state $\\ket{\\Psi_0}$. Most common Hamiltonians belong to this category. Next, we describe our machine learning method for estimating the Hamiltonians from single-qubit Pauli measurements.\n\n\n\n\nAs illustrated in Fig. \\ref{pro}, an $N$-qubit system starts from the initial state $\\ket{\\Psi_0}=\\prod_{i=1}^N \\otimes \\ket{\\psi^i_0}$. Here, $\\ket{\\psi^i_0}=R_z(\\pi\/4)R_y(\\pi\/4)\\ket{0}$ can be prepared from the state $\\ket{0}$ using the rotation operations $R_z(\\pi\/4)$ and $R_y(\\pi\/4)$. This choice of initial state ensures that the dynamics of the single-qubit observables have nontrivial initial values. During the dynamical evolution $e^{-i\\mathcal{H}t}$, the expectation values of the single-qubit operators $\\sigma_x^{(i)}$, $\\sigma_y^{(i)}$, and $\\sigma_z^{(i)}$ are measured at discrete times with time interval $\\tau$. The total number of sampling points is denoted by $S$, so the total sampling time is $T=S\\tau$. The temporal records of single-qubit measurements are collected as a vector,\n\\begin{equation}\n\\begin{aligned}\n\\textbf{I}=\\{O^{(i)}_k(s\\tau) | O^{(i)}_k(s\\tau)=\\text{Tr}(\\rho(s\\tau)\\cdot \\sigma^{(i)}_k), 1\\leqslant s \\leqslant S, k = x, y, z,1\\leqslant i \\leqslant N\\},\n\\end{aligned}\n\\end{equation}\nwhere $\\rho(s\\tau)$ is the density matrix of the system at the moment $s\\tau$. 
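The data-generation procedure just described (prepare each qubit in R_z(pi/4) R_y(pi/4)|0>, evolve under H, record the single-qubit expectation values at times s*tau) can be sketched as follows. This is our own NumPy illustration, not the paper's code; the function names and the eigendecomposition-based propagator are our assumptions.

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, i, n):
    # Single-qubit operator acting on site i of an n-qubit register
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

def temporal_records(H, n, tau, S):
    # psi_0^i = R_z(pi/4) R_y(pi/4) |0>, with R_k(t) = exp(-i t sigma_k / 2)
    t = np.pi / 4
    ry = np.array([[np.cos(t / 2), -np.sin(t / 2)],
                   [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    rz = np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    psi = reduce(np.kron, [rz @ ry @ np.array([1, 0], dtype=complex)] * n)
    # Propagator over one sampling interval tau, via the eigendecomposition of H
    w, V = np.linalg.eigh(H)
    U = (V * np.exp(-1j * w * tau)) @ V.conj().T
    obs = [embed(P, i, n) for i in range(n) for P in (X, Y, Z)]
    records = []
    for _ in range(S):
        psi = U @ psi
        records.extend(np.real(psi.conj() @ (O @ psi)) for O in obs)
    return np.array(records)  # the vector I, of length 3 * n * S
```

For a trivial Hamiltonian (H = 0) the records simply repeat the nontrivial initial values of the chosen product state, which is exactly the property the initial state is designed to have.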
The parameters of the Hamiltonians are collected as a vector $\\textbf{H}=\\{a_m | 1\\leqslant m \\leqslant M\\}$. Then we can train a neural network framework consisting of recurrent and fully connected (FC) neural networks with generated training data $\\{\\textbf{I}, \\textbf{H}\\}$. After the training, we can predict the unknown Hamiltonian parameters $\\textbf{H}$ from the single-qubit measurements $\\textbf{I}$. As shown in Fig. \\ref{pro}, in our NN framework, we use a Long Short-Term Memory (LSTM) network, which is a type of RNN \\cite{hochreiter1997long}. Compared with traditional feed-forward neural networks, LSTM can learn the correlations in time sequences, and it has been widely applied to handwriting recognition and speech recognition in the classical field \\cite{sak2014long}, and to quantum control and quantum process tomography in the quantum field \\cite{banchi2018modelling,PhysRevX.10.011006}. This makes LSTM well suited to estimating the Hamiltonians from the temporal records. For this training, we define the input and output layers, the objective function, and the similarity function as follows.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=1\\linewidth]{plotnn.pdf}\n\\caption{\\textbf{Circuit diagram of our neural networks for learning the parameters of Hamiltonians from the temporal records of single-qubit measurements.} We first perform the dynamical evolution $e^{-i\\mathcal{H}t}$ from the initial state $\\ket{\\psi_0}$ (bottom left Bloch). At each moment $s\\tau$, we measure the expectation values of the single-qubit Pauli operators (middle Bloch), and they are collected as a vector $\\textbf{O}(s\\tau)$ fed into the $s$-th LSTM cell. Lastly, the combination of FC and LSTM for time-dependent parameters in Hamiltonians (path A) or an FC neural network for time-independent parameters in Hamiltonians (path B) follows the LSTM cells.}\n\\label{pro}\n\\end{figure}\n (i) The input and output layers. 
$\\textbf{I}$ and $\\textbf{H}$ are respectively used as the input and output layers of our NN framework. At the moment $s\\tau$, the expectation values of the single-qubit measurements are collected as a vector\n \\begin{equation}\n\\begin{aligned}\n\\textbf{O}(s\\tau)=\\{O^{(i)}_k(s\\tau)| k = x, y, z,~1\\leqslant i \\leqslant N\\}.\n\\end{aligned}\n\\end{equation}\n It is first fed into the $s$-th LSTM cell. Lastly, an FC neural network is applied before exporting the prediction $\\textbf{H}$. Hence, the number of required LSTM cells equals the number of sampling points $S$. \\\\\n (ii) The objective function. Our neural network is trained by minimizing the distance between the predicted outcome $\\textbf{H}^{\\text{pred}}$ and the true outcome $\\textbf{H}^{\\text{true}}$. Here, we use the Mean Square Error (MSE) between $\\textbf{H}^{\\text{pred}}$ and $\\textbf{H}^{\\text{true}}$ as the objective function. It is\n \\begin{equation}\n\\begin{aligned}\nL=\\frac{1}{M}\\sum_{m=1}^M(\\textbf{H}^{\\text{true}}_m-\\textbf{H}^{\\text{pred}}_m)^2.\n\\label{loss}\n\\end{aligned}\n\\end{equation}\n This objective captures both the magnitude and the sign of the parameters, because $L$ decreases to 0 only when $\\textbf{H}^{\\text{true}}_m$ and $\\textbf{H}^{\\text{pred}}_m$ are exactly the same. To minimize the objective function in this work, we use the Adam optimization algorithm, one of the state-of-the-art gradient descent algorithms, to train the hidden parameters of the network. \\\\\n (iii) The similarity function. To estimate the performance of our trained NN, we need to compute the similarity between the predicted and the real outcomes for the test data. Here, we use the cosine proximity function between two vectors. 
It is\n \\begin{equation}\n\\begin{aligned}\nF(\\textbf{H}^{\\text{pred}}, \\textbf{H})=\\frac{(\\textbf{H}^{\\text{pred}}\\cdot\\textbf{H})}{(||\\textbf{H}^{\\text{pred}}|| \\cdot ||\\textbf{H}||)}.\n\\end{aligned}\n\\end{equation}\nHere, $F \\in [-1, 1]$. As shown in Fig. \\ref{pro}, the structures for exporting $\\textbf{H}$ are different for time-dependent and time-independent Hamiltonians. For time-dependent parameters in Hamiltonians, $\\emph{f}_S$ is imported to a composite neural network including LSTM and FC neural networks (path A). Repetitive LSTM cells decode the vector $\\emph{f}_S$ and FC neural networks project the output of each cell to a series of time-dependent Hamiltonian parameters. For time-independent parameters in Hamiltonians, an FC neural network directly follows the LSTM cells. Here, the FC neural networks do not have hidden layers. More details about the structure of LSTM can be found in Methods \\ref{sec4}. Next, we will train neural networks to learn the parameters of different types of Hamiltonians.\n\n\n\\subsection{Applications}\\label{DFE}\n\n{\\it{Ising Hamiltonian 1}}-. As a demonstration of applications, we first train an RNN framework for estimating the parameters of 7-qubit Hamiltonians with the nearest-neighbor XY interactions placed in a static magnetic field around $z$ axis as follows,\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{H}^7_{\\text{XYZ}}=\\sum^7_{i=1}a^{(i)}_z \\sigma_z^{(i)}+\\sum_{j=1}^6 J^{(j)}(\\sigma_x^{(j)}\\sigma_x^{(j+1)}+\\sigma_y^{(j)}\\sigma_y^{(j+1)}).\n\\label{xyz}\n\\end{aligned}\n\\end{equation}\n$a^{(i)}_z$ and $J^{(j)}$ are the parameter of magnetic field on $j$-th qubit and the coupling value between the nearest-neighbor qubits, respectively. Suppose that $a^{(i)}_z\\in [-J_0, J_0]$ and $J^{(j)}\\in [-J_0, J_0]$. $J_0$ is a global factor which is set to 1 in our training. 
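For reference, the MSE objective and the cosine proximity defined above amount to a few lines of NumPy. The sketch below is our own illustration, not the paper's training code.

```python
import numpy as np

def mse_loss(h_true, h_pred):
    # L = (1/M) * sum_m (H^true_m - H^pred_m)^2
    h_true = np.asarray(h_true, dtype=float)
    h_pred = np.asarray(h_pred, dtype=float)
    return float(np.mean((h_true - h_pred) ** 2))

def cosine_similarity(h_pred, h_true):
    # F = (H^pred . H) / (||H^pred|| * ||H||), ranging over [-1, 1]
    h_pred = np.asarray(h_pred, dtype=float)
    h_true = np.asarray(h_true, dtype=float)
    return float(h_pred @ h_true / (np.linalg.norm(h_pred) * np.linalg.norm(h_true)))
```

Note that the cosine proximity is scale-invariant, which is why the MSE, rather than the similarity itself, is used as the training objective: only the MSE forces the predicted parameters to match in magnitude as well as direction.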
The system evolves under the Hamiltonian $\\mathcal{H}^7_{\\text{XYZ}}$ starting from the initial state $\\prod_{i=1}^7 \\otimes \\ket{\\psi_0}$, and the expectation values of single-qubit observables $\\sigma_x^{(i)}$, $\\sigma_y^{(i)}$, and $\\sigma_z^{(i)}$ are measured at a discrete-time separated by $\\tau=0.02\\pi\/J_0$ with $S=25$ sampling points. The reason for choosing such a time interval can be found in Sec. \\ref{sec3}. \n\nWe collect the Hamiltonian parameters as a vector $\\textbf{H}=\\{a^{(i)}_z, J^{(j)} | 1\\leqslant i \\leqslant 7, 1\\leqslant j \\leqslant 6\\}$ and the measured values as a vector $\\textbf{I}=\\{O^{(i)}_k(s\\tau) | O^{(i)}_k(s\\tau)=\\text{Tr}(\\rho(s\\tau)\\cdot \\sigma^{(i)}_k), 1\\leqslant s \\leqslant 25, k = x, y, z, \\text{and} ~1\\leqslant i \\leqslant 7\\}$, and then we randomly generate 100,000 training data $\\{\\textbf{I}, \\textbf{H}\\}$ fed into the neural networks. The test data consists of 5,000 pairs of Hamiltonians $\\textbf{H}$ and the corresponding single-qubit measurements $\\textbf{I}$. Our RNN is trained by minimizing the distance between the actual and predicted outcomes in Eq. (\\ref{loss}). After finishing the training of RNN on the training data, our RNN has the ability to estimate the unknown parameters of 7-qubit Hamiltonians $\\mathcal{H}^7_{\\text{XYZ}}$ from single-qubit measurements with high accuracy. We compute the similarity $F_{\\text{test}}$ between the actual parameters $\\textbf{H}^{\\text{test}}$ and the predicted outcome $\\textbf{H}^{\\text{pred}}$ for 5,000 test data. The averaged similarity on the whole test data is over 0.99 and $F_{\\text{test}}$ as a function of epochs is also presented in Fig. \\ref{7qubit}(a). 
Figure \\ref{7qubit}(a) also gives the comparison between the actual value $J^{(1)}_{\\text{test}}$ and the prediction $J^{(1)}_{\\text{pred}}$ for 100 randomly test data at the beginning and end.\n\n\\begin{figure*}[htp]\n\\centering\n\\includegraphics[width=1\\linewidth]{plot7qubit.pdf}\n\\caption{\\textbf{Trained results for 7-qubit Ising Hamiltonian 1 (a) and 6-qubit Ising Hamiltonian 2 (b).} The top right corners of panels respectively present their qubit configurations. The orange and cyan lines show the objective functions $L_{\\text{train}}$ and $L_{\\text{test}}$ as a function of epochs. The similarity $F_{\\text{test}}$ between the predicted $\\textbf{H}^{\\text{pred}}$ and the true $\\textbf{H}^{\\text{true}}$ in the test data is also presented with the increase of epochs (middle subfigures). At the beginning and end of the training, we randomly choose 100 samples and plot the comparison between the predicted and actual values for the parameters $J^{(1)}$ (left and right subfigures).}\n\\label{7qubit}\n\\end{figure*}\n\n{\\it{Ising Hamiltonian 2}}-. Besides, our RNN framework can also be applied to more general Hamiltonian models. Here, we use our RNN to learn the parameters of 6-qubit Ising Hamiltonians with the nearest-neighbor interactions in three directions. The Hamiltonian of this 6-qubit system can be written as,\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{H}=\\sum_{i=1}^6 a^{(i)}_z\\sigma_z^{(i)}+\\sum_{i=1}^{5}(J^{(i)}_x\\sigma_x^{(i)}\\sigma_x^{(i+1)}+J^{(i)}_y\\sigma_y^{(i)}\\sigma_y^{(i+1)}+J^{(i)}_z\\sigma_z^{(i)}\\sigma_z^{(i+1)})\n\\end{aligned}\n\\end{equation}\nSimilarly, single-qubit observables $\\sigma_x^{(i)}$, $\\sigma_y^{(i)}$, and $\\sigma_z^{(i)}$ also are measured at a discrete-time separated by $\\tau=0.02\\pi\/J_0$ and the number of sampling points is $S=75$. We randomly generate 200,000 pairs of such Hamiltonians and the corresponding single-qubit measurements as the training data. 
After learning on these training data, the RNN can predict the outcome of the test data. For 5,000 randomly generated test samples, the average accuracy of the predictions is around 0.98. More details about the results can be found in Fig. \\ref{7qubit}(b).\n\n{\\it{Time-dependent Hamiltonians}}-. Most existing methods are designed for time-independent Hamiltonians and are not directly applicable to time-dependent ones. Our RNN method presented above can also be used to estimate the parameters of time-dependent Hamiltonians. As a numerical demonstration, we consider a 3-qubit system with the nearest-neighbor XY interactions placed in a time-dependent magnetic field around the $z$ axis. The neural network used is presented in Fig. \\ref{pro}. The corresponding Hamiltonian is\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{H}^3_{\\text{XYZ}}(t)=\\sum^3_{i=1}a^{(i)}_z(t) \\sigma_z^{(i)}+\\sum_{j=1}^2 J^{(j)}(\\sigma_x^{(j)}\\sigma_x^{(j+1)}+\\sigma_y^{(j)}\\sigma_y^{(j+1)}).\n\\end{aligned}\n\\end{equation}\nWe assume that $a^{(i)}_z(t)$ is a random combination of $W$ Fourier series, $a^{(i)}_z(t)=\\frac{1}{W}\\sum^W_{w=1}F_w\\cos(\\nu_w t+\\phi_w)$, and that $J^{(j)}\\in[-J_0, J_0]$ is static in time. $F_w\\in[-J_0, J_0]$, $\\nu_w\\in[-J_0, J_0]$, and $\\phi_w\\in[0, 2\\pi]$ are the amplitude, frequency, and phase of the $w$-th series, respectively. In this case, we set $W=10$. The parameters of $\\mathcal{H}^3_{\\text{XYZ}}(t)$ are collected as a vector $\\textbf{H}=\\{a^{(i)}_z(s\\tau), J^{(j)}|1\\leqslant s \\leqslant 300, 1\\leqslant i \\leqslant 3, 1\\leqslant j \\leqslant 2\\}$. The expectation values of the single-qubit observables are also measured at discrete times separated by $\\tau=0.02\\pi\/J_0$, and they are collected as a vector $\\textbf{I}=\\{O^{(i)}_k(s\\tau) | O^{(i)}_k(s\\tau)=\\text{Tr}(\\rho(s\\tau)\\cdot \\sigma^{(i)}_k), 1\\leqslant s \\leqslant 300, k = x, y, z, \\text{and} ~1\\leqslant i \\leqslant 3\\}$. 
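The random time-dependent field a_z(t) = (1/W) sum_w F_w cos(nu_w t + phi_w) described above can be sampled as in the following sketch. This is our own Python illustration; the generator interface and names are assumptions, not the paper's code.

```python
import numpy as np

def random_fourier_field(W=10, J0=1.0, rng=None):
    # a_z(t) = (1/W) * sum_w F_w * cos(nu_w * t + phi_w),
    # with F_w, nu_w drawn from [-J0, J0] and phi_w from [0, 2*pi)
    rng = np.random.default_rng() if rng is None else rng
    F = rng.uniform(-J0, J0, W)
    nu = rng.uniform(-J0, J0, W)
    phi = rng.uniform(0.0, 2.0 * np.pi, W)
    def a_z(t):
        return float(np.mean(F * np.cos(nu * np.asarray(t) + phi)))
    return a_z

a_z = random_fourier_field(rng=np.random.default_rng(0))
values = [a_z(0.02 * np.pi * s) for s in range(300)]  # sampled at s * tau
```

By construction the field is bounded, |a_z(t)| <= max_w |F_w| <= J0, so the time-dependent parameters live on the same scale as the static couplings.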
Our training data also consist of 100,000 randomly generated pairs of Hamiltonians $\\textbf{H}$ and the corresponding single-qubit measurements $\\textbf{I}$. After training the RNN to convergence on these training data, it can be used to learn the temporal behavior of $a^{(i)}_z(t)$ from only the measurements $\\textbf{I}$. Figure \\ref{plottime} presents the temporal behavior of the predicted values (solid lines) and its comparison with the actual values (dotted lines) for the time-dependent parameters $a^{(i)}_z(t)$. It shows that good agreement between the predicted and real results has been achieved.\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{plottime.pdf}\n\\caption{\\textbf{The temporal curves of the actual parameters (dotted lines) and the values learned by the RNN (solid lines) for the time-dependent parameters $a^{(i)}_z(t)$.} The predictions of the time-independent parameters are $J^{(1)}_{\\text{pred}}=0.0464$ ($J^{(1)}_{\\text{true}}=0.0326$) and $J^{(2)}_{\\text{pred}}=-0.0345$ ($J^{(2)}_{\\text{true}}=-0.0181$).}\n\\label{plottime}\n\\end{figure}\n\n\n\\subsection{Robustness against the noise}\\label{sec:robust}\n\nThe temporal records of single-qubit measurements are inevitably influenced by statistical and environmental noise, and this noise may cause the RNN predictions to deviate from the ideal values. Here, we further study the robustness of our RNN framework in learning unknown Hamiltonians under Gaussian noise and the decoherence effect. The following simulations are performed for a 3-qubit system with the Ising Hamiltonian $\\mathcal{H}^3_{\\text{XYZ}}$. 
$\\mathcal{H}^3_{\\text{XYZ}}$ is\n\\begin{equation}\n\\begin{aligned}\n\\mathcal{H}^3_{\\text{XYZ}}=\\sum^3_{i=1}a^{(i)}_z \\sigma_z^{(i)}+\\sum_{j=1}^2 J^{(j)}(\\sigma_x^{(j)}\\sigma_x^{(j+1)}+\\sigma_y^{(j)}\\sigma_y^{(j+1)}).\n\\end{aligned}\n\\label{3q}\n\\end{equation}\nThe unknown parameters in $\\mathcal{H}^3_{\\text{XYZ}}$ form a vector $\\textbf{H}=[a^{(1)}_z, a^{(2)}_z, a^{(3)}_z, J^{(1)}, J^{(2)}]^T$ as the output of RNN. The expectation values of single-qubit observables $\\sigma_x^{(i)}, \\sigma_y^{(i)}$, and $ \\sigma_z^{(i)}$ are measured at a discrete time separated by $\\tau=0.02\\pi\/J_0$, and they are collected as the input data $\\textbf{I}$ of RNN.\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{plotnoise.pdf}\n\\caption{\\textbf{The numerical simulated results for the robustness.} (a) The predicted accuracy of trained RNN models ($\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{25}}$, $\\texttt{\\text{RNN}\\_\\text{10noise}\\_\\text{50}}$, $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{25}}$, and $\\texttt{\\text{RNN}\\_\\text{10noise}\\_\\text{50}}$) under the influence of Gaussian noise. (b) The predicted accuracy of trained RNN models ($\\texttt{\\text{RNN}\\_\\text{T2noise}\\_\\text{150}}$ and $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{150}}$) under the influence of decoherence effect. The cyan shadow is that the sampling time is longer than coherence time. }\n\\label{plotnoise}\n\\end{figure}\n\n{\\it{Robustness against the Gaussian noise}}-. First, we train RNN frameworks by feeding 100,000 noiseless training data $\\{\\textbf{I}, \\textbf{H}\\}$ with the sampling points $S=25$ and $S=50$, respectively. Two trained RNN models $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{25}}$ and $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{50}}$ are obtained. Then we predict the Hamiltonian parameters by feeding noisy test data into these two RNN models. 
These noisy data are artificially generated by adding Gaussian noise to the data $\\textbf{I}$, i.e., $\\textbf{I}'=\\textbf{I}+\\mathcal{N}(0, \\epsilon)$. Here, $\\mathcal{N}(0, \\epsilon)$ is a Gaussian distribution with a mean of 0 and a standard deviation of $\\epsilon$. We change $\\epsilon$ from 2\\% to 10\\% in steps of 2\\% and create 5,000 noisy test data for each $\\epsilon$. Figure \\ref{plotnoise}(a) presents the average similarities between the predicted parameters $\\textbf{H}^{\\text{pred}}$ and the true parameters $\\textbf{H}^{\\text{true}}$ as a function of $\\epsilon$. $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{50}}$ performs better than $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{25}}$, but the predicted accuracy of both decreases as $\\epsilon$ increases. When $\\epsilon=0.1$, the accuracy of $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{25}}$ decreases to 0.98. To further improve the robustness of our RNN frameworks under noise, we adopt the following approach.\n\nSecond, we instead train the RNN frameworks by feeding 100,000 noisy training data. The training data are perturbed by Gaussian noise with a standard deviation of $\\epsilon=0.1$. Two RNN models $\\texttt{\\text{RNN}\\_\\text{10noise}\\_\\text{25}}$ and $\\texttt{\\text{RNN}\\_\\text{10noise}\\_\\text{50}}$ are trained to convergence. Similarly, we use these models on the noisy test data. $\\epsilon$ is again changed from 2\\% to 10\\% in steps of 2\\%, and 5,000 noisy test data are created for each $\\epsilon$. The average values of the predicted accuracy as a function of $\\epsilon$ are also presented in Fig. \\ref{plotnoise}(a). The models show good performance, with a similarity of over 0.99, and the predicted accuracy when $\\epsilon=0.1$ improves from the previous 0.98 to 0.995. 
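The noise-injection step $\\textbf{I}'=\\textbf{I}+\\mathcal{N}(0, \\epsilon)$ described above can be sketched as follows; the stand-in records and the number of noisy copies per $\\epsilon$ are illustrative (the paper uses 5,000 test data per $\\epsilon$).

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(records, eps, rng):
    """I' = I + N(0, eps): perturb every measurement sample independently."""
    return records + rng.normal(0.0, eps, size=records.shape)

# Stand-in measurement records (expectation values lie in [-1, 1])
I_clean = np.clip(rng.normal(0.0, 0.3, size=(25, 9)), -1.0, 1.0)

# eps swept from 2% to 10% in steps of 2%, 100 copies per eps for illustration
noisy_sets = {eps: [add_gaussian_noise(I_clean, eps, rng) for _ in range(100)]
              for eps in (0.02, 0.04, 0.06, 0.08, 0.10)}
```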
The above simulations show that training the RNN frameworks with noisy data greatly enhances the predicted accuracy, and that more sampling points bring better robustness against the noise. From the simulations, we can conclude that learning Hamiltonians via RNN is robust under Gaussian noise.\n\n{\\it{Robustness against the decoherence}}-. The total time for measuring the temporal records may reach or even exceed the coherence time of the experimental devices. Hence, the collected temporal records contain the decoherence effect, leading to a decrease in the predicted accuracy. We therefore also numerically study the performance of our RNN frameworks under the decoherence effect. The temporal records with the decoherence effect are created according to the Kraus representation of the decoherence dynamics. The evolution of the Hamiltonians is divided into slices, with the duration of each slice being $\\delta\\tau$. Supposing that the density matrix is $\\rho(t)$ at time $t$, the density matrix at $t+\\delta\\tau$ is\n\\begin{equation}\n\\begin{aligned}\n\\rho(t+\\delta\\tau)=\\sum_{i=1}^3\\sum_{j=0}^1 E^i_j e^{-i\\mathcal{H}\\delta\\tau} \\rho(t)e^{i\\mathcal{H}\\delta\\tau} E^{i\\dag}_j.\n\\end{aligned}\n\\end{equation}\nHere, $ E^i_j $ is the Kraus operator of the $i$-th qubit, with\n\\begin{equation}\n\\begin{aligned}\nE^i_0=\\sqrt{\\lambda_i}I_2, \\quad E^i_1= \\sqrt{1-\\lambda_i}\\sigma_z^i,\n \\end{aligned}\n\\end{equation}\nwhere $\\lambda_i=(1+e^{-\\delta \\tau\/T_2^i})\/2$ and $T_2^i$ is the decoherence time of the $i$-th qubit. We change $T_2^i$ from $1\\pi\/J_0$ to $6\\pi\/J_0$ in steps of $2\\pi\/J_0$. For each $T_2^i$, we create 5,000 decoherence test data with $S=150$ sampling points (the sampling interval is $0.02\\pi\/J_0$, and the corresponding total sampling time is $3\\pi\/J_0$). \n\nAs shown in Fig. 
\\ref{plotnoise}(b), when we feed these test data to the model $\\texttt{\\text{RNN}\\_\\text{0noise}\\_\\text{150}}$ to predict the Hamiltonian parameters $\\textbf{H}$, it is found that the accuracy of the predicted $\\textbf{H}$ falls rapidly as the coherence time decreases. To improve the robustness against the decoherence effect, we trained an RNN framework using 100,000 decoherence training data, named $\\texttt{\\text{RNN}\\_\\text{T2noise}\\_\\text{150}}$. Figure \\ref{plotnoise}(b) shows that the predicted accuracy improves significantly, with an average value of over 0.99, when $\\texttt{\\text{RNN}\\_\\text{T2noise}\\_\\text{150}}$ is used to process the decoherence test data.\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{plotd.pdf}\n\\caption{\\textbf{Numerical simulations in the discussion.} (a) The achieved accuracy with different sampling intervals $\\tau'$ and fixed sampling points $S=25$. (b) The achieved accuracy for different numbers of qubits $N$ and sampling points $S$. The simulations are performed for the Ising Hamiltonians in Eq. (\\ref{xyz}). The cyan line is drawn through the points with an accuracy of over 0.99. }\n\\label{plotN}\n\\end{figure}\n\n\\section{Discussion and Conclusion}\\label{sec3}\n\nWe briefly discuss the required measurement resources and the feasibility in practical experiments, including the sampling interval and the number of sampling points. First, single-qubit measurements are easy to implement on current quantum platforms \\cite{xin2020improved,keith2019single,bruzewicz2019trapped,irber2020robust,xin2018nuclear}, such as the dispersive readout on superconducting qubits and the ensemble measurements in nuclear magnetic resonance. Single-qubit measurements also have lower readout errors than multi-qubit measurements \\cite{gambetta2007protocols,nachman2020unfolding}. 
Second, the sampling interval $\\tau$ involves a trade-off that accounts for the coherence time. On the one hand, the total sampling time may exceed the coherence time of the qubits if $\\tau$ is too large, leading to a decrease in the prediction accuracy. On the other hand, the temporal records of single-qubit measurements may be hard to distinguish if $\\tau$ is too small, also leading to a decrease in the prediction accuracy. As shown in Fig. \\ref{plotN}(a), we change the sampling interval $\\tau'$ from $0.01\\tau$ to $0.09\\tau$ in steps of $0.02\\tau$ ($\\tau=0.02\\pi\/J_0$) and fix the sampling points at $S=25$. Then we train our RNN models with 100,000 training data for each $\\tau'$ and test their performance with 5,000 test data. The considered Hamiltonian is described in Eq. (\\ref{3q}). The result shows that the RNN model cannot be trained to a high accuracy if $\\tau'$ is too small. \n\nThird, the total number of sampling points is $3NS$, where the factor 3 is the number of observables $\\{\\sigma_x^{(i)}, \\sigma_y^{(i)}, \\sigma_z^{(i)}\\}$, $N$ is the number of qubits, and $S$ is the number of sampling points per observable. Here, we numerically study how $S$ increases with the size of the system in our method. In our simulation, we consider the Ising Hamiltonians in Eq. (\\ref{xyz}), in which the number of qubits is changed from 2 to 6, and we train the neural networks with 100,000 randomly generated training data for each given $N$ and $S$. Then, we test the average accuracy of the trained neural networks using 5,000 test data. Figure \\ref{plotN}(b) presents the achieved accuracy as a function of $N$ and $S$. The simulated results show that $S$ increases gently with the size of the system for this type of Hamiltonian. This may be understood as follows. 
As long as the Hamiltonian is identifiable under the chosen initial states and single-qubit observables, it is possible to learn its parameters from the temporal records with finitely many sampling points. For instance, many-body Hamiltonians have polynomially many parameters, so polynomially many sampling points may be enough to estimate the parameters of many-body Hamiltonians with a machine learning method. \n\nIn summary, we conclude that a composite neural network can be trained to learn Hamiltonians from single-qubit measurements, and numerical simulations of up to 7 qubits have demonstrated its feasibility on time-independent and time-dependent Hamiltonians. Compared with existing methods, this neural network method does not need to prepare the eigenstates of the target Hamiltonians, and it can learn all the information of the Hamiltonians, including the magnitude and sign of the parameters. Once the neural network is successfully trained, it can be directly used to predict the parameters of unknown Hamiltonians from the measured data without any post-processing. It is a `once for all' advantage. Besides, the initial states and single-qubit measurements in this method are easy to implement on current quantum platforms, and high accuracy can be achieved even under potential experimental noise, including Gaussian noise and the decoherence effect. This brings potential applications in Hamiltonian identification tasks in experiments. Our method also has possible extensions in the future, such as learning the environment information around the system and simulating the dynamics of closed and open systems.\n \n\n\\section{Methods}\\label{sec4}\n{\\it{Structure of LSTM}}-. The LSTM is a form of recurrent neural network designed to solve the long-term dependency problem. An LSTM consists of a chain of repeating neural network modules called LSTM cells. As shown in Fig. 
\\ref{lstm}(a), the $s$-th LSTM cell imports $\\text{O}(s\\tau)$, $f_{s-1}$, and $c_{s-1}$ and exports $f_{s}$ and $c_{s}$ for the next LSTM cell. Here, $\\text{O}(s\\tau)$ and $f_{s-1}$ are first combined by an FC neural network whose structure is shown in Fig. \\ref{lstm}(b); in our training, this layer includes 256 neurons. Then different activation functions $\\sigma$ and $\\text{tanh}$ are applied, and finally the operations $\\oplus$ and $\\otimes$ are implemented before exporting $f_{s}$ and $c_{s}$. Next, we introduce the detailed operations in the LSTM cell.\n\nAs shown in Fig. \\ref{lstm}, the long-term memory of the LSTM is called the cell state $c_s$, which stores information learned by flowing through the entire chain. To update the cell state, the cell has two layers, called the \"forget gate\" and the \"input gate\", which remove or add information to the cell state. The cell also has the ability to output information from the cell state through the \"output gate\". Thus, these three gates control the cell state and construct an LSTM cell. At the beginning, the cell uses the forget gate $G$ to decide what past information to remove from the cell. The input of the current moment $o(s)$ and the output of the last moment $f_{s-1}$ go through the forget gate $G$ as follows: \n\\begin{equation}\nG = \\sigma (W_g\\cdot [f_{s-1},o(s)]^T+b_g),\n\\end{equation}\nwhere $\\sigma (x)= 1\/(1+e^{-x})$ is the sigmoid function. Then, it uses the input gate $I$ to decide what new information to add to the cell state as follows: $I = \\sigma (W_i\\cdot [f_{s-1}, o(s)]^T+b_i)$. In addition, $o(s)$ and $f_{s-1}$ go through a tanh layer to create a candidate cell state $E$ as follows: \n\\begin{equation}\nE = \\text{tanh}(W_e\\cdot[f_{s-1}, o(s)]^T+b_e) .\n\\end{equation}\nThe next step is to update the cell state using the forget gate $G$ and the input gate $I$ as follows: $c_s = G\\times c_{s-1} + I \\times E $. In the end, the output gate decides what information to select as the output and generates the output. 
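Together with the output-gate equations that complete the cell below, the gate updates can be condensed into a minimal NumPy sketch; the dimensions and random weights are illustrative assumptions (the paper's combining layer uses 256 neurons).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(o_s, f_prev, c_prev, W, b):
    """One LSTM cell update following the gate equations in the text:
    forget gate G, input gate I, candidate E, cell state c_s, output gate D."""
    x = np.concatenate([f_prev, o_s])         # [f_{s-1}, o(s)]
    G = sigmoid(W["g"] @ x + b["g"])          # forget gate
    I = sigmoid(W["i"] @ x + b["i"])          # input gate
    E = np.tanh(W["e"] @ x + b["e"])          # candidate cell state
    c_s = G * c_prev + I * E                  # cell-state update
    D = sigmoid(W["d"] @ x + b["d"])          # output gate
    f_s = D * np.tanh(c_s)                    # exported hidden output
    return f_s, c_s

hidden, n_in = 4, 3                           # illustrative sizes
rng = np.random.default_rng(1)
W = {k: rng.normal(0.0, 0.1, size=(hidden, hidden + n_in)) for k in "gied"}
b = {k: np.zeros(hidden) for k in "gied"}
f, c = np.zeros(hidden), np.zeros(hidden)
for s in range(5):                            # run a short input sequence
    f, c = lstm_cell_step(rng.normal(size=n_in), f, c, W, b)
```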
The equations are given as: $D = \\sigma (W_d\\cdot[f_{s-1}, o(s)]^T+b_d)$ and $f_s = D\\times \\text{tanh}(c_s)$.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.7\\linewidth]{plotlstm.pdf}\n\\caption{\\textbf{The schematic diagram for LSTM (a)-(b).} The right plot presents the operation combining the input $f_{s-1}$ and $O(s\\tau)$ (labeled by red square) with one layer including 256 neurons. }\n\\label{lstm}\n\\end{figure}\n\n\n\n\n\\noindent {\\bf Data Availability.} The experimental data and the source code that support the findings of this study can be obtained from the corresponding authors by email.\n\n\\noindent {\\bf Competing Interests.} The authors declare that there are no competing interests.\n\n\\noindent {\\bf Author Contributions.} C. W. made the corresponding simulations and created the data for training the neural networks. L. C. trained the neural networks. T. X. supervised this project in this work. All the authors joined the discussions, and wrote and modified the manuscript. L. C. and C. W. 
contributed equally to this work.\n\n\n\n\\noindent {\\bf Funding.} This work is supported by the National Key Research and Development Program of China (2019YFA0308100), National Natural Science Foundation of China (12075110, 11975117, 11905099, 11875159 and U1801661), Guangdong Basic and Applied Basic Research Foundation (2019A1515011383), Guangdong International Collaboration Program (2020A0505100001), Guangdong Provincial Key Laboratory (2019B121203002), Science, Technology and Innovation Commission of Shenzhen Municipality (ZDSYS20170303165926217, KQTD20190929173815000, JCYJ20200109140803865, JCYJ20170412152620376 and JCYJ20180302174036418), and Pengcheng Scholars, Guangdong Innovative and Entrepreneurial Research Team Program (2019ZT08C044).\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOnline Voice Over IP (VoIP) meeting platforms such as Google Meet, Zoom, and Microsoft Teams allow users to join conferences through a regular phone via Audio Conferencing \\cite{ilag2020teams}. This option is frequently used by users who are either on the go, have a poor internet connection, or would prefer to join a meeting hands-free \\cite{ilag2022microsoft}. Given the convenience such an option offers, it is an indispensable functionality that motivates our focus on the Google Meet To Phone track of the VoIP DNS Challenge. \\footnote{\\url{https:\/\/github.com\/deepology\/VoIP-DNS-Challenge}} \n\nHowever, Audio Conferencing brings about the problem of transmission noise. Speech quality is degraded not only by the background noise present when taking the call but also by the transmission channel, whether it be through a telephone line or via wireless transmission \\cite{kaiser2018impact}. This is even more prevalent with mobile telephone transmission, where additional factors, including network congestion and packet loss, further degrade transmission quality \\cite{mcdougall2015telephone}. 
These factors result in information loss during communication \\cite{lawrence2008acoustic}.\n\nThus, it is important to mitigate information loss by introducing speech enhancement methods that reduce transmission noise. Existing deep learning approaches predominantly focus on removing background noise, and Google Meet itself has a \"Noise Cancellation\" feature that uses deep learning models to address this issue. However, speech enhancement that removes both transmission noise and background noise remains largely unexplored. Our aim is to explore speech enhancement models and improve the speech quality of audio from Google Meet sessions joined over phone calls. The code for all the experiments and ablations can be found at \\url{https:\/\/github.com\/hamzakhalidhk\/11785-project}.\n\n\\section{Background}\n\nWithin the current literature, datasets for speech enhancement tasks are often synthesized, a process in which noise is added to clean audio, because supervised optimization requires pairs of noisy inputs and clean targets.\n\\cite{reddy2020interspeech} introduced the INTERSPEECH 2020 Deep Noise Suppression Challenge (DNS) with a reproducible synthesis methodology: with clean speech from the Librivox corpus and noise from Freesound and AudioSet, it enables upwards of 500 hours of noisy speech to be created. The VoIP DNS Challenge provides 20 hours of Google Meet to Phone relay, including synthetic background noise at the source and real-world transmission noise at the receiver. \n\nOf the current state-of-the-art speech enhancement models on the 2020 DNS Challenge, this study focuses on Demucs and FullSubNet. Demucs operates on waveform inputs in the time domain; its real-time denoising architecture was proposed by Defossez et al. \\cite{defossez2019music}. FullSubNet operates on spectrogram inputs in the time-frequency domain and was developed by Hao et al. \\cite{hao2021fullsubnet}. 
Baseline Demucs achieves 1.73 PESQ and 0.86 STOI, while baseline FullSubNet achieves 1.69 PESQ and 0.84 STOI.\nAs neither baseline model accounts for transmission noise, our investigation focuses on fine-tuning and improving the acoustic fidelity, perceptual quality, and intelligibility for the VoIP DNS Challenge.\n\n\n\n\n\n\\section{Dataset}\n\\label{headings}\n\nOur dataset is built from audio clips of the open-sourced dataset\\footnote{\\url{https:\/\/github.com\/microsoft\/DNS-Challenge\/tree\/interspeech2020\/master\/datasets}} released for Microsoft's Deep Noise Suppression (DNS) Challenge. Specifically, we utilize the test set of synthetic clips without reverb from the DNS data, as described by Reddy et al. \\cite{reddy2020interspeech}. Our novel dataset contains these audio clips with the addition of transmission noise resulting from taking a Google Meet call through a cellular device. It is created as described in the following steps.\n\nFirst, clean speech and noise audio are randomly selected from the DNS dataset. These audio files are then mixed, resulting in noisy audio of clean speech with background noise. Next, to include the transmission noise of a Google Meet to Phone session, we start a Google Meet session and call into it using a phone on the T-Mobile network. We then play the noisy speech audio in Google Meet, and the audio relayed to the phone is recorded through an audio interface.\n\nThe final recorded audio captures both background noise and transmission noise of a Google Meet session. Furthermore, we note that we have recorded the audio with the Google Meet noise cancellation feature both turned on and turned off to compare our models with industry standards. We refer to the audio with Google Meet speech enhancement turned off as \\textit{low}, and the audio with speech enhancement turned on as \\textit{auto}. 
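The mixing step described above can be sketched as follows; the target SNR and the stand-in signals are assumptions for illustration, since the exact mixing levels are not restated here.

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Mix clean speech with a noise clip at a target SNR in dB (assumed value).

    The noise is looped/trimmed to the clean signal's length, then scaled so
    that 10*log10(P_clean / P_noise_scaled) equals snr_db.
    """
    noise = np.resize(noise, clean.shape)
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s stand-in tone
noise = rng.normal(0.0, 0.1, size=8000)                      # stand-in noise clip
noisy = mix_at_snr(clean, noise, snr_db=5)
```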
In our work, for each of the auto and low data sources, we utilize 400 audio clips (each thirty seconds long) as our training samples and 150 ten-second audio clips as our testing samples. We further apply gain normalization, which transforms the audio files to the same amplitude range to facilitate training. Figure~\\ref{fig:transmission_diagram} depicts the dataset synthesis process.\n\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{img\/final.png}\n \\caption{(a) Demucs architecture with the mixture waveform as input and the four source estimates\nas output. Arrows represent U-Net connections. (b) Detailed view of the layers $Decoder_i$ on the top\nand $Encoder_i$ on the bottom. Arrows represent connections to other parts of the model. (c) FullSubNet architecture. The second line in the rectangle describes the dimensions of the data at the current stage, e.g., \"1 (F)\" represents one F-dimensional vector. \"F (2N + 1)\" represents F independent (2N + 1)-dimensional vectors.}\n \\label{fig:arch}\n\\end{figure*}\n\n\n\\section{Models}\n\n\nWe select Demucs and FullSubNet (see Sections \\ref{appendix-demucs} and \\ref{appendix-fsn}) as baseline models in our work, as both models have yielded state-of-the-art results in the DNS Challenge \\cite{hao2021fullsubnet, defossez2020real}. Demucs consists of 2 LSTM layers between an encoder-decoder structure. FullSubNet is a fusion model that combines a full-band model capturing the global spectral context with a sub-band model encapsulating the local spectral pattern for single-channel real-time speech enhancement.\n\n\n\\subsection{Demucs}\n\\label{appendix-demucs}\nThe Demucs architecture is heavily inspired by the architectures of SING \\cite{defossez2018sing} and Wave-U-Net \\cite{stoller2018wave}.\nIt is composed of a convolutional encoder, an LSTM, and a convolutional decoder. 
The encoder and decoder are linked with skip U-Net connections. The input to the model is a stereo mixture $s = \\sum_i s_i$ and the output is stereo estimate $\\hat{s_i}$ for each source. Fig \\ref{fig:arch} (a) shows the architecture of the complete model.\n\n\\label{demuc_loss}\nDemucs' criterion minimizes the sum of the L1-norm, $\\mathcal{L}_1$, between waveforms and the multi-resolution STFT loss, $\\mathcal{L}_{STFT}$ of the magnitude spectrograms.\n\n\\begin{align*}\n \\mathcal{L}_{demucs} \n & = \\frac{1}{T} [||y-\\hat{y}||_1 + \\sum_{i=1}^{M}\\mathcal{L}_{STFT}^{(i)}(y,\\hat{y})] \\\\\n & \\equiv \\mathcal{L}_{L1} + \\mathcal{L}_{STFT}\n\\end{align*}\nWithout $\\mathcal{L}_{STFT}$, we observe tonal artifacts. We discuss ablating $\\mathcal{L}_{STFT}$ and more recent auxiliary losses in Section 5.2.\n\n\\subsection{FullSubNet}\n\\label{appendix-fsn}\nFullSubNet is a full-band and sub-band fusion model, each with a similar topology. This includes two stacked unidirectional LSTM layers and one linear (fully connected) layer. The only difference between the two is that, unlike the full-band model, the output layer of the sub-band model does not use any activation functions. Fig \\ref{fig:arch} (c) shows the complete model architecture.\n\nFullSubNet adopts the complex Ideal Ratio Mask ($cIRM$) as their model's learning target. They use a hyperbolic tangent to compress $cIRM$ in training and an inverse function to uncompress the mask in inference $(K = 10, C = 0.1)$. \n\n\n\\section{Methods}\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{img\/train.png}\n \\caption{The training workflow for Demucs and FullSubNet}\n \\label{fig:train1}\n\\end{figure}\n\n\n\\subsection{Baseline Method}\nOn the VoIP DNS Challenge, Google Meets To Phone track, we determine baseline performance using pre-trained Demucs and FullSubNet. Our work improves on this, ablating a variety of criteria. 
Table 1 and Table 2 of Section \\ref{eval-metrics} show these ablations. \n\n\\subsection{TAPLoss}\nWe introduce TAPLoss during training to outperform the state-of-the-art speech enhancement models on our data. TAPLoss involves a set of 25 temporal acoustic parameters, including frequency-related parameters: pitch, jitter, F1, F2, F3 frequency and bandwidth; energy- or amplitude-related parameters: shimmer, loudness, harmonics-to-noise ratio (HNR); spectral balance parameters: alpha ratio, Hammarberg index, spectral slope, F1, F2, F3 relative energy, harmonic difference; and additional temporal parameters: rate of loudness peaks, mean and standard deviation of the length of voiced\/unvoiced regions, and continuous voiced regions per second.\n\n\\subsubsection{Enhanced Demucs loss}\nPrevious works have shown that the Demucs model is prone to generating tonal artifacts. The multi-resolution STFT loss amplifies this issue because the error introduced by tonal artifacts is more significant and obvious in the time-frequency domain than in the time domain. Thus, we reduce the influence of $\\mathcal{L}_{STFT}$ in the Demucs loss and additionally introduce the TAP loss $\\mathcal{L}_{\\mathcal{T}\\mathcal{A}\\mathcal{P}}$. The new Demucs loss function is defined as: \n$$\\mathcal{L}_{Demucs} = \\mathcal{L}_{1} + \\lambda_1 \\cdot \\mathcal{L}_{\\mathcal{T}\\mathcal{A}\\mathcal{P}} + \\lambda_2 \\cdot \\mathcal{L}_{STFT}$$\n\n\\subsubsection{Enhanced FullSubNet loss}\n\nWe also extend the FullSubNet loss by introducing the $\\mathcal{L}_{\\mathcal{T}\\mathcal{A}\\mathcal{P}}$ loss. The new FullSubNet loss is defined as:\n \n$$\\mathcal{L}_{FullSubNet} = \\mathcal{L}_{cIRM} + \\gamma \\cdot \\mathcal{L}_{\\mathcal{T}\\mathcal{A}\\mathcal{P}}$$\n\n\\subsection{Ablations}\nWe perform ablations to find the optimal values for each of the hyperparameters $\\lambda_1$ and $\\lambda_2$ for Demucs, and $\\gamma$ for FullSubNet. 
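As a concrete sketch of the weighted criteria above, the following NumPy reimplementation combines a waveform L1 term, a multi-resolution STFT magnitude term, and a precomputed TAP term; the STFT resolutions and weights are illustrative assumptions, not the authors' implementation (the 25 temporal acoustic parameters are computed by TAPLoss, so the TAP term is passed in as a scalar here).

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    """Magnitude STFT via a Hann-windowed sliding DFT (minimal sketch)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def multires_stft_loss(y, y_hat, resolutions=((256, 64), (512, 128))):
    """L_STFT: spectral magnitude L1, summed over several assumed resolutions."""
    return sum(np.mean(np.abs(stft_mag(y, n, h) - stft_mag(y_hat, n, h)))
               for n, h in resolutions)

def demucs_style_loss(y, y_hat, tap_term, lam1, lam2):
    """L = L1(y, y_hat) + lam1 * L_TAP + lam2 * L_STFT."""
    l1 = np.mean(np.abs(y - y_hat))
    return l1 + lam1 * tap_term + lam2 * multires_stft_loss(y, y_hat)

rng = np.random.default_rng(0)
y = rng.normal(size=4096)                 # stand-in clean waveform
y_hat = y + 0.01 * rng.normal(size=4096)  # stand-in enhanced waveform
loss = demucs_style_loss(y, y_hat, tap_term=0.0, lam1=0.75, lam2=0.5)
```

A FullSubNet-style variant simply replaces the first two terms with the cIRM loss plus `gamma * tap_term`.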
These optimal values were determined by their performance on the Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI) metrics, which are defined in Table ~\\ref{table:metrics-table}. Table ~\\ref{table:demucs-ablations} and Table ~\\ref{table:fsn-ablations} show the summary of our ablations for Demucs and FullSubNet, respectively.\n\n\n\\subsection{Evaluation Metrics}\n\\label{eval-metrics}\nAfter determining the optimal hyperparameter values for both models, we further evaluate the respective models with both objective metrics and acoustic parameters.\n\n\\subsubsection{Objective Evaluation}\n In addition to PESQ and STOI, we further test the models on three more objective metrics: Log-Likelihood Ratio (LLR), Coherence and Speech Intelligibility Index (CSII), and Normalized-Covariance Measure (NCM). PESQ and LLR measure Speech Quality (SQ), while the STOI, CSII, and NCM measure Speech Intelligibility (SI). All four metrics aim to capture the human-judged quality of speech recordings, which is regarded as the gold standard for evaluating Speech Enhancement models \\cite{reddy2021dnsmos}. These metrics are defined in Appendix ~\\ref{appendix-metrics}.\n\n \n \n \n\n\n\\subsubsection{Acoustic Evaluation}\nIn addition to objective metrics, we also utilize the set of acoustic parameters, specifically the eGeMAPSv02 functional descriptors, presented by \\cite{eyben2015geneva} to evaluate the model further. This is a set of 88 frequency, energy\/amplitude, and spectral-related parameters. We use the OpenSMILE (Open-source Speech and Music Interpretation by Large-space Extraction)\\footnote{\\url{https:\/\/github.com\/audeering\/opensmile-python}} python package for these parameters. 
\n\n\\begin{table}\n \\caption{Demucs Ablations}\n \\label{table:demucs-ablations}\n \\centering\n \\begin{tabular}{lrrrrrr}\n \\toprule[1.5pt] \\toprule[1.5pt]\n \\multicolumn{7}{c}{Industry Speech Enhancement ON} \\\\ \n \\toprule[1.2pt]\n $\\lambda_1$ & 1.0 & 1.0 & 1.0 & 0.8 & 0.75 & 0.5 \\\\\n $\\lambda_2$ & 0.8 & 0.5 & 0.0 & 0.5 & 0.5 & 0.5 \\\\\n \\midrule\n STOI & 0.832 & 0.852 & 0.681 & 0.867 & \\textbf{0.882} & 0.870 \\\\\n PESQ & 1.658 & 1.844 & 1.488 & 1.780 & \\textbf{1.921} & 1.863 \\\\\n \\bottomrule[1.5pt]\n \\end{tabular}\n\\end{table}\n\n\\begin{table}\n \\caption{FullSubNet Ablations}\n \\label{table:fsn-ablations}\n \\centering\n \\begin{tabular}{lrrrrrr}\n \\toprule[1.5pt] \\toprule[1.5pt]\n \\multicolumn{7}{c}{Industry Speech Enhancement ON} \\\\ \n \\toprule[1.2pt]\n $\\gamma$ & 1.00 & 0.30 & 0.10 & 0.03 & 0.01 & 0.00 \\\\\n \\midrule\n STOI & \\textbf{0.864} & 0.863 & 0.860 & 0.861 & 0.858 & 0.861 \\\\\n PESQ & 1.843 & 1.829 & 1.805 & \\textbf{1.867} & 1.787 & 1.808 \\\\\n \\toprule[1.4pt] \\toprule[1.4pt]\n \\multicolumn{7}{c}{Industry Speech Enhancement OFF} \\\\ \n \\toprule[1.2pt]\n $\\gamma$ & 1.00 & 0.30 & 0.10 & 0.03 & 0.01 & 0.00 \\\\\n \\midrule\n STOI & 0.722 & 0.731 & 0.735 & \\textbf{0.740} & 0.739 & 0.739 \\\\\n PESQ & 1.427 & 1.437 & 1.470 & 1.572 & \\textbf{1.621} & 1.617 \\\\\n \\bottomrule[1.5pt]\n \\end{tabular}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[height=6cm]{img\/DEMUCS_IMPROV.png}\n \\caption{Acoustic improvements of the finetuned Demucs. The left side depicts the Demucs improvement over noisy in red versus the finetuned improvement over noisy in purple. The right side shows the relative improvement of our finetuned Demucs model over the baseline model}\n \\label{fig:demucs_improv}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[height=6cm]{img\/FSN_IMPROV.png}\n \\caption{Acoustic improvements of the finetuned FullSubNet Model. 
The left side shows FullSubNet improvement over noisy in red versus the finetuned improvement over noisy in purple. The right side shows the relative improvement of our finetuned FullSubNet model over the baseline model}\n \\label{fig:fsn_improv}\n\\end{figure}\n\n\n\n\n\n\n\\section{Results}\n\n\\subsection{Acoustic Improvement}\nTo quantify the improvement of the acoustic parameters mentioned in Section \\ref{eval-metrics}, we measure acoustic improvement as defined by \\cite{taploss}. First, we calculate the mean absolute error (MAE) across the time axis. The MAE between our novel dataset's noisy and clean audio files is denoted as $MAE_{N}$. The MAE between the baseline enhanced audio (output of FullSubNet or Demucs without finetuning) and the clean audio files is denoted as $MAE_B$. Lastly, the MAE between the enhanced audio from the finetuned model and the clean audio files is denoted as $MAE_F$. Then, improvement is defined as follows, where $I_B$ is the improvement of the baseline models and $I_F$ is the improvement of our fine-tuned models.\n\n\n\\begin{equation} \\label{eq:1}\nI_B= 1-\\frac{MAE_{B}}{MAE_{N}}\n\\end{equation}\n\\begin{equation} \\label{eq:2}\nI_F= 1-\\frac{MAE_{F}}{MAE_{N}}\n\\end{equation}\n\nFigures \\ref{fig:demucs_improv} and \\ref{fig:fsn_improv} compare the acoustic improvements of our finetuned Demucs and FullSubNet models with those of the baseline models. We find that our finetuned models improve on the baselines for more than 14 acoustic parameters.\n\n\n\\subsection{Objective Metric Results}\nOur results for the objective metrics are depicted in Table~\\ref{table:evaluation-table}. 
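The improvement measure in Eqs. (\\ref{eq:1}) and (\\ref{eq:2}) can be computed as follows; the parameter tracks here are illustrative numbers, not measurements from the paper.

```python
import numpy as np

def improvement(mae_enhanced, mae_noisy):
    """I = 1 - MAE_enhanced / MAE_noisy, per Eqs. (1)-(2)."""
    return 1.0 - mae_enhanced / mae_noisy

def mae(a, b):
    """Mean absolute error across the time axis."""
    return float(np.mean(np.abs(np.asarray(a) - np.asarray(b))))

# Illustrative acoustic-parameter tracks (hypothetical values):
clean = np.array([0.0, 0.1, 0.2, 0.3])
noisy = clean + 0.40
baseline = clean + 0.25
finetuned = clean + 0.18

I_B = improvement(mae(baseline, clean), mae(noisy, clean))   # baseline models
I_F = improvement(mae(finetuned, clean), mae(noisy, clean))  # finetuned models
```

A positive value means the enhanced audio tracks the clean acoustic parameters more closely than the noisy audio does, and `I_F > I_B` indicates the finetuned model improves on the baseline.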
We find that our finetuned models are able to outperform the baseline Demucs and FullSubNet models across all metrics.\n\n\\begin{table}\n \\caption{Results Table}\n \\label{table:evaluation-table}\n \\centering\n\\begin{tabular} {P{0.08 \\linewidth} P{0.12 \\linewidth} P{0.14 \\linewidth} P{0.16 \\linewidth} P{0.18 \\linewidth}}\n\\toprule[1.5pt] \\toprule[1.5pt]\n{} & \\multicolumn{2}{c}{Demucs} & \\multicolumn{2}{c}{FullSubNet} \\\\\n\\midrule[1pt]\n{} & Baseline & Finetuned & Baseline & Finetuned \\\\\n\\toprule[1.2pt]\n$PESQ$ & 1.727 & \\textbf{1.921} & 1.694 & \\textbf{1.822} \\\\\n$LLR$ & 1.432 & \\textbf{1.645} & 1.630 & \\textbf{1.833} \\\\\n$STOI$ & 0.864 & \\textbf{0.883} & 0.841 & \\textbf{0.860} \\\\\n$CSII_{high}$ & 0.665 & \\textbf{0.667} & 0.652 & \\textbf{0.653} \\\\\n$CSII_{mid}$ & 0.535 & \\textbf{0.572} & 0.514 & \\textbf{0.539} \\\\\n$CSII_{low}$ & 0.327 & \\textbf{0.333} & 0.299 & \\textbf{0.334} \\\\\n$NCM$ & 0.692 & \\textbf{0.756} & 0.650 & \\textbf{0.675} \\\\\n\\bottomrule[1.5pt]\n\\end{tabular}\n\\end{table}\n\n\\section{Conclusion} In this work, we surpass industry-standard performance and SOTA baseline architectures, identifying transmission noise as a missing component of current speech enhancement research. We achieve top performance on the VoIP DNS Challenge, improving both the transmission and background noise of audio recordings on the Google Meet To Phone track. We set a new benchmark for speech enhancement by evaluating the baseline Demucs and FullSubNet models on our novel dataset. Further, we demonstrate that introducing TAPLoss into the training process and finetuning these models can further improve performance. In the future, we aim to increase our training data from 400 samples to 1200 samples in order to achieve even better performance with our models. 
We believe that our work can find applications in the telecom industry and directly with mobile phone manufacturers.\n\n\n\\section{Acknowledgements}\n\nWe would like to thank Carnegie Mellon University, Professor Bhiksha Raj, and our mentors Joseph Konan and Ojas Bhargave for their staunch support, encouragement, and guidance throughout this project.\n\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}