{"text":"\\section{Introduction}\nVarious wireless relaying techniques have been extensively studied for decades due to their capability to extend the coverage and enhance the capacity of wireless networks \\cite{relay1,relay12,relay2,PNC}. In particular, two-way relaying based on physical-layer network coding (PNC) has attracted much research interest in the past decade \\cite{PNC,TWRC1,TWRC2,TWRC3, TWRC4}. In the two-way relay channel, two users exchange information via a single relay node. Compared with conventional one-way relaying, PNC potentially doubles the spectral efficiency by allowing a relay node to decode and forward message combinations rather than individual messages. Later, the idea of PNC was extended to support efficient communications over multiway relay channels (mRC) \\cite{2}, where multiple users exchange data with the help of a single relay. Efficient PNC design has been studied for various data exchange models, including pairwise data exchange \\cite{MIMO3,Xchannel}, full data exchange \\cite{MIMO3,Gao}, and clustered pairwise\/full data exchange \\cite{MIMO3,MIMO2, multirelay, yuan}. Multiple-input multiple-output (MIMO) techniques have also been incorporated into PNC-aided relay networks to achieve spatial multiplexing \\cite{MIMO1}.\n\nThe capacity of the MIMO mRC generally remains a challenging open problem \\cite{capacity1,capacity4}. Existing work \\cite{DOF1,DOF2,DOF3,3,32,33,review1,review2} has mostly focused on analyzing the degrees of freedom (DoF), which characterizes the capacity slope at high signal-to-noise ratio (SNR). Various signaling techniques have been developed to intelligently manipulate signals and interference based on the ideas of PNC and interference alignment \\cite{Jafar}. 
In particular, the authors in \\cite{3,32,33} studied the DoF of the MIMO Y channel, where three users exchange data in a pairwise manner with the help of a single relay. To derive the DoF of this model, a key difficulty is how to jointly optimize the linear processors, including the precoders at the user transmitters, the precoder at the relay, and the post-processors at the user receivers. This problem was elegantly solved in \\cite{32} by optimal design of the signal space seen at the relay, where the user precoders and post-processors are constructed by \\textit{pairwise signal alignment} and \\textit{uplink-downlink symmetry}, and the relay precoder by appropriate orthogonal projections. Similar ideas have also been used to derive the DoF of other multiway relay models \\cite{multirelay, capacity1}.\n\nIn the work on the MIMO mRC mentioned above, a major limitation is that a single relay node is employed to serve multiple user nodes simultaneously. This implies that the relay node is usually the performance bottleneck of the overall network \\cite{multirelay,yuan}. As such, some recent work began to explore the potential of deploying more relay nodes to enhance the network capacity. For instance, the authors in \\cite{Lee} derived an achievable DoF of the two-relay MIMO mRC in which two pairs of users exchange messages in a pairwise manner via two relays. Later, the work in \\cite{capacity2} improved the DoF result in \\cite{Lee} by using the techniques of pairwise signal alignment and uplink-downlink symmetric design. The extension to the case of more than two user pairs was also considered in \\cite{capacity2}. However, the DoF characterization of the multi-relay MIMO mRC is still at a very initial stage. The reason is twofold. First, for a multi-relay mRC, the relays are geographically separated and hence cannot jointly process their received signals. This implies that manipulating the relay signal space is far more difficult than in the single-relay case. 
Although some of the existing techniques for single-relay mRCs can be directly borrowed for signaling design in a multi-relay mRC, the efficiency of these techniques is no longer guaranteed. Second, for given user precoders and post-processors, the solvability problem for a MIMO mRC (with single or multiple relays) can be converted to a linear system with certain rank constraints. A substantial difference between the single-relay and multi-relay MIMO mRCs is that the linear system in the multi-relay case involves multiple matrix variables, and so solving the corresponding achievability problem is much more challenging. For example, the MIMO multipair two-way relay channel with two relay nodes was considered in \\cite{capacity2}. The achievability proof therein relies on some recent progress on the solvability of linear matrix systems, and is difficult to extend to the case with more than two relays or to other multi-relay mRCs. \n\nIn this paper, we analyze the DoF of the symmetric multi-relay MIMO Y channel, where three user nodes, each with $M$ antennas, communicate with each other via $K$ relay nodes, each with $N$ antennas. Compared with the MIMO Y channel in \\cite{3}, a critical difference is that our new model contains an arbitrary number of relays, rather than only a single relay. Following \\cite{capacity2}, we formulate a general DoF achievability problem for the multi-relay MIMO Y channel based on linear processing techniques, involving the design of user precoders, relay precoders, and user post-processors. The main contributions of this paper are as follows.\n\\begin{itemize}\n\\item\nIn contrast to the conventional uplink-downlink symmetric design, which is widely used in single-relay MIMO mRCs, we propose a new uplink-downlink asymmetric approach to solve the DoF achievability problem of the symmetric multi-relay MIMO Y channel. 
Specifically, in our approach, only the user precoders are designed based on signal space alignment; the user post-processors are designed directly for interference neutralization. Furthermore, we show that under certain conditions, the uplink-downlink asymmetry allows the relays to deactivate a portion of their receiving (but not transmitting) antennas to facilitate the signal space alignment at the relays. This implies that under certain conditions, some of the receiving antennas at the relays are redundant for achieving the derived DoF.\n\\item\nGiven the designed user precoders and post-processors, the original problem boils down to a linear system in the relay precoders with certain rank constraints. Due to the presence of multiple relays, the linear system involves multiple matrix variables. To tackle the solvability of this system, we establish a new technique for solving linear matrix equations with rank constraints. We emphasize that this technique can potentially be used to analyze the DoF of other multi-relay MIMO mRCs with various data exchange models, e.g., pairwise data exchange \\cite{Xchannel} and clustered full data exchange \\cite{MIMO2, multirelay, yuan}.\n\\item\nBased on the above new techniques, we derive an achievable DoF of the symmetric multi-relay MIMO Y channel with an arbitrary configuration of $\\left(M,N,K\\right)$. Our achievable DoF is considerably higher than that derived by the conventional uplink-downlink symmetric approach. Also, a DoF upper bound is presented by assuming full cooperation among the relays and treating the multiple relays together as a single large relay. We establish the optimality of our achievable DoF for $\\frac{M}{N} \\in \\Big[0,\\max\\left\\{\\frac{\\sqrt{3K}}{3},1\\right\\}\\Big) \\cup\\Big[\\frac{3K+\\sqrt{9K^2-12K}}{6},\\infty\\Big)$ by showing that the achieved DoF matches the upper bound.\n\\end{itemize}\n\n\\textit{Notation}: We use bold upper and lower case letters for matrices and column vectors, respectively. 
$\\mathbb{C}^{m \\times n}$ denotes the $m\\times n$ dimensional complex space. $\\mathbf{0}_{m\\times n}$ and $\\mathbf{I}_{n}$ represent the $m \\times n$ zero matrix and the $n$-dimensional identity matrix, respectively. For any matrix $\\mathbf{A}$, $\\mathrm{vec}(\\mathbf{A})$ denotes the vectorization of $\\mathbf{A}$ formed by stacking the columns of $\\mathbf{A}$ into a single column vector. Moreover, $\\otimes$ represents the Kronecker product operation. \n\n\\section{System Model}\n\\subsection{Channel Model}\nConsider a symmetric multi-relay MIMO Y channel as shown in Fig. \\ref{Fig:1}, where three user nodes, each equipped with $M$ antennas, exchange information with the help of $K$ relay nodes, each with $N$ antennas. Pairwise data exchange is employed, i.e., every user delivers two independent messages, one to each of the other two users. We assume that information delivery is half-duplex, i.e., nodes in the network cannot transmit and receive signals simultaneously in a single frequency band. Every round of data exchange consists of two phases, namely, the uplink phase and the downlink phase. The two phases have equal duration $T$, where $T$ is an integer representing the number of symbols within each phase interval.\n\n\\begin{figure}\n \\setlength{\\abovecaptionskip}{-0.1cm}\n \\centering\n \\includegraphics[trim={0cm 2cm 0cm 0cm}, width=8cm]{system_model.png}\n \\caption{The system model of a symmetric multi-relay MIMO Y channel. The communication protocol consists of two phases: uplink phase and downlink phase.}\\label{Fig:1}\n\\end{figure}\n\nIn the uplink phase, the users transmit signals to the relays simultaneously. 
The received signal at each relay is given by\n\\begin{equation}\n\\label{system1}\n\\mathbf{Y}_{\\mathrm{R},k} = \\sum_{j=0}^2\\mathbf{H}_{k,j}\\mathbf{X}_j +\\mathbf{Z}_{\\mathrm{R},k}, \\quad k= 0,1,\\cdots,K-1\n\\end{equation}\nwhere $\\mathbf{H}_{k,j}\\in \\mathbb{C}^{N\\times M}$ denotes the channel matrix from user $j$ to relay $k$; $\\mathbf{X}_j \\in \\mathbb{C}^{M \\times T}$ is the transmitted signal of user $j$; $\\mathbf{Y}_{\\mathrm{R},k} \\in \\mathbb{C}^{N \\times T}$ is the received signal at relay $k$; $\\mathbf{Z}_{\\mathrm{R},k} \\in \\mathbb{C}^{N \\times T}$ is the additive white Gaussian noise (AWGN) matrix at relay $k$, with the entries independently drawn from $\\mathcal{CN}(0, \\sigma_{\\mathrm{R},k}^2)$. Note that $\\sigma_{\\mathrm{R},k}^2$ is the noise power at relay $k$. The power constraint for user $j$ is $\\frac{1}{T}\\mathrm{tr}(\\mathbf{X}_{j}\\mathbf{X}^H_{j}) \\leq P_j$, where $P_j$ is the maximum transmission power allowed at user $j$.\n\nIn the downlink phase, the relays broadcast signals to the users. The received signal at each user is given by\n\\begin{equation}\n\\label{system2}\n\\mathbf{Y}_j = \\sum_{k=0}^{K-1}\\mathbf{G}_{j,k}\\mathbf{X}_{\\mathrm{R},k} + \\mathbf{Z}_j, \\quad j= 0,1,2,\n\\end{equation}\nwhere $\\mathbf{G}_{j,k} \\in \\mathbb{C}^{M\\times N}$ is the channel matrix from relay $k$ to user $j$; $\\mathbf{X}_{\\mathrm{R},k} \\in \\mathbb{C}^{N \\times T}$ is the transmitted signal from relay $k$; $\\mathbf{Y}_j \\in \\mathbb{C}^{M \\times T}$ is the received signal at user $j$; $\\mathbf{Z}_j \\in \\mathbb{C}^{M \\times T}$ is the AWGN matrix at user $j$, with entries independently drawn from $\\mathcal{CN}(0, \\sigma_j^2)$. Here, $\\sigma_{j}^2$ is the noise power at user $j$. 
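As a concrete numerical illustration of \\eqref{system1} and \\eqref{system2}, the following sketch simulates one round of the two phases with randomly drawn channels and signals. It is only an illustration: the dimensions are hypothetical placeholders, and precoding and power normalization are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K, T = 2, 3, 2, 4    # hypothetical antenna/relay/symbol counts
sigma = 0.1                # illustrative noise standard deviation

# Channels: H[k][j] is N x M (user j -> relay k); G[j][k] is M x N (relay k -> user j).
H = [[rng.standard_normal((N, M)) for j in range(3)] for k in range(K)]
G = [[rng.standard_normal((M, N)) for k in range(K)] for j in range(3)]

# Uplink phase: every relay receives the superposition of all three users plus noise.
X = [rng.standard_normal((M, T)) for j in range(3)]        # user signals
Y_R = [sum(H[k][j] @ X[j] for j in range(3))
       + sigma * rng.standard_normal((N, T)) for k in range(K)]

# Downlink phase: every user receives the superposition of all K relays plus noise.
X_R = [rng.standard_normal((N, T)) for k in range(K)]      # relay signals
Y = [sum(G[j][k] @ X_R[k] for k in range(K))
     + sigma * rng.standard_normal((M, T)) for j in range(3)]
```

Each $\\mathbf{Y}_{\\mathrm{R},k}$ then has size $N \\times T$ and each $\\mathbf{Y}_j$ has size $M \\times T$, matching the definitions above.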
The power constraint of relay $k$ is given by $\\frac{1}{T}\\mathrm{tr}\\left(\\mathbf{X}_{\\mathrm{R},k}\\mathbf{X}_{\\mathrm{R},k}^H\\right) \\leq P_{\\mathrm{R},k}$, where $P_{\\mathrm{R},k}$ is the power budget of relay $k$.\n\nThe entries of channel matrices $\\left\\{\\mathbf{H}_{k,j}\\right\\}$ and $\\left\\{\\mathbf{G}_{j,k}\\right\\}$ are drawn from a continuous distribution, implying that the channel matrices are of full column or row rank, whichever is smaller, with probability one. We assume that channel state information (CSI) is globally known at every node in the model, following the convention in \\cite{MIMO1,MIMO3,capacity1,Xchannel, Gao, multirelay, MIMO2, yuan}.\\footnote{To realize the scheme in this paper, global CSI is sufficient but not necessary for every node. Each node only needs to know its linear processor designed in this scheme. There are many ways to achieve this. For example, we can employ a central controller that collects global CSI, computes the linear processors of the nodes, and then transmits the linear processors to their corresponding nodes. This will reduce to some extent the system overhead of global CSI acquisition at every node, without compromising the DoF.} Moreover, for notational convenience, we interpret the user index by modulo 3, e.g., user 3 is the same as user 0.\n\n\\subsection{Degrees of Freedom}\nThe goal of this paper is to analyze the degrees of freedom of the symmetric multi-relay MIMO Y channel described above. For convenience of discussion, we assume the same power constraint at each node, i.e., $P_0 = P_1 = P_2 = P$ and $P_{\\mathrm{R},0} = P_{\\mathrm{R},1} = \\cdots = P_{\\mathrm{R},K-1} = P$, which will not compromise the generality of the DoF results derived in this paper. Let $m_{j,j'} \\in \\{1,2,\\cdots,2^{R_{j,j'}T}\\}$ be the message from user $j$ to $j'$, where $R_{j,j'}$ is the corresponding information rate, for $j,j' =0,1,2$ and $j \\not= j'$. 
Note that $R_{j,j'}$ is in general a function of power $P$, denoted by $R_{j,j'}(P)$. An information rate $R_{j,j'}(P)$ is said to be achievable if the error probability of decoding message $m_{j,j'}$ at receiver $j'$ approaches zero as $T \\rightarrow \\infty$. An achievable DoF of user $j$ to user $j'$ is defined as\n\\begin{equation}\nd_{j,j'} = \\lim\\limits_{P \\rightarrow \\infty}\\frac{R_{j,j'}(P)}{\\log(P)}.\n\\end{equation}\nIntuitively, $d_{j,j'}$ can be interpreted as the number of independent spatial data streams that user $j$ can reliably transmit to user $j'$ during each round of data exchange. An achievable total DoF of the symmetric multi-relay MIMO Y channel is defined as\n\\begin{equation}\n\\label{d_sum}\nd_\\mathrm{sum} = \\frac{1}{2}\\sum_{ \\substack{ 0 \\leq j,j' \\leq 2\\\\j\\not=j' \\\\}} d_{j,j'}.\n\\end{equation}\nNote that the factor $\\frac{1}{2}$ in \\eqref{d_sum} is due to half-duplex communication. The optimal total DoF of the considered model, denoted by $d^\\mathrm{opt}_\\mathrm{sum}$, is defined as the supremum of $d_\\mathrm{sum}$. In this paper, we assume a symmetric DoF setting with $d_{j,j'}=d$ for any $0\\leq j,j' \\leq 2$, $j \\not =j'$. Then the total achievable DoF can be represented by $d_\\mathrm{sum} = \\frac{6d}{2} = 3d$.\n\nWe now present a DoF upper bound by assuming full cooperation among the relays. Under this assumption, the system model in \\eqref{system1} and \\eqref{system2} reduces to a single-relay MIMO Y channel, with $KN$ antennas at the relay. Therefore, the optimal DoF of such a single-relay MIMO Y channel naturally serves as a DoF upper bound of the model considered in \\eqref{system1} and \\eqref{system2}. From \\cite{32}, this upper bound is given by\n\\begin{align}\n\\label{upperbound}\nd_\\mathrm{sum} \\leq \\min\\left\\{\\frac{3M}{2}, KN\\right\\}.\n\\end{align}\nNote that the optimal DoF in \\cite{32} is derived for full-duplex communication. 
Thus the upper bound in \\eqref{upperbound} is scaled by a factor of $\\frac{1}{2}$ due to the half-duplex loss.\n\n\\subsection{Linear Processing}\nIn this paper, an achievable DoF of the symmetric multi-relay MIMO Y channel is derived by linear processing techniques. The message $m_{j,j'}$, $j' \\not= j$, is encoded into $\\mathbf{S}_{j,j'}\\in \\mathbb{C}^{d \\times T}$, with $d$ independent spatial streams in $T$ channel uses. The transmitted signal of user $j$ is given by\n\\begin{equation}\n\\label{linear1}\n\\mathbf{X}_j = \\sum_{j'\\not=j}\\mathbf{U}_{j,j'}\\mathbf{S}_{j,j'},\\quad j = 0,1,2,\n\\end{equation}\nwhere $\\mathbf{U}_{j,j'}\\in \\mathbb{C}^{M\\times d}$ is the linear precoding matrix for $\\mathbf{S}_{j,j'}$. An amplify-and-forward scheme is employed at the relays. Specifically, the transmitted signal of each relay is represented by\n\\begin{equation}\n\\label{linear2}\n\\mathbf{X}_{\\mathrm{R},k} = \\mathbf{F}_k\\mathbf{Y}_{\\mathrm{R},k},\\quad k=0,1,\\cdots,K-1,\n\\end{equation}\nwhere $\\mathbf{F}_k \\in \\mathbb{C}^{N \\times N}$ is the precoding matrix of relay $k$.\n\nWith \\eqref{system1}, \\eqref{linear1}, and \\eqref{linear2}, we can express the received signal of user $j$ in \\eqref{system2} as\n\\begin{align}\n\\label{received signal}\n\\mathbf{Y}_j = &\\underbrace{\\sum_{k = 0}^{K-1}\\! \\sum_{j' \\not = j}\\!\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{H}_{k,j'}\\mathbf{U}_{j',j}\\mathbf{S}_{j',j}}_{\\text{desired signal}} \\! +\\! 
\\underbrace{\\sum_{k = 0}^{K-1}\\sum_{j' \\not = j}\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{H}_{k,j}\\mathbf{U}_{j,j'}\\mathbf{S}_{j,j'}}_{\\text{self-interference}}\\!+\\!\\underbrace{\\sum_{k = 0}^{K-1}\\!\\!\\sum_{j' \\not = j'' \\atop j', j'' \\not = j}\\!\\!\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{H}_{k,j'}\\mathbf{U}_{j',j''}\\mathbf{S}_{j',j''}}_{\\text{other interference}} \\nonumber\\\\\n& +\\underbrace{\\sum_{k = 0}^{K-1}\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{Z}_{\\mathrm{R},k}+\\mathbf{Z}_j}_{\\text{noise}}.\n\\end{align}\nIn the above, $\\mathbf{Y}_j$ consists of four signal components: the desired signal, the self-interference, the other interference, and the noise. Since user $j$ perfectly knows the CSI and its own messages $\\left\\{\\mathbf{S}_{j,j'}, \\forall j' \\not= j\\right\\}$, the self-interference term in \\eqref{received signal} can be pre-cancelled before further processing. Each user $j$ is required to decode $2d$ spatial streams, $d$ from each of the other two users. To this end, there must be an interference-free subspace of dimension $2d$ in the receiving signal space of user $j$. More specifically, denote by $\\mathbf{V}_j \\in \\mathbb{C}^{2d \\times M}$ a projection matrix, with $\\mathbf{V}_j\\mathbf{Y}_j$ being the projected image of $\\mathbf{Y}_j$ in the subspace spanned by the row space of $\\mathbf{V}_j$. Then, to ensure the decodability of $\\mathbf{S}_{j',j}$ at user $j$, we should appropriately design $\\{\\mathbf{U}_{j,j'}\\}$, $\\{\\mathbf{F}_k\\}$, and $\\{\\mathbf{V}_j\\}$ to satisfy two sets of requirements, as detailed below.\n\nFirst, $\\mathbf{V}_j\\mathbf{Y}_j$ should be free of interference. 
That is, the following interference neutralization requirements should be met:\n\\begin{subequations}\n\\label{zeroforcing}\n\\begin{align}\n\\label{zeroforcing_1}\n\\sum_{k=0}^{K-1}\\mathbf{V}_0\\mathbf{G}_{0,k}\\mathbf{F}_k\\mathbf{H}_{k,1}\\mathbf{U}_{1,2}=\\mathbf{0},\\quad &\\quad \n\\sum_{k=0}^{K-1}\\mathbf{V}_0\\mathbf{G}_{0,k}\\mathbf{F}_k\\mathbf{H}_{k,2}\\mathbf{U}_{2,1}=\\mathbf{0},\\\\\n\\label{zeroforcing_12}\n\\sum_{k=0}^{K-1}\\mathbf{V}_1\\mathbf{G}_{1,k}\\mathbf{F}_k\\mathbf{H}_{k,2}\\mathbf{U}_{2,0}=\\mathbf{0},\\quad &\\quad \n\\sum_{k=0}^{K-1}\\mathbf{V}_1\\mathbf{G}_{1,k}\\mathbf{F}_k\\mathbf{H}_{k,0}\\mathbf{U}_{0,2}=\\mathbf{0},\\\\\n\\label{zeroforcing_2}\n\\sum_{k=0}^{K-1}\\mathbf{V}_2\\mathbf{G}_{2,k}\\mathbf{F}_k\\mathbf{H}_{k,0}\\mathbf{U}_{0,1}=\\mathbf{0},\\quad &\\quad \n\\sum_{k=0}^{K-1}\\mathbf{V}_2\\mathbf{G}_{2,k}\\mathbf{F}_k\\mathbf{H}_{k,1}\\mathbf{U}_{1,0}=\\mathbf{0}.\n\\end{align}\nHere, ``interference neutralization\" refers to a special transceiver design strategy for interference cancellation such that a common source of interference from different paths cancels itself at a destination. Second, user $j$ needs to decode $2d$ spatial streams from the projected signal $\\mathbf{V}_j\\mathbf{Y}_j \\in \\mathbb{C}^{2d\\times T}$. To ensure the decodability, the desired signal in \\eqref{received signal} after projection should be of rank $2d$. Define\n\\begin{equation}\n\\mathbf{W}_{k,j} = \\left[\\mathbf{H}_{k,j+1}\\mathbf{U}_{j+1,j},\\mathbf{H}_{k,j-1}\\mathbf{U}_{j-1,j}\\right], \\quad\\!\\!j = 0,1,2. \\nonumber\n\\end{equation}\nThen $\\sum_{k=0}^{K-1}\\mathbf{V}_j\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{W}_{k,j}$ represents the effective channel for the messages desired by user $j$. 
To ensure the decodability of $2d$ spatial streams at each user $j$, we have the following rank requirements:\n\\begin{align}\n\\label{rank}\n\\mathrm{rank}\\left(\\sum_{k=0}^{K-1}\\mathbf{V}_j\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{W}_{k,j}\\right)=2d, \\quad\\!\\! j=0,1,2.\n\\end{align}\n\\end{subequations}\n\nGiven an antenna setup $\\left(M,N\\right)$ and a target DoF $d$, if there exist suitable $\\{\\mathbf{U}_{j',j},\\mathbf{F}_k,\\mathbf{V}_j\\}$ satisfying \\eqref{zeroforcing} for randomly generated channel matrices $\\{\\mathbf{H}_{k,j},\\mathbf{G}_{j,k}\\}$ with probability one, then a total DoF $d_\\mathrm{sum}=3d$ is achieved by the proposed linear processing scheme. Thus, the key issue is to analyze the solvability of the system \\eqref{zeroforcing} with respect to $\\{\\mathbf{U}_{j',j},\\mathbf{F}_k,\\mathbf{V}_j\\}$, which is the main focus of the rest of this paper.\n\n\\section{Achievable DoF of the Symmetric Multi-Relay MIMO Y Channel}\nIn general, to check the achievability of a certain DoF $d$, we need to jointly design the matrices $\\{\\mathbf{U}_{j,j'},\\mathbf{F}_k,\\mathbf{V}_j\\}$ to meet \\eqref{zeroforcing}. This is a challenging task since the equations in \\eqref{zeroforcing} are nonlinear with respect to $\\{\\mathbf{U}_{j,j'},\\mathbf{F}_k,\\mathbf{V}_j\\}$. To tackle this problem, we start with a conventional approach based on the idea of uplink-downlink symmetry.\n\n\\subsection{Conventional Approach with Uplink-Downlink Symmetry}\nUplink-downlink symmetry has been widely used in precoding design for MIMO mRCs \\cite{capacity1,yuan,3,32,capacity2}. It is shown to be optimal for many single-relay MIMO mRCs \\cite{capacity1,3, 32}, and efficient for some multi-relay MIMO mRCs \\cite{capacity2}. In this subsection, we follow the idea of uplink-downlink symmetry to solve \\eqref{zeroforcing}. 
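Before proceeding, it is worth making explicit why fixing the user-side matrices is helpful: for given $\\{\\mathbf{U}_{j,j'},\\mathbf{V}_j\\}$, each condition in \\eqref{zeroforcing_1}-\\eqref{zeroforcing_2} is linear in the relay precoders, by the standard identity $\\mathrm{vec}(\\mathbf{A}\\mathbf{F}\\mathbf{B}) = (\\mathbf{B}^T \\otimes \\mathbf{A})\\mathrm{vec}(\\mathbf{F})$, with $\\mathrm{vec}$ and $\\otimes$ as defined in the Notation section. The sketch below checks this identity numerically for one term of the form $\\mathbf{V}\\mathbf{G}\\mathbf{F}\\mathbf{H}\\mathbf{U}$; all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, d = 4, 3, 1                    # illustrative dimensions

V = rng.standard_normal((2 * d, M))  # post-processor of one user
G = rng.standard_normal((M, N))      # downlink channel from one relay
F = rng.standard_normal((N, N))      # relay precoder (the unknown)
H = rng.standard_normal((N, M))      # uplink channel to the same relay
U = rng.standard_normal((M, d))      # precoder of one user

A = V @ G                            # left factor,  size 2d x N
B = H @ U                            # right factor, size N x d

# vec() stacks columns, hence the column-major ("F") flattening below.
lhs = (A @ F @ B).flatten(order="F")            # vec(A F B)
rhs = np.kron(B.T, A) @ F.flatten(order="F")    # (B^T kron A) vec(F)
assert np.allclose(lhs, rhs)
```

Stacking such rows over the relay index $k$ and over all the neutralization conditions turns \\eqref{zeroforcing_1}-\\eqref{zeroforcing_2} into one linear system in $\\big[\\mathrm{vec}(\\mathbf{F}_0)^T,\\cdots,\\mathrm{vec}(\\mathbf{F}_{K-1})^T\\big]^T$, which is the viewpoint behind the counting arguments used below.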
Then, for an arbitrary configuration of $(M,N,K)$, we derive an achievable DoF (or an upper bound of the achievable DoF) of the symmetric multi-relay MIMO Y channel. We show that there is a significant DoF gap between this result and the DoF upper bound in \\eqref{upperbound}, implying the inadequacy of the uplink-downlink symmetric precoding design for multi-relay MIMO mRCs.\n\nTo start with, we split each projection matrix $\\mathbf{V}_j$ equally into two parts as\n\\begin{equation}\n\\mathbf{V}_j = \\left[\\mathbf{V}_{j+1,j}, \\mathbf{V}_{j-1,j}\\right]\n\\end{equation}\nwhere $\\mathbf{V}_{j',j}\\in \\mathbb{C}^{d\\times M}$ is the projection matrix for the message $\\mathbf{S}_{j',j}$. Then the interference neutralization conditions \\eqref{zeroforcing_1}-\\eqref{zeroforcing_2} can be rewritten as\n\\begin{equation}\n\\label{zeroforcing_symg}\n\\sum_{k=0}^{K-1}\\mathbf{V}_{j',j}\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{H}_{k,j'}\\mathbf{U}_{j',j''}=\\mathbf{0},\\quad\\!\\! \\forall j \\not= j',\\quad\\!\\!\\! j'\\not=j'',\\quad\\!\\!\\!j \\not = j''.\n\\end{equation}\nNote that for all three users, there are 12 matrix equations in \\eqref{zeroforcing_symg} for interference neutralization. Moreover, the rank requirements remain the same as in \\eqref{rank}.\n\nWe next establish $K$ DoF points, namely, $\\left(\\frac{M}{N}, d_\\mathrm{sum}\\right) = \\left(\\frac{2}{3},N\\right)$ and $\\left(\\frac{6k+\\sqrt{6k}}{12},\\frac{\\sqrt{6k}N}{2}\\right)$, $k = 2,\\cdots, K$, by following the uplink-downlink symmetric design. The DoF point $\\left(\\frac{2}{3},N\\right)$ is achievable, while the points $\\left(\\frac{6k+\\sqrt{6k}}{12},\\frac{\\sqrt{6k}N}{2}\\right)$, $k = 2,\\cdots, K$, are only upper bounds. 
That is, for any $\\frac{M}{N} \\geq \\frac{6k+\\sqrt{6k}}{12}$, $k=2,\\cdots,K$, the total DoF achieved by the uplink-downlink symmetric design is upper bounded as $d_\\mathrm{sum} < \\frac{\\sqrt{6k}N}{2}$.\n\nThe first DoF point is derived by deactivating $K-1$ of the $K$ relays. In this case, the model reduces to a single-relay MIMO Y channel with $N$ antennas at the relay. Then, the precoding design in \\cite{32} can be applied directly. From the result of \\cite{32}, a total DoF of $\\frac{3M}{2}$ can be achieved for the half-duplex single-relay MIMO Y channel with $\\frac{M}{N} = \\frac{2}{3}$. That is, $\\left(\\frac{M}{N}, d_\\mathrm{sum}\\right)=\\left(\\frac{2}{3},N\\right)$ is achievable. \n\nWe now consider the DoF point $\\left(\\frac{6K+\\sqrt{6K}}{12},\\frac{\\sqrt{6K}N}{2}\\right)$. Note that the remaining DoF points can be straightforwardly obtained by deactivating $K-k$ relays, for $k=2,\\cdots,K-1$. Following \\cite{capacity2}, we apply signal alignment techniques for the design of $\\{\\mathbf{U}_{j,j'},\\mathbf{V}_{j,j'}\\}$ to reduce the number of linearly independent constraints in \\eqref{zeroforcing_sym}. First consider the uplink signal alignment design. We align the signals exchanged by user $j$ and user $j'$ at each relay. That is, we design $\\{\\mathbf{U}_{j,j'}\\}$ to satisfy\n\\begin{align}\n\\label{uplinkalign}\n\\mathbf{H}_{k,j}\\mathbf{U}_{j,j'} = \\mathbf{H}_{k,j'}\\mathbf{U}_{j',j}, \\quad j,j' = 0,1,2, \\quad j\\not =j', \\quad \\forall k.\n\\end{align}\nNote that for the single-relay case, it usually suffices to align $\\mathbf{H}_{k,j}\\mathbf{U}_{j,j'}$ and $\\mathbf{H}_{k,j'}\\mathbf{U}_{j',j}$ in a common subspace. However, for the multi-relay case here, we rely on the stricter constraint \\eqref{uplinkalign} for signal alignment, so as to reduce the number of linearly independent equations in \\eqref{zeroforcing_sym}. We then consider the downlink signal alignment. 
From \\eqref{zeroforcing_symg}, we see that the uplink equivalent channel matrix $\\mathbf{H}_{k,j}\\mathbf{U}_{j,j'}\\in\\mathbb{C}^{N\\times d}$ is of the same size as the transpose of the downlink equivalent channel matrix $\\mathbf{V}_{j,j'}\\mathbf{G}_{j',k} \\in \\mathbb{C}^{d \\times N}$, for all $k,j,j'$. This structural symmetry implies that any beamforming design in the uplink phase directly carries over to the downlink phase. Specifically, we design the downlink receiving matrices $\\{\\mathbf{V}_{j,j'}\\}$ to satisfy\n\\begin{align}\n\\label{downlinkalign}\n\\mathbf{V}_{j,j'}\\mathbf{G}_{j',k} = \\mathbf{V}_{j',j}\\mathbf{G}_{j,k}, \\quad j,j' = 0,1,2, \\quad j\\not =j', \\quad \\forall k.\n\\end{align}\nFrom the rank-nullity theorem, to ensure the existence of full-rank $\\{\\mathbf{U}_{j,j'},\\mathbf{V}_{j,j'}\\}$ satisfying \\eqref{uplinkalign} and \\eqref{downlinkalign}, the following condition must be met:\n\\begin{align}\n\\label{symmetry_align_condition}\n2M-KN\\geq d.\n\\end{align}\nWith \\eqref{uplinkalign} and \\eqref{downlinkalign}, the interference neutralization conditions in \\eqref{zeroforcing_symg} reduce to \n\\begin{equation}\n\\label{zeroforcing_sym}\n\\sum_{k=0}^{K-1}\\mathbf{V}_{j',j}\\mathbf{G}_{j,k}\\mathbf{F}_k\\mathbf{H}_{k,j+1}\\mathbf{U}_{j+1,j+2}=\\mathbf{0},\\quad \\!\\! \\quad j,j' = 0,1,2, \\quad j\\not =j'.\n\\end{equation}\nNote that $\\{\\mathbf{U}_{j',j},\\mathbf{V}_{j',j}\\}$ are already fixed to meet \\eqref{uplinkalign} and \\eqref{downlinkalign}. Thus \\eqref{zeroforcing_sym} is a linear system in $\\{\\mathbf{F}_k\\}$ with $6d^2$ scalar equations and $KN^2$ unknowns. The system has a non-zero solution $\\{\\mathbf{F}_{k}\\}$ provided that $d < \\frac{\\sqrt{6K}N}{6}$. Together with \\eqref{symmetry_align_condition}, we obtain the DoF point $\\left(\\frac{M}{N}, d_\\mathrm{sum}\\right) = \\left(\\frac{6K+\\sqrt{6K}}{12},\\frac{\\sqrt{6K}N}{2}\\right)$. 
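The counting above can be spot-checked numerically. Assuming, as in the generic-solvability argument, that $6d^2 < KN^2$ is what permits a nonzero solution, the largest stream number is $d = \\frac{\\sqrt{6K}N}{6}$; substituting this into \\eqref{symmetry_align_condition} with equality recovers the claimed ratio $\\frac{M}{N} = \\frac{6K+\\sqrt{6K}}{12}$ and total DoF $\\frac{\\sqrt{6K}N}{2}$. The values of $K$ and $N$ below are illustrative.

```python
import math

N = 6.0                                   # illustrative number of relay antennas
for K in range(2, 8):
    d_max = math.sqrt(6 * K) * N / 6      # largest d with 6 d^2 <= K N^2
    assert 6 * d_max**2 <= K * N**2 + 1e-9

    # Alignment condition 2M - KN >= d, taken with equality at d = d_max:
    M = (K * N + d_max) / 2
    ratio = M / N
    assert math.isclose(ratio, (6 * K + math.sqrt(6 * K)) / 12)

    # Total DoF at this point: d_sum = 3 d_max = sqrt(6K) N / 2.
    assert math.isclose(3 * d_max, math.sqrt(6 * K) * N / 2)
```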
Note that by deactivating $K-k$ relays, we immediately obtain the remaining DoF points $\\left(\\frac{M}{N}, d_\\mathrm{sum}\\right) = \\left(\\frac{6k+\\sqrt{6k}}{12},\\frac{\\sqrt{6k}N}{2}\\right)$, for $k = 2,\\cdots, K-1$.\n\nWe now apply the antenna disablement lemma \\cite{DOF2} to the above $K$ DoF points, yielding a continuous DoF curve of the uplink-downlink symmetric design for an arbitrary value of $\\frac{M}{N}$:\n\\begin{equation}\n\\label{DoF_sym}\nd_\\mathrm{sum} = N\\max_{(a,b)\\in \\mathcal{S}_K}g_{(a,b)}\\left(\\frac{M}{N}\\right)\n\\end{equation}\nwhere $\\mathcal{S}_K = \\left\\{\\left(\\frac{2}{3},1\\right)\\right\\} \\cup \\left\\{\\left( \\frac{6k+\\sqrt{6k}}{12},\\frac{\\sqrt{6k}}{2}\\right) \\Big| k = 2,\\cdots, K\\right\\}$ and the $g$-function is defined as\n\\begin{equation}\ng_{(a,b)}(x) = \\begin{cases}\n\\frac{bx}{a} & x < a\\\\\nb & x \\geq a.\n\\end{cases}\n\\end{equation}\n\nWe now consider the case $KN^2 > 3M^2$, i.e., $\\frac{M}{N} \\in \\left(0, \\frac{\\sqrt{3K}}{3}\\right)$. To prove the achievability of $d = \\frac{M}{2}$, it suffices to show that there exist $\\{\\mathbf{F}_k\\}$ satisfying all the conditions in \\eqref{zeroforcing}. We have the following result. \n\n\\begin{lemma}\n\\label{LemmaDoF3}\nFor ${\\frac{M}{N} \\in \\Big(0,\\frac{\\sqrt{3K}}{3}\\Big)}$ and $d = \\frac{M}{2}$, there exist $\\{\\mathbf{F}_{k}\\}$, together with $\\{\\mathbf{U}_{j,j'}, \\mathbf{V}_{j}\\}$ in \\eqref{UV}, satisfying \\eqref{zeroforcing} with probability one.\n\\end{lemma}\n\\begin{proof}\nSee Appendix I-C.\n\\end{proof}\n\n\\subsection{Achievable DoF Using Antenna Disablement}\nIn the preceding subsections, we have established an achievable DoF for $\\frac{M}{N} \\in \\Big[0, \\frac{\\sqrt{3K}}{3}\\Big) \\cup \\Big[1,\\frac{3K+\\sqrt{9K^2-12K}}{6}\\Big)$. We now follow the antenna disablement approach \\cite{DOF2} to establish the achievable DoF for other ranges of $\\frac{M}{N}$. Specifically, for $\\frac{M}{N} \\in \\left(0,1\\right)$, we disable $N-M$ antennas at each relay. 
Then, from Lemma \\ref{LemmaDoF2}, we see that any DoF $d<\\frac{3M^2+2MN^*}{9M+N^*} = \\frac{M}{2}$ can be achieved, where $N^*=M$ is the number of active antennas at each relay. The only issue is that $d$ may not be an integer. This issue can be resolved by the symbol extension technique described in Appendix II. Similarly, with symbol extension and antenna disablement, $d<\\frac{KN}{3}$ is achievable for $\\frac{M}{N} \\in \\left(\\frac{3K+\\sqrt{9K^2-12K}}{6},\\infty\\right)$. \n\nCombining Lemmas \\ref{LemmaDoF1}, \\ref{LemmaDoF2}, and \\ref{LemmaDoF3}, we conclude that any DoF $d$ satisfying $d < d^*$ is achievable, where\n\\begin{equation}\nd^* =\n\\begin{cases}\n\\frac{M}{2} & {\\frac{M}{N} \\in \\Big[0,\\max\\left\\{\\frac{\\sqrt{3K}}{3},1\\right\\}\\Big)}\\\\\n\\max\\left\\{\\frac{M}{3} + \\frac{5MN}{27M+3N},\\frac{\\sqrt{3K}N}{6}\\right\\} & {\\frac{M}{N} \\in \\Big[\\max\\left\\{\\frac{\\sqrt{3K}}{3},1\\right\\}, \\frac{9K+\\sqrt{81K^2+60K}}{30}\\Big)} \\\\\n\\frac{M}{3} + \\frac{KN^2}{9M} & {\\frac{M}{N} \\in \\Big[\\frac{9K+\\sqrt{81K^2+60K}}{30},\\frac{3K+\\sqrt{9K^2-12K}}{6}\\Big)} \\\\\n\\frac{KN}{3} & {\\frac{M}{N} \\in \\Big[\\frac{3K+\\sqrt{9K^2-12K}}{6},\\infty\\Big)}.\n\\end{cases}\n\\end{equation}\nWith $d_\\mathrm{sum}=3d$, we immediately obtain \\eqref{achievableDoF0}. This completes the proof of Theorem 1.\n\n\\section{Conclusion and Future Work}\nIn this paper, we developed a new formalism to analyze the achievable DoF of the symmetric multi-relay MIMO Y channel. Specifically, we adopted the idea of uplink-downlink asymmetric design and proposed a new method to tackle the solvability problem of linear systems with rank constraints. In the proposed design, we also incorporated the techniques of signal alignment, antenna disablement, and symbol extension. An achievable DoF for an arbitrary configuration of $(M, N, K)$ was derived. \n\nThe study of multi-relay MIMO mRCs is still in an initial stage. 
Based on our work, the following directions will be of interest for future research.\n\n\n\\subsection{Tighter Upper Bounds}\nFor $\\frac{M}{N} \\in \\Big(\\max\\left\\{\\frac{\\sqrt{3K}}{3},1\\right\\}, \\frac{3K+\\sqrt{9K^2-12K}}{6}\\Big)$, our achievable total DoF does not match the full relay-cooperation upper bound in \\eqref{upperbound}. We conjecture that the main reason for this mismatch is that the upper bound is too loose in this range of $\\frac{M}{N}$. As such, tighter upper bounds are highly desirable to fully characterize the DoF of the symmetric multi-relay MIMO Y channel. This, however, requires careful analysis of the fundamental performance degradation caused by the separation of the relays.\n\n\\subsection{General Antenna and DoF Setups}\nIn this paper, we considered the symmetric multi-relay MIMO Y channel, where the numbers of antennas at all user nodes are assumed to be the same. We also assumed a symmetric DoF setting, where each user transmits the same number of independent spatial data streams. The main purpose of these assumptions is to avoid the combinatorial complexity in manipulating signals and interference. The techniques used in this paper can be extended to the cases with asymmetric antenna and DoF setups. However, this results in a DoF achievability problem far more complicated than \\eqref{zeroforcing}, since we need to analyze the feasibility of all possible DoF tuples $(d_{0,1},d_{1,0},d_{1,2},d_{2,1},d_{0,2},d_{2,0})$ under an asymmetric antenna configuration. The optimal DoF region for the single-relay case has been recently reported in \\cite{generalMIMOY}. We believe that the techniques used in \\cite{generalMIMOY} will provide some insights into deriving the DoF region of the multi-relay case. \n\n\\subsection{Cases with More Users}\nOur approach can be extended to multi-relay MIMO Y channels with more than three users. However, we emphasize that such an extension is not trivial. 
As seen from \\cite{Gao, multirelay, MIMO2, yuan}, for MIMO mRCs with more than three users, more signal alignment patterns than pairwise alignment should be exploited to support efficient data exchanges. This implies that in multi-relay MIMO Y channels with more than three users, we need to combine our uplink-downlink asymmetric approach with more intelligent signal alignment strategies. Therefore, the extension to multi-relay MIMO Y channels with more users will be an interesting research topic worthy of future effort.\n\n\n\\begin{appendices}\n\\section{Proof of Lemmas \\ref{LemmaDoF1}-\\ref{LemmaDoF3}}\n\\label{Proof of Lemma 1 and Lemma 2}\n\\subsection{Proof of Lemma \\ref{LemmaDoF1}}\n\\label{Proof of Lemma 1}\nWe need to prove that for $\\frac{M}{N} \\in \\Big[\\frac{9K+\\sqrt{81K^2+60K}}{30}, \\frac{3K+\\sqrt{9K^2-12K}}{6}\\Big)$ and $d<\\frac{M}{3} + \\frac{KN^2}{9M}$, there exist $\\{\\mathbf{U}_{j,j'}\\}$ and $\\{\\mathbf{F}_k\\}$ satisfying \\eqref{pairwisealignment}, \\eqref{pneutralize}, and \\eqref{prank} with probability one. The main steps of the proof are presented as follows:\n\\begin{itemize}\n\\item Show that the signal alignment \\eqref{pairwisealignment} can be performed and $d'$ in \\eqref{d'} is well-defined.\n\\item For $M,N$ and $d$ in the given ranges, construct a set $\\mathcal{T}_{M,N,d}$ of channel realizations such that a randomly generated channel realization belongs to $\\mathcal{T}_{M,N,d}$ with probability one.\n\\item Prove that for almost all elements in $\\mathcal{T}_{M,N,d}$, there exist $\\{\\mathbf{U}_{j,j'}\\}$ and $\\{\\mathbf{F}_k\\}$ satisfying \\eqref{pairwisealignment}, \\eqref{pneutralize}, and \\eqref{prank}. \\footnote{Although the term ``satisfying (24)\" appears both here and in (54), the conditions are different. 
In (54), we require that there exist $\\{\\mathbf{U}_{j,j'}\\}$ satisfying (24) and $\\mathrm{rank}(\\mathbf{K})=3d'M$, while here we require that there exist $\\{\\mathbf{U}_{j,j'}\\}$ and $\\{\\mathbf{F}_k\\}$ satisfying (24), (31), and (32). It is possible that for some channel realization, there exist $\\{\\mathbf{U}_{j,j'}\\}$ satisfying (24), but there do not exist $\\{\\mathbf{F}_k\\}$, together with these $\\{\\mathbf{U}_{j,j'}\\}$, satisfying (31) and (32).}\n\\end{itemize}\n\nWe first show that the signal alignment \\eqref{pairwisealignment} can be performed. For $\\frac{M}{N} \\geq \\frac{9K+\\sqrt{81K^2+60K}}{30}$, we have\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{equation}\n2M- \\left(\\frac{M}{3} + \\frac{KN^2}{9M}\\right) \\geq NK.\n\\end{equation}}\n\\!\\!Further, as $d<\\frac{M}{3} + \\frac{KN^2}{9M}$, we obtain \n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{equation}\n2M -d > 2M- \\left(\\frac{M}{3} + \\frac{KN^2}{9M}\\right) \\geq NK.\n\\end{equation}}\n\\!\\!Therefore, \\eqref{alignmentcondition} is met, and so there exist full-column-rank $\\{\\mathbf{U}_{j,j'}\\}$ satisfying \\eqref{pairwisealignment} with probability one.\n\nWe next show that $d'$ in \\eqref{d'} is well-defined. That is, $0 \\leq d' = 3d-M \\leq d$ holds for $d$ chosen sufficiently close to $\\frac{M}{3}+\\frac{KN^2}{9M}$. For $\\frac{M}{N} \\in \\left[\\frac{9K+\\sqrt{81K^2+60K}}{30}, \\frac{3K+\\sqrt{9K^2-12K}}{6} \\right]$, together with $d<\\frac{M}{3} + \\frac{KN^2}{9M}$ and $K\\geq2$, we obtain\n\\begin{equation}\nd'-d = 2d -M <\\frac{2M}{3} + \\frac{2KN^2}{9M} - M = \\frac{2KN^2 - 3M^2}{9M} < 0,\n\\end{equation}\nwhere the last step holds by noting that $\\frac{M}{N}>\\frac{3}{5}K > \\sqrt{\\frac{2K}{3}}$ for $K \\geq 2$. 
On the other hand, as\n\\begin{equation}\n3\\times \\left(\\frac{M}{3} + \\frac{KN^2}{9M}\\right) -M =\\frac{KN^2}{3M} >0,\n\\end{equation}\nwe can always choose $d$ close to $\\frac{M}{3} + \\frac{KN^2}{9M}$ to ensure $d' = 3d-M>0$. We henceforth always assume that $d$ is appropriately chosen such that $0\\leq d' \\leq d$.\n\nWe now consider step 2 of the proof. Denote the overall channel $\\mathbf{T}$ of the symmetric multi-relay MIMO Y channel by\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{equation}\n\\label{channel_realization}\n\\mathbf{T} = \\left(\\mathbf{H},\\mathbf{G}\\right) \\in \\mathbb{C}^{KN\\times 3M} \\times \\mathbb{C}^{3M \\times KN}\n\\end{equation}}\n\\!\\!where\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{equation}\n\\label{uplink_channel_realization}\n\\mathbf{H} = \\left[\\begin{array}{ccc}\n \\mathbf{H}_{0,0} & \\mathbf{H}_{0,1} & \\mathbf{H}_{0,2}\\\\\n \\vdots & \\vdots & \\vdots\\\\\n \\mathbf{H}_{K-1,0} & \\mathbf{H}_{K-1,1} & \\mathbf{H}_{K-1,2}\\\\\n\\end{array}\n\\right] ,\\quad\n\\mathbf{G} = \\left[\\begin{array}{cccc}\n \\mathbf{G}_{0,0} &\\cdots& \\mathbf{G}_{0,K-1}\\\\\n \\mathbf{G}_{1,0} &\\cdots& \\mathbf{G}_{1,K-1}\\\\\n \\mathbf{G}_{2,0} &\\cdots& \\mathbf{G}_{2,K-1}\\\\\n\\end{array}\n\\right].\n\\end{equation}}\n\\!\\!Then, we rewrite \\eqref{pneutralize} using Kronecker product as\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{equation}\n\\label{Kform}\n\\mathbf{K}\\mathbf{f} = \\mathbf{0},\n\\end{equation}}\n\\!\\!where\n\\begin{subequations}\n\\begin{equation}\n\\label{K}\n\\mathbf{K}\\!=\\!\\!\\left[\\!\\!\\!\n\\begin{array}{cccc}\n \\left(\\mathbf{H}_{0,1}\\mathbf{U}^{(L)}_{1,2}\\right)^{T}\\!\\!\\! \\otimes\\! \\mathbf{G}_{0,0} \\!&\\! \\left(\\mathbf{H}_{1,1}\\mathbf{U}^{(L)}_{1,2}\\right)^{T}\\!\\!\\!\\otimes \\!\\mathbf{G}_{0,1} \\!&\\! \\!\\cdots\\! \\!&\\! 
\\left(\\mathbf{H}_{K-1,1}\\mathbf{U}^{(L)}_{1,2}\\right)^{T}\\!\\!\\!\\otimes\\! \\mathbf{G}_{0,K-1}\\\\\n \\left(\\mathbf{H}_{0,2}\\mathbf{U}^{(L)}_{2,0}\\right)^{T}\\!\\!\\!\\otimes\\! \\mathbf{G}_{1,0} \\!&\\! \\left(\\mathbf{H}_{1,2}\\mathbf{U}^{(L)}_{2,0}\\right)^{T}\\!\\!\\otimes\\! \\mathbf{G}_{1,1} \\!&\\! \\!\\cdots\\! \\!&\\! \\left(\\mathbf{H}_{K-1,2}\\mathbf{U}^{(L)}_{2,0}\\right)^{T}\\!\\!\\!\\otimes \\!\\mathbf{G}_{1,K-1}\\\\\n \\left(\\mathbf{H}_{0,0}\\mathbf{U}^{(L)}_{0,1}\\right)^{T}\\!\\!\\!\\otimes \\!\\mathbf{G}_{2,0} \\!&\\! \\left(\\mathbf{H}_{1,0}\\mathbf{U}^{(L)}_{0,1}\\right)^{T}\\!\\!\\!\\otimes \\!\\mathbf{G}_{2,1} \\!&\\! \\!\\cdots\\!\\! &\\! \\left(\\mathbf{H}_{K-1,0}\\mathbf{U}^{(L)}_{0,1}\\right)^{T}\\!\\!\\!\\otimes\\! \\mathbf{G}_{2,K-1}\n\\end{array}\n\\!\\!\\right]\\!\\!\\in \\mathbb{C}^{3d'M\\times KN^2}\n\\end{equation}\n\\begin{equation}\n\\mathbf{f} = \\left[\n\\begin{array}{cccc}\n\\mathrm{vec}(\\mathbf{F}_0)^T&\n\\mathrm{vec}(\\mathbf{F}_1)^T&\n\\cdots&\n\\mathrm{vec}(\\mathbf{F}_{K-1})^T\n\\end{array}\n\\right]^T \\in \\mathbb{C}^{KN^2 \\times 1}.\n\\end{equation}\n\\end{subequations}\n\nWe are now ready to define the set $\\mathcal{T}_{M,N,d}$. For $\\frac{M}{N} \\in \\Big[\\frac{9K+\\sqrt{81K^2+60K}}{30}, \\frac{3K+\\sqrt{9K^2-12K}}{6}\\Big)$ and $d<\\frac{M}{3} + \\frac{KN^2}{9M}$, define \n{\\setlength{\\abovedisplayskip}{3pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{equation}\n\\label{DefTMNd}\n\\mathcal{T}_{M,N,d} = \\left\\{\\mathbf{T}\\quad\\!\\!\\left|\\quad\\!\\!\n\\begin{aligned}\n&\\text{All $\\mathbf{T}$ satisfying:} \\\\[-0.3cm]\n&\\text{1) }\\mathrm{rank}\\left(\\mathbf{K}_{j}\\right) = KN,\\forall j;\\\\[-0.3cm]\n&\\text{2) }\\text{there exist $\\{\\mathbf{U}_{j,j'}\\}$ satisfying \\eqref{pairwisealignment} and $\\mathrm{rank}\\left(\\mathbf{K}\\right) = 3d'M$}\n\\end{aligned}\\right.\n\\right\\}\n\\end{equation}}\n\\!\\!\\!where $\\mathbf{K}_j$ is defined in \\eqref{K_j}. 
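The block structure of \eqref{Kform} can be illustrated numerically. The sketch below (numpy, with hypothetical small sizes; `dp` plays the role of $d'$) assembles a matrix with the same $3 \times K$ Kronecker block pattern from random blocks, and extracts a relay vector $\mathbf{f}$ from its null space; for a wide, generically full-row-rank matrix, the null space of $\mathbf{K}\mathbf{f}=\mathbf{0}$ has dimension $KN^2-3d'M$.

```python
import numpy as np

# Numerical illustration of the zero-forcing system K f = 0,
# with hypothetical small sizes chosen so that K*N**2 > 3*dp*M.
rng = np.random.default_rng(0)
M, N, K, dp = 3, 3, 2, 1

# Random stand-ins for the N x d' products H_{k,j} U^{(L)}_{j,j'} and the
# M x N downlink channels G_{j,k}; indices: [block-row j][relay k].
HU = rng.standard_normal((3, K, N, dp))
G = rng.standard_normal((3, K, M, N))

# Each block is (H U)^T kron G, of size d'M x N^2.
Kmat = np.vstack([
    np.hstack([np.kron(HU[j, k].T, G[j, k]) for k in range(K)])
    for j in range(3)
])
assert Kmat.shape == (3 * dp * M, K * N**2)

# Kmat is wide, so K f = 0 has nontrivial solutions; take f from the SVD.
null_dim = K * N**2 - np.linalg.matrix_rank(Kmat)
f = np.linalg.svd(Kmat)[2][-1]          # right-singular vector with Kf ~ 0
assert np.allclose(Kmat @ f, 0, atol=1e-8)
```

Splitting `f` back into $K$ chunks of length $N^2$ and reshaping each to $N \times N$ recovers candidate relay matrices $\{\mathbf{F}_k\}$ for this toy instance.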
We claim that a randomly generated $\\mathbf{T}$ belongs to $\\mathcal{T}_{M,N,d}$ with probability one. Recall that the entries of $\\mathbf{T}$ are drawn independently from a continuous distribution. Since $\\mathbf{K}_{j}$ is a wide matrix, it is of full row rank ($=KN$) with probability one. We next show that for a random $\\mathbf{T}$ and full-column-rank $\\{\\mathbf{U}_{j,j'}\\}$ satisfying \\eqref{pairwisealignment}, $\\mathbf{K}$ in \\eqref{K} is of full row rank with probability one. To see this, we first note that $\\mathbf{K}$ is a wide matrix since\n{\\setlength{\\abovedisplayskip}{3pt}\n\\setlength{\\belowdisplayskip}{3pt}\n\\begin{align}\n\\label{wideK}\nKN^2-3d'M & = KN^2+3M^2-9dM > KN^2+3M^2 - 9M\\left(\\frac{M}{3} + \\frac{KN^2}{9M}\\right) = 0.\n\\end{align}}\n\\!\\!Second, from the channel randomness, we have $\\mathrm{rank}\\left(\\mathbf{H}_{k,j}\\mathbf{U}^{(L)}_{j,j'}\\right)=d'$ and $\\mathrm{rank}\\left(\\mathbf{G}_{j',k}\\right)=N$. Then $\\left(\\mathbf{H}_{k,j}\\mathbf{U}^{(L)}_{j,j'}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{j',k}$ is of rank $d'N$. Each $d'M \\times KN^2$ block-row of $\\mathbf{K}$ consists of $K$ submatrices in the form of $\\left(\\mathbf{H}_{k,j}\\mathbf{U}^{(L)}_{j,j'}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{j',k}$. 
From the channel randomness, the rank of each block-row is given by $\\min\\left\\{d'M, Kd'N\\right\\} = d'M$, since $\\frac{M}{N} \\leq \\frac{3K+\\sqrt{9K^2-12K}}{6} < K$ for $\\frac{M}{N} \\in \\left[\\frac{9K+\\sqrt{81K^2+60K}}{30},\\frac{3K+\\sqrt{9K^2-12K}}{6}\\right]$.\nMoreover, for any $d < \\frac{3M^2+2MN}{9M+N}$, we have\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{small}\n\\begin{equation}\nN' = \\frac{2M-d}{K} > \\frac{2M-\\frac{3M^2+2MN}{9M+N}}{K} = \\frac{15M^2}{K(9M+N)} > 0,\n\\end{equation}\n\\end{small}}\n\\!\\!implying that $N'>0$ for any $d<\\frac{3M^2+2MN}{9M+N}$.\nFurther, for $1 \\leq \\frac{M}{N}< \\frac{9K+\\sqrt{81K^2+60K}}{30}$, we have\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{small}\n\\begin{equation}\n\\frac{2M-\\frac{3M^2+2MN}{9M+N}}{K} < N.\n\\end{equation}\n\\end{small}}\n\\!\\!\\!Therefore, we can always choose $d$ close to $\\frac{3M^2+2MN}{9M+N}$ to ensure $0< N' = \\frac{2M-d}{K} \\leq N$. Analogously to Appendix \\ref{Proof of Lemma 1 and Lemma 2}, we can also verify that for $\\frac{M}{N} \\in \\Big[1, \\frac{9K+\\sqrt{81K^2+60K}}{30}\\Big)$, we can choose $d$ close to $\\frac{3M^2+2MN}{9M+N}$ such that $0 \\leq d' \\leq d$. \n\nWe now consider step 2 of the proof. 
For $\\frac{M}{N} \\in \\Big[1, \\frac{9K+\\sqrt{81K^2+60K}}{30}\\Big)$ and $d<\\frac{3M^2+2MN}{9M+N}$, define\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{equation}\n\\tilde{\\mathcal{T}}_{M,N,d} = \\left\\{\\mathbf{T}\\quad\\!\\!\\left|\\quad\\!\\!\n\\begin{aligned}\n& \\text{All $\\mathbf{T}$ satisfying:}\\\\[-0.3cm]\n& \\text{1) } \\mathrm{rank}(\\tilde{\\mathbf{K}}_{j}) = KN',\\forall j;\\\\[-0.3cm]\n& \\text{2) } \\text{there exist $\\{\\mathbf{U}_{j,j'}\\}$ satisfying \\eqref{alignment} and $\\mathrm{rank}(\\tilde{\\mathbf{K}}) = 3d'M$}\n \\end{aligned}\\right.\\right\\}\n\\end{equation}}\n\\!\\!where $\\tilde{\\mathbf{K}}_j$ is defined in \\eqref{K'_j} and\n\\begin{small}\n\\begin{align}\n\\label{K'}\n\\tilde{\\mathbf{K}}=\\left[\n\\begin{array}{cccc}\n \\left(\\tilde{\\mathbf{H}}_{0,1}\\mathbf{U}^{(L)}_{1,2}\\right)^{T}\\!\\! \\otimes \\mathbf{G}_{0,0} & \\left(\\tilde{\\mathbf{H}}_{1,1}\\mathbf{U}^{(L)}_{1,2}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{0,1} & \\cdots & \\left(\\tilde{\\mathbf{H}}_{K-1,1}\\mathbf{U}^{(L)}_{1,2}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{0,K-1}\\\\\n \\left(\\tilde{\\mathbf{H}}_{0,2}\\mathbf{U}^{(L)}_{2,0}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{1,0} & \\left(\\tilde{\\mathbf{H}}_{1,2}\\mathbf{U}^{(L)}_{2,0}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{1,1} & \\cdots & \\left(\\tilde{\\mathbf{H}}_{K-1,2}\\mathbf{U}^{(L)}_{2,0}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{1,K-1}\\\\\n \\left(\\tilde{\\mathbf{H}}_{0,0}\\mathbf{U}^{(L)}_{0,1}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{2,0} & \\left(\\tilde{\\mathbf{H}}_{1,0}\\mathbf{U}^{(L)}_{0,1}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{2,1} & \\cdots & \\left(\\tilde{\\mathbf{H}}_{K-1,0}\\mathbf{U}^{(L)}_{0,1}\\right)^{T}\\!\\!\\otimes \\mathbf{G}_{2,K-1}\n\\end{array}\n\\right]\n\\end{align}\n\\end{small}\n\\!\\!\\!where $\\tilde{\\mathbf{H}}_{k,j} = \\mathbf{E}\\mathbf{H}_{k,j}$ as defined in Section \\ref{Signal Alignment II}. 
Similarly to Appendix \\ref{Proof of Lemma 1}, we can verify that a randomly generated $\\mathbf{T}$ belongs to $\\tilde{\\mathcal{T}}_{M,N,d}$ with probability one. \n\nWe now consider the last step of the proof, i.e., to show that for almost all $\\mathbf{T}$ in $\\tilde{\\mathcal{T}}_{M,N,d}$, there exist $\\{\\mathbf{U}_{j,j'}\\}$ and $\\{\\tilde{\\mathbf{F}}_k\\}$ satisfying \\eqref{alignment} and \\eqref{pneutralize'}. Analogously to Lemma \\ref{ProveLemma1}, we have the following result. Note that the proof of Lemma \\ref{ProveLemma2} follows that of Lemma \\ref{ProveLemma1} step by step, and is omitted for brevity.\n\n\\begin{lemma}\n\\label{ProveLemma2}\nFor $d<\\frac{3M^2+2MN}{9M+N}$ and $\\frac{M}{N} \\in \\Big[1, \\frac{9K+\\sqrt{81K^2+60K}}{30}\\Big)$, assume that there exists a certain element $\\widehat{\\mathbf{T}} \\in \\tilde{\\mathcal{T}}_{M,N,d}$, full-row-rank $\\{\\widehat{\\mathbf{U}}_{j,j'}\\}$, and relay processing matrices $\\{\\widehat{\\tilde{\\mathbf{F}}}_k\\}$ such that \\eqref{alignment} and \\eqref{pneutralize'} hold.\nThen for a random $\\mathbf{T} \\in \\tilde{\\mathcal{T}}_{M,N,d}$, there exist $\\{\\mathbf{U}_{j,j'}\\}$ and $\\{\\tilde{\\mathbf{F}}_k\\}$ satisfying \\eqref{alignment} and \\eqref{pneutralize'} with probability one.\n\\end{lemma}\n\nBased on Lemma \\ref{ProveLemma2}, to show that for almost all $\\mathbf{T} \\in \\tilde{\\mathcal{T}}_{M,N,d}$ there exist $\\{\\mathbf{U}_{j,j'}\\}$ and $\\{\\tilde{\\mathbf{F}}_k\\}$ satisfying \\eqref{alignment} and \\eqref{pneutralize'}, it suffices to find a certain $\\widehat{\\mathbf{T}} \\in \\tilde{\\mathcal{T}}_{M,N,d}$ that satisfies the condition. To this end, we set \n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{small}\n\\begin{align}\n\\label{F_tilde}\n\\widehat{\\tilde{\\mathbf{F}}}_k = [\\mathbf{I}_{N'}, \\mathbf{0}_{ N'\\times (N-N')}]^T \\in \\mathbb{C}^{N \\times N'}, \\quad\\!\\! 
k=0,\\cdots, K-1\n\\end{align}\n\\end{small}}\n\\!\\!\\!and randomly generate $\\{\\widehat{\\mathbf{G}}_{j,k}\\}$, $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,j}\\widehat{\\mathbf{U}}^{(R)}_{j,j+1}\\}$ with the entries independently drawn from a continuous distribution. Then, we choose full-rank matrices $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,j}\\widehat{\\mathbf{U}}^{(L)}_{j,j+1}\\}$ to satisfy\n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{small}\n\\begin{align}\n\\label{specificH'}\n\\underbrace{\\left[\\begin{array}{cccc}\n \\widehat{\\mathbf{G}}_{j,0}\\widehat{\\tilde{\\mathbf{F}}}_0 & \\widehat{\\mathbf{G}}_{j,1}\\widehat{\\tilde{\\mathbf{F}}}_1 & \\cdots & \\widehat{\\mathbf{G}}_{j,K-1}\\widehat{\\tilde{\\mathbf{F}}}_{K-1}\n\\end{array}\\right]}_{M \\times KN'} \\underbrace{\\left[\n\\begin{array}{c}\n\\widehat{\\tilde{\\mathbf{H}}}_{0,j+1}\\widehat{\\mathbf{U}}^{(L)}_{j+1,j+2} \\\\ \\vdots \\\\ \\widehat{\\tilde{\\mathbf{H}}}_{K-1,j+1}\\widehat{\\mathbf{U}}^{(L)}_{j+1,j+2} \\end{array}\\!\\!\\right]}_{KN'\\times d'} = \\mathbf{0}.\n\\end{align}\n\\end{small}}\n\\!\\!\\!With $\\tilde{\\mathbf{F}}_k$ in \\eqref{F_tilde}, $\\widehat{\\mathbf{G}}_{j,k}\\widehat{\\tilde{\\mathbf{F}}}_{k}$ is simply the first $N'$ columns of $\\widehat{\\mathbf{G}}_{j,k}$. For $\\frac{M}{N} \\in \\Big[1, \\frac{9K+\\sqrt{81K^2+60K}}{30}\\Big)$ and $d<\\frac{3M^2+2MN}{9M+N}$, we have \n{\\setlength{\\abovedisplayskip}{3pt}\n\\setlength{\\belowdisplayskip}{3pt}\n\\begin{equation}\nKN'-M = M-d > d \\geq d',\n\\end{equation}}\n\\!\\!\\!implying that the null space of the $M \\times KN'$ matrix in \\eqref{specificH'} has at least $d'$ dimensions. Thus full-rank $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,j}\\widehat{\\mathbf{U}}^{(L)}_{j,j+1}\\}$ exist with probability one. 
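The existence argument for \eqref{specificH'} can be mirrored numerically: when $KN'-M \geq d'$, a random $M \times KN'$ matrix has a null space of dimension at least $d'$, from which full-column-rank stacked blocks can be drawn. A minimal numpy sketch with hypothetical sizes (`Np` plays the role of $N'$, `dp` the role of $d'$):

```python
import numpy as np

# Toy instance of the null-space construction; sizes satisfy K*Np - M >= dp.
rng = np.random.default_rng(1)
M, K, Np, dp = 4, 3, 2, 1

A = rng.standard_normal((M, K * Np))     # the M x KN' matrix in the condition

# A has full row rank M with probability one, so rows M..KN'-1 of Vh
# form an orthonormal basis of its null space.
Vh = np.linalg.svd(A)[2]
X = Vh[M:].T[:, :dp]                     # KN' x d' stacked block, A X = 0

assert np.allclose(A @ X, 0, atol=1e-10) # the zero-forcing condition holds
assert np.linalg.matrix_rank(X) == dp    # full column rank
```

Splitting `X` into $K$ vertical chunks of height `Np` yields one candidate block per relay in this miniature setting.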
Based on the chosen $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,j}\\widehat{\\mathbf{U}}^{(L)}_{j,j+1}\\}$ and $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,j}\\widehat{\\mathbf{U}}^{(R)}_{j,j+1}\\}$, we determine the values of $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,j}\\}$ and $\\{\\widehat{\\mathbf{U}}_{j,j+1}\\}$ (not necessarily unique). With $\\eqref{alignment}$, $\\{\\widehat{\\mathbf{U}}_{j+1,j}\\}$ are also determined. Finally, $\\widehat{\\mathbf{T}}$ is determined by $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,j}, \\widehat{\\mathbf{G}}_{j,k}\\}$ and it can be verified that $\\widehat{\\mathbf{T}} \\in \\tilde{\\mathcal{T}}_{M,N,d}$ with probability one.\n\nWe now show that the above constructed $\\widehat{\\mathbf{T}}$, $\\{\\widehat{\\tilde{\\mathbf{F}}}_{k}\\}$, and $\\{\\widehat{\\mathbf{U}}_{j,j'}\\}$ satisfy \\eqref{alignment} and \\eqref{pneutralize'} with probability one. By construction, \\eqref{alignment} is automatically met. Further, from \\eqref{specificH'}, we see that \\eqref{pneutralize1'} holds with probability one. To check \\eqref{pneutralize4'}, it suffices to consider the case $j=0$ by symmetry. Note that $\\{\\widehat{\\tilde{\\mathbf{W}}}_{k,0}\\}$ are determined by $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,2}\\widehat{\\mathbf{U}}^{(L)}_{2,0}\\}$ and $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,0}\\widehat{\\mathbf{U}}^{(L)}_{0,1}\\}$, which only depend on $\\{\\widehat{\\mathbf{G}}_{1,k}\\}$ and $\\{\\widehat{\\mathbf{G}}_{2,k}\\}$. That is, $\\{\\widehat{\\tilde{\\mathbf{W}}}_{k,0}\\}$ are not functions of $\\{\\widehat{\\mathbf{G}}_{0,k}\\}$. Moreover, $\\{\\widehat{\\mathbf{G}}_{0,k}\\}$, $\\{\\widehat{\\tilde{\\mathbf{F}}}_k\\}$, and $\\{\\widehat{\\tilde{\\mathbf{H}}}_{k,1}\\widehat{\\mathbf{U}}^{(R)}_{1,2}\\}$ are independent of each other by construction. 
Therefore, \n{\\setlength{\\abovedisplayskip}{4pt}\n\\setlength{\\belowdisplayskip}{4pt}\n\\begin{small}\n\\begin{equation}\n\\mathrm{rank}\\left(\\widehat{\\mathbf{G}}_{0,k}\\widehat{\\tilde{\\mathbf{F}}}_k\\left[\\widehat{\\tilde{\\mathbf{W}}}_{k,0},\\widehat{\\tilde{\\mathbf{H}}}_{k,1}\\widehat{\\mathbf{U}}^{(R)}_{1,2}\\right]\\right) = \\min\\{M,N'\\} = N'.\n\\end{equation}\n\\end{small}}\n\n\\begin{equation}\n\\text{Intention} =\n\\begin{dcases}\n\\displaystyle \\text{Current},&\\text{votes} > k \\\\\n\\displaystyle \\text{Past},&\\text{otherwise}\n\\end{dcases}\n \\label{eq:mt1}\n\\end{equation}\nThe threshold \\textit{k} in equation \\ref{eq:mt1} defines the vote cut-off. In this case, \\textit{k} = 0.5, as we applied a simple majority vote in deciding the collective vote of the Mechanical Turk workers\n(i.e., whichever classification received three out of five votes), and similarly within the 12\nWS-DL members. Treating each group as a single entity, the aggregated votes from the two groups were used to calculate the inter-rater \nagreement, resulting in Cohen's $\\kappa$ = 0.04, indicating only slight agreement. This slight agreement was not sufficient to proceed with our study.\nExamining the selection from the SNAP data set, we decided that \ntoo many of the tweets had vague contexts and were hard to classify.\n\nGiven the unclear contexts that were present in the first sample set,\nwe then tried a richer set from which to sample. We used the tweets\nfrom the six historical events described in \\cite{TPDL2012:Losing}. For 100 tweets, we built a web page with an\nimage snapshot of the current version of the page, and a version of\nthe page closest to $t_{tweet}$ that could be found in a public web\narchive. We held a face-to-face meeting with our WS-DL research group to\ndetermine the ground truth: for each tweet we went around the table and\nargued for whichever version we thought matched the author's temporal\nintent. 
We knew this data set would be biased toward $t_{tweet}$\nbecause most of the tweets described historic, cultural events from\n2009-2011. After deliberation, we arrived at: 82\\% past, 9\\% current,\nand 9\\% undecided as our gold standard for this data set. When we\nsubmitted the jobs to Mechanical Turk, we defined levels of three,\nfive, seven, and nine evaluations for each tweet. In the case where we had \nnine evaluations for each tweet, the Mechanical Turk workers would match\nour gold standard 58\\% of the time if we allowed 5-4 splits. If we were \nmore discerning and counted agreement only in cases where workers agreed\n6-3 or better, then the agreement with Mechanical Turk workers fell to 31\\%\n(and similarly for rating levels three, five, and seven). \n\nIn short, if we required clear agreement on the part of Mechanical Turk\nworkers, then we did much worse than simply flipping a coin -- in a data\nset with a clear bias toward $t_{tweet}$ because of the focus on past events.\nIt was at this point that we decided our approach of guessing the author's \ntemporal intent was simply too complicated for Mechanical Turk workers. 
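The two computations behind the figures above are aggregating each item's votes by a majority threshold (the cut-off $k$ in equation \ref{eq:mt1}) and then measuring inter-rater agreement between the aggregated label sequences of the two groups with Cohen's $\kappa$. A minimal self-contained sketch; the vote data below is illustrative, not from our experiments.

```python
from collections import Counter

def majority_vote(votes, k=0.5):
    """Collapse one item's votes to a label: 'Current' iff its share > k."""
    return "Current" if votes.count("Current") / len(votes) > k else "Past"

def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(ca) | set(cb)) / n**2  # by chance
    return (p_o - p_e) / (1 - p_e)

# Illustrative data: three items, five worker votes each, aggregated by
# simple majority and compared with a hypothetical expert labeling.
workers = [majority_vote(v) for v in [
    ["Past", "Past", "Past", "Past", "Current"],
    ["Current", "Current", "Current", "Past", "Past"],
    ["Past", "Past", "Past", "Past", "Past"],
]]
experts = ["Past", "Current", "Past"]
assert workers == experts               # perfect agreement in this toy case
assert cohens_kappa(workers, experts) == 1.0
```

With real annotations, $\kappa$ near 0 (as in our first attempt) signals agreement no better than chance, regardless of the raw percentage of matching labels.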
\n\\begin{figure*}[ht!]\n \\begin{center}\n \\subfigure[Changed and Relevant]{%\n \\label{fig:first}\n \\includegraphics[width=0.5\\textwidth]{figure1.png}\n }%\n \\subfigure[Changed and no longer Relevant]{%\n \\label{fig:second}\n \\includegraphics[width=0.5\\textwidth]{figure2.png}\n }\\\\\n \\subfigure[Not changed and Relevant]{%\n \\label{fig:third}\n \\includegraphics[width=0.5\\textwidth]{figure3.png}\n }%\n \\subfigure[Not Changed and not Relevant]{%\n \\label{fig:fourth}\n \\includegraphics[width=0.5\\textwidth]{figure4.png}\n }%\n \\end{center}\n \\caption{Examples of the relevancy mapping of TIRM.}%\n \\label{fig:subfigures}\n\\end{figure*}\n\n\\subsection{Temporal Intention Relevancy Model}\nTo reach our goal of modeling users' temporal intentions, we need to collect \na large dataset, which is not, as discussed in the previous section, a trivial \ntask. The difficulty in acquiring the data resides in generating\nthe ground truth or gold standard for the temporal intention of the\nuser who authored the original social media post. Initially, our intention was to\ngenerate a small set of gold standard data (e.g., links classified as\nrepresenting the user's intention to be either ``the resource at\n$t_{tweet}$'' or ``the resource at $t_{click}$''). We eventually decided that the notion of ``temporal\nintention'' was too nuanced to be adequately conveyed in the\ninstructions for the workers of Mechanical Turk. Learning from our\nprevious unsuccessful attempts, we chose to cast the problem\nof ``temporal intention'' as one of relevancy between the tweet and the\nresource as it exists now. \n\nTable \\ref{tab:model} presents the Temporal Intention Relevancy Model (TIRM) that we will use to inform our\ninteraction with the workers at Mechanical Turk. To resonate with one of the common types of experiments on \nMechanical Turk, we designed our new experiment as a relevance-categorization problem with which the workers are familiar. 
In each Human Intelligence Task or HIT, the worker is presented with the full tweet, its publishing date,\nand, in an embedded window, a snapshot of the page that the tweet links to in its current state. Instead of asking workers about the\ntemporal intention of the original author, and possibly confusing it\nwith their own temporal intention as readers, we asked a simpler question: ``is this page still relevant to this tweet?''. There is\nconsiderable precedent in the Mechanical Turk community for making\nrelevance judgments, as categorization problems are commonly available as HITs. \n\nTo explain this mapping from intention space to relevancy space, let us assume we have a \nresource \\textit{R} which has been tweeted by some author at time $t_{tweet}$. The state of the \nresource at $t_{tweet}$ is $R_{tweet}$. Later, another user clicks on the resource to read it at time $t_{click}$. The state of the \nresource at $t_{click}$ is $R_{click}$. The rationale for the model is:\n\\begin{description}\n \\item[Changed \\& Relevant:] If the resource has changed (i.e.,\\\\ $R_{tweet}$ is not similar to $R_{click}$) and it is still relevant to the tweet, then there is a strong indication that the\ntemporal intention of the author must have been the resource as it exists at $t_{click}$ ($R_{click}$). Figure \\ref{fig:first} shows an author tweeting about the \nlatest updates for a newsletter. The linked resource in the tweet continually changes while the tweet is always relevant to it. This indicates that the author's temporal \nintention is a \\textit{current} one.\n\n \\item[Changed \\& Non-Relevant:] If the resource has changed and it is not relevant to the tweet, we assume initial relevance and thus the original author \nmust have meant to share the resource in the state as it existed at $t_{tweet}$, which is $R_{tweet}$, not $R_{click}$. 
Figure \\ref{fig:second} shows an author \ntweeting about specific breaking news on CNN.com's first page, which by definition changes frequently. This indicates that the author's \ntemporal intention was the \\textit{past} version. \n\n \\item[Not Changed \\& Relevant:] If the resource has not changed and it is still relevant to the tweet, \nthen we claim that the intention of the author was to share the resource as it existed\nat $t_{tweet}$ ($R_{tweet}$), but it is just a fortunate coincidence that the resource \nhas not changed and is thus still relevant. Figure \\ref{fig:third} shows an author tweeting about an article which still exists. Of course, there is a possibility that the resource \ncould change in the future and become non-relevant. This indicates that the author's intention was a \\textit{past} one.\n\n \\item[Not Changed \\& Non-Relevant:] If the resource has not changed and it is not relevant to the tweet, then we cannot be sure of \nthe intention, and either $t_{click}$ or $t_{tweet}$ will suffice. This scenario\ncan occur in spam, mistaken link sharing, or, more likely, cases where relevancy relies\non out-of-band communication between the original author and the intended readers\\footnote{The Internet meme of ``Rickrolling'' (http:\/\/en.wikipedia.org\/wiki\/Rickrolling) is a humorous example of purposeful non-relevancy between the context of the link and its target, the 1987 pop song by Rick Astley; the point is to ``trick''\nusers into expecting one thing while the link delivers the song.}.\n\\end{description}\n\n\n\\subsection{Gold Standard Dataset}\\label{gold}\n\nAfter laying the basis of the intention-relevance mapping in TIRM, we must collect a large body of data to be utilized in the modeling and \nanalysis phases. Since we are modeling human intention and mapping it to relevance judging, we will utilize Amazon's Mechanical Turk in collecting the training data. 
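The four-way rationale of TIRM reduces to a simple mapping from a (changed, relevant) judgment pair to an inferred temporal intention; the function below is a direct transcription of that rationale, with a hypothetical label ``either'' for the undecidable scenario.

```python
def tirm_intention(changed, relevant):
    """TIRM mapping from a (changed, relevant) judgment to temporal intention.

    Direct transcription of the four cases above; 'either' marks the
    undecidable (not changed, not relevant) scenario.
    """
    if changed and relevant:
        return "current"   # author meant the page as it exists at t_click
    if changed:
        return "past"      # assume initial relevance: the page at t_tweet
    if relevant:
        return "past"      # unchanged by coincidence; still a past intention
    return "either"        # spam or out-of-band context; cannot decide

assert tirm_intention(changed=True, relevant=True) == "current"
assert tirm_intention(changed=True, relevant=False) == "past"
```

The key design point is that only the ``changed and still relevant'' case gives evidence of a \textit{current} intention; everything else defaults to \textit{past} or is undecidable.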
\nHowever, prior to collecting the training dataset, we need to be confident that our data collection experiment represents \nreal-life educated judgment. To achieve this goal, we created a gold standard dataset by obtaining a small dataset and assigning it to members of our research group, in whose ability to perform the task accurately we have confidence, and then assigning the same dataset to workers in Mechanical Turk. We collect both sets of \nassignments and compare their similarity to ensure the ability of the workers to mimic the judgment of the experts. Mechanical Turk HITs are considerably cheaper, \neasier to manage, and faster to conclude than the expert assignments.\n\nEngineering a relevance HIT for Mechanical Turk's workers was fairly straightforward. For the \ngold standard dataset we randomly picked 100 tweets from the SNAP dataset dating back to June 2009 \nand posted them to be classified as ``still relevant'' or ``no longer relevant''. As mentioned earlier, for each HIT we posted the tweet, the date, \nand a snapshot of the resource at $t_{click}$ ($R_{click}$). The experiment requested five unique \nraters with high qualifications (more than 1000 accepted HITs and more than 95\\% acceptance rate). Each HIT cost two cents and had a maximum time span of 20 minutes. The experiment was completed \nwithin a few hours of posting, and the average completion time per HIT was 61 seconds. We examined the data from the workers and dismissed all the \nHITs that took less than 10 seconds, indicating a hasty decision. We also filtered out workers who exhibited low-quality, repetitive assignments and banned them. \nFor the same 100 tweets, we invited our research group to perform the same relevance experiment, and collected their assignments along with the ones from the workers. 
\nThe results, shown in Table \\ref{tab:turk}, indicate almost perfect agreement, with Cohen's $\\kappa$ = 0.854.\n\nGiven this strong agreement between the gold standard and the workers, we can claim that Mechanical Turk can be used in estimating \nthe content's time relevance and, in turn, to gauge the author's temporal intention after utilizing TIRM. The next step is to collect a larger dataset, for training and testing, to utilize in the modeling process.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|l|l|}\n\\hline\nAgreement in three or more votes & 93\\% \\\\ \\hline\nAgreement in four or more votes & 80\\% \\\\ \\hline\nAgreement with all five votes & 60\\% \\\\\n\\hline\n\\end{tabular}\n\\vskip 5mm\n\\caption{\\label{tab:turk}Agreement between the research group and Mechanical Turk workers for 100 tweets.}\n\\end{center}\n\\end{table}\nFrom the SNAP dataset, we extracted a large number of tweets at random, starting from June 2009. For a social media post, in this case a tweet, we want to acquire as much data as possible about \nits existence, such as content, age, dissemination, and size. Initially, we targeted the tweets which pass the following filters:\n\n\\begin{itemize}[noitemsep,nolistsep]\n \\item Tweets in the English language.\n \\item Each has an embedded URI pointing to an external resource.\n \\item The embedded URI has been shortened using Bitly (bit.ly).\n \\item The embedded URIs point to unique resources.\n\\end{itemize}\n\nWe chose tweets with links because the scope of the study is detecting intention in sharing resources in social media. The shared resource also provides extended \ncontext for the tweet, making the social post more comprehensible. 
The reason behind choosing bitly-shortened URIs is that the bitly API provides invaluable information about \nclicklog patterns, creation dates, rates of dissemination, and more, as will be described in the next section. Bitly was also fairly \npopular on Twitter at the time of the dataset collection (2009). To ensure our ability to collect information related to the embedded resource, we applied an extra filter ensuring that the linked resource was \nstill available on the live web (HTTP response 200 OK) at the time of the analysis, and that it was properly archived in the public archives with at least 10 mementos. \nConsequently, we extracted 5,937 unique instances to be utilized in the next stages.\n\nTo create the dataset that will be processed by Mechanical Turk workers, we randomly selected 1,124 instances from the previous dataset. This training dataset will \nbe assigned to the workers in the same manner as the gold standard experiment.\nTo gain insight into what the author was experiencing and reading at the time of tweeting, we extracted the snapshot of the resource \nclosest to the time of the tweet using the Memento framework. Across URIs, the gap between the time of the tweet and the closest recorded memento ranged from 3.07 minutes to 56.04 hours, \naveraging 25.79 hours. Figure \\ref{fig:tweetdelta} shows the difference in hours between $t_{tweet}$ and the closest memento in the public \narchives, denoted by $R_{closestMemento}$. 
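Selecting $R_{closestMemento}$ amounts to a nearest-neighbor search over memento timestamps. A minimal sketch; in practice the (datetime, URI) pairs would be parsed from a Memento TimeMap, and the snapshots below are purely illustrative.

```python
from datetime import datetime

def closest_memento(mementos, t_tweet):
    """Return the (datetime, uri) pair whose archival time is nearest t_tweet.

    `mementos` is a list of (datetime, uri) pairs, e.g. parsed from a
    Memento TimeMap; the timestamps and URIs here are illustrative only.
    """
    return min(mementos, key=lambda m: abs(m[0] - t_tweet))

snapshots = [
    (datetime(2009, 6, 1, 4, 0), "https://archive.example/20090601/page"),
    (datetime(2009, 6, 2, 9, 30), "https://archive.example/20090602/page"),
]
t_tweet = datetime(2009, 6, 2, 8, 0)
r_closest = closest_memento(snapshots, t_tweet)   # the 2009-06-02 snapshot
```

Taking the absolute difference lets the chosen memento fall on either side of $t_{tweet}$, matching the negative deltas visible in Figure \ref{fig:tweetdelta}.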
For the sake of simplicity, we will consider the following approximation:\n\\begin{equation}\nR_{closestMemento} \\approx R_{tweet}\n \\label{eq:closest}\n\\end{equation}\n\\begin{figure} [ht]\n\\centering\n\\includegraphics[scale=0.6]{plot.png}\n\\caption{Sorted time delta between the tweeting time and the closest memento snapshot, where the negative Y axis denotes existence prior to $t_{tweet}$.}\n\\label{fig:tweetdelta}\n\\end{figure}\nThis shows that, on average, we can extract a snapshot of the state of the resource within a \nday of when the author saw it and tweeted about it. This time delta is, of course, relative to the nature of the resource. In the case of continuously \nchanging webpages such as CNN.com, one day will not capture everything. However, on average, web pages are not expected to change much within this time period. \n\nAlong with the downloaded closest memento snapshot \\\\$R_{closestMemento}$, we downloaded a snapshot of the current state of the resource $R_{current}$. 
Similarly, for the sake of simplicity, we consider another approximation:\n\begin{equation}\nR_{current} \approx R_{click}\n \label{eq:current}\n\end{equation}\nThe agreement between Mechanical Turk workers in assigning relevancy to our training dataset of 1,124 tweets is shown in table \ref{tab:train}.\n\begin{table}\n\begin{center}\n\begin{tabular}{|l|l|l|}\n\hline\n5 Turkers Agreeing (5-0 cuts) & 589 & 52.40\% \\ \hline\n4 Turkers Agreeing (4-1 cuts) & 309 & 27.49\% \\ \hline\n3 Turkers Agreeing (3-2 close call cuts) & 226 & 20.11\% \\ \hline \hline\nRelevant Assignments & 929 & 82.65\% \\ \hline\nNon-Relevant Assignments & 195 & 17.35\% \\ \n\hline\n\end{tabular}\n\vskip 5mm\n\caption{\label{tab:train}The distribution of voting outcomes from turkers for the 1,124 assignments.}\n\end{center}\n\end{table}\n\section{Intention Modeling}\nIn the previous section we collected the gold standard dataset using Mechanical Turk and tested its validity against expert opinions. Consequently, we were able to collect a larger dataset of tweets \nwhich have likewise been deemed Relevant or Non-Relevant by Mechanical Turk workers. The dataset collected and classified contains tweets which have embedded shortened URIs, or bitlys, linking to a shared \nweb resource. Each one of the resources is currently live and adequately covered in the public web archives at the time of this study (December 2012). \n\n\subsection{Feature Extraction}\label{feature}\nTo complement the training dataset collected from Mechanical Turk in the previous section, we explore the different angles of sharing resources in social media beyond the tweet itself.\n\n\subsubsection{Link Analysis}\nAs mentioned earlier, most of the tweets containing resources published in 2009 include a shortened URI. One of the reasons behind this use of \nshorteners is the space constraint of a tweet (140 characters). 
We extracted the tweets containing URIs shortened by bitly shortner due \nto their abundance in the SNAP dataset tweet collection. Out of the 476 million tweets in the dataset, 87 million contain bitly \nshortened URIs. The bitly API provide several parameters that could be extracted as well. The total number of clicks, hourly clicklogs, creation dates, referring websites, referring countries, and other information could also be acquired. \n\nThe location of the resource in the domain is important. Surface web pages, as the main page or index, are different in nature from the deep web ones. Relying on the general notion that pages in the deep web are less likely to \nchange as often as the root page, we need to calculate the estimated depth of the resource. Within each tweet, we expanded the resource's bitly to the original long URI and analyzed for hierarchy and depth in the web by counting the number of \nbackslashes in the URI which correlates with the depth fairly well. Also we compare the lengths of the shortened URl and the original one to calculate the reduction rate. Hand in hand \nwith these extracted data points, we proceed to examine the dissemination trends of that resource. \n\n\\subsubsection{Social Media Mining}\nFor each \nembedded resource in a tweet, we used Topsy.com's API\\footnote{http:\/\/code.google.com\/p\/otterapi\/} to extract the total number of tweets that have been recorded linking to this resource. \nWe extract the number of tweets from influential users in the Twitter-sphere as well. Finally, we downloaded the other tweets posted by different users linking to the same resource. The \nAPI permits us to extract a maximum of 500 tweets per resource. 
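The link-analysis features described above, namely the estimated depth from the slash count of the expanded URI and the shortening reduction rate, can be sketched as follows (a minimal illustration; the function names are ours):

```python
from urllib.parse import urlparse

def uri_depth(long_uri):
    """Estimate resource depth by counting non-empty path segments of the
    expanded URI; the root (index) page has depth 0."""
    path = urlparse(long_uri).path
    segments = [s for s in path.split("/") if s]
    return len(segments)

def reduction_rate(short_uri, long_uri):
    """Fraction of characters saved by shortening the original URI."""
    return 1.0 - len(short_uri) / len(long_uri)
```

For instance, `http://example.com/a/b/page.html` has an estimated depth of 3, while the domain root has depth 0.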
This collection of tweets surrounding each resource benefits us \nin many aspects: it provides extended tweet-context for the resource, it shows us the social media dissemination pattern when the tweet timestamps are plotted against the \ntimeline, and it lets us examine how many of those tweets still exist and how many have been deleted.\n\nTo complete the picture, Facebook was mined as well for each of the resources in the tweets to extract the total number of shares, posts, likes, and clicks.\n\n\subsubsection{Archival Existence}\nTo investigate archival existence and coverage, we calculate how many total mementos, in the aggregated public archives, are available for the resource. We also record how many archives hold at least one copy of the resource. \nAs mentioned earlier, figure \ref{fig:tweetdelta} shows the distribution of the time delta between the closest archived memento and the tweet creation timestamp. Negative values on the Y-axis denote existence prior to $t_{tweet}$.\n\n\subsubsection{Sentiment Analysis}\nTo go beyond the tweet text, we utilized the NLTK libraries \cite{Loper:2002:NNL:1118108.1118117} for natural language text processing to extract the most prominent sentiment in the text. For each tweet we \nextracted the positive, negative, and neutral sentiment probabilities. These three probabilities give us insight into the emotional state of the author at $t_{tweet}$.\n\n\subsubsection{Content Similarity}\nFinally, to measure the difference between the different snapshots of the resource downloaded earlier, we implemented similarity analysis functions. We transformed each of the resource's $R_{tweet}$ and $R_{click}$ snapshots into textual vectors and then calculated the cosine similarity between them. Furthermore, the collected \ntweets from Topsy.com's API associated with each resource have been accumulated into one document, giving it a social context. 
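The cosine-similarity computation between two textual vectors can be sketched with raw term-frequency vectors; this is a minimal illustration and not necessarily the exact vectorization used in the study:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the term-frequency vectors of two documents."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values())) *
            math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

Identical texts score 1.0, disjoint texts score 0.0, and partially overlapping texts fall in between.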
This tweet document has been compared in similarity \nas well with the $R_{tweet}$ and $R_{click}$ snapshots of the resource, and the percentages were recorded. It is worth mentioning that to extract those similarities we downloaded the snapshots using the \nLynx browser\footnote{http:\/\/lynx.browser.org\/}. We used the \textit{source} option, which downloads the HTML. Subsequently, on the downloaded content, we used the \nboilerplate removal from HTML pages and full text extraction algorithms by Kohlschutter et al. \cite{Kohlschutter:2010:BDU:1718487.1718542}. Finally, we calculated the cosine similarity between each of the pairs of documents.\n\subsubsection{Entity Identification}\nAnalyzing hundreds of tweets from the Twitter timeline, we noticed some interesting points. Celebrities are mentioned in abundance and have the largest numbers of followers. In fan tweets, most celebrities are \nmentioned by their first and last name unless they are known by only one, and most tweets about celebrities are in reaction to, or as a \ndescription of, contemporaneous events related to the celebrity. In the fields of TV, cinema, performing arts, sports, and politics, millions \nof tweets are posted daily about celebrities, as a huge demographic of users uses Twitter as a form of news feed. Given this, we wanted to analyze the \neffect of detecting celebrity-related tweets on intention and the possibility of using it as a feature. Wikipedia has published several lists of US, \nBritish, and Canadian actors and singers, as well as several lists of sports players and politicians in the English-speaking world. We harvested \nthose lists, parsed them, and indexed them. 
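Matching the harvested name index against a resource's tweets can be sketched as a simplified case-insensitive substring match; the actual indexing used in the study may differ:

```python
def build_name_index(celebrity_names):
    """Index harvested celebrity names (lower-cased) for fast lookup."""
    return {name.lower() for name in celebrity_names}

def has_celebrity(tweets, name_index):
    """True if any tweet in the resource's tweet flock mentions an indexed name."""
    for tweet in tweets:
        text = tweet.lower()
        if any(name in text for name in name_index):
            return True
    return False
```

A flock containing "RIP Michael Jackson" would set the celebrity-relevance feature to true against an index built from the harvested actor lists.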
Finally, given an embedded resource and upon retrieving its tweet flock \nfrom Topsy.com's API, we test for the existence of celebrity entities in the collective tweets and, if found, record the celebrity-relevance feature as true.\n\n\subsection{Modeling and Classification}\nIn the feature extraction phase we gathered several data points denoting context, dissemination, nature, archiving coverage, change, sentiment, and others. In this phase, we investigate which \nfeatures have higher weights, indicating importance in modeling and classifying temporal intention. We also investigate several well-known classifiers and their corresponding success rates.\n\nIn our first attempts to train the classifier and analyze the confusion matrix, we noticed that the instances which were classified by Mechanical Turk workers as close calls (3-2 splits) \nheavily populated the false positive\/negative cells of the confusion matrix. These instances indicate a weak classification, where one vote can deem the instance relevant or non-relevant. Thus, to \nreduce the confusion, we eliminated the training instances where this worker uncertainty resides. From the 1,124 instances, we kept the 898 where the agreement \non relevancy was 4 to 1 or unanimous (5-0), as shown in table \ref{tab:noclosecalls}. Thus, the cutoff threshold in equation \ref{eq:mt1} is increased to $k \geq 0.8$.\n\begin{table}[ht]\n\begin{center}\n\begin{tabular}{|l|l|l|}\n\hline\nRelevant Assignments & 807 & 89.87\% \\ \hline\nNon-Relevant Assignments & 91 & 10.13\% \\ \n\hline\n\end{tabular}\n\vskip 5mm\n\caption{\label{tab:noclosecalls}The distribution of voting outcomes from turkers after removing close calls.}\n\end{center}\n\end{table}\n\nUtilizing all of the extracted features, we ran Weka's\footnote{http:\/\/www.cs.waikato.ac.nz\/ml\/weka\/} different classifiers against the dataset. Subsequently, we trained the model and tested it using 10-fold cross validation. 
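The 10-fold cross-validation protocol can be sketched generically as follows; a trivial majority-class baseline stands in for Weka's cost-sensitive Random Forest, which we do not reimplement here:

```python
import random

class MajorityClassifier:
    """Trivial baseline: always predicts the most frequent training label."""
    def fit(self, X, y):
        self.label = max(set(y), key=y.count)
    def predict(self, x):
        return self.label

def k_fold_indices(n, k=10, seed=0):
    """Shuffle 0..n-1 and deal the indices into k interleaved folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, make_classifier, k=10):
    """Mean accuracy over k train/test splits (the 10-fold protocol)."""
    folds = k_fold_indices(len(X), k)
    accuracies = []
    for fold in folds:
        held_out = set(fold)
        train = [j for j in range(len(X)) if j not in held_out]
        clf = make_classifier()
        clf.fit([X[j] for j in train], [y[j] for j in train])
        correct = sum(clf.predict(X[j]) == y[j] for j in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / k
```

Any classifier exposing `fit`/`predict` can be dropped into this harness; a cost-sensitive variant would additionally weight the rarer Non-Relevant class when fitting.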
\nTables \ref{tab:percentages} and \ref{tab:accuracy} show \nthe corresponding precisions, recalls, and F-measures of the Cost Sensitive classifier based on Random Forest, which outperformed the other classifiers, yielding a 90.32\% success rate in \nclassification for our trained model.\n\begin{center}\n\begin{table*}[ht]\n\centering\n\resizebox{\textwidth}{!}{\n\begin{tabular}{ |l||l|l|l|l|l| }\n\hline\n\multicolumn{6}{ |c| }{10-Fold Cross-Validation Testing} \\\n\hline\n& \textbf{Mean} & \textbf{Root Mean} & \textbf{Kappa} & \textbf{Incorrectly} & \textbf{Correctly} \\\n\textbf{Classifier} & \textbf{Absolute Error} & \textbf{Squared Error} & \textbf{Statistic} & \textbf{Classified \%} & \textbf{Classified \%} \\\hline\hline\nCost Sensitive classifier & 0.15 & 0.27 & 0.39 & 9.68\% & \textbf{90.32\%} \\\nbased on Random Forest &&&&& \\ \hline\n\end{tabular}\n}\n\vskip 5mm\n\caption{Results of 10-fold cross-validation against the best classifier along with the Precision, Recall and F-measure per class}\n\label{tab:percentages}\n\end{table*}\n\end{center}\n\begin{center}\n\begin{table*}\n\centering\n\begin{tabular}{ |l||l|l|l|l| }\n\hline\n\textbf{Classifier} & \textbf{Precision }& \textbf{Recall }& \textbf{F-measure }& \textbf{Class}\\ \hline\hline\nCost Sensitive classifier& 0.93 & 0.96 & 0.95 & Relevant \\\nbased on Random Forest & 0.53 & 0.37 & 0.44 & Non-Relevant \\ \hline\nWeighted Average & 0.89 & 0.90 & 0.90 & \\ \hline\n\end{tabular}\n\vskip 5mm\n\caption{Precision, Recall and F-measure per class}\n\label{tab:accuracy}\n\end{table*}\n\end{center}\nThe classifier processed 39 different features for each instance in the training dataset. The features were collected in the feature extraction phase explained earlier in section \ref{feature}. Following \nthe training phase, we needed to understand the effect of each feature on the process of modeling intention. 
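A minimal sketch of such a per-feature analysis, ranking discrete features by information gain, is shown below; this is our illustration and not necessarily the exact attribute evaluator used:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy achieved by splitting on a discrete feature."""
    n = len(labels)
    groups = {}
    for v, y in zip(feature_values, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def rank_features(features, labels):
    """Return (name, gain) pairs sorted by decreasing information gain."""
    scored = ((name, information_gain(vals, labels))
              for name, vals in features.items())
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

A feature that perfectly separates the classes earns the full label entropy as its gain, while a constant feature earns zero.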
This knowledge will help us reduce the number of features required by the model to estimate the intention behind a \ngiven social post. We applied a supervised attribute evaluator based on the Ranker search method to rank the attributes, or features, accordingly. Analyzing the ranks, table \ref{hanyhany} \nshows the six strongest features, in order of significance, used in classifying user temporal intention, along with each feature's information gain.\n\nIt is also worth mentioning that using the boilerplate removal algorithm along with cosine similarity yielded more significant features than HTML similarity with SimHash \cite{Charikar:2002:SET:509907.509965}.\n\subsection{Evaluation}\nThe previous section indicates that modeling user intention via TIRM, using numerical, textual, and semantic features in a classifier, \nis both feasible and accurate. In this section, we test the trained model against other tweet datasets.\n\n\subsubsection{Extended Dataset}\label{testing}\nIn section \ref{gold} we extracted a dataset of 5,937 instances, from which we extracted our 1,124-instance \ntraining dataset. The remaining 4,813 instances formed a new testing dataset. For each instance in this dataset we extracted all the features analyzed in section \ref{feature}. 
\nFinally, this dataset was evaluated by the trained model to test its performance and usability, yielding the results in table \ref{tab:results}.\n\begin{center}\n\begin{table}\n\centering\n \begin{tabular}{|l|l|l|}\n \hline\n Rank & Feature & Gain Ratio \\ \hline\n 1 & Existence of celebrities in tweets & 0.149 \\ \hline\n 2 & Number of mementos & 0.090 \\ \hline \n 3 & Tweet similarity with current page & 0.071 \\ \hline\n 4 & Similarity: Current \& Past page & 0.0527 \\ \hline\n 5 & Similarity: Tweet and Past page & 0.04401 \\ \hline\n 6 & Original URI's depth & 0.0324 \\ \hline\n \end{tabular}\n\vskip 5mm\n \caption{Classifier features ordered by significance, resulting from the Ranker search algorithm}\n \label{hanyhany}\n\end{table}\n\end{center}\n\subsubsection{Historical Integrity of Tweet Collections}\nAs described in section \ref{definition}, one of the main motives of our analysis of human intention is to maintain the historical integrity of collections of social posts. \nSpecifically, in social posts related to historic events, preserving the consistency between the tweet and the linked resource is crucial. The link between the post and the resource \nis vulnerable to two kinds of threats: the loss of the content itself (either the post or the linked resource) or the mismatch between the author's intention and what the reader is \nreceiving (the resource is no longer as intended by the author). In our prior work, we analyzed six datasets related to six different historic events and we evaluated how many \nof these resources are missing and how many are archived \cite{TPDL2012:Losing}. In this section, we utilize our trained model to predict the temporal intention and, in turn, to estimate the amount of mismatched \nresources, where the reader is probably not reading the first draft of history intended by the tweet's author. \n\nDue to the nature of the collections, we limit our analysis to the resources in the form of tweets. 
In this case, we use the tweet datasets from the 2009-2012 events related to: Michael Jackson's Death, \nthe H1N1 virus outbreak, the Iranian Elections, President Obama's Nobel peace prize, and the Syrian uprising. As with the extended testing dataset in section \ref{testing}, we extract all the necessary \nfeatures for each instance in each dataset. We test our model with the five datasets and \nreport the results in table \ref{tab:results} as well. For each dataset we test the response headers once more to assess the percentages alive and missing, which we present in the same table. It is worth \nmentioning that when we started the experiments in September of 2012, the instances of the 3124 extended dataset were selected so as to return a 200 OK response, but when we re-tested their existence 4 months later we noticed a loss of 3.23\%, confirming \nthe results from our previous work.\n\begin{center}\n\begin{table*}[t]\n\centering\n\resizebox{\textwidth}{!}{\n\begin{tabular}{ |l||r|r||r|r| }\n\hline\n\textbf{Dataset} & \textbf{Status 200}& \textbf{Status 404 or Other} & \textbf{Relevant} \% & \textbf{Non-Relevant} \% \\ \hline\nExtended 4,813 instances & 96.77\% & 3.23\% & 96.74\% & 3.26\% \\ \hline \hline\nMJ's Death & 57.54\% & 42.46\% & 93.24\% & 6.76\% \\ \hline\nH1N1 Outbreak & 8.96\% & 91.04\% & 97.48\% & 2.52\% \\ \hline\nIran Elections & 68.21\% & 31.79\% & 94.69\% & 5.31\% \\ \hline\nObama's Nobel & 62.86\% & 37.14\% & 93.89\% & 6.11\% \\ \hline\nSyrian Uprising & 80.80\% & 19.20\% & 70.26\% & 29.75\% \\ \hline\n\end{tabular}\n}\n\vskip 5mm\n\caption{Results of testing the extended dataset \& the historic datasets in classifying relevancy, along with the percentages of live and missing resources.}\n\label{tab:results}\n\end{table*}\n\end{center}\n\subsubsection{Evaluating TIRM}\nAfter examining the relevancy of the datasets using our developed relevancy classifier, we now use our 
TIRM mapping scheme to transform the results into the intention space. The classifier was trained to be \nconservative in handling the Non-Relevant categorization. That is, in classifying non-relevancy, false negatives are tolerated more than false positives (i.e., the classifier states that a resource is non-relevant \nonly if it is highly confident of this estimation). Another point worth mentioning is that for our training we used the resources that are currently available on the live web; 404 resources were not included. Table \ref{tirmresults} shows the \npercentages in each of the six datasets for each class of the TIRM model after mapping relevancy to the similarity threshold of 70\%. Taking the dataset of Michael Jackson's death as an example, even though the resources are still accessible, nearly 3\% of the \ndataset no longer reflects the author's intention. It is worth noting that the results in the first quadrant of table \ref{tirmresults} are over-reported. Due to the sparsity of the archives, this over-reporting is essential to avoid false negatives. \nAs shown in figure \ref{fig:tweetdelta}, the average time delta between sharing and the closest archived version is fairly large (26 hours); in some cases the resource keeps changing for a couple of hours and then remains static. 
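Mapping a classified resource onto a TIRM quadrant using the 70\% similarity threshold can be sketched as follows; the reading that a past-to-current similarity below the threshold means "changed" is our assumption for this illustration:

```python
def tirm_quadrant(relevant, similarity_past_current, threshold=0.70):
    """Map a (relevancy, change) pair onto a TIRM quadrant.

    A resource counts as 'changed' when the similarity between its past
    and current snapshots falls below the threshold (our assumed reading).
    """
    changed = similarity_past_current < threshold
    return ("Changed" if changed else "Not Changed",
            "Relevant" if relevant else "Not Relevant")
```

A still-relevant resource whose snapshots are 90\% similar lands in the "Not Changed / Relevant" cell, while a non-relevant one at 30\% similarity lands in "Changed / Not Relevant".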
Tightening \nthe bounds in the same figure through more frequent archiving will lead to a large improvement in our model.\n\n\begin{table}\n\begin{center}\n\begin{tabular}{rc|c}\n& Relevant & Not Relevant \\\n\cline{2-3}\n\cline{2-3}\n& MJ:41\% & MJ:3\% \\ \n& Obama:42\% & Obama:2\% \\ \nChanged & Syria:44\% & Syria:25\% \\ \n& Iran:49\% & Iran:2\% \\ \n& H1N1:6\% & H1N1:0\% \\ \n& Extended: 53\% & Extended:2\% \\ \n\cline{2-3}\n& MJ:52\% & MJ:4\% \\ \n& Obama:51\% & Obama:5\% \\ \nNot Changed & Syria:26\% & Syria:5\% \\ \n& Iran:46\% & Iran:3\% \\ \n& H1N1:91\% & H1N1:3\% \\ \n& Extended: 43\% & Extended:2\% \\ \n\cline{2-3}\n\end{tabular}\n\vskip 5mm\n\caption{\label{tirmresults}TIRM Results}\n\end{center}\n\end{table}\n\section{Conclusions}\nIn this work we investigated the problem of temporal inconsistency in social media and how it relates to the author's intention. This intention proved to be non-trivial to capture and gauge. \nOur Temporal Intention Relevancy Model successfully translated the problem of user intention into the less complicated problem of relevancy. We used Mechanical Turk to collect a gold \nstandard dataset of user temporal intention, and we verified the results by comparing the Turkers' assignments to ones conducted by experts in the field, producing near-perfect agreement. After establishing the \nvalidity of using Mechanical Turk for data gathering, we proceeded to collect a dataset that was used to train the classifier. We extracted several numerical, textual, and semantic features and incorporated \nthem into the training dataset. The trained model was then evaluated \nagainst an extended, larger dataset and the datasets from our previous work regarding social posts from five different historical events in the period from 2009-2012. 
For the shared resources, we found temporal inconsistency to range from $<$1\% to 25\% depending on the dataset.\n\nFor our future work, we will expand the model further by generalizing the resources and tweets utilized in the training process beyond the currently available and well-archived resources. Also, we will increase the size of the training \ndataset and investigate the effect of each of the features and the gain resulting from combining different subsets of them.\n\n\section{Acknowledgment}\nThis work was supported in part by the Library of Congress and NSF IIS-1009392.\n\n\bibliographystyle{abbrv}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nWhite dwarfs are the most common stellar evolutionary end-point\n\citep{review}. Actually, all stars with masses smaller than $\sim\n10\, M_{\sun}$ will end their lives as white dwarfs \citep{GB97,\nPoelarends2008}. Hence, given the shape of the initial mass function,\nthe local population of white dwarfs carries crucial information about\nthe physical processes governing the evolution of the vast majority of\nstars, and in particular about the total amount of mass lost by low- and\nintermediate-mass stars during the red giant and asymptotic giant\nbranch evolutionary phases. Also, the population of white dwarfs\ncarries fundamental information about the history, structure and\nproperties of the Solar neighborhood, and specifically about its star\nformation history and age. Clearly, obtaining all this information\nfrom the observed ensemble properties of the white dwarf population is\nan important endeavour.\n\nHowever, to obtain useful information from the ensemble\ncharacteristics of the white dwarf population, three conditions must be\nmet. Firstly, extensive and accurate observational data sets are\nneeded. In particular, individual spectra of a sufficiently large\nnumber of white dwarfs are required. 
This has recently become possible\nwith the advent of large-scale automated surveys, which routinely\nobtain reliable spectra for sizable samples of white dwarfs. Examples\nof these surveys, although not the only ones, are the Sloan Digital\nSky Survey \citep{York} and the SuperCOSMOS Sky Survey\n\citep{Row2011}, which have provided us with extensive observational\ndata for a very large number of white dwarfs. Secondly, improved\nmodels of the atmospheres of white dwarfs that allow us to model their\nspectra -- thus granting us unambiguous determinations of their\natmospheric composition, and accurate measurements of their surface\ngravities and effective temperatures -- are also needed. In recent\nyears, several model atmosphere grids with increasing levels of\ndetail and sophistication have been released \citep{Bergeron92,\nKoester2001, Kowalski06, Tremblay11, Tremblay13}, thus providing us\nwith a consistent framework to analyze the observational results.\nFinally, it is also essential to have state-of-the-art white dwarf\nevolutionary sequences to determine their individual ages. In this\nregard, it is worth mentioning that we now understand relatively well\nthe physics controlling the evolution of white dwarfs. In particular,\nit has been known for several decades that the evolution of white\ndwarfs is determined by a simple gravothermal process. However,\nalthough the basic picture of white dwarf evolution has remained\nunchanged for some time, we now have very reliable and accurate\nevolutionary tracks, which take into account all the relevant physical\nprocesses involved in their cooling, and that allow us to determine\nprecise ages of individual white dwarfs \citep{Salaris10,\nRenedo10}. Furthermore, it is worth emphasizing that the individual\nages derived in this way are nowadays as accurate as main sequence\nages \citep{Salaris}. When all these conditions are met, useful\ninformation can be obtained from the observed data. 
Accordingly,\nlarge efforts have recently been invested in successfully modeling\nwith a high degree of realism the observed properties of several white\ndwarf populations, like the Galactic disk and halo -- see the very\nrecent works of \cite{Cojocaru1} and \cite{Cojocaru2} and references\ntherein -- and the system of Galactic open \citep{Garcia-Berro2010,\nBellini, Bedin2010} and globular clusters \citep{Hansen_2002,\nGar_etal_2014, Torres2015}.\n\nIn this paper we analyze the properties of the local sample of disk\nwhite dwarfs, namely the sample of stars with distances smaller than\n40~pc \citep{Limoges2013, Limoges2015}. The most salient features of\nthis sample of white dwarfs are discussed in Sect.~\ref{sec:obsdat}.\nWe then employ a Monte Carlo technique to model the observed\nproperties of the local sample of white dwarfs. Our Monte Carlo\nsimulator is described in some detail in Sect.~\ref{sec:popsyn}. The\nresults of our population synthesis studies are then described in\nSect.~\ref{sec:res}. In this section we discuss the effects of the\nselection criteria (Sect.~\ref{subsec:selcri}), we calculate the age\nof the Galactic disk (Sect.~\ref{subsec:fitage}), we derive the star\nformation history of the Solar neighborhood\n(Sect.~\ref{subsec:rburst}), and we determine the sensitivity of this\nage determination to the slope of the initial mass function and to the\nadopted initial-to-final mass relationship\n(Sect.~\ref{subsec:effects}). Finally, in Sect.~\ref{sec:concl} we\nsummarize our main results and we draw our conclusions.\n\n\section{The observational sample}\n\label{sec:obsdat}\n\nOver the last decades several surveys have provided us with different\nsamples of disk white dwarfs. Hot white dwarfs are preferentially\ndetected using ultraviolet color excesses. The Palomar Green Survey\n\citep{Green86} and the Kiso Schmidt Survey \citep{Kondo84} used this\ntechnique to study the population of hot disk white dwarfs. 
However,\nthese surveys failed to probe the characteristics of the population of\nfaint, hence cool and redder, white dwarfs. Cool disk white dwarfs are\nalso normally detected in proper motion surveys \citep{LDM88}, thus\nallowing us to probe the faint end of the luminosity function and to\ndetermine the position of its cut-off. Unfortunately, the small number of\nwhite dwarfs in the faintest luminosity bins represents a serious\nproblem. Other recent magnitude-limited surveys, like the SDSS, were\nable to detect many faint white dwarfs, thus allowing us to determine a\nwhite dwarf luminosity function which covers the entire magnitude range of\ninterest, namely $7\la M_{\rm bol}\la 16$\n\citep{Harris}. However, the sample of \cite{Harris} is severely\naffected by observational biases, completeness corrections, and\nselection procedures. \cite{Holberg08} showed that the best way to\novercome these observational drawbacks is to rely on volume-limited\nsamples. Accordingly, \cite{Holberg08} and \cite{Gianmichele12}\nstudied the white dwarf population within 20~pc of the Sun, and\nmeasured the properties of an unbiased sample of $\sim 130$ white\ndwarfs. The completeness of their samples is $\sim 90\%$. More\nrecently, \cite{Limoges2013} and \cite{Limoges2015} have derived the\nensemble properties of a sample of $\sim 500$ white dwarfs within\n40~pc of the Sun, using the results of the SUPERBLINK survey, a\nsurvey of stars with proper motions larger than 40~mas~yr$^{-1}$\n\citep{Lepine2005}. The estimated completeness of the white dwarf\nsample derived from this survey is $\sim 70\%$, thus allowing for a\nmeaningful statistical analysis. We will compare the results of our\ntheoretical simulations with the white dwarf luminosity function\nderived from this sample. However, a few cautionary remarks are\nnecessary. First, this luminosity function has been obtained from a\nspectroscopic survey that has not yet been completed. 
Second, the\nphotometry is not yet optimal. Finally, and most importantly,\ntrigonometric parallaxes are not available for most cool white dwarfs\nin the sample, preventing accurate determinations of atmospheric\nparameters and radii (hence, masses) of each individual white dwarf.\nFor these stars \cite{Limoges2015} were forced to assume a mass of\n$0.6\, M_{\sun}$. All in all, the luminosity function of\n\cite{Limoges2015} is still somewhat preliminary, but it is nevertheless\nthe only one based on a volume-limited sample extending out to\n40~pc. We explore the effects of these issues below, in\nSects.~\ref{subsec:selcri} and \ref{subsec:fitage}.\n\n\section{The population synthesis code}\n\label{sec:popsyn}\n\nA detailed description of the main ingredients employed in our Monte\nCarlo population synthesis code can be found in our previous works\n\citep{Gar1999,Tor2001, Tor2002,Gar2004}. Nevertheless, in the\ninterest of completeness, here we summarize its most important\nfeatures. \n\nThe simulations described below were done using the generator of\nrandom numbers of \cite{James_1990}. This algorithm provides uniform\nprobability densities within the interval $(0,1)$, ensuring a\nrepetition period of $\ga 10^{18}$, which for practical applications\nis virtually infinite. For Gaussian probability distributions we\nemployed the Box-Muller algorithm \citep{NRs}. For each of the\nsynthetic white dwarf populations described below, we generated $50$\nindependent Monte Carlo simulations employing different initial seeds.\nFurthermore, for each of these Monte Carlo realizations, we increased\nthe number of simulated Monte Carlo realizations to $10^4$ using\nbootstrap techniques -- see \cite{Cam2014} for details. In this way,\nconvergence of all the final values of the relevant quantities can be\nensured. 
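The Box-Muller transform mentioned above can be sketched as follows; the wrapper for drawing a deviate with a given mean and dispersion is our illustration:

```python
import math
import random

def box_muller(u1, u2):
    """Transform two independent uniform(0,1) deviates into two
    independent standard normal deviates."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def gaussian(mu, sigma, rng=random):
    """Draw one Gaussian deviate with mean mu and dispersion sigma."""
    z, _ = box_muller(rng.random(), rng.random())
    return mu + sigma * z
```

In a population synthesis context, `gaussian` would supply, for example, the velocity dispersion terms, while the underlying uniform generator can be swapped for any generator with a sufficiently long repetition period.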
In the next sections we present the ensemble average of the\ndifferent Monte Carlo realizations for each quantity of interest, as\nwell as the corresponding standard deviation. Finally, we mention that\nthe total number of synthetic stars of the restricted samples\ndescribed below and that of the observed sample are always similar. In this\nway we guarantee that the comparison of both sets of data is\nstatistically sound.\n\nTo produce a consistent white dwarf population we first generated a\nset of random positions of synthetic white dwarfs in a spherical\nregion centered on the Sun, adopting a radius of $50$~pc. We used a\ndouble exponential distribution for the local density of stars. For\nthis density distribution we adopted a constant Galactic scale height\nof 250~pc and a constant scale length of 3.5~kpc. For our initial\nmodel the time at which each synthetic star was born was generated\naccording to a constant star formation rate, adopting an age of the\nGalactic disk, $t_{\rm disk}$. The mass of each star was drawn\naccording to a Salpeter mass function \citep{Salpeter} with exponent\n$\alpha=2.35$, unless otherwise stated, which is totally equivalent\nfor the relevant range of masses to the standard initial mass function\nof \cite{Kroupa_2001}. Velocities were randomly chosen taking into\naccount the differential rotation of the Galaxy, the peculiar velocity\nof the Sun, and a dispersion law that depends on the scale height\n\citep{Gar1999}. The evolutionary ages of the progenitors were those\nof \cite{Renedo10}. Given the age of the Galaxy and the age,\nmetallicity, and mass of the progenitor star, we know which synthetic\nstars have had time to become white dwarfs, and for these, we derive\ntheir mass using the initial-final mass relationship of\n\cite{Cat2008}, unless otherwise stated. We also assign a spectral\ntype to each of the artificial stars. 
In particular, we adopt a\nfraction of 20\% of white dwarfs with hydrogen-deficient atmospheres,\nwhile the rest of the stars are assumed to be of the DA spectral type.\n\nThe set of cooling sequences employed here encompasses the\nmost recent evolutionary calculations for different white dwarf\nmasses. For white dwarf masses smaller than $1.1\, M_{\sun}$ we\nadopted the cooling tracks of white dwarfs with carbon-oxygen cores of\n\cite{Renedo10} for stars with hydrogen-dominated atmospheres and\nthose of \cite{DBs} for hydrogen-deficient envelopes. For white dwarf\nmasses larger than this value we used the evolutionary results for\noxygen-neon white dwarfs of \cite{Alt2007} and \cite{Alt2005}.\nFinally, we interpolated the luminosity, effective temperature, and\nthe value of $\log g$ of each synthetic star using the corresponding\nwhite dwarf evolutionary tracks. Additionally, we interpolated\ntheir $UBVRI$ colors, which we then converted to the $ugriz$ color\nsystem.\n\n\section{Results}\n\label{sec:res}\n\n\subsection{The effects of the selection criteria}\n\label{subsec:selcri}\n\n\begin{figure}[t]\n\begin{center}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig01a.eps}}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig01b.eps}}\n\end{center}\n \caption{Top panel: Effects of the reduced proper motion diagram\n cut on the synthetic population of white dwarfs. Bottom panel:\n Effects of the cut in $V$ magnitude on the simulated white dwarf\n population. In both panels the synthetic white dwarfs are shown\n as solid dots, whereas the red dashed lines represent the\n selection cut.}\n\label{f:sel_two}\n\end{figure}\n\nA key point in the comparison of a synthetic population of white\ndwarfs with the observed data is the implementation of the\nobservational selection criteria in the theoretical samples. 
To\naccount for the observational biases with a high degree of fidelity we\nimplemented in a strict way the selection criteria employed by\n\cite{Limoges2013,Limoges2015} in their analysis of the SUPERBLINK\ndatabase. Specifically, we only considered objects in the northern\nhemisphere ($\delta>0^{\circ}$) up to a distance of 40~pc, and with\nproper motions $\mu>40\,{\rm mas\,yr^{-1}}$. Then, we\nintroduced a cut in the reduced proper motion diagram $(H_g, g-z)$ as\n\cite{Limoges2013} did -- see their Fig.~1 -- eliminating from the\nsynthetic sample of white dwarfs those objects with\n$H_g>3.56(g-z)+15.2$, which lie outside the region where white\ndwarfs are expected to be found. Finally, we only took into\nconsideration those stars with magnitudes brighter than $V=19$.\n\n\begin{figure}[t]\n\begin{center}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig02a.eps}}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig02b.eps}}\n\end{center} \n \caption{Top panel: Synthetic white dwarf luminosity functions\n (black lines) compared to the observed luminosity function (red\n line). The solid line shows the luminosity function of the\n simulated white dwarf population when all the selection criteria\n have been considered, while the dashed line displays the\n luminosity function of the entire sample. Bottom panel:\n completeness of the white dwarf population for our reference model.}\n\label{f:triple}\n\end{figure}\n\nIn Fig.~\ref{f:sel_two} we show the effects of these last two cuts on\nthe entire population of white dwarfs for our fiducial model. In\nparticular, in the top panel of this figure the reduced proper motion\ndiagram $(H_g, g-z)$ of the theoretical white dwarf population (black\ndots) and the corresponding selection criteria (red dashed line) are\ndisplayed. 
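The cuts just described can be collected into a single selection mask; the sketch below is a minimal illustration (the function name and argument names are ours), applying the cuts quoted in the text.

```python
import numpy as np

def superblink_selection(dec_deg, dist_pc, pm_mas_yr, H_g, g_minus_z, V):
    """Boolean mask reproducing the cuts described in the text: northern
    hemisphere, d <= 40 pc, mu > 40 mas/yr, the reduced proper motion cut
    H_g <= 3.56 (g - z) + 15.2, and V brighter than 19."""
    keep = dec_deg > 0.0
    keep &= dist_pc <= 40.0
    keep &= pm_mas_yr > 40.0
    keep &= H_g <= 3.56 * g_minus_z + 15.2
    keep &= V < 19.0
    return keep
```

Each condition is an elementwise comparison, so the mask can be applied directly to the columns of a synthetic catalog.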
As can be seen, the overall effect of this selection\ncriterion is that the selected sample is, on average, redder than the\npopulation from which it is drawn, independently of the adopted age of\nthe disk. Additionally, in the bottom panel of Fig.~\ref{f:sel_two}\nwe plot the bolometric magnitudes of the individual white dwarfs as a\nfunction of their distance for the synthetic white dwarf population.\nThe red dashed line represents the selection cut in magnitude,\n$V=19$. It is clear that this cut eliminates faint and distant\nobjects. Also, it is evident that the number of synthetic white dwarfs\nincreases smoothly for increasing magnitudes up to $M_{\rm\nbol}\approx15.0$, and that for magnitudes larger than this value there\nis a dramatic drop in the white dwarf number counts. Furthermore, for\ndistances of $\sim 40$~pc the observational magnitude cut will\neliminate all white dwarfs with bolometric magnitudes larger than\n$M_{\rm bol}\approx16.0$. However, this magnitude cut still allows us\nto resolve the sharp drop-off in the number counts of white dwarfs at\nmagnitude $M_{\rm bol}\approx15.0$. This, in turn, is important since,\nas will be shown below, it allows us to unambiguously determine the\nage of the Galactic disk.\n\n\begin{figure}[t]\n\begin{center}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig03a.eps}}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig03b.eps}}\n\end{center}\n \caption{Top panel: $\chi^2$ probability test as a function of the\n age obtained by fitting the three faintest bins defining the\n cut-off of the white dwarf luminosity function. 
Bottom panel:\n white dwarf luminosity function for the best-fit age.}\n\label{f:chi_age}\n\end{figure}\n\n\begin{figure}[t]\n\begin{center}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig04a.eps}}\n {\includegraphics[trim = 10mm 35mm 10mm 35mm, clip, width=\columnwidth]{fig04b.eps}}\n\end{center}\n \caption{Top panel: $\chi^2$ probability test as a function of the\n age of the burst of star formation obtained by fitting the nine\n brightest bins of the white dwarf luminosity function. Bottom\n panel: Synthetic white dwarf luminosity function for a disk age\n of $8.9\,$Gyr and a recent burst of star formation (black line),\n compared with the observed white dwarf luminosity function\n (red lines).}\n\label{f:wdlf_burst}\n\end{figure}\n\nWe now study whether our modeling of the selection criteria is robust\nenough. This is an important issue because reliable $ugriz$ photometry\nwas available only for a subset of the SUPERBLINK\ncatalog. Consequently, \cite{Limoges2015} used photometric data from\nother sources like 2MASS, GALEX, and USNO-B1.0 -- see\n\cite{Limoges2013} for details. It is unclear how this procedure may\naffect the observed sample. Obviously, simulating all the specific\nobservational procedures is too complicated for the purpose of the\npresent analysis, but we conducted two supplementary sets of\nsimulations to assess the reliability of our results. In the first of\nthese sets we discarded an additional fraction of white dwarfs in the\ntheoretical samples obtained after applying all the selection criteria\npreviously described. We found that if the fraction of discarded\nsynthetic stars is $\la 15\%$ the results described below remain\nunaffected. 
Additionally, in a second set of simulations we explored\nthe possibility that the sample of \cite{Limoges2015} is indeed larger\nthan that used to compute the theoretical luminosity function.\nAccordingly, we artificially increased the number of synthetic white\ndwarfs which pass the successive selection criteria in the reduced\nproper motion diagram. In particular, we increased by 15\% the number\nof artificial white dwarfs populating the lowest luminosity bins of\nthe luminosity function (those with $M_{\rm bol}>12$). Again, we\nfound that the differences between both sets of simulations -- our\nreference simulation and this one -- are minor.\n\n\begin{figure*}[t]\n\begin{center}\n {\includegraphics[trim = 0mm 34mm 0mm 10mm, clip, width=0.8\textwidth]{fig05.eps}}\n\end{center}\n \caption{Synthetic white dwarf luminosity function for a disk age\n of $8.9$~Gyr and a recent burst of star formation for different\n values of the slope of the Salpeter IMF (black lines), compared\n with the observed white dwarf luminosity function of\n \cite{Limoges2015} -- red line.}\n\label{f:wdlf_alfa}\n\end{figure*}\n\nOnce the effects of the observational biases and selection criteria\nhave been analyzed, a theoretical white dwarf luminosity function can\nbe built and compared to the observed one. To allow for a meaningful\ncomparison between the theoretical and the observational results, we\ngrouped the synthetic white dwarfs using the same magnitude bins\nemployed by \cite{Limoges2015}. We emphasize that the procedure\nemployed by \cite{Limoges2015} to derive the white dwarf luminosity\nfunction simply consists in counting the number of stars in each\nmagnitude bin, given that their sample is volume limited. 
That is, in\nprinciple, their number counts should correspond to the true number\ndensity of objects per bolometric magnitude and unit volume --\nprovided that their sample is complete -- without the need for\ncorrecting the number counts using the $1\/V_{\rm max}$ method, or an\nequivalent method, as occurs for magnitude- and proper-motion-limited\nsamples.\n\nIn the top panel of Fig.~\ref{f:triple} the theoretical\nresults are shown using black lines, while the observed luminosity\nfunction is displayed using a red line. Specifically, for our\nreference model we show the number of white dwarfs per unit bolometric\nmagnitude and volume for the entire theoretical white dwarf population\nwhen no selection criteria are employed -- dashed line and open\nsquares -- and the luminosity function obtained when the selection\ncriteria previously described are used -- solid line and filled\nsquares. It is worthwhile to mention here that the theoretical\nluminosity functions have been normalized to the bin of bolometric\nmagnitude $M_{\rm bol}=14.75$, which corresponds to the magnitude bin\nfor which the observed white dwarf luminosity function has the\nsmallest error bars. Thus, since this luminosity bin is very close to\nthe maximum of the luminosity function, the normalization criterion is\npractically equivalent to a number density normalization. As clearly\nseen in this figure, the theoretical results match the observed data\nvery well, except for a quite apparent excess of hot white dwarfs,\nwhich will be discussed in detail below. Note as well that the\nselection criteria employed by \cite{Limoges2015} basically affect the\nlow-luminosity tail of the white dwarf luminosity function, but not\nthe location of the observed drop-off in the white dwarf number counts\nnor that of the maximum of the luminosity function. 
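The construction just described -- counting stars per bolometric-magnitude bin and normalizing to the $M_{\rm bol}=14.75$ bin -- can be sketched as follows; the bin edges and the normalization value are illustrative assumptions, not the bins of the actual analysis.

```python
import numpy as np

def luminosity_function(M_bol, edges, norm_bin_center=14.75, norm_value=1.0):
    """Number counts per bolometric-magnitude bin, normalized to the bin
    centred on M_bol = 14.75, as done in the text (illustrative sketch)."""
    counts, _ = np.histogram(M_bol, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    i = int(np.argmin(np.abs(centers - norm_bin_center)))
    if counts[i] == 0:
        raise ValueError("empty normalization bin")
    return centers, counts * (norm_value / counts[i])
```

Because the normalization divides every bin by the counts in the reference bin, the resulting curve is dimensionless and directly comparable between the full and the restricted synthetic samples.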
This can be more\neasily seen by looking at the bottom panel of Fig.~\ref{f:triple},\nwhere the completeness of the simulated restricted sample is shown.\nWe found that the completeness of the entire sample is $78\%$.\nHowever, the restricted sample is nearly complete at intermediate\nbolometric magnitudes -- between $M_{\rm bol}\simeq 10$ and 15 -- but\ndecreases very rapidly for magnitudes larger than $M_{\rm bol}=15$, a\nclear effect of the selection procedure employed by\n\cite{Limoges2015}. Nevertheless, this low-luminosity tail is\npopulated preferentially by helium-atmosphere stars, and by very\nmassive oxygen-neon white dwarfs. The prevalence of helium-atmosphere\nwhite dwarfs at low luminosities is due to the fact that stars with\nhydrogen-deficient atmospheres have lower luminosities than their\nhydrogen-rich counterparts of the same mass and age, because in their\natmospheres collision-induced absorption does not play a significant\nrole and they cool, to a very good approximation, as black bodies. Also, the\npresence of massive oxygen-neon white dwarfs is a consequence of their\nenhanced cooling rate, due to their smaller heat capacity.\n\n\subsection{Fitting the age}\n\label{subsec:fitage}\n\nNow we estimate the age of the disk using the standard method of\nfitting the position of the cut-off of the white dwarf luminosity\nfunction. We did this by comparing the faint end of the observed\nwhite dwarf luminosity function with our synthetic luminosity\nfunctions. Despite the fact that the completeness of the faintest\nbins of the luminosity function is substantially smaller (below $\sim\n60\%$), we demonstrated in the previous section that the position of\nthe cut-off remains nearly unaffected by the selection\nprocedures. Accordingly, we ran a set of Monte Carlo simulations for a\nwide range of disk ages. 
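Each trial age yields a synthetic luminosity function whose faintest bins can be compared against the observed counts; a toy sketch of such a scan follows (all arrays below are illustrative placeholders, not the actual data).

```python
import numpy as np

def best_fit_age(ages, model_counts, obs_counts, obs_sigma):
    """Return the trial disk age whose synthetic counts in the faintest
    (cut-off) bins best match the observed ones in a chi-squared sense,
    together with the corresponding chi-squared value."""
    chi2 = np.array([np.sum(((m - obs_counts) / obs_sigma) ** 2)
                     for m in model_counts])
    i = int(np.argmin(chi2))
    return ages[i], chi2[i]
```

In practice each entry of `model_counts` would come from a full Monte Carlo realization for the corresponding trial age; here the arrays are hand-made toy numbers.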
We then employed a $\\chi^2$ test in which we\ncompared the theoretical and observed number counts of those bins that\ndefine the cut-off, namely the three last bins (those with $M_{\\rm\nbol}>15.5$) of the luminosity function. In the top panel of\nFig.~\\ref{f:chi_age} we plot this probability as a function of the\ndisk age. The best fit is obtained for an age of $8.9$~Gyr, and the\nwidth of the distribution at half-maximum is $0.4$~Gyr. The bottom\npanel of this figure shows the white dwarf luminosity function for the\nbest-fit age.\n\nOne possible concern could be that the age derived in this way could\nbe affected by the assumption that the mass of cool white dwarfs for\nwhich no trigonometric parallax could be measured was arbitrarily\nassumed to be $0.6\\, M_{\\sun}$. This may have an impact of the age\ndetermination using the cut-off of the white dwarf luminosity\nfunction. To assess this issue we conducted an additional simulation\nin which all synthetic white dwarfs with cooling times longer than\n1~Gyr have this mass. We then computed the new luminosity function\nand derived the corresponding age estimate. We found that difference\nof ages between both calculations is smaller than 0.1~Gyr.\n\n\\subsection{A recent burst of star formation}\n\\label{subsec:rburst}\n\nAs clearly shown in the bottom panel of Fig.~\\ref{f:chi_age} the\nagreement between the theoretical simulations and the observed results\nis very good except for the brightest bins of the white dwarf\nluminosity, namely those with $M_{\\rm bol}\\la 11$. Also, our\nsimulations fail to reproduce the shape of the peak of the luminosity\nfunction, an aspect which we investigate in more detail in\nSect.~\\ref{subsec:effects}. The excess of white dwarfs for the\nbrightest luminosity bins is statistically significant, as already\nnoted by \\cite{Limoges2015}. 
They discussed\nvarious possibilities and pointed out that the most likely one is that\nthis feature of the white dwarf luminosity function might be due to a\nrecent burst of star formation. \cite{NohScalo1990} demonstrated some\ntime ago that a burst of star formation generally produces a bump in\nthe luminosity function, and that the position of the bump on the hot\nbranch of the luminosity function is ultimately dictated by the age of\nthe burst of star formation -- see also \cite{Rowell13}.\n\n\begin{figure}[t]\n\begin{center}\n {\includegraphics[trim = 10mm 35mm 12mm 30mm, clip, width=0.95\columnwidth]{fig06.eps}}\n\end{center}\n\caption{Initial-final mass relationships adopted in this work. The\n solid line shows the semi-empirical initial-final mass relationship\n of \cite{Catalan2008}, while the dashed lines have been obtained by\n multiplying the final white dwarf mass by a constant factor $\beta$,\n as labeled.}\n\label{f:mimf}\n\end{figure}\n\n\begin{figure*}[t]\n\begin{center}\n {\includegraphics[trim = 0mm 34mm 0mm 10mm, clip, width=0.8\textwidth]{fig07.eps}}\n\end{center}\n \caption{Synthetic white dwarf luminosity function for a disk age\n of $8.9\,$Gyr and a recent burst of star formation for several\n choices of the initial-to-final mass relationship (black lines)\n compared with the observed white dwarf luminosity function of\n \cite{Limoges2015} -- red lines. See text for details.}\n\label{f:wdlf_beta}\n\end{figure*}\n\nAccording to these considerations, we explored the possibility of a\nrecent burst of star formation by adopting a burst that occurred some\ntime ago and remains active until the present. The strength of this episode\nof star formation is another parameter that can be varied. We thus\nran our Monte Carlo simulator using a fixed age of the disk of\n$8.9\,$Gyr and considered the time elapsed since the beginning of the\nburst, $\Delta t$, and its strength as adjustable parameters. 
The top\npanel of Fig.~\ref{f:wdlf_burst} shows the probability distribution\nfor $\Delta t$, computed using the same procedure employed to derive\nthe age of the Solar neighborhood, but adopting the nine brightest\nbins of the white dwarf luminosity function, which correspond to the\nlocation of the bump of the white dwarf luminosity function. The best\nfit is obtained for a burst that happened $\sim 0.6\pm 0.2$~Gyr ago\nand is $\sim 5$ times stronger than the constant star formation rate\nadopted in the previous section. As can be seen in this figure the\nprobability distribution function does not have a clear Gaussian\nshape. Moreover, the maximum of the probability distribution is flat,\nand the dispersion is rather high, meaning that the current\nobservational data set does not allow us to effectively constrain\nthe properties of this episode of star formation. However, when this\nepisode of star formation is included in the calculations the\nagreement between the theoretical calculations and the observational\nresults is excellent. This is clearly shown in the bottom panel of\nFig.~\ref{f:wdlf_burst}, where we show our best-fit model, and compare\nit with the observed white dwarf luminosity function of\n\cite{Limoges2015}. As can be seen, the observed excess of hot white\ndwarfs is now perfectly reproduced by the theoretical calculations.\n\n\subsection{Sensitivity of the age to the inputs}\n\label{subsec:effects}\n\nIn this section we study the sensitivity of the age determination\nobtained in Sect.~\ref{subsec:fitage} to the most important inputs\nadopted in our simulations. We start by discussing the sensitivity of\nthe age to the slope of the initial mass function. This is done with\nthe help of Fig.~\ref{f:wdlf_alfa}, where we compare the theoretical white\ndwarf luminosity functions obtained with different values of the\nexponent $\alpha$ for a Salpeter-like initial mass function with the\nobserved luminosity function. 
As can be seen, the differences between\nthe different luminosity functions are minimal. Moreover, the value of\n$\alpha$ does not influence the precise location of the cut-off of the\nluminosity function, hence the age determination is insensitive to the\nadopted initial mass function.\n\nIn a second step we studied the sensitivity of the age determination\nto the initial-to-final mass relationship. As mentioned before, for\nour reference calculation we used the results of \cite{Cat2008} and\n\cite{Catalan2008}. To model different slopes of the initial-to-final\nmass relationship we multiplied the final mass obtained\nwith the relationship of \cite{Cat2008} by a constant factor, $\beta$\n-- see Fig.~\ref{f:mimf}. This choice is motivated by the fact that\nmost semi-empirical and theoretical initial-to-final mass\nrelationships have similar shapes -- see, for instance, Fig.~2 of\n\cite{Renedo10} and Fig.~23 of \cite{andrews2015}.\nFig.~\ref{f:wdlf_beta} displays several theoretical luminosity\nfunctions obtained with different values of $\beta$. Clearly, the\nposition of the cut-off of the white dwarf luminosity function remains\nalmost unchanged, except for very extreme values of $\beta$. Thus,\nthe age determination obtained previously is not severely affected by\nthe choice of the initial-to-final mass relationship.\n\nNevertheless, Fig.~\ref{f:wdlf_beta} reveals one interesting point.\nAs can be seen, large values of $\beta$ result in better fits of the\nregion near the maximum of the white dwarf luminosity function. This\nfeature was already noted by \cite{Limoges2015}. They discussed\nseveral possibilities. First, they assessed the statistical\nrelevance of this feature. 
They found that this\ndiscrepancy between the theoretical models and the observations could\nnot be caused by the limitations of the observational sample, because the\nerror bars in this magnitude region are small, and the completeness of\nthe observed sample at these magnitudes is $\sim 80\%$ (see\nFig.~\ref{f:sel_two}). Thus, it seems quite unlikely that they lost so\nmany white dwarfs in the survey. Another possibility could be that\nthe cooling sequences for this range of magnitudes are missing some\nimportant physical ingredient. However, at these luminosities cooling is\ndominated by convective coupling and crystallization\n\citep{Fontaine2001}. Since these processes are well understood and the\ncooling sequences in this magnitude range have been extensively tested\nin several circumstances with satisfactory results, it is also quite\nunlikely that this could be the reason for the discrepancy between\ntheory and observations. Also, the initial mass function has virtually\nno effect on the shape of the maximum of the white dwarf luminosity\nfunction -- see Fig.~\ref{f:wdlf_alfa}. Thus, the only possibility we\nare left with is the slope of the initial-to-final mass relationship.\nFig.~\ref{f:wdlf_beta} demonstrates that to reproduce the shape of the\nmaximum of the white dwarf luminosity function $\beta=1.2$ is needed.\nWhen such an extreme value of $\beta$ is adopted we find that the\ntheoretical restricted samples have clear excesses of massive white\ndwarfs. However, in general, massive white dwarfs have magnitudes\nbeyond that of the maximum of the white dwarf luminosity function.\nThus, a likely explanation of this lack of agreement between the\ntheoretical models and the observations is that the initial-to-final\nmass relationship has a {\sl steeper} slope for initial masses larger\nthan $\sim 4\, M_{\sun}$. To check this possibility we ran an\nadditional simulation in which we adopted $\beta=1.0$ for masses\nsmaller than $4\, M_{\sun}$, and $\beta = 1.3$ otherwise. 
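This piecewise scaling of the initial-to-final mass relationship can be sketched as follows; the linear base relation is an illustrative stand-in of the right order of magnitude, not the exact semi-empirical fit of Catalan et al. (2008) used in the paper.

```python
import numpy as np

def final_mass(m_init, beta_low=1.0, beta_high=1.3, m_break=4.0):
    """Final white dwarf mass from a base initial-to-final mass relation
    scaled by a piecewise factor beta, steeper above m_break as explored
    in the text. The linear base relation below is an assumed placeholder,
    not the paper's semi-empirical fit."""
    m_init = np.asarray(m_init, dtype=float)
    base = 0.096 * m_init + 0.429          # assumed linear base relation
    beta = np.where(m_init < m_break, beta_low, beta_high)
    return beta * base
```

With the default parameters the relation is continuous in slope below the break and simply rescaled by 1.3 above it, mimicking the steepening explored in the text.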
Adopting this procedure the\nexcesses of massive white dwarfs disappear, while the fit to the white\ndwarf luminosity function is essentially the same shown in the lower\nleft panel of Fig.~\ref{f:wdlf_beta}. Interestingly, the analysis of\n\cite{Dobbie} of massive white dwarfs in the open clusters NGC 3532\nand NGC 2287 strongly suggests that indeed the slope of the\ninitial-to-final mass relationship for this mass range is steeper.\n\n\section{Summary, discussion and conclusions}\n\label{sec:concl}\n\nIn this paper we studied the population of Galactic white dwarfs\nwithin 40~pc of the Sun, and we compared its characteristics with\nthose of the observed sample of \cite{Limoges2015}. We found that our\nsimulations describe with good accuracy the properties of this sample\nof white dwarfs. Our results show that the completeness of the\nobserved sample is typically $\sim 80\%$, although for bolometric\nmagnitudes larger than $\sim 16$ the completeness drops to much\nsmaller values, of the order of 20\% or even less at lower\nluminosities. However, the cut-off of the observed luminosity\nfunction, which is located at $M_{\rm bol}\simeq 15$, is statistically\nsignificant. We then used the most reliable progenitor evolutionary\ntimes and cooling sequences to derive the age of the Solar\nneighborhood, and found that it is $\simeq 8.9\pm 0.2$~Gyr. This age\nestimate is robust, as it does not depend substantially on the most\nrelevant inputs, like the slope of the initial mass function or the\nadopted initial-to-final mass relationship.\n\nWe also studied other interesting features of the observed white dwarf\nluminosity function. In particular, we studied the region around the\nmaximum of the white dwarf luminosity function and we argue that the\nprecise shape of the maximum is best explained assuming that the\ninitial-to-final mass relationship is steeper for progenitor masses\nlarger than about $4\, M_{\sun}$. 
We also investigated the presence\nof a quite apparent bump in the number counts of bright white dwarfs,\nat $M_{\rm bol}\simeq 10$, which is statistically significant, and\nwhich had remained unexplained until now. Our simulations show that\nthis feature of the white dwarf luminosity function is compatible with\na recent burst of star formation that occurred about $0.6\pm 0.2$~Gyr\nago and is still ongoing. We also found that this burst of star\nformation was rather intense, about 5 times stronger than the average\nstar formation rate.\n\n\cite{Rowell13} found that the shape of the white dwarf luminosity\nfunction obtained from the SuperCOSMOS Sky Survey \citep{SSS} can be\nwell explained adopting a star formation rate which presents broad\npeaks at $\sim 3$~Gyr and $\sim 8$~Gyr in the past, and marginal\nevidence for a very recent burst of star formation occurring $\sim\n0.5$~Gyr ago. However, \cite{Rowell13} also pointed out that the\ndetails of the star formation history in the Solar neighborhood depend\nsensitively on the adopted cooling sequences and, of course, on the\nadopted observational data set. Since the luminosity function of\n\cite{SSS} does not present any prominent feature at bright\nluminosities it is natural that they did not find such an episode of\nstar formation. However, \cite{Hernandez}, using a non-parametric\nBayesian analysis to invert the color-magnitude diagram, found that the\nstar formation history presents oscillations with a period of 0.5~Gyr for\nlookback times smaller than 1.5~Gyr, in good agreement with the results\npresented here.\n\nIn conclusion, the study of volume-limited samples of white dwarfs\nwithin the Solar neighborhood provides us with a valuable tool to\nstudy the history of star formation of the Galactic thin\ndisk. 
Enhanced and nearly complete samples will surely open the door\nto more conclusive studies.\n\n\n\begin{acknowledgements}\n\nThis work was partially funded by the MINECO grant AYA2014-59084-P and\nby the AGAUR.\n\n\end{acknowledgements}\n\n\n\bibliographystyle{aa}\n\n\section{Introduction}\nFlow networks provide a fruitful modeling framework for many applications of interest such as transportation, data, and production networks. They entail a fluid-like description of the macroscopic motion of \emph{particles}, which are routed from their origins to their destinations via intermediate nodes: we refer to standard textbooks, such as \cite{Ahuja.Magnanti.ea:93}, for a thorough treatment. \n\nThe present and a companion paper \cite{PartII} study \emph{dynamical flow networks}, modeled as systems of ordinary differential equations derived from mass conservation laws on directed acyclic graphs with a single origin-destination pair and a constant inflow at the origin. The rate of change of the particle density on each link of the network equals the difference between the \emph{inflow} and the \emph{outflow} of that link. The latter is modeled to depend on the current particle density on that link through a \emph{flow function}. On the other hand, the way the inflow at an intermediate node gets split among its outgoing links depends on the current particle density, possibly on the whole network, through a \emph{routing policy}. Such a routing policy is said to be \emph{distributed} if the proportion of inflow routed to the outgoing links of a node is allowed to depend only on \emph{local information}, consisting of the current particle densities on the outgoing links of the same node. 
\nThe inspiration for such a modeling paradigm comes from empirical findings from several application domains: in transportation networks \\cite{Garavello.Piccoli:06}, the flow functions are typically referred to as \\emph{fundamental diagrams}, while the routing policies model the emerging selfish behavior of drivers; in data networks \\cite{BertsekasGallager}, flow functions model congestion-dependent \\emph{throughput} and \\emph{average delays}, while routing policies are designed in order to optimize the total throughput or other performance measures; in production networks \\cite{Karmarkar:89}, flow functions correspond to \\emph{clearing functions}.\n\nOur objective is the design and analysis of distributed routing policies for dynamical flow networks that are \\emph{maximally robust} with respect to \\emph{adversarial disturbances} that reduce the link flow capacities. Two notions of transfer efficiency are introduced in order to capture the extremes of the resilience of the network towards disturbances: The dynamical flow network is \\emph{fully transferring} if the total inflow at the destination node asymptotically approaches the inflow at the origin node, and \\emph{partially transferring} if the total inflow at the destination node is asymptotically bounded away from zero. The robustness of distributed routing policies is evaluated in terms of the network's \\emph{strong} and \\emph{weak} \\emph{resilience}, which are defined as the minimum sum of link-wise magnitude of disturbances making the perturbed dynamical flow network not fully transferring, and, respectively, not partially transferring. 
In this paper, we prove that the maximum possible weak resilience is yielded by a class of \\emph{locally responsive} distributed routing policies, which rely only on \\emph{local information} on the current particle densities on the network, and are characterized by the property that the portion of its inflow that a node routes towards an outgoing link does not decrease as the particle density on any other outgoing link increases. Moreover, we show that the maximum weak resilience of dynamical flow networks with arbitrary, not necessarily distributed, routing policies equals the \\emph{min-cut capacity} of the network and hence is independent of the initial equilibrium flow. We also prove some fundamental properties of dynamical flow networks driven by locally responsive distributed policies, including global convergence to a unique limit flow. Such properties are mainly a consequence of the particular \\emph{cooperative} structure (in the sense of \\cite{Hirsch:82,Hirsch:85}) that the dynamical flow network inherits from locally responsive routing policies. \n\n\nStability analysis of network flow control policies under non-persistent disturbances, especially in the context of internet, has attracted a lot of attention, e.g., see \\cite{Vinnicombe:02,Paganini:02,Low.Paganini.ea:02,Fan.Arcak.ea:04}. Recent work on robustness analysis of static flow networks under adversarial and probabilistic persistent disturbances in the spirit of this paper include \\cite{Vardi.Zhang:07,Bulteau.Rubino.ea:97,Sengoku.Shinoda.ea:88}. It is worth comparing the distributed routing policies studied in this paper with the back-pressure policy~\\cite{Tassiulas.Ephremides:92}, which is one of the most well-known robust distributed routing policy for queueing networks. While relying on local information in the same way as the distributed routing policies studied here, back-pressure policies require the nodes to have, possibly unlimited, buffer capacity. 
In contrast, in our framework, the nodes have no buffer capacity. In fact, the distributed routing policies considered in this paper are closely related to the well-known \emph{hot-potato} or deflection routing policies \cite{Acampora.Shah:92} \cite[Sect.~5.1]{BertsekasGallager}, where the nodes route incoming packets immediately to one of the outgoing links. However, to the best of our knowledge, the robustness properties of dynamical flow networks, where the outflow from a link is not necessarily equal to its inflow, have not been studied before. \n\nThe contributions of this paper are as follows: (i) we formulate a novel dynamical system framework for robustness analysis of dynamical flow networks under feedback routing policies, possibly constrained in the available information; (ii) we characterize a general class of locally responsive distributed routing policies that yield the maximum weak resilience; (iii) we provide a simple characterization of the resilience in terms of the topology and capacity of the flow network. In particular, the class of locally responsive distributed routing policies can be interpreted as approximate Nash equilibria in an appropriate zero-sum game setting where the objective of the adversary inflicting the disturbance is to make the network not partially transferring with a disturbance of minimum possible magnitude, and the objective of the system planner is to design distributed routing policies that yield the maximum possible resilience. \nThe results of this paper imply that locality constraints on the information available to routing policies do not affect the maximally achievable weak resilience. 
In contrast, the companion paper \cite{PartII} focuses on the strong resilience properties of dynamical flow networks, and shows that locally responsive distributed routing policies are maximally robust, but only within the class of distributed routing policies which are constrained to use only local information on the network congestion status. \n\n\nThe rest of the paper is organized as follows.\nIn Section~\ref{sec:model}, we formulate the problem by formally defining the notion of a dynamical flow network and its resilience, and we prove that the weak resilience of a dynamical flow network driven by an arbitrary, not necessarily distributed, routing policy is upper-bounded by the min-cut capacity of the network. In Section~\ref{sec:comparison}, we introduce the class of locally responsive distributed routing policies, and state the main results on dynamical flow networks driven by such locally responsive distributed routing policies: Theorem \ref{thm:uniquelimitflow}, concerning global convergence towards a unique equilibrium flow; and Theorem \ref{maintheo-weakstability}, concerning the maximal weak resilience property. In Sections~\ref{sec:proofthmuniquelimit} and \ref{sec:proof2}, we present the proofs of Theorems \ref{thm:uniquelimitflow} and \ref{maintheo-weakstability}, respectively.\n\nBefore proceeding, we define some preliminary notation to be used throughout the paper. Let ${\mathbb{R}}$ be the set of real numbers, and $\mathbb{R}_+:=\{x\in\mathbb{R}:\,x\ge0\}$ be the set of nonnegative real numbers. Let $\mathcal A$ and $\mathcal B$ be finite sets. 
Then, $|\\mathcal A|$ will denote the cardinality of $\\mathcal A$, $\\mathbb{R}^{\\mathcal A}$ (respectively, $\\mathbb{R}_+^{\\mathcal A}$) the space of real-valued (nonnegative-real-valued) vectors whose components are indexed by elements of $\\mathcal A$, and $\\mathbb{R}^{\\mathcal A\\times\\mathcal B}$ the space of matrices whose real entries indexed by pairs of elements in $\\mathcal A\\times\\mathcal B$. The transpose of a matrix $M \\in {\\mathbb{R}}^{\\mathcal A \\times\\mathcal B}$, will be denoted by $M^T \\in\\mathbb{R}^{\\mathcal B\\times\\mathcal A}$, while $\\mathbf{1}$ the all-one vector, whose size will be clear from the context. Let $\\cl(\\mathcal X)$ be the closure of a set $\\mathcal X\\subseteq\\mathbb{R}^{\\mathcal A}$. A directed multigraph is the pair $(\\mathcal V,\\mathcal E)$ of a finite set $\\mathcal V$ of nodes, and of a multiset $\\mathcal E$ of links consisting of ordered pairs of nodes (i.e., we allow for parallel links). Given a a multigraph $(\\mathcal V,\\mathcal E)$, for every node $v\\in\\mathcal V$, we shall denote by $\\mathcal E^+_v\\subseteq\\mathcal E$, and $\\mathcal E^-_v\\subseteq\\mathcal E$, the set of its outgoing and incoming links, respectively. Moreover, we shall use the shorthand notation $\\mathcal R_v:=\\mathbb{R}_+^{\\mathcal E^+_v}$ for the set of nonnegative-real-valued vectors whose entries are indexed by elements of $\\mathcal E^+_v$, $\\mathcal S_v:=\\{p\\in \\mathcal R_v:\\,\\sum_{e \\in \\mathcal E_v^+} p_e=1\\}$ for the simplex of probability vectors over $\\mathcal E^+_v$, and $\\mathcal R:=\\mathbb{R}_+^{\\mathcal E}$ for the set of nonnegative-real-valued vectors whose entries are indexed by the links in $\\mathcal E$.\n\n\\section{Dynamical flow networks and their resilience}\n\\label{sec:model}\nIn this section, we introduce our model of dynamical flow networks and define the notions of transfer efficiency. 
\n\n\\subsection{Dynamical flow networks}\nWe start with the following definition of a flow network.\\medskip\n\n\\begin{definition}[Flow network]\\label{def:flownetwork}\nA \\emph{flow network} $\\mathcal N=(\\mathcal T,\\mu)$ is the pair of a \\emph{topology}, described by a finite directed multigraph $\\mathcal T=(\\mathcal V,\\mathcal E)$, where $\\mathcal V$ is the node set and $\\mathcal E$ is the link multiset, and a family of \\emph{flow functions} $\\mu:=\\{\\mu_e:\\mathbb{R}_+\\to\\mathbb{R}_+\\}_{e\\in\\mathcal E}$ describing the functional dependence $f_e=\\mu_e(\\rho_e)$ of the flow on the density of particles on every link $e\\in\\mathcal E$. \nThe \\emph{flow capacity} of a link $e\\in\\mathcal E$ is defined as \n\\begin{equation} \\supscr{f}{max}_e:=\\sup_{\\rho_e \\geq 0} \\mu_e(\\rho_e)\\,. \\end{equation}\n\\end{definition}\\medskip\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=10cm,height=7cm]{ODdigraphIOwb}\n\\end{center}\n\\caption{\\label{fig:IOnetwork}A network topology satisfying Assumption \\ref{ass:acyclicity}: the nodes $v$ are labeled by the integers between $0$ (denoting the origin node) and $n$ (denoting the destination node), in such a way that the label of the head node of each edge is higher than the label of its tail node. The inflow at the origin, $\\lambda_0$, maybe interpreted as the input to the dynamical flow network, and the total inflow at the destination, $\\lambda_n(t)$, as the output. 
For $\\alpha\\in(0,1]$, the dynamical flow network is $\\alpha$-transferring if $\\liminf_{t\\to+\\infty}\\lambda_n(t)\\ge\\alpha\\lambda_0$, i.e., if at least $\\alpha$-fraction of the inflow at the origin is transferred to the destination, asymptotically.}\n\\end{figure}\n\nWe shall use the notation $\\mathcal F_v:=\\times_{e\\in\\mathcal E^+_v}[0,\\supscr{f}{max}_e)$ for the set of admissible flow vectors on outgoing links from node $v$, and $\\mathcal F:=\\times_{e\\in\\mathcal E}[0,\\supscr{f}{max}_e)$ for the set of admissible flow vectors for the network. We shall write $f:=\\{f_e:\\,e\\in\\mathcal E\\}\\in\\mathcal F$, and $\\rho:=\\{\\rho_e:\\,e\\in\\mathcal E\\}\\in\\mathcal R$, for the vectors of flows and of densities, respectively, on the different links. The notation $f^v:=\\{f_e:\\,e\\in\\mathcal E^+_v\\}\\in\\mathcal F_v$, and $\\rho^v:=\\{\\rho_e:\\,e\\in\\mathcal E^+_v\\}\\in\\mathcal R_v$ will stand for the vectors of flows and densities, respectively, on the outgoing links of a node $v$. We shall compactly denote by $f=\\mu(\\rho)$ and $f^v=\\mu^v(\\rho^v)$ the functional relationships between density and flow vectors.\n\nThroughout this paper, we shall restrict ourselves to network topologies satisfying the following: \\medskip\n\\begin{assumption}\\label{ass:acyclicity}\nThe topology $\\mathcal T$ contains no cycles, has a unique origin (i.e., a node $v\\in\\mathcal V$ such that $\\mathcal E^-_v$ is empty), and a unique destination (i.e., a node $v\\in\\mathcal V$ such that $\\mathcal E^+_v$ is empty). Moreover, there exists a path in $\\mathcal T$ to the destination node from every other node in $\\mathcal V$. 
\n\\end{assumption}\\medskip\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=10cm,height=7cm]{mincutwb}\n\\end{center}\n\\caption{An origin\/destination cut of the network: $\\mathcal U$ is a subset of nodes including the origin $0$ but not the destination $n$, and $\\mathcal E^+_{\\mathcal U}$ is the subset of those edges with tail node in $\\mathcal U$, and head node in $\\mathcal V\\setminus\\mathcal U$. \\label{fig:mincut}}\n\\end{figure}\n\nAssumption \\ref{ass:acyclicity} implies that one can find a (not necessarily unique) topological ordering of the node set $\\mathcal V$ (see, e.g., \\cite{Cormen.Leiserson:90}). We shall assume to have fixed one such ordering, identifying $\\mathcal V$ with the integer set $\\{0,1,\\ldots,n\\}$, where $n:=|\\mathcal V|-1$, in such a way that \n\\begin{equation}\\label{vertexordering}\\mathcal E^-_{v}\\subseteq\\bigcup\\nolimits_{0\\le u0$, then the relationship between the throughput and the queue length, $f_e\\propto\\rho_e\/d_e(\\rho_e)$, can be easily shown to satisfy Assumption \\ref{ass:flowfunction}. Therefore, in analogy with the general framework, $\\rho_e$ and $f_e$ denote the queue length and the throughput, respectively, and $\\mu_e(\\rho_e)$ represents the throughput functions on the links of data networks.\n\n\n\\item \\textit{Production networks}:\nIn production networks, the particles represent goods that need to be processed by a series of production modules represented by nodes. It is known, e.g., see \\cite{Karmarkar:89}, that the rate of doing work decreases with the amount of work in progress at a production module. 
This relationship is formalized by the concept of \\emph{clearing functions}.\nIn this context, production networks have a clear analogy with our setup where $\\rho_e$ represents the work-in-progress, $f_e$ represents the rate of doing work, and $\\mu_e(\\rho_e)$ represents the clearing function.\n\n\\end{enumerate}\n\n\n\n\\begin{remark}\nWhile there are many examples of congestion-dependent throughput functions and clearing functions that satisfy Assumption~\\ref{ass:flowfunction}, typical fundamental diagrams in transportation systems have a $\\cap$-shaped profile. While we do not study the implications of this analytically, some simulations are provided in \\cite{PartII} illustrating how the results of this paper could be extended to this case.\n\\end{remark}\n\n\\begin{remark}\\label{remark2timescales}\nIt is worth stressing that, while distributed routing policies depend only on local information on the current congestion, their structural form may depend on some global information on the flow network which might have been accumulated through a slower time-scale evolutionary dynamics. A two time-scale process of this sort has been analyzed in our related work \\cite{Como.Savla.ea:Wardrop-arxiv} in the context of transportation networks. Multiple time-scale dynamical processes have also been analyzed in \\cite{Borkar.Kumar:03} in the context of communication networks. 
\n\\end{remark}\n\n\\subsection{Perturbed dynamical flow networks and resilience} \\label{sec:perturbations}\nWe shall consider persistent perturbations of the dynamical flow network (\\ref{dynsyst}) that reduce the flow functions on the links, as per the following: \n\\begin{definition}[Admissible perturbation]\\label{def:admissibeperturbation}\nAn \\emph{admissible perturbation} of a flow network $\\mathcal N=(\\mathcal T,\\mu)$, satisfying Assumptions \\ref{ass:acyclicity} and \\ref{ass:flowfunction}, is a flow network $\\tilde{\\mathcal N}=({\\mathcal T},\\tilde\\mu)$, with the same topology $\\mathcal T$, and a family of perturbed flow functions $\\tilde\\mu:=\\{\\tilde\\mu_e:\\mathbb{R}_+\\to\\mathbb{R}_+\\}_{e\\in\\mathcal E}$, such that, for every $e\\in\\mathcal E$, $\\tilde\\mu_e$ satisfies Assumption \\ref{ass:flowfunction}, as well as\n$$\\tilde\\mu_e(\\rho_e)\\le\\mu_e(\\rho_e)\\,,\\qquad \\forall \\rho_e\\ge0\\,.$$\nWe accordingly let $\\subscr{\\tilde{f}}{e}^\\textup{max}:=\\sup\\{\\tilde{\\mu}_e(\\tilde{\\rho}_e):\\tilde{\\rho}_e\\ge0\\}$.\nThe \\emph{magnitude} of an admissible perturbation is defined as \n\\begin{equation}\\label{deltadef}\\delta:=\\sum\\nolimits_{e\\in\\mathcal E}\\delta_e\\,,\\qquad\\delta_e:=\\sup\\l\\{\\mu_e(\\rho_e)-\\tilde\\mu_e(\\rho_e):\\,\\rho_e\\ge0\\r\\}\\,.\\end{equation} \nThe \\emph{stretching coefficient} of an admissible perturbation is defined as \n\\begin{equation}\\label{thetadef}\\theta:=\\max\\{\\tilde{\\rho}^{\\mu}_e\/{\\rho}^{\\mu}_e:\\,e\\in\\mathcal E\\}\\,,\\end{equation} \nwhere ${\\rho}^{\\mu}_e$, and $\\tilde{\\rho}^{\\mu}_e$ are the median densities associated to the unperturbed and the perturbed flow functions, respectively, on link $e\\in\\mathcal E$, as defined in \\eqref{eq:median-def}. 
\n\\end{definition}\\medskip\n\nGiven a dynamical flow network as in Definition \\ref{def:dynamicalflownetwork}, and an admissible perturbation as in Definition \\ref{def:admissibeperturbation}, we shall consider the \\emph{perturbed dynamical flow network}\n\\begin{equation}\\label{pertdynsyst}\n\\displaystyle\\frac{\\mathrm{d}}{\\mathrm{d} t}\\tilde\\rho_e(t)=\\tilde\\lambda_v(t)G^v_e(\\tilde\\rho(t))- \\tilde f_e(t)\\,,\\qquad\\forall\\,0\\le v0$ a constant inflow, and $\\mathcal G$ an arbitrary routing policy. Then, for any initial flow $f^{\\circ}$, the weak resilience of the associated dynamical flow network satisfies \n$$\\gamma_0(f^{\\circ},\\mathcal G)\\le C(\\mathcal N)\\,.$$\n\\end{proposition} \n\\begin{proof}\nWe shall prove that, for every $\\alpha\\in(0,1]$, and every $\\theta\\ge1$, \n\\begin{equation}\\label{gammaalphatheta}\\gamma_{\\alpha,\\theta}(f^{\\circ},\\mathcal G)\\le C(\\mathcal N)-\\frac{\\alpha}2\\lambda_0\\,.\\end{equation} \nObserve that (\\ref{gammaalphatheta}) immediately implies that $$\\gamma_0(f^{\\circ},\\mathcal G)=\\lim_{\\theta\\to+\\infty}\\lim_{\\alpha\\downarrow0}\\gamma_{\\alpha,\\theta}(f^{\\circ},\\mathcal G)\\le \\lim_{\\theta\\to+\\infty}\\lim_{\\alpha\\downarrow0} \\left (C(\\mathcal N)-\\alpha\\lambda_0\/2 \\right)=C(\\mathcal N)\\,,$$ \nthus proving the claim. \n\nConsider a minimal origin-destination cut, i.e., some $\\mathcal U\\subseteq\\mathcal V$ such that $0\\in\\mathcal U$, $n\\notin\\mathcal U$, and $\\sum_{e\\in\\mathcal E^+_{\\mathcal U}}f^{\\max}_e=C(\\mathcal N)$. Define $\\varepsilon:=\\alpha\\lambda_0\/(2C(\\mathcal N))$, and consider an admissible perturbation such that $\\tilde\\mu_e(\\rho_e)=\\varepsilon\\mu_e(\\rho_e)$ for every $e\\in\\mathcal E_{\\mathcal U}^+$, and $\\tilde\\mu_e(\\rho_e)=\\mu_e(\\rho_e)$ for all $e\\in\\mathcal E\\setminus\\mathcal E_{\\mathcal U}^+$. 
It is readily verified that the magnitude of such a perturbation satisfies \n$$\delta=(1-\varepsilon)\sum\nolimits_{e\in\mathcal E^+_{\mathcal U}}f_e^{\max}=(1-\varepsilon)C(\mathcal N)=C(\mathcal N)-\frac{\alpha}2\lambda_0\,,$$ \nwhile its stretching coefficient is $1$. \n\nObserve that \n\begin{equation}\label{lambdaUbound} \tilde \lambda_{\mathcal U}(t):= \sum_{e \in \mathcal E_{\mathcal U}^+} \tilde f_e(t) \le\sum\nolimits_{e\in\mathcal E^+_{\mathcal U}}\tilde f_e^{\max}=\varepsilon\sum\nolimits_{e\in\mathcal E^+_{\mathcal U}}f_e^{\max}=\alpha\lambda_0\/2\,,\qquad t\ge0\,.\end{equation}\nNow, let $\mathcal W:=\mathcal V\setminus\mathcal U$ be the set of nodes on the destination side of the cut, and observe that, for every $w\in\mathcal W$ with $w\ne n$, \n\begin{equation}\label{derhotiledet}\begin{array}{rcl}\n\displaystyle\frac{\mathrm{d}}{\mathrm{d} t}\l(\sum\nolimits_{e\in\mathcal E^+_w}\tilde\rho_e(t)\r)&=&\n\displaystyle\sum\nolimits_{e\in\mathcal E^+_w}\l(\sum\nolimits_{j\in\mathcal E^-_w}\tilde f_j(t)\r)G^w_e(\tilde\rho(t))-\sum\nolimits_{e\in\mathcal E^+_w}\tilde f_e(t)\\[10pt]\n&=&\displaystyle\sum\nolimits_{e\in\mathcal E^-_w}\tilde f_e(t)-\sum\nolimits_{e\in\mathcal E^+_w}\tilde f_e(t)\,.\n\end{array}\n\end{equation}\nDefine $\mathcal A:=\cup_{w\in\mathcal W}\mathcal E^+_w$, $\mathcal B:=\cup_{w\in\mathcal W}\mathcal E^-_w$, and let $\zeta(t):=\sum_{e\in\mathcal A}\tilde \rho_e(t)$. 
From (\\ref{derhotiledet}), the identity $\\mathcal A\\cup\\mathcal E^+_{\\mathcal U}=\\mathcal B$, and (\\ref{lambdaUbound}), one gets\n\\begin{equation}\\label{dezetadet}\\begin{array}{rcl}\\displaystyle\\frac{\\mathrm{d}}{\\mathrm{d} t}\\zeta(t)\n&=&\\displaystyle\\sum\\nolimits_{w\\in\\mathcal W}\\sum\\nolimits_{e\\in\\mathcal E^+_w}\\frac{\\mathrm{d}}{\\mathrm{d} t}\\tilde \\rho_e(t)\n\\\\[10pt]&=&\n\\displaystyle\\sum\\nolimits_{e\\in\\mathcal B}\\tilde f_e(t)-\\sum\\nolimits_{e\\in\\mathcal E^-_n}\\tilde f_e(t)-\\sum\\nolimits_{e\\in\\mathcal A}\\tilde f_e(t)\\\\[10pt]&=&\\displaystyle\\sum\\nolimits_{e\\in\\mathcal E^+_{\\mathcal U}}\\tilde f_e(t)-\\sum\\nolimits_{e\\in\\mathcal E^-_n}\\tilde f_e(t)\\\\[10pt]&<&\\displaystyle \\alpha\\lambda_0\/2-\\tilde\\lambda_n(t)\\,.\n\\end{array}\\end{equation}\nNow assume, by contradiction, that \n$$\\liminf_{t\\to+\\infty}\\tilde\\lambda_n(t)\\ge\\alpha\\lambda_0\\,.$$\nThen, there would exist some $\\tau\\ge0$ such that $\\tilde\\lambda_n(t)\\ge 3\\alpha\\lambda_0\/4$ for all $t\\ge\\tau$. For all $t\\ge\\tau$, it would then follow from (\\ref{dezetadet}) that $\\mathrm{d}\\zeta(t)\/\\mathrm{d} t\\le-\\alpha\\lambda_0\/4\\,,$ so that \n$$\\zeta(t)\\le\\zeta(\\tau)+(t-\\tau)\\alpha\\lambda_0\/4$$\nby Gronwall's inequality. Therefore, $\\zeta(t)$ would converge to $-\\infty$ as $t$ grows large, contradicting the fact that $\\zeta(t)\\ge0$ for all $t\\ge0$.\nThen, necessarily $$\\liminf_{t\\to+\\infty}\\tilde\\lambda_n(t)<\\alpha\\lambda_0\\,,$$ so that the perturbed dynamical network is not $\\alpha$-transferring. This implies (\\ref{gammaalphatheta}), and therefore the claim. \\end{proof}\n\n \n\\section{Main results and discussion} \n\\label{sec:comparison}\n\nIn this paper, we shall be concerned with a family of \\emph{maximally robust} distributed routing policies. 
Such a family is characterized by the following:\n\begin{definition}[Locally responsive distributed routing policy]\label{def:myopicpolicy}\nA \emph{locally responsive} distributed routing policy for a flow network topology $\mathcal T=(\mathcal V,\mathcal E)$ with node set $\mathcal V=\{0,1,\ldots,n\}$ is a family of continuously differentiable distributed routing functions $\mathcal G=\{G^v:\mathcal R_v\to\mathcal S_v\}_{v\in\mathcal V}$ such that, for every non-destination node $0\le v<n$: \n\begin{itemize}\n\item[(a)] $\displaystyle\frac{\partial}{\partial\rho_j}G^v_e(\rho^v)\ge0\,,\qquad\forall j\ne e\in\mathcal E^+_v\,,\quad\forall\rho^v\in\mathcal R_v\,.$\n\end{itemize}\n\end{definition}\medskip\n\nThe first main result of this paper states that, when driven by a locally responsive distributed routing policy, the dynamical flow network admits a globally attractive equilibrium flow: \medskip\n\begin{theorem}[Global convergence towards a unique equilibrium flow]\label{thm:uniquelimitflow}\nLet $\mathcal N$ be a flow network satisfying Assumptions \ref{ass:acyclicity} and \ref{ass:flowfunction}, $\lambda_0\ge0$ a constant inflow, and $\mathcal G$ a locally responsive distributed routing policy. Then, there exists a unique $f^*\in\cl(\mathcal F)$ such that, for every initial condition $\rho(0)\in\mathcal R$, the flow associated to the solution of the dynamical flow network (\ref{dynsyst}) satisfies $\lim_{t\to+\infty}f_e(t)=f_e^*$ for every $e\in\mathcal E$. \n\end{theorem}\medskip\n\nThe second main result states that locally responsive distributed routing policies achieve the maximal possible weak resilience: \medskip\n\begin{theorem}[Maximal weak resilience of locally responsive policies]\label{maintheo-weakstability}\nLet $\mathcal N$ be a flow network satisfying Assumptions \ref{ass:acyclicity} and \ref{ass:flowfunction}, $\lambda_0>0$ a constant inflow, and $\mathcal G$ a locally responsive distributed routing policy such that $G^v_e(\rho^v)>0$ for all $0\le v<n$, $e\in\mathcal E^+_v$, $\rho^v\in\mathcal R_v$. Then, for every initial flow $f^{\circ}\in\mathcal F$, the weak resilience of the associated dynamical flow network satisfies \n$$\gamma_0(f^{\circ},\mathcal G)= C(\mathcal N)\,.$$\n\end{theorem}\medskip\n\n\section{Proof of Theorem \ref{thm:uniquelimitflow}}\label{sec:proofthmuniquelimit}\nFor $0\le v<n$, and a continuous, nonnegative local input $t\mapsto\lambda(t)$, consider the \emph{local dynamical system} \n\begin{equation}\label{localsys}\frac{\mathrm{d}}{\mathrm{d} t}\rho_e(t)=\lambda(t)G^v_e(\rho^v(t))-\mu_e(\rho_e(t))\,,\qquad\forall e\in\mathcal E^+_v\,,\end{equation}\ndescribing the evolution of the densities on the outgoing links of node $v$. We start with the following lemma, establishing a cooperativity property of locally responsive distributed routing policies. \medskip\n\begin{lemma}[Cooperativity of locally responsive routing policies]\label{lem:coop-ext}\nLet $\mathcal G$ be a locally responsive distributed routing policy, and let $0\le v<n$. Then, for every $\sigma,\varsigma\in\mathcal R_v$, \n$$\sum\nolimits_{e \in \mathcal E_v^+} \sgn(\sigma_e- \varsigma_e) \left(G_e^v(\sigma)-G_e^v(\varsigma) \right)\le 0\,.$$\n\end{lemma}\n\begin{proof}\nDefine $\mathcal K:=\{e\in\mathcal E^+_v:\,\sigma_e>\varsigma_e\}$, $\mathcal J:=\{e\in\mathcal E^+_v:\,\sigma_e\le\varsigma_e\}$, and $\mathcal L:=\{e\in\mathcal E_v^+:\,\sigma_e<\varsigma_e\}$. Define $G_{\mathcal K}(\zeta):=\sum_{k\in\mathcal K}G^v_k(\zeta)$, $G_{\mathcal L}(\zeta):=\sum_{l\in\mathcal L}G^v_l(\zeta)$, and $G_{\mathcal J}(\zeta):=\sum_{j\in\mathcal J}G_j^v(\zeta)$. We shall show that, for any $\sigma,\varsigma\in\mathcal R_v$, \n\begin{equation}\n\label{eq:JK-ineq}\nG_{\mathcal K}(\sigma) \le G_{\mathcal K}(\varsigma), \qquad G_{\mathcal L}(\sigma) \ge G_{\mathcal L}(\varsigma)\,.\end{equation}\nLet $\xi\in\mathcal R_v$ be defined by $\xi_k=\sigma_k$ for all $k\in\mathcal K$, and $\xi_e=\varsigma_e$ for all $e\in\mathcal E^+_v\setminus\mathcal K$. We shall prove that $G_{\mathcal K}(\sigma)-G_{\mathcal K}(\varsigma)\le0$ by writing it as a path integral of $\nabla G_{\mathcal K}(\zeta)$ first along the segment $S_{\mathcal K}$ from $\varsigma$ to $\xi$, and then along the segment $S_{\mathcal L}$ from $\xi$ to $\sigma$. 
Proceeding in this way, one gets \n\\begin{equation}\\label{GK-GK} G_{\\mathcal K}(\\sigma)-G_{\\mathcal K}(\\varsigma)=\n\\int_{S_{\\mathcal K}}\\nabla G_{\\mathcal K}(\\zeta)\\cdot\\mathrm{d}\\zeta+\\int_{S_{\\mathcal L}}\\nabla G_{\\mathcal K}(\\zeta)\\cdot\\mathrm{d}\\zeta=\n-\\int_{S_{\\mathcal K}}\\nabla G_{\\mathcal J}(\\zeta)\\cdot\\mathrm{d}\\zeta+\\int_{S_{\\mathcal L}}\\nabla G_{\\mathcal K}(\\zeta)\\cdot\\mathrm{d}\\zeta\\,,\\end{equation}\nwhere the second equality follows from the fact that $G_{\\mathcal K}(\\zeta)=1-G_{\\mathcal J}(\\zeta)$ since $G^v(\\zeta)\\in\\mathcal S_v$. Now, Property (a) of Definition \\ref{def:myopicpolicy} implies that $\\partial G_{\\mathcal K}(\\zeta)\/\\partial\\zeta_l\\ge0$ for all $l\\in\\mathcal L$, and $\\partial G_{\\mathcal J}(\\zeta)\/\\partial\\zeta_k\\ge0$ for all $k\\in\\mathcal K$. It follows that $\\nabla G_{\\mathcal J}(\\zeta)\\cdot\\mathrm{d}\\zeta\\ge0$ along $S_{\\mathcal K}$, and $\\nabla G_{\\mathcal K}(\\zeta)\\cdot\\mathrm{d}\\zeta\\le0$ along $S_{\\mathcal L}$. Substituting in (\\ref{GK-GK}), one gets the first inequality in (\\ref{eq:JK-ineq}). The second inequality in (\\ref{eq:JK-ineq}) follows by similar arguments. Then, one has \n$$\\sum\\nolimits_{e \\in \\mathcal E_v^+} \\sgn(\\sigma_e- \\varsigma_e) \\left(G_e^v(\\sigma)-G_e^v(\\varsigma) \\right)=G_{\\mathcal K}(\\sigma)-G_{\\mathcal K}(\\varsigma)+G_{\\mathcal L}(\\varsigma)-G_{\\mathcal L}(\\sigma)\\le 0\\,,$$\nwhich proves the claim.\\end{proof}\n\\medskip\n\nWe can now exploit Lemma~\\ref{lem:coop-ext} in order to prove the following key result guaranteeing that the solution of the local dynamical system (\\ref{localsys}) with constant input $\\lambda(t)\\equiv\\lambda$ converges to a limit point which depends on the value of $\\lambda$ but not on the initial condition. 
(Cf.~Example \\ref{example:limitflow} and Figure \\ref{fig:vectfield}.)\n\n\\begin{lemma}\\textit{(Existence of a globally attractive limit flow for the local dynamical system under constant input)}\n\\label{lemmalocalexistence} \nLet $0\\le v0}\\lim_{t\\to+\\infty}||\\Psi^t(\\sigma)-\\Psi^t(\\varsigma)||_1=0\\,,\\qquad \\forall\\sigma,\\varsigma\\in\\mathcal R_v\\,.\\end{equation}\nNow, for any $\\sigma\\in\\mathcal R_v$, one can apply (\\ref{Phit->0}) with $\\varsigma:=\\Phi^{\\tau}(\\sigma)$, and get that \n$$\\lim_{t\\to+\\infty}||\\Psi^t(\\sigma)-\\Psi^{t+\\tau}(\\sigma)||_1=\n\\lim_{t\\to+\\infty}||\\Psi^t(\\sigma)-\\Psi^t(\\Phi^{\\tau}(\\sigma))||_1=0\\,,\\qquad\\forall\\tau\\ge0\\,.$$\nThe above implies that, for any initial condition $\\rho^v(0)=\\sigma\\in\\mathcal R_v$, the flow $\\Psi^t(\\sigma)$ is Cauchy, and hence convergent to some $f^*(\\lambda,\\sigma)\\in\\cl({\\mathcal F}_v)$. It follows from (\\ref{Phit->0}) again, that \n$$||f^*(\\lambda,\\sigma)-f^*(\\lambda,\\varsigma)||_1=\\lim_{t\\to+\\infty}||\\Psi^t(\\sigma)-\\Psi^t(\\varsigma)||_1=0\\,,\\qquad\\forall\\sigma,\\varsigma\\in\\mathcal R_v\\,,$$\nwhich shows that the limit flow does not depend on the initial condition.\n\\end{proof}\n\nNow, let us define $$\\lambda_v^{\\max}:=\\sum\\nolimits_{e\\in\\mathcal E^+_v}f_e^{\\max}\\,.$$ \nThe following result characterizes the way the local limit flow $f^*(\\lambda)$ depends on the local input $\\lambda$. (Cf.~Example \\ref{example:limitflow} and Figure \\ref{fig:fstar}.)\n\\begin{lemma}[Dependence of the local limit flow on the input]\\label{lemma:f*(lambda)}\nLet $0\\le v\\lambda\\,,$$ \nso that there would exist some $\\tau\\ge0$ such that $\\lambda-\\vartheta(t)\\le 0$ for every $t\\ge\\tau$, and hence (\\ref{chitxit}) would imply that $\\zeta(t)\\le\\zeta(\\tau)<+\\infty$ for all $t\\ge\\tau$, thus contradicting the assumption that $\\rho_e(t)$ converges to $\\rho_e^*=+\\infty$ as $t$ grows large. 
Hence, necessarily $\\rho^*\\in\\mathcal R_v$, and $f^*(\\lambda)\\in\\mathcal F_v$. Therefore, being a finite limit point of the autonomous dynamical system (\\ref{localsys}) with continuous right hand side, $\\rho^*$ is necessarily an equilibrium, and so $f^*(\\lambda)$ is an equilibrium flow for the local dynamical system (\\ref{localsys}). \n\nOn the other hand, when $\\lambda\\ge\\lambda_v^{\\max}$, (\\ref{chitxit}) shows that $\\zeta(t)$ is non-decreasing, hence convergent to some $\\zeta(\\infty)\\in[0,+\\infty]$ at $t$ grows large. Assume, by contradiction, that $\\zeta(\\infty)$ is finite. Then, passing to the limit of large $t$ in (\\ref{chitxit}), one would get $$\\int_{\\tau}^{+\\infty}(\\lambda-\\vartheta(s))\\mathrm{d} s=\\zeta(\\infty)-\\zeta(\\tau)\\le\\zeta(\\infty)<+\\infty\\,.$$\nThis, and the fact that $\\vartheta(t)<\\lambda_v^{\\max}\\le\\lambda$ for all $t\\ge0$, would imply that \n\\begin{equation}\\label{xitimpossible}\\lim_{t\\to+\\infty}\\vartheta(t)=\\lambda\\,.\\end{equation} Since $f_e(t)\\lambda_v^{\\max}$. On the other hand, if $\\lambda=\\lambda_v^{\\max}$, then (\\ref{xitimpossible}) implies that, for every $e\\in\\mathcal E^+_v$, $f_e(t)$ converges to $f_e^{\\max}$, and hence $\\rho_e(t)$ grows unbounded as $t$ grows large, so that $\\zeta(\\infty)$ would be infinite. Hence, if $\\lambda\\ge\\lambda_v^{\\max}$, then necessarily $\\zeta(\\infty)$ is infinite, and thanks to the previous arguments this implies that $\\rho_e^*=+\\infty$, and hence $f_e^*(\\lambda) = f_e^{\\max}$ for all $\\sigma \\in \\mathcal R_v$, $e\\in\\mathcal E^+_v$. \n\nFinally, it remains to prove continuity of $f^*(\\lambda)$ as a function of $\\lambda$. 
For this, consider the function $H:(0,+\\infty)^{\\mathcal E^+_v}\\times(0,\\lambda_v^{\\max})\\to\\mathbb{R}^{\\mathcal E^+_v}$ defined by \n$$H_e(\\rho^v,\\lambda):=\\lambda G^v_e(\\rho^v)-\\mu_e(\\rho_e)\\,,\\qquad \\forall e\\in\\mathcal E^+_v\\,.$$ \nClearly, $H$ is differentiable and such that \n\\begin{equation}\n\\label{eq:jacobian}\n\\frac{\\partial}{\\partial_{\\rho_e}}H_e(\\rho^v,\\lambda)=\\lambda\\frac{\\partial}{\\partial_{\\rho_e}}G^v_e(\\rho^v)-\\mu_e'(\\rho_e)=-\\sum_{j\\ne e}\\lambda\\frac{\\partial}{\\partial_{\\rho_e}}G^v_j(\\rho^v)-\\mu_e'(\\rho_e)<-\\sum_{j\\ne e}\\frac{\\partial}{\\partial_{\\rho_e}}H_j(\\rho^v,\\lambda)\\,,\n\\end{equation}\nwhere the inequality follows from the strict monotonicity of the flow function (see Assumption \\ref{ass:flowfunction}). Property (a) in Definition~\\ref{def:myopicpolicy} implies that $\\partial H_j(\\rho^v,\\lambda)\/\\partial \\rho_e \\geq 0$ for all $j \\neq e \\in \\mathcal E_v^+$. Hence, from (\\ref{eq:jacobian}), we also have that $\\partial H_e(\\rho^v,\\lambda)\/\\partial \\rho_e < 0$ for all $e \\in \\mathcal E_v^+$.\nTherefore, for all $\\rho^v\\in(0,+\\infty)^{\\mathcal E^+_v}$, and $\\lambda\\in(0,\\lambda_v^{\\max})$, the Jacobian matrix $\\nabla_{\\rho^v} H(\\rho^v,\\lambda)$ is strictly diagonally dominant, and hence invertible by a standard application of the Gershgorin Circle Theorem, e.g., see Theorem 6.1.10 in \\cite{Horn.Johnson:90}. It then follows from the implicit function theorem that $\\rho^*(\\lambda)$, which is the unique zero of $H(\\,\\cdot\\,,\\lambda)$, is continuous on the interval $(0,\\lambda_v^{\\max})$. Hence, also $f^*(\\lambda)=\\mu(\\rho^*(\\lambda))$ is continuous on $(0,\\lambda_v^{\\max})$, since it is the composition of two continuous functions. 
Moreover, since \n$$\sum_{e\in\mathcal E^+_v}f_e^*(\lambda)=\lambda\,,\qquad 0\le f_e^*(\lambda)\le f^{\max}_e\,,\qquad \forall e\in\mathcal E^+_v\,,\qquad \forall\lambda\in(0,\lambda_v^{\max})\,,$$\none gets that \n$$\lim_{\lambda\downarrow0}f_e^*(\lambda)=0\,,\qquad\lim_{\lambda\uparrow\lambda_v^{\max}} f_e^*(\lambda)=f_e^{\max}\,,$$ for all $e\in\mathcal E^+_v$. Now, one has that $\sum_{e\in\mathcal E^+_v}f_e^*(0)=0$, so that $$0=f_e^*(0)=\lim_{\lambda\downarrow0}f_e^*(\lambda)\,,\qquad\forall e\in\mathcal E^+_v\,.$$ Moreover, as previously shown, $$f_e^*(\lambda)=f_e^{\max}=\lim_{\lambda\uparrow\lambda_v^{\max}} f_e^*(\lambda)\,,\qquad \forall \lambda\ge\lambda_v^{\max}\,.$$ This completes the proof of continuity of $f^*(\lambda)$ on $[0,+\infty)$.\n\end{proof}\medskip\n\nWhile Lemma~\ref{lemmalocalexistence} ensures existence of a unique limit point for the local system (\ref{localsys}) with constant input $\lambda(t)\equiv\lambda$, the following lemma establishes a monotonicity property with respect to a time-varying input $\lambda(t)$. \n\begin{lemma}[Monotonicity of the local system]\label{lemma:monotone}\nLet $0\le v<n$, and let $\mathcal G$ be a locally responsive distributed routing policy. Let $\lambda^-(t)$ and $\lambda^+(t)$ be two local inputs such that $\lambda^-(t)\le\lambda^+(t)$ for all $t\ge0$, and let $\rho^-(t)$ and $\rho^+(t)$ be the solutions of the local dynamical system (\ref{localsys}) with inputs $\lambda^-(t)$ and $\lambda^+(t)$, respectively, and initial conditions such that $\rho^-_e(0)\le\rho^+_e(0)$ for every $e\in\mathcal E^+_v$. Then, \n\begin{equation}\label{eq:monotonicity}\rho^-_e(t)\le\rho^+_e(t)\,,\qquad\forall t\ge0\,,\quad\forall e\in\mathcal E^+_v\,.\end{equation}\n\end{lemma}\n\begin{proof}\nFor every $e\in\mathcal E^+_v$, let $\tau_e:=\inf\{t\ge0:\,\rho^+_e(t)<\rho^-_e(t)\}$, and let $\tau:=\min\{\tau_e:\,e\in\mathcal E^+_v\}$. Assume by contradiction that $\rho^-_e(t)> \rho^+_e(t)$ for some $t\ge0$, and $e\in\mathcal E^+_v$. Then, $\tau<+\infty$, and $\mathcal I:=\argmin\{\tau_e:\,e\in\mathcal E^+_v\}$ is a well defined nonempty subset of $\mathcal E^+_v$. Moreover, by continuity, one has that there exists some $\varepsilon>0$ such that, $\rho^-_i(\tau)=\rho^+_i(\tau)$, $\rho^-_i(t)>\rho^+_i(t)$, and $\rho^-_j(t)<\rho^+_j(t)$ for all $i\in\mathcal I$, $j\in\mathcal J$, and $t\in(\tau,\tau+\varepsilon)$, where $\mathcal J:=\mathcal E^+_v\setminus\mathcal I$. 
Using Lemma \\ref{lem:coop-ext}, one gets, for every $t\\in(\\tau,\\tau+\\varepsilon)$,\n$$\\begin{array}{rcl}\n0&\\ge&\n\\frac12\\sum_{e}\\sgn(\\rho^-_e(t)-\\rho^+_e(t))\\l(G^v_e(\\rho^-(t))-G^v_e(\\rho^+(t))\\r)\\\\[7pt]\n&=&\\frac12\\l(\\sum_{i}G^v_i(\\rho^-(t))-\\sum_{i}G^v_i(\\rho^+(t))\n-\\sum_{j}G^v_j(\\rho^-(t))+\\sum_{j}G^v_j(\\rho^+(t))\\r)\\\\[7pt]\n&=&\\sum_{i}G^v_i(\\rho^-(t))-\\sum_{i}G^v_i(\\rho^+(t))\\,,\\end{array}$$\nwhere the summation indices $e$, $i$, and $j$ run over $\\mathcal E^+_v$, $\\mathcal I$, and $\\mathcal J$, respectively. \nOn the other hand, Assumption \\ref{ass:flowfunction} implies that $\\mu_i(\\rho^-_i(t))\\ge\\mu_i(\\rho^+_i(t))$ for all $i\\in\\mathcal I$, $t\\in[\\tau,\\tau+\\varepsilon)$. Now, let $\\chi(t):=\\sum_{i\\in\\mathcal I} \\left( \\rho^-_i(t)-\\rho^+_i(t) \\right)\\,.$\nThen, for every $t\\in(\\tau,\\tau+\\varepsilon)$, one has \n$$\n\\begin{array}{rclcl}0&<&\n\\chi(t)-\\chi(\\tau)\\\\[7pt]&=&\\displaystyle\n\\int_\\tau^t\\lambda^-(s)\\sum\\nolimits_{i \\in \\mathcal I}\\l(G^v_i(\\rho^-(s))-G^v_i(\\rho^-(s))\\r)\\mathrm{d} s\\\\[7pt]\n&&\\displaystyle\n-\\int_\\tau^t(\\lambda^+(s)-\\lambda^-(s))\\sum\\nolimits_{i \\in \\mathcal I}G^v_i(\\rho^+(s))\\mathrm{d} s\n-\\int_\\tau^t\\sum\\nolimits_{i \\in \\mathcal I}\\l(\\mu_i(\\rho^-_i(s))-\\mu_i(\\rho^+_i(s))\\r)\\mathrm{d} s\\\\&\\le&0\\,,\n\\end{array}\n$$\nwhich is a contradiction. Then, necessarily (\\ref{eq:monotonicity}) has to hold true. \n\\end{proof}\\medskip\n\nThe following lemma establishes that the output of the local system (\\ref{localsys}) is convergent, provided that the input is convergent. \n\\begin{lemma}[Attractivity of the local dynamical system]\n\\label{lemma:attractivity}\nLet $0\\le v0$, and let $\\tau\\ge0$ be such that $|\\lambda(t)-\\lambda|\\le\\varepsilon$ for all $t\\ge\\tau$. 
For $t\\ge\\tau$, let $f^-(t)$ and $f^+(t)$ be the flow associated to the solutions of the local dynamical system (\\ref{localsys}) with initial condition $\\rho^-(\\tau)=\\rho^+(\\tau)=\\rho^v(\\tau)$, and constant inputs $\\lambda^-(t)\\equiv\\lambda^-:=\\max\\{\\lambda-\\varepsilon,0\\}$, and $\\lambda^+(t)\\equiv\\lambda+\\varepsilon$, respectively. From Lemma \\ref{lemma:monotone}, one gets that\n\\begin{equation}\\label{sandwitch}f^-_e(t)\\le f_e(t)\\le f^+_e(t)\\,,\\qquad \\forall t\\ge\\tau\\,,\\qquad\\forall e\\in\\mathcal E^+_v\\,.\\end{equation}\nOn the other hand, Lemma \\ref{lemmalocalexistence} implies that $f^-(t)$ converges to $f^*(\\lambda^-)$, and $f^+(t)$ converges to $f^*(\\lambda^+)$, as $t$ grows large. Hence, passing to the limit of large $t$ in (\\ref{sandwitch}) yields \n$$f_e^*(\\lambda^-)\\le\\liminf_{t\\to+\\infty}f_e(t)\\le\\limsup_{t\\to+\\infty}f_e(t)\\le f^*_e(\\lambda+\\varepsilon)\\,,\\qquad\\forall e\\in\\mathcal E^+_v\\,.$$\nForm the arbitrariness of $\\varepsilon >0$, and the continuity of $f^*(\\lambda)$ as a function of $\\lambda$ by Lemma~\\ref{lemma:f*(lambda)}, it follows that $f(t)$ converges to $f^*(\\lambda)$, as $t$ grows large, which proves the claim. \n\\end{proof}\\medskip\n\nWe are now ready to prove Theorem \\ref{thm:uniquelimitflow} by showing that, for any initial condition $\\rho(0)\\in\\mathcal R$, the solution of the dynamical flow network (\\ref{dynsyst}) satisfies \n\\begin{equation}\\label{tildefe*}\\lim_{t\\to+\\infty}f_e(t)=f_e^*\\,,\\end{equation} for all $e\\in\\mathcal E$.\nWe shall prove this by showing via induction on $v=0,1,\\ldots,n-1$ that, for all $e\\in\\mathcal E^+_v$, there exists $f_e^*\\in[0,f_e^{\\max}]$ such that (\\ref{tildefe*}) holds true. First, observe that, thanks to Lemma \\ref{lemmalocalexistence}, this statement is true for $v=0$, since the inflow at the origin is constant. 
Now, assume that the statement is true for all $0\le v<v^*$, for some $1\le v^*<n$. Then, the inflow at node $v^*$, $\lambda_{v^*}(t)=\sum_{e\in\mathcal E^-_{v^*}}f_e(t)$, converges to $\lambda^*_{v^*}:=\sum_{e\in\mathcal E^-_{v^*}}f^*_e$ as $t$ grows large, so that Lemma \ref{lemma:attractivity} implies that (\ref{tildefe*}) holds true, with $f^*_e:=f^*_e(\lambda^*_{v^*})$, for all $e\in\mathcal E^+_{v^*}$. This proves the statement for $v=v^*$ and, by induction, completes the proof of Theorem \ref{thm:uniquelimitflow}. \n\n\section{Proof of Theorem \ref{maintheo-weakstability}}\label{sec:proof2}\nThanks to Proposition \ref{propUB}, it is sufficient to show that $\gamma_0(f^{\circ},\mathcal G)\ge C(\mathcal N)$. We start with the following lemma, providing a lower bound on the equilibrium flows of the perturbed dynamical flow network. \medskip\n\begin{lemma}[Lower bound on the equilibrium flow]\label{lemma:enoughflow}\nLet $\mathcal N$ be a flow network satisfying Assumptions \ref{ass:acyclicity} and \ref{ass:flowfunction}, and let $\mathcal G$ be a locally responsive distributed routing policy such that $G^v_e(\rho^v)>0$ for all $0\le v<n$, $e\in\mathcal E^+_v$, $\rho^v\in\mathcal R_v$. Then, for every $\theta\ge1$, there exists $\beta_{\theta}\in(0,1]$ such that, for every admissible perturbation $\tilde{\mathcal N}$ with stretching coefficient less than or equal to $\theta$, the equilibrium flow of the perturbed dynamical flow network satisfies \n$$\tilde f^*_e\ge\min\l\{\tilde f^{\max}_e\/2\,,\ \beta_{\theta}\tilde\lambda^*_v\r\}\,,\qquad\forall\,0\le v<n\,,\quad\forall e\in\mathcal E^+_v\,.$$\n\end{lemma}\medskip\n\begin{proof}\nThe claim trivially holds true if $\tilde f_e^*>\tilde f_e^{\max}\/2$ for all $e\in\mathcal E$. Therefore, let us assume that there exists some link $e\in\mathcal E$ for which $\tilde f_e^*\le\tilde f_e^{\max}\/2$, and let $0\le v<n$ be the node such that $e\in\mathcal E^+_v$. Define $\rho^{\theta}\in\mathcal R_v$ by $\rho^\theta_j=0$ for all $j\in\mathcal E^+_v$, $j\ne e$, and $\rho^{\theta}_e=\theta{\rho}^{\mu}_e$, where recall that ${\rho}^{\mu}_e$ is the median density of the flow function $\mu_e$. Since the stretching coefficient of $\tilde{\mathcal N}$ is less than or equal to $\theta$, one has that the median densities of the perturbed and the unperturbed flow functions satisfy $\tilde{\rho}^{\mu}_e\le\theta{\rho}^{\mu}_e$. This and the fact that $\tilde f_e^*\le\tilde f_e^{\max}\/2$ imply that $\tilde\rho_e^*\le\tilde{\rho}^{\mu}_e\le\rho^{\theta}_e$, while clearly $\tilde\rho_j^*\ge0=\rho^{\theta}_j$ for all $j\in\mathcal E^+_v$, $j\ne e$. Now, let $\beta_{\theta}:=G^v_e(\rho^{\theta})$, and observe that, thanks to the assumption on the strict positivity of $G^v_e(\rho^v)$, one has $\beta_{\theta}>0$. Then, from Lemma \ref{lem:coop-ext} one gets that \n\begin{equation}\label{Gveineq}G^v_e(\tilde\rho^*)=\frac12\l(G^v_e(\tilde\rho^*)+1-\sum\nolimits_{j\ne e}G^v_j(\tilde\rho^*)\r)\ge\frac12\l(G^v_e(\rho^{\theta})+1-\sum\nolimits_{j\ne e}G^v_j(\rho^{\theta})\r)= G^v_e(\rho^{\theta})=\beta_{\theta}\,.\end{equation}\nOn the other hand, since $\tilde f_e^*\le\tilde f_e^{\max}\/2<\tilde f^{\max}_e$, Lemma \ref{lemmalocalexistence} implies that necessarily $\tilde\lambda_v^*G^v_e(\tilde\rho^*)=\tilde f_e^*$. The claim now follows by combining this and (\ref{Gveineq}). 
\n\\end{proof}\n\nAs a consequence of Lemma \\ref{lemma:enoughflow}, we now prove the following result showing that the dynamical flow network is partially transferring and providing a lower bound on its weak resilience: \n\\begin{lemma} \\label{lemma:LBgammathetaalpha}\nLet $\\mathcal N$ be a flow network satisfying Assumptions \\ref{ass:acyclicity} and \\ref{ass:flowfunction}, $\\lambda_0\\ge0$ a constant inflow, and $\\mathcal G$ a locally responsive distributed routing policy such that $G^v_e(\\rho^v)>0$ for all $0\\le v C(\\mathcal N)-2 \\alpha |\\mathcal E|\\beta_{\\theta}^{1-n}\\lambda_0\\,,$$ thus contradicting the assumption (\\ref{deltahypothesis}). Hence, necessarily there exists $e\\in\\mathcal E^+_0$ such that $\\tilde f^*_e\\ge\\lambda_0\\alpha\\beta_{\\theta}^{1-n}$, and choosing $v_1$ to be the unique node in $\\mathcal V$ such that $e\\in\\mathcal E^-_{v_1}$, one sees that (\\ref{existseinDv}) holds true with $j=1$.\n\nNow, fix some $1< j^*\\le k$, and assume that (\\ref{existseinDv}) holds true for every $1\\le j\\lambda_0\\alpha\\beta_{\\theta}^{-n}\\ge\\lambda_0\\alpha\\beta_{\\theta}^{j^*-1-n}\\,.\\end{equation}\nLet $\\mathcal U:=\\{v_0,v_1,\\ldots,v_{j^*-1}\\}$ and $\\mathcal E_{\\mathcal U}^+\\subseteq\\mathcal E$ be the set of links with tail node in $\\mathcal U$ and head node in $\\mathcal V\\setminus\\mathcal U$. Assume by contradiction that \n$$\\tilde f^*_e<\\lambda_0\\alpha\\beta_{\\theta}^{j^*-n}\\,,\\qquad \\forall e\\in\\mathcal E^+_{\\mathcal U}\\,.$$ Thanks to (\\ref{ineq1}) and (\\ref{ineq2}), this would imply that, $\\tilde f^*_e<\\beta_{\\theta}\\tilde\\lambda_j^*$, for every $0\\le j C(\\mathcal N)-2 \\alpha |\\mathcal E|\\beta_{\\theta}^{1-n}\\lambda_0\\,,$$ thus contradicting the assumption (\\ref{deltahypothesis}). 
Hence, necessarily there exists $e\in\mathcal E^+_{\mathcal U}$ such that $\tilde f^*_e\ge\lambda_0\alpha\beta_{\theta}^{j^*-n}$, and choosing $v_{j^*}$ to be the unique node in $\mathcal V$ such that $e\in\mathcal E^-_{v_{j^*}}$, one sees that (\ref{existseinDv}) holds true with $j=j^*$. Iterating this argument until $v_{j^*}=n$ proves the claim. \n\end{proof}\medskip\n \nIt is now easy to see that Lemma \ref{lemma:LBgammathetaalpha} implies that $\lim_{\alpha\downarrow0}\gamma_{\alpha,\theta}\ge C(\mathcal N)$ for every $\theta\ge1$, thus showing that $\gamma_0(f^{\circ},\mathcal G)\ge C(\mathcal N)$. Combined with Proposition \ref{propUB}, this shows that $\gamma_0(f^{\circ},\mathcal G)=C(\mathcal N)$, thus completing the proof of Theorem \ref{maintheo-weakstability}. \n\n\n\section{Conclusion}\n\label{sec:conclusion}\nIn this paper, we studied robustness properties of dynamical flow networks, where the dynamics on every link is driven by the difference between the inflow, which depends on the upstream routing decisions, and the outflow, which depends on the particle density, on that link. We proposed a class of locally responsive distributed routing policies that rely only on local information about the network's current particle densities and yield the maximum weak resilience with respect to adversarial disturbances that reduce the flow functions of the links of the network. We also showed that the weak resilience of the network in that case is equal to the min-cut capacity of the network, and that it is independent of the local information constraint and the initial flow. \nStrong resilience of dynamical flow networks is studied in the companion paper \cite{PartII}. \n\n \n\n \bibliographystyle{ieeetr}%\n 