diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzozcg" "b/data_all_eng_slimpj/shuffled/split2/finalzzozcg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzozcg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro} \n\nCommunication is essential for our society. \nHumans use language to communicate ideas, which has given rise to complex social structures, and scientists have observed either gestural or vocal communication in other animal groups, complexity of which increases with the complexity of the social structure of the group \\cite{tomasello_origins_2010}. \nCommunication helps to achieve complex goals by enabling cooperation and coordination \\cite{ackley:alife4, KamChuen:AGENTS:01}. \nAdvances in our ability to store and transmit information over time and long distances have greatly expanded our capabilities, and allows us to turn the world into the connected society that we observe today.\nCommunication technologies are at the core of this massively complex system. \n\nCommunication technologies are built upon fundamental mathematical principles and engineering expertise. \nThe fundamental quest in the design of these systems have been to deal with various imperfections in the communication channel (e.g., noise and fading) and the interference among transmitters. \nDecades of research and engineering efforts have produced highly advanced networking protocols, modulation techniques, waveform designs and coding techniques that can overcome these challenges quite effectively. \nHowever, this design approach ignores the aforementioned core objective of communication in enabling coordination and cooperation. \nTo some extent, we have separated the design of a communication network that can reliably carry signals from one point to another from the `language' that is formed to achieve coordination and cooperation among agents. \n\nThis engineering approach was also highlighted by Shannon and Weaver in \\cite{ShannonWeaver49} by organizing the communication problem into three ``levels\": They described level A as the \\textit{technical problem}, which tries to answer the question ``How accurately can the symbols of communication be transmitted?\". \nLevel B is referred to as the \\textit{semantic problem}, and asks the question ``How precisely do the transmitted symbols convey the desired meaning?\". \nFinally, Level C, called the \\textit{effectiveness problem}, strives to answer the question ``How effectively does the received meaning affect conduct in the desired way?\". \nAs we have described above, our communication technologies mainly deal with Level A, ignoring the semantics or the effectiveness problems. \nThis simplifies the problem into the transmission of a discrete message or a continuous waveform over a communication channel in the most reliable manner. \nThe semantics problem deals with the meaning of the messages, and is rather abstract. 
\nThere is a growing interest in the semantics problem in the recent literature \\cite{Guler:TCCN:18, popovski2019semanticeffectiveness, kountouris2020semanticsempowered, xie2020deep, strinati20206g}.\nHowever, these works typically formulate the semantics as an end-to-end joint source-channel coding problem, where the reconstruction objective can be distortion with respect to the original signal \\cite{bourtsoulatze_deep_2018, weng2020semantic}, or a more general function that can model some form of `meaning' \\cite{Guler:TCCN:18, sreekumar_distributed_2020, Jankowski:JSAC:21, Gunduz:CL:20}, which goes beyond reconstructing the original signal\\footnote{To be more precise, remote hypothesis testing, classification, or retrieval problems can also be formulated as end-to-end joint source-channel coding problems, albeit with a non-additive distortion measure.}. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\linewidth]{Images\/MARLwComms.png}\n \\caption{An illustration of a MARL problem with noisy communication between the agents, e.g., agents communicating over a shared wireless channel. The emerging communication scheme should not only allow the agents to better coordinate and cooperate to maximize their rewards, but also mitigate the adverse effects of the wireless channel, such as noise and interference.}\n \\label{fig:MARLwComms}\n\\vspace{-0.8cm}\n\\end{figure}\n\nIn this paper, we deal with the `effectiveness problem', which generalizes the problems in both Level A and Level B. In particular, we formulate a multi-agent problem with noisy communications between the agents, where the goal of communications is to help the agents better cooperate and achieve a common goal. See Fig. \\ref{fig:MARLwComms} for an illustration of a multi-agent grid world, where agents can communicate through noisy wireless links. \nIt is well known that multi-agent reinforcement learning (MARL) problems are notoriously difficult, and they remain a topic of active research. Originally, these problems were approached by training each agent independently, as in a standard single-agent reinforcement learning (RL) problem, while treating the other agents as part of the state of the environment. Consensus and cooperation are achieved through common or correlated reward signals. However, this approach leads to overfitting of policies due to the limited local observations of each agent, and it relies on the other agents not varying their policies \\cite{lanctot_unified_2017}. It has been observed that these limitations can be overcome by leveraging communication between the agents \\cite{KamChuen:AGENTS:01, Balch:AR:94}. \n\nRecently, there has been significant interest in the \\textit{emergence of communication} among agents within the RL literature \\cite{foerster_learning_2016, jiang_learning_2018, jaques_social_2019, das_tarmac_2020}.\nThese works consider MARL problems, in which agents have access to a dedicated communication channel, and the objective is to learn a communication protocol, which can be considered as a `language', to achieve the underlying goal, typically translated into maximizing a specific reward function. \nThis corresponds to Level C, as described by Shannon and Weaver in \\cite{ShannonWeaver49}, where the agents change their behavior based on the messages received over the channel in order to maximize their reward. 
\nHowever, the focus of the aforementioned works is the emergence of communication protocols that, within the limited communication resources available, can provide the desired impact on the behavior of the agents; unlike Shannon and Weaver, these works ignore the physical layer characteristics of the channel. \n\nOur goal in this work is to consider the effectiveness problem by taking into account both the channel noise and the end-to-end learning objective. \nIn this problem, the goal of communication is not ``reproducing at one point either exactly or approximately a message selected at another point'' as stated by Shannon in \\cite{ShannonWeaver49}, which is the foundation of the communication and information theoretic formulations that have been studied over the last seven decades. \nInstead, the goal is to enable cooperation in order to improve the objective of the underlying multi-agent game. As we will show later in this paper, the codes that emerge from the proposed framework can be very different from those that would be used for reliable communication of messages. \n\nWe formulate this novel communication problem as a MARL problem, in which the agents have access to a noisy communication channel. More specifically, we formulate this as a multi-agent partially observable Markov decision process (POMDP), and construct RL algorithms that learn policies governing both the actions of the agents in the environment and the signals they transmit over the channel. A communication protocol in this scenario should aim to enable cooperation and coordination among agents in the presence of channel noise. Therefore, the emerging modulation and coding schemes must not only be capable of error correction\/compensation, but also enable agents to share their knowledge of the environment and\/or their intentions. We believe that this novel formulation opens up many new directions for the design of communication protocols and codes that will be applicable in many multi-agent scenarios, from teams of robots and platoons of autonomous cars \\cite{wang_networking_2019} to drone swarm planning \\cite{campion_uav_2018}. \n\n\n\n\nWe summarize the main contributions of this work as follows: \n\n\\begin{enumerate}\n \\item We propose a novel formulation of the ``effectiveness problem'' in communications, where agents communicate over a noisy communication channel in order to achieve better coordination and cooperation in a MARL framework. This can be interpreted as a \\textit{joint communication and learning approach} in the RL context \\cite{Gunduz:CL:20}. The current paper is an initial study of this general framework, focusing on scenarios that involve only point-to-point communications for simplicity. More involved multi-user communication and coordination problems will be the subject of future studies. \n \n \\item The proposed formulation generalizes the recently studied ``learning to communicate'' framework in the MARL literature \\cite{foerster_learning_2016, jiang_learning_2018, jaques_social_2019, das_tarmac_2020}, where the underlying communication channels are assumed to be error-free. \n This framework has been used to argue about the emergence of natural languages \\cite{lazaridou_multi-agent_2017, lazaridou2020multiagent}; however, in practice, there is inherent noise in any communication medium, particularly in human\/animal communications. Indeed, languages have evolved to deal with such noise. For example, Shannon estimated that the English language has approximately 75\\% redundancy. 
\n Such redundancy provides error correction capabilities. \n Hence, we argue that the proposed framework better models realistic communication problems, and the emerging codes and communication schemes can help us better understand the underlying structure of natural languages. \n \n \\item The proposed framework also generalizes communication problems at Level A, which have been the target of most communication protocols and codes that have been developed in the literature. \n Channel coding, source coding, and joint source-channel coding problems, as well as their multi-user extensions, can be obtained as special cases of the proposed framework. \n The proposed deep reinforcement learning (DRL) framework provides alternative approaches to the design of codes and communication schemes for these problems that can outperform existing ones. \n We highlight that there are very few practical code designs in the literature for most multi-user communication problems, and the proposed framework and the exploitation of deep representations and gradient-based optimization in DRL can provide a scalable and systematic methodology to make progress in these challenging problems. \n \n \\item We study a particular case of the proposed general framework as an example, which reduces to a point-to-point communication problem. \n In particular, we show that any single-agent Markov decision process (MDP) can be converted into a multi-agent partially observable MDP (MA-POMDP) with a noisy communication link between the two agents. \n We consider the binary symmetric channel (BSC), the additive white Gaussian noise (AWGN) channel, and the bursty noise (BN) channel for the noisy communication link, and solve the MA-POMDP problem from the perspective of each agent by treating the other agent as part of the environment.\n We employ deep Q-networks (DQN) \\cite{mnih_human-level_2015} and deep deterministic policy gradient (DDPG) \\cite{lillicrap_continuous_2019} to train the agents.\n Substantial performance improvement is observed for the resultant policies over those learned by considering the cooperation and communication problems separately.\n \n \n \\item We then present the joint modulation and channel coding problem as an important special case of the proposed framework. \n In recent years, there has been a growing interest in using machine learning techniques to design practical channel coding and modulation schemes \\cite{Nachmani:STSP:18, Dorner:Asilomar:17, Felix:SPAWC:18, bourtsoulatze_deep_2018, Kurka:JSAIT:20, aoudia_model-free_2019}. \n However, with the exception of \\cite{aoudia_model-free_2019}, most of these approaches assume that the channel model is known and differentiable, allowing the use of supervised training by directly backpropagating through the channel using the channel model. In this paper, we learn to communicate over an unknown channel solely based on the reward function by formulating it as an RL problem. The proposed DRL framework goes beyond the method employed in \\cite{aoudia_model-free_2019}, which treats the channel as a random variable and numerically approximates the gradient of the loss function. 
It is shown through numerical examples that the proposed DRL techniques, employing the DDPG \\cite{lillicrap_continuous_2019} and actor-critic \\cite{konda_actor-critic_nodate} algorithms, significantly reduce the block error rate (BLER) of the resultant code.\n \n\\end{enumerate}\n\n\n\n\n\n\n\n\n\n\n\\section{Related Works}\n\\label{sec:related_works}\n\nThe study of communication for multi-agent systems is not new \\cite{wagner_progress_2016}. \nHowever, due to the success of deep neural networks (DNNs) in reinforcement learning (RL), this problem has received renewed interest in the context of DNNs \\cite{lazaridou_multi-agent_2017} and deep RL (DRL) \\cite{foerster_learning_2016, sukhbaatar_learning_2016, havrylov_emergence_2017}, where partially observable multi-agent problems are considered. \nIn each case, the agents, in addition to taking actions that impact the environment, can also communicate with each other via a limited-capacity communication channel. \nParticularly, in \\cite{foerster_learning_2016}, two approaches are considered: reinforced inter-agent learning (RIAL), where two centralized Q-learning networks learn to act and communicate, respectively, and differentiable inter-agent learning (DIAL), where communication feedback is provided via backpropagation of gradients through the channel during training, while the communication between agents is restricted during execution. \nSimilarly, in \\cite{wang_r-maddpg_2020,lowe2017maddpg}, the authors propose a \\textit{centralized learning, decentralized execution} approach, where a central critic is used to learn the state-action values of all the agents and use those values to train the individual policies of the agents. \nAlthough they also consider the transmitted messages as part of the agents' actions, the communication channel is assumed to be noiseless.\n\nCommNet \\cite{sukhbaatar_learning_2016} attempts to leverage communications in cooperative MARL by using multiple continuous-valued transmissions at each time step to make decisions for all agents. \nEach agent broadcasts its message to every other agent, and the averaged message received by each agent forms part of the input.\nHowever, this solution lacks scalability, as it depends on a centralized network that treats the whole problem as a single RL problem. \nSimilarly, BiCNet \\cite{peng_multiagent_2017} utilizes recurrent neural networks to connect each individual agent's policy with a centralized controller that aggregates the hidden states of the agents, which act as communication messages. 
\nThey also propose multi-stage communication, where multiple rounds of communication take place before an action is taken.\n\nIt is important to note that, with the exception of \\cite{mostaani_learning-based_2019}, all of the prior works discussed above rely on error-free communication channels.\nMARL with noisy communications is considered in \\cite{mostaani_learning-based_2019}, where two agents placed on a grid world aim to coordinate to step on the goal square simultaneously. \nHowever, for the particular problem presented in \\cite{mostaani_learning-based_2019}, it can be shown that even if the agents are trained independently without any communication at all, the total discounted reward would still be higher than the average reward achieved by the scheme proposed in \\cite{mostaani_learning-based_2019}.\n\n\n\n\\section{Problem Formulation}\n\\label{sec:problem_formulation}\n\nWe consider a multi-agent partially observable Markov decision process (MA-POMDP) with noisy communications. \nConsider first a Markov game with $N$ agents $(\\mathcal{S}, \\{\\mathcal{O}_i\\}_{i=1}^N, \\{\\mathcal{A}_i\\}_{i=1}^N,$ $ P, r)$, where $\\mathcal{S}$ represents all possible configurations of the environment and agents, \n$\\mathcal{O}_i$ and $\\mathcal{A}_i$ are the observation and action sets of agent $i$, respectively, $P$ is the transition kernel that governs the environment, and $r$ is the reward function. \nAt each step $t$ of this Markov game, agent $i$ has a partial observation of the state $o_i^{(t)}\\in \\mathcal{O}_i$, and takes action $a_i^{(t)} \\in \\mathcal{A}_i$, $\\forall i$. \nThen, the state of the MA-POMDP transitions from $s^{(t)}$ to $s^{(t+1)}$ according to the joint actions of the agents following the transition probability $P(s^{(t+1)}|s^{(t)}, \\mathbf{a}^{(t)})$, where $\\mathbf{a}^{(t)} = (a_1^{(t)}, \\ldots, a_N^{(t)})$. \nObservations in the next time instant follow the conditional distribution $\\mathrm{Pr}(o^{(t+1)}|s^{(t)}, \\mathbf{a}^{(t)})$. \nWhile, in general, each agent can have a separate reward function, we consider herein the fully cooperative setting, where the agents receive the same team reward $r^{(t)} = r(s^{(t)}, \\mathbf{a}^{(t)})$ at time $t$.\n\nIn order to coordinate and maximize the total reward, the agents are endowed with a noisy communication channel, which is orthogonal to the environment.\nThat is, the environment transitions depend only on the environment actions, and the only impact of the communication channel is that the actions of the agents can now depend on the past received messages as well as the past observations and rewards. \nWe assume that the communication channel is governed by the conditional probability distribution $P_c$, and we allow the agents to use the channel $M$ times at each time $t$. \nHere, $M$ can be considered as the \\textit{channel bandwidth}. \nLet the signals transmitted and received by agent $i$ at time step $t$ be denoted by $\\mathbf{m}_i^{(t)} \\in \\mathcal{C}_t^M$ and $\\hat{\\mathbf{m}}_i^{(t)} \\in\\mathcal{C}_r^M$, respectively, where $\\mathcal{C}_t$ and $\\mathcal{C}_r$ denote the input and output alphabets of the channel, which can be discrete or continuous.\nWe assume for simplicity that the input and output alphabets of the channel are the same for all the agents. 
Channel inputs and outputs at time $t$ are related through the conditional distribution $P_c\\big(\\hat{\\mathbf{M}}^{(t)} | \\mathbf{M}^{(t)} \\big) =\\mathrm{Pr}\\big(\\hat{\\mathbf{M}} = \\{\\hat{\\mathbf{m}}_i^{(t)}\\}_{i=1}^N \\big|\\mathbf{M}=\\{\\mathbf{m}_i^{(t)}\\}_{i=1}^N \\big)$, where $\\mathbf{M} = (\\mathbf{m}_1, \\ldots, \\mathbf{m}_N)\\in\\mathbb{R}^{N\\times M}$ denotes the matrix of transmitted signals, with each row $\\mathbf{m}_i$ being the vector of symbols (i.e., the codeword) chosen by agent $i$, and $\\hat{\\mathbf{M}} = (\\hat{\\mathbf{m}}_1,\\ldots,\\hat{\\mathbf{m}}_N)\\in\\mathbb{R}^{N\\times M}$ denotes the corresponding matrix of received signals.\nThat is, the received signal of agent $i$ over the communication channel is a random function of the signals transmitted by all the agents, characterized by the conditional distribution of the multi-user communication channel.\nIn our simulations, we will consider independent and identically distributed channels as well as a channel with Markov noise, but our formulation is general enough to take into account arbitrarily correlated channels, both across time and users.\n\nWe can define a new Markov game with noisy communications, where the actions of agent $i$ now consist of two components: the environment action $a_i^{(t)}$, as before, and the signal $\\mathbf{m}_i^{(t)}$ to be transmitted over the channel. \nEach agent, in addition to taking actions that affect the state of the environment, can also send signals to other agents over $M$ uses of the noisy communication channel. \nThe observation of each agent is now given by $(o_i^{(t)}, \\hat{\\mathbf{m}}_i^{(t)})$; that is, a combination of the partial observation of the environment as before and the channel output signal. \n\n\n\n\n\nAt each time step $t$, agent $i$ observes $(o_i^{(t)}, \\hat{\\mathbf{m}}_i^{(t)})$ and selects an action $(a_i^{(t)}, \\mathbf{m}_i^{(t)})$ according to its policy $\\pi_i:\\mathcal{O}_i \\times \\mathcal{C}_r^M \\rightarrow \\mathcal{A}_i \\times \\mathcal{C}_t^M$. \nThe overall policy over all agents can be defined as $\\Pi:\\mathcal{S}\\rightarrow\\mathcal{A}$.\nThe objective of the Markov game with noisy communications is to maximize the discounted sum of rewards \n\\begin{equation}\n V_\\Pi(s)=\\mathbb{E}_\\Pi\\Bigg[\\sum_{t=1}^\\infty\\gamma^{t-1}r^{(t)}\\Bigg|s^{(1)}=s\\Bigg]\n \\label{eq:value_function}\n\\end{equation}\nfor any initial state $s\\in\\mathcal{S}$, where $\\gamma\\in[0,1)$ is the discount factor that ensures convergence of the sum.\nWe also define the state-action value function, also referred to as the Q-function, as \n\\begin{equation}\n Q_\\Pi(s^{(t)},a^{(t)})=\\mathbb{E}_\\Pi\\Bigg[\\sum_{i=t}^\\infty\\gamma^{i-t}r^{(i)}\\Bigg|s^{(t)},a^{(t)}\\Bigg].\n \\label{eq:q_function}\n\\end{equation}\n\n\n\n\nIn the subsequent sections, we will show that this formulation of the MA-POMDP with noisy communications lends itself to multiple problem domains where communication is vital to achieve non-trivial total reward values, and we devise methods that jointly learn to collaborate and communicate despite the noise in the channel. \nAlthough the introduced MA-POMDP framework with communications is fairly general and can model any multi-agent scenario with complex multi-user communications, our focus in this paper will be on point-to-point communications. \nThis will allow us to expose the benefits of the joint communication and learning design, without having to deal with the challenges of multi-user communications. 
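\n\nTo make the order of interaction concrete, the following minimal Python sketch (purely illustrative; the class and method names are ours and do not correspond to any existing library) implements one step of the Markov game with noisy communications, instantiated with an AWGN channel as an example:\n\\begin{verbatim}\nimport random\n\ndef awgn_channel(tx, sigma):\n    # P_c: every channel use of every agent is corrupted by\n    # i.i.d. zero-mean Gaussian noise of standard deviation sigma.\n    return [[x + random.gauss(0.0, sigma) for x in m] for m in tx]\n\ndef step(env, agents, sigma):\n    # Policy pi_i: each agent maps (o_i, m_hat_i) to an environment\n    # action a_i and a channel input m_i (M symbols).\n    actions = [ag.act(ag.obs, ag.rx) for ag in agents]\n    tx = [ag.message(ag.obs, ag.rx) for ag in agents]\n    # The channel only affects what agents observe at the next step.\n    rx = awgn_channel(tx, sigma)\n    # The environment transitions on the environment actions alone,\n    # and all agents receive the same team reward.\n    next_obs, reward = env.transition(actions)\n    for ag, o, m_hat in zip(agents, next_obs, rx):\n        ag.obs, ag.rx = o, m_hat\n    return reward\n\\end{verbatim}\nHere, \\texttt{env} and the agent objects are assumed interfaces; the sketch only fixes the order of operations: act and transmit, corrupt the transmitted signals, then transition and share the team reward.\n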
\nExtensions of the proposed framework to scenarios that would involve multi-user communication channels will be studied in future work. \n\n\\section{Guided Robot with Point-to-Point Communications}\n\\label{subsec:eg_prob_guide_scout}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\linewidth]{Images\/grid_world.pdf}\n \\caption{Illustration of the guided robot problem in grid world, showing the set $\\mathcal{A}_2$ of 16 possible actions the scout agent can take and the corresponding hand-crafted (HC) codewords.}\n \\label{fig:grid_world}\n\\vspace{-0.8cm}\n\\end{figure}\n\nIn this section, we consider a single-agent MDP and turn it into a MA-POMDP problem by dividing the single agent into two separate agents, a \\textit{guide} and a \\textit{scout}, which are connected through a noisy communication channel.\nIn this formulation, we assume that the guide observes the state of the original MDP perfectly, but cannot take actions on the environment directly. \nConversely, the scout can take actions on the environment, but cannot observe the environment state. \nTherefore, the guide communicates with the scout through a noisy communication channel, and the scout has to take actions based on the signals it receives over this channel. \nThe scout can be considered as a robot remotely controlled by the guide agent, which has sensors to observe the environment.\n\nWe consider this particular setting since it clearly exposes the importance of communication, as the scout depends solely on the signals received from the guide. \nWithout the communication channel, the scout is limited to purely random actions independent of the current state. \nMoreover, this scenario also allows us to quantify the impact of the channel noise on the overall performance, since we recover the original single-agent MDP when the communication channel is perfect; that is, if any desired message can be conveyed over the channel in a reliable manner. \nTherefore, if the optimal reward for the original MDP can be determined, this serves as an upper bound on the reward of the MA-POMDP with noisy communications. \n\nAs an example to study the proposed framework and to develop and test numerical algorithms aiming to solve the obtained MA-POMDP problem, we consider a grid world of size $L\\times L$, denoted by $\\mathcal{L}= [L]\\times[L]$, where $[L]=\\{0,1,\\dots,L-1\\}$. We denote the scout position at time step $t$ by $p_s^{(t)}=(x_s^{(t)},y_s^{(t)})\\in\\mathcal{L}$. \nAt each time instant, the scout can take one action from the set of 16 possible actions $\\mathcal{A}=\\{[1,0],[-1,0],[0,1],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1],[2,0],$ $[-2,0],[0,2],[0,-2],[2,2],[-2,2],[-2,-2],[2,-2]\\}$. See Fig. \\ref{fig:grid_world} for an illustration of the scout and the 16 actions it can take. If the action taken by the scout would end up in a cell outside the grid world, the scout remains in its original location. \nThe transition probability kernel of this MDP is specified as follows: after each action, the scout moves to the intended target location with probability (w.p.) $1-\\delta$, and to a random neighbor of that location w.p. $\\delta$. \nThat is, the scout position evolves as $p_s^{(t+1)} = p_s^{(t)} + a^{(t)}$ w.p. $1-\\delta$, and $p_s^{(t+1)} = p_s^{(t)} + a^{(t)} + z^{(t)}$ w.p. $\\delta$, where $z^{(t)}$ is uniformly distributed over the set $\\{[1,0],[1,1], [0,1], [-1,1], [-1,0],[0,-1],[-1,-1],[1,-1] \\}$. 
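\n\nAs a concrete illustration, the short Python sketch below (illustrative only; the helper names are ours) implements these movement dynamics, including the slip probability $\\delta$ and the boundary rule:\n\\begin{verbatim}\nimport random\n\nACTIONS = [(1,0),(-1,0),(0,1),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1),\n           (2,0),(-2,0),(0,2),(0,-2),(2,2),(-2,2),(-2,-2),(2,-2)]\nNEIGHBORS = [(1,0),(1,1),(0,1),(-1,1),(-1,0),(0,-1),(-1,-1),(1,-1)]\n\ndef move(pos, action, L, delta):\n    dx, dy = ACTIONS[action]\n    # W.p. delta the scout lands on a random neighbor of the target.\n    if random.random() < delta:\n        zx, zy = random.choice(NEIGHBORS)\n        dx, dy = dx + zx, dy + zy\n    x, y = pos[0] + dx, pos[1] + dy\n    # Moves that would leave the L x L grid keep the scout in place.\n    if 0 <= x < L and 0 <= y < L:\n        return (x, y)\n    return pos\n\\end{verbatim}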
\n\nThe objective of the scout is to find the treasure, located at $p_g=(x_g,y_g)\\in\\mathcal{L}$, as quickly as possible. \nWe assume that the initial position of the scout and the location of the treasure are random, and are not the same. \nThe scout takes instructions from the guide, which observes the grid world and utilizes the noisy communication channel $M$ times to transmit signal $\\mathbf{m}^{(t)}$ to the scout, which observes $\\hat{\\mathbf{m}}^{(t)}$ at the output of the channel.\nTo put it in the context of the MA-POMDP defined in Section \\ref{sec:problem_formulation}, agent 1 is the guide, with observable state $o_1^{(t)} = s^{(t)}$, where $s^{(t)}=(p_s^{(t)},p_g)$, and action set $\\mathcal{A}_1=\\mathcal{C}_t^M$. \nAgent 2 is the scout, with observation $o_2^{(t)} = \\hat{\\mathbf{m}}^{(t)}$ and action set $\\mathcal{A}_2 = \\mathcal{A}$ (or, more precisely, $o_1^{(t)} = (s^{(t)},\\emptyset)$, $o_2^{(t)} = (\\emptyset,\\hat{\\mathbf{m}}_2^{(t)})$). \nWe define the reward function as follows to encourage the agents to collaborate to find the treasure as quickly as possible:\n\\begin{equation}\n r^{(t)}=\\begin{cases}\n 10,~&\\text{if } p_s^{(t)}=p_g,\\\\\n -1,~&\\text{otherwise}.\n \\end{cases}\n\\end{equation}\nThe game terminates when $p_s^{(t)}=p_g$.\n\nWe should highlight that, despite the simplicity of the problem, the original MDP is not a trivial one when both the initial state of the agent and the target location are random, as it has a rather large state space, and learning the optimal policy requires a long training process in order to observe all possible agent and target location pairs sufficiently many times. In order to simplify the learning of the optimal policy, and to focus on learning the communication scheme, we will pay special attention to the scenario where $\\delta=0$. \nThis corresponds to the case in which the underlying MDP is deterministic, and it is not difficult to see that the optimal solution to this MDP is to take the shortest path to the treasure.\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\linewidth]{Images\/framework_grid_world.pdf}\n \\caption{Information flow between the guide and the scout.}\n \\label{fig:framework_grid_world}\n\\vspace{-0.8cm}\n\\end{figure}\n\nWe consider three types of channel distributions: the BSC, the AWGN channel, and the BN channel. \nIn the BSC case, we have $\\mathcal{C}_t = \\{-1, +1\\}$. \nFor the AWGN channel and the BN channel, we have $\\mathcal{C}_t = \\{-1, +1\\}$ if the input is constrained to binary phase shift keying (BPSK) modulation, or $\\mathcal{C}_t =\\mathbb{R}$ if no limitation is imposed on the input constellation. We will impose an average power constraint in the latter case. In both cases, the output alphabet is $\\mathcal{C}_r = \\mathbb{R}$. For the BSC, the output of the channel is given by $\\hat{\\mathbf{m}}_i^{(t)}=\\mathbf{m}_i^{(t)} \\oplus \\mathbf{n}^{(t)}$, where the elements of $\\mathbf{n}^{(t)}$ are independent and identically distributed (i.i.d.) $\\mathrm{Bernoulli}(p_e)$ random variables. \nFor the AWGN channel, the output of the channel is given by $\\hat{\\mathbf{m}}_i^{(t)}=\\mathbf{m}_i^{(t)}+\\mathbf{n}^{(t)}$, where $\\mathbf{n}^{(t)} \\sim\\mathcal{N}(0, \\mathbf{I}_M\\sigma_n^2)$ is the zero-mean Gaussian noise term with covariance matrix $\\mathbf{I}_M\\sigma_n^2$, and $\\mathbf{I}_M$ is the $M$-dimensional identity matrix. 
\nFor the BN channel, the output of the channel is given by $\\hat{\\mathbf{m}}_i^{(t)}=\\mathbf{m}_i^{(t)}+\\mathbf{n}_b^{(t)}$, where $\\mathbf{n}_b^{(t)}$ is a two-state Markov noise process, with one state being the low noise state $\\mathcal{N}(0,\\mathbf{I}_M\\sigma_n^2)$, as in the AWGN case, and the other being the high noise state $\\mathcal{N}(0,\\mathbf{I}_M(\\sigma_n^2+\\sigma_b^2))$. \nThe probability of transitioning to the high noise state, as well as of remaining in it, is $p_b$. \nIn practice, this channel models occasional random interference from a nearby transmitter.\n\n\n\nWe first consider the BSC case, also studied in \\cite{Roig:Globecom:20}. The action set of agent 1 is $\\mathcal{A}_1=\\{-1,+1\\}^M$, while the observation set of agent 2 is $\\mathcal{O}_2=\\{-1,+1\\}^M$. We will employ the deep Q-network (DQN) algorithm, introduced in \\cite{mnih_human-level_2015}, which uses DNNs to approximate the Q-function in Eqn. (\\ref{eq:q_function}).\nMore specifically, we use two distinct DNNs, parameterized by $\\boldsymbol{\\theta}_1$ and $\\boldsymbol{\\theta}_2$, to approximate the Q-functions of agent 1 (the guide) and agent 2 (the scout), respectively.\nThe guide observes $o_1^{(t)}=(p_s^{(t)}, p_g)$ and chooses a channel input signal $\\mathbf{m}_1^{(t)}=a_1^{(t)}=\\argmax_aQ_{\\boldsymbol{\\theta}_1}(o_1^{(t)},a)\\in\\mathcal{A}_1$, based on the current Q-function approximation. \nThe signal is then transmitted across $M$ uses of the BSC. The scout observes $o_2^{(t)}=\\hat{\\mathbf{m}}_2^{(t)}$ at the output of the BSC, and chooses an action based on the current Q-function approximation $a_2^{(t)}=\\argmax_a Q_{\\boldsymbol{\\theta}_2}(o_2^{(t)},a) \\in \\mathcal{A}_2$.\nThe scout then takes the action $a_2^{(t)}$, which updates its position $p_s^{(t+1)}$, collects reward $r^{(t)}$, and the process is repeated.\nThe reward $r^{(t)}$ is fed to both the guide and the scout to update $\\boldsymbol{\\theta}_1$ and $\\boldsymbol{\\theta}_2$.\n\nAs is typical in Q-learning methods, we use a \\textit{replay buffer}, \\textit{target networks} and $\\epsilon$-\\textit{greedy} exploration to improve the learned policy.\nThe replay buffers $\\mathcal{R}_1$ and $\\mathcal{R}_2$ store experiences $(o_1^{(t)},a_1^{(t)},r^{(t)},o_1^{(t+1)})$ and $(o_2^{(t)},a_2^{(t)},r^{(t)},o_2^{(t+1)})$ for the guide and scout, respectively, and we sample them uniformly to update the parameters $\\boldsymbol{\\theta}_1$ and $\\boldsymbol{\\theta}_2$.\nThis prevents the training samples from being strongly correlated. 
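\nSuch a replay buffer takes only a few lines of Python; the sketch below (illustrative, with names of our own choosing) stores transitions and samples uniform mini-batches, using the buffer capacity and batch size adopted later in Section \\ref{sec:results}:\n\\begin{verbatim}\nimport random\nfrom collections import deque\n\nclass ReplayBuffer:\n    def __init__(self, capacity=100000):\n        # A bounded FIFO: the oldest experiences are evicted first.\n        self.buffer = deque(maxlen=capacity)\n\n    def push(self, obs, action, reward, next_obs):\n        self.buffer.append((obs, action, reward, next_obs))\n\n    def sample(self, batch_size=128):\n        # Uniform sampling breaks the temporal correlation of\n        # consecutive transitions within an episode.\n        return random.sample(self.buffer, batch_size)\n\\end{verbatim}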
\nWe use target parameters ${\\boldsymbol{\\theta}_1^-}$ and ${\\boldsymbol{\\theta}_2^-}$, which are copies of ${\\boldsymbol{\\theta}_1}$ and ${\\boldsymbol{\\theta}_2}$, to compute the DQN loss function:\n\\begin{align}\n L_{\\text{DQN}}(\\boldsymbol{\\theta}_i)=\\frac{1}{2}\\Big(r^{(t)}+\\gamma\\max_{a}\\big\\{Q_{\\boldsymbol{\\theta}_i^-}\\big(o_i^{(t+1)},a\\big)\\big\\} - Q_{\\boldsymbol{\\theta}_i} \\big(o_i^{(t)},a_i^{(t)}\\big)\\Big)^2,~i=1,2.\n \\label{eq:dqn_loss}\n\\end{align}\nThe parameters $\\boldsymbol{\\theta}_i$ are then updated via gradient descent according to the gradient $\\nabla_{\\boldsymbol{\\theta}_i}L_{\\text{DQN}}(\\boldsymbol{\\theta}_i)$, and the target network parameters are updated via\n\\begin{equation}\n \\boldsymbol{\\theta}_i^-\\leftarrow\\tau\\boldsymbol{\\theta}_i+(1-\\tau)\\boldsymbol{\\theta}_i^-,~~i=1,2,\n \\label{eq:target_update}\n\\end{equation}\nwhere $0\\leq\\tau\\leq1$.\nSince Q-learning is a bootstrapping method, if the same $Q_{\\boldsymbol{\\theta}_i}$ were used to estimate the state-action values at time steps $t$ and $t+1$, both estimates would move simultaneously, and the updates might never converge (like a dog chasing its tail).\nIntroducing the target networks mitigates this effect, since the target parameters are updated much more slowly, as in Eqn. (\\ref{eq:target_update}).\n\nTo promote exploration, we use $\\epsilon$-greedy, which chooses a random action w.p. $\\epsilon$ at each time step: \n\\begin{equation}\n a_i^{(t)}=\\begin{cases}\n \\argmax_{a}Q_{\\boldsymbol{\\theta}_i}(o_i^{(t)},a),~&\\text{w.p. }1-\\epsilon\\\\\n a\\sim\\text{Uniform}(\\mathcal{A}_i),~&\\text{w.p. }\\epsilon,\n \\end{cases}\n\\end{equation}\nwhere $a\\sim\\text{Uniform}(\\mathcal{A}_i)$ denotes an action that is sampled uniformly from the action set $\\mathcal{A}_i$.\nThe proposed solution for the BSC case is shown in Algorithm \\ref{alg:robot_bsc}.\n\n\\begin{algorithm}[t]\n\\begin{small}\n\\SetAlgoLined\n Initialize Q networks, $\\boldsymbol{\\theta}_i,i=1,2$, using Gaussian $\\mathcal{N}(0,10^{-2})$. Copy parameters to target networks $\\boldsymbol{\\theta}_i^-\\leftarrow\\boldsymbol{\\theta}_i$.\\\\\n $\\textit{episode}=0$\\\\\n \\While{$\\text{episode}<\\text{episode-max}$}{\n $episode = episode + 1$\\\\\n $t=0$\\\\\n $\\epsilon=\\epsilon_{\\text{end}}+(\\epsilon_0-\\epsilon_{\\text{end}})e^{-\\frac{\\text{episode}}{\\lambda}}$\\\\\n \\While{Treasure NOT found AND $t<t_{\\max}$}{\n $t=t+1$\\\\\n Guide observes $o_1^{(t)}=(p_s^{(t)},p_g)$ and selects $a_1^{(t)}=\\mathbf{m}_1^{(t)}$ via $\\epsilon$-greedy on $Q_{\\boldsymbol{\\theta}_1}$\\\\\n Transmit $\\mathbf{m}_1^{(t)}$ over $M$ uses of the BSC\\\\\n Scout observes $o_2^{(t)}=\\hat{\\mathbf{m}}_2^{(t)}$ and selects $a_2^{(t)}$ via $\\epsilon$-greedy on $Q_{\\boldsymbol{\\theta}_2}$\\\\\n Scout takes action $a_2^{(t)}$; collect reward $r^{(t)}$\\\\\n \\If{$t>1$}{\n Store experiences:\\\\\n $(o_1^{(t-1)},a_1^{(t-1)},r^{(t-1)},o_1^{(t)})\\in\\mathcal{R}_1$ and $(o_2^{(t-1)},a_2^{(t-1)},r^{(t-1)},o_2^{(t)})\\in\\mathcal{R}_2$\n }\n }\n Get batches $\\mathcal{B}_1\\subset\\mathcal{R}_1$, $\\mathcal{B}_2\\subset\\mathcal{R}_2$\\\\\n Compute DQN average loss $L_{\\text{DQN}}(\\boldsymbol{\\theta}_i), i=1,2$ as in Eqn. (\\ref{eq:dqn_loss}) using batch $\\mathcal{B}_i$\\\\\n Update $\\boldsymbol{\\theta}_i$ using $\\nabla_{\\boldsymbol{\\theta}_i}L_{\\text{DQN}}(\\boldsymbol{\\theta}_i), i=1,2$\\\\\n Update target networks $\\boldsymbol{\\theta}_i^-,i=1,2$ via Eqn. 
(\\ref{eq:target_update})\n }\n\\caption{Proposed solution for the guided robot problem with BSC.}\n\\label{alg:robot_bsc}\n\\end{small}\n\\end{algorithm}\n\nFor the binary input AWGN and BN channels, we can use the exact same solution as the one used for the BSC.\nNote that the observation set of the scout is now $\\mathcal{O}_2=\\mathbb{R}^M$.\nHowever, the more interesting case is when $\\mathcal{A}_1=\\mathbb{R}^M$.\nIt has been observed in the joint source-channel coding (JSCC) literature \\cite{tung_sparsecast:_2018,bourtsoulatze_deep_2018} that relaxing the constellation constraints, similarly to analog communications, and training the JSCC scheme in an end-to-end fashion can provide significant performance improvements thanks to the greater degrees of freedom available to the transmitter.\nIn this case, since the guide can output continuous actions, we can employ the deep deterministic policy gradient (DDPG) algorithm proposed in \\cite{lillicrap_continuous_2019}.\nDDPG uses a parameterized policy function $\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)})$, which specifies the current policy by deterministically mapping the observation $o_1^{(t)}$ to a continuous action.\nThe critic $Q_{\\boldsymbol{\\theta}_1}(o_1^{(t)},\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)}))$ then estimates the value of the action taken by $\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)})$, and is updated as in the DQN case, cf. Eqn. (\\ref{eq:dqn_loss}).\n\nThe guide policy is updated by applying the chain rule to the expected return from the initial distribution \n\\begin{align}\n J=\\mathbb{E}_{o_1^{(t)}\\sim\\rho^{\\pi_1},o_2^{(t)}\\sim\\rho^{\\pi_2},a_1^{(t)}\\sim\\pi_1,a_2^{(t)}\\sim\\pi_2}\\Bigg[\\sum_{t=1}^\\infty\\gamma^{t-1}r^{(t)}(o_1^{(t)},o_2^{(t)},a_1^{(t)},a_2^{(t)})\\Bigg],\n \\label{eq:exp_return}\n\\end{align}\nwhere $\\rho^{\\pi_i}$ is the discounted observation visitation distribution for policy $\\pi_i$.\nSince we solve this problem by letting each agent treat the other agent as part of the environment, the value of the action taken by the guide depends only on its observation $o_1^{(t)}$ and action $\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)})$.\nThus, we use a result in \\cite{silver_deterministic_2014}, where the gradient of the objective $J$ in Eqn. 
(\\ref{eq:exp_return}) with respect to the guide policy parameters $\\boldsymbol{\\psi}$ is shown to be\n\\begin{align}\n \\nabla_{\\boldsymbol{\\psi}} J &=\\mathbb{E}_{o_1^{(t)}\\sim\\rho^{\\pi_1}}\\Big[\\nabla_{\\boldsymbol{\\psi}} Q_{\\boldsymbol{\\theta}_1}(o,a)\\big|_{o=o_1^{(t)},a=\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)})}\\Big]\\\\\n &=\\mathbb{E}_{o_1^{(t)}\\sim\\rho^{\\pi_1}}\\Big[\\nabla_a Q_{\\boldsymbol{\\theta}_1}(o,a)\\big|_{o=o_1^{(t)},a=\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)})}\\nabla_{\\boldsymbol{\\psi}}\\mu_{\\boldsymbol{\\psi}}(o)\\big|_{o=o_1^{(t)}}\\Big]\n \\label{eq:ddpg_gradient}\n\\end{align}\nif certain conditions specified in Theorem \\ref{thm:ddpg_compatibility} are satisfied.\n\\begin{theorem}[{{\\cite{silver_deterministic_2014}}}]\n A function approximator $Q_{\\boldsymbol{\\theta}}(o,a)$ is compatible (i.e., the gradient of the true Q function $Q_{\\boldsymbol{\\theta}^\\ast}$ is preserved by the function approximator) with a deterministic policy $\\mu_{\\boldsymbol{\\psi}}(o)$, such that $\\nabla_{\\boldsymbol{\\psi}} J(\\boldsymbol{\\psi})=\\mathbb{E}[\\nabla_{\\boldsymbol{\\psi}}\\mu_{\\boldsymbol{\\psi}}(o)\\nabla_aQ_{\\boldsymbol{\\theta}}(o,a)|_{a=\\mu_{\\boldsymbol{\\psi}}(o)}]$, if \n \\begin{enumerate}\n \\item $\\nabla_aQ_{\\boldsymbol{\\theta}}(o,a)|_{a=\\mu_{\\boldsymbol{\\psi}}(o)}=\\nabla_{\\boldsymbol{\\psi}}\\mu_{\\boldsymbol{\\psi}}(o)^\\top\\boldsymbol{\\theta}$, and \n \\item $\\boldsymbol{\\theta}$ minimizes the mean-squared error,\n $\\mathbb{E}[e(o;\\boldsymbol{\\theta},\\boldsymbol{\\psi})^\\top e(o;\\boldsymbol{\\theta},\\boldsymbol{\\psi})]$, where\\\\\n $e(o;\\boldsymbol{\\theta},\\boldsymbol{\\psi})\\!=\\!\\nabla_a\\big[Q_{\\boldsymbol{\\theta}}(o,a)|_{a=\\mu_{\\boldsymbol{\\psi}}(o)}-Q_{\\boldsymbol{\\theta}^\\ast}(o,a)|_{a=\\mu_{\\boldsymbol{\\psi}}(o)}\\big]$,\\\\\n and $\\boldsymbol{\\theta}^\\ast$ are the parameters that describe the true Q function exactly.\n \\end{enumerate}\n\\label{thm:ddpg_compatibility}\n\\end{theorem}\nIn practice, criterion 2) of Theorem \\ref{thm:ddpg_compatibility} is approximately satisfied via the mean-squared error loss and gradient descent, while criterion 1) may not be satisfied; nevertheless, DDPG has been observed to work well empirically.\n\nThe DDPG loss is two-fold: the critic loss is computed as \n\\begin{align} \\label{eq:ddpg_critic_loss}\n L_{\\text{DDPG}}^{\\text{Critic}}(\\boldsymbol{\\theta}_1)=\\Big(r^{(t)}+\\gamma\\Big\\{Q_{\\boldsymbol{\\theta}_1^-}(o_1^{(t+1)},\\mu_{\\boldsymbol{\\psi}^-}(o_1^{(t+1)}))\\Big\\} - Q_{\\boldsymbol{\\theta}_1}\\big(o_1^{(t)},a_1^{(t)}\\big)\\Big)^2,\n\\end{align}\nwhereas the policy loss is computed as\n\\begin{align}\n &L_{\\text{DDPG}}^{\\text{Policy}}(\\boldsymbol{\\psi})=-Q_{\\boldsymbol{\\theta}_1}(o_1^{(t)},\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)})).\\label{eq:ddpg_policy_loss}\n\\end{align}\n\nAs in the DQN case, we can also use a replay buffer and target networks to train the DDPG policy. To promote exploration, we add noise to the actions taken as follows:\n\\begin{equation}\n a_1^{(t)}=\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)}) + w^{(t)},\n\\end{equation}\nwhere $w^{(t)}$ is an Ornstein-Uhlenbeck process \\cite{uhlenbeck_theory_1930}, used to generate temporally correlated noise terms. The proposed solution for the AWGN and BN channels is summarized in Algorithm \\ref{alg:robot_awgn}. We find that by relaxing the modulation constraint to $\\mathbb{R}^M$, the learned policies of the guide and the scout are substantially better than those achieved in the BPSK case. 
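\nFor concreteness, the guide's DDPG updates in Eqns. (\\ref{eq:ddpg_critic_loss}) and (\\ref{eq:ddpg_policy_loss}) can be sketched in PyTorch-style Python as follows (illustrative only; the network and optimizer objects are assumed to be defined elsewhere):\n\\begin{verbatim}\nimport torch\n\ndef ddpg_update(critic, actor, critic_tgt, actor_tgt,\n                batch, critic_opt, actor_opt, gamma=0.99):\n    o, a, r, o_next = batch\n    # Critic: regress Q onto the bootstrapped target computed with\n    # the slowly updated target networks.\n    with torch.no_grad():\n        y = r + gamma * critic_tgt(o_next, actor_tgt(o_next))\n    critic_loss = (y - critic(o, a)).pow(2).mean()\n    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()\n    # Actor: minimize -Q(o, mu(o)), i.e., ascend the critic's value\n    # of the action chosen by the deterministic policy.\n    actor_loss = -critic(o, actor(o)).mean()\n    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()\n\\end{verbatim}\n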
The numerical results illustrating these gains will be discussed in Section \\ref{sec:results}.\n\n\\begin{algorithm}[]\n\\begin{small}\n\\caption{Proposed solution for guided robot problem for AWGN and BN channel.}\\label{alg:robot_awgn}\n\\SetAlgoLined\n Initialize Q networks $\\boldsymbol{\\theta}_i,i=1,2$, using Gaussian $\\mathcal{N}(0,10^{-2})$ and policy network $\\boldsymbol{\\psi}$ if $\\mathcal{A}_1=\\mathbb{R}^M$.\n Copy parameters to target networks $\\boldsymbol{\\theta}_i^-\\leftarrow\\boldsymbol{\\theta}_i$, $\\boldsymbol{\\psi}^-\\leftarrow\\boldsymbol{\\psi}$.\\\\\n $\\textit{episode}=1$\\\\\n \\While{$\\text{episode}<\\text{episode-max}$}{\n $t=1$\\\\\n $\\epsilon=\\epsilon_{\\text{end}}+(\\epsilon_0-\\epsilon_{\\text{end}})e^{-\\frac{\\text{episode}}{\\lambda}}$\\\\\n \\While{Treasure NOT found AND $t<t_{\\max}$}{\n Guide observes $o_1^{(t)}=(p_s^{(t)},p_g)$; if $\\mathcal{A}_1=\\{-1,+1\\}^M$, select $a_1^{(t)}$ via $\\epsilon$-greedy on $Q_{\\boldsymbol{\\theta}_1}$; if $\\mathcal{A}_1=\\mathbb{R}^M$, set $a_1^{(t)}=\\mu_{\\boldsymbol{\\psi}}(o_1^{(t)})+w^{(t)}$ and normalize via Eqn. (\\ref{eq:power_norm})\\\\\n Transmit $\\mathbf{m}_1^{(t)}=a_1^{(t)}$ over the AWGN or BN channel\\\\\n Scout observes $o_2^{(t)}=\\hat{\\mathbf{m}}_2^{(t)}$ and selects $a_2^{(t)}$ via $\\epsilon$-greedy on $Q_{\\boldsymbol{\\theta}_2}$\\\\\n Scout takes action $a_2^{(t)}$; collect reward $r^{(t)}$\\\\\n \\If{$t>1$}{\n Store experiences:\\\\\n $(o_1^{(t-1)},a_1^{(t-1)},r^{(t-1)},o_1^{(t)})\\in\\mathcal{R}_1$ \\mbox{ and } $(o_2^{(t-1)},a_2^{(t-1)},r^{(t-1)},o_2^{(t)})\\in\\mathcal{R}_2$\n }\n $t=t+1$\n }\n \n Compute average scout loss $L_{\\text{DQN}}(\\boldsymbol{\\theta}_2)$ as in Eqn. (\\ref{eq:dqn_loss}) using batch $\\mathcal{B}_2 \\subset \\mathcal{R}_2$\\\\\n Update $\\boldsymbol{\\theta}_2$ using $\\nabla_{\\boldsymbol{\\theta}_2}L_{\\text{DQN}}(\\boldsymbol{\\theta}_2)$\\\\\n \\uIf{$\\mathcal{A}_1=\\{-1,+1\\}^M$}{\n Compute DQN average loss $L_{\\text{DQN}}(\\boldsymbol{\\theta}_1)$ as in Eqn. (\\ref{eq:dqn_loss}) using batch $\\mathcal{B}_1 \\subset \\mathcal{R}_1$\\\\\n Update $\\boldsymbol{\\theta}_1$ using $\\nabla_{\\boldsymbol{\\theta}_1}L_{\\text{DQN}}(\\boldsymbol{\\theta}_1)$\\\\\n Update target network $\\boldsymbol{\\theta}_i^-,i=1,2$ via Eqn. (\\ref{eq:target_update})\n }\n \\uElseIf{$\\mathcal{A}_1=\\mathbb{R}^M$}{\n Compute average DDPG Critic loss $L_{\\text{DDPG}}^{\\text{Critic}}(\\boldsymbol{\\theta}_1)$ as in Eqn. (\\ref{eq:ddpg_critic_loss}) using batch $\\mathcal{B}_1$\\\\\n Compute average DDPG Policy loss $L_{\\text{DDPG}}^{\\text{Policy}}(\\boldsymbol{\\psi})$ as in Eqn. (\\ref{eq:ddpg_policy_loss}) using batch $\\mathcal{B}_1$\\\\\n Update $\\boldsymbol{\\theta}_1$ and $\\boldsymbol{\\psi}$ using $\\nabla_{\\boldsymbol{\\theta}_1}L_{\\text{DDPG}}^{\\text{Critic}}(\\boldsymbol{\\theta}_1)$ and $\\nabla_{\\boldsymbol{\\psi}}L_{\\text{DDPG}}^{\\text{Policy}}(\\boldsymbol{\\psi})$\\\\\n Update target networks $\\boldsymbol{\\theta}_i^-,i=1,2,\\boldsymbol{\\psi}^-$ via Eqn. (\\ref{eq:target_update})\n }\n $\\text{episode}=\\text{episode}+1$\n }\n\\end{small}\n\\end{algorithm}\n\n\nTo ensure that the actions taken by the guide meet the power constraint, we normalize the channel input to an average power of $1$ as follows:\n\\begin{equation}\n a_1^{(t)}[k]\\leftarrow\\sqrt{M}\\frac{a_1^{(t)}[k]}{\\sqrt{\\Big(a_1^{(t)}\\Big)^\\top a_1^{(t)}}},~k=1,\\dots,M.\n \\label{eq:power_norm}\n\\end{equation}\nThe signal-to-noise ratio (SNR) of the AWGN channel is then defined as \n\\begin{equation}\n \\text{SNR}=-10\\log_{10}(\\sigma_n^2)~\\text{(dB)}.\n\\end{equation}\nDue to the burst noise, we define the SNR of the BN channel as the expected SNR over the two noise states: \n\\begin{equation}\n \\text{SNR}=-10((1-p_b)\\log_{10}(\\sigma_n^2)+p_b\\log_{10}(\\sigma_n^2+\\sigma_b^2))~\\text{(dB)}.\n\\end{equation}\n\n\nIn Section \\ref{sec:results}, we will study the effects of both the channel SNR and the channel bandwidth on the performance. Naturally, the capacity of the channel increases with both the SNR and the bandwidth. 
However, we would like to emphasize that the Shannon capacity is not a relevant metric \\textit{per se} for the problem at hand. Indeed, we will observe that the benefits from increasing the channel bandwidth and the channel SNR saturate beyond some point. Nevertheless, the performance achieved for the underlying single-agent MDP assuming a perfect communication link from the guide to the scout serves as a more useful bound on the performance with any noisy communication channel. \nThe numerical results for this example will be discussed in detail in Section \\ref{sec:results}.\n\n\n\n\n\\section{Joint Channel Coding and Modulation}\n\\label{subsec:eg_prob_channel_coding}\n\n\nThe formulation given in Section \\ref{sec:problem_formulation} can be readily applied to the aforementioned classic ``Level A\" communication problem of channel coding and modulation. \nChannel coding is a problem where $B$ bits are communicated over $M$ channel uses, which corresponds to a code rate of $B\/M$ bits per channel use.\nIn the context of the Markov game introduced previously, we can consider $2^B$ states, each corresponding to one possible message. Agent 2 has $2^B$ actions, each corresponding to a different reconstruction of agent 1's message. \nAll the actions transition to the terminal state. \nThe transmitter observes the state and sends a message by using the channel $M$ times, and the receiver observes a noisy version of the message at the output of the channel and chooses an action.\nHerein, we consider the scenario with real channel input and output values, and an average power constraint on the transmitted signals at each time $t$.\nAs such, we can define $\\mathcal{O}_1=\\mathcal{A}_2 = \\{0,1\\}^B$, $\\mathcal{A}_1 = \\mathcal{C}^M_t$, and $\\mathcal{O}_2 = \\mathcal{C}^M_r$. We note that maximizing the average reward in this problem is equivalent to designing a channel code with blocklength $M$ and rate $B\/M$ with minimum BLER. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.6\\linewidth]{Images\/framework_ch_coding.pdf}\n \\caption{Information flow between the transmitter and the receiver.}\n \\label{fig:framework_ch_coding}\n\\vspace{-0.5cm}\n\\end{figure}\n\nThere have been many recent studies focusing on the design of channel coding and modulation schemes using machine learning techniques \\cite{Nachmani:STSP:18, Dorner:Asilomar:17, Felix:SPAWC:18, bourtsoulatze_deep_2018, Kurka:JSAIT:20, aoudia_model-free_2019}. Most of these works use supervised learning techniques, assuming a known and differentiable channel model, which allows backpropagation through the channel during training. On the other hand, here we assume that the channel model is not known: the agents are limited to their observations of the noisy channel output signals, and must learn a communication strategy through trial and error.\n\nA similar problem is considered in \\cite{aoudia_model-free_2019} from a supervised learning perspective. The authors show that by approximating the gradient of the transmitter with the stochastic policy gradient of the vanilla REINFORCE algorithm \\cite{williams_simple_1992}, it is possible to train both the transmitter and the receiver without knowledge of the channel model. 
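\nFor reference, a score-function (REINFORCE) transmitter update of this kind can be sketched as follows (illustrative PyTorch-style Python; we assume a Gaussian exploration policy around the deterministic encoder output and a receiver loss that returns one value per sample):\n\\begin{verbatim}\nimport torch\n\ndef reinforce_tx_update(encoder, opt, bits, channel, rx_loss, sigma=0.1):\n    x = encoder(bits)                    # deterministic encoder output\n    a = x + sigma * torch.randn_like(x)  # Gaussian exploration policy\n    with torch.no_grad():\n        # Per-sample loss observed after the (unknown) channel; no\n        # gradient is propagated through the channel.\n        loss = rx_loss(channel(a), bits)\n    # Score function of the Gaussian policy: grad of log pi(a|x)\n    # w.r.t. the encoder output x, with the sampled action detached.\n    logp = -((a.detach() - x) ** 2).sum(dim=1) / (2 * sigma ** 2)\n    surrogate = (logp * loss).mean()\n    opt.zero_grad(); surrogate.backward(); opt.step()\n\\end{verbatim}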
We wish to show here that this problem is actually a special case of the problem formulation constructed in Section \\ref{sec:problem_formulation}, and that, by approaching it from an RL perspective, the problem lends itself to a variety of solutions from the vast RL literature.\n\n\\begin{algorithm}[t]\n\\begin{small}\n\\caption{Proposed solution for joint channel coding-modulation problem.}\n\\label{alg:channel_coding}\n\\SetAlgoLined\n Initialize DNNs $\\boldsymbol{\\theta}_i,i=1,2$, with Gaussian $\\mathcal{N}(0,10^{-2})$, and policy network $\\boldsymbol{\\psi}$ if using DDPG.\\\\\n $\\textit{episode}=1$\\\\\n \\While{$\\text{episode}<\\text{episode-max}$}{\n $\\epsilon=\\epsilon_{\\text{end}}+(\\epsilon_0-\\epsilon_{\\text{end}})e^{-\\frac{\\text{episode}}{\\lambda}}$\\\\\n Observe $o_1^{(1)}\\sim\\text{Uniform}(\\mathcal{O}_1)$\\\\\n $m_1^{(1)}=\\mu_{\\boldsymbol{\\psi}}(o_1^{(1)})+w^{(1)}$ if using DDPG, or $m_1^{(1)}\\sim\\pi_1(\\cdot|o_1^{(1)};\\boldsymbol{\\theta}_1)$ otherwise\\\\\n Normalize $m_1^{(1)}$ via Eqn. (\\ref{eq:power_norm})\\\\\n Observe $o_2^{(1)}=\\hat{m}_2^{(1)}\\sim P_{\\text{AWGN}}(\\cdot|m_1^{(1)})$ or $P_{\\text{BN}}(\\cdot|m_1^{(1)})$\\\\ \n $a_2^{(1)}=\\argmax_aQ_{\\boldsymbol{\\theta}_2}(o_2^{(1)},a)$\\\\\n Collect reward $r^{(1)}$ \\\\\n Store experiences:\\\\\n $(o_1^{(1)},a_1^{(1)},r^{(1)})\\in\\mathcal{R}_1$ and $(o_2^{(1)},a_2^{(1)},r^{(1)})\\in\\mathcal{R}_2$\\\\\n Get batches $\\mathcal{B}_1\\subset\\mathcal{R}_1$, $\\mathcal{B}_2\\subset\\mathcal{R}_2$\\\\\n Compute average receiver loss $L_{\\text{CE}}(o_2^{(1)};\\boldsymbol{\\theta}_2)$ as in Eqn. (\\ref{eq:ce_reward}) using batch $\\mathcal{B}_2$\\\\\n Update $\\boldsymbol{\\theta}_2$ using $\\nabla_{\\boldsymbol{\\theta}_2}L_{\\text{CE}}(o_2^{(1)};\\boldsymbol{\\theta}_2)$\\\\\n \\uIf{use DDPG}{\n Compute average transmitter losses $L_{\\text{DDPG}}^{\\text{Critic}}(\\boldsymbol{\\theta}_1)$ and $L_{\\text{DDPG}}^{\\text{Policy}}(\\boldsymbol{\\psi})$ as in Eqns. (\\ref{eq:ch_coding_ddpg_critic_loss},\\ref{eq:ch_coding_ddpg_policy_loss}) using $\\mathcal{B}_1$\\\\\n Update $\\boldsymbol{\\theta}_1$ and $\\boldsymbol{\\psi}$ using $\\nabla_{\\boldsymbol{\\theta}_1}L_{\\text{DDPG}}^{\\text{Critic}}(\\boldsymbol{\\theta}_1)$ and $\\nabla_{\\boldsymbol{\\psi}} L_{\\text{DDPG}}^{\\text{Policy}}(\\boldsymbol{\\psi})$\n }\n \\uElseIf{use REINFORCE}{\n Compute average transmitter gradient $\\nabla_{\\boldsymbol{\\theta}_1}J(\\boldsymbol{\\theta}_1)$ as in Eqn. (\\ref{eq:reinforce_loss}) using $\\mathcal{B}_1$\\\\\n Update $\\boldsymbol{\\theta}_1$ using $\\nabla_{\\boldsymbol{\\theta}_1}J(\\boldsymbol{\\theta}_1)$\n }\n \\uElseIf{use Actor-Critic}{\n Compute average transmitter gradient $\\nabla_{\\boldsymbol{\\theta}_1}J(\\boldsymbol{\\theta}_1)$ as in Eqn. (\\ref{eq:a2c_loss}) using $\\mathcal{B}_1$\\\\\n Update $\\boldsymbol{\\theta}_1$ using $\\nabla_{\\boldsymbol{\\theta}_1}J(\\boldsymbol{\\theta}_1)$\\\\\n Update value estimate $v_{\\pi_1}(o_1^{(1)})$ via Eqn. 
(\\ref{eq:value_estimate})\n }\n $\\text{episode}=\\text{episode}+1$\n }\n\\end{small}\n\\end{algorithm}\n\nHere, we opt to use DDPG to learn a deterministic joint channel coding-modulation scheme and use the DQN algorithm for the receiver, as opposed to the vanilla REINFORCE algorithm used in \\cite{aoudia_model-free_2019}.\nWe use the negative cross-entropy (CE) loss as the reward function:\n\\begin{equation}\n r^{(1)}=-L_{\\text{CE}}(o_2^{(1)};\\boldsymbol{\\theta}_2)=\\log\\mathrm{Pr}\\big(c^{(1)}\\big|o_2^{(1)}\\big),\n \\label{eq:ce_reward}\n\\end{equation}\nwhere $c^{(1)}=o_1^{(1)}$ is the transmitted codeword, and $\\mathrm{Pr}(c_k|o_2^{(1)})$ denotes the posterior probability of the $k$th codeword $c_k\\in\\mathcal{O}_1$, estimated by the receiver network from the channel output $o_2^{(1)}$.\nThe receiver DQN is trained simply with the CE loss, while the transmitter DDPG algorithm receives the reward $r^{(1)}$.\nSimilar to the \\textit{guided robot} problem in Section \\ref{subsec:eg_prob_guide_scout}, we use a replay buffer to improve the training process.\nWe note here that in this problem, each episode is simply a one-step MDP, as there is no state transition.\nAs such, the replay buffers store only $(o_1^{(1)},a_1^{(1)},r^{(1)})$ and $(o_2^{(1)},a_2^{(1)},r^{(1)})$, and a target network is not required.\nConsequently, the DDPG losses can be simplified as\n\\begin{align}\n &L_{\\text{DDPG}}^{\\text{Critic}}(\\boldsymbol{\\theta}_1)=\\Big(Q_{\\boldsymbol{\\theta}_1}\\big(o_1^{(1)},a_1^{(1)}\\big)-r^{(1)}\\Big)^2,\\label{eq:ch_coding_ddpg_critic_loss}\\\\\n &L_{\\text{DDPG}}^{\\text{Policy}}(\\boldsymbol{\\psi})=-Q_{\\boldsymbol{\\theta}_1}\\big(o_1^{(1)},\\mu_{\\boldsymbol{\\psi}}(o_1^{(1)})\\big).\\label{eq:ch_coding_ddpg_policy_loss}\n\\end{align}\n\n\n\nFurthermore, we improve upon the algorithm used in \\cite{aoudia_model-free_2019} by implementing a critic, which estimates the advantage of a given state-action pair by subtracting a baseline from the return in the policy gradient.\nThat is, in the REINFORCE algorithm, the gradient is estimated as\n\\begin{equation}\n \\nabla_{\\boldsymbol{\\theta}_1} J(\\boldsymbol{\\theta}_1)=\\nabla_{\\boldsymbol{\\theta}_1}\\log\\pi_1(a_1^{(1)}|o^{(1)}_1;\\boldsymbol{\\theta}_1)r^{(1)} \\;.\n \\label{eq:reinforce_loss}\n\\end{equation}\nIt is shown in \\cite{konda_actor-critic_nodate} that by subtracting a baseline $b(o_1^{(1)})$, the variance of the gradient $\\nabla_{\\boldsymbol{\\theta}} J(\\boldsymbol{\\theta})$ can be greatly reduced. \nHerein, we use the value of the state, defined in Eqn. (\\ref{eq:value_function}), as the baseline; since all trajectories in this problem have length 1, the value function simplifies to \n\\begin{equation}\n b(o_1^{(1)})=v_{\\pi_1}(o_1^{(1)})=\\mathbb{E}_{\\pi_1}\\big[r^{(1)}|o_1^{(1)}\\big].\n\\label{eq:ch_code_baseline}\n\\end{equation}\nThe gradient of the expected return $J(\\boldsymbol{\\theta}_1)$ with respect to the policy parameters is then \n\\begin{equation}\n \\nabla_{\\boldsymbol{\\theta}_1} J(\\boldsymbol{\\theta}_1)=\\nabla_{\\boldsymbol{\\theta}_1}\\log\\pi_1(a_1^{(1)}|o_1^{(1)};\\boldsymbol{\\theta}_1)(r^{(1)}-v_{\\pi_1}(o_1^{(1)})).\n \\label{eq:a2c_loss}\n\\end{equation}\nIn practice, to estimate $v_{\\pi_1}(o_1^{(1)})$, we use a weighted moving average of the rewards collected for a given state $o_1^{(1)}\\in\\mathcal{O}_1$ over the subset $\\mathcal{B}_1(o_1^{(1)})=\\{(o,a)\\in \\mathcal{B}_1| o=o_1^{(1)}\\}$\nof the batch of trajectories $\\mathcal{B}_1$:\n\\begin{equation}\n v_{\\pi_1}(o_1^{(1)})\\leftarrow\n (1-\\alpha) v_{\\pi_1}(o_1^{(1)})+\n \\frac{\\alpha}{|\\mathcal{B}_1(o_1^{(1)})|}\\!\\!\\sum_{(o,a)\\in \\mathcal{B}_1(o_1^{(1)})}\\!\\! 
r^{(1)}(o,a),\n\\label{eq:value_estimate}\n\\end{equation}\nwhere $\\alpha$ is the weight of the moving average and $v_{\\pi_1}(o_1^{(1)})$ is initialized to zero.\nWe use $\\alpha=0.01$ in our experiments.\nThe algorithm for solving the joint channel coding and modulation problem is shown in Algorithm \\ref{alg:channel_coding}.\nThe numerical results and comparison with alternative designs are presented in the next section.\n\n\n\n\n\n\n\\section{Numerical Results}\n\\label{sec:results}\n\n\n\n\\begin{table}\n\\begin{center}\n\\caption{DNN architecture and hyperparameters used.}\n\\begin{tabular}{|c|c|c|}\n\\hline\n$Q_{\\boldsymbol{\\theta}_i}$ & $\\mu_{\\boldsymbol{\\psi}}$ & Hyperparameters \\\\ \\hline\nLinear: 64 & Linear: 64 & $\\gamma=0.99$ \\\\\nReLU & ReLU & $\\epsilon_0=0.9$ \\\\\nLinear: 64 & Linear: 64 & $\\epsilon_{\\text{end}}=0.05$ \\\\\nReLU & ReLU & $\\lambda=1000$ \\\\\nLinear: $\\begin{cases}\n |\\mathcal{A}_i|,~&\\text{if DQN}, \\\\ \n 1,~&\\text{if DDPG}\n \\end{cases}$\n& Linear: dim$(\\mathcal{A}_i)$ & $\\tau=0.005$ \\\\ \\hline\n\\end{tabular}\n\\label{tab:parameters}\n\\end{center}\n\\vspace{-0.8cm}\n\\end{table}\n\nWe first define the DNN architecture used for all the experiments in this section.\nFor all networks, the inputs are processed by three fully connected layers, with rectified linear unit (ReLU) activations after the first two layers (see Table \\ref{tab:parameters}).\nThe weights of the layers are initialized using Gaussian initialization with mean 0 and standard deviation $0.01$.\nWe store $100K$ experience samples in the replay buffer ($|\\mathcal{R}_i|=100K$), and sample batches of size $128$ for training.\nWe train every experiment for $500K$ episodes.\nThe function used for $\\epsilon$-greedy exploration is\n\\begin{equation}\n \\epsilon=\\epsilon_{\\text{end}}+(\\epsilon_0-\\epsilon_{\\text{end}})e^{\\big(-\\frac{\\text{episode}}{\\lambda}\\big)},\n\\end{equation}\nwhere $\\lambda$ controls the decay rate of $\\epsilon$.\nWe use the ADAM optimizer \\cite{kingma_adam_2017} with learning rate $0.001$ for all the experiments.\nThe network architectures and the hyperparameters chosen are summarized in Table \\ref{tab:parameters}.\nWe consider $\\text{SNR}\\in[0,23]$ dB for the AWGN channel. 
\nFor the BN channel, we use the same SNR range as the AWGN channel for the low noise state and set $\\sigma_b=2$ for the high noise state.\nWe consider $p_b\\in\\{0.1,0.2\\}$ to see the effect of changing the high noise state probability.\n\n\n\n\\begin{figure} \n \\centering\n \\subfloat[$\\delta=0$ \\label{subfig:bsc_grid_world}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.02,.98)},\n anchor=north west,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=0.3,\n ymin=2.,\n ymax=10.3,\n ytick distance=1,\n xlabel={$p_e$},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=PE, y=BINARY, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Joint learning and communication ($M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=PE, y=HAMMING (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=PE, y=HAMMING (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=PE, y=OPT (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=PE, y=OPT (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=PE, y=LB, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions without noise}\n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$\\delta=0.05$ \\label{subfig:bsc_grid_world_noisy}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.,1.)},\n anchor=north west,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=0.3,\n ymin=2.,\n ymax=10.3,\n ytick distance=1,\n xlabel={$p_e$},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed 
zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=PE, y=BINARY_NOISY, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Joint learning and communication ($M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=PE, y=HAMMING_NOISY (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=PE, y=HAMMING_NOISY (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=PE, y=OPT_NOISY (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=0.8}] \n table [x=PE, y=OPT_NOISY (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=PE, y=LB, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions without noise}\n \\end{axis}\n \\end{tikzpicture}\n }\n \\caption{Comparison of agents jointly trained to collaborate and communicate over a BSC to separate learning and communications with a (7,4) Hamming code.}\n \\label{fig:bsc_grid_world} \n\\vspace{-0.8cm}\n\\end{figure}\n\n\\begin{figure} \n \\centering\n \\subfloat[$\\delta=0$ \\label{subfig:awgn_grid_world}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=23,\n xtick distance=3,\n ymin=2.3,\n ymax=4.3,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}\n \\addplot[color=blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[color=blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line 
width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=SNR, y=LB, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Optimal actions without noise}\n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$\\delta=0.05$ \\label{subfig:awgn_grid_world_noisy}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=23,\n xtick distance=3,\n ymin=2.,\n ymax=5.5,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[color=blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[color=blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, 
mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=SNR, y=LB, col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Optimal actions without noise}\n \\end{axis}\n \\end{tikzpicture}\n }\n \\caption{Comparison of the agents jointly trained to collaborate and communicate over an AWGN channel to separate learning and communications with a (7,4) Hamming code.}\n \\label{fig:awgn_grid_world} \n\\vspace{-1cm}\n\\end{figure}\n\n\\begin{figure} \n \\centering\n \\subfloat[Separate learning and communication (HC). \\label{subfig:hamming_bsc_vis}]{%\n \\includegraphics[height=0.2\\linewidth]{Images\/hamming_bsc_vis.pdf}\n }\\\\\n \\subfloat[Joint learning and communication. \\label{subfig:learning_bsc_vis}]{%\n \\includegraphics[height=0.2\\linewidth]{Images\/learning_bsc_vis.pdf}\n }\n \\caption{Example visualization of the codewords used by the guide, and the path taken by the scout for $M=7$ uses of a BSC with $p_e=0.2$ and $\\delta=0$. The origin is at the top left corner.}\n \\label{fig:bsc_vis} \n\\vspace{-0.8cm}\n\\end{figure}\n\n\nFor the grid world problem, presented in Section \\ref{subsec:eg_prob_guide_scout}, the scout and the treasure are placed uniformly at random at distinct locations upon initialization (i.e., $p_g\\ne p_s^{(0)}$).\nThese locations are one-hot encoded to form a $2L^2$-dimensional vector that constitutes the observation of the guide $o_1^{(t)}$. \nWe consider channel bandwidths $M\\in\\{7,10\\}$ and compare our solutions to a scheme that separates the channel coding from the underlying MDP.\nThat is, we first train an RL agent that solves the grid world problem without communication constraints. \nWe then introduce a noisy communication channel and encode the action chosen by the RL agent using a (7,4) Hamming code before transmission across the channel.\nThe received message is then decoded and the resultant action is taken.\nWe note that the (7,4) Hamming code is a perfect code that encodes four data bits into seven channel bits by adding three parity bits; thus, it can correct any single bit error.\nThe association between the 16 possible actions and the 4-bit codewords can be done by a random permutation, which we refer to as random codewords (RC), or by a hand-crafted (HC) association that assigns adjacent codewords to similar actions, as shown in Fig. \\ref{fig:grid_world}.
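\nTo make this separation-based baseline concrete, the following Python sketch (our own illustration; the 16-action set, the Gray-code labeling, and all helper names are assumptions rather than the exact implementation used in our experiments) shows how a 4-bit action index could be Hamming-encoded, corrupted by a BSC, and mapped back to an action:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\n# Systematic (7,4) Hamming code: codeword c = (m G) mod 2, syndrome s = (H r) mod 2.\nG = np.array([[1,0,0,0,1,1,0],\n              [0,1,0,0,1,0,1],\n              [0,0,1,0,0,1,1],\n              [0,0,0,1,1,1,1]])\nH = np.array([[1,1,0,1,1,0,0],\n              [1,0,1,1,0,1,0],\n              [0,1,1,1,0,0,1]])\n\ndef encode(m):\n    return (m @ G) % 2\n\ndef bsc(c, p_e):\n    return (c + (rng.random(c.shape) < p_e)) % 2   # flip each bit w.p. p_e\n\ndef decode(r):\n    s = (H @ r) % 2\n    if s.any():                        # nonzero syndrome: flip the matching bit\n        r = r.copy()\n        r[np.argmax((H.T == s).all(axis=1))] ^= 1\n    return r[:4]                       # systematic code: data bits come first\n\n# Illustrative (assumed) 16-action set and HC\/RC associations.\nactions = [(dx, dy) for dx in (-2, -1, 1, 2) for dy in (-2, -1, 1, 2)]\nhc = {i ^ (i >> 1): actions[i] for i in range(16)}   # Gray-coded HC mapping\nrc = {i: actions[j] for i, j in enumerate(rng.permutation(16))}\n\na = 5                                  # 4-bit action index chosen by the agent\nm = np.array([(a >> k) & 1 for k in range(4)])\nbits = decode(bsc(encode(m), p_e=0.2))\na_hat = int(sum(int(b) << k for k, b in enumerate(bits)))\nprint(hc[a], '->', hc[a_hat])          # intended vs. executed action under HC\n\\end{verbatim}\n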
By associating adjacent codewords with similar actions, the scout takes an action similar to the intended one even if there is a decoding error, provided the number of bit errors is not too high.\nLastly, we compute the optimal solution, where the steps taken form the shortest path to the treasure, and use a (7,4) Hamming channel code to transmit those actions.\nThis is referred to as ``Optimal actions with Hamming code\" and acts as a lower bound for the separation-based results.\n\n\\begin{figure} \n \\centering\n \\subfloat[$\\delta=0,p_b=0.1$ \\label{subfig:bn01_grid_world}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=-1,\n xmax=21,\n xtick distance=3,\n ymin=2.8,\n ymax=6,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.5}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_p01.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.5}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_p01.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, thick, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, thick, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$\\delta=0.05,p_b=0.1$ \\label{subfig:bn01_grid_world_noisy}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=-1,\n xmax=21,\n xtick distance=3,\n ymin=2.8,\n ymax=6,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of 
steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, thick, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, thick, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\\\\n \\subfloat[$\\delta=0,p_b=0.2$ \\label{subfig:bn02_grid_world}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=-2,\n xmax=18,\n xtick distance=3,\n ymin=2.9,\n ymax=6.5,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Joint learning and 
communication (BPSK, $M=7$)}\n \n \\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$\\delta=0.05,p_b=0.2$ \\label{subfig:bn02_grid_world_noisy}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(1.0,1.)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=-2,\n xmax=18,\n xtick distance=3,\n ymin=3.1,\n ymax=7,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] 
{Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\caption{Comparison of the agents jointly trained to collaborate and communicate over a BN channel to separate learning and communications with a (7,4) Hamming code.}\n \\label{fig:bn_grid_world} \n\\vspace{-1.0cm}\n\\end{figure}\n\nFor the joint channel coding and modulation problem, we again compare the DDPG and actor-critic results with a (7,4) Hamming code using BPSK modulation.\nThe source bit sequence is chosen uniformly at random from the set $\\{0,1\\}^M$ and one-hot encoded to form the input state $o_1^{(1)}$ of the transmitter.\nWe also compare with the algorithm derived in \\cite{aoudia_model-free_2019}, which uses supervised learning for the receiver and the REINFORCE policy gradient to estimate the gradient of the transmitter. \n\nWe first present the results for the guided robot problem. \nFig. \\ref{fig:bsc_grid_world} shows the number of steps, averaged over 10K episodes, needed by the scout to reach the treasure for the BSC case with $\\delta\\in\\{0,0.05\\}$. \nThe ``optimal actions without noise\" curve refers to the minimum number of steps required to reach the treasure assuming a perfect communication channel, and acts as the lower bound for all the experiments.\nIt is clear that jointly learning to communicate and collaborate over a noisy channel outperforms the separation-based results with both RC and HC.\nIn Fig. \\ref{fig:bsc_vis}, we provide an illustration of the actions taken by the agent after some errors over the communication channel with the separate learning and communication scheme (HC) and with the proposed joint learning and communication approach. It can be seen that, in step 2, the proposed scheme takes an action $(-1,-1)$ similar to the optimal one $(-2,0)$ despite experiencing 2 bit errors, and again in step 3 despite experiencing 3 bit errors (Fig. \\ref{subfig:learning_bsc_vis}). On the other hand, in the separate learning and communication scheme with a (7,4) Hamming code and HC association of actions, the scout decodes a very different action from the optimal one in step 2, which results in an additional step being taken. However, it was able to take an action similar to the optimal one in step 4 despite experiencing 2 bit errors. This shows that although hand-crafting the codeword assignments can lead to some performance benefits in the separate learning and communication scheme, which was also suggested by Fig. \\ref{fig:bsc_grid_world}, joint learning and communication leads to more robust codeword assignments that give much more consistent results. Indeed, we have also observed that the codeword-to-action mapping at the scout can be highly asymmetric for the learned scheme, unlike the separation-based scheme, where each message corresponds to a single action, or equivalently, where each action is taken for exactly 8 different channel output vectors. \nMoreover, neither the joint learning and communication results nor the separation-based results achieve the performance of the optimal solution with the Hamming code. 
The gap between the optimal solution with the Hamming code and the results obtained by the guide\/scout formulation is due to the DQN architectures' limited capability to learn the optimal solution and the challenge of learning in noisy environments.\nComparing Figs. \\ref{subfig:bsc_grid_world} and \\ref{subfig:bsc_grid_world_noisy}, the performance degradation suffered by the separation-based schemes is slightly greater than that of the joint framework. This is because the joint learning and communication approach is better able to adjust its policy and communication strategy to mitigate the effect of the channel noise than a scheme employing a standard channel code.\n\n\\begin{figure}\n\\begin{minipage}{.48\\textwidth}\n \\centering\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.92,.98)},\n anchor=north east,\n },\n height=0.8\\linewidth,\n width=\\linewidth,\n xmin=0,\n xmax=100000,\n ymin=0,\n ymax=150,\n xlabel={Episode},\n ylabel={Number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.12,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n },\n }\n \\begin{axis}\n \\addplot[no markers, blue] table [x=Step, y=BPSK BSC, col sep=comma] {Data\/conv_mdp.csv};\n \\addlegendentry{BPSK BSC ($p_e=0.05$)}\n \n \\addplot[no markers, red] table [x=Step, y=REAL AWGN, col sep=comma] {Data\/conv_mdp.csv};\n \\addlegendentry{Real AWGN ($10$ dB)}\n \n \\addplot[no markers, green] table [x=Step, y=BPSK AWGN, col sep=comma] {Data\/conv_mdp.csv};\n \\addlegendentry{BPSK AWGN ($10$ dB)}\n \\end{axis}\n \\end{tikzpicture}\n \\caption{Convergence of each channel scenario for the grid world problem without noise ($M=7,~\\delta=0$).}\n \\label{fig:mdp_convergence}\n \\vspace{0.2cm}\n\\end{minipage}%\n\\hfill\n\\begin{minipage}{.48\\textwidth}\n \\centering\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.8\\linewidth,\n width=\\linewidth,\n xmin=0,\n xmax=23,\n ymin=2.3,\n ymax=4.1,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.5}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n 
\\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.5}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[orange, solid, line width=0.9pt, mark=triangle*, mark options={fill=orange, scale=1.5}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/awgn_grid_world_m10.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=10$)}\n \n \\addplot[orange, dashed, line width=1.2pt, mark=triangle*, mark options={fill=orange, solid, scale=1.5}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/awgn_grid_world_m10.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=10$)}\n \n \\end{axis}\n \\end{tikzpicture}\n \\caption{Impact of the channel bandwidth $M\\in\\{7,10\\}$ on the performance for an AWGN channel ($\\delta=0$).}\n \\label{fig:bw_affec}\n\\end{minipage}\n\\vspace{-1cm}\n\\end{figure}\n\n\nSimilarly, in the AWGN case in Fig. \\ref{fig:awgn_grid_world}, the results from joint learning and communication clearly outperform those obtained via separate learning and communication.\nHere, the ``Real\" results refer to the guide agent with $\\mathcal{A}_1=\\mathbb{R}^M$, while the ``BPSK\" results refer to the guide agent with $\\mathcal{A}_1=\\{-1,+1\\}^M$.\nThe ``Real\" results clearly outperform all other schemes considered. The relaxation of the channel constellation to all real values within a power constraint allows the guide to convey more information than a binary constellation can. We also observe that the gain from this relaxation is higher at lower SNR values for both $\\delta$ values. This is in contrast to the gap between the channel capacities achieved with Gaussian and binary inputs in an AWGN channel, which is negligible at low SNR values and increases with SNR. 
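\nThis capacity gap can be checked numerically. The short sketch below (our own illustration, not part of the evaluated schemes) estimates the BPSK-input capacity of the AWGN channel via Monte Carlo integration and compares it with the Gaussian-input capacity $\\frac{1}{2}\\log_2(1+\\mathrm{SNR})$:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef bpsk_capacity(snr_db, n=200000):\n    # Monte Carlo estimate of I(X;Y) for equiprobable X in {-1,+1} over AWGN:\n    # C = 1 - E[log2(1 + exp(-2y\/sigma2))], with y drawn given x = +1.\n    sigma2 = 10 ** (-0.1 * snr_db)           # noise variance for unit signal power\n    y = 1 + np.sqrt(sigma2) * rng.standard_normal(n)\n    return 1 - np.mean(np.log2(1 + np.exp(-2 * y \/ sigma2)))\n\nfor snr_db in [-5, 0, 5, 10, 20]:\n    c_gauss = 0.5 * np.log2(1 + 10 ** (0.1 * snr_db))   # Gaussian-input capacity\n    print(snr_db, round(c_gauss, 3), round(bpsk_capacity(snr_db), 3))\n\\end{verbatim}\nAt low SNR the two values nearly coincide, while at high SNR the BPSK-input capacity saturates at one bit per channel use.\n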
This suggests that channel capacity is not the right metric for this problem: even when two channels are similar in terms of capacity, they can yield very different performance in terms of the discounted sum reward when used in the MARL context.\n\n\n\\begin{figure}\n\\begin{minipage}{.48\\textwidth}\n \\centering\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.4,.35)},\n anchor=north east,\n },\n height=0.8\\linewidth,\n width=\\linewidth,\n xmin=0,\n xmax=5,\n ymin=0.001,\n ymax=0.4,\n xlabel={SNR (dB)},\n ylabel={BLER},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[\n ymode=log,\n log ticks with fixed point,\n ]\n \n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.5}] \n table [x=SNR, y=HAMMING, col sep=comma] {Data\/ch_coding_awgn.csv};\n \\addlegendentry{Hamming (7,4)}\n \n \\addplot[gray, dashed, line width=1.2pt, mark=square*, mark options={fill=gray, solid, scale=1.}] \n table [x=SNR, y=DDPG, col sep=comma] {Data\/ch_coding_awgn.csv};\n \\addlegendentry{DDPG}\n \n \\addplot[cyan, solid, line width=0.9pt, mark=*, mark options={fill=cyan, scale=1.}] \n table [x=SNR, y=REINFORCE, col sep=comma] {Data\/ch_coding_awgn.csv};\n \\addlegendentry{REINFORCE}\n \n \\addplot[magenta, dashed, line width=1.2pt, mark=x, mark options={fill=magenta, scale=1.5, solid}] \n table [x=SNR, y=A2C, col sep=comma] {Data\/ch_coding_awgn.csv};\n \\addlegendentry{Actor-Critic}\n \n \\end{axis}\n \\end{tikzpicture}\n \\caption{BLER performance of different modulation and coding schemes over the AWGN channel.}\n \\label{fig:ch_coding_bler}\n \\vspace{-0.2cm}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.48\\textwidth}\n \\centering\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.95,.98)},\n anchor=north east,\n },\n height=0.8\\linewidth,\n width=\\linewidth,\n xmin=0,\n xmax=10000,\n ymin=0.007,\n ymax=0.1,\n xlabel={Episode},\n ylabel={BLER},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n },\n }\n \\begin{axis}[\n ymode=log,\n log ticks with fixed point,\n ]\n \\addplot[no markers, gray] table [x=Step, y=DDPG, col sep=comma] {Data\/conv_ch_code.csv};\n \\addlegendentry{DDPG}\n \n \\addplot[no markers, cyan] table [x=Step, y=REINFORCE, col sep=comma] {Data\/conv_ch_code.csv};\n 
\\addlegendentry{REINFORCE}\n \n \\addplot[no markers, magenta] table [x=Step, y=A2C, col sep=comma] {Data\/conv_ch_code.csv};\n \\addlegendentry{Actor-Critic}\n \\end{axis}\n \\end{tikzpicture}\n \\caption{Convergence behavior for the joint channel coding and modulation problem in an AWGN channel.}\n \\label{fig:ch_coding_convergence}\n\\end{minipage}\n\\vspace{-0.8cm}\n\\end{figure}\n\nIn the BN channel case (Fig. \\ref{fig:bn_grid_world}), observations similar to those for the AWGN case can be made. \nThe main difference is that the proposed framework achieves a larger performance improvement over the separation-based schemes than it does in the AWGN case.\nThis is particularly evident for BPSK modulation, where the gap between the joint learning and communication scheme and the separate learning and communication scheme is wider than in the AWGN case.\nThis shows that, in this more challenging channel scenario, the proposed framework is better able to jointly adjust the policy and the communication scheme to the conditions of the channel.\nIt also highlights once more that the Shannon capacity is not the most important metric for this problem: the burst noise does not significantly reduce the expected SNR, yet we observe an even more pronounced improvement of the proposed schemes over the separation-based ones.\n\nIn Figs. \\ref{fig:bsc_grid_world}, \\ref{fig:awgn_grid_world} and \\ref{fig:bn_grid_world}, it can be seen that when the grid world itself is noisy (i.e., $\\delta>0$), the agents are still able to collaborate, albeit at the cost of a higher average number of steps required to reach the treasure. \nThe convergence of the number of steps needed to reach the treasure for each channel scenario is shown in Fig. \\ref{fig:mdp_convergence}. \nThe slow convergence for the BSC indicates the difficulty of learning a binary code for this channel.\nWe also study the effect of the bandwidth $M$ on the performance. \nIn Fig. \\ref{fig:bw_affec}, we present the average number of steps required for channel bandwidths $M=7$ and $M=10$. \nAs expected, increasing the channel bandwidth reduces the average number of steps for the scout to reach the treasure. \nThe gain is particularly significant for BPSK in the low SNR regime, as the guide is better able to protect the information conveyed against the channel noise thanks to the increased bandwidth. \n\nNext, we present the results for the joint channel coding and modulation problem. \nFig. \\ref{fig:ch_coding_bler} shows the BLER performance obtained by BPSK modulation with a Hamming (7,4) code, our DDPG transmitter described in Section \\ref{subsec:eg_prob_channel_coding}, the algorithm proposed in \\cite{aoudia_model-free_2019}, and the proposed approach using an additional critic, labeled as ``Hamming (7,4)\", ``DDPG\", ``REINFORCE\", and ``Actor-Critic\", respectively.\nIt can be seen that the learning approaches (DDPG, REINFORCE and Actor-Critic) perform better than the Hamming (7,4) code. 
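\nTo illustrate how such a transmitter can be trained when the channel model is unknown and non-differentiable, the following minimal sketch implements a REINFORCE-style update for a Gaussian-perturbed encoder trained purely from the reward fed back by the receiver; the architecture, the nearest-neighbor stand-in receiver, and all hyperparameters are illustrative assumptions, not the exact models used in our experiments:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nK, M, SIGMA = 4, 7, 0.15            # info bits, channel uses, exploration std\nenc = nn.Sequential(nn.Linear(2**K, 32), nn.ReLU(), nn.Linear(32, M))\nopt = torch.optim.Adam(enc.parameters(), lr=1e-3)\n\ndef channel(x, snr_db=5.0):         # only sampled; no gradient flows through it\n    return x + 10 ** (-snr_db \/ 20) * torch.randn_like(x)\n\ndef codewords(z):                   # encoder output under a power constraint\n    c = enc(z)\n    return c \/ c.norm(dim=1, keepdim=True) * M ** 0.5\n\nfor step in range(2000):\n    msgs = torch.randint(0, 2**K, (64,))\n    mean = codewords(nn.functional.one_hot(msgs, 2**K).float())\n    x = mean + SIGMA * torch.randn_like(mean)      # stochastic (Gaussian) policy\n    with torch.no_grad():\n        y = channel(x)\n        book = codewords(torch.eye(2**K))          # stand-in receiver: nearest neighbor\n        reward = (torch.cdist(y, book).argmin(dim=1) == msgs).float()\n    logp = -((x.detach() - mean) ** 2).sum(dim=1) \/ (2 * SIGMA ** 2)\n    loss = -((reward - reward.mean()) * logp).mean()   # REINFORCE, mean baseline\n    opt.zero_grad(); loss.backward(); opt.step()\n\\end{verbatim}\nThe batch-mean baseline subtracted from the reward here plays the variance-reduction role that a learned critic plays in the actor-critic variant discussed next.\n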
Additionally, stochastic policy algorithms (REINFORCE and Actor-Critic) perform better than DDPG.\nThis is likely due to the limitations of DDPG, as in practice, criterion 1) of Theorem \\ref{thm:ddpg_compatibility} is often not satisfied.\nLastly, we show that we can improve upon the algorithm proposed in \\cite{aoudia_model-free_2019} by adding an additional critic that reduces the variance of the policy gradients, and therefore learns a better policy.\nThe results obtained by the actor-critic algorithm are superior to those from the REINFORCE algorithm, especially in the higher SNR regime.\nOn average, the learning-based results are better than the Hamming (7,4) performance by $1.24$, $2.58$ and $3.70$ dB for DDPG, REINFORCE and Actor-Critic, respectively.\n\n\n\\begin{figure} \n \\centering\n \\subfloat[$p_b=0.1$ \\label{subfig:ch_coding_bn01}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.4,.35)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=5,\n ymin=0.01,\n ymax=0.4,\n xlabel={SNR (dB)},\n ylabel={BLER},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[\n ymode=log,\n log ticks with fixed point,\n ]\n \n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.5, solid}] \n table [x=SNR, y=HAMMING, col sep=comma] {Data\/ch_coding_bn01.csv};\n \\addlegendentry{Hamming (7,4)}\n \n \\addplot[gray, dashed, line width=1.2pt, mark=square*, mark options={fill=gray, solid, scale=1.}] \n table [x=SNR, y=DDPG, col sep=comma] {Data\/ch_coding_bn01.csv};\n \\addlegendentry{DDPG}\n \n \\addplot[cyan, solid, line width=0.9pt, mark=*, mark options={fill=cyan, scale=1.}] \n table [x=SNR, y=REINFORCE, col sep=comma] {Data\/ch_coding_bn01.csv};\n \\addlegendentry{REINFORCE}\n \n \\addplot[magenta, dashed, line width=1.2pt, mark=x, mark options={fill=magenta, scale=1.5, solid}] \n table [x=SNR, y=A2C, col sep=comma] {Data\/ch_coding_bn01.csv};\n \\addlegendentry{Actor-Critic}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$p_b=0.2$ \\label{subfig:ch_coding_bn02}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.4,.35)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=5,\n ymin=0.05,\n ymax=0.5,\n xlabel={SNR (dB)},\n ylabel={BLER},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.15,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n 
font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[\n ymode=log,\n log ticks with fixed point,\n ]\n \n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.5, solid}] \n table [x=SNR, y=HAMMING, col sep=comma] {Data\/ch_coding_bn02.csv};\n \\addlegendentry{Hamming (7,4)}\n \n \\addplot[gray, dashed, line width=1.2pt, mark=square*, mark options={fill=gray, solid, scale=1.}] \n table [x=SNR, y=DDPG, col sep=comma] {Data\/ch_coding_bn02.csv};\n \\addlegendentry{DDPG}\n \n \\addplot[cyan, solid, line width=0.9pt, mark=*, mark options={fill=cyan, scale=1.}] \n table [x=SNR, y=REINFORCE, col sep=comma] {Data\/ch_coding_bn02.csv};\n \\addlegendentry{REINFORCE}\n \n \\addplot[magenta, dashed, line width=1.2pt, mark=x, mark options={fill=magenta, scale=1.5, solid}] \n table [x=SNR, y=A2C, col sep=comma] {Data\/ch_coding_bn02.csv};\n \\addlegendentry{Actor-Critic}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\caption{BLER performance of different modulation and coding schemes over the BN channel.}\n \\label{fig:ch_coding_bler_bn} \n\\vspace{-1.0cm}\n\\end{figure}\n\nIn the BN channel case, shown in Fig. \\ref{fig:ch_coding_bler_bn}, the BLER of all the schemes increases due to the increased noise, but we still observe improved performance with the learning algorithms.\nFig. \\ref{fig:ch_coding_convergence} shows the convergence behavior of the different learning algorithms for a channel SNR of $5$ dB.\nWe can see that the actor-critic algorithm converges the quickest and achieves the lowest BLER, while REINFORCE converges the slowest but achieves a lower BLER than DDPG at the end of training.\nThis is in accordance with the BLER performance observed in Fig. \\ref{fig:ch_coding_bler}.\nWe reiterate that the joint channel coding and modulation problem studied from the perspective of supervised learning in \\cite{aoudia_model-free_2019} is indeed a special case of the joint learning and communication framework we presented in Section \\ref{sec:problem_formulation} from a MARL perspective, and can be solved using a myriad of algorithms from the RL literature.\n\nLastly, we note that due to the simplicity of our network architecture, the computational complexity of our models is not significantly higher than that of the separation-based schemes presented herein.\nThe average computation time for encoding and decoding using our proposed DRL solution is approximately $323~\\mu s$, compared to $286~\\mu s$ for the separate learning and communication case with a Hamming (7,4) code, using an Intel Core i9 processor.\nThis corresponds to a roughly 13\\% increase in computation time, which is modest considering the performance gains observed in both the guided robot problem and the joint channel coding and modulation problem.\n\n\\begin{Remark}\nWe note that both the grid world problem and the channel coding and modulation problems are POMDPs. Therefore, recurrent neural networks (RNNs), such as long short-term memory (LSTM) \\cite{hochreiter_lstm_1997} networks, should provide performance improvements, as the cell states can serve as an estimate of the belief state. 
However, in our initial simulations, we were not able to observe such improvements, although this is likely to be due to the limitations of our architectures.\n\\end{Remark}\n\n\\begin{Remark}\nEven though we have only considered the channel modulation and coding problem in this paper due to space limitations, our framework can also be reduced to the source coding and joint source-channel coding problems by changing the reward function. If we consider an error-free channel with binary inputs and outputs, and let the reward depend on the average distortion between the $B$-length source sequence observed by agent 1 and its reconstruction generated by agent 2 as its action, we recover the lossy source coding problem, where the length-$B$ sequence is compressed into $M$ bits. If we instead consider a noisy channel in between the two agents, we recover the joint source-channel coding problem with an unknown channel model. \n\\end{Remark}\n\n\\section{Conclusion}\n\\label{sec:conclusions}\n\nIn this paper, we have proposed a comprehensive framework that jointly considers the learning and communication problems in collaborative MARL over noisy channels. \nSpecifically, we consider a MA-POMDP, where agents can exchange messages with each other over a noisy channel in order to improve the shared total long-term average reward.\nBy considering the noisy channel as part of the environment dynamics and the message each agent sends as part of its action, the agents not only learn to collaborate with each other via communications, but also learn to communicate ``effectively\". \nThis corresponds to ``level C'' of Shannon and Weaver's organization of the communication problems in \\cite{ShannonWeaver49}, which seeks to answer the question ``How effectively does the received meaning affect conduct in the desired way?\".\nWe show that by jointly considering learning and communications in this framework, the learned joint policy of all the agents is superior to that obtained by treating the communication and the underlying MARL problem separately. \nWe emphasize that the latter is the conventional approach in practice: MARL solutions developed in the machine learning literature under the assumption of error-free communication links are deployed on top of standard communication protocols when autonomous vehicles or robots communicate over noisy wireless links to achieve the desired coordination and cooperation. \nWe demonstrate via numerical examples that the policies learned by our joint approach produce higher average rewards than those obtained when learning and communication are treated separately.\nWe also show that the proposed framework is a generalization of most of the communication problems that have traditionally been studied in the literature, corresponding to ``level A'' as described by Shannon and Weaver. \nThis formulation opens the door to employing existing MARL techniques, such as the actor-critic framework, for the design of channel modulation and coding schemes for communication over unknown channels. \nWe believe this is a very powerful framework, which has many real-world applications, and can greatly benefit from the fast-developing algorithms in the MARL literature to design novel communication codes and protocols, particularly with the goal of enabling collaboration and cooperation among distributed agents.\n\n\\bibliographystyle{ieeetr}\n
\nHumans use language to communicate ideas, which has given rise to complex social structures, and scientists have observed either gestural or vocal communication in other animal groups, complexity of which increases with the complexity of the social structure of the group \\cite{tomasello_origins_2010}. \nCommunication helps to achieve complex goals by enabling cooperation and coordination \\cite{ackley:alife4, KamChuen:AGENTS:01}. \nAdvances in our ability to store and transmit information over time and long distances have greatly expanded our capabilities, and allows us to turn the world into the connected society that we observe today.\nCommunication technologies are at the core of this massively complex system. \n\nCommunication technologies are built upon fundamental mathematical principles and engineering expertise. \nThe fundamental quest in the design of these systems have been to deal with various imperfections in the communication channel (e.g., noise and fading) and the interference among transmitters. \nDecades of research and engineering efforts have produced highly advanced networking protocols, modulation techniques, waveform designs and coding techniques that can overcome these challenges quite effectively. \nHowever, this design approach ignores the aforementioned core objective of communication in enabling coordination and cooperation. \nTo some extent, we have separated the design of a communication network that can reliably carry signals from one point to another from the `language' that is formed to achieve coordination and cooperation among agents. \n\nThis engineering approach was also highlighted by Shannon and Weaver in \\cite{ShannonWeaver49} by organizing the communication problem into three ``levels\": They described level A as the \\textit{technical problem}, which tries to answer the question ``How accurately can the symbols of communication be transmitted?\". \nLevel B is referred to as the \\textit{semantic problem}, and asks the question ``How precisely do the transmitted symbols convey the desired meaning?\". \nFinally, Level C, called the \\textit{effectiveness problem}, strives to answer the question ``How effectively does the received meaning affect conduct in the desired way?\". \nAs we have described above, our communication technologies mainly deal with Level A, ignoring the semantics or the effectiveness problems. \nThis simplifies the problem into the transmission of a discrete message or a continuous waveform over a communication channel in the most reliable manner. \nThe semantics problem deals with the meaning of the messages, and is rather abstract. \nThere is a growing interest in the semantics problem in the recent literature \\cite{Guler:TCCN:18, popovski2019semanticeffectiveness, kountouris2020semanticsempowered, xie2020deep, strinati20206g}.\nHowever, these works typically formulate the semantics as an end-to-end joint source-channel coding problem, where the reconstruction objective can be distortion with respect to the original signal \\cite{bourtsoulatze_deep_2018, weng2020semantic}, or a more general function that can model some form of `meaning' \\cite{Guler:TCCN:18, sreekumar_distributed_2020, Jankowski:JSAC:21, Gunduz:CL:20}, which goes beyond reconstructing the original signal\\footnote{To be more precise, remote hypothesis testing, classification, or retrieval problems can also be formulated as end-to-end joint source-channel coding problems, albeit with a non-additive distortion measure.}. 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\linewidth]{Images\/MARLwComms.png}\n \\caption{An illustration of a MARL problem with noisy communication between the agents, e.g., agents communicating over a shared wireless channel. The emerging communication scheme should not only allow the agents to better coordinate and cooperate to maximize their rewards, but also mitigate the adverse effects of the wireless channel, such as noise and interference.}\n \\label{fig:MARLwComms}\n\\vspace{-0.8cm}\n\\end{figure}\n\nIn this paper, we deal with the `effectiveness problem', which generalizes the problems in both level A and level B. In particular, we formulate a multi-agent problem with noisy communications between the agents, where the goal of communications is to help agents better cooperate and achieve a common goal. See Fig. \\ref{fig:MARLwComms} for an illustration of a multi-agent grid-world, where agents can communicate through noisy wireless links. \nIt is well-known that multi-agent reinforcement learning (MARL) problems are notoriously difficult, and are a topic of continuous research. Originally, these problems were approached by treating each agent independently, as in a standard single-agent reinforcement learning (RL) problem, while treating other agents as part of the state of the environment. Consensus and cooperation are achieved through common or correlated reward signals. However, this approach leads to overfitting of policies due to limited local observations of each agent and it relies on other agents not varying their policies \\cite{lanctot_unified_2017}. It has been observed that these limitations can be overcome by leveraging communication between the agents \\cite{KamChuen:AGENTS:01, Balch:AR:94}. \n\nRecently, there has been significant interest in the \\textit{emergence of communication} among agents within the RL literature \\cite{foerster_learning_2016, jiang_learning_2018, jaques_social_2019, das_tarmac_2020}.\nThese works consider MARL problems, in which agents have access to a dedicated communication channel, and the objective is to learn a communication protocol, which can be considered as a `language' to achieve the underlying goal, which is typically translated into maximizing a specific reward function. \nThis corresponds to Level C, as described by Shannon and Weaver in \\cite{ShannonWeaver49}, where the agents change their behavior based on the messages received over the channel in order to maximize their reward. \nHowever, the focus of the aforementioned works is the emergence of communication protocols within the limited communication resources that can provide the desired impact on the behavior of the agents, and, unlike Shannon and Weaver, these works ignore the physical layer characteristics of the channel. \n\nOur goal in this work is to consider the effectiveness problem by taking into account both the channel noise and the end-to-end learning objective. \nIn this problem, the goal of communication is not ``reproducing at one point either exactly or approximately a message selected at another point'' as stated by Shannon in \\cite{ShannonWeaver49}, which is the foundation of the communication and information theoretic formulations that have been studied over the last seven decades. \nInstead, the goal is to enable cooperation in order to improve the objective of the underlying multi-agent game. 
As we will show later in this paper, the codes that emerge from the proposed framework can be very different from those that would be used for reliable communication of messages. \n\nWe formulate this novel communication problem as a MARL problem, in which the agents have access to a noisy communication channel. More specifically, we formulate this as a multi-agent partially observable Markov decision process (POMDP), and construct RL algorithms that can learn policies that govern both the actions of the agents in the environment and the signals they transmit over the channel. A communication protocol in this scenario should aim to enable cooperation and coordination among agents in the presence of channel noise. Therefore, the emerging modulation and coding schemes must not only be capable of error correction\/ compensation, but also enable agents to share their knowledge of the environment and\/or their intentions. We believe that this novel formulation opens up many new directions for the design of communication protocols and codes that will be applicable in many multi-agent scenarios from teams of robots to platoons of autonomous cars \\cite{wang_networking_2019}, to drone swarm planning \\cite{campion_uav_2018}. \n\n\n\n\nWe summarize the main contributions of this work as follows: \n\n\\begin{enumerate}\n \\item We propose a novel formulation of the ``effectiveness problem'' in communications, where agents communicate over a noisy communication channel in order to achieve better coordination and cooperation in a MARL framework. This can be interpreted as a \\textit{joint communication and learning approach} in the RL context \\cite{Gunduz:CL:20}. The current paper is an initial study of this general framework, focusing on scenarios that involve only point-to-point communications for simplicity. More involved multi-user communication and coordination problems will be the subject of future studies. \n \n \\item The proposed formulation generalizes the recently studied ``learning to communicate'' framework in the MARL literature \\cite{foerster_learning_2016, jiang_learning_2018, jaques_social_2019, das_tarmac_2020}, where the underlying communication channels are assumed to be error-free. \n This framework has been used to argue about the emergence of natural languages \\cite{lazaridou_multi-agent_2017, lazaridou2020multiagent}; however, in practice, there is inherent noise in any communication medium, particularly in human\/animal communications. Indeed, languages have evolved to deal with such noise. For example, Shannon estimated that the English language has approximately 75\\% redundancy. \n Such redundancy provides error correction capabilities. \n Hence, we argue that the proposed framework better models realistic communication problems, and the emerging codes and communication schemes can help better understand the underlying structure of natural languages. \n \n \\item The proposed framework also generalizes communication problems at level A, which have been the target of most communication protocols and codes that have been developed in the literature. \n Channel coding, source coding, as well as joint source-channel coding problems, and their multi-user extensions can be obtained as special cases of the proposed framework. \n The proposed deep reinforcement learning (DRL) framework provides alternative approaches to the design of codes and communication schemes for these problems that can outperform existing ones. 
\n We highlight that there are very limited practical code designs in the literature for most multi-user communication problems, and the proposed framework and the exploitation of deep representations and gradient-based optimization in DRL can provide a scalable and systematic methodology to make progress in these challenging problems. \n \n \\item We study a particular case of the proposed general framework as an example, which reduces to a point-to-point communication problem. \n In particular, we show that any single-agent Markov decision process (MDP) can be converted into a multi-agent partially observable MDP (MA-POMDP) with a noisy communication link between the two agents. \n We consider both the binary symmetric channel (BSC), the additive white Gaussian noise (AWGN) channel, and the bursty noise (BN) channel for the noisy communication link and solve the MA-POMDP problem by treating the other agent as part of the environment, from the perspective of one agent.\n We employ deep Q-learning (DQN) \\cite{mnih_human-level_2015} and deep deterministic policy gradient (DDPG) \\cite{lillicrap_continuous_2019} to train the agents.\n Substantial performance improvement is observed in the resultant policy over those learned by considering the cooperation and communication problems separately.\n \n \n \\item We then present the joint modulation and channel coding problem as an important special case of the proposed framework. \n In recent years, there has been a growing interest in using machine learning techniques to design practical channel coding and modulation schemes \\cite{Nachmani:STSP:18, Dorner:Asilomar:17, Felix:SPAWC:18, bourtsoulatze_deep_2018, Kurka:JSAIT:20, aoudia_model-free_2019}. \n However, with the exception of \\cite{aoudia_model-free_2019}, most of these approaches assume that the channel model is known and differentiable, allowing the use of supervised training by directly backpropagating through the channel using the channel model. In this paper, we learn to communicate over an unknown channel solely based on the reward function by formulating it as a RL problem. The proposed DRL framework goes beyond the method employed in \\cite{aoudia_model-free_2019}, which treats the channel as a random variable, and numerically approximates the gradient of the loss function. It is shown through numerical examples that the proposed DRL techniques employing DDPG \\cite{lillicrap_continuous_2019}, and actor-critic \\cite{konda_actor-critic_nodate} algorithms significantly improve the block error probability (BLER) of the resultant code.\n \n\\end{enumerate}\n\n\n\n\n\n\n\n\n\n\n\\section{Related Works}\n\\label{sec:related_works}\n\nThe study of communication for multi-agent systems is not new \\cite{wagner_progress_2016}. \nHowever, due to the success of deep neural networks (DNNs) for reinforcement learning (RL), this problem has received renewed interest in the context of DNNs \\cite{lazaridou_multi-agent_2017} and deep RL (DRL) \\cite{foerster_learning_2016, sukhbaatar_learning_2016, havrylov_emergence_2017}, where partially observable multi-agent problems are considered. \nIn each case, the agents, in addition to taking actions that impact the environment, can also communicate with each other via a limited-capacity communication channel. 
\nParticularly, in \\cite{foerster_learning_2016}, two approaches are considered: reinforced inter-agent learning (RIAL), where two centralized Q-learning networks learn to act and communicate, respectively, and differentiable inter-agent learning (DIAL), where communication feedback is provided via backpropagation of gradients through the channel, while the communication between agents is restricted during execution. \nSimilarly, in \\cite{wang_r-maddpg_2020,lowe2017maddpg}, the authors propose a \\textit{centralized learning, decentralized execution} approach, where a central critic is used to learn the state-action values of all the agents and use those values to train individual policies of each agent. \nAlthough they also consider the transmitted messages as part of the agents' actions, the communication channel is assumed to be noiseless.\n\nCommNet \\cite{sukhbaatar_learning_2016} attempts to leverage communications in cooperative MARL by using multiple continuous-valued transmissions at each time step to make decisions for all agents. \nEach agent broadcasts its message to every other agent, and the averaged message received by each agent forms part of the input.\nHowever, this solution lacks scalability as it depends on a centralized network by treating the problem as a single RL problem. \nSimilarly, BiCNet \\cite{peng_multiagent_2017} utilizes recurrent neural networks to connect individual agent's policy with a centralized controller aggregating the hidden states of each agent, acting as communication messages.\n\nThe reliance of the aforementioned works on a broadcast channel to communicate with all the agents simultaneously may be infeasible or highly inefficient in practice.\nTo overcome this limitation, in \\cite{jiang_learning_2018}, the authors propose an attentional communication model that learns when communication is needed and how to integrate shared information for cooperative decision making.\nIn \\cite{das_tarmac_2020}, directional communication between agents is achieved with a signature-based soft attention mechanism, where each message is associated to the target recipient. \nThey also propose multi-stage communication, where multiple rounds of communication take place before an action is taken.\n\nIt is important to note that, with the exception of \\cite{mostaani_learning-based_2019}, all of the prior works discussed above rely on error-free communication channels.\nMARL with noisy communications is considered in \\cite{mostaani_learning-based_2019}, where two agents placed on a grid world aim to coordinate to step on the goal square simultaneously. \nHowever, for the particular problem presented in \\cite{mostaani_learning-based_2019}, it can be shown that even if the agents are trained independently without any communication at all, the total discounted reward would still be higher than the average reward achieved by the scheme proposed in \\cite{mostaani_learning-based_2019}.\n\n\n\n\\section{Problem Formulation}\n\\label{sec:problem_formulation}\n\nWe consider a multi-agent partially observable Markov decision process (MA-POMDP) with noisy communications. 
\nConsider first a Markov game with $N$ agents $(\\mathcal{S}, \\{\\mathcal{O}_i\\}_{i=1}^N, \\{\\mathcal{A}_i\\}_{i=1}^N,$ $ P, r)$, where $\\mathcal{S}$ represents all possible configurations of the environment and agents, \n$\\mathcal{O}_i$ and $\\mathcal{A}_i$ are the observation and action sets of agent $i$, respectively, $P$ is the transition kernel that governs the environment, and $r$ is the reward function. \nAt each step $t$ of this Markov game, agent $i$ has a partial observation of the state $o_i^{(t)}\\in \\mathcal{O}_i$, and takes action $a_i^{(t)} \\in \\mathcal{A}_i$, $\\forall i$. \nThen, the state of the MA-POMDP transitions from $s^{(t)}$ to $s^{(t+1)}$ according to the joint actions of the agents following the transition probability $P(s^{(t+1)}|s^{(t)}, \\mathbf{a}^{(t)})$, where $\\mathbf{a}^{(t)} = (a_1^{(t)}, \\ldots, a_N^{(t)})$. \nObservations in the next time instant follow the conditional distribution $\\mathrm{Pr}(o^{(t+1)}|s^{(t)}, \\mathbf{a}^{(t)})$. \nWhile, in general, each agent can have a separate reward function, we consider herein the fully cooperative setting, where the agents receive the same team reward $r^{(t)} = r(s^{(t)}, \\mathbf{a}^{(t)})$ at time $t$.\n\nIn order to coordinate and maximize the total reward, the agents are endowed with a noisy communication channel, which is orthogonal to the environment.\nThat is, the environment transitions depend only on the environment actions, and the only impact of the communication channel is that the actions of the agents can now depend on the past received messages as well as the past observations and rewards. \nWe assume that the communication channel is governed by the conditional probability distribution $P_c$, and we allow the agents to use the channel $M$ times at each time $t$. \nHere, $M$ can be considered as the \\textit{channel bandwidth}. \nLet the signals transmitted and received by agent $i$ at time step $t$ be denoted by $\\mathbf{m}_i^{(t)} \\in \\mathcal{C}_t^M$ and $\\hat{\\mathbf{m}}_i^{(t)} \\in\\mathcal{C}_r^M$, respectively, where $\\mathcal{C}_t$ and $\\mathcal{C}_r$ denote the input and output alphabets of the channel, which can be discrete or continuous.\nWe assume for simplicity that the input and output alphabets of the channel are the same for all the agents. 
Channel inputs and outputs at time $t$ are related through the conditional distribution $P_c\big(\hat{\mathbf{M}}^{(t)} | \mathbf{M}^{(t)} \big) =\mathrm{Pr}\big(\hat{\mathbf{M}} = \{\hat{\mathbf{m}}_i^{(t)}\}_{i=1}^N \big|\mathbf{M}=\{\mathbf{m}_i^{(t)}\}_{i=1}^N \big)$, where $\hat{\mathbf{M}} = (\hat{\mathbf{m}}_1,\ldots,\hat{\mathbf{m}}_N)\in\mathbb{R}^{N\times M}$ denotes the matrix of received signals, with each row $\hat{\mathbf{m}}_i$ corresponding to the vector of symbols received by agent $i$, and likewise $\mathbf{M} = (\mathbf{m}_1, \ldots, \mathbf{m}_N)\in\mathbb{R}^{N\times M}$ denotes the matrix of transmitted signals, with each row $\mathbf{m}_i$ corresponding to the codeword chosen by agent $i$.\nThat is, the received signal of agent $i$ over the communication channel is a random function of the signals transmitted by all other agents, characterized by the conditional distribution of the multi-user communication channel.\nIn our simulations, we will consider independent and identically distributed channels as well as a channel with Markov noise, but our formulation is general enough to take into account arbitrarily correlated channels, both across time and users.\n\nWe can define a new Markov game with noisy communications, where the actions of agent $i$ now consist of two components, the environment actions $a_i^{(t)}$ as before, and the signal to be transmitted over the channel $\mathbf{m}_i^{(t)}$. \nEach agent, in addition to taking actions that affect the state of the environment, can also send signals to other agents over $M$ uses of the noisy communication channel. \nThe observation of each agent is now given by $(o_i^{(t)}, \hat{\mathbf{m}}_i^{(t)})$; that is, a combination of the partial observation of the environment as before and the channel output signal. \n\n\n\n\n\nAt each time step $t$, agent $i$ observes $(o_i^{(t)}, \hat{\mathbf{m}}_i^{(t)})$ and selects an action $(a_i^{(t)}, \mathbf{m}_i^{(t)})$ according to its policy $\pi_i:\mathcal{O}_i \times \mathcal{C}_r^M \rightarrow \mathcal{A}_i \times \mathcal{C}_t^M$. \nThe overall policy over all agents can be defined as $\Pi:\mathcal{S}\rightarrow\mathcal{A}$.\nThe objective of the Markov game with noisy communications is to maximize the discounted sum of rewards \n\begin{equation}\n V_\Pi(s)=\mathbb{E}_\Pi\Bigg[\sum_{t=1}^\infty\gamma^{t-1}r^{(t)}\Bigg|s^{(1)}=s\Bigg]\n \label{eq:value_function}\n\end{equation}\nfor any initial state $s\in\mathcal{S}$, where $\gamma\in[0,1)$ is the discount factor that ensures convergence.\nWe also define the state-action value function, also referred to as the Q-function, as \n\begin{equation}\n Q_\Pi(s^{(t)},a^{(t)})=\mathbb{E}_\Pi\Bigg[\sum_{i=t}^\infty\gamma^{i-t}r^{(i)}\Bigg|s^{(t)},a^{(t)}\Bigg].\n \label{eq:q_function}\n\end{equation}\n\n\n\n\nIn the subsequent sections, we will show that this formulation of the MA-POMDP with noisy communications lends itself to multiple problem domains where communication is vital to achieve a non-trivial total reward, and we devise methods that jointly learn to collaborate and communicate despite the noise in the channel. \nAlthough the introduced MA-POMDP framework with communications is fairly general and can model any multi-agent scenario with complex multi-user communications, our focus in this paper will be on point-to-point communications. \nThis will allow us to expose the benefits of the joint communication and learning design, without having to deal with the challenges of multi-user communications. Extensions of the proposed framework to scenarios that involve multi-user communication channels will be studied in future work. 
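\n\nTo make this interaction model concrete, a single step of the Markov game with noisy communications can be sketched as follows in Python. The sketch is purely illustrative: the environment interface, the policy functions, and the use of an AWGN link as an example of $P_c$ are hypothetical placeholders (with each link modeled independently, as in the point-to-point case considered in this paper), not the implementation used in our experiments.\n\begin{verbatim}\nimport numpy as np\n\ndef awgn_channel(m, sigma_n=0.5):\n    # One example of P_c: output = input plus i.i.d. Gaussian noise.\n    return m + sigma_n * np.random.randn(*np.shape(m))\n\ndef ma_pomdp_step(env, policies, obs, msgs_hat):\n    # Each agent maps (o_i, m_hat_i) to (a_i, m_i); the environment\n    # transition depends only on the environment actions.\n    acts, msgs = [], []\n    for i, policy in enumerate(policies):\n        a_i, m_i = policy(obs[i], msgs_hat[i])\n        acts.append(a_i)\n        msgs.append(np.asarray(m_i))  # length-M channel input\n    next_obs, reward, done = env.transition(acts)\n    # All agents receive the same team reward; each agent's next\n    # observation combines o_i with its own channel output.\n    msgs_hat_next = [awgn_channel(m) for m in msgs]\n    return next_obs, msgs_hat_next, reward, done\n\end{verbatim}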
\n\n\section{Guided Robot with Point-to-Point Communications}\n\label{subsec:eg_prob_guide_scout}\n\n\begin{figure}\n \centering\n \includegraphics[width=0.3\linewidth]{Images\/grid_world.pdf}\n \caption{Illustration of the guided robot problem in grid world. The set $\mathcal{A}_2$ of 16 possible actions the scout agent can take, together with the hand-crafted (HC) codewords.}\n \label{fig:grid_world}\n\vspace{-0.8cm}\n\end{figure}\n\nIn this section, we consider a single-agent MDP and turn it into a MA-POMDP problem by dividing the single agent into two separate agents, a \textit{guide} and a \textit{scout}, which are connected through a noisy communication channel.\nIn this formulation, we assume that the guide observes the state of the original MDP perfectly, but cannot take actions on the environment directly. \nIn contrast, the scout can take actions on the environment, but cannot observe the environment state. \nTherefore, the guide communicates with the scout over a noisy communication channel, and the scout has to take actions based on the signals it receives from the guide. \nThe scout can be considered as a robot remotely controlled by the guide agent, which has sensors to observe the environment.\n\nWe consider this particular setting since it clearly exposes the importance of communication, as the scout depends solely on the signals received from the guide. \nWithout the communication channel, the scout is limited to purely random actions independent of the current state. \nMoreover, this scenario also allows us to quantify the impact of the channel noise on the overall performance, since we recover the original single-agent MDP when the communication channel is perfect; that is, if any desired message can be conveyed over the channel in a reliable manner. \nTherefore, if the optimal reward for the original MDP can be determined, this would serve as an upper bound on the reward of the MA-POMDP with noisy communications. \n\nAs an example to study the proposed framework and to develop and test numerical algorithms aiming to solve the obtained MA-POMDP problem, we consider a grid world of size $L\times L$, denoted by $\mathcal{L}= [L]\times[L]$, where $[L]=\{0,1,\dots,L-1\}$. We denote the scout position at time step $t$ by $p_s^{(t)}=(x_s^{(t)},y_s^{(t)})\in\mathcal{L}$. \nAt each time instant, the scout can take one action from the set of 16 possible actions $\mathcal{A}=\{[1,0],[-1,0],[0,1],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1],[2,0],$ $[-2,0],[0,2],[0,-2],[2,2],[-2,2],[-2,-2],[2,-2]\}$. See Fig. \ref{fig:grid_world} for an illustration of the scout and the 16 actions it can take. If the action taken by the scout ends up in a cell outside the grid world, the agent remains in its original location. \nThe transition probability kernel of this MDP is specified as follows: after each action, the agent moves to the intended target location with probability (w.p.) $1-\delta$, and to a random neighboring cell of the target w.p. $\delta$. \nThat is, the next position is given by $p_s^{(t+1)} = p_s^{(t)} + a^{(t)}$ w.p. $1-\delta$, and $p_s^{(t+1)} = p_s^{(t)} + a^{(t)} + z^{(t)}$ w.p. $\delta$, where $z^{(t)}$ is uniformly distributed over the set $\{[1,0],[1,1], [0,1], [-1,1], [-1,0],[0,-1],[-1,-1],[1,-1] \}$. 
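\n\nA minimal Python sketch of this transition kernel is given below; the grid size $L=5$ and the helper names are hypothetical, and only the dynamics described above are implemented.\n\begin{verbatim}\nimport numpy as np\n\nACTIONS = [(1,0),(-1,0),(0,1),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1),\n           (2,0),(-2,0),(0,2),(0,-2),(2,2),(-2,2),(-2,-2),(2,-2)]\nNEIGHBORS = [(1,0),(1,1),(0,1),(-1,1),(-1,0),(0,-1),(-1,-1),(1,-1)]\n\ndef transition(p_s, a_idx, L=5, delta=0.0):\n    # Intended move w.p. 1-delta; w.p. delta a random unit offset is\n    # added; moves that leave the grid leave the position unchanged.\n    dx, dy = ACTIONS[a_idx]\n    if np.random.rand() < delta:\n        zx, zy = NEIGHBORS[np.random.randint(8)]\n        dx, dy = dx + zx, dy + zy\n    x, y = p_s[0] + dx, p_s[1] + dy\n    if 0 <= x < L and 0 <= y < L:\n        return (x, y)\n    return p_s  # outside the grid: stay in place\n\ndef reward(p_s, p_g):\n    # +10 on reaching the treasure, -1 per step otherwise.\n    return 10 if p_s == p_g else -1\n\end{verbatim}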
\n\nThe objective of the scout is to find the treasure, located at $p_g=(x_g,y_g)\in\mathcal{L}$, as quickly as possible. \nWe assume that the initial position of the scout and the location of the treasure are random, and are not the same. \nThe scout takes instructions from the guide, who observes the grid world, and utilizes a noisy communication channel $M$ times to transmit signal $\mathbf{m}^{(t)}$ to the scout, who observes $\hat{\mathbf{m}}^{(t)}$ at the output of the channel.\nTo put it in the context of the MA-POMDP defined in Section \ref{sec:problem_formulation}, agent 1 is the guide, with observable state $o_1^{(t)} = s^{(t)}$, where $s^{(t)}=(p_s^{(t)},p_g)$, and action set $\mathcal{A}_1=\mathcal{C}_t^M$. \nAgent 2 is the scout, with observation $o_2^{(t)} = \hat{\mathbf{m}}^{(t)}$ and action set $\mathcal{A}_2 = \mathcal{A}$ (or, more precisely, $o_1^{(t)} = (s^{(t)},\emptyset)$ and $o_2^{(t)} = (\emptyset,\hat{\mathbf{m}}_2^{(t)})$). \nWe define the reward function as follows to encourage the agents to collaborate to find the treasure as quickly as possible:\n\begin{equation}\n r^{(t)}=\begin{cases}\n 10,~&\text{if } p_s^{(t)}=p_g,\\\n -1,~&\text{otherwise}.\n \end{cases}\n\end{equation}\nThe game terminates when $p_s^{(t)}=p_g$.\n\nWe should highlight that, despite the simplicity of the problem, the original MDP is not a trivial one when both the initial state of the agent and the target location are random: it has a rather large state space, and learning the optimal policy requires a long training process in order to observe all possible agent and target location pairs sufficiently many times. In order to simplify the learning of the optimal policy, and to focus on learning the communication scheme, we will pay special attention to the scenario where $\delta=0$. \nThis corresponds to the scenario in which the underlying MDP is deterministic, and it is not difficult to see that the optimal solution to this MDP is to take the shortest path to the treasure.\n\n\n\n\begin{figure}\n \centering\n \includegraphics[width=0.6\linewidth]{Images\/framework_grid_world.pdf}\n \caption{Information flow between the guide and the scout.}\n \label{fig:framework_grid_world}\n\vspace{-0.8cm}\n\end{figure}\n\nWe consider three types of channel distributions: the BSC, the AWGN channel, and the BN channel. \nIn the BSC case, we have $\mathcal{C}_t = \{-1, +1\}$. \nFor the AWGN channel and the BN channel, we have $\mathcal{C}_t = \{-1, +1\}$ if the input is constrained to binary phase shift keying (BPSK) modulation, or $\mathcal{C}_t =\mathbb{R}$ if no limitation is imposed on the input constellation. We will impose an average power constraint in the latter case. In both cases, the output alphabet is $\mathcal{C}_r = \mathbb{R}$. For the BSC, the output of the channel is given by $\hat{\mathbf{m}}_i^{(t)}=\mathbf{m}_i^{(t)} \oplus \mathbf{n}^{(t)}$, where the elements of $\mathbf{n}^{(t)}$ are independent $\mathrm{Bernoulli}(p_e)$ random variables; that is, each transmitted symbol is flipped independently w.p. $p_e$. \nFor the AWGN channel, the output of the channel is given by $\hat{\mathbf{m}}_i^{(t)}=\mathbf{m}_i^{(t)}+\mathbf{n}^{(t)}$, where $\mathbf{n}^{(t)} \sim\mathcal{N}(0, \mathbf{I}_M\sigma_n^2)$ is the zero-mean Gaussian noise term with covariance matrix $\mathbf{I}_M\sigma_n^2$, and $\mathbf{I}_M$ is the $M$-dimensional identity matrix. \nFor the BN channel, the output of the channel is given by $\hat{\mathbf{m}}_i^{(t)}=\mathbf{m}_i^{(t)}+\mathbf{n}_b^{(t)}$, where $\mathbf{n}_b^{(t)}$ is a two-state Markov noise process, with the low noise state distributed as $\mathcal{N}(0,\mathbf{I}_M\sigma_n^2)$, as in the AWGN case, and the high noise state as $\mathcal{N}(0,\mathbf{I}_M(\sigma_n^2+\sigma_b^2))$. \nThe probability of transitioning from the low noise state to the high noise state, as well as that of remaining in the high noise state, is $p_b$. \nIn practice, this channel models occasional random interference from a nearby transmitter.
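\n\nFor concreteness, the three channel models can be simulated as follows; this is an illustrative sketch with hypothetical parameter values, and the state evolution in the BN model reflects one simple reading of the two-state noise described above (the high noise state is entered, or retained, w.p. $p_b$).\n\begin{verbatim}\nimport numpy as np\n\ndef bsc(m, p_e=0.1):\n    # BSC on {-1,+1}: each symbol is flipped independently w.p. p_e.\n    flips = np.random.rand(m.size) < p_e\n    return np.where(flips, -m, m)\n\ndef awgn(m, sigma_n=0.5):\n    # AWGN: add i.i.d. zero-mean Gaussian noise of variance sigma_n^2.\n    return m + sigma_n * np.random.randn(m.size)\n\ndef bn(m, sigma_n=0.5, sigma_b=2.0, p_b=0.1):\n    # Burst noise: the high noise state occurs w.p. p_b, in which the\n    # noise variance grows from sigma_n^2 to sigma_n^2 + sigma_b^2.\n    high = np.random.rand() < p_b\n    sigma = np.sqrt(sigma_n**2 + (sigma_b**2 if high else 0.0))\n    return m + sigma * np.random.randn(m.size)\n\end{verbatim}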
\n\nWe first consider the BSC case, also studied in \cite{Roig:Globecom:20}. The action set of agent 1 is $\mathcal{A}_1=\{-1,+1\}^M$, while the observation set of agent 2 is $\mathcal{O}_2=\{-1,+1\}^M$. We employ DQN \cite{mnih_human-level_2015}, which uses DNNs to approximate the Q-function in Eqn. (\ref{eq:q_function}).\nMore specifically, we use two distinct DNNs, parameterized by $\boldsymbol{\theta}_1$ and $\boldsymbol{\theta}_2$, to approximate the Q-functions of agent 1 (the guide) and agent 2 (the scout), respectively.\nThe guide observes $o_1^{(t)}=(p_s^{(t)}, p_g)$ and chooses a channel input signal $\mathbf{m}_1^{(t)}=a_1^{(t)}=\argmax_aQ_{\boldsymbol{\theta}_1}(o_1^{(t)},a)\in\mathcal{A}_1$, based on the current Q-function approximation. \nThe signal is then transmitted across $M$ uses of the BSC. The scout observes $o_2^{(t)}=\hat{\mathbf{m}}_2^{(t)}$ at the output of the BSC, and chooses an action based on the current Q-function approximation, $a_2^{(t)}=\argmax_a Q_{\boldsymbol{\theta}_2}(o_2^{(t)},a) \in \mathcal{A}_2$.\nThe scout then takes the action $a_2^{(t)}$, which updates its position to $p_s^{(t+1)}$, collects the reward $r^{(t)}$, and the process is repeated.\nThe reward $r^{(t)}$ is fed to both the guide and the scout to update $\boldsymbol{\theta}_1$ and $\boldsymbol{\theta}_2$.\n\nAs is typical in Q-learning methods, we use a \textit{replay buffer}, \textit{target networks} and $\epsilon$-\textit{greedy} exploration to improve the learned policy.\nThe replay buffers $\mathcal{R}_1$ and $\mathcal{R}_2$ store the experiences $(o_1^{(t)},a_1^{(t)},r^{(t)},o_1^{(t+1)})$ and $(o_2^{(t)},a_2^{(t)},r^{(t)},o_2^{(t+1)})$ of the guide and the scout, respectively, and we sample from them uniformly to update the parameters $\boldsymbol{\theta}_1$ and $\boldsymbol{\theta}_2$.\nThis reduces the correlation between the samples used in consecutive updates. 
\nWe use target parameters ${\boldsymbol{\theta}_1^-}$ and ${\boldsymbol{\theta}_2^-}$, which are copies of ${\boldsymbol{\theta}_1}$ and ${\boldsymbol{\theta}_2}$, to compute the DQN loss function:\n\begin{align}\n L_{\text{DQN}}(\boldsymbol{\theta}_i)=\frac{1}{2}\Big(r^{(t)}+\gamma\max_{a}\big\{Q_{\boldsymbol{\theta}_i^-}\big(o_i^{(t+1)},a\big)\big\} - Q_{\boldsymbol{\theta}_i} \big(o_i^{(t)},a_i^{(t)}\big)\Big)^2,~i=1,2.\n \label{eq:dqn_loss}\n\end{align}\nThe parameters $\boldsymbol{\theta}_i$ are then updated via gradient descent according to the gradient $\nabla_{\boldsymbol{\theta}_i}L_{\text{DQN}}(\boldsymbol{\theta}_i)$, and the target network parameters are updated via\n\begin{equation}\n \boldsymbol{\theta}_i^-\leftarrow\tau\boldsymbol{\theta}_i+(1-\tau)\boldsymbol{\theta}_i^-,~~i=1,2,\n \label{eq:target_update}\n\end{equation}\nwhere $0\leq\tau\leq1$.\nSince Q-learning is a bootstrapping method, if the same $Q_{\boldsymbol{\theta}_i}$ were used to estimate the state-action values at time steps $t$ and $t+1$, both estimates would move simultaneously, which may prevent the updates from ever converging (like a dog chasing its tail).\nThe target networks mitigate this effect, since they are updated much more slowly, as in Eqn. (\ref{eq:target_update}).\n\nTo promote exploration, we use $\epsilon$-greedy action selection, which chooses a random action w.p. $\epsilon$ at each time step: \n\begin{equation}\n a_i^{(t)}=\begin{cases}\n \argmax_{a}Q_{\boldsymbol{\theta}_i}(o_i^{(t)},a),~&\text{w.p. }1-\epsilon\\\n a\sim\text{Uniform}(\mathcal{A}_i),~&\text{w.p. }\epsilon,\n \end{cases}\n\end{equation}\nwhere $a\sim\text{Uniform}(\mathcal{A}_i)$ denotes an action that is sampled uniformly from the action set $\mathcal{A}_i$.\nThe proposed solution for the BSC case is shown in Algorithm \ref{alg:robot_bsc}.\n\n\begin{algorithm}[t]\n\begin{small}\n\SetAlgoLined\n Initialize Q networks, $\boldsymbol{\theta}_i,i=1,2$, using Gaussian $\mathcal{N}(0,10^{-2})$. Copy parameters to target networks $\boldsymbol{\theta}_i^-\leftarrow\boldsymbol{\theta}_i$.\\\n $\textit{episode}=0$\\\n \While{$\text{episode}<\text{episode-max}$}{\n $episode = episode + 1$\\\n $t=0$\\\n $\epsilon=\epsilon_{\text{end}}+(\epsilon_0-\epsilon_{\text{end}})e^{-\frac{\text{episode}}{\lambda}}$\\\n \While{Treasure NOT found \textbf{and} $t<T_{\max}$}{\n $t=t+1$\\\n Guide observes $o_1^{(t)}$ and selects $a_1^{(t)}=\mathbf{m}_1^{(t)}$ via $\epsilon$-greedy\\\n Transmit $\mathbf{m}_1^{(t)}$ over $M$ uses of the BSC\\\n Scout observes $o_2^{(t)}=\hat{\mathbf{m}}_2^{(t)}$ and selects $a_2^{(t)}$ via $\epsilon$-greedy\\\n Scout takes action $a_2^{(t)}$; collect reward $r^{(t)}$\\\n \If{$t>1$}{\n Store experiences:\\\n $(o_1^{(t-1)},a_1^{(t-1)},r^{(t-1)},o_1^{(t)})\in\mathcal{R}_1$ and $(o_2^{(t-1)},a_2^{(t-1)},r^{(t-1)},o_2^{(t)})\in\mathcal{R}_2$\n }\n }\n Get batches $\mathcal{B}_1\subset\mathcal{R}_1$, $\mathcal{B}_2\subset\mathcal{R}_2$\\\n Compute DQN average loss $L_{\text{DQN}}(\boldsymbol{\theta}_i), i=1,2$ as in Eqn. (\ref{eq:dqn_loss}) using batch $\mathcal{B}_i$\\\n Update $\boldsymbol{\theta}_i$ using $\nabla_{\boldsymbol{\theta}_i}L_{\text{DQN}}(\boldsymbol{\theta}_i), i=1,2$\\\n Update target networks $\boldsymbol{\theta}_i^-,i=1,2$ via Eqn. (\ref{eq:target_update})\n }\n\caption{Proposed solution for the guided robot problem with the BSC ($T_{\max}$ denotes the maximum number of steps per episode).}\n\label{alg:robot_bsc}\n\end{small}\n\end{algorithm}
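\n\nThe DQN update described above can be sketched compactly in PyTorch-style Python. This is an illustrative outline (network definitions, batching, and terminal-state handling omitted; names hypothetical) of Eqns. (\ref{eq:dqn_loss}) and (\ref{eq:target_update}), up to the constant factor in the loss.\n\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef dqn_loss(q_net, q_target, batch, gamma=0.99):\n    # TD regression: the bootstrap target is computed with the slowly\n    # updated target network and kept fixed (no_grad).\n    o, a, r, o_next = batch  # a holds integer action indices\n    q_sa = q_net(o).gather(1, a.unsqueeze(1)).squeeze(1)\n    with torch.no_grad():\n        td_target = r + gamma * q_target(o_next).max(dim=1).values\n    return F.mse_loss(q_sa, td_target)\n\ndef soft_update(target_net, online_net, tau=0.005):\n    # theta^- <- tau * theta + (1 - tau) * theta^-.\n    for p_t, p in zip(target_net.parameters(),\n                      online_net.parameters()):\n        p_t.data.mul_(1.0 - tau).add_(tau * p.data)\n\end{verbatim}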
\n\nFor the binary input AWGN and BN channels, we can use the exact same solution as the one used for the BSC.\nNote that the observation set of the scout is then $\mathcal{O}_2=\mathbb{R}^M$.\nHowever, the more interesting case is when $\mathcal{A}_1=\mathbb{R}^M$.\nIt has been observed in the joint source-channel coding (JSCC) literature \cite{tung_sparsecast:_2018,bourtsoulatze_deep_2018} that relaxing the constellation constraints, similar to analog communications, and training the JSCC scheme in an end-to-end fashion can provide significant performance improvements thanks to the greater degree of freedom available to the transmitter.\nIn this case, since the guide can output continuous actions, we can employ the DDPG algorithm proposed in \cite{lillicrap_continuous_2019}.\nDDPG uses a parameterized policy function $\mu_{\boldsymbol{\psi}}(o_1^{(t)})$, which specifies the current policy by deterministically mapping the observation $o_1^{(t)}$ to a continuous action.\nThe critic $Q_{\boldsymbol{\theta}_1}(o_1^{(t)},\mu_{\boldsymbol{\psi}}(o_1^{(t)}))$ then estimates the value of the action taken by $\mu_{\boldsymbol{\psi}}(o_1^{(t)})$, and is updated as in DQN, via Eqn. (\ref{eq:dqn_loss}).\n\nThe guide policy is updated by applying the chain rule to the expected return from the initial distribution \n\begin{align}\n J=\mathbb{E}_{o_1^{(t)}\sim\rho^{\pi_1},o_2^{(t)}\sim\rho^{\pi_2},a_1^{(t)}\sim\pi_1,a_2^{(t)}\sim\pi_2}\Bigg[\sum_{t=1}^\infty\gamma^{t-1}r^{(t)}(o_1^{(t)},o_2^{(t)},a_1^{(t)},a_2^{(t)})\Bigg],\n \label{eq:exp_return}\n\end{align}\nwhere $\rho^{\pi_i}$ is the discounted observation visitation distribution for policy $\pi_i$.\nSince we solve this problem by letting each agent treat the other agent as part of the environment, the value of the action taken by the guide depends only on its observation $o_1^{(t)}$ and action $\mu_{\boldsymbol{\psi}}(o_1^{(t)})$.\nThus, we use a result from \cite{silver_deterministic_2014}, where the gradient of the objective $J$ in Eqn. 
(\ref{eq:exp_return}) with respect to the guide policy parameters $\boldsymbol{\psi}$ is shown to be\n\begin{align}\n \nabla_{\boldsymbol{\psi}} J &=\mathbb{E}_{o_1^{(t)}\sim\rho^{\pi_1}}\Big[\nabla_{\boldsymbol{\psi}} Q_{\boldsymbol{\theta}_1}(o,a)\big|_{o=o_1^{(t)},a=\mu_{\boldsymbol{\psi}}(o_1^{(t)})}\Big]\\\n &=\mathbb{E}_{o_1^{(t)}\sim\rho^{\pi_1}}\Big[\nabla_a Q_{\boldsymbol{\theta}_1}(o,a)\big|_{o=o_1^{(t)},a=\mu_{\boldsymbol{\psi}}(o_1^{(t)})}\nabla_{\boldsymbol{\psi}}\mu_{\boldsymbol{\psi}}(o)\big|_{o=o_1^{(t)}}\Big]\n \label{eq:ddpg_gradient}\n\end{align}\n if certain conditions specified in Theorem \ref{thm:ddpg_compatibility} are satisfied.\n\begin{theorem}[{{\cite{silver_deterministic_2014}}}]\n A function approximator $Q_{\boldsymbol{\theta}}(o,a)$ is compatible (i.e., the gradient of the true Q function $Q_{\boldsymbol{\theta}^\ast}$ is preserved by the function approximator) with a deterministic policy $\mu_{\boldsymbol{\psi}}(o)$, such that $\nabla_{\boldsymbol{\psi}} J(\boldsymbol{\psi})=\mathbb{E}[\nabla_{\boldsymbol{\psi}}\mu_{\boldsymbol{\psi}}(o)\nabla_aQ_{\boldsymbol{\theta}}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}]$, if \n \begin{enumerate}\n \item $\nabla_aQ_{\boldsymbol{\theta}}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}=\nabla_{\boldsymbol{\psi}}\mu_{\boldsymbol{\psi}}(o)^\top\boldsymbol{\theta}$, and \n \item $\boldsymbol{\theta}$ minimizes the mean-squared error,\n $\mathbb{E}[e(o;\boldsymbol{\theta},\boldsymbol{\psi})^\top e(o;\boldsymbol{\theta},\boldsymbol{\psi})]$, where\\\n $e(o;\boldsymbol{\theta},\boldsymbol{\psi})\!=\!\nabla_a\big[Q_{\boldsymbol{\theta}}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}-Q_{\boldsymbol{\theta}^\ast}(o,a)|_{a=\mu_{\boldsymbol{\psi}}(o)}\big]$,\\\n and $\boldsymbol{\theta}^\ast$ are the parameters that describe the true Q function exactly.\n \end{enumerate}\n\label{thm:ddpg_compatibility}\n\end{theorem}\nIn practice, criterion 2) of Theorem \ref{thm:ddpg_compatibility} is approximately satisfied via the mean-squared error loss and gradient descent, whereas criterion 1) may not be satisfied.\nNevertheless, DDPG has been observed to work well empirically.\n\nThe DDPG loss is two-fold: the critic loss is computed as \n\begin{align} \label{eq:ddpg_critic_loss}\n L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_1)=\Big(r^{(t)}+\gamma Q_{\boldsymbol{\theta}_1^-}\big(o_1^{(t+1)},\mu_{\boldsymbol{\psi}^-}(o_1^{(t+1)})\big) - Q_{\boldsymbol{\theta}_1}\big(o_1^{(t)},\mu_{\boldsymbol{\psi}}(o_1^{(t)})\big)\Big)^2,\n\end{align}\nwhereas the policy loss is computed as\n\begin{align}\n &L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})=-Q_{\boldsymbol{\theta}_1}(o_1^{(t)},\mu_{\boldsymbol{\psi}}(o_1^{(t)})).\label{eq:ddpg_policy_loss}\n\end{align}\n\nAs in the DQN case, we can also use a replay buffer and target networks to train the DDPG policy. To promote exploration, we add noise to the actions taken as follows:\n\begin{equation}\n a_1^{(t)}=\mu_{\boldsymbol{\psi}}(o_1^{(t)}) + w^{(t)},\n\end{equation}\nwhere $w^{(t)}$ is generated by an Ornstein-Uhlenbeck process \cite{uhlenbeck_theory_1930} to produce temporally correlated noise terms. The proposed solution for the AWGN and BN channels is summarized in Algorithm \ref{alg:robot_awgn}. We find that by relaxing the modulation constraint to $\mathbb{R}^M$, the learned policies of the guide and the scout are substantially better than those achieved in the BPSK case. The numerical results illustrating this conclusion will be discussed in Section \ref{sec:results}.
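\n\nA PyTorch-style sketch of the two DDPG losses and the exploration noise is given below. It is an illustrative outline with hypothetical network interfaces; following common practice, the critic loss is evaluated at the replayed action $a$, whereas Eqn. (\ref{eq:ddpg_critic_loss}) writes it at $\mu_{\boldsymbol{\psi}}(o_1^{(t)})$.\n\begin{verbatim}\nimport torch\n\ndef ddpg_update(critic, critic_targ, actor, actor_targ,\n                opt_critic, opt_actor, batch, gamma=0.99):\n    o, a, r, o_next = batch\n    # Critic regression toward the bootstrapped target.\n    with torch.no_grad():\n        y = r + gamma * critic_targ(o_next,\n                                    actor_targ(o_next)).squeeze(1)\n    critic_loss = ((critic(o, a).squeeze(1) - y) ** 2).mean()\n    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()\n    # Policy loss: ascend the critic's value through the actor.\n    policy_loss = -critic(o, actor(o)).mean()\n    opt_actor.zero_grad(); policy_loss.backward(); opt_actor.step()\n\ndef ou_step(w, theta=0.15, sigma=0.2):\n    # One Euler step of a zero-mean Ornstein-Uhlenbeck process,\n    # yielding temporally correlated exploration noise.\n    return (1.0 - theta) * w + sigma * torch.randn_like(w)\n\end{verbatim}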
\n\n\begin{algorithm}[]\n\begin{small}\n\caption{Proposed solution for the guided robot problem with the AWGN and BN channels.}\label{alg:robot_awgn}\n\SetAlgoLined\n Initialize Q networks $\boldsymbol{\theta}_i,i=1,2$, using Gaussian $\mathcal{N}(0,10^{-2})$ and policy network $\boldsymbol{\psi}$ if $\mathcal{A}_1=\mathbb{R}^M$.\n Copy parameters to target networks $\boldsymbol{\theta}_i^-\leftarrow\boldsymbol{\theta}_i$, $\boldsymbol{\psi}^-\leftarrow\boldsymbol{\psi}$.\\\n $\textit{episode}=1$\\\n \While{$\text{episode}<\text{episode-max}$}{\n $t=1$\\\n $\epsilon=\epsilon_{\text{end}}+(\epsilon_0-\epsilon_{\text{end}})e^{-\frac{\text{episode}}{\lambda}}$\\\n \While{Treasure NOT found \textbf{and} $t<T_{\max}$}{\n Guide observes $o_1^{(t)}$ and selects $a_1^{(t)}=\mathbf{m}_1^{(t)}$ ($\epsilon$-greedy if $\mathcal{A}_1=\{-1,+1\}^M$; $\mu_{\boldsymbol{\psi}}(o_1^{(t)})+w^{(t)}$, normalized via Eqn. (\ref{eq:power_norm}), if $\mathcal{A}_1=\mathbb{R}^M$)\\\n Transmit $\mathbf{m}_1^{(t)}$ over $M$ uses of the channel\\\n Scout observes $o_2^{(t)}=\hat{\mathbf{m}}_2^{(t)}$ and selects $a_2^{(t)}$ via $\epsilon$-greedy\\\n Scout takes action $a_2^{(t)}$; collect reward $r^{(t)}$\\\n \If{$t>1$}{\n Store experiences:\\\n $(o_1^{(t-1)},a_1^{(t-1)},r^{(t-1)},o_1^{(t)})\in\mathcal{R}_1$ \mbox{ and } $(o_2^{(t-1)},a_2^{(t-1)},r^{(t-1)},o_2^{(t)})\in\mathcal{R}_2$\n }\n $t=t+1$\n }\n \n Compute average scout loss $L_{\text{DQN}}(\boldsymbol{\theta}_2)$ as in Eqn. (\ref{eq:dqn_loss}) using batch $\mathcal{B}_2 \subset \mathcal{R}_2$\\\n Update $\boldsymbol{\theta}_2$ using $\nabla_{\boldsymbol{\theta}_2}L_{\text{DQN}}(\boldsymbol{\theta}_2)$\\\n \uIf{$\mathcal{A}_1=\{-1,+1\}^M$}{\n Compute DQN average loss $L_{\text{DQN}}(\boldsymbol{\theta}_1)$ as in Eqn. (\ref{eq:dqn_loss}) using batch $\mathcal{B}_1 \subset \mathcal{R}_1$\\\n Update $\boldsymbol{\theta}_1$ using $\nabla_{\boldsymbol{\theta}_1}L_{\text{DQN}}(\boldsymbol{\theta}_1)$\\\n Update target networks $\boldsymbol{\theta}_i^-,i=1,2$ via Eqn. (\ref{eq:target_update})\n }\n \uElseIf{$\mathcal{A}_1=\mathbb{R}^M$}{\n Compute average DDPG Critic loss $L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_1)$ as in Eqn. (\ref{eq:ddpg_critic_loss}) using batch $\mathcal{B}_1$\\\n Compute average DDPG Policy loss $L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$ as in Eqn. (\ref{eq:ddpg_policy_loss}) using batch $\mathcal{B}_1$\\\n Update $\boldsymbol{\theta}_1$ and $\boldsymbol{\psi}$ using $\nabla_{\boldsymbol{\theta}_1}L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_1)$ and $\nabla_{\boldsymbol{\psi}}L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$\\\n Update target networks $\boldsymbol{\theta}_i^-,i=1,2,\boldsymbol{\psi}^-$ via Eqn. (\ref{eq:target_update})\n }\n $\text{episode}=\text{episode}+1$\n }\n\end{small}\n\end{algorithm}\n\n\nTo ensure that the actions taken by the guide meet the power constraint, we normalize the channel input to an average power of $1$ as follows:\n\begin{equation}\n a_1^{(t)}[k]\leftarrow\sqrt{M}\frac{a_1^{(t)}[k]}{\sqrt{\Big(a_1^{(t)}\Big)^\top a_1^{(t)}}},~k=1,\dots,M.\n \label{eq:power_norm}\n\end{equation}\nThe signal-to-noise ratio (SNR) of the AWGN channel is then defined as \n\begin{equation}\n \text{SNR}=-10\log_{10}(\sigma_n^2)~\text{(dB)}.\n\end{equation}\nDue to the burst noise, we define the SNR of the BN channel as the average of the SNRs of the two noise states, weighted by their probabilities: \n\begin{equation}\n \text{SNR}=-10((1-p_b)\log_{10}(\sigma_n^2)+p_b\log_{10}(\sigma_n^2+\sigma_b^2))~\text{(dB)}.\n\end{equation}
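\n\nA small Python sketch of the power normalization in Eqn. (\ref{eq:power_norm}) and of the two SNR definitions above (illustrative only, with hypothetical helper names):\n\begin{verbatim}\nimport numpy as np\n\ndef normalize_power(a):\n    # Scale the block of M channel inputs so that it has average\n    # power 1, i.e., ||a||^2 = M.\n    M = a.size\n    return np.sqrt(M) * a / np.sqrt(a @ a)\n\ndef awgn_snr_db(sigma_n):\n    return -10.0 * np.log10(sigma_n ** 2)\n\ndef bn_snr_db(sigma_n, sigma_b, p_b):\n    # Probability-weighted average of the per-state SNRs (in dB).\n    return -10.0 * ((1.0 - p_b) * np.log10(sigma_n ** 2)\n                    + p_b * np.log10(sigma_n ** 2 + sigma_b ** 2))\n\end{verbatim}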
\n\nIn Section \ref{sec:results}, we will study the effects of both the channel SNR and the channel bandwidth on the performance. Naturally, the capacity of the channel increases with both the SNR and the bandwidth. However, we would like to emphasize that the Shannon capacity is not a relevant metric \textit{per se} for the problem at hand. Indeed, we will observe that the benefits from increasing the channel bandwidth and the channel SNR saturate beyond some point. Nevertheless, the performance achieved for the underlying single-agent MDP, assuming a perfect communication link from the guide to the scout, serves as a more useful bound on the performance with any noisy communication channel. \nThe numerical results for this example will be discussed in detail in Section \ref{sec:results}.\n\n\n\n\section{Joint Channel Coding and Modulation}\n\label{subsec:eg_prob_channel_coding}\n\n\nThe formulation given in Section \ref{sec:problem_formulation} can be readily extended to the aforementioned classic ``level A\" communication problem of channel coding and modulation. \nChannel coding is a problem where $B$ bits are communicated over $M$ channel uses, which corresponds to a code rate of $B\/M$ bits per channel use.\nIn the context of the Markov game introduced previously, we can consider $2^B$ states, corresponding to each possible message. Agent 2 has $2^B$ actions, each corresponding to a different reconstruction of the message at agent 1. \nAll the actions transition to the terminal state. \nThe transmitter observes the state and sends a message by using the channel $M$ times, and the receiver observes a noisy version of the message at the output of the channel and chooses an action.\nHerein, we consider the scenario with real channel input and output values, and an average power constraint on the transmitted signals at each time $t$.\nAs such, we can define $\mathcal{O}_1=\mathcal{A}_2 = \{0,1\}^B$ and $\mathcal{A}_1 = \mathcal{O}_2 = \mathcal{C}^M_t$. We note that maximizing the average reward in this problem is equivalent to designing a channel code with blocklength $M$ and rate $B\/M$ with minimum BLER. \n\n\begin{figure}\n \centering\n \includegraphics[width=.6\linewidth]{Images\/framework_ch_coding.pdf}\n \caption{Information flow between the transmitter and the receiver.}\n \label{fig:framework_ch_coding}\n\vspace{-0.5cm}\n\end{figure}\n\nThere have been many recent studies focusing on the design of channel coding and modulation schemes using machine learning techniques \cite{Nachmani:STSP:18, Dorner:Asilomar:17, Felix:SPAWC:18, bourtsoulatze_deep_2018, Kurka:JSAIT:20, aoudia_model-free_2019}. Most of these works use supervised learning techniques, assuming a known and differentiable channel model, which allows backpropagation through the channel during training. On the other hand, here we assume that the channel model is not known; the agents are limited to their observations of the noisy channel output signals, and must learn a communication strategy through trial and error.\n\nA similar problem is considered in \cite{aoudia_model-free_2019} from a supervised learning perspective. The authors show that by approximating the gradient of the transmitter with the stochastic policy gradient of the vanilla REINFORCE algorithm \cite{williams_simple_1992}, it is possible to train both the transmitter and the receiver without knowledge of the channel model. 
\nWe wish to show here that this problem is actually a special case of the problem formulation we constructed in Section \ref{sec:problem_formulation}, and that, by approaching it from an RL perspective, the problem lends itself to a variety of solutions from the vast RL literature.\n\n\begin{algorithm}[t]\n\begin{small}\n\caption{Proposed solution for the joint channel coding-modulation problem.}\n\label{alg:channel_coding}\n\SetAlgoLined\n Initialize DNNs $\boldsymbol{\theta}_i,i=1,2$, with Gaussian $\mathcal{N}(0,10^{-2})$, and policy network $\boldsymbol{\psi}$ if using DDPG.\\\n $\textit{episode}=1$\\\n \While{$\text{episode}<\text{episode-max}$}{\n $\epsilon=\epsilon_{\text{end}}+(\epsilon_0-\epsilon_{\text{end}})e^{-\frac{\text{episode}}{\lambda}}$\\\n Observe $o_1^{(1)}\sim\text{Uniform}(\mathcal{O}_1)$\\\n $m_1^{(1)}=\mu_{\boldsymbol{\psi}}(o_1^{(1)})+w^{(1)}$ if using DDPG, else $m_1^{(1)}\sim\pi_1(\cdot|o_1^{(1)};\boldsymbol{\theta}_1)$\\\n Normalize $m_1^{(1)}$ via Eqn. (\ref{eq:power_norm})\\\n Observe $o_2^{(1)}=\hat{m}_2^{(1)}\sim P_{\text{AWGN}}(\cdot|m_1^{(1)})$ or $P_{\text{BN}}(\cdot|m_1^{(1)})$\\ \n $a_2^{(1)}=\argmax_aQ_{\boldsymbol{\theta}_2}(o_2^{(1)},a)$\\\n Collect reward $r^{(1)}$ \\\n Store experiences:\\\n $(o_1^{(1)},a_1^{(1)},r^{(1)})\in\mathcal{R}_1$ and $(o_2^{(1)},a_2^{(1)},r^{(1)})\in\mathcal{R}_2$\\\n Get batches $\mathcal{B}_1\subset\mathcal{R}_1$, $\mathcal{B}_2\subset\mathcal{R}_2$\\\n Compute average receiver loss $L_{\text{CE}}(o_2^{(1)};\boldsymbol{\theta}_2)$ as in Eqn. (\ref{eq:ce_reward}) using batch $\mathcal{B}_2$\\\n Update $\boldsymbol{\theta}_2$ using $\nabla_{\boldsymbol{\theta}_2}L_{\text{CE}}(o_2^{(1)};\boldsymbol{\theta}_2)$\\\n \uIf{use DDPG}{\n Compute average transmitter losses $L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_1)$ and $L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$ as in Eqns. (\ref{eq:ch_coding_ddpg_critic_loss},\ref{eq:ch_coding_ddpg_policy_loss}) using $\mathcal{B}_1$\\\n Update $\boldsymbol{\theta}_1$ and $\boldsymbol{\psi}$ using $\nabla_{\boldsymbol{\theta}_1}L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_1)$ and $\nabla_{\boldsymbol{\psi}} L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})$\n }\n \uElseIf{use REINFORCE}{\n Compute average transmitter gradient $\nabla_{\boldsymbol{\theta}_1}J(\boldsymbol{\theta}_1)$ as in Eqn. (\ref{eq:reinforce_loss}) using $\mathcal{B}_1$\\\n Update $\boldsymbol{\theta}_1$ using $\nabla_{\boldsymbol{\theta}_1}J(\boldsymbol{\theta}_1)$\n }\n \uElseIf{use Actor-Critic}{\n Compute average transmitter gradient $\nabla_{\boldsymbol{\theta}_1}J(\boldsymbol{\theta}_1)$ as in Eqn. (\ref{eq:a2c_loss}) using $\mathcal{B}_1$\\\n Update $\boldsymbol{\theta}_1$ using $\nabla_{\boldsymbol{\theta}_1}J(\boldsymbol{\theta}_1)$\\\n Update value estimate $v_{\pi_1}(o_1^{(1)})$ via Eqn. (\ref{eq:value_estimate})\n }\n $\text{episode}=\text{episode}+1$\n }\n\end{small}\n\end{algorithm}
\n\nHere, we opt to use DDPG to learn a deterministic joint channel coding-modulation scheme, and use the DQN algorithm for the receiver, as opposed to the vanilla REINFORCE algorithm used in \cite{aoudia_model-free_2019}.\nWe use the negative cross-entropy (CE) loss as the reward function:\n\begin{equation}\n r^{(1)}=-L_{\text{CE}}(\hat{m}^{(1)}_2)=\sum_{k=1}^{2^B}\mathbb{1}\{o_1^{(1)}=c_k\}\log\big(\mathrm{Pr}(c_k|\hat{m}^{(1)}_2)\big),\n \label{eq:ce_reward}\n\end{equation}\nwhere $c_k$ is the $k$th codeword in $\mathcal{O}_1$, $\mathbb{1}\{\cdot\}$ is the indicator function, and $\mathrm{Pr}(c_k|\hat{m}^{(1)}_2)$ is the probability the receiver assigns to codeword $c_k$ given the channel output.\nThe receiver DQN is trained simply with the CE loss, while the transmitter DDPG algorithm receives the reward $r^{(1)}$.\nSimilar to the \textit{guided robot} problem in Section \ref{subsec:eg_prob_guide_scout}, we use a replay buffer to improve the training process.\nWe note here that in this problem, each episode is simply a one-step MDP, as there is no state transition.\nAs such, the replay buffers store only $(o_1^{(1)},a_1^{(1)},r^{(1)})$ and $(o_2^{(1)},a_2^{(1)},r^{(1)})$, and a target network is not required.\nConsequently, the DDPG losses can be simplified as\n\begin{align}\n &L_{\text{DDPG}}^{\text{Critic}}(\boldsymbol{\theta}_1)=\Big(Q_{\boldsymbol{\theta}_1}\big(o_1^{(1)},\mu_{\boldsymbol{\psi}}(o_1^{(1)})\big)-r^{(1)}\Big)^2,\label{eq:ch_coding_ddpg_critic_loss}\\\n &L_{\text{DDPG}}^{\text{Policy}}(\boldsymbol{\psi})=-Q_{\boldsymbol{\theta}_1}(o_1^{(1)},\mu_{\boldsymbol{\psi}}(o_1^{(1)})).\label{eq:ch_coding_ddpg_policy_loss}\n\end{align}\n\n\n\nFurthermore, we improve upon the algorithm used in \cite{aoudia_model-free_2019} by implementing a critic, which estimates the advantage of a given state-action pair by subtracting a baseline from the policy gradient.\nThat is, in the REINFORCE algorithm, the gradient is estimated as\n\begin{equation}\n \nabla_{\boldsymbol{\theta}_1} J(\boldsymbol{\theta}_1)=\nabla_{\boldsymbol{\theta}_1}\log\pi_1(a_1^{(1)}|o^{(1)}_1;\boldsymbol{\theta}_1)r^{(1)} \;.\n \label{eq:reinforce_loss}\n\end{equation}\nIt is shown in \cite{konda_actor-critic_nodate} that by subtracting a baseline $b(o_1^{(1)})$, the variance of the gradient $\nabla_{\boldsymbol{\theta}} J(\boldsymbol{\theta})$ can be greatly reduced. \nHerein, we use the value of the state, defined by Eqn. (\ref{eq:value_function}), except that, in this problem, the trajectories all have length 1.\nTherefore, the value function can be simplified to \n\begin{equation}\n b(o_1^{(1)})=v_{\pi_1}(o_1^{(1)})=\mathbb{E}_{\pi_1}\big[r^{(1)}|o_1^{(1)}\big].\n\label{eq:ch_code_baseline}\n\end{equation}\nThe gradient of the expected return $J(\boldsymbol{\theta}_1)$ with respect to the policy parameters is then \n\begin{equation}\n \nabla_{\boldsymbol{\theta}_1} J(\boldsymbol{\theta}_1)=\nabla_{\boldsymbol{\theta}_1}\log\pi_1(a_1^{(1)}|o_1^{(1)};\boldsymbol{\theta}_1)(r^{(1)}-v_{\pi_1}(o_1^{(1)})).\n \label{eq:a2c_loss}\n\end{equation}\nIn practice, to estimate $v_{\pi_1}(o^{(1)}_1)$, we use a weighted moving average of the reward collected for a given state $o_1^{(1)}\in\mathcal{O}_1$ in $\mathcal{B}_1(o_1^{(1)})=\{(o,a)\in \mathcal{B}_1| o=o_1^{(1)}\}$\nfor the batch of trajectories $\mathcal{B}_1$:\n\begin{equation}\n v_{\pi_1}(o_1^{(1)})\leftarrow\n (1-\alpha) v_{\pi_1}(o_1^{(1)})+\n \frac{\alpha}{|\mathcal{B}_1(o_1^{(1)})|}\!\!\sum_{(o,a)\in \mathcal{B}_1(o_1^{(1)})}\!\! r^{(1)}(o,a),\n\label{eq:value_estimate}\n\end{equation}\nwhere $\alpha$ is the weight of the moving average and $v_{\pi_1}(o_1^{(1)})$ is initialized with zeros.\nWe use $\alpha=0.01$ in our experiments.\nThe algorithm for solving the joint channel coding and modulation problem is shown in Algorithm \ref{alg:channel_coding}.\nThe numerical results and comparisons with alternative designs are presented in the next section.
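\n\nAs an illustration, the baseline update of Eqn. (\ref{eq:value_estimate}) and the resulting advantage weighting in Eqn. (\ref{eq:a2c_loss}) can be sketched as follows, assuming a hypothetical data layout in which the batch is a list of (message index, reward) pairs:\n\begin{verbatim}\nimport numpy as np\n\ndef update_baseline(v, batch, alpha=0.01):\n    # Per-message weighted moving average of the reward; v is an\n    # array with one entry per message in O_1.\n    msgs = np.array([m for m, _ in batch])\n    rews = np.array([r for _, r in batch])\n    for m in np.unique(msgs):\n        v[m] = (1.0 - alpha) * v[m] + alpha * rews[msgs == m].mean()\n    return v\n\ndef advantage(msg, r, v):\n    # REINFORCE-with-baseline weight (r - v(o)).\n    return r - v[msg]\n\end{verbatim}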
\n\n\n\n\n\n\section{Numerical Results}\n\label{sec:results}\n\n\n\n\begin{table}\n\begin{center}\n\caption{DNN architecture and hyperparameters used.}\n\begin{tabular}{|c|c|c|}\n\hline\n$Q_{\boldsymbol{\theta}_i}$ & $\mu_{\boldsymbol{\psi}}$ & Hyperparameters \\ \hline\nLinear: 64 & Linear: 64 & $\gamma=0.99$ \\\nReLU & ReLU & $\epsilon_0=0.9$ \\\nLinear: 64 & Linear: 64 & $\epsilon_{\text{end}}=0.05$ \\\nReLU & ReLU & $\lambda=1000$ \\\nLinear: $\begin{cases}\n |\mathcal{A}_i|,~&\text{if DQN}, \\ \n 1,~&\text{if DDPG}\n \end{cases}$\n& Linear: dim$(\mathcal{A}_i)$ & $\tau=0.005$ \\ \hline\n\end{tabular}\n\label{tab:parameters}\n\end{center}\n\vspace{-0.8cm}\n\end{table}\n\nWe first define the DNN architecture used for all the experiments in this section.\nFor all networks, the inputs are processed by three fully connected layers, with rectified linear unit (ReLU) activations after the first two.\nThe weights of the layers are initialized using Gaussian initialization with mean 0 and standard deviation $0.01$.\nWe store $100K$ experience samples in the replay buffer ($|\mathcal{R}_i|=100K$), and sample batches of size $128$ for training.\nWe train every experiment for $500K$ episodes.\nThe function used for $\epsilon$-greedy exploration is\n\begin{equation}\n \epsilon=\epsilon_{\text{end}}+(\epsilon_0-\epsilon_{\text{end}})e^{-\frac{\text{episode}}{\lambda}},\n\end{equation}\nwhere $\lambda$ controls the decay rate of $\epsilon$.\nWe use the Adam optimizer \cite{kingma_adam_2017} with learning rate $0.001$ for all the experiments.\nThe network architectures and the hyperparameters chosen are summarized in Table \ref{tab:parameters}.\nWe consider $\text{SNR}\in[0,23]$ dB for the AWGN channel. 
\nFor the BN channel, we use the same SNR range as the AWGN channel for the low noise state and set $\\sigma_b=2$ for the high noise state.\nWe consider $p_b\\in\\{0.1,0.2\\}$ to see the effect of changing the high noise state probability.\n\n\n\n\\begin{figure} \n \\centering\n \\subfloat[$\\delta=0$ \\label{subfig:bsc_grid_world}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.02,.98)},\n anchor=north west,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=0.3,\n ymin=2.,\n ymax=10.3,\n ytick distance=1,\n xlabel={$p_e$},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=PE, y=BINARY, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Joint learning and communication ($M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=PE, y=HAMMING (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=PE, y=HAMMING (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=PE, y=OPT (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=PE, y=OPT (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=PE, y=LB, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions without noise}\n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$\\delta=0.05$ \\label{subfig:bsc_grid_world_noisy}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{6}{6}\\selectfont,\n at={(0.,1.)},\n anchor=north west,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=0.3,\n ymin=2.,\n ymax=10.3,\n ytick distance=1,\n xlabel={$p_e$},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed 
zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=PE, y=BINARY_NOISY, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Joint learning and communication ($M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=PE, y=HAMMING_NOISY (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=PE, y=HAMMING_NOISY (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=PE, y=OPT_NOISY (HC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=0.8}] \n table [x=PE, y=OPT_NOISY (RC), col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=PE, y=LB, col sep=comma] {Data\/bsc_grid_world.csv};\n \\addlegendentry{Optimal actions without noise}\n \\end{axis}\n \\end{tikzpicture}\n }\n \\caption{Comparison of agents jointly trained to collaborate and communicate over a BSC to separate learning and communications with a (7,4) Hamming code.}\n \\label{fig:bsc_grid_world} \n\\vspace{-0.8cm}\n\\end{figure}\n\n\\begin{figure} \n \\centering\n \\subfloat[$\\delta=0$ \\label{subfig:awgn_grid_world}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=23,\n xtick distance=3,\n ymin=2.3,\n ymax=4.3,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}\n \\addplot[color=blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[color=blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line 
width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Optimal with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Optimal with Hamming code \/ RC}\n \n \\addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=SNR, y=LB, col sep=comma] {Data\/awgn_grid_world.csv};\n \\addlegendentry{Optimal actions without noise}\n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$\\delta=0.05$ \\label{subfig:awgn_grid_world_noisy}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=0,\n xmax=23,\n xtick distance=3,\n ymin=2.,\n ymax=5.5,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[color=blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[color=blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, 
mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \addplot[color=darkgray, solid, thick, mark=x, mark options={fill=darkgray, scale=1}] \n table [x=SNR, y=LB, col sep=comma] {Data\/awgn_grid_world_noisy.csv};\n \addlegendentry{Optimal actions without noise}\n \end{axis}\n \end{tikzpicture}\n }\n \caption{Comparison of the agents jointly trained to collaborate and communicate over an AWGN channel to separate learning and communications with a (7,4) Hamming code.}\n \label{fig:awgn_grid_world} \n\vspace{-1cm}\n\end{figure}\n\n\begin{figure} \n \centering\n \subfloat[Separate learning and communication (HC). \label{subfig:hamming_bsc_vis}]{%\n \includegraphics[height=0.2\linewidth]{Images\/hamming_bsc_vis.pdf}\n }\\\n \subfloat[Joint learning and communication. \label{subfig:learning_bsc_vis}]{%\n \includegraphics[height=0.2\linewidth]{Images\/learning_bsc_vis.pdf}\n }\n \caption{Example visualization of the codewords used by the guide, and the path taken by the scout for $M=7$ uses of a BSC with $p_e=0.2$ and $\delta=0$. The origin is at the top left corner.}\n \label{fig:bsc_vis} \n\vspace{-0.8cm}\n\end{figure}\n\n\nFor the grid world problem, presented in Section \ref{subsec:eg_prob_guide_scout}, the scout and treasure are placed uniformly at random at distinct locations upon initialization (i.e., $p_g\ne p_s^{(0)}$).\nThese locations are one-hot encoded to form a $2L^2$-dimensional vector that constitutes the observation of the guide, $o_1^{(t)}$. \nWe fix the channel bandwidth to $M\in\{7,10\}$ and compare our solutions to a scheme that separates the channel coding from the underlying MDP.\nThat is, we first train an RL agent that solves the grid world problem without communication constraints. \nWe then introduce a noisy communication channel and encode the action chosen by the RL agent using a (7,4) Hamming code before transmission across the channel.\nThe received message is then decoded and the resultant action is taken.\nWe note that the (7,4) Hamming code is a perfect code that encodes four data bits into seven channel bits by adding three parity bits; thus, it can correct single-bit errors.\nThe association between the 16 possible actions and the 4-bit codewords can be done by random permutation, which we refer to as random codewords (RC), or by hand-crafted (HC) association, assigning adjacent codewords to similar actions, as shown in Fig. 
\ref{fig:grid_world}.\nBy associating adjacent codewords with similar actions, the scout will take an action similar to the intended one even if there is a decoding error, assuming the number of bit errors is not too high.\nLastly, we compute the optimal solution, where the steps taken form the shortest path to the treasure, and use a (7,4) Hamming channel code to transmit those actions.\nThis is referred to as ``Optimal actions with Hamming code\" and acts as a lower bound for the separation-based results.\n\n\begin{figure} \n \centering\n \subfloat[$\delta=0,p_b=0.1$ \label{subfig:bn01_grid_world}]{%\n \begin{tikzpicture}\n \pgfplotsset{\n legend style={\n font=\fontsize{5.8}{5.8}\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\linewidth,\n width=0.5\linewidth,\n xmin=-1,\n xmax=21,\n xtick distance=3,\n ymin=2.8,\n ymax=6,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\fontsize{8}{8}\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\fontsize{8}{8}\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\fontsize{8}{8}\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\fontsize{8}{8}\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \begin{axis}[mark options={solid}]\n \addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.5}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_p01.csv};\n \addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.5}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_p01.csv};\n \addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \addplot[color=gray, dashed, line width=1.2pt, thick, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \addlegendentry{Separate learning and communication \/ HC}\n \n \addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \addlegendentry{Separate learning and communication \/ RC}\n \n \addplot[color=cyan, dashed, line width=1.2pt, thick, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_p01.csv};\n \addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \end{axis}\n \end{tikzpicture}\n }\n \subfloat[$\delta=0.05,p_b=0.1$ \label{subfig:bn01_grid_world_noisy}]{%\n \begin{tikzpicture}\n \pgfplotsset{\n legend style={\n font=\fontsize{5.8}{5.8}\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\linewidth,\n width=0.5\linewidth,\n xmin=-1,\n xmax=21,\n xtick distance=3,\n ymin=2.8,\n ymax=6,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of 
steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, thick, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, thick, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_noisy_p01.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\\\\n \\subfloat[$\\delta=0,p_b=0.2$ \\label{subfig:bn02_grid_world}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(0.99,.99)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=-2,\n xmax=18,\n xtick distance=3,\n ymin=2.9,\n ymax=6.5,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Joint learning and 
communication (BPSK, $M=7$)}\n \n \\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\subfloat[$\\delta=0.05,p_b=0.2$ \\label{subfig:bn02_grid_world_noisy}]{%\n \\begin{tikzpicture}\n \\pgfplotsset{\n legend style={\n font=\\fontsize{5.8}{5.8}\\selectfont,\n at={(1.0,1.)},\n anchor=north east,\n },\n height=0.4\\linewidth,\n width=0.5\\linewidth,\n xmin=-2,\n xmax=18,\n xtick distance=3,\n ymin=3.1,\n ymax=7,\n ytick distance=0.5,\n xlabel={SNR (dB)},\n ylabel={Average number of steps},\n grid=both,\n grid style={line width=.1pt, draw=gray!10},\n major grid style={line width=.2pt,draw=gray!50},\n every axis\/.append style={\n x label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:0.5,-0.1)},\n },\n y label style={\n font=\\fontsize{8}{8}\\selectfont,\n at={(axis description cs:-0.1,0.5)},\n },\n x tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=2,\n \/tikz\/.cd\n },\n y tick label style={\n font=\\fontsize{8}{8}\\selectfont,\n \/pgf\/number format\/.cd,\n fixed,\n fixed zerofill,\n precision=1,\n \/tikz\/.cd\n },\n }\n }\n \\begin{axis}[mark options={solid}]\n \\addplot[blue, solid, line width=0.9pt, mark=triangle*, mark options={fill=blue, scale=1.6}] \n table [x=SNR, y=BINARY, col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Joint learning and communication (BPSK, $M=7$)}\n \n \\addplot[blue, dashed, line width=1.2pt, mark=triangle*, mark options={fill=blue, solid, scale=1.6}] \n table [x=SNR, y=REAL, col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Joint learning and communication (Real, $M=7$)}\n \n \\addplot[color=gray, dashed, line width=1.2pt, mark=*, mark options={fill=gray, solid, scale=1.1}] \n table [x=SNR, y=HAMMING (HC), col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Separate learning and communication \/ HC}\n \n \\addplot[color=orange, solid, line width=0.7pt, mark=*, mark options={fill=orange, scale=1}] \n table [x=SNR, y=HAMMING (RC), col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Separate learning and communication \/ RC}\n \n \\addplot[color=cyan, dashed, line width=1.2pt, mark=square*, mark options={fill=cyan, solid, scale=1.1}] \n table [x=SNR, y=OPT (HC), col sep=comma] 
{Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ HC}\n \n \\addplot[color=magenta, solid, line width=0.7pt, mark=square*, mark options={fill=magenta, scale=1}] \n table [x=SNR, y=OPT (RC), col sep=comma] {Data\/bn_grid_world_noisy_p02.csv};\n \\addlegendentry{Optimal actions with Hamming code \/ RC}\n \n \\end{axis}\n \\end{tikzpicture}\n }\n \\caption{Comparison of the agents jointly trained to collaborate and communicate over an BN channel to separate learning and communications with a (7,4) Hamming code.}\n \\label{fig:bn_grid_world} \n\\vspace{-1.0cm}\n\\end{figure}\n\nFor the joint channel coding-modulation problem, we again compare the DDPG and actor-critic results with a (7,4) Hamming code using BPSK modulation.\nThe source bit sequence is uniformly randomly chosen from the set $\\{0,1\\}^M$ and one-hot encoded to form the input state $o_1^{(1)}$ of the transmitter.\nWe also compare with the algorithm derived in \\cite{aoudia_model-free_2019}, which uses supervised learning for the receiver and the REINFORCE policy gradient to estimate the gradient of the transmitter. \n\n\n\n\n\n\nWe first present the results for the guided robot problem. \nFig. \\ref{fig:bsc_grid_world} shows the number of steps, averaged over 10K episodes, needed by the scout to reach the treasure for the BSC case with $\\delta=\\{0,0.05\\}$. \nThe ``optimal actions without noise\" refers to the minimum number of steps required to reach the treasure assuming a perfect communication channel and acts as the lower bound for all the experiments.\nIt is clear that jointly learning to communicate and collaborate over a noisy channel outperforms the separation-based results with both RC and HC.\nIn Fig. \\ref{fig:bsc_vis}, we provide an illustration of the actions taken by the agent after some errors over the communication channel with the separate learning and communication scheme (HC) and with the proposed joint learning and communication approach. It can be seen that at step 2 the proposed scheme takes a similar action $(-1,-1)$ to the optimal one $(-2,0)$ despite experiencing 2 bit errors, and in step 3 despite experiencing 3 bit errors (Fig. \\ref{subfig:learning_bsc_vis}). On the other hand, in the separate learning and communication scheme with a (7,4) Hamming code and HC association of actions, the scout decodes a very different action from the optimal one in step 2 which results in an additional step being taken. However, it was able to take a similar action to the optimal one in step 4 despite experiencing 2 bit errors. This shows that although hand crafting codeword assignments can lead to some performance benefits in the separate learning and communication scheme, which was also suggested by Fig. \\ref{fig:bsc_grid_world}, joint learning and communication leads to more robust codeword assignments that give much more consistent results. Indeed, we have also observed that, unlike the separation based scheme, where each message corresponds to a single action, or equivalently, there are 8 different channel output vectors for which the same action is taken, the codeword to action mapping at the scout can be highly asymmetric for the learned scheme. \nMoreover, neither the joint learning and communication results nor the separation-based results achieve the performance of the optimal solution with Hamming code. 
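To make the separation baseline concrete, the following is a minimal sketch of the encode-transmit-decode chain described above. The Hamming (7,4) matrices are standard; the grid-action table and the Gray-code HC assignment are illustrative assumptions rather than the exact mapping used in our experiments.
\begin{verbatim}
import numpy as np

# (7,4) Hamming code: the columns of H are the binary representations of
# 1..7, so a nonzero syndrome directly indexes the flipped bit position.
G = np.array([[1,1,0,1],[1,0,1,1],[1,0,0,0],[0,1,1,1],
              [0,1,0,0],[0,0,1,0],[0,0,0,1]])          # codeword = G d mod 2
H = np.array([[1,0,1,0,1,0,1],[0,1,1,0,0,1,1],[0,0,0,1,1,1,1]])

def encode(d):                       # d: 4 data bits
    return G.dot(d) % 2

def decode(r):                       # r: 7 received bits
    s = H.dot(r) % 2
    pos = s[0] + 2*s[1] + 4*s[2]     # 0 means no detected error
    if pos:
        r = r.copy(); r[pos-1] ^= 1
    return r[[2, 4, 5, 6]]           # data bits sit at positions 3,5,6,7

def bsc(c, p_e, rng):                # binary symmetric channel
    return c ^ (rng.random(c.size) < p_e).astype(int)

# HC association (assumption): Gray-code each coordinate so that many
# nearby codewords decode to similar grid moves.
gray = [0b00, 0b01, 0b11, 0b10]      # encodes displacements -2, -1, 0, +1
def action_to_bits(dx, dy):
    gx, gy = gray[dx + 2], gray[dy + 2]
    return np.array([gx >> 1, gx & 1, gy >> 1, gy & 1])

rng = np.random.default_rng(0)
d = action_to_bits(-2, 0)            # the intended move from the example above
d_hat = decode(bsc(encode(d), p_e=0.2, rng=rng))
print(d, d_hat)
\end{verbatim}
Under this assignment, many residual bit errors in the decoded message change a move by only one grid step, which is the kind of robustness illustrated in Fig.~\ref{fig:bsc_vis}.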
The gap between the optimal solution with the Hamming code and the results obtained by the guide/scout formulation is due to the limited capability of the DQN architectures to learn the optimal solution, and to the challenge of learning in noisy environments.
Comparing Figs. \ref{subfig:bsc_grid_world} and \ref{subfig:bsc_grid_world_noisy}, the performance degradation of the separation-based scheme is slightly greater than that of the joint framework.
This is because the joint learning and communication approach is better at adjusting its policy and communication strategy to mitigate the effect of the channel noise than employing a standard channel code.

\begin{figure}
\begin{minipage}{.48\textwidth}
    \centering
    % Panel condensed from pgfplots code: number of steps vs. episode; curves: BPSK BSC
    % ($P_e=0.05$), Real AWGN ($10$ dB), BPSK AWGN ($10$ dB). Data: Data/conv_mdp.csv.
    \caption{Convergence of each channel scenario for the grid world problem without noise ($M=7,~\delta=0$).}
    \label{fig:mdp_convergence}
    \vspace{0.2cm}
\end{minipage}%
\hfill
\begin{minipage}{.48\textwidth}
    \centering
    % Panel condensed from pgfplots code: average number of steps vs. SNR (dB); curves:
    % Joint learning and communication (BPSK/Real, $M=7$) and (BPSK/Real, $M=10$).
    % Data: Data/awgn_grid_world.csv, Data/awgn_grid_world_m10.csv.
    \caption{Impact of the channel bandwidth $M\in\{7,10\}$ on the performance for an AWGN channel ($\delta=0$).}
    \label{fig:bw_affec}
\end{minipage}
\vspace{-1cm}
\end{figure}


Similarly, in the AWGN case in Fig. \ref{fig:awgn_grid_world}, the results from joint learning and communication clearly outperform those obtained via separate learning and communication.
Here, the ``Real'' results refer to the guide agent with $\mathcal{A}_1=\mathbb{R}^M$, while the ``BPSK'' results refer to the guide agent with $\mathcal{A}_1=\{-1,+1\}^M$.
The ``Real'' results clearly outperform all other schemes considered: relaxing the channel constellation to all real values within a power constraint allows the guide to convey more information than a binary constellation can.
We also observe that the gain from this relaxation is higher at lower SNR values for both $\delta$ values.
This is in contrast to the gap between the channel capacities achieved with Gaussian and binary inputs in an AWGN channel, which is negligible at low SNR values and increases with SNR.
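This contrast is easy to verify numerically. The following is a minimal sketch comparing the Gaussian-input capacity of a real AWGN channel with a Monte Carlo estimate of the BPSK-input mutual information (unit noise variance is assumed).
\begin{verbatim}
import numpy as np

def gaussian_capacity(snr):
    # Capacity of a real AWGN channel with Gaussian input (bits/channel use)
    return 0.5 * np.log2(1.0 + snr)

def bpsk_capacity(snr, n=200000, seed=0):
    # Monte Carlo estimate of I(X;Y) for equiprobable X in {-1,+1} and
    # Y = sqrt(snr) * X + N(0,1):  I = E[log2 p(y|x) / p(y)]
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], n)
    y = np.sqrt(snr) * x + rng.standard_normal(n)
    lp = np.exp(-0.5 * (y - np.sqrt(snr))**2)   # likelihoods up to a
    lm = np.exp(-0.5 * (y + np.sqrt(snr))**2)   # common constant
    l_cond = np.where(x > 0, lp, lm)
    return np.mean(np.log2(l_cond / (0.5 * (lp + lm))))

for snr_db in [-5, 0, 5, 10, 15]:
    snr = 10.0 ** (snr_db / 10.0)
    print(snr_db, round(gaussian_capacity(snr), 3), round(bpsk_capacity(snr), 3))
\end{verbatim}
At $-5$ dB the two values nearly coincide, while at high SNR the binary input saturates at one bit per channel use.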
This contrast shows that channel capacity is not the right metric for this problem: even when two channels are similar in terms of capacity, they can yield very different performances in terms of the discounted sum reward when used in the MARL context.


\begin{figure}
\begin{minipage}{.48\textwidth}
    \centering
    % Panel condensed from pgfplots code (log scale): BLER vs. SNR (dB); curves:
    % HAMMING, DDPG, REINFORCE, Actor-Critic. Data: Data/ch_coding_awgn.csv.
    \caption{BLER performance of different modulation and coding schemes over an AWGN channel.}
    \label{fig:ch_coding_bler}
    \vspace{-0.2cm}
\end{minipage}
\hfill
\begin{minipage}{.48\textwidth}
    \centering
    % Panel condensed from pgfplots code (log scale): BLER vs. episode; curves:
    % DDPG, REINFORCE, Actor-Critic. Data: Data/conv_ch_code.csv.
    \caption{Convergence behavior for the joint channel coding and modulation problem in an AWGN channel.}
    \label{fig:ch_coding_convergence}
\end{minipage}
\vspace{-0.8cm}
\end{figure}

In the BN channel case (Fig. \ref{fig:bn_grid_world}), observations similar to those in the AWGN case can be made.
The biggest difference is that the proposed framework yields a larger performance improvement over the separation-based scheme than in the AWGN case.
This is particularly evident with BPSK modulation, where the gap between the joint learning and communication results and the separate learning and communication results is larger than in the AWGN channel case.
This shows that, in this more challenging channel scenario, the proposed framework is better able to jointly adjust the policy and the communication scheme to the conditions of the channel.
It also again highlights that the Shannon capacity is not the most important metric for this problem: the burst noise does not significantly reduce the expected SNR, yet we observe an even more pronounced improvement of the proposed schemes over the separation-based ones.

In Figs. \ref{fig:bsc_grid_world}, \ref{fig:awgn_grid_world} and \ref{fig:bn_grid_world}, it can be seen that when the grid world itself is noisy (i.e., $\delta>0$), the agents are still able to collaborate, albeit at the cost of a higher average number of steps to reach the treasure.
The convergence of the number of steps needed to reach the treasure for each channel scenario is shown in Fig. \ref{fig:mdp_convergence}.
The slow convergence for the BSC indicates the difficulty of learning a binary code for this channel.
We also study the effect of the bandwidth $M$ on the performance.
In Fig. \ref{fig:bw_affec}, we present the average number of steps required for channel bandwidths $M=7$ and $M=10$.
As expected, increasing the channel bandwidth reduces the average number of steps the scout needs to reach the treasure.
The gain is particularly significant for BPSK in the low SNR regime, as the increased bandwidth allows the guide to better protect the conveyed information against the channel noise.

Next, we present the results for the joint channel coding and modulation problem.
Fig. \ref{fig:ch_coding_bler} shows the BLER performance obtained by BPSK modulation with a (7,4) Hamming code, our DDPG transmitter described in Section \ref{subsec:eg_prob_channel_coding}, the algorithm proposed in \cite{aoudia_model-free_2019}, and the proposed approach using an additional critic, labeled as ``Hamming (7,4)'', ``DDPG'', ``REINFORCE'', and ``Actor-Critic'', respectively.
It can be seen that the learning approaches (DDPG, REINFORCE and Actor-Critic) perform better than the Hamming (7,4) code.
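The role of the critic is easiest to see in the gradient estimator itself. Below is a minimal sketch of a REINFORCE-style update for a stochastic Gaussian transmitter policy, with an optional learned baseline standing in for the critic; the network sizes, exploration variance, and placeholder reward are simplified assumptions, not our exact setup.
\begin{verbatim}
import torch
import torch.nn as nn

M, K = 7, 16                         # channel uses, number of messages
policy = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, M))
critic = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(policy.parameters()) + list(critic.parameters()),
                       lr=1e-3)

def step(onehot_msg, reward_fn, sigma=0.1, use_critic=True):
    mean = policy(onehot_msg)                    # mean of the Gaussian policy
    dist = torch.distributions.Normal(mean, sigma)
    x = dist.sample()                            # transmitted symbols
    r = reward_fn(x)                             # e.g., negative receiver loss
    b = critic(onehot_msg).squeeze(-1) if use_critic else torch.zeros_like(r)
    # REINFORCE gradient with a baseline: subtracting b leaves the estimator
    # unbiased but reduces its variance.
    pg_loss = -((r - b).detach() * dist.log_prob(x).sum(-1)).mean()
    critic_loss = ((r.detach() - b) ** 2).mean()
    opt.zero_grad(); (pg_loss + critic_loss).backward(); opt.step()

msgs = torch.eye(K)[torch.randint(0, K, (32,))]  # a batch of one-hot messages
step(msgs, lambda x: -x.pow(2).sum(-1))          # placeholder reward function
\end{verbatim}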
Additionally, the stochastic policy algorithms (REINFORCE and Actor-Critic) perform better than DDPG.
This is likely due to the limitations of DDPG: in practice, criterion 1) of Theorem \ref{thm:ddpg_compatibility} is often not satisfied.
Lastly, we show that we can improve upon the algorithm proposed in \cite{aoudia_model-free_2019} by adding an additional critic, which reduces the variance of the policy gradients and therefore learns a better policy.
The results obtained by the actor-critic algorithm are superior to those from the REINFORCE algorithm, especially in the higher SNR regime.
On average, the learning-based results are better than the Hamming (7,4) performance by $1.24$, $2.58$ and $3.70$ dB for DDPG, REINFORCE and Actor-Critic, respectively.


\begin{figure}
    \centering
    % Panels condensed from pgfplots code (log scale): BLER vs. SNR (dB); curves: HAMMING,
    % DDPG, REINFORCE, Actor-Critic. Data: Data/ch_coding_bn01.csv, Data/ch_coding_bn02.csv.
    \subfloat[$p_b=0.1$ \label{subfig:ch_coding_bn01}]{}
    \subfloat[$p_b=0.2$ \label{subfig:ch_coding_bn02}]{}
    \caption{BLER performance of different modulation and coding schemes over a BN channel.}
    \label{fig:ch_coding_bler_bn}
\vspace{-1.0cm}
\end{figure}

In the BN channel case, shown in Fig. \ref{fig:ch_coding_bler_bn}, the BLER of all schemes increases due to the increased noise, but we still see improved performance with the learning algorithms.
Fig. \ref{fig:ch_coding_convergence} shows the convergence behavior of the different learning algorithms at a channel SNR of 5 dB.
We can see that the actor-critic algorithm converges the quickest and achieves the lowest BLER, while REINFORCE converges the slowest but achieves a lower BLER than DDPG at the end of training.
This is in accordance with the BLER performance observed in Fig. \ref{fig:ch_coding_bler}.
We reiterate that the joint channel coding and modulation problem, studied from the perspective of supervised learning in \cite{aoudia_model-free_2019}, is indeed a special case of the joint learning and communication framework we presented in Section \ref{sec:problem_formulation} from a MARL perspective, and can be solved using a myriad of algorithms from the RL literature.

Lastly, we note that, due to the simplicity of our network architecture, the computational complexity of our models is not significantly higher than that of the separation-based scheme presented herein.
The average computation time for encoding and decoding using our proposed DRL solution is approximately $323~\mu$s, compared to $286~\mu$s for the separate learning and communication case with a (7,4) Hamming code, using an Intel Core i9 processor.
This corresponds to a roughly 13\% increase in computation time, which is modest considering the performance gains observed in both the guided robot problem and the joint channel coding and modulation problem.

\begin{Remark}
We note that both the grid world problem and the channel coding and modulation problem are POMDPs. Therefore, recurrent neural networks (RNNs), such as long short-term memory (LSTM) \cite{hochreiter_lstm_1997} networks, should provide performance improvements, as the cell state can act as a belief state.
However, in our initial simulations we were not able to observe such improvements, although this is likely due to the limitations of our architectures.
\end{Remark}

\begin{Remark}
Even though we have only considered the channel coding and modulation problem in this paper due to lack of space, our framework can also be reduced to the source coding and joint source-channel coding problems by changing the reward function. If we consider an error-free channel with binary inputs and outputs, and let the reward depend on the average distortion between the $B$-length source sequence observed by agent 1 and its reconstruction generated by agent 2 as its action, we recover the lossy source coding problem, where the length-$B$ sequence is compressed into $M$ bits. If we instead consider a noisy channel between the two agents, we recover the joint source-channel coding problem with an unknown channel model.
\end{Remark}


\section{Conclusion}
\label{sec:conclusions}

In this paper, we have proposed a comprehensive framework that jointly considers the learning and communication problems in collaborative MARL over noisy channels.
Specifically, we consider a MA-POMDP in which agents can exchange messages with each other over a noisy channel in order to improve the shared total long-term average reward.
By considering the noisy channel as part of the environment dynamics and the message each agent sends as part of its action, the agents not only learn to collaborate with each other via communications but also learn to communicate ``effectively''.
This corresponds to ``level C'' of Shannon and Weaver's organization of the communication problems in \cite{ShannonWeaver49}, which seeks to answer the question ``How effectively does the received meaning affect conduct in the desired way?''.
We show that by jointly considering learning and communications in this framework, the learned joint policy of all the agents is superior to that obtained by treating the communication and the underlying MARL problem separately.
We emphasize that the latter is the conventional approach today: MARL solutions obtained in the machine learning literature assuming error-free communication links are simply employed in practice when autonomous vehicles or robots communicate over noisy wireless links to achieve the desired coordination and cooperation.
We demonstrate via numerical examples that the policies learned with our joint approach produce higher average rewards than those obtained when learning and communication are treated separately.
We also show that the proposed framework is a generalization of most of the communication problems that have been traditionally studied in the literature, corresponding to ``level A'' as described by Shannon and Weaver.
This formulation opens the door to employing available numerical MARL techniques, such as the actor-critic framework, for the design of channel modulation and coding schemes for communication over unknown channels.
We believe this is a very powerful framework with many real-world applications, and it can greatly benefit from the fast-developing algorithms in the MARL literature to design novel communication codes and protocols, particularly with the goal of enabling collaboration and cooperation among distributed agents.

\bibliographystyle{ieeetr}


\section{Introduction}
\label{sec:intro}

Knowing stellar ages is fundamental to understanding the time-evolution of various astronomical phenomena related to stars and their companions.
Accordingly, over the past decades much work has focused on identifying the properties of a star that best reveal its age. For coeval populations of stars in clusters, the most reliable ages are determined by fitting model isochrones to single cluster members in the color-magnitude diagram. However, for the vast majority of stars not in clusters (unevolved late-type field stars), ages determined using the isochrone method are highly uncertain, because the primary age indicators are nearly constant throughout their main-sequence lifetimes, and because their distances, and thus luminosities, are poorly known. Therefore, finding a distance-independent property of individual stars that can act as a reliable determinant of their ages will be of great value.

Stellar rotation (and the related measure of chromospheric activity - see the paper by Mamajek in this volume) has emerged as a promising and distance-independent indicator of age \citep[e.g.][]{skumanich72,kawaler89,barnes03a,barnes07}.
\citet{skumanich72} first established stellar rotation as an astronomical clock by relating the average projected rotation velocity in young open clusters to their ages via the expression $\overline{v\sin i} \propto t^{-0.5}$.
The Skumanich relation is limited in mass to early G dwarfs and suffers from the ambiguity (due to the unknown inclination angle) of the $v\sin i$ data.
Furthermore, for ages beyond that of the Hyades cluster ($\sim$625\,Myr), the Skumanich relationship is constrained only by a single G2 dwarf - the Sun.

Modern photometric time-series surveys in young open clusters can provide precisely measured stellar rotation periods (free of the $\sin i$ ambiguity) for F, G, K, and M dwarfs. Based on such new data and emerging empirical relationships between stellar rotation, color, and age, a method was proposed by \citet{barnes03a} to derive ages for late-type dwarfs from observations of their colors and rotation periods alone. We refer the reader to the paper in this volume by Barnes for a motivation and description of the method of {\it gyrochronology}. However, our ability to determine stellar ages from stellar rotation hinges on how well we can measure the dependence of rotation on age for stars of different masses.


\section{The key role of open clusters}

As coeval populations of stars with a range of masses and well-determined ages, open clusters fulfill a critically important role in calibrating the relations between stellar age, rotation, and color.
Indeed, {\it open clusters can define a surface in the 3-dimensional space of stellar rotation period, color, and age, from which the latter can be determined from measurements of the former two} (see Figure~\ref{3d_car} below).
This inherent quality of open clusters can only be fully exploited if precise stellar rotation periods (free of the $\sin i$ ambiguity) are measured for cluster members.
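As a concrete illustration, an age can be read off this surface by inverting the color-period-age calibration given later in Eqs. [5.1] and [5.2]; the following minimal sketch uses the Barnes (2007) coefficients discussed in Section 5 and treats them as exact, which is an assumption.
\begin{verbatim}
# Gyrochronology age from a rotation period P (days) and dereddened B-V
# color, inverting  P(t, B-V) = t**n * a*((B-V) - b)**c  with t in Myr
# (coefficients from the Barnes 2007 calibration discussed in Section 5).
a, b, c, n = 0.77, 0.40, 0.60, 0.52

def gyro_age_myr(P, BV):
    return (P / (a * (BV - b) ** c)) ** (1.0 / n)

# Sanity check with roughly solar values (P ~ 25.4 d, B-V ~ 0.65):
print(gyro_age_myr(25.4, 0.65))   # ~4100 Myr, close to the solar age
\end{verbatim}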
To this end, the time baseline and cadence of the time-series photometric observations must be, respectively, long enough and high enough to avoid a bias against detecting the periods of more slowly rotating stars, and to avoid the detection of false rotation periods due to aliases and a strong ``window function'' in the data.
Furthermore, measured rotation periods should be combined with information about cluster membership and multiplicity. Removing non-members and stars in close binaries affected by tidal synchronization will allow a better definition of the relationship between rotation period and color at the age of the cluster.
Finally, identification of single cluster members will enable a better cluster age to be determined from isochrone fitting. The new results for the open clusters M35 and M34 shown in Figure~\ref{m3534pbv} reflect the powerful combination of decade-long time-series spectroscopy for cluster membership and time-series photometry over 5 months for stellar rotation periods.


\section{New observations in the open clusters M35 and M34}

We carried out photometric monitoring campaigns over 5 consecutive months for rotational periods, and nearly decade-long radial-velocity surveys for cluster membership and binarity, on the $\sim$150\,Myr and $\sim$200\,Myr open clusters M35 and M34. For detailed descriptions of the observations, data reduction, and data analysis, see \citet{mm05,mms06,mms08}, and \citet{bmm09}.

{\it Time-Series Photometric Observations}:
We surveyed, over a timespan of 143 days, a region of $40 \times 40$ arc minutes centered on each cluster. Images were acquired at a frequency of once a night both before and after a central block of 16 full nights with observations at a frequency of once per hour. The data were obtained in the Johnson V band with the WIYN 0.9m telescope on Kitt Peak. Instrumental magnitudes were determined from point spread function photometry. Light curves were produced for more than 14,000 stars with $12 < V < 19.5$. Rotational periods were determined for 441 and 120 stars in the fields of M35 and M34, respectively (see Figure~\ref{m3534pbv}).

{\it The Spectroscopic Surveys}:
M35 and M34 have been included in the WIYN Open Cluster Study (WOCS; \citealt{mathieu00}) since 1997 and 2001, respectively. As part of WOCS, 1-3 radial-velocity measurements per year were obtained on both clusters within the 1-degree field of the WIYN 3.5m telescope, with the multi-object fiber positioner (Hydra) feeding a bench-mounted echelle spectrograph. Observations were done at central wavelengths of 5130\AA\ or 6385\AA\ with a wavelength range of $\sim$200\AA. From this spectral region with many narrow absorption lines, radial velocities were determined with a precision of $< 0.4~$km/s \citep{gmh+08,mbd+01}. Of the stars with measured rotational periods in M35 and M34, 203 and 56, respectively, are radial-velocity members of the clusters (dark blue symbols in Figure~\ref{m3534pbv}). Including photometric members (light blue symbols in Figure~\ref{m3534pbv}), the total numbers of stars with measured rotational periods in M35 and M34 are 310 and 79.

\begin{figure}[ht!]
\includegraphics[height=.33\textheight]{f1a.eps}
\includegraphics[height=.33\textheight]{f1b.eps}
\caption{The distribution of stellar rotation periods with $(B-V)$ color index for 310 members of M35 ({\it left}; \citep{mms08}) and 79 members of M34 ({\it right}).
Dark blue (black) plotting symbols are used for radial-velocity members and light blue (grey) for photometric members.}
\label{m3534pbv}
\end{figure}


\section{The color-period diagram}

Figure~\ref{m3534pbv} shows the rotational periods for members of M35 and M34 plotted against their dereddened $B-V$ colors. The coeval stars fall along two well-defined sequences representing two different rotational states. One sequence displays a clear correlation between rotation period and color, forming a diagonal band of stars whose periods increase with increasing color index (decreasing mass). The second sequence consists of rapidly rotating stars and shows little mass dependence. A small subset of stars is distributed between the two sequences. The distribution of stars in the color-period diagrams suggests that the rotational evolution is slow where we see the sequences and fast in the gap between them. Other areas of the color-period plane are either unlikely or ``forbidden''.


\section{The dependence of stellar rotation period on color}

For our purpose of determining the dependence of stellar rotation on stellar color, we can focus on the diagonal sequence of more slowly rotating stars in Figure~\ref{m3534pbv}. We can do so because surveys for stellar rotation in the older clusters M37 (550\,Myr) and the Hyades (625\,Myr) show that F, G, and K dwarfs spin down over a few hundred million years and converge onto this sequence \citep{hgp+08,rtl+87}.

Barnes (2003, 2007, and this volume) refers to the diagonal sequence as the Interface (I) sequence and proposes a function ($f(B-V)$) to represent it \citep{barnes07}:

\begin{equation}
P(t,B-V) = g(t) \times f(B-V)
\end{equation}

\noindent where

\begin{equation}
f(B-V) = a((B-V)-b)^{c}
\end{equation}

~\\
\noindent with $a = 0.77$ and $c = 0.60$. \citet{barnes07} fix $b$ at a value of 0.4, and determine $g(t) = t^{0.52}$.

From the method of gyrochronology \citep{barnes03a,barnes07}, the functional dependence between stellar color and rotation period ($f(B-V)$) directly affects the derived ages and will, if not accurately determined, introduce a systematic error. It is therefore important to constrain and test the color-rotation relation for stars on the I sequence as new data of sufficiently high quality become available. \citet{mms08} fit $f(B-V)$ as given in equation [5.2] to the I sequence stars in M35, leaving all 3 coefficients ($a$, $b$, $c$) as free parameters.
They obtain the same value of 0.77 for $a$, but a slightly different value of 0.55 for $c$. By leaving the translational term $b$ free, a value of 0.47 was found. This value for $b$ is interesting because it corresponds to the approximate $B-V$ color of F-type stars at the transition from a radiative to a convective envelope. This transition is also associated with the onset of effective magnetic wind braking \citep[e.g.][]{schatzman62}, and is known as the break in the Kraft curve \citep{kraft67}. The value of 0.47 for the $b$ coefficient therefore suggests that, for M35, the blue (high-mass) end of the I sequence begins at the break in the Kraft curve.

The I sequence in M34 is particularly well-defined and will be used to further constrain the dependence between rotation and color in a forthcoming paper (Meibom et al.\ 2009, in preparation).


\section{The dependence of stellar rotation on age}

With well-defined color-rotation relations (I sequences) for clusters of different ages, we are able to constrain the dependence of stellar rotation on age for stars of different masses.
Comparison of the rotation periods of F-, G-, and K-type I sequence dwarfs at different ages enables a direct test of the Skumanich relationship for early G dwarfs and for dwarfs of higher and lower masses.

Initial comparisons in \citet{mms08} between the rotation periods of G and K dwarfs on the I sequences in M35 and the Hyades suggest that the Skumanich time-dependence ($P_{rot} \propto t^{0.5}$) can account for the evolution in rotation periods between M35 and the Hyades for G dwarfs. However, the time-dependence of the spin-down of K dwarfs is different from, and slower than, Skumanich.
In a more in-depth analysis (preliminary results), Meibom et al. (2009; in preparation) calculate the mean rotation periods for late-F, G, early-K, and late-K I sequence dwarfs in M35, M34, NGC3532 (Barnes 2003; 300\,Myr), M37, and the Hyades. They find that the increase in the mean rotation period with age is consistent with Skumanich spin-down for the late-F and G dwarfs, whereas K dwarfs spin down significantly more slowly.
This deviation from Skumanich spin-down for K dwarfs suggests that the rotation period of late-type stars cannot be expressed as the product of separable functions of time and color (Eq. [5.1]). Skumanich spin-down was assumed for all (late-F through early-M) stars in \citet{barnes03a,barnes07} and in \citet{kawaler89}.

Eventually, when rotation periods of sufficient quality are available for a larger number of clusters, the effects on the rotational evolution of other stellar parameters, e.g. metallicity, and of the cluster environment should be considered. Irwin, this volume, gives a more complete list of published rotation data in clusters.


\section{The Kepler mission - a unique opportunity}

At the present time, the Hyades represent the oldest coeval population of stars with measured rotation periods. Measurements of rotation periods for older late-type dwarfs are needed to properly constrain the dependence of stellar rotation on age and mass, and to calibrate the technique of gyrochronology. Figure~\ref{3d_car} shows a schematic of the surface in the 3-dimensional space of rotation, color, and age. At the present time this surface is defined solely by color-period data in young clusters and by the Sun.
The solid black curves represent the ages and color ranges of FGK dwarfs in M35, M34, NGC3532, M37, and the Hyades. The color and age of the Sun are marked as a solid dot. The figure clearly demonstrates the need for observations of stellar rotation periods beyond the age of the Hyades.

\begin{figure}[ht!]
\begin{center}
\includegraphics[width=5.5in]{f2.eps}
\caption{
A schematic of the (presumed) empirical surface in the 3-dimensional parameter space of stellar age (Myr), color, and rotation period.
The surface is currently defined {\it only} by stars in young open clusters (black solid lines), and by the Sun (black dot).
The dashed blue lines mark the ages and approximate color ranges of FGK dwarfs in the 4 open clusters within the Kepler field.}
\label{3d_car}
\end{center}
\end{figure}

The lack of periods for older stars (with the exception of the Sun) reflects the challenging task of measuring, from the ground, the photometric variability of slowly rotating stars with ages of $\sim$1\,Gyr or more.
However, the Kepler space telescope (scheduled for a 2009 launch) will provide photometric measurements with a precision, cadence, and duration sufficient to measure stellar rotation periods from brightness modulations for stars as old as, and older than, the Sun. Four open clusters are located within the Kepler target region: NGC\,6866 ($\sim$0.5\,Gyr), NGC\,6811 ($\sim$1\,Gyr), NGC\,6819 ($\sim$2.5\,Gyr), and NGC\,6791 ($\sim$10\,Gyr).
With Kepler we therefore have a unique opportunity to extend the age-rotation-color relationships beyond the age of the Hyades and the Sun. The dashed blue curves in Figure~\ref{3d_car} mark the ages and approximate color ranges of FGK dwarfs in the 4 clusters.


\section{Introduction}
Optical Character Recognition (OCR) is the process of converting printed text into a computer-comprehensible form. Images are not directly comprehensible by computers; rather, they are matrices of numbers with a hidden structure in them. We therefore need algorithms that can extract this structure and convert it into a form that can later be indexed, stored and searched by a computer. OCR algorithms are required for detecting and recognizing natural language text in images. OCR algorithms are the basis for many advanced applications such as record automation, advanced scanning, cheque verification and advanced data entry methods. Other major applications of OCR include converting legacy literature into editable and searchable form; demand for this is very high for languages with a rich literary history, since it will not only help in preserving the literature but will also help the visually impaired through advanced reading
However this advancement is not evenly distributed over all languages. As most advanced OCR systems are designed to work only for Latin based languages with real time performance and accuracy higher than 99\\% ~\\cite{han2006two}. For instance, OCR systems for English language are even able to read text from natural scene images with same accuracy as of printed text\n ~\\cite{jaderberg2016textInTheWild}. However Arabic script based languages have not seen such advancements. Arabic script is a cursive script with many languages including Urdu, Persian and Pushto based upon it. There are many fonts available for Arabic script including Nastaliq, Naskh and Kofi ~\\cite{urduFontServer}. A few commercial OCR applications are available\nfor Arabic script based languages but they lack in performance \\cite{tesseractOcr}.\\\\\nUrdu is the super set of Arabic and Persian languages with some additional characters. Due to these additional characters, designing an OCR for Urdu is even more complex than Arabic and Persian. Also any advancement in Urdu OCR will not only be beneficial for Urdu language but will also have its direct impact on Arabic, Persian and other Arabic script based languages. \\\\\nAmong the fonts used to print Urdu text, Nastaliq and Naskh are the most common and there have been some successful attempts to build OCR specific to these fonts ~\\cite{javed2010segmentation, naz2017urdu}. However there are more then $400$ fonts registered for Urdu ~\\cite{urduFontServer} and to the best of our knowledge there does not exist any OCR system that can recognize Urdu text written in all these fonts. One of the major reasons for that is huge inter and intra-class variability found in Urdu characters across fonts. Specifically, Urdu has $39$ letters and is written cursively from right to left. Letters\nwithin a word are joined to form a sub-word called a ligature. Unlike English and other Latin based languages, letters in Urdu are not constrained to only a single shape. Their shape changes with their position within the ligature. Number of shapes per letter vary from one to four. This inter-class variability of Urdu has been one of the main challenges in the development of a robust OCR system. There are almost $18,569$ valid ligatures in Urdu which are to be recognized as compared to only $52$ characters (excluding numbers and\npunctuation) in English. Thus designing a font independent Urdu OCR system that can model these intra-class variability becomes the second most important challenge. As mentioned above, Urdu has more than $400$ registered fonts. All these fonts have extreme variations in their writing styles and that also makes it difficult to design an OCR system capable of recognizing Urdu text across all fonts. Figure~\\ref{fig:table} illustrates an example ligature in $72$ different fonts. \\\\\nAnother important challenge in designing robust OCR is the writing style. As Urdu is commonly written in Nastaliq style, in which scripts are written diagonally with no fixed baseline with may be overlapping ligatures due to lack of any standard. Also, Urdu is a bidirectional language where normal text is written from right to left, while numbers are written from left to right. 
All these challenges make it hard to build a robust OCR system, and they are the reason why Urdu OCR systems are less mature in performance than those of other languages.\\
To address the challenges mentioned above and to capture the strong inter- and intra-class variability across available Urdu fonts, we have (i) designed a pipeline for synthetically generating textual images of the Urdu lexicon in different fonts, (ii) generated a large-scale Urdu text recognition data set covering the complete Urdu lexicon ($18,569$ ligatures) in $256$ fonts, comprising more than $4$ million images, and (iii) designed, trained and evaluated a deep Convolutional Neural Network for font-independent Urdu text recognition with $86$\% accuracy across the complete Urdu lexicon. It is important to mention that our data generation pipeline is language independent and can be used to generate textual images for any other (Arabic script based) language. This data set captures the inter- and intra-class variability by including a large number of fonts, and any system trained on it should be able to recognize Urdu text independently of the font. Finally, our trained CNN can be transfer-learned to achieve superior performance on font-specific tasks, is able to recognize new unseen fonts, and achieves comparable performance on the current benchmark data set, the Urdu Printed Text Image Database (UPTI)~\cite{sabbour2013segmentation}.
The following section briefly explains the data set creation steps.


\section{Synthetic Data Generation}
To train a recognition system for detecting Urdu text irrespective of font and size, it is essential to have access to a data set with multiple fonts and enough structural variation. While there are some publicly available data sets, such as UPTI~\cite{sabbour2013segmentation}, they lack font variation and comprise no more than one or two fonts. This lack of large-scale, font-rich recognition data sets has forced previous works to focus on font-specific recognition systems~\cite{sabbour2013segmentation, nastaliq2013offline, naz2016urdu}.\\
One of the reasons for this lack of font-rich data sets is the time-consuming process of labeling such a data set, which is also very expensive in terms of man hours. Our proposed solution to this problem is to synthetically generate a data set with all the required properties. This not only avoids the laborious work required to label the data set, but can also be generalized and applied to similar problems. A similar approach has been successfully used by~\cite{jaderberg2016textInTheWild} for generating an English text-recognition data set, by~\cite{DBLP:journals/corr/GoodfellowBIAS13} for recognizing house numbers from Google Street View images, and by~\cite{DBLP:journals/corr/JaderbergSVZ14} for text recognition in natural scenes. Following the success of synthetic data in the aforementioned scenarios, we propose a synthetic data generation pipeline for font-independent printed Urdu text recognition. The following steps illustrate our proposed pipeline.

\subsection{Font acquisition}
Urdu has two most frequently used fonts, Nastaleeq and Naskh.
\begin{itemize}
 \item Nastaleeq is a Persian-based calligraphic script developed in the $14$th century; alongside Urdu, it is dominantly used for writing the Kashmiri, Punjabi and Pashto languages.
 \item Naskh is more dominant in the Arabic language but is also widely used for Urdu and Pashto.
\end{itemize}
The popularity of these two fonts has forced most previous work to focus on either the Nastaliq or the Naskh font. But these are not the only fonts used for printing Urdu text; there are more than $400$ fonts publicly available on the Urdu font web server~\cite{urduFontServer}. Since our approach focuses on making a font-independent data set, the first step of our pipeline was to acquire a font database with a large number of Urdu fonts. The Urdu font web server~\cite{urduFontServer} is the largest public repository of Urdu fonts. To download all these fonts we developed an automated web scraper, which resulted in a database of approximately $400$ fonts. The next step of the pipeline was to validate that all the acquired fonts support the same Unicode set.

\subsection{Font filtering}
Unicode provides full support for the Urdu lexicon, but some problems are still unresolved~\cite{ijaz2007Urducorpus}. For example, some characters have multiple Unicode code points, and for some characters that are connected in nature, the Unicode table provides two different versions, one connected and the other non-connected. Due to these discrepancies, different fonts support different Unicode sets. For the next component of our pipeline to work, only one Unicode value per symbol could be supported. We therefore selected the Unicode set supported by the maximum number of fonts and removed all the other fonts. This reduced our font database from $400$ to $256$ fonts.

\subsection{Ligature acquisition}
After the selection of fonts, the next step was to acquire an Urdu corpus and decide the level of segmentation, i.e. word, ligature or character. The level of segmentation is directly related to the methodology used for recognition. Traditional OCR systems work on character-level segmentation. The problem with this approach is that, during recognition, each word has to be split into characters. For Latin script based languages this is a comparatively easy task, since the characters are written separately. For Arabic script based languages, however, segmentation becomes the bottleneck~\cite{javed2010segmentation}. The alternative approach is to segment the words at the ligature level. There have been some successful attempts at Urdu OCR using ligature-based segmentation~\cite{sabbour2013segmentation, javed2010segmentation}, and it also provides a good trade-off between the segmentation complexity of the character-based approach and the huge vocabulary of the word-based approach. Based on these observations, we decided to use ligature-based segmentation for our pipeline.\\
The next step was to acquire a corpus with all possible Urdu ligatures. For this we used a publicly available corpus from the Center for Language Engineering (CLE). This corpus has a comprehensive list of Urdu ligatures acquired from over $6$ million lines of text from different sources.

\subsection{Text Rendering}
For Urdu image corpus generation we created a rendering engine that can generate images of all Urdu ligatures in all the selected fonts. There are many existing rendering solutions available in many programming languages, but most of them support only Latin-based languages and render text from left to right. Due to the position-dependent connected nature of Urdu, rendering it from left to right would change the contextual forms of the characters. We therefore had to adapt the existing solutions to Arabic script based languages.
\n\n\\subsection{Corpus generation}\nThe final phase of the pipeline was to generate the images. A gray-scale image of $160\\times160$ pixels was generated for each ligature in each font. Since the total number of ligatures in the Urdu language is $18,569$ and the selected fonts were $256$, we ended up with a total of almost $4.75$ million images. We also generated some variations of the data set for experimentation, as follows:\n\\begin{itemize}\n \\item Since the full data set occupies almost $120$GB, a smaller version with each image of size $80\\times80$ was generated.\n \\item A binarized version of the data set, with images thresholded to contain only $0$ or $1$ values, was also generated for evaluation purposes.\n \\item Another data set containing only the $2000$ most frequently occurring ligatures was also generated.\n\\end{itemize}\nWe did not augment the data set, since augmentation can be handled during training in most deep learning libraries ~\\cite{tensorflow2015-whitepaper, NEURIPS2019_9015}. Figure \\ref{fig:table} illustrates a ligature in $256$ different fonts.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{ligature_table.png}\n \\caption{An example ligature generated in $256$ different fonts.}\n \\label{fig:table}\n\\end{figure}
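\n\nThe generation loop itself is straightforward; the sketch below assumes the \\texttt{render\\_ligature} helper from the previous section and hypothetical \\texttt{fonts} and \\texttt{ligatures} lists holding the $256$ font paths and the $18,569$ ligature strings.\n\\begin{verbatim}\nimport os\n\n# Render every ligature in every font; directory and file names\n# encode the (font, ligature) pair so labels can be recovered later.\nfor font_id, font_path in enumerate(fonts):          # 256 fonts\n    out_dir = os.path.join('data', str(font_id))\n    os.makedirs(out_dir, exist_ok=True)\n    for lig_id, ligature in enumerate(ligatures):    # 18,569 ligatures\n        img = render_ligature(ligature, font_path, size=160)\n        img.save(os.path.join(out_dir, str(lig_id) + '.png'))\n\\end{verbatim}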
\n\n\\section{Recognition Algorithm}\nText recognition is an active area of research and there exist many solutions, ranging from legacy techniques such as pattern matching \\cite{nawaz2009optical} and Hidden Markov Models (HMM) \\cite{javed2013segmentation} to contemporary techniques such as Neural Networks \\cite{jaderberg2016textInTheWild}. While methods like HMMs have been widely used in recognition tasks, Deep Neural Networks (DNNs) have outperformed all previous techniques on many Computer Vision tasks \\cite{krizhevsky2012imagenet, gupta2016synthetic, long2015fully}. Following the success of DNNs on similar tasks, we have based our system on a DNN.\n\n\\subsection{Architecture Selection}\nDeep Neural Networks come in many types, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and Fully Convolutional Neural Networks (FCN). These types have different architectures (arrangements of neurons) and are designed for different tasks. Among them, CNNs are most widely used for Computer Vision tasks. The main reason behind the success of CNNs in Computer Vision is the Convolutional layer, the core component of a CNN. The Convolutional layer is inspired by the convolution operator widely used in Digital Image Processing: it holds a matrix of learnable parameters, which are convolved over the input to form the output. During training, these parameters are updated such that they extract useful information from the input at each level. \\\\\nOver the years CNNs have been successfully used in many Computer Vision tasks including Classification ~\\cite{krizhevsky2012imagenet}, Localization ~\\cite{gupta2016synthetic} and Segmentation ~\\cite{long2015fully}. Inspired by the success of CNNs on many large-scale classification tasks ~\\cite{krizhevsky2012imagenet}, we designed a CNN based recognition system for Urdu text recognition.\n\n\\subsection{Base Architecture}\nThere exists a diverse range of architecture variations in CNNs ~\\cite{jaderberg2016textInTheWild, gupta2016synthetic, long2015fully, girshick2015fast}, based on the nature of the problem being solved (e.g. Localization, Segmentation or Detection). Architectures also vary on the basis of performance: some are optimized for speed ~\\cite{redmon2016you} while others are optimized for memory consumption ~\\cite{iandola2016squeezenet}. We have based our architecture on ResNet-18 ~\\cite{he2016deep}. The main reasons behind this choice are as follows:\n\\begin{itemize}\n \\item Similar to our system, ResNet is designed for classification.\n \\item It has been trained on a large-scale data set (more than $1$ million images).\n \\item It has been trained on a large number ($1000$) of categories. \n\\end{itemize}\nResNet-18 has a total of $18$ layers, with the last fully-connected layer having $1000$ neurons for classification on ImageNet ~\\cite{krizhevsky2012imagenet} ($1000$ classes). In our case there are $18,569$ classes, so we modified the last fully-connected classification layer, as sketched below. Further details on the modifications to the network and the training process are explained in Section ~\\ref{subsec:incrementalLearning}.
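\n\nA minimal sketch of this modification, assuming the \\texttt{torchvision} implementation of ResNet-18 as a stand-in for our ResNet-18-inspired network:\n\\begin{verbatim}\nimport torch.nn as nn\nfrom torchvision.models import resnet18\n\nNUM_CLASSES = 18569  # total number of valid Urdu ligatures\n\nmodel = resnet18(weights=None)  # randomly initialized backbone\n# Single-channel gray-scale input instead of 3-channel RGB.\nmodel.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,\n                        padding=3, bias=False)\n# Replace the 1000-way ImageNet head with an 18,569-way classifier.\nmodel.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)\n\\end{verbatim}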
\n\n\\subsection{Loss Function}\nOnce we had designed our base architecture, the next decision was the selection of the loss function. ResNet-18 uses a Softmax Cross Entropy loss function, which has proved to be very effective in classification tasks \\cite{jaderberg2016textInTheWild}, so it was our initial choice. But in our initial experiments we faced a vanishing gradient problem, which hindered network convergence. We have a total of $18,569$ classes (the total number of valid ligatures in the Urdu language), while the most challenging classification task to date, the Imagenet Large-scale Visual Recognition Challenge (ILSVRC), has $1000$ categories. Using a flat N-way classifier in our network caused the initial probabilities to be very low; hence the gradients were dying, slowing down the learning process. Although there have been some attempts to solve this problem, most of the proposed solutions are focused on Natural Language Processing (NLP) ~\\cite{mikolov2011empirical}. The most closely related solution was \\cite{jaderberg2016textInTheWild}, based on an incremental learning procedure, which is explained in the next section.\n\n\\subsection{Incremental Learning}\n\\label{subsec:incrementalLearning}\nIn practice very few networks are trained from scratch, since doing so requires a relatively large data set and huge computation power, and it is also harder to train a complex network from scratch, e.g. modern CNN architectures require weeks of training on high-end GPU clusters \\cite{szegedy2017inception}. Instead, it is common practice to take a network pre-trained on a large-scale data set such as Imagenet and tune it for the problem at hand. There are many ways to use such a model; the most common are (i) using the pre-trained CNN as a feature extractor by keeping its weights fixed, replacing the last fully-connected layer (which has $1000$ outputs) with an appropriate classifier, and training only that classifier, or (ii) fine-tuning the CNN by replacing the last fully-connected layer with a randomly initialized layer with the required number of outputs and then training the whole network using back-propagation. The major benefit of fine-tuning instead of training from scratch is that instead of starting from randomly initialized weights, one starts with weights learned from a large-scale data set, which drastically reduces the training time, in some cases by a factor of $10$ or $100$, i.e. a network which took three weeks to reach a certain accuracy may now require only a few hours. The constraint on fine-tuning is that the data set used for the pre-trained network must be similar to the target data set, otherwise it will not be very helpful. Since we have $160\\times160$ gray-scale images while Imagenet has $256\\times256$ RGB images, we cannot use any pre-trained model from Imagenet. Among other pre-trained models, the most closely related one was trained on the MNIST data set with $28\\times28$ images and $10$ categories, but the small image size and small number of categories also make it inappropriate to use.\\\\\nAs mentioned earlier, the main benefit of fine-tuning or transfer learning is replacing randomly initialized weights with weights learned on a similar data set. Keeping this point in mind, we decided to pre-train our own network on a subsample of our data set. For this purpose we selected the $400$ easiest classes (Urdu ligatures consist of $1$-$8$ characters; we sorted them by the number of characters and selected the initial $400$). Choosing this sub-sample solves both issues: (i) the number of classes is much smaller than in the complete data set, and (ii) it is part of the original data and hence is similar to the complete data. We did all the hyper-parameter tuning and model selection experimentation on this sub-sample and then trained the network to an acceptable accuracy ($\\sim80$\\%). It is worth mentioning that the whole purpose of this exercise was to obtain initial weights, so we did not include any regularization at this point. \\\\\nAfter training the network on $400$ classes, we repeated the step by replacing the last fully-connected layer with one of $2000$ neurons instead of $400$ and trained on the initial $2000$ classes. After that we finally replaced the last fully-connected layer with one of $18,569$ neurons and trained on the complete data set; a sketch of this widening loop is given below.\\\\\nOur final architecture is a Convolutional Neural Network with $18$ Convolutional and $2$ fully-connected layers. The network is inspired by ResNet-18, but we also tried to keep it efficient in terms of total parameters, forward-pass time and total memory used. L$2$ regularization is used to avoid over-fitting.
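\n\nThe incremental widening can be sketched as follows, re-using the backbone weights at every stage; \\texttt{train} and \\texttt{subset\\_with\\_first} are hypothetical helpers standing in for our training loop and class-subset selection.\n\\begin{verbatim}\nimport torch.nn as nn\n\ndef widen_classifier(model, num_classes):\n    # Keep all convolutional weights; re-initialize only the head.\n    model.fc = nn.Linear(model.fc.in_features, num_classes)\n    return model\n\nfor num_classes in (400, 2000, 18569):\n    model = widen_classifier(model, num_classes)\n    train(model, subset_with_first(num_classes))  # hypothetical\n\\end{verbatim}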
\n\n\\section{Data Splitting Criteria}\nThe next decision was to come up with an appropriate split of the data. Usually the splitting criterion used is a random $7:2:1$ split into training, validation and test sets. In our case, however, such a random split can result in biased sets, because it does not ensure that each font has the same representation in the validation and test sets. Moreover, a random split would let us train our system and evaluate its performance on images of the same fonts (with different geometric variations), but would not give us any empirical evidence on new fonts (fonts that are not part of the training set). To solve these issues we came up with the following splitting mechanism:\n\\begin{itemize}\n \\item First we randomly split the data with a ratio of $75:25$ on the basis of fonts. That means all the images belonging to $56$ randomly selected fonts were separated into an Unseen Test set. This solves our second problem, i.e. evaluating the performance on unseen fonts.\n \\item The remaining data ($200$ fonts) was then split with a ratio of $80:10:10$ into training, validation and test sets.\n\\end{itemize}\nFor all the experiments, hyper-parameters were tuned using the validation set, while the final results are reported on the test set.\n\\section{Evaluation Measure}\nThis work is focused on text classification, and for classification tasks the most commonly used evaluation metrics are F-score and Accuracy ~\\cite{naz2016arabic, naz2014optical, ahmad2007urdu}.\n\\subsection{Accuracy}\nAccuracy is the ratio of correctly classified examples to the total number of examples, as shown in Eq.~\\ref{eq_acc}:\n\\begin{equation}\n\\label{eq_acc}\n\t{\\displaystyle Accuracy = {\\frac{\\textrm{correct predictions}}{\\textrm{total predictions}}}}\n\\end{equation}\nThe maximum possible value of Accuracy is $1$ and the lowest possible value is $0$. In classification tasks, accuracy is commonly used when all the classes have the same number of examples. For cases where the classes are imbalanced, F-score is preferred.\n\\subsection{F-score}\nThe F-score is the harmonic mean of precision and recall. Precision is the ratio of the examples correctly classified into a category to the total examples classified into that category, while recall is the ratio of the examples correctly classified into a category to the total examples in that category; see Eq.~\\ref{eq:fScore}.\n\n\\begin{equation}\n\\label{eq:fScore}\n\tF1 = \\frac{2 \\cdot precision\\cdot recall}{precision+ recall}\n\\end{equation}\n\n\n\\section{Experiments}\nDue to the scale of our data set ($\\sim120$GB, $4$ million images), choosing the architecture and tuning all the hyper-parameters on the complete data set was not feasible. Hence we started our experiments with the initial $400$ classes. We first trained different variants of a ResNet-18 inspired network on these $400$ classes to investigate the effects of pooling, the number of Convolutional and fully-connected layers, dropout and the learning rate. After selecting and training our initial model on $400$ classes, we fine-tuned this model to $2000$ and then $18,569$ classes. The following sections explain the implementation details of all the stages.\n\n\\subsection{Stage-I (400 Classes)}\nAs explained earlier, we have used an incremental learning methodology to train our network. Hence our initial task was to design and train a DNN that performs well on the first $400$ classes. For this purpose we split the images belonging to the first $400$ classes (from the training set) into training ($70$\\%) and validation ($30$\\%) parts, and then tuned our hyper-parameters, such as the number of Convolutional, fully-connected and Pooling layers, the number of neurons in the fully-connected and Convolutional layers, and the placement of the different layers in the network. At this stage the main purpose was to design a network that gives a good classification score on our initial $400$ classes while keeping the computation as low as possible. All the weights were initialized using Xavier initialization \\cite{pmlr-v9-glorot10a} and the model was trained using the Adam optimizer \\cite{kingma2014adam}; a sketch of this configuration is given below. The model achieved a classification accuracy of $90\\%$ on the validation set. Since this is not our final model and we have to fine-tune it for $2000$ and then all classes, we did not introduce any type of regularization at this stage.
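\n\nA minimal sketch of the Stage-I training configuration, assuming PyTorch; the learning rate shown is an assumption, as only the Stage-III starting rate is reported explicitly below.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\ndef init_weights(m):\n    # Xavier initialization for all learnable layers.\n    if isinstance(m, (nn.Conv2d, nn.Linear)):\n        nn.init.xavier_uniform_(m.weight)\n        if m.bias is not None:\n            nn.init.zeros_(m.bias)\n\nmodel.apply(init_weights)  # 'model' from the base-architecture sketch\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed\ncriterion = nn.CrossEntropyLoss()  # softmax cross-entropy loss\n\\end{verbatim}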
\n\n\\subsection{Stage-II (2000 Classes)}\nOnce our initial model on $400$ classes was trained, the next step was to train a model for $2000$ classes. Using the transfer learning technique previously used in Face Recognition \\cite{sun2014deep} and Object Detection \\cite{shin2016deep}, we first created a model identical to the one in Stage-I except for the last fully-connected layer. The last fully-connected layer of our new model had $2000$ neurons instead of $400$ due to the increase in the number of classes. We then initialized all layers except the last fully-connected layer with the weights learned in the previous stage. The last layer was initialized with random weights using Xavier initialization \\cite{pmlr-v9-glorot10a}. After that, the model was trained on $70$\\% of the images belonging to the first $2000$ classes of the training set. We also tried some variants of the base model in which new convolutional and pooling layers were added. Pooling has two significant effects on the performance of a CNN: (i) it reduces the memory footprint, since after pooling the feature maps are reduced by the stride size, and (ii) it makes the model invariant to small shifts and distortions in the input, since it merges semantically similar features \\cite{lecun2015deep}. Our final model on $2000$ classes achieved $89\\%$ accuracy on the validation set.\\\\\n\n\\subsection{Stage-III (All Classes)}\nThe final stage was to train the model on the complete data set with $18,569$ classes. Similar to Stage-II, we initialized all layers except the last fully-connected layer with the weights learned in Stage-II. After that, this model was trained on the complete training set. During training the data was augmented with geometric variations. \nWhile training deep learning models using Adam, appropriate scheduling of the learning rate can lead to faster convergence and better performance \\cite{senior2013empirical}. It is common practice to halve the learning rate when the learning curve plateaus \\cite{zhang2017shufflenet}; we used a similar approach while training, starting with a learning rate of $0.001$ and reducing it by a factor of $0.5$ at each plateau, as sketched below. After completion, the final accuracy on the validation set was $85.3$\\%.
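\n\nA sketch of this plateau-based schedule, assuming PyTorch's \\texttt{ReduceLROnPlateau} scheduler; \\texttt{train\\_one\\_epoch} is a hypothetical helper returning the validation loss.\n\\begin{verbatim}\nimport torch\n\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)\nscheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(\n    optimizer, mode='min', factor=0.5)\nfor epoch in range(num_epochs):\n    val_loss = train_one_epoch(model, optimizer)  # hypothetical\n    scheduler.step(val_loss)  # halve the rate when loss plateaus\n\\end{verbatim}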
\n\n\\subsection{Font Specific Fine-tuning}\nOne of the characteristics of a DNN trained on a large-scale data set is its capability to be fine-tuned to similar data sets; this approach has been successfully used to fine-tune models trained on Imagenet \\cite{naz2016urdu} for tasks like Semantic Segmentation \\cite{long2015fully} and object detection \\cite{ren2015faster}. It has proved to be very useful, especially when the amount of labeled data is limited \\cite{lecun2015deep}. With fine-tuning, convergence is also faster compared to training the same model from scratch.\\\\ \nIn our next experiment we evaluated the performance of fine-tuning our trained model to one of the unseen fonts. This can be very useful in scenarios where the task is to get the best possible performance on a specific font instead of generalizing over all fonts. As mentioned in the data splitting section, we kept an Unseen set containing images generated in $56$ fonts, which are not used in training or validation. For fine-tuning we chose a font from this set. The images belonging to this font were split into training ($70$\\%) and test ($30$\\%) sets. We trained the model with a learning rate of $0.00005$ for $5$ epochs, and the model was able to achieve an accuracy of $95.01\\%$. The same model, when trained from scratch, was not able to achieve the same performance even after $15$ epochs. This shows that the model can be used for similar tasks and can also achieve better performance if the target is a specific subset of fonts.\n\n\\section{Conclusion}\nIn this paper we have presented a large-scale multi-font printed Urdu text recognition data set, comprising $4$ million images of the complete Urdu lexicon in $256$ fonts. The data set has enough inter- and intra-class variation to be used for font-independent Urdu text recognition. We have also trained a ResNet-18 inspired CNN on the complete data set, with a final accuracy of $85\\%$. Finally, the model can be fine-tuned on a specific font to achieve superior font-specific performance. The data set is publicly released\\footnote{The data set can be downloaded from \\url{https:\/\/github.com\/AtiqueUrRehman\/qaida}}, along with the trained models, for further research.\n\\printbibliography\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe spectral dynamics of single emitters can broadly be categorized as fluctuations and relaxation. Fluctuations are temporally stochastic variations around the equilibrium configuration of any chemical system interacting with its environment at non-zero temperature. Relaxation refers to the system's return to the ground state equilibrium after preparation of a non-equilibrium state, for example, through a laser-driven creation of an excited state population. For single optical emitters, fluctuations can manifest as spectral diffusion, which is spectral jumping occurring from nanoseconds to seconds that is reflective of the microscopic interaction of the bath with the excited state.\\cite{Kettner1994,Ambrose1991} Relaxation occurs via irreversible phonon- or spin-mediated dissipation or spontaneous emission of photons, processes typically observed from picoseconds to microseconds.\\cite{Masia2012}\\\\\nSimultaneous measurement of the full relaxation and fluctuation dynamics for single emitters requires a technique with high spectral and temporal resolution and an additionally high temporal dynamic range from picoseconds to seconds. No single such technique exists and different approaches present their own strengths and weaknesses. Streak-cameras can resolve the spectral evolution along the photoluminescence lifetime trajectory with picosecond resolution, but fail to resolve any dynamics beyond a few nanoseconds and typically lack single-molecule sensitivity.\\cite{ReschGegner2008} CCD-based single-molecule emission spectroscopy can have millisecond temporal resolution, especially if auto-correlation of individual spectra recorded at high sweep-rates is performed, but fails to resolve even faster fluctuations and does not discriminate spectral dynamics along the lifetime trajectory.\\cite{Plakhotnik1998,Plakhotnik1999}\nHong-Ou-Mandel (HOM) spectroscopy has been used to measure the coherence of single photons through two-photon interference. 
HOM can resolve energy fluctuations through a decrease in photon-coalescence efficiency on fast (picosecond to nanosecond) timescales, and lifetime-resolved photon-presorting is straightforward.\\cite{Kuhlmann2015, Thoma2016} However, HOM spectroscopy is exclusively suitable for single-photon emitters at low temperatures as it requires photon-coherences near the transform limit.\nMoreover, the dynamic range of HOM is practically limited to a few nanoseconds of delay-time between the two interfering photons, rendering HOM of limited utility for measuring fluctuations occurring over many orders of magnitude in time. Cross-correlating spectrally-filtered photons provides higher temporal dynamic range, but is limited by the finite bandwidth of optical filters, restricting the technique to broad lineshapes and spectral diffusion with large energetic spread.\\cite{Sallen2010}\\\\ Fourier spectroscopy is not bound by limitations in spectral bandwidth or two-photon delay-times as each photon self-interferes. As a result, Fourier spectroscopy can readily be married with photon correlation to provide spectral readout with high temporal dynamic range and with arbitrarily high temporal resolution only limited by photon shot-noise.\\cite{Brokmann2006} This technique, Photon Correlation Fourier Spectroscopy (PCFS), has now been established as a powerful tool for the study of optical dephasing and spectral fluctuations of single emitters at low and room temperatures.\\cite{Coolen2008a,Utzat2019,Cui2013a} Despite the success in characterizing spectral fluctuations, so far PCFS has not been able to resolve any spectral changes associated with relaxation of the system back to the ground state, for example phonon-mediated relaxation or energy transfer between different states.\\\\\nHere, we propose a pulsed excitation-laser analog of PCFS that readily extracts relaxation and fluctuation dynamics from single emitters. The proposed technique is shown in Figure \\ref{fig:Fig1}. In conventional PCFS, all photons emitted after the continuous-wave laser excitation are used to compile the spectral correlation (a), the auto-correlation of the spectrum compiled from photon pairs with a temporal separation of $\\tau$ along the macrotime axis $t$ of the experiment and an energy difference of $\\zeta$. In lifetime-resolved PCFS, photon-pairs are additionally correlated in the microtime $T$ after pulsed laser excitation. Specifically, the photons are binned according to their microtime, sometimes referred to as the TCSPC channel, and spectral correlations are calculated using these microtime-separated photons (b).\nThe technique can be readily implemented using picosecond single-photon counting equipment as shown in Fig.~\\ref{fig:Fig1}c and requires high-throughput post-processing photon-correlation analysis. We show through numerical simulation that this lifetime-resolved PCFS technique can separate the lineshapes and spectral diffusion dynamics of systems with more than one emissive state as long as the relative weights of the emission from different states change over the course of the photoluminescence lifetime. 
More broadly, lifetime-resolved PCFS can in principle extract spectral fluctuations and relaxation with temporal resolutions only limited by the instrument response function (IRF) of the single-photon counting modules.\n\n\n\\section{Theoretical Derivation and Numerical Simulation}\\label{theory}\n\\subsection{Lifetime-resolved PCFS}\nIn PCFS, the interferometer path-length difference is adjusted to discrete positions inside the coherence length of the emission and periodically dithered on second timescales and over a multiple of the emitter center wavelength.\\cite{Brokmann2006} The dither introduces anti-correlations in the intensity cross-correlation functions of the output arms that encode the degree of spectral coherence at a given center position. The lineshape dynamics is thus encoded in the intensity correlations as a function of the time-separation between photons $\\tau$. We show in the Supplementary Information that the PCFS equations can straightforwardly be expanded to include spectral dynamics along the microtime $T$. The central observable in lifetime-resolved PCFS for a spectrum $s(\\omega,t,T)$ dependent on the macrotime $t$ and the microtime $T$ is given by the spectral correlation $p(\\zeta,\\tau,T)$ as\n\n\\begin{equation}\\label{spec:corr_central1}\np(\\zeta,\\tau,T)=\\langle\\int_{-\\infty}^{\\infty}s(\\omega,t,T)s(\\omega+\\zeta,t+\\tau,T) d\\omega\\rangle, \\end{equation}\nwhere $\\langle...\\rangle$ represents the time average. Equation \\ref{spec:corr_central1} describes the central observable in lifetime-resolved PCFS and can be understood intuitively as a histogram of photon-pairs with a shared microtime $T$, a macrotime separation of $\\tau$, and an energy separation $\\zeta$. The form and interpretation of $p(\\zeta,\\tau,T)$ depend on the dynamics of the emissive system and will be discussed in the following sections.\\\\\nWe first discuss the general form of the spectral correlation for a system undergoing spectral diffusion in section \\ref{Spec_diff}. We then discuss two universal systems that map onto many specific real-world scenarios. First, we consider a system of two uncoupled and lifetime-distinct radiating dipoles undergoing uncorrelated spectral diffusion in section \\ref{static doublet_Gauss}. Second, we consider a system of two coupled radiating dipoles subject to population exchange and correlated spectral diffusion in section \\ref{static doublet_Gauss_and_relax}.
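\n\nBefore turning to these cases, the pair-histogram interpretation of equation \\ref{spec:corr_central1} can be made concrete with a conceptual sketch. Each photon is assumed here to carry a notional energy $\\omega$; in the actual experiment the energy information is encoded interferometrically rather than measured photon by photon, so the snippet is purely illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef spectral_correlation(photons, T_window, tau_bins, zeta_bins):\n    # photons: list of (t, T, omega) records with macrotime t,\n    # microtime T and notional photon energy omega.\n    sel = [p for p in photons if T_window[0] <= p[1] < T_window[1]]\n    taus, zetas = [], []\n    for i in range(len(sel)):\n        for j in range(i + 1, len(sel)):\n            taus.append(abs(sel[j][0] - sel[i][0]))   # tau\n            zetas.append(sel[j][2] - sel[i][2])       # zeta\n    p_hist, _, _ = np.histogram2d(taus, zetas,\n                                  bins=[tau_bins, zeta_bins])\n    return p_hist  # histogram estimate of p(zeta, tau) in T_window\n\\end{verbatim}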
\n\n\\subsection{The effect of spectral fluctuations on the spectral correlation}\\label{Spec_diff}\nWe consider equation \\ref{spec:corr_central1} for spectral fluctuations $\\delta\\omega(t,T)$ that occur along the macrotime axis of the experiment around the center frequency $\\omega_{0}$ of a spectrum. We can write for the spectrum $s(\\omega,t,T)=s(\\omega,T) \\otimes \\delta(\\omega-\\delta\\omega(t,T))$, where $\\otimes$ is the convolution, $s(\\omega,T)$ the undiffused spectrum, and $\\delta\\omega(t,T)$ the time-dependent shift from the center wavelength. Spectral fluctuations can be characterized by the correlation function $C(\\tau)=\\langle\\delta\\omega(t,T)\\delta\\omega(t+\\tau, T)\\rangle$. The canonical form of any spectral correlation can then be recast as\n\n\\begin{equation}\\label{spec_corr_fluc}\n\\begin{split}\np(\\zeta,\\tau,T)=\\langle\\int_{-\\infty}^{\\infty} s(\\omega,t,T) s(\\omega +\\zeta,t+\\tau,T) d\\omega\\rangle\\\\\n=C(\\tau)p(\\zeta,\\tau \\rightarrow 0,T)+[1-C(\\tau)]p(\\zeta,\\tau \\rightarrow \\infty,T),\n\\end{split}\n\\end{equation}\n\nreflecting the transition from the undiffused spectral correlation (absent any fluctuations, $p(\\zeta,\\tau \\rightarrow 0, T)$) to the diffused spectral correlation $p(\\zeta,\\tau \\rightarrow \\infty, T)$ with the evolution of $C(\\tau)$. Note that for $\\tau \\rightarrow 0$, $\\delta\\omega(t_{1},T)=\\delta\\omega(t_{2},T)$ and the spectral correlation thus reduces to the homogeneous spectral correlation $p(\\zeta,\\tau\\rightarrow 0,T)=\\langle\\int_{-\\infty}^{\\infty}s(\\omega,T)s(\\omega-\\zeta,T)d\\omega\\rangle$.\n\n\\subsection{A lifetime-distinct doublet undergoing Gaussian spectral fluctuations}\\label{static doublet_Gauss}\nWe discuss a system of two uncoupled and lifetime-distinct radiating dipoles undergoing uncorrelated spectral diffusion and involving states $|A\\rangle$ and $|B\\rangle$. The system's energy diagram is shown in figure \\ref{fig:Fig3}a (inset). The microscopic interpretation involves a system with two emissive states coupled to different bath fluctuations. We show how lifetime-resolved PCFS can separate the homogeneous lineshape and spectral diffusion parameters of the two transitions.\\\\\nThe different emission lifetimes result in microtime-dependent relative weights of emission intensity originating from states $|A\\rangle$ and $|B\\rangle$ after equal populations have been prepared through laser excitation. We decompose the overall dynamic spectrum of the system $s(\\omega,t,T)$ into microtime-dependent components as $s(\\omega,t,T)=a(T)s_{A}(\\omega,t)+b(T)s_{B}(\\omega,t)$, where $a(T)$ and $b(T)$ are the relative probabilities of a given photon originating from either state $|A\\rangle$ or $|B\\rangle$, and show that the spectral correlation expands as\n\\begin{equation}\\label{reduced:spec_corr}\n\\begin{split}\np(\\zeta,\\tau,T)=a(T)^{2}p_{AA}(\\zeta,\\tau)\\\\\n+ a(T)b(T)(p_{AB}(\\zeta,\\tau)+p_{BA}(\\zeta,\\tau))\\\\\n+ b(T)^{2}p_{BB}(\\zeta,\\tau)\n\\end{split}\n\\end{equation}\n(see Supplementary Information). The terms quadratic in $a(T)$ and $b(T)$ represent the spectral auto-correlations of the individual states $p_{AA}$ and $p_{BB}$, while the cross-terms involving $p_{AB}$ represent the cross-correlation of the spectra $s_{A}(\\omega,t,T)$ and $s_{B}(\\omega,t,T)$. The form of the spectral correlation can be understood intuitively because the spectral correlation is compiled from pairs of photons with origins drawn from the four possible combinations of $|A\\rangle$ and $|B\\rangle$. Importantly, the left- and right-sided correlations $p_{AB}$ and $p_{BA}$ are not identical unless $s_{A}(\\omega)$ and $s_{B}(\\omega)$ share the same center frequency $\\omega_{0}$ and are symmetric in $\\omega$.\\\\\nSpectral diffusion is a ubiquitous process observed for many single emitters. 
Common descriptions of single-emitter spectral diffusion are the non-Markovian and discrete Poissonian Wiener process\\cite{Beyler2013} or the mean-reverting Ornstein-Uhlenbeck process.\\cite{Richert2001} These processes describe spectral diffusion phenomenologically, and for simplicity we consider a simple non-Markovian Poissonian Gaussian jumping model (GJM).\\cite{Utzat2019} The GJM process is characterized by a time-invariant probability density for discrete spectral jump occurrence to a new spectral position drawn from a Gaussian probability distribution function over $\\omega$. For the two states $|A\\rangle$ and $|B\\rangle$ as denoted in the subscripts, we write $Prob(\\delta \\omega_{A,B})=\\frac{1}{\\sigma_{A,B}\\sqrt{2\\pi}}e^{-\\frac{\\delta\\omega^{2}}{2\\sigma_{A,B}^{2}}}$ for the probability of a given spectral shift at a point in time. Here, we have introduced the spectral fluctuation term $\\delta\\omega_{A,B}$ from earlier. The microscopic interpretation of this process is the time-stochastic variation of the bath assuming discrete conformations coupling to the system. The corresponding fluctuation correlation function can be written as $C(\\tau)=e^{-\\tau\/\\tau_{c}}$ and is described by an exponential decay with a characteristic spectral jump time of $\\tau_{c}$. When the two states diffuse independently of each other, no correlation is present and $C_{AB}(\\tau)=0$. In this case, $\\langle\\delta\\omega_{A}(t)\\delta\\omega_{B}(t+\\tau)\\rangle=\\langle\\delta\\omega_{A}(t)\\rangle\\langle\\delta\\omega_{B}(t+\\tau)\\rangle=0$ because independently diffusing emissive states will not be correlated, and the cross-terms $p_{AB}$ and $p_{BA}$ in equation \\ref{reduced:spec_corr} only reflect the cross-correlations of the inhomogeneous components $p_{AB\/BA}(\\zeta,\\tau \\rightarrow \\infty)$. Absent any memory of spectral fluctuations even at early $\\tau$, the time average over the spectral-correlations of all random configurations is the cross-correlation of the inhomogeneously broadened (diffused) spectra\n\n\\begin{equation}\np_{AB}(\\zeta)=\\langle\\int_{-\\infty}^{\\infty}e^{-\\frac{\\delta\\omega^{2}}{2\\sigma_{A}^{2}}}e^{-\\frac{(\\delta\\omega+\\zeta)^{2}}{2\\sigma_{B}^{2}}}d\\delta\\omega\\rangle,\n\\end{equation}\n\nwhere $\\sigma_{A}$ and $\\sigma_{B}$ are the widths of the Gaussian probability envelopes of the diffused distributions of states $|A\\rangle$ and $|B\\rangle$.
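\n\nFor reference, a minimal sketch of a GJM trajectory as used in the simulations below: jumps arrive as a Poisson process with rate \\texttt{k\\_jump}, and each jump draws a new detuning from a zero-mean Gaussian of width \\texttt{sigma}.\n\\begin{verbatim}\nimport numpy as np\n\ndef gjm_trajectory(n_steps, dt, k_jump, sigma,\n                   rng=np.random.default_rng()):\n    delta = np.empty(n_steps)\n    current = rng.normal(0.0, sigma)\n    for i in range(n_steps):\n        if rng.random() < k_jump * dt:   # Poisson jump occurrence\n            current = rng.normal(0.0, sigma)\n        delta[i] = current\n    return delta  # detuning; C(tau) decays as exp(-tau\/tau_c)\n\\end{verbatim}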
\n\nWe numerically simulate the system of independently-diffusing optical transitions with parameters commensurate with typical experimental cryogenic single-molecule spectroscopy (see Supplementary Information). The time-domain results of the simulation are discussed in Figure \\ref{fig:Fig3}. The configuration of the system is shown in Figure \\ref{fig:Fig3}(a). The corresponding lifetime exhibits biexponential decay behavior, as expected. In (b) and (c), we compare the cross-correlation functions for two different slices with microtime ranges of $T=0-100$ps and $T=2000-7000$ps, where $|A\\rangle$ and $|B\\rangle$ are the dominant emissive states, respectively. Unlike for the static doublet discussed in the Supplementary Information, the cross-correlations $g_{X}^{(2)}(\\tau)$ indicate spectral dynamics, evident from the loss of anti-correlation at longer $\\tau$. As we specify different jumping rates for the two states, the decay of the spectral coherence evident in (b) and (c) occurs at different $\\tau$. The PCFS interferogram derived from the cross-correlations (see Supplementary Information for the derivation) for photons emitted with a time constant of $<100$ ps is shown in (d) and informs on the loss of photon-coherence between $1\\mu\\textrm{s}$ and $1\\textrm{ms}$ owing to the energy fluctuations of the photons emitted $<100$ ps after laser excitation.\\\\\nIn Figure \\ref{fig:Fig4} we discuss the same simulation results in the spectral domain. In (a), we show the full-width-at-half-maximum (FWHM) of the spectral correlation for both $T$ and $\\tau$, a representation that makes immediately obvious the differences in the homogeneous linewidths at early $\\tau$ and the differences in spectrally-diffused linewidths at late $\\tau$. For completeness we also show $p(\\zeta,T)$ (d) and $p(\\zeta,\\tau)$ (b) for fixed $\\tau$ and $T$, respectively. These two representations inform on the spectral evolution owing to spectral diffusion and changing relative emission contributions from different states, respectively. (c) displays the evolution of the spectral correlation from the narrow homogeneous spectrum with a Lorentzian lineshape to the diffused Gaussian lineshape.\\\\\nOne capability of lifetime-resolved PCFS is the ability to extract the homogeneous linewidths of different lifetime-distinct states in the presence of fast spectral diffusion. We demonstrate this ability through a global fit to the T-dependent spectral correlation. We define a model for the fit as a linear combination of two Lorentzians and a Gaussian with floating linewidth parameters. The relative amplitudes $p_{AA}$, $p_{BB}$, and $p_{AB,BA}$ are calculated according to equation \\ref{reduced:spec_corr}, taking the weights $a(T)$ and $b(T)$ from fits to the emission lifetime into account. $p_{AA}$, $p_{BB}$, and $p_{AB,BA}$ are also displayed in (d). We apply a global fit to the slices of the spectral correlation $p(\\zeta,\\tau=60 \\mu \\textrm{s}, T)$ along $T$ as shown in (e),(f) and (g). The cross-correlation $p_{AB,BA}$ presents as a broad Gaussian background superimposed on the homogeneous Lorentzian spectral correlations $p_{AA,BB}$, as introduced in equation \\ref{reduced:spec_corr}. The width of this Gaussian component is $\\sigma_{AB}\\approx\\sqrt{\\sigma_{A}^{2}+\\sigma_{B}^{2}}$. The homogeneous lineshape parameters parsed into the numerical model are extracted by the fit within photon shot-noise, thus validating the approach adopted herein.\nWe note that in PCFS, the high temporal resolution achieved through photon-correlation comes at the cost of the loss of the absolute phase of the spectral information. In other words, both the asymmetry of the lineshape and the center frequency of $s(\\omega)$ are lost in the spectral correlation $p(\\zeta)$. The unambiguous reconstruction of $s(\\omega)$ from $p(\\zeta)$ is therefore impossible, and the spectral correlation is typically fit with a model parametrizing a suitable form for the underlying emission spectrum, as we have done herein \\cite{Cui2013a,Utzat2019}.\n\n\\subsection{A dynamic doublet with population transfer and spectral fluctuations}\\label{static doublet_Gauss_and_relax}\nWe now turn to a system of two coupled radiating dipoles undergoing population exchange and subject to correlated spectral diffusion. A specific example would be solid-state quantum emitters undergoing incoherent and phonon-mediated population transfer after non-resonant excitation \\cite{Vinattieri1994}. 
In quantum emitters, disentangling the relaxation rate and coherence times of the different fine-structure states in the presence of spectral diffusion is important for a detailed understanding of the dephasing process, as phonon-mediated population exchange constitutes an important dephasing process in the solid-state.\\cite{Masia2012} We depict the system's energy diagram in Figure \\ref{fig:Fig5}(a), which exhibits two excited states with equal oscillator strengths and an irreversible relaxation rate $k$ from the higher to the lower-lying state. In this system, photon emission from the higher-lying state $|A\\rangle$ will start immediately after population of the state. Emission of the lower-lying state $|B\\rangle$ requires further relaxation and is often phonon-mediated.\\cite{Masia2012} The relative population of states $|A\\rangle$ and $|B\\rangle$ will thus change during the emission lifetime of the overall system as long as the relaxation rate $k$ is faster than the radiative rate $1\/T_{1}$ of both $|A\\rangle$ and $|B\\rangle$. The population dynamics of the system can be described by the following set of coupled equations:\n\n\\begin{equation}\n\\frac{d|A\\rangle}{dt}=-(k+1\/T_{1})|A\\rangle\n\\end{equation}\n\n\\begin{equation}\n\\frac{d|B\\rangle}{dt}=k|A\\rangle-1\/T_{1}|B\\rangle\n\\end{equation}\nwith the solutions (imposing the initial condition $|B\\rangle(0)=0$):\n\n\\begin{equation}\n|A\\rangle(t)=|A\\rangle_{0}e^{-(k+1\/T_{1})t}\n\\end{equation}\n\n\\begin{equation}\n|B\\rangle(t)=|A\\rangle_{0}\\left(e^{-t\/T_{1}}-e^{-(k+1\/T_{1})t}\\right).\n\\end{equation}
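\n\nA short numerical sketch of the resulting relative emission weights, using the closed-form solutions above; the value of $T_{1}$ is an assumption chosen for illustration, while $k=1\/80$ ps$^{-1}$ matches the simulation parameter quoted in Figure \\ref{fig:Fig5}.\n\\begin{verbatim}\nimport numpy as np\n\ndef populations(T, k, T1, A0=1.0):\n    A = A0 * np.exp(-(k + 1.0\/T1) * T)\n    B = A0 * (np.exp(-T\/T1) - np.exp(-(k + 1.0\/T1) * T))\n    return A, B  # A + B = A0*exp(-T\/T1): monoexponential total decay\n\nT = np.linspace(0.0, 5000.0, 1000)            # microtime in ps\nA, B = populations(T, k=1.0\/80.0, T1=2000.0)  # T1 assumed\na, b = A\/(A + B), B\/(A + B)  # relative weights of p_AA and p_BB\n\\end{verbatim}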
\n\nWe show the effect of the changing relative cross-correlation probabilities between states $|A\\rangle$ and $|B\\rangle$ ($p_{AA}$, $p_{BB}$) in Figure \\ref{fig:Fig5} (b). Despite the $T$-invariant exponential population decay constant leading to a monoexponential photoluminescence lifetime of the overall system, the relative weights of $p_{AA}$ and $p_{BB}$ change with $T$. We show the spectral correlation of the lifetime-resolved PCFS experiment with indiscriminate $T$ in (c). On timescales shorter than the spectral diffusion time $\\tau$, the fine-structure states are well-separated. At late $\\tau$, the broad diffused lineshape obfuscates the fine-structure splitting.\\\\\nWe demonstrate that lifetime-resolved PCFS can recover the lineshape parameters of the homogeneous doublet by applying a least-squares fit of a suitable model to the T-dependent spectral correlation, as shown in (d),(e),(f). The model consists of two Lorentzians with floating linewidths $\\Gamma_{1}$, $\\Gamma_{2}$, an energy offset $\\Omega$ and a relaxation rate $k$, which determines the temporal change of the relative emission contributions of $|A\\rangle$ and $|B\\rangle$. We recover all model parameters within photon shot-noise, thus validating the utility of lifetime-resolved PCFS to extract the coherences and relaxation rates of different emissive fine-structure states. We note that the observation of early-$\\tau$ multiplets in the spectral correlation, compared to the broad Gaussian background in section \\ref{static doublet_Gauss}, is the signature of correlated spectral diffusion dynamics between the two states. Our simulations suggest that measuring the photon-coherences of quantum emitters exhibiting spectral fluctuations and different emissive fine-structure states will provide an avenue to study quantum emitter optical dephasing through both fluctuations and population exchange between different electronic states.\n\n\\section{Conclusions}\nWe propose a new photon-correlation spectroscopic technique that extracts spectral fluctuations along the lifetime-trajectory of single emitters. The technique works through time-correlation of photons detected at the output arms of a variable path-length difference interferometer in both the microtime and macrotime domain and can be implemented using standard picosecond photon-counting electronics. We show that lineshape and fluctuation parameters can be extracted from fits to the lifetime-resolved spectral correlations. Our technique opens up multiple frontiers in single-emitter spectroscopy. We emphasize that our technique is general, but point to its special utility in quantum emitter research, enabled by the high spectral resolution required to resolve photon-coherences at low temperatures. Experimental efforts will be directed towards probing the fluctuation dynamics of non-stationary systems and the investigation of decoherence processes in quantum emitters. Suitable materials are readily available, such as emissive defects in diamond and emerging 2D materials, as well as semiconductor nanostructures.\n\n\\section{Acknowledgements}\nThe lead author of this study (H.U., study conception, derivation, modeling and interpretation) was initially funded by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering (award no. DE-FG02-07ER46454) and funded by Samsung Inc. (SAIT) during the completion of the study. We thank Weiwei Sun, David Berkinsky, Alex Kaplan, Andrew Proppe, and Matthias Ginterseder for critically reading the manuscript and their feedback.\n\n\n\\newpage\n\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=12cm]{figures\/two_PCFS_figures-01.png}\n \\caption{In conventional PCFS, the spectral correlation is compiled from photon-pairs irrespective of their microtime T, often under continuous wave excitation (a). In lifetime-resolved PCFS, photon-pairs with a given microtime $T$ and macrotime separation $\\tau$ are spectrally correlated (b). Here, we adopt a time-binning approach to collect photons with different T in suitable microtime intervals, as indicated by the color-shaded background. The proposed optical setup is shown in (c). The photon-stream from a single emitter under pulsed excitation is directed into a variable path-length difference Michelson interferometer. All photon-counts at the output arms of the interferometer are recorded in time-tagged (T3) mode using picosecond single-photon counting electronics.}\n \\label{fig:Fig1}\n\\end{figure}\n\n\\newpage\n\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=12cm]{figures\/two_PCFS_figures-02.png}\n \\caption{Simulation of two uncoupled radiating dipoles involving states $|A\\rangle$ and $|B\\rangle$. The two transitions are coupled to two different bath fluctuations and exhibit different lifetimes $T_{1}$, linewidths $\\Gamma$ and spectral diffusion parameters $k_{jump}$ and $\\sigma_{A,B}$. The total fluorescence lifetime of the system exhibits a biexponential decay (a). 
The shaded panels (b) and (c) show the cross-correlation functions $g_{X}^{(2)}(\\tau)$ for different optical path-length differences $\\delta_{0}$ and microtimes $T$, where $|A\\rangle$ and $|B\\rangle$ are the dominant emissive states, respectively. The loss of coherence with increasing $\\tau$ is evident from the reduction in anti-correlation. This coherence loss occurs at earlier $\\tau$ for early-$T$ photons (emission predominantly from $|A\\rangle$, (b)) compared to late-$T$ photons (emission predominantly from $|B\\rangle$, (c)). The PCFS interferogram $G^{(2)}(\\delta,\\tau)$ for early-T photons is shown in (d) and reflects the evolution from the exponential homogeneous dephasing at early $\\tau$ to the spectrally-diffused Gaussian dephasing at late $\\tau$.}\n \\label{fig:Fig3}\n\\end{figure}\n\n\n\\newpage\n\n\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=12cm]{figures\/two_PCFS_figures-03.png}\n \\caption{Spectral results of the lifetime-resolved PCFS simulation of two uncoupled dipoles. (a) shows the full-width-at-half-maximum (FWHM) of $p(\\zeta,\\tau,T)$ along $T$ and $\\tau$. The differences in the homogeneous linewidths of $|A\\rangle$ and $|B\\rangle$ at early $\\tau$ and in the diffused linewidths at late $\\tau$ are immediately obvious in this representation. The orange-shaded panels (b) and (c) show the effect of spectral diffusion for early-microtime photons originating mostly from state $|A\\rangle$. We show the evolution of the weights of auto- and cross-correlations between states along $T$ in (d). The weights are derived from the relative amplitude of the two exponential components of the photoluminescence decay in (Fig.\\ref{fig:Fig3}a). Taking $p_{AA}$, $p_{BB}$, and $p_{AB,BA}$ into account, we apply a global fit to the spectral correlation along $T$ to recover the lineshape parameters of the undiffused system, as shown in (e),(f) and (g). The broad underlying Gaussian component in (f) reflects the cross-correlation of the diffused distributions of $|A\\rangle$ and $|B\\rangle$ and has a width of $\\sigma\\approx\\sqrt{\\sigma_{A}^2+\\sigma_{B}^2}$.}\n \\label{fig:Fig4}\n\\end{figure}\n\n\\newpage\n\n\\begin{figure}[htp]\n \\centering\n \\includegraphics[width=12cm]{figures\/two_PCFS_figures-04.png}\n \\caption{Lifetime-resolved PCFS simulation of two coupled dipoles undergoing population transfer and interacting with the same bath, resulting in collective spectral diffusion of the doublet (a). We introduce a phonon-mediated relaxation rate between the upper and lower state of $k_{relax}=1\/80 \\textrm{ps}^{-1}$. As the radiative rates of the two states are chosen to be equal, the emission lifetime follows a monoexponential decay behavior despite changing relative populations of $|A\\rangle$ and $|B\\rangle$ with the microtime (b). The spectral correlation for all photons, irrespective of their microtime, is shown in (c) and demonstrates the transition from a triplet at early $\\tau$ to the spectrally-diffused distribution at late $\\tau$. 
The fine-structure splitting $\\Omega$, the linewidths $\\Gamma_{A,B}$, and the relaxation rate $k$ can be recovered through lifetime-resolved PCFS and a global fit of the slices along $T$ with a fixed macrotime correlation of $\\tau=8 \\mu s$ (d),(e) and (f).}\n \\label{fig:Fig5}\n\\end{figure}\n\n\n\\newpage\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe extension of the standard cointegration paradigm to more general,\nfractional circumstances has drawn increasing attention in the time series\nliterature over the last decade. The possibility of fractional cointegration\nwas already mentioned in the seminal paper by \\cite{eg87}. \\cite{rob94} was\nthe first to establish consistency for narrow-band estimates of fractional\ncointegrating relationships in the stationary case. The properties of this\nestimator (which has become known as NBLS) were then investigated under\nnonstationary circumstances by \\cite{mr01}, \\cite{rm01,rm03}. \\cite%\n{chen_hurv03a,chen_hurv03b} considered principal components methods in the\nfrequency domain, whereas \\cite{vel03}, \\cite{rob_hualde03} advocate\npseudo-maximum likelihood methods which improve the efficiency of the\nestimates and yield standard asymptotic properties. Cointegration among\nstationary processes has also been considered, for instance by \\cite{mar00}, %\n\\cite{cn04}. Many other insightful papers on fractional cointegration have\nappeared in the literature, for instance \\cite{dol_marm04}, \\cite{dav02}.\n\nAll these papers have focused on the case of linear cointegration.\nNevertheless, the possibility of polynomial cointegrating relationships\nseems of practical interest, for instance (but not exclusively) for\napplications to financial data. Nonlinear cointegration has been considered\nin the literature (most recently by \\cite{kmt05}), but only in\nnon-fractional circumstances, to the best of our knowledge. In this paper,\nwe shall focus on nonlinear cointegrating relationships among stationary\nlong memory processes; the restriction to a stationarity framework is made\nnecessary by the need to exploit the rich machinery of expansions into\nHermite polynomials, an extremely powerful tool to investigate nonlinear\ntransformations (see for instance \\cite{gir_surg_85}, \\cite{arcones94}, \\cite%\n{surgalis03}). Our general setting can be explained as follows. Let $\\{A_{t}\\}=\\{x_{t},e_{t}\\},$ $t\\in \\mathbb{Z}$ be a stationary bivariate time\nseries with mean zero and covariance such that%\n\\begin{equation*}\n\\mathbb{E}A_{t}A_{t+\\tau }^{\\prime }:=\\Gamma (\\tau )=\\int_{0}^{2\\pi\n}f(\\lambda )e^{i\\tau \\lambda }d\\lambda \\text{ ,}\n\\end{equation*}%\nwhere \n\\begin{equation*}\nf(\\lambda )=\\left[ \n\\begin{array}{cc}\nf_{xx}(\\lambda ) & f_{xe}(\\lambda ) \\\\ \nf_{ex}(\\lambda ) & f_{ee}(\\lambda )%\n\\end{array}%\n\\right] \\text{ ,}\n\\end{equation*}%\nis the spectral density matrix of $\\{A_{t}\\}$. We shall take $%\n\\{x_{t},e_{t}\\}$ to be long memory, in the sense that \n\\begin{equation}\n\\gamma _{ab}(\\tau )\\simeq G_{ab}\\tau ^{d_{a}+d_{b}-1} \\label{eq:covariance}\n\\end{equation}%\nfor $a,b=x,e$ , $0<d_{x},d_{e}<1\/2$, $G_{xx},G_{ee}>0,$ $\\left|\nG_{xe}\\right| \\geq 0$. 
We write $z\\sim I(d_{z})$ for long memory processes\nwith memory parameter $d_{z},\\ $and $\\simeq $ to denote that the ratio of\nthe left- and right-hand sides tends to 1.\n\nNow assume there is a polynomial function $g(\\cdot )$ such that $\\mathbb{E}[g(x_{t})]=0$ and \n\\begin{equation}\ny_{t}=g(x_{t})+e_{t},\\qquad 0\\leq d_{e}<d_{y}<\\frac{1}{2}\\text{ . }  \\label{eq:model}\n\\end{equation}\nMore precisely, we introduce the following assumptions.\n\n\\ \n\n\\textsc{Assumption }A1: $g(\\cdot )$ is a polynomial of finite order $K$ and Hermite rank $k_{0}$, so that, denoting by $H_{k}(\\cdot )$ the $k$-th order Hermite polynomial, we can write, for real coefficients $\\beta _{k_{0}},\\dots ,\\beta _{K}$ with $\\beta _{K}\\neq 0$,\n\\begin{equation}\ny_{t}=\\sum_{k=k_{0}}^{K}\\beta _{k}H_{k}(x_{t})+e_{t}\\text{ . }  \\label{eq:recoln}\n\\end{equation}\n\n\\ \n\n\\textsc{Assumption }A2: $\\{x_{t},\\varepsilon _{t}\\}$ is a jointly Gaussian, zero mean, unit variance stationary sequence, with $\\varepsilon _{t}\\sim I(d_{\\varepsilon })$, and the residual is Gaussian subordinated, namely\n\\begin{equation*}\ne_{t}=\\sum_{\\widetilde{k}=\\widetilde{k}_{0}}^{\\widetilde{K}}\\xi _{\\widetilde{k}}H_{\\widetilde{k}}(\\varepsilon _{t})\\text{ , }\\xi _{\\widetilde{k}_{0}}\\neq 0\\text{ .}\n\\end{equation*}\n\n\\ \n\n\\textsc{Assumption }A3:\n\\begin{equation*}\nK(2d_{x}-1)>\\left\\{ -1\\vee \\widetilde{k}_{0}(2d_{\\varepsilon }-1)\\right\\} \n\\text{ . }\n\\end{equation*}\n\n\\ \n\nAssumptions A1-A2 identify a polynomial cointegration model where the\nresidual is a Gaussian subordinated process. Assumption A3 ensures that $H_{K}(x_{t})$ is still a long memory process, with stronger memory than $e_{t}.$ This is needed for consistency and indeed it is also a necessary\nidentification condition: recall $x_{t}$ and $e_{t}$ can be correlated, so\nthere are no means to distinguish $H_{k}(x_{t})$ and $e_{t}$ unless the\nformer has stronger long range dependence. Recall that $H_{k}(x_{t})\\sim\nI(d_{k})$ , $k=k_{0},...,K$, where $2d_{k}-1:=k(2d_{x}-1).$ In this paper,\nwe take $k_{0}$ and $K$ to be known, whereas their estimation will be\naddressed in a different work. Note that to implement our estimates we need\nno a priori information on $\\widetilde{k}_{0},\\widetilde{K}$, although the\nvalue of $\\widetilde{k}_{0}(2d_{\\varepsilon }-1)$ does affect the rate of\nconsistency of our estimators.\n\nAs mentioned before, (\\ref{eq:model}) is a cointegrating relation, so we\nallow $\\mathbb{E}(x_{t}\\varepsilon _{t})$ (and hence $\\mathbb{E}(x_{t}e_{t})$) to be different from zero. As for linear cointegration, this leads to the\ninconsistency of OLS and justifies the use of spectral regression\nmethods for the estimation of the Hermite coefficients. Concerning the kernel,\nwe write $k_{M}(\\cdot )=k(\\tau \/M)$ and introduce the following\n\n\\ \n\n\\textsc{Assumption }B: The kernel $k(\\cdot )$ is a real-valued, symmetric\nLebesgue measurable function that, for $\\upsilon \\in \\mathbb{R}$, satisfies \n\\begin{equation*}\n\\int_{-1}^{1}k(\\upsilon )d\\upsilon =1\\qquad 0\\leq k(\\upsilon )<\\infty\n,\\quad k(\\upsilon )=0\\quad \\text{for}\\;|\\upsilon |>1.\n\\end{equation*}\n\n\\ \n\nOur final assumption is a standard bandwidth condition.\n\n\\ \n\n\\textsc{Assumption }C: Let $\\eta =K\\vee \\widetilde{k}_{0};$ as $n\\rightarrow \\infty $ , \n\\begin{equation*}\n\\frac{1}{M}+\\frac{M^{3\\vee (\\eta -2)}}{n}\\rightarrow 0\\text{ .}\n\\end{equation*}\n\n\\ \n\nAssumption C imposes a minimal lower bound and a significant upper bound on\nthe behaviour of the user-chosen bandwidth parameter $M.$ The need for this\nbandwidth condition is made clear by inspection of the proof in the\nappendix; heuristically, as $K$ grows the signal in $H_{K}(x_{t})$\ndecreases, which makes the estimation harder; on the other hand an increase\nin $\\widetilde{k}_{0}$ makes the convergence rates in Lemma 1 and Theorem 1\nfaster, whence the need for tighter bandwidth conditions. 
We are not claiming Assumption C is sharp; however, an inspection of the Proof of Lemma 1 reveals that any improvement is likely to require at least almost unmanageable computations.\n\nEquation (\\ref{eq:recoln}) can be rewritten more compactly as \n\\begin{equation*}\ny_{t}=\\mathbf{\\beta }^{\\prime }H(x_{t})+e_{t}\\text{ , where }%\nH(x_{t})=\\left\\{ H_{1}(x_{t}),...,H_{K}(x_{t})\\right\\} ^{\\prime }\\text{ }.\n\\end{equation*}\n\nLet us now define: \n\\begin{equation*}\nf_{HH}(\\lambda )=\\left[ \n\\begin{array}{cccc}\nf_{11}(\\lambda ) & 0 & \\cdots & \\cdots \\\\ \n0 & f_{22}(\\lambda ) & 0 & \\cdots \\\\ \n\\vdots & \\vdots & \\ddots & \\vdots \\\\ \n0 & \\vdots & \\vdots & f_{KK}(\\lambda )%\n\\end{array}%\n\\right] \\text{ , }f_{He}(\\lambda )=\\left[ \n\\begin{array}{c}\nf_{1e}(\\lambda ) \\\\ \nf_{2e}(\\lambda ) \\\\ \n\\vdots \\\\ \nf_{Ke}(\\lambda )%\n\\end{array}%\n\\right] \n\\end{equation*}%\nand let also, for $a,b=1,2,\\dots ,K$:%\n\\begin{eqnarray*}\n\\gamma _{ab}(\\tau ) &=&\\mathbb{E}\\left[ H_{a}(x_{t})H_{b}(x_{t+\\tau })\\right]\n=a!\\delta _{a}^{b}\\left\\{ \\mathbb{E}\\left( x_{t}x_{t+\\tau }\\right) \\right\\}\n^{a}, \\\\\n\\gamma _{ae}(\\tau ) &=&\\mathbb{E}\\left[ H_{a}(x_{t})e_{t+\\tau }\\right] =%\n\\mathbb{E}\\left[ H_{a}(x_{t})\\sum_{\\widetilde{k}=\\widetilde{k}_{0}}^{%\n\\widetilde{K}}\\xi _{\\widetilde{k}}H_{\\widetilde{k}}(\\varepsilon _{t+\\tau })\\right] \n\\\\\n&=&\\left\\{ \n\\begin{array}{c}\na!\\xi _{a}\\left\\{ \\mathbb{E}\\left( x_{t}\\varepsilon _{t+\\tau }\\right)\n\\right\\} ^{a}\\text{ for }a\\leq \\widetilde{K} \\\\ \n0\\text{ , otherwise}%\n\\end{array}%\n\\right. .\n\\end{eqnarray*}%\nwhere $\\delta _{a}^{b}$ represents the Kronecker delta function. Likewise%\n\\begin{eqnarray*}\nf_{aa}(\\lambda ) &:&=(2\\pi )^{-1}\\sum_{\\tau =-\\infty }^{\\infty }\\gamma _{aa}(\\tau\n)e^{-i\\lambda \\tau }=a!f_{x}^{(\\ast a)}(\\lambda )\\text{ , } \\\\\nf_{ay}(\\lambda ) &:&=(2\\pi )^{-1}\\sum_{\\tau =-\\infty }^{\\infty }\\gamma _{ay}(\\tau\n)e^{-i\\lambda \\tau }\\text{ , }\\gamma _{ay}(\\tau ):=\\mathbb{E}\\left[\nH_{a}(x_{t})y_{t+\\tau }\\right] \\\\\nf_{ae}(\\lambda ) &:&=(2\\pi )^{-1}\\sum_{\\tau =-\\infty }^{\\infty }\\gamma _{ae}(\\tau\n)e^{-i\\lambda \\tau }\\text{ .}\n\\end{eqnarray*}%\nThe Weighted Covariance Estimator (WCE) of $\\beta ^{\\prime }=(\\beta\n_{1},\\dots ,\\beta _{K})$ is defined as%\n\\begin{equation*}\n\\hat{\\beta}_{M}=\\hat{f}_{HH}(0)^{-1}\\hat{f}_{Hy}(0)\\text{ ,}\n\\end{equation*}%\nwhence%\n\\begin{equation*}\n\\hat{\\beta}_{M}-\\beta =\\hat{f}_{HH}(0)^{-1}\\hat{f}_{He}(0)\\text{ ;}\n\\end{equation*}%\nas usual, we assume $\\hat{f}_{HH}(0)$ is non-singular, where \n\\begin{equation*}\n\\hat{f}_{HH}(0)=\\frac{1}{2\\pi }\\left[ \n\\begin{array}{ccc}\n\\sum_{\\tau =-M}^{M}k(\\tau \/M)c_{11}(\\tau ) & \\cdots & \\sum_{\\tau\n=-M}^{M}k(\\tau \/M)c_{1K}(\\tau ) \\\\ \n\\vdots & \\ddots & \\vdots \\\\ \n\\sum_{\\tau =-M}^{M}k(\\tau \/M)c_{K1}(\\tau ) & \\cdots & \\sum_{\\tau\n=-M}^{M}k(\\tau \/M)c_{KK}(\\tau )%\n\\end{array}%\n\\right] \\text{ ,}\n\\end{equation*}%\n\\begin{equation*}\n\\hat{f}_{Hz}(0)=\\frac{1}{2\\pi }\\left[ \n\\begin{array}{c}\n\\sum_{\\tau =-M}^{M}k(\\tau \/M)c_{1z}(\\tau ) \\\\ \n\\vdots \\\\ \n\\sum_{\\tau =-M}^{M}k(\\tau \/M)c_{Kz}(\\tau )%\n\\end{array}%\n\\right] \n\\end{equation*}%\nand \n\\begin{equation*}\nc_{ab}(\\tau )=\\left\\{ \n\\begin{array}{ll}\nn^{-1}\\sum_{t=1}^{n-\\tau }H_{a}(x_{t})H_{b}(x_{t+\\tau }) & \\qquad \\tau \\geq 0 \\\\ \nn^{-1}\\sum_{t=|\\tau |+1}^{n}H_{a}(x_{t})H_{b}(x_{t-|\\tau |}) & \\qquad \\tau <0%\n\\end{array}%\n\\right. \n\\end{equation*}%\n\\begin{equation*}\nc_{az}(\\tau )=\\left\\{ \n\\begin{array}{ll}\nn^{-1}\\sum_{t=1}^{n-\\tau }H_{a}(x_{t})z_{t+\\tau } & \\qquad \\tau \\geq 0 \\\\ \nn^{-1}\\sum_{t=|\\tau |+1}^{n}H_{a}(x_{t})z_{t-|\\tau |} & \\qquad \\tau <0%\n\\end{array}%\n\\right. \n\\end{equation*}%\nfor $a,b=1,2,\\dots ,K$, $z=e,y.$
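\n\nFor concreteness, a minimal numerical sketch of the estimator, assuming a Bartlett kernel (which satisfies Assumption B) and the probabilists' Hermite polynomials from \\texttt{numpy}; \\texttt{x} and \\texttt{y} are one-dimensional \\texttt{numpy} arrays.\n\\begin{verbatim}\nimport numpy as np\nfrom numpy.polynomial.hermite_e import hermeval\n\ndef wce(x, y, K, M, kernel=lambda u: max(0.0, 1.0 - abs(u))):\n    # Hermite regressors H_1(x_t),...,H_K(x_t); the common factor\n    # 1\/(2 pi) cancels in the ratio and is omitted.\n    n = len(x)\n    H = np.column_stack([hermeval(x, [0] * k + [1])\n                         for k in range(1, K + 1)])\n    f_HH = np.zeros((K, K))\n    f_Hy = np.zeros(K)\n    for tau in range(-M, M + 1):\n        w = kernel(tau \/ M)\n        m = abs(tau)\n        if tau >= 0:\n            a, b, ya = H[: n - m], H[m:], y[m:]\n        else:\n            a, b, ya = H[m:], H[: n - m], y[: n - m]\n        f_HH += w * (a.T @ b) \/ n\n        f_Hy += w * (a.T @ ya) \/ n\n    return np.linalg.solve(f_HH, f_Hy)  # beta-hat_M\n\\end{verbatim}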
\n\nThe following lemma is the main tool for our\nconsistency result; compare Lemma 1 in \\cite{mar00}. As before, we write \n\\begin{equation*}\nd_{a}:=a\\left( d_{x}-\\frac{1}{2}\\right) +\\frac{1}{2}\\text{ , }d_{e}=\\left\\{ \n\\widetilde{k}_{0}\\left( d_{\\varepsilon }-\\frac{1}{2}\\right) +\\frac{1}{2}%\n\\right\\} \\vee 0\\text{ ;}\n\\end{equation*}%\nby Assumption A3 we have $d_{a}>0,$ $a=k_{0},...,K.$\\newline\n\n\\ \\newline\n\\textbf{LEMMA 1} Under Assumptions A-C, as $n\\rightarrow \\infty $ we have: \n\\begin{align}\n\\sum_{\\tau =-M}^{M}k\\left( \\frac{\\tau }{M}\\right) \\left\\{ c_{ab}(\\tau\n)-\\gamma _{ab}(\\tau )\\right\\} & =o_{p}( M^{d_{a}+d_{b}}) \\label{lemma11} \\\\\n\\sum_{\\tau =-M}^{M}k\\left( \\frac{\\tau }{M}\\right) \\left\\{ c_{ae}(\\tau\n)-\\gamma _{ae}(\\tau )\\right\\} & =o_{p}( M^{d_{a}+d_{e}}) \\label{lemma12}\n\\end{align}\nfor $a,b=1,2,\\dots ,K$.\\newline\n\\textbf{Proof \\ }See Appendix.\n\n\\ \n\nWe are now ready to state the main result of this paper. Let%\n\\begin{eqnarray*}\nB_{ab} &:&=a!G_{xx}^{a}\\delta _{a}^{b}\\int_{-1}^{1}k(\\upsilon )|\\upsilon\n|^{a(2d_{x}-1)}d\\upsilon <\\infty \\text{ ,} \\\\\nB_{ae} &:&=a!\\xi _{a}\\left\\{ G_{x\\varepsilon }\\right\\}\n^{a}\\int_{-1}^{1}k(\\upsilon )|\\upsilon |^{a(d_{x}+d_{\\varepsilon\n}-1)}d\\upsilon <\\infty \\text{ , for }a\\leq \\widetilde{K}\\text{ ,}\n\\end{eqnarray*}%\nsee also Assumption B, $a,b=k_{0},...,K$. 
Let%
\begin{equation*}
\mathcal{B}_{HH}=\text{diag}\left\{ B_{11},\dots ,B_{KK}\right\} \text{ , }%
\mathcal{B}_{He}=\left( B_{1e},...,B_{Ke}\right) ^{\prime }\text{ , }\mathcal{M}=%
\text{diag}\left\{ M^{-d_{1}},\dots ,M^{-d_{K}}\right\} \text{ .}
\end{equation*}%
Note that $B_{ae}=0$ unless $a\leq \widetilde{K},$ due to the orthogonality
of Hermite polynomials.

\ \newline
\textbf{Theorem 1 }Under Assumptions A-C, as $n\rightarrow \infty $ 
\begin{equation*}
\left[ 
\begin{array}{ccc}
M^{d_{1}-d_{e}} & 0 & 0 \\ 
0 & \ddots & 0 \\ 
0 & 0 & M^{d_{K}-d_{e}}%
\end{array}%
\right] \left( \hat{\beta}_{M}-\beta \right) =\mathcal{B}_{HH}^{-1}\mathcal{B%
}_{He}+o_{p}(1)\text{ .}
\end{equation*}%
\textbf{Proof }By the dominated convergence theorem, as $M\rightarrow \infty 
$ 
\begin{align*}
M^{-(d_{a}+d_{b})}\sum_{\tau =-M}^{M}k\left( \frac{\tau }{M}\right) \gamma
_{ab}(\tau )& =\sum_{\tau =-M}^{M}k\left( \frac{\tau }{M}\right) \frac{%
\gamma _{ab}(\tau )}{M^{d_{a}+d_{b}-1}}\frac{1}{M}\rightarrow B_{ab}\text{ ,} \\
M^{-(d_{a}+d_{e})}\sum_{\tau =-M}^{M}k\left( \frac{\tau }{M}\right) \gamma
_{ae}(\tau )& =\sum_{\tau =-M}^{M}k\left( \frac{\tau }{M}\right) \frac{%
\gamma _{ae}(\tau )}{M^{d_{a}+d_{e}-1}}\frac{1}{M}\rightarrow B_{ae}\text{ .}
\end{align*}

From Lemma 1, it follows easily that 
\begin{equation*}
\hat{f}_{HH}(0)=\left[ 
\begin{array}{cccc}
\zeta _{1}+o_{p}(M^{2d_{1}}) & o_{p}(M^{d_{1}+d_{2}}) & \cdots & 
o_{p}(M^{d_{1}+d_{K}}) \\ 
o_{p}(M^{d_{2}+d_{1}}) & \zeta _{2}+o_{p}(M^{2d_{2}}) & \cdots & 
o_{p}(M^{d_{2}+d_{K}}) \\ 
\vdots & \vdots & \ddots & \vdots \\ 
o_{p}(M^{d_{K}+d_{1}}) & \cdots & \cdots & \zeta _{K}+o_{p}(M^{2d_{K}})%
\end{array}%
\right]
\end{equation*}%
where%
\begin{equation*}
\zeta _{a}:=\frac{1}{2\pi }\sum_{\tau =-M}^{M}k(\frac{\tau }{M})\gamma
_{aa}(\tau )\text{ .}
\end{equation*}%
Moreover 
\begin{equation*}
\mathcal{M}\hat{f}_{HH}(0)\mathcal{M}=\left[ 
\begin{array}{ccc}
B_{11}+o_{p}(1) & \cdots & o_{p}(1) \\ 
\vdots & \ddots & \vdots \\ 
o_{p}(1) & \cdots & B_{KK}+o_{p}(1)%
\end{array}%
\right] \rightarrow \mathcal{B}_{HH}\text{ .}
\end{equation*}%
Therefore, as $M\rightarrow \infty $ 
\begin{equation*}
\hat{f}_{HH}(0)=\mathcal{M}^{-1}\mathcal{B}_{HH}\mathcal{M}^{-1}+o_{p}(1)=%
\mathcal{B}_{HH}\mathcal{M}^{-2}+o_{p}(1)\text{ ,}
\end{equation*}%
since $\mathcal{B}_{HH}$ is diagonal and hence commutes with $\mathcal{M}%
^{-1}.$ Using the same arguments, it follows easily that 
\begin{equation*}
M^{-d_{e}}\mathcal{M}\hat{f}_{He}(0)\rightarrow \mathcal{B}_{He}\text{ , as }%
n\rightarrow \infty \text{ .}
\end{equation*}%
Finally, as $n\rightarrow \infty $, 
\begin{equation*}
M^{-d_{e}}\mathcal{M}^{-1}\left( \hat{\beta}_{M}-\beta \right) =\left\{ 
\mathcal{M}\hat{f}_{HH}(0)\mathcal{M}\right\} ^{-1}\mathcal{M}M^{-d_{e}}\hat{%
f}_{He}(0)\rightarrow \mathcal{B}_{HH}^{-1}\mathcal{B}_{He}\text{ ,}\qquad
\end{equation*}%
which completes the proof of Theorem 1.

\hfill$\square$

\ \newline
\textbf{Remark} In Theorem 1 we have proved the consistency of the WCE
estimator of the cointegrating vector, $\hat{\beta}_{M}\overset{p}{%
\rightarrow }\beta $. In a very loose sense, this result follows from the
consistency of a weighted average estimate of the spectral density at
frequency zero; see Lemma 1.
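\ \newline
\textbf{Remark }Although our focus is on asymptotics, $\hat{\beta}_{M}$ is
straightforward to compute from the kernel-weighted sample autocovariances
introduced above. The following sketch (in Python with NumPy; the Bartlett
kernel, the bandwidth in the usage line below and all function names are
merely illustrative choices, not part of our formal results) may help to fix
ideas:
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite(a, x):
    # probabilists' Hermite polynomial H_a evaluated at x
    # (x is assumed to be standardized to unit variance)
    c = np.zeros(a + 1)
    c[a] = 1.0
    return hermeval(x, c)

def ccov(u, v, tau):
    # sample cross-covariance c_uv(tau) = n^{-1} sum_t u_t v_{t+tau}
    n = len(u)
    if tau >= 0:
        return u[:n - tau] @ v[tau:] / n
    return u[-tau:] @ v[:n + tau] / n

def bartlett(u):
    # illustrative kernel choice: k(u) = max(1 - |u|, 0)
    return max(1.0 - abs(u), 0.0)

def wce(y, x, K, M, kernel=bartlett):
    # weighted covariance estimate: solves f_HH(0) beta = f_Hy(0);
    # the common 1/(2 pi) factor cancels and is omitted
    H = np.column_stack([hermite(a, x) for a in range(1, K + 1)])
    f_HH = np.zeros((K, K))
    f_Hy = np.zeros(K)
    for tau in range(-M, M + 1):
        w = kernel(tau / M)
        for a in range(K):
            f_Hy[a] += w * ccov(H[:, a], y, tau)
            for b in range(K):
                f_HH[a, b] += w * ccov(H[:, a], H[:, b], tau)
    return np.linalg.solve(f_HH, f_Hy)
\end{verbatim}
A call such as \texttt{wce(y, x, K=2, M=int(len(y)**0.4))} then returns the
estimated cointegrating vector; any bandwidth choice should of course be
checked against Assumption C.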
\ \newline
\textbf{Remark }It is also possible to use Lemma 1 to derive a robust
estimate for the memory parameter of an observed Gaussian-subordinated
series $w_{t}:=g(x_{t})$ (with $d_{w}:=k_{0}(d_{x}-\frac{1}{2})+\frac{1}{2}$%
, say). The idea is very similar to the averaged periodogram estimate
advocated in \cite{rob94}. More precisely, with an obvious notation we can
consider%
\begin{eqnarray*}
\widetilde{d}_{w} &:&=\frac{\log \left| \sum_{\tau =-M}^{M}k(\frac{\tau }{M}%
)c_{ww}(\tau )\right| }{2\log M}=d_{w}+\frac{\log B_{ww}}{2\log M}+o_{p}(1)%
\text{ ,} \\
&=&d_{w}+o_{p}(1)\text{ ,}
\end{eqnarray*}%
where we have used Lemma 1. This estimate converges at a mere logarithmic
rate and is not asymptotically centred around zero; it is, however,
consistent under much broader circumstances than usually allowed for in the
literature. See also \cite{dalla2004} for very general results on the
consistency of long memory estimates.

\section{Comments and conclusions}

We view this paper as a first step in a new research direction, and as such
we are well aware that it leaves several questions unresolved and open for
future research. A first issue relates to the choice of the Hermite rank $%
k_{0}$ and of $K$. As far as the former is concerned, we remark that for the
great majority of practical applications, $k_{0}$ can be taken a priori to
be 1 or 2. Under the assumption that $k_{0}=1$, the equality $d_{x}=d_{y}$
holds; this simple observation immediately suggests a naive test for $%
k_{0}=1$, which can be implemented by testing for the equality of the two
memory parameters. It should be noted, however, that when $x_{t}$ and $%
y_{t}$ are cointegrated the standard asymptotic results on multivariate
long memory estimation (for instance \cite{rob95lp}) do not hold.
Incidentally, we note that the nonlinear framework allows us to cover the
possibility of cointegration among time series with different integration
orders, a significant extension over the standard paradigm.

For $K$, we can take as an identifying assumption 
\begin{equation}
K:=\text{argmax}(k:k(2d_{x}-1)>(2d_{e}-1))\text{ ;} \label{idecon}
\end{equation}%
higher order terms can be thought of as included by definition in the
residuals, to make identification possible. Indeed, it is natural to view $%
g(\cdot )$ as a general nonlinear function and to envisage $K$ as growing
with $n;$ we expect, however, that only the projection coefficients $b_{k}$
with $k$ satisfying (\ref{idecon}) could be consistently estimated in this
broader framework. On the other hand, we note that it is also possible to
estimate $K^{\ast }$ consistently; we do not pursue this extension here.

\section*{Appendix: Proof of Lemma 1}

Throughout the proof, we write%
\begin{equation}
d_{a}=\left\{ 
\begin{array}{ll}
a\left( d_{x}-\frac{1}{2}\right) +\frac{1}{2} & \;\qquad \text{for}\quad
\;a(2d_{x}-1)>-1 \\ 
0 & \;\qquad \text{for}\quad \;a(2d_{x}-1)<-1%
\end{array}%
\right. . \label{eq:index}
\end{equation}%
The first part of the proof follows closely \cite{mar00}.
For (\\ref{lemma11}%\n), it is sufficient to show that \n\\begin{eqnarray*}\nVar\\left\\{ \\sum_{\\tau =-M+1}^{M-1}k\\left( \\frac{\\tau }{M}\\right) c_{ab}(\\tau\n)\\right\\} &=&\\mathbb{E}\\left\\{ \\sum_{\\tau =-M+1}^{M-1}k\\left( \\frac{\\tau }{M}%\n\\right) \\left[ c_{ab}(\\tau )-\\left( 1-\\frac{\\tau }{n}\\right) \\gamma\n_{ab}(\\tau )\\right] \\right\\} ^{2} \\\\\n&\\leq\n&C\\sum_{p=-M}^{M}\\sum_{q=-M}^{M}|Cov\\{c_{ab}(p),c_{ab}(q)%\n\\}|=o(M^{2d_{a}+2d_{b}})\n\\end{eqnarray*}%\nFrom \\cite{hannan70}, p.210 we have: \n\\begin{eqnarray}\n&&Cov\\{c_{ab}(p),c_{ab}(q)\\} \\notag \\\\\n&=&\\!\\!\\frac{1}{n}\\!\\sum_{r=-n+1}^{n-1}\\!\\!\\left( 1-\\frac{|r|}{n}\\!\\right)\n\\left\\{ \\gamma _{aa}(r)\\gamma _{bb}(r\\!+\\!q\\!-\\!p)+\\gamma\n_{ab}(r\\!+\\!q)\\gamma _{ba}(r\\!-\\!p)\\right\\} \\label{eq:ones} \\\\\n&&+\\frac{1}{n^{2}}\\sum_{r=-n+1}^{n-1}\\sum_{s=1-r}^{n-r}\\text{cum}%\n_{abab}\\left( s,s+p,s+r,s+r+q\\right) \\text{ ,} \\label{eq:two}\n\\end{eqnarray}%\nwhere%\n\\begin{equation*}\n\\text{cum}_{abab}\\left( s,s+p,s+r,s+r+q\\right) =\\text{cum}\\left\\{\nH_{a}(x_{s}),H_{b}(x_{s+p}),H_{a}(x_{s+r}),H_{b}(x_{s+r+q})\\right\\} \\text{ .}\n\\end{equation*}%\nLikewise, for (\\ref{lemma12}) we shall show that \n\\begin{eqnarray*}\nVar\\left\\{ \\sum_{\\tau =-M+1}^{M-1}k\\left( \\frac{\\tau }{M}\\right)\nc_{ae}(p)\\right\\} \\!\\!\\!\\! &=&\\!\\!\\!\\!\\mathbb{E}\\left\\{ \\sum_{\\tau\n=-M+1}^{M-1}k\\left( \\frac{\\tau }{M}\\right) \\left[ c_{ae}(\\tau )-\\left( 1-%\n\\frac{\\tau }{n}\\right) \\gamma _{ae}(\\tau )\\right] \\right\\} ^{2} \\\\\n\\!\\!\\!\\! &\\leq &\\!\\!\\!\\!C\\sum_{p=-M}^{M}\\sum_{q=-M}^{M}\\left|\nCov\\{c_{ae}(p),c_{ae}(q)\\}\\right| =o(M^{2d_{a}+2d_{e}})\n\\end{eqnarray*}%\nwhere \n\\begin{eqnarray}\n&&Cov\\left\\{ c_{ae}(p),c_{ae}(q)\\right\\} \\notag \\\\\n&=&\\!\\!\\frac{1}{n}\\!\\sum_{r=-n+1}^{n-1}\\!\\!\\left( 1-\\frac{|r|}{n}\\right)\n\\left\\{ \\gamma _{aa}(r)\\gamma _{ee}(r\\!+\\!q\\!-\\!p)+\\gamma\n_{ae}(r\\!+\\!q)\\gamma _{ea}(r\\!-\\!p)\\right\\} \\label{mix1} \\\\\n&&+\\frac{1}{n^{2}}\\sum_{r=-n+1}^{n-1}\\sum_{s=1-r}^{n-r}\\text{cum}%\n_{aeae}\\left( s,s+p,s+r,s+r+q\\right) \\text{ ,} \\label{mix2}\n\\end{eqnarray}%\nand%\n\\begin{eqnarray*}\n&&\\text{cum}_{aeae}\\left( s,s+p,s+r,s+r+q\\right) \\\\\n&=&\\text{cum}\\left\\{ H_{a}(x_{s}),e_{s+p},H_{a}(x_{s+r}),e_{s+r+q}\\right\\} \\\\\n&=&\\sum_{k=\\widetilde{k}_{0}}^{\\widetilde{K}}\\sum_{k^{\\prime }=\\widetilde{k}%\n_{0}}^{\\widetilde{K}}\\text{cum}\\left\\{ H_{a}(x_{s}),H_{k}(\\varepsilon\n_{s+p})H_{a}(x_{s+r}),H_{k^{\\prime }},(\\varepsilon _{s+r+q})\\right\\} \\text{ .%\n}\n\\end{eqnarray*}%\nThe argument for (\\ref{eq:ones}) and (\\ref{mix1}) is the same. For instance,\nfor (\\ref{eq:ones}) we have \n\\begin{eqnarray*}\n&&\\sum_{p=-M}^{M}\\sum_{q=-M}^{M}\\frac{1}{n}\\left| \\sum_{r=-n+1}^{n-1}\\left(\n1-\\frac{|r|}{n}\\right) \\{\\gamma _{aa}(r)\\gamma _{bb}(r+q-p)\\}\\right| \\\\\n&\\leq &C\\frac{M}{n}\\sum_{\\tau =-2M}^{2M}\\left( \\sum_{|r|\\leq\n2M}(|r|+1)^{2d_{a}-1}(|r+\\tau |+1)^{2d_{b}-1}+\\right. \\\\\n&&\n\\end{eqnarray*}%\n\\begin{eqnarray*}\n&&+\\left. \\sum_{|r|>2M}(|r|+1)^{2d_{a}-1}(|r+\\tau |+1)^{2d_{b}-1}\\right) \\\\\n&=&C\\frac{M}{n}\\left[ \\sum_{|r|\\leq 2M}\\left( (|r|+1)^{2d_{a}-1}\\sum_{\\tau\n=-2M}^{2M}(|r+\\tau |+1)^{2d_{b}-1}\\right) \\right. \\\\\n&&+\\left. \\sum_{\\tau =-2M}^{2M}\\left(\n\\sum_{2M<|r|-1;$ it is simple to check that for $\\widetilde{k}%\n_{0}(2d_{\\varepsilon }-1)\\leq -1$ the proof is analogous, indeed slightly\nsimpler. 
\ 

\subsubsection*{Part I: $a=1,$ $\widetilde{k}_{0}\geq 2$ or $a\geq 2,$ $%
\widetilde{k}_{0}=1$}

For $a=1,$ $\widetilde{k}_{0}=2$ we have%
\begin{eqnarray*}
&&\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{1}{n^{2}}\left|
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\text{cum}\left\{
x_{s},H_{2}(\varepsilon _{s+p}),x_{s+r},H_{2}(\varepsilon _{s+r+q})\right\}
\right| \\
&\leq &\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{C}{n^{2}}\left|
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\gamma _{x\varepsilon }(p)\gamma
_{x\varepsilon }(q)\gamma _{\varepsilon \varepsilon }(r+q-p)\right. \\
&&\left. +\gamma _{\varepsilon \varepsilon }(r+q-p)\gamma _{\varepsilon
x}(r-p)\gamma _{x\varepsilon }(r+q)\right| \\
&\leq &\frac{C}{n}\sum_{p=-M}^{M}(|p|+1)^{d_{x}+d_{\varepsilon
}-1}\sum_{q=-M}^{M}(|q|+1)^{d_{x}+d_{\varepsilon }-1}\left( \sum_{|r|\leq
3M}(|r+q-p|+1)^{2d_{\varepsilon }-1}\right. \\
&&+\left. \sum_{3M<|r|\leq n}(|r+q-p|+1)^{2d_{\varepsilon }-1}\right) \\
&&+\frac{C}{n}\!\!\sum_{p=-M}^{M}\!\sum_{q=-M}^{M}\!\!\left( \sum_{|r|\leq
3M}(|r\!+\!q\!-\!p|\!+\!1)^{2d_{\varepsilon
}-1}(|r\!-\!p|\!+\!1)^{d_{x}+d_{\varepsilon
}-1}(|r\!+\!q|+1)^{d_{x}+d_{\varepsilon }-1}\right. \\
&&+\left. \sum_{3M<|r|\leq n}(|r+q-p|+1)^{2d_{\varepsilon
}-1}(|r-p|+1)^{d_{x}+d_{\varepsilon }-1}(|r+q|+1)^{d_{x}+d_{\varepsilon
}-1}\right) \\
&=&O(n^{-1}M^{d_{x}+d_{\varepsilon }}M^{d_{x}+d_{\varepsilon
}}M^{2d_{\varepsilon }})+O(n^{-1}M^{2d_{x}+2d_{\varepsilon
}}n^{2d_{\varepsilon }}) \\
&&+O(n^{-1}MM^{d_{x}+d_{\varepsilon }}M^{2d_{\varepsilon
}})+O(n^{-1}M^{2}n^{2d_{x}+4d_{\varepsilon }-3}) \\
&=&O\left( \frac{M}{n}M^{2d_{x}+4d_{\varepsilon }-1}\right)
+o(M^{4d_{\varepsilon }+2d_{x}-1})+O\left( \frac{M^{2}}{n}M^{d_{x}+3d_{%
\varepsilon }-1}\right) +o(M^{4d_{\varepsilon }+2d_{x}-1}) \\
&=&o(M^{2d_{x}+4d_{\varepsilon }-1})=o(M^{2d_{x}+2d_{e}})\quad \text{%
because }2d_{e}=4d_{\varepsilon }-1\text{ .}
\end{eqnarray*}%
The extension to $\widetilde{k}_{0}>2$ is trivial:%
\begin{eqnarray*}
&&\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{1}{n^{2}}\left|
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\text{cum}\left\{ x_{s},H_{\widetilde{k}%
_{0}}(\varepsilon _{s+p}),x_{s+r},H_{\widetilde{k}_{0}}(\varepsilon
_{s+r+q})\right\} \right| \\
&\leq &\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{C}{n^{2}}\Bigg|%
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\gamma _{x\varepsilon }(p)\gamma
_{x\varepsilon }(q)\gamma _{\varepsilon \varepsilon }^{\widetilde{k}%
_{0}-1}(r+q-p) \\
&&+\gamma _{\varepsilon \varepsilon }^{\widetilde{k}_{0}-1}(r+q-p)\gamma
_{\varepsilon x}(r-p)\gamma _{x\varepsilon }(r+q)\Bigg| \\
&=&O(n^{-1}M^{2d_{x}+2d_{\varepsilon }}M^{(\widetilde{k}_{0}-1)(2d_{%
\varepsilon }-1)+1})+O(n^{-1}M^{2d_{x}+2d_{\varepsilon }}n^{(\widetilde{k}%
_{0}-1)(2d_{\varepsilon }-1)+1}) \\
&&+O(n^{-1}M^{d_{x}+d_{\varepsilon }+1}M^{(\widetilde{k}_{0}-1)(2d_{%
\varepsilon }-1)+1})+O(n^{-1}M^{2}n^{2d_{x}+2d_{\varepsilon }-2+(\widetilde{k%
}_{0}-1)(2d_{\varepsilon }-1)+1}) \\
&=&O\left( \frac{M}{n}M^{2d_{x}+2d_{\varepsilon }-1}M^{(\widetilde{k}%
_{0}-1)(2d_{\varepsilon }-1)+1}\right) +o(M^{2d_{x}+\widetilde{k}%
_{0}(2d_{\varepsilon }-1)+1}) \\
&&+O\left( \frac{M^{2}}{n}M^{d_{x}+d_{\varepsilon }-1}M^{(\widetilde{k}%
_{0}-1)(2d_{\varepsilon }-1)+1}\right) +O(n^{-1}M^{2}n^{2d_{x}+\widetilde{k}%
_{0}(2d_{\varepsilon }-1)}) \\
&=&o(M^{2d_{x}+2d_{e}}),
\end{eqnarray*}%
by the same argument as before.
The proof for $a\geq 2,$ $\widetilde{k}_{0}=1$ is entirely analogous and
hence omitted.

\subsubsection*{Part II: $a=2,$ $\widetilde{k}_{0}\geq 2$ or $a\geq 2,$ $%
\widetilde{k}_{0}=2$}

For $a=2,$ $\widetilde{k}_{0}=2$ we have%
\begin{eqnarray*}
&&\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{1}{n^{2}}\left|
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\text{cum}\{H_{2}(x_{s}),H_{2}(%
\varepsilon _{s+p}),H_{2}(x_{s+r}),H_{2}(\varepsilon _{s+r+q})\}\right| \\
&\leq &\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{C}{n^{2}}\Bigg|%
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\gamma _{x\varepsilon }(p)\gamma
_{x\varepsilon }(q)\gamma _{xx}(r)\gamma _{\varepsilon \varepsilon }(r+q-p)
\\
&&+\gamma _{x\varepsilon }(p)\gamma _{x\varepsilon }(r+q)\gamma
_{\varepsilon x}(r-p)\gamma _{x\varepsilon }(q) \\
&&+\gamma _{xx}(r)\gamma _{\varepsilon \varepsilon }(r+q-p)\gamma
_{\varepsilon x}(r-p)\gamma _{x\varepsilon }(r+q)\Bigg| \\
&\leq &\!\!\!\frac{C}{n}\!\left\{
\sum_{p=-M}^{M}\sum_{q=-M}^{M}\sum_{|r|\leq 3M}\!\left[ (|p|\!+%
\!1)^{d_{x}+d_{\varepsilon }-1}(|q|\!+\!1)^{d_{x}+d_{\varepsilon
}-1}(|r|\!+\!1)^{2d_{x}-1}(|r\!+\!q\!-\!p|\!+\!1)^{2d_{\varepsilon
}-1}\right. \right. \\
&&+(|p|\!+\!1)^{d_{x}+d_{\varepsilon }-1}(|q|\!+\!1)^{d_{x}+d_{\varepsilon
}-1}(|r\!+\!q|\!+\!1)^{d_{x}+d_{\varepsilon
}-1}(|r-p|+1)^{d_{x}+d_{\varepsilon }-1} \\
&&+\left. \left.
(|r|\!+\!1)^{2d_{x}-1}(|r\!+\!q\!-\!p|\!+\!1)^{2d_{\varepsilon
}-1}(|r\!-\!p|\!+\!1)^{d_{x}+d_{\varepsilon
}-1}(|r\!+\!q|\!+\!1)^{d_{x}+d_{\varepsilon }-1}\right] \right\} \\
&&+\frac{C}{n}\!\left\{ \sum_{p=-M}^{M}\sum_{q=-M}^{M}\sum_{3M<|r|\leq
n}\left[ \;\cdots \;\right] \right\} =o(M^{2d_{2}+2d_{e}})\text{ ,}
\end{eqnarray*}%
where the second pair of braces collects the same three summands over the
range $3M<|r|\leq n$, and both terms are bounded exactly as in Part I. For $%
\widetilde{k}_{0}>2$ the argument is very much the same:%
\begin{eqnarray*}
&&\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{1}{n^{2}}\left|
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\text{cum}\{H_{2}(x_{s}),H_{\widetilde{k%
}_{0}}(\varepsilon _{s+p}),H_{2}(x_{s+r}),H_{\widetilde{k}_{0}}(\varepsilon
_{s+r+q})\}\right| \\
&\leq &\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{C}{n^{2}}\Bigg|%
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\gamma _{x\varepsilon }(p)\gamma
_{x\varepsilon }(q)\gamma _{xx}(r)\gamma _{\varepsilon \varepsilon }^{%
\widetilde{k}_{0}-1}(r+q-p) \\
&&+\gamma _{x\varepsilon }(p)\gamma _{x\varepsilon }(q)\gamma _{x\varepsilon
}(r+q)\gamma _{\varepsilon x}(r-p)\gamma _{\varepsilon \varepsilon }^{%
\widetilde{k}_{0}-2}(r+q-p) \\
&&+\gamma _{xx}(r)\gamma _{\varepsilon x}(r-p)\gamma _{x\varepsilon
}(r+q)\gamma _{\varepsilon \varepsilon }^{\widetilde{k}_{0}-1}(r+q-p)\Bigg|
\\
&=&O(n^{-1}M^{2d_{x}+2d_{\varepsilon }}M^{(\widetilde{k}_{0}-1)(2d_{%
\varepsilon }-1)+1})+O(n^{-1}M^{2d_{x}+2d_{\varepsilon }}M^{(\widetilde{k}%
_{0}-2)(2d_{\varepsilon }-1)+1}) \\
&&+O(n^{-1}M^{2d_{x}}M^{d_{x}+d_{\varepsilon }}M^{(\widetilde{k}%
_{0}-1)(2d_{\varepsilon }-1)+1})+O(n^{-1}M^{2d_{x}+2d_{\varepsilon
}}n^{2d_{x}-1+(\widetilde{k}_{0}-1)(2d_{\varepsilon }-1)+1}) \\
&&+O(n^{-1}M^{2d_{x}+2d_{\varepsilon }}n^{2d_{x}+2d_{\varepsilon }-1+(%
\widetilde{k}_{0}-2)(2d_{\varepsilon
}-1)})+O(M^{2}n^{-1}n^{4d_{x}+2d_{\varepsilon }-2+(\widetilde{k}%
_{0}-1)(2d_{\varepsilon }-1)}) \\
&=&O\left( \frac{M^{2}}{n}M^{2d_{x}-1}M^{\widetilde{k}_{0}(2d_{\varepsilon
}-1)+1}\right) +o\left( \frac{M^{2}}{n}M^{2d_{x}+2d_{\varepsilon }-1+%
\widetilde{k}_{0}(2d_{\varepsilon }-1)+1}\right) \\
&&+O\left( \frac{M^{2}}{n}M^{3d_{x}-1-d_{\varepsilon }}M^{\widetilde{k}%
_{0}(2d_{\varepsilon }-1)+1}\right) +O\left( \frac{M^{2}}{n}M^{4d_{x}-1}M^{%
\widetilde{k}_{0}(2d_{\varepsilon }-1)+1}\right) \\
&=&o(M^{2d_{2}+2d_{e}})\text{ , because }2d_{e}=\widetilde{k}%
_{0}(2d_{\varepsilon }-1)+1\text{ .}
\end{eqnarray*}

\subsubsection*{Part III: $a\geq 3,$ $\widetilde{k}_{0}\geq 3$}

We note that, by the diagram formula (as in (\ref{cumbound}))%
\begin{eqnarray*}
&&\left| \text{cum}\left[ H_{a}(x_{s}),H_{\widetilde{k}_{0}}(\varepsilon
_{s+p}),H_{a}(x_{s+r}),H_{\widetilde{k}_{0}}(\varepsilon _{s+r+q})\right]
\right| \\
&\leq &C\left| \text{cum}\left[ H_{3}(x_{s}),H_{3}(\varepsilon
_{s+p}),H_{3}(x_{s+r}),H_{3}(\varepsilon _{s+r+q})\right] \right| \text{ .}
\end{eqnarray*}%
It suffices then to focus on $a=\widetilde{k}_{0}=3.$ There are seven
different kinds of connected diagrams, which are represented in Figures 1 to
7. We have 
\begin{eqnarray*}
&&\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{1}{n^{2}}\left|
\sum_{r=-n+1}^{n-1}\sum_{s=1-r}^{n-r}\text{cum}\{H_{3}(x_{s}),H_{3}(%
\varepsilon _{s+p}),H_{3}(x_{s+r}),H_{3}(\varepsilon _{s+r+q})\}\right| \\
&\leq &\sum_{p=-M}^{M}\sum_{q=-M}^{M}\frac{C}{n^{2}}\Bigg|\sum_{r=-n+1}^{n-1}%
\sum_{s=1-r}^{n-r}\gamma _{x\varepsilon }^{2}(p)\gamma _{x\varepsilon
}^{2}(q)\gamma _{\varepsilon x}(r-p)\gamma _{x\varepsilon }(r+q) \\
&&+\gamma _{x\varepsilon }(p)\gamma _{xx}(r)\gamma _{x\varepsilon
}(r+q)\gamma _{\varepsilon \varepsilon }(r+q-p)\gamma _{\varepsilon
x}(r-p)\gamma _{x\varepsilon }(q) \\
&&+\gamma _{xx}^{2}(r)\gamma _{\varepsilon \varepsilon }^{2}(r+q-p)\gamma
_{\varepsilon x}(r-p)\gamma _{x\varepsilon }(r+q) \\
&&+\gamma _{x\varepsilon }^{2}(p)\gamma _{x\varepsilon }^{2}(q)\gamma
_{xx}(r)\gamma _{\varepsilon \varepsilon }(r+q-p) \\
&&+\gamma _{xx}^{2}(r)\gamma _{\varepsilon \varepsilon }^{2}(r+q-p)\gamma
_{x\varepsilon }(p)\gamma _{x\varepsilon }(q) \\
&&+\gamma _{x\varepsilon }^{2}(r-p)\gamma _{x\varepsilon }^{2}(r+q)\gamma
_{x\varepsilon }(p)\gamma _{x\varepsilon }(q) \\
&&+\gamma _{x\varepsilon }^{2}(r-p)\gamma _{x\varepsilon }^{2}(r+q)\gamma
_{xx}(r)\gamma _{\varepsilon \varepsilon }(r+q-p)\Bigg|
\end{eqnarray*}%
\begin{eqnarray}
&\leq &\frac{C}{n}\sum_{p=-M}^{M}(|p|+1)^{2(d_{x}+d_{\varepsilon
}-1)}\sum_{q=-M}^{M}(|q|+1)^{2(d_{x}+d_{\varepsilon }-1)} \notag \\
&\times &\left[ \sum_{|r|\leq 2M}(|r-p|+1)^{d_{x}+d_{\varepsilon
}-1}(|r+q|+1)^{d_{x}+d_{\varepsilon }-1}\right. \notag \\
&+&\left. \sum_{2M<|r|\leq n}(|r-p|+1)^{d_{x}+d_{\varepsilon
}-1}(|r+q|+1)^{d_{x}+d_{\varepsilon }-1}\right] \label{one}
\end{eqnarray}%
\begin{eqnarray}
&+&\frac{C}{n}\sum_{p=-M}^{M}(|p|+1)^{d_{x}+d_{\varepsilon
}-1}\sum_{q=-M}^{M}(|q|+1)^{d_{x}+d_{\varepsilon }-1} \notag \\
&&\times \left[ \sum_{|r|\leq
3M}(|r|+1)^{2d_{x}-1}(|r+q-p|+1)^{2d_{\varepsilon
}-1}(|r-p|+1)^{d_{x}+d_{\varepsilon }-1}(|r+q|+1)^{d_{x}+d_{\varepsilon
}-1}\right. \notag \\
&+&\left.
\\!\\!\\!\\sum_{3M<|r|\\leq\nn}(|r|\\!+\\!1)^{2d_{x}-1}(|r\\!+\\!q\\!-\\!p|\\!+\\!1)^{2d_{\\varepsilon\n}-1}(|r\\!-\\!p|\\!+\\!1)^{d_{x}+d_{\\varepsilon\n}-1}(|r\\!+\\!q|\\!+\\!1)^{d_{x}+d_{\\varepsilon }-1}\\right] \\label{two}\n\\end{eqnarray}\n\\begin{eqnarray}\n+ &&\\frac{C}{n}\\left[ \\left( \\sum_{|r|\\leq\n3M}(|r|+1)^{2(2d_{x}-1)}\\sum_{p=-M}^{M}\\sum_{q=-M}^{M}(|r+q-p|+1)^{2(2d_{%\n\\varepsilon }-1)}\\right. \\right. \\notag \\\\\n&&\\times \\!\\left. (|r\\!-\\!p|\\!+\\!1)^{d_{x}+d_{\\varepsilon\n}-1}(|r\\!+\\!q|\\!+\\!1)^{d_{x}+d_{\\varepsilon }-1}\\right)\n+\\sum_{p=-M}^{M}\\sum_{q=-M}^{M}\\left( \\sum_{3M<|r|\\leq\nn}(|r|\\!+\\!1)^{2(2d_{x}-1)}\\right. \\notag \\\\\n&&\\times \\left. \\left. |r+q-p|+1)^{2(2d_{\\varepsilon\n}-1)}(|r-p|+1)^{d_{x}+d_{\\varepsilon }-1}(|r+q|+1)^{d_{x}+d_{\\varepsilon\n}-1}\\right) \\right] \\label{three}\n\\end{eqnarray}%\n\\begin{eqnarray}\n+ &&\\frac{C}{n}\\sum_{p=-M}^{M}(|p|+1)^{2(d_{x}+d_{\\varepsilon\n}-1)}\\sum_{q=-M}^{M}(|q|+1)^{2(d_{x}+d_{\\varepsilon }-1)}\\left[ \\left(\n\\sum_{|r|\\leq 3M}(|r|+1)^{2d_{x}-1}\\right. \\right. \\notag \\\\\n&&\\times \\left. (|r\\!+\\!q\\!-\\!p|\\!+\\!1)^{2d_{x}-1}\\right) +\\!\\left.\n\\!\\sum_{3M<|r|\\leq\nn}(|r|\\!+\\!1)^{2d_{x}-1}(|r\\!+\\!q\\!-\\!p|\\!+\\!1)^{2d_{x}-1} \\right]\n\\label{four}\n\\end{eqnarray}%\n\\begin{eqnarray}\n&+&\\frac{C}{n}\\sum_{p=-M}^{M}(|p|+1)^{d_{x}+d_{\\varepsilon\n}-1}\\sum_{q=-M}^{M}(|q|+1)^{d_{x}+d_{\\varepsilon }-1}\\left[ \\left(\n\\sum_{|r|\\leq 3M}(|r|+1)^{2(2d_{x}-1)}\\phantom{cccccccc}\\right. \\right. \n\\notag \\\\\n&&\\times \\left. (|r\\!+\\!q\\!-\\!p|\\!+\\!1)^{2(2d_{\\varepsilon }-1)}\\right)\n+\\left. \\sum_{3M<|r|\\leq\nn}(|r|\\!+\\!1)^{2(2d_{x}-1)}(|r\\!+\\!q\\!-\\!p|+1)^{2(2d_{\\varepsilon }-1)} \n\\right] \\label{five}\n\\end{eqnarray}%\n\\begin{eqnarray}\n+ &&\\frac{C}{n}\\sum_{p=-M}^{M}(|p|+1)^{d_{x}+d_{\\varepsilon\n}-1}\\sum_{q=-M}^{M}(|q|+1)^{d_{x}+d_{\\varepsilon }-1}\\left[ \\left(\n\\sum_{|r|\\leq 3M}(|r+q|+1)^{2(d_{x}+d_{\\varepsilon }-1)}\\phantom{ccc}\\right.\n\\right. \\notag \\\\\n&&\\times \\left. \\left. (|r\\!-\\!p|\\!+\\!1)^{2(d_{x}+d_{\\varepsilon\n}-1)}\\right) +\\!\\!\\!\\sum_{3M<|r|\\leq\nn}(|r\\!+\\!q|\\!+\\!1)^{2(d_{x}+d_{\\varepsilon\n}-1)}(|r\\!-\\!p|\\!+\\!1)^{2(d_{x}+d_{\\varepsilon }-1)}\\right] \\label{six}\n\\end{eqnarray}%\n\\begin{eqnarray}\n&+&\\frac{C}{n}\\sum_{|r|\\leq\n3M}(|r|+1)^{2d_{x}-1}\\sum_{p=-M}^{M}\\sum_{q=-M}^{M}(|r+p-q|+1)^{2d_{%\n\\varepsilon }-1} \\notag \\\\\n&&\\times (|r-p|+1)^{2(d_{x}+d_{\\varepsilon\n}-1)}(|r+q|+1)^{2(d_{x}+d_{\\varepsilon }-1)}+\\sum_{3M<|r|\\leq\nn}(|r|+1)^{2d_{x}-1}\\phantom{ccccccc} \\notag \\\\\n&&\\times\n\\sum_{p=-M}^{M}\\sum_{q=-M}^{M}(|r\\!+\\!p\\!-\\!q|\\!+\\!1)^{2d_{\\varepsilon\n}-1}(|r\\!-\\!p|\\!+\\!1)^{2(d_{x}+d_{\\varepsilon\n}-1)}(|r\\!+\\!q|\\!+\\!1)^{2(d_{x}+d_{\\varepsilon }-1)}. 
\label{seven}
\end{eqnarray}%
After lengthy but straightforward computations, it is not difficult to see
that 
\begin{eqnarray*}
(\ref{one}) &=&O(n^{-1}M^{2d_{x}+2d_{\varepsilon
}-1}M^{2d_{x}+2d_{\varepsilon }-1}M^{d_{x}+d_{\varepsilon
}})+O(n^{-1}M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon
}-1}) \\
&=&o\left( \frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}\right)
+O(M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{
,} \\
(\ref{two}) &=&O(n^{-1}M^{d_{x}+d_{\varepsilon }}M^{d_{x}+d_{\varepsilon
}}M^{2d_{x}+2d_{\varepsilon }-1})+O(n^{-1}M^{d_{x}+d_{\varepsilon
}}M^{d_{x}+d_{\varepsilon }}n^{4d_{x}+4d_{\varepsilon }-3}) \\
&=&O\left( \frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}\right)
+o(M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{
,} \\
(\ref{three}) &=&O(n^{-1}M^{4d_{x}-1}M^{4d_{\varepsilon
}})+O(n^{-1}M^{2}n^{6d_{x}+6d_{\varepsilon }-5}) \\
&=&O\left( \frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}\right)
+o(M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{
,} \\
(\ref{four}) &=&O(n^{-1}M^{2d_{x}+2d_{\varepsilon
}-1}M^{2d_{x}+2d_{\varepsilon
}-1}M^{2d_{x}})+O(n^{-1}M^{4d_{x}-1}M^{4d_{\varepsilon
}-1}n^{2d_{x}+2d_{\varepsilon }-1}) \\
&=&o\left( \frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}\right)
+O(M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{
,} \\
(\ref{five}) &=&O(n^{-1}M^{d_{x}+d_{\varepsilon }}M^{d_{x}+d_{\varepsilon
}}M^{4d_{\varepsilon
}-1})+O(n^{-1}M^{2d_{x}}M^{2d_{x}}n^{4d_{x}+4d_{\varepsilon }-3}) \\
&=&O\left( \frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}\right)
+o(M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{
,} \\
(\ref{six}) &=&O(n^{-1}M^{d_{x}+d_{\varepsilon }}M^{d_{x}+d_{\varepsilon
}}M^{2d_{x}+2d_{\varepsilon }-1})+O(n^{-1}M^{2d_{x}+2d_{\varepsilon
}}n^{4d_{x}+4d_{\varepsilon }-3}) \\
&=&O\left( \frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}\right)
+o(M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{
,} \\
(\ref{seven}) &=&O(n^{-1}M^{2d_{x}}M^{2d_{x}+2d_{\varepsilon
}})+O(n^{-1}M^{2}n^{6d_{x}+6d_{\varepsilon }-5}) \\
&=&o\left( \frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}\right)
+o(M^{4d_{x}-1}M^{4d_{\varepsilon }-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{
,}
\end{eqnarray*}%
so that, collecting terms,
\begin{equation*}
(\ref{one})+\dots +(\ref{seven})=O\left( \frac{M^{4d_{x}+4d_{\varepsilon
}-1}}{n}\right) +o(M^{4d_{x}-1}M^{4d_{\varepsilon
}-1}n^{2d_{x}+2d_{\varepsilon }-2})\text{ .}
\end{equation*}
In view of the previous results, our proof will be complete if we show that 
\begin{equation*}
\frac{M^{4d_{x}+4d_{\varepsilon }-1}}{n}+M^{4d_{x}-1}M^{4d_{\varepsilon
}-1}n^{2d_{x}+2d_{\varepsilon }-2}=o(M^{2d_{a}+2d_{e}})\text{ ,}
\end{equation*}%
where 
\begin{equation*}
2d_{a}+2d_{e}=2ad_{x}+2\widetilde{k}_{0}d_{\varepsilon }-(a+\widetilde{k}%
_{0})+2\text{ .}
\end{equation*}%
We note first that%
\begin{equation*}
\frac{M^{4d_{x}+4d_{\varepsilon }-1}}{nM^{2d_{a}+2d_{e}}}=\frac{%
M^{4d_{x}+4d_{\varepsilon }-1}}{n(M^{2ad_{x}+2\widetilde{k}%
_{0}d_{\varepsilon }-(a+\widetilde{k}_{0})+2})}=\frac{M^{2(2-a)d_{x}+2(2-%
\widetilde{k}_{0})d_{\varepsilon }-3+a+\widetilde{k}_{0}}}{n}\text{ .}
\end{equation*}%
From (\ref{eq:index}) and $d_{e}>0$ it follows that 
\begin{equation*}
d_{x}>\frac{1}{2}-\frac{1}{2a}\quad \text{and}\quad d_{\varepsilon }>\frac{1%
}{2}-\frac{1}{2\widetilde{k}_{0}}\text{ ,}
\end{equation*}%
whence, because
$a,\\widetilde{k}_{0}\\geq 2$ we have \n\\begin{eqnarray*}\n\\frac{M^{2(2-a)d_{x}+2(2-\\widetilde{k}_{0})d_{\\varepsilon }-3+a+\\widetilde{k}%\n_{0}}}{n} &\\leq &\\frac{M^{2(2-a)(\\frac{1}{2}-\\frac{1}{2a})+2(2-\\widetilde{k}%\n_{0})(\\frac{1}{2}-\\frac{1}{2\\widetilde{k}_{0}})-3+a+\\widetilde{k}_{0}}}{n} \\\\\n&=&o\\left( \\frac{M^{3}}{n}\\right) =o(1)\\text{ ,}\n\\end{eqnarray*}%\nin view of Assumption C. To complete the proof, note that, again from\nAssumption C, for some $\\alpha >a-2,\\widetilde{k}_{0}-2$ we have $M^{\\alpha\n}=O(n),$ whence%\n\\begin{equation*}\n\\frac{M^{4d_{x}+4d_{\\varepsilon }-2}n^{2d_{x}+2d_{\\varepsilon }-2}}{%\nM^{2ad_{x}+2\\widetilde{k}_{0}d_{\\varepsilon }-(a+\\widetilde{k}_{0})+2}}%\n=o(M^{(2-a+\\alpha )(2d_{x}-1)+(2-\\widetilde{k}_{0}+\\alpha )(2d_{\\varepsilon\n}-1)})=o(1)\\qquad \\text{as}\\quad n\\rightarrow \\infty \\text{ .}\n\\end{equation*}%\nThus the proof is completed. \\hfill $\\square$ \\newpage \n\\begin{figure}[!h]\n\\centering \\includegraphics{grafo1} \\label{fig:grafo1}\n\\caption{$\\protect\\gamma _{x\\protect\\varepsilon }^{2}(p)\\protect\\gamma _{x%\n\\protect\\varepsilon }^{2}(q)\\protect\\gamma _{x\\protect\\varepsilon }(r+q)%\n\\protect\\gamma _{\\protect\\varepsilon x}(r-p)$}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering \\includegraphics{grafo2} \\label{fig:grafo2}\n\\caption{$\\protect\\gamma _{x\\protect\\varepsilon }(r+q)\\protect\\gamma _{%\n\\protect\\varepsilon x}(r-p)\\protect\\gamma _{xx}^{2}(r)\\protect\\gamma _{%\n\\protect\\varepsilon \\protect\\varepsilon }^{2}(r+q-p)$}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering \\includegraphics{grafo3} \\label{fig:grafo3}\n\\caption{$\\protect\\gamma _{x\\protect\\varepsilon }(p)\\protect\\gamma _{x%\n\\protect\\varepsilon }(q)\\protect\\gamma _{x\\protect\\varepsilon }^{2}(r+q)%\n\\protect\\gamma _{\\protect\\varepsilon x}^{2}(r-p)\\protect\\gamma _{xx}(r)%\n\\protect\\gamma _{\\protect\\varepsilon \\protect\\varepsilon }(r+q-p)$}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering \\includegraphics{grafo4} \\label{fig:grafo4}\n\\caption{$\\protect\\gamma _{x\\protect\\varepsilon }^{2}(p)\\protect\\gamma _{x%\n\\protect\\varepsilon }^{2}(q)\\protect\\gamma _{xx}(r)\\protect\\gamma _{\\protect%\n\\varepsilon \\protect\\varepsilon }(r+q-p)$}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering \\includegraphics{grafo5} \\label{fig:grafo5}\n\\caption{$\\protect\\gamma _{x\\protect\\varepsilon }^{2}(p)\\protect\\gamma _{x%\n\\protect\\varepsilon }^{2}(q)\\protect\\gamma _{xx}^{2}(r)\\protect\\gamma _{%\n\\protect\\varepsilon \\protect\\varepsilon }^{2}(r+q-p)$}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering \\includegraphics{grafo6} \\label{fig:grafo6}\n\\caption{$\\protect\\gamma _{x\\protect\\varepsilon }(p)\\protect\\gamma _{x%\n\\protect\\varepsilon }(q)\\protect\\gamma _{x\\protect\\varepsilon }^{2}(r+q)%\n\\protect\\gamma _{\\protect\\varepsilon x}^{2}(r-p)$}\n\\end{figure}\n\\begin{figure}[!h]\n\\centering \\includegraphics{grafo7} \\label{fig:grafo7}\n\\caption{$\\protect\\gamma _{xx}(r)\\protect\\gamma _{\\protect\\varepsilon \n\\protect\\varepsilon }(r+q-p)\\protect\\gamma _{x\\protect\\varepsilon }^{2}(r+q)%\n\\protect\\gamma _{\\protect\\varepsilon x}^{2}(r-p)$}\n\\end{figure}\n\\bibliographystyle{econometrica}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}