{"text":"\\section{Introduction}\n\nRobot exploration problem is defined as making a robot or multi-robots explore unknown cluttered environments (i.e., office environment, forests, ruins, etc.) autonomously with specific goals. \nThe goal can be classified as: (1) Maximizing the knowledge of the unknown environments, i.e., acquiring a map of the world \\cite{thrun2002probabilistic}, \\cite{28_background_info}. (2) Searching a static or moving object without prior information about the world. \nWhile solving the exploration problems with the second goal can combine the prior information of the target object, such as the semantic information, it also needs to fulfill the first goal. \n\nThe frontier-based methods \\cite{21_yamauchi1997frontier}, \\cite{22_gonzalez2002navigation} have been widely used to solve the robot exploration problem. \n\\cite{21_yamauchi1997frontier} adopts a greedy strategy, which may lead to an inefficient overall path.\nMany approaches have been proposed to consider more performance metrics (i.e. information gain, etc.) \\cite{22_gonzalez2002navigation}, \\cite{23_bourgault2002information}, \\cite{33_basilico2011exploration}. \nHowever, these approaches are designed and evaluated in a limited number of environments. Therefore, they may fail to generalize to other environments whose layouts are different.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.2]{.\/imgs\/4D_pointclouds.png}\n\t\\caption{The example of 4D point-clouds-like information. (a) shows a map example in $HouseExpo$, where the black, gray, and white areas denote unknown space, free space, and obstacle space, respectively. (b) shows the global map in our algorithm, where the map is obtained by homogeneous transformations. The map's center is the robot start location, and the map's x-coordinate is the same as the robot start orientation. (c) shows the 4D point-clouds-like information generated based on the global map. The x, y, z coordinate denote x location, y location and distance information, respectively. The color denotes frontier information, where red denotes obstacles and blue denotes frontiers.}\n\t\\label{fig:label_pointcloud}\n\\end{figure}\n\nCompared with these classical methods, machine learning-based methods exhibit the advantages of learning from various data.\nDeep Reinforcement Learning (DRL), which uses a neural network to approximate the agent's policy during the interactions with the environments \\cite{sutton2018reinforcement}, gains more and more attention in the application of games \\cite{DQN2015}, robotics \\cite{rl_env_robo}, etc.\nWhen applied to robot exploration problems, most research works \\cite{MaxM2018}, \\cite{31_chen2019self} {design the state space as the form of image} and use Convolution Neural Networks (CNN).\nFor example, in \\cite{MaxM2018}, CNN is utilized as a mapping from the local observation of the robot to the next optimal action.\n\nHowever, the size of CNN's input images is fixed, which results in the following limitations: \n(1) If input images represent the global information, the size of the overall explored map needs to be pre-defined to prevent input images from failing to fully containing all the map information. \n(2) If input images represent the local information, which fails to convey all the state information, the recurrent neural networks (RNN) or memory networks need to be adopted. 
\nUnfortunately, the robot exploration problem in this formulation requires relatively long-term planning, which remains a difficult problem that has not been fully solved.\n\nIn this paper, to deal with the aforementioned problems, we present a novel state representation method, which relies on 4D point-clouds-like information of variable size. \nThis information has the same data structure as point clouds and consists of the 2D location of each point together with the corresponding 1D frontier and 1D distance information, as shown in Fig.\\ref{fig:label_pointcloud}.\nWe also design the corresponding training framework, which is based on the deep Q-Learning method with a variable action space.\nBy replacing the image observation with 4D point-clouds-like information, our proposed exploration model can deal with unknown maps of arbitrary size.\nBased on dynamic graph CNN (DGCNN) \\cite{wang2019dynamic}, which is a typical neural network structure for processing point clouds, our proposed neural network takes 4D point-clouds-like information as input and outputs the expected value of each frontier, which can guide the robot to the frontier with the highest value.\nThis neural network is trained in a way similar to DQN in the $HouseExpo$ environment \\cite{li2019houseexpo}, which is a fast exploration simulation platform that includes data of many 2D indoor layouts. The experiments show that our exploration model achieves relatively good performance compared with the baseline in \\cite{li2019houseexpo}, the state-of-the-art method in \\cite{Weight2019}, the classical methods in \\cite{21_yamauchi1997frontier}, \\cite{22_gonzalez2002navigation}, and a random method.\n\n\\subsection{Original Contributions}\nThe contributions of this paper are threefold. \nFirst, we propose a novel state representation method using 4D point-clouds-like information to solve the problems discussed in Section \\ref{sec:relatedwork}.\nAlthough point clouds have been utilized in motion planning and navigation (\\cite{27_2Dpointclouds}, \\cite{30_ObstacleResp}), our work differs from these two papers in two main aspects: \n(1) We use point clouds to represent the global information, while they use them to represent the local observation.\n(2) Our action is to select a frontier point from a frontier set of variable size, while their action spaces contain control commands.\nSecond, we design the corresponding state-action value network based on DGCNN \\cite{wang2019dynamic}, and the training framework based on DQN \\cite{DQN2015}. \nThe novelty is that our action space's size is variable, which helps our neural network converge faster.\nThird, we demonstrate the performance of the proposed method on a wide variety of environments, which the model has not seen before and which include maps much larger than those in the training set. \n\nThe remainder of this paper is organized as follows. \nWe first introduce the related work in Section \\ref{sec:relatedwork}.\nThen we formulate the frontier-based robot exploration problem and the DRL exploration problem in Section \\ref{sec:formulation}. \nAfter that, the framework of our proposed method is detailed in Section \\ref{sec:alg}.\nIn Section \\ref{sec:exp}, we demonstrate the performance of our proposed method through a series of simulation experiments. 
\nAt last, we conclude the work of this paper and discuss directions for future work in section \\ref{sec:conclude}.\n\n\\section{Related Work}\n\\label{sec:relatedwork}\nIn \\cite{21_yamauchi1997frontier}, the classical frontier method is defined, where an occupancy map is utilized in which each cell is placed into one of three classes: open, unknown and occupied.\nThen frontiers are defined as the boundaries between open areas and unknown areas. \nThe robot can constantly gain new information about the world by moving to successive frontiers, while the problem of selecting which frontiers at a specific stage remains to be solved. \nTherefore, in a frontier-based setting, solving the exploration problem is equivalent to finding an efficient exploration strategy that can determine the optimal frontier for the robot to explore. \nA greedy exploration strategy is utilized in \\cite{21_yamauchi1997frontier} to select the nearest unvisited, accessible frontiers. \nThe experiment results in that paper show that the greedy strategy is short-sighted and can waste lots of time, especially when missing a nearby frontier that will disappear at once if selected (this case is illustrated in the experiment part).\n\nMany DRL techniques have been applied into the robot exploration problem in several previous works.\nIn a typical DRL framework \\cite{sutton2018reinforcement}, the agent interacts with the environment by taking actions and receiving rewards from the environment. \nThrough this trial-and-error manner, the agent can learn an optimal policy eventually.\nIn \\cite{MLiu2016}, a CNN network is trained under the DQN framework with RGB-D sensor images as input to make the robot learn obstacle avoidance ability during exploration. \nAlthough avoiding obstacles is important, this paper does not apply DRL to learn the exploration strategy.\nThe work in \\cite{Weight2019} combines frontier-based exploration with DRL to learn an exploration strategy directly. \nThe state information includes the global occupancy map, robot locations and frontiers, while the action is to output the weight of a cost function that evaluates the goodness of each frontier. \nThe cost function includes distance and information gain. \nBy adjusting the weight, the relative importance of each term can be changed. \nHowever, the terms of the cost function rely on human knowledge and may not be applicable in other situations. \nIn \\cite{2_li2019deep}, the state space is similar to the one in \\cite{Weight2019}, while the action is to select a point from the global map.\nHowever, the map size can vary dramatically from one environment to the next. \nIt losses generality when setting the maximum map size before exploring the environments. \n\nIn \\cite{MaxM2018} and \\cite{31_chen2019self}, a local map, which is centered at the robot's current location, is extracted from the global map to represent current state information.\nBy using a local map, \\cite{MaxM2018} trains the robot to select actions in ``turn left, turn right, move forward'', while \\cite{31_chen2019self} learns to select points in the free space of the local map. \nLocal map being state space can eliminate the limitation of global map size, but the current local map fails to contain all the information. 
\nIn \\cite{MaxM2018}, the robot tends to get trapped in an explored room when there is no frontier in the local map, because the robot has no idea where the frontiers are.\nThe training process in \\cite{31_chen2019self} needs to drive the robot to the nearest frontier when the local map contains no frontier information, although an RNN network is integrated into their framework. \nThis human intervention adopts a greedy strategy and cannot guarantee an optimal or near-optimal solution. \nWhen utilizing local observations, the robot exploration problem requires the DRL approach to have a long-term memory.\nThe Neural Map in \\cite{17_parisotto2018neural} is proposed to tackle the limitations of simple memory architectures in DRL.\nBesides, Neural SLAM in \\cite{13_zhang2017neural} embeds traditional SLAM into an attention-based external memory architecture.\nHowever, the memory architectures in \\cite{17_parisotto2018neural} and \\cite{13_zhang2017neural} are based on a fixed global map size.\nThey are therefore difficult to apply to unknown environments whose size may be quite large compared with the maps in the training set.\nUnlike the aforementioned methods, our method uses the 4D point-clouds-like information to represent the global state information, which suffers from neither the map size limitation nor the simple memory problem. \nAs far as we know, our method is the first to apply point clouds to robot exploration problems. \nTherefore, we also design a corresponding DRL training framework that learns the exploration strategy directly from point clouds.\n\n\n\\section{Problem Formulation}\n\\label{sec:formulation}\n\nOur work aims to develop and train a neural network that can take 4D point-clouds-like information as input and generate an efficient policy to guide the exploration process of a robot equipped with a laser scanner. \nThe network should take into account the information about explored areas, occupied areas, and unexplored frontiers. \nIn this paper, the robot exploration problem is to make a robot equipped with a limited-range laser scanner explore unknown environments autonomously.\n\n\\subsection{Frontier-based Robot Exploration Problem}\n\\label{sec:formulation_A}\nIn the robot exploration problem, a 2D occupancy map is most frequently adopted to store the explored environment information. \nDefine the explored 2D occupancy map at step ${t}$ as ${M_t}$.\nEach grid in ${M_t}$ can be classified into one of the following three states: free grid ${E_t}$, occupied grid ${O_t}$, and unknown grid ${U_t}$. \nAccording to \\cite{21_yamauchi1997frontier}, frontiers ${F_t}$ are defined as the boundaries between the free space ${E_t}$ and unknown space ${U_t}$.\nMany existing DRL exploration frameworks learn a mapping from ${M_t}$ and ${F_t}$ to robot movement commands which can avoid obstacles and navigate to specific locations.\nAlthough this end-to-end framework has a simple structure, it is difficult to train.\nInstead, our method learns a policy network that can directly determine which frontier to explore, which is similar to \\cite{Weight2019} and \\cite{2_li2019deep}.\nAt step $t$, a target frontier is selected from ${F_t}$ based on an exploration strategy and the current explored map ${M_t}$. \nOnce the change of the explored map exceeds a threshold, the robot is stopped, and the explored map ${M_{t+1}}$ at step $t+1$ is obtained.\nBy repeatedly moving to selected frontiers, the robot explores more unknown area until no accessible frontier exists. 
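\n\nTo make the frontier definition above concrete, the following minimal sketch (ours, not part of our implementation) detects frontier cells in a small occupancy grid: a free cell is treated as a frontier cell if at least one of its 4-connected neighbors is unknown, following the definition in \\cite{21_yamauchi1997frontier}. The cell encoding (0 for unknown, 1 for free, 2 for occupied) is chosen only for this illustration.\n\\begin{verbatim}\nimport numpy as np\n\n# 0 = unknown, 1 = free, 2 = occupied (illustrative encoding)\ndef detect_frontiers(M):\n    frontiers = []\n    H, W = M.shape\n    for i in range(H):\n        for j in range(W):\n            if M[i, j] != 1:\n                continue\n            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):\n                ni, nj = i + di, j + dj\n                if 0 <= ni < H and 0 <= nj < W and M[ni, nj] == 0:\n                    frontiers.append((i, j))\n                    break\n    return frontiers\n\nM = np.array([[0, 0, 0, 0],\n              [0, 1, 1, 0],\n              [2, 1, 1, 2]])\nprint(detect_frontiers(M))  # free cells adjacent to unknown space\n\\end{verbatim}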
\n\nBecause ${M_t}$ can be represented as an image, it is commonly used to directly represent the state information.\nAs explained in Section \\ref{sec:relatedwork}, a novel state representation method with 4D point-clouds-like information is proposed instead.\nThe 4D point-clouds-like information at step $t$ is defined as a 4-dimensional point cloud set with $n$ points, denoted by $X_t = \\{ x_1^t, ..., x_n^t \\} \\subset \\mathbb{R}^4$.\nEach point contains 4D coordinates $x_i^t = \\left( x_i, y_i, b_i, d_i\\right)$, where $x_i, y_i$ denotes the location of the point, $d_i$ denotes the distance from the point to the robot location without collision, $b_i \\in \\{ 0, 1\\}$ denotes whether point $(x_i, y_i)$ in $M_t$ belongs to frontier or not.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.53]{imgs\/MDP_framework.png}\n\t\\caption{The framework of the proposed method, which consists of five components: (a) A simulator adopted from $HouseExpo$ \\cite{li2019houseexpo}, which receives and executes a robot movement command and outputs the new map in $HouseExpo$ coordinate; (b) The modified Dijkstra algorithm to extract the contour of the free space; (c) The state represented by 4D point-clouds-like information; (d) The policy network which processes the state information and estimates the value of frontiers; (e) The A* algorithm that finds a path connecting the current robot location to the goal frontier and a simple path tracking algorithm that generates the corresponding robot movement commands.}\n\t\\label{fig:label_framework}\n\\end{figure}\n\n\\subsection{DRL exploration formulation}\n\\label{sec:DRL_formulation}\nThe robot exploration problem can be formulated as a Markov Decision Process (MDP), which can be modeled as a tuple $(\\mathcal{S}, \\mathcal{A}, \\mathcal{T}, \\mathcal{R}, \\gamma)$.\nThe state space ${\\mathcal{S}}$ at step $t$ is defined by 4D point cloud $X_t$, which can be divided into frontier set $F_t$ and obstacle set $O_t$.\nThe action space ${\\mathcal{A}_t}$ at step $t$ is the frontier set $F_t$, and the action is to select a point $f_t$ from $F_t$, which is the goal of the navigation module implemented by $A^{*}$ \\cite{Hart1968Astar}.\nWhen the robot take an action $f_t$ from the action space, the state $X_t$ will transit to state $X_{t+1}$ according to the stochastic state transition probability ${\\mathcal{T}(X_t, f_t, X_{t+1})} = p(X_{t+1} | X_t, f_t)$.\nThen the robot will receive an immediate reward $r_t = \\mathcal{R}(X_t, f_t)$.\nThe discount factor $\\gamma \\in [0, 1]$ adjusts the relative importance of immediate and future reward.\nThe objective of DRL algorithms is to learn a policy $\\pi (f_t | X_t)$ that can select actions to maximize the expected reward, which is defined as the accumulated $\\gamma-$discounted rewards over time.\n\nBecause the action space varies according to the size of frontier set $F_t$, it is difficult to design a neural network that maps the state to the action directly.\nThe value-based RL is more suitable to this formulation. 
\nIn value-based RL, a vector of action values, which are the expected rewards after taking actions in state $X_t$ under policy $\\pi$, can be estimated by a deep Q network (DQN) $Q_{\\pi}(X_t, f_t; \\theta) = \\mathbb{E}[\\sum_{i=t}^{\\infty}{\\gamma}^{i-t}r_i | X_t, f_t]$, where $\\theta$ are the parameters of the muti-layered neural network.\nThe optimal policy is to take action that has the highest action value:\n$f_{t}^{*}=\\mathop{\\text{argmax}}_{f_t}Q_{\\pi}(X_t,f_t; \\theta)$.\nDQN \\cite{DQN2015} is a novel variant of Q-learning, which utilizes two key ideas: \nexperience reply and target network. \nThe DQN tends to overestimate action values, which can be tackled by double DQN in \\cite{doubleDQN}.\nDouble DQN select an action the same as DQN selects, while estimate this action's value by the target network.\n\n\\section{Algorithm}\n\\label{sec:alg}\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.3]{.\/imgs\/PointCloud_Generation2.png}\n\t\\caption{The illustration of the 4D point-clouds-like information generation process. (a) presents the map where the black, gray, white, blue points denote obstacles, unknown space, free space, and robot location. In (b), the contour of free space, which is denoted by green points, is generated by the modified Dijkstra algorithm. The number on each point indicates the distance from this point to the robot location. In (c), the points in the contour set are divided into frontier or obstacle sets, which are denoted in dark green and light green, respectively. In (d), the 4D point-clouds-like information are extracted from the image (c). The point clouds include location, frontier flag, and distance information.}\n\t\\label{fig:label_illustration_h}\n\\end{figure}\n\nIn this section, we present the framework of our method and illustrate its key components in detail. \n\n\\subsection{Framework Overview}\nThe typical DRL framework is adopted, where the robot interacts with the environment step by step to learn an exploration strategy. \nThe environment in this framework is based on $HouseExpo$ \\cite{li2019houseexpo}.\nThe state and action space of the original $HouseExpo$ environment is the local observation and robot movement commands, respectively. \nWhen incorporated into our framework, $HouseExpo$ receives a sequence of robot movement command and outputs the global map once the change of the explored map is larger than the threshold, which is detailed in Section \\ref{sec:formulation_A}.\nAs shown in Fig. \\ref{fig:label_framework}, the 4D point-clouds-like information can be obtained by a modified Dijkstra algorithm. \nAfter the policy network outputting the goal point, the $A^{*}$ algorithm is implemented to find the path connecting the robot location to the goal point. \nThen a simple path tracking algorithm is applied to generate the sequence of robot movement commands.\n\n\n\\subsection{Frontier Detection and Distance Computation}\n\\label{sec:frontier_dectect}\nComputing the distances from the robot location to points in the frontier set without collision will be time-consuming if each distance is obtained by running the $A^{*}$ algorithm once. \nInstead, we modify the Dijkstra algorithm to detect frontiers and compute distance at the same time by sharing the search information. 
\nDenote the ``open list'' and ``close list'' as $L_o$ and $L_c$, respectively.\nThe open list contains points that need to be searched, while the close list contains points that have been searched.\nDefine the contour list as $L_f$, which contains the location and the cost of points that belong to frontier or obstacle.\nOnly points in the free space of map $M_t$ are walkable. \nThe goal of this modified algorithm is to extract the contour of free space and obtain the distance information simultaneously.\nAs shown in Algorithm 1, the start point with the cost of zero, which is decided by the robot location, is added to $L_o$. \nWhile the open list is not empty, the algorithm repeats searching 8 points, denoted by $p_{near}$, adjacent to current point $p_{cur}$.\nThe differences from Dijkstra algorithms are: \n(1) If $p_{near}$ belongs to occupied or unknown space, add $p_{cur}$ to frontier list $L_f$, as shown in line 10 of Algorithm 1.\n(2) Instead of stopping when the goal is found, the algorithm terminates until $L_o$ contains zero points.\nAfter the algorithm ends, the contour list contains points that are frontiers or boundaries between free space and obstacle space.\nPoints in the contour list can be classified by their neighboring information into frontier or obstacle set, which is shown in Fig. \\ref{fig:label_illustration_h}.\n\n\\begin{algorithm}[h]\n\\caption{Modified Dijkstra Algorithm}\n\\begin{algorithmic}[1]\n\\STATE $L_o \\leftarrow \\{ p_{start}\\}, Cost( p_{start}) = 0;$ \n\\WHILE {$L_o \\neq \\phi$}\n\\STATE $p_{cur} \\leftarrow minCost(L_o)$;\n\\STATE $L_c \\leftarrow L_c \\cup p_{cur}, L_o \\leftarrow L_o \\backslash p_{cur}$;\n\\FOR{$p_{near}$ in 8 points adjacent to $p_{cur}$}\n\\STATE $cost_{near} = Cost(p_{cur}) + distance(p_{near}, p_{cur})$\n\\IF{$p_{near} \\in L_c$}\n\\STATE continue;\n\\ELSIF{$p_{near}$ is not walkable}\n\\STATE $L_f \\leftarrow L_f \\cup p_{cur}$;\n\\ELSIF{$p_{near} \\notin L_o$}\n\\STATE $L_o \\leftarrow L_o \\cup p_{near}$\n\\STATE $Cost(p_{near}) = cost_{near}$;\n\\ELSIF{$p_{near} \\in L_o$ and $Cost(p_{near})>cost_{near}$}\n\\STATE $Cost(p_{near})=cost_{near}$\n\\ENDIF\n\\ENDFOR\n\\ENDWHILE \n\\label{code:recentEnd}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{figure*}[t]\n\t\n\t\\centering\n\t\\includegraphics[scale=0.48]{imgs\/network_structure2.png}\n\t\\caption{The architecture of our proposed neural network. The input point clouds are classified into two categories: frontier set denoted by dark blue and obstacle set denoted by red. The edge convolution operation $EdgeConv$ is denoted by a light blue block, which is used to extract local features around each point in the input set. The feature sets of frontiers and obstacles, which are generated by $EdgeConv$ operations, are denoted by light green and green. The MLP operation is to extract one point's feature by only considering this point's information. 
After several $EdgeConv$ operations and one MLP operation, a max-pooling operation is applied to generate the global information, which is shown in light yellow.}\n\t\\label{fig:label_network_structure}\n\\end{figure*}\n\n\\subsection{Network with Point Clouds as Input}\n\\label{sec:DQN_architecture}\nIn this section, the architecture of the state-action value network with 4D point-clouds-like information as input is detailed.\nThe architecture is modified from DGCNN in Segmentation task \\cite{wang2019dynamic}, which proposes edge convolution (EdgeConv) operation to extract the local information of point clouds.\nThe EdgeConv operation includes two steps for each point in the input set: (1) construct a local graph including the center point and its k-nearest neighbors; (2) apply convolution-like operations on edges which connect each neighbor to the center point.\nThe $(D_{in}, D_{out})$ EdgeConv operation takes the point set $(N, D_{in})$ as input and outputs the feature set $(N, D_{out})$, where $D_{in}$ and $D_{out}$ denote the dimension of input and output set, and $N$ denotes the number of points in the input set.\nDifferent from DGCNN and other typical networks processing point cloud such as PointNet \\cite{qi2017pointnet}, which have the same input and output point number, our network takes the frontier and obstacle set as input and only outputs the value of points from the frontier set.\nThe reason for this special treatment is to decrease the action space's size to make the network converge in a faster manner.\n\nThe network takes as input $N_f + N_w$ points at time step $t$, which includes $N_f$ frontier points and $N_w$ obstacle points, which are denoted as $F_t = \\{ x_1^t, ..., x_{N_f}^t \\}$ and $O_t = \\{ x_{N_f+1}^t, ..., x_{N_f+N_w}^t \\}$ respectively.\nThe output is a vector of estimated value of each action $Q_{\\pi}(F_t, O_t, \\cdot; \\theta)$.\nThe network architecture contains multiple EdgeConv layers and multi-layer perceptron (mlp) layers.\nAt one EdgeConv layer, the feature set is extracted from the input set. \nThen all features in this edge feature set are aggregated to compute the output EdgeConv feature for each corresponding point.\nAt a mlp layer, the data of each point is operated on independently to obtain the information of one point, which is the same as the mlp in PointNet \\cite{qi2017pointnet}.\nAfter 4 EdgeConv layers, the outputs of all EdgeConv layers are aggregated and processed by a mlp layer to generate a 1D global descriptor, which encodes the global information of input points.\nThen this 1D global descriptor is concatenated with the outputs of all EdgeConv layers.\nAfter that, points that belong to the obstacle set are ignored, and the points from the frontier set are processed by three mlp layers to generate scores for each point.\n\n\n\\subsection{Learning framework based on DQN}\nAs described in Section \\ref{sec:DRL_formulation}, the DQN is a neural network that for a given state $X_t$ outputs a vector of action values.\nThe network architecture of DQN is detailed in Section \\ref{sec:DQN_architecture}, where the state $X_t$ in point clouds format contains frontier and obstacle sets.\nUnder a given policy $\\pi(f_t|F_t,O_t)$, the true value of an action $f_t$ in state $X_t = \\{F_t, O_t \\}$ is: $Q_{\\pi}(X_t, f_t; \\theta) \\equiv \\mathbb{E}[\\sum_{i=t}^{\\infty}{\\gamma}^{i-t}r_i | X_t, f_t]$. \nThe target is to make the estimate from the frontier network converge to the true value. 
\nThe parameters of the action-value function can be updated in a gradient descent manner, after taking action $f_t$ in state $X_t$ and observing the next state $X_{t+1}$ and immediate reward $r_{t+1}$: \n\\begin{eqnarray}\\label{update_Q}\n\t\\theta_{t+1} = \\theta_t + \\alpha (G_t - Q_{\\pi}(X_t, f_t; \\theta_t)),\n\\end{eqnarray}\nwhere $\\alpha$ is the learning rate and the target $G_t$ is defined as \n\\begin{eqnarray}\\label{target}\n\tG_t = r_{t+1}+\\gamma Q_{\\pi}(X_{t+1}, \\mathop{\\text{argmax}}_{f}Q(X_{t+1}, f; \\theta_t); \\theta_t^{'}),\n\\end{eqnarray}\nwhere $\\theta^{'}$ denotes the parameters of the target network, which are updated periodically by $\\theta^{'} = \\theta$.\n\nTo make the estimate from our network converge to the true value, a typical DQN framework \\cite{DQN2015} can be adopted. \nFor each step $t$, the tuple $(X_t, f_t, X_{t+1}, r_{t+1})$ is saved in a replay buffer. \nThe parameters can be updated by equations \\ref{update_Q} and \\ref{target} given tuples sampled from the replay buffer.\n\n\nThe reward signal in the DRL framework helps the robot know whether an action $f_t$ is appropriate to take in a certain state $X_t$.\nTo enable the robot to explore unknown environments successfully, we define the following reward function:\n\n\\begin{equation}\\label{reward_all}\nr_t = r_{area}^t + r_{frontier}^t + r_{action}^t.\n\\end{equation}\nThe term $r_{area}^t$ equals the newly discovered area at time $t$, which is designed to encourage the robot to explore unknown areas.\nThe term $r_{action}^t$ provides a consistent penalization signal to the robot when a movement command is taken. \nThis reward encourages the robot to explore unknown environments with a relatively short overall path.\nThe term $r_{frontier}^t$ is designed to guide the robot to reduce the number of frontiers, as defined in equation \\ref{reward_f}:\n\\begin{equation}\\label{reward_f}\nr_{frontier}^t = \\left\\{\n\\begin{aligned}\n 1, N_{frontier}^t < N_{frontier}^{t-1}\\\\\n 0, N_{frontier}^t \\geq N_{frontier}^{t-1},\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $N_{frontier}^t$ denotes the number of frontier groups at time $t$.\n\n\n\\section{Training Details and Simulation Experiments}\n\\label{sec:exp}\nIn this section, we first detail the training process of our DRL agent. \nThen we validate the feasibility of our proposed method in the robot exploration problem by two sets of experiments:\n(1) a comparison with five other exploration methods, and (2) a scalability test in which maps larger than those in the training set are to be explored.\n\n\\subsection{Training Details}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.3]{.\/imgs\/HouseExpo_traindata25.png}\n\t\\caption{Some map samples from the training set. The black and white pixels represent obstacle and free space, respectively.}\n\t\\label{fig:label_mapsample}\n\\end{figure}\n\nTo learn a general exploration strategy, the robot is trained in the $HouseExpo$ environment where 100 maps with different shapes and features are to be explored.\nThe robot is equipped with a 2m range laser scanner with a 180-degree field of view. 
\nThe noise of laser range measurement is simulated by the Gaussian distribution with a mean of 0 and a standard deviation of 0.02m.\nAs an episode starts, a map is randomly selected from the training set, and the robot start pose, including the location and pose, is also set randomly.\nIn the beginning, the circle area centered at the start location is explored by making the robot rotate for 360 degree.\nThen at each step, a goal frontier point is selected from the frontier set under the policy of our proposed method.\nA* algorithm is applied to find a feasible path connecting the current robot location to the goal frontier. \nA simple path tracking algorithm is used to find the robot commands to follow the planned path: moving to the nearest unvisited path point $p_{near}$, and replanning the path if the distance between the robot and $p_{near}$ is larger than a fixed threshold.\nAn episode ends if the explored ratio of the whole map is over ${95\\%}$.\n\n\\begin{table}[!htbp]\n\\caption{Parameters in Training}\n\\centering\n\\begin{tabular}{cccc}\n \\toprule\n \\multicolumn{2}{c}{HouseExpo} & \\multicolumn{2}{c}{Training} \\\\\n \\hline\n Laser range & $2m$ & Discount factor $\\gamma$ & 0.99 \\\\\n \\hline\n Laser field of view & $180^{\\circ}$ & Target network update f & 4000 steps \\\\\n \\hline\n Robot radius & $0.15m$ & Learning rate & 0.001 \\\\\n \\hline\n Linear step length & $0.3m$ & Replay buffer size & 50000 \\\\\n \\hline\n Angular step length & $15^{\\circ}$ & Exploration rate $\\epsilon$ & 15000\\\\\n \\hline\n Meter2pixel & $16$ & Learning starts & 3000\\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nSome training map samples are shown in Fig. \\ref{fig:label_mapsample}. \nThe largest size of a map in the training set is 256 by 256.\nBecause the size of state information $X_t$ changes at each time and batch update method requires point clouds with same size, currently, it is not realistic to train the model in a batch-update way.\nIn typical point clouds classification training process, the size of all the point clouds data are pre-processed to the same size.\nHowever, these operations will change the original data's spatial information in our method.\nInstead, for each step, the network parameters are updated 32 times with a batch size equal to 1.\nLearning parameters are listed in Table 1. \nThe training process is performed on a computer with Intel i7-7820X CPU and GeForce GTX 1080Ti GPU.\nThe training starts at 2000 steps and ends after 90000 update steps, which takes 72 hours.\n\n\\subsection{Comparison Study}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.65]{imgs\/HouseExpo_test_data.png}\n\t\\caption{Four maps for testing. The size of map1, map2, map3 and map4 is (234, 191), (231, 200), (255, 174) and (235, 174), respectively. }\n\t\\label{fig:test_data}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.34]{imgs\/comparision.png}\n\t\\caption{The path length's data of each method on four test maps. 
For each map, each method is tested for 100 times with the same randomly initialized robot start locations.}\n\t\\label{fig:length_comparision}\n\\end{figure}\n\nBesides our proposed method, we also test the performance of the weight tuning method in \\cite{Weight2019}, a cost-based method in \\cite{22_gonzalez2002navigation}, a method with greedy strategy in \\cite{21_yamauchi1997frontier}, a method utilizing a random policy and the baseline in \\cite{li2019houseexpo}, which we denote as the weight method, cost method, greedy method, random method and baseline, respectively.\nTo compare the performance of different methods, we use 4 maps as a testing set, as shown in Fig. \\ref{fig:test_data}.\nFor each test map, we conduct 100 trials by setting robot initial locations randomly for each trail.\n\nThe \\emph{random method} selects a frontier point from the frontier set randomly.\nThe \\emph{greedy method} chooses the nearest frontier point.\nThe \\emph{baseline} utilizes a CNN which directly determines the robot movement commands by the current local observation.\n\nThe \\emph{cost method} evaluates the scalar value of frontiers by a cost function considering distance and information gain information. \n\\begin{equation}\\label{cost_method}\ncost = wd+(1-w)(1-g),\n\\end{equation}\nwhere $w$ is the weight that adjusts the relative importance of costs. \n$d$ and $g$ denote the normalized distance and information gain of a frontier.\nAt each step, after obtaining the frontier set as detailed in Section \\ref{sec:frontier_dectect}, the k-means method is adopted to cluster the points in the frontier set to find frontier centers. \nTo reduce the runtime of computing information gains, we only compute the information gain for each frontier center.\nThe information gain is computed by the area of the unknown space to be explored if this frontier center is selected \\cite{22_gonzalez2002navigation}.\nThe weight in the cost method is fixed to 0.5, which fails to be optimal in environments with different structures and features.\n\nThe \\emph{weight method} can learn to adjust the value of the weight in equation \\ref{cost_method} under the same training framework as our proposed method.\nThe structure of the neural network in weight method is presented in Fig. \\ref{fig:label_network_structure}, which takes the 4D point-clouds-like information as input and outputs a scalar value.\n\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.9]{imgs\/comparision_result2.png}\n\t\\caption{The representative exploration trials of six different methods on map1 with the same robot start locations and poses. The bright red denotes the start state and the dark red denotes the end state. The green point denotes the locations where the robot made decisions to choose which frontiers to explore. As the step increases, the brightness of green points becomes darker and darker. The baseline's result doesn't have green points because its action is to choose a robot movement command.}\n\t\\label{fig:map_compare}\n\\end{figure}\n\nWe select the length of overall paths, which are recorded after $95\\%$ of areas are explored, as the metric to evaluate the relative performance of the six exploration methods. \nThe length of overall paths can indicate the efficiency of the exploration methods.\nThe box plot in Fig. \\ref{fig:length_comparision} is utilized to visualize the path length's data of each exploration method on four test maps. 
\nThe baseline is not considered here because the baseline sometimes fails to explore the whole environment, which will be explained later. \nThree values of this metric are used to analyze the experiment results: (1) the average, (2) the minimum value, (3) the variance.\nOur proposed method has the minimum average length of overall paths for all four test maps.\nThe minimum length of the proposed method in each map is also smaller than other five methods.\nThis indicates that the exploration strategy of our proposed method is more effective and efficient than the other five methods.\nThe random method has the largest variance and average value for each map because it has more chances to sway between two or more frontier groups. \nThat is why the overall path of the random method in Fig. \\ref{fig:map_compare} is the most disorganized.\nThe weight method has a lower average and minimum value compared with the cost method in all test maps, due to the advantages of learning from rich data.\nHowever, the weight method only adjusts a weight value to change the relative importance of distance and information gain. \nIf the computation of the information gain is not accurate or another related cost exists, the weight method fails to fully demonstrate the advantages of learning.\nInstead, our proposed method can learn useful information, including information gain, by learning to estimate the value of frontiers, which is the reason that our proposed method outperforms the weight method.\n\nFig. \\ref{fig:map_compare} shows an example of the overall paths of six different methods when exploring the map1 with the same starting pose.\nEach episode ended once the explored ratio was more than 0.95.\nThe proposed method explored the environment with the shortest path.\nThe explored ratio of the total map according to the length of the current path is shown in Fig. \\ref{fig:label_explored_ratio}.\nThe greedy method's curve has a horizontal line when the explored ratio is near 0.95. \nThis is because the greedy method missed a nearby frontier which would disappear at once if selected. \nHowever, the greedy method chose the nearest frontier instead, which made the robot travel to that missed frontier again and resulted in a longer path.\nFor example, in Fig. \\ref{fig:map_compare}, the greedy method chose to explore the point C instead of the point A, when the robot was at point B. This decision made the robot travel to the point A later, thus making the overall path longer.\nThe baseline's curve also exhibits a horizontal line in Fig. \\ref{fig:label_explored_ratio}, which is quite long.\nThe baseline's local observation contained no frontier information when the surroundings were all explored (in the left part of the environment shown in Fig. \\ref{fig:map_compare}).\nTherefore, at this situation, the baseline could only take ``random'' action (i.e. travelling along the wall) to find the existing frontiers, which would waste lots of travelling distances and may fail to explore the whole environment.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=0.38]{imgs\/explored_ratio.png}\n\t\\caption{The ratio of the area explored and the area of the whole map with respect to the current path's length. The test map is map1 and the start locations is the same as Fig. 
\\ref{fig:map_compare}.}\n\t\\label{fig:label_explored_ratio}\n\n\\end{figure}\n\n\\subsection{Scalability Study}\n\nIn this section, a map size of (531, 201) is used to test the performance of our proposed method in larger environments compared with maps in the training set. \nIf the network's input is fixed size images, the map needs to be padded into a (531, 531) image, which is a low-efficient state representation way. \nThen a downscaling operation of the image is required to make the size of the image input the same as the requirement of the neural network, e.g. (256,256).\nAlthough the neural network can process the state data by downscaling, the quality of the input data decreases.\nTherefore, the network fails to work once the scaled input contains much less necessary information than the original image.\nFig. \\ref{fig:label_scalabilityTest} presents the overall path generated by our method without downscaling the map size.\nOur proposed method, which takes point clouds as input, has better robustness in maps with large scales because of the following two reasons.\nFirst, we incorporate the distance information into point clouds, which can help neural networks learn which part of point clouds are walkable.\nAlthough the image representation can also have a fourth channel as distance information, the scaling operation can make some important obstacles or free points disappear, which changes the structure of the map.\nSecondly, the number of pixels in an image increases exponentially as the size of the image increase.\nThe number of points in point clouds equals the number of pixels that represent an obstacle or frontier in a map, which is not an exponential relation unless all the pixels in a map are obstacles or frontiers.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[scale=1.2]{.\/imgs\/path_1998.png}\n\t\\caption{The result of the scalability test. The map size is (531, 201). The meaning of points' color is the same as Fig. \\ref{fig:map_compare}}\n\t\\label{fig:label_scalabilityTest}\n\\end{figure}\n\n\\section{Conclusions And Future Work}\n\\label{sec:conclude}\n\n\n\nIn this paper, we present a novel state representation method using 4D point-clouds-like information and design the framework to learn an efficient exploration strategy.\nOur proposed method can solve the problems that come with using images as observations.\nThe experiments demonstrate the effectiveness of our proposed method, compared with other five commonly used methods.\nFor the future work, other network structures and RL algorithms can be modified and applied to the robot exploration problem with point clouds as input.\nThe converge speed of training may also be improved by optimizing the training techniques.\nBesides, the multi-robot exploration problem may also use point clouds to represent the global information. \n\n\n\\addtolength{\\textheight}{-8cm} \n \n \n \n \n \n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\\label{sec:INTRO}\nRelational Reinforcement Learning (RRL) has been investigated in early 2000s by works such as \\cite{bryant1999combining,dvzeroski1998relational,dvzeroski2001relational} among others. \nThe main idea behind RRL is to describe the environment in terms of objects and relations. 
\nOne of the first practical implementations of this idea was proposed in \\cite{dvzeroski1998relational} and later improved in \\cite{dvzeroski2001relational} based on a modification of the Q-Learning algorithm \\cite{watkins1992q} via the standard relational tree-learning algorithm TILDE \\cite{blockeel1998top}. As shown in \\cite{dvzeroski2001relational}, the RRL system allows for very natural and human-readable decision making and policy evaluation. More importantly, the use of variables in the ILP system makes it possible to learn generally formed policies and strategies. Since these policies and actions are not directly associated with any particular instance or entity, this approach leads to a generalization capability beyond what is possible in most typical RL systems.\nGenerally speaking, the RRL framework offers several benefits over traditional RL: (i) The learned policy is usually human interpretable, and hence can be viewed, verified and even tweaked by an expert observer. (ii) The learned program can generalize better than its classical RL counterpart. (iii) Since the language for the state representation is chosen by the expert, it is possible to incorporate inductive biases into learning. This can be a significant advantage in complex problems, as it can be used to steer the agent toward certain actions without accessing the reward function. (iv) It allows for the incorporation of higher level concepts and prior background knowledge. \n\nIn recent years, with the advent of new deep learning techniques, significant progress has been made beyond the classical Q-learning RL framework. By using algorithms such as deep Q-learning and its variants \\cite{mnih2013playing,van2016deep}, as well as policy learning algorithms such as A2C and A3C \\cite{mnih2016asynchronous}, more complex problems are now being tackled. However, the classical RRL framework cannot be easily employed to tackle the large-scale and complex scenes that exist in recent RL problems. \nIn particular, none of the inherent benefits of RRL have been materialized in the deep learning frameworks thus far. This is because existing RRL frameworks are usually not designed to learn from complex visual scenes and cannot be easily combined with differentiable deep neural networks.\nIn \\cite{payani2019Learning}, a novel ILP solver was introduced which uses Neural-Logical Network (NLN)~\\cite{payani2018} to construct a differentiable neural-logic ILP solver (dNL-ILP). The key aspect of this dNL-ILP solver is a differentiable deduction engine, which is at the core of the proposed RRL framework. \nAs such, the resulting differentiable RRL framework can be used, similar to deep RL, in an end-to-end learning paradigm, trainable via typical gradient optimizers. Further, in contrast to the early RRL frameworks, this framework is flexible and can learn from ambiguous and fuzzy information. Finally, it can be combined with deep learning techniques such as CNNs to extract relational information from visual scenes. \nIn the next section we briefly introduce the differentiable dNL-ILP solver. In Section \\ref{sec:RRL}, we show how this framework can be used to design a differentiable RRL framework. 
Experiments will be presented next, followed by the conclusion.\n\n\\section{Differentiable ILP via neural logic networks}\n\\label{sec:dNL-ILP}\n \nIn this section, we briefly present the basic design of the differentiable dNL-ILP which is at the core of the proposed RRL. More detailed presentation of dNL-ILP could be found in \\cite{payani2019Learning}.\nLogic programming is a paradigm in which we use formal logic (and usually first-order-logic) to describe relations between facts and rules of a program domain. In logic programming, rules are usually written as clauses of the form $H \\leftarrow B_1,\\,B_2,\\,\\dots,\\,B_m$, \nwhere $H$ is called \\texttt{head} of the clause and $B_1,\\,B_2,\\,\\dots,\\,B_m$ is called \\texttt{body} of the clause. A clause of this form expresses that if all the atoms $B_i$ in the \\texttt{body} are true, the \\texttt{head} is necessarily true.\nEach of the terms $H$ and $B$ is made of \\texttt{atoms}. Each \\texttt{atom} is created by applying an $n$-ary Boolean function called \\texttt{predicate} to some constants or variables. A \\texttt{predicate} states the relation between some variables or constants in the logic program. We use lowercase letters for constants (instances) and uppercase letters for variables. \nTo avoid technical details, we consider a simple logic program. Assume that a directed graph is defined using a series of facts in the form of \\texttt{edge(X,Y)} where for example \\texttt{edge(a,b)} states that there is an edge from node \\texttt{a} to the node \\texttt{b}. As an example, the graph in Fig. \\ref{fig:connected_graph} can be represented as \\texttt{\\{edge(a,b), edge(b,c), edge(c,d), edge(d,b)\\}}.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.2\\textwidth]{cnt.png}\n\t\\caption{Connected graph example}\n\t\\label{fig:connected_graph}\n\t\\vspace{-.16in}\n\\end{figure}\nAssume that our task is to learn the \\texttt{cnt(X,Y)} predicate from a series of examples, where \\texttt{cnt(X,Y)} is true if there is a directed path from node \\texttt{X} to node \\texttt{Y}. The set of positive examples in graph depicted in Fig. \\ref{fig:connected_graph} is $\\mathcal{P}=$ \\texttt{\\{cnt(a,b), cnt(a,c), cnt(a,d), cnt(b,b),cnt(b,c), cnt(b,d),\\dots\\}}. Similarly the set of negative examples $\\mathcal{N}$ includes atoms such as \\texttt{\\{cnt(a,a),cnt(b,a),\\dots\\}}.\n\nIt is easy to verify that the predicate \\texttt{cnt} defined as below satisfies all the provided examples (entails the positive examples and rejects the negative one):\n\\begin{align}\n\\text{cnt(X,Y)} &\\leftarrow \\text{edge(X,Y)} \\nonumber\\\\\n\\text{cnt(X,Y)} &\\leftarrow \\text{edge(X,Z),\\,\\,cnt(Z,Y)}\n\\label{eq:cnt}\n\\end{align}\nIn fact by applying each of the above two rules to the constants in the program we can produce all the consequences of such hypothesis\nIf we allow for formulas with 3 variables (\\texttt{X,Y,Z}) as in (\\ref{eq:cnt}), we can easily enumerate all the possible symbolic atoms that could be used in the body of each clause. In our working example, this corresponds to $\\mathbb{I}_{cnt}=$\\texttt{\\{edge(X,X), edge(X,Y), edge(X,Z), \\dots, cnt(Z,Y), cnt(Z,Z)\\}}. \nAs the size of the problem grows, considering all the possibilities becomes unfeasible. Consequently, almost all ILP systems use some form of rule templates to reduce the possible combinations. For example, the dILP \\cite{evans2018learning} model, allows for the clauses (in the body) of at most two atoms in each clause predicate. 
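\n\nAs a concrete illustration of the deduction step discussed above, the following minimal sketch (ours, written only for illustration) applies the two rules for \\texttt{cnt(X,Y)} in (\\ref{eq:cnt}) to the background facts of the example graph by naive forward chaining, reproducing the positive examples listed above.\n\\begin{verbatim}\nedges = {('a','b'), ('b','c'), ('c','d'), ('d','b')}\n\n# Rule 1: cnt(X,Y) <- edge(X,Y)\ncnt = set(edges)\n\n# Rule 2: cnt(X,Y) <- edge(X,Z), cnt(Z,Y);\n# repeat until no new facts are derived\nwhile True:\n    new = {(x, y) for (x, z) in edges\n                  for (z2, y) in cnt if z2 == z}\n    if new <= cnt:\n        break\n    cnt |= new\n\nprint(sorted(cnt))  # cnt(a,b), cnt(a,c), cnt(a,d), ...\n\\end{verbatim}\nIn dNL-ILP, the same kind of forward chaining is carried out with fuzzy truth values and parameterized rules, so that the rule structure itself can be learned by gradient descent.\n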
\nIn \\cite{payani2019Learning}, a novel approach was introduced to alleviate the above limitation and to allow for learning arbitrary complex predicate formulas. The main idea behind this approach is to use multiplicative neurons \\cite{payani2018} that are capable of learning and representing Boolean logic. Consider the fuzzy notion of Boolean algebra where fuzzy Boolean value are represented as a real value in range $[0,1]$, where True and False are represented by 1 and 0, respectively. Let $\\bar{x}$ be the logical `NOT' of $x$. \n\\begin{figure}[tb]\n\t\\centering\n\t\\subfloat[][]{\n\t\t\\small\n\t\t\\vspace{-5mm}\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline \n\t\t\t$x_i$ & $m_i$ & $F_c$ \\\\ \t\\toprule \n\t\t\t0 & 0 & 1 \\\\ \\hline\n\t\t\t0 & 1 & 0 \\\\ \\hline\n\t\t\t1 & 0 & 1 \\\\ \\hline\n\t\t\t1 & 1 & 1 \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\label{fig:Fc}%\n\t}%\n\t\\qquad\n\t\\subfloat[][]{\n\t\t\\small\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline\t \n\t\t\t$x_i$ & $m_i$ & $F_d$ \\\\ \t\\toprule\n\t\t\t0 & 0 & 0 \\\\ \\hline\n\t\t\t0 & 1 & 0 \\\\ \\hline\n\t\t\t1 & 0 & 0 \\\\ \\hline\n\t\t\t1 & 1 & 1 \\\\ \\hline\n\t\t\\end{tabular}\n\t\t\\label{fig:Fd}%\n\t}\n\t\\caption{Truth table of $F_c(\\cdot)$ and $F_d(\\cdot)$ functions}%\n\t\\label{fig:FcFd}%\n\\end{figure}\nLet $\\boldsymbol{x}^n \\in \\{0,1\\}^n$ be the input vector for a logical neuron. we can associate a trainable Boolean membership weight $m_i$ to each input elements $x_i$ from vector $\\boldsymbol{x}^n$. Consider Boolean function $F_c(x_i,m_i)$ with the truth table as in Fig. \\ref{fig:Fc} which is able to include (exclude) each element $x_i$ in (out of) the conjunction function $f_{conj}(\\boldsymbol{x}^n)$. This design ensures the incorporation of each element $x_i$ in the conjunction function only when the corresponding membership weight $m_i$ is $1$. Consequently, the neural conjunction function $f_{conj}$ can be defined as:\n\\vspace{-2mm}\n\\begin{align}\n\\label{eq:conj}\nf_{conj}(\\boldsymbol{x}^n) &= \\prod_{i=1}^{n} F_c(x_i,m_i) \\nonumber \\\\\n\\text{where, } \\quad F_c(x_i,m_i) &= \\overline{\\overline{x_i} m_i } = 1 - m_i ( 1 - x_i) \n\\end{align}\nLikewise, a neural disjunction function $f_{disj}(\\boldsymbol{x}^n) $ can be defined using the auxiliary function $F_d$ with the truth table as in Fig. ~\\ref{fig:Fd}. \nBy cascading a layer of $N$ neural conjunction functions with a layer of $N$ neural disjunction functions, we can construct a differentiable function to be used for representing and learning a Boolean Disjunctive Normal Form (DNF). \n\ndNL-ILP employs these differentiable Boolean functions (e.g. dNL-DNF) to represent and learn predicate functions. Each dNL function can be seen as a parameterized symbolic formula where the (fuzzy) contribution of each symbol (atom) in the learned hypothesis is controlled by the trainable membership weights (e.g., $w_i$ where $m_i = sigmoid(w_i)$). If we start from the background facts ( e.g. all the groundings of predicate \\texttt{edge(X,Y)} in the graph example and apply the parameterized hypothesis we arrive at some new consequences (e.g., forward chaining). After repeating this process to obtain all possible consequences, we can update the parameters in dNL by minimizing the cross entropy between the desired outcome (provided positive and negative examples) and the deduced consequences. \n\nAn ILP description of a problem in this framework consist of these elements:\n\\begin{enumerate}\n\t\\item The set of constants in the program. 
In example of Fig. \\ref{fig:connected_graph}, this consists of $\\mathcal{C}=$\\texttt{\\{a,b,c,d\\}}\n\t\\item The set of background facts. In the graph example above this consists of groundings of predicate \\texttt{edge(X,Y)}, i.e., $\\mathcal{B}=$\\texttt{\\{edge(a,b), edge(b,c), edge(c,d), edge(d,b)\\}}\n\t\n\t\\item The definition of auxiliary predicates. In the simple example of graph we did not include any auxiliary predicates. However, in more complex example they would greatly reduce the complexity of the problem.\n\t\n\t\\item The signature of the target hypothesis. In the graph example, This signature indicates the target hypothesis is 2-ary predicate \\texttt{cnt(X,Y)} and in the symbolic representation of this Boolean function we are allowed to use three variables \\texttt{X,Y,Z}. \n\t\n\n\n\t\n\\end{enumerate}\nIn addition to the aforementioned elements, some parameters such as intial values for the membership weights ($m_i=sigmoid(w_i)$), as well as the number of steps of forward chaining should be provided. Furthermore, in dNL-ILP the memberships are fuzzy Boolean values between 0 and 1. As shown in \\cite{payani2019Learning}, for ambiguous problems where a definite Boolean hypothesis may not be found which could satisfy all the examples, there is no guaranty that the membership weights converge to zero or 1. In applications where our only goal is to find a successful hypothesis this result is satisfactory. However, if the interpretability of the learned hypothesis is by itself a goal in learning, we may need to encourage the membership weights to converge to 0 and 1 by adding a penalty term:\n\\begin{equation}\n\\text{interpretability penalty} \\propto m_i(1-m_i)\\label{eq:interpret}\n\\end{equation}\n\n\\section{Relational Reinforcement Learning via dNL-ILP}\n\\label{sec:RRL}\n \n\nEarly works on RRL \\cite{dvzeroski2001relational,van2005survey} mainly relied on access to the explicit representation of states and actions in terms of relational predicate language. In the most successful instances of these approaches, a regression tree algorithm is usually used in combination with a modified version of Q-Learning algorithms. \nThe fundamental limitation of the traditional RRL approaches is that the employed ILP solvers are not differentiable. Therefore, those approaches are typically only applicable the problems for which the explicit relational representation of states and actions is provided. Alternatively, deep RL models, due to recent advancement in deep networks, have been successfully applied to the much more complex problems. These models are able to learn from raw images without relying on any access to the explicit representation of the scene. However, the existing RRL counterparts are falling behind such desirable developments in deep RL. \n\nIn this paper, we establish that differentiable dNL-ILP provides a platform to combine RRL with deep learning methods, constructing a new RRL framework with the best of both worlds. This new RRL system allows the model to learn from the complex visual information received from the environment and extract intermediate explicit relational representation from the raw images by using the typical deep learning models such as convolutional networks. 
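\n\nTo illustrate the differentiable machinery that makes this combination possible, the following sketch implements the membership-weighted conjunction and disjunction neurons of Section \\ref{sec:dNL-ILP}; it is a minimal PyTorch version written by us, not the reference implementation.\n\\begin{verbatim}\nimport torch\n\n# m_i = sigmoid(w_i) are trainable membership weights,\n# x is a fuzzy Boolean vector with entries in [0, 1].\ndef neural_conj(x, w):\n    m = torch.sigmoid(w)\n    # F_c(x_i, m_i) = 1 - m_i * (1 - x_i)\n    return torch.prod(1.0 - m * (1.0 - x), dim=-1)\n\ndef neural_disj(x, w):\n    m = torch.sigmoid(w)\n    # F_d(x_i, m_i) = x_i * m_i, aggregated by a fuzzy OR\n    return 1.0 - torch.prod(1.0 - m * x, dim=-1)\n\nx = torch.tensor([1.0, 0.0, 1.0])\nw = torch.zeros(3, requires_grad=True)  # memberships start at 0.5\nprint(neural_conj(x, w), neural_disj(x, w))\n\\end{verbatim}\nCascading a layer of such conjunction neurons with a layer of disjunction neurons yields the differentiable DNF structure (dNL-DNF) used to represent and learn candidate predicate formulas.\n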
\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.85\\textwidth]{boxworld_states2.png}\n\t\\caption{States representation in the form of predicates in BoxWorld game, before and after an action}\n\t\\label{fig:boxworld}\n\\end{figure*}\nAlthough the dNL-ILP can also be used to formulate RL algorithms such as deep Q-learning, we focus only on deep policy gradient learning algorithm. This formulation is very desirable because it makes the learned policy to be interpretable by human. \nOne of the other advantages of using policy gradient in our RRL framework is that it enables us to restrict actions according to some rules obtained either from human preferences or from problem requirements. This in turn makes it possible to account for human preferences or to avoid certain pitfalls, e.g., as in safe AI.\n\nIn our RRL framework, although we use the generic formulation of the policy gradient with the ability to learn stochastic policy, certain key aspects are different from the traditional deep policy gradient methods, namely state representation, language bias and action representation. In the following, we will explain these concepts in the context of BoxWorld game. In this game, the agent's task is to learn how to stack the boxes on top of each other (in a certain order). For illustration, consider the simplified version of the game as in Fig.\\ref{fig:boxworld} where there are only three boxes labeled as \\texttt{a,b}, and \\texttt{c}. A box can be on top of another or on the \\texttt{floor}. A box can be moved if it is not covered by another box and can be either placed on the floor or on top of another uncovered box. For this game, the environment state can be fully explained via the predicate \\texttt{on(X,Y)}. Fig. \\ref{fig:boxworld} shows the state representation of the scene before and after an action (indicated by the predicate \\texttt{move(c,b)}). In the following we discuss each distinct elements of the proposed framework using the BoxWorld environment. \nFig.~\\ref{fig:ilp_rrl_diag} displays the overall design of our proposed RRL framework. In the following we discuss the elements of this RRL system.\n\n\\begin{figure*}[h]\n\t\\centering\n\t\\includegraphics[width=.95\\textwidth]{.\/rrl_diag.png}\n\t\\vspace{-5mm}\n\t\\caption{Learning explicit relational information from images in our proposed RRL; Images are processed to obtain explicit representation and dNL-ILP engine learns and expresses the desired policy (actions)}\n\t\\label{fig:ilp_rrl_diag}\n\\end{figure*}\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{.\/language_state6.png}\n\n\n\t\\vspace{-10mm}\n\t\\caption{Transforming low-level state representation to high-level form via auxiliary predicates}\n\t\\label{fig:ilp_rrl_state}\n\\end{figure*}\n\\begin{figure*} \n\t\\centering \n\t\\subfloat[A sample from CLEVER datset]{\n\t\t\\includegraphics[width=.55\\textwidth]{.\/clevr.png}\n\t\n\t}\n\t\\subfloat[A sample from sort-of-CLEVER datset]{\n\t\t\\includegraphics[width=.28\\textwidth]{.\/sclevr.png}\n\t\t\\label{subfig:sortofclever}\n\t\n\t}\n\t\\vspace{-2mm}\n\t\\caption{Extracting relational information from visual scene \\cite{santoro2017simple}}\n\t\\label{fig:ilp_RELATIONAL}\n\t\n\\end{figure*}\n\\subsection{State Representation}\n\\label{subsec:STATE-REP}\nIn the previous approaches to the RRL \\cite{dvzeroski1998relational,dvzeroski2001relational,jiang2019neural}, state of the environment is expressed in an explicit relational format in the form of predicate logic. 
This significantly limits the applicability of RRL in complex environments where such representations are not available. Our goal in this section is to develop a method in which the explicit representation of states can be learned via typical deep learning techniques in a form that will support policy learning via our differentiable dNL-ILP. \nAs a result, we can utilize the various benefits of the RRL discipline without being restricted only to environments with explicitly represented states.\n\nFor example, consider the BoxWorld environment explained earlier where the predicate\n\\texttt{on(X,Y)} is used to represent the state explicitly in the relational form (as shown in Fig.\\ref{fig:boxworld}). \nPast works in RRL relied on access to an explicit relational representation of states, i.e.,\nall the groundings of the state representation predicates. \nSince this example has 4 constants, i.e. $\\mathcal{C}=$\\texttt{\\{a,b,c,floor\\}}, these groundings would be the binary values (`true' or `false') for the atoms \\texttt{on(a,a), on(a,b), on(a,c), on(a,floor), \\dots, on(floor,floor)}. \nIn recent years, extracting relational information from visual scenes has been investigated.\nFig. \\ref{fig:ilp_RELATIONAL} shows two types of relational representation extracted from images in~\\cite{santoro2017simple}. \nThe idea is to first process the images through multiple CNN layers. The last layer of the convolutional network chain is treated as the feature vector and is usually augmented with some non-local information such as the absolute position of each point in the final layer of the CNN network. This feature map is then fed into a relational learning unit which is tasked with extracting non-local features.\nVarious techniques have recently been introduced for learning this non-local information from the local feature maps, namely self-attention network models \\cite{vaswani2017attention,santoro2017simple} as well as graph networks \\cite{narayanan2017graph2vec,allamanis2017learning}. Unfortunately, none of the resulting representations from past works is in the form of the predicates needed in ILP.\n\n\nIn our approach, we use networks similar to those discussed earlier to extract non-local information. However, given the relational nature of state representation in our RRL model, we consider three strategies in order to facilitate learning the desired relational state from images. Namely:\n\\begin{enumerate}\n\t\\item \\textbf{Finding a suitable state representation:} In our BoxWorld example, we used the predicate \\texttt{on(X,Y)} to represent the state of the environment. However, learning this predicate requires inferring relations among various objects in the scene. As shown by previous works (e.g., \\cite{santoro2017simple}), this is a difficult task even in the context of a fully supervised setting (i.e., all the labels are provided), which is not applicable here. Alternatively, we propose to use lower-level relations for state representation and build higher-level representations via the predicate language. In the BoxWorld game, for example, we can describe states by the respective position of each box. In particular, we define two predicates \\texttt{posH(X,Y)} and \\texttt{posV(X,Y)} such that variable $X$ is associated with an individual box, whereas $Y$ indicates the horizontal or vertical coordinate of the box, respectively.
Fig.~\\ref{fig:ilp_rrl_state} shows how these new lower-level representations can be transformed into the higher-level description by an appropriate predicate language: \n\t\\begin{align}\n\t\\text{on(X, Y)} &\\leftarrow \\text{posH(X, Z)}, \\text{posH(Y, T)}, \\nonumber\\\\ &\\text{inc(T, Z)}, \\text{sameH(X, Y)} \\nonumber\\\\\n\t\\text{sameH(X, Y)} &\\leftarrow \\text{posH(X, Z)}, \\text{posH(Y, Z)}\n\t\\label{eq:on}\n\t\\end{align}\n\t\\item \\textbf{State constraints:} When applicable, we may incorporate relational constraints in the form of a penalty term in the loss function. For example, in our BoxWorld example we can notice that the vertical position of the floor should always be 0, i.e., \\texttt{posV(floor,0)} should always hold. In general, the choice of relational language makes it possible to pose constraints based on our knowledge regarding the scene. Enforcing these constraints does not necessarily speed up the learning, as we will show in the BoxWorld experiment in Section \\ref{subsec:BoxWorld}. However, it will ensure that the (learned) state representation and consequently the learned relational policy resemble our desired structure of the problem.\n\t\\item \\textbf{Semi-supervised setting:} While it is not desirable to label every single scene that may happen during learning, in most cases it is possible to provide a few labeled scenes to help the model learn the desired state representation faster. These reference points can then be incorporated into the loss function to encourage the network to learn a representation that matches those labeled scenes. We have used a similar approach in the Asterix experiment (see appendix \\ref{app:asterix}) to significantly increase the speed of learning.\n\\end{enumerate}\n\n\n\n\\subsection{Action Representation}\n\\label{subsec:ACTION-REP}\nWe formulate the policy gradient in a form that allows the learning of the actions via one (or multiple) target predicates. These predicates exploit the background facts, the state representation predicates, as well as auxiliary predicates to incorporate higher level concepts. \nIn typical deep policy gradient (DPG) learning, the probability distributions of actions are usually learned by applying a multilayer perceptron with a \\texttt{softmax} activation function in the last layer. In our proposed RRL, the action probability distributions can usually be directly associated with groundings of an appropriate predicate. For example, in the BoxWorld example in Fig.\\ref{fig:boxworld}, we define a predicate \\texttt{move(A,B)} and associate the actions of the agent with the groundings of this predicate. In an ideal case, where there is a deterministic solution to the RRL problem, the predicate \\texttt{move(A,B)} may be learned in such a way that, at each state, only the grounding (corresponding to the correct action) would evaluate to 1 ('true') and all the other groundings of this predicate become 0. In such a scenario, the agent will follow the learned logic deterministically. Alternatively, we may get more than one grounding with value equal to 1 or we may get some fuzzy values in the range of $[0,1]$. \nIn those cases, we estimate the probability distribution of actions, similarly to standard deep policy learning, by applying a \\texttt{softmax} function to the valuation vector of the learned predicate \\texttt{move} (i.e., the value of \\texttt{move(X,Y)} for \\texttt{X,Y}$\\in$ \\texttt{\\{a,b,c,floor\\}}). \n\n\\section{Experiments}\n\\label{sec:EXPERIMENTS}\nIn this section we explore the features of the proposed RRL framework via several examples.
We have implemented\\footnote{The Python implementation of the algorithms in this paper\n\tis available at \\url{https:\/\/github.com\/dnlRRL2020\/RRL}} the models using Tensorflow \\cite{abadi2016tensorflow}.\n\\subsection{BoxWorld Experiment}\n\\label{subsec:BoxWorld}\nThe BoxWorld environment has been widely used as a benchmark in past RRL systems \\cite{dvzeroski2001relational,van2005survey,jiang2019neural}. In these systems the state of the environment is usually given as explicit relational data via groundings of the predicate \\texttt{on(X,Y)}. While ILP-based systems are usually able to solve variations of this environment, they rely on an explicit representation of the state and cannot infer it from the image. Here, we consider the task of stacking boxes on top of each other. We increase the difficulty of the problem compared to the previous examples \\cite{dvzeroski2001relational,van2005survey,jiang2019neural} by considering the order of boxes and requiring that the stack is formed on top of the blue box (the blue box should be on the floor). To make sure the models learn to generalize, we randomly place boxes on the floor at the beginning of each episode. We consider up to 5 boxes. Hence, the scene constants in our ILP setup are the set \\texttt{\\{a,b,c,d,e,floor\\}}. The dimension of the observation images is $64\\times64\\times3$ and no explicit relational information is available to the agents. The action space for the problem involving $n$ boxes is $(n+1)\\times (n+1)$, corresponding to all possibilities of moving a box (or the floor) on top of another box or the floor. Obviously, some of the actions are not permitted, e.g., placing the floor on top of a box or moving a box that is already covered by another box. \n\n\\paragraph{Comparing to Baseline:}\nIn the first experiment, we compare the performance of the proposed RRL technique to a baseline. For the baseline we consider standard deep A2C (with up to 10 agents) and we use the implementation in the \\texttt{stable-baselines} library \\cite{stable-baselines}. We considered both MLP and CNN policies for the deep RL baseline, but we report the results for the CNN policy because of its superior performance. \nFor the proposed RRL system, we use two convolutional layers with kernel size 3, stride 2, and $\\tanh$ activation. We apply two layers of MLP with \\texttt{softmax} activation functions to learn the groundings of the predicates \\texttt{posH(X,Y)} and \\texttt{posV(X,Y)}. Our presumed grid is $(n+1)\\times(n+1)$ and we allow for positional constants \\texttt{\\{0,1,\\dots,n\\}} to represent the locations in the grid in our ILP setting. \nAs a constraint, we add penalty terms to make sure \\texttt{posV(floor,0)} is true. We use vanilla policy gradient learning and, to generate actions, we define a learnable hypothesis predicate \\texttt{move(X,Y)}. Since we have $n+1$ box constants (including the floor), the groundings of this hypothesis correspond to the $(n+1)\\times(n+1)$ possible actions. Since the value of these groundings in dNL-ILP will be between 0 and 1, we generate \\texttt{softmax} logits by multiplying these outputs by a large constant $c$ (e.g., $c=10$). For the target predicate \\texttt{move(X,Y)}, we allow for 6 rules in learning (corresponding to a dNL-DNF function with 6 disjunctions). The complete list of auxiliary predicates, parameters, and weights used in the two models is given in appendix \\ref{app:box}. As indicated in Fig.
\\ref{fig:ilp_rrl_state} and defined in (\\ref{eq:on}), we introduce the predicate \\texttt{on(X,Y)} as a function of the low-level state representation predicates \\texttt{posV(X,Y)} and \\texttt{posH(X,Y)}. We also introduce higher-level concepts using these predicates to define aboveness (i.e., \\texttt{above(X,Y)}) as well as \\texttt{isCovered(X,Y)}. \nFig. \\ref{fig:box_cmp} compares the average success per episode for the two models for the two cases of $n=4$ and $n=5$. The results show that for the case of $n=4$, both models are able to learn a successful policy after around 7000 episodes. For the more difficult case of $n=5$, our proposed approach converges after around 20K episodes, \nwhereas it takes more than 130K episodes for the A2C approach to converge, and even then it fluctuates and does not always succeed.\n\\paragraph{Effect of background knowledge:}\nContrary to standard deep RL, in an RRL approach we can introduce our prior knowledge into the problem via the powerful predicate language. By defining the structure of the problem via ILP, we can explicitly introduce inductive biases \\cite{battaglia2018relational} which would restrict the possible form of the solution. We can speed up the learning process or shape the possible learnable actions even further by incorporating background knowledge. \nTo examine the impact of the background knowledge on the speed of learning, \nwe consider three cases for the BoxWorld problem involving $n=4$ boxes. The baseline model (RRL1) is as described before. In RRL2, we add another auxiliary predicate which defines the movable states as:\n\n\\begin{align*}\n\\text{movable(X,Y)} \\leftarrow \\neg \\text{isCovered(X)}, \\neg \\text{isCovered(Y)}, \\\\\\neg \\text{same(X,Y)}, \\neg \\text{isfloor(X)}, \\neg \\text{on(X,Y)}\n\\end{align*}\nwhere $\\neg$ indicates the negation of a term. In the third model (RRL3), we go one step further and force the target predicate \\texttt{move(X,Y)} to incorporate the predicate \\texttt{movable(X,Y)} in each of the conjunction terms. \nFig. \\ref{fig:box_bk} compares the learning performance of these models in terms of average success rate (in $[0,1]$) vs.\\ the number of episodes.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.380\\textwidth]{box_mlp.png}\n\t\\vspace{-2mm}\n\t\\caption{Comparing deep A2C and the proposed model on the BoxWorld task}\n\t\\label{fig:box_cmp}\n\\end{figure}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.380\\textwidth]{b4_bk.png}\n\t\\vspace{-2mm}\n\t\\caption{Effect of background knowledge on learning BoxWorld}\n\t\\label{fig:box_bk}\n\\end{figure}\n\\paragraph{Interpretability:}\nIn the previous experiments, we did not consider the interpretability of the learned hypothesis. Since all the weights are fuzzy values, even though the learned hypothesis is still a parameterized symbolic function, it does not necessarily represent a valid Boolean formula. \nTo achieve an interpretable result we add a small penalty as described in (\\ref{eq:interpret}). We also add a few more state constraints to make sure the learned representation follows our presumed grid notation (see Appendix \\ref{app:box} for details).
The learned action predicate is found as: \n\\begin{align*}\n\\text{move(X, Y)} &\\leftarrow \\text{movable(X, Y)},\\, \\neg \\text{lower(X, Y)} \\\\\n\\text{move(X, Y)} &\\leftarrow \\text{movable(X, Y)},\\, \\text{isBlue(Y)} \\\\\n\\text{lower(X, Y)} &\\leftarrow \\text{posV(X, Z)},\\, \\text{posV(Y, T)},\\, \\text{lessthan(Z, T)} \n\\end{align*}\n\n\\subsection{GridWorld Experiment}\n\\label{subsec:GridWorld}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{gridworld.png}\n\t\\caption{GridWorld environment \\cite{zambaldi2018relational}} \\label{fig:ilp_gridworld}\n\t\\label{fig:ilp_boxworldaction}\n\\end{figure}\nWe use the GridWorld environment introduced in \\cite{zambaldi2018relational} for this experiment. This environment consists of a $12\\times12$ grid with keys and boxes randomly scattered. It also has an agent, represented by a single dark gray square box. The boxes are represented by two adjacent colors. The square on the right represents the box's lock type, whose color indicates which key can be used to open that lock. The square on the left indicates the content of the box, which is inaccessible while the box is locked. The agent must collect the key before accessing the box.\nWhen the agent has a key, provided that it walks over the lock box with the same color as its key, it can open the lock box, and then it must enter the left box to acquire the new key inside it.\nThe agent cannot get the new key prior to successfully opening the lock box on the right side of the key box.\nThe goal is for the agent to open the gem box colored white. We consider two difficulty levels. In the simple scenario, there is no (dead-end) branch. In the more difficult version, there can be one dead-end branch. An example of the environment and the branching scenarios is depicted in Fig.~\\ref{fig:ilp_gridworld}.\nThis is a very difficult task involving complex reasoning. Indeed, in the original work it was shown that a multi-agent A3C combined with a non-local learning attention model could only start to learn after processing $5\\times10^8$ episodes. To make this problem easier to tackle, \nwe modify the action space to include the location of any point in the grid instead of directional actions. Given this definition of the problem, the agent's task is to give the location of the next move inside the rectangular grid. Hence, the dimension of the action space is $144=12\\times12$. \nFor this environment, we define the predicates \\texttt{color(X,Y,C)}, where $X,Y\\in\\{1,\\dots,12\\}$, $C\\in\\{1,\\dots,10\\}$, and \\texttt{hasKey(C)} to represent the state. Here, variables $X,Y$ denote the coordinates, and the variable $C$ is for the color. Similar to the BoxWorld game, we included a few auxiliary predicates such as \\texttt{isBackground(X,Y)}, \\texttt{isAgent(X,Y)} and \\texttt{isGem(X,Y)} as part of the background knowledge. The representational power of ILP allows us to incorporate our prior knowledge about the problem into the model. As such, we can include some higher-level auxiliary helper predicates such as:\n\\begin{align*}\n\\text{isItem(X, Y)}&\\leftarrow \\ \\neg \\text{isBackground(X, Y)}, \\neg \\text{isAgent(X, Y)} \\\\\n\\text{locked(X, Y)}&\\leftarrow \\ \\text{isItem(X, Y)}, \\text{isItem(X,Z)}, \\text{inc(Y, Z)}\n\\end{align*}\nwhere the predicate \\texttt{inc(X,Y)} defines increments for integers (i.e., \\texttt{inc(n,n+1)} is true for every integer $n$).
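\nFollowing the action representation scheme of Section \\ref{subsec:ACTION-REP}, the groundings of a learnable location predicate over the $12\\times12$ grid can be turned into the 144-way action distribution used here. The sketch below, assuming plain NumPy and the illustrative scaling constant $c=10$ mentioned in the BoxWorld experiment, is only meant to clarify this step and is not tied to the released implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef action_distribution(groundings, c=10.0):\n    # groundings: (12, 12) array of fuzzy predicate values in [0, 1],\n    # e.g. the valuation of the target predicate at every grid location.\n    logits = c * groundings.ravel()      # scale, as described in the text\n    logits -= logits.max()               # numerical stability\n    probs = np.exp(logits)\n    return probs / probs.sum()           # softmax over the 144 actions\n\n# A nearly deterministic valuation puts most of the mass on one cell.\nvals = np.zeros((12, 12))\nvals[3, 7] = 1.0\np = action_distribution(vals)\nprint(p.argmax(), round(p.max(), 3))     # -> 43 0.994\n\\end{verbatim}\n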
\nThe list of all auxiliary predicates as well as the parameters of the neural networks used in this experiment are given in Appendix \\ref{app:grid}. \nSimilar to the previous experiments, we consider two models: an A2C agent as the baseline and our proposed RRL model using the ILP language described in Appendix \\ref{app:grid}.\n\\begin{table}[ht]\n\t\\caption{Number of training episodes required for convergence}\n\t\\label{tbl:results_block}\n\t\\centering \n\t\\begin{tabular}{ l c c }\n\t\t\\toprule\n\t\tmodel & Without Branch & With Branch\\\\\n\t\t\\midrule\n\t\tproposed RRL & 700 & 4500 \\\\\n\t\tA2C & $> 10^8$ & $> 10^8$ \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.35\\textwidth]{bl_bk.png}\n\t\\vspace{-2mm}\n\t\\caption{Effect of background knowledge on learning GridWorld}\n\t\\label{fig:grid_bk}\n\\end{figure}\nWe list the number of episodes it takes to converge in each setting in Table~\\ref{tbl:results_block}. As the results suggest, the proposed approach can learn the solution in both settings very fast. In contrast, the standard deep A2C was not able to converge even after $10^8$ episodes. \nThis example restates the fact that incorporating our prior knowledge regarding the problem can significantly speed up the learning process.\n\nFurther, similar to the BoxWorld experiment, we study the importance of our background knowledge in learning. In the first task (RRL1), we evaluate our model on the non-branching task by enforcing the action to include the \\texttt{isItem(X,Y)} predicate. In RRL2, we do not enforce this. As shown in Fig.~\\ref{fig:grid_bk}, the RRL1 model learns 4 times faster than RRL2. Arguably, this is because enforcing the inclusion of \\texttt{isItem(X,Y)} in the action hypothesis reduces the possibility of exploring irrelevant moves (i.e., moving to a location without any item).\n\\subsection{Relational Reasoning}\n\\label{subsec:SORTOFCLEVER}\nCombining dNL-ILP with standard deep learning techniques is not limited to RRL settings. In fact, the same approach can be used in other \nareas in which we wish to reason about the relations of objects.\nTo showcase this, we consider the relational reasoning task involving the Sort-of-CLEVR \\cite{santoro2017simple} dataset. This dataset (see Fig.~\\ref{subfig:sortofclever}) consists of 2D images of some colored objects. The shape of each object is either a rectangle or a circle and each image contains up to 6 objects. The questions are hard-coded as fixed-length binary strings. Questions are either non-relational (e.g., ``what is the color of the green object?'') or relational (e.g., ``what is the shape of the nearest object to the green object?''). In \\cite{santoro2017simple}, the authors combined a CNN-generated feature map with a special type of attention-based non-local network in order to solve the problem. We use the same CNN and, similar to the GridWorld experiment, we learn the state representation using the predicate \\texttt{color(X,Y,C)} (the color of each cell in the grid) as well as \\texttt{isCircle(X,Y)}, which learns whether the shape of an object is a circle or not. Our proposed approach reaches an accuracy of 99\\% on this dataset compared to 94\\% for the non-local approach presented in \\cite{santoro2017simple}.
The details of the model and the list of predicates in our ILP implementation are given in Appendix \\ref{app:relational}.\n\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\nIn this paper, we proposed a novel deep Relational Reinforcement Learning (RRL) model based on differentiable Inductive Logic Programming (ILP) that can effectively learn relational information from images. We showed how this model can take expert background knowledge and incorporate it into the learning problem using appropriate predicates. The differentiable ILP allows an end-to-end optimization of the entire framework for learning the policy in RRL. We showed the performance of the proposed RRL framework using environments such as BoxWorld and GridWorld.\n\n\n\\nocite{langley00}\n\n\\section{Introduction}\nIn many data science problems, data are available through different views. Generally, the views represent different measurement modalities such as audio and video, or the same text that may be available in different languages. Our main interest here is neuroimaging, where recordings are made from multiple subjects. In particular, it is of interest to find common patterns or responses that are shared between subjects when they receive the same stimulation or perform the same cognitive task \\cite{chen2015reduced,richard2020modeling}. \n\nA popular line of work to perform such shared response modeling is group Independent Component Analysis (ICA) methods. The fastest methods~\\cite{calhoun2001method, varoquaux2009canica} are among the most popular, yet they are not grounded in principled probabilistic models for the multiview setting. \nMore principled approaches exist~\\cite{richard2020modeling, guo2008unified}, but they do not model subject-specific deviations from the shared response. However, such deviations are expected in most neuroimaging settings, as the magnitude of the response may differ from subject to subject \\cite{penny2007random}, as may any noise due to heartbeats, respiratory artefacts or head movements~\\cite{liu2016noise}.\nFurthermore, most GroupICA methods are typically unable to separate components whose density is close to a Gaussian.\n\nIndependent vector analysis (IVA)~\\cite{lee2008independent, anderson2011joint} is a powerful framework where components are independent within views but each component of a given view can depend on the corresponding component in other views. \nHowever, current implementations such as IVA-L~\\cite{lee2008independent},\nIVA-G~\\cite{anderson2011joint}, IVA-L-SOS~\\cite{bhinge2019extraction}, IVA-GGD~\\cite{anderson2014independent} or\nIVA with Kotz distribution~\\cite{anderson2013independent} estimate only the\nview-specific components, and do not model or extract a shared response, which is\nthe main focus in this work.\n\nOn the other hand, the shared response model~\\cite{chen2015reduced} is a popular approach to perform shared response modeling, yet it imposes orthogonality constraints that are restrictive and not biologically plausible.\n\nIn this work we introduce Shared ICA (ShICA), where each view is modeled as a linear transform of shared independent components contaminated by additive Gaussian noise. ShICA allows the principled extraction of the shared components (or responses) in addition to view-specific components.
\nSince it is based on a statistically sound noise model, it enables optimal inference (minimum mean square error, MMSE) of the shared responses.\n\nLet us note that ShICA is no longer the method of choice when the concept of common response is either not useful or not applicable. \nNevertheless, we believe that the ability to extract a common response is an important feature in most contexts because it highlights a stereotypical brain response to a stimulus. Moreover, finding commonality between subjects reduces often unwanted inter-subject variability.\n\nThe paper is organized as follows.\nWe first analyse the theoretical properties of the ShICA model, before providing inference algorithms.\nWe exhibit necessary and sufficient conditions for the ShICA model to be identifiable (previous work only shows local identifiability~\\cite{anderson2014independent}), in the presence of Gaussian or non-Gaussian components. \nWe then use Multiset CCA to fit the model when all the components are assumed to\nbe Gaussian. We exhibit necessary and sufficient conditions for Multiset CCA to\nbe able to recover the unmixing matrices (previous work only gives sufficient\nconditions~\\cite{li2009joint}). In addition, we provide instances of the problem where Multiset CCA cannot recover the mixing matrices while the model is identifiable.\nWe next point out a practical problem : even a small sampling noise\ncan lead to large error in the estimation of unmixing matrices when Multiset CCA is used. To\naddress this issue and recover the correct unmixing matrices, we propose to\napply joint diagonalization to the result of Multiset CCA yielding a new method\ncalled ShICA-J.\nWe further introduce ShICA-ML, a maximum likelihood estimator of ShICA that models non-Gaussian components using a Gaussian mixture model. \nWhile ShICA-ML yields more accurate components, ShICA-J is significantly faster and offers a great initialization to ShICA-ML.\nExperiments on fMRI and MEG data demonstrate that the method outperforms existing GroupICA and IVA methods.\n\n\n\\section{Shared ICA (ShICA): an identifiable multi-view model}\n\\paragraph{Notation} We write vectors in bold letter $\\vb$ and scalars in lower case $a$. Upper case letters $M$ are used to denote\nmatrices. We denote $|M|$ the absolute value of the determinant of $M$. $\\xb \\sim \\Ncal(\\mub, \\Sigma)$ means that $\\xb \\in \\mathbb{R}^k$ follows\na multivariate normal distribution of mean $\\mub \\in \\mathbb{R}^k$ and\ncovariance $\\Sigma \\in \\mathbb{R}^{k \\times k}$. The $j, j$ entry of a diagonal matrix $\\Sigma_i$ is denoted $\\Sigma_{ij}$, the $j$ entry of $\\yb_i$ is denoted $y_{ij}$. Lastly, $\\delta$ is the Kronecker delta.\n\n\\paragraph{Model Definition} In the following, $\\xb_1, \\dots ,\\xb_m \\in \\bbR^p$ denote the $m$ observed random vectors obtained from the $m$ different views. We posit the following generative model, called Shared ICA (ShICA): for $i= 1\\dots m$\n\\begin{equation}\n \\label{eq:model}\n \\xb_i = A_i(\\sbb + \\nb_i)\n\\end{equation}\nwhere $\\sbb \\in \\mathbb{R}^{p}$ contains the latent variables called \\emph{shared components}, $A_1,\\dots, A_m\\in\\bbR^{p\\times p}$ are the invertible mixing matrices, and $\\nb_i \\in\n\\mathbb{R}^{p}$ are \\emph{individual noises}. The individual noises model both the deviations of a view from the mean ---i.e.\\ individual differences--- and measurement noise. 
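\nAs a concrete illustration of the generative model~\\eqref{eq:model}, the short NumPy sketch below simulates observations for $m$ views; the Laplace sources, the diagonal Gaussian noise and all dimensions are arbitrary choices made for the example, consistent with the assumptions stated below.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nm, p, n = 4, 3, 10000        # views, components, samples (arbitrary)\n\n# Unit-variance independent shared components (Laplace, for concreteness).\nS = rng.laplace(size=(p, n)) / np.sqrt(2)\n# Invertible mixing matrices and diagonal noise covariances.\nA = [rng.standard_normal((p, p)) for _ in range(m)]\nSigma = [rng.uniform(0.1, 1.0, size=p) for _ in range(m)]\n\n# x_i = A_i (s + n_i), with view-specific noise n_i ~ N(0, diag(Sigma_i)).\nX = [A_i @ (S + np.sqrt(sig)[:, None] * rng.standard_normal((p, n)))\n     for A_i, sig in zip(A, Sigma)]\n\\end{verbatim}\n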
Importantly, we explicitly model both the shared components and the individual differences in a probabilistic framework to enable an optimal inference of the parameters and the responses.\n\nWe assume that the shared components are statistically independent, and that the individual noises are Gaussian and independent from the shared components:\n$p(\\sbb) = \\prod_{j=1}^p p(s_j)$ and $\\nb_i \\sim\\mathcal{N}(0, \\Sigma_i)$, where the matrices $\\Sigma_i$ are assumed diagonal and positive. Without loss of generality, components are assumed to have unit variance $\\bbE[\\sbb \\sbb^{\\top}] = I_p$. We further assume that there are at least 3 views: $m \\geq 3$. \n\nIn contrast to almost all existing works, we assume that some components (possibly all of them) may be Gaussian, and denote $\\mathcal{G}$ the set of Gaussian components: $\\sbb_j \\sim \\mathcal{N}(0, 1)$ for $j \\in \\mathcal{G}$. The other components are non-Gaussian: for $j\\notin \\mathcal{G}$, $\\sbb_j$ is non-Gaussian.\n\n\n\\paragraph{Identifiability} The parameters of the model are $\\Theta = (A_1, \\dots, A_m, \\Sigma_1, \\dots, \\Sigma_m)$. We are interested in the identifiability of this model: given observations $\\xb_1,\\dots, \\xb_m$ generated with parameters $\\Theta$, are there some other $\\Theta'$ that may generate the same observations?\nLet us consider the following assumption that requires that the individual noises for Gaussian components are sufficiently diverse:\n\\begin{assumption}[Noise diversity in Gaussian components]\n\\label{ass:diversity}\nFor all $j, j' \\in \\mathcal{G}, j \\neq j'$, the sequences $(\\Sigma_{ij})_{i=1 \\dots m}$ and $(\\Sigma_{ij'})_{i=1 \\dots m}$ are different where $\\Sigma_{ij}$ is the $j, j$ entry of $\\Sigma_i$\n\\end{assumption}\n\nIt is readily seen that there is one trivial set of indeterminacies in the problem: if $P \\in \\mathbb{R}^{p \\times p}$ is a sign and permutation matrix (i.e. a matrix which has one $\\pm 1$ coefficient on each row and column, and $0$'s elsewhere) the parameters $(A_1 P, \\dots, A_m P, P^{\\top}\\Sigma_1 P, \\dots, P^{\\top} \\Sigma_m P)$ also generate $\\xb_1,\\dots, \\xb_m$. The following theorem shows that under the above assumption, these are the only indeterminacies of the problem.\n\n\\begin{theorem}[Identifiability]\n\\label{thm:identif}\nWe make Assumption~\\ref{ass:diversity}. We let $\\Theta'=(A_1', \\dots, A_m', \\Sigma_1', \\dots,\\Sigma_m')$ another set of parameters, and assume that they also generate $\\xb_1,\\dots, \\xb_m$. Then, there exists a sign and permutation matrix $P$ such that for all $i$, $A_i'=A_iP$, and $\\Sigma_i'= P^{\\top} \\Sigma_i P$.\n\\end{theorem}\nThe proof is in Appendix~\\ref{proof:identif}. Identifiability in the Gaussian case is a consequence of the identifiability results in~\\cite{via2011joint} and in the general case, local identifiability results can be derived from the work of ~\\cite{anderson2014independent}. \nHowever local identifiability only shows that for a given set of parameters there exists a neighborhood in which no other set of parameters can generate the same observations~\\cite{rothenberg1971identification}. In contrast, the proof of Theorem~\\ref{thm:identif} shows global identifiability.\n\nTheorem~\\ref{thm:identif} shows that the task of recovering the parameters from the observations is a well-posed problem, under the sufficient condition of Assumption~\\ref{ass:diversity}. We also note that Assumption~\\ref{ass:diversity} is necessary for identifiability. 
For instance, if $j$ and $j'$ are two Gaussian components such that $\\Sigma_{ij} = \\Sigma_{ij'}$ for all $i$, then a global rotation of the components $j, j'$ yields the same covariance matrices. The current work assumes $m \\geq 3$; in Appendix~\\ref{app:identifiability} we give an identifiability result for $m=2$ under stronger conditions.\n\n\n\n\\section{Estimation of components with noise diversity via joint-diagonalization}\n\nWe now consider the computational problem of efficient parameter inference. This section considers components with noise diversity, while the next section deals with non-Gaussian components.\n\n\n\\subsection{Parameter estimation with Multiset CCA}\nIf we assume that the components are all Gaussian,\nthe covariances of the observations, given by\n$C_{ij}= \\bbE[\\xb_i\\xb_j^\\top] = A_i(I_p + \\delta_{ij}\\Sigma_i)A_j^{\\top}\\enspace\n$, are sufficient statistics, and methods using only second order information, like Multiset CCA, are candidates to estimate the parameters of the model.\nConsider the\nmatrix $\\mathcal{C} \\in \\bbR^{pm \\times pm}$ containing $m \\times m$ blocks of size $p\n\\times p$\nsuch that the block $i,j$ is given by $C_{ij}$. Consider the matrix $\\mathcal{D}$ identical to $\\mathcal{C}$ except that the non-diagonal blocks are filled with zeros:\n\\begin{equation}\n \\mathcal{C} = \\begin{bmatrix}\n C_{11} & \\dots & C_{1m}\\\\\n \\vdots & \\ddots & \\vdots \\\\\n C_{m1} &\\dots & C_{mm} \n \\end{bmatrix}\n ,\\enspace\n \\mathcal{D} = \\begin{bmatrix}\n C_{11} & \\dots & 0\\\\\n \\vdots & \\ddots & \\vdots \\\\\n 0 &\\dots & C_{mm} \n \\end{bmatrix}. \n\\end{equation} \nGeneralized CCA consists in solving the following generalized eigenvalue problem:\n\\begin{equation}\n\\label{eq:eigv}\n \\mathcal{C} \\ub = \\lambda \\mathcal{D}\\ub,\\enspace \\lambda > 0,\\enspace \\ub\\in\\bbR^{pm} \\enspace .\n\\end{equation}\n \nConsider the matrix $U = [\\ub^1, \\dots, \\ub^p] \\in \\mathbb{R}^{mp \\times p}$ formed by concatenating the $p$ leading eigenvectors of the previous problem ranked in decreasing eigenvalue order. Then, consider $U$ to be formed of $m$ blocks of size $p \\times p$ stacked vertically and define $(W_i)^{\\top}$ to be the $i$-th block. These $m$ matrices are the output of Multiset CCA. We also denote $\\lambda_1 \\geq \\dots \\geq \\lambda_p$ the $p$ leading eigenvalues of the problem.\n \n\nAn application of the results of \\cite{li2009joint} shows that Multiset CCA recovers the mixing matrices of ShICA under some assumptions.\n\\begin{proposition}[Sufficient condition for solving ShICA via Multiset CCA~\\cite{li2009joint}]\nLet $r_{ijk} = (1 + \\Sigma_{ik})^{-\\frac12} (1 + \\Sigma_{jk})^{-\\frac12}$.\nAssume that $(r_{ijk})_k$ is non-increasing. Assume that the maximum eigenvalue $\\nu_k$ of the matrix $R^{(k)}$ with general element $(r_{ijk})_{ij}$ is such that $\\nu_k = \\lambda_k$.\nAssume that $\\lambda_1 \\dots \\lambda_p$ are distinct.\nThen, there exist scale matrices $\\Gamma_i$ such that $W_i = \n\\Gamma_i A_i^{-1}$ for all $i$.\n\\end{proposition}\nThis proposition gives a sufficient condition for solving ShICA with Multiset CCA. It needs a particular structure for the noise covariances as well as a specific ordering of the eigenvalues.
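\nAs a concrete illustration of the estimation step above, a minimal NumPy and SciPy sketch of Multiset CCA applied to sample covariances is given below; it assumes centered views such as those produced by the simulation sketch above, and it ignores the scale and permutation indeterminacies discussed next.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef multiset_cca(X):\n    # X: list of m centered views, each of shape (p, n).\n    m, (p, n) = len(X), X[0].shape\n    C = np.zeros((m * p, m * p))   # block matrix of cross-covariances C_ij\n    D = np.zeros((m * p, m * p))   # its block-diagonal part\n    for i in range(m):\n        for j in range(m):\n            Cij = X[i] @ X[j].T / n\n            C[i * p:(i + 1) * p, j * p:(j + 1) * p] = Cij\n            if i == j:\n                D[i * p:(i + 1) * p, j * p:(j + 1) * p] = Cij\n    # Generalized eigenvalue problem C u = lambda D u (eigh: ascending order).\n    eigvals, eigvecs = eigh(C, D)\n    U = eigvecs[:, ::-1][:, :p]    # p leading eigenvectors, largest first\n    # W_i^T is the i-th (p x p) block of U stacked vertically.\n    return [U[i * p:(i + 1) * p, :].T for i in range(m)]\n\\end{verbatim}\n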
The next theorem shows that we only need $\\lambda_1 \\dots \\lambda_p$ to be distinct for Multiset CCA to solve ShICA:\n\\begin{assumption}[Unique eigenvalues]\n \\label{ass:uniqueeig}\n$\\lambda_1 \\dots \\lambda_p$ are distinct.\n\\end{assumption}\n\\begin{theorem}\n \\label{th:eig}\n We only make\n Assumption~\\ref{ass:uniqueeig}. Then, there exists a permutation matrix $P$ and scale matrices $\\Gamma_i$ such that $W_i = P\\Gamma_i A_i^{-1}$ for all $i$.\n\\end{theorem}\nThe proof is in Appendix~\\ref{proof:eig}. This theorem means that solving the generalized eigenvalue problem~\\eqref{eq:eigv} allows to recover the mixing matrices up to a scaling and permutation: this form of generalized CCA recovers the parameters of the statistical model.\nNote that Assumption~\\ref{ass:uniqueeig} is also a necessary condition. Indeed, if two eigenvalues are identical, the eigenvalue problem is not uniquely determined.\n\nWe have two different Assumptions, \\ref{ass:diversity} and \\ref{ass:uniqueeig}, the first of which guarantees theoretical identifiability as per Theorem~\\ref{thm:identif} and the second guarantees consistent estimation by Multiset CCA as per Theorem~\\ref{th:eig}. Next we will discuss their connections, and show some limitations of the Multiset CCA approach. To begin with, we have the following result about the eigenvalues of the problem~\\eqref{eq:eigv} and the $\\Sigma_{ij}$.\n\\begin{proposition}\n \\label{prop:eigvals_from_noise}\n For $j\\leq p$, let $\\lambda_j$ the largest solution of $ \\sum_{i=1}^m\\frac{1}{\\lambda_j(1 + \\Sigma_{ij}) -\\Sigma_{ij}}=1$. Then, $\\lambda_1, \\dots, \\lambda_p$ are the $p$ largest eigenvalues of problem~\\eqref{eq:eigv}.\n\\end{proposition}\nIt is easy to see that we then have $\\lambda_1, \\dots, \\lambda_p$ greater than $1$, while the remaining eigenvalues are lower than $1$.\nFrom this proposition, two things appear clearly. First, Assumption~\\ref{ass:uniqueeig} implies Assumption~\\ref{ass:diversity}.\nIndeed, if the $\\lambda_j$'s are distinct, then the sequences $(\\Sigma_{ij})_i$ must also be different from the previous proposition.\nThis is expected as from Theorem~\\ref{th:eig}, Assumption~\\ref{ass:uniqueeig} implies identifiability, which in turn implies Assumption~\\ref{ass:diversity}.\n\nProp.~\\ref{prop:eigvals_from_noise} also allows us to derive cases where Assumption~\\ref{ass:diversity} holds but not Assumption~\\ref{ass:uniqueeig}. The following Proposition gives a simple case where the model is identifiable but it cannot be solved using Multiset CCA:\n\\begin{proposition}\n\\label{counter}\nAssume that for two integers $j, j'$, the sequence $(\\Sigma_{ij})_i$ is a permutation of $(\\Sigma_{ij'})_i$, i.e. that there exists a permutation of $\\{1,\\dots, p\\}$, $\\pi$, such that for all $i$, $\\Sigma_{ij} = \\Sigma_{\\pi(i)j'}$. Then, $\\lambda_j = \\lambda_{j'}$.\n\\end{proposition}\nIn this setting, Assumption~\\ref{ass:diversity} holds so ShICA is identifiable, while Assumption~\\ref{ass:uniqueeig} does not hold, so Multiset CCA cannot recover the unmixing matrices.\n\n\n\n\n\\subsection{Sampling noise and improved estimation with joint diagonalization} \\label{sec:samplingnoise}\n\nThe consistency theory for Multiset CCA developed above is conducted under the assumption that the\ncovariances $C_{ij}$ are the true covariances of the model, and not\napproximations obtained from observed samples. 
In practice, however, a serious limitation of Multiset CCA is that even a slight error of estimation on the covariances, due to ``sampling noise'', can yield a large error in the estimation of the unmixing matrices, as will be shown next.\n\nWe begin with an empirical illustration. We take $m=3$, $p=2$, and $\\Sigma_i$ such that $\\lambda_1 = 2 + \\varepsilon$ and $\\lambda_2 =2$ for $\\varepsilon > 0$.\nIn this way, we can control the \\emph{eigen-gap} of the problem, $\\varepsilon$.\nWe take $W_i$ the outputs of Multiset CCA applied to the true covariances $C_{ij}$.\nThen, we generate a perturbation $\\Delta = \\delta \\cdot S$, where $S$ is a random positive symmetric $pm \\times pm$ matrix of norm $1$, and $\\delta >0$ controls the scale of the perturbation. \nWe take $\\Delta_{ij}$ the $p\\times p$ block of $\\Delta$ in position $(i, j)$, and $\\tilde{W}_i$ the output of Multiset CCA applied to the covariances $C_{ij} + \\Delta_{ij}$.\nWe finally compute the sum of the Amari distance between the $W_i$ and $\\tilde{W}_i$: the Amari distance measures how close the two matrices are, up to scale and permutation~\\cite{amari1996new}.\n\\begin{wrapfigure}{r}{.4\\textwidth}\n \n \\centering\n \\includegraphics[width=.99\\linewidth]{figures\/multicca_gap_jd.pdf}\n \\caption{Amari distance between true mixing matrices and estimates of Multiset\n CCA when covariances are perturbed. Different solid curves correspond to different\n eigen-gaps. The black dotted line shows the chance level. When the gap is small, a small perturbation can lead to complete mixing. Joint-diagonalization (colored dotted lines) fixes the problem.}\n \\label{fig:cca_gap}\n \n \\end{wrapfigure}\nFig~\\ref{fig:cca_gap} displays the median Amari distance over 100 random repetitions, as the perturbation scale $\\delta$ increases. The different curves correspond to different values of the eigen-gap $\\varepsilon$. We see clearly that the robustness of Multiset CCA critically depends on the eigen-gap, and when it is small, even a small perturbation of the input (due, for instance, to sampling noise) leads to large estimation errors.\n\n\nThis problem is very general and well studied~\\cite{stewart1973error}: the mapping from matrices to (generalized) eigenvectors is highly non-smooth.\nHowever, the gist of our method is that the \\emph{span} of the leading $p$ eigenvectors is smooth, as long as there is a large enough gap between $\\lambda_p$ and $\\lambda_{p+1}$.\nFor our specific problem we have the following bounds, derived from Prop.~\\ref{prop:eigvals_from_noise}.\n\\begin{proposition}\n We let $\\sigma_{\\max} = \\max_{ij}\\Sigma_{ij}$ and $\\sigma_{\\min} = \\min_{ij}\\Sigma_{ij}$. 
Then, $\\lambda_p \\geq 1 + \\frac{m-1}{1+\\sigma_{\\max}}$, while $\\lambda_{p+1}\\leq 1 - \\frac{1}{1 + \\sigma_{min}}$.\n\\end{proposition}\nAs a consequence, we have $\\lambda_{p} -\\lambda_{p+1} \\geq \\frac{m-1}{1+\\sigma_{\\max}} + \\frac{1}{1+ \\sigma_{\\min}}\\geq \\frac m{1+ \\sigma_{\\max}}$: the gap between these eigenvalues increases with $m$, and decreases with the noise power.\n\n\\begin{wrapfigure}{l}{.45\\textwidth}\n\\begin{minipage}{.45\\textwidth}\n \\begin{algorithm}[H]\n \\caption{ShICA-J}\n \\label{algo:shicaj}\n \\begin{algorithmic}\n \\STATE {\\bfseries Input :} Covariances $\\tilde{C}_{ij} = \\bbE[\\xb_i\\xb_j^{\\top}]$\n \\STATE $(\\tilde{W}_i)_i \\leftarrow \\mathrm{MultisetCCA}((\\tilde{C}_{ij})_{ij})$\n \\STATE $Q \\leftarrow \\mathrm{JointDiag}((\\tilde{W}_i\\tilde{C}_{ii}\\tilde{W}_i^{\\top})_i)$\n \\STATE $\\Gamma_{ij} \\leftarrow Q\\tilde{W}_i\\tilde{C}_{ij}W_j^\\top Q^\\top$\n \\STATE $(\\Phi_i)_i \\leftarrow \\mathrm{Scaling}((\\Gamma_{ij})_{ij})$\n \\STATE \\textbf{Return : } Unmixing matrices $(\\Phi_iQ\\tilde{W}_i)_i$.\n \\end{algorithmic}\n \\end{algorithm}\n\\end{minipage}\n\\end{wrapfigure}\nIn this setting, when the magnitude of the perturbation $\\Delta$ is smaller than $\\lambda_{p}-\\lambda_{p+1}$, ~\\cite{stewart1973error} indicates that $\\mathrm{Span}([W_1, \\dots, W_m]^{\\top})\\simeq \\mathrm{Span}([\\tilde{W}_1,\\dots, \\tilde{W}_m]^\\top)$, where $[W_1, \\dots, W_m]^{\\top}\\in\\bbR^{pm\\times p}$ is the vertical concatenation of the $W_i$'s.\nIn turn, this shows that there exists a matrix $Q\\in\\bbR^{p\\times p}$ such that\n\\begin{equation}\n \\label{eq:justif_jd}\n W_i \\simeq Q\\tilde{W}_i\\enspace \\text{for all} \\enspace i.\n\\end{equation}\nWe propose to use joint-diagonalization to recover the matrix $Q$. Given the $\\tilde{W}_i$'s, we consider the set of symmetric matrices $\\tilde{K}_i = \\tilde{W}_i\\tilde{C}_{ii}\\tilde{W}_i^{\\top}$, where $\\tilde{C}_{ii}$ is the contaminated covariance of $\\xb_i$. Following Eq.~\\eqref{eq:justif_jd}, we have $Q\\tilde{K}_iQ^{\\top} = W_i \\tilde{C}_{ii}W_i^{\\top}$, and using Theorem~\\ref{th:eig}, we have $Q\\tilde{K}_iQ^{\\top} = P\\Gamma_i A_i^{-1}\\tilde{C}_{ii}A_i^{-\\top}\\Gamma_iP^{\\top}$. Since $\\tilde{C}_{ii}$ is close to $C_{ii} = A_i (I_p + \\Sigma_i)A_i^\\top$, the matrix $P\\Gamma_i A_i^{-1}\\tilde{C}_{ii}A_i^{-\\top}\\Gamma_iP^{\\top}$ is almost diagonal.\nIn other words, the matrix $Q$ is an approximate diagonalizer of the $\\tilde{K}_i$'s, and we approximate $Q$ by joint-diagonalization of the $\\tilde{K}_i$'s. In Fig~\\ref{fig:cca_gap}, we see that this procedure mitigates the problems of multiset-CCA, and gets uniformly better performance regardless of the eigen-gap.\nIn practice, we use a fast joint-diagonalization algorithm~\\cite{ablin2018beyond} to minimize a joint-diagonalization criterion for positive symmetric matrices~\\cite{pham2001joint}. The estimated unmixing matrices $U_i = Q\\tilde{W}_i$ correspond to the true unmixing matrices only up to some scaling which may be different from subject to subject: the information that the components are of unit variance is lost. As a consequence, naive averaging of the recovered components may lead to inconsistant estimation. We now describe a procedure to recover the correct scale of the individual components across subjects.\n\n\\textbf{Scale estimation}\nWe form the matrices $\\Gamma_{ij} = U_i\\tilde{C}_{ij}U_j^\\top$. 
In order to estimate the scalings, we solve $\n\\min_{(\\Phi_i)} \\sum_{i\\neq j} \\| \\Phi_i \\diag(\\Gamma_{ij}) \\Phi_j - I_p \\|_F^2$\nwhere the $\\Phi_i$ are diagonal matrices.\nThis function is readily minimized with respect to one of the $\\Phi_i$ by the formula\n$\\Phi_i = \\frac{\\sum_{j \\neq i} \\Phi_j \\diag(\\Gamma_{ij})}{\\sum_{j \\neq i} \\Phi_j^2 \\diag(\\Gamma_{ij})^2}$ (derivations in Appendix~\\ref{app:fixedpoint}). We then iterate the previous formula over $i$ until convergence.\nThe final estimates of the unmixing matrices are given by\n$(\\Phi_i U_i)_{i=1}^m$. The full procedure, called ShICA-J, is summarized in Algorithm~\\ref{algo:shicaj}.\n\n\\subsection{Estimation of noise covariances}\n\nIn practice, it is important to estimate the noise covariances $\\Sigma_i$ in order to take advantage of the fact that some views are noisier than others. As is well known in classical factor analysis, modelling noise variances allows the model to virtually discard variables, or subjects, that are particularly noisy. \n\nUsing the ShICA model with Gaussian components, we derive an estimate for the noise covariances directly from maximum likelihood. We use an expectation-maximization (EM) algorithm, which is especially fast because noise updates are in closed-form. Following derivations given in Appendix~\\ref{conditional_density}, the sufficient statistics in the E-step are given by \n\\begin{align}\n\\label{mmse1}\n\\EE[\\sbb|\\xb]= \\left(\\sum_{i=1}^m \\Sigma_i^{-1} + I \\right)^{-1} \\sum_{i=1}^m \\left(\\Sigma_i^{-1} \\yb_i \\right)\n && \\VV[\\sbb|\\xb]= (\\sum_{i=1}^m \\Sigma_i^{-1} + I)^{-1}\n\\end{align}\nIncorporating the M-step, we get the following updates that only depend on the covariance matrices:\n$\n\\Sigma_i \\leftarrow \\diag(\\hat{C}_{ii} - 2 \\VV[\\sbb | \\xb] \\sum_{j=1}^m \\Sigma_j^{-1} \\hat{C}_{ji} + \\VV[\\sbb | \\xb] \\sum_{j = 1}^m \\sum_{l = 1}^m \\left(\\Sigma_j^{-1} \\hat{C}_{jl} \\Sigma_l^{-1} \\right) \\VV[\\sbb | \\xb] + \\VV[\\sbb | \\xb])\n$\n\n\\section{ShICA-ML: Maximum likelihood for non-Gaussian components}\nShICA-J only uses second order statistics. However, the ShICA model~\\eqref{eq:model} allows for non-Gaussian components. We now propose an algorithm for fitting the ShICA model that combines covariance information with non-Gaussianity in the estimation to optimally separate both Gaussian and non-Gaussian components.\nWe estimate the parameters by maximum likelihood. Since most non-Gaussian\ncomponents in real data are super-Gaussian~\\cite{delorme2012independent, calhoun2006unmixing}, we assume that the non-Gaussian components $\\sbb$ have the super-Gaussian density \\\\ $p(s_j) = \\frac12\\left(\\mathcal{N}( s_j; 0, \\frac12) + \\mathcal{N}( s_j; 0, \\frac{3}{2})\\right) \\enspace.$\n\nWe propose to maximize the log-likelihood using a generalized\nEM~\\cite{neal1998view, dempster1977maximum}. Derivations are available in Appendix~\\ref{app:emestep}.
Like in the previous section, the E-step is in closed-form yielding the following sufficient statistics:\n\\begin{align}\n\\label{mmse2}\n \\EE[s_j | \\xb] = \\frac{\\sum_{\\alpha \\in \\{\\frac12, \\frac32\\}} \\theta_{\\alpha} \\frac{\\alpha \\bar{y}_{j}}{\\alpha + \\bar{\\Sigma_{j}}}}{\\sum_{\\alpha \\in \\{0.5, 1.5\\}} \\theta_{\\alpha}} \\enspace \\text{ and } \\enspace \\VV[s_j | \\xb] = \\frac{\\sum_{\\alpha \\in \\{\\frac12, \\frac32\\}} \\theta_{\\alpha} \\frac{\\bar{\\Sigma_{j}}\\alpha}{\\alpha + \\bar{\\Sigma_{j}}}}{\\sum_{\\alpha \\in \\{0.5, 1.5\\}} \\theta_{\\alpha}} \n\\end{align}\n where $\\theta_{\\alpha} = \\Ncal(\\bar{y}_{j}; 0 , \\bar{\\Sigma}_{j} + \\alpha)$, \n $\\bar{y}_j = \\frac{\\sum_i \\Sigma_{ij}^{-1} y_{ij}}{ \\sum_i\n \\Sigma_{ij}^{-1}}$ and $\\bar{\\Sigma_{j}} = (\\sum_i\n \\Sigma_{ij}^{-1})^{-1}$ with $\\yb_i = W_i \\xb_i$.\nNoise updates are in closed-form and given by:\n$\\Sigma_i \\leftarrow \\diag((\\yb_i - \\EE[\\sbb | \\xb]) (\\yb_i - \\EE[\\sbb | \\xb])^{\\top}+ \\VV[\\sbb | \\xb])$.\nHowever, no closed-form is available for the updates of unmixing matrices. We therefore perform quasi-Newton updates given by\n$W_i \\leftarrow (I - \\rho (\\widehat{\\mathcal{H}^{W_i}})^{-1} \\mathcal{G}^{W_i}) W_i$ where $\\rho \\in \\mathbb{R}$ is chosen by backtracking line-search,\n$\\widehat{\\mathcal{H}^{W_i}_{a, b, c, d}} = \\delta_{ad} \\delta_{bc} +\n\\delta_{ac} \\delta_{bd}\\frac{(y_{ib})^2}{\\Sigma_{ia}}$\nis an approximation of the Hessian\nof the negative complete likelihood and $\\mathcal{G}^{W_i} = -I + (\\Sigma_i)^{-1}(\\yb_i - \\mathbb{E}[\\sbb|\\xb])(\\yb_i)^{\\top}$ is the gradient.\n\nWe alternate between computing the statistics $\\mathbb{E}[\\sbb|\\xb]$, \n$\\mathbb{V}[\\sbb|\\xb]$ (E-step) and updates of parameters $\\Sigma_i$ and $W_i$ for $i=1 \\dots m$ (M-step). Let us highlight that our EM algorithm and in particular the E-step resembles the one used in~\\cite{moulines1997maximum}. However because they assume noise on the sensors and not on the components, their formula for $\\EE[\\sbb| \\xb]$ involves a sum with $2^p$ terms whereas we have only $2$ terms. The resulting method is called ShICA-ML.\n\n\\paragraph{Minimum mean square error estimates in ShICA}\nIn ShICA-J as well as in ShICA-ML, we have a closed-form for the expected components given the data $\\EE[\\sbb | \\xb]$, shown in equation~\\eqref{mmse1} and~\\eqref{mmse2} respectively. This provides minimum mean square error estimates of the shared components, and is an important benefit of explicitly modelling shared components in a probabilistic framework.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\\section{Related Work}\nShICA combines theory and methods coming from different branches of ``component analysis''. It can be viewed as a GroupICA method, as an extension of Multiset CCA, as an Independent Vector Analysis method or, crucially, as an extension of the shared response model. In the setting studied here, ShICA improves upon all existing methods.\n\n\\paragraph{GroupICA}\nGroupICA methods extract independent components from multiple datasets. In its original form\\cite{calhoun2001method}, views are concatenated and then a PCA is applied yielding reduced data on which ICA is applied. One can also reduce the data using Multiset CCA instead of PCA, giving a method called \\emph{CanICA}~\\cite{varoquaux2009canica}. 
Other works~\\cite{Esposito05NI, Hyva11NI} apply ICA separately to the datasets and attempt to match the decompositions afterwards.\nAlthough these works provide very fast methods, they do not rely on a well-defined model like ShICA.\nOther GroupICA methods impose some structure on the mixing matrices, such as the tensorial method of~\\cite{beckmann2005tensorial} or the group tensor model in~\\cite{guo2008unified} (which assumes identical mixing matrices up to a scaling) or \\cite{svensen2002ica} (which assumes identical mixing matrices but different components). In ShICA the mixing matrices are only constrained to be invertible.\nLastly, maximum-likelihood based methods exist, such as\n\\emph{MultiViewICA}~\\cite{richard2020modeling} (MVICA) or the full model\nof~\\cite{guo2008unified}.\nThese methods are weaker than ShICA as they use the same noise covariance across views and lack a principled method for shared response inference.\n\n\\paragraph{Multiset CCA}\nIn its basic formulation, CCA identifies a shared space between two datasets.\nThe extension to more than two datasets is ambiguous, and many\ndifferent generalized CCA methods have been proposed. \\cite{kettenring1971canonical} introduces 6 objective functions that reduce to CCA when $m=2$, and \\cite{nielsen2002multiset} considered 4 different possible constraints, leading to 24 different formulations of Multiset CCA. The formulation used in ShICA-J is referred to in~\\cite{nielsen2002multiset} as SUMCORR with constraint 4, which is one of the fastest as it reduces to solving a generalized eigenvalue problem. The fact that CCA solves a well-defined probabilistic model was first studied in~\\cite{bach2005probabilistic}, where it is shown that CCA is identical to multiple battery factor analysis~\\cite{browne1980factor} (restricted to 2 views). This latter formulation differs from our model in that the noise is added on the sensors and not on the components, which makes the model unidentifiable. Identifiable variants and\ngeneralizations can be obtained by imposing sparsity on the mixing matrices such as in~\\cite{archambeau2008sparse, klami2014group, witten2009extensions} or non-negativity~\\cite{DELEUS2011143}.\nThe work in~\\cite{li2009joint} exhibits a set of sufficient (but not necessary) conditions under which a well-defined model can be learnt by the formulation of Multiset CCA used in ShICA-J. The conditions we exhibit in this work are necessary and sufficient. We further emphasize that basic Multiset CCA provides a poor estimator, as explained in Section~\\ref{sec:samplingnoise}.\n\n\\paragraph{Independent vector analysis}\nIndependent vector analysis~\\cite{lee2008independent} (IVA) models the data as a linear mixture of independent components $\\xb_i = A_i \\sbb_i$ where each component $s_{ij}$ of a given view $i$ can depend on the corresponding component in other views ($(s_{ij})_{i=1}^m$ are not independent).\nPractical implementations of this very general idea assume a distribution for\n$p((s_{ij})_{i=1}^m)$.
In IVA-L~\\cite{lee2008independent}, $p((s_{ij})_{i=1}^m)\n\\propto \\exp(-\\sqrt{\\sum_i (s_{ij})^2})$ (so the variance of each component in\neach view is assumed to be the same); in IVA-G~\\cite{anderson2011joint} or\nin~\\cite{via2011maximum}, $p((s_{ij})_{i=1}^m) \\sim \\mathcal{N}(0, R_{ss})$;\nand~\\cite{engberg2016independent} proposed a normal inverse-Gamma density.\nLet us also mention IVA-L-SOS~\\cite{bhinge2019extraction}, IVA-GGD~\\cite{anderson2014independent} and\nIVA with Kotz distribution~\\cite{anderson2013independent}, which assume a\nnon-Gaussian density general enough so that they can use both second and higher\norder statistics to extract view-specific components.\nThe model of ShICA can be seen as an instance of IVA\nwhich specifically enables extraction of shared components from the subject-specific components, unlike previous versions of IVA. In fact, ShICA comes with minimum mean square error estimates of the shared components,\nwhich are often the quantity of interest.\nThe IVA theory provides global identifiability conditions in the Gaussian case (IVA-G)~\\cite{via2011joint} and local identifiability conditions in the general case~\\cite{anderson2014independent}, from which local identifiability conditions of ShICA could be derived. However, in this work, we provide global identifiability conditions for ShICA.\nLastly, IVA can be performed using joint diagonalization of cross-covariances~\\cite{li2011joint, congedo2012orthogonal}, although multiple matrices have to be learnt and cross-covariances are not necessarily symmetric positive definite, which makes the algorithm slower and less principled.\n\n\\paragraph{Shared response model}\nShICA extracts shared components from multiple datasets, which is also the goal\nof the shared response model (SRM)~\\cite{chen2015reduced}. The robust\nSRM~\\cite{turek2018capturing} also allows capturing subject-specific noise.\nHowever, these models impose orthogonality constraints on the mixing matrices\nwhile ShICA does not.\nDeep variants of SRM exist, such\nas~\\cite{chen2016convolutional}, but while they relax the orthogonality\nconstraint, they are not very easy to train or interpret and have many\nhyper-parameters to tune. ShICA leverages ICA theory to provide a much more powerful model of shared responses.\n\n\\paragraph{Limitations}\nThe main limitation of this work is that the model cannot reduce the dimension inside each view: there are as many estimated sources as sensors. This might be problematic when the number of sensors is very high. In line with other methods, view-specific dimension reduction has to be done by some external method, typically view-specific PCA. Using specialized methods for the estimation of covariances should also be of interest for ShICA-J, as it only relies on sample covariances. Finally, ShICA-ML uses a simple super-Gaussian density model; modelling the non-Gaussianities in more detail should improve its performance.\n\n\\section{Experiments}\nExperiments used Nilearn~\\cite{abraham2014machine} and MNE~\\cite{gramfort2013meg} for fMRI and MEG data\nprocessing respectively, as well as the scientific Python ecosystem:\nMatplotlib~\\cite{hunter2007matplotlib}, Scikit-learn~\\cite{pedregosa2011scikit},\nNumpy~\\cite{harris2020array} and Scipy~\\cite{2020SciPy-NMeth}. We use the Picard algorithm for non-Gaussian ICA~\\cite{ablin2018faster}, and mvlearn for multi-view ICA~\\cite{perry2020mvlearn}. The above libraries use open-source licenses.
fMRI experiments used the following datasets: sherlock~\\cite{chen2017shared}, forrest~\\cite{hanke2014high}, raiders~\\cite{ibc} and gallant~\\cite{ibc}. The data we use do not contain offensive content or identifiable information and consent was obtained before data collection. Computations were run on a large server using up to 100 GB of RAM and 20 CPUs in parallel.\n\\paragraph{Separation performance}\n\\label{sec:rotation}\nIn the following synthetic experiments, data are generated according to model~\\eqref{eq:model} with $p=4$ components and $m=5$ views, and mixing matrices are generated by sampling coefficients from a standardized Gaussian.\nGaussian components are generated from a standardized Gaussian and their noise\nhas standard deviation $\\Sigma_i^{\\frac12}$ (obtained by sampling from a uniform\ndensity between $0$ and $1$), while non-Gaussian components are generated from a\nLaplace distribution and their noise standard deviations are equal. We study 3\ncases where either all components are Gaussian, all components are non-Gaussian,\nor half of the components are Gaussian and half are non-Gaussian.\nWe vary the\nnumber of samples $n$ between $10^2$ and $10^5$ and display in\nFig.~\\ref{exp:rotation} the mean Amari distance across subjects between the true unmixing\nmatrices and the estimates of the algorithms as a function of $n$.\nThe experiment is repeated $100$ times using different seeds. We report the median result and error bars represent the first and last deciles.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/figures\/identifiability2.pdf}\n \\caption{\\textbf{Separation performance}: Algorithms are fit on data following model~\\eqref{eq:model}. \\textbf{(a)} Gaussian components with noise diversity. \\textbf{(b)} Non-Gaussian components without noise diversity. \\textbf{(c)} Half of the components are Gaussian with noise diversity, the other half is non-Gaussian without noise diversity.}\n \\label{exp:rotation}\n\\end{figure}\n\nWhen all components are Gaussian (Fig.~\\ref{exp:rotation}~(a)), CanICA cannot\nseparate the components at all. In contrast, ShICA-J, ShICA-ML, Multiset CCA and\nMVICA are able to separate them, but Multiset CCA needs many more samples than\nShICA-J or ShICA-ML to reach a low Amari distance, which shows that correcting for the rotation due to sampling noise improves the results. Looking at error bars, we also see that the performance of Multiset CCA varies quite a lot with the random seeds: this shows that depending on the sampling noise, the rotation can be very different from identity.\nMVICA needs even more samples than Multiset CCA to reach a low Amari distance but\nstill outperforms CanICA.\n\nWhen none of the components are Gaussian (Fig.~\\ref{exp:rotation}~(b)), only\nCanICA, ShICA-ML and MVICA are able to separate the components, as the other methods do not make use of non-Gaussianity.\nFinally, in the hybrid case (Fig.~\\ref{exp:rotation}~(c)), ShICA-ML is able to\nseparate the components as it can make use of both non-Gaussianity and noise\ndiversity.
In this hybrid case, MVICA can separate the Gaussian components to some extent and therefore does not completely fail, but it is far less reliable than ShICA-ML: it is uniformly worse, and its error bars are very large, showing that for some seeds it gives poor results.
CanICA, ShICA-J and Multiset CCA cannot separate the components at all.
Additional experiments illustrating the separation power of the algorithms are available in Appendix~\\ref{app:separation}.

\\paragraph{Computation time}

\\begin{figure}
 \\centering
 \\begin{subfigure}[b]{0.49\\textwidth}
 \\centering
 \\includegraphics[width=\\textwidth]{.\/figures\/synthetic_gaussian_timings.pdf}
 \\caption{}
 \\label{exp:syn_timings}
 \\end{subfigure}
 \\hfill
 \\begin{subfigure}[b]{0.45\\textwidth}
 \\centering
 \\includegraphics[width=\\textwidth]{.\/figures\/inter_subject_stability.pdf}
 \\caption{}
 \\label{fig:eeg_intragroup_variability}
\\end{subfigure}
 \\caption{\\textbf{Left: Computation time.} Algorithms are fit on data generated from model~\\eqref{eq:model} with a super-Gaussian density. For different values of the number of samples, we plot the Amari distance and the fitting time. Thick lines link median values across seeds. \\textbf{Right: Robustness w.r.t intra-subject variability in MEG.}
 (\\textbf{top}) $\\ell_2$ distance between shared components corresponding to the same stimuli in different trials. (\\textbf{bottom}) Fitting time.}
\\end{figure}
We generate components using a slightly super-Gaussian density: $s_j = d(x)$ with $d(x) = x |x|^{0.2}$ and $x \\sim \\mathcal{N}(0, 1)$. We vary the number of samples $n$ between $10^2$ and $10^4$. We compute the mean Amari distance across subjects and record the computation time. The experiment is repeated $40$ times. We plot the Amari distance as a function of the computation time in Fig~\\ref{exp:syn_timings}. Each point corresponds to the Amari distance\/computation time for a given number of samples and a given seed. For a given number of samples, we then take the median Amari distance and computation time across seeds and plot them in the form of a thick line. From Fig~\\ref{exp:syn_timings}, we see that ShICA-J is the method of choice when speed is a concern, while ShICA-ML yields the best performance in terms of Amari distance at the cost of an increased computation time. The thick lines for ShICA-J and Multiset CCA are quasi-flat, indicating that the number of samples does not have a strong impact on the fitting time, as these methods only work with covariances. On the other hand, the computation time of CanICA or MVICA is more sensitive to the number of samples.

\\paragraph{Robustness w.r.t intra-subject variability in MEG}

In the following experiments we consider the Cam-CAN dataset~\\cite{taylor2017cambridge}. We use the magnetometer data from the MEG of $m=100$ subjects chosen randomly among 496.
In Appendix~\\ref{app:preprocessing} we give more information about the Cam-CAN dataset.
Each subject is repeatedly presented three audio-visual stimuli.
For each stimulus, we divide the trials into two sets and within each set, the MEG signal is averaged across trials to isolate the evoked response.
This\nprocedure yields 6 chunks of individual data (2 per stimulus).\nWe study the similarity between shared components corresponding to repetitions of the same stimulus. This gives a measure of robustness of each ICA algorithm with respect\nto intra-subject variability.\nData are first reduced using a subject-specific PCA with $p=10$ components.\nThe initial dimensionality of the data before PCA is $102$ as we only use the 102 magnetometers.\nAlgorithms are run 10 times with different seeds on the 6 chunks of data,\nand shared components are extracted.\nWhen two chunks of data correspond to repetitions of the same stimulus they should yield similar\ncomponents.\nFor each component and for each stimulus, we therefore measure the $\\ell_2$\ndistance between the two repetitions of the stimulus.\n This yields $300$ distances per algorithm that are\nplotted on Fig~\\ref{fig:eeg_intragroup_variability}.\n\nThe components recovered by ShICA-ML have a much lower variability than other approaches. The performance of ShICA-J is competitive with MVICA while being much faster to fit. Multiset CCA yields satisfying results compared with ShICA-J. However we see that the number of components that do not match at all across trials is greater in Multiset CCA.\n \nAdditional experiments on MEG data are available in Appendix~\\ref{app:phantom}.\n\n\n\n\\paragraph{Reconstructing the BOLD signal of missing subjects}\n\\begin{wrapfigure}{l}{.42\\textwidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{.\/figures\/reconstruction.pdf}\n \\includegraphics[width=0.99\\linewidth]{.\/figures\/reconstruction_timings.pdf}\n \n \\caption{\\textbf{Reconstructing the BOLD signal of\n missing subjects}. (\\textbf{top}) Mean $R^2$ score between reconstructed data and true\n data. (\\textbf{bottom}) Fitting time.\n \n }\n \\label{fig:reconstruction}\n\\end{wrapfigure}\nWe reproduce the experimental pipeline of~\\cite{richard2020modeling} to benchmark GroupICA methods using their ability to reconstruct fMRI data of a left-out subject.\nThe preprocessing involves a dimension reduction step performed using the shared response model~\\cite{chen2015reduced}. Detailed preprocessing pipeline is described in Appendix~\\ref{app:preprocessing}. We call an \\emph{unmixing operator} the product of the dimension\nreduction operator and an unmixing matrix and a \\emph{mixing operator} its pseudoinverse. There is one unmixing operator and one mixing operator per view.\nThe unmixing operators are learned using all subjects\nand $80\\%$ of the runs. Then they are applied on the remaining $20\\%$ of the runs using $80\\%$\nof the subjects yielding unmixed data from which shared components are\nextracted.\nThe unmixed data are combined by averaging (for SRM and other baselines) or using the MMSE estimate for ShICA-J and ShICA-ML.\nWe\nthen apply the mixing operator of the remaining $20\\%$ subjects on the shared components to reconstruct their data.\nReconstruction accuracy is measured via the coefficient of determination, \\aka $R^2$ score, that\nyields for each voxel the relative discrepancy between the true time course and the predicted one.\nFor each compared algorithm, the experiment is run 25 times with different seeds to obtain error bars. We report the mean $R^2$ score across voxels in a region of interest (see Appendix~\\ref{app:preprocessing} for details)\n and display the results in Fig~\\ref{fig:reconstruction}. 
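Reconstruction accuracy here is the standard per-voxel coefficient of determination. A minimal sketch of this metric is given below; the array shapes and names are ours.
\\begin{verbatim}
import numpy as np

def voxelwise_r2(true_data, reconstructed_data):
    """R^2 score computed independently for each voxel.

    Both arrays have shape (n_voxels, n_timepoints). A score of 1 means the
    time course is reconstructed perfectly; predicting the voxel mean gives 0."""
    residual = ((true_data - reconstructed_data) ** 2).sum(axis=1)
    total = ((true_data - true_data.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    return 1.0 - residual / total

# the reported quantity is then the mean of voxelwise_r2(...) taken over the
# voxels of the region of interest
\\end{verbatim}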
The error bars represent a $95\\%$ confidence interval.\nThe chance level is given by the $R^2$ score of an algorithm that samples the\ncoefficients of its unmixing matrices and dimension reduction operators from a\nstandardized Gaussian. The median chance level is below $10^{-3}$ on all\ndatasets.\nShICA-ML yields the best $R^2$ score in all datasets and for any number of\ncomponents. ShICA-J yields competitive results with respect to MVICA\nwhile being much faster to fit. A popular benchmark especially in the SRM\ncommunity is the time-segment matching experiment~\\cite{chen2015reduced}: we\ninclude such experiments in Appendix~\\ref{app:timesegment}.\nIn\nappendix~\\ref{app:table}, we give the performance of ShICA-ML, ShICA-J and MVICA\nin form of a table.\n\n\n\n\n\\section{Conclusion, Future work and Societal impact}\n\nWe introduced the ShICA model as a principled unifying solution to the problems of shared response modelling and GroupICA. ShICA is able to use both the diversity of Gaussian variances and non-Gaussianity for optimal estimation. We presented two algorithms to fit the model: ShICA-J, a fast algorithm that uses noise diversity, and ShICA-ML, a maximum likelihood approach that can use non-Gaussianity on top of noise diversity. ShICA algorithms come with principled procedures for shared components estimation, as well as adaptation and estimation of noise levels in each view (subject) and component. On simulated data, ShICA clearly outperforms all competing methods in terms of the trade-off between statistical accuracy and computation time. On brain imaging data, ShICA gives more stable decompositions for comparable computation times, and more accurately predicts the data of one subject from the data of other subjects, making it a good candidate to perform transfer learning. \nOur code is available at \\url{https:\/\/github.com\/hugorichard\/ShICA}.\n\\footnote{Regarding the ethical aspects of this work, we think this work presents exactly the same issues as any brain imaging analysis method related to ICA.}\n\\clearpage\n\\paragraph{Acknowledgement and funding disclosure}\nThis work has received funding\nfrom the European Union's Horizon 2020 Framework Programme for Research and Innovation under\nthe Specific Grant Agreement No. 945539 (Human Brain Project SGA3), the KARAIB AI chair\n(ANR-20-CHIA-0025-01), the Grant SLAB ERC-StG-676943 and the BrAIN AI chair (ANR-20-CHIA-0016). PA acknowledges funding by the French government under management of Agence Nationale de la Recherche as part of the \"Investissements d'avenir\" program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute). AH received funding from a CIFAR Fellowship. 
\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\nIn quantum information science,\nmany figures of merit such as fidelity and von Neumann entropy \\cite{Nielsen2010} are utilized to characterize a quantum state.\nQuantum state tomography (QST) \\cite{James2001},\nby which a quantum density operator of an unknown quantum state is identified,\nis the most comprehensive method for deriving them.\nRecently, QST for photonic high-dimensional quantum states (qudits) \\cite{Thew2002} has been intensively investigated for\nentanglements based on orbital angular momentum \\cite{Agnew2011},\nfrequency bins \\cite{Bernhard2013},\nand time-energy uncertainty \\cite{Richart2014}.\nObservation of high-dimensional multipartite entanglement has also been reported \\cite{Malik2016}.\nFor time-bin qudits, which are promising candidates for transmission over an optical fiber,\nQST based on the conversion between time-bin states and polarization states has been performed \\cite{Nowierski2015}.\nQST generally requires $(d^2 - 1)$ different measurements for a state in $d$ dimensional Hilbert spaces\nbecause a general mixed state is characterized by $(d^2 - 1)$ real numbers.\nThus, it is important to reduce the number of measurement settings for high-dimensional QST.\nFor time-bin qubits,\nQST has been performed with a single delay Mach-Zehnder interferometer (MZI) \\cite{Takesue2009},\nwhich simultaneously constructed measurements projecting on two time-bin basis states and a superposition state of the time-bin basis.\nIn this paper, we propose an efficient scheme to implement QST for time-bin qudits utilizing cascaded delay MZIs \\cite{Ikuta2016a,Richart2012}.\nThanks to the simultaneous construction of the different measurements,\nthe number of measurement settings scales linearly with dimension $d$.\n\n\n\n\n\\section{\\label{sec:QSTDetail}Measurements with cascaded MZIs}\n\\subsection{\\label{sec:BasicQST}Basic concept}\n\nFirst, we give a general description of QST.\nA $d$-dimensional density operator $\\op{\\rho}$ can be expressed as $\\op{\\rho} = \\sum_{i=0}^{d^2-1} g_i \\op{G}_i$,\nwhere $\\op{G}_i$ is the generalized Gell-Mann matrix defined in \\cite{Thew2002}\nand $g_i$ is a real number.\n$g_0$ is usually fixed to $1\/d$ to be $\\mathrm{Tr}\\left( \\op{\\rho} \\right) = 1$,\nbecause $\\op{G}_i$ is traceless for $i \\geq 1$ and $\\op{G}_0$ is the identity operator $\\op{I}_d$.\nWhen we repeat a measurement represented by a projector $\\op{P}_j$ for $N$ photons,\nthe expected values of the photon counts $n_j^E$ is given by\n\\begin{equation}\nn_j^E = N \\mathrm{Tr} \\left( \\op{P}_j \\op{\\rho} \\right) = N \\sum_{i = 0}^{d^2 - 1} A_{ij} g_i\t,\t\\label{eq:ConceptOfQST}\n\\end{equation}\nwhere $A_{ij} = \\mathrm{Tr} \\left( \\op{P}_j \\op{G}_i \\right)$.\nWe can estimate $N$ and $g_i$ by multiplying the inverse matrix of $A_{ij}$ from the left of \\eref{eq:ConceptOfQST}.\nThus, the problem remaining to complete QST is how to prepare a set of measurements\nthat correspond to $\\op{P}_j$ for constructing $A_{ij}$ with rank $d^2$.\n\nTo prepare such a set of measurements for time-bin qudits,\nwe use cascaded MZIs.\n\\Fref{fig:CMZI} shows the concept of the measurements with the cascaded MZIs for a four-dimensional time-bin state.\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure01_ConceptOfCMZI.eps}\n\\caption{Concept of QST utilizing cascaded MZIs.}\n\\label{fig:CMZI}\n\\end{figure}\nThe 2-bit delay 
MZI has time delay $2T$ and phase difference $\\theta_2$,\nwhere $T$ denotes the temporal interval of time slots constituting the time-bin basis\nand $\\theta_2$ is the phase difference between the short and the long arms of the 2-bit delay MZI.\nThe 1-bit delay MZI has time delay $T$ and phase difference $\\theta_1$.\nThe output ports of the 2-bit delay MZI, $p_{2x}$ and $p_{2y}$, are connected to the input port of the 1-bit delay MZI and photon detector D2, respectively.\nThe output port of the 1-bit delay MZI, $p_{1x}$, is connected to photon detector D1,\nand the other output port, $p_{1y}$, is terminated.\nWhen the time-bin qudit is launched into the cascaded MZIs,\nD1 can detect a photon in a superposition of four different input states.\nOn the other hand,\nD2 cannot,\nbut it can detect a photon projected on the time-bin basis, which D1 cannot.\nTherefore, the information obtained from D1 and D2 are intrinsically different.\nWe utilize the number of photons detected by D1 and D2 at different detection times as $n_j^E$ in \\eref{eq:ConceptOfQST}.\n\nIn what follows, we describe the measurements by the cascaded MZIs in more detail.\nThe basis for the four-dimensional time-bin state is given by state $\\ket{k} (k \\in [0, 3])$\nin which a photon exists in the $k$th time slot.\nWhen pure state $\\ket{k}$ is launched into the 2-bit delay MZI,\nthe output state at port $p_{2x}$ is $\\op{M}_{2x} \\ket{k}$,\nwhere generalized measurement operator $\\op{M}_{2x}$ is given by\n\\begin{equation}\n\\op{M}_{2x} = \\frac{1}{2} \\sum_{k=0}^3 \\left( \\ket{k} + e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}\t\t.\n\\end{equation}\nSimilarly,\nwe can obtain the operators representing the measurements of each MZI at ports $p_{2y}, p_{1x}$, and $p_{1y}$ as follows.\n\\begin{eqnarray}\n\\op{M}_{2y} &=& \\frac{1}{2} \\sum_{k=0}^3 \\left( - \\ket{k} + e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}\t\t,\t\\\\\n\\op{M}_{1x} &=& \\frac{1}{2} \\sum_{k=0}^5 \\left( \\ket{k} + e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}\t\t,\t\\\\\n\\op{M}_{1y} &=& \\frac{1}{2} \\sum_{k=0}^5 \\left( -\\ket{k} + e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}\t\t.\n\\end{eqnarray}\n\nPhoton detectors D1 and D2 detect a photon at different detection times, $t_l$, for $l \\in [0, 6]$,\nwhich correspond to the projection measurements $\\op{M}_D = \\ket{l} \\! 
\\bra{l}$.\nTherefore,\nthe expected value $n^E_{D1 l \\theta_1 \\theta_2}$ of the photons detected by D1 at time $t_l$ is given by\n\\begin{eqnarray}\nn^E_{D1 l \\theta_1 \\theta_2} &=& N \\mathrm{Tr} \\left(\n\t\\op{M}_D\n\t\t\\op{M}_{1x}\n\t\t\t\\op{M}_{2x}\n\t\t\t\t\\op{\\rho}\n\t\t\t\\op{M}_{2x}^{\\dag}\n\t\t\\op{M}_{1x}^{\\dag}\n\t\\op{M}_D^{\\dag}\n\\right)\t\t\\\\\n\t&=& N \\mathrm{Tr} \\left( \\op{E}_{l \\theta_1 \\theta_2}^{D1} \\op{\\rho} \\right)\t\t,\t\\label{eq:E(Count)ltt}\n\\end{eqnarray}\nwhere we define the element of the positive operator valued measure\n$\\op{E}_{l \\theta_1 \\theta_2}^{D1} = \\op{M}_{2x}^{\\dag} \\op{M}_{1x}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{1x} \\op{M}_{2x}$.\nThe element of the positive operator valued measure for D2 is similarly defined as \n$\\op{E}_{l \\theta_1 \\theta_2}^{D2} = \\op{M}_{2y}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{2y}$.\nTo see what the measurement is performed by $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ for $DX \\in \\{D1, D2\\}$,\nit is convenient to estimate the simplified forms of $\\op{M}_D \\op{M}_{1x} \\op{M}_{2x}$ and $\\op{M}_D \\op{M}_{2y}$.\nFortunately,\n$\\op{M}_D$ is the projection onto the $l$th time slot for output states;\nthus, they have the simplified forms as $w_{DX l} \\ket{l} \\bra{\\psi^{DX}_{l \\theta_1 \\theta_2}}$,\nwhere $w_{DX l}$ is a complex weight\nand $\\ket{\\psi^{DX}_{l \\theta_1 \\theta_2}}$ is a normalized state in four-dimensional state.\nAll the simplified forms of the measurement operators are summarized in \\tref{tab:OpList}.\nTherefore,\n$\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ returns the measurement result by the projector $\\ket{\\psi^{DX}_{l \\theta_1 \\theta_2}} \\bra{\\psi^{DX}_{l \\theta_1 \\theta_2}}$,\nexcluding the difference in weight $|w_{DX l}|^2$.\nThe simplified forms are easier to understand,\nbut the multiplication forms like $\\op{M}_D \\op{M}_{1x} \\op{M}_{2x}$ are more convenient for expanding the dimension\nor compensating for the imperfections due to measurement equipment as described later.\n\n\n\\begin{table}\n\t\\caption{\\label{tab:OpList}Measurement operators at different detection times and detector.}\n\t\\begin{indented}\n\t\\lineup\n\t\\item[]\n\t\t\\begin{tabular}{@{}ccr@{}r@{}r@{}r@{}r@{}r}\n\t\t\t\\br\n\t\t\tDetector & Detection time & \\multicolumn{6}{c}{Measurement operator}\t\\\\\n\t\t\t\\mr\n\t\t\tD1 & $t_0$ & $\\frac{1}{4} \\ket{0}$ & & $\\bra{0}$ &&& \\\\\n\t\t\t & $t_1$ & $\\frac{1}{4} \\ket{1}$ & $($ & $\\bra{1}$ & $+e^{i \\theta_1}\\bra{0}$ && $)$\t\\\\\n\t\t\t & $t_2$ & $\\frac{1}{4} \\ket{2}$ & $($ & $\\bra{2}$ & $+e^{i \\theta_1}\\bra{1}$ & $+e^{i \\theta_2}\\bra{0}$ & $)$\t\\\\\n\t\t\t & $t_3$ & $\\frac{1}{4} \\ket{3}$ & $($ & $\\bra{3}$ & $+e^{i \\theta_1}\\bra{2}$ & $+e^{i \\theta_2}\\bra{1}$ & $+e^{i (\\theta_1 + \\theta_2)}\\bra{0})$\t\\\\\n\t\t\t & $t_4$ & $\\frac{1}{4} \\ket{4}$ & $($ && $e^{i \\theta_1}\\bra{3}$ & $+e^{i \\theta_2}\\bra{2}$ & $+e^{i (\\theta_1 + \\theta_2)}\\bra{1})$\t\\\\\n\t\t\t & $t_5$ & $\\frac{1}{4} \\ket{5}$ & $($ &&& $e^{i \\theta_2}\\bra{3}$ & $+e^{i (\\theta_1 + \\theta_2)}\\bra{2})$\t\\\\\n\t\t\t & $t_6$ & $\\frac{1}{4} \\ket{6}$ & $($ &&&& $e^{i (\\theta_1 + \\theta_2)}\\bra{3})$\t\\\\\n\t\t\t\\cline{2-8}\n\t\t\tD2 & $t_0$ & $-\\frac{1}{2} \\ket{0}$ && $\\bra{0}$ &&&\t\t\\\\\n\t\t\t & $t_1$ & $-\\frac{1}{2} \\ket{1}$ && $\\bra{1}$ &&&\t\t\\\\\n\t\t\t & $t_2$ & $-\\frac{1}{2} \\ket{2}$ & $($ & $\\bra{2}$ && $-e^{i \\theta_2}\\bra{0}$ & $)$\t\\\\\n\t\t\t & $t_3$ & $-\\frac{1}{2} \\ket{3}$ & $($ & 
$\\bra{3}$ && $-e^{i \\theta_2}\\bra{1}$ & $)$\t\\\\\n\t\t\t & $t_4$ & $-\\frac{1}{2} \\ket{4}$ & $($ &&& $-e^{i \\theta_2}\\bra{2}$ & $)$\t\\\\\n\t\t\t & $t_5$ & $-\\frac{1}{2} \\ket{5}$ & $($ &&& $-e^{i \\theta_2}\\bra{3}$ & $)$\t\\\\\n\t\t\t\\br\n\t\t\\end{tabular}\n\t\\end{indented}\n\\end{table}\n\n\n\nAs in the QST for qubits,\nwe need to rotate $\\theta_1$ and $\\theta_2$ to complete the QST for qudits.\nWe use the same combinations of phase differences $\\theta_1$ and $\\theta_2$ utilized\nfor the time-energy entangled qudits \\cite{Richart2014}.\nThe total Hilbert space of the time-energy entangled qudits is spanned by two different logical qubits.\nOne is the qubit defined by the short and the long arms of the 1-bit delay MZI,\nand the other is the qubit defined by the short and the long arms of the 2-bit delay MZI.\nTherefore,\nthe high-dimensional QST is performed by the combination of the QST for logical qubits.\nSetting the phase differences between the arms at $0$ and $\\pi\/2$ corresponds\nto the measurements by the Pauli matrices $\\sigma_x$ and $\\sigma_y$ \\cite{Nielsen2010} for logical qubits, respectively.\nTherefore, combinations of phase differences $(\\theta_1, \\theta_2) = (0,0), (0, \\pi\/2), (\\pi\/2, 0)$, and $(\\pi\/2, \\pi\/2)$\nare sufficient to obtain the information about the phase of the qudits.\n\nOn the other hand,\nQST for qubits usually requires a measurement corresponding to the Pauli matrix $\\sigma_z$,\nwhich implies that it requires measurements without interference.\nThe measurement corresponding to $\\sigma_z$ for both the logical qubits are performed by D2 at $t_0, t_1, t_4$ and $t_5$,\nbecause the states $\\ket{\\psi^{D2}_{l \\theta_1 \\theta_2}}$ at these times are single time-bin basis states that correspond to eigenstates of $\\sigma_z$.\nHowever,\nwe need to prepare not only a $\\sigma_z \\otimes \\sigma_z$ measurement for logical qubits that doesn't completely interfere\nbut also measurements that partially interfere like a $\\sigma_z \\otimes \\sigma_x$ measurement.\nFrom this point,\nthe measurements by D1 at different detection times play an important role in the proposed scheme,\nbecause the interference pattern of the measurement $\\op{E}_{l \\theta_1 \\theta_2}^{D1}$ depends on detection time $t_l$ as shown in \\fref{fig:CMZI} and \\tref{tab:OpList}.\nIn other words,\nthe combination of the time-bin basis constituting $\\ket{\\psi^{D1}_{l \\theta_1 \\theta_2}}$ varies depending on the detection time.\nThe measurement at $t_0$ by D1 corresponds to the projection onto the single time-bin basis $\\ket{0}$,\nthe measurement at $t_1$ by D1 corresponds to the projection onto a superposition of $\\ket{0}$ and $\\ket{1}$, and so on.\n\n\nConsidering these characteristics of $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ described above,\nit is expected that the QST for time-bin qudits can be performed only by switching $\\theta_1$ and $\\theta_2$,\nwhich is confirmed by comparing \\eref{eq:ConceptOfQST} and \\eref{eq:E(Count)ltt}\nand by estimating the rank of $A_{ij}$.\nThe proposed scheme can be extended to general $d$-dimensional QST by adding extra MZIs.\nThe number of the MZIs for $d$-dimensional QST is $K$ given by $\\lceil \\log_2 d \\rceil$,\nwhere $\\lceil x \\rceil$ is the ceiling function for $x \\in \\mathbb{R}$.\nThe $K$ delay MZIs have different delay times $2^{i-1} T$ and phase differences $\\theta_i$ for $1 \\leq i \\leq K$.\nEach $\\theta_i$ takes $0$ and $\\pi \/ 2$ independently;\nthus, the number of measurement settings scales 
linearly with $d$.\n\nIt should be noted that we can implement QST for time-bin qudits without D2,\nwhich is confirmed from the rank of $A_{ij}$.\nHowever,\nD2 not only detects the photon which would be lost without it\nbut also collects information different from that obtained by D1.\nFor example,\nD1 cannot implement the measurement corresponding to the projection onto $\\ket{1}$, which D2 can.\nThis implies that D2 observes the same state from a different angle on the high-dimensional Bloch sphere.\nTherefore, the addition of D2 effectively improves the accuracy of the QST in the same measurement time.\n\n\n\n\\subsection{\\label{sec:LossComp}Compensation for imperfections}\n\nThe measurements described in subsection \\ref{sec:BasicQST} are ideal ones without imperfection.\nIn practice,\nthere are no ideal 50 : 50 beam splitters and no photon detectors with $100 \\%$ detection efficiency.\nFurthermore, when we utilize delay MZIs made with planar light wave circuit technology (PLC),\nthe difference in the optical path length between the long and the short arms causes imperfection due to medium loss.\nHowever,\nthe following modifications of the measurement operators can compensate for such imperfections:\n\\begin{eqnarray}\n\\op{M}_{2x} &=& \\frac{\\sum_{k=0}^3 \\left( \\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{2x} } e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{2x} \\right)}} \t\t,\t\\label{eq:CompM2x}\t\\\\\n\\op{M}_{2y} &=& \\frac{\\sum_{k=0}^3 \\left( - \\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{2y} } e^{i \\theta_2} \\ket{k+2} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{2y} \\right)}} \t\t,\t\\label{eq:CompM2y}\t\\\\\n\\op{M}_{1x} &=& \\frac{\\sum_{k=0}^5 \\left( \\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{1x} } e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{1x} \\right)}} \t\t,\t\\label{eq:CompM1x}\t\\\\\n\\op{M}_{1y} &=& \\frac{\\sum_{k=0}^5 \\left( -\\ket{k} + \\sqrt{\\mathit{\\Delta} \\eta_{1y} } e^{i \\theta_1} \\ket{k+1} \\right) \\bra{k}}{\\sqrt{2\\left( 1+\\mathit{\\Delta} \\eta_{1y} \\right)}} \t\t,\t\\label{eq:CompM1y}\t\\\\\n\\op{E}_{l \\theta_1 \\theta_2}^{D1} &=& \\mathit{\\Delta} \\eta_{1} \\op{M}_{2x}^{\\dag} \\op{M}_{1x}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{1x} \\op{M}_{2x}\t,\t\\label{eq:CompED1}\t\\\\\n\\op{E}_{l \\theta_1 \\theta_2}^{D2} &=& \\op{M}_{2y}^{\\dag} \\op{M}_D^{\\dag} \\op{M}_D \\op{M}_{2y}\t,\t\\label{eq:CompED2}\n\\end{eqnarray}\nwhere $\\mathit{\\Delta} \\eta_{2x}, \\mathit{\\Delta} \\eta_{2y}, \\mathit{\\Delta} \\eta_{1x}, \\mathit{\\Delta} \\eta_{1y},$ and $\\mathit{\\Delta} \\eta_{1}$ are relative transmittances.\nRelative transmittances are the ratios between the transmittances depending on the optical paths and detectors.\nWe utilizes the relative values rather than absolute ones for experimental and theoretical convenience.\nThe use of the relative values decreases the expected value of the total photon number $N$ obtained by QST;\nthus, it is not an accurate modification in this sense.\nHowever,\nthe expected density operator $\\op{\\rho}$ will not change because $\\op{\\rho}$ is determined by the relative values of the photon counts.\nTherefore,\nthe use of the relative values is justified for the purpose of QST.\n\n\n\n\\subsection{\\label{sec:MLE}Maximum likelihood estimation}\n\nAs we mentioned in subsection \\ref{sec:BasicQST},\nQST for time-bin qudits can be performed by linear conversion of 
\\eref{eq:ConceptOfQST}.\nHowever,\nit is well known that\nthe density operator obtained by linear conversion does not often satisfy positivity,\nwhich implies the estimated density operator is unphysical \\cite{James2001}.\nMaximum likelihood estimation (MLE) is often used to avoid this problem \\cite{Agnew2011,James2001,Richart2014,Takesue2009}.\nFirst,\nwe use another representation of $\\op{\\rho}$ to enforce positivity as follows:\n\\begin{eqnarray}\n\\op{\\rho} &=& \\frac{\\op{R}^\\dag \\op{R}}{ \\mathrm{Tr} \\left( \\op{R}^\\dag \\op{R} \\right)}\t\t,\t\\\\\nN &=& \\mathrm{Tr} \\left( \\op{R}^\\dag \\op{R} \\right)\t,\n\\end{eqnarray}\nwhere $\\op{R}$ is an operator having a triangular form \\cite{James2001}.\nMLE is performed by finding $\\op{R}$ that minimizes the likelihood function $L\\left(\\op{R}\\right)$ given by\n\\begin{equation}\nL\\left(\\op{R}\\right) = \\sum_j \\left[ \\frac{\\left(n_j^M - n_j^E \\right)^2}{n_j^E} + \\ln n_j^E\t\\right]\t,\t\\label{eq:MLE}\n\\end{equation}\nwhere $n_j^M$ is the measured photon count and $n_j^E$ is the expected photon count in \\eref{eq:ConceptOfQST}.\nThe summation over $j$ is calculated for $j$ indicating different measurements.\nNote that we add $\\ln n_j^E$ to the likelihood function given in \\cite{James2001}.\nThe likelihood function is derived from the probability of obtaining a set of photon counts $n_j^M$,\nwhich is given by\n\\begin{equation}\nP = \\frac{1}{N_{norm}} \\prod_j \\exp \\left[ - \\frac{\\left(n_j^M - n_j^E \\right)^2}{2 \\sigma_j^2} \\right]\t,\n\\end{equation}\nwhere $N_{norm}$ is the normalization constant and $\\sigma_j \\approx \\sqrt{n_j^E}$ is the standard deviation for the $j$th measurement.\nHowever,\nthe normalization constant $N_{norm}$ can be approximated by $\\prod_j \\sqrt{2\\pi}\\sigma_j$ with Gaussian approximation,\nwhich leads to the additional term $\\ln n_j^E$.\nTo perform MLE according to \\eref{eq:MLE},\nwe need to precisely map $n^E_{D1 l \\theta_1 \\theta_2}$ and $n^E_{D2 l \\theta_1 \\theta_2}$ to $n_j^E$\nbecause the intrinsically same measurements exist in the measurement settings.\nFor example,\nthe measurement at $t_0$ by D1 corresponding to the projection onto $\\ket{0}$ does not depend on $\\theta_1$ and $\\theta_2$.\nFor this purpose,\nwe introduce space $V_j$,\nwhich satisfies the following conditions:\n\\begin{eqnarray}\n^\\forall \\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j \\ , \\ ^\\forall \\left( DX', l', \\theta_1', \\theta_2' \\right) \\in V_{j'}\t\\nonumber\\\\\n\\frac{\\op{E}_{l \\theta_1 \\theta_2}^{DX}}{\\mathrm{Tr} \\left( \\op{E}_{l \\theta_1 \\theta_2}^{DX} \\right)}\n=\n\\frac{\\op{E}_{l' \\theta_1' \\theta_2'}^{DX'}}{\\mathrm{Tr} \\left( \\op{E}_{l' \\theta_1' \\theta_2'}^{DX'} \\right)}\n\\qquad\n\\mbox{for}\n\\qquad\nj=j'\t,\t\t\\label{eq:Vcondition1}\n\\\\\n\\frac{\\op{E}_{l \\theta_1 \\theta_2}^{DX}}{\\mathrm{Tr} \\left( \\op{E}_{l \\theta_1 \\theta_2}^{DX} \\right)}\n\\neq\n\\frac{\\op{E}_{l' \\theta_1' \\theta_2'}^{DX'}}{\\mathrm{Tr} \\left( \\op{E}_{l' \\theta_1' \\theta_2'}^{DX'} \\right)}\n\\qquad\n\\mbox{for}\n\\qquad\n j \\neq j'\t.\t\\label{eq:Vcondition2}\n\\end{eqnarray}\nSpace $V_j$ is numerically generated via a comparison according to \\eref{eq:Vcondition1} and \\eref{eq:Vcondition2}.\nBy utilizing $V_j$,\nwe can map $n^E_{D1 l \\theta_1 \\theta_2}$ and $n^E_{D2 l \\theta_1 \\theta_2}$ to $n_j^E$ as follows:\n\\begin{eqnarray}\nn_j^E &=& \\sum_{\\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j} n^E_{D1 l \\theta_1 \\theta_2}\t\t\\\\\n&=& N 
\\mathrm{Tr} \\left( \\op{E}_{j} \\op{\\rho} \\right)\t,\n\\end{eqnarray}\nwhere $\\op{E}_{j} = \\sum_{\\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j} \\op{E}_{l \\theta_1 \\theta_2}^{DX}$.\nSimilarly, we obtain $n_j^M$,\nand now we can perform the QST for time-bin qudits by MLE.\n\n\n\\subsection{\\label{SumOfProc}Summary}\n\n\nHere,\nwe summarize the proposed QST procedure.\n\n\nFirst,\nwe measure the relative transmittances\n$\\mathit{\\Delta} \\eta_{2x}, \\mathit{\\Delta} \\eta_{2y}, \\mathit{\\Delta} \\eta_{1x}, \\mathit{\\Delta} \\eta_{1y},$ and $\\mathit{\\Delta} \\eta_{1}$,\nwith which we estimate the measurement operators $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ according to\n\\eref{eq:CompM2x}--\\eref{eq:CompED2}.\nThen,\nwe generate space $V_j$ from $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ according to \\eref{eq:Vcondition1} and \\eref{eq:Vcondition2}\nand prepare $\\op{E}_{j} = \\sum_{\\left(DX, l, \\theta_1, \\theta_2 \\right) \\in V_j} \\op{E}_{l \\theta_1 \\theta_2}^{DX}$.\n\n\n\nNext,\nwe perform photon count measurement\nby switching combinations of phase differences $(\\theta_1, \\theta_2) = (0,0), (0, \\pi\/2), (\\pi\/2, 0)$, and $(\\pi\/2, \\pi\/2)$\nand obtain $n^M_{DX l \\theta_1 \\theta_2}$.\nAfter the measurement,\n$n^M_{DX l \\theta_1 \\theta_2}$ is reduced into $n_j^M$ by using space $V_j$.\n\n\nFinally,\nwe find $\\op{R}$ minimizing the likelihood function $L\\left(\\op{R}\\right)$ with $n_j^M$ and $\\op{E}_{j}$\nand obtain the reconstructed density operator $\\op{\\rho}$.\nWhen we perform the QST for the multi-photon state,\nwe extend the procedure as in \\cite{James2001,Thew2002}\nby replacing $\\op{E}_{l \\theta_1 \\theta_2}^{DX}$ and $n^M_{DX l \\theta_1 \\theta_2}$ with its tensor production and coincidence count,\nrespectively.\n\n\n\\section{Experimental setup}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure02_ExpSetup.eps}\n\\caption{Experimental setup.\nCW: Continuous wave laser.\nIM: Intensity modulator.\nEDFA: Erbium-doped fiber amplifier.\nPC: Polarization controller.\nFBG: Fiber Bragg grating filter.\nVATT: Optical variable attenuator.\nPPLN: Periodically poled lithium niobate waveguide.\nBPF: Optical band-pass filter.\nWDM: Wavelength demultiplexing filter.\nPol: Polarizer.\n2-bit delay MZI, 1-bit delay MZI (Delay Mach-Zehnder interferometers were fabricated using PLC technology.)\nSNSPD: Superconducting nanowire single-photon detector.\n}\n\\label{fig:ExpSetup}\n\\end{figure}\n\n\\Fref{fig:ExpSetup} shows the experimental setup.\nFirst,\nwe generate a continuous-wave light with a wavelength of 1551.1 nm and a coherence time of $\\sim$10 $\\mu$s,\nwhich is modulated into four-sequential pulses by an intensity modulator.\nThe repetition frequency, the temporal interval, and the pulse duration are 125 MHz, 1 ns, and 100 ps, respectively.\nThese pulses are amplified by an erbium-doped fiber amplifier (EDFA),\nand then the average power of the pulses are adjusted by an optical variable attenuator.\nThey are launched into a periodically poled lithium niobate (PPLN) waveguide,\nwhere 780-nm pump pulses are generated via second harmonic generation.\nThe 780-nm pump pulses are launched into another PPLN waveguide to generate a four-dimensional maximally entangled state through spontaneous parametric down-conversion.\nA fiber Bragg grating filter and two optical band-pass filters are located after the EDFA and the PPLN waveguides, respectively.\nThe fiber Bragg grating filter eliminates amplified spontaneous emission 
noise from the EDFA,\nand the first and the second band-pass filters eliminate the 1551.1- and the 780-nm pump pulses, respectively.\nThe generated entangled photons are separated by a wavelength demultiplexing filter into a signal and an idler photon whose wavelengths are 1555 and 1547 nm, respectively.\nEach separated photon is launched into the cascaded MZIs followed by two superconducting nanowire single-photon detectors (SNSPDs),\nwhere the QST described in \\sref{sec:QSTDetail} is performed.\nThe cascaded MZIs are composed of a 2-bit delay MZI and a 1-bit delay MZI fabricated by using PLC technology.\nThe phase differences of the 2- and 1-bit delay MZIs are controlled via the thermo-optic effect caused by electrical heaters attached to the waveguides.\nEach MZI shows a $>20$-dB extinction ratio thanks to the stability of the PLC \\cite{Takesue2005, Honjo2004}.\nPolarization controllers and polarizers are located in front of each MZI to operate the MZIs for one polarization.\nChannels 1 and 2 (3 and 4) of the SNSPDs are connected to the 1- and the 2-bit delay MZIs for the signal (idler) photon, respectively.\nThe photon detection events from the SNSPDs are recorded by a time-interval analyzer\nand analyzed by a conventional computer.\nThe detection efficiencies of the SNSPDs for channels 1, 2, 3, and 4 are $40, 56, 34$, and $43$ \\%, respectively,\nand the dark count rate for all channels is $<30$ cps.\n\n\\section{Results}\n\n\\subsection{Measurement of relative transmittance}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure03_Marge.eps}\n\\caption{Histograms of single counts for single photon generated by single pump pulse\nfor the detector's (a) channel 1, (b) channel 2, (c) channel 3, and (d) channel 4.}\n\\label{fig:DiffEta}\n\\end{figure}\n\nWe first measured the relative transmittances between the arms of the MZIs---$\\mathit{\\Delta} \\eta_{2x}, \\mathit{\\Delta} \\eta_{2y},$ and $\\mathit{\\Delta} \\eta_{1x}$---for the signal and the idler photon.\nTo measure these values,\nwe generated a single pulse using the intensity modulator instead of four-sequential ones,\nbecause the photons generated by the single pulse don't interfere at the MZIs.\n\\Fref{fig:DiffEta} shows the histograms of single photon counts for each detector channel.\nThe four peaks in \\fref{fig:DiffEta}(a) and (c) correspond to the single counts for finding a photon in detection times $t_0$, $t_1$, $t_2$, and $t_3$, respectively.\nSimilarly,\nthe two peaks in \\fref{fig:DiffEta}(b) and (d) correspond to the single counts for finding a photon in detection times $t_0$ and $t_2$, respectively.\nWe calculated the relative transmittances from these single counts.\nFor example,\nsingle count $S^1_l$ at detection time $t_l$ for channel 1 satisfies the following relation:\n\\begin{equation}\nS^1_0 : S^1_1 : S^1_2 : S^1_3 = 1 : \\mathit{\\Delta} \\eta_{1x}^s : \\mathit{\\Delta} \\eta_{2x}^s : \\mathit{\\Delta} \\eta_{1x}^s \\mathit{\\Delta} \\eta_{2x}^s\t,\n\\end{equation}\nwhere $\\mathit{\\Delta} \\eta_{2x}^s$ and $\\mathit{\\Delta} \\eta_{1x}^s$ are the relative transmittances for the signal photon.\nTherefore,\nthe relative transmittances were estimated as\n$\\mathit{\\Delta} \\eta_{2x}^s = \\left( S^1_2 + S^1_3 \\right) \/ \\left( S^1_0 + S^1_1 \\right)$\nand\n$\\mathit{\\Delta} \\eta_{1x}^s = \\left( S^1_1 + S^1_3 \\right) \/ \\left( S^1_0 + S^1_2 \\right)$.\nSimilarly,\nwe calculated the other relative transmittances,\nwhich are summarized in \\tref{tab:DiffEta}.\nWe didn't 
measure $\\mathit{\\Delta} \\eta_{1y}$\nbecause output port $p_{1y}$ was terminated\nand thus didn't affect the result of our experiment.\nThe values summarized in \\tref{tab:DiffEta} were utilized for the QST described in the next section.\n\n\\begin{table}\n\t\\caption{\\label{tab:DiffEta}Summary of the relative transmittance.}\n\t\\begin{indented}\n\t\\lineup\n\t\\item[]\n\t\t\\begin{tabular}{@{}crr}\n\t\t\t\\br\n\t\t\t & Signal & Idler\t\\\\\n\t\t\t\\mr\n\t\t\t$\\mathit{\\Delta} \\eta_{2x}$ & 1.009\\0 & 0.8495\t\\\\\n\t\t\t$\\mathit{\\Delta} \\eta_{2y}$ & 0.8300 & 0.8302\t\\\\\n\t\t\t$\\mathit{\\Delta} \\eta_{1x}$ & 1.063\\0 & 0.9669\t\\\\\n\t\t\t\\br\n\t\t\\end{tabular}\n\t\\end{indented}\n\\end{table}\n\n\\subsection{QST for the time-bin entangled qudits}\n\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\linewidth]{figure04_rho_merge.eps}\n\\caption{(a) Real parts and (b) imaginary parts of measured density operator $\\op{\\rho}$.}\n\\label{fig:Rho}\n\\end{figure}\n\nWe then generated the four-dimensional maximally entangled state $\\ket{\\Psi_{MES}^4 (\\phi)}$ by utilizing the four-sequential pump pulses.\nThe state is given by\n\\begin{equation}\n\\ket{\\Psi_{MES}^4 (\\phi)} = \\frac{1}{2} \\sum_{k=0}^3 \\exp (i \\phi k ) \\ket{k}_s \\otimes \\ket{k}_i \t,\n\\end{equation}\nwhere $\\ket{k}_s$ and $\\ket{k}_i$ denote the time-bin basis for the signal and idler photon, respectively,\nand $\\phi$ denotes the relative phase between the product states $\\ket{k}_s \\otimes \\ket{k}_i$\ndue to the phases of the pump pulses for SPDC.\nThe pump pulses were generated from the CW laser;\nthus, the phase is proportional to $k$ and determined by the frequency and the temporal interval of the time slots.\nIt should be noted that we can control the phase of the entangled state by modulating that of the pump pulses.\nIn our setup,\nthe CW laser had a coherence time of $\\sim$10 $\\mu$sec,\nwhich implies that, in principle, we can extend the dimension of the entangled photons $d$ up to $10^3\\sim10^4$.\nThe measured single photon count rates for detector channels 1, 2, 3, and 4 were\n17.1, 72.4, 20.6, and 82.1 kcps, respectively.\nFrom these single photon count rates,\nthe relative transmittances between the detectors $\\mathit{\\Delta} \\eta_{1}$ for the signal and idler photon\nwere estimated to be 0.474 and 0.501, respectively.\nThe average photon number per qudit was 0.02,\nand the measurement time for one measurement setting was 10 sec.\nWe employed coincidence counts for arbitrary combinations of detection times between the signal and the idler photon with 16 measurement settings,\nwith which the QST for a single qudit described in \\sref{sec:QSTDetail} was extended to the QST for two qudits.\n\n\nWe performed the QST for the entangled qudits fifteen times.\n\\Fref{fig:Rho} shows one of the measured density operators $\\op{\\rho}$.\nAll measured coincidence counts and reconstructed operators in the fifteen trials are provided in the supplementary material.\nNote that we utilized $\\op{U}\\op{\\rho}\\op{U}^\\dag$ instead of $\\op{\\rho}$ so that the visualized operator would be close to $\\ket{\\Psi_{MES}^4 (0)}$,\nwhere the local unitary operator $\\op{U}$ for the signal photon is given by $\\sum_k \\exp (-i \\phi' k) \\ket{k}_s\\bra{k}_s$.\nBoth the real and the imaginary parts of the measured operator showed characteristics close to $\\ket{\\Psi_{MES}^4 (0)}$,\nand the elements of the operator that were 0 for $\\ket{\\Psi_{MES}^4 (0)}$ were 
suppressed.\n\n\n\\begin{table}\n\t\\caption{\\label{tab:FigOfMerit}Average quantities derived from measured $\\op{\\rho}$ for the fifteen experimental trials.\n\tThe critical values to violate the CGLMP inequality are also summarized.}\n\t\\begin{indented}\n\t\\lineup\n\t\\item[]\n\t\t\\begin{tabular}{@{}crr}\n\t\t\t\\br\n\t\t\t & Measured & Critical \\\\\n\t\t\t\\mr\n\t\t\tFidelity & $F(\\op{\\rho}, \\op{\\sigma}) = \\m $ 0.950 $\\pm$ 0.003 & $> 0.710$ \\\\\n\t\t\tTrace distance & $D(\\op{\\rho}, \\op{\\sigma}) = \\m $ 0.068 $\\pm$ 0.003 & $< 0.290$ \\\\\n\t\t\tLinear entropy & $H_{lin}(\\op{\\rho}) = \\m $ 0.093 $\\pm$ 0.006 & $< 0.490$ \\\\\n\t\t\tVon Neumann entropy & $H_{vn}(\\op{\\rho}) = \\m $ 0.343 $\\pm$ 0.016 & $< 2.002$ \\\\\n\t\t\t\\multirow{2}{*}{Conditional entropy} & $H_{c}(\\op{\\rho}|s) = - $ 1.654 $\\pm$ 0.016 & $< 0.002$ \\\\\n\t\t\t & $H_{c}(\\op{\\rho}|i) = - $ 1.653 $\\pm$ 0.016 & $< 0.002$ \\\\\n\t\t\t\\br\n\t\t\\end{tabular}\n\t\\end{indented}\n\\end{table}\n\nTo evaluate the measured operators more quantitatively,\nwe derived five figures of merit from $\\op{\\rho}$:\nfidelity $F(\\op{\\rho}, \\op{\\sigma})$,\ntrace distance $D(\\op{\\rho}, \\op{\\sigma})$,\nlinear entropy $H_{lin}(\\op{\\rho})$,\nvon Neumann entropy $H_{vn}(\\op{\\rho})$,\nand conditional entropy $H_{c}(\\op{\\rho}|X)$ \\cite{Nielsen2010,James2001}.\nHere, we employed the following definitions:\n\\begin{eqnarray}\nF(\\op{\\rho}, \\op{\\sigma}) &=& \\left[ \\Tr \\sqrt{ \\sqrt{\\op{\\sigma}} \\op{\\rho} \\sqrt{\\op{\\sigma}}}\\right]^2\t,\t\\\\\nD(\\op{\\rho}, \\op{\\sigma}) &=& \\frac{1}{2} \\Tr \\sqrt{\\left(\\op{\\rho} - \\op{\\sigma}\\right)^2}\t,\t\\\\\nH_{lin}(\\op{\\rho}) &=& 1 - \\Tr \\left( \\op{\\rho}^2 \\right)\t,\t\\\\\nH_{vn}(\\op{\\rho}) &=& - \\Tr \\left( \\op{\\rho} \\log_2 \\op{\\rho}\\right)\t,\t\\\\\nH_{c}(\\op{\\rho}|X) &=& H_{vn}(\\op{\\rho}) - H_{vn}(\\op{\\rho}_X)\t,\n\\end{eqnarray}\nwhere $\\op{\\sigma}$ is given by$\\ket{\\Psi_{MES}^4 (\\phi)} \\bra{\\Psi_{MES}^4 (\\phi)}$ with $\\phi$,\nwhich maximizes $F(\\op{\\rho}, \\op{\\sigma})$ or minimizes $D(\\op{\\rho}, \\op{\\sigma})$,\n$X \\in \\{s, i \\}$ denotes the signal and idler photon, respectively,\nand $\\op{\\rho}_X$ is the reduced density operator for $X$.\nThe average values of these quantities are summarized in \\tref{tab:FigOfMerit}.\nThe errors in \\tref{tab:FigOfMerit} were estimated as standard deviations in the fifteen experimental trials.\nTherefore,\nthey included the statistical characteristics of the coincidence counts and all the effects due to the experimental imperfections as well.\nThe measured fidelity and trace distance showed that the reconstructed operators were close to the target state $\\ket{\\Psi_{MES}^4 (\\phi)}$.\nNote that this is the first time fidelity $>0.90$ has been reported for entangled qudits \\cite{Agnew2011,Bernhard2013,Nowierski2015,Richart2014}.\nThe measured linear entropy and von Neumann entropy were low,\nwhich implies that the reconstructed operators were close to the pure state and that small disturbances occurred in the proposed QST scheme.\nFurthermore,\nthe measured conditional entropies were negative,\nwhich confirmed that the signal and the idler photons were entangled \\cite{Horodecki1996a,Horodecki1996}.\n\nTo evaluate the quality of entangled qudits,\nmany previous experiments employed the Collins-Gisin-Linden-Massar-Popescu (CGLMP) inequality test,\nwhich is a generalized Bell inequality for entangled qudits \\cite{Collins2002,Dada2011a}.\nIf we assume symmetric 
noise, the depolarized entangled state $\\op{\\rho}_{mix}$ is given by
\\begin{equation}
\\op{\\rho}_{mix} = p \\ket{\\Psi_{MES}^4 (0)} \\bra{\\Psi_{MES}^4 (0)} + (1 - p) \\frac{\\op{I}_{16}}{16} ,  \\label{eq:MixedMES}
\\end{equation}
where $p$ is a probability and $\\op{I}_{16}$ is the identity operator in the 16-dimensional Hilbert space.
The condition $p > 0.69055$ is the criterion for violating the CGLMP inequality.
Therefore, the quantities derived from $\\op{\\rho}_{mix}$ with $p = 0.69055$ can be considered the critical values for the evaluation of the entangled qudits.
These critical values are also summarized in \\tref{tab:FigOfMerit}, which shows that all of the measured values satisfied the conditions to violate the CGLMP inequality.
Thus, we confirmed that the proposed QST scheme based on cascaded MZIs successfully reconstructed the quantum density operator of the time-bin entangled qudits with only 16 measurement settings.

\\section{Conclusion}
We proposed QST for time-bin qudits based on cascaded MZIs, with which the number of measurement settings scales linearly with dimension $d$.
We generated a four-dimensional maximally entangled time-bin state and confirmed that the proposed scheme successfully reconstructed the density operator with only 16 measurement settings.
All the quantities derived from the reconstructed state were close to the ideal ones, and the fidelity of 0.950 is the first fidelity $>0.90$ achieved for entangled qudits.
We hope that our result will lead to advanced quantum information processing utilizing high-dimensional quantum systems.

\\ack
We thank T. Inagaki and F. Morikoshi for fruitful discussions.

\\section*{References}

\\bibliographystyle{iopart-num}

\\section{Introduction}

A natural description of the dynamics of chaotic systems is in terms of evolving probability densities~\\cite{Lasota}.
On this level the time evolution in maps is governed by the linear Frobenius--Perron operator, and the dynamical problem is solved by the determination of the spectral decomposition of this operator. In recent years, several authors have constructed complete and explicit spectral decompositions of the Frobenius--Perron operator of a variety of model systems~\\cite{firstgenspec,scndgenspec}. The most useful decompositions contain in their spectrum the decay rates characterizing the approach to equilibrium of the system.
For one-dimensional piecewise-linear Markov maps such decompositions are constructed in function spaces spanned by polynomials. The duals of these polynomial spaces are spaces of generalized functions, and so the decompositions are known as generalized spectral decompositions~\\cite{Deanbook}.

Chaotic systems often contain a control parameter that characterizes the strength of the chaos. A simple model system with such a control parameter is the well-known tent map with varying height~\\cite{Schuster}.
The tent map with height $h$ on the unit interval is given by
\\begin{equation} \\label{generaltent}
{\\rm T}(x) = \\left\\{ \\begin{array}{lc}
\\alpha x & 0 \\leq x < \\frac{1}{2} \\\\ \\noalign{\\vskip4pt}
\\alpha(1-x) & \\frac{1}{2} \\leq x < 1,
\\end{array} \\right.
\\end{equation}
where the parameter $\\alpha \\equiv 2 h$. In Figure 1 the map (\\ref{generaltent}) with height $\\sqrt{2}\/2$ is shown. Note that the map acts on the unit interval $[0,1)$ but has images on $[0,h]$.
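Before turning to densities, it is useful to have the map itself in computable form. The following sketch is a minimal implementation (the variable names are ours) that iterates (\\ref{generaltent}) and prints the first few iterates of the critical point $x_{\\rm c} = 1\/2$ at $\\alpha = \\sqrt{2}$, the trajectory that organizes the dynamics in Figure 1.
\\begin{verbatim}
import numpy as np

def tent(x, alpha):
    """The tent map of height h = alpha / 2 on the unit interval."""
    return alpha * x if x < 0.5 else alpha * (1.0 - x)

alpha = np.sqrt(2.0)
x = 0.5                        # the critical point
for k in range(5):
    x = tent(x, alpha)
    print(k + 1, x)            # sqrt(2)/2, sqrt(2)-1, 2-sqrt(2), 2-sqrt(2), ...
# a histogram of a long orbit started from a generic point approximates the
# invariant density discussed below
\\end{verbatim}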
\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig13.eps}}\n\\parbox{5in}{\\caption{\\small The tent map at $\\alpha = \\protect\\sqrt{2}$. For this value of\n$\\alpha$ iterates of the critical point define four intervals around which the dynamics is\norganized, as discussed in Section 2.}}\n\\end{center}\n\\end{figure}\nAs characterized by its Lyapunov exponent, $\\log\\alpha$,\nthe map (\\ref{generaltent}) switches abruptly chaotic as the\nheight is raised past $1\/2$. We are interested in the chaotic regime where $1\/2 < h \\leq 1$, i.e.,\n$1 < \\alpha \\leq 2$. Similar to the well-known universality of the quadratic map, it has recently \nbeen reported~\\cite{Moon} that the tent map also governs the low-dimensional behavior of a\nwide class of nonlinear phenomena. Specifically, it was found that the dynamics of the\nGinzburg--Landau equation, in its description of the modulational instability of a wave train, is\nreducible to the tent map. Under this reduction, varying the height in\n(\\ref{generaltent}) corresponds to varying the wavelength of the initial modulational instability. \n\nOur interest is in the\nstatistical properties of the iterates of the tent map and in evolving probability\ndensities. The Frobenius--Perron operator, $U$, corresponding to a map, ${\\rm S}(x)$, defined on the\nunit interval evolves a probability density, $\\rho(x,t)$ by one time step as\n\\begin{equation}\n\\rho(x,t+1) = U \\rho(x,t) \\equiv \\int_0^1 dx'\\, \\delta(x - {\\rm S}(x')) \\, \\rho(x',t).\n\\end{equation}\nEvaluating the integral gives a sum of contributions from the inverse branches of ${\\rm S}(x)$. \nThe Frobenius--Perron operator corresponding to the map (\\ref{generaltent}) acts explicitly on a\ndensity $\\rho(x)$ as\n\\begin{equation} \\label{gententfpop}\nU_{\\rm T} \\rho(x) = \\frac{1}{\\alpha}\\left[ \\rho\\left(\n\\frac{x}{\\alpha} \\right) + \\rho\\left(\\frac{\\alpha - x}{\\alpha} \n\\right) \\right]\\Theta \\left(\\frac{\\alpha}{2} - x \\right), \n\\end{equation}\nwhere \n\\begin{equation} \n\\Theta(a-x) = \\left\\{ \\begin{array}{lc} \n1 & x \\leq a \\\\\n0 & x > a.\n\\end{array} \\right. \n\\end{equation}\nThe step function appears here because the map has\nno inverse images for $x>\\alpha\/2$.\nFor $\\alpha=2$ the map has images on the\nwhole unit interval. For this value of $\\alpha$ the invariant density (being the stationary\nsolution of (\\ref{gententfpop})) is\nuniform on the whole unit interval. As $\\alpha$ is lowered the\ninvariant density is supported only on a subset of the interval\n$[\\alpha(1-\\alpha\/2),\\alpha\/2]$, i.e., from the\nsecond iterate to the first iterate of the critical point $x_c \\equiv 1\/2$. The invariant density is\ndiscontinuous at all values of the trajectory of the critical point. If the critical trajectory is\nperiodic (or eventually periodic) there will be a finite number of discontinuities. \n\nFor $\\alpha \\geq \\sqrt{2}$ the invariant density has nonvanishing support on all of\n$[\\alpha(1-\\alpha\/2),\\alpha\/2]$. As $\\alpha$ is decreased past this critical value,\n$\\alpha_{1}$, the invariant density breaks up into two bands with a gap in the\nmiddle. Decreasing $\\alpha$ past $\\alpha_{2} = 2^{1\/4}$ causes the invariant\ndensity to break up into 4 bands. In general the\ninvariant density has\n$2^n$ bands as $\\alpha$ is decreased past $\\alpha_{n} = 2^{2^{-n}}$. \nThese values of $\\alpha$ are called the band-splitting \npoints~\\cite{bsps}. 
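The band structure can be checked directly by iterating the map and looking for gaps in the support of a long orbit. The sketch below is only a rough numerical illustration (the initial point, orbit length and bin width are arbitrary choices, and the gap detection can fail if the bins are coarser than a newly opened gap): just above a band-splitting point $\\alpha_{n}$ the orbit fills $2^{n-1}$ bands, and just below it fills $2^{n}$ bands.
\\begin{verbatim}
import numpy as np

def count_bands(alpha, n_iter=400_000, n_bins=4000):
    """Count maximal runs of occupied histogram bins of a long orbit."""
    x, xs = 0.2345, np.empty(n_iter)
    for t in range(n_iter):
        x = alpha * x if x < 0.5 else alpha * (1.0 - x)
        xs[t] = x
    hist, _ = np.histogram(xs[1000:], bins=n_bins, range=(0.0, 1.0))
    occupied = hist > 0
    return int(occupied[0] + np.sum(occupied[1:] & ~occupied[:-1]))

for n in (1, 2):
    alpha_n = 2.0 ** (2.0 ** -n)           # band-splitting point
    print(n, count_bands(1.01 * alpha_n), count_bands(0.99 * alpha_n))
\\end{verbatim}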
\nThis is illustrated in Figure 2.\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{0.73}[0.73]{\\includegraphics{bif.eps}}\n\\parbox{5in}{\\caption{\\small Bifurcation plot of the tent map with varying height showing\nthe formation of bands as the height, parametrized by $\\alpha=2h$ on the horizontal axis,\nis lowered. The vertical axis is the subset $[\\alpha(1-\\alpha\/2),\\alpha\/2]$ of the\none-dimensional phase space. In (a) the shaded regions indicate where the invariant\ndensity has nonvanishing support. At this resolution we see up to the formation of the\nfour bands at $\\alpha=2^{1\/4}$ but higher band-splitting points are not resolved. In (b) iterates\nof the critical trajectory as a function of $\\alpha$ are plotted. These are seen to\ndetermine the band structure in (a).}}\n\\end{center}\n\\end{figure}\nAt the band-splitting points the critical trajectory is eventually \nperiodic and the invariant density is constant in each band and thus piecewise-constant\nover the unit interval. But a general initial density will also have persistent oscillating\ncomponents among the bands. This feature is known as asymptotic periodicity~\\cite{Lasota}.\n\nWe want to determine the spectral decomposition of the Frobenius--Perron operator so that we may\nexpand a density or correlation function in terms of its eigenmodes. In order to do this we need\nthe dual states or left eigenstates of $U$. These correspond to the right eigenstates of the\nadjoint of $U$, which is known as the Koopman operator~\\cite{Lasota}. The Koopman operator,\n$K=U^\\dagger$, acts on a phase space function $A(x)$ as\n\\begin{equation} \\label{koopman}\nK A(x) = A({\\rm T}(x)),\n\\end{equation}\nwhere ${\\rm T}(x)$ is the rule for the map, such as~(\\ref{generaltent}).\nFor decompositions of one-dimensional, chaotic, Markov maps in spaces spanned by polynomials the\nKoopman operator has eigenstates that are generalized functions or\neigenfunctionals~\\cite{firstgenspec,scndgenspec,Deanbook}. \n\nThe decay of the $x$-autocorrelation function for the tent map at the \nband-splitting points and at values of $\\alpha$\nclose to band-splitting points was calculated in~\\cite{Mori}. \nThe tent map has also been studied by\nDorfle~\\cite{Dorfle} at arbitrary values of $\\alpha$ in several function spaces\nbut he does not provide explicit complete spectral decompositions.\nThe asymptotic periodicity of the system has been studied by Provatas and Mackey~\\cite{ProvMac}. \nSince as $t \\to \\infty$ the density is supported only within $[\\alpha(1-\\alpha\/2),\\alpha\/2]$\nall these authors have only considered the map in that region and neglected transient behavior onto\nit. That is sufficient if one only considers the behavior of time correlation functions; but for\nthe general evolution of densities and observables in a nonequilibrium statistical mechanics context\nthis transient behavior must not be neglected since, as will be seen, the slowest decay\nmodes originate from this part of the dynamics.\n\nIn the next section we construct the spectral decomposition of the\nFrobenius--Perron operator at the first band-splitting point. In Section 3 the decomposition at all\nthe band-splitting points is constructed using the self-similarity of the map at higher\nband-splitting points to the map at lower band-splitting points. 
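Before doing so, it is instructive to follow the evolution of a density numerically. The sketch below applies the Frobenius--Perron operator of (\\ref{gententfpop}) to a density sampled on a grid; the grid size and the use of linear interpolation are our own choices, so the result is only an approximation. At $\\alpha = \\sqrt{2}$ the iterates develop the discontinuities at the critical trajectory and, for a generic initial density, approach a period-2 cycle, which is the asymptotic periodicity mentioned above.
\\begin{verbatim}
import numpy as np

def fp_step(rho, x, alpha):
    """One application of the Frobenius-Perron operator of the tent map:
    (U rho)(x) = [rho(x/alpha) + rho((alpha - x)/alpha)] / alpha for
    x <= alpha/2 and 0 otherwise; rho is given on the grid x and evaluated
    off-grid by linear interpolation."""
    new = (np.interp(x / alpha, x, rho)
           + np.interp((alpha - x) / alpha, x, rho)) / alpha
    return np.where(x <= alpha / 2.0, new, 0.0)

x = np.linspace(0.0, 1.0, 4001)
rho = np.ones_like(x)                  # uniform initial density
alpha = np.sqrt(2.0)
history = [rho]
for _ in range(30):
    history.append(fp_step(history[-1], x, alpha))
# history[-1] and history[-2] differ (the flip between the two intervals of
# the band), while history[-1] and history[-3] are nearly identical up to
# discretization error: a period-2 asymptotic cycle
\\end{verbatim}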
\\section{The first band-splitting point}\n\nFrom (\\ref{generaltent}) the map at the first band-splitting point corresponding to the\nheight $h=\\sqrt{2}\/2$ or $\\alpha=\\sqrt{2}$ is\n\\begin{equation} \\label{mapatfbsp}\n{\\rm{T}}_{1}(x) = \\left\\{ \\begin{array}{lc} \n\\sqrt{2} \\, x & 0 \\leq x < \\frac{1}{2} \\\\ \\noalign{\\vskip4pt} \n\\sqrt{2} \\, (1-x) & \\frac{1}{2} \\leq x < 1,\n\\end{array} \\right.\n\\end{equation}\nwhich is shown in Figure 1.\nThe dynamics is organized around four\nintervals determined by the trajectory of the critical point. At the third iteration\nthe critical trajectory settles onto the fixed point, $x^*=2-\\sqrt{2}$. The four\nintervals: ${\\rm{I}}=[0,{\\rm{T}}_{1}^{(2)}(x_{\\rm c}))$, \n${\\rm{II}}=[{\\rm{T}}_{1}^{(2)}(x_{\\rm c}),{\\rm{T}}_{1}^{(3)}(x_{\\rm c}))$\n${\\rm{III}}=[{\\rm{T}}_{1}^{(3)}(x_{\\rm c}),{\\rm{T}}_{1}^{(1)}(x_{\\rm c}))$ and\n${\\rm{IV}}=[{\\rm{T}}_{1}^{(1)}(x_{\\rm c}),1)$, define a minimal Markov partition\nfor the map. (These intervals are indicated in Figure 1.) Any point in the interior of\ninterval ${\\rm{IV}}$ is mapped onto some point in interval ${\\rm{I}}$ in one\niteration. Under successive iterations all points in the interior of ${\\rm{I}}$ are\neventually mapped into ${\\rm{II}}$. Any point in interval $\\rm{II}$ maps into interval\n$\\rm{III}$ in one iteration and any point in interval $\\rm{III}$ maps into\n$\\rm{II}$ in one iteration. Thus the union of the intervals $\\rm{II}$ and $\\rm{III}$ \nform the attracting set $\\Omega$. \n\nThe Frobenius--Perron operator for the map (\\ref{mapatfbsp}) acts on \na density as \n\\begin{equation} \\label{op}\nU_{\\rm T_1} \\rho(x) = \\frac{1}{\\sqrt{2}}\\left[ \\rho\\bigg(\n\\frac{x}{\\sqrt{2}} \\bigg) + \\rho\\bigg(\\frac{\\sqrt{2} - x}{\\sqrt{2}} \n\\bigg) \\right]\\Theta \\bigg( \\frac{\\sqrt{2}}{2} - x \\bigg). \n\\end{equation}\nA general initial density continuous over the unit interval develops\ndiscontinuities at the endpoints of the four intervals described above. \nWe thus choose a function space to consider $U_{\\rm T_1}$ in as the space\nof piecewise-polynomial functions where the pieces are the four\nintervals described above. The invariant density has support only in\n$\\Omega$. \n\nThe fact that points oscillate between the intervals $\\rm{II}$ and $\\rm{III}$ means\nthat a general density will have a persistent oscillating component between these two\nintervals under time evolution. This property we will sometimes refer to in the\npresent context as the ``flip property\", since the part of the density in $\\rm{II}$\nwill all be in $\\rm{III}$ (with stretching) in the next time step and vice-versa. The parts of the\ninitial density with support in the intervals\n${\\rm{I}}$ and $\\rm{IV}$ will decay onto $\\Omega$. Since we will use the eigenfunctions on\n$\\Omega$ to determine those on its complement, we first consider the decomposition on $\\Omega$.\n\n\\subsection{Evolution on the attractor}\n\nFor convenience $\\Omega$ is stretched\nonto the interval $[0,1)$. At the end of the computation we will rescale all\nresults back to $\\Omega$. 
The linear function that makes the stretch is\n\\begin{equation} \\label{phi}\n\\phi(x) = \n(2\/x^{*}) x - \\sqrt{2},\n\\end{equation}\nwhere ${\\rm{T}}_{1}^{(2)}(x_{\\rm c}) \\leq x < {\\rm{T}}_{1}^{(1)}(x_{\\rm c})$.\nThe transformation (\\ref{phi}) is a homeomorphism so that it begets a new\nmap ${\\rm R_1}$ topologically conjugate to the part of ${\\rm T_1}$ on $\\Omega$ as\n$\\rm{R}_{1} = \\phi \\circ {\\rm{T}}_{1} \\circ \\phi^{-1}$\ngiven by \n\\begin{equation} \\label{rescaled}\n{\\rm{R}}_{1}(x) = \\left\\{ \\begin{array}{lc}\n\\sqrt{2} \\, x + x^{*} & 0 \\leq x < x^{*} \\! \/2 \\\\ \\noalign{\\vskip4pt}\n\\sqrt{2} \\, (1-x) & x^{*} \\! \/2 \\leq x < 1, \n\\end{array} \\right.\n\\end{equation}\nwhere $x^{*} = 2 - \\sqrt{2}$ is the fixed point of the map ${\\rm{R}}_{1}(x)$,\nwhich is the same as\nthe fixed point of the map ${\\rm{T}}_{1}(x)$. Under this transformation, the\nintervals $\\rm{II}$ and $\\rm{III}$\nare stretched to the intervals ${\\rm{A}} \\equiv [0,x^{*})$ and ${\\rm{B}} \\equiv\n[x^{*},1)$ respectively. The map ${\\rm{R}}_{1}(x)$ is shown in Figure 3. \n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig21.eps}}\n\\parbox{5in}{\\caption{\\small The rescaled Tent map at the first band-splitting point.}}\n\\end{center}\n\\end{figure}\n\nThe Frobenius--Perron operator corresponding to the\nrescaled map\n$\\rm{R}_{1}$ acts on a density as\n\\begin{equation} \\label{ur1}\nU_{\\rm{R}_{1}} \\rho(x) = \\frac{1}{\\sqrt{2}}\\left[ \\rho\\bigg(\n\\frac{\\sqrt{2} - x}{\\sqrt{2}} \\bigg) + \\rho\\bigg(\\frac{x - x^{*}}{\\sqrt{2}} \n\\bigg)\\Theta(x - x^{*}) \\right]. \n\\end{equation}\nThe flip property of $\\rm T_1$ is inherited by $\\rm R_1$ in that the inverse image of\n$\\rm A$ is $\\rm B$ and vice-versa. This suggests that a simpler analysis will be\nobtained by considering the map corresponding to two iterations of $\\rm R_1$. This map,\n${\\rm{G}}_{1} \\equiv \\rm{R}_{1} \\circ \\rm{R}_{1}$, is given by \n\\begin{equation}\n{\\rm G}_{1}(x) = \\left\\{ \n\\begin{array}{lc} \n-2x + x^{*} & 0 \\leq x < x^{*} \\! \/2 \\\\ \\noalign{\\vskip4pt}\n 2x - x^{*} & x^{*} \\! \/2 \\leq x < (1+x^*)\/2 \\\\ \\noalign{\\vskip4pt}\n-2x + (2 + x^{*}) & (1+x^*)\/2 \\leq x < 1.\n\\end{array} \\right. \n\\end{equation}\nThe flip property of $\\rm{R}_{1}$ means that ${\\rm{G}}_{1}$\nis metrically decomposable into two independent maps on the intervals\n$\\rm{A}$ and\n$\\rm{B}$, as is clear from Figure 4. From now on in this section we shall\ndrop the subscript $1$ on the maps $\\rm R_1$ and $\\rm G_1$; it being\nunderstood that we are referring to these maps at the first\nband-splitting point. \n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig31.eps}}\n\\end{center}\n\\parbox{5in}{\\caption{\\small The map ${\\rm{G_{1}}}={\\rm{R_{1} \\circ R_{1}}}$ is metrically\ndecomposable into two parts each conjugate to the tent map with unit height.} }\n\\end{figure}\n\n\nThe map ${\\rm G}$ restricted to $\\rm{A}$ is just a rescaling (with a\nflip) of the tent map with full height, ${\\rm T}_0$, i.e., the map\n(\\ref{generaltent}) with $\\alpha=2$. 
This is expressed in terms of a\ntopological conjugacy as\n\\begin{equation}\n{\\rm{G}}_{\\rm{A}}(x) = \\phi^{-1}_{\\rm{A}}(x) \\circ {\\rm{T_0}}(x) \\circ\n\\phi_{\\rm{A}}(x),\n\\end{equation}\nwhere\nthe conjugating function $\\phi_{\\rm{A}}(x)$ is \n\\begin{equation}\n\\phi_{\\rm{A}}(x) = 1 - \\frac{x}{x^*},\n\\end{equation}\nand $x \\in [0, x^{*})$.\nSimilarly the map on $\\rm B$ is topologically conjugate to ${\\rm T}_0$ as\n\\begin{equation}\n{\\rm{G}}_{\\rm{B}}(x) = \\phi^{-1}_{\\rm{B}}(x) \\circ {\\rm{T}}_{0}(x) \\circ\n\\phi_{\\rm{B}}(x),\n\\end{equation}\nwhere the conjugating function $\\phi_{\\rm B}(x)$ is \n\\begin{equation}\n\\begin{array}{lc}\n\\phi_{\\rm{B}}(x) = \n(\\sqrt{2}\/x^{*})x - \\sqrt{2} ,\n\\end{array} \n\\end{equation}\nand here $x \\in [x^{*}, 1)$.\nThese conjugacies are useful for us because the\nspectral decompositions of maps that are topologically conjugate are simply\nrelated, as is reviewed in Appendix A.\nThe generalized spectral decomposition of $\\rm{T_{0}}$ has been\npreviously determined~\\cite{Gonzalo,fox} and is reviewed\nin Appendix B. \n\nFollowing the discussion in those appendices gives the right eigenvectors of\n$\\rm{G}_{\\rm{A}}$ and $\\rm{G}_{\\rm{B}}$ as\n\\alpheqn\n\\begin{eqnarray} \\label{Gvecsaa}\n| 2^{-2j} \\rangle_{\\rm{G}_{\\rm{A}}} & = & \\frac{1}{x^{*}}\nB_{2j}\\left( \\frac{x^{*} - x}{2x^{*}} \\right)\\chi_{\\rm{A}} \\\\ \\label{Gvecsab}\n| 0_{2j+1} \\rangle_{\\rm{G}_{\\rm{A}}} & = & \\frac{1}{x^{*}}\nE_{2j+1}\\left( \\frac{x}{x^{*}} \\right)\\chi_{\\rm{A}}, \n\\end{eqnarray}\n\\reseteqn\n\\alpheqn\n\\begin{eqnarray} \\label{Gvecsba}\n| 2^{-2j} \\rangle_{\\rm{G}_{\\rm{B}}} & = & \\frac{\\sqrt{2}}{x^{*}}\nB_{2j}\\left( \\frac{x - x^{*}}{\\sqrt{2}x^{*}} \\right)\\chi_{\\rm{B}} \\\\ \\label{Gvecsbb}\n| 0_{2j+1} \\rangle_{\\rm{G}_{\\rm{B}}} & = & \\frac{\\sqrt{2}}{x^{*}}\nE_{2j+1}\\left( \\frac{\\sqrt{2}}{x^{*}}\\left( x-x^{*} \\right) \\right)\n\\chi_{\\rm{B}},\n\\end{eqnarray}\n\\reseteqn\nwhere the associated eigenvalue is the argument of the\nket vector with $|0_{2j+1}\\rangle$ meaning a null eigenpolynomial of degree $2j+1$\nand $\\chi_{\\rm{A}}$ and $\\chi_{\\rm{B}}$ are indicator functions on the intervals\n$\\rm{A}$ and $\\rm{B}$ respectively. Due to the metric decomposability of $\\rm G$ these\nstates are eigenstates of $U_{\\rm G}$ as well.\n\nSimilarly, the left eigenvectors of\n$\\rm{G}_{\\rm{A}}$ and $\\rm{G}_{\\rm{B}}$ are the generalized functions\n\\alpheqn \n\\begin{eqnarray} \\label{lgvecaa}\n\\langle 2^{-2j} |_{\\rm{G}_{\\rm{A}}} & = & \n\\frac{(-1)^{2j-1}\\left( 2x^{*} \\right)^{2j}}{(2j)!}\\left[\n\\delta^{(2j-1)}_{-}(x - x^{*}) - \\delta^{(2j-1)}_{+}(x) \\right] \\\\ \\label{lgvecab}\n\\langle 0_{2j+1} |_{\\rm{G}_{\\rm{A}}} & = & -\\frac{\\left(x^{*}\\right)^{2j+2}}\n{(2j+1)!}\\delta^{(2j+1)}_{+}\\left( x \\right), \n\\end{eqnarray}\n\\reseteqn\n\\alpheqn\n\\begin{eqnarray} \\label{lgvecba}\n\\langle 2^{-2j} |_{\\rm G_B} \n& = & \\frac{(-1)^{2j-1}\\left( \\sqrt{2} \\, x^{*} \\right)^{2j}}{(2j)!}\\left[\n\\delta^{(2j-1)}_{-}(x - 1) - \\delta^{(2j-1)}_{+}\n(x - x^{*}) \\right] \\\\ \\label{lgvecbb}\n\\langle 0_{2j+1} |_{\\rm G_B} & = & \\frac{-1}{(2j+1)!}\n\\left( \\frac{x^{*}}{\\sqrt{2}} \\right)^{2j+2}\n\\delta^{(2j+1)}_{-}\\left( x - 1 \\right),\n\\end{eqnarray}\n\\reseteqn\nwhere the definitions of $\\delta_{\\pm}$ are given in Appendix B.\n\nSince $U_{\\rm{G}} = U^{2}_{\\rm{R}}$ the spectrum of $U_{\\rm{R}}$ is a subset of\n$\\{0,\\pm 2^{-j} \\}$. 
Consider a non-zero eigenvalue, 
${2}^{-2j}$, of $U_{\rm{G}}$. There are two
eigenvectors associated with this eigenvalue, each a polynomial of
order $2j$. Since the function space on which $U_{\rm{R}}$ acts has two basis
elements for each degree $j$, i.e., a $j^{\rm th}$ degree polynomial in ${\rm A}$ and a
$j^{\rm th}$ degree polynomial in
${\rm B}$, there should be either two eigenvectors or one eigenvector and one Jordan
vector that are polynomials of degree $2j$, associated with either one or both of the
eigenvalues $\{ +2^{-j},-2^{-j} \}$. Since $U_{\rm{G}}$ does not have any Jordan vectors
it follows that $U_{\rm{R}}$ does not either (for non-zero eigenvalues). 
The eigenvalues of $U_{\rm R}$ cannot be twofold degenerate since that
would imply that all the eigenvectors of
$U_{\rm{G}}$ are also eigenvectors of $U_{\rm{R}}$, which is impossible since $\rm{G}$ is
metrically decomposable and $\rm{R}$ has the flip property. Therefore the
non-zero eigenvalues of $U_{\rm R}$ are $+2^{-j}$ and $-2^{-j}$.

The eigenvectors of
$U_{\rm{R}}$ with eigenvalue $\pm 2^{-j}$ are in the eigenspace 
spanned by the two eigenvectors of $U_{\rm{G}}$ corresponding to
$+ 2^{-2j}$. Thus they will be linear combinations as
\alpheqn 
\begin{equation} \label{reigenforma}
|{+2^{-j}} \rangle_{\rm{R}} = \frac{1}{2} \left( |{+2^{-2j}}
\rangle_{\rm{G}_{\rm{A}}} + c_{j}|{+2^{-2j}} \rangle_{\rm{G}_{\rm{B}}} \right)
\end{equation}
and
\begin{equation}
|{-2^{-j}} \rangle_{\rm{R}} = \frac{1}{2} \left( |{+2^{-2j}} \label{reigenformb}
\rangle_{\rm{G}_{\rm{A}}} + d_{j}|{+2^{-2j}} \rangle_{\rm{G}_{\rm{B}}} \right) , 
\end{equation}
\reseteqn
where the coefficient of $1/2$ is put for convenient normalization. To determine
$c_j$ we use that $|{+2^{-j}} \rangle_{\rm R}$ is an eigenvector of 
$U_{\rm{R}}$ as
\begin{equation}
U_{\rm{R}}\left[ |{+2^{-2j}} \rangle_{\rm{G}_{\rm{A}}} + 
c_{j}|{+2^{-2j}} \rangle_{\rm{G}_{\rm{B}}} \right] =
2^{-j}\left[ |{+2^{-2j}}\rangle_{\rm{G}_{\rm{A}}} + 
c_{j}|{+2^{-2j}} \rangle_{\rm{G}_{\rm{B}}} \right]. 
\end{equation}
The flip property tells us that 
\begin{equation} \label{nn}
U_{\rm{R}}\left[ c_{j}|{+2^{-2j}} \rangle_{\rm{G}_{\rm{B}}} \right]
 = 2^{-j}|{+2^{-2j}}\rangle_{\rm{G}_{\rm{A}}}. 
\end{equation}
Substituting the explicit form (\ref{Gvecsaa}) of $|{+2^{-2j}} \rangle_{\rm{G}_{\rm{A}}}$
and (\ref{Gvecsba}) of $|{+2^{-2j}} \rangle_{\rm{G}_{\rm{B}}}$ in (\ref{nn}) and solving
for $c_{j}$ we find that
$c_{j} = 2^{-j}$. A similar analysis shows that $d_{j} = -2^{-j}$. Hence 
\begin{equation} \label{fbspurstates}
|{\pm 2^{-j}} \rangle_{\rm R} = \frac{1}{2 x^*}\left( B_{2j}\left( \frac
{x^{*} - x}{2x^{*}} \right)\chi_{\rm{A}} \pm \frac{\sqrt{2}}{2^{j}}B_{2j}\left(
\frac {x - x^{*}}{\sqrt{2}x^{*}} \right)\chi_{\rm{B}} \right). 
\end{equation}

The invariant state, corresponding to the invariant density of $U_{\rm R}$, is
\begin{equation}
|{+1}\rangle_{\rm R} = \frac{1}{2 x^*}(\chi_{\rm A} + \sqrt{2} \, \chi_{\rm B}).
\end{equation}
This state carries all the probability under evolution of $U_{\rm R}$ and any density
will have this component. The state
\begin{equation}
|{-1}\rangle_{\rm R} = \frac{1}{2 x^*}(\chi_{\rm A} - \sqrt{2} \, \chi_{\rm B})
\end{equation}
is the asymptotically periodic state. 
Only it and the invariant density survive as $t
\to \infty$; but $|{-1}\rangle_{\rm R}$, like the decaying states, does not carry any probability. 
It does, however, retain the memory of the projection of the initial density on
$|{-1}\rangle_{\rm R}$, which is a special property of asymptotically periodic
systems~\cite{Lasota}.
 

Now we consider the null space of $U_{\rm R}$. The map $\rm{G}$ has two independent
null vectors (one in $\rm A$ and one in $\rm B$) for each odd degree. 
This implies that
$\rm{R}$ can have either a corresponding $2 \times 2$ Jordan block 
or two independent eigenvectors for each odd degree. The latter case is not possible
since null vectors of $\rm{R}$ cannot have support in interval $\rm{B}$ because only one
of the terms on the rhs of (\ref{ur1}) acts on functions in $\rm B$.
Thus there is a $2 \times 2$ Jordan block for each odd degree associated with
eigenvalue $0$.

Consider the action of $U_{\rm G}
=U^{2}_{\rm{R}}$ on a null state, $| 0_{2j+1}\rangle_{\rm{G_{A}}}$, of
$\rm{G_A}$ as 
\begin{equation} 
U_{\rm{R}}\Big[ U_{\rm{R}}|0_{2j+1} \rangle_{\rm{G}_A} \Big] = 0.
\end{equation} 
The function inside the square brackets has support only in $\rm B$, and
$U_{\rm{R}}$ acting on any non-zero function with support in $\rm B$ cannot vanish
in one iteration. Thus $|0_{2j+1} \rangle_{\rm R}=|0_{2j+1} \rangle_{\rm
G_A}$ is a null vector of $U_{\rm{R}}$ with explicit form given in
(\ref{Gvecsab}).

The Jordan vector, $|0_{J_{2j+1}} \rangle_{\rm R}$, associated with this eigenvector
satisfies
\begin{equation} \label{q}
 U_{\rm{R}}| 0_{J_{2j+1}} \rangle_{\rm{R}} = | 0_{2j+1}
\rangle_{\rm{R}}.
\end{equation} 
We may choose the Jordan vector to have support only
in ${\rm B}$ as
\begin{equation} \label{qq}
| 0_{J_{2j+1}} \rangle_{\rm{R}} = \eta_{2j+1}| 0_{2j+1}
\rangle_{\rm{G}_{\rm{B}}},
\end{equation} 
where $\eta_{2j+1}$ is a constant to be determined. 
To determine $\eta_{2j+1}$ we apply $U_{\rm R}$ on (\ref{qq}) and use (\ref{q}) and the
explicit forms (\ref{Gvecsab}) and (\ref{Gvecsbb}) (remembering that
$|0_{2j+1} \rangle_{\rm R}=|0_{2j+1} \rangle_{\rm G_A}$) to obtain
$\eta_{2j+1} = -1$.

\subsubsection{Left eigenstates of $U_{\rm{R}}$}

The left eigenstates of $U_{\rm{R}}$ with non-zero eigenvalues may be determined by an
approach similar to the one used to find the right eigenstates. 
The result is
\begin{eqnarray}
\langle \pm 2^{-j}|_{\rm{R}} & = & \frac{\left( 2x^{*} \right)^{2j}}
{(2j)!} \left[ \delta^{(2j-1)}_{+}(x) - \delta^{(2j-1)}_{-}(x - x^{*})
\right. \nonumber \\ & & \left. \hskip50pt
\pm \delta^{(2j-1)}_{+}(x - x^{*}) \mp \delta^{(2j-1)}_{-}(x - 1) \right].
\end{eqnarray} 
The left states with zero eigenvalues are, like the right states, identical to those
of $U_{\rm G}$ (except for the factor of $-1$ for the dual state of the Jordan vector) as
\alpheqn 
\begin{eqnarray}
\left\langle 0_{2j+1} \right|_{\rm R} & = & 
\left\langle 0_{2j+1} \right|_{\rm G_A} \\
\langle 0_{J_{2j+1}} |_{\rm{R}} & = & 
- \left\langle 0_{2j+1} \right|_{\rm G_B}. 
\end{eqnarray}
\reseteqn
Note that $\langle 0_{J_{2j+1}} |_{\rm{R}}$ is an eigenstate of the Koopman operator
and $\left\langle 0_{2j+1} \right|_{\rm{R}}$ is a Jordan state. 
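As a consistency check (a numerical sketch for illustration only, not needed for the derivation), the states (\ref{fbspurstates}) with $j=1$ can be verified to be eigenvectors of the operator (\ref{ur1}) with eigenvalues $\pm\frac{1}{2}$. The function names below are ours, only numpy is assumed, and the Bernoulli polynomial $B_2$ is entered explicitly:
\begin{verbatim}
import numpy as np

# Check that |+-1/2>_R of (fbspurstates), built from B_2, are eigenvectors
# of the Frobenius-Perron operator U_{R_1} of (ur1).

xs = 2.0 - np.sqrt(2.0)                      # x*
B2 = lambda y: y * y - y + 1.0 / 6.0         # Bernoulli polynomial B_2(y)

def state(x, sign):                          # |(+/-)2^{-1}>_R, zero outside [0,1)
    x = np.asarray(x, dtype=float)
    inA = (x >= 0.0) & (x < xs)
    inB = (x >= xs) & (x < 1.0)
    fA = B2((xs - x) / (2.0 * xs))
    fB = (np.sqrt(2.0) / 2.0) * B2((x - xs) / (np.sqrt(2.0) * xs))
    return (np.where(inA, fA, 0.0) + sign * np.where(inB, fB, 0.0)) / (2.0 * xs)

def U_R1(f, x):                              # Frobenius-Perron operator (ur1)
    x = np.asarray(x, dtype=float)
    return (f((np.sqrt(2.0) - x) / np.sqrt(2.0))
            + f((x - xs) / np.sqrt(2.0)) * (x >= xs)) / np.sqrt(2.0)

x = (np.arange(2000) + 0.5) / 2000.0
for sign in (+1.0, -1.0):
    v = lambda t: state(t, sign)
    # residual of the eigenvalue equation; rounding-error level away from x*
    print(np.max(np.abs(U_R1(v, x) - sign * 0.5 * v(x))))
\end{verbatim}
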
\n\n\n\n\\subsection{Decay onto the attractor}\n\nAs noted before, initial densities with support in the intervals $\\rm{I}$ and\/or\n$\\rm{IV}$ will decay into $\\Omega$. Consider a density with support\nonly in $\\rm{IV}$ at $t=0$. At $t=1$ the density has support only in ${\\rm{I}}$. Thus\nany eigenvector of $U_{\\rm T_1}$ with support in $\\rm{IV}$ can only have\neigenvalue\n$0$. To determine such an eigenvector we write an ansatz for it as \n\\begin{equation} \\label{aa}\n| 0_j \\rangle_{\\rm{IV}} =\nf_{{\\rm{I}},j}(x)\\chi_{{\\rm{I}}} +\nf_{{\\rm{II}},j}(x)\\chi_{\\rm{II}}\n+ f_{{\\rm{III}},j}(x)\\chi_{\\rm{III}} + f_{{\\rm{IV}},j}(x)\\chi_{\\rm{IV}},\n\\end{equation}\nwhere the subscript $\\rm IV$ on the ket denotes that it\ndescribes decay out of interval $\\rm IV$ and we take $f_{{\\rm IV},j}(x)$ as a\npolynomial of order $j$. Applying $U_{\\rm T_1}$ to (\\ref{aa}) and collecting terms that are\nmultiplied by the same indicator function we get\n\\begin{eqnarray} \\label{aaa}\nU_{\\rm T_1}|0_j \\rangle_{\\rm{IV}} & = & \n\\left[ f_{{\\rm{I}},j} \\left( \\frac{x}{\\sqrt{2}} \n\\right) + f_{{\\rm{IV}},j}\\left(1- \\frac{x}{\\sqrt{2}} \n\\right) \\right]\\chi_{{\\rm{I}}} \\nonumber \\\\\n & & \\mbox{} + \\left[ f_{{\\rm{I}},j} \\left(\n\\frac{x}{\\sqrt{2}} \n\\right) + f_{{\\rm{III}},j}\\left(1- \\frac{x}{\\sqrt{2}} \\right) \\right]\n\\chi_{\\rm{II}} \\nonumber \\\\ \n& & \\mbox{} + \\left[ f_{{\\rm{II}},j} \\left( \\frac{x}{\\sqrt{2}} \n\\right) + f_{{\\rm{II}},j}\\left(1- \\frac{x}{\\sqrt{2}} \\right)\n\\right]\\chi_{\\rm{III}}.\n\\end{eqnarray} \nSince $|0_j \\rangle_{\\rm{IV}}$ is a null eigenstate the coefficients of each of the\nindicator functions must vanish. Since\n$f_{{\\rm{IV}},j}$ is a polynomial of degree\n$j$, it follows that \n$f_{{\\rm{I}},j}$ and\n$f_{{\\rm{III}},j}$ are also polynomials of degree $j$. We choose\n$f_{{\\rm{II}},j}$ to be zero. Clearly,\n$f_{{\\rm{III}},j}=f_{{\\rm{IV}},j}$ and choosing it as\n$(x-1)^j$ fixes $f_{{\\rm{I}},j}$. We thus have \n\\begin{equation} \\label{rt1} \n| 0_j \\rangle_{\\rm{IV}} = (-1)^{j+1}x^j\\chi_{\\rm{\\rm{I}}} +\n(x-1)^j(\\chi_{\\rm{III}} + \\chi_{\\rm{IV}}). \n\\end{equation} \nWe note that because of the degeneracy associated\nwith eigenvalue $0$ in interval $\\rm{IV}$, this choice of eigenvectors is not\nunique. 
The left eigenstates (given below) associated with this choice of
right eigenvectors are unique and therefore, when we expand an arbitrary 
density in terms of the right eigenvectors, the expansion coefficients 
are uniquely defined.

The action of $U_{\rm T_1}$ on a function with support only 
on $\rm{\rm{I}}$ is given by
\begin{equation} U_{\rm T_1} [f(x)\chi_{\rm{I}}] = \frac{1}{\sqrt{2}}
f(x/\sqrt{2}) ( \chi_{\rm{I}} + \chi_{\rm{II}}).
\end{equation} 
Acting with $U_{\rm T_1}$ on a monomial in $\rm{I}$ gives
\begin{equation} \label{ind}
 U_{\rm T_1} [x^{j}\chi_{\rm I}] =
\frac{x^{j}}{(\sqrt{2})^{j+1}} (\chi_{\rm{\rm{I}}} + \chi_{\rm{II}}), 
\end{equation}
so that the eigenvectors of $U_{\rm T_1}$ with support in $\rm{I}$
are monomials in $\rm{I}$.
To determine their form in the other intervals we again write an ansatz as
we did for the eigenvectors with support in $\rm{IV}$ as
\begin{equation} \label{haha} 
| 2^{-(j+1)/2}
\rangle_{{\rm{I}}} = x^{j}\chi_{{\rm{I}}} + g_{{\rm II},j}(x) \chi_{\rm{II}} +
g_{{\rm III},j}(x)\chi_{\rm{III}},
\end{equation}
where the associated eigenvalue, appearing as the argument of the ket, is seen from (\ref{ind}). In
equation (\ref{haha}) and below the subscript $\rm I$ on a ket implies that
it describes decay out of region ${\rm I}$. Note that the
$j=0$ mode here is the slowest decay mode in this system. Since $U_{\rm T_1}$
does not raise the degree of the polynomial it acts on, 
$g_{{\rm II},j}(x)$ and $g_{{\rm III},j}(x)$ are polynomials of degree $j$. Applying
$U_{\rm T_1}$ to (\ref{haha}) and using (\ref{ind}) and the fact that the function on
the rhs of (\ref{haha}) is an eigenvector with eigenvalue
$2^{-(j+1)/2}$, we find that 
\begin{equation} \label{form1}
U_{\rm T_1} ( g_{{\rm II},j}(x)\chi_{\rm{II}} + g_{{\rm III},j}(x)\chi_{\rm{III}}) =
2^{-(j+1)/2} (g_{{\rm II},j}(x)\chi_{\rm{II}} + g_{{\rm III},j}(x)\chi_{\rm{III}}
-x^{j}\chi_{\rm{II}}). 
\n\\end{equation}\nThe determination of $g_{{\\rm II},j}(x)$ \nand $g_{{\\rm III},j}(x)$ is described in Appendix C \n\nThe explicit form of the first few of these eigenvectors is\n\\alpheqn\n\\begin{eqnarray} \\label{firstfew}\n| 2^{-1\/2}\\rangle_{\\rm I} & = & \\chi_{{\\rm{I}}} +\n\\frac{1}{2(1-\\sqrt{2})}|1\\rangle_{\\Omega} + \\frac{1}{2(1+\\sqrt{2})}|{-1\\rangle_\\Omega} \\\\\n| 2^{-1}\\rangle_{\\rm I} & = & x \\, \\chi_{\\rm{I}} -\n\\frac{1}{4(x^*)}|1\\rangle_\\Omega + \\frac{1}{12}|{-1\\rangle_\\Omega} +\n\\frac{(x^*)^2}{2}|0\\rangle_\\Omega \\\\ \n|2^{-3\/2}\\rangle_{\\rm I} & = & x^2 \\,\\chi_{\\rm{I}} -\n\\frac{\\sqrt{2}x^*}{12}|1\\rangle_\\Omega +\n\\frac{(9\\sqrt{2}-8)x^*}{84}|-1\\rangle_\\Omega + \n\\frac{\\sqrt{2}(x^*)^3}{2}|0\\rangle_{\\Omega} \\\\ \n& & \\mbox{}\\hspace{1cm} - \\frac{\\sqrt{2}(x^*)^3}{2}|2^{-1}\\rangle_\\Omega +\n\\frac{(x^*)^4}{2(1+\\sqrt{2})}|{-2^{-1}\\rangle_\\Omega}.\n\\end{eqnarray}\n\\reseteqn \nThe subscript $\\Omega$ on a ket implies that it is a right state of \n$U_{\\rm R}$ which has been rescaled into the attractor, $\\Omega$, of $U_{\\rm T_1}$.\nThe explicit form of these states is given in Table~1.\n\n\\subsection{Left eigenstates of $U_{\\rm T_1}$}\n\nIt is easily seen that the left states given by \n\\begin{equation} \\label{lt1}\n\\langle 0_j |_{\\rm{IV}} = \\frac{(-1)^j}{j!}\\delta_{-}^{(j)}(x-1)\n\\end{equation}\nform a bi-orthonormal set with the null states in~(\\ref{rt1}).\nThese left states are also orthogonal to all the other right states of $U_{\\rm T_1}$.\nThe left states associated with the transient decay states out of $\\rm{I}$ are given by\n\\begin{equation} \\label{l1}\n\\langle {2}^{-(j+1)\/2}|_{\\rm I} = \\frac{1}{j!}\n\\left[ (-1)^j\\delta_{+}^{(j)}(x) + \\delta_{-}^{(j)}(x-1) \\right]. \n\\end{equation}\n\nThe left states of $U_{\\rm R}$ are not the left states of $U_{\\rm T_1}$, even though\nthe right states of $U_{\\rm R}$ are also right states of $U_{\\rm T_1}$. This is\nbecause $U_{\\rm T_1}$ acting on a density contained within $\\Omega$ will\ncontinue to be in $\\Omega$. But in general the\nKoopman operator, $K_{\\rm T_1}$, acting on a function with support only in $\\Omega$\nwill result in the function having support outside $\\Omega$ too.\nThe left states of $U_{\\rm R}$ scaled back to $\\Omega$ form a bi-orthonormal set with the\nright states contained in $\\Omega$, but they are not orthogonal to the transient\nstates that decay into $\\Omega$. To make them so we use Gram--Schmidt orthogonalization. The\nresults are given in Table 1. 
\n\n\\subsection{The spectral decomposition}\n\nUsing all the eigenstates and eigenvalues given in Table~1 we may write\nthe action of $U_{\\rm T_1}^t$ in terms of its spectral decomposition as\n\\begin{eqnarray} \\label{specdec}\nU_{\\rm T_1}^{t} & = & | 1 \\rangle\\langle 1 |_{\\Omega} + \n(-1)^t|{-1 \\rangle}\\langle{-1 |_\\Omega} + \\sum_{j=0}^{\\infty}\n\\left( 2^{-(2j+1)\/2} \\right)^t|2^{-(2j+1)\/2}\\rangle\\langle\n2^{-(2j+1)\/2}|_{\\rm{I}} \\nonumber \\\\ \n& & + \\sum_{j=1}^{\\infty}(2^{-j})^t \\left[ | {+2^{-j}}\n\\rangle\\langle {+2^{-j} |_\\Omega} +\n|2^{-j}\\rangle\\langle 2^{-j}|_{\\rm{I}} \\right] \\nonumber \\\\ \n& & + \\sum_{j=1}^{\\infty}(-2^{-j})^t | {-2^{-j}}\n\\rangle\\langle {-2^{-j} |_\\Omega} +\n\\sum_{j=0}^{\\infty}\\delta_{1,t} | 0_{2j+1}\n\\rangle\\langle 0_{J_{2j+1}} |_\\Omega \\nonumber \\\\ \n& & + \\sum_{j=0}^{\\infty}\\delta_{0,t} \\left[ |0_{2j+1}\n\\rangle\\langle 0_{2j+1} |_{\\Omega} + |0_{J_{2j+1}}\n\\rangle\\langle 0_{J_{2j+1}} |_{\\Omega} + |0_j\n\\rangle\\langle 0_j|_{\\rm{IV}} \\right], \n\\end{eqnarray}\nwhere the subscript on the bra states also identifies their dual ket states.\n\n\\clearpage\n\n \n\\begin{table}[p!] \n\\[\n\\begin{array}{|c|c|l|l|} \\hline \n\\mbox{eigenvalue} & \\mbox{degeneracy} & \\mbox{symbol} & \\mbox{eigenvector} \\\\ \\hline \n1 & 1 & | 1 \\rangle_{\\Omega} & \\chi_{\\rm{II}} + \\sqrt{2}\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle 1 |_{\\Omega} & \\frac{(x^*)^2}{2}(\\chi_{\\rm{II}} + \\chi_{\\rm{III}}) \\\\\n& & & \\mbox{}-\\sum_{k=0}^{\\infty}\\left(a_{0,k}\\langle 2^{-(k+1)\/2}|_{\\rm{I}} +\n\\alpha_{0,k}'\\left\\langle 0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n\\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n-1 & 1 & | {-1 \\rangle_\\Omega} & \\chi_{\\rm{II}} - \\sqrt{2}\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle -1 |_{\\Omega} & \\frac{(x^*)^2}{2}(\\chi_{\\rm{II}} - \\chi_{\\rm{III}}) \\\\\n& & &\\mbox{}-\n\\sum_{k=0}^{\\infty}\\left( b_{0,k}\\langle 2^{-(k+1)\/2}|_{\\rm{I}} -\n\\alpha_{0,k}'\\left\\langle 0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n\\hline\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n+\\frac{1}{2^j} & 1 & |{+ 2^{-j} \\rangle_\\Omega} &\nB_{2j}\\left( \\frac{x^* - \\phi(x)}{2x^*} \\right)\\chi_{\\rm{II}} +\n\\frac{\\sqrt{2}}{2^j}B_{2j}\\left( \\frac{\\phi(x) -\nx^*}{\\sqrt{2}x^*}\\right)\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle +2^{-j} |_{\\Omega} & \\frac{\\left( x^{*} \\right)^{4j-2}}\n{(2j)!} \\left[ \\delta^{(2j-1)}_{+}(x-{\\rm{T}}^{(2)}(x_c))\n - \\delta^{(2j-1)}_{-}(x -x^{*}) \\right. \\\\ \n& & & \\mbox{} \\left. 
\n + \\delta^{(2j-1)}_{+}(x - x^{*}) -\n\\delta^{(2j-1)}_{-}(x - {\\rm{T}}^{(1)}(x_c)) \\right] \\\\ & & & - \n\\sum_{k=2j}^{\\infty}\\left( a_{j,k}\\langle 2^{-(k+1)\/2}|_{\\rm{{I}}} +\n\\alpha_{j,k}'\\left\\langle 0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n \\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n-\\frac{1}{2^j} & 1 & |{- 2^{-j} \\rangle_\\Omega} &\nB_{2j}\\left( \\frac{x^* - \\phi(x)}{2x^*} \\right)\\chi_{\\rm{II}} -\n\\frac{\\sqrt{2}}{2^j}B_{2j}\\left( \\frac{\\phi(x) -\nx^*}{\\sqrt{2}x^*}\\right)\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle -2^{-j} |_{\\Omega} & \\frac{\\left( x^{*}\n\\right)^{4j-2}} {(2j)!} \\left[ \\delta^{(2j-1)}_{+}(x-{\\rm{T}}^{(2)}(x_c))\n- \\delta^{(2j-1)}_{-}(x - x^{*}) \\right. \\\\\n& & & \\mbox{} \\left. \n - \\delta^{(2j-1)}_{+}(x - x^{*}) +\n\\delta^{(2j-1)}_{-}(x - {\\rm{T}}^{(1)}(x_c)) \\right] \\\\ & & & - \n\\sum_{k=2j}^{\\infty}\\left( b_{j,k}\\langle 2^{-(k+1)\/2}|_{\\rm{I}} - \\alpha_{j,k}'\\left\\langle\n0_k \\right|_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n \\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n0 & \\infty & | 0_{2j+1} \\rangle_{\\Omega} & E_{2j+1}\\left(\n\\frac{\\phi(x)}{x^*} \\right)\\chi_{\\rm{II}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & ( 2 \\times 2 & \\left\\langle 0_{2j+1} \\right|_{\\Omega} &\n-\\left(\\frac{x^*}{\\sqrt{2}}\\right)^{4j+2}\\frac{1}{(2j+1)!}\n\\delta^{(2j+1)}_{+}(x-{\\rm{T}}^{(2)}(x_c)) \\\\\n& \\mbox{Jordan} & &\\mbox{} - \\sum_{k=2j+1}^{\\infty}c_{j,k}\n\\langle 2^{-(k+1)\/2}|_{\\rm{I}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & \\mbox{blocks}) & | 0_{J_{2j+1}} \\rangle_{\\Omega} &\n-\\sqrt{2}\\left(E_{2j+1}\\left(\n\\frac{\\sqrt{2}}{x^*} (\\phi(x) - x^*)\\right)\\right)\\chi_{\\rm{III}} \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\left\\langle 0_{J_{2j+1}} \\right|_{\\Omega} &\n\\frac{(x^*)^{4j+2}}{\\sqrt{2}(2\\sqrt{2})^{2j+1}}\\frac{1}{(2j+1)!}\n\\delta^{(2j+1)}_{-}(x-{\\rm{T}}^{(1)}(x_c)) \\\\\n& & & \\mbox{}-\n\\sum_{k=2j+1}^{\\infty}\\gamma^{'}_{j,k}\\left\\langle 0_{k}\\right|_{\\rm{IV}}\n\\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n \\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n\\left( \\frac{1}{\\sqrt{2}}\\right)^{j+1} & 1 & |2^{-(j+1)\/2}\\rangle_{\\rm{I}} &\nx^j\\chi_{\\rm{I}} +\n\\sum_{i=0}a_{i,j}| 2^{-i}\\rangle + b_{i,j}|\n-2^{-i}\\rangle + c_{i,j}|0_i\\rangle \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\langle 2^{-(j+1)\/2}|_{\\rm{I}} & \\frac{1}{j!}\n\\left( (-1)^j\\delta^{(j)}(x) + \\delta^{(j)}(x-1) \\right) \\\\ \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n\\hline \n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\\n0 & \\infty & |\n0_j \\rangle_{\\rm{IV}} & (-1)^{j+1}x^j\\chi_{\\rm{I}} + (x-1)^{j}\\left(\n\\chi_{\\rm{III}} + \\chi_{\\rm{IV}} \\right) \\\\\n\\vspace{-1mm}& \\vspace{-1mm} & \\vspace{-1mm} & \\vspace{-1mm}\\\\ \n & & \\left\\langle 0_j \\right|_{\\rm{IV}} & \\frac{(-1)^j}{j!}\n\\delta^{(j)}(x-1) \\\\ \\hline\n \\end{array} \\]\n\\caption{\\small Elements of the spectral decomposition of the tent map at the first band splitting\npoint. 
The constants $a_{i,j}\\equiv\\langle\n{2^{-i}}|_{\\Omega}(2)^{-(j+1)\/2}\n\\rangle_{\\rm{I}}$, $b_{i,j}\\equiv\\langle\n-{2^{-i}}|_{\\Omega}(2)^{-(j+1)\/2}\n\\rangle_{\\rm{I}}$ and $c_{i,j}\\equiv\\langle\n0_i|_{\\Omega}(2)^{-(j+1)\/2}\n\\rangle_{\\rm{I}}$ and are given in Appendix C. The\nconstants $\\alpha_{i,j}'\\equiv\\langle\n+{2^{-i}}|_\\Omega 0_j \\rangle_{\\rm{IV}}$, $\\beta_{i,j}'\\equiv\\langle -{2^{-i}}|_\\Omega 0_j\n\\rangle_{\\rm{IV}}=-\\alpha_{i,j}'$ and $\\gamma_{i,j}'\\equiv\\langle 0_{J_i}|_\\Omega 0_j\n\\rangle_{\\rm{IV}}$ and\n$\\phi(x)$ is defined in (\\ref{phi}).}\n\\end{table}\n\n\n\\clearpage\n \n\n\n\n\n\n\n\n\\section{Higher band-splitting points}\n\nWe can determine the spectral decomposition of the tent map at any\nband-splitting point (bsp) by generalizing the approach used in the previous\nsection for the first bsp. For the rescaled map at\n$\\alpha = \\sqrt{2}$ we found that by considering its square\nit separated into two parts that were\ndirectly related by a simple change of scale (including a reflection for one\npart) to the tent map at full height where\n$\\alpha = 2$. This relationship also holds generally~\\cite{ProvMac,Heidel}\nbetween the map at the\n$n^{\\rm{th}}$ bsp,\n$(\\alpha_{n} = 2^{2^{-n}})$, and the $(n-1)^{\\rm{th}}$ bsp, \n$(\\alpha_{n-1} = 2^{2^{-(n-1)}} = \\alpha_{n}^2)$.\n\nThe tent map at $\\alpha_n$ on the interval\n$[ {\\rm{T}}^{(2)}_n (x_c), {\\rm{T}}^{(1)}_n (x_c))$,\nwhich contains the attractor, is first stretched to the \nunit interval $[0,1)$ to make the rescaled map ${\\rm R}_n$. \nThe linear function\nthat makes this stretch is \n\\begin{equation} \\label{genphi}\n\\phi_n (x) = \\frac{2x - \\alpha_n (2 - \\alpha_n)}{\\alpha_n(\\alpha_n - 1)}.\n\\end{equation} \nUnder (\\ref{genphi}) the map ${\\rm{T}}_{n}$ transforms to \n${\\rm{R}}_n = \\phi_n \\circ {\\rm{T}}_n \\circ \\phi^{-1}_n$ and\nis given by \n\\begin{equation} \\label{genr}\n{\\rm{R}}_n (x) = \\left\\{ \\begin{array}{lc}\n2-\\alpha_n (1-x) & 0 \\leq x < (\\alpha_n -1)\/ \\alpha_n \\\\\n\\noalign{\\vskip4pt}\n\\alpha_n (1- x) & (\\alpha_n -1)\/ \\alpha_n \\leq x < 1. \n\\end{array} \\right.\n\\end{equation}\n\nWe then compose ${\\rm{R}}_n$ with itself to obtain\n${\\rm{G}}_n \\equiv {\\rm{R}}_n \\circ {\\rm{R}}_n$. \nAs is illustrated in Figure 5 for the $2^{\\rm{nd}}$ bsp, 1t can be shown easily that in general: \n\\newline\n(a) ${\\rm{G}}_n(x)$ in the interval \n$X_{n, {\\rm A}} \\equiv [0,{\\rm{R}}_n^{(4)}(x_c))$ is\ntopologically conjugate to\n${\\rm{R}}_{n-1}$ (the rescaled map at higher height) in the interval\n$[0,1)$ as \n\\begin{equation} \\label{imp1}\n{\\rm{R}}_{n-1} = \\phi_{n,\\rm{A}}\n\\circ {\\rm{G}}_{n,\\rm{A}} \\circ \\phi_{n,\\rm{A}}^{-1} , \n\\end{equation}\nwhere\n\\begin{equation} \\label{impc1}\n\\phi_{n,\\rm{A}} = 1 - \\frac{x}{\\alpha_n (\\alpha_n -1)},\n \\;\\;\\;\\;\\; x \\in X_{n, {\\rm A}}.\n\\end{equation}\n(b) ${\\rm{G}}_n(x)$ in the interval \n$X_{n, {\\rm B}} \\equiv [{\\rm{R}}_n^{(3)}(x_c),1)$ is\ntopologically conjugate to\n${\\rm{R}}_{n-1}$ in the interval $[0,1)$ as\n\\begin{equation} \\label{imp2}\n{\\rm{R}}_{n-1} = \\phi_{n,\\rm{B}} \\circ\n{\\rm{G}}_{n,\\rm{B}} \\circ \\phi_{n,\\rm{B}}^{-1}, \n\\end{equation}\nwhere\n\\begin{equation} \\label{impc2}\n\\phi_{n,\\rm{B}} = \\frac{x-(2-\\alpha_n)}{\\alpha_n -1},\n \\;\\;\\;\\;\\; x \\in X_{n, {\\rm B}}. 
\n\\end{equation}\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig41.eps}}\n\\parbox{5in}{\\caption{\\small At the second band-splitting point, ${\\rm{G_{2}}}$ \nis conjugate in $X_{2,{\\rm A}}$ and $X_{2,{\\rm B}}$ to\n$\\rm{R_{1}}$ shown in Figure 3. The central transient region \nis shown as $X_{2,{\\rm C}}$.}}\n\\end{center}\n\\end{figure}\n\n\nAt the $n^{\\rm{th}}$ bsp the critical trajectory is eventually periodic with period\n$2^{n-1}$ ($n \\geq 1$), and therefore the number of discontinuities that appear\nunder time evolution of an initially smooth density is finite. Equations (\\ref{imp1}) --\n(\\ref{impc2}) imply that the band structure at the $n^{\\rm{th}}$ bsp consists of 2 scaled\ncopies of the band structure at the $(n-1)^{\\rm{th}}$ bsp separated by the\ninterval $[ {\\rm{R}}_{n}^{(4)}(x_c),{\\rm{R}}_{n}^{(3)}(x_c)) \\equiv\nX_{n, {\\rm C}}$, which makes one band. Therefore we have the following\n recursion relation for $S_n$, the number of bands at the $n^{{\\rm th}}$ bsp\nfor the map ${\\rm R}_{n}$,\n\\begin{equation}\nS_{n} = 2 S_{n-1} + 1 , \n\\end{equation}\nfor $n \\geq 2$,\nwhere $ S_{1} = 2$. Solving this recursion relation gives \n\\begin{equation}\\label{bandno}\nS_{n} = 2^n + 2^{n-1} - 1, \n\\end{equation}\nfor $n \\geq 1$.\nOf the $S_n$ bands, the invariant density has support only on $2^n$\nbands. The density is transient on the other\n$2^{n-1} -1$ bands. The function space we consider at the $n^{\\rm{th}}$ bsp is\npiecewise polynomial, with each piece extending over one band. \n\n\\subsection{Decomposition on the attractor}\n\nAssociated with the $2^n$ bands that make up \n$\\Omega_{n}$, the attractor at the $n^{\\rm{th}}$ bsp, are $2^n$\n eigen\/Jordan vectors of each polynomial degree\n$j$. We denote an element in the spectrum of\n$U_{{\\rm R}_n}$ associated with states on $\\Omega_{n}$ as $\\lambda^{n}_{k,j}$ and the right\neigen\/Jordan vector associated with it as $ |\\lambda^{n}_{k,j} \\rangle$. The\nsuperscript $n$ stands for the order of the band-splitting point. The index\n$k$ is an integer from $1$ to $2^n$ and distinguishes between the $2^n$\nindependent eigen\/Jordan vectors of degree $j$, at the $n^{\\rm{th}}$ bsp. \n\nSince ${\\rm G}_{n,{\\rm A}}$ and ${\\rm G}_{n,{\\rm B}}$ are topologically conjugate \nto ${\\rm R}_{n-1}$ all three share the same spectrum. The map ${\\rm G}_n$ is\nthe union of ${\\rm G}_{n,{\\rm A}}$, ${\\rm G}_{n,{\\rm B}}$ and the \npart of ${\\rm{G}}_{n}$ on the central transient interval $X_{n,{\\rm C}}$.\nThe spectrum of ${\\rm{R}}_{n}$ on $\\Omega_{n}$ is determined from the\nspectrum of ${\\rm{G}}_{n}$ on $\\Omega_{n}$. Consider a non-zero eigenvalue\n$\\lambda^{n-1}_{k,2j}$ of ${\\rm R}_{n-1}$ (we will show later that as at the\nfirst bsp the right states corresponding to non-zero eigenvalues\non the attractor are even-order polynomials), which is\nalso an eigenvalue of ${\\rm{G}}_{n}$ with degeneracy 2 \n(one for ${\\rm G}_{n,{\\rm A}}$ and one for ${\\rm G}_{n,{\\rm B}}$). \nUsing arguments parallel to those that we used at the first bsp\nwe deduce that ${\\rm{R}}_{n}$ has two distinct eigenvalues \n$+\\sqrt{\\lambda^{n-1}_{k,2j}}$ and $-\\sqrt{\\lambda^{n-1}_{k,2j}}$. 
By induction 
${\rm{R}}_{n+1}$ on $\Omega_{n+1}$ has in its spectrum the eigenvalues
$+(\lambda^{n-1}_{k,2j})^{1/4}$, $-(\lambda^{n-1}_{k,2j})^{1/4}$,
$+i(\lambda^{n-1}_{k,2j})^{1/4}$ and $-i(\lambda^{n-1}_{k,2j})^{1/4}$.
Thus from
the non-zero eigenvalues at the $0^{\rm{th}}$ bsp we can determine 
the non-zero eigenvalues at the
$n^{\rm {th}}$ bsp by taking square roots of $2^{-2j}$
recursively, $n$ times. Thus 
\begin{equation} \label{eig1}
\lambda^{n}_{k,2j} =
\left(\frac{1}{2}\right)^{{2j}/{2^n}}\mbox{exp}\left(\frac{2\pi
ik}{2^n}
\right),
\end{equation}
where $k=1,2,\ldots,2^n$ and $j=0,1,2,\ldots$ .
Note that for $k \leq 2^{n-1}$ 
\begin{equation} \label{minus}
\lambda^{n}_{k,2j} = -\lambda^{n}_{k+2^{n-1},2j}.
\end{equation}

At the $0^{\rm{th}}$ bsp the eigenvalue $0$ is infinitely degenerate with
an associated eigenpolynomial of odd degree for each occurrence of the
eigenvalue. In the previous section we saw that at the $1^{\rm st}$ bsp 
there are an infinite number of $2 \times 2$ Jordan blocks associated with
eigenvalue zero, one for each odd degree. We will show by explicit
construction below that this trend continues and at the $n^{\rm{th}}$ bsp, ${\rm R}_{n}$
has a $2^n \times 2^n$ Jordan block for every odd degree. 

\subsubsection{Right states on the attractor}

To determine the right states on $\Omega_{n}$ with non-zero eigenvalues we begin, similarly to
$(20)$, by writing
the right states at the
$n^{\rm{th}}$ bsp in terms of those at the $(n-1)^{\rm th}$ bsp as 
\alpheqn 
\begin{eqnarray}
 |\lambda^{n}_{k,2j} \rangle & = & |\lambda^{n-1}_{k,2j}
\rangle_{{\rm G}_{n,{\rm A}}} +c^{n}_{k,2j} |\lambda^{n-1}_{k,2j}
\rangle_{{\rm G}_{n,{\rm B}}} \\ 
 |\lambda^{n}_{k+2^{n-1},2j} \rangle & = & |\lambda^{n-1}_{k,2j}
\rangle_{{\rm G}_{n,{\rm A}}} +d^{n}_{k,2j} |\lambda^{n-1}_{k,2j}
\rangle_{{\rm G}_{n,{\rm B}}},
\end{eqnarray}
\reseteqn
where $k = 1,2,\ldots,2^{n-1}$. In equation (52) and
in the remaining part of this subsection kets without
any subscript denote right states of ${\rm{R}}$ at the appropriate bsp. 
Conjugacies (\ref{imp1}) and (\ref{imp2}) imply that the eigenvectors 
of ${\rm G}_{n,{\rm A}}$ and ${\rm G}_{n,{\rm B}}$ are related to those 
of ${\rm R}_{n-1}$ as
\alpheqn 
\begin{eqnarray}
|\lambda^{n-1}_{k,2j}\rangle_{{\rm G}_{n,{\rm A}}} & = & 
U_{\phi^{-1}_{n,\rm{A}}} |\lambda^{n-1}_{k,2j} \rangle \\ 
|\lambda^{n-1}_{k,2j}\rangle_{{\rm G}_{n,{\rm B}}} & = & 
U_{\phi^{-1}_{n,\rm{B}}} |\lambda^{n-1}_{k,2j} \rangle. 
\end{eqnarray}
\reseteqn
Using this in (52) gives
\alpheqn 
\begin{eqnarray} \label{onea} 
 |\lambda^{n}_{k,2j} \rangle & =
& U_{\phi^{-1}_{n,\rm{A}}} |
\lambda^{n-1}_{k,2j} \rangle +c^{n}_{k,2j}U_{\phi^{-1}_{n,\rm{B}}} 
 |\lambda^{n-1}_{k,2j} \rangle \\ \label{oneb}
 |\lambda^{n}_{k+2^{n-1},2j} \rangle & =
& U_{\phi^{-1}_{n,\rm{A}}} |
\lambda^{n-1}_{k,2j} \rangle +d^{n}_{k,2j}U_{\phi^{-1}_{n,\rm{B}}} 
|\lambda^{n-1}_{k,2j} \rangle. 
\end{eqnarray}
\reseteqn
The state $U_{\phi^{-1}_{n,\rm{A}}}|\lambda^{n-1}_{k,2j} \rangle$
has support only in $X_{n,{\rm{A}}}$ and 
$U_{\phi^{-1}_{n,\rm{B}}} |\lambda^{n-1}_{k,2j}
\rangle$ has support only in $X_{n,{\rm{B}}}$. 
\n\nWe now use the relation\n\\begin{equation} \\label{pmyst}\n {\\rm{R}}_{n}\\circ \\phi_{n,\\rm{B}}^{-1} = \\phi_{n,\\rm{A}}^{-1}, \n\\end{equation}\nwhich implies that \n\\begin{equation} \\label{myst}\nU_{{\\rm{R}}_{n}}U_{\\phi_{n,\\rm{B}}^{-1}} =\nU_{\\phi_{n,\\rm{A}}^{-1}}.\n\\end{equation} \n(Even though (\\ref{pmyst}) and (\\ref{myst}) are written \nspecifically at the band-splitting points, they are valid for all\n$\\alpha \\in (1,2]$.)\nActing on (54) by $U_{{\\rm{R}}_n}$ and using (\\ref{myst}) we get\n\\alpheqn\n\\begin{eqnarray} \\label{xiaoa}\n\\lambda^{n}_{k,2j} | \\lambda^{n}_{k,2j} \\rangle & =\n& U_{{\\rm{R}}_n}U_{\\phi^{-1}_{n,\\rm{A}}}|\n\\lambda^{n-1}_{k,2j} \\rangle +c^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle \\\\ \\label{xiaob} \n-\\lambda^{n}_{k,2j}| \\lambda^{n}_{k+2^{n-1},2j} \\rangle & =\n& U_{{\\rm{R}}_n}U_{\\phi^{-1}_{n,\\rm{A}}} |\n\\lambda^{n-1}_{k,2j} \\rangle +d^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle, \n\\end{eqnarray}\n\\reseteqn\nwhere (\\ref{minus}) has been used on the lhs of (\\ref{xiaob}). Since\n$U_{{\\rm{R}}_n}$ has the flip property that any function with support \nonly in $X_{n,\\rm{B}}$ will go entirely to\n$X_{n,\\rm{A}}$ in one iteration and vice-versa we know that\n$U_{{\\rm{R}}_n}U_{\\phi^{-1}_{n,\\rm{A}}}|\\lambda^{n-1}_{k,2j} \\rangle$\n has support only in $X_{n,{\\rm{B}}}$ and $U_{\\phi^{-1}_{n,\\rm{A}}} \n|\\lambda^{n-1}_{k,2j} \\rangle$ has support only in $X_{n,\\rm{A}}$.\nMultiplying (\\ref{onea}) by $\\lambda^{n}_{k,2j}$ and (\\ref{oneb}) by\n$-\\lambda^{n}_{k,2j}$ and identifying the components with support only\nin $X_{n,{\\rm A}}$ with the corresponding components in (57)\ngives\n\\alpheqn\n\\begin{eqnarray}\nc^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle & = & \\lambda^{n}_{k,2j}\nU_{\\phi^{-1}_{n,\\rm{A}}} |\\lambda^{n-1}_{k,2j} \\rangle \\\\ \nd^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{A}}} \n |\\lambda^{n-1}_{k,2j} \\rangle & = & -\\lambda^{n}_{k,2j}\nU_{\\phi^{-1}_{n,\\rm{A}}}|\\lambda^{n-1}_{k,2j} \\rangle,\n\\end{eqnarray}\n\\reseteqn\nshowing that $c^{n}_{k,2j} = \\lambda^{n}_{k,2j}$ and $d^{n}_{k,2j} = -\\lambda^{n}_{k,2j}$. \nUsing this result gives the pair of recursion relations\n\\alpheqn\n\\begin{eqnarray} \\label{three}\n| \\lambda^{n}_{k,2j} \\rangle & =\n& \\left( U_{\\phi^{-1}_{n,\\rm{A}}}\n+\\lambda^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{B}}} \\right) \n |\\lambda^{n-1}_{k,2j} \\rangle \\\\ \n| \\lambda^{n}_{k+2^{n-1},2j} \\rangle & =\n& \\left( U_{\\phi^{-1}_{n,\\rm{A}}} \n-\\lambda^{n}_{k,2j}U_{\\phi^{-1}_{n,\\rm{B}}} \\right)\n |\\lambda^{n-1}_{k,2j} \\rangle ,\n\\end{eqnarray}\n\\reseteqn \nwhich express the right states at the $n^{\\rm{th}}$ bsp in terms \nof those at the $(n-1)^{\\rm{th}}$ bsp. \n\nThese recursion relations can be solved to write the right\nstates at the\n$n^{\\rm{th}}$ bsp in terms of the right states at the $0^{\\rm{th}}$ ($\\alpha =\n2$) bsp. 
For notational convenience we define
\begin{equation} \label{four}
\begin{array}{lcl}
\widehat{{\rm{A}}}_{i} & \equiv & U_{\phi^{-1}_{i,{\rm{A}}}} \\
\widehat{{\rm B}}_{i} & \equiv & U_{\phi^{-1}_{i,{\rm{B}}}} \\
\widehat{{\rm B}}^{n}_{k,2j,i} & \equiv &
\left(\lambda^{n}_{k,2j}\right)^{2^{n-i}}\widehat{{\rm{B}}}_{i},
\end{array}\end{equation}
where the factor multiplying $\widehat{{\rm{B}}}_{i}$ is the eigenvalue carried by the
$i^{\rm th}$ step of the recursion.
Let $\sigma_i$ ($i=1,2,\dots,n$) be either $0$ or $1$ and define $\widehat
{\Pi}_{\sigma_n\sigma_{n-1}\dots\sigma_1}$ to be an ordered 
$n$-product of $\widehat{{\rm{A}}}_{i}$'s and $\widehat{{\rm
B}}^{n}_{k,2j,i}$'s, where if $\sigma_i = 1$ then the $i^{\rm {th}}$
location in the $n$-product (counting from the right) will be taken by
$\widehat{{\rm B}}^{n}_{k,2j,i}$ and if $\sigma_i = 0$ then the $i^{\rm {th}}$
location will be taken by $\widehat{{\rm{A}}}_{i}$. Solving (59)
gives 
\begin{equation} \label{five}
| \lambda^{n}_{k,2j} \rangle = 
\sum_{\{\sigma \}}\widehat{\Pi}_{\sigma_n\sigma_{n-1}...\sigma_1} |
\lambda^{0}_{1,2j} \rangle.
\end{equation}
The sum in (\ref{five}) is over all possible $\sigma$-strings 
of $0$'s and $1$'s of length $n$ and so consists of
$2^n$ terms ($n$-products). The order of the operators in each $n$-product 
must be strictly observed 
since the operators involved do not commute. 

To illustrate (\ref{five}) we write it out explicitly for $n = 1$ and $2$.
For $n=1$ (\ref{five}) gives
\begin{equation} \label{example1}
| \lambda^{1}_{k,2j} \rangle = \widehat{\rm{A}}_{1} |
\lambda^{0}_{1,2j} \rangle + \widehat{\rm B}^{1}_{k,2j,1} |
\lambda^{0}_{1,2j} \rangle.
\end{equation}
This agrees with the expression (\ref{fbspurstates})
(corresponding to $k=1$ and $k=2$) we had for the
right eigenstates at the first bsp. 
For $n=2$ (\ref{five}) gives
\begin{eqnarray}
| \lambda^{2}_{k,2j}\rangle & = &
\widehat{\rm{A}}_{2}\widehat{\rm{A}}_{1} |
\lambda^{0}_{1,2j} \rangle + \widehat{\rm{A}}_{2}\widehat{\rm B}^{2}_{k,2j,1} |
\lambda^{0}_{1,2j} \rangle \nonumber \\
& & \mbox{} + \widehat{\rm B}^{2}_{k,2j,2} \widehat{\rm A}_{1} |
\lambda^{0}_{1,2j} \rangle + \widehat{\rm B}^{2}_{k,2j,2}\widehat{\rm B}^{2}_{k,2j,1}
 | \lambda^{0}_{1,2j} \rangle.
\end{eqnarray}

Now we prove by induction that there is a $2^n \times
2^n$ Jordan block associated with the eigenvalue $0$ at the $n^{\rm{th}}$ bsp for
each odd order $2j+1$. In section 2 it was shown that this statement is
true for the first bsp. Assume that this statement is true at the $(n-1)^{\rm{th}}$ bsp.
We denote the Jordan vectors as $| 0^{n-1}_{k,2j+1} \rangle$ where 
$k=2,3,\dots,2^{n-1}$ ($| 0^{n-1}_{1,2j+1} \rangle$ is the eigenvector of the block). 
They satisfy
\alpheqn
\begin{eqnarray} \label{jorddef}
U_{{\rm R}_{n-1}}| 0^{n-1}_{k,2j+1} \rangle & = & | 0^{n-1}_{k-1,2j+1}
\rangle, \;\;\;\;\;\;\;\; k \neq 1 \\ 
U_{{\rm R}_{n-1}} | 0^{n-1}_{1,2j+1} \rangle & = & 0.
\end{eqnarray}
\reseteqn 
Since we are assuming that $U_{{\rm{R}}_{n-1}}$ has $2^{n-1} \times 2^{n-1}$ 
Jordan blocks, the conjugacies 
(\ref{imp1}) and (\ref{imp2}) imply that ${\rm{G}}_{n,{\rm A}}$ and ${\rm{G}}_{n,{\rm B}}$
both have $2^{n-1} \times 2^{n-1}$ Jordan blocks with states given by
\alpheqn 
\begin{eqnarray} 
| 0^{n}_{k,2j+1} \rangle_{{\rm{G}}_{n,A}} & = & 
\widehat{{\rm{A}}}_{n}| 0^{n-1}_{k,2j+1} \rangle \\ 
| 0^{n}_{k,2j+1} \rangle_{{\rm{G}}_{n,B}} & = & \label{jzerob}
\widehat{{\rm{B}}}_{n}| 0^{n-1}_{k,2j+1} \rangle,
\end{eqnarray}
\reseteqn
where $k=1,2,\dots,2^{n-1}$. Since 
${{\rm{G}}_{n}}$ on $\Omega_{n}$ has two $2^{n-1} \times 2^{n-1}$ Jordan blocks
for each $j$, ${{\rm{R}}_{n}}$ on $\Omega_{n}$ can either have two 
$2^{n-1} \times 2^{n-1}$ Jordan blocks for each $j$ or have one $2^{n}
\times 2^{n}$ Jordan block for each $j$. The first case would imply that 
$U_{{\rm R}_n}$ has two null vectors $| 0^{n}_{1,2j+1}
\rangle_{{\rm{G}}_{n,{\rm A}}}$ and $| 0^{n}_{1,2j+1}
\rangle_{{\rm{G}}_{n,{\rm B}}}$ for each $j$. But $U_{{\rm R}_n}
| 0^{n}_{1,2j+1} \rangle_{{\rm{G}}_{n,{\rm B}}} \neq 0$, since no function with
support in $X_{n,\rm{B}}$ can vanish in one iteration under
$U_{{\rm R}_n}$. Therefore 
$U_{{\rm R}_n}$ has a $2^n \times 2^n$ Jordan block for 
each odd degree $2j+1$. This
completes the proof by induction. 

A null state of $U_{{\rm R}_n}$ has to be a null state of
$U_{{\rm G}_n}$ too. Therefore the null state of
$U_{{\rm R}_n}$ for each $j$ is given by 
\begin{equation} \label{z1}
| 0^{n}_{1,2j+1} \rangle = 
\widehat{{\rm{A}}}_{n}| 0^{n-1}_{1,2j+1} \rangle.
\end{equation}
We use the relation 
\begin{equation}
{\rm R}_n \circ \phi^{-1}_{n, {\rm A}} = 
\phi^{-1}_{n, {\rm B}} \circ {\rm R}_{n-1},
\end{equation}
which implies that
\begin{equation} \label{myst1}
U_{{\rm R}_n}\widehat{{\rm{A}}}_{n} = 
\widehat{{\rm{B}}}_{n}U_{{\rm R}_{n-1}}.
\end{equation}
Unlike (\ref{myst}), equation (\ref{myst1}) 
is true only at the band-splitting points. Equation (\ref{myst1}) can be used to
verify that if we act on both sides of (\ref{z1}) by $U_{{\rm R}_n}$ its rhs reduces to zero.

The right state $| 0^{n}_{1,2j+1} \rangle_{{\rm G}_{n,{\rm B}}}$ 
is a good candidate for the Jordan state $| 0^{n}_{2,2j+1} \rangle$ since under one iteration by 
$U_{{\rm R}_n}$ it will have support only in $X_{n, \rm{A}}$ and under 
two iterations of $U_{{\rm R}_n}$ it will vanish. Tentatively, from 
(\ref{jzerob}) we write 
\begin{equation} \label{z2}
| 0^{n}_{2,2j+1} \rangle = 
\widehat{{\rm{B}}}_{n} | 0^{n-1}_{1,2j+1} \rangle.
\end{equation}
This guess for the Jordan state can be verified by using 
relation (\ref{myst}). 
Similarly (\\ref{myst1})\ncan be used to show that \n\\begin{equation} \\label{z3}\n| 0^{n}_{3,2j+1} \\rangle = \n\\widehat{{\\rm{A}}}_{n}| 0^{n-1}_{2,2j+1} \\rangle.\n\\end{equation}\nand equation (\\ref{myst}) can be used to show that \n\\begin{equation} \\label{z4}\n| 0^{n}_{4,2j+1} \\rangle = \n\\widehat{{\\rm{B}}}_{n}| 0^{n-1}_{2,2j+1} \\rangle.\n\\end{equation}\nIn general, we find \n\\begin{eqnarray} \\label{r1}\n| 0^{n}_{k,2j+1} \\rangle & = & \n\\widehat{{\\rm{A}}}_{n} | 0^{n-1}_{\\lceil k\/2 \\rceil,2j+1} \\rangle \\;\\;\\;\\;\n\\mbox{for} \\; k \\; \\mbox{odd} \\\\ \\nonumber\n| 0^{n}_{k,2j+1} \\rangle & = & \n\\widehat{{\\rm{B}}}_{n} | 0^{n-1}_{k\/2 ,2j+1} \\rangle \\;\\;\\;\\;\\;\\;\n\\mbox{for} \\; k \\; \\mbox{even}\n\\end{eqnarray}\nwhere $k=1,2,\\dots,2^n$ and $\\lceil q \\rceil$ is the ceiling function ($q$ if it is an integer or\nelse the next greatest integer). \nThese recursions are illustrated in Figure~6 up to $n=2$.\n\\setlength{\\unitlength}{1mm}\n\\begin{figure}\n\\begin{center}\n\\begin{picture}(67,60)\n\\put(67,33){$|0^{0}_{1,2j+1}\\rangle$}\n\\put(66,34){\\vector(-1,1){15}}\n\\put(66,34){\\vector(-1,-1){15}}\n\\put(58,43){${\\widehat{\\rm A}}_{1}$}\n\\put(58,22){${\\widehat{\\rm B}}_{1}$}\n\\put(36,49){$|0^{1}_{1,2j+1}\\rangle$}\n\\put(36,18){$|0^{1}_{2,2j+1}\\rangle$}\n\\put(35,50){\\vector(-2,1){20}}\n\\put(35,50){\\vector(-2,-1){20}}\n\\put(35,19){\\vector(-2,1){20}}\n\\put(35,19){\\vector(-2,-1){20}}\n\\put(24,56.5){${\\widehat{\\rm A}}_{2}$}\n\\put(24,9){${\\widehat{\\rm B}}_{2}$}\n\\put(24,40){${\\widehat{\\rm B}}_{2}$}\n\\put(24,25){${\\widehat{\\rm A}}_{2}$}\n\\put(0,60){$|0^{2}_{1,2j+1}\\rangle$}\n\\put(0,39){$|0^{2}_{2,2j+1}\\rangle$}\n\\put(0,28){$|0^{2}_{3,2j+1}\\rangle$}\n\\put(0,8){$|0^{2}_{4,2j+1}\\rangle$}\n\\put(2,0){$n=2$}\n\\put(38,0){$n=1$}\n\\put(69,0){$n=0$}\n\\end{picture}\n\\parbox{5in}{\\caption{\\small States associated with eigenvalue zero obtained from the\naction of $\\widehat{{\\rm{A}}}_{n}$ and $\\widehat{{\\rm{B}}}_{n}$ on $|0^{n-1}_{k,2j+1}\\rangle$. }}\n\\end{center}\n\\end{figure}\nThis recursion relation can then be solved to write the eigen\/Jordan\nvectors at the $n^{\\rm{th}}$ bsp in terms of the null vectors at the $0^{\\rm{th}}$ bsp.\n To write down a compact solution we define $\\Pi_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}$, which is\nsimilar to the $\\widehat {\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}$ previously defined. \n We define $\\Pi_{\\sigma_n\\sigma_{n-1}...\\sigma_1}$ to be an ordered \n$n$-product of $\\widehat{{\\rm{A}}}_{i}$'s and $\\widehat{{\\rm{B}}}_{i}$'s.\nIf $\\sigma_i = 1$ then the $i^{\\rm {th}}$ location in the $n$-product\n(counting from the right) will be taken by $\\widehat{{\\rm{B}}}_{i}$ and if\n$\\sigma_i = 0$ then the $i^{\\rm {th}}$ location will be taken by \n$\\widehat{{\\rm{A}}}_{i}$. With each $\\Pi_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}$\nwe associate a binary\nnumber formed from the string of $1$'s and $0$'s as $\\kappa = \\sigma_n\\sigma_{n-1}\\dots\\sigma_1 +1$.\nSolving the recursion relation (\\ref{r1}) we get\n\\begin{equation} \\label{jord}\n| 0^{n}_{\\kappa,2j+1} \\rangle = \\Pi_{\\sigma_n\\sigma_{n-1}...\\sigma_1}\n| 0^{0}_{1,2j+1} \\rangle,\n\\end{equation}\nwhere $| 0^{0}_{1,2j+1} \\rangle$ is the null vector of degree $2j+1$ of the\ntent map with full height, and $\\kappa$ here is the decimal equivalent of the binary $\\kappa$,\nwhich ranges from $1$ to $2^n$. 
\n\n\\subsubsection{Left states on the attractor}\nWe obtain the left states, $ \\langle \\lambda^{n}_{k,j}\n|$, which are orthonormal to the right states given by\n(\\ref{five}) and (\\ref{jord})\nby taking the duals of those expressions.\nThe dual expression of (\\ref{five}) gives the left \nstates corresponding to the non-zero eigenvalues as \n\\begin{equation}\n\\langle \\lambda^{n}_{k,2j} | =\n\\sum_{\\{\\sigma\\}}\\widehat{\\Pi}^{\\dag}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1}\n\\frac{\\langle\n\\lambda^{0}_{1,2j}|}{2^n},\n\\end{equation}\nwhere the $n$-product here is of $(\\widehat{{\\rm{A}}}_{i}^{-1})^{\\dag}$'s and \n$((\\widehat{{\\rm B}}^{n}_{k,2j,i})^{-1})^{\\dag}$'s and the factor of $1\/2^n$ is put for\nnormalization. The left states corresponding to the Jordan vectors associated with the\neigenvalue $0$ are \n\\begin{equation} \\label{leftjord}\n\\langle 0^{n}_{\\kappa,2j+1} | =\n\\Pi^{\\dag}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_1} \\langle 0^{0}_{1,2j+1}|,\n\\end{equation}\nwhere ${\\Pi^{\\dag}_{\\sigma_n\\sigma_{n-1}...\\sigma_1}}$ is an \nordered $n$-product of $(\\widehat{{\\rm{A}}}_{i}^{-1})^{\\dag}$'s and \n$(\\widehat{{\\rm B}}_{i}^{-1})^{\\dag}$'s.\n\n\\subsection{Decay onto the attractor of the rescaled map}\n\nWe saw in section 2.1 that $\\rm{R}_{1}$ has no transient bands. \nThe band structure of $\\rm{G}_2$\nconsists of two scaled copies of that of $\\rm{R}_{1}$ separated by the central interval \n$X_{2,{\\rm C}}$. This central interval is a transient band of $\\rm{R}_{2}$. Since\n${\\rm{G}}_{3,{\\rm A}}$ and ${\\rm{G}}_{3,{\\rm B}}$ have a band structure similar to\nthat of ${\\rm{R}}_{2}$ both of them have a transient interval also. We refer\nto these two transient bands as peripheral transients since in\naddition there is a central transient in the interval $X_{3,{\\rm C}}$. \nIn general, as discussed below (\\ref{bandno}),\nat the $n^{\\rm{th}}$ bsp ${\\rm{R}}_{n}$ has $2^{n-1} - 1$ transient bands, of which \n$2^{n-1} - 2$ are peripheral transient bands. At each bsp\nall transient bands except the central one are rescaled versions of\nthe central transient bands at previous band-splitting points.\n\\begin{figure}[htb]\n\\begin{center}\n\\scalebox{.5}[.5]{\\includegraphics{fig51.eps}}\n\\parbox{5in}{\\caption{\\small Band structure at the $1^{\\rm{st}}$, $2^{\\rm{nd}}$ and \n$3^{\\rm{rd}}$ band-splitting points. The central transient $X_{2,{\\rm C}}$ at the $2^{\\rm{nd}}$\nbsp transforms into two peripheral transients at the $3^{\\rm{rd}}$ bsp.}}\n\\end{center}\n\\end{figure} \n\nUnder the map ${\\rm{R}}_{n}$ the inverse image of any point in the central\ninterval $X_{n,{\\rm C}}$ is contained within the interval itself. This implies that if\na function initially has no support in $X_{n,{\\rm C}}$, it will continue to have no\nsupport in $X_{n,{\\rm C}}$ under repeated iterations by $U_{{\\rm R}_n}$.\nThe spectrum and the form of the eigen\/Jordan vector in\nthe central interval at all the band-splitting points can be obtained by\ninspection. We notice that \n\\begin{equation} \\label{trans1}\nU_{{\\rm R}_n}\\left[ \\left(x-x^{*}_{n}\\right)^{j}\\chi_{n,{\\rm C}}\\right] =\n(-1)^{j}\\left(\\frac{1}{\\alpha_{n}}\\right)^{j+1}\\left\\{\\left(x-x^{*}_{n}\\right)^{j}\n\\chi_{n,{\\rm C}} +\n\\left(x-x^{*}_{n}\\right)^{j}\\chi_{n,b} \\right\\},\n\\end{equation}\nwhere $\\chi_{n,{\\rm C}}$ is the indicator function on \n$X_{n,{\\rm C}}$ and $\\chi_{n,b} =1$ if \n$x \\in [{\\rm{R}}^{(3)}_{n}(x_c), {\\rm{R}}^{(5)}_{n}(x_c) )$ and $0$ otherwise. 
The\nform of the eigen\/Jordan vector in $X_{n,{\\rm C}}$ \nassociated with decay out of the central interval is thus\n$\\left(x-x^{*}_{n}\\right)^{j}$. The complete form will be determined below.\nThe eigenvalues associated with decay out of\nthe central transient at the $n^{\\rm{th}}$ bsp are seen by (\\ref{trans1}) to be\n\\begin{equation} \\label{trans2}\n\\phi^{n,0}_{1,j} = (-1)^{j}\\left(\\frac{1}{\\alpha_{n}}\\right)^{j+1}\n\\end{equation}\n \nNext we obtain the complete spectrum at the $n^{\\rm{th}}$ bsp associated with\nthe decay out of all the transient regions onto $\\Omega_{n}$. This is done\nby transforming all the central transients at band-splitting points of\norder less than $n$. Of all the peripheral transient bands at the $n^{\\rm th}$\nbsp, $2^{n-2}$ of these are\n$(n-2)$ times rescaled versions of the central transient at ${\\rm{R}}_{2}$, $2^{n-3}$\nof these are $(n-3)$ times rescaled versions of the central transient at \n${\\rm{R}}_{3}$ and so on up to $2$ of these peripheral transients being rescaled\nversions of the central transient at the $(n-1)^{\\rm{th}}$ bsp.\nThis is shown in Figure~7 up to $n=3$.\nWe denote the transient eigenvalues at the $n^{\\rm{th}}$ bsp by\n$\\phi^{n,l}_{k,j}$, where $l$ indicates that it was obtained from the\ncentral transient at the $(n-l)^{\\rm th}$ bsp, where \n$l=0,1,2,\\dots,n-2$, $k$ is\nan integer from $1$ to $2^l$, $j=0,1,2,3,\\dots,$ and as before the right state \n$|\\phi^{n,l}_{k,j} \\rangle$ is piecewise-polynomial of degree $j$ in each of\nthe pieces. The eigenvalues are obtained in a similar fashion to (\\ref{eig1}) as \n\\begin{equation} \\label{eig2} \\phi^{n,l}_{k,j} =\n\\left(\\phi^{n-l,0}_{1,j}\\right)^{{1}\/{2^l}}\\mbox{exp}\\left(\n\\frac{2\\pi ik}{2^l}\\right),\n\\end{equation}\nwhere the $\\phi^{n,0}_{1,j}$ are given by (\\ref{trans2}). \nFor even values of $j$ there are degeneracies in the spectrum while for odd\nvalues of $j$ there are no degeneracies. We consider first the states associated\nwith the degenerate eigenvalues.\n\n\\subsubsection{Transient right states of even degree}\nAs seen in (\\ref{eig2}), for a given $l$ (with $n$ and $2j$ fixed) there are $2^l$ distinct\neigenvalues indexed by $k$. At each integer step ($l$) all\nthe eigenvalues of the previous step ($l-1$) are present and $2^{l-1}$ new eigenvalues \nappear. 
But identical eigenvalues from different steps have disparate $k$ values.
It is convenient to rearrange the $k$ index so that degenerate eigenvalues share
the same $k$.\footnote{This can be accomplished by choosing the new $k$'s associated
with the $l^{\rm th}$ step as $k_{2i-1}^l = k_i^{l-1}$ and 
$k_{2i}^l = k_{2i-1}^l + 2^{l-1}$, where $i=1,\dots,2^{l-1}$ denotes the order of its appearance and
$k_1^0=1$.} Table 2 contains $\phi^{5,l}_{k,0}$ for all possible values of 
$l$ and $k$, arranged to illustrate the reordering in $k$.

\begin{table}[htbp!]
\[
\begin{array}{|l|cccccccc|} \hline
 & k=1 & k=2 & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\ \hline
& \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} 
& \vspace{-0.4mm} & \vspace{-0.4mm} & \\
\phi^{5,0}_{k,0} & \varphi & & & & & & & \\
& \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} 
& \vspace{-0.4mm} & \vspace{-0.4mm} & \\
\phi^{5,1}_{k,0} & \varphi & -\varphi & & & & & & \\ 
& \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} 
& \vspace{-0.4mm} & \vspace{-0.4mm} &\\
\phi^{5,2}_{k,0} & \varphi & -\varphi & i\varphi & -i\varphi & & & & \\ 
& \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} & \vspace{-0.4mm} 
& \vspace{-0.4mm} & \vspace{-0.4mm} & \\
\phi^{5,3}_{k,0} & \varphi & -\varphi & i\varphi & -i\varphi & 
\varphi\exp\big( \frac{\pi i}{4} \big) &
\varphi\exp\big( \frac{3\pi i}{4} \big) &
\varphi\exp\big( \frac{5\pi i}{4} \big)& 
\varphi\exp\big( \frac{7\pi i}{4} \big) \\
\hline
\end{array}
\]
\caption{Transient spectrum at the $5^{\rm{th}}$ bsp with $j=0$. The
constant $\varphi \equiv 1/\alpha_{5}$.} \end{table}

Next we ask if there are independent eigenvectors
associated with the degenerate eigenvalues. 
Using the procedure used to find the right eigenvectors
decaying out of region $\rm{I}$, it can be shown that in the rescaled map,
for $n > 2$ there is no eigenpolynomial of even degree with support in the central
transient. This means that the right states associated with $\phi^{n,0}_{1,2j}$
are Jordan vectors, for $n > 2$. For $n = 2$, decay out of the central
transient is described by eigenvectors for all values of $j$. Since the right states associated
with 
$\phi^{3,1}_{k,j}$, $\phi^{4,2}_{k,j}$,\dots,$\phi^{n,n-2}_{k,j}$ are transformed
versions of $| \phi^{2,0}_{1,j} \rangle$, these are also eigenvectors. For
$n \geq 3$ decay out of the central transient is described by Jordan 
vectors for even values
of $j$. Therefore the states associated with all the peripheral transients that are related by the
conjugacies (\ref{imp1}) and (\ref{imp2}) to these central transients are also Jordan
vectors. Hence associated with the transient spectrum are Jordan blocks whose sizes correspond to the
algebraic multiplicity of the eigenvalues. Thus from Table 2 we see that at the $5^{\rm{th}}$ bsp
there is a $4 \times 4$ Jordan block associated with
$\phi^{5,0}_{1,2j}$, a $3 \times 3$ Jordan block associated with $\phi^{5,1}_{2,2j}$ and so on. At
the $n^{\rm{th}}$ bsp the largest Jordan block in the transient spectrum is
associated with $\phi^{n,0}_{1,2j}$ and is of size $(n-1) \times (n-1)$. Since 
$\phi^{n,0}_{1,0}$ is the largest eigenvalue with modulus less than 1 in the
spectrum of $U_{\rm R_n}$, it corresponds to the slowest decay mode. 
Since there is a Jordan block\nassociated with $\\phi^{n,0}_{1,0}$ this decay is modified exponential (polynomial factors in $t$\ntimes exponential decay).\n\nIn general (for even $j$) we have Jordan vectors for $l\\leq n-3$ and eigenvectors for $l=n-2$ as\n\\begin{eqnarray} \\label{t5}\nU_{{\\rm R}_n}|\\phi^{n,l}_{k,2j} \\rangle & = & \\phi^{n,l}_{k,2j}\n|\\phi^{n,l}_{k,2j} \\rangle + |\\phi^{n,l+1}_{k,2j}\n\\rangle \\;\\;\\;\\;\\;\\;\\;\\; n \\geq 3, \\; l \\leq n-3 \\nonumber \\\\ \\noalign{\\vskip4pt}\nU_{{\\rm R}_n}|\\phi^{n,n-2}_{k,2j} \\rangle & = & \\phi^{n,n-2}_{k,2j}\n|\\phi^{n,n-2}_{k,2j} \\rangle \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; n \\geq 2. \n\\end{eqnarray}\nBy inspection we have\nalready obtained the form of the Jordan state in the central region in\n(\\ref{trans1}). We have\n\\begin{equation} \\label{t6} | \\phi^{n,0}_{1,2j} \\rangle =\na_{n,2j}\\left(x-x^*_n\\right)^{2j}\\chi_{n,{\\rm C}} + f_{n,2j}(x) \\end{equation}\nwhere $f_{n,2j}(x)$ is piecewise polynomial with degree $2j$ over each of the\n$S_{n}$ intervals (excluding the central interval\n$X_{n, {\\rm C}}$) at the $n^{\\rm{th}}$ bsp. Since a Jordan vector multiplied by a\nscalar doesn't remain a Jordan vector (with respect to the same eigenvector in the block)\nwe do not have the freedom to choose\nthe $a_{n,2j}$'s to be $1$ as we did in (\\ref{haha}). Using (\\ref{trans1})\nand (\\ref{t5}) we know that if (\\ref{t6}) is to be a Jordan vector the\narbitrary functions $f_{n,2j}(x)$ must satisfy \n\\begin{equation} \\label{t1} U_{{\\rm R}_n}\nf_{n,2j}(x) = \\phi^{n,0}_{1,2j}\\left[ f_{n,2j}(x) -\na_{n,2j}\\left(x-x^{*}_{n}\\right)^{2j}\\chi_{n, b}\n\\right] + | \\phi^{n,1}_{1,2j} \\rangle\n\\end{equation}\nA formal approach to determine the $a_{n, 2j}$'s and $f_{n, 2j}(x)$ is \ndescribed in Appendix D.\n\nTo find the eigenstates corresponding to the peripheral transients\nwe transform the central transient eigenstates at the $2^{\\rm{nd}}$ bsp\nto all the higher band-splitting points. We define \n\\begin{equation}\n\\check{\\rm B}^{n,n-2}_{k,2j,i} = \\left( \\phi^{n,n-2}_{k,2j} \\right)^{i}\\widehat{\\rm B}_{i},\n\\end{equation}\nand $\\check{\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_{3}}$ is an ordered product of \n$n-2$ operators $\\widehat{\\rm A}_{i}$ and $\\check{\\rm B}^{n,n-2}_{k,2j,i}$ ($i\n= 3,4,\\dots,n$). If $\\sigma_i = 0$ the $i^{\\rm th}$ place in the\nproduct is taken by $\\widehat{\\rm A}_{i}$ and if $\\sigma_i = 1$ the $i^{\\rm\nth}$ place in the product is taken by $\\check{\\rm B}^{n,n-2}_{k,2j,i}$.\nFollowing the same procedure used to obtain (\\ref{five}) we find that the transient\neigenvectors, $| \\phi^{n,n-2}_{k,2j}\\rangle$, $(n\\geq 3)$, are given by\n\\begin{equation} \\label{transeig}\n| \\phi^{n,n-2}_{k,2j} \\rangle = \\sum_{\\{ \\sigma\\}}\n\\check{\\Pi}_{\\sigma_n \\sigma_{n-1}\\dots\\sigma_3}\n| \\phi^{2,0}_{1,j} \\rangle\n\\end{equation}\n\n\nTo find the Jordan states corresponding to the peripheral transients, \n$| \\phi^{n,l}_{k,2j} \\rangle$ for $1 \\leq l \\leq n-3$, \nwe use an approach similar to that used to find the\nJordan vectors describing decay out of the central transient, \n$| \\phi^{n,0}_{1,2j} \\rangle $. To clarify the notation we \nnote that $| \\phi^{n,1}_{k,2j} \\rangle $ does not\nhave support in the central transient band, but has \nsupport over all the other transient and attracting bands. 
Similarly\n$| \\phi^{n,2}_{k,2j} \\rangle $ has support on all bands except the\ncentral transient band and the two transient bands that are related by a single\ntransformation to the central transient at the $(n-1)^{\\rm{th}}$ bsp. This\npattern continues and $| \\phi^{n,n-2}_{k,2j} \\rangle$ has support\non all the attracting bands and the $2^{n-2}$ transient bands that are related\nby transformations to the central transient at the $2^{\\rm{nd}}$ bsp. Following\n(\\ref{t6}) and using the fact that the peripheral transients at the\n$n^{\\rm{th}}$ bsp are transformed versions of the transients at the\n$(n-1)^{\\rm{th}}$ bsp we write\n\\begin{equation} \\label{jord10}\n| \\phi^{n,l}_{k,2j} \\rangle = a^{n,l}_{k,2j}\\left( \n\\widehat{\\rm A}_n| \\phi^{n-1,l-1}_{\\lceil k\/2 \\rceil,2j} \\rangle +\n\\phi^{n,l}_{k,2j} \\widehat{\\rm B}_n | \\phi^{n-1,l-1}_{\\lceil k\/2 \\rceil,2j}\n\\rangle \\right) + g^{n,l}_{k,2j}(x)\n\\end{equation}\nwhere $g^{n,l}_{k,2j}(x)$ is piecewise polynomial of degree $2j$. The\npolynomial $g^{n,l}_{k,2j}(x)$ can be obtained using a procedure similar to\nthe one described in Appendix D. We present here the\nsolution only for the simplest case,\n\\begin{equation} \\label{jord14}\n| \\phi^{n,n-3}_{k,2j} \\rangle = 2 \\, \\phi^{n,n-3}_{k,2j} \\left(\n\\widehat{\\rm A}_n| \\phi^{n-1,n-4}_{\\lceil k\/2 \\rceil,2j} \\rangle\n + \\phi^{n,n-3}_{k,2j}\n\\widehat{\\rm B}_n| \\phi^{n-1,n-4}_{\\lceil k\/2 \\rceil,2j} \\rangle\n\\right) - \\frac{1}{2 \\, \\phi^{n,n-3}_{k,2j}}\n| \\phi^{n,n-2}_{k',2j} \\rangle\n\\end{equation}\nwhere $k'=k+1$ if $k$ is odd and $k'=k-1$ if $k$ is even. The recursion\nrelations for the other Jordan vectors involve more terms, and in general\nto find all the Jordan vectors at the $n^{\\rm{th}}$ bsp one must know\nall the Jordan vectors at the $(n-1)^{\\rm{th}}$ bsp. Also, at a particular\nbsp, to determine $| \\phi^{n,l}_{k,2j} \\rangle $ one must know \nall the $| \\phi^{n,l'}_{k,2j} \\rangle $ for $l' < l$. So\ngenerally one proceeds in the following order\n$| \\phi^{2,0}_{1,2j} \\rangle $, $|\n\\phi^{3,1}_{k,2j}\\rangle $, $| \\phi^{3,0}_{1,2j}\\rangle $,\n$| \\phi^{4,2}_{k,2j} \\rangle $, $|\n\\phi^{4,1}_{k,2j}\\rangle \\dots $. The transient eigenvectors can be\nobtained directly from (\\ref{transeig}) without regard to this order.\n\n\\subsubsection{Transient right states of odd degree}\n\nFor odd values of $j$ the transient spectrum at the $n^{\\rm th}$ bsp, \ngiven by equation (\\ref{eig2}), is nondegenerate. We first find the \neigenvectors with support in the central transient bands, at all the\nband-splitting points. By inspection we have already obtained the form \nof the eigenvector in the central transient in (\\ref{trans1}), so that\n\\begin{equation} \\label{oddj}\n| \\phi^{n,0}_{1,2j+1} \\rangle = \\left( x-x^{*}_{n} \\right)^{2j+1} \\chi_{n,{\\rm C}} \n+ f_{n,2j+1}(x),\n\\end{equation}\nwhere $f_{n,2j+1}(x)$ is a polynomial of degree $2j+1$ in each of the \n$S_n$ intervals excluding the central interval. For this expression to be an \neigenvector, $f_{n,2j+1}(x)$ must satisfy\n\\begin{equation} \\label{oddj1}\nU_{{\\rm R}_n}\nf_{n,2j+1}(x) = \\phi^{n,0}_{1,2j+1}\\left[ f_{n,2j+1}(x) -\n\\left(x-x^{*}_{n}\\right)^{2j+1}\\chi_{n, b}\n\\right].
\n\\end{equation}\nThis equation for $f_{n,2j+1}(x)$ is similar to equation (\\ref{t1}), except\nthat it has one term less on the rhs, and may be solved using the procedure\noutlined in Appendix~D.\n\nOnce the $| \\phi^{n,0}_{1,2j+1} \\rangle$ are known the eigenvector\ncorresponding to any $| \\phi^{n,l}_{k,2j+1} \\rangle$ can be written as \n\\begin{equation}\n| \\phi^{n,l}_{k,2j+1} \\rangle = \\sum_{\\{ \\sigma \\}\n}\\check{\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_{n-l+1}}\n|\\phi^{n-l,0}_{1,2j+1} \\rangle,\n\\end{equation}\nwhere $\\check{\\Pi}_{\\sigma_n\\sigma_{n-1}\\dots\\sigma_{n-l+1}}$ \n is an ordered product of \n$l$ operators $\\widehat{\\rm A}_{i}$ and $\\check{\\rm B}^{n,l}_{k,2j,i}$ ($i =\nn-l+1,\\dots,n$). The $\\check{\\rm B}^{n,l}_{k,2j,i}$ operators are defined\nhere as \n\\begin{equation}\n\\check{\\rm B}^{n,l}_{k,2j,i} \\equiv \\left(\\phi^{n,l}_{k,2j+1}\\right)^i\n \\widehat{\\rm B}_i\n\\end{equation}\nIf $\\sigma_i = 0$ the\n$i^{\\rm th}$ place in the product is taken by \n$\\widehat{\\rm A}_{i}$ and if $\\sigma_i = 1$ the $i^{\\rm th}$ place in the \nproduct is taken by $\\check{\\rm B}^{n,l}_{k,2j,i}$. The sum is over all\npossible strings of $0$'s and $1$'s of length $l$.\n\n\\subsubsection{Left States}\n \nThe left states $\\langle \\phi^{n,l}_{k,j} |$ form a bi-orthonormal set with all\nthe previously obtained right states at the $n^{\\rm{th}}$ bsp as\n\\alpheqn\n\\begin{eqnarray}\n\\langle \\phi^{n,l}_{k,j} | \\lambda^{n}_{k'j'} \n\\rangle & = & 0 \\\\ \n\\langle \\phi^{n,l}_{k,j} | \\phi^{n,l'}_{k'j'} \n\\rangle & = & \\delta_{ll'}\\delta_{kk'}\\delta_{jj'}.\n\\end{eqnarray}\n\\reseteqn\nAmong the right states only the states with $l=0$ have support in the central transient band.\nFrom (\\ref{t6}) we see that the associated left states are \n\\alpheqn\n\\begin{eqnarray}\n\\langle \\phi^{n,0}_{1,2j} | & = & \\frac{1}\n{a_{n,2j}}\\delta^{(2j)}\\left( x - x^{*}_{n} \\right) \\\\ \n\\langle \\phi^{n,0}_{1,2j+1} | & = & \n-\\delta^{(2j+1)}\\left( x - x^{*}_{n} \\right).\n\\end{eqnarray}\n\\reseteqn\nThe left states corresponding to the peripheral transients at \nthe $n^{\\rm{th}}$ bsp, $\\langle \\phi^{n,l}_{k,j} |$,\nare found by transforming the left states at the \n$(n-1)^{\\rm{th}}$ bsp, \n$\\langle \\phi^{n-1,l-1}_{\\lceil k\/2 \\rceil,j} |$.\nThen the entire set at the $n^{\\rm{th}}$ bsp has to be orthonormalized\nusing a Gram--Schmidt procedure.\n\n\\subsection{Back to ${\\rm T}_n(x)$}\n\nGoing back to the tent map ${\\rm{T}}_{n}(x)$, we transform\nall the right states of ${\\rm{R}}_{n}(x)$ by $U_{\\phi^{-1}_{n}}$ and the left states by\n$K_{\\phi_{n}}$. The states describing decay out of \n$[ {\\rm{T}}_{n}^{(3)}(x_{c}),1 ]$ are null states. \nThe eigenvalues describing decay out of \n$[ 0,{\\rm{T}}_{n}^{(4)}(x_{c})]$ are \n$\\phi_j^n \\equiv \\left( \\frac{1}{\\alpha_{n}}\\right)^{j+1}$ and have\nassociated polynomial eigenvectors of degree $j$. For $j$ even this part of the spectrum overlaps\nwith the spectrum describing decay of transients of ${\\rm R}_n$. 
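Indeed, comparison with (\\ref{trans2}) shows that for even $j$ the sign factor $(-1)^{j}$ is unity, so that\n$\\phi^{n}_{j} = \\left(1\/\\alpha_{n}\\right)^{j+1} = \\phi^{n,0}_{1,j}$.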
There are Jordan vectors associated\nwith this part of the spectrum; denoting them as\n$| \\phi^{n}_{2j} \\rangle$, we have\n\\alpheqn\n\\begin{eqnarray}\nU_{{\\rm T}_n}| \\phi^{n}_{2j} \\rangle & = & \\phi^{n}_{2j}\n| \\phi^{n}_{2j} \\rangle +| \\phi^{n,0}_{1,2j} \n\\rangle \\\\ \nU_{{\\rm T}_n}| \\phi^{n}_{2j+1}\\rangle & = & \\phi^{n}_{2j+1}\n| \\phi^{n}_{2j+1} \\rangle .\n\\end{eqnarray}\n\\reseteqn\nThese right states can be determined by an extension of the \nmethods used in\nSection 2.2 to determine the eigenstates describing \ndecay out of interval $\\rm{I}$ for the tent map at the first bsp.\n\n\\section{Conclusion}\n\nWe have presented the generalized spectral\ndecomposition of the Frobenius--Perron\noperator of the tent map at the\nband-splitting points. The right eigenstates are\npiecewise-polynomial functions and the left eigenstates\nare generalized functions. The \nspectrum is discrete and gives the\ncharacteristic decay times of the map. From the decomposition\none can calculate correlations of arbitrary polynomials (as\nwell as functions expandable in terms of the polynomial \neigenstates). Furthermore, since the modes corresponding\nto transient decay onto the attractor have been obtained, the\nfull nonequilibrium dynamics of initial probability densities\nis accessible.\n\nThe slowest decay mode, corresponding to the\neigenvalue $\\alpha_{n}^{-1}$ at the\n$n^{\\rm th}$ bsp, describes decay onto the\nattractor. At the $n^{\\rm th}$ bsp there is an \n$n \\times n$ Jordan block associated with this\neigenvalue, and therefore the decay is modified\nexponential. The asymptotic periodicity\nof the map is clearly reflected in the\nspectrum as at the $n^{\\rm th}$ bsp, all the \n$n^{\\rm th}$ roots of unity are part of the\nspectrum. Our analytic solution of density \nevolution in this system may be useful for \ncomparison with the behavior of systems governed by the Ginzburg--Landau\nequation since a component of its dynamics~\\cite{Moon} can be reduced to the tent map.\n\n\\section*{Acknowledgements}\n\nWe thank I.~Prigogine for his support and encouragement and G.E.~Ord\\'{o}\\~{n}ez\nfor several useful discussions and his comments on the manuscript.\nWe acknowledge US\nDepartment of Energy grant no. FG03-94ER14465, \nthe Welch Foundation\ngrant no. F-0365, and the European Communities Commission (contract no.\n27155.1\/BAS) for support of this work.\n\n\\section*{Appendix A: Topological conjugacy}\n\nIn this appendix we review the spectral decompositions \nof maps related by a coordinate \ntransformation~\\cite{Deanbook}. Let ${\\rm{T}}: X \\rightarrow X$ be a map defined on the\ninterval\n$X$. Transforming the interval $X$ by the one-to-one, \nonto, continuous function $\\phi : X \\rightarrow Y$ gives a new map, ${\\rm{S}}:Y\n\\rightarrow Y$. This map is determined as\n\\begin{equation}\ny_{t+1} = \\phi(x_{t+1}) = \\phi({\\rm T}(x_t)) \\equiv {\\rm S}(y_t).\n\\end{equation}\nUsing the fact that $\\phi$ has an inverse gives\n\\begin{equation}\n\\phi({\\rm T}(x_t)) = \\phi({\\rm T}(\\phi^{-1}(\\phi(x_t)))) \n= \\phi \\circ {\\rm T} \\circ \\phi^{-1}(y_t),\n\\end{equation}\nso that\n\\begin{equation} \\label{strel}\n{\\rm S} = \\phi \\circ {\\rm T} \\circ \\phi^{-1}.\n\\end{equation}\nThe maps ${\\rm{T}}$ and ${\\rm{S}}$ are said to be topologically \nconjugate to each other.
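A standard illustration of (\\ref{strel}) (not needed for the constructions above, but it fixes the idea):\ntaking ${\\rm T}$ to be the tent map with unit height (the map ${\\rm T}_{0}$ of Appendix B) and\n$\\phi(x)=\\sin^{2}(\\pi x\/2)$, one finds $\\phi({\\rm T}(x))=\\sin^{2}(\\pi x)=4\\phi(x)\\left(1-\\phi(x)\\right)$,\nso that ${\\rm S}(y)=4y(1-y)$ is the logistic map.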
\n\nThe Koopman operator, $K_{\\rm{S}}$,\ncorresponding to ${\\rm{S}}$, is given from (\\ref{strel}) by \n\\begin{equation} \\label{kooprel}\n K_{\\rm{S}} = K^{-1}_{\\phi}K_{\\rm{T}}K_{\\phi},\n\\end{equation}\nwhere we have used the fact that $K_{\\phi^{-1}} = K^{-1}_{\\phi}$.\nThe Frobenius--Perron operator, $U_{\\rm{S}}$, corresponding to ${\\rm{S}}$\nis the adjoint of $K_{\\rm{S}}$. Taking the adjoint of (\\ref{kooprel})\nand using $(K_{\\phi}^{-1})^{\\dagger} = (K_{\\phi}^{\\dagger})^{-1}$ gives\n\\begin{equation} \\label{frobrel}\nU_{\\rm{S}} = U_{\\phi} U_{\\rm{T}} U^{-1}_{\\phi}.\n\\end{equation}\nSince $U_{\\rm S}$ and $U_{\\rm T}$ are related by the similarity (\\ref{frobrel}),\nthe spectrum of $U_{\\rm S}$ is identical to that of $U_{\\rm T}$ and eigenstates\ntransform as\n\\begin{equation} \\label{app2}\n\\left| \\lambda_{n} \\right\\rangle_{\\rm{S}} = U_{\\phi}\\left| \\lambda_{n}\n\\right\\rangle_{\\rm{T}},\n\\end{equation}\nwhere we use a Dirac-style bra-ket notation for the states.\nFrom (\\ref{kooprel}) the left states transform as \n\\begin{equation} \\label{app3}\n\\left\\langle \\lambda_{n} \\right|_{\\rm{S}} = \nK_{\\phi^{-1}} \\left\\langle \\lambda_{n} \\right|_{\\rm{T}}.\n\\end{equation}\nJordan states of the maps are also related as in (\\ref{app2}) and (\\ref{app3}) and\nboth the algebraic and geometric multiplicities of the eigenvalues are preserved \nunder conjugacy.\n\n\\section*{Appendix B: The tent map with unit height}\n\nThe Frobenius--Perron operator of the tent map with unit height is given by \n\\begin{equation}\nU_{\\rm{T_{0}}}\\rho(x) = \\frac{1}{2}\\left[ \\rho \\left( \\frac{x}{2} \\right)\n + \\rho\\left( \\frac{2-x}{2} \\right) \\right].\n\\end{equation}\nThe operator $U_{\\rm T_0}$ admits polynomial eigenstates with support on the\nwhole unit interval. Associated with eigenpolynomials of order\n$2j$ are the nonzero eigenvalues $2^{-2j}$. There\nis an infinite degeneracy of the eigenvalue $0$ with an independent\nodd-order eigenpolynomial associated with each occurrence of the eigenvalue.
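One way to see this structure is to act on a monomial:\n$U_{{\\rm T}_{0}}\\,x^{j}=\\frac{1}{2}\\left[(x\/2)^{j}+(1-x\/2)^{j}\\right]$ is a polynomial of degree at most $j$,\nwhose coefficient of $x^{j}$ is $2^{-j}$ for even $j$ and zero for odd $j$; the resulting triangular action\non monomials yields the eigenvalues $2^{-2j}$ together with the infinitely degenerate eigenvalue $0$.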
\nThus, the odd-order eigenpolynomials are not unique, but we choose them as Euler\npolynomials so that the associated left eigendistributions take a simple form.\nThe right eigenvectors of $U_{\\rm T_0}$ are~\\cite{Gonzalo,fox}\n\\alpheqn \n\\begin{eqnarray}\n|2^{-2j}\\rangle_{{\\rm{T}}_{0}} & = & \nB_{2j}(x\/2) \\\\\n| 0_{2j+1} \\rangle_{\\rm T_0} & = & E_{2j+1}(x), \n\\end{eqnarray}\n\\reseteqn\nwhere $B_{j}(x)$ is the Bernoulli polynomial of order $j$ and \n$E_{j}(x)$ is the Euler polynomial of order $j$~\\cite{Absteg}.\nThe corresponding left states are\n\\alpheqn\n\\begin{eqnarray}\n\\langle {2^{-2j}}|_{\\rm{T}_{0}} & = & \n2^{2j}\\widetilde B_{2j} (x) \\\\\n\\left\\langle 0_{2j+1} \\right|_{\\rm{T}_{0}} & = & \n\\frac{-1}{(2j+1)!}\\delta^{(2j+1)}_{-}(x-1),\n\\end{eqnarray}\n\\reseteqn\nwhere $\\widetilde B_0(x) = 1$ and for $j\\geq 1$\n\\begin{equation} \n\\widetilde B_{2j} (x) \\equiv \\frac{(-1)^{2j-1}}{(2j)!}\\left[ \n\\delta^{(2j-1)}_{-}(x-1) - \\delta^{(2j-1)}_{+}(x) \\right], \n\\end{equation}\nwhere the action of $\\delta^{(m)}_\\pm (x-c)$ on a sufficiently differentiable\nfunction $f(x)$ is given by \n\\begin{equation}\n\\int_a^b dx \\, \\delta^{(m)}_{\\pm}(x-c) f(x) =\n\\lim_{\\epsilon \\rightarrow 0} (-1)^{m} f^{(m)}(c \\pm \\epsilon),\n\\end{equation}\nfor $a \\leq c \\leq b$, where $\\epsilon$ is a positive infinitesimal.\n\nThe time evolution of a density is expressed in terms of the spectral decomposition of\n$U_{\\rm T_0}$ as\n\\begin{equation}\nU_{\\rm T_0}^t \\, \\rho(x) = \\sum_{j=0}^\\infty\n\\left[ (2^{-2j})^t |2^{-2j}\\rangle \\langle 2^{-2j}| \\rho \\rangle\n+ \\delta_{t,0} | 0_{2j+1} \\rangle \\langle 0_{2j+1}| \\rho \\rangle \\right],\n\\end{equation}\nwhere the bilinear form is defined by\n\\begin{equation}\n\\langle f | g \\rangle \\equiv \\int_0^1 dx \\, f^*(x) g(x).\n\\end{equation}\n\n\\section*{Appendix C: Calculation of transient right states}\n\nTo determine the functions \n$g_{{\\rm II},j}(x)$ and $g_{{\\rm III},j}(x)$, which appear\non the rhs of (\\ref{form1}), we expand \n$g_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm III},j}(x)\\chi_{\\rm{III}}$ in terms of the\neigenstates of $U_{\\rm T_1}$ on the attractor given in Table 1 as \n\\begin{eqnarray} \\label{expand1}\ng_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm III},j}(x)\\chi_{\\rm{III}} & = &\n\\sum_{i=1}^{\\lfloor j\/2\n\\rfloor} a_{i,j}|{+2^{-i}} \\rangle_{\\Omega} + \n\\sum_{i=1}^{\\lfloor j\/2\\rfloor}\n b_{i,j}|{-2^{-i}} \\rangle_{\\Omega} \\nonumber \\\\\n & & \\mbox{} + \\sum_{i=1}^{\\lfloor\n\\frac{j-1}{2} \\rfloor} c_{i,j}| 0_{2i+1} \\rangle_{\\Omega} +\n\\sum_{i=1}^{\\lfloor\n\\frac{j-1}{2} \\rfloor} d_{i,j}| 0_{J_{2i+1}} \\rangle_{\\Omega}, \n\\end{eqnarray}\nwhere $\\lfloor x \\rfloor$ denotes the integer\npart (floor) of the real number $x$.\nThen acting with $U_{\\rm T_1}$ gives\n\\begin{eqnarray} \\label{expand2}\nU_{{\\rm{T_1}}} \\left( g_{{\\rm II},j}(x)\\chi_{\\rm{II}} + g_{{\\rm\nIII},j}(x)\\chi_{\\rm{III}}\n\\right) & = &\n\\sum_{i=1}^{\\lfloor j\/2\n\\rfloor} \\frac{a_{i,j}}{2^i}|{+2^{-i}} \\rangle_{\\Omega} - \n\\sum_{i=1}^{\\lfloor j\/2\\rfloor}\n \\frac{b_{i,j}}{2^i}|{-2^{-i}} \\rangle_{\\Omega} \\nonumber \\\\\n& & \\mbox{} + \\sum_{i=1}^{\\lfloor\n\\frac{j-1}{2} \\rfloor} d_{i,j}| 0_{2i+1} \\rangle_{\\Omega}. \n\\end{eqnarray} \nWe substitute (\\ref{expand1}) \nand (\\ref{expand2}) into (\\ref{form1}) and act on (\\ref{form1}) by all the left states\non the attractor in succession.
Using orthonormality, we obtain the\nfollowing equations for the expansion coefficients:\n\\alpheqn\n\\begin{eqnarray} \\label{form2}\n\\frac{a_{i,j}}{2^i} + \\alpha_{i,j} & = & 2^{-(j+1)\/2} \\, a_{i,j} \\\\\n\\frac{b_{i,j}}{2^i} - \\beta_{i,j} & = & -2^{-(j+1)\/2} \\, b_{i,j} \\\\ \nd_{i,j} + \\gamma_{i,j} & = & 2^{-(j+1)\/2} \\, c_{i,j} \\\\\nd_{i,j} & = & 0,\n\\end{eqnarray}\n\\reseteqn \nwhere\n\\alpheqn\n\\begin{eqnarray} \\label{def}\n\\alpha_{i,j} & \\equiv & 2^{-(j+1)\/2} \n\\langle{+2^{-i}}| x^j\\chi_{\\rm{II}} \\rangle \\\\ \n\\beta_{i,j} & \\equiv & 2^{-(j+1)\/2}\n\\langle{-2^{-i}}| x^j\\chi_{\\rm{II}} \\rangle = \n\\alpha_{i,j} \\\\ \n\\gamma_{i,j} & \\equiv & 2^{-(j+1)\/2}\n\\langle 0_i | x^j\\chi_{\\rm{II}} \\rangle,\n\\end{eqnarray}\n\\reseteqn\nand we used $\\langle 0_{J_i}| x^j\\chi_{\\rm{II}} \\rangle = 0$ because \n$\\langle 0_{J_i}|$ has support only in $\\chi_{\\rm III}$. \nExplicit evaluation of $\\langle +\\frac{1}{2^i}|\nx^j\\chi_{\\rm{II}} \\rangle$ gives\n\\begin{equation}\n\\langle +\\frac{1}{2^i}| x^j\\chi_{\\rm{II}} \\rangle = \n\\left\\{ \\begin{array}{lc} \\frac{\\sqrt{2}}{2(\\sqrt{2}-1)(j+1)} \n((2 - \\sqrt{2})^j -1 )\n& i=0 \\\\ \\noalign{\\vskip4pt}\n0 & 2i-1 \\geq j \\\\ \\noalign{\\vskip4pt}\n-\\frac{j!(2 - \\sqrt{2})^{j+2i-1}}{(2i)!(j-2i+1)!}(2^{(j-2i+1)\/2} -1) & 2i-1 < j. \n\\end{array} \\right.\n\\end{equation}\nEvaluation of $\\langle 0_i | x^j\\chi_{\\rm{II}} \\rangle$ gives\n\\begin{equation}\n\\langle 0_i | x^j\\chi_{\\rm{II}} \\rangle = \n\\left\\{ \\begin{array}{lc} \\frac{j!}{(2i+1)!(j-2i-1)!}(\n\\sqrt{2} -1 )^{j+2i+1} & 2i+1 \\leq j \\\\ \\noalign{\\vskip4pt}\n0 & \\mbox{otherwise} .\n\\end{array} \\right.\n\\end{equation}\nThese results are then\nused in (109) to determine the expansion coefficients for the transient\neigenstates with support in region $\\rm I$. \n\n\n\\section*{Appendix D: Transient right states at higher bsps}\n\nWe expand the arbitrary functions $f_{n,2j}(x)$ in terms of the\ntransient and non-transient eigenvectors of $U_{{\\rm R}_n}$. The\nnon-transient eigenvectors are given in (\\ref{five}) and (\\ref{jord}). The\ntransient eigenvectors will be transformed versions of the central\neigenvectors at the previous band-splitting points. The expansion is\n\\begin{eqnarray} \\label{t2}\nf_{n,2j}(x) & = &\n\\sum_{k=1}^{2^n}\\sum_{j^{'}=1}^{j} b^{n,2j}_{k,2j^{'}}\n| \\lambda^{n}_{k,2j^{'}} \\rangle +\n\\sum_{k=1}^{2^n}\\sum_{j^{'}=1}^{j-1} c^{n,2j}_{k,j^{'}}\n| 0^{n}_{k,2j^{'}+1} \\rangle \\nonumber \\\\\n & & + \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\\sum_{j^{'}=1}^{j}\nd^{n,l,2j}_{k,j^{'}}\n| \\phi^{n,l}_{k,2j^{'}} \\rangle + \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\n\\sum_{j^{'}=1}^{j-1} e^{n,l,2j}_{k,j^{'}}\n| \\phi^{n,l}_{k,2j^{'}+1} \\rangle.\n\\end{eqnarray}\nSince $d^{n,n-2,2j}_{1,j}$ is the coefficient of the eigenvector of the\nJordan block, it can be set to zero.
\nApplying $U_{{\\rm R}_n}$ to the function $f_{n,2j}(x)$, we get\n\\begin{eqnarray} \\label{t3}\n\\lefteqn{U_{{\\rm R}_n} f_{n,2j}(x) = \n\\sum_{k=1}^{2^n}\\sum_{j^{'}=1}^{j}b^{n,2j}_{k,j^{'}}\n\\lambda^{n}_{k,2j^{'}}| \\lambda^{n}_{k,2j^{'}}\\rangle \n+\\sum_{k=1}^{2^{n}-1}\\sum_{j^{'}=1}^{j-1}\nc^{n,2j}_{k+1,j^{'}}\n| 0^{n}_{k,2j^{'}+1} \\rangle } \\hspace{20pt} \\nonumber \\\\\n& & \\mbox{} \n + \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\\sum_{j^{'}=1}^{j}\nd^{n,l,2j}_{k,j^{'}}\n\\phi^{n,l}_{k,2j^{'}}| \\phi^{n,l}_{k,2j^{'}} \\rangle + \\sum_{l=2}^{n-2}\\sum_{k=1}^{2^{l-1}}\\sum_{j^{'}=1}^{j}\nd^{n,l-1,2j}_{k,j^{'}}\n\\phi^{n,l}_{k,2j^{'}}| \\phi^{n,l}_{k,2j^{'}} \\rangle \\nonumber \\\\\n & &\\mbox{} \n+ \\sum_{l=1}^{n-2}\\sum_{k=1}^{2^l}\\sum_{j^{'}=1}^{j}\ne^{n,l,2j}_{k,j^{'}}\n\\phi^{n,l}_{k,2j^{'}+1}| \\phi^{n,l}_{k,2j^{'}+1} \\rangle .\n\\end{eqnarray} \nWe substitute (\\ref{t3}) and (\\ref{t2}) into (\\ref{t1}) and\nhit both sides of the equation with $\\langle \\lambda^{n}_{k,2j^{'}}|$,\n$\\langle 0^{n}_{k,2j^{'}+1} |$, $\\langle\n\\phi^{n,l}_{k,2j^{'}+1} |$ and $\\langle\n\\phi^{n,l}_{k,2j^{'}} |$ successively. Letting\n\\alpheqn\n\\begin{eqnarray}\n\\alpha^{n,2j}_{k,j'} & \\equiv & \\langle \\lambda^{n}_{k,2j'} |\n\\left(x-x^*_n \\right)^{2j}\\chi_{b,n} \\rangle \\\\\n\\beta^{n,2j}_{k,j'} & \\equiv & \\langle 0^{n}_{k,2j'+1} |\n\\left(x-x^*_n \\right)^{2j}\\chi_{b,n} \\rangle \\\\\n\\gamma^{n,l,2j}_{k,j'} & \\equiv & \\langle \\phi^{n,l}_{k,2j'}|\n\\left(x-x^*_n \\right)^{2j}\\chi_{b,n} \\rangle \n\\end{eqnarray}\n\\reseteqn \nwe obtain the following equations for the expansion coefficients\n$a_{n,2j}$, $b^{n,2j}_{k,j^{'}}$, $c^{n,2j}_{k,j^{'}}$,\n$d^{n,l,2j}_{k,j^{'}}$ and $e^{n,l,2j}_{k,j'}$:\n\\begin{eqnarray} \\label{t4}\nb^{n,2j}_{k,j'}\\lambda^{n}_{k,2j'} & = & \\phi^{n,0}_{1,2j}\\left(\nb^{n,2j}_{k,j'} - \\alpha^{n,2j}_{k,j'} \\right) \\nonumber \\\\\nc^{n,2j}_{k+1,j^{'}} & = & \\phi^{n,0}_{1,2j}\\left(\nc^{n,2j}_{k,j'} - \\beta^{n,2j}_{k,j'} \\right) \\nonumber \\\\\na_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,1,2j}_{1,j} & = & 1 \\nonumber \\\\\ne^{n,l,2j}_{k,j'}\\phi^{n,l}_{k,2j'+1} & = & \\phi^{n,0}_{1,2j}\\left(\ne^{n,l,2j}_{k,j'} - \\delta^{n,l,2j}_{k,j'} \\right).\n\\end{eqnarray}\nThe equations for $d^{n,l,2j}_{k,j'}$ differ depending on the values of\n$l$, $k$ and $j'$. For $k=1$, $j'=j$ and $l=2,3,\\dots,n-2$ we have\n\\begin{eqnarray} \n d^{n,l-1,2j}_{1,j} & = & a_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,l,2j}_{1,j} \\nonumber \\\\\n d^{n,n-2,2j}_{1,j} & = & 0.\n\\end{eqnarray}\nWhen $j \\neq j'$, $k \\neq 1$ and $l=1$ we have\n\\begin{equation} \nd^{n,1,2j}_{k,j'}\\left(\n\\phi^{n,0}_{1,2j} - \\phi^{n,1}_{k,2j'} \\right)\n= a_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,1,2j}_{k,j'}. \n\\end{equation}\nWhen $j \\neq j'$, $k \\neq 1$ and $l=2,3,\\dots,n-2$ we have\n\\begin{equation} \\label{t7}\n d^{n,l,2j}_{k,j'}\\left(\n\\phi^{n,0}_{1,2j} - \\phi^{n,l}_{k,2j'} \\right) - d^{n,l-1,2j}_{k,j'} =\na_{n,2j}\\phi^{n,0}_{1,2j}\\gamma^{n,l,2j}_{k,j'}. \n\\end{equation}\n\nThe equations (\\ref{t4}) -- (\\ref{t7}) for $a_{n,2j}$, \n$b^{n,2j}_{k,j'}$, $c^{n,2j}_{k,j'}$, \n$d^{n,l,2j}_{k,j'}$ and $e^{n,l,2j}_{k,j'}$ are either uncoupled or are coupled in a simple\nmanner and can be solved explicitly to find the expansion coefficients. Plugging\nthese coefficients into (\\ref{t2}) and (\\ref{t6}), we get all the Jordan\nstates with support in the central transient, for all $n > 2$.
For $n = 2$,\nset all the $d^{2,l,2j}_{k,j'} = 0$ and $e^{2,l,2j}_{k,j'} = 0$, take $a_{2,2j} = 1$, and solve for\n$b^{2,2j}_{k,j'}$, $c^{2,2j}_{k,j'}$ from (\\ref{t7}) to obtain the\neigenvectors with support in the central transient at the second bsp.