diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkmpr" "b/data_all_eng_slimpj/shuffled/split2/finalzzkmpr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkmpr" @@ -0,0 +1,5 @@ +{"text":"\n\n\\section{Introduction}\n\n\n\nWe present a method for automatically generating human-robot interaction (HRI) scenarios in shared autonomy. Consider as an example a manipulation task, where a user provides inputs to a robotic manipulator through a joystick, guiding the robot towards a desired goal, e.g., grasping a bottle on the table. The robot does not know the goal of the user in advance, but infers their desired goal in real-time by observing their inputs and assisting them by moving autonomously towards that goal. Performance of the algorithm is assessed by how fast the robot reaches the goal. However, different environments and human behaviors could cause the robot to fail, by picking the wrong object or colliding with obstacles. \n\n\n\n\n\n\\begin{figure}[!t]\n\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/Title-2.pdf}\n\\caption{An example archive of solutions returned by the quality diversity algorithm MAP-Elites. The solutions in red indicate scenarios where the robot fails to reach the desired user goal in a simulated shared autonomy manipulation task. The scenarios vary with the environment (y-axis: distance between the two candidate goals) and human inputs (x-axis: variation from optimal path).}\n\n\\label{fig:best}\n\\end{figure}\n\n Typically, such algorithms are evaluated with human subject experiments~\\cite{thomaz2016computational}. While these experiments are fundamental in exploring and evaluating human-robot interactions and they can lead to exciting and unpredictable behaviors, they are often limited in the number of environments and human actions that they can cover. Testing an algorithm in simulation with a \\textit{diverse} range of scenarios can improve understanding of the system, inform the experimental setup of real-world studies, and help avoid potentially costly failures ``in the wild.'' \n \n\n \n\n \n\n\n\n\n\n\nOne approach is to simulate agent behaviors by repeatedly sampling from models of human behavior and interaction protocols~\\cite{steinfeld2009oz}. While this approach will show the \\textit{expected} behavior of the system given the pre-specified models, it is unlikely to reveal failure cases that are not captured by the models or are in the tails of the sampling distribution. Exhaustive search of human actions and environments is also computationally prohibitive given the continuous, high-dimensional space of all possible environments and human action sequences. \n\nAnother approach is to formulate this as an optimization problem, where the goal is to find adversarial environments and human behaviors. But we are typically not interested in the maximally adversarial scenario, which is the single, global optimum of our optimization objective, since these scenarios are both easy to find and unlikely to occur in the real-world, e.g., the human moving the joystick of an assistive robotic arm consistently in the wrong direction. \n\nInstead, we are interested in answering questions of the form: how noisy can the human input be before the algorithm breaks? Or, in the aforementioned example task, how far apart do two candidate goals have to be for the robot to disambiguate the human intent? 
\n\n\n\n\n\n\nOur work makes the following contributions:\\footnote{We include an overview video: \\url{https:\/\/youtu.be\/9P3qomydMWk}}\n\n\\textbf{1.} We propose formulating the problem of generating human-robot interaction scenarios as a \\textit{quality diversity} (QD) problem, where the goal is not to find a single, optimal solution, but a collection of high-quality solutions, in our case failure scenarios of the tested algorithm, across a range of measurable criteria, such as noise in human inputs and distance between objects.\n\n\n\n\n\n\n\n\n\n\n\\textbf{2.} We adopt the QD algorithm MAP-Elites, originally presented in~\\cite{cully:nature15, mouret2015illuminating}, for the problem of scenario generation. Focusing on the shared autonomy domain, where a robotic manipulator attempts to infer the user's goal based on their inputs, we show that MAP-Elites outperforms two baselines: standard Monte Carlo simulation (random search), where we uniformly sample the scenario parameters, and CMA-ES~\\cite{hansen:cma16}, a state-of-the-art derivative-free optimization algorithm, in finding diverse scenarios that minimize the performance of the tested algorithm. We choose to test the algorithm ``shared autonomy with hindsight optimization''~\\cite{javdani2015hindsight}, since it has been widely used and we have found it to perform robustly in a range of different environments and tasks. Additionally, in hindsight optimization, inference and planning are tightly coupled, which makes testing particularly challenging; simply testing each individual component is not sufficient to reveal how the algorithm will perform.\n\n \n\n\n\\textbf{3.} We show that Monte Carlo simulation does not perform well because of \\textit{behavior space distortion}: sampling directly from the space of environments and human actions covers only a small region in the space of measurable aspects (behavioral characteristics). For example, uniformly sampling object locations (scenario parameters) results in a non-uniform distribution of their distances (behavioral characteristic) with a very small variance near the mean. On the other hand, MAP-Elites focuses on exploring the space of the behavioral characteristics by retaining an archive of high-performing solutions in that space and perturbing existing solutions with small variations. Therefore, MAP-Elites performs a type of simultaneous search guided by the behavioral characteristics, where solutions in the archive are used to generate future candidate solutions~\\cite{mouret2015illuminating}.\n\n\\textbf{4.} We analyze the failure cases and we show that they result from specific aspects of the implementation of the tested algorithm, rather than being artifacts of the simulation environment. We use the same approach to contrast the performance of hindsight optimization with that of linear policy blending~\\cite{dragan2012formalizing} and generate a diverse range of scenarios that \nconfirm previous theoretical findings~\\cite{trautman2015assistive}. The generated scenarios transfer to the real world; we reproduce some of the automatically discovered scenarios on a real robot with human inputs. While some of the scenarios are expected, e.g., the robot approaches the wrong goal if the human provides very noisy inputs, others are surprising, e.g., the robot never reaches the desired goal even for a nearly optimal user if the two objects are aligned in column formation in front of the robot (Fig.~\\ref{fig:best})! 
\n\n\n\n\n\n\nQD algorithms treat the algorithm being tested as a ``black box'', without any knowledge of its implementation, which makes them applicable to multiple domains. \nOverall, we are excited about the potential of QD to facilitate understanding of complex HRI systems, opening up a number of scientific challenges and opportunities to be explored in the future. \n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Problem Statement} \\label{sec:problem}\n\n\n\nGiven a shared autonomy system where a robot interacts with a human, our goal is to generate scenarios that minimize performance of the system, while ensuring that the generated scenarios cover a range of prespecified measures.\n\n\nWe let $R$ be a single robot interacting with a single human $H$. We assume a function $G_H$ that generates human inputs, a function $G_E$ that generates an environment, and an HRI algorithm $G_R$ that generates actions for the robot. The human input generator is parameterized by $\\theta \\in \\mathbb{R}^{n_{\\theta}}$, where $n_{\\theta}$ is the dimensionality of the parameter space, while the environment generator is parameterized by $\\phi \\in \\mathbb{R}^{n_{\\phi}}$. We define a \\textit{scenario} as the tuple $(\\theta, \\phi)$. \n\nIn shared autonomy, $G_E(\\phi)$ generates an initial environment (and robot) state $x_E$. The human observes $x_E$ and provides inputs to the system $u_H = G_H(x_E, \\theta)$ through some type of interface. The robot observes $x_E$ and the human input $u_H$ and takes an action \n $u_R = G_R(x_E, u_H)$. The state changes with dynamics: $\\dot{x}_E = h(x_E, u_R)$. H and R interact for a time horizon $T$, or until they reach a final state $x_f \\in X_E$. \n \n To evaluate a scenario, we assume a function \\\\ $f(x_E^{0..T}, u_R^{0..T},u_H^{0..T}) \\rightarrow \\mathbb{R}$ that maps the state and action history to a real number. We call this an \\textit{assessment} function, which measures the performance of the robotic system. We also assume $M$ user-defined functions, $b_i(x_E^{0..T}, u_R^{0..T},u_H^{0..T})\\rightarrow \\mathbb{R},~i\\in[M]$. These functions measure aspects of generated scenarios that should vary, e.g., noise in human inputs or distance between obstacles. We call these functions \\textit{behavior characteristics} (BCs), which induce a Cartesian space called a \\textit{behavior space}.\n\n\n\n\n\nGiven the parameterization of the environment and human input generators, we can map a value assignment of the parameters $(\\theta, \\phi)$ to a state and action history $(x_E^{0..T},u_R^{0..T},u_H^{0..T})$ and therefore to an assessment $f(\\theta, \\phi)$ and a set of BCs $b(\\theta,\\phi)$. We assume that the behavior space is partitioned into N cells, which form an \\textit{archive} of scenarios, and we let $(\\theta_i, \\phi_i)$ be the parameters of the scenario occupying cell $i \\in [N]$.\n\nThe objective of our scenario generator is to fill in as many cells of the archive as possible with scenarios of high assessment $f$: \\footnote{We note that the assessment function could be any performance metric of interest, such as time to completion or minimum robot's distance to obstacles. Additionally, while in this work we focus on minimizing performance, we could instead search for scenarios that maximize performance, or that achieve performance that matches a desired value. We leave this for future work.\n}\n\\begin{equation}\n \\mathcal{M}(\\theta_1, \\phi_1, ... 
, \\theta_N, \\phi_N) = \\max \\sum_{i=1}^N f(\\theta_i, \\phi_i)\n\\label{eq:objective}\n\\end{equation}\n\n\n\\section{Background} \\label{sec:background}\n\\noindent\\textbf{Automatic Scenario Generation. }\nAutomatically generating scenarios is a long-standing problem in human training~\\citep{hofer1998automated}, with the core challenge being the generation of \\textit{realistic} scenarios~\\citep{martin:pcg10}. Previous work~\\citep{zook2012automated} has shown that optimization methods can be applied to generate scenarios by maximizing a scenario quality metric. \n\nScenario generation has been applied extensively to evaluating autonomous vehicles~\\citep{arnold:safecomp13,mullins:av18, abey:av19, rocklage:av17, gambi:av19,sadigh2019verifying}. Contrary to model-checking and formal methods~\\cite{choi2013model,o2014automatic}, which require a model describing the system's performance such as a finite-state machine~\\cite{meinke2015learning} or process algebras~\\cite{o2014automatic},\n black-box approaches do not require access to a model. Most relevant are black-box falsification methods~\\cite{deshmukh2017testing,zhao2003generating,kapinski2016simulation,dreossi2019verifai} that attempt to find an input trace that minimizes the performance of the tested system. Rather than searching for a single global optimum~\\cite{deshmukh2017testing, deshmukh2015stochastic,sadigh2019verifying}, or attempting to maximize coverage of the space of scenario parameters~\\cite{zhao2003generating} or of performance boundary regions~\\cite{mullins:av18}, we propose a quality diversity approach where we optimize an archive formed by a set of behavioral characteristics, with a focus on the shared autonomy domain. This allows us to \\textit{simultaneously} search for human-robot interaction scenarios that minimize the performance of the system over a range of measurable criteria, e.g., over a range of variation in human inputs and distance between goal objects.\n \n\nFinally, scenario generation is closely related to the problem of generating video game levels in procedural content generation (PCG)~\\citep{hendrikx2013procedural,shaker:book16}. An approach gaining popularity is procedural content generation through quality diversity (PCG-QD)~\\citep{gravina2019procedural}, which leverages QD algorithms to drive the search for interesting and diverse content.\n\n\\noindent\\textbf{Quality Diversity and MAP-Elites. }\nQD algorithms differ from pure optimization methods in that they do not attempt to find a single optimal solution, but a collection of good solutions that differ across specified dimensions of interest. For example, QD algorithms have generated video game levels of varying number of enemies or tile distributions~\\cite{khalifa2019intentional, fontaine2020illuminating}, and objects of varying shape complexity and grasp difficulty~\\cite{morrison2020egad}. \n\n\\mbox{MAP-Elites} \\citep{mouret2015illuminating,cully:nature15} is a popular QD algorithm that searches along a set of explicitly defined attributes called \\textit{behavior characteristics} (BCs), which induce a Cartesian space called a \\textit{behavior space}. The behavior space is tessellated into uniformly spaced grid cells. In each cell, the algorithm maintains the highest-performing solution, which is called an \\textit{elite}. The collection of elites returned by the algorithm forms an \\textit{archive} of solutions. 
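\nTo make the archive concrete, the short Python sketch below illustrates how elites can be stored and replaced in a uniformly tessellated two-dimensional behavior space; the cell resolutions, BC bounds, and variable names are assumptions made only for this illustration and are not taken from any specific implementation.\n\\begin{verbatim}\nimport numpy as np\n\n# Illustrative 2D behavior space: each BC is binned into uniform cells.\nbc_bounds = [(0.0, 0.32), (0.0, 0.11)]   # assumed BC ranges\nbc_bins = [25, 100]                      # assumed cells per BC\n\ndef cell_index(bc):\n    # Map a BC vector to the index of its uniformly spaced grid cell.\n    idx = []\n    for value, (lo, hi), n in zip(bc, bc_bounds, bc_bins):\n        t = (value - lo) \/ (hi - lo)\n        idx.append(int(min(n - 1, max(0.0, np.floor(t * n)))))\n    return tuple(idx)\n\ndef update_archive(archive, solution, f, bc):\n    # Keep only the highest-assessment solution (the elite) in each cell.\n    key = cell_index(bc)\n    if key not in archive or archive[key][1] < f:\n        archive[key] = (solution, f)\n\\end{verbatim}\n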
\n\n\n\n\n MAP-Elites populates the archive by first randomly sampling a population of solutions, and then selecting the elites -- which are the top performing solutions in each cell of the behavior space -- at random and perturbing them with small variations. The objective of the algorithm is two-fold: maximize the number of filled cells (coverage) and maximize the quality of the elite in each cell. Recent algorithms have focused on how the behavior space is tessellated~\\citep{smith:ppsn16,fontaine:gecco19}, as well as how each elite is perturbed~\\cite{vassiliades:gecco18}. Recent work~\\cite{fontaine2021differentiable} has also shown that, when the objective function and behavior characteristics are first-order differentiable, MAP-Elites via a Gradient Arborescence (MEGA) can result in significant improvements in search efficiency. \n\nBy retaining an archive of high-performing solutions and perturbing existing solutions with small variations, \\mbox{MAP-Elites} simultaneously optimizes every region of the archive, using existing solutions as ``stepping stones'' to find new solutions. Previous work has shown that \\mbox{MAP-Elites} variants~\\cite{mouret2020quality} and surrogate models~\\cite{gaier2017data} outperform independent single-objective constrained optimizations for each cell with \\mbox{CMA-ES}, with the same total budget of evaluations.\n\n \\noindent\\textbf{Coverage-Driven Testing in HRI.} Previous work~\\cite{araiza2015coverage,araiza2016systematic} explored test generation in human-robot interaction using Coverage-Driven Verification (CDV), emulating techniques used in functional verification of hardware designs. Human action sequences were randomly generated in advance and with a model-based generator which modeled the interaction with Probabilistic-Timed Automata. Instead, we focus on online scenario generation by searching over a set of scenario parameters; the generator itself is agnostic to the underlying HRI algorithm and human model. Previous work~\\cite{araiza2016intelligent} has also used Q-learning to generate plans for an agent in order to maximize coverage. Our focus is both on coverage and quality of generating scenarios, with respect to a prespecified set of behavioral characteristics that we want to cover. In contrast to previous studies that simulate human actions, \\textit{we jointly search for environments and human\/agent behaviors.}\n\n\n\\noindent\\textbf{Shared Autonomy.}\nShared autonomy (also: shared control, assistive teleoperation) combines human teleoperation of a robot with intelligent robotic assistance. The method has been applied in the control of robotic arms~\\cite{javdani2018shared,dragan2012formalizing,nikolaidis2017mutualadaptation,Muelling2017,herlant2016assistive,gopinath2016human, jain2019probabilistic,jeon2020shared, losey2019controlling,rakita2019shared,rakita2018shared}, the flight of UAVs~\\cite{reddy2018shared,gillula2011applications,lam2009artificial}, and robot-assisted surgery~\\cite{li2003recognition,ren2008dynamic}. Shared autonomy has been implemented through a variety of interfaces, such as whole body motions~\\cite{dragan2012formalizing}, natural language \\cite{doshi2007efficient}, laser pointers \\cite{veras2009scaled}, brain-computer interfaces \\cite{Muelling2017}, body-machine interfaces~\\cite{jain2015assistive}, and eye gaze~\\cite{bien2004,javdani2018shared}. 
A shared autonomy system first predicts the human's goal, often through machine learning methods trained from human demonstrations \\cite{hauser13,koppula16,wang13}, and then provides assistance, which often involves blending the user's input with the robot assistance to achieve the predicted goal \\cite{dragan2012formalizing,fagg04,kofman05}. Assistance can provide task-dependent guidance \\cite{aarno2005adaptive}, manipulation of objects~\\cite{jeon2020shared}, or mode switches \\cite{herlant2016assistive}.\n\n\n\n\n\\noindent\\textbf{Shared Autonomy via Hindsight Optimization.}\nIn shared autonomy via hindsight optimization~\\cite{javdani2015hindsight} assistance blends user input and robot control based on the confidence of the robot's goal prediction. The problem is formulated as a Partially Observable Markov Decision Process (POMDP), wherein the user's goal is a latent variable. The system models the user as an approximately optimal stochastic controller, which provides inputs so that the robot reaches the goal as fast as possible. The system treats the user's inputs as observations to update a distribution over the user's goal, and assists the user by minimizing the expected cost to go -- estimated using the distance to goal -- for that distribution. Since solving a POMDP exactly is intractable, the system uses the hindsight optimization (QMDP) approximation~\\cite{littman1995learning}. The system was shown to achieve significant improvements in efficiency of manipulation tasks in an object-grasping task~\\cite{javdani2015hindsight} and more recently in a feeding task~\\cite{javdani2018shared}. We empirically found this algorithm to perform robustly in a range of different environments, which motivates a systematic approach for testing. We refer to this algorithm simply as \\textit{hindsight optimization}.\n\n\n\n\\section{Scenario Generation with MAP-Elites}\n\nAlgorithm~\\ref{alg:map-elites} shows the MAP-Elites algorithm from~\\cite{mouret2015illuminating,cully:nature15}, adapted for scenario generation. The algorithm takes as input a function $G_H$ parameterized by $\\theta$ that generates human inputs, a function $G_E$ parameterized by $\\phi$ that generates environments, and an HRI algorithm $G_R$ that generates actions for the robot. The algorithm searches for scenarios $\\theta, \\phi$ of high assessment values $f$ that fill in the archive $\\mathcal{P}$. \n\nFor each scenario $(\\theta, \\phi)$, MAP-Elites instantiates the generator functions $G_H$ and $G_E$. For instance, $\\phi$ could be a vector of objects positions, and $\\theta$ could be a vector of waypoints representing a trajectory of human inputs, or parameters of a human policy. \n\nWhen a scenario is generated, MAP-Elites executes the scenario in a simulated environment and estimates the assessment function $f$ and the BCs $b$. MAP-Elites then updates the archive if (1) the cell corresponding to the BCs $\\mathcal{X}[b]$ is empty, or (2) the existing scenario (elite) in $\\mathcal{X}[b]$ has a smaller assessment function (lower quality) than the new scenario. This allows populating the archive to maximize coverage as well as improving the quality of existing scenarios.\n\nFor the first $N_{init}$ iterations, the algorithm generates scenarios $\\theta, \\phi$ by randomly sampling from the parameter space. These sampled parameters seed the archive with an initial set of scenarios. 
After the first $N_{init}$ iterations, MAP-Elites selects a scenario uniformly at random from the archive and perturbs it with a small variation. This allows for better exploration of the archive, compared to random search, as we show in section~\\ref{subsec:Analysis}.\n\nWe note that, while our experiments focus on the shared autonomy domain, the proposed scenario generation method is general and can be applied to multiple HRI domains. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{algorithm}[t!]\n \\caption{Scenario Generation with MAP-Elites}\n\\label{algorithm: known context}\n\\begin{algorithmic}\n\\STATE\\textbf{Input:} Human input generator $G_H$, environment generator $G_E$, HRI algorithm $G_R$, variations\n$\\sigma_{\\theta},\\sigma_{\\phi}$\n\\STATE\\textbf{Initialize:} Scenarios in archive $\\mathcal{X}\\leftarrow \\emptyset$, assessments $\\mathcal{F}\\leftarrow \\emptyset$\n\n\\FOR{$t=1,\\ldots,N $}\n\\IF{$t < N_{init}$}\n\\STATE Generate scenario $\\theta, \\phi = \\mathrm{random\\_generation()}$ \n\\ELSE\n\\STATE Select elite $\\theta', \\phi' = \\mathrm{random\\_selection}(\\mathcal{X})$\n\\STATE Sample $\\theta \\sim N(\\theta', \\sigma_\\theta)$ \n\\STATE Sample $\\phi \\sim N(\\phi', \\sigma_\\phi)$ \n\n\\ENDIF\n\n\\STATE Instantiate $G_H^{\\theta} = G_H(\\theta)$\n\\STATE Instantiate $G_E^{\\phi} = G_E(\\phi)$\n\\STATE Compute $f = \\textrm{assessment}(G_H^{\\theta},G_E^{\\phi}, G_R)$\n\\STATE Compute $b = \\textrm{behaviors}(G_H^{\\theta},G_E^{\\phi}, G_R)$\n \\IF{$\\mathcal{F}[b]= \\emptyset$ or $\\mathcal{F}[b] < f$ } \n \\STATE Update archive $\\mathcal{X}[b] \\leftarrow (\\theta, \\phi),\\mathcal{F}[b] \\leftarrow f$ \n \\ENDIF\n\\ENDFOR\n\\end{algorithmic}\n\\label{alg:map-elites}\n\\end{algorithm}\n\n\\section{Generating Scenarios in Shared Autonomy} \\label{sec:limiting}\n\nWe focus on a shared autonomy manipulation task, where a human user teleoperates a robotic manipulator through a joystick interface. The robot runs a hindsight optimization shared autonomy algorithm~\\cite{javdani2015hindsight}, which uses the user's input to infer the object the user wants the robot to grasp, and assists the user by moving autonomously towards that goal. \n\n\\subsection{Scenario Parameters} \\label{subsec:parameters}\nFollowing the specification of section~\\ref{sec:problem}, we define a human input generator $G_H$ parameterized by $\\theta$ and an environment generator $G_E$ parameterized by $\\phi$.\n\n\\noindent\\textbf{Environment Generator:} The environment generator $G_E$ takes as input the 2D positions $g_i$ of $n$ goal objects (bottles), so that $\\phi = (g^1, ..., g^n)$, and places them on top of a table. We specify the range of the coordinates $g_x \\in [0, 0.25]$ (in meters), $g_y \\in [0, 0.2]$ so that the goals are always reachable by the robot's end-effector. We position the robotic manipulator to face the objects (Fig.~\\ref{fig:elites}).\n\n\n\\noindent\\textbf{Human Input Generator:} Our pilot studies with the shared autonomy system have shown that user inputs are typically not of constant magnitude. Instead, inputs spike when users wish to ``correct'' the robot's path, and decrease in magnitude afterwards when the robot takes over. \n\nTherefore, we specify the human input generator $G_H$, so that it generates a set of equidistant waypoints in Cartesian space forming a straight line that starts from the initial position of the robot's end-effector and ends at the desired goal of the user. 
At each timestep, the generator takes as input the current state of the robot (and the environment) $x_E$, and provides a translational velocity command $u_H$ for the robot's end-effector towards the next waypoint, proportional to the distance to that waypoint. \n\nWe allow for noise in the waypoints by adding a disturbance $d \\in [-0.05, 0.05]$ for each of the intermediate waypoints in the horizontal direction (x-axis in Fig.~\\ref{fig:elites}). We selected $m=5$ intermediate waypoints, and specified the human input parameter $\\theta$ as a vector of disturbances, so that $\\theta = (d_1, ... , d_5)$. We note that this is only one way, out of many, of simulating the human inputs. \n\n\n\\noindent\\textbf{HRI Algorithm and Simulation Environment:} We use the publicly available implementation of the hindsight optimization algorithm~\\cite{ada_code}, which runs on the OpenRAVE~\\cite{diankov2008openrave} simulation environment. Experiments were conducted with a Gen2 Lightweight manipulator. For each goal object we assume one target grasp location, on the side of the object that is facing the robot. \n\n\n\n\\subsection{Assessment Function.} The assessment function $f$ represents the quality of a scenario. We evaluate a scenario by simulating it until the robot's end-effector reaches the user's goal, or until the maximum time (10 $s$) has elapsed. We use as an assessment function the time to completion, where longer times represent higher scenario quality, since we wish to discover scenarios that \\textit{minimize} performance. \n\n\\subsection{Behavioral Characteristics.} \nWe wish to generate scenarios that show the limits of the shared autonomy system: how noisy can the human be without the system failing to reach the desired goal? How does distance between candidate goals affect the system's performance? Intuitively, noisier human inputs and smaller distances between goals would make the inference of the user's goal harder and thus make the system more likely to fail. \n\nThese dimensions of interest are the behavioral characteristics (BCs) $b$: attributes that we wish to obtain coverage for. We explore the following BCs:\n\n\n\\noindent\\textbf{Distance Between Goals:} How far apart the human goal is from other candidate goals in a scenario plays an important role in disambiguating the human intent when the robot runs the hindsight optimization algorithm. The reason is that the implementation of the algorithm models the human user as minimizing a cost function proportional to the distance between the robot and the desired goal. The framework then infers the user's goal by using the user inputs as observations; the more unambiguous the user input, the more accurate the inference of the system. Therefore, we expect that the further away the human goal object $g_H$ is from the nearest goal $g_N$, the better the system will perform. We define this BC as: \n\\begin{equation}\nBC1 = ||g_H - g_N||_2\n\\end{equation}\n\nGiven the range of the goal coordinates, the range of this BC is $[0, 0.32]$. In practice, there will always be a minimum distance between two goal objects because of collisions, but this does not affect our search, since we can ignore cases where the objects collide. We partitioned this behavior space into 25 uniformly spaced intervals.\n\n\\noindent\\textbf{Human Variation:} \nWe expect noise in the human inputs to affect the robot's inference of the user's goal and thus the system's performance. 
\nWe capture variation from the optimal path using the root sum of the squares of the disturbances $d_i$ applied to the $m$ intermediate waypoints.\n\\begin{equation}\nBC2 = \\sqrt{\\sum_{i=1}^m d_i^2}\n\\end{equation}\n A value of 0 indicates a straight line to the goal. Since we have $d_i \\in [-0.05,0.05]$ (section~\\ref{subsec:parameters}), the range of this BC is $[0,0.11]$. We partitioned this behavior space into 100 uniformly spaced intervals.\n\n\n\\noindent\\textbf{Human Rationality:} If we interpret the user's actions using a bounded rationality model~\\cite{baker2007goal,fisac2018probabilistically}, we can explain deviations from the optimal trajectory of human inputs as a result of how ``rational'' or ``irrational'' the user is.\\footnote{We note that we use the human rationality model as one way, out of many, to \\textit{interpret} human inputs and not as a way to \\textit{generate} inputs. Human inputs can be generated with any generator model. In this paper, we generate human inputs with the deterministic model described in section~\\ref{subsec:parameters}. We discuss extensions to stochastic human models in section~\\ref{sec:discussion}.}\n\n\n\n\n Formally, we let $x_R$ be the 3D position of the robot's end-effector and $u_H$ be the velocity controlled by the user in Cartesian space. We model the user as following a Boltzmann policy $\\pi_H \\mapsto P(u_H|x_R,g_H, \\beta)$, where $\\beta$ is the rationality coefficient -- also interpreted as the expertise~\\cite{jeon2020shared} -- of the user and $Q_{g_H}$ is the value function from $x_R$ to the goal $g_H$.\n\\begin{equation}\nP(u_H|x_R,g_H, \\beta) \\propto \ne^{\\beta Q_{g_H}(x_R, u_H)}\n\\label{eqn:human}\n\\end{equation}\n\nLet $Q_{g_H} = -||u_H||_2 - ||x_R + u_H - g_H||_2$~\\cite{fisac2018probabilistically}, so that the user minimizes the distance to the goal. Observe that if $\\beta \\rightarrow \\infty$, the human is rational, providing velocities exactly in the direction of the goal. If $\\beta \\rightarrow 0$, the human is random, choosing actions uniformly. \n\nWe can estimate the user's rationality, given their inputs, with Bayesian inference~\\cite{fisac2018probabilistically}: \n\n\\begin{equation}\nP(\\beta|x_R,g_H, u_H) \\propto \nP(u_H|x_R,g_H, \\beta) P(\\beta)\n\\label{eqn:human}\n\\end{equation}\n\n\n\nSince the human inputs change at each waypoint (section~\\ref{subsec:parameters}), we perform $m+1$ updates, at the starting position and at each intermediate waypoint, on a finite set of discrete values of $\\beta$. Following previous work~\\cite{jeon2020shared}, we set the rationality range $\\beta \\in [0,1000]$. We then choose as the behavioral characteristic the value with the maximum a posteriori probability at the end of the task:\n\n\\begin{equation}\nBC3 = \\argmax_{\\beta} P(\\beta|x^{0..T}_R, g_H, u^{0..T}_H)\n\\end{equation}\n\nWe partitioned the space into 101 uniformly spaced intervals. 
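\nAs an illustration of how this estimate can be obtained, the sketch below performs the discrete Bayesian update of $P(\\beta)$ in Python; the grid of $\\beta$ values, the uniform prior, the finite set of candidate inputs used to normalize the Boltzmann likelihood, and all names are assumptions of this sketch rather than details of the implementation used in our experiments.\n\\begin{verbatim}\nimport numpy as np\n\nbetas = np.linspace(0.0, 1000.0, 101)     # assumed discrete grid for beta\nlog_post = np.zeros(len(betas))           # log of a uniform prior\n\ndef q_value(x_R, u_H, g_H):\n    # Q_{g_H}(x_R, u_H) = -||u_H|| - ||x_R + u_H - g_H||\n    return -np.linalg.norm(u_H) - np.linalg.norm(x_R + u_H - g_H)\n\ndef update(log_post, x_R, u_H, g_H, candidate_inputs):\n    # One Bayesian update of P(beta | x_R, g_H, u_H) on the discrete grid,\n    # normalizing the Boltzmann likelihood over a finite set of inputs.\n    for i, beta in enumerate(betas):\n        logits = np.array([beta * q_value(x_R, u, g_H)\n                           for u in candidate_inputs])\n        log_z = np.logaddexp.reduce(logits)\n        log_post[i] += beta * q_value(x_R, u_H, g_H) - log_z\n    return log_post - np.logaddexp.reduce(log_post)\n\n# After the m+1 updates, BC3 is the MAP estimate:\n# bc3 = betas[np.argmax(log_post)]\n\\end{verbatim}\n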
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{table*}[t]\n\\centering\n\\resizebox{.8\\linewidth}{!}{\n\\begin{tabular}{l|cc|cc|cc}\n\\hline\n & \\multicolumn{2}{l|}{BC1 \\& BC3, 2 goals} & \\multicolumn{2}{l|}{BC1 \\& BC2, 2 goals} & \\multicolumn{2}{l}{BC1 \\& BC2, 3 goals} \\ \\\\ \n \\toprule\nAlgorithm & Coverage & QD-Score & Coverage & QD-Score & Coverage & QD-score\\\\\n \\midrule\nRandom & 22.3\\% & 3464 & 48.4\\% &7782 & 41.9\\% & 7586\\\\\nCMA-ES & 24.8\\% & 4540& 38.9\\% & 7422 & 34.5\\% & 7265\\\\\nMAP-Elites & \\textbf{62.8\\%} & \\textbf{10128} & \\textbf{63.0\\%} & \\textbf{11216} & \\textbf{57.4\\%} & \\textbf{11204}\\\\\n \\bottomrule\n\\end{tabular}\n}\n\\caption{Results: Percentage of cells covered (coverage) and QD-Score after 10,000 evaluations, averaged over 5 trials.}\n\\label{tab:results}\n\\end{table*}\n\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{figs\/maps_BC1_plot.pdf}\n \\includegraphics[width=1.0\\textwidth]{figs\/maps_BC2_plot.pdf}\n\\caption{QD-Scores over evaluations (generated scenarios) and example archives returned by the three algorithms for the first two behavior spaces of Table~\\ref{tab:results}. The colors of the cells in the archives represent time to task completion in seconds.}\n\\label{fig:maps}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\n\\section{Experiments}\n\nWe compare different search algorithms in their ability to find diverse and high-quality scenarios in different behavior spaces.\n\n\n\\subsection{Independent Variables}\nThe experiment has two independent variables, the \\textit{behavior space} and the \\textit{search algorithm}.\n\n\\textit{Behavior Space}: (1) Distance between $n=2$ goal objects (BC1) and human rationality (BC3), (2) Distance between $n=2$ goal objects (BC1) and human variation (BC2), and (3) Distance between human goal and nearest goal for $n=3$ goals (BC1) and human variation (BC2).\\footnote{We note that the behavior spaces can be more than two-dimensional, e.g, we could specify a space with all three BCs. We include only 2D spaces since they are easier to visualize and inspect.}\n\n\n\n\\textit{Search Algorithm}: We evaluate three different search methods: MAP-Elites, CMA-ES and random search. The Covariance Matrix Adaptation Evolution Strategy (\\mbox{CMA-ES}) is one of the most competitive derivative-free optimizers for single-objective optimization of continuous spaces (see~\\citep{hansen:cma16,hansen2009benchmarking}) and it is commonly used for falsification of cyber-physical systems~\\cite{deshmukh2015stochastic,zhang2018time}. In random search we use Monte Carlo simulation where scenario parameters are sampled uniformly within their prespecified ranges.\n\nWe implemented a multi-processing system on an AMD Ryzen Threadripper 64-core (128 threads) processor, as a master search node and multiple worker nodes running separate OpenRAVE processes in parallel, which enables simultaneous evaluation of many scenarios. Random search and MAP-Elites run asynchronously on the master search node, while \\mbox{CMA-ES} synchronizes before each covariance matrix update. We generated 10,000 scenarios per trial, and ran 45 trials, 5 for each algorithm and behavior space. One trial parallelized into 100 threads lasted approximately 20 minutes. 
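\nA minimal sketch of this master\/worker loop is shown below using Python's standard \\texttt{concurrent.futures} module; the function names and the way scenarios are proposed are assumptions of the sketch, not a description of our actual implementation, which runs separate OpenRAVE processes.\n\\begin{verbatim}\nfrom concurrent.futures import (ProcessPoolExecutor, wait,\n                                FIRST_COMPLETED)\n\ndef evaluate(scenario):\n    # Placeholder: simulate one episode and return (f, bc) for `scenario`.\n    ...\n\ndef run_search(propose, update_archive, n_evals=10000, n_workers=100):\n    # Master node: keep the workers saturated; every finished evaluation\n    # updates the archive before new scenarios are proposed, so MAP-Elites\n    # can select parents from the current archive.\n    submitted, pending = 0, {}\n    with ProcessPoolExecutor(max_workers=n_workers) as pool:\n        while submitted < n_evals or pending:\n            while submitted < n_evals and len(pending) < n_workers:\n                s = propose()\n                pending[pool.submit(evaluate, s)] = s\n                submitted += 1\n            done, _ = wait(pending, return_when=FIRST_COMPLETED)\n            for fut in done:\n                f, bc = fut.result()\n                update_archive(pending.pop(fut), f, bc)\n\\end{verbatim}\n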
\n\n\\subsection{Algorithm Tuning}\nMAP-Elites first samples uniformly the space of scenario parameters $\\theta, \\phi$ within their prespecified ranges for an initial population of $100$ scenarios (Algorithm~\\ref{alg:map-elites}). The algorithm then \nrandomly perturbs the elites (scenarios from the archive) with Gaussian noise scaled by a $\\sigma$ parameter. The two scenario parameters, position of goal objects $\\phi$ and human waypoints $\\theta$, are on different scales, thus we specified a different $\\sigma$ for each: $\\sigma_\\phi = 0.01, \\sigma_\\theta = 0.005$. \n\nTo generate the scenarios for random search, we \nuniformly sample scenario parameters within their prespecified ranges, a method identical to generating the initial population of MAP-Elites. \n\nFor CMA-ES, we selected a population of $\\lambda = 12$ following the recommended setting from~\\cite{hansen:cma16}. To encourage exploration, we used the bi-population variant of CMA-ES with restart rules~\\cite{auger2005restart,hansen2009benchmarking}, where the population doubles after each restart, and we selected a large step size, $\\sigma = 0.05$. Since the two search parameters are in different scales, we initialized the diagonal elements of the covariance matrix $C$, so that $c_{ii} = 1.0, i \\in [2n]$ and $c_{ii} = 0.5, i \\in \\{2n+1, ..., 2n+m\\}$, with $2n$ and $m$ the dimensionality of the goal object and human input parameter spaces respectively. \n\nBoth CMA-ES and MAP-Elites may sample scenario parameters that do not fall inside their prespecified ranges. Following recent empirical results on bound constraint handling~\\cite{biedrzycki2020handling}, we adopted a resampling strategy, where new scenarios are resampled until they fall within the prespecified range. \n\\subsection{Measures} \nWe wish to measure both the diversity and the quality of scenarios returned by each algorithm. These are combined by the QD-Score metric~\\cite{pugh2015confronting}, which is defined as the sum of $f$ values of all elites in the archive (Eq.~\\ref{eq:objective} in section~\\ref{sec:problem}). Empty cells have 0 $f$ value. Therefore, QD-score is positively affected by both the coverage of the archive (the number of occupied cells in the archive divided by the total number of cells) and the assessment of the occupied cells. Similarly to previous work~\\cite{fontaine:gecco20}, we compute the QD-Score of CMA-ES and random search for comparison purposes by calculating the behavioral characteristics for each scenario and populating a pseudo-archive. We also include the coverage score as an additional metric of diversity.\n\n\n\\subsection{Hypothesis}\n\\textit{We hypothesize that MAP-Elites will result in larger QD-Score and coverage than both CMA-ES and random search.}\n\nPrevious work~\\cite{fontaine:gecco19,fontaine:gecco20} has shown that behavior spaces are typically distorted: uniformly sampling the search parameter space results in samples concentrated in small areas of the behavior space. Therefore, we expect random search to have small coverage of the behavior space. Additionally, since random search ignores the assessment function $f$, we expect the quality of the found scenarios in the archive to be low.\n\nCMA-ES moves with a single large population that has global optimization properties. Therefore, we expect it to concentrate in regions of high-quality scenarios, rather than explore the archive. On the other hand, MAP-Elites both expands the archive and maximizes the quality of the scenarios within each cell. 
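\nFor concreteness, the QD-Score and coverage measures defined in the previous subsection can be computed from an archive as in the short sketch below; the archive layout (a dictionary from cells to elites paired with their assessments) is an assumption of this illustration.\n\\begin{verbatim}\ndef qd_score(archive):\n    # Sum of the assessments f of all elites; empty cells contribute 0.\n    return sum(f for (_solution, f) in archive.values())\n\ndef coverage(archive, total_cells):\n    # Fraction of behavior-space cells occupied by an elite.\n    return len(archive) \/ total_cells\n\\end{verbatim}\n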
\n\n\n\\begin{figure*}[t!]\n \\centering\n \\begin{tabular}{ccc}\n \\begin{subfigure}[t]{.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/overlayed-side.pdf}\n \\end{subfigure} & \n \\begin{subfigure}[t]{.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/overlayed-side-optimal.pdf}\n \\end{subfigure}&\n \\begin{subfigure}[t]{.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/overlayed-line.pdf}\n \\end{subfigure} \\\\ \n \n \\end{tabular}\n\\caption{(Left) The robot fails to reach the user's goal $g_H$ because of the large deviation in human inputs from the optimal path. The waypoints of the human inputs are indicated with green color. (Center) We show for comparison how the robot would act if human deviation was 0 (optimal human). (Right) The robot fails to reach the user's goal $g_H$ (bottle furthest away from the robot), even though the human provides a near optimal input trajectory.}\n\\label{fig:elites}\n\\end{figure*}\n\n\n\n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{figs\/hists_BC12_plot.pdf}\n\n \n \n \n \n \n \n \n \n \\caption{Distribution of cells explored for random search and MAP-Elites. The cell colors represent frequency counts.}\n \\label{fig:hist}\n \n \\end{figure}\n\\subsection{Analysis} \\label{subsec:Analysis}\nTable~\\ref{tab:results} summarizes the performance of the three algorithms, for each of the three behavior spaces. We conducted a two-way ANOVA to examine the effect of the behavior space and the search algorithm on the QD-Score and coverage. There was a statistically significant interaction between the search algorithm and the behavior space for both QD-Score ($F(4,36) = 62.39, p < 0.001$) and coverage ($F(4,36) = 77.92, p < 0.001$). Simple main effects analysis with Bonferroni correction showed that \\mbox{MAP-Elites} outperformed \\mbox{CMA-ES} and random search in both QD-Score and coverage ($p<0.001$ in all comparisons). This result supports our hypothesis. \n\n\nFig.~\\ref{fig:maps} shows the improvement in the QD-Score over time and one example archive from each algorithm for the first two behavior spaces. MAP-Elites visibly finds more cells and of higher quality (red color), illustrating its ability to cover larger areas of the archive with high-quality scenarios. As expected, CMA-ES concentrates in regions of high-quality scenarios but has small coverage. \n\n\nRandom search covers a smaller area of the archive, compared to MAP-Elites, because of the \\textit{behavior space distortion}, shown in Fig.~\\ref{fig:hist}. Even though the search parameters are sampled uniformly, scenarios are concentrated on the left side of the archive specified by the human rationality and distance between goals BCs (Fig.~\\ref{fig:hist}-top). This occurs because if any of the sampled waypoints deviates from the optimal path, low values of rationality become more likely. In the human variation and distance between goals BCs, the distribution of scenarios generated by random search is concentrated in a small region near the center (Fig.~\\ref{fig:hist}-bottom). This is expected, since the two BCs are Euclidean norms of random vectors (see~\\cite{random2018}). On the other hand, MAP-Elites selects elite scenarios from the archive and perturbs them with Gaussian noise, instead of uniformly sampling the scenario parameters, resulting in larger coverage. 
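\nThe distortion for the human variation BC can be reproduced in a few lines: uniformly sampled disturbance vectors have Euclidean norms that concentrate in a narrow band around their mean, so uniform sampling of the scenario parameters reaches only a small slice of this behavior dimension. The snippet below is purely illustrative and the sample size is arbitrary.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nd = rng.uniform(-0.05, 0.05, size=(100000, 5))  # waypoint disturbances\nbc2 = np.sqrt((d ** 2).sum(axis=1))             # human variation (BC2)\nprint(bc2.min(), bc2.mean(), bc2.max())\n# Most samples fall close to the mean (about 0.065), far from the\n# extremes 0 and 0.11, so random search rarely reaches the left and\n# right edges of this behavior dimension.\n\\end{verbatim}\n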
\n\n\n\n\n\n\n\n\\subsection{Interpreting the Archives}\nIn the generated archives (Fig.~\\ref{fig:maps}), each cell contains an elite, which is the scenario that achieved the maximum assessment value (time to complete the task) for that cell. It is important to confirm that \\emph{the timeouts (red cells in the archive) occur because of the implementation of the tested algorithm (hindsight optimization), rather than being artifacts of the simulation environment}. \n\n\nTherefore, we replay the elites in different regions of the archives to explain the system's performance. We focus on the first two behavior spaces in Fig.~\\ref{fig:maps} using the archives generated with MAP-Elites, since MAP-Elites had the largest QD-Score and coverage.\n\n\n\n\n\n\n\nWe observe that if the distance between goals is large and the human is nearly optimal, the robot performs the task efficiently. This is shown by the blue color in the top-right of the first behavior space (distance and human rationality $\\beta$). We observe the same for large distance and small variation in the second behavior space.\n\n\n\nWe then explore different types of scenarios where the robot fails to reach the user's goal within the maximum time (10 $s$), indicated by the red cells in the archives. When human variation is large (or equivalently rationality is low), the human may provide inputs that guide the robot towards the wrong goal. Since the robot updates a probability distribution over goals based on the user's input~\\cite{javdani2015hindsight} and the robot assumes that the user minimizes their distance to their desired goal, noisy inputs may result in a higher probability being assigned to the wrong goal, so the robot moves towards that goal instead. Fig.~\\ref{fig:elites}(left) shows the execution trace of one elite where this occurs. Fig.~\\ref{fig:best} shows the position of this elite in the archive. Fig.~\\ref{fig:elites}(center) shows how the robot would reach the desired goal if the human had behaved optimally instead. \n\n\n\n\n\n\n\n\n\nWhat is surprising, however, is that the robot does not reach the user's goal even in parts of the behavior space where human variation is nearly 0 (or equivalently rationality is very high), that is, when the human provides a nearly optimal input trajectory! Fig.~\\ref{fig:elites}(right) reveals a case where the two goal objects are aligned one closely behind the other. The robot approaches the first object, on its way towards the second object, and stops there. \n\n\nWhat is interesting in both scenarios is that the robot gets ``stuck'' at the wrong goal, even when the simulated user continues providing inputs towards their desired goal! Inspection of the publicly available implementation~\\cite{ada_code} of the algorithm shows that this results from the combination of two factors: the robot's cost function and the human inputs.\n\n\n\n\n\n\n\\noindent\\textbf{Cost Function.} The cost function that the robot minimizes is specified as a constant cost when the robot is far away from the goal and as a smaller linear cost when the robot is near the target~\\cite{javdani2018shared} (distance to target is smaller than a threshold). This makes the cost of the goal object near the robot significantly lower than the cost of the other goal objects, which results in the probability mass of the goal prediction concentrating on that goal. 
While this can help the user align the end-effector with the object (see~\\cite{javdani2018shared}), it can also lead to incorrect inference, if the robot approaches the wrong goal on its way towards the correct goal or because of noisy human input. We confirmed that removing the linear term from the cost function results in the robot reaching the right goal in both examples\n\n\n\n\n\n\n\n\n\n\\noindent\\textbf{Human Inputs.} The hindsight optimization implementation minimizes a cost function specified as the sum of two quadratic terms, the expected cost-to-go to assist for a distribution over\ngoals, and a term penalizing disagreement with the user's input. The first-order approximation of the value function leads to an interpretation of the final robot action $u_R = u_R^A+u_R^u$ as the sum of two independent velocity commands, an ``autonomous'' action $u_R^A$ towards the distribution over goals and an action that follows the user input $u_R^u$, as if the user was directly teleoperating the robot (see~\\cite{javdani2018shared}).\n\nWe have simulated the human inputs, so that they provide a translational velocity command towards the next waypoint, proportional to the distance of the robot's end-effector to that waypoint (section~\\ref{subsec:parameters}). This results in a term $u_R^u$ of small magnitude when the end-effector is close to one of the waypoints. If at the same time the robot has high confidence on one of the goals, $u_R^A$ will point in the direction of that goal and it will cancel out any term $u_R^u$ that attempts to move the robot in the opposite direction.\n\nWe confirmed that, if the user instead applied the maximum possible input towards their desired goal, the robot would get ``unstuck,'' so a real user would always be able to eventually reach their desired goal. However, this requires effort from the user who would need to ``fight'' the robot. Overall, the archive reveals limitations that depend on \\textit{how the goal objects are aligned in the environment, the direction and magnitude of user inputs, and the cost function used by the implementation of the hindsight optimization algorithm.}\n\n\n\n\n \n \\begin{figure}[t!]\n \\begin{tabular}{cc}\n \\centering\n \\begin{subfigure}[t]{.49\\columnwidth}\n \\includegraphics[width=1.0\\columnwidth]{figs\/obstacle-collision-around.pdf}\n \n \\end{subfigure} &\n \\begin{subfigure}[t]{0.49\\columnwidth}\n \\includegraphics[width=1.0\\columnwidth]{figs\/obstacle-collision-noisy.pdf}\n \n \n \\end{subfigure}\n \\end{tabular}\n \\caption{Scenarios where the policy blending algorithm results in collision with an obstacle, approximated by a sphere. (Left) While the human and robot trajectories are each collision-free, blending the two results to collision when they point towards opposite sides of the obstacle. (Right) Blending with a very noisy human input results in collision. }\n \\label{fig:obstacle-examples}\n \\end{figure}\n \n\n \\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.6\\columnwidth]{figs\/example_two_goals.jpg}\n \\caption{We reproduced the generated scenarios in the real world with actual joystick inputs.}\n \\label{fig:real}\n \\end{figure}\n\\section{Comparing Algorithms}\\label{sec:comparing_short}\nGiven the effectiveness of quality diversity in automatically generating a diverse range of test scenarios, we can also use it to understand differences in performance between algorithms. 
In this work, we compare the performance of hindsight optimization~\\cite{javdani2015hindsight} with linear policy blending~\\cite{dragan2012formalizing}. We describe the experiment in Appendix~\\ref{sec:comparing}. We found that policy blending resulted in collisions, even for a nearly optimal human, in cases where human and robot inputs pointed towards opposite sides of an obstacle (Fig.~\\ref{fig:obstacle-examples}). This confirms previous theoretical findings~\\cite{trautman2015assistive} of unsafe behavior that linear blending has in the presence of obstacles. On the other hand, hindsight optimization avoids collisions, since it uses the human inputs as observations and the robot's motion is determined only by the robot's policy. \n\n\n\\section{Discussion} \\label{sec:discussion}\n\n\\noindent\\textbf{Experimental Findings.} We found that failure scenarios for hindsight optimization occur when the two goals are close to each other and the human inputs are noisy, or when one goal is in front of the other. In the latter case, failure occurs even if the human input is nearly optimal in minimizing the distance to the desired goal. In both cases, the robot becomes over-confident about the wrong goal and gets ``stuck'' there. \n\nAn important factor is the linear decrease of the cost in the vicinity of the goal objects. When specifying the cost function, it would be prudent to make the distance threshold for the linear decrease proportional to the distance between the goal objects, rather than setting it to an absolute value. \n\nOther potential measures to avoid the system's overconfidence towards the wrong goal are: (1) including the Shannon entropy with respect to all the goals in the cost function~\\cite{jeon2020shared} to penalize actions that result in very high confidence to one specific goal; (2) assigning a non-zero probability that the user changes their mind throughout the task and switches goals~\\cite{nikolaidis2017human, jain2018recursive}. It would be interesting to investigate the effect of more ``conservative'' assistance on subjective and objective metrics of the robot's performance.\n\nFinally, while linear policy blending naturally gives more control to the user and it is preferred by users in simple tasks~\\cite{javdani2018shared}, we empirically verified that the algorithm can generate unsafe trajectories, even if the individual human and robot inputs are safe. \n\nTo show that the presented scenarios can occur in deployed systems, we reproduce them in the real world with actual inputs through a joystick interface (Fig.~\\ref{fig:real}).\\footnote{We show different generated scenarios reproduced in the real world here: \\url{https:\/\/youtu.be\/2-JCO3dUHsA}} \n\n\\noindent\\textbf{Stochasticity in Scenarios.} In our experiments the generated scenarios are deterministic. One may wish to simulate scenarios where there is stochasticity in the robot's decision making or in the environment dynamics. A designer may also wish to test the system's performance under a stochastic human model, e.g., when human inputs are generated by a stochastic noisily rational human.\n\n\n\nThe most common approach in evolutionary optimization of noisy domains is \\textit{explicit averaging}, where we run multiple trials of the same scenario and then retain an aggregate measure of the assessment estimate, e.g, we compute the average to estimate the expected assessment $\\mathbb{E}[f(\\theta,\\phi)]$~\\cite{rakshit2017noisy,jin2005evolutionary}. 
We can follow the same process to estimate the behavior characteristics~\\cite{justesen2019map}. To improve the efficiency of the estimation, previous work has also employed implicit averaging methods, e.g., where the assessment of a scenario ($\\theta, \\phi$) is estimated by taking the assessments of previously evaluated scenarios in the neighborhood of $\\theta,\\phi$ into account. Previous work also includes adaptive sampling techniques, where the number of trials increases over time as the quality of the solutions in the archive improves~\\cite{justesen2019map}. A recent variant of MAP-Elites (Deep-Grid MAP-Elites) which updates a subpopulation of solutions for each cell in the behavior space has shown significant benefits in sample efficiency ~\\cite{flageat2020fast}. We leave these exciting directions for future work.\n\n\n\n\n\\noindent\\textbf{Limitations.} An important challenge is how to effectively characterize the behavior spaces. While we have assumed bounded behavior spaces, the rationality coefficient does not meet this assumption, which resulted in elites accumulating in the upper bound of the rationality in the archive (Fig.~\\ref{fig:maps}). Adapting the boundaries of the space dynamically based on the distribution of generated scenarios~\\cite{fontaine:gecco19} could improve coverage in this case.\n\nWhile distance between objects indeed played a role, our experiments showed the unexpected and surprising edge case where the two objects are in column formation and the human is nearly optimal. An interesting follow-up experiment would be to specify as a BC some metric of object alignment in column formation and investigate further the effect of this variable. In general, a practitioner can test the system with an initial design of BCs, observe the failure cases, create new BCs from newly observed insights and test the system further.\n \n We focused on how to effectively search the generative space of scenarios, but not on the generation methods themselves. Realism is an important future consideration, both in generating environments and human inputs. In human training, realism can be measured through a modified Turing test designed to require humans to distinguish generated scenarios from human authored ones~\\citep{martin:ucf}. Alternatively, we could run a user study where we place objects in the same locations as our failure scenarios and observe whether participants perform similar actions that cause failures.\n \n\n\n\\noindent\\textbf{Implications.}\nFinding failure scenarios of HRI algorithms will play a critical role in the adoption of these systems. We proposed quality diversity as an approach for automatically generating scenarios that assess the performance of HRI algorithms in the shared autonomy domain and we illustrated a path for future work. While real-world studies are essential in evaluating complex HRI systems, automatic scenario generation can facilitate understanding and tuning of the tested algorithms, as well as provide insights on the experimental design of real-world studies, whose findings can in turn inform the designer for testing the system further. We are excited about applications of quality diversity algorithms as test oracles in verification systems~\\cite{kress2020formalizing,porfirio2018authoring}, as well as in other domains where deployed robotic systems face a diverse range of interaction scenarios. 
\n\n\\section{Acknowledgements}\nWe thank Tapomayukh Bhattacharjee, David Hsu, Shen Li, Dylan Losey, Dorsa Sadigh, Rosario Scalice, Julian Togelius and our anonymous RSS reviewers for their feedback on early versions of this work.\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFor every separable exact $C^*$-algebra $\\Ac$, the lack of non-empty quasi-compact open subsets of the primitive ideal \nspace $\\Prim(\\Ac)$ ensures that $\\Ac$ is AF-embeddable, that is, $\\Ac$ is isomorphic to a closed subalgebra of an \nAF-algebra. \nThe converse also holds if $\\Ac$ is traceless (cf. \\cite[Cor.~B-C]{Ga20}), and for other $C^*$-algebras as well. \nFor instance, if $G$ is a generalized $ax+b$-group, hence a solvable Lie group $G = \\Vc \\rtimes \\RR$, where \n$\\Vc$ is a finite-dimensional real vector space, then $C^*(G)$ is not traceless. \nNevertheless, $C^*(G)$ is AF-embeddable if and only if $\\Prim(G)$ has no non-empty quasi-compact open subsets. \n(See Example~\\ref{ax+b} below.) \nHere and throughout this paper, we denote $\\Prim(G):=\\Prim(C^*(G))$ for any locally compact group~$G$, and a topological space is called quasi-compact if all its open covers have finite subcovers, without requiring any separation property. \n\n\nThe questions addressed in the present paper concern the relation between existence of non-empty open quasi-compact subsets of $\\Prim(G)$ and stably finiteness or even AF-embeddability of $C^*(G)$ for solvable Lie groups $G$.\n\nThe focus of our study is on exponential solvable Lie groups for two main reasons: \nFirstly, for all simply connected solvable Lie groups with polynomial growth, we have proved in \\cite{BB20} that there exist no non-empty quasi-compact open subsets in the primitive ideal spaces of their $C^*$-algebras. \nSecondly, there exist large classes of exponential solvable Lie groups $G$ for which $\\Prim(G)$ has finite (hence quasi-compact) open subsets or, more generally, the stabilized $C^*$-algebra $\\Kc \\otimes C^*(G)$ contains non-zero projections, that is, \n$C^*(G)$ is not stably projectionless. \nSee for instance \\cite{GKT92}, \\cite{KT96}, or Section~\\ref{Sect4}. \n\nOur main results for simply connected solvable Lie groups~$G$ could be summarized as follows: \n\\begin{enumerate}[(1)]\n\\item\\label{item_1} If $\\dim G\\in 4\\ZZ + 2$ or $\\dim G\\in 2 \\ZZ +1$ then $C^*(G)$ is stably finite if and only if it is stably projectionless \n(Theorem~\\ref{4n+2}). \n\\item\\label{item_2} If $\\dim G\\in 4\\ZZ$, we prove by examples that\nboth situations can appear: \nif $\\Prim(G) $ has finite open subsets, then $C^*(G)$ could be either AF-embeddable, \n or not even stably finite \n (Proposition~\\ref{Heis} and Theorem~\\ref{N6N15}).\n\\end{enumerate}\nWe also take the first steps towards describing the exponential solvable Lie groups $G$ whose nilradical is 1-codimensional and for which $C^*(G)$ is AF-embeddable while $\\Prim(G)$ contains finite open subsets (Corollary~\\ref{cf-cor8}). \n\nThe methods we use for obtaining some of these results owe much to the deep work \\cite{Sp88} on AF-embeddings of $C^*$-algebra extensions. \nIn addition, we have used the way the ``real'' structures of group $C^*$-algebras (in the sense of \\cite{Ka80} and \\cite{Ros16}) are encoded in the K-theory, \nand also the propagation of stably finiteness of the group $C^*$-algebras through suitable deformations of the constants of structure of the Lie algebras. 
\n\n\nIn more detail, this paper has the following contents. \nIn Section~\\ref{section1} we obtain some technical results that concern the link between stable finiteness and the existence of projections. \nWe also investigate the interaction between ``real'' structures of $C^*$-algebras and Rieffel's construction \nof Connes' Thom isomorphism (Proposition~\\ref{tsigns}) with an application to solvable Lie groups (Corollary~\\ref{signs_solvable}). \nThen we establish that continuous deformations preserve stable finiteness for certain continuous fields of $C^*$-algebras (Proposition~\\ref{prop-cf2}), and we give a condition for the failure of stable finiteness in terms of open points of the primitive ideal space (Proposition~\\ref{proj}). \n \nSection~\\ref{section4k} begins with our main stable finiteness result on solvable Lie groups whose dimension is not divisible by~4 (Theorem~\\ref{4n+2}). \nWe then turn to groups whose nilradical is 1-codimensional. \nThese groups are basically determined by a derivation $D\\in\\Der(\\ng)$ of a nilpotent Lie algebra $\\ng$, and the main task is to describe the stable finiteness or AF-embeddability properties of the $C^*$-algebra of the corresponding group $N\\rtimes_D\\RR$ in terms of the spectrum of~$D$. \nIn this connection, the technique of continuous fields allows us to establish a necessary condition\nfor stable finiteness in terms of the spectrum of the involved derivation (Theorem~\\ref{prop-cf4}). \nWe then obtain a technical result that explains the greater complexity of the behaviour of the groups whose dimension is divisible by~4 (Theorem~\\ref{cf-prop7}), \nand then we draw a key consequence that allows us to decrease the dimensions in the study of the specific examples (Corollary~\\ref{cf-cor8}). \n\nFinally, in Section~\\ref{Sect4}, we use the techniques of Sections \\ref{section1} and \\ref{section4k} in order to study the $C^*$-algebras of some specific solvable Lie groups. \nWe focus on groups whose nilradical is 1-codimensional and 2-step nilpotent, \nsince this is the simplest class of groups after the generalized $ax+b$-groups (Example~\\ref{ax+b}). \nOur most complete results are obtained in the cases when the nilradical is a Heisenberg group (Proposition~\\ref{Heis}) or is a central extension of the free 6-dimensional 2-step nilpotent Lie group (Theorem~\\ref{N6N15}), \nwhere we characterize the group $C^*$-algebra properties in terms of the spectral data of the derivation involved in the construction.\nIn particular, when the Heisenberg group has dimension $4k+3$, there are cases in which \nthere exists an open point in the unitary dual of $\\HH_n \\rtimes \\RR$ and yet its $C^*$-algebra \nis AF-embeddable.\nWe also discuss a class of Heisenberg-like groups associated to finite-dimensional real division algebras (Theorem~\\ref{N6N17}). \nThese last examples show, in particular,\nthat the necessary condition for stable finiteness is \nnot sufficient, that is, the lack of stable finiteness is not preserved by continuous deformations.\n\n\n\n\n \n \\section{$K$-theoretic tools, stable finiteness, and AF-embeddability}\\label{section1}\n\nIn this section we obtain some technical results that play a key role in the next sections.
\n \n \n\\subsection{Notation related to the construction of $K$-groups}\n\\label{App_K}\n\nWe start by recalling the notions and notation needed in this paper.\nThroughout the paper we use the notation from \\cite{RLL00}.\n\nFor any $C^*$-algebra $\\Ac$ its unitization is $\\widetilde{\\Ac}:=\\CC\\1\\dotplus \\Ac$. \nAlso, for any integers $m,n\\ge 1$ we denote by $M_{m,n}(\\Ac)\\subseteq M_{m,n}(\\widetilde{\\Ac})$ the $m\\times n$ matrix spaces with entries in $\\Ac$ and $\\widetilde{\\Ac}$, respectively, \nand for $m=n$ we write as usual $M_n(\\Ac)\\subseteq M_n(\\widetilde{\\Ac})$ for the corresponding matrix $C^*$-algebras. \nMoreover, $\\1_n\\in M_n(\\widetilde{\\Ac})$ \nis the identity matrix and $\\0_n\\in M_n(\\Ac)$ is the zero matrix. \n\n\nWe denote $\\Pg(\\Ac):=\\{p\\in \\Ac\\mid p=p^2=p^*\\}$ and $\\Pg_n(\\Ac):=\\Pg(M_n(\\Ac))$ for any $n\\ge 1$. \nThe disjoint union \n$$\\Pg_\\infty(\\Ac):=\\bigsqcup_{n\\ge 1}\\Pg_n(\\Ac)$$ \nhas the natural structure of a graded (noncommutative) semigroup \nwith its operation $\\oplus$ defined by \n$$\\Pg_n(\\Ac)\\times\\Pg_m(\\Ac)\\to\\Pg_{n+m}(\\Ac),\\quad \n(p,q)\\mapsto\\matto{p}{q}:=\\matt{p}{q}$$ \nfor all $m,n\\ge 1$. \nThe Cartesian projection $s\\colon \\widetilde{\\Ac}\\to\\CC\\1(\\subseteq\\widetilde{\\Ac})$ is extended to \n$s\\colon M_n(\\widetilde{\\Ac})\\to M_n(\\widetilde{\\Ac})$, \n$(a_{ij})_{i,j}\\mapsto (s(a_{ij}))_{i,j}$, for any $n\\ge 1$. \nWe recall that the equivalence relation $\\sim_0$ on $\\Pg_\\infty(\\widetilde{\\Ac})$ is defined in the following way: \nif $p\\in\\Pg_m(\\widetilde{\\Ac})$ and $q\\in\\Pg_n(\\widetilde{\\Ac})$, \nthen $p\\sim_0 q$ if and only if there exists $v\\in M_{m,n}(\\widetilde{\\Ac})$ with $v^*v=p$ and $vv^*=q$. \nThere is a natural additive map $\\Pg_\\infty(\\widetilde{\\Ac}) \\to K_0(\\widetilde{\\Ac})$, $p \\mapsto [p]_0$. \nThen, with the above notation, we have\n\\begin{align*}\nK_0(\\widetilde{\\Ac})\n&=\\{[p]_0-[q]_0\\mid p,q\\in \\Pg_\\infty(\\widetilde{\\Ac})\\}, \\\\\nK_0(\\Ac)\n&=\\{[p]_0-[s(p)]_0\\mid p\\in \\Pg_\\infty(\\widetilde{\\Ac})\\}.\n\\end{align*}\nMoreover, for any $C^*$-algebra $\\Bc$ and any $*$-morphism $\\varphi\\colon \\Ac\\to \\Bc$ there is a group morphism $K_0(\\varphi)\\colon K_0(\\Ac)\\to K_0(\\Bc)$, \n$[p]_0-[s(p)]_0\\mapsto[\\widetilde{\\varphi}(p)]_0-[s(\\widetilde{\\varphi}(p))]_0$. \n\n\n\nWe denote $\\Uc(\\widetilde{\\Ac}):=\\{u\\in \\widetilde{\\Ac}\\mid u^*u=uu^*=\\1\\}$ and $\\Uc_n(\\widetilde{\\Ac}):=\\Uc(M_n(\\widetilde{\\Ac}))$ for every $n\\ge 1$; these unitary groups are the basic ingredient in the construction\nof the group $K_1(\\Ac)=K_1(\\widetilde{\\Ac})$.\n\n\n\n\\subsection{A $K$-theoretic condition for stable finiteness}\nThe next proposition is a partial generalization of \\cite[Lemma 1.5]{Sp88}, and\nit is one of the main tools in this paper. \n\n\\begin{proposition}\\label{P1}\nFor every $C^*$-algebra $\\Ac$ the following assertions are equivalent: \n\\begin{enumerate}[{\\rm(i)}]\n\t\\item\\label{P1_item1} \n\tThere exist $k\\ge 1$ and $p\\in\\Pg_k(\\Ac)\\setminus\\{\\0_k\\}$ with $[p]_0=0\\in K_0(\\Ac)$. \n\t\\item\\label{P1_item2} \n\tThere exist $r\\ge 1$ and $v\\in M_r(\\widetilde{\\Ac})$ with $vv^*=\\1_r\\ne v^*v$.\n\t\\item\\label{P1_item3} The $C^*$-algebra $\\Ac$ is not stably finite. \n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\n\\eqref{P1_item1}$\\Rightarrow$\\eqref{P1_item2}: \nWe have $p\\in\\Pg_k(\\Ac)\\subseteq M_k(\\Ac)$, hence $s(p)=\\0_k$.
\nThen, by \\cite[4.2.2(ii)]{RLL00}, there exists $m\\ge 1$ with \n$\\matt{p}{\\1_m}\\sim\\matt{\\0_k}{\\1_m}$ in $\\Pg_{k+m}(\\widetilde{\\Ac})$. \nFurthermore, by \\cite[2.2.8(i)]{RLL00}, we obtain \n$\\mattt{p}{\\1_m}{\\0_{k+m}}\\sim_u\\mattt{\\0_k}{\\1_m}{\\0_{k+m}}$ in $\\Pg_{2(k+m)}(\\widetilde{\\Ac})$. \nThat is, there exists $w\\in\\Uc_{2(k+m)}(\\widetilde{\\Ac})$ with \n$$\\mattt{p}{\\1_m}{\\0_{k+m}}= w\\mattt{\\0_k}{\\1_m}{\\0_{k+m}}w^*\n\\in \\Pg_{2(k+m)}(\\widetilde{\\Ac}).$$\nNow, defining \n$$v:=w\\mattt{\\0_k}{\\1_m}{\\0_{k+m}}+\n\\mattt{\\1_k-p}{\\0_m}{\\1_{k+m}}\\in M_{2(k+m)}(\\widetilde{\\Ac})$$\nwe obtain \n\\begin{align*}\n\\allowdisplaybreaks\nvv^*&=w\\mattt{\\0_k}{\\1_m}{\\0_{k+m}}w^* \n+\\mattt{\\1_k-p}{\\0_m}{\\1_{k+m}} \n=\\mattt{\\1_k}{\\1_m}{\\1_{k+m}} \\\\\n&=\\1_{2(k+m)}\n\\end{align*}\nand \n\\allowdisplaybreaks\n\\begin{align*}\nv^*v\n=\n&\\mattt{\\0_k}{\\1_m}{\\0_{k+m}}w^* w\\mattt{\\0_k}{\\1_m}{\\0_{k+m}} \\\\\n&+ \\mattt{\\0_k}{\\1_m}{\\0_{k+m}}w^*\\mattt{\\1_k-p}{\\0_m}{\\1_{k+m}} \\\\\n&+ \\mattt{\\1_k-p}{\\0_m}{\\1_{k+m}}w\\mattt{\\0_k}{\\1_m}{\\0_{k+m}} \\\\\n&+ \\mattt{\\1_k-p}{\\0_m}{\\1_{k+m}} \\\\\n=\n&\n\\mattt{\\0_k}{\\1_m}{\\0_{k+m}}+ \\0_{2(k+m)}+\\0_{2(k+m)}\n+ \\mattt{\\1_k-p}{\\0_m}{\\1_{k+m}} \\\\\n=\n&\n\\mattt{\\1_k-p}{\\1_m}{\\1_{k+m}} \\\\\n\\ne \n&\\1_{2(k+m)}.\n\\end{align*}\n\n\\eqref{P1_item2}$\\Rightarrow$\\eqref{P1_item1}: \nFor $r\\ge 1$ and $v\\in M_r(\\widetilde{\\Ac})$ with $vv^*=\\1_r\\ne v^*v$ \nwe define $p:=\\1_r-v^*v\\in\\Pg_r(\\widetilde{\\Ac})\\setminus\\{\\0_r\\}$. \nSince $vv^*=\\1_r$, we obtain $\\1_r=s(vv^*)=s(v)s(v)^*$ in $M_r(\\CC\\1)$, \nhence $s(v)^*s(v)=\\1_r$, and this implies $s(p)=\\1_r-s(v^*v)=\\1_r-s(v)^*s(v)=\\0_r$. \nIt follows that $p \\in \\Pg_r(\\Ac)$. \n\nMoreover, we have $p+v^*v=vv^*$ and $p v^*v=\\0_r$, hence, by \\cite[3.1.7(iv)]{RLL00}, \n$[p]_0+[v^*v]_0=[vv^*]_0$ in $K_0(\\widetilde{\\Ac})$. \nHere we have $v^*v\\sim_0 vv^*$, hence $[v^*v]_0=[vv^*]_0$ in $K_0(\\widetilde{\\Ac})$, and we then obtain $[p]_0=0\\in K_0(\\Ac)$.\n\n\\eqref{P1_item2}$\\iff$\\eqref{P1_item3}: \nThis is just the definition of stably finite $C^*$-algebras. \n\\end{proof}\n\n\nThe following simple facts were already noted in \\cite[proof of Cor. D]{Ga20},\n but we prove them here for completeness. \n\n\\begin{corollary}\\label{rem-1.5} Let $\\Ac$ be a $C^*$-algebra. \n\\begin{enumerate}[{ \\rm (i) }]\n\\item \\label{rem-1.5_i} If $\\Ac$ is stably projectionless, then it is stably finite.\nMoreover, for a $C^*$-algebra $\\Ac$ with $K_0(\\Ac)=0$, $\\Ac$ is stably finite if and only if it is stably projectionless.\n\\item\\label{rem-1.5_ii} If $\\Prim(\\Ac)$ has no nonempty quasi-compact open subsets, \nthen $\\Ac$ is stably projectionless. \n\\item\\label{rem-1.5_iii} If $\\Prim(\\Ac)$ has no nonempty quasi-compact open subsets, \nthen $\\Ac$ is stably finite. \n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nAssertion \\eqref{rem-1.5_i} is a direct consequence of Proposition~\\ref{P1} (\\eqref{P1_item1}$\\iff$\\eqref{P1_item3}).\n\n\\eqref{rem-1.5_ii}\nIf $\\Ac$ is not stably projectionless, \nthere exist \n$k\\ge 1$ and $p\\in\\Pg_k(\\Ac)\\setminus\\{0\\}$, hence\nthere exists a nonempty quasi-compact open subset of $\\widehat{M_k(\\Ac)}$. \n(See for instance the proof of \\cite[Ex. 4.8((vii)$\\Rightarrow$(iv))]{BB20}.) \nSince $M_k(\\Ac)=M_k(\\CC)\\otimes \\Ac$, \nit follows that $\\widehat{M_k(\\Ac)}$ is homeomorphic to $\\widehat{\\Ac}$, \nby \\cite[Th. B.45(b)]{RaWi98}.
\nTherefore $\\widehat{\\Ac}$ has a nonempty quasi-compact open subset. \nMoreover, the canonical mapping $\\widehat{\\Ac}\\to\\Prim(\\Ac)$, $[\\pi]\\mapsto\\Ker\\pi$, \nis continuous and open, hence it maps any nonempty quasi-compact \nopen subset of $\\widehat{\\Ac}$ onto a nonempty quasi-compact open subset of $\\Prim(\\Ac)$.\nIt follows that $\\Prim(\\Ac)$ has a nonempty quasi-compact open subset, \nwhich contradicts the hypothesis. \n\nAssertion~\\eqref{rem-1.5_iii} follows immediately from \\eqref{rem-1.5_i} and \\eqref{rem-1.5_ii}. \n\\end{proof} \n\n\n\n\\begin{remark}\n\\normalfont\nIn the special case of separable exact $C^*$-algebras, \nCorollary~\\ref{rem-1.5} \\eqref{rem-1.5_ii} is a weak version of \\cite[Cor. B]{Ga20}, which gives AF-embeddability rather than just stable finiteness. \n\\end{remark}\n\n\n\\subsection{Action of ``real'' structures on $K$-groups} \n\nThe following terminology goes back to G.G. Kasparov \\cite{Ka80}.\n\\begin{definition}\\label{real} \n\\normalfont\nA \\emph{``real'' structure} of a $C^*$-algebra $\\Ac$ is an antilinear mapping $\\tau\\colon \\Ac\\to \\Ac$, satisfying $\\tau(ab)=\\tau(a)\\tau(b)$, $\\tau(a^*)=\\tau(a)^*$, \nand $\\tau(\\tau(a))=a$ for all $a,b\\in\\Ac$. \nA \\emph{``real'' $C^*$-algebra} is a $C^*$-algebra $\\Ac$ with a fixed ``real'' structure~$\\tau$.\nWe denote $\\overline{a}:=\\tau(a)$ for all $a\\in \\Ac$ when no confusion can arise.\n\n\nA \\emph{``real'' ideal} of $\\Ac$ is a closed two-sided ideal $\\Jc\\subseteq \\Ac$ \nthat is invariant to the ``real'' structure of $\\Ac$. \nIn this case $\\Jc$ is a ``real'' $C^*$-algebra with respect to the ``real'' structure $\\tau\\vert_\\Jc\\colon \\Jc\\to \\Jc$. \n\nLet $\\Ac$, $\\Bc$ be $C^*$-algebras with ``real'' structures $\\tau_\\Ac$ and $\\tau_\\Bc$, respectively. \nA \\emph{``real'' morphism} is a $*$-morphism $\\psi\\colon \\Ac\\to \\Bc$ satisfying $\\psi(\\tau_\\Ac (a))=\\tau_\\Bc(\\psi(a))$ for all $a\\in \\Ac$. \n\\end{definition}\n\nFor every $n\\ge 1$ the matrix algebra $M_n(\\CC)$ has a canonical ``real'' structure;\nif \n$\\Ac$ is a ``real'' $C^*$-algebra then the $C^*$-algebra \n$M_n(\\Ac)=M_n(\\CC)\\otimes \\Ac$ has a canonical ``real'' structure $(a_{ij})\\mapsto (\\overline{a_{ij}})$. \nMoreover $\\widetilde{\\Ac}$ has a canonical ``real'' structure given by $\\overline{a+ z\\1}=\\overline{a}+\\overline{z}\\1$ for every $a\\in \\Ac$ and $z\\in\\CC$. \n\nIf $u,v\\in\\Uc_\\infty(\\widetilde{\\Ac})$ then $\\overline{u},\\overline{v}\\in \\Uc_\\infty(\\widetilde{\\Ac})$ and $\\overline{\\matto{u}{v}}=\\matto{\\overline{u}}{\\overline{v}}\\in\\Uc_\\infty(\\widetilde{\\Ac})$, and \n$$u\\sim_1 v\\iff \\overline{u}\\sim_1\\overline{v}.$$\n(See \\cite[8.1.1]{RLL00}.) \nTherefore we obtain a well-defined group homomorphism \n$$K_1(\\widetilde{\\Ac})\\to K_1(\\widetilde{\\Ac}),\\quad [u]_1\\mapsto \\overline{[u]_1}:=[\\overline{u}]_1$$\nwhich is actually an isomorphism and is equal to its own inverse. \n\nIf $p,q\\in\\Pg_\\infty(\\widetilde{\\Ac})$ then $\\overline{p},\\overline{q}\\in\\Pg_\\infty(\\widetilde{\\Ac})$ \nand \n$\\overline{\\matto{p}{q}}=\\matto{\\overline{p}}{\\overline{q}}\\in\\Pg_\\infty(\\widetilde{\\Ac})$, \n\\begin{align*}\np\\sim_0 q\\iff \\overline{p}\\sim_0\\overline{q}\n\\end{align*}\nand $s(\\overline{p})=\\overline{s(p)}$.
\nWe then obtain a well-defined semigroup homomorphism \n$$\\Dc(\\widetilde{\\Ac})\\to \\Dc(\\widetilde{\\Ac}), \\quad [p]_\\Dc\\mapsto\\overline{[p]_\\Dc}:=[\\overline{p}]_\\Dc$$\nwhich is actually an isomorphism and is equal to its own inverse. \nThis further gives rise to a group isomorphism \n$$K_0(\\widetilde{\\Ac})=G(\\Dc(\\widetilde{\\Ac}))\\to G(\\Dc(\\widetilde{\\Ac}))=K_0(\\widetilde{\\Ac}), \\quad [p]_0\\mapsto\\overline{[p]_0}:=[\\overline{p}]_0$$\nwhich is equal to its own inverse, and satisfies \n$$[s(\\overline{p})]_0=\\overline{[s(p)]_0}\\text{ for all }p\\in\\Pg_\\infty(\\widetilde{\\Ac}).$$\nIf $\\psi\\colon \\Ac\\to \\Bc$ is a ``real'' morphism of ``real'' $C^*$-algebras, \nthen its corresponding group morphism \n$K_j(\\widetilde{\\psi})\\colon K_j(\\widetilde{\\Ac})\\to K_j(\\widetilde{\\Bc})$ \nsatisfies $K_j(\\widetilde{\\psi})(\\overline{x})=\\overline{K_j(\\widetilde{\\psi})(x)}$ for all $x\\in K_j(\\widetilde{\\Ac})$ and $j=0,1$. \n\nIn particular, for $j=0$, $\\Bc=\\{0\\}$, and $\\psi=0$, it follows that the subgroup $K_0(\\Ac)=\\Ker(K_0(\\widetilde{\\psi}))$ is invariant to the automorphism $x\\mapsto\\overline{x}$ of $K_0(\\widetilde{\\Ac})$. \n\n\\begin{lemma}\\label{deltas}\nLet $\\psi\\colon \\Ac \\to \\Bc$ be a ``real'' surjective morphism of ``real'' $C^*$-algebras. \nDenote $\\Jc:=\\Ker\\psi$, regarded as a ``real'' ideal of $\\Ac$ \nwith its corresponding inclusion map $\\varphi\\colon \\Jc\\hookrightarrow \\Ac$, \nand consider the six-term exact sequence \n\\begin{equation}\\label{hexagon}\n\\xymatrix{\nK_0(\\Jc) \\ar[r]^{K_0(\\varphi)} & K_0(\\Ac) \\ar[r]^{K_0(\\psi)} & K_0(\\Bc) \\ar[d]^{\\delta_0}\\\\ \nK_1(\\Bc) \\ar[u]^{\\delta_1} & K_1(\\Ac) \\ar[l]_{K_1(\\psi)} & K_1(\\Jc) \\ar[l]_{K_1(\\varphi)}\n}\n\\end{equation}\nThen we have \n\\begin{enumerate}[{\\rm(i)}]\n\\item\\label{deltas_item1} \n$\\delta_0(\\overline{x})=-\\overline{\\delta_0(x)}$ for all $x\\in K_0(\\Bc)$; \n\\item\\label{deltas_item2} \n$\\delta_1(\\overline{y})=\\overline{\\delta_1(y)}$ for all $y\\in K_1(\\Bc)$.\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n\\eqref{deltas_item1}\nFor arbitrary $x\\in K_0(\\Bc)$ there exist $n\\ge 1$ and $p\\in \\Pg_n(\\widetilde{\\Bc})$ with $x=[p]_0-[s(p)]_0$. \nThere also exist $a=a^*\\in M_n(\\widetilde{\\Ac})$ and $u\\in \\Uc_n(\\widetilde{\\Jc})$ \nwith \n$$\\widetilde{\\psi}(a)=p\\; \\text{ and }\\; \\widetilde{\\varphi}(u)=\\exp(2\\pi\\ie a)\\in\\Uc_n(\\widetilde{A}),$$ \nand $\\delta_0(x)=-[u]_1$ by \\cite[12.2.2(i)]{RLL00}. \n\nFurthermore $\\overline{x}=[\\overline{p}]_0-[\\overline{s(p)}]_0$ \nand $\\overline{a}=\\overline{a}^*\\in M_n(\\widetilde{A})$ satisfies $\\widetilde{\\psi}(\\overline{a})=\\overline{\\widetilde{\\psi}(a)}=\\overline{p}$. \nOn the other hand, \n$\\widetilde{\\varphi}(\\overline{u}^*)\n=\\overline{\\widetilde{\\varphi}(u)}^*\n=\\exp(2\\pi\\ie \\overline{a})$, \nhence, since $[u^*]_1=-[u]_1$ by \\cite[8.1.3]{RLL00}, we obtain\n$$\\delta_0(\\overline{x})=-[\\overline{u}^*]_1=[\\overline{u}]_1=\\overline{[u]_1}\n=-\\overline{\\delta_0(x)}.$$\n\\eqref{deltas_item2} \nFor arbitrary $y\\in K_1(\\Bc)=K_1(\\widetilde{\\Bc})$ \nthere exist $n\\ge 1$ and $u\\in \\Uc_n(\\widetilde{\\Bc})$ with $y=[u]_1$. \nThere also exist $v\\in \\Uc_{2n}(\\widetilde{\\Ac})$ and \n$p\\in\\Pg_{2n}(\\widetilde{\\Jc})$ \nwith \n$$\\widetilde{\\psi}(v)=\\matt{u}{u^*}\\; \\text{ and }\\;\n\\widetilde{\\varphi}(p)=v\\matt{\\1_n}{\\0_n}v^*, $$ \nand \nwe have $\\delta_1(y)=[p]_0-[s(p)]_0$ by \\cite[9.1.4]{RLL00}. 
\n\nFurthermore $\\overline{y}=[\\overline{u}]_1$ \nand $\\overline{u}\\in \\Uc_n(\\widetilde{\\Bc})$, \n$\\overline{v}\\in \\Uc_{2n}(\\widetilde{\\Ac})$, \n$\\overline{p}\\in\\Pg_{2n}(\\widetilde{\\Jc})$ satisfy \n$$\\widetilde{\\psi}(\\overline{v})\n=\\overline{\\widetilde{\\psi}(v)}\n=\\matt{\\overline{u}}{\\overline{u}^*}\\; \\text{ and }\\; \n\\widetilde{\\varphi}(\\overline{p})\n=\\overline{\\widetilde{\\varphi}(p)}\n=\\overline{v}\\matt{\\1_n}{\\0_n}\\overline{v}^*$$ \nhence \n$$\\delta_1(\\overline{y})\n=[\\overline{p}]_0-[s(\\overline{p})]_0\n=[\\overline{p}]_0-[\\overline{s(p)}]_0\n=\\overline{\\delta_1(y)}.$$\nThis completes the proof. \n\\end{proof}\n\n\\begin{remark}\n\\normalfont\nFor any locally compact group $G$ we regard its $C^*$-algebra $C^*(G)$ as a ``real'' $C^*$-algebra with its canonical ``real'' structure given by $\\overline{f}(x):=\\overline{f(x)}$ for every $f\\in\\Cc_c(G)\\hookrightarrow C^*(G)$. \nSee for instance \\cite[\\S 3.2]{Ros16}. \n\\end{remark}\n\n\\begin{lemma}\\label{dim1}\nWe have $[\\overline{u}]_1=[u]_1\\in K_1(C^*(\\RR))\\simeq\\ZZ$ \nfor all $u\\in\\Uc_\\infty(C^*(\\RR)^\\sim)$. \n\\end{lemma}\n\n\\begin{proof}\nWe use the well-known $*$-isomorphism given by the Fourier transform \n$$F\\colon C^*(\\RR)\\to\\Cc_0(\\ie\\RR),\\quad \n(F(f))(\\ie\\xi)=\\int_{\\RR}\\ee^{-\\ie\\xi x}f(x)\\de x \n\\text{ if }f\\in\\Cc_c(\\RR)$$\nwhere we regard $\\Cc_0(\\ie\\RR)$ as a commutative $C^*$-algebra with its pointwise operations and with the sup-norm, and with the ``real'' structure $\\tau_0\\colon \\Cc_0(\\ie\\RR)\\to\\Cc_0(\\ie\\RR)$, \n$(\\tau_0(g))(z)=\\overline{g(\\overline{z})}$ for all $g\\in\\Cc_0(\\ie\\RR)$ and $z\\in\\ie\\RR$. \nWe have \n$$F(\\overline{f})(z)=\\overline{F(f)(\\overline{z})}\\text{ for all }z\\in\\ie\\RR\\text{ and }f\\in\\Cc_c(\\RR)$$\nhence $F$ is a ``real'' isomorphism of ``real'' $C^*$-algebras. \nConsider the Cayley homeomorphism \n$$\\kappa\\colon \\ie\\RR\\to\\TT\\setminus\\{-1\\},\\quad \\kappa(\\ie\\xi)=\\frac{\\ie\\xi+1}{-\\ie\\xi+1}$$\nwith its inverse \n$\\kappa^{-1}(w)=\\frac{w-1}{w+1}$ for all $w\\in\\TT\\setminus\\{-1\\}$; then\n$$\\overline{\\kappa(z)}=\\kappa(\\overline{z})\\text{ for all }z\\in\\ie\\RR.$$ \nTherefore, the Cayley transform gives a ``real'' isomorphism from the unitization of the ``real'' $C^*$-algebra $\\Cc_0(\\ie\\RR)$ \nonto $\\Cc(\\TT)$, when $\\Cc(\\TT)$ is endowed with the ``real'' structure \n$(\\tau(h))(w)=\\overline{h(\\overline{w})}$ for all $h\\in\\Cc(\\TT)$ and $w\\in\\TT$. \n\nFor the above reasons it suffices to prove that the action of the ``real'' structure~$\\tau$ on $K_1(\\Cc(\\TT))$ is the identity map. \nTo this end, we recall that, if we denote $u:=\\id_{\\TT}\\in\\Cc(\\TT,\\TT)=\\Uc_1(\\Cc(\\TT))$, then the mapping \n$$\\ZZ\\to K_1(\\Cc(\\TT)),\\quad m\\mapsto m[u]_1$$\nis a group isomorphism, hence for every $ y= [w]_1 \\in K_1(\\Cc(\\TT))$ there is a unique $m \\in \\ZZ$ such that \n$y = m [u]_1$. \nOn the other hand, it is clear that $\\tau(u)=u$ and that the mapping $[w]_1\\mapsto[\\tau(w)]_1$ is a group homomorphism, hence $[\\tau(w)]_1=m[\\tau(u)]_1=m [u]_1=y\\in K_1(\\Cc(\\TT))$.\nThis directly shows that the action of $\\tau$ on $K_1(\\Cc(\\TT))$ is the identity map. \n\\end{proof}\n\n\n\n\\begin{definition}\\label{rdym}\n\\normalfont\nA \\emph{``real'' $C^*$-dynamical system} is a $C^*$-dynamical system $(\\Ac,T,\\alpha)$, where $\\Ac$ is a ``real'' $C^*$-algebra, $T$ is a locally compact group, and $\\alpha\\colon T\\to\\Aut \\Ac$, $t\\mapsto\\alpha_t$, satisfies $\\alpha_t(\\overline{a})=\\overline{\\alpha_t(a)}$ for all $t\\in T$ and $a\\in \\Ac$.
\n \\end{definition}\n\n\\begin{lemma}\\label{rcrossed}\nFor every ``real'' $C^*$-dynamical system $(\\Ac,T,\\alpha)$ \nits corresponding crossed product $\\Ac\\rtimes_\\alpha T$ has a unique ``real'' structure satisfying $\\overline{f}(t):=\\overline{f(t)}$ for all $t\\in T$ and $f \\in\\Cc_c(T,\\Ac)$. \n\\end{lemma}\n\n\\begin{proof}\nUniqueness follows from the fact that $\\Cc_c(T,\\Ac)$ is dense in $\\Ac\\rtimes_\\alpha T$. \n\nTo prove the existence, we first note that \nthe antilinear mapping $f\\mapsto \\overline{f}$ defined as in the statement on $\\Cc_c(T,\\Ac)$ preserves the multiplication and the involution. \nIn fact, we recall that \n$$(f\\ast g)(t)=\\int_T f(r)\\alpha_r(g(r^{-1}t))\\de r\\text{ and }\nf^*(t):=\\Delta(t^{-1})\\alpha_t(f(t^{-1})^*)$$\nhence $\\overline{f\\ast g}=\\overline{f}\\ast \\overline{g}$ and $\\overline{f^*}=\\overline{f}^*$ for all $f,g\\in\\Cc_c(T,\\Ac)$. \n\nIt remains to show that $f\\mapsto \\overline{f}$ is isometric with respect to the $C^*$-norm on $\\Cc_c(T,\\Ac)$. \nTo this end let $(\\pi,U)$ be a covariant representation of $(\\Ac,T,\\alpha)$ on a complex Hilbert space~$\\Hc$, that is, $\\pi(\\alpha_t(a))=U_t \\pi(a)U_t^*\\in\\Bc(\\Hc)$ for all $t\\in T$ and $a\\in \\Ac$. \nFor any fixed antilinear involutive isometry $C\\colon \\Hc\\to\\Hc$ \nwe define $\\overline{\\pi}\\colon \\Ac\\to\\Bc(\\Hc)$, $\\overline{\\pi}(a):=C\\pi(\\overline{a})C$, and $\\overline{U}\\colon T\\to\\Bc(\\Hc)$, $\\overline{U}_t:=CU_tC$. \nThen it is straightforward to check that $(\\overline{\\pi},\\overline{U})$ is again a \ncovariant representation of $(\\Ac,T,\\alpha)$ on the complex Hilbert space~$\\Hc$, and moreover for every $f\\in\\Cc_c(T,\\Ac)$ we have \n\\begin{align*}\n(\\pi\\rtimes U)(\\overline{f})\n&=\\int_T\\pi(\\overline{f}(t))U_t\\de t\n=\\int_T\\pi(\\overline{f(t)})U_t\\de t\n=\\int_TC\\overline{\\pi}(f(t))CU_t\\de t \\\\\n&=C\\Bigl(\\int_T\\overline{\\pi}(f(t))\\overline{U}_t\\de t \\Bigr)C\n=C\\Bigl((\\overline{\\pi}\\rtimes \\overline{U})(f)\\Bigr)C.\n\\end{align*}\nThis shows that for every covariant representation $(\\pi,U)$ there exists a covariant representation $(\\overline{\\pi},\\overline{U})$ with \n$\\Vert (\\pi\\rtimes U)(\\overline{f})\\Vert=\\Vert (\\overline{\\pi}\\rtimes \\overline{U})(f)\\Vert$ and then, by the definition of the $C^*$-norm on $\\Cc_c(T,\\Ac)$, we obtain $\\Vert f\\Vert=\\Vert \\overline{f}\\Vert$ in $\\Ac\\rtimes_\\alpha T$.\nThis finishes the proof. \n\\end{proof}\n\n\\begin{remark}\\label{cross-morphism}\n\\normalfont\nLet $(\\Ac,T,\\alpha)$ and $(\\Bc,T,\\beta)$ be ``real'' $C^*$-dynamical systems and $\\psi\\colon \\Ac\\to\\Bc$ is an equivariant ``real'' morphism, then the corresponding $*$-morphism \n $\\psi\\rtimes\\iota \\colon \\Ac\\rtimes_\\alpha T\\to \\Bc\\rtimes_\\beta T$ \n satisfying $((\\psi\\rtimes \\iota)(f))(t)=\\psi(f(t))$ for all $t\\in T$ and $f\\in\\Cc_c(T,\\Ac)$ is a ``real'' morphism. \n\\end{remark}\n\n\nWe now study the interaction between ``real'' structures and some constructions from \\cite{Ri82}. \n\n\\begin{definition}\\label{WHdef}\n\\normalfont \nLet $(\\Ac,\\RR,\\alpha)$ be a ``real'' $C^*$-dynamical system. \nWe consider the $C^*$-algebra $\\cone \\Ac:=\\Cc_0(\\RR\\cup\\{+\\infty\\},\\Ac)$ \nwith its $*$-morphism $\\ev_{+\\infty}\\colon \\cone \\Ac\\to \\Ac$, $f\\mapsto f(+\\infty)$ and the ideal $\\susp \\Ac:=\\Ker(\\ev_{+\\infty})\\simeq \\Cc_0(\\RR,\\Ac)$. 
\nThen $\\cone \\Ac$ is a ``real'' $C^*$-algebra with its ``real'' structure defined by $\\overline{f}(t):=\\overline{f(t)}$ for all $t\\in\\RR\\cup\\{+\\infty\\}$ and $f\\in\\cone \\Ac$.\n Moreover $\\susp \\Ac$ is a ``real'' ideal, $\\ev_{+\\infty}$ is a ``real'' morphism, and we have the short exact sequence \n$$0\\to\\susp \\Ac\\hookrightarrow \\cone \\Ac\\mathop{\\longrightarrow}\\limits^{\\ev_{+\\infty}} \\Ac\\to 0.$$\nIf we define \n$$\\tau\\otimes \\alpha\\colon \\RR\\to\\Aut(\\cone \\Ac),\\quad ((\\tau\\otimes\\alpha)_r f)(t):=\\alpha_r(f(t-r))$$\nthen $(\\cone \\Ac,\\tau\\otimes\\alpha,\\RR)$ is a ``real'' $C^*$-dynamical system and $\\ev_{+\\infty}$ intertwines the actions of $\\RR$ on $\\cone \\Ac$ and $\\Ac$ via $\\tau\\otimes\\alpha$ and $\\alpha$, respectively. \nIn particular the ``real'' ideal $\\susp \\Ac$ is invariant to $\\tau\\otimes\\alpha$. \nWe further obtain the short exact sequence \n\\begin{equation}\\label{WHdef_eq1}\n0\\to\\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR\\hookrightarrow \n\\cone \\Ac\\rtimes_{\\tau\\otimes\\alpha}\\RR\\mathop{\\longrightarrow}\\limits^{\\psi} \\Ac\\rtimes_\\alpha\\RR\\to 0\n\\end{equation}\ncalled the \\emph{Wiener-Hopf extension} for $\\Ac\\rtimes_\\alpha\\RR$, \nwhere $\\psi:=\\ev_{+\\infty}\\rtimes\\RR$ is a ``real'' morphism. \n(See Remark~\\ref{cross-morphism}.)\n\\end{definition}\n\n\\begin{remark}\\label{SvN}\n\\normalfont\nIn the special case $\\Ac=\\CC$ we get the ``real'' $C^*$-dynamical system $(\\susp,\\tau,\\RR)$ with $\\susp:=\\susp\\CC=\\Cc_0(\\RR)$ and $\\tau\\colon\\RR\\to\\Aut(\\susp)$, \n$(\\tau_rf)(t):=f(t-r)$. \nIf the regular representation of the group $\\RR$ on $L^2(\\RR)$ is again denoted by $\\tau\\colon \\RR\\to \\Bc(L^2(\\RR))$, $(\\tau_r\\xi)(t):=\\xi(t-r)$, \nand we define $M\\colon\\susp\\to\\Bc(L^2(\\RR))$, $M(f)\\xi=f\\xi$ for all $f\\in \\susp$ and $\\xi\\in L^2(\\RR)$, \nthen we obtain a covariant representation $(M,\\tau)$ of the $C^*$-dynamical system $(\\susp,\\tau,\\RR)$ whose integrated representation gives a $*$-isomorphism \n\\begin{equation}\\label{SvN_eq1}\nM\\rtimes\\tau\\colon \\susp\\rtimes_\\tau \\RR\\to\\Kc(L^2(\\RR)).\n\\end{equation} \nSee \\cite[Th. 4.24]{Wi07}. \nIf $h\\in\\Cc_c(\\RR)$ and $f\\in\\susp$, then the function $h(\\cdot)f$ (that is, $r\\mapsto h(r)f$) belongs to $\\Cc_c(\\RR,\\susp)\\subseteq\\susp\\rtimes_\\tau\\RR$ \nand we have \n$$(M\\rtimes\\tau)(h(\\cdot)f)=\\int_{\\RR}M(h(r)f)\\tau(r)\\de r=M(f)\\int_{\\RR}h(r)\\tau(r)\\de r$$\nhence \n$$((M\\rtimes\\tau)(h(\\cdot)f))\\xi=f\\cdot (h\\ast\\xi)\\text{ for }\\xi\\in L^2(\\RR).$$\nThe $*$-isomorphism~\\eqref{SvN_eq1} is a ``real'' isomorphism if we regard $\\Kc(L^2(\\RR))$ as a ``real'' $C^*$-algebra \nwith its ``real'' structure given by $\\overline{T}:=CTC$ for all $T\\in \\Kc(L^2(\\RR))$, where $C\\colon L^2(\\RR)\\to L^2(\\RR)$, $C(\\xi):=\\overline{\\xi}$. \nThus, if $T\\in \\Kc(L^2(\\RR))$ is an integral operator defined by an integral kernel $K_T\\colon \\RR\\times\\RR\\to\\CC$, \nthen $\\overline{T}$ is the integral operator defined by the integral kernel \n $K_{\\overline{T}}\\colon \\RR\\times\\RR\\to\\CC$, where $K_{\\overline{T}}(t,r):=\\overline{K_T(t,r)}$ for all $t,r\\in\\RR$.
\n\\end{remark}\n\n\\begin{proposition}\\label{tsigns}\nFor every ``real'' $C^*$-dynamical system $(\\Ac,\\RR, \\alpha)$ there exist a group isomorphism \n$$\\Theta_0\\colon K_0(\\Ac\\rtimes_\\alpha\\RR)\\to K_1(\\Ac)$$ \nsatisfying $\\Theta_0(\\overline{x})=-\\overline{\\Theta_0(x)}$ for all $x\\in K_0(\\Ac\\rtimes_\\alpha \\RR)$ \nand \na group isomorphism \n$$\\Theta_1\\colon K_1(\\Ac\\rtimes_\\alpha\\RR)\\to K_0(\\Ac)$$ \nsatisfying $\\Theta_1(\\overline{x})=\\overline{\\Theta_1(x)}$ for all $x\\in K_1(\\Ac\\rtimes_\\alpha \\RR)$. \n\\end{proposition}\n\n\\begin{proof}\nAs proved in \\cite{Ri82}, we have $K_0(\\cone \\Ac\\rtimes_{\\tau\\otimes\\alpha}\\RR)=\\{0\\}$ and $K_1(\\cone \\Ac\\rtimes_{\\tau\\otimes\\alpha}\\RR)=\\{0\\}$. \nTherefore, in the six-term exact sequence \\eqref{hexagon} corresponding to the Wiener-Hopf extension~\\eqref{WHdef_eq1}, \nthe vertical arrows \n\\begin{align*}\n\\delta_0 & \\colon K_0(\\Ac\\rtimes_\\alpha\\RR)\\to \nK_1(\\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR), \\\\\n\\delta_1 & \\colon K_1(\\Ac\\rtimes_\\alpha\\RR)\\to \nK_0(\\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR),\n\\end{align*}\nare group isomorphisms. \n\nOn the other hand, \n$(\\susp \\Ac,\\tau\\otimes\\iota,\\RR)$\nis a ``real'' $C^*$-dynamical system and we have the $*$-isomorphism \n\\begin{equation}\n\\label{tsigns_proof_eq1}\n\\gamma\\colon \\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR\\to \n\\susp \\Ac \\rtimes_{\\tau\\otimes\\iota}\\RR\n\\end{equation}\nwhere \n$\\gamma(f)\\in\\Cc_c(\\RR,\\susp \\Ac)\\subseteq \\susp \\Ac \\rtimes_{\\tau \\otimes \\iota}\\RR$ is given by \n$\\bigl((\\gamma(f))(r)\\bigr)(t)=\\alpha_{-t}((f(r))(t))$ for all $r,t\\in\\RR$ \nand every $f\\in\\Cc_c(\\RR,\\susp \\Ac)\\subseteq \\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR$. \n(See \\cite[page 147]{Ri82}.)\nThen $\\gamma$ is a ``real'' isomorphism. \n\nMoreover, by \\cite[Lemma 2.75]{Wi07},\nwe have a $*$-isomorphism \n$$\\eta\\colon \\susp \\Ac \\rtimes_{\\tau \\otimes\\iota}\\RR \\to (\\susp\\rtimes_{\\tau}\\RR)\\otimes \\Ac$$\nsatisfying $\\eta(h(\\cdot)(f\\otimes a))=(h(\\cdot)f)\\otimes a$ for all $h\\in\\Cc_c(\\RR)$, $f\\in\\Cc_c(\\RR)\\subseteq\\susp$, and $a\\in \\Ac$, \nwhere we regard $h(\\cdot)f$ as an element of $\\Cc_c(\\RR,\\susp)$ as in Remark~\\ref{SvN}. \nTaking into account the $*$-isomorphism $M\\rtimes\\tau$ from \\eqref{SvN_eq1}, \nwe further obtain the $*$-isomorphism\n\\begin{equation}\n\\label{tsigns_proof_eq2}\n\\kappa:=((M\\rtimes\\tau)\\otimes\\id_\\Ac)\\circ \\eta \\colon \\susp \\Ac \\rtimes_{\\tau\\otimes\\iota}\\RR \n\\to \\Kc(L^2(\\RR))\\otimes \\Ac\n\\end{equation}\nsatisfying \n$\\kappa(h(\\cdot)(f\\otimes a))=\\bigl((M\\rtimes\\tau)(h(\\cdot)f)\\bigr)\\otimes a$ \nfor all $h\\in\\Cc_c(\\RR)$, $f\\in\\Cc_c(\\RR)\\subseteq\\susp$, and $a\\in \\Ac$. \nIn particular, this shows that $\\Kc(L^2(\\RR))\\otimes \\Ac$ has the structure of a ``real'' $C^*$-algebra satisfying $\\overline{T\\otimes a}=\\overline{T}\\otimes\\overline{a}$ for all $T\\in\\Kc(L^2(\\RR))$ \n(cf. the end of Remark~\\ref{SvN}) and $a\\in \\Ac$.\nThen the above $*$-isomorphism $\\kappa$ is a ``real'' isomorphism.
\n \nUsing \\eqref{tsigns_proof_eq1} and \\eqref{tsigns_proof_eq2}, \nwe now obtain the ``real'' isomorphism \n\\begin{equation*}\n\\kappa\\circ\\gamma\\colon \\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR\\to \n\\Kc(L^2(\\RR))\\otimes \\Ac.\n\\end{equation*}\nThis in turn gives the group isomorphisms\n$$K_j(\\kappa\\circ\\gamma)\\colon K_j(\\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR)\n\\to \nK_j(\\Kc(L^2(\\RR))\\otimes \\Ac)\n$$\nsatisfying $K_j(\\kappa\\circ\\gamma)(\\overline{x})=\\overline{K_j(\\kappa\\circ\\gamma)(x)}$ \nfor all $x\\in K_j(\\susp \\Ac \\rtimes_{\\tau\\otimes\\alpha}\\RR)$ and $j=0,1$. \n\nFinally, we select any $\\xi_0\\in L^2(\\RR)$ with $\\overline{\\xi_0}=\\xi_0$ and $\\Vert \\xi_0\\Vert=1$ and we consider its corresponding rank-one projection $p_0:=(\\cdot\\mid\\xi_0)\\xi_0\\in\\Kc(L^2(\\RR))$, \nso that $\\overline{p_0}=p_0$ in the ``real'' $C^*$-algebra $\\Kc(L^2(\\RR))$. \nThen the mapping \n$$\\mu_{p_0}\\colon \\Ac\\to \\Kc(L^2(\\RR))\\otimes \\Ac,\\quad a\\mapsto p_0\\otimes a$$\nis a ``real'' morphism, \nhence the group morphism \n$$K_j(\\mu_{p_0})\\colon K_j(\\Ac)\\to K_j(\\Kc(L^2(\\RR))\\otimes \\Ac)$$ \nsatisfies $K_j(\\mu_{p_0})(\\overline{y})=\\overline{K_j(\\mu_{p_0})(y)}$ for all $y\\in K_j(\\Ac)$ and $j=0,1$. \nOn the other hand, $K_j(\\mu_{p_0})$ is actually a group isomorphism for $j=0,1$. \n(See \\cite[6.4.1 and 8.2.8]{RLL00}.)\nConsequently we obtain the group isomorphisms \n$$\\Theta_0:=K_1(\\mu_{p_0})^{-1}\\circ K_1(\\kappa\\circ\\gamma) \\circ\\delta_0\n\\colon K_0(\\Ac\\rtimes_\\alpha\\RR)\\to K_1(\\Ac)$$\nand \n$$\\Theta_1:=K_0(\\mu_{p_0})^{-1}\\circ K_0(\\kappa\\circ\\gamma) \\circ\\delta_1\n\\colon K_1(\\Ac\\rtimes_\\alpha\\RR)\\to K_0(\\Ac).$$\nLemma~\\ref{deltas} ensures that $\\Theta_0$ and $\\Theta_1$ have the required properties. \n\\end{proof}\n\n\\begin{corollary}\\label{signs_semid}\nLet $N$ be a locally compact group and $\\alpha\\colon \\RR\\to\\Aut(N)$ be a \ncontinuous action of $\\RR$ by automorphisms of $N$. \nThen there exist a group isomorphism \n$$\\Theta_0\\colon K_0(C^*(N\\rtimes_\\alpha\\RR))\\to K_1(C^*(N))$$ \nsatisfying $\\Theta_0(\\overline{x})=-\\overline{\\Theta_0(x)}$ for all $x\\in K_0(C^*(N\\rtimes_\\alpha\\RR))$ \nand \na group isomorphism \n$$\\Theta_1\\colon K_1(C^*(N\\rtimes_\\alpha\\RR))\\to K_0(C^*(N))$$ \nsatisfying $\\Theta_1(\\overline{x})=\\overline{\\Theta_1(x)}$ for all $x\\in K_1(C^*(N\\rtimes_\\alpha\\RR))$. \n\\end{corollary}\n\n\\begin{proof}\nThere exists a group morphism $\\beta\\colon \\RR\\to \\Aut(C^*(N))$ for which \n$(C^*(N),\\RR,\\beta)$ is a ``real'' $C^*$-dynamical system, and \nthe natural inclusion map \n$$\\Cc_c(\\RR,\\Cc_c(N))\\hookrightarrow\\Cc_c(N\\times \\RR)$$\nextends to a $*$-isomorphism $\\gamma\\colon C^*(N)\\rtimes_\\beta\\RR\\to C^*(N\\rtimes_\\alpha\\RR)$, \nby \\cite[Prop. 3.11]{Wi07}.\nThe above inclusion map intertwines the operation of taking the complex-conjugates of the functions on $\\RR$, $N$, and $N\\times\\RR$, \nhence $\\gamma$ is a ``real'' isomorphism. \nThen $K_j(\\gamma)\\colon K_j(C^*(N)\\rtimes_\\beta\\RR)\\to K_j(C^*(N\\rtimes_\\alpha\\RR))$ is a group isomorphism satisfying \n$K_j(\\gamma)(\\overline{x})=\\overline{K_j(\\gamma)(x)}$ for all $x\\in K_j(C^*(N)\\rtimes_\\beta\\RR)$ and $j=0,1$. \nNow the assertion follows by an application of Proposition~\\ref{tsigns}. \n\\end{proof}\n\n\n\n\\begin{corollary}\n\\label{signs_solvable}\nLet $G$ be a connected, simply connected, solvable Lie group and denote $n:=\\dim G$.
\nThen the following assertions hold: \n\\begin{enumerate}[{\\rm(i)}]\n\t\\item If $n\\in 2\\ZZ$ then $K_1(C^*(G))= \\{0\\}$, $K_0(C^*(G))\\simeq\\ZZ$, and for every $x\\in K_0(C^*(G))$ we have \n\t$$\\overline{x}=\\begin{cases}\n\tx &\\text{ if }n\\in 4\\ZZ,\\\\\n\t-x &\\text{ if }n\\in 4\\ZZ+2.\n\t\\end{cases}$$\n\t\\item If $n\\in 2\\ZZ+1$ then $K_0(C^*(G))=\\{0\\}$, $K_1(C^*(G))\\simeq\\ZZ$, and for every $x\\in K_1(C^*(G))$ we have \n\t$$\\overline{x}=\\begin{cases}\n\tx &\\text{ if }n\\in 4\\ZZ+1,\\\\\n\t-x &\\text{ if } n\\in 4\\ZZ+3.\n\t\\end{cases}$$\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nThe group isomorphisms from the statement are well known. (See \\cite[Sect.~V, Cor.~7]{Co81}.)\nTo prove the assertions on $\\overline{x}$ \nwe recall that, since $G$ is a connected, simply connected, solvable Lie group, \nthere exists a Lie group isomorphism $G\\simeq G_1\\rtimes\\RR$ for a suitable connected, simply connected, solvable Lie group $G_1$.\nNow the conclusion follows by induction, using Corollary~\\ref{signs_semid} and Lemma~\\ref{dim1}. \n\\end{proof}\n\n\n\n\n\\subsection{Continuous fields of $C^*$-algebras}\n\nThe following lemma is implicitly used in the proof of \\cite[Thm.~3.1]{ENN93}. \n\n\\begin{lemma}\\label{lemma-cf1}\nLet $((\\Ac_t)_{t\\in S}, \\Theta)$ be a continuous field of $C^*$-algebras over the locally compact space $S$.\nAssume that for a $t_0\\in S$ there is a projection $p_0\\in \\Pg(\\Ac_{t_0})\\setminus \\{0\\}$. \nThen there is an open neighbourhood $V_0$ of $t_0$ in $S$ and a section \n$\\theta_0\\in \\Theta\\vert_{V_0}$ such that $\\theta_0(t_0) = p_0$ and $ \\theta_0(t) \\in \\Pg(\\Ac_{t})\\setminus \\{0\\}$\nfor every $t\\in V_0$. \n\\end{lemma}\n\n\\begin{proof}\nFor $\\delta \\in (0, 1\\/2)$, define\n$$ U=\\{ z\\in \\CC\\mid |z|< \\delta\\} \\cup \\{z\\in \\CC\\mid |z-1|< \\delta\\}.$$\nThen $\\opn{Sp}'_{\\Ac_{t_0}}(p_0)=\\{0, 1\\}\\subset U$. \n(Here $\\opn{Sp}'_{\\Ac}(a)$ denotes the \nspectrum of $a$ in the not necessarily unital $C^*$-algebra $\\Ac$; see \\cite[1.1.6]{Di64}.)\nThen, by \\cite[20.1.10]{Di64}, there exists $x_1\\in \\Theta$ such that $x_1(t_0)=p_0$.\nDefine $x_2 = \\frac{1}{2} (x_1 +x_1^*)$; \nthen $x_2 \\in \\Theta$, $x_2= x_2^*$ and $x_2(t_0)= p_0$. \nIt follows\nby \\cite[10.3.6]{Di64} that there is an open neighbourhood $V_0$ of $t_0$ in $S$ such that \n$\\opn{Sp}'_{\\Ac_{t}}(x_2(t))\\subset U$ for every $t\\in V_0$. \nHence, if $f\\in \\Cc(\\CC)$ is such that $f(z)=1$ for $z\\in \\{z\\in \\CC\\mid |z-1|< \\delta\\}$ and \n$f(z) =0$ for $z \\in \\{ z\\in \\CC\\mid |z|< \\delta\\}$, then $f(x_2(t)) \\in \\Pg(\\Ac_t)$ for every $t\\in V_0$.\nBy \\cite[10.3.3]{Di64}, $\\theta_0= f(x_2)\\in \\Theta$.\nSince the function $\\Vert \\theta_0(\\cdot)\\Vert$ is continuous on $V_0$ and $\\Vert \\theta_0(t)\\Vert\n\\in \\{0, 1\\}$, it follows that, after replacing $V_0$ with a smaller open neighbourhood of $t_0$ if necessary, $\\Vert \\theta_0(t)\\Vert = \\Vert \\theta_0(t_0)\\Vert=1$ for every \n$t \\in V_0$. \nWe have thus obtained that $\\theta_0(t) \\in \\Pg(\\Ac_t) \\setminus \\{ 0\\}$ for every $ t \\in V_0$, $\\theta_0(t_0)= p_0$, hence $\\theta_0$ satisfies all the conditions in the statement.\n\\end{proof}\n\n\\begin{proposition}\\label{prop-cf2}\nLet $((\\Ac_t)_{t\\in [0, 1]}, \\Theta)$ be a continuous field of $C^*$-algebras, trivial away from $0$ (that is, trivial on $(0, 1]$). \nIf $\\Ac_t$ is stably finite for $t\\in (0, 1]$, then $\\Ac_0$ is stably finite.\n\\end{proposition}\n\n\n\\begin{proof}\nAssume that $\\Ac_0$ is not stably finite.
\nThen by Proposition~\\ref{P1} \\eqref{P1_item1} it follows that there exist $k\\ge 1$ and \n$p_0 \\in \\Pg(M_k(\\Ac_0))\\setminus \\{\\0_k\\}$ such that $[p_0] =0 \\in K_0(\\Ac_0)$. \nSince $(M_k(\\Ac_t))_{t\\in [0, 1]}$ is a continuous field of $C^*$-algebras, trivial away from $0$ \n(see \\cite[Thm.~2.4]{ENN93}), we may assume that $k=1$. \nBy Lemma~\\ref{lemma-cf1} and since $(\\Ac_t)_{t\\in [0, 1]}$ is trivial away from $0$, there is $\\theta_0\\in \\Theta$ such that $\\theta_0(0)=p_0$ and \n$\\theta_0(t)\\in \\Pg(\\Ac_t)\\setminus \\{0 \\}$ for every $t \\in [0, 1]$. \nIt follows by \\cite[Thm.~3.1 and its proof]{ENN93} that there is a group homomorphism \n$\\varphi \\colon K_0(\\Ac_0) \\to K_0(\\Ac_1)$ such that \n$\\varphi([p_0])= [\\theta_0(1)]$. \nSince we have assumed that $[p_0]=0$, we get that for $\\theta_0(1)\\in \\Pg(\\Ac_1)\\setminus \\{0\\}$\n we have $[\\theta_0(1)]=0$, thus by Proposition~\\ref{P1} \\eqref{P1_item1}, $\\Ac_1$ is not stably finite. \n This is a contradiction; thus $\\Ac_0$ must be stably finite.\n\\end{proof}\n\n\n\n\n\n\n\\subsection{On open points in the primitive ideal spectrum}\n\n\n\\begin{proposition}\\label{P3}\n\tLet $\\Ac$ be a separable $C^*$-algebra.\n\tIf $\\pi_0\\colon\\Ac\\to\\Bc(\\Hc_0)$ is a $*$-representation \n\twith its kernel $\\Pc_0:=\\Ker\\pi_0\\subseteq\\Ac$ \n\tand $\\Kc(\\Hc_0)\\subseteq\\pi_0(\\Ac)\\ne\\{0\\}$, \n\tthen the following conditions are equivalent: \n\t\\begin{enumerate}[{\\rm(i)}]\n\t\t\\item\\label{P3_item1} $\\{\\Pc_0\\}$ is an open subset of $\\Prim(\\Ac)$. \n\t\t\\item\\label{P3_item2} There exists a closed two-sided ideal $\\Jc_0\\subseteq\\Ac$ for which \n\t\t$\\pi_0\\vert_{\\Jc_0}\\colon \\Jc_0\\to\\Kc(\\Hc_0)$ is a $*$-isomorphism.\n\t\t\\end{enumerate}\n\t\tIf these conditions are satisfied, then \n\t\t \\begin{equation}\\label{P3_proof_eq7}\n\t\t\\Jc_0=\\bigcap\\limits_{\\Pc\\in\\Prim(\\Ac)\\setminus\\{\\Pc_0\\}}\\Pc\n\t\t\\end{equation}\n\t\tand moreover $\\Jc_0$ is a minimal closed two-sided ideal of $\\Ac$ \n\t\twith \t$\\Pc_0\\cap\\Jc_0=\\{0\\}$. \n\\end{proposition}\n\n\\begin{proof}\nThe hypothesis $\\Kc(\\Hc_0)\\subseteq\\pi_0(\\Ac)\\ne\\{0\\}$ implies that the $*$-representation $\\pi_0$ is irreducible and $\\Hc_0\\ne\\{0\\}$. \nThen, since $\\Ac$ is separable, the Hilbert space $\\Hc_0$ is separable, too.\nWe will show that both conditions in the statement are equivalent to the following: \n\\begin{enumerate}[{\\rm(i)}]\n \\setcounter{enumi}{2}\n\t\\item\\label{P3_item3} There exists a closed two-sided ideal $\\Jc_0\\subseteq\\Ac$ such that \n\t\\begin{equation}\\label{P3-eq1}\n\t\\Prim(\\Ac)=\\{\\Pc_0\\}\\sqcup\\{\\Pc\\in\\Prim(\\Ac)\\mid \\Jc_0\\subseteq\\Pc\\}.\n\t\\end{equation}\n\\end{enumerate}\n\t\n\\eqref{P3_item1}$\\iff$\\eqref{P3_item3}: \nThis and \\eqref{P3_proof_eq7} follow by the definition of the topology of $\\Prim(\\Ac)$. \n(See \\cite[3.1.1]{Di64}.) \n\n\\eqref{P3_item3}$\\implies$\\eqref{P3_item2}: \nThe hypothesis \\eqref{P3-eq1} implies $\\Jc_0\\not\\subseteq\\Pc_0=\\Ker\\pi_0$, \nthat is, $\\pi_0\\vert_{\\Jc_0}\\ne0$. \nMoreover, for every irreducible $*$-representation $\\pi\\colon\\Ac\\to\\Bc(\\Hc)$ we have \n$$\\pi\\vert_{\\Jc_0}\\ne0\\iff\\Jc_0\\not\\subseteq\\Ker\\pi\n\\mathop{\\iff}\\limits^{\\eqref{P3-eq1}}\\Ker\\pi=\\Pc_0\n\\iff[\\pi]=[\\pi_0]\\in\\widehat{\\Ac}$$\nwhere the last equivalence follows by \\cite[Cor. 4.1.10]{Di64} \nsince $\\Kc(\\Hc_0)\\subseteq\\pi_0(\\Ac)$. \nThen, by \\cite[Prop.
2.10.4]{Di64}, $\\widehat{\\Jc_0}$ consists of only one point, namely $\\widehat{\\Jc_0}=\\{[\\pi_0\\vert_{\\Jc_0}]\\}$. \nSince $\\Ac$ is separable it follows that $\\Jc_0$ is separable, too. \nThen $\\Jc_0$ is $*$-isomorphic to the $C^*$-algebra of all compact operators on a separable complex Hilbert space by \\cite[4.7.3]{Di64}. \nNow, since $\\pi_0\\vert_{\\Jc_0}\\ne0$ and $\\pi_0$ is an irreducible representation of~$\\Ac$, the restriction $\\pi_0\\vert_{\\Jc_0}\\colon \\Jc_0\\to\\Bc(\\Hc_0)$ is an irreducible representation, and it follows that \n$\\pi_0\\vert_{\\Jc_0}\\colon \\Jc_0\\to\\Kc(\\Hc_0)$ is a $*$-isomorphism. \n(See \\cite[Cor. 4.1.5]{Di64}.) \nHence \\eqref{P3_item2} holds true.\n\n\\eqref{P3_item2}$\\implies$\\eqref{P3_item3}: \nThe hypothesis \\eqref{P3_item2} implies $\\widehat{\\Jc_0}=\\{[\\pi_0\\vert_{\\Jc_0}]\\}$. \nThen, \nfor every irreducible $*$-representation $\\pi\\colon\\Ac\\to\\Bc(\\Hc)$, we have either $\\pi\\vert_{\\Jc_0}=0$ or $[\\pi\\vert_{\\Jc_0}]=[\\pi_0\\vert_{\\Jc_0}]\\in\\widehat{\\Jc_0}$. \nThat is, either $\\Jc_0\\subseteq\\Ker\\pi$ or $[\\pi]\n=[\\pi_0]\\in\\widehat{\\Ac}$ by \\cite[Prop. 2.10.4]{Di64}. \nThus\n$\\Prim(\\Ac)=\\{\\Ker\\pi_0\\}\\sqcup\\{\\Pc\\in\\Prim(\\Ac)\\mid \\Jc_0\\subseteq\\Pc\\}$, \nhence~\\eqref{P3_item3} holds true. \n\nFinally, if \\eqref{P3_item1}--\\eqref{P3_item3} hold true, then $\\Jc_0$ is $*$-isomorphic to $\\Kc(\\Hc_0)$, hence $\\Jc_0$ is a simple $C^*$-algebra, and then it is also a minimal ideal of $\\Ac$ with $\\Pc_0\\cap\\Jc_0=\\{0\\}$.\n\\end{proof}\n\n\\begin{remark}\\label{R5}\n\\normalfont\nIn Proposition~\\ref{P3} we have the short exact sequence \n$$0\\to\\Pc_0\\hookrightarrow\\pi_0^{-1}(\\Kc(\\Hc_0))\\mathop{\\longrightarrow}\\limits^{\\pi_0}\\Kc(\\Hc_0)\\to0$$\nand this extension is trivial in the sense that \n$\\pi_0\\vert_{\\pi_0^{-1}(\\Kc(\\Hc_0))}$ has a right inverse, namely $(\\pi_0\\vert_{\\Jc_0})^{-1}\\colon\\Kc(\\Hc_0)\\to\\Jc_0$ given by Proposition~\\ref{P3}\\eqref{P3_item2}. \nThis also shows the direct sum decomposition of ideals \n$$\\pi_0^{-1}(\\Kc(\\Hc_0))=\\Pc_0\\dotplus\\Jc_0.$$\n\\end{remark}\n\n\n\\begin{remark}\\label{P3_group}\n\\normalfont \nThe hypothesis $\\Kc(\\Hc_0)\\subseteq\\pi_0(\\Ac)$ in Proposition~\\ref{P3} is superfluous if $\\Ac=C^*(G)$ for an exponential Lie group~$G$. \nIn fact, let $\\pi_0\\colon G\\to\\Bc(\\Hc_0)$ be an irreducible unitary representation \nwith its corresponding irreducible $*$-representation $\\pi_0\\colon \\Ac\\to\\Bc(\\Hc_0)$ with $\\Pc_0:=\\Ker\\pi_0\\subseteq\\Ac$. \nSince $G$ is type~I, we have $\\Kc(\\Hc_0)\\subseteq\\pi_0(\\Ac)$, \nand on the other hand $\\{\\Pc_0\\}$ is an open subset of $\\Prim(\\Ac)$ if and only if the unitary representation~$\\pi_0$ is square integrable, \nby \\cite[Prop. 2.3 and 2.14]{Ros78} and \\cite[Cor. 2]{Gr80}. \nMoreover the irreducible unitary representation~$\\pi_0$ is square integrable if and only if its corresponding coadjoint orbit is open in~$\\gg^*$ by \n\\cite[Thm.~3.5]{Ros78}.\nThis provides an alternative argument for the fact that the Kirillov-Bernat correspondence gives a bijection between the open points of $\\Prim(G)$ and the open coadjoint orbits of~$G$, without using the more difficult and deep fact that the Kirillov-Bernat map is actually a homeomorphism.
\n\\end{remark}\n\n\n\n\n\n\n\n\\begin{corollary}\\label{C4}\nLet $\\Ac$ be a $C^*$-algebra and, for $k=1,\\dots,n$, let $\\pi_k\\colon\\Ac\\to\\Bc(\\Hc_k)$ be a $*$-representation satisfying the hypotheses of Proposition~\\ref{P3}, with its corresponding ideal $\\Jc_k\\subseteq\\Ac$ for which \n$\\pi_k\\vert_{\\Jc_k}\\colon \\Jc_k\\to\\Kc(\\Hc_k)$ is a $*$-isomorphism, \nand $\\Pc_k:=\\Ker\\pi_k$. \nThen the following assertions hold: \n\\begin{enumerate}[{\\rm(i)}]\n\\item\\label{C4_item1} \nWe have $\\Jc_{k_1}=\\Jc_{k_2}$ if and only if $\\Pc_{k_1}=\\Pc_{k_2}$. \n\\item \\label{C4_item2} \nIf we assume $\\Pc_{k_1}\\ne\\Pc_{k_2}$ for $k_1\\ne k_2$, then\n\\begin{equation}\\label{C4_item2_eq1} \n\t\\text{$\\Jc:=\\Jc_1+\\dots+\\Jc_n$ is a direct sum of ideals of $\\Ac$}\n\\end{equation}\nand\n\\begin{equation}\\label{C4_item2_eq2}\n\t\\Prim(\\Ac)=\\{\\Pc_1\\}\\sqcup\\cdots\\sqcup\\{\\Pc_n\\}\\sqcup \\{\\Pc\\in\\Prim(\\Ac)\\mid\\Jc\\subseteq\\Pc\\}.\n\\end{equation}\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\n\\eqref{C4_item1} \nWe have $\\Prim(\\Ac)\\setminus\\{\\Pc_k\\}=\\{\\Pc\\in\\Prim(\\Ac)\\mid \\Jc_k\\subseteq\\Pc\\} $ by \\eqref{P3-eq1}, \nhence $\\Jc_{k_1}=\\Jc_{k_2}$ implies $\\Pc_{k_1}=\\Pc_{k_2}$. \nOn the other hand, by \\eqref{P3_proof_eq7}, \n$\\Pc_{k_1}=\\Pc_{k_2}$ implies $\\Jc_{k_1}=\\Jc_{k_2}$. \n\n\\eqref{C4_item2} \nIf $k_1\\ne k_2$, then $\\Pc_{k_1}\\ne\\Pc_{k_2}$ by hypothesis, hence $\\Jc_{k_1}\\ne\\Jc_{k_2}$ by \\eqref{C4_item1}. \nSince both $\\Jc_{k_1}$ and $\\Jc_{k_2}$ are distinct minimal ideals of $\\Ac$ \nand $\\Jc_{k_1}\\Jc_{k_2}\\subseteq\\Jc_{k_1}\\cap \\Jc_{k_2}$, \nwe obtain $\\Jc_{k_1}\\Jc_{k_2}=\\Jc_{k_1}\\cap \\Jc_{k_2}=\\{0\\}$, and then \n\\eqref{C4_item2_eq1} is straightforward.\n\n\nWe now prove \\eqref{C4_item2_eq2}. \nIn fact, by \\eqref{P3-eq1}, \nwe have \n$\\Prim(\\Ac)\\setminus\\{\\Pc_k\\}=\\{\\Pc\\in\\Prim(\\Ac)\\mid \\Jc_k\\subseteq\\Pc\\}$ for $k=1,\\dots,n$, hence \n\\allowdisplaybreaks\n\\begin{align*}\n\\Prim(\\Ac)\\setminus\\{\\Pc_1,\\dots,\\Pc_n\\}\n&=\\bigcap_{k=1}^n\\bigl(\\Prim(\\Ac)\\setminus\\{\\Pc_k\\}\\bigr) \\\\\n&=\\bigcap_{k=1}^n \\{\\Pc\\in\\Prim(\\Ac)\\mid \\Jc_k\\subseteq\\Pc\\} \\\\\n&=\\{\\Pc\\in\\Prim(\\Ac)\\mid \\Jc_1+\\cdots+\\Jc_n\\subseteq\\Pc\\}.\n\\end{align*}\nThis finishes the proof. \n\\end{proof}\n\nThe next result is needed in the proof of Corollary~\\ref{cf-cor8}.\n \n\\begin{proposition}\\label{proj}\n\tAssume the setting of Proposition~\\ref{P3} \n\tand, additionally, that \n\t\\begin{enumerate}[{\\rm(i)}]\n\t \\item\\label{proj_item1} $\\Ac$ is a ``real'' $C^*$-algebra; \n\t \\item\\label{proj_item2} $\\Pc_0\\cap\\overline{\\Pc_0}=\\{0\\}$; \n\t \\item\\label{proj_item3} if $p\\in\\Jc_0$ is a minimal projection, then \n\t $K_0(\\Ac)=\\{n[p]_0\\mid n\\in\\ZZ\\}$; \n\t \\item\\label{proj_item4} $\\Pg(\\Ac)\\setminus(\\Jc_0+\\overline{\\Jc_0})\\ne\\emptyset$. \n\t\\end{enumerate}\nThen $\\Ac$ is not stably finite. \n\\end{proposition}\n\n\\begin{proof}\nWe define\n$\\overline{\\pi_0}\\colon\\Ac\\to\\Bc(\\Hc_0)$, $\\overline{\\pi_0}(a):=C\\pi_0(\\overline{a})C$, \nfor a fixed antilinear involutive isometry $C\\colon \\Hc_0\\to\\Hc_0$. \nIt is easily seen that $\\overline{\\pi_0}$ is an irreducible $*$-representation \nwith $\\Ker\\overline{\\pi_0}=\\overline{\\Pc_0}$ and that $\\{\\overline{\\Pc_0}\\}$ is an open subset of $\\Prim(\\Ac)$.
\nMoreover, using~\\eqref{P3_proof_eq7}, one can show that \n$\\Jc_{\\overline{\\pi_0}}=\\overline{\\Jc_0}$, where $\\Jc_{\\overline{\\pi_0}}$ is the minimal ideal of $\\Ac$ that is given by the application of Proposition~\\ref{P3} for the representation $\\overline{\\pi_0}$. \nFurthermore, by Remark~\\ref{R5}, we then obtain \n\\begin{equation}\\label{proj_proof_eq1}\n\\overline{\\pi_0}^{-1}(\\Kc(\\Hc_0))\n=\\Jc_{\\overline{\\pi_0}}\\dotplus \\Ker\\overline{\\pi_0}\n=\\overline{\\Jc_0}\\dotplus \\overline{\\Ker\\pi_0} \n=\\overline{\\pi_0^{-1}(\\Kc(\\Hc_0))}.\n\\end{equation}\nSince $\\Pc_0\\cap\\overline{\\Pc_0}=\\{0\\}$ by hypothesis, we may use Corollary~\\ref{C4} for the representations $\\pi_0$ and $\\overline{\\pi_0}$, and we thus obtain $\\Jc_0\\cdot\\overline{\\Jc_0}=\\{0\\}$. \n\nNow let us denote $\\Jc:=\\Jc_0\\dotplus \\overline{\\Jc_0}$ \nand select any $q\\in \\Pg(\\Ac)\\setminus\\Jc$. \nWe show that \n\\begin{equation}\\label{proj_proof_eq2}\n\\dim(\\pi_0(q)\\Hc_0)=\\infty\\text{ and }\\dim(\\overline{\\pi_0}(q)\\Hc_0)=\\infty.\n\\end{equation}\nIn fact, since $\\pi_0(q),\\overline{\\pi_0}(q)\\in\\Bc(\\Hc_0)$ are projections, \nit suffices to show that $\\pi_0(q)\\not\\in \\Kc(\\Hc_0)$, and this will imply $\\overline{\\pi_0}(q)\\not\\in\\Kc(\\Hc_0)$ by~\\eqref{proj_proof_eq1}. \nWe argue by contradiction: \nAssuming $\\pi_0(q)\\in \\Kc(\\Hc_0)$, we obtain $\\overline{\\pi_0}(q)\\in\\Kc(\\Hc_0)$ by~\\eqref{proj_proof_eq1} and then, by Proposition~\\ref{P3}, there exist uniquely determined projections $p_0\\in\\Pg(\\Jc_0)$ and $r_0\\in\\Pg(\\overline{\\Jc_0})$ with $\\pi_0(p_0)=\\pi_0(q)$ and $\\overline{\\pi_0}(r_0)=\\overline{\\pi_0}(q)$. \nSince $\\Pc_0\\ne\\overline{\\Pc_0}$ by the hypothesis~\\eqref{proj_item2}, \nwe have \n$\\Jc_0\\subseteq\\overline{\\Pc_0}$ and $\\overline{\\Jc_0}\\subseteq\\Pc_0$ by \\eqref{P3-eq1} in Proposition~\\ref{P3}, and we then obtain \n$(\\pi_0\\oplus\\overline{\\pi_0})(p_0+r_0)=\\pi_0(q)\\oplus \\overline{\\pi_0}(q)\n=(\\pi_0\\oplus\\overline{\\pi_0})(q)$. \nThe hypothesis $\\Pc_0\\cap\\overline{\\Pc_0}=\\{0\\}$ then implies $q=p_0+r_0\\in\\Jc_0\\dotplus \\overline{\\Jc_0}=\\Jc$, which contradicts the way $q$ was selected. \nThus \\eqref{proj_proof_eq2} is proved. \n\nNow, if $p\\in\\Jc_0$ is a minimal projection, it follows by the hypothesis that there exists $n\\in\\ZZ$ with $[q]_0=n[p]_0\\in K_0(\\Ac)$. \nThere are three possible cases: \n\nCase 1: $n=0$. \nThen $[q]_0=0\\in K_0(\\Ac)$, and Proposition~\\ref{P1} shows that $\\Ac$ is not stably finite. \n\nCase 2: $n<0$. \nThen, denoting $k:=\\vert n\\vert$, we have \n$$0=[q]_0+k[p]_0=[q\\oplus\\underbrace{p\\oplus\\cdots\\oplus p}_{k\\text{ times}}]_0$$\nhence, since $q\\oplus p\\oplus\\cdots\\oplus p\\in M_{k+1}(\\Ac)\\setminus\\{0\\}$, \n Proposition~\\ref{P1} again shows that $\\Ac$ is not stably finite. \n \n Case 3: $n>0$. \n In this case, by \\eqref{proj_proof_eq2}, there exists $\\widetilde{p}_1\\in\\Pg(\\Kc(\\Hc_0))$ with $\\widetilde{p}_1\\le\\pi_0(q)$ and $\\dim(\\widetilde{p}_1(\\Hc_0))=n$. \n By Proposition~\\ref{P3}, there exists a unique $p_1\\in\\Pg(\\Jc_0)$ with $\\pi_0(p_1)=\\widetilde{p}_1$.
\n We already noted above that $\\Jc_0\\subseteq\\overline{\\Pc_0}=\\Ker\\overline{\\pi_0}$, \n hence $\\overline{\\pi_0}(p_1)=0$, and then \n $$(\\pi_0\\oplus\\overline{\\pi_0})(p_1)=\\pi_0(p_1)\\oplus 0=\\widetilde{p}_1\\oplus 0\\le\\pi_0(q)\\oplus\\overline{\\pi_0}(q)=(\\pi_0\\oplus\\overline{\\pi_0})(q).$$\n As above, the hypothesis $\\Pc_0\\cap\\overline{\\Pc_0}=\\{0\\}$ then implies \n $p_1\\le q$, hence $q-p_1\\in\\Pg(\\Ac)$ and $p_1(q-p_1)=0$. \n Now, by \\cite[3.1.7(iv)]{RLL00}, we obtain \n \\begin{equation}\\label{proj_proof_eq3}\n [q]_0=[p_1]_0+[q-p_1]_0\\in K_0(\\Ac)\\subseteq K_0(\\widetilde{\\Ac}).\n \\end{equation}\n On the other hand, since $\\dim(\\widetilde{p}_1(\\Hc_0))=n$ and $\\pi_0\\vert_{\\Jc_0}\\colon\\Jc_0\\to\\Kc(\\Hc_0)$ is a $*$-iso\\-mor\\-phism, we obtain $[p_1]_0=n[p]_0$ in $K_0(\\Jc_0)$. \n Denoting by $\\varphi\\colon \\Jc_0\\to\\Ac$ the inclusion map, \n it then follows that \n $K_0(\\varphi)([p_1]_0)=nK_0(\\varphi)([p]_0)$ in $K_0(\\Ac)$, \n that is, $[p_1]_0=n[p]_0$ in $K_0(\\Ac)$. \n Then, using \\eqref{proj_proof_eq3} and the way $n$ was chosen, we obtain \n $[q-p_1]_0=0\\in K_0(\\Ac)$. \n On the other hand, $q-p_1\\ne 0$ since $p_1\\in\\Jc_0\\subseteq\\Jc$, while $q\\in\\Pg(\\Ac)\\setminus\\Jc$. \n We may thus apply Proposition~\\ref{P1} to obtain that $\\Ac$ is not stably finite. \n\\end{proof}\n\n\n\n\n\n\n\n\\section{$C^*$-algebras of exponential Lie groups with open coadjoint orbits}\\label{section4k}\n\nThis section contains some of our results on the relation between the quasi-compact open subsets in the primitive ideal space of the $C^*$-algebra of a solvable Lie group and the finite approximation properties of that $C^*$-algebra (Corollaries \\ref{solv-4n+2}~and~\\ref{cf-cor8}). \nThese results mostly concern the exponential Lie groups that admit open coadjoint orbits, \nwhich we call exact symplectic Lie groups since they admit a left-invariant exact symplectic form. (They are elsewhere called Frobenius Lie groups.) \n\n\n\\subsection{Solvable Lie groups of dimension $\\not \\in 4 \\ZZ$}\n\\begin{theorem}\\label{4n+2}\nLet $G$ be a simply connected solvable Lie group with $\\dim G \\not \\in 4 \\ZZ$. \n Then $C^*(G)$ is stably finite if and only if it is stably projectionless.\n \\end{theorem}\n\n\n\\begin{proof}\nThe fact that if $C^*(G)$ is stably projectionless then it is stably finite follows from Corollary~\\ref{rem-1.5} \n\\eqref{rem-1.5_i}. \n\nFor the reverse implication assume first that $\\dim G$ is odd. \nRecall that, for a simply connected solvable Lie group $G$, Connes' Thom isomorphism implies that $K_i(C^*(G))\\simeq K_i(C^*(\\RR^{\\dim G}))$, $i =0, 1$. \nHence, if $\\dim G$ is odd, then $K_0(C^*(G))=0$.\n(See \\cite[Sect. V, Cor. 7]{Co81}.)\nThen the statement\nis a direct consequence of Corollary~\\ref{rem-1.5}.\n\nIt remains to analyse the case when $\\dim G\\in 4 \\ZZ +2$. \nWe prove that if $C^*(G)$ is not stably projectionless then it is not stably finite. \nLet $0\\ne p\\in \\Pg_k(C^*(G))$. \nThen \n$ [p]_0+[\\overline{p}]_0= [p]_0+\\overline{[p]_0} = 0$\nby Corollary~\\ref{signs_solvable}.\nIf $p=\\overline{p}$, then $2[p]_0=0$, hence $[p]_0=0$ since $K_0(C^*(G))\\simeq\\ZZ$ has no torsion, and therefore $C^*(G)$ is not stably finite, \nby Proposition~\\ref{P1}.\nIf $p \\ne \\overline{p}$, define $ q:= p\\oplus \\overline{p}=\\opn{diag}(p, \\overline{p}) \\in \\Pg_{2k} (C^*(G))\\setminus\\{0\\}$.\nThen $[q]_0= 0$, hence, again by Proposition~\\ref{P1}, $C^*(G)$ is not stably finite.
\n\\end{proof}\n\n\\begin{corollary}\\label{solv-4n+2}\nLet $G$ be an exponential solvable Lie group such that $\\dim G \\in 4\\ZZ+2$ and $G$ has open coadjoint orbits.\nThen $C^*(G)$ is not stably finite. \n\\end{corollary}\n\n\n\n\n\\begin{proof}\nFor an exponential Lie group $G$, an open coadjoint orbit corresponds to an open point $[\\pi]\\in \\widehat{G}\\simeq\\Prim(G)$. (See Remark~\\ref{P3_group}.) Moreover, since $G$ is separable and type I, $\\pi$ is square integrable and \n$\\pi(C^*(G))$ contains the compact operators, by \\cite[Cor. 1 and 2]{Gr80} and \\cite[Prop.~2.3]{Ros78}.\nThen by Proposition~\\ref{P3} there is a minimal ideal $\\Jc_0\\subseteq C^*(G)$ such that $\\Jc_0\\simeq \\Kc(\\Hc_0)$ for a Hilbert space $\\Hc_0$. \nHence there exists $p \\in \\Jc_0$, $0\\ne p= p^*= p^2$. \nThe corollary now follows from Theorem~\\ref{4n+2}.\n\\end{proof}\n\n\n\n\\subsection{Groups of the form $N \\rtimes \\RR$}\n\nWe are going to see that the above result of Theorem~\\ref{4n+2} fails to be true for groups whose dimension is of the form \n$4 k$, $k\\in \\NN$. \nTo show this, to give a simple necessary condition for stable finiteness, and to study a little further the case \nof $\\dim G\\in 4 \\ZZ +2$, we restrict ourselves to the groups of the form $G= N \\rtimes \\RR$, where $N$ is a connected simply connected nilpotent Lie group.\n \nBut first we consider the case of groups $G=N\\rtimes \\RR$ where $N$ is abelian, \nthat is, the case of \nthe generalized $ax+b$-groups, where there is a quite clean relation\n between quasi-compact open sets and approximation properties. \n\n\n\\subsubsection{The case of the generalized $ax+b$ groups}\nMost of the following example is already known (see \\cite[Ex.~4.8]{BB20}).\n\n\\begin{example}\\label{ax+b} \\normalfont (\\textit{Generalized $ax+b$-groups})\nLet $\\Vc$ be a finite-dimensional real vector space, $D\\in\\End(\\Vc)$, \nand $G_D:=\\Vc\\rtimes_{\\alpha_D}\\RR$ the corresponding generalized $ax+b$-group. \nThen we claim that the following assertions are equivalent: \n\\begin{enumerate}[{\\rm(i)}]\n\\item\\label{ax+b_item1} Either $\\mathrm{Re}\\, z>0$ for every $z\\in\\sigma(D)$ or $\\mathrm{Re}\\, z<0$ for every $z\\in\\sigma(D)$. \n\\item\\label{ax+b_item2} The $C^*$-algebra $C^*(G_D)$ is not quasidiagonal. \n\\item\\label{ax+b_item3} The $C^*$-algebra $C^*(G_D)$ is not AF-embeddable. \n\\item\\label{ax+b_item4} There exists a nonempty quasi-compact open subset of $\\widehat{G_D}$. \n\\item\\label{ax+b_item5} There exists a nonempty quasi-compact open subset of $\\Prim(G_D)$. \n\\item\\label{ax+b_item6} The set $\\widehat{G_D}\\setminus\\Hom(G_D,\\TT)$ is a nonempty quasi-compact open subset of $\\widehat{G_D}$.\n\\item\\label{ax+b_item8} There exist nonzero self-adjoint idempotent elements of $C^*(G_D)$. \n\\item\\label{ax+b_item1.5} The $C^*$-algebra $C^*(G_D)$ is not stably finite.\n\\end{enumerate}\n \n\n\\begin{proof}[Proof of claim]\nAssertions \\eqref{ax+b_item1} -- \\eqref{ax+b_item8} are equivalent by \\cite[Ex.~4.8]{BB20}. \nThe implication \\eqref{ax+b_item1.5} $\\implies$ \\eqref{ax+b_item2} is clear.\n\nIt remains to prove \\eqref{ax+b_item1} $\\implies$ \\eqref{ax+b_item1.5}.\nAssume that the condition in \\eqref{ax+b_item1} holds.
\nIf we define\n$$\\alpha^*\\colon \\Cc_0(\\Vc^*) \\times \\RR \\to \\Cc_0(\\Vc^*), \\quad \\alpha^*(f, t) = f\\circ \\ee^{t D^*}, $$ \n then $C^*(G_D) \\simeq \\Cc_0(\\Vc^*) \\rtimes_{\\alpha^*} \\RR$.\n\nLet $\\overline{\\Vc^*}$ be the one-point compactification of $\\Vc^*$ and extend $\\alpha^*$ to \n$\\overline{\\Vc^*}$ by $\\alpha^*_t(\\infty)=\\infty$ for every $t\\in \\RR$.\nThen it follows from \\cite[Prop.~2.14]{BB18} and \\cite[Prop.~4.6]{Pi99} \nthat the $C^*$-algebra $\\Cc(\\overline{\\Vc^*})\\rtimes \\RR$ is not stably finite whenever \n\\eqref{ax+b_item1} is true.\n\nWe now use the same argument as in \\cite[Lemma~2.10]{BB18}:\nThe split exact sequence $0\\to \\Cc_0(\\Vc^*) \\to \\Cc(\\overline{\\Vc^*})\\to\\CC\\1\\to 0$ \nleads\n to the split exact sequence \n\t$$0\\to \\Cc_0(\\Vc^*) \\rtimes \\RR \\to \\Cc(\\overline{\\Vc^*})\\rtimes \\RR \\to C^*(\\RR)\\to 0.$$ \n\tThen if we assume that $\\Cc_0(\\Vc^*) \\rtimes \\RR$ is stably finite, since\n\tthe $C^*$-algebra $C^*(\\RR)$ is stably finite, \n\tit follows by \\cite[Lemma~1.5]{Sp88} that $\\Cc(\\overline{\\Vc^*})\\rtimes \\RR$ is stably finite.\n\tThis is a contradiction, hence $\\Cc_0(\\Vc^*) \\rtimes \\RR$ is not stably finite.\n\t\\end{proof}\n\\end{example}\n\n\n\n\n\n\\subsubsection{Continuous fields of nilpotent Lie groups} \n\n\nLet $(\\ng, [\\cdot, \\cdot])$ be a nilpotent Lie algebra and let $\\varphi\\colon (0, 1]\\to \\GL(\\ng)$, $h\\mapsto\\varphi_h$ be a continuous map. \nAssume the following conditions hold:\n\\begin{enumerate}\n\\item $\\varphi_1=\\id$; \n\\item The limit $[x, y]_0:=\\lim_{h\\to 0 }\\varphi_h^{-1}([\\varphi_h (x), \\varphi_h(y)])$ exists for every $x, y\\in \\ng$. \n\\end{enumerate}\n\n\nFor every $h\\in(0, 1]$ we then define the bilinear map $\\ng \\times \\ng \\to \\ng$,\n\\begin{equation}\\label{defm}\n[x, y]_h =\\varphi_h^{-1}([\\varphi_h(x), \\varphi_h(y)]).\n\\end{equation}\n\n\n\\begin{remark}\\label{defm-rem}\n\\normalfont\n \\begin{enumerate}[\\rm (i)]\n \\item\\label{defm-rem_i} For all $h\\in [0, 1]$, $[\\cdot, \\cdot]_h$ is a Lie bracket on the vector space underlying $\\ng$, and we denote by $\\ast_h$ the corresponding Baker-Campbell-Hausdorff multiplication, and the corresponding connected and simply connected Lie group by\n$N_h= (\\ng, \\ast_h)$.\n \n \\item \\label{defm-rem_ii} \nFor every $h\\in (0, 1]$, $\\varphi_h \\colon (\\ng, [\\cdot, \\cdot]_h) \\to (\\ng, [\\cdot, \\cdot])$ is a Lie algebra isomorphism. \n\n\\item\\label{defm-rem_iii} For every $h\\in (0, 1]$ we have\n$$ \\ad_h x= \\varphi_h^{-1} \\circ \\ad (\\varphi_h(x)) \\circ \\varphi_h$$\nwhere \n$$ \\begin{aligned} \n\\ad\\, x\\colon \\ng \\to \\ng, & (\\ad\\, x)(y)= [x, y] =[x, y]_1, \\\\\n\\ad_h x\\colon \\ng \\to \\ng, & (\\ad_h x)(y)= [x, y]_h .\n\\end{aligned}\n$$\n\\end{enumerate}\n\\end{remark}\n\nWe consider the map \n\\begin{equation}\\label{defm-mult}\nm \\colon [0, 1] \\times \\ng \\times \\ng \\to \\ng, \\; \\; m(h, x, y) = x\\ast_h y.\n\\end{equation}\nThen $m$ is continuous, by the assumptions above. \nConsider the groupoid with equal source and target maps\n$$ \n\\begin{gathered} \n\\Tc:=[0, 1] \\times \\ng \\stackrel{p}{\\rightarrow} S:=[0, 1], \\; p(h, x)=h, \\\\\n(h, x) \\cdot (h, y) := (h, m_h(x, y)) = (h, x \\ast_h y) \\; \\text{for all } \\;\n(h, x), (h, y) \\in \\Tc_h:=p^{-1}(h).\n\\end{gathered}\n $$\nHence $p$ is a group bundle (depending on the map $\\varphi$) with Haar system given by the Lebesgue measure.
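\nFor instance, for the map $\\varphi_h(x)=hx$, which is used in the proof of Theorem~\\ref{prop-cf4} below, formula \\eqref{defm} gives \n$$[x, y]_h=\\varphi_h^{-1}([\\varphi_h(x), \\varphi_h(y)])=\\frac{1}{h}[hx, hy]=h[x, y] \\quad \\text{for all } x, y\\in\\ng \\text{ and } h\\in(0, 1],$$\nso that the fibres of the corresponding group bundle interpolate between the simply connected nilpotent Lie group $N_1$ with Lie algebra $\\ng$ and the abelian group $N_0=(\\ng, +)$.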
\nIf follows by \\cite[Lemma~3.3]{BB18} that $C^*(\\Tc)$ is a $\\Cc(S)$ -algebra that \nis $\\Cc(S)$-linearly $*$-isomorphic \nto the algebra of sections of an upper semi-continuous $C^*$-bundle over~$ S$ \nwhose fibre over any $s\\in S$ is $C^*(\\Tc_h)\\simeq C^*(N_h)$.\n\n\n\n\n\n\\begin{theorem}\\label{prop-cf4} \nFor $\\ng$ be a nilpotent Lie algebra and\n $D\\in \\Der(\\ng)$, define\nthe semi-direct product\n$G :=N \\rtimes_{\\alpha_D} \\RR$.\nIf\nthere exists $\\epsilon \\in \\{-1, 1\\}$ such that \n$\\epsilon \\Re\\, z> 0$ for all $z\\in \\sigma(D)$, \nthen \n$C^*(G)$ is not stably finite.\n\\end{theorem}\n\n\n\\begin{proof}\nLet $\\Vc$ be the underlying real vector space of the Lie algebra $\\ng$, and \ndenote \n$G_0:= \\Vc\\rtimes_{\\alpha_D}\\RR$.\nIf we proved that\n \\begin{equation}\\label{prop-cf4-1}\n C^*(G) \\; \\text{stably finite } \\; \\Rightarrow \\; C^*(G_0) \\; \\text{stably finite,}\n\\end{equation}\nthen the statement follows from Example~\\ref{ax+b}.\n\nThus, it remains to prove \\eqref{prop-cf4-1}.\n\nFor every $h \\in [0, 1]$, let $\\varphi\\colon [0, 1]\\to \\GL(\\ng)$ be the map $\\varphi_h(x) = h x$, \nfor every $x\\in \\ng$. \nConsider the deformed nilpotent Lie algebra \n$ \\ng_h = (\\Vc, h [\\cdot, \\cdot]_\\ng)$ and the corresponding nilpotent Lie group\n$(N_ h , \\ast_{h})$, as above. Then $N_1=N$ and $N_0= \\Vc$. \nWe define $G_h:= N_h \\rtimes_{\\alpha_D} \\RR$, with the multiplication \n$ (x, t)\\cdot_h (y, s) = (x\\ast_h \\ee^{t D}y, t+s)$.\nThen $\\Gc:= \\sqcup_{ h \\in [0, 1]} G_h$ is a smooth bundle of Lie groups over $[0, 1]$. \nIt follows by \\cite[\\S 3]{BB18} that $C^*(\\Gc)= \\sqcup_{h \\in [0, 1]} C^*(G_\\hbar)$ is an upper-continuous bundle of $C^*$-algebras. \n\nOn the other hand, the action $\\alpha_D\\colon \\RR \\to \\Aut(N_{h})$ is independent of $h\\in [0, 1]$, and thus we may choose the Haar measure on $G_h$ to be independent of $h$ as well.\nHence, by \\cite[Def.~3.3, Thm.~3.5]{Ri89}, $(C^*(G_h))_{h \\in [0, 1]}$ is a continuous field of $C^*$-algebras, which is clearly trivial away from $0$. \nThe result then follows by Proposition~\\ref{prop-cf2}.\n\\end{proof}\n\n\n\n\n \n\n\n\n\\subsubsection{Exponential Lie groups with exact symplectic Lie algebras and codimension 1 nilradicals}\n\n\n\n\\begin{definition}\\label{exactsympl}\n\\normalfont \nA solvable Lie algebra $\\gg$ is said to be \\emph{exact symplectic} \nif there is $\\xi_0\\in \\gg^*$ with $\\gg(\\xi_0)=\\{0\\}$. \nEquivalently, if $G$ is any connected Lie group with Lie algebra $\\gg$, then the coadjoint of $\\xi_0$ is an open subset of $\\gg^\\ast$. \n\\end{definition}\n\n\n\n\n\\begin{lemma}\\label{extra}\nLet $\\gg$ be a \nsolvable Lie algebra \nsuch that its nilradical~$\\ng$ has codimension 1. \nLet $\\zg$ be the centre of $\\ng$. \nLet $G$ be a connected simply connected Lie group with its Lie algebra $\\gg$ and $N\\subseteq G$ be the connected subgroup corresponding to the subalgebra $\\ng\\subseteq\\gg$.\nThen the following assertions are equivalent: \n\\begin{enumerate}[\\rm(i)]\n\\item\\label{extra-i} $\\gg$ is exact symplectic.\n\\item\\label{extra-i-2} $\\gg$ is exact symplectic\nand has exactly two open coadjoint orbits.\n\\item\\label{extra2-i} There is an open point in $\\widehat{G}$. \n\\item\\label{extra2-i-2} There are exactly two open points in $\\widehat{G}$. 
\n\\item \\label{extra-ii}\n$[\\gg, \\zg]\\ne \\{0\\}$, $\\dim \\zg =1$ and the nilpotent Lie group $N=\\exp \\ng$ has generic flat coadjoint orbits\n{\\rm (}or equivalently, there is $\\ell \\in \\ng^*$ such that $\\ng(\\ell) =\\zg$.\\rm{)}\n\\end{enumerate}\n\\end{lemma}\n\n\n\\begin{proof}\nThe equivalences \\eqref{extra2-i} $ \\iff$ \\eqref{extra2-i-2} $\\iff$ \\eqref{extra-ii} follows from \\cite[Thm.~4.5]{KT96}.\n\nThe implication \\eqref{extra-i-2} $\\Rightarrow$ \\eqref{extra-i} is trivial. \nIt remains to prove \\eqref{extra-i} $\\Rightarrow$ \\eqref{extra-ii} $\\Rightarrow$ \\eqref{extra-i-2}. \n For every $\\xi\\in\\gg^*$ we denote by $\\Oc_\\xi\\subseteq\\gg^*$ its corresponding coadjoint orbit. \n\n\\eqref{extra-ii} $\\Rightarrow$\\eqref{extra-i-2}: \nWe prove by contradiction the following assertion: \n\\begin{equation}\\label{extra_proof_eq0}\n\\text{if }\\xi\\in\\gg^*\\text{ and }\\ng(\\xi\\vert_\\ng)=\\zg\\text{ then }\n\\gg(\\xi)=\\{0\\}. \n\\end{equation}\nHence let us assume $\\gg(\\xi)\\ne\\{0\\}$ and let us denote $\\ell:=\\xi\\vert_\\ng\\in\\ng^*$. \nThe hypothesis~\\eqref{extra-ii} implies that that $\\dim\\ng$ is an odd integer and $\\dim\\gg$ is an even integer. \nSince $\\dim\\Oc_\\xi=\\dim\\gg\/\\gg(\\xi)$ and this is an even integer, it follows that $\\dim\\gg(\\xi)$ is also an even integer, hence $\\dim\\gg(\\xi)\\ge2$. \nThen $\\dim\\ng+\\dim\\gg(\\xi)>\\dim\\gg$, hence $\\ng\\cap\\gg(\\xi)\\ne\\{0\\}$. \nOn the other hand \n$$\\ng\\cap\\gg(\\xi)=\\{X\\in\\ng\\mid\\langle\\xi,[X,\\gg]\\rangle=\\{0\\}\\}\n\\subseteq\\ng(\\xi\\vert_\\ng)=\\ng(\\ell)=\\zg$$\nhence, since $\\dim\\zg=1$, we obtain $\\ng\\cap\\gg(\\xi)=\\zg$. \nIn particular $\\zg\\subseteq\\gg(\\xi)$. \n\nNow, selecting any $Y\\in\\gg\\setminus\\ng$, \nthe centre $\\zg\\subseteq\\ng$ is invariant to the derivation $(\\ad_\\gg Y)\\vert_\\ng\\in\\Der(\\ng)$. \nSpecifically, for any $X_0\\in\\zg\\setminus\\{0\\}$ \nand all $X\\in\\ng$ we have \n$$0=[Y,[X,X_0]]=[X,[Y,X_0]]+[[Y,X],X_0]]=[X,[Y,X_0]],$$\nwhere we used that $[X,X_0]=[[Y,X],X_0]]=0$ since $X,[Y,X]\\in\\ng$. \nIt then follows that $[Y,X_0]\\in\\zg$. \nSince $\\gg=\\ng\\dotplus\\RR Y$ and $[\\ng,\\zg]=\\{0\\}$, the hypothesis $[\\gg,\\zg]\\ne\\{0\\}$ \nimplies $[Y,X_0]\\ne0$, hence there exists $a\\in\\RR^\\times$ with $[Y,X_0]=aX_0$. \nThus $\\langle\\xi,[Y,X_0]\\rangle=\\langle\\xi,aX_0\\rangle\n=a\\langle\\ell,X_0\\rangle\\ne0$, \nusing the assumption $\\ng(\\ell)=\\zg=\\RR X_0$. \nThis shows that $X_0\\not\\in\\gg(\\xi)$, which is a contradiction \nwith the above conclusion $\\zg\\subseteq\\gg(\\xi)$. \nThis completes the proof of~\\eqref{extra_proof_eq0}, which shows that $\\gg$ has open coadjoint orbits. \n\nIn order to prove that there are exactly two open coadjoint orbits, \nwe consider the set of generic points \n$$\\gg^*_{\\rm gen}:=\\{\\xi\\in\\gg^*\\mid\\gg(\\xi)=\\{0\\}\\}.$$ \nThe set $\\gg^*_{\\rm reg}$ is the union of the open coadjoint orbits of $\\gg$, which are connected and mutually disjoint, \nhence are also relatively closed in $\\gg^*_{\\rm gen}$. \nThus the open coadjoint orbits are exactly the connected components of $\\gg^*_{\\rm gen}$. \nOn the other hand, by \\eqref{extra_proof_eq0}, we have \n$$\\gg^*_{\\rm gen}=\\{\\xi\\in\\gg^*\\mid \\xi\\vert_{\\zg}\\in\\zg^*\\setminus\\{0\\}\\}=\\gg^*\\setminus\\zg^\\perp$$\nsince for any $\\ell\\in\\ng^*$ the equality $\\ng(\\ell)=\\zg$ is equivalent to $\\ell\\vert_\\zg\\in\\zg^*\\setminus\\{0\\}$, \nby the hypothesis on~$\\ng$ and $\\zg$. 
\nThus $\\gg^*_{\\rm gen}$ is the complement of a hyperplane in $\\gg^*$, \nand then $\\gg^*_{\\rm gen}$ is the union of two open half-spaces in \n $\\gg^*$. \nThe above remarks then show that these open half-spaces are just \n the open coadjoint orbits of $\\gg^*$, hence there are exactly two such orbits. \n\n\n\\eqref{extra-i}$\\Rightarrow$\\eqref{extra-ii}: \nFix $\\xi_0 \\in \\gg^*$ with $\\gg(\\xi_0)=\\{0\\}$, which exists by hypothesis. \nAssume that\n$[\\gg, \\zg] =\\{ 0\\}$; then $\\zg \\subseteq \\gg(\\xi_0)$, and $\\dim \\zg \\ge 1$ since $\\ng$ is nilpotent. \nThus $\\dim \\gg(\\xi_0)\\ge 1$ and \nthis is a contradiction with \\eqref{extra-i}, therefore\n$[\\gg, \\zg]\\ne \\{0\\}$. \n\nFor $\\xi_0\\in \\gg^*$ as above, define the bilinear functional \n$$B_{\\xi_0} \\colon \\gg\\times \\gg\\to \\RR, \n\\quad B_{\\xi_0}(X, Y) = \\langle \\xi_0, [X, Y]\\rangle.$$ \nWe have that $\\zg \\subseteq \\ng^{\\perp_{B_{\\xi_0}}}$, therefore $\\dim \\ng^{\\perp_{B_{\\xi_0}}}\\ge 1$.\nIf there exists $X_0\\in \\ng^{\\perp_{B_{\\xi_0}}}\\setminus \\ng$ then, since $\\dim(\\gg\/\\ng)=1$, \n$\\gg =\\ng \\dot{+} \\RR X_0$. \nOn the other hand, $X_0 \\perp_{B_{\\xi_0}} \\ng$, hence $X_0\\perp_{B_{\\xi_0}}\\zg$, while \n$\\ng\\perp_{B_{\\xi_0}}\\zg$; it follows that $\\gg \\perp_{B_{\\xi_0}}\\zg$. \nSince $\\zg \\ne \\{0\\}$ this is a contradiction with the fact that $B_{\\xi_0}$ is symplectic. \nThus $\\ng^{\\perp_{B_{\\xi_0}}}\\subseteq \\ng$.\nFor arbitrary $X\\in \\gg \\setminus \\ng$, $\\dim (X^{\\perp_{B_{\\xi_0}}}) =\\dim \\gg -1$.\nHence \nif $\\dim(\\ng^{\\perp_{B_{\\xi_0}}}) \\ge 2$, then $X^{\\perp_{B_{\\xi_0}}}\\cap \\ng^{\\perp_{B_{\\xi_0}}}\\ne 0$, that is, $\\gg^{\\perp_{B_{\\xi_0}}}\\ne 0$, which is again a contradiction.\nIt follows that $\\dim(\\ng^{\\perp_{B_{\\xi_0}}})=1$, hence\n $\\ng (\\xi_0\\vert_{\\ng}) =\\zg$ and $\\dim \\zg =1$. \nThis completes the proof. \n \\end{proof}\n\n\\begin{corollary}\\label{extra-cor}\nLet $G$ be an exponential Lie group with its Lie algebra $\\gg$ and nilradical $\\ng$ such that $\\dim \\gg\/\\ng =1$.\nThen the following assertions are equivalent. \n\\begin{enumerate}[\\rm(i)]\n\\item \\label{cor_extra2-0} There is an open point in $\\Prim(G)$.\n\\item\\label{cor_extra2-i} $\\gg$ is exact symplectic.\n\\item \\label{cor_extra-ii}\n\\begin{itemize}\n\\item{} There is a continuous action $\\alpha \\colon \\RR \\to \\Aut N$ such that $G = N \\rtimes_\\alpha \\RR$ \nand $\\alpha$ acts non-trivially on the centre $Z$ of $N$.\n\\item{} The nilpotent Lie group $N$ has generic flat coadjoint orbits and its centre is 1-dimensional.\n\\end{itemize}\n\\end{enumerate}\n\\end{corollary}\n\n\n\\begin{proof}\nThe assertion is a consequence of Lemma~\\ref{extra} and Remark~\\ref{P3_group}.\n\\end{proof}\n\n\\subsubsection{More on exact symplectic groups $G= N\\rtimes \\RR$}\nLet $N$ be a nilpotent Lie group, connected and simply connected, and $Z$ be the centre of $N$. \nLet $\\ng$ and $\\zg$ be the Lie algebras of $N$ and $Z$, respectively.\nWe assume that $\\dim Z=1$. 
\nLet $\\Ic$ be the ideal \n$$ \\Ic:= \\bigcap_{\\sigma \\in \\widehat{N\/Z}} \\Ker_{C^*(N)}(\\sigma), $$\nand let $\\Psi \\colon C^*(N) \\to C^{*}(N\/Z)$ be the surjective morphism given by \n$$ (\\Psi(f))(xZ) = \\int_Z f(xz) \\de z.$$\nThen, by \\cite[Prop.~8.C.8]{BkHa20}, we have the short exact sequence \n\\begin{equation}\\label{cf-0}\n 0 \\longrightarrow \\Ic \\longrightarrow C^*(N) \\stackrel{\\Psi}{\\longrightarrow} C^*(N\/Z) \\longrightarrow 0.\n \\end{equation}\n\n\nNow assume that the group $N$ has generic flat coadjoint orbits, and denote $\\dim N = 2d +1$, $d \\in \\NN$.\nLet $\\alpha \\colon \\RR\\to \\Aut(N)$ be a continuous action that acts non-trivially on $Z$, that is, \n$\\alpha_t\\ne \\id_Z$ for some, hence all, $t \\in \\RR\\setminus \\{0\\}$. \nThen there is $\\tau_0 \\in \\RR\\setminus \\{0\\}$ such that $\\de \\alpha_t\\vert_{\\zg} = \\ee^{\\tau_0 t} \\id_\\zg$.\n\nFix $X_0\\in \\zg \\setminus \\{0\\}$ and let $\\pi_1\\colon N \\to \\Bc(L^2(\\RR^d))$ be the unitary irreducible representation such that \n$$\n\\pi_1(\\exp_N (s X_0))=\\ee^{\\ie s} \\id_{L^2(\\RR^d)}. \n$$\nThen \n\\begin{equation}\\label{cf-4}\n(\\pi_1 \\circ \\alpha_t)(\\exp_N(sX_0))= \\pi_1(\\exp_N(s\\ee^{\\tau_0t}X_0))= \n\\ee^{\\ie s\\ee^{\\tau_0t}} \\id_{L^2(\\RR^d)}.\n\\end{equation}\n\nLet $C\\colon L^2(\\RR^d) \\to L^2(\\RR^d)$ be the usual complex conjugation $v \\mapsto \\overline{v(\\cdot)}$. \nFor every $t\\in \\RR$ define \n\\begin{equation}\\label{cf-4.5} \n\\pi_t\\colon N \\to \\Bc(L^2(\\RR^d)),\n \\quad \\pi_t:= \n \\begin{cases} \\pi_1\\circ \\alpha_{\\log t} & \\text{if } t >0, \\\\\n C \\pi_{-t} C & \\text{if } t < 0.\n \\end{cases}\n \\end{equation}\n Then \\eqref{cf-5} and \\eqref{cf-4} give \n $$\n \\pi_t (\\exp_N (sX_0))= \\begin{cases} \\ee^{\\ie t^{\\tau_0} s} \\id_{L^2(\\RR^d)} & \\text{if } t >0, \\\\\n \\ee^{-\\ie \\vert t\\vert^{\\tau_0} s} \\id_{L^2(\\RR^d)} & \\text{if } t < 0.\n \\end{cases}\n$$\nHence the map \n$$ \\RR^{\\times} \\to \\what{N}\\setminus \\what{N\/Z}, \\quad t \\mapsto [\\pi_t]$$\nis a homeomorphism. 
\n\nWe have thus obtained that there is a $*$-isomorphism \n\\begin{equation}\\label{cf-5}\n \\Phi\\colon \\Ic \\to \\Cc_0(\\RR^{\\times}, \\Kc(L^2(\\RR^d))), \\;\\;\n a\\mapsto (s\\mapsto \\pi_s(a)=\\Phi(a)(s))\n \\end{equation}\n that satisfies \n \\begin{equation}\\label{cf-6}\n ( \\Phi(\\overline{a}))(t)= \\pi_t(\\overline{a}) = C\\overline{\\pi_t}(a) C = C\\pi_{-t}(a) C,\\; \\text{\n for all } t\\in \\RR^{\\times}.\n \\end{equation}\n(Compare with the proof of Lemma~\\ref{rcrossed}.)\n\nDenote \n$$ \\Ic_{\\pm}:=\\Ic \\cap \\bigcap\\limits_{t>0} \\Ker_{C^*(N)} \\pi_{\\mp t}, $$\nwhich are ideals of $\\Ic$.\nThen by \\eqref{cf-5}, \\eqref{cf-6} we get \n\\begin{gather}\n\\Ic = \\Ic_{+} \\dot{+} \\Ic_{-}, \\label{cf-7}\\\\\n\\overline{\\Ic_{+}}=\\Ic_{-}, \\label{cf-8}\\\\\n\\alpha_t(\\Ic_{\\pm})=\\Ic_{\\pm}, \\quad \\text{for all } \\, t \\in\\RR.\\nonumber\n\\end{gather}\n\n\\begin{lemma}\\label{cf-lemma6}\nWith the notation and under the conditions above, \n$$ \\Ic_{+}\\rtimes_{\\alpha} \\RR \\simeq \\Kc(L^2(\\RR^d))\\otimes \\Kc(L^2(\\RR^d)).$$\n\\end{lemma}\n\n\\begin{proof}\nThe restriction of $\\Phi$ to $\\Ic_{+}$ gives a $*$-isomorphism \n$$\\Phi\\vert_{\\Ic_+}\\colon \\Ic_{+} \\, \\widetilde{\\longrightarrow}\\, \\Cc_0(\\RR^\\times_+, \\Kc(L^2(\\RR^d))).$$ \nBy \\eqref{cf-6} and \\eqref{cf-4.5}, for every $t\\in \\RR$ and $s>0$ we have\n\\begin{equation}\\label{-cf-10}\n\\begin{aligned}\n\\Phi(\\alpha_t(a))(s) & = \\pi_s(\\alpha_t(a))\\\\\n & =\\pi_1(\\alpha_{\\log s}(\\alpha_t(a)))\\\\\n & =\\pi_1(\\alpha_{\\log (s \\ee^t)}(a))\\\\\n & = \\pi_{s\\ee^t}(a) \n \\\\ \n &=\\Phi(a)(s\\ee^t).\n\\end{aligned}\n\\end{equation}\nOn the other hand, if we denote\n$$ \\rho\\colon \\RR_+^\\times \\times \\Cc_0(\\RR^\\times_+) \\to \\Cc_0(\\RR^\\times_+), \\quad \n(\\rho_t (f))(s)= f(st),$$\nwe have that \n\\begin{equation}\\label{cf-11}\n\\Cc_0(\\RR^\\times_+) \\rtimes_\\rho \\RR^\\times_+ \\simeq \\Kc(L^2(\\RR^\\times_+)).\n\\end{equation}\nThen the assertion in the statement follows from the commutative diagram\n$$ \\xymatrix{\n\\RR \\times \\Ic_+ \\ar[r]^{\\alpha} \\ar[d]_{\\exp \\times \\Phi} & \\Ic_+\\ar[d]^{\\Phi}\\\\\n\\RR_+^\\times \\times (\\Cc_0(\\RR^\\times_+)\\otimes \\Kc(L^2(\\RR^d)))\\ar[r]^{\\;\\; \\rho\\otimes \\id_\\Kc}\n & \\Cc_0(\\RR^\\times_+)\\otimes \\Kc(L^2(\\RR^d))\n}\n$$\nand \\eqref{cf-11}. \n\\end{proof}\n\nDenote \n\\begin{equation}\\label{cf-Jc}\n\\Jc:= \\Ic_+\\rtimes_\\alpha \\RR.\\end{equation}\nThen\n$\\Jc$ is an elementary $C^*$-algebra, by Lemma~\\ref{cf-lemma6}. \n\n\\begin{theorem}\\label{cf-prop7}\nLet $G$ be a solvable Lie group with exact symplectic Lie algebra $\\gg$ of dimension $2d+2$.
\nAssume that the nilradical $N$ of $G$ is of codimension 1, and \nlet $Z$ be the centre of $N$.\n\\begin{enumerate}[\\rm (i)]\n\\item\\label{cf-prop7_i} There is an ideal $\\Jc$ of $C^*(G)$ \n such that \n$\\Jc \\simeq \\Kc(L^2(\\RR^d)) \\otimes \\Kc(L^2(\\RR^d))$ and\nthere is the following short exact sequence\n\\begin{equation}\\label{cf-prop7-eq1} \n0\\longrightarrow \\Jc \\dot{+} \\overline{\\Jc} \\stackrel{\\iota}{\\longrightarrow} C^*(G) \\stackrel{\\psi}{\\longrightarrow} \nC^*(G\/Z) \\rightarrow 0.\n\\end{equation}\n\\item\\label{cf-prop7_ii} $K_0(\\Jc\\dot{+} \\overline{\\Jc})^+ \\cap \\Ker K_0(\\iota) =\\{0\\}$ if and only if $\\dim(G) \\in 4 \\ZZ$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nBy Lemma~\\ref{extra}, we have that $\\dim Z=1$, $\\gg\/\\ng\\simeq \\RR$, the continuous action\n$\\alpha\\colon \\RR \\to \\Aut(N)$ is non-trivial on the centre $Z$ of $N$ and \n$G= N\\rtimes \\RR$.\n\n\nIt follows from \\eqref{cf-0} \nthat we have the short exact sequence\n$$ 0\\longrightarrow \\Ic \\rtimes \\RR \\stackrel{\\iota}\\longrightarrow C^*(N \\rtimes \\RR) \\longrightarrow \nC^*((N\/Z) \\rtimes \\RR) \\longrightarrow 0,\n$$\nwhere $\\iota$ is the inclusion map.\nThen assertion \\eqref{cf-prop7_i} is a consequence of \\eqref{cf-7}, \\eqref{cf-8} and of Lemma~\\ref{cf-lemma6}.\n\n\\eqref{cf-prop7_ii}\nFirst note that $\\dim G= \\dim(N\\rtimes \\RR)=2d+2$, hence there are two possibilities, either \n$\\dim G\\in 4\\ZZ +2 $ or $\\dim G\\in 4\\ZZ$. \nThen in the six term exact sequence corresponding to \\eqref{cf-prop7-eq1}\n$$ \\xymatrix{\nK_0(\\Jc \\dot{+} \\overline{\\Jc}) \\ar[r]^{K_0(\\iota)}\n & K_0 (C^*(N\\rtimes \\RR))\\ar[r]^{K_0(\\psi)} \n& K_0 (C^*((N\/Z)\\rtimes \\RR))\\ar[d]^{\\exp}\\\\\nK_1 (C^*((N\/Z)\\rtimes \\RR)) \\ar[u]^{\\text{ind}} & \\ar[l]^{K_1(\\psi)}\nK_1 (C^*(N\\rtimes \\RR)) &\\ar[l]^{K_1(\\iota)}\nK_1(\\Jc\\dot{+} \\overline{\\Jc})\n}\n$$\nwe have \n$$ K_0 (C^*((N\/Z)\\rtimes \\RR))=K_1(\\Jc\\dot{+} \\overline{\\Jc}) = K_1 (C^*(N\\rtimes \\RR))=0$$\nand \n$$ K_0(\\Jc \\dot{+} \\overline{\\Jc})\\simeq \\ZZ \\times \\ZZ, \\; \\; K_0 (C^*(N\\rtimes \\RR))\\simeq K_1 (C^*((N\/Z)\\rtimes \\RR))\\simeq \\ZZ.$$\nMore specifically, there is an isomorphism\n$$ \n\\chi\\colon K_0(\\Jc) \\times K_0(\\overline{\\Jc}) \\, \\widetilde{\\longrightarrow} \\, \nK_0(\\Jc \\dot{+} \\overline{\\Jc}),\n$$ \nsuch that, if $p$ is a minimal projection in $\\Jc$, then\n$$K_0(\\Jc) \\times K_0(\\overline{\\Jc}) =\\{ (m[p]_{0, \\Jc}, \nn [\\overline{p}]_{0, \\overline{\\Jc}})\\mid m, n \\in \\ZZ \\}\\simeq \\ZZ \\times \\ZZ, $$\nand $\\chi (m[p]_{0, \\Jc}, n[\\overline{p}]_{0, \\overline{\\Jc}})\n = [mp+n\\overline{p}]_{0, \\Jc\\dot{+}\\overline{\\Jc}}\n$ if $m,n\\in\\{0,1\\}$.\nThus\n\\begin{equation}\\label{cf-prop7-eq2}\n( K_0(\\iota) \\circ \\chi ) (m[p]_{0, \\Jc}, n[\\overline{p}]_{0, \\overline{\\Jc}})= \nm[p]_{0} + n[\\overline{p}]_0 \n\\text{ for all }m,n\\in\\ZZ.\n\\end{equation} \n\\textit{Case 1: $\\dim (N\\rtimes \\RR)\\in 4 \\ZZ +2$.} \n\nIn this case $[p]_{0} =- [\\overline{p}]_0$, hence \n $$ [p+\\overline{p}]_{0, \\Jc\\dot{+}\\overline{\\Jc}}\\in\n ( K_0(\\Jc \\dot{+} \\overline{\\Jc})^+ \\cap \\Ker K_0(\\iota) ) \\setminus \\{0\\}.$$\n\\textit{Case 2: $\\dim (N\\rtimes \\RR)\\in 4 \\ZZ$.} \n\nIn this case $[p]_{0} =[\\overline{p}]_0$, hence the morphism\n$ K_0(\\iota) \\circ \\chi\\colon \\ZZ \\times \\ZZ \\to K_0(C^*(N\\rtimes\\RR) ) \\simeq \\ZZ$ \nis given by \n$$ (K_0(\\iota) \\circ \\chi ) (m[p]_{0, \\Jc}, n[\\overline{p}]_{0, \\overline{\\Jc}})= \n(m+n)
[p]_{0}.$$\nThus \n$$ K_0(\\Jc \\dot{+} \\overline{\\Jc})^+ \\cap \\Ker K_0(\\iota) = \\{0\\}.$$\nThis finishes the proof.\n\\end{proof}\n\n\n \n\n\n\\begin{corollary}\\label{cf-cor8}\nLet $G$ be a solvable Lie group with exact symplectic Lie algebra $\\gg$. \nAssume that the nilradical $N$ of $G$ is of codimension 1 and let \n$Z$ be the centre of $N$.\nThen the following assertions hold true:\n\\begin{enumerate}[\\rm (i)]\n\\item\\label{cf-cor7-i} $C^*(G)$ is stably finite \nif and only if $C^*(G\/Z)$ is. \n\\item\\label{cf-cor7-ii} $C^*(G)$ is AF-embeddable\nif and only if $C^*(G\/Z)$ is.\n\\end{enumerate}\n\\end{corollary}\n\n\n\\begin{proof} By Lemma~\\ref{extra} it is enough to prove the corollary for groups of the form $G= N \\rtimes \\RR$ \nand $G\/Z = (N\/Z) \\rtimes \\RR$ such that the nilradical $N$ of $G$ and the action of $\\RR$ by automorphisms of $N$ satisfy the hypotheses of Theorem~\\ref{cf-prop7}.\n\n\\eqref{cf-cor7-i} ``$\\Leftarrow$'' The assertion follows from Theorem~\\ref{cf-prop7}\nand \\cite[Lemma~1.5]{Sp88}.\n\n ``$\\Rightarrow$'' Assume that $C^*((N\/Z)\\rtimes \\RR)$ is not stably finite. \n Then, by Proposition~\\ref{P1}, there is $0\\ne q_0\\in \\Pc(M_k \\otimes C^*((N\/Z)\\rtimes \\RR))$. \n(Note that $\\dim ((N\/Z) \\rtimes \\RR)$ is odd, so that $K_0( C^*((N\/Z)\\rtimes \\RR ))=0$; hence\n $[q_0]_{0, C^*((N\/Z)\\rtimes \\RR)}=0$ anyway.) \n Then, by \\cite[Thm.~1]{Ch83}, there exists $0\\ne q \\in \\Pc(M_k \\otimes C^*( N\\rtimes \\RR))$ with \n $(1\\otimes \\psi)(q) =q_0$; hence in particular, $q\\not \\in M_k \\otimes (\\Jc \\dot{+} \\overline{\\Jc})$.\nThe result then follows from Proposition~\\ref{proj}.\n\n\\eqref{cf-cor7-ii} Use Theorem~\\ref{cf-prop7}\\eqref{cf-prop7_i}, \\eqref{cf-cor7-i} and \\cite[Thm.~1.15]{Sp88}.\n\\end{proof}\n\n\n\n\n\n\\section{Examples}\n\\label{Sect4}\n\nIn this final section we illustrate the results from Section~\\ref{section4k}, effectively using them for establishing the existence or the failure of various finite approximation properties for the $C^*$-algebras of several concrete solvable Lie groups. \nFor instance, we study exponential solvable Lie groups that have exactly two open coadjoint orbits, and whose nilradical is either a Heisenberg group (Proposition~\\ref{Heis}), or the free 2-step nilpotent Lie group with 3 generators (Theorem~\\ref{N6N15}), or a Heisenberg-like group associated to a finite-dimensional real division algebra (Theorem~\\ref{N6N17}). \n\n\n\\subsection{Some semidirect products} \nIn Lemma~\\ref{NC_lemma} and Proposition~\\ref{NC} below we use some notation related to quasi-orbits of group actions (see \\cite[\\S 2.1]{BB20}). \nSpecifically, for any group action $\\Gamma \\times X \\to X$ on the topological space $X$, we denote by \n$$ (X\/\\Gamma)^\\approx :=\\{\\overline{\\Gamma x}\\mid x\\in X\\}$$\nthe set of all orbit closures, regarded as a topological subspace of the space $\\text{Cl}(X)$ of all closed subsets of $X$, endowed with the upper topology. \n\n\n\n\\begin{lemma}\\label{NC_lemma}\n\tLet $\\Vc$ be a finite-dimensional real vector space and $T\\in\\End(\\Vc)$ with $\\sigma(T)\\cap\\ie\\RR\\subseteq\\{0\\}$, for which there exist $w_1,w_2\\in\\sigma(T)$ with $\\Re\\, w_1\\le 0\\le\\Re\\, w_2$. \n\tIf we define the abelian group $\\exp(\\RR T):=\\{\\ee^{sT}\\mid s\\in\\RR\\}\\subseteq\\End(\\Vc)$ with its natural action on~$\\Vc$, then there exists a continuous open mapping $\\Psi\\colon(\\Vc\/\\exp(\\RR T))^\\approx\\to\\RR$.
\n\\end{lemma}\n\n\\begin{proof}\n\tCase 1: $0\\in\\sigma(T)$, that is, $\\Ker T\\ne\\{0\\}$. \n\t\n\tThen, using the Jordan decomposition, we can obtain a linear subspace $\\Vc_0\\subsetneqq\\Vc$ with $T(\\Vc)\\subseteq\\Vc_0$. \n\tIn particular $T(\\Vc_0)\\subseteq\\Vc_0$, and this implies that \t\n\tthe group $\\exp(\\RR T)$ naturally acts on $\\Vc\/\\Vc_0$, and the quotient map $q\\colon\\Vc\\to\\Vc\/\\Vc_0$ is $\\exp(\\RR T)$-equivariant. \n\tWe then obtain the commutative diagram \n\t$$\\xymatrix{\n\t\t\\Vc \\ar[r]^q \\ar[d] & \\Vc\/\\Vc_0 \\ar[d] \\\\\n\t\t(\\Vc\/\\exp(\\RR T))^\\approx \\ar[r]^{q^\\approx}& ((\\Vc\/\\Vc_0)\/\\exp(\\RR T))^\\approx\n\t}$$\n\twhose vertical arrows are quasi-orbit maps, hence they are continuous and open. \n\t(See for instance \\cite[Lemma 2.3]{BB20}.) \n\tSince $q$ is also continuous and open, it then directly follows that $q^\\approx$ is continuous and open. \n\t\n\tRecalling that $T(\\Vc)\\subseteq\\Vc_0$, the action of $\\exp(\\RR T)$ on $\\Vc\/\\Vc_0$ is trivial, hence the right-most arrow in the above diagram is actually a homeomorphism and the composition of its inverse with $q^\\approx$ is a continuous open map \n\t$$\\Psi_1\\colon(\\Vc\/\\exp(\\RR T))^\\approx\\to\\Vc\/\\Vc_0.$$\n\tThe real vector space $\\Vc\/\\Vc_0$ is different from $\\{0\\}$, hence there exists a non-zero linear functional $\\xi\\colon\\Vc\/\\Vc_0\\to\\RR$, \n\tand then the mapping $\\Psi:=\\xi\\circ\\Psi_1$ has the required properties. \n\t\n\tCase 2: $0\\not \\in\\sigma(T)$. \n\t\n\tWe then have the direct sum decomposition $\\Vc=\\Vc_+\\dotplus\\Vc_-$, \n\twhere $\\Vc_\\pm$ is the direct sum of the real generalized eigenspaces corresponding to all $w\\in\\sigma(T)$ with $\\pm\\Re\\,w>0$. \n\t(See \\cite[Sect. 2]{BB18b}.) \n\tDenoting $n_\\pm:=\\dim\\Vc_\\pm$, we obtain $n_-n_+\\ne0$ by hypothesis. \n\tLet $n:=n_++n_-=\\dim\\Vc$ and define \n\t$$T_0:=\\begin{pmatrix}\n\t\\1_{n_+} & 0 \\\\\n\t0 & -\\1_{n_-}\n\t\\end{pmatrix}\n\t\\in M_n(\\RR).$$\n\tIt follows by \\cite[Lemma 2.1]{BB18b} \n\tthat there exists a homeomorphism $\\Theta\\colon \\Vc\\to\\RR^n$ \n\tsatisfying $\\Theta\\circ\\ee^{sT}=\\ee^{sT_0}\\circ\\Theta$ for all $s\\in\\RR$, \n\thence we obtain a homeomorphism \n\t$$\\Theta^\\approx\\colon(\\Vc\/\\exp(\\RR T))^\\approx\n\t\\to(\\RR^n\/\\exp(\\RR T_0))^\\approx.$$\n\tNow define the surjective linear map \n\t$$p\\colon \\RR^n=\\RR^{n_+}\\times\\RR^{n_-}\\to\\RR^2,\\quad \n\t((x_1,\\dots,x_{n_+}),(y_1,\\dots,y_{n_-}))\\mapsto(x_1,y_1)$$\n\t(which makes sense since $n_-n_+\\ne0$)\n\tand let \n\t$$S:=\\begin{pmatrix}\n\t1 & \\hfill 0 \\\\\n\t0 & -1\n\t\\end{pmatrix}\\in M_2(\\RR).$$\n\tWe have $p\\circ T_0=S\\circ p$ and $p$ is continuous and open, hence we obtain a continuous open mapping \n\t$$p^\\approx\\colon (\\RR^n\/\\exp(\\RR T_0))^\\approx\\to \n\t(\\RR^2\/\\exp(\\RR S))^\\approx.$$\n\tFurthermore, we note that the mapping \n\t$$\\varphi\\colon\\RR^2\\to\\RR,\\quad \\varphi(x,y):=xy$$\n\tis continuous and open, since its restriction to $\\RR^2\\setminus\\{(0,0)\\}$ is actually a submersion while $\\varphi((-a,a)^2)=(-a^2,a^2)$ for all $a\\in(0,\\infty)$.
\n\tOn the other hand, we have $\\varphi\\circ\\ee^{tS}=\\varphi$ for all $t\\in\\RR$, \n\thence there exists a commutative diagram \n\t$$\\xymatrix{\n\t\t\\RR^2 \\ar[r]^\\varphi \\ar[d] & \\RR \\\\\n\t\t(\\RR^2\/\\exp(\\RR S))^\\approx \\ar@{.>}[ur]_{\\varphi^\\approx}\n\t}\n\t$$\n\twhose vertical arrow is a quasi-orbit map, hence is continuous and open, and this directly implies that $\\varphi^\\approx$ is continuous and open as well. \n\tFinally, the composition \n\t$$\\Psi:=\\varphi^\\approx\\circ p^\\approx\\circ\\Theta^\\approx\\colon \n\t(\\Vc\/\\exp(\\RR T))^\\approx\\to\\RR $$ \n\tis a continuous open mapping, as required. \n\\end{proof}\n\n\\begin{proposition}\\label{NC}\n\tLet $\\ng$ be a nilpotent Lie algebra with its center~$\\zg$, and $D\\in\\Der(\\ng)$ \n\tsatisfying the conditions \n\t\\begin{equation}\\label{NC_eq1}\n\t\\sigma(D)\\cap\\ie\\RR\\subseteq\\{0\\}\n\t\\end{equation}\n\tand \n\t\\begin{equation}\\label{NC_eq2}\n\t\\text{there exist }w_1,w_2\\in\\sigma(D\\vert_\\zg)\\text{ with }\n\t\\Re\\, w_1\\le 0\\le\\Re\\, w_2.\n\t\\end{equation}\n\tIf $G$ is a simply connected Lie group with its Lie algebra $\\gg:=\\ng\\rtimes\\RR D$, then $G$ is an exponential solvable Lie group and there exists a continuous open mapping $\\Phi\\colon\\Prim(G)\\to\\RR$. \n\\end{proposition}\n\n\\begin{proof}\n\tThe hypothesis \\eqref{NC_eq1} ensures that $G$ is an exponential solvable Lie group. \n\t\n\tStep 1: \n\tIf $0\\in\\sigma(D\\vert_\\zg)$, then $\\{0\\}\\ne\\Ker (D\\vert_\\zg)=\\zg\\cap\\Ker D$. \n\tOn the other hand, it is easily seen that $\\zg\\cap\\Ker D$ is contained in the center of $\\gg$, hence it follows that the center $Z_G$ of the exponential solvable Lie group $G$ satisfies~$\\dim Z_G\\ge 1$, and then the assertion follows at once using the (continuous open) restriction mapping ${\\rm Res}^G_{Z_G}\\colon \\Prim(G)\\to\\widehat{Z_G}$ given by \\cite[Lemma 2.11]{BB20} \n\talong with the fact that $\\widehat{Z_G}$ is homeomorphic to $\\RR^k$ for $k:=\\dim Z_G\\ge 1$. \n\t\n\tHence we may assume $0\\not\\in\\sigma(D\\vert_\\zg)$ from now on, without any loss of generality. \n\t\n\tStep 2: \n\tLet $A:=\\{\\ee^{tD}\\mid t\\in\\RR\\}\\hookrightarrow\\Aut(\\ng)=\\Aut(N)$, \n\twhere $N:=(\\ng,\\cdot)$ is the simply connected Lie group associated with~$\\ng$. \n\tWe regard $A$ as an abelian Lie group, which is isomorphic to $(\\RR,+)$ since $D\\ne0$ by Step~1. \n\t(See also \\eqref{NC_eq1}.) \n\tAlso let $Z=(\\zg,\\cdot)=(\\zg,+)$, the center of $N$. \n\t\n\tWe have the semidirect product of Lie groups $G=N\\rtimes A$, hence $C^*(G)=C^*(N)\\rtimes A$, which carries a natural dual action $\\widehat{A}\\times C^*(G)\\to C^*(G)$. \n\tWe then obtain the composition of continuous open maps \n\t$$\\Phi_1\\colon \\Prim(G) \\to(\\Prim(G)\/\\widehat{A})^\\approx\n\t\\simeq(\\Prim (N)\/A)^\\approx \n\t\\to (\\widehat{Z}\/A)^\\approx,$$\n\twhere the left-most map is the quasi-orbit map corresponding to the natural action $\\widehat{A}\\times\\Prim(G)\\to\\Prim(G)$, \n\tthe middle homeomorphism is given by \\cite[Cor. 2.5]{GL86}, \n\twhile the right-most map is obtained as in the proof of \\cite[Prop. 4.7]{BB20} using the fact that the restriction mapping $R^N\n\t\\colon \\Prim(N) \\to\\widehat{Z}$ is not only continuous and open by \\cite[Lemma 2.11]{BB20}, but also $\\Aut(N)$-equivariant.
\n\t\n\tStep 3: \n\tThe canonical homeomorphism $E\\colon \\zg^*\\to \\widehat{Z}$, $\\xi\\mapsto\\ee^{\\ie\\xi}$, intertwines the group actions \n\t$$\\RR\\times\\zg^*\\to\\zg^*,\\quad (t,\\xi)\\mapsto\\xi\\circ\\ee^{tD}$$\n\tand \n\t$$\\RR\\times\\widehat{Z}\\to\\widehat{Z},\\quad (t,\\chi)\\mapsto\\chi\\circ\\ee^{tD}\\vert_\\zg$$\n\thence we obtain the homeomorphism \n\t$$E^\\approx\\colon(\\zg^*\/\\RR)^\\approx\\to(\\widehat{Z}\/\\RR)^\\approx.$$\n\tOn the other hand, the hypothesis~\\eqref{NC_eq2} show that we may use Lemma~\\ref{NC_lemma} for $T:=(D\\vert_\\zg)^*\\in\\End(\\zg^*)$ to obtain a continuous open mapping $\\Psi\\colon (\\widehat{Z}\/\\RR)^\\approx\\to\\RR$, \n\tand then the mapping $\\Phi_2:=\\Psi\\circ (E^\\approx)^{-1}\\colon (\\widehat{Z}\/\\RR)^\\approx\\to\\RR$ is continuous and open. \n\tFInally, using the mapping $\\Phi_1$ from Step~2, we obtain the continuous open mapping $\\Phi:=\\Phi_2\\circ\\Phi_1\\colon\\Prim (C^*(G))\\to\\RR$, as required. \n\\end{proof}\n\n\\begin{corollary}\\label{NC_cor1}\n\tIn Proposition~\\ref{NC}, the topological space $\\Prim(G)$ contains no non\\-empty quasi-compact open subsets, and the $C^*$-algebra $C^*(G)$ is AF-embeddable. \n\\end{corollary}\n\n\\begin{proof}\n\tAny nonempty quasi-compact open subset of $\\Prim(G)$ would be mapped via $\\Phi$ onto a nonempty compact open subset of $\\RR$, but there are no such subsets of the connected noncompact space~$\\RR$. \n\tMoreover, $C^*(G)$ is nuclear since $G$ is an amenable group. \n\tOne can then use \\cite[Cor. B]{Ga20} to see that $C^*(G)$ is AF-embeddable. \n\\end{proof}\n\n\n\n\\subsection{The semidirect product $H_n\\rtimes \\RR$}\nWe now give the simplest example of exponential solvable Lie group whose primitive ideal space has finite open subsets and whose $C^*$-algebra is nevertheless AF-embeddable. (See \\eqref{Heis_item4} in Proposition~\\ref{Heis}.)\n\nLet $ H_n$ be the $(2n+1)$-dimensional Heisenberg group with its Lie algebra\n$\\hg:=\\hg_n=\\spa\\{Z,Y_1,\\dots,Y_n,X_1,\\dots,X_n\\}$, \nwhere\n$$\n[Y_j,X_j]=Z\n$$ \nfor $j=1,\\dots,n$.\nDenote by $\\zg: =\\RR Z$ the centre of $\\hg_n$. \nFor any $D\\in\\Der(\\hg_n)$ there exists $d_\\zg\\in\\RR$ with \n$D\\vert_\\zg=d_\\zg\\id_\\zg$, hence there exists an operator \n $$D\/\\zg\\colon\\hg_n\/\\zg\\to\\hg_n\/\\zg, \\quad V+\\zg\\mapsto D(V)+\\zg.$$\nWe also define $\\alpha_D\\colon\\RR\\to\\Aut(H_n)$, \n$t\\mapsto\\exp (tD)$. \n\n\\begin{proposition}\\label{Heis} \nLet $D\\in\\Der(\\hg_n)$ be such that $\\sigma(D)\\cap\\ie\\RR\\subseteq\\{0\\}$, \nand let $G_{n,D}:=H_n \\rtimes_{\\alpha_D}\\RR$ be the corresponding semidirect product.\nThen we have \n\\begin{enumerate}[{\\rm(i)}]\n\\item\\label{Heis_item1} If $d_\\zg=0$ then $C^*(G_{n,D})$ is AF-embeddable for every $n\\ge 1$ and there is no nonempty quasi-compact open subset of $\\Prim(G_{n,D}) $.\n\\item\\label{Heis_item1.5}\n If $d_\\zg\\ne 0$, then there are two open points in $\\Prim(G_{n,D})$, for every $n\\ge 1$. \n\\item\\label{Heis_item2} \nIf $d_\\zg\\ne 0$ and there exists $\\epsilon \\in \\{-1, 1\\}$ such that \n$\\epsilon \\Re\\, z> 0$ for all $z\\in \\sigma(D\/\\zg)$,\n then $C^*(G_{n,D})$ is not stably finite for every $n\\ge 1$. \n\\item\\label{Heis_item3} If $d_\\zg\\ne 0$ and $n\\in 2\\ZZ$, then $C^*(G_{n,D})$ is not stably finite. \n\\item\\label{Heis_item4} If $d_\\zg\\ne 0$, $n\\in 2 \\ZZ +1$, and there are $z_1, z_2\\in \\sigma(D\/\\zg)$ with \n$\\Re\\, z_1\\le 0\\le\\Re\\, z_2$, then $C^*(G_{n,D})$ is AF-embeddable. 
\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\nAssertion \\eqref{Heis_item1} follows from Corollary~\\ref{NC_cor1}, \\eqref{Heis_item1.5} from Lemma~\\ref{extra}, \nand \\eqref{Heis_item2} is a consequence of Corollary~\\ref{cf-cor8} and Example~\\ref{ax+b}.\nAssertion \\eqref{Heis_item3} follows from \\eqref{Heis_item1.5} along with Corollary~\\ref{solv-4n+2}, and \\eqref{Heis_item4} can be obtained using Corollary~\\ref{cf-cor8} and Example~\\ref{ax+b}.\n\\end{proof}\n\n\n\n\n\n\\subsection{Two more classes of examples}\n\n\nWe start with a lemma that is essentially a by-product of \\cite{Sp88}.\nWe prove it here for completeness, as we do not have a reference for this very result, and it is needed for Theorem~\\ref{N6N15}, via\nLemma~\\ref{special}.\nWe denote by $\\mathcal{N}$ the class of separable nuclear $C^*$-algebras to which the universal coefficient theorem applies. \n(See for instance \\cite{RoSc87}.)\nThe class~$\\mathcal{N}$ contains the $C^*$-algebras of all simply connected solvable Lie groups, since they are obtained by iterated crossed products by actions of the group~$\\RR$, starting from the 1-dimensional $C^*$-algebra. \n\n\\begin{lemma}\\label{embed}\nLet $0\\to\\Ic\\to\\Ac\\to\\Ac\/\\Ic\\to0$ be an exact sequence of $C^*$-algebras satisfying the following conditions: \n\\begin{enumerate}[{\\rm(i)}]\n\t\\item\\label{embed_item1} The $C^*$-algebra $\\Ac\/\\Ic$ belongs to the class $\\mathcal{N}$. \n\t\\item\\label{embed_item2} The $C^*$-algebras $\\Ic$ and $\\Ac\/\\Ic$ are AF-embeddable. \n\t\\item\\label{embed_item3} The index map $\\delta_1\\colon K_1(\\Ac\/\\Ic)\\to K_0(\\Ic)$ vanishes. \n\\end{enumerate}\nThen the $C^*$-algebra $\\Ac$ is AF-embeddable. \n\\end{lemma}\n\n\\begin{proof}\nStep 1 (reducing to essential ideals): By \\cite[Lemma 1.12]{Sp88} and its proof we obtain a $C^*$-algebra $\\Ac'$ that fits in a commutative diagram \n$$\\xymatrix{\n\\Ic \\ar[r] \\ar@{^{(}->}[d] & \\Ac\\ar[r] \\ar@{^{(}->}[d] & \\Ac\/\\Ic \\ar[d] \\\\\n\\Ic\\otimes\\Kc \\ar[r] & \\Ac'\\ar[r] & \\Ac\/\\Ic\n}\n$$\nwhere $\\Ic\\otimes\\Kc$ embeds as an essential ideal of $\\Ac'$, \nthe first two vertical arrows give rise to group isomorphisms $K_*(\\Ic)\\simeq K_*(\\Ic\\otimes\\Kc)$ ($\\simeq K_*(\\Ic)$) and $K_*(\\Ac)\\simeq K_*(\\Ac')$, while the right-most vertical arrow is an automorphism of $\\Ac\/\\Ic$. \nIt then follows by the hypothesis~\\eqref{embed_item3} along with the naturality of the index map (cf. \\cite[Prop. 9.1.5]{RLL00}) \nthat the index map $\\delta_1\\colon K_1(\\Ac\/\\Ic)\\to K_0(\\Ic\\otimes\\Kc)$ of the bottom horizontal line in the above diagram vanishes. \n\nStep 2 (reducing to AF ideals): \nSince $\\Ic$ is AF-embeddable, it follows that $\\Ic\\otimes\\Kc$ is AF-embeddable, hence there exists an embedding $\\Ic\\otimes\\Kc\\hookrightarrow\\widetilde{\\Jc}$, where $\\widetilde{\\Jc}$ is an AF-algebra. \nLet $\\Jc$ be the hereditary sub-$C^*$-algebra of $\\widetilde{\\Jc}$ generated by $\\Ic\\otimes\\Kc$. \nSince $\\widetilde{\\Jc}$ is an AF-algebra, it then follows by \\cite[Th. 3.1]{Ell76} that $\\Jc$ is an AF-algebra. \nOn the other hand $\\Jc=\\{bcb\\mid 0\\le b\\in\\Ic\\otimes\\Kc,\\ c\\in\\widetilde{\\Jc}\\}$ by \\cite[Cor. II.5.3.9]{Bl06}, \nwhich directly implies that every approximate unit of $\\Ic\\otimes\\Kc$ \nis an approximate unit for~$\\Jc$, too. \nThat is, the embedding $\\Ic\\otimes\\Kc\\hookrightarrow\\Jc$ is approximately unital in the sense of \\cite[Def. 1.10]{Sp88}. \nTherefore we may use \\cite[Rem.
1.11]{Sp88} \nto obtain a commutative diagram \n$$\\xymatrix{\n\t\\Ic\\otimes\\Kc \\ar[r] \\ar@{^{(}->}[d] & \\Ac'\\ar[r] \\ar@{^{(}->}[d] & \\Ac\/\\Ic \\ar[d] \\\\\n\t\\Jc \\ar[r] & \\Ac'+\\Jc\\ar[r] & \\Ac\/\\Ic\n}\n$$\nwhere the right-most vertical arrow is an automorphism of $\\Ac\/\\Ic$. \nSince the index map $\\delta_1\\colon K_1(\\Ac\/\\Ic)\\to K_0(\\Ic\\otimes\\Kc)$ \nof the upper line vanishes by Step~1, \nit then follows by the naturality of the index map \nthat the index map $\\delta_1\\colon K_1(\\Ac\/\\Ic)\\to K_0(\\Jc)$ of the bottom horizontal line in the above diagram vanishes as well. \nNow, since we have seen above that $\\Jc$ is an AF-algebra, it follows by \\cite[Lemma 1.13]{Sp88} applied to the short exact sequence \n$0\\to\\Jc \\to \\Ac'+\\Jc\\to \\Ac\/\\Ic\\to0$ \nthat $\\Ac'+\\Jc$ is AF-embeddable. \n(The $C^*$-algebra $\\Ac'+\\Jc$ belongs to the class~$\\mathcal{N}$ by the two-out-of-three property of that class mentioned in \\cite[V.1.5.4]{Bl06}, so all the hypotheses of \\cite[Lemma 1.13]{Sp88} are satisfied as stated.)\n\nStep 3: The above Steps 1--2 give the embeddings $\\Ac\\hookrightarrow\\Ac'\\hookrightarrow\\Ac'+\\Jc$ \nalong with the fact that $\\Ac'+\\Jc$ is AF-embeddable, \nhence $\\Ac$ is AF-embeddable as well. \n\\end{proof}\n\n \n\n\n\n\n\\begin{lemma}\\label{free_L1}\nLet $\\ng$ be a nilpotent Lie algebra with its centre $\\zg$, \nand assume that $D\\in\\Der(\\ng)$. \nIf there exists $\\xi\\in\\ng^*\\setminus\\zg^\\perp$ \nwith $\\xi\\circ(\\exp D)\\in \\Oc_\\xi$, then \n$\\sigma(D\\vert_\\zg)\\cap 2\\pi\\ie \\ZZ\\ne\\emptyset$. \n\\end{lemma}\n\n\\begin{proof}\nBy hypothesis, there exists $X\\in\\gg$ with $\\xi\\circ(\\exp D)=\\xi\\circ\\exp(\\ad_\\gg X)\\in\\gg^*$. \nThis implies \n$\\xi\\circ(\\exp D)\\vert_{\\zg}=\\xi\\circ\\exp(\\ad_\\gg X)\\vert_\\zg\\in\\zg^*$. \nFor every $Y\\in\\zg$ we have $\\exp(\\ad_\\gg X)Y=Y$ \nand on the other hand since $D$ is a derivation, $D(\\zg)\\subseteq\\zg$. \nTherefore $\\xi\\circ \\exp(D\\vert_\\zg)=\\xi\\vert_\\zg$. \nSince $\\xi\\in\\ng^*\\setminus\\zg^\\perp$, that is, $\\xi\\vert_\\zg\\ne0$, \nwe then obtain $1\\in\\spec(\\exp(D\\vert_\\zg))$. \nTherefore, by the spectral mapping theorem, there exists $w\\in\\sigma(D\\vert_\\zg)$ with $\\exp w=1$, that is, $w\\in2\\pi\\ie \\ZZ$. \n\\end{proof}\n\n\\begin{lemma}\\label{free_L2}\nAssume that $\\ng$ is a nilpotent Lie algebra with its centre $\\zg$, \nand let $X:=(\\ng^*\\setminus\\zg^\\perp)\/N$ be endowed with its quotient topology. \nThen for arbitrary $D\\in\\Der(\\ng)$ the map\n$$\\alpha\\colon X\\times \\RR\\to X,\\quad \n(\\Oc_\\xi,t)\\mapsto \\alpha_t(\\Oc_\\xi):=(\\exp(tD))^*(\\Oc_\\xi)=\\Oc_{\\xi\\circ\\exp(tD)}.$$\nis well defined and a continuous right action. \nMoreover, \n\\begin{enumerate}[\\rm (i)]\n\\item\\label{free_L2_i} if\n$\\sigma(D\\vert_\\zg)\\cap \\ie \\RR=\\emptyset$, \nthen the group action $\\alpha$ is free; \n\\item\\label{free_L2_ii} if $X$ is Hausdorff and \n there exists $\\epsilon \\in \\{-1, 1\\}$ such that \n$\\epsilon \\Re\\, z> 0$ for every $z\\in \\sigma(D\\vert_\\zg)$, \n the \n action \n$\\alpha$ is proper.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n\tIn order to check the equality \n\t\\begin{equation}\\label{free_L2_proof_eq1}\n\t(\\exp(tD))^*(\\Oc_\\xi)=\\Oc_{\\xi\\circ\\exp(tD)}\n\t\\end{equation}\nwe use that for every $Y\\in \\ng$ and every $\\gamma\\in\\Aut(\\ng)$ one has $\\gamma\\circ \\exp(\\ad_\\ng Y)\\circ\\gamma^{-1}=\\exp(\\ad_\\ng \\gamma(Y))$. 
\nTherefore, for $\\gamma:=\\exp(tD)^{-1}$, \n$$\\xi\\circ\\exp(\\ad_\\ng Y)\\circ \\gamma^{-1}\n=\\xi\\circ\\gamma^{-1}\\circ \\exp(\\ad_\\ng \\gamma(Y))\\in\\Oc_{\\xi\\circ\\gamma^{-1}}.$$\nSince the mapping $\\gamma\\colon\\ng\\to\\ng$ is bijective, \nwe then directly obtain~\\eqref{free_L2_proof_eq1}. \n\nIt is clear that $\\alpha$ is a group action of the abelian group $(\\RR,+)$, and its continuity follows by the commutative diagram \n$$\\xymatrix{\n(\\ng^*\\setminus\\zg^\\perp)\\times\\RR \\ar[d]_{q\\times\\id_{\\RR}} \\ar[r]& \\ng^*\\setminus\\zg^\\perp \\ar[d]^{q} \\\\\nX \\times\\RR \\ar[r]^{\\alpha} & X\n}$$\nwhere $q\\colon \\ng^*\\setminus\\zg^\\perp\\to X$, $q(\\xi):=\\Oc_\\xi$ is the quotient map defined by the coadjoint action of $N$, \nwhile the upper horizontal arrow is defined by $(\\xi,t)\\mapsto\\xi\\circ\\exp(tD)$ and is clearly continuous.\n\n\\eqref{free_L2_i} Assume that the group action $\\alpha$ is not free, \nthat is, there exist $t\\in\\RR^\\times$ and $\\xi\\in \\ng^*\\setminus\\zg^\\perp$ \nwith $\\alpha_t(\\Oc_\\xi)=\\Oc_\\xi$. \nBy \\eqref{free_L2_proof_eq1}, we then have $\\Oc_{\\xi\\circ\\exp(tD)}=\\Oc_\\xi$, \nthat is, $\\xi\\circ\\exp(tD)\\in\\Oc_\\xi$. \nThen Lemma~\\ref{free_L1} shows that $\\sigma(tD\\vert_\\zg)\\cap 2\\pi\\ie \\ZZ\\ne\\emptyset$, in particular $\\sigma(D\\vert_\\zg)\\cap \\ie \\RR\\ne\\emptyset$. \n\n\\eqref{free_L2_ii} \n$X$ is a locally compact Hausdorff space.\n\nWithout losing the generality we assume that $\\Re\\, z > 0$ for every $z\\in \\sigma(D\\vert_\\zg)$. \nLet $\\xi,\\eta\\in \\ng^*\\setminus \\zg^{\\perp}$, $ (\\xi_j)_{j\\ge 1}$ be a sequence in \n $\\ng^*\\setminus \\zg^\\perp $, and $(t_j)_{j\\ge1}$ be a sequence in~$\\RR$ such that \n $\\Oc_{\\xi_j} \\to \\Oc_\\xi $ \n\n\nand $\\alpha_{t_j}(\\Oc_{\\xi_j})\\to \\Oc_\\eta $ in~$X$. \n\nAssume that $(t_j)_{j\\ge 1}$ has no limit points, hence it is not bounded. \n It follows that there is a subsequence $(t_{j_k})_{k\\ge 1}$ such that $t_{j_k} \\to +\\infty$ or $t_{j_k} \\to -\\infty$.\n Since $\\Oc_{\\xi_j} \\to \\Oc_\\xi $ in $X$, there is $\\xi'_j \\in \\Oc_{\\xi_j} $ such that \n $\\xi'_j \\to \\xi$, thus $\\xi'_j \\vert_{\\zg}= \\xi_j\\vert_{\\zg} \\to \\xi\\vert_{\\zg}$. \n Similarly, since $\\alpha_{t_j}(\\Oc_{\\xi_j})= \\Oc_{\\xi_j \\circ \\ee^{t_j D}}\\to \\Oc_\\eta $ in $X$, \n it follows that $\\xi_j \\circ \\ee^{t_j D}\\vert_{\\zg} \\to \\eta\\vert_{\\zg}$. \n Assume that $t_{j_k}\\to -\\infty$. \n Then $\\xi_{j_k} \\circ \\ee^{t_{j_k} D}\\vert_{\\zg} \\to 0$ and we get that $\\eta\\vert_{\\zg}=0$. \n This is not possible since $\\eta \\in \\ng^* \\setminus \\zg^\\perp$. \n If $t_{j_k}\\to +\\infty$,\nthen $\\xi_{j_k} \\circ \\ee^{t_{j_k} D}\\vert_{\\zg}\\to +\\infty$. \nThis is again impossible since $\\xi_{j_k} \\circ \\ee^{t_{j_k} D}\\vert_{\\zg}\\to \\eta\\vert_\\zg$. \nTherefore the sequence $(t_j)_{j\\ge1}$ must have a limit point, hence the action $\\alpha$ is proper. 
\n(See \\cite[Lemma~3.42]{Wi07}.)\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\begin{lemma}\\label{special}\nLet $H$ be a nilpotent Lie group with its Lie algebra $\\hg$ and $\\zg_\\hg$ the centre of $\\hg$.\nAssume the following conditions hold:\n\\begin{enumerate}[{\\rm (i)}]\n\\item\\label{cond_i} $\\hg$ is two-step nilpotent and $[\\hg, \\hg]=\\zg_\\hg$.\n\\item\\label{cond_ii} The non-trivial coadjoint orbits of $M$ have the same dimension $d$, that is, there is \n$d\\in \\NN$ such that \n$$ \\hg^*\/H= (\\hg^*\/H)_d \\sqcup [\\hg, \\hg]^{\\perp}.$$\nHere $(\\hg^*\/H)_d$ denotes the space of coajoint orbits of $H$ of dimension $d$. \n\\end{enumerate}\nLet $D\\in \\Der(\\hg)$ be such that there exist $z_1, z_2 \\in \\sigma(D)$ such that $(\\Re z_1) (\\Re z_2) \\le 0$.\nThen $C^*(H \\rtimes_D \\RR)$ is AF-embeddable. \n\\end{lemma}\n\n\\begin{proof}\nAssume first that there are $z_1, z_2 \\in \\sigma(D\\vert _{\\zg_\\hg})$ such that $(\\Re z_1) (\\Re z_2) \\le 0$. \nThen $C^*(H\\rtimes_D \\RR)$ is AF-embeddable, by Proposition~\\ref{NC}, and this proves the lemma in this case. \n\n\nNext assume that $\\epsilon\\in \\{-1, 1\\}$ such that \n\\begin{equation}\\label{roots_1}\n\\sigma(D \\vert_{\\zg_\\hg}) \\subset \\epsilon (0, \\infty) +\\ie \\RR. \n\\end{equation}\nDenote by $D\/\\zg_\\hg$ the derivation of $\\hg \/\\zg_\\hg$ obtained from $D$. \nThen the hypothesis and \\eqref{roots_1} show that there exist\n\\begin{equation}\\label{roots_2}\nz_1, z_2 \\in \\sigma(D\/\\zg_\\hg) \\quad \\text{such that} \\; \\; \\Re z_1\\le 0\\le \\Re z_2.\n\\end{equation}\nThen there is a short exact sequence\n$$ 0 \\rightarrow \\Ic \\rightarrow C^*(H)\\rightarrow C^*(H\/Z_H)\\rightarrow 0, \n$$\nwhere $\\widehat{\\Ic} = (\\hg^*\/H)_d$ and $Z_M=\\exp \\zg_\\hg$ is the centre of $H$.\nSince $\\hg$ is two step nilpotent, hence it has only flat orbits, it follows from \\cite[Lemma~6.8]{BBL17} that $\\Ic$ has continuous trace. \nSince $ C^*(H\/Z_H)$ is invariant under automorphisms of $H$, $\\Ic$ is $\\Aut(H)$ invariant as well. \nWe thus obtain the short exact sequence\n\\begin{equation}\\label{ses}\n0 \\rightarrow \\Ic\\rtimes \\RR \\rightarrow C^*(H\\rtimes_{D} \\RR) \\rightarrow C^*(H\/Z_H\\rtimes_{D\/\\zg_\\hg} \\RR )\\rightarrow 0, \n\\end{equation}\nHere, we have that $\\sigma(D) \\cap \\ie \\RR =\\emptyset$ \nand \\ref{roots_1}, hence, by Lemma~\\ref{free_L2}, the continuous action \n$\\alpha_{D} \\colon \\RR \\times \\widehat{\\Ic} \\to \\widehat{\\Ic}$ is free and proper. \nThus $\\Ic \\rtimes_{D} \\RR$ has continuous trace, hence it is AF-embeddable. \nIt follows from condition \\eqref{roots_2} and Example~\\ref{ax+b} that \n$C^*(H\/Z_H\\rtimes_{D\/\\zg_\\hg} \\RR)$ is AF-embeddable as well. \nIf in addition $\\dim(\\hg\/ \\zg_\\hg)$ is odd, then $K_1(C^*(H\/Z_M\\rtimes_{D\/\\zg_\\hg} \\RR) ) = K_0 (C^*(H\/Z_H))=0$, hence the index \nmap corresponding to \\eqref{ses} vanishes. \nTherefore, by Lemma~\\ref{embed}, $C^*(H\\rtimes_{D} \\RR)$ is AF-embeddable. 
\n\\end{proof}\n\n\n\\begin{remark}\n\\normalfont\nIn the above Lemma, if $\\dim(\\hg\/ \\zg_\\hg)$ is even, the index map corresponding to \\eqref{ses} may not vanish, and \n$C^*(H\\rtimes_{D} \\RR)$ may not be AF-embeddable, or even stably finite, as we will see below.\n\\end{remark}\n\n\n\n\n\n\\begin{remark}\\label{deriv-ext}\n\\normalfont\nLet $\\hg$ be a finite-dimensional real Lie algebra with a symplectic structure $\\omega\\colon\\hg\\times\\hg\\to\\RR$, and define the corresponding central extension \n$\\ng:=\\hg\\dotplus_\\omega\\RR$ with its Lie bracket $[(X_1,t_1),(X_2,t_2)]:=([X_1,X_2],\\omega(X_1,X_2))$ for all $X_1,X_2\\in\\hg$ and $t_1,t_2\\in\\RR$. \nFor any $D_0\\in\\Der(\\hg)$ and $a_0\\in\\RR$ we define the linear map $D\\colon\\ng\\to\\ng$, $D(X,t):=(D_0X,a_0t)$. \nThen $D\\in\\Der(\\ng)$ if and only if \n$$\\omega(D_0X_1,X_2)+\\omega(X_1,D_0X_2)=a_0\\omega(X_1,X_2) \n\\text{ for all }X_1,X_2\\in\\hg.$$\n\\end{remark}\n\n\\begin{lemma}\\label{N6N15-lemma}\n\tLet $\\hg$ be the real Lie algebra with a basis $X_1,X_2,X_3,Y_1,Y_2,Y_3$ satisfying the commutation relations $$[X_1,X_2]=Y_3,\\ [X_2,X_3]=Y_1,\\ [X_3,X_1]=Y_2.$$\n\t\\begin{enumerate}[{\\rm(i)}]\n\t\t\\item\\label{N6N15_item0} The centre of $\\hg$ is $\\zg:=\\spa\\{Y_1,Y_2,Y_3\\}$ and for every $\\xi\\in\\hg^*\\setminus\\zg^\\perp$ we have $\\dim\\Oc_\\xi=2$. \n\t\t\\item\\label{N6N15_item1} \n\t\tFor any $a_1,a_2,a_3\\in\\RR$ there exists a unique skew-symmetric bilinear functional \n\t\t$$\\omega\\colon\\hg\\times\\hg\\to\\RR$$ \n\t\tsatisfying $\\omega(X_j,Y_k)=\\delta_{jk}a_j$, $\\omega(X_j,X_k)=\\omega(Y_j,Y_k)=0$ for all $j,k\\in\\{1,2,3\\}$. \n\t\tMoreover, $\\omega$ is a symplectic structure of the Lie algebra $\\hg$ if and only if \n\t\t\\begin{equation}\\label{N6N15_item1_eq1}\n\t\ta_1+a_2+a_3=0\\text{ and }a_1a_2a_3\\ne0.\n\t\t\\end{equation}\n\t\t\\item\\label{N6N15_item2} \n\t\tFor any matrix $B=(b_{jk})_{1\\le j,k\\le3}\\in M_3(\\RR)$ there exists a unique derivation $D_B\\in\\Der(\\hg)$ satisfying $D_BX_j=\\sum\\limits_{k=1}^3b_{jk}X_k$ for $j=1,2,3$. \n\t\tIf $a_1,a_2,a_3\\in\\RR$ satisfy~\\eqref{N6N15_item1_eq1} and $\\omega$ is their corresponding symplectic structure of~$\\hg$ as in~\\eqref{N6N15_item1} above, then there exists $D\\in\\Der(\\hg\\dotplus_\\omega\\RR)$ with $D\\vert_\\hg=D_B$ \n\t\tif and only if \n\t\t\\begin{equation}\\label{N6N15_item2_eq2}\n\t\tb_{ij}(a_i-a_j)=0\\text{ for all }j,k\\in\\{1,2,3\\}\n\t\t\\end{equation}\n\t\tand if this is the case then $D(0,1)=(0,\\Tr B)$. \n\t\t\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n\t\t\\eqref{N6N15_item0} This is well known. \n\n\\eqref{N6N15_item1} \nIt is straightforward to check that $\\omega$ is a 2-cocycle if and only if $a_1+a_2+a_3=0$, and on the other hand $\\omega$ is non-degenerate if and only if $a_1a_2a_3\\ne0$. \nTherefore, $\\omega$ is a symplectic structure of the Lie algebra $\\hg$ if and only if \\eqref{N6N15_item1_eq1} is satisfied. \n\n\\eqref{N6N15_item2} \nIn order to obtain a derivation $D_B\\colon\\hg\\to\\hg$ we define $D_BY_j:=[D_B X_r,X_s]+[X_r,D_BX_s]$ if $Y_j=[X_r,X_s]$. 
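\nFor instance, for $j=1$ this gives \n$$D_BY_1=[D_BX_2,X_3]+[X_2,D_BX_3]=b_{22}Y_1-b_{21}Y_2+b_{33}Y_1-b_{31}Y_3=(b_{22}+b_{33})Y_1-b_{21}Y_2-b_{31}Y_3.$$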
\nA straightforward computation then leads to the formula \n\\begin{equation}\\label{N6N15_proof_eq1}\nD_BY_j=-\\sum_{k\\ne j}b_{kj}Y_k+\\Bigl(\\sum_{k\\ne j}b_{kk})Y_j\n\\text{ for }j=1,2,3,\n\\end{equation}\nwhich further implies \n$$\\omega(D_B X_i,Y_j)+\\omega(X_i,D_B Y_j)=\n\\begin{cases}\n(\\Tr B)a_j=(\\Tr B)\\omega(X_j,Y_j)&\\text{ if }i=j,\\\\\nb_{ij}(a_i-a_j)&\\text{ if }i\\ne j.\n\\end{cases}\n$$\nSince $\\omega(X_i,Y_j)=0$ if $i\\ne j$, the assertion then follows by Remark~\\ref{deriv-ext}. \n\\end{proof}\t\t\n\t\t\n\t\t\n\t\t\\begin{theorem}\\label{N6N15}\nLet $\\hg$ be the real Lie algebra with a basis $X_1,X_2,X_3,Y_1,Y_2,Y_3$ satisfying the commutation relations $$[X_1,X_2]=Y_3,\\ [X_2,X_3]=Y_1,\\ [X_3,X_1]=Y_2, $$\n\t\tand , for $b_1, b_2, b_3 \\in \\RR$, $b_1+b_2+b_3\\ne 0$, let $D\\in \\Der(\\hg)$ be the unique derivation with \n\t\t$D X_j= b_{j}X_j$ for $j=1,2,3$.\n\t\tIf we denote $\\ng:=\\hg\\dotplus_\\omega\\RR$ and $N\\rtimes_D\\RR$ is the simply connected Lie group whose Lie algebra is $\\ng\\rtimes\\RR D$, \n\t\tthen the following assertions are equivalent: \n\t\t\\begin{itemize}\n\t\t\t\\item $C^*(N\\rtimes_D\\RR)$ is not AF-embeddable. \n\t\t\t\\item $C^*(N\\rtimes_D\\RR)$ is not stably finite. \n\t\t\t\\item There exists $\\epsilon\\in\\{\\pm1\\}$ with \n\t\t\t$\\epsilon z> 0$ for every $z\\in \\sigma(D)$. \n\t\t\t\\item There exists $\\epsilon\\in\\{\\pm1\\}$ with \n\t\t\t$\\epsilon b_j > 0$ for $j=1,2,3$. \n\t\t\\end{itemize}\n\\end{theorem}\n\n\\begin{proof} \nThe last two assertions in the statement are clearly equivalent since \n$$\\sigma(D)=\\{b_j\\mid j=1,2,3\\}\\cup\\Bigl\\{\\sum_{k\\ne j}b_k\\mid j=1,2,3\\Bigr\\}\n\\cup\\{b_1+b_2+b_3\\}$$\nby \\eqref{N6N15_item2} and \\eqref{N6N15_proof_eq1}. \n\nIf there exists $\\epsilon\\in\\{\\pm1\\}$ with \n$\\epsilon z> 0$ for every $z\\in \\sigma(D)$, then $C^*(N\\rtimes_D\\RR)$ is not stably finite by \nTheorem~\\ref{prop-cf4}. \n\nConversely, if there exists no $\\epsilon\\in\\{\\pm1\\}$ with \n$\\epsilon b_j > 0$ for $j=1,2,3$, then \nit suffices to show that $C^*(H\\rtimes_D\\RR)$ is AF-embeddable,\nsince $\\dim(N\\rtimes_D\\RR)=8\\in 4\\ZZ$ and by using Corollary~\\ref{cf-cor8}\\eqref{cf-cor7-ii}, this implies that \n$C^*(N\\rtimes_D\\RR)$ is AF-embeddable.\nNow the fact that $C^*(H\\rtimes_D\\RR)$ is AF-embeddable follows\n\\eqref{N6N15_item0} and Lemma~\\ref{special}, and this finishes the proof. \n\\end{proof}\n\nFor Theorem~\\ref{N6N17} below, we recall that a \\emph{finite-dimensional real division algebra} is a finite-dimensional real vector space $\\KK$ endowed with a bilinear map $\\KK\\times\\KK\\to\\KK$, $(v,w)\\mapsto v w$, \nwhose corresponding linear mappings $v\\mapsto v w_0$ and $w\\mapsto v_0 w$ are injective (hence bijective) for all $v_0,w_0\\in\\KK\\setminus\\{0\\}$. \nIf this is the case, then $\\dim_\\RR\\KK\\in\\{1,2,4,8\\}$ by \\cite[Cor. 1]{BoMi58}, \nand these values of $\\dim_\\RR\\KK$ are realized for instance if $\\KK$ is the real field~$\\RR$, the complex field~$\\CC$, the quaternion field~$\\mathbb{H}$, and the octonion (non-associative) algebra~$\\mathbb{O}$, respectively. \n\nLet $\\KK$ be a finite-dimensional real division algebra and define \n$$\n\\begin{gathered}\n\\omega\\colon(\\KK^n\\times\\KK^n)\\times(\\KK^n\\times\\KK^n)\\to\\KK,\\\\\n\\omega((v_1,w_1),(v_2,w_2)):=\\sum_{k=1}^n(v_{1k}w_{2k}-v_{2k}w_{1k})\n\\end{gathered}\n$$\nfor $v_j=(v_{j1},\\dots,v_{jn}), w_j=(w_{j1},\\dots,w_{jn})\\in\\KK^n$, $j=1,2$. 
\nIt is clear that $\\omega((v_1,w_1),(v_2,w_2))=-\\omega((v_2,w_2),(v_1,w_1))$, \nhence we may define the \\emph{real} 2-step nilpotent Lie algebra \n\\begin{equation}\\label{hgK}\n\\hg_\\KK:=\\KK^n\\times\\KK^n\\times\\KK\n\\end{equation}\nwith its Lie bracket \n$$[(v_1,w_1,z_1),(v_2,w_2,z_2)]:=[(0,0,\\omega((v_1,w_1),(v_2,w_2)))].$$\nLet $H_\\KK$ be a connected, simply connected nilpotent Lie group whose Lie algebra is $\\hg_\\KK$.\n\n\\begin{lemma}\\label{N6N17-lemma}\nLet $\\KK$ be a finite-dimensional real division algebra, and define \n $\\zg:=\\{0\\}\\times\\{0\\}\\times\\KK\\subseteq\\hg_\\KK$. Then the following assertions hold: \n\\begin{enumerate}[{\\rm(i)}]\n\t\\item\\label{N6N17_item1} We have $[\\hg_\\KK,\\hg_\\KK]=\\zg$ and $\\zg$ is the centre of $\\hg$. \n\t\\item\\label{N6N17_item2} For every $\\xi\\in\\hg_\\KK^*\\setminus[\\hg_\\KK,\\hg_\\KK]^\\perp$ we have $\\hg_\\KK(\\xi)=\\zg$ and $\\dim\\Oc_\\xi=2n\\dim_\\RR\\KK$. \n\t\\item\\label{N6N17_item3} \n\tThe mapping\n\t$r_\\zg\\colon (\\hg_\\KK^*\\setminus\\zg^\\perp)\/H_\\KK\\to \\zg^*\\setminus\\{0\\}\n\t,\\quad \\Oc_\\xi\\mapsto\\xi\\vert_{\\zg}$\n\tis a well-defined homeomorphism. \n\t\\item\\label{N6N17_item4} If $a=(a_1,\\dots,a_n),b=(b_1,\\dots,b_n)\\in\\RR^n$, $c\\in\\RR$, and $D\\colon\\hg_\\KK\\to\\hg_\\KK$ is the $\\RR$-linear mapping defined by \n\t$$D(v,w,z):=((a_1v_1,\\dots,a_nv_n),(b_1w_1,\\dots,b_nw_n),cz), $$ \n\tthen $D\\in\\Der(\\hg_\\KK)$ if and only if $a_k+b_k=c$ for $k=1,\\dots,n$. \n\t\\end{enumerate}\n \\end{lemma}\n \n\n\\begin{proof}\n \\eqref{N6N17_item1} \nIt is clear that $[\\hg_\\KK,\\hg_\\KK]=\\zg$ and $[\\hg_\\KK,\\zg]=\\{0\\}$.\nIn order to prove that $\\zg$ is actually equal to the centre of $\\hg_\\KK$, \nlet us assume that there exists $x_0:=(v_0,w_0,z)\\in\\hg_\\KK\\setminus\\zg$ with $[x_0,\\hg_\\KK]=\\{0\\}$ and $v_0=(v_{01},\\dots,v_{0n}), w_0=(w_{01},\\dots,w_{0n})\\in\\KK^n$. \nSince $x_0\\not\\in\\zg$, there exists $j\\in\\{1,\\dots,n\\}$ with $v_{0j}\\in\\KK\\setminus\\{0\\}$ or $w_{0j}\\in\\KK\\setminus\\{0\\}$. \nIf for instance $v_{0j}\\ne0$, then we define $x:=(v,w,0)\\in\\hg$ \nwhere $v=0\\in\\KK^n$ and $w=(w_1,\\dots,w_n)\\in\\KK^n$ \nis given by $w_k=0$ if $k\\in\\{1,\\dots,n\\}\\setminus\\{j\\}$ and $w_j=v_{0j}\\in\\KK$, \nand we obtain $[x_0,x]=(0,0,v_{0j}v_{0j})\\in\\hg\\setminus\\{0\\}$, which is a contradiction with the assumption $[x_0,\\hg]=\\{0\\}$. \nThe case $w_j\\in\\KK\\setminus\\{0\\}$ can be discussed similarly. \n\n\\eqref{N6N17_item2} \nWe have $\\hg_\\KK(\\xi)=\\{x\\in\\hg\\mid[x,\\hg_\\KK]\\subseteq\\Ker\\xi\\}$, hence the inclusion $\\zg\\subseteq\\hg_\\KK(\\xi)$ follows by~\\eqref{N6N17_item1}. \nFor the converse inclusion, assume there exists $x_0:=(v_0,w_0,z)\\in\\hg_\\KK(\\xi)\\setminus\\zg$. \nSince $x_0\\not\\in\\zg$, there exists $j\\in\\{1,\\dots,n\\}$ with $v_{0j}\\in\\KK\\setminus\\{0\\}$ or $w_{0j}\\in\\KK\\setminus\\{0\\}$. \nIf for instance $v_{0j}\\ne0$, then for any $y=(v,w,0)\\in\\hg$ with $v=0\\in\\KK^n$ and $w=(w_1,\\dots,w_n)\\in\\KK^n$ \nwith $w_k=0$ if $k\\in\\{1,\\dots,n\\}\\setminus\\{j\\}$ we have \n$[x_0,x]=(0,0,v_{0j}w_j)$. \nHere $w_j\\in\\KK$ is arbitrary and $v_{0j}\\in\\KK\\setminus\\{0\\}$ hence, \nsince $\\KK$ is a division algebra, it follows that $\\zg\\subseteq[x_0,\\hg_\\KK]$. 
\nOn the other hand, we have by assumption $x_0\\in\\hg_\\KK(\\xi)$, hence \n$[\\hg_\\KK,\\hg_\\KK]=\\zg\\subseteq[x_0,\\hg_\\KK]\\subseteq\\Ker\\xi$, which is a contradiction with the hypothesis $\\xi\\in\\hg_\\KK^*\\setminus[\\hg_\\KK,\\hg_\\KK]^\\perp$. \n\nThe second assertion follows by the general equality $\\dim\\Oc_\\xi=\\dim(\\hg_\\KK\/\\hg_\\KK(\\xi))$. \n\n\\eqref{N6N17_item3} \nFor every $\\xi\\in\\hg_\\KK^*\\setminus[\\hg_\\KK,\\hg_\\KK]^\\perp$ we have $\\Oc_\\xi=\\xi+\\zg^\\perp$ by~\\eqref{N6N17_item2}, hence the mapping $r_\\zg$ is well-defined and bijective. \nMoreover, if we define $r\\colon \\hg_\\KK^*\\setminus\\zg^\\perp\\to \\zg^*\\setminus\\{0\\}$, $\\xi\\mapsto\\xi\\vert_\\zg$, and $q\\colon \\hg_\\KK^*\\setminus\\zg^\\perp\\to(\\hg_\\KK^*\\setminus\\zg^\\perp)\/H_\\KK$, $\\xi\\mapsto \\Oc_\\xi$, \nthen $r_\\zg\\circ q=r$ and, since $r$ is a continuous open mapping and $q$ is a quotient mapping, it follows that $r_\\zg$ is continuous and open. \n\n\\eqref{N6N17_item4} \nThis assertion is straightforward. \n\\end{proof}\n\n\n\\begin{theorem}\\label{N6N17}\nLet $\\KK$ be a finite-dimensional real division algebra.\nAssume $a=(a_1,\\dots,a_n),b=(b_1,\\dots,b_n)\\in\\RR^n$, $c\\in\\RR$,\nare such that $a_k+b_k=c \\ne 0$ for $k=1,\\dots,n$,\nand let $D\\in \\Der(\\hg_\\KK)$ be the $\\RR$-linear mapping defined by \n\t$D(v,w,z):=((a_1v_1,\\dots,a_nv_n),(b_1w_1,\\dots,b_nw_n),cz)$.\nLet $H_\\KK\\rtimes_D\\RR$ be the simply connected Lie group whose Lie algebra is $\\hg_\\KK\\rtimes\\RR D$; then \n\t$\\Pg(C^*(H_\\KK\\rtimes_D \\RR))\\ne\\{0\\}$. \n\tIf moreover $\\KK\\ne\\RR$, then $C^*(H_\\KK\\rtimes_D \\RR)$ is not stably finite. \n \\end{theorem}\n\n\\begin{proof} \nSince $H_\\KK$ is a nilpotent Lie group, we may use Kirillov's homeomorphism \n$\\widehat{H_\\KK}\\simeq\\hg_\\KK^*\/H_\\KK$ along with \\eqref{N6N17_item3} to obtain a short exact sequence \n$$0\\to\\Ic\\hookrightarrow C^*(H_\\KK)\\to C^*(H_\\KK\/Z)\\to 0$$\nwhere one has $*$-isomorphisms \n$\\Ic\\simeq \\Cc_0(\\zg^*\\setminus\\{0\\})\\otimes\\Kc$ \nand $C^*(H_\\KK\/Z)\\simeq \\Cc_0(\\zg^\\perp)$. \nThe derivation $D$ gives rise to a group action $\\alpha\\colon \\hg_\\KK^*\/H_\\KK\\times\\RR\\to\\hg_\\KK^*\/H_\\KK$, $(\\Oc_\\xi,t)\\mapsto \\Oc_{\\xi\\circ\\ee^{tD}}$. \nIf we fix any real scalar product on $\\zg$ and we denote by $S_{\\zg^*}$ its corresponding unit sphere, then we have the homeomorphism \n$$S_{\\zg^*}\\times\\RR\\to\\zg^*\\setminus\\{0\\},\\quad (\\eta,t)\\mapsto \\ee^{tc}\\eta$$\nsince $c\\in\\RR\\setminus\\{0\\}$, \nand then we obtain a $*$-isomorphism $\\Cc_0(\\zg^*\\setminus\\{0\\})\\simeq\\Cc(S_{\\zg^*})\\otimes \\Cc_0(\\RR)$. \nWe then obtain $*$-isomorphisms \n\\begin{align*}\n\\Ic\\rtimes_\\alpha\\RR\n& \\simeq (\\Cc_0(\\zg^*\\setminus\\{0\\})\\otimes\\Kc)\\rtimes_\\alpha\\RR\n\\simeq \\Cc(S_{\\zg^*})\\otimes (\\Cc_0(\\RR)\\rtimes_c\\RR)\\otimes\\Kc \\\\\n& \\simeq \\Cc(S_{\\zg^*})\\otimes \\Kc(L^2(\\RR))\\otimes\\Kc \\\\\n& \\simeq \\Cc(S_{\\zg^*})\\otimes\\Kc.\n\\end{align*}\nThe above crossed product $\\Cc_0(\\RR)\\rtimes_c\\RR$ is defined via the group action \n$$\\Cc_0(\\RR)\\times\\RR\\to \\Cc_0(\\RR), \\quad (\\varphi,t)\\mapsto \\varphi_t, \\text{ where }\\varphi_t(s):=\\varphi(s\\ee^{tc}),$$ \nhence one has a $*$-isomorphism \n$\\Cc_0(\\RR)\\rtimes_c\\RR\\simeq \\Kc(L^2(\\RR))$ since $c\\in\\RR\\setminus\\{0\\}$. \nThe above $*$-isomorphisms show that $\\Pg(\\Ic\\rtimes_\\alpha\\RR)\\ne\\{0\\}$. 
\nSince $\\Ic\\rtimes_\\alpha\\RR$ is an ideal of $C^*(H_\\KK)\\rtimes_\\alpha\\RR$, \nand $C^*(H_\\KK)\\rtimes_\\alpha\\RR\\simeq C^*(H_\\KK\\rtimes_D\\RR)$, we obtain \n$\\Pg(C^*(H_\\KK \\rtimes_D \\RR))\\ne\\{0\\}$. \n\nIf moreover $\\KK\\ne\\RR$, then $\\dim_\\RR\\KK\\in 2\\ZZ$, hence $\\dim(H_\\KK\\rtimes_D\\RR)\\in 2\\ZZ+1$, \n it follows that $C^*(H_\\KK\\rtimes_D \\RR)$ is not stably finite by Theorem~\\ref{4n+2}. \n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSimple Temporal Problems (STPs) provide a formal structure to describe possible time relations between events. These time relations feature in a wide variety of real-world problems \\cite{art:time21} and include precedences, maximum elapsed time, and a single release and due date per event. A major advantage of STPs is that they are solvable in polynomial time with standard shortest path methods~\\cite{art:stp}. Nevertheless, researchers and practitioners are still fairly limited with respect to what can be modeled using STPs. Even an otherwise simple feature such as multiple release and due dates per event cannot be expressed with STPs.\n\nAlternatively, Disjunctive Temporal Problems (DTPs) offer a much broader framework for describing time relations. However, this expressiveness is offset by the fact that they usually represent NP-Complete problems \\cite{art:dtps}. Simple Disjunctive Temporal Problems (SDTPs) are a primitive type of DTP which generalize STPs and retain efficient polynomial-time solution methods. SDTPs extend STPs by enabling multiple, non-overlapping release and due dates per event.\n\nThroughout the academic literature, SDTPs are referred to by many names: \\textsc{Star} class of problems, zero-one extended STPs and t\\textsubscript{2}DTPs. While it is difficult to know for sure why exactly so much terminology exists for the same problem, one could speculate that it might be that different researchers have each arrived at SDTPs from different angles. Although the particular reason for this terminological variance is not our main concern, it is clear that it results in a highly inefficient scenario. Researchers and practitioners alike end up being held back by the burdensome task of needing to discover what is known about SDTPs when the findings are catalogued under different names. Moreover, even when one does locate existing literature concerning SDTPs, those research papers are primarily theoretical in nature and provide neither a practical implementation of the methods nor empirical insights concerning how the behavior of those algorithms compares under different scenarios.\n\nWe have experienced precisely the situation outlined when trying to compare different approaches for scheduling tasks with multiple time windows in logistical problems such as vehicle routing with synchronizations \\cite{art:vrpms,art:delsynch}, pickup and delivery with transshipment \\cite{art:pdpt}, dial-a-ride with transfers \\cite{art:darpt} and truck driver scheduling with interdependent routes \\cite{art:ssvb-1}. SDTPs are an excellent model for scheduling in these problems. Nevertheless, one needs extremely efficient SDTP methods when solving these logistical problems via local search heuristics, for example. 
Given the fact that the literature is not only difficult to navigate but also lacks empirical results, we needed to (i) find existing methods, (ii) implement them and (iii) evaluate their advantages and limitations in different cases before employing SDTPs in our applications.\n\nThe goal of this paper is therefore fourfold. First, we propose a standard nomenclature to refer to SDTPs so that future researchers can refer to the same problem by the same name. Second, we explore existing methods and develop new algorithms to solve SDTPs that are capable of not only reducing the theoretical asymptotic worst-case time and space complexities but also ensuring good performance in practice. Third, we provide an empirical study alongside open source implementations of all of the techniques and also make our instances publicly available with the aim of avoiding duplicate work\\footnote{The complete code repository will be made available at a later date.}. Fourth and finally, we hope this paper will serve as a foundation for researchers and practitioners who would like to apply SDTPs in their work and build upon our research and implementations to easily achieve their goals.\n\n\\section{The simple disjunctive temporal problem} \\label{sec:sdtp}\n\nLet us begin by formally defining simple temporal problems, which will be useful when introducing simple disjunctive temporal problems. \n\n\\begin{definition}[Simple Temporal Problem \\cite{art:stp}]\\label{def:stp}\nA \\textit{Simple Temporal Network} (STN) is denoted $N=(T,C)$ where $T$ is the set of \\textit{variables} or \\textit{time-points} and $C$ is the set of binary constraints relating variables of $T$. A time-point $i \\in T$ has a closed domain $[l_i,u_i],\\ l_i,u_i \\in \\mathbb{R}$. Constraints in $C$ are \\textit{simple temporal constraints} given as a tuple $(i,j,w_{ij}) \\in C,\\ i,j \\in T,\\ w_{ij} \\in \\mathbb{R}$ which corresponds to Equation~\\ref{eq:stc}:\n\\begin{equation}\n s_i - s_j \\leq w_{ij} \\label{eq:stc}\n\\end{equation}\n\\noindent where $s_i,s_j \\in \\mathbb{R}$ denote the solution values assigned to time-points $i$ and $j$. \n\n The STP involves determining whether its associated STN is consistent. A network $N$ is consistent iff a \\textit{feasible schedule} or \\textit{solution} $s$ can be derived such that times $s_i$ assigned to each $i \\in T$ respect all constraints present in $C$ and the domains of each time-point.\n\\end{definition}\n\n \\citet{art:stp} showed that an STN can be represented with a distance graph where time-points $T$ are nodes and constraints $C$ are arcs connecting these nodes. First, let us associate a special time-point $\\alpha$ with domain $[0,0]$ to represent the beginning of the time horizon $s_\\alpha = 0$. This fixed point can be used to write unary constraints such as domain boundaries over time-points in $T$ as simple temporal constraints. For example, the bound $[l_i,u_i]$ for $i \\in T$ can be written as:\n \\begin{align*}\n s_\\alpha - s_i &\\leq -l_i\\\\\n s_i - s_\\alpha &\\leq u_i\n \\end{align*}\n \n We can then associate two distance graphs with STN $N=(T,C)$: the \\textit{direct graph} $G_D=(V,A_D)$ and the \\textit{reverse graph} $G_R=(V,A_R)$. For both of these graphs $V=T \\cup \\{\\alpha\\}$. Arc set $A_D= C \\cup \\{(\\alpha,i,-l_i), (i,\\alpha,u_i)\\ \\forall\\ i \\in T\\}$ for which an element $(i,j, w_{ij}) \\in A_D$ denotes an arc from node $i$ to $j$ with weight $w_{ij}$. 
Meanwhile, $A_R$ is the same as $A_D$ but with the direction of each arc reversed: $(i,j,w_{ij}) \\in A_D$ is $(j,i,w_{ij}) \\in A_R$.\n\nDetermining consistency of an STN then reduces to verifying the existence of negative-cost cycles in either $G_D$ or $G_R$. If there is no negative-cost cycle, the shortest path distance $\\tau_{\\alpha i}$ from node $\\alpha \\in V$ to every other node $i \\in V \\backslash \\{\\alpha\\}$ provides a feasible schedule. When computed over $G_D$, $s_i = -\\tau_{\\alpha i}$ provides the \\textit{earliest feasible schedule}. Meanwhile, when computed over $G_R$, $s_i=\\tau_{\\alpha i}$ provides the \\textit{latest feasible schedule}. The earliest feasible schedule can be defined as the solution $s$ for which given any other feasible solution $s'$ to the SDTP, it holds that $s_i \\leq s'_i,\\ \\forall\\ i \\in T$. Similarly, for the latest feasible schedule it holds that $s_i \\geq s'_i,\\ \\forall\\ i \\in T$.\n\nThere are many algorithms that can be used to detect negative-cost cycles quickly in a distance graph \\cite{art:sp-fp}. One of the most simple is \\textsc{BellmanFord} \\cite{book:cormen}. Indeed, this is an algorithm employed by most methods to solve SDTPs. For the remainder of this paper, we always consider \\textsc{BellmanFord} to refer to its implementation as a label-correcting algorithm using a first-in, first-out queue \\cite{art:sp-fp}.\n\nSTPs can only accommodate time-points for which the domain is a single interval. In order to tackle problems where time-points may be assigned values in one of several disjunctive intervals, a more expressive model is required. Let us now turn our attention to the main problem in this paper: the simple disjunctive temporal problem.\n\n\\begin{definition}[Simple Disjunctive Temporal Problem]\\label{def:sdtp}\nA \\textit{Simple Disjunctive Temporal Network} (SDTN) is denoted $N=(T,C)$, where $T$ is the set of time-points and $C=C_1 \\cup C_2$ are the constraints over these time-points. Based on the classification introduced by \\citet{art:kra}, constraints in $C_1$ are Type 1 while those in $C_2$ are Type 2.\n\\begin{enumerate}\n \\item[] (Type 1) Simple temporal constraints $(i,j,w_{ij}) \\in C_1,\\ i,j \\in T$ representing Equation \\ref{eq:stc}\n \\item[] (Type 2) Simple disjunctive constraints $(i,D_i) \\in C_2$, where $i \\in T$ and $D_i$ is a list of \\textit{intervals} or \\textit{domains} denoted $[l^c_i,u^c_i] \\in D_i,\\ l^c_i,u^c_i \\in \\mathbb{R}$ representing the disjunction\n \\begin{equation*}\n \\bigvee_{[l^c_i,u^c_i] \\in D_i} (l^c_i \\leq s_i \\leq u^c_i) \n \\end{equation*}\n Note that $C_2$ includes unary constraints ($|D_i|=1$). $T_D \\subseteq T$ denotes the set of all time-points for which $|D_i| > 1$. \n\\end{enumerate}\n\nIn order to solve the SDTP, we need to determine whether SDTN $N$ is consistent. Similar to STPs, $N$ is consistent iff a feasible solution $s$ can be derived which respects both constraints $C_1$ and $C_2$. \n\n\\end{definition}\n\n\n\nWe assume that domains in $D_i$ are sorted in ascending order \\cite{art:kra,art:cra}. Let $K = \\max_{(i,D_i) \\in C_2} |D_i|$ denote the largest number of domains for any given time-point and $\\omega = \\sum_{(i,D_i) \\in C_2} |D_i|$ denote the total number of domains in the instance. Let us further denote by $L(D_i)$ and $U(D_i)$ the lower and upper bound values in all domains of $i \\in T$, respectively. The \\textit{global boundary} of $i$ is given by $[L(D_i),U(D_i)]$ such that $s_i$ must belong to this boundary. 
However, some values within these bounds can still be infeasible. In other words: the domains of time-points are not continuous.\n\nThe existence of Type 2 constraints means we cannot solve the problem directly via shortest paths. Nevertheless, we can use the global boundaries of the time-points to redefine graphs $G_D$ and $G_R$ with $A_D=C_1 \\cup \\{(\\alpha,i,-L(D_i)), (i,\\alpha,U(D_i))\\ \\forall\\ (i,D_i) \\in C_2\\}$ and $A_R$ ($A_D$ with all arc directions reversed). These graphs can be used to compute \\textit{lower-} and \\textit{upper-bound solutions} for the SDTP while employing shortest path algorithms in the same way as for STPs. If a negative cycle exists in $G_D$ or $G_R$ when considering these global boundaries, then the associated SDTP instance is definitely infeasible.\n\nOnce a domain $d_i \\in D_i$ has been selected for each time-point $(i,D_i) \\in C_2$, the SDTP reduces to an STP. Indeed, some of the special-purpose algorithms available in the literature exploit this problem structure to solve SDTP instances. Section \\ref{sec:algs} will discuss this further.\n\n\n\\subsection{Related problems and classification}\n\nAs noted in this paper's introduction, the SDTP has been referred to by various names in the literature. \\citet{art:ult} term it the \\textsc{Star} class of problems because the multiple domains per time-point create connections to the beginning of time $\\alpha$, which resembles the shape of a star. \\citet{art:kumar-estp} refers to the problem as zero-one extended STPs, where subintervals of a time-point's domain are associated with a weight that is either 0 when the interval is infeasible, or~1 when the interval is feasible. An SDTP solution has therefore been constructed when the sum of the weights of selected intervals is $|C_2|$. Meanwhile, \\citet{art:kra} introduced Restricted Disjunctive Temporal Problems (RDTPs) which contain constraints of Type 1, 2 and 3\\footnote{Type 3 constraints consider two different time-points $i,j \\in T,\\ i\\neq j$ and relate them via a disjunction with exactly two terms in the form $(l'_i \\leq s_i \\leq u'_i) \\lor (l'_j \\leq s_j \\leq u'_j)$, where $l'_i,u'_i,l'_j,u'_j \\in \\mathbb{R}$ denote bounds for time-points $i$ or $j$. This type of constraint is not handled in this paper but interested readers are referred to \\cite{art:kra,art:cra} for more information about them.}. The SDTP therefore arises as a special case of RDTPs when there are no Type 3 constraints. \\citet{art:cra} also refer to SDTPs as t\\textsubscript{2}DTPs, framing them as DTPs that only contain constraints of Type 1 and 2. \n\nIn a move to simplify and unify nomenclatures, we have decided to introduce the name \\textit{Simple Disjunctive Temporal Problem} following the same reasoning behind the naming of STPs \\cite{art:stp}. 
Figure~\\ref{fig:tcps-class} below situates the SDTP within the larger scheme of DTPs.\n\n\\begin{figure}[h]\n \\begin{center}\n \\resizebox{0.35\\linewidth}{!}{\n \\begin{tikzpicture}\n \\node[set,text width=5cm,label={[below=125pt of dtp,text opacity=1]DTP}] \n at (0,-0.8) (dtp) {};\n \\node[set,text width=4cm,label={[below=95pt of dtp,text opacity=1]RDTP}] \n at (0,-0.65) (rdtp) {};\n \\node[set,fill=gray!20,text width=3cm,label={[below=68pt of rdtp,text opacity=1]SDTP}] \n at (0,-0.4) (sdtp) {};\n \\node[set,fill=gray!40,text width=2cm,label={[below=40pt of sdtp]STPTE}] \n (stpt) at (0,-0.2) {};\n \\node[set,fill=gray!60,text width=1cm] (stp) at (0,0) {STP};\n \\end{tikzpicture}\n }\n \\caption{Classification of DTPs in a set diagram.}\n \\label{fig:tcps-class}\n \\end{center}\n\\end{figure}\n\nSDTPs generalize STPs since the latter can be cast as an SDTP for which $|D_i|=1,\\ \\forall\\ i \\in T$. They also generalize the Simple Temporal Problem with Taboo regions featuring both instantaneous Events and processes of constant duration (STPTE) \\cite{art:stpts}. This class of problems differs from SDTPs because STPTEs define common intervals when no time-point can be scheduled rather than individual intervals per time-point. This clearly demonstrates how SDTPs generalize STPTEs. However, when the duration of processes can vary within an interval in STPs with taboo regions, then SDTPs cannot generalize them because Type 3 constraints are needed \\cite{art:stpts}. Finally, RDTPs generalize all of the aforementioned problems while DTPs further generalize RDTPs. The gray area in Figure \\ref{fig:tcps-class} represents the problems that the models and algorithms in this paper address. \n\nAnother problem related to SDTPs is the time-dependent STP \\cite{art:td-stp}. While it might not be an obvious connection at first, in the time-dependent version of STPs the weight $w_{ij}$ in Type 1 constraints is not a constant but rather a function $f(s_i,s_j)$ which depends on the values assigned to the time-points. When $f(s_i,s_j)$ is a piecewise linear and partial function over the global boundary $[L(D_j),U(D_j)]$, it is possible to cast the SDTP as a time-dependent STP. In this case, the function is defined between $\\alpha$ and every $j \\in T$, that is $f(s_\\alpha,s_j)$. The pieces of function $f(s_\\alpha,s_j)$ represent the domains of time-point $j$. This relation has not previously been established in the literature and one of the possible reasons could be that \\citet{art:td-stp} originally focused more on total functions given that each time-point had a single domain in their application. The effect of partial functions in the development of algorithms will be discussed further in Section \\ref{sec:algs}. We opted not to include the time-dependent STP in Figure \\ref{fig:tcps-class} so as to maintain a clear relation between problems that are often considered together in the temporal reasoning literature, namely those that deal with disjunctions. Nevertheless, the connection we have established has important implications for computing solutions to SDTPs.\n\n\\subsection{Constraint programming model}\n\nConstraint Programming (CP) tools are widely used in planning and scheduling domains. Hence, it is worth considering whether CP is a good candidate for solving SDTPs in practice. 
The corresponding CP model is: \n\\begin{align}\n s_{i} - s_{j} \\leq w_{ij}, &\\quad \\forall\\ (i,j,w_{ij}) \\in C_1 \\label{cp:1}\\\\\n \\bigvee_{[l^k_i,u^k_i] \\in D_i} (l^k_i \\leq s_i \\leq u^k_i), &\\quad \\forall\\ (i,D_i) \\in C_{2} \\label{cp:2}\n\\end{align}\n\\noindent which is actually the same set of equations as those in Definition \\ref{def:sdtp}. This is very convenient because we essentially have a one-to-one mapping between the classic definition of SDTPs and their CP formulation. For simplicity, we will refer to model (\\ref{cp:1})-(\\ref{cp:2}) as CP.\n\nA simplified CP formulation can be written as follows:\n\\begin{align}\n s_{i} - s_{j} \\leq w_{ij}, &\\quad \\forall\\ (i,j,w_{ij}) \\in C_1\\label{cps:1}\\\\\n L(D_i) \\leq s_i \\leq U(D_i), &\\quad \\forall\\ (i,D_i) \\in C_2\\label{cps:2}\\\\\n s_i \\notin \\Phi_i, &\\quad \\forall\\ i \\in T\\label{cps:3}\n\\end{align}\n\nConstraints (\\ref{cps:1}) are the same as Equation \\ref{eq:stc}, while Constraints (\\ref{cps:2}) model the global boundaries of time-points. Constraints (\\ref{cps:3}) are the \\textit{compatibility constraints} and serve as a replacement for disjunctive Constraints (\\ref{cp:2}). Set $\\Phi_i$ contains the enumeration of all infeasible assignments to $i \\in T$ that belong to the interval $[L(D_i),U(D_i)]$. In other words: $\\Phi_i = \\{u^1_i+1,u^1_i+2,\\dots, l^2_i-1,u^2_i+1,\\dots,l^k_i-1\\},\\ k=|D_i|$. Adding these constraints is only possible if we make the additional assumption that $s_i \\in \\mathbb{Z}$. However, given that typical CP tools only operate with integer variables, this assumption is not necessarily restrictive in practice. We will refer to the formulation defined by (\\ref{cps:1})-(\\ref{cps:3}) as \\textit{Simplified Constraint Programming} (SCP).\n\n\n\\subsection{Integer linear programming model}\n\n Integer Linear Programming (ILP) is also often used in the planning and scheduling domains, which motivated us to also formulate the SDTP in ILP form. First, let us define the binary decision variable $x^c_i$ which takes value 1 whenever solution value $s_i$ belongs to domain $[l^c_i,u^c_i] \\in D_i,\\ (i,D_i) \\in C_2$ and 0 otherwise. The corresponding ILP model for SDTPs is:\n \n\\begin{align}\n s_i - s_{j} \\leq w_{ij}, &\\quad \\forall\\ (i,j,w_{ij}) \\in C_1\\label{ilp:1}\\\\ \n l^c_i - M^L_i(1-x^c_i) \\leq s_i, &\\quad \\forall\\ (i,D_i) \\in C_{2},\\ [l^c_i,u^c_i] \\in D_i \\label{ilp:2}\\\\ \n s_i \\leq u^c_i + M^U_i(1-x^c_i), &\\quad \\forall\\ (i,D_i) \\in C_{2},\\ [l^c_i,u^c_i] \\in D_i\\label{ilp:3}\\\\ \n \\sum_{[l^c_i,u^c_i] \\in D_i} x^c_i = 1, &\\quad \\forall\\ (i,D_i) \\in C_{2}\\label{ilp:4}\\\\ \n x^c_i \\in \\{0,1\\}, &\\quad \\forall\\ (i,D_i) \\in C_{2},\\ [l^c_i,u^c_i] \\in D_i \\label{ilp:5}\n\\end{align}\n\nConstraints (\\ref{ilp:1}) refer to the simple temporal constraints (Equation \\ref{eq:stc}). Meanwhile, Constraints (\\ref{ilp:2})-(\\ref{ilp:3}) restrict the values assigned to solution $s$ so that they belong to the active bounds defined by variables $x^c_i$. Note that Constraints (\\ref{ilp:2})-(\\ref{ilp:3}) are big-$M$ constraints. They can be tightened by setting, for each $(i, D_i) \\in C_2$:\n\\begin{align*}\nM^L_i &= \\max_{[l^c_i,u^c_i]\\in D_i} l^c_i - L(D_i)\\\\\nM^U_i &= U(D_i) - \\min_{[l^c_i,u^c_i]\\in D_i} u^c_i \n\\end{align*}\n\\noindent Constraints (\\ref{ilp:4}) ensure that exactly one domain is selected per time-point $i \\in T$. Finally, Constraints~(\\ref{ilp:5}) restrict $x^c_i$ variables to take binary values. 
Recall that an SDTP is a feasibility problem, with this explaining why there is no objective function present in this ILP.\n\n\n\n\\bigskip\n\nAll three models (ILP, CP and SCP) can be used to solve SDTPs by employing a state-of-the-art solver such as IBM's CPLEX. However, these solvers are often financially expensive. Furthermore, specific methods can provide guarantees concerning expected run times, such as asymptotic polynomial worst-case time complexity. In the following section, we describe many algorithms that can be used to quickly solve SDTPs in practice.\n\n\n\\section{Algorithms} \\label{sec:algs}\n\nA variety of special-purpose algorithms have been proposed for SDTPs. All of the algorithms that are presented in this section will be implemented for our computational experiments. It is worth noting that all of the algorithms provide a guaranteed polynomial asymptotic worst-case time complexity. Algorithms are presented in chronological order of publication date. In some cases, we adapted algorithms to ensure they could be implemented efficiently in practice. For that reason, we try to provide as many implementation details as possible. In all cases where details are missing, we refer interested readers to our code for deeper inspection.\n\nWe assume that each algorithm receives as input an SDTP instance containing network $N=(T,C)$ and associated graphs $G_D$ and $G_R$. Some algorithms also receive additional structures, which we detail for the individual method whenever necessary. All algorithms return a solution vector $s$. When the SDTP instance is feasible, each entry $s_i$ contains a time assigned to $i \\in T$ which in combination with the other entries renders the solution feasible (network $N$ consistent). Whenever the SDTP instance is infeasible, $s=\\emptyset$ is returned.\n\n\\subsection{Upper-Lower Tightening}\n\n\\citet{art:ult} introduced the \\textit{Upper-Lower Tightening} (ULT) algorithm to tackle general disjunctions in DTPs. The original intention behind ULT was to tighten disjunctive constraints and simplify DTP instances. However, \\citet{art:ult} were the first to show that SDTPs could be solved in polynomial time by means of ULT.\n\nThe ULT algorithm operates with constraints between two variables denoted as an interval. The first step is to therefore define set $H = \\{ (i,j)\\ \\forall\\ (i,j,w_{ij}) \\in C_1 \\} \\cup \\{ (\\alpha,i)\\ \\forall\\ (i,D_i) \\in C_2\\}$. Let us further assume that $(i,j) \\in H \\implies (j,i) \\notin H$. \\textit{Boundary} sets $B_{ij}$ are defined for $(i,j) \\in H\\ :\\ i \\neq \\alpha$, $B_{ij}=\\{[-w'_{ji}, w'_{ij}] \\}$ where $w'_{ij} = w_{ij}\\text{ if } (i,j,w_{ij}) \\in C_1$, otherwise $w'_{ij} = +\\infty$, and $w'_{ji} = w_{ji}\\text{ if } (j,i,w_{ji}) \\in C_1$, otherwise $w'_{ji} = +\\infty$. Meanwhile, for $(\\alpha,i) \\in H$ we relate $i$ to the beginning of time $\\alpha$ via $B_{\\alpha i}=D_i\\ :\\ (i,D_i) \\in C_{2}$. We will use the notation $L(B_{ij})$ and $U(B_{ij})$ to denote the lower and upper bounds in $B_{ij}$, respectively.\n\nAlgorithm \\ref{alg:ult} outlines how ULT works. First, a distance matrix $\\delta$ is initialized in line 1. The main loop of the algorithm spans lines 2-8. In lines 3-4, some entries of the distance matrix $\\delta$ are updated according to the current bounds $B$ of each pair $(i,j) \\in H$. 
\\textsc{FloydWarshall} \\cite{book:cormen} is then used to update matrix $\\delta$ by computing All-Pairs Shortest Paths (APSPs) using the current values in $\\delta$ as the arc weights (line 5). A temporary boundary set $B'$ is created in line 6 with the newly computed values in $\\delta$. Note that in the implementation itself we do not create $B'$ since we can use matrix $\\delta$ directly in its place whenever needed (for example, in line 7). The intersection of $B'$ and $B$ is computed in line 7. Here, we follow the definition of the $\\cap$ operation introduced by \\citet{art:ult}: it returns a set of intervals whose values are permitted by both $B'$ and $B$. ULT iterates so long as there are changes to the bounds in $B$, denoted by operation $\\textsc{Change}$, and no bound is either empty or infeasible. All checks in line 8 can be performed in $O(1)$ time by maintaining the correct flags after lines 6-7. Similarly, lines 3-4 can be performed during operation $\\cap$ in line 7 without increasing the asymptotic worst-case time complexity. Lines 9-11 prepare solution $s$ to be returned. If the instance is feasible then line 10 assigns the earliest feasible schedule to $s$, otherwise $\\emptyset$ is returned.\n\n\\begin{algorithm}[H]\n \\caption{ULT}\n \\label{alg:ult}\n \\footnotesize\n \\begin{algorithmic}[1]\n \\State $\\delta_{ij} \\gets +\\infty,\\ \\forall\\ i,j \\in T \\cup \\{\\alpha\\}$\n \\Do\n \\State $\\delta_{ij} \\gets U(B_{ij}),\\ \\forall\\ (i,j) \\in H$ \\Comment{Update current $\\delta$ entries with new bounds}\n \\State $\\delta_{ji} \\gets -L(B_{ij}),\\ \\forall\\ (i,j) \\in H$\n \\State $\\textsc{FloydWarshall}(\\delta)$ \\Comment{Update distance matrix $\\delta$}\n \\State $B'_{ij} \\gets \\{[-\\delta_{ji},\\delta_{ij}]\\},\\ \\forall\\ (i,j) \\in H$\n \\State $B \\gets B \\cap B'$ \\Comment{Tightens boundaries}\n \\DoWhile{$\\textsc{Change}(B) \\textbf{ and } (B_{ij} \\neq \\emptyset \\textbf{ and } L(B'_{ij}) \\leq U(B'_{ij}),\\ \\forall (i,j) \\in H)$}\n \\State $s \\gets \\emptyset$\n \\IfThen{$B_{ij} \\neq \\emptyset \\textbf{ and } L(B'_{ij}) \\leq U(B'_{ij}),\\ \\forall (i,j) \\in H$}{$s_i \\gets L(B_{\\alpha i}),\\ \\forall\\ i \\in T$}\n \\State \\Return $s$\n \\end{algorithmic}\n\\end{algorithm}\n\nThe asymptotic worst-case time complexity of ULT is $O(|T|^3|C|K + |C|^2K^2)$ \\cite{art:ult}, while its space complexity is $O(|T|^2)$ due to distance matrix $\\delta$. Despite its apparently high computational complexity, ULT is a polynomial time algorithm. Additionally, \\citet{art:ult} noted that even when a problem instance contains multiple disjunctions per constraint between time-points $i,j \\in T$, and is therefore not an SDTP instance, ULT may successfully remove sufficient disjunctions to reduce the problem to an SDTP. In this case, ULT is guaranteed to solve the problem exactly. This is the only algorithm in our study capable of such a reduction.\n\n\n\\subsection{Kumar's Algorithm}\n\n\\citet{art:kumar-estp} proposed a polynomial time algorithm to solve zero-one extended STPs, which essentially correspond to an SDTP. Algorithm \\ref{alg:kra} provides a pseudocode outline of how \\textit{Kumar's Algorithm} (KA) works. In line 1, a distance matrix $\\delta$ is constructed by computing APSPs over graph $G_R$. This step can detect infeasibilities such as if there exists a negative cycle formed by $C_1$ constraints and global boundaries, in which case $\\delta = \\emptyset$ is returned. 
\n\nMatrix $\\delta$ can be computed by employing (i) \\textsc{FloydWarshall}, (ii) repeated calls to \\textsc{BellmanFord} or (iii) Johnson's Algorithm \\cite{book:cormen}. \\citet{art:kumar-estp} did not specify which method should be used when computing $\\delta$ and therefore we will consider both options (ii) and (iii). Option (i) is disregarded due to its overall poor performance during our preliminary experiments.\n\n\\begin{algorithm}\n \\caption{KA}\n \\label{alg:kra}\n \\footnotesize\n \\begin{algorithmic}[1]\n \\State $\\delta \\gets \\textsc{ComputeDistanceMatrix}(G_R)$\n \\IfThen{$\\delta = \\emptyset$}{\\textbf{return} $\\emptyset$}\n \\State $G_C \\gets \\textsc{CreateConflictGraph}(\\delta, C_2)$ \\Comment{Graph $G_C=(E,A_C)$}\n \\IfThen{$G_C = \\emptyset$}{\\textbf{return} $\\emptyset$}\n \\State $G_B \\gets \\textsc{CreateBipartiteGraph}(G_C)$ \\Comment{Graph $G_B=(E,E',A_B)$, where $E'$ is a copy of $E$}\n \\State $G_F \\gets \\textsc{SolveMaxFlow}(G_B)$ \\Comment{From source $\\theta_1$ to sink $\\theta_2$, with $G_F$ corresponding to the residual graph}\n \\State $S \\gets \\{(\\theta_1,e^c_i)\\ :\\ e^c_i \\notin R(G_F, \\theta_1)\\} \\cup \\{(e^{k\\prime}_j,\\theta_2)\\ :\\ e^{k\\prime}_j \\in R(G_F, \\theta_1)\\}$ \\Comment{Minimum cut in $G_F$}\n \\State $S' \\gets \\{e^c_i\\ :\\ (\\theta_1,e^c_i) \\in S\\ \\lor (e^{c\\prime}_i,\\theta_2) \\in S\\}$ \\Comment{Vertex cover for $G_C$}\n \\State $S'' \\gets E \\backslash S'$\n \\IfThen{$|S''| \\neq |T|$}{\\textbf{return} $\\emptyset$}\n \\State $\\textsc{UpdateGraph}(G_R,S'')$\n \\State $s \\gets \\textsc{BellmanFord}(G_R, \\alpha)$ \\Comment{Solve STP}\n \\State \\Return $s$\n \\end{algorithmic}\n\\end{algorithm}\n\nLine 3 proceeds to create a conflict graph $G_C=(E,A_C)$ with the domains from the SDTP. First, set $E$ of intervals is defined as $E = \\{ e^c_{i}\\ :\\ (i,D_i) \\in C_2,\\ [l^c_i,u^c_i] \\in D_i \\}$. Hence, every element $e^c_{i} \\in E$ represents exactly one domain of a time-point. A domain $[l^c_i,u^c_i]$ has no corresponding element in $E$ if it produces a size-1 conflict, that is, if the following is true:\n\\begin{equation*}\n \\delta_{i\\alpha} + u^c_i < 0\\ \\lor\\ \\delta_{\\alpha i} - l^c_i < 0\n\\end{equation*}\n\nOnce vertex set $E$ has been created, arc set $A_C$ can be defined. An arc $(e^c_{i},e^k_{j}) \\in A_C$ denotes a size-2 conflict between two time-point domains $[l^c_i,u^c_i]$ and $[l^k_j,u^k_j]$. Such a conflict occurs whenever:\n\\begin{equation*}\n u^c_i + \\delta_{ij} - l^k_j < 0\n\\end{equation*}\nNote that size-2 conflicts are also defined between domains of the same time-point $i \\in T$. There is always a conflict $(e^c_{i},e^{c+1}_{i}) \\in A_C$ because $\\delta_{ii}=0$ and $u^c_i < l^{c+1}_i$ (recall from Section \\ref{sec:sdtp} that domains are in ascending order).\n\nProcedure \\textsc{CreateConflictGraph} returns either graph $G_C$ or $\\emptyset$. The latter is returned whenever all domains of a time-point $i \\in T$ produce size-1 conflicts. In this case no domain associated with $i$ is included in $E$, thereby implying that the SDTP instance is infeasible. Once graph $G_C$ has been constructed, line 5 creates a bipartite graph $G_B=(E,E',A_B)$ by copying every element $e^c_{i} \\in E$ to $e^{c\\prime}_{i} \\in E'$. For each $(e^c_{i},e^k_{j}) \\in A_C$ we create an arc $(e^c_{i},e^{k\\prime}_{j}) \\in A_B$. 
All arcs in $A_B$ connect an element of $E$ to an element of $E'$.\n\nLine 6 solves a maximum bipartite matching over $G_B$ as a maximum flow problem (max-flow), producing the residual graph $G_F$ \\cite{book:cormen}. To solve the problem in the form of a max-flow, we introduce a source node $\\theta_1$ and a sink node $\\theta_2$ to $G_B$. Arcs $(\\theta_1,e^c_{i}),\\ \\forall\\ e^c_{i} \\in E$ and $(e^{k\\prime}_{j},\\theta_2),\\ \\forall\\ e^{k\\prime}_{j} \\in E'$ are included in the graph together with all arcs in $A_B$. Additionally, all arcs are given unitary capacity. Then, it suffices to solve a max-flow from $\\theta_1$ to $\\theta_2$ to produce $G_F$. \n\nThe minimum-cut $S$ is computed in $G_F$ thanks to the max-flow min-cut theorem (line 7). $R(G_F,\\theta_1)$ denotes the set of nodes that are reachable from source $\\theta_1$ in $G_F$ (meaning there is a path with positive residual capacity). Line 8 merges node copies in $S$ to create $S'$, which is a vertex cover for $G_C$ when seen as an undirected graph. Since $S'$ is a vertex cover, if we take all elements in $E$ which are not part of $S'$ to create set $S''$ (line 9), there will be no two elements in $S''$ which have a conflict. In other words: all domains in $S''$ can be part of a feasible SDTP solution.\n\nIf $|S''| = |T|$ then every time-point has exactly one domain assigned to it, that is, $S''_i = e^c_{i},\\ \\forall\\ i \\in T$. If $|S''| < |T|$ the instance is infeasible (line 10). Line 11 continues to update graph $G_R$ with the information in $S''$ concerning the selected domain for each time-point:\n\\begin{align*}\n (\\alpha,i,w_{\\alpha i}) \\in A_R \\implies w_{\\alpha i}=U(S''_i)\\\\\n (i,\\alpha,w_{i\\alpha}) \\in A_R \\implies w_{i\\alpha}=-L(S''_i)\n\\end{align*}\n\\noindent where $U(S''_i)=u^c_i$ and $L(S''_i)=l^c_i$. The final solution $s$ is computed with standard \\textsc{BellmanFord} since the SDTP has now been reduced to a feasible STP (line 12).\n\nNote that in our implementation we do not explicitly create graphs $G_C$ and $G_B$. Instead, we directly create max-flow graph $G_F$. This graph is also modified by \\textsc{SolveMaxFlow} to produce the residual graph. By taking this approach, we reduce both KA's execution time and the amount of memory it requires. Conceptually, however, it is easier to explain how KA works by documenting the creation of each graph in a step-by-step fashion.\n\n\\citet{art:kumar-estp} did not provide the asymptotic worst-case time complexity of KA and instead suggested that KA runs in polynomial time because each step can be performed in polynomial time. Therefore, for the purpose of completeness, we will now explicitly analyse the time complexity of KA. The three algorithmic components which dictate KA's complexity can be found on lines 1, 3 and 6. All other parts of the algorithm can be completed in time which is never slower than these three main components.\n\nLine 1 takes time $O(|T|^2|C|)$ if computed with repeated \\textsc{BellmanFord} and time $O(|T||C| + |T|^2\\log |T|)$ if computed with Johnson's algorithm provided Dijkstra's algorithm \\cite{book:cormen} is implemented with Fibonacci Heaps \\cite{art:fib-heaps}. However, Fibonacci Heaps are often inefficient in practice due to pointer operations leading to poor cache locality and performance \\cite{art:splib,art:heaps}. 
We therefore opted to use Sequence Heaps \\cite{art:sequence-heaps} in our implementation, which increases the asymptotic worst-case time complexity to $O(|T||C|\\log |T|)$ but improves the performance of the algorithm in practical settings. Line 3 has complexity $O(\\omega^2)$ because we need to check every pair of intervals in $E$ and $|E| = O(\\omega)$. Line 6 solves a max-flow problem. While there are many algorithms to solve max-flow \\cite{art:max-flow}, we opted to use Dinic's Algorithm with complexity $O(\\omega^\\frac{5}{2})$ when applied to graphs from maximum bipartite matching. We have observed that max-flow is not the bottleneck in KA. Indeed, lines 1 and 3 are the most time-consuming steps (see Section \\ref{sec:discussion} for a full discussion).\n\nIn the remainder of the paper, we will refer to the version of KA using repeated \\textsc{BellmanFord} as KAB, and the one using Johnson's algorithm as KAJ. We will write KA when referring to the algorithm in a generic sense which covers both KAB and KAJ. The complexity of KAB is $O(|T|^2|C| + \\omega^\\frac{5}{2})$, while KAJ's is $O(|T||C|\\log |T| + \\omega^\\frac{5}{2})$. Meanwhile, the space complexity of KA is $O(|T|^2 + \\omega^2)$ due to distance matrix $\\delta$ and graph $G_F$.\n\n\n\\subsection{Comin-Rizzi Algorithm}\n\n\\citet{art:cra} introduced asymptotically faster algorithms to solve both SDTPs and RDTPs, making their methods the current state of the art for both problems. For SDTPs, they introduced an algorithm which resembles Johnson's Algorithm for APSPs. Their method begins by performing a first phase using \\textsc{BellmanFord} to detect negative cycles, while subsequent iterations use Dijkstra's Algorithm to correct computations over a graph that contains no negative cycles. However, no experimental study has been conducted using this method until now.\n\nThe \\textit{Comin-Rizzi Algorithm} (CRA) for SDTPs is detailed in Algorithm \\ref{alg:cra}. CRA begins by computing an initial earliest feasible solution $s^0$ considering $C_1$ constraints only. In our implementation, we partially consider $C_2$ constraints by using the global boundaries defined in Section \\ref{sec:sdtp} within $G_D$. The computation of $s^0$ then either produces the earliest possible solution or proves that one cannot exist because (i) there is a negative cycle formed by $C_1$ constraints or (ii) it is not possible to assign a time $s_i$ to at least one time-point $i \\in T$ while complying with the global bounds $[L(D_i),U(D_i)]$.\n\nIf $s^0 \\neq \\emptyset$ then CRA proceeds to its main loop. First, each time-point $i \\in T$ where the current solution $s^0_i$ does not belong to one of the domains $D_i$ is added to list $F$ of assignments that require fixing. While there are elements in $F$, the following steps are repeated (lines 5-12). A time-point $i$ is removed from $F$ (line 6). The first time $i$ is removed from $F$, we compute entry $\\delta_i$ of the distance matrix $\\delta$ from $i$ to all other nodes in the underlying graph $G^{1\\prime}_R$ containing only $C_1$ constraints (lines 7-9). In this graph, the weight $w_{ij}$ of each arc $(i,j,w_{ij}) \\in A_R$ is modified to $w'_{ij}=w_{ij} + s^0_j - s^0_i$. \\citet{art:cra} showed that $G^{1\\prime}_R$ cannot contain negative cycles because it is always true that $w'_{ij} \\geq 0$. Therefore, distances $\\delta_i$ can be computed using \\textsc{Dijkstra} instead of \\textsc{BellmanFord}, which greatly improves the performance of CRA. 
Each entry $\\delta_i$ is only computed once because $G^{1\\prime}_R$ remains unchanged during CRA's execution.\n\n\\begin{algorithm}\n \\caption{CRA}\n \\label{alg:cra}\n \\footnotesize\n \\begin{algorithmic}[1]\n \\State $s^0 \\gets \\textsc{BellmanFord}(G_D, \\alpha)$ \\Comment{Solve STP using SDTP global boundaries}\n \\IfThen{$s^0 = \\emptyset$}{\\textbf{return} $\\emptyset$}\n \\State $s \\gets s^0$\n \\State $F \\gets \\{i\\ :\\ (i,D_i) \\in C_2 \\land s_i \\notin D_i\\}$ \\Comment{Set of all time-points $i \\in T$ with assignment $s_i$ infeasible}\n \\While{$F \\neq \\emptyset \\textbf{ and } s \\neq \\emptyset \\textbf{ and } s_i \\leq U(D_i)\\ \\forall (i,D_i) \\in C_2$}\n \\State $i \\gets \\textsc{Pop}(F)$\n \\If{$\\delta_i \\textbf{ not yet computed}$}\n \\State $\\delta_i \\gets \\textsc{Dijkstra}(G^{1\\prime}_D, i)$ \\Comment{Lazy computation of $\\delta$}\n \\EndIf\n \\State $\\textsc{UpdateAssignments}(s,s^0,i,\\delta_i)$\n \\State $F \\gets \\{i\\ :\\ (i,D_i) \\in C_2 \\land s_i \\notin D_i\\}$\n \\EndWhile\n \\State \\Return $s$\n \\end{algorithmic}\n\\end{algorithm}\n\nFor each $i$ taken from $F$ in line 6, we update the assignment to $s_i$ by means of procedure \\textsc{UpdateAssignments} (line 10). First, the procedure performs the following operation\n\\begin{equation*}\n s_i \\leftarrow \\lambda(s_i,D_i)\\ :\\ (i,D_i) \\in C_2\n\\end{equation*}\n\\noindent where $\\lambda(s_i,D_i)$ is a function that either returns value $l^c_i$ belonging to the first domain in ascending order $[l^c_i,u^c_i] \\in D_i$ for which $s_i < l^c_i$, or it returns $\\perp$ if no such domain exists. Whenever $\\lambda(s_i,D_i) = \\perp$, CRA stops computations because this proves that the instance is infeasible. In this case, \\textsc{UpdateAssignments} sets $s=\\emptyset$. Alternatively, if $\\lambda(s_i,D_i) \\neq \\perp$ then the new assignment $s_i$ can cause changes to other time-point assignments since $s_i$ has necessarily increased. To correctly propagate these changes, \\citet{art:cra} introduced the following update rules\n\\begin{align*}\n \\rho_{ij} &\\leftarrow \\delta_{ij} + (s_j - s^0_j) - (s_i - s^0_i),\\ &\\forall\\ j \\in P(G^1_D, i)\\\\\n s_j &\\leftarrow s_j + \\max(0, \\lambda(s_i,D_i) - s_i - \\rho_{ij}),\\ &\\forall\\ j \\in P(G^1_D, i)\n\\end{align*}\n\\noindent where $P(G^1_D,i)$ denotes the set of all nodes $j \\in V$ which are reachable from $i$ in $G^1_R$. In other words: there is a path from $i$ to $j$ in $G^1_D$. These update rules can be applied in $O(1)$ time per $j \\in P(G^1_D,i)$ or $O(|T|)$ time in total.\n\nAfter fixing the assignment to $i$ and potentially other time-points, CRA constructs a new list $F$ (line 11). Once $F = \\emptyset$, the assignment in $s$ is feasible and corresponds to the earliest feasible solution. This assignment is then returned in line 13. For infeasible instances, $s=\\emptyset$ is returned instead.\n\nThe asymptotic worst-case time complexity of CRA is $O(|T||C| + |T|^2\\log |T| + |T|\\omega)$ when using Fibonacci Heaps for \\textsc{Dijkstra}'s computations. The asymptotic complexity increases to $O(|T||C|\\log |T| + |T|\\omega)$ when using Sequence Heaps instead, however the empirical performance improves significantly \\cite{art:sequence-heaps}. Regardless of the heap implementation, CRA's space complexity is $O(|T|^2)$ due to distance matrix $\\delta$.\n\nIn their original description of CRA, \\citet{art:cra} precomputed distance matrix $\\delta$ before beginning the main loop in Algorithm \\ref{alg:cra}. 
For our implementation, we describe the computation as a \\textit{lazy computation} of entries in $\\delta$ given that we only compute them when strictly necessary (lines 7-9). Although both approaches exhibit the same asymptotic worst-case time complexity, in practice the lazy computation performs significantly better since many unnecessary computations are avoided. Additionally, we have incorporated the creation of list $F$ at line 11 into procedure \\textsc{UpdateAssignments}. Whenever the assignment $s_j$ to a time-point $j \\in T$ is modified, we check whether $j$ should be added to or removed from $F$. This avoids reconstructing list $F$ every iteration of the main loop (lines 5-12), thus speeding up computations.\n\n\\subsection{Reduced Upper-Lower Tightening}\n\nThe \\textit{Reduced Upper-Lower Tightening} (RULT) method is a speedup of ULT, specifically targeted towards SDTPs. One can easily derive RULT from ULT by exploiting the structure of SDTPs. Recall that in ULT, we must compute APSPs using \\textsc{FloydWarshall} with complexity $O(|T|^3)$ because \\citet{art:ult} assumed the input was a general DTP with possibly multiple disjunctions per constraint between two time-points $i,j \\in T$. \n\nHowever, SDTPs feature a structure that only contains simple temporal constraints between time-points in $T$. It is therefore sufficient to compute single-source shortest paths twice: first to determine the earliest feasible assignment for each time-point and a second time to determine the latest feasible assignment for each time-point. This creates a single interval per time-point denoting a possibly tighter global boundary concerning their assignments. Similar to ULT, we can use this global boundary to reduce $C_2$ disjunctions in every iteration, thereby reducing the number of disjunctions.\n\nAlgorithm \\ref{alg:rult} outlines RULT. First, boundary set $B$ is initialized with the domains of each time-point (lines 1-2). In contrast to ULT, we only have to maintain boundaries per $i \\in T$ rather than per constraint. The main loop (lines 3-11) runs for as long as there are changes to $B$ and the bounds remain feasible. In every iteration graph $G_D$ is changed with \\textsc{UpdateGraph}, which replaces the weight of arcs connected to $\\alpha$: \n\\begin{align*}\n (\\alpha,i,w_{\\alpha i}) \\in A_D \\implies w_{\\alpha i}=-L(B_i)\\\\\n (i,\\alpha,w_{i\\alpha}) \\in A_D \\implies w_{i\\alpha}=U(B_i)\n\\end{align*}\n\\noindent The same procedure takes place for $G_R$ but outgoing arcs from $\\alpha$ get the upper bound $U(B_i)$ while the incoming arcs get the lower bound $-L(B_i)$. During RULT's execution, values $L(B_i)$ are non-decreasing and $U(B_i)$ are non-increasing. 
Hence, updating the graphs tightens the global boundary $B_i$ of each time-point $i \\in T$.\n\n\n\n\\begin{algorithm}\n \\caption{RULT}\n \\label{alg:rult}\n \\footnotesize\n \\begin{algorithmic}[1]\n \\State $B_i \\gets \\{[-\\infty,+\\infty]\\},\\ \\forall\\ i \\in T$\n \\State $B_i \\gets D_i,\\ \\forall\\ (i,D_i) \\in C_2$\n \\Do\n \\State $\\textsc{UpdateGraph}(G_D,B)$ \\Comment{Update arc weights connected to $\\alpha$}\n \\State $\\textsc{UpdateGraph}(G_R,B)$\n \\State $p \\gets \\textsc{BellmanFord}(G_D,\\alpha)$ \\Comment{Earliest feasible assignment}\n \\State $q \\gets \\textsc{BellmanFord}(G_R,\\alpha)$ \\Comment{Latest feasible assignment}\n \\IfThen{$p = \\emptyset$ \\textbf{ or } $q = \\emptyset$}{\\textbf{return} $\\emptyset$}\n \\State $B'_i \\gets \\{[-p_i,q_i]\\},\\ \\forall\\ i \\in T$\n \\State $B \\gets B \\cap B'$ \\Comment{Tightens boundaries}\n \\DoWhile{$\\textsc{Change}(B) \\textbf{ and } L(B_{i}) \\leq U(B_{i})\\ \\forall\\ i \\in T$}\n \\State $s \\gets \\emptyset$\n \\IfThen{$L(B_{i}) \\leq U(B_{i})\\ \\forall\\ i \\in T$}{$s_i \\gets L(B_{i}),\\ \\forall\\ i \\in T$}\n \\State \\Return $s$\n \\end{algorithmic}\n\\end{algorithm}\n\nLines 6-7 compute the earliest feasible schedule $p$ and the latest feasible schedule $q$ over the updated graphs. If $p = \\emptyset$ or $q = \\emptyset$ then the instance is infeasible, because a negative cycle still exists even for the relaxed global boundaries of all time-points (line 8). Otherwise, line 9 constructs set $B'$ and line 10 computes the intersection of $B$ and $B'$. Operation $\\cap$ is the same used in ULT and defined by \\citet{art:ult}. Finally, lines 12-14 prepare solution $s$ to be returned. If boundaries in $B$ are feasible, line 13 assigns to every time-point its earliest feasible value. If the latest feasible solution is desired instead, we can assign $U(B_i)$ to $s_i$ in line 13.\n\nThe correctness of RULT follows directly from that of ULT \\cite{art:ult} in combination with the fact that $C_1$ constraints are fixed and the only intervals that must be considered are those in $C_2$. The asymptotic worst-case time complexity of RULT is similar to ULT's. Accounting for the efficiency gain in shortest path computations, which are performed with \\textsc{BellmanFord} instead of \\textsc{FloydWarshall}, RULT's complexity becomes: $O(|T|^2|C|K + |T|^2|K|^2)$. The space complexity of RULT is reduced to $O(|T|)$ given that we only have to allocate additional vectors of size $|T|$. Note that in our implementation, we do not explicitly maintain boundary set $B$.\n\n\n\n\\subsection{Bellman-Ford with Domain Check}\n\nAll of the algorithms described until now have employed \\textsc{BellmanFord} at some point during their execution. This should not be surprising since \\textsc{BellmanFord} can be implemented rather efficiently to detect negative cycles \\cite{art:sp-fp}, which is a core task when solving STPs, SDTPs and RDTPs. It seems only natural then to consider a variant of the original algorithm to solve SDTPs. Let us therefore define \\textit{Bellman-Ford with Domain Check} (BFDC), which incorporates small changes to \\textsc{BellmanFord} in order to address gaps of infeasible values in the shortest path computations. Our method draws inspiration from previous research on temporal problems \\cite{art:cra,art:stp-bf-inc,art:td-stp}.\n\nAlgorithm \\ref{alg:bfdc} describes the full BFDC procedure, which primarily works over graph $G_D$. Lines 1-5 involve the initialization of auxiliary variables. 
This includes the distance array $\\tau$, path length array $\\pi$ which calculates the number of nodes in the shortest path from $\\alpha$ up to $i \\in V$, the domain index array $z$ which holds the current domain index $z_i$ for each time-point $i \\in T$ and the first-in, first-out queue $Q$ used in \\textsc{BellmanFord}. After initialization, the main loop begins (lines 6-20). In line 7, an element $i$ is removed from the queue and its domain is checked in line 8. Procedure \\textsc{DomainCheck} is detailed in Algorithm \\ref{alg:check}. If \\textsc{DomainCheck} can prove the SDTP instance is infeasible, then it sets $\\tau = \\emptyset$. Otherwise the procedure updates assignments to $\\tau$, $\\pi$ and $z$ as necessary. The algorithm continues to line 9 where, if the instance has not been proven infeasible yet, all outgoing arcs from $i \\in V$ are relaxed and the shortest paths propagated (here \\textit{relax} refers to the nomenclature of \\citet{book:cormen}). \n\n\n\n\\begin{algorithm}\n \\caption{BFDC}\n \\label{alg:bfdc}\n \\footnotesize\n \\begin{algorithmic}[1]\n \\State $\\tau_i \\gets +\\infty,\\ \\forall\\ i \\in T\\cup \\{\\alpha\\}$\n \\State $\\pi_i \\gets 0,\\ \\forall\\ i \\in T\\cup \\{\\alpha\\}$\n \\State $z_i \\gets 1,\\ \\forall\\ i \\in T\\cup \\{\\alpha\\}$\n \\State $Q \\gets \\textsc{Push}(Q, \\alpha)$\n \\State $\\tau_\\alpha \\gets 0$\n \\While{$Q \\neq \\emptyset \\textbf{ and } \\tau \\neq \\emptyset$}\n \\State $i \\gets \\textsc{Pop}(Q)$\n \\State $\\textsc{DomainCheck}(i,\\tau,z,\\pi)$ \\Comment{Algorithm \\ref{alg:check}}\n \\If{$\\tau \\neq \\emptyset$}\n \\For{$(i,j,w_{ij}) \\in A_D$}\\Comment{Standard \\textsc{Relax} phase in \\textsc{BellmanFord}}\n \\If{$\\tau_j > \\tau_i + w_{ij}$}\n \\State $\\tau_j \\gets \\tau_i + w_{ij}$\n \\State $\\pi_j \\gets \\pi_i + 1$\n \\IfThen{$\\pi_j \\geq |T| \\textbf{ or } j = \\alpha$}{$\\tau \\gets \\emptyset$} \\Comment{Checks for negative cycle}\n \\IfThen{$j \\notin Q$}{$Q \\gets \\textsc{Push}(Q, j)$}\n \\EndIf\n \\IfThen{$\\tau = \\emptyset$}{\\textbf{break}}\n \\EndFor\n \\EndIf\n \\EndWhile\n \\IfThenElse{$\\tau \\neq \\emptyset$}{$s_i \\gets -\\tau_i,\\ \\forall\\ i \\in V$}{$s \\gets \\emptyset$}\n \\State \\Return $s$\n \\end{algorithmic}\n\\end{algorithm}\n\nFor each outgoing arc from $i$ in $A_D$ (recall graph $G_D=(V,A_D)$), line 10 checks whether the current shortest path up to $j$ adjacent to $i$ should be updated and, if so, then the algorithm also updates $\\pi_j$ and possibly queue $Q$. In line 14, if the path up to $j$ forms a cycle or the path leads back to $\\alpha$, the instance is determined to be infeasible. If $j$ is not yet in queue $Q$, we add it in line 15 (duplicated elements are not allowed). When the instance has been proven infeasible, line 17 aborts the for-loop (lines 10-18).\n\nThe main loop runs for as long as there are elements in $Q$ and $\\tau$ is not $\\emptyset$. Once one of these conditions is false, Algorithm \\ref{alg:bfdc} proceeds on to line 21. If the instance is feasible, assignment $s$ is created using the values of the shortest paths stored in $\\tau$, otherwise $\\emptyset$ is returned.\n\nProcedure \\textsc{DomainCheck} (Algorithm \\ref{alg:check}) represents the main difference between \\textsc{BellmanFord} and BFDC. In line 1, it verifies whether the current time $s_i=-\\tau_i$ assigned to $i$ belongs to its current domain indexed at $z_i$. If the assigned time does not exceed the domain's upper bound $u^{z_i}_i$ then \\textsc{DomainCheck} simply terminates. 
Otherwise, the algorithm searches for the first domain in increasing order to which $s_i=-\\tau_i$ belongs (lines 2-8). When a domain is found, lines 4-5 update the assignments for $\\tau_i$ and $\\pi_i$ accordingly. In case $s_i=-\\tau_i$ exceeds all domains in $D_i$ then we have a proof that the instance is infeasible (line 9).\n\n\\begin{lemma}\n Algorithm \\ref{alg:bfdc} is correct and returns either (i) the earliest feasible solution or (ii) proof that no solution exists. \\label{lem:bfdc-correct}\n\\end{lemma}\n\\begin{proof}\n First, note that $\\tau$ is always non-increasing in BFDC. This implies that the SDTP solution $s=-\\tau$ is non-decreasing. In every \\textsc{Relax} phase BFDC assigns the shortest path up to a subset of nodes in $G_D$ and therefore assigns the earliest feasible values to a subset of time-points. Whenever a \\textsc{DomainCheck} phase must increase the assignment to $z_i$ because $-\\tau_i > u^{z_i}_i$, it assigns the earliest feasible domain and either decreases $\\tau_i$ or leaves $\\tau_i$ unchanged (lines 4-5 in Algorithm \\ref{alg:check}). Value $\\tau_i$ is non-increasing and consequently decreasing the assignment of $z_i$ will never lead to a feasible solution. Therefore, $z$ is also non-decreasing in BFDC which implies domain assignment is a backtrack-free search.\n \n With these facts in mind, we can now show that there are two possibilities at the end of BFDC. If $\\tau \\neq \\emptyset$ then $s=-\\tau$ is the earliest feasible solution for the SDTP instance. This is true because $\\tau$ contains the shortest paths in $G_D$ from $\\alpha$ to every other node $i \\in V$, with this achieved by using the minimum feasible assignment of domains $z$. When $\\tau = \\emptyset$ then we have either exhausted the assignment $z_i$ to a time-point $i \\in T$ which implies that BFDC has run out of domains for $i$ ($z_i > |D_i|$), or there is a negative-cost cycle formed by $C_1$ constraints which has been detected during the \\textsc{Relax} phase (line 14 in Algorithm \\ref{alg:bfdc}).\n\\end{proof}\n\n\\begin{algorithm}\n \\caption{\\textsc{DomainCheck}}\n \\label{alg:check}\n \\footnotesize\n \\begin{algorithmic}[1]\n \\Require{Time-point $i$, distance array $\\tau$, domain index array $z$, path length array $\\pi$}\n \\If{$-\\tau_i > u^{z_i}_i$}\\Comment{Domain $[l^{z_i}_i,u^{z_i}_i] \\in D_i$}\n \\For{$z_i=z_i+1$ \\textbf{ until } $|D_i|$}\n \\If{$-\\tau_i \\leq u^{z_i}_i$}\n \\State $\\tau_i \\gets \\min\\{\\tau_i,-l^{z_i}_i\\}$\n \\IfThen{$\\tau_i = -l^{z_i}_i$}{$\\pi_i \\gets 1$}\n \\State \\textbf{break}\n \\EndIf\n \\EndFor\n \\IfThen{$-\\tau_i > U(D_i)$}{$\\tau \\gets \\emptyset$} \\Comment{No domain can accomodate current $\\tau_i$ assignment}\n \\EndIf\n \\end{algorithmic}\n\\end{algorithm}\n\nIt is possible to show that Lemma \\ref{lem:bfdc-correct} also holds for the reversed case: producing the \\textit{latest feasible solution}. For that, domains are sorted in descending order and computations occur over graph $G_R$ instead of $G_D$. This requires minor changes to how Algorithm \\ref{alg:check} works to account for the reversed order of domains., with the general reasoning concerning how the algorithm operates remaining the same. The latest feasible solution $s$, if it exists, can be retrieved directly via $s=\\tau$. Let us now turn to the asymptotic worst-case time complexity of BFDC which is established via Lemma \\ref{lem:bfdc-time}.\n\n\\begin{lemma}\n BFDC stops within a number of iterations proportional to $O(|T||C| + |T|\\omega)$. 
\\label{lem:bfdc-time}\n\\end{lemma}\n\\begin{proof}\n First, consider that the complexity of \\textsc{BellmanFord} is $O(|T||C|)$ over the same graph $G_D$. The addition of \\textsc{DomainCheck} does not change the size of queue $Q$ and therefore the overall number of iterations remains the same as standard \\textsc{BellmanFord}. The change lies in the computational overhead of each iteration individually.\n \n There are at most $O(|V|)$ phases in \\textsc{BellmanFord} with a first-in, first-out queue. In each phase, a node is extracted from $Q$ at most once \\cite{book:networks}. In other words: the operations taking place in lines 7-11 of Algorithm \\ref{alg:bfdc} are executed at most $|V|$ times per phase. These operations have a complexity equivalent to $O(\\textsc{OutDeg}(i) + |D_i|)$, where \\textsc{OutDeg}$(i)$ denotes the number of arcs in set $A_D$ which have $i$ as their source. Hence, each phase has complexity $O(\\sum_{i \\in V} (\\textsc{OutDeg}(i) + |D_i|))$ which is equivalent to $O(|C| + \\omega)$. Altogether, we arrive at a complexity of $O(|V||C| + |V|\\omega)$ which is equivalent to $O(|T||C| + |T|\\omega)$ when solving SDTPs because $|V| = |T|$.\n\\end{proof}\n\nThe space complexity of BFDC is $O(|T|)$. The auxiliary arrays $\\tau$, $s$, $\\pi$, $z$ and queue $Q$ used in BFDC all require additional space proportional to $|T|$.\n\n\n\\subsection{Asymptotic worst-case complexities}\n\nLet us now assess the theoretical complexities of all the algorithms and draw some initial conclusions concerning what one should expect from empirical results. Table \\ref{tab:complexities} provides both the asymptotic worst-case time complexity and space complexity for each algorithm according to our implementation. Given that the ILP, CP and SCP models are often solved by means of general-purpose black-box solvers, we opted not to include their theoretical complexities in our analysis.\n\n \\begin{table}[!htb]\n \\centering\n \\caption{Asymptotic worst-case complexities for each algorithm.}\n \\label{tab:complexities}\n \\begin{tabular}{lrr}\n \\hline\n Algorithm & Time complexity & Space complexity\\\\\n \\hline\n ULT & $O(|T|^3|C|K + |C|^2K^2)$ & $O(|T|^2)$\\\\\n KAB & $O(|T|^2|C| + \\omega^\\frac{5}{2})$ & $O(|T|^2 + \\omega^2)$\\\\\n KAJ & $O(|T||C|\\log |T| + \\omega^\\frac{5}{2})$ & $O(|T|^2 + \\omega^2)$\\\\\n CRA & $O(|T||C|\\log |T| + |T|\\omega)$ & $O(|T|^2)$\\\\\n RULT & $O(|T|^2|C|K + |T|^2K^2)$ & $O(|T|)$\\\\\n BFDC & $O(|T||C| + |T|\\omega)$ & $O(|T|)$\\\\\n \\hline\n \\end{tabular}\n \\end{table}\n\n In terms of worst-case time complexity, BFDC clearly outperforms all other methods. CRA is the second fastest method. Meanwhile, it is difficult to rank KA and RULT because they have different terms which can dominate one another. Note that $\\omega = O(|T|K)$ and therefore whenever $\\omega \\leq (|T|K)^{\\frac{4}{5}}$ the second term (max-flow) in KA's complexity is never slower than RULT's second term, since in this regime $\\omega^{\\frac{5}{2}} \\leq (|T|K)^{2} = |T|^{2}K^{2}$. In this case, we can limit our comparison to the first term referring to shortest paths. Clearly, both KAB and KAJ are asymptotically faster than RULT in this regard. However, when $\\omega \\approx |T|K$ the time complexity of KA is no longer lower than RULT's due to the max-flow phase. As previously mentioned, we can also see that the use of Johnson's Algorithm in KA reduces its time complexity, bringing KAJ closer to CRA. Finally, ULT is the slowest algorithm in Table \\ref{tab:complexities}, mainly due to its heavy utilization of \\textsc{FloydWarshall}. 
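\n\nTo make the description of Algorithms \\ref{alg:bfdc} and \\ref{alg:check} concrete, and to illustrate why only $O(|T|)$ additional space is needed, the following condensed \\texttt{C++} sketch shows one possible rendering of the BFDC main loop and of \\textsc{DomainCheck}. It is purely illustrative and is not the implementation evaluated in Section \\ref{sec:exps}: the types \\texttt{Arc} and \\texttt{Domain}, the 0-based domain indices and the convention of signalling infeasibility by returning an empty vector are assumptions made for readability.\n\n\\begin{verbatim}\n#include <algorithm>\n#include <deque>\n#include <limits>\n#include <vector>\n\nstruct Arc    { int to; long long w; };   \/\/ C1 constraint (i,j,w_ij) in A_D\nstruct Domain { long long lo, hi; };      \/\/ one interval [l,u] of D_i (sorted)\n\n\/\/ Returns s = -tau (earliest feasible solution) or {} if infeasible.\nstd::vector<long long> bfdc(int alpha,\n                            const std::vector<std::vector<Arc>>& out,\n                            const std::vector<std::vector<Domain>>& D)\n{\n    const int n = static_cast<int>(out.size());\n    const long long INF = std::numeric_limits<long long>::max() \/ 4;\n    std::vector<long long> tau(n, INF);      \/\/ shortest-path distances\n    std::vector<int> pi(n, 0), z(n, 0);      \/\/ path lengths, domain indices\n    std::vector<char> inQ(n, 0);\n    std::deque<int> Q{alpha};\n    inQ[alpha] = 1; tau[alpha] = 0;\n    bool infeasible = false;\n\n    while (!Q.empty() && !infeasible) {\n        const int i = Q.front(); Q.pop_front(); inQ[i] = 0;\n\n        \/\/ DomainCheck: advance z_i until some domain can hold s_i = -tau_i.\n        if (!D[i].empty() && -tau[i] > D[i][z[i]].hi) {\n            bool found = false;\n            for (int k = z[i] + 1; k < (int)D[i].size() && !found; ++k)\n                if (-tau[i] <= D[i][k].hi) {\n                    z[i] = k;\n                    tau[i] = std::min(tau[i], -D[i][k].lo);\n                    if (tau[i] == -D[i][k].lo) pi[i] = 1;\n                    found = true;\n                }\n            if (!found) { infeasible = true; break; }  \/\/ -tau_i exceeds U(D_i)\n        }\n\n        \/\/ Standard Bellman-Ford relaxation of the outgoing arcs of i.\n        for (const Arc& a : out[i]) {\n            if (tau[a.to] > tau[i] + a.w) {\n                tau[a.to] = tau[i] + a.w;\n                pi[a.to] = pi[i] + 1;\n                \/\/ Negative cycle, or a path leading back to alpha.\n                if (pi[a.to] >= n || a.to == alpha) { infeasible = true; break; }\n                if (!inQ[a.to]) { Q.push_back(a.to); inQ[a.to] = 1; }\n            }\n        }\n    }\n\n    if (infeasible) return {};\n    std::vector<long long> s(n);\n    for (int i = 0; i < n; ++i) s[i] = -tau[i];\n    return s;\n}\n\\end{verbatim}\n\nOnly the arrays \\texttt{tau}, \\texttt{pi}, \\texttt{z}, \\texttt{inQ} and the first-in, first-out queue are allocated, mirroring the \\textsc{Push} and \\textsc{Pop} operations of Algorithm \\ref{alg:bfdc} and the $O(|T|)$ space bound discussed above.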
\n \n The time complexities documented in Table \\ref{tab:complexities} are indicative of the challenges faced when solving SDTPs. Despite their close ties to shortest path problems, the presence of negative cycles and disjunctive domains requires more complex and refined techniques. In particular, we wish to call attention to the increased space complexity in most of the established techniques in the literature. Only RULT and BFDC are able to solve SDTPs using linear space. Although this may appear unimportant given the availability of computational resources, a quadratic memory overhead can quickly become prohibitive in practice. This is often problematic given that SDTPs appear as subproblems of other more complex problems which require their own share of memory. We will discuss the impact of memory usage later in Section \\ref{sec:discussion}. \n \n A final remark concerns the relation between SDTPs and time-dependent STPs established in Section \\ref{sec:sdtp}. \\citet{art:td-stp} showed how the time-dependent STP can be solved in time $O(|T||C|)$. However, despite the relation between the two problems, Table \\ref{tab:complexities} shows that solving SDTPs requires asymptotically more time than time-dependent STPs in the worst case. This is primarily due to the discontinuity of time-point domains which renders certain assignments in SDTP solutions infeasible, thereby requiring additional procedures to correct the assignments and (re)check feasibility. When at most one domain exists per time-point ($K \\leq 1$), this correction is not necessary. \n \n\n\\section{Experiments} \\label{sec:exps}\n\nExperiments were carried out on a computer running Ubuntu 20.04 LTS equipped with two Intel Xeon E5-2660v3 processors at 2.60GHz, with a total of 160 GB RAM, 5 MB of L2 cache and 50 MB of L3 cache. Intel's Hyper-threading technology has been disabled at all times to avoid negatively influencing the experiments. All of the algorithms were implemented using \\texttt{C++} and compiled with GNU GCC 9.3 using optimization flag \\texttt{-O3}. The ILP, CP and SCP models were implemented using the \\texttt{C++} API of CPLEX 12.9. Methods were only allowed one thread during execution.\n\nOur experiments primarily focus on measuring observed computation times. In order to obtain accurate time measurements, we employ \\texttt{C++}'s \\texttt{std::steady\\_clock} to measure CPU time. To ensure as much fairness as possible when comparing methods that differ significantly with respect to the input representation, we decided to document only the computation time for solving an instance. This means our results do not include information concerning the time needed for input, output or preprocessing that is performed by some algorithms to transform data into a more suitable format. Similarly, the time to build ILP, CP and SCP models is not included in their results.\n\nWhile we understand that evaluating methods with respect to computation times is not always ideal \\cite{art:exp1,art:exp2}, it is difficult to obtain a single evaluation metric for algorithms that differ so much in terms of their basic operations and components. Additionally, \\citet{art:shapiro} argued that for tractable problems, the running time of algorithms is often a reasonable metric. \n\nSince we are proposing the first experimental study to evaluate algorithms for solving SDTPs, we introduce four datasets to assess the performance of the various methods and their implementations. 
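\n\nAs an illustration of this measurement protocol, the following \\texttt{C++} fragment shows the pattern used to time a single run. It is a schematic sketch only: the instance type and the routine \\texttt{solveSDTP} are placeholders rather than the actual interfaces of our implementations, and input parsing as well as any model or graph construction are assumed to happen before the timed region, as described above.\n\n\\begin{verbatim}\n#include <chrono>\n#include <vector>\n\nstruct SDTPInstance { \/* time-points, C1 arcs and domains go here *\/ };\n\n\/\/ Placeholder solver stub; in practice this would be BFDC, RULT, CRA, ...\nstd::vector<long long> solveSDTP(const SDTPInstance& inst) { return {}; }\n\n\/\/ Time a single solve in microseconds, excluding I\/O and preprocessing.\nlong long timeSolveMicroseconds(const SDTPInstance& inst)\n{\n    using clock = std::chrono::steady_clock;\n    const auto start = clock::now();\n    const auto solution = solveSDTP(inst);   \/\/ only this call is measured\n    const auto stop = clock::now();\n    (void)solution;                          \/\/ feasibility is checked elsewhere\n    return std::chrono::duration_cast<std::chrono::microseconds>\n               (stop - start).count();\n}\n\\end{verbatim}\n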
The four datasets differ in terms of their problem structure and particularly with respect to the underlying distance graph. Instances are subdivided into \\textit{shortest path instances}, \\textit{negative cycle instances}, \\textit{vehicle routing instances} and \\textit{very large instances}. All of them include only integer values. It is therefore possible to accommodate all methods, including CP and SCP, without any changes. We will begin by first detailing the procedure by which we generated each instance set before presenting the computational results obtained from our experiments.\n\n\\subsection{Shortest path instances} \\label{sec:sp-instances}\n\nInstances are created with graph generators for Shortest Path (SP) problems. A graph $G=(V,A)$ is transformed into an SDTN $N=(T,C)$ by setting $T = V$, $C_1 = A$ and deriving $C_2$ constraints for the time-point domains from the shortest paths in $G$. Let us define the following parameters for an SDTP instance: number of time-points $|T|$, number of Type 1 constraints $|C_1|$, number of elements with more than one domain $|T_D|$, and number of domains $K > 1$ per $i \\in T_D$ such that $|D_i| = K$. There are four SP groups which differ in terms of how either graph $G$ or $C_2$ constraints are created. The generation process for each group is summarized below (for more details see Appendix \\ref{ap:instances}).\n\n\\begin{enumerate}\n \\item \\textsc{Rand}: generates graph $G$ using \\textsc{Sprand} introduced in the SPLib \\cite{art:splib}. Nodes and arcs are all created randomly. Constraints $C_2$ are generated based on the shortest path from a dummy node to every $i \\in V$. \n \\item \\textsc{Grid}: generates graph $G$ using \\textsc{Spgrid}, also introduced in the SPLib \\cite{art:splib}. Nodes are generated in a grid format with $X$ layers and $Y$ nodes per layer. Arcs connect nodes within the same layer and to those in subsequent layers. $C_2$ constraints are generated in the same way as for \\textsc{Rand}. \n \\item \\textsc{Seq}: generates graph $G$ using the tailored generator \\textsc{Spseq}. Nodes are generated at random similarly to \\textsc{Sprand}. A path connecting all nodes with $|V|-1$ arcs is created where the weight of all arcs is $w_{ij}=1$. Afterwards, the remaining $|A|-|V|+1$ arcs are created at random with greater weights. This creates a known shortest path which may be difficult for some methods to find. $C_2$ constraints are generated in the same way as for \\textsc{Rand} and \\textsc{Grid}.\n \\item \\textsc{Late}: generates graph $G$ using either \\textsc{Sprand} or \\textsc{Spseq}. $C_2$ constraints are created so that at least 60\\% of the earliest feasible solutions $s_i$ belong to the last domain of the respective time-point. \n\\end{enumerate}\n\nFor each of these four datasets, we also create four subsets to assess which key instance characteristics have the biggest impact on algorithmic performance. For each of these subsets we fix three of the parameters of an SDTP, and then vary the fourth. \n\n\\subsubsection{\\textsc{Nodes} dataset}\n\nThe number of time-points $|T|$ varies in the range \\{100,200,\\dots,12800,25600\\} for dataset \\textsc{Nodes}. Other parameters are fixed to $|C_1| = 6\\cdot|T|$, $|T_D|=0.8\\cdot|T|$ and $K=10$. Five instances were generated for each combination of dataset (1)-(4) and number of time-points $|T|$ (henceforth denoted a configuration): three feasible and two infeasible instances. 
For example, five instances have been generated for configuration (\\textsc{Rand}, \\textsc{Nodes}, $|T|=100$). \n\nFigure \\ref{fig:sp-nodes} provides the results for the \\textsc{Nodes} subset. Each graph reports the average computation times for each method according to the number of time-points for each dataset. The values reported are the average from 20 runs, so as to mitigate the impact of any outliers due to the short computation times needed to solve SDTPs \\cite{art:shapiro}. Additionally, methods are given a time limit of two seconds. If a method timed out for all instances of a given size, we omit these results for clarity. This explains the incomplete curves present in some of the graphs. However, if for some instance sizes a method could solve at least one instance (out of five), we report the averages including potential timeouts. KAJ rarely outperformed KAB. Based on these results, we decided to only show the results for KAB. In the experiments, we will comment on specific differences between the two methods whenever necessary.\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/graph_sp_nodes.pdf}\n \\caption{Average computation times in microseconds ($\\mu$s) for the \\textsc{Nodes} subset. Both $x$- and $y$-axis are reported in log scale.}\n \\label{fig:sp-nodes}\n\\end{figure}\n\nThe general performance of the algorithms in graphs of Figure \\ref{fig:sp-nodes} is somewhat consistent. We can clearly see a cluster of curves towards the top of the graphs which include ULT, CP, SCP, ILP and KAB. It is also easy to distinguish a second cluster formed by RULT and BFDC at the bottom of the graphs. Meanwhile, CRA lies in-between these two clusters, typically starting at the bottom for small instances and trending towards the top for the largest ones. These differences are not surprising given that most methods in the top cluster are more general than those in the bottom cluster (including CRA). While these differences are to be expected, the question as to whether they hold in different scenarios must still be answered.\n\nULT demonstrates the fastest growth in the graphs and can rarely solve instances containing $|T| > 1000$. This behavior is easily explained by the use of \\textsc{FloydWarshall} in every iteration, which contributes to a $\\Theta(|T|^3)$ time complexity. CP and SCP are relatively consistent in execution time, with SCP able to solve slightly larger instances in \\textsc{Rand}, \\textsc{Seq} and \\textsc{Late}. For \\textsc{Late} instances, SCP outperformed CP in all scenarios. This showcases how CPLEX as a CP solver can benefit from SCP's model structure. Meanwhile, ILP is the quickest method in the topmost cluster of methods for \\textsc{Rand}, \\textsc{Grid} and \\textsc{SEQ}, while it performs similarly to SCP for \\textsc{Late} instances.\n\nOne can observe that KAB only outperforms other methods in the topmost cluster for small instances ($|T| \\leq 800$). The reason is that computing APSPs requires a significant amount of time and quickly becomes prohibitive for larger instances. For \\textsc{Late} instances, KAB was unable to solve those where $|T| > 3200$. This is because \\textsc{Late} instances tend to have larger interval sets $E$, which directly impact the creation of the conflict graph (line 3 in Algorithm \\ref{alg:kra}) thereby limiting KA's execution time.\n\nCP, SCP, ILP, ULT, and KAB always take longer than one millisecond to compute results. 
Meanwhile, CRA begins below or at this threshold in all cases and grows quickly, often reaching the one-second threshold. Nevertheless, we can see that CRA typically performs better than ILP. Even when CRA is slower, the differences are not significant. This is true except for the \\textsc{Late} instances, where CRA timed out for all five instances with $|T| = 25600$. In these cases, the combination of many time-points and late feasible schedules leads CRA to compute more entries of the distance matrix using \\textsc{Dijkstra}, causing major computational overhead for the method (line 8 in Algorithm~\\ref{alg:cra}). Both RULT and BFDC always remain below the 100 milliseconds threshold and are faster than CRA. This is despite the fact that RULT has a theoretical worst-case time complexity slower than CRA. We did not observe any timeout for either method in the bottom cluster, which contributes to their lower curves in Figure \\ref{fig:sp-nodes}. The relative position of both algorithms is also very consistent, with RULT only slightly slower than BFDC.\n\n\\subsubsection{\\textsc{Density} dataset}\nThe second subset is \\textsc{Density}, in which we vary the number of constraints $C_1$ in the range \\{20,25,\\dots,85,90\\}\\%, given as a percentage of the maximum number of constraints (maximum number of arcs in the base graph). Other parameters are fixed as follows: $|T|=1008$, $|T_D|=0.8\\cdot|T|$ and $K=10$. Similar to \\textsc{Nodes}, five instances are generated per configuration. Average computation times according to density growth are shown in Figure \\ref{fig:sp-density}.\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/graph_sp_density.pdf}\n \\caption{Average computation times for the \\textsc{Density} subset. The $y$-axis is reported in log scale.}\n \\label{fig:sp-density}\n\\end{figure}\n\nOn the one hand, the performance of ULT barely changes with respect to varying densities due to \\textsc{FloydWarshall}'s phase which maintains ULT among the slowest methods. On the other hand, ULT was always able to solve at least one instance per configuration which is not true for all methods. Except for the \\textsc{Grid} instances, CP and SCP experience difficulties solving problems with a density greater than $40\\%$. KAB far outperforms ULT for \\textsc{Grid} instances, but generally performs similarly to ULT in all other cases. KAB is also never slower than CP or SCP. In terms of the topmost cluster, ILP was consistently the best performing method.\n\nHere we notice that CRA demonstrates far better performance than ULT, CP, SCP, ILP and KAB for the \\textsc{Rand}, \\textsc{Grid} and \\textsc{Seq} instances. It remains a sort of middling algorithm, but shows little variation in performance resulting from the network's density. However, for the \\textsc{Late} instances, CRA's performance compares to that of the ILP.\nSimilar to the \\textsc{Nodes} dataset, this is explained by the overhead incurred by \\textsc{Dijkstra} computations. For the \\textsc{Density} instances, however, the network is more connected, leading to more time-points being affected by changes made to others. This in turn also requires more assignment updates.\n\nBoth RULT and BFDC also show little variation with respect to the network's density. They remain the fastest algorithms, with no timeouts observed. \n\n\\subsubsection{\\textsc{NumDisj} dataset} \n\nIn the third subset, we vary the number of domains $K$ per time-point $i \\in T_D$ in the range \\{5,10,20,\\dots,90,100\\}. 
Other parameters are fixed as follows: $|T|=2000$, $|C_1| = 12000$ and $|T_D|=1600$. Figure \\ref{fig:sp-numd} shows the average computation times according to parameter $K$.\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/graph_sp_numd.pdf}\n \\caption{Average computation times for the \\textsc{NumDisj} subset. The $y$-axis is reported in log scale.}\n \\label{fig:sp-numd}\n\\end{figure}\n\nIn these experiments, we can still differentiate the three clusters of methods from before, but their individual behaviors are now distinct. ULT suffers far more timeouts and often cannot solve a single instance in any configuration. However, this appears unrelated to parameter $K$ and more due to some other specific instance characteristic that was not captured in these experiments. CP and SCP can solve most instance sizes. Additionally, SCP is faster and can solve more instances as the value of $K$ increases compared to CP. This difference is pronounced for dataset \\textsc{Late}, where SCP not only outperforms CP but also CRA and ILP in all cases. Despite its slow performance for the \\textsc{Late} instances, ILP outperforms ULT, CP, SCP and KAB across all other configurations.\n\nKAB experiences the same difficulties solving \\textsc{Late} instances, where only the smallest ones with $K=5$ were solved. This is unsurprising since $\\omega$ is directly related to $K$ (recall Section \\ref{sec:sdtp}). KAJ performed slightly better than KAB and was able to solve \\textsc{Seq} instances with $K \\leq 90$. CRA again outperforms all methods in the topmost cluster except for the \\textsc{Late} instances, where SCP is faster. For \\textsc{Seq}, we notice the power of these \\textsc{Dijkstra} computations because they help CRA to easily find the hidden shortest path used during the instance's construction. This then leads to much shorter executions. RULT and BFDC remain the fastest methods. However, RULT is clearly impacted to a far greater extent by the growth in the number of time-point domains compared to BFDC for the \\textsc{Late} instances. This observation is aligned with their asymptotic worst-case time complexities.\n\nThe graphs in Figure \\ref{fig:sp-numd} suggest that the methods solved using CPLEX (CP, SCP and ILP) are those most impacted by increases in $K$. This may be due to the number of constraints created when more domains exist and an increase in the cardinality of sets $\\Phi_i$ in the SCP model.\n\n\\subsubsection{\\textsc{VarDisj} dataset} The fourth and final subset for SP instances is \\textsc{VarDisj}. This subset varies the size of $T_D$ in the range \\{10,20,\\dots,90,100\\}\\%, given as a percentage of the total number of time-points $|T|$ that have multiple domains. Other parameters are fixed as follows: $|T|=2000$, $|C_1|=6\\cdot|T|$ and $K=10$. Figure \\ref{fig:sp-vard} provides computation runtime results according to the size of set $T_D$.\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/graph_sp_vard.pdf}\n \\caption{Average computation times for the \\textsc{VarDisj} subset. The $y$-axis is reported in log scale.}\n \\label{fig:sp-vard}\n\\end{figure}\n\nFigure \\ref{fig:sp-vard} shows how the results for this subset of instances differ significantly from the previous subsets. First, for instances \\textsc{Rand}, \\textsc{Grid} and \\textsc{Seq} the methods are now far more distinctly dispersed across different parts of the graphs. 
ULT is once again the slowest method, highlighting its difficulty in computing APSPs. KAB competes with CP and SCP in dataset \\textsc{Rand}, but is clearly slower than these methods for \\textsc{Grid} and \\textsc{Seq}. Similar to \\textsc{NumDisj}, KAJ demonstrated slightly better performance for \\textsc{Seq} instances than KAB, yet not enough to outperform CP or SCP. Indeed, these two methods always outperform KAB and ULT. Meanwhile, ILP is quicker than the previous four methods, maintaining its somewhat consistent behavior as the best general-purpose algorithm for solving SDTPs. CRA is clearly the method that suffers the most from an increasing number of time-points that have multiple domains. Nevertheless, for the first three datasets, CRA is always faster than ILP and for the \\textsc{Seq} instances even outperforms RULT for small ones. Finally, RULT and BFDC remain the fastest methods and as the size of set $T_D$ increases there is little noticeable impact on their performance.\n\nWhen we consider the \\textsc{Late} instances, which are arguably the most difficult to solve, the situation changes. ULT cannot solve a single instance in this set. KAB appears as the slowest method. Meanwhile, CP, SCP, ILP and CRA are all clustered below KAB. CP is clearly the slowest of the four methods. ILP and CRA exhibit some variations, but overall their growth trend is far less pronounced than for the other three sets (\\textsc{Rand}, \\textsc{Grid} and \\textsc{Seq}). SCP is very consistent across all experiments and for large $T_D$ sets outperforms the other three algorithms, albeit not by a very large margin. RULT is a whole order of magnitude slower than BFDC for almost all datasets except for \\textsc{Seq}. Nevertheless, neither RULT nor BFDC exhibit any significant variations in performance.\n\n\\subsection{Negative cycle instances}\n\nThe \\textsc{Negcycle} instances use the filter of same name proposed by \\citet{art:sp-fp}. This filter is applied to instances from Section \\ref{sec:sp-instances} by introducing a negative cycle into the underlying base graph. Only instances that are feasible before applying the filter are considered so that they become infeasible precisely due to the negative cycle. Similar to \\citet{art:sp-fp}, we consider four classes of negative cycles: (\\textsc{Nc02}) one cycle with three arcs; (\\textsc{Nc03}) $\\lfloor \\sqrt{|T|} \\rfloor$ cycles with three arcs each; (\\textsc{Nc04}) $\\lfloor \\sqrt[3]{|T|} \\rfloor$ cycles with $\\lfloor \\sqrt{|T|} \\rfloor$ arcs each; and (\\textsc{Nc05}) one Hamiltonian cycle. \n\nFor each one of these four classes, we vary $|T|$ in the range \\{100,200,\\dots,12800,25600\\} while fixing parameters $|C_1|=6\\cdot|T|$, $|T_d|=0.8\\cdot|T|$, $K=10$. For each configuration, we generate three instances. Figure \\ref{fig:nc} provides the computation results for each of the negative cycle classes, with the average time to prove infeasibility reported for each method over 20 runs.\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/graph_all_nc.pdf}\n \\caption{Average computation times for the \\textsc{Negcycle} datasets. Both the $x$- and $y$-axis are reported in log scale.}\n \\label{fig:nc}\n\\end{figure}\n\nAn initial observation is that ULT experiences greater difficulty solving instances with fewer and smaller cycles (\\textsc{Nc02} and \\textsc{Nc03}). 
CP was unable to solve the largest instances in any of the cases, while SCP was able to prove infeasibility of the large instances when more than one cycle existed (\\textsc{Nc03} and \\textsc{Nc04}). Meanwhile, ILP demonstrated a very consistent performance despite the cycles.\n\nFor the first time, we notice KAB among the fastest algorithms at the bottom of the graphs. This is not surprising since for \\textsc{Negcycle} instances KAB is able to detect the cycle when computing distance matrix $\\delta$ (line 1 in Algorithm \\ref{alg:kra}). Also for the first time, BFDC does not demonstrate the lowest execution time for \\textsc{Nc02} when compared to CRA, RULT and KAB. For the same dataset, CRA is consistently the fastest method because it can also detect negative cycles in its first phase with standard \\textsc{BellmanFord} (line 1 in Algorithm \\ref{alg:cra}). This showcases the overhead incurred by \\textsc{DomainCheck} in BFDC (line 8 in Algorithm \\ref{alg:bfdc}).\n\nHowever, in datasets \\textsc{Nc03}-\\textsc{Nc05} the methods which feature in the cluster at the bottom are harder to differentiate. KAB typically appears to be the slowest, CRA the fastest, with BFDC and RULT lying somewhere in the middle. Nevertheless, the differences between KAB, RULT, BFDC and CRA for \\textsc{Negcycle} instances are negligible for most purposes.\n\n\\subsection{Vehicle routing instances}\n\nWe extract vehicle routing instances (\\textsc{Vrp}) from solutions to Vehicle Routing Problems with Multiple Synchronization (VRPMS) constraints \\cite{art:vrpms}. These VRPMS instances contain multiple routes for which departure times and service times must be assigned while complying with synchronization constraints between routes, in addition to maximum route duration constraints. We refer interested readers to Appendix \\ref{ap:instances} for more information concerning how exactly these instances have been generated.\n\n\n\\textsc{Vrp} instances primarily differ in terms of their number of time-points, which ranges from 10 to 1300. For each instance size, we again create five instances: three feasible and two infeasible. Figure \\ref{fig:vrp} presents the computation times per method according to the number of time-points in the instance, with the values reporting the average over 20 runs. Results are grouped according to instance feasibility given that some differences in performance can be observed depending on whether an instance is feasible.\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/graph_all_vrp.pdf}\n \\caption{Average computation times for the \\textsc{Vrp} dataset. The $y$-axis is reported in log scale.}\n \\label{fig:vrp}\n\\end{figure}\n\nOne can quickly notice that ULT can easily prove infeasibility of instances, but it experiences difficulty producing solutions for feasible instances. Indeed, for feasible instances ULT cannot solve those where $|T| > 1000$. CP and SCP exhibit different performances, with SCP faster on average, particularly for infeasible instances. KAB consistently performs similarly to CP. The performance of ILP varies significantly for feasible instances and somewhat less significantly for the infeasible cases. It is difficult to conclude whether ILP is faster than SCP overall, although we can easily see that both methods are faster than ULT, CP and KAB. \n\nCRA, RULT and BFDC are once again clustered towards the bottom of the graph, signifying that they are the fastest methods. 
RULT is the slowest among the three, although CRA does vary a lot and is slower for certain cases. BFDC can be concluded to be the fastest method although, yet again, CRA does outperform BFDC in certain cases. Overall though, one can summarize the order from slowest to fastest as follows: RULT, CRA and BFDC.\n\nFinally, note that \\textsc{Vrp} instances all have an underlying distance graph which is very sparse. Vertices have at most two outgoing arcs and two incoming arcs, with the exception of those connected to the beginning of the time horizon $\\alpha$. Additionally, the graphs are almost acyclical and contain a very limited number of arcs that create cycles. While we do not exploit this structure when solving the \\textsc{Vrp} dataset, this would certainly be an interesting avenue for future research.\n\n\\subsection{Very large instances}\n\nAlthough the three previous datasets have diverse characteristics, they all fail to capture scenarios where the number of time-points is very large. These types of instances are important when it comes to truly verifying the scalability of methods which may have advantages for small-scale problems yet suffer when instance size grows significantly. To verify the extent to which our previous analysis holds for such instances, we generate five very large (\\textsc{Vl}) problems, all of which are feasible. Table \\ref{tab:vl} presents the characteristics of these instances in terms of their \\textit{Base} generation method, number of time-points, number of $C_1$ constraints, maximum number $K$ of domains per time-point and total number of domains $\\omega$. Base \\textsc{Tsp} means that graph $G$ was extracted from the Traveling Salesman Problem Library (TSPLib) \\cite{art:tsplib} instance \\texttt{pla85900} that contains 85900 nodes. For instance \\textsc{Vl}-1, a random subset of nodes is selected from \\texttt{pla85900}. Meanwhile, instances \\textsc{Vl}-3, \\textsc{Vl}-4 and \\textsc{Vl}-5 are generated with the SP procedures outlined in Section \\ref{sec:sp-instances}.\n\n\n\\begin{table}[!ht]\n\\caption{Very large instances.}\\label{tab:vl}\n\\centering\n\\begin{tabular}{llrrrr}\n\\hline\nInstance & Base & $|T|$ & $|C_1|$ & $K$ & $\\omega$\\\\\n\\hline\n\\textsc{Vl-1} & \\textsc{Tsp} & 50 000 & 500 000 & 20 & 905 000\\\\\n\\textsc{Vl-2} & \\textsc{Tsp} & 85 900 & 859 000 & 60 & 4 647 190\\\\\n\\textsc{Vl-3} & \\textsc{Seq} & 200 000 & 2 000 000 & 100 & 16 040 000\\\\\n\\textsc{Vl-4} & \\textsc{Late} & 400 000 & 4 000 000 & 180 & 57 680 000\\\\\n\\textsc{Vl-5} & \\textsc{Rand} & 1 000 000 & 10 000 000 & 500 & 400 200 000\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nFigure \\ref{fig:vl} provides a graph documenting the average execution times over 10 runs for a subset of the methods with the \\textsc{Vl} instances. We opted to test only the best performing methods: ILP, CRA, RULT and BFDC. For each run, the methods were given a one hour time limit.\n\nEven among the four best performing methods, it is obvious that CRA and ILP are limited with respect to the instance sizes they can solve. Indeed, they were unable to solve instances beyond \\textsc{Vl-3} within one hour of execution. While it is difficult to determine the precise reason for the behavior of the ILP solver, the reason for CRA lies in how large instances require more \\textsc{Dijkstra} computations (in particular for \\textsc{Vl-4}). The computation of APSPs is not only slow but also leads to far worse memory locality. 
This impacts cache usage which reduces the performance of CRA compared to RULT, even though the latter has a theoretically slower worst-case time complexity. Figure \\ref{fig:vl} also showcases the fact that \\textsc{Late} instances are much more difficult than other instances. This is true even when a \\textsc{Late} instance has less than half the number of time-points of a \\textsc{Rand} instance. \n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{images\/graph_all_vl.pdf}\n \\caption{Average computation times for the \\textsc{Vl} dataset.}\n \\label{fig:vl}\n\\end{figure}\n\n\\section{Discussion} \\label{sec:discussion}\n\n \\citet{art:exp2} noted that a lot of what is reported in experimental papers are observations about the \\textit{implementation} of an algorithm rather than the algorithm itself as a mathematical object. On the one hand, our study somewhat conforms to this trend. Despite being a well-known limitation of empirical studies, we hope to mitigate this by providing our code so that interested readers can inspect and even improve upon the implementations. On the other hand, some results documented in this paper are implementation-independent. For example: the reduced space complexity achieved by both RULT and BFDC compared to all other methods.\n\nLet us now turn our attention to a broad analysis of the computational study. Table \\ref{tab:summary} summarizes the results of the experiments from Section \\ref{sec:exps}, with the exception of \\textsc{Vl} instances. Columns \\textit{Max. time (ms)}, \\textit{Avg. time (ms)} and \\textit{Std. time (ms)} report the maximum, average and standard deviation of the recorded execution times per method in milliseconds. Column \\textit{Total time (s)} reports the total time required by each method to solve all of the instances, including eventual timeouts. Finally, column \\textit{Timeouts (\\%)} provides the percentage of runs for which the method timed out.\n\nWhen considering Table \\ref{tab:summary}, one must take into account the fact that ULT, CP, SCP and ILP are all more general than CRA, RULT, and BFDC. Hence, it should not come as a surprise that the latter algorithms outperform the former in almost all cases. Nevertheless, our experiments show that there are major differences in performance between algorithms when solving SDTPs and certain conclusions may appear counter-intuitive at first. For example, despite the polynomial worst-case asymptotic time complexity of ULT and KA, these algorithms exhibit poor general performance when solving SDTPs. Meanwhile, ILP demonstrated good performance for a problem that might have initially seemed more suitable for constraint programming.\n\n\\begin{table}[!ht]\n\\caption{Summary of results.}\\label{tab:summary}\n\\centering\n\\begin{tabular}{lrrrrr}\n\\hline\nMethod & Max. time (ms) & Avg. time (ms) & Std. 
time (ms) & Total time (s) & Timeouts (\\%)\\\\\n\\hline\nILP & 1947 & 350 & 462 & 7058 & 2.28\\\\\nCP & 2017 & 1062 & 985 & 21406 & 33.39\\\\\nSCP & 2002 & 875 & 954 & 17633 & 25.15\\\\\nULT & 1990 & 1476 & 812 & 29758 & 66.76\\\\\nKAB & 2000 & 868 & 873 & 17491 & 28.02\\\\\nKAJ & 2001 & 1065 & 1276 & 21478 & 30.26\\\\\nCRA & 1895 & 138 & 317 & 2772 & 1.19\\\\\nRULT & 152 & 6 & 13 & 115 & 0.00\\\\\nBFDC & 28 & 1 & 2 & 18 & 0.00\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\nProfiling the implementations of both KAB and KAJ showed that $\\approx 90\\%$ of their execution time was consistently spent building distance matrix $\\delta$ and computing conflicts (lines 1-3 of Algorithm~\\ref{alg:kra}). Less than $5\\%$ of the total time was observed to be incurred by max-flow computations (line 6 of Algorithm~\\ref{alg:kra}). This showcases how the bottleneck is the computation of APSPs and conflicts rather than the theoretically slower max-flow step.\n\nSimilarly, profiling the implementation of CRA showed that \\textsc{Dijkstra} computations were responsible for up to 95\\% of the processing time. This observation includes our lazy evaluation implementation of the distance matrix. When the full matrix is precomputed as originally described by \\citet{art:cra}, the proportion of time spent on \\textsc{Dijkstra} could grow even more extreme. In many cases, precomputation of the distance matrix was not possible within the imposed time limit. Furthermore, precomputing the distance matrix for large instances is simply not possible due to insufficient memory. In spite of these drawbacks, one advantage of CRA is that some instances can be solved quickly during its first stage (\\textsc{BellmanFord}) when the initial solution is already feasible and there is therefore no time-point assignment $s_i$ which must undergo corrections.\n\nThe results detailed in Table \\ref{tab:summary} further confirm those observed in Section \\ref{sec:exps}. RULT and BFDC present the best performances overall, with computation times that are between two and three orders of magnitude shorter than all other methods. There is also no record of either of these algorithms timing out during our experiments. This performance can be explained by two factors. First, both RULT and BFDC focus on computing single-source shortest paths while ULT, KA and CRA consume a lot more computational resources solving APSPs. Second, and this comes as a direct consequence of the first factor, both RULT and BFDC have linear space complexity using only one-dimensional arrays of size $|T|$, thereby improving their cache locality and overall efficiency. Indeed, some instances could be solved almost entirely in cache by these two methods, while the quadratic space complexity of other methods made this far more unlikely.\n\nFigure \\ref{fig:cache} illustrates cache reference measurements for CRA, RULT and BFDC when solving the \\textsc{Vl} instances. We focus on this dataset because it required the most algorithmic effort. Recording of cache reference events was performed using the \\texttt{perf\\_events} package from the Linux kernel \\cite{web:perf}. The \\textit{Full scale} row in the top half of the figure demonstrates how difficult it can sometimes be to compare the behavior of different approaches. This is why we have also included the \\textit{Small scale} graphs below, which zoom into the \\textit{Full scale} graphs in order to reveal further details concerning behavior of each method. 
These graphs make it clear how both RULT and BFDC can solve the large instances much more quickly simply by using the cache more efficiently. In all experiments both RULT and BFDC required fewer total cache references than the number of cache misses by CRA.\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{images\/graph_all_cache.pdf}\n \\caption{Cache references per method in the \\textsc{Vl} dataset.}\n \\label{fig:cache}\n\\end{figure}\n\nThe methods implemented in this paper also differ with respect to the type of solution produced. ULT, CRA, RULT and BFDC provide the earliest feasible solution at the end of their execution. However, ULT, RULT and BFDC can also return the latest feasible solution. \\citet{art:cra} did not comment on whether their method could return the latest feasible solution, although it appears possible when computing solutions over graph $G_R$ and with some minor changes to operations (e.g., \\textsc{UpdateAssignments}). By contrast, CP, SCP, ILP and KA are not guaranteed to return either the earliest or latest feasible solution. One could ensure finding either one of them by defining an appropriate objective function for the underlying model, but it is unclear how much this would impact their performance. For example, KA would require the solution of max-flows with arbitrary arc capacities rather than unitary capacities \\cite{art:kumar-estp}.\n\nWhile one could be tempted to conclude that BFDC and RULT should be the go-to methods when faced with SDTPs, this is not necessarily the conclusion we advocate for. Our advice is instead a little more nuanced. Given that BFDC performed the best for SDTPs in isolation, it represents the most sensible choice when evaluating, for example, the feasibility of interdependent vehicle routes. However, other problems which feature SDTPs may benefit from other algorithms to achieve the best performance. For restricted disjunctive temporal problems, \\citet{art:cra} introduced a method which exploits CRA's structure to obtain a low time complexity. In theory, it is also possible to employ BFDC, but this would increase the asymptotic worst-case time complexity of the algorithm for RDTPs. Similarly, \\citet{art:kumar-estp} showed that KA can be employed with minor changes to solve SDTPs where each domain is assigned an arbitrary preference weight and the goal is to find a feasible solution which maximizes the sum of the selected domains. In this problem context it is not clear how one could employ BFDC or RULT. On the other hand, we have shown empirically that KA experiences difficulty solving even medium-sized instances. Therefore, it may be worth considering further research on faster methods to solve these SDTPs with preferences.\n \n\\section{Conclusion}\n\nSimple disjunctive temporal problems generalize simple temporal problems. They have a wide range of real-world applications where they typically arise as subproblems. Some examples of application domains include robot planning, workforce scheduling, logistics and management systems. SDTPs can also be used in decomposition methods to solve more general temporal constraint satisfaction problems. It is therefore of interest for both researchers and practitioners to understand the empirical performance of algorithms for solving SDTPs in addition to their theoretical time bounds. 
Unfortunately, the literature previously understood very little about these methods in practice.\n\nTo bridge this gap and bring theory and experimentation in these temporal problems closer together, we provided a large exploratory and empirical study concerning new and established algorithms for solving SDTPs. Our results indicate that theoretical worst-case time complexities are not necessarily indicative of the observed computation times of these algorithms. Moreover, we showed that the quadratic space complexity of previous algorithms comes with several drawbacks that limit their use in practice, regardless of their asymptotic time complexity. Indeed, for very large datasets, some methods were unable to solve an otherwise simple problem due to memory limitations, even when executed on modern computers. By contrast, algorithms which possess a lower space complexity albeit a higher time complexity solved very large instances within only a few seconds.\n\nWe hope that the results of this paper provide useful evidence for future researchers that helps them make informed decisions concerning the best algorithm for their application, thereby reducing reimplementation efforts. The code we have made publicly available will also help future research test whether our conclusions hold for other datasets. Finally, our implementations also provide some common ground for benchmarking new algorithms or speedup techniques for simple disjunctive temporal problems and their special cases.\n\n In terms of future research, one could consider performing a similar computational experiment for the restricted disjunctive temporal problem \\cite{art:kra}. Instances could be derived from those introduced in this paper by adding Type 3 constraints. Another option is to investigate algorithms for variants of the SDTP with preferences associated with each domain of a time-point \\cite{art:kumar-estp}. For example, a certain time-point may have greater preference to be executed in the morning rather than in the evening. Another possibility is to extract SDTP instances from real-world applications and verify whether the conclusions from our research remain valid for other graph structures or if better performing methods exist.\n\n\n\n\\begin{acks}\n\n\nThis research was supported by Internal Funds KU Leuven (IMP\/20\/021) and by the Flemish Government, Belgium under \\textit{Onderzoeksprogramma Artifici\u00eble Intelligentie} (AI). Editorial consultation provided by Luke Connolly (KU Leuven).\n\\end{acks}\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nThere is currently a great deal of interest in the rapidly evolving\nfield of observations of the polarization of the cosmic microwave\nbackground (CMB). This interest stems from the fact that such\nobservations have the potential to discriminate between inflation and\nother early-universe models through their ability to constrain an\nodd-parity $B$-mode polarization component induced by a stochastic\nbackground of gravitational waves at the time of last\nscattering~\\citep{kamionkowsky97,seljak97}.\n\nFrom an observational point of view, we are still a long way off from a\ndetection of this $B$-mode signature of inflation. 
However, much\nprogress has been made recently with the detection of the much\nstronger $E$-mode polarization signal on large scales by the {\\sevensize WMAP}\\,\nexperiment \\citep{page07,nolta08}.\nOn smaller scales, a growing number of\nballoon-borne and ground-based experiments have also measured $E$-mode\npolarization including {\\sevensize DASI}\\, \\citep{leitch05}, {\\sevensize CBI}\\,\n\\citep{sievers07}, {\\sevensize BOOMERANG}\\, \\citep{montroy06}, {\\sevensize MAXIPOL}\\,\n\\citep{wu07}, {\\sevensize CAPMAP}\\, \\citep{bischoff08} and {\\sevensize QUaD}\\,\n\\citep{pryke09}. Most recently, the high precision measurement of\nsmall-scale polarization by the {\\sevensize QUaD}\\, experiment has, for the first\ntime, revealed a characteristic series of acoustic peaks in the\n$E$-mode spectrum and put the strongest upper limits to date on the\nsmall-scale $B$-mode polarization signal expected from gravitational\nlensing by large-scale structure.\n\nBuilding on the experience gained from these pioneering experiments, a\nnew generation of experiments is now under\nconstruction with the ambitious goal of observing the primordial\n$B$-mode signal. Observing this signal is one of the most challenging goals\nof modern observational cosmology. There are a number of reasons why\nthese types of observations are so difficult. First and foremost, the\nsought-after signal is expected to be extremely small -- in terms of the \ntensor-to-scalar ratio\\footnote{%\nOur normalisation conventions follow those adopted in the\n{\\sevensize CAMB}\\, code~\\citep{lewis00}, so that $r$ is the ratio of primordial\npower spectra for gravitational waves and curvature perturbations.\nExplicitly, for slow-roll inflation in a potential $V(\\phi)$,\n$2 r\\approx M_{\\rm Pl}^2 [V'(\\phi)\/V(\\phi)]^2$ where\n$M_{\\rm Pl}= 2.436 \\times 10^{18}\\, \\mathrm{GeV}\/c^2$ is the reduced\nPlanck mass.}, the RMS polarization signal from primordial $B$-modes is\n$0.4 \\sqrt{r}\\, \\mu \\mathrm{K}$, and the current 95 per cent limit $r< 0.22$ from\n{\\sevensize WMAP}\\, temperature and $E$-mode polarization plus distance\nindicators~\\citep{komatsu09} implies an RMS $<180\\,\\mathrm{nK}$. Secondly,\npolarized emission from\nour own galaxy and from extra-Galactic objects acts as a foreground\ncontaminant in observing the CMB polarized sky. Although our\nknowledge of such polarized foregrounds is currently limited,\nparticularly at the higher frequencies $\\mathrel{\\rlap{\\lower4pt\\hbox{\\hskip1pt$\\sim$}}\\raise2pt\\hbox{$>$}} 100\\,\\mathrm{GHz}$\nof relevance to bolometer experiments, models suggest that such contamination\ncould be an order of magnitude larger than the sought-after signal on\nthe largest scales (e.g. \\citealt{amblard07}).\nThirdly, gravitational lensing by large-scale structure\nconverts $E$-modes into $B$-modes on small to medium scales\n(see~\\citealt{lewis06} for a review) and acts\nas a source of confusion in attempts to measure the primordial\n$B$-mode signal \\citep{knox02, kesden02}. Note however that the\nlensing $B$-mode signal is a valuable source of cosmological\ninformation in its own right and can be used to put unique constraints\non dark energy and massive neutrinos\n(e.g. 
\\citealt{kaplinghat03,smithchallinor06}).\nUnfortunately these latter two effects (foreground contamination and\nweak gravitational lensing) contrive in such a way as to render $B$-mode\npolarization observations subject to contamination on all angular\nscales (the primordial $B$-mode signal is dominated by foregrounds on\nlarge scales whilst on smaller scales it is swamped by the lensing\nsignal). Last, but not least, exquisite control of systematic and\ninstrumental effects will be required, to much better than $100\\, \\mathrm{nK}$,\nbefore any detection of $B$-modes can be claimed.\nThe sought-after signal is so small that\nsystematic and instrumental effects considered negligible for an\nexquisitely precise measurement of $E$-modes say, could potentially\nruin a detection of $B$-modes, if left uncorrected. One possible\napproach to mitigating some of these systematics in hardware is to\nmodulate the incoming polarization signal such that it is shifted to\nhigher frequency and thus away from low-frequency systematics which\nwould otherwise contaminate it. There are a number of\ntechniques for achieving this including the use of a rotating\nhalf-wave plate (HWP; e.g.~\\citealt{johnson07}),\nphase-switching, or Faraday rotation modulators~\\citep{keating03}.\n\nIn this paper, we investigate the ability of \nmodulation techniques that are either slow or fast \nwith respect to the temporal variation of the\nsignal\nto mitigate a range of possible systematic\neffects. Our analysis is based on simulations, and the\nsubsequent analysis, of data from a ground-based CMB $B$-mode\npolarization experiment. Previous investigations of the impact of\nsystematic effects on $B$-mode observations include the analytic works\nof \\citet{hu03}, \\citet{odea07} and~\\citet{shimon08},\nas well as the simulation-based\nanalysis of \\cite{mactavish08} who based their study on signal-only\nsimulations of the {\\sevensize SPIDER}\\, experiment. The simulation work presented\nhere is complementary to these previous analyses but we also take our\nanalysis further by including realistic noise in our simulations ---\nwe are thus able to quantify not only any bias found, but also any\ndegradation of performance due to the presence of systematic\neffects. Our work makes use of a detailed simulation pipeline which we\nhave created in the context of the {\\sevensize C}$_\\ell${\\sevensize OVER}\\, experiment. Although the\nprecise details of our simulations are specific to {\\sevensize C}$_\\ell${\\sevensize OVER}\\,, our\ngeneral conclusions regarding the impact of modulation on a variety of\nsystematic effects are relevant to all upcoming ground-based $B$-mode\nexperiments and many of them are also relevant for both balloon-borne\nand space-based missions.\n\nThe paper is organised as follows. In Section \\ref{sec:cmb_expts}, we\nreview the relevant upcoming $B$-mode polarization\nexperiments. Section \\ref{sec:sims} describes our simulation technique\nand the systematic effects we have considered. In Section\n\\ref{sec:analysis}, we describe the map-making and power spectrum\nestimation techniques that we use to analyse the simulated\ndata. Section \\ref{sec:results} presents the results from our main\nanalysis of systematic effects. We discuss our results in Section\n\\ref{sec:discussion} where, for clarity, we group the possible\nsystematic effects considered into those that are, and those that are\nnot, mitigated by a modulation scheme. 
In this section, we also\ndemonstrate the importance of combining information from multiple\ndetectors during analysis and compare the simulated performance of\n{\\sevensize C}$_\\ell${\\sevensize OVER}\\, to the predicted performance from a Fisher\nanalysis. Our conclusions are summarised in Section\n\\ref{sec:conclusions}. Finally, Appendix~\\ref{app:pointing} develops\na simple model of the spurious $B$-mode power produced by\npointing jitter for experiments with a highly redundant scan strategy.\n\n\\section{CMB polarization experiments}\n\\label{sec:cmb_expts}\nA host of experiments are currently under construction (one is, in\nfact, already observing) with their primary goal being to constrain the\ntensor-to-scalar ratio, and thus the energy scale of inflation,\nthrough observations of the $B$-mode component of the CMB. Here, we\ngive a short summary of the planned experiments.\n\\vspace{-3mm}\n\\subsubsection*{(i) Ground-based experiments:}\n\\begin{itemize}\n \\item{{\\sevensize BICEP\/BICEP-2\/KECK} array:} The {\\sevensize BICEP}\\,\n experiment \\citep{yoon06} has recently completed its third and final season of\n operation from the South Pole. The experiment consisted of a total of 98 polarization-sensitive\n bolometers (PSBs) at 100 and 150~GHz. The optical design is\n very clean but with the downside of poor resolution -- 45 arcmin\n full-width at half-maximum (FWHM) at 150~GHz -- which limits the\n target multipole range to $\\ell <300$. The stated target sensitivity\n is $r = 0.1$. In the first observing season, three 100~GHz pixels and\n three 150~GHz pixels were equipped with Faraday rotation modulators\n and two pixels operated at 220~GHz. {\\sevensize\n BICEP-2} will consist of an upgrade to the {\\sevensize BICEP}\\, telescope with a\n 150~GHz 512-element array of antenna-coupled detectors. It will be\n deployed to the South Pole in November, 2009. The {\\sevensize KECK}\n array will consist of three {\\sevensize BICEP-2}-like telescopes\n (at 100, 150 and 220~GHz). It is hoped to be installed on the\n {\\sevensize DASI}\\, mount (previously occupied by {\\sevensize QUaD}) in November 2010. The\n nominal goal of this array is $r=0.01$.\n\n \\vspace{3mm}\\item{{{\\sevensize C}$_\\ell${\\sevensize OVER}}:} For an up-to-date overview\n see~\\citet{north08}. {\\sevensize C}$_\\ell${\\sevensize OVER}\\, is a\n three-frequency (97, 150 and 225~GHz) instrument to be sited at Pampa La\n Bola in the Atacama desert in Chile. It will have 576 single-polarization\n transition-edge sensors (TES), split equally among the three frequencies.\n The beam size of $\\sim$ 5.5 arcmin FWHM at 150~GHz will sample the \n multipole range $25 < \\ell < 2000$. The target sensitivity is\n $r \\sim 0.03$ and the polarization signal will be modulated with a\n HWP. The 97~GHz instrument is expected to be deployed to Chile in\n late 2009 with the combined 150\/225~GHz instrument to follow soon\n after in 2010. \n\n \\vspace{3mm}\\item{{{\\sevensize QUIET}}:} See~\\citet{samtleben08} for a recent overview.\n {\\sevensize QUIET}\\, is unique among\n planned $B$-mode experiments in that it uses pseudo-correlation\n HEMT-based receivers rather than bolometers. It will observe from Chile\n using the CBI mount -- a planned second phase will involve upgrading\n to $\\sim 1000$ element arrays and relocation of the 7-m Crawford Hill\n antenna from New Jersey to Chile. {\\sevensize QUIET}\\, will observe\n at 40 and 90~GHz. 
The target sensitivity for the second phase\n is $r \\sim 0.01$.\n\n \\vspace{3mm}\\item{{{\\sevensize POLARBEAR}}:} A three-frequency (90, 150 and\n 220~GHz) single-dish instrument to be sited in the Inyo Mountains, CA\n for its first year of operation, after which it will be relocated to\n the Atacama desert in Chile. It will use 1280 TES bolometers at each \n frequency with polarization modulation from a HWP. The planned beam\n size is 4 arcmin (FWHM) at 150~GHz. The target sensitivity is\n $r=0.015$ for the full instrument.\n\n \\vspace{3mm}\\item{{{\\sevensize BRAIN}}:} See~\\cite{charlassier08} for a recent\n review. {\\sevensize BRAIN}\\, is a unique\n bolometric interferometer project (c.f. {\\sevensize DASI}, {\\sevensize CBI}) to be sited on\n the Dome-C site in Antarctica. The final instrument will have $\\sim 1000$\n bolometers observing at 90, 150 and 220~GHz. {\\sevensize BRAIN}\\, will be primarily\n sensitive to multipoles $50 < \\ell < 200$. The full experiment is planned\n to be operational in 2011 and the stated target sensitivity is $r = 0.01$.\n\\end{itemize}\n\\vspace{-5mm}\n\\subsubsection*{(ii) Balloon-borne experiments}\n\\begin{itemize}\n \\item{{{\\sevensize EBEX}}:} See \\cite{oxley04} for a summary. {\\sevensize EBEX}\\, will\n observe at 150, 250 and 410~GHz and will fly a total of\n $1406$ TES with HWP modulation.\n The angular resolution is 8 arcmin and\n the target multipole range is $20 < \\ell < 2000$. The stated target\n sensitivity is $r = 0.02$. A test\n flight is planned for 2009 and a long-duration balloon (LDB)\n flight is expected soon after.\n\n \\vspace{3mm}\\item{{{\\sevensize SPIDER}}:} See~\\cite{crill08} for a recent description.\n {\\sevensize SPIDER}\\, will deploy $\\sim 3000$ antenna-coupled TES observing\n at 96, 145, 225 and 275~GHz, with a beam size of $\\sim 40$ arcmin at 145~GHz.\n The target multipole range is $10 < \\ell < 300$. A 2-6 day first\n flight is planned for 2010. The target sensitivity is\n $r=0.01$. Signal modulation will be provided by a (slow) stepped HWP\n and fast gondola rotation.\n\n \\vspace{3mm}\\item{{{\\sevensize PIPER}}:} This balloon experiment will deploy a\n focal plane of 5120 TES bolometers in a backshort-under-grid (BUG)\n configuration. Each flight of {\\sevensize PIPER}\\, will observe at a different\n frequency, covering 200, 270, 350 and 600~GHz after the four planned\n flights. The beam size is $\\sim15$ arcmin, corresponding to a target\n multipole range $\\ell < 800$. The first element of the optical system is\n a variable polarization modulator (VPM). The entire optical chain,\n including the modulators, are cooled to 1.5 K so that {\\sevensize PIPER}\\, observes\n at the background limit for balloon altitudes. Including removal of\n foregrounds, the experiment has the sensitivity to make a $2\\sigma$\n detection of $r = 0.007$. The first flight is scheduled for 2013.\n\n\\end{itemize}\n\\vspace{-5mm}\n\\subsubsection*{(iii) Space missions}\n\\begin{itemize}\n \\item{{{\\sevensize PLANCK}}:} See the publication of the~\\citet{planck06} for a detailed\n review of the science programme. {\\sevensize PLANCK}\\, will measure the temperature\n in nine frequency bands, seven of which will have (some) polarization\n sensitivity. The polarized channels (100, 143, 217 and 353~GHz)\n of the high-frequency instrument\n (HFI) use similar PSBs to those deployed on\n {\\sevensize BOOMERANG}\\, and have beam sizes 5--9.5 arcmin. 
For low $r$, sensitivity to\n primordial gravitational waves will\n mostly come from the large-angle reionisation\n signature~\\citep{zaldarriaga97b} and\n $r=0.05$ may be possible if foregrounds allow. {\\sevensize PLANCK}\\, will \n be sensitive to the multipole range $2 < \\ell <3000$ and is\n scheduled to launch in 2009. The HFI has no active or fast signal modulation\n (i.e.\\ other than scanning).\n\n \\vspace{3mm}\\item{{{\\sevensize CMBPOL\/BPOL}}:}\n Design studies have been conducted\n for satellite mission(s) dedicated to measuring primordial\n $B$-modes comprising $\\sim 2000$ detectors with the ability to measure\n $0.001 < r < 0.01$ if foregrounds allow it~\\citep{bock08,deBernardis08}.\n The timescale for launch of any selected mission is likely beyond 2020.\n\\end{itemize}\n\n\\section{Simulations}\n\\label{sec:sims}\n\nThe {\\sevensize C}$_\\ell${\\sevensize OVER}\\, experiment will consist of two telescopes -- a low frequency\ninstrument with a focal plane consisting of 192\nsingle-polarization 97~GHz TES detectors, and a high frequency instrument with\na combined focal plane of 150 and 225~GHz detectors (192 of\neach). Note that we have not included foreground contamination in our\nsimulations, so for this analysis, we consider only the 150~GHz\ndetector complement -- the corresponding reduction in sensitivity will\napproximate the effect of using the multi-frequency observations to\nremove foregrounds. Figure~\\ref{fig:hf_focal_plane} shows the\narrangement of the 150 and 225~GHz detectors on the high frequency\nfocal plane. The detectors are arranged in detector blocks consisting\nof eight pixels each. Each pixel consists of two TES detectors which are\nsensitive to orthogonal linear polarizations. The polarization\nsensitivity of the eight detector pairs within a block are along, and at\nright angles to, the major axis of their parent block. The detector\ncomplement, both at 150 and 225~GHz, therefore consists of three\n`flavours' of pixels with different polarization sensitivity\ndirections. The 97~GHz focal plane (not shown) has a similar mix of\ndetector orientations.\n\\begin{figure}\n \\centering\n \\resizebox{0.48\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig1.ps}}}\n \\caption{Layout of detectors on the {\\sevensize C}$_\\ell${\\sevensize OVER}\\, high frequency focal\n plane. Each point indicates a pixel comprising two TES detectors\n sensitive to orthogonal linear polarizations. The polarization\n sensitivity directions of detectors within each block are along, and\n at right angles to, the major axis of the block. The outlined\n 150~GHz detector blocks at the centre of the array are used in the\n simulations described here. The field of view of the entire array is\n $\\sim 5 \\, {\\rm deg}^2$.}\n \\label{fig:hf_focal_plane}\n\\end{figure}\n\n\\subsection{Simulation parameters}\n\\label{sec:sims_params}\nBecause simulating the full {\\sevensize C}$_\\ell${\\sevensize OVER}\\, experiment is computationally\ndemanding (a single simulation of a two-year campaign\nat the full {\\sevensize C}$_\\ell${\\sevensize OVER}\\, data rate would require $\\sim 10^4$ CPU-hours),\nwe have scaled some of the simulation parameters in order to make our\nanalysis feasible.\n\\begin{enumerate}\n\\item We simulate only half the 150~GHz detector complement and have\n scaled the noise accordingly. 
We have verified, with a restricted number of simulations using all the
150~GHz detectors, that the marginally more even coverage obtained
across our field has little or no impact on our results or
conclusions. The 150~GHz detector blocks used in our simulations are
indicated in Fig.~\ref{fig:hf_focal_plane} and include all three
possible orientations of pixels on the focal plane.
\item The {\sevensize C}$_\ell${\sevensize OVER}\, detectors will have response times of $\sim 200\,\mu$s
 and so the data will be sampled at $\sim 1$~kHz in order to
 sample the detector response function adequately. Simulating at this
 rate is prohibitive so we simulate at a reduced data rate of
 $100$~Hz. For the {\sevensize C}$_\ell${\sevensize OVER}\, beam size (FWHM = 5.5 arcmin at 150~GHz) and
 our chosen scan speed ($0.25^{\circ}$/s), this data rate is still fast
 enough to sample the sky signal adequately.
\item {\sevensize C}$_\ell${\sevensize OVER}\, will observe four widely separated fields on the sky,
 each covering an area of $\sim 300 \, {\rm deg}^2$, over the course
 of two years. Two of the fields are in the southern sky and two
 lie along the equator. For our analysis, we observe each of the four fields
 for a single night only and, again, we have scaled the noise levels to those
 appropriate for the full two-year observing campaign.
\end{enumerate}

\subsection{Observing strategy}
\label{sec:obs_strategy}
Although optimisation of the observing strategy is not the focus of
this work, a number of possible strategies have been investigated by
the {\sevensize C}$_\ell${\sevensize OVER}\, team. For our analysis, we use the most favoured scan
strategy at the time of writing. To minimise rapid variations in
atmospheric noise, the two telescopes will scan back and forth in
azimuth at constant elevation, allowing the field to rise through the
chosen observing elevation. Every few hours, the telescopes will be
re-pointed in elevation to allow for field-tracking. Although the
precise details of the scan are likely to change, the general
characteristics of the scan and resulting field coverage properties
will remain approximately the same due to the limitations imposed by
constant-elevation scanning and observing from Atacama. The
{\sevensize C}$_\ell${\sevensize OVER}\, telescopes are designed with the capability of
scanning at up to $10^{\circ}$/s. However, for our analysis where we
have considered {\sevensize C}$_\ell${\sevensize OVER}\, operating with a rotating HWP (Section
\ref{sec:modulation}), we have chosen a relatively slow scan speed of
$0.25^{\circ}$/s in light of the HWP rotation frequency which we have
employed ($f_\lambda = 3$~Hz). Although the mode of operation of a HWP
on {\sevensize C}$_\ell${\sevensize OVER}\, is still under development, a continuously rotating HWP
is likely to be restricted to rotation frequencies of $f_\lambda <
5$~Hz due to mechanical constraints (with current cryogenic rotation
technologies, fast rotation, $\ga 5$~Hz, could possibly result in
excessive heat generation). Figure~\ref{fig:hitmaps} shows the coverage maps for a
single day's observing on one of the southern fields and on one of the
equatorial fields. The corresponding maps for the other two fields are
broadly similar.
Note that, in the real experiment, we expect that
somewhat more uniform field coverage than that shown in
Fig.~\ref{fig:hitmaps} will be achievable by employing slightly
different scan patterns on different days.
\begin{figure*}
 \centering
 \resizebox{0.80\textwidth}{!}{
 \rotatebox{-90}{\includegraphics{fig2.ps}}}
 \caption{Hit-maps for one of the southern fields (left; RA
 09:30 hrs, Dec -40.00$^{\circ}$) and one of the equatorial fields
 (right; RA 04:00 hrs, Dec 0.00$^{\circ}$) for a single day's
 observation with half the 150~GHz detector complement. The central
 part of each field (shown in yellow and red) is roughly $20^{\circ}$ in
 diameter. These maps have been constructed using a {\sevensize HEALPIX}\, resolution of
 $N_{\rm side} = 1024$ corresponding to a pixel size $\sim$3.4 arcmin.}
 \label{fig:hitmaps}
\end{figure*}

\subsection{Signal simulations}
\label{sec:signal_sims}
We generate model $TT$, $EE$, $TE$ and $BB$ CMB power spectra using
{\sevensize CAMB}\, \citep{lewis00}. The input cosmology used consists
of the best-fit standard $\Lambda$CDM model to the 5-year {\sevensize WMAP}\, data
set \citep{hinshaw09}, but with a tensor-to-scalar ratio of $r=0.026$,
chosen to match the {\sevensize C}$_\ell${\sevensize OVER}\, `target' value. Realisations of CMB
skies from these power spectra are then created using a modified
version of the {\sevensize HEALPIX}\footnote{%
See http://healpix.jpl.nasa.gov}
software \citep{gorski05}. Our simulations include weak gravitational lensing
but ignore its non-Gaussian aspects. Using only Gaussian simulations
means that we slightly mis-estimate the covariance matrices of our
power spectrum estimates, particularly the $B$-mode
covariances~\citep{smith04,smithchallinor06}. For the {\sevensize C}$_\ell${\sevensize OVER}\, noise
levels, this is expected to have a negligible impact on the
significance level of the total $B$-mode signal.

As part of the simulation process, the input CMB signal is convolved with a
perfect Gaussian beam with FWHM $=5.5$ arcmin. Note that an important
class of systematic effects which we do not consider in this paper is
that caused by imperfect optics; see the discussion in
Section~\ref{sec:systematics}.

For our analysis of the simulated data sets (Section
\ref{sec:analysis}) we have chosen to reconstruct maps of the Stokes
parameters with a map resolution of $3.4$ arcmin ({\sevensize HEALPIX}\, $N_{\rm
side} = 1024$). Note that this pixel size does not fully sample the
beam and it is likely that we will adopt $N_{\rm side} = 2048$ for the
analysis of real data. In order to isolate the effects of the various
systematics we have considered, our simulated CMB skies have therefore
been created at this same resolution --- we can then be sure that any
bias found in the recovered CMB signals is due to the systematic
effect under consideration rather than due to a poor choice of map
resolution\footnote{We have verified that spurious $B$-modes
generated through the pixelisation are negligible for the {\sevensize C}$_\ell${\sevensize OVER}\,
noise levels.}.
Note that our adopted procedure of simulating and map-making at
identical resolutions, although useful for the specific aims of this
paper, is not a true representation of a CMB observation.
For real\nobservations, pixelisation of the CMB maps will introduce a bias to\nthe measured signal on scales comparable to the pixel size adopted.\n \nUsing the pointing registers as provided by the scan strategy and\nafter applying the appropriate focal plane offsets for each detector,\nwe create simulated time-streams according to \n\\begin{equation} \nd_i = \\left[ T(\\theta) + Q(\\theta) \\cos(2\\phi_i) + U(\\theta) \\sin(2\\phi_i) \\right]\/2,\n\\label{eqn:signal_timestream}\n\\end{equation} \nwhere $\\theta$ denotes the pointing and $T, Q$ and $U$ are the sky\nsignals as interpolated from the\ninput CMB sky map. The polarization angle, $\\phi_i$ is, in general, a\ncombination of the polarization sensitivity direction of each detector,\nany rotation\nof the telescope around its boresight, the direction of travel of the\ntelescope in RA--Dec space and the orientation of the half-wave plate,\nif present.\n\n\\subsection{Noise simulations}\n\\label{sec:noise_sims}\nThe {\\sevensize C}$_\\ell${\\sevensize OVER}\\, data will be subject to several different noise\nsources. Firstly, photon loading from the telescope, the atmosphere\nand the CMB itself will subject the data to uncorrelated random\nGaussian noise. Secondly, the TES detectors used in {\\sevensize C}$_\\ell${\\sevensize OVER}\\, are\nsubject to their own sources of noise which will possibly\ninclude low-frequency $1\/f$ behaviour and correlations between\ndetectors. Thirdly, the atmosphere also has a very strong $1\/f$\ncomponent which will be heavily correlated across the detector\narray. Fortunately, the $1\/f$ component of the atmosphere is known to\nbe almost completely un-polarized and so can be removed from the\npolarization analysis by combining data from multiple detectors (see\nSection \\ref{sec:differencing}).\n\nThe white-noise levels due to loading from the instrument, atmosphere\nand CMB have been carefully modelled for the case of {\\sevensize C}$_\\ell${\\sevensize OVER}\\,\nobservations from Atacama. We will not present the details here, but\nfor realistic observing conditions and scanning elevations, we have\ncalculated the expected noise-equivalent temperature (NET) due to\nphoton noise alone to be $\\approx 146 \\, \\mu {\\rm K} \\sqrt{\\rm s}$. We\nadd this white noise component to our simulated signal time-streams\nfor each detector as\n\\begin{equation}\nd_i \\rightarrow d_i + \\frac{\\rm NET}{2} \\sqrt{f_{\\rm samp}} g_i,\n\\end{equation}\nwhere $f_{\\rm samp}$ is the sampling frequency and $g_i$ is a Gaussian\nrandom number with $\\mu = 0$ and $\\sigma = 1$. Note that the white-noise\nlevel in the detector time streams is ${\\rm NET} \/ 2$ since the\n{\\sevensize C}$_\\ell${\\sevensize OVER}\\, detectors are half-power detectors (equation\n\\ref{eqn:signal_timestream}). \n\nUsing instrument parameters appropriate for the {\\sevensize C}$_\\ell${\\sevensize OVER}\\, detectors, \nwe use the small-signal TES model of \\cite{irwin05} to create a\nmodel noise power spectrum for the detector noise. This model includes\nboth a contribution from the super-conducting SQUIDs which will be used to\nrecord the detector signals (e.g. \\citealt{reintsema03}) and a\ncontribution from aliasing in the Multi Channel Electronics (MCE;\n\\citealt{battistelli08}) which will be used to read out the signals. 
\nFor the instrument parameters we have chosen, the effective\nNET of the detector noise in our simulations is approximately equal\nto the total combined photon noise contribution from the atmosphere, the\ninstrument and the CMB. Note however that for the final instrument, it\nis hoped that the detector NET can be reduced to half that of the\ntotal photon noise, thus making {\\sevensize C}$_\\ell${\\sevensize OVER}\\, limited by irreducible\nphoton loading.\nThe \\cite{irwin05} small-signal TES model does not \ninclude a $1\/f$ component to the detector noise so in order\nto investigate the impact of modulation on possible low-frequency\ndetector noise, we add a heuristically chosen $1\/f$ component to the\ndetector noise model with knee frequencies in the range, $0.01 <\nf_{\\rm knee} < 0.1$~Hz. \nThe MCE system which will be used to read out the {\\sevensize C}$_\\ell${\\sevensize OVER}\\, data\nshould have low cross-talk between different channels. However,\ncorrelations will be present at some level and so we include\n10 per cent correlations between all of our simulated detector noise\ntime-streams. Generally, to simulate stationary noise that is correlated in time and\nacross the $N_{\\rm det}$ detectors we proceed as follows.\nLet the noise cross-power spectrum\nbetween detector $d$ and $d'$ be $P_{d,d'}(f)$. \nTaking the Cholesky decomposition of this matrix at each frequency,\n$L_{d,d'}(f)$, defined by\n\\begin{equation} \nP_{d,d'}(f) = \\sum_{d''} L_{d,d''}(f) L_{d',d''}(f), \n\\end{equation}\nwe apply $L$ to $N_{\\rm det}$ independent, white-noise time streams\n$g_{d}(f)$ in Fourier space,\n\\begin{equation}\ng_d(f) \\rightarrow \\sum_{d'} L_{d,d'}(f) g_{d'}(f) .\n\\end{equation}\nThe resulting time-streams, transformed to\nreal space then possess the desired correlations between detectors. Here,\nwe assume that the correlations are independent of frequency so that\nthe noise cross-power spectrum takes the form\n\\begin{equation}\nP_{d,d'}(f) = C_{d,d'} P(f) ,\n\\end{equation}\nwhere the correlation matrix $C_{d,d'} = 1$ for $d = d'$ and $C_{d,d'} = 0.1$\notherwise. In practice, we use discrete Fourier transforms to synthesise\nnoise with periodic boundary conditions (and hence circulant time-time\ncorrelations).\n\nWe use the same technique to simulate the correlated $1\/f$ component\nof the atmosphere. We have measured the noise properties of the atmosphere\nfrom data from the {\\sevensize QUaD}\\, experiment \\citep{hinderks09}. The 150~GHz\nfrequency channel of {\\sevensize QUaD}\\, is obviously well matched to the\n{\\sevensize C}$_\\ell${\\sevensize OVER}\\, 150~GHz channel although {\\sevensize QUaD}\\, observed the CMB from the\nSouth Pole rather than from Atacama. Although there are significant\ndifferences between the properties of the atmosphere at the South Pole\nand at Atacama (e.g. \\citealt{bussmann05}), the {\\sevensize QUaD}\\, observations\nstill represent the best estimate of the $1\/f$ noise properties of\nthe atmosphere available at present. A rough fit of the {\\sevensize QUaD}\\, data to the model, \n\\begin{equation}\nP(f) = {\\rm NET^2} \\left[1 + \\left( \\frac{f_{\\rm knee}}{f} \\right)^\\alpha \\right],\n\\label{eqn:pk_atms}\n\\end{equation}\nyields a knee frequency, $f_{\\rm knee} = 0.45$~Hz and spectral index,\n$\\alpha = 2.5$. Using this model power spectrum we simulated $1\/f$\natmospheric noise correlated across the array in exactly the same way\nas was used for the detector noise. 
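
For illustration, the short \textsc{Python} sketch below implements the
correlated-noise recipe just described for the simple case of a factorised
cross-spectrum with a uniform off-diagonal correlation: the correlation
matrix is Cholesky-factorised and applied to independent white-noise
streams in Fourier space, which are then coloured by the model power
spectrum of equation~(\ref{eqn:pk_atms}). The function name, the uniform
correlation matrix, the sample length and the overall normalisation
convention are illustrative choices only; this is a sketch of the
technique rather than the code used for the simulations in this paper.
\begin{verbatim}
import numpy as np

def correlated_noise(nsamp, ndet, fsamp, net, fknee, alpha, corr):
    # Model 1/f power spectrum, equation (pk_atms)
    freq = np.fft.rfftfreq(nsamp, d=1.0 / fsamp)
    freq[0] = freq[1]                        # avoid the f = 0 singularity
    pf = net**2 * (1.0 + (fknee / freq)**alpha)
    # Frequency-independent correlation matrix and its Cholesky factor
    cmat = corr * np.ones((ndet, ndet)) + (1.0 - corr) * np.eye(ndet)
    chol = np.linalg.cholesky(cmat)          # C = L L^T
    # Independent, unit-variance complex white noise in Fourier space
    g = (np.random.randn(ndet, freq.size)
         + 1j * np.random.randn(ndet, freq.size)) / np.sqrt(2.0)
    g = chol @ g                             # imprint detector correlations
    g *= np.sqrt(0.5 * pf * fsamp * nsamp)   # one common one-sided norm.
    return np.fft.irfft(g, n=nsamp, axis=-1)

# e.g. one hour of atmospheric 1/f at 100 Hz for 16 detectors:
# NET = 146 uK sqrt(s), f_knee = 0.45 Hz, alpha = 2.5, 50% correlations
tod = correlated_noise(360000, 16, 100.0, 146.0, 0.45, 2.5, 0.5)
\end{verbatim}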
Fortunately, the $1\/f$ component\nin the atmosphere is almost completely un-polarized.\nIf there were no instrumental polarization, detectors\nwithin the same pixel (which always look in\nthe same direction) would therefore be completely correlated\nwith one another and detectors from different pixels would also be\nheavily correlated. For the correlated atmosphere, we therefore use a\ncorrelation matrix given by $C_{d,d'} = 1.0$ for $|d-d'| \\le 1$\n(i.e. for detector pairs) and $C_{d,d'} = 0.5$ otherwise. In the\nfollowing sections, as one of the systematics we have investigated, we \nrelax the assumption that the atmosphere is un-polarized.\n\nFigure~\\ref{fig:noise_compare} compares the photon, atmospheric $1\/f$\nand detector noise contributions to our simulated data in frequency\nspace. At low frequencies, the noise is completely dominated by the\natmospheric $1\/f$ while the white-noise contributions from photon\nloading (including the uncorrelated component of the atmosphere) and\ndetector noise are approximately equal. Note that for observations\nwithout active modulation, and for the scan speed and observing\nelevations which we have adopted, the ``science band'' for the\nmultipole range $20 < \\ell < 2000$ corresponds roughly to $0.01 < f <\n1$~Hz in time-stream frequency. In contrast, for our simulations which\ninclude a continuously rotating HWP, the temperature signal remains within\nthe $0.01 < f < 1$~Hz frequency range but the polarized sky signal is\nmoved to a narrow band centred on $\\sim$12~Hz, well away from both the\ndetector and atmospheric $1\/f$ noise components (see\nSection~\\ref{sec:modulation} and Fig.~\\ref{fig:mod_frequency}). Note also that although the\natmospheric $1\/f$ dominates the detector $1\/f$ at low frequency, the\natmosphere is heavily correlated across detectors and can therefore be\nremoved by combining detectors (e.g. differencing detectors within a\npixel) but this is not true for the detector noise which is only\nweakly correlated between detectors. We demonstrate this in\nFig.~\\ref{fig:tod_noise} where we plot a five-minute sample of simulated\natmospheric and detector noise for the two constituent detectors\nwithin a pixel. Including the atmospheric $1\/f$ component, the\neffective total NET per detector measured from our simulated\ntime-streams is $293 \\, \\mu {\\rm K} \\sqrt{\\rm sec}$ whilst excluding\natmospheric $1\/f$, we measure $210 \\, \\mu {\\rm K} \\sqrt{\\rm s}$.\n\n\\begin{figure}\n \\centering\n \\resizebox{0.47\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig3.ps}}}\n \\caption{Frequency space comparison between the different noise\n sources in the simulations. The grey line shows the detector noise\n power spectrum (here with a $1\/f$ component with knee frequency,\n $f_{\\rm knee} = 0.1$~Hz). The correlated component to the\n atmosphere is shown as the dashed line and the total photon noise\n (including atmospheric loading) is shown as the dotted line. For\n the simulation parameters we have adopted, the temperature sky-signal from\n multipoles $20 < \\ell < 2000$ appears in the time-stream in the\n frequency range $0.01 < f < 1$~Hz. 
In the absence of fast modulation,\n the polarized sky signal also appears in this frequency range\n whereas in our simulations including fast modulation, the polarized\n sky signal appears in a narrow band centred on 12~Hz.}\n \\label{fig:noise_compare}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\resizebox{0.48\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig4.ps}}}\n \\caption{A five-minute sample of simulated noise time-stream for the two\n detectors within a pixel (denoted ``A'' and ``B'') for both the\n atmospheric noise simulations (\\emph{top}) and for the detector\n noise simulations (\\emph{bottom}). The $1\/f$ component in the\n atmospheric noise time-streams is 100 per cent correlated between the\n A and B detectors and can be removed entirely by\n differencing. In contrast the $1\/f$ component in the detector noise\n (which is much weaker and not noticeable on this plot)\n is only weakly correlated between A and B and is not removed by\n differencing. For comparison, the signal-only time-streams are\n also plotted in the lower panels as the red curves.}\n \\label{fig:tod_noise}\n\\end{figure}\n\n\\subsection{Detector response}\n\\label{sec:detector_response}\nThe \\cite{irwin05} TES small-signal model mentioned above also\nprovides us with an estimate of the detector response (the conversion\nfrom incident power to resultant current in the detectors). In this\nmodel, the power-to-current responsivity, $s_I(\\omega)$, is given by\n\\begin{equation} \ns_I(\\omega) \\propto \\frac{1 - \\tau_+\/\\tau_I}{1 + i\\omega\\tau_+}\n\\frac{1 - \\tau_-\/\\tau_I}{1 + i\\omega\\tau_-}, \n\\label{eqn:response}\n\\end{equation} \nwhere $\\omega = 2 \\pi f$ is the angular frequency and $\\tau_+$ and\n$\\tau_-$ are the ``rise time'' and ``fall time'' (relaxation to steady\nstate) after a delta-function temperature impulse. Here, $\\tau_I$ is\nthe current-biased thermal time constant. The impulse-response in\nthe time domain is\n\\begin{equation}\ns_I(t) \\propto \\frac{e^{-t\/\\tau_+} - e^{-t\/\\tau_-}}{\\tau_+ - \\tau_-}\n\\Theta(t) ,\n\\end{equation}\nwhere $\\Theta(t)$ is the Heaviside step function.\nNote that the constant of\nproportionality in equation~(\\ref{eqn:response}) (which is also\npredicted by the model) is essentially the calibration (gain) of the\ndetectors. The simulated time-streams of\nequation~(\\ref{eqn:signal_timestream}) are converted to detector\ntime-streams through convolution with this response\nfunction. Note that the photon noise and correlated atmospheric noise\nare added to the time-stream before convolution (and so are also\nconvolved with the response function) while the detector noise is\nadded directly to the convolved time-streams. The {\\sevensize C}$_\\ell${\\sevensize OVER}\\, detectors\nare designed to be extremely fast with time-constants $\\tau_\\pm < 1 \\,\n{\\rm ms}$. For our simulations, we have used time-constants\npredicted by the small-signal TES model for {\\sevensize C}$_\\ell${\\sevensize OVER}\\, instrument\nparameters of $\\tau_+ = 300 \\, \\mu{\\rm s}$ and $\\tau_- = 322 \\, \\mu\n{\\rm s}$. 
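
To make the use of the responsivity explicit, the following
\textsc{Python} fragment evaluates a normalised version of
equation~(\ref{eqn:response}) on a grid of time-stream frequencies and
convolves or deconvolves a time-stream with it by multiplication or
division in Fourier space. The fragment is a simplified sketch rather than
our simulation code, and the value adopted for $\tau_I$ is purely
illustrative.
\begin{verbatim}
import numpy as np

def tes_response(freq, tau_p=300e-6, tau_m=322e-6, tau_i=1.0e-3):
    # Normalised two-time-constant responsivity; tau_i is illustrative
    iw = 2.0j * np.pi * freq
    s = ((1.0 - tau_p / tau_i) / (1.0 + iw * tau_p)
         * (1.0 - tau_m / tau_i) / (1.0 + iw * tau_m))
    return s / s[0].real                     # unit response at DC

def apply_response(tod, fsamp, deconvolve=False, **kwargs):
    # Convolve (default) or deconvolve one time-stream in Fourier space
    freq = np.fft.rfftfreq(tod.size, d=1.0 / fsamp)
    s = tes_response(freq, **kwargs)
    ft = np.fft.rfft(tod)
    ft = ft / s if deconvolve else ft * s
    return np.fft.irfft(ft, n=tod.size)
\end{verbatim}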
\n\nNote that for our chosen scan speed of $0.25^{\\circ}$\/s, the effect of\nthe response function on the signal component in the simulations is\nsmall in the absence of fast modulation -- the {\\sevensize C}$_\\ell${\\sevensize OVER}\\,\ndetectors are so fast that signal attenuation\nand phase differences introduced by convolution with the detector\nresponse only become important at high frequencies, beyond the\nfrequency range of the sky signals. For our simulations without\nfast modulation, sky signals from multipoles $\\ell \\sim 2000$ will\nappear in the time-stream at $\\sim$ 1~Hz where the amplitude of the\nnormalised response function is effectively unity and the associated\nphase change is $-0.2^{\\circ}$. For our simulations\nincluding fast modulation, it becomes more important to correct the\ntime-stream data for the detector response. In this case, the polarized sky signals (from all\nmultipoles) appear within a narrow band centred on $12$~Hz in the\ntime-stream. Here, the amplitude of the response function is still very\nclose to unity but the phase change has grown to $-2.7^{\\circ}$. We\ninclude a deconvolution step in the analysis of all of our simulated\ndata to correct for this effect. Finally, we note that for the $\\sim 10$\nper cent errors in the time-constants which we have considered (see\nSection~\\ref{sec:systematics}), the resulting mis-estimation of the\npolarization signal will again be small, even for the case of\nfast modulation.\n\nIn principle we should also include the effect of sample integration: each\ndiscrete observation is an integral of a continuous signal over the sample period.\nOf course, in the case where down-sampled data is simulated\nthe sample integration should include the effect of the sample\naveraging. For integration over a down-sampled period $\\Delta$, there is an additional\nphase-preserving filtering by $\\mathrm{sinc}(\\omega \\Delta\/2)$ where\n$\\omega \\equiv 2\\pi f$. For our scan\nparameters, the filter is negligible (i.e.\\ unity) for unmodulated data\nwhile for modulated data the filter can be approximated at the frequency\n$4 \\omega_\\lambda$, where $\\omega_\\lambda$ is the angular rotation frequency\nof the HWP. This acts like a small decrease in the polarization efficiency,\nwith the discretised signal at the detector being\n\\begin{eqnarray}\nd_i &\\approx & \\frac{1}{2}\\big\\{T(\\theta) +\n\\mathrm{sinc}(2\\omega_\\lambda \\Delta) \\nonumber \\\\\n&&\\mbox{} \\times \\left[Q(\\theta)\\cos(2\\phi_i)\n+ U(\\theta) \\sin(2\\phi_i)\\right]\\big\\} .\n\\end{eqnarray}\nFor $\\omega_\\lambda = 2\\pi \\times 3\\,\\mathrm{Hz}$ and $\\Delta =\n10\\,\\mathrm{ms}$, the effective polarization efficiency is $0.98$ and\nthis has the effect of raising the noise level in the polarization maps by\n2 per cent. We do not include this small effect in our simulations, but could\neasily do so.\n\n\\subsection{Systematic effects}\n\\label{sec:systematics}\nIn the analysis that follows, we will investigate the impact of\nseveral systematic effects on the ability of a {\\sevensize C}$_\\ell${\\sevensize OVER}-like\nexperiment to recover an input $B$-mode signal. For a reference, we\nuse a suite of simulations which contain no systematics. This ideal simulation\ncontains the input signal, photon noise, $1\/f$ atmospheric noise\ncorrelated across the array (but un-polarized) and TES\ndetector noise with no additional correlated $1\/f$\ncomponent. 
Additionally, for our reference simulation all pointing\nregisters and detector polarization sensitivity angles use the nominal\nvalues and the signal is convolved with the detector response function\nusing the nominal time constants.\n\nWe then perform additional sets of simulations with the following\nsystematic effects included in isolation:\n\\begin{itemize}\n\\item $1\/f$ detector noise. We have considered an additional\n correlated $1\/f$ component to the detector noise with $1\/f$ knee \n frequencies of $0.1$, $0.05$ and $0.01$~Hz.\n\\item Polarized atmosphere. In addition to the un-polarized atmosphere\n present in the reference simulation, we consider a \\emph{polarized}\n $1\/f$ component in the atmosphere. To simulate polarized atmosphere,\n we proceed as described in Section~\\ref{sec:noise_sims} but now\n we add correlated $1\/f$ atmospheric noise to the $Q$ and $U$ sky signal\n time-streams such that equation~(\\ref{eqn:signal_timestream})\n becomes \n \\begin{eqnarray}\n d_i &=& \\frac{1}{2} \\left[T + \\left(Q + Q^{\\rm atms}_i\\right) \\cos(2 \\phi_i)\n \\nonumber \\right. \\\\\n && \\mbox{} \\phantom{xxxx} \\left. \n + \\left(U + U^{\\rm atms}_i\\right) \\sin(2 \\phi_i)\\right]. \n \\end{eqnarray}\n We take the $Q^{\\rm atms}$ and $U^{\\rm atms}$ atmospheric signals to\n have the same power spectrum as the common-mode atmosphere\n (equation~\\ref{eqn:pk_atms}) but a factor ten smaller in magnitude. \n\\item Detector gain errors. We consider three types of gain errors:\n (i) random errors in the gain that are constant in time, uncorrelated\n between detectors and have a 1 per cent RMS; (ii) gain drifts\n in each detector corresponding to a 1 per cent drift over the course of a\n two-hour observation -- the start and end\n gains for each detector are randomly distributed about the nominal\n gain value with an RMS of 1 per cent; (iii) systematic A\/B gain\n mis-matches (1 per cent mis-match) between the two detectors within each\n pixel. For this latter systematic, we have applied a constant 1 per\n cent A\/B mis-match to all pixels on the focal plane but the direction\n of the mis-match (that is, whether the gain of A is greater or smaller\n than B) is chosen randomly.\n \\item Mis-estimated polarization sensitivity directions. Random\n errors uncorrelated between detectors (including those with the\n same feedhorn) with RMS 0.5$^{\\circ}$ and which are constant in time,\n and a systematic mis-estimation of the\n instrument polarization coordinate reference system by 0.5$^{\\circ}$ are\n considered.\n\\item Mis-estimated half-wave plate angles. For the case where we\n consider an experiment which includes polarization modulation with a\n half-wave plate (see Section \\ref{sec:modulation}), we also\n consider random errors (with RMS 0.5$^{\\circ}$) in the recorded HWP angle \n which we apply to each 100~Hz time-sample. In addition, we consider\n a 0.5$^{\\circ}$ systematic offset in the half-wave plate angle measurements.\n\\item Mis-estimated time-constants. The analysis that follows\n includes a deconvolution step to undo the response function of the\n detectors and return the deconvolved sky signal. In all cases, we\n use the nominal time-constant values of $\\tau_+ = 300 \\, \\mu{\\rm s}$\n and $\\tau_- = 322 \\, \\mu {\\rm s}$ to perform the deconvolution. 
To simulate the effect of mis-estimated time-constants, we introduce
 both a random scatter (with RMS = 10 per cent across detectors) and a
 systematic offset ($\tau_\pm$ identically offset by 10 per cent for all
 detectors) in the time-constants when creating the simulated data.
\item Pointing errors. We simulate the effects of both a random jitter and a
 slow wander in the overall pointing of the telescope by introducing
 a random scatter uncorrelated between time samples (with RMS 30 arcsec)
 and an overall drift in the
 pointing (1 arcmin drift from true pointing over the course of a
 two-hour observation) when creating the simulated time-stream. Once
 again, the simulated data is subsequently analysed assuming perfect
 pointing registers.
\item Differential transmittance in the HWP. As a simple example of a
 HWP-induced systematic, we have considered a differential
 transmittance of the two linear polarizations by the
 HWP. Preliminary measurements of the {\sevensize C}$_\ell${\sevensize OVER}\, HWPs suggest the level
 of differential transmittance will be in the region of
 1--2 per cent. For
 this work, we consider a 2 per cent differential transmittance in the HWP.
\end{itemize}

The range of systematic effects we include is not exhaustive. In
particular, we ignore effects in the HWP, when present, except for a
mis-estimation of the rotation angle and a differential transmittance
of the two linear polarizations. In addition, we ignore all optical
effects. In practice, there are many possible HWP-related systematic
effects which we have not yet considered. In general, a
thorough analysis of HWP-related systematics requires detailed physical
optics modelling which is beyond the scope of our current analysis. We
therefore leave a detailed investigation of HWP-related systematics to future
work and simply urge the reader to bear in mind that where our
analysis has included a HWP, we have, in most cases, assumed a perfect one. For other
optical effects, we note that \citet{odea07} have already investigated
some relevant effects using analytic and numerical techniques and we
are currently adapting their flat-sky numerical analysis to work with
the full-sky simulations described here. Our conclusions on the
ability of modulation to mitigate systematic effects associated with
imperfect optics, which will be based on detailed physical optics
simulations of the {\sevensize C}$_\ell${\sevensize OVER}\, beams (Johnson et al., in prep), will be
presented in a future paper. Note that there are important
instrument-specific issues to consider in such a study to do with
where the modulation is performed in the instrument.
In {\\sevensize C}$_\\ell${\\sevensize OVER}, the\nmodulation will be provided by a HWP between the horns and mirrors and\nthis may lead to a difference in the relative rotation of the field\ndirections on the sky and the beam shapes as the HWP rotates compared\nto a set-up, as in {\\sevensize SPIDER}~\\citep{crill08}, where the HWP is after\n(thinking in emission) the beam-defining elements.\n\nFor a {\\sevensize C}$_\\ell${\\sevensize OVER}-like receiver, consisting of a HWP followed by a polarization\nanalyser (e.g.\\ an orthomode transducer) and detectors, we include\nessentially all\nrelevant systematic effects introduced \\emph{by the receiver}.\nTo see this, note that\nthe most general Jones matrix describing propagation of the two linear\n(i.e.\\ $x$ and $y$) polarization states through the polarization analyser\nis~\\citep{odea07}\n\\begin{equation}\n\\mathbfss{J} = \\left(\n\\begin{array}{cc}\n1+g_1 & \\epsilon_1 \\\\\n-\\epsilon_2 & (1+g_2)e^{i\\alpha}\n\\end{array}\n\\right) ,\n\\end{equation}\nwhere $g_1$, $g_2$ and $\\alpha$ are small real parameters and\n$\\epsilon_1$ and $\\epsilon_2$ are small and complex-valued. The detector\noutputs are proportional to the power in the\n$x$ and $y$-components of the transmitted field (after convolution\nwith the detector response function).\nTo first-order in small\nparameters, only $g_1$, $g_2$ and the real parts of $\\epsilon_1$ and\n$\\epsilon_2$ enter the detected power. In this limit, the perturbed Jones\nmatrix is therefore equivalent in terms of the detected power to\n\\begin{equation}\n\\mathbfss{J} \\sim \\left(\n\\begin{array}{cc}\n1+g_1 & 0 \\\\\n0 & 1+g_2\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{cc}\n\\cos \\alpha_1 & \\sin \\alpha_1 \\\\\n-\\sin\\alpha_2 & \\cos\\alpha_2\n\\end{array}\n\\right)\n\\end{equation}\nwhere the small angles $\\alpha_1 \\approx \\Re\\epsilon_1$ and $\\alpha_2 \\approx\n-\\Re\\epsilon_2$ denote the perturbations in the polarization-sensitivity\ndirections introduced above, and $g_1$ and $g_2$ are the gain errors.\nNote that instrument polarization (i.e.\\ leakage from $T$ to detected \n$Q$ or $U$) is only generated in the receiver through mismatches in the gain at first \norder, but also through $|\\epsilon_1|^2$ and\n$|\\epsilon_2|^2$ in an exact calculation. The latter effect is not\npresent in the simplified description in terms of offsets in the\npolarization-sensitivity directions. Note also that if we\ndifference the outputs of\nthe two detectors in the same pixel, in the presence of perturbations\n$\\alpha_1$ and $\\alpha_2$ to the polarization sensitivity directions we\nfind\n\\begin{eqnarray}\nd_1 - d_2 &=& \\cos(\\alpha_1-\\alpha_2)\\left(\nQ\\cos[2(\\phi-\\bar{\\alpha})] \\nonumber \\right. \\\\\n&&\\mbox{} \\phantom{xxxxxxxxxxxx} \\left. + U \\sin[ 2(\\phi-\\bar{\\alpha})] \\right) ,\n\\end{eqnarray}\nwhere $\\bar{\\alpha} = (\\alpha_1 + \\alpha_2)\/2$. This is equivalent\nto a common rotation of the pair by $\\bar{\\alpha}$ and a decrease in the\npolarization efficiency to $\\cos(\\alpha_1-\\alpha_2)$.\n\n\\subsection{Polarization modulation}\n\\label{sec:modulation}\n\nFor our reference simulation, and for each of the systematic effects\nlisted above, we simulate the experiment using three different\nstrategies for modulating the polarization signal. 
Firstly, we\nconsider the case where no explicit modulation of the polarization\nsignal is performed -- in this case, the only modulation achieved is\nvia telescope scanning and the relatively small amount of sky rotation\nprovided by the current {\\sevensize C}$_\\ell${\\sevensize OVER}\\, observing strategies. In addition,\nwe also consider the addition of a half-wave plate, either continuously\nrotating or ``stepped'', placed in front of the focal plane. A\nhalf-wave plate modulates the polarization signal such that the\noutput of a single detector (in the detector's local polarization\ncoordinate frame) is \n\\begin{equation} d_i = \\frac{1}{2}\\left[ T + Q \\cos ( 4\\phi_i )\n+ U \\sin ( 4\\phi_i ) \\right], \n\\label{eqn:pol_mod}\n\\end{equation} \nwhere, here, $\\phi_i$ is the angle between the detector's local\npolarization frame and the principal axes of the wave plate.\n\nFor a continuously rotating HWP, the polarized-sky signal is thus\nmodulated at $4 f_\\lambda$ where $f_\\lambda$ is the rotation frequency\nof the HWP. As well as allowing all three Stokes parameters to be\nmeasured from a single detector, modulation with a continuously\nrotating HWP (which we term ``fast'' modulation in this paper) moves\nthe polarization sky-signal to higher frequency and thus away from any\nlow-frequency $1\/f$ detector noise that may be\npresent; see Fig.~\\ref{fig:mod_frequency}.\n(Note that the temperature signal is not\nmodulated and one needs to rely on telescope scanning and analysis\ntechniques to mitigate $1\/f$ noise in $T$.) This ability to mitigate\nthe effect of $1\/f$ noise on the polarization signal is the prime\nmotivation for including a continuously rotating HWP in a CMB\npolarization experiment\\footnote{%\nThe ability to measure all three Stokes\nparameters from a single detector has also been suggested as\nmotivation for including a modulation scheme in CMB polarization\nexperiments. However, we will argue later in\nSection~\\ref{sec:differencing} that, at least for ground-based\nexperiments, an analysis based on extracting all three Stokes\nparameters from individual detectors in isolation using a real-space\ndemodulation technique may be a poor choice.}\nSystematic effects that generate an apparent polarization signal that is\nnot modulated at $4f_\\lambda$ can also be mitigated almost\ncompletely with fast modulation.\nMost notably, instrument polarization generated in the receiver will\nnot produce a spurious polarization signal in the recovered maps\nunless the gain and\ntime-constant mismatches vary sufficiently rapidly ($\\sim 4 f_\\lambda$)\nto move the \\emph{scan}-modulated temperature leakage up into the\npolarization signal band. \n\n\\begin{figure}\n \\centering\n \\resizebox{0.47\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig5.ps}}}\n \\caption{Frequency-space representation of polarization modulation\n with a continuously rotating HWP. The plotted power spectrum is that\n for a single azimuth scan from one of our signal-only simulations\n with the HWP continuously rotating at $f_\\lambda = 3$~Hz. The power\n in the frequency range $0.01 < k < 1$~Hz is the unmodulated\n temperature signal from sky multipoles in the range $20 < \\ell <\n 2000$. The power spike at $4 f_\\lambda = 12$~Hz is the modulated\n polarized signal. The dashed (dotted) line shows the expected\n temperature (polarization) signal band (with arbitrary\n normalisation) appropriate for the scan speed, modulation frequency\n and beam size we have used. 
The residual power between $1$ and
 $6$~Hz and on either side of the polarization power spike is due to
 pixelisation effects.}
 \label{fig:mod_frequency}
\end{figure}

As mentioned in the previous section, as an example of a HWP-induced
systematic, we have considered the case of a 2 per cent differential
transmittance by the HWP of the two incoming linear polarizations. We
model this effect using a non-ideal Jones matrix for the HWP of the
form,
\begin{equation}
\mathbfss{J} = \left(
\begin{array}{cc}
1 & 0 \\
0 & -(1+\delta)
\end{array}
\right) ,
\label{eqn:diff_trans_jones}
\end{equation}
where $\delta$ describes the level of differential
transmittance. Propagating through to detected power, for the
difference in output of the two detectors within a pixel, we find
\begin{eqnarray}
d_1 - d_2 \, &=& \left[1 + \delta + \frac{\delta^2}{4} \right]
(Q\cos(4\phi) + U\sin(4\phi) ) \nonumber \\
 &-& \left[\delta + \frac{\delta^2}{2}\right] I\cos(2\phi)
 + \frac{\delta^2}{4} Q.
\label{eqn:diff_trans_power}
\end{eqnarray}
Note that, in this expression, both the HWP angle, $\phi$, and the
Stokes parameters are defined in the pixel basis. The first term in
equation (\ref{eqn:diff_trans_power}) is the ideal
detector-differenced signal, mis-calibrated by a factor
$1 + \delta + \delta^2/4$. For reasonable values of $\delta$, this mis-calibration
is small ($\la 2$ per cent) and, in any case, is easily dealt with during
a likelihood analysis of the power spectra. The
potentially problematic term is the middle one, which contains the
total intensity signal modulated at $2 f_\lambda$. Note that there
will be contributions to this term from the CMB monopole, dipole and
the atmosphere, which, for our simulations, we have taken to be 5 per cent
emissive. Even for small values of $\delta$, therefore, these
HWP-synchronous signals will completely dominate the raw detector data
and need to be removed from the data prior to the map-making step.

With a HWP operating in ``stepped'' mode where the angle of the HWP is
changed at regular time intervals (e.g. at the end of each scan), the
gains are less clear. The polarization sky signal is not shifted to
higher frequencies so $1/f$ detector noise can only be dealt with by
fast scanning. What stepping the waveplate can potentially do is to increase
the range of polarization sensitivity directions with which a given pair of detectors
samples any pixel on the sky. This has two important effects: (i)
it reduces the correlations between the errors in the reconstructed $Q$ and $U$
Stokes parameters in each sky pixel; and (ii) it can mitigate somewhat those
systematic effects that do not transform like a true polarization under
rotation of the waveplate. Of course, one of the strongest motivations
for stepped, slow modulation is the avoidance of systematic effects
associated with the continuous rotation of the HWP. If these effects are
sufficiently well understood, then the resulting spurious signals can
be rejected during analysis. However, if they are not well understood,
a stepped HWP, while not as effective in mitigating systematics, may
well be the preferred option.

Note that for a perfect optical system, rotation of the waveplate is
equivalent to rotation (by twice the angle) of the instrument.
However, this need not hold with imperfect optics.
For example,\nsuppose the beam patterns for the two polarizations of a given\nfeedhorn are purely co-polar (i.e.\\ the polarization sensitivity\ndirections are ``constant'' across the beams and orthogonal), but the\nbeam shapes are orthogonal ellipses. This set-up generates instrument\npolarization with the result that a temperature distribution that is\nlocally quadrupolar on the sky will generate spurious polarization\nthat transforms like true sky polarization under rotation of the\ninstrument~\\citep{hu03,odea07}. However, for an optical arrangement\nlike that in {\\sevensize SPIDER}, where the HWP is after (in emission) the\nbeam-defining optics, as the HWP rotates the polarization directions\nrotate on the sky but the beam shapes remain fixed. The spurious\npolarization from the mis-match of beam shapes is then constant as the\nHWP rotates for any temperature distribution on the sky, and so the\nquadrupolar temperature leakage can be reduced.\n\nIn our analysis, in addition to simulations without explicit\nmodulation we have also simulated an experiment with a HWP continuously\nrotating at $3$~Hz (thus modulating the polarization signal at\n$12$~Hz) and an experiment where a HWP is stepped (by 20$^{\\circ}$) at the end\nof each azimuth scan (for the scan strategy and scan speed we are\nusing, this corresponds to stepping the HWP roughly every $\\sim$ 90\ns).\n\nWe end this section with a comment on the ability of a continuously\nrotating HWP to mitigate un-polarized $1\/f$ atmospheric noise. The polarization\nsignal band is still, of course, moved to higher frequency and thus\naway from the $1\/f$ noise but the atmospheric $1\/f$ noise is so strong\nthat extremely rapid HWP rotation would be required to move the\npolarization band far enough into the tail of the $1\/f$ spectrum.\nSuch rapid rotation is not an option in practice as it would introduce its\nown systematic effects (e.g. excessive heat generation). This is the basis of our argument\nmentioned above that extracting all three Stokes parameters from a\nsingle detector may be a poor choice of analysis technique. However,\nsince the $1\/f$ atmospheric noise is un-polarized, it can be removed\n\\emph{completely} by combining data from multiple detectors. We\nrevisit this issue again with simulations in\nSection~\\ref{sec:differencing}. Finally, we note that if the atmosphere\ndoes contain a polarized $1\/f$ component, then we expect that this\nwill not be mitigated by modulation -- the polarized atmosphere would\nbe modulated in the same way as the sky signal and would\nshift up in frequency accordingly.\n\n\\section{Analysis of simulated data}\n\\label{sec:analysis}\nFor our reference simulation, and for each of the systematic effects\nand modulation strategies described in the previous section, we have\ncreated a suite of 50 signal-only, noise-only and signal-plus-noise\nsimulated datasets. Our analysis of the signal-only data will be used\nto investigate potential biases caused by the systematics while our\nsignal-plus-noise realisations are used together with the noise-only\nsimulations to investigate any degradation of the sensitivity of the\nexperiment due to the presence of the systematic effects.\n\nOur analysis of each dataset consists of processing the data through\nthe stages of deconvolution for the bolometer response function,\npolarization demodulation and map-making, and finally estimation of the\n$E$- and $B$-mode power spectra. 
For any given single realisation these\nprocesses are performed separately for each of the four observed\n{\\sevensize C}$_\\ell${\\sevensize OVER}\\, fields -- that is, we make maps and measure power spectra for\neach field separately. Since our fields are widely separated on the\nsky, we can treat them as independent and combine the power spectra\nmeasured from each using a simple weighted average to produce a single\nset of $E$- and $B$-mode power spectra for each realisation of the\nexperiment. Note that even if our fields were not widely separated,\nour procedure would still be unbiased (but sub-optimal) and \ncorrelations between the fields would be automatically taken\ninto account in our error analysis since, for any given realisation, \nthe input signal for all four fields is taken from the same simulated\nCMB sky. \n\n\\subsection{Time-stream processing and map-making}\n\\label{sec:map-making}\nWe first deconvolve the time-stream data for the detector response in\nfrequency space using the response function of\nequation~(\\ref{eqn:response}) and using the nominal time-constants in\nall cases. Once this is done, the data from detectors within each\npixel are differenced in order to remove both the CMB temperature signal and\nthe correlated $1\/f$ component of the atmospheric noise. For the case\nwhere the $1\/f$ atmosphere is completely correlated between the two\ndetectors and in the absence of instrumental polarization and\/or\ncalibration systematics, this process will remove the CMB $T$ signal and the\n$1\/f$ atmosphere completely. \n\nFor the case where we have simulated the effect of a differential\ntransmittance in the HWP, a further time-stream processing step is\nrequired at this point to fit for and remove the HWP-synchronous\nsignals from the time-stream. To do this, we have implemented a simple\niterative least-squares estimator to fit, in turn, for the amplitudes\nof both a cosine and sine term at the second harmonic of the\nHWP-rotation frequency, $f_\\lambda$. For our simulations containing\nboth signal and noise, the accuracy with which we are able to remove\nthe HWP-synchronous signals is determined by the noise level in the\ndata. A demonstration of the performance of this procedure is given\nin Fig.~\\ref{fig:hwp_sys_removal} where we plot the power spectra of\nsix minutes of simulated time-stream data (containing both signal\nand noise) before and after the removal of the HWP-synchronous signal.\n\n\\begin{figure}\n \\centering\n \\resizebox{0.455\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig6.ps}}}\n \\caption{Power spectra of a six minute segment of time-stream\n data before (upper panel) and after (lower panel) fitting for and\n removing the HWP-synchronous signal. A 2 per cent differential\n transmittance in the HWP was used to create the simulated data. The\n resulting spurious signal appears at $2 f_\\lambda = 6$~Hz in the upper\n panel and has a peak power of $\\sim 10^{10}$ in the units\n plotted. 
No obvious residuals are apparent in the lower panel after
 applying our procedure for removing the contamination.}
 \label{fig:hwp_sys_removal}
\end{figure}

After detector differencing, and the removal of the HWP-synchronous
signals if present, the resulting differenced time stream is then a
pure polarization signal:
\begin{equation} d_i = Q \cos(2\phi_i) + U \sin(2\phi_i),
\label{eqn:diff_timestream}
\end{equation}
where, again, the angle $\phi_i$ is, in the most general case, a
combination of detector orientation, sky crossing angle, telescope
boresight rotation and the orientation of the HWP if present. In
order to construct maps of the $Q$ and $U$ Stokes parameters, these
quantities need to be decorrelated from the differenced time-stream
using multiple observations of the same region of sky taken with
different values of $\phi_i$. For an experiment which does not
continuously modulate the polarization signal, the $Q$ and $U$ signals
have to be demodulated as part of the map-making step. Note however
that for an experiment where the polarization signal is continuously
modulated, there are a number of alternative techniques to demodulate
$Q$ and $U$ at the time-stream level. In separate work, one of us has
compared the performance of a number of such demodulation schemes and
our results will be presented in a forthcoming paper (Brown, in prep.).
For the purposes of our current analysis,
however, we have applied the same map-based demodulation scheme to all
three experiments which we have simulated. In this scheme, once the
time-streams from each detector pair have been differenced, $Q$ and
$U$ maps are constructed as
\begin{eqnarray}
\left( \begin{array}{c} Q \\ U \end{array} \right) &=& \left(
 \begin{array}{cc}
 \langle\cos^2(2\phi_i)\rangle & \langle\cos(2\phi_i)\sin(2\phi_i)\rangle \\
 \langle\cos(2\phi_i)\sin(2\phi_i)\rangle & \langle\sin^2(2\phi_i)\rangle \\
\end{array} \right)^{-1} \nonumber \\
&&\mbox{} \times \left( \begin{array}{c}
 \langle\cos(2\phi_i)d_i\rangle \\
 \langle\sin(2\phi_i)d_i\rangle \\ \end{array} \right),
\label{eqn:qu_mapmaking}
\end{eqnarray}
where the angled brackets denote an average over all data falling within
each map pixel. For the work presented here, this averaging is
performed using the data from all detector pairs in one operation.
One could alternatively make maps per detector pair which could then
be co-added later. For the case where the noise properties of each
detector pair are similar, the two approaches should be
equivalent. Note that the map-maker we use for all of our analyses is
a na\"{\i}ve one -- that is, we use simple binning to implement
equation~(\ref{eqn:qu_mapmaking}). There are, of course, more optimal
techniques available (e.g. \citealt{sutton09} and references therein)
which would out-perform a
na\"{\i}ve map-maker in the presence of non-white noise. However, for our
purposes, where we wish to investigate the impact of modulation in
isolation, it is more appropriate to apply the same na\"{\i}ve map-making
technique to all of our simulated data. We can then be sure that any
improvement we see in results from our simulations including
modulation is solely due to the modulation scheme employed.
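
As an illustration of this na\"{\i}ve binned estimator, the short
\textsc{Python} sketch below accumulates the per-pixel sums entering
equation~(\ref{eqn:qu_mapmaking}) and solves the resulting $2\times2$
system in each pixel (sums and averages give identical solutions since the
per-pixel normalisation cancels). The helper function, its arguments and
the conditioning threshold are illustrative; map pixel indices are assumed
to have been computed from the pointing beforehand, and this is not the
map-maker used in our analysis.
\begin{verbatim}
import numpy as np

def bin_qu_maps(pix, phi, d, npix):
    # pix : map-pixel index per sample; phi : polarization angle per sample
    # d   : differenced time-stream, d = Q cos(2 phi) + U sin(2 phi) + noise
    c, s = np.cos(2.0 * phi), np.sin(2.0 * phi)
    cc = np.bincount(pix, weights=c * c, minlength=npix)
    ss = np.bincount(pix, weights=s * s, minlength=npix)
    cs = np.bincount(pix, weights=c * s, minlength=npix)
    cd = np.bincount(pix, weights=c * d, minlength=npix)
    sd = np.bincount(pix, weights=s * d, minlength=npix)
    det = cc * ss - cs**2                 # determinant of the 2x2 matrix
    good = det > 1.0e-12                  # pixels with enough angle coverage
    qmap, umap = np.zeros(npix), np.zeros(npix)
    qmap[good] = (ss[good] * cd[good] - cs[good] * sd[good]) / det[good]
    umap[good] = (cc[good] * sd[good] - cs[good] * cd[good]) / det[good]
    return qmap, umap
\end{verbatim}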
\n\n\\subsection{Power spectrum estimation}\n\\label{subsec:power}\nWe measure $E$- and $B$-mode power spectra from each of our\nreconstructed maps using the ``pure'' pseudo-$C_\\ell$ method of\n\\cite{smith06}. We will not describe the method in detail here and refer the\ninterested reader to \\cite{smith06} and \\cite{smith07} for\nfurther details. Here, we simply note that the ``pure''\npseudo-$C_\\ell$ framework satisfies most of the requirements of a power\nspectrum estimator for a mega-pixel CMB polarization experiment with\ncomplicated noise properties targeted at constraining $B$-modes: it\nis, just like normal pseudo-$C_\\ell$, a fast estimator scaling as\n$N_{\\rm pix}^{3\/2}$ where $N_{\\rm pix}$ is the number of map pixels\n(as opposed to a maximum likelihood estimator which scales as $N_{\\rm\npix}^3$); it is a Monte-Carlo based estimator relying on simulations \nof the noise properties of the experiment to remove the noise bias and \nestimate band-power errors and covariances -- it is thus naturally\nsuited to experiments with complicated noise properties for which \napproximations to the noise cannot be made; and it is near-optimal in the\nsense that it eliminates excess sample variance from $E \\rightarrow B$ mixing\ndue to ambiguous modes which result from incomplete sky observations\n\\citep{lewis02, bunn02}, and which renders simple pseudo-$C_\\ell$ techniques\nunsuitable for small survey areas~\\citep{challinor05}.\n\n\\subsubsection{Power spectrum weight functions}\n\nWith normal pseudo-$C_\\ell$ estimators, one multiplies the data with a function\n$W(\\hat{\\bmath{n}})$ that is chosen heuristically and apodizes the edge of the survey\n(to reduce mode-coupling effects). For example, if one\nis signal dominated, uniform weighting (plus apodization) is a reasonable\nchoice, whereas an inverse-variance weight is a good choice in the\nnoise-dominated regime. Similar reasoning applies for the\npure pseudo-$C_\\ell$ technique, but here one weights the spherical\nharmonic functions rather than the data themselves.\nTo see this, compare the\ndefinition of the ordinary and pure pseudo harmonic $B$-modes:\n\\begin{eqnarray} \n\\widetilde a_{\\ell m}^B = &-&\\frac{i}{2}\n\\sqrt{\\frac{(l-2)!}{(l+2)!}} \\int d^2\\hat{\\bmath{n}} \\bigg[ \\Pi_+(\\hat{\\bmath{n}})\nW(\\hat{\\bmath{n}}) \\bar{\\eth}\\bar{\\eth} Y_{\\ell m}^*(\\hat{\\bmath{n}}) \\nonumber \\\\ \n&&\\mbox{} - \\Pi_-(\\hat{\\bmath{n}}) W(\\hat{\\bmath{n}}) \\eth\\eth Y_{\\ell m}^*(\\hat{\\bmath{n}}) \\bigg]\n\\label{eq:almbdef} \\\\\n\\widetilde a_{\\ell m}^{B\\,{\\rm pure}} = &-&\\frac{i}{2}\n\\sqrt{\\frac{(l-2)!}{(l+2)!}} \\int d^2\\hat{\\bmath{n}} \\bigg[ \\Pi_+(\\hat{\\bmath{n}})\n\\bar{\\eth}\\bar{\\eth} \\big( W(\\hat{\\bmath{n}}) Y_{\\ell m}^*(\\hat{\\bmath{n}}) \\big)\n\\nonumber \\\\ \n&&\\mbox{} - \\Pi_-(\\hat{\\bmath{n}}) \\eth\\eth \\big(\nW(\\hat{\\bmath{n}}) Y_{\\ell m}^*(\\hat{\\bmath{n}}) \\big) \\bigg]. \n\\label{eq:almbpuredef} \n\\end{eqnarray}\nHere, $\\Pi_\\pm(\\hat{n}) = (Q\\pm\niU)(\\hat{\\bmath{n}})$ is the complex polarization and $\\eth, \\bar{\\eth}$ are\nthe spin raising and lowering operators defined\nin~\\citet{zaldarriaga97}. 
If $W(\\hat{\\bmath{n}})$ is chosen so that it vanishes\nalong with its first derivative on the survey boundary, then the\n$\\widetilde a_{\\ell m}^{B\\,{\\rm pure}}$ couple only to $B$-modes and\nthe excess sample variance due to $E$-$B$ mixing is eliminated.\nThe action of $\\eth, \\bar{\\eth}$ on the spin spherical\nharmonics is simply to convert between different spin-harmonics but\ntheir action on a general weight function is non-trivial for\n$W(\\hat{\\bmath{n}})$ defined on an irregular pixelisation such as {\\sevensize HEALPIX}\\footnote{%\nOne possibility that we have yet to explore is performing the derivatives\ndirectly in spherical-harmonic space. Since $W(\\hat{\\bmath{n}})$ is typically\nsmooth, its spherical transform will be band-limited and straightforward\nto handle.}. To\nget around this problem, we choose to calculate the derivatives of\n$W(\\hat{\\bmath{n}})$ in the flat-sky approximation where the differential\noperators reduce to\n\\begin{eqnarray}\n\\eth \\approx -(\\partial_x + i \\partial_y), \\\\\n\\bar{\\eth} \\approx -(\\partial_x - i \\partial_y).\n\\end{eqnarray}\nThe derivatives are then trivially calculated on a regular Cartesian \ngrid using finite differencing~\\citep{smith07}. \n\nThe most optimal weighting scheme for a pseudo-$C_\\ell$ analysis\ninvolves different weight functions for each $C_\\ell$ band-power\naccording to the signal-to-noise level expected in that band.\nHowever, this is a costly solution (requiring $3 N_{\\rm band}$\nspherical harmonic transforms) and the indications are, from some\nrestricted tests that we have carried out, that the improvement in\nresulting error-bars is small, at least for the specific noise\nproperties of our simulations. For the analysis presented here, we\nhave therefore chosen a simpler scheme whereby we have used a uniform\nweight, appropriately apodized at the boundaries for the entire $\\ell$\nrange for $E$-modes and for $\\ell \\le 200$ for $B$-modes. For $\\ell >\n200$ our simulated experiment is completely noise dominated for a\nmeasurement of $B$-modes and so here we use an inverse-variance\nweighting, again, appropriately apodized at the boundaries. For\nsimplicity, we have approximated the boundary of the map as a circle\nof radius $11^{\\circ}$. Note that restricting our power spectrum analysis\nto this central region of our maps means we are effectively using only\n$\\sim 70$ per cent of the available data. To calculate the derivatives of the\nweight functions, we project our weight maps (defined in {\\sevensize HEALPIX})\nonto a Cartesian grid using a gnomonic projection. Once the\nderivatives of the weight maps have been constructed on the grid using\nfinite differencing, they are transformed back to the original\n{\\sevensize HEALPIX}\\, grid. An example of the inverse variance weight maps we\nhave used and the resulting spin-1 and spin-2 weight functions, $\\eth\nW$ and $\\eth\\eth W$, for one of our fields are shown in\nFig.~\\ref{fig:ppcl_weights}.\n\\begin{figure*}\n \\centering\n \\resizebox{0.55\\textwidth}{!}{ \n \\rotatebox{0}{\\includegraphics{fig7.ps}}}\n \\caption{Inverse-variance weight functions used for power spectrum\n estimation for one of the southern fields. For the noise properties\n of our simulated data, the hit-map shown in the top left panel\n closely approximates the inverse-variance map. 
This map is heavily\n smoothed and apodized at the boundary of the map to produce the\n spin-0 weight function shown in the top right panel.\n The spin-1 and spin-2 weight functions, $\\eth W$ and $\\eth\\eth W$,\n are shown (as vector fields) in the bottom left and right-hand\n panels respectively.}\n \\label{fig:ppcl_weights}\n\\end{figure*}\n\n\\section{Results from simulations}\n\\label{sec:results}\nThe map-making and power spectrum estimation procedures described\nabove have been applied to each of our simulated datasets treating\neach of our four observing fields independently. For any given\nsimulation set, we have 50 Monte-Carlo simulations so we can estimate the\nuncertainties on the band-powers measured from each field. For each\nrealisation, we can then combine our measurements from the four fields\nusing inverse-variance weights to produce a final single estimate of\nthe power spectrum for each realisation. When presenting our results\nbelow, in all cases, we plot the mean of these final estimates. For\nour reference simulation, and for the $1\/f$ noise systematics, the\nerror-bars plotted are calculated from the scatter among the\nrealisations and are those appropriate for a single realisation. When\ninvestigating the $B$-mode bias from systematics, we plot the results\nfrom signal-only simulations and the error-bars plotted are the\nstandard error on the mean. For some of the noise-related systematics,\nwe will also examine the impact of modulation in the map domain where\nthe effects are already clear.\n\\subsection{Reference simulation}\n\\begin{figure*}\n \\centering\n \\resizebox{0.55\\textwidth}{!}{ \n \\includegraphics{fig8.ps}}\n \\caption{Sample maps constructed from simulated time-stream\n containing noise only (left panels) and both signal and noise\n (right panels). Temperature maps are shown in the top panel and\n $U$-polarization maps are shown in the bottom panels. These maps\n are for one of our reference simulations with no explicit\n modulation scheme and no systematics included. Note the striping\n in the noise-only $T$ map which is completely absent from the $U$\n maps due to differencing of detector pairs before map-making.}\n \\label{fig:maps_reference}\n\\end{figure*}\n\\begin{figure*}\n \\centering\n \\resizebox{0.70\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig9.ps}}}\n \\caption{Mean recovered $E$-mode (top) and $B$-mode (bottom) power\n spectra for the reference simulations without explicit\n modulation. The errors plotted are those appropriate for a single\n realisation. The input CMB power spectra used to create the signal\n component of the simulations are shown as the red curves. In the\n bottom panel the total $B$-mode input signal (including lensing) for\n a tensor-to-scalar ratio of $r=0.026$ is shown as the red curve and\n the $B$-mode signal due to lensing alone is shown as the dashed\n curve. The $\\ell < 200$ multipole range is shown in detail in the\n inset plots.}\n \\label{fig:cls_reference}\n\\end{figure*}\nTo provide a reference for the results which follow, in\nFigs.~\\ref{fig:maps_reference} and \\ref{fig:cls_reference} we show the results from our\nsuite of simulations with no explicit modulation and with no\nsystematics included. In Fig.~\\ref{fig:maps_reference} we plot examples of the\nreconstructed noise-only and signal-plus-noise $T$ and $U$ Stokes\nparameter maps (the reconstructed $Q$ maps -- not shown -- are\nqualitatively similar to $U$). 
The raw collecting power of an\nexperiment like {\\sevensize C}$_\\ell${\\sevensize OVER}\\, is apparent from the top two panels in this\nfigure. Although the noise $T$ map shown in the top-left panel is\nclearly dominated by striping due to the correlated noise from the\natmosphere, only signal is apparent in the signal-plus-noise map shown in\nthe top-right panel. (In fact, the noise contribution to the $T$\nsignal-plus-noise map is significant, particularly on large scales, and\nso would need to be accounted for when measuring the temperature power\nspectrum.) Conversely, the $U$ noise map is dominated by\nwhite noise; the correlated component of the atmosphere has been\nremoved completely from the polarization time-streams (as has the $T$\nsky signal) by differencing detector pairs before map-making.\n\nFigure~\\ref{fig:cls_reference} shows the mean recovered $E$- and $B$-mode power\nspectra from our reference simulations for the case of no explicit\nmodulation. Here, we see that our analysis is unbiased and recovers the input\npolarization power spectra correctly. For an input tensor-to-scalar\nratio of $r=0.026$, we recover a detection of $B$-modes \\emph{in\nexcess} of the lensing signal of $1.54\\sigma$.\n(We argue in Section~\\ref{sec:fisher} that this is an under-estimate of\nthe detection significance by around 10 per cent due to our ignoring small\nanti-correlations between the errors in adjacent band-powers.)\n\nThe corresponding plots for the stepped and continuously rotating HWP\nare very similar apart from the reconstructed polarization maps at the\nvery edges of the fields where a modulation scheme increases the\nability to decompose into the $Q$ and $U$ Stokes parameters. Since the\nedges of the field are excluded in our power spectrum analysis in any\ncase (see Fig.~\\ref{fig:ppcl_weights}), we find that the performance\n(in terms of $C_\\ell$ errors) of all three types of experiments which\nwe have considered is qualitatively the same in the absence of\nsystematic effects. Note that for all the systematics we have\nconsidered, the effects on the recovery of the $E$-mode spectrum is\nnegligible and so, in the following sections, we plot only the\nrecovered $B$-mode power spectra which are the main focus of this\npaper.\n\n\\subsection{$1\/f$ detector noise}\n\nFigure~\\ref{fig:maps_det_noise} shows the recovered noise-only maps from\na simulation containing a correlated $1\/f$ detector noise\ncomponent with a knee frequency of $f_{\\rm knee} = 0.1$~Hz. \nIn this figure, we have plotted the noise-only maps from\nthe three types of experiment we have considered: no modulation; a\nHWP which is stepped by 20$^{\\circ}$ at the end of each azimuth scan; and\na HWP continuously rotating at $3$~Hz. The impact of modulation on $1\/f$\ndetector noise is clear from this plot -- as described in\nSection~\\ref{sec:modulation}, the continuously rotating HWP shifts\nthe polarization band in frequency away from the $1\/f$ detector noise\nleaving only white noise in the resulting map. A stepped HWP, on the\nother hand, does not mitigate $1\/f$ detector noise in this way\nand so noise striping is apparent in the middle panel of\nFig.~\\ref{fig:maps_det_noise}. \n\\begin{figure*}\n \\centering\n \\resizebox{0.9\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig10.ps}}}\n \\caption{Sample noise-only $U$-maps from simulations containing a\n $1\/f$ component correlated to the detector noise}. 
For display\n purposes only, the maps have been smoothed with a Gaussian with a FWHM\n of 7 arcmin. In the case where explicit modulation is either absent (left\n panel) or slow (stepped HWP; middle panel), the $1\/f$ noise\n results in faint residual stripes in the polarization maps. In the case of\n a continuously rotating HWP, the polarization signal is modulated\n away from the low frequency $1\/f$ resulting in white-noise behaviour\n in the polarization map (right panel).\n \\label{fig:maps_det_noise}\n\\end{figure*}\n\nThe $B$-mode power spectra measured from our signal-plus-noise simulations\nincluding $1\/f$ detector noise are shown in\nFig.~\\ref{fig:cls_det_noise} (again for $f_{\\rm knee} = 0.1$~Hz) where\nwe show the results from all three types of experiment. \n\\begin{figure*}\n \\centering\n \\resizebox{0.9\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig11.ps}}}\n \\caption{Recovered $B$-mode power spectra for the simulations\n including a correlated $1\/f$ component to the detector noise with\n $f_{\\rm knee} = 0.1$~Hz for no modulation (top), a stepped HWP\n (middle) and a HWP continuously rotating at 3~Hz (bottom). See Table\n \\ref{tab:simsummary} for the significances with which each\n experiment detects the input signals.}\n \\label{fig:cls_det_noise}\n\\end{figure*}\nExamination of the figure suggests that the presence of a $1\/f$\ncomponent in the detector noise leads to a significant degradation in\nthe ability of the unmodulated and stepped-HWP experiments to recover\nthe input $B$-mode signal. This degradation happens at all multipoles\nbut is particularly acute on the largest scales ($\\ell < 200$) where\nthe primordial $B$-mode signal resides. The marginal detection of\nprimordial $B$-modes (for $r = 0.026$) which we saw in our reference\nsimulation (Fig.~\\ref{fig:cls_reference}) is now completely destroyed\nby the presence of the $1\/f$ correlated detector noise. Furthermore,\nnote that our analysis of the simulated data sets is optimistic in the\nsense that we have assumed that any correlated noise can be modelled\naccurately -- that is, our noise-only simulations, which we use to\nmeasure the noise bias, are generated from the same model noise power\nspectrum used to generate the noise component in our signal-plus-noise\nsimulations. (This is the reason that our recovered spectra are\nunbiased.) However, for the analysis of a real experiment, the noise\nproperties need to be measured from the real data and there are\nuncertainties and approximations inherent in this process. Any\ncorrelated detector noise encountered in a real experiment is unlikely\nto be understood to the level which we have assumed in our analysis\nand so will likely result not only in the increased uncertainties we\nhave demonstrated here but also in a biased result at low\nmultipoles. Estimating cross-spectra between maps made from subsets of\ndetectors for which the $1\/f$ detector noise is measured to be\nuncorrelated is a simple way to avoid this noise bias issue, at the\nexpense of a small increase in the error-bars~\\citep{hinshaw03}.\n\nThe results from the simulations where we have continuously modulated\nthe polarization signal recover the input $B$-mode signal to the same\nprecision that we saw with our reference simulation -- our marginal\ndetection of $r=0.026$ is retained even in the presence of the\ncorrelated detector noise. 
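This behaviour is as expected: for an ideal half-wave plate rotating at $f_\lambda = 3$~Hz, the polarized sky signal is modulated into sidebands around $4 f_\lambda$ (the standard result for a rotating HWP), so that, indicatively,\n\begin{equation}\n4 f_\lambda = 12~{\rm Hz} \gg f_{\rm knee} = 0.1~{\rm Hz} ,\n\end{equation}\nand the polarization band is separated from the low-frequency detector noise by roughly two orders of magnitude. 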
In Section \\ref{sec:significances} and\nTable \\ref{tab:simsummary} we show quantitatively that, for a detector\nknee frequency of $0.1$~Hz, the significance with which the\ncontinuously modulated experiment detects the primordial $B$-mode\nsignal is roughly twice that found for the un-modulated and\nstepped-HWP experiments. Also detailed in Table \\ref{tab:simsummary}\nare the results from our $1\/f$ noise simulations with knee frequencies\nof $0.05$ and $0.01$~Hz. We see, as expected, that the impact of fast\nmodulation is less for a lower knee frequency --- for $f_{\\rm knee} =\n0.05$, rapid modulation still significantly out-performs the\nun-modulated and stepped-HWP experiments while for $f_{\\rm knee} =\n0.01$~Hz, there is essentially no difference between the performance\nof the three types of experiment.\n\nNote that, in the case of rapid modulation, because the polarization\nsignal is moved completely away from the $1\/f$ frequency regime, the\nrecovered spectra should be immune to the issues of mis-estimation or\npoor knowledge of the noise power spectrum at low frequencies\nmentioned above. Although detector $1\/f$ noise can be mitigated by\nother methods (e.g. using a more sophisticated map-maker;\n\\citealt{sutton09}), these usually require accurate knowledge of the\nlow-frequency noise spectrum unlike the hardware approach of fast\nmodulation.\n\n\\subsection{Polarized atmospheric $1\/f$}\nIn contrast to the addition of $1\/f$ detector noise, which can be\nsuccessfully dealt with by rapid modulation, all three types of\nexperiment are degraded similarly by polarized low-frequency noise in\nthe atmosphere. In particular, the errors at low multipoles are\ninflated by a large factor since the large amount of polarized\natmosphere which we have input to the simulations swamps the input\n$B$-mode signal for $r=0.026$. We stress that the levels of polarized\natmosphere we have used in these simulations were deliberately chosen to\ndemonstrate the point that modulation does not help and the levels are\ncertainly pessimistic. \n\n\\subsection{Calibration errors}\nThe power spectra recovered from signal-only simulations where we\nintroduced random gain errors (constant in time) across the focal plane,\nor 1 per cent systematic A\/B gain mis-matches between the two detectors\nwithin each pixel are shown in Fig.~\\ref{fig:cls_gain_errors}. In both\ncases, we see a clear bias in the recovered $B$-mode signal in the\nabsence of fast modulation, but the bias is mitigated entirely by the\npresence of a HWP rotating at 3~Hz. The bias is also mitigated to some\ndegree by the stepped HWP but not completely. In our simulations, the bias is\ngenerally larger for the case of random gain errors since the\nvariance (across the focal plane) of the gain mismatches is twice as large\nin the former case. 
For our simulations where we allowed detector gains to drift\nover the course of a two-hour observation, we found a similarly\nbehaved $B$-mode bias to those shown in Fig.~\\ref{fig:cls_gain_errors}\nbut with a smaller magnitude (due to the two-hour drifts averaging down\nover the eight-hour observation).\n\n\\begin{figure*}\n \\centering\n \\resizebox{0.90\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig12.ps}}}\n \\caption{Mean recovered $B$-mode power spectra for the simulations\n including random gain errors across the focal plane (black points) or\n systematic 1 per cent A\/B gain mis-matches between detector pairs (blue points)\n for no modulation (top), a stepped\n HWP (middle) and a continuously rotating HWP (bottom). These spectra\n are measured from our suite of signal-only simulations. Our\n simulations containing both signal and noise exhibit the same biased\n recovery for the no-modulation and stepped-HWP cases. The presence\n of a fast modulation scheme (bottom panel) mitigates entirely the\n bias caused by these gain mis-matches. The standard errors in these\n mean recovered spectra are smaller than the plotted symbols.}\n \\label{fig:cls_gain_errors}\n\\end{figure*}\n\nA mis-match between the gains of the two detectors within a pixel\ncorresponds to a $T \\rightarrow Q$ leakage in the detector\nbasis. The projection of this instrumental polarization onto the sky\nwill therefore be suppressed if a wide range of sensitivity directions\n$\\phi_i$ contribute to each sky pixel, as is the case for fast\nmodulation. Note that in the case of a stepped HWP, one should be\ncareful to design the stepping strategy in such a way that it does not\nundo some of the effect of sky rotation. During our analysis, we have\nfound that the performance of a stepped HWP in mitigating systematic\neffects can depend critically on the direction, magnitude and\nfrequency of the HWP step applied. In fact, for some set-ups we have\ninvestigated, a stepped HWP actually worsened the performance in\ncomparison to the no modulation case due to interactions between the\nscan strategy and HWP stepping strategy. However, the results plotted in\nFig.~\\ref{fig:cls_gain_errors} for the stepped HWP case are for a\nHWP step of $20^{\\circ}$ between each azimuth scan which is large and\nfrequent enough to ensure that such interactions between the stepping strategy\nand the scan strategy are sub-dominant. \n\n\\subsection{Mis-estimated polarization angles}\nThe next set of systematics we have considered concern a\nmis-estimation of both the detector orientation angles (i.e.\\ the\ndirection of linear polarization to which each detector is sensitive\nto) and, for the case where a HWP is employed, a mis-estimation of the\nHWP orientation. We have performed simulations including both a random\nscatter (with an RMS of $0.5^{\\circ}$) and a systematic offset of\n$0.5^{\\circ}$ in the simulated detector and HWP angles. Note that for\nthe systematic offset in the detector angles, the same offset is\napplied to all detectors. For both the detector angles and the HWP,\nthe offset introduced corresponds to a systematic error in the\nestimation of the global polarization coordinate frame of the\nexperiment and the effects are therefore degenerate.\n\nFor the simulations which included a random scatter in the angles\n(both detector angles and HWP orientation), we found neither a bias in\nthe recovered $B$-mode power spectra, nor a degradation in the\nerror-bars from the simulations containing both signal and\nnoise. 
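This is expected from a short calculation: writing the time-streams of an idealised orthogonal detector pair as in Section~\ref{sec:differencing}, but now with angle errors $\alpha_A$ and $\alpha_B$ in the two polarization sensitivity directions, and defining common and differential parts $\alpha_\pm = (\alpha_A \pm \alpha_B)\/2$, pair differencing measures\n\begin{equation}\nd_A - d_B = \cos(2\alpha_-) \left[ Q\cos(2\phi_i + 2\alpha_+) + U\sin(2\phi_i + 2\alpha_+) \right] , \qquad \cos(2\alpha_-) \simeq 1 - 2\alpha_-^2 .\n\end{equation}\n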
Following the discussion in Section~\\ref{sec:systematics},\ncommon errors in the detector angles for the pair of detectors in a single\nfocal-plane pixel give rise to a rotation of the polarization sensitivity\ndirection of the pixel, while differential errors reduce the polarization\nefficiency.\nFor a typical differential scatter of $\\sqrt{2}\\times 0.5^{\\circ}$,\nthe reduction in the polarization efficiency ($\\sim 10^{-4}$) is\nnegligible. For a given pixel on the sky, the impact of the polarization\nrotation is suppressed by $\\sqrt{N_\\mathrm{sample}}$, where\n$N_\\mathrm{sample}$ is the total number of samples contributing to that\npixel with independent errors in the angles. The combination of a\nlarge number of detectors and, in the case of random HWP angle errors,\ntheir assumed short correlation time in our simulations renders the\neffect of small and random scatter in the angles negligible.\n\nThe results from simulations which included a systematic error in the\nangles are shown in Fig.~\\ref{fig:cls_pol_angles}, where we plot the recovered\n$B$-mode power spectrum from our signal-only simulations. In contrast\nto the simulations with random scatter, there is a clear mixing between $E$\nand $B$ due to the systematic mis-calibration of the polarization\ncoordinate reference system of the instrument. A global mis-estimation\nof the polarization direction by an angle $\\psi$ in the reconstructed maps\nleads to spurious $B$-modes with\n\\begin{equation}\nC_\\ell^B = \\sin^2(2\\psi) C_\\ell^E \\approx 4 \\psi^2 C_\\ell^E .\n\\end{equation}\nNote that in\nFig.~\\ref{fig:cls_pol_angles}, we show the results from the detector angle\nsystematic only for the case of the non-modulated experiment but the\nplot is identical for both the stepped and continuously rotating HWP --\npolarization modulation cannot mitigate a mis-calibration of detector \nangles. The fact that the mixing apparent in Fig.~\\ref{fig:cls_pol_angles} is\ngreater for the HWP mis-calibration is simply because rotating the\nwaveplate by $\\psi$ rotates the polarization direction by $2\\psi$.\nAlthough the spurious $B$-mode power is most\nnoticeable at high multipoles, where $\\ell(\\ell+1)C_\\ell^E$ is largest,\nit is also present on large scales and,\nas is clear from the plot, would bias a measurement of the $B$-mode \nspectrum at all multipoles. \n\\begin{figure}\n \\centering\n \\resizebox{0.48\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig13.ps}}}\n \\caption{Mean recovered $B$-mode power spectra for the signal-only simulations\n including mis-estimated detector polarization sensitivity angles (black\n points) and mis-estimated HWP angles (blue points) where the angles have been\n systematically offset by 0.5$^{\\circ}$ in both cases. The standard errors in these\n mean recovered spectra are smaller than the plotted symbols.} \n \\label{fig:cls_pol_angles}\n\\end{figure}\n\n\\subsection{Mis-estimated time-constants}\n\nThe power spectra measured from simulations which included random and\nsystematic errors in the detector time-constants displayed neither a\nbias, nor a degradation in error-bars. This was to be expected for the\nslow scan speed and extremely fast time-constants we have considered\nin this analysis -- the response function of the {\\sevensize C}$_\\ell${\\sevensize OVER}\\, detectors\nis effectively phase-preserving with zero attenuation in the frequency\nband which contains the sky-signal in our simulations. 
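As a rough illustration (a single-pole response is assumed here purely for the estimate, and is not necessarily the detector model adopted in the simulations), a detector time-constant $\tau$ gives a response\n\begin{equation}\nH(f) = \frac{1}{1 + 2\pi {\rm i} f \tau} , \qquad |H(f)| = \left[ 1 + (2\pi f \tau)^2 \right]^{-1\/2} ,\n\end{equation}\nat signal frequency $f$, i.e.\ an attenuation $|H(f)|$ and a phase shift $-\arctan(2\pi f \tau)$, both of which are negligible when $2\pi f \tau \ll 1$. 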
Note that this\nwould not necessarily have been the case had we considered a much\nfaster scan speed or more rapid polarization modulation.\n\n\\subsection{Pointing errors}\n\nOur analysis of simulations where we have introduced a jitter in the\npointing and\/or an overall wander in the pointing suggest that these\nsystematics have only a very small effect on the recovered $B$-mode\npower spectra, at least for the levels which we have considered\n(i.e. a $30$ arcsec random jitter in the pointing and\/or an overall\nwander of the pointing by $1$ arcmin over the course of a two-hour\nobservation). The only observed effect was a slight suppression of the\nrecovered $B$-mode signal at high multipoles consistent with a\nslight smearing of the effective beam. We note however that the effect \nwe observed was extremely small and was only noticeable in our\nsignal-only simulations. For our simulations containing noise, the\neffect was completely swamped by the errors due to random noise. \n\nIn principle, pointing errors can also lead to leakage\nfrom $E$ to $B$~\\citep{hu03,odea07}. In Appendix~\\ref{app:pointing}\nwe develop a toy-model for\nthe leakage expected from random pointing jitter in the case of\na scan\/modulation strategy that produces a uniform spread of polarization\nsensitivity directions in each sky pixel (such as by fast modulating\nwith a HWP). The result is a white-noise spectrum of $B$-modes but,\nfor the simulation parameters adopted here, the effect is very small --\nless than $1$ per cent of the $B$-mode power induced by weak gravitational lensing\non large scales.\n\n\\subsection{Differential transmittance in the HWP}\nThe power spectra reconstructed from our simulations which \nincluded a 2 per cent differential transmittance in the HWP exhibited\nno degradation in the accuracy of the recovered $B$-mode\nsignal --- even the relatively simple recipe which we have used to\nremove the HWP-synchronous signals from the time-stream \n(see Section \\ref{sec:map-making}) appears sufficient to recover the\n$B$-mode signal to the same accuracy as was seen in our reference\nsimulations. (We quantify this statement in the next section where we\nestimate the detection significances with which the different simulations\ndetect the $E$ and $B$-mode signals). As mentioned in\nSection~\\ref{sec:modulation}, the recovered polarization signal is\nmis-calibrated by $\\sim 2$ per cent in amplitude ($4$ per cent in\npower). Compared to the random noise however, this mis-calibration is\na small effect and is easily dealt with during, e.g. a cosmological\nparameter analysis by marginalising over it. \n\nNote that no prior information on the level of differential\ntransmittance was used during our analysis of the data. Our technique\nfor removing the HWP-synchronous signals is a blind one in this\nsense and should work equally well for other HWP-systematic effects\nthat result in spurious signals at harmonics of the HWP rotation\nfrequency, $f_\\lambda$.\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\\subsection{Controlling systematics with polarization modulation}\n\\label{sec:significances}\nThe main goal of the analysis presented in this paper is to\ndemonstrate and quantify with simulations the impact of two types of\npolarization modulation (slow modulation using a stepped HWP and rapid\nmodulation with a continuously rotating HWP) on the science return of\nupcoming CMB $B$-mode experiments in the presence of various\nsystematic effects. 
Although our list of included systematics is not\nan exhaustive one (in particular, we are still investigating the case\nof imperfect optics), we are nevertheless in a position to draw some\nrather general conclusions regarding the usefulness of modulation in\nmitigating systematics. It is, of course, important to bear in mind\nthat we have only considered two examples of a HWP-related systematic\neffect (imperfect HWP angles and differential transmittance in the\nplate). There are many more possible effects which will need to be\nwell understood and strictly controlled if fast polarization\nmodulation with HWPs is to realise its potential.\n\n\subsubsection*{(i) Systematics mitigated by modulation}\n\begin{itemize} \n\item{\bf Correlated $1\/f$ detector noise:} As expected by the general\n reasoning of Section~\ref{sec:modulation}, and further borne out\n by our results from simulations, rapid polarization modulation is\n extremely powerful at mitigating a correlated $1\/f$ component in the\n detector noise. Any such $1\/f$ component is not mitigated by\n a HWP operating in stepped mode. \n\item{\bf Calibration errors:} Our results demonstrate that fast\n modulation is also useful for mitigating against possible\n calibration errors since it greatly increases the range of\n directions over which sky polarization is measured in a given pixel.\n For example, the clear bias introduced in our simulations by random\n gain drifts or systematic mis-calibrations between detectors was\n mitigated entirely by the HWP continuously rotating at $3$~Hz. This\n bias was also partly (but not completely) mitigated by stepping the\n HWP by 20$^{\circ}$ between each of our azimuth scans. For some stepping\n strategies we have investigated, the bias actually increased --- a\n poor choice of stepping strategy can actually be worse than having\n no modulation because of interactions between the sky rotation and\n the HWP orientations.\n\end{itemize}\n\n\begin{table*}\n\caption{Detection significances (in units of $\sigma$) for our\n reference simulations, for our simulations with $1\/f$ noise\n systematics and for our simulations with a 2 per cent differential\n transmittance in the HWP. Also included for comparison are the predicted\n detection significances from a Fisher matrix analysis of the power\n spectrum errors (see Section \ref{sec:fisher}) and from the\n simulations containing isotropic and uniform Gaussian noise (see\n text). 
The rightmost column displays the significance of the\n detection of the $B$-mode signal in excess of the lensing signal\n which corresponds directly to the significance with which each\n simulation detects the input tensor-to-scalar ratio of $r=0.026$.}\n\\begin{center}\n\\begin{tabular}{c|c|c|c|c}\nSimulation & Modulation & $E$-mode & $B$-mode & $r = 0.026$ \\\\\n\\hline\nFisher predictions & --- & $128.6$ & $10.24$ & $1.90$ \\\\\n\\hline\nUniform noise & --- & $126.4$ & $10.30$ & $1.45$ \\\\\n\\hline\nReference simulation & None & $127.8$ & $10.41$ & $1.54$ \\\\\n & Stepped HWP & $130.1$ & $10.11$ & $1.41$ \\\\\n & Rotating HWP & $127.7$ & $10.72$ & $1.45$ \\\\\n\\hline\n$1\/f$ detector noise & None & $124.2$ & $7.29$ & $0.83$ \\\\\n($f_{\\rm knee} = 0.1$~Hz) & Stepped HWP & $124.6$ & $7.09$ & $0.79$ \\\\\n & Rotating HWP & $126.8$ & $9.96$ & $1.47$ \\\\\n\\hline\n$1\/f$ detector noise & None & $125.1$ & $8.61$ & $0.95$ \\\\\n($f_{\\rm knee} = 0.05$~Hz) & Stepped HWP & $127.5$ & $8.59$ & $1.14$ \\\\\n & Rotating HWP & $125.6$ & $10.31$ & $1.45$ \\\\\n\\hline\n$1\/f$ detector noise & None & $126.8$ & $9.78$ & $1.30$ \\\\\n($f_{\\rm knee} = 0.01$~Hz) & Stepped HWP & $128.0$ & $9.99$ & $1.44$ \\\\\n & Rotating HWP & $127.1$ & $10.28$ & $1.40$ \\\\\n\\hline\nPolarized atmosphere & None & $122.9$ & $6.86$ & $0.12$ \\\\\n & Stepped HWP & $124.9$ & $6.99$ & $0.10$ \\\\\n & Rotating HWP & $126.0$ & $8.24$ & $0.21$ \\\\\n\\hline\nDifferential transmittance & Rotating HWP & $127.6$ & $10.92$ & $1.59$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:simsummary}\n\\end{table*}\n\n\\vspace{-6mm}\n\\subsubsection*{(ii) Systematics not mitigated by modulation}\n\\begin{itemize}\n\\item{\\bf Polarized $1\/f$ atmosphere:} No amount of modulation (rapid or\n slow) will mitigate a polarized $1\/f$ component in the\n atmosphere. The results from our simulations containing polarized\n atmosphere are summarised in Table \\ref{tab:simsummary}. \n\\item{\\bf Pointing errors:} For our simulations which included pointing\n errors, the effect on the recovered $B$-mode power spectra was\n extremely small and was equivalent to a slight smoothing of the\n effective beam. Although the same amount of smoothing was observed\n in all the simulations (and so the effect is not mitigated by\n polarization modulation), the effect is negligible for the\n sensitivities and beam sizes considered here. The further leakage of\n $E$-mode power into $B$ modes due to pointing errors was, as expected\n (see Appendix~\\ref{app:pointing}) unobservably small in our simulations.\n\\item{\\bf Mis-calibration of polarization angles: } A polarization\n modulation scheme does not mitigate a systematic error in the\n calibration of the polarization sensitivity directions. Experiments\n using a HWP will require precise and accurate measurements of the\n HWP angle at any given time to avoid the $E \\rightarrow B$ mixing\n apparent in Fig.~\\ref{fig:cls_pol_angles}.\n\\end{itemize}\n\nOur results are in broad agreement with those of a similar study by\n\\cite{mactavish08} who based their analysis on signal-only simulations\nof the {\\sevensize SPIDER}\\, experiment. Both \\cite{mactavish08} and this study find\nthat polarization modulation with a continuously rotating HWP is\nextremely effective in mitigating the effects of $1\/f$ detector noise\nbut that, in the presence of significant $1\/f$ noise, the\nanalysis of an experiment where modulation is either absent or slow \nwill require near-optimal map-making techniques. 
In addition, both studies find that the effect\nof small and random pointing errors on the science return of upcoming\n$B$-mode experiments is negligible given the experimental\nsensitivities. The two analyses also find that the effect of random errors\n(with $\\sim 0.5^{\\circ}$ RMS) in the detector polarization sensitivity\nangles is negligible but that the global polarization coordinate frame\nof the experiment needs to be measured carefully --- \\cite{mactavish08}\nquote a required accuracy of $< 0.25^{\\circ}$ for {\\sevensize SPIDER}\\, which is\nconsistent with the requirement for an unbiased measurement of\nthe $B$-mode signal (for $r = 0.026$) at $\\ell < 300$ with {\\sevensize C}$_\\ell${\\sevensize OVER}. Finally, both\nstudies suggest that, in the absence of fast modulation, relative gain errors\nwill also need to be controlled to the $< 1$ per cent level (although, in\nthis paper, we have demonstrated that such gain errors are almost\nentirely mitigated using a fast modulation scheme; see\nFig.~\\ref{fig:cls_gain_errors}). \n\nIn Table~\\ref{tab:simsummary}, we quantify the impact of modulation on\nthe $1\/f$ noise systematics we have considered in this work by\nconsidering the significances with which we detect the $E$-mode and\n$B$-mode signals. For comparison, the detection significances in the\npresence of a 2 per cent differential transmittance in the HWP are also\npresented. To calculate the total significance of the detection\nwe compute the Fisher error on the amplitude of a fiducial\nspectrum,\n\\begin{equation}\n\\frac{S}{N} = \\left( \\sum_{bb'} P_b^{\\rm fid} {\\rm cov}^{-1}_{bb'}\nP_{b'}^{\\rm fid} \\right)^{1\/2},\n\\end{equation}\nwhere ${\\rm cov}^{-1}_{bb'}$ is the inverse of the band-power covariance\nmatrix for the given spectrum.\nFor the total significance of a detection of $E$ or $B$-modes, the\nfiducial band-powers are simply the binned input power spectra. In\norder to estimate the significance of a detection of primordial\n$B$-modes, we subtract the lensing contribution from the input\n$B$-mode power spectra. Because the primordial $B$-mode power spectrum\nis directly proportional to the tensor-to-scalar ratio, $r$, the\nsignificance with which we detect the $B$-mode signal in excess of the\nlensing signal translates directly to a significance for the detection\nof our input tensor-to-scalar ratio of $r=0.026$.\nWhen analysing the results of the\nsimulations, we approximate ${\\rm cov}^{-1}_{bb'} \\approx \\delta_{bb'}\/\n\\sigma_b^2$ since we are unable to estimate the off-diagonal elements\nfrom our small number of realisations (50) in each simulation set.\nWe know from a Fisher-based analysis (see Section~\\ref{sec:fisher}),\nthe results of which are also reported in Table~\\ref{tab:simsummary},\nthat neighbouring band-powers on the largest scales are, in fact,\n$\\sim 10$ per cent \nanti-correlated, and the diagonal approximation therefore\n\\emph{underestimates} the detection significance by $\\sim 10$ per cent.\nFor consistency, the numbers quoted for the Fisher analysis ignore the\noff-diagonal elements of the covariance matrix. 
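In this diagonal approximation the statistic actually evaluated reduces to the quadrature sum\n\begin{equation}\n\frac{S}{N} \approx \left[ \sum_b \left( \frac{P_b^{\rm fid}}{\sigma_b} \right)^2 \right]^{1\/2} ,\n\end{equation}\nwhere $\sigma_b$ is the standard deviation of band-power $b$ estimated from the Monte-Carlo realisations. 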
Including the correlations\nincreases the $E$-mode significance to $144.6$ (from $128.6$),\nthe total $B$-mode significance to $11.57$ (from $10.24$) and\nthe primordial $B$-mode significance to $2.04$ (from $1.90$).\n\nIn comparing the entries in Table~\\ref{tab:simsummary} one should\nkeep in mind that the significances reported for the simulations are\nsubject to a Monte-Carlo error due to the finite number ($N_{\\mathrm{sim}}=50$)\nof simulations used to estimate the band-power errors. Approximating the\nband-powers as uncorrelated and Gaussian distributed, the\nsampling error in our estimates of the $S\/N$ is\n\\begin{equation}\n\\Delta (S\/N) \\approx \\frac{1}{S\/N} \\frac{1}{\\sqrt{2N_{\\mathrm{sim}}}}\n\\left[\\sum_b \\left(\\frac{P_b^{\\mathrm{fid}}}{\\sigma_b}\\right)^4\\right]^{1\/2} .\n\\end{equation}\nFor the reference simulation, this gives an error of $0.15$ in the\nsignificance of a detection of $r$ and $0.22$ in the significance of the\ntotal $B$-mode spectrum. The size of these errors likely explain the\napparent anomalies that rotating the HWP degrades the detection of $r$\nin the reference simulation, and that adding $1\/f$ detector noise\nimproves the detection of $r$ over the reference simulation for the case\nof a rotating HWP.\n\nAlso included in Table~\\ref{tab:simsummary} is the performance of our\nexperiment as estimated from a set\nof simple map-based simulations where we have injected uniform and\nisotropic white noise into signal-only $T$, $Q$ and $U$ maps\ndirectly. In these simulations, and also for the Fisher analysis,\nthe white-noise levels were chosen to\nmatch the noise levels in our main analysis and so they have identical\nraw sensitivity to the time-stream simulations but with perfectly\nbehaved noise properties. The broad agreement between our full\ntime-stream simulations and these simple map-based simulations\nsuggests that the anisotropic noise distribution introduced by the\n{\\sevensize C}$_\\ell${\\sevensize OVER}\\, scan strategy does not have a large impact on the\nperformance of the experiment. This agreement also suggests that the\nslightly poorer performance of the simulations in recovering the\n$r=0.026$ primordial $B$-mode signal as compared to the Fisher\npredictions is due to the sub-optimal performance on large scales of the\n(pure) pseudo-$C_\\ell$ estimator we have used.\n\n\\subsection{Importance of combining data from multiple detectors}\n\\label{sec:differencing}\nFor all of our analyses up to this point, in order to remove the\ncorrelated $1\/f$ atmospheric noise from the polarization\nanalysis, we have differenced detector pairs before\nmap-making. However, as mentioned in Section~\\ref{sec:modulation}, for\nthe case of a continuously modulated experiment, it is possible to\nmeasure all three Stokes parameters from a single detector in\nisolation. Here, we argue that this may be a poor choice of analysis\ntechnique in the presence of a highly correlated and common-mode\nsystematic such as atmospheric $1\/f$, at least when one employs\nreal-space demodulation techniques such as those that we have used in\nthis analysis. 
The key point to appreciate here is that, even with a\nrapid modulation scheme, and for an ideal experiment, it is impossible\nto separate completely the temperature and polarization signals in\nreal space using only a single detector.\\footnote{We note that this is\nnot necessarily true for the case of genuinely band-limited\ntemperature and polarization signals when a classical lock-in\ntechnique (such as that used in the analysis of the {\\sevensize MAXIPOL}\\, data;\n\\citealt{johnson07}) is used to perform the demodulation. We are\ncurrently working to integrate such a technique into our analysis.}\nIn contrast, the technique of detector differencing achieves this\ncomplete separation of the temperature and polarization signals (again\nfor the case of an ideal experiment). Note that this is true even in\nthe signal-only case. Consider again the modulated signal, in the\nabsence of noise, from a single detector sensitive to a single\npolarization: \\begin{equation} d_i = \\left[ T(\\theta) + Q(\\theta) \\cos(2\\phi_i) +\nU(\\theta)\\sin(2\\phi_i) \\right]\/2. \\end{equation} If for each observed data\npoint, $d_i$, the true sky signals, $T$, $Q$ and $U$ are different (as\nis the case for a scanning experiment), there is clearly no way to\nrecover the true values of $T$, $Q$ and $U$ at each point in time.\nThe approximation that one must make in order to demodulate the data\nin real space goes to the very heart of map-making -- that the true\ncontinuously varying sky signal can be approximated as a pixelised\ndistribution where $T$, $Q$ and $U$ are taken to be constant within\neach map-pixel. Armed with this assumption, all three Stokes\nparameters can be reconstructed from a single detector time-stream\nusing a generalisation of equation~(\\ref{eqn:qu_mapmaking}):\n\\begin{equation}\n\\left( \\begin{array}{c} T \\\\ Q \\\\ U \\end{array} \\right) = 2 \\, {\\mathsf M}^{-1} \\cdot\n\\left( \\begin{array}{c}\n \\langle d_i \\rangle \\\\\n \\langle\\cos(2\\phi_i)d_i\\rangle \\\\\n \\langle\\sin(2\\phi_i)d_i\\rangle \\\\ \\end{array} \\right).\n\\label{eqn:tqu_mapmaking}\n\\end{equation}\nwhere the decorrelation matrix, ${\\mathsf M}$ is now given by\n\\begin{equation} \n{\\mathsf M} = \\left( \\begin{array}{ccc} \n \\!\\!\\! 1 & \\!\\!\\! \\langle\\cos(2\\phi_i)\\rangle & \\!\\!\\!\\!\\! \\langle\\sin(2\\phi_i)\\rangle \\\\\n \\!\\!\\! \\langle\\cos(2\\phi_i)\\rangle & \\!\\!\\! \\langle\\cos^2(2\\phi_i)\\rangle &\n \\!\\!\\!\\!\\! \\langle\\cos(2\\phi_i)\\sin(2\\phi_i)\\rangle \\\\\n \\!\\!\\! \\langle\\sin(2\\phi_i)\\rangle & \\!\\!\\! \\langle\\cos(2\\phi_i)\\sin(2\\phi_i)\\rangle &\n \\!\\!\\!\\!\\! \\langle\\sin^2(2\\phi_i)\\rangle \\\\ \\end{array} \\!\\!\\!\\! \\right).\n\\end{equation}\n\\begin{figure}\n \\centering\n \\resizebox{0.48\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig14.ps}}}\n \\caption{Recovered noise only $U$-polarization maps from one of our\n reference simulations with continuous modulation. The map on the\n left is reconstructed from demodulated single detector ``pure'' $U$\n time streams and has not used information from multiple detectors to\n separate the $T$ and polarization signals. The map on the right is\n made from demodulated detector-pair ``pure'' $U$ time-streams and\n explicitly combines information from the two detectors within each\n pixel to separate $T$ from $Q\/U$. 
Although striping is absent from\n both maps, the white-noise level in the detector-differenced map is\n reduced compared to that made using the non-differencing\n analysis.}\n \\label{fig:maps_diff_nodiff}\n\\end{figure}\nIf, on the other hand, the data from different detectors are combined\n(e.g.\\ when detector differencing is used), the situation is different\n-- because the two detectors within a pixel observe exactly the same\nun-polarized component of the sky signal at exactly the same time,\ndifferencing the detectors removes\nthe $T$ signal completely without any assumptions regarding the scale\nover which the true sky signal is constant. In this case, the\ndecorrelation of $Q$ and $U$ using equation~(\\ref{eqn:qu_mapmaking})\nstill requires an assumption regarding the constancy of the $Q$ and\n$U$ signals over the scale of a map-pixel but now the much larger\ntemperature signal has been removed from the polarization analysis\ncompletely. \n\nNow, if in addition to the sky signal, we have a common-mode\ntime-varying systematic such as an un-polarized $1\/f$ component in the\natmosphere, this contaminant will again be removed entirely with the\ndetector differencing technique (as long as it is completely\ncorrelated between the two detectors) whilst it will introduce a\nfurther approximation into any attempt to decorrelate all three Stokes\nparameters from a single detector time-stream using\nequation~(\\ref{eqn:tqu_mapmaking}). \n\nTo illustrate this point, and to stress the importance of combining\ndata from multiple detectors, we have re-analysed the simulated data\nfrom our set of continuously modulated reference simulations but now\nwe perform the demodulation at the time-stream level for either single\ndetectors or single detector-pairs in isolation. Firstly, we\ndemodulate each detector time-stream individually using\nequation~(\\ref{eqn:tqu_mapmaking}) but now the averaging is performed\nover short segments in time rather than over all data falling within a\nmap-pixel. This procedure results in ``pure'' $T$, $Q$ and $U$\ntime-streams for each detector but at a reduced data rate determined\nby the number of data samples over which the averaging of\nequation~(\\ref{eqn:tqu_mapmaking}) is performed. For our second\nanalysis, we first difference each detector pair and then demodulate\nthe differenced time-streams using equation~(\\ref{eqn:qu_mapmaking}),\nagain applied over short segments of time, resulting in ``pure'' $Q$\nand $U$ time-streams for each detector pair, once again at a reduced\ndata rate. Maps of the $Q$ and $U$ Stokes parameters are then\nconstructed by simple binning of the demodulated $Q$ and $U$\ntime-streams from all detectors or detector pairs. In the case where\ndetector pairs are differenced, we are explicitly combining\ninformation from multiple detectors to separate the temperature and\npolarization signals whilst when we do not difference, we are\nattempting to separate the $T$ and $Q\/U$ signals present in detector\ntime-stream in isolation.\n\nThe results of these tests are shown in Figs.~\\ref{fig:maps_diff_nodiff} and\n\\ref{fig:cls_diff_nodiff}. Figure~\\ref{fig:maps_diff_nodiff} shows the noise-only\n$U$-polarization maps recovered using the two different analysis\ntechniques. 
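The origin of the extra uncertainty in the single-detector analysis can be seen from the ideal limit of equation~(\ref{eqn:tqu_mapmaking}): for a demodulation segment containing many samples with well spread polarization angles, ${\mathsf M} \rightarrow {\rm diag}(1, 1\/2, 1\/2)$ and the estimator reduces to\n\begin{equation}\nT = 2 \langle d_i \rangle , \qquad Q = 4 \langle \cos(2\phi_i) d_i \rangle , \qquad U = 4 \langle \sin(2\phi_i) d_i \rangle ,\n\end{equation}\nso any common-mode signal (the atmospheric $1\/f$ component, or indeed the $T$ sky signal itself) that varies across the segment is only imperfectly removed from the $\langle \cos(2\phi_i) d_i \rangle$ and $\langle \sin(2\phi_i) d_i \rangle$ averages and leaks into the demodulated polarization. 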
Although striping from the atmospheric fluctuations is not\npresent in either of the maps, the extra uncertainty introduced when\none attempts to separate the $T$ and $Q\/U$ signals from individual\ndetectors in isolation clearly results in an increased white noise\nlevel in the polarization maps. This results in a degradation factor\nof $\\sim 2$ in the resulting measurements of the $B$-mode power\nspectrum on all angular scales (Fig.~\\ref{fig:cls_diff_nodiff}) with\na corresponding degradation in the detection significances for both\na measurement of the total $B$-mode signal and for a detection of $r=0.026$\n(Table~\\ref{tab:diff_nodiff}). These results are in excellent\nagreement with those of \\cite{sutton09} who found it necessary to\napply optimal mapping techniques to rapidly modulated single-detector\ntime-streams in the presence of $1\/f$ atmospheric noise. \n\nWe emphasise that the results presented in\nFig.~\\ref{fig:cls_diff_nodiff} and in Table~\\ref{tab:diff_nodiff} for\nthe case where we have analysed single detectors in isolation would\nlikely be improved if the Fourier domain filtering used in\n\\cite{johnson07} was implemented. We are currently working to\nintegrate this step into our algorithm, and we expect to report the\nsubsequent improvement in future publications.\n \n\\begin{figure}\n \\centering\n \\resizebox{0.48\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig15.ps}}}\n \\caption{Comparison between the $B$-mode power spectra recovered\n using an analysis based on differencing detector pairs (black\n points) and one based on demodulating each detector individually\n (blue points). In the presence of a time-varying common-mode\n systematic, such as the $1\/f$ atmospheric noise we have considered\n here, the analysis based on detector differencing is far superior to\n the analysis based on demodulating each detector individually.}\n \\label{fig:cls_diff_nodiff}\n\\end{figure}\n\n\\begin{table}\n\\caption{Detection significances (in units of $\\sigma$) from the\n analysis of identical simulated data with a HWP rotating\n continuously at $3$~Hz. The first analysis is based on detector\n differencing, the second based on demodulation of individual\n detectors in isolation.}\n\\begin{center}\n\\begin{tabular}{c|c|c|c}\nAnalysis & $E$-mode & $B$-mode & $r = 0.026$ \\\\\n\\hline\nDetector differencing & $132.9$ & $10.14$ & $1.43$ \\\\\nDemodulation & $123.4$ & $5.11$ & $0.68$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab:diff_nodiff}\n\\end{table}\n\n\\subsection{Comparison of simulated and predicted {\\sevensize C}$_\\ell${\\sevensize OVER}\\, performance}\n\\label{sec:fisher}\nIt is common practice to make predictions of the performance of\nupcoming experiments using a Fisher-matrix analysis which\nattempts to predict the achievable errors on, for example, power\nspectra or cosmological parameters under some simplifying\nassumptions. Generally these assumptions will include uniform coverage\nof the observing fields and uncorrelated Gaussian noise resulting in\nan isotropic and uniform white-noise distribution across the observing\nfield. In contrast, the work described in this paper has made use of a\ndetailed simulation pipeline which we have created for the {\\sevensize C}$_\\ell${\\sevensize OVER}\\,\nexperiment. 
Our simulation pipeline includes the {\\sevensize C}$_\\ell${\\sevensize OVER}\\,\nfocal-plane designs as well as a realistic scan strategy appropriate\nfor observing the four chosen {\\sevensize C}$_\\ell${\\sevensize OVER}\\, fields from the telescope site\nin Chile. In addition we have employed a detailed model of the TES\ndetector noise properties and responsivity, and $1\/f$ atmospheric\nnoise correlated across the focal-plane array. Moreover, our errors\nare calculated using a Monte-Carlo analysis and so should\nautomatically include any effects due to correlations between map\npixels etc. An interesting exercise therefore is to compare the\nexpected errors from a Fisher-matrix analysis to those obtained\nfrom our simulation analysis. \n\nThe polarization band-power Fisher matrix is (e.g.~\\citealt{tegmark01})\n\\begin{equation}\n\\mathrm{cov}^{-1}_{(bP)(bP)'} = \\frac{1}{2}\\mathrm{tr}\\left(\n\\mathbfss{C}^{-1} \\frac{\\partial \\mathbfss{C}}{\\partial \\mathcal{C}_b^P}\n\\mathbfss{C}^{-1} \\frac{\\partial \\mathbfss{C}}{\\partial \\mathcal{C}_{b'}^{P'}}\n\\right)\n\\end{equation}\nwhere $P$ and $P'$ are $E$ or $B$, and $b$ labels the bandpower. Here,\n$\\mathbfss{C}$ is the covariance matrix of the noisy Stokes maps and\n$\\mathcal{C}_b^P$ are bandpowers of $\\ell (\\ell +1)C_\\ell^P\/(2\\pi)$.\nWe analyse a single circular field with area equal to that retained\nin the pseudo-$C_\\ell$ analysis described in Section~\\ref{subsec:power},\nand multiply the Fisher matrix by four to account for the\nnumber of fields observed (which are thus assumed to be fully independent).\nWe ignore inhomogeneity of the noise in the maps so that our problem\nhas azimuthal symmetry about the field centre. This allows us to\nwork in a basis where the data is Fourier transformed in azimuth and\nthe covariance matrix becomes block diagonal, thus speeding up the\ncomputation of the Fisher matrix considerably. The Fisher matrix\ntakes full account of band-power correlations (both between $b$ and\npolarization type) and the effect of ambiguities in isolating\n$E$ and $B$-modes given the survey geometry. We deal with power\non scales larger than the survey by including a junk band-power for\neach of $E$ and $B$ whose contribution to $\\mathrm{cov}_{(bP)(bP)'}$\nwe remove before computing detection significances.\n\nThe comparison between the predicted and simulated performance of\n{\\sevensize C}$_\\ell${\\sevensize OVER}\\, is shown in Fig.~\\ref{fig:clover_compare}. In this plot, we\nalso include the predicted $B$-mode errors from a na\\\"{\\i}ve mode-counting\nargument based on the fraction of sky observed, $f_{\\rm sky}$. For\nthese estimates, we assume independent measurements of the power\nspectrum in bands of width $\\Delta \\ell$ given by\n\\begin{equation}\n(\\Delta C_\\ell)^2 = \\frac{2}{(2\\ell + 1) f_{\\rm sky} \\Delta \\ell}\n(C_{\\ell} + N_{\\ell})^2,\n\\end{equation}\nwhere $C_\\ell$ is the band-averaged input signal and $N_\\ell$ is the\nband-averaged noise. For uncorrelated and isotropic Gaussian random\nnoise, the latter is given by $N_\\ell = w^{-1} B_\\ell^{-2}$\nwhere $B_\\ell = \\exp( - \\ell (\\ell + 1) \\sigma_B^2 \/ 2)$ is the\ntransform of the beam with $\\sigma_B = \\theta_B \/ \\sqrt{8 \\ln 2}$ for\na beam with FWHM of $\\theta_B$. 
The weight $w^{-1} =\n\\Omega_{\\rm pix} \\sigma^2_{\\rm pix}$, where the pixel noise in the $Q$\nand $U$ maps is \n\\begin{equation}\n\\sigma^2_{\\rm pix} = \\frac{ ({\\rm NET}\/\\sqrt{2})^2 \\Theta^2}{t_{\\rm obs}\n (N_{\\rm det}\/4) \\Omega_{\\rm pix}}. \n\\label{eqn:pixel_noise}\n\\end{equation}\nHere $\\Theta^2$ is the total observed area, $t_{\\rm obs}$ is\nthe total observation time and $\\Omega_{\\rm pix}$ is the pixel\nsize. In equation~(\\ref{eqn:pixel_noise}), we have used ${\\rm\n NET}\/\\sqrt{2}$ to account for the fact that a single measurement of\n$Q$ or $U$ requires a measurement from two detectors (or,\nalternatively, two measurements from a single detector) and we use\n$N_{\\rm det}\/4$ as the effective number of $Q$ and $U$ detectors.\n\nOver most of the $\\ell$ range, the agreement between the Fisher matrix\npredictions and the simulated performance is rather good -- the only\nsignificant discrepancy is for the lowest band-power where the\nsimulations fail to match the predicted Fisher error. This is almost\ncertainly due to the relatively poor performance of our power spectrum\nestimator on the very largest scales where pseudo-$C_\\ell$ techniques\nare known to be sub-optimal (compared to, for example, a maximum\nlikelihood analysis). In terms of a detection of the total $B$-mode\nsignal, the Fisher analysis predicts a detection for {\\sevensize C}$_\\ell${\\sevensize OVER}\\, of\n$\\sim 12.0\\sigma$. For comparison, the na\\\"{\\i}ve $f_{\\rm sky}$\nanalysis predicts a $12.4\\sigma$ detection. For our assumed\ntensor-to-scalar ratio, the Fisher matrix analysis predicts $r=0.026\n\\pm 0.013$ (a $2.04\\sigma$ detection) and the na\\\"{\\i}ve analysis\nyields $r=0.026 \\pm 0.011$ (a $2.39\\sigma$ detection). Comparing to\nthe detection significances quoted in Table~\\ref{tab:simsummary}, we\nsee that the detections recovered from the simulations fail to match\nthese numbers. For the case of the total $B$-mode amplitude, this\ndiscrepancy is entirely due to the fact that we are unable to measure\nand include in our analysis (anti-)correlations between the\nband-powers measured from our small number (50) of simulations --\nwhen we neglect the correlations in the Fisher matrix analysis, the\nFisher prediction drops to a $10.24\\sigma$ detection, in excellent\nagreement with our measured value from simulations. For the primordial\n$B$-mode signal only, the discrepancy found is also partly due to the\nsame effect (ignoring correlations in the Fisher analysis reduces the\nFisher prediction for primordial $B$-modes to $1.9\\sigma$). As\nmentioned above, we suspect that the additional decrease in sensitivity to\nprimordial $B$-modes seen in the simulations is due to the slightly\nsub-optimal performance of our implementation of the pure\npseudo-$C_\\ell$ estimator on the largest scales.\n\nWe should point out that in this work, we have made no attempt to optimise the\nsurvey strategy in light of recent instrument developments. In\nparticular, the survey size we have adopted for these simulations was\noptimised for a measurement of $r=0.01$ with {\\sevensize C}$_\\ell${\\sevensize OVER}\\, when the\nexperiment was expected to have twice the number of detectors now\nplanned. 
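The relevant trade-off can be read off from equation~(\ref{eqn:pixel_noise}): at fixed observing time the pixel noise scales as\n\begin{equation}\n\sigma^2_{\rm pix} \propto \frac{\Theta^2}{N_{\rm det}} ,\n\end{equation}\nso halving the number of detectors doubles the noise power in the maps unless the surveyed area is reduced in proportion. 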
For the instrument parameters we have adopted in this analysis\n(which are a fair representation of the currently envisaged\nexperiment), the optimal survey area for a measurement of $r=0.026$ \nwould be significantly smaller than the $\\sim 1500$ deg$^2$ we have\nused here due to the increased noise levels from the reduced number of\ndetectors. Alternatively, if we had assumed a larger input value of\n$r$, the optimal survey size would increase. Optimisation of both the\nsurvey area and the scan strategy in light of these changes in the\ninstrument design is the subject of on-going work. \n\\begin{figure}\n \\centering\n \\resizebox{0.48\\textwidth}{!}{ \n \\rotatebox{-90}{\\includegraphics{fig16.ps}}}\n \\caption{Comparison between the predicted performance of {\\sevensize C}$_\\ell${\\sevensize OVER}\\,\n as calculated using a Fisher-matrix analysis and the simulated\n performance from our Monte-Carlo pipeline (for our reference\n simulation). Also shown for comparison are the errors predicted from\n a na\\\"{\\i}ve $f_{\\rm sky}$ analysis.}\n \\label{fig:clover_compare}\n\\end{figure}\nThere are, of course, many other sources of uncertainty which we have\nnot yet accounted for in our simulation pipeline and so both the\npredicted and simulated performance numbers should be taken only as\nguidelines at this time. However, it is encouraging that the extra\nsources of uncertainty which are included in our simulation pipeline\n(realistic instrument parameters, a realistic scan strategy,\ncorrelated noise), in addition to any uncertainties introduced as part\nof our subsequent analysis of the simulated data, do not degrade the\nexpected performance of {\\sevensize C}$_\\ell${\\sevensize OVER}\\, by a large amount. \n\n\\section{Conclusions}\n\\label{sec:conclusions}\nWe have performed a detailed investigation of the ability of both slow\nand fast polarization modulation schemes to mitigate possible\nsystematic effects in upcoming CMB polarization experiments, targeted\nat measuring the $B$-mode signature of gravitational waves in the\nearly universe. To do this we have used a simulation pipeline\ndeveloped in the context of the {\\sevensize C}$_\\ell${\\sevensize OVER}\\, experiment, which includes\nrealistic instrument and observation parameters as well as $1\/f$\ndetector noise and $1\/f$ atmospheric noise correlated across the\n{\\sevensize C}$_\\ell${\\sevensize OVER}\\, focal-plane array. Using this simulation tool, we have\nperformed simulations of {\\sevensize C}$_\\ell${\\sevensize OVER}\\, operating with no explicit\nmodulation, with a stepped HWP and with a HWP rotating continuously at\n$3$~Hz. We have analysed the resulting time-stream simulations using\nthe technique of detector differencing coupled with a na\\\"{\\i}ve map-making\nscheme, and finally have reconstructed the $E$ and $B$-mode power\nspectra using an implementation of the near-optimal ``pure''\npseudo-C$_\\ell$ power spectrum estimator.\n\nAs expected, we find that fast modulation via a continuously rotating\nHWP is extremely powerful in mitigating a correlated $1\/f$ component\nin the detector noise but that a stepped HWP is not. In addition, we\nhave demonstrated that a polarized $1\/f$ component in the atmosphere\nis not mitigated by any amount of modulation and if present, would\nneed to be mitigated in the analysis using a sophisticated map-making\ntechnique. 
We have further verified with simulations that fast\nmodulation is very effective in mitigating instrumental polarization\nthat is fixed relative to the instrument basis, for example the $T\n\\rightarrow Q$ leakage caused by systematic gain errors and\nmis-matches between detectors, in agreement with the conclusions\nof~\\citet{odea07}. We have also demonstrated that modulation does not\nmitigate a systematic mis-calibration of polarization angles and that\nthese angles will need to be measured accurately in order to avoid a\nsystematic leakage between $E$ and $B$-modes. The other systematics\nwhich we have investigated (pointing errors, mis-estimated\ntime-constants) have a negligible impact on the recovered power\nspectra for the parameters adopted in our simulations.\n\nIn addition to our investigation of systematic effects, we have\nstressed the importance of combining data from multiple detectors and\nhave demonstrated the superior performance of a differencing technique\nas opposed to one based on measuring all three Stokes parameters from\nsingle detectors in isolation. We suggest that the latter technique,\nalthough possible in the presence of rapid modulation, is likely a\npoor choice of analysis technique, at least in the presence of a\ncommon-mode systematic effect such as atmospheric $1\/f$ noise.\n\nFinally, we have compared the simulated performance of the {\\sevensize C}$_\\ell${\\sevensize OVER}\\,\nexperiment with the expected performance from a simplified\nFisher-matrix analysis. For all but the very lowest multipoles, where\nthe simulations fail to match the Fisher predictions, we find\nexcellent agreement between the predicted and simulated\nperformance. In particular, despite the highly anisotropic noise\ndistribution present in our simulated maps, our measurement of the\ntotal $B$-mode signal matches closely with the Fisher matrix\nprediction (the latter assuming isotropic noise). On the other hand,\nthe measurement of the large scale $B$-mode signal (and thus of the\ntensor-to-scalar ratio, $r$) from the simulations is around 20 per\ncent worse\nthan the Fisher prediction. This is almost certainly due to the\nsub-optimality of our power spectrum analysis on large scales. It is\npossible that the Fisher matrix predictions could be recovered from\nthe simulations by using a more optimal weighting scheme in the pure\npseudo-$C_\\ell$ analysis, or, more likely, by using a\nmaximum-likelihood $C_\\ell$ estimator for the low multipoles.\n\nOne important class of systematic effects which we have not considered\nin this paper are those associated with imperfect optics. Additionally, \nwe have considered only two effects associated with an imperfect HWP.\nThe efficacy of fast modulation to mitigate systematic\neffects from imperfect optics, for example instrumental polarization\ndue to beam mis-match, is expected to depend critically on the optical\ndesign (such as the HWP location). We are currently working to\ninclude such optical systematics in our simulation pipeline, along\nwith a more detailed physical model of the atmosphere and models of\nthe expected polarized foreground emission. In future work, in\naddition to investigating further systematic effects, we will extend\nour simulations to multi-frequency observations and will use these to\ntest alternative foreground removal techniques. 
We will also apply the\n``destriping'' map-making technique of \\cite{sutton09} to our\nsimulations to assess the relative merits of destriping in analysis as\nopposed to a hardware based approach for mitigating $1\/f$ noise.\n \n\\section*{Acknowledgments}\nWe are grateful to the {\\sevensize C}$_\\ell${\\sevensize OVER}\\, collaboration for useful\ndiscussions. We thank Kendrick Smith for making his original pure\npseudo-$C_\\ell$ code available which we adapted to carry out some of\nthe analysis in this paper. We thank John Kovac and Jamie Hinderks for\nthe up-to-date descriptions of the {\\sevensize BICEP-2\/KECK} and {\\sevensize PIPER}\\,\nexperiments respectively. The simulation work described in this paper was carried\nout on the University of Cambridge's distributed computing facility,\n{\\sevensize CAMGRID}. We acknowledge the use of the {\\sevensize FFTW}\\, \\citep{frigo05},\n{\\sevensize CAMB}\\, \\citep{lewis00} and {\\sevensize HEALPIX}\\, \\citep{gorski05} packages.\n\n\\setlength{\\bibhang}{2.0em}\n\\vspace{-3mm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nThere is significant evidence that large glaciation events took place during the Proterozoic era (2500-540 million years ago). In particular, this evidence points to the existence of glacial formations at low latitudes, see the review articles \\cite{hoffmanschrag2002terranova, pierrehumbert2011climate} and the references therein. One theory on the exodus from such an extreme climate was put forward by Joseph Kirschvink \\cite{kirschvink1992protero}, who advocated that there was accumulation of greenhouse gases in the atmosphere, e.g. CO$_2$. His theory purports that during a large glaciation chemical weathering processes would be shut down, thus eliminating a CO$_2$ sink. Moreover, volcanic activity would continue during the glaciated state. The combination of these effects would \\textit{slowly} lead to enough build-up of atmospheric carbon dioxide to warm the planet and start the melting of the glaciers. Once a melt began, a deglaciation would follow \\textit{rapidly} due to ice-albedo feedback. \n\n\nThere has been a wealth of modelling work on ``snowball'' events ranging from computationally intensive global circulation models (GCMs) to low dimensional conceptual climate models (CCMs). In 1969, Mikhail Budyko and William Sellers independently proposed energy balance models (EBMs); these were CCMs capturing the evolution of the temperature profile of an idealized Earth \\cite{budyko2010effect, sellersglobal}. Many others, for example \\cite{ caldeira1992susceptibility, abbot2011, north1975theory}, have followed in the footsteps of Budyko and Sellers and used similar conceptual models capable of exhibiting snowball events. The low dimensionality of CCMs allows for a dynamical systems analysis, and hence a deeper investigation into some of the key feedbacks such as greenhouse gas and the ice-albedo effect. \n\nFrom the point of view of dynamical systems, many of these early works share a similar theme by focusing on a bifurcation analysis with respect to the \\textit{radiative forcing} parameter, one that depends on changes in atmospheric CO$_2$ and other greenhouse gas levels. The reader may find figures similar to those displayed in Figure \\ref{icf} in earlier works \\cite{abbot2011, hoffmanschrag2002terranova, pollard2005snowball}. These figures illustrate the glaciation state of an Earth with symmetric ice caps in terms of atmospheric CO$_2$ i.e. 
ice line latitude at $90^o$ means Earth is ice free while $0^o$ means Earth is fully glaciated. In both figures, the effect of the radiative forcing due to CO$_2$ and other greenhouse gases is treated statically as a parameter in a simple bifurcation analysis. Mathematically, the bifurcation analysis already extends beyond the realm of smooth dynamical systems. On one hand, the bifurcation diagrams are obtained conventionally, with the bold curves indicating the stable branches and the dotted curves showing the unstable ones. On the other hand, the extreme states (ice-free and ice-covered) are not true equilibria of the system, but treated as such in the literature as evidenced by the labeling of \"ice-free branch\" and \"ice-covered branch\". Indeed, the extreme states of the ice line are special because they serve as physical boundaries for the dynamics: the ice line cannot extend beyond the pole nor, because of the north-south symmetry assumed in the models, can it move downward of the equator. Therefore, mathematical models of snowball earth must reflect this physical imposition and be treated with a nonsmooth systems perspective.\n\n\nSince glacial extent varies over time and dynamic processes such as chemical weathering affect the level of atmospheric CO$_2$, the bifurcation diagrams in Figure \\ref{icf} can be viewed as phase planes with dynamic variables consisting of the glacier extent (ice line) and radiative forcing due to greenhouse gases. In particular, if a global glaciation did occur, and Kirschvink's argument about the accumulation of atmospheric CO$_2$ held, then we should expect orbits of this dynamical system to traverse the extreme ice latitudes, i.e. the equator and the pole. Following Kirschvink's idea further, we treat greenhouse gas effects on the energy balance by incorporating a slow CO$_2$ variable and treat the ice latitude as a relatively fast variable. The goal of the present article is to analyze such an interplay using a nonsmooth systems framework, thus providing mathematical support for Kirschvink's hypothesis. In this work, we treat the bifurcation diagram (similar to those in Figure \\ref{icf}) as a set of quasi-steady states of a slow-fast system and utilize the theory of Filippov.\n\n \\begin{figure}[ht]\n \\centering\n \\subfloat[Figure 10 from Abbot, Voigt, and Koll \\cite{abbot2011} ] {\\includegraphics[width=0.4\\textwidth]{fig1-abbot-2011-Aeta.pdf} {\\label{fig11}} } \\quad \\quad\n \\subfloat [Figure 6 from Hoffman and Schrag \\cite{hoffmanschrag2002terranova}] {\\includegraphics[width=0.4\\textwidth]{fig1-terranova.pdf} {\\label{fig12}}} \\quad\n \\caption{Bifurcation diagrams from energy balance models illustrating hysteresis in the climate system. In each figure, solid lines correspond to stable steady states while dashed lines correspond to unstable steady states. The positive horizontal axis can be thought of as increasing atmospheric carbon dioxide, and the vertical axis is the latitude of the ice line. Stability of snowball and ice-free states is \\textit{inferred}; these are physical boundaries and not true equilibria of the models.} \n\\label{icf}\n \\end{figure}\n\n \nWe present the topics as follows. In the next section, we motivate the model of interest. In Section \\ref{sect:fil}, we propose an extension of the model that is amenable to a nonsmooth dynamical systems treatment. 
In particular, we use Filippov's theory for differential equations with discontinuous right-hand sides to show that an ice line model based on a latitude-dependent EBM coupled with a simple equation for greenhouse gas evolution can be embedded in the plane to form a system that has unique forward-time solutions. We end Section \\ref{sect:fil} with an analysis of the system dynamics and conclude the paper with a discussion in Section \\ref{sect:discussion}.\n\n\n\\section{The Equations of Motion}\n\\subsection{The Budyko Energy Balance Model and the Ice Line Equation}\nThe Budyko EBM describes the evolution of the annual temperature profile $T=T(y,t)$, where $t$ denotes time and $y$ denotes the sine of the latitude. The governing equation may be written:\n\n\\begin{equation} \\label{bud}\nR \\frac{\\partial }{\\partial t}T(y,t) = Qs(y)(1-\\alpha(\\eta,y))-(A+BT(y,t))+C \\left( \\int_0^1 T(y,t)dy-T(y,t) \\right).\n\\end{equation}\n\nThe main idea is that the change in temperature is proportional to the imbalance in the energy received by the planet. The amount of short wave radiation entering the atmosphere is given by $Qs(y)(1-\\alpha(\\eta,y))$, where $Q$ is the total solar radiation (treated as constant), $s(y)$ is the distribution of the solar radiation, and $\\alpha(\\eta,y)$ is the albedo at latitude $y$ given that the ice line is at latitude $\\eta$. The outgoing longwave radiation is the term $A+BT(y,t)$; it turns out that the highly complex nature of greenhouse gas effects on the Earth's atmosphere can be better approximated by a linear function of surface temperature than by the Stefan-Boltzmann law for blackbody radiation ($\\sigma T^4$), see the discussion in \\cite{graves1993new}. The parameter $A$ is particularly interesting here, because it is related to greenhouse gas effects on the climate system, which we will describe further in Section \\ref{sec-ghg}. The transport term $C ( \\int_0^1 T(y,t)dy - T )$ redistributes heat by a relaxation process to the global average temperature. Assuming symmetry of the hemispheres, one may take $y \\in [0,1]$ and $T(y,t)$ as a symmetric temperature profile over the interval $[-1,1]$, hence, an even function of $y$. A more detailed discussion of this model can be found in \\cite{tung2007topics}. For interested readers, Table \\ref{tab:ParValues} lists the polynomial expressions of some key functions in this model using standard parameter values. \n\nThe evolution of the ice-water boundary, or what is often called the \\textit{ice line}, $\\eta$, affects the albedo function $\\alpha(\\eta,y)$. In \\cite{tung2007topics} and \\cite{north1975theory}, a critical ice line annual average temperature, $T_c$, is specified. Above this temperature ice melts, causing the ice line to retreat toward the pole, and below it ice forms, allowing glaciers to advance toward the equator. One way to model this is described in \\cite{widiasih2013dynamics}, where the augmented ice line equation governing $\\eta$ is written as\n\n\\begin{equation} \\label{etadot}\n\\frac{d \\eta}{dt} = \\rho ( T(\\eta)-T_c ).\n\\end{equation}\n\nThe exact value of $\\rho$ is unspecified in our analysis, except that it is assumed to be large, so that the ice line evolves quickly in comparison to the slow timescale governed by volcanism and weathering processes. The interested reader will find a detailed discussion of the value of $\\rho$ in \\cite{mcgwid2014simplification}.\nBecause \\eqref{bud} is an integro-differential equation, the phase space must contain the function space of the temperature profile. 
Therefore, the dynamics of this coupled system take place in some infinite-dimensional space. The work by Widiasih in \\cite{widiasih2013dynamics} treated the model in a discrete time framework and showed the existence of a one-dimensional attracting manifold. \n\nMcGehee and Widiasih \\cite{mcgwid2014simplification} imposed equatorial symmetry by using even Legendre polynomials with a discontinuity at the iceline $y=\\eta$, and showed the existence of a one-dimensional attracting invariant manifold, similar to that shown in \\cite{widiasih2013dynamics}. The invariant manifold is parametrized by the ice line, thereby reducing the dynamics to a single equation:\n\n\\begin{equation} \\label{etadot-reduced}\n\\frac{d\\eta}{dt}= h_0(\\eta; A)\n\\end{equation}\n\\noindent with\n\\begin{equation}\\label{h} \nh_0(\\eta;A):= \\rho\\left(\\frac{Q}{B+C} \\left(s(\\eta)(1-\\alpha(\\eta,\\eta))+\\frac{C}{B}(1-\\overline{\\alpha}(\\eta))\\right)-\\frac{A}{B}-T_c \\right)\n\\end{equation}\n\\noindent where $\\overline{\\alpha}(\\eta)=\\int_0^1 s(y) \\alpha(\\eta,y)dy$. The parameter $A$ appearing in \\eqref{etadot-reduced} is indeed the greenhouse gas parameter in \\cite{mcgwid2014simplification}, and is playing a similar role as that in the bifurcation diagram shown in Figure \\ref{fig11}. \n\nIn what follows, we apply the invariant manifold result of McGehee and Widiasih \\cite{mcgwid2014simplification} by coupling the ice line equation \\eqref{etadot-reduced} to an equation for atmospheric greenhouse gas evolution through the (former) parameter $A$. From here on, we highlight the explicit dependence of the equation for $\\eta$ on $A$ by writing \n\\begin{equation*}\n\\frac{d\\eta}{dt}=h(A,\\eta)\n\\end{equation*}\nwhere $h(A,\\eta)=h_0(\\eta;A)$ and $h(A,\\eta)$ is thought of as a real valued function over the plane $\\mathbb R \\times \\mathbb R$.\n\n\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{|ccc||c|} \n\\hline\n\\textbf{Parameters} & Value & Units & \\textbf{Functions} \\\\ \\hline \\hline\n&&& \\\\\n$Q$ & 321 & $\\text{W}\\text{m}^{-2}$ & $s(y) = 1 - \\frac{0.482}{2} (3 y^2 - 1)$\\\\\n$s_1$ & 1 & dimensionless & \\\\\n$s_2$ & -0.482 & dimensionless & $h(A,\\eta)=\\rho\\left(112.88+56.91\\eta-24.31\\eta^2-11.05\\eta^3-\\frac{A}{1.5}\\right)$ \\\\\n$B$ & 1.5 & $\\text{W}\\text{m}^{-2}\\text{K}^{-1}$ & \\\\\n$C$ & 2.5B & $\\text{W}\\text{m}^{-2}\\text{K}^{-1}$ & $g(A,\\eta)=\\delta(\\eta-\\eta_c)$ \\\\\n$\\alpha_1$ & 0.32 & dimensionless & \\\\\n$\\alpha_2$ & 0.62 & dimensionless & \n$\\alpha(\\eta,y)=\\begin{cases} \n&\\alpha_1 \\text { when } y< \\eta \\\\\n& \\frac{\\alpha_1+\\alpha_2}{2} \\text{ when } y=\\eta\\\\\n&\\alpha_2 \\text{ when } y>\\eta \n \\end{cases}$ \\\\\n$T_c$ & $-10$ & ${}^\\circ\\text{C}$ & \\\\\n&&& \\\\\n\\hline\n\\end{tabular} \n\\caption{Parameter values as in \\cite{abbot2011} and functions as in \\cite{mcgwid2014simplification}.} \\label{tab:ParValues}\n\\end{table}\n\n\n\\subsection{Incorporating Greenhouse Gases} \\label{sec-ghg} \n\nWe now make an argument for a very simple form of an equation for greenhouse gas evolution. In the Budyko and Sellers models \\cite{ budyko2010effect,sellersglobal}, the parameter $A$ plays the important role of reradiation constant. The Earth absorbs shortwave radiation from the sun, and some of this is reradiated in the form of longwave radiation. The current value of $A$ is measured using satellite data to be approximately $202$ $W\/m^2$ \\cite{ graves1993new, tung2007topics}. 
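To make the role of $A$ concrete, one can integrate \\eqref{bud} over $y$: the transport term averages to zero and $\\int_0^1 s(y)dy=1$, so the global mean temperature $\\overline{T}(t)=\\int_0^1 T(y,t)dy$ satisfies\n\\begin{equation*}\nR\\frac{d\\overline{T}}{dt}=Q\\left(1-\\overline{\\alpha}(\\eta)\\right)-\\left(A+B\\overline{T}\\right),\n\\end{equation*}\nwith $\\overline{\\alpha}$ as defined above, so that at equilibrium $\\overline{T}=\\frac{Q(1-\\overline{\\alpha}(\\eta))-A}{B}$. As a rough illustration only, taking the parameter values of Table \\ref{tab:ParValues} and assuming for the estimate an ice-free planet ($\\alpha\\equiv\\alpha_1=0.32$), the measured value $A=202$ $W\/m^2$ gives $\\overline{T}\\approx\\frac{321\\cdot 0.68-202}{1.5}\\approx 11\\,{}^\\circ\\text{C}$, and each decrease of $A$ by $B=1.5$ $W\/m^2$ warms this equilibrium by one degree.\n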
\n\nHowever, throughout the span of millions of years $A$ is not constant, as the amount of heat reradiated to space depends crucially on greenhouse gases, especially carbon dioxide. In fact, it was posited in \\cite{caldeira1992susceptibility} that $A$ behaves like a function of the logarithm of atmospheric carbon dioxide, measured in parts per million. Intuitively, adding carbon dioxide to the atmosphere decreases its emissivity, allowing less energy to escape into space. Therefore, outgoing longwave radiation should vary inversely with CO$_2$.\n\nSince the land masses were concentrated in middle and low latitudes prior to the global glaciation period, Kirschvink postulated that the stage was set for an ice-covered Earth. Then \\lq On a snowball Earth, volcanoes would continue to pump CO$_2$ into the atmosphere (and ocean), but the sinks for CO$_2$ -- silicate weathering and photosynthesis -- would be largely eliminated \\rq \\cite{ hoffmanschrag2002terranova, kirschvink1992protero}. In more recent work, Hogg \\cite{hogg} put forth an elementary model for the evolution of greenhouse gases consistent with Kirschvink's theory. In short, he argued that the main sources for atmospheric CO$_2$ were due to an averaged volcanism rate and ocean outgassing. The main carbon dioxide sink was said to be due to the weathering of silicate rocks. For our purposes, it will be enough to work with volcanism and weathering. Let $V$ denote the rate of volcanism and $W$ the weathering rate, where the latter is assumed to depend on the location of the ice line, or more specifically the amount of available land or rock to be weathered. Putting this together with the assumption that the reradiation variable $A$ varies inversely with CO$_2$ gives\n\n\\begin{align} \n\\frac{dA}{dt}&=-(V-W\\eta)\\nonumber\\\\\n&:=\\delta(\\eta-\\eta_c). \\label{Adoteq}\n\\end{align} \n\nwhere $\\delta>0$ is a rate constant and $\\eta_c=\\frac{V}{W}$ is the ratio of volcanism to weathering. Let $g(A,\\eta):=\\delta (\\eta -\\eta_c)$. \n\n\\begin{remark} In what follows, we often replace the right hand side of equation \\eqref{Adoteq} with the more general function $g(A,\\eta)$. While our analysis in Sections \\ref{sect:dynamics} and \\ref{sect:jorm} uses the linear function of the ice line alone, it is conceivable that greenhouse gas feedbacks depend directly on their atmospheric concentration, see, e.g., \\cite{caldeira1992susceptibility, pierrehumbert2011climate}. We leave this avenue for future exploration and extension of the current work. \\end{remark}\n\nNext, allowing $A$ to vary in the ice line equation \\eqref{h}, we now have a system of equations for $A$ and $\\eta$:\n\\begin{equation} \\label{Aeta}\n\\begin{cases}\n&\\dot{A}=g(A,\\eta)\\\\\n&\\dot{\\eta}=h(A, \\eta).\\\\\n\\end{cases}\n\\end{equation}\n \n\nTo understand the timescale of $A$, we refer to the elucidation of the snowball scenario by Hoffman and Schrag in which they argue that weathering took place at a much slower rate than the ice-albedo feedback (see Fig. 7 on page 137 of \\cite{hoffmanschrag2002terranova}). \n\nThe idea of packaging the essence of the long term carbon cycle into one simple equation is not novel, and is consistent with earlier findings \\cite{ edmond1996fluvial, hogg, kump2000chemical}. The novelty comes from connecting such an equation to an energy balance model and analyzing the dynamics of the coupled system. 
Indeed, the parameter $A_{ex}$ on page 644 \\cite{kump2000chemical} corresponds to the variable $\\eta$ in equation \\eqref{Aeta} and it represents \\textit{the effective area of exposure of fresh minerals}. The parameter $\\eta_c$ can therefore be thought of as a \\textit{critical} area. The coupling of the ice line $\\eta$ and the greenhouse gas variable $A$ follows naturally. \n\nIn its current form, the model does not restrict dynamics to the physical region $0\\leq \\eta\\leq 1$. There are orbits that exit this interval on either end, and we must therefore create reasonable assumptions for projection of this motion onto the boundaries. Furthermore, the system should be able to evolve along the physical boundaries, as suggested by the bifurcation diagrams in Figure \\ref{icf}. This will be guaranteed by first carefully extending the vector field to the whole plane, and then utilizing Filippov's theory for the resulting nonsmooth system.\n\n\\section{A Filippov System for a Glaciated Planet\\label{sect:fil}}\n\nBefore we dive into the analysis of the coupled $(A, \\eta)$ system, we shall first build an intuition for the type of system under consideration and prove a general result for such a system. We then apply this result to an extended version of \\eqref{Aeta} and study its dynamics. We find that this framework both ensures that trajectories with initial conditions in the physical region remain in this region for all time and produces the expected sliding motion on the boundaries. \n\nVarious frameworks for analyzing non-smooth systems have been developed, e.g. by Caratheodory, Rosenthal, Viktorovskii, Utkin \\cite{filippov1964, cortes2008}. Here, we choose to use the framework developed by A.F. Filippov \\cite{filippov1964}. \n\n\n\\subsection{Casting the Model in a Filippov framework}\n\nAs mentioned previously, the pole and the equator define physical constraints of the ice line; it must stay in the unit interval $[0,1]$. However, the model as it stands does not take this into account. A natural choice is to assume that $\\dot{\\eta}=0$ when $\\eta$ reaches a physical boundary. In addition to this, one needs to account for the stability\/instability of these states, thus allowing orbits to exit the boundary when it becomes unstable. More specifically, we should take\n\n\\begin{equation}\\label{AetaFillipov}\n\\begin{cases}\n\\dot{\\eta}&= \\begin{cases}\n 0 & \\{ \\eta=0 \\text{ and } h(A,\\eta)<0 \\}\\text{ or }\\{ \\eta=1 \\text{ and } h(A, \\eta)>0\\}\\\\\n h(A, \\eta) & \\text{otherwise}\\end{cases} \\\\\n\\dot{A}&=g(A,\\eta)\n\\end{cases}\n\\end{equation}\n\nThe first equation forces $\\eta$ to stop at the physical boundary exactly when it is about to cross that boundary. When this happens, $\\eta$ should remain constant while $A$ continues to evolve. As $A$ evolves, so does the value of $h$ and this should result in the $\\dot{\\eta}$ equation ``turning back on'' as soon as $h$ becomes positive on the lower boundary, $\\eta=0$, or negative on the upper boundary, $\\eta=1$. This is in line with Kirschvink's hypothesis, as we expect orbits that enter the snowball state to recover when enough carbon dioxide has built up in the atmosphere. However, it is not clear that classical results from dynamical systems apply because the system is nonsmooth as well as undefined beyond the physical boundary. Fundamentally, the existence and uniqueness of solutions need to be verified. 
Our analysis takes the approach of \\textit{extending} the vector field to the nonphysical region in a way that makes the system amenable to the theory of Filippov. This mathematical extension is used to obtain exactly the expected dynamics of the system \\eqref{AetaFillipov} outlined above, including boundary motion. There are many possible modeling choices for the boundary; however, as will be discussed in Remark \\ref{remark}, our proposed extension ensures that the model dynamics are consistent with the existing snowball theory in the literature, e.g. \\cite{abbot2011, caldeira1992susceptibility}. The type of extension we adopt is highlighted in Section \\ref{sect:ex-un} below. Much of the proof relies on the theory developed by Filippov \\cite{filippov1964, filippov1988}.\n\n\\subsection{Existence and uniqueness of solutions \\label{sect:ex-un}}\n\nThe following general result will motivate and apply to an extended version of the system \\eqref{Aeta}, as well as guarantee the desired motion described in \\eqref{AetaFillipov}. \n\n\n\\begin{theorem} \\label{filippov-bud}\nLet $G(x,y), H(x,y): \\mathbb{R}^2 \\rightarrow \\mathbb{R}$ be $C^1$ functions. Define the vector field on $\\mathbb{R} \\times \\mathbb{R}$ as follows:\n\\begin{align} \\label{gen-pws} \n&\\dot{x} = G(x,y)\\\\\n&\\dot{y} = \\begin{cases} \n-|H| \\quad &\\text{when } y > 1\\\\\n\\frac{H-|H|}{2} \\quad &\\text{when } y=1 \\\\\nH \\quad &\\text{when } 0< y < 1\\\\\n\\frac{H+|H|}{2} \\quad &\\text{when } y = 0\\\\\\label{gen-pws1}\n|H| \\quad &\\text{when } y < 0. \\\\\n\\end{cases}\\\\\\nonumber\n\\end{align}\n\nThen, an initial value problem with time derivative \\eqref{gen-pws}-\\eqref{gen-pws1} has a unique forward-time Filippov solution. Furthermore, the strip $\\mathbb{R} \\times [0,1]$ is forward invariant. \n\\end{theorem}\n\n\\begin{proof}\n\n\nFirst, to simplify notation, let $F(x,y)$ denote the vector-valued right hand side of \\eqref{gen-pws}-\\eqref{gen-pws1}. To show the existence of a Filippov solution, one must show that $F$ is locally essentially bounded or equivalently, satisfies what Filippov called \\textit{Condition B} on p. 207 of \\cite{filippov1964}. This is straightforward to check since $F$ is Lipschitz continuous everywhere except at the boundaries $y=0,1$. \n\nTo show uniqueness of the Filippov solution, we must show the \\textit{one-sided Lipschitz condition}, i.e. that for almost all $z_1$ and $z_2$ in some small neighborhood in $\\mathbb R^2$, the inequality\n\\begin{equation} \\label{esslip}\n \\left( F\\left( z_1 \\right)-F \\left( z_2 \\right)\\right)^T \\left( z_1-z_2 \\right) \\le K \\| z_1-z_2 \\|^2\n\\end{equation}\nholds for some constant $K$ (see inequality (51) p. 218 of \\cite{filippov1964}). Since $F$ is Lipschitz away from the discontinuity boundary, one easily checks that this condition is satisfied in this region. \n\nWe now consider initial value problems with initial conditions in the discontinuity boundary. There are two such discontinuity boundaries: the lines $y=0$ and $y=1$. First, let $z$ be a point on the line $y=1$. Let $\\delta$ be small enough that $B_{\\delta}(z)$ is a neighborhood contained within the set $\\left\\lbrace (x,y):y>0 \\right\\rbrace$. Suppose that $z_1=(x_1,y_1)$ satisfies $0<y_1 \\le 1$ and $z_2=(x_2,y_2)$ satisfies $y_2>1$. We consider $\\left( F(z_2)-F(z_1) \\right)^T (z_2-z_1)$ for the following two cases:\n\\begin{enumerate}\n\\item If $H(z_1) \\ge 0$, then $\\left(F(z_2)-F(z_1)\\right)^T(z_2-z_1)=\\left(G(z_2)-G(z_1), -|H(z_2)|-H(z_1) \\right)^T(x_2-x_1, y_2-y_1)$. Since $-|H(z_2)|-H(z_1) \\le 0$ and $y_2-y_1>0$, the second term is non-positive, while the first term is bounded by $k_G \\|z_1-z_2\\|^2$, where $k_G$ is a local Lipschitz constant for $G$. 
Then, inequality \\eqref{esslip} is satisfied with $K=1$.\n\n\\item If $H(z_1)<0$, then $F(z_2)-F(z_1)=\\left(0, -|H(z_2)|+|H(z_1)| \\right)$. The second entry may be estimated further:\n$-|H(z_2)|+|H(z_1)| \\le |H(z_1)-H(z_2)| \\le k_H \\|z_1-z_2 \\|$ where $k_H$ is the Lipschitz constant of the function $H$. \nTherefore, the inequality \\eqref{esslip} is satisfied with $K=k_H$.\n\\end{enumerate}\n\nA similar argument holds for the other discontinuity boundary, and therefore, system \\eqref{gen-pws}-\\eqref{gen-pws1} satisfies the one-sided Lipschitz continuity condition \\eqref{esslip}, guaranteeing forward-time uniqueness of Filippov solutions. \n\nFilippov theory further guarantees that a sliding solution exists at the discontinuity boundary. With the normal vector taken in the direction of increasing $y$, one checks that system \\eqref{gen-pws}-\\eqref{gen-pws1} satisfies the conditions of Lemma 3 in \\cite{filippov1964}. This gives the sliding solution on the upper boundary $y=1$. If $z(t)$ denotes this solution, then, with $\\alpha=1\/2$,\n\\begin{align*} \n\\frac{dz}{dt}&=[G(x,1), \\alpha (-|H|)+(1-\\alpha) H] \\\\\n&=[G(x,1), 0].\n\\end{align*}\n\nFurthermore, above the upper discontinuity boundary the vector field points downward, while below the lower boundary, the vector fields points up. Therefore, since any solution can only cross into or slide along the boundary, the strip is forward invariant. \n\\end{proof}\n\nArmed with this result, we return to system \\eqref{Aeta}. Recall that $g$ and $h$ are both $C^1$ functions over the plane. Henceforth, we consider the initial value problem in the state space $(A,\\eta) \\in \\mathbb{R} \\times \\mathbb{R}$ endowed with the following piecewise Lipschitz vector field:\n\n\\begin{align}\\label{ig-pws}\n&\\dot{A} = g(A,\\eta)\\\\\n&\\dot{\\eta} = \\begin{cases} \n-|h| \\quad &\\text{when } \\eta > 1\\\\\n\\frac{h-|h|}{2} \\quad &\\text{when } \n\\eta=1 \\\\\nh \\quad &\\text{when } 0< \\eta < 1\\\\\\label{ig-pws1}\n\\frac{h+|h|}{2} \\quad &\\text{when } \\eta= 0\\\\\n|h| \\quad &\\text{when } \\eta < 0. \\\\\n\\end{cases}\\\\\\nonumber\n\\end{align}\nhaving an initial value $(A(0),\\eta(0)) \\in \\mathbb{R} \\times \\mathbb{R}$. \n\nAn application of Theorem \\ref{filippov-bud} shows that a unique forward-time Filippov solution to the initial value problem \\eqref{ig-pws}-\\eqref{ig-pws1} exists. Furthermore, the forward invariance of the strip $\\mathbb R \\times [0,1]$ ensures the physically relevant aspect of the ice line $\\eta$. We have arrived at the following conclusion:\n\n\\begin{corollary} \nThere exists a unique Filippov solution to \\eqref{ig-pws}-\\eqref{ig-pws1}. Furthermore, the strip $\\mathbb R \\times [0,1]$ is forward time invariant. \n\\end{corollary}\n\n\n\n\\begin{remark}\\label{remark}\nThere are many possible ways of extending system \\eqref{Aeta} beyond the strip $\\mathbb{R} \\times [0,1]$. The choice of extension as done in system \\eqref{ig-pws}-\\eqref{ig-pws1} assures the following points: \n\\begin{enumerate}\n\\item The attracting nature of the strip $\\mathbb{R} \\times [0,1]$.\n\\item The switching from sliding to crossing is entirely determined by zeroes of $h(A, \\eta)$. 
This point is especially important in the modeling of the physical system, since $h$ signals the ice line's advance or retreat and $h$ should be ``off'' for some positive amount of time at the extreme ice line locations (pole or equator) and should ``turn back on'' when the system has reduced or increased enough greenhouse gases.\n\\item The time derivative of the sliding mode is entirely determined by the governing function of the greenhouse gas effect, $g(A,\\eta)$. Again, this aspect is especially important in the modeling of the physical system, since at the extreme ice line location (pole or equator), the ice-albedo feedback shuts off and the greenhouse gas effect is the only driver of the system. \n\\end{enumerate}\n\\end{remark}\n\n\n\\subsection{Dynamics of the model \\label{sect:dynamics}}\n\nWe now focus our analysis on the system \\eqref{ig-pws}-\\eqref{ig-pws1}. We show that its sliding dynamics guarantee exit from the snowball scenario, and use the Filippov framework to simulate stable small ice cap states and large amplitude periodic orbits. \n\nThe $\\eta$-nullcline, given by $h(A,\\eta)=0$, is cubic in $\\eta$ and linear in $A$ (see Table \\ref{tab:ParValues}). It has a single fold in the region $0 \\leq \\eta \\leq 1$ at $\\eta =\\eta_f \\approx 0.77$, as can be seen in Figure \\ref{critman}. The $A$-nullcline is the horizontal line $\\eta=\\eta_c$. If $0<\\eta_c<1$, the system has a single fixed point. The Jacobian at this fixed point has trace $\\frac{\\partial h}{\\partial \\eta}$ and determinant $\\frac{\\delta\\rho}{B}>0$, so the associated eigenvalues are\n\n\\begin{equation*}\n\\lambda_{\\pm}=\\frac{1}{2}\\left(\\frac{\\partial h}{\\partial \\eta}\\pm \\sqrt{\\left(\\frac{\\partial h}{\\partial \\eta}\\right)^2-\\frac{4\\delta\\rho}{B}}\\right).\n\\end{equation*}\n\n From this and Figure \\ref{critman}, we see that if the fixed point lies above the fold and in the physical region, i.e. $\\eta_f<\\eta_c<1$, then it is stable, and it is unstable when $0<\\eta_c<\\eta_f$. This is reminiscent of the ``slope-stability theorem'' from the climate literature \\cite{cahalan1979stability}. In that work, the authors related changes in slopes of equilibrium curves to changes in stability of equilibria for a globally averaged energy balance model. They found that a small ice cap or ice-free solution could be stable, a large ice cap was unstable, and a snowball state was again stable. \n\n\\begin{remark} In the case that $\\eta_c=0$ or $\\eta_c=1$, one of the physical boundaries is entirely composed of equilibria, and stability is determined by the direction of the vector field near it. Physically, this degenerate scenario comes about when there is an absence of volcanism in the ice-covered state or a perfect balance between weathering and volcanism in the ice-free state. We do not discuss these cases further.\\end{remark}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{phase.png} \n\\caption{The physical region of the phase space and possible fixed points of the system given by the $\\eta$-nullcline, $h=0$. The location of the equilibrium is determined by the critical effective area of exposed land $0<\\eta_c<1$. Solid black portions of the curve represent stable equilibria while the dashed lines denote unstable equilibria. Solid black portions of the boundary are attractive sliding regions and dashed boundaries are crossing regions.}\\label{critman}\n\\end{figure} \n\n\\subsubsection{Sliding and escape from Snowball Earth}\n\nRecall that $\\eta_c$ was defined to be the ratio of volcanism to weathering. 
We now prove that the system always recovers from the ice-covered regime provided this parameter is positive, i.e. there is some contribution from volcanic outgassing to atmospheric greenhouse gas content.\n\n\\begin{theorem}\\label{exit} Suppose $\\eta_c>0$. Then any orbit of the system \\eqref{ig-pws}-\\eqref{ig-pws1} that enters the boundary $\\eta=0$ must exit in finite time.\\end{theorem}\n\n\\begin{proof}\nLet $A_*$ be the $A$-coordinate of the intersection of the $\\eta$-nullcline with the lower ice boundary $\\eta=0$.\nSuppose an orbit enters the boundary $\\eta=0$. At such an occurrence $A\\geq A_*$, since this is where $\\dot{\\eta}=h(A,\\eta)\\leq 0$. For $A\\geq A_*$, the confinement vector field in the non-physical region $\\eta<0$ is given by $\\dot{\\eta}=|h(A,\\eta)|\\geq 0$. $A$ is decreasing on either side of $\\eta=0$. The Filippov convention induces sliding according to \n\n\\begin{align*}\n\\dot{\\eta}&=\\frac{h(A,0)+|h(A,0)|}{2}=0\\\\\n\\dot{A}&=-\\delta \\eta_c.\n\\end{align*}\n\nAt the intersection point $A=A_*$, the vector field is tangent to the boundary and sliding terminates. For $A<A_*$, $h(A,0)>0$. Moreover, there is a single orbit of the smooth system that passes through the tangency point. Due to forward-time uniqueness from Theorem \\ref{filippov-bud}, the sliding solution must follow this orbit and reenter the physical region.\n\n\\end{proof}\n \n\\begin{remark} A similar argument shows that any orbit that enters an ice-free state must exit in finite time. The required condition is that there is \\textit{not} a perfect balance between volcanism and weathering, i.e. $\\eta_c\\neq 1$.\\end{remark}\n\n\n\\subsubsection{Numerical simulations\\label{po}}\n\nThe Filippov framework also allows for numerical simulation of the model. Let $(A_c,\\eta_c)$ be the location of the single fixed point. We find that small ice cap states corresponding to fixed points with $\\eta_f<\\eta_c<1$ are possible attractors of the system, as are periodic orbits born from the Hopf bifurcation at $\\eta_c=\\eta_f$. \n\nThe fold of the $\\eta$-nullcline is a \\textit{canard point}, and the mechanism that produces the large amplitude periodic orbit in Figure \\ref{dynamics} is a \\textit{canard explosion} \\cite{ benoit1981chasse, dumortier1996canard, krupa2001extending}. While mathematically interesting, canard orbits are unlikely observables for planar systems because they occur within an exponentially small parameter range. However, they can be robust in higher dimensions, which makes more complex oscillatory behavior possible; see e.g. \\cite{benoit1983systemes,desroches2012mixed,szmolyan2001canards,wechselberger2005existence}. With this in mind, we remark that an interesting avenue for future research involves studying an extended version of the system \\eqref{ig-pws}-\\eqref{ig-pws1} with an additional variable ($w$ in \\cite{mcgwid2014simplification}) and forgoing their invariant manifold reduction.\n\n \\begin{figure}[ht]\n \\centering\n \\subfloat[Small ice cap equilibrium] {\\includegraphics[width=0.4\\textwidth]{sm_ice.png}} \\quad \\quad\n \\subfloat [Periodic orbit] {\\includegraphics[width=0.4\\textwidth]{relax_osc.png}} \\quad\n \\caption{Attractors of the system when (a) $\\eta_c=0.85$ and (b) $\\eta_c=0.6$. The $+$ symbol marks the initial condition and the horizontal long-dashed line is the $A$-nullcline. In (a), the orbit reaches the ice-free state and slides until it reaches the intersection of the folded curve $h(A,\\eta)=0$ with this boundary. 
It then enters the physical region and approaches the small ice cap equilibrium. In (b), the fixed point is unstable and the orbit oscillates between the ice-free and ice-covered boundaries. Simulations were performed using Mathematica 9.} \n\\label{dynamics}\n \\end{figure}\n\n\\subsection{Application to the Jormungand Model\\label{sect:jorm}}\n\nWhether a complete snowball event ever occurred in the past is still a highly contested topic in geology. In the model \\eqref{ig-pws}-\\eqref{ig-pws1}, the only possible stable fixed point is a small ice cap. In this section, we present a variant of the system, as introduced by Abbot, Voigt, and Koll \\cite{abbot2011}, and employ the Filippov framework as above. By modifying the albedo function $\\alpha(\\eta,y)$ so that it differentiates between the albedo of bare ice and snow-covered ice, the new system has an additional stable state that corresponds to a large ice cap. In \\cite{abbot2011}, this was called the \\textit{Jormungand} state because it allows for a snake-like band of open ocean at the equator. Moreover, there are again attracting periodic orbits. These can be seen in Figure \\ref{jorm}.\n\nIn this version of the model, the albedo is defined as follows:\n\n\\begin{equation}\\label{alphaJ}\n\\alpha_J(\\eta,y)=\\left\\{\\begin{array}{ll}\\alpha_w,& y<\\eta\\\\\n\\frac{1}{2}(\\alpha_w+\\alpha_2(\\eta)), & y=\\eta\\\\ \\alpha_2(y), & y>\\eta, \\end{array}\\right.\n\\end{equation}\n\nwhere $\\alpha_2(y)=\\frac{1}{2}(\\alpha_s+\\alpha_i)+\\frac{1}{2}(\\alpha_s-\\alpha_i)\\tanh M(y-0.35) $.\n\nHere $\\alpha_w$ is the albedo of open water, $\\alpha_i$ is the albedo of {\\em bare} sea ice, and $\\alpha_s$ is the albedo of {\\em snow-covered} ice. The model assumes sea ice acquires a snow cover only for latitudes above $y=0.35$. We modify $h(A,\\eta)$ by replacing $\\alpha$ with $\\alpha_J$:\n\n\\begin{equation}\\label{hj}\nh_J(A,\\eta)=\\rho\\left(\\frac{Q}{B+C} \\left(s(\\eta)(1-\\alpha_J(\\eta,\\eta))+\\frac{C}{B}(1-\\overline{\\alpha_J}(\\eta))\\right)-\\frac{A}{B}-T_c \\right)\n\\end{equation}\nwhere we note that although $\\alpha_J(\\eta,y)$ has a discontinuity when $y=\\eta$, both $\\alpha_J(\\eta,\\eta)$ and $\\overline{\\alpha_J}(\\eta)=\\int_0^1 \\alpha_J(\\eta,y)s(y)dy$ are smooth functions of $\\eta$.\n\n\nFor this system, we can immediately apply Theorems \\ref{filippov-bud} and \\ref{exit} to obtain the following result.\n\n\\begin{corollary} The physical region is forward invariant with respect to the system \\eqref{ig-pws}-\\eqref{ig-pws1} where $h$ is replaced by $h_J$. Solutions exist and are unique in forward time, and any trajectory that enters an ice-covered state must exit in finite time provided $\\eta_c\\neq 0$.\\end{corollary}\n\n \\begin{figure}[ht]\n \\centering\n \\subfloat[] {\\includegraphics[width=0.4\\textwidth]{jorm1.png}} \\quad \\quad\n \\subfloat [] {\\includegraphics[width=0.4\\textwidth]{jorm2.png}} \\quad\n \\caption{Periodic orbits of the Jormungand system when (a) $\\eta_c=0.8$ and (b) $\\eta_c=0.15$. 
The folded curve is the $\\eta$-nullcline $h_J(A,\\eta)=0$ and dashing is as in Figures \\ref{critman} and \\ref{dynamics}.} \n\\label{jorm}\n \\end{figure}\n\n\\begin{table}[htb] \n\\centering\n\\begin{tabular}{|ccc|} \n\\hline\n\\textbf{Parameters} & Value & Units \\\\ \\hline \\hline\n&& \\\\\n$T_c$ & 0 & ${}^\\circ\\text{C}$ \\\\\n$M$ & 25 & dimensionless \\\\\n$\\alpha_w$ & $0.35$ & dimensionless \\\\\n$\\alpha_i$ & $0.45$ & dimensionless \\\\\n$\\alpha_s$ & $0.8$ & dimensionless \\\\\n&& \\\\\n\\hline\n\\end{tabular} \n\\caption{Parameter values as in Table \\ref{tab:ParValues} unless specified above. Additional values taken from \\cite{abbot2011}.} \\label{tab:ParJorm}\n\\end{table}\n\n\\section{Discussion\\label{sect:discussion}}\n\nIn this article we have extended a class of energy balance models to include a greenhouse gas component. The resulting system was nonsmooth due to physical constraints and did not immediately fit into an existing analytical framework. However, we defined an extended system that allowed us to utilize the theory of Filippov and obtain the expected motion, in agreement with physical arguments from climate literature. More specifically, we proved existence and uniqueness of forward-time solutions and showed that the system was confined to the physical region with sliding on the ice-covered and ice-free boundaries. We showed that the extended system always escapes from the ice-covered scenario, thus supporting Kirschvink's hypothesis about carbon dioxide accumulation in a snowball scenario due to a shutdown of chemical weathering processes \\cite{kirschvink1992protero}. Using our extended system, we found that small ice-cap equilibria and large-amplitude ice-cover oscillations were possible attractors of the system. We then applied our results to the case of the Jormungand world and showed that it is possible to obtain further oscillatory dynamics between no ice cover, large ice-cover, and full ice-cover states. \n\nGeneral mathematical questions brought about by this work have to do with building a general framework for such models so that there is no need to appeal to artificial tools. A first step toward dealing with physical boundaries might be a general theory for semiflows generated by smooth vector fields on manifolds with boundary. In addition, the stable portions of the physical boundaries in our model should be thought of as equilibria of the fast subsystem, i.e. as part of the \\textit{critical manifold} from geometric singular perturbation theory \\cite{jones1995geometric}. In fact, they are much more \\textit{normally hyperbolic} than the curve $h=0$; they are reachable in finite time! However, the current theory cannot be directly applied to the full discontinuous system and we remark that developing an analogous Fenichel theory \\cite{fenichel1979geometric} for singularly perturbed discontinuous systems with sliding is an interesting avenue for future research.\n\nAnother interesting future direction is to study the explicit effect of the temperature (here we have considered it only through the equation for $\\eta$ based on the reduction by McGehee and Widiasih \\cite{mcgwid2014simplification}) on the dynamics of the system. In a nonsmooth system, the addition of a stable dimension may destroy an existing stable periodic orbit \\cite{sieber2010small}. 
Moreover, an additional dimension could result in more interesting glacial dynamics, such as mixed-mode oscillations.\n \nFinally, it is natural to ask how the shape of the $\\eta$-nullcline changes with physical parameters and how this affects system dynamics, e.g. could the lower fold in Figure~\\ref{jorm} move to the left of the upper fold? We refer the reader to \\cite{widwal2014dynamics} for a number of examples. \\\\\n\n\n\\textbf{Acknowledgements:} This research was supported in part by the Mathematics and Climate Research Network and NSF grants DMS-0940366, DMS-0940363. AB was also supported in part by the Institute for Mathematics and its Applications with funds provided by the National Science Foundation. We thank the members of the MCRN Paleoclimate and Nonsmooth Systems seminar groups for many useful discussions, especially Mary Lou Zeeman and Emma Cutler. We also thank Andrew Roberts for his suggestions relating to the formulation of the problem.\n\n\n\n\\bibliographystyle{plain}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}}